Solutions Manual for Probability and Random Processes for Electrical and Computer Engineers by John Gubner


Probability and Random Processes for Electrical and Computer Engineers

John A. Gubner

University of Wisconsin–Madison

CHAPTER 1

Problem Solutions

1. Ω = {1, 2, 3, 4, 5, 6}.

(b) {(x, y) ∈ ℝ² : 4 ≤ x² + y² ≤ 25}.

(b) (1, 3) ∪ (2, 4) = (1, 4).

(c) (1, 3) ∩ [2, 4) = [2, 3).

(d) (3, 6] \ (5, 7) = (3, 5].

6. Sketches omitted. The figures show the sets B₀, B₁, C₁, H₃, and J₃ in the x–y plane, along with M₃ = H₃ ∩ J₃, N₃ = H₃ ∪ J₃, M₂ ∩ N₃, and M₂ ∪ M₄ ∪ N₃.

7. (a) [1, 4] ∩ ([0, 2] ∪ [3, 5]) = ([1, 4] ∩ [0, 2]) ∪ ([1, 4] ∩ [3, 5]) = [1, 2] ∪ [3, 4].

(b) Write

([0, 1] ∪ [2, 3])ᶜ = [0, 1]ᶜ ∩ [2, 3]ᶜ

= [(−∞, 0) ∪ (1, ∞)] ∩ [(−∞, 2) ∪ (3, ∞)]

= [(−∞, 0) ∩ ((−∞, 2) ∪ (3, ∞))] ∪ [(1, ∞) ∩ ((−∞, 2) ∪ (3, ∞))]

= (−∞, 0) ∪ (1, 2) ∪ (3, ∞).

(c) ⋂_{n=1}^∞ (−1/n, 1/n) = {0}.

(d) ⋂_{n=1}^∞ [0, 3 + 1/(2n)) = [0, 3].

(e) ⋃_{n=1}^∞ [5, 7 − 1/(3n)] = [5, 7).

(f) ⋃_{n=1}^∞ [0, n] = [0, ∞).

8. By the distributive law, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) = (A ∩ B) ∪ C, since C ⊆ A implies A ∩ C = C.

For the second part of the problem, suppose (A ∩ B) ∪ C = A ∩ (B ∪ C). We must show that C ⊆ A. Let ω ∈ C. Then ω ∈ (A ∩ B) ∪ C. But then ω ∈ A ∩ (B ∪ C), which implies ω ∈ A.

9. Let I := {ω ∈ Ω : ω ∈ A ⟹ ω ∈ B}. We must show that A ∩ I = A ∩ B.

⊇: Let ω ∈ A ∩ B. Then ω ∈ A and ω ∈ B. We must show that ω ∈ I too. In other words, we must show that ω ∈ A ⟹ ω ∈ B. But we already have ω ∈ B.

10. The function f : (−∞, ∞) → [0, ∞) with f(x) = x³ is not well defined because not all values of f(x) lie in the claimed co-domain [0, ∞).

11. (a) The function will be invertible if Y = [−1, 1].

(b) {x : f(x) ≤ 1/2} = [−π/2, π/6].

(c) {x : f(x) < 0} = [−π/2, 0).

12. (a) Since f is not one-to-one, no choice of co-domain Y can make f : [0, π] → Y invertible.

(b) {x : f(x) ≤ 1/2} = [0, π/6] ∪ [5π/6, π].

(c) {x : f(x) < 0} = ∅.

13. For B ⊆ ℝ,

f⁻¹(B) = X, if 0 ∈ B and 1 ∈ B;

f⁻¹(B) = A, if 1 ∈ B but 0 ∉ B;

f⁻¹(B) = Aᶜ, if 0 ∈ B but 1 ∉ B;

f⁻¹(B) = ∅, if 0 ∉ B and 1 ∉ B.

14. Let f : X → Y be a function such that f takes only n distinct values, say y₁, . . . , yₙ. Let B ⊆ Y be such that f⁻¹(B) is nonempty. By definition, each x ∈ f⁻¹(B) has the property that f(x) ∈ B. But f(x) must be one of the values y₁, . . . , yₙ, say yᵢ. Now f(x) = yᵢ if and only if x ∈ Aᵢ := f⁻¹({yᵢ}). Hence,

f⁻¹(B) = ⋃_{i : yᵢ ∈ B} Aᵢ.

15. (b) f(x) ∈ ⋃_{n=1}^∞ Bₙ if and only if f(x) ∈ Bₙ for some n; i.e., if and only if x ∈ f⁻¹(Bₙ) for some n. But this says that x ∈ ⋃_{n=1}^∞ f⁻¹(Bₙ).

(c) f(x) ∈ ⋂_{n=1}^∞ Bₙ if and only if f(x) ∈ Bₙ for all n; i.e., if and only if x ∈ f⁻¹(Bₙ) for all n. But this says that x ∈ ⋂_{n=1}^∞ f⁻¹(Bₙ).

16. If B = ⋃ᵢ {bᵢ} and C = ⋃ᵢ {cᵢ}, put a₂ᵢ := bᵢ and a₂ᵢ₋₁ := cᵢ. Then A = ⋃ᵢ {aᵢ} = B ∪ C is countable.

17. Since each Cᵢ is countable, we can write Cᵢ = ⋃ⱼ {c_{ij}}. It then follows that

B := ⋃_{i=1}^∞ Cᵢ = ⋃_{i=1}^∞ ⋃_{j=1}^∞ {c_{ij}}

is a countable union of singletons, and is therefore a countable set.

18. Let A = ⋃ₘ {aₘ} be a countable set, and let B ⊆ A. We must show that B is countable. If B = ∅, we're done by definition. Otherwise, there is at least one element of B in A, say aₖ. Then put bₙ := aₙ if aₙ ∈ B, and put bₙ := aₖ if aₙ ∉ B. Then ⋃ₙ {bₙ} = B, and we see that B is countable.

19. Let A ⊆ B where A is uncountable. We must show that B is uncountable. We prove

this by contradiction. Suppose that B is countable. Then by the previous problem, A

is countable, contradicting the assumption that A is uncountable.

20. Suppose A is countable and B is uncountable. We must show that A ∪ B is uncountable.

We prove this by contradiction. Suppose that A ∪ B is countable. Then since B ⊆

A ∪ B, we would have B countable as well, contradicting the assumption that B is

uncountable.

21. MATLAB. OMITTED.

22. MATLAB. Intuitive explanation: Using only the numbers 1, 2, 3, 4, 5, 6, consider how

many ways there are to write the following numbers:

2 = 1+1 1 way, 1/36 = 0.0278

3 = 1+2 = 2+1 2 ways, 2/36 = 0.0556

4 = 1+3 = 2+2 = 3+1 3 ways, 3/36 = 0.0833

5 = 1+4 = 2+3 = 3+2 = 4+1 4 ways, 4/36 = 0.1111

6 = 1+5 = 2+4 = 3+3 = 4+2 = 5+1 5 ways, 5/36 = 0.1389

7 = 1+6 = 2+5 = 3+4 = 4+3 = 5+2 = 6+1 6 ways, 6/36 = 0.1667

8 = 2+6 = 3+5 = 4+4 = 5+3 = 6+2 5 ways, 5/36 = 0.1389

9 = 3+6 = 4+5 = 5+4 = 6+3 4 ways, 4/36 = 0.1111

10 = 4+6 = 5+5 = 6+4 3 ways, 3/36 = 0.0833

11 = 5+6 = 6+5 2 ways, 2/36 = 0.0556

12 = 6+6 1 way, 1/36 = 0.0278

36 ways, 36/36 = 1
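The tally above can be reproduced by brute-force enumeration of the 36 equally likely outcomes; a short Python sketch (variable names are our own):

```python
from fractions import Fraction

# Enumerate the 36 equally likely outcomes of two dice and tally
# how many ways each sum can occur.
counts = {}
for i in range(1, 7):
    for j in range(1, 7):
        s = i + j
        counts[s] = counts.get(s, 0) + 1

for s in range(2, 13):
    prob = Fraction(counts[s], 36)
    print(f"{s:2d}: {counts[s]} ways, probability {float(prob):.4f}")
```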

23. For A ⊆ Ω := {1, 2, . . . , 26}, put

P(A) := |A|/|Ω| = |A|/26.

The event that a vowel is chosen is V = {1, 5, 9, 15, 21}, and P(V ) = |V |/26 = 5/26.

24. Let Ω := {(i, j) : 1 ≤ i, j ≤ 26 and i ≠ j}. For A ⊆ Ω, put P(A) := |A|/|Ω|. The event that a vowel is chosen followed by a consonant is

B_vc = {(i, j) : i ∈ {1, 5, 9, 15, 21} and j ∈ {1, . . . , 26} \ {1, 5, 9, 15, 21}},

and the event that a consonant is chosen followed by a vowel is

B_cv = {(i, j) : i ∈ {1, . . . , 26} \ {1, 5, 9, 15, 21} and j ∈ {1, 5, 9, 15, 21}}.

We need to compute

P(B_vc ∪ B_cv) = (|B_vc| + |B_cv|)/|Ω| = (105 + 105)/650 = 21/65 ≈ 0.323.

The event that a vowel is chosen followed by another vowel is

B_vv = {(i, j) : i, j ∈ {1, 5, 9, 15, 21} with i ≠ j}.

25. MATLAB. The code for simulating the drawing of a face card is

%
n = 10000;                   % Number of draws.
X = ceil(52*rand(1,n));
faces = (41 <= X & X <= 52);
nfaces = sum(faces);
fprintf('There were %g face cards in %g draws.\n', nfaces, n)
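For readers without MATLAB, here is a rough Python analogue of the script above (it keeps the original script's assumption that cards 41 through 52 are the 12 face cards):

```python
import random

# Python analogue of the MATLAB face-card simulation above.
random.seed(0)
n = 10000                                     # number of draws
draws = [random.randint(1, 52) for _ in range(n)]
nfaces = sum(1 for x in draws if 41 <= x <= 52)
print(f"There were {nfaces} face cards in {n} draws.")
# The relative frequency should be near 12/52 = 0.2308.
print(f"Relative frequency: {nfaces / n:.4f} (theory: {12 / 52:.4f})")
```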

26. Since 9 pm to 7 am is 10 hours, take Ω := [0, 10]. The probability that the baby wakes up during a time interval [t₁, t₂], 0 ≤ t₁ < t₂ ≤ 10, is

P([t₁, t₂]) := ∫_{t₁}^{t₂} (1/10) dω = (t₂ − t₁)/10.

Hence, P([2, 10]ᶜ) = P([0, 2]) = ∫₀² (1/10) dω = 1/5.

27. Starting with the equations

S_N = 1 + z + z² + ··· + z^{N−2} + z^{N−1}

zS_N = z + z² + ··· + z^{N−2} + z^{N−1} + z^N,

subtract the second line from the first. Canceling common terms leaves

S_N − zS_N = 1 − z^N, or S_N(1 − z) = 1 − z^N.

28. Let x = p(1). Then p(2) = 2p(1) = 2x, p(3) = 2p(2) = 2²x, p(4) = 2p(3) = 2³x, p(5) = 2⁴x, and p(6) = 2⁵x. In general, p(ω) = 2^{ω−1} x, and we can write

1 = Σ_{ω=1}^6 p(ω) = Σ_{ω=1}^6 2^{ω−1} x = x Σ_{ω=0}^5 2^ω = x · (1 − 2⁶)/(1 − 2) = 63x.

Hence, x = 1/63 and p(ω) = 2^{ω−1}/63.
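A quick sanity check of the resulting pmf p(ω) = 2^{ω−1}/63, sketched in Python with exact fractions:

```python
from fractions import Fraction

# Check that p(w) = 2**(w-1)/63 for w = 1,...,6 is a valid pmf and
# satisfies the defining relation p(w) = 2*p(w-1).
p = {w: Fraction(2 ** (w - 1), 63) for w in range(1, 7)}
assert sum(p.values()) == 1
assert all(p[w] == 2 * p[w - 1] for w in range(2, 7))
print(p[6])   # largest mass: 32/63
```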

29. (a) By inclusion–exclusion, P(A ∪ B) = P(A) + P(B) − P(A ∩ B), which can be rearranged as P(A ∩ B) = P(A) + P(B) − P(A ∪ B).

(b) Since P(A) = P(A ∩ B) + P(A ∩ Bᶜ), we have P(A ∩ Bᶜ) = P(A) − P(A ∩ B).

30. We must check the four axioms of a probability measure. First,

P(∅) = λP₁(∅) + (1 − λ)P₂(∅) = λ · 0 + (1 − λ) · 0 = 0.

Second, since λ ∈ [0, 1],

P(A) = λP₁(A) + (1 − λ)P₂(A) ≥ 0.

Third, for disjoint Aₙ,

P(⋃_{n=1}^∞ Aₙ) = λP₁(⋃_{n=1}^∞ Aₙ) + (1 − λ)P₂(⋃_{n=1}^∞ Aₙ)

= λ Σ_{n=1}^∞ P₁(Aₙ) + (1 − λ) Σ_{n=1}^∞ P₂(Aₙ)

= Σ_{n=1}^∞ [λP₁(Aₙ) + (1 − λ)P₂(Aₙ)]

= Σ_{n=1}^∞ P(Aₙ).

Fourth, P(Ω) = λP₁(Ω) + (1 − λ)P₂(Ω) = λ + (1 − λ) = 1.

31. First, since 0 ∉ ∅, μ(∅) = 0. Second, by definition, μ(A) ≥ 0. Third, for disjoint Aₙ, suppose 0 ∈ ⋃ₙ Aₙ. Then 0 ∈ Aₘ for some m, and 0 ∉ Aₙ for n ≠ m. Then μ(Aₘ) = 1 and μ(Aₙ) = 0 for n ≠ m. Hence, μ(⋃ₙ Aₙ) = 1 and Σₙ μ(Aₙ) = μ(Aₘ) = 1. A similar analysis shows that if 0 ∉ ⋃ₙ Aₙ, then μ(⋃ₙ Aₙ) and Σₙ μ(Aₙ) are both zero. Finally, since 0 ∈ Ω, μ(Ω) = 1.

32. Starting with the assumption that for any two disjoint events A and B, P(A ∪ B) = P(A) + P(B), we have that for N = 2,

P(⋃_{n=1}^N Aₙ) = Σ_{n=1}^N P(Aₙ).   (∗)

Now we must show that if (∗) holds for any N ≥ 2, then (∗) holds for N + 1. Write

P(⋃_{n=1}^{N+1} Aₙ) = P([⋃_{n=1}^N Aₙ] ∪ A_{N+1})

= P(⋃_{n=1}^N Aₙ) + P(A_{N+1}), additivity for two events,

= Σ_{n=1}^N P(Aₙ) + P(A_{N+1}), by (∗),

= Σ_{n=1}^{N+1} P(Aₙ).

33. Since Aₙ := Fₙ ∩ Fᶜ_{n−1} ∩ ··· ∩ F₁ᶜ ⊆ Fₙ,

⋃_{n=1}^N Aₙ ⊆ ⋃_{n=1}^N Fₙ.

The hard part is to show the reverse inclusion ⊇. Suppose ω ∈ ⋃_{n=1}^N Fₙ. Then ω ∈ Fₙ for some n in the range 1, . . . , N. However, ω may belong to Fₙ for several values of n since the Fₙ may not be disjoint. Let

k := min{n : ω ∈ Fₙ and 1 ≤ n ≤ N}.

In other words, 1 ≤ k ≤ N and ω ∈ F_k, but ω ∉ Fₙ for n < k; in symbols,

ω ∈ F_k ∩ Fᶜ_{k−1} ∩ ··· ∩ F₁ᶜ =: A_k.

Hence, ω ∈ A_k ⊆ ⋃_{n=1}^N Aₙ. The proof that ⋃_{n=1}^∞ Aₙ = ⋃_{n=1}^∞ Fₙ is similar except that

k := min{n : ω ∈ Fₙ and n ≥ 1}.

34. For arbitrary events Fₙ, let Aₙ be as in the preceding problem. We can then write

P(⋃_{n=1}^∞ Fₙ) = P(⋃_{n=1}^∞ Aₙ) = Σ_{n=1}^∞ P(Aₙ), since the Aₙ are disjoint,

= lim_{N→∞} Σ_{n=1}^N P(Aₙ), by def. of infinite sum,

= lim_{N→∞} P(⋃_{n=1}^N Aₙ)

= lim_{N→∞} P(⋃_{n=1}^N Fₙ).

35. With Fₙ := Gₙᶜ, write

P(⋂_{n=1}^∞ Gₙ) = 1 − P(⋃_{n=1}^∞ Fₙ), by De Morgan's law,

= 1 − lim_{N→∞} P(⋃_{n=1}^N Fₙ), by the preceding problem,

= 1 − lim_{N→∞} [1 − P(⋂_{n=1}^N Gₙ)], by De Morgan's law,

= lim_{N→∞} P(⋂_{n=1}^N Gₙ).

36. By inclusion–exclusion, P(F₁ ∪ F₂) = P(F₁) + P(F₂) − P(F₁ ∩ F₂) ≤ P(F₁) + P(F₂). This establishes the union bound for N = 2. Now suppose the union bound holds for some N ≥ 2. We must show it holds for N + 1. Write

P(⋃_{n=1}^{N+1} Fₙ) = P([⋃_{n=1}^N Fₙ] ∪ F_{N+1})

≤ P(⋃_{n=1}^N Fₙ) + P(F_{N+1}), by the union bound for two events,

≤ Σ_{n=1}^N P(Fₙ) + P(F_{N+1}), by the union bound for N events,

= Σ_{n=1}^{N+1} P(Fₙ).

37. To establish the union bound for a countable sequence of events, we proceed as follows. Let Aₙ := Fₙ ∩ Fᶜ_{n−1} ∩ ··· ∩ F₁ᶜ; the Aₙ are disjoint with ⋃_{n=1}^∞ Aₙ = ⋃_{n=1}^∞ Fₙ. Then

P(⋃_{n=1}^∞ Fₙ) = P(⋃_{n=1}^∞ Aₙ)

= Σ_{n=1}^∞ P(Aₙ), since the Aₙ are disjoint,

≤ Σ_{n=1}^∞ P(Fₙ), since Aₙ ⊆ Fₙ.

38. Following the hint, we put Gₙ := ⋃_{k=n}^∞ Bₖ so that we can write

P(⋂_{n=1}^∞ ⋃_{k=n}^∞ Bₖ) = P(⋂_{n=1}^∞ Gₙ)

= lim_{N→∞} P(G_N), limit property of P,

= lim_{N→∞} P(⋃_{k=N}^∞ Bₖ), definition of G_N,

≤ lim_{N→∞} Σ_{k=N}^∞ P(Bₖ), union bound,

= 0, since Σ_{k=1}^∞ P(Bₖ) < ∞.

39. (a) P(A₀) = 1, P(A₁) = 2/3, P(A₂) = 4/9 = (2/3)², and P(A₃) = 8/27 = (2/3)³.

(b) P(Aₙ) = (2/3)ⁿ.

(c) Write

P(A) = P(⋂_{n=1}^∞ Aₙ) = lim_{N→∞} P(⋂_{n=1}^N Aₙ), limit property of P,

= lim_{N→∞} P(A_N), since A_{n+1} ⊆ Aₙ,

= lim_{N→∞} (2/3)^N = 0.

40. Consider the collection consisting of the empty set along with all unions of the form ⋃ᵢ A_{kᵢ} for some finite subsequence of distinct elements from {1, . . . , n}. We first show that this collection is a σ-field. First, it contains ∅ by definition. Second, since A₁, . . . , Aₙ is a partition,

(⋃ᵢ A_{kᵢ})ᶜ = ⋃ᵢ A_{mᵢ},

where mᵢ is the subsequence {1, . . . , n} \ {kᵢ}. Hence, the collection is closed under complementation. Third,

⋃_{n=1}^∞ ⋃ᵢ A_{k_{n,i}} = ⋃ⱼ A_{mⱼ},

where an integer l ∈ {1, . . . , n} is in {mⱼ} if and only if k_{n,i} = l for some n and some i. This shows that the collection is a σ-field. Finally, since every element in our collection must be contained in every σ-field that contains A₁, . . . , Aₙ, our collection must be σ(A₁, . . . , Aₙ).

41. We claim that A is not a σ-field. Our proof is by contradiction: we assume A is a σ-field and derive a contradiction. Consider the set

⋂_{n=1}^∞ [0, 1/2ⁿ) = {0}.

Each [0, 1/2ⁿ) belongs to A, and we are assuming A is a σ-field. Hence, {0} ∈ A. Now, any set in A must belong to some Aₙ. By the preceding problem, every set in Aₙ must be a finite union of sets from Cₙ. However, the singleton set {0} cannot be expressed as a finite union of sets from any Cₙ. Hence, {0} ∉ A, which is a contradiction.

42. Let Aᵢ := X⁻¹({xᵢ}) for i = 1, . . . , n. By the problems mentioned in the hint, for any subset B, if X⁻¹(B) ≠ ∅, then

X⁻¹(B) = ⋃_{i : xᵢ ∈ B} Aᵢ ∈ σ(A₁, . . . , Aₙ).

It follows that the smallest σ-field containing all the X⁻¹(B) is σ(A₁, . . . , Aₙ).

43. (a) F = {∅, A, B, {3}, {1, 2}, {4, 5}, {1, 2, 4, 5}, Ω}.

(b) The corresponding probabilities are 0, 5/8, 7/8, 1/2, 1/8, 3/8, 1/2, 1.

(c) Since {1} ∉ F, P({1}) is not defined.

44. Suppose that a σ-field A contains an infinite sequence Fₙ of sets. If the sequence is not disjoint, we can construct a new sequence Aₙ of disjoint sets with each Aₙ ∈ A. Let a = a₁, a₂, . . . be an infinite sequence of zeros and ones. Then A contains each union of the form

⋃_{i : aᵢ = 1} Aᵢ.

Furthermore, since the Aᵢ are disjoint, each sequence a gives a different union, and we know from the text that the number of infinite sequences a is uncountably infinite.

45. (a) First, since ∅ is in each A_α, ∅ ∈ ⋂_α A_α. Second, if A ∈ ⋂_α A_α, then A ∈ A_α for each α, and so Aᶜ ∈ A_α for each α. Hence, Aᶜ ∈ ⋂_α A_α. Third, if Aₙ ∈ ⋂_α A_α for all n, then for each n and each α, Aₙ ∈ A_α. Then ⋃ₙ Aₙ ∈ A_α for each α, and so ⋃ₙ Aₙ ∈ ⋂_α A_α.

(b) We first note that

A₁ = {∅, {1}, {2}, {3, 4}, {2, 3, 4}, {1, 3, 4}, {1, 2}, Ω}

and

A₂ = {∅, {1}, {3}, {2, 4}, {2, 3, 4}, {1, 2, 4}, {1, 3}, Ω}.

It is then easy to see that A₁ ∩ A₂ = {∅, {1}, {2, 3, 4}, Ω}.

(c) First note that by part (a), ⋂_{A : C ⊆ A} A is a σ-field, and since C ⊆ A for each A in the intersection, the σ-field ⋂_{A : C ⊆ A} A contains C. Finally, if D is any σ-field that contains C, then D is one of the A's in the intersection. Hence,

C ⊆ ⋂_{A : C ⊆ A} A ⊆ D.

Thus ⋂_{A : C ⊆ A} A is the smallest σ-field that contains C.

46. The union of two σ-fields is not always a σ-field. Here is an example. Let Ω := {1, 2, 3, 4}, and put

F := {∅, {1, 2}, {3, 4}, Ω} and G := {∅, {1, 3}, {2, 4}, Ω}.

Then

F ∪ G = {∅, {1, 2}, {3, 4}, {1, 3}, {2, 4}, Ω}

is not a σ-field since it does not contain {1, 2} ∩ {1, 3} = {1}.

47. Let Ω denote the positive integers, and let A denote the collection of subsets A such that either A or Aᶜ is finite.

(a) Let E denote the subset of even integers. Then E does not belong to A since neither E nor Eᶜ (the odd integers) is a finite set.

(b) To show that A is closed under finite unions, we consider two cases. First suppose that A₁, . . . , Aₙ are all finite. Then

|⋃_{i=1}^n Aᵢ| ≤ Σ_{i=1}^n |Aᵢ| < ∞,

and so ⋃_{i=1}^n Aᵢ ∈ A. In the second case, suppose that some Aⱼᶜ is finite. Then

(⋃_{i=1}^n Aᵢ)ᶜ = ⋂_{i=1}^n Aᵢᶜ ⊆ Aⱼᶜ.

Hence, the complement of ⋃_{i=1}^n Aᵢ is finite, and so the union belongs to A.

(c) A is not a σ-field. To see this, put Aᵢ := {2i} for i = 1, 2, . . . . Then ⋃_{i=1}^∞ Aᵢ = E ∉ A by part (a).

48. Let Ω be an uncountable set. Let A denote the collection of all subsets A such that either A is countable or Aᶜ is countable. We show that A is a σ-field. First, the empty set is countable. Second, if A ∈ A, we must show that Aᶜ ∈ A. There are two cases. If A is countable, then the complement of Aᶜ is A, which is countable, and so Aᶜ ∈ A. If Aᶜ is countable, then Aᶜ ∈ A. Third, let A₁, A₂, . . . belong to A. There are two cases to consider. If all Aₙ are countable, then ⋃ₙ Aₙ is also countable by an earlier problem. Otherwise, if some Aₘᶜ is countable, then write

(⋃_{n=1}^∞ Aₙ)ᶜ = ⋂_{n=1}^∞ Aₙᶜ ⊆ Aₘᶜ.

Since a subset of a countable set is countable, we see that the complement of ⋃_{n=1}^∞ Aₙ is countable, and thus the union belongs to A.

49. (a) Since (a, b] = ⋂_{n=1}^∞ (a, b + 1/n), and since each (a, b + 1/n) ∈ B, (a, b] ∈ B.

(b) Since {a} = ⋂_{n=1}^∞ (a − 1/n, a + 1/n), and since each (a − 1/n, a + 1/n) ∈ B, the singleton {a} ∈ B.

(c) Since by part (b) singleton sets are Borel sets, and since A is a countable union of Borel sets, A ∈ B; i.e., A is a Borel set.

(d) Using part (a), write

λ((a, b]) = λ(⋂_{n=1}^∞ (a, b + 1/n))

= lim_{N→∞} λ(⋂_{n=1}^N (a, b + 1/n)), limit property of probability,

= lim_{N→∞} λ((a, b + 1/N)), decreasing sets,

= lim_{N→∞} (b + 1/N) − a, characterization of λ,

= b − a.

Similarly,

λ({a}) = λ(⋂_{n=1}^∞ (a − 1/n, a + 1/n))

= lim_{N→∞} λ(⋂_{n=1}^N (a − 1/n, a + 1/n)), limit property of probability,

= lim_{N→∞} λ((a − 1/N, a + 1/N)), decreasing sets,

= lim_{N→∞} 2/N, characterization of λ,

= 0.

50. Let I denote the collection of open intervals, and let O denote the collection of open sets. We need to show that σ(I) = σ(O). Since I ⊆ O, every σ-field containing O also contains I. Hence, the smallest σ-field containing O contains I; i.e., I ⊆ σ(O). By the definition of the smallest σ-field containing I, it follows that σ(I) ⊆ σ(O). Now, if we can show that O ⊆ σ(I), then it will similarly follow that σ(O) ⊆ σ(I). Recall that in the problem statement, it was shown that every open set U can be written as a countable union of open intervals. This means U ∈ σ(I). This proves that O ⊆ σ(I), as required.

51. MATLAB. Chips from S1 are 80% reliable; chips from S2 are 70% reliable.

52. Observe that

N(O_{w,S₁}) = N(O_{S₁}) − N(O_{d,S₁}) = N(O_{S₁})[1 − N(O_{d,S₁})/N(O_{S₁})]

and

N(O_{w,S₂}) = N(O_{S₂}) − N(O_{d,S₂}) = N(O_{S₂})[1 − N(O_{d,S₂})/N(O_{S₂})].

53. Write

P(A|B ∩ C) P(B|C) = [P(A ∩ [B ∩ C])/P(B ∩ C)] · [P(B ∩ C)/P(C)] = P([A ∩ B] ∩ C)/P(C) = P(A ∩ B|C).

From this formula, we can isolate the equation

P(A|B ∩ C) P(B|C) = P([A ∩ B] ∩ C)/P(C).

Multiplying through by P(C) yields P(A|B ∩ C) P(B|C) P(C) = P(A ∩ B ∩ C).

54. (a) P(MM) = 140/(140 + 60) = 140/200 = 7/10 = 0.7. Then P(HT) = 1 − P(MM) = 0.3.

(b) Let D denote the event that a workstation is defective. Then, by the law of total probability,

P(D) = P(D|MM)P(MM) + P(D|HT)P(HT) = (.1)(.7) + (.2)(.3) = .07 + .06 = 0.13.

(c) Write

P(MM|D) = P(D|MM)P(MM)/P(D) = .07/.13 = 7/13.

55. Let O denote the event that a cell is overloaded, and let B denote the event that a call is blocked. The problem statement tells us that

P(O|B) = P(B|O)P(O)/P(B) = (3/10)(1/3)/P(B).

Next compute, by the law of total probability,

P(B) = P(B|O)P(O) + P(B|Oᶜ)P(Oᶜ) = (3/10)(1/3) + (1/10)(2/3) = 5/30 = 1/6.

We conclude that

P(O|B) = (1/10)/(1/6) = 6/10 = 3/5 = 0.6.
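The arithmetic above is easy to verify with exact fractions; a short Python sketch (the conditional probabilities are the ones quoted in the solution):

```python
from fractions import Fraction

# P(B|O) = 3/10, P(O) = 1/3, P(B|O^c) = 1/10, P(O^c) = 2/3.
PB_O, PO = Fraction(3, 10), Fraction(1, 3)
PB_Oc, POc = Fraction(1, 10), Fraction(2, 3)

PB = PB_O * PO + PB_Oc * POc   # law of total probability
PO_B = PB_O * PO / PB          # Bayes' rule
print(PB, PO_B)                # 1/6 and 3/5
```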

56. The problem statement tells us that P(R₁|T₀) = ε and P(R₀|T₁) = δ, say. We also know that

1 = P(Ω) = P(T₀ ∪ T₁) = P(T₀) + P(T₁).

The problem statement tells us that these last two probabilities are the same; hence they are both equal to 1/2. To find P(T₁|R₁), we begin by writing

P(T₁|R₁) = P(R₁|T₁)P(T₁)/P(R₁).

Next, we note that P(R₁|T₁) = 1 − P(R₀|T₁) = 1 − δ. By the law of total probability,

P(R₁) = P(R₁|T₁)P(T₁) + P(R₁|T₀)P(T₀) = (1 − δ)(1/2) + ε(1/2) = (1 − δ + ε)/2.

So,

P(T₁|R₁) = (1 − δ)(1/2)/[(1 − δ + ε)/2] = (1 − δ)/(1 − δ + ε).

57. Let H denote the event that a student does the homework, and let E denote the event that a student passes the exam. Then the problem statement and the law of total probability give

P(E) = P(E|H)P(H) + P(E|Hᶜ)P(Hᶜ) = (.8)(.6) + (.1)(1 − .6) = .48 + .04 = .52.

Next,

P(H|E) = P(E|H)P(H)/P(E) = .48/.52 = 12/13.

58. We must compute

P(C_F|A_F) = P(C_F ∩ A_F)/P(A_F) = P(A_F|C_F)P(C_F)/P(A_F).

By the law of total probability,

P(A_F) = P(A_F|C_F)P(C_F) + P(A_F|C_Fᶜ)P(C_Fᶜ) = (1/3)(1/4) + (1/10)(1 − 1/4) = 1/12 + 3/40 = 19/120.

Hence,

P(C_F|A_F) = (1/12) · (120/19) = 10/19.

59. Let F denote the event that a patient receives a flu shot. Let S, M, and R denote the events that Sue, Minnie, or Robin sees the patient. The problem tells us that

P(S) = .2, P(M) = .4, P(R) = .4, P(F|S) = .6, P(F|M) = .3, and P(F|R) = .1.

We must compute

P(S|F) = P(S ∩ F)/P(F) = P(F|S)P(S)/P(F).

Next, by the law of total probability,

P(F) = P(F|S)P(S) + P(F|M)P(M) + P(F|R)P(R) = (.6)(.2) + (.3)(.4) + (.1)(.4) = .12 + .12 + .04 = 0.28.

Thus,

P(S|F) = (12/100) · (100/28) = 3/7.

60. (a) Let Ω = {1, 2, 3, 4, 5} with P(A) := |A|/|Ω|. Without loss of generality, let 1 and 2 correspond to the two defective chips. Then D := {1, 2} is the event that a defective chip is tested. Hence, P(D) = |D|/5 = 2/5.

(b) Your friend's information tells you that of the three chips you may test, one is

defective and two are not. Hence, the conditional probability that the chip you

test is defective is 1/3.

(c) Yes, your intuition is correct. To prove this, we construct a sample space and probability measure and compute the desired conditional probability. Let

Ω := {(i, j, k) : i < j and k ∉ {i, j}},

where i, j, k ∈ {1, 2, 3, 4, 5}. Here i and j are the chips taken by the friend, and k is the chip that you test. We again take 1 and 2 to be the defective chips. The 10 possibilities for i and j are

12 13 14 15
23 24 25
34 35
45

For each pair in the above table, the three possible values of k are, respectively,

345 245 235 234
145 135 134
125 124
123

Hence, there are 30 triples in Ω. For the probability measure we take P(A) := |A|/|Ω|. Now let F_{ij} denote the event that the friend takes chips i and j with i < j. For example, if the friend takes chips 1 and 2, then from the second table, k has to be 3 or 4 or 5; i.e., F₁₂ = {(1, 2, 3), (1, 2, 4), (1, 2, 5)}. Let

T := F₁₂ ∪ F₁₃ ∪ F₁₄ ∪ F₁₅ ∪ F₂₃ ∪ F₂₄ ∪ F₂₅ ∪ F₃₄ ∪ F₃₅ ∪ F₄₅.

Letting D denote the event that the chip you test is defective (k = 1 or k = 2), we must compute

P(D|T) = P(D ∩ T)/P(T).

Since the F_{ij} that make up T are disjoint, |T| = 10 · 3 = 30 and P(T) = |T|/|Ω| = 1. We next observe that

D ∩ T = [D ∩ F₁₃] ∪ [D ∩ F₁₄] ∪ [D ∩ F₁₅] ∪ [D ∩ F₂₃] ∪ [D ∩ F₂₄] ∪ [D ∩ F₂₅] ∪ [D ∩ F₃₄] ∪ [D ∩ F₃₅] ∪ [D ∩ F₄₅].

Of the above intersections, the first six intersections are singleton sets, and the last three are pairs. Hence, |D ∩ T| = 6 · 1 + 3 · 2 = 12, and so P(D ∩ T) = 12/30 = 2/5. We conclude that P(D|T) = P(D ∩ T)/P(T) = (2/5)/1 = 2/5, which is the answer in part (a).

Remark. The model in part (c) can be used to solve part (b) by observing

that the probability in part (b) is

61. (a) If two sets A and B are disjoint, then by definition, A ∩ B = ∅.

(b) If two events A and B are independent, then by definition, P(A ∩ B) = P(A)P(B).

(c) If two events A and B are disjoint, then P(A ∩ B) = P(∅) = 0. In order for them to be independent, we must have P(A)P(B) = 0; i.e., at least one of the two events must have zero probability. If two disjoint events both have positive probability, then they cannot be independent.

62. Let W denote the event that the decoder outputs the wrong message. Of course, Wᶜ is the event that the decoder outputs the correct message. We must find P(W) = 1 − P(Wᶜ). Now, Wᶜ occurs if only the first bit is flipped, or only the second bit is flipped, or only the third bit is flipped, or if no bits are flipped. Denote these disjoint events by F₁₀₀, F₀₁₀, F₀₀₁, and F₀₀₀, respectively. Then

P(Wᶜ) = P(F₁₀₀) + P(F₀₁₀) + P(F₀₀₁) + P(F₀₀₀)

= p(1 − p)² + (1 − p)p(1 − p) + (1 − p)²p + (1 − p)³

= 3p(1 − p)² + (1 − p)³.

Hence,

P(W) = 1 − 3p(1 − p)² − (1 − p)³ = 3p² − 2p³.

If p = 0.1, then P(W) = 0.03 − 0.002 = 0.028.
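A Monte Carlo sketch in Python of the repetition-code error probability (the decoder model — a majority vote over three independently flipped bits — is the one described above; trial count and seed are our own choices):

```python
import random

# Estimate P(W) = 3p^2 - 2p^3 for the 3-bit repetition code by
# simulation: the majority decoder errs when 2 or 3 bits are flipped.
random.seed(1)
p = 0.1
trials = 200_000
errors = 0
for _ in range(trials):
    flips = sum(random.random() < p for _ in range(3))
    if flips >= 2:              # majority of bits flipped -> wrong decode
        errors += 1
est = errors / trials
exact = 3 * p**2 - 2 * p**3     # = 0.028 for p = 0.1
print(f"simulated {est:.4f}, exact {exact:.4f}")
```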

63. Let Aᵢ denote the event that your phone selects channel i, i = 1, . . . , 10, and let Bⱼ denote the event that your neighbor's phone selects channel j, j = 1, . . . , 10. Let P(Aᵢ) = P(Bⱼ) = 1/10, and assume the Aᵢ and Bⱼ are independent. Then the probability that both phones select the same channel is

P(⋃_{i=1}^{10} [Aᵢ ∩ Bᵢ]) = Σ_{i=1}^{10} P(Aᵢ ∩ Bᵢ) = Σ_{i=1}^{10} P(Aᵢ)P(Bᵢ) = Σ_{i=1}^{10} (1/10)(1/10) = 0.1.

64. Let L denote the event that the left airbag works properly, and let R denote the event that the right airbag works properly. Assume L and R are independent with P(Lᶜ) = P(Rᶜ) = p. The probability that at least one airbag works properly is

P(L ∪ R) = 1 − P(Lᶜ ∩ Rᶜ) = 1 − P(Lᶜ)P(Rᶜ) = 1 − p².

65. The probability that the dart never lands within 2 cm of the center is

P(⋂_{n=1}^∞ Aₙᶜ) = lim_{N→∞} P(⋂_{n=1}^N Aₙᶜ), limit property of P,

= lim_{N→∞} ∏_{n=1}^N P(Aₙᶜ), independence,

= lim_{N→∞} ∏_{n=1}^N (1 − p)

= lim_{N→∞} (1 − p)^N = 0.

66. Let Wᵢ denote the event that you win on your ith play of the lottery. The probability that you win at least once in n plays is

P(⋃_{i=1}^n Wᵢ) = 1 − P(⋂_{i=1}^n Wᵢᶜ) = 1 − ∏_{i=1}^n P(Wᵢᶜ), by independence,

= 1 − ∏_{i=1}^n (1 − p) = 1 − (1 − p)ⁿ.

We need to choose n so that 1 − (1 − p)ⁿ > 1/2, which happens if and only if

1/2 > (1 − p)ⁿ, or −ln 2 > n ln(1 − p), or n > ln 2/[−ln(1 − p)],

where the last step uses the fact that ln(1 − p) is negative. For p = 10⁻⁶, we need n > 693147.
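The threshold n > ln 2/[−ln(1 − p)] is easy to evaluate numerically; a Python sketch:

```python
import math

# Smallest n with 1 - (1-p)^n > 1/2 satisfies n > ln 2 / (-ln(1-p)).
p = 1e-6
n_min = math.log(2) / -math.log1p(-p)   # log1p(-p) = ln(1-p), computed accurately
print(math.ceil(n_min))                  # smallest qualifying integer n
```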

67. Let A denote the event that Anne catches no fish, and let B denote the event that Betty catches no fish. Assume A and B are independent with P(A) = P(B) = p. We must compute

P(A|A ∪ B) = P(A ∩ [A ∪ B])/P(A ∪ B) = P(A)/P(A ∪ B),

where the last step uses the fact that A ⊆ A ∪ B. To compute the denominator, write

P(A ∪ B) = P(A) + P(B) − P(A)P(B) = 2p − p² = p(2 − p).

Then

P(A|A ∪ B) = p/[p(2 − p)] = 1/(2 − p).

68. Next, since A and B are independent and since A and C are independent, P(A ∩ B) = P(A)P(B) and P(A ∩ C) = P(A)P(C). It follows that P(A ∩ (B \ C)) = P(A)P(B \ C), which establishes the claimed independence.

69. We show that A, B, and C are mutually independent. To begin, note that P(A) = P(B) = P(C) = 1/2. Next, we need to identify the events

A ∩ B = [0, 1/4),

A ∩ C = [0, 1/8) ∪ [1/4, 3/8),

B ∩ C = [0, 1/8) ∪ [1/2, 5/8),

A ∩ B ∩ C = [0, 1/8).

We find that P(A ∩ B) = P(A ∩ C) = P(B ∩ C) = 1/4, so that each pair of events is independent, and

P(A ∩ B ∩ C) = 1/8 = P(A)P(B)P(C).

70. From a previous problem we have that P(A ∩ C|B) = P(A|B ∩ C)P(C|B). Hence, P(A ∩ C|B) = P(A|B)P(C|B) if and only if P(A|B ∩ C) = P(A|B).

71. We show that the probability of the complementary event is zero. By the union bound,

P(⋃_{n=1}^∞ ⋂_{k=n}^∞ Bₖᶜ) ≤ Σ_{n=1}^∞ P(⋂_{k=n}^∞ Bₖᶜ).

Next,

P(⋂_{k=n}^∞ Bₖᶜ) = lim_{N→∞} P(⋂_{k=n}^N Bₖᶜ), limit property of P,

= lim_{N→∞} ∏_{k=n}^N P(Bₖᶜ), independence,

= lim_{N→∞} ∏_{k=n}^N [1 − P(Bₖ)]

≤ lim_{N→∞} ∏_{k=n}^N exp[−P(Bₖ)], the hint,

= lim_{N→∞} exp[−Σ_{k=n}^N P(Bₖ)]

= exp[−lim_{N→∞} Σ_{k=n}^N P(Bₖ)], since exp is continuous,

= exp[−Σ_{k=n}^∞ P(Bₖ)]

= e^{−∞} = 0,

since Σ_{k=1}^∞ P(Bₖ) = ∞ implies Σ_{k=n}^∞ P(Bₖ) = ∞.

73. There are 2ⁿ n-bit numbers.

74. There are 100! different orderings of the 100 message packets. In order that the first

header packet to be received is the 10th packet to arrive, the first 9 packets to be

received must come from the 96 data packets, the 10th packet must come from the 4

header packets, and the remaining 90 packets can be in any order. More specifically,

there are 96 possibilities for the first packet, 95 for the second, . . . , 88 for the ninth, 4

for the tenth, and 90! for the remaining 90 packets. Hence, the desired probability is

96 88 4 90! 96 88 4 90 89 88 4

= = = 0.02996.

100! 100 91 100 99 98 97
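The probability just computed can be checked exactly with fractions; a Python sketch following the same counting (first 9 arrivals from the 96 data packets, 10th from the 4 header packets):

```python
from fractions import Fraction

# P(first header packet arrives exactly in position 10).
prob = Fraction(1)
for k in range(9):                   # first 9 arrivals are data packets
    prob *= Fraction(96 - k, 100 - k)
prob *= Fraction(4, 91)              # 10th arrival is a header packet
print(float(prob))                   # ~0.02996
```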

75. C(5, 2) = 10 pictures are needed.

76. Suppose the player chooses distinct digits wxyz. The player wins if any of the 4! = 24

permutations of wxyz occurs. Since each permutation has probability 1/10 000 of

occurring, the probability of winning is 24/10 000 = 0.0024.

77. There are C(8, 3) = 56 8-bit words with 3 ones (and 5 zeros).

78. The probability that a random byte has 4 ones and 4 zeros is C(8, 4)/2⁸ = 70/256 = 0.2734.

79. In the first case, since the prizes are different, order is important. Hence, there are 41 · 40 · 39 = 63 960 outcomes. In the second case, since the prizes are the same, order is not important. Hence, there are C(41, 3) = 10 660 outcomes.

80. There are C(52, 14) possible hands. Since the deck contains 13 spades, 13 hearts, 13 diamonds, and 13 clubs, there are C(13, 2)·C(13, 3)·C(13, 4)·C(13, 5) hands with 2 spades, 3 hearts, 4 diamonds, and 5 clubs. Hence, the desired probability is

C(13, 2)·C(13, 3)·C(13, 4)·C(13, 5)/C(52, 14) = 0.0116.

81. All five cards are of the same suit if and only if they are all spades or all hearts or all

diamonds or all clubs. These are four disjoint events. Hence, the answer is four times

the probability of getting all spades:

4 · C(13, 5)/C(52, 5) = 4 · 1287/2 598 960 = 0.00198.
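Both card computations are easy to confirm with Python's `math.comb`; for instance, the same-suit probability:

```python
from math import comb

# Probability that all five cards come from one suit: four disjoint
# single-suit events, each with probability C(13,5)/C(52,5).
p_suited = 4 * comb(13, 5) / comb(52, 5)
print(round(p_suited, 5))   # 0.00198
```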

82. There are n!/(k₁! ··· kₘ!) such partitions.

83. The general result is

[n!/(k₀! ··· k_{m−1}!)]/mⁿ.

When n = 4 and m = 10 and a player chooses xxyz, we compute [4!/(2! 1! 1!)]/10 000 = 0.0012. For xxyy, we compute [4!/(2! 2!)]/10 000 = 0.0006. For xxxy, we compute [4!/(3! 1!)]/10 000 = 0.0004.
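A small Python helper (our own, not from the text) makes the three multinomial evaluations explicit:

```python
from math import factorial

def multinomial(n, *ks):
    # n!/(k1! k2! ... km!); assumes the ks sum to n
    assert sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# Winning probabilities for the 4-digit lottery, by digit pattern:
print(multinomial(4, 2, 1, 1) / 10_000)   # xxyz: 0.0012
print(multinomial(4, 2, 2) / 10_000)      # xxyy: 0.0006
print(multinomial(4, 3, 1) / 10_000)      # xxxy: 0.0004
```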

84. Two apples and three carrots corresponds to (0, 0, 1, 1, 0, 0, 0). Five apples corre-

sponds to (0, 0, 0, 0, 0, 1, 1).

CHAPTER 2

Problem Solutions

1. (b) {ω : X(ω) > 4} = {5, 6}.

(c) P(X ≤ 3) = P(X = 1) + P(X = 2) + P(X = 3) = 3(2/15) = 2/5, and P(X > 4) = P(X = 5) + P(X = 6) = 2/15 + 1/3 = 7/15.

2. (a) {ω : X(ω) = 2} = {1, 2, 3, 4}.

(b) {ω : X(ω) = 1} = {41, 42, . . . , 52}.

(c) P(X = 1 or X = 2) = P({1, 2, 3, 4} ∪ {41, 42, . . . , 52}). Since these are disjoint

events, the probability of their union is 4/52 + 12/52 = 16/52 = 4/13.

3. (a) {ω ∈ [0, ∞) : X(ω) ≤ 1} = [0, 1].

(b) {ω ∈ [0, ∞) : X(ω) ≤ 3} = [0, 3].

(c) P(X ≤ 1) = ∫₀¹ e^{−ω} dω = 1 − e⁻¹. Similarly, P(X ≤ 3) = 1 − e⁻³, and P(1 < X ≤ 3) = P(X ≤ 3) − P(X ≤ 1) = e⁻¹ − e⁻³.

4. First, since X⁻¹(∅) = ∅, μ(∅) = P(X⁻¹(∅)) = P(∅) = 0. Second, μ(B) = P(X⁻¹(B)) ≥ 0. Third, for disjoint Bₙ,

μ(⋃ₙ Bₙ) = P(X⁻¹(⋃ₙ Bₙ)) = P(⋃ₙ X⁻¹(Bₙ)) = Σₙ P(X⁻¹(Bₙ)) = Σₙ μ(Bₙ).

5. Since

P(Y > n − 1) = Σ_{k=n}^∞ P(Y = k) = P(Y = n) + Σ_{k=n+1}^∞ P(Y = k) = P(Y = n) + P(Y > n),

it follows that P(Y = n) = P(Y > n − 1) − P(Y > n).

6. P(Y = 0) = P({TTT,THH,HTH,HHT}) = 4/8 = 1/2, and

P(Y = 1) = P({TTH,THT,HTT,HHH}) = 4/8 = 1/2.

7. P(X = 1) = P(X = 2) = P(X = 3) = P(X = 4) = P(X = 5) = 2/15 and P(X = 6) =

1/3.

8. P(X = 2) = P({1, 2, 3, 4}) = 4/52 = 1/13. P(X = 1) = P({41, 42, . . . , 52}) = 12/52 =

3/13. P(X = 0) = 1 P(X = 2) P(X = 1) = 9/13.

9. The possible values of X are 0, 1, 4, 9, 16. We have P(X = 0) = P({0}) = 1/7, P(X = 1) = P({−1, 1}) = 2/7, P(X = 4) = P({−2, 2}) = 2/7, P(X = 9) = P({3}) = 1/7, and P(X = 16) = P({4}) = 1/7.


10. We have

P(X ≥ 2) = 1 − P(X = 0) − P(X = 1) = 1 − [e^{−λ} + λe^{−λ}] = 1 − e^{−λ}(1 + λ).

11. The probability that the sensor fails to activate is P(X < 4); the probability that the sensor activates is 1 − P(X < 4) = 0.143.

12. Let {Xₖ = 1} correspond to the event that the kth student gets an A. This event has probability P(Xₖ = 1) = p. Now, the probability that exactly one of the 15 students gets an A is

P(⋃_{k=1}^{15} {Xₖ = 1 and Xₗ = 0 for l ≠ k}) = Σ_{k=1}^{15} P({Xₖ = 1 and Xₗ = 0 for l ≠ k})

= Σ_{k=1}^{15} p(1 − p)¹⁴

= 15(.1)(.9)¹⁴ = 0.3432.

13. Let X₁, X₂, X₃ be the random digits of the drawing. Then P(Xᵢ = k) = 1/10 for k = 0, . . . , 9 since each digit has probability 1/10 of being chosen. Then if the player chooses d₁d₂d₃, the probability of winning is

P({X₁ = d₁, X₂ = d₂, X₃ = d₃} ∪ {X₁ = d₁, X₂ = d₂, X₃ ≠ d₃} ∪ {X₁ = d₁, X₂ ≠ d₂, X₃ = d₃} ∪ {X₁ ≠ d₁, X₂ = d₂, X₃ = d₃}),

which is equal to (.1)³ + 3[(.1)²(.9)] = 0.028 since the union is disjoint and since X₁, X₂, X₃ are independent.

14. Write

P(⋃_{k=1}^m {Xₖ < 2}) = 1 − P(⋂_{k=1}^m {Xₖ ≥ 2})

= 1 − ∏_{k=1}^m P(Xₖ ≥ 2) = 1 − ∏_{k=1}^m [1 − P(Xₖ ≤ 1)]

= 1 − [1 − {e^{−λ} + λe^{−λ}}]^m = 1 − [1 − e^{−λ}(1 + λ)]^m.

15. (a) Write

P(⋃_{i=1}^n {Xᵢ ≥ 2}) = 1 − P(⋂_{i=1}^n {Xᵢ ≤ 1}) = 1 − ∏_{i=1}^n P(Xᵢ ≤ 1)

= 1 − ∏_{i=1}^n [P(Xᵢ = 0) + P(Xᵢ = 1)]

= 1 − [e^{−λ} + λe^{−λ}]ⁿ = 1 − e^{−nλ}(1 + λ)ⁿ.

(b) P(⋂_{i=1}^n {Xᵢ ≥ 1}) = ∏_{i=1}^n P(Xᵢ ≥ 1) = ∏_{i=1}^n [1 − P(Xᵢ = 0)] = (1 − e^{−λ})ⁿ.

(c) P(⋂_{i=1}^n {Xᵢ = 1}) = ∏_{i=1}^n P(Xᵢ = 1) = (λe^{−λ})ⁿ = λⁿ e^{−nλ}.

16. First,

Σ_{k=0}^∞ (1 − p)pᵏ = (1 − p) Σ_{k=0}^∞ pᵏ = (1 − p) · 1/(1 − p) = 1.

Second,

Σ_{k=1}^∞ (1 − p)p^{k−1} = (1 − p) Σ_{k=1}^∞ p^{k−1} = (1 − p) Σ_{n=0}^∞ pⁿ = (1 − p) · 1/(1 − p) = 1.
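Both normalization sums can be spot-checked numerically; a Python sketch with an arbitrary test value of p (the tail beyond 200 terms is negligible):

```python
# Partial sums of the geometric0(p) and geometric1(p) pmfs.
p = 0.3
s0 = sum((1 - p) * p**k for k in range(200))            # sum over k = 0,1,...
s1 = sum((1 - p) * p**(k - 1) for k in range(1, 201))   # sum over k = 1,2,...
print(s0, s1)   # both should be (numerically) 1
```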

17. Let Xᵢ be the price of stock i, which is a geometric₀(p) random variable. Then

P(⋃_{i=1}^{29} {Xᵢ > 10}) = 1 − P(⋂_{i=1}^{29} {Xᵢ ≤ 10}) = 1 − ∏_{i=1}^{29} [(1 − p)(1 + p + ··· + p¹⁰)]

= 1 − [(1 − p) · (1 − p¹¹)/(1 − p)]²⁹ = 1 − (1 − p¹¹)²⁹.

18. For the first problem, we have

P(min(X₁, . . . , Xₙ) > ℓ) = P(⋂_{k=1}^n {Xₖ > ℓ}) = ∏_{k=1}^n P(Xₖ > ℓ) = ∏_{k=1}^n p^ℓ = p^{nℓ}.

Similarly,

P(max(X₁, . . . , Xₙ) ≤ ℓ) = P(⋂_{k=1}^n {Xₖ ≤ ℓ}) = ∏_{k=1}^n P(Xₖ ≤ ℓ) = ∏_{k=1}^n (1 − p^ℓ) = (1 − p^ℓ)ⁿ.

19. Let Xₖ denote the number of coins in the pocket of the kth student. Then the Xₖ are independent, and each is uniformly distributed from 0 to 20; i.e., P(Xₖ = i) = 1/21.

(a) P(⋂_{k=1}^{25} {Xₖ ≥ 5}) = ∏_{k=1}^{25} (16/21) = (16/21)²⁵ = 1.12 × 10⁻³.

(b) P(⋃_{k=1}^{25} {Xₖ ≥ 19}) = 1 − P(⋂_{k=1}^{25} {Xₖ ≤ 18}) = 1 − (1 − 2/21)²⁵ = 0.918.

(c) The probability that only student k has 19 coins in his or her pocket is

P({Xₖ = 19} ∩ ⋂_{l≠k} {Xₗ ≠ 19}) = (1/21)(20/21)²⁴ = 0.01477.

Hence, the probability that exactly one student has 19 coins is

P(⋃_{k=1}^{25} [{Xₖ = 19} ∩ ⋂_{l≠k} {Xₗ ≠ 19}]) = 25(0.01477) = 0.369.

20. P(Y = k) = P({X₁ = 1} ∩ ··· ∩ {X_{k−1} = 1} ∩ {Xₖ = 0}) = p^{k−1}(1 − p), k = 1, 2, . . . .

21. (a) Write

P(X > n) = Σ_{k=n+1}^∞ (1 − p)p^{k−1} = Σ_{ℓ=0}^∞ (1 − p)p^{ℓ+n} = (1 − p)pⁿ Σ_{ℓ=0}^∞ p^ℓ = (1 − p)pⁿ · 1/(1 − p) = pⁿ.

(b) Write

P(X > n + k|X > n) = P(X > n + k, X > n)/P(X > n) = P(X > n + k)/P(X > n) = p^{n+k}/pⁿ = pᵏ.
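The tail formula P(X > n) = pⁿ and the memoryless property can be spot-checked numerically; a Python sketch (p = 0.4 is our arbitrary choice, and the infinite tails are truncated far out):

```python
# Tail probabilities of the geometric1(p) pmf p_X(k) = (1-p)p^(k-1).
p = 0.4

def tail(n, terms=400):
    # P(X > n) = sum_{k=n+1}^inf (1-p) p^(k-1), truncated after `terms` terms
    return sum((1 - p) * p**(k - 1) for k in range(n + 1, n + 1 + terms))

assert abs(tail(5) - p**5) < 1e-12
# Memorylessness: P(X > n+k | X > n) = P(X > k), here with n = 3, k = 4.
assert abs(tail(7) / tail(3) - tail(4)) < 1e-12
print("tail(5) =", tail(5))
```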

P(X > n) P(X > n) p

22. Since P(Y > k) = P(Y > n + k|Y > n), we can write

P(Y > k) = P(Y > n + k|Y > n) = P(Y > n + k, Y > n)/P(Y > n) = P(Y > n + k)/P(Y > n).

Let p := P(Y > 1). Taking k = 1 above yields P(Y > n + 1) = P(Y > n)p. Then with n = 1 we have P(Y > 2) = P(Y > 1)p = p². With n = 2 we have P(Y > 3) = P(Y > 2)p = p³. In general, then, P(Y > n) = pⁿ. Finally, P(Y = n) = P(Y > n − 1) − P(Y > n) = p^{n−1} − pⁿ = p^{n−1}(1 − p), which is the geometric₁(p) pmf.

23. (a) To compute pX (i), we sum row i of the matrix. This yields pX (1) = pX (3) = 1/4

and pX (2) = 1/2. To compute pY ( j), we sum column j to get pY (1) = pY (3) =

1/4 and pY (2) = 1/2.


(b) To compute P(X < Y ), we sum pXY (i, j) over i and j such that i < j. We have

P(X < Y ) = pXY (1, 2) + pXY (1, 3) + pXY (2, 3) = 0 + 1/8 + 0 = 1/8.

(c) We claim that X and Y are not independent. For example, pXY (1, 2) = 0 is not

equal to pX (1)pY (2) = 1/8.

24. (a) To compute pX (i), we sum row i of the matrix. This yields pX (1) = pX (3) = 1/4

and pX (2) = 1/2. To compute pY ( j), we sum column j to get pY (1) = pY (3) =

1/6 and pY (2) = 2/3.

(b) To compute P(X < Y ), we sum pXY (i, j) over i and j such that i < j. We have

P(X < Y ) = pXY (1, 2) + pXY (1, 3) + pXY (2, 3) = 1/6 + 1/24 + 1/12 = 7/24.

(c) Using the results of part (a), it is easy to verify that pX (i)pY ( j) = pXY (i, j) for

i, j = 1, 2, 3. Hence, X and Y are independent.

25. To compute the marginal of X, write

p_X(1) = Σ_{j=0}^∞ (e^{−3} 3ʲ)/(3 · j!) = (e^{−3}/3) e³ = 1/3.

Similarly,

p_X(2) = Σ_{j=0}^∞ (4e^{−6} 6ʲ)/(6 · j!) = (4e^{−6}/6) e⁶ = 2/3.

We clearly have p_Y(j) = 0 for j < 0 and

p_Y(j) = (3^{j−1} e^{−3})/j! + 4 · (6^{j−1} e^{−6})/j!, j ≥ 0.

Since p_X(1)p_Y(j) ≠ p_XY(1, j), X and Y are not independent.

26. (a) For k 1,

(1 p)pk1 kn ek

pX (k) = n!

n=0

kn

= (1 p)pk1 ek n! = (1 p)pk1 ek ek = (1 p)pk1 ,

n=0

(b) Next,

1 p

pY (0) = (1 p)pk1 ek = (p/e)k1

e k=1

k=1

1 p 1 p 1 1 p

= (p/e)m = e 1 p/e = e p .

e m=0


(c) Since pX (1)pY (0) = (1 p)2 /(e p) is not equal to pXY (1, 0) = (1 p)/e, X

and Y are not independent.

27. MATLAB. Here is a script:

p = ones(1,51)/51;

k=[0:50];

i = find(g(k) >= -16);

fprintf('The answer is %g\n', sum(p(i)))

where

function y = g(x)

y = 5*x.*(x-10).*(x-20).*(x-30).*(x-40).*(x-50)/1e6;

28. MATLAB. If you modified your program for the preceding problem only by the way

you compute P(X = k), then you may get only 0.5001 = P(g(X) 16 and X 50).

Note that g(x) > 0 for x > 50. Hence, you also have to add P(X 51) = p51 = 0.0731

to 0.5001 to get 0.5732.

29. MATLAB. OMITTED.

30. MATLAB. OMITTED.

31. MATLAB. OMITTED.

32. E[X] = 2(1/3) + 5(2/3) = 12/3 = 4.

33. E[I_(2,6)(X)] = Σ_{k=3}^5 P(X = k) = (1−p)[p^3 + p^4 + p^5]. For p = 1/2, we get E[I_(2,6)(X)] = 7/64 = 0.109375.

34. Write

E[1/(X + 1)] = Σ_{n=0}^∞ (1/(n+1)) λ^n e^{−λ}/n! = (e^{−λ}/λ) Σ_{n=0}^∞ λ^{n+1}/(n+1)! = (e^{−λ}/λ) Σ_{n=1}^∞ λ^n/n!
            = (e^{−λ}/λ)[Σ_{n=0}^∞ λ^n/n! − 1] = (e^{−λ}/λ)[e^λ − 1] = (1 − e^{−λ})/λ.
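Problem 34's identity, E[1/(X+1)] = (1 − e^{−λ})/λ for X ~ Poisson(λ), is easy to confirm numerically; a minimal sketch with an arbitrary test value of λ:

```python
# Compare a truncated expectation sum against the closed form.
import math

lam = 2.5  # arbitrary test value
approx = sum(1.0 / (n + 1) * lam**n * math.exp(-lam) / math.factorial(n)
             for n in range(100))  # tail beyond n = 100 is negligible
exact = (1 - math.exp(-lam)) / lam
assert abs(approx - exact) < 1e-12
```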

35. Since var(X) = E[X 2 ] (E[X])2 , E[X 2 ] = var(X) + (E[X])2 . Hence, E[X 2 ] = 7 + 22 =

7 + 4 = 11.

36. Since Y = cX, E[Y] = E[cX] = cm. Hence,

var(Y) = E[(Y − cm)^2] = E[(cX − cm)^2] = c^2 E[(X − m)^2] = c^2 var(X) = c^2 σ^2.

37. Write

E[(X + Y)^3] = E[X^3] + 3E[X^2 Y] + 3E[X Y^2] + E[Y^3]
            = E[X^3] + 3E[X^2]E[Y] + 3E[X]E[Y^2] + E[Y^3], by independence.

Now, as noted in the text, for a Bernoulli(p) random variable, X^n = X, and so E[X^n] = E[X] = p. Similarly E[Y^n] = q. Thus,

E[(X + Y)^3] = p + 3pq + 3pq + q = p + q + 6pq.

38. Put

f(c) := E[(X − c)^2] = E[X^2] − 2mc + c^2,

and differentiate with respect to c to get f'(c) = −2m + 2c. Solving f'(c) = 0 results in c = m. An alternative approach is to write

E[(X − c)^2] = E[((X − m) + (m − c))^2] = var(X) + (m − c)^2,

which is clearly minimized by taking c = m.

39. The two sketches, showing I_[a,∞)(x) bounded above by x/a and by (x/a)^{1/2}, are omitted here.

40. The right-hand side is easy: E[X]/2 = (3/4)/2 = 3/8 = 0.375. The left-hand side is more work:

P(X ≥ 2) = 1 − P(X = 0) − P(X = 1) = 1 − e^{−λ}(1 + λ).

For λ = 3/4, P(X ≥ 2) = 0.1734. So the bound is a little more than twice the value of the probability.

41. The Chebyshev bound is ( + 2 )/4. For = 3/4, the bound is 0.3281, which is

a little better than the Markov inequality bound in the preceding problem. The true

probability is 0.1734.

42. Comparing the definitions of ρXY and cov(X,Y), we find ρXY = cov(X,Y)/(σX σY). Hence, cov(X,Y) = σX σY ρXY. Since cov(X,Y) := E[(X − mX)(Y − mY)], if Y = X, we see that cov(X, X) = E[(X − mX)^2] =: var(X).

43. Put

f(a) := E[(X − aY)^2] = σX^2 − 2aσX σY ρ + a^2 σY^2.

Then

f'(a) = −2σX σY ρ + 2aσY^2.

Setting this equal to zero and solving for a yields a = (σX/σY)ρ.


44. Since P(X = ±1) = P(X = ±2) = 1/4, E[X] = 0. Similarly, since P(XY = ±1) = 1/4 and P(XY = ±4) = 1/4, E[XY] = 0. Thus, E[XY] = 0 = E[X]E[Y] and we see that X and Y are uncorrelated. Next, since X = ±1 implies Y = 1, P(X = 1, Y = 1) = P(X = 1) = 1/4 while P(Y = 1) = P(X = 1 or X = −1) = 1/2. Thus,

P(X = 1, Y = 1) = 1/4 ≠ 1/8 = P(X = 1)P(Y = 1),

and so X and Y are not independent.

45. As discussed in the text, for uncorrelated random variables, the variance of the sum is the sum of the variances. Since independent random variables are uncorrelated, the same result holds for them too. Hence, for Y = X1 + ··· + XM,

var(Y) = Σ_{k=1}^M var(Xk).

We also have E[Y] = Σ_{k=1}^M E[Xk]. Next, since the Xk are i.i.d. geometric1(p), E[Xk] = 1/(1−p) and var(Xk) = p/(1−p)^2. It follows that var(Y) = Mp/(1−p)^2 and E[Y] = M/(1−p). We conclude by writing

E[Y^2] = var(Y) + (E[Y])^2 = Mp/(1−p)^2 + M^2/(1−p)^2 = M(p + M)/(1−p)^2.

46. From E[Y ] = E[dX s(1 X)] = d p s(1 p) = 0, we find that d/s = (1 p)/p.

47. (a) p = 1/1000.

(b) Since (1 p)/p = (999/1000)/(1/1000) = 999, the fair odds against are 999 :1.

(c) Since the fair odds of 999 :1 are not equal to the offered odds of 500 :1, the game

is not fair. To make the game fair, the lottery should pay $900 instead of $500.

48. First note that

∫_1^∞ dt/t^p = t^{1−p}/(1−p)|_1^∞ for p ≠ 1, and ∫_1^∞ dt/t = ln t|_1^∞.

For p > 1, the integral is equal to 1/(p−1). For p ≤ 1, the integral is infinite. For 0 < p ≤ 1, write

Σ_{k=1}^∞ 1/k^p ≥ Σ_{k=1}^∞ ∫_k^{k+1} dt/t^p = ∫_1^∞ dt/t^p = ∞.

For p > 1, it suffices to show that Σ_{k=2}^∞ 1/k^p < ∞. To this end, write

Σ_{k=2}^∞ 1/k^p = Σ_{k=1}^∞ 1/(k+1)^p ≤ Σ_{k=1}^∞ ∫_k^{k+1} dt/t^p = ∫_1^∞ dt/t^p < ∞.

49. With P(X = k) = C_p/k^p,

E[X^n] = Σ_{k=1}^∞ k^n · C_p/k^p = C_p Σ_{k=1}^∞ 1/k^{p−n}.

By the preceding problem this last sum is finite for p − n > 1, or equivalently, n < p − 1. Otherwise the sum is infinite; the case 1 ≥ p − n > 0 being handled by the preceding problem, and the case 0 ≥ p − n being obvious.

50. If all outcomes are equally likely,

H(X) = Σ_{i=1}^n p_i log(1/p_i) = Σ_{i=1}^n (1/n) log n = log n.

If instead p_j = 1 for some j (and p_i = 0 for i ≠ j),

H(X) = Σ_{i=1}^n p_i log(1/p_i) = p_j log(1/p_j) = 1 · log 1 = 0.

51. For a discrete random variable taking finitely many values,

E[g(X)] = Σ_{i=1}^n g(x_i)p_i  and  g(E[X]) = g(Σ_{i=1}^n x_i p_i).

Since g is convex, the definition of convexity says that Jensen's inequality holds for n = 2. Now suppose Jensen's inequality holds for some n ≥ 2. We must show it holds for n + 1. The case of n is

Σ_{i=1}^n g(x_i)p_i ≥ g(Σ_{i=1}^n x_i p_i), if p_1 + ··· + p_n = 1.

Write

Σ_{i=1}^{n+1} g(x_i)p_i = (1 − p_{n+1}) Σ_{i=1}^n g(x_i) p_i/(1 − p_{n+1}) + p_{n+1} g(x_{n+1}).

Since

Σ_{i=1}^n p_i/(1 − p_{n+1}) = (p_1 + ··· + p_n)/(1 − p_{n+1}) = (1 − p_{n+1})/(1 − p_{n+1}) = 1,

the induction hypothesis tells us that

Σ_{i=1}^n g(x_i) p_i/(1 − p_{n+1}) ≥ g(Σ_{i=1}^n x_i p_i/(1 − p_{n+1})).

Hence,

Σ_{i=1}^{n+1} g(x_i)p_i ≥ (1 − p_{n+1}) g(Σ_{i=1}^n x_i p_i/(1 − p_{n+1})) + p_{n+1} g(x_{n+1}).

By the convexity of g, the right-hand side is

≥ g((1 − p_{n+1}) Σ_{i=1}^n x_i p_i/(1 − p_{n+1}) + p_{n+1} x_{n+1}) = g(Σ_{i=1}^{n+1} p_i x_i).

52. Let g(x) := x^{β/α}, which is convex for β ≥ α > 0, and put X := |Z|^α so that

E[X] = E[|Z|^α].

Then Jensen's inequality tells us that

E[|Z|^β] = E[g(X)] ≥ g(E[X]) = (E[|Z|^α])^{β/α}.

53. (a) For all discrete random variables, we have Σ_i P(X = x_i) = 1. For a nonnegative random variable, if x_k < 0, we must have P(X = x_k) = 0, and so

Σ_{i:x_i ≥ 0} P(X = x_i) = 1.

(b) Write

E[X] = Σ_i x_i P(X = x_i) = Σ_{i:x_i ≥ 0} x_i P(X = x_i) + Σ_{k:x_k < 0} x_k P(X = x_k).

By part (a), the last sum is zero. The remaining sum is obviously nonnegative.

(c) By part (b), 0 ≤ E[X − Y] = E[X] − E[Y]. Hence, E[Y] ≤ E[X].

CHAPTER 3

Problem Solutions

1. First, E[X] = G'_X(1) = (1/6 + (4/3)z)|_{z=1} = 1/6 + 4/3 = 9/6 = 3/2. Second, E[X(X−1)] = G''_X(1) = 4/3. Third, from E[X(X−1)] = E[X^2] − E[X], we have E[X^2] = 4/3 + 3/2 = 17/6. Finally, var(X) = E[X^2] − (E[X])^2 = 17/6 − 9/4 = (34 − 27)/12 = 7/12.

2. pX(0) = GX(0) = (1/6 + (1/6)z + (2/3)z^2)|_{z=0} = 1/6, pX(1) = G'_X(0) = (1/6 + (4/3)z)|_{z=0} = 1/6, and pX(2) = G''_X(0)/2 = (4/3)/2 = 2/3.

3. To begin, note that G0X (z) = 5((2 + z)/3)4 /3, and G00X (z) = 20((2 + z)/3)3 /9. Hence,

E[X] = G0X (1) = 5/3 and E[X(X 1)] = 20/9. Since E[X(X 1)] = E[X 2 ] E[X],

E[X 2 ] = 20/9 + 5/3 = 35/9. Finally, var(X) = E[X 2 ] (E[X])2 = 35/9 25/9 =

10/9.

4. For X ~ geometric0(p),

GX(z) = Σ_{n=0}^∞ z^n P(X = n) = Σ_{n=0}^∞ z^n (1−p)p^n = (1−p) Σ_{n=0}^∞ (zp)^n = (1−p) · 1/(1−pz).

Then

E[X] = G'_X(1) = (1−p)p/(1−pz)^2|_{z=1} = (1−p)p/(1−p)^2 = p/(1−p).

Next,

E[X^2] − E[X] = E[X(X−1)] = G''_X(1) = 2p^2(1−p)/(1−pz)^3|_{z=1} = 2p^2(1−p)/(1−p)^3 = 2p^2/(1−p)^2.

This implies that

E[X^2] = 2p^2/(1−p)^2 + p/(1−p) = [2p^2 + p(1−p)]/(1−p)^2 = (p + p^2)/(1−p)^2.

Finally,

var(X) = E[X^2] − (E[X])^2 = (p + p^2)/(1−p)^2 − p^2/(1−p)^2 = p/(1−p)^2.

Similarly, for X ~ geometric1(p),

GX(z) = Σ_{n=1}^∞ z^n P(X = n) = Σ_{n=1}^∞ z^n (1−p)p^{n−1} = ((1−p)/p) Σ_{n=1}^∞ (zp)^n = ((1−p)/p) · pz/(1−pz) = (1−p)z/(1−pz).

Now

G'_X(z) = [(1−pz)(1−p) + (1−p)pz]/(1−pz)^2 = (1−p)/(1−pz)^2.

Hence,

E[X] = G'_X(1) = 1/(1−p).

Next,

G''_X(z) = 2(1−pz)p(1−p)/(1−pz)^4 = 2p(1−p)/(1−pz)^3.

We then have

E[X^2] − E[X] = E[X(X−1)] = G''_X(1) = 2p(1−p)/(1−p)^3 = 2p/(1−p)^2,

and

E[X^2] = 2p/(1−p)^2 + 1/(1−p) = [2p + 1 − p]/(1−p)^2 = (1 + p)/(1−p)^2.

Finally,

var(X) = E[X^2] − (E[X])^2 = (1 + p)/(1−p)^2 − 1/(1−p)^2 = p/(1−p)^2.

5. Here Y = X1 + ··· + Xn, where the Xi are independent with Xi ~ Poisson(λi). To compute P(Y = 2), we first find GY(z), which turns out to be Poisson(λ) with λ := λ1 + ··· + λn. It then follows that P(Y = 2) = λ^2 e^{−λ}/2. It remains to write

GY(z) = E[z^Y] = E[z^{X1 + ··· + Xn}] = E[z^{X1} ··· z^{Xn}] = Π_{i=1}^n E[z^{Xi}] = Π_{i=1}^n e^{λi(z−1)} = exp[Σ_{i=1}^n λi (z−1)] = e^{λ(z−1)},

where the factorization of the expectation uses the independence of the Xi.
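The geometric mean and variance formulas above are easy to sanity-check by truncated summation of the pmfs; a brief sketch with an arbitrary p:

```python
# Check: geometric0(p) has mean p/(1-p), geometric1(p) has mean 1/(1-p),
# and both have variance p/(1-p)^2.
p = 0.3
N = 500  # truncation point; the geometric tail beyond this is negligible

# geometric0: P(X = n) = (1-p)p^n, n >= 0
m0 = sum(n * (1 - p) * p**n for n in range(N))
v0 = sum(n * n * (1 - p) * p**n for n in range(N)) - m0**2
assert abs(m0 - p / (1 - p)) < 1e-9
assert abs(v0 - p / (1 - p) ** 2) < 1e-9

# geometric1: P(X = n) = (1-p)p^(n-1), n >= 1
m1 = sum(n * (1 - p) * p ** (n - 1) for n in range(1, N))
v1 = sum(n * n * (1 - p) * p ** (n - 1) for n in range(1, N)) - m1**2
assert abs(m1 - 1 / (1 - p)) < 1e-9
assert abs(v1 - p / (1 - p) ** 2) < 1e-9
```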

6. If GX(z) = Σ_{k=0}^∞ z^k P(X = k), then GX(1) = Σ_{k=0}^∞ P(X = k) = 1. For the particular formula in the problem, we must have 1 = GX(1) = (a0 + a1 + a2 + ··· + an)^m/D, or D = (a0 + a1 + a2 + ··· + an)^m.

7. From the table on the inside of the front cover of the text, E[Xi] = 1/(1−p). Thus,

E[Y] = Σ_{i=1}^n E[Xi] = n/(1−p).

Second, since the variance of the sum of uncorrelated random variables is the sum of the variances, and since var(Xi) = p/(1−p)^2, we have

var(Y) = Σ_{i=1}^n var(Xi) = np/(1−p)^2.

Third,

E[Y^2] = var(Y) + (E[Y])^2 = np/(1−p)^2 + n^2/(1−p)^2 = n(p + n)/(1−p)^2.

Since Y is the sum of i.i.d. geometric1(p) random variables, the pgf of Y is the product of the individual pgfs. Thus,

GY(z) = Π_{i=1}^n (1−p)z/(1−pz) = [(1−p)z/(1−pz)]^n.

For Y ~ binomial(n, p) with GY(z) = [(1−p) + pz]^n,

E[Y] = G'_Y(1) = n[(1−p) + pz]^{n−1} p|_{z=1} = np.

Next,

E[Y(Y−1)] = G''_Y(1) = n(n−1)[(1−p) + pz]^{n−2} p^2|_{z=1} = n(n−1)p^2.

Finally,

GY(z) = Σ_{k=0}^n P(Y = k)z^k = Σ_{k=0}^n C(n,k) p^k (1−p)^{n−k} z^k.

Hence,

1 = GY(1) = Σ_{k=0}^n C(n,k) p^k (1−p)^{n−k}.

To derive the binomial theorem from

Σ_{k=0}^n C(n,k) p^k (1−p)^{n−k} = 1,

we follow the hint and replace p with a/(a+b). Note that 1 − p = b/(a+b). With these substitutions, the above equation becomes

Σ_{k=0}^n C(n,k) (a/(a+b))^k (b/(a+b))^{n−k} = 1, or Σ_{k=0}^n C(n,k) a^k b^{n−k}/(a+b)^n = 1.

Multiplying through by (a+b)^n yields

Σ_{k=0}^n C(n,k) a^k b^{n−k} = (a+b)^n.

Let Yn := X1 + ··· + Xn denote the number of errors in n bits. Assume the Xi are independent. Then

GYn(z) = E[z^{Yn}] = E[z^{X1 + ··· + Xn}] = E[z^{X1} ··· z^{Xn}]
       = E[z^{X1}] ··· E[z^{Xn}], by independence,
       = [(1−p) + pz]^n, since the Xi are i.i.d. Bernoulli(p).

We recognize this last expression as the binomial(n, p) probability generating function. Thus, P(Yn = k) = C(n,k) p^k (1−p)^{n−k}.

12. Let Xi binomial(ni , p) denote the number of students in the ith room. Then Y =

X1 + + XM is the total number of students in the school. Next,

GY (z) = E[zY ] = E[zX1 ++XM ]

= E[zX1 ] E[zXM ], by independence,

M

= [(1 p) + pz]ni , since Xi binomial(ni , p),

i=1

= [(1 p) + pz]n1 ++nM .

Setting n := n1 + + nM , we see that Y binomial(n, p). Hence, P(Y = k) =

n k

k p (1 p)nk .

13. Let Y = X1 + +Xn , where the Xi are i.i.d. with P(Xi = 1) = 1 p and P(Xi = 2) = p.

Observe that Xi 1 Bernoulli(p). Hence,

Y = n + Σ_{i=1}^n (Xi − 1) =: n + Z.

Since Z is the sum of i.i.d. Bernoulli(p) random variables, Z binomial(n, p). Hence,

P(Y = k) = P(n + Z = k) = P(Z = k n)

n

= pkn (1 p)2nk , k = n, . . . , 2n.

kn

14. Let Y = X1 + ··· + Xn (n = 10) denote the number of bits flipped. A codeword cannot be decoded if Y > 2. We need to find P(Y > 2). Observe that

GY(z) = E[z^Y] = E[z^{X1 + ··· + Xn}] = E[z^{X1} ··· z^{Xn}] = Π_{i=1}^n E[z^{Xi}] = [(1−p) + pz]^n,

so Y ~ binomial(10, p), and

P(Y > 2) = 1 − P(Y ≤ 2) = 1 − [P(Y = 0) + P(Y = 1) + P(Y = 2)]
         = 1 − [C(10,0)(1−p)^{10} + C(10,1)p(1−p)^9 + C(10,2)p^2(1−p)^8]
         = 1 − (1−p)^8[(1−p)^2 + 10p(1−p) + 45p^2].

15. Comparing the binomial(n, p) and Poisson(np) pmfs:

k    P(binomial(n,p) = k)    P(Poisson(np) = k)
0    0.2215                  0.2231
1    0.3355                  0.3347
2    0.2525                  0.2510
3    0.1258                  0.1255
4    0.0467                  0.0471
5    0.0138                  0.0141
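The table's entries are consistent with n = 150 and p = 0.01 (so np = 1.5); those parameter values are an inference, not stated in this excerpt. A short sketch regenerating both columns:

```python
# Recompute the binomial(n,p) and Poisson(np) pmfs for k = 0..5.
import math

n, p = 150, 0.01  # inferred parameters; np = 1.5
lam = n * p
rows = []
for k in range(6):
    b = math.comb(n, k) * p**k * (1 - p) ** (n - k)   # binomial pmf
    q = lam**k * math.exp(-lam) / math.factorial(k)   # Poisson pmf
    rows.append((k, round(b, 4), round(q, 4)))
    print(*rows[-1])
```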

16. If the Xi are i.i.d. with mean m, then

E[Mn] = E[(1/n) Σ_{i=1}^n Xi] = (1/n) Σ_{i=1}^n E[Xi] = (1/n) Σ_{i=1}^n m = nm/n = m.

If X is any random variable with mean m, then E[cX] = cE[X] = cm.

17. By Chebyshev's inequality,

P(|Mn − m| ≥ ε) ≤ σ^2/(nε^2) < 0.1,

which requires n > 10σ^2/ε^2; for the given σ and ε this works out to more than 160 students. If instead σ = 1, we require n > 10 students.

18. (a) E[Xi ] = E[IB (Zi )] = P(Zi B). Setting p := P(Zi B), we see that Xi = IB (Zi )

Bernoulli(p). Hence, var(Xi ) = p(1 p).

(b) In fact, the Xi are independent. Hence, they are uncorrelated.

19. Mn = 0 if and only if all the Xi are zero. Hence,

P(Mn = 0) = P(∩_{i=1}^n {Xi = 0}) = Π_{i=1}^n P(Xi = 0) = (1−p)^n.

Hence, the chances are more than 90% that when we run a simulation, M100 = 0 and we learn nothing!
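As a quick numerical illustration of Problem 19's remark (the value p = 0.001 is an illustrative rare-event probability, not given in this excerpt):

```python
# For a rare event, all 100 indicator samples are zero with high probability.
p = 0.001  # hypothetical rare-event probability
prob_all_zero = (1 - p) ** 100
assert prob_all_zero > 0.9  # so a 100-sample simulation usually reveals nothing
```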

20. If Xi = Z ~ Bernoulli(1/2) for all i, then

Mn = (1/n) Σ_{i=1}^n Xi = (1/n) Σ_{i=1}^n Z = (1/n) · nZ = Z.

Now, Z − 1/2 = ±1/2, and |Z − 1/2| = 1/2 with probability one. Thus, Mn does not converge in probability to the mean 1/2.

21. From the discussion of the weak law in the text, we have

P(|Mn − m| ≥ εn) ≤ σ^2/(n εn^2).

22. We have from the example that with p := λ/(λ + μ), pX|Z(i|j) = C(j,i) p^i (1−p)^{j−i} for i = 0, ..., j. In other words, as a function of i, pX|Z(i|j) is a binomial(j, p) pmf. Hence,

E[X|Z = j] = Σ_{i=0}^j i pX|Z(i|j)

is just the mean of a binomial(j, p) pmf. The mean of such a pmf is jp. Hence, E[X|Z = j] = jp = jλ/(λ + μ).

23. The problem is telling us that P(Y = k|X = i) = C(n,k) p_i^k (1 − p_i)^{n−k}. Hence, the desired probability is

1 − [P(X = 0|Y = j) + P(X = 1|Y = j) + P(X = 2|Y = j)] = 1 − [e^{−j} + j e^{−j} + (j^2/2)e^{−j}] = 1 − e^{−j}[1 + j + j^2/2].

If X and Y are independent, then

pX|Y(x_i|y_j) := P(X = x_i, Y = y_j)/P(Y = y_j) = P(X = x_i)P(Y = y_j)/P(Y = y_j) = P(X = x_i) = pX(x_i),

and similarly,

pY|X(y_j|x_i) := P(X = x_i, Y = y_j)/P(X = x_i) = P(X = x_i)P(Y = y_j)/P(X = x_i) = P(Y = y_j) = pY(y_j).


For T := X − Y with X and Y independent geometric0 random variables,

P(T = n) = P(X − Y = n) = Σ_{k=0}^∞ P(X − Y = n|Y = k)P(Y = k)
         = Σ_{k=0}^∞ P(X − k = n|Y = k)P(Y = k), by the substitution law,
         = Σ_{k=0}^∞ P(X = n + k|Y = k)P(Y = k)
         = Σ_{k=0}^∞ P(X = n + k)P(Y = k), by independence,
         = Σ_{k=0}^∞ (1−p)p^{n+k}(1−q)q^k
         = (1−p)(1−q)p^n Σ_{k=0}^∞ (pq)^k = (1−p)(1−q)p^n/(1 − pq).

27. The problem tells us that

P(Y = n|X = 1) = α^n e^{−α}/n!  and  P(Y = n|X = 2) = β^n e^{−β}/n!,

and that P(X = 1) = P(X = 2) = 1/2. We can now write

P(X = 1|Y = 2) = P(Y = 2|X = 1)P(X = 1)/P(Y = 2) = (α^2 e^{−α}/2)(1/2)/P(Y = 2).

It remains to use the law of total probability to compute

P(Y = 2) = Σ_{i=1}^2 P(Y = 2|X = i)P(X = i) = [P(Y = 2|X = 1) + P(Y = 2|X = 2)]/2 = [α^2 e^{−α}/2 + β^2 e^{−β}/2]/2 = [α^2 e^{−α} + β^2 e^{−β}]/4.

We conclude by writing

P(X = 1|Y = 2) = (α^2 e^{−α}/4)/([α^2 e^{−α} + β^2 e^{−β}]/4) = 1/(1 + (β/α)^2 e^{α−β}).

28. Let X = 0 or X = 1 according to whether message zero or message one is sent. The problem tells us that P(X = 0) = P(X = 1) = 1/2 and that

P(X = 1|Y = k) = P(Y = k|X = 1)P(X = 1)/P(Y = k) = (1−q)q^k (1/2)/P(Y = k).

By the law of total probability,

P(Y = k) = [P(Y = k|X = 0) + P(Y = k|X = 1)]/2 = [(1−p)p^k + (1−q)q^k]/2.

Hence,

P(X = 1|Y = k) = (1−q)q^k (1/2)/([(1−p)p^k + (1−q)q^k]/2) = 1/(1 + (1−p)p^k/((1−q)q^k)).

29. Let R denote the number of red apples in a crate, and let G denote the number of green apples in a crate. The problem is telling us that R ~ Poisson(λ) and G ~ Poisson(μ) are independent. If T = R + G is the total number of apples in the crate, we must compute

P(G = 0|T = k) = P(T = k|G = 0)P(G = 0)/P(T = k).

We first use the law of total probability, substitution, and independence to write

P(T = k|G = 0) = P(R + G = k|G = 0) = P(R = k) = λ^k e^{−λ}/k!.

We also note from the text that the sum of two independent Poisson random variables is a Poisson random variable whose parameter is the sum of the individual parameters. Hence, P(T = k) = (λ + μ)^k e^{−(λ+μ)}/k!. We can now write

P(G = 0|T = k) = (λ^k e^{−λ}/k!) e^{−μ}/((λ + μ)^k e^{−(λ+μ)}/k!) = (λ/(λ + μ))^k.
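Problem 29's answer can be verified directly from Bayes' rule on the Poisson pmfs; a minimal sketch (λ = 2 and μ = 3 are arbitrary test values):

```python
# Verify P(G = 0 | T = k) = (lam/(lam+mu))^k for independent Poisson R, G.
import math

lam, mu = 2.0, 3.0  # test values

def poi(n, a):
    """Poisson(a) pmf at n."""
    return a**n * math.exp(-a) / math.factorial(n)

for k in range(8):
    joint = poi(k, lam) * poi(0, mu)  # P(R = k, G = 0) = P(T = k, G = 0)
    total = sum(poi(i, lam) * poi(k - i, mu) for i in range(k + 1))  # P(T = k)
    assert abs(joint / total - (lam / (lam + mu)) ** k) < 1e-12
```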

30. Here P(Y = 1|X = n) = 1/(n+1) and X ~ Poisson(λ). By Bayes' rule,

P(X = n|Y = 1) = P(Y = 1|X = n)P(X = n)/P(Y = 1) = (1/(n+1)) λ^n e^{−λ}/n! / P(Y = 1).

Next, we compute

P(Y = 1) = Σ_{n=0}^∞ P(Y = 1|X = n)P(X = n) = Σ_{n=0}^∞ (1/(n+1)) λ^n e^{−λ}/n!
         = (e^{−λ}/λ) Σ_{n=0}^∞ λ^{n+1}/(n+1)! = (e^{−λ}/λ) Σ_{k=1}^∞ λ^k/k! = (e^{−λ}/λ)[Σ_{k=0}^∞ λ^k/k! − 1]
         = (e^{−λ}/λ)[e^λ − 1] = (1 − e^{−λ})/λ.

We conclude with

P(X = n|Y = 1) = (1/(n+1)) λ^n e^{−λ}/n! / ((1 − e^{−λ})/λ) = λ^{n+1}/((e^λ − 1)(n+1)!).


31. By Bayes' rule,

P(X = n|Y = k) = P(Y = k|X = n)P(X = n)/P(Y = k) = C(n,k) p^k (1−p)^{n−k} λ^n e^{−λ}/n! / P(Y = k).

Next,

P(Y = k) = Σ_{n=k}^∞ P(Y = k|X = n)P(X = n) = Σ_{n=k}^∞ C(n,k) p^k (1−p)^{n−k} λ^n e^{−λ}/n!
         = (p^k λ^k e^{−λ}/k!) Σ_{n=k}^∞ [(1−p)λ]^{n−k}/(n−k)! = (p^k λ^k e^{−λ}/k!) Σ_{m=0}^∞ [(1−p)λ]^m/m!
         = (p^k λ^k e^{−λ}/k!) e^{(1−p)λ} = (pλ)^k e^{−pλ}/k!.

Hence,

P(X = n|Y = k) = C(n,k) p^k (1−p)^{n−k} λ^n e^{−λ}/n! / ((pλ)^k e^{−pλ}/k!) = [(1−p)λ]^{n−k} e^{−(1−p)λ}/(n−k)!.

32. Since {max(X,Y) > k} ⊃ {X > k},

P(X > k|max(X,Y) > k) = P({X > k} ∩ {max(X,Y) > k})/P(max(X,Y) > k) = P(X > k)/P(max(X,Y) > k),

where

P(max(X,Y) > k) = 1 − P(max(X,Y) ≤ k) = 1 − P(X ≤ k)P(Y ≤ k).

If we put ρk := P(X ≤ k) and use the fact that X and Y have the same pmf, then

P(X > k|max(X,Y) > k) = (1 − ρk)/(1 − ρk^2) = (1 − ρk)/((1 − ρk)(1 + ρk)) = 1/(1 + ρk).

33. (a) Observe that

P(XY = 4) = P(X = 1, Y = 4) + P(X = 2, Y = 2) + P(X = 4, Y = 1) = (1−p)(1−q)[pq^4 + p^2q^2 + p^4q].


(b) Write

pZ(j) = Σ_i pY(j−i) pX(i)
      = Σ_{i=0}^∞ pY(j−i) pX(i), since pX(i) = 0 for i < 0,
      = Σ_{i=0}^j pY(j−i) pX(i), since pY(k) = 0 for k < 0,
      = (1−p)(1−q) Σ_{i=0}^j p^i q^{j−i} = (1−p)(1−q) q^j Σ_{i=0}^j (p/q)^i.

Now, if p = q,

pZ(j) = (1−p)^2 p^j (j + 1).

If p ≠ q,

pZ(j) = (1−p)(1−q) q^j · (1 − (p/q)^{j+1})/(1 − p/q) = (1−p)(1−q)(q^{j+1} − p^{j+1})/(q − p).
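The closed form for the convolution of two geometric0 pmfs (p ≠ q) can be checked term by term; a minimal sketch with arbitrary test values:

```python
# Compare the direct convolution sum against the closed form
# pZ(j) = (1-p)(1-q)(q^(j+1) - p^(j+1))/(q - p).
p, q = 0.3, 0.6  # test values, p != q
for j in range(10):
    conv = sum((1 - p) * p**i * (1 - q) * q ** (j - i) for i in range(j + 1))
    closed = (1 - p) * (1 - q) * (q ** (j + 1) - p ** (j + 1)) / (q - p)
    assert abs(conv - closed) < 1e-12
```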

34. For j = 0, 1, 2, 3,

pZ(j) = Σ_{i=0}^j (1/16) = (j + 1)/16.

For j = 4, 5, 6,

pZ(j) = Σ_{i=j−3}^3 (1/16) = (7 − j)/16.

35. We first write

P(Y = j|X = 1)/P(Y = j|X = 0) ≥ P(X = 0)/P(X = 1)

as

(λ1^j e^{−λ1}/j!)/(λ0^j e^{−λ0}/j!) ≥ (1−p)/p.

We can further simplify this to

(λ1/λ0)^j e^{−(λ1−λ0)} ≥ (1−p)/p.

Taking logarithms and rearranging, we obtain

j ≥ [λ1 − λ0 + ln((1−p)/p)] / ln(λ1/λ0).

Observe that the right-hand side is just a number (threshold) that is computable from

the problem data. If we observe Y = j, we compare j to the threshold. If j is greater

than or equal to this number, we decide X = 1; otherwise, we decide X = 0.


36. We first write

P(Y = j|X = 1)/P(Y = j|X = 0) ≥ P(X = 0)/P(X = 1)

as

((1 − q1)q1^j)/((1 − q0)q0^j) ≥ (1−p)/p.

We can further simplify this to

(q1/q0)^j ≥ (1−p)(1 − q0)/(p(1 − q1)).

Taking logarithms and rearranging, we obtain

j ≤ ln[(1−p)(1 − q0)/(p(1 − q1))] / ln(q1/q0),

since q1 < q0 implies ln(q1/q0) < 0.

37. Starting with P(X = xi |Y = y j ) = h(xi ), we have

If we can show that h(xi ) = pX (xi ), then it will follow that X and Y are independent.

Now observe that the sum over j of the left-hand side reduces to P(X = xi ) = pX (xi ).

The sum over j of the right-hand side reduces to h(xi ). Hence, pX (xi ) = h(xi ) as

desired.

38. First write

pX(1) = Σ_{j=0}^∞ pXY(1, j) = 1/3 and pX(2) = Σ_{j=0}^∞ pXY(2, j) = 2/3.

It then follows that pY|X(j|1) is Poisson(3) and pY|X(j|2) is Poisson(6). With these observations, it is clear that

E[Y|X = 1] = 3 and E[Y|X = 2] = 6,

and

pX|Y(1|j) = (1/3)3^j e^{−3}/j! / [(1/3)3^j e^{−3}/j! + (2/3)6^j e^{−6}/j!] = 1/(1 + 2^{j+1} e^{−3}),

and

pX|Y(2|j) = (2/3)6^j e^{−6}/j! / [(1/3)3^j e^{−3}/j! + (2/3)6^j e^{−6}/j!] = 1/(1 + 2^{−(j+1)} e^{3}).

We now have

E[X|Y = j] = 1 · 1/(1 + 2^{j+1} e^{−3}) + 2 · 1/(1 + 2^{−(j+1)} e^{3})
           = 1/(1 + 2^{j+1} e^{−3}) + 2 · 2^{j+1} e^{−3}/(1 + 2^{j+1} e^{−3}) = (1 + 2^{j+2} e^{−3})/(1 + 2^{j+1} e^{−3}).

39. Since E[Y|X = k] = k,

E[Y] = Σ_{k=1}^∞ E[Y|X = k]P(X = k) = Σ_{k=1}^∞ k P(X = k) = E[X] = 1/(1−p),

and

E[XY] = Σ_{k=1}^∞ E[XY|X = k]P(X = k) = Σ_{k=1}^∞ E[kY|X = k]P(X = k), by substitution,
      = Σ_{k=1}^∞ k E[Y|X = k]P(X = k) = Σ_{k=1}^∞ k^2 P(X = k) = E[X^2]
      = var(X) + (E[X])^2 = p/(1−p)^2 + 1/(1−p)^2 = (1 + p)/(1−p)^2.

Since E[Y^2|X = k] = k + k^2,

E[Y^2] = Σ_{k=1}^∞ E[Y^2|X = k]P(X = k) = Σ_{k=1}^∞ (k + k^2)P(X = k) = E[X] + E[X^2]
       = 1/(1−p) + (1 + p)/(1−p)^2 = 2/(1−p)^2.

Finally,

var(Y) = E[Y^2] − (E[Y])^2 = 2/(1−p)^2 − 1/(1−p)^2 = 1/(1−p)^2.

40. From the solution of the example, it is immediate that E[Y|X = 1] = λ and E[Y|X = 0] = λ/2. Next,

E[Y] = E[Y|X = 0](1−p) + E[Y|X = 1]p = (λ/2)(1−p) + λp.

Similarly,

E[Y^2] = E[Y^2|X = 0](1−p) + E[Y^2|X = 1]p = (λ/2 + λ^2/4)(1−p) + (λ + λ^2)p.

To conclude, we have var(Y) = E[Y^2] − (E[Y])^2.


41. Write

E[(X + 1)Y^2] = Σ_{i=0}^1 E[(X + 1)Y^2|X = i]P(X = i) = Σ_{i=0}^1 E[(i + 1)Y^2|X = i]P(X = i)
             = Σ_{i=0}^1 (i + 1)E[Y^2|X = i]P(X = i).

Since, given X = i, Y is Poisson with parameter λ = 3(i + 1),

E[Y^2|X = i] = λ + λ^2 = 3(i + 1) + 9(i + 1)^2.

Hence,

E[(X + 1)Y^2] = Σ_{i=0}^1 (i + 1)[3(i + 1) + 9(i + 1)^2]P(X = i) = Σ_{i=0}^1 (i + 1)^2[3 + 9(i + 1)]P(X = i)
             = 12(1/3) + 84(2/3) = 4 + 56 = 60.

42. Write

E[XY] = Σ_{n=0}^∞ E[XY|X = n]P(X = n) = Σ_{n=0}^∞ E[nY|X = n]P(X = n)
      = Σ_{n=0}^∞ n E[Y|X = n]P(X = n) = Σ_{n=0}^∞ n · 1/(n+1) · λ^n e^{−λ}/n!
      = E[X/(X + 1)] = E[(X + 1 − 1)/(X + 1)] = 1 − E[1/(X + 1)].

Hence,

E[XY] = 1 − (1 − e^{−λ})/λ.

43. Write

E[XY] = Σ_{n=1}^∞ E[XY|X = n]P(X = n) = Σ_{n=1}^∞ E[nY|X = n]P(X = n)
      = Σ_{n=1}^∞ n E[Y|X = n]P(X = n) = Σ_{n=1}^∞ n · n/(1−q) · P(X = n)
      = E[X^2]/(1−q) = [var(X) + (E[X])^2]/(1−q)
      = [p/(1−p)^2 + 1/(1−p)^2]/(1−q) = (1 + p)/((1−q)(1−p)^2).

44. Write

E[X^2] = Σ_{k=1}^∞ E[X^2|Y = k]P(Y = k) = Σ_{k=1}^∞ (k + k^2)P(Y = k) = E[Y + Y^2]
       = E[Y] + E[Y^2] = m + (r + m^2) = m + m^2 + r.

45. Since X and Y are independent binomial random variables,

GX+Y(z) = GX(z)GY(z) = [(1−p) + pz]^n [(1−p) + pz]^m = [(1−p) + pz]^{n+m}.

Also, by independence,

P(Y = 6|X = 4) = P(Y = 6) = C(m,6) p^6 (1−p)^{m−6}.

46. Write

GY(z) = E[z^Y] = Σ_{k=1}^∞ E[z^Y|X = k]P(X = k) = Σ_{k=1}^∞ e^{k(z−1)} P(X = k)
      = Σ_{k=1}^∞ (e^{z−1})^k P(X = k) = GX(e^{z−1}) = (1−p)e^{z−1}/(1 − p e^{z−1}).

CHAPTER 4

Problem Solutions

1. Let Vi denote the input voltage at the ith sampling time. The problem tells us that the Vi are independent and uniformly distributed on [0, 7]. The alarm sounds if Vi > 5 for i = 1, 2, 3. The probability of this is

P(∩_{i=1}^3 {Vi > 5}) = Π_{i=1}^3 P(Vi > 5).

Now, P(Vi > 5) = ∫_5^7 (1/7) dt = 2/7. Hence, the desired probability is (2/7)^3 = 8/343 = 0.0233.
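A Monte Carlo sanity check of Problem 1's answer (a sketch; the seed and trial count are arbitrary choices):

```python
# Estimate the probability that three independent uniform[0,7] samples
# all exceed 5, and compare with (2/7)^3 = 8/343.
import random

random.seed(1)
trials = 200_000
hits = sum(all(random.uniform(0, 7) > 5 for _ in range(3))
           for _ in range(trials))
est = hits / trials
assert abs(est - (2 / 7) ** 3) < 0.003
```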

2. We must solve ∫_t^∞ f(x) dx = 1/2 for t. Now,

∫_t^∞ 2/x^3 dx = −1/x^2|_t^∞ = 1/t^2.

Solving 1/t^2 = 1/2, we find that t = √2.

3. To find c, we solve ∫_0^1 c x^{−1/2} dx = 1. The left-hand side of this equation is 2c x^{1/2}|_0^1 = 2c. Solving 2c = 1 yields c = 1/2. For the median, we must solve ∫_t^1 (1/2)x^{−1/2} dx = 1/2, or x^{1/2}|_t^1 = 1/2. We find that t = 1/4.

4. (a) For t ≥ 0, P(X > t) = ∫_t^∞ λe^{−λx} dx = −e^{−λx}|_t^∞ = e^{−λt}.

(b) First, P(X > t + Δt|X > t) = P(X > t + Δt, X > t)/P(X > t). Next, observe that

{X > t + Δt} ∩ {X > t} = {X > t + Δt},

and so P(X > t + Δt|X > t) = P(X > t + Δt)/P(X > t) = e^{−λ(t+Δt)}/e^{−λt} = e^{−λΔt}.

5. Let Xi denote the voltage output by regulator i. Then the Xi are i.i.d. exp(λ) random variables. Now put

Y := Σ_{i=1}^{10} I_(v,∞)(Xi)

so that Y counts the number of regulators that output more than v volts. We must compute P(Y = 3). Now, the I_(v,∞)(Xi) are i.i.d. Bernoulli(p) random variables, where

p = P(Xi > v) = ∫_v^∞ λe^{−λx} dx = −e^{−λx}|_v^∞ = e^{−λv}.

Next, we know from the previous chapter that a sum of n i.i.d. Bernoulli(p) random variables is binomial(n, p). Thus,

P(Y = 3) = C(10,3) p^3 (1−p)^7 = 120 e^{−3λv}(1 − e^{−λv})^7.


6. First note that P(Xi > 2) = ∫_2^∞ e^{−x} dx = −e^{−x}|_2^∞ = e^{−2}.

(a) P(min(X1, ..., Xn) > 2) = P(∩_{i=1}^n {Xi > 2}) = Π_{i=1}^n P(Xi > 2) = e^{−2n}.

(b) Write

P(max(X1, ..., Xn) > 2) = 1 − P(max(X1, ..., Xn) ≤ 2) = 1 − P(∩_{i=1}^n {Xi ≤ 2}) = 1 − Π_{i=1}^n P(Xi ≤ 2) = 1 − [1 − e^{−2}]^n.

7. (a) P(Y ≤ 2) = ∫_0^2 μe^{−μy} dy = 1 − e^{−2μ}.

(b) P(X ≤ 12, Y ≤ 12) = P(X ≤ 12)P(Y ≤ 12) = (1 − e^{−12λ})(1 − e^{−12μ}).

(c) Write

P(min(X,Y) ≤ 12) = 1 − P(X > 12, Y > 12) = 1 − P(X > 12)P(Y > 12) = 1 − e^{−12λ}e^{−12μ} = 1 − e^{−12(λ+μ)}.

8. For the Weibull density λp x^{p−1}e^{−λx^p}, x > 0, make the change of variable y = λx^p, dy = λp x^{p−1} dx, to get

∫_0^∞ λp x^{p−1}e^{−λx^p} dx = ∫_0^∞ e^{−y} dy = −e^{−y}|_0^∞ = 1.

Similarly,

P(X > t) = ∫_t^∞ λp x^{p−1}e^{−λx^p} dx = ∫_{λt^p}^∞ e^{−y} dy = e^{−λt^p}.

Hence,

P(∩_{i=1}^n {Xi ≤ 3}) = Π_{i=1}^n P(Xi ≤ 3) = [1 − P(X1 > 3)]^n = [1 − e^{−λ3^p}]^n,

and

P(∪_{i=1}^n {Xi > 3}) = 1 − P(∩_{i=1}^n {Xi ≤ 3}) = 1 − [1 − e^{−λ3^p}]^n.

9. (a) Since f'(x) = −x e^{−x^2/2}/√(2π), f'(x) < 0 for x > 0 and f'(x) > 0 for x < 0.

(b) Since f''(x) = (x^2 − 1)e^{−x^2/2}/√(2π), we see that f''(x) > 0 for |x| > 1 and f''(x) < 0 for |x| < 1.

(c) Rearrange e^{x^2/2} ≥ x^2/2 to get e^{−x^2/2} ≤ 2/x^2 → 0 as |x| → ∞.


10. Following the hint, write f(x) = φ((x − m)/σ)/σ, where φ is the standard normal density. Observe that f'(x) = φ'((x − m)/σ)/σ^2 and f''(x) = φ''((x − m)/σ)/σ^3.

(a) Since the argument of φ' is positive for x > m and negative for x < m, f(x) is decreasing for x > m and increasing for x < m. Hence, f has a global maximum at x = m.

(b) Since the absolute value of the argument of φ'' is greater than one if and only if |x − m| > σ, f(x) is concave for |x − m| < σ and convex for |x − m| > σ.

11. Since φ is bounded, lim_{σ→∞} φ((x − m)/σ)/σ = 0. Hence, lim_{σ→∞} f(x) = 0. For x ≠ m, we have

f(x) = exp[−((x − m)/σ)^2/2]/(σ√(2π)) ≤ (1/(σ√(2π))) · 2/((x − m)/σ)^2 = 2σ/(√(2π)(x − m)^2) → 0

as σ → 0. Otherwise, since f(m) = [σ√(2π)]^{−1}, lim_{σ→0} f(m) = ∞.

12. (a) f(x) = Σ_n p_n f_n(x) is obviously nonnegative. Also,

∫ f(x) dx = ∫ Σ_n p_n f_n(x) dx = Σ_n p_n ∫ f_n(x) dx = Σ_n p_n · 1 = Σ_n p_n = 1.

(b), (c) The sketches of the two mixture densities are omitted here.

13. Clearly, (g ∗ h)(x) = ∫ g(y)h(x − y) dy ≥ 0 since g and h are nonnegative. Next,

∫ (g ∗ h)(x) dx = ∫ [∫ g(y)h(x − y) dy] dx = ∫ g(y)[∫ h(x − y) dx] dy = ∫ g(y)[∫ h(θ) dθ] dy = ∫ g(y) dy = 1,

since the inner integral ∫ h(θ) dθ = 1.

14. (a) Let p > 1. In Γ(p) = ∫_0^∞ x^{p−1}e^{−x} dx, use integration by parts with u = x^{p−1} and dv = e^{−x} dx. Then du = (p−1)x^{p−2} dx, v = −e^{−x}, and

Γ(p) = −x^{p−1}e^{−x}|_0^∞ + (p−1) ∫_0^∞ x^{(p−1)−1}e^{−x} dx = (p−1)Γ(p−1),

since the boundary term vanishes.


(b) In Γ(1/2) = ∫_0^∞ x^{−1/2}e^{−x} dx, make the change of variable x = y^2/2, i.e., y = √(2x). Then dx = y dy and x^{−1/2} = √2/y. Hence,

Γ(1/2) = ∫_0^∞ (√2/y) e^{−y^2/2} y dy = √2 ∫_0^∞ e^{−y^2/2} dy = √2 · √(2π) ∫_0^∞ e^{−y^2/2}/√(2π) dy = √2 · √(2π) · (1/2) = √π.

(c) By repeatedly using the recursion formula in part (a), we have

Γ((2n+1)/2) = ((2n−1)/2) Γ((2n−1)/2) = ((2n−1)/2)((2n−3)/2) Γ((2n−3)/2)
            ⋮
            = ((2n−1)/2)((2n−3)/2) ··· (5/2)(3/2)(1/2) Γ(1/2)
            = [(2n−1)(2n−3) ··· 5·3·1/2^n] √π
            = (2n−1)!! √π/2^n.
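The half-integer gamma identity of part (c) is easy to confirm numerically; a minimal sketch:

```python
# Verify Gamma((2n+1)/2) = (2n-1)!! * sqrt(pi) / 2^n for small n.
import math

def double_fact(m):
    """Double factorial m!!, with the convention (-1)!! = 1."""
    return 1 if m <= 0 else m * double_fact(m - 2)

for n in range(6):
    lhs = math.gamma((2 * n + 1) / 2)
    rhs = double_fact(2 * n - 1) * math.sqrt(math.pi) / 2**n
    assert abs(lhs - rhs) < 1e-9 * rhs
```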

(d) First note that g_p(y) = 0 for y ≤ 0, and similarly for g_q(y). Hence, in order to have g_p(y)g_q(x − y) > 0, we need y > 0 and x − y > 0, or equivalently, x > y > 0. Of course, if x ≤ 0 this does not happen. Thus, (g_p ∗ g_q)(x) = 0 for x ≤ 0. For x > 0, we follow the hint and write

(g_p ∗ g_q)(x) = ∫ g_p(y)g_q(x − y) dy = ∫_0^x g_p(y)g_q(x − y) dy
              = (1/(Γ(p)Γ(q))) ∫_0^x y^{p−1}e^{−y}(x − y)^{q−1}e^{−(x−y)} dy
              = (x^{q−1}e^{−x}/(Γ(p)Γ(q))) ∫_0^x y^{p−1}(1 − y/x)^{q−1} dy
              = (x^q e^{−x}/(Γ(p)Γ(q))) ∫_0^1 (xθ)^{p−1}(1 − θ)^{q−1} dθ, ch. of var. θ = y/x,
              = (x^{p+q−1}e^{−x}/(Γ(p)Γ(q))) ∫_0^1 θ^{p−1}(1 − θ)^{q−1} dθ.   (∗)

By Problem 13, the left-hand side is a probability density and so integrates to one. On the right-hand side, note that ∫_0^∞ x^{p+q−1}e^{−x} dx = Γ(p+q). Hence, integrating the above equation with respect to x from zero to infinity yields

1 = (Γ(p+q)/(Γ(p)Γ(q))) ∫_0^1 θ^{p−1}(1 − θ)^{q−1} dθ.

Solving for the above integral and substituting the result into (∗), we find that (g_p ∗ g_q)(x) = g_{p+q}(x).


15. (a) In ∫ λf(λx) dx, make the change of variable y = λx, dy = λ dx, to get

∫ λf(λx) dx = ∫ f(y) dy = 1.

(b) With m = 1,

g_{1,λ}(x) = λ(λx)^0 e^{−λx}/0! = λe^{−λx},

which we recognize as the exp(λ) density.

(c) The desired probability is

P_m(t) := ∫_t^∞ λ(λx)^{m−1}e^{−λx}/(m−1)! dx.

Note that P_1(t) = ∫_t^∞ λe^{−λx} dx = e^{−λt}. For m > 1, apply integration by parts with u = (λx)^{m−1}/(m−1)! and dv = λe^{−λx} dx. Then

P_m(t) = (λt)^{m−1}e^{−λt}/(m−1)! + P_{m−1}(t).

Iterating down to m = 1 yields

P_m(t) = (λt)^{m−1}e^{−λt}/(m−1)! + (λt)^{m−2}e^{−λt}/(m−2)! + ··· + e^{−λt}.

(d) We have

g_{(2m+1)/2, 1/2}(x) = (1/2)(x/2)^{(2m−1)/2}e^{−x/2}/Γ((2m+1)/2) = x^{m−1/2}e^{−x/2}/((2m−1) ··· 5·3·1·√(2π)),

using Γ((2m+1)/2) = (2m−1)!!√π/2^m from part (c) of Problem 14.

16. (a) We see that b_{1,1}(x) = 1 is the uniform(0, 1) density, b_{2,2}(x) = 6x(1 − x), and b_{1/2,1}(x) = 1/(2√x). (The plot of the three densities on [0, 1] is omitted here.)


(b) From Problem 14(d),

g_{p+q}(x) = (g_p ∗ g_q)(x) = (x^{p+q−1}e^{−x}/(Γ(p)Γ(q))) ∫_0^1 θ^{p−1}(1 − θ)^{q−1} dθ.

Integrating the left- and right-hand sides with respect to x from zero to infinity yields

1 = (Γ(p+q)/(Γ(p)Γ(q))) ∫_0^1 θ^{p−1}(1 − θ)^{q−1} dθ,

which says that the beta density integrates to one.

17. Starting with

Γ(p)Γ(q) = Γ(p+q) ∫_0^1 u^{p−1}(1 − u)^{q−1} du,

make the change of variable u = sin^2 θ, du = 2 sin θ cos θ dθ. We obtain

Γ(p)Γ(q) = Γ(p+q) ∫_0^{π/2} (sin^2 θ)^{p−1}(1 − sin^2 θ)^{q−1} 2 sin θ cos θ dθ = 2Γ(p+q) ∫_0^{π/2} (sin θ)^{2p−1}(cos θ)^{2q−1} dθ.

Taking p = q = 1/2 yields

Γ(1/2)^2 = 2 ∫_0^{π/2} 1 dθ = π,

and it follows that Γ(1/2) = √π.

18. Starting with

Γ(p)Γ(q) = Γ(p+q) ∫_0^1 u^{p−1}(1 − u)^{q−1} du,

make the change of variable u = sin^2 θ, du = 2 sin θ cos θ dθ. We obtain

Γ(p)Γ(q)/Γ(p+q) = ∫_0^1 u^{p−1}(1 − u)^{q−1} du = ∫_0^{π/2} (sin^2 θ)^{p−1}(1 − sin^2 θ)^{q−1} 2 sin θ cos θ dθ
               = 2 ∫_0^{π/2} (sin θ)^{2p−1}(cos θ)^{2q−1} dθ.

Taking p = (n+1)/2 and q = 1/2 gives

Γ((n+1)/2)Γ(1/2)/Γ((n+2)/2) = 2 ∫_0^{π/2} sin^n θ dθ,

and the desired result follows.


19. Starting with the integral definition of B(p, q), make the change of variable u = 1 − e^{−θ}, which implies both du = e^{−θ} dθ and 1 − u = e^{−θ}. Hence,

B(p, q) = ∫_0^1 u^{p−1}(1 − u)^{q−1} du = ∫_0^∞ (1 − e^{−θ})^{p−1}(e^{−θ})^{q−1} e^{−θ} dθ = ∫_0^∞ (1 − e^{−θ})^{p−1} e^{−qθ} dθ.

20. We first use the fact that the density is even and then make the change of variable e^θ = 1 + x^2/ν, which implies both e^θ dθ = (2x/ν) dx and x = √(ν(e^θ − 1)). Thus,

∫_{−∞}^∞ (1 + x^2/ν)^{−(ν+1)/2} dx = 2 ∫_0^∞ (1 + x^2/ν)^{−(ν+1)/2} dx
   = 2 ∫_0^∞ (e^θ)^{−(ν+1)/2} · (ν/(2√(ν(e^θ − 1)))) e^θ dθ
   = √ν ∫_0^∞ (e^θ)^{(1−ν)/2}/√(e^θ − 1) dθ
   = √ν ∫_0^∞ (1 − e^{−θ})^{−1/2} e^{−νθ/2} dθ
   = √ν ∫_0^∞ (1 − e^{−θ})^{1/2−1} e^{−(ν/2)θ} dθ.

By the preceding problem, this is equal to √ν B(1/2, ν/2), and we see that Student's t density integrates to one.

21. (a) Using Stirling's formula, Γ(x) ≈ √(2π) x^{x−1/2} e^{−x},

Γ((1+ν)/2)/Γ(ν/2) ≈ [((1+ν)/2)^{(1+ν)/2 − 1/2} e^{−(1+ν)/2}]/[(ν/2)^{ν/2 − 1/2} e^{−ν/2}] = (ν/2)^{1/2}(1 + 1/ν)^{ν/2} e^{−1/2}.

Hence,

Γ((1+ν)/2)/(√ν Γ(ν/2)) ≈ (1/√2)[(1 + 1/ν)^ν]^{1/2} e^{−1/2} → (1/√2) e^{1/2} e^{−1/2} = 1/√2.

(b) First write

(1 + x^2/ν)^{−(ν+1)/2} = [(1 + x^2/ν)^ν]^{−1/2}(1 + x^2/ν)^{−1/2} → [e^{x^2}]^{−1/2} · 1 = e^{−x^2/2}.

It then follows that

f(x) = (1 + x^2/ν)^{−(ν+1)/2}/(√ν B(1/2, ν/2)) = [Γ((1+ν)/2)/(√ν Γ(ν/2))] · (1 + x^2/ν)^{−(ν+1)/2}/Γ(1/2) → (1/√2) · e^{−x^2/2}/√π = e^{−x^2/2}/√(2π),

the standard normal density.


22. Making the change of variable t = 1/(1+z) as suggested in the hint, note that it is equivalent to 1 + z = t^{−1}, which implies dz = −t^{−2} dt. Thus,

∫_0^∞ z^{p−1}/(1+z)^{p+q} dz = ∫_0^1 ((1−t)/t)^{p−1} t^{p+q} t^{−2} dt = ∫_0^1 (1−t)^{p−1} t^{q−1} dt = B(q, p) = B(p, q).

23. E[X] = ∫_1^∞ x · 2/x^3 dx = ∫_1^∞ 2/x^2 dx = −2/x|_1^∞ = 2.

24. If the input-output relation has n levels, then the distance from −Vmax to +Vmax should be nΔ; i.e., nΔ = 2Vmax, or Δ = 2Vmax/n. Next, we have from the example in the text that the performance is Δ^2/12, and we need Δ^2/12 < ε, or

(1/12)(2Vmax/n)^2 < ε.

Solving this for n yields Vmax/√(3ε) < n = 2^b. Taking logarithms, we have

b > ln(Vmax/√(3ε))/ln 2.

25. Write

E[Z] = ∫ z f_Z(z) dz = ∫ z f(z − m) dz = ∫ (x + m) f(x) dx = ∫ x f(x) dx + m ∫ f(x) dx = E[X] + m = 0 + m = m.

26. E[X^2] = ∫_1^∞ x^2 · 2/x^3 dx = ∫_1^∞ 2/x dx = 2 ln x|_1^∞ = 2(∞ − 0) = ∞.

27. First note that since Student's t density is even, E[|X|^k] = ∫ |x|^k f(x) dx is proportional to

∫_0^∞ x^k (1 + x^2/ν)^{−(ν+1)/2} dx = ∫_0^1 x^k (1 + x^2/ν)^{−(ν+1)/2} dx + ∫_1^∞ x^k (1 + x^2/ν)^{−(ν+1)/2} dx.

With regard to this last integral, observe that

∫_1^∞ x^k (1 + x^2/ν)^{−(ν+1)/2} dx ≤ ∫_1^∞ x^k (x^2/ν)^{−(ν+1)/2} dx = ν^{(ν+1)/2} ∫_1^∞ dx/x^{ν+1−k},

which is finite if ν + 1 − k > 1, or k < ν. Next, instead of breaking the range of integration at one, we break it at the solution of x^2/ν = 1, or x = √ν. Then

∫_0^∞ x^k (1 + x^2/ν)^{−(ν+1)/2} dx ≥ ∫_{√ν}^∞ x^k (x^2/ν + x^2/ν)^{−(ν+1)/2} dx = (2/ν)^{−(ν+1)/2} ∫_{√ν}^∞ dx/x^{ν+1−k},

which is infinite if ν + 1 − k ≤ 1, or k ≥ ν.


28. Begin with E[Y^4] = E[(Z + n)^4] = E[Z^4 + 4Z^3 n + 6Z^2 n^2 + 4Zn^3 + n^4]. The moments of the standard normal were computed in an example in this chapter. Hence E[Y^4] = 3 + 4·0·n + 6·1·n^2 + 4·0·n^3 + n^4 = 3 + 6n^2 + n^4.

29. E[X^n] = ∫_0^∞ x^n x^{p−1}e^{−x}/Γ(p) dx = (1/Γ(p)) ∫_0^∞ x^{(n+p)−1}e^{−x} dx = Γ(n+p)/Γ(p).

30. (a) First write

E[X] = ∫_0^∞ x · x e^{−x^2/2} dx = ∫_0^∞ x^2 e^{−x^2/2} dx = (√(2π)/2) ∫_{−∞}^∞ x^2 e^{−x^2/2}/√(2π) dx,

where the last integral is the second moment of a standard normal density, which is one. Hence, E[X] = √(2π)/2 = √(π/2).

(b) For higher-order moments, first write

E[X^n] = ∫_0^∞ x^n · x e^{−x^2/2} dx = ∫_0^∞ x^{n+1} e^{−x^2/2} dx.

Now make the change of variable t = x^2/2, which implies x = √(2t), or dx = dt/√(2t). Hence,

E[X^n] = ∫_0^∞ [(2t)^{1/2}]^{n+1} e^{−t} dt/√(2t) = 2^{n/2} ∫_0^∞ t^{[(n/2)+1]−1} e^{−t} dt = 2^{n/2} Γ(1 + n/2).

31. Let Xi denote the flow on link i, and put Yi := I_(τ,∞)(Xi) so that Yi = 1 if the flow on link i is greater than τ. Put Z := Σ_{i=1}^n Yi so that Z counts the number of links with flows greater than τ. The buffer overflows if Z > 2. Since the Xi are i.i.d., so are the Yi. Furthermore, the Yi are Bernoulli(p), where p = P(Xi > τ). Hence, Z ~ binomial(n, p). Thus,

P(Z > 2) = 1 − P(Z ≤ 2)
         = 1 − [C(n,0)(1−p)^n + C(n,1)p(1−p)^{n−1} + C(n,2)p^2(1−p)^{n−2}]
         = 1 − (1−p)^{n−2}[(1−p)^2 + np(1−p) + (1/2)n(n−1)p^2].

It remains to compute

p = P(Xi > τ) = ∫_τ^∞ x e^{−x^2/2} dx = −e^{−x^2/2}|_τ^∞ = e^{−τ^2/2}.

32. The key is to use the change of variable θ = λx^p, which implies both dθ = λp x^{p−1} dx and x = (θ/λ)^{1/p}. Hence,

E[X^n] = ∫_0^∞ x^n λp x^{p−1}e^{−λx^p} dx = ∫_0^∞ [(θ/λ)^{1/p}]^n e^{−θ} dθ = (1/λ)^{n/p} ∫_0^∞ θ^{[(n/p)+1]−1} e^{−θ} dθ = Γ(1 + n/p) λ^{−n/p}.
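The Weibull moment formula of Problem 32 can be spot-checked by crude numerical integration; a sketch (λ = 2 and p = 1.5 are arbitrary test values, and the step count and cutoff are ad hoc):

```python
# Compare a numerically integrated Weibull moment against Gamma(1+n/p)/lam^(n/p).
import math

lam, p = 2.0, 1.5  # test values

def moment(n, steps=200_000, hi=20.0):
    """Rectangle-rule approximation of E[X^n] for the Weibull(lam, p) density."""
    h = hi / steps
    s = 0.0
    for i in range(1, steps):
        x = i * h
        s += x**n * lam * p * x ** (p - 1) * math.exp(-lam * x**p)
    return s * h

for n in range(1, 4):
    exact = math.gamma(1 + n / p) / lam ** (n / p)
    assert abs(moment(n) - exact) < 1e-3
```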


33. Write

E[Y] = E[(X^{1/4})^2] = E[X^{1/2}] = ∫_0^∞ x^{1/2}e^{−x} dx = ∫_0^∞ x^{3/2−1}e^{−x} dx = Γ(3/2) = (1/2)Γ(1/2) = √π/2.

34. We have

P(∪_{i=1}^n {Xi < β/2}) = 1 − P(∩_{i=1}^n {Xi ≥ β/2}) = 1 − Π_{i=1}^n P(Xi ≥ β/2)
   = 1 − [∫_{β/2}^∞ λe^{−λx} dx]^n, with λ := 1/β,
   = 1 − (e^{−λβ/2})^n = 1 − e^{−n/2}.

35. Similarly,

P(∪_{i=1}^5 {Xi > 25}) = 1 − P(∩_{i=1}^5 {Xi ≤ 25}) = 1 − Π_{i=1}^5 P(Xi ≤ 25) = 1 − [∫_0^{25} λe^{−λx} dx]^5 = 1 − (1 − e^{−25λ})^5.

36. For X ~ uniform[0, 2] and X ~ uniform[0, 1/2], respectively,

h(X) = ∫_0^2 (1/2) log 2 dx = log 2 and h(X) = ∫_0^{1/2} 2 log(1/2) dx = log(1/2).

For the third calculation, note that ln f(x) = −(1/2)[(x − m)/σ]^2 − (1/2) ln 2πσ^2. Then

h(X) = ∫ f(x){(1/2)[(x − m)/σ]^2 + (1/2) ln 2πσ^2} dx = (1/2)E[((X − m)/σ)^2] + (1/2) ln 2πσ^2 = (1/2){1 + ln 2πσ^2} = (1/2) ln 2πσ^2 e.

2

Z

x2n (1 + x2 / )( +1)/2 dx.

Chapter 4 Problem Solutions 55

First use the fact that the integrand is even and then makep the change of variable

e = 1 + x2 / , which implies both e d = 2x/ dx and x = (e 1). Thus,

Z Z

2x

x2n (1 + x2 / )( +1)/2 dx = x2n1 (1 + x2 / )( +1)/2 dx

0

Z p

= ( (e 1) )2n1 (e )( +1)/2 e d

0

Z

= n+1/2 (e 1)n1/2 e ( +1)/2 e d

0

Z

= n+1/2 (1 e )n1/2 e ( 2n)/2 d

0

Z

= n+1/2 (1 e )(n+1/2)1 e ( 2n)/2 d

0

= n+1/2 B n + 1/2, ( 2n)/2 , by Problem 19.

Hence,

1

E[X 2n ] = n+1/2 B n + 1/2, ( 2n)/2

B( 21 , 2 )

2n

( 2n+1

2 )( 2 ) ( +1

2 )

2n+1

n ( 2 )( 2 )

2n

= n+1/2 = .

( +1

2 ) ( 12 )( 2 ) ( 12 )( 2 )

2 s2 /2

38. From MX (s) = e , we have MX0 (s) = MX (s) 2 s and then

2

39. Let M(s) := es /2 denote the moment generating function of the standard normal ran-

dom variable. For the N(m, 2 ) moment generating function, we use the change of

variable y = (x m)/ , dy = dx/ to write

Z Z 2 Z 2

exp[ 21 ( xm )2 ] ey /2 ey /2

esx dx = es( y+m) dy = esm es y dy

2 2 2

2 s2 /2

= esm M(s ) = esm+ .

40. E[e^{sY}] = E[e^{s ln(1/X)}] = E[e^{−s ln X}] = E[X^{−s}] = ∫_0^1 x^{−s} dx = x^{1−s}/(1−s) |_0^1 = 1/(1−s).

41. First note that

    |x| = x for x ≥ 0, and |x| = −x for x < 0.

Then the Laplace(λ) mgf is

    E[e^{sX}] = ∫_{−∞}^∞ e^{sx} (λ/2)e^{−λ|x|} dx
    = (λ/2)[ ∫_0^∞ e^{sx}e^{−λx} dx + ∫_{−∞}^0 e^{sx}e^{λx} dx ]
    = (λ/2)[ ∫_0^∞ e^{−x(λ−s)} dx + ∫_{−∞}^0 e^{x(λ+s)} dx ].

Of these last two integrals, the one on the left is finite if λ > Re s, while the second is finite if Re s > −λ. For both of them to be finite, we need −λ < Re s < λ. For such s both integrals are easy to evaluate. We get

    M_X(s) := E[e^{sX}] = (λ/2)[ 1/(λ−s) + 1/(λ+s) ] = (λ/2) · 2λ/(λ²−s²) = λ²/(λ²−s²).

Now, M_X′(s) = 2sλ²/(λ²−s²)², and so the mean is M_X′(0) = 0. We continue with

    M_X″(s) = 2λ² [ (λ²−s²)² + 4s²(λ²−s²) ]/(λ²−s²)⁴.

Hence, the second moment is M_X″(0) = 2/λ². Since the mean is zero, the second moment is also the variance.
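A quick numerical check of this variance (not part of the original solution; λ = 1.5 is an arbitrary illustrative choice): integrating x²·(λ/2)e^{−λ|x|} should reproduce 2/λ².

```python
import math

def laplace_second_moment_numeric(lam, steps=100000, xmax=40.0):
    # Midpoint-rule integration of x^2 * (lam/2) * exp(-lam*|x|) over (-xmax, xmax);
    # the tail beyond xmax is negligible for the lam used here.
    h = 2 * xmax / steps
    total = 0.0
    for i in range(steps):
        x = -xmax + (i + 0.5) * h
        total += x * x * (lam / 2) * math.exp(-lam * abs(x)) * h
    return total

lam = 1.5
numeric = laplace_second_moment_numeric(lam)
exact = 2 / lam ** 2   # M''(0) = 2/lam^2, as derived above
```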

42. Since X is a nonnegative random variable, for s ≤ 0 we have sX ≤ 0 and e^{sX} ≤ 1. Hence, for s ≤ 0, M_X(s) = E[e^{sX}] ≤ E[1] = 1 < ∞. For s > 0, we show that M_X(s) = ∞. We use the fact that for z > 0,

    e^z = Σ_{n=0}^∞ z^n/n! ≥ z³/3!.

Then for s > 0, sX > 0, and we can write

    M_X(s) = E[e^{sX}] ≥ E[(sX)³/3!] = (s³/3!) E[X³] = (s³/3!) ∫_1^∞ x³ (2/x³) dx = ∞.

43. We apply integration by parts with u = x^{p−1}/Γ(p) and dv = e^{−x(1−s)} dx. Then du = x^{p−2}/Γ(p−1) dx and v = −e^{−x(1−s)}/(1−s). Hence,

    M_p(s) = ∫_0^∞ e^{sx} x^{p−1}e^{−x}/Γ(p) dx = ∫_0^∞ [x^{p−1}/Γ(p)] e^{−x(1−s)} dx
           = −x^{p−1}e^{−x(1−s)}/[Γ(p)(1−s)] |_0^∞ + [1/(1−s)] ∫_0^∞ [x^{p−2}/Γ(p−1)] e^{−x(1−s)} dx.

The last term is M_{p−1}(s)/(1−s). The other term is zero if p > 1 and Re s < 1.

44. (a) In this case, we use the change of variable t = x(1−s), which implies x = t/(1−s) and dx = dt/(1−s). Hence,

    M_p(s) = ∫_0^∞ e^{sx} x^{p−1}e^{−x}/Γ(p) dx = [1/Γ(p)] ∫_0^∞ x^{p−1} e^{−x(1−s)} dx
           = [1/Γ(p)] ∫_0^∞ [t/(1−s)]^{p−1} e^{−t} dt/(1−s)
           = [1/(1−s)]^p · [1/Γ(p)] ∫_0^∞ t^{p−1}e^{−t} dt = [1/(1−s)]^p,

since the last integral equals Γ(p).


(b) From M_X(s) = (1−s)^{−p}, we find M_X′(s) = p(1−s)^{−p−1}, M_X″(s) = p(p+1)(1−s)^{−p−2}, and so on. The general result is

    M_X^{(n)}(s) = p(p+1)⋯(p+[n−1])(1−s)^{−p−n} = [Γ(n+p)/Γ(p)](1−s)^{−p−n}.

Hence, the Taylor series is

    M_X(s) = Σ_{n=0}^∞ (s^n/n!) M_X^{(n)}(0) = Σ_{n=0}^∞ (s^n/n!) Γ(n+p)/Γ(p).

45. (a) Write

    E[e^{sX}] = ∫_0^∞ e^{sx} λ(λx)^{p−1}e^{−λx}/Γ(p) dx = ∫_0^∞ e^{(s/λ)t} t^{p−1}e^{−t}/Γ(p) dt,

which is the moment generating function of g_p evaluated at s/λ. Hence,

    E[e^{sX}] = [1/(1−s/λ)]^p = [λ/(λ−s)]^p,

and the characteristic function is

    E[e^{jνX}] = [1/(1−jν/λ)]^p = [λ/(λ−jν)]^p.

(b) The Erlang(m, λ) mgf is [λ/(λ−s)]^m, and the chf is [λ/(λ−jν)]^m.

(c) The chi-squared with k degrees of freedom mgf is [1/(1−2s)]^{k/2}, and the chf is [1/(1−2jν)]^{k/2}.

46. First write

    M_Y(s) = E[e^{sY}] = E[e^{sX²}] = ∫_{−∞}^∞ e^{sx²} e^{−x²/2}/√(2π) dx = ∫_{−∞}^∞ e^{−x²(1−2s)/2}/√(2π) dx.

If we let (1−2s) = 1/σ²; i.e., σ = 1/√(1−2s), then

    M_Y(s) = σ ∫_{−∞}^∞ e^{−(x/σ)²/2}/(√(2π)σ) dx = σ = 1/√(1−2s).

For X ∼ N(m, 1) and Y = X², first complete the square:

    e^{sx²}e^{−(x−m)²/2} = e^{−(x²−2xm+m²−2sx²)/2} = e^{−[x²(1−2s)−2xm]/2} e^{−m²/2}
    = e^{−(1−2s){x²−2xm/(1−2s)+[m/(1−2s)]²−[m/(1−2s)]²}/2} e^{−m²/2}
    = e^{−(1−2s){x−[m/(1−2s)]}²/2} e^{m²/[2(1−2s)]} e^{−m²/2}
    = e^{−(1−2s){x−[m/(1−2s)]}²/2} e^{sm²/(1−2s)}.

If we now let 1−2s = 1/σ², or σ = 1/√(1−2s), and μ = m/(1−2s), then

    E[e^{sY}] = E[e^{sX²}] = ∫_{−∞}^∞ e^{sx²} e^{−(x−m)²/2}/√(2π) dx
    = e^{sm²/(1−2s)} ∫_{−∞}^∞ e^{−[(x−μ)/σ]²/2}/√(2π) dx
    = e^{sm²/(1−2s)} σ = e^{sm²/(1−2s)}/√(1−2s).

49. The key observation is that |ν| = ν for ν ≥ 0 and |ν| = −ν for ν < 0. Then

    f_X(x) = (1/2π) ∫_{−∞}^∞ e^{−λ|ν|} e^{−jνx} dν
    = (1/2π)[ ∫_0^∞ e^{−λν}e^{−jνx} dν + ∫_{−∞}^0 e^{λν}e^{−jνx} dν ]
    = (1/2π)[ ∫_0^∞ e^{−ν(λ+jx)} dν + ∫_{−∞}^0 e^{ν(λ−jx)} dν ]
    = (1/2π)[ 1/(λ+jx) + 1/(λ−jx) ]
    = (1/2π) · 2λ/(λ²+x²) = (λ/π)/(λ²+x²).

50. (a) We have

    d/dx [e^{−x²/2}/√(2π)] = −x e^{−x²/2}/√(2π) = −x f(x).

(b) φ_X′(ν) = d/dν ∫ e^{jνx} f(x) dx = ∫ jx e^{jνx} f(x) dx = −j ∫ e^{jνx} f′(x) dx.

(c) In this last integral, let u = e^{jνx} and dv = f′(x) dx. Then du = jν e^{jνx} dx, v = f(x), and the last integral is equal to

    e^{jνx}f(x) |_{−∞}^∞ − jν ∫ e^{jνx} f(x) dx = −jν φ_X(ν),

since the boundary term is zero. Combining this with (b) gives φ_X′(ν) = −ν φ_X(ν).

(e) If K(ν) := φ_X(ν)e^{ν²/2}, then

    K′(ν) = φ_X′(ν)e^{ν²/2} + φ_X(ν) ν e^{ν²/2} = −νφ_X(ν)e^{ν²/2} + νφ_X(ν)e^{ν²/2} = 0.

(f) By the mean-value theorem of calculus, for every ν, there is a ν₀ between 0 and ν such that K(ν) − K(0) = K′(ν₀)(ν − 0). Since the derivative is zero, we have K(ν) = K(0) = φ_X(0) = 1. It then follows that φ_X(ν) = e^{−ν²/2}.

51. Following the hints, we first write

    d/dx [xg_p(x)] = d/dx [x^p e^{−x}/Γ(p)] = [px^{p−1}e^{−x} − x^p e^{−x}]/Γ(p) = pg_p(x) − xg_p(x) = (p−x)g_p(x).

In

    φ_X′(ν) = d/dν ∫_0^∞ e^{jνx} g_p(x) dx = j ∫_0^∞ e^{jνx} xg_p(x) dx,

apply integration by parts with u = xg_p(x) and dv = e^{jνx} dx. Then du is given above, v = e^{jνx}/(jν), and

    φ_X′(ν) = j[ xg_p(x)e^{jνx}/(jν) |_0^∞ − (1/(jν)) ∫_0^∞ e^{jνx}(p−x)g_p(x) dx ],

where the boundary term is zero. Hence,

    φ_X′(ν) = −(1/ν)[ p ∫_0^∞ e^{jνx}g_p(x) dx − ∫_0^∞ e^{jνx} xg_p(x) dx ]
            = −(p/ν)φ_X(ν) + (1/(jν))φ_X′(ν).

Rearrange this to get

    φ_X′(ν)(1 − 1/(jν)) = −(p/ν)φ_X(ν),
    φ_X′(ν)(1 − jν) = jp φ_X(ν).

If K(ν) := φ_X(ν)(1−jν)^p, then

    K′(ν) = φ_X′(ν)(1−jν)^p + φ_X(ν)p(1−jν)^{p−1}(−j) = (1−jν)^{p−1}[φ_X′(ν)(1−jν) − jpφ_X(ν)] = 0.

By the mean-value theorem, for every ν there is a ν₀ between 0 and ν such that K(ν) − K(0) = K′(ν₀)(ν − 0). Since the derivative is zero, we have K(ν) = K(0) = φ_X(0) = 1. It then follows that φ_X(ν) = 1/(1−jν)^p.

52. We use the formula cov(X, Z) = E[XZ] − E[X]E[Z]. The mean of an exp(λ) random variable is 1/λ. Hence, E[X] = 1. Since Z := X + Y, E[Z] = E[X] + E[Y]. Since the Laplace random variable has zero mean, E[Y] = 0. Hence, E[Z] = E[X] = 1. Next, E[XZ] = E[X(X+Y)] = E[X²] + E[XY] = E[X²] + E[X]E[Y] by independence. Since E[Y] = 0, E[XZ] = E[X²] = var(X) + (E[X])² = 1 + 1² = 2, where we have used the fact that the variance of an exp(λ) random variable is 1/λ². We can now write cov(X, Z) = 2 − 1 = 1. Since Z is the sum of independent, and therefore uncorrelated, random variables, var(Z) = var(X+Y) = var(X) + var(Y) = 1 + 2 = 3, where we have used the fact that the variance of a Laplace(λ) random variable is 2/λ².


54. Write

    M_Z(s) = λ/(λ−s) · λ/(λ−(−s)) = λ/(λ−s) · λ/(λ+s) = λ²/(λ²−s²),

which is the Laplace(λ) mgf.

55. Because the X_i are independent, we can write

    M_{Y_n}(s) := E[e^{sY_n}] = E[e^{s(X_1+⋯+X_n)}] = E[ ∏_{i=1}^n e^{sX_i} ] = ∏_{i=1}^n E[e^{sX_i}] = ∏_{i=1}^n M_{X_i}(s).   (∗)

(a) For X_i ∼ N(m_i, σ_i²), M_{X_i}(s) = e^{sm_i + σ_i²s²/2}. Hence,

    M_{Y_n}(s) = ∏_{i=1}^n e^{sm_i+σ_i²s²/2} = exp[ s Σ_i m_i + (Σ_i σ_i²)s²/2 ] = e^{sm+σ²s²/2},

the N(m, σ²) mgf, provided we put m := m_1+⋯+m_n and σ² := σ_1²+⋯+σ_n².

(b) For Cauchy random variables, we must observe that the moment generating function exists only for s = jν. Equivalently, we must use characteristic functions. In this case, (∗) becomes

    φ_{Y_n}(ν) := E[e^{jνY_n}] = ∏_{i=1}^n φ_{X_i}(ν).

With φ_{X_i}(ν) = e^{−λ_i|ν|},

    φ_{Y_n}(ν) = ∏_{i=1}^n e^{−λ_i|ν|} = exp[ −(Σ_i λ_i)|ν| ] = e^{−λ|ν|},

the Cauchy(λ) chf, provided we put λ := λ_1+⋯+λ_n.

(c) For X_i ∼ gamma(p_i, λ), the mgf is M_{X_i}(s) = [λ/(λ−s)]^{p_i}. Hence,

    M_{Y_n}(s) = ∏_{i=1}^n [λ/(λ−s)]^{p_i} = [λ/(λ−s)]^{p_1+⋯+p_n} = [λ/(λ−s)]^p,

the gamma(p, λ) mgf, provided we put p := p_1+⋯+p_n.
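Part (c) can also be checked by simulation (not part of the original solution; the shape parameters and rate below are illustrative only): a sum of independent gamma(p_i, λ) draws should have the gamma(p, λ) mean p/λ.

```python
import random

random.seed(0)
lam = 2.0
ps = [0.5, 1.5, 3.0]          # illustrative shape parameters p_i
p = sum(ps)                   # shape of the sum, per part (c)
N = 100000
acc = 0.0
for _ in range(N):
    # random.gammavariate takes (shape, scale); the scale is 1/lam
    acc += sum(random.gammavariate(pi, 1 / lam) for pi in ps)
sample_mean = acc / N
expected_mean = p / lam       # gamma(p, lam) has mean p/lam
```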


56. From part (c) of the preceding problem, Y ∼ gamma(rp, λ). The table inside the back cover of the text gives the nth moment of a gamma random variable. Hence,

    E[Y^n] = Γ(n + rp)/[λ^n Γ(rp)].

57. Let T_i denote the time to transmit packet i. Then the time to transmit n packets is T := T_1 + ⋯ + T_n. We need to find the density of T. Since the T_i are exponential, we can apply the remark in the statement of Problem 55(c) to conclude that T ∼ Erlang(n, λ). Hence,

    f_T(t) = λ(λt)^{n−1}e^{−λt}/(n−1)!,  t ≥ 0.

58. Observe that Y = ln ∏_{i=1}^n (1/X_i) = Σ_{i=1}^n ln(1/X_i). By Problem 40, each term is an exp(1) random variable. Hence, by the remark in the statement of Problem 55(c), Y ∼ Erlang(n, 1); i.e.,

    f_Y(y) = y^{n−1}e^{−y}/(n−1)!,  y ≥ 0.
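Problem 58's conclusion is easy to confirm by simulation (an illustrative check, not from the text): with n = 4, the sample mean of Y should be near the Erlang(4, 1) mean of 4.

```python
import math
import random

random.seed(1)
n, N = 4, 100000
acc = 0.0
for _ in range(N):
    # X_i ~ uniform(0,1); using 1 - random() keeps the draw in (0, 1], avoiding log(1/0)
    acc += sum(math.log(1 / (1.0 - random.random())) for _ in range(n))
sample_mean = acc / N   # Erlang(n, 1) has mean n
```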

59. Consider the characteristic function,

    φ_Y(ν) = E[e^{jνY}] = E[e^{jν(β_1X_1+⋯+β_nX_n)}] = E[ ∏_{i=1}^n e^{j(νβ_i)X_i} ] = ∏_{i=1}^n E[e^{j(νβ_i)X_i}]
    = ∏_{i=1}^n e^{−λ|νβ_i|} = ∏_{i=1}^n e^{−λ|β_i||ν|} = exp[ −( λ Σ_{i=1}^n |β_i| ) |ν| ].

This is the chf of a Cauchy random variable with parameter λ Σ_{i=1}^n |β_i|. Hence,

    f_Y(y) = [ (λ Σ_{i=1}^n |β_i|)/π ] / [ (λ Σ_{i=1}^n |β_i|)² + y² ].

60. We must compute P(|Z| ≤ 2). We first find the density of Z using characteristic functions; the chf of Z works out to be that of a Cauchy(2) random variable. Since the Cauchy density is even,

    P(|Z| ≤ 2) = 2 ∫_0^2 f_Z(z) dz = 2[ (1/π)tan⁻¹(z/2) ]_0^2 = (2/π)tan⁻¹(1) = (2/π)(π/4) = 1/2.

61. Let X := U + V + W be the sum of the three voltages. The alarm sounds if X > x. To find P(X > x), we need the density of X. Since U, V, and W are i.i.d. exp(λ) random variables, by the remark in the statement of Problem 55(c), X ∼ Erlang(3, λ). By Problem 15(c),

    P(X > x) = Σ_{k=0}^{2} (λx)^k e^{−λx}/k!.
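The finite sum for P(X > x) can be checked against direct integration of the Erlang(3, λ) density (an illustrative check with arbitrary λ and x, not part of the original solution):

```python
import math

lam, x = 2.0, 1.7

# Closed form from Problem 15(c): P(X > x) = sum_{k=0}^{2} (lam*x)^k e^{-lam*x} / k!
tail_formula = sum((lam * x) ** k * math.exp(-lam * x) / math.factorial(k)
                   for k in range(3))

# Midpoint-rule integration of the Erlang(3, lam) density lam*(lam*t)^2*e^{-lam*t}/2!
steps, width = 100000, 30.0
h = width / steps
tail_numeric = 0.0
for i in range(steps):
    t = x + (i + 0.5) * h
    tail_numeric += lam * (lam * t) ** 2 * math.exp(-lam * t) / 2 * h
```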


62. Let X_i ∼ Cauchy(λ) be the i.i.d. line loads. Let Y := X_1 + ⋯ + X_n be the total load. The substation shuts down if Y > ℓ. To find P(Y > ℓ), we need to find the density of Y. By Problem 55(b), Y ∼ Cauchy(nλ), and so

    P(Y > ℓ) = ∫_ℓ^∞ f_Y(y) dy = [ (1/π)tan⁻¹(y/(nλ)) + ½ ]_ℓ^∞
             = 1 − [ (1/π)tan⁻¹(ℓ/(nλ)) + ½ ] = ½ − (1/π)tan⁻¹(ℓ/(nλ)).

63. Let the U_i ∼ uniform[0, 1] be the i.i.d. efficiencies of the extractors. Let X_i = 1 if extractor i operates with efficiency less than 0.25; in symbols, X_i = I_{[0,0.25)}(U_i), which is Bernoulli(p) with p = 0.25. Then Y := X_1 + ⋯ + X_13 is the number of extractors operating at less than 0.25 efficiency. The outpost operates normally if Y < 3. We must compute P(Y < 3). Since Y is the sum of i.i.d. Bernoulli(p) random variables, Y ∼ binomial(13, p). Thus,

    P(Y < 3) = (13 choose 0)p⁰(1−p)¹³ + (13 choose 1)p(1−p)¹² + (13 choose 2)p²(1−p)¹¹
             = (1−p)¹¹[(1−p)² + 13p(1−p) + 78p²] = 0.3326.
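The closing number can be reproduced directly (a check on the arithmetic, not in the original text):

```python
import math

p = 0.25
# P(Y < 3) for Y ~ binomial(13, p): sum the k = 0, 1, 2 terms
prob = sum(math.comb(13, k) * p ** k * (1 - p) ** (13 - k) for k in range(3))
# Factored form used in the solution
factored = (1 - p) ** 11 * ((1 - p) ** 2 + 13 * p * (1 - p) + 78 * p ** 2)
```

Both expressions evaluate to approximately 0.3326.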

64. R is chi-squared with k = 2 degrees of freedom. Since the number of degrees of freedom is even, R is Erlang(k/2, 1/2) = Erlang(1, 1/2) = exp(1/2). Hence,

    P(R > r) = ∫_r^∞ (1/2)e^{−x/2} dx = e^{−r/2}.

65. (a) Using the series form of the noncentral chi-squared density,

    ∫_0^∞ c_{k,λ²}(x) dx = ∫_0^∞ Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] c_{2n+k}(x) dx
    = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] ∫_0^∞ c_{2n+k}(x) dx   (each inner integral = 1)
    = Σ_{n=0}^∞ (λ²/2)^n e^{−λ²/2}/n! = 1,

since the last sum is over the Poisson(λ²/2) pmf.

(b) The mgf is

    M_{k,λ²}(s) = ∫_0^∞ e^{sx} c_{k,λ²}(x) dx
    = ∫_0^∞ e^{sx} Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] c_{2n+k}(x) dx
    = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] ∫_0^∞ e^{sx} c_{2n+k}(x) dx
    = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] [1/(1−2s)]^{(2n+k)/2}
    = [e^{−λ²/2}/(1−2s)^{k/2}] Σ_{n=0}^∞ (1/n!)[(λ²/2)/(1−2s)]^n
    = [e^{−λ²/2}/(1−2s)^{k/2}] e^{(λ²/2)/(1−2s)} = exp[ (λ²/2)(1/(1−2s) − 1) ]/(1−2s)^{k/2}
    = exp[ (λ²/2)·2s/(1−2s) ]/(1−2s)^{k/2} = exp[ sλ²/(1−2s) ]/(1−2s)^{k/2}.

(c) Since

    d/ds [ sλ²/(1−2s) ] |_{s=0} = [ (1−2s)λ² − sλ²(−2) ]/(1−2s)² |_{s=0} = λ²,

it is easy to show that M′_{k,λ²}(s) has the general form [α(s) + β(s)]/(1−2s)^k, where α(0) = λ² and β(0) = k. Hence, E[X] = M′_{k,λ²}(0) = λ² + k.

(d) The usual mgf argument gives

    M_Y(s) = E[e^{sY}] = E[e^{s(X_1+⋯+X_n)}] = ∏_{i=1}^n M_{k_i,λ_i²}(s) = ∏_{i=1}^n exp[sλ_i²/(1−2s)]/(1−2s)^{k_i/2}
    = exp[ s(λ_1²+⋯+λ_n²)/(1−2s) ] / (1−2s)^{(k_1+⋯+k_n)/2},

which is the mgf of a noncentral chi-squared random variable with k := k_1+⋯+k_n degrees of freedom and noncentrality parameter λ² := λ_1²+⋯+λ_n².

(e) We first consider

    (e^{λ√x} + e^{−λ√x})/2 = ½ Σ_{n=0}^∞ (λ√x)^n [1 + (−1)^n]/n! = Σ_{n=0}^∞ λ^{2n}x^n/(2n)!
    = Σ_{n=0}^∞ λ^{2n}x^n / [1·3·5⋯(2n−1)·2^n n!]
    = Σ_{n=0}^∞ 2^n (λ²/2)^n (x/2)^n / [1·3·5⋯(2n−1)·n!]
    = √π Σ_{n=0}^∞ (λ²/2)^n (x/2)^n / [Γ((2n+1)/2) n!],  by Problem 14(c).

Then

    [e^{−(x+λ²)/2}/√(2πx)] · (e^{λ√x}+e^{−λ√x})/2
    = [e^{−(x+λ²)/2}/√(2πx)] √π Σ_{n=0}^∞ (λ²/2)^n (x/2)^n / [Γ((2n+1)/2) n!]
    = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] · ½(x/2)^{n−1/2}e^{−x/2}/Γ((2n+1)/2)
    = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] c_{2n+1}(x) = c_{1,λ²}(x).

66. First, P(X ≥ a) = ∫_a^∞ 2x^{−3} dx = 1/a², while, using the result of Problem 23, the Markov bound is E[X]/a = 2/a. Thus, the true probability is 1/a², but the bound is 2/a, which decays much more slowly for large a.

67. We begin by noting that P(X ≥ a) = e^{−a}, E[X] = 1, and E[X²] = 2. Hence, the Markov bound is 1/a, and the Chebyshev bound is 2/a². To find the Chernoff bound, we must minimize h(s) := e^{−sa}M_X(s) = e^{−sa}/(1−s) over 0 < s < 1. Now,

    h′(s) = [ (1−s)(−a)e^{−sa} + e^{−sa} ]/(1−s)².

Solving h′(s) = 0, we find s = (a−1)/a, which is positive only for a > 1. Hence, the Chernoff bound is valid only for a > 1. For a > 1, the Chernoff bound is

    h((a−1)/a) = e^{−(a−1)}/[1−(a−1)/a] = ae^{1−a}.

(a) It is easy to see that the Markov bound is smaller than the Chebyshev bound for 0 < a < 2. However, note that the Markov bound is greater than one for 0 < a < 1, and the Chebyshev bound is greater than one for 0 < a < √2.

(b) MATLAB.

(Plots omitted: on linear and logarithmic scales, they compare the Markov bound 1/a, the Chebyshev bound 2/a², the Chernoff bound ae^{1−a}, and the true tail P(X > a).)

The Markov bound is the smallest on [1, 2]. The Chebyshev bound is the small-

est from a = 2 to a bit more than a = 5. Beyond that, the Chernoff bound is the

smallest.
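The orderings just described can be spot-checked numerically (illustrative values of a, not from the text):

```python
import math

def exp1_tail_and_bounds(a):
    # For X ~ exp(1): true tail e^{-a}, Markov 1/a, Chebyshev 2/a^2,
    # and the Chernoff bound a*e^{1-a} (valid for a > 1, as derived above).
    return math.exp(-a), 1 / a, 2 / a ** 2, a * math.exp(1 - a)

tail3, markov3, cheb3, chern3 = exp1_tail_and_bounds(3.0)
tail6, markov6, cheb6, chern6 = exp1_tail_and_bounds(6.0)
```

At a = 3 the Chebyshev bound is still the tightest of the three, while by a = 6 the Chernoff bound has overtaken it, consistent with the crossover "a bit more than a = 5" noted above.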

CHAPTER 5

Problem Solutions

1. For x ≥ 0,

    F(x) = ∫_0^x λe^{−λt} dt = −e^{−λt} |_0^x = 1 − e^{−λx}.

2. For x ≥ 0,

    F(x) = ∫_0^x (t/σ²) e^{−(t/σ)²/2} dt = −e^{−(t/σ)²/2} |_0^x = 1 − e^{−(x/σ)²/2}.

3. For x ≥ 0,

    F(x) = ∫_0^x λp t^{p−1} e^{−λt^p} dt = −e^{−λt^p} |_0^x = 1 − e^{−λx^p}.

4. For x ≥ 0, first write

    F(x) = ∫_0^x √(2/π) (t²/σ³) e^{−(t/σ)²/2} dt = √(2/π) ∫_0^{x/σ} θ² e^{−θ²/2} dθ,

where we have used the change of variable θ = t/σ. Next use integration by parts with u = θ and dv = θe^{−θ²/2} dθ. Then

    F(x) = √(2/π)[ −θe^{−θ²/2} |_0^{x/σ} + ∫_0^{x/σ} e^{−θ²/2} dθ ]
         = −√(2/π)(x/σ)e^{−(x/σ)²/2} + 2 ∫_0^{x/σ} e^{−θ²/2}/√(2π) dθ
         = −√(2/π)(x/σ)e^{−(x/σ)²/2} + 2[Φ(x/σ) − 1/2]
         = 2Φ(x/σ) − 1 − √(2/π)(x/σ)e^{−(x/σ)²/2}.

5. For y > 0, F_Y(y) = P(Y ≤ y) = P(e^Z ≤ y) = P(Z ≤ ln y) = F_Z(ln y). Then f_Y(y) = f_Z(ln y)/y for y > 0. Since Y := e^Z > 0, f_Y(y) = 0 for y ≤ 0.

Thus, f_Y(y) = f_X(1−y)·|−1| = f_X(1−y). In the case of X ∼ uniform(0, 1), f_Y(y) = I_{(0,1)}(1−y) = I_{(0,1)}(y); i.e., Y is again uniform(0, 1).

7. For y > 0, F_Y(y) = P(ln(1/X) ≤ y) = P(X ≥ e^{−y}) = 1 − F_X(e^{−y}). Thus, f_Y(y) = f_X(e^{−y})·e^{−y} = e^{−y}, since f_X(e^{−y}) = I_{(0,1)}(e^{−y}) = 1 for y > 0. Since Y := ln(1/X) > 0, f_Y(y) = 0 for y ≤ 0.

8. For y ≥ 0, F_Y(y) = P(λX^p ≤ y) = F_X((y/λ)^{1/p}). Thus, f_Y(y) = f_X((y/λ)^{1/p}) · (1/p)(y/λ)^{(1/p)−1}/λ. Using the formula for the Weibull density, we find that f_Y(y) = e^{−y} for y ≥ 0. Since Y := λX^p ≥ 0, f_Y(y) = 0 for y < 0. Thus, Y ∼ exp(1).

9. For y ≥ 0, write F_Y(y) = P(√X ≤ y) = P(X ≤ y²) = F_X(y²) = 1 − e^{−y²}. Thus,

    f_Y(y) = e^{−y²}·2y = [y/(1/√2)²] e^{−(y/(1/√2))²/2},

which is the Rayleigh(1/√2) density.

10. Recall that the moment generating function of X ∼ N(m, σ²) is M_X(s) = E[e^{sX}] = e^{sm+s²σ²/2}. Thus,

    E[Y^n] = E[(e^X)^n] = E[e^{nX}] = M_X(n) = e^{nm+n²σ²/2}.

11. For y > 0, F_Y(y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = F_X(√y) − F_X(−√y). Thus,

    f_Y(y) = f_X(√y)(½y^{−1/2}) + f_X(−√y)(½y^{−1/2}).

Since f_X is even,

    f_Y(y) = y^{−1/2} f_X(√y) = e^{−y/2}/√(2πy),  y > 0.

12. For y ≥ 0,

    F_Y(y) = P((X+m)² ≤ y) = P(−√y ≤ X+m ≤ √y) = P(−√y − m ≤ X ≤ √y − m)
           = F_X(√y − m) − F_X(−√y − m).

Thus,

    f_Y(y) = f_X(√y − m)(½y^{−1/2}) + f_X(−√y − m)(½y^{−1/2})
    = [1/(2√(2πy))][ e^{−(√y−m)²/2} + e^{−(√y+m)²/2} ]
    = [1/(2√(2πy))][ e^{−(y−2√y m+m²)/2} + e^{−(y+2√y m+m²)/2} ]
    = [e^{−(y+m²)/2}/√(2πy)] · (e^{m√y} + e^{−m√y})/2,  y > 0.

13. We have

    F_{Xmax}(z) = ∏_{k=1}^n F_{X_k}(z) = F(z)^n   and   F_{Xmin}(z) = 1 − ∏_{k=1}^n [1 − F_{X_k}(z)] = 1 − [1 − F(z)]^n.

14. Let Z := max(X, Y). Since X and Y are i.i.d. exp(λ), we have from the preceding problem that F_Z(z) = F_X(z)² = (1 − e^{−λz})². Hence, f_Z(z) = 2(1 − e^{−λz})λe^{−λz}. Next,

    E[Z] = ∫_0^∞ z f_Z(z) dz = 2 ∫_0^∞ z λe^{−λz}(1 − e^{−λz}) dz
         = 2[ ∫_0^∞ z λe^{−λz} dz − ½ ∫_0^∞ z (2λ)e^{−(2λ)z} dz ]
         = 2[ 1/λ − ½ · 1/(2λ) ] = 3/(2λ).
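The value 3/(2λ) can be confirmed by simulation (an illustrative λ, not part of the original solution):

```python
import random

random.seed(2)
lam, N = 0.5, 200000
acc = 0.0
for _ in range(N):
    # random.expovariate takes the rate lam, so each draw has mean 1/lam
    acc += max(random.expovariate(lam), random.expovariate(lam))
sample_mean = acc / N
expected = 3 / (2 * lam)   # = 3.0 for lam = 0.5
```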

15. Use the laws of total probability and substitution and the fact that conditioned on X = m, Y ∼ Erlang(m, λ). In particular, E[Y|X = m] = m/λ. We can now write

    E[XY] = Σ_{m=0}^∞ E[XY|X = m]P(X = m) = Σ_{m=0}^∞ E[mY|X = m]P(X = m)
          = Σ_{m=0}^∞ m E[Y|X = m]P(X = m) = Σ_{m=0}^∞ m(m/λ)P(X = m)
          = E[X²]/λ = (λ + λ²)/λ = 1 + λ,  since X ∼ Poisson(λ).

16. The problem statement tells us that P(Y > y|X = n) = e^{−ny}. Using the law of total probability and the pgf of X ∼ Poisson(λ), we have

    P(Y > y) = Σ_{n=0}^∞ P(Y > y|X = n)P(X = n) = Σ_{n=0}^∞ e^{−ny} P(X = n)
             = E[e^{−yX}] = G_X(e^{−y}) = e^{λ(e^{−y}−1)}.


17. (a) Using the laws of substitution and independence and the fact that Y ∼ N(0, 1), write

    F_{Z|X}(z|i) = P(X + Y ≤ z | X = i) = P(Y ≤ z − i) = Φ(z − i).

Next,

    F_Z(z) = Σ_{i=0}^1 F_{Z|X}(z|i)P(X = i) = (1−p)Φ(z) + pΦ(z−1),

and so

    f_Z(z) = [ (1−p)e^{−z²/2} + pe^{−(z−1)²/2} ]/√(2π).

(b) From part (a), it is easy to see that f_{Z|X}(z|i) = exp[−(z−i)²/2]/√(2π). Hence, the test

    f_{Z|X}(z|1)P(X = 1) ≥ f_{Z|X}(z|0)P(X = 0)

becomes

    exp[−(z−1)²/2] p ≥ exp[−z²/2](1−p),

which simplifies to

    z ≥ ½ + ln((1−p)/p).

18. F_{Z|A,X}(z|a, i) = P(Y ≤ z − i/a | A = a, X = i) = P(Y ≤ z − i/a) = Φ(z − i/a).

19. (a) Write

    F_{Z_n}(z) = P(Z_n ≤ z) = P(√Y_n ≤ z) = P(Y_n ≤ z²) = F_{Y_n}(z²).

Since

    f_{Y_n}(y) = ½ (y/2)^{n/2−1} e^{−y/2}/Γ(n/2),  y > 0,

we have

    f_{Z_n}(z) = f_{Y_n}(z²)·2z = z (z²/2)^{n/2−1} e^{−z²/2}/Γ(n/2),  z > 0.

(b) When n = 1, we obtain the folded normal density,

    f_{Z_1}(z) = z (z²/2)^{−1/2} e^{−z²/2}/Γ(1/2) = 2e^{−z²/2}/√(2π),  z > 0.

(c) When n = 2, we obtain the Rayleigh density,

    f_{Z_2}(z) = z (z²/2)^0 e^{−z²/2}/Γ(1) = z e^{−z²/2},  z > 0.

(d) When n = 3, we obtain the Maxwell density,

    f_{Z_3}(z) = z (z²/2)^{1/2} e^{−z²/2}/Γ(3/2) = √(2/π) z² e^{−z²/2},  z > 0.

(e) When n = 2m, we obtain

    f_{Z_{2m}}(z) = z (z²/2)^{m−1} e^{−z²/2}/Γ(m) = 2z^{2m−1} e^{−z²/2}/[2^m Γ(m)],  z > 0.

20. Let Z := X_1 + ⋯ + X_n. By Problem 55(a) in Chapter 4, Z ∼ N(0, n), and it follows that V := Z/√n ∼ N(0, 1). We can now write Y = Z² = nV². By Problem 11, V² is chi-squared with one degree of freedom. Hence,

    F_Y(y) = P(nV² ≤ y) = P(V² ≤ y/n),

and

    f_Y(y) = f_{V²}(y/n)/n = e^{−(y/n)/2}/[n√(2π(y/n))] = e^{−y/(2n)}/√(2πny),  y > 0.

21. (a) For y > 0,

    f_Y(y) = f_X(y^q)·qy^{q−1} = qy^{q−1} (y^q)^{p−1} e^{−y^q}/Γ(p) = q y^{qp−1} e^{−y^q}/Γ(p),  y > 0.

(b) Since q > 0, as y → 0, y^q → 0, and e^{−y^q} → 1. Hence, the behavior of f_Y(y) as y → 0 is determined by the behavior of y^{qp−1}. For p > 1/q, qp−1 > 0, and so y^{qp−1} → 0. For p = 1/q, y^{qp−1} = y⁰ = 1. For 0 < p < 1/q, qp−1 < 0 and so y^{qp−1} = 1/y^{1−qp} → ∞. Thus,

    lim_{y→0} f_Y(y) = 0 for p > 1/q;  = q/Γ(1/q) for p = 1/q;  = ∞ for 0 < p < 1/q.

(c) Here

    f_Y(y) = λq(λy)^{p−1} e^{−(λy)^q}/Γ(p/q),  y > 0.

    (i) Taking q = p and replacing λ by λ^{1/p} yields

        f_Y(y) = λ^{1/p} p (λ^{1/p}y)^{p−1} e^{−(λ^{1/p}y)^p} = λ^{1/p} p λ^{1−1/p} y^{p−1} e^{−λy^p} = λp y^{p−1} e^{−λy^p},

    which is the Weibull(p, λ) density.


    (ii) Taking p = q = 2 and replacing λ by 1/(σ√2) yields

        f_Y(y) = 2·[1/(σ√2)]·[y/(σ√2)] e^{−[y/(σ√2)]²} = (y/σ²) e^{−(y/σ)²/2},

    which is the required Rayleigh density.

    (iii) Taking p = 3, q = 2, and replacing λ by 1/(σ√2) yields

        f_Y(y) = 2·[1/(σ√2)]·[y/(σ√2)]² e^{−[y/(σ√2)]²} / Γ(3/2)
               = (1/√2)(y²/σ³) e^{−(y/σ)²/2} / (√π/2) = √(2/π) (y²/σ³) e^{−(y/σ)²/2},

    which is the required Maxwell density.

(d) In

    E[Y^n] = ∫_0^∞ y^n λq(λy)^{p−1} e^{−(λy)^q}/Γ(p/q) dy,

make the change of variable t = (λy)^q, dt = q(λy)^{q−1}λ dy. Then

    E[Y^n] = [1/(Γ(p/q)λ^n)] ∫_0^∞ (λy)^n (λy)^{p−q} e^{−(λy)^q} q(λy)^{q−1} λ dy
           = [1/(Γ(p/q)λ^n)] ∫_0^∞ (t^{1/q})^{n+p−q} e^{−t} dt
           = [1/(Γ(p/q)λ^n)] ∫_0^∞ t^{(n+p)/q−1} e^{−t} dt = Γ((n+p)/q)/[Γ(p/q)λ^n].

(e) We use the same change of variable as in part (d) to write

    F_Y(y) = ∫_0^y λq(λθ)^{p−1} e^{−(λθ)^q}/Γ(p/q) dθ = [1/Γ(p/q)] ∫_0^{(λy)^q} (t^{1/q})^{p−q} e^{−t} dt
           = [1/Γ(p/q)] ∫_0^{(λy)^q} t^{(p/q)−1} e^{−t} dt = G_{p/q}((λy)^q).

22. Following the hints, let u = t^{−1} and dv = te^{−t²/2} dt so that du = −1/t² dt and v = −e^{−t²/2}. Then

    ∫_x^∞ e^{−t²/2} dt = ∫_x^∞ (1/t)·te^{−t²/2} dt = −e^{−t²/2}/t |_x^∞ − ∫_x^∞ e^{−t²/2}/t² dt
                      = e^{−x²/2}/x − ∫_x^∞ e^{−t²/2}/t² dt    (∗)
                      < e^{−x²/2}/x.

The next step is to write the integral in (∗) as

    ∫_x^∞ e^{−t²/2}/t² dt = ∫_x^∞ (1/t³)·te^{−t²/2} dt

and apply integration by parts with u = t^{−3} and dv = te^{−t²/2} dt so that du = −3/t⁴ dt and v = −e^{−t²/2}. Then

    ∫_x^∞ e^{−t²/2}/t² dt = −e^{−t²/2}/t³ |_x^∞ − 3 ∫_x^∞ e^{−t²/2}/t⁴ dt = e^{−x²/2}/x³ − 3 ∫_x^∞ e^{−t²/2}/t⁴ dt.

Substituting this into (∗), we find that

    ∫_x^∞ e^{−t²/2} dt = e^{−x²/2}/x − e^{−x²/2}/x³ + 3 ∫_x^∞ e^{−t²/2}/t⁴ dt ≥ e^{−x²/2}/x − e^{−x²/2}/x³.

23. (a) Write

    E[F_X^c(Z)] = ∫ F_X^c(z) f_Z(z) dz
    = ∫ [ ∫_z^∞ f_X(x) dx ] f_Z(z) dz
    = ∫ [ ∫ I_{(z,∞)}(x) f_X(x) dx ] f_Z(z) dz
    = ∫ f_X(x) [ ∫ I_{(z,∞)}(x) f_Z(z) dz ] dx
    = ∫ f_X(x) [ ∫ I_{(−∞,x)}(z) f_Z(z) dz ] dx
    = ∫ f_X(x) [ ∫_{−∞}^x f_Z(z) dz ] dx
    = ∫ f_X(x) F_Z(x) dx = E[F_Z(X)].

(b) Since

    F_Z(z) = 1 − Σ_{k=0}^{m−1} (λz)^k e^{−λz}/k!,  for z ≥ 0,

we have

    E[F_Z(X)] = E[ I_{[0,∞)}(X) ( 1 − Σ_{k=0}^{m−1} (λX)^k e^{−λX}/k! ) ]
              = P(X ≥ 0) − Σ_{k=0}^{m−1} (λ^k/k!) E[X^k e^{−λX} I_{[0,∞)}(X)].


(c) Let X ∼ N(0, 1) so that E[Q(Z)] = E[F_X^c(Z)] = E[F_Z(X)]. Next, since Z ∼ exp(λ) = Erlang(1, λ), we can use the result of part (b) to write

    E[F_Z(X)] = P(X ≥ 0) − E[e^{−λX} I_{[0,∞)}(X)]
    = ½ − ∫_0^∞ e^{−λx} e^{−x²/2}/√(2π) dx
    = ½ − e^{λ²/2} ∫_0^∞ e^{−(x²+2λx+λ²)/2}/√(2π) dx
    = ½ − e^{λ²/2} ∫_0^∞ e^{−(x+λ)²/2}/√(2π) dx
    = ½ − e^{λ²/2} ∫_λ^∞ e^{−t²/2}/√(2π) dt = ½ − e^{λ²/2} Q(λ).

(d) Put Z := Y and make the following observations. First, for z 0,

FZ (z) = P( Y z) = P( 2Y z2 ) = P(Y (z/ )2 ) = FY ((z/ )2 ),

and FZ (z) = 0 for z < 0. Second, since Y is chi-squared with 2m-degrees of

freedom, Y Erlang(m, 1/2). Hence,

2 /2

m1

((z/ )2 /2)k e(z/ )

FZ (z) = 1 k!

, for z 0.

k=0

E[Q( Y )] = E[Q(Z)] = E[FXc (Z)] = E[FZ (X)]

is equal to

2

m1

((X/ )2 /2)k e(X/ ) /2

E I[0,) (X) 1 ,

k=0 k!

which simplifies to

m1

1 2

P(X 0) E[X 2k e(X/ ) /2 I[0,) (X)].

k=0 2k 2k k!

e 2 := (1 + 1/ 2 )1 , we have

Now, P(X 0) = 1/2, and with

Z x2 /2

2k (X/ )2 /2 2k (x/ )2 /2 e

E[X e I[0,) (X)] = x e dx

0 2

Z

e 2

= x2k e(x/e ) /2 dx

2 e 0

Z

e 1 2

= x2k e(x/e ) /2 dx

2 2 e

e

= e 2k .

1 3 5 (2k 1)

2

Putting this all together, the desired result follows.


(e) If m = 1 in part (d), then Z defined in solution of part (d) is Rayleigh( ) by the

same argument as in the solution of Problem 19(b). Hence, the desired result

follows by taking m = 1 in the result of part (d).

(f) Since the Vi are independent exp(i ), the mgf of Y := V1 + +Vm is

m m

k k

MY (s) = k s = ck k s ,

k=1 k=1

where the ck are the result of expansion by partial fractions. Inverse transform-

ing term by term, we find that

m

fY (y) = ck k ek y , y 0.

k=1

m

FY (y) = ck (1 ek y ), y 0.

k=1

Next, put Z := Y , and note that for z 0,

FZ (z) = P( Y z) = P(Y z2 ) = FY (z2 ).

Hence,

m

2

FZ (z) = ck (1 ek z ), z 0.

k=1

We can now write that E[Q(Z)] = E[FXc (Z)] = E[FZ (X)] is equal to

m

k X 2

E I[0,) (X) ck (1 e ) .

k=1

ek2 := (1 + 2k )1 ,

Now observe that with

Z x2 /2 Z x2 /2

k X 2 k x2 e 1 k x2 e

E[I[0,) (X)e ] = e dx = e dx

0 2 2 2

Z 2 Z (x/ek )2 /2

1 e(1+2k )x /2 ek e

= dx = dx

2 2 2 2

ek

ek 1

= = p .

2 2 1 + 2k

Putting this all together, the desired result follows.

24. Recall that the noncentral chi-squared density with k degrees of freedom and noncentrality parameter λ² is given by

    c_{k,λ²}(t) := Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] c_{2n+k}(t),  t > 0,

where c_{2n+k} denotes the central chi-squared density with 2n+k degrees of freedom. Hence,

    C_{k,λ²}(x) = ∫_0^x c_{k,λ²}(t) dt = ∫_0^x Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] c_{2n+k}(t) dt
    = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] ∫_0^x c_{2n+k}(t) dt = Σ_{n=0}^∞ [(λ²/2)^n e^{−λ²/2}/n!] C_{2n+k}(x).

FZn (z) = P( Yn z) = P(Yn z2 ) = FYn (z2 ).

(mz/2)2`+n/21 (mz/2)2`+n/21

In/21 (mz) = `!(` + (n/2) 1 + 1) = .

`=0 `=0 `!(` + n/2)

2 /2

(m2 /2)` em

fYn (y) := `!

c2`+n (y)

`=0

2 /2

(m2 /2)` em (y/2)(2`+n)/21 ey/2

= `!

21

((2` + n)/2)

.

`=0

So,

2 +z2 )/2 (m2 /2)` (z2 /2)`+n/21

fZn (z) = fYn (z2 ) 2z = 2ze(m 21

`=0 `!(` + n/2)

2 +z2 )/2 (mz)2`+n/21 mn/2+1 (1/2)2`+n/21 zn/21

= ze(m

`=0 `!(` + n/2)

zn/2 (m2 +z2 )

= e In/21 (mz).

mn/21

(b) Obvious.

(c) Begin by writing

FYn (y) = P(Zn2 y) = P(Zn y ) = FZn (y1/2 ).

Then

(y1/2 )n/2 (m2 +y)/2

= 1

2 e In/21 (m y )y1/2

mn/21

y n/21 2

= 12 e(m +y)/2 In/21 (m y ).

m


Z

t n/2 (m2 +t 2 )/2

FZcn (z) = e In/21 (mt) dt

z mn/21

Z

(mt)n/2 (m2 +t 2 )/2

= e In/21 (mt) dt.

z mn1

Now apply integration by parts with

2 /2 2 /2

u = (mt)n/21 In/21 (mt) and dv = mtet em /mn1 dt.

Then

2 /2 2 /2

v = mem et /mn1 , and by the hint, du = (mt)n/21 In/22 (mt) m dt.

Thus,

e(m +t )/2

2 2

mn2 z

Z

(mt)n/21 (m2 +t 2 )/2

+ e I n/22 (mt) dt

z mn3

z n/21 2 +z2 )/2

= e(m In/21 (mz) + FZn2 (z).

m

(e) Using induction, this is immediate from part (d).

2 /2

e z) = ez

(f) It is easy to see that Q(0, z) = Q(0, . We then turn to

m

Q(m, z) =

m

e (m/z)k Ik (mz)

k=0

(m2 +z2 )/2 k 2k k

= e m(m/z) Ik (mz) + z (mz) Ik1 (mz)z

k=0

(m2 +z2 )/2 k k

= e m(m/z) Ik (mz) + (m/z) Ik1 (mz)z

k=0

2 2

(m +z )/2 k

= ze (m/z) Ik1 (mz) (m/z)Ik (mz)

k=0

2 +z2 )/2

= ze(m (m/z)k Ik1 (mz) (m/z)k+1 Ik (mz)

k=0

(m2 +z2 )/2 2 +z2 )/2

= ze I1 (mz) = ze(m I1 (mz).

To conclude, we compute

Z

Q 2 +t 2 )/2

= te(m I0 (mt) dt

m m z

Z Z

2 +t 2 )/2 2 +t 2 )/2

= mte(m I0 (mt) dt + te(m I1 (mt) t dt.

z z


Z

2 +t 2 )/2

(mt)I1 (mt) (t/m)e(m dt.

z

2 2

Now apply integration by parts with u = (mt)I1 (mt) and dv = te(t +m )/2 /m dt.

2 2

Then du = (mt)I0 (mt)m dt and v = e(m +t )/2 /m, and the above integral is

equal to Z

2 +z2 )/2 2 +t 2 )/2

ze(m I1 (mz) + mte(m I0 (mt) dt.

z

2 +z2 )/2

Putting this all together, we find that Q/ m = ze(m I1 (mz).

(x/2)2`+

26. Recall that I (x) := `!(` + + 1) .

`=0

(a) Write

I (x) 1 (x/2)2` 1

= + as x 0.

(x/2) ( + 1) `=1 `!(` + + 1) ( + 1)

Now write

fZn (z) = e In/21 (mz)

mn/21

zn/2 2 2 In/21 (mz)

= n/21 e(m +z )/2 (mz/2)n/21

m (mz/2)n/21

zn1 2 2 In/21 (mz)

= n/21 e(m +z )/2 .

2 (mz/2)n/21

Thus,

2 0, n > 1,

em /2 2 p

lim fZn (z) = n/21 lim zn1 = em /2 2/ , n = 1,

z0 2 (n/2) z0

, 0 < n < 1.

(x/2)2`+ 1 (` + )(x/2)2`+ 1

I 1 (x) = = .

`=0 `!(` + ) `=0 `!(` + + 1)

(x/2)2k+ +1 (x/2)2`+ 1

I +1 (x) = k!(k + + 2) = (` 1)!(` + + 1)

k=0 `=1

`(x/2)2`+ 1 `(x/2)2`+ 1

= = `!(` + + 1) .

`=1 `!(` + + 1) `=0


(2` + )(x/2)2`+ 1

I 1 (x) + I +1 (x) = `!(` + + 1) = 2I0 (x).

`=0

(x/2)2`+ 1

I 1 (x) I +1 (x) = `!(` + + 1) = 2( /x)I (x).

`=0

R

(c) To the integral In (x) := (2 )1 ex cos cos(n ) d , apply integration by parts

with u = ex cos and dv = cos n d . Then du = ex cos (x sin ) d and v =

sin(n )/n. We find that

Z

1 x cos sin n x x cos

In (x) = e + e sin n sin d .

2 n n

| {z }

=0

We next use the identity sin A sin B = 12 [cos(A B) cos(A + B)] to get

x

In (x) = [In1 (x) In+1 (x)].

2n

R x cos

(d) Since I0 (x) := (1/ ) 0 e d , make the change of variable t = /2.

Then

Z /2 Z /2

1 1

I0 (x) = ex cos(t+ /2) dt = ex sint dt

/2 /2

Z /2 Z /2

1 (x sint)k 1 (x)k

= dt = sink t dt

/2 k=0 k! k=0 k! /2

| {z }

= 0 for k odd

Z /2 Z

1 x2` 2 x2` /2

= (2`)! sin2` t dt = sin2` t dt

`=0 /2 `=0 (2`)! 0

2` + 1

2 x2` 2

= (2`)!

`=0

2` + 2

, by Problem 18 in Ch. 4.

2

2

Now,

(2`)! = 1 3 5 (2` 1) 2 4 6 2` = 1 3 5 (2` 1) 2` `!,

and from Problem 14(c) in Chapter 4,

2` + 1 1 3 5 (2` 1)

= .

2 2`

Hence,

(x/2)2`

I0 (x) = `!(` + 1) =: I0 (x).

`=0


Z

1 x cos

In1 (x) = e cos([n 1] ) d

2

Z

1

= ex cos [cos n cos sin n sin ] d .

2

Then Z

1

1

2 [In1 (x) + In+1 (x)] = ex cos cos n cos d .

2

Since

Z Z

1 x cos 1 x cos

In0 (x) = e cos(n ) d = e cos(n ) cos d ,

x 2 2

we see that 12 [In1 (x) + In+1 (x)] = In0 (x).

27. MATLAB.

(Bar plot of the pmf omitted; the probabilities are tabulated in the next solution.)

28. See the previous problem solution for the graph. The probabilities are:

    k    P(X = k)
    0    0.0039
    1    0.0469
    2    0.2109
    3    0.4219
    4    0.3164

29. (a) The sketch of f consists of an impulse of weight 1/2 at t = 0, an impulse of weight 1/6 at t = 1, and the curve (1/3)e^{−t} for t > 0.


(b) P(X = 0) = ∫_{{0}} f(t) dt = 1/2, and P(X = 1) = ∫_{{1}} f(t) dt = 1/6.

(c) We have

    P(0 < X < 1) = ∫_{0+}^{1−} f(t) dt = ∫_{0+}^{1−} [ ⅓e^{−t}u(t) + ½δ(t) + (1/6)δ(t−1) ] dt
                 = ⅓ ∫_0^1 e^{−t} dt = ⅓(−e^{−t}) |_0^1 = (1 − e^{−1})/3

and

    P(X > 1) = ∫_{1+}^∞ f(t) dt = ⅓ ∫_1^∞ e^{−t} dt = ⅓(−e^{−t}) |_1^∞ = e^{−1}/3.

(d) Write

    P(X ≤ 1) = P(X = 0) + P(0 < X < 1) + P(X = 1) = ½ + (1 − e^{−1})/3 + 1/6 = 1 − e^{−1}/3

and

    P(X ≥ 1) = P(X = 1) + P(X > 1) = 1/6 + e^{−1}/3 = (1 + 2e^{−1})/6.

(e) Write

    E[X] = ∫ t f(t) dt = ⅓ ∫_0^∞ te^{−t} dt + 0·P(X = 0) + 1·P(X = 1)
         = ⅓·(mean of an exp(1) random variable) + 1/6 = 1/3 + 1/6 = 1/2.

30. For the first part of the problem, we have

    E[e^X] = ∫ e^x · ½[δ(x) + I_{(0,1]}(x)] dx = ½e⁰ + ½ ∫_0^1 e^x dx = ½ + (e−1)/2 = e/2.

For the second part, first write

    P(X = 0 | X ≤ 1/2) = P({X = 0} ∩ {X ≤ 1/2})/P(X ≤ 1/2) = P(X = 0)/P(X ≤ 1/2).

Since P(X = 0) = 1/2 and

    P(X ≤ 1/2) = ∫_{−∞}^{1/2} ½[δ(x) + I_{(0,1]}(x)] dx = ½ + ½·½ = ¾,

we have P(X = 0 | X ≤ 1/2) = (1/2)/(3/4) = 2/3.

R

31. The approach is to find the density and then compute E[X] = x fX (x) dx. The catch

is that the cdf has a jump at x = 1/2, and so the density has an impulse there. Put

2x, 0 < x < 1/2,

fX (x) := 1, 1/2 < x < 1,

0, otherwise.

80 Chapter 5 Problem Solutions

Z Z 1/2 Z 1

E[X] = x fX (x) dx = x 2x dx + x 1 dx + 41 12

0 1/2

2 1 3

1 2

= 3 2 + 12 [1 2 ]+ 8

1

1 3 1 11 1

= 12 + 8 + 8 = 24 + 8 = 7/12.

32. The approach is to find the density and then compute E[√X] = ∫ √x f_X(x) dx. The catch is that the cdf has a jump at x = 4, and so the density has an impulse there. Put

    f̃_X(x) := x^{−1/2}/8 for 0 < x < 4;  1/20 for 4 < x < 9;  0 otherwise.

Then

    ∫ x^{1/2} f̃_X(x) dx = ∫_0^4 (1/8) dx + ∫_4^9 x^{1/2}/20 dx = 1/2 + x^{3/2}/30 |_4^9
                        = 1/2 + (27 − 8)/30 = 1/2 + 19/30 = 17/15.

The complete answer is

    E[√X] = 17/15 + √4·(1/4) = 17/15 + 1/2 = 34/30 + 15/30 = 49/30.

33. For y < 0,

    ∫_{−∞}^y e^{−|t|} dt = ∫_{−∞}^y e^t dt = e^y,

and for y ≥ 0,

    ∫_{−∞}^y e^{−|t|} dt = ∫_{−∞}^0 e^t dt + ∫_0^y e^{−t} dt = 1 + (−e^{−t}) |_0^y = 1 + 1 − e^{−y} = 2 − e^{−y}.

Hence,

    F_Y(y) = e^y/4 for y < 0;  (2 − e^{−y})/4 + 1/3 for 0 ≤ y < 7;  (2 − e^{−y})/4 + 1/3 + 1/6 for y ≥ 7;

i.e.,

    F_Y(y) = e^y/4 for y < 0;  5/6 − e^{−y}/4 for 0 ≤ y < 7;  1 − e^{−y}/4 for y ≥ 7.



34. Here P(Y = 0) = P(Y = 1) = 1/2. Using the law of total probability,

    F_X(x) = P(X ≤ x) = Σ_{i=0}^1 P(X ≤ x | Y = i)P(Y = i)
           = ½[ P(X ≤ x | Y = 0) + ∫_{−∞}^x I_{(0,1]}(t) dt ].

It follows that

    F_X(x) = 1 for x ≥ 1;  ½(1 + x) for 0 ≤ x < 1;  0 for x < 0,

and

    f_X(x) = ½[ I_{(0,1)}(x) + δ(x) ].

(Sketches of the cdf and the density omitted.)

35. (a) Recall that as x varies from +1 to −1, cos⁻¹x varies from 0 to π. Hence, F_X(x) = P(cos Θ ≤ x) = P(Θ ∈ [−π, −θ_x] ∪ [θ_x, π]), where θ_x := cos⁻¹x. Since Θ ∼ uniform[−π, π],

    F_X(x) = [(−θ_x) − (−π)]/(2π) + (π − θ_x)/(2π) = (2π − 2θ_x)/(2π) = 1 − cos⁻¹(x)/π.

(b) Recall that as y varies from +1 to −1, sin⁻¹y varies from π/2 to −π/2. For y ≥ 0,

    F_Y(y) = P(sin Θ ≤ y) = P(Θ ∈ [−π, θ_y] ∪ [π − θ_y, π]),  where θ_y := sin⁻¹y,
           = [θ_y − (−π)]/(2π) + [π − (π − θ_y)]/(2π) = (π + 2θ_y)/(2π) = ½ + sin⁻¹(y)/π,

and for y < 0,

    F_Y(y) = P(sin Θ ≤ y) = P(Θ ∈ [−π − θ_y, θ_y])
           = [θ_y − (−π − θ_y)]/(2π) = (π + 2θ_y)/(2π) = ½ + sin⁻¹(y)/π.


(c) Write

    f_X(x) = (1/π)/√(1 − x²)   and   f_Y(y) = (1/π)/√(1 − y²).

Next,

    F_Z(z) = P(Z ≤ z) = P((Y+1)/2 ≤ z) = P(Y ≤ 2z − 1) = F_Y(2z − 1).

Then differentiate to get

    f_Z(z) = f_Y(2z − 1)·2 = 2/[π√(1 − (2z−1)²)] = 1/[π√(z(1−z))]
           = [Γ(½ + ½)/(Γ(½)Γ(½))] z^{½−1}(1 − z)^{½−1},

which is the beta density with p = q = 1/2.

36. The cdf is

    F_Y(y) = 1 for y ≥ 3;  ∫_{−1−√(1+y)}^{−1+√(1+y)} (1/4) dx = ½√(1+y) for −1 ≤ y < 3;  0 for y < −1.

Differentiating,

    f_Y(y) = 1/(4√(1+y)) for −1 < y < 3, and 0 otherwise.

37. The cdf is

    F_Y(y) = 1 for y ≥ 2;  2(3 − √(2/y))/6 for 1/2 ≤ y < 2;  1/3 for 0 ≤ y < 1/2;  0 for y < 0,

and the density is

    f_Y(y) = (√2/6) y^{−3/2} I_{(1/2,2)}(y) + (1/3)δ(y) + (1/3)δ(y − 2).


38. Write

    F_Y(y) = P(√(1−R²) ≤ y) = P(1 − R² ≤ y²) = P(1 − y² ≤ R²) = P(√(1−y²) ≤ R)
           = [√2 − √(1−y²)]/√2 = 1 − (1−y²)^{1/2}/√2.

We then differentiate to get the density

    f_Y(y) = y/√(2(1−y²)) for 0 < y < 1, and 0 otherwise.

39. The first thing to note is that for 0 ≤ R ≤ √2, 0 ≤ R² ≤ 2. It is then easy to see that the minimum value of Z = [R²(1 − R²/4)]^{−1} occurs when R² = 2, or R = √2. Hence, the random variable Z takes values in the range [1, ∞). So, for z ≥ 1, we write

    F_Z(z) = P( 1/[R²(1 − R²/4)] ≤ z ) = P( R²(1 − R²/4) ≥ 1/z ) = P( (R²/4)(1 − R²/4) ≥ 1/(4z) ).

The quadratic x(1−x) = y has the solutions x = [1 ± √(1−4y)]/2. Since we will have x = R²/4 ≤ 1/2, we need the negative root. Thus,

    F_Z(z) = P( R²/4 ≥ [1 − √(1−1/z)]/2 ) = P( R² ≥ 2[1 − √(1−1/z)] )
           = P( R ≥ √(2[1 − √(1−1/z)]) ) = [√2 − √(2[1 − √(1−1/z)])]/√2
           = 1 − √(1 − √(1−1/z)).

Differentiating, we obtain

    f_Z(z) = ½[1 − √(1−1/z)]^{−1/2} · ½(1−1/z)^{−1/2} · (1/z²)
           = [1/(4z²)] [ (1 − √(1−1/z))(1−1/z) ]^{−1/2}.

40. First note that as R varies from 0 to √2, T varies from 2π to 2√2π. For 2π ≤ t ≤ 2√2π, write

    F_T(t) = P(T ≤ t) = P( 2π/√(1 − R²/4) ≤ t ) = P( 2π/t ≤ √(1 − R²/4) )
           = P( R ≤ 2√(1 − (2π/t)²) ) = 2(1 − (2π/t)²)^{1/2}/√2 = √2 (1 − 4π²/t²)^{1/2}.

Differentiating,

    f_T(t) = √2 (1 − 4π²/t²)^{−1/2} (4π²/t³) = 4√2π²/[t²√(t² − 4π²)].

For the second part of the problem, observe that as R varies between 0 and √2, M varies between 1 and e^{−π}. For m in this range, note that ln m < 0, and write

    F_M(m) = P(M ≤ m) = P( e^{−π(R/2)/√(1−R²/4)} ≤ m )
           = P( π(R/2)/√(1−R²/4) ≥ −ln m )
           = P( π²(R²/4)/(1−R²/4) ≥ (−ln m)² )
           = P( R²/4 ≥ 1/[1 + {π/(−ln m)}²] )
           = P( R ≥ 2[1 + {π/(−ln m)}²]^{−1/2} )
           = 1 − √2 [1 + {π/(−ln m)}²]^{−1/2},

and

    f_M(m) = [√2π²/(m(−ln m)³)] [1 + {π/(−ln m)}²]^{−3/2} = √2π²/( m[(−ln m)² + π²]^{3/2} ).

41. (a) For X ∼ uniform[0, 1],

    F_Y(y) = 1 for y ≥ 1;  (y + √y)/2 for 0 ≤ y < 1;  0 for y < 0,

and

    f_Y(y) = ½[1 + 1/(2√y)] for 0 < y < 1, and 0 otherwise.

(b) For X ∼ uniform[−1, 2],

    F_Y(y) = 1 for y ≥ 2;  (y + 1)/3 for 1 ≤ y < 2;  (y + √y)/3 for 0 ≤ y < 1;  0 for y < 0,

and

    f_Y(y) = 1/3 for 1 < y < 2;  (1/3)[1 + 1/(2√y)] for 0 < y < 1;  0 otherwise.


(c) Here

    F_Y(y) = 1 for y ≥ 2;  (y + 2)/5 for 1 ≤ y < 2;  (y + √y)/5 for 0 ≤ y < 1;  0 for y < 0,

and

    f_Y(y) = (1/5)[ I_{(1,2)}(y) + {1 + 1/(2√y)}I_{(0,1)}(y) + δ(y − 1) + δ(y − 2) ].

(d) For X ∼ exp(λ),

    F_Y(y) = 1 for y ≥ 2;  P(X ≤ y) + P(X ≥ 3) = (1 − e^{−λy}) + e^{−3λ} for 0 ≤ y < 2;  0 for y < 0,

and

    f_Y(y) = λe^{−λy} I_{(0,2)}(y) + e^{−3λ}δ(y) + (e^{−2λ} − e^{−3λ})δ(y − 2).

42. (a) If X ∼ uniform[−1, 1], then Y = g(X) = 0, and so

    F_Y(y) = u(y) (the unit step function), and f_Y(y) = δ(y).

(b) If X ∼ uniform[−2, 2],

    F_Y(y) = 1 for y ≥ 1;  ½(y + 1) for 0 ≤ y < 1;  0 for y < 0,

and

    f_Y(y) = ½[ I_{(0,1)}(y) + δ(y) ].

(c) We have

    F_Y(y) = 1 for y ≥ 1;  (y + 1)/3 for 0 ≤ y < 1;  0 for y < 0,

and

    f_Y(y) = (1/3)[ I_{(0,1)}(y) + δ(y) + δ(y − 1) ].

(d) If X ∼ Laplace(λ),

    F_Y(y) = 1 for y ≥ 1;  2∫_0^{y+1} (λ/2)e^{−λx} dx = 1 − e^{−λ(y+1)} for 0 ≤ y < 1;  0 for y < 0,

and

    f_Y(y) = λe^{−λ(y+1)} I_{(0,1)}(y) + (1 − e^{−λ})δ(y) + e^{−2λ}δ(y − 1).

43. (a) If X ~ uniform[−3, 2],

$$F_Y(y) = \begin{cases} 1, & y \ge 1,\\ \tfrac15\bigl(y + y^{1/3} + 2\bigr), & 0 \le y < 1,\\ \tfrac15\bigl[y + 2 - (-y)^{1/2}\bigr], & -1 \le y < 0,\\ 0, & y < -1, \end{cases}$$

and

$$f_Y(y) = \tfrac15\bigl[1 + 1/(3y^{2/3})\bigr]I_{(0,1)}(y) + \tfrac15\bigl[1 + 1/(2\sqrt{-y}\,)\bigr]I_{(-1,0)}(y) + \tfrac15\,\delta(y-1).$$

(b) Similarly,

$$F_Y(y) = \begin{cases} 1, & y \ge 1,\\ \tfrac14\bigl(y + y^{1/3} + 2\bigr), & 0 \le y < 1,\\ \tfrac14\bigl[y + 2 - (-y)^{1/2}\bigr], & -1 \le y < 0,\\ 0, & y < -1, \end{cases}$$

and

$$f_Y(y) = \tfrac14\bigl[1 + 1/(3y^{2/3})\bigr]I_{(0,1)}(y) + \tfrac14\bigl[1 + 1/(2\sqrt{-y}\,)\bigr]I_{(-1,0)}(y).$$

(c) We have

$$F_Y(y) = \begin{cases} 1, & y \ge 1,\\ \tfrac12\bigl(y^{1/3} + 1\bigr), & 0 \le y < 1,\\ \tfrac12\bigl[1 - (-y)^{1/2}\bigr], & -1 \le y < 0,\\ 0, & y < -1, \end{cases}$$

and

$$f_Y(y) = \frac{1}{6y^{2/3}}\,I_{(0,1)}(y) + \frac{1}{4\sqrt{-y}}\,I_{(-1,0)}(y).$$

44. We have

$$F_Y(y) = \begin{cases} 0, & y < -1,\\ \tfrac16\bigl[y + 1 + \sqrt{y+1}\bigr], & -1 \le y < 1,\\ \tfrac16\bigl[3 + \sqrt{y+1}\bigr], & 1 \le y < 8,\\ 1, & y \ge 8, \end{cases}$$

and

$$f_Y(y) = \tfrac16\,\delta(y-1) + \Bigl[\tfrac16 + \tfrac1{12}(y+1)^{-1/2}\Bigr]I_{(-1,1)}(y) + \tfrac1{12}(y+1)^{-1/2}\,I_{(1,8)}(y).$$

45. We have

$$F_Y(y) = \begin{cases} 0, & y < 0,\\ (y^2 + y + \sqrt{y} + 1)/4, & 0 \le y < 1,\\ 1, & y \ge 1, \end{cases}$$

and

$$f_Y(y) = \tfrac14\,\delta(y) + \tfrac14\bigl[2y + 1/(2\sqrt{y}) + 1\bigr]I_{(0,1)}(y).$$


46. We have

$$F_Y(y) = \begin{cases} 1, & y \ge 1,\\ \tfrac16\bigl[4 + 3y - y^2\bigr], & 0 \le y < 1,\\ \tfrac16\bigl[3 + 2y - y^2\bigr], & -1 \le y < 0,\\ 0, & y < -1, \end{cases}$$

and

$$f_Y(y) = \bigl[\tfrac12 - \tfrac13 y\bigr]I_{(0,1)}(y) + \tfrac13\bigl[1 - y\bigr]I_{(-1,0)}(y) + \tfrac16\,\delta(y).$$

47. We have

$$F_Y(y) = \begin{cases} 1, & y \ge 1,\\ \tfrac13\Bigl[1 + \sqrt{y} + \dfrac{y}{2-y}\Bigr], & 0 \le y < 1,\\ 0, & y < 0, \end{cases}$$

and

$$f_Y(y) = \tfrac13\,\delta(y) + \tfrac13\biggl[\frac{1}{2\sqrt{y}} + \frac{2}{(2-y)^2}\biggr]I_{(0,1)}(y).$$

48. Observe that g is a periodic sawtooth function with period one. Also note that since 0 ≤ g(x) < 1, Y = g(X) takes values in [0, 1). For 0 ≤ y < 1,

$$F_Y(y) = P(Y \le y) = P(g(X) \le y) = \sum_{k=0}^\infty P(k \le X \le k+y) = \sum_{k=0}^\infty \bigl[F_X(k+y) - F_X(k)\bigr].$$

(a) If X ~ exp(1),

$$F_Y(y) = \sum_{k=0}^\infty \bigl[(1 - e^{-k}e^{-y}) - (1 - e^{-k})\bigr] = \sum_{k=0}^\infty e^{-k}(1 - e^{-y}) = \frac{1 - e^{-y}}{1 - e^{-1}}.$$

Differentiating, we get

$$f_Y(y) = \frac{e^{-y}}{1 - e^{-1}}, \quad 0 \le y < 1.$$

We say that Y has a truncated exponential density.

(b) When X ~ uniform[0, 1), we obtain Y = X ~ uniform[0, 1).

(c) Suppose X ~ uniform[ν, ν + 1), where ν = m + δ for some integer m ≥ 0 and some 0 < δ < 1. For 0 ≤ y < δ,

FY(y) = (m + 1 + y) − (m + 1) = y,
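Part (a)'s truncated exponential is easy to check by simulation (this check is an illustration, not part of the original solution; the exp(1) choice matches the text):

```python
import math
import random

# Sketch of a check for part (a): Y = g(X) = X mod 1 with X ~ exp(1)
# should have cdf F_Y(y) = (1 - e^{-y})/(1 - e^{-1}) on [0, 1).
random.seed(1)
n = 200_000
samples = [random.expovariate(1.0) % 1.0 for _ in range(n)]

def cdf_true(y):
    return (1.0 - math.exp(-y)) / (1.0 - math.exp(-1.0))

y0 = 0.5
empirical = sum(s <= y0 for s in samples) / n
print(empirical, cdf_true(y0))  # the two numbers should be close
```

The agreement is within the usual Monte Carlo error of order 1/√n.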


49. In the derivation of Property (vii) of cdfs, we started with the formula

$$(-\infty, x_0) = \bigcup_{n=1}^\infty \bigl(-\infty, x_0 - \tfrac1n\bigr].$$

Here, instead, we use

$$(-\infty, x_0) = \bigcup_{n=1}^\infty \bigl(-\infty, x_0 - \tfrac1n\bigr).$$

Hence,

$$G(x_0) = P(X < x_0) = P\Bigl(\bigcup_{n=1}^\infty \{X < x_0 - \tfrac1n\}\Bigr) = \lim_{N\to\infty} P(X < x_0 - \tfrac1N) = \lim_{N\to\infty} G(x_0 - \tfrac1N).$$

Thus, G is left continuous. In a similar way, we can adapt the derivation of Property (vi) of cdfs to write

$$P(X \le x_0) = P\Bigl(\bigcap_{n=1}^\infty \{X < x_0 + \tfrac1n\}\Bigr) = \lim_{N\to\infty} P(X < x_0 + \tfrac1N) = \lim_{N\to\infty} G(x_0 + \tfrac1N) = G(x_0+).$$

To conclude, write

50. First note that since FY(t) is right continuous, so is 1 − FY(t) = P(Y > t). Next, we use the assumption P(Y > t + Δt | Y > t) = P(Y > Δt) to show that with h(t) := −ln P(Y > t), h(t + Δt) = h(t) + h(Δt). To this end, write

$$h(\Delta t) = -\ln P(Y > \Delta t) = -\ln P(Y > t+\Delta t \mid Y > t) = -\ln\frac{P(\{Y > t+\Delta t\} \cap \{Y > t\})}{P(Y > t)}$$

$$= -\ln\frac{P(Y > t+\Delta t)}{P(Y > t)} = -\bigl[\ln P(Y > t+\Delta t) - \ln P(Y > t)\bigr] = h(t+\Delta t) - h(t).$$

Rewrite this result as h(t + Δt) = h(t) + h(Δt). Then with Δt = t, we have h(2t) = 2h(t). With Δt = 2t, we have h(3t) = h(t) + h(2t) = h(t) + 2h(t) = 3h(t). In general, h(nt) = nh(t). In a similar manner we can show that

$$h(t) = h\Bigl(\frac{t}{m} + \cdots + \frac{t}{m}\Bigr) = m\,h(t/m),$$

and so h(t/m) = h(t)/m. We now have that for rational a = n/m, h(at) = h(n(t/m)) = nh(t/m) = (n/m)h(t) = ah(t). For general a ≥ 0, let a_k ↓ a with a_k rational. Then by the right continuity of h,

$$h(at) = \lim_k h(a_k t) = \lim_k a_k\,h(t) = a\,h(t).$$

In particular, h(t) = h(t · 1) = t h(1). Thus,

$$t\,h(1) = h(t) = -\ln P(Y > t) = -\ln\bigl(1 - F_Y(t)\bigr),$$

and we have 1 − FY(t) = e^{−h(1)t}, which implies Y ~ exp(h(1)). Of course, h(1) = −ln P(Y > 1) = −ln[1 − FY(1)].
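The additivity of the negative log-survival function for an exponential law can be illustrated numerically (the rate 1.7 below is an arbitrary choice for the demonstration, not from the text):

```python
import math

# For S(t) = e^{-lam*t}, memorylessness S(t+dt)/S(t) = S(dt) holds exactly,
# and h(t) = -ln S(t) = lam*t is additive, as the derivation requires.
lam = 1.7  # arbitrary rate for the demonstration

def S(t):
    return math.exp(-lam * t)

def h(t):
    return -math.log(S(t))

checks = []
for t, dt in [(0.3, 0.9), (2.0, 0.1), (5.0, 3.0)]:
    checks.append(abs(S(t + dt) / S(t) - S(dt)))
    checks.append(abs(h(t + dt) - (h(t) + h(dt))))
print(max(checks))  # residuals at (or near) machine precision
```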

51. We begin with

$$E[Y_n] = E\biggl[\frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{X_i - m}{\sigma}\biggr] = \frac{1}{\sigma\sqrt{n}}\sum_{i=1}^n \bigl(E[X_i] - m\bigr) = 0.$$

For the variance, we use the fact that since independent random variables are uncorrelated, the variance of the sum is the sum of the variances. Thus,

$$\operatorname{var}(Y_n) = \operatorname{var}\biggl(\sum_{i=1}^n \frac{X_i - m}{\sigma\sqrt{n}}\biggr) = \sum_{i=1}^n \frac{\operatorname{var}(X_i)}{\sigma^2 n} = \sum_{i=1}^n \frac{\sigma^2}{\sigma^2 n} = 1.$$
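As an empirical illustration (using exp(1) data, so m = σ = 1, which is an assumption made here rather than in the text), the standardized sum indeed has sample mean near 0 and sample variance near 1:

```python
import math
import random
import statistics

# Y_n = sum_i (X_i - m)/(sigma*sqrt(n)) for i.i.d. X_i ~ exp(1), so m = sigma = 1.
# Over many realizations, Y_n should have mean ~0 and variance ~1.
random.seed(2)
n, trials = 25, 20_000
m = sigma = 1.0

def one_yn():
    xs = [random.expovariate(1.0) for _ in range(n)]
    return sum(x - m for x in xs) / (sigma * math.sqrt(n))

ys = [one_yn() for _ in range(trials)]
print(statistics.fmean(ys), statistics.pvariance(ys))
```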

52. Let Xi denote the time to transmit the ith packet, where Xi has mean m and variance σ². The total time to transmit n packets is Tn := X1 + ··· + Xn. The expected total time is E[Tn] = nm. Since we do not know the distribution of the Xi, we cannot know the distribution of Tn. However, we use the central limit theorem to approximate P(Tn > 2nm). Note that the sample mean Mn = Tn/n. Write

$$P(T_n > 2nm) = P\bigl(\tfrac1n T_n > 2m\bigr) = P(M_n > 2m) = P(M_n - m > m)$$

$$= P\biggl(\frac{M_n - m}{\sigma/\sqrt{n}} > \frac{m}{\sigma/\sqrt{n}}\biggr) = P\bigl(Y_n > m\sqrt{n}/\sigma\bigr) = 1 - F_{Y_n}(m\sqrt{n}/\sigma) \approx 1 - \Phi(m\sqrt{n}/\sigma).$$
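To see how this normal approximation behaves, one can compare it with the exact tail when the Xi are exp(1) (so m = σ = 1 and Tn is Erlang; this choice, and the small n = 9 so the numbers are visible, are assumptions made here, not in the text):

```python
import math
from statistics import NormalDist

# With X_i ~ exp(1), T_n is Erlang(n, 1), so
#   P(T_n > 2nm) = e^{-2n} * sum_{k=0}^{n-1} (2n)^k / k!     (exact),
# while the CLT gives 1 - Phi(m*sqrt(n)/sigma) = 1 - Phi(sqrt(n)).
n, m, sigma = 9, 1.0, 1.0
t = 2 * n * m
exact = math.exp(-t) * sum(t ** k / math.factorial(k) for k in range(n))
approx = 1.0 - NormalDist().cdf(m * math.sqrt(n) / sigma)
print(exact, approx)  # the CLT is only rough this far out in the tail
```

The comparison shows that the approximation, while convenient, understates such a far-tail probability.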

53. Let Xi = 1 if bit i is in error, and Xi = 0 otherwise. Then P(Xi = 1) = p. Although the problem does not say so, let us assume that the Xi are independent. Then Mn = (1/n)Σ_{i=1}^n Xi is the fraction of bits in error. We cannot reliably decode if Mn > t. To approximate the probability that we cannot reliably decode, write

$$P(M_n > t) = P\biggl(\frac{M_n - m}{\sigma/\sqrt{n}} > \frac{t - m}{\sigma/\sqrt{n}}\biggr) = P\biggl(Y_n > \frac{t-m}{\sigma/\sqrt{n}}\biggr) = 1 - F_{Y_n}\biggl(\frac{t-m}{\sigma/\sqrt{n}}\biggr)$$

$$\approx 1 - \Phi\biggl(\frac{t-m}{\sigma/\sqrt{n}}\biggr) = 1 - \Phi\biggl(\frac{t-p}{\sqrt{p(1-p)/n}}\biggr),$$

since m = p and σ² = p(1 − p) for Bernoulli(p) random variables.

54. If the Xi are i.i.d. Poisson(1), then Tn := X1 + ··· + Xn is Poisson(n). Thus,

$$P(T_n = k) = \frac{n^k e^{-n}}{k!} \approx \frac{1}{\sqrt{2\pi n}}\exp\biggl[-\frac12\Bigl(\frac{k-n}{\sqrt{n}}\Bigr)^2\biggr].$$

Taking k = n, we obtain n^n e^{−n}/n! ≈ 1/√(2πn), or n! ≈ √(2π) n^{n+1/2} e^{−n}.
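The resulting Stirling approximation is easy to check numerically; the relative error behaves like 1/(12n):

```python
import math

# Compare n! with sqrt(2*pi) * n^(n + 1/2) * e^{-n} from the solution above.
def stirling(n):
    return math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)

for n in (5, 10, 20):
    print(n, math.factorial(n) / stirling(n))  # ratios slightly above 1
```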


55. Recall that gn is the density of X1 + ··· + Xn. If the Xi are i.i.d. uniform[−1, 1], then gn is the convolution of ½I_{[−1,1]}(x) with itself n times. From graphical considerations, it is clear that gn(x) = 0 for |x| > n; i.e., x_max = n.

56. To begin, write

$$\varphi_{Y_n}(\nu) = E\bigl[e^{j\nu(X_1+\cdots+X_n)/\sqrt{n}}\bigr] = \prod_{i=1}^n E\bigl[e^{j(\nu/\sqrt{n})X_i}\bigr] = \prod_{i=1}^n \Bigl[\tfrac12 e^{j\nu/\sqrt{n}} + \tfrac12 e^{-j\nu/\sqrt{n}}\Bigr]$$

$$= \cos^n(\nu/\sqrt{n}) \approx \Bigl(1 - \frac{\nu^2/2}{n}\Bigr)^n \to e^{-\nu^2/2}.$$

(b) The desired probability is

$$R(t) := P(T > t) = \int_t^\infty \frac{\lambda(\lambda\tau)^{n-1} e^{-\lambda\tau}}{(n-1)!}\,d\tau.$$

Let Pn(t) denote the above integral. Then P1(t) = ∫_t^∞ λe^{−λτ} dτ = e^{−λt}. For n > 1, apply integration by parts with u = (λτ)^{n−1}/(n−1)! and dv = λe^{−λτ} dτ. Then

$$P_n(t) = \frac{(\lambda t)^{n-1} e^{-\lambda t}}{(n-1)!} + P_{n-1}(t).$$

It follows that

$$P_n(t) = \frac{(\lambda t)^{n-1} e^{-\lambda t}}{(n-1)!} + \frac{(\lambda t)^{n-2} e^{-\lambda t}}{(n-2)!} + \cdots + e^{-\lambda t} = e^{-\lambda t}\sum_{k=0}^{n-1}\frac{(\lambda t)^k}{k!}.$$

(c) The failure rate is

$$r(t) = \frac{f_T(t)}{R(t)} = \frac{\lambda(\lambda t)^{n-1} e^{-\lambda t}/(n-1)!}{e^{-\lambda t}\sum_{k=0}^{n-1}(\lambda t)^k/k!} = \frac{\lambda(\lambda t)^{n-1}/(n-1)!}{\sum_{k=0}^{n-1}(\lambda t)^k/k!}.$$

(Sketch of r(t) omitted.)

58. (a) Let λ = 1. For p = 1/2, r(t) = 1/(2√t). For p = 1, r(t) = 1. For p = 3/2, r(t) = 3√t/2. For p = 2, r(t) = 2t. For p = 3, r(t) = 3t². (Sketch of the failure rates for p = 1/2, 1, 3/2, 2, 3 omitted.)

(b) We have from the text that

$$R(t) = \exp\biggl[-\int_0^t r(\tau)\,d\tau\biggr] = \exp\biggl[-\int_0^t \lambda p\,\tau^{p-1}\,d\tau\biggr] = e^{-\lambda t^p}.$$

(c) The MTTF is

$$E[T] = \int_0^\infty R(t)\,dt = \int_0^\infty e^{-\lambda t^p}\,dt.$$

Making the change of variable τ = λt^p, so that t = (τ/λ)^{1/p} and dt = τ^{1/p−1} dτ/(pλ^{1/p}), we have

$$E[T] = \int_0^\infty e^{-\tau}\,\frac{\tau^{1/p-1}}{p\,\lambda^{1/p}}\,d\tau = \frac{1}{p\,\lambda^{1/p}}\int_0^\infty \tau^{1/p-1} e^{-\tau}\,d\tau = \frac{\Gamma(1/p)}{p\,\lambda^{1/p}} = \frac{\Gamma(1/p+1)}{\lambda^{1/p}}.$$

(d) Using the result of part (b),

$$f_T(t) = -R'(t) = \lambda p\,t^{p-1} e^{-\lambda t^p}, \quad t > 0.$$

59. The survival function is

$$R(t) = \exp\biggl[-\int_{t_0}^t r(\tau)\,d\tau\biggr] = \exp\biggl[-\int_{t_0}^t \frac{p}{\tau}\,d\tau\biggr] = \exp[-p(\ln t - \ln t_0)] = \exp\bigl[\ln(t_0/t)^p\bigr] = (t_0/t)^p, \quad t \ge t_0.$$

(Sketch of R(t) omitted.)

(c) For p > 1, the MTTF is

$$E[T] = \int_0^\infty R(t)\,dt = t_0 + \int_{t_0}^\infty (t_0/t)^p\,dt = t_0 + t_0^p\,\frac{t_0^{1-p}}{p-1} = t_0 + \frac{t_0}{p-1} = \frac{p\,t_0}{p-1}.$$

Finally,

$$f_T(t) = r(t)R(t) = \frac{p}{t}\Bigl(\frac{t_0}{t}\Bigr)^p = \frac{p\,t_0^p}{t^{p+1}}, \quad t \ge t_0.$$

60. (a) (Sketch of the failure rate r(t) = t² − 2t + 2 omitted.)

(b) We first compute

$$\int_0^t r(\tau)\,d\tau = \int_0^t (\tau^2 - 2\tau + 2)\,d\tau = \tfrac13 t^3 - t^2 + 2t.$$

Then

$$f_T(t) = r(t)\,e^{-\int_0^t r(\tau)\,d\tau} = (t^2 - 2t + 2)\,e^{-(t^3/3 - t^2 + 2t)}, \quad t \ge 0.$$

61. (a) If T ~ uniform[1, 2], then for 0 ≤ t < 1, R(t) = P(T > t) = 1, and for t ≥ 2, R(t) = P(T > t) = 0. For 1 ≤ t < 2,

$$R(t) = P(T > t) = \int_t^2 1\,d\tau = 2 - t.$$

Hence,

$$R(t) = \begin{cases} 1, & 0 \le t < 1,\\ 2 - t, & 1 \le t < 2,\\ 0, & t \ge 2. \end{cases}$$

(Sketch omitted.)

(b) The failure rate is

$$r(t) = -\frac{d}{dt}\ln R(t) = -\frac{d}{dt}\ln(2 - t) = \frac{1}{2 - t}, \quad 1 < t < 2.$$


62. Write

$$R(t) := P(T > t) = P(T_1 > t,\,T_2 > t) = P(T_1 > t)P(T_2 > t) = R_1(t)R_2(t).$$

63. Write

$$R(t) := P(T > t) = 1 - P(T_1 \le t,\,T_2 \le t) = 1 - P(T_1 \le t)P(T_2 \le t) = 1 - [1 - R_1(t)][1 - R_2(t)]$$

$$= 1 - [1 - R_1(t) - R_2(t) + R_1(t)R_2(t)] = R_1(t) + R_2(t) - R_1(t)R_2(t).$$

64. Write

$$E[Y^n] = E[T] = \int_0^\infty P(T > t)\,dt = \int_0^\infty P(Y^n > t)\,dt = \int_0^\infty P(Y > t^{1/n})\,dt.$$

Making the change of variable t = y^n, dt = ny^{n−1} dy, we obtain

$$E[Y^n] = \int_0^\infty P(Y > y)\,n y^{n-1}\,dy.$$

CHAPTER 6

Problem Solutions

1. Since the Xi are uncorrelated with common mean m and common variance σ²,

$$E[S_n^2] = \frac{1}{n-1}\,E\biggl[\sum_{i=1}^n X_i^2 - nM_n^2\biggr] = \frac{1}{n-1}\Bigl[n(\sigma^2 + m^2) - n\bigl\{\operatorname{var}(M_n) + (E[M_n])^2\bigr\}\Bigr]$$

$$= \frac{1}{n-1}\Bigl[n(\sigma^2 + m^2) - n\{\sigma^2/n + m^2\}\Bigr] = \frac{1}{n-1}\Bigl[(n-1)\sigma^2 + nm^2 - nm^2\Bigr] = \sigma^2.$$
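An empirical version of this computation (uniform(0,1) data with σ² = 1/12 is an arbitrary choice for the demonstration):

```python
import random
import statistics

# statistics.variance uses the n-1 divisor, i.e., it is S_n^2; averaging it
# over many trials should give sigma^2 = 1/12 for uniform(0,1) data.
random.seed(3)
n, trials = 5, 100_000
avg = statistics.fmean(
    statistics.variance([random.random() for _ in range(n)])
    for _ in range(trials)
)
print(avg, 1 / 12)
```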

2. (a) The mean of a Rayleigh(λ) random variable is λ√(π/2). Consider

$$\hat\lambda_n := M_n\big/\sqrt{\pi/2}.$$

Then

$$E[\hat\lambda_n] = E[M_n]\big/\sqrt{\pi/2} = \lambda\sqrt{\pi/2}\big/\sqrt{\pi/2} = \lambda,$$

so λ̂n is unbiased. Since Mn → λ√(π/2) by the strong law of large numbers,

$$\hat\lambda_n = M_n\big/\sqrt{\pi/2} \to \lambda\sqrt{\pi/2}\big/\sqrt{\pi/2} = \lambda,$$

so λ̂n is also strongly consistent.

(b) MATLAB. Add the line of code lambdan=mean(X)/sqrt(pi/2).

p

(c) MATLAB. Since Mn /2, we solve for and put

n := 2(Mn / )2 .
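A simulation check of part (a)'s estimator (the inverse-cdf sampler X = λ√(−2 ln U) for Rayleigh(λ) is a standard fact assumed here, not taken from the text):

```python
import math
import random
import statistics

# lambda_hat = M_n / sqrt(pi/2) should converge to lambda for Rayleigh data.
random.seed(4)
lam, n = 2.0, 200_000
xs = [lam * math.sqrt(-2.0 * math.log(1.0 - random.random())) for _ in range(n)]
lam_hat = statistics.fmean(xs) / math.sqrt(math.pi / 2)
print(lam_hat)  # close to 2.0
```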

3. (a) The mean of a gamma(p, ) random variable is p/ . We put

pn := Mn .

is unbiased and strongly consistent.

(b) MATLAB. In this problem = 1/2 and p = k/2, or k = 2p. We use kn := 2pn =

2( Mn ) = 2((1/2)Mn ) = Mn . We therefore add the line of code kn=mean(X).


Chapter 6 Problem Solutions 95

4. (a) Since the mean of a noncentral chi-squared random variable with k degrees of

freedom and noncentrality parameter 2 is k + 2 , we put

n2 := Mn k.

n2 is an unbiased estimator of 2 . Next, since n2 = Mn k E[Mn ] k =

(k + 2 ) k = 2 , the estimator is strongly consistent.

(b) MATLAB. Since k = 5, add the line of code lambda2n=mean(X)-5.

5. (a) Since the mean of a gamma(p, ) random variable is p/ , we put n := p/Mn .

Then n = p/Mn p/E[Mn ] = p/(p/ ) = , and we see that n is a strongly

consistent estimator of .

(b) MATLAB. Since p = 3, add the line of code lambdan=3/mean(X).

6. (a) Since the variance of a Laplace( ) random variable is 2/ 2 , we put

q

n := 2/Sn2 .

p

Since Sn2 converges to the variance, we have n 2/(2/ 2 ) = , and we see

that n is a strongly consistent estimator of .

(b) MATLAB. Add the line of code lambdan=sqrt(2/var(X)).

7. (a) The mean of a gamma(p, ) random variable is p/ . The second moment is

p(p + 1)/ 2 . Hence, the variance is

p(p + 1) p2 p

2

2 = 2.

Mn

n := and pn := n Mn .

Sn2

Now, Mn p/ and Sn2 p/ 2 . It follows that n (p/ ) (p/ 2 ) = and

then pn (p/ ) = p. Hence, n is a strongly consistent estimator of , and

pn is a strongly consistent estimator of p.

(b) MATLAB. Add the code

Mn = mean(X)

lambdan = Mn/var(X)

pn = lambdan*Mn
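A Python counterpart of part (b)'s check (an illustration, not the text's MATLAB; note that random.gammavariate(alpha, beta) has mean alpha·beta, so beta = 1/λ):

```python
import random
import statistics

# Moment estimators from part (a): lambda_hat = M_n / S_n^2 and
# p_hat = lambda_hat * M_n, applied to gamma(p, lambda) data.
random.seed(5)
p, lam, n = 3.0, 2.0, 200_000
xs = [random.gammavariate(p, 1.0 / lam) for _ in range(n)]
Mn = statistics.fmean(xs)
lam_hat = Mn / statistics.variance(xs)
p_hat = lam_hat * Mn
print(lam_hat, p_hat)  # close to (2, 3)
```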

8. We have

$$E[X^q] = \frac{\Gamma((q+p)/q)}{\Gamma(p/q)\,\lambda^q} = \frac{\Gamma(1+p/q)}{\Gamma(p/q)\,\lambda^q} = \frac{p/q}{\lambda^q}.$$

This suggests the estimator

$$\hat\lambda_n := \biggl[\frac{p/q}{\frac1n\sum_{i=1}^n X_i^q}\biggr]^{1/q}.$$

Then

$$\hat\lambda_n \to \biggl[\frac{p/q}{(p/q)/\lambda^q}\biggr]^{1/q} = \lambda,$$

and we see that λ̂n is a strongly consistent estimator of λ.

9. In the preceding problem, E[X^q] = (p/q)/λ^q. Now consider

$$E[X^{2q}] = \frac{\Gamma((2q+p)/q)}{\Gamma(p/q)\,\lambda^{2q}} = \frac{\Gamma(2+p/q)}{\Gamma(p/q)\,\lambda^{2q}} = \frac{(1+p/q)(p/q)}{\lambda^{2q}}.$$

Hence,

$$\operatorname{var}(X^q) = E[X^{2q}] - (E[X^q])^2 = \frac{p/q}{\lambda^{2q}}, \quad\text{and so}\quad \frac{p}{q} = \frac{(E[X^q])^2}{\operatorname{var}(X^q)}.$$

This suggests that we put

$$\overline{X^q_n} := \frac1n\sum_{i=1}^n X_i^q,$$

and then

$$\hat p_n := \frac{q\,\bigl(\overline{X^q_n}\bigr)^2}{\frac{1}{n-1}\Bigl[\sum_{i=1}^n (X_i^q)^2 - n\bigl(\overline{X^q_n}\bigr)^2\Bigr]}
\quad\text{and}\quad
\hat\lambda_n := \biggl[\frac{\hat p_n/q}{\overline{X^q_n}}\biggr]^{1/q} = \Biggl[\frac{\overline{X^q_n}}{\frac{1}{n-1}\Bigl[\sum_{i=1}^n (X_i^q)^2 - n\bigl(\overline{X^q_n}\bigr)^2\Bigr]}\Biggr]^{1/q}.$$

11. MATLAB. The required script can be created using the code from Problem 2 followed

by the lines

global lambdan

lambdan = mean(X)/sqrt(pi/2)

followed by the script from Problem 10 modified as follows: The chi-squared statistic

Z can be computed by inserting the lines

p = CDF(b) - CDF(a);

Z = sum((H-n*p).^2./(n*p))


after the creation of the right edge sequence in the script given in Problem 10, where

CDF is the function

function y = CDF(t) % Rayleigh CDF

global lambdan

y = zeros(size(t));

i = find(t>0);

y(i) = 1-exp(-(t(i)/lambdan).^2/2);

In addition, the line defining y in the script from Problem 10 should be changed to

y=PDF(t), where PDF is the function

function y = PDF(t) % Rayleigh density

global lambdan

y = zeros(size(t));

i = find(t>0);

y(i) = (t(i)/lambdan^2).*exp(-(t(i)/lambdan).^2/2);

Since the significance level is 0.05 and since there are m = 15 bins and r = 1 estimated parameter, the degrees-of-freedom parameter is k = m − 1 − r = 15 − 1 − 1 = 13 in the chi-squared table in the text.

12. MATLAB. Similar to the solution of Problem 11 except that it is easier to use the

MATLAB function chi2cdf or gamcdf to compute the required cdfs for evaluating

the chi-squared statistic Z. For the same reasons as in Problem 11, z = 22.362.

13. MATLAB. Similar to the solution of Problem 11 except that it is easier to use the

MATLAB function ncx2cdf to compute the required cdfs for evaluating the chi-

squared statistic Z. For the same reasons as in Problem 11, z = 22.362.

14. MATLAB. Similar to the solution of Problem 11 except that it is easier to use the MATLAB function gamcdf to compute the required cdfs for evaluating the chi-squared

statistic Z. For the same reasons as in Problem 11, z = 22.362.

15. MATLAB. Similar to the solution of Problem 11. For the same reasons as in Prob-

lem 11, z = 22.362.

16. Since

$$E[H_j] = E\biggl[\sum_{i=1}^n I_{[e_j,\,e_{j+1})}(X_i)\biggr] = \sum_{i=1}^n P(e_j \le X_i < e_{j+1}) = \sum_{i=1}^n p_j = np_j,$$

we have

$$E\biggl[\frac{H_j - np_j}{\sqrt{np_j}}\biggr] = 0.$$

Since the Xi are i.i.d., the I_{[e_j,e_{j+1})}(X_i) are i.i.d. Bernoulli(p_j). Hence,

$$E[(H_j - np_j)^2] = \operatorname{var}(H_j) = \operatorname{var}\biggl(\sum_{i=1}^n I_{[e_j,\,e_{j+1})}(X_i)\biggr) = \sum_{i=1}^n \operatorname{var}\bigl(I_{[e_j,\,e_{j+1})}(X_i)\bigr) = np_j(1 - p_j),$$

and so

$$E\biggl[\Bigl(\frac{H_j - np_j}{\sqrt{np_j}}\Bigr)^2\biggr] = 1 - p_j.$$

17. Write

$$F(-x) = \int_{-\infty}^{-x} f(t)\,dt = \int_x^\infty f(-\tau)\,d\tau = \int_x^\infty f(\tau)\,d\tau = 1 - F(x),$$

where the middle step uses the change of variable τ = −t and the last uses the symmetry f(−τ) = f(τ).

18. The width of any confidence interval is w = 2σ y_{α/2}/√n. If σ = 2 and n = 100,

$$w_{99\%} = \frac{2 \cdot 2 \cdot 2.576}{10} = 1.03.$$

To make w_{99%} < 1/4 requires

$$\frac{2\sigma y_{\alpha/2}}{\sqrt{n}} < \frac14 \quad\text{or}\quad n > (8\sigma y_{\alpha/2})^2 = (16 \cdot 2.576)^2 \approx 1699.$$
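The same arithmetic can be reproduced with the exact 99% quantile in place of the table value 2.576 (statistics.NormalDist is in the Python standard library from 3.8):

```python
import math
from statistics import NormalDist

# Problem-18 arithmetic: sigma = 2, n = 100, alpha = 0.01.
sigma, n = 2.0, 100
y = NormalDist().inv_cdf(1 - 0.01 / 2)          # y_{alpha/2}, about 2.576
w99 = 2 * sigma * y / math.sqrt(n)              # interval width for n = 100
n_req = math.ceil((2 * sigma * y / 0.25) ** 2)  # smallest n giving width < 1/4
print(round(w99, 2), n_req)  # 1.03 and 1699
```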

19. First observe that with Xi = m + Wi, E[Xi] = m and var(Xi) = var(Wi) = 4. So σ² = 4. For the 95% confidence interval, y_{α/2}σ/√n = 2(1.960)/10 = 0.392, and so the confidence interval is [Mn − 0.392, Mn + 0.392].

20. Write

$$P(|M_n - m| \le \delta) = P(-\delta \le M_n - m \le \delta) = P\biggl(-\delta \le \frac1n\sum_{i=1}^n (m + W_i) - m \le \delta\biggr)$$

$$= P\biggl(-\delta \le \frac1n\sum_{i=1}^n W_i \le \delta\biggr) = P\biggl(-n\delta \le \underbrace{\sum_{i=1}^n W_i}_{\text{Cauchy}(n)} \le n\delta\biggr) = \frac{2}{\pi}\tan^{-1}(n\delta/n) = \frac{2}{\pi}\tan^{-1}\delta,$$

which is equal to 2/3 if and only if tan⁻¹(δ) = π/3, or δ = √3.

21. MATLAB. OMITTED.

22. We use the formula m = Mn ± y_{α/2}Sn/√n = 10.083 ± (1.960)(0.568)/10 to get m = 10.083 ± 0.111, and the confidence interval is [9.972, 10.194] with 95% probability.

23. We use the formula m = Mn ± y_{α/2}Sn/√n = 4.422 ± (1.812)(0.957)/10 to get m = 4.422 ± 0.173, and the confidence interval is [4.249, 4.595].


24. We use the formula m = Mn ± y_{α/2}Sn/√n = 0.1 ± (1.645)(0.302)/10 to get m = 0.1 ± 0.0497, and the confidence interval is [0.0503, 0.1497]. Thus, we are 90% sure that the number of defectives is between 503 and 1497 out of a total of 10 000 units.

25. We have Mn = 1559/3000 ≈ 0.520. We use the formula m = Mn ± y_{α/2}Sn/√n = 0.520 ± (1.645)(0.5)/√3000 to get m = 0.520 ± 0.015, and the confidence interval is [0.505, 0.535]. Hence, the probability is at least 90% that more than 50.5% of the voters will vote for candidate A. So we are 90% sure that candidate A will win. The 99% confidence interval is given by m = 0.520 ± (2.576)(0.5)/√3000 = 0.520 ± 0.024, and the confidence interval is [0.496, 0.544]. Hence, we are not 99% sure that candidate A will win.
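The arithmetic above, with Sn replaced by the Bernoulli bound 1/2 as in the solution, can be reproduced directly:

```python
import math

# 90% and 99% intervals for M_n = 1559/3000 with S_n bounded by 1/2.
Mn, n = 1559 / 3000, 3000
for y in (1.645, 2.576):  # table values of y_{alpha/2}
    half = y * 0.5 / math.sqrt(n)
    print(Mn - half, Mn + half)
```

The 90% half-width is about 0.015 and the 99% half-width about 0.024, matching the intervals in the solution.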

26. We have Mn = number defective/n = 48/500 = 0.0960. We use the formula m = Mn ± y_{α/2}Sn/√n = 0.0960 ± (1.881)(0.295)/√500 to get m = 0.0960 ± 0.0248, and the confidence interval is [0.0712, 0.1208]. Thus, we are 94% sure that the number of defectives is between 7118 and 12 082 out of a total of 100 000 units.

27. (a) We have Mn = 6/100. We use the formula m = Mn ± y_{α/2}Sn/√n = 0.06 ± (2.170)(0.239)/10 to get m = 0.06 ± 0.0519. We are thus 97% sure that p = m lies in the interval [0.0081, 0.1119]. Thus, we are not 97% sure that p < 0.1.

(b) We have Mn = 71/1000. We use the formula m = Mn ± y_{α/2}Sn/√n = 0.071 ± (2.170)(0.257)/√1000 to get m = 0.071 ± 0.0176. We are thus 97% sure that p = m lies in the interval [0.053, 0.089]. Thus, we are 97% sure that p < 0.089 < 0.1.


28. (a) Let Ti denote the time to transmit the ith packet. Then we need to compute

$$P\biggl(\bigcup_{i=1}^n \{T_i > t\}\biggr) = 1 - P\biggl(\bigcap_{i=1}^n \{T_i \le t\}\biggr) = 1 - F_{T_1}(t)^n = 1 - \bigl(1 - e^{-t/\lambda}\bigr)^n.$$

(b) Using the notation from part (a), T = T1 + ··· + Tn. Since the Ti are i.i.d. exp(1/λ), T is Erlang(n, 1/λ) by Problem 55(c) in Chapter 4 and the remark following it. Hence,

$$f_T(t) = \frac{1}{\lambda}\,\frac{(t/\lambda)^{n-1} e^{-t/\lambda}}{(n-1)!}, \quad t \ge 0.$$

(c) We have

$$\lambda = M_n \pm \frac{y_{\alpha/2} S_n}{\sqrt{n}} = 1.994 \pm \frac{1.960(1.798)}{10} = 1.994 \pm 0.352,$$

and confidence interval [1.642, 2.346] with 95% probability.

29. MATLAB. OMITTED.

30. By the hint, Σ_{i=1}^n Xi is Gaussian with mean nm and variance nσ². Since Mn = Σ_{i=1}^n Xi/n, it is easy to see that Mn is still Gaussian, and its mean is (nm)/n = m. Its variance is (nσ²)/n² = σ²/n. Next, Mn − m remains Gaussian but with mean zero and the same variance σ²/n. Finally, (Mn − m)/√(σ²/n) remains Gaussian with mean zero, but its variance is (σ²/n)/(√(σ²/n))² = 1.

31. We use the formula m = Mn ± y_{α/2}Sn/√n, where in this Gaussian case, y_{α/2} is taken from the tables using Student's t distribution with n = 10. Thus,

$$m = M_n \pm \frac{y_{\alpha/2} S_n}{\sqrt{n}} = 14.832 \pm \frac{2.262 \cdot 1.904}{\sqrt{10}} = 14.832 \pm 1.362,$$

and the confidence interval is [13.470, 16.194] with 95% probability.

32. We use [nVn2 /u, nVn2 /`], where u and ` are chosen from the appropriate table. For a

95% confidence interval, ` = 74.222 and u = 129.561. Thus,

2

nVn nVn2 100(4.413) 100(4.413)

, = , = [3.406, 5.946].

u ` 129.561 74.222

33. We use [(n 1)Sn2 /u, (n 1)Sn2 /`], where u and ` are chosen from the appropriate

table. For a 95% confidence interval, ` = 73.361 and u = 128.422. Thus,

(n 1)Sn2 (n 1)Sn2 99(4.736) 99(4.736)

, = , = [3.651, 6.391].

u ` 128.422 73.361

34. For the two-sided test at the 0.05 significance level, we compare |Zn| with y_{α/2} = 1.960. Since |Zn| = 1.8 ≤ 1.960 = y_{α/2}, we accept the null hypothesis. For the one-sided test of m > m0 at the 0.05 significance level, we compare Zn with y_α = 1.645. Since it is not the case that Zn = −1.80 > 1.645 = y_α, we do not accept the null hypothesis.


35. Suppose Φ(y) = β. Then by Problem 17, Φ(−y) = 1 − Φ(y), and so 1 − Φ(−y) = β, or Φ(−y) = 1 − β.

36. (a) Since Zn = 1.50 y = 1.555, the Internet service provider accepts the null

hypothesis.

(b) Since Zn = 1.50 > 1.555 = y , we accept the null hypothesis; i.e., we reject

the claim of the Internet service provider.

37. The computer vendor would take the null hypothesis to be m ≥ m0. To give the vendor the benefit of the doubt, the consumer group uses m ≥ m0 as the null hypothesis. To accept the null hypothesis would require Zn ≥ −y_α. Only by using the significance level of 0.10, which has y_α = 1.282, can the consumer group give the benefit of the doubt to the vendor and still reject the vendor's claim.

38. Giving itself the benefit of the doubt, the company uses the null hypothesis m > m0 and uses a 0.05 significance level. The null hypothesis will be accepted if Zn > −y_α = −1.645. Since Zn = −1.6 > −1.645, the company believes it has justified its claim.

39. Write

$$e(\hat a,\hat b) = \sum_{k=1}^n \bigl|Y_k - (\hat a x_k + \hat b)\bigr|^2 = \sum_{k=1}^n \bigl|Y_k - (\hat a x_k + [\overline{Y} - \hat a\,\overline{x}\,])\bigr|^2 = \sum_{k=1}^n \bigl|(Y_k - \overline{Y}) - \hat a(x_k - \overline{x})\bigr|^2$$

$$= \sum_{k=1}^n \Bigl[(Y_k - \overline{Y})^2 - 2\hat a(x_k - \overline{x})(Y_k - \overline{Y}) + \hat a^2(x_k - \overline{x})^2\Bigr] = S_{YY} - 2\hat a S_{xY} + \hat a^2 S_{xx}.$$

Since â = S_{xY}/S_{xx}, we have â²S_{xx} = âS_{xY}, and so

$$e(\hat a,\hat b) = S_{YY} - 2\hat a S_{xY} + \hat a S_{xY} = S_{YY} - \hat a S_{xY} = S_{YY} - S_{xY}^2/S_{xx}.$$

40. Write

$$E[Y \mid X = x] = E[g(X) + W \mid X = x] = g(x) + E[W \mid X = x] = g(x) + E[W] = g(x).$$

42. MATLAB. OMITTED.

43. MATLAB. If z = c/t q , then ln z = ln c q lnt. If y = ln z and x = lnt, then y = (q)x +

ln c. If y a(1)x + a(2), then q = a(1) and c = exp(a(2)). Hence, the two lines of

code that we need are

qhat = -a(1)

chat = exp(a(2))

44. Obvious.


45. Write

$$f_{\tilde Z}(z) = e^{sz} f_Z(z)/M_Z(s) = e^{sz}\,\frac{e^{-z^2/2}}{\sqrt{2\pi}}\Big/ e^{s^2/2} = \frac{e^{-(z-s)^2/2}}{\sqrt{2\pi}}.$$

To make E[Z̃] = t, put s = t. Then Z̃ ~ N(t, 1).

46. Write

$$f_{\tilde Z}(z) = e^{sz} f_Z(z)/M_Z(s) = e^{sz}\,\frac{\lambda(\lambda z)^{p-1} e^{-\lambda z}}{\Gamma(p)}\Big/\Bigl(\frac{\lambda}{\lambda - s}\Bigr)^p, \quad z > 0.$$

Then

$$E[\tilde Z] = \Bigl(\frac{\lambda - s}{\lambda}\Bigr)^p \int_0^\infty z\,\frac{\lambda(\lambda z)^{p-1} e^{-z(\lambda - s)}}{\Gamma(p)}\,dz = \int_0^\infty z\,\frac{(\lambda - s)^p z^{p-1} e^{-z(\lambda - s)}}{\Gamma(p)}\,dz,$$

which is the mean of a gamma(p, λ − s) density. Hence, E[Z̃] = p/(λ − s). To make E[Z̃] = t, we need p/(λ − s) = t, or s = λ − p/t.

47. MATLAB. OMITTED.

48. First,

$$\tilde p_Z(z_i) = e^{s z_i}\,p_Z(z_i)\big/\bigl[(1 - p) + pe^s\bigr].$$

Then

CHAPTER 7

Problem Solutions

1. We have

$$F_Z(z) = P(Z \le z) = P(Y - X \le z) = P\bigl((X,Y) \in A_z\bigr),$$

where

$$A_z := \{(x,y) : y - x \le z\} = \{(x,y) : y \le x + z\}.$$

(Sketch omitted: Az is the region on or below the line y = x + z.)

2. We have

$$F_Z(z) = P(Z \le z) = P(Y/X \le z) = P\Bigl(\{Y/X \le z\} \cap \bigl(\{X < 0\} \cup \{X > 0\}\bigr)\Bigr) = P\bigl((X,Y) \in D_z^- \cup D_z^+\bigr) = P\bigl((X,Y) \in A_z\bigr),$$

where Az := D_z^- ∪ D_z^+,

$$D_z^- := \{(x,y) : y/x \le z \text{ and } x < 0\} = \{(x,y) : y \ge zx \text{ and } x < 0\},$$

and

$$D_z^+ := \{(x,y) : y/x \le z \text{ and } x > 0\} = \{(x,y) : y \le zx \text{ and } x > 0\}.$$

(Sketch omitted: relative to the line of slope z through the origin, D_z^- lies on or above the line for x < 0, and D_z^+ on or below it for x > 0.)


104 Chapter 7 Problem Solutions


(b) A := (−∞, a] × (−∞, d]. (Sketch omitted.)

(c) B := (−∞, b] × (−∞, c]. (Sketch omitted.)

(d) C := (a, b] × (−∞, c]. (Sketch omitted.)

(e) D := (−∞, a] × (c, d]. (Sketch omitted.)

(f) A ∩ B. (Sketch omitted.)

4. Following the hint and then observing that R and A ∪ B are disjoint, we have

$$P\bigl((X,Y) \in (-\infty,b] \times (-\infty,d]\bigr) = P\bigl((X,Y) \in R\bigr) + P\bigl((X,Y) \in A \cup B\bigr). \qquad (\ast)$$

By inclusion–exclusion,

$$P\bigl((X,Y) \in A \cup B\bigr) = P\bigl((X,Y) \in A\bigr) + P\bigl((X,Y) \in B\bigr) - P\bigl((X,Y) \in A \cap B\bigr) = F_{XY}(a,d) + F_{XY}(b,c) - F_{XY}(a,c).$$

Hence, (∗) becomes

$$F_{XY}(b,d) = P\bigl((X,Y) \in R\bigr) + F_{XY}(a,d) + F_{XY}(b,c) - F_{XY}(a,c),$$

and so

$$P\bigl((X,Y) \in R\bigr) = F_{XY}(b,d) - F_{XY}(a,d) - F_{XY}(b,c) + F_{XY}(a,c).$$

(b) {(x, y) : 2 < x ≤ 4, 1 ≤ y < 2} = (2, 4] × [1, 2).

(c) {(x, y) : 2 < x ≤ 4, y = 1} = (2, 4] × {1}.

(d) {(x, y) : 2 < x ≤ 4} = (2, 4] × IR.

(e) {(x, y) : y = 1} = IR × {1}.

(f) {(1, 1), (2, 1), (3, 1)} = {1, 2, 3} × {1}.

(g) The union of {(1, 3), (2, 3), (3, 3)} and the set in (f) is equal to {1, 2, 3} × {1, 3}.

(h) {(1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)} is NOT a product set.

6. We have

(

1, x 2, y e2y

1 e , y 0,

FX (x) = x 1, 1 x < 2, and FY (y) = y

0, y < 0,

0, x < 1,

where the quotient involving division by y is understood as taking its limiting value

of one when y = 0. Since FX(x)FY(y) ≠ FXY(x, y) when 1 ≤ x ≤ 2 and y > 0, X and Y

are NOT independent.

106 Chapter 7 Problem Solutions

7. We have

1, x 3, 1 2y 5e3y ],

FX (x) = 2/7, 2 x < 3, and FY (y) = 7 [7 2e y 0,

0, y < 0.

0, x < 2,

y + ex(y+1) (y + 1){1 + ex(y+1) (x)} {y + ex(y+1) }(1)

=

y y+1 (y + 1)2

(y + 1) x(y + 1)ex(y+1) y ex(y+1)

=

(y + 1)2

1 ex(y+1) {1 + x(y + 1)}

= .

(y + 1)2

Then compute

y + ex(y+1) {1 + x(y + 1)}ex(y+1) (y + 1) ex(y+1) (y + 1)

=

x y y+1 (y + 1)2

xex(y+1) (y + 1)2

= = xex(y+1) , x, y > 0.

(y + 1)2

2

exp[|y x| x2 /2] ex /2 e|yx|

fXY (x, y) = = .

2 2 2 2

When integrating this last factor with respect to y, make the change of variable =

y x to get

Z 2 Z 2 Z

ex /2 1 |yx| ex /2 1 | |

fX (x) = fXY (x, y) dy = 2e dy = 2e d

2 2

2 Z 2

ex /2 ex /2

= e d = .

2 0 2

Thus, X N(0, 1).

10. The first step is to factor fXY (x, y) as

2 2

4e(xy) /2 4 e(xy) /2

fXY (x, y) = = 5 .

5

y 2 y 2

Regarding this last factor a function of x, it is an N(y, 1) density. In other words, when

integrated with respect to x, the result is one. In symbols,

Z Z (xy)2 /2

4 e 4

fY (y) = fXY (x, y) dx = dx = , y 1.

y5 2 y5


11. For U := max(X, Y), we have FU(u) = P(X ≤ u, Y ≤ u) = FXY(u, u), and so

$$f_U(u) = \frac{\partial F_{XY}(x,y)}{\partial x}\bigg|_{x=u,\,y=u} + \frac{\partial F_{XY}(x,y)}{\partial y}\bigg|_{x=u,\,y=u}.$$

If X and Y are independent, then FU(u) = FX(u)FY(u) and fU(u) = fX(u)FY(u) + FX(u)fY(u). If in addition X and Y have the same density, say f (and therefore the same cdf, say F), then

$$F_U(u) = F(u)^2, \quad\text{and}\quad f_U(u) = 2F(u)f(u).$$

We next analyze V := min(X, Y). Using the inclusion–exclusion formula,

$$F_V(v) = P(X \le v) + P(Y \le v) - P(X \le v \text{ and } Y \le v) = F_X(v) + F_Y(v) - F_{XY}(v, v).$$

The density is

$$f_V(v) = f_X(v) + f_Y(v) - \frac{\partial F_{XY}(x,y)}{\partial x}\bigg|_{x=v,\,y=v} - \frac{\partial F_{XY}(x,y)}{\partial y}\bigg|_{x=v,\,y=v}.$$

If X and Y are independent, then FV(v) = FX(v) + FY(v) − FX(v)FY(v), and

$$f_V(v) = f_X(v) + f_Y(v) - f_X(v)F_Y(v) - F_X(v)f_Y(v).$$

If in addition X and Y have the same density f and cdf F, then

$$F_V(v) = 2F(v) - F(v)^2, \quad\text{and}\quad f_V(v) = 2[1 - F(v)]f(v).$$

12. Since X ~ gamma(p, 1) and Y ~ gamma(q, 1) are independent, we have from Problem 55(c) in Chapter 4 that Z ~ gamma(p + q, 1). Since p = q = 1/2, we further have Z ~ gamma(1, 1) = exp(1). Hence, P(Z > 1) = e^{−1}.

13. We have from Problem 55(b) in Chapter 4 that Z ~ Cauchy(λ + μ). Since λ = μ = 1/2, Z ~ Cauchy(1). Thus,

$$P(Z \le 1) = \frac{1}{\pi}\tan^{-1}(1) + \frac12 = \frac{1}{\pi}\cdot\frac{\pi}{4} + \frac12 = \frac34.$$
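Both closure properties can be checked by simulation (the sampling methods below are standard facts assumed here, not from the text: gammavariate for gamma variates, and tan(π(U − ½)) for Cauchy variates):

```python
import math
import random

random.seed(6)
n = 300_000

# gamma(1/2, 1) + gamma(1/2, 1) = exp(1), so P(Z > 1) = e^{-1}.
p12 = sum(
    random.gammavariate(0.5, 1.0) + random.gammavariate(0.5, 1.0) > 1.0
    for _ in range(n)
) / n

# Cauchy(1/2) + Cauchy(1/2) = Cauchy(1), so P(Z <= 1) = 3/4.
def cauchy(scale):
    return scale * math.tan(math.pi * (random.random() - 0.5))

p13 = sum(cauchy(0.5) + cauchy(0.5) <= 1.0 for _ in range(n)) / n
print(p12, p13)  # near exp(-1) ~ 0.368 and near 0.75
```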


For Z = X + Y,

$$F_Z(z) = \int_{-\infty}^\infty \int_{-\infty}^{z-y} f_{XY}(x,y)\,dx\,dy.$$

Then

$$f_Z(z) = \frac{\partial}{\partial z}F_Z(z) = \int_{-\infty}^\infty f_{XY}(z-y,\,y)\,dy.$$

For Z = XY, write FZ(z) = P((X,Y) ∈ B_z^+ ∪ B_z^-), where B_z^+ and B_z^- denote the parts of {(x,y) : xy ≤ z} with y > 0 and y < 0, respectively. Then

$$F_Z(z) = \iint_{B_z^+} f_{XY}(x,y)\,dx\,dy + \iint_{B_z^-} f_{XY}(x,y)\,dx\,dy = \int_0^\infty\!\!\int_{-\infty}^{z/y} f_{XY}(x,y)\,dx\,dy + \int_{-\infty}^0\!\!\int_{z/y}^\infty f_{XY}(x,y)\,dx\,dy.$$

Then

$$f_Z(z) = \frac{\partial}{\partial z}F_Z(z) = \int_0^\infty f_{XY}(z/y,\,y)\frac{1}{y}\,dy - \int_{-\infty}^0 f_{XY}(z/y,\,y)\frac{1}{y}\,dy = \int_{-\infty}^\infty f_{XY}(z/y,\,y)\frac{1}{|y|}\,dy.$$

For Z = Y − X,

$$F_Z(z) = \int_{-\infty}^\infty \int_{-\infty}^{z+x} f_{XY}(x,y)\,dy\,dx.$$

Then

$$f_Z(z) = \frac{\partial}{\partial z}F_Z(z) = \int_{-\infty}^\infty f_{XY}(x,\,z+x)\,dx.$$

For Z = Y/X,

$$F_Z(z) = P(Y \le Xz,\,X > 0) + P(Y \ge Xz,\,X < 0) = \int_0^\infty\!\!\int_{-\infty}^{xz} f_{XY}(x,y)\,dy\,dx + \int_{-\infty}^0\!\!\int_{xz}^\infty f_{XY}(x,y)\,dy\,dx.$$

Then

$$f_Z(z) = \int_0^\infty f_{XY}(x,\,xz)\,x\,dx - \int_{-\infty}^0 f_{XY}(x,\,xz)\,x\,dx = \int_{-\infty}^\infty f_{XY}(x,\,xz)\,|x|\,dx.$$


1

D

0

1

1 0 1

(b) Since fXY (x, y) = Kxn ym for (x, y) D and since D contains negative values of

x, we must have n even in order that the density be nonnegative. In this case,

the integral of the density over the region D must be one. Hence, K must be

such that

ZZ Z 1 Z 1 Z 1 m+1 1

n m n y

1 = fXY (x, y) dy dx = Kx y dy dx = Kx dx

1 |x| 1 m + 1 |x|

D

Z 1 Z 1

Kxn K

= [1 |x|m+1 ] dx = xn |x|n+m+1 dx

1 m + 1 m + 1 1

Z 1

2K 2K 1 1

= xn xn+m+1 dx =

m+1 0 m+1 n+1 n+m+2

2K

= .

(n + 1)(n + m + 2)

(c) A sketch of Az with z = 0.3:

1 A

z

1 Az

1 0 1

(d) A sketch of Az D with z = 0.3:

1

1 0 1

(e)

Z z Z 1 Z 1 Z 1

P((X,Y ) Az ) = Kxn ym dy dx + Kxn ym dy dx

z z/x z x


Z z Z 1

Z 1 Z 1

= Kxn ym dy dx + Kxn ym dy dx

z z/x z x

Z

z Z 1

K

= x [1 (z/x)m+1 ] dx +

n

xn [1 xm+1 ] dx

m+1 z z

Z

z Z 1

K n m+1 nm1 n n+m+1

= x z x dx + x x dx

m+1 z z

Z 1 Z

z Z 1

K n m+1 nm1 n+m+1

= x dx z x dx x dx

m+1 z z z

Z

K 1 zn z 1 z(n+m+2)/2

= zm+1 xnm1 dx .

m+1 n+1 z n+m+2

( z )n+m+2 zn+1

.

nm

Otherwise, the integral is equal to

zm+1

ln z.

2

19. Let X uniform[0, w] and Y uniform[0, h]. We need to compute P(XY wh).

Before proceeding, we make a few observations. First, since X 0, we can write for

z > 0,

Z Z Z Z

P(XY z) = fXY (x, y) dy dx = fX (x) fY (y) dy dx.

0 z/x 0 z/x

Since Y uniform[0, h], the inner integral will be zero if z/x > h. Since z/x h if

and only if x z/h,

Z Z h Z

1 z

P(XY z) = fX (x) dy dx = fX (x) 1 dx.

z/h z/x h z/h xh

Z Z

wh w

P(XY wh) = fX (x) 1 dx = fX (x) 1 dx

wh/h xh w x

Z w

1 w w

= 1 dx = (1 ) ln = (1 ) + ln .

w w x w

Z

1 1

E[XY ] = E[cos sin ] = 2 E[sin 2] = sin 2 d

4

cos(2 ) cos(2 )

= = 0.

8


Similarly,

Z

1 sin( ) sin( ) 00

E[X] = E[cos ] = cos d = = = 0,

2 2 2

and

Z

1 cos( ) cos( ) (1) (1)

E[Y ] = E[sin ] = sin d = = = 0.

2 2 2

To prove that X and Y are not independent, we argue by contradiction. However,

before we begin, observe that since (X,Y ) satisfies

X 2 +Y 2 = cos2 + sin2 = 1,

(X,Y ) always lies on the unit circle. Now consider the square of side one centered at

the origin,

S := {(x, y) : |x| 1/2, and |y| 1/2}.

Since this region lies strictly inside the unit circle, P((X,Y ) S) = 0. Now, to obtain

a contradiction suppose that X and Y are independent. Then fXY (x, y) = fX (x) fY (y),

where fX and fY are both arcsine densities by Problem 35 in Chapter 5. Hence, for

|x| < 1 and |y| < 1, fXY (x, y) fXY (0, 0) = 1/ 2 . We can now write

ZZ ZZ

P((X,Y ) S) = fXY (x, y) dx dy 1/ 2 dx dy = 1/ 2 > 0,

S S

which is a contradiction.

21. If E[h(X)k(Y )] = E[h(X)]E[k(Y )] for all bounded continuous functions h and k, then

we may specialize this equation to the functions h(x) = e j1 x and k(y) = e j2 y to show

that the joint characteristic function satisfies

Since the joint characteristic function is the product of the marginal characteristic

functions, X and Y are independent.

22. (a) Following the hint, let D denote the half-plane D := {(x, y) : x > x0 },

distance of point from origin is x0 / cos

y

D

x

x0


and write

ZZ 2 2 )/2

e(x +y

P((X,Y ) D) = dx dy.

2

D

Z /2 Z 2

er /2

P((X,Y ) D) = r dr d

/2 x0 / cos 2

Z

1 /2 2

= er /2 d

2 /2 x0 / cos

Z /2

1

= exp[(x0 / cos )2 /2] d

2 /2

Z /2

1 x02

= exp d .

0 2 cos2

Then cos becomes cos( /2 t) = sint, and the preceding integral becomes

Z

1 /2 x02

exp dt.

0 2 sin2 t

23. Write

Z Z Z

fXY (x, y) 1 fX (x)

fY |X (y|x) dy = dy = fXY (x, y) dy = = 1.

fX (x) fX (x) fX (x)

P (X,Y ) A x < X x + x

P((X,Y ) A|x < X x + x) =

P(x < X x + x)

P (X,Y ) A (X,Y ) (x, x + x] IR

=

P(x < X x + x)

n h io

P (X,Y ) A (x, x + x] IR

=

P(x < X x + x)

ZZ

fXY (t, y) dy dt

A (x,x+x]IR

= Z x+x

fX ( ) d

x


ZZ

IA (t, y)I(x,x+x]IR (t, y) fXY (t, y) dy dt

= Z x+x

fX ( ) d

x

Z x+x Z

IA (t, y) fXY (t, y) dy dt

x

= Z x+x

fX ( ) d

x

Z Z

1 x+x

IA (t, y) fXY (t, y) dy dt

x x

= Z x+x

1

fX ( ) d .

x x

It now follows that

Z

IA (x, y) fXY (x, y) dy

lim P((X,Y ) A|x < X x + x) =

t fX (x)

Z

= IA (x, y) fY |X (y|x) dy.

xex(y+1)

fY |X (y|x) = = xexy , y > 0.

ex

As a function of y, this is an exponential density with parameter x. This is very

different from fY (y) = 1/(y + 1)2 . We next compute, for y > 0,

xex(y+1)

fX|Y (x|y) = = (y + 1)2 xe(y+1)x , x > 0.

1/(y + 1)2

As a function of x this is an Erlang(2, y + 1) density, which is not the same as fX

exp(1).

26. We first compute, for x > 0,

xex(y+1)

fY |X (y|x) = = xexy , y > 0.

ex

As a function of y, this is an exponential density with parameter x. Hence,

Z

E[Y |X = x] = y fY |X (y|x) dy = 1/x.

0

We next compute, for y > 0,

xex(y+1)

fX|Y (x|y) = = (y + 1)2 xe(y+1)x , x > 0.

1/(y + 1)2

As a function of x this is an Erlang(2, y + 1) density. Hence,

Z

E[X|Y = y] = x fX|Y (x|y) dx = 2/(y + 1).

0


27. Write

Z Z Z

P(X B|Y = y) fY (y) dy = fX|Y (x|y) dx fY (y) dy

B

Z Z

= IB (x) fX|Y (x|y) dx fY (y) dy

Z Z

= IB (x) fXY (x, y) dy dx

Z

= fX (x)dx = P(X B).

B

28. For z 0,

2 )/(2 2 )

Z z e(zy

2 2) Z z

2 e(y/ ) /2 ez/(2 1

fZ (z) = p dy = p dy

z z y2 2 2 2 z z y2

2) Z z 2) Z 1

ez/(2 1 ez/(2 1

= p dy = dt

2 0 z y2 2 0 1 t2

ez/(2 2 ) ez/(2 )

2

2 2 2

29. For z 0,

Z Z

fZ (z) = x e xz e x dx + y e yz e y dy

0 0

Z

= 2 x e xz e x dx

0

Z

2 2

= x [ z + ]ex[ z+ ] dx.

z+ 0

Now, this last integral is the expectation of an exponential density with parameter

z + . Hence,

2 2 1 2

fZ (z) = = , z 0.

z+ z+ (z + 1)2

30. Using the law of total probability, substitution, and independence, we have

Z Z

P(X Y ) = P(X Y |X = x) fX (x) dx = P(x Y |X = x) fX (x) dx

Z0 Z 0

= P(Y x) fX (x) dx = e x e x dx

0 0

Z

( + )

= ex( + ) dx = e = .

0 + 0 +


31. Using the law of total probability, substitution, and independence, we have

Z

P(Y / ln(1 + X 2 ) > 1) = P(Y / ln(1 + X 2 ) > 1|X = x) fX (x) dx

Z 2

= P(Y > ln(1 + x2 )|X = x) 1 dx

1

Z 2 Z 2

2

= 2

P(Y > ln(1 + x )) dx = e ln(1+x ) dx

1 1

Z 2 Z 2

2 )1 1

= eln(1+x dx = dx

1 1 1 + x2

= tan1 (2) tan1 (1).

32. First find the cdf using the law of total probability and substitution. Then differentiate

to obtain the density.

(a) For Z = eX Y ,

Z

FZ (z) = P(Z z) = P(eX Y z) = P(eX Y z|X = x) fX (x) dx

Z Z

= P(Y zex |X = x) fX (x) dx = FY |X (zex |x) fX (x) dx.

Then Z

fZ (z) = fY |X (zex |x)ex fX (x) dx,

and so Z

fZ (z) = fXY (x, zex )ex dx.

(b) Since Z = |X + Y | 0, we know that FZ (z) and fZ (z) are zero for z < 0. For

z 0, write

Z

FZ (z) = P(Z z) = P(|X +Y | z) = P(|X +Y | z|X = x) fX (x) dx

Z

= P(|x +Y | z|X = x) fX (x) dx

Z

= P(z x +Y z|X = x) fX (x) dx

Z

= P(z x Y z x|X = x) fX (x) dx

Z

= FY |X (z x|x) FY |X (z x|x) fX (x) dx.

Then

Z

fZ (z) = fY |X (z x|x) fY |X (z x|x)(1) fX (x) dx

Z

= fY |X (z x|x) + fY |X (z x|x) fX (x) dx

Z

= fXY (x, z x) + fXY (x, z x) dx.


33. (a) First find the cdf of Z using the law of total probability, substitution, and inde-

pendence. Then differentiate to obtain the density. Write

Z

FZ (z) = P(Z z) = P(Y /X z) = P(Y /X z|X = x) fX (x) dx

Z Z

= P(Y /x z|X = x) fX (x) dx = P(Y /x z) fX (x) dx

Z 0 Z

= P(Y /x z) fX (x) dx + P(Y /x z) fX (x) dx

0

Z 0 Z

= P(Y zx) fX (x) dx + P(Y zx) fX (x) dx

0

Z 0 Z

= [1 FY (zx)] fX (x) dx + FY (zx) fX (x) dx.

0

Then

Z 0 Z

fZ (z) = fY (zx)x fX (x) dx + fY (zx)x fX (x) dx

0

Z 0 Z

= fY (zx)|x| fX (x) dx + fY (zx)|x| fX (x) dx

0

Z

= fY (zx) fX (x)|x| dx.

(b) Using the result of part (a) and the fact that the integrand is even, write

Z |zx|2 /(2 2 ) (x/ )2 /2 Z

e e 2 x (x/ )2 [1+z2 ]/2

fZ (z) = |x| dx = e dx.

2 2 2 0 2

Now make the change of variable = (x/ ) 1 + z2 to get

Z

1/ 2 /2 1/ 2 /2 1/

fZ (z) = e d = e = ,

1 + z2 0 1+z 2 0 1 + z2

which is the Cauchy(1) density.
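Part (b) is easy to confirm by simulation; for Cauchy(1), P(|Z| ≤ 1) = 1/2 (the value σ = 1.5 below is an arbitrary choice, since the law of the ratio does not depend on σ):

```python
import random

# Y/X with independent N(0, sigma^2) numerator and denominator is Cauchy(1).
random.seed(8)
sigma, n = 1.5, 300_000
hits = sum(
    abs(random.gauss(0.0, sigma) / random.gauss(0.0, sigma)) <= 1.0
    for _ in range(n)
)
print(hits / n)  # near 0.5
```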

(c) Using the result of part (a) and the fact that the integrand is even, write

Z Z

|zx| |x| 2

fZ (z) = 2e 2e |x| dx = xex (|z|+1) dx

2 0

Z

2 /2

= x (|z| + 1)e (|z|+1)x dx,

(|z| + 1) 0

where this last integral is the mean of an exponential density with parameter

(|z| + 1). Hence,

2 /2 1 1

fZ (z) = = .

(|z| + 1) (|z| + 1) 2(|z| + 1)2


(d) For z > 0, use the result of part (a) and the fact that the integrand is even, write

Z 2 Z 1/z x2 /2

1 ex /2 e

fZ (z) = 2 I[1,1] (zx)

|x| dx = x dx

2 0 2

2

1 2 1/z 1 e1/(2z )

= (ex /2 )0 = .

2 2

The same formula holds for z < 0, and it is easy to check that fZ (0) = 1/ 2 .

(e) Since Z 0, for z 0, we use the result from part (a) to write

Z Z

zx (zx/ )2 /2 x (x/ )2 /2 x 3 (x/ )2 (z2 +1)/2 dx

fZ (z) = 2

e 2

e x dx = z e

0 0

Z Z 3

2 (z2 +1)/2 t 2 /2 dt

= z 3 e d = z et

0 0 z2 + 1 z2 + 1

Z Z

z 2 /2 2z

= t 3 et dt = ses ds.

(z + 1)2

2

0 (z + 1)2

2

0

This last integral is the mean of an exp(1) density, which is one. Hence fZ (z) =

2z/(z2 + 1)2 .
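Part (e) can also be spot-checked by simulation. This sketch is not part of the manual; the seed and sample size are arbitrary, and it uses the fact (certain from the formula above) that $f_Z(z)=2z/(z^2+1)^2$ integrates to the cdf $F_Z(z)=z^2/(1+z^2)$. The common Rayleigh scale cancels in the ratio, so scale 1 is used:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.rayleigh(scale=1.0, size=1_000_000)
y = rng.rayleigh(scale=1.0, size=1_000_000)
z = y / x  # ratio of independent Rayleigh variables (scale cancels)

# f_Z(z) = 2z/(z^2+1)^2 integrates to F_Z(z) = z^2/(1+z^2).
for t in (0.5, 1.0, 2.0):
    assert abs(np.mean(z <= t) - t**2 / (1 + t**2)) < 0.005
```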

34. For the cdf, use the law of total probability, substitution, and independence to write

Z

FZ (z) = P(Z z) = P(Y / ln X z) = P(Y / ln X z|X = x) fX (x) dx

0

Z Z

= P(Y / ln x z|X = x) fX (x) dx = P(Y / ln x z) fX (x) dx

0 0

Z 1 Z

= P(Y z ln x) fX (x) dx + P(Y z ln x) fX (x) dx.

0 1

Then

Z 1 Z

fZ (z) = fY (z ln x)(ln x) fX (x) dx + fY (z ln x)(ln x) fX (x) dx

Z0 1

0

35. Use the law of total probability, substitution, and independence to write

$$E[e^{(X+Z)U}] = \int_{-1/2}^{1/2} E[e^{(X+Z)U}\mid U=u]\, du = \int_{-1/2}^{1/2} E[e^{(X+Z)u}\mid U=u]\, du$$
$$= \int_{-1/2}^{1/2} E[e^{(X+Z)u}]\, du = \int_{-1/2}^{1/2} E[e^{Xu}]\,E[e^{Zu}]\, du = \int_{-1/2}^{1/2}\frac{1}{(1-u)^2}\, du$$
$$= \frac{1}{1-u}\Big|_{-1/2}^{1/2} = \frac{1}{1-1/2} - \frac{1}{1+1/2} = 2 - \frac{2}{3} = \frac{4}{3}.$$


Z 2 Z 2 Z 2

E[X 2Y ] = E[X 2Y |Y = y] dy = E[X 2 y|Y = y] dy = yE[X 2 |Y = y] dy

1 1 1

Z 2 Z 2

= y (2/y2 ) dy = 2 1/y dy = 2 ln 2.

1 1

Z Z

E[X nY r ] = E[X nY r |Y = y] fY (y) dy = E[X n yr |Y = y] fY (y) dy

0 0

Z Z

(n + p)

= yr E[X n |Y = y] fY (y) dy =fY (y) dy yr

0 yn (p) 0

Z

(n + p) rn (n + p) (n + p) (r n)!

= y fY (y) dy = E[Y rn ] = rn .

(p) 0 (p) (p)

38. (a) Use the law of total probability, substitution, and independence to find the cdf.

Write

Z

FY (y) = P(Y y) = P(eVU y) = P(eVU y|V = v) fV (v) dv

0

Z Z

vU

= P(e y|V = v) fV (v) dv = P(evU y) fV (v) dv

0 0

Z

= P(U 1v ln y) fV (v) dv.

0

Then Z

1 1

fY (y) = vy fU ( v ln y) fV (v) dv.

0

separately. For y > 1, 1v ln y 0 for all v 0, and 1v ln y 1/2 for v 2 ln y.

Thus, fU ( 1v ln y) = 1 for v 2 ln y, and we can write

Z

e2 ln y 1

fY (y) = 1

yv vev dv = = 3.

2 ln y y y

Z

1 2 ln y

fY (y) = 1

yv vev dv = e = y.

2 ln y y

1/y3 , y 1,

fY (y) = y, 0 y < 1,

0, y < 0.


Z 1 Z

1 1 4

E[Y ] = y2 dy + dy = +1 = .

0 1 y2 3 3

(c) Using the law of total probability, substitution, and independence, we have

Z 1/2 Z 1/2

VU VU

E[e ] = E[e |U = u] du = E[eVu |U = u] du

1/2 1/2

Z 1/2 Z 1/2

1 1 1/2 4

= E[eVu ] du = du = = .

1/2 1/2 (1 u)

2 1 u 1/2 3

39. (a) This problem is interesting because the answer does not depend on the random

variable X. Assuming X has a density fX (x), first write

Z

E[cos(X +Y )] = E[cos(X +Y )|X = x] fX (x) dx

Z

= E[cos(x +Y )|X = x] fX (x) dx.

Z x+ Z 2x+

dy d

E[cos(x +Y )|X = x] = cos(x + y) = cos = 0,

x 2 2x 2

since we are integrating cos over an interval of length 2 . Thus, E[cos(X +

Y )] = 0 as well.

(b) Write

Z Z 2

P(Y > y) = P(Y > y|X = x) fX (x) dx = P(Y > y|X = x) dx

1

Z 2

ey e2y

= exy dx = .

1 y

(c) Begin in the usual way by writing

Z Z

E[XeY ] = E[XeY |X = x] fX (x) dx = E[xeY |X = x] fX (x) dx

Z

= xE[eY |X = x] fX (x) dx.

2 x2 /2 2 /2

E[eY |X = x] = E[esY |X = x] = es = ex .

s=1 s=1

Z Z 7

2 /2 1 2 /2 1 x2 /2 7

E[XeY ] = xex fX (x) dx = xex dx = e 3

4 3 4

e49/2 e9/2

= .

4


(d) Write

Z Z 2

E[cos(XY )] = E[cos(XY )|X = x] fX (x) dx = E[cos(xY )|X = x] dx

1

Z 2 Z 2

= E[Re(e jxY )|X = x] dx = Re E[e jxY |X = x] dx

1 1

Z 2 Z 2

2 (1/x)/2

= Re ex dx = ex/2 dx = 2(e1/2 e1 ).

1 1

40. Using the law of total probability, substitution, and independence,

$$M_Y(s) = E[e^{sY}] = E[e^{sZX}] = \int_0^{\infty} E[e^{sZX}\mid Z=z]\, f_Z(z)\, dz$$
$$= \int_0^{\infty} E[e^{szX}\mid Z=z]\, f_Z(z)\, dz = \int_0^{\infty} E[e^{szX}]\, f_Z(z)\, dz$$
$$= \int_0^{\infty} e^{(sz)^2\sigma^2/2}\, f_Z(z)\, dz = \int_0^{\infty} e^{(sz)^2\sigma^2/2}\, z\,e^{-z^2/2}\, dz$$
$$= \int_0^{\infty} z\, e^{-(1-s^2\sigma^2)z^2/2}\, dz.$$

Now make the change of variable $t = z\sqrt{1-s^2\sigma^2}$ to get

$$M_Y(s) = \int_0^{\infty} e^{-t^2/2}\,\frac{t\, dt}{1-s^2\sigma^2} = \frac{1}{1-s^2\sigma^2} = \frac{1/\sigma^2}{1/\sigma^2 - s^2}.$$

Hence, $Y \sim$ Laplace$(1/\sigma)$.
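The Laplace conclusion can be checked by Monte Carlo. This sketch is not part of the original solution; the seed, sample size, and value of sigma are arbitrary. It samples $Y = ZX$ with $Z$ Rayleigh (density $z e^{-z^2/2}$) and $X \sim N(0,\sigma^2)$ independent, and compares the empirical cdf with the Laplace$(1/\sigma)$ cdf, which for $t \ge 0$ is $1 - e^{-t/\sigma}/2$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.5
Z = rng.rayleigh(scale=1.0, size=1_000_000)  # density z e^{-z^2/2}, z >= 0
X = rng.normal(0.0, sigma, 1_000_000)
Y = Z * X

# Laplace(1/sigma) cdf for t >= 0 is 1 - exp(-t/sigma)/2.
for t in (0.5, 1.5, 3.0):
    assert abs(np.mean(Y <= t) - (1 - np.exp(-t / sigma) / 2)) < 0.005
```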

41. Using the law of total probability and substitution,

Z Z

n m n m

E[X Y ] = E[X Y |Y = y] fY (y) dy = E[X n ym |Y = y] fY (y) dy

0

Z Z 0

= ym E[X n |Y = y] fY (y) dy = ym 2n/2 yn (1 + n/2) fY (y) dy

0 0

(n + m)!

= 2n/2 (1 + n/2)E[Y n+m ] = 2n/2 (1 + n/2) .

n+m

42. (a) We use the law of total probability, substitution, and independence to write

Z

FZ (z) = P(Z z) = P(X/Y z) = P(X/Y z|Y = y) fY (y) dy

0

Z Z

= P(X/y z|Y = y) fY (y) dy = P(X zy|Y = y) fY (y) dy

Z0 Z 0

0 0

Differentiating, we have

Z Z

( zy) p1 e zy ( y)q1 e y

fZ (z) = fX (zy)y fY (y) dy = y dy.

0 0 (p) (q)


Z

(zw) p1 ezw wq1 ew

fZ (z) = w dw

0 (p) (q)

Z

z p1

= w p+q1 ew(1+z) dw.

(p)(q) 0

w = /(1 + z). Then

Z p+q1

z p1 d

fZ (z) = e

(p)(q) 0 1 + z (1 + z)

Z

z p1

= p+q1 e d

(p)(q)(1 + z) p+q 0

| {z }

= (p+q)

z p1

= .

B(p, q)(1 + z) p+q

(b) Starting with V := Z/(1 + Z), we first write

Z

FV (v) = P v = P(Z v + vZ) = P(Z(1 v) v)

1+Z

= P(Z v/(1 v)).

Differentiating, we have

v (1 v) + v v 1

fV (v) = fZ 2

= fZ .

1 v (1 v) 1 v (1 v)2

Now apply the formula derived in part (a) and use the fact that

v 1

1+ =

1v 1v

to get

[v/(1 v)] p1 1 v p1 (1 v)q1

fV (v) =

1 p+q (1 v)2

= ,

B(p, q)( 1v ) B(p, q)

which is the beta density with parameters p and q.
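The conclusion of part (b), that $V = X/(X+Y)$ is beta$(p,q)$ when $X$ and $Y$ are independent gamma$(p,\lambda)$ and gamma$(q,\lambda)$, can be checked numerically. This sketch is not from the manual; the parameter values and seed are arbitrary. It compares the sample mean and variance of $V$ with the beta$(p,q)$ mean $p/(p+q)$ and variance $pq/[(p+q)^2(p+q+1)]$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, lam = 2.0, 3.0, 1.5
X = rng.gamma(shape=p, scale=1.0 / lam, size=1_000_000)  # gamma(p, lam)
Y = rng.gamma(shape=q, scale=1.0 / lam, size=1_000_000)  # gamma(q, lam)
V = X / (X + Y)  # claimed beta(p, q)

mean = p / (p + q)
var = p * q / ((p + q) ** 2 * (p + q + 1))
assert abs(V.mean() - mean) < 1e-3
assert abs(V.var() - var) < 1e-3
```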

43. Put q := (n 1)p and Zi := j6=i X j , which is gamma(q, ) by Problem 55(c) in Chap-

ter 4. Now observe that Yi = Xi /(Xi + Zi ), which has a beta density with parameters p

and q := (n 1)p by Problem 42(b).

44. Using the law of total probability, substitution, and independence,

p p

FZ (z) = P(Z z) = P(X/ Y /k z) = P(X z Y /k )

Z p Z p

= P(X z Y /k |Y = y) fY (y) dy = P(X z y/k |Y = y) fY (y) dy

Z0 p

0

0


Then

Z p p

fZ (z) = fX (z y/k ) y/k fY (y) dy

0

Z (z2 y/k)/2 p

e 1

(y/2)k/21 ey/2

= y/k 2 dy

0 2 (k/2)

Z

1 2

= (y/2)k/21/2 ey(1+z /k)/2 dy.

2 k (k/2) 0

Z k/21/2

1 d

fZ (z) = e

k (k/2) 0 1 + z2 /k 1 + z2 /k

Z

1

= k/2+1/21 e d

(1/2) k (k/2)(1 + z2 /k)k/2+1/2 0

1 (1 + z2 /k)(k+1)/2

= (k/2 + 1/2) = ,

(1/2) k (k/2)(1 + z2 /k)k/2+1/2 k B(1/2, k/2)

45. We use the law of total probability, substitution, and independence to write

Z

FZ (z) = P(Z z) = P(X/Y z) = P(X/Y z|Y = y) fY (y) dy

0

Z Z

= P(X/y z|Y = y) fY (y) dy = P(X zy|Y = y) fY (y) dy

Z0 Z 0

0 0

Differentiating, we have

Z Z r r

( zy) p1 e( zy) ( y)q1 e( y)

fZ (z) = fX (zy)y fY (y) dy = r r y dy.

0 0 (p/r) (q/r)

Z r r

(zw) p1 e(zw) wq1 ew

fZ (z) = r r w dw

0 (p/r) (q/r)

Z

r2 z p1 r r

= w p+q1 ew (1+z ) dw.

(p/r)(q/r) 0

w = ( /[1 + zr ])1/r . Then

Z (p+q1)/r

r2 z p1 d

fZ (z) = e (r1)/r

(p/r)(q/r) 0 1 + zr r(1 + zr )

1+zr


Z

rz p1

= r (p+q)/r

(p+q)/r1 e d

(p/r)(q/r)(1 + z ) | 0

{z }

= ((p+q)/r)

rz p1

= .

B(p/r, q/r)(1 + zr )(p+q)/r

Z z Z z

1 1 1

fZ (z) = y1/2 (z y)1/2 dy = p dy.

4 0 4z 0 (y/z)(1 (y/z))

Z 1

1 1 1 1 1

fZ (z) = dt = sin t = /4.

2 0 1 t2 2 0

Z 1 Z

1 1 1 1/ z 1

fZ (z) = p dy = dt

4z

z1 (y/z)(1 (y/z)) 2 11/z 1 t2

1 1 1/ z 1 1 p

= sin t = sin (1/ z ) sin1 ( 1 1/z ) .

2 11/z 2

Putting this all together yields

/4, p 0 < z 1,

1 1 1

fZ (z) = sin (1/ z ) sin ( 1 1/z ) , 1 < z 2,

2

0, otherwise.

1 v u

fUV (u, v) = (u, v) = (u) p p ,

1 2 1 2

| {z }

N( u, 1 2 ) density in v

we see that

Z Z

v u 1

fU (u) = fUV (u, v) dv = (u) p p dv

1 2 1 2

Z

1 v u

= (u) p p dv = (u).

1 2 1 2

| {z }

density in v integrates to one

Similarly writing

u v 1

fUV (u, v) = (u, v) = p p (v),

1 2 1 2


we have

Z Z

u v

1

fV (v) = fUV (u, v) du = p p (v) du

1 2 1 2

Z

1 v u

= (v) p p du = (v).

1 2 1 2

48. Using

1 v u

(u, v) = (u) p p ,

1 2 1 2

we can write

1 x mX y mY

fXY (x, y) = ,

X Y X Y

! ym xmX

!

Y X

Y

1 x mX 1

= p p . ()

X Y X 1 2 1 2

R

Then in f XY (x, y) dy, make the change of variable v = (y mY )/Y , dv = dy/Y

to get

Z Z !

1 x mX 1 v xm

X

X

fX (x) = fXY (x, y) dy = p p dv

X X 1 2 1 2

| {z }

density in v integrates to one

1 x mX

= .

X X

ymY

!

fXY (x, y) 1 Y xm

X

X

fY |X (y|x) = = p p

fX (x) Y 1 2 1 2

!

1 y [mY + YX (x mX )]

= p p .

Y 1 2 Y 1 2

Thus, fY |X ( |x) N mY + YX (x mX ), Y2 (1 2 ) . Proceeding in an analogous

way, using

1 u v

(u, v) = p p (v),

1 2 1 2

we can write

1 x mX y mY

fXY (x, y) = ,

X Y X Y

!

1

xmX

X ym

Y

Y

y mY

= p p . ()

X Y 1 2 1 2 Y


R

Then in f XY (x, y) dx, make the change of variable u = (x mX )/X , du = dx/X

to get

Z Z !

1 y mY 1 u ym

Y

Y

fY (y) = fXY (x, y) dx = p p du

Y Y 1 2 1 2

| {z }

density in u integrates to one

1 y mY

= .

Y Y

!

fXY (x, y) 1

xmX

X ym

Y

Y

fX|Y (x|y) = = p p

fY (y) X 1 2 1 2

!

1 x [mX + YX (y mY )]

= p p .

X 1 2 X 1 2

Thus, fX|Y ( |y) N mX + YX (y mY ), X2 (1 2 ) .

Y

fY |X ( |x) N mY + (x mX ), Y2 (1 2 )

X

and

X

fX|Y ( |y) N mX + (y mY ), X2 (1 2 ) .

Y

Hence, Z

Y

E[Y |X = x] = y fY |X (y|x) dx = mY + (x mX ),

X

and Z

X

E[X|Y = y] = x fX|Y (x|y) dx = mX + (y mY ).

Y

50. From Problem 48, we know that fX N(mX , X2 ). Hence, E[X] = mX and E[X 2 ] =

var(X) + m2X = X2 + m2X . To compute cov(X,Y ), we use the law of total probability

and substitution to write

Z

cov(X,Y ) = E[(X mX )(Y mY )] = E[(X mX )(Y mY )|Y = y] fY (y) dy

Z

= E[(X mX )(y mY )|Y = y] fY (y) dy

Z

= (y mY )E[(X mX )|Y = y] fY (y) dy

Z

= (y mY ) E[X|Y = y] mX fY (y) dy

Z n o

X

= (y mY ) (y mY ) fY (y) dy

Y


Z

X X

= (y mY )2 fY (y) dy = E[(Y mY )2 ]

Y Y

X

= Y2 = X Y .

Y

It then follows that

cov(X,Y )

= .

X Y

51. (a) Using the results of Problem 47, we have

Z Z

1

fU (u) = fUV (u, v) dv = 1 (u, v) + 2 (u, v) dv

2

1

= [ (u) + (u)] = (u),

2

and

Z Z

1

fV (v) = fUV (u, v) du = 1 (u, v) + 2 (u, v) du

2

1

= [ (v) + (v)] = (v).

2

Thus, fU and fV are N(0, 1) densities.

(b) Write

Z Z

:= E[UV ] = uv fUV (u, v) du dv

Z Z

1

= uv [1 (u, v) + 2 (u, v)] du dv

2

Z Z Z Z

1

= uv1 (u, v) du dv + uv2 (u, v) du dv

2

1 + 2

= .

2

(c) If indeed

1

fUV (u, v) = [ (u, v) + 2 (u, v)]

2 1

is a bivariate normal density, then

1

exp 2(1 2 )

[u2 2 uv + v2 ]

fUV (u, v) = p .

2 1 2

In particular then,

1

fUV (u, u) = [ (u, u) + 2 (u, u)],

2 1

or " #

2 2 2

eu /(1+ ) 1 eu /(1+1 ) eu /(1+2 )

p = q + q .

2 1 2 2 2 1 2 2 1 2

1 2


1 1 1

p = q = q = 0,

2 1 2 2

4 1 1 4 1 22

which is false.

(d) First observe that

fUV (u, v) 1 1 (u, v) 2 (u, v)

fV |U (v|u) = = + .

fU (u) 2 (u) (u)

| {z } | {z }

N(1 u,112 ) N(2 u,122 )

Hence,

Z

1

v2 fV |U (v|u) dv = [(1 12 ) + (1 u)2 + (1 22 ) + (2 u)2 ]

2

1

= [2 12 22 + (12 + 22 )u2 ],

2

which depends on u unless 1 = 2 = 0.

52. For u0 , v0 0, let D := {(u, v) : u u0 , v v0 }. Then

ZZ

P(U > u0 ,V > v0 ) = (u, v) du dv

D

Z tan1 (v0 /u0 ) Z

= (r cos , r sin )r dr d (#)

0 v0 / sin

Z /2 Z

+ (r cos , r sin )r dr d . (##)

tan1 (v0 /u0 ) u0 / cos

Now, since

r2

exp 2(1 2 )

[1 sin 2 ]

(r cos , r sin ) = p ,

2 1 2

we can express the anti-derivative of r (r cos , r sin ) with respect to r in closed

form as p

1 2 r2

exp [1 sin 2 ] .

2 (1 sin 2 ) 2(1 2 )

Hence, the double integral in (#) reduces to

Z tan1 (v0 /u0 )

h(v20 , ) d .

0

Z /2 p

1 2 u20

exp [1 sin 2 ] d .

tan1 (v0 /u0 ) 2 (1 sin 2 ) 2(1 2 ) cos2


Z /2tan1 (v0 /u0 )

h (u20 ,t) dt.

0

Z /4 Z /4

Q(x0 )2 = h0 (x02 , ) d + h0 (x02 , ) d

0 0

Z /4 2 Z /4

exp[x02 /(2 sin )] 1 x02

= 2 d = exp d .

0 2 0 2 sin2

54. Factor

2 exp[|x y| (y z)2 /2]

fXY Z (x, y, z) = , z 1,

z5 2

as

2

4 e(yz) /2 1 |xy|

fXY Z (x, y, z) = 2e .

z5 2

Now, the second factor on the right is an N(z, 1) density in the variable y, and the

third factor is a Laplace(1) density that has been shifted to have mean y. Hence, the

integral of the third factor with respect to x yields one, and we have by inspection that

2

4 e(yz) /2

fY Z (y, z) = , z 1.

z5 2

We then easily see that

fXY Z (x, y, z) 1 |xy|

fX|Y Z (x|y, z) := = 2e , z 1.

fY Z (y, z)

Next, since the right-hand factor in the formula for fY Z (y, z) is an N(z, 1) density in y,

if we integrate this factor with respect to y, we get one. Thus,

4

fZ (z) = 5 , z 1.

z

We can now see that

2

fY Z (y, z) e(yz) /2

fY |Z (y|z) := = , z 1.

fZ (z) 2

55. To find fXY (x, y), first write

2 /2 2 /2 2 /2 2 /2 2 2 /4

e(xy) e(yz) ez e(xy) e(zy/2) ey

fXY Z (x, y, z) := =

(2 )3/2 (2 )3/2

2 /2

2 2

e(xy) e(y/ 2 ) /2 e(zy/2)

=

2 2

2 /2

2 2

e(xy) e(y/ 2 ) /2 e(zy/2)

=

2 2 2 / 2

2 /2

2 2 /2

e(xy) e(y/ 2 ) /2 e[(zy/2)/(1/ 2 )]

= .

2 2 2 / 2


Now the right-hand factor is an N(y/2, 1/2) density in the variable z. Hence, its

integral with respect to z is one. We thus have

2 /2

2 2 2

e(xy) e(y/ 2 ) /2 e(y/ 2 ) /2 e(xy) /2

fXY (x, y) = = ,

2 2 2 2 2

which shows that Y N(0, 2), and given Y = y, X is conditionally N(y, 1). Thus,

E[Y ] = 0 and var(Y ) = 2.

Next,

Z Z

E[X] = E[X|Y = y] fY (y) dy = y fY (y) dy = E[Y ] = 0,

and

Z Z

var(X) = E[X 2 ] = E[X 2 |Y = y] fY (y) dy = (1 + y2 ) fY (y) dy

= 1 + E[Y 2 ] = 1 + var(Y ) = 1 + 2 = 3.

Finally,

Z Z

E[XY ] = E[XY |Y = y] fY (y) dy = E[Xy|Y = y] fY (y) dy

Z Z

= yE[X|Y = y] fY (y) dy = y2 fY (y) dy = E[Y 2 ] = var(Y ) = 2.

Z Z

E[XY ] = E[XY |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[Xy|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= yE[X|Y = y, Z = z] fY Z (y, z) dy dz.

Since fX|Y Z ( |y, z) N(y, z2 ), the preceding conditional expectation is just y. Hence,

Z Z Z

E[XY ] = y2 fY Z (y, z) dy dz = E[Y 2 ] = E[Y 2 |Z = z] fZ (z) dz.

Since fY |Z ( |z) exp(z), the preceding conditional expectation is just 2/z2 . Thus,

Z Z 2

2 2 3 6

E[XY ] = fZ (z) dz = z2 dz = .

z2 1 z2 7 7

A similar analysis yields

Z Z

E[Y Z] = E[Y Z|Z = z] fZ (z) dz = E[Y z|Z = z] fZ (z) dz

Z Z Z

= zE[Y |Z = z] fZ (z) dz = z(1/z) fZ (z) dz = fZ (z) dz = 1.


57. Write

Z Z

E[XY Z] = E[XY Z|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[Xyz|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= yzE[X|Y = y, Z = z] fY Z (y, z) dy dz.

Since fX|Y Z ( |y, z) is a shifted Laplace density with mean y, the preceding conditional

expectation is just y. Hence,

Z Z Z

2 2

E[XY Z] = y z fY Z (y, z) dy dz = E[Y Z] = E[Y 2 Z|Z = z] fZ (z) dy

Z Z

2

= E[Y z|Z = z] fZ (z) dy = zE[Y 2 |Z = z] fZ (z) dy.

Since fY |Z ( |z) N(z, 1), the preceding conditional expectation is just 1 + z2 . Thus,

Z Z Z

E[XY Z] = z(1 + z2 ) fZ (z) dz = [z + z3 ] 4/z5 dz = 4 z4 + z2 dz

1 1

= 4(1/3 + 1) = 16/3.

58. Write

Z Z

E[XY Z] = E[XY Z|X = x,Y = y] fXY (x, y) dx dy

Z Z

= E[xyZ|X = x,Y = y] fXY (x, y) dx dy

Z Z

= xyE[Z|X = x,Y = y] fXY (x, y) dx dy

Z Z Z

= x2 y fXY (x, y) dx dy = E[X 2Y ] = E[X 2Y |X = x] fX (x) dx

Z Z

2

= E[x Y |X = x] fX (x) dx = x2 E[Y |X = x] fX (x) dx

Z Z 2

= x3 fX (x) dx = x3 dx = 15/4.

1

59. We use the law of total probability, substitution, and independence to write

N N

Y ( ) = E[e jY ] = E[e j i=1 Xi ] = E[e j i=1 Xi |N = n]P(N = n)

n=1

n

= E[e j i=1 Xi |N = n]P(N = n)

n=1

n

= E[e j i=1 Xi ]P(N = n)

n=1


n

j Xi

= E e P(N = n)

n=1 i=1

n

j Xi

= E[e ] P(N = n)

n=1 i=1

= X ( )n P(N = n) = GN (X ( )).

n=1

Since $\varphi_X(\nu) = \lambda/(\lambda - j\nu)$,

$$\varphi_Y(\nu) = \frac{(1-p)\varphi_X(\nu)}{1-p\varphi_X(\nu)} = \frac{(1-p)\lambda/(\lambda-j\nu)}{1-p\lambda/(\lambda-j\nu)} = \frac{(1-p)\lambda}{(\lambda-j\nu)-p\lambda} = \frac{(1-p)\lambda}{(1-p)\lambda-j\nu},$$

which is the characteristic function of an $\exp((1-p)\lambda)$ random variable.
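The random-sum result above can be checked by simulation. This sketch is not part of the manual, and it assumes the parameterization implied by the pgf $G_N(z) = (1-p)z/(1-pz)$, i.e. $P(N=n) = (1-p)p^{n-1}$ on $n \ge 1$, with the $X_i$ iid $\exp(\lambda)$; the seed and parameter values are arbitrary. The geometric sum of exponentials should then be $\exp((1-p)\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(4)
p, lam, n = 0.4, 2.0, 500_000

# N on {1,2,...} with P(N = k) = (1-p) p^{k-1}  (success probability 1-p)
N = rng.geometric(1 - p, size=n)
jumps = rng.exponential(1.0 / lam, N.sum())
starts = np.concatenate(([0], np.cumsum(N)[:-1]))
Y = np.add.reduceat(jumps, starts)  # Y_i = sum of N_i iid exp(lam) variables

# Y should be exp((1-p) lam): mean 1/((1-p) lam), cdf 1 - exp(-(1-p) lam t)
assert abs(Y.mean() - 1.0 / ((1 - p) * lam)) < 0.01
t = 1.0
assert abs(np.mean(Y <= t) - (1 - np.exp(-(1 - p) * lam * t))) < 0.01
```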

CHAPTER 8

Problem Solutions

1. We have

$$\begin{bmatrix}10&40\\20&50\\30&60\end{bmatrix}\begin{bmatrix}7&8&9\\4&5&6\end{bmatrix} = \begin{bmatrix}230&280&330\\340&410&480\\450&540&630\end{bmatrix}$$

and

$$\operatorname{tr}\begin{bmatrix}230&280&330\\340&410&480\\450&540&630\end{bmatrix} = 230+410+630 = 1270.$$
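The arithmetic above is easy to verify with numpy (a sanity check, not part of the manual):

```python
import numpy as np

A = np.array([[10, 40],
              [20, 50],
              [30, 60]])
B = np.array([[7, 8, 9],
              [4, 5, 6]])
C = A @ B
expected = np.array([[230, 280, 330],
                     [340, 410, 480],
                     [450, 540, 630]])
assert (C == expected).all()
assert np.trace(C) == 1270
```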

3. MATLAB. We have

7 4

A0 = 8 5 .

9 6

4. Write

$$\operatorname{tr}(AB) = \sum_{i=1}^{r}(AB)_{ii} = \sum_{i=1}^{r}\sum_{k=1}^{n}A_{ik}B_{ki} = \sum_{k=1}^{n}\sum_{i=1}^{r}B_{ki}A_{ik} = \sum_{k=1}^{n}(BA)_{kk} = \operatorname{tr}(BA).$$
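As a quick numerical illustration of $\operatorname{tr}(AB)=\operatorname{tr}(BA)$ (not part of the manual; matrix sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 6))  # r x n
B = rng.normal(size=(6, 4))  # n x r
# tr(AB) = tr(BA) even though AB is 4x4 and BA is 6x6
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```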

5. (a) Write

r r n r n

0 0 0

tr(AB ) = (AB )ii = Aik (B )ki = Aik Bik .

i=1 i=1 k=1 i=1 k=1

r n

0 = tr(AA0 ) = A2ik ,

i=1 k=1

which implies Aik = 0 for all i and k. In other words, A is the zero matrix of size

r n.

6. Following the hint, we first write

0 kxk2 2 2

+ 4

kyk2 = kxk2 ,

kyk kyk kyk2


Chapter 8 Problem Solutions 133

which can be rearranged to get |hx, yi|2 kxk2 kyk2 . Conversely, suppose |hx, yi|2 =

kxk2 kyk2 . There are two cases to consider. If y 6= 0, then reversing the above sequence

of observations implies 0 = kx yk2 , which implies x = y. On the other hand, if

y = 0 and if

|hx, yi| = kxk kyk,

then we must have kxk = 0; i.e., x = 0 and y = 0, and in this case x = y for all .

7. Consider the i j component of E[XB]. Since

(E[XB])i j = E[(XB)i j ] = E Xik Bk j = E[Xik ]Bk j = (E[X])ik Bk j

k k k

= (E[X]B)i j

holds for all i j, E[XB] = E[X]B.

8. $\operatorname{tr}(E[X]) = \sum_i (E[X])_{ii} = \sum_i E[X_{ii}] = E\Bigl[\sum_i X_{ii}\Bigr] = E[\operatorname{tr}(X)]$.

9. Write

$$E[\|X-E[X]\|^2] = E[(X-E[X])'(X-E[X])], \quad\text{which is a scalar,}$$
$$= \operatorname{tr}\bigl\{E[(X-E[X])'(X-E[X])]\bigr\}$$
$$= E\bigl[\operatorname{tr}\{(X-E[X])'(X-E[X])\}\bigr], \quad\text{by Problem 8,}$$
$$= E\bigl[\operatorname{tr}\{(X-E[X])(X-E[X])'\}\bigr], \quad\text{by Problem 4,}$$
$$= \operatorname{tr}\bigl\{E[(X-E[X])(X-E[X])']\bigr\}, \quad\text{by Problem 8,}$$
$$= \operatorname{tr}(C) = \sum_{i=1}^{n} C_{ii} = \sum_{i=1}^{n}\operatorname{var}(X_i).$$
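The identity $E[\|X-E[X]\|^2] = \operatorname{tr}(C) = \sum_i \operatorname{var}(X_i)$ can be illustrated numerically. This sketch is not part of the manual; the covariance matrix, sample size, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)
C_true = np.array([[2.0, 0.5, 0.0],
                   [0.5, 1.0, 0.3],
                   [0.0, 0.3, 4.0]])
X = rng.multivariate_normal([1.0, -2.0, 0.0], C_true, size=400_000)

C_hat = np.cov(X, rowvar=False)
# tr(C) equals the sum of the per-component variances ...
assert np.isclose(np.trace(C_hat), X.var(axis=0, ddof=1).sum())
# ... and estimates E[ ||X - E[X]||^2 ] = 2 + 1 + 4 = 7
mean_sq = np.mean(np.sum((X - X.mean(axis=0)) ** 2, axis=1))
assert abs(mean_sq - 7.0) < 0.1
```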

0

10. Since E [X,Y, Z]0 = E[X], E[Y ], E[Z] , and

E[X 2 ] E[XY ] E[XZ] E[X]2 E[X]E[Y ] E[X]E[Z]

cov([X,Y, Z]0 ) = E[Y X] E[Y 2 ] E[Y Z] E[Y ]E[X] E[Y ]2 E[Y ]E[Z] ,

E[ZX] E[ZY ] E[Z 2 ] E[Z]E[X] E[Z]E[Y ] E[Z]2

we begin by computing all the entries of these matrices. For the mean vector,

Z 2 Z 2 2

3 2

E[Z] = 3

z 7 z dz = 7 z3 dz = 37 41 z4 = 3 15/28 = 45/28.

1 1 1

Next,

Z 2 Z 2 Z 2 2

3 2

E[Y ] = E[Y |Z = z] fZ (z) dz = 1

z 7 z dz = 3

7 z dz = 3

7 12 z2 = 9/14.

1 1 1 1

E[X] = E[ZU +Y ] = E[Z]E[U] + E[Y ] = E[Y ] = 9/14.


E [X,Y, Z]0 = [9/14, 9/14, 45/28]0 .

Z 2 Z 2

E[Y Z] = E[Y Z|Z = z] fZ (z) dz = E[Y z|Z = z] fZ (z) dz

1 1

Z 2 Z 2 Z 2

= zE[Y |Z = z] fZ (z) dz = z(1/z) fZ (z) dz = fZ (z) dz = 1.

1 1 1

Next, Z 2 Z 2 2

E[Z 2 ] = z2 37 z2 dz = 3

7 z4 dz = 3

7 51 z5 = 93/35.

1 1 1

Now, Z 2 Z 2

E[Y 2 ] = E[Y 2 |Z = z] fZ (z) dz = (2/z2 ) 73 z2 dz = 6/7.

1 1

We can now compute

and

= E[Z 2 ] + E[Y 2 ] = 93/35 + 6/7 = 123/35.

123/35 6/7 1 81/196 81/196 405/392

cov([X,Y, Z]0 ) = 6/7 6/7 1 81/196 81/196 405/392

1 1 93/35 405/392 405/392 2025/784

3.1010 0.4439 0.0332

= 0.4439 0.4439 0.0332 .

0.0332 0.0332 0.0742

0

11. Since E [X,Y, Z]0 = E[X], E[Y ], E[Z] , and

E[X 2 ] E[XY ] E[XZ] E[X]2 E[X]E[Y ] E[X]E[Z]

cov([X,Y, Z]0 ) = E[Y X] E[Y 2 ] E[Y Z] E[Y ]E[X] E[Y ]2 E[Y ]E[Z] ,

E[ZX] E[ZY ] E[Z 2 ] E[Z]E[X] E[Z]E[Y ] E[Z]2

we compute all the entries of these matrices. To make this job easier, we first factor

fXY Z (x, y, z) = , z 1,

z5 2


2

e(yz) /2 4

1 |xy|

fXY Z (x, y, z) = 2e

5 , z 1.

2 z

We then see that as a function of x, fX|Y Z (x|y, z) is a shifted Laplace(1) density. Sim-

ilarly, as a function of y, fY |Z (y|z) is an N(z, 1) density. Thus,

Z Z Z Z

E[X] = E[X|Y = y, Z = z] fY Z (y, z) dy dz = y fY Z (y, z) dy dz

Z Z

= E[Y ] = E[Y |Z = z] fZ (z) dz = z fZ (z) dz = E[Z]

Z 4 Z

4

= z 5 dz = 4

dz = 4/3.

1 z 1 z

Thus, E [X,Y, Z]0 = [4/3, 4/3, 4/3]0 . We next compute

Z Z

E[X 2 ] = E[X 2 |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= (2 + y2 ) fY Z (y, z) dy dz = 2 + E[Y 2 ],

where

Z Z

E[Y 2 ] = E[Y 2 |Z = z] fZ (z) dz = (1 + z2 ) fZ (z) dz = 1 + E[Z 2 ].

Now, Z Z

4 4

E[Z 2 ] = dz = z2 dz = 2.

1 z5 1 z3

Thus, E[Y 2 ] = 3 and E[X 2 ] = 5. We next turn to

Z Z

E[XY ] = E[XY |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[Xy|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= yE[X|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= y2 fY Z (y, z) dy dz = E[Y 2 ] = 3.

We also have

Z Z

E[XZ] = E[XZ|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[Xz|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= yz fY Z (y, z) dy dz = E[Y Z]

Z Z

= E[Y Z|Z = z] fZ (z) dz = zE[Y |Z = z] fZ (z) dz

Z

= z2 fZ (z) dz = E[Z 2 ] = 2.


5 3 2 1 1 1

16

cov([X,Y, Z]0 ) = 3 3 2 1 1 1

9

2 2 2 1 1 1

29 11 2 3.2222 1.2222 0.2222

1

= 11 11 2 = 1.2222 1.2222 0.2222 .

9

2 2 2 0.2222 0.2222 0.2222

0

12. Since E [X,Y, Z]0 = E[X], E[Y ], E[Z] , and

E[X 2 ] E[XY ] E[XZ] E[X]2 E[X]E[Y ] E[X]E[Z]

cov([X,Y, Z]0 ) = E[Y X] E[Y 2 ] E[Y Z] E[Y ]E[X] E[Y ]2 E[Y ]E[Z] ,

E[ZX] E[ZY ] E[Z 2 ] E[Z]E[X] E[Z]E[Y ] E[Z]2

we compute all the entries of these matrices. We begin with

Z Z

E[Z] = E[Z|Y = y, X = x] fXY (x, y) dy dx

Z Z

= x fXY (x, y) dy dx = E[X] = 3/2.

Next,

Z Z

E[Y ] = E[Y |X = x] fX (x) dx = x fX (x) dx = E[X] = 3/2.

We now compute

Z Z

E[XY ] = E[XY |X = x] fX (x) dx = E[xY |X = x] fX (x) dx

Z Z

= xE[Y |X = x] fX (x) dx = x2 fX (x) dx = E[X 2 ]

= var(X) + E[X]2 = 1/12 + (3/2) = 7/3. 2

Then

Z Z

E[XZ] = E[XZ|Y = y, X = x] fXY (x, y) dy dx

Z Z

= xE[Z|Y = y, X = x] fXY (x, y) dy dx

Z Z

= x2 fXY (x, y) dy dx = E[X 2 ] = 7/3,

and

Z Z

E[Y Z] = E[Y Z|Y = y, X = x] fXY (x, y) dy dx

Z Z

= yE[Z|Y = y, X = x] fXY (x, y) dy dx

Z Z

= xy fXY (x, y) dy dx = E[XY ] = 7/3.


Next,

Z Z

E[Y 2 ] = E[Y 2 |X = x] fX (x) dx = 2x2 fX (x) dx = 2E[X 2 ] = 14/3,

and

Z Z

E[Z 2 ] = E[Z 2 |Y = y, X = x] fXY (x, y) dy dx

Z Z

= (1 + x2 ) fXY (x, y) dy dx = 1 + E[X 2 ] = 1 + 7/3 = 10/3.

7 7 7 3 2 1 1 1

1

cov([X,Y, Z]0 ) = 7 14 7 1 1 1

3 2

7 7 10 1 1 1

1 1 1 0.0833 0.0833 0.0833

1

= 1 29 1 = 0.0833 2.4167 0.0833 .

12

1 1 13 0.0833 0.0833 1.0833

0

13. Since E [X,Y, Z]0 = E[X], E[Y ], E[Z] , and

E[X 2 ] E[XY ] E[XZ] E[X]2 E[X]E[Y ] E[X]E[Z]

cov([X,Y, Z]0 ) = E[Y X] E[Y 2 ] E[Y Z] E[Y ]E[X] E[Y ]2 E[Y ]E[Z] ,

E[ZX] E[ZY ] E[Z 2 ] E[Z]E[X] E[Z]E[Y ] E[Z]2

we compute all the entries of these matrices. In order to do this, we first note that

Z N(0, 1). Next, as a function of y, fY |Z (y|z) is an N(z, 1) density. Similarly, as a

function of x, fX|Y Z (x|y, z) is an N(y, 1) density. Hence, E[Z] = 0,

Z Z

E[Y ] = E[Y |Z = z] fZ (z) dz = z fZ (z) dz = E[Z] = 0,

and

Z Z

E[X] = E[X|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= y fY Z (y, z) dy dz = E[Y ] = 0.

We next compute

Z Z

E[XY ] = E[XY |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[Xy|Y = y, Z = z] fY Z (y, z) dy dz

Z Z Z Z

= yE[X|Y = y, Z = z] fY Z (y, z) dy dz = y2 fY Z (y, z) dy dz

Z Z

= E[Y 2 ] = E[Y 2 |Z = z] fZ (z) dz = (1 + z2 ) fZ (z) dz

= 1 + E[Z 2 ] = 2.


Then

Z Z

E[Y Z] = E[Y Z|Z = z] fZ (z) dz = zE[Y |Z = z] fZ (z) dz

Z

= z2 fZ (z) dz = E[Z 2 ] = 1,

and

Z Z

E[XZ] = E[XZ|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= zE[X|Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= yz fY Z (y, z) dy dz = E[Y Z] = 1.

Next,

Z Z

E[Y 2 ] = E[Y 2 |Z = z] fZ (z) dz = (1 + z2 ) fZ (z) dz = 1 + E[Z 2 ] = 2,

and

Z Z

E[X 2 ] = E[X 2 |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= (1 + y2 ) fY Z (y, z) dy dz = 1 + E[Y 2 ] = 3.

3 2 1

cov([X,Y, Z]0 ) = 2 2 1 .

1 1 1

14. We first note that Z N(0, 1). Next, as a function of y, fY |Z (y|z) is an N(z, 1) density.

Similarly, as a function of x, fX|Y Z (x|y, z) is an N(y, 1) density. Hence,

Z Z

E[e j(1 X+2Y +3 Z) ] = E[e j(1 X+2Y +3 Z) |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[e j(1 X+2 y+3 z) |Y = y, Z = z] fY Z (y, z) dy dz

Z Z

= E[e j1 X |Y = y, Z = z]e j2 y e j3 z fY Z (y, z) dy dz

Z Z

2

= e j1 y1 /2 e j2 y e j3 z fY Z (y, z) dy dz

Z Z

2

= e1 /2 e j(1 +2 )y e j3 z fY Z (y, z) dy dz

2

= e1 /2 E[e j(1 +2 )Y e j3 Z ]


Z

2

= e1 /2 E[e j(1 +2 )Y e j3 Z |Z = z] fZ (z) dz

Z

2

= e1 /2 e j3 z E[e j(1 +2 )Y |Z = z] fZ (z) dz

Z

2 2 /2

= e1 /2 e j3 z e j(1 +2 )z(1 +2 ) fZ (z) dz

Z

2 2 /2

= e1 /2 e(1 +2 ) e j(1 +2 +3 )z fZ (z) dz

2 2 /2

= e1 /2 e(1 +2 ) E[e j(1 +2 +3 )z ]

2 2 2 /2

= e1 /2 e(1 +2 ) /2 e(1 +2 +3 )

2 2 2

= e[1 +(1 +2 ) +(1 +2 +3 ) ]/2 .

16. First, since R = E[XX 0 ], we see that R0 = E[XX 0 ]0 = E[(XX 0 )0 ] = E[XX 0 ] = R. Thus,

R is symmetric. Next, define the scalar Y := c0 X. Then

and we see that R is positive semidefinite.

17. Use the CauchySchwarz inequality to write

(CXY )i j = E[(Xi mX,i )(Y j mY, j )]

q q

E[(Xi mX,i )2 ]E[(Y j mY, j )2 ] = (CX )ii (CY ) j j .

18. If

cos sin

P = ,

sin cos

then

U X cos sin X X cos +Y sin

:= P0 = = .

V Y sin cos Y X sin +Y cos

= E[(Y 2 X 2 ) sin cos + XY (cos2 sin2 )]

Y2 X2

= sin 2 + E[XY ] cos 2 ,

2

which is zero if and only if

$$\tan 2\theta = \frac{2E[XY]}{\sigma_X^2-\sigma_Y^2}.$$

Hence,

$$\theta = \frac{1}{2}\tan^{-1}\Bigl(\frac{2E[XY]}{\sigma_X^2-\sigma_Y^2}\Bigr).$$
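The decorrelating rotation can be demonstrated numerically. This sketch is not part of the manual; the covariance matrix, seed, and sample size are arbitrary. It estimates $\theta$ from sample moments and checks that the rotated coordinates $U = X\cos\theta + Y\sin\theta$, $V = -X\sin\theta + Y\cos\theta$ are (approximately) uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(7)
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T

# theta = (1/2) arctan( 2 E[XY] / (sigma_X^2 - sigma_Y^2) )
theta = 0.5 * np.arctan2(2 * np.mean(X * Y), np.var(X) - np.var(Y))
U = X * np.cos(theta) + Y * np.sin(theta)
V = -X * np.sin(theta) + Y * np.cos(theta)
assert abs(np.mean(U * V)) < 0.02  # decorrelated up to sampling error
```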


19. (a) Write (ei e0i )mn = (ei )m (e0i )n , which equals one if and only if m = i and n = i.

Hence, ei e0i must be all zeros except at position i i where it is one.

(b) Write

0

e1 e4 e5 e1

E 0E = e04

e5

= e1 e01 + e4 e04 + e5 e05

= diag(1, 0, 0, 0, 0) + diag(0, 0, 0, 1, 0) + diag(0, 0, 0, 0, 1)

= diag(1, 0, 0, 1, 1).

= QMQ0 + I = Q(M + I)Q0 .

Hence, Q0CX Q = M + I, which is diagonal. The point here is that we may take

P = Q.

(b) We now put Y := P0 X = QX = Q(U +V ). Then

= Q{CU + E[UV 0 ] + E[VU 0 ] +CV }Q0 = QCU Q0 + QIQ0 = M + I.

u+v uv

x = and y = .

2 2

We can now write

" x x

# " #

u v 1/2 1/2

dH = y y

= , det dH = 1/2,

u v

1/2 1/2

and

| det dH | = 1/2,

and so u+v uv 1

fUV (u, v) = fXY , .

2 2 2

22. Starting with u = xy and v = y/x, write y = xv. Then u = x2 v, and x = (u/v)1/2 . We

also have y = (u/v)1/2 v = (uv)1/2 . Then

x x

u = (1/2)/ uv, v = (1/2) u/v3/2 ,

y

p y

p

u = (1/2) v/u, v = (1/2) u/v.


(1/2)/ uv (1/2) u/v3/2

dH = p p ,

(1/2) v/u (1/2) u/v

and so

1 1 1

| det dH | = + = .

4v 4v 2|v|

Thus, p 1

fUV (u, v) = fXY u/v, uv , u, v > 0.

2v

For fU , write

Z Z p 1

fU (u) = fUV (u, v) dv = fXY u/v, uv dv

0 0 2v

Now make the change of variable y = uv, or y2 = uv. Then 2y dy = u dv, and

Z u 1

fU (u) = fXY , y dy.

0 y y

For fV , write

Z Z p 1

fV (v) = fUV (u, v) du = fXY u/v, uv du.

0 0 2v

p

This time make the change of variable x = u/v or x2 = u/v. Then 2x dx = du/v,

and Z

fV (v) = fXY (x, vx)x dx.

0

" x x # " #

u v 1 0

dH = y y = , det dH = u, and | det dH | = |u|.

u v

v u

Then

|u| |uv|

fUV (u, v) = fXY (u, uv)|u| = 2e 2e |u|,

and

Z Z

2 (1+|v|)|u| 2

fV (v) = 4 |u|e du = 2 ue (1+|v|)u du

0

Z

= u (1 + |v|)e (1+|v|)u du.

2(1 + |v|) 0

Now, this last integral is the mean of an exponential density with parameter (1+|v|).

Hence,

1 1

fV (v) = = .

2(1 + |v|) (1 + |v|) 2(1 + |v|)2


24. Starting with u = 2 ln x cos(2 y) and v = 2 ln x sin(2 y), we have

2 +v2 )/2

Hence, x = e(u . We also have

v 1

= tan(2 y) or y = tan1 (v/u).

u 2

We can now write

x 2 +v2 )/2 x 2 +v2 )/2

u = ue(u , v = ve(u ,

y 1 1 v y 1 1 1

u = 2 1+(u/v)2 u2

, v = 2 1+(u/v)2 u.

In other words,

2 +v2 )/2 2 +v2 )/2

ue(u ve(u

dH = ,

1 1

2 1+(u/v)2 v

u2

1 1

2 1+(u/v)2 1u

and so

1 e(u2 +v2 )/2 2 2

1 e(u +v )/2 v2

| det dH| =

2 1 + (v/u)2 2 1 + (v/u)2 u2

2 2

e(u +v )/2 1 (v/u)2

= +

1 + (v/u)2 1 + (v/u)2

2

2 2

eu /2 ev /2

= .

2 2

2 2

We next use the formula fUV (u, v) = fXY (x, y) | det dH|, where x = e(u +v )/2 and

y = tan1 (v/u)/(2 ). Fortunately, since these formulas for x and y lie in (0, 1],

fXY (x, y) = I(0,1] (x)I(0,1] (y) = 1, and we see that

2 2

eu /2 ev /2

fUV (u, v) = 1 | det dH| = .

2 2

2

Integrating out v shows that fU (u) = eu /2 / 2 , and integrating out v shows that

2

fV (v) = ev /2 / 2 . It now follows that fUV (u, v) = fU (u) fV (v), and we see that U

and V are independent.
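This is the Box–Muller transform, and the conclusion is easy to check by simulation. The sketch below is not part of the manual; the seed and sample size are arbitrary ($X$ is mapped to $(0,1]$ so the logarithm stays finite). The two outputs should each look $N(0,1)$ and be uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1_000_000
x = 1.0 - rng.uniform(size=n)  # uniform on (0, 1], keeps log(x) finite
y = rng.uniform(size=n)

r = np.sqrt(-2.0 * np.log(x))
u = r * np.cos(2.0 * np.pi * y)
v = r * np.sin(2.0 * np.pi * y)

for w in (u, v):
    assert abs(w.mean()) < 0.01       # N(0,1) mean
    assert abs(w.var() - 1.0) < 0.01  # N(0,1) variance
assert abs(np.corrcoef(u, v)[0, 1]) < 0.005  # consistent with independence
```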

25. Starting with u = x + y and v = x/(x + y) = x/u, we see that v = x/u, or x = uv. Next,

from y = u x = u uv, we get y = u(1 v). We can now write

x x

u = v, v = u,

y y

u = 1 v, v = u.

In other words,

v u

dH = ,

1 v u


and so

| det dH| = | uv u(1 v)| = |u|.

We next write

fUV (u, v) = fXY (uv, u(1 v))|u|.

If X and Y are independent gamma RVs, then for u > 0 and 0 < v < 1,

fUV (u, v) = u

(p) (q)

( u) p+q1 e u (p + q) p1

= v (1 v)q1 ,

(p + q) (p)(q)

Hence, it is easy to integrate out either u or v and show that fUV (u, v) = fU (u) fV (v),

where fU gamma(p + q, ) and fV beta(p, q). Thus, U and V are independent.
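The claimed marginals and independence can be spot-checked numerically. This sketch is not from the manual; the parameter values and seed are arbitrary, and a near-zero sample correlation is of course only consistent with (not proof of) independence:

```python
import numpy as np

rng = np.random.default_rng(9)
p, q, lam = 2.0, 5.0, 1.0
X = rng.gamma(p, 1.0 / lam, 1_000_000)  # gamma(p, lam)
Y = rng.gamma(q, 1.0 / lam, 1_000_000)  # gamma(q, lam)
U = X + Y          # claimed gamma(p+q, lam)
V = X / (X + Y)    # claimed beta(p, q)

assert abs(U.mean() - (p + q) / lam) < 0.02       # gamma(p+q, lam) mean
assert abs(V.mean() - p / (p + q)) < 0.001        # beta(p, q) mean
assert abs(np.corrcoef(U, V)[0, 1]) < 0.005       # consistent with independence
```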

26. Starting with u = x + y and v = x/y, write x = yv and then u = yv + y = y(v + 1). Solve

for y = u/(v + 1) and x = u y = u u/(v + 1) = uv/(v + 1). Next,

x x

u = v/(v + 1), v = u/(v + 1)2 ,

y y

u = 1/(v + 1), v = u/(v + 1)2 .

In other words,

v/(v + 1) u/(v + 1)2

dH = ,

1/(v + 1) u/(v + 1)2

and so

uv u u(v + 1) |u|

| det dH | =

= = .

(v + 1)3 (v + 1)3 (v + 1)3 (v + 1)2

We next write

uv u |u|

fUV (u, v) = fXY , .

v + 1 v + 1 (v + 1)2

When X gamma(p, ) and Y gamma(q, ) are independent, then U and V are

nonnegative, and

fUV (u, v) =

(p) (q) (v + 1)2

( u) p+q1 e u (p + q) v p1

=

(p + q) (p)(q) (v + 1) p+q

( u) p+q1 e u v p1

= ,

(p + q) B(p, q)(v + 1) p+q

which shows that U and V are independent with the required marginal densities.


What is different in this problem is that X and Y are correlated Gaussian random

variables; i.e.,

2 2 2

e(x 2 xy+y )/[2(1 )]

fXY (x, y) = p .

2 1 2

Hence,

2 cos2 2 (r cos )(r sin )+r2 sin2 )/[2(1 2 )]

re(r

fR, (r, ) = p

2 1 2

2 (1 sin 2 )/[2(1 2 )]

rer

= p .

2 1 2

To find the density of , we must integrate this with respect to r. Notice that the

2 2

integrand is proportional to re r /2 , whose anti-derivative is e r /2 / . Here we

2

have = (1 sin 2 )/(1 ). We can now write

Z Z

1 2 /2 1 1

f ( ) = fR, (r, ) dr = p re r dr = p

0 2 1 2 0 2 1 2

p

1 2 1 2

= p = .

2 1 2 (1 sin 2 ) 2 (1 sin 2 )

28. Since $X \sim N(0,1)$, we have $E[X] = E[X^3] = 0$, $E[X^4] = 3$, and $E[X^6] = 15$. Since $W \sim N(0,1)$, $E[W] = 0$ too. Hence, $E[Y] = E[X^3 + W] = 0$. It then follows that $m_Y$, $m_X$, and $b = m_X - A m_Y$ are all zero.

and

CY = E[Y 2 ] = E[X 6 + 2X 3W +W 2 ] = 15 + 0 + 1 = 16.

Hence, ACY = CXY implies A = 3/16, and then Xb = A(Y mY ) + mX = (3/16)Y .

29. First note that since X and W are zero mean, so is Y . Next,

and

= E[X 2 ] + 2E[X]E[W ] + E[W 2 ] = 1 + 0 + 2/ 2 = 1 + 2/ 2 .

2

Xb = Y.

2+2


= E[(X mX )(G{X mX } +W )0 ] = CX G0 +CXW = CX G0 ,

and

= GCX G0 + GCXW +CW X G0 +CW = GCX G0 +CW ,

A = CX G0 (GCX G0 +CW )1 .

Then

A = CX G0 ( + )1

= CX G0 [CW

1 1

CW G(CX1 + G0CW

1

G)1 G0CW

1

]

= CX G0CW

1

CX G0CW

1

G(CX1 + G0CW

1

G)1 G0CW

1

= [CX CX G0CW

1

G(CX1 + G0CW

1

G)1 ]G0CW

1

1

G) CX G0CW

1

G](CX1 + G0CW

1

G)1 G0CW

1

= [I +CX G0CW

1

G CX G0CW

1

G](CX1 + G0CW

1

G)1 G0CW

1

= (CX1 + G0CW

1

G)1 G0CW

1

.

Xb = A(Y mY ) + mX , where ACY = CXY .

Next, with Z := BX, we have mZ = BmX and

We must solve AC

e := BA solves the required equation. Hence,

get (BA)CY = BCXY = CZY . We see that A

the linear MMSE estimate of X based on Z is

b

(BA)(Y mY ) + mZ = (BA)(Y mY ) + BmX = B{A(Y mY ) + mX } = BX.


= E[k(X AY ) + (A B)Y )k2 ]

= E[kX AY k2 ] + 2E[{(A B)Y }0 (X AY )] + E[k(A B)Y k2 ]

= E[kX AY k2 ] + E[k(A B)Y k2 ]

E[kX AY k2 ],

Next, rewrite the orthogonality condition as

= E[tr{(X AY )(CY )0 }] = tr{E[(X AY )(CY )0 ]}

= tr{(RXY ARY )C0 }.

Now, this expression must be zero for all C, including C = RXY ARY . However,

since tr(DD0 ) = 0 implies D = 0, we conclude that the optimal A solves ARY = RXY .

Next, the best constant estimator is easily found by writing

34. To begin, write

b

E[(X X)(X b 0 ] = E[{(X mX ) A(Y mY )}{(X mX ) A(Y mY )}0 ]

X)

= CX ACY X CXY A0 + ACY A0 . ()

We now use the fact that ACY = CXY . If we multiply ACY = CXY on the right by A0 ,

we obtain

ACY A0 = CXY A0 .

Furthermore, since ACY A0 is symmetric, we can take the transpose of the above ex-

pression and obtain

ACY A0 = ACY X .

By making appropriate substitutions in (), we find that the error covariance is also

given by

CX ACY X , CX CXY A0 , and CX ACY A0 .

35. Write

n o

E[kX Xk b 0 (X X)]

b 2 ] = E[(X X) b 0 (X X)]

b = tr E[(X X) b

b 0 (X X)}]

= E[tr{(X X) b b

= E[tr{(X X)(X b 0 }]

X)

b

= tr{E[(X X)(X b 0 ]} = tr{CX ACY X }.

X)


0.0622 0.0467 0.0136 0.1007

0.0489 0.0908 0.0359 0.1812

A =

0.0269 0.0070 0.0166 0.0921

37. Write X in the form X = [Y′, Z′]′, where Y := [X_1, ..., X_m]′. Then

C_X = E[ [Y; Z][Y′ Z′] ] = [C_Y C_YZ; C_ZY C_Z] = [C_1 C_2; C_2′ C_3],

and

C_XY = E[ [Y; Z] Y′ ] = [C_Y; C_ZY] = [C_1; C_2′].

Solving A C_Y = C_XY becomes

A C_1 = [C_1; C_2′],   or   A = [I; C_2′ C_1⁻¹].

Then

AY = [I; C_2′C_1⁻¹] Y = [Y; C_2′C_1⁻¹ Y].

In other words, Ŷ = Y and Ẑ = C_2′C_1⁻¹ Y. Note that the matrix required for the linear
MMSE estimate of Z based on Y is the solution of B C_Y = C_ZY, or B C_1 = C_2′; i.e.,
B = C_2′C_1⁻¹. Next, the error covariance for estimating X based on Y is

E[(X − X̂)(X − X̂)′] = C_X − A C_YX = [C_1 C_2; C_2′ C_3] − [I; C_2′C_1⁻¹][C_1 C_2]
                   = [C_1 C_2; C_2′ C_3] − [C_1 C_2; C_2′ C_2′C_1⁻¹C_2]
                   = [0 0; 0 C_3 − C_2′C_1⁻¹C_2],

and

E[‖X − X̂‖²] = tr(C_X − A C_YX) = tr(C_3 − C_2′C_1⁻¹C_2).

38. Writing A C_Z = C_XZ in component form, and using the fact that C_Z is diagonal, the equation becomes

A_ij (C_Z)_jj = (C_XZ)_ij.

If (C_Z)_jj = 0, this can be solved only if (C_XZ)_ij = 0, which we now show to be the case by using the Cauchy–Schwarz inequality. Write

|(C_XZ)_ij| = |E[(X_i − (m_X)_i)(Z_j − (m_Z)_j)]|
            ≤ √( E[(X_i − (m_X)_i)²] E[(Z_j − (m_Z)_j)²] )
            = √( (C_X)_ii (C_Z)_jj ).

Hence, (C_Z)_jj = 0 forces (C_XZ)_ij = 0, and so we can always solve A C_Z = C_XZ.

39. Since X has the form X = [Y′, Z′]′, if we take G = [I 0] and W ≡ 0, then

GX + W = [I 0][Y; Z] = Y.

40. Write

E[ ( (1/n) Σ_{k=1}^n X_k )² ] = (1/n²) Σ_{k=1}^n E[X_k²] = (1/n²) · nσ² = σ²/n.

41. Write

E[ (1/n) Σ_{k=1}^n X_k X_k′ ] = (1/n) Σ_{k=1}^n E[X_k X_k′] = (1/n) · nC = C.

42. In MATLAB:

Mn = mean(X,2)                        % sample mean of the columns of X
MnMAT = kron(ones(1,n),Mn);           % replicate the mean across all n columns
Chat = (X-MnMAT)*(X-MnMAT)'/(n-1)     % unbiased sample covariance (note the transpose)
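An equivalent computation in Python (a sketch, not part of the text; numpy assumed, with the data matrix X of size d×n holding one observation per column):

```python
import numpy as np

def sample_mean_cov(X):
    """Sample mean and unbiased sample covariance of the columns of X (d x n)."""
    n = X.shape[1]
    Mn = X.mean(axis=1, keepdims=True)   # d x 1 sample mean (like mean(X,2))
    D = X - Mn                           # broadcasting replaces kron(ones(1,n),Mn)
    Chat = (D @ D.T) / (n - 1)           # unbiased covariance estimate
    return Mn, Chat
```

Broadcasting makes the explicit `kron` replication unnecessary; the result matches `np.cov` with its default divisor n − 1.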

43. We must first find f_{Y|X}(y|x). To this end, use substitution and independence to write

f_{Y|X}(y|x) = f_W(y − x) = (λ/2)e^{−λ|y−x|},

which, as a function of x, is maximized when |y − x| = 0; i.e., the maximizing value of x is x = y. Hence, g_ML(y) = y.

44. By the same argument as in the solution of Problem 43, f_{Y|X}(y|x) = (λ/2)e^{−λ|y−x|}.
When X ∼ exp(μ) and we maximize over x, we must impose the constraint x ≥ 0. Hence,

g_ML(y) = argmax_{x≥0} (λ/2)e^{−λ|y−x|} = { y, y ≥ 0;  0, y < 0. }


Similarly, when the constraint 0 ≤ x ≤ 1 is imposed,

g_ML(y) = argmax_{0≤x≤1} (λ/2)e^{−λ|y−x|} = { y, 0 ≤ y ≤ 1;  1, y > 1;  0, y < 0. }

45. By the same argument as in the solution of Problem 43, f_{Y|X}(y|x) = (λ/2)e^{−λ|y−x|}.
For the MAP estimator, we must maximize

f_{Y|X}(y|x) f_X(x) = (λ/2)e^{−λ|y−x|} · μe^{−μx},  x ≥ 0.

By considering separately the cases x ≤ y and x > y,

f_{Y|X}(y|x) f_X(x) = { (λμ/2) e^{−λy} e^{(λ−μ)x},  0 ≤ x ≤ y;
                        (λμ/2) e^{λy} e^{−(λ+μ)x},  x > y, x ≥ 0. }

When y ≥ 0, observe that the two formulas agree at x = y and have the common value
(λμ/2)e^{−μy}; in fact, if λ > μ, the first formula is maximized at x = y, while the
second formula is always maximized at x = y. If y < 0, then only the second formula
is valid, and its region of validity is x ≥ 0. This formula is maximized at x = 0. Hence,
for λ > μ,

g_MAP(y) = { y, y ≥ 0;  0, y < 0. }

We now consider the case λ ≤ μ. As before, if y < 0, the maximizing value of x is
zero. If y ≥ 0, then the maximum value of f_{Y|X}(y|x) f_X(x) for 0 ≤ x ≤ y occurs at
x = 0 with a maximum value of (λμ/2)e^{−λy}. The maximum value of f_{Y|X}(y|x) f_X(x)
for x ≥ y occurs at x = y with a maximum value of (λμ/2)e^{−μy}. For λ < μ,

(λμ/2)e^{−λy} ≥ (λμ/2)e^{−μy},

which corresponds to x = 0. Hence, for λ < μ,

g_MAP(y) = 0,  −∞ < y < ∞.

46. Since

f_XY(x, y) = (x/y²)e^{−(x/y)²/2} · λe^{−λy},  x, y > 0,

we see that Y ∼ exp(λ), and that given Y = y, X ∼ Rayleigh(y). Hence, the MMSE
estimator is E[X|Y = y] = √(π/2) y. To compute the MAP estimator, we must solve

argmax_{x≥0} (x/y²)e^{−(x/y)²/2}.

We do this by differentiating with respect to x and setting the derivative equal to zero. Write

(∂/∂x) (x/y²)e^{−(x/y)²/2} = (e^{−(x/y)²/2}/y²)(1 − x²/y²).

Hence,

g_MAP(y) = y.
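The MAP result can be sanity-checked numerically; a minimal sketch (the test value y = 2 and the grid are arbitrary, not from the text):

```python
import numpy as np

y = 2.0                                          # an arbitrary test value
x = np.linspace(1e-4, 10.0, 100_001)
density = (x / y**2) * np.exp(-(x / y)**2 / 2)   # Rayleigh(y) density in x
x_map = x[np.argmax(density)]                    # grid-search maximizer; should be near y
```

The grid maximizer lands within one grid step of x = y, in agreement with the calculus above.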


E[{g₂(Y) − g₁(Y)}h(Y)] = 0.   (∗)

Since h is an arbitrary bounded function, put h(y) := sgn[g₂(y) − g₁(y)], where

sgn(x) := { 1, x > 0;  0, x = 0;  −1, x < 0. }

CHAPTER 9

Problem Solutions

1. We first compute

det C = det[σ₁² ρσ₁σ₂; ρσ₁σ₂ σ₂²] = σ₁²σ₂² − (ρσ₁σ₂)² = σ₁²σ₂²(1 − ρ²),

and √(det C) = σ₁σ₂√(1 − ρ²). Next,

C⁻¹ = (1/det C)[σ₂² −ρσ₁σ₂; −ρσ₁σ₂ σ₁²]
    = [ 1/(σ₁²(1−ρ²))  −ρ/(σ₁σ₂(1−ρ²)) ; −ρ/(σ₁σ₂(1−ρ²))  1/(σ₂²(1−ρ²)) ],

and

[x y]C⁻¹[x; y] = x²/(σ₁²(1−ρ²)) − ρxy/(σ₁σ₂(1−ρ²)) − ρxy/(σ₁σ₂(1−ρ²)) + y²/(σ₂²(1−ρ²))
              = (1/(1−ρ²))[ (x/σ₁)² − 2ρ(x/σ₁)(y/σ₂) + (y/σ₂)² ],

and the result follows.

2. Here is the plot: [curves labeled n = 1 and n = 4, plotted over −4 ≤ x ≤ 4.]

3. (a) First write c₁X + c₂Y = c₁X + c₂(3X) = (c₁ + 3c₂)X, which is easily seen to be
N(0, (c₁ + 3c₂)²). Thus, X and Y are jointly Gaussian.

(b) Observe that E[XY] = E[X(3X)] = 3E[X²] = 3 and E[Y²] = E[(3X)²] = 9E[X²] = 9.
Since X and Y have zero means,

cov([X,Y]′) = [E[X²] E[XY]; E[YX] E[Y²]] = [1 3; 3 9].


By substitution, this last conditional probability is P(3x ≤ y | X = x). The event
{3x ≤ y} is deterministic and therefore independent of X. Hence, we can drop
the conditioning and get P(3x ≤ y); this probability is just u(y − 3x).

4. If Y = Σ_{i=1}^n c_i X_i, its characteristic function is

φ_Y(ν) = E[e^{jν Σ_{i=1}^n c_i X_i}] = E[ Π_{i=1}^n e^{jνc_i X_i} ] = Π_{i=1}^n E[e^{jνc_i X_i}] = Π_{i=1}^n φ_{X_i}(νc_i)
       = Π_{i=1}^n e^{j(νc_i)m_i − (νc_i)²σ_i²/2} = e^{jν(Σ_{i=1}^n c_i m_i) − ν²(Σ_{i=1}^n c_i²σ_i²)/2},

which is the N(Σ c_i m_i, Σ c_i²σ_i²) characteristic function.

5. First, E[Y] = E[AX + b] = AE[X] + b = Am + b. Second, cov(Y) = E[A(X − m)(X − m)′A′] = ACA′.

6. Write out

Y₁ = X₁
Y₂ = X₁ + X₂
Y₃ = X₁ + X₂ + X₃
⋮

and subtract successive equations to get X₁ = Y₁ and X_k = Y_k − Y_{k−1} for k ≥ 2. In matrix
notation,

[X₁; X₂; X₃; ⋮; X_{n−1}; X_n] = [ 1  0 ⋯ 0;
                                  −1 1  0 ⋯ 0;
                                   0 −1 1  0 ⋯ 0;
                                   ⋮     ⋱      ;
                                   0 ⋯ 0 −1 1  0;
                                   0  ⋯  0 −1 1 ] [Y₁; Y₂; Y₃; ⋮; Y_{n−1}; Y_n],

where the bidiagonal matrix is denoted A.

7. Since X ∼ N(0,C), the scalar Y := ν′X is also Gaussian and has zero mean. Hence,
E[(ν′XX′ν)^k] = E[(Y²)^k] = E[Y^{2k}] = (2k−1)⋯5·3·1·(E[Y²])^k. Now observe that
E[Y²] = E[ν′XX′ν] = ν′Cν, and the result follows.


8. We first have

E[Y_j] = E[X_j − X̄] = m − E[ (1/n) Σ_{i=1}^n X_i ] = m − m = 0.

Next,

E[X̄ X̄] = (1/n²) Σ_{i=1}^n Σ_{j=1}^n E[X_i X_j] = (1/n²){ Σ_i (σ² + m²) + Σ_{i≠j} m² }
        = (1/n²){ n(σ² + m²) + n(n−1)m² } = (1/n²){ nσ² + n²m² } = σ²/n + m²,

and

E[X̄ X_j] = (1/n) Σ_{i=1}^n E[X_i X_j] = (1/n){ (σ² + m²) + (n−1)m² } = σ²/n + m².

9. Following the hint, in the expansion of E[(ν′X)^{2k}], the sum of all the coefficients of
ν_{i₁}⋯ν_{i_{2k}} is (2k)! E[X_{i₁}⋯X_{i_{2k}}]. The corresponding sum of coefficients in the expansion of

(2k−1)(2k−3)⋯5·3·1·(ν′Cν)^k

is

(2k−1)(2k−3)⋯5·3·1 · 2^k k! Σ_{j₁,...,j_{2k}} C_{j₁j₂}⋯C_{j_{2k−1}j_{2k}},

where the sum is over all j₁,...,j_{2k} that are permutations of i₁,...,i_{2k} and such that
the product C_{j₁j₂}⋯C_{j_{2k−1}j_{2k}} is distinct. Since

(2k−1)(2k−3)⋯5·3·1 · 2^k k! = (2k)!,

the result follows.

10. Applying the result of Problem 9 with k = 2 (summing over the pairings of the indices j₁, j₂, j₃, j₄),

E[X₁²X₂²] = C₁₁C₂₂ + 2C₁₂².

11. Write

φ_Y(ν) = E[e^{jνY}] = E[e^{jν(a′X)}] = E[e^{j(νa)′X}] = φ_X(νa)
       = e^{j(νa)′m − (νa)′C(νa)/2} = e^{jν(a′m) − ν²(a′Ca)/2},

which is the N(a′m, a′Ca) characteristic function.


0 0

12. With X = [U 0 ,W 0 ]0 and = [ 0 , 0 ]0 , we have X ( ) = e j m C /2 , where

mU

0m = 0 0 = 0 mU + 0 mW ,

mW

and

0

S 0 0 0 S

C = 0 0 = = 0 S + 0 T .

0 T T

Thus,

0 0 m )( 0 S + 0 T )/2 0 0 0 m 0 T /2

X ( ) = e j( mU + W = e j mU S /2 e j W ,

functions.

13. (a) Since X is N(0,C) and Y := C^{−1/2}X, Y is also normal. It remains to find the
mean and covariance of Y. We have E[Y] = E[C^{−1/2}X] = C^{−1/2}E[X] = 0 and
E[YY′] = E[C^{−1/2}XX′C^{−1/2}] = C^{−1/2}E[XX′]C^{−1/2} = C^{−1/2}CC^{−1/2} = I. Hence,
Y ∼ N(0, I).

(b) Since the covariance matrix of Y is diagonal, the components of Y are uncorrelated.
Since Y is also Gaussian, the components of Y are independent. Since the
covariance matrix of Y is the identity, each Y_k ∼ N(0, 1). Hence, each Y_k² is chi-squared
with one degree of freedom by Problem 46 in Chapter 4 or Problem 11
in Chapter 5.

(c) By the Remark in Problem 55(c) in Chapter 4, V is chi-squared with n degrees

of freedom.
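Part (a) can be illustrated numerically; the sketch below (the covariance matrix and sample size are arbitrary choices, not from the text) checks that Y = C^{−1/2}X has covariance close to the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[2.0, 0.6], [0.6, 1.0]])   # an arbitrary covariance matrix
w, V = np.linalg.eigh(C)
C_inv_half = V @ np.diag(w**-0.5) @ V.T  # C^{-1/2} via the spectral decomposition
X = rng.multivariate_normal([0.0, 0.0], C, size=200_000)
Y = X @ C_inv_half.T                     # Y = C^{-1/2} X, sample by sample
CY = np.cov(Y, rowvar=False)             # should be close to the identity
```

The spectral decomposition is one standard way to form a symmetric square root; any matrix square root of C⁻¹ works equally well here.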

14. Since

Z := det[X −Y; Y X] = X² + Y²,

where X and Y are independent N(0, 1), observe that X² and Y² are chi-squared with
one degree of freedom. Hence, Z is chi-squared with two degrees of freedom, which
is the same as exp(1/2).
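The claim that X² + Y² ∼ exp(1/2) can be checked by simulation; a quick sketch (sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal(1_000_000)
Y = rng.standard_normal(1_000_000)
Z = X**2 + Y**2          # chi-squared with 2 degrees of freedom = exp(1/2)
m = Z.mean()             # an exp(1/2) random variable has mean 2
v = Z.var()              # and variance 4
```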

15. Begin with

f_X(x) = (1/(2π)^n) ∫_{ℝⁿ} e^{−jν′x} e^{jν′m − ν′Cν/2} dν = (1/(2π)^n) ∫_{ℝⁿ} e^{−j(x−m)′ν} e^{−ν′Cν/2} dν.

Making the change of variable ξ = C^{1/2}ν (so dν = dξ/√(det C)),

f_X(x) = (1/(2π)^n) ∫_{ℝⁿ} e^{−j(x−m)′C^{−1/2}ξ} e^{−ξ′ξ/2} dξ/√(det C)
       = (1/(2π)^n) ∫_{ℝⁿ} e^{−j{C^{−1/2}(x−m)}′ξ} e^{−ξ′ξ/2} dξ/√(det C).

Putting t := C^{−1/2}(x − m),

f_X(x) = (1/(2π)^n) ∫_{ℝⁿ} e^{−jt′ξ} e^{−ξ′ξ/2} dξ/√(det C) = (1/√(det C)) Π_{i=1}^n (1/2π)∫ e^{−jt_iξ_i} e^{−ξ_i²/2} dξ_i.

Observe that e^{−ξ²/2} is, up to scale, the characteristic function of a scalar N(0, 1) random variable.
Hence,

f_X(x) = (1/√(det C)) Π_{i=1}^n e^{−t_i²/2}/√(2π) = exp(−t′t/2)/((2π)^{n/2}√(det C)).

Recalling that t = C^{−1/2}(x − m) yields

f_X(x) = exp(−(x − m)′C⁻¹(x − m)/2)/((2π)^{n/2}√(det C)).

16. First observe that

Z := det[X Y; U V] = XV − YU.

Then consider the conditional cumulative distribution function,

F_{Z|UV}(z|u,v) = P(Z ≤ z | U = u, V = v) = P(XV − YU ≤ z | U = u, V = v)
              = P(Xv − Yu ≤ z | U = u, V = v).

Since [X,Y]′ and [U,V]′ are jointly Gaussian and uncorrelated, they are independent.
Hence, we can drop the conditioning and get

F_{Z|UV}(z|u,v) = P(Xv − Yu ≤ z).

Next, since X and Y are independent and N(0, 1), Xv − Yu ∼ N(0, u² + v²). Hence,
f_{Z|UV}(·|u,v) ∼ N(0, u² + v²).

17. We first use the fact that A solves A C_Y = C_XY to show that (X − m_X) − A(Y − m_Y) and
Y are uncorrelated. Write

E[{(X − m_X) − A(Y − m_Y)}Y′] = C_XY − A C_Y = 0.

We next show that (X − m_X) − A(Y − m_Y) and Y are jointly Gaussian by writing them
as an affine transformation of the Gaussian vector [X′, Y′]′; i.e.,

[(X − m_X) − A(Y − m_Y); Y] = [I −A; 0 I][X; Y] + [A m_Y − m_X; 0].

It now follows that (X − m_X) − A(Y − m_Y) and Y are independent. Using the hints on
substitution and independence, we compute the conditional characteristic function,

E[e^{jν′X} | Y = y] = E[ e^{jν′[(X−m_X)−A(Y−m_Y)]} e^{jν′[m_X+A(Y−m_Y)]} | Y = y ]
                   = e^{jν′[m_X+A(y−m_Y)]} E[ e^{jν′[(X−m_X)−A(Y−m_Y)]} | Y = y ]
                   = e^{jν′[m_X+A(y−m_Y)]} E[ e^{jν′[(X−m_X)−A(Y−m_Y)]} ].

This last expectation is the characteristic function of the zero-mean Gaussian random
vector (X − m_X) − A(Y − m_Y). To compute its covariance matrix, first observe that
since A C_Y = C_XY, we have A C_Y A′ = C_XY A′. Then

E[{(X − m_X) − A(Y − m_Y)}{(X − m_X) − A(Y − m_Y)}′] = C_X − C_XY A′ − A C_YX + A C_Y A′
                                                    = C_X − A C_YX.

We now have

E[e^{jν′X} | Y = y] = e^{jν′[m_X+A(y−m_Y)]} e^{−ν′[C_X − A C_YX]ν/2}.

Thus, given Y = y, X is conditionally N(m_X + A(y − m_Y), C_X − A C_YX).
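The decorrelation property used above can be checked by simulation; a sketch with scalar X and Y (the covariance values are arbitrary, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
C = np.array([[1.0, 0.7], [0.7, 2.0]])   # cov of (X, Y); values are arbitrary
S = rng.multivariate_normal([0.0, 0.0], C, size=500_000)
X, Y = S[:, 0], S[:, 1]
A = C[0, 1] / C[1, 1]                    # A solves A C_Y = C_XY (scalar case)
resid = X - A * Y                        # (X - mX) - A(Y - mY); means are zero
cross = np.mean(resid * Y)               # E[{(X - mX) - A(Y - mY)}Y], should vanish
cond_var = np.var(resid)                 # should equal C_X - A C_YX
```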

18. First observe that

Z := det[X Y; U V] = XV − YU.

Then consider the conditional cumulative distribution function,

F_{Z|UV}(z|u,v) = P(Z ≤ z | U = u, V = v) = P(XV − YU ≤ z | U = u, V = v)
              = P(Xv − Yu ≤ z | U = u, V = v).

Since [X,Y,U,V]′ is Gaussian, given U = u and V = v, [X,Y]′ is conditionally

N( A[u; v], C_{[X,Y]′} − A C_{[U,V]′,[X,Y]′} ),

where A solves A C_{[U,V]′} = C_{[X,Y]′,[U,V]′}. We now turn to the conditional distribution of
Xv − Yu. Since the conditional distribution of [X,Y]′ is Gaussian, so is the conditional
distribution of the linear combination Xv − Yu. Hence, all we need to find are the
conditional mean and the conditional variance of Xv − Yu; i.e.,

E[Xv − Yu | U = u, V = v] = [v −u] E[ [X; Y] | U = u, V = v ] = [v −u] A [u; v],

and

E[ ([v −u]{[X; Y] − A[U; V]})² | U = u, V = v ] = [v −u]( C_{[X,Y]′} − A C_{[U,V]′,[X,Y]′} )[v; −u],

where the last step uses the fact that [X,Y]′ − A[U,V]′ is independent of [U,V]′. If
[X,Y]′ and [U,V]′ are uncorrelated, i.e., C_{[X,Y]′,[U,V]′} = 0, then A = 0 solves A C_{[U,V]′} =
0; in this case, the conditional mean is zero, and the conditional variance simplifies to

[v −u] C_{[X,Y]′} [v; −u] = [v −u] I [v; −u] = v² + u².

19. With Z = X + jY, write

var(Z) = E[{(X − m_X) + j(Y − m_Y)}{(X − m_X) + j(Y − m_Y)}*]
       = E[{(X − m_X) + j(Y − m_Y)}{(X − m_X) − j(Y − m_Y)}]
       = var(X) − j cov(X,Y) + j cov(Y,X) + var(Y) = var(X) + var(Y).

20. (a) With Z = X + jY, write

K = E[{(X − m_X) + j(Y − m_Y)}{(X − m_X) + j(Y − m_Y)}^H]
  = E[{(X − m_X) + j(Y − m_Y)}{(X − m_X)^H − j(Y − m_Y)^H}]
  = C_X − jC_XY + jC_YX + C_Y = (C_X + C_Y) + j(C_YX − C_XY).

(b) Circular symmetry implies C_YX = −C_XY, so the iith entry of C_YX is −(C_XY)_ii. On the other hand,

(C_YX)_ii = E[(Y_i − (m_Y)_i)(X_i − (m_X)_i)] = E[(X_i − (m_X)_i)(Y_i − (m_Y)_i)] = (C_XY)_ii.

Hence, E[(X_i − (m_X)_i)(Y_i − (m_Y)_i)] = 0, and we see that X_i and Y_i are uncorrelated.

(c) By part (a), if K is real, then C_YX = C_XY. Circular symmetry implies C_YX = −C_XY.
It follows that C_XY = −C_XY, and then C_XY = 0; i.e., X and Y are uncorrelated.

21. First,

f_X(x) = e^{−x′(2I)x/2}/((2π)^{n/2}(1/2)^{n/2})   and   f_Y(y) = e^{−y′(2I)y/2}/((2π)^{n/2}(1/2)^{n/2}).

Then

f_XY(x, y) = f_X(x) f_Y(y) = e^{−(x′x + y′y)}/πⁿ = e^{−(x+jy)^H(x+jy)}/πⁿ.

22. (a) Immediate from Problem 20(a).

(b) Since α′Qα is a scalar, (α′Qα)′ = α′Qα. Since Q′ = −Q, (α′Qα)′ = α′Q′α =
−α′Qα. Thus, α′Qα = −α′Qα, and it follows that α′Qα = 0.

(c) With K = R + jQ and w = α + jβ,

Kw = (R + jQ)(α + jβ) = Rα + jRβ + jQα − Qβ.

Next,

w^H(Kw) = (α′ − jβ′)(Rα + jRβ + jQα − Qβ)
        = α′Rα + jα′Rβ + jα′Qα − α′Qβ − jβ′Rα + β′Rβ + β′Qα + jβ′Qβ
        = α′Rα + β′Rβ + 2β′Qα,

where we have used the results of parts (a) and (b): α′Rβ = β′Rα, α′Qα = β′Qβ = 0,
and −α′Qβ = β′Qα.

23. First,

AZ = (α + jβ)(X + jY) = (αX − βY) + j(βX + αY).

Second,

[X; Y] ↦ [αX − βY; βX + αY].

Now assume that circular symmetry holds; i.e., C_X = C_Y and C_XY = −C_YX. Put
U := αX − βY and V := βX + αY. Assuming zero means to simplify the notation,

C_U = E[(αX − βY)(αX − βY)′] = αC_Xα′ − βC_YXα′ − αC_XYβ′ + βC_Yβ′
    = αC_Xα′ − βC_YXα′ + αC_YXβ′ + βC_Xβ′.

Similarly,

C_V = E[(βX + αY)(βX + αY)′] = βC_Xβ′ + αC_YXβ′ + βC_XYα′ + αC_Yα′
    = βC_Xβ′ + αC_YXβ′ − βC_YXα′ + αC_Xα′.

Hence, C_U = C_V. It remains to compute

C_UV = E[(αX − βY)(βX + αY)′] = αC_Xβ′ + αC_XYα′ − βC_YXβ′ − βC_Yα′
     = αC_Xβ′ − αC_YXα′ − βC_YXβ′ − βC_Xα′

and

C_VU = E[(βX + αY)(αX − βY)′] = βC_Xα′ − βC_XYβ′ + αC_YXα′ − αC_Yβ′
     = βC_Xα′ + βC_YXβ′ + αC_YXα′ − αC_Xβ′,

which shows that C_UV = −C_VU. Thus, if Z is circularly symmetric, so is AZ.

24. To begin, note that with R = [X′, U′]′ and I = [Y′, V′]′,

C_R = [C_X C_XU; C_UX C_U],   C_I = [C_Y C_YV; C_VY C_V],

and

C_RI = [C_XY C_XV; C_UY C_UV],   C_IR = [C_YX C_YU; C_VX C_VU].

Also, circular symmetry of the stacked vector means C_R = C_I and C_RI = −C_IR.


(a) First,

K_ZW = C_XU − jC_XV + jC_YU + C_YV = 2(C_XU − jC_XV), since the pair is circularly symmetric.

Second,

C_Z̃W̃ = [C_XU C_XV; C_YU C_YV] = [C_XU C_XV; −C_XV C_XU], since the pair is circularly symmetric.

(b) Assuming zero means again, we compute

K_W = C_U − jC_UV + jC_VU + C_V = 2(C_U − jC_UV).

With A = α + jβ, the equation A K_W = K_ZW becomes (α + jβ)(C_U − jC_UV) = C_XU − jC_XV,
or

(αC_U + βC_UV) + j(βC_U − αC_UV) = C_XU − jC_XV.   (∗)

We also have

C_W̃ = [C_U C_UV; C_VU C_V],

so that Ã C_W̃ = C_Z̃W̃ becomes

[α −β; β α][C_U C_UV; C_VU C_V] = C_Z̃W̃

or

[αC_U − βC_VU  αC_UV − βC_V; βC_U + αC_VU  βC_UV + αC_V] = C_Z̃W̃

or, using circular symmetry (C_V = C_U and C_VU = −C_UV),

[αC_U + βC_UV  αC_UV − βC_U; βC_U − αC_UV  βC_UV + αC_U] = [C_XU C_XV; −C_XV C_XU],

which is equivalent to (∗).

(c) If A solves A K_W = K_ZW, then by part (b), Ã solves Ã C_W̃ = C_Z̃W̃. Hence, by
Problem 17, given W̃ = w̃,

Z̃ ∼ N( m_Z̃ + Ã(w̃ − m_W̃), C_Z̃ − Ã C_W̃Z̃ ).

Next, C_Z̃ − Ã C_W̃Z̃ is equal to

[C_X C_XY; C_YX C_Y] − [α −β; β α][C_UX C_UY; C_VX C_VY]
  = [C_X C_XY; −C_XY C_X] − [α −β; β α][C_UX C_UY; −C_UY C_UX]
  = [C_X C_XY; −C_XY C_X] − [αC_UX + βC_UY  αC_UY − βC_UX; βC_UX − αC_UY  βC_UY + αC_UX],

which, via the correspondence of part (a), is equivalent to

2(C_X − jC_XY) − (α + jβ)·2(C_UX − jC_UY),

which is exactly K_Z − A K_WZ. Thus, given W = w,

Z ∼ N( m_Z + A(w − m_W), K_Z − A K_WZ ).

25. Let Z = X + jY with X and Y independent N(0, 1/2) as in the text.

(a) Since X and Y are zero mean,

cov(Z) = E[ZZ*] = E[X² + Y²] = 1/2 + 1/2 = 1.

(b) First write 2|Z|² = 2(X² + Y²) = (√2 X)² + (√2 Y)². Now, √2 X and √2 Y are
both N(0, 1). Hence, their squares are chi-squared with one degree of freedom
by Problem 46 in Chapter 4 or Problem 11 in Chapter 5. Hence, by Problem
55(c) in Chapter 4 and the remark following it, 2|Z|² is chi-squared with
two degrees of freedom.

26. With X ∼ N(m_r, 1) and Y ∼ N(m_i, 1), it follows either from Problem 47 in Chapter 4
or from Problem 12 in Chapter 5 that X² and Y² are noncentral chi-squared with one
degree of freedom and respective noncentrality parameters m_r² and m_i². Since X and Y
are independent, it follows from Problem 65 in Chapter 4 that X² + Y² is noncentral
chi-squared with two degrees of freedom and noncentrality parameter m_r² + m_i². It is
now immediate from Problem 26 in Chapter 5 that |Z| = √(X² + Y²) has the original
Rice density.

27. (a) The covariance matrix of W is

E[WW^H] = E[K^{−1/2}ZZ^H K^{−1/2}] = K^{−1/2}E[ZZ^H]K^{−1/2} = K^{−1/2}KK^{−1/2} = I.

Hence,

f_W(w) = e^{−w^H w}/πⁿ = Π_{k=1}^n e^{−|w_k|²}/π.

(b) With w_k = u_k + jv_k,

f_{W_k}(w_k) = e^{−|w_k|²}/π = e^{−(u_k² + v_k²)}/π
            = [ e^{−[u_k/(1/√2)]²/2}/√(2π·(1/2)) ]·[ e^{−[v_k/(1/√2)]²/2}/√(2π·(1/2)) ]
            = f_{U_k}(u_k) f_{V_k}(v_k).

Hence, U_k and V_k are independent N(0, 1/2).


(c) Write

2‖W‖² = Σ_{k=1}^n (√2 U_k)² + (√2 V_k)².

Since √2 U_k and √2 V_k are independent N(0, 1), their squares are chi-squared
with one degree of freedom by Problem 46 in Chapter 4 or Problem 11 in Chapter 5.
Next, (√2 U_k)² + (√2 V_k)² is chi-squared with two degrees of freedom by
Problem 55(c) in Chapter 4 and the remark following it. Similarly, since the W_k
are independent, 2‖W‖² is chi-squared with 2n degrees of freedom.

28. (a) Write

0 = (u + v)′M(u + v) = u′Mu + u′Mv + v′Mu + v′Mv = u′Mv + v′Mu = 2v′Mu,

since M′ = M. Hence, v′Mu = 0.

(b) By part (a) with v = Mu we have

0 = v′Mu = (Mu)′Mu = ‖Mu‖².

Hence, Mu = 0 for all u, and it follows that M must be the zero matrix.

29. We have from the text that

[ν′ θ′][C_X C_XY; C_YX C_Y][ν; θ]

is equal to

ν′C_Xν + ν′C_XYθ + θ′C_YXν + θ′C_Yθ,

which, upon noting that ν′C_XYθ is a scalar and therefore equal to its transpose, simplifies to

ν′C_Xν + 2θ′C_YXν + θ′C_Yθ.   (∗)

We also have from the text (via Problem 22) that

w^H Kw/2 = [ν′(C_X + C_Y)ν + θ′(C_X + C_Y)θ]/2 + θ′(C_YX − C_XY)ν.

If (∗) is equal to w^H Kw/2 for all ν and all θ, then in particular, this must hold for all ν
when θ = 0. This implies

ν′((C_X + C_Y)/2)ν = ν′C_Xν,   or   ν′((C_X − C_Y)/2)ν = 0.

Since ν is arbitrary, (C_X − C_Y)/2 = 0, or C_X = C_Y. This means that the remaining condition
can be written as 2θ′C_YXν = θ′(C_YX − C_XY)ν for all ν and θ, which implies
C_XY + C_YX = 0, and so C_XY = −C_YX.


30. (a) Since Δ is 2n × 2n, det(2Δ) = 2^{2n} det Δ. From the hint it follows that det Δ =
(det K)²/2^{2n}.

(b) Write

VV⁻¹ = (A + BCD)[A⁻¹ − A⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹]
     = (A + BCD)A⁻¹[I − B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹]
     = (I + BCDA⁻¹)[I − B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹]
     = I + BCDA⁻¹ − B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹ − BCDA⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹
     = I + BCDA⁻¹ − B[I + CDA⁻¹B](C⁻¹ + DA⁻¹B)⁻¹DA⁻¹
     = I + BCDA⁻¹ − BC[C⁻¹ + DA⁻¹B](C⁻¹ + DA⁻¹B)⁻¹DA⁻¹
     = I + BCDA⁻¹ − BCDA⁻¹ = I.

(c) With Σ := C_YX C_X⁻¹ C_YX + C_X, the claim is that the inverse of [C_X −C_YX; C_YX C_X] is
[Σ⁻¹  C_X⁻¹C_YXΣ⁻¹; −Σ⁻¹C_YXC_X⁻¹  Σ⁻¹]. Write

[C_X −C_YX; C_YX C_X][Σ⁻¹  C_X⁻¹C_YXΣ⁻¹; −Σ⁻¹C_YXC_X⁻¹  Σ⁻¹]
  = [C_XΣ⁻¹ + C_YXΣ⁻¹C_YXC_X⁻¹   C_YXΣ⁻¹ − C_YXΣ⁻¹;
     C_YXΣ⁻¹ − C_XΣ⁻¹C_YXC_X⁻¹   C_YXC_X⁻¹C_YXΣ⁻¹ + C_XΣ⁻¹]
  = [C_XΣ⁻¹ + C_YXΣ⁻¹C_YXC_X⁻¹   0;
     C_YXΣ⁻¹ − C_XΣ⁻¹C_YXC_X⁻¹   (C_YXC_X⁻¹C_YX + C_X)Σ⁻¹].

Using the hint that

Σ⁻¹ = C_X⁻¹ − C_X⁻¹C_YXΣ⁻¹C_YXC_X⁻¹,

we easily obtain

C_XΣ⁻¹ = I − C_YXΣ⁻¹C_YXC_X⁻¹,

from which it follows that the upper-left block is I; the lower-right block is ΣΣ⁻¹ = I.
To show that the lower-left block is also zero, use the hint to write

C_YXΣ⁻¹ − C_XΣ⁻¹C_YXC_X⁻¹
  = C_YX[C_X⁻¹ − C_X⁻¹C_YXΣ⁻¹C_YXC_X⁻¹] − C_XΣ⁻¹C_YXC_X⁻¹
  = C_YXC_X⁻¹ − C_YXC_X⁻¹C_YXΣ⁻¹C_YXC_X⁻¹ − C_XΣ⁻¹C_YXC_X⁻¹
  = C_YXC_X⁻¹ − [C_YXC_X⁻¹C_YX + C_X]Σ⁻¹C_YXC_X⁻¹
  = C_YXC_X⁻¹ − ΣΣ⁻¹C_YXC_X⁻¹ = 0.


(d) Write

KK⁻¹ = 2(C_X + jC_YX)·(Σ⁻¹ − jC_X⁻¹C_YXΣ⁻¹)/2
     = C_XΣ⁻¹ + C_YXC_X⁻¹C_YXΣ⁻¹ + j(C_YXΣ⁻¹ − C_YXΣ⁻¹)
     = ΣΣ⁻¹ = I.

(e) Using the inverse found in (c), write

[x′ y′][Σ⁻¹  C_X⁻¹C_YXΣ⁻¹; −Σ⁻¹C_YXC_X⁻¹  Σ⁻¹][x; y]
  = x′Σ⁻¹x + x′C_X⁻¹C_YXΣ⁻¹y − y′Σ⁻¹C_YXC_X⁻¹x + y′Σ⁻¹y.

Now, since each of the above terms is a scalar, each term is equal to its transpose.
In particular, using C_XY = −C_YX and the identity C_X⁻¹C_YXΣ⁻¹ = Σ⁻¹C_YXC_X⁻¹,

−y′Σ⁻¹C_YXC_X⁻¹x = −x′C_X⁻¹C_XYΣ⁻¹y = x′C_X⁻¹C_YXΣ⁻¹y.

Hence,

½[x′ y′]Δ⁻¹[x; y] = ½(x′Σ⁻¹x + 2x′C_X⁻¹C_YXΣ⁻¹y + y′Σ⁻¹y).   (∗∗)

We next compute

z^H K⁻¹ z = ½(x′ − jy′)(Σ⁻¹ − jC_X⁻¹C_YXΣ⁻¹)(x + jy)
         = ½(x′ − jy′)[(Σ⁻¹x + C_X⁻¹C_YXΣ⁻¹y) + j(Σ⁻¹y − C_X⁻¹C_YXΣ⁻¹x)]
         = ½[{x′Σ⁻¹x + x′C_X⁻¹C_YXΣ⁻¹y + y′Σ⁻¹y − y′C_X⁻¹C_YXΣ⁻¹x}
            + j{x′Σ⁻¹y − x′C_X⁻¹C_YXΣ⁻¹x − y′Σ⁻¹x − y′C_X⁻¹C_YXΣ⁻¹y}].

We now use the fact that since each of the terms in the last line is a scalar, it is
equal to its transpose; also C_XY = −C_YX. The real part then becomes
x′Σ⁻¹x + 2x′C_X⁻¹C_YXΣ⁻¹y + y′Σ⁻¹y. Since

x′Σ⁻¹C_YXC_X⁻¹x = (x′Σ⁻¹C_YXC_X⁻¹x)′ = −x′C_X⁻¹C_YXΣ⁻¹x = −x′Σ⁻¹C_YXC_X⁻¹x,

and similarly for y′C_X⁻¹C_YXΣ⁻¹y, these two imaginary terms are zero; and
x′Σ⁻¹y − y′Σ⁻¹x = 0 by symmetry of Σ⁻¹. Hence z^HK⁻¹z equals (∗∗).

CHAPTER 10

Problem Solutions

1. Write

m_X(t) = E[g(t, Z)] = g(t,1)P(Z = 1) + g(t,2)P(Z = 2) + g(t,3)P(Z = 3)
       = p₁a(t) + p₂b(t) + p₃c(t),

and

R_X(t,s) = E[g(t,Z)g(s,Z)] = g(t,1)g(s,1)p₁ + g(t,2)g(s,2)p₂ + g(t,3)g(s,3)p₃
         = a(t)a(s)p₁ + b(t)b(s)p₂ + c(t)c(s)p₃.

2. Following the hint from Chapter 2 of the text, write

0 ≤ ∫|g(θ) − λh(θ)|² dθ = ∫|g(θ)|² dθ − 2λ∫g(θ)h(θ) dθ + λ²∫|h(θ)|² dθ.

Then put

λ = ∫g(θ)h(θ) dθ / ∫|h(θ)|² dθ

to get

0 ≤ ∫|g(θ)|² dθ − 2(∫g(θ)h(θ) dθ)²/∫|h(θ)|² dθ + (∫g(θ)h(θ) dθ)²/∫|h(θ)|² dθ
  = ∫|g(θ)|² dθ − (∫g(θ)h(θ) dθ)²/∫|h(θ)|² dθ,

which rearranges to the Cauchy–Schwarz inequality (∫gh)² ≤ ∫g² ∫h².


3. Write

C_X(t₁,t₂) = E[X_{t₁}X_{t₂}] − m_X(t₁)E[X_{t₂}] − E[X_{t₁}]m_X(t₂) + m_X(t₁)m_X(t₂)
           = E[X_{t₁}X_{t₂}] − m_X(t₁)m_X(t₂) − m_X(t₁)m_X(t₂) + m_X(t₁)m_X(t₂)
           = R_X(t₁,t₂) − m_X(t₁)m_X(t₂).

Similarly,

C_XY(t₁,t₂) = E[X_{t₁}Y_{t₂}] − m_X(t₁)E[Y_{t₂}] − E[X_{t₁}]m_Y(t₂) + m_X(t₁)m_Y(t₂)
            = E[X_{t₁}Y_{t₂}] − m_X(t₁)m_Y(t₂) − m_X(t₁)m_Y(t₂) + m_X(t₁)m_Y(t₂)
            = R_XY(t₁,t₂) − m_X(t₁)m_Y(t₂).

4. Write

0 ≤ E[ | Σ_{i=1}^n c_i X_{t_i} |² ] = E[ Σ_{i=1}^n Σ_{k=1}^n c_i X_{t_i} X_{t_k} c_k ]
  = Σ_{i=1}^n Σ_{k=1}^n c_i E[X_{t_i} X_{t_k}] c_k = Σ_{i=1}^n Σ_{k=1}^n c_i R_X(t_i, t_k) c_k.

5. f_{X_t}(x) = e^{−x²/(2t)}/√(2πt).

6. Let

Y_n = Σ_{k=−∞}^∞ h(k)X_{n−k}.

(a) Write

m_Y(n) = E[Y_n] = E[ Σ_k h(k)X_{n−k} ] = Σ_k h(k)E[X_{n−k}] = Σ_k h(k)m_X(n − k).

(b) Write

E[X_n Y_m] = E[ X_n Σ_k h(k)X_{m−k} ] = Σ_k h(k)E[X_n X_{m−k}] = Σ_k h(k)R_X(n, m − k).

(c) Write

E[Y_n Y_m] = E[ Σ_l h(l)X_{n−l} Y_m ] = Σ_l h(l)E[X_{n−l} Y_m]
           = Σ_l h(l)R_XY(n − l, m) = Σ_l h(l)[ Σ_k h(k)R_X(n − l, m − k) ].

7. (a) Consider the choices t₁ = 0 and t₂ = (π/2)/(2πf). Then X_{t₁} = cos(Θ) and X_{t₂} =
−sin(Θ), which are not jointly continuous.

(b) Write

E[g(X_t)] = E[g(cos(2πft + Θ))] = ∫_{−π}^{π} g(cos(2πft + θ)) dθ/(2π)
          = ∫_{−π+2πft}^{π+2πft} g(cos(θ)) dθ/(2π) = ∫_{−π}^{π} g(cos(θ)) dθ/(2π),

since the integrand has period 2π and the range of integration has length 2π.
Thus, E[g(X_t)] does not depend on t.

8. (a) Using independence and a trigonometric identity, write

E[Y_{t₁}Y_{t₂}] = E[X_{t₁}X_{t₂} cos(2πft₁ + Θ)cos(2πft₂ + Θ)]
             = E[X_{t₁}X_{t₂}]·E[cos(2πft₁ + Θ)cos(2πft₂ + Θ)]
             = ½R_X(t₁ − t₂)·E[cos(2πf[t₁ − t₂]) + cos(2πf[t₁ + t₂] + 2Θ)]
             = ½R_X(t₁ − t₂)·{cos(2πf[t₁ − t₂]) + E[cos(2πf[t₁ + t₂] + 2Θ)]}   (the last expectation is zero)
             = ½R_X(t₁ − t₂)cos(2πf[t₁ − t₂]).

(b) E[X_{t₁}Y_{t₂}] = E[X_{t₁}X_{t₂} cos(2πft₂ + Θ)] = E[X_{t₁}X_{t₂}]·E[cos(2πft₂ + Θ)] = 0.

(c) It is clear that Y_t is zero mean. Together with part (a) it follows that Y_t is WSS.

9. By Problem 7(b), F_{X_t}(x) = P(X_t ≤ x) = E[I_{(−∞,x]}(X_t)] does not depend on t, and so
we can restrict attention to the case t = 0. Since X₀ = cos(Θ) has the arcsine density
of Problem 35 in Chapter 5, f(x) = (1/π)/√(1 − x²) for |x| < 1.

10. Second-order strict stationarity means that for every two-dimensional set B, for every
t₁, t₂, and Δt, P((X_{t₁+Δt}, X_{t₂+Δt}) ∈ B) does not depend on Δt. In particular, this is true
whenever B has the form B = A × ℝ for any one-dimensional set A; i.e.,

P((X_{t₁+Δt}, X_{t₂+Δt}) ∈ B) = P((X_{t₁+Δt}, X_{t₂+Δt}) ∈ A × ℝ) = P(X_{t₁+Δt} ∈ A)

does not depend on Δt. Hence X_t is first-order strictly stationary.


11. (a) If p₃ = 1 and p₁ = p₂ = 0, then X_t = c(t) = −1 with probability one. Then
E[X_t] = −1 does not depend on t, and E[X_{t₁}X_{t₂}] = (−1)² = 1 depends on t₁
and t₂ only through their difference t₁ − t₂; in fact the correlation function is a
constant function of t₁ − t₂. Thus, X_t is WSS.

(b) If p₁ = 1 and p₂ = p₃ = 0, then X_t = e^{−|t|} with probability one. Then E[X_t] =
e^{−|t|} depends on t. Hence, X_t is not WSS.

(c) First, the only way to have X₀ = 1 is to have X_t = a(t) = e^{−|t|}, which requires
Z = 1. Hence,

P(X₀ = 1) = P(Z = 1) = p₁.

Second, the only way to have X_t = −1 for all t is to have X_t = c(t) = −1,
which requires Z = 3. Hence, the probability is p₃.
Third, the only way to have X_t ≤ 0 for 0.5 ≤ t ≤ 1 is to have X_t = b(t) = sin(2πt)
or X_t = c(t) = −1. Hence, the probability is p₂ + p₃.

12. Write

E[e^{j(ν₁Y_{1+m} + ⋯ + ν_nY_{n+m})}] = E[e^{j{ν₁q(X_{1+m},...,X_{m+L}) + ⋯ + ν_nq(X_{n+m},...,X_{n+m+L−1})}}].

The exponential on the right is just a function of X_{1+m},...,X_{n+L−1+m}. Since the X_k
process is strictly stationary, the expectation on the right is unchanged if we replace
X_{1+m},...,X_{n+L−1+m} by X₁,...,X_{n+L−1}; i.e., the above right-hand side is equal to

E[e^{j{ν₁q(X₁,...,X_L) + ⋯ + ν_nq(X_n,...,X_{n+L−1})}}] = E[e^{j(ν₁Y₁ + ⋯ + ν_nY_n)}].

13. We begin with

E[g(X₀)] = E[X₀ I_{[0,∞)}(X₀)] = ∫ x I_{[0,∞)}(x)·(λ/2)e^{−λ|x|} dx = ½∫₀^∞ x·λe^{−λx} dx,

which is just 1/2 times the expectation of an exp(λ) random variable. We thus see
that E[g(X₀)] = 1/(2λ). Next, for n ≠ 0, we compute

E[g(X_n)] = ∫ x I_{[0,∞)}(x)·e^{−x²/2}/√(2π) dx = (1/√(2π))∫₀^∞ x e^{−x²/2} dx
          = (1/√(2π))[−e^{−x²/2}]₀^∞ = 1/√(2π).

Hence, E[g(X₀)] ≠ E[g(X_n)] for n ≠ 0, and it follows that X_k is not strictly stationary.


14. First consider the mean function. Write

E[q(t + T)] = (1/T₀)∫₀^{T₀} q(t + θ) dθ = (1/T₀)∫_t^{t+T₀} q(θ) dθ = (1/T₀)∫₀^{T₀} q(θ) dθ,

where we have used the fact that since q has period T₀, the integral of q over any interval
of length T₀ yields the same result. The second thing to consider is the correlation
function. Write

E[q(t₁ + T)q(t₂ + T)] = (1/T₀)∫₀^{T₀} q(t₁ + θ)q(t₂ + θ) dθ
                     = (1/T₀)∫_{t₂}^{t₂+T₀} q(t₁ − t₂ + θ)q(θ) dθ
                     = (1/T₀)∫₀^{T₀} q([t₁ − t₂] + θ)q(θ) dθ,

where we have used the fact that as a function of θ, the product q([t₁ − t₂] + θ)q(θ)
has period T₀. Since the mean function does not depend on t, and since the correlation
function depends on t₁ and t₂ only through their difference, X_t is WSS.

15. For arbitrary functions h, write

E[h(X_{t₁+Δt},...,X_{t_n+Δt})] = E[h(q(t₁ + Δt + T),...,q(t_n + Δt + T))]
  = (1/T₀)∫₀^{T₀} h(q(t₁ + Δt + θ),...,q(t_n + Δt + θ)) dθ
  = (1/T₀)∫_{Δt}^{Δt+T₀} h(q(t₁ + θ),...,q(t_n + θ)) dθ
  = (1/T₀)∫₀^{T₀} h(q(t₁ + θ),...,q(t_n + θ)) dθ,

where the last step follows because we are integrating a function of period T₀ over an
interval of length T₀. Hence, the joint distributions do not depend on Δt, and X_t is strictly stationary.

16. First write E[Y_n] = E[X_n − X_{n−1}] = E[X_n] − E[X_{n−1}] = 0 since E[X_n] does not depend
on n. Next, write

E[Y_nY_m] = R_X(n − m) − R_X([n−1] − m) − R_X(n − [m−1]) + R_X(n − m)
          = 2R_X(n − m) − R_X(n − m − 1) − R_X(n − m + 1),

which depends on n and m only through their difference.

17. From the Fourier transform table, S_X(f) = √(2π) e^{−(2πf)²/2}.

18. From the Fourier transform table, S_X(f) = πe^{−2π|f|}.


19. (a) Since correlation functions are real and even, we can write

S_X(f) = ∫ R_X(τ)e^{−j2πfτ} dτ = ∫ R_X(τ)cos(2πfτ) dτ
       = 2∫₀^∞ R_X(τ)cos(2πfτ) dτ = 2 Re ∫₀^∞ R_X(τ)e^{−j2πfτ} dτ.

(b) OMITTED.

(c) [Plots of S_X(f): at the left, the full view over −6 ≤ f ≤ 6; at the right, the plot focused closer to f = 0, over −1 ≤ f ≤ 1.]

(d) [Plots as in (c): full view over −6 ≤ f ≤ 6 and zoomed view over −1 ≤ f ≤ 1.]

(b) Since R_X(n) is real and even, we can write

S_X(f) = Σ_{n=−∞}^∞ R_X(n)e^{−j2πfn} = Σ_n R_X(n)[cos(2πfn) − j sin(2πfn)]
       = Σ_n R_X(n)cos(2πfn) − j Σ_n R_X(n)sin(2πfn)
       = Σ_n R_X(n)cos(2πfn),

since R_X(n)sin(2πfn) is an odd function of n, so its sum vanishes.

21. (a) Since correlation functions are real and even, we can write

S_X(f) = Σ_{n=−∞}^∞ R_X(n)e^{−j2πfn}
       = R_X(0) + Σ_{n=1}^∞ R_X(n)e^{−j2πfn} + Σ_{n=−∞}^{−1} R_X(n)e^{−j2πfn}
       = R_X(0) + Σ_{n=1}^∞ R_X(n)e^{−j2πfn} + Σ_{k=1}^∞ R_X(k)e^{j2πfk}
       = R_X(0) + 2 Σ_{n=1}^∞ R_X(n)cos(2πfn)
       = R_X(0) + 2 Re Σ_{n=1}^∞ R_X(n)e^{−j2πfn}.

(b) OMITTED.

(c) [Plot over −0.5 ≤ f ≤ 0.5.]

(d) [Plot over −0.5 ≤ f ≤ 0.5.]

22. Write

H(−f) = ∫ h(t)e^{−j2π(−f)t} dt = ∫ h(τ)e^{j2πfτ} dτ
      = ( ∫ h(τ)e^{−j2πfτ} dτ )*, since h is real,
      = H(f)*.
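The conjugate-symmetry property H(−f) = H(f)* can be observed with a DFT; a small sketch (the random filter taps are illustrative, and DFT bin N − k plays the role of frequency −k):

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.standard_normal(64)     # a real impulse response (taps are arbitrary)
H = np.fft.fft(h)
# For real h, the DFT satisfies H[N-k] = H[k]*.
neg_freqs = H[-1:0:-1]          # H at bins N-1, ..., 1, i.e., the negative frequencies
```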

23. With Y_t = dX_t/dt and R_X(τ) = e^{−τ²/2}, it follows that

R_XY(τ) = −(d/dτ)R_X(τ) = −(d/dτ)e^{−τ²/2} = τe^{−τ²/2},

and

R_Y(τ) = −(d²/dτ²)R_X(τ) = (d/dτ)R_XY(τ) = (d/dτ)τe^{−τ²/2} = e^{−τ²/2}(1 − τ²).

Similarly, since h(t) = 3 sin(πt)/(πt), we have from the transform table that H(f) =
3I_{[−1/2,1/2]}(f).

25. First note that since RX ( ) = e 2 e(2 f ) /2 .

, SX ( f ) =

2 2 2

(a) SXY ( f ) = H( f ) SX ( f ) = e(2 f ) /2 ] 2 e(2 f ) /2 = 2 e(2 f ) .

(b) Writing

1 2 2

SXY ( f ) = 2 2e( 2) (2 f ) /2 ,

2

we have from the transform table that

1 2 1 2

RXY ( ) = e( / 2) /2 = e /4 .

2 2

(c) Write

1 2

E[Xt1 Yt2 ] = RXY (t1 t2 ) = e(t1 t2 ) /4 .

2

2 2 2

(d) SY ( f ) = |H( f )|2 SX ( f ) = e(2 f ) 2 e(2 f ) /2 = 2 e3(2 f ) /2 .

(e) Writing

1 2 2

SY ( f ) = 2 3e( 3) (2 f ) /2 ,

3

we have from the transform table that

1 2 1 2

RY ( ) = e( / 3) /2 = e /6 .

3 3

26. We have from the transform table that S_X(f) = [sin(πf)/(πf)]². The goal is to
choose a filter H(f) so that R_Y(τ) = sin(πτ)/(πτ); i.e., so that S_Y(f) = I_{[−1/2,1/2]}(f).
Thus, the formula S_Y(f) = |H(f)|²S_X(f) becomes

I_{[−1/2,1/2]}(f) = |H(f)|²[sin(πf)/(πf)]².

We therefore take

H(f) = { πf/sin(πf), |f| ≤ 1/2;  0, |f| > 1/2. }


27. Since Y_t and Z_t are responses of LTI systems to a WSS input, Y_t and Z_t are individually
WSS. If we can show that E[Y_{t₁}Z_{t₂}] depends on t₁ and t₂ only through their difference,
then Y_t and Z_t will be J-WSS. We show this to be the case. Write

E[Y_{t₁}Z_{t₂}] = E[ ∫h(θ)X_{t₁−θ} dθ · ∫g(τ)X_{t₂−τ} dτ ]
             = ∫∫ h(θ)g(τ)E[X_{t₁−θ}X_{t₂−τ}] dθ dτ
             = ∫∫ h(θ)g(τ)R_X([t₁ − θ] − [t₂ − τ]) dθ dτ,

which depends on t₁ and t₂ only through t₁ − t₂.

28. Observe that if h(t) := δ(t) − δ(t − 1), then ∫h(τ)X_{t−τ} dτ = X_t − X_{t−1}.

(a) Since Y_t := X_t − X_{t−1} is the response of an LTI system to a WSS input, X_t and Y_t
are J-WSS.

(b) Since H(f) = 1 − e^{−j2πf},

|H(f)|² = 2 − (e^{j2πf} + e^{−j2πf}) = 2[1 − cos(2πf)],

we have

S_Y(f) = |H(f)|²S_X(f) = 2[1 − cos(2πf)]·2/(1 + (2πf)²) = 4[1 − cos(2πf)]/(1 + (2πf)²).
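The identity |1 − e^{−j2πf}|² = 2[1 − cos(2πf)] used in (b) is easy to verify numerically:

```python
import numpy as np

f = np.linspace(-0.5, 0.5, 1001)
H = 1 - np.exp(-2j * np.pi * f)          # H(f) = 1 - e^{-j2 pi f}
lhs = np.abs(H)**2
rhs = 2 * (1 - np.cos(2 * np.pi * f))    # claimed equal magnitude-squared
```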

29. In Y_t = ∫_{t−3}^t X_τ dτ, make the change of variable θ = t − τ, dθ = −dτ, to get

Y_t = ∫₀³ X_{t−θ} dθ = ∫ I_{[0,3]}(θ)X_{t−θ} dθ.

This shows that Y_t is the response to X_t of the LTI system with impulse response h(t) = I_{[0,3]}(t).
Hence, Y_t is WSS.

30. Apply the formula

H(0) = ∫ h(t)e^{−j2πft} dt |_{f=0}

with h(t) = (1/π)[(sin t)/t]². Then from the table, H(f) = (1 − π|f|)I_{[−1/π,1/π]}(f),
and we find that

1 = (1/π)∫ [(sin t)/t]² dt,   which is equivalent to   π = ∫ [(sin t)/t]² dt.

31. (a) Write

E[X_nY_m] = Σ_k h(k)R_X(n, m − k) = Σ_k h(k)R_X(n − m + k).

(b) Similarly,

E[Y_nY_m] = Σ_l h(l) Σ_k h(k) R_X(n − l, m − k)
          = Σ_l h(l) Σ_k h(k) R_X(n − l − [m − k])
          = Σ_l h(l) Σ_k h(k) R_X([n − m] − [l − k]).

From (a),

R_XY(n) = Σ_k h(k)R_X(n + k),

and so

S_XY(f) = Σ_n R_XY(n)e^{−j2πfn} = Σ_n [ Σ_k h(k)R_X(n + k) ] e^{−j2πfn}
        = Σ_k h(k) Σ_n R_X(n + k)e^{−j2πfn}
        = Σ_k h(k) Σ_m R_X(m)e^{−j2πf(m−k)}
        = Σ_k h(k)e^{j2πfk} Σ_m R_X(m)e^{−j2πfm}
        = ( Σ_k h(k)e^{−j2πfk} )* S_X(f), since h(k) is real,
        = H(f)* S_X(f).

Similarly, from (b),

R_Y(n) = Σ_l h(l) Σ_k h(k) R_X(n − [l − k]),

and so

S_Y(f) = Σ_n Σ_l h(l) Σ_k h(k) R_X(n − [l − k]) e^{−j2πfn}
       = Σ_l h(l) Σ_k h(k) Σ_n R_X(n − [l − k])e^{−j2πfn}
       = Σ_l h(l) Σ_k h(k) Σ_m R_X(m)e^{−j2πf(m + [l − k])}
       = Σ_l h(l)e^{−j2πfl} Σ_k h(k)e^{j2πfk} Σ_m R_X(m)e^{−j2πfm}
       = H(f)H(f)* S_X(f) = |H(f)|² S_X(f).

32. For nT₀ ≤ T < (n + 1)T₀, write

(1/2T)∫₀^T |x(t)|² dt = nE₀/(2T) + (1/2T)∫_{nT₀}^T |x(t)|² dt.

We first observe that since

T₀ ≤ T/n < T₀ + T₀/n,

T/n → T₀. It follows that

nE₀/(2T) = E₀/(2(T/n)) → E₀/(2T₀).

We next show that the integral on the right goes to zero. Write

(1/2T)∫_{nT₀}^T |x(t)|² dt ≤ (1/2T)∫₀^{T₀} |x(t)|² dt = E₀/(2T) → 0.

A similar argument shows that

(1/2T)∫_{−T}^0 |x(t)|² dt → E₀/(2T₀).

Putting this all together shows that (1/2T)∫_{−T}^T |x(t)|² dt → E₀/T₀.

33. Write

E[ ∫ X_t² dt ] = ∫ E[X_t²] dt = ∫ R_X(0) dt = ∞.

34. If the function q(W) := ∫₀^W [S₁(f) − S₂(f)] df is identically zero, then so is its derivative,
q′(W) = S₁(W) − S₂(W). But then S₁(f) = S₂(f) for all f ≥ 0.

35. If h(t) = I_{[−T,T]}(t), and white noise is applied to the corresponding system, the cross
power spectral density of the input and output is

S_XY(f) = H(f)*·N₀/2 = 2T·[sin(2πTf)/(2πTf)]·N₀/2,

which is real and even, but not nonnegative. Similarly, if h(t) = e^{−t}I_{[0,∞)}(t),

S_XY(f) = (N₀/2)/(1 − j2πf),

which is complex valued.

36. First write

S_Y(f) = |H(f)|²·N₀/2 = { (1 − f²)²N₀/2, |f| ≤ 1;  0, |f| > 1. }

Then

P_Y = ∫ S_Y(f) df = 2·(N₀/2)∫₀¹ (1 − 2f² + f⁴) df
    = N₀[ f − (2/3)f³ + (1/5)f⁵ ]₀¹ = N₀[1 − 2/3 + 1/5] = 8N₀/15.
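The value P_Y = 8N₀/15 can be confirmed by numerically integrating S_Y(f); a minimal sketch (N₀ = 1 chosen for convenience):

```python
import numpy as np

N0 = 1.0
f = np.linspace(-1.0, 1.0, 200_001)
df = f[1] - f[0]
SY = (1 - f**2)**2 * N0 / 2     # SY(f) on |f| <= 1; zero elsewhere
PY = np.sum(SY) * df            # Riemann sum (the integrand vanishes at the endpoints)
```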


37. First note that R_X(τ) = e^{−(2πτ)²/2} has power spectral density S_X(f) = e^{−f²/2}/√(2π).
Using the definition of H(f) = √|f| I_{[−1,1]}(f),

E[Y_t²] = R_Y(0) = ∫ S_Y(f) df = ∫ |H(f)|²S_X(f) df = ∫_{−1}^1 |f|S_X(f) df
        = 2∫₀¹ f S_X(f) df = √(2/π)∫₀¹ f e^{−f²/2} df = √(2/π)[−e^{−f²/2}]₀¹
        = (1 − e^{−1/2})√(2/π).

38. First write

S_Y(f) = |H(f)|²S_X(f) = [sin(πf)/(πf)]²·N₀/2.

This is the Fourier transform of

R_Y(τ) = (1 − |τ|)I_{[−1,1]}(τ)·N₀/2.

39. (a) First note that

H(f) = (1/(RC))/((1/(RC)) + j2πf) = 1/(1 + j(2πf)RC).

Then

S_XY(f) = H(f)* S_X(f) = (N₀/2)/(1 − j(2πf)RC).

(b) The inverse Fourier transform of S_XY(f) = H(f)*·N₀/2 is

R_XY(τ) = (N₀/(2RC))e^{τ/(RC)}u(−τ).

(d) S_Y(f) = |H(f)|²S_X(f) = (N₀/2)/(1 + (2πfRC)²).

(e) Since

S_Y(f) = (N₀/2)/(1 + (2πfRC)²) = (N₀/(4RC))·[2(1/(RC))/((1/(RC))² + (2πf)²)],

we have that

R_Y(τ) = (N₀/(4RC))e^{−|τ|/(RC)}.


40. To begin, write E[Y_{t+1/2}Y_t] = R_Y(1/2). Next, since the input has power spectral density
N₀/2 and since h(t) = 1/(1 + t²) has transform H(f) = πe^{−2π|f|}, we can write

S_Y(f) = |H(f)|²S_X(f) = |πe^{−2π|f|}|²·N₀/2 = π²e^{−4π|f|}·N₀/2 = π²e^{−2π(2)|f|}·N₀/2.

Then

R_Y(τ) = (π²N₀/2)·(1/π)·2/(2² + τ²) = πN₀/(4 + τ²),

and so E[Y_{t+1/2}Y_t] = R_Y(1/2) = πN₀/(4 + 1/4) = 4πN₀/17.

41. Since H(f) = sin(πTf)/(πTf), we can write

S_Y(f) = |H(f)|²·N₀/2 = T[sin(πTf)/(πTf)]²·N₀/(2T).

Then

R_Y(τ) = (N₀/(2T))(1 − |τ|/T)I_{[−T,T]}(τ).

42. To begin, write

    Y_t = e^{-t} ∫_{-∞}^{t} e^{θ} X_θ dθ = ∫_{-∞}^{t} e^{-(t-θ)} X_θ dθ = ∫_{-∞}^{∞} e^{-(t-θ)} u(t-θ) X_θ dθ,

where u is the unit-step function. We then see that Y_t is the response to X_t of the LTI system with impulse response h(t) := e^{-t} u(t). Hence, we know from the text that X_t and Y_t are jointly wide-sense stationary. Next, since S_X(f) = N_0/2, R_X(τ) = (N_0/2)δ(τ). We then compute in the time domain,

    R_XY(τ) = ∫ h(θ) R_X(τ+θ) dθ = (N_0/2) ∫ h(θ) δ(τ+θ) dθ = (N_0/2) h(-τ)
            = (N_0/2) e^{τ} u(-τ).

Next,

    S_XY(f) = H(f)* S_X(f) = (N_0/2)/(1 - j2πf),

and

    S_Y(f) = |H(f)|² S_X(f) = (N_0/2)/(1 + (2πf)²).

It then follows that R_Y(τ) = (N_0/4) e^{-|τ|}.

43. Consider the impulse response

    h(τ) := Σ_{n=-∞}^{∞} h_n δ(τ - n).

Then

    ∫ h(τ) X_{t-τ} dτ = Σ_{n=-∞}^{∞} h_n ∫ δ(τ - n) X_{t-τ} dτ = Σ_{n=-∞}^{∞} h_n X_{t-n} =: Y_t.

(a) Since Y_t is the response of the LTI system with impulse response h(t) to the WSS input X_t, X_t and Y_t are J-WSS.

(b) Since Y_t is the response of the LTI system with impulse response h(t) to the WSS input X_t, S_Y(f) = |H(f)|² S_X(f), where

    H(f) = ∫ h(τ) e^{-j2πfτ} dτ = Σ_{n=-∞}^{∞} h_n ∫ δ(τ - n) e^{-j2πfτ} dτ = Σ_{n=-∞}^{∞} h_n e^{-j2πfn}

has period one. Hence, P(f) = |H(f)|² is real, nonnegative, and has period one.

44. When the input power spectral density is S_W(f) = 3, the output power spectral density is |H(f)|² · 3. We are also told that this output power spectral density is equal to e^{-f²}. Hence, |H(f)|² · 3 = e^{-f²}, or |H(f)|² = e^{-f²}/3. Next, if S_X(f) = e^{f²} I_{[-1,1]}(f), then S_Y(f) = |H(f)|² S_X(f) = (e^{-f²}/3) e^{f²} I_{[-1,1]}(f) = (1/3) I_{[-1,1]}(f). It then follows that

    R_Y(τ) = (1/3) · 2 sin(2πτ)/(2πτ) = (2/3) sin(2πτ)/(2πτ).

45. Since H(f) = G I_{[-B,B]}(f) and Y_t is the response to white noise, the output power spectral density is S_Y(f) = G² I_{[-B,B]}(f) N_0/2, and so

    R_Y(τ) = G² (N_0/2) · 2B sin(2πBτ)/(2πBτ) = G²BN_0 sin(2πBτ)/(2πBτ).

Note that

    R_Y(k∆t) = R_Y(k/(2B)) = G²BN_0 sin(2πBk/(2B))/(2πBk/(2B)) = G²BN_0 sin(πk)/(πk),

which is G²BN_0 for k = 0 and zero otherwise. It is obvious that the X_i are zero mean. Since E[X_i X_j] = R_Y([i-j]∆t), the X_i are uncorrelated with variance E[X_i²] = R_Y(0) = G²BN_0.

46. (a) First write R_X(τ) = E[X_{t+τ} X_t*]. Then

    R_X(-τ) = E[X_{t-τ} X_t*] = (E[X_t X_{t-τ}*])* = R_X(τ)*.

(b) Since

    S_X(f) = ∫ R_X(τ) e^{-j2πfτ} dτ,

we can write

    S_X(f)* = ∫ R_X(τ)* e^{j2πfτ} dτ = ∫ R_X(-t)* e^{-j2πft} dt
            = ∫ R_X(t) e^{-j2πft} dt, by part (a),
            = S_X(f).

Since S_X(f) is equal to its complex conjugate, S_X(f) is real.

(c) Write

    E[X_{t_1} Y_{t_2}*] = E[X_{t_1} (∫ h(θ) X_{t_2-θ} dθ)*] = ∫ h(θ)* E[X_{t_1} X_{t_2-θ}*] dθ
                        = ∫ h(θ)* R_X([t_1-t_2] + θ) dθ = ∫ h(-θ)* R_X([t_1-t_2] - θ) dθ.

Hence,

    R_XY(τ) = ∫ h(-θ)* R_X(τ - θ) dθ,

which is the convolution of h(-·)* and R_X. Hence, the transform of this equation is the product of the transform of h(-·)* and S_X. We just have to observe that

    ∫ h(-τ)* e^{-j2πfτ} dτ = ∫ h(t)* e^{j2πft} dt = (∫ h(t) e^{-j2πft} dt)* = H(f)*.

(d) Write

    R_Y(τ) = E[Y_{t+τ} Y_t*] = E[(∫ h(θ) X_{t+τ-θ} dθ) Y_t*]
           = ∫ h(θ) E[X_{t+τ-θ} Y_t*] dθ = ∫ h(θ) R_XY(τ-θ) dθ,

and so

    S_Y(f) = H(f) S_XY(f) = H(f) H(f)* S_X(f) = |H(f)|² S_X(f).

Z Z

47. (a) RX ( ) = SX ( f )e j2 f d f = ( f )e j2 f d f = e j2 0 = 1.

(b) Write

Z

RX ( ) = [ ( f f0 ) + ( f + f0 )]e j2 f d f = e j2 f0 + e j2 f0

= 2 cos(2 f0 ).

2 /2

h 1 2 2 1 i 2

SX ( f ) = e f = e( 2 ) (2 f ) /2 2

2 2

h 1 2 2 1 i

= e( 2 ) (2 f ) /2 2 2 .

2

Chapter 10 Problem Solutions 179

2 /2

RX ( ) = 2 e(2 ) .

1 1/(2 ) 2

RX ( ) = = .

(1/(2 ))2 + 2 1 + (2 )2

48. Write

    E[X_t²] = R_X(0) = ∫ S_X(f) e^{j2πf·0} df = ∫ S_X(f) df = ∫_{-W}^{W} 1 df = 2W.

49. (b) e^{-f²} cos(πf) is not nonnegative.

(c) (1 - f²)/(1 + f⁴) is not nonnegative.

(d) 1/(1 + jπf²) is not real valued.

50. (a) Since sin is odd, it is NOT a valid correlation function.

(b) Since the Fourier transform of cos(2πτ) is [δ(f - 1) + δ(f + 1)]/2, which is real, even, and nonnegative, cos(2πτ) IS a valid correlation function.

(c) Since the Fourier transform of e^{-τ²/2} is √(2π) e^{-(2πf)²/2}, which is real, even, and nonnegative, e^{-τ²/2} IS a valid correlation function.

(d) Since the Fourier transform of e^{-|τ|} is 2/[1 + (2πf)²], which is real, even, and nonnegative, e^{-|τ|} IS a valid correlation function.

(e) Since the value of τ²e^{-|τ|} at τ = 0 is less than the value for other values of τ, and a correlation function must achieve its maximum at the origin, τ²e^{-|τ|} is NOT a valid correlation function.

(f) Since the Fourier transform of I_{[-T,T]}(τ), which is 2T sin(2πT f)/(2πT f), is not nonnegative, I_{[-T,T]}(τ) is NOT a valid correlation function.

51. Since R_0(τ) is a correlation function, S_0(f) is real, even, and nonnegative. Since R(τ) = R_0(τ) cos(2πf_0τ),

    S(f) = (1/2)[S_0(f - f_0) + S_0(f + f_0)],

which is therefore real and nonnegative. It is also even, since

    S(-f) = (1/2)[S_0(-f - f_0) + S_0(-f + f_0)]
          = (1/2)[S_0(f + f_0) + S_0(f - f_0)], since S_0 is even,
          = S(f).


52. The Fourier transform of R̃(τ) := R(τ + τ_0) + R(τ - τ_0) is S̃(f) = 2S(f) cos(2πfτ_0). Hence, the answer cannot be (a) because it is possible to have S(f) > 0 and cos(2πfτ_0) < 0 for some values of f. Let S(f) = I_{[-1/(4τ_0), 1/(4τ_0)]}(f), which is real, even, and nonnegative. Hence, its inverse transform, which we denote by R(τ), is a correlation function. In this case, S̃(f) = 2S(f) cos(2πfτ_0) ≥ 0 for all f, and is real and even too. Hence, for this choice of R(τ), R̃(τ) is a correlation function. Therefore, the answer is (b).

53. To begin, write

    R(τ) = ∫ S(f) e^{j2πfτ} df = ∫ S(f)[cos(2πfτ) + j sin(2πfτ)] df.

Since S is real and even, the integral of S(f) sin(2πfτ) is zero, and we have

    R(τ) = ∫ S(f) cos(2πfτ) df,

which is a real and even function of τ. Finally,

    |R(τ)| = |∫ S(f) e^{j2πfτ} df| ≤ ∫ |S(f) e^{j2πfτ}| df = ∫ |S(f)| df = ∫ S(f) df = R(0).

54. Let S_0(f) denote the Fourier transform of R_0(τ), and let S(f) denote the Fourier transform of R(τ).

(a) The derivation in the text showing that the transform of a correlation function is real and even uses only the fact that correlation functions are real and even. Hence, S_0(f) is real and even. Furthermore, since R is the convolution of R_0 with itself, S(f) = S_0(f)², which is real, even, and nonnegative. Hence, R(τ) is a correlation function.

(b) If R_0(τ) = I_{[-T,T]}(τ), then

    S_0(f) = 2T sin(2πT f)/(2πT f) and S(f) = [2T sin(2πT f)/(2πT f)]².

Hence, R(τ) = 2T (1 - |τ|/(2T)) I_{[-2T,2T]}(τ).

55. Taking S_X(f) ≡ N_0/2 as in the text,

    h(t) = v(t_0 - t) = sin(t_0 - t) I_{[0,π]}(t_0 - t).

Then h is causal for t_0 ≥ π.

56. Since v(t) = e^{-(t/√2)²/2}, V(f) = √2·√(2π) e^{-2(2πf)²/2} = 2√π e^{-(2πf)²}. Then

    H(f) = V(f) e^{-j2πft_0} / S_X(f) = 2√π e^{-(2πf)²} e^{-j2πft_0} / e^{-(2πf)²/2}
         = 2√π e^{-(2πf)²/2} e^{-j2πft_0} = √2·√(2π) e^{-(2πf)²/2} e^{-j2πft_0},

and it follows that h(t) = √2 e^{-(t-t_0)²/2}.


57. Let v_0(n) := Σ_k h(n-k)v(k) and Y_n := Σ_k h(n-k)X_k. The SNR is v_0(n_0)²/E[Y_{n_0}²]. We have

    E[Y_{n_0}²] = ∫_{-1/2}^{1/2} S_Y(f) df = ∫_{-1/2}^{1/2} |H(f)|² S_X(f) df.

By the Cauchy–Schwarz inequality,

    |v_0(n_0)|² = |∫_{-1/2}^{1/2} H(f)V(f) e^{j2πfn_0} df|²
                = |∫_{-1/2}^{1/2} H(f)√(S_X(f)) · (V(f) e^{j2πfn_0}/√(S_X(f))) df|²
                ≤ ∫_{-1/2}^{1/2} |H(f)|² S_X(f) df · ∫_{-1/2}^{1/2} (|V(f) e^{j2πfn_0}|²/S_X(f)) df,

with equality if and only if

    H(f)√(S_X(f)) = α (V(f) e^{j2πfn_0})*/√(S_X(f))    (#)

for some constant α. It is now clear that the SNR is upper bounded by

    ∫_{-1/2}^{1/2} (|V(f) e^{j2πfn_0}|²/S_X(f)) df,

and that the SNR equals the bound if and only if (#) holds with equality for some constant α. Hence, the matched filter transfer function is

    H(f) = V(f)* e^{-j2πfn_0} / S_X(f).

58. First write

    E[U_t] = E[V_t + X_t] = E[V_t] + E[X_t],

which does not depend on t since V_t and X_t are each individually WSS. Next write

    E[U_{t_1} V_{t_2}] = E[(V_{t_1} + X_{t_1})V_{t_2}] = R_V(t_1 - t_2) + R_XV(t_1 - t_2).    (∗)

Now write

    E[U_{t_1} U_{t_2}] = E[U_{t_1}(V_{t_2} + X_{t_2})] = E[U_{t_1} V_{t_2}] + E[U_{t_1} X_{t_2}].

By (∗), the term E[U_{t_1}V_{t_2}] depends on t_1 and t_2 only through their difference. Since the same argument shows that E[U_{t_1}X_{t_2}] depends on t_1 and t_2 only through their difference, it follows that E[U_{t_1}U_{t_2}] depends on t_1 and t_2 only through their difference. Hence, U_t and V_t are J-WSS.


59. The assumptions in the problem imply that V_t and X_t are J-WSS, and by the preceding problem, it follows that U_t and V_t are J-WSS. We can therefore apply the formulas for the Wiener filter derived in the text. It just remains to compute the quantities used in the formulas. First,

    R_VU(τ) = E[V_{t+τ} U_t] = E[V_{t+τ}(V_t + X_t)] = R_V(τ) + R_VX(τ) = R_V(τ),

which implies S_VU(f) = S_V(f). Similarly,

    R_U(τ) = E[U_{t+τ} U_t] = E[(V_{t+τ} + X_{t+τ})U_t]
           = R_VU(τ) + E[X_{t+τ}(V_t + X_t)]
           = R_V(τ) + R_XV(τ) + R_X(τ) = R_V(τ) + R_X(τ),

and so S_U(f) = S_V(f) + S_X(f). We then have

    H(f) = S_VU(f)/S_U(f) = S_V(f)/(S_V(f) + S_X(f)).

60. With S_V(f) = (1 - |f|) I_{[-1,1]}(f) and S_X(f) = 1 - I_{[-1,1]}(f),

    H(f) = S_V(f)/(S_V(f) + S_X(f)) = (1 - |f|) I_{[-1,1]}(f) / [(1 - |f|) I_{[-1,1]}(f) + 1 - I_{[-1,1]}(f)]
         = (1 - |f|) I_{[-1,1]}(f) / (1 - |f| I_{[-1,1]}(f)) = I_{[-1,1]}(f),

and so

    h(t) = 2 sin(2πt)/(2πt).

61. To begin, write

    E[|V_t - V̂_t|²] = E[(V_t - V̂_t)(V_t - V̂_t)] = E[(V_t - V̂_t)V_t] - E[(V_t - V̂_t)V̂_t]
                    = E[(V_t - V̂_t)V_t], by the orthogonality principle,
                    = E[V_t²] - E[V̂_t V_t] = R_V(0) - E[V̂_t V_t] = ∫ S_V(f) df - E[V̂_t V_t].

Next,

    E[V̂_t V_t] = E[(∫ h(θ) U_{t-θ} dθ) V_t] = ∫ h(θ) E[V_t U_{t-θ}] dθ
               = ∫ h(θ) R_VU(θ) dθ = ∫ h(θ) R_VU(θ)* dθ, since R_VU(θ) is real,
               = ∫ H(f) S_VU(f)* df, by Parseval's formula,
               = ∫ (S_VU(f)/S_U(f)) S_VU(f)* df = ∫ (|S_VU(f)|²/S_U(f)) df.

Putting these two observations together yields

    E[|V_t - V̂_t|²] = ∫ S_V(f) df - ∫ (|S_VU(f)|²/S_U(f)) df = ∫ [S_V(f) - |S_VU(f)|²/S_U(f)] df.


62. Denote the optimal estimator by V̂_n = Σ_{k=-∞}^{∞} h(k)U_{n-k}, and denote any other estimator by Ṽ_n = Σ_{k=-∞}^{∞} h̃(k)U_{n-k}. The discrete-time orthogonality principle says that if

    E[(V_n - V̂_n) Σ_{k=-∞}^{∞} h̃(k)U_{n-k}] = 0    (∗)

for every h̃, then h is optimal in that E[|V_n - V̂_n|²] ≤ E[|V_n - Ṽ_n|²] for every h̃. To establish the orthogonality principle, assume the above equation holds for every choice of h̃. Then we can write

    E[|V_n - Ṽ_n|²] = E[|(V_n - V̂_n) + (V̂_n - Ṽ_n)|²]
                    = E[|V_n - V̂_n|² + 2(V_n - V̂_n)(V̂_n - Ṽ_n) + |V̂_n - Ṽ_n|²]
                    = E[|V_n - V̂_n|²] + 2E[(V_n - V̂_n)(V̂_n - Ṽ_n)] + E[|V̂_n - Ṽ_n|²].    (∗∗)

Since

    E[(V_n - V̂_n)(V̂_n - Ṽ_n)] = E[(V_n - V̂_n)(Σ_k h(k)U_{n-k} - Σ_k h̃(k)U_{n-k})]
                               = E[(V_n - V̂_n) Σ_k [h(k) - h̃(k)]U_{n-k}] = 0, by (∗),

it follows from (∗∗) that E[|V_n - Ṽ_n|²] ≥ E[|V_n - V̂_n|²].

The next task is to characterize the filter h that satisfies the orthogonality condition for every choice of h̃. Write the orthogonality condition as

    0 = E[(V_n - V̂_n) Σ_k h̃(k)U_{n-k}] = Σ_k h̃(k) E[(V_n - V̂_n)U_{n-k}]
      = Σ_k h̃(k)[R_VU(k) - R_V̂U(k)].

Since this must hold for all h̃, take h̃(k) = R_VU(k) - R_V̂U(k) to get

    Σ_k |R_VU(k) - R_V̂U(k)|² = 0.

Thus, the orthogonality condition holds for all h̃ if and only if R_VU(k) = R_V̂U(k) for all k.

The next task is to analyze R_V̂U. Recall that V̂_n is the response of an LTI system to input U_n. Applying the result of Problem 31(a) with X replaced by U and Y replaced by V̂, we have, also using the fact that R_U is even,

    R_V̂U(m) = R_UV̂(-m) = Σ_k h(k) R_U(m - k).

Thus, the requirement

    R_VU(m) = R_V̂U(m) = Σ_k h(k) R_U(m - k)

yields, upon taking discrete-time Fourier transforms,

    S_VU(f) = H(f) S_U(f), and so H(f) = S_VU(f)/S_U(f).

63. We have

    H(f) = S_V(f)/(S_V(f) + S_X(f)) = (2λ/[λ² + (2πf)²]) / (2λ/[λ² + (2πf)²] + 1)
         = 2λ/(λ² + 2λ + (2πf)²) = (λ/A) · 2A/(A² + (2πf)²),

where A := √(λ² + 2λ). Hence, h(t) = (λ/A) e^{-A|t|}.

64. To begin, write

    K(f) = (λ + j2πf)/(A + j2πf) = λ · 1/(A + j2πf) + j2πf · 1/(A + j2πf).

Then

    k(t) = λ e^{-At} u(t) + (d/dt)[e^{-At} u(t)]
         = λ e^{-At} u(t) - A e^{-At} u(t) + e^{-At} δ(t)
         = (λ - A) e^{-At} u(t) + δ(t),

since e^{-At} δ(t) = δ(t) for both t = 0 and for t ≠ 0. This is a causal impulse response.

65. Let Z_t := V_{t+∆t}. Then the causal Wiener filter for Z_t yields the prediction or smoothing filter for V_{t+∆t}. The Wiener–Hopf equation for Z_t is

    R_ZU(τ) = ∫_0^∞ h_∆t(θ) R_U(τ - θ) dθ, τ ≥ 0.

Since R_ZU(τ) = E[Z_{t+τ} U_t] = R_VU(τ + ∆t), this becomes

    R_VU(τ + ∆t) = ∫_0^∞ h_∆t(θ) R_U(τ - θ) dθ, τ ≥ 0.

For white noise U_t with R_U(τ) = δ(τ),

    R_VU(τ + ∆t) = ∫_0^∞ h_∆t(θ) δ(τ - θ) dθ = h_∆t(τ), τ ≥ 0.

If h(t) = R_VU(t)u(t) denotes the causal Wiener filter, then for ∆t ≥ 0 (prediction), we can write

    h_∆t(τ) = R_VU(τ + ∆t) = h(τ + ∆t), τ ≥ 0.

If ∆t < 0 (smoothing), we can write h_∆t(τ) = h(τ + ∆t) only for τ ≥ -∆t. For 0 ≤ τ < -∆t, h(τ + ∆t) = 0 while h_∆t(τ) = R_VU(τ + ∆t).

66. By the hint, the limit of the double sums is the desired double integral. If we can show that each of these double sums is nonnegative, then the limit will also be nonnegative. To this end put Z_i := X_{t_i} e^{-j2πf t_i} ∆t_i. Then

    0 ≤ E[|Σ_{i=1}^{n} Z_i|²] = E[Σ_{i=1}^{n} Z_i Σ_{k=1}^{n} Z_k*] = Σ_{i=1}^{n} Σ_{k=1}^{n} E[Z_i Z_k*]
      = Σ_{i=1}^{n} Σ_{k=1}^{n} E[X_{t_i} X_{t_k}] e^{-j2πf t_i} e^{j2πf t_k} ∆t_i ∆t_k
      = Σ_{i=1}^{n} Σ_{k=1}^{n} R_X(t_i - t_k) e^{-j2πf t_i} e^{j2πf t_k} ∆t_i ∆t_k.

67. (a) The Fourier transform of C_Y(τ) is continuous at f = 0. Hence, we have convergence in mean square of (1/(2T)) ∫_{-T}^{T} Y_t dt to E[Y_t].

(b) The Fourier transform of C_Y(τ) = sin(πτ)/(πτ) is I_{[-1/2,1/2]}(f), which is continuous at f = 0. Hence, we have convergence in mean square of (1/(2T)) ∫_{-T}^{T} Y_t dt to E[Y_t].

68. We first point out that this is not a question about mean-square convergence. Write

    (1/(2T)) ∫_{-T}^{T} cos(2πt + θ) dt = [sin(2πT + θ) - sin(2π(-T) + θ)]/(2T · 2π).

Since |sin x| ≤ 1, we can write

    |(1/(2T)) ∫_{-T}^{T} cos(2πt + θ) dt| ≤ 2/(4πT) → 0 as T → ∞.

69. As suggested by the hint, put Y_t := X_{t+τ} X_t. It will be sufficient if Y_t is WSS and if the Fourier transform of the covariance function of Y_t is continuous at the origin. First, since X_t is WSS, the mean of Y_t is

    E[Y_t] = E[X_{t+τ} X_t] = R_X(τ),

which does not depend on t. Before examining the correlation function of Y_t, we assume that X_t is fourth-order strictly stationary so that

    E[Y_{t_1} Y_{t_2}] = E[X_{t_1+τ} X_{t_1} X_{t_2+τ} X_{t_2}]

must be unchanged if on the right-hand side we subtract t_2 from every subscript expression to get

    E[X_{t_1-t_2+τ} X_{t_1-t_2} X_τ X_0].

Since this depends on t_1 and t_2 only through their difference, we see that Y_t is WSS if X_t is fourth-order strictly stationary. Now, the covariance function of Y_t is

    C(θ) = E[X_{θ+τ} X_θ X_τ X_0] - R_X(τ)².

If the Fourier transform of this function of θ is continuous at the origin, then

    (1/(2T)) ∫_{-T}^{T} X_{t+τ} X_t dt → R_X(τ) in mean square.

70. As suggested by the hint, put Y_t := I_B(X_t). It will be sufficient if Y_t is WSS and if the Fourier transform of the covariance function of Y_t is continuous at the origin. We assume at the outset that X_t is second-order strictly stationary. Then the mean of Y_t is

    E[Y_t] = E[I_B(X_t)] = P(X_t ∈ B),

which does not depend on t. Similarly,

    E[Y_{t_1} Y_{t_2}] = E[I_B(X_{t_1}) I_B(X_{t_2})] = P(X_{t_1} ∈ B, X_{t_2} ∈ B)

must be unchanged if on the right-hand side we subtract t_2 from every subscript to get

    P(X_{t_1-t_2} ∈ B, X_0 ∈ B).

Since this depends on t_1 and t_2 only through their difference, we see that Y_t is WSS if X_t is second-order strictly stationary. Now, the covariance function of Y_t is

    C(θ) = P(X_θ ∈ B, X_0 ∈ B) - P(X_t ∈ B)².

If the Fourier transform of this function of θ is continuous at the origin, then

    (1/(2T)) ∫_{-T}^{T} I_B(X_t) dt → P(X_t ∈ B) in mean square.

71. Write

    R̄_XY(τ) := lim_{T→∞} (1/(2T)) ∫_{-T}^{T} R_XY(τ + θ, θ) dθ
             = lim_{T→∞} (1/(2T)) ∫_{-T}^{T} ∫ h(β) R_X(τ + θ, θ - β) dβ dθ
             = ∫ h(β) [lim_{T→∞} (1/(2T)) ∫_{-T}^{T} R_X(τ + θ, θ - β) dθ] dβ
             = ∫ h(β) [lim_{T→∞} (1/(2T)) ∫_{-T}^{T} R_X((τ + β) + θ, θ) dθ] dβ
             = ∫ h(β) R̄_X(τ + β) dβ,

where the next-to-last step makes the change of variable θ → θ + β, and the bracketed limit is R̄_X(τ + β).


72. Write

    R̄_Y(τ) := lim_{T→∞} (1/(2T)) ∫_{-T}^{T} R_Y(τ + θ, θ) dθ
            = lim_{T→∞} (1/(2T)) ∫_{-T}^{T} ∫ h(β) R_XY(τ + θ - β, θ) dβ dθ
            = ∫ h(β) [lim_{T→∞} (1/(2T)) ∫_{-T}^{T} R_XY([τ - β] + θ, θ) dθ] dβ
            = ∫ h(β) R̄_XY(τ - β) dβ,

where the bracketed limit is R̄_XY(τ - β).

73. Let S̄_XY(f) denote the Fourier transform of R̄_XY(τ), and let S̄_Y(f) denote the Fourier transform of R̄_Y(τ). Then by the preceding two problems,

    S̄_XY(f) = H(f)* S̄_X(f) and S̄_Y(f) = H(f) S̄_XY(f) = |H(f)|² S̄_X(f).

74. This is an instance of Problem 32.

CHAPTER 11

Problem Solutions

    P(N_t > 3) = 1 - P(N_t ≤ 3) = 1 - e^{-λt}[1 + λt + (λt)²/2! + (λt)³/3!]
               = 1 - e^{-4}[1 + 4 + 4²/2 + 4³/6] = 1 - e^{-4}(5 + 8 + 32/3)
               = 1 - e^{-4}(39/3 + 32/3) = 1 - e^{-4}(71/3) = 0.5665.
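The numerical value can be confirmed directly from the Poisson pmf (a sketch with λt = 4):

```python
import math

lam_t = 4.0  # λt = 4
# P(Nt > 3) = 1 - sum_{k=0}^{3} (λt)^k e^{-λt} / k!
p = 1 - sum(lam_t**k * math.exp(-lam_t) / math.factorial(k) for k in range(4))
print(round(p, 4))                 # 0.5665
print(1 - math.exp(-4) * 71 / 3)   # same value, from the closed form above
```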

3. (a) P(N_5 = 10) = (2·5)^{10} e^{-2·5} / 10! = 0.125.

(b) We have

    P(∩_{i=1}^{5}{N_i - N_{i-1} = 2}) = Π_{i=1}^{5} P(N_i - N_{i-1} = 2) = Π_{i=1}^{5} (2² e^{-2}/2!)
      = (2e^{-2})⁵ = 32e^{-10} = 1.453 × 10^{-3}.
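Both numbers can be checked directly from the Poisson pmf:

```python
import math

def poisson_pmf(k, mu):
    return mu**k * math.exp(-mu) / math.factorial(k)

pa = poisson_pmf(10, 2 * 5)   # (a) N5 ~ Poisson(2*5)
pb = poisson_pmf(2, 2) ** 5   # (b) five independent Poisson(2) increments
print(round(pa, 3), pb)       # 0.125 and 32*e**(-10) ≈ 1.453e-03
```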

4. Let N_t denote the number of crates sold through time t (in days). Then N_i - N_{i-1} is the number of crates sold on day i, and so

    P(∩_{i=1}^{5}{N_i - N_{i-1} ≥ 3}) = Π_{i=1}^{5} P(N_i - N_{i-1} ≥ 3) = Π_{i=1}^{5} [1 - P(N_i - N_{i-1} ≤ 2)]
      = Π_{i=1}^{5} [1 - e^{-λ}(1 + λ + λ²/2!)]
      = [1 - e^{-3}(1 + 3 + 9/2)]⁵ = 0.06385.
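A numerical check of the final value (λ = 3 crates per day):

```python
import math

lam = 3.0
p_le_2 = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(3))
p = (1 - p_le_2) ** 5   # at least 3 crates sold on each of the 5 days
print(round(p, 5))      # 0.06385
```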

5. Let N_t denote the number of fishing rods sold through time t (in days). Then N_i - N_{i-1} is the number of rods sold on day i, and so

    P(∪_{i=1}^{5}{N_i - N_{i-1} ≥ 3}) = 1 - P(∩_{i=1}^{5}{N_i - N_{i-1} ≤ 2})
      = 1 - Π_{i=1}^{5} P(N_i - N_{i-1} ≤ 2)
      = 1 - Π_{i=1}^{5} e^{-λ}(1 + λ + λ²/2!)
      = 1 - [e^{-2}(1 + 2 + 4/2!)]⁵ = 1 - e^{-10}·5⁵ = 0.858.
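A numerical check (λ = 2 rods per day):

```python
import math

lam = 2.0
p_le_2 = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(3))
p = 1 - p_le_2 ** 5   # 3 or more sold on at least one of the 5 days
print(round(p, 3))    # 0.858
```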

6. Since the average time between hit songs is 7 months, the rate is λ = 1/7 per month.

(a) Since a year is 12 months, we write

    P(N_12 ≥ 3) = 1 - e^{-12/7}[1 + 12/7 + (12/7)²/2] = 0.247.

(b) Let T_n denote the time of the nth hit song. Since T_n = X_1 + ··· + X_n, E[T_n] = nE[X_1] = 7n. For n = 10, E[T_10] = 70 months.
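Part (a) can be checked numerically:

```python
import math

lam = 1.0 / 7        # hit songs per month
mu = lam * 12        # mean number of hits in a year
p = 1 - sum(mu**k * math.exp(-mu) / math.factorial(k) for k in range(3))
print(round(p, 3))   # P(N12 >= 3) = 0.247
```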

7. (a) Since N_0 ≡ 0, N_t = N_t - N_0. Since (0,t] ∩ (t, t+∆t] = ∅, N_t - N_0 and N_{t+∆t} - N_t are independent increments.

(b) Write

    P(N_{t+∆t} = k+ℓ | N_t = k) = P(N_{t+∆t} - N_t = ℓ | N_t - N_0 = k), since N_0 ≡ 0,
                                = P(N_{t+∆t} - N_t = ℓ), by independent increments.

(c) Write

    P(N_t = k | N_{t+∆t} = k+ℓ) = P(N_{t+∆t} = k+ℓ | N_t = k) P(N_t = k) / P(N_{t+∆t} = k+ℓ)
      = P(N_{t+∆t} - N_t = ℓ) P(N_t = k) / P(N_{t+∆t} = k+ℓ), by part (b),
      = [((λ∆t)^ℓ e^{-λ∆t}/ℓ!)((λt)^k e^{-λt}/k!)] / ([λ(t+∆t)]^{k+ℓ} e^{-λ(t+∆t)}/(k+ℓ)!)
      = C(k+ℓ, k) (t/(t+∆t))^k (∆t/(t+∆t))^ℓ.

In other words, with p = t/(t+∆t),

    P(N_t = k | N_{t+∆t} = n) = C(n, k) p^k (1-p)^{n-k}, k = 0, ..., n.
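The binomial form of the conditional pmf can be verified numerically for arbitrary (illustrative, hypothetical) values of λ, t, ∆t, n, and k:

```python
import math

def poisson_pmf(k, mu):
    return mu**k * math.exp(-mu) / math.factorial(k)

lam, t, dt = 1.3, 2.0, 0.7   # illustrative values
n, k = 6, 2

# left side: built from the independent Poisson increments
direct = (poisson_pmf(k, lam * t) * poisson_pmf(n - k, lam * dt)
          / poisson_pmf(n, lam * (t + dt)))

# right side: binomial(n, p) with p = t/(t + dt)
p = t / (t + dt)
binom = math.comb(n, k) * p**k * (1 - p)**(n - k)

print(direct, binom)  # equal
```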

8. Since T_n ∼ Erlang(n, λ),

    E[T_n] = Γ(1+n)/(λΓ(n)) = nΓ(n)/(λΓ(n)) = n/λ.

Alternatively, since T_n = X_1 + ··· + X_n, where the X_i are i.i.d. exp(λ), E[T_n] = nE[X_i] = n/λ.

(b) P(N_2 = 0) = e^{-λ·2} = e^{-4} = 0.0183.

(c) E[N_12] = λ·12 = 24 snowstorms.

(d) Write

    P(∪_{i=1}^{12}{N_i - N_{i-1} ≥ 5}) = 1 - P(∩_{i=1}^{12}{N_i - N_{i-1} ≤ 4})
      = 1 - [e^{-λ}(1 + λ + λ²/2 + λ³/6 + λ⁴/24)]^{12}
      = 1 - [e^{-2}(1 + 2 + 2 + 4/3 + 2/3)]^{12}
      = 1 - [7e^{-2}]^{12} = 1 - 0.523 = 0.477.

(b) Write

    P(∪_{i=1}^{4}{N_i - N_{i-1} ≥ 2}) = 1 - P(∩_{i=1}^{4}{N_i - N_{i-1} ≤ 1})
      = 1 - Π_{i=1}^{4} P(N_i - N_{i-1} ≤ 1)
      = 1 - Π_{i=1}^{4} [e^{-λ}·1 + e^{-λ}·λ]
      = 1 - [e^{-λ}(1 + λ)]⁴ = 1 - [(3/2)e^{-1/2}]⁴
      = 1 - (81/16)e^{-2} = 0.315.
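A numerical check (λ = 1/2 per interval, as the numbers above imply):

```python
import math

lam = 0.5
p_le_1 = math.exp(-lam) * (1 + lam)
p = 1 - p_le_1 ** 4
print(round(p, 3))                       # 0.315
print(1 - (81 / 16) * math.exp(-2))      # same value, from the closed form above
```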

11. We need to find var(T_n) = var(X_1 + ··· + X_n). Since the X_i are independent, they are uncorrelated, and so the variance of the sum is the sum of the variances. Since X_i ∼ exp(λ), var(T_n) = n·var(X_i) = n/λ². An alternative approach is to use the fact that T_n ∼ Erlang(n, λ). Since the moments of T_n are available,

    var(T_n) = E[T_n²] - (E[T_n])² = (1+n)n/λ² - (n/λ)² = n/λ².

12. To begin, use the law of total probability, substitution, and independence to write

    E[z^{Y_t}] = E[z^{N_{ln(1+tU)}}] = ∫_0^1 E[z^{N_{ln(1+tU)}} | U = u] du = ∫_0^1 E[z^{N_{ln(1+tu)}} | U = u] du
               = ∫_0^1 E[z^{N_{ln(1+tu)}}] du = ∫_0^1 exp[(z-1) ln(1+tu)] du = ∫_0^1 e^{ln(1+tu)^{z-1}} du
               = ∫_0^1 (1+tu)^{z-1} du.

Carrying out the integration, we find that

    E[z^{Y_t}] = { ln(1+t)/t,            z = 0;
                   [(1+t)^z - 1]/(tz),   z ≠ 0. }

13. Denote the arrival times of N_t by T_1, T_2, ..., and let X_k := T_k - T_{k-1} denote the interarrival times. Similarly, denote the arrival times of M_t by S_1, S_2, .... (As it turns out, we do not need the interarrival times of M_t.) Then for arbitrary k > 1, we use the law of total probability, substitution, and independence to compute

    P(M_{T_k} - M_{T_{k-1}} = m)
      = ∫_0^∞ ∫_τ^∞ P(M_{T_k} - M_{T_{k-1}} = m | T_k = t, T_{k-1} = τ) f_{T_k T_{k-1}}(t, τ) dt dτ
      = ∫_0^∞ ∫_τ^∞ P(M_t - M_τ = m | T_k = t, T_{k-1} = τ) f_{T_k T_{k-1}}(t, τ) dt dτ
      = ∫_0^∞ ∫_τ^∞ P(M_t - M_τ = m) f_{T_k T_{k-1}}(t, τ) dt dτ
      = ∫_0^∞ ∫_τ^∞ ([μ(t-τ)]^m e^{-μ(t-τ)}/m!) f_{T_k T_{k-1}}(t, τ) dt dτ
      = E[[μ(T_k - T_{k-1})]^m e^{-μ(T_k - T_{k-1})}/m!]
      = E[(μX_k)^m e^{-μX_k}/m!] = ∫_0^∞ ((μx)^m e^{-μx}/m!) λe^{-λx} dx
      = (λμ^m/[(λ+μ)m!]) ∫_0^∞ x^m (λ+μ)e^{-(λ+μ)x} dx,

where the last integral is the mth moment of the exp(λ+μ) density, namely m!/(λ+μ)^m. Hence,

    P(M_{T_k} - M_{T_{k-1}} = m) = (λμ^m/[(λ+μ)m!]) · m!/(λ+μ)^m = (λ/(λ+μ)) (μ/(λ+μ))^m,

which is a geometric_0(μ/(λ+μ)) pmf in m.

14. It suffices to show that the probability generating function G_{M_t}(z) has the form e^{σ(z-1)} for some σ > 0. We use the law of total probability, substitution, and independence to write

    G_{M_t}(z) = E[z^{M_t}] = E[z^{Σ_{i=1}^{N_t} Y_i}] = Σ_{n=0}^{∞} E[z^{Σ_{i=1}^{N_t} Y_i} | N_t = n] P(N_t = n)
               = Σ_{n=0}^{∞} E[z^{Σ_{i=1}^{n} Y_i} | N_t = n] P(N_t = n) = Σ_{n=0}^{∞} E[z^{Σ_{i=1}^{n} Y_i}] P(N_t = n)
               = Σ_{n=0}^{∞} (Π_{i=1}^{n} E[z^{Y_i}]) P(N_t = n) = Σ_{n=0}^{∞} [pz + (1-p)]^n P(N_t = n)
               = G_{N_t}(pz + (1-p)) = e^{λt([pz+(1-p)]-1)} = e^{pλt(z-1)}.

Thus, M_t ∼ Poisson(pλt).


15. Let M_t := Σ_{i=1}^{N_t} V_i denote the total energy through time t, with M_t = 0 for N_t = 0. We use the law of total probability, substitution, and independence to write

    E[M_t] = Σ_{n=0}^{∞} E[M_t | N_t = n] P(N_t = n) = Σ_{n=1}^{∞} E[Σ_{i=1}^{N_t} V_i | N_t = n] P(N_t = n)
           = Σ_{n=1}^{∞} E[Σ_{i=1}^{n} V_i | N_t = n] P(N_t = n) = Σ_{n=1}^{∞} (Σ_{i=1}^{n} E[V_i]) P(N_t = n)
           = Σ_{n=1}^{∞} nE[V_1] P(N_t = n) = E[V_1] Σ_{n=1}^{∞} nP(N_t = n) = E[V_1] E[N_t] = E[V_1](λt).

16. We have

    λ ∈ 5.170 ± 2.132(1.96)/10 = 5.170 ± 0.418 with 95% probability.

In other words, λ ∈ [4.752, 5.588] with 95% probability.
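The interval arithmetic checks out (a sketch; the values 5.170, 2.132, 1.96, and the divisor 10 are taken as given from the solution):

```python
half_width = 2.132 * 1.96 / 10
lo, hi = 5.170 - half_width, 5.170 + half_width
print(round(half_width, 3), round(lo, 3), round(hi, 3))  # 0.418 4.752 5.588
```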

17. To begin, observe that

    Y = Σ_{i} g(T_i) = Σ_{i} Σ_{k=1}^{n} g_k I_{(t_{k-1},t_k]}(T_i) = Σ_{k=1}^{n} g_k Σ_{i} I_{(t_{k-1},t_k]}(T_i)
      = Σ_{k=1}^{n} g_k (N_{t_k} - N_{t_{k-1}}).

It follows that

    E[Y] = Σ_{k=1}^{n} g_k E[N_{t_k} - N_{t_{k-1}}] = Σ_{k=1}^{n} g_k λ(t_k - t_{k-1}) = λ ∫_0^∞ g(τ) dτ,

and

    φ_Y(ν) = E[e^{jνY}] = E[e^{jν Σ_{k=1}^{n} g_k (N_{t_k} - N_{t_{k-1}})}] = Π_{k=1}^{n} E[e^{jν g_k (N_{t_k} - N_{t_{k-1}})}]
           = Π_{k=1}^{n} exp[λ(t_k - t_{k-1})(e^{jνg_k} - 1)] = exp[Σ_{k=1}^{n} λ(t_k - t_{k-1})(e^{jνg_k} - 1)]
           = exp[λ ∫_0^∞ (e^{jνg(τ)} - 1) dτ]

(the last step uses the fact that e^{jνg(τ)} - 1 = 0 for τ that does not lie in any (t_{k-1}, t_k]). We now compute the correlation,

    E[YZ] = E[Σ_k g_k (N_{t_k} - N_{t_{k-1}}) Σ_l h_l (N_{t_l} - N_{t_{l-1}})]
          = Σ_{k≠l} g_k h_l E[(N_{t_k} - N_{t_{k-1}})(N_{t_l} - N_{t_{l-1}})] + Σ_k g_k h_k E[(N_{t_k} - N_{t_{k-1}})²]
          = Σ_{k≠l} g_k h_l λ(t_k - t_{k-1}) λ(t_l - t_{l-1}) + Σ_k g_k h_k [λ(t_k - t_{k-1}) + λ²(t_k - t_{k-1})²]
          = Σ_{k,l} g_k h_l λ²(t_k - t_{k-1})(t_l - t_{l-1}) + Σ_k g_k h_k λ(t_k - t_{k-1})
          = λ ∫_0^∞ g(τ) dτ · λ ∫_0^∞ h(τ) dτ + λ ∫_0^∞ g(τ)h(τ) dτ.

Hence,

    cov(Y, Z) = E[YZ] - E[Y]E[Z] = λ ∫_0^∞ g(τ)h(τ) dτ.

18. The key is to use g(τ) = h(t - τ) in the preceding problem. It then immediately follows that

    E[Y_t] = λ ∫_0^∞ h(t-τ) dτ,  φ_{Y_t}(ν) = exp[λ ∫_0^∞ (e^{jνh(t-τ)} - 1) dτ],

and

    cov(Y_t, Y_s) = λ ∫_0^∞ h(t-τ)h(s-τ) dτ.

19. Replace the line

X = -log(rand(1))/lambda; % Generate exp(lambda) RV

with

X = randn(1)^2; % Generate chi-squared RV
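A Python analogue of this substitution (a sketch, not the book's MATLAB code): both generators below are standard, and the sample means land near the true means E[X] = 1/λ and E[Z²] = 1.

```python
import math
import random

random.seed(0)

def exp_rv(lam):
    # inverse-CDF method: X = -ln(U)/lam is exp(lam)
    return -math.log(random.random()) / lam

def chi2_rv():
    # squaring a standard normal gives a chi-squared RV (1 degree of freedom)
    return random.gauss(0.0, 1.0) ** 2

n = 100000
mean_exp = sum(exp_rv(2.0) for _ in range(n)) / n
mean_chi2 = sum(chi2_rv() for _ in range(n)) / n
print(mean_exp, mean_chi2)  # near 0.5 and 1.0
```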

20. If N_t is a Poisson process of rate λ, then F_X is the exp(λ) cdf. Hence,

    P(X_1 < Y_1) = ∫_0^∞ F_X(y) f_Y(y) dy = ∫_0^∞ [1 - e^{-λy}] f_Y(y) dy = 1 - M_Y(-λ).

If Y_1 ∼ exp(μ), then

    P(X_1 < Y_1) = 1 - μ/(μ - (-λ)) = 1 - μ/(λ + μ) = λ/(λ + μ).

21. Since X_1 ∼ uniform[0, 1], var(X_1) = 1/12, and var(T_n) = n/12.

22. In the case of a Poisson process, T_k is Erlang(k, λ). Hence,

    Σ_{k=1}^{∞} F_k(t) = Σ_{k=1}^{∞} [1 - Σ_{l=0}^{k-1} (λt)^l e^{-λt}/l!] = Σ_{k=1}^{∞} Σ_{l=k}^{∞} (λt)^l e^{-λt}/l!
      = Σ_{k=1}^{∞} Σ_{l=0}^{∞} ((λt)^l e^{-λt}/l!) I_{[k,∞)}(l) = Σ_{l=0}^{∞} ((λt)^l e^{-λt}/l!) Σ_{k=1}^{∞} I_{[k,∞)}(l)
      = Σ_{l=0}^{∞} l (λt)^l e^{-λt}/l! = E[N_t] = λt.

23. (a) Write

    E[N_t | X_1 = x] = E[Σ_{n=1}^{∞} I_{[0,t]}(T_n) | X_1 = x] = E[Σ_{n=1}^{∞} I_{[0,t]}(X_1 + ··· + X_n) | X_1 = x]
                     = E[Σ_{n=1}^{∞} I_{[0,t]}(x + X_2 + ··· + X_n) | X_1 = x].

For x > t, every indicator is zero, and so E[N_t | X_1 = x] = 0 for x > t.

(b) First, for n = 1 and x ≤ t, I_{[0,t]}(x) = 1. Next, for n ≥ 2 and x ≤ t, I_{[0,t]}(x + X_2 + ··· + X_n) = I_{[0,t-x]}(X_2 + ··· + X_n). So,

    E[N_t | X_1 = x] = 1 + E[Σ_{n=2}^{∞} I_{[0,t-x]}(X_2 + ··· + X_n)]
                     = 1 + E[Σ_{n=1}^{∞} I_{[0,t-x]}(X_1 + ··· + X_n)]
                     = 1 + E[N_{t-x}],

since the X_i are i.i.d.

(c) By the law of total probability,

    E[N_t] = ∫_0^∞ E[N_t | X_1 = x] f(x) dx
           = ∫_0^t E[N_t | X_1 = x] f(x) dx + ∫_t^∞ E[N_t | X_1 = x] f(x) dx  (second term = 0 for x > t)
           = ∫_0^t (1 + E[N_{t-x}]) f(x) dx = F(t) + ∫_0^t E[N_{t-x}] f(x) dx.

24. With the understanding that m(t) = 0 for t < 0, we can write the renewal equation as

    m(t) = F(t) + ∫_0^∞ m(t-x) f(x) dx,

where the last term is a convolution. Hence, taking the Laplace transform of the renewal equation yields

    M(s) := ∫_0^∞ m(t)e^{-st} dt = ∫_0^∞ F(t)e^{-st} dt + M(s)M_X(s).

Using integration by parts,

    ∫_0^∞ F(t)e^{-st} dt = [F(t)e^{-st}/(-s)]_0^∞ + (1/s) ∫_0^∞ f(t)e^{-st} dt = (1/s)M_X(s).

Thus,

    M(s) = (1/s)M_X(s) + M(s)M_X(s),

so that

    M(s)[1 - M_X(s)] = (1/s)M_X(s),

or

    M(s) = (1/s) · M_X(s)/(1 - M_X(s)) = (1/s) · (λ/(λ+s))/(1 - λ/(λ+s)) = (1/s)(λ/s) = λ/s².

It follows that m(t) = λt u(t).

25. For 0 ≤ s < t < ∞, write

    E[V_t V_s] = E[∫_0^t X_τ dτ ∫_0^s X_θ dθ] = ∫_0^s ∫_0^t E[X_τ X_θ] dτ dθ
               = ∫_0^s ∫_0^t R_X(τ - θ) dτ dθ = ∫_0^s ∫_0^t σ²δ(τ - θ) dτ dθ
               = ∫_0^s σ² dθ = σ²s.

26. For 0 ≤ s < t, write

    E[W_t W_s] = E[(W_t - W_s)W_s] + E[W_s²] = E[(W_t - W_s)(W_s - W_0)] + σ²s
               = E[W_t - W_s] E[W_s - W_0] + σ²s = 0·0 + σ²s = σ²s.

27. For t_1 < t_2, write

    E[Y_{t_1} Y_{t_2}] = E[e^{W_{t_1}} e^{W_{t_2}}] = E[e^{W_{t_2} - W_{t_1}} e^{2W_{t_1}}] = E[e^{W_{t_2} - W_{t_1}} e^{2(W_{t_1} - W_0)}]
                       = E[e^{W_{t_2} - W_{t_1}}] E[e^{2(W_{t_1} - W_0)}] = e^{σ²(t_2 - t_1)/2} e^{4σ²t_1/2} = e^{σ²(t_2 + 3t_1)/2}.

28. Since cov(W_{t_i}, W_{t_k}) = σ² min(t_i, t_k), for X := [W_{t_1}, ..., W_{t_n}]′,

    cov(X) = σ² [ t_1  t_1  t_1  ···  t_1
                  t_1  t_2  t_2  ···  t_2
                  t_1  t_2  t_3  ···  t_3
                   ⋮    ⋮    ⋮    ⋱    ⋮
                  t_1  t_2  t_3  ···  t_n ].

29. Let 0 ≤ t_1 < ··· < t_{n+1} < ∞ and 0 ≤ s_1 < ··· < s_{m+1} < ∞, and suppose that

    g(τ) = Σ_{i=1}^{n} g_i I_{(t_i, t_{i+1}]}(τ) and h(τ) = Σ_{j=1}^{m} h_j I_{(s_j, s_{j+1}]}(τ).

Denote the distinct points of {t_i} ∪ {s_j} in increasing order by τ_1 < ··· < τ_p. Then

    g(τ) = Σ_{k=1}^{p} ĝ_k I_{(τ_k, τ_{k+1}]}(τ) and h(τ) = Σ_{k=1}^{p} ĥ_k I_{(τ_k, τ_{k+1}]}(τ),

where the ĝ_k are taken from the g_i, and the ĥ_k are taken from the h_j. We can now write

    ∫_0^∞ g(τ) dW_τ + ∫_0^∞ h(τ) dW_τ = Σ_{k=1}^{p} ĝ_k (W_{τ_{k+1}} - W_{τ_k}) + Σ_{k=1}^{p} ĥ_k (W_{τ_{k+1}} - W_{τ_k})
      = Σ_{k=1}^{p} [ĝ_k + ĥ_k](W_{τ_{k+1}} - W_{τ_k})
      = ∫_0^∞ [g(τ) + h(τ)] dW_τ.

30. First write

    E[(∫_0^∞ g(τ) dW_τ - ∫_0^∞ h(τ) dW_τ)²] = E[(∫_0^∞ g(τ) dW_τ)²]
                                              - 2E[∫_0^∞ g(τ) dW_τ ∫_0^∞ h(τ) dW_τ]
                                              + E[(∫_0^∞ h(τ) dW_τ)²]
      = σ² ∫_0^∞ g(τ)² dτ + σ² ∫_0^∞ h(τ)² dτ - 2E[∫_0^∞ g(τ) dW_τ ∫_0^∞ h(τ) dW_τ].

Second, we write

    E[(∫_0^∞ g(τ) dW_τ - ∫_0^∞ h(τ) dW_τ)²] = E[(∫_0^∞ [g(τ) - h(τ)] dW_τ)²]
      = σ² ∫_0^∞ [g(τ) - h(τ)]² dτ
      = σ² ∫_0^∞ g(τ)² dτ + σ² ∫_0^∞ h(τ)² dτ - 2σ² ∫_0^∞ g(τ)h(τ) dτ.

Comparing these two expressions shows that

    E[∫_0^∞ g(τ) dW_τ ∫_0^∞ h(τ) dW_τ] = σ² ∫_0^∞ g(τ)h(τ) dτ.

31. Since

    Y_t := ∫_0^t g(τ) dW_τ = ∫_0^∞ g(τ) I_{[0,t]}(τ) dW_τ,

it follows that

    E[Y_t²] = σ² ∫_0^∞ [g(τ) I_{[0,t]}(τ)]² dτ = σ² ∫_0^t g(τ)² dτ.

Next,

    E[Y_{t_1} Y_{t_2}] = E[∫_0^∞ g(τ) I_{[0,t_1]}(τ) dW_τ ∫_0^∞ g(τ) I_{[0,t_2]}(τ) dW_τ]
                       = σ² ∫_0^∞ g(τ)² I_{[0,t_1]}(τ) I_{[0,t_2]}(τ) dτ = σ² ∫_0^{min(t_1,t_2)} g(τ)² dτ.

32. By independence of V and the Wiener process along with the result of the previous problem,

    R_Y(t_1, t_2) = e^{-λ(t_1+t_2)} E[V²] + σ² e^{-λ(t_1+t_2)} ∫_0^{min(t_1,t_2)} e^{2λθ} dθ.

For t_1 ≤ t_2,

    ∫_0^{t_1} e^{2λθ} dθ = (1/(2λ))(e^{2λt_1} - 1),

and so

    E[Y_{t_1} Y_{t_2}] = e^{-λ(t_1+t_2)}[q² - σ²/(2λ)] + (σ²/(2λ)) e^{-λ(t_2 - t_1)}.

Similarly, if t_2 < t_1,

    E[Y_{t_1} Y_{t_2}] = e^{-λ(t_1+t_2)}[q² - σ²/(2λ)] + (σ²/(2λ)) e^{-λ(t_1 - t_2)}.

In either case, we can write

    E[Y_{t_1} Y_{t_2}] = e^{-λ(t_1+t_2)}[q² - σ²/(2λ)] + (σ²/(2λ)) e^{-λ|t_1 - t_2|}.

33. Write

    E[Y_{t_1} Y_{t_2}] = e^{-λ(t_1+t_2)} E[W_{e^{2λt_1}} W_{e^{2λt_2}}] = e^{-λ(t_1+t_2)} (σ²/(2λ)) min(e^{2λt_1}, e^{2λt_2}).

For t_1 ≤ t_2, this reduces to

    e^{-λ(t_1+t_2)} (σ²/(2λ)) e^{2λt_1} = (σ²/(2λ)) e^{-λ(t_2 - t_1)}.

If t_2 < t_1, we have

    e^{-λ(t_1+t_2)} (σ²/(2λ)) e^{2λt_2} = (σ²/(2λ)) e^{-λ(t_1 - t_2)}.

We conclude that

    E[Y_{t_1} Y_{t_2}] = (σ²/(2λ)) e^{-λ|t_1 - t_2|}.

34. (a) P(t) := E[Y_t²] = E[(∫_0^t g(τ) dW_τ)²] = σ² ∫_0^t g(τ)² dτ.

(b) If g(t) is never zero, then for 0 ≤ t_1 < t_2 < ∞,

    P(t_2) - P(t_1) = σ² ∫_{t_1}^{t_2} g(τ)² dτ > 0,

so P is strictly increasing.

(c) First,

    E[X_t] = E[Y_{P^{-1}(t)}] = E[∫_0^{P^{-1}(t)} g(τ) dW_τ] = 0,

since Wiener integrals have zero mean. Second,

    E[X_t²] = E[(∫_0^{P^{-1}(t)} g(τ) dW_τ)²] = σ² ∫_0^{P^{-1}(t)} g(τ)² dτ = P(P^{-1}(t)) = t.

35. (a) For t ≥ 0, E[W_t²] = E[(W_t - W_0)²] = t.

(b) For s < 0, E[W_s²] = E[(W_0 - W_s)²] = -s.

(c) From parts (a) and (b) we see that no matter what the sign of t, E[W_t²] = |t|. Whether t > s or t < s, we can write E[(W_t - W_s)²] = |t - s|. Expanding gives E[W_t²] - 2E[W_t W_s] + E[W_s²] = |t - s|, and so

    E[W_t W_s] = (E[W_t²] + E[W_s²] - |t - s|)/2 = (|t| + |s| - |t - s|)/2.

36. (a) Write

    P(X = x_k) = P((X,Y) ∈ {x_k} × IR) = Σ_i ∫ I_{{x_k}}(x_i) I_IR(y) f_XY(x_i, y) dy
               = Σ_i I_{{x_k}}(x_i) ∫ f_XY(x_i, y) dy = ∫ f_XY(x_k, y) dy,

where we have used the fact that I_IR(y) = 1.

(b) Write

    P(Y ∈ C) = P((X,Y) ∈ IR × C) = Σ_i ∫ I_IR(x_i) I_C(y) f_XY(x_i, y) dy
             = Σ_i ∫_C f_XY(x_i, y) dy = ∫_C Σ_i f_XY(x_i, y) dy.

(c) Write

    P(Y ∈ C | X = x_k) = P(X = x_k, Y ∈ C)/P(X = x_k) = P((X,Y) ∈ {x_k} × C)/P(X = x_k).

Then since

    P((X,Y) ∈ {x_k} × C) = Σ_i ∫ I_{{x_k}}(x_i) I_C(y) f_XY(x_i, y) dy = ∫_C f_XY(x_k, y) dy,

we have

    P(Y ∈ C | X = x_k) = (∫_C f_XY(x_k, y) dy)/p_X(x_k) = ∫_C (f_XY(x_k, y)/p_X(x_k)) dy.

(d) Write

    ∫ P(X ∈ B | Y = y) f_Y(y) dy = ∫ [Σ_i I_B(x_i) p_{X|Y}(x_i|y)] f_Y(y) dy
      = ∫ Σ_i I_B(x_i) (f_XY(x_i, y)/f_Y(y)) f_Y(y) dy
      = Σ_i I_B(x_i) ∫ f_XY(x_i, y) dy
      = Σ_i ∫ I_B(x_i) I_IR(y) f_XY(x_i, y) dy
      = Σ_i ∫ I_{B×IR}(x_i, y) f_XY(x_i, y) dy
      = P((X,Y) ∈ B × IR)
      = P(X ∈ B, Y ∈ IR) = P(X ∈ B).

37. First note that

    F^{-1}(U) ≤ x ⟹ U ≤ F(x),

and since F^{-1} is nondecreasing,

    U ≤ F(x) ⟹ F^{-1}(U) ≤ x.

Hence, {F^{-1}(U) ≤ x} = {U ≤ F(x)}, and with X := F^{-1}(U) we can write

    P(X ≤ x) = P(F^{-1}(U) ≤ x) = P(U ≤ F(x)) = ∫_0^{F(x)} 1 du = F(x).

0

F(x)

1

3/4 x/2

1/2

1/4

x

0 1 1 2

2

200 Chapter 11 Problem Solutions

(b) For 1/2 ≤ u < 1, B_u = [2u, ∞), and G(u) = 2u. For 1/4 < u < 1/2, B_u = [1, ∞), and G(u) = 1. For u = 1/4, B_u = [1/2, ∞), and G(u) = 1/2. For 0 ≤ u < 1/4, B_u = [√u, ∞), and G(u) = √u. Hence,

[Sketch of G(u): G(u) = √u for 0 ≤ u < 1/4, G(1/4) = 1/2, G(u) = 1 for 1/4 < u ≤ 1/2, and G(u) = 2u for 1/2 < u ≤ 1.]

39. Suppose x ≥ G(u). Since F is nondecreasing and since F(G(u)) ≥ u, we have F(x) ≥ F(G(u)) ≥ u. Now suppose u ≤ F(x). Then by the definition of G(u), G(u) ≤ x. Hence, {G(U) ≤ x} = {U ≤ F(x)}, and

    P(X ≤ x) = P(G(U) ≤ x) = P(U ≤ F(x)) = ∫_0^{F(x)} 1 du = F(x).

function x = G(u)

i1 = find(u <= .25);

i2 = find(.25 < u & u <= .5);

i3 = find(.5 < u & u <= 1);

x(i1) = sqrt(u(i1));

x(i2) = 1;

x(i3) = 2*u(i3);



[Sketch of the density f(x).]

42. The two conditions are

    Σ_j p_{m,n+1}(i_m, ..., i_n, j) = p_{m,n}(i_m, ..., i_n)

and

    Σ_j p_{m-1,n}(j, i_m, ..., i_n) = p_{m,n}(i_m, ..., i_n).

If we multiply both formulas by I_B(i_m, ..., i_n) and sum over all i_m, ..., i_n, we obtain the original consistency conditions.

43. For the first condition, write

    Σ_j p_{m,n+1}(i_m, ..., i_n, j) = Σ_j p_{m,n}(i_m, ..., i_n) r(j|i_n)
      = p_{m,n}(i_m, ..., i_n) Σ_j r(j|i_n) = p_{m,n}(i_m, ..., i_n),

since Σ_j r(j|i_n) = 1. For the second condition, write

    Σ_j p_{m-1,n}(j, i_m, ..., i_n) = Σ_j q(j) r(i_m|j) r(i_{m+1}|i_m) ··· r(i_n|i_{n-1})
      = q(i_m) r(i_{m+1}|i_m) ··· r(i_n|i_{n-1}) = p_{m,n}(i_m, ..., i_n),

since Σ_j q(j) r(i_m|j) = q(i_m).

44. If

    μ_{n+1}(B_n × IR) = ∫···∫ I_{B_n}(x_1, ..., x_n) I_IR(y) f_{n+1}(x_1, ..., x_n, y) dy dx_n ··· dx_1
                      = ∫···∫_{B_n} [∫ f_{n+1}(x_1, ..., x_n, y) dy] dx_n ··· dx_1

is to equal μ_n(B_n), then necessarily the quantity in square brackets is the joint density f_n(x_1, ..., x_n). Conversely, if the quantity in square brackets is equal to f_n(x_1, ..., x_n), we can write

    μ_{n+1}(B_n × IR) = ∫···∫ I_{B_n}(x_1, ..., x_n) I_IR(y) f_{n+1}(x_1, ..., x_n, y) dy dx_n ··· dx_1
                      = ∫···∫_{B_n} [∫ f_{n+1}(x_1, ..., x_n, y) dy] dx_n ··· dx_1
                      = ∫···∫_{B_n} f_n(x_1, ..., x_n) dx_n ··· dx_1 = μ_n(B_n).

45. Write

    μ_{t_1,...,t_{n+1}}(B_{n,k})
      = ∫···∫ I_{B_n}(x_1, ..., x_{k-1}, x_{k+1}, ..., x_n) I_IR(x_k) f_{t_1,...,t_{n+1}}(x_1, ..., x_{n+1}) dx_{n+1} ··· dx_1
      = ∫···∫_{B_n} [∫ f_{t_1,...,t_{n+1}}(x_1, ..., x_{n+1}) dx_k] dx_{n+1} ··· dx_{k+1} dx_{k-1} ··· dx_1.

Hence, consistency holds if and only if

    f_{t_1,...,t_{k-1},t_{k+1},...,t_n}(x_1, ..., x_{k-1}, x_{k+1}, ..., x_n) = ∫ f_{t_1,...,t_{n+1}}(x_1, ..., x_{n+1}) dx_k,

and conversely.

46. By the hint, [W_{t_1}, ..., W_{t_n}]′ is a linear transformation of a vector of independent Gaussian increments, which is a Gaussian vector. Hence, [W_{t_1}, ..., W_{t_n}]′ is a Gaussian vector. Since n and the times t_1 < ··· < t_n are arbitrary, W_t is a Gaussian process.

47. Let X = [W_{t_1} - W_0, ..., W_{t_n} - W_{t_{n-1}}]′, and let Y = [W_{t_1}, ..., W_{t_n}]′. Then Y = AX, where A denotes the matrix given in the statement of Problem 46. Since the components of X are independent,

    f_X(x) = Π_{i=1}^{n} e^{-x_i²/[2(t_i - t_{i-1})]} / √(2π(t_i - t_{i-1})),

where it is understood that t_0 := 0. Using the example suggested in the hint, we have

    f_Y(y) = (f_X(x)/|det A|)|_{x = A^{-1}y}.

It follows that

    f_{t_1,...,t_n}(w_1, ..., w_n) = Π_{i=1}^{n} e^{-(w_i - w_{i-1})²/[2(t_i - t_{i-1})]} / √(2π(t_i - t_{i-1})).

48. By the hint, it suffices to show that C is positive semidefinite. Write

    a′Ca = Σ_{i=1}^{n} Σ_{k=1}^{n} a_i a_k C_{ik} = Σ_{i=1}^{n} Σ_{k=1}^{n} a_i a_k R(t_i - t_k)
         = Σ_{i=1}^{n} Σ_{k=1}^{n} a_i a_k ∫ S(f) e^{j2πf(t_i - t_k)} df
         = ∫ S(f) |Σ_{i=1}^{n} a_i e^{j2πf t_i}|² df ≥ 0.

CHAPTER 12

Problem Solutions

= h(i)P(X = i,Y = j, Z = k).

2. Write

    X_1 = g(X_0, Z_1)
    X_2 = g(X_1, Z_2) = g(g(X_0, Z_1), Z_2)
    X_3 = g(X_2, Z_3) = g(g(g(X_0, Z_1), Z_2), Z_3)
    ⋮

In general, X_n is a function of (X_0, Z_1, ..., Z_n), which is independent of Z_{n+1}. Now observe that

    P(X_{n+1} = i_{n+1} | X_n = i_n, ..., X_0 = i_0) = P(g(i_n, Z_{n+1}) = i_{n+1} | X_n = i_n, ..., X_0 = i_0)
                                                    = P(g(i_n, Z_{n+1}) = i_{n+1}),

which depends only on i_n. Hence, X_n is a Markov chain.

3. Write

    P(A ∩ B | C) = P(A ∩ B ∩ C)/P(C) = [P(A ∩ B ∩ C)/P(B ∩ C)] · [P(B ∩ C)/P(C)] = P(A | B ∩ C) P(B | C).


4. Write

    P(X_0 = i, X_1 = j, X_2 = k, X_3 = l)
      = P(X_3 = l | X_2 = k, X_1 = j, X_0 = i) P(X_2 = k | X_1 = j, X_0 = i) P(X_1 = j | X_0 = i) P(X_0 = i)
      = p_{kl} p_{jk} p_{ij} ν_i.

5. Applying π_j = Σ_k π_k p_{kj} with j = 0 yields π_0 = π_0(1-a) + π_1 b. This tells us that π_1 = (a/b)π_0. Then we use the fact that π_0 + π_1 = 1 to write π_0 + (a/b)π_0 = 1, or π_0 = 1/(1 + a/b) = b/(a+b). We also have π_1 = (a/b)π_0 = a/(a+b).
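A quick numerical check of this stationary distribution (a sketch with illustrative, assumed values a = 0.3 and b = 0.8):

```python
a, b = 0.3, 0.8  # illustrative transition probabilities (assumed values)
pi0, pi1 = b / (a + b), a / (a + b)

# transition matrix of the two-state chain: P(0->1) = a, P(1->0) = b
p00, p01 = 1 - a, a
p10, p11 = b, 1 - b

# stationarity: pi P = pi, and pi sums to one
assert abs(pi0 * p00 + pi1 * p10 - pi0) < 1e-12
assert abs(pi0 * p01 + pi1 * p11 - pi1) < 1e-12
assert abs(pi0 + pi1 - 1) < 1e-12
print("stationary:", pi0, pi1)
```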

6. The state transition diagram is

1/2

1/2 1/2

1/4

1/2

1 2

3/4

7. The state transition diagram is

1/2

1/4

1/4

1/2

1 2

3/4 3/4


8. The state transition diagram is

    [State transition diagram with states 0, 1, 2, 3; the labeled transition probabilities are 1/2, 9/10, and 1/10.]

9. The first equation is

    π_0 = π_0 p_{00} + π_1 p_{10} = π_0(1-a) + π_1 b,

which tells us that π_1 = (a/b)π_0. The second equation gives

    (a+b)π_1 = π_0 a + π_2 b, or (a+b)(a/b)π_0 = π_0 a + π_2 b,

from which it follows that π_2 = (a/b)²π_0. Now suppose that π_i = (a/b)^i π_0 holds for i = 0, ..., j < N. Then from

    (a+b)π_j = π_{j-1} a + π_{j+1} b,

we obtain

    (a+b)(a/b)^j π_0 = (a/b)^{j-1} π_0 a + π_{j+1} b,

from which it follows that π_{j+1} = (a/b)^{j+1} π_0. To find π_0, we use the fact that π_0 + ··· + π_N = 1. Using the finite geometric series formula, we have

    1 = Σ_{j=0}^{N} (a/b)^j π_0 = π_0 (1 - (a/b)^{N+1})/(1 - a/b).

Hence,

    π_j = [(1 - a/b)/(1 - (a/b)^{N+1})] (a/b)^j, j = 0, ..., N, a ≠ b.

If a = b, then π_j = π_0 for j = 0, ..., N implies π_j = 1/(N+1).
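The closed form can be checked numerically against the chain's balance equations (a sketch with illustrative, assumed values; a is the up-probability, b the down-probability, with self-loops at the boundaries):

```python
a, b, N = 0.3, 0.5, 6   # illustrative values (assumed); a + b <= 1
r = a / b
pi = [r**j for j in range(N + 1)]
s = sum(pi)
pi = [x / s for x in pi]  # equals ((1 - a/b)/(1 - (a/b)**(N+1))) * (a/b)**j

def step(j, k):
    # birth-death transition probabilities on {0, ..., N}
    if k == j + 1 and j < N:
        return a
    if k == j - 1 and j > 0:
        return b
    if k == j:
        return 1 - (a if j < N else 0) - (b if j > 0 else 0)
    return 0.0

# stationarity: sum_j pi_j p(j, k) = pi_k for every k
for k in range(N + 1):
    assert abs(sum(pi[j] * step(j, k) for j in range(N + 1)) - pi[k]) < 1e-12
print([round(x, 4) for x in pi])
```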


10. Let π and π̂ denote the two solutions in the example. For 0 ≤ λ ≤ 1, put

    π̃_j := λπ_j + (1-λ)π̂_j ≥ 0.

Then

    Σ_j π̃_j = Σ_j [λπ_j + (1-λ)π̂_j] = λ Σ_j π_j + (1-λ) Σ_j π̂_j = λ·1 + (1-λ)·1 = 1.

Similarly,

    Σ_i π̃_i p_{ij} = Σ_i [λπ_i + (1-λ)π̂_i] p_{ij} = λ Σ_i π_i p_{ij} + (1-λ) Σ_i π̂_i p_{ij}
                   = λπ_j + (1-λ)π̂_j = π̃_j,

so π̃ is also a stationary distribution.

12. Write

    E[T_1(j) | X_0 = i] = Σ_{k=1}^{∞} k P(T_1(j) = k | X_0 = i) + ∞·P(T_1(j) = ∞ | X_0 = i)
                        = Σ_{k=1}^{∞} k f_{ij}^{(k)} + ∞·(1 - f_{ij}).

13. (a) Write

{T2(j) = 5} = {X5 = j} ∩ [ {X4 = j, X3 ≠ j, X2 ≠ j, X1 ≠ j}

∪ {X4 ≠ j, X3 = j, X2 ≠ j, X1 ≠ j}

∪ {X4 ≠ j, X3 ≠ j, X2 = j, X1 ≠ j}

∪ {X4 ≠ j, X3 ≠ j, X2 ≠ j, X1 = j} ].

(b) Write

P(X5 = j, X4 ≠ j, X3 ≠ j, X2 = j, X1 ≠ j, X0 = i)

as

P( ∪_{l≠j} {X5 = j, X4 ≠ j, X3 ≠ j, X2 = j, X1 = l, X0 = i} ),

which is equal to

∑_{l≠j} P(X5 = j, X4 ≠ j, X3 ≠ j | X2 = j, X1 = l, X0 = i) P(X2 = j, X1 = l, X0 = i)

= ∑_{l≠j} P(X5 = j, X4 ≠ j, X3 ≠ j | X2 = j) P(X2 = j, X1 = l, X0 = i), by the Markov property,

which is just

P(X5 = j, X4 ≠ j, X3 ≠ j | X2 = j) P(X2 = j, X1 ≠ j, X0 = i).

It now follows that P(X5 = j, X4 ≠ j, X3 ≠ j | X2 = j, X1 ≠ j, X0 = i) is equal to

P(X5 = j, X4 ≠ j, X3 ≠ j | X2 = j).

(c) Write

P(X5 = j, X4 ≠ j, X3 ≠ j | X2 = j) = ∑_{k≠j} ∑_{l≠j} P(X5 = j, X4 = k, X3 = l | X2 = j)

= ∑_{k≠j} ∑_{l≠j} P(X3 = j, X2 = k, X1 = l | X0 = j), by time homogeneity,

= P(X3 = j, X2 ≠ j, X1 ≠ j | X0 = j).

14. Write

P(V(j) = ∞ | X0 = i) = P( ∩_{L=1}^∞ {V(j) ≥ L} | X0 = i )

= lim_{M→∞} P( ∩_{L=1}^M {V(j) ≥ L} | X0 = i ), limit property of P,

= lim_{M→∞} P(V(j) ≥ M | X0 = i), decreasing events.

15. First write

E[V(j) | X0 = i] = E[ ∑_{n=1}^∞ I_{{j}}(Xn) | X0 = i ] = ∑_{n=1}^∞ E[I_{{j}}(Xn) | X0 = i]

= ∑_{n=1}^∞ P(Xn = j | X0 = i) = ∑_{n=1}^∞ p_{ij}^{(n)}. (∗)

Next,

P(V(j) = ∞ | X0 = i) = lim_{L→∞} P(V(j) ≥ L | X0 = i), from the preceding problem,

= lim_{L→∞} f_{ij} (f_{jj})^{L−1}, from the text.

In the recurrent case (f_{jj} = 1), this limit is f_{ij}. If f_{ij} > 0, then V(j) = ∞ with positive probability, and so the expectation on the left in (∗) must be infinite, and hence, so is the sum on the far right in (∗). If f_{ij} = 0, then for any n = 1, 2, …, we can write

0 = f_{ij} := P(T1(j) < ∞ | X0 = i) = P( ∪_{m=1}^∞ {Xm = j} | X0 = i ) ≥ P(Xn = j | X0 = i) = p_{ij}^{(n)},

and it follows that ∑_{n=1}^∞ p_{ij}^{(n)} = 0.

In the transient case (f_{jj} < 1), we have from the text that

P(V(j) = L | X0 = i) = f_{ij} (f_{jj})^{L−1} (1 − f_{jj}) for L ≥ 1, and P(V(j) = 0 | X0 = i) = 1 − f_{ij},

where we use the fact that the number of visits to state j is zero if and only if we never visit state j, and this happens if and only if T1(j) = ∞. We can therefore write

E[V(j) | X0 = i] = ∑_{L=0}^∞ L·P(V(j) = L | X0 = i) = f_{ij}/(1 − f_{jj}) < ∞.

16. From mδ − 1 ≤ ⌊mδ⌋ ≤ mδ, we have

1/(mδ) ≤ 1/⌊mδ⌋ ≤ 1/(mδ − 1).

Then

1/δ ≤ m/⌊mδ⌋ ≤ 1/(δ − 1/m),

and it follows that m/⌊mδ⌋ → 1/δ as m → ∞.

17. (a) Write h(j) = I_S(j) = ∑_{l=1}^n I_{{s_l}}(j). Then

lim_{m→∞} (1/m) ∑_{k=1}^m h(Xk) = lim_{m→∞} (1/m) ∑_{k=1}^m ∑_{l=1}^n I_{{s_l}}(Xk) = ∑_{l=1}^n lim_{m→∞} (1/m) ∑_{k=1}^m I_{{s_l}}(Xk)

= ∑_{l=1}^n lim_{m→∞} Vm(s_l)/m = ∑_{l=1}^n π_{s_l}, by Theorems 3 and 4,

= ∑_j I_S(j) π_j = ∑_j h(j) π_j.

(b) With h(j) = ∑_{l=1}^n c_l I_{S_l}(j), where each S_l is a finite subset of states, we can write

lim_{m→∞} (1/m) ∑_{k=1}^m h(Xk) = lim_{m→∞} (1/m) ∑_{k=1}^m ∑_{l=1}^n c_l I_{S_l}(Xk) = ∑_{l=1}^n c_l lim_{m→∞} (1/m) ∑_{k=1}^m I_{S_l}(Xk)

= ∑_{l=1}^n c_l ∑_j I_{S_l}(j) π_j, by part (a),

= ∑_j [ ∑_{l=1}^n c_l I_{S_l}(j) ] π_j = ∑_j h(j) π_j.

(c) If h(j) = 0 for all but finitely many states, then there are at most finitely many distinct nonzero values that h(j) can take, say c1, …, cn. Put S_l := {j : h(j) = c_l}. By the assumption about h, each S_l is a finite set. Furthermore, we can write

h(j) = ∑_{l=1}^n c_l I_{S_l}(j).

18. MATLAB. OMITTED.

19. MATLAB. OMITTED.

20. Suppose k ∈ Ai ∩ Aj. Since k ∈ Ai, i ↔ k. Since k ∈ Aj, j ↔ k. Hence, i ↔ j. Now, if l ∈ Ai, then l ↔ i ↔ j, and we see that l ∈ Aj. Similarly, if l ∈ Aj, then l ↔ j ↔ i, and we see that l ∈ Ai. Thus, Ai = Aj.

21. Yes. As pointed out in the discussion at the end of the section, by combining Theorem 8 and Theorem 6, the chain has a unique stationary distribution. Now, by Theorem 4, all states are positive recurrent, which by definition means that the expected time to return to a state is finite.

22. Write, for n ≥ 2,

g_{i,i+n} = lim_{Δt→0} (1/Δt)·(λΔt)^n e^{−λΔt}/n! = lim_{Δt→0} (λ^n/n!)(Δt)^{n−1} e^{−λΔt} = 0.

23. [Rate diagram omitted; surviving labels: 1, 2, 4, 5.]

24. MATLAB. Change the line A=P-In; to A=P;.

25. The forward equation is

p_{ij}′(t) = ∑_k p_{ik}(t) g_{kj} = ∑_{k=j−1}^{j+1} p_{ik}(t) g_{kj}

= p_{i,j−1}(t) λ_{j−1} − p_{ij}(t)[λ_j + μ_j] + p_{i,j+1}(t) μ_{j+1}.

The backward equation is

p_{ij}′(t) = ∑_k g_{ik} p_{kj}(t) = ∑_{k=i−1}^{i+1} g_{ik} p_{kj}(t)

= μ_i p_{i−1,j}(t) − (λ_i + μ_i) p_{ij}(t) + λ_i p_{i+1,j}(t).

The chain is conservative because g_{i,i−1} + g_{i,i+1} = μ_i + λ_i = −[−(λ_i + μ_i)] = −g_{ii} < ∞.

26. We use the formula 0 = ∑_k π_k g_{kj}. For j = 0, we have

0 = ∑_k π_k g_{k0} = π0(−λ0) + π1 μ1,

which implies π1 = (λ0/μ1)π0. For j = 1,

0 = ∑_k π_k g_{k1} = π0 λ0 + π1[−(λ1 + μ1)] + π2 μ2

= π0 λ0 + (λ0/μ1)π0[−(λ1 + μ1)] + π2 μ2 = −π0(λ0 λ1/μ1) + π2 μ2,

which implies π2 = π0 λ0 λ1/(μ1 μ2). Now suppose that π_i = π0 λ0⋯λ_{i−1}/(μ1⋯μ_i) for i = 1, …, j. Then from

0 = ∑_k π_k g_{kj} = π_{j−1} λ_{j−1} − π_j(λ_j + μ_j) + π_{j+1} μ_{j+1}

= π0 [λ0⋯λ_{j−2}/(μ1⋯μ_{j−1})] λ_{j−1} − π0 [λ0⋯λ_{j−1}/(μ1⋯μ_j)](λ_j + μ_j) + π_{j+1} μ_{j+1},

it follows that

π_{j+1} = π0 λ0⋯λ_j/(μ1⋯μ_{j+1}).

Now, let

B := ∑_{j=1}^∞ λ0⋯λ_{j−1}/(μ1⋯μ_j) < ∞.

Then

1 = π0 + π1 + ⋯ = π0 + π0 B = π0(1 + B),

and so

π_j = [1/(1 + B)] λ0⋯λ_{j−1}/(μ1⋯μ_j), j ≥ 1.

If λ_i ≡ λ and μ_i ≡ μ, then, in this case,

1 + B = ∑_{k=0}^∞ (λ/μ)^k = 1/(1 − λ/μ),

and

π_j = (1 − λ/μ)(λ/μ)^j ∼ geometric0(λ/μ).
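For the constant-rate case, the geometric form can be verified directly against the balance equations 0 = ∑_k π_k g_{kj}; a minimal sketch (the rates λ = 2 and μ = 5 are illustrative):

```python
# Check that pi_j = (1 - rho) * rho**j, rho = lam/mu, satisfies global balance
# for the birth-death generator with constant rates lam (up) and mu (down).
lam, mu = 2.0, 5.0                   # illustrative rates with lam < mu
rho = lam / mu
pi = lambda j: (1 - rho) * rho ** j

# j = 0:  0 = -pi_0 * lam + pi_1 * mu
assert abs(-pi(0) * lam + pi(1) * mu) < 1e-12
# j >= 1: 0 = pi_{j-1} * lam - pi_j * (lam + mu) + pi_{j+1} * mu
for j in range(1, 50):
    assert abs(pi(j - 1) * lam - pi(j) * (lam + mu) + pi(j + 1) * mu) < 1e-12
print("balance equations hold")
```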


27. The solution is very similar to that of the preceding problem. In this case, put

B_N := ∑_{j=1}^N λ0⋯λ_{j−1}/(μ1⋯μ_j) < ∞,

and then

π_j = [1/(1 + B_N)] λ0⋯λ_{j−1}/(μ1⋯μ_j), j = 1, …, N.

If λ_i ≡ λ and μ_i ≡ μ, then

1 + B_N = ∑_{k=0}^N (λ/μ)^k = [1 − (λ/μ)^{N+1}]/(1 − λ/μ),

and

π_j = {(1 − λ/μ)/[1 − (λ/μ)^{N+1}]} (λ/μ)^j, j = 0, …, N.

If λ = μ, then π_j = 1/(N + 1).

28. To begin, write

m_i(t) := E[Xt | X0 = i] = ∑_j j p_{ij}(t).

Then

m_i′(t) = ∑_j j p_{ij}′(t) = ∑_j j ∑_k p_{ik}(t) g_{kj} = ∑_k p_{ik}(t) ∑_j j g_{kj}

= ∑_k p_{ik}(t) [(k−1) g_{k,k−1} + k g_{kk} + (k+1) g_{k,k+1}]

= ∑_k p_{ik}(t) [(k−1)(kμ) − k(kλ + a + kμ) + (k+1)(kλ + a)]

= ∑_k p_{ik}(t) [k(λ − μ) + a] = (λ − μ) m_i(t) + a.

We must solve

m_i′(t) − (λ − μ) m_i(t) = a, m_i(0) = i.

If λ ≠ μ, it is readily verified that

m_i(t) = [i − a/(μ − λ)] e^{(λ−μ)t} + a/(μ − λ)

solves the equation. If λ = μ, then m_i(t) = at + i solves m_i′(t) = a with m_i(0) = i.
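The closed form can be checked by integrating the ODE m′(t) = (λ − μ)m(t) + a numerically (classical Runge–Kutta) and comparing at a fixed time; the ODE is the one reconstructed above, and all parameter values below are illustrative:

```python
import math

# RK4 check that m(t) = (i - a/(mu - lam)) * exp((lam - mu) * t) + a/(mu - lam)
# solves m'(t) = (lam - mu) * m(t) + a with m(0) = i.
lam, mu, a, i0 = 1.0, 3.0, 0.5, 4.0          # illustrative values (lam != mu)
f = lambda m: (lam - mu) * m + a

t_end, steps = 2.0, 20000
h = t_end / steps
m = i0
for _ in range(steps):                        # classical fourth-order Runge-Kutta
    k1 = f(m)
    k2 = f(m + h * k1 / 2)
    k3 = f(m + h * k2 / 2)
    k4 = f(m + h * k3)
    m += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

beta = a / (mu - lam)
closed = (i0 - beta) * math.exp((lam - mu) * t_end) + beta
assert abs(m - closed) < 1e-8
print("closed form matches the numerical solution")
```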

29. To begin, write

p_{ij}′(t) = ∑_k p_{ik}(t) g_{kj} = p_{i,j−1}(t) g_{j−1,j} + p_{ij}(t) g_{jj} = λ p_{i,j−1}(t) − λ p_{ij}(t). (∗)

Observe that for j = i, p_{i,i−1}(t) = 0, since the chain cannot go from state i to a lower state. Thus,

p_{ii}′(t) = −λ p_{ii}(t).

Also, p_{ii}(0) = P(X0 = i | X0 = i) = 1. Thus, p_{ii}(t) = e^{−λt}, and it is easily verified that

p_{i,i+n}(t) = (λt)^n e^{−λt}/n!, n = 0, 1, 2, …,

solves (∗). Thus, Xt is a Poisson process of rate λ with X0 = i instead of X0 = 0.
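That p_{i,i+n}(t) = (λt)^n e^{−λt}/n! satisfies the forward equation can be checked by central finite differences; a small sketch (λ = 2 and the sampled n, t values are arbitrary):

```python
import math

# Finite-difference check that p_n(t) = (lam*t)**n * exp(-lam*t) / n!
# satisfies p_n'(t) = lam * p_{n-1}(t) - lam * p_n(t), with p_{-1} := 0.
lam = 2.0                                     # illustrative rate

def p(n, t):
    if n < 0:
        return 0.0
    return (lam * t) ** n * math.exp(-lam * t) / math.factorial(n)

h = 1e-6
for n in range(6):
    for t in (0.3, 1.0, 2.5):
        deriv = (p(n, t + h) - p(n, t - h)) / (2 * h)   # central difference
        assert abs(deriv - (lam * p(n - 1, t) - lam * p(n, t))) < 1e-6
print("forward equation verified at the sampled points")
```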

30. (a) Here π̃_k := π_k(−g_{kk})/D with D := ∑_k π_k(−g_{kk}). The assumption D < ∞ makes the π̃_k well defined; they are clearly nonnegative, and ∑_k π̃_k = D/D = 1, so they form a pmf. Next, since p_{jj} = 0 and p_{kj} = g_{kj}/(−g_{kk}) for k ≠ j, write

∑_k π̃_k p_{kj} = ∑_{k≠j} [π_k(−g_{kk})/D] · g_{kj}/(−g_{kk}) = (1/D) ∑_{k≠j} π_k g_{kj}

= (1/D)[ ∑_k π_k g_{kj} − π_j g_{jj} ] = (1/D)(0 − π_j g_{jj}) = π_j(−g_{jj})/D = π̃_j,

where ∑_k π_k g_{kj} = 0 because π is stationary for the generator.

(b) Here π_k := [σ_k/(−g_{kk})]/D with D := ∑_k σ_k/(−g_{kk}). Again D < ∞ makes the π_k well defined; they are nonnegative and sum to D/D = 1. Next, write

∑_k π_k g_{kj} = ∑_k {[σ_k/(−g_{kk})]/D} g_{kj} = (1/D)[ ∑_{k≠j} σ_k g_{kj}/(−g_{kk}) − σ_j ]

= (1/D)[ ∑_k σ_k p_{kj} − σ_j ], since p_{jj} = 0,

= (1/D)(σ_j − σ_j) = 0.

(c) Since the π_k and σ_k are pmfs, they sum to one. Hence, if g_{ii} = −g for all i, then in (a), D = g and π̃_k = π_k g/g = π_k; in (b), D = 1/g and π_k = (σ_k/g)/(1/g) = σ_k.

31. Following the hints, write

P(T > t + Δt | T > t, X0 = i) = P(Xs = i, t ≤ s ≤ t + Δt | Xs = i, 0 ≤ s ≤ t)

= P(Xs = i, t ≤ s ≤ t + Δt | Xt = i), by the Markov property,

= P(Xs = i, 0 ≤ s ≤ Δt | X0 = i), by time homogeneity,

= P(T > Δt | X0 = i).

Hence, given X0 = i, T is memoryless and therefore exponential. Its parameter is

lim_{Δt→0} [1 − P(T > Δt | X0 = i)]/Δt = lim_{Δt→0} [1 − P(Xs = i, 0 ≤ s ≤ Δt | X0 = i)]/Δt

= lim_{Δt→0} [1 − P(X_{Δt} = i | X0 = i)]/Δt =: −g_{ii}.

32. Using substitution in reverse, write

P(Wt ≤ y | Ws = x, W_{s_{n−1}} = x_{n−1}, …, W_{s_0} = x_0)

= P(Wt − x ≤ y − x | Ws = x, W_{s_{n−1}} = x_{n−1}, …, W_{s_0} = x_0)

= P(Wt − Ws ≤ y − x | Ws = x, W_{s_{n−1}} = x_{n−1}, …, W_{s_0} = x_0).

Now, using the fact that W0 ≡ 0, this last conditional probability is equal to

P(Wt − Ws ≤ y − x | Ws − W_{s_{n−1}} = x − x_{n−1}, …, W_{s_1} − W_{s_0} = x_1 − x_0, W_{s_0} − W0 = x_0).

Since the Wiener process has independent increments that are Gaussian, this last expression reduces to

P(Wt − Ws ≤ y − x) = ∫_{−∞}^{y−x} exp(−[θ/(σ√(t − s))]²/2) / √(2πσ²(t − s)) dθ.

Since this depends on x but not on x_{n−1}, …, x_0,

P(Wt ≤ y | Ws = x, W_{s_{n−1}} = x_{n−1}, …, W_{s_0} = x_0) = P(Wt ≤ y | Ws = x).

Hence, Wt is a Markov process.

33. Write

∑_y P(X = x | Y = y, Z = z) P(Y = y | Z = z)

= ∑_y [P(X = x, Y = y, Z = z)/P(Y = y, Z = z)] · [P(Y = y, Z = z)/P(Z = z)]

= [1/P(Z = z)] ∑_y P(X = x, Y = y, Z = z)

= P(X = x, Z = z)/P(Z = z) = P(X = x | Z = z).

34. Using the law of total conditional probability, write

P_{t+s}(x, B) = P(X_{t+s} ∈ B | X0 = x)

= ∫ P(X_{t+s} ∈ B | Xs = z, X0 = x) f_{Xs|X0}(z|x) dz

= ∫ P(X_{t+s} ∈ B | Xs = z) f_s(x, z) dz

= ∫ P(Xt ∈ B | X0 = z) f_s(x, z) dz

= ∫ P_t(z, B) f_s(x, z) dz.

CHAPTER 13

Problem Solutions

1. First observe that E[|Xn|^p] = n^{p/2} P(U ≤ 1/n) = n^{p/2}(1/n) = n^{p/2−1}, which goes to zero if and only if p < 2. Thus, 1 ≤ p < 2.

2. E[|Nt/t − λ|²] = (1/t²) E[|Nt − λt|²] = var(Nt)/t² = λt/t² = λ/t → 0 as t → ∞.

3. Given ε > 0, let N be such that n ≥ N implies |C(n)| ≤ ε/2. Then for n > N,

(1/n) ∑_{k=0}^{n−1} C(k) = (1/n) ∑_{k=0}^{N−1} C(k) + (1/n) ∑_{k=N}^{n−1} C(k),

and so

|(1/n) ∑_{k=0}^{n−1} C(k)| ≤ (1/n) ∑_{k=0}^{N−1} |C(k)| + [(n − N)/n](ε/2) ≤ (1/n) ∑_{k=0}^{N−1} |C(k)| + ε/2.

Since N is fixed, the first term on the right also goes to zero as n → ∞.

4. Applying the hint followed by the Cauchy–Schwarz inequality shows that

|(1/n) ∑_{k=0}^{n−1} C(k)| ≤ √(E[(X1 − m)²] E[(Mn − m)²]).

5. Starting with the hint, we write

E[Z] ≤ E[Z − Zn] + n E[I_A], since Zn ≤ Z and Zn ≤ n,

= E[|Z − Zn|] + n P(A),

and each term on the right can be made less than ε/2.

6. Following the hint, given ε > 0, there exists a δ > 0 such that for 0 < Δx < δ,

P(U ≤ Δx) = Δx < δ implies E[f(x + U) I_{{U ≤ Δx}}] = ∫_0^{Δx} f(x + t) dt < ε.

Now,

∫_0^{Δx} f(x + t) dt = ∫_x^{x+Δx} f(θ) dθ = F(x + Δx) − F(x).

For −δ < Δx < 0, take A = {U > 1 + Δx} and Z = f(x − 1 + U). Then P(U > 1 + Δx) = −Δx < δ implies

E[f(x − 1 + U) I_{{U > 1+Δx}}] = ∫_{1+Δx}^1 f(x − 1 + t) dt < ε.

Now,

∫_{1+Δx}^1 f(x − 1 + t) dt = ∫_{x+Δx}^x f(θ) dθ = F(x) − F(x + Δx).

Thus, F(x + Δx) − F(x) > −ε. We can now write that given ε > 0, there exists a δ such that |Δx| < δ implies |F(x + Δx) − F(x)| < ε.

7. Following the hint, we can write

exp[(1/p) ln((|X|/α)^p) + (1/q) ln((|Y|/β)^q)] ≤ (1/p) e^{ln((|X|/α)^p)} + (1/q) e^{ln((|Y|/β)^q)},

or

exp[ln(|X|/α) + ln(|Y|/β)] ≤ (1/p)(|X|/α)^p + (1/q)(|Y|/β)^q,

or

|XY|/(αβ) ≤ (1/p)|X|^p/α^p + (1/q)|Y|^q/β^q.

Hence, with α := E[|X|^p]^{1/p} and β := E[|Y|^q]^{1/q},

E[|XY|]/(αβ) ≤ (1/p) E[|X|^p]/α^p + (1/q) E[|Y|^q]/β^q = 1/p + 1/q = 1.

The hint assumes neither α nor β is zero or infinity. However, if either α or β is zero, then both sides of the inequality are zero. If neither is zero and one of them is infinity, then the right-hand side is infinity and the inequality is trivial.
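Hölder's inequality holds for any probability measure, in particular for the empirical measure of a sample, so it can be sanity-checked numerically; a minimal sketch (the exponent p = 3 and the sample distributions are arbitrary):

```python
import random

# Empirical check of E|XY| <= E[|X|^p]^(1/p) * E[|Y|^q]^(1/q), 1/p + 1/q = 1.
random.seed(0)
p = 3.0
q = p / (p - 1)                               # conjugate exponent
xs = [random.gauss(0, 1) for _ in range(10000)]
ys = [random.gauss(0, 2) for _ in range(10000)]

mean = lambda v: sum(v) / len(v)
lhs = mean([abs(x * y) for x, y in zip(xs, ys)])
rhs = mean([abs(x) ** p for x in xs]) ** (1 / p) * \
      mean([abs(y) ** q for y in ys]) ** (1 / q)
assert lhs <= rhs                             # holds exactly for empirical means
print(lhs, "<=", rhs)
```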

8. Following the hint, with X = |Z|^β, Y = 1, and p = α/β, Hölder's inequality gives E[|Z|^β] ≤ E[|Z|^α]^{β/α}, which is Lyapunov's inequality.

9. By Lyapunov's inequality, E[|Xn − X|^β]^{1/β} ≤ E[|Xn − X|^α]^{1/α}. Raising both sides to the β power yields E[|Xn − X|^β] ≤ E[|Xn − X|^α]^{β/α}. Hence, if E[|Xn − X|^α] → 0, then E[|Xn − X|^β] → 0 too.

10. Following the hint, write

E[|X + Y|^p] = E[|X + Y| · |X + Y|^{p−1}]

≤ E[|X| · |X + Y|^{p−1}] + E[|Y| · |X + Y|^{p−1}]

≤ E[|X|^p]^{1/p} E[(|X + Y|^{p−1})^q]^{1/q} + E[|Y|^p]^{1/p} E[(|X + Y|^{p−1})^q]^{1/q},

where 1/q := 1 − 1/p. Hence, 1/q = (p − 1)/p and q = p/(p − 1). Now divide the above inequality by E[|X + Y|^p]^{(p−1)/p} to get

E[|X + Y|^p]^{1/p} ≤ E[|X|^p]^{1/p} + E[|Y|^p]^{1/p}.

11. For a Wiener process, Wt − Wt0 ∼ N(0, σ²|t − t0|). To simplify the notation, put σ̃² := σ²|t − t0|. Then

E[|Wt − Wt0|] = ∫_{−∞}^∞ |x| e^{−(x/σ̃)²/2}/√(2πσ̃²) dx = 2 ∫_0^∞ x e^{−(x/σ̃)²/2}/√(2πσ̃²) dx

= [2σ̃/√(2π)] ∫_0^∞ t e^{−t²/2} dt = [2σ̃/√(2π)] [−e^{−t²/2}]₀^∞ = σ̃ √(2/π) = σ √(2|t − t0|/π),

12. Let t0 ≥ 0 be arbitrary. Since E[|Nt − Nt0|²] = λ|t − t0| + λ²|t − t0|², it is clear that as t → t0, E[|Nt − Nt0|²] → 0. Hence, Nt is continuous in mean square.

13. First write

R(t, s) − R(τ, θ) = E[Xt Xs] − E[Xτ Xθ] = E[Xt(Xs − Xθ)] + E[(Xt − Xτ)Xθ].

By the Cauchy–Schwarz inequality,

|R(t, s) − R(τ, θ)| ≤ √(E[Xt²] E[(Xs − Xθ)²]) + √(E[(Xt − Xτ)²] E[Xθ²]),

which goes to zero as (t, s) → (τ, θ). Note that we need the boundedness of E[Xt²] for t near τ.

14. (a) E[|X_{t+T} − Xt|²] = R(0) − 2R(T) + R(0) = 0, since R(T) = R(0).

(b) Write

|R(t + T) − R(t)| = |E[(X_{t+T} − Xt)X0]| ≤ √(E[|X_{t+T} − Xt|²] E[X0²]) = 0, by part (a).

15. Write

‖(Xt + Yt) − (Xt0 + Yt0)‖_p = ‖(Xt − Xt0) + (Yt − Yt0)‖_p ≤ ‖Xt − Xt0‖_p + ‖Yt − Yt0‖_p,

and both terms on the right go to zero as t → t0.


17. Let > 0 be given. Since kXn Xk p 0, there is an N such that for n N, kXn

Xk p < /2. Thus, for n, m N,

kXn Xm k p = k(Xn X) + (X Xm )k p kXn Xk p + kX Xm k p < /2 + /2 = .

kXn Xm k p < 1. In particular, with m = N we have from

kXn k p kXN k p kXn XN k p

that

kXn k p kXn XN k p + kXN k p < 1 + kXN k p , for n N.

To get a bound that also holds for n = 1, . . . , N 1, write

kXn k p max(kX1 k p , . . . , kXN1 k p , 1 + kXN k p ).

19. Since Xn converges, it is Cauchy and therefore bounded by the preceding two problems. Hence, we can write ‖Xn‖_p ≤ B < ∞ for some constant B. Given ε > 0, let n ≥ N imply ‖Xn − X‖_p < ε/(2‖Y‖_q) and ‖Yn − Y‖_q < ε/(2B). Then

‖XnYn − XY‖_1 = E[|XnYn − XnY + XnY − XY|]

≤ E[|Xn(Yn − Y)|] + E[|(Xn − X)Y|]

≤ ‖Xn‖_p ‖Yn − Y‖_q + ‖Xn − X‖_p ‖Y‖_q, by Hölder's inequality,

< B·ε/(2B) + ‖Y‖_q·ε/(2‖Y‖_q) = ε.

kX Y k p = k(X Xn ) + (Xn Y )k p kX Xn k p + kXn Y k p 0.

Since kX Y k p = 0, E[|X Y | p ] = 0.

21. For the rightmost inequality, observe that

kX Y k p = kX + (Y )k p kXk p + k Y k p = kXk p + kY k p .

For the remaining inequality, first write

kXk p = k(X Y ) +Y k p kX Y k p + kY k p ,

from which it follows that

kXk p kY k p kX Y k p . ()

Similarly, from

kY k p = k(Y X) + Xk p kY Xk p + kXk p ,

it follows that

kY k p kXk p kY Xk p = kX Y k p . ()

From () and () it follows that

kXk p kY k p kX Y k p .


lim E[|Xn | p ] = E[|X| p ] (#)

n

kXk p kY k p kX Y k p ,

23. Write

kX +Y k2p + kX Y k2p = hX +Y, X +Y i + hX Y, X Y i

= hX, Xi + 2hX,Y i + hY,Y i + hX, Xi 2hX,Y i + hY,Y i

= 2(kXk2p + kY k2p ).

24. Write

|hXn ,Yn i hX,Y i| = |hXn ,Yn i hX,Yn i + hX,Yn i hX,Y i|

khXn X,Y i| + |hX,Yn Y i|

kXn Xk2 kYn k2 + kXk2 kYn Y k2 0.

Here we have used the CauchySchwarz inequality and the fact that since Yn con-

verges, it is bounded.

25. As in the example, for n > m, we can write

n n

kYn Ym k22 |hk | |hl | |hXk , Xl i|.

k=m+1 l=m+1

n n

kYn Ym k22 B|hk |2 B |hk |2 0

k=m+1 k=m+1

k=1 |hk | < .

26. Put Yn := nk=1 hk Xk . It suffices to show that Yn is Cauchy in L p . Write, for n > m,

n
n n

kYn Ym k p =
hk Xk

|hk | kXk k p = B1/p

|hk | 0

k=m+1 p k=m+1 k=m+1

k=1 |hk | < .

27. Write

E[YZ] = E[ ( ∑_{i=1}^n X_{τ_i}(t_i − t_{i−1}) ) ( ∑_{j=1}^n X_{θ_j}(s_j − s_{j−1}) ) ]

= ∑_{i=1}^n ∑_{j=1}^n R(τ_i, θ_j)(t_i − t_{i−1})(s_j − s_{j−1}).


n

Y := g(i )Xi (ti ti1 ) and Z := g( j )X j (s j s j1 ),

i=1 j=1

then

n

E[Y Z] = g(i )R(i , j )g( j )(ti ti1 )(s j s j1 ).

i=1 j=1

Hence, given finer and finer partitions, with Ym defined analogously to Y above, we

see that

Z bZ b Z bZ b

g(t)R(t, s)g(s) dt ds 2 g(t)R(t, s)g(s) dt ds

a a a a

Z bZ b

+ g(t)R(t, s)g(s) dt ds = 0.

a a

since Ym converges in mean square to Y , E[Ym2 ] E[Y 2 ], and it is clear that

Z bZ b

E[Ym2 ] g(t)R(t, s)g(s) dt ds.

a a

Z T

R(t s) (s) ds = (t), 0 t T. ()

0

Since R is defined for all t, we can extend the definition of on the right-hand side in

the obvious way. Furthermore, since R has period T , so will the extended definition

of . Hence, both R and have Fourier series representations, say

n n

Z T

rn e j2 nt/T 0

e j2 ns/T (s) ds = n e j2 nt/T .

n | {z } n

= T n

function. Hence, there is at least one value of n with n 6= 0. For all n with n 6= 0,

= Trn . Thus,

(t) = n e j2 nt/T .

n:rn = /T


Rb

30. If a R(t, s) (s) ds = (t) then

Z bZ b Z b Z b

0 R(t, s) (t) (s) dt ds = (t) R(t, s) (s) ds dt

a a a a

Z b Z b

= (t)[ (t)] dt = (t)2 dt.

a a

Hence, 0.

31. To begin, write

Z b Z b Z b Z b

k k (t)m (t) dt = k k (t)m (t) dt = R(t, s)k (s) ds m (t) dt

a a a a

Z b Z b

= k (s) R(s,t)m (t) dt ds, since R(t, s) = R(s,t),

a a

Z b Z b

= k (s) m m (s) ds = m k (s)m (s) ds.

a a

(k m ) k (t)m (t) dt = 0.

a

Rb

If k 6= m , we must have a k (t)m (t) dt = 0.

32. (a) Write

Z T Z T

0

R(t, s)g(s) ds =

0

k k (t)k (s) g(s) ds

k=1

Z T

= k k (t) 0

g(s)k (s) ds = k gk k (t).

k=1 k=1

(b) Write

Z T Z T

R(t, s) (s) ds = R(t, s) g(s) gk k (s) ds

0 0 k=1

Z T Z T

= R(t, s)g(s) ds gk R(t, s)k (s) ds

0 k=1 |0 {z }

= k k (t)

= k gk k (t) k gk k (t) = 0.

k=1 k=1

(c) Write

Z TZ T Z T Z T

E[Z 2 ] = (t)R(t, s) (s) dt ds = (t) R(t, s) (s) ds dt = 0.

0 0 0 0


Z T

(t) = e|ts| (s) ds (#)

0

Z t Z T

= e(ts) (s) ds + e(st) (s) ds

0 t

Z t Z T

t

= e s

e (s) ds + e t

es (s) ds.

0 t

Differentiating yields

Z t Z T

0 (t) = et es (s) ds + et et (t) + et es (s) ds + et (et (t))

0 t

Z t Z T

= et es (s) ds + et es (s) ds. (##)

0 t

Z t Z T

00 (t) = et es (s) ds et et (t) + et es (s) ds + et (et (t))

0 t

Z T

= e|ts| (s) ds 2 (t)

0

= (t) 2 (t), by (#),

= ( 2) (t).

p

For 0 < < 2, put := 2/ 1. Then 00 (t) = 2 (t). Hence, we must have

34. The required orthogonality principle is that

L

E (Xt Xbt ) ci Ai = 0

i=1

L

Xbt = cbj A j .

j=1

In particular, we must have E[(Xt Xbt )Ai ] = 0. Now, we know from the text that

E[Xt Ai ] = i i (t). We also have

L

E[Xbt Ai ] = cbj E[A j Ai ] = cbi i .

j=1


L

Xbt = Ai i (t).

i=1

35. Since kYn Y k2 0, by the hint, E[Yn ] E[Y ] and E[Yn2 ] E[Y 2 ]. Since gn is

piecewise constant, we know that E[Yn ] = 0, and so E[Y ] = 0 too. Next, an argument

analogous to the one in Problem 21 tells us that if kgn gk 0, then kgn k kgk.

Hence,

Z Z

2

E[Y ] = lim E[Yn2 ] = lim 2 2

gn (t) dt = 2

g(t)2 dt.

n n 0 0

kY Ye k2 = kY Yn k2 + kYn Yen k2 + kYen Ye k2 ,

where the first and last terms on the right go to zero. As for the middle term, write

Z Z 2

kYn Yen k22 = E gn (t) dWt gen (t) dWt

0 0

Z

2 Z

= E [gn (t) gen (t)] dWt = 2 [gn (t) gen (t)]2 dt.

0 0

kYn Yen k2 = kgn gen k kgn gk + kg gen k 0.

37. We know from our earlier work that the Wiener integralR

is linear on piecewise-con-

R

stant functions. To analyze theRgeneral case, let Y = 0 g(t) dWt and Z = 0 h(t) dWt .

We must show that aY + bZ = 0 ag(t) + bh(t) dWt . Let gn (t) and hn (t) be piecewise-

constant

R

functions suchR that kgn gk 0 and khn hk 0 and such that Yn :=

0 g n (t) dW t and Zn := 0 hn (t) dWt converge in mean square to Y and Z, respectively.

Now observe that

Z Z Z

aYn + bZn = a gn (t) dWt + b hn (t) dWt = agn (t) + bhn (t) dWt , ()

0 0 0

k(agn + bhn ) (ag + bh)k |a| kgn gk + |b| khn hk 0,

R

it follows that the right-hand side of () converges in mean square to 0 ag(t) +

bh(t) dWt . Since the left-hand side of () converges in mean square to aY + bZ, the

desired result follows.

38. We must find all values of β for which E[|Xt/t^β|²] → 0. First compute

E[Xt²] = E[ ( ∫_0^t τ^α dWτ )² ] = σ² ∫_0^t τ^{2α} dτ = σ² t^{2α+1}/(2α + 1).

Hence E[|Xt/t^β|²] = σ² t^{2α+1−2β}/(2α + 1), which tends to zero as t → ∞ if and only if 2β > 2α + 1.


39. Using the law of total probability, substitution, and independence, write

E[Y_T²] = E[ ( ∫_0^T τ^n dWτ )² ] = ∫_0^∞ E[ ( ∫_0^T τ^n dWτ )² | T = t ] f_T(t) dt

= ∫_0^∞ E[ ( ∫_0^t τ^n dWτ )² | T = t ] f_T(t) dt = ∫_0^∞ E[ ( ∫_0^t τ^n dWτ )² ] f_T(t) dt.

Hence,

E[Y_T²] = ∫_0^∞ ( ∫_0^t τ^{2n} dτ ) f_T(t) dt = ∫_0^∞ [t^{2n+1}/(2n + 1)] f_T(t) dt = E[T^{2n+1}]/(2n + 1)

= [(2n + 1)!/λ^{2n+1}]/(2n + 1) = (2n)!/λ^{2n+1},

since T ∼ exp(λ).

2n + 1

40. (a) Write

g(t + Δt) − g(t) = E[f(W_{t+Δt})] − E[f(Wt)] = E[f(W_{t+Δt}) − f(Wt)]

≈ E[f′(Wt)(W_{t+Δt} − Wt)] + (1/2) E[f″(Wt)(W_{t+Δt} − Wt)²]

= E[f′(Wt − W0)(W_{t+Δt} − Wt)] + (1/2) E[f″(Wt − W0)(W_{t+Δt} − Wt)²]

= E[f′(Wt − W0)]·0 + (1/2) E[f″(Wt − W0)]·Δt,

by the independence of the increments. Hence, g′(t) = (1/2) E[f″(Wt)].

(b) If f(x) = e^x, then g′(t) = (1/2) E[e^{Wt}] = (1/2) g(t). In this case, g(t) = e^{t/2}, since g(0) = E[e^{W0}] = 1.

(c) We have by direct calculation that g(t) = E[e^{Wt}] = e^{s²t/2}|_{s=1} = e^{t/2}.

41. Let C be the ball of radius r, C := {Y ∈ L^p : ‖Y‖_p ≤ r}. For X ∉ C, i.e., ‖X‖_p > r, we show that

X̂ = (r/‖X‖_p) X.

To begin, note that the proposed formula for X̂ satisfies ‖X̂‖_p = r, so that X̂ ∈ C as required. Now observe that

‖X − X̂‖_p = ‖X − (r/‖X‖_p)X‖_p = |1 − r/‖X‖_p| ‖X‖_p = ‖X‖_p − r.

Next, for any Y ∈ C,

‖X − Y‖_p ≥ ‖X‖_p − ‖Y‖_p ≥ ‖X‖_p − r = ‖X − X̂‖_p.

Thus, no Y ∈ C is closer to X than X̂.


42. Suppose that Xb and Xe are both elements of a subspace M and that hX X,Y

b i = 0 for

e

all Y M and hX X,Y i = 0 for all Y M. Then write

kXb Xk

e 22 = hXb X,

e Xb Xi

e = h(Xb X) + (X X),

e Xb Xei = 0 + 0 = 0.

| {z }

M

43. For XbN is to be the projection of XbM onto N, it is sufficient that the orthogonality

principle be satisfied. In other words, it suffices to show that

hXbM XbN ,Y i = 0, for all Y N.

Observe that

hXbM XbN ,Y i = h(XbM X) + (X XbN ),Y i = hX XbM ,Y i + hX XbN ,Y i.

Now, the last term on the right is zero by the orthogonality principle for the projection

of X onto N, since Y N. To show that hX XbM ,Y i = 0, observe that since N M,

Y N implies Y M. By the orthogonality principle for the projection of X onto M,

hX XbM ,Y i = 0 for Y M.

44. In the diagram, M is the disk and N is the horizontal line segment; X̂_M is the projection of X onto the disk M, X̂_N is the projection of X onto the line segment N, and (X̂_M)_N is the projection of X̂_M onto the line segment N. We see that (X̂_M)_N ≠ X̂_N.

[Figure omitted; surviving labels: X, X̂_M, X̂_N, (X̂_M)_N.]

show that X M. Since gn (Y ) converges, it is Cauchy. Writing

kgn (Y ) gm (Y )k22 = E[|gn (Y ) gm (Y )|2 ]

Z

= |gn (y) gm (y)|2 fY (y) dy = kgn gm kY ,

we see that gn is Cauchy in G, which is complete. Hence, there exists a g G with

kgn gkY 0. We claim X = g(Y ). Write

kg(Y ) Xk2 = kg(Y ) gn (Y ) + gn (Y ) Xk2

kg(Y ) gn (Y )k2 + kgn (Y ) Xk2 ,

where the last term goes to zero by assumption. Now observe that

Z

kg(Y ) gn (Y )k22 = E[|g(Y ) gn (Y )|2 ] = |g(y) gn (y)|2 fY (y) dy

= kg gn kY2 0.

Thus, X = g(Y ) M as required.


R

46. We claim that the required projection is 01 f (t) dWt . Note that this is an element of M

since Z 1 Z

f (t)2 dt f (t)2 dt < .

0 0

Consider the orthogonality condition

Z Z 1 Z 1 Z

Z 1

E f (t) dWt f (t) dWt g(t) dWt = E f (t) dWt g(t) dWt .

0 0 0 1 0

Now put

f (t), t > 1, 0, t > 1,

f(t) := and g(t) :=

0, 0 t 1, g(t), 0 t 1,

Z Z Z

E f(t) dWt g(t) dWt = f(t)g(t) dt,

0 0 0

47. The function g(t) will be optimal if the orthogonality condition

Z Z

E X g( ) dW g( ) dW = 0

0 0

R

holds for all g with 0 g( )2 d . In particular, this must be true for g( ) = I[0,t] ( ). In

this case, the above expectation reduces to

Z Z Z

E X I[0,t] ( ) dW E g( ) dW I[0,t] ( ) dW .

0 0 0

Now, since Z Z t

I[0,t] ( ) dW = dW = Wt ,

0 0

we have the further simplification

Z Z t

E[XWt ] 2 g( )I[0,t] ( ) d = E[XWt ] 2 g( ) d .

0 0

Since this must be equal to zero for all t 0, we can differentiate and obtain

1 d

g(t) = E[XWt ].

2 dt

48. (a) Using the methods of Chapter 5, it is not too hard to show that

1, y 1/4,

[2 1 4y ]/2, 0 y < 1/4,

FY (y) =

[3/2 (1/2) 1 4y ]/2, 2 y < 0,

0, y < 2.


1/ 1 4y, 0 < y < 1/4,

fY (y) = 1/(2 1 4y ), 2 < y < 0,

0, otherwise.

(b) We must find a gb(y) such that

E[v(X)g(Y )] = E[b

g(Y )g(Y )], for all bounded g.

For future reference, note that

Z 1

1

E[v(X)g(Y )] = E[v(X)g(X(1 X))] = v(x)g(x(1 x)) dx.

2 1

Now, by considering the problem of solving g(x) = y for x in the two cases

0 y 1/4 and 2 y < 0 suggests that we try

1 v 1+ 14y + 1 v 1 14y , 0 y 1/4,

2 2 2 2

gb(y) =

v 1 14y

, 2 y < 0.

2

To check, we compute

g(Y )g(Y )] = E[b

E[b g(X(1 X))g(X(1 X))]

Z 0 Z 1/2 Z 1

1

= + + gb(x(1 x))g(x(1 x)) dx

2 1 0 1/2

Z 0

1

= v(x)g(x(1 x)) dx

2 1

Z 1/2

v(1 x) + v(x)

+ g(x(1 x)) dx

0 2

Z 1

v(x) + v(1 x)

+ g(x(1 x)) dx

1/2 2

Z 0 Z 1

1

= v(x)g(x(1 x)) dx + v(x)g(x(1 x)) dx

2 1 0

Z 1

= v(x)g(x(1 x)) dx.

1

49. (a) Using the methods of Chapter 5, it is not too hard to show that

(

F (sin1 (y)) + 1 F ( sin1 (y)), 0 y 1,

FY (y) =

F (sin1 (y)) F ( sin1 (y)), 1 y < 0.

Hence,

f (sin1 (y)) + f ( sin1 (y))

p , 0 y < 1,

1 y2

fY (y) =

f (sin1 (y)) + f ( sin1 (y))

p , 1 < y < 0.

1 y2


(b) Consider

Z

E[v(X)g(Y )] = E[v(cos )g(sin )] = v(cos )g(sin ) f ( ) d .

Next, write

Z /2 Z 1 p f (sin1 (y))

v(cos )g(sin ) f ( ) d = v( 1 y2 )g(y) p dy

0 0 1 y2

and

Z 0 Z 0 p f (sin1 (y))

v(cos )g(sin ) f ( ) d = v( 1 y2 )g(y) p dy.

/2 1 1 y2

Z Z /2

v(cos )g(sin ) f ( ) d = v(cos( t))g(sin( t)) f ( t) dt

/2 0

Z /2

= v( cost)g(sint) f ( t) dt

0

Z 1 p f ( sin1 (y))

= v( 1 y2 )g(y) p dy,

0 1 y2

and

Z /2

v(cos )g(sin ) f ( ) d

Z 0

= v(cos( t))g(sin( t)) f ( t) dt

/2

Z 0

= v( cost)g(sint) f ( t) dt

/2

Z 0 p f ( sin1 (y))

= v( 1 y2 )g(y) p dy.

1 1 y2

E[v(X)g(Y )]

Z 1 p p

v( 1 y2 ) f (sin1 (y)) + v( 1 y2 ) f ( sin1 (y))

=

0 f (sin1 (y)) + f ( sin1 (y))

g(y) fY (y) dy

Z 0 p p

v( 1 y2 ) f (sin1 (y)) + v( 1 y2 ) f ( sin1 (y))

+

1 f (sin1 (y)) + f ( sin1 (y))

g(y) fY (y) dy.


We conclude that

E[v(X)|Y = y]

1 1

2 2

v( 1y ) f (sin 1(y))+v( 1y1) f ( sin (y)) , 0 < y < 1,

f (sin (y))+ f ( sin (y))

= 1

1

2 2

v( 1y ) f (sin 1(y))+v( 1y )1f ( sin (y)) , 1 < y < 0.

f (sin (y))+ f ( sin (y))

1

E[X|Y ]I(,1/n) (E[X|Y ]) I (E[X|Y ]),

n (,1/n)

and so

1

0 E[Xg(Y )] = E[E[X|Y ]g(Y )] P(E[X|Y ] < 1/n) < 0,

n

which is a contradiction. Hence, P(E[X|Y ] < 0) = 0.

51. Write

= E[X + g(Y )] E[X g(Y )]

= E E[X + |Y ]g(Y ) E E[X |Y ]g(Y )

= E E[X + |Y ] E[X |Y ] g(Y ) .

52. Following the hint, we begin with

E[X|Y ] = E[X + |Y ] E[X |Y ] E[X + |Y ] + E[X |Y ] = E[X + |Y ] + E[X |Y ].

Then

E E[X|Y ] E E[X + |Y ] + E E[X |Y ] = E[X + ] + E[X ] < .

53. To show that E[h(Y )X|Y ] = h(Y )E[X|Y ], we have to show that the right-hand side

satisfies the characterizing equation of the left-hand side. Since the characterizing

equation for the left-hand side is

E[{h(Y )X}g(Y )] = E E[h(Y )X|Y ]g(Y ) , for all bounded g,

E[{h(Y )X}g(Y )] = E {h(Y )E[X|Y ]}g(Y ) , for all bounded g. ()

The only other thing we know is the characterizing equation for E[X|Y ], which is

E[Xg(Y )] = E E[X|Y ]g(Y ) , for all bounded g.


Since g in the above formula is an arbitrary and bounded function, and since h is

also bounded, we can rewrite the above formula by replacing g(Y ) with g(Y )h(Y ) for

arbitrary bounded g. We thus have

E[X{g(Y )h(Y )}] = E E[X|Y ]{g(Y )h(Y )} , for all bounded g,

54. We must show that E[X|q(Y )] satisfies the characterizing

of E E[X|Y ] q(Y ) .

equation

To write down the characterizing equation for E E[X|Y ]q(Y ) , it is convenient to use

the notation Z := E[X|Y ]. Then the characterizing equation for E[Z|q(Y )] is

E[Zg(q(Y ))] = E E[Z|q(Y )]g(q(Y )) , for all bounded g.

We must show that this equation holds when E[Z|q(Y )] is replaced by E[X|q(Y )]; i.e.,

we must show that

E[Zg(q(Y ))] = E E[X|q(Y )]g(q(Y )) , for all bounded g.

E[E[X|Y ]g(q(Y ))] = E E[X|q(Y )]g(q(Y )) , for all bounded g. ()

E[Xh(Y )] = E E[X|Y ]h(Y ) , for all bounded h.

Since h is bounded and arbitrary, we can replace h(Y ) by g(q(Y )) for arbitrary bound-

ed g. Thus,

E[Xg(q(Y ))] = E E[X|Y ]g(q(Y )) , for all bounded g.

E[Xg(q(Y ))] = E E[X|q(Y )]g(q(Y )) , for all bounded g.

55. The desired result,

can be rewritten as

E[(X E[X|Y ])g(Y )h(Y )] = 0,

where g is a bounded function and h(Y ) L2 . But then g(Y )h(Y ) L2 , and there-

fore this last equation must hold by the orthogonality principle since E[X|Y ] is the

projection of X onto M = {v(Y ) : E[v(Y )2 ] < }.


h i

E[{h(Y )X}g(Y )] = E lim hn (Y )Xg(Y )

n

= lim E[X{hn (Y )g(Y )}]

n

= lim E E[X|Y ]{hn (Y )g(Y )}

n

h i

= E lim hn (Y )E[X|Y ]g(Y )

n

= E {h(Y )E[X|Y ]}g(Y ) .

57. First write

n

Y = E[Y |Y ] = E[X1 + + Xn |Y ] = E[Xi |Y ].

i=1

By symmetry, we must have E[Xi |Y ] = E[X1 |Y ] for all i. Then Y = nE[X1 |Y ], or

E[X1 |Y ] = Y /n.

58. Write

= E[Yn+1 |Y1 , . . . ,Yn ] +Yn + +Y1

= E[Yn+1 ] + Xn , by indep. & def. of Xn ,

= Xn , since E[Yn+1 ] = 0.

59. For n 1,

E[Xn+1 ] = E E[Xn+1 |Yn , . . . ,Y1 ] = E[Xn ],

where the second equality uses the definition of a martingale. Hence, E[Xn ] = E[X1 ]

for n 1.

60. For n 1,

E[Xn+1 ] = E E[Xn+1 |Yn , . . . ,Y1 ] E[Xn ],

where the inequality uses the definition of a supermartingale. Since Xn 0, E[Xn ] 0.

Hence, 0 E[Xn ] E[X1 ] for n 1.

61. Since Xn+1 := E[Z|Yn+1 , . . . ,Y1 ],

E[Xn+1 |Yn , . . . ,Y1 ] = E E[Z|Yn+1 , . . . ,Y1 ]Yn , . . . ,Y1

= E[Z|Yn , . . . ,Y1 ], by the smoothing property,

=: Xn .

62. Since Xn := w(Yn ) w(Y1 ), observe that Xn is a function of Y1 , . . . ,Yn and that Xn+1 =

w(Yn+1 )Xn . Then

E[Xn+1 |Yn , . . . ,Y1 ] = E[w(Yn+1 )Xn |Yn , . . . ,Y1 ] = Xn E[w(Yn+1 )|Yn , . . . ,Y1 ]

= Xn E[w(Yn+1 )], by independence.


It remains to compute

Z Z Z

f (y)

E[w(Yn+1 )] = w(y) f (y) dy = f (y) dy = f(y) dy = 1.

f (y)

63. To begin, write

fYn+1 Y1 (yn+1 , . . . , y1 )

wn+1 (y1 , . . . , yn+1 ) =

fYn+1 Y1 (yn+1 , . . . , y1 )

fY |Y Y (yn+1 |yn , . . . , y1 ) fYn Y1 (yn , . . . , y1 )

= n+1 n 1 .

fYn+1 |Yn Y1 (yn+1 |yn , . . . , y1 ) fYn Y1 (yn , . . . , y1 )

If we put

fYn+1 |Yn Y1 (yn+1 |yn , . . . , y1 )

wn+1 (yn+1 , . . . , y1 ) := ,

fYn+1 |Yn Y1 (yn+1 |yn , . . . , y1 )

then Xn+1 := wn+1 (Y1 , . . . ,Yn+1 ) = wn+1 (Yn+1 , . . . ,Y1 )Xn , where Xn is a function of

Y1 , . . . ,Yn . We can now write

= Xn E[wn+1 (Yn+1 , . . . ,Y1 )|Yn , . . . ,Y1 ].

= E[wn+1 (Yn+1 , yn . . . , y1 )|Yn = yn , . . . ,Y1 = y1 ]

Z

= wn+1 (y, yn , . . . , y1 ) fYn+1 |Yn Y1 (y|yn , . . . , y1 ) dy

Z

= fYn+1 |Yn Y1 (y|yn , . . . , y1 ) dy = 1.

Y0 , . . . ,Yn . Note also that Xn+1 = Xn +Wn+1 . Now write

Next,

Z

E[Wn+1 |Yn = yn , . . . ,Y0 = y0 ] = E Yn+1

zp(z|Yn ) dzYn = yn , . . . ,Y0 = y0

Z

= E Yn+1

zp(z|yn ) dzYn = yn , . . . ,Y0 = y0

Z

= E[Yn+1 |Yn = yn , . . . ,Y0 = y0 ] zp(z|yn ) dz

Z Z

= z fYn+1 |Yn Y0 (z|yn , . . . , y0 ) dz zp(z|yn ) dz


Z Z

= z fYn+1 |Yn (z|yn ) dz zp(z|yn ) dz

Z Z

= zp(z|yn ) dz zp(z|yn ) dz = 0.

65. First write

E[Yn+1 |Xn = in , . . . , X0 = i0 ] = E Xn+1 Xn = in , . . . , X0 = i0

= j P(Xn+1 = j|Xn = in , . . . , X0 = i0 )

j

= j P(Xn+1 = j|Xn = in ).

j

j

1 a i1 1 a i+1

= (1 a) + a

a a

1 a i1 h 1 a 2 i

= 1a+ a

a a

1 a i1 h (1 a)2 i

= 1a+

a a

(1 a)i h 1ai

= 1+ = i.

ai1 a

If Xn = 0 or Xn = 1, then Xn+1 = Xn , and so

Xn .

66. First, since Xn is a submartingale, it is clear that

= E[Xn+1 |Yn , . . . ,Y1 ] An+1

h i

= E[Xn+1 |Yn , . . . ,Y1 ] An + (E[Xn+1 |Yn , . . . ,Y1 ] Xn )

= Xn An = Mn .


= E[Z f1 Z1/2 ]E[(Z f2 Z f1 ) ] + E[|Z f1 |2 ]

Z f1

= 0 0 + E[|Z f1 |2 ] = S( ) d .

1/2

n+1

1 n j2 f n 1 e j2 f e j2 f e j2 f 1 e j2 f n

n k=1

e =

n 1 e j2 f

=

n

1 e j2 f

e j2 f e j f n e j f n e j f n e j2 f e j f n sin( f n)

= j f j f = j f .

n e e e j f n e sin( f )

N N

E[|Y |2 ] = E[YY ] = E dn Xn dk Xk

n=N k=N

N N N N

= dn dk E[Xn Xk ] = dn dk R(n k)

n=N k=N n=N k=N

N N Z 1/2

= dn dk S( f )e j2 f (nk) d f

n=N k=N 1/2

Z 1/2 N

2

j2 f n

= S( f ) dn e d f.

1/2 n=N

70. Write

N N N N

E[T (G0 )T (H0 )] = E gn Xn hk Xk = gn hn R(n k)

n=N k=N n=N k=N

N N Z 1/2

= gn hn S( f )e j2 f (nk) d f

n=N k=N 1/2

Z 1/2 N N

j2 f n j2 f k

=

1/2

S( f ) gn e hk e df

n=N k=N

Z 1/2

= S( f )G0 ( f )H0 ( f ) d f .

1/2

en ) + T (G

kT (G) Y k2 = kT (G) T (Gn ) + T (Gn ) T (G en ) Y k2

kT (G) T (Gn )k2 + kT (Gn ) T (G en )k2 + kT (G

en ) Y k2 .


On the right-hand side, the first and third terms go to zero. To analyze the middle

term, write

en )k2 = kT (Gn G

kT (Gn ) T (G en )k2 = kGn G

en k

kGn Gk + kG Gen k 0.

(b) To show T is norm preserving on L2 (S), let Gn G with T (Gn ) T (G). Then

by Problem 21, kT (Gn )k2 kT (G)k2 , and similarly, kGn k kGk. Now write

n n

= kGk.

(c) To show T is linear on L2 (S), fix G, H L2 (S), and let Gn and Hn be trigono-

metric polynomials converging to G and H, respectively. Then

Gn + Hn G + H, ()

kT ( G + H) { T (G) + T (H)}k2

= kT ( G + H) T ( Gn + Hn )k2

+ kT ( Gn + Hn ) { T (Gn ) + T (Hn )}k2

+ k{ T (Gn ) + T (Hn )} { T (G) + T (H)}k2 .

Now, the first term on the right goes to zero on account of () and the definition

of T . The second term on the right is equal to zero because T is linear on

trigonometric polynomials. The third term goes to zero upon observing that

+ | | kT (Hn ) T (H)k2 .

(d) Using parts (b) and (c), kT (G) T (H)k2 = kT (G H)k2 = kG Hk implies

that T is actually uniformly continuous.

72. It suffices to show that I[1/2, f ] L2 (S). Write

Z 1/2 Z f Z 1/2

I[1/2, f ] ( )2 S( ) d = S( ) d S( ) d = R(0) = E[Xn2 ] < .

1/2 1/2 1/2

73. (a) We know that for trigonometric polynomials, E[T (G)] = 0. Hence, if Gn is a

sequence of trigonometric polynomials converging to G in L2 (S), then T (Gn )

T (G) in L2 , and then E[T (Gn )] E[T (G)].

(b) For trigonometric polynomials G and H, we have
    ⟨T(G), T(H)⟩ := E[T(G)T(H)*] = ∫_{−1/2}^{1/2} G(f)H(f)* S(f) df.

If G_n → G and H_n → H in L²(S), then we can use the result of Problem 24 to write
    ⟨T(G), T(H)⟩ = lim_{n→∞} ⟨T(G_n), T(H_n)⟩ = lim_{n→∞} ∫_{−1/2}^{1/2} G_n(f)H_n(f)* S(f) df
    = ∫_{−1/2}^{1/2} G(f)H(f)* S(f) df.

For disjoint intervals (f₁, f₂] and (f₃, f₄],
    ⟨T(I_{(f₁,f₂]}), T(I_{(f₃,f₄]})⟩ = ∫_{−1/2}^{1/2} I_{(f₁,f₂]}(f) I_{(f₃,f₄]}(f) S(f) df = 0.

74. (a) Define
    L²(S_Y) := {G : ∫_{−1/2}^{1/2} |G(f)|² S_Y(f) df < ∞}.

(b) Write
    T_Y(G₀) = Σ_{n=−N}^{N} g_n Y_n = Σ_{n=−N}^{N} g_n Σ_{k=−∞}^{∞} h_k X_{n−k}
    = Σ_{n=−N}^{N} g_n Σ_{k=−∞}^{∞} h_k ∫_{−1/2}^{1/2} e^{j2πf(n−k)} dZ_f
    = ∫_{−1/2}^{1/2} G₀(f) H(f) dZ_f.

Now let G_n be a sequence of trigonometric polynomials converging to G in L²(S_Y). Since T_Y is continuous,
    T_Y(G) = lim_{n→∞} T_Y(G_n) = lim_{n→∞} ∫_{−1/2}^{1/2} G_n(f) H(f) dZ_f
    = ∫_{−1/2}^{1/2} G(f) H(f) dZ_f.

    V_f := T_Y(I_{[−1/2, f]}) = ∫_{−1/2}^{1/2} I_{[−1/2, f]}(ν) H(ν) dZ_ν = ∫_{−1/2}^{f} H(ν) dZ_ν.

tions. For general G ∈ L²(S_Y), approximate G by a sequence of piecewise-constant
functions G_n and write
    ∫_{−1/2}^{1/2} G(f) dV_f = lim_{n→∞} ∫_{−1/2}^{1/2} G_n(f) dV_f = lim_{n→∞} ∫_{−1/2}^{1/2} G_n(f) H(f) dZ_f
    = lim_{n→∞} T(G_n H) = T(GH) = ∫_{−1/2}^{1/2} G(f) H(f) dZ_f.

CHAPTER 14

Problem Solutions

we can write
    P(|X_n| ≥ ε) = 2 ∫_ε^∞ (1/n) / (π[(1/n)² + x²]) dx = (2/π) ∫_{nε}^∞ dy/(1 + y²) → 0.

2. Since |c_n − c| → 0, given ε > 0, there is an N such that for n ≥ N, |c_n − c| < ε. For
such n,
    {ω : |c_n − c| ≥ ε} = ∅,
and so P(|c_n − c| ≥ ε) = 0.

3. We show that X_n converges in probability to zero. Observe that X_n takes only the
values n and zero. Hence, for 0 < ε < 1, |X_n| ≥ ε if and only if X_n = n, which
happens if and only if U ≤ 1/√n. We can now write
    P(|X_n| ≥ ε) = P(U ≤ 1/√n) = 1/√n → 0.
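As a numerical sanity check, P(|X_n| ≥ ε) can be estimated by simulation. This is a sketch assuming, as in the computation above, that X_n = n on the event {U ≤ 1/√n} and zero otherwise; the function name and the trial count are ad hoc:

```python
import math
import random

def estimate_prob_exceed(n, trials=200_000, seed=42):
    """Seeded Monte Carlo estimate of P(|X_n| >= eps) for 0 < eps < 1,
    where X_n = n on the event {U <= 1/sqrt(n)} and 0 otherwise."""
    rng = random.Random(seed)
    threshold = 1 / math.sqrt(n)
    hits = sum(1 for _ in range(trials) if rng.random() <= threshold)
    return hits / trials

for n in (4, 100, 10_000):
    # the estimate should track 1/sqrt(n), which goes to 0
    print(n, estimate_prob_exceed(n), 1 / math.sqrt(n))
```

The estimates shrink like 1/√n, illustrating convergence in probability to zero.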

4. Write
    P(|X_n| ≥ ε) = P(|V| ≥ c_n ε) = 2 ∫_{c_n ε}^∞ f_V(v) dv → 0,
since c_n → ∞.

5. Using the hint,
    P(X ≠ Y) = P(⋃_{k=1}^∞ {|X − Y| ≥ 1/k}) = lim_{K→∞} P(|X − Y| ≥ 1/K).
We now show that the above limit is zero. To begin, observe that |X − Y| ≥ 1/K and
|X_n − X| < 1/(2K) imply |X_n − Y| ≥ 1/(2K). Hence, we can write
    P(|X − Y| ≥ 1/K) ≤ P({|X_n − X| ≥ 1/(2K)} ∪ {|X_n − Y| ≥ 1/(2K)})
    ≤ P(|X_n − X| ≥ 1/(2K)) + P(|X_n − Y| ≥ 1/(2K)),
and both terms on the right go to zero as n → ∞ since X_n converges in probability
to both X and Y.

6. Let > 0 be given. We must show that for every > 0, for all sufficiently large n,

P(|Xn X| ) < . Without loss of generality, assume < . Let n be so large that

P(|Xn X| ) < . Then since < ,

= 1 [FX ( ) FX (( ))] = 1 FX ( ) + FX (( ))

1 FX ( ) + FX ( ).

FX ( ) < /8 and FX ( ) < /8, and then P(|X| > ) < /4. Similarly, for

large , P(|Y | > ) < /4.

(b) Observe that if the four conditions hold, then

and similarly, |Yn | 2 . Now that both (Xn ,Yn ) and (X,Y ) lie in the rectangle,

{|X| > } {|Y | > }.

Hence,

+ P(|X| > ) + P(|Y | > )

< /4 + /4 + /4 + /4 = .

8. (a) Since the Xi are i.i.d., the Xi2 are i.i.d. and therefore uncorrelated and have com-

mon mean E[Xi2 ] = 2 + m2 and common variance

    E[(X_i² − E[X_i²])²] = E[X_i⁴] − (E[X_i²])² = γ₄ − (σ² + m²)².
Hence, by the weak law of large numbers, V_n := (1/n) Σ_{i=1}^n X_i² converges in
probability to σ² + m².

(b) Observe that Sn2 = g(Mn ,Vn )n/(n 1), where g(m, v) := v m2 is continuous.

Hence, by the preceding problem, g(Mn ,Vn ) converges in probability to g(m, v)

= (σ² + m²) − m² = σ². Next, by Problem 2, n/(n − 1) converges in probability
to 1, and then the product [n/(n − 1)] g(M_n, V_n) converges in probability to
1 · g(m, v) = σ².

as n → ∞. Now, if |X − X_n| < 1, then |X| − |X_n| < 1, and it follows that
    |X| < 1 + |X_n| ≤ 1 + |Y|.
Equivalently,
    |X| ≥ |Y| + 1 implies |X_n − X| ≥ 1.
Hence,
    P(|X| ≥ |Y| + 1) ≤ P(|X_n − X| ≥ 1) → 0.

(b) Following the hint, write
    E[|X_n − X|] = E[|X_n − X| I_{A_n}] + E[|X_n − X| I_{A_n^c}]
    ≤ E[(|X_n| + |X|) I_{A_n}] + ε P(A_n^c)
    ≤ E[(Y + |X|) I_{A_n}] + ε
    ≤ ε + ε,
since for large n, P(A_n) < δ implies E[(Y + |X|) I_{A_n}] < ε. Hence, E[|X_n − X|] ≤
2ε.

10. (a) Suppose g(x) is bounded, nonnegative, and g(x) 0 as x 0. Then given

> 0, there exists a > 0 such that g(x) < /2 for all |x| < . For |x| ,

we use the fact that g is bounded to write g(x) G for some positive, finite G.

Since Xn converges to zero in probability, for large n, P(|Xn | ) < /(2G).

Now write

    E[g(X_n)] = E[g(X_n) I_{[0,δ)}(|X_n|)] + E[g(X_n) I_{[δ,∞)}(|X_n|)]
    ≤ E[(ε/2) I_{[0,δ)}(|X_n|)] + E[G I_{[δ,∞)}(|X_n|)]
    = (ε/2) P(|X_n| < δ) + G P(|X_n| ≥ δ)
    < ε/2 + G · ε/(2G) = ε.

(b) By applying part (a) to the function g(x) = x/(1 + x), it follows that if Xn con-

verges in probability to zero, then

|Xn |

lim E = 0.

n 1 + |Xn |

Now we show that if the above limit holds, then X_n must converge in probability
to zero. Following the hint, we use the fact that g(x) = x/(1 + x) is an increasing
function for x ≥ 0. Write
    E[g(|X_n|)] = E[g(|X_n|) I_{[δ,∞)}(|X_n|)] + E[g(|X_n|) I_{[0,δ)}(|X_n|)]
    ≥ E[g(|X_n|) I_{[δ,∞)}(|X_n|)]
    ≥ E[g(δ) I_{[δ,∞)}(|X_n|)]
    = g(δ) P(|X_n| ≥ δ).
Thus, since g(x) is nonnegative and nondecreasing, if E[g(|X_n|)] → 0, then
P(|X_n| ≥ δ) → 0; i.e., X_n converges in probability to zero.

11. First note that for the constant random variable Y c, FY (y) = I[c,) (y). Similarly,

for Yn cn , FYn (y) = I[cn ,) (y). Since the only point at which FY is not continuous is

y = c, we must show that I[cn ,) (y) I[c,) (y) for all y 6= c. Consider a y with c < y.

For all sufficiently large n, cn will be very close to c so close that cn < y, which

implies FYn (y) = 1 = FY (y). Now consider y < c. For all sufficiently large n, cn will

be very close to c so close that y < cn , which implies FYn (y) = 0 = FY (y).

12. For 0 < c < , FY (y) = P(cX y) = FX (y/c), and FYn (y) = FX (y/cn ). Now, y is a

continuity point of FY if and only if y/c is a continuity point of FX . For such y, since

y/cn y/c, FX (y/cn ) FX (y/c). For c = 0, Y 0, and FY (y) = I[0,) (y). For y 6= 0,

+, y > 0,

y/cn

, y < 0,

and so

1, y > 0,

FYn (y) = FX (y/cn )

0, y < 0,

which is exactly FY (y) for y 6= 0.

13. Since Xt Yt Zt ,

{Zt y} {Yt y} {Xt y},

we can write

P(Zt y) P(Yt y) P(Xt y),

or F_{Z_t}(y) ≤ F_{Y_t}(y) ≤ F_{X_t}(y), and it follows that
    lim_{t→∞} F_{Z_t}(y) ≤ lim_{t→∞} F_{Y_t}(y) ≤ lim_{t→∞} F_{X_t}(y),
where the leftmost and rightmost limits are both equal to F(y). Hence,
lim_{t→∞} F_{Y_t}(y) = F(y).

14. Since X_n converges in mean to X, the inequality
    |E[X_n] − E[X]| = |E[X_n − X]| ≤ E[|X_n − X|]
shows that E[X_n] → E[X]. Hence,
    φ_{X_n}(ν) = (1/E[X_n]) / (1/E[X_n] − jν) → (1/E[X]) / (1/E[X] − jν).
Since limits are unique, the above right-hand side must be φ_X(ν), which implies X is
an exponential random variable.

15. Since Xn and Yn each converge in distribution to constants x and y, respectively, they

also converge in probability. Hence, as noted in the text, Xn +Yn converges in proba-

bility to x + y. Since convergence in probability implies convergence in distribution,

Xn +Yn converges in distribution to x + y.

16. Following the hint, we note that each Yn is a finite linear combination of independent

Gaussian increments. Hence, each Yn is Gaussian. Since Y is the mean-square limit

of the Gaussian Y_n, the distribution of Y is also Gaussian by the example cited in
the hint. Furthermore, since each Y_n = ∫₀^∞ g_n(τ) dW_τ, Y_n has zero mean and variance
σ² ∫₀^∞ g_n(τ)² dτ = σ²‖g_n‖². By the cited example, Y has zero mean. Also, since
    |‖g_n‖ − ‖g‖| ≤ ‖g_n − g‖ → 0,
we have
    var(Y) = E[Y²] = lim_{n→∞} E[Y_n²] = lim_{n→∞} var(Y_n)
    = lim_{n→∞} σ²‖g_n‖² = σ²‖g‖² = σ² ∫₀^∞ g(τ)² dτ.

17. Write
    Σ_{i=1}^n c_i X_{t_i} = Σ_{i=1}^n c_i ∫₀^∞ g(t_i, τ) dW_τ = ∫₀^∞ [Σ_{i=1}^n c_i g(t_i, τ)] dW_τ =: ∫₀^∞ g̃(τ) dW_τ,
which is Gaussian by the preceding analysis.

18. The plan is to show that the increments are Gaussian and uncorrelated. It will then

follow that the increments are independent. For 0 ≤ u < v ≤ s < t < ∞, write

    [ X_v − X_u ]   [ −1   1   0   0 ] [ X_u ]
    [           ] = [                ] [ X_v ]
    [ X_t − X_s ]   [  0   0  −1   1 ] [ X_s ]
                                       [ X_t ].

By writing

Z t Z Z

Xt := g( ) dW = g( )I[0,t] ( ) dW = h(t, ) dW ,

0 0 0

where h(t, ) := g( )I[0,t] ( ), we see that by the preceding problem, the vector on the

right above is Gaussian, and hence, so are the increments in the vector on the left.

Next,
    E[(X_t − X_s)(X_v − X_u)] = E[(∫_s^t g(τ) dW_τ)(∫_u^v g(τ) dW_τ)]
    = E[(∫₀^∞ g(τ) I_{(s,t]}(τ) dW_τ)(∫₀^∞ g(τ) I_{(u,v]}(τ) dW_τ)]
    = σ² ∫₀^∞ g(τ)² I_{(s,t]}(τ) I_{(u,v]}(τ) dτ = 0.

19. Write
    Σ_{k=1}^n c_k A_k = Σ_{k=1}^n c_k ∫_a^b X_τ φ_k(τ) dτ = ∫_a^b X_τ [Σ_{k=1}^n c_k φ_k(τ)] dτ = ∫_a^b g(τ) X_τ dτ,

which is the mean-square limit of Riemann sums of the form Σ_i g(τ_i) X_{τ_i} Δτ_i.
Since X is a Gaussian process, these sums are Gaussian, and hence, so is their mean-square limit.

20. If M_{X_n}(s) → M_X(s), then this holds when s = jν; i.e., φ_{X_n}(ν) → φ_X(ν). But this
implies that X_n converges in distribution to X. Similarly, if G_{X_n}(z) → G_X(z), then
this holds when z = e^{jν}; i.e., φ_{X_n}(ν) → φ_X(ν). But this implies X_n converges in
distribution to X.

21. (a) Write

FX (k + 1/2) FX (k 1/2) = p(k),

where we have used the fact that since X is integer valued, k 1/2 is a continuity

point of FX .

(b) The continuity points of FX are the noninteger values of x. For such x > 0,

suppose k < x < k + 1. Then

k k

FXn (x) = P(Xn x) = P(Xn k) = pn (i) p(i) = P(X k) = FX (x).

i=0 i=0

22. Let
    p_n(k) := P(X_n = k) = (n choose k) p_n^k (1 − p_n)^{n−k} = [n!/(k!(n−k)!)] p_n^k (1 − p_n)^{n−k},
and put
    q_n := √(2π) n^{n+1/2} e^{−n} / n!   and   r_n(k) := (n − k)! / [√(2π) (n − k)^{n−k+1/2} e^{−(n−k)}].
By Stirling's formula, q_n → 1 and r_n(k) → 1, and so q_n r_n(k) → 1 as well. If we can
show that p_n(k) q_n r_n(k) → p(k), then
    lim_{n→∞} p_n(k) = lim_{n→∞} [p_n(k) q_n r_n(k)] / [q_n r_n(k)] = p(k)/1 = p(k).

Now write
    lim_{n→∞} p_n(k) q_n r_n(k) = lim_{n→∞} p_n(k) · [√(2π) n^{n+1/2} e^{−n}/n!] · (n − k)!/[√(2π) (n − k)^{n−k+1/2} e^{−(n−k)}]
    = (e^{−k}/k!) lim_{n→∞} p_n^k (1 − p_n)^{n−k} n^{n+1/2} / (n − k)^{n−k+1/2}
    = (e^{−k}/k!) lim_{n→∞} p_n^k (1 − p_n)^{n−k} n^{n+1/2} / [n^{n−k+1/2} (1 − k/n)^{n−k+1/2}]
    = (e^{−k}/k!) lim_{n→∞} (np_n)^k (1 − p_n)^{n−k} / [(1 − k/n)^{−k+1/2} (1 − k/n)^n]
    = (e^{−k}/k!) lim_{n→∞} (np_n)^k [1 − (np_n)/n]^n (1 − p_n)^{−k} / [(1 − k/n)^{−k+1/2} (1 − k/n)^n]
    = (e^{−k}/k!) · λ^k e^{−λ} · (1/e^{−k}) = λ^k e^{−λ}/k! = p(k).
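The Poisson limit of the binomial can also be checked numerically: with np_n = λ held fixed, the binomial(n, λ/n) pmf approaches the Poisson(λ) pmf. A small sketch (the helper names are ad hoc):

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    """Binomial(n, p) probability of exactly k successes."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    """Poisson(lam) probability mass at k."""
    return exp(-lam) * lam**k / factorial(k)

lam, k = 2.0, 3
for n in (10, 100, 10_000):
    print(n, binom_pmf(n, lam / n, k))  # approaches the Poisson value below
print(poisson_pmf(lam, k))
```

As n grows, the binomial probabilities agree with the Poisson value to several decimal places.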

23. First write
    G_{X_n}(z) = [(1 − p_n) + p_n z]^n = [1 + p_n(z − 1)]^n = [1 + np_n(z − 1)/n]^n.
Since np_n → λ, it follows that G_{X_n}(z) → e^{λ(z−1)}, the Poisson(λ) probability
generating function.

24. (a) Here are the sketches:

I(,a] (t) ga,b (t) I(,b] (t)

1 1 1

t t t

a b a b a b

(b) From part (a) we can write
    E[I_{(−∞,a]}(Y)] ≤ E[g_{a,b}(Y)] ≤ E[I_{(−∞,b]}(Y)],
where the left-hand side is P(Y ≤ a) and the right-hand side is P(Y ≤ b). In other words,
    F_Y(a) ≤ E[g_{a,b}(Y)] ≤ F_Y(b).

n n

(d) Similarly,

n n

(e) If x is a continuity point of FX , then given any > 0, for sufficiently small x,

n n

Since > 0 is arbitrary, the liminf and the limsup are equal, limn FXn (x) exists

and is equal to FX (x).

25. If Xn converges in distribution to zero, then Xn converges in probability to zero, and

by Problem 10,

    lim_{n→∞} E[|X_n| / (1 + |X_n|)] = 0.

Conversely, if the above limit holds, then by Problem 10, Xn converges in probability

to zero, which implies convergence in distribution to zero.

26. Observe that f_n(x) = n f(nx) implies
    F_n(x) = F(nx) → { 1, x > 0;  F(0), x = 0;  0, x < 0. }

So, for x 6= 0, Fn (x) I[0,) (x), which is the cdf of X 0. In other words, Xn con-

verges in distribution to zero, which implies convergence in probability to zero.

27. Since Xn converges in mean square to X, Xn converges in distribution to X. Since

2

g(x) := x2 ex is a bounded continuous function, E[g(Xn )] E[g(X)].

28. Let 0 < < 1 be given. For large t, we have |u(t) 1| < , or

Rewrite this as

Then

F(z(1 )) = lim FZt (z(1 )) lim P(Zt zu(t))

t t

and

lim P(Zt zu(t)) lim FZt (z(1 + )) = FZ (z(1 + )).

t t

t

u(t) := c/c(t), we have by the preceding problem that P(c(t)Zt z) F(z/c).

30. First write FXt (x) = P(Zt + s(t) x) = P(Zt x s(t)). Let > 0 be given. Then

s(t) 0 implies that for large t, |s(t)| < , or

Then

It follows that

t t

t

Nbtc Ndte

t t

Xt := p Yt p =: Zt .

/t /t

According to Problem 13, it suffices to show that Xt and Zt converge in distribution

to N(0, 1) random variables. By the preceding two problems, the distribution limit of

Zt is the same as that of c(t)Zt + s(t) if c(t) 1 and s(t) 0. We take

q

t dte

c(t) := q = t dte 1

t dte

and

t dte t dte 1 q

s(t) := q q = dte 0.

dte dte

Ndte

dte

c(t)Zt + s(t) = q ,

dte

goes through the values of Yn and therefore converges in distribution to an N(0, 1)

random variable. It is similar to show that the distribution limit of Xt is also N(0, 1).

32. (a) Let G := {X_n → X}. For ω ∈ G,
    1/(1 + X_n(ω)²) → 1/(1 + X(ω)²).

(b) Since almost sure convergence implies convergence in probability, which im-

plies convergence in distribution, we have 1/(1 + Xn2 ) converging in distribution

to 1/(1 + X 2 ). Since g(x) = 1/(1 + x2 ) is bounded and continuous, E[g(Xn )]

E[g(X)]; i.e.,

1 1

lim E = E .

n 1 + Xn2 1 + X2

33. Let G_X := {X_n → X} and G_Y := {Y_n → Y}, where P(G_X^c) = P(G_Y^c) = 0, and put G :=
{g(X_n, Y_n) → g(X, Y)}. We must show that P(G^c) = 0. Our plan is to show that
G_X ∩ G_Y ⊂ G. It follows that G^c ⊂ G_X^c ∪ G_Y^c, and we can then write
    P(G^c) ≤ P(G_X^c) + P(G_Y^c) = 0.
For ω ∈ G_X ∩ G_Y, (X_n(ω), Y_n(ω)) → (X(ω), Y(ω)). Since g(x, y) is continuous, for
such ω, g(X_n(ω), Y_n(ω)) → g(X(ω), Y(ω)). Thus, G_X ∩ G_Y ⊂ G.

34. Let G_X := {X_n → X} and G_{XY} := {X = Y}, where P(G_X^c) = P(G_{XY}^c) = 0. Put G_Y :=
{X_n → Y}. We must show that P(G_Y^c) = 0. Our plan is to show that G_X ∩ G_{XY} ⊂ G_Y.
It follows that G_Y^c ⊂ G_X^c ∪ G_{XY}^c, and we can then write
    P(G_Y^c) ≤ P(G_X^c) + P(G_{XY}^c) = 0.
For ω ∈ G_X ∩ G_{XY}, X_n(ω) → X(ω) = Y(ω); i.e., ω ∈ G_Y. Thus, G_X ∩ G_{XY} ⊂ G_Y.

35. Let G_X := {X_n → X} and G_Y := {X_n → Y}, where P(G_X^c) = P(G_Y^c) = 0. Put G_{XY} :=
{X = Y}. We must show that P(G_{XY}^c) = 0. Our plan is to show that G_X ∩ G_Y ⊂ G_{XY}.
It follows that G_{XY}^c ⊂ G_X^c ∪ G_Y^c, and we can then write
    P(G_{XY}^c) ≤ P(G_X^c) + P(G_Y^c) = 0.
For ω ∈ G_X ∩ G_Y, X_n(ω) → X(ω) and X_n(ω) → Y(ω). Since limits of sequences of real
numbers are unique, for such ω, X(ω) = Y(ω); i.e., ω ∈ G_{XY}. Thus, G_X ∩ G_Y ⊂ G_{XY}.

36. Let G_X := {X_n → X}, G_Y := {Y_n → Y}, and G_I := ⋂_{n=1}^∞ {X_n ≤ Y_n}, where P(G_X^c) =
P(G_Y^c) = P(G_I^c) = 0. This last equality follows because
    P(G_I^c) ≤ Σ_{n=1}^∞ P(X_n > Y_n) = 0.
Put G := {X ≤ Y}. Our plan is to show that G_X ∩ G_Y ∩ G_I ⊂ G. It follows that
G^c ⊂ G_X^c ∪ G_Y^c ∪ G_I^c, and we can then write
    P(G^c) ≤ P(G_X^c) + P(G_Y^c) + P(G_I^c) = 0.
For ω ∈ G_X ∩ G_Y ∩ G_I, X_n(ω) → X(ω), Y_n(ω) → Y(ω), and X_n(ω) ≤ Y_n(ω) for every n.
By properties of sequences of real numbers, for such ω, we must have X(ω) ≤ Y(ω).
Thus, G_X ∩ G_Y ∩ G_I ⊂ G.

37. Suppose Xn converges almost surely to X, and Xn converges in mean to Y . Then Xn

converges in probability to X and to Y . By Problem 5, X = Y a.s.

cn X( ) = 0 0. Hence,

+, if X( ) > 0,

Y ( ) = 0, if X( ) = 0,

, if X( ) < 0.

\ [ [

P

{Xn = j}X0 = i = lim P

{Xn = j}X0 = i

M

N=1 n=N n=M

(n)

lim

M

P(Xn = j|X0 = i) = lim

M

pi j ,

n=M n=M

(n)

which must be zero since the pi j are summable.

40. Following the hint, write

\ E[S]

P(S = ) = P {S > n} = lim P(S > N) lim = 0.

N N N

n=1

n n

E[S] = lim E[Sn ] = lim lim |hk |E[|Xk |] B1/p |hk | < .

E[|hk ||Xk |] = n

n n

k=1 k=1 k=1

Z n+1 Z n+1

1 1 1

p

dt p

dt = ,

n t n (n + 1) (n + 1) p

we obtain

Z Z n+1

1 1 1 1

>

t p

dt = t p

dt (n + 1) p = np ,

1 n=1 n n=1 n=2

which implies

n=1 1/n

p < .

43. Write
    E[M_n⁴] = (1/n⁴) E[Σ_{i=1}^n Σ_{j=1}^n Σ_{l=1}^n Σ_{m=1}^n X_i X_j X_l X_m]
    = (1/n⁴) { Σ_{i=1}^n E[X_i⁴] + Σ_{i=1}^n Σ_{l≠i} E[X_i X_i X_l X_l] + Σ_{i=1}^n Σ_{j≠i} E[X_i X_j X_i X_j]
        + Σ_{i=1}^n Σ_{j≠i} E[X_i X_j X_j X_i] + 4 Σ_{i=1}^n Σ_{j≠i} E[X_i³ X_j] },
where the last sum, along with every term containing a lone factor X_j, vanishes by
independence and zero mean. Hence,
    E[M_n⁴] = (1/n⁴) [n γ₄ + 3n(n − 1) σ⁴].
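The closed form [nγ₄ + 3n(n−1)σ⁴]/n⁴ can be compared against a seeded Monte Carlo run. The sketch below uses Rademacher (±1, equiprobable) variables, for which γ₄ = σ⁴ = 1; the function name and trial count are ad hoc:

```python
import random

def mc_fourth_moment(n, trials=20_000, seed=7):
    """Seeded Monte Carlo estimate of E[M_n^4], where M_n is the sample
    mean of n i.i.d. Rademacher (+1/-1) random variables."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
        total += (s / n) ** 4
    return total / trials

n = 10
# [n*gamma4 + 3n(n-1)*sigma^4]/n^4 with gamma4 = sigma^4 = 1
formula = (n + 3 * n * (n - 1)) / n**4
print(mc_fourth_moment(n), formula)
```

For n = 10 the formula gives 0.028, and the simulated value lands close to it.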

44. By the Borel–Cantelli lemma, it suffices to show that Σ_{n=1}^∞ P(|X_n| ≥ ε) < ∞. To
this end, write
    P(|X_n| ≥ ε) = 2 ∫_ε^∞ (n/2) e^{−nx} dx = −e^{−nx} |_{x=ε}^∞ = e^{−nε}.
Then
    Σ_{n=1}^∞ P(|X_n| ≥ ε) = Σ_{n=1}^∞ (e^{−ε})^n = e^{−ε}/(1 − e^{−ε}) < ∞.

45. Write
    P(|X_n| ≥ ε) = P(X_n ≥ ε) = e^{−(√n ε)²/2} = e^{−nε²/2}.
Then
    Σ_{n=1}^∞ P(|X_n| ≥ ε) = Σ_{n=1}^∞ (e^{−ε²/2})^n ≤ 1/(1 − e^{−ε²/2}) < ∞.

46. For n > 1/ , write

Z

p 1 p 1

P(|Xn | ) = p1

x dx = .

n n p1 p1

Then

1 1

P(|Xn | ) = P(|Xn | ) +

p1 n p1

<

n>1 n<1/ n>1/

47. To begin, first observe that

that

n=1 pn < .

(a) If p_n = 1/n^{3/2}, then Σ_{n=1}^∞ p_n < ∞, but n² p_n = n^{1/2} ↛ 0. For this choice of p_n,
X_n converges almost surely but not in mean to X.
(b) If p_n = 1/n³, then Σ_{n=1}^∞ p_n < ∞ and n² p_n = 1/n → 0. For this choice of p_n, X_n
converges almost surely and in mean to X.

48. To apply the weak law of large numbers of this section to (1/n) ni=1 Xi2 requires only

that the Xi2 be i.i.d. and have finite mean; there is no second-moment requirement on

Xi2 (which would be a requirement on Xi4 ).

49. By writing
    N_n/n = (1/n) Σ_{k=1}^n (N_k − N_{k−1}),
which exhibits N_n/n as the sample mean of i.i.d. Poisson(λ) random variables, we see
that N_n/n converges almost surely to λ by the strong law of large numbers. We next
observe that N_{⌊t⌋} ≤ N_t ≤ N_{⌈t⌉}. Then
    N_{⌊t⌋}/t ≤ N_t/t ≤ N_{⌈t⌉}/t,
and it follows that
    N_{⌊t⌋}/⌈t⌉ ≤ N_t/t ≤ N_{⌈t⌉}/⌊t⌋,
and then
    (⌊t⌋/⌈t⌉) (N_{⌊t⌋}/⌊t⌋) ≤ N_t/t ≤ (N_{⌈t⌉}/⌈t⌉) (⌈t⌉/⌊t⌋),
where ⌊t⌋/⌈t⌉ → 1, N_{⌊t⌋}/⌊t⌋ → λ a.s., N_{⌈t⌉}/⌈t⌉ → λ a.s., and ⌈t⌉/⌊t⌋ → 1. Hence,
N_t/t → λ a.s.
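The conclusion N_t/t → λ can be illustrated by simulating a Poisson process through its Exp(λ) interarrival times (a sketch; the helper name, seed, and parameter values are ad hoc):

```python
import random

def poisson_count(rate, t, rng):
    """Number of renewals in [0, t] when interarrival times are Exp(rate)."""
    total, count = 0.0, 0
    while True:
        total += rng.expovariate(rate)
        if total > t:
            return count
        count += 1

rng = random.Random(123)
rate, t = 2.0, 5_000.0
nt = poisson_count(rate, t, rng)
print(nt / t)  # should be close to rate = 2.0
```

With t large, the empirical rate N_t/t is within a few percent of λ.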

50. (a) By the strong law of large numbers, for ω not in a set of probability zero,
    (1/n) Σ_{k=1}^n X_k(ω) → μ.
Hence, for any ε > 0 and all sufficiently large n,
    (1/n) Σ_{k=1}^n X_k(ω) − μ < ε,
which implies
    (1/n) Σ_{k=1}^n X_k(ω) < μ + ε,
from which it follows that
    Σ_{k=1}^n X_k(ω) < n(μ + ε).

(b) Given M > 0, let ε > 0 and choose n in part (a) so that both n(μ + ε) > M and
n ≥ M hold. Then
    T_n(ω) = Σ_{k=1}^n X_k(ω) < n(μ + ε).
Hence, for t := n(μ + ε),
    N_t(ω) = Σ_{k=1}^∞ I_{[0,t]}(T_k(ω)) ≥ n ≥ M.

(c) As noted in the solution of part (a), the strong law of large numbers implies

    T_n/n = (1/n) Σ_{k=1}^n X_k → μ a.s.
Hence, n/T_n → 1/μ a.s.

(d) First observe that
    Y_{N_t} = N_t / T_{N_t} ≥ N_t / t,
and so
    1/μ = lim_{t→∞} Y_{N_t} ≥ lim_{t→∞} N_t/t.
Next,
    Y_{N_t+1} = (N_t + 1)/T_{N_t+1} ≤ (N_t + 1)/t = N_t/t + 1/t,
and so
    1/μ = lim_{t→∞} Y_{N_t+1} ≤ lim_{t→∞} N_t/t + 0.
Hence,
    lim_{t→∞} N_t/t = 1/μ.

51. Let X_n := n I_{(0,1/n]}(U), where U ∼ uniform(0, 1]. Then for every ω, U(ω) ∈ (0, 1]. For
n > 1/U(ω), i.e., U(ω) > 1/n, X_n(ω) = 0. Thus, X_n(ω) → 0 for every ω. However,
    E[X_n] = n P(U ≤ 1/n) = 1.
Hence, X_n does not converge in mean to zero.
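Both claims are easy to check numerically: along any fixed sample path the sequence is eventually zero, while the expectation is identically one. A small sketch (the sample value u = 0.37 is arbitrary):

```python
def X(n, u):
    """X_n(omega) = n * 1{U(omega) <= 1/n}, evaluated at U(omega) = u."""
    return n if u <= 1.0 / n else 0

u = 0.37  # one fixed sample point with U(omega) = 0.37
samples = [X(n, u) for n in (1, 2, 3, 5, 10, 100)]
print(samples)  # -> [1, 2, 0, 0, 0, 0]: once n > 1/u, X_n(omega) = 0

# Yet E[X_n] = n * P(U <= 1/n) = n * (1/n) = 1 for every n,
# so the mean never shrinks and X_n cannot converge in mean to zero.
```

The pointwise sequence dies out, while the exact expectation stays at 1 for all n.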

52. Suppose D contains n points, say a x1 < < xn b. Then

n

n < G(xk +) G(xk )

k=1

n1 n

= G(xn +) + G(xk +) G(xk ) G(x1 )

k=1 k=2

n1 n

G(b) + G(xk+1 ) G(xk1 ) G(a)

k=1 k=2

n1

= G(b) + [G(xk+1 ) G(xk )] G(a)

k=1

G(b) + G(xn ) G(x1 ) G(a) 2[G(b) G(a)].

53. Write
    P(U ∈ ⋃_{n=1}^∞ K_n) = P(⋃_{n=1}^∞ {U ∈ K_n}) ≤ Σ_{n=1}^∞ P(U ∈ K_n) ≤ Σ_{n=1}^∞ ε/2^{n−1} = 2ε.

CHAPTER 15

Problem Solutions

FW1 () = 1, x > 0,

lim FWt (x) =

t0 FW1 () = 0, x < 0,

which is the cdf of the zero random variable for x 6= 0. Hence, Wt converges in

distribution to the zero random variable.

Next,

, if W1 ( ) > 0,

X( ) := lim t H W1 ( ) = 0, if W1 ( ) = 0,

t

, if W1 ( ) < 0.

Thus,

P(X = ) = P(W1 > 0) = 1 FW1 (0), P(X = ) = P(W1 < 0) = FW1 (0),

and

P(X = 0) = P(W1 = 0) = FW1 (0) FW1 (0).

determined by the covariance matrix, we simply observe that
    E[W_{t₁} W_{t₂}] = σ² min(t₁, t₂)
and
    E[(λ^{−1/2} W_{λt₁})(λ^{−1/2} W_{λt₂})] = λ^{−1} σ² min(λt₁, λt₂) = σ² min(t₁, t₂).

3. Fix δ > 0, and consider the process Z_t := W_{t+δ} − W_t. Since the Wiener process is
Gaussian with zero mean, so is the process Z_t. Hence, it suffices to consider the
covariance
    E[Z_{t₁} Z_{t₂}] = E[(W_{t₁+δ} − W_{t₁})(W_{t₂+δ} − W_{t₂})].
The time intervals involved do not overlap if t₁ + δ < t₂ or if t₂ + δ < t₁. Hence,
    E[Z_{t₁} Z_{t₂}] = { 0, |t₂ − t₁| > δ;  σ²(δ − |t₂ − t₁|), |t₂ − t₁| ≤ δ, }
which depends on t₁ and t₂ only through their difference.

4. For H = 1/2, qH ( ) = I(0,) ( ), and CH = 1. So,

Z

BH (t) BH (s) = [I(,t) ( ) I(,s) ( )] dW = Wt Ws .

5. It suffices to show that
    ∫_1^∞ [(1 + τ)^{H−1/2} − τ^{H−1/2}]² dτ < ∞.
By the mean value theorem,
    |(1 + τ)^{H−1/2} − τ^{H−1/2}| ≤ |H − 1/2| / τ^{3/2−H}.
Then
    ∫_1^∞ [(1 + τ)^{H−1/2} − τ^{H−1/2}]² dτ ≤ (H − 1/2)² ∫_1^∞ (1/τ^{3−2H}) dτ < ∞,
since 3 − 2H > 1.

6. The expression
    P(|M_n − μ| / (σ/n^{1−H}) ≤ y) = 1 − β
says that
    μ = M_n ± yσ/n^{1−H} with probability 1 − β.
Hence, the width of the confidence interval is
    2yσ/n^{1−H} = (2yσ/√n) n^{H−1/2}.

7. Suppose E[Yk2 ] = 2 k2H for k = 1, . . . , n. Substituting this into the required formula

yields

2

E[Yn+1 ] 2 n2H 2 n2H 2 (n 1)2H = 2C(n),

which we are assuming is equal to

2 (n + 1)2H 2n2H + (n 1)2H .

2 ] = 2 (n + 1)2H .

It follows that E[Yn+1

8. (a) Clearly,

m

(m)

Xe := (Xk )

k=( 1)m+1

(m) (m) m nm

E Xe Xen = C(k l)

k=( 1)m+1 l=(n1)m+1

m m

= C(k [i + (n 1)m])

k=( 1)m+1 i=1

m m

= C([ j + ( 1)m] [i + (n 1)m])

i=1 j=1

m m

= C( j i + m( n)).

i=1 j=1

(m)

Thus, Xe is WSS.

(b) From the solution of part (a), we see that

m m

Ce(m) (0) = C( j i)

i=1 j=1

m1

= mC(0) + 2 C(k)(m k)

k=1

= E[Ym2 ] = m 2 2H

, by Problem 7.

9. Starting with

C(m) (n) 2

lim = |n + 1|2H 2|n|2H + |n 1|2H ,

m m2H2 2

put n = 0 to get

C(m) (0)

lim = 2 .

m m2H2

Then observe that

m 2

1

(m) E (Xk )

C(m) (0) E[(X1 )2 ] m k=1

= = .

m2H2 m2H2 m2H2

2 2

2 2

Ce(m) (n) 1 E[Y(n+1)m ] E[Ynm ] E[Ynm ] E[Y(n1)m ]

=

m2H 2 m2H

2

E[Y(n+1)m ] 2 2

E[Y(n1)m ]

1 2H E[Ynm ]

= (n + 1)2H 2n + (n 1)2H

2 [(n + 1)m]2H (nm)2H [(n 1)m]2H

2

[(n + 1)2H 2n2H + (n 1)2H ].

2

Ce(m) (0)

lim = 2 ,

m m2H2

and

Ce(m) (n) Ce(m) (n)/m2H2

lim (m) (n) = lim = lim

m m C e(m) (0) m Ce(m) (0)/m2H2

1

= [|n + 1|2H 2|n|2H + |n 1|2H ].

2

Conversely, if both conditions hold,

Ce(m) (n) Ce(m) (0) (m) (n)

lim 2H2

= lim

m m m m2H2

Ce (0)

(m)

= lim 2H2 lim (m) (n)

m m m

2

= [|n + 1|2H 2|n|2H + |n 1|2H ].

2

12. To begin, write
    S(f) = |1 − e^{−j2πf}|^{−2d} = |e^{−jπf}(e^{jπf} − e^{−jπf})|^{−2d} = |e^{−jπf}|^{−2d} |2j sin(πf)|^{−2d}
    = [2 |sin(πf)|]^{−2d} = [4 sin²(πf)]^{−d}.
Then
    C(n) = ∫_{−1/2}^{1/2} [4 sin²(πf)]^{−d} e^{j2πfn} df
    = ∫_{−1/2}^{1/2} [4 sin²(πf)]^{−d} cos(2πfn) df
    = 2 ∫_0^{1/2} [4 sin²(πf)]^{−d} cos(2πfn) df
    = (1/π) ∫_0^π [4 sin²(θ/2)]^{−d} cos(nθ) dθ.   (∗)
Next, as suggested in the hint, apply the change of variable λ = 2π − θ to
    (1/π) ∫_π^{2π} [4 sin²(θ/2)]^{−d} cos(nθ) dθ
to get
    (1/π) ∫_0^π [4 sin²([2π − λ]/2)]^{−d} cos(n[2π − λ]) dλ,
which, using a trigonometric identity, is equal to (∗). Thus,
    C(n) = (1/2π) ∫_0^{2π} [4 sin²(θ/2)]^{−d} cos(nθ) dθ
    = (1/π) ∫_0^π [4 sin²(t)]^{−d} cos(2nt) dt,
where the last step uses the substitution θ = 2t.

From a table of definite integrals,
    C(n) = cos(nπ) Γ(1 − 2d) / [Γ((2 − 2d + 2n)/2) Γ((2 − 2d − 2n)/2)]
    = (−1)^n Γ(1 − 2d) / [Γ(1 − d + n) Γ(1 − d − n)].
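The simplification |1 − e^{−j2πf}|^{−2d} = [4 sin²(πf)]^{−d} used above can be verified numerically (a sketch; the function names and the test values of f and d are arbitrary):

```python
import cmath
import math

def S_complex(f, d):
    """S(f) = |1 - e^{-j 2 pi f}|^{-2d}, straight from the complex form."""
    return abs(1 - cmath.exp(-2j * math.pi * f)) ** (-2 * d)

def S_real(f, d):
    """S(f) = [4 sin^2(pi f)]^{-d}, the simplified real form."""
    return (4 * math.sin(math.pi * f) ** 2) ** (-d)

d = 0.3
for f in (0.05, 0.17, 0.4):
    print(f, S_complex(f, d), S_real(f, d))  # the two forms agree
```

Both expressions agree to machine precision at every test frequency.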

13. Following hint (i), let u = sin2 and dv = 3 d . Then du = 2 sin cos d and

v = 2 /( 2). Hence,

Z r Z r

2 sin2 r 1

3 sin2 d = 2 sin 2 d .

2 2

Next,

Z r 2r Z

1 2 1 dt

sin 2 d = (t/2) 2 sint

2 2 2 2

1 Z 2r

(1/2)

= t 2 sint dt

2 2

2r Z 2r

21 t 1 1

= sint + t 1 cost dt .

2 1 2 1 2

Now write

Z 2r Z 2r

t 1 cost dt = Re t 1 e jt dt Re e j /2 ( ) = cos( /2)( )

2 2

sin 2 sint

2 sin2 = and t 1 sint = t

t

both tend to zero as their arguments tend to zero or to infinity.

14. (a) First observe that
    Q(−f) = Σ_{i=1}^∞ 1/|i − f|^{2H+1} = Σ_{i=−∞}^{−1} 1/|i + f|^{2H+1}.
Hence,
    S(f) = [Q(f) + 1/|f|^{2H+1} + Q(−f)] sin²(πf),
and then
    S(f)/|f|^{1−2H} = |f|^{2H−1} [Q(f) + Q(−f)] sin²(πf) + sin²(πf)/f².

(b) We have

Z 1/2

4 cos([1 H] )(2 2H)

S( f ) d f = 2 = 2

1/2 (2 )22H (2H 1)2H

(2 )2H cos( H)(2 2H)

= .

2H(1 2H)

15. Write
    C(n) = (−1)^n Γ(1 − 2d) / [Γ(n + 1 − d) Γ(1 − d − n)]
    = (−1)^n Γ(1 − 2d) / [Γ(n + 1 − d) · (−1)^n Γ(d) Γ(1 − d)/Γ(n + d)]
    = [Γ(1 − 2d) / (Γ(1 − d) Γ(d))] · Γ(n + d)/Γ(n + 1 − d),
where the second step uses the reflection formula in the form
Γ(1 − d − n) = (−1)^n Γ(d) Γ(1 − d)/Γ(n + d).

Now, with α := 1 − d, Stirling's formula Γ(x) ≈ √(2π) x^{x−1/2} e^{−x} gives
    Γ(n + d)/Γ(n + α) ≈ (n + d)^{n+d−1/2} e^{−(n+d)} / [(n + α)^{n+α−1/2} e^{−(n+α)}]
    = n^{2d−1} e^{α−d} (1 + d/n)^{n+d−1/2} / (1 + α/n)^{n+α−1/2},
and the last ratio of parenthesized factors tends to e^d/e^α. Thus,
    Γ(n + d)/Γ(n + 1 − d) ∼ n^{2d−1},
and therefore C(n) ∼ [Γ(1 − 2d)/(Γ(1 − d) Γ(d))] n^{2d−1} as n → ∞.
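The asymptotic C(n) ∼ n^{2d−1} Γ(1−2d)/(Γ(d)Γ(1−d)) can be checked numerically; working with `math.lgamma` avoids the overflow that Γ(n+d) would cause for large n (a sketch; d = 0.3 is an arbitrary test value):

```python
from math import exp, lgamma

def C(n, d):
    """C(n) = Gamma(1-2d) Gamma(n+d) / [Gamma(d) Gamma(1-d) Gamma(n+1-d)],
    computed in log space to avoid overflow for large n."""
    return exp(lgamma(1 - 2 * d) + lgamma(n + d)
               - lgamma(d) - lgamma(1 - d) - lgamma(n + 1 - d))

d = 0.3
limit = exp(lgamma(1 - 2 * d) - lgamma(d) - lgamma(1 - d))
for n in (10, 100, 10_000):
    print(n, C(n, d) / n ** (2 * d - 1))  # approaches `limit`
print(limit)
```

The ratio C(n)/n^{2d−1} settles onto the predicted constant already for moderate n, reflecting the slow hyperbolic decay of long-range-dependent covariances.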

16. Evaluating the integral, we have

In n1 k1 1 (k/n)1 1

= = .

n1 (1 )n1 1 1

Now, given a small > 0, for large n, In /n1 > 1/(1 ) , which implies

In /n > n(1/(1 ) ) . With Bn as in the hint, we have from Bn + n

k In Bn that

Bn 1 k In Bn

1

+ + 1

1 1

n n n n n

or

In Bn In 1 k

1 1 .

n

1 n1 n n n

Thus, Bn /n1 1/(1 ).

n1 Z n n1

1 k

t 1 dt ( + 1)1 .

=k | {z } =k

| {z } | {z }

=: Bn =: In

= Bn k1 +n1

In 1 (k/n)2 1

= .

n2 2 2

Also,

In Bn k1 1

2 +

n2 n2 n n

and

Bn In

2

n2 n

imply Bn /n2 1/(2 ) as required.

18. Observe that

n

|hn | = k bnk

n=q n=q k=nq

|k ||bnk |I[nq,n] (k)

n=q k=0

= |k | |bnk |I[0,q] (n k)

k=0 n=q

q

M i (1 + /2)k

|b |

i=0 k=0

q

1

= M |bi | < .

i=0 1 1/(1 + /2)

mnYn = mn ak Xnk = mn anl Xl

n n k n l

= Xl mn anl = Xl m( +l) a ,

l n l

| {z }

= (ml)

where the reduction to the impulse follows because the convolution of n and an

corresponds in the z-transform domain to the product [1/A(z)] A(z) = 1, and 1 is the

transform of the unit impulse. Thus, n mnYn = Xm .

Next,

mnYn = mn bk Znk = mn bnl Zl

n n k n l

= Zl mn bnl = Zl m(l+k) bk

l n l k

| {z }

= hml

since this last convolution corresponds in the z-transform domain to the product

[1/A(z)] B(z) =: H(z).

20. Since X_n is WSS, E[|X_n|²] is a finite constant. Since Σ_{k=0}^∞ |h_k| < ∞, we have by an
example in Chapter 13 or by Problem 26 in Chapter 13 that Σ_{k=0}^m h_k X_{n−k} converges in
mean square as m → ∞. By another example in Chapter 13,
    E[Y_n] = E[Σ_{k=0}^∞ h_k X_{n−k}] = Σ_{k=0}^∞ h_k E[X_{n−k}].
If E[X_{n−k}] = m, then
    E[Y_n] = m Σ_{k=0}^∞ h_k

is finite and does not depend on n. Next, by the continuity of the inner product

(Problem 24 in Chapter 13),
    E[X_l Σ_{k=0}^∞ h_k X_{n−k}] = lim_{m→∞} E[X_l Σ_{k=0}^m h_k X_{n−k}] = lim_{m→∞} Σ_{k=0}^m h_k E[X_l X_{n−k}]
    = lim_{m→∞} Σ_{k=0}^m h_k R_X(l − n + k) = Σ_{k=0}^∞ h_k R_X([l − n] + k).

Similarly,
    E[Y_n Y_l] = E[(Σ_{k=0}^∞ h_k X_{n−k}) Y_l] = lim_{m→∞} E[(Σ_{k=0}^m h_k X_{n−k}) Y_l]
    = lim_{m→∞} Σ_{k=0}^m h_k E[X_{n−k} Y_l] = lim_{m→∞} Σ_{k=0}^m h_k R_{XY}(n − k − l)
    = Σ_{k=0}^∞ h_k R_{XY}([n − l] − k).