Problem Solutions
July 26, 2004 Draft
Roy D. Yates and David J. Goodman
• This solution manual remains under construction. The current count is that 575 out of 695
problems in the text are solved here, including all problems through Chapter 5.
• At the moment, we have not confirmed the correctness of every single solution. If you find
errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu.
• MATLAB functions written as solutions to homework problems can be found in the archive
matsoln.zip (available to instructors) or in the directory matsoln. Other MATLAB func-
tions used in the text or in these homework solutions can be found in the archive matcode.zip
or directory matcode. The .m files in matcode are available for download from the Wiley
website. Two other documents of interest are also available for download:
• A web-based solution set constructor for the second edition is also under construction.
• A major update of this solution manual will occur prior to September, 2004.
Problem Solutions – Chapter 1
[Venn diagram for Gerlanda's pizzas, with regions M (mushrooms), T (Tuscan), and O (onions).]
(b) Every pizza is either Regular (R), or Tuscan (T ). Hence R ∪ T = S so that R and T are
collectively exhaustive. Thus it's also (trivially) true that R ∪ T ∪ M = S. That is, R, T and
M are also collectively exhaustive.
(c) From the Venn diagram, T and O are mutually exclusive. In words, this means that Tuscan
pizzas never have onions or pizzas with onions are never Tuscan. As an aside, “Tuscan” is
a fake pizza designation; one shouldn’t conclude that people from Tuscany actually dislike
onions.
(d) From the Venn diagram, M ∩ T and O are mutually exclusive. Thus Gerlanda’s doesn’t make
Tuscan pizza with mushrooms and onions.
(e) Yes. In terms of the Venn diagram, these pizzas are in the set (T ∪ M ∪ O)c .
Problem 1.2.1 Solution
(a) An outcome specifies whether the fax is high (h), medium (m), or low (l) speed, and whether
the fax has two (t) pages or four (f) pages. The sample space is
ZF = {aaf, aff, faf, fff}. (2)
XA = {aaa, aaf, afa, aff}. (3)
D = {ffa, faf, aff, fff}. (5)
Problem 1.2.3 Solution
The sample space is
The event H, that the birthday falls in July, is described by the following 31 sample points.
1. We can divide students into engineers or non-engineers. Let A1 equal the set of engineering
students and A2 the non-engineers. The pair {A1 , A2 } is an event space.
2. We can also separate students by GPA. Let Bi denote the subset of students with GPAs G
satisfying i − 1 ≤ G < i. At Rutgers, {B1 , B2 , . . . , B5 } is an event space. Note that B5 is
the set of all students with perfect 4.0 GPAs. Of course, other schools use different scales for
GPA.
3. We can also divide the students by age. Let Ci denote the subset of students of age i in years.
At most universities, {C10 , C11 , . . . , C100 } would be an event space. Since a university may
have prodigies either under 10 or over 100, we note that {C0 , C1 , . . .} is always an event space.
4. Lastly, we can categorize students by attendance. Let D0 denote the number of students who
have missed zero lectures and let D1 denote all other students. Although it is likely that D0 is
an empty set, {D0 , D1 } is a well defined event space.
2. If we need to check whether the first resistance exceeds the second resistance, an event space
is
B1 = {R1 > R2 } B2 = {R1 ≤ R2 } . (2)
3. If we need to check whether each resistance doesn’t fall below a minimum value (in this case
50 ohms for R1 and 100 ohms for R2 ), an event space is
C1 = {R1 < 50, R2 < 100} , C2 = {R1 < 50, R2 ≥ 100} , (3)
C3 = {R1 ≥ 50, R2 < 100} , C4 = {R1 ≥ 50, R2 ≥ 100} . (4)
4. If we want to check whether the resistors in parallel are within an acceptable range of 90 to
110 ohms, an event space is
D1 = {(1/R1 + 1/R2)^(-1) < 90}, (5)
D2 = {90 ≤ (1/R1 + 1/R2)^(-1) ≤ 110}, (6)
D3 = {110 < (1/R1 + 1/R2)^(-1)}. (7)
S = {LF, BF, LW, BW}. (1)
From the problem statement, we know that P[LF] = 0.5, P[BF] = 0.2 and P[BW] = 0.2. This
implies P[LW] = 1 − 0.5 − 0.2 − 0.2 = 0.1. The questions can be answered using Theorem 1.5.
(a) The probability that a program is slow is
P[W] = P[LW] + P[BW] = 0.1 + 0.2 = 0.3. (2)
S = {H F, H W, M F, M W } . (1)
The problem statement tells us that P[H F] = 0.2, P[M W ] = 0.1 and P[F] = 0.5. We can use
these facts to find the probabilities of the other outcomes. In particular,
P [F] = P [H F] + P [M F] . (2)
This implies
P [M F] = P [F] − P [H F] = 0.5 − 0.2 = 0.3. (3)
Also, since the probabilities must sum to 1,
P[HW] = 1 − P[HF] − P[MF] − P[MW] = 1 − 0.2 − 0.3 − 0.1 = 0.4. (4)
Now that we have found the probabilities of the outcomes, finding any other probability is easy.
(b) The probability that a cell phone is mobile and fast is P[MF] = 0.3.
This is the answer you would expect since 13 out of 52 cards are hearts. The point to keep in mind
is that this is not just the common sense answer but is the result of a probability model for a shuffled
deck and the axioms of probability.
Since each of the 11 possible outcomes is equally likely, the probability of receiving a grade of i, for
each i = 0, 1, . . . , 10 is P[si ] = 1/11. The probability that the student gets an A is the probability
that she gets a score of 9 or higher. That is,
P[Grade of A] = P[9] + P[10] = 1/11 + 1/11 = 2/11. (2)
The probability of failing requires the student to get a grade less than 4.
P[Failing] = P[3] + P[2] + P[1] + P[0] = 1/11 + 1/11 + 1/11 + 1/11 = 4/11. (3)
P [L ∪ H2 ] = P [L H0 ] + P [L H1 ] + P [L H2 ] + P [B H2 ] (3)
= 0.1 + 0.1 + 0.2 + 0.1 = 0.5. (4)
(a) From the given probability distribution of billed minutes, M, the probability that a call is
billed for more than 3 minutes is
(b) The probability that a call will be billed for 9 minutes or less is
P[9 minutes or less] = Σ_{i=1}^{9} α(1 − α)^(i−1) = 1 − (0.57)^9. (5)
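As a numerical check of the geometric sum above, here is a short Python sketch; the value α = 0.43 is inferred from the printed factor 0.57 = 1 − α:

```python
from math import isclose

# Geometric model for billed minutes: P[M = i] = a*(1 - a)**(i - 1), i = 1, 2, ...
# The value a = 0.43 (so that 1 - a = 0.57) is inferred from the printed numbers.
a = 0.43

# P[9 minutes or less]: sum the first nine geometric terms.
p_nine_or_less = sum(a * (1 - a) ** (i - 1) for i in range(1, 10))

# The partial sum telescopes to 1 - (1 - a)**9.
assert isclose(p_nine_or_less, 1 - 0.57 ** 9)
print(round(p_nine_or_less, 4))  # 0.9936
```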
Problem 1.4.3 Solution
The first generation consists of two plants each with genotype yg or gy. They are crossed to produce
the following second generation genotypes, S = {yy, yg, gy, gg}. Each genotype is just as likely
as any other so the probability of each genotype is consequently 1/4. A pea plant has yellow seeds
if it possesses at least one dominant y gene. The set of pea plants with yellow seeds is
Y = {yy, yg, gy} . (1)
So the probability of a pea plant with yellow seeds is
P [Y ] = P [yy] + P [yg] + P [gy] = 3/4. (2)
Problem 1.4.6 Solution
(a) For convenience, let pi = P[F Hi ] and qi = P[V Hi ]. Using this shorthand, the six unknowns
p0 , p1 , p2 , q0 , q1 , q2 fill the table as
H0 H1 H2
F p0 p1 p2 . (1)
V q0 q1 q2
However, we are given a number of facts:
p0 + q0 = 1/3, p1 + q1 = 1/3, (2)
p2 + q2 = 1/3, p0 + p1 + p2 = 5/12. (3)
Other facts, such as q0 + q1 + q2 = 7/12, can be derived from these facts. Thus, we have
four equations and six unknowns; choosing p0 and p1 will specify the other unknowns. Un-
fortunately, arbitrary choices for either p0 or p1 can lead to negative values for the other
probabilities. In terms of p0 and p1 , the other unknowns are
q0 = 1/3 − p0 , p2 = 5/12 − ( p0 + p1 ), (4)
q1 = 1/3 − p1 , q2 = p0 + p1 − 1/12. (5)
Because the probabilities must be nonnegative, we see that
0 ≤ p0 ≤ 1/3, (6)
0 ≤ p1 ≤ 1/3, (7)
1/12 ≤ p0 + p1 ≤ 5/12. (8)
Although there are an infinite number of solutions, three possible solutions are:
p0 = 1/3, p1 = 1/12, p2 = 0, (9)
q0 = 0, q1 = 1/4, q2 = 1/3. (10)
and
p0 = 1/4, p1 = 1/12, p2 = 1/12, (11)
q0 = 1/12, q1 = 3/12, q2 = 3/12. (12)
and
p0 = 0, p1 = 1/12, p2 = 1/3, (13)
q0 = 1/3, q1 = 3/12, q2 = 0. (14)
(b) In terms of the pi , qi notation, the new facts are p0 = 1/4 and q1 = 1/6. These extra facts
uniquely specify the probabilities. In this case,
p0 = 1/4, p1 = 1/6, p2 = 0, (15)
q0 = 1/12, q1 = 1/6, q2 = 1/3. (16)
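The part (b) bookkeeping is easy to verify with exact arithmetic; a sketch using Python's fractions module and the constraints in (2)-(5):

```python
from fractions import Fraction as F

# Part (b): the extra facts p0 = 1/4 and q1 = 1/6 pin down all six probabilities.
p0, q1 = F(1, 4), F(1, 6)
p1 = F(1, 3) - q1          # from p1 + q1 = 1/3
q0 = F(1, 3) - p0          # from p0 + q0 = 1/3
p2 = F(5, 12) - (p0 + p1)  # from p0 + p1 + p2 = 5/12
q2 = F(1, 3) - p2          # from p2 + q2 = 1/3

assert (p0, p1, p2) == (F(1, 4), F(1, 6), F(0))
assert (q0, q1, q2) == (F(1, 12), F(1, 6), F(1, 3))
assert p0 + p1 + p2 + q0 + q1 + q2 == 1
```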
Problem 1.4.7 Solution
It is tempting to use the following proof: Since S and φ are mutually exclusive, and since S = S ∪ φ,
1 = P[S] = P[S] + P[φ], (1)
implying P[φ] = 0. The above “proof” used the property that for mutually exclusive sets A1 and A2,
P[A1 ∪ A2] = P[A1] + P[A2]. (2)
The problem is that this property is a consequence of the three axioms, and thus must be proven. For
a proof that uses just the three axioms, let A1 be an arbitrary set and for n = 2, 3, . . ., let An = φ.
Since A1 = ∪_{i=1}^{∞} Ai, we can use Axiom 3 to write
P[A1] = P[∪_{i=1}^{∞} Ai] = P[A1] + P[A2] + Σ_{i=3}^{∞} P[Ai]. (3)
By subtracting P[A1] from both sides, the fact that A2 = φ permits us to write
P[φ] + Σ_{i=3}^{∞} P[Ai] = 0. (4)
By Axiom 1, P[Ai] ≥ 0 for all i. Thus, Σ_{i=3}^{∞} P[Ai] ≥ 0. This implies P[φ] ≤ 0. Since Axiom 1
requires P[φ] ≥ 0, we must have P[φ] = 0.
P[∪_{i=1}^{m} Bi] = Σ_{i=1}^{m} P[Ai] = Σ_{i=1}^{m} P[Bi]. (2)
For the mutually exclusive events B1 , . . . , Bm , let Ai = Bi for i = 1, . . . , m and let Ai = φ for
i > m. In that case, by Axiom 3,
P[∪_{i=1}^{∞} Ai] = Σ_{i=1}^{m−1} P[Bi] + Σ_{i=m}^{∞} P[Ai]. (3)
P[B1 ∪ B2 ∪ · · · ∪ Bm] = Σ_{i=1}^{m} P[Bi]. (5)
Thus, P[φ] = 0. Note that this proof uses only Theorem 1.4 which uses only Axiom 3.
Since Axiom 2 says P[S] = 1, P[A^c] = 1 − P[A]. This proof uses Axioms 2 and 3.
(c) By Theorem 1.2, we can write both A and B as unions of disjoint events:
Note that so far we have used only Axiom 3. Finally, we observe that A ∪ B can be written
as the union of mutually exclusive events
Once again, using Theorem 1.4, we have
which completes the proof. Note that this claim required only Axiom 3.
(d) Observe that since A ⊂ B, we can write B as the disjoint union B = A ∪ (Ac B). By
Theorem 1.4 (which uses Axiom 3),
P[B] = P[A] + P[A^c B]. (14)
By Axiom 1, P[A^c B] ≥ 0, which implies P[A] ≤ P[B]. This proof uses Axioms 1 and 3.
(b) The probability of one handoff is P[H1 ] = P[H1 B] + P[H1 L] = 0.2. The probability that a
call with one handoff will be long is
P[L|H1] = P[H1 L]/P[H1] = 0.1/0.2 = 1/2. (3)
(c) The probability a call is long is P[L] = 1 − P[B] = 0.4. The probability that a long call will
have one or more handoffs is
P[H1 ∪ H2|L] = P[H1 L ∪ H2 L]/P[L] = (P[H1 L] + P[H2 L])/P[L] = (0.1 + 0.2)/0.4 = 3/4. (4)
(b) The conditional probability that 6 is rolled given that the roll is greater than 3 is
P[R6|G3] = P[R6 G3]/P[G3] = P[s6]/P[s4, s5, s6] = (1/6)/(3/6) = 1/3. (2)
(c) The event E that the roll is even is E = {s2 , s4 , s6 } and has probability 3/6. The joint
probability of G 3 and E is
P [G 3 E] = P [s4 , s6 ] = 1/3. (3)
The conditional probability of G3 given E is
P[G3|E] = P[G3 E]/P[E] = (1/3)/(1/2) = 2/3. (4)
(d) The conditional probability that the roll is even given that it’s greater than 3 is
P[E|G3] = P[E G3]/P[G3] = (1/3)/(1/2) = 2/3. (5)
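The conditional probabilities in this solution can be confirmed by direct enumeration; a sketch in which the event names mirror those above:

```python
from fractions import Fraction as F

# Fair die: each face s1..s6 has probability 1/6; events are sets of faces.
P = lambda A: F(len(A), 6)

G3 = {4, 5, 6}   # roll greater than 3
E = {2, 4, 6}    # roll is even
R6 = {6}         # roll is 6

assert P(R6 & G3) / P(G3) == F(1, 3)   # P[R6 | G3]
assert P(G3 & E) / P(E) == F(2, 3)     # P[G3 | E]
assert P(E & G3) / P(G3) == F(2, 3)    # P[E | G3]
```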
Problem 1.5.5 Solution
The sample outcomes can be written i jk where the first card drawn is i, the second is j and the third
is k. The sample space is
S = {234, 243, 324, 342, 423, 432} . (1)
and each of the six outcomes has probability 1/6. The events E1 , E 2 , E 3 , O1 , O2 , O3 are
E 1 = {234, 243, 423, 432} , O1 = {324, 342} , (2)
E 2 = {243, 324, 342, 423} , O2 = {234, 432} , (3)
E 3 = {234, 324, 342, 432} , O3 = {243, 423} . (4)
(a) The conditional probability the second card is even given that the first card is even is
P[E2|E1] = P[E2 E1]/P[E1] = P[243, 423]/P[234, 243, 423, 432] = (2/6)/(4/6) = 1/2. (5)
(b) The conditional probability the first card is even given that the second card is even is
P[E1|E2] = P[E1 E2]/P[E2] = P[243, 423]/P[243, 324, 342, 423] = (2/6)/(4/6) = 1/2. (6)
(c) The probability the first two cards are even given the third card is even is
P[E1 E2|E3] = P[E1 E2 E3]/P[E3] = 0. (7)
(d) The conditional probability that the second card is even given that the first card is odd is
P[E2|O1] = P[O1 E2]/P[O1] = P[O1]/P[O1] = 1. (8)
(e) The conditional probability the second card is odd given that the first card is odd is
P[O2|O1] = P[O1 O2]/P[O1] = 0. (9)
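These five conditional probabilities can be checked by enumerating all six orderings; a sketch (the helper functions E and O are ours, not from the text):

```python
from fractions import Fraction as F
from itertools import permutations

# Three cards numbered 2, 3, 4 drawn without replacement; all 6 orders equally likely.
outcomes = list(permutations([2, 3, 4]))
P = lambda event: F(sum(event(o) for o in outcomes), len(outcomes))

E = lambda k: (lambda o: o[k] % 2 == 0)   # card in position k (0-indexed) is even
O = lambda k: (lambda o: o[k] % 2 == 1)   # card in position k is odd

assert P(lambda o: E(1)(o) and E(0)(o)) / P(E(0)) == F(1, 2)           # P[E2|E1]
assert P(lambda o: E(0)(o) and E(1)(o)) / P(E(1)) == F(1, 2)           # P[E1|E2]
assert P(lambda o: E(0)(o) and E(1)(o) and E(2)(o)) / P(E(2)) == 0     # P[E1E2|E3]
assert P(lambda o: E(1)(o) and O(0)(o)) / P(O(0)) == 1                 # P[E2|O1]
assert P(lambda o: O(1)(o) and O(0)(o)) / P(O(0)) == 0                 # P[O2|O1]
```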
(b) The conditional probability that a tick has HGE given that it has Lyme disease is
P[H|L] = P[L H]/P[L] = 0.0236/0.16 = 0.1475. (5)
• P[A] = 1 implying A = B = S.
• P[A] = 0 implying A = B = φ.
In the Venn diagram, assume the sample space has area 1 corresponding to probability 1. As
drawn, both A and B have area 1/4 so that P[A] = P[B] = 1/4. Moreover, the intersection AB has
area 1/16 and covers 1/4 of A and 1/4 of B. That is, A and B are independent since
P[AB] = P[A] P[B]. (1)
(c) Since C and D are independent,
P [C ∩ D] = P [C] P [D] = 15/64. (3)
The next few items are a little trickier. From Venn diagrams, we see
P[C ∩ D^c] = P[C] − P[C ∩ D] = 5/8 − 15/64 = 25/64. (4)
It follows that
P[C ∪ D^c] = P[C] + P[D^c] − P[C ∩ D^c] (5)
= 5/8 + (1 − 3/8) − 25/64 = 55/64. (6)
Using DeMorgan’s law, we have
P[C^c ∩ D^c] = P[(C ∪ D)^c] = 1 − P[C ∪ D] = 15/64. (7)
(d) Note that we found P[C ∪ D] = 5/6. We can also use the earlier results to show
P[C ∪ D^c] = P[C] + P[D^c] − P[C ∩ D^c] = 1/2 + (1 − 2/3) − 1/6 = 2/3. (8)
Each event Ai has probability 1/2. Moreover, each pair of events is independent since
rryy  rryg  rrgy  rrgg
rwyy  rwyg  rwgy  rwgg
wryy  wryg  wrgy  wrgg      (1)
wwyy  wwyg  wwgy  wwgg
A plant has yellow seeds, that is event Y occurs, if a plant has at least one dominant y gene. Except
for the four outcomes with a pair of recessive g genes, the remaining 12 outcomes have yellow
seeds. From the above, we see that
P[Y] = 12/16 = 3/4 (2)
and
P[R] = 12/16 = 3/4. (3)
To find the conditional probabilities P[R|Y ] and P[Y |R], we first must find P[RY ]. Note that RY ,
the event that a plant has rounded yellow seeds, is the set of outcomes
RY = {rr yy, rr yg, rrgy, r wyy, r wyg, r wgy, wr yy, wr yg, wrgy} . (4)
Thus P[RY] = 9/16, and
P[Y|R] = P[RY]/P[R] = (9/16)/(3/4) = 3/4, (5)
P[R|Y] = P[RY]/P[Y] = (9/16)/(3/4) = 3/4. (6)
Thus P[R|Y ] = P[R] and P[Y |R] = P[Y ] and R and Y are independent events. There are four
visibly different pea plants, corresponding to whether the peas are round (R) or not (Rc ), or yellow
(Y ) or not (Y c ). These four visible events have probabilities
P[RY] = 9/16,     P[RY^c] = 3/16, (7)
P[R^c Y] = 3/16,  P[R^c Y^c] = 1/16. (8)
(a) For any events A and B, we can write the law of total probability in the form of
P [A] = P [AB] + P AB c . (1)
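The remaining algebra for part (a), assuming A and B are independent (so that P[AB] = P[A]P[B]), runs as follows:

```latex
\begin{aligned}
P[AB^c] &= P[A] - P[AB]    && \text{(law of total probability, Eq.~(1))} \\
        &= P[A] - P[A]P[B] && \text{(independence of $A$ and $B$)}       \\
        &= P[A]\bigl(1 - P[B]\bigr) = P[A]\,P[B^c],
\end{aligned}
```

which is exactly the statement that A and B^c are independent.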
(b) Proving that Ac and B are independent is not really necessary. Since A and B are arbitrary
labels, it is really the same claim as in part (a). That is, simply reversing the labels of A and
B proves the claim. Alternatively, one can construct exactly the same proof as in part (a) with
the labels A and B reversed.
(c) To prove that Ac and B c are independent, we apply the result of part (a) to the sets A and B c .
Since we know from part (a) that A and B c are independent, part (b) says that Ac and B c are
independent.
Problem 1.6.9 Solution
In the Venn diagram, assume the sample space has area 1 corresponding to probability 1. As
drawn, A, B, and C each have area 1/3 and thus probability 1/3. The three-way intersection ABC
has zero probability, implying A, B, and C are not mutually independent since
P[ABC] = 0 ≠ P[A] P[B] P[C]. (1)
However, AB, BC, and AC each has area 1/9. As a result, each pair of events is independent since
P [AB] = P [A] P [B] , P [BC] = P [B] P [C] , P [AC] = P [A] P [C] . (2)
[Tree diagram: H1 with probability 1/4, T1 with probability 3/4; on the second flip, H2 with
probability 1/4, T2 with probability 3/4. Leaves: P[H1 H2] = 1/16, P[H1 T2] = 3/16,
P[T1 H2] = 3/16, P[T1 T2] = 9/16.]
This implies
P[H1|H2] = P[H1 H2]/P[H2] = (1/16)/(1/4) = 1/4. (2)
(b) The probability that the first flip is heads and the second flip is tails is P[H1 T2 ] = 3/16.
[Tree diagram: G1 with probability 1/2, R1 with probability 1/2; given G1, G2 with probability
3/4 and R2 with probability 1/4; given R1, G2 with probability 1/4 and R2 with probability 3/4.
Leaves: P[G1 G2] = 3/8, P[G1 R2] = 1/8, P[R1 G2] = 1/8, P[R1 R2] = 3/8.]
From the tree, the probability the second light is green is
P[G2] = P[G1 G2] + P[R1 G2] = 3/8 + 1/8 = 1/2. (1)
The conditional probability that the first light was green given the second light was green is
P[G1|G2] = P[G1 G2]/P[G2] = P[G2|G1] P[G1]/P[G2] = (3/8)/(1/2) = 3/4. (2)
[Tree diagram: G1 with probability 1/2, B1 with probability 1/2; given G1, G2 with probability
3/4 and B2 with probability 1/4; given B1, G2 with probability 1/4 and B2 with probability 3/4.
Leaves: P[G1 G2] = 3/8, P[G1 B2] = 1/8, P[B1 G2] = 1/8, P[B1 B2] = 3/8.]
The game goes into overtime if exactly one free throw is made. This event has probability
P[G1 B2] + P[B1 G2] = 1/8 + 1/8 = 1/4.
[Tree diagram: A with probability 1/2, B with probability 1/2; given A, H with probability 1/4
and T with probability 3/4; given B, H with probability 3/4 and T with probability 1/4.
Leaves: P[AH] = 1/8, P[AT] = 3/8, P[BH] = 3/8, P[BT] = 1/8.]
Problem 1.7.5 Solution
P[−|H] is the probability that a person who has HIV tests negative for the disease. This is
referred to as a false-negative result. The case where a person who does not have HIV but tests
positive for the disease, is called a false-positive result and has probability P[+|H c ]. Since the test
is correct 99% of the time,
P [−|H ] = P +|H c = 0.01. (1)
Now the probability that a person who has tested positive for HIV actually has the disease is
P[H|+] = P[+, H]/P[+] = P[+, H]/(P[+, H] + P[+, H^c]). (2)
We can use Bayes’ formula to evaluate these joint probabilities.
P[H|+] = P[+|H] P[H] / (P[+|H] P[H] + P[+|H^c] P[H^c]) (3)
= (0.99)(0.0002) / ((0.99)(0.0002) + (0.01)(0.9998)) (4)
= 0.0194. (5)
Thus, even though the test is correct 99% of the time, the probability that a random person who tests
positive actually has HIV is less than 0.02. The reason this probability is so low is that the a priori
probability that a person has HIV is very small.
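The Bayes computation is easy to reproduce; a sketch with the stated numbers:

```python
# Bayes' rule with the numbers from the text:
# P[H] = 0.0002, and the test is correct 99% of the time.
p_h = 0.0002
p_pos_given_h = 0.99    # sensitivity
p_pos_given_hc = 0.01   # false-positive probability

# Total probability of a positive test, then Bayes' rule.
p_pos = p_pos_given_h * p_h + p_pos_given_hc * (1 - p_h)
p_h_given_pos = p_pos_given_h * p_h / p_pos
print(round(p_h_given_pos, 4))  # 0.0194
```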
[Tree diagram: A1 with probability 3/5, D1 with probability 2/5; given A1, A2 with probability
4/5 and D2 with probability 1/5; given D1, A2 with probability 2/5 and D2 with probability 3/5.
Leaves: P[A1 A2] = 12/25, P[A1 D2] = 3/25, P[D1 A2] = 4/25, P[D1 D2] = 6/25.]
(a) We wish to find the probability P[E1] that exactly one photodetector is acceptable. From the
tree, we have
P[E1] = P[A1 D2] + P[D1 A2] = 3/25 + 4/25 = 7/25. (1)
(b) The probability that both photodetectors are defective is P[D1 D2 ] = 6/25.
[Tree diagram: coin A is flipped first (A1) with probability 1/2, coin B first (B1) with probability
1/2, and the other coin is flipped second; coin A shows heads with probability 1/4, coin B with
probability 3/4. Leaves: P[A1 H1 H2] = 3/32, P[A1 H1 T2] = 1/32, P[A1 T1 H2] = 9/32,
P[A1 T1 T2] = 3/32, P[B1 H1 H2] = 3/32, P[B1 H1 T2] = 9/32, P[B1 T1 H2] = 1/32,
P[B1 T1 T2] = 3/32.]
The probability of H1 is
P[H1] = P[A1 H1 H2] + P[A1 H1 T2] + P[B1 H1 H2] + P[B1 H1 T2] = 3/32 + 1/32 + 3/32 + 9/32 = 1/2. (1)
Similarly,
P[H2] = 3/32 + 9/32 + 3/32 + 1/32 = 1/2, P[H1 H2] = P[A1 H1 H2] + P[B1 H1 H2] = 3/16. (2)
Thus P[H1 H2] ≠ P[H1] P[H2], implying H1 and H2 are not independent. This result should not
be surprising since if the first flip is heads, it is likely that coin B was picked first. In this case, the
second flip is less likely to be heads since it becomes more likely that the second coin flipped was
coin A.
(a) The primary difficulty in this problem is translating the words into the correct tree diagram.
The tree for this problem is shown below.
[Tree diagram: each flip is heads or tails with probability 1/2. Leaves: P[H1] = 1/2,
P[T1 H2 H3] = 1/8, P[T1 H2 T3 H4] = 1/16, P[T1 H2 T3 T4] = 1/16, P[T1 T2 H3 H4] = 1/16,
P[T1 T2 H3 T4] = 1/16, P[T1 T2 T3] = 1/8.]
Similarly,
P[T3] = P[T1 H2 T3 H4] + P[T1 H2 T3 T4] + P[T1 T2 T3] (3)
= 1/16 + 1/16 + 1/8 = 1/4. (4)
(a) We wish to know what the probability that we find no good photodiodes in n pairs of diodes.
Testing each pair of diodes is an independent trial such that with probability p, both diodes
of a pair are bad. From Problem 1.7.6, we can easily calculate p.
p = P [both diodes are defective] = P [D1 D2 ] = 6/25. (1)
The probability of the event Zn, that there are zero acceptable diodes out of n pairs, is p^n
because on each test of a pair of diodes, both must be defective.
P[Zn] = Π_{i=1}^{n} p = p^n = (6/25)^n. (2)
(b) Another way to phrase this question is to ask how many pairs must we test until P[Z n ] ≤ 0.01.
Since P[Z n ] = (6/25)n , we require
(6/25)^n ≤ 0.01 ⇒ n ≥ ln 0.01 / ln(6/25) = 3.23. (3)
Since n must be an integer, n = 4 pairs must be tested.
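The threshold n = 4 can be checked directly; a sketch:

```python
from math import ceil, log

# Each tested pair is bad with probability p = 6/25, so P[Z_n] = p**n.
p = 6 / 25
# Smallest integer n with p**n <= 0.01.
n = ceil(log(0.01) / log(p))

assert n == 4
assert p ** n <= 0.01 < p ** (n - 1)
print(n)  # 4
```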
Problem 1.7.10 Solution
The experiment ends as soon as a fish is caught. The tree resembles
From the tree, P[C1 ] = p and P[C2 ] = (1 − p) p. Finally, a fish is caught on the nth cast if no fish
were caught on the previous n − 1 casts. Thus, P[Cn] = (1 − p)^(n−1) p.
(a) The experiment of picking two cards and recording them in the order in which they were se-
lected can be modeled by two sub-experiments. The first is to pick the first card and record it,
the second sub-experiment is to pick the second card without replacing the first and recording
it. For the first sub-experiment we can have any one of the possible 52 cards for a total of 52
possibilities. The second experiment consists of all the cards minus the one that was picked
first (because we are sampling without replacement) for a total of 51 possible outcomes. So the
total number of outcomes is the product of the number of outcomes for each sub-experiment,
52 · 51 = 2652 outcomes. (1)
(b) To have the same card but different suit we can make the following sub-experiments. First
we need to pick one of the 52 cards. Then we need to pick one of the 3 remaining cards that
are of the same type but different suit out of the remaining 51 cards. So the total number
outcomes is
52 · 3 = 156 outcomes. (2)
(c) The probability that the two cards are of the same type but different suit is the number of
outcomes that are of the same type but different suit divided by the total number of outcomes
involved in picking two cards at random from a deck of 52 cards.
P[same type, different suit] = 156/2652 = 1/17. (3)
(d) Now we are not concerned with the ordering of the cards. So before, the outcomes (K ♥, 8♦)
and (8♦, K ♥) were distinct. Now, those two outcomes are not distinct and are only consid-
ered to be the single outcome that a King of hearts and 8 of diamonds were selected. So every
pair of outcomes before collapses to a single outcome when we disregard ordering. So we can
redo parts (a) and (b) above by halving the corresponding values found in parts (a) and (b).
The probability however, does not change because both the numerator and the denominator
have been reduced by an equal factor of 2, which does not change their ratio.
Problem 1.8.5 Solution
When the DH can be chosen among all the players, including the pitchers, there are two cases:
• The DH is a field player. In this case, the number of possible lineups, NF, is given in Prob-
lem 1.8.4, where the designated hitter must be chosen from the 15 field players. We
repeat the solution of Problem 1.8.4 here: We can break down the experiment of choosing a
starting lineup into a sequence of subexperiments:
1. Choose 1 of the 10 pitchers. There are N1 = C(10,1) = 10 ways to do this.
2. Choose 1 of the 15 field players to be the designated hitter (DH). There are N2 = C(15,1) = 15 ways to do this.
3. Of the remaining 14 field players, choose 8 for the remaining field positions. There are N3 = C(14,8) ways to do this.
4. For the 9 batters (consisting of the 8 field players and the designated hitter), choose a
batting lineup. There are N4 = 9! ways to do this.
So the total number of different starting lineups when the DH is selected among the field
players is
N = N1 N2 N3 N4 = (10)(15) C(14,8) 9! = 163,459,296,000. (1)
• The DH is a pitcher. In this case, there are 10 choices for the pitcher, 10 choices for the
DH among the pitchers (including the pitcher batting for himself), C(15,8) choices for the field
players, and 9! ways of ordering the batters into a lineup. The number of possible lineups is
N = (10)(10) C(15,8) 9! = 233,513,280,000. (2)
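Both counts can be verified with Python's math.comb; a sketch (the underscores in the literals are just digit separators):

```python
from math import comb, factorial

# DH is a field player: pitcher, DH from 15, 8 of the remaining 14, order 9 batters.
n_field_dh = 10 * 15 * comb(14, 8) * factorial(9)
# DH is a pitcher: pitcher, DH from 10 pitchers, 8 of 15 field players, order 9 batters.
n_pitcher_dh = 10 * 10 * comb(15, 8) * factorial(9)

assert n_field_dh == 163_459_296_000
assert n_pitcher_dh == 233_513_280_000
```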
(a) We can find the number of valid starting lineups by noticing that the swingman presents
three situations: (1) the swingman plays guard, (2) the swingman plays forward, and (3) the
swingman doesn’t play. The first situation is when the swingman can be chosen to play the
guard position, and the second where the swingman can only be chosen to play the forward
position. Let Ni denote the number of lineups corresponding to case i. Then we can write the
total number of lineups as N1 + N2 + N3 . In the first situation, we have to choose 1 out of 3
centers, 2 out of 4 forwards, and 1 out of 4 guards so that
N1 = C(3,1) C(4,2) C(4,1) = 72. (1)
In the second case, we need to choose 1 out of 3 centers, 1 out of 4 forwards and 2 out of 4
guards, yielding
N2 = C(3,1) C(4,1) C(4,2) = 72. (2)
Finally, with the swingman on the bench, we choose 1 out of 3 centers, 2 out of 4 forwards,
and 2 out of 4 guards. This implies
N3 = C(3,1) C(4,2) C(4,2) = 108, (3)
and the number of total lineups is N1 + N2 + N3 = 252.
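A quick check of the three cases; a sketch:

```python
from math import comb

# Swingman at guard, swingman at forward, swingman on the bench.
n1 = comb(3, 1) * comb(4, 2) * comb(4, 1)  # 72
n2 = comb(3, 1) * comb(4, 1) * comb(4, 2)  # 72
n3 = comb(3, 1) * comb(4, 2) * comb(4, 2)  # 108

assert (n1, n2, n3) == (72, 72, 108)
print(n1 + n2 + n3)  # 252
```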
n   9        11      14      17
k   0        1       2       3        (2)
p   0.0079   0.012   0.0105  0.0090
(a) Since the probability of a zero is 0.8, we can express the probability of the code word 00111
as 2 occurrences of a 0 and three occurrences of a 1. Therefore
P[00111] = (0.8)^2 (0.2)^3 = 0.00512. (1)
(b) The probability that a code word has exactly three 1’s is
P[three 1’s] = C(5,3) (0.8)^2 (0.2)^3 = 0.0512. (2)
The probability that they win 10 titles in 11 years is
P[10 titles in 11 years] = C(11,10) (.32)^10 (.68) = 0.00082. (2)
The probability of each of these events is less than 1 in 1000! Given that these events took place in
the relatively short fifty year history of the NBA, it should seem that these probabilities should be
much higher. What the model overlooks is that the sequence of 10 titles in 11 years started when Bill
Russell joined the Celtics. In the years with Russell (and a strong supporting cast) the probability
of a championship was much higher.
[Tree diagram: the team with home court advantage wins games 1 and 3 (at home) with
probability p each, and wins game 2 (away) with probability 1 − p. Leaves:
P[W1 W2] = p(1 − p), P[W1 L2 W3] = p^3, P[W1 L2 L3] = p^2 (1 − p),
P[L1 W2 W3] = p(1 − p)^2, P[L1 W2 L3] = (1 − p)^3, P[L1 L2] = p(1 − p).]
The probability that the team with the home court advantage wins is
P[H] = P[W1 W2] + P[W1 L2 W3] + P[L1 W2 W3] (1)
= p(1 − p) + p^3 + p(1 − p)^2. (2)
Note that P[H ] ≤ p for 1/2 ≤ p ≤ 1. Since the team with the home court advantage would win
a 1 game playoff with probability p, the home court team is less likely to win a three game series
than a 1 game playoff!
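The claim P[H] ≤ p on [1/2, 1] is easy to check numerically; a sketch:

```python
# P[H] = p(1-p) + p**3 + p(1-p)**2, from the tree above.
def p_home(p):
    return p * (1 - p) + p ** 3 + p * (1 - p) ** 2

# On a grid covering [1/2, 1], the three-game series is never better than one game.
grid = [0.5 + 0.01 * k for k in range(51)]
assert all(p_home(p) <= p + 1e-12 for p in grid)
assert abs(p_home(0.5) - 0.5) < 1e-12  # equality at p = 1/2
```

Algebraically, P[H] = p(2 − 3p + 2p^2), and P[H] ≤ p reduces to (2p − 1)(p − 1) ≤ 0, which holds exactly for 1/2 ≤ p ≤ 1.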
Problem 1.9.5 Solution
(a) There are 3 group 1 kickers and 6 group 2 kickers. Using G i to denote that a group i kicker
was chosen, we have
P [G 1 ] = 1/3 P [G 2 ] = 2/3. (1)
In addition, the problem statement tells us that
P [K |G 1 ] = 1/2 P [K |G 2 ] = 1/3. (2)
Combining these facts using the Law of Total Probability yields
P [K ] = P [K |G 1 ] P [G 1 ] + P [K |G 2 ] P [G 2 ] (3)
= (1/2)(1/3) + (1/3)(2/3) = 7/18. (4)
(b) To solve this part, we need to identify the groups from which the first and second kicker were
chosen. Let ci indicate whether a kicker was chosen from group i and let Ci j indicate that the
first kicker was chosen from group i and the second kicker from group j. The experiment to
choose the kickers is described by the sample tree:
[Tree diagram: the first kicker is from group 1 (c1) with probability 3/9 and from group 2 (c2)
with probability 6/9; the second kicker is chosen from the remaining 8. Leaves:
P[C11] = 1/12, P[C12] = 1/4, P[C21] = 1/4, P[C22] = 5/12.]
Since a kicker from group 1 makes a kick with probability 1/2 while a kicker from group 2
makes a kick with probability 1/3,
P[K1 K2|C11] = (1/2)^2,     P[K1 K2|C12] = (1/2)(1/3), (5)
P[K1 K2|C21] = (1/3)(1/2),  P[K1 K2|C22] = (1/3)^2. (6)
By the law of total probability,
P [K 1 K 2 ] = P [K 1 K 2 |C11 ] P [C11 ] + P [K 1 K 2 |C12 ] P [C12 ] (7)
+ P [K 1 K 2 |C21 ] P [C21 ] + P [K 1 K 2 |C22 ] P [C22 ] (8)
= (1/4)(1/12) + (1/6)(1/4) + (1/6)(1/4) + (1/9)(5/12) = 65/432. (9)
It should be apparent that P[K 1 ] = P[K ] from part (a). Symmetry should also make it clear
that P[K 1 ] = P[K 2 ] since for any ordering of two kickers, the reverse ordering is equally
likely. If this is not clear, we derive this result by calculating P[K 2 |Ci j ] and using the law of
total probability to calculate P[K 2 ].
P [K 2 |C11 ] = 1/2, P [K 2 |C12 ] = 1/3, (10)
P [K 2 |C21 ] = 1/2, P [K 2 |C22 ] = 1/3. (11)
By the law of total probability,
P[K2] = (1/2)(1/12) + (1/3)(1/4) + (1/2)(1/4) + (1/3)(5/12) = 7/18. (12)
(c) Once a kicker is chosen, each of the 10 field goals is an independent trial. If the kicker is
from group 1, then the success probability is 1/2. If the kicker is from group 2, the success
probability is 1/3. Out of 10 kicks, there are 5 misses iff there are 5 successful kicks. Given
the type of kicker chosen, the probability of 5 misses is
P[M|G1] = C(10,5) (1/2)^5 (1/2)^5,    P[M|G2] = C(10,5) (1/3)^5 (2/3)^5. (15)
We use the Law of Total Probability to find
P[M] = P[M|G1] P[G1] + P[M|G2] P[G2] = (1/3) C(10,5) (1/2)^10 + (2/3) C(10,5) (1/3)^5 (2/3)^5.
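Evaluating the total-probability sums for the kickers with exact fractions; a sketch in which the dictionary keys (i, j) stand for the events Cij:

```python
from fractions import Fraction as F

# Choose 2 of 9 kickers (3 from group 1, 6 from group 2), in order.
P_C = {(1, 1): F(3, 9) * F(2, 8), (1, 2): F(3, 9) * F(6, 8),
       (2, 1): F(6, 9) * F(3, 8), (2, 2): F(6, 9) * F(5, 8)}
p_kick = {1: F(1, 2), 2: F(1, 3)}  # P[make a kick | group]

p_k1 = sum(P_C[c] * p_kick[c[0]] for c in P_C)
p_k2 = sum(P_C[c] * p_kick[c[1]] for c in P_C)
p_k1k2 = sum(P_C[c] * p_kick[c[0]] * p_kick[c[1]] for c in P_C)

assert p_k1 == p_k2 == F(7, 18)   # matches part (a)
print(p_k1k2)                      # 65/432
```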
[Diagram: components 1, 2, and 3 in series, that combination in parallel with component 4,
followed in series by the parallel pair of components 5 and 6.]
To find the probability that the device works, we replace series devices 1, 2, and 3, and parallel
devices 5 and 6 each with a single device labeled with the probability that it works. In particular,
[Reduced diagram: a device that works with probability (1 − q)^3 in parallel with a device that
works with probability 1 − q, in series with a device that works with probability 1 − q^2.]
The probability P[W ] that the two devices in parallel work is 1 minus the probability that neither
works:
P[W] = 1 − q[1 − (1 − q)^3]. (3)
Finally, for the device to work, both composite device in series must work. Thus, the probability
the device works is
P[W] = [1 − q(1 − (1 − q)^3)][1 − q^2]. (4)
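The final expression can be evaluated directly; a sketch (the check value 0.8663 is the one quoted in the simulation discussion later in this chapter):

```python
# Reliability of the six-component device as a function of the failure probability q.
def reliability(q):
    top = 1 - q * (1 - (1 - q) ** 3)  # series 1-2-3 in parallel with component 4
    return top * (1 - q ** 2)         # in series with the parallel pair 5, 6

assert round(reliability(0.2), 4) == 0.8663
print(reliability(0.2))
```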
Problem 1.10.4 Solution
From the statement of Problem 1.10.1, the device components are configured in the following way:
[Diagram: components 1, 2, and 3 in series, that combination in parallel with component 4,
followed in series by the parallel pair of components 5 and 6.]
By symmetry, note that the reliability of the system is the same whether we replace component 1,
component 2, or component 3. Similarly, the reliability is the same whether we replace component
5 or component 6. Thus we consider the following cases:
I. Replace component 1. In this case,
P[W1 W2 W3] = (1 − q/2)(1 − q)^2,  P[W4] = 1 − q,  P[W5 ∪ W6] = 1 − q^2. (1)
This implies
q2
P [W1 W2 W3 ∪ W4 ] = 1 − (1 − P [W1 W2 W3 ])(1 − P [W4 ]) = 1 − (5 − 4q + q 2 ). (2)
2
In this case, the probability the system works is
P[W_I] = P[W1 W2 W3 ∪ W4] P[W5 ∪ W6] = [1 − (q^2/2)(5 − 4q + q^2)](1 − q^2). (3)
II. Replace component 4. In this case,
P[W1 W2 W3] = (1 − q)^3,  P[W4] = 1 − q/2,  P[W5 ∪ W6] = 1 − q^2. (4)
This implies
P[W1 W2 W3 ∪ W4] = 1 − (1 − P[W1 W2 W3])(1 − P[W4]) = 1 − q/2 + (q/2)(1 − q)^3. (5)
In this case, the probability the system works is
P[W_II] = P[W1 W2 W3 ∪ W4] P[W5 ∪ W6] = [1 − q/2 + (q/2)(1 − q)^3](1 − q^2). (6)
III. Replace component 5. In this case,
P[W1 W2 W3] = (1 − q)^3,  P[W4] = 1 − q,  P[W5 ∪ W6] = 1 − q^2/2. (7)
This implies
P[W1 W2 W3 ∪ W4] = 1 − (1 − P[W1 W2 W3])(1 − P[W4]) = (1 − q)[1 + q(1 − q)^2]. (8)
In this case, the probability the system works is
P[W_III] = P[W1 W2 W3 ∪ W4] P[W5 ∪ W6] (9)
= (1 − q)[1 + q(1 − q)^2](1 − q^2/2). (10)
From these expressions, it's hard to tell which substitution creates the most reliable circuit. First, we
observe that P[W_II] > P[W_I] if and only if
1 − q/2 + (q/2)(1 − q)^3 > 1 − (q^2/2)(5 − 4q + q^2). (11)
Some algebra will show that P[W_II] > P[W_I] if and only if q^2 < 2, which occurs for all nontrivial
(i.e., nonzero) values of q. Similar algebra will show that P[W_II] > P[W_III] for all values of
0 ≤ q ≤ 1. Thus the best policy is to replace component 4.
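Comparing the three expressions numerically over a grid of q values supports this conclusion; a sketch in which w1, w4, and w5 are our names for P[W_I], P[W_II], and P[W_III]:

```python
# The three replacement policies, from equations (3), (6), and (10).
def w1(q):  # replace component 1
    return (1 - (q**2 / 2) * (5 - 4*q + q**2)) * (1 - q**2)

def w4(q):  # replace component 4
    return (1 - q/2 + (q/2) * (1 - q)**3) * (1 - q**2)

def w5(q):  # replace component 5
    return (1 - q) * (1 + q * (1 - q)**2) * (1 - q**2 / 2)

# Replacing component 4 is best everywhere on a fine grid of q in (0, 1).
qs = [0.01 * k for k in range(1, 100)]
assert all(w4(q) + 1e-9 >= max(w1(q), w5(q)) for q in qs)
```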
Keep in mind that 50*rand(200,1) produces a 200 × 1 vector of random numbers, each in the
interval (0, 50). Applying the ceiling function converts these random numbers to random integers in
the set {1, 2, . . . , 50}. Finally, we add 50 to produce random numbers between 51 and 100.
function [C,H]=twocoin(n);
C=ceil(2*rand(n,1));
P=1-(C/4);
H=(rand(n,1)< P);
The first line produces the n × 1 vector C such that C(i) indicates whether coin 1 or coin 2 is
chosen for trial i. Next, we generate the vector P such that P(i)=0.75 if C(i)=1; otherwise,
if C(i)=2, then P(i)=0.5. As a result, H(i) is the simulated result of a coin flip with heads,
corresponding to H(i)=1, occurring with probability P(i).
function C=bit100(n);
% n is the number of 100 bit packets sent
B=floor(2*rand(n,100));
P=0.03-0.02*B;
E=(rand(n,100)< P);
C=sum((sum(E,2)<=5));
First, B is an n × 100 matrix such that B(i,j) indicates whether bit j of packet i is zero or one.
Next, we generate the n × 100 matrix P such that P(i,j)=0.03 if B(i,j)=0; otherwise, if
B(i,j)=1, then P(i,j)=0.01. As a result, E(i,j) is the simulated error indicator for bit j of
packet i. That is, E(i,j)=1 if bit j of packet i is in error; otherwise E(i,j)=0. Next we sum
across the rows of E to obtain the number of errors in each packet. Finally, we count the number of
packets with no more than 5 errors.
For n = 100 packets, the estimate of the packet success probability is inconclusive. Experimentation will show
that C=97, C=98, C=99 and C=100 correct packets are typical values that might be observed. By
increasing n, more consistent results are obtained. For example, repeated trials with n = 100, 000
packets typically produces around C = 98, 400 correct packets. Thus 0.984 is a reasonable estimate
for the probability of a packet being transmitted correctly.
function N=reliable6(n,q);
% n is the number of 6 component devices
%N is the number of working devices
W=rand(n,6)>q;
D=(W(:,1)&W(:,2)&W(:,3))|W(:,4);
D=D&(W(:,5)|W(:,6));
N=sum(D);
The n × 6 matrix W is a logical matrix such that W(i,j)=1 if component j of device i works
properly. Because W is a logical matrix, we can use the MATLAB logical operators | and & to
implement the logic requirements for a working device. By applying these logical operators to the
n × 1 columns of W, we simulate the test of n circuits. Note that D(i)=1 if device i works.
Otherwise, D(i)=0. Lastly, we count the number N of working devices. The following code
snippet produces ten sample runs, where each sample run tests n=100 devices for q = 0.2.
>> for n=1:10, w(n)=reliable6(100,0.2); end
>> w
w =
82 87 87 92 91 85 85 83 90 89
>>
As we see, the number of working devices is typically around 85 out of 100. Solving Problem 1.10.1 will show that the probability the device works is actually 0.8663.
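The exact value 0.8663 can be confirmed by brute-force enumeration of the 2^6 component outcomes. The following Python sketch is mine (not part of the text's solution) and hard-codes the same working-device logic used in reliable6:

```python
from itertools import product

def device_works(w):
    # w[j] is True when component j+1 works; the device works when
    # (1 and 2 and 3) or 4 works, and (5 or 6) works.
    return ((w[0] and w[1] and w[2]) or w[3]) and (w[4] or w[5])

def reliability(q):
    # Sum the probabilities of all 2^6 component outcomes with a working device.
    total = 0.0
    for w in product([True, False], repeat=6):
        prob = 1.0
        for works in w:
            prob *= (1 - q) if works else q
        if device_works(w):
            total += prob
    return total

print(round(reliability(0.2), 4))  # 0.8663
```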
function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))'
for countequal is quite short (just two lines excluding comments) but needs some explanation.
The key is in the operation
[MX,MY]=ndgrid(x,y).
The M ATLAB built-in function ndgrid facilitates plotting a function g(x, y) as a surface over the
x, y plane. The x, y plane is represented by a grid of all pairs of points x(i), y( j). When x has
n elements, and y has m elements, ndgrid(x,y) creates a grid (an n × m array) of all possible
pairs [x(i) y(j)]. This grid is represented by two separate n × m matrices: MX and MY which
indicate the x and y values at each grid point. Mathematically, MX(i,j) = x(i) and MY(i,j) = y(j).
Next, C=(MX==MY) is an n×m array such that C(i,j)=1 if x(i)=y(j); otherwise C(i,j)=0.
That is, the jth column of C indicates which elements of x equal y(j). Lastly, we sum
along column j to count the number of occurrences (in x) of y(j).
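A Python sketch of the same operation (my own translation, not from the text) may help clarify what the ndgrid construction computes:

```python
def countequal(x, y):
    # n[j] = number of elements of x equal to y[j]; this is exactly what
    # summing the columns of (MX == MY) accomplishes in the MATLAB version.
    return [sum(1 for xi in x if xi == yj) for yj in y]

print(countequal([1, 2, 2, 3, 3, 3], [2, 3, 5]))  # [2, 3, 0]
```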
function N=ultrareliable6(n,q);
% n is the number of 6 component devices
%N is the number of working devices
for r=1:6,
W=rand(n,6)>q;
R=rand(n,1)>(q/2);
W(:,r)=R;
D=(W(:,1)&W(:,2)&W(:,3))|W(:,4);
D=D&(W(:,5)|W(:,6));
N(r)=sum(D);
end
The above code is based on the code for the solution of Problem 1.11.4. The n × 6 matrix W
is a logical matrix such that W(i,j)=1 if component j of device i works properly. Because
W is a logical matrix, we can use the M ATLAB logical operators | and & to implement the logic
requirements for a working device. By applying these logical operators to the n × 1 columns of W,
we simulate the test of n circuits. Note that D(i)=1 if device i works; otherwise, D(i)=0. Note
that in the code, we first generate the matrix W such that each component has failure probability
q. To simulate the replacement of the rth component by the ultrareliable version, we replace the rth
column of W by the column vector R, in which each component has failure probability q/2. Lastly, for each
column replacement, we count the number N of working devices. A sample run for n = 100 trials
and q = 0.2 yielded these results:
>> ultrareliable6(100,0.2)
ans =
93 89 91 92 90 93
From the above, we see, for example, that replacing the third component with an ultrareliable com-
ponent resulted in 91 working devices. The results are fairly inconclusive in that replacing devices
1, 2, or 3 should yield the same probability of device failure. If we experiment with n = 10, 000
runs, the results are more definitive:
>> ultrareliable6(10000,0.2)
ans =
8738 8762 8806 9135 8800 8796
>> ultrareliable6(10000,0.2)
ans =
8771 8795 8806 9178 8886 8875
>>
In both cases, it is clear that replacing component 4 maximizes the device reliability. The somewhat
complicated solution of Problem 1.10.4 will confirm this observation.
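The simulation agrees with an exact enumeration over the 2^6 component outcomes. This Python sketch (my own cross-check, not part of the text) gives reliabilities of roughly 0.8786 for upgrading component 1, 2 or 3, 0.9132 for component 4, and 0.8844 for component 5 or 6, matching the counts above:

```python
from itertools import product

def device_works(w):
    # Same logic as the MATLAB code: ((1 and 2 and 3) or 4) and (5 or 6).
    return ((w[0] and w[1] and w[2]) or w[3]) and (w[4] or w[5])

def reliability(fail_probs):
    # Exact probability the device works, given per-component failure
    # probabilities fail_probs[0..5].
    total = 0.0
    for w in product([True, False], repeat=6):
        prob = 1.0
        for works, qj in zip(w, fail_probs):
            prob *= (1 - qj) if works else qj
        if device_works(w):
            total += prob
    return total

q = 0.2
for r in range(6):
    fp = [q] * 6
    fp[r] = q / 2  # component r+1 is replaced by the ultrareliable version
    print(r + 1, round(reliability(fp), 4))
```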
Problem Solutions – Chapter 2
(a) We wish to find the value of c that makes the PMF sum to one.

PN(n) = { c(1/2)^n, n = 0, 1, 2;  0, otherwise }   (1)

Therefore, sum_{n=0}^{2} PN(n) = c + c/2 + c/4 = 1, implying c = 4/7.
From the PMFs PX (x) and PR (r ), we can calculate the requested probabilities
sum_{v=1}^{4} PV(v) = c(1^2 + 2^2 + 3^2 + 4^2) = 30c = 1   (1)

Hence c = 1/30.

P[V ∈ U] = PV(1) + PV(4) = 1^2/30 + 4^2/30 = 17/30   (2)
(c) The probability that V is even is

P[V is even] = PV(2) + PV(4) = 2^2/30 + 4^2/30 = 2/3   (3)

(d) The probability that V exceeds 2 is

P[V > 2] = PV(3) + PV(4) = 3^2/30 + 4^2/30 = 5/6   (4)
Thus c = 8/7.

(b)

P[X = 4] = PX(4) = 8/(7·4) = 2/7   (2)

(c)

P[X < 4] = PX(2) = 8/(7·2) = 4/7   (3)

(d)

P[3 ≤ X ≤ 9] = PX(4) + PX(8) = 8/(7·4) + 8/(7·8) = 3/7   (4)
(Tree diagram: two stages; each stage branches to B with probability 1 − p or G with probability p, and the leaves are marked Y = 0, Y = 1 and Y = 2, counting the number of G outcomes.)
Problem 2.2.6 Solution
The probability that a caller fails to get through in three tries is (1 − p)^3. To be sure that at least
95% of all callers get through, we need (1 − p)^3 ≤ 0.05. This implies p ≥ 1 − (0.05)^{1/3} ≈ 0.6316.
(a) In the setup of a mobile call, the phone will send the “SETUP” message up to six times.
Each time the setup message is sent, we have a Bernoulli trial with success probability p. Of
course, the phone stops trying as soon as there is a success. Using r to denote a successful
response, and n a non-response, the sample tree is
(Tree diagram: at each attempt the response r occurs with probability p, ending the call setup with K equal to the attempt number, K = 1, 2, . . . , 6; a non-response n occurs with probability 1 − p and leads to the next attempt, up to six attempts.)
(b) We can write the PMF of K, the number of "SETUP" messages sent, as

PK(k) = { (1 − p)^{k−1} p, k = 1, 2, . . . , 5;  (1 − p)^5 p + (1 − p)^6 = (1 − p)^5, k = 6;  0, otherwise }   (1)

Note that the expression for PK(6) is different because K = 6 if either there was a success
or a failure on the sixth attempt. In fact, K = 6 whenever there were failures on the first five
attempts, which is why PK(6) simplifies to (1 − p)^5.
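A quick numerical sanity check that this PMF sums to one (a Python sketch of mine, with an arbitrary p):

```python
def setup_pmf(p, k):
    # PMF of K, the number of SETUP messages sent (at most six attempts).
    if 1 <= k <= 5:
        return (1 - p) ** (k - 1) * p
    if k == 6:
        return (1 - p) ** 5  # success or failure on the sixth attempt
    return 0.0

p = 0.3
print(sum(setup_pmf(p, k) for k in range(1, 7)))  # 1.0 up to rounding
```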
(c) Let B denote the event that a busy signal is given after six failed setup attempts. The probability of six consecutive failures is P[B] = (1 − p)^6.
(a) If it is indeed true that Y, the number of yellow M&M's in a package, is uniformly distributed
between 5 and 15, then the PMF of Y is

PY(y) = { 1/11, y = 5, 6, 7, . . . , 15;  0, otherwise }   (1)
(b)
P [Y < 10] = PY (5) + PY (6) + · · · + PY (9) = 5/11 (2)
(c)
P [Y > 12] = PY (13) + PY (14) + PY (15) = 3/11 (3)
(d)
P [8 ≤ Y ≤ 12] = PY (8) + PY (9) + · · · + PY (12) = 5/11 (4)
(a) Each paging attempt is an independent Bernoulli trial with success probability p. The number
of times K that the pager receives a message is the number of successes in n Bernoulli trials
and has the binomial PMF
PK(k) = { C(n,k) p^k (1 − p)^{n−k}, k = 0, 1, . . . , n;  0, otherwise }   (1)
(b) Let R denote the event that the paging message was received at least once. The event R has
probability

P[R] = P[K > 0] = 1 − P[K = 0] = 1 − (1 − p)^n   (2)

To ensure that P[R] ≥ 0.95 requires that n ≥ ln(0.05)/ln(1 − p). For p = 0.8, we must
have n ≥ 1.86. Thus, n = 2 pages would be necessary.
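The calculation n ≥ ln(0.05)/ln(1 − p) can be packaged as a one-line helper (a Python sketch, names mine):

```python
import math

def pages_needed(p, target=0.95):
    # Smallest n with 1 - (1 - p)^n >= target.
    return math.ceil(math.log(1 - target) / math.log(1 - p))

print(pages_needed(0.8))  # 2
```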
Problem 2.3.4 Solution
(a) Let X be the number of times the frisbee is thrown until the dog catches it and runs away.
Each throw of the frisbee can be viewed as a Bernoulli trial in which a success occurs if the
dog catches the frisbee and runs away. Thus, the experiment ends on the first success and X
has the geometric PMF
PX(x) = { (1 − p)^{x−1} p, x = 1, 2, . . . ;  0, otherwise }   (1)
(b) The child will throw the frisbee more than four times iff there are failures on the first 4 trials,
which has probability (1 − p)^4. If p = 0.2, the probability of more than four throws is
(0.8)^4 = 0.4096.
(b) The probability that no more than three paging attempts are required is

P[N ≤ 3] = 1 − P[N > 3] = 1 − sum_{n=4}^{∞} PN(n) = 1 − (1 − p)^3   (2)
This answer can be obtained without calculation since N > 3 if the first three paging attempts
fail and that event occurs with probability (1 − p)^3. Hence, we must choose p to satisfy
1 − (1 − p)^3 ≥ 0.95 or (1 − p)^3 ≤ 0.05. This implies

p ≥ 1 − (0.05)^{1/3} ≈ 0.6316   (3)
Problem 2.3.7 Solution
Since an average of T /5 buses arrive in an interval of T minutes, buses arrive at the bus stop at a
rate of 1/5 buses per minute.
(a) From the definition of the Poisson PMF, the PMF of B, the number of buses in T minutes, is
PB(b) = { (T/5)^b e^{−T/5}/b!, b = 0, 1, . . . ;  0, otherwise }   (1)
(b) Choosing T = 2 minutes, the probability that three buses arrive in a two minute interval is

PB(3) = (2/5)^3 e^{−2/5}/3! ≈ 0.0072   (2)
(c) By choosing T = 10 minutes, the probability of zero buses arriving in a ten minute interval
is
PB(0) = e^{−10/5}/0! = e^{−2} ≈ 0.135   (3)
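Both Poisson evaluations in parts (b) and (c) are easy to verify numerically (a Python sketch of mine):

```python
import math

def poisson_pmf(alpha, b):
    # P[B = b] for a Poisson random variable with parameter alpha.
    return alpha ** b * math.exp(-alpha) / math.factorial(b)

print(round(poisson_pmf(2 / 5, 3), 4))  # part (b): T = 2, about 0.0072
print(round(poisson_pmf(2, 0), 3))      # part (c): T = 10, about 0.135
```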
(a) If each message is transmitted 8 times and the probability of a successful transmission is p,
then the PMF of N , the number of successful transmissions has the binomial PMF
PN(n) = { C(8,n) p^n (1 − p)^{8−n}, n = 0, 1, . . . , 8;  0, otherwise }   (1)
(b) The indicator random variable I equals zero if and only if N = 0. Hence,

P[I = 0] = P[N = 0] = 1 − P[I = 1]   (2)
Problem 2.3.9 Solution
The requirement that sum_{x=1}^{n} PX(x) = 1 implies

n = 1:  c(1)[1] = 1, so c(1) = 1   (1)
n = 2:  c(2)[1 + 1/2] = 1, so c(2) = 2/3   (2)
n = 3:  c(3)[1 + 1/2 + 1/3] = 1, so c(3) = 6/11   (3)
n = 4:  c(4)[1 + 1/2 + 1/3 + 1/4] = 1, so c(4) = 12/25   (4)
n = 5:  c(5)[1 + 1/2 + 1/3 + 1/4 + 1/5] = 1, so c(5) = 60/137   (5)
n = 6:  c(6)[1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6] = 1, so c(6) = 20/49   (6)

As an aside, finding c(n) for large values of n is easy using the recursion

1/c(n + 1) = 1/c(n) + 1/(n + 1).   (7)
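The recursion is convenient to implement with exact rational arithmetic (a Python sketch, mine, using the standard fractions module):

```python
from fractions import Fraction

def c(n):
    # 1/c(n) = 1 + 1/2 + ... + 1/n, built up via 1/c(k) = 1/c(k-1) + 1/k.
    inv = Fraction(1)  # 1/c(1)
    for k in range(2, n + 1):
        inv += Fraction(1, k)
    return 1 / inv

print([c(n) for n in range(1, 7)])
# for example, c(4) = 12/25 and c(6) = 20/49
```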
(a) We can view whether each caller knows the birthdate as a Bernoulli trial. As a result, L is
the number of trials needed for 6 successes. That is, L has a Pascal PMF with parameters
p = 0.75 and k = 6 as defined by Definition 2.8. In particular,
PL(l) = { C(l−1, 5) (0.75)^6 (0.25)^{l−6}, l = 6, 7, . . . ;  0, otherwise }   (1)
(c) The probability that the station will need nine or more calls to find a winner is

P[L ≥ 9] = 1 − P[L < 9]   (3)
         = 1 − PL(6) − PL(7) − PL(8)   (4)
         = 1 − (0.75)^6 [1 + 6(0.25) + 21(0.25)^2] ≈ 0.321   (5)
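The value ≈ 0.321 can be checked numerically (Python sketch, mine, using math.comb):

```python
from math import comb

def pascal_pmf(l, k, p):
    # Probability that the k-th success occurs on trial l.
    return comb(l - 1, k - 1) * p ** k * (1 - p) ** (l - k)

p_nine_or_more = 1 - sum(pascal_pmf(l, 6, 0.75) for l in (6, 7, 8))
print(round(p_nine_or_more, 3))  # 0.321
```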
Problem 2.3.11 Solution
The packets are delay sensitive and can only be retransmitted d times. For t < d, a packet is
transmitted t times if the first t − 1 attempts fail followed by a successful transmission on attempt
t. Further, the packet is transmitted d times if there are failures on the first d − 1 transmissions, no
matter what the outcome of attempt d. So the random variable T , the number of times that a packet
is transmitted, can be represented by the following PMF.
PT(t) = { p(1 − p)^{t−1}, t = 1, 2, . . . , d − 1;  (1 − p)^{d−1}, t = d;  0, otherwise }   (1)
(a) Since each day is independent of any other day, P[W33 ] is just the probability that a winning
lottery ticket was bought. Similarly for P[L 87 ] and P[N99 ] become just the probability that a
losing ticket was bought and that no ticket was bought on a single day, respectively. Therefore
(b) Suppose we say a success occurs on the kth trial if on day k we buy a ticket. Otherwise, a
failure occurs. The probability of success is simply 1/2. The random variable K is just the
number of trials until the first success and has the geometric PMF
PK(k) = { (1/2)(1/2)^{k−1} = (1/2)^k, k = 1, 2, . . . ;  0, otherwise }   (2)
(c) The probability that you decide to buy a ticket and it is a losing ticket is (1− p)/2, independent
of any other day. If we view buying a losing ticket as a Bernoulli success, R, the number of
losing lottery tickets bought in m days, has the binomial PMF
PR(r) = { C(m,r) [(1 − p)/2]^r [(1 + p)/2]^{m−r}, r = 0, 1, . . . , m;  0, otherwise }   (3)
(d) Letting D be the day on which the j-th losing ticket is bought, we can find the probability
that D = d by noting that j − 1 losing tickets must have been purchased in the d − 1 previous
days. Therefore D has the Pascal PMF
PD(d) = { C(d−1, j−1) [(1 − p)/2]^j [(1 + p)/2]^{d−j}, d = j, j + 1, . . . ;  0, otherwise }   (4)
(a) Let Sn denote the event that the Sixers win the series in n games. Similarly, Cn is the event
that the Celtics win in n games. The Sixers win the series in 3 games if they win three straight,
which occurs with probability
P[S3] = (1/2)^3 = 1/8   (1)
The Sixers win the series in 4 games if they win two out of the first three games and they win
the fourth game so that
P[S4] = C(3,2) (1/2)^3 (1/2) = 3/16   (2)
The Sixers win the series in five games if they win two out of the first four games and then
win game five. Hence,
P[S5] = C(4,2) (1/2)^4 (1/2) = 3/16   (3)
By symmetry, P[Cn] = P[Sn]. Further we observe that the series lasts n games if either the
Sixers or the Celtics win the series in n games. Thus,
P [N = n] = P [Sn ] + P [Cn ] = 2P [Sn ] (4)
Consequently, the total number of games, N , played in a best of 5 series between the Celtics
and the Sixers can be described by the PMF
PN(n) = { 2(1/2)^3 = 1/4, n = 3;  2 C(3,2) (1/2)^4 = 3/8, n = 4;  2 C(4,2) (1/2)^5 = 3/8, n = 5;  0, otherwise }   (5)
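This PMF can be confirmed by brute-force enumeration of game outcomes (a Python sketch of mine, not part of the text):

```python
from itertools import product
from fractions import Fraction

# Enumerate all 2^5 patterns of game winners; the series stops at the third
# win by either team, and a truncated series of length n has probability (1/2)^n.
length_prob = {3: Fraction(0), 4: Fraction(0), 5: Fraction(0)}
for games in product('SC', repeat=5):
    s = c = 0
    for n, g in enumerate(games, start=1):
        s += (g == 'S')
        c += (g == 'C')
        if s == 3 or c == 3:
            break
    length_prob[n] += Fraction(1, 2 ** 5)

print(length_prob)  # n=3: 1/4, n=4: 3/8, n=5: 3/8
```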
(b) For the total number of Celtic wins W , we note that if the Celtics get w < 3 wins, then the
Sixers won the series in 3 + w games. Also, the Celtics win 3 games if they win the series in
3,4, or 5 games. Mathematically,
P[W = w] = { P[S_{3+w}], w = 0, 1, 2;  P[C3] + P[C4] + P[C5], w = 3 }   (6)
Thus, the number of wins by the Celtics, W , has the PMF shown below.
PW(w) = { P[S3] = 1/8, w = 0;  P[S4] = 3/16, w = 1;  P[S5] = 3/16, w = 2;  1/8 + 3/16 + 3/16 = 1/2, w = 3;  0, otherwise }   (7)
(c) The number of Celtic losses L equals the number of Sixers’ wins WS . This implies PL (l) =
PWS (l). Since either team is equally likely to win any game, by symmetry, PWS (w) = PW (w).
This implies PL(l) = PWS(l) = PW(l). The complete expression for the PMF of L is
PL(l) = PW(l) = { 1/8, l = 0;  3/16, l = 1;  3/16, l = 2;  1/2, l = 3;  0, otherwise }   (8)
Problem 2.3.14 Solution
Since a and b are positive, let K be a binomial random variable for n trials and success probability
p = a/(a + b). First, we observe that the sum of the PMF of K over all possible values is

sum_{k=0}^{n} PK(k) = sum_{k=0}^{n} C(n,k) p^k (1 − p)^{n−k}   (1)
                    = sum_{k=0}^{n} C(n,k) (a/(a+b))^k (b/(a+b))^{n−k}   (2)
                    = [sum_{k=0}^{n} C(n,k) a^k b^{n−k}] / (a + b)^n   (3)

Since sum_{k=0}^{n} PK(k) = 1, we see that

(a + b)^n = (a + b)^n sum_{k=0}^{n} PK(k) = sum_{k=0}^{n} C(n,k) a^k b^{n−k}   (4)
(g) From the staircase CDF of Problem 2.4.1, we see that Y is a discrete random variable. The
jumps in the CDF occur at the values that Y can take on. The height of each jump equals
the probability of that value. The PMF of Y is

PY(y) = { 1/4, y = 1;  1/4, y = 2;  1/2, y = 3;  0, otherwise }   (1)
(a) The given CDF, plotted as a staircase over −2 ≤ x ≤ 2, is

FX(x) = { 0, x < −1;  0.2, −1 ≤ x < 0;  0.7, 0 ≤ x < 1;  1, x ≥ 1 }   (1)
(a) Similar to the previous problem, the CDF is a staircase:

FX(x) = { 0, x < −3;  0.4, −3 ≤ x < 5;  0.8, 5 ≤ x < 7;  1, x ≥ 7 }   (1)
Since K is integer valued, FK(k) = FK(⌊k⌋) for all integer and non-integer values of k. (If this
point is not clear, you should review Example 2.24.) Thus, the complete expression for the CDF of
K is

FK(k) = { 0, k < 1;  1 − (1 − p)^{⌊k⌋}, k ≥ 1 }   (3)
Problem 2.4.5 Solution
Since mushrooms occur with probability 2/3, the number of pizzas sold before the first mushroom
pizza is N = n < 100 if the first n pizzas do not have mushrooms, followed by mushrooms on pizza
n + 1. Also, it is possible that N = 100 if all 100 pizzas are sold without mushrooms. The resulting
PMF is

PN(n) = { (1/3)^n (2/3), n = 0, 1, . . . , 99;  (1/3)^100, n = 100;  0, otherwise }   (1)
For integers n < 100, the CDF of N obeys
FN(n) = sum_{i=0}^{n} PN(i) = sum_{i=0}^{n} (1/3)^i (2/3) = 1 − (1/3)^{n+1}   (2)
A complete expression for FN (n) must give a valid answer for every value of n, including non-
integer values. We can write the CDF using the floor function ⌊x⌋, which denotes the largest integer
less than or equal to x. The complete expression for the CDF is

FN(x) = { 0, x < 0;  1 − (1/3)^{⌊x⌋+1}, 0 ≤ x < 100;  1, x ≥ 100 }   (3)
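A direct implementation of this CDF (Python sketch, function name mine) makes the role of the floor function concrete:

```python
import math

def F_N(x):
    # CDF of N, the number of pizzas sold before the first mushroom pizza.
    if x < 0:
        return 0.0
    if x >= 100:
        return 1.0
    return 1 - (1 / 3) ** (math.floor(x) + 1)

print(F_N(-1), F_N(0), F_N(1.5), F_N(200))
```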
The corresponding CDF of Y is
FY(y) = { 0, y < 0;  1 − p, 0 ≤ y < 1;  1 − p^2, 1 ≤ y < 2;  1, y ≥ 2 }   (2)
(Plots of FY(y) for p = 1/4, p = 1/2 and p = 3/4 show this staircase rising from 0 to 1 with jumps at y = 0, 1, 2.)   (3)
FN(n) = { . . . ;  7/8, 3 ≤ n < 4;  15/16, 4 ≤ n < 5;  31/32, 5 ≤ n < 6;  1, n ≥ 6 }   (3)

(The accompanying plot shows this staircase CDF over 0 ≤ n ≤ 7.)
(a) The mode must satisfy PX (xmod ) ≥ PX (x) for all x. In the case of the uniform PMF, any
integer x between 1 and 100 is a mode of the random variable X . Hence, the set of all modes
is
X mod = {1, 2, . . . , 100} (1)
(b) The median must satisfy P[X < xmed ] = P[X > xmed ]. Since
P [X ≤ 50] = P [X ≥ 51] = 1/2 (2)
we observe that xmed = 50.5 is a median since it satisfies
P [X < xmed ] = P [X > xmed ] = 1/2 (3)
In fact, for any x satisfying 50 < x < 51, P[X < x ] = P[X > x ] = 1/2. Thus,
X med = {x|50 < x < 51} (4)
(b) The expected cost, E[C], is simply the sum of the cost of each type of call multiplied by the
probability of such a call occurring.
E [C] = 20(0.6) + 30(0.4) = 24 cents (2)
Problem 2.5.5 Solution
From the solution to Problem 2.4.3, the PMF of X is
PX(x) = { 0.4, x = −3;  0.4, x = 5;  0.2, x = 7;  0, otherwise }   (1)
E[X] = sum_{x=0}^{4} x PX(x) = 0·C(4,0)(1/2)^4 + 1·C(4,1)(1/2)^4 + 2·C(4,2)(1/2)^4 + 3·C(4,3)(1/2)^4 + 4·C(4,4)(1/2)^4   (2)
     = [4 + 12 + 12 + 4]/2^4 = 2   (3)
E[X] = sum_{x=0}^{5} x PX(x)   (2)
     = 0·C(5,0)(1/2)^5 + 1·C(5,1)(1/2)^5 + 2·C(5,2)(1/2)^5 + 3·C(5,3)(1/2)^5 + 4·C(5,4)(1/2)^5 + 5·C(5,5)(1/2)^5   (3)
     = [5 + 20 + 30 + 20 + 5]/2^5 = 2.5   (4)
(a) Let X = 1 if a data packet is decoded correctly; otherwise X = 0. Random variable X is a
Bernoulli random variable with PMF
PX(x) = { 0.001, x = 0;  0.999, x = 1;  0, otherwise }   (1)

The parameter ε = 0.001 is the probability a packet is corrupted. The expected value of X is

E[X] = 1 − ε = 0.999   (2)
(b) Let Y denote the number of packets received in error out of 100 packets transmitted. Y has
the binomial PMF
PY(y) = { C(100,y) (0.001)^y (0.999)^{100−y}, y = 0, 1, . . . , 100;  0, otherwise }   (3)
(c) Let L equal the number of packets that must be received to decode 5 packets in error. L has
the Pascal PMF
PL(l) = { C(l−1, 4) (0.001)^5 (0.999)^{l−5}, l = 5, 6, . . . ;  0, otherwise }   (5)
The expected value of L is
E[L] = 5/p = 5/0.001 = 5000   (6)
(d) If packet arrivals obey a Poisson model with an average arrival rate of 1000 packets per
second, then the number N of packets that arrive in 5 seconds has the Poisson PMF
PN(n) = { 5000^n e^{−5000}/n!, n = 0, 1, . . . ;  0, otherwise }   (7)
The expected value of N is E[N ] = 5000.
So, on the average, we can expect to break even, which is not a very exciting proposition.
Problem 2.5.10 Solution
By the definition of the expected value,
E[Xn] = sum_{x=1}^{n} x C(n,x) p^x (1 − p)^{n−x}   (1)
      = np sum_{x=1}^{n} [(n − 1)!/((x − 1)!(n − 1 − (x − 1))!)] p^{x−1} (1 − p)^{n−1−(x−1)}   (2)

The above sum is 1 because it is the sum of a binomial PMF for n − 1 trials over all
possible values. Hence E[Xn] = np.
At this point, the key step is to reverse the order of summation. You may need to make a sketch of
the feasible values for i and j to see how this reversal occurs. In this case,
sum_{i=0}^{∞} P[X > i] = sum_{i=0}^{∞} sum_{j=i+1}^{∞} PX(j) = sum_{j=1}^{∞} sum_{i=0}^{j−1} PX(j) = sum_{j=1}^{∞} j PX(j) = E[X]   (2)
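The identity is easy to spot-check on a small PMF (a Python sketch, mine):

```python
def expectation_via_tail(pmf):
    # For a nonnegative integer-valued X, E[X] = sum over i >= 0 of P[X > i].
    max_x = max(pmf)
    return sum(sum(p for x, p in pmf.items() if x > i) for i in range(max_x))

pmf = {1: 0.25, 2: 0.25, 3: 0.5}
direct = sum(x * p for x, p in pmf.items())
print(direct, expectation_via_tail(pmf))  # both 2.25
```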
(a) Since Y has range SY = {1, 2, 3}, the range of U = Y 2 is SU = {1, 4, 9}. The PMF of U can
be found by observing that
P[U = u] = P[Y^2 = u] = P[Y = √u] + P[Y = −√u]   (2)

Since Y is never negative, PU(u) = PY(√u). Hence,
PU (1) = PY (1) = 1/4 PU (4) = PY (2) = 1/4 PU (9) = PY (3) = 1/2 (3)
For all other values of u, PU (u) = 0. The complete expression for the PMF of U is
PU(u) = { 1/4, u = 1;  1/4, u = 4;  1/2, u = 9;  0, otherwise }   (4)
In particular,
(b) From the PMF, we can construct the staircase CDF of V .
FV(v) = { 0, v < 0;  0.5, 0 ≤ v < 1;  1, v ≥ 1 }   (5)
This implies
PW (−7) = PX (7) = 0.2 PW (−5) = PX (5) = 0.4 PW (3) = PX (−3) = 0.4 (3)
Problem 2.6.4 Solution
A tree for the experiment is
(Tree diagram: D = 99.75, D = 100 and D = 100.25 each occur with probability 1/3, leading to C = 10074.75, C = 10100 and C = 10125.13 respectively.)
(a) The source continues to transmit packets until one is received correctly. Hence, the total
number of times that a packet is transmitted is X = x if the first x − 1 transmissions were in
error. Therefore the PMF of X is
PX(x) = { q^{x−1}(1 − q), x = 1, 2, . . . ;  0, otherwise }   (1)
(b) The time required to send a packet is a millisecond and the time required to send an acknowl-
edgment back to the source takes another millisecond. Thus, if X transmissions of a packet
are needed to send the packet correctly, then the packet is correctly received after T = 2X − 1
milliseconds. Therefore, for an odd integer t > 0, T = t iff X = (t + 1)/2. Thus,
PT(t) = PX((t + 1)/2) = { q^{(t−1)/2}(1 − q), t = 1, 3, 5, . . . ;  0, otherwise }   (2)
PC(20) = P[M ≤ 30] = sum_{m=1}^{30} (1 − p)^{m−1} p = 1 − (1 − p)^{30}   (2)
When M ≥ 30, C = 20 + (M − 30)/2 or M = 2C − 10. Thus,
Since each winning ticket grosses $1000, the revenue we collect over 50 years is R = 1000T
dollars. The expected revenue is
But buying a lottery ticket everyday for 50 years, at $2.00 a pop isn’t cheap and will cost us a total
of 18250 · 2 = $36500. Our net profit is then Q = R − 36500 and the result of our loyal 50 year
patronage of the lottery system is a disappointing expected loss of
Problem 2.7.3 Solution
Let X denote the number of points the shooter scores. If the shot is uncontested, the expected
number of points scored is
E[X] = 2(0.6) = 1.2   (1)
If we foul the shooter, then X is a binomial random variable with mean E[X ] = 2 p. If 2 p > 1.2,
then we should not foul the shooter. Generally, p will exceed 0.6 since a free throw is usually
even easier than an uncontested shot taken during the action of the game. Furthermore, fouling
the shooter ultimately leads to the detriment of players possibly fouling out. This suggests
that fouling a player is not a good idea. The only real exception occurs when facing a player like
Shaquille O’Neal whose free throw probability p is lower than his field goal percentage during a
game.
Since sum_{m=1}^{∞} PM(m) = 1 and since PM(m) = (1 − p)^{m−1} p for m ≥ 1, we have

E[C] = 20 + ((1 − p)^{30}/2) sum_{m=31}^{∞} (m − 30)(1 − q)^{m−31} p = 20 + (1 − p)^{30}/(2p)   (4)

where the last step uses the fact that the sum is the mean, 1/p, of a geometric (p) random variable.
For this cellular billing plan, we are given no free minutes, but are charged half the flat fee. That
is, we are going to pay 15 dollars regardless and $1 for each minute we use the phone. Hence
C = 15 + M and for c ≥ 16, P[C = c] = P[M = c − 15]. Thus we can construct the PMF of the
cost C
PC(c) = { (1 − p)^{c−16} p, c = 16, 17, . . . ;  0, otherwise }   (2)
Since C = 15 + M, the expected cost per month of the plan is
In Problem 2.7.5, we found that the expected cost of the plan was
In comparing the expected costs of the two plans, we see that the new plan is better (i.e. cheaper) if
A simple plot will show that the new plan is better if p ≤ p0 ≈ 0.2.
Let’s first consider the case when only standard devices are used. In this case, a circuit works with
probability P[W] = (1 − q)^10. The profit made on a working device is k − 10 dollars while a
nonworking circuit has a profit of −10 dollars. That is, E[R|W] = k − 10 and E[R|W^c] = −10. Of
course, a negative profit is actually a loss. Using Rs to denote the profit using standard circuits, the
expected profit is
And for the ultra-reliable case, the circuit works with probability P[W] = (1 − q/2)^10. The profit per
working circuit is E[R|W] = k − 30 dollars while the profit for a nonworking circuit is E[R|W^c] =
−30 dollars. The expected profit is
To determine which implementation generates the most profit, we solve E[Ru ] ≥ E[Rs ], yielding
k ≥ 20/[(0.95)^10 − (0.9)^10] = 80.21. So for k < $80.21 using all standard devices results in greater
revenue, while for k > $80.21 more revenue will be generated by implementing all ultra-reliable
devices. That is, when the price commanded for a working circuit is sufficiently high, we should
spend the extra money to ensure that more working circuits can be produced.
q = 1/C(46,6) = 1/9,366,819 ≈ 1.07 × 10^{−7}   (1)
(b) Assuming each ticket is chosen randomly, each of the 2n − 1 other tickets is independently a
winner with probability q. The number of other winning tickets K n has the binomial PMF
PKn(k) = { C(2n−1, k) q^k (1 − q)^{2n−1−k}, k = 0, 1, . . . , 2n − 1;  0, otherwise }   (2)
(c) Since there are K n + 1 winning tickets in all, the value of your winning ticket is Wn =
n/(K n + 1) which has mean
E[Wn] = n E[1/(Kn + 1)]   (3)
Calculating the expected value

E[1/(Kn + 1)] = sum_{k=0}^{2n−1} (1/(k + 1)) PKn(k)   (4)

is fairly complicated. The trick is to express the sum in terms of the sum of a binomial PMF.

E[1/(Kn + 1)] = sum_{k=0}^{2n−1} (1/(k + 1)) [(2n − 1)!/(k!(2n − 1 − k)!)] q^k (1 − q)^{2n−1−k}   (5)
             = (1/2n) sum_{k=0}^{2n−1} [(2n)!/((k + 1)!(2n − (k + 1))!)] q^k (1 − q)^{2n−(k+1)}   (6)
By factoring out 1/q, we obtain

E[1/(Kn + 1)] = (1/(2nq)) sum_{k=0}^{2n−1} C(2n, k+1) q^{k+1} (1 − q)^{2n−(k+1)}   (7)
             = (1/(2nq)) sum_{j=1}^{2n} C(2n, j) q^j (1 − q)^{2n−j}   (8)

We observe that the sum in (8), labeled A, is the sum of a binomial PMF for 2n trials and
success probability q over all possible values except j = 0. Thus

A = 1 − C(2n, 0) q^0 (1 − q)^{2n−0} = 1 − (1 − q)^{2n}   (9)
This implies

E[1/(Kn + 1)] = A/(2nq) = (1 − (1 − q)^{2n})/(2nq)   (10)

Our expected return on a winning ticket is

E[Wn] = n E[1/(Kn + 1)] = (1 − (1 − q)^{2n})/(2q)   (11)

Note that when nq ≪ 1, we can use the approximation (1 − q)^{2n} ≈ 1 − 2nq to show that

E[Wn] ≈ (1 − (1 − 2nq))/(2q) = n   (nq ≪ 1)   (12)

However, in the limit as the value of the prize n approaches infinity, we have

lim_{n→∞} E[Wn] = 1/(2q) ≈ 4.683 × 10^6   (13)

That is, as the pot grows to infinity, the expected return on a winning ticket doesn’t approach
infinity because there is a corresponding increase in the number of other winning tickets. If
it’s not clear how large n must be for this effect to be seen, consider the following table:

n       10^6        10^7        10^8
E[Wn]   9.00×10^5   4.13×10^6   4.68×10^6   (14)

When the pot is $1 million, our expected return is $900,000. However, we see that when the
pot reaches $100 million, our expected return is very close to 1/(2q), less than $5 million!
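The table values follow directly from the closed form for E[Wn]. A numerically careful Python sketch of mine (expm1 and log1p avoid round-off for the tiny q):

```python
import math

def expected_winning_ticket(n, q):
    # E[W_n] = (1 - (1 - q)^(2n)) / (2q), computed stably for tiny q.
    return -math.expm1(2 * n * math.log1p(-q)) / (2 * q)

q = 1 / 9366819
for n in (10 ** 6, 10 ** 7, 10 ** 8):
    print(n, expected_winning_ticket(n, q))
```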
(b) Assuming each ticket is chosen randomly, each of the 2n − 1 other tickets is independently a
winner with probability q. The number of other winning tickets K n has the binomial PMF
PKn(k) = { C(2n−1, k) q^k (1 − q)^{2n−1−k}, k = 0, 1, . . . , 2n − 1;  0, otherwise }   (2)
Since the pot has n + r dollars, the expected amount that you win on your ticket is
E[V] = 0·(1 − q) + q E[(n + r)/(Kn + 1)] = q(n + r) E[1/(Kn + 1)]   (3)
Note that E[1/K n + 1] was also evaluated in Problem 2.7.8. For completeness, we repeat
those steps here.
E[1/(Kn + 1)] = sum_{k=0}^{2n−1} (1/(k + 1)) [(2n − 1)!/(k!(2n − 1 − k)!)] q^k (1 − q)^{2n−1−k}   (4)
             = (1/2n) sum_{k=0}^{2n−1} [(2n)!/((k + 1)!(2n − (k + 1))!)] q^k (1 − q)^{2n−(k+1)}   (5)
We observe that the resulting sum, labeled A, is the sum of a binomial PMF for 2n trials and
success probability q over all possible values except j = 0. Thus A = 1 − C(2n, 0) q^0 (1 − q)^{2n},
which implies

E[1/(Kn + 1)] = A/(2nq) = (1 − (1 − q)^{2n})/(2nq)   (8)
The expected value of your ticket is
(c) So that we can use the results of the previous part, suppose there were 2n − 1 tickets sold be-
fore you must make your decision. If you buy one of each possible ticket, you are guaranteed
to have one winning ticket. From the other 2n − 1 tickets, there will be K n winners. The total
number of winning tickets will be K n + 1. In the previous part we found that
E[1/(Kn + 1)] = (1 − (1 − q)^{2n})/(2nq)   (11)
Let R denote the expected return from buying one of each possible ticket. The pot had
r dollars beforehand. The 2n − 1 other tickets that are sold add n − 1/2 dollars to the pot.
Furthermore, you must buy 1/q tickets, adding 1/(2q) dollars to the pot. Since the cost of
the tickets is 1/q dollars, your expected profit is
E[R] = E[(r + n − 1/2 + 1/(2q))/(Kn + 1)] − 1/q   (12)
     = [(q(2r + 2n − 1) + 1)/(2q)] E[1/(Kn + 1)] − 1/q   (13)
     = [q(2r + 2n − 1) + 1](1 − (1 − q)^{2n})/(4nq^2) − 1/q   (14)
For fixed n, sufficiently large r will make E[R] > 0. On the other hand, for fixed r ,
limn→∞ E[R] = −1/(2q). That is, as n approaches infinity, your expected loss will be
quite large.
The expected value of Y is

E[Y] = sum_y y PY(y) = 1(1/4) + 2(1/4) + 3(1/2) = 9/4   (2)

The variance of Y is

Var[Y] = E[Y^2] − (E[Y])^2 = 23/4 − (9/4)^2 = 11/16   (4)
The variance of X is
Var[X] = E[X^2] − (E[X])^2 = 0.5 − (0.1)^2 = 0.49   (4)
The variance of X is
Var[X] = E[X^2] − (E[X])^2 = 23.4 − (2.2)^2 = 18.56   (4)
Problem 2.8.5 Solution
E[X^2] = sum_{x=0}^{5} x^2 PX(x)   (4)
       = 0^2 C(5,0)(1/2)^5 + 1^2 C(5,1)(1/2)^5 + 2^2 C(5,2)(1/2)^5 + 3^2 C(5,3)(1/2)^5 + 4^2 C(5,4)(1/2)^5 + 5^2 C(5,5)(1/2)^5   (5)
       = [5 + 40 + 90 + 80 + 25]/2^5 = 240/32 = 15/2   (6)
The variance of X is

Var[X] = E[X^2] − (E[X])^2 = 15/2 − 25/4 = 5/4   (7)

By taking the square root of the variance, the standard deviation of X is σX = √(5/4) ≈ 1.12.
(b) The probability that X is within one standard deviation of its mean is
Problem 2.8.9 Solution
With our measure of jitter being σT , and the fact that T = 2X − 1, we can express the jitter as a
function of q by realizing that
Var[T] = 4 Var[X] = 4q/(1 − q)^2   (1)

Therefore, our maximum permitted jitter is

σT = 2√q/(1 − q) = 2 msec   (2)
with respect to x̂. We can expand the square and take the expectation while treating x̂ as a constant.
This yields
e(x̂) = E[X^2 − 2x̂X + x̂^2] = E[X^2] − 2x̂E[X] + x̂^2   (2)
Solving for the value of x̂ that makes the derivative de(x̂)/d x̂ equal to zero results in the value of x̂
that minimizes e(x̂). Note that when we take the derivative with respect to x̂, both E[X 2 ] and E[X ]
are simply constants.
(d/dx̂)(E[X^2] − 2x̂E[X] + x̂^2) = −2E[X] + 2x̂ = 0   (3)
Hence we see that x̂ = E[X ]. In the sense of mean squared error, the best guess for a random
variable is the mean value. In Chapter 9 this idea is extended to develop minimum mean squared
error estimation.
The mean of K is
E[K] = sum_{k=0}^{∞} k (λ^k e^{−λ}/k!) = λ sum_{k=1}^{∞} λ^{k−1} e^{−λ}/(k − 1)! = λ   (2)
To find E[K 2 ], we use the hint and first find
E[K(K − 1)] = sum_{k=0}^{∞} k(k − 1) (λ^k e^{−λ}/k!) = λ^2 sum_{k=2}^{∞} λ^{k−2} e^{−λ}/(k − 2)!   (3)
The above sum equals 1 because it is the sum of a Poisson PMF over all possible values. Since
E[K ] = λ, the variance of K is
Var[K] = E[K^2] − (E[K])^2   (5)
       = E[K(K − 1)] + E[K] − (E[K])^2   (6)
       = λ^2 + λ − λ^2 = λ   (7)
where
E[D^2] = sum_{d=1}^{4} d^2 PD(d) = 0.2 + 1.6 + 2.7 + 1.6 = 6.1   (2)

So finally we have

σD = √(6.1 − 2.3^2) = √0.81 = 0.9   (3)
The probability of the event B = {Y < 3} is P[B] = 1 − P[Y = 3] = 1/2. From Theorem 2.17,
the conditional PMF of Y given B is
PY|B(y) = { PY(y)/P[B], y ∈ B;  0, otherwise } = { 1/2, y = 1;  1/2, y = 2;  0, otherwise }   (2)
The conditional first and second moments of Y are
E[Y|B] = sum_y y PY|B(y) = 1(1/2) + 2(1/2) = 3/2   (3)

E[Y^2|B] = sum_y y^2 PY|B(y) = 1^2(1/2) + 2^2(1/2) = 5/2   (4)
The conditional first and second moments of X are
E[X|B] = sum_x x PX|B(x) = 5(2/3) + 7(1/3) = 17/3   (3)

E[X^2|B] = sum_x x^2 PX|B(x) = 5^2(2/3) + 7^2(1/3) = 33   (4)
E[X|B] = sum_{x=1}^{4} x PX|B(x) = 1·C(4,1)(1/15) + 2·C(4,2)(1/15) + 3·C(4,3)(1/15) + 4·C(4,4)(1/15)   (2)
       = [4 + 12 + 12 + 4]/15 = 32/15   (3)

E[X^2|B] = sum_{x=1}^{4} x^2 PX|B(x) = 1^2 C(4,1)(1/15) + 2^2 C(4,2)(1/15) + 3^2 C(4,3)(1/15) + 4^2 C(4,4)(1/15)   (4)
         = [4 + 24 + 36 + 16]/15 = 80/15   (5)
The conditional first and second moments of X are

E[X|B] = sum_{x=3}^{5} x PX|B(x) = 3·C(5,3)(1/21) + 4·C(5,4)(1/21) + 5·C(5,5)(1/21)   (4)
       = [30 + 20 + 5]/21 = 55/21   (5)

E[X^2|B] = sum_{x=3}^{5} x^2 PX|B(x) = 3^2 C(5,3)(1/21) + 4^2 C(5,4)(1/21) + 5^2 C(5,5)(1/21)   (6)
         = [90 + 80 + 25]/21 = 195/21 = 65/7   (7)

The conditional variance of X is

Var[X|B] = E[X^2|B] − (E[X|B])^2 = 65/7 − (55/21)^2 = 1070/441 ≈ 2.43   (8)
(a) Consider each circuit test as a Bernoulli trial such that a failed circuit is called a success. The
number of trials until the first success (i.e. a failed circuit) has the geometric PMF
PN(n) = { (1 − p)^{n−1} p, n = 1, 2, . . . ;  0, otherwise }   (1)
Note that (1 − p)19 is just the probability that the first 19 circuits pass the test, which is
what we would expect since there must be at least 20 tests if the first 19 circuits pass. The
conditional PMF of N given B is
PN|B(n) = { PN(n)/P[B], n ∈ B;  0, otherwise } = { (1 − p)^{n−20} p, n = 20, 21, . . . ;  0, otherwise }   (3)
We see that in the above sum, we effectively have the expected value of J + 19 where J is
a geometric random variable with parameter p. This is not surprising since N ≥ 20 iff we
observed 19 successful tests. After 19 successful tests, the number of additional tests needed
to find the first failure is still a geometric random variable with mean 1/ p.
Problem 2.9.7 Solution
P [M > 0] = 1 − P [M = 0] = 1 − q (2)
(b) The probability that we run a marathon on any particular day is the probability that M ≥ 26.
r = P[M ≥ 26] = sum_{m=26}^{∞} q(1 − q)^m = (1 − q)^{26}   (3)
(c) We run a marathon on each day with probability equal to r , and we do not run a marathon
with probability 1 − r . Therefore in a year we have 365 tests of our jogging resolve, and thus
365 chances to run a marathon. So the PMF of the number of marathons run in a year, J , can
be expressed as
PJ(j) = { C(365, j) r^j (1 − r)^{365−j}, j = 0, 1, . . . , 365;  0, otherwise }   (4)
(d) The random variable K is defined as the number of miles we run above that required for a
marathon, K = M − 26. Given the event, A, that we have run a marathon, we wish to know
how many miles in excess of 26 we in fact ran. So we want to know the conditional PMF
PK |A (k).
PK|A(k) = P[K = k, A]/P[A] = P[M = 26 + k]/P[A]   (5)
Since P[A] = r , for k = 0, 1, . . .,
PK|A(k) = (1 − q)^{26+k} q / (1 − q)^{26} = (1 − q)^k q   (6)
The complete expression of for the conditional PMF of K is
(1 − q)k q k = 0, 1, . . .
PK |A (k) = (7)
0 otherwise
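A quick numerical sanity check (a Python sketch; the helper name pmf_M is ours, not from the text) confirms that P[M = 26 + k]/P[M ≥ 26] collapses to the geometric form (1 − q)^k q:

```python
# Verify P_{K|A}(k) = (1-q)^k * q, where P_M(m) = q(1-q)^m and A = {M >= 26}.
q = 0.05

def pmf_M(m):
    return q * (1 - q) ** m

r = sum(pmf_M(m) for m in range(26, 10000))   # approximates P[A] = (1-q)^26
for k in range(5):
    lhs = pmf_M(26 + k) / r
    rhs = (1 - q) ** k * q
    assert abs(lhs - rhs) < 1e-9
print("conditional PMF matches (1-q)^k q")
```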
(a) The event that a fax was sent to machine A can be expressed mathematically as the event that
the number of pages X is an even number. Similarly, the event that a fax was sent to B is the
event that X is an odd number. Since S X = {1, 2, . . . , 8}, we define the set A = {2, 4, 6, 8}.
Using this definition for A, we have that the event that a fax is sent to A is equivalent to the
event X ∈ A. The event A has probability
(b) Let the event B denote the event that the fax was sent to B and that the fax had no more than
6 pages. Hence, the event B = {1, 3, 5} has probability
P[B] = P_X(1) + P_X(3) + P_X(5) = 0.4 (8)
Given the event B, the conditional first and second moments are
E[X|B] = Σ_x x P_{X|B}(x) = 1(3/8) + 3(3/8) + 5(1/4) = 11/4 (10)
E[X^2|B] = Σ_x x^2 P_{X|B}(x) = 1(3/8) + 9(3/8) + 25(1/4) = 10 (11)
Problem 2.10.1 Solution
For a binomial (n, p) random variable X, the solution in terms of math is
P[E_2] = Σ_{x=0}^{⌊√n⌋} P_X(x^2) (1)
In terms of MATLAB, the efficient solution is to generate the vector of perfect squares x = [0 1 4 9 16 ...]
and then to pass that vector to binomialpmf.m. In this case, the values of the binomial PMF
are calculated only once. Here is the code:
function q=perfectbinomial(n,p);
i=0:floor(sqrt(n));
x=i.^2;
q=sum(binomialpmf(n,p,x));
For a binomial (100, 0.5) random variable X , the probability X is a perfect square is
>> perfectbinomial(100,0.5)
ans =
0.0811
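The same computation can be reproduced in a few lines of Python (a sketch for readers without MATLAB; `math.comb` plays the role of binomialpmf):

```python
# Probability that a binomial (n, p) rv lands on a perfect square,
# summing the PMF only over the squares 0, 1, 4, ..., floor(sqrt(n))^2.
import math

def perfect_binomial(n, p):
    squares = (i * i for i in range(math.isqrt(n) + 1))
    return sum(math.comb(n, x) * p**x * (1 - p)**(n - x) for x in squares)

print(perfect_binomial(100, 0.5))  # approximately 0.0811
```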
function x=faxlength8(m);
sx=1:8;
p=[0.15*ones(1,4) 0.1*ones(1,4)];
x=finiterv(sx,p,m);
function y=avgfax(m);
x=faxlength8(m);
yy=cumsum([10 9 8 7 6]);
yy=[yy 50 50 50];
y=sum(yy(x))/m;
For m = 100, the results are arguably more consistent:
>> [avgfax(100) avgfax(100) avgfax(100) avgfax(100)]
ans =
34.5300 33.3000 29.8100 33.6900
>>
Finally, for m = 1000, we obtain results reasonably close to E[Y ]:
>> [avgfax(1000) avgfax(1000) avgfax(1000) avgfax(1000)]
ans =
32.1740 31.8920 33.1890 32.8250
>>
In Chapter 7, we will develop techniques to show how Y converges to E[Y ] as m → ∞.
What makes the Zipf distribution hard to analyze is that there is no closed form expression for
c(n) = ( Σ_{x=1}^n 1/x )^(−1). (3)
Thus, we use M ATLAB to grind through the calculations. The following simple program generates
the Zipf distributions and returns the correct value of k.
function k=zipfcache(n,p);
%Usage: k=zipfcache(n,p);
%for the Zipf (n,alpha=1) distribution, returns the smallest k
%such that the first k items have total probability p
pmf=1./(1:n);
pmf=pmf/sum(pmf); %normalize to sum to 1
cdf=cumsum(pmf);
k=1+sum(cdf<=p);
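For readers without MATLAB, here is a line-for-line Python transcription of zipfcache (a sketch under the same Zipf(n, α = 1) assumption):

```python
# Smallest k such that the k most popular items of a Zipf (n, alpha=1)
# distribution have total probability at least p.
def zipf_cache(n, p):
    pmf = [1.0 / x for x in range(1, n + 1)]
    total = sum(pmf)
    cdf, running = [], 0.0
    for v in pmf:
        running += v / total
        cdf.append(running)
    # k = 1 + number of CDF values that are still at or below p
    return 1 + sum(1 for c in cdf if c <= p)

print(zipf_cache(4, 0.75))
```

For n = 4 the normalized PMF is (12/25, 6/25, 4/25, 3/25), so the first two items cover only 0.72 of the probability and three items are needed.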
The program zipfcache generalizes 0.75 to be the probability p. Although this program is
sufficient, the problem asks us to find k for all values of n from 1 to 1000. One way to do this is to
call zipfcache a thousand times to find k for each value of n. A better way is to use the properties
of the Zipf PMF. In particular,
P[X_n ≤ k] = c(n) Σ_{x=1}^k 1/x = c(n)/c(k) (4)
Thus we wish to find
k* = min{ k | c(n)/c(k) ≥ p } = min{ k | 1/c(k) ≥ p/c(n) }. (5)
Note that the definition of k* implies that
1/c(k) < p/c(n),  k = 1, . . . , k* − 1. (6)
Using the notation |A| to denote the number of elements in the set A, we can write
k* = 1 + |{ k | 1/c(k) < p/c(n) }| (7)
This is the basis for a very short M ATLAB program:
function k=zipfcacheall(n,p);
%Usage: k=zipfcacheall(n,p);
%returns vector k such that the first
%k(m) items have total probability >= p
%for the Zipf(m,1) distribution.
c=1./cumsum(1./(1:n));
k=1+countless(1./c,p./c);
Note that zipfcacheall uses a short M ATLAB program countless.m that is almost the same
as count.m introduced in Example 2.47. If n=countless(x,y), then n(i) is the number of
elements of x that are strictly less than y(i) while count returns the number of elements less
than or equal to y(i).
In any case, the commands
k=zipfcacheall(1000,0.75);
plot(1:1000,k);
is sufficient to produce this figure of k as a function of n:
[Figure: k versus n for n = 0 to 1000; k grows from 0 to about 200.]
We see in the figure that the number of files that must be cached grows slowly with the total number
of files n.
Finally, we make one last observation. It is generally desirable for M ATLAB to execute opera-
tions in parallel. The program zipfcacheall generally will run faster than n calls to zipfcache.
However, to do its counting all at once, countless generates an n × n array. When n is not too
large, say n ≤ 1000, the resulting array with n^2 = 1,000,000 elements fits in memory. For much
larger values of n, say n = 10^6 (as was proposed in the original printing of this edition of the text),
countless will cause an “out of memory” error.
Problem 2.10.5 Solution
We use poissonrv.m to generate random samples of a Poisson (α = 5) random variable. To
compare the Poisson PMF against the output of poissonrv, relative frequencies are calculated
using the hist function. The following code plots the relative frequency against the PMF.
function diff=poissontest(alpha,m)
x=poissonrv(alpha,m);
xr=0:ceil(3*alpha);
pxsample=hist(x,xr)/m;
pxsample=pxsample(:);
%pxsample=(countequal(x,xr)/m);
px=poissonpmf(alpha,xr);
plot(xr,pxsample,xr,px);
diff=sum((pxsample-px).^2);
For m = 100, 1000, 10000, here are sample plots comparing the PMF and the relative frequency.
The plots show reasonable agreement for m = 10000 samples.
[Figure: six plots comparing the sample relative frequency with the Poisson PMF for m = 100, m = 1000, and m = 10,000.]
For (n, p) = (10, 1), the binomial PMF has no randomness. For (n, p) = (100, 0.1), the approxi-
mation is reasonable:
[Figure: (a) n = 10, p = 1; (b) n = 100, p = 0.1 — binomial PMF versus its Poisson approximation.]
Finally, for (n, p) = (1000, 0.01), and (n, p) = (10000, 0.001), the approximation is very good:
[Figure: (a) n = 1000, p = 0.01; (b) n = 10000, p = 0.001 — binomial PMF versus its Poisson approximation.]
Subtracting 1 from each side and then multiplying through by −1 (which reverses the inequalities),
we obtain
(1 − p)^(k*−1) > 1 − R ≥ (1 − p)^(k*). (3)
Next we take the logarithm of each side. Since logarithms are monotonic functions, we have
(k* − 1) ln(1 − p) > ln(1 − R) ≥ k* ln(1 − p). (4)
Since 0 < p < 1, we have that ln(1 − p) < 0. Thus dividing through by ln(1 − p) reverses the
inequalities, yielding
k* − 1 < ln(1 − R)/ln(1 − p) ≤ k*. (5)
Since k* is an integer, it must be the smallest integer greater than or equal to ln(1 − R)/ln(1 − p).
That is, following the last step of the random sample algorithm,
K = k* = ⌈ ln(1 − R)/ln(1 − p) ⌉ (6)
function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
% returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));
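The same inverse-CDF trick ports directly to Python (a sketch; `random.random` supplies the uniform samples that rand provides in MATLAB):

```python
# Inverse-CDF sampling of a geometric (p) rv: K = ceil(ln(1-R)/ln(1-p)).
import math, random

def geometric_rv(p, rng=random.random):
    r = rng()
    return math.ceil(math.log(1 - r) / math.log(1 - p))

# Deterministic spot checks of the mapping R -> K for p = 0.5:
# R in [0, 0.5) gives K = 1, R in [0.5, 0.75) gives K = 2, and so on.
print(geometric_rv(0.5, rng=lambda: 0.3), geometric_rv(0.5, rng=lambda: 0.6))
```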
function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
if (alpha==0)
pmf=1.0*(x==0);
else
k=(1:ceil(max(x)))’;
logfacts =cumsum(log(k));
pb=exp([-alpha; ...
-alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
end
%pmf(i)=0 for zero-prob x(i)
By summing logarithms, the intermediate terms are much less likely to overflow.
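The log-factorial trick is easy to replicate in Python (a sketch; it assumes a scalar x rather than the vector interface of poissonpmf.m):

```python
# Poisson PMF computed via logarithms to avoid overflow in alpha^x / x!.
import math

def poisson_pmf(alpha, x):
    if x < 0 or x != int(x):
        return 0.0
    if alpha == 0:
        return float(x == 0)
    log_fact = sum(math.log(k) for k in range(1, int(x) + 1))  # log(x!)
    return math.exp(-alpha + x * math.log(alpha) - log_fact)

print(poisson_pmf(5, 5))  # exp(-5) * 5^5 / 5! ≈ 0.1755
```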
Problem Solutions – Chapter 3
(a)
P [X > 1/2] = 1 − P [X ≤ 1/2] = 1 − FX (1/2) = 1 − 3/4 = 1/4 (2)
(b) This is a little trickier than it should be. Being careful, we can write
Since the CDF of X is a continuous function, the probability that X takes on any specific
value is zero. This implies P[X = 3/4] = 0 and P[X = −1/2] = 0. (If this is not clear at
this point, it will become clear in Section 3.6.) Thus,
P [−1/2 ≤ X < 3/4] = P [−1/2 < X ≤ 3/4] = FX (3/4) − FX (−1/2) = 5/8 (4)
(c)
P[|X| ≤ 1/2] = P[−1/2 ≤ X ≤ 1/2] = P[X ≤ 1/2] − P[X < −1/2] (5)
Note that P[X ≤ 1/2] = F_X(1/2) = 3/4. Since P[X = −1/2] = 0,
P[X < −1/2] = P[X ≤ −1/2] = F_X(−1/2) = 1/4. This implies
P[|X| ≤ 1/2] = 3/4 − 1/4 = 1/2 (6)
(a) For V to be a continuous random variable, FV (v) must be a continuous function. This oc-
curs if we choose c such that FV (v) doesn’t have a discontinuity at v = 7. We meet this
requirement if c(7 + 5)2 = 1. This implies c = 1/144.
(b)
P [V > 4] = 1 − P [V ≤ 4] = 1 − FV (4) = 1 − 81/144 = 63/144 (2)
(c)
P [−3 < V ≤ 0] = FV (0) − FV (−3) = 25/144 − 4/144 = 21/144 (3)
(d) Since 0 ≤ FV (v) ≤ 1 and since FV (v) is a nondecreasing function, it must be that −5 ≤ a ≤
7. In this range,
(a)
P [W ≤ 4] = FW (4) = 1/4 (2)
(b)
P [−2 < W ≤ 2] = FW (2) − FW (−2) = 1/4 − 1/4 = 0 (3)
(c)
P [W > 0] = 1 − P [W ≤ 0] = 1 − FW (0) = 3/4 (4)
(d) By inspection of FW (w), we observe that P[W ≤ a] = FW (a) = 1/2 for a in the range
3 ≤ a ≤ 5. In this range,
(a) By definition, ⌈nx⌉ is the smallest integer that is greater than or equal to nx. This implies
nx ≤ ⌈nx⌉ ≤ nx + 1.
(b) By part (a),
nx/n ≤ ⌈nx⌉/n ≤ (nx + 1)/n (1)
That is,
x ≤ ⌈nx⌉/n ≤ x + 1/n (2)
This implies
x ≤ lim_{n→∞} ⌈nx⌉/n ≤ lim_{n→∞} (x + 1/n) = x (3)
(c) In the same way, ⌊nx⌋ is the largest integer that is less than or equal to nx. This implies
nx − 1 ≤ ⌊nx⌋ ≤ nx. It follows that
(nx − 1)/n ≤ ⌊nx⌋/n ≤ nx/n (4)
That is,
x − 1/n ≤ ⌊nx⌋/n ≤ x (5)
This implies
lim_{n→∞} (x − 1/n) = x ≤ lim_{n→∞} ⌊nx⌋/n ≤ x (6)
Problem 3.2.2 Solution
From the CDF, we can find the PDF by direct differentiation. The CDF and corresponding PDF are
F_X(x) = { 0,  x < −1 ;  (x + 1)/2,  −1 ≤ x ≤ 1 ;  1,  x > 1 }
f_X(x) = { 1/2,  −1 ≤ x ≤ 1 ;  0 otherwise } (1)
For the PDF to be non-negative for x ∈ [0, 1], we must have ax + 2 − 2a/3 ≥ 0 for all x ∈ [0, 1].
This requirement can be written as
a(2/3 − x) ≤ 2 (0 ≤ x ≤ 1) (4)
For x = 2/3, the requirement holds for all a. However, the problem is tricky because we must
consider the cases 0 ≤ x < 2/3 and 2/3 < x ≤ 1 separately because of the sign change of the
inequality. When 0 ≤ x < 2/3, we have 2/3 − x > 0 and the requirement is most stringent at
x = 0 where we require 2a/3 ≤ 2 or a ≤ 3. When 2/3 < x ≤ 1, we can write the constraint
as a(x − 2/3) ≥ −2. In this case, the constraint is most stringent at x = 1, where we must have
a/3 ≥ −2 or a ≥ −6. Thus a complete expression for our requirements is
−6 ≤ a ≤ 3 b = 2 − 2a/3 (5)
As we see in the following plot, the shape of the PDF f X (x) varies greatly with the value of a.
[Figure: f_X(x) on 0 ≤ x ≤ 1 for a = −6, −3, 0, and 3.]
and
E[h(X)] = E[X^2] = Var[X] + (E[X])^2 = 4/3 + 1 = 7/3 (3)
(c) Finally,
E[Y] = E[h(X)] = E[X^2] = 7/3 (4)
Var[Y] = E[X^4] − (E[X^2])^2 = ∫_{−1}^{3} (x^4/4) dx − 49/9 = 61/5 − 49/9 = 304/45 (5)
Problem 3.3.2 Solution
E[X] = (1 + 9)/2 = 5    Var[X] = (9 − 1)^2/12 = 16/3 (1)
(b) Define h(X) = 1/√X. Then
h(E[X]) = 1/√5 (2)
E[h(X)] = ∫_1^9 (x^(−1/2)/8) dx = 1/2 (3)
(c)
Problem 3.3.4 Solution
We can find the expected value of Y by direct integration of the given PDF,
f_Y(y) = { y/2,  0 ≤ y ≤ 2 ;  0 otherwise } (1)
The expectation is
E[Y] = ∫_0^2 (y^2/2) dy = 4/3 (2)
To find the variance, we first find the second moment
E[Y^2] = ∫_0^2 (y^3/2) dy = 2. (3)
(b)
E[Y^2] = ∫_{−1}^{1} (y^2/2) dy = 1/3 (4)
Var[Y] = E[Y^2] − (E[Y])^2 = 1/3 − 0 = 1/3 (5)
(a) The expected value of V is
E[V] = ∫_{−∞}^{∞} v f_V(v) dv = (1/72) ∫_{−5}^{7} (v^2 + 5v) dv (2)
= (1/72) [v^3/3 + 5v^2/2]_{−5}^{7} = (1/72)(343/3 + 245/2 + 125/3 − 125/2) = 3 (3)
(c) Note that 2^U = e^((ln 2)U). This implies that
∫ 2^u du = ∫ e^((ln 2)u) du = e^((ln 2)u)/ln 2 = 2^u/ln 2 (6)
We see that E[X^n] < ∞ if and only if α − n + 1 > 1, or, equivalently, n < α. In this case,
E[X^n] = [ αµ^n y^(−(α−n+1)+1) / (−(α − n + 1) + 1) ]_{y=1}^{y=∞} (4)
= [ (−αµ^n/(α − n)) y^(−(α−n)) ]_{y=1}^{y=∞} = αµ^n/(α − n) (5)
Problem 3.4.2 Solution
From Appendix A, we observe that an exponential PDF Y with parameter λ > 0 has PDF
f_Y(y) = { λe^(−λy),  y ≥ 0 ;  0 otherwise } (1)
(c)
P[Y > 5] = ∫_5^∞ f_Y(y) dy = [−e^(−y/5)]_5^∞ = e^(−1) (3)
(b) Substituting the parameters n = 5 and λ = 1/3 into the given PDF, we obtain
f_X(x) = { (1/3)^5 x^4 e^(−x/3)/24,  x ≥ 0 ;  0 otherwise } (3)
(c) The probability that 1/2 ≤ Y < 3/2 is
P[1/2 ≤ Y < 3/2] = ∫_{1/2}^{3/2} f_Y(y) dy = ∫_{1/2}^{3/2} 4y e^(−2y) dy (2)
This integral is easily completed using the integration by parts formula ∫ u dv = uv − ∫ v du
with
u = 2y    dv = 2e^(−2y) dy
du = 2 dy    v = −e^(−2y)
Making these substitutions, we obtain
P[1/2 ≤ Y < 3/2] = [−2y e^(−2y)]_{1/2}^{3/2} + ∫_{1/2}^{3/2} 2e^(−2y) dy (3)
(b) For x < −5, F_X(x) = 0. For x > 5, F_X(x) = 1. For −5 ≤ x ≤ 5, the CDF is
F_X(x) = ∫_{−5}^{x} f_X(τ) dτ = (x + 5)/10 (2)
The complete expression for the CDF of X is
F_X(x) = { 0,  x < −5 ;  (x + 5)/10,  −5 ≤ x ≤ 5 ;  1,  x > 5 } (3)
Problem 3.4.6 Solution
We know that X has a uniform PDF over [a, b) and has mean µ X = 7 and variance Var[X ] = 3.
All that is left to do is determine the values of the constants a and b, to complete the model of the
uniform PDF.
E[X] = (a + b)/2 = 7    Var[X] = (b − a)^2/12 = 3 (1)
Since we assume b > a, this implies
a + b = 14    b − a = 6 (2)
Solving these equations yields
a = 4    b = 10 (3)
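As a check (a short Python sketch), the candidate endpoints do reproduce the required mean and variance:

```python
# Uniform (a, b): mean = (a+b)/2, variance = (b-a)^2 / 12.
a, b = 4, 10
mean = (a + b) / 2
var = (b - a) ** 2 / 12
print(mean, var)  # 7.0 3.0
```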
(c) X is an exponential random variable with parameter a = 1/2. By Theorem 3.8, the expected
value of X is E[X ] = 1/a = 2.
where we factored b^2 − a^2 = (b − a)(b + a). The variance of U can also be found by first finding
E[U^2]:
E[U^2] = ∫_a^b u^2/(b − a) du = (b^3 − a^3)/(3(b − a)) (3)
Therefore the variance is
Var[U] = (b^3 − a^3)/(3(b − a)) − ((b + a)/2)^2 = (b − a)^2/12 (4)
We will use C A (X ) and C B (X ) to denote the cost of a call under the two plans. From the problem
statement, we note that C_A(X) = 10X, so that E[C_A(X)] = 10E[X] = 10τ. On the other hand,
given X ≥ 20, X − 20 has a PDF identical to that of X by the memoryless property of the exponential
random variable. Thus,
E [C B (X )] = 99 + 10τ e−20/τ (8)
Some numeric comparisons show that E[C B (X )] ≤ E[C A (X )] if τ > 12.34 minutes. That is, the
flat price for the first 20 minutes is a good deal only if your average phone call is sufficiently long.
For n > 1, we have
I_n = ∫_0^∞ (λ^(n−1) x^(n−1)/(n − 1)!) λ e^(−λx) dx (2)
To use the integration by parts formula ∫ u dv = uv − ∫ v du, we define
u = λ^(n−1) x^(n−1)/(n − 1)!    dv = λ e^(−λx) dx
so that
du = (λ^(n−1) x^(n−2)/(n − 2)!) dx    v = −e^(−λx) (3)
We can then write
I_n = uv|_0^∞ − ∫_0^∞ v du (4)
= [−(λ^(n−1) x^(n−1)/(n − 1)!) e^(−λx)]_0^∞ + ∫_0^∞ (λ^(n−1) x^(n−2)/(n − 2)!) e^(−λx) dx = 0 + I_{n−1} (5)
The above marked integral equals 1 since it is the integral of an Erlang PDF with parameters λ and
n + k over all possible values. Hence,
E[X^k] = (n + k − 1)!/(λ^k (n − 1)!) (3)
This implies that the first and second moments are
E[X] = n!/((n − 1)! λ) = n/λ    E[X^2] = (n + 1)!/(λ^2 (n − 1)!) = (n + 1)n/λ^2 (4)
(a) By Definition 3.7, the CDF of the Erlang (n, λ) random variable X_n is
F_{X_n}(x) = ∫_{−∞}^{x} f_{X_n}(t) dt = ∫_0^x (λ^n t^(n−1) e^(−λt)/(n − 1)!) dt. (2)
(b) To use the integration by parts formula ∫ u dv = uv − ∫ v du, we define
u = t^(n−1)/(n − 1)!    dv = λ^n e^(−λt) dt (3)
du = (t^(n−2)/(n − 2)!) dt    v = −λ^(n−1) e^(−λt) (4)
Thus,
F_{X_n}(x) = ∫_0^x (λ^n t^(n−1) e^(−λt)/(n − 1)!) dt
= [−λ^(n−1) t^(n−1) e^(−λt)/(n − 1)!]_0^x + ∫_0^x (λ^(n−1) t^(n−2) e^(−λt)/(n − 2)!) dt (5)
= −λ^(n−1) x^(n−1) e^(−λx)/(n − 1)! + F_{X_{n−1}}(x) (6)
(c) Now we do a proof by induction. For n = 1, the Erlang (1, λ) random variable X_1 is simply
an exponential random variable. Hence for x ≥ 0, F_{X_1}(x) = 1 − e^(−λx). Now we suppose the
claim is true for F_{X_{n−1}}(x) so that
F_{X_{n−1}}(x) = 1 − Σ_{k=0}^{n−2} (λx)^k e^(−λx)/k!. (7)
Using the result of part (b),
F_{X_n}(x) = F_{X_{n−1}}(x) − (λx)^(n−1) e^(−λx)/(n − 1)! (8)
= 1 − Σ_{k=0}^{n−2} (λx)^k e^(−λx)/k! − (λx)^(n−1) e^(−λx)/(n − 1)! = 1 − Σ_{k=0}^{n−1} (λx)^k e^(−λx)/k! (9)
Now we use the integration by parts formula ∫ u dv = uv − ∫ v du with u = x^n and dv = λe^(−λx) dx.
This implies du = n x^(n−1) dx and v = −e^(−λx), so that
E[X^n] = [−x^n e^(−λx)]_0^∞ + ∫_0^∞ n x^(n−1) e^(−λx) dx (2)
= 0 + (n/λ) ∫_0^∞ x^(n−1) λ e^(−λx) dx (3)
= (n/λ) E[X^(n−1)] (4)
By our induction hypothesis, E[X^(n−1)] = (n − 1)!/λ^(n−1), which implies
E[X^n] = n!/λ^n (5)
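A crude numerical integration (a Python sketch using a simple midpoint Riemann sum rather than any library quadrature) agrees with E[X^n] = n!/λ^n:

```python
# Check E[X^n] = n!/lambda^n for an exponential (lambda) rv
# by a midpoint Riemann sum over [0, 60] (the tail beyond 60 is negligible).
import math

lam, n, h = 1.0, 3, 0.001
moment = 0.0
for i in range(int(60 / h)):
    x = (i + 0.5) * h
    moment += x**n * lam * math.exp(-lam * x) * h

print(moment)  # close to 3! / 1^3 = 6
```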
(a) Since f X (x) ≥ 0 and x ≥ r over the entire integral, we can write
∫_r^∞ x f_X(x) dx ≥ r ∫_r^∞ f_X(x) dx = r P[X > r] (1)
Since r P[X > r ] ≥ 0 for all r ≥ 0, we must have limr →∞ r P[X > r ] = 0.
(c) We can use the integration by parts formula ∫ u dv = uv − ∫ v du by defining u = 1 − F_X(x)
and dv = dx. This yields
∫_0^∞ [1 − F_X(x)] dx = [x(1 − F_X(x))]_0^∞ + ∫_0^∞ x f_X(x) dx (5)
By part (b), lim_{r→∞} r P[X > r] = 0, and this implies [x(1 − F_X(x))]_0^∞ = 0. Thus,
∫_0^∞ [1 − F_X(x)] dx = ∫_0^∞ x f_X(x) dx = E[X] (7)
Problem 3.5.1 Solution
Given that the peak temperature, T , is a Gaussian random variable with mean 85 and standard
deviation 10 we can use the fact that FT (t) = ((t − µT )/σT ) and Table 3.1 on page 123 to
evaluate the following
100 − 85
P [T > 100] = 1 − P [T ≤ 100] = 1 − FT (100) = 1 − (1)
10
= 1 − (1.5) = 1 − 0.933 = 0.066 (2)
60 − 85
P [T < 60] = = (−2.5) (3)
10
= 1 − (2.5) = 1 − .993 = 0.007 (4)
P [70 ≤ T ≤ 100] = FT (100) − FT (70) (5)
= (1.5) − (−1.5) = 2 (1.5) − 1 = .866 (6)
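These table lookups can be reproduced exactly with the error function (a Python sketch; Φ(z) = (1 + erf(z/√2))/2, and the small differences from the table values come only from the rounding in Table 3.1):

```python
# Recompute the three temperature probabilities with the Gaussian CDF.
import math

def phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 85, 10
p_hot = 1 - phi((100 - mu) / sigma)                       # P[T > 100] ≈ 0.067
p_cold = phi((60 - mu) / sigma)                           # P[T < 60]  ≈ 0.006
p_mid = phi((100 - mu) / sigma) - phi((70 - mu) / sigma)  # ≈ 0.866
print(p_hot, p_cold, p_mid)
```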
We can find the variance Var[X] by expanding the above probability in terms of the Φ(·) function:
P[−10 ≤ X ≤ 10] = F_X(10) − F_X(−10) = 2Φ(10/σ_X) − 1 (2)
This implies Φ(10/σ_X) = 0.55. Using Table 3.1 for the Gaussian CDF, we find that 10/σ_X = 0.15,
or σ_X = 66.6.
Problem 3.5.5 Solution
Moving to Antarctica, we find that the temperature, T, is still Gaussian but with variance 225. We
also know that with probability 1/2, T exceeds 10 degrees. First we would like to find the mean
temperature, and we do so by looking at the second fact:
P[T > 10] = 1 − P[T ≤ 10] = 1 − Φ((10 − µ_T)/15) = 1/2 (1)
By looking at the table we find that if Φ(x) = 1/2, then x = 0. Therefore,
Φ((10 − µ_T)/15) = 1/2 (2)
implies that (10 − µ_T)/15 = 0 or µ_T = 10. Now we have a Gaussian T with mean 10 and standard
deviation 15. So we are prepared to answer the following problems:
P[T > 32] = 1 − P[T ≤ 32] = 1 − Φ((32 − 10)/15) (3)
= 1 − Φ(1.45) = 1 − 0.926 = 0.074 (4)
P[T < 0] = F_T(0) = Φ((0 − 10)/15) (5)
= Φ(−2/3) = 1 − Φ(2/3) (6)
= 1 − Φ(0.67) = 1 − 0.749 = 0.251 (7)
P[T > 60] = 1 − P[T ≤ 60] = 1 − F_T(60) (8)
= 1 − Φ((60 − 10)/15) = 1 − Φ(10/3) (9)
= Q(3.33) = 4.34 · 10^(−4) (10)
Recall that Φ(x) = 0.01 for a negative value of x. This is consistent with our earlier observation that
we would need n > 25, corresponding to 100 − 4n < 0. Thus, we use the identity Φ(x) = 1 − Φ(−x)
to write
Φ((100 − 4n)/√n) = 1 − Φ((4n − 100)/√n) = 0.01 (5)
Equivalently, we have
Φ((4n − 100)/√n) = 0.99 (6)
From the table of the Φ function, we have that (4n − 100)/√n = 2.33, or
(4n − 100)^2 = (2.33)^2 n.
Solving this quadratic yields n = 28.09. Hence, only after 28 years are we 99 percent sure that the
prof will have spent $1000. Note that a second root of the quadratic yields n = 22.25. This root is
not a valid solution to our problem. Mathematically, it is a solution of our quadratic in which we
√
choose the negative root of n. This would correspond to assuming the standard deviation of Yn is
negative.
(a) Let H denote the height in inches of a U.S. male. To find σ_X, we look at the fact that the
probability P[H ≥ 84] is the number of men who are at least 7 feet tall divided by the
total number of men (the frequency interpretation of probability). Since we measure H in
inches, we have
P[H ≥ 84] = 23,000/100,000,000 = Φ((70 − 84)/σ_X) = 0.00023 (1)
Since Φ(−3.5) ≈ 0.00023, it follows that σ_X = 14/3.5 = 4.
(b) The probability that a randomly chosen man is at least 8 feet tall is
P[H ≥ 96] = Q((96 − 70)/4) = Q(6.5) (3)
Unfortunately, Table 3.2 doesn’t include Q(6.5), although it should be apparent that the prob-
ability is very small. In fact, Q(6.5) = 4.0 × 10−11 .
(c) First we need to find the probability that a man is at least 7’6”.
P[H ≥ 90] = Q((90 − 70)/4) = Q(5) ≈ 3 · 10^(−7) = β (4)
Although Table 3.2 stops at Q(4.99), if you’re curious, the exact value is Q(5) = 2.87 · 10−7 .
Now we can begin to find the probability that no man is at least 7’6”. This can be modeled
as 100,000,000 repetitions of a Bernoulli trial with parameter 1 − β. The probability that no
man is at least 7’6” is
(1 − β)100,000,000 = 9.4 × 10−14 (5)
(d) The expected value of N is just the number of trials multiplied by the probability that a man
is at least 7’6”.
E [N ] = 100,000,000 · β = 30 (6)
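With the exact value Q(5) = 2.87 · 10⁻⁷ noted above, a Python sketch gives the slightly sharper answer E[N] ≈ 28.7 (the text's 30 uses the rounded β ≈ 3 · 10⁻⁷):

```python
# E[N] = (number of trials) * Q(5), with Q(z) = 0.5 * erfc(z / sqrt(2)).
import math

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

beta = Q(5.0)
expected_n = 100_000_000 * beta
print(beta, expected_n)  # beta ≈ 2.87e-7, E[N] ≈ 28.7
```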
(b) When we write I^2 as the product of integrals, we use y to denote the other variable of
integration so that
I^2 = ( (1/√(2π)) ∫_{−∞}^{∞} e^(−x²/2) dx ) ( (1/√(2π)) ∫_{−∞}^{∞} e^(−y²/2) dy ) (3)
= (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^(−(x²+y²)/2) dx dy (4)
where γ̄ = γ/(1 + γ). Next, recalling that Q(0) = 1/2 and making the substitution t = y/γ̄, we
obtain
P_e = 1/2 − (1/2)√(γ̄/π) ∫_0^∞ t^(−1/2) e^(−t) dt (9)
From Math Fact B.11, we see that the remaining integral is the Γ(z) function evaluated at z = 1/2.
Since Γ(1/2) = √π,
P_e = 1/2 − (1/2)√(γ̄/π) Γ(1/2) = (1/2)[1 − √γ̄] = (1/2)[1 − √(γ/(1 + γ))] (10)
where F_X(−1⁻) denotes the limiting value of the CDF found by approaching −1 from the
left. Likewise, F_X(−1⁺) is interpreted to be the value of the CDF found by approaching
−1 from the right. We notice that these two probabilities are the same and therefore the
probability that X is exactly −1 is zero.
(b)
P[X < 0] = F_X(0⁻) = 1/3 (3)
P[X ≤ 0] = F_X(0) = 2/3 (4)
Here we see that there is a discrete jump at X = 0. Approached from the left, the CDF yields a
value of 1/3, but approached from the right the value is 2/3. This means that there is a non-zero
probability that X = 0; in fact, that probability is the difference of the two values.
(c)
P[0 < X ≤ 1] = F_X(1) − F_X(0⁺) = 1 − 2/3 = 1/3 (6)
P[0 ≤ X ≤ 1] = F_X(1) − F_X(0⁻) = 1 − 1/3 = 2/3 (7)
The difference in the last two probabilities above is that the first was concerned with the
probability that X was strictly greater than 0, and the second with the probability that X was
greater than or equal to zero. Since the second event is a larger set (it includes the event
X = 0), its probability should always be greater than or equal to the first. The two differ
by the probability that X = 0, and this difference is non-zero only when the random
variable exhibits a discrete jump in the CDF.
Problem 3.6.2 Solution
Similar to the previous problem we find
(a)
P[X < −1] = F_X(−1⁻) = 0    P[X ≤ −1] = F_X(−1) = 1/4 (1)
(b)
P[X < 0] = F_X(0⁻) = 1/2    P[X ≤ 0] = F_X(0) = 1/2 (2)
(c)
(a) By taking the derivative of the CDF F_X(x) given in Problem 3.6.2, we obtain the PDF
f_X(x) = { δ(x + 1)/4 + 1/4 + δ(x − 1)/4,  −1 ≤ x ≤ 1 ;  0 otherwise } (1)
Problem 3.6.4 Solution
The PMF of a Bernoulli random variable with mean p is
P_X(x) = { 1 − p,  x = 0 ;  p,  x = 1 ;  0 otherwise } (1)
(a) Since the conversation time cannot be negative, we know that FW (w) = 0 for w < 0. The
conversation time W is zero iff either the phone is busy, no one answers, or if the conversation
time X of a completed call is zero. Let A be the event that the call is answered. Note that the
event Ac implies W = 0. For w ≥ 0,
FW (w) = P Ac + P [A] FW |A (w) = (1/2) + (1/2)FX (w) (1)
Next, we keep in mind that since X must be nonnegative, f X (x) = 0 for x < 0. Hence,
(c) From the PDF f_W(w), calculating the moments is straightforward.
E[W] = ∫_{−∞}^{∞} w f_W(w) dw = (1/2) ∫_{−∞}^{∞} w f_X(w) dw = E[X]/2 (5)
The variance of W is
Var[W] = E[W^2] − (E[W])^2 = E[X^2]/2 − (E[X]/2)^2 (7)
= (1/2) Var[X] + (E[X])^2/4 (8)
Of course, for y < 60, P[D ≤ y|G] = 0. From the problem statement, if the throw is a foul, then
D = 0. This implies
P[D ≤ y|G^c] = u(y) (3)
where u(·) denotes the unit step function. Since P[G] = 0.7, we can write
F_D(y) = P[G] P[D ≤ y|G] + P[G^c] P[D ≤ y|G^c] (4)
= { 0.3u(y),  y < 60 ;  0.3 + 0.7(1 − e^(−(y−60)/10)),  y ≥ 60 } (5)
Another way to write this CDF is
However, when we take the derivative, either expression for the CDF will yield the PDF, and
taking the derivative of the first expression is perhaps simpler:
f_D(y) = { 0.3δ(y),  y < 60 ;  0.07e^(−(y−60)/10),  y ≥ 60 } (7)
Taking the derivative of the second expression for the CDF is a little tricky because of the product
of the exponential and the step function. However, applying the usual rule for the differentiation of
a product does give the correct answer:
The middle term δ(y − 60)(1 − e−(y−60)/10 ) dropped out because at y = 60, e−(y−60)/10 = 1.
Likewise when the professor is more than 5 minutes late, the students leave and a 0 minute lecture
is observed. Since he is late 30% of the time and given that he is late, his arrival is uniformly
distributed between 0 and 10 minutes, the probability that there is no lecture is
The only other possible lecture durations are uniformly distributed between 75 and 80 minutes,
because the students will not wait longer than 5 minutes, and that probability must add to a total of
1 − 0.7 − 0.15 = 0.15. So the PDF of T can be written as
f_T(t) = { 0.15δ(t),  t = 0 ;  0.03,  75 ≤ t < 80 ;  0.7δ(t − 80),  t = 80 ;  0 otherwise } (3)
Hence, the CDF of Y is
F_Y(y) = { 0,  y < 0 ;  √y,  0 ≤ y < 1 ;  1,  y ≥ 1 } (3)
By taking the derivative of the CDF, we obtain the PDF
f_Y(y) = { 1/(2√y),  0 ≤ y < 1 ;  0 otherwise } (4)
Problem 3.7.2 Solution
Since Y = √X, the fact that X is nonnegative and that we assume the square root is always positive
implies F_Y(y) = 0 for y < 0. In addition, for y ≥ 0, we can find the CDF of Y by writing
F_Y(y) = P[Y ≤ y] = P[√X ≤ y] = P[X ≤ y^2] = F_X(y^2) (1)
Taking the derivative with respect to y yields the PDF
f_Y(y) = { 2λy e^(−λy²),  y ≥ 0 ;  0 otherwise } (3)
In comparing this result to the Rayleigh PDF given in Appendix A, we observe that Y is a Rayleigh
(a) random variable with a = √(2λ).
(a) We can find the CDF of Y , FY (y) by noting that Y can only take on two possible values, 0
and 100. And the probability that Y takes on these two values depends on the probability that
X < 0 and X ≥ 0, respectively. Therefore
F_Y(y) = P[Y ≤ y] = { 0,  y < 0 ;  P[X < 0],  0 ≤ y < 100 ;  1,  y ≥ 100 } (2)
The probabilities concerned with X can be found from the given CDF FX (x). This is the
general strategy for solving problems of this type: to express the CDF of Y in terms of the
CDF of X . Since P[X < 0] = FX (0− ) = 1/3, the CDF of Y is
F_Y(y) = P[Y ≤ y] = { 0,  y < 0 ;  1/3,  0 ≤ y < 100 ;  1,  y ≥ 100 } (3)
(b) The CDF FY (y) has jumps of 1/3 at y = 0 and 2/3 at y = 100. The corresponding PDF of
Y is
f Y (y) = δ(y)/3 + 2δ(y − 100)/3 (4)
[Figure: sketch of X = −ln(1 − U) as a function of U for 0 ≤ U ≤ 1.]
(a) From the sketch, we observe that X will be nonnegative. Hence FX (x) = 0 for x < 0. Since
U has a uniform distribution on [0, 1], for 0 ≤ u ≤ 1, P[U ≤ u] = u. We use this fact to find
the CDF of X . For x ≥ 0,
FX (x) = P [− ln(1 − U ) ≤ x] = P 1 − U ≥ e−x = P U ≤ 1 − e−x (1)
(b) By taking the derivative, the PDF is
e−x x ≥0
f X (x) = (4)
0 otherwise
Thus, X has an exponential PDF. In fact, since most computer languages provide uniform
[0, 1] random numbers, the procedure outlined in this problem provides a way to generate
exponential random variables from uniform random variables.
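The inverse-transform recipe outlined above is one line in Python (a sketch; the seeded generator makes the check reproducible):

```python
# Generate exponential samples as X = -ln(1 - U) with U uniform on [0, 1).
import math, random

rng = random.Random(0)  # fixed seed for reproducibility
samples = [-math.log(1 - rng.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # should be near E[X] = 1 for the unit-rate exponential
```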
Therefore, for 0 ≤ y ≤ 1,
P [Y ≤ y] = P [g(X ) ≤ y] = y 3 (3)
Thus, using g(X ) = X 1/3 , we see that for 0 ≤ y ≤ 1,
P [g(X ) ≤ y] = P X 1/3 ≤ y = P X ≤ y 3 = y 3 (4)
(a)
(b) For 0 ≤ l ≤ 0.5,
So the CDF of L is ⎧
⎨ 0 l<0
FL (l) = l 0 ≤ l < 0.5 (8)
⎩
1 l ≥ 0.5
Problem 3.7.9 Solution
The uniform (0, 2) random variable U has PDF and CDF
f_U(u) = { 1/2,  0 ≤ u ≤ 2 ;  0 otherwise }    F_U(u) = { 0,  u < 0 ;  u/2,  0 ≤ u < 2 ;  1,  u ≥ 2 } (1)
The uniform random variable U is subjected to the following clipper.
W = g(U) = { U,  U ≤ 1 ;  1,  U > 1 } (2)
To find the CDF of the output of the clipper, W , we remember that W = U for 0 ≤ U ≤ 1
while W = 1 for 1 ≤ U ≤ 2. First, this implies W is nonnegative, i.e., FW (w) = 0 for w < 0.
Furthermore, for 0 ≤ w ≤ 1,
F_W(w) = P[U ≤ w] = F_U(w) = w/2 (3)
Lastly, we observe that it is always true that W ≤ 1. This implies F_W(w) = 1 for w ≥ 1. Therefore
the CDF of W is
F_W(w) = { 0,  w < 0 ;  w/2,  0 ≤ w < 1 ;  1,  w ≥ 1 } (4)
From the jump in the CDF at w = 1, we see that P[W = 1] = 1/2. The corresponding PDF can be
found by taking the derivative and using the delta function to model the discontinuity.
f_W(w) = { 1/2 + (1/2)δ(w − 1),  0 ≤ w ≤ 1 ;  0 otherwise } (5)
The expected value of W is
E[W] = ∫_{−∞}^{∞} w f_W(w) dw = ∫_0^1 w[1/2 + (1/2)δ(w − 1)] dw (6)
= 1/4 + 1/2 = 3/4. (7)
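A seeded Monte Carlo run (a Python sketch) lands close to the analytical answer E[W] = 3/4:

```python
# Simulate the clipper W = min(U, 1) with U uniform on (0, 2).
import random

rng = random.Random(1)
n = 200_000
mean_w = sum(min(2 * rng.random(), 1.0) for _ in range(n)) / n
print(mean_w)  # near 3/4
```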
Problem 3.7.11 Solution
The PDF of U is
f_U(u) = { 1/2,  −1 ≤ u ≤ 1 ;  0 otherwise } (1)
Since W ≥ 0, we see that FW (w) = 0 for w < 0. Next, we observe that the rectifier output W is a
mixed random variable since
P[W = 0] = P[U < 0] = ∫_{−1}^{0} f_U(u) du = 1/2 (2)
By taking the derivative of the CDF, we find the PDF of W ; however, we must keep in mind that
the discontinuity in the CDF at w = 0 yields a corresponding impulse in the PDF.
f_W(w) = { (δ(w) + 1)/2,  0 ≤ w ≤ 1 ;  0 otherwise } (6)
Perhaps an easier way to find the expected value is to use Theorem 2.10. In this case,
E[W] = ∫_{−∞}^{∞} g(u) f_U(u) du = ∫_0^1 u(1/2) du = 1/4 (8)
(a) If X is uniform (b, c), then Y = aX has PDF
f_Y(y) = (1/a) f_X(y/a) = { 1/(a(c − b)),  b ≤ y/a ≤ c ;  0 otherwise }
= { 1/(ac − ab),  ab ≤ y ≤ ac ;  0 otherwise } (2)
Thus Y has the PDF of a uniform (ab, ac) random variable.
(b) Using Theorem 3.19, the PDF of Y = aX is
f_Y(y) = (1/a) f_X(y/a) = { (λ/a) e^(−λ(y/a)),  y/a ≥ 0 ;  0 otherwise } (3)
= { (λ/a) e^(−(λ/a)y),  y ≥ 0 ;  0 otherwise } (4)
Hence Y is an exponential (λ/a) random variable.
(c) Using Theorem 3.19, the PDF of Y = aX is
f_Y(y) = (1/a) f_X(y/a) = { λ^n (y/a)^(n−1) e^(−λ(y/a))/(a(n − 1)!),  y/a ≥ 0 ;  0 otherwise } (5)
= { (λ/a)^n y^(n−1) e^(−(λ/a)y)/(n − 1)!,  y ≥ 0 ;  0 otherwise } (6)
which is an Erlang (n, λ/a) PDF.
(d) If X is a Gaussian (µ, σ) random variable, then Y = aX has PDF
f_Y(y) = (1/a) f_X(y/a) = (1/(a√(2πσ²))) e^(−((y/a)−µ)²/(2σ²)) (7)
= (1/√(2πa²σ²)) e^(−(y−aµ)²/(2a²σ²)) (8)
Thus Y is a Gaussian random variable with expected value E[Y] = aµ and Var[Y] = a²σ².
That is, Y is a Gaussian (aµ, aσ) random variable.
Therefore the CDF of Y is
F_Y(y) = { 0,  y < a ;  (y − a)/(b − a),  a ≤ y ≤ b ;  1,  y ≥ b } (5)
By differentiating with respect to y we arrive at the PDF
f_Y(y) = { 1/(b − a),  a ≤ y ≤ b ;  0 otherwise } (6)
[Figure: sketch of Y versus X for 0 ≤ X ≤ 3.]
(b) Since Y ≥ 1/2, we can conclude that FY (y) = 0 for y < 1/2. Also, FY (1/2) = P[Y = 1/2] =
1/4. Similarly, for 1/2 < y ≤ 1,
Problem 3.7.16 Solution
We can prove the assertion by considering the cases where a > 0 and a < 0, respectively. For the
case where a > 0 we have
F_Y(y) = P[Y ≤ y] = P[X ≤ (y − b)/a] = F_X((y − b)/a) (1)
Problem 3.7.18 Solution
(a) Given FX (x) is a continuous function, there exists x0 such that FX (x0 ) = u. For each value of
u, the corresponding x0 is unique. To see this, suppose there were also x1 such that FX (x1 ) =
u. Without loss of generality, we can assume x1 > x0 since otherwise we could exchange the
points x0 and x1 . Since FX (x0 ) = FX (x1 ) = u, the fact that FX (x) is nondecreasing implies
FX (x) = u for all x ∈ [x0 , x1 ], i.e., FX (x) is flat over the interval [x0 , x1 ], which contradicts
the assumption that FX (x) has no flat intervals. Thus, for any u ∈ (0, 1), there is a unique x0
such that F_X(x0) = u. Moreover, the same x0 is the minimum of all x′ such that F_X(x′) ≥ u.
The uniqueness of x0 such that F_X(x0) = u permits us to define F̃(u) = x0 = F_X^(−1)(u).
(b) In this part, we are given that F_X(x) has a jump discontinuity at x0. That is, there exist
u0⁻ = F_X(x0⁻) and u0⁺ = F_X(x0⁺) with u0⁻ < u0⁺. Consider any u in the interval [u0⁻, u0⁺].
Since F_X(x0) = F_X(x0⁺) and F_X(x) is nondecreasing,
F_X(x) ≥ F_X(x0) = u0⁺,  x ≥ x0. (1)
Moreover,
F_X(x) < F_X(x0⁻) = u0⁻,  x < x0. (2)
Thus for any u satisfying u0⁻ ≤ u ≤ u0⁺, we have F_X(x) < u for x < x0 and F_X(x) ≥ u for x ≥ x0.
Thus, F̃(u) = min{x | F_X(x) ≥ u} = x0.
(c) We note that the first two parts of this problem were just designed to show the properties of
F̃(u). First, we observe that
P[X̂ ≤ x] = P[F̃(U) ≤ x] = P[min{x′ | F_X(x′) ≥ U} ≤ x]. (3)
Note that P[A] = P[ X̂ ≤ x]. In addition, P[B] = P[U ≤ FX (x)] = FX (x) since P[U ≤ u] =
u for any u ∈ [0, 1].
We will show that the events A and B are the same. This fact implies
P X̂ ≤ x = P [A] = P [B] = P [U ≤ FX (x)] = FX (x) . (6)
All that remains is to show A and B are the same. As always, we need to show that A ⊂ B
and that B ⊂ A.
• To show A ⊂ B, suppose A is true, i.e., min{x′ | F_X(x′) ≥ U} ≤ x. This implies there
exists x0 ≤ x such that F_X(x0) ≥ U. Since x0 ≤ x, it follows from F_X(x) being
nondecreasing that F_X(x0) ≤ F_X(x). We can thus conclude that
• To show B ⊂ A, we suppose event B is true so that U ≤ F_X(x). We define the set
L = {x′ | F_X(x′) ≥ U}. (8)
(b) Given B, we see that X has a uniform PDF over [a, b] with a = −3 and b = 3. From
Theorem 3.6, the conditional expected value of X is E[X |B] = (a + b)/2 = 0.
(c) From Theorem 3.6, the conditional variance of X is Var[X |B] = (b − a)2 /12 = 3.
(b) The conditional expected value of Y given A is
E[Y|A] = ∫_{−∞}^{∞} y f_{Y|A}(y) dy = (1/5)/(1 − e^(−2/5)) ∫_0^2 y e^(−y/5) dy (5)
Using the integration by parts formula ∫ u dv = uv − ∫ v du with u = y and dv = e^(−y/5) dy
yields
E[Y|A] = (1/5)/(1 − e^(−2/5)) ( [−5y e^(−y/5)]_0^2 + ∫_0^2 5e^(−y/5) dy ) (6)
= (1/5)/(1 − e^(−2/5)) ( −10e^(−2/5) − [25e^(−y/5)]_0^2 ) (7)
= (5 − 7e^(−2/5))/(1 − e^(−2/5)) (8)
117
(a) Since W has expected value µ = 0, fW(w) is symmetric about w = 0. Hence P[C] =
P[W > 0] = 1/2. From Definition 3.15, the conditional PDF of W given C is

    fW|C(w) = { fW(w)/P[C]  w ∈ C      = { 2e^{−w²/32}/√(32π)  w > 0
              { 0           otherwise    { 0                   otherwise   (2)
The conditional expected value of T is
' ∞
E [T |T > 0.02] = t (100)e−100(t−0.02) dt (4)
0.02
(b) If instead we learn that D ≤ 70, we can calculate the conditional PDF by first calculating

    P[D ≤ 70] = ∫_0^{70} fD(y) dy   (4)
              = ∫_0^{60} 0.3δ(y) dy + ∫_{60}^{70} 0.07e^{−(y−60)/10} dy   (5)
              = 0.3 + [−0.7e^{−(y−60)/10}]_{60}^{70} = 1 − 0.7e^{−1}   (6)
(a) Given that a person is healthy, X is a Gaussian (µ = 90, σ = 20) random variable. Thus,

    fX|H(x) = (1/(σ√(2π))) e^{−(x−µ)²/2σ²} = (1/(20√(2π))) e^{−(x−90)²/800}   (1)

(b) Given the event H, we use the conditional PDF fX|H(x) to calculate the required probabilities

    P[T+|H] = P[X ≥ 140|H] = P[X − 90 ≥ 50|H]   (2)
            = P[(X − 90)/20 ≥ 2.5|H] = 1 − Φ(2.5) = 0.006   (3)

Similarly,

    P[T−|H] = P[X ≤ 110|H] = P[X − 90 ≤ 20|H]   (4)
            = P[(X − 90)/20 ≤ 1|H] = Φ(1) = 0.841   (5)
Thus,

    P[H|T−] = P[T−|H] P[H] / ( P[T−|D] P[D] + P[T−|H] P[H] )   (10)
            = 0.841(0.9) / ( 0.106(0.1) + 0.841(0.9) ) = 0.986   (11)
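This Bayes' rule calculation is easy to check numerically; the following short Python sketch just re-evaluates equations (10)–(11) with the values derived above.

```python
# Bayes' rule computation from (10)-(11), using the values derived above:
# P[T-|H] = 0.841, P[T-|D] = 0.106, P[H] = 0.9, P[D] = 0.1.
p_tm_H, p_tm_D = 0.841, 0.106
p_H, p_D = 0.9, 0.1
p_H_given_tm = (p_tm_H * p_H) / (p_tm_D * p_D + p_tm_H * p_H)
assert round(p_H_given_tm, 3) == 0.986
```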
(a) The event Bi that Y = Δ/2 + iΔ occurs if and only if iΔ ≤ X < (i + 1)Δ. In particular,
since X has the uniform (−r/2, r/2) PDF

    fX(x) = { 1/r  −r/2 ≤ x < r/2,
            { 0    otherwise,   (1)

we observe that

    P[Bi] = ∫_{iΔ}^{(i+1)Δ} (1/r) dx = Δ/r   (2)

In addition, the conditional PDF of X given Bi is

    fX|Bi(x) = { fX(x)/P[Bi]  x ∈ Bi     = { 1/Δ  iΔ ≤ x < (i + 1)Δ
               { 0            otherwise    { 0    otherwise   (3)

It follows that given Bi, Z = X − Y = X − Δ/2 − iΔ, which is a uniform (−Δ/2, Δ/2)
random variable. That is,

    fZ|Bi(z) = { 1/Δ  −Δ/2 ≤ z < Δ/2
               { 0    otherwise   (4)

(b) We observe that fZ|Bi(z) is the same for every i. Thus, we can write

    fZ(z) = Σ_i P[Bi] fZ|Bi(z) = fZ|B0(z) Σ_i P[Bi] = fZ|B0(z)   (5)

Thus, Z is a uniform (−Δ/2, Δ/2) random variable. From the definition of a uniform (a, b)
random variable, Z has mean and variance

    E[Z] = 0,   Var[Z] = (Δ/2 − (−Δ/2))²/12 = Δ²/12.   (6)
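The Δ²/12 variance is easy to confirm numerically. The Python sketch below evaluates the quantization error of a midpoint quantizer on a fine deterministic grid over (−r/2, r/2); the values of r and the number of bits are arbitrary illustrative assumptions.

```python
import math

# Illustrative assumptions: range r = 2 and a 4-bit quantizer, so step delta = r/16.
# The quantizer maps x to the midpoint of its cell [i*delta, (i+1)*delta),
# matching the events Bi in the text (Y = delta/2 + i*delta).
r, bits = 2.0, 4
delta = r / (2 ** bits)
n = 200001
xs = [-r / 2 + (i + 0.5) * r / n for i in range(n)]

def quantize(x):
    i = math.floor(x / delta)
    return (i + 0.5) * delta

errs = [x - quantize(x) for x in xs]          # Z = X - Y
var = sum(e * e for e in errs) / len(errs)    # E[Z] = 0 by symmetry
assert abs(var - delta ** 2 / 12) < 1e-5      # Var[Z] = delta^2 / 12
```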
Problem 3.8.9 Solution
For this problem, almost any non-uniform random variable X will yield a non-uniform random
variable Z. For example, suppose X has the "triangular" PDF

    fX(x) = { 8x/r²  0 ≤ x ≤ r/2
            { 0      otherwise   (1)

In this case, the event Bi that Y = iΔ + Δ/2 occurs if and only if iΔ ≤ X < (i + 1)Δ. Thus

    P[Bi] = ∫_{iΔ}^{(i+1)Δ} (8x/r²) dx = 8Δ(iΔ + Δ/2)/r²   (2)

It follows that the conditional PDF of X given Bi is

    fX|Bi(x) = { fX(x)/P[Bi]  x ∈ Bi     = { x/(Δ(iΔ + Δ/2))  iΔ ≤ x < (i + 1)Δ
               { 0            otherwise    { 0                otherwise   (3)

We observe that the PDF of Z depends on which event Bi occurs. Moreover, fZ|Bi(z) is non-uniform
for all Bi.
We see that Y is a uniform (0, 4) random variable. By Theorem 3.20, if X is a uniform (0, 1)
random variable, then Y = 4X is a uniform (0, 4) random variable. Using rand as M ATLAB’s
uniform (0, 1) random variable, the program quiz31rv is essentially a one line program:
function y=quiz31rv(m)
%Usage y=quiz31rv(m)
%Returns the vector y holding m
%samples of the uniform (0,4) random
%variable Y of Quiz 3.1
y=4*rand(m,1);
function x=modemrv(m);
%Usage: x=modemrv(m)
%generates m samples of X, the modem
%receiver voltage in Example 3.32.
%X=+-5 + N where N is Gaussian (0,2)
sb=[-5; 5]; pb=[0.5; 0.5];
b=finiterv(sb,pb,m);
noise=gaussrv(0,2,m);
x=b+noise;
The commands
x=modemrv(10000); hist(x,100);
generate 10,000 samples of the modem receiver voltage and plot the relative frequencies using 100
bins. Here is an example plot:
[Histogram of the 10,000 samples: relative frequency (0 to 300) versus x from −15 to 15, with peaks near x = −5 and x = 5.]
As expected, the result is qualitatively similar (“hills” around X = −5 and X = 5) to the sketch in
Figure 3.3.
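For readers without MATLAB, here is a hedged Python sketch of the same simulation. Treating the second argument of gaussrv as a *variance* of 2 is an assumption based on the book's Gaussian (µ, σ²) notation.

```python
import math
import random

def modemrv(m, rng):
    # X = B + N: B is +/-5 with equal probability; N is Gaussian with mean 0
    # and (assumed) variance 2, mirroring the MATLAB modemrv function.
    return [rng.choice((-5.0, 5.0)) + rng.gauss(0.0, math.sqrt(2.0))
            for _ in range(m)]

rng = random.Random(0)
x = modemrv(100_000, rng)
mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / len(x)
assert abs(mean) < 0.1         # E[X] = 0
assert abs(var - 27.0) < 1.0   # Var[X] = 5^2 + 2 = 27
```

A histogram of `x` shows the same two "hills" around ±5 as the MATLAB plot.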
This code generates two plots of the relative error e(z) as a function of z:
z=0:0.02:6;
q=1.0-phi(z(:));
qhat=qapprox(z);
e=(q-qhat)./q;
plot(z,e); figure;
semilogy(z,abs(e));
[Two plots: on the left, e(z) versus z on a linear scale (values on the order of 10^−3); on the right, |e(z)| versus z on a logarithmic scale.]
The left side plot graphs e(z) versus z. It appears that e(z) = 0 for z ≤ 3. In fact, e(z)
is nonzero over that range, but the relative error is so small that it isn't visible in comparison to
e(6) ≈ −3.5 × 10^−3. To see the error for small z, the right hand graph plots |e(z)| versus z in log
scale, where we observe very small relative errors on the order of 10^−7.
function k=georv(p,m);
lambda= -log(1-p);
k=ceil(exponentialrv(lambda,m));
To compare this technique with that used in geometricrv.m, we first examine the code for
exponentialrv.m:
function x=exponentialrv(lambda,m)
x=-(1/lambda)*log(1-rand(m,1));
This is precisely the same function implemented by geometricrv.m. In short, the two methods
for generating geometric (p) random samples are one and the same.
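The equivalence rests on the fact that the ceiling of an exponential (λ) random variable with λ = −log(1 − p) is geometric (p). A small Python sketch (an analogue of georv, not the book's MATLAB code) verifies this identity directly:

```python
import math
import random

def georv(p, m, rng=random):
    # K = ceil(X), X exponential with lambda = -log(1-p); mirrors the method above.
    lam = -math.log(1.0 - p)
    return [max(1, math.ceil(-math.log(1.0 - rng.random()) / lam))
            for _ in range(m)]

# Analytic check that ceil(X) has the geometric PMF:
# P[ceil(X) = k] = e^{-lam(k-1)} - e^{-lam k} = (1-p)^{k-1} p.
p = 0.3
lam = -math.log(1.0 - p)
for k in range(1, 10):
    lhs = math.exp(-lam * (k - 1)) - math.exp(-lam * k)
    assert abs(lhs - (1 - p) ** (k - 1) * p) < 1e-12
```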
(0, 1) random variable U, P[U = 1/4] = 0. Thus we can choose any w ∈ [−3, 3]. In particular,
we define the inverse CDF as

    w = FW^−1(u) = { 8u − 5       0 ≤ u ≤ 1/4
                   { (8u + 7)/3   1/4 < u ≤ 1   (1)

Note that because 0 ≤ FW(w) ≤ 1, the inverse FW^−1(u) is defined only for 0 ≤ u ≤ 1. Careful
inspection will show that u = (w + 5)/8 for −5 ≤ w < −3 and that u = 1/4 + 3(w − 3)/8 for
3 ≤ w ≤ 5. Thus, for a uniform (0, 1) random variable U, the function W = FW^−1(U) produces a
random variable with CDF FW (w). To implement this solution in M ATLAB, we define
function w=iwcdf(u);
w=((u>=0).*(u <= 0.25).*(8*u-5))+...
((u > 0.25).*(u<=1).*((8*u+7)/3));
so that the M ATLAB code W=icdfrv(@iwcdf,m) generates m samples of random variable W .
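A Python analogue of iwcdf/icdfrv is sketched below; the round-trip assertions check the inverse against the CDF pieces u = (w + 5)/8 and u = 1/4 + 3(w − 3)/8 stated above.

```python
import random

def iwcdf(u):
    # Inverse CDF of W from Equation (1): piecewise linear in u.
    if 0 <= u <= 0.25:
        return 8 * u - 5
    elif u <= 1:
        return (8 * u + 7) / 3
    raise ValueError("u must lie in [0, 1]")

def icdfrv(icdf, m, rng=random):
    # Generate m samples W = icdf(U) with U uniform (0,1).
    return [icdf(rng.random()) for _ in range(m)]

# Round-trip check against the CDF pieces:
for u in (0.05, 0.2, 0.25, 0.5, 0.9):
    w = iwcdf(u)
    if w <= -3:
        assert abs((w + 5) / 8 - u) < 1e-12
    else:
        assert abs(0.25 + 3 * (w - 3) / 8 - u) < 1e-12
```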
function exponentialtest(lambda,n)
delta=0.01;
x=exponentialrv(lambda,n);
xr=(0:delta:(5.0/lambda))’;
fxsample=(histc(x,xr)/(n*delta));
fx=exponentialpdf(lambda,xr);
plot(xr,fx,xr,fxsample);
generates n samples of an exponential (λ) random variable and plots the relative frequency
n_i/(nΔ) against the corresponding exponential PDF. Note that the histc function generates
a histogram using xr to define the edges of the bins. Two representative plots for n = 1,000
and n = 100,000 samples appear in the following figure:
[Two plots of relative frequency versus x ∈ [0, 5]: exponentialtest(1,1000) (jagged) and exponentialtest(1,100000) (smooth), each compared with the exponential PDF.]
For n = 1,000, the jaggedness of the relative frequency occurs because Δ is sufficiently
small that the number of samples of X in each bin iΔ < X ≤ (i + 1)Δ is fairly small. For
n = 100,000, the greater smoothness of the curve demonstrates how the relative frequency is
becoming a better approximation to the actual PDF.
(b) Similar results hold for Gaussian random variables. The following code generates the same
comparison between the Gaussian PDF and the relative frequency of n samples.
function gausstest(mu,sigma2,n)
delta=0.01;
x=gaussrv(mu,sigma2,n);
xr=(0:delta:(mu+(3*sqrt(sigma2))))’;
fxsample=(histc(x,xr)/(n*delta));
fx=gausspdf(mu,sigma2,xr);
plot(xr,fx,xr,fxsample);
[Two plots comparing the Gaussian PDF with the relative frequency of n samples: gausstest(3,1,1000) and gausstest(3,1,100000).]
[Three histograms of the b-bit quantization error, for b = 1, 2, 3, over error ranges (−2, 2), (−1, 1), and (−0.5, 0.5) respectively.]
It is obvious that for b = 1 bit quantization, the error is decidedly not uniform. However, it appears
that the error is uniform for b = 2 and b = 3. You can verify that a uniform error is a reasonable
model for larger values of b.
    FU(u) = { 0                  u < −5
            { (u + 5)/8          −5 ≤ u < −3
            { 1/4                −3 ≤ u < 3
            { 1/4 + 3(u − 3)/8   3 ≤ u < 5
            { 1                  u ≥ 5.   (1)

At x = 1/4, there are multiple values of u such that FU(u) = 1/4. However, except for x = 1/4,
the inverse FU^−1(x) is well defined over 0 < x < 1. At x = 1/4, we can arbitrarily define a value for
FU^−1(1/4) because when we produce sample values of FU^−1(X), the event X = 1/4 has probability
zero. To generate the inverse CDF, given a value of x, 0 < x < 1, we have to find the value of u such
that x = FU(u). From the CDF we see that

    0 ≤ x ≤ 1/4  ⇒  x = (u + 5)/8   (2)
    1/4 < x ≤ 1  ⇒  x = 1/4 + 3(u − 3)/8   (3)
function u=urv(m)
%Usage: u=urv(m)
%Generates m samples of the random
%variable U defined in Problem 3.3.7
x=rand(m,1);
u=(x<=1/4).*(8*x-5);
u=u+(x>1/4).*(8*x+7)/3;
To see that this generates the correct output, we can generate a histogram of a million sample values
of U using the commands
u=urv(1000000); hist(u,100);
The output is shown in the following graph, alongside the corresponding PDF of U,

    fU(u) = { 0    u < −5
            { 1/8  −5 ≤ u < −3
            { 0    −3 ≤ u < 3
            { 3/8  3 ≤ u < 5
            { 0    u ≥ 5.   (6)

[Histogram of the 10^6 samples of U (counts on the order of 10^4) versus u ∈ (−5, 5), plotted beside fU(u).]
Note that the scaling constant 10^4 on the histogram plot comes from the fact that the histogram was
generated using 10^6 sample points and 100 bins. The width of each bin is Δ = 10/100 = 0.1.
Consider a bin of width Δ centered at u0. A sample value of U would fall in that bin with proba-
bility ΔfU(u0). Given that we generate m = 10^6 samples, we would expect about mΔfU(u0) =
10^5 fU(u0) samples in each bin. For −5 < u0 < −3, we would expect to see about 1.25 × 10^4
samples in each bin. For 3 < u0 < 5, we would expect to see about 3.75 × 10^4 samples in each bin.
As can be seen, these conclusions are consistent with the histogram data.
Finally, we comment that if you generate histograms for a range of values of m, the number of
samples, you will see that the histograms will become more and more similar to a scaled version
of the PDF. This gives the (false) impression that any bin centered on u0 has a number of samples
increasingly close to mΔfU(u0). Because the histogram is always the same height, what is actually
happening is that the vertical axis is effectively scaled by 1/m and the height of a histogram bar is
proportional to the fraction of m samples that land in that bin. We will see in Chapter 7 that the
fraction of samples in a bin does converge to the probability of a sample being in that bin as the
number of samples m goes to infinity.
    FX(x) = { 0           x < −1
            { (x + 1)/4   −1 ≤ x < 1,
            { 1           x ≥ 1.   (1)
Following the procedure outlined in Problem 3.7.18, we define for 0 < u ≤ 1,
F̃(u) = min{x|FX(x) ≥ u}.   (2)
We observe that if 0 < u < 1/4, then we can choose x so that FX (x) = u. In this case, (x + 1)/4 =
u, or equivalently, x = 4u − 1. For 1/4 ≤ u ≤ 1, the minimum x that satisfies FX (x) ≥ u is x = 1.
These facts imply
    F̃(u) = { 4u − 1  0 < u < 1/4
            { 1       1/4 ≤ u ≤ 1   (3)
It follows that if U is a uniform (0, 1) random variable, then F̃(U ) has the same CDF as X . This is
trivial to implement in M ATLAB.
function x=quiz36rv(m)
%Usage x=quiz36rv(m)
%Returns the vector x holding m samples
%of the random variable X of Quiz 3.6
u=rand(m,1);
x=((4*u-1).*(u< 0.25))+(1.0*(u>=0.25));
Problem Solutions – Chapter 4
(a) The probability P[X ≤ 2, Y ≤ 3] can be found by evaluating the joint CDF FX,Y(x, y) at
x = 2 and y = 3. This yields
(b) To find the marginal CDF of X, FX(x), we simply evaluate the joint CDF at y = ∞.

    FX(x) = FX,Y(x, ∞) = { 1 − e^−x  x ≥ 0
                         { 0         otherwise   (2)

(c) Likewise for the marginal CDF of Y, we evaluate the joint CDF at x = ∞.

    FY(y) = FX,Y(∞, y) = { 1 − e^−y  y ≥ 0
                         { 0         otherwise   (3)
(a) Because the probability that any random variable is less than −∞ is zero, we have
(b) The probability that any random variable is less than infinity is always one.
(d) Part (d) follows the same logic as that of part (a).
Problem 4.1.3 Solution
We wish to find P[x1 ≤ X ≤ x2 ] or P[y1 ≤ Y ≤ y2 ]. We define events A = {y1 ≤ Y ≤ y2 } and
B = {y1 ≤ Y ≤ y2 } so that
Keep in mind that the intersection of events A and B are all the outcomes such that both A and B
occur, specifically, AB = {x1 ≤ X ≤ x2 , y1 ≤ Y ≤ y2 }. It follows that
P [A ∪ B] = P [x1 ≤ X ≤ x2 ] + P [y1 ≤ Y ≤ y2 ]
− P [x1 ≤ X ≤ x2 , y1 ≤ Y ≤ y2 ] . (2)
By Theorem 4.5,
P [x1 ≤ X ≤ x2 , y1 ≤ Y ≤ y2 ]
= FX,Y (x2 , y2 ) − FX,Y (x2 , y1 ) − FX,Y (x1 , y2 ) + FX,Y (x1 , y1 ) . (3)
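Theorem 4.5's four-term rectangle formula is straightforward to evaluate programmatically. A Python sketch is below; the independent-exponential joint CDF is purely an illustrative assumption, not part of this problem.

```python
import math

def rect_prob(F, x1, x2, y1, y2):
    # Theorem 4.5: P[x1 <= X <= x2, y1 <= Y <= y2] from the joint CDF F.
    return F(x2, y2) - F(x2, y1) - F(x1, y2) + F(x1, y1)

# Illustrative joint CDF (an assumption): independent unit exponentials.
def F(x, y):
    if x < 0 or y < 0:
        return 0.0
    return (1 - math.exp(-x)) * (1 - math.exp(-y))

p = rect_prob(F, 1, 2, 1, 2)
# For independent exponentials this should equal (e^-1 - e^-2)^2.
assert abs(p - (math.exp(-1) - math.exp(-2)) ** 2) < 1e-12
```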
(a) The events A, B, and C are
[Three sketches of the X, Y plane: A is the region X ≤ x1, y1 < Y ≤ y2; B is x1 < X ≤ x2, Y ≤ y1; C is x1 < X ≤ x2, y1 < Y ≤ y2.]   (3)
(b) In terms of the joint CDF FX,Y (x, y), we can write
P [A] = FX,Y (x1 , y2 ) − FX,Y (x1 , y1 ) (4)
P [B] = FX,Y (x2 , y1 ) − FX,Y (x1 , y1 ) (5)
P [A ∪ B ∪ C] = FX,Y (x2 , y2 ) − FX,Y (x1 , y1 ) (6)
Problem 4.2.1 Solution
In this problem, it is helpful to label points with nonzero probability on the X, Y plane:
y PX,Y (x, y)
4
6
3 •3c •6c •12c
2
1 •c •2c •4c
0 - x
0 1 2 3 4
Thus c = 1/28.
(d) There are two ways to solve this part. The direct way is to calculate

    P[Y = X] = Σ_{x=1,2,4} Σ_{y=x} PX,Y(x, y) = PX,Y(1, 1) = 1/28   (5)
The indirect way is to use the previous results and the observation that
(e)

    P[Y = 3] = Σ_{x=1,2,4} PX,Y(x, 3) = (3 + 6 + 12)/28 = 21/28 = 3/4   (7)
Problem 4.2.2 Solution
On the X, Y plane, the joint PMF is
y
PX,Y (x, y) 6
•c 1 •c •3c
•2c •2c - x
1 2
•3c •c •c
?
(a) To find c, we sum the PMF over all possible values of X and Y. We choose c so the sum
equals one.

    Σ_x Σ_y PX,Y(x, y) = Σ_{x=−2,0,2} Σ_{y=−1,0,1} c|x + y| = 6c + 2c + 6c = 14c   (1)
Thus c = 1/14.
(b)
P [Y < X ] = PX,Y (0, −1) + PX,Y (2, −1) + PX,Y (2, 0) + PX,Y (2, 1) (2)
= c + c + 2c + 3c = 7c = 1/2 (3)
(c)
P [Y > X ] = PX,Y (−2, −1) + PX,Y (−2, 0) + PX,Y (−2, 1) + PX,Y (0, 1) (4)
= 3c + 2c + c + c = 7c = 1/2 (5)
(e)
[Tree diagram: the first trial is r with probability p or a with probability 1 − p; independently, the second trial is likewise r or a, giving outcomes rr (p²), ra (p(1 − p)), ar (p(1 − p)), and aa ((1 − p)²).]
Now we construct a table that maps the sample outcomes to values of X and Y.

    outcome   P[·]        X   Y
    rr        p²          1   1
    ra        p(1 − p)    1   0
    ar        p(1 − p)    0   1
    aa        (1 − p)²    0   0   (1)
outcome X Y
hh 0 1
ht 1 0 (1)
th 1 1
tt 2 0
    PX,Y(x, y)   y = 0   y = 1
    x = 0        0       1/4
    x = 1        1/4     1/4
    x = 2        1/4     0    (2)
• Lowercase axis labels: For the lowercase labels, we observe that we are depicting the masses
associated with the joint PMF PX,Y (x, y) whose arguments are x and y. Since the PMF
function is defined in terms of x and y, the axis labels should be x and y.
• Uppercase axis labels: On the other hand, we are depicting the possible outcomes (labeled
with their respective probabilities) of the pair of random variables X and Y . The correspond-
ing axis labels should be X and Y just as in Figure 4.2. The fact that we have labeled the
possible outcomes by their probabilities is irrelevant. Further, since the expression for the
PMF PX,Y (x, y) given in the figure could just as well have been written PX,Y (·, ·), it is clear
that the lowercase x and y are not what matter.
We can combine these cases into a single complete expression for the joint PMF.
    PK,X(k, x) = { C(n−1, k−1) (1 − p)^k p^{n−k}   x = 0, k = 1, 2, . . . , n
                 { C(n−1, k) (1 − p)^k p^{n−k}     x = 1, k = 0, 1, . . . , n − 1
                 { 0                               otherwise   (3)
B: Test x + 1 must be a rejection since otherwise we would have x + 1 acceptable tests at the
beginning.
Since the events A, B and C are independent, the joint PMF for x + k ≤ r, x ≥ 0 and k ≥ 0 is

    PK,X(k, x) = p^x (1 − p) · C(n−x−1, k−1) (1 − p)^{k−1} p^{n−x−1−(k−1)}   (1)

where the factors correspond to P[A], P[B], and P[C].
y
3  •3c  •6c  •12c
1  •c   •2c  •4c
0  1  2  3  4  → x
(a) The marginal PMFs of X and Y are

    PX(x) = Σ_{y=1,3} PX,Y(x, y) = { 4/28    x = 1
                                   { 8/28    x = 2
                                   { 16/28   x = 4
                                   { 0       otherwise   (1)

    PY(y) = Σ_{x=1,2,4} PX,Y(x, y) = { 7/28    y = 1
                                     { 21/28   y = 3
                                     { 0       otherwise   (2)
(b) The expected values of X and Y are
E [X ] = x PX (x) = −2(6/14) + 2(6/14) = 0 (3)
x=−2,0,2
E [Y ] = y PY (y) = −1(5/14) + 1(5/14) = 0 (4)
y=−1,0,1
(c) Since X and Y both have zero mean, the variances are
Var[X ] = E X 2 = x 2 PX (x) = (−2)2 (6/14) + 22 (6/14) = 24/7 (5)
x=−2,0,2
Var[Y ] = E Y 2 = y 2 PY (y) = (−1)2 (5/14) + 12 (5/14) = 5/7 (6)
y=−1,0,1
√ √
The standard deviations are σ X = 24/7 and σY = 5/7.
    PN(n) = Σ_{k=0}^{100} PN,K(n, k) = { 100^n e^{−100}/n!   n = 0, 1, . . .
                                       { 0                   otherwise   (1)

    PK(k) = Σ_{n=0}^{∞} PN,K(n, k) = { C(100, k) p^k (1 − p)^{100−k}   k = 0, 1, . . . , 100
                                     { 0                               otherwise   (2)
    PN(n) = Σ_{k=1}^{n} PN,K(n, k) = Σ_{k=1}^{n} (1/n)(1 − p)^{n−1} p = (1 − p)^{n−1} p   (2)

The marginal PMF of K is found by summing over all possible N. Note that if K = k, then N ≥ k.
Thus,

    PK(k) = Σ_{n=k}^{∞} (1/n)(1 − p)^{n−1} p   (3)

Unfortunately, this sum cannot be simplified.
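Although PK(k) has no closed form, it is easy to evaluate numerically by truncating the tail sum, as in this Python sketch (the choice p = 0.5 and the truncation point are illustrative assumptions):

```python
def PK(k, p, nmax=4000):
    # Truncated evaluation of PK(k) = sum_{n >= k} (1/n)(1-p)^(n-1) p.
    total = 0.0
    q = 1.0 - p
    pw = q ** (k - 1)          # (1-p)^(n-1) at n = k
    for n in range(k, nmax):
        total += pw * p / n
        pw *= q
    return total

p = 0.5
pk = [PK(k, p) for k in range(1, 60)]
assert abs(sum(pk) - 1.0) < 1e-9               # the marginal PMF sums to 1
assert all(a > b for a, b in zip(pk, pk[1:]))  # PK(k) is strictly decreasing in k
```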
Problem 4.3.5 Solution
For n = 0, 1, . . ., the marginal PMF of N is

    PN(n) = Σ_{k=0}^{n} PN,K(n, k) = Σ_{k=0}^{n} 100^n e^{−100}/(n + 1)! = 100^n e^{−100}/n!   (1)
(b) To find P[X ≤ Y], we integrate the joint PDF over the region above the line X = Y indicated
in the graph:

    P[X ≤ Y] = ∫_0^{1/2} ∫_x^{1−x} 2 dy dx   (3)
             = ∫_0^{1/2} (2 − 4x) dx   (4)
             = 1/2   (5)

(c) The probability P[X + Y ≤ 1/2] can be seen in the figure as the region below the line
X + Y = 1/2. Here we can set up the following integrals:

    P[X + Y ≤ 1/2] = ∫_0^{1/2} ∫_0^{1/2−x} 2 dy dx   (6)
                   = ∫_0^{1/2} (1 − 2x) dx   (7)
                   = 1/2 − 1/4 = 1/4   (8)
(a) To find the constant c, integrate fX,Y(x, y) over all possible values of X and Y to get

    1 = ∫_0^1 ∫_0^1 c x y² dx dy = c/6   (2)

Therefore c = 6.
(b) The probability P[X ≥ Y] is the integral of the joint PDF fX,Y(x, y) over the shaded
region where y ≤ x:

    P[X ≥ Y] = ∫_0^1 ∫_0^x 6xy² dy dx   (3)
             = ∫_0^1 2x⁴ dx   (4)
             = 2/5   (5)
Similarly, to find P[Y ≤ X²] we can integrate over the region 0 ≤ y ≤ x² shown in the figure:

    P[Y ≤ X²] = ∫_0^1 ∫_0^{x²} 6xy² dy dx   (6)
              = 1/4   (7)
(c) Here we can choose to either integrate fX,Y(x, y) over the lighter shaded region, which would
require the evaluation of two integrals, or we can perform one integral over the darker region
by recognizing

    P[min(X, Y) ≤ 1/2] = 1 − P[min(X, Y) > 1/2]   (8)
                       = 1 − ∫_{1/2}^1 ∫_{1/2}^1 6xy² dx dy   (9)
                       = 1 − ∫_{1/2}^1 (9y²/4) dy = 11/32   (10)
(d) The probability P[max(X, Y) ≤ 3/4] can be found by integrating over the region
x ≤ 3/4, y ≤ 3/4:

    P[max(X, Y) ≤ 3/4] = P[X ≤ 3/4, Y ≤ 3/4]   (11)
                       = ∫_0^{3/4} ∫_0^{3/4} 6xy² dx dy   (12)
                       = ( x² |_0^{3/4} ) ( y³ |_0^{3/4} )   (13)
                       = (3/4)⁵ = 0.237   (14)
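The four probabilities above can be double-checked with a crude midpoint Riemann sum over the unit square, as in this Python sketch (a numeric sanity check, not a substitute for the analytic integrals):

```python
# Midpoint Riemann-sum check of the results above for f(x,y) = 6xy^2 on [0,1]^2.
N = 400
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]

def prob(event):
    # Sum f(x,y) h^2 over grid midpoints where the event holds.
    return sum(6 * x * y * y * h * h for x in pts for y in pts if event(x, y))

assert abs(prob(lambda x, y: True) - 1.0) < 1e-3            # PDF integrates to 1
assert abs(prob(lambda x, y: x >= y) - 2 / 5) < 1e-2        # P[X >= Y] = 2/5
assert abs(prob(lambda x, y: y <= x * x) - 1 / 4) < 1e-2    # P[Y <= X^2] = 1/4
assert abs(prob(lambda x, y: min(x, y) <= 0.5) - 11 / 32) < 1e-2
assert abs(prob(lambda x, y: max(x, y) <= 0.75) - (3 / 4) ** 5) < 1e-2
```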
(b) The event min(X, Y) ≥ 1 is the same as the event {X ≥ 1, Y ≥ 1}. Thus,

    P[min(X, Y) ≥ 1] = ∫_1^∞ ∫_1^∞ 6e^{−(2x+3y)} dy dx = e^{−(2+3)}   (10)
• x < 0 or y < 0
In this case, the region of integration doesn't overlap the region of nonzero probability and

    FX,Y(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fX,Y(u, v) du dv = 0   (1)
• 0 < y ≤ x ≤ 1
In this case, the region where the integral has a nonzero contribution is

    FX,Y(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fX,Y(u, v) du dv   (2)
               = ∫_0^y ∫_v^x 8uv du dv   (3)
               = ∫_0^y 4(x² − v²)v dv   (4)
               = [2x²v² − v⁴]_{v=0}^{v=y} = 2x²y² − y⁴   (5)
• 0 < x ≤ y and 0 ≤ x ≤ 1

    FX,Y(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fX,Y(u, v) dv du   (6)
               = ∫_0^x ∫_0^u 8uv dv du   (7)
               = ∫_0^x 4u³ du = x⁴   (8)
• 0 < y ≤ 1 and x ≥ 1

    FX,Y(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fX,Y(u, v) dv du   (9)
               = ∫_0^y ∫_v^1 8uv du dv   (10)
               = ∫_0^y 4v(1 − v²) dv   (11)
               = 2y² − y⁴   (12)
• x ≥ 1 and y ≥ 1
In this case, the region of integration completely covers the region of nonzero probability and

    FX,Y(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fX,Y(u, v) du dv   (13)
               = 1   (14)
Problem 4.5.1 Solution
(a) The joint PDF (and the corresponding region of nonzero probability, the triangle
−1 ≤ x ≤ y ≤ 1) are

    fX,Y(x, y) = { 1/2  −1 ≤ x ≤ y ≤ 1
                 { 0    otherwise   (1)
(b)

    P[X > 0] = ∫_0^1 ∫_x^1 (1/2) dy dx = ∫_0^1 (1 − x)/2 dx = 1/4   (2)

This result can be deduced by geometry. The shaded triangle of the X, Y plane corresponding
to the event X > 0 is 1/4 of the total shaded area.
(c) For x > 1 or x < −1, fX(x) = 0. For −1 ≤ x ≤ 1,

    fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy = ∫_x^1 (1/2) dy = (1 − x)/2   (3)

The complete expression for the marginal PDF is

    fX(x) = { (1 − x)/2  −1 ≤ x ≤ 1
            { 0          otherwise   (4)
Problem 4.5.3 Solution
Random variables X and Y have joint PDF

    fX,Y(x, y) = { 1/(πr²)  0 ≤ x² + y² ≤ r²
                 { 0        otherwise   (1)
Problem 4.5.4 Solution
Random variables X and Y have joint PDF

    fX,Y(x, y) = { 5x²/2  −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
                 { 0      otherwise   (1)

We can find the appropriate marginal PDFs by integrating the joint PDF.
Problem 4.5.5 Solution
In this problem, the joint PDF is

    fX,Y(x, y) = { 2|xy|/r⁴  0 ≤ x² + y² ≤ r²
                 { 0         otherwise   (1)

Since |y| is symmetric about the origin, we can simplify the integral to

    fX(x) = (4|x|/r⁴) ∫_0^{√(r²−x²)} y dy = (2|x|/r⁴) y² |_0^{√(r²−x²)} = 2|x|(r² − x²)/r⁴   (3)

Note that for |x| > r, fX(x) = 0. Hence the complete expression for the PDF of X is

    fX(x) = { 2|x|(r² − x²)/r⁴  −r ≤ x ≤ r
            { 0                 otherwise   (4)
(b) Note that the joint PDF is symmetric in x and y so that fY (y) = f X (y).
(a) The joint PDF of X and Y and the region of nonzero probability (the triangle 0 ≤ y ≤ x ≤ 1)
are

    fX,Y(x, y) = { cy  0 ≤ y ≤ x ≤ 1
                 { 0   otherwise   (1)
(b) To find the value of the constant, c, we integrate the joint PDF over all x and y.

    ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y(x, y) dx dy = ∫_0^1 ∫_0^x cy dy dx = ∫_0^1 (cx²/2) dx = cx³/6 |_0^1 = c/6   (2)

Thus c = 6.
(c) We can find the CDF FX(x) = P[X ≤ x] by integrating the joint PDF over the event X ≤ x.
For x < 0, FX(x) = 0. For x > 1, FX(x) = 1. For 0 ≤ x ≤ 1,

    FX(x) = ∫∫_{x'≤x} fX,Y(x', y') dy' dx'   (3)
          = ∫_0^x ∫_0^{x'} 6y' dy' dx'   (4)
          = ∫_0^x 3(x')² dx' = x³   (5)

The complete expression for the CDF of X is

    FX(x) = { 0   x < 0
            { x³  0 ≤ x ≤ 1
            { 1   x ≥ 1   (6)
(d) Similarly, we find the CDF of Y by integrating fX,Y(x, y) over the event Y ≤ y. For y < 0,
FY(y) = 0 and for y > 1, FY(y) = 1. For 0 ≤ y ≤ 1,

    FY(y) = ∫∫_{y'≤y} fX,Y(x', y') dx' dy'   (7)
          = ∫_0^y ∫_{y'}^1 6y' dx' dy'   (8)
          = ∫_0^y 6y'(1 − y') dy'   (9)
          = [3(y')² − 2(y')³]_0^y = 3y² − 2y³   (10)

The complete expression for the CDF of Y is

    FY(y) = { 0           y < 0
            { 3y² − 2y³   0 ≤ y ≤ 1
            { 1           y > 1   (11)
(e) To find P[Y ≤ X/2], we integrate the joint PDF fX,Y(x, y) over the region y ≤ x/2:

    P[Y ≤ X/2] = ∫_0^1 ∫_0^{x/2} 6y dy dx   (12)
               = ∫_0^1 3y² |_0^{x/2} dx   (13)
               = ∫_0^1 (3x²/4) dx = 1/4   (14)
y
4 6 PX,Y (x, y)
W =−2 W =−1 W =1
3/28 6/28 12/28
3 • • •
W =0 W =1 W =3
1/28 2/28 4/28
1 • • •
0 - x
0 1 2 3 4
(a) To find the PMF of W , we simply add the probabilities associated with each possible value of
W:
To find c, we sum the PMF over all possible values of X and Y. We choose c so the sum equals
one.

    Σ_x Σ_y PX,Y(x, y) = Σ_{x=−2,0,2} Σ_{y=−1,0,1} c|x + y|   (1)
                       = 6c + 2c + 6c = 14c   (2)

Thus c = 1/14. Now we can solve the actual problem.
(a) From the above graph, we can calculate the probability of each possible value of w.
PW (−4) = PX,Y (−2, −1) = 3c (3)
PW (−2) = PX,Y (−2, 0) + PX,Y (0, −1) = 3c (4)
PW (0) = PX,Y (−2, 1) + PX,Y (2, −1) = 2c (5)
PW (2) = PX,Y (0, 1) + PX,Y (2, 0) = 3c (6)
PW (4) = PX,Y (2, 1) = 3c (7)
With c = 1/14, we can summarize the PMF as

    PW(w) = { 3/14  w = −4, −2, 2, 4
            { 2/14  w = 0
            { 0     otherwise   (8)
To find the PMF of W, we observe that for w = 1, . . . , 10,

    PW(w) = P[W > w − 1] − P[W > w]   (4)
          = 0.01[(10 − (w − 1))² − (10 − w)²] = 0.01(21 − 2w)   (5)

The complete expression for the PMF of W is

    PW(w) = { 0.01(21 − 2w)  w = 1, 2, . . . , 10
            { 0              otherwise   (6)
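A quick Python sanity check confirms that this is a valid PMF and is consistent with the tail form P[W > w] = 0.01(10 − w)² implied by the step in (4)–(5):

```python
# Sanity check of PW(w) = 0.01(21 - 2w) for w = 1..10.
pmf = {w: 0.01 * (21 - 2 * w) for w in range(1, 11)}
assert all(p > 0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# Consistency with the tail form P[W > w] = 0.01(10 - w)^2 (as the derivation implies):
tail = lambda v: 0.01 * (10 - v) ** 2
for w in range(1, 11):
    assert abs(pmf[w] - (tail(w - 1) - tail(w))) < 1e-12
```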
(a) The minimum value of W is W = 0, which occurs when X = 0 and Y = 0. The maximum
value of W is W = 1, which occurs when X = 1 or Y = 1. The range of W is SW =
{w|0 ≤ w ≤ 1}.
(b) For 0 ≤ w ≤ 1, the CDF of W is

    FW(w) = P[max(X, Y) ≤ w]   (1)
          = P[X ≤ w, Y ≤ w]   (2)
          = ∫_0^w ∫_0^w fX,Y(x, y) dy dx   (3)
Substituting fX,Y(x, y) = x + y yields

    FW(w) = ∫_0^w ∫_0^w (x + y) dy dx   (4)
          = ∫_0^w [xy + y²/2]_{y=0}^{y=w} dx = ∫_0^w (wx + w²/2) dx = w³   (5)
(a) Since the joint PDF f X,Y (x, y) is nonzero only for 0 ≤ y ≤ x ≤ 1, we observe that W =
Y − X ≤ 0 since Y ≤ X . In addition, the most negative value of W occurs when Y = 0 and
X = 1 and W = −1. Hence the range of W is SW = {w| − 1 ≤ w ≤ 0}.
(b) For w < −1, FW(w) = 0. For w > 0, FW(w) = 1. For −1 ≤ w ≤ 0, the CDF of W is

    FW(w) = P[Y − X ≤ w]   (1)
          = ∫_{−w}^1 ∫_0^{x+w} 6y dy dx   (2)
          = ∫_{−w}^1 3(x + w)² dx   (3)
          = (x + w)³ |_{−w}^1 = (1 + w)³   (4)

Therefore, the complete CDF of W is

    FW(w) = { 0         w < −1
            { (1 + w)³  −1 ≤ w ≤ 0
            { 1         w > 0   (5)
Problem 4.6.8 Solution
Random variables X and Y have joint PDF

    fX,Y(x, y) = { 2  0 ≤ y ≤ x ≤ 1
                 { 0  otherwise   (1)
We see that W has a uniform PDF over [0, 1]. Thus E[W ] = 1/2.
    fX,Y(x, y) = { 2  0 ≤ y ≤ x ≤ 1
                 { 0  otherwise   (1)

(a) Since fX,Y(x, y) = 0 for y > x, we can conclude that Y ≤ X and that W = X/Y ≥ 1. Since
Y can be arbitrarily small but positive, W can be arbitrarily large. Hence the range of W is
SW = {w|w ≥ 1}.
(b) For w ≥ 1, the CDF of W is

    FW(w) = P[X/Y ≤ w]   (2)
          = 1 − P[X/Y > w]   (3)
          = 1 − P[Y < X/w]   (4)
          = 1 − 1/w   (5)

Note that we have used the fact that P[Y < X/w] equals 2 times the area of the triangle
{(x, y) : 0 ≤ y < x/w, 0 ≤ x ≤ 1}, since fX,Y(x, y) = 2 over the region of nonzero
probability. The complete CDF is

    FW(w) = { 0        w < 1
            { 1 − 1/w  w ≥ 1   (6)
The PDF of W is found by differentiating the CDF:

    fW(w) = dFW(w)/dw = { 1/w²  w ≥ 1
                        { 0     otherwise   (7)

To find the expected value E[W], we write

    E[W] = ∫_{−∞}^{∞} w fW(w) dw = ∫_1^∞ dw/w.   (8)

However, the integral diverges and E[W] is undefined.
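The CDF FW(w) = 1 − 1/w can be confirmed numerically with a midpoint Riemann sum over the triangular support, as in this Python sketch (a numeric sanity check, not the analytic proof):

```python
# Midpoint Riemann-sum check that P[X/Y <= w] = 1 - 1/w for f(x,y) = 2
# on the triangle 0 <= y <= x <= 1.
N = 600
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]

def Fw(w):
    # Integrate f = 2 over {y <= x, x <= w*y} within the unit square.
    return sum(2 * h * h for x in pts for y in pts if y <= x and x <= w * y)

for w in (1.5, 2.0, 4.0):
    assert abs(Fw(w) - (1 - 1 / w)) < 1e-2
```

The heavy 1/w² tail is what makes E[W] infinite: simulated sample means of W keep growing as the sample size increases rather than settling down.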
Problem 4.7.1 Solution
y
4 6 PX,Y (x, y) In Problem 4.2.1, we found the joint PMF PX,Y (x, y) as
3/28 shown. Also the expected values and variances were
3 • •6/28 •12/28
2 E [X ] = 3 Var[X ] = 10/7 (1)
1
1/28
• •2/28 •4/28 E [Y ] = 5/2 Var[Y ] = 3/4 (2)
0 - x
0 1 2 3 4 We use these results now to solve this problem.
Problem 4.7.2 Solution
y
PX,Y (x, y) 6 In Problem 4.2.2, we found the joint PMF
PX,Y (x, y) shown here. The expected values and
•1/14 1 •1/14 •3/14 variances were found to be
•2/14 •2/14 - x
1 2 E [X ] = 0 Var[X ] = 24/7 (1)
•3/14
•1/14
• 1/14
E [Y ] = 0 Var[Y ] = 5/7 (2)
Problem 4.7.3 Solution
In the solution to Quiz 4.3, the joint PMF and the marginal PMFs are
    rH,B = E[HB] = Σ_{h=−1}^{1} Σ_{b=0,2,4} h b PH,B(h, b)   (2)
since only terms in which both h and b are nonzero make a contribution. Using the marginal PMFs,
the expected values of H and B are

    E[H] = Σ_{h=−1}^{1} h PH(h) = −1(0.6) + 0(0.2) + 1(0.2) = −0.2   (4)
    E[B] = Σ_{b=0,2,4} b PB(b) = 0(0.2) + 2(0.5) + 4(0.3) = 2.2   (5)
The covariance is
    E[Y] = Σ_{y=1}^{4} y PY(y) = 1(25/48) + 2(13/48) + 3(7/48) + 4(3/48) = 7/4   (2)

    E[X] = Σ_{x=1}^{4} x PX(x) = (1/4)(1 + 2 + 3 + 4) = 5/2   (3)
(b) To find the variances, we first find the second moments.

    E[Y²] = Σ_{y=1}^{4} y² PY(y) = 1²(25/48) + 2²(13/48) + 3²(7/48) + 4²(3/48) = 47/12   (4)

    E[X²] = Σ_{x=1}^{4} x² PX(x) = (1/4)(1² + 2² + 3² + 4²) = 15/2   (5)

(c) To find the correlation, we evaluate the product XY over all values of X and Y. Specifically,

    rX,Y = E[XY] = Σ_{x=1}^{4} Σ_{y=1}^{x} x y PX,Y(x, y)   (8)
         = 1/4 + 2/8 + 3/12 + 4/16 + 4/8 + 6/12 + 8/16 + 9/12 + 12/16 + 16/16   (9)
         = 5   (10)
The expected values are

    E[X] = Σ_{x=0}^{5} x (x + 1)/21 = 70/21 = 10/3,   E[Y] = Σ_{y=0}^{5} y (6 − y)/21 = 35/21 = 5/3   (5)

    E[XY] = Σ_{x=0}^{5} Σ_{y=0}^{x} xy/21 = (1/42) Σ_{x=1}^{5} x²(x + 1) = 280/42 = 20/3   (6)
y
4 6 PX,Y (x, y)
1/16
• To solve this problem, we
W =4
V =4 identify the values of W =
1/12 1/16 min(X, Y ) and V = max(X, Y )
3 • • for each possible pair x, y. Here
W =3 W =3
V =3 V =4 we observe that W = Y and
1/8 1/12 1/16 V = X . This is a result
2 • • •
W =2 W =2 W =2 of the underlying experiment
V =2 V =3 V =4
in that given X = x, each
1/4 1/8 1/12 1/16
1 • • • • Y ∈ {1, 2, . . . , x} is equally
W =1 W =1 W =1 W =1
V =1 V =2 V =3 V =4 likely. Hence Y ≤ X . This
implies min(X, Y ) = Y and
- x
0 max(X, Y ) = X .
0 1 2 3 4
Using the results from Problem 4.7.4, we have the following answers.
(d) The covariance of W and V is
Cov [W, V ] = Cov [X, Y ] = 10/16 (4)
(b) The expected value of Y is

    E[Y] = ∫_{−∞}^{∞} y fY(y) dy = ∫_0^2 y (2y + 1)/6 dy = [y³/9 + y²/12]_0^2 = 11/9   (7)

The second moment of Y is

    E[Y²] = ∫_{−∞}^{∞} y² fY(y) dy = ∫_0^2 y² (2y + 1)/6 dy = [y⁴/12 + y³/18]_0^2 = 16/9   (8)
(b) The mean of Y is

    E[Y] = ∫_0^1 ∫_0^1 y · 4xy dy dx = ∫_0^1 (4x/3) dx = 2/3   (3)

The second moment of Y is

    E[Y²] = ∫_0^1 ∫_0^1 y² · 4xy dy dx = ∫_0^1 x dx = 1/2   (4)
    fX,Y(x, y) = { 5x²/2  −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
                 { 0      otherwise   (1)

Since E[X] = 0, the variance of X and the second moment are both

    Var[X] = E[X²] = ∫_{−1}^{1} ∫_0^{x²} x² (5x²/2) dy dx = [5x⁷/14]_{−1}^{1} = 10/14   (3)
(b) The first and second moments of Y are

    E[Y] = ∫_{−1}^{1} ∫_0^{x²} y (5x²/2) dy dx = 5/14   (4)

    E[Y²] = ∫_{−1}^{1} ∫_0^{x²} y² (5x²/2) dy dx = 5/27   (5)
    fX,Y(x, y) = { 2  0 ≤ y ≤ x ≤ 1
                 { 0  otherwise   (1)

Before finding moments, it is helpful to first find the marginal PDFs. For 0 ≤ x ≤ 1,

    fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy = ∫_0^x 2 dy = 2x   (2)

Similarly, for 0 ≤ y ≤ 1,

    fY(y) = ∫_{−∞}^{∞} fX,Y(x, y) dx = ∫_y^1 2 dx = 2(1 − y)   (3)

Also, for y < 0 or y > 1, fY(y) = 0. Complete expressions for the marginal PDFs are

    fX(x) = { 2x  0 ≤ x ≤ 1        fY(y) = { 2(1 − y)  0 ≤ y ≤ 1
            { 0   otherwise                { 0         otherwise   (4)
(a) The first two moments of X are

    E[X] = ∫_{−∞}^{∞} x fX(x) dx = ∫_0^1 2x² dx = 2/3   (5)

    E[X²] = ∫_{−∞}^{∞} x² fX(x) dx = ∫_0^1 2x³ dx = 1/2   (6)
Problem 4.7.13 Solution
For this problem, calculating the marginal PMF of K is not easy. However, the marginal PMF of N
is easy to find. For n = 1, 2, . . .,
    PN(n) = Σ_{k=1}^{n} (1/n)(1 − p)^{n−1} p = (1 − p)^{n−1} p   (1)
Applying the values of E[N] and E[N²] found above, we find that

    E[K²] = E[N²]/3 + E[N]/2 + 1/6 = 2/(3p²) + 1/(6p) + 1/6   (9)

Thus, we can calculate the variance of K:

    Var[K] = E[K²] − (E[K])² = 5/(12p²) − 1/(3p) − 1/12   (10)

(As a check, when p = 1 we have K = N = 1 with certainty, and the formula gives Var[K] = 0.)
To find the correlation of N and K,

    E[NK] = Σ_{n=1}^{∞} Σ_{k=1}^{n} nk (1 − p)^{n−1} p / n = Σ_{n=1}^{∞} (1 − p)^{n−1} p Σ_{k=1}^{n} k   (11)

Since Σ_{k=1}^{n} k = n(n + 1)/2,

    E[NK] = Σ_{n=1}^{∞} (n(n + 1)/2)(1 − p)^{n−1} p = E[N(N + 1)/2] = 1/p²   (12)
    P[A] = P[X > 5, Y > 5] = Σ_{x=6}^{10} Σ_{y=6}^{10} 0.01 = 0.25   (1)

    P[B] = P[X ≤ 5, Y ≤ 5] = Σ_{x=1}^{5} Σ_{y=1}^{5} 0.01 = 0.25   (1)
Problem 4.8.3 Solution
Given the event A = {X + Y ≤ 1}, we wish to find fX,Y|A(x, y). First we find

    P[A] = ∫_0^1 ∫_0^{1−x} 6e^{−(2x+3y)} dy dx = 1 − 3e^{−2} + 2e^{−3}   (1)

So then

    fX,Y|A(x, y) = { 6e^{−(2x+3y)}/(1 − 3e^{−2} + 2e^{−3})  x + y ≤ 1, x ≥ 0, y ≥ 0
                  { 0                                       otherwise   (2)
    PN(n) = Σ_{k=1}^{n} PN,K(n, k) = Σ_{k=1}^{n} (1/n)(1 − p)^{n−1} p = (1 − p)^{n−1} p   (1)

The conditional PMF PN|B(n|b) could be found directly from PN(n) using Theorem 2.17. However,
we can also find it just by summing the conditional joint PMF:

    PN|B(n) = Σ_{k=1}^{n} PN,K|B(n, k) = { (1 − p)^{n−10} p  n = 10, 11, . . .
                                         { 0                otherwise   (5)

From the conditional PMF PN|B(n), we can calculate directly the conditional moments of N given
B. Instead, however, we observe that given B, N' = N − 9 has a geometric PMF with mean 1/p.
That is, for n = 1, 2, . . .,
Note that further along in the problem we will need E[N²|B], which we now calculate:

    E[N²|B] = Var[N|B] + (E[N|B])²   (9)
            = 2/p² + 17/p + 81   (10)
For the conditional moments of K, we work directly with the conditional PMF PN,K|B(n, k):

    E[K|B] = Σ_{n=10}^{∞} Σ_{k=1}^{n} k (1 − p)^{n−10} p / n = Σ_{n=10}^{∞} ((1 − p)^{n−10} p / n) Σ_{k=1}^{n} k   (11)

Since Σ_{k=1}^{n} k = n(n + 1)/2,

    E[K|B] = Σ_{n=10}^{∞} ((n + 1)/2)(1 − p)^{n−10} p = (1/2) E[N + 1|B] = 1/(2p) + 5   (12)
Applying the values of E[N|B] and E[N²|B] found above, we find that

    E[K²|B] = E[N²|B]/3 + E[N|B]/2 + 1/6 = 2/(3p²) + 37/(6p) + 95/3   (16)

Thus, we can calculate the conditional variance of K:

    Var[K|B] = E[K²|B] − (E[K|B])² = 5/(12p²) + 7/(6p) + 20/3   (17)
To find the conditional correlation of N and K,

    E[NK|B] = Σ_{n=10}^{∞} Σ_{k=1}^{n} nk (1 − p)^{n−10} p / n = Σ_{n=10}^{∞} (1 − p)^{n−10} p Σ_{k=1}^{n} k   (18)

Since Σ_{k=1}^{n} k = n(n + 1)/2,

    E[NK|B] = Σ_{n=10}^{∞} (n(n + 1)/2)(1 − p)^{n−10} p = (1/2) E[N(N + 1)|B] = 1/p² + 9/p + 45   (19)
Problem 4.8.5 Solution
The joint PDF of X and Y is

    fX,Y(x, y) = { (x + y)/3  0 ≤ x ≤ 1, 0 ≤ y ≤ 2
                 { 0          otherwise   (1)
(a) The probability that Y ≤ 1 is

    P[A] = P[Y ≤ 1] = ∫∫_{y≤1} fX,Y(x, y) dx dy   (2)
         = ∫_0^1 ∫_0^1 (x + y)/3 dy dx   (3)
         = ∫_0^1 [xy/3 + y²/6]_{y=0}^{y=1} dx   (4)
         = ∫_0^1 (2x + 1)/6 dx = [x²/6 + x/6]_0^1 = 1/3   (5)
From fX,Y|A(x, y), we find the conditional marginal PDF fX|A(x). For 0 ≤ x ≤ 1,

    fX|A(x) = ∫_{−∞}^{∞} fX,Y|A(x, y) dy = ∫_0^1 (x + y) dy = [xy + y²/2]_{y=0}^{y=1} = x + 1/2   (7)

The complete expression is

    fX|A(x) = { x + 1/2  0 ≤ x ≤ 1
              { 0        otherwise   (8)

For 0 ≤ y ≤ 1, the conditional marginal PDF of Y is

    fY|A(y) = ∫_{−∞}^{∞} fX,Y|A(x, y) dx = ∫_0^1 (x + y) dx = [x²/2 + xy]_{x=0}^{x=1} = y + 1/2   (9)
(a) The probability of event A = {Y ≤ 1/2} is

    P[A] = ∫∫_{y≤1/2} fX,Y(x, y) dy dx = ∫_0^1 ∫_0^{1/2} (4x + 2y)/3 dy dx.   (2)
(a) The event A = {Y ≤ 1/4} has probability

    P[A] = 2 ∫_0^{1/2} ∫_0^{x²} (5x²/2) dy dx + 2 ∫_{1/2}^{1} ∫_0^{1/4} (5x²/2) dy dx   (1)
         = ∫_0^{1/2} 5x⁴ dx + ∫_{1/2}^{1} (5x²/4) dx   (2)
         = x⁵ |_0^{1/2} + 5x³/12 |_{1/2}^{1} = 19/48   (3)

This implies

    fX,Y|A(x, y) = { fX,Y(x, y)/P[A]  (x, y) ∈ A
                  { 0                otherwise   (4)
                = { 120x²/19  −1 ≤ x ≤ 1, 0 ≤ y ≤ x², y ≤ 1/4
                  { 0         otherwise   (5)

(b)

    fY|A(y) = ∫_{−∞}^{∞} fX,Y|A(x, y) dx = 2 ∫_{√y}^{1} (120x²/19) dx
            = { (80/19)(1 − y^{3/2})  0 ≤ y ≤ 1/4
              { 0                     otherwise   (6)
Problem 4.9.1 Solution
The main part of this problem is just interpreting the problem statement. No calculations are necessary. Since a trip is equally likely to last 2, 3 or 4 days,

P_D(d) = \begin{cases} 1/3 & d = 2, 3, 4 \\ 0 & \text{otherwise} \end{cases}   (1)

Given a trip lasts d days, the weight change is equally likely to be any value between -d and d pounds. Thus,

P_{W|D}(w|d) = \begin{cases} 1/(2d+1) & w = -d, -d+1, \ldots, d \\ 0 & \text{otherwise} \end{cases}   (2)

The joint PMF is simply

P_{D,W}(d,w) = P_{W|D}(w|d) P_D(d) = \begin{cases} 1/(6d+3) & d = 2, 3, 4; \ w = -d, \ldots, d \\ 0 & \text{otherwise} \end{cases}   (3)
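The joint PMF above can be tabulated and checked directly. A Python sketch (not part of the text; the manual's own code is MATLAB) using exact rational arithmetic:

```python
# Tabulate P_{D,W}(d,w) = 1/(6d+3) from Problem 4.9.1 and verify it is a valid PMF.
from fractions import Fraction

pmf = {(d, w): Fraction(1, 6 * d + 3)
       for d in (2, 3, 4) for w in range(-d, d + 1)}

total = sum(pmf.values())                        # probabilities must sum to 1
e_w = sum(w * p for (d, w), p in pmf.items())    # E[W] = 0 by symmetry in w
print(total, e_w)                                # -> 1 0
```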
Problem 4.9.3 Solution
f_{X,Y}(x,y) = \begin{cases} x+y & 0 \le x, y \le 1 \\ 0 & \text{otherwise} \end{cases}   (1)

(a) The conditional PDF f_{X|Y}(x|y) is defined for all y such that 0 ≤ y ≤ 1. For 0 ≤ y ≤ 1,

f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{x+y}{\int_0^1 (x+y) \, dx} = \begin{cases} \dfrac{x+y}{y+1/2} & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}   (2)

(b) The conditional PDF f_{Y|X}(y|x) is defined for all values of x in the interval [0, 1]. For 0 ≤ x ≤ 1,

f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{x+y}{\int_0^1 (x+y) \, dy} = \begin{cases} \dfrac{x+y}{x+1/2} & 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}   (3)
Random variables X and Y have joint PDF

f_{X,Y}(x,y) = \begin{cases} 2 & 0 \le y \le x \le 1 \\ 0 & \text{otherwise} \end{cases}   (1)

For 0 ≤ y ≤ 1,

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx = \int_y^1 2 \, dx = 2(1-y)   (2)

Also, for y < 0 or y > 1, f_Y(y) = 0. The complete expression for the marginal PDF is

f_Y(y) = \begin{cases} 2(1-y) & 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}   (3)

For 0 ≤ y < 1, the conditional PDF of X given Y = y is

f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \begin{cases} \dfrac{1}{1-y} & y \le x \le 1 \\ 0 & \text{otherwise} \end{cases}   (4)

That is, since Y ≤ X ≤ 1, X is uniform over [y, 1] when Y = y. The conditional expectation of X given Y = y can be calculated as

E[X|Y=y] = \int_{-\infty}^{\infty} x f_{X|Y}(x|y) \, dx = \int_y^1 \frac{x}{1-y} \, dx = \frac{x^2}{2(1-y)} \Big|_y^1 = \frac{1+y}{2}   (5)

In fact, since we know that the conditional PDF of X is uniform over [y, 1] when Y = y, it wasn't really necessary to perform the calculation.
Problem 4.9.5 Solution
Random variables X and Y have joint PDF

f_{X,Y}(x,y) = \begin{cases} 2 & 0 \le y \le x \le 1 \\ 0 & \text{otherwise} \end{cases}   (1)

For 0 ≤ x ≤ 1, the marginal PDF for X satisfies

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy = \int_0^x 2 \, dy = 2x   (2)

Note that f_X(x) = 0 for x < 0 or x > 1. Hence the complete expression for the marginal PDF of X is

f_X(x) = \begin{cases} 2x & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}   (3)

The conditional PDF of Y given X = x is

f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \begin{cases} 1/x & 0 \le y \le x \\ 0 & \text{otherwise} \end{cases}   (4)

Given X = x, Y has a uniform PDF over [0, x] and thus has conditional expected value E[Y|X=x] = x/2. Another way to obtain this result is to calculate \int_{-\infty}^{\infty} y f_{Y|X}(y|x) \, dy.
(a) First we observe that A takes on the values S A = {−1, 1} while B takes on values from
S B = {0, 1}. To construct a table describing PA,B (a, b) we build a table for all possible
values of pairs (A, B). The general form of the entries is
PA,B (a, b) b=0 b=1
a = −1 PB|A (0| − 1) PA (−1) PB|A (1| − 1) PA (−1) (1)
a=1 PB|A (0|1) PA (1) PB|A (1|1) PA (1)
Now we fill in the entries using the conditional PMFs PB|A (b|a) and the marginal PMF PA (a).
This yields
P_{A,B}(a,b)    b = 0         b = 1
a = -1          (1/3)(1/3)    (2/3)(1/3)
a = 1           (1/2)(2/3)    (1/2)(2/3)

which simplifies to

P_{A,B}(a,b)    b = 0    b = 1
a = -1          1/9      2/9          (2)
a = 1           1/3      1/3
(b) Since P_A(1) = P_{A,B}(1,0) + P_{A,B}(1,1) = 2/3,

P_{B|A}(b|1) = \frac{P_{A,B}(1,b)}{P_A(1)} = \begin{cases} 1/2 & b = 0, 1 \\ 0 & \text{otherwise} \end{cases}   (3)

The conditional expectation of B given A = 1 is

E[B|A=1] = \sum_{b=0}^{1} b P_{B|A}(b|1) = P_{B|A}(1|1) = 1/2.   (4)
(c) Before finding the conditional PMF P_{A|B}(a|1), we first sum the columns of the joint PMF table to find

P_B(b) = \begin{cases} 4/9 & b = 0 \\ 5/9 & b = 1 \end{cases}   (5)

The conditional PMF of A given B = 1 is

P_{A|B}(a|1) = \frac{P_{A,B}(a,1)}{P_B(1)} = \begin{cases} 2/5 & a = -1 \\ 3/5 & a = 1 \end{cases}   (6)

(d) Now that we have the conditional PMF P_{A|B}(a|1), calculating conditional expectations is easy.

E[A|B=1] = \sum_{a=-1,1} a P_{A|B}(a|1) = -1(2/5) + 1(3/5) = 1/5   (7)

E[A^2|B=1] = \sum_{a=-1,1} a^2 P_{A|B}(a|1) = 2/5 + 3/5 = 1   (8)

E[B] = \sum_{b=0}^{1} b P_B(b) = 0(4/9) + 1(5/9) = 5/9   (11)

E[AB] = \sum_{a=-1,1} \sum_{b=0}^{1} ab P_{A,B}(a,b)   (12)
Problem 4.9.8 Solution
First we need to find the conditional expectations

E[B|A=-1] = \sum_{b=0}^{1} b P_{B|A}(b|-1) = 0(1/3) + 1(2/3) = 2/3   (1)

E[B|A=1] = \sum_{b=0}^{1} b P_{B|A}(b|1) = 0(1/2) + 1(1/2) = 1/2   (2)

Keep in mind that E[B|A] is a random variable that is a function of A. That is, we can write

U = E[B|A] = g(A) = \begin{cases} 2/3 & A = -1 \\ 1/2 & A = 1 \end{cases}   (3)

We see that the range of U is S_U = {1/2, 2/3}. In particular,

P_U(1/2) = P_A(1) = 2/3, \qquad P_U(2/3) = P_A(-1) = 1/3   (4)

The complete PMF of U is

P_U(u) = \begin{cases} 2/3 & u = 1/2 \\ 1/3 & u = 2/3 \end{cases}   (5)

Note that

E[E[B|A]] = E[U] = \sum_u u P_U(u) = (1/2)(2/3) + (2/3)(1/3) = 5/9   (6)
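The iterated-expectation identity E[E[B|A]] = E[B] in (6) can be checked mechanically. A Python sketch (not part of the text; the manual's own code is MATLAB) using the PMFs of this problem:

```python
# Verify E[E[B|A]] = E[B] for the PMFs of Problem 4.9.8 with exact fractions.
from fractions import Fraction as F

p_a = {-1: F(1, 3), 1: F(2, 3)}
p_b_given_a = {-1: {0: F(1, 3), 1: F(2, 3)},   # P_{B|A}(b|-1)
               1:  {0: F(1, 2), 1: F(1, 2)}}   # P_{B|A}(b|1)

# E[B|A=a] for each a, then average over the PMF of A
e_b_given_a = {a: sum(b * p for b, p in p_b_given_a[a].items()) for a in p_a}
e_u = sum(e_b_given_a[a] * p_a[a] for a in p_a)

# Direct E[B] from the joint PMF P_{A,B}(a,b) = P_{B|A}(b|a) P_A(a)
e_b = sum(b * p_b_given_a[a][b] * p_a[a] for a in p_a for b in (0, 1))
print(e_u, e_b)   # -> 5/9 5/9
```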
Problem 4.9.10 Solution
This problem is fairly easy when we use conditional PMF’s. In particular, given that N = n pizzas
were sold before noon, each of those pizzas has mushrooms with probability 1/3. The conditional
PMF of M given N is the binomial distribution

P_{M|N}(m|n) = \begin{cases} \binom{n}{m} (1/3)^m (2/3)^{n-m} & m = 0, 1, \ldots, n \\ 0 & \text{otherwise} \end{cases}   (1)

The other fact we know is that for each of the 100 pizzas sold, the pizza is sold before noon with probability 1/2. Hence, N has the binomial PMF

P_N(n) = \begin{cases} \binom{100}{n} (1/2)^n (1/2)^{100-n} & n = 0, 1, \ldots, 100 \\ 0 & \text{otherwise} \end{cases}   (2)
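Summing P_{M|N}(m|n) P_N(n) over n shows that M itself is binomial (100, 1/6), since each of the 100 pizzas is independently a before-noon mushroom pizza with probability (1/2)(1/3). A Python sketch (not part of the text) verifying this exactly:

```python
# Check that sum_n P_{M|N}(m|n) P_N(n) equals the binomial (100, 1/6) PMF.
from math import comb
from fractions import Fraction as F

def binom(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_m(m):
    # Law of total probability over N = m, ..., 100
    return sum(binom(n, m, F(1, 3)) * binom(100, n, F(1, 2))
               for n in range(m, 101))

ok = all(p_m(m) == binom(100, m, F(1, 6)) for m in range(0, 20))
print(ok)   # -> True
```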
The joint PDF of X and Y is

f_{X,Y}(x,y) = \begin{cases} 1/2 & -1 \le x \le y \le 1 \\ 0 & \text{otherwise} \end{cases}   (1)

(c) Given Y = y, the conditional PDF of X is uniform over [-1, y]. Hence the conditional expected value is E[X|Y=y] = (y-1)/2.
Problem 4.9.12 Solution
We are given that the joint PDF of X and Y is
f_{X,Y}(x,y) = \begin{cases} 1/(\pi r^2) & 0 \le x^2 + y^2 \le r^2 \\ 0 & \text{otherwise} \end{cases}   (1)

(a) The marginal PDF of X is

f_X(x) = \int_{-\sqrt{r^2-x^2}}^{\sqrt{r^2-x^2}} \frac{1}{\pi r^2} \, dy = \begin{cases} \dfrac{2\sqrt{r^2-x^2}}{\pi r^2} & -r \le x \le r \\ 0 & \text{otherwise} \end{cases}   (2)

The conditional PDF of Y given X is

f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \begin{cases} \dfrac{1}{2\sqrt{r^2-x^2}} & y^2 \le r^2 - x^2 \\ 0 & \text{otherwise} \end{cases}   (3)

(b) Given X = x, we observe that over the interval [-\sqrt{r^2-x^2}, \sqrt{r^2-x^2}], Y has a uniform PDF. Since the conditional PDF f_{Y|X}(y|x) is symmetric about y = 0,

E[Y|X=x] = 0   (4)
P_N(n) = \sum_{m=1}^{n-1} (1-p)^{n-2} p^2 = (n-1)(1-p)^{n-2} p^2, \quad n = 2, 3, \ldots   (5)
The complete expressions for the marginal PMFs are

P_M(m) = \begin{cases} (1-p)^{m-1} p & m = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (9)

P_N(n) = \begin{cases} (n-1)(1-p)^{n-2} p^2 & n = 2, 3, \ldots \\ 0 & \text{otherwise} \end{cases}   (10)

Not surprisingly, if we view each voice call as a successful Bernoulli trial, M has a geometric PMF since it is the number of trials up to and including the first success. Also, N has a Pascal PMF since it is the number of trials required to see 2 successes. The conditional PMFs are now easy to find.

P_{N|M}(n|m) = \frac{P_{M,N}(m,n)}{P_M(m)} = \begin{cases} (1-p)^{n-m-1} p & n = m+1, m+2, \ldots \\ 0 & \text{otherwise} \end{cases}   (11)
(a) The number of buses, N , must be greater than zero. Also, the number of minutes that pass
cannot be less than the number of buses. Thus, P[N = n, T = t] > 0 for integers n, t satis-
fying 1 ≤ n ≤ t.
(b) First, we find the joint PMF of N and T by carefully considering the possible sample paths.
In particular, PN ,T (n, t) = P[ABC] = P[A]P[B]P[C] where the events A, B and C are
These events are independent since each trial to board a bus is independent of when the buses
arrive. These events have probabilities
P[A] = \binom{t-1}{n-1} p^{n-1} (1-p)^{t-1-(n-1)}   (4)

P[B] = (1-q)^{n-1}   (5)

P[C] = pq   (6)
(c) It is possible to find the marginal PMF’s by summing the joint PMF. However, it is much
easier to obtain the marginal PMFs by consideration of the experiment. Specifically, when a
bus arrives, it is boarded with probability q. Moreover, the experiment ends when a bus is
boarded. By viewing whether each arriving bus is boarded as an independent trial, N is the
number of trials until the first success. Thus, N has the geometric PMF
P_N(n) = \begin{cases} (1-q)^{n-1} q & n = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (8)
To find the PMF of T , suppose we regard each minute as an independent trial in which a
success occurs if a bus arrives and that bus is boarded. In this case, the success probability is
pq and T is the number of minutes up to and including the first success. The PMF of T is
also geometric.
P_T(t) = \begin{cases} (1-pq)^{t-1} pq & t = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (9)
(d) Once we have the marginal PMFs, the conditional PMFs are easy to find.

P_{N|T}(n|t) = \frac{P_{N,T}(n,t)}{P_T(t)} = \begin{cases} \dbinom{t-1}{n-1} \left(\dfrac{p(1-q)}{1-pq}\right)^{n-1} \left(\dfrac{1-p}{1-pq}\right)^{t-1-(n-1)} & n = 1, 2, \ldots, t \\ 0 & \text{otherwise} \end{cases}   (10)

That is, given you depart at time T = t, the number of buses that arrive during minutes 1, \ldots, t-1 has a binomial PMF since in each minute a bus arrives with probability p. Similarly, the conditional PMF of T given N is

P_{T|N}(t|n) = \frac{P_{N,T}(n,t)}{P_N(n)} = \begin{cases} \dbinom{t-1}{n-1} p^n (1-p)^{t-n} & t = n, n+1, \ldots \\ 0 & \text{otherwise} \end{cases}   (11)

This result can be explained. Given that you board bus N = n, the time T when you leave is the time for n buses to arrive. If we view each bus arrival as a success of an independent trial, the time for n buses to arrive has the above Pascal PMF.
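The factorization P_{N,T}(n,t) = P[A]P[B]P[C] and the geometric PMF of T in (9) can be cross-checked numerically. A Python sketch (not part of the text; p and q below are hypothetical values chosen only for the check):

```python
# Check that the joint PMF built from P[A]P[B]P[C] sums over n to P_T(t).
from math import comb
from fractions import Fraction as F

p, q = F(2, 5), F(1, 3)   # hypothetical arrival / boarding probabilities

def joint(n, t):
    # P_{N,T}(n,t) = P[A] P[B] P[C] from eqs (4)-(6)
    return (comb(t - 1, n - 1) * p**(n - 1) * (1 - p)**(t - n)
            * (1 - q)**(n - 1) * p * q)

def p_t(t):
    # Geometric (pq) PMF of T, eq (9)
    return (1 - p * q)**(t - 1) * p * q

ok = all(sum(joint(n, t) for n in range(1, t + 1)) == p_t(t)
         for t in range(1, 8))
print(ok)   # -> True
```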
Note that N is the number of trials required to observe 100 successes. Moreover, the number of trials needed to observe 100 successes is N = T + N' where N' is the number of trials needed to observe successes 2 through 100. Since N' is just the number of trials needed to observe 99 successes, it has the Pascal PMF

P_{N'}(n') = \begin{cases} \dbinom{n'-1}{98} \alpha^{99} (1-\alpha)^{n'-99} & n' = 99, 100, \ldots \\ 0 & \text{otherwise} \end{cases}   (2)
Since the trials needed to generate successes 2 through 100 are independent of the trials that yield the first success, N' and T are independent. Hence

P_{N,T}(n,t) = P_{N'}(n-t) P_T(t) = \binom{n-t-1}{98} \alpha^{100} (1-\alpha)^{n-100}

This solution can also be found by consideration of the sample sequence of Bernoulli trials in which we either observe or do not observe a fax message. To find the conditional PMF P_{T|N}(t|n), we first must recognize that N is simply the number of trials needed to observe 100 successes and thus has the Pascal PMF

P_N(n) = \begin{cases} \dbinom{n-1}{99} \alpha^{100} (1-\alpha)^{n-100} & n = 100, 101, \ldots \\ 0 & \text{otherwise} \end{cases}   (7)

Hence the conditional PMF is

P_{T|N}(t|n) = \frac{P_{N,T}(n,t)}{P_N(n)} = \frac{\binom{n-t-1}{98}}{\binom{n-1}{99}}, \quad t = 1, 2, \ldots, n-99   (8)
The joint PMF of X and Y can be expressed as the product of the marginal PMFs because we know that X and Y are independent.

P_{X,Y}(x,y) = \begin{cases} \dbinom{75}{x} \dbinom{25}{y} (1/2)^{100} & x = 0, 1, \ldots, 75; \ y = 0, 1, \ldots, 25 \\ 0 & \text{otherwise} \end{cases}   (2)
We can calculate the requested moments.
(a) Normally, checking independence requires the marginal PMFs. However, in this problem, the zeroes in the table of the joint PMF P_{X,Y}(x,y) allow us to verify very quickly that X and Y are dependent. In particular, P_X(-1) = 1/4 and P_Y(1) = 14/48 but P_{X,Y}(-1,1) = 0 \ne P_X(-1) P_Y(1).
(b) To fill in the tree diagram, we need the marginal PMF PX (x) and the conditional PMFs
PY |X (y|x). By summing the rows on the table for the joint PMF, we obtain
Now we use the conditional PMF PY |X (y|x) = PX,Y (x, y)/PX (x) to write
P_{Y|X}(y|-1) = \begin{cases} 3/4 & y = -1 \\ 1/4 & y = 0 \\ 0 & \text{otherwise} \end{cases} \qquad P_{Y|X}(y|0) = \begin{cases} 1/3 & y = -1, 0, 1 \\ 0 & \text{otherwise} \end{cases}   (3)

P_{Y|X}(y|1) = \begin{cases} 1/2 & y = 0, 1 \\ 0 & \text{otherwise} \end{cases}   (4)
Now we can use these probabilities to label the tree. The first branch level gives the value of X with probability P_X(x); the second level gives the value of Y with probability P_{Y|X}(y|x):

X = -1 (probability P_X(-1) = 1/4): Y = -1 with probability 3/4; Y = 0 with probability 1/4.
X = 0 (probability P_X(0) = 1/2): Y = -1, Y = 0, Y = 1 each with probability 1/3.
X = 1 (probability P_X(1) = 1/4): Y = 0, Y = 1 each with probability 1/2.
Since PM|N (m|n) depends on the event N = n, we see that M and N are dependent.
Similarly, no matter how large X 1 may be, the number of additional flips for the second heads
is the same experiment as the number of flips needed for the first occurrence of heads. That is,
PX 2 (x) = PX 1 (x). Moreover, the flips needed to generate the second occurrence of heads are
independent of the flips that yield the first heads. Hence, it should be apparent that X 1 and X 2 are
independent and
P_{X_1,X_2}(x_1,x_2) = P_{X_1}(x_1) P_{X_2}(x_2) = \begin{cases} (1-p)^{x_1+x_2-2} p^2 & x_1 = 1, 2, \ldots; \ x_2 = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (2)

However, if this independence is not obvious, it can be derived by examination of the sample path. When x_1 \ge 1 and x_2 \ge 1, the event \{X_1 = x_1, X_2 = x_2\} occurs iff we observe the sample sequence

\underbrace{tt \cdots t}_{x_1 - 1 \text{ times}} \, h \, \underbrace{tt \cdots t}_{x_2 - 1 \text{ times}} \, h   (3)

The above sample sequence has probability (1-p)^{x_1-1} p (1-p)^{x_2-1} p, which in fact equals P_{X_1,X_2}(x_1,x_2) given earlier.
Problem 4.10.6 Solution
We will solve this problem when the probability of heads is p. For the fair coin, p = 1/2. The
number X 1 of flips until the first heads and the number X 2 of additional flips for the second heads
both have the geometric PMF
P_{X_1}(x) = P_{X_2}(x) = \begin{cases} (1-p)^{x-1} p & x = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (1)

Thus, E[X_i] = 1/p and Var[X_i] = (1-p)/p^2. By Theorem 4.14,

E[Y] = E[X_1] - E[X_2] = 0   (2)

Since X_1 and X_2 are independent, Theorem 4.27 says

Var[Y] = Var[X_1] + Var[-X_2] = Var[X_1] + Var[X_2] = \frac{2(1-p)}{p^2}   (3)
(b) By Theorem 3.5(f), Var[−X 2 ] = (−1)2 Var[X 2 ] = Var[X 2 ]. Since X 1 and X 2 are indepen-
dent, Theorem 4.27(a) says that
Var[X 1 − X 2 ] = Var[X 1 + (−X 2 )] = Var[X 1 ] + Var[−X 2 ] = 2 Var[X ] (2)
Problem 4.10.9 Solution
Since X and Y take on only integer values, W = X + Y is integer valued as well. Thus for any integer w,
PW (w) = P [W = w] = P [X + Y = w] . (1)
Since X and Y are independent, PX,Y (k, w − k) = PX (k)PY (w − k). It follows that for any integer
w,
P_W(w) = \sum_{k=-\infty}^{\infty} P_X(k) P_Y(w-k).   (3)
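The sum in (3) is a discrete convolution of the two marginal PMFs. A Python sketch (not part of the text; the two PMFs below are hypothetical, chosen only to illustrate the computation):

```python
# Discrete convolution: P_W(w) = sum_k P_X(k) P_Y(w - k) for independent X, Y.
from fractions import Fraction as F

p_x = {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)}   # hypothetical PMF of X
p_y = {0: F(1, 3), 1: F(2, 3)}               # hypothetical PMF of Y

p_w = {}
for x, px in p_x.items():
    for y, py in p_y.items():
        p_w[x + y] = p_w.get(x + y, F(0)) + px * py

print(sum(p_w.values()))   # -> 1  (a valid PMF)
```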
This table can be translated directly into the joint PMF of N and D.
(b) We find the marginal PMF PD (d) by summing the columns of the joint PMF. This yields
P_D(d) = \begin{cases} 0.3 & d = 20 \\ 0.4 & d = 100 \\ 0.3 & d = 300 \\ 0 & \text{otherwise} \end{cases}   (2)
(c) To find the conditional PMF PD|N (d|2), we first need to find the probability of the condition-
ing event
PN (2) = PN ,D (2, 20) + PN ,D (2, 100) + PN ,D (2, 300) = 0.4 (3)
The conditional PMF of D given N = 2 is

P_{D|N}(d|2) = \frac{P_{N,D}(2,d)}{P_N(2)} = \begin{cases} 1/4 & d = 20 \\ 1/2 & d = 100 \\ 1/4 & d = 300 \\ 0 & \text{otherwise} \end{cases}   (4)
(e) To check independence, we could calculate the marginal PMFs of N and D. In this case,
however, it is simpler to observe that PD (d) = PD|N (d|2). Hence N and D are dependent.
(f) In terms of N and D, the cost (in cents) of a fax is C = N D. The expected value of C is
E [C] = nd PN ,D (n, d) (6)
n,d
Factory Q Factory R
small order 0.3 0.2
medium order 0.1 0.2
large order 0.1 0.1
(b) Before we find E[B], it will prove helpful to find the marginal PMFs PB (b) and PM (m).
These can be found from the row and column sums of the table of the joint PMF
(c) From the marginal PMF of B, we know that P_B(2) = 0.3. The conditional PMF of M given B = 2 is

P_{M|B}(m|2) = \frac{P_{B,M}(2,m)}{P_B(2)} = \begin{cases} 1/3 & m = 60 \\ 2/3 & m = 180 \\ 0 & \text{otherwise} \end{cases}   (4)
(e) From the marginal PMFs we calculated in the table of part (b), we can conclude that B and M are not independent, since P_{B,M}(1,60) \ne P_B(1) P_M(60).
(f) In terms of M and B, the cost (in cents) of sending a shipment is C = B M. The expected
value of C is
E [C] = bm PB,M (b, m) (6)
b,m
(a) Since X_1 and X_2 are identically distributed they will share the same CDF F_X(x).

F_X(x) = \int_0^x f_X(x') \, dx' = \begin{cases} 0 & x \le 0 \\ x^2/4 & 0 \le x \le 2 \\ 1 & x \ge 2 \end{cases}   (2)
(b) Since X_1 and X_2 are independent, we can say that

P[X_1 \le 1, X_2 \le 1] = P[X_1 \le 1] P[X_2 \le 1] = F_{X_1}(1) F_{X_2}(1) = [F_X(1)]^2 = \frac{1}{16}   (3)
(d)
(b) First, we need to calculate the conditional joint PDF f_{X,Y|A}(x,y). The first step is to write down the joint PDF of X and Y:
f_{X,Y}(x,y) = f_X(x) f_Y(y) = \begin{cases} 6xy^2 & 0 \le x \le 1, \ 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}   (4)
The event A has probability

P[A] = \iint_{x>y} f_{X,Y}(x,y) \, dy \, dx   (5)
     = \int_0^1 \int_0^x 6xy^2 \, dy \, dx   (6)
     = \int_0^1 2x^4 \, dx = 2/5   (7)

The conditional joint PDF of X and Y given A is

f_{X,Y|A}(x,y) = \begin{cases} \dfrac{f_{X,Y}(x,y)}{P[A]} & (x,y) \in A \\ 0 & \text{otherwise} \end{cases}   (8)
             = \begin{cases} 15xy^2 & 0 \le y \le x \le 1 \\ 0 & \text{otherwise} \end{cases}   (9)

The triangular region of nonzero probability is a signal that given A, X and Y are no longer independent. The conditional expected value of X given A is

E[X|A] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f_{X,Y|A}(x,y) \, dy \, dx   (10)
       = 15 \int_0^1 x^2 \int_0^x y^2 \, dy \, dx   (11)
       = 5 \int_0^1 x^5 \, dx = 5/6   (12)

We see that E[X|A] > E[X] while E[Y|A] < E[Y]. That is, learning X > Y gives us a clue that X may be larger than usual while Y may be smaller than usual.
By Definition 4.3,

F_{X,Y}(x,y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{X,Y}(u,v) \, dv \, du   (3)
            = \int_{-\infty}^{x} f_X(u) \, du \int_{-\infty}^{y} f_Y(v) \, dv   (4)
            = F_X(x) F_Y(y)   (5)
For W = Y − X we can find f W (w) by integrating over the region indicated in the figure below to
get FW (w) then taking the derivative with respect to w. Since Y ≥ X , W = Y − X is nonnegative.
Hence FW (w) = 0 for w < 0. For w ≥ 0,
Y
(a) To find if W and X are independent, we must be able to factor the joint density function
f X,W (x, w) into the product f X (x) f W (w) of marginal density functions. To verify this, we
must find the joint PDF of X and W . First we find the joint CDF.
Since Y ≥ X , the CDF of W satisfies FX,W (x, w) = P[X ≤ x, X ≤ Y ≤ X + w]. Thus, for
x ≥ 0 and w ≥ 0,
F_{X,W}(x,w) = \int_0^x \int_{x'}^{x'+w} \lambda^2 e^{-\lambda y} \, dy \, dx'   (3)
            = \int_0^x \left[ -\lambda e^{-\lambda y} \right]_{x'}^{x'+w} dx'   (4)
            = \int_0^x \left( \lambda e^{-\lambda x'} - \lambda e^{-\lambda(x'+w)} \right) dx'   (5)
            = \left[ e^{-\lambda(x'+w)} - e^{-\lambda x'} \right]_0^x   (6)
            = (1 - e^{-\lambda x})(1 - e^{-\lambda w})   (7)

We see that F_{X,W}(x,w) = F_X(x) F_W(w). Moreover, by applying Theorem 4.4,

f_{X,W}(x,w) = \frac{\partial^2 F_{X,W}(x,w)}{\partial x \, \partial w} = \lambda e^{-\lambda x} \lambda e^{-\lambda w} = f_X(x) f_W(w)   (8)
Since we have our desired factorization, W and X are independent.
(b) Following the same procedure, we find the joint CDF of Y and W .
FW,Y (w, y) = P [W ≤ w, Y ≤ y] = P [Y − X ≤ w, Y ≤ y] = P [Y ≤ X + w, Y ≤ y]
(9)
The complete expression for the joint CDF is
⎧
⎨ 1 − e−λw − λwe−λy 0 ≤ w ≤ y
FW,Y (w, y) = 1 − (1 + λy)e−λy 0≤y≤w (19)
⎩
0 otherwise
The joint PDF f W,Y (w, y) doesn’t factor and thus W and Y are dependent.
Note that U = min(X, Y ) > u if and only if X > u and Y > u. In the same way, since V =
max(X, Y ), V ≤ v if and only if X ≤ v and Y ≤ v. Thus
f_{U,V}(u,v) = \frac{\partial^2 F_{U,V}(u,v)}{\partial u \, \partial v}   (9)
            = \frac{\partial}{\partial u} \left[ f_X(v) F_Y(u) + F_X(u) f_Y(v) \right]   (10)
            = f_X(v) f_Y(u) + f_X(u) f_Y(v)   (11)
Problem 4.11.1 Solution
f_{X,Y}(x,y) = c e^{-(x^2/8) - (y^2/18)}   (1)

The omission of any limits for the PDF indicates that it is defined over all x and y. We know that f_{X,Y}(x,y) is in the form of the bivariate Gaussian distribution so we look to Definition 4.17 and attempt to find values for \sigma_Y, \sigma_X, E[X], E[Y] and \rho. First, we know that the constant is

c = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1-\rho^2}}   (2)

Because the exponent of f_{X,Y}(x,y) doesn't contain any cross terms we know that \rho must be zero, and we are left to solve the following for E[X], E[Y], \sigma_X, and \sigma_Y:

\left( \frac{x - E[X]}{\sigma_X} \right)^2 = \frac{x^2}{8}, \qquad \left( \frac{y - E[Y]}{\sigma_Y} \right)^2 = \frac{y^2}{18}   (3)

From these equations we conclude that

E[X] = E[Y] = 0   (4)

\sigma_X = \sqrt{8}   (5)

\sigma_Y = \sqrt{18}   (6)
(c) Since \rho = 1/\sqrt{2}, now we can solve for \sigma_X and \sigma_Y:

\sigma_X = 1/\sqrt{2}, \qquad \sigma_Y = 1/2   (6)

\mu_X = \mu_Y = 0, \qquad \sigma_X^2 = \sigma_Y^2 = 1   (1)

Because X and Y have a uniform PDF over the bullseye area, P[B] is just the value of the joint PDF over the area times the area of the bullseye.

P[B] = P[X^2 + Y^2 \le 2^2] = 10^{-4} \cdot \pi 2^2 = 4\pi \times 10^{-4} \approx 0.0013   (3)
(b) In this case, the joint PDF of X and Y is inversely proportional to the area of the target.

f_{X,Y}(x,y) = \begin{cases} 1/(\pi 50^2) & x^2 + y^2 \le 50^2 \\ 0 & \text{otherwise} \end{cases}   (4)

The probability of a bullseye is

P[B] = P[X^2 + Y^2 \le 2^2] = \frac{\pi 2^2}{\pi 50^2} = \left( \frac{1}{25} \right)^2 \approx 0.0016.   (5)

(c) In this instance, X and Y have the identical Gaussian (0, \sigma) PDF with \sigma^2 = 100; i.e.,

f_X(x) = f_Y(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-x^2/2\sigma^2}   (6)

Since X and Y are independent, their joint PDF is

f_{X,Y}(x,y) = f_X(x) f_Y(y) = \frac{1}{2\pi\sigma^2} e^{-(x^2+y^2)/2\sigma^2}   (7)

To find P[B], we write

P[B] = P[X^2 + Y^2 \le 2^2] = \iint_{x^2+y^2 \le 2^2} f_{X,Y}(x,y) \, dx \, dy   (8)
     = \frac{1}{2\pi\sigma^2} \iint_{x^2+y^2 \le 2^2} e^{-(x^2+y^2)/2\sigma^2} \, dx \, dy   (9)
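Changing (9) to polar coordinates gives the closed form P[B] = 1 - e^{-2^2/(2\sigma^2)} (a step beyond what is shown above). A Python sketch (not part of the text) comparing that closed form to a direct quadrature of (9):

```python
# Compare the polar-coordinate closed form of (9) with a brute-force quadrature.
import math

sigma2 = 100.0
closed_form = 1 - math.exp(-4 / (2 * sigma2))   # about 0.0198

# Midpoint quadrature of the joint Gaussian PDF over the disk x^2 + y^2 <= 4
n, L = 800, 2.0
h = 2 * L / n
total = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    for j in range(n):
        y = -L + (j + 0.5) * h
        if x * x + y * y <= 4:
            total += math.exp(-(x * x + y * y) / (2 * sigma2)) / (2 * math.pi * sigma2)
total *= h * h
print(round(closed_form, 4), round(total, 4))
```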
Given that the temperature is high, then W is measured. Since \rho = 0, W and T are independent and

q = P[W > 10] = P\left[ \frac{W-7}{2} > \frac{10-7}{2} \right] = 1 - \Phi(1.5) = 0.067.   (2)

The tree for this experiment branches first on T and then on W: T > 38 with probability p, followed by W > 10 with probability q or W \le 10 with probability 1-q; otherwise T \le 38 with probability 1-p. Thus

P[I] = P[T > 38, W > 10] = P[T > 38] P[W > 10] = pq = 0.0107.   (3)
To find the conditional probability P[I |T = t], we need to find the conditional PDF of W
given T = t. The direct way is simply to use algebra to find
f W,T (w, t)
f W |T (w|t) = (6)
f T (t)
The required algebra is essentially the same as that needed to prove Theorem 4.29. It's easier just to apply Theorem 4.29, which says that given T = t, the conditional distribution of W is Gaussian with

E[W|T=t] = E[W] + \rho \frac{\sigma_W}{\sigma_T} (t - E[T])   (7)

Var[W|T=t] = \sigma_W^2 (1 - \rho^2)   (8)

Using this conditional mean and variance, we obtain the conditional Gaussian PDF

f_{W|T}(w|t) = \frac{1}{\sqrt{4\pi}} e^{-\left( w - (7 + \sqrt{2}(t-37)) \right)^2 / 4}   (10)
Given T = t, the conditional probability the person is declared ill is

P[I|T=t] = P[W > 10 | T=t]   (11)
         = P\left[ \frac{W - (7 + \sqrt{2}(t-37))}{\sqrt{2}} > \frac{10 - (7 + \sqrt{2}(t-37))}{\sqrt{2}} \right]   (12)
         = P\left[ Z > \frac{3 - \sqrt{2}(t-37)}{\sqrt{2}} \right] = Q\left( \frac{3}{\sqrt{2}} - (t-37) \right)   (13)
The marked integral equals 1 because for each value of x, it is the integral of a Gaussian PDF of
one variable over all possible values. In fact, it is the integral of the conditional PDF fY |X (y|x) over
all possible y. To complete the proof, we see that
\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx \, dy = \int_{-\infty}^{\infty} \frac{1}{\sigma_X \sqrt{2\pi}} e^{-(x-\mu_X)^2/2\sigma_X^2} \, dx = 1   (4)
since the remaining integral is the integral of the marginal Gaussian PDF f X (x) over all possible x.
Since Var[Y ] = E[Y 2 ] − (E[Y ])2 , we will find the moments of Y . The first moment is
For the second moment of Y , we follow the problem hint and use the iterated expectation
E[Y^2] = E[X_1^2 X_2^2] = E[E[X_1^2 X_2^2 | X_2]] = E[X_2^2 E[X_1^2 | X_2]].   (3)
It follows that

E[X_1^2 X_2^2] = E[X_2^2 E[X_1^2 | X_2]]   (8)
 = E\left[ \left( \mu_1^2 + \sigma_1^2 (1-\rho^2) \right) X_2^2 + 2\rho\mu_1 \frac{\sigma_1}{\sigma_2} (X_2 - \mu_2) X_2^2 + \rho^2 \frac{\sigma_1^2}{\sigma_2^2} (X_2 - \mu_2)^2 X_2^2 \right].   (9)
198
We observe that

E[(X_2 - \mu_2) X_2^2] = E[(X_2 - \mu_2)(X_2 - \mu_2 + \mu_2)^2]   (11)
 = E\left[ (X_2 - \mu_2) \left( (X_2 - \mu_2)^2 + 2\mu_2 (X_2 - \mu_2) + \mu_2^2 \right) \right]   (12)
 = E[(X_2 - \mu_2)^3] + 2\mu_2 E[(X_2 - \mu_2)^2] + \mu_2^2 E[(X_2 - \mu_2)]   (13)

We recall that E[X_2 - \mu_2] = 0 and that E[(X_2 - \mu_2)^2] = \sigma_2^2. We now look ahead to Problem 6.3.4 to learn that

E[(X_2 - \mu_2)^3] = 0, \qquad E[(X_2 - \mu_2)^4] = 3\sigma_2^4.   (14)

This implies

E[(X_2 - \mu_2) X_2^2] = 2\mu_2 \sigma_2^2.   (15)

It follows that

E[(X_2 - \mu_2)^2 X_2^2] = 3\sigma_2^4 + \mu_2^2 \sigma_2^2.   (20)

Combining these results,

E[X_1^2 X_2^2] = \left( \mu_1^2 + \sigma_1^2 (1-\rho^2) \right) (\sigma_2^2 + \mu_2^2) + 2\rho\mu_1 \frac{\sigma_1}{\sigma_2} (2\mu_2 \sigma_2^2) + \rho^2 \frac{\sigma_1^2}{\sigma_2^2} (3\sigma_2^4 + \mu_2^2 \sigma_2^2)   (21)
 = (1 + 2\rho^2) \sigma_1^2 \sigma_2^2 + \mu_2^2 \sigma_1^2 + \mu_1^2 \sigma_2^2 + \mu_1^2 \mu_2^2 + 4\rho\mu_1\mu_2\sigma_1\sigma_2.   (22)
>> format rat;
>> imagepmf;
>> [SX(:) SY(:) PXY(:)]
ans =
800 400 1/5
1200 400 1/20
1600 400 0
800 800 1/20
1200 800 1/5
1600 800 1/10
800 1200 1/10
1200 1200 1/10
1600 1200 1/5
>>
Note that the command format rat wasn’t necessary; it just formats the output as rational num-
bers, i.e., ratios of integers, which you may or may not find esthetically pleasing.
function ex=finiteexp(sx,px);
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));
Note that finiteexp performs its calculations on the sample values sx and probabilities px
using the column vectors sx(:) and px(:). As a result, we can use the same finiteexp
function when the random variable is represented by grid variables. For example, we can calculate
the correlation r = E[X Y ] as
r=finiteexp(SX.*SY,PXY)
function covxy=finitecov(SX,SY,PXY);
%Usage: cxy=finitecov(SX,SY,PXY)
%returns the covariance of
%finite random variables X and Y
%given by grids SX, SY, and PXY
ex=finiteexp(SX,PXY);
ey=finiteexp(SY,PXY);
R=finiteexp(SX.*SY,PXY);
covxy=R-ex*ey;
%trianglecdfplot.m
[X,Y]=meshgrid(0:0.05:1.5);
R=(0<=Y).*(Y<=X).*(X<=1).*(2*(X.*Y)-(Y.^2));
R=R+((0<=X).*(X<Y).*(X<=1).*(X.^2));
R=R+((0<=Y).*(Y<=1).*(1<X).*((2*Y)-(Y.^2)));
R=R+((X>1).*(Y>1));
mesh(X,Y,R);
xlabel(’\it x’);
ylabel(’\it y’);
For functions like FX,Y (x, y) that have multiple cases, we calculate the function for each case and
multiply by the corresponding boolean condition so as to have a zero contribution when that case
doesn’t apply. Using this technique, its important to define the boundary conditions carefully to
make sure that no point is included in two different boundary conditions.
function [SX,SY,PXY]=circuits(n,p);
%Usage: [SX,SY,PXY]=circuits(n,p);
% (See Problem 4.12.4)
[SX,SY]=ndgrid(0:n,0:n);
PXY=0*SX;
PXY(find((SX==n) & (SY==n)))=p^n;
for y=0:(n-1),
I=find((SY==y) &(SX>=SY) &(SX<n));
PXY(I)=(p^y)*(1-p)* ...
binomialpmf(n-y-1,p,SX(I)-y);
end;
The only catch is that for a given value of y, we need to calculate the binomial probability of x − y
successes in (n − y − 1) trials. We can do this using the function call
binomialpmf(n-y-1,p,x-y)
However, this function expects the argument n-y-1 to be a scalar. As a result, we must perform a
separate call to binomialpmf for each value of y.
An alternate solution is direct calculation of the PMF P_{X,Y}(x,y) in Problem 4.2.6. In this case, calculate m! using the MATLAB function gamma(m+1). Because the gamma(x) function calculates the gamma function for each element of a vector x, we can calculate the PMF without any loops:
function [SX,SY,PXY]=circuits2(n,p);
%Usage: [SX,SY,PXY]=circuits2(n,p);
% (See Problem 4.12.4)
[SX,SY]=ndgrid(0:n,0:n);
PXY=0*SX;
PXY(find((SX==n) & (SY==n)))=p^n;
I=find((SY<=SX) &(SX<n));
PXY(I)=(gamma(n-SY(I))./(gamma(SX(I)-SY(I)+1).*gamma(n-SX(I))))...
.*(p.^SX(I)).*((1-p).^(n-SX(I)));
Some experimentation with cputime will show that circuits2(n,p) runs much faster than circuits(n,p). As is typical, the for loop in circuits results in time wasted running the MATLAB interpreter and in regenerating the binomial PMF in each cycle.
To finish the problem, we need to calculate the correlation coefficient
Cov [X, Y ]
ρ X,Y = . (1)
σ X σY
In fact, this is one of those problems where a general solution is better than a specific solution. The
general problem is that given a pair of finite random variables described by the grid variables SX,
SY and PMF PXY, we wish to calculate the correlation coefficient
This problem is solved in a few simple steps. First we write a function that calculates the
expected value of a finite random variable.
function ex=finiteexp(sx,px);
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));
Note that finiteexp performs its calculations on the sample values sx and probabilities px
using the column vectors sx(:) and px(:). As a result, we can use the same finiteexp
function when the random variable is represented by grid variables. We can build on finiteexp
to calculate the variance using finitevar:
function v=finitevar(sx,px);
%Usage: ex=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variables X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);
function rho=finitecoeff(SX,SY,PXY);
%Usage: rho=finitecoeff(SX,SY,PXY)
%Calculate the correlation coefficient rho of
%finite random variables X and Y
ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);
ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);
R=finiteexp(SX.*SY,PXY);
rho=(R-ex*ey)/sqrt(vx*vy);
F_W(w) = 1 - \frac{\lambda/\mu}{\lambda/\mu + w}.   (1)

Setting u = F_W(w) and solving for w yields

w = F_W^{-1}(u) = \frac{\lambda}{\mu} \cdot \frac{u}{1-u}   (2)
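For readers working outside MATLAB, here is a Python sketch of the same inverse-CDF method (not part of the text); a round-trip check confirms the algebra in (1) and (2):

```python
# Inverse-CDF method for W: F_W and its inverse, with a round-trip check.
def cdf_w(w, lam, mu):
    """F_W(w) = 1 - (lam/mu) / (lam/mu + w) for w >= 0."""
    r = lam / mu
    return 1 - r / (r + w)

def icdf_w(u, lam, mu):
    """F_W^{-1}(u) = (lam/mu) * u / (1 - u) for 0 <= u < 1."""
    return (lam / mu) * u / (1 - u)

lam, mu = 1.0, 2.0
checks = [cdf_w(icdf_w(u, lam, mu), lam, mu) for u in (0.1, 0.5, 0.9)]
print([round(c, 12) for c in checks])   # -> [0.1, 0.5, 0.9]
```

Feeding uniform random samples u through icdf_w is exactly what wrv2 below does.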
function w=wrv1(lambda,mu,m)
%Usage: w=wrv1(lambda,mu,m)
%Return m samples of W=Y/X
%X is exponential (lambda)
%Y is exponential (mu)
x=exponentialrv(lambda,m);
y=exponentialrv(mu,m);
w=y./x;

function w=wrv2(lambda,mu,m)
%Usage: w=wrv2(lambda,mu,m)
%Return m samples of W=Y/X
%X is exponential (lambda)
%Y is exponential (mu)
%Uses the inverse CDF of F_W(w)
u=rand(m,1);
w=(lambda/mu)*u./(1-u);
We would expect wrv2 to be faster simply because it does less work. In fact, it's instructive to account for the work each program does.
• wrv1 Each exponential random sample requires the generation of a uniform random variable,
and the calculation of a logarithm. Thus, we generate 2m uniform random variables, calculate
2m logarithms, and perform m floating point divisions.
• wrv2 Generate m uniform random variables and perform m floating points divisions.
This quickie analysis indicates that wrv1 executes roughly 5m operations while wrv2 executes
about 2m operations. We might guess that wrv2 would be faster by a factor of 2.5. Experimentally,
we calculated the execution time associated with generating a million samples:
>> t2=cputime;w2=wrv2(1,1,1000000);t2=cputime-t2
t2 =
0.2500
>> t1=cputime;w1=wrv1(1,1,1000000);t1=cputime-t1
t1 =
0.7610
>>
We see in our simple experiments that wrv2 is faster by a rough factor of 3. (Note that repeating
such trials yielded qualitatively similar results.)
Problem Solutions – Chapter 5
(b) Let L 2 denote the event that exactly two laptops need LCD repairs. Thus P[L 2 ] = PN1 (2).
Since each laptop requires an LCD repair with probability p1 = 8/15, the number of LCD
repairs, N1 , is a binomial (4, 8/15) random variable with PMF
P_{N_1}(n_1) = \binom{4}{n_1} (8/15)^{n_1} (7/15)^{4-n_1}   (3)

The probability that two laptops need LCD repairs is

P_{N_1}(2) = \binom{4}{2} (8/15)^2 (7/15)^2 = 0.3717   (4)
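As a quick numeric check (a Python sketch, not part of the text):

```python
# Reproduce P_{N1}(2) = C(4,2) (8/15)^2 (7/15)^2 numerically.
from math import comb

p1 = 8 / 15
p = comb(4, 2) * p1**2 * (1 - p1)**2
print(round(p, 4))   # -> 0.3717
```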
(c) A repair is type (2) with probability p2 = 4/15. A repair is type (3) with probability p3 =
2/15; otherwise a repair is type “other” with probability po = 9/15. Define X as the number
of “other” repairs needed. The joint PMF of X, N_2, N_3 is the multinomial PMF

P_{N_2,N_3,X}(n_2,n_3,x) = \binom{4}{n_2, n_3, x} \left( \frac{4}{15} \right)^{n_2} \left( \frac{2}{15} \right)^{n_3} \left( \frac{9}{15} \right)^{x}   (5)

However, since X = 4 - N_2 - N_3, we observe that
Finally, the probability that more laptops require motherboard repairs than keyboard repairs
is
P [N2 > N3 ] = PN2 ,N3 (1, 0) + PN2 ,N3 (2, 0) + PN2 ,N3 (2, 1) + PN2 (3) + PN2 (4) (10)
where we use the fact that if N2 = 3 or N2 = 4, then we must have N2 > N3 . Inserting the
various probabilities, we obtain
P [N2 > N3 ] = PN2 ,N3 (1, 0) + PN2 ,N3 (2, 0) + PN2 ,N3 (2, 1) + PN2 (3) + PN2 (4) (11)
Since a pizza has topping i with probability pi independent of whether any other topping is on the
pizza, the number Ni of pizzas with topping i is independent of the number of pizzas with any other
toppings. That is, N1 , . . . , N4 are mutually independent and have joint PMF
However, simplifying the above integral depends on the values of each xi . In particular,
f X 1 ,...,X n (y1 , . . . , yn ) = 1 if and only if 0 ≤ yi ≤ 1 for each i. Since FX 1 ,...,X n (x1 , . . . , xn ) = 0
if any xi < 0, we limit, for the moment, our attention to the case where xi ≥ 0 for all i. In
this case, some thought will show that we can write the limits in the following way:
F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \int_0^{\min(1,x_1)} \cdots \int_0^{\min(1,x_n)} dy_1 \cdots dy_n   (2)
 = \min(1,x_1) \min(1,x_2) \cdots \min(1,x_n)   (3)
(b) For n = 3,
1 - P\left[ \min_i X_i \le 3/4 \right] = P\left[ \min_i X_i > 3/4 \right]   (5)
 = P[X_1 > 3/4, X_2 > 3/4, X_3 > 3/4] = (1 - 3/4)^3 = 1/64   (8)
However, just keep in mind that the inequalities 0 ≤ x and x ≤ 1 are vector inequalities that must
hold for every component xi .
 = c \sum_{i=1}^{n} a_i \int_0^1 dx_1 \cdots \int_0^1 x_i \, dx_i \cdots \int_0^1 dx_n   (3)
 = c \sum_{i=1}^{n} a_i \left[ \frac{x_i^2}{2} \right]_0^1 = c \sum_{i=1}^{n} \frac{a_i}{2}   (4)
Given f X (x) with c = 2/3 and a1 = a2 = a3 = 1 in Problem 5.2.2, find the
marginal PDF f X 3 (x3 ).
Filling in the parameters in Problem 5.2.2, we obtain the vector PDF

f_X(x) = \begin{cases} \frac{2}{3}(x_1 + x_2 + x_3) & 0 \le x_1, x_2, x_3 \le 1 \\ 0 & \text{otherwise} \end{cases}   (1)

In this case, for 0 ≤ x_3 ≤ 1, the marginal PDF of X_3 is

f_{X_3}(x_3) = \frac{2}{3} \int_0^1 \int_0^1 (x_1 + x_2 + x_3) \, dx_1 \, dx_2   (2)
 = \frac{2}{3} \int_0^1 \left[ \frac{x_1^2}{2} + x_2 x_1 + x_3 x_1 \right]_{x_1=0}^{x_1=1} dx_2   (3)
 = \frac{2}{3} \int_0^1 \left( \frac{1}{2} + x_2 + x_3 \right) dx_2   (4)
 = \frac{2}{3} \left[ \frac{x_2}{2} + \frac{x_2^2}{2} + x_3 x_2 \right]_{x_2=0}^{x_2=1} = \frac{2}{3} \left( \frac{1}{2} + \frac{1}{2} + x_3 \right)   (5)

The complete expression for the marginal PDF of X_3 is

f_{X_3}(x_3) = \begin{cases} 2(1 + x_3)/3 & 0 \le x_3 \le 1, \\ 0 & \text{otherwise.} \end{cases}   (6)
The complete expression is

P_{K_1,K_2}(k_1,k_2) = \begin{cases} p^2 (1-p)^{k_2-2} & 1 \le k_1 < k_2 \\ 0 & \text{otherwise} \end{cases}   (6)

(b) Going back to first principles, we note that K_n is the number of trials up to and including the nth success. Thus K_1 is a geometric (p) random variable, K_2 is a Pascal (2, p) random variable, and K_3 is a Pascal (3, p) random variable. We could write down the respective marginal PMFs of K_1, K_2 and K_3 just by looking up the Pascal (n, p) PMF. Nevertheless, it is instructive to derive these PMFs from the joint PMF P_{K_1,K_2,K_3}(k_1,k_2,k_3).

For k_1 ≥ 1, we can find P_{K_1}(k_1) via

P_{K_1}(k_1) = \sum_{k_2=-\infty}^{\infty} P_{K_1,K_2}(k_1,k_2) = \sum_{k_2=k_1+1}^{\infty} p^2 (1-p)^{k_2-2}   (13)

The complete expression for the PMF of K_1 is the usual geometric PMF

P_{K_1}(k_1) = \begin{cases} p (1-p)^{k_1-1} & k_1 = 1, 2, \ldots, \\ 0 & \text{otherwise.} \end{cases}   (16)

Following the same procedure, the marginal PMF of K_2 is

P_{K_2}(k_2) = \sum_{k_1=-\infty}^{\infty} P_{K_1,K_2}(k_1,k_2) = \sum_{k_1=1}^{k_2-1} p^2 (1-p)^{k_2-2}   (17)
The complete expression for the joint PDF of Y_1 and Y_2 is

f_{Y_1,Y_2}(y_1,y_2) = \begin{cases} 12(1-y_2)^2 & 0 \le y_1 \le y_2 \le 1 \\ 0 & \text{otherwise} \end{cases}   (9)

Now we note that the following events are one and the same:

\{N_0 = n_0, N_1 = n_1\} = \{N_0 = n_0, N_1 = n_1, \hat N = 10000 - n_0 - n_1\}   (4)
Problem 5.3.6 Solution
In Example 5.1, random variables N_1, . . . , N_r have the multinomial distribution

    P_{N_1,...,N_r}(n_1, . . . , n_r) = (n choose n_1, . . . , n_r) p_1^{n_1} · · · p_r^{n_r}   (1)
(a) To evaluate the joint PMF of N_1 and N_2, we define a new experiment with mutually exclusive events: s_1, s_2 and “other.” Let N̂ denote the number of trial outcomes that are “other.” In this case, a trial is in the “other” category with probability p̂ = 1 − p_1 − p_2. The joint PMF of N_1, N_2, and N̂ is

    P_{N_1,N_2,N̂}(n_1, n_2, n̂) = n!/(n_1! n_2! n̂!) p_1^{n_1} p_2^{n_2} (1 − p_1 − p_2)^{n̂},   n_1 + n_2 + n̂ = n   (2)
Now we note that the following events are one and the same:

    {N_1 = n_1, N_2 = n_2} = {N_1 = n_1, N_2 = n_2, N̂ = n − n_1 − n_2}   (3)
(b) We could find the PMF of Ti by summing the joint PMF PN1 ,...,Nr (n 1 , . . . , n r ). However, it is
easier to start from first principles. Suppose we say a success occurs if the outcome of the trial
is in the set {s1 , s2 , . . . , si } and otherwise a failure occurs. In this case, the success probability
is q_i = p_1 + · · · + p_i and T_i is the number of successes in n trials. Thus, T_i has the binomial PMF

    P_{T_i}(t) = { (n choose t) q_i^t (1 − q_i)^{n−t}   t = 0, 1, . . . , n
                 { 0                                    otherwise          (6)
    P_{T_1,T_2}(t_1, t_2) = n!/(t_1! (t_2 − t_1)! (n − t_2)!) p_1^{t_1} p_2^{t_2 − t_1} (1 − p_1 − p_2)^{n − t_2},   0 ≤ t_1 ≤ t_2 ≤ n   (10)
Problem 5.3.7 Solution
(a) Note that Z is the number of three page faxes. In principle, we can sum the joint PMF
PX,Y,Z (x, y, z) over all x, y to find PZ (z). However, it is better to realize that each fax has 3
pages with probability 1/6, independent of any other fax. Thus, Z has the binomial PMF
    P_Z(z) = { (5 choose z) (1/6)^z (5/6)^{5−z}   z = 0, 1, . . . , 5
             { 0                                  otherwise          (1)
(b) From the properties of the binomial distribution given in Appendix A, we know that E[Z ] =
5(1/6).
(c) We want to find the conditional PMF of the number X of 1-page faxes and number Y of
2-page faxes given Z = 2 3-page faxes. Note that given Z = 2, X + Y = 3. Hence for
non-negative integers x, y satisfying x + y = 3,
    P_{X,Y|Z}(x, y|2) = P_{X,Y,Z}(x, y, 2) / P_Z(2)
                      = [5!/(x! y! 2!) (1/3)^x (1/2)^y (1/6)^2] / [(5 choose 2) (1/6)^2 (5/6)^3]   (2)
That is, given Z = 2, there are 3 faxes left, each of which independently could be a 1-page fax. The conditional PMF of the number of 1-page faxes is binomial where 2/5 is the conditional probability that a fax has 1 page given that it either has 1 page or 2 pages. Moreover, given X = x and Z = 2 we must have Y = 3 − x.
(d) Given Z = 2, the conditional PMF of X is binomial for 3 trials and success probability 2/5.
The conditional expectation of X given Z = 2 is E[X |Z = 2] = 3(2/5) = 6/5.
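The conditional expectation E[X | Z = 2] = 6/5 can be cross-checked by brute-force enumeration of the multinomial PMF; the Python sketch below (an independent check, not the manual's code) conditions on Z = 2 directly.

```python
# Cross-check E[X | Z = 2] = 6/5 by enumerating the multinomial PMF of
# (X, Y, Z) for 5 faxes with page probabilities (1/3, 1/2, 1/6).
from math import factorial

def pmf(x, y, z):
    coeff = factorial(5) // (factorial(x) * factorial(y) * factorial(z))
    return coeff * (1 / 3) ** x * (1 / 2) ** y * (1 / 6) ** z

# Given Z = 2, the remaining 3 faxes split as X = x, Y = 3 - x.
pz2 = sum(pmf(x, 3 - x, 2) for x in range(4))
e_x_given_z2 = sum(x * pmf(x, 3 - x, 2) for x in range(4)) / pz2
assert abs(e_x_given_z2 - 6 / 5) < 1e-12
```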
(e) There are several ways to solve this problem. The most straightforward approach is to re-
alize that for integers 0 ≤ x ≤ 5 and 0 ≤ y ≤ 5, the event {X = x, Y = y} occurs iff
{X = x, Y = y, Z = 5 − (x + y)}. For the rest of this problem, we assume x and y are non-
negative integers so that
The above expression may seem unwieldy and it isn’t even clear that it will sum to 1. To simplify the expression, we observe that

    P_{X,Y}(x, y) = P_{X,Y,Z}(x, y, 5 − x − y) = P_Z(5 − x − y) P_{X,Y|Z}(x, y|5 − x − y)

Using P_Z(z) found in part (a), we can calculate P_{X,Y|Z}(x, y|5 − x − y) for 0 ≤ x + y ≤ 5, x and y integer valued:

    P_{X,Y|Z}(x, y|5 − x − y) = P_{X,Y,Z}(x, y, 5 − x − y) / P_Z(5 − x − y)   (8)

    = (x + y choose x) [(1/3)/(1/2 + 1/3)]^x [(1/2)/(1/2 + 1/3)]^y   (9)

    = (x + y choose x) (2/5)^x (3/5)^{(x+y)−x}   (10)
In the above expression, it is wise to think of x + y as some fixed value. In that case, we see that given x + y is a fixed value, X and Y have a joint PMF given by a binomial distribution in x. This should not be surprising since it is just a generalization of the case when Z = 2. That is, given that there were a fixed number of faxes that had either one or two pages, each of those faxes is a one page fax with probability (1/3)/(1/2 + 1/3), and so the number of one page faxes should have a binomial distribution. Moreover, given the number X of one page faxes, the number Y of two page faxes is completely specified. Finally, by rewriting P_{X,Y}(x, y) given above, the complete expression for the joint PMF of X and Y is
PX,Y (x, y) given above, the complete expression for the joint PMF of X and Y is
1 5−x−y 5 x+y x+y 2 x 3 y
5
x, y ≥ 0
PX,Y (x, y) = 5−x−y 6 6 x 5 5 (11)
0 otherwise
{K 1 = k1 , K 2 = k2 , · · · , K n = kn } (2)
Note that the events A_1, A_2, . . . , A_n are independent and

    P[A_j] = (1 − p)^{k_j − k_{j−1} − 1} p.   (3)
Thus
(c) Rather than try to deduce PK i (ki ) from the joint PMF PKn (kn ), it is simpler to return to first
principles. In particular, K i is the number of trials up to and including the ith success and has
the Pascal (i, p) PMF
    P_{K_i}(k_i) = (k_i − 1 choose i − 1) p^i (1 − p)^{k_i − i}.   (10)
Problem 5.4.2 Solution
The random variables N_1, N_2, N_3 and N_4 are dependent. To see this we observe that P_{N_i}(4) = p_i^4. However,

    P_{N_1,N_2,N_3,N_4}(4, 4, 4, 4) = 0 ≠ p_1^4 p_2^4 p_3^4 p_4^4 = P_{N_1}(4) P_{N_2}(4) P_{N_3}(4) P_{N_4}(4).   (1)
Thus,

    f_{X_1}(x_1) = { 1   0 ≤ x_1 ≤ 1,
                   { 0   otherwise.     (4)
Following similar steps, one can show that
    f_{X_1}(x) = f_{X_2}(x) = f_{X_3}(x) = f_{X_4}(x) = { 1   0 ≤ x ≤ 1,
                                                        { 0   otherwise.   (5)
Thus
f X (x) = f X 1 (x) f X 2 (x) f X 3 (x) f X 4 (x) . (6)
We conclude that X 1 , X 2 , X 3 and X 4 are independent.
Thus,

    f_{X_1}(x_1) = { e^{−x_1}   x_1 ≥ 0,
                   { 0          otherwise.   (5)
Following similar steps, one can show that

    f_{X_2}(x_2) = ∫_0^∞ ∫_0^∞ f_X(x) dx_1 dx_3 = { 2e^{−2x_2}   x_2 ≥ 0,
                                                  { 0            otherwise.   (6)

    f_{X_3}(x_3) = ∫_0^∞ ∫_0^∞ f_X(x) dx_1 dx_2 = { 3e^{−3x_3}   x_3 ≥ 0,
                                                  { 0            otherwise.   (7)
Thus
f X (x) = f X 1 (x1 ) f X 2 (x2 ) f X 3 (x3 ) . (8)
We conclude that X 1 , X 2 , and X 3 are independent.
Lastly,

    f_{X_3}(x_3) = ∫_0^{x_3} ∫_{x_1}^{x_3} e^{−x_3} dx_2 dx_1 = ∫_0^{x_3} (x_3 − x_1) e^{−x_3} dx_1   (3)

    = [−(1/2)(x_3 − x_1)^2 e^{−x_3}]_{x_1=0}^{x_1=x_3} = (1/2) x_3^2 e^{−x_3}   (4)
The complete expressions for the three marginal PDFs are

    f_{X_1}(x_1) = { e^{−x_1}              x_1 ≥ 0
                   { 0                     otherwise   (5)

    f_{X_2}(x_2) = { x_2 e^{−x_2}          x_2 ≥ 0
                   { 0                     otherwise   (6)

    f_{X_3}(x_3) = { (1/2) x_3^2 e^{−x_3}  x_3 ≥ 0
                   { 0                     otherwise   (7)
In fact, each X i is an Erlang (n, λ) = (i, 1) random variable.
Problem 5.4.7 Solution
Since U_1, . . . , U_n are iid uniform (0, T) random variables,

    f_{U_1,...,U_n}(u_1, . . . , u_n) = { 1/T^n   0 ≤ u_i ≤ T; i = 1, 2, . . . , n
                                        { 0       otherwise                        (1)

Since U_1, . . . , U_n are continuous, P[U_i = U_j] = 0 for all i ≠ j. For the same reason, P[X_i = X_j] = 0 for i ≠ j. Thus we need only to consider the case when x_1 < x_2 < · · · < x_n.
To understand the claim, it is instructive to start with the n = 2 case. In this case, (X 1 , X 2 ) =
(x1 , x2 ) (with x1 < x2 ) if either (U1 , U2 ) = (x1 , x2 ) or (U1 , U2 ) = (x2 , x1 ). For infinitesimal ,
Since there are n! permutations and fU1 ,...,Un (xπ(1) , . . . , xπ(n) ) = 1/T n for each permutation π, we
can conclude that
f X 1 ,...,X n (x1 , . . . , xn ) = n!/T n . (8)
Since the order statistics are necessarily ordered, f X 1 ,...,X n (x1 , . . . , xn ) = 0 unless x1 < · · · < xn .
For an arbitrary matrix A, the system of equations Ax = y−b may have no solutions (if the columns
of A do not span the vector space), multiple solutions (if the columns of A are linearly dependent),
or, when A is invertible, exactly one solution. In the invertible case,
    P_Y(y) = P[AX = y − b] = P[X = A^{−1}(y − b)] = P_X(A^{−1}(y − b)).   (2)
As an aside, we note that when Ax = y − b has multiple solutions, we would need to do some
bookkeeping to add up the probabilities PX (x) for all vectors x satisfying Ax = y − b. This can get
disagreeably complicated.
Problem 5.5.2 Solution
The random variable Jn is the number of times that message n is transmitted. Since each transmis-
sion is a success with probability p, independent of any other transmission, the number of transmis-
sions of message n is independent of the number of transmissions of message m. That is, for m = n,
Jm and Jn are independent random variables. Moreover, because each message is transmitted over
and over until it is transmitted successfully, each J_m is a geometric (p) random variable with PMF

    P_{J_m}(j) = { (1 − p)^{j−1} p   j = 1, 2, . . .
                 { 0                 otherwise.       (1)
Thus the PMF of J = [J_1 J_2 J_3]′ is

    P_J(j) = P_{J_1}(j_1) P_{J_2}(j_2) P_{J_3}(j_3) = { p^3 (1 − p)^{j_1+j_2+j_3−3}   j_i = 1, 2, . . . ; i = 1, 2, 3
                                                     { 0                             otherwise.                     (2)
(b) This question is worded in a somewhat confusing way. The “expected response time” refers
to E[X i ], the response time of an individual truck, rather than E[R]. If the expected response
time of a truck is τ , then each X i has CDF
    F_{X_i}(x) = F_X(x) = { 1 − e^{−x/τ}   x ≥ 0
                          { 0              otherwise.   (4)
The goal of this problem is to find the maximum permissible value of τ . When each truck has
expected response time τ , the CDF of R is
    F_R(r) = [F_X(r)]^6 = { (1 − e^{−r/τ})^6   r ≥ 0,
                          { 0                  otherwise.   (5)
We need to find τ such that
P [R ≤ 3] = (1 − e−3/τ )6 = 0.9. (6)
This implies

    τ = −3 / ln(1 − (0.9)^{1/6}) = 0.7406 s.   (7)
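The value of τ is easy to recompute; this short Python check (an independent verification, not part of the original solution) confirms both the closed form and that it satisfies the design constraint P[R ≤ 3] = 0.9.

```python
# Recompute tau = -3 / ln(1 - 0.9^(1/6)) and confirm it satisfies
# P[R <= 3] = (1 - exp(-3/tau))^6 = 0.9.
import math

tau = -3.0 / math.log(1.0 - 0.9 ** (1.0 / 6.0))
assert abs((1.0 - math.exp(-3.0 / tau)) ** 6 - 0.9) < 1e-12
assert abs(tau - 0.7406) < 1e-4
```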
Problem 5.5.4 Solution
Let X i denote the finishing time of boat i. Since finishing times of all boats are iid Gaussian random
variables with expected value 35 minutes and standard deviation 5 minutes, we know that each X i
has CDF
    F_{X_i}(x) = P[X_i ≤ x] = P[(X_i − 35)/5 ≤ (x − 35)/5] = Φ((x − 35)/5)   (1)
(a) The time of the winning boat is
W = min(X 1 , X 2 , . . . , X 10 ) (2)
To find the probability that W ≤ 25, we will find the CDF FW (w) since this will also be
useful for part (c).
(b) The finishing time of the last boat is L = max(X 1 , . . . , X 10 ). The probability that the last
boat finishes in more than 50 minutes is
Once again, since the X_i are iid Gaussian (35, 5) random variables,

    P[L > 50] = 1 − Π_{i=1}^{10} P[X_i ≤ 50] = 1 − (F_{X_i}(50))^{10}   (12)

    = 1 − (Φ([50 − 35]/5))^{10}   (13)

    = 1 − (Φ(3))^{10} = 0.0134   (14)
(c) A boat will finish in negative time if and only if the winning boat finishes in negative time,
which has probability
Unfortunately, the tables in the text have neither Φ(7) nor Q(7). However, those with access to MATLAB, or a programmable calculator, can find out that Q(7) = 1 − Φ(7) = 1.28 × 10^{−12}.
This implies that a boat finishes in negative time with probability
    F_W(0) = 1 − (1 − 1.28 × 10^{−12})^{10} = 1.28 × 10^{−11}.   (16)
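The tail value Q(7) can be reproduced without tables via the complementary error function; this Python sketch (using the standard identity Q(z) = erfc(z/√2)/2, not the manual's MATLAB) also confirms that F_W(0) is essentially 10·Q(7).

```python
# Check Q(7) = 1 - Phi(7) ≈ 1.28e-12 using the complementary error function,
# and the resulting probability that any of 10 boats finishes in negative time.
import math

q7 = 0.5 * math.erfc(7.0 / math.sqrt(2.0))
assert abs(q7 - 1.28e-12) < 0.01e-12

p_any = 1.0 - (1.0 - q7) ** 10   # F_W(0); approximately 10 * q7
assert abs(p_any - 10 * q7) < 1e-13
```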
The same recursion will also allow us to show that

    E[J_2^2] = (3/2)^8 10^{12} + (1/4)[(3/2)^6 + (3/2)^5 + (3/2)^4 + (3/2)^3] 10^6   (14)

    E[J_1^2] = (3/2)^{10} 10^{12} + (1/4)[(3/2)^8 + (3/2)^7 + (3/2)^6 + (3/2)^5 + (3/2)^4] 10^6   (15)

    E[J_0^2] = (3/2)^{12} 10^{12} + (1/4)[(3/2)^{10} + (3/2)^9 + · · · + (3/2)^5] 10^6   (16)
Finally, day 0 is the same as any other day in that J = J0 + N0 /2 where N0 is a Poisson random
variable with mean J0 . By the same argument that we used to develop recursions for E[Ji ] and
E[Ji2 ], we can show
    E[J] = (3/2)E[J_0] = (3/2)^7 10^6 ≈ 17 × 10^6   (17)

and

    E[J^2] = (3/2)^2 E[J_0^2] + E[J_0]/4   (18)

    = (3/2)^{14} 10^{12} + (1/4)[(3/2)^{12} + (3/2)^{11} + · · · + (3/2)^6] 10^6   (19)

    = (3/2)^{14} 10^{12} + (10^6/2)(3/2)^6 [(3/2)^7 − 1]   (20)

Finally, the variance of J is

    Var[J] = E[J^2] − (E[J])^2 = (10^6/2)(3/2)^6 [(3/2)^7 − 1]   (21)
Since the variance is hard to interpret, we note that the standard deviation of J is σ J ≈ 9572.
Although the expected jackpot grows rapidly, the standard deviation of the jackpot is fairly small.
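The quoted numbers are easy to reproduce; this Python check (a numerical confirmation of the closed forms above, not the simulation code) verifies both the expected jackpot and the standard deviation σ_J ≈ 9572.

```python
# Reproduce E[J] = (3/2)^7 * 10^6 and Var[J] = (10^6/2)(3/2)^6 [(3/2)^7 - 1],
# then check the quoted standard deviation sigma_J ≈ 9572.
import math

e_j = 1.5 ** 7 * 1e6
var_j = (1e6 / 2.0) * (1.5 ** 6) * (1.5 ** 7 - 1.0)
sigma_j = math.sqrt(var_j)

assert abs(e_j - 1.70859e7) < 1e3
assert abs(sigma_j - 9572) < 1
```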
Problem 5.6.1 Solution
(a) The covariance matrix of X = [X_1 X_2]′ is

    C_X = [ Var[X_1]        Cov[X_1, X_2] ]   =   [ 4  3 ]
          [ Cov[X_1, X_2]   Var[X_2]      ]       [ 3  9 ].   (1)
    E[Y] = Σ_{i=1}^n E[X_i] = 0   (1)

The variance of any sum of random variables can be expressed in terms of the individual variances and covariances. Since E[Y] is zero, Var[Y] = E[Y^2]. Thus,

    Var[Y] = E[(Σ_{i=1}^n X_i)^2] = E[Σ_{i=1}^n Σ_{j=1}^n X_i X_j] = Σ_{i=1}^n E[X_i^2] + Σ_{i=1}^n Σ_{j≠i} E[X_i X_j]   (2)
Problem 5.6.4 Solution
Inspection of the vector PDF f X (x) will show that X 1 , X 2 , X 3 , and X 4 are iid uniform (0, 1) random
variables. That is,
f X (x) = f X 1 (x1 ) f X 2 (x2 ) f X 3 (x3 ) f X 4 (x4 ) (1)
where each X i has the uniform (0, 1) PDF
    f_{X_i}(x) = { 1   0 ≤ x ≤ 1
                 { 0   otherwise    (2)
It follows that for each i, E[X i ] = 1/2, E[X i2 ] = 1/3 and Var[X i ] = 1/12. In addition, X i and X j
have correlation

    E[X_i X_j] = E[X_i] E[X_j] = 1/4,   (3)

and covariance Cov[X_i, X_j] = 0 for i ≠ j since independent random variables always have zero covariance.
Problem 5.6.5 Solution
The random variable Jm is the number of times that message m is transmitted. Since each trans-
mission is a success with probability p, independent of any other transmission, J1 , J2 and J3 are iid
geometric ( p) random variables with
    E[J_m] = 1/p,   Var[J_m] = (1 − p)/p^2.   (1)

Thus the vector J = [J_1 J_2 J_3]′ has expected value

    E[J] = [E[J_1] E[J_2] E[J_3]]′ = [1/p 1/p 1/p]′.   (2)

For m = n,

    R_J(m, m) = E[J_m^2] = Var[J_m] + (E[J_m])^2 = (1 − p)/p^2 + 1/p^2 = (2 − p)/p^2.   (4)
Thus

    R_J = (1/p^2) [ 2−p  1    1   ]
                  [ 1    2−p  1   ]
                  [ 1    1    2−p ].   (5)
Because J_m and J_n are independent for m ≠ n, the off-diagonal terms in the covariance matrix are C_J(m, n) = Cov[J_m, J_n] = 0.
Since the components of J are independent, it has the diagonal covariance matrix

    C_J = [ Var[J_1]  0         0        ]
          [ 0         Var[J_2]  0        ]   =   ((1 − p)/p^2) I   (3)
          [ 0         0         Var[J_3] ]
Given these properties of J, finding the same properties of K = AJ is simple.
(a) The expected value of K is
    E[K] = A µ_J = [ 1 0 0 ] [ 1/p ]   [ 1/p ]
                   [ 1 1 0 ] [ 1/p ] = [ 2/p ]   (4)
                   [ 1 1 1 ] [ 1/p ]   [ 3/p ]
    C_K = A C_J A′   (5)

    = ((1 − p)/p^2) A I A′   (6)

    = ((1 − p)/p^2) [ 1 0 0 ] [ 1 1 1 ]   =   ((1 − p)/p^2) [ 1 1 1 ]
                    [ 1 1 0 ] [ 0 1 1 ]                     [ 1 2 2 ]   (7)
                    [ 1 1 1 ] [ 0 0 1 ]                     [ 1 2 3 ]
(c) Given the expected value vector µ K and the covariance matrix C K , we can use Theorem 5.12
to find the correlation matrix
    R_K = C_K + µ_K µ_K′   (8)

    = ((1 − p)/p^2) [ 1 1 1 ]   [ 1/p ]
                    [ 1 2 2 ] + [ 2/p ] [1/p 2/p 3/p]   (9)
                    [ 1 2 3 ]   [ 3/p ]

    = ((1 − p)/p^2) [ 1 1 1 ]             [ 1 2 3 ]
                    [ 1 2 2 ]  + (1/p^2)  [ 2 4 6 ]   (10)
                    [ 1 2 3 ]             [ 3 6 9 ]

    = (1/p^2) [ 2−p   3−p    4−p   ]
              [ 3−p   6−2p   8−2p  ]   (11)
              [ 4−p   8−2p   12−3p ]
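The closed-form R_K can be verified entrywise for a sample value of p; this Python sketch (the matrices are transcribed from the derivation above, the value p = 0.25 is an arbitrary choice of mine) computes C_K + µ_K µ_K′ directly.

```python
# Verify R_K = C_K + mu_K mu_K' entrywise against the closed form
# (1/p^2) [[2-p, 3-p, 4-p], [3-p, 6-2p, 8-2p], [4-p, 8-2p, 12-3p]].

p = 0.25
base = [[1, 1, 1], [1, 2, 2], [1, 2, 3]]     # C_K = ((1-p)/p^2) * base
mu = [1 / p, 2 / p, 3 / p]                   # mu_K = [1/p, 2/p, 3/p]'
expected = [[2 - p, 3 - p, 4 - p],
            [3 - p, 6 - 2 * p, 8 - 2 * p],
            [4 - p, 8 - 2 * p, 12 - 3 * p]]

for i in range(3):
    for j in range(3):
        r_ij = (1 - p) / p**2 * base[i][j] + mu[i] * mu[j]
        assert abs(r_ij - expected[i][j] / p**2) < 1e-9
```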
This implies

    E[Y_1] = E[Y_3] = ∫_0^1 2y(1 − y) dy = 1/3   (3)

    E[Y_2] = E[Y_4] = ∫_0^1 2y^2 dy = 2/3   (4)
Thus Y has expected value E[Y] = [1/3 2/3 1/3 2/3]′. The second part of the problem is to find the correlation matrix R_Y. In fact, we need to find R_Y(i, j) = E[Y_i Y_j] for each i, j pair. We will see that these are seriously tedious calculations. For i = j, the second moments are
    E[Y_1^2] = E[Y_3^2] = ∫_0^1 2y^2(1 − y) dy = 1/6,   (5)

    E[Y_2^2] = E[Y_4^2] = ∫_0^1 2y^3 dy = 1/2.   (6)
To find the off diagonal terms RY (i, j) = E[Yi Y j ], we need to find the marginal PDFs f Yi ,Y j (yi , y j ).
Example 5.5 showed that
    f_{Y_1,Y_4}(y_1, y_4) = { 4(1 − y_1)y_4   0 ≤ y_1 ≤ 1, 0 ≤ y_4 ≤ 1,
                            { 0               otherwise.                 (8)

    f_{Y_2,Y_3}(y_2, y_3) = { 4y_2(1 − y_3)   0 ≤ y_2 ≤ 1, 0 ≤ y_3 ≤ 1,
                            { 0               otherwise.                 (9)
Inspection will show that Y_1 and Y_4 are independent since f_{Y_1,Y_4}(y_1, y_4) = f_{Y_1}(y_1) f_{Y_4}(y_4). Similarly, Y_2 and Y_3 are independent since f_{Y_2,Y_3}(y_2, y_3) = f_{Y_2}(y_2) f_{Y_3}(y_3). This implies
We also need to calculate f Y1 ,Y2 (y1 , y2 ), f Y3 ,Y4 (y3 , y4 ), f Y1 ,Y3 (y1 , y3 ) and f Y2 ,Y4 (y2 , y4 ). To start, for
0 ≤ y1 ≤ y2 ≤ 1,
    f_{Y_1,Y_2}(y_1, y_2) = ∫_{−∞}^∞ ∫_{−∞}^∞ f_{Y_1,Y_2,Y_3,Y_4}(y_1, y_2, y_3, y_4) dy_3 dy_4   (12)

    = ∫_0^1 ∫_0^{y_4} 4 dy_3 dy_4 = ∫_0^1 4y_4 dy_4 = 2.   (13)

Similarly, for 0 ≤ y_3 ≤ y_4 ≤ 1,

    f_{Y_3,Y_4}(y_3, y_4) = ∫_{−∞}^∞ ∫_{−∞}^∞ f_{Y_1,Y_2,Y_3,Y_4}(y_1, y_2, y_3, y_4) dy_1 dy_2   (14)

    = ∫_0^1 ∫_0^{y_2} 4 dy_1 dy_2 = ∫_0^1 4y_2 dy_2 = 2.   (15)
In fact, these PDFs are the same in that

    f_{Y_1,Y_2}(x, y) = f_{Y_3,Y_4}(x, y) = { 2   0 ≤ x ≤ y ≤ 1,
                                            { 0   otherwise.      (16)
This implies R_Y(1, 2) = R_Y(3, 4) = E[Y_3 Y_4] and that

    E[Y_3 Y_4] = ∫_0^1 ∫_0^y 2xy dx dy = ∫_0^1 y [x^2]_0^y dy = ∫_0^1 y^3 dy = 1/4.   (17)
Continuing in the same way, we see for 0 ≤ y1 ≤ 1 and 0 ≤ y3 ≤ 1 that
    f_{Y_1,Y_3}(y_1, y_3) = ∫_{−∞}^∞ ∫_{−∞}^∞ f_{Y_1,Y_2,Y_3,Y_4}(y_1, y_2, y_3, y_4) dy_2 dy_4   (18)

    = 4 ∫_{y_1}^1 dy_2 ∫_{y_3}^1 dy_4   (19)

    = 4(1 − y_1)(1 − y_3).   (20)
We observe that Y1 and Y3 are independent since f Y1 ,Y3 (y1 , y3 ) = f Y1 (y1 ) f Y3 (y3 ). It follows that
RY (1, 3) = E [Y1 Y3 ] = E [Y1 ] E [Y3 ] = 1/9. (21)
Finally, we need to calculate
    f_{Y_2,Y_4}(y_2, y_4) = ∫_{−∞}^∞ ∫_{−∞}^∞ f_{Y_1,Y_2,Y_3,Y_4}(y_1, y_2, y_3, y_4) dy_1 dy_3   (22)

    = 4 ∫_0^{y_2} dy_1 ∫_0^{y_4} dy_3   (23)

    = 4y_2 y_4.   (24)
We observe that Y2 and Y4 are independent since f Y2 ,Y4 (y2 , y4 ) = f Y2 (y2 ) f Y4 (y4 ). It follows that
RY (2, 4) = E[Y2 Y4 ] = E[Y2 ]E[Y4 ] = 4/9. The above results give RY (i, j) for i ≤ j. Since RY is
a symmetric matrix,

    R_Y = [ 1/6  1/4  1/9  2/9 ]
          [ 1/4  1/2  2/9  4/9 ]
          [ 1/9  2/9  1/6  1/4 ]   (25)
          [ 2/9  4/9  1/4  1/2 ].
Since µ_Y = [1/3 2/3 1/3 2/3]′, the covariance matrix is

    C_Y = R_Y − µ_Y µ_Y′   (26)

    = [ 1/6  1/4  1/9  2/9 ]   [ 1/3 ]
      [ 1/4  1/2  2/9  4/9 ] − [ 2/3 ] [1/3 2/3 1/3 2/3]   (27)
      [ 1/9  2/9  1/6  1/4 ]   [ 1/3 ]
      [ 2/9  4/9  1/4  1/2 ]   [ 2/3 ]

    = [ 1/18  1/36  0     0    ]
      [ 1/36  1/18  0     0    ]
      [ 0     0     1/18  1/36 ]   (28)
      [ 0     0     1/36  1/18 ].
The off-diagonal zero blocks are a consequence of [Y_1 Y_2]′ being independent of [Y_3 Y_4]′. Along the diagonal, the two identical sub-blocks occur because f_{Y_1,Y_2}(x, y) = f_{Y_3,Y_4}(x, y). In short, the matrix structure is the result of [Y_1 Y_2]′ and [Y_3 Y_4]′ being iid random vectors.
Problem 5.6.9 Solution
Given an arbitrary random vector X, we can define Y = X − µX so that
    C_X = E[(X − µ_X)(X − µ_X)′] = E[YY′] = R_Y.   (1)
It follows that the covariance matrix CX is positive semi-definite if and only if the correlation matrix
RY is positive semi-definite. Thus, it is sufficient to show that every correlation matrix, whether it
is denoted RY or RX , is positive semi-definite.
To show a correlation matrix RX is positive semi-definite, we write
    a′ R_X a = a′ E[XX′] a = E[a′XX′a] = E[(a′X)(X′a)] = E[(a′X)^2].   (2)
We note that W = a′X is a random variable. Since E[W^2] ≥ 0 for any random variable W,

    a′ R_X a = E[W^2] ≥ 0.   (3)
The PDF of Y is

    f_Y(y) = (1/(2π √12)) e^{−(y − µ_Y)′ C_Y^{−1} (y − µ_Y)/2}   (10)

    = (1/√(48π^2)) e^{−(y_1^2 + y_1 y_2 − 16y_1 − 20y_2 + y_2^2 + 112)/6}   (11)

Since Y = [X_1 X_2]′, the PDF of X_1 and X_2 is simply

    f_{X_1,X_2}(x_1, x_2) = f_{Y_1,Y_2}(x_1, x_2) = (1/√(48π^2)) e^{−(x_1^2 + x_1 x_2 − 16x_1 − 20x_2 + x_2^2 + 112)/6}   (12)
(c) We can observe directly from µ_X and C_X that X_1 is a Gaussian (4, 2) random variable. Thus,

    P[X_1 > 8] = P[(X_1 − 4)/2 > (8 − 4)/2] = Q(2) = 0.0228   (13)
Since the two rows of A are linearly independent row vectors, A has rank 2. By Theorem 5.16, Y is a
Gaussian random vector. Given these facts, the various parts of this problem are just straightforward
calculations using Theorem 5.16.
    C_Y = A C_X A′   (4)

    = [ 1  1/2   2/3 ] [ 4  −2   1 ] [ 1     1    ]
      [ 1  −1/2  2/3 ] [ −2  4  −2 ] [ 1/2  −1/2  ]   =   (1/9) [ 43  55  ]
                       [ 1  −2   4 ] [ 2/3   2/3  ]             [ 55  103 ].   (5)
(c) Y has correlation matrix

    R_Y = C_Y + µ_Y µ_Y′ = (1/9) [ 43  55  ] + [ 8 ] [8 0] = (1/9) [ 619  55  ]
                                 [ 55  103 ]   [ 0 ]               [ 55   103 ]   (6)
(d) From µY , we see that E[Y2 ] = 0. From the covariance matrix CY , we learn that Y2 has
variance σ22 = CY (2, 2) = 103/9. Since Y2 is a Gaussian random variable,
    P[−1 ≤ Y_2 ≤ 1] = P[−1/σ_2 ≤ Y_2/σ_2 ≤ 1/σ_2]   (7)

    = Φ(1/σ_2) − Φ(−1/σ_2)   (8)

    = 2Φ(1/σ_2) − 1   (9)

    = 2Φ(3/√103) − 1 = 0.2325.   (10)
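The final number is easy to reproduce with the error function; this Python check (Φ built from erf, purely to confirm the arithmetic) recomputes 2Φ(3/√103) − 1.

```python
# Check P[-1 <= Y2 <= 1] = 2*Phi(3/sqrt(103)) - 1 ≈ 0.2325 using erf.
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p = 2.0 * phi(3.0 / math.sqrt(103.0)) - 1.0
assert abs(p - 0.2325) < 5e-4
```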
    Var[Y] = C_Y = a′ C_X a.   (1)
Thus,
    1/(2π [det(C_X)]^{1/2}) = 1/(2π σ_1 σ_2 √(1 − ρ^2)).   (3)
Using the 2 × 2 matrix inverse formula

    [ a  b ]^{−1}               [ d   −b ]
    [ c  d ]       = 1/(ad−bc)  [ −c   a ],   (4)
we obtain

    C_X^{−1} = 1/(σ_1^2 σ_2^2 (1 − ρ^2)) [ σ_2^2      −ρσ_1σ_2 ]   =   1/(1 − ρ^2) [ 1/σ_1^2      −ρ/(σ_1σ_2) ]
                                         [ −ρσ_1σ_2   σ_1^2    ]                   [ −ρ/(σ_1σ_2)  1/σ_2^2     ].   (5)
Thus

    −(1/2)(x − µ_X)′ C_X^{−1} (x − µ_X)

    = −[x_1 − µ_1  x_2 − µ_2] [ 1/σ_1^2      −ρ/(σ_1σ_2) ] [ x_1 − µ_1 ] / (2(1 − ρ^2))   (6)
                              [ −ρ/(σ_1σ_2)  1/σ_2^2     ] [ x_2 − µ_2 ]

    = −[x_1 − µ_1  x_2 − µ_2] [ (x_1−µ_1)/σ_1^2 − ρ(x_2−µ_2)/(σ_1σ_2)  ] / (2(1 − ρ^2))   (7)
                              [ −ρ(x_1−µ_1)/(σ_1σ_2) + (x_2−µ_2)/σ_2^2 ]

    = −[ (x_1−µ_1)^2/σ_1^2 − 2ρ(x_1−µ_1)(x_2−µ_2)/(σ_1σ_2) + (x_2−µ_2)^2/σ_2^2 ] / (2(1 − ρ^2)).   (8)
Combining Equations (1), (3), and (8), we see that

    f_X(x) = (1/(2π σ_1 σ_2 √(1 − ρ^2))) exp[ −( (x_1−µ_1)^2/σ_1^2 − 2ρ(x_1−µ_1)(x_2−µ_2)/(σ_1σ_2) + (x_2−µ_2)^2/σ_2^2 ) / (2(1 − ρ^2)) ],   (9)
    C_Y = Q C_X Q′   (1)

    = [ cos θ  −sin θ ] [ σ_1^2  0     ] [ cos θ   sin θ ]   (2)
      [ sin θ   cos θ ] [ 0      σ_2^2 ] [ −sin θ  cos θ ]

    = [ σ_1^2 cos^2 θ + σ_2^2 sin^2 θ   (σ_1^2 − σ_2^2) sin θ cos θ   ]
      [ (σ_1^2 − σ_2^2) sin θ cos θ     σ_1^2 sin^2 θ + σ_2^2 cos^2 θ ].   (3)
We conclude that Y1 and Y2 have covariance
Cov [Y1 , Y2 ] = CY (1, 2) = (σ12 − σ22 ) sin θ cos θ. (4)
Since Y1 and Y2 are jointly Gaussian, they are independent if and only if Cov[Y1 , Y2 ] = 0.
Thus, Y_1 and Y_2 are independent for all θ if and only if σ_1^2 = σ_2^2. In this case, when σ_1^2 = σ_2^2, the joint PDF f_X(x) is symmetric in x_1 and x_2. In terms of polar coordinates, the PDF f_X(x) = f_{X_1,X_2}(x_1, x_2) depends on r = √(x_1^2 + x_2^2) but for a given r, is constant for all φ = tan^{−1}(x_2/x_1). The transformation of X to Y is just a rotation of the coordinate system by θ, which preserves this circular symmetry.
(b) If σ_2^2 > σ_1^2, then Y_1 and Y_2 are independent if and only if sin θ cos θ = 0. This occurs in the following cases:
• θ = 0: Y_1 = X_1 and Y_2 = X_2
• θ = π/2: Y_1 = −X_2 and Y_2 = X_1
• θ = π: Y_1 = −X_1 and Y_2 = −X_2
• θ = −π/2: Y_1 = X_2 and Y_2 = −X_1
In all four cases, Y1 and Y2 are just relabeled versions, possibly with sign changes, of X 1
and X 2 . In these cases, Y1 and Y2 are independent because X 1 and X 2 are independent. For
other values of θ, each Yi is a linear combination of both X 1 and X 2 . This mixing results in
correlation between Y1 and Y2 .
The covariance matrix of W is

    C_W = E[(W − µ_W)(W − µ_W)′]   (2)

    = E[ [ X − µ_X ] [(X − µ_X)′ (Y − µ_Y)′] ]   (3)
         [ Y − µ_Y ]

    = [ E[(X − µ_X)(X − µ_X)′]   E[(X − µ_X)(Y − µ_Y)′] ]   (4)
      [ E[(Y − µ_Y)(X − µ_X)′]   E[(Y − µ_Y)(Y − µ_Y)′] ]

    = [ C_X   C_XY ]
      [ C_YX  C_Y  ].   (5)
(a) If you are familiar with the Gram-Schmidt procedure, the argument is that applying Gram-Schmidt to the rows of A yields m orthogonal row vectors. It is then possible to augment those vectors with an additional n − m orthogonal vectors. Those orthogonal vectors would be the rows of Ã.
An alternate argument is that since A has rank m, the nullspace of A, i.e., the set of all vectors y such that Ay = 0, has dimension n − m. We can choose any n − m linearly independent vectors y_1, y_2, . . . , y_{n−m} in the nullspace of A. We then define Ã′ to have columns y_1, y_2, . . . , y_{n−m}. It follows that AÃ′ = 0.
is a rank n matrix. To prove this fact, we will suppose there exists w such that Āw = 0, and then show that w is a zero vector. Since A and Ã together have n linearly independent rows, we can write the vector w as a linear combination of the rows of A and Ã. That is, for some v and ṽ,

    w = A′v + Ã′ṽ.   (3)
The condition Āw = 0 implies

    [ A          ] (A′v + Ã′ṽ) = [ 0 ]
    [ ÃC_X^{−1}  ]               [ 0 ].   (4)

This implies

    AA′v + AÃ′ṽ = 0   (5)

    ÃC_X^{−1}A′v + ÃC_X^{−1}Ã′ṽ = 0   (6)

Since AÃ′ = 0, Equation (5) implies that AA′v = 0. Since A is rank m, AA′ is an m × m rank m matrix. It follows that v = 0. We can then conclude from Equation (6) that

    ÃC_X^{−1}Ã′ṽ = 0.   (7)
(c) We note that by Theorem 5.16, the Gaussian vector Ȳ = ĀX has covariance matrix

    C̄ = Ā C_X Ā′.   (8)

Since (C_X^{−1})′ = C_X^{−1},

    Ā′ = [A′ (ÃC_X^{−1})′] = [A′ C_X^{−1}Ã′].   (9)

Applying this result to Equation (8) yields

    C̄ = [ A         ] C_X [A′ C_X^{−1}Ã′] = [ AC_X ] [A′ C_X^{−1}Ã′] = [ AC_XA′   AÃ′         ]
         [ ÃC_X^{−1} ]                       [ Ã    ]                   [ ÃA′      ÃC_X^{−1}Ã′ ].   (10)

Since AÃ′ = 0,

    C̄ = [ AC_XA′  0           ]   =   [ C_Y  0    ]
         [ 0       ÃC_X^{−1}Ã′ ]       [ 0    C_Ŷ ].   (11)

We see that C̄ is a block diagonal covariance matrix. From the claim of Problem 5.7.8, we can conclude that Y and Ŷ are independent Gaussian random vectors.
and covariance matrix

    C_Y = Var[Y] = A C_X A′   (3)

    = [1/3 1/3 1/3] [ 4  −2   1 ] [ 1/3 ]
                    [ −2  4  −2 ] [ 1/3 ]   =   2/3   (4)
                    [ 1  −2   4 ] [ 1/3 ]

Thus Y is a Gaussian (6, √(2/3)) random variable, implying

    P[Y > 4] = P[(Y − 6)/√(2/3) > (4 − 6)/√(2/3)] = 1 − Φ(−√6) = Φ(√6) = 0.9928   (5)
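Both the quadratic form a′C_Xa = 2/3 and the final probability can be reproduced directly; the Python sketch below (a numeric confirmation, with Φ built from erf) carries out the standardization step.

```python
# Check Var[Y] = a' C_X a = 2/3 for a = [1/3 1/3 1/3]', and
# P[Y > 4] = Phi(sqrt(6)) ≈ 0.9928.
import math

CX = [[4, -2, 1], [-2, 4, -2], [1, -2, 4]]
a = [1 / 3, 1 / 3, 1 / 3]
var_y = sum(a[i] * CX[i][j] * a[j] for i in range(3) for j in range(3))
assert abs(var_y - 2 / 3) < 1e-12

z = (4 - 6) / math.sqrt(var_y)          # standardized threshold, equals -sqrt(6)
p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
assert abs(p - 0.9928) < 5e-4
```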
(a) The covariance matrix C_X has Var[X_i] = 25 for each diagonal entry. For i ≠ j, the i, jth entry of C_X is

    [C_X]_{ij} = ρ_{X_i X_j} √(Var[X_i] Var[X_j]) = (0.8)(25) = 20   (1)

The covariance matrix of X is a 10 × 10 matrix of the form

    C_X = [ 25  20  · · ·  20 ]
          [ 20  25  · · ·  20 ]
          [ ⋮       ⋱      ⋮  ]   (2)
          [ 20  · · ·  20  25 ].
Since Y is Gaussian,

    P[Y ≤ 25] = P[(Y − 20.5)/√20.5 ≤ (25 − 20.5)/√20.5] = Φ(0.9939) = 0.8399.   (5)
From this model, the vector T = [T_1 · · · T_31]′ has covariance matrix

    C_T = [ C_T[0]   C_T[1]   · · ·   C_T[30] ]
          [ C_T[1]   C_T[0]   ⋱       ⋮       ]
          [ ⋮        ⋱        ⋱       C_T[1]  ]   (2)
          [ C_T[30]  · · ·    C_T[1]  C_T[0]  ].
If you have read the solution to Quiz 5.8, you know that CT is a symmetric Toeplitz matrix and
that M ATLAB has a toeplitz function to generate Toeplitz matrices. Using the toeplitz
function to generate the covariance matrix, it is easy to use gaussvector to generate samples of
the random vector T. Here is the code for estimating P[A] using m samples.
function p=julytemp583(m);
c=36./(1+(0:30));
CT=toeplitz(c);
mu=80*ones(31,1);
T=gaussvector(mu,CT,m);
Y=sum(T)/31;
Tmin=min(T);
p=sum((Tmin>=72) & (Y <= 82))/m;

>> julytemp583(100000)
ans =
    0.0684
>> julytemp583(100000)
ans =
    0.0706
>> julytemp583(100000)
ans =
    0.0714
>> julytemp583(100000)
ans =
    0.0701
We see from repeated experiments with m = 100,000 trials that P[A] ≈ 0.07.
A program to estimate P[W ≤ 25] uses gaussvector to generate m sample vectors of race times X. In the program sailboats.m, X is a 10 × m matrix such that each column of X is a vector of race times. In addition, min(X) is a row vector indicating the fastest time in each race.
function p=sailboats(w,m)
%Usage: p=sailboats(f,m)
%In Problem 5.8.4, W is the
%winning time in a 10 boat race.
%We use m trials to estimate
%P[W<=w]
CX=(5*eye(10))+(20*ones(10,10));
mu=35*ones(10,1);
X=gaussvector(mu,CX,m);
W=min(X);
p=sum(W<=w)/m;

>> sailboats(25,10000)
ans =
    0.0827
>> sailboats(25,100000)
ans =
    0.0801
>> sailboats(25,100000)
ans =
    0.0803
>> sailboats(25,100000)
ans =
    0.0798
We see from repeated experiments with m = 100,000 trials that P[W ≤ 25] ≈ 0.08.
function jackpot=lottery1(jstart,M,D)
%Usage: function j=lottery1(jstart,M,D)
%Perform M trials of the D day lottery
%of Problem 5.5.5 and initial jackpot jstart
jackpot=zeros(M,1);
for m=1:M,
disp(m)
jackpot(m)=jstart;
for d=1:D,
jackpot(m)=jackpot(m)+(0.5*poissonrv(jackpot(m),1));
end
end
The main problem with lottery1 is that it will run very slowly. Each call to poissonrv generates an entire Poisson PMF P_X(x) for x = 0, 1, . . . , x_max where x_max ≥ 2 · 10^6. This is slow in several ways. First, we repeat the calculation of Σ_{j=1}^{x_max} log j with each call to poissonrv. Second, each call to poissonrv asks for a Poisson sample value with expected value α > 1 · 10^6. In these cases, for small values of x, P_X(x) = α^x e^{−α}/x! is so small that it is less than the smallest nonzero number that MATLAB can store!
To speed up the simulation, we have written a program bigpoissonrv which generates Poisson (α) samples for large α. The program makes an approximation that for a Poisson (α) random variable X, P_X(x) ≈ 0 for |x − α| > 6√α. Since X has standard deviation √α, we are assuming that X cannot be more than six standard deviations away from its mean value. The error in this approximation is very small. In fact, for a Poisson (a) random variable, the program poissonsigma(a,k) calculates the error P[|X − a| > k√a]. Here is poissonsigma.m and some simple calculations:
function err=poissonsigma(a,k);
xmin=max(0,floor(a-k*sqrt(a)));
xmax=a+ceil(k*sqrt(a));
sx=xmin:xmax;
logfacts =cumsum([0,log(1:xmax)]);
%logfacts includes 0 in case xmin=0
%Now we extract needed values:
logfacts=logfacts(sx+1);
%pmf(i,:) is a Poisson a(i) PMF
% from xmin to xmax
pmf=exp(-a+ (log(a)*sx)-(logfacts));
err=1-sum(pmf);

>> poissonsigma(1,6)
ans =
   1.0249e-005
>> poissonsigma(10,6)
ans =
   2.5100e-007
>> poissonsigma(100,6)
ans =
   1.2620e-008
>> poissonsigma(1000,6)
ans =
   2.6777e-009
>> poissonsigma(10000,6)
ans =
   1.8081e-009
>> poissonsigma(100000,6)
ans =
  -1.6383e-010
The error reported by poissonsigma(a,k) should always be positive. In fact, we observe negative errors for very large a. For large α and x, numerical calculation of P_X(x) = α^x e^{−α}/x! is tricky because we are taking ratios of very large numbers. In fact, for α = x = 1,000,000, MATLAB calculation of α^x and x! will report infinity while e^{−α} will evaluate as zero. Our method of calculating the Poisson (α) PMF uses the fact that ln x! = Σ_{j=1}^x ln j to calculate

    exp(ln P_X(x)) = exp( x ln α − α − Σ_{j=1}^x ln j ).   (1)
This method works reasonably well except that the calculation of the logarithm has finite precision. The consequence is that the calculated sum over the PMF can vary from 1 by a very small amount, on the order of 10^{−7} in our experiments. In our problem, the error is inconsequential; however, one should keep in mind that this may not be the case in other experiments using large Poisson random variables. In any case, we can conclude that within the accuracy of MATLAB’s simulated experiments, the approximations used by bigpoissonrv are not significant.
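The same log-domain trick works in any language; the Python sketch below (an analogue of the MATLAB approach, with lgamma in place of the cumulative-sum of logs) confirms that the PMF mass within ±6√α of the mean is essentially 1 even for α = 10^6.

```python
# A Python analogue of the log-domain trick: compute the Poisson(alpha) PMF
# as exp(x ln(alpha) - alpha - ln x!) via lgamma, and confirm the total mass
# within +-6 sqrt(alpha) of the mean is essentially 1 for alpha = 10^6.
import math

def poisson_pmf(x, alpha):
    # ln P_X(x) = x ln(alpha) - alpha - ln(x!), with ln(x!) = lgamma(x + 1)
    return math.exp(x * math.log(alpha) - alpha - math.lgamma(x + 1))

alpha = 1_000_000
sigma = math.sqrt(alpha)
lo, hi = int(alpha - 6 * sigma), int(alpha + 6 * sigma)
mass = sum(poisson_pmf(x, alpha) for x in range(lo, hi + 1))
assert abs(mass - 1.0) < 1e-5
```

The residual deviation from 1 reflects both the truncated ±6σ tails and the finite precision of the logarithms, matching the behavior described above.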
The other feature of bigpoissonrv is that for a vector alpha corresponding to expected values [α_1 · · · α_m]′, bigpoissonrv returns a vector X such that X(i) is a Poisson alpha(i) sample. The work of calculating the sum of logarithms is done only once for all calculated samples. The result is a significant savings in cpu time as long as the values of alpha are reasonably close to each other.
function x=bigpoissonrv(alpha)
%for vector alpha, returns a vector x such that
% x(i) is a Poisson (alpha(i)) rv
%set up Poisson CDF from xmin to xmax for each alpha(i)
alpha=alpha(:);
amin=min(alpha(:));
amax=max(alpha(:));
%Assume Poisson PMF is negligible +-6 sigma from the average
xmin=max(0,floor(amin-6*sqrt(amax)));
xmax=amax+ceil(6*sqrt(amax));%set max range
sx=xmin:xmax;
%Now we include the basic code of poissonpmf (but starting at xmin)
logfacts =cumsum([0,log(1:xmax)]); %include 0 in case xmin=0
logfacts=logfacts(sx+1); %extract needed values
%pmf(i,:) is a Poisson alpha(i) PMF from xmin to xmax
pmf=exp(-alpha*ones(size(sx))+ ...
(log(alpha)*sx)-(ones(size(alpha))*logfacts));
cdf=cumsum(pmf,2); %each row is a cdf
x=(xmin-1)+sum((rand(size(alpha))*ones(size(sx)))<=cdf,2);
Finally, given bigpoissonrv, we can write a short program lottery that simulates trials of the
jackpot experiment. Ideally, we would like to use lottery to perform m = 1,000 trials in a single
pass. In general, MATLAB is more efficient when calculations are executed in parallel using vectors. However, in bigpoissonrv, the matrix pmf will have m rows and at least 12√α = 12,000 columns. For m more than several hundred, MATLAB running on my laptop reported an “Out of Memory” error. Thus, we wrote the program lottery to perform M trials at once and to repeat that N times. The output is an M × N matrix where each i, j entry is a sample jackpot after seven days.
function jackpot=lottery(jstart,M,N,D)
%Usage: function j=lottery(jstart,M,N,D)
%Perform M trials of the D day lottery
%of Problem 5.5.5 and initial jackpot jstart
jackpot=zeros(M,N);
for n=1:N,
jackpot(:,n)=jstart*ones(M,1);
for d=1:D,
disp(d);
jackpot(:,n)=jackpot(:,n)+(0.5*bigpoissonrv(jackpot(:,n)));
end
end
[Figure: histogram of the frequency of simulated jackpots J, with J on the order of 1.708 × 10^7.]
If you go back and solve Problem 5.5.5, you will see that the jackpot J has expected value
E[J ] = (3/2)7 × 106 = 1.70859 × 107 dollars. Thus it is not surprising that the histogram is
centered around a jackpot of 1.708 × 107 dollars. If we did more trials, and used more histogram
bins, the histogram would appear to converge to the shape of a Gaussian PDF. This fact is explored
in Chapter 6.
Problem Solutions – Chapter 6
(a) The PMF of N1 , the number of phone calls needed to obtain the correct answer, can be
determined by observing that if the correct answer is given on the nth call, then the previous
n − 1 calls must have given wrong answers so that
    P_{N_1}(n) = { (3/4)^{n−1}(1/4)   n = 1, 2, . . .
                 { 0                  otherwise        (1)
(b) N1 is a geometric random variable with parameter p = 1/4. In Theorem 2.5, the mean of a
geometric random variable is found to be 1/ p. For our case, E[N1 ] = 4.
(c) Using the same logic as in part (a), we recognize that in order for n to be the fourth correct answer, the previous n − 1 calls must have contained exactly 3 correct answers and the fourth correct answer must have arrived on the n-th call. This is described by a Pascal random variable.

    P_{N_4}(n) = { (n − 1 choose 3)(3/4)^{n−4}(1/4)^4   n = 4, 5, . . .
                 { 0                                    otherwise        (2)
(d) Using the hint given in the problem statement we can find the mean of N4 by summing up
the means of the 4 identically distributed geometric random variables each with mean 4. This
gives E[N4 ] = 4E[N1 ] = 16.
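The Pascal mean E[N_4] = 16 can also be checked directly from the PMF in (2); this Python sketch (an independent numeric check, with the infinite sum truncated where the tail is negligible) sums n · P_{N_4}(n).

```python
# Check E[N4] = 16 by summing n * P_N4(n) for the Pascal (4, 1/4) PMF;
# the geometric tail beyond n = 600 is negligible.
from math import comb

p = 1 / 4
mean = sum(n * comb(n - 1, 3) * (3 / 4) ** (n - 4) * p ** 4
           for n in range(4, 600))
assert abs(mean - 16) < 1e-6
```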
Thus the variance of X is Var[X ] = E[X 2 ] − (E[X ])2 = 1/18. By symmetry, it should be apparent
that E[Y ] = E[X ] = 1/3 and Var[Y ] = Var[X ] = 1/18. To find the covariance, we first find the
correlation
    E[XY] = ∫_0^1 ∫_0^{1−x} 2xy dy dx = ∫_0^1 x(1 − x)^2 dx = 1/12   (5)

The covariance is Cov[X, Y] = E[XY] − E[X]E[Y] = 1/12 − 1/9 = −1/36.
For this specific problem, it’s arguable whether it would be easier to find Var[W] by first deriving the CDF and PDF of W. In particular, for 0 ≤ w ≤ 1,

    F_W(w) = P[X + Y ≤ w] = ∫_0^w ∫_0^{w−x} 2 dy dx = ∫_0^w 2(w − x) dx = w^2   (8)
The variance of W is Var[W ] = E[W 2 ] − (E[W ])2 = 1/18. Not surprisingly, we get the same
answer both ways.
Problem 6.1.5 Solution
Since each X i has zero mean, the mean of Yn is
E [Yn ] = E [X n + X n−1 + X n−2 ] /3 = 0 (1)
Since Yn has zero mean, the variance of Yn is
Var[Y_n] = E[Y_n^2]   (2)
         = (1/9) E[(X_n + X_(n-1) + X_(n-2))^2]   (3)
         = (1/9) E[X_n^2 + X_(n-1)^2 + X_(n-2)^2 + 2X_n X_(n-1) + 2X_n X_(n-2) + 2X_(n-1) X_(n-2)]   (4)
         = (1/9)(1 + 1 + 1 + 2/4 + 0 + 2/4) = 4/9   (5)
For 1 ≤ w ≤ 2, the line X + Y = w splits the integration region in two, so
F_W(w) = ∫_0^(w-1) ∫_x^1 2 dy dx + ∫_(w-1)^(w/2) ∫_x^(w-x) 2 dy dx   (3)
       = 2w - 1 - w^2/2   (4)
Putting all the parts together gives the CDF FW (w) and (by taking the derivative) the PDF f W (w).
F_W(w) = { 0, w < 0;  w^2/2, 0 ≤ w ≤ 1;  2w - 1 - w^2/2, 1 ≤ w ≤ 2;  1, w > 2 }
f_W(w) = { w, 0 ≤ w ≤ 1;  2 - w, 1 ≤ w ≤ 2;  0, otherwise }   (5)
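As a sanity check (a sketch, not part of the original solution): a pair with joint PDF 2 on 0 ≤ x ≤ y ≤ 1 can be simulated by sorting two independent uniform samples, and the CDF value F_W(1.5) = 2(1.5) - 1 - (1.5)^2/2 = 0.875 can be compared against a Monte Carlo estimate:

```python
import random

rng = random.Random(7)
trials = 100_000
w = 1.5
count = 0
for _ in range(trials):
    u, v = rng.random(), rng.random()
    x, y = min(u, v), max(u, v)   # (X, Y) has joint PDF 2 on 0 <= x <= y <= 1
    if x + y <= w:
        count += 1
empirical = count / trials
formula = 2*w - 1 - w**2/2        # F_W(w) for 1 <= w <= 2, i.e. 0.875
print(empirical, formula)
```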
Problem 6.2.2 Solution
The joint PDF of X and Y is
f_X,Y(x, y) = 1 for 0 ≤ x, y ≤ 1; 0 otherwise   (1)
Proceeding as in Problem 6.2.1, we must first find FW (w) by integrating over the square defined by
0 ≤ x, y ≤ 1. Again we are forced to find FW (w) in parts as we did in Problem 6.2.1 resulting in
the following integrals for their appropriate regions. For 0 ≤ w ≤ 1,
F_W(w) = ∫_0^w ∫_0^(w-x) dy dx = w^2/2   (2)
For 1 ≤ w ≤ 2,
F_W(w) = ∫_0^(w-1) ∫_0^1 dx dy + ∫_(w-1)^1 ∫_0^(w-y) dx dy = 2w - 1 - w^2/2   (3)
The complete CDF FW (w) is shown below along with the corresponding PDF f W (w) = d FW (w)/dw.
F_W(w) = { 0, w < 0;  w^2/2, 0 ≤ w ≤ 1;  2w - 1 - w^2/2, 1 ≤ w ≤ 2;  1, w > 2 }
f_W(w) = { w, 0 ≤ w ≤ 1;  2 - w, 1 ≤ w ≤ 2;  0, otherwise }   (4)
Problem 6.2.4 Solution
In this problem, X and Y have joint PDF
f_X,Y(x, y) = 8xy for 0 ≤ y ≤ x ≤ 1; 0 otherwise   (1)
We can find the PDF of W using Theorem 6.4: f_W(w) = ∫_(-∞)^∞ f_X,Y(x, w - x) dx. The only tricky
part remaining is to determine the limits of the integration. First, for w < 0, f W (w) = 0. The two
remaining cases are shown in the accompanying figure. The shaded area shows where the joint PDF
f X,Y (x, y) is nonzero. The diagonal lines depict y = w − x as a function of x. The intersection of
the diagonal line and the shaded area define our limits of integration.
For 0 ≤ w ≤ 1,
f_W(w) = ∫_(w/2)^w 8x(w - x) dx = [4wx^2 - 8x^3/3] from w/2 to w = 2w^3/3   (2)-(3)
For 1 ≤ w ≤ 2,
f_W(w) = ∫_(w/2)^1 8x(w - x) dx = [4wx^2 - 8x^3/3] from w/2 to 1 = 4w - 8/3 - 2w^3/3   (4)-(6)
Since X + Y ≤ 2, f_W(w) = 0 for w > 2. Hence the complete expression for the PDF of W is
f_W(w) = { 2w^3/3, 0 ≤ w ≤ 1;  4w - 8/3 - 2w^3/3, 1 ≤ w ≤ 2;  0, otherwise }   (7)
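A quick validity check (not part of the original solution): integrating the two pieces of f_W exactly with rational arithmetic confirms the PDF integrates to 1:

```python
from fractions import Fraction as F

def F1(w):  # antiderivative of 2w^3/3
    return F(w)**4 / 6

def F2(w):  # antiderivative of 4w - 8/3 - 2w^3/3
    return 2*F(w)**2 - F(8, 3)*F(w) - F(w)**4 / 6

total = (F1(1) - F1(0)) + (F2(2) - F2(1))
print(total)  # 1, so f_W is a valid PDF
```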
Problem 6.2.6 Solution
The random variables K and J have PMFs
P_J(j) = α^j e^(-α)/j! for j = 0, 1, 2, ...; 0 otherwise
P_K(k) = β^k e^(-β)/k! for k = 0, 1, 2, ...; 0 otherwise   (1)
P[N = n] = Σ_(k=0)^n P_J(n-k) P_K(k)   (3)
         = Σ_(k=0)^n [α^(n-k) e^(-α)/(n-k)!] [β^k e^(-β)/k!]   (4)
         = [(α+β)^n e^(-(α+β))/n!] Σ_(k=0)^n [n!/(k!(n-k)!)] (α/(α+β))^(n-k) (β/(α+β))^k   (5)
The sum in (5) equals 1 because it is the sum of a binomial PMF over all possible values.
The PMF of N is the Poisson PMF
P_N(n) = (α+β)^n e^(-(α+β))/n! for n = 0, 1, 2, ...; 0 otherwise   (6)
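A quick numerical check of this result: convolving two Poisson PMFs term by term reproduces the Poisson (α + β) PMF (α = 2 and β = 3 are arbitrary example values):

```python
import math

def poisson_pmf(alpha, n):
    return alpha**n * math.exp(-alpha) / math.factorial(n)

alpha, beta = 2.0, 3.0   # arbitrary example rates
for n in range(12):
    conv = sum(poisson_pmf(alpha, n - k) * poisson_pmf(beta, k) for k in range(n + 1))
    direct = poisson_pmf(alpha + beta, n)
    assert abs(conv - direct) < 1e-12
print("convolution of Poisson PMFs matches the Poisson(alpha + beta) PMF")
```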
Problem 6.3.2 Solution
(a) By summing across the rows of the table, we see that J has PMF
P_J(j) = 0.6 for j = -2; 0.4 for j = -1   (1)
(b) Summing down the columns of the table, we see that K has PMF
P_K(k) = 0.7 for k = -1; 0.2 for k = 0; 0.1 for k = 1   (2)
(c) To find the PMF of M = J + K, it is easiest to annotate each entry in the table with the
corresponding value of M:
(d) One way to solve this problem is to find the MGF φ_M(s) and then take four derivatives.
Sometimes it's better to just work with the definition of E[M^4]:
E[M^4] = Σ_m P_M(m) m^4   (5)
       = 0.42(-3)^4 + 0.40(-2)^4 + 0.14(-1)^4 + 0.04(0)^4 = 40.56   (6)
As best I can tell, the purpose of this problem is to check that you know when not to use the
methods in this chapter.
Now to find the first moment, we evaluate the derivative of φ X (s) at s = 0.
E[X] = dφ_X(s)/ds at s=0 = [s(b e^(bs) - a e^(as)) - (e^(bs) - e^(as))] / [(b-a)s^2] at s=0   (2)
Direct evaluation of the above expression at s = 0 yields 0/0 so we must apply l’Hôpital’s rule and
differentiate the numerator and denominator.
E[X] = lim_(s→0) [b e^(bs) - a e^(as) + s(b^2 e^(bs) - a^2 e^(as)) - (b e^(bs) - a e^(as))] / [2(b-a)s]   (3)
     = lim_(s→0) (b^2 e^(bs) - a^2 e^(as)) / [2(b-a)] = (b+a)/2   (4)
To find the second moment of X, we first find that the second derivative of φ_X(s) is
d^2 φ_X(s)/ds^2 = [s^2(b^2 e^(bs) - a^2 e^(as)) - 2s(b e^(bs) - a e^(as)) + 2(e^(bs) - e^(as))] / [(b-a)s^3]   (5)
Substituting s = 0 will yield 0/0 so once again we apply l’Hôpital’s rule and differentiate the
numerator and denominator.
E[X^2] = lim_(s→0) d^2 φ_X(s)/ds^2 = lim_(s→0) s^2(b^3 e^(bs) - a^3 e^(as)) / [3(b-a)s^2]   (6)
       = (b^3 - a^3) / [3(b-a)] = (b^2 + ab + a^2)/3   (7)
In this case, it is probably simpler to find these moments without using the MGF.
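As a check on these MGF-based moments (a sketch with arbitrary endpoints a = 2, b = 5, not values from the problem), a midpoint-rule integration of the uniform PDF reproduces (a + b)/2 and (a^2 + ab + b^2)/3:

```python
# Midpoint-rule check of the uniform(a, b) moments derived above via the MGF.
a, b = 2.0, 5.0          # arbitrary illustrative endpoints
n = 100_000
dx = (b - a) / n
m1 = m2 = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx          # midpoint of the i-th subinterval
    m1 += x * dx / (b - a)          # E[X]   = integral of x / (b - a)
    m2 += x * x * dx / (b - a)      # E[X^2] = integral of x^2 / (b - a)
print(m1, m2)  # (a+b)/2 = 3.5 and (a^2+ab+b^2)/3 = 13.0
```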
To calculate the moments of Y, we define Y = X + µ so that Y is Gaussian (µ, σ). In this case the
second moment of Y is
E[Y^2] = E[(X + µ)^2] = E[X^2 + 2µX + µ^2] = σ^2 + µ^2.   (5)
Similarly, the third moment of Y is
E[Y^3] = E[(X + µ)^3]   (6)
       = E[X^3 + 3µX^2 + 3µ^2 X + µ^3] = 3µσ^2 + µ^3.   (7)
Finally, the fourth moment of Y is
E[Y^4] = E[(X + µ)^4]   (8)
       = E[X^4 + 4µX^3 + 6µ^2 X^2 + 4µ^3 X + µ^4]   (9)
       = 3σ^4 + 6µ^2 σ^2 + µ^4.   (10)
We can use these results to derive two well-known results. We observe that we can directly use the
PMF P_K(k) to calculate the moments
E[K] = (1/n) Σ_(k=1)^n k      E[K^2] = (1/n) Σ_(k=1)^n k^2   (12)
Using the answers we found for E[K] and E[K^2], we have the formulas
Σ_(k=1)^n k = n(n+1)/2      Σ_(k=1)^n k^2 = n(n+1)(2n+1)/6   (13)
(c) Although we could just use the fact that the expectation of the sum equals the sum of the
expectations, the problem asks us to find the moments using φ M (s). In this case,
E[M] = dφ_M(s)/ds at s=0 = n(1 - p + pe^s)^(n-1) pe^s at s=0 = np   (3)
The variance of M is
Var[M] = E M 2 − (E [M])2 = np(1 − p) = n Var[K ] (7)
Problem 6.4.4 Solution
Based on the problem statement, the number of points X i that you earn for game i has PMF
P_Xi(x) = 1/3 for x = 0, 1, 2; 0 otherwise   (1)
(a) The MGF of X i is
φ_Xi(s) = E[e^(sXi)] = 1/3 + e^s/3 + e^(2s)/3   (2)
Since Y = X 1 + · · · + X n , Theorem 6.8 implies
φY (s) = [φ X i (s)]n = [1 + es + e2s ]n /3n (3)
Hence, Var[X i ] = E[X i2 ]−(E[X i ])2 = 2/3. By Theorems 6.1 and 6.3, the mean and variance
of Y are
E [Y ] = n E [X ] = n (6)
Var[Y ] = n Var[X ] = 2n/3 (7)
Another, more complicated, way to find the mean and variance is to evaluate derivatives of
φ_Y(s) at s = 0.
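These values can be confirmed by building the exact PMF of Y by repeated convolution (n = 4 here is an arbitrary example size):

```python
# Exact PMF of Y = X_1 + ... + X_n for X_i uniform on {0, 1, 2}.
n = 4
pmf = {0: 1.0}
for _ in range(n):
    new = {}
    for s, p in pmf.items():
        for x in (0, 1, 2):                       # each X_i equally likely 0, 1, 2
            new[s + x] = new.get(s + x, 0.0) + p / 3
    pmf = new

mean = sum(s * p for s, p in pmf.items())
var = sum(s * s * p for s, p in pmf.items()) - mean**2
print(mean, var)  # n = 4 and 2n/3 = 8/3
```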
(c) Since the MGF of R_i has the same form as that of a Poisson random variable with parameter α = 2i,
we can conclude that R_i is in fact a Poisson random variable with parameter α = 2i.
That is,
P_Ri(r) = (2i)^r e^(-2i)/r! for r = 0, 1, 2, ...; 0 otherwise   (3)
(d) Because Ri is a Poisson random variable with parameter α = 2i, the mean and variance of
Ri are then both 2i.
Problem 6.4.6 Solution
The total energy stored over the 31 days is
Y = X 1 + X 2 + · · · + X 31 (1)
The random variables X 1 , . . . , X 31 are Gaussian and independent but not identically distributed.
However, since the sum of independent Gaussian random variables is Gaussian, we know that Y is
Gaussian. Hence, all we need to do is find the mean and variance of Y in order to specify the PDF
of Y . The mean of Y is
E[Y] = Σ_(i=1)^31 E[X_i] = Σ_(i=1)^31 (32 - i/4) = 32(31) - 31(32)/8 = 868 kW-hr   (2)
Problem 6.5.1 Solution
(a) From Table 6.1, we see that the exponential random variable X has MGF
φ_X(s) = λ/(λ - s)   (1)
(b) Note that K is a geometric random variable identical to the geometric random variable X in
Table 6.1 with parameter p = 1 − q. From Table 6.1, we know that random variable K has
MGF
φ_K(s) = (1-q)e^s / (1 - qe^s)   (2)
Since K is independent of each X i , V = X 1 + · · · + X K is a random sum of random variables.
From Theorem 6.12,
φ_V(s) = φ_K(ln φ_X(s)) = (1-q)[λ/(λ-s)] / (1 - q[λ/(λ-s)]) = (1-q)λ / ((1-q)λ - s)   (3)
We see that the MGF of V is that of an exponential random variable with parameter (1 − q)λ.
The PDF of V is
f_V(v) = (1-q)λ e^(-(1-q)λv) for v ≥ 0; 0 otherwise   (4)
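Since K here is geometric on {1, 2, ...} with success probability 1 - q, V can be simulated directly; this sketch (with illustrative values λ = 2 and q = 1/2, not values from the problem) checks that the sample mean of V matches 1/((1-q)λ):

```python
import random

rng = random.Random(3)
lam, q = 2.0, 0.5        # arbitrary example parameters
trials = 100_000
total = 0.0
for _ in range(trials):
    v = rng.expovariate(lam)          # X_1; the sum always has K >= 1 terms
    while rng.random() < q:           # with probability q, add another X_i
        v += rng.expovariate(lam)
    total += v
print(total / trials)  # close to 1/((1-q)*lam) = 1.0
```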
K = X_1 + · · · + X_N   (3)
We see that K has the MGF of a Poisson random variable with mean E[K] = 30(2/3) = 20,
variance Var[K] = 20, and PMF
P_K(k) = 20^k e^(-20)/k! for k = 0, 1, ...; 0 otherwise   (5)
Problem 6.5.3 Solution
In this problem, Y = X 1 + · · · + X N is not a straightforward random sum of random variables
because N and the X_i's are dependent. In particular, given N = n, we know that there were
exactly 100 heads in n flips. Hence, given N, X_1 + · · · + X_N = 100 no matter what the actual
value of N is. Hence Y = 100 every time and the PMF of Y is
P_Y(y) = 1 for y = 100; 0 otherwise   (1)
V = Y_1 + · · · + Y_K   (1)
The PDF of V cannot be found in a simple form. However, we can use the MGF to calculate the
mean and variance. In particular,
E[V] = dφ_V(s)/ds at s=0 = [300/(1-15s)^2] e^(300s/(1-15s)) at s=0 = 300   (5)
E[V^2] = d^2 φ_V(s)/ds^2 at s=0   (6)
       = { [300/(1-15s)^2]^2 + 9000/(1-15s)^3 } e^(300s/(1-15s)) at s=0 = 99,000   (7)
Thus, V has variance Var[V] = E[V^2] - (E[V])^2 = 9,000 and standard deviation σ_V ≈ 94.9.
A second way to calculate the mean and variance of V is to use Theorem 6.13 which says
Problem 6.5.5 Solution
Since each ticket is equally likely to have one of C(46,6) combinations, the probability a ticket is a
winner is
q = 1/C(46,6)   (1)
Let X i = 1 if the ith ticket sold is a winner; otherwise X i = 0. Since the number K of tickets sold
has a Poisson PMF with E[K ] = r , the number of winning tickets is the random sum
V = X1 + · · · + X K (2)
From Appendix A,
φ_K(s) = e^(r[e^s - 1])      φ_X(s) = (1-q) + qe^s   (3)
By Theorem 6.12,
φ_V(s) = φ_K(ln φ_X(s)) = e^(r[φ_X(s) - 1]) = e^(rq(e^s - 1))   (4)
Hence, we see that V has the MGF of a Poisson random variable with mean E[V ] = rq. The PMF
of V is
P_V(v) = (rq)^v e^(-rq)/v! for v = 0, 1, 2, ...; 0 otherwise   (5)
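As a rough check (a sketch with small stand-in values r = 10 and q = 0.3, not the lottery's actual parameters), binomial thinning of a Poisson number of tickets should give a Poisson (rq) count of winners:

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method for drawing a Poisson sample (adequate for small means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(11)
r, q, trials = 10.0, 0.3, 50_000
total = 0
for _ in range(trials):
    k = poisson_sample(r, rng)                              # tickets sold
    total += sum(1 for _ in range(k) if rng.random() < q)   # winning tickets
print(total / trials)  # close to r*q = 3.0
```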
(a) We can view K as a shifted geometric random variable. To find the MGF, we start from first
principles with Definition 6.1:
φ_K(s) = Σ_(k=0)^∞ e^(sk) p(1-p)^k = p Σ_(k=0)^∞ [(1-p)e^s]^k = p / (1 - (1-p)e^s)   (1)
(b) First, we need to recall that each X_i has MGF φ_X(s) = e^(s + s^2/2). From Theorem 6.12, the
MGF of R is
φ_R(s) = φ_K(ln φ_X(s)) = φ_K(s + s^2/2) = p / (1 - (1-p)e^(s + s^2/2))   (2)
(c) To use Theorem 6.13, we first need to calculate the mean and variance of K :
E[K] = dφ_K(s)/ds at s=0 = p(1-p)e^s / [1 - (1-p)e^s]^2 at s=0 = (1-p)/p   (3)
E[K^2] = d^2 φ_K(s)/ds^2 at s=0 = p(1-p) [(1 - (1-p)e^s)e^s + 2(1-p)e^(2s)] / [1 - (1-p)e^s]^3 at s=0   (4)
       = (1-p)(2-p)/p^2   (5)
Hence, Var[K] = E[K^2] - (E[K])^2 = (1-p)/p^2. Finally, we can use Theorem 6.13 to
write
Var[R] = E[K] Var[X] + (E[X])^2 Var[K] = (1-p)/p + (1-p)/p^2 = (1-p^2)/p^2   (6)
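As a numerical sanity check (not part of the original solution), a Monte Carlo sketch with p = 0.5 and X_i drawn as Gaussian (1, 1) — the distribution whose MGF is e^(s + s^2/2) — matches Var[R] = (1 - p^2)/p^2 = 3:

```python
import random

rng = random.Random(5)
p, trials = 0.5, 100_000
vals = []
for _ in range(trials):
    k = 0
    while rng.random() > p:          # K = number of failures before the first success
        k += 1
    vals.append(sum(rng.gauss(1.0, 1.0) for _ in range(k)))   # R = X_1 + ... + X_K
mean = sum(vals) / trials
var = sum(v * v for v in vals) / trials - mean**2
print(mean, var)  # close to E[R] = (1-p)/p = 1 and Var[R] = (1-p^2)/p^2 = 3
```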
Problem 6.5.7 Solution
The way to solve for the mean and variance of U is to use conditional expectations. Given K = k,
U = X 1 + · · · + X k and
E[U|K = k] = E[X_1 + · · · + X_k | X_1 + · · · + X_n = k]   (1)
           = Σ_(i=1)^k E[X_i | X_1 + · · · + X_n = k]   (2)
This says that the random variable E[U|K] = K^2/n. Using iterated expectations, we have
E[U] = E[E[U|K]] = E[K^2]/n   (9)
Since K is a binomial random variable, we know that E[K] = np and Var[K] = np(1-p). Thus,
E[U] = E[K^2]/n = (Var[K] + (E[K])^2)/n = p(1-p) + np^2   (10)
On the other hand, V is just an ordinary random sum of independent random variables, with
mean E[V] = E[X]E[M] = np^2.
Problem 6.5.8 Solution
Using N to denote the number of games played, we can write the total number of points earned as
the random sum
Y = X1 + X2 + · · · + X N (1)
(a) It is tempting to use Theorem 6.12 to find φY (s); however, this would be wrong since each
X i is not independent of N . In this problem, we must start from first principles using iterated
expectations.
φ_Y(s) = E[E[e^(s(X_1+···+X_N)) | N]] = Σ_(n=1)^∞ P_N(n) E[e^(s(X_1+···+X_n)) | N = n]   (2)
It follows that
φ_Y(s) = [1/(e^s/2 + e^(2s)/2)] Σ_(n=1)^∞ P_N(n) e^(n ln[(e^s + e^(2s))/2]) = φ_N(ln[e^s/2 + e^(2s)/2]) / (e^s/2 + e^(2s)/2)   (9)
The tournament ends as soon as you lose a game. Since each game is a loss with probability
1/3 independent of any previous game, the number of games played has the geometric PMF
and corresponding MGF
P_N(n) = (2/3)^(n-1)(1/3) for n = 1, 2, ...; 0 otherwise      φ_N(s) = (1/3)e^s / (1 - (2/3)e^s)   (10)
Thus, the MGF of Y is
φ_Y(s) = (1/3) / (1 - (e^s + e^(2s))/3)   (11)
(b) To find the moments of Y, we evaluate the derivatives of the MGF φ_Y(s). Since
dφ_Y(s)/ds = (e^s + 2e^(2s)) / [9(1 - e^s/3 - e^(2s)/3)^2]   (12)
we see that
E[Y] = dφ_Y(s)/ds at s=0 = 3/[9(1/3)^2] = 3   (13)
If you're curious, you may notice that E[Y] = 3 precisely equals E[N]E[X_i], the answer
you would get if you mistakenly assumed that N and each X_i were independent. Although
this may seem like a coincidence, it's actually the result of a theorem known as Wald's equality.
The second derivative of the MGF is
d^2 φ_Y(s)/ds^2 = [(1 - e^s/3 - e^(2s)/3)(e^s + 4e^(2s)) + 2(e^s + 2e^(2s))^2/3] / [9(1 - e^s/3 - e^(2s)/3)^3]   (14)
The second moment of Y is
E[Y^2] = d^2 φ_Y(s)/ds^2 at s=0 = (5/3 + 6)/(1/3) = 23   (15)
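A quick simulation reproduces both moments (a sketch; it assumes, as the MGF derivation implies, that each win earns 1 or 2 points with equal probability, each game is won with probability 2/3, and the final, losing game earns no points):

```python
import random

rng = random.Random(9)
trials = 100_000
s1 = s2 = 0
for _ in range(trials):
    y = 0
    while rng.random() < 2/3:        # keep playing while you win
        y += rng.choice((1, 2))      # each win earns 1 or 2 points, equally likely
    # the tournament ends on the first loss, which earns 0 points
    s1 += y
    s2 += y * y
print(s1 / trials, s2 / trials)  # close to E[Y] = 3 and E[Y^2] = 23
```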
Problem 6.6.2 Solution
Knowing that the probability that a voice call occurs is 0.8 and the probability that a data call occurs
is 0.2, we can define the random variable D_i as the number of data calls in a single telephone call. It
is obvious that for any i there are only two possible values for D_i, namely 0 and 1. Furthermore, for
all i the D_i's are independent and identically distributed with the following PMF:
P_D(d) = 0.8 for d = 0; 0.2 for d = 1; 0 otherwise   (1)
With these facts, we can answer the questions posed by the problem.
(a) Let X_1, . . . , X_120 denote the set of call durations (measured in minutes) during the month.
From the problem statement, each X_i is an exponential (λ) random variable with E[X_i] =
1/λ = 2.5 min and Var[X_i] = 1/λ^2 = 6.25 min^2. The total number of minutes used during
the month is Y = X_1 + · · · + X_120. By Theorem 6.1 and Theorem 6.3,
(b) If the actual call duration is X_i, the subscriber is billed for M_i = ⌈X_i⌉ minutes. Because
each X_i is an exponential (λ) random variable, Theorem 3.9 says that M_i is a geometric (p)
random variable with p = 1 - e^(-λ) = 0.3297. Since M_i is geometric,
E[M_i] = 1/p = 3.033,      Var[M_i] = (1-p)/p^2 = 6.167.   (3)
The number of billed minutes in the month is B = M1 + · · · + M120 . Since M1 , . . . , M120 are
iid random variables,
E [B] = 120E [Mi ] = 364.0, Var[B] = 120 Var[Mi ] = 740.08. (4)
Similar to part (a), the subscriber is billed more than $36 if B > 315 minutes. The probability the
subscriber is billed more than $36 is
P[B > 315] = P[(B - 364)/√740.08 > (315 - 364)/√740.08] = Q(-1.8) = Φ(1.8) = 0.964.   (5)
(a) Since the number of requests N has expected value E[N] = 300 and variance Var[N] = 300,
we need C to satisfy
P[N > C] = P[(N - 300)/√300 > (C - 300)/√300]   (1)
         = 1 - Φ((C - 300)/√300) = 0.05.   (2)
From Table 3.1, we note that Φ(1.65) = 0.9505. Thus,
C = 300 + 1.65√300 = 328.6.   (3)
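The same C can be computed without the table by inverting the Gaussian CDF numerically; this sketch bisects on Φ built from math.erf (the exact 0.95 quantile 1.645 gives C ≈ 328.5, versus 328.6 from the rounded table value 1.65):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mean = var = 300.0
lo, hi = 300.0, 400.0
for _ in range(60):                  # bisect for the C with Phi((C-300)/sqrt(300)) = 0.95
    mid = (lo + hi) / 2
    if phi((mid - mean) / math.sqrt(var)) < 0.95:
        lo = mid
    else:
        hi = mid
C = (lo + hi) / 2
print(round(C, 1))  # about 328.5
```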
(b) For C = 328.6, the exact probability of overload is
(c) This part of the problem could be stated more carefully. Re-examining Definition 2.10 for the
Poisson random variable and the accompanying discussion in Chapter 2, we observe that the
webserver has an arrival rate of λ = 300 hits/min, or equivalently λ = 5 hits/sec. Thus in a
one second interval, the number of requests N is a Poisson (α = 5) random variable.
However, since the server “capacity” in a one second interval is not precisely defined, we will
make the somewhat arbitrary definition that the server capacity is C = 328.6/60 = 5.477
packets/sec. With this somewhat arbitrary definition, the probability of overload in a one
second interval is
P[N > C] = 1 - P[N ≤ 5.477] = 1 - P[N ≤ 5].   (5)
Because the number of arrivals in the interval is small, it would be a mistake to use the Central
Limit Theorem to estimate this overload probability. However, the direct calculation of the
overload probability is not hard. For E[N ] = α = 5,
1 - P[N ≤ 5] = 1 - Σ_(n=0)^5 P_N(n) = 1 - e^(-α) Σ_(n=0)^5 α^n/n! = 0.3840.   (6)
(d) Here we find the smallest C such that P[N ≤ C] ≥ 0.95. From the previous step, we know
that C > 5. Since N is a Poisson (α = 5) random variable, we need to find the smallest C
such that
such that
P[N ≤ C] = Σ_(n=0)^C α^n e^(-α)/n! ≥ 0.95.   (7)
(e) If we use the Central Limit theorem to estimate the overload probability in a one second
interval, we would use the facts that E[N ] = 5 and Var[N ] = 5 to estimate the the overload
probability as
1 - P[N ≤ 5] ≈ 1 - Φ((5 - 5)/√5) = 0.5   (8)
which overestimates the overload probability by roughly 30 percent. We recall from Chapter 2
that a Poisson random variable is the limiting case of the (n, p) binomial random variable when n is
large and np = α. In general, for fixed p, the Poisson and binomial PMFs become closer as n
increases. Since large n is also the case for which the central limit theorem applies, it is not
surprising that the CLT approximation for the Poisson (α) CDF is better when α = np is
large.
Comment: Perhaps a more interesting question is why the overload probability in a one-second
interval is so much higher than that in a one-minute interval? To answer this, consider a T -second
interval in which the number of requests NT is a Poisson (λT ) random variable while the server
capacity is cT hits. In the earlier problem parts, c = 5.477 hits/sec. We make the assumption that
the server system is reasonably well-engineered in that c > λ. (We will learn in Chapter 12 that to
assume otherwise means that the backlog of requests will grow without bound.) Further, assuming
T is fairly large, we use the CLT to estimate the probability of overload in a T -second interval as
P[N_T ≥ cT] = P[(N_T - λT)/√(λT) ≥ (cT - λT)/√(λT)] = Q(k√T),   (9)
where k = (c - λ)/√λ. As long as c > λ, the overload probability decreases with increasing T.
In fact, the overload probability goes rapidly to zero as T becomes large. The reason is that the
gap cT - λT between server capacity cT and the expected number of requests λT grows linearly
in T while the standard deviation of the number of requests grows proportional to √T. However,
one should add that the definition of a T-second overload is somewhat arbitrary. In fact, one can
argue that as T becomes large, the requirement for no overloads simply becomes less stringent. In
Chapter 12, we will learn techniques to analyze a system such as this webserver in terms of the
average backlog of requests and the average delay in serving a request. These statistics
won't depend on a particular time period T and perhaps better describe the system performance.
(a) The number of tests L needed to identify 500 acceptable circuits is a Pascal (k = 500, p =
0.8) random variable, which has expected value E[L] = k/ p = 625 tests.
(b) Let K denote the number of acceptable circuits in n = 600 tests. Since K is binomial
(n = 600, p = 0.8), E[K ] = np = 480 and Var[K ] = np(1 − p) = 96. Using the CLT, we
estimate the probability of finding at least 500 acceptable circuits as
P[K ≥ 500] = P[(K - 480)/√96 ≥ 20/√96] ≈ Q(20/√96) = 0.0206.   (1)
(c) Using M ATLAB, we observe that
1.0-binomialcdf(600,0.8,499)
ans =
0.0215
(d) We need to find the smallest value of n such that the binomial (n, p) random variable K satis-
fies P[K ≥ 500] ≥ 0.9. Since E[K ] = np and Var[K ] = np(1 − p), the CLT approximation
yields
P[K ≥ 500] = P[(K - np)/√(np(1-p)) ≥ (500 - np)/√(np(1-p))] ≈ 1 - Φ(z) = 0.90,   (2)
where z = (500 - np)/√(np(1-p)). It follows that 1 - Φ(z) = Φ(-z) ≥ 0.9, implying
z = -1.29. Since p = 0.8, we have that
np - 500 = 1.29 √(np(1-p)).   (3)
Equivalently, for p = 0.8, solving the quadratic equation
(n - 500/p)^2 = (1.29)^2 [(1-p)/p] n   (4)
we obtain n = 641.3. Thus we should test n = 642 circuits.
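The quadratic can be solved directly; this sketch rearranges (4) into standard form and takes the larger root:

```python
import math

p, z = 0.8, 1.29
# (n - 500/p)^2 = z^2 * ((1-p)/p) * n  rearranged into a*n^2 + b*n + c = 0
a = 1.0
b = -(2 * 500/p + z**2 * (1 - p)/p)
c = (500/p)**2
n = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)   # the larger root is the relevant one
print(n, math.ceil(n))  # about 641.3, so test 642 circuits
```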
this upper bound to the true probability. Note that for c = 1, 2 we use Table 3.1 and the fact that
Q(c) = 1 - Φ(c).
                 c=1      c=2      c=3      c=4           c=5
Chernoff bound   0.606    0.135    0.011    3.35 × 10^-4   3.73 × 10^-6   (3)
Q(c)             0.1587   0.0228   0.0013   3.17 × 10^-5   2.87 × 10^-7
We see that in this case, the Chernoff bound typically overestimates the true probability by roughly
a factor of 10.
Problem 6.8.4 Solution
This problem is solved completely in the solution to Quiz 6.8! We repeat that solution here. Since
W = X 1 + X 2 + X 3 is an Erlang (n = 3, λ = 1/2) random variable, Theorem 3.11 says that for
any w > 0, the CDF of W satisfies
F_W(w) = 1 - Σ_(k=0)^2 (λw)^k e^(-λw)/k!   (1)
For y ≥ 0, y^n is a nondecreasing function of y. This implies that the value of s that minimizes
e^(-sc) φ_X(s) also minimizes (e^(-sc) φ_X(s))^n. Hence
P[M_n(X) ≥ c] = P[W_n ≥ nc] ≤ min_(s≥0) (e^(-sc) φ_X(s))^n   (3)
A complication is that the event Wn < w is not the same as Wn ≤ w when w is an integer. In this
case, we observe that
Thus
P[B_n] = F_Wn(0.501 × 10^n) - F_Wn(⌈0.499 × 10^n⌉ - 1)   (4)
function pb=binomialcdftest(N);
pb=zeros(1,N);
for n=1:N,
w=[0.499 0.501]*10ˆn;
w(1)=ceil(w(1))-1;
pb(n)=diff(binomialcdf(10ˆn,0.5,w));
end
Unfortunately, on this user’s machine (a Windows XP laptop), the program fails for N = 4. The
problem, as noted earlier is that binomialcdf.m uses binomialpmf.m, which fails for a bi-
nomial (10000, p) random variable. Of course, your mileage may vary. A slightly better solution
is to use the bignomialcdf.m function, which is identical to binomialcdf.m except it calls
bignomialpmf.m rather than binomialpmf.m. This enables calculations for larger values of
n, although at some cost in numerical accuracy. Here is the code:
function pb=bignomialcdftest(N);
pb=zeros(1,N);
for n=1:N,
w=[0.499 0.501]*10ˆn;
w(1)=ceil(w(1))-1;
pb(n)=diff(bignomialcdf(10ˆn,0.5,w));
end
For comparison, here are the outputs of the two programs:
>> binomialcdftest(4)
ans =
0.2461 0.0796 0.0756 NaN
>> bignomialcdftest(6)
ans =
0.2461 0.0796 0.0756 0.1663 0.4750 0.9546
The result 0.9546 for n = 6 corresponds to the exact probability in Example 6.15 which used the
CLT to estimate the probability as 0.9544. Unfortunately for this user, for n = 7, bignomialcdftest(7)
failed.
From the forms of the functions, it is not likely to be apparent that f_X(x) and f_Y(x) are similar.
The following program plots f_X(x) and f_Y(x) for values of x within three standard deviations of
the expected value n. Below are sample outputs of erlangclt(n) for n = 4, 20, 100.
function df=erlangclt(n);
r=3*sqrt(n);
x=(n-r):(2*r)/100:n+r;
fx=erlangpdf(n,1,x);
fy=gausspdf(n,sqrt(n),x);
plot(x,fx,x,fy);
df=fx-fy;
In the graphs we will see that as n increases, the Erlang PDF becomes increasingly similar to
the Gaussian PDF of the same expected value and variance. This is not surprising since the Erlang
(n, λ) random variable is the sum of n of exponential random variables and the CLT says that the
Erlang CDF should converge to a Gaussian CDF as n gets large.
[Plots of f_X(x) and f_Y(x) for erlangclt(4), erlangclt(20), and erlangclt(100).]
On the other hand, the convergence should be viewed with some caution. For example, the
mode (the peak value) of the Erlang PDF occurs at x = n − 1 while the mode of the Gaussian PDF
is at x = n. This difference only appears to go away for n = 100 because the graph x-axis range
is expanding. More important, the two PDFs are quite different far away from the center of the
distribution. The Erlang PDF is always zero for x < 0 while the Gaussian PDF is always positive.
For large positive x, the two distributions do not have the same exponential decay. Thus it's not
a good idea to use the CLT to estimate probabilities of rare events such as {X > x} for extremely
large values of x.
function y=binomcltpmf(n,p)
x=-1:17;
xx=-1:0.05:17;
y=binomialpmf(n,p,x);
std=sqrt(n*p*(1-p));
clt=gausspdf(n*p,std,xx);
hold off;
pmfplot(x,y,’\it x’,’\it p_X(x) f_X(x)’);
hold on; plot(xx,clt); hold off;
[Plots comparing the binomial PMF p_X(x) and the Gaussian PDF f_X(x) for binomcltpmf(2,0.5), binomcltpmf(4,0.5), binomcltpmf(8,0.5), and binomcltpmf(16,0.5).]
To see why the values of the PDF and PMF are roughly the same, consider the Gaussian random
variable Y. For small Δ,
f_Y(x) ≈ [F_Y(x + Δ/2) - F_Y(x - Δ/2)] / Δ.   (1)
For Δ = 1, we obtain
f_Y(x) ≈ F_Y(x + 1/2) - F_Y(x - 1/2).   (2)
Since the Gaussian CDF is approximately the same as the CDF of the binomial (n, p) random
variable X , we observe for an integer x that
f Y (x) ≈ FX (x + 1/2) − FX (x − 1/2) = PX (x) . (3)
Although the equivalence in heights of the PMF and PDF is only an approximation, it can be useful
for checking the correctness of a result.
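This near-equality is easy to check numerically; the sketch below compares the binomial (16, 1/2) PMF against unit-width differences of the matching Gaussian CDF:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, p = 16, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
for x in range(4, 13):
    pmf = math.comb(n, x) * p**x * (1 - p)**(n - x)
    approx = phi((x + 0.5 - mu) / sigma) - phi((x - 0.5 - mu) / sigma)
    assert abs(pmf - approx) < 0.01
print("PMF heights match the unit-width Gaussian CDF differences")
```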
The resulting plot will be essentially identical to Figure 6.4. One final note, the command
set(h,’LineWidth’,0.25) is used to make the bars of the PMF thin enough to be resolved
individually.
[Plot of the PMF P_W(w) for 0 ≤ w ≤ 60.]
%sum2unif.m
sx=0:20;px=ones(1,21)/21;
sy=0:80;py=ones(1,81)/81;
[pw,sw]=sumfinitepmf(px,sx,py,sy);
h=pmfplot(sw,pw,’\it w’,’\it P_W(w)’);
set(h,’LineWidth’,0.25);
[Plot of the PMF P_W(w) for 0 ≤ w ≤ 100.]
Problem Solutions – Chapter 7
(a) Using Theorem 7.1, Var[M_n(X)] = σ_X^2/n. Realizing that σ_X^2 = 25, we obtain
Var[M_9(X)] = σ_X^2/9 = 25/9   (1)
(b)
(b)
P[X_1 ≥ 7] = 1 - P[X_1 ≤ 7]   (2)
           = 1 - F_X(7) = 1 - (1 - e^(-7/5)) = e^(-7/5) ≈ 0.247   (3)
Now the probability that M_9(X) > 7 can be approximated using the Central Limit Theorem
(CLT):
P[M_9(X) > 7] = 1 - P[(X_1 + . . . + X_9) ≤ 63] ≈ 1 - Φ((63 - 9µ_X)/(√9 σ_X)) = 1 - Φ(6/5)   (5)
(a) Since X 1 is a uniform random variable, it must have a uniform PDF over an interval [a, b].
From Appendix A, we can look up that µ X = (a + b)/2 and that Var[X ] = (b − a)2 /12.
Hence, given the mean and variance, we obtain the following equations for a and b.
Solving these equations yields a = 4 and b = 10 from which we can state the distribution of
X.
f_X(x) = 1/6 for 4 ≤ x ≤ 10; 0 otherwise   (2)
(c)
P[X_1 ≥ 9] = ∫_9^∞ f_X1(x) dx = ∫_9^10 (1/6) dx = 1/6   (4)
(d) The variance of M16 (X ) is much less than Var[X 1 ]. Hence, the PDF of M16 (X ) should be
much more concentrated about E[X ] than the PDF of X 1 . Thus we should expect P[M16 (X ) > 9]
to be much less than P[X 1 > 9].
Thus Var[Y] = 1/5 - (1/3)^2 = 4/45 and the sample mean M_n(Y) has standard error
e_n = √(4/(45n)).   (4)
(a) Since Yn = X 2n−1 + (−X 2n ), Theorem 6.1 says that the expected value of the difference is
(b) By Theorem 6.2, the variance of the difference between X 2n−1 and X 2n is
(c) Each Yn is the difference of two samples of X that are independent of the samples used by
any other Ym . Thus Y1 , Y2 , . . . is an iid random sequence. By Theorem 7.1, the mean and
variance of Mn (Y ) are
P[|W - E[W]| ≥ 200] ≤ Var[W]/200^2 ≤ 100^2/200^2 = 0.25   (1)
P[|X - E[X]| ≥ c] ≤ σ_X^2/c^2   (1)
Choosing c = kσ_X, we obtain
P[|X - E[X]| ≥ kσ_X] ≤ 1/k^2   (2)
The actual probability the Gaussian random variable Y is more than k standard deviations from its
expected value is
The following table compares the upper bound and the true probability:
The Chebyshev bound gets increasingly weak as k goes up. As an example, for k = 4, the bound
exceeds the true probability by a factor of 1,000 while for k = 5 the bound exceeds the actual
probability by a factor of nearly 100,000.
Problem 7.2.4 Solution
Let X 1 denote the number of rolls until the first occurrence of snake eyes. Similarly, let X i denote
the number of additional rolls for the ith occurrence. Since each roll is snake eyes with probability
p = 1/36, X 1 , X 2 and X 3 are iid geometric ( p) random variables. Thus
E[X_i] = 1/p = 36,      Var[X_i] = (1-p)/p^2 = 1260.   (1)
The number of rolls needed for the third occurrence of snake eyes is the independent sum R = X_1 +
X_2 + X_3. Thus,
f_V2(v) = { v, 0 ≤ v ≤ 1;  2 - v, 1 < v ≤ 2;  0, otherwise }   (1)
The PDF of V is the convolution integral
' ∞
f V (v) = f V2 (y) f Y3 (v − y) dy (2)
−∞
' 1 ' 2
= y f Y3 (v − y) dy + (2 − y) f Y3 (v − y) dy. (3)
0 1
Evaluation of these integrals depends on v through the function
f_Y3(v - y) = 1 for v - 1 < y < v; 0 otherwise   (4)
To compute the convolution, it is helpful to depict the three distinct cases 0 ≤ v < 1, 1 ≤ v < 2,
and 2 ≤ v < 3. In each case, the square "pulse" is f_Y3(v - y) and the triangular pulse is f_V2(y).
From the graphs, we can compute the convolution for each case:
0 ≤ v < 1:  f_V3(v) = ∫_0^v y dy = v^2/2   (5)
1 ≤ v < 2:  f_V3(v) = ∫_(v-1)^1 y dy + ∫_1^v (2 - y) dy = -v^2 + 3v - 3/2   (6)
2 ≤ v < 3:  f_V3(v) = ∫_(v-1)^2 (2 - y) dy = (3 - v)^2/2   (7)
To complete the problem, we use Theorem 3.20 to observe that W = 30V_3 is the sum of three iid
uniform (0, 30) random variables. From Theorem 3.19,
f_W(w) = (1/30) f_V3(w/30) = { (w/30)^2/60, 0 ≤ w < 30;  [-(w/30)^2 + 3(w/30) - 3/2]/30, 30 ≤ w < 60;  [3 - (w/30)]^2/60, 60 ≤ w < 90;  0, otherwise }   (8)
For comparison, the Markov inequality indicated that P[W ≥ 75] ≤ 3/5 and the Chebyshev in-
equality showed that P[W ≥ 75] ≤ 1/4. As we see, both inequalities are quite weak in this case.
(a) By the Markov inequality,
P[R ≥ 250] ≤ E[R]/250 = 54/125 = 0.432.   (2)
Thus the Markov and Chebyshev inequalities are valid bounds but not good estimates of
P[R ≥ 250].
P[µ - σ ≤ Y ≤ µ + σ] = P[-σ ≤ Y - µ ≤ σ]   (1)
                     = P[-1 ≤ (Y - µ)/σ ≤ 1]   (2)
                     = Φ(1) - Φ(-1) = 2Φ(1) - 1 = 0.6827.   (3)
Note that Y can be any Gaussian random variable, including, for example, Mn (X ) when X is Gaus-
sian. When X is not Gaussian, the same claim holds to the extent that the central limit theorem
promises that Mn (X ) is nearly Gaussian for large n.
It follows for n ≥ n_0 that
P[|R̂_n - r| ≥ ε] ≤ P[|R̂_n - E[R̂_n]| + ε/2 ≥ ε] = P[|R̂_n - E[R̂_n]| ≥ ε/2]   (3)
(a) Since the expectation of a sum equals the sum of the expectations also holds for vectors,
E[M(n)] = (1/n) Σ_(i=1)^n E[X(i)] = (1/n) Σ_(i=1)^n µ_X = µ_X.   (1)
(b) The jth component of M(n) is M_j(n) = (1/n) Σ_(i=1)^n X_j(i), which is just the sample mean of X_j.
Defining A_j = {|M_j(n) - µ_j| ≥ c}, we observe that
P[max_(j=1,...,k) |M_j(n) - µ_j| ≥ c] = P[A_1 ∪ A_2 ∪ · · · ∪ A_k].   (2)
Problem 7.3.5 Solution
Note that we can write Y_k as
Y_k = [(X_(2k-1) - X_(2k))/2]^2 + [(X_(2k) - X_(2k-1))/2]^2 = (X_(2k) - X_(2k-1))^2 / 2   (1)
Hence,
E[Y_k] = (1/2) E[X_(2k)^2 - 2X_(2k)X_(2k-1) + X_(2k-1)^2] = E[X^2] - (E[X])^2 = Var[X]   (2)
Next we observe that Y1 , Y2 , . . . is an iid random sequence. If this independence is not obvious,
consider that Y1 is a function of X 1 and X 2 , Y2 is a function of X 3 and X 4 , and so on. Since
X_1, X_2, . . . is an iid sequence, Y_1, Y_2, . . . is an iid sequence. Hence, E[M_n(Y)] = E[Y] = Var[X],
implying Mn (Y ) is an unbiased estimator of Var[X ]. We can use Theorem 7.5 to prove that Mn (Y )
is consistent if we show that Var[Y ] is finite. Since Var[Y ] ≤ E[Y 2 ], it is sufficient to prove that
E[Y 2 ] < ∞. Note that
Y_k^2 = [X_(2k)^4 - 4X_(2k)^3 X_(2k-1) + 6X_(2k)^2 X_(2k-1)^2 - 4X_(2k) X_(2k-1)^3 + X_(2k-1)^4] / 4   (3)
Taking expectations yields
E[Y_k^2] = (1/2) E[X^4] - 2 E[X^3] E[X] + (3/2) (E[X^2])^2   (4)
Hence, if the first four moments of X are finite, then Var[Y ] ≤ E[Y 2 ] < ∞. By Theorem 7.5, the
sequence Mn (Y ) is consistent.
Var[X_1 + · · · + X_n] = Σ_(i=1)^n Var[X_i] + 2 Σ_(i=1)^(n-1) Σ_(j=i+1)^n Cov[X_i, X_j]   (1)
Note that Var[X_i] = σ^2 and for j > i, Cov[X_i, X_j] = σ^2 a^(j-i). This implies
Var[X_1 + · · · + X_n] = nσ^2 + 2σ^2 Σ_(i=1)^(n-1) Σ_(j=i+1)^n a^(j-i)   (2)
                      = nσ^2 + 2σ^2 Σ_(i=1)^(n-1) (a + a^2 + · · · + a^(n-i))   (3)
                      = nσ^2 + [2aσ^2/(1-a)] Σ_(i=1)^(n-1) (1 - a^(n-i))   (4)
With some more algebra, we obtain
Var[X_1 + · · · + X_n] = nσ^2 + [2aσ^2/(1-a)](n - 1) - [2aσ^2/(1-a)](a + a^2 + · · · + a^(n-1))   (5)
                      = n(1+a)σ^2/(1-a) - 2aσ^2/(1-a) - 2σ^2 [a/(1-a)]^2 (1 - a^(n-1))   (6)
(b) Since the expected value of a sum equals the sum of the expected values,
E[M(X_1, . . . , X_n)] = (E[X_1] + · · · + E[X_n])/n = µ   (8)
The variance of M(X_1, . . . , X_n) is
Var[M(X_1, . . . , X_n)] = Var[X_1 + · · · + X_n]/n^2 ≤ σ^2(1+a)/(n(1-a))   (9)
Applying the Chebyshev inequality to M(X_1, . . . , X_n) yields
P[|M(X_1, . . . , X_n) - µ| ≥ c] ≤ Var[M(X_1, . . . , X_n)]/c^2 ≤ σ^2(1+a)/(n(1-a)c^2)   (10)
(c) Taking the limit as n approaches infinity of the bound derived in part (b) yields
lim_(n→∞) P[|M(X_1, . . . , X_n) - µ| ≥ c] ≤ lim_(n→∞) σ^2(1+a)/(n(1-a)c^2) = 0   (11)
Thus
lim_(n→∞) P[|M(X_1, . . . , X_n) - µ| ≥ c] = 0   (12)
(a) Since the expectation of the sum equals the sum of the expectations,
E[R̂(n)] = (1/n) Σ_{m=1}^{n} E[X(m)X′(m)] = (1/n) Σ_{m=1}^{n} R = R.    (1)
(b) This proof follows the method used to solve Problem 7.3.4. The i, jth element of R̂(n) is
R̂_{i,j}(n) = (1/n) Σ_{m=1}^{n} X_i(m)X_j(m), which is just the sample mean of X_i X_j . Defining the event
A_{i,j} = { |R̂_{i,j}(n) - E[X_i X_j]| ≥ c },    (2)
we observe that
P[ max_{i,j} |R̂_{i,j}(n) - E[X_i X_j]| ≥ c ] = P[ ∪_{i,j} A_{i,j} ].    (3)
Applying the Chebyshev inequality to the sample mean R̂_{i,j}(n) yields
P[A_{i,j}] ≤ Var[R̂_{i,j}(n)] / c^2 = Var[X_i X_j] / (nc^2).    (4)
By the union bound,
P[ max_{i,j} |R̂_{i,j}(n) - E[X_i X_j]| ≥ c ] ≤ Σ_{i,j} P[A_{i,j}] ≤ (1/(nc^2)) Σ_{i,j} Var[X_i X_j]    (5)
By the result of Problem 4.11.8, X i X j , the product of jointly Gaussian random variables, has
finite variance. Thus
Σ_{i,j} Var[X_i X_j] = Σ_{i=1}^{k} Σ_{j=1}^{k} Var[X_i X_j] ≤ k^2 max_{i,j} Var[X_i X_j] < ∞.    (6)
It follows that
lim_{n→∞} P[ max_{i,j} |R̂_{i,j}(n) - E[X_i X_j]| ≥ c ] ≤ lim_{n→∞} k^2 max_{i,j} Var[X_i X_j] / (nc^2) = 0    (7)
α = σ_X^2 / (90(.05)^2) = .09 / (90(.05)^2) = 0.4    (3)
(c) Now we wish to find the value of n such that P[|Mn (X ) − PX (1)| ≥ .03] ≤ .01. From the
Chebyshev inequality, we write 0.01 = σ_X^2/[n(.03)^2]. Solving for n yields n = 10,000.
Problem 7.4.2 Solution
X 1 , X 2 , . . . are iid random variables each with mean 75 and standard deviation 15.
(a) We would like to find the value of n such that P[74 ≤ M_n(X) ≤ 76] = 0.99. When we know only the mean and variance of X_i , our only real tool is the Chebyshev inequality, which says that
P[ |M_n(X) - 75| ≥ 1 ] ≤ Var[M_n(X)] = 225/n.
Requiring 225/n ≤ 0.01 yields n ≥ 22,500 samples.
(b) If each X i is a Gaussian, the sample mean, Mn (X ) will also be Gaussian with mean and
variance
E [Mn (X )] = E [X ] = 75 (4)
Var [Mn (X )] = Var [X ] /n = 225/n (5)
In this case,
P[74 ≤ M_n(X) ≤ 76] = Φ((76 - µ)/σ) - Φ((74 - µ)/σ)    (6)
= Φ(√n/15) - Φ(-√n/15)    (7)
= 2Φ(√n/15) - 1 = 0.99    (8)
Thus, n = 1,521.
Even under the Gaussian assumption, the number of samples n is so large that, even if the X_i
are not Gaussian, the sample mean may be approximated by a Gaussian. Hence, about 1500 samples
probably is about right. However, in the absence of any information about the PDF of X i beyond
the mean and variance, we cannot make any guarantees stronger than that given by the Chebyshev
inequality.
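The table-free version of the calculation in part (b) can be reproduced in Python (sketch of mine; the text looks up the quantile in a table, reading Φ(2.60) ≈ 0.995 to get n = 1,521, while the exact 0.995 quantile 2.576 gives a slightly smaller n).

```python
import math

# Solve 2*Phi(sqrt(n)/15) - 1 = 0.99 for n, i.e. Phi(sqrt(n)/15) = 0.995.
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

# bisect for the quantile z with Phi(z) = 0.995
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if Phi(mid) < 0.995:
        lo = mid
    else:
        hi = mid
z = (lo + hi) / 2                 # about 2.576
n = math.ceil((15 * z) ** 2)      # about 1493 with the exact quantile
```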
Var[P̂_n(A)] = (1/n^2) Σ_{i=1}^{n} Var[X_{A,i}] = P[A](1 - P[A]) / n.    (2)
(c) Since P̂100 (A) = M100 (X A ), we can use Theorem 7.12(b) to write
P[ |P̂_100(A) - P[A]| < c ] ≥ 1 - Var[X_A]/(100c^2) = 1 - 0.16/(100c^2) = 1 - α.    (3)
For c = 0.1, α = 0.16/[100(0.1)^2 ] = 0.16. Thus, with 100 samples, our confidence coefficient is 1 − α = 0.84.
(d) In this case, the number of samples n is unknown. Once again, we use Theorem 7.12(b) to
write
P[ |P̂_n(A) - P[A]| < c ] ≥ 1 - Var[X_A]/(nc^2) = 1 - 0.16/(nc^2) = 1 - α.    (4)
For c = 0.1, we have confidence coefficient 1 − α = 0.95 if α = 0.16/[n(0.1)2 ] = 0.05, or
n = 320.
Since P̂n (A) = Mn (X A ) and E[Mn (X A )] = P[A], we can use Theorem 7.12(b) to write
P[ |P̂_n(A) - P[A]| < 0.05 ] ≥ 1 - Var[X_A]/(n(0.05)^2).    (2)
Note that Var[X A ] = P[A](1 − P[A]) ≤ 0.25. Thus for confidence coefficient 0.9, we require that
1 - Var[X_A]/(n(0.05)^2) ≥ 1 - 0.25/(n(0.05)^2) ≥ 0.9.    (3)
This implies n ≥ 1,000 samples are needed.
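The arithmetic is quick to verify (Python sketch of mine):

```python
# Worst-case Var[X_A] <= 0.25; confidence 0.9 at accuracy c = 0.05
# requires 0.25 / (n * c^2) <= 0.1, i.e. n >= 1000.
c, alpha = 0.05, 0.1
n_min = 0.25 / (c ** 2 * alpha)   # about 1000
```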
Problem 7.4.6 Solution
Both questions can be answered using the following equation from Example 7.6:
P[ |P̂_n(A) - P[A]| ≥ c ] ≤ P[A](1 - P[A]) / (nc^2)    (1)
The unusual part of this problem is that we are given the true value of P[A]. Since P[A] = 0.01,
we can write
P[ |P̂_n(A) - P[A]| ≥ c ] ≤ 0.0099 / (nc^2)    (2)
(a) In this part, we meet the requirement by choosing c = 0.001 yielding
P[ |P̂_n(A) - P[A]| ≥ 0.001 ] ≤ 9900 / n    (3)
Thus to have confidence level 0.01, we require that 9900/n ≤ 0.01. This requires n ≥
990,000.
(b) In this case, we meet the requirement by choosing c = 10−3 P[A] = 10−5 . This implies
P[ |P̂_n(A) - P[A]| ≥ c ] ≤ P[A](1 - P[A]) / (nc^2) = 0.0099 / (n · 10^{-10}) = 9.9 × 10^7 / n    (4)
The confidence level 0.01 is met if 9.9 × 107 /n = 0.01 or n = 9.9 × 109 .
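Both sample sizes follow from the same one-line computation (Python sketch of mine):

```python
# Sample sizes for Problem 7.4.6: P[A] = 0.01, bound 0.0099 / (n c^2) <= 0.01.
pa = 0.01
bound_num = pa * (1 - pa)                       # 0.0099
n_a = bound_num / (0.001 ** 2 * 0.01)           # part (a): c = 0.001
n_b = bound_num / ((1e-5) ** 2 * 0.01)          # part (b): c = 1e-3 * P[A]
```

The relative-accuracy requirement in part (b) is four orders of magnitude more expensive, exactly because c shrinks by a factor of 100 and enters the bound squared.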
Thus
P[ |P̂_n(E) - P[E]| ≥ c ] ≤ Var[X_E] / (nc^2) ≤ 1 / (4nc^2).    (3)
nc nc
Mn (X ) − c ≤ α ≤ Mn (X ) + c. (1)
The confidence coefficient of the estimate based on n samples is
P [Mn (X ) − c ≤ α ≤ Mn (X ) + c] = P [α − c ≤ Mn (X ) ≤ α + c] (2)
= P [−c ≤ Mn (X ) − α ≤ c] . (3)
Since Var[Mn (X )] = Var[X ]/n = 1/n, the 0.9 confidence interval shrinks with increasing n. In
particular, c = cn will be a decreasing sequence. Using a Central Limit Theorem approximation, a
0.9 confidence implies
0.9 = P[ -c_n/√(1/n) ≤ (M_n(X) - α)/√(1/n) ≤ c_n/√(1/n) ]    (4)
= Φ(c_n √n) - Φ(-c_n √n) = 2Φ(c_n √n) - 1.    (5)
Equivalently, Φ(c_n √n) = 0.95 or c_n = 1.65/√n.
Thus, as a function of the number of samples n, we plot three functions: the sample mean M_n(X), and the upper limit M_n(X) + 1.65/√n and lower limit M_n(X) - 1.65/√n of the 0.9 confidence interval. We use the MATLAB function poissonmeanseq(n) to generate these sequences for n sample values.
function M=poissonmeanseq(n);
x=poissonrv(1,n);
nn=(1:n)’;
M=cumsum(x)./nn;
r=(1.65)./sqrt(nn);
plot(nn,M,nn,M+r,nn,M-r);
[Figure: sample-mean traces M_n(X) versus n together with the 0.9 confidence limits, for poissonmeanseq(60) and poissonmeanseq(600).]
Thus the probability the sample mean is within one standard error of ( p = 1/2) is
p_n = P[ n/2 - √n/2 ≤ K_n ≤ n/2 + √n/2 ]    (1)
= P[ K_n ≤ n/2 + √n/2 ] - P[ K_n < n/2 - √n/2 ]    (2)
= F_{K_n}(n/2 + √n/2) - F_{K_n}(⌈n/2 - √n/2⌉ - 1)    (3)
Here is a M ATLAB function that graphs pn as a function of n for N steps alongside the output graph
for bernoullistderr(50).
function p=bernoullistderr(N);
p=zeros(1,N);
for n=1:N,
    r=[ceil((n-sqrt(n))/2)-1; ...
       (n+sqrt(n))/2];
    p(n)=diff(binomialcdf(n,0.5,r));
end
plot(1:N,p);
ylabel('\it p_n');
xlabel('\it n');

[Figure: output of bernoullistderr(50); p_n versus n, oscillating within roughly (0.5, 1).]
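If the text's binomialcdf helper is not at hand, the same p_n can be computed with exact binomial sums; the Python rendering below is mine.

```python
import math

# p_n = P[n/2 - sqrt(n)/2 <= K_n <= n/2 + sqrt(n)/2] for a fair coin,
# via the exact binomial CDF (no binomialcdf helper needed).
def binom_cdf(n, p, k):
    k = math.floor(k)
    if k < 0:
        return 0.0
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1))

def p_n(n):
    hi = (n + math.sqrt(n)) / 2
    lo = math.ceil((n - math.sqrt(n)) / 2) - 1
    return binom_cdf(n, 0.5, hi) - binom_cdf(n, 0.5, lo)
```

For example, p_n(4) = P[1 ≤ K₄ ≤ 3] = 14/16 = 0.875.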
M_n(X) = (1/(nλ)) Σ_{i=1}^{n} Y_i = (1/λ) M_n(Y)    (1)
We can conclude that
λ̂ = λ / M_n(Y)    λ̃ = λ / V_n(Y)    (3)
For λ = 1, the estimators λ̂ and λ̃ are just scaled versions of the estimators for the case λ = 1.
Hence it is sufficient to consider only the λ = 1 case. The function z=lamest(n,m) returns the
estimation errors for m trials of each estimator where each trial uses n iid exponential (1) samples.
[Scatter plot: error pairs (Ẑ , Z̃ ) with Ẑ = z(1,i) on the x-axis (roughly -0.1 to 0.15) and Z̃ = z(2,i) on the y-axis (roughly -0.2 to 0.1).]
In the scatter plot, each diamond marks an independent pair ( Ẑ , Z̃ ) where Ẑ is plotted on the x-axis
and Z̃ is plotted on the y-axis. (Although it is outside the scope of this solution, it is interesting
to note that the errors Ẑ and Z̃ appear to be positively correlated.) From the plot, it may not be
obvious that one estimator is better than the other. However, by reading the axis ticks carefully, one
can observe that it appears that typical values for Ẑ are in the interval (−0.05, 0.05) while typical
values for Z̃ are in the interval (−0.1, 0.1). This suggests that Ẑ may be superior. To verify this
observation, we calculate the sample mean of the squared errors
M_m(Ẑ^2) = (1/m) Σ_{i=1}^{m} Ẑ_i^2    M_m(Z̃^2) = (1/m) Σ_{i=1}^{m} Z̃_i^2    (4)
From our M ATLAB experiment with m = 1,000 trials, we calculate
>> sum(z.ˆ2,2)/1000
ans =
0.0010
0.0021
That is, M1,000 ( Ẑ 2 ) = 0.0010 and M1,000 ( Z̃ 2 ) = 0.0021. In fact, one can show (with a lot of work)
for large m that
and that
lim_{m→∞} M_m(Z̃^2) / M_m(Ẑ^2) = 2.    (6)
In short, the mean squared error of the λ̃ estimator is twice that of the λ̂ estimator.
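A seeded Monte Carlo run reproduces this factor-of-two gap; the Python sketch below is mine (the text's experiment uses MATLAB with m = 1,000 trials of n samples each, so the numbers here differ in scale but tell the same story).

```python
import math
import random

# Compare lambda-hat = 1/Mn(X) with lambda-tilde = 1/(sample std) for
# exponential(1) data; asymptotically the second MSE is twice the first.
random.seed(1)
m, n = 500, 1000
mse_hat = mse_tilde = 0.0
for _ in range(m):
    x = [random.expovariate(1.0) for _ in range(n)]
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
    mse_hat += (1 / mean - 1) ** 2 / m
    mse_tilde += (1 / math.sqrt(var) - 1) ** 2 / m
```

With this seed the ratio mse_tilde / mse_hat lands near 2, as the delta-method calculation predicts.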
R̂(n) = (1/n) Σ_{m=1}^{n} X(m)X′(m),    (1)
(In the original printing, 0.05 was 0.01 but that requirement demanded that n be so large that most
installations of M ATLAB would grind to a halt on the calculations.)
The M ATLAB program uses a matrix algebra identity that may (or may not) be familiar. For a
matrix
X = [ x(1) x(2) · · · x(n) ],    (3)
with columns x(i), we can write
XX′ = Σ_{i=1}^{n} x(i)x′(i).    (4)
The function diagtest estimates p(n) as the fraction of t trials for which the threshold 0.05
is exceeded. In addition, if the input n to diagtest is a vector, then the experiment is repeated
for each value in the vector n.
The following commands estimate p(n) for n ∈ {10, 100, 1000, 10000} using t = 2000 trials:
n=[10 100 1000 10000];
p=diagtest(n,2000);
The output is
p=
1.0000 1.0000 1.0000 0.0035
We see that p(n) goes from roughly 1 to almost 0 in going from n = 1,000 to n = 10,000. To
investigate this transition more carefully, we execute the commands
nn=1000:500:10000;
p=diagtest(nn,2000);
The output is shown in the following graph. We use a semilog plot to emphasize differences when
p(n) is close to zero.
[Semilog plot: p(n) versus n for n = 1000 to 10000; p(n) falls from about 10^0 to below 10^-3.]
Beyond n = 1,000, the probability p(n) declines rapidly. The “bumpiness” of the graph for large
n occurs because the probability p(n) is small enough that out of 2,000 trials, the 0.05 threshold is
exceeded only a few times.
Note that if x has dimension greater than 10, then the value of n needed to ensure that p(n) is
small would increase.
• p > 0, or
• p = 0 but the Chebyshev inequality isn’t a sufficiently powerful technique to verify this fact.
To resolve whether p = 0 (and the sample mean converges to the expected value) one can spend
time trying to prove either p = 0 or p > 0. At this point, we try some simulation experiments to
see if the experimental evidence points one way or the other.
As requested by the problem, we implement a M ATLAB function samplemeantest(n,a)
to simulate one hundred traces of the sample mean when E[X ] = a. Each trace is a length n
sequence M1 (X ), M2 (X ), . . . , Mn (X ).
function mx=samplemeantest(n,a);
u=rand(n,100);
x=a-2+(1./sqrt(1-u));
d=((1:n)')*ones(1,100);
mx=cumsum(x)./d;
plot(mx);
xlabel('\it n'); ylabel('\it M_n(X)');
axis([0 n a-1 a+1]);

The n × 100 matrix x consists of iid samples of X. Taking cumulative sums along each column of x, and dividing row i by i, each column of mx is a length n sample mean trace. We then plot the traces.
The following graph was generated by samplemeantest(1000,5):
[Plot: output of samplemeantest(1000,5); 100 sample-mean traces M_n(X) versus n, with the y-axis limited to (4, 6).]
Frankly, it is difficult to draw strong conclusions from the graph. If the sample sequences Mn (X )
are converging to E[X ], the convergence is fairly slow. Even after averaging 1,000 samples, typical
values for the sample mean appear to range from a − 0.5 to a + 0.5. There may also be outlier
sequences which are still off the charts since we truncated the y-axis range. On the other hand, the
sample mean sequences do not appear to be diverging (which is also possible since Var[X ] = ∞).
Note the above graph was generated using 105 sample values. Repeating the experiment with more
samples, say samplemeantest(10000,5), will yield a similarly inconclusive result. Even if
your version of M ATLAB can support the generation of 100 times as many samples, you won’t know
for sure whether the sample mean sequence always converges. On the other hand, the experiment
is probably enough that if you pursue the analysis, you should start by trying to prove that p = 0.
(This will make a fun problem for the third edition!)
Problem Solutions – Chapter 8
to determine if the coin we’ve been flipping is indeed a fair one. We would like to find the
value of c, which will determine the upper and lower limits on how many heads we can get
away from the expected number out of 100 flips and still accept our hypothesis. Under our
fair coin hypothesis, the expected number of heads, and the standard deviation of the process
are
E[K] = 50,    σ_K = √(100 · (1/2) · (1/2)) = 5.    (2)
Now in order to find c we make use of the central limit theorem and divide the above inequal-
ity through by σ K to arrive at
P[ |K - E[K]| / σ_K > c/σ_K ] = 0.05    (3)
Taking the complement, we get
P[ -c/σ_K ≤ (K - E[K])/σ_K ≤ c/σ_K ] = 0.95    (4)
Using the Central Limit Theorem we can write
Φ(c/σ_K) - Φ(-c/σ_K) = 2Φ(c/σ_K) - 1 = 0.95    (5)
This implies Φ(c/σ_K) = 0.975 or c/5 = 1.96. That is, c = 9.8 flips. So we see that if we observe more than 50 + 10 = 60 or fewer than 50 − 10 = 40 heads, then with significance level α ≈ 0.05 we should reject the hypothesis that the coin is fair.
(b) Now we wish to develop a test of the form
P [K > c] = 0.01 (6)
Thus we need to find the value of c that makes the above probability true. This value will tell
us that if we observe more than c heads, then with significance level α = 0.01, we should
reject the hypothesis that the coin is fair. To find this value of c we look to evaluate the CDF
F_K(k) = Σ_{i=0}^{k} C(100, i) (1/2)^{100}.    (7)
Computation reveals that c ≈ 62 flips. So if we observe 62 or more heads, then with a significance level of 0.01 we should reject the fair coin hypothesis. Another way to obtain
this result is to use a Central Limit Theorem approximation. First, we express our rejection
region in terms of a zero mean, unit variance random variable.
P[K > c] = 1 - P[K ≤ c] = 1 - P[ (K - E[K])/σ_K ≤ (c - E[K])/σ_K ] = 0.01    (8)
Since E[K ] = 50 and σ K = 5, the CLT approximation is
P[K > c] ≈ 1 - Φ((c - 50)/5) = 0.01    (9)
From Table 3.1, we have Φ((c − 50)/5) = 0.99 for (c − 50)/5 = 2.35, or c = 61.75. Once again, we see that we reject
the hypothesis if we observe 62 or more heads.
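Both parts can be reproduced numerically; the Python sketch below is mine (the text uses a Φ table and a binomial CDF computation instead).

```python
import math

# Part (a): c = sigma_K * z where Phi(z) = 0.975 (bisection for z).
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if Phi(mid) < 0.975:
        lo = mid
    else:
        hi = mid
c = 5 * (lo + hi) / 2        # approximately 9.8

# Part (b): exact binomial tail P[K >= 62] for K binomial (100, 1/2)
pmf = [math.comb(100, k) * 0.5 ** 100 for k in range(101)]
p_ge_62 = sum(pmf[62:])      # close to the target significance 0.01
```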
Problem 8.1.4 Solution
(a) The rejection region is R = {T > t0 }. The duration of a voice call has exponential PDF
f_T(t) = (1/3)e^{-t/3} for t ≥ 0, and 0 otherwise.    (1)
The significance level of the test is
α = P[T > t_0] = ∫_{t_0}^{∞} f_T(t) dt = e^{-t_0/3}.    (2)
Comment: For α = 0.01, keep in mind that there is a one percent probability that a normal factory
will fail the test. That is, a test failure is quite unlikely if the factory is operating normally.
Problem 8.2.1 Solution
For the MAP test, we must choose acceptance regions A0 and A1 for the two hypotheses H0 and
H1 . From Theorem 8.2, the MAP rule is
n ∈ A_0 if P_{N|H_0}(n) / P_{N|H_1}(n) ≥ P[H_1] / P[H_0];    n ∈ A_1 otherwise.    (1)
Since PN |Hi (n) = λin e−λi /n!, the MAP rule becomes
n ∈ A_0 if (λ_0/λ_1)^n e^{-(λ_0-λ_1)} ≥ P[H_1]/P[H_0];    n ∈ A_1 otherwise.    (2)
Taking logarithms and assuming λ1 > λ0 yields the final form of the MAP rule
n ∈ A_0 if n ≤ n* = (λ_1 - λ_0 + ln(P[H_0]/P[H_1])) / ln(λ_1/λ_0);    n ∈ A_1 otherwise.    (3)
From the MAP rule, we can get the ML rule by setting the a priori probabilities to be equal. This
yields the ML rule
n ∈ A_0 if n ≤ n* = (λ_1 - λ_0) / ln(λ_1/λ_0);    n ∈ A_1 otherwise.    (4)
(d) The ML rule is the same as the MAP rule when P[H0 ] = P[H1 ]. When P[H0 ] > P[H1 ],
the MAP rule (which minimizes the probability of an error) should enlarge the A0 acceptance
region. Thus we would expect t_MAP > t_ML.
(f) For a given threshold t_0, we learned in parts (a) and (b) that P_FA = e^{-t_0/3} and P_MISS = 1 - e^{-t_0/µ_D}.
The M ATLAB program rocvoicedataout graphs both receiver operating curves. The
program and the resulting ROC are shown here.
t=0:0.05:30;
PFA= exp(-t/3);
PMISS6= 1-exp(-t/6);
PMISS10=1-exp(-t/10);
plot(PFA,PMISS6,PFA,PMISS10);
legend('\mu_D=6','\mu_D=10');
xlabel('\itP_{\rmFA}');
ylabel('\itP_{\rmMISS}');

[ROC plot: P_MISS versus P_FA for µ_D = 6 and µ_D = 10.]
As one might expect, larger µ D resulted in reduced PMISS for the same PFA .
Cancelling common factors and taking the logarithm, we find that n ∈ A0 if
n ln(a_0/a_1) ≥ (a_0 - a_1) + ln γ.    (3)
Since ln(a0 /a1 ) < 0, dividing through reverses the inequality and shows that
n ∈ A_0 if n ≤ n* = ((a_0 - a_1) + ln γ) / ln(a_0/a_1) = ((a_1 - a_0) - ln γ) / ln(a_1/a_0);    n ∈ A_1 otherwise    (4)
However, we still need to determine the constant γ . In fact, it is easier to work with the threshold
n ∗ directly. Note that L(n) < γ if and only if n > n ∗ . Thus we choose the smallest n ∗ such that
P[N > n* | H_0] = Σ_{n > n*} P_{N|H_0}(n) = α ≤ 10^{-6}.    (5)
To find n*, a reasonable approach is to use a Central Limit Theorem approximation since, given H_0, N is a Poisson (1,000) random variable, which has the same PDF as the sum of 1,000 independent Poisson (1) random variables. Given H_0, N has expected value a_0 and variance a_0.
From the CLT,
P[N > n* | H_0] = P[ (N - a_0)/√a_0 > (n* - a_0)/√a_0 | H_0 ] ≈ Q((n* - a_0)/√a_0) ≤ 10^{-6}.    (6)
From Table 3.2, Q(4.75) = 1.02 × 10−6 and Q(4.76) < 10−6 , implying
n* = a_0 + 4.76 √a_0 = 1150.5.    (7)
On the other hand, perhaps the CLT should be used with some caution since α = 10^{-6} implies we are using the CLT approximation far from the center of the distribution. In fact, we can check our answer using the poissoncdf function:
>> nstar=[1150 1151 1152 1153 1154 1155];
>> (1.0-poissoncdf(1000,nstar))’
ans =
1.0e-005 *
0.1644 0.1420 0.1225 0.1056 0.0910 0.0783
>>
Thus we see that n* = 1154. Using this threshold, the miss probability is
P[N ≤ n* | H_1] = P[N ≤ 1154 | H_1] = poissoncdf(1300,1154) = 1.98 × 10^{-5}.    (8)
Keep in mind that this is the smallest possible PMISS subject to the constraint that PFA ≤ 10−6 .
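The poissoncdf check can be reproduced without MATLAB; the pure-Python sketch below is mine and sums the Poisson PMF in log space to avoid overflow at mean 1,000.

```python
import math

# P[N > nstar] for N Poisson with mean lam, summing the PMF tail.
def poisson_sf(lam, nstar, terms=2000):
    total = 0.0
    for n in range(nstar + 1, nstar + terms):
        total += math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))
    return total

sf_1153 = poisson_sf(1000.0, 1153)   # about 1.06e-6, still above 1e-6
sf_1154 = poisson_sf(1000.0, 1154)   # about 0.91e-6, below 1e-6
```

These match the MATLAB output above and confirm the choice n* = 1154.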
(a) Given H0 , X is Gaussian (0, 1). Given H1 , X is Gaussian (4, 1). From Theorem 8.2, the
MAP hypothesis test is
x ∈ A_0 if f_{X|H_0}(x) / f_{X|H_1}(x) = e^{-x^2/2} / e^{-(x-4)^2/2} ≥ P[H_1] / P[H_0];    x ∈ A_1 otherwise.    (1)
Since a target is present with probability P[H1 ] = 0.01, the MAP rule simplifies to
x ∈ A_0 if x ≤ x_MAP = 2 - (1/4) ln(P[H_1]/P[H_0]) = 3.15;    x ∈ A_1 otherwise.    (2)
(b) The cost of a false alarm is C10 = 1 unit while the cost of a miss is C01 = 104 units. From
Theorem 8.3, we see that the Minimum Cost test is the same as the MAP test except the
P[H_0] is replaced by C_10 P[H_0] and P[H_1] is replaced by C_01 P[H_1]. Thus, we see from the MAP test that the minimum cost test is
x ∈ A_0 if x ≤ x_MC = 2 - (1/4) ln(C_01 P[H_1] / (C_10 P[H_0])) = 0.846;    x ∈ A_1 otherwise.    (7)
Because the cost of a miss is so high, the minimum cost test greatly reduces the miss proba-
bility, resulting in a much lower average cost than the MAP test.
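Both thresholds follow from the same formula with different cost weights; the Python check below is mine.

```python
import math

# x_MAP = 2 - (1/4) ln(P[H1]/P[H0]) with P[H1] = 0.01, and
# x_MC  = 2 - (1/4) ln(C01 P[H1] / (C10 P[H0])) with C01 = 1e4, C10 = 1.
p1, p0 = 0.01, 0.99
c01, c10 = 1e4, 1.0
x_map = 2 - 0.25 * math.log(p1 / p0)             # about 3.15
x_mc = 2 - 0.25 * math.log(c01 * p1 / (c10 * p0))  # about 0.846
```

The factor-of-10⁴ miss cost pulls the threshold from 3.15 down to 0.846, greatly enlarging the "declare target" region A₁.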
x ∈ A_0 if L(x) = f_{X|H_0}(x) / f_{X|H_1}(x) = e^{-x^2/2} / e^{-(x-v)^2/2} ≥ γ;    x ∈ A_1 otherwise.    (1)
This rule simplifies to
x ∈ A_0 if L(x) = e^{-[x^2-(x-v)^2]/2} = e^{-vx+v^2/2} ≥ γ;    x ∈ A_1 otherwise.    (2)
Taking logarithms, the Neyman-Pearson rule becomes
x ∈ A_0 if x ≤ x_0 = v/2 - (1/v) ln γ;    x ∈ A_1 otherwise.    (3)
The choice of γ has a one-to-one correspondence with the choice of the threshold x_0. Moreover, L(x) ≥ γ if and only if x ≤ x_0. In terms of x_0, the false alarm probability is
The negative root of the quadratic is the result of the Gaussian assumption which allows for a
nonzero probability that Mn (T ) will be negative. In this case, hypothesis H1 which has higher
variance becomes more likely. However, since Mn (T ) ≥ 0, we can ignore this root since it is
just an artifact of the CLT approximation.
In fact, the CLT approximation gives an incorrect answer. Note that Mn (T ) = Yn /n where Yn
is a sum of iid exponential random variables. Under hypothesis H0 , Yn is an Erlang (n, λ0 =
1/3) random variable. Under hypothesis H1 , Yn is an Erlang (n, λ1 = 1/6) random variable.
Since Mn (T ) = Yn /n is a scaled version of Yn , Theorem 3.20 tells us that given hypothesis
Hi , Mn (T ) is an Erlang (n, nλi ) random variable. Thus Mn (T ) has likelihood functions
f_{M_n(T)|H_i}(t) = (nλ_i)^n t^{n-1} e^{-nλ_i t} / (n-1)! for t ≥ 0, and 0 otherwise.    (8)
(d) In this part, we will use the exact Erlang PDFs to find the MAP decision rule. From Theorem 8.2, the
MAP rule is
t ∈ A_0 if f_{M_n(T)|H_0}(t) / f_{M_n(T)|H_1}(t) = (λ_0/λ_1)^n e^{-n(λ_0-λ_1)t} ≥ P[H_1] / P[H_0];    t ∈ A_1 otherwise.    (11)
Since P[H0 ] = 0.8 and P[H1 ] = 0.2, the MAP rule simplifies to
t ∈ A_0 if t ≤ t_MAP = ( ln(λ_0/λ_1) - (1/n) ln(P[H_1]/P[H_0]) ) / (λ_0 - λ_1) = 6 ln 2 + (6 ln 4)/n;    t ∈ A_1 otherwise.    (12)
(e) Although we have seen it is incorrect to use a CLT approximation to derive the decision
rule, the CLT approximation used in parts (a) and (b) remains a good way to estimate the
false alarm and miss probabilities. However, given Hi , Mn (T ) is an Erlang (n, nλi ) random
variable. In particular, given H0 , Mn (T ) is an Erlang (n, n/3) random variable while given
H1 , Mn (T ) is an Erlang (n, n/6). Thus we can also use erlangcdf for an exact calculation
of the false alarm and miss probabilities. To summarize the results of parts (a) and (b), a
threshold t0 implies that
P_FA = P[M_n(T) > t_0 | H_0] = 1-erlangcdf(n,n/3,t0) ≈ Q(√n [t_0/3 - 1]),    (13)
P_MISS = P[M_n(T) ≤ t_0 | H_1] = erlangcdf(n,n/6,t0) ≈ Φ(√n [t_0/6 - 1]).    (14)
%voicedatroc.m
t0=(1:0.1:8)';
n=9;
PFA9=1.0-erlangcdf(n,n/3,t0);
PFA9clt=1-phi(sqrt(n)*((t0/3)-1));
PM9=erlangcdf(n,n/6,t0);
PM9clt=phi(sqrt(n)*((t0/6)-1));
n=16;
PFA16=1.0-erlangcdf(n,n/3,t0);
PFA16clt=1.0-phi(sqrt(n)*((t0/3)-1));
PM16=erlangcdf(n,n/6,t0);
PM16clt=phi(sqrt(n)*((t0/6)-1));
plot(PFA9,PM9,PFA9clt,PM9clt,PFA16,PM16,PFA16clt,PM16clt);
axis([0 0.8 0 0.8]);
legend(’Erlang n=9’,’CLT n=9’,’Erlang n=16’,’CLT n=16’);
[ROC plot: P_MISS versus P_FA, both axes 0 to 0.8, comparing the Erlang and CLT curves for n = 9 and n = 16.]
Both the true curves and the CLT-based approximations are shown. The graph makes it clear that the CLT approximations are somewhat inaccurate. It is also apparent that the ROC for n = 16 is clearly better than for n = 9.
Under hypothesis Hi , K has the binomial (n, pi ) PMF
P_{K|H_i}(k) = C(n, k) p_i^k (1 - p_i)^{n-k}.    (2)
(a) A false alarm occurs if K > k0 under hypothesis H0 . The probability of this event is
P_FA = P[K > k_0 | H_0] = Σ_{k=k_0+1}^{n} C(n, k) p_0^k (1 - p_0)^{n-k},    (3)
For t0 = 4.5, p0 = 0.2231 < p1 = 0.4724. In this case, the MAP rule becomes
k ∈ A_0 if k ≤ k_MAP = ( n ln((1-p_0)/(1-p_1)) + ln 4 ) / ln( (p_1/(1-p_1)) / (p_0/(1-p_0)) ) = (0.340)n + 1.22;    k ∈ A_1 otherwise.    (10)
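The slope and intercept in the rule above are easy to recompute; the Python check below is mine.

```python
import math

# MAP rule constants for t0 = 4.5: p0 = e^{-4.5/3}, p1 = e^{-4.5/6},
# priors P[H0]/P[H1] = 4 (log-odds term ln 4).
p0, p1 = math.exp(-4.5 / 3), math.exp(-4.5 / 6)
denom = math.log((p1 / (1 - p1)) / (p0 / (1 - p0)))
slope = math.log((1 - p0) / (1 - p1)) / denom      # about 0.340
intercept = math.log(4) / denom                    # about 1.22
```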
(d) For threshold k0 , the false alarm and miss probabilities are
PFA = P [K > k0 |H0 ] = 1-binomialcdf(n,p0,k0) (11)
PMISS = P [K ≤ k0 |H1 ] = binomialcdf(n,p1,k0) (12)
The ROC is generated by evaluating PFA and PMISS for each value of k0 . Here is a M ATLAB
program that does this task and plots the ROC.
function [PFA,PMISS]=binvoicedataroc(n);
t0=[3; 4.5];
p0=exp(-t0/3); p1=exp(-t0/6);
k0=(0:n)’;
PFA=zeros(n+1,2);
for j=1:2,
PFA(:,j) = 1.0-binomialcdf(n,p0(j),k0);
PM(:,j)=binomialcdf(n,p1(j),k0);
end
plot(PFA(:,1),PM(:,1),’-o’,PFA(:,2),PM(:,2),’-x’);
legend(’t_0=3’,’t_0=4.5’);
axis([0 0.8 0 0.8]);
xlabel(’\itP_{\rmFA}’);
ylabel(’\itP_{\rmMISS}’);
[ROC plot: P_MISS versus P_FA, both axes 0 to 0.8, for thresholds t_0 = 3 and t_0 = 4.5.]
As we see, the test works better with threshold t0 = 4.5 than with t0 = 3.
Thus the MAP rule simplifies to
y ∈ A0 if y ≤ 1; y ∈ A1 otherwise. (3)
The probability of error is
P_ERR = P[Y > 1 | H_0] P[H_0] + P[Y ≤ 1 | H_1] P[H_1]    (4)
= (1/2) ∫_1^∞ e^{-y} dy + (1/2) ∫_0^1 y e^{-y} dy    (5)
= e^{-1}/2 + (1 - 2e^{-1})/2 = (1 - e^{-1})/2.    (6)
(b) Let k ∗ denote the threshold given in part (a). Using n = 500, q0 = 10−4 , and q1 = 10−2 , we
have
k* = 500 · ln[(1 - 10^{-4})/(1 - 10^{-2})] / ( ln[10^{-2}/10^{-4}] + ln[(1 - 10^{-4})/(1 - 10^{-2})] ) ≈ 1.078    (7)
Thus the ML rule is that if we observe K ≤ 1, then we choose hypothesis H0 ; otherwise, we
choose H1 . The false alarm probability is
P_FA = P[A_1 | H_0] = P[K > 1 | H_0]    (8)
= 1 - P_{K|H_0}(0) - P_{K|H_0}(1)    (9)
= 1 - (1 - q_0)^{500} - 500 q_0 (1 - q_0)^{499} = 0.0012    (10)
and the miss probability is
P_MISS = P[A_0 | H_1] = P[K ≤ 1 | H_1]    (11)
= P_{K|H_1}(0) + P_{K|H_1}(1)    (12)
= (1 - q_1)^{500} + 500 q_1 (1 - q_1)^{499} = 0.0398.    (13)
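Both error probabilities in (8)-(13) are one-liners to recompute (Python sketch of mine):

```python
# Binomial test with n = 500, q0 = 1e-4, q1 = 1e-2, acceptance region K <= 1.
n, q0, q1 = 500, 1e-4, 1e-2
pfa = 1 - (1 - q0) ** n - n * q0 * (1 - q0) ** (n - 1)     # about 0.0012
pmiss = (1 - q1) ** n + n * q1 * (1 - q1) ** (n - 1)       # about 0.0398
```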
(c) In the test of Example 8.8, the geometric random variable N , the number of tests needed
to find the first failure, was used. In this problem, the binomial random variable K , the
number of failures in 500 tests, was used. We will call these two procedures the geometric and the binomial tests. Also, we will use P_FA^(N) and P_MISS^(N) to denote the false alarm and miss probabilities of the geometric test, and P_FA^(K) and P_MISS^(K) for the error probabilities of the binomial test. From Example 8.8, we have the following comparison:
geometric test    binomial test    (14)
P_FA^(N) = 0.0045,    P_FA^(K) = 0.0012,    (15)
P_MISS^(N) = 0.0087,    P_MISS^(K) = 0.0398    (16)
When making comparisons between tests, we want to judge both the reliability of the test
as well as the cost of the testing procedure. With respect to the reliability, we see that the
conditional error probabilities appear to be comparable in that
P_FA^(N) / P_FA^(K) = 3.75    but    P_MISS^(K) / P_MISS^(N) = 4.57.    (17)
Roughly, the false alarm probability of the geometric test is about four times higher than
that of the binomial test. However, the miss probability of the binomial test is about four
times that of the geometric test. As for the cost of the test, it is reasonable to assume the
cost is proportional to the number of disk drives that are tested. For the geometric test of
Example 8.8, we test either until the first failure or until 46 drives pass the test. For the
binomial test, we test until either 2 drives fail or until 500 drives pass the test! You can,
if you wish, calculate the expected number of drives tested under each test method for each
hypothesis. However, it isn’t necessary in order to see that a lot more drives will be tested
using the binomial test. If we knew the a priori probabilities P[Hi ] and also the relative costs
of the two types of errors, then we could determine which test procedure was better. However,
without that information, it would not be unreasonable to conclude that the geometric test
offers performance roughly comparable to that of the binomial test but with a significant
reduction in the expected number of drives tested.
Rearranging terms, we have
E[C] = P[A_1|H_0] P[H_0] (C_10 - C_00) + P[A_0|H_1] P[H_1] (C_01 - C_11) + P[H_0] C_00 + P[H_1] C_11.    (4)
Since P[H_0]C_00 + P[H_1]C_11 does not depend on the acceptance sets A_0 and A_1, the decision rule that minimizes E[C] is the same decision rule that minimizes
E[C′] = P[A_1|H_0] P[H_0] (C_10 - C_00) + P[A_0|H_1] P[H_1] (C_01 - C_11).    (5)
The decision rule that minimizes E[C′] is the same as the minimum cost test in Theorem 8.3 with the costs C_01 and C_10 replaced by the differential costs C_01 - C_11 and C_10 - C_00.
Just as in the QPSK system of Example 8.13, the additive Gaussian noise dictates that the acceptance
region Ai is the set of observations x that are closer to si = (i − 1)a than any other s j .
Problem 8.3.2 Solution
Let the components of s_{ijk} be denoted by s_{ijk}^(1) and s_{ijk}^(2) so that given hypothesis H_{ijk},
[X_1; X_2] = [s_{ijk}^(1); s_{ijk}^(2)] + [N_1; N_2]    (1)
As in Example 8.13, we will assume N1 and N2 are iid zero mean Gaussian random variables with
variance σ 2 . Thus, given hypothesis Hi jk , X 1 and X 2 are independent and the conditional joint PDF
of X 1 and X 2 is
f_{X_1,X_2|H_{ijk}}(x_1, x_2) = f_{X_1|H_{ijk}}(x_1) f_{X_2|H_{ijk}}(x_2)    (2)
= (1/(2πσ^2)) e^{-(x_1 - s_{ijk}^(1))^2/2σ^2} e^{-(x_2 - s_{ijk}^(2))^2/2σ^2}    (3)
= (1/(2πσ^2)) e^{-[(x_1 - s_{ijk}^(1))^2 + (x_2 - s_{ijk}^(2))^2]/2σ^2}    (4)
In terms of the distance ‖x - s_{ijk}‖ between the vectors
x = [x_1; x_2]    s_{ijk} = [s_{ijk}^(1); s_{ijk}^(2)]    (5)
we can write
f_{X_1,X_2|H_{ijk}}(x_1, x_2) = (1/(2πσ^2)) e^{-‖x - s_{ijk}‖^2/2σ^2}    (6)
Since all eight symbols s000 , . . . , s111 are equally likely, the MAP and ML rules are
x ∈ A_{ijk} if f_{X_1,X_2|H_{ijk}}(x_1, x_2) ≥ f_{X_1,X_2|H_{i′j′k′}}(x_1, x_2) for all other H_{i′j′k′}.    (7)
This rule simplifies to
x ∈ A_{ijk} if ‖x - s_{ijk}‖ ≤ ‖x - s_{i′j′k′}‖ for all other i′j′k′.    (8)
This means that Ai jk is the set of all vectors x that are closer to si jk than any other signal. Graph-
ically, to find the boundary between points closer to si jk than si j k , we draw the line segment con-
necting si jk and si j k . The boundary is then the perpendicular bisector. The resulting boundaries
are shown in this figure:
[Figure: the eight signals s_000, . . . , s_111 in the (X_1, X_2) plane; the perpendicular-bisector boundaries partition the plane into the acceptance regions A_000, . . . , A_111, with s_110, s_100, s_010, s_000 above the X_1-axis and s_011, s_001, s_111, s_101 below.]
Problem 8.3.3 Solution
In Problem 8.3.1, we found the MAP acceptance regions were
A0 = {x|x ≤ −a/2} A1 = {x| − a/2 < x ≤ a/2} A2 = {x|x > a/2} (1)
To calculate the probability of decoding error, we first calculate the conditional error probabilities
P[D_E] = Σ_{i=0}^{2} P[X ∉ A_i | H_i] P[H_i] = (4/3) Q(a/(2σ_N))    (6)
(x1 , x2 ) ∈ Ai if (x1 − si1 )2 + (x2 − si2 )2 ≤ (x1 − s j1 )2 + (x2 − s j2 )2 for all j (5)
In terms of the vectors x and si , the acceptance regions are defined by the rule
x ∈ A_i if ‖x - s_i‖^2 ≤ ‖x - s_j‖^2    (6)
Just as in the case of QPSK, the acceptance region Ai is the set of vectors x that are closest to si .
Problem 8.3.5 Solution
From the signal constellation depicted in Problem 8.3.5, each signal si j1 is below the x-axis while
each signal si j0 is above the x-axis. The event B3 of an error in the third bit occurs if we transmit a
signal si j1 but the receiver output x is above the x-axis or if we transmit a signal si j0 and the receiver
output is below the x-axis. By symmetry, we need only consider the case when we transmit one of
the four signals si j1 . In particular,
• Given H011 or H001 , X 2 = −1 + N2
Assuming all four hypotheses are equally likely, the probability of an error decoding the third bit is
P[B_3] = ( P[B_3|H_011] + P[B_3|H_001] + P[B_3|H_101] + P[B_3|H_111] ) / 4    (3)
= ( Q(1/σ_N) + Q(2/σ_N) ) / 2    (4)
(a) Hypothesis Hi is that X = si +N, where N is a Gaussian random vector independent of which
signal was transmitted. Thus, given Hi , X is a Gaussian (si , σ 2 I) random vector. Since X is
two-dimensional,
f_{X|H_i}(x) = (1/(2πσ^2)) e^{-(x - s_i)′(σ^2 I)^{-1}(x - s_i)/2} = (1/(2πσ^2)) e^{-‖x - s_i‖^2/2σ^2}.    (1)
2π σ 2 2π σ 2
Since the hypotheses Hi are equally likely, the MAP and ML rules are the same and achieve
the minimum probability of error. In this case, from the vector version of Theorem 8.8, the
MAP rule is
Using the conditional PDFs f_{X|H_i}(x), the MAP rule becomes
x ∈ A_m if ‖x - s_m‖^2 ≤ ‖x - s_j‖^2 for all j.    (3)
In terms of geometry, the interpretation is that all vectors x closer to s_m than to any other signal s_j are assigned to A_m. In this problem, the signal constellation (i.e., the set of vectors s_i) is the set of vectors on the circle of radius √E. The acceptance regions are the “pie slices” around each signal vector.
[Figure: M signals s_0, s_1, . . . , s_{M-1} equally spaced on a circle in the (X_1, X_2) plane, with pie-slice acceptance regions A_0, A_1, . . . , A_{M-1}.]
(b) Consider the following sketch to determine d.
[Sketch: the geometry of two adjacent signal vectors in the (X_1, X_2) plane used to determine d.]
(c) By symmetry, PERR is the same as the conditional probability of error 1− P[Ai |Hi ], no matter
which si is transmitted. Let B denote a circle of radius d at the origin and let Bi denote the
circle of radius d around si . Since B0 ⊂ A0 ,
Thus
(a) In Problem 8.3.4, we found that in terms of the vectors x and si , the acceptance regions are
defined by the rule
; ;2
x ∈ Ai if x − si 2 ≤ ;x − s j ; for all j. (1)
Just as in the case of QPSK, the acceptance region Ai is the set of vectors x that are closest to
si . Graphically, these regions are easily found from the sketch of the signal constellation:
[Figure: the 16-signal constellation in the (X_1, X_2) plane. Above the X_1-axis: s_7, s_3, s_5, s_2, s_6, s_4, s_1, s_0; below: s_9, s_8, s_12, s_14, s_10, s_13, s_11, s_15.]
(c) Surrounding each signal si is an acceptance region Ai that is no smaller than the acceptance
region A1 . That is,
This implies
P[C] = Σ_{i=0}^{15} P[C|H_i] P[H_i]    (11)
≥ Σ_{i=0}^{15} P[C|H_1] P[H_i] = P[C|H_1] Σ_{i=0}^{15} P[H_i] = P[C|H_1]    (12)
Problem 8.3.8 Solution
Let pi = P[Hi ]. From Theorem 8.8, the MAP multiple hypothesis test is
Expanding the squares and using the identity cos2 θ + sin2 θ = 1 yields the simplified rule
x_1 [cos θ_i - cos θ_j] + x_2 [sin θ_i - sin θ_j] ≥ (σ^2/√E) ln(p_j/p_i)    (4)
Note that the MAP rules define linear constraints in x1 and x2 . Since θi = π/4 + iπ/2, we use the
following table to enumerate the constraints:
        cos θ_i     sin θ_i
i = 0   1/√2        1/√2
i = 1   -1/√2       1/√2
i = 2   -1/√2       -1/√2
i = 3   1/√2        -1/√2        (5)
To be explicit, to determine whether (x_1, x_2) ∈ A_i, we need to check the MAP rule for each j ≠ i. Thus, each A_i is defined by three constraints. Using the above table, the acceptance regions are
• (x1 , x2 ) ∈ A0 if
x_1 ≥ (σ^2/√(2E)) ln(p_1/p_0)    x_2 ≥ (σ^2/√(2E)) ln(p_3/p_0)    x_1 + x_2 ≥ (σ^2/√(2E)) ln(p_2/p_0)    (6)
• (x1 , x2 ) ∈ A1 if
x_1 ≤ (σ^2/√(2E)) ln(p_1/p_0)    x_2 ≥ (σ^2/√(2E)) ln(p_2/p_1)    -x_1 + x_2 ≥ (σ^2/√(2E)) ln(p_3/p_1)    (7)
• (x1 , x2 ) ∈ A2 if
x_1 ≤ (σ^2/√(2E)) ln(p_2/p_3)    x_2 ≤ (σ^2/√(2E)) ln(p_2/p_1)    x_1 + x_2 ≤ (σ^2/√(2E)) ln(p_2/p_0)    (8)
• (x1 , x2 ) ∈ A3 if
x_1 ≥ (σ^2/√(2E)) ln(p_2/p_3)    x_2 ≤ (σ^2/√(2E)) ln(p_3/p_0)    -x_1 + x_2 ≤ (σ^2/√(2E)) ln(p_3/p_1)    (9)
[Figure: the acceptance regions in the (X_1, X_2) plane: A_1 and A_0 around s_1 and s_0 in the upper half-plane, A_2 and A_3 around s_2 and s_3 in the lower half-plane.]
Note that the boundary between A1 and A3 defined by −x1 + x2 ≥ 0 plays no role because of the
high value of p0 .
Since each S_i is a column vector,
SP^{1/2}X = [S_1 · · · S_k] [√p_1 X_1 ; . . . ; √p_k X_k] = √p_1 X_1 S_1 + · · · + √p_k X_k S_k.    (2)
Thus Y = SP^{1/2}X + N = Σ_{i=1}^{k} √p_i X_i S_i + N.
(b) Given the observation Y = y, a detector must decide which vector X = [X_1 · · · X_k]′ was (collectively) sent by the k transmitters. A hypothesis H_j must specify whether X_i = 1
or X i = −1 for each i. That is, a hypothesis H j corresponds to a vector x j ∈ Bk which
has ±1 components. Since there are 2k such vectors, there are 2k hypotheses which we can
enumerate as H1 , . . . , H2k . Since each X i is independently and equally likely to be ±1, each
hypothesis has probability 2−k . In this case, the MAP and and ML rules are the same and
achieve minimum probability of error. The MAP/ML rule is
or equivalently,
; ; ; ;
y ∈ Am if ;y − SP1/2 xm ; ≤ ;y − SP1/2 x j ; for all j. (6)
; ;
That is, we choose the vector x∗ = xm that minimizes the distance ;y − SP1/2 x j ; among all
vectors x j ∈ Bk . Since this vector x∗ is a function of the observation y, this is described by
the math notation ; ;
x∗ (y) = arg min ;y − SP1/2 x; , (7)
x∈Bk
where arg minx g(x) returns the argument x that minimizes g(x).
(c) To implement this detector, we must evaluate ‖y − SP^{1/2} x‖ for each x ∈ Bk. Since there are 2^k
vectors in Bk, we have to evaluate 2^k hypotheses. Because the number of hypotheses grows
exponentially with the number of users k, the maximum likelihood detector is considered to
be computationally intractable for a large number of users k.
However, as this is not a satisfactory answer, we will build a simple example with k = 2 users
and processing gain n = 2 to show the difference between the ML detector and the decorrelator.
In particular, suppose user 1 transmits with code vector S1 = [1 0]′ and user 2 transmits with code
vector S2 = [cos θ sin θ]′. In addition, we assume that the users' powers are p1 = p2 = 1. In this
case, P = I and

S = [1 cos θ; 0 sin θ]. (1)
For the ML detector, there are four hypotheses corresponding to each possible transmitted bit of
each user. Using Hi to denote the hypothesis that X = xi, we have

X = x1 = [1; 1],   X = x3 = [−1; 1], (2)
X = x2 = [1; −1],  X = x4 = [−1; −1]. (3)
It is useful to show these acceptance sets graphically. In this plot, the area around yi is the acceptance
set Ai and the dashed lines are the boundaries between the acceptance sets.

[Figure: the four points y1, …, y4 and the acceptance sets A1, …, A4 in the (Y1, Y2) plane.]

y1 = [1 + cos θ; sin θ],   y3 = [−1 + cos θ; sin θ], (7)
y2 = [1 − cos θ; −sin θ],  y4 = [−1 − cos θ; −sin θ]. (8)
Even though the components of Y are conditionally independent given Hi, the four integrals
∫_{Ai} f_{Y|Hi}(y) dy cannot be represented in a simple form. Moreover, they cannot even be
represented by the Φ(·) function. Note that the probability of a correct decision is the probability
that the bits X1 and X2 transmitted by both users are detected correctly.
The probability of a bit error is still somewhat more complex. For example, if X1 = 1, then
hypotheses H1 and H3 are equally likely. The detector guesses X̂1 = 1 if Y ∈ A1 ∪ A3. Given
X1 = 1, the conditional probability of a correct decision on this bit is

P[X̂1 = 1|X1 = 1] = (1/2) P[Y ∈ A1 ∪ A3|H1] + (1/2) P[Y ∈ A1 ∪ A3|H3] (10)
= (1/2) ∫_{A1∪A3} f_{Y|H1}(y) dy + (1/2) ∫_{A1∪A3} f_{Y|H3}(y) dy (11)
By comparison, the decorrelator does something simpler. Since S is a square invertible matrix,

(S′S)^{−1}S′ = S^{−1}(S′)^{−1}S′ = S^{−1} = (1/sin θ) [sin θ −cos θ; 0 1] (12)

We see that the components of Ỹ = S^{−1}Y are

Ỹ1 = Y1 − (cos θ/sin θ) Y2,   Ỹ2 = Y2/sin θ. (13)
Assuming (as in the earlier sketch) that 0 < θ < π/2, the decorrelator bit decisions are

X̂1 = sgn(Ỹ1) = sgn(Y1 − (cos θ/sin θ) Y2), (14)
X̂2 = sgn(Ỹ2) = sgn(Y2/sin θ) = sgn(Y2). (15)

[Figure: the points y1, …, y4 and the decorrelator decision regions A1, …, A4 in the (Y1, Y2) plane.]

Because we chose a coordinate system such that S1 lies along the x-axis, the effect of the decorrelator
on the rule for bit X2 is particularly easy to understand. For bit X2, we just check whether
the vector Y is in the upper half plane. Generally, the boundaries of the decorrelator decision
regions are straight lines, so they are easy to implement and the probability of error is easy to
calculate. However, these regions are suboptimal in terms of probability of error.
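The closed form for S^{−1} in (12) and the components in (13) can be verified numerically. This Python/NumPy check is our own sketch, not part of the text's MATLAB code:

```python
import numpy as np

theta = np.pi / 3  # any 0 < theta < pi/2
S = np.array([[1.0, np.cos(theta)],
              [0.0, np.sin(theta)]])

# Closed form of Eq. (12): S^{-1} = (1/sin theta) [[sin theta, -cos theta], [0, 1]]
Sinv = (1.0 / np.sin(theta)) * np.array([[np.sin(theta), -np.cos(theta)],
                                         [0.0, 1.0]])
print(np.allclose(Sinv, np.linalg.inv(S)))  # True

# Components of Ytilde = S^{-1} y match Eq. (13)
y = np.array([0.7, -0.4])
yt = Sinv @ y
print(np.isclose(yt[0], y[0] - (np.cos(theta) / np.sin(theta)) * y[1]))  # True
print(np.isclose(yt[1], y[1] / np.sin(theta)))                           # True
```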
= [pi^{x0}(1 − pi)/(1 − pi^{20})] (1 + pi + · · · + pi^{19−x0}) (3)
= pi^{x0}(1 − pi^{20−x0})/(1 − pi^{20}) = (pi^{x0} − pi^{20})/(1 − pi^{20}) (4)
We note that the above formula is also correct for x0 = 20. Using this formula, the false alarm and
miss probabilities are

PFA = P[X > x0|H0] = (p0^{x0} − p0^{20})/(1 − p0^{20}), (5)
PMISS = 1 − P[X > x0|H1] = (1 − p1^{x0})/(1 − p1^{20}). (6)
The MATLAB program rocdisc(p0,p1) returns the false alarm and miss probabilities and also
plots the ROC. Here is the program and the output for rocdisc(0.9,0.99):

function [PFA,PMISS]=rocdisc(p0,p1);
x=0:20;
PFA=(p0.^x-p0^(20))/(1-p0^(20));
PMISS=(1.0-(p1.^x))/(1-p1^(20));
plot(PFA,PMISS,'k.');
xlabel('\itP_{\rm FA}');
ylabel('\itP_{\rm MISS}');

[Figure: the ROC produced by rocdisc(0.9,0.99), PMISS versus PFA.]
From the receiver operating curve, we learn that we have a fairly lousy sensor. No matter how
we set the threshold x0 , either the false alarm probability or the miss probability (or both!) exceed
0.5.
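The formulas (5) and (6) are easy to sanity-check directly. Here is a small Python rendering (our sketch; the original solution uses the MATLAB function rocdisc) that confirms the ROC endpoints: the threshold x0 = 0 gives PFA = 1 and PMISS = 0, x0 = 20 gives PFA = 0 and PMISS = 1, and PFA decreases while PMISS increases in between:

```python
p0, p1 = 0.9, 0.99
# Eq. (5) and (6) evaluated for every threshold x0 = 0, ..., 20
PFA = [(p0**k - p0**20) / (1 - p0**20) for k in range(21)]
PMISS = [(1 - p1**k) / (1 - p1**20) for k in range(21)]

print(PFA[0], PMISS[0])    # 1.0 0.0  (alarm on everything)
print(PFA[20], PMISS[20])  # 0.0 1.0  (alarm on nothing)
```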
It is straightforward to use MATLAB to plot PERR as a function of p. The function bperr calculates
PERR for a vector p and a scalar signal to noise ratio snr corresponding to v/σ. A second program
bperrplot(snr) plots PERR as a function of p. Here are the programs:

function perr=bperr(p,snr);
%Problem 8.4.2 Solution
r=log(p./(1-p))/(2*snr);
perr=(p.*(qfunction(r+snr))) ...
    +((1-p).*phi(r-snr));

function pe=bperrplot(snr);
p=0.02:0.02:0.98;
pe=bperr(p,snr);
plot(p,pe);
xlabel('\it p');
ylabel('\it P_{ERR}');
Here are three outputs of bperrplot for the requested SNR values.
[Figure: three plots of PERR versus p, one for each of the requested SNR values.]
In all three cases, we see that PERR is maximum at p = 1/2. When p ≠ 1/2, the optimal (minimum
probability of error) decision rule is able to exploit the one hypothesis having higher a priori
probability than the other.

This might give the wrong impression that one should consider building a communication system
with p ≠ 1/2. To see why, consider the most extreme case, in which the error probability goes to
zero as p → 0 or p → 1. However, in these extreme cases, no information is being communicated.
When p = 0 or p = 1, the detector can simply guess the transmitted bit; in fact, there is no need to
transmit a bit at all, and thus it becomes impossible to transmit any information.
Finally, we note that v/σ is an SNR voltage ratio. For communication systems, it is common to
measure SNR as a power ratio. In particular, v/σ = 10 corresponds to an SNR of 10 log10(v²/σ²) =
20 dB.
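The voltage-to-power dB conversion is a one-liner; this quick Python check (ours, not part of the original MATLAB solutions) mirrors it:

```python
import math

v_over_sigma = 10.0                        # SNR as a voltage ratio
snr_db = 10 * math.log10(v_over_sigma**2)  # SNR as a power ratio in dB
print(snr_db)  # 20.0
```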
Thus among {0.4, 0.5, · · · , 1.0}, it appears that T = 0.8 is best. Now we test values of T in the
neighborhood of 0.8:
>> T=[0.70:0.02:0.9];Pe=sqdistor(1.5,0.5,100000,T);
>>[Pmin,Imin]=min(Pe);T(Imin)
ans =
0.78000000000000
This suggests that T = 0.78 is best among these values. However, inspection of the vector Pe
shows that all values are quite close. If we repeat this experiment a few times, we obtain:
>> T=[0.70:0.02:0.9];Pe=sqdistor(1.5,0.5,100000,T);
>> [Pmin,Imin]=min(Pe);T(Imin)
ans =
0.78000000000000
>> T=[0.70:0.02:0.9];Pe=sqdistor(1.5,0.5,100000,T);
>> [Pmin,Imin]=min(Pe);T(Imin)
ans =
0.80000000000000
>> T=[0.70:0.02:0.9];Pe=sqdistor(1.5,0.5,100000,T);
>> [Pmin,Imin]=min(Pe);T(Imin)
ans =
0.76000000000000
>> T=[0.70:0.02:0.9];Pe=sqdistor(1.5,0.5,100000,T);
>> [Pmin,Imin]=min(Pe);T(Imin)
ans =
0.78000000000000
This suggests that the best value of T is in the neighborhood of 0.78. If someone were paying you
to find the best T , you would probably want to do more testing. The only useful lesson here is that
when you try to optimize parameters using simulation results, you should repeat your experiments
to get a sense of the variance of your results.
x ∈ A0 if e^{−(8x−x²)/16} ≥ γ x/4;  x ∈ A1 otherwise. (2)

Taking logarithms yields

x ∈ A0 if x² − 8x ≥ 16 ln(γ/4) + 16 ln x;  x ∈ A1 otherwise. (3)

With some more rearranging,

x ∈ A0 if (x − 4)² ≥ 16 ln(γ/4) + 16 + 16 ln x = γ0 + 16 ln x;  x ∈ A1 otherwise, (4)

where γ0 = 16 ln(γ/4) + 16.
When we plot the functions f(x) = (x − 4)² and g(x) = γ0 + 16 ln x, we see that there exist x1 and
x2 such that f(x1) = g(x1) and f(x2) = g(x2). In terms of x1 and x2,

A0 = [0, x1] ∪ [x2, ∞),  A1 = (x1, x2). (5)

Using a Taylor series expansion of ln x around x = x0 = 4, we can show that

g(x) = γ0 + 16 ln x ≤ h(x) = γ0 + 16(ln 4 − 1) + 4x. (6)
Since h(x) is linear, we can use the quadratic formula to solve f(x) = h(x), yielding a solution
x̄2 = 6 + √(4 + 16 ln 4 + γ0). One can show that x2 ≤ x̄2. In the example shown below,
corresponding to γ = 1, x1 = 1.95, x2 = 9.5 and x̄2 = 6 + √20 = 10.47.
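These numbers can be reproduced with a short Python sketch (ours, not part of the MATLAB solutions) that evaluates x̄2 in closed form and locates the crossings of f and g on a grid, much as gasrange does:

```python
import math

gamma = 1.0
gamma0 = 16 * math.log(gamma / 4) + 16  # gamma_0 = 16 ln(gamma/4) + 16

# Closed-form upper bound from the quadratic formula
xbar2 = 6 + math.sqrt(4 + 16 * math.log(4) + gamma0)
print(round(xbar2, 2))  # 10.47, i.e. 6 + sqrt(20)

# Locate the two crossings f(x) = g(x) by sign changes on a fine grid
f = lambda x: (x - 4) ** 2
g = lambda x: gamma0 + 16 * math.log(x)
xs = [k * 0.001 for k in range(1, 13000)]
crossings = [x for x, xn in zip(xs, xs[1:])
             if (f(x) - g(x)) * (f(xn) - g(xn)) < 0]
print(len(crossings))  # 2 crossings, near x1 and x2
```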
[Figure: plot of f(x), g(x) and h(x) for 0 ≤ x ≤ 12.]
To calculate the ROC, we need to find x1 and x2 . Rather than find them exactly, we calculate f (x)
and g(x) for discrete steps over the interval [0, 1 + x̄2 ] and find the discrete values closest to x1 and
x2 . However, for these approximations to x1 and x2 , we calculate the exact false alarm and miss
probabilities. As a result, the optimal detector using the exact x1 and x2 cannot be worse than the
ROC that we calculate.
In terms of M ATLAB, we divide the work into the functions gasroc(n) which generates the
ROC by calling [x1,x2]=gasrange(gamma) to calculate x1 and x2 for a given value of γ .
function [pfa,pmiss]=gasroc(n);
a=(400)^(1/(n-1));
k=1:n;
g=0.05*(a.^(k-1));
pfa=zeros(n,1);
pmiss=zeros(n,1);
for k=1:n,
   [x1,x2]=gasrange(g(k));
   pmiss(k)=1-(exp(-x1^2/16)-exp(-x2^2/16));
   pfa(k)=exp(-x1/2)-exp(-x2/2);
end
plot(pfa,pmiss);
ylabel('P_{\rm MISS}');
xlabel('P_{\rm FA}');

function [x1,x2]=gasrange(gamma);
g=16+16*log(gamma/4);
xmax=7+sqrt(max(0,4+(16*log(4))+g));
dx=xmax/500;
x=dx:dx:4;
y=(x-4).^2-g-16*log(x);
[ym,i]=min(abs(y));
x1=x(i);
x=4:dx:xmax;
y=(x-4).^2-g-16*log(x);
[ym,i]=min(abs(y));
x2=x(i);
The argument n of gasroc(n) generates the ROC for n values of γ, ranging from 1/20
to 20 in multiplicative steps. Here is the resulting ROC:
[Figure: the resulting ROC, PMISS versus PFA.]
After all of this work, we see that the sensor is not particularly good in the sense that no matter
how we choose the thresholds, we cannot reduce both the miss and false alarm probabilities under
30 percent.
Problem 8.4.5 Solution
[Figure: M-PSK signal constellation s0, …, sM−1 with acceptance regions A0, …, AM−1 in the (X1, X2) plane.]

In the solution to Problem 8.3.6, we found the signal constellation and acceptance regions shown
in the adjacent figure. We could solve this problem by a general simulation of an M-PSK system.
This would include a random sequence of data symbols, mapping symbol i to vector si, and adding
the noise vector N to produce the receiver output X = si + N.
However, we are only asked to find the probability of symbol error, but not the probability that
symbol i is decoded as symbol j at the receiver. Because of the symmetry of the signal constellation
and the acceptance regions, the probability of symbol error is the same no matter what symbol is
transmitted.
[Figure: the “pie slice” around s0 translated to the origin in the (N1, N2) plane; the vertex is at (−√E, 0) and the half-angle is θ/2.]

Thus it is simpler to assume that s0 is transmitted every time and to check that the noise vector N is
in the pie slice around s0. In fact, by translating s0 to the origin, we obtain the “pie slice” geometry
shown in the figure. The lines marking the boundaries of the pie slice have slopes ± tan(θ/2).
where γ = E/σ² is the signal to noise ratio of the system.
The MATLAB “simulation” simply generates many pairs [Z1 Z2]′ and checks what fraction
meets these constraints. The function mpsksim(M,snr,n) simulates the M-PSK system with
SNR snr for n bit transmissions. The script mpsktest graphs the symbol error probability for
M = 8, 16, 32.
function Pe=mpsksim(M,snr,n);
%Problem 8.4.5 Solution:
%Pe=mpsksim(M,snr,n)
%n bit M-PSK simulation
t=tan(pi/M);
A=[-t 1; -t -1];
Z=randn(2,n);
PC=zeros(1,length(snr)); %row vector of correct-decision fractions
for k=1:length(snr),
   B=(A*Z)<=t*sqrt(snr(k));
   PC(k)=sum(min(B))/n;
end
Pe=1-PC;

%mpsktest.m;
snr=10.^((0:30)/10);
n=500000;
Pe8=mpsksim(8,snr,n);
Pe16=mpsksim(16,snr,n);
Pe32=mpsksim(32,snr,n);
loglog(snr,Pe8,snr,Pe16,snr,Pe32);
legend('M=8','M=16','M=32',3);
In mpsksim, each column of the matrix Z corresponds to a pair of noise variables [Z1 Z2]′.
The code B=(A*Z)<=t*sqrt(snr(k)) checks whether each pair of noise variables is in the
pie slice region. That is, B(1,j) and B(2,j) indicate whether the jth pair meets the first and second
constraints. Since min(B) operates on each column of B, min(B) is a row vector indicating which
pairs of noise variables passed the test.
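The same constraint test is easy to express in Python (our NumPy sketch of the MATLAB logic; the helper name in_pie_slice is hypothetical):

```python
import numpy as np

def in_pie_slice(z, M, snr):
    # True if the noise pair z leaves the transmitted M-PSK symbol decoded correctly
    t = np.tan(np.pi / M)
    A = np.array([[-t, 1.0],
                  [-t, -1.0]])
    return bool(np.all(A @ z <= t * np.sqrt(snr)))

print(in_pie_slice(np.zeros(2), 8, 10.0))            # True: zero noise is always correct
print(in_pie_slice(np.array([0.0, 10.0]), 8, 10.0))  # False: a large N2 leaves the slice
```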
Here is the output of mpsktest:
[Figure: loglog plot of symbol error probability versus SNR for M = 8, 16, 32.]
The curves for M = 8 and M = 16 end prematurely because for high SNR, the error rate is so low
that no errors are generated in 500,000 symbols. In this case, the measured Pe is zero and since
log 0 = −∞, the loglog function simply ignores the zero values.
The transmitted data vector x belongs to the set Bk of all binary ±1 vectors of length k. In principle,
we can enumerate the vectors in Bk as x0, x1, . . . , x_{2^k−1}. Moreover, each possible data vector xm
represents a hypothesis. Since there are 2^k possible data vectors, there are 2^k acceptance sets Am.
The set Am is the set of all vectors y such that the decision rule is to guess X̂ = xm. Our normal
procedure is to write a decision rule as “y ∈ Am if . . . ”; however, this problem has so
many hypotheses that it is more straightforward to refer to a hypothesis X = xm by the function x̂(y)
which returns the vector xm when y ∈ Am. In short, x̂(y) is our best guess as to which vector x was
transmitted when y is received.
Because each hypothesis has a priori probability 2^{−k}, the probability of error is minimized by
the maximum likelihood (ML) rule

x̂(y) = arg max_{x∈Bk} f_{Y|X}(y|x). (2)
Keep in mind that arg maxx g(x) returns the argument x that maximizes g(x). In any case, the form
of f Y|X (y|x) implies that the ML rule should minimize the negative exponent of fY|X (y|x). That is,
the ML rule is
x̂(y) = arg min_{x∈Bk} ‖y − SP^{1/2} x‖. (3)
Since the term y′y is the same for every x, we can define the function

h(x) = −2y′SP^{1/2}x + x′P^{1/2}S′SP^{1/2}x. (6)

In this case, the ML rule can be expressed as x̂(y) = arg min_{x∈Bk} h(x). We use MATLAB to evaluate
h(x) for each x ∈ Bk. Since for k = 10, Bk has 2^10 = 1024 vectors, it is desirable to make the
calculation as easy as possible. To this end, we define w = SP^{1/2}x and we write, with some
abuse of notation, h(·) as a function of w:

h(w) = −2y′w + w′w. (7)
Still, given y, we need to evaluate h(w) for each vector w. In MATLAB, this will be convenient
because we can form the matrices X and W with columns consisting of all possible vectors x and
w. In MATLAB, it is easy to calculate w′w by operating on the matrix W without looping through
all columns w.
In terms of MATLAB, we start by defining X=allbinaryseqs(n), which returns an n × 2^n
matrix X such that the columns of X enumerate all possible binary ±1 sequences of length n. How
allbinaryseqs works will be clear by generating the matrices A and P and reading the help
for bitget.

function X=allbinaryseqs(n)
%See Problem 8.4.6
%X: n by 2^n matrix of all
%length n binary vectors
%Thanks to Jasvinder Singh
A=repmat([0:2^n-1],[n,1]);
P=repmat([1:n]',[1,2^n]);
X = bitget(A,P);
X=(2*X)-1;
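The same enumeration is straightforward in pure Python (our sketch, not part of the original MATLAB code; row k holds bit k of the column index, LSB first, mirroring bitget(A,P)):

```python
def all_binary_seqs(n):
    # n x 2**n array (list of rows); columns enumerate all length-n +/-1 sequences
    X = [[0] * (2 ** n) for _ in range(n)]
    for col in range(2 ** n):
        for row in range(n):
            bit = (col >> row) & 1   # bit 'row' of the column index (LSB first)
            X[row][col] = 2 * bit - 1
    return X

X = all_binary_seqs(3)
print(len(X), len(X[0]))  # 3 8
cols = {tuple(X[r][c] for r in range(3)) for c in range(8)}
print(len(cols))          # 8 distinct columns
```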
Next, for a set of signal vectors (spreading sequences in CDMA parlance) given by the n × k
matrix S, err=cdmasim(S,P,m) simulates the transmission of a frame of m symbols through a
k user CDMA system with additive Gaussian noise. A “symbol” is just a vector x corresponding to
the k transmitted bits of the k users.
In addition, the function Pe=rcdma(n,k,snr,s,m) runs cdmasim for the pairs of values
of users k and SNR snr. Here is the pair of functions:
function err=cdmasim(S,P,m);
%err=cdmasim(S,P,m);
%S= n x k matrix of signals
%P= diag matrix of SNRs (power
% normalized by noise variance)
%See Problem 8.4.6
k=size(S,2); %number of users
n=size(S,1); %processing gain
X=allbinaryseqs(k); %all data
Phalf=sqrt(P);
W=S*Phalf*X;
WW=sum(W.*W);
err=0;
for j=1:m,
   s=duniformrv(1,2^k,1);
   y=S*Phalf*X(:,s)+randn(n,1);
   [hmin,imin]=min(-2*y'*W+WW);
   err=err+sum(X(:,s)~=X(:,imin));
end

function Pe=rcdma(n,k,snr,s,m);
%Pe=rcdma(n,k,snr,s,m);
%R-CDMA simulation:
% proc gain=n, users=k
% rand signal set/frame
% s frames, m symbols/frame
%See Problem 8.4.6 Solution
[K,SNR]=ndgrid(k,snr);
Pe=zeros(size(SNR));
for j=1:prod(size(SNR)),
   p=SNR(j);k=K(j);
   e=0;
   for i=1:s,
      S=randomsignals(n,k);
      e=e+cdmasim(S,p*eye(k),m);
   end
   Pe(j)=e/(s*m*k);
   % disp([p k e Pe(j)]);
end
In cdmasim, the kth diagonal element of P is the “power” pk of user k. Technically, we assume
that the additive Gaussian noise variables have variance 1, and thus pk is actually the signal to noise
ratio of user k. In addition, WW is a length 2^k row vector, with elements w′w for each possible w.
For each of the m random data symbols, represented by x (or X(:,s) in M ATLAB), cdmasim
calculates a received signal y (y). Finally, hmin is the minimum h(w) and imin is the index of
the column of W that minimizes h(w). Thus imin is also the index of the minimizing column of
X. Finally, cdmasim compares x̂(y) and the transmitted vector x bit by bit and counts the total
number of bit errors.
The function rcdma repeats cdmasim for s frames, with a random signal set for each frame.
Dividing the total number of bit errors over s frames by the total number of transmitted bits, we find
the bit error rate Pe . For an SNR of 4 dB and processing gain 16, the requested tests are generated
with the commands
>> n=16;
>> k=[2 4 8 16];
>> Pe=rcdma(n,k,snr,100,1000);
>>Pe
Pe =
0.0252 0.0272 0.0385 0.0788
>>
To answer part (b), the code for the matched filter (MF) detector is much simpler because there
is no need to test 2k hypotheses for every transmitted symbol. Just as for the case of the ML detector,
we define a function err=mfcdmasim(S,P,m) that simulates the MF detector for m symbols for
a given set of signal vectors S. In mfcdmasim, there is no need for looping. The mth transmitted
symbol is represented by the mth column of X and the corresponding received signal is given by
the mth column of Y. The matched filter processing can be applied to all m columns at once. A
second function Pe=mfrcdma(n,k,snr,s,m) cycles through all combinations of users k and
SNR snr and calculates the bit error rate for each pair of values. Here are the functions:
function err=mfcdmasim(S,P,m);
%err=mfcdmasim(S,P,m);
%S= n x k matrix of signals
%P= diag matrix of SNRs
% SNR=power/var(noise)
%See Problem 8.4.6b
k=size(S,2); %no. of users
n=size(S,1); %proc. gain
Phalf=sqrt(P);
X=randombinaryseqs(k,m);
Y=S*Phalf*X+randn(n,m);
XR=sign(S'*Y);
err=sum(sum(XR ~= X));

function Pe=mfrcdma(n,k,snr,s,m);
%Pe=mfrcdma(n,k,snr,s,m);
%R-CDMA, MF detection
% proc gain=n, users=k
% rand signal set/frame
% s frames, m symbols/frame
%See Problem 8.4.6 Solution
[K,SNR]=ndgrid(k,snr);
Pe=zeros(size(SNR));
for j=1:prod(size(SNR)),
   p=SNR(j);kt=K(j);
   e=0;
   for i=1:s,
      S=randomsignals(n,kt);
      e=e+mfcdmasim(S,p*eye(kt),m);
   end
   Pe(j)=e/(s*m*kt);
   disp([snr k e]);
end
Here is a run of mfrcdma.
>> pemf=mfrcdma(16,k,4,1000,1000);
4 2 4 8 16 73936
4 2 4 8 16 264234
4 2 4 8 16 908558
4 2 4 8 16 2871356
>> pemf’
ans =
0.0370 0.0661 0.1136 0.1795
>>
The following plot compares the maximum likelihood (ML) and matched filter (MF) detectors.
[Figure: bit error rate versus number of users k for the ML and MF detectors.]
As the ML detector offers the minimum probability of error, it should not be surprising that it has a
lower bit error rate. Although the MF detector is worse, the reduction in detector complexity makes
it attractive. In fact, in practical CDMA-based cellular phones, the processing gain ranges from
roughly 128 to 512. In such cases, the complexity of the ML detector is prohibitive and thus only
matched filter detectors are used.
where Ñ = (S′S)^{−1}S′N is still a Gaussian noise vector with expected value E[Ñ] = 0.
Decorrelation separates the signals in that the ith component of Ỹ is
Ỹi = √pi Xi + Ñi. (3)
This is the same as a single-user receiver output of the binary communication system of
Example 8.6. The single-user decision rule X̂ i = sgn (Ỹi ) for the transmitted bit X i has
probability of error
Pe,i = P[Ỹi > 0|Xi = −1] = P[−√pi + Ñi > 0] = Q(√pi / √Var[Ñi]). (4)
However, since Ñ = AN where A = (S′S)^{−1}S′, Theorem 5.16 tells us that Ñ has covariance
matrix C_Ñ = A C_N A′. We note that the general property that (B^{−1})′ = (B′)^{−1} implies that
A′ = S((S′S)′)^{−1} = S(S′S)^{−1}. These facts imply
Note that S′S is called the correlation matrix since its i, jth entry S′iSj is the correlation
between the signal of user i and that of user j. Thus Var[Ñi] = σ²(S′S)^{−1}_{ii} and the probability
of bit error for user i is

Pe,i = Q(√(pi/Var[Ñi])) = Q(√(pi/(S′S)^{−1}_{ii})). (6)
To find the probability of error for a randomly chosen bit, we average over the bits of all users
and find that

Pe = (1/k) ∑_{i=1}^{k} Pe,i = (1/k) ∑_{i=1}^{k} Q(√(pi/(S′S)^{−1}_{ii})). (7)
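Equation (7) is simple to evaluate numerically. The Python sketch below (ours, not from the text; it uses the standard identity Q(x) = erfc(x/√2)/2) confirms the sanity case of orthonormal signals, where (S′S)^{−1} = I and each user sees the single-user error rate Q(√pi):

```python
import math
import numpy as np

def qfunc(x):
    # Standard Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def decorr_ber(S, snr):
    # Average Pe of Eq. (7) when every user has SNR p_i = snr
    G = np.diag(np.linalg.inv(S.T @ S))  # diagonal entries of (S'S)^{-1}
    return float(np.mean([qfunc(math.sqrt(snr / g)) for g in G]))

S = np.eye(4)  # orthonormal signal set: no multi-user interference
print(np.isclose(decorr_ber(S, 4.0), qfunc(2.0)))  # True
```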
(b) When S S is not invertible, the detector flips a coin to decide each bit. In this case, Pe,i = 1/2
and thus Pe = 1/2.
(c) When S is chosen randomly, we need to average over all possible matrices S to find the
average probability of bit error. However, there are 2kn possible matrices S and averaging
over all of them is too much work. Instead, we randomly generate m matrices S and estimate
the average Pe by averaging over these m matrices.
A function berdecorr uses this method to evaluate the decorrelator BER. The code has a
lot of lines because it evaluates the BER using m signal sets for each combination of users
k and SNRs snr. However, because the program generates signal sets and calculates the
BER associated with each, there is no need for the simulated transmission of bits. Thus the
program runs quickly. Since there are only 2n distinct columns for matrix S, it is quite possible
to generate signal sets that are not linearly independent. In this case, berdecorr assumes
the “flip a coin” rule is used. Just to see whether this rule dominates the error probability, we
also display counts of how often S is rank deficient.
Here is the (somewhat tedious) code:
function Pe=berdecorr(n,k,snr,m);
%Problem 8.4.7 Solution: R-CDMA with decorrelation
%proc gain=n, users=k, average Pe for m signal sets
count=zeros(1,length(k)); %counts rank<k signal sets
Pe=zeros(length(k),length(snr)); snr=snr(:)’;
for mm=1:m,
for i=1:length(k),
S=randomsignals(n,k(i)); R=S’*S;
if (rank(R)<k(i))
count(i)=count(i)+1;
Pe(i,:)=Pe(i,:)+0.5*ones(1,length(snr));
else
G=diag(inv(R));
Pe(i,:)=Pe(i,:)+sum(qfunction(sqrt((1./G)*snr)))/k(i);
end
end
end
disp(’Rank deficiency count:’);disp(k);disp(count);
Pe=Pe/m;
Running berdecorr with processing gains n = 16 and n = 32 yields the following output:
>> k=[1 2 4 8 16 32];
>> pe16=berdecorr(16,k,4,10000);
Rank deficiency count:
1 2 4 8 16 32
0 2 2 12 454 10000
>> pe16’
ans =
0.0228 0.0273 0.0383 0.0755 0.3515 0.5000
>> pe32=berdecorr(32,k,4,10000);
Rank deficiency count:
1 2 4 8 16 32
0 0 0 0 0 0
>> pe32’
ans =
0.0228 0.0246 0.0290 0.0400 0.0771 0.3904
>>
As you might expect, the BER increases as the number of users increases. This occurs because
the decorrelator must suppress a large set of interferers. Also, in generating 10,000 signal
matrices S for each value of k, we see that rank deficiency is fairly uncommon; however, it
occasionally occurs for processing gain n = 16, even if k = 4 or k = 8. Finally, here is a
plot of these same BER statistics for n = 16 and k ∈ {2, 4, 8, 16}. Just for comparison, on the
same graph is the BER for the matched filter detector and the maximum likelihood detector
found in Problem 8.4.6.
[Figure: Pe versus number of users k for the ML, decorrelator, and MF detectors.]
We see from the graph that the decorrelator is better than the matched filter for a small number
of users. However, when the number of users k is large (relative to the processing gain n), the
decorrelator suffers because it must suppress all interfering users. Finally, we note that these
conclusions are specific to this scenario when all users have equal SNR. When some users
have very high SNR, the decorrelator is good for the low-SNR user because it zeros out the
interference from the high-SNR user.
• sb1 b2 b3 is in the right half plane if b2 = 0; otherwise it is in the left half plane.
• sb1 b2 b3 is in the upper half plane if b3 = 0; otherwise it is in the lower half plane.
• There is an inner ring and an outer ring of signals. sb1 b2 b3 is in the inner ring if b1 = 0;
otherwise it is in the outer ring.
Given a bit vector b, we use these facts by first using b2 and b3 to map b = [b1 b2 b3]′ to an
inner ring signal vector

s ∈ { [1 1]′, [−1 1]′, [−1 −1]′, [1 −1]′ }. (1)

In the next step we scale s by (1 + b1). If b1 = 1, then s is stretched to the outer ring. Finally, we
add a Gaussian noise vector N to generate the received signal X = s_{b1b2b3} + N.
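The mapping just described can be sketched in a few lines of Python (ours; the helper name qam8_map is hypothetical):

```python
def qam8_map(b1, b2, b3):
    # b2 selects the half plane in x, b3 in y; b1 = 1 stretches to the outer ring
    s = (1 if b2 == 0 else -1, 1 if b3 == 0 else -1)
    scale = 1 + b1
    return (scale * s[0], scale * s[1])

print(qam8_map(0, 0, 0))  # (1, 1): inner ring, upper right
print(qam8_map(1, 1, 1))  # (-2, -2): outer ring, lower left
```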
[Figure: the eight signal points s_{b1b2b3} and acceptance sets A_{b1b2b3} in the (X1, X2) plane.]

In the solution to Problem 8.3.2, we found that the acceptance set for the hypothesis H_{b1b2b3}
that s_{b1b2b3} is transmitted is the set of signal space points closest to s_{b1b2b3}. Graphically,
these acceptance sets are given in the adjacent figure. These acceptance sets correspond to an
inverse mapping of the received signal vector X to a bit vector guess b̂ = [b̂1 b̂2 b̂3]′ using the
following rules:
function [Pe,ber]=myqam(sigma,m);
Pe=zeros(size(sigma)); ber=Pe;
B=reshape(bernoullirv(0.5,3*m),3,m);
%S(1,:)=1-2*B(2,:);
%S(2,:)=1-2*B(3,:);
S=1-2*B([2; 3],:);
S=([1;1]*(1+B(1,:))).*S;
N=randn(2,m);
for i=1:length(sigma),
   X=S+sigma(i)*N;
   BR=zeros(size(B));
   BR([2;3],:)=(X<0);
   BR(1,:)=sum(abs(X))>(3/sqrt(2));
   E=(BR~=B);
   Pe(i)=sum(max(E))/m;
   ber(i)=sum(sum(E))/(3*m);
end

%myqamplot.m
sig=10.^(0.2*(-8:0));
[Pe,ber]=myqam(sig,1e6);
loglog(sig,Pe,'-d', ...
   sig,ber,'-s');
legend('SER','BER',4);
Note that we generate the bits, transmitted signals, and normalized noise only once. However,
for each value of sigma, we rescale the additive noise, recalculate the received signal, and recompute
the receiver bit decisions. The output of myqamplot is shown in this figure:
[Figure: loglog plot of SER and BER versus σ.]
Careful reading of the figure will show that the ratio of the symbol error rate to the bit error rate is
always very close to 3. This occurs because in the acceptance set for b1b2b3, the adjacent acceptance
sets correspond to a one bit difference. Since the usual type of symbol error occurs when the vector
X is in the adjacent set, a symbol error typically results in one bit being in error but two bits being
received correctly. Thus the bit error rate is roughly one third the symbol error rate.
(a) For the M-PSK communication system with additive Gaussian noise, Aj denoted the hypothesis
that signal sj was transmitted. The solution to Problem 8.3.6 derived the MAP decision
rule
[Figure: M-PSK signal constellation s0, …, sM−1 with acceptance regions A0, …, AM−1 in the (X1, X2) plane.]

x ∈ Am if ‖x − sm‖² ≤ ‖x − sj‖² for all j. (1)

In terms of geometry, the interpretation is that all vectors x closer to sm than to any other signal sj
are assigned to Am. In this problem, the signal constellation (i.e., the set of vectors si) is the set of
vectors on the circle of radius √E. The acceptance regions are the “pie slices” around each signal
vector.
We observe that

‖x − sj‖² = (x − sj)′(x − sj) = x′x − 2x′sj + s′j sj. (2)
(c) The next step is to determine the effect of the mapping of bits to transmission vectors sj. The
matrix D with i, jth element dij indicates the number of bit positions in which the bit
string assigned to si differs from the bit string assigned to sj. In this case, the integers provide
a compact representation of this mapping. For example, the binary mapping is

s0   s1   s2   s3   s4   s5   s6   s7
000  001  010  011  100  101  110  111
0    1    2    3    4    5    6    7

The Gray mapping is

s0   s1   s2   s3   s4   s5   s6   s7
000  001  011  010  110  111  101  100
0    1    3    2    6    7    5    4

Thus the binary mapping can be represented by a vector c1 = [0 1 · · · 7]′ while the Gray
mapping is described by c2 = [0 1 3 2 6 7 5 4]′.
>> c1=0:7;
>>snr=[4 8 16 32 64];
>>Pb=mpskmap(c1,snr,1000000);
>> Pb
Pb =
0.7640 0.4878 0.2198 0.0529 0.0038
Experimentally, we observe that the BER of the binary mapping is higher than the BER of
the Gray mapping by a factor in the neighborhood of 1.5 to 1.7.
In fact, this approximate ratio can be derived by a quick and dirty analysis. For high SNR,
suppose that si is decoded as si+1 or si−1 with probability q = Pi,i+1 = Pi,i−1 and all
other types of errors are negligible. In this case, the BER formula based on this approximation
corresponds to summing the matrix D for the first off-diagonals and the corner elements. Here
are the calculations:
>> D=mpskdist(c1);
>> sum(diag(D,1))+sum(diag(D,-1))+D(1,8)+D(8,1)
ans =
28
>> DG=mpskdist(c2);
>> sum(diag(DG,1))+sum(diag(DG,-1))+DG(1,8)+DG(8,1)
ans =
16
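The counts 28 and 16 can be reproduced without MATLAB; this Python sketch (ours, with hypothetical helper names) builds the same sums from Hamming distances between the integer labels:

```python
def hamming(a, b):
    # Number of bit positions in which integers a and b differ
    return bin(a ^ b).count("1")

def adjacent_error_weight(c):
    # Sum of D over both first off-diagonals plus the two corner elements
    L = len(c)
    off = sum(hamming(c[i], c[i + 1]) for i in range(L - 1))
    corners = hamming(c[0], c[L - 1])
    return 2 * off + 2 * corners

c1 = list(range(8))            # binary mapping
c2 = [0, 1, 3, 2, 6, 7, 5, 4]  # Gray mapping
print(adjacent_error_weight(c1))  # 28
print(adjacent_error_weight(c2))  # 16
```

Their ratio, 28/16 = 1.75, is close to the experimentally observed BER ratio.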
where dij(k) indicates whether the bit strings mapped to si and sj differ in bit position k.
As in Problem 8.4.9, we describe the mapping by the vector of integers. For example, the
binary mapping is

s0   s1   s2   s3   s4   s5   s6   s7
000  001  010  011  100  101  110  111
0    1    2    3    4    5    6    7

The Gray mapping is

s0   s1   s2   s3   s4   s5   s6   s7
000  001  011  010  110  111  101  100
0    1    3    2    6    7    5    4

Thus the binary mapping can be represented by a vector c1 = [0 1 · · · 7]′ while the Gray
mapping is described by c2 = [0 1 3 2 6 7 5 4]′.
The function D=mpskdbit(c,k) translates the mapping vector c into the matrix D with entries
dij that indicate whether bit k is in error when transmitted symbol si is decoded by the receiver
as sj. The method is to generate grids C1 and C2 for the pairs of integers, identify bit k in each
integer, and then check if the integers differ in bit k.

function D=mpskdbit(c,k);
%See Problem 8.4.10: For mapping
%c, calculate BER of bit k
L=length(c);m=log2(L);
[C1,C2]=ndgrid(c,c);
B1=bitget(C1,k);
B2=bitget(C2,k);
D=(B1~=B2);
Thus, there is a matrix D associated with each bit position and we calculate the expected number
of bit errors associated with each bit position. For each bit, the rest of the solution is the same as
in Problem 8.4.9. We use the commands p=mpskerr(M,snr,n) and P=mpskmatrix(p) to
calculate the matrix P which holds an estimate of each probability Pij. Finally, using matrices P
and D, we treat BER(k) as a finite random variable that takes on value dij with probability Pij. The
expected value of this finite random variable is the expected number of bit errors.
Given the integer mapping vector c, we estimate the BER of a mapping using just one more
function Pb=mpskbitmap(c,snr,n). First we calculate the matrix D with elements dij.
Next, for a given value of snr, we use n transmissions to estimate the probabilities Pij. Last,
we calculate the expected number of bit k errors per transmission.

function Pb=mpskbitmap(c,snr,n);
%Problem 8.4.10: Calculate prob. of
%bit error for each bit position for
%an MPSK bit to symbol mapping c
M=length(c);m=log2(M);
p=mpskerr(M,snr,n);
P=mpskmatrix(p);
Pb=zeros(1,m);
for k=1:m,
   D=mpskdbit(c,k);
   Pb(k)=finiteexp(D,P)/M;
end
For an SNR of 10dB, we evaluate the two mappings with the following commands:
>> c1=0:7;
>> mpskbitmap(c1,10,100000)
ans =
0.2247 0.1149 0.0577
We see that in the binary mapping, the 0.22 error rate of bit 1 is roughly double that of bit 2, which
is roughly double that of bit 3. For the Gray mapping, the error rate of bit 1 is cut in half relative
to the binary mapping. However, the bit error rates at each position are still not identical since the
error rate of bit 1 is still double that for bit 2 or bit 3. One might surmise that careful study of the
matrix D might lead one to prove for the Gray map that the error rate for bit 1 is exactly double that
for bits 2 and 3 . . . but that would be some other homework problem.
Problem Solutions – Chapter 9
(c) First we must find the marginal PDF for X. For 0 ≤ x ≤ 1,

fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy = ∫_x^1 6(y − x) dy = [3y² − 6xy]_{y=x}^{y=1} (9)
= 3 − 6x + 3x² (10)
(c) The MMSE estimator of X given Y is X̂M(Y) = E[X|Y] = Y/2. The mean squared error
is

e*_{X,Y} = E[(X − X̂M(Y))²] = E[(X − Y/2)²] = E[X² − XY + Y²/4]. (4)
Of course, the integral must be evaluated.

e*_{X,Y} = ∫_0^1 ∫_0^y 2(x² − xy + y²/4) dx dy (5)
= ∫_0^1 [2x³/3 − x²y + xy²/2]_{x=0}^{x=y} dy (6)
= ∫_0^1 (y³/6) dy = 1/24 (7)
Another approach to finding the mean square error is to recognize that the MMSE estimator is a linear estimator and thus must be the optimal linear estimator. Hence, the mean squared error of the optimal linear estimator given by Theorem 9.4 must equal e^*_{X,Y}. That is, e^*_{X,Y} = Var[X](1 - \rho_{X,Y}^2). However, calculation of the correlation coefficient \rho_{X,Y} is at least as much work as direct calculation of e^*_{X,Y}.
(b) No, the random variables X and Y are not independent since
This implies
(f) The minimum mean square estimator of X given that Y = -3 is

\hat{x}_M(-3) = E[X | Y = -3] = \sum_x x P_{X|Y}(x|-3) = -2/3   (6)
(b) Because X and V are independent random variables, the variance of R is the sum of the
variance of V and the variance of X .
Var[R] = Var[V ] + Var[X ] = 12 + 3 = 15 (3)
Problem 9.2.3 Solution
The solution to this problem is to simply calculate the various quantities required for the optimal
linear estimator given by Theorem 9.4. First we calculate the necessary moments of X and Y .
This implies
and that
E[X^2] = E[Y^2] = E[U^2] = E[V^2] = E[S^2] = E[T^2] = E[Q^2] = E[R^2] = 2/3   (4)

Since each random variable has zero mean, the second moment equals the variance. Also, the standard deviation of each random variable is \sqrt{2/3}. These common properties will make it much
easier to answer the questions.
(a) Random variables X and Y are independent since for all x and y,
PX,Y (x, y) = PX (x) PY (y) (5)
Since each other pair of random variables has the same marginal PMFs as X and Y but a
different joint PMF, all of the other pairs of random variables must be dependent. Since X
and Y are independent, ρ X,Y = 0. For the other pairs, we must compute the covariances.
Cov[U, V ] = E[U V ] = (1/3)(−1) + (1/3)(−1) = −2/3 (6)
Cov[S, T ] = E[ST ] = 1/6 − 1/6 + 0 + −1/6 + 1/6 = 0 (7)
Cov[Q, R] = E[Q R] = 1/12 − 1/6 − 1/6 + 1/12 = −1/6 (8)
The correlation coefficient of U and V is

\rho_{U,V} = \frac{Cov[U,V]}{\sqrt{Var[U]}\sqrt{Var[V]}} = \frac{-2/3}{\sqrt{2/3}\sqrt{2/3}} = -1   (9)

In fact, since the marginal PMFs are the same, the denominator of the correlation coefficient will be 2/3 in each case. The other correlation coefficients are

\rho_{S,T} = \frac{Cov[S,T]}{2/3} = 0 \qquad \rho_{Q,R} = \frac{Cov[Q,R]}{2/3} = -1/4   (10)
(b) From Theorem 9.4, the least mean square linear estimator of U given V is

\hat{U}_L(V) = \rho_{U,V}\frac{\sigma_U}{\sigma_V}(V - E[V]) + E[U] = \rho_{U,V} V = -V   (11)

Similarly for the other pairs, all expected values are zero and the ratio of the standard deviations is always 1. Hence,

\hat{X}_L(Y) = \rho_{X,Y} Y = 0   (12)
\hat{S}_L(T) = \rho_{S,T} T = 0   (13)
\hat{Q}_L(R) = \rho_{Q,R} R = -R/4   (14)

From Theorem 9.4, the mean square errors are

e^*_L(X,Y) = Var[X](1 - \rho_{X,Y}^2) = 2/3   (15)
e^*_L(U,V) = Var[U](1 - \rho_{U,V}^2) = 0   (16)
e^*_L(S,T) = Var[S](1 - \rho_{S,T}^2) = 2/3   (17)
e^*_L(Q,R) = Var[Q](1 - \rho_{Q,R}^2) = 5/8   (18)
The first and second moments of X are

E[X] = \int_0^1 (x + 2x^2 - 3x^3)\,dx = \left[x^2/2 + 2x^3/3 - 3x^4/4\right]_0^1 = 5/12   (3)
E[X^2] = \int_0^1 (x^2 + 2x^3 - 3x^4)\,dx = \left[x^3/3 + x^4/2 - 3x^5/5\right]_0^1 = 7/30   (4)
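As a sanity check on (3) and (4): the integrands correspond to the PDF f_X(x) = 1 + 2x - 3x^2 on [0, 1]. The manual's numerical work is in MATLAB, but the short sketch below (not part of the original solution) verifies both moments with exact rational arithmetic in Python.

```python
from fractions import Fraction

# f_X(x) = 1 + 2x - 3x^2 on [0,1]; the m-th moment integrates x^m * f_X(x),
# and the integral of x^j over [0,1] is 1/(j+1)
def moment(m):
    coeffs = {0: 1, 1: 2, 2: -3}   # f_X(x) = sum_j coeffs[j] * x^j
    return sum(Fraction(c, j + m + 1) for j, c in coeffs.items())

print(moment(0))   # 1 (f_X integrates to 1, so it is a valid PDF)
print(moment(1))   # 5/12
print(moment(2))   # 7/30
```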
where we can calculate the following:

f_Y(y) = \int_0^y 6(y-x)\,dx = \left[6xy - 3x^2\right]_0^y = 3y^2 \quad (0 \le y \le 1)   (2)
f_X(x) = \int_x^1 6(y-x)\,dy = 3(1 - 2x + x^2) \quad (0 \le x \le 1), \text{ 0 otherwise}   (3)
(a) Given X = x, Y is uniform on [0, x]. Hence E[Y |X = x] = x/2. Thus the minimum mean
square estimate of Y given X is
(b) The minimum mean square estimate of X given Y can be found by finding the conditional
probability density function of X given Y . First we find the joint density function.
f_{X,Y}(x,y) = f_{Y|X}(y|x) f_X(x) = \begin{cases} \lambda e^{-\lambda x} & 0 \le y \le x \\ 0 & \text{otherwise} \end{cases}   (4)
By dividing the joint density by the marginal density of Y we arrive at the conditional density of X given Y:

f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \begin{cases} \lambda e^{-\lambda(x-y)} & x \ge y \\ 0 & \text{otherwise} \end{cases}   (6)
Now we are in a position to find the minimum mean square estimate of X given Y . Given
Y = y, the conditional expected value of X is
E[X | Y = y] = \int_y^\infty \lambda x e^{-\lambda(x-y)}\,dx   (7)
(c) Since the MMSE estimate of Y given X is the linear estimate \hat{Y}_M(X) = X/2, the optimal linear estimate of Y given X must also be the MMSE estimate. That is, \hat{Y}_L(X) = X/2.
(d) Since the MMSE estimate of X given Y is the linear estimate \hat{X}_M(Y) = Y + 1/\lambda, the optimal linear estimate of X given Y must also be the MMSE estimate. That is, \hat{X}_L(Y) = Y + 1/\lambda.
Note that f X,R (x, r ) > 0 for all non-negative X and R. Hence, for the remainder of the problem,
we assume both X and R are non-negative and we omit the usual “zero otherwise” considerations.
We use the integration by parts formula \int u\,dv = uv - \int v\,du, choosing u = r and dv = e^{-(x+1)r}\,dr. Thus v = -e^{-(x+1)r}/(x+1) and

f_X(x) = \left.\frac{-r e^{-(x+1)r}}{x+1}\right|_0^\infty + \frac{1}{x+1}\int_0^\infty e^{-(x+1)r}\,dr = \left.\frac{-e^{-(x+1)r}}{(x+1)^2}\right|_0^\infty = \frac{1}{(x+1)^2}   (4)
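The closed form f_X(x) = 1/(x+1)^2 can be confirmed without integration by parts. The Python sketch below (an independent numerical check, not from the original MATLAB-based solutions) approximates the integral of r e^{-(x+1)r} over r from 0 to infinity with a trapezoidal rule:

```python
from math import exp

def fx_numeric(x, upper=60.0, steps=200000):
    # trapezoidal approximation of the integral of r * e^{-(x+1) r} for r in [0, inf);
    # the tail beyond `upper` is negligible for x >= 0
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * r * exp(-(x + 1) * r)
    return total * h

for x in [0.0, 0.5, 2.0]:
    print(x, fx_numeric(x), 1 / (x + 1)**2)   # the two columns agree closely
```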
(b) The MMSE estimate of X given R = r is E[X |R = r ]. From the initial problem statement,
we know that given R = r , X is exponential with mean 1/r . That is, E[X |R = r ] = 1/r .
Another way of writing this statement is
(d) Just as in part (c), because E[X ] doesn’t exist, the LMSE estimate of R given X doesn’t exist.
(b) Using a = a^*, the mean squared error is

e^* = E[X^2] - \frac{(E[XY])^2}{E[Y^2]}   (3)
(c) We can write the LMSE estimator given in Theorem 9.4 in the form

\hat{x}_L(Y) = \rho_{X,Y}\frac{\sigma_X}{\sigma_Y} Y - b   (4)

where

b = \rho_{X,Y}\frac{\sigma_X}{\sigma_Y} E[Y] - E[X]   (5)

When b = 0, \hat{X}(Y) is the LMSE estimate. Typically b = 0 occurs because E[X] = E[Y] = 0; however, the right combination of means, variances, and correlation coefficient can also yield b = 0.
From Theorem 9.6, the MAP estimate of R given X = x maximizes f X |R (x|r ) f R (r ). Since R has
a uniform PDF over [0, 1000],
Hence, the maximizing value of r is the same as for the ML estimate in Quiz 9.3 unless the maximizing r exceeds 1000 m, in which case the maximizing value is r = 1000 m. From the solution to Quiz 9.3, the resulting MAP estimator is

\hat{r}_{MAP}(x) = \begin{cases} 1000 & x < -160 \\ (0.1)10^{-x/40} & x \ge -160 \end{cases}   (3)
(a) The minimum mean square error estimate of N given R is the conditional expected value of N given R = r, which is given directly in the problem statement:

\hat{N}_M(r) = E[N | R = r] = rT   (3)
(b) The maximum a posteriori estimate of N given R is simply the value of n that maximizes P_{N|R}(n|r). That is,

\hat{n}_{MAP}(r) = \arg\max_{n \ge 0} P_{N|R}(n|r) = \arg\max_{n \ge 0} (rT)^n e^{-rT}/n!   (4)

Usually, we set a derivative to zero to solve for the maximizing value. In this case, that technique doesn't work because n is discrete. Since e^{-rT} is a common factor in the maximization, we can define g(n) = (rT)^n/n! so that \hat{n}_{MAP} = \arg\max_n g(n). We observe that

g(n) = \frac{rT}{n}\,g(n-1)   (5)

This implies that for n \le rT, g(n) \ge g(n-1). Hence the maximizing value of n is the largest n such that n \le rT. That is, \hat{n}_{MAP} = \lfloor rT \rfloor.
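The claim that the PMF maximizer is the largest integer n ≤ rT is easy to confirm by brute force. Here is a short Python check (separate from the manual's MATLAB code; the rT values are arbitrary non-integer examples, since at integer rT the maximum is tied between rT - 1 and rT):

```python
from math import exp, factorial, floor

def poisson_map(rT, nmax=100):
    # brute-force arg max over n of the Poisson PMF (rT)^n e^{-rT} / n!
    return max(range(nmax), key=lambda n: rT**n * exp(-rT) / factorial(n))

for rT in [0.7, 3.2, 17.9]:
    print(rT, poisson_map(rT), floor(rT))   # maximizer matches floor(rT)
```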
(c) The maximum likelihood estimate of N given R selects the value of n that maximizes f_{R|N}(r|n), the conditional PDF of R given N. When dealing with situations in which we mix continuous and discrete random variables, it's often helpful to start from first principles. In this case,
f_{R|N}(r|n)\,dr = P[r < R \le r + dr \mid N = n]   (6)
               = \frac{P[r < R \le r + dr, N = n]}{P[N = n]}   (7)
               = \frac{P[N = n \mid R = r]\,P[r < R \le r + dr]}{P[N = n]}   (8)
In terms of PDFs and PMFs, we have

f_{R|N}(r|n) = \frac{P_{N|R}(n|r)\,f_R(r)}{P_N(n)}   (9)
To find the value of n that maximizes f_{R|N}(r|n), we need to find the denominator P_N(n).

P_N(n) = \int_{-\infty}^{\infty} P_{N|R}(n|r)\,f_R(r)\,dr   (10)
       = \int_0^\infty \frac{(rT)^n e^{-rT}}{n!}\,\mu e^{-\mu r}\,dr   (11)
       = \frac{\mu T^n}{n!(\mu + T)} \int_0^\infty r^n (\mu + T) e^{-(\mu+T)r}\,dr   (12)
       = \frac{\mu T^n}{n!(\mu + T)}\,E[X^n]   (13)

where X is an exponential random variable with mean 1/(\mu + T). There are several ways to derive the nth moment of an exponential random variable, including integration by parts. In Example 6.5, the MGF is used to show that E[X^n] = n!/(\mu + T)^n. Hence, for n \ge 0,

P_N(n) = \frac{\mu T^n}{(\mu + T)^{n+1}}   (14)
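The closed form (14) can be cross-checked numerically. The sketch below (Python rather than the manual's MATLAB, with arbitrarily chosen illustrative values of μ and T) approximates the integral in (11) by a trapezoidal rule and compares it to μT^n/(μ+T)^(n+1):

```python
from math import exp, factorial

def pn_numeric(n, mu, T, upper=30.0, steps=100000):
    # trapezoidal approximation of the integral of (rT)^n e^{-rT}/n! * mu e^{-mu r}
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (r * T)**n * exp(-r * T) / factorial(n) * mu * exp(-mu * r)
    return total * h

mu, T = 1.0, 2.0   # illustrative values, not taken from the problem statement
for n in range(6):
    print(n, pn_numeric(n, mu, T), mu * T**n / (mu + T)**(n + 1))
```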
Finally, the conditional PDF of R given N is

f_{R|N}(r|n) = \frac{P_{N|R}(n|r)\,f_R(r)}{P_N(n)} = \frac{\frac{(rT)^n e^{-rT}}{n!}\,\mu e^{-\mu r}}{\frac{\mu T^n}{(\mu+T)^{n+1}}}   (15)
            = (\mu + T)\,\frac{[(\mu + T)r]^n e^{-(\mu+T)r}}{n!}   (16)
The ML estimate of N given R is

\hat{n}_{ML}(r) = \arg\max_{n \ge 0} f_{R|N}(r|n) = \arg\max_{n \ge 0} (\mu + T)\,\frac{[(\mu + T)r]^n e^{-(\mu+T)r}}{n!}   (17)

This maximization is exactly the same as in the previous part except rT is replaced by (\mu + T)r. The maximizing value of n is \hat{n}_{ML} = \lfloor(\mu + T)r\rfloor.
Finally, the conditional PDF of R given N is

f_{R|N}(r|n) = \frac{P_{N|R}(n|r)\,f_R(r)}{P_N(n)} = \frac{\frac{(rT)^n e^{-rT}}{n!}\,\mu e^{-\mu r}}{\frac{\mu T^n}{(\mu+T)^{n+1}}}   (10)
            = \frac{(\mu + T)^{n+1} r^n e^{-(\mu+T)r}}{n!}   (11)
(a) The MMSE estimate of R given N = n is the conditional expected value E[R|N = n]. Given N = n, the conditional PDF of R is that of an Erlang random variable of order n + 1. From Appendix A, we find that E[R|N = n] = (n+1)/(\mu + T). The MMSE estimate of R given N is

\hat{R}_M(N) = E[R|N] = \frac{N+1}{\mu + T}   (12)
(b) The MAP estimate of R given N = n is the value of r that maximizes f_{R|N}(r|n):

\hat{R}_{MAP}(n) = \arg\max_{r \ge 0} f_{R|N}(r|n) = \arg\max_{r \ge 0} \frac{(\mu + T)^{n+1} r^n e^{-(\mu+T)r}}{n!}   (13)

By setting the derivative with respect to r to zero, we obtain the MAP estimate

\hat{R}_{MAP}(n) = \frac{n}{\mu + T}   (14)
(c) The ML estimate of R given N = n is the value of r that maximizes P_{N|R}(n|r). That is,

\hat{R}_{ML}(n) = \arg\max_{r \ge 0} \frac{(rT)^n e^{-rT}}{n!}   (15)

Setting the derivative with respect to r to zero yields

\hat{R}_{ML}(n) = n/T   (16)
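The MAP result — the mode of an Erlang density of order n + 1 and rate μ + T falls at n/(μ + T) — can be verified by a direct grid search. Below is a Python sketch (an independent check, not part of the original solutions; the values of n and the rate are chosen arbitrarily for illustration):

```python
from math import exp, factorial

def erlang_pdf(r, n, rate):
    # f_{R|N}(r|n) = rate^(n+1) r^n e^{-rate*r} / n!, an Erlang density of order n+1
    return rate**(n + 1) * r**n * exp(-rate * r) / factorial(n)

def grid_argmax(f, lo, hi, steps=100000):
    # locate the maximizer of f on [lo, hi] by exhaustive grid search
    best_r, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        r = lo + (hi - lo) * i / steps
        v = f(r)
        if v > best_v:
            best_r, best_v = r, v
    return best_r

n, rate = 4, 2.5   # hypothetical stand-ins for n and mu + T
rhat = grid_argmax(lambda r: erlang_pdf(r, n, rate), 0.0, 10.0)
print(rhat)   # close to n/rate = 1.6
```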
Differentiating P_{K|Q}(k|q) with respect to q and setting equal to zero yields

\frac{d P_{K|Q}(k|q)}{dq} = \binom{n}{k}\left[k q^{k-1}(1-q)^{n-k} - (n-k)q^k(1-q)^{n-k-1}\right] = 0   (3)

The maximizing value is q = k/n so that

\hat{Q}_{ML}(K) = \frac{K}{n}   (4)
(b) To find the PMF of K, we average over all q.

P_K(k) = \int_{-\infty}^{\infty} P_{K|Q}(k|q)\,f_Q(q)\,dq = \int_0^1 \binom{n}{k} q^k (1-q)^{n-k}\,dq   (5)

We can evaluate this integral by expressing it in terms of the integral of a beta PDF. Since \beta(k+1, n-k+1) = \frac{(n+1)!}{k!(n-k)!}, we can write

P_K(k) = \frac{1}{n+1}\int_0^1 \beta(k+1, n-k+1)\,q^k (1-q)^{n-k}\,dq = \frac{1}{n+1}   (6)

That is, K has the uniform PMF

P_K(k) = \begin{cases} 1/(n+1) & k = 0, 1, \ldots, n \\ 0 & \text{otherwise} \end{cases}   (7)
(d) The MMSE estimate of Q given K = k is the conditional expectation E[Q|K = k]. From the beta PDF described in Appendix A, E[Q|K = k] = (k+1)/(n+2). The MMSE estimator is

\hat{Q}_M(K) = E[Q|K] = \frac{K+1}{n+2}   (9)
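The uniform PMF in (7) is a clean target for a quick check. The Python sketch below (not part of the original MATLAB material) evaluates the integral C(n,k) ∫₀¹ q^k (1−q)^{n−k} dq exactly by binomial expansion and confirms each value equals 1/(n+1):

```python
from fractions import Fraction
from math import comb

def pk(n, k):
    # P_K(k) = C(n,k) * integral_0^1 q^k (1-q)^(n-k) dq, evaluated exactly by
    # expanding (1-q)^(n-k) binomially and integrating term by term
    total = Fraction(0)
    for j in range(n - k + 1):
        total += Fraction(comb(n - k, j) * (-1)**j, k + j + 1)
    return comb(n, k) * total

n = 7
print([pk(n, k) for k in range(n + 1)])   # every entry is 1/8
```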
Problem 9.4.4 Solution
Under construction.
Problem Solutions – Chapter 10
• In Example 10.3, the daily noontime temperature at Newark Airport is a discrete time, continuous value random process. However, if the temperature is recorded only in units of one degree, then the process would be discrete value.
• In Example 10.4, the number of active telephone calls is discrete time and discrete value.
• The dice rolling experiment of Example 10.5 yields a discrete time, discrete value random
process.
• The QPSK system of Example 10.6 is a continuous time and continuous value random pro-
cess.
[Figure: four sample waveforms x(t, s_i), each plotted over 0 ≤ t ≤ T with amplitudes between −1 and 1.]
Problem 10.2.3 Solution
The eight possible waveforms correspond to the bit sequences
{(0, 0, 0), (1, 0, 0), (1, 1, 0), . . . , (1, 1, 1)} (1)
The corresponding eight waveforms are:
[Figure: the eight corresponding waveforms, each taking values ±1 on the intervals [0, T], [T, 2T], [2T, 3T].]
We note that the CDF contains no discontinuities. Taking the derivative of the CDF F_{X(t)}(x) with respect to x, we obtain the PDF

f_{X(t)}(x) = \begin{cases} e^{x-t} & x < t \\ 0 & \text{otherwise} \end{cases}   (4)
(b) To find the PMF of T_1, we view each oscillator test as an independent trial. A success occurs on a trial with probability p if we find a one part in 10^4 oscillator. The first one part in 10^4 oscillator is found at time T_1 = t if we observe failures on trials 1, \ldots, t-1 followed by a success on trial t. Hence, just as in Example 2.11, T_1 has the geometric PMF
P_{T_1}(t) = \begin{cases} (1-p)^{t-1} p & t = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (3)
A geometric random variable with success probability p has mean 1/ p. This is derived
in Theorem 2.5. The expected time to find the first good oscillator is E[T1 ] = 1/ p =
20 minutes.
(c) Since p = 0.05, the probability the first one part in 10^4 oscillator is found in exactly 20 minutes is P_{T_1}(20) = (0.95)^{19}(0.05) = 0.0189.
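Both numbers in parts (b) and (c) are quick to replicate. A small Python check (the manual's own code is MATLAB; the truncation point 4000 is an arbitrary cutoff that leaves a negligible tail):

```python
p = 0.05
# geometric PMF: P_T1(t) = (1-p)^(t-1) * p for t = 1, 2, ...
pmf20 = (1 - p)**19 * p
# mean 1/p, here approximated by truncating the infinite sum at t = 4000
mean = sum(t * (1 - p)**(t - 1) * p for t in range(1, 4000))
print(round(pmf20, 4))   # 0.0189
print(round(mean, 2))    # 20.0
```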
(d) The time T_5 required to find the 5th one part in 10^4 oscillator is the number of trials needed for 5 successes. T_5 is a Pascal random variable. If this is not clear, see Example 2.15 where the Pascal PMF is derived. When we are looking for 5 successes, the Pascal PMF is
P_{T_5}(t) = \begin{cases} \binom{t-1}{4} p^5 (1-p)^{t-5} & t = 5, 6, \ldots \\ 0 & \text{otherwise} \end{cases}   (4)
Looking up the Pascal PMF in Appendix A, we find that E[T5 ] = 5/ p = 100 minutes. The
following argument is a second derivation of the mean of T_5. Once we find the first one part in 10^4 oscillator, the number of additional trials needed to find the next one part in 10^4 oscillator once again has a geometric PMF with mean 1/p since each independent trial is a success with probability p. Similarly, the time required to find 5 one part in 10^4 oscillators is the sum of five independent geometric random variables. That is,
T5 = K 1 + K 2 + K 3 + K 4 + K 5 (5)
where each K_i is identically distributed to T_1. Since the expectation of the sum equals the sum of the expectations, E[T_5] = 5E[K_1] = 5/p = 100 minutes.
Note that the condition T ≤ t is needed to make sure that the pulse doesn't arrive after time t. The other condition T > t + ln x ensures that the pulse didn't arrive too early and already decay too much. We can express these facts in terms of the CDF of X(t).
F_{X(t)}(x) = 1 - P[X(t) > x] = \begin{cases} 0 & x < 0 \\ 1 + F_T(t + \ln x) - F_T(t) & 0 \le x < 1 \\ 1 & x \ge 1 \end{cases}   (2)
We can take the derivative of the CDF to find the PDF. However, we need to keep in mind that the
CDF has a jump discontinuity at x = 0. In particular, since ln 0 = −∞,
Hence, when we take a derivative, we will see an impulse at x = 0. The PDF of X (t) is
f_{X(t)}(x) = \begin{cases} (1 - F_T(t))\delta(x) + f_T(t + \ln x)/x & 0 \le x < 1 \\ 0 & \text{otherwise} \end{cases}   (4)
Problem 10.4.2 Solution
Each Wn is the sum of two identical independent Gaussian random variables. Hence, each Wn
must have the same PDF. That is, the Wn are identically distributed. However, since Wn−1 and Wn
both use X n−1 in their averaging, Wn−1 and Wn are dependent. We can verify this observation by
calculating the covariance of Wn−1 and Wn . First, we observe that for all n,
E [Wn ] = (E [X n ] + E [X n−1 ])/2 = 30 (1)
Next, we observe that W_{n-1} and W_n have covariance

Cov[W_{n-1}, W_n] = E[W_{n-1}W_n] - E[W_n]E[W_{n-1}]   (2)
                  = \frac{1}{4}E[(X_{n-1} + X_{n-2})(X_n + X_{n-1})] - 900   (3)
We observe that for n \ne m, E[X_n X_m] = E[X_n]E[X_m] = 900 while

E[X_n^2] = Var[X_n] + (E[X_n])^2 = 916   (4)
Thus,

Cov[W_{n-1}, W_n] = \frac{900 + 916 + 900 + 900}{4} - 900 = 4   (5)
Since Cov[Wn−1 , Wn ] = 0, Wn and Wn−1 must be dependent.
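This covariance is small relative to Var[W_n] = (16 + 16)/4 = 8, so a simulation check needs many samples. The Python sketch below (a Monte Carlo cross-check, not from the original solutions) draws a long Gaussian sequence with mean 30 and variance 16 and estimates Cov[W_{n-1}, W_n]:

```python
import random

random.seed(1)
n = 200000
# X_n i.i.d. Gaussian with mean 30 and standard deviation 4 (variance 16)
X = [random.gauss(30, 4) for _ in range(n)]
# W_n = (X_n + X_{n-1}) / 2
W = [(X[i] + X[i - 1]) / 2 for i in range(1, n)]
mw = sum(W) / len(W)
# sample covariance of adjacent pairs (W_{n-1}, W_n)
cov = sum((a - mw) * (b - mw) for a, b in zip(W[:-1], W[1:])) / (len(W) - 1)
print(cov)   # near the analytic value 4
```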
Problem 10.5.2 Solution
Following the instructions given, we express each answer in terms of N (m) which has PMF
P_{N(m)}(n) = \begin{cases} (6m)^n e^{-6m}/n! & n = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (1)
(a) The probability of no queries in a one minute interval is P_{N(1)}(0) = 6^0 e^{-6}/0! = 0.00248.
(b) The probability of exactly 6 queries arriving in a one minute interval is P_{N(1)}(6) = 6^6 e^{-6}/6! = 0.161.
(c) The probability of exactly three queries arriving in a one-half minute interval is P_{N(0.5)}(3) = 3^3 e^{-3}/3! = 0.224.
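These three numbers are easy to reproduce. A brief Python check (the manual's own code is MATLAB):

```python
from math import exp, factorial

def poisson_pmf(n, alpha):
    # Poisson PMF with expected value alpha
    return alpha**n * exp(-alpha) / factorial(n)

print(round(poisson_pmf(0, 6), 5))   # (a) 0.00248
print(round(poisson_pmf(6, 6), 3))   # (b) 0.161
print(round(poisson_pmf(3, 3), 3))   # (c) 0.224
```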
For t ≥ 2, the customers in service are precisely those customers that arrived in the interval (t −2, t].
The number of such customers has a Poisson PMF with mean λ[t − (t − 2)] = 2λ. The resulting
PMF of N (t) is
P_{N(t)}(n) = \begin{cases} (2\lambda)^n e^{-2\lambda}/n! & n = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases} \quad (t \ge 2)   (2)
Problem 10.5.6 Solution
The times T between queries are independent exponential random variables with PDF

f_T(t) = \begin{cases} (1/8)e^{-t/8} & t \ge 0 \\ 0 & \text{otherwise} \end{cases}   (1)

From the PDF, we can calculate, for t > 0,

P[T \ge t] = \int_t^\infty f_T(t')\,dt' = e^{-t/8}   (2)
Problem 10.5.8 Solution
Problem 10.6.2 Solution
In an interval (t, t+\Delta] with an infinitesimal \Delta, let A_i denote the event of an arrival of the process N_i(t). Also, let A = A_1 \cup A_2 denote the event of an arrival of either process. Since N_i(t) is a Poisson process, the alternative model says that P[A_i] = \lambda_i \Delta. Also, since N_1(t) + N_2(t) is a Poisson process, the proposed Poisson process model says
Lastly, the conditional probability of a type 1 arrival given an arrival of either type is

P[A_1 | A] = \frac{P[A_1 A]}{P[A]} = \frac{P[A_1]}{P[A]} = \frac{\lambda_1 \Delta}{(\lambda_1 + \lambda_2)\Delta} = \frac{\lambda_1}{\lambda_1 + \lambda_2}   (2)
This solution is something of a cheat in that we have used the fact that the sum of Poisson processes
is a Poisson process without using the proposed model to derive this fact.
Now we can consider the special cases arising when t < 2. When 0 ≤ t < 1, every arrival is still in
service. Thus the number in service N (t) equals the number of arrivals and has the PMF
P_{N(t)}(n) = \begin{cases} (\lambda t)^n e^{-\lambda t}/n! & n = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases} \quad (0 \le t \le 1)   (2)
When 1 ≤ t < 2, let M1 denote the number of customers in the interval (t − 1, t]. All M1 customers
arriving in that interval will be in service at time t. The M2 customers arriving in the interval
(0, t − 1] must each flip a coin to decide on a one minute or two minute service time. Only those
customers choosing the two minute service time will be in service at time t. Since M2 has a Poisson
PMF with mean λ(t − 1), the number M2 of those customers in the system at time t has a Poisson
PMF with mean λ(t − 1)/2. Finally, the number of customers in service at time t has a Poisson
PMF with expected value E[N (t)] = E[M1 ] + E[M2 ] = λ + λ(t − 1)/2. Hence, the PMF of N (t)
becomes
P_{N(t)}(n) = \begin{cases} (\lambda(t+1)/2)^n e^{-\lambda(t+1)/2}/n! & n = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases} \quad (1 \le t \le 2)   (3)
that each S_i is in the interval (s_i, s_i + \Delta] and that N = n. This joint event implies that there were zero arrivals in each interval (s_i + \Delta, s_{i+1}]. That is, over the interval [0, T], the Poisson process has exactly one arrival in each interval (s_i, s_i + \Delta] and zero arrivals in the time period T - \bigcup_{i=1}^n (s_i, s_i + \Delta]. The collection of intervals in which there was no arrival had a total duration of T - n\Delta. Note that the probability of exactly one arrival in the interval (s_i, s_i + \Delta] is \lambda\Delta e^{-\lambda\Delta} and the probability of zero arrivals in a period of duration T - n\Delta is e^{-\lambda(T - n\Delta)}. In addition, the event of one arrival in each interval (s_i, s_i + \Delta] and zero events in the period of length T - n\Delta are independent events because they consider non-overlapping periods of the Poisson process. Thus,

P[s_1 < S_1 \le s_1 + \Delta, \ldots, s_n < S_n \le s_n + \Delta, N = n] = \left(\lambda\Delta e^{-\lambda\Delta}\right)^n e^{-\lambda(T - n\Delta)}   (1)
= (\lambda\Delta)^n e^{-\lambda T}   (2)
If it seems that the above argument had some “hand waving” preceding Equation (1), we now do
the derivation of Equation (1) in somewhat excruciating detail. (Feel free to skip the following if
you were satisfied with the earlier explanation.)
For the interval (s, t], we use the shorthand notation 0_{(s,t)} and 1_{(s,t)} to denote the events of 0 arrivals and 1 arrival respectively. This notation permits us to write
The set of events 0_{(0,s_1)}, 0_{(s_n+\Delta,T)}, and, for i = 1, \ldots, n-1, 0_{(s_i+\Delta,s_{i+1})} and 1_{(s_i,s_i+\Delta)} are independent because each event depends on the Poisson process in a time interval that overlaps none of the other time intervals. In addition, since the Poisson process has rate \lambda, P[0_{(s,t)}] = e^{-\lambda(t-s)} and P[1_{(s_i,s_i+\Delta)}] = \lambda\Delta e^{-\lambda\Delta}. Thus,
each Wn can be written as the sum
W1 = X 1 (1)
W2 = X 1 + X 2 (2)
..
.
Wk = X 1 + X 2 + · · · + X k . (3)
f_W(w) = f_X\left(A^{-1}w\right) = \prod_{n=1}^{k} f_X(w_n - w_{n-1}),   (7)

where w_0 = 0.
Problem 10.8.2 Solution
Recall that X (t) = t − W where E[W ] = 1 and E[W 2 ] = 2.
(c) A model of this type may be able to capture the mean and variance of the daily temperature.
However, one reason this model is overly simple is because day to day temperatures are
uncorrelated. A more realistic model might incorporate the effects of “heat waves” or “cold
spells” through correlated daily temperatures.
Problem 10.8.4 Solution
By repeated application of the recursion C_n = C_{n-1}/2 + 4X_n, we obtain

C_n = \frac{C_{n-2}}{4} + 4\left(\frac{X_{n-1}}{2} + X_n\right)   (1)
    = \frac{C_{n-3}}{8} + 4\left(\frac{X_{n-2}}{4} + \frac{X_{n-1}}{2} + X_n\right)   (2)
    \vdots   (3)
    = \frac{C_0}{2^n} + 4\left(\frac{X_1}{2^{n-1}} + \frac{X_2}{2^{n-2}} + \cdots + X_n\right)   (4)
    = \frac{C_0}{2^n} + 4\sum_{i=1}^{n} \frac{X_i}{2^{n-i}}   (5)

It follows that

E[C_n] = \frac{E[C_0]}{2^n} + 4\sum_{i=1}^{n} \frac{E[X_i]}{2^{n-i}} = 0   (6)
For i \ne j, E[X_i X_j] = 0, so that only the i = j terms make any contribution to the double sum. However, at this point, we must consider the cases k ≥ 0 and k < 0 separately. Since each X_i has variance 1, the autocovariance for k ≥ 0 is

C_C[m,k] = \frac{1}{2^{2m+k}} + 16\sum_{i=1}^{m} \frac{1}{2^{2m+k-2i}}   (9)
         = \frac{1}{2^{2m+k}} + \frac{16}{2^k}\sum_{i=1}^{m} (1/4)^{m-i}   (10)
         = \frac{1}{2^{2m+k}} + \frac{16}{2^k}\,\frac{1 - (1/4)^m}{3/4}   (11)
For k < 0, we can write

C_C[m,k] = \frac{E[C_0^2]}{2^{2m+k}} + 16\sum_{i=1}^{m}\sum_{j=1}^{m+k} \frac{E[X_i X_j]}{2^{m-i}\,2^{m+k-j}}   (13)
         = \frac{1}{2^{2m+k}} + 16\sum_{i=1}^{m+k} \frac{1}{2^{2m+k-2i}}   (14)
         = \frac{1}{2^{2m+k}} + \frac{16}{2^{-k}}\sum_{i=1}^{m+k} (1/4)^{m+k-i}   (15)
         = \frac{1}{2^{2m+k}} + \frac{16}{2^{-k}}\,\frac{1 - (1/4)^{m+k}}{3/4}   (16)

A general expression that's valid for all m and k is

C_C[m,k] = \frac{1}{2^{2m+k}} + \frac{16}{2^{|k|}}\,\frac{1 - (1/4)^{\min(m,m+k)}}{3/4}   (17)
(c) Since E[C_i] = 0 for all i, our model has a mean daily temperature of zero degrees Celsius for the entire year. This is not a reasonable model for a year.
(d) For the month of January, a mean temperature of zero degrees Celsius seems quite reasonable. We can calculate the variance of C_n by evaluating the covariance at m = n, k = 0. This yields

Var[C_n] = \frac{1}{4^n} + \frac{16}{4^n}\,\frac{4(4^n - 1)}{3}   (18)

Note that the variance is upper bounded by Var[C_n] ≤ 64/3.
By the definition of the Poisson process, N (s) and N (t) − N (s) are independent for s < t. This
implies
E [N (s)[N (t) − N (s)]] = E [N (s)] E [N (t) − N (s)] = λs(λt − λs) (4)
365
Note that since N(s) is a Poisson random variable, Var[N(s)] = \lambda s. Hence

E[N^2(s)] = Var[N(s)] + (E[N(s)])^2 = \lambda s + (\lambda s)^2   (5)

If s > t, then we can interchange the labels s and t in the above steps to show C_N(s,t) = \lambda t. For arbitrary s and t, we can combine these facts to write C_N(s,t) = \lambda \min(s,t).
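The covariance C_N(s,t) = λ min(s,t) worked out above can also be seen in simulation. The Python sketch below (a Monte Carlo cross-check, not part of the original solutions) builds N(s) and N(t) from independent increments, drawing Poisson samples with Knuth's method:

```python
import random
from math import exp

random.seed(7)

def poisson_rv(mu):
    # Knuth's method: count uniform draws until their product drops below e^{-mu}
    L, k, p = exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

lam, s, t, trials = 2.0, 1.5, 4.0, 100000
xs, ys = [], []
for _ in range(trials):
    ns = poisson_rv(lam * s)              # N(s)
    nt = ns + poisson_rv(lam * (t - s))   # N(t), via the independent increment
    xs.append(ns)
    ys.append(nt)
mx, my = sum(xs) / trials, sum(ys) / trials
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / trials
print(cov)   # near lambda * min(s, t) = 3
```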
f Y (t1 ),...,Y (tk ) (y1 , . . . , yk ) = f X (t1 +a),...,X (tk +a) (y1 , . . . , yk ) (1)
Thus,
f Y (t1 +τ ),...,Y (tk +τ ) (y1 , . . . , yk ) = f X (t1 +τ +a),...,X (tk +τ +a) (y1 , . . . , yk ) (2)
f X (t1 +τ +a),...,X (tk +τ +a) (y1 , . . . , yk ) = f X (t1 +a),...,X (tk +a) (y1 , . . . , yk ) (3)
This implies
f Y (t1 +τ ),...,Y (tk +τ ) (y1 , . . . , yk ) = f X (t1 +a),...,X (tk +a) (y1 , . . . , yk ) = f Y (t1 ),...,Y (tk ) (y1 , . . . , yk ) (4)
Thus,
f_{Y(t_1+\tau),\ldots,Y(t_k+\tau)}(y_1,\ldots,y_k) = f_{X(at_1+a\tau),\ldots,X(at_k+a\tau)}(y_1,\ldots,y_k)   (2)

We see that a time offset of \tau for the Y(t) process corresponds to an offset of a\tau for the X(t) process. Since X(t) is a stationary process,
Problem 10.9.3 Solution
For a set of time samples n_1, \ldots, n_m and an offset k, we note that Y_{n_i+k} = X((n_i+k)\Delta). This implies

f_{Y_{n_1+k},\ldots,Y_{n_m+k}}(y_1,\ldots,y_m) = f_{X((n_1+k)\Delta),\ldots,X((n_m+k)\Delta)}(y_1,\ldots,y_m)   (1)

Since X(t) is a stationary process,

f_{Y_{n_1+k},\ldots,Y_{n_m+k}}(y_1,\ldots,y_m) = f_{X(n_1\Delta+k\Delta),\ldots,X(n_m\Delta+k\Delta)}(y_1,\ldots,y_m)   (2)
= f_{X(n_1\Delta),\ldots,X(n_m\Delta)}(y_1,\ldots,y_m)   (3)
= f_{Y_{n_1},\ldots,Y_{n_m}}(y_1,\ldots,y_m).   (4)
Since X(t) is a stationary process, the joint PDF of X(t_1+\tau), \ldots, X(t_n+\tau) is the same as the joint PDF of X(t_1), \ldots, X(t_n). Thus

f_{Y(t_1+\tau),\ldots,Y(t_n+\tau)}(y_1,\ldots,y_n) = \int_0^\infty \frac{1}{a^n}\,f_{X(t_1+\tau),\ldots,X(t_n+\tau)}\!\left(\frac{y_1}{a},\ldots,\frac{y_n}{a}\right) f_A(a)\,da   (5)
= \int_0^\infty \frac{1}{a^n}\,f_{X(t_1),\ldots,X(t_n)}\!\left(\frac{y_1}{a},\ldots,\frac{y_n}{a}\right) f_A(a)\,da   (6)
= f_{Y(t_1),\ldots,Y(t_n)}(y_1,\ldots,y_n)   (7)
We can conclude that Y (t) is a stationary process.
In principle, we can calculate P[Aτ ] by integrating f X (t1 +τ ),...,X (tn +τ ) (x1 , . . . , xn ) over the region
corresponding to event Aτ . Since X (t) is a stationary process,
f X (t1 +τ ),...,X (tn +τ ) (x1 , . . . , xn ) = f X (t1 ),...,X (tn ) (x1 , . . . , xn ) (4)
This implies P[Aτ ] does not depend on τ . In particular,
FY (t1 +τ ),...,Y (tn +τ ) (y1 , . . . , yn ) = P [Aτ ] (5)
= P [g(X (t1 )) ≤ y1 , . . . , g(X (tn )) ≤ yn ] (6)
= FY (t1 ),...,Y (tn ) (y1 , . . . , yn ) (7)
Problem 10.10.2 Solution
Since Y (t) = A + X (t), the mean of Y (t) is
E [Y (t)] = E [A] + E [X (t)] = E [A] + µ X (1)
The autocorrelation of Y(t) is

R_Y(t,\tau) = E[(A + X(t))(A + X(t+\tau))]   (2)
= E[A^2] + E[A]E[X(t)] + E[A]E[X(t+\tau)] + E[X(t)X(t+\tau)]   (3)
= E[A^2] + 2E[A]\mu_X + R_X(\tau)   (4)

We see that neither E[Y(t)] nor R_Y(t,\tau) depends on t. Thus Y(t) is a wide sense stationary process.
Problem 10.10.4 Solution
(a) In the problem statement, we are told that X (t) has average power equal to 1. By Defini-
tion 10.16, the average power of X (t) is E[X 2 (t)] = 1.
Note that the mean of Y(t) is zero no matter what the mean of X(t) since the random phase cosine has zero mean.
(d) Independence of X(t) and \Theta results in the average power of Y(t) being

E[Y^2(t)] = E[X^2(t)\cos^2(2\pi f_c t + \Theta)]   (7)
= E[X^2(t)]\,E[\cos^2(2\pi f_c t + \Theta)]   (8)
= E[\cos^2(2\pi f_c t + \Theta)]   (9)

Note that we have used the fact from part (a) that X(t) has unity average power. To finish the problem, we use the trigonometric identity \cos^2\phi = (1 + \cos 2\phi)/2. This yields

E[Y^2(t)] = E\left[\tfrac{1}{2}\left(1 + \cos(2\pi(2f_c)t + 2\Theta)\right)\right] = 1/2   (10)

Note that E[\cos(2\pi(2f_c)t + 2\Theta)] = 0 by the argument given in part (b) with 2f_c replacing f_c.
Problem 10.10.5 Solution
This proof simply parallels the proof of Theorem 10.12. For the first item, R_X[0] = R_X[m,0] = E[X_m^2]. Since X_m^2 \ge 0, we must have E[X_m^2] \ge 0. For the second item, Definition 10.13 implies that

R_X[k] = R_X[m,k] = E[X_m X_{m+k}] = E[X_{m+k} X_m] = R_X[m+k,-k]   (1)

Since X_m is wide sense stationary, R_X[m+k,-k] = R_X[-k]. The final item requires more effort. First, we note that when X_m is wide sense stationary, Var[X_m] = C_X[0], a constant for all m. Second, Theorem 4.17 implies that C_X[m,k] \le C_X[0].
Now for any numbers a, b, and c, if a \le b and c \ge 0, then (a+c)^2 \le (b+c)^2. Choosing a = C_X[m,k], b = C_X[0], and c = \mu_X^2 yields

\left(C_X[m,k] + \mu_X^2\right)^2 \le \left(C_X[0] + \mu_X^2\right)^2   (3)

In the above expression, the left side equals (R_X[k])^2 while the right side is (R_X[0])^2, which proves the third part of the theorem.
In addition,
(b) To examine whether X (t) and W (t) are jointly wide sense stationary, we calculate
Since W (t) and X (t) are both wide sense stationary and since RW X (t, τ ) depends only on the
time difference τ , we can conclude from Definition 10.18 that W (t) and X (t) are jointly wide
sense stationary.
Problem 10.11.2 Solution
To show that X (t) and X i (t) are jointly wide sense stationary, we must first show that X i (t) is wide
sense stationary and then we must show that the cross correlation R X X i (t, τ ) is only a function of
the time difference τ . For each X i (t), we have to check whether these facts are implied by the fact
that X (t) is wide sense stationary.
we have verified that X 1 (t) is wide sense stationary. Now we calculate the cross correlation
Since R X X 1 (t, τ ) depends on the time difference τ but not on the absolute time t, we conclude
that X (t) and X 1 (t) are jointly wide sense stationary.
we have verified that X 2 (t) is wide sense stationary. Now we calculate the cross correlation
Except for the trivial case when a = 1 and X_2(t) = X(t), R_{XX_2}(t,\tau) depends on both the absolute time t and the time difference \tau. We conclude that X(t) and X_2(t) are not jointly wide sense stationary.
(b) The cross correlation of X (t) and Y (t) is
(c) We have already verified that RY (t, τ ) depends only on the time difference τ . Since E[Y (t)] =
E[X (t − t0 )] = µ X , we have verified that Y (t) is wide sense stationary.
(d) Since X (t) and Y (t) are wide sense stationary and since we have shown that R X Y (t, τ ) de-
pends only on τ , we know that X (t) and Y (t) are jointly wide sense stationary.
Comment: This problem is badly designed since the conclusions don’t depend on the specific
R X (τ ) given in the problem text. (Sorry about that!)
Problem 10.12.1 Solution
Writing Y(t+\tau) = \int_0^{t+\tau} N(v)\,dv permits us to write the autocorrelation of Y(t) as

R_Y(t,\tau) = E[Y(t)Y(t+\tau)] = E\left[\int_0^t \int_0^{t+\tau} N(u)N(v)\,dv\,du\right]   (1)
= \int_0^t \int_0^{t+\tau} E[N(u)N(v)]\,dv\,du   (2)
= \int_0^t \int_0^{t+\tau} \alpha\delta(u-v)\,dv\,du   (3)
At this point, it matters whether τ ≥ 0 or if τ < 0. When τ ≥ 0, then v ranges from 0 to t + τ and
at some point in the integral over v we will have v = u. That is, when τ ≥ 0,
R_Y(t,\tau) = \int_0^t \alpha\,du = \alpha t   (4)
When τ < 0, then we must reverse the order of integration. In this case, when the inner integral is
over u, we will have u = v at some point.
R_Y(t,\tau) = \int_0^{t+\tau} \int_0^t \alpha\delta(u-v)\,du\,dv   (5)
= \int_0^{t+\tau} \alpha\,dv = \alpha(t+\tau)   (6)
Problem 10.12.2 Solution
Let µi = E[X (ti )].
(c) The general form of the multivariate density for X(t_1), X(t_2) is

f_{X(t_1),X(t_2)}(x_1,x_2) = \frac{1}{(2\pi)^{k/2}|C|^{1/2}}\,e^{-\frac{1}{2}(\bar{x}-\bar{\mu}_X)'C^{-1}(\bar{x}-\bar{\mu}_X)}   (3)

where k = 2, \bar{x} = [x_1\ x_2]' and \bar{\mu}_X = [\mu_1\ \mu_2]'. Hence,

\frac{1}{(2\pi)^{k/2}|C|^{1/2}} = \frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2(1-\rho^2)}}.   (4)
Plugging in each piece into the joint PDF f X (t1 ),X (t2 ) (x1 , x2 ) given above, we obtain the bi-
variate Gaussian PDF.
Problem 10.12.3 Solution
Let W = [W(t_1)\ W(t_2)\ \cdots\ W(t_n)]' denote a vector of samples of a Brownian motion process. To prove that W(t) is a Gaussian random process, we must show that W is a Gaussian random vector. To do so, let

X = [X_1 \cdots X_n]' = [W(t_1)\ \ W(t_2)-W(t_1)\ \ W(t_3)-W(t_2)\ \cdots\ W(t_n)-W(t_{n-1})]'   (1)
denote the vector of increments. By the definition of Brownian motion, X 1 , . . . , X n is a sequence
of independent Gaussian random variables. Thus X is a Gaussian random vector. Finally,
W = \begin{bmatrix} W_1 \\ W_2 \\ \vdots \\ W_n \end{bmatrix} = \begin{bmatrix} X_1 \\ X_1 + X_2 \\ \vdots \\ X_1 + \cdots + X_n \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & & & \\ 1 & 1 & & \\ \vdots & & \ddots & \\ 1 & 1 & \cdots & 1 \end{bmatrix}}_{A} X.   (2)
Since X is a Gaussian random vector and W = AX with A a rank n matrix, Theorem 5.16 implies
that W is a Gaussian random vector.
In noisycosine.m, we use a function subsample.m to obtain the discrete time sample func-
tions. In fact, subsample is hardly necessary since it’s such a simple one-line M ATLAB function:
function y=subsample(x,n)
%input x(1), x(2) ...
%output y(1)=x(1), y(2)=x(1+n), y(3)=x(2n+1)
y=x(1:n:length(x));
However, we use it just to make noisycosine.m a little more clear.
>> t=(1:600)’;
>> M=simswitch(10,0.1,t);
>> Mavg=cumsum(M)./t;
>> plot(t,M,t,Mavg);
will simulate the switch for 600 minutes, producing the vector M of samples of M(t) each minute,
the vector Mavg which is the sequence of time average estimates, and a plot resembling this:
[Figure: M(t) and its running time average versus t for 0 ≤ t ≤ 600; both settle near 100.]
From the figure, it appears that the time average is converging to a value in the neighborhood of 100.
In particular, because the switch is initially empty with M(0) = 0, it takes a few hundred minutes
for the time average to climb to something close to 100. Following the problem instructions, we can
write the following short program to examine ten simulation runs:
function Mavg=simswitchavg(T,k)
%Usage: Mavg=simswitchavg(T,k)
%simulate k runs of duration T of the
%telephone switch in Chapter 10
%and plot the time average of each run
t=(1:T)';
%each column of Mavg is a time average sample run
Mavg=zeros(T,k);
for n=1:k,
  M=simswitch(10,0.1,t);
  Mavg(:,n)=cumsum(M)./t;
end
plot(t,Mavg);
[Figure: the time averages of k = 10 sample runs versus t; each settles near 100.]
From the graph, one can see that even after T = 600 minutes, each sample run produces a time average \bar{M}_{600} around 100. Note that in Chapter 12, we will be able to use Markov chains to prove that the expected number of calls in the switch is in fact 100. However, note that even if T is large, \bar{M}_T is
still a random variable. From the above plot, one might guess that M 600 has a standard deviation of
perhaps σ = 2 or σ = 3. An exact calculation of the variance of M 600 is fairly difficult because it is
a sum of dependent random variables, each of which has a PDF that is in itself reasonably difficult
to calculate.
function M=simswitchd(lambda,T,t)
%Poisson arrivals, rate lambda
%Deterministic (T) call duration
%For vector t of times
%M(i) = no. of calls at time t(i)
s=poissonarrivals(lambda,max(t));
y=s+T;
A=countup(s,t);
D=countup(y,t);
M=A-D;
Note that if you compare simswitch.m in the text with simswitchd.m here, two changes oc-
curred. The first is that the exponential call durations are replaced by the deterministic time T . The
other change is that count(s,t) is replaced by countup(s,t). In fact, n=countup(x,y) does exactly the same thing as n=count(x,y); in both cases, n(i) is the number of elements of x less than or equal to y(i). The difference is that countup requires that the vectors x and y be nondecreasing.
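In NumPy terms, count and countup are both sorted searches. A sketch of countup (our own, using np.searchsorted; like the text's version it assumes x is nondecreasing):

```python
import numpy as np

def countup(x, y):
    """For nondecreasing x, return n with n[i] = number of elements
    of x less than or equal to y[i]."""
    return np.searchsorted(np.asarray(x), np.asarray(y), side='right')

n = countup([1, 2, 2, 5], [0, 2, 6])
print(n)   # [0 3 4]
```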
Now we use the same procedure as in Problem 10.13.2 and form the time average

M(T) = (1/T) Σ_{t=1}^{T} M(t).  (1)
>> t=(1:600)’;
>> M=simswitchd(100,1,t);
>> Mavg=cumsum(M)./t;
>> plot(t,Mavg);
[Figure: the time average of M(t) for 0 <= t <= 600, hovering near 100.]
We used the word "vaguely" because at t = 1, the time average is simply the number of arrivals in the first minute, a Poisson (α = 100) random variable that has not been averaged. Thus,
the left side of the graph will be random for each run. As expected, the time average appears to be
converging to 100.
When n is small, it doesn’t much matter if we are efficient because the amount of calculation is
small. The question that must be addressed is to estimate P[K = 1] when n is large. In this case,
we can use the central limit theorem because Sn is the sum of n exponential random variables. Since
E[Sn] = n/λ and Var[Sn] = n/λ²,

P[Sn > T] = P[(Sn − n/λ)/√(n/λ²) > (T − n/λ)/√(n/λ²)] ≈ Q((λT − n)/√n)  (2)

To simplify our algebra, we assume for large n that 0.1λT is an integer. In this case, n = 1.1λT and

P[Sn > T] ≈ Q(−0.1λT/√(1.1λT)) = Φ(√(λT/110))  (3)

Thus for large λT, P[K = 1] is very small. For example, if λT = 1,000, P[Sn > T] ≈ Φ(3.01) = 0.9987. If λT = 10,000, P[Sn > T] ≈ Φ(9.5).
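The approximation is easy to check numerically. The sketch below (our own, with λ = 1 so that T = λT = 1,000) computes Φ(√(λT/110)) and compares it against a Monte Carlo estimate of P[Sn > T] with n = 1.1λT:

```python
import math
import numpy as np

lamT = 1000
n = int(1.1 * lamT)                      # n = 1.1 * lambda * T
# CLT approximation: P[Sn > T] ~ Phi(sqrt(lamT/110))
clt = 0.5 * (1 + math.erf(math.sqrt(lamT / 110) / math.sqrt(2)))
print(round(clt, 4))                     # about 0.9987

# Monte Carlo: with lambda = 1, Sn is a sum of n unit-rate exponentials
rng = np.random.default_rng(0)
Sn = rng.exponential(1.0, size=(5000, n)).sum(axis=1)
mc = np.mean(Sn > lamT)
print(round(mc, 4))                      # close to the CLT value
```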
>> t=cputime;s=poissonarrivals(1,100000);t=cputime-t
t =
0.0900
>> t=cputime;s=poissonarrivals(1,100000);t=cputime-t
t =
0.1110
>> t=cputime;s=newarrivals(1,100000);t=cputime-t
t =
0.5310
>> t=cputime;s=newarrivals(1,100000);t=cputime-t
t =
0.5310
>> t=cputime;poissonrv(100000,1);t=cputime-t
t =
0.5200
>> t=cputime;poissonrv(100000,1);t=cputime-t
t =
0.5210
>>
function pb=brownbarrier(alpha,b,T)
%pb=brownbarrier(alpha,b,T)
%Brownian motion process, parameter alpha
%with barriers at -b and b, sampled each
%unit of time until time T
%Returns vector pb:
%pb(1)=fraction of time at -b
%pb(2)=fraction of time at b
T=ceil(T);
x=sqrt(alpha).*gaussrv(0,1,T);
w=0;pb=zeros(1,2);
for k=1:T,
w=w+x(k);
if (w <= -b)
w=-b;
pb(1)=pb(1)+1;
elseif (w >= b)
w=b;
pb(2)=pb(2)+1;
end
end
pb=pb/T;
In brownbarrier, pb(1) tracks how often the process touches the left barrier at −b while
pb(2) tracks how often the right side barrier at b is reached. By symmetry, P[X (t) = b] =
P[X (t) = −b]. Thus if T is chosen very large, we should expect pb(1)=pb(2). The extent to
which this is not the case gives an indication of the extent to which we are merely estimating the
barrier probability. For each T ∈ {10,000, 100,000, 1,000,000}, here are two sample runs:
>> pb=brownbarrier(0.01,1,10000)
pb =
0.0301 0.0353
>> pb=brownbarrier(0.01,1,10000)
pb =
0.0417 0.0299
>> pb=brownbarrier(0.01,1,100000)
pb =
0.0333 0.0360
>> pb=brownbarrier(0.01,1,100000)
pb =
0.0341 0.0305
>> pb=brownbarrier(0.01,1,1000000)
pb =
0.0323 0.0342
>> pb=brownbarrier(0.01,1,1000000)
pb =
0.0333 0.0324
>>
The sample runs show that for α = 0.01 and b = 1,

P[X(t) = −b] ≈ P[X(t) = b] ≈ 0.03.  (2)
Otherwise, the numerical simulations are not particularly instructive. Perhaps the most important
thing to understand is that the Brownian motion process with barriers is very different from the ordinary Brownian motion process. Remember that for ordinary Brownian motion, the variance of X(t)
always increases linearly with t. For the process with barriers, X 2 (t) ≤ b2 and thus Var[X (t)] ≤ b2 .
In fact, for the process with barriers, the PDF of X (t) converges to a limiting PDF as t becomes
large. If you’re curious, you shouldn’t have much trouble digging in the library to find this PDF.
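A Python port of brownbarrier (our own naming and seeding, not from the text) behaves the same way; both barrier fractions come out near 0.03 for α = 0.01 and b = 1:

```python
import numpy as np

def brownbarrier(alpha, b, T, seed=None):
    """Brownian motion (variance parameter alpha) with barriers at -b and b,
    sampled each unit of time up to T; returns the fraction of samples
    at each barrier as (fraction at -b, fraction at +b)."""
    rng = np.random.default_rng(seed)
    x = np.sqrt(alpha) * rng.standard_normal(int(T))
    w, hits = 0.0, [0, 0]
    for step in x:
        w += step
        if w <= -b:          # stick to the left barrier
            w = -b
            hits[0] += 1
        elif w >= b:         # stick to the right barrier
            w = b
            hits[1] += 1
    return hits[0] / len(x), hits[1] / len(x)

pb = brownbarrier(0.01, 1, 100000, seed=0)
print(pb)   # both fractions near 0.03
```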
function I=simswitchdepart(lambda,mu,T)
%Usage: I=simswitchdepart(lambda,mu,T)
%Poisson arrivals, rate lambda
%Exponential (mu) call duration
%Over time [0,T], returns I,
%the vector of inter-departure times
%M(i) = no. of calls at time t(i)
s=poissonarrivals(lambda,T);
y=s+exponentialrv(mu,length(s));
y=sort(y);
n=length(y);
I=y-[0; y(1:n-1)]; %interdeparture times
imax=max(I);b=ceil(n/100);
id=imax/b; x=id/2:id:imax;
pd=hist(I,x); pd=pd/sum(pd);
px=exponentialpdf(lambda,x)*id;
plot(x,px,x,pd);
xlabel(’\it x’);ylabel(’Probability’);
legend(’Exponential PDF’,’Relative Frequency’);
[Figure: exponential PDF versus the relative frequency of the inter-departure times; the two curves nearly coincide for 0 <= x <= 1.]
As seen in the figure, the match is quite good. Although this is not a carefully designed statistical
test of whether the inter-departure times are exponential random variables, it is enough evidence
that one may want to pursue whether such a result can be proven.
In fact, the switch in this problem is an example of an M/M/∞ queuing system for which it has been shown that not only do the inter-departure times have an exponential distribution, but the steady-state
departure process is a Poisson process. For the curious reader, details can be found, for example, in
the text Discrete Stochastic Processes by Gallager.
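The same experiment is easy to repeat in Python. This sketch (our own port; parameters as in the text's run, λ = 10 and μ = 0.1) checks the exponential claim crudely through the first two moments: an exponential (λ) inter-departure time has mean and standard deviation both equal to 1/λ = 0.1.

```python
import numpy as np

def interdepartures(lam, mu, T, seed=None):
    """Inter-departure times of an M/M/infinity switch over [0, T]:
    Poisson (rate lam) arrivals, exponential (mu) call durations."""
    rng = np.random.default_rng(seed)
    n = int(1.2 * lam * T + 10 * np.sqrt(lam * T))
    s = np.cumsum(rng.exponential(1 / lam, size=n))        # arrivals
    s = s[s <= T]
    y = np.sort(s + rng.exponential(1 / mu, size=len(s)))  # departures
    y = y[y <= T]                                          # keep [0, T] only
    return np.diff(y)

I = interdepartures(10, 0.1, 1000, seed=2)
print(round(I.mean(), 3), round(I.std(), 3))   # both near 1/lam = 0.1
```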
Problem Solutions – Chapter 11
We see that RY (t, τ ) only depends on the time difference τ . Thus Y (t) is wide sense stationary.
Problem 11.2.1 Solution
RY[n] = (1/9) Σ_{i=−1}^{1} Σ_{j=−1}^{1} RX[n + i − j]  (4)
      = (1/9)(RX[n+2] + 2RX[n+1] + 3RX[n] + 2RX[n−1] + RX[n−2])  (5)
Substituting in R X [n] yields
RY[n] = { 1/3, n = 0;  2/9, |n| = 1;  1/9, |n| = 2;  0, otherwise }  (6)
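The values in (6) are easy to verify numerically: with RX[n] = δ[n], the double sum Σi Σj hi hj RX[n + i − j] collapses to the correlation of the filter h = (1/3, 1/3, 1/3) with itself. A quick check (Python here rather than MATLAB):

```python
import numpy as np

h = np.array([1, 1, 1]) / 3     # three-point averaging filter
# With R_X[n] = delta[n], R_Y[n] is the correlation of h with itself:
RY = np.convolve(h, h[::-1])
print(RY * 9)                   # [1. 2. 3. 2. 1.], i.e. RY = (1/9)(1,2,3,2,1)
```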
(b) Theorem 11.5 also says that the output autocorrelation is
RW[n] = Σ_{i=−∞}^{∞} Σ_{j=−∞}^{∞} h_i h_j RY[n + i − j]  (2)
      = Σ_{i=0}^{1} Σ_{j=0}^{1} RY[n + i − j]  (3)
For n = −3,
Following the same procedure, it's easy to show that RW[n] is nonzero for |n| ≤ 3. Specifically,

RW[n] = { 0.5, |n| = 3;  3, |n| = 2;  7.5, |n| = 1;  10, n = 0;  0, otherwise }  (6)
(c) The second moment of the output is E[Wn²] = RW[0] = 10. The variance of Wn is

Var[Wn] = E[Wn²] − (E[Wn])² = 10 − 2² = 6  (7)
= Σ_{i=0}^{1} Σ_{j=0}^{1} h_i h_j RY[n + i − j]  (3)

For n = −3,
Following the same procedure, it's easy to show that RV[n] is nonzero for |n| ≤ 3. Specifically,

RV[n] = { −0.5, |n| = 3;  −1, |n| = 2;  0.5, |n| = 1;  2, n = 0;  0, otherwise }  (6)
(c) Since E[Vn] = 0, the variance of the output is

Var[Vn] = E[Vn²] = RV[0] = 2  (7)
= R X [n − 1] + 2R X [n] + R X [n + 1] (2)
This suggests that R X [n] = 0 for |n| > 1. In addition, we have the following facts:
The covariance of Yi+1 and Yi can be found by the same method.

Cov[Yi+1, Yi] = E[((1/2)Xn + (1/4)Xn−1 + (1/8)Xn−2 + ...)((1/2)Xn−1 + (1/4)Xn−2 + (1/8)Xn−3 + ...)]  (4)

Since E[Xi Xj] = 0 for all i ≠ j, the only terms that are left are

Cov[Yi+1, Yi] = Σ_{i=1}^{∞} (1/2^{i+1})(1/2^i) E[Xi²] = (1/2) Σ_{i=1}^{∞} (1/4^i) E[Xi²]  (5)
E[Xn] = cⁿ E[X0] + Σ_{i=0}^{n−1} c^{n−1−i} E[Zi] = cⁿ E[X0]  (7)
Thus, for X n to be a zero mean process, we require that E[X 0 ] = 0. The autocorrelation function
can be written as
RX[n, k] = E[Xn Xn+k] = E[(cⁿ X0 + Σ_{i=0}^{n−1} c^{n−1−i} Zi)(c^{n+k} X0 + Σ_{j=0}^{n+k−1} c^{n+k−1−j} Zj)]  (8)
Although it was unstated in the problem, we will assume that X 0 is independent of Z 0 , Z 1 , . . . so
that E[X 0 Z i ] = 0. Since E[Z i ] = 0 and E[Z i Z j ] = 0 for i = j, most of the cross terms will drop
out. For k ≥ 0, the autocorrelation simplifies to

RX[n, k] = c^{2n+k} Var[X0] + Σ_{i=0}^{n−1} c^{2(n−1)+k−2i} σ̄² = c^{2n+k} Var[X0] + σ̄² c^k (1 − c^{2n})/(1 − c²)  (9)

= c^{2n+k} Var[X0] + c^{−k} Σ_{j=0}^{n+k−1} c^{2(n+k−1−j)} σ̄²  (12)
= c^{2n+k} σ² + σ̄² c^{−k} (1 − c^{2(n+k)})/(1 − c²)  (13)
= (σ̄²/(1 − c²)) c^{−k} + c^{2n+k} (σ² − σ̄²/(1 − c²))  (14)

σ̄² = (1 − c²) σ²  (15)
Yn = a Xn + a Yn−1  (1)
   = a Xn + a[a Xn−1 + a Yn−2]  (2)
   = a Xn + a² Xn−1 + a²[a Xn−2 + a Yn−3]  (3)

Yn = Σ_{j=0}^{n} a^{j+1} Xn−j + aⁿ Y0  (4)

Yn = Σ_{i=0}^{n} a^{n−i+1} Xi  (5)
Now we can calculate the mean

E[Yn] = E[Σ_{i=0}^{n} a^{n−i+1} Xi] = Σ_{i=0}^{n} a^{n−i+1} E[Xi] = 0  (6)
To calculate the autocorrelation RY[m, k], we consider first the case when k ≥ 0.

CY[m, k] = E[(Σ_{i=0}^{m} a^{m−i+1} Xi)(Σ_{j=0}^{m+k} a^{m+k−j+1} Xj)] = Σ_{i=0}^{m} Σ_{j=0}^{m+k} a^{m−i+1} a^{m+k−j+1} E[Xi Xj]  (7)

CY[m, k] = Σ_{i=0}^{m} a^{m−i+1} a^{m+k−i+1}  (9)
         = a^k Σ_{i=0}^{m} a^{2(m−i+1)}  (10)
         = a^k [(a²)^{m+1} + (a²)^m + · · · + a²]  (11)
         = a^k a² (1 − (a²)^{m+1})/(1 − a²)  (12)

For k ≤ 0, we start from

CY[m, k] = Σ_{i=0}^{m} Σ_{j=0}^{m+k} a^{m−i+1} a^{m+k−j+1} E[Xi Xj]  (13)

CY[m, k] = Σ_{j=0}^{m+k} a^{m−j+1} a^{m+k−j+1} = a^{−k} Σ_{j=0}^{m+k} a^{m+k−j+1} a^{m+k−j+1}  (14)

CY[m, k] = a^{−k} a² (1 − (a²)^{m+k+1})/(1 − a²)  (15)

A general expression that is valid for all m and k would be

CY[m, k] = a^{|k|} a² (1 − (a²)^{min(m,m+k)+1})/(1 − a²)  (16)
Problem 11.3.1 Solution
Since the process Xn has expected value E[Xn] = 0, we know that CX(k) = RX(k) = 2^{−|k|}. Thus X = [X1 X2 X3]′ has covariance matrix

CX = [ 2⁰   2⁻¹  2⁻²      [ 1    1/2  1/4
       2⁻¹  2⁰   2⁻¹   =    1/2  1    1/2
       2⁻²  2⁻¹  2⁰ ]       1/4  1/2  1   ].  (1)

From Definition 5.17, the PDF of X is

fX(x) = (1/((2π)^{n/2} [det(CX)]^{1/2})) exp(−(1/2) x′ CX⁻¹ x).  (2)

If we are using MATLAB for calculations, it is best to declare the problem solved at this point. However, if you like algebra, we can write out the PDF in terms of the variables x1, x2 and x3. To do so we find that the inverse covariance matrix is

CX⁻¹ = [ 4/3  −2/3   0
        −2/3   5/3  −2/3
          0   −2/3   4/3 ]  (3)

A little bit of algebra will show that det(CX) = 9/16 and that

(1/2) x′ CX⁻¹ x = 2x1²/3 + 5x2²/6 + 2x3²/3 − 2x1x2/3 − 2x2x3/3.  (4)

It follows that

fX(x) = (4/(3(2π)^{3/2})) exp(−2x1²/3 − 5x2²/6 − 2x3²/3 + 2x1x2/3 + 2x2x3/3).  (5)
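As a quick numeric sanity check of det(CX) = 9/16 and the inverse in (3) (done here in Python; the text would use MATLAB):

```python
import numpy as np

# Covariance matrix of X = [X1 X2 X3]' with C_X(k) = 2**(-|k|)
CX = np.array([[1, 1/2, 1/4],
               [1/2, 1, 1/2],
               [1/4, 1/2, 1]])
print(np.linalg.det(CX))        # 9/16 = 0.5625
print(np.linalg.inv(CX) * 3)    # 3*inverse = [[4,-2,0],[-2,5,-2],[0,-2,4]]
```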
We note that the components of X are iid Gaussian (0, 1) random variables. Hence X has covariance matrix CX = I, the identity matrix. Since Y3 = HX,

CY3 = H CX H′ = H H′ = [  2  −2   1
                         −2   3  −2
                          1  −2   3 ].  (3)

Some calculation (by hand or by MATLAB) will show that det(CY3) = 3 and that

CY3⁻¹ = (1/3) [ 5  4  1
                4  5  2
                1  2  2 ].  (4)
The output sequence is Yn. Following the approach of Equation (11.58), we can write the output Y = [Y1 Y2 Y3]′ as

Y = [Y1; Y2; Y3] = [ h2 h1 h0 0  0       [ 1 −1  1  0  0
                     0  h2 h1 h0 0    =    0  1 −1  1  0   [X−1; X0; X1; X2; X3] = HX.  (2)
                     0  0  h2 h1 h0 ]      0  0  1 −1  1 ]
Since Xn has autocovariance function CX(k) = 2^{−|k|}, X has covariance matrix

CX = [ 1     1/2   1/4   1/8   1/16
       1/2   1     1/2   1/4   1/8
       1/4   1/2   1     1/2   1/4
       1/8   1/4   1/2   1     1/2
       1/16  1/8   1/4   1/2   1    ].  (3)

Since Y = HX,

CY = H CX H′ = [ 3/2   −3/8   9/16
                −3/8    3/2  −3/8
                 9/16  −3/8   3/2 ].  (4)

Some calculation (by hand or preferably by MATLAB) will show that det(CY) = 675/256 and that

CY⁻¹ = (1/15) [ 12   2  −4
                 2  11   2
                −4   2  12 ].  (5)
This solution is another demonstration of why the PDF of a Gaussian random vector should be left
in vector form.
Comment: We know from Theorem 11.5 that Yn is a stationary Gaussian process. As a result, the
random variables Y1 , Y2 and Y3 are identically distributed and CY is a symmetric Toeplitz matrix.
This might make one think that the PDF fY(y) should be symmetric in the variables y1, y2 and y3.
However, because Y2 is in the middle of Y1 and Y3 , the information provided by Y1 and Y3 about
Y2 is different than the information Y1 and Y2 convey about Y3 . This fact appears as asymmetry in
f Y (y).
The output sequence is Yn. Following the approach of Equation (11.58), we can write the output Y = [Y1 Y2 Y3]′ as

Y = [Y1; Y2; Y3] = [ h2 h1 h0 0  0       [ 1 0 −1  0  0
                     0  h2 h1 h0 0    =    0 1  0 −1  0   [X−1; X0; X1; X2; X3] = HX.  (2)
                     0  0  h2 h1 h0 ]      0 0  1  0 −1 ]
Since Xn has autocovariance function CX(k) = 2^{−|k|}, X has the Toeplitz covariance matrix

CX = [ 1     1/2   1/4   1/8   1/16
       1/2   1     1/2   1/4   1/8
       1/4   1/2   1     1/2   1/4
       1/8   1/4   1/2   1     1/2
       1/16  1/8   1/4   1/2   1    ].  (3)

Since Y = HX,

CY = H CX H′ = [  3/2    3/8  −9/16
                  3/8    3/2   3/8
                −9/16    3/8   3/2  ].  (4)

Some calculation (preferably by MATLAB) will show that det(CY) = 297/128 and that

CY⁻¹ = [ 10/11  −1/3   14/33
         −1/3    5/6   −1/3
         14/33  −1/3   10/11 ].  (5)
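The matrices in (3)-(5) can be checked numerically; a Python sketch (the text would do the same in MATLAB) with the filter rows (1, 0, −1):

```python
import numpy as np

# 5x5 Toeplitz covariance with C_X(k) = 2**(-|k|)
idx = np.arange(5)
CX = 2.0 ** (-np.abs(idx[:, None] - idx[None, :]))
# Filter matrix H with rows (1, 0, -1), as in Equation (2): Y = H X
H = np.array([[1, 0, -1, 0, 0],
              [0, 1, 0, -1, 0],
              [0, 0, 1, 0, -1]])
CY = H @ CX @ H.T
print(CY)                      # [[3/2, 3/8, -9/16], ...]
print(np.linalg.det(CY))       # 297/128 = 2.3203125
```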
This problem is solved using Theorem 11.9 with k = 1. The optimum linear predictor filter h = [h0 h1]′ of Xn+1 given Xn = [Xn−1 Xn]′ is given by

←h = [h1; h0] = RXn⁻¹ RXn,Xn+1,  (1)

where

RXn = [ RX[0]  RX[1]      [ 1    3/4
        RX[1]  RX[0] ]  =   3/4  1   ]  (2)

and

RXn,Xn+1 = E[[Xn−1; Xn] Xn+1] = [RX[2]; RX[1]] = [1/2; 3/4].  (3)

Thus the filter vector h satisfies

←h = [h1; h0] = [1 3/4; 3/4 1]⁻¹ [1/2; 3/4] = [−1/7; 6/7].  (4)

Thus h = [6/7 −1/7]′.
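The matrix inverse in (4) need not be done by hand; a short numeric check (Python here; MATLAB's backslash does the same):

```python
import numpy as np

# R_X[0..2] = (1, 3/4, 1/2): solve RXn * [h1, h0] = RXn,Xn+1
RXn = np.array([[1.0, 0.75],
                [0.75, 1.0]])
r = np.array([0.5, 0.75])
h1, h0 = np.linalg.solve(RXn, r)
print(h0, h1)   # h0 = 6/7, h1 = -1/7
```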
This problem is solved using Theorem 11.9 with k = 1. The optimum linear predictor filter h = [h0 h1]′ of Xn+1 given Xn = [Xn−1 Xn]′ is given by

←h = [h1; h0] = RXn⁻¹ RXn,Xn+1,  (1)

where

RXn = [ RX[0]  RX[1]      [ 1.1   0.75
        RX[1]  RX[0] ]  =   0.75  1.1  ]  (2)

and

RXn,Xn+1 = E[[Xn−1; Xn] Xn+1] = [RX[2]; RX[1]] = [0.5; 0.75].  (3)

Thus the filter vector h satisfies

←h = [h1; h0] = [1.1 0.75; 0.75 1.1]⁻¹ [0.5; 0.75] = [−0.0193; 0.6950].  (4)

Thus h = [0.6950 −0.0193]′.
Comment: It is instructive to compare this solution with Problem 11.4.1, where the random process, denoted X̂n here to distinguish it from Xn in this problem, has autocorrelation function

RX̂[k] = { 1 − 0.25|k|, |k| ≤ 4;  0, otherwise }.  (5)

The difference is simply that RX̂[0] = 1, rather than RX[0] = 1.1 as in this problem. This difference corresponds to adding an iid noise sequence to X̂n to create Xn. That is,

Xn = X̂n + Nn  (6)

where Nn is an iid additive noise sequence with autocorrelation function RN[k] = 0.1δ[k] that is independent of the Xn process. Thus Xn in this problem can be viewed as a noisy version of X̂n in Problem 11.4.1. Because the X̂n process is less noisy, the optimal predictor filter of X̂n+1 given X̂n−1 and X̂n is ĥ = [6/7 −1/7]′ = [0.8571 −0.1429]′, which places more emphasis on the current value X̂n in predicting the next value.
Problem 11.4.3 Solution
This problem generalizes Example 11.14 in that −0.9 is replaced by the parameter c and the noise variance 0.2 is replaced by η². Because we are only finding the first order filter h = [h0 h1]′, it is relatively simple to generalize the solution of Example 11.14 to the parameter values c and η².

Based on the observation Y = [Yn−1 Yn]′, Theorem 11.11 states that the linear MMSE estimate of X = Xn is ←h′ Y where

←h = RY⁻¹ RY,Xn = (RXn + RWn)⁻¹ RXn,Xn.  (1)

From Equation (11.82), RXn,Xn = [RX[1]; RX[0]] = [c; 1]. From the problem statement,

RXn + RWn = [ 1  c      [ η²  0       [ 1 + η²   c
              c  1 ]  +   0   η² ]  =    c     1 + η² ].  (2)

This implies

←h = [1 + η²  c;  c  1 + η²]⁻¹ [c; 1]  (3)
   = (1/((1 + η²)² − c²)) [1 + η²  −c;  −c  1 + η²] [c; 1]  (4)
   = (1/((1 + η²)² − c²)) [cη²;  1 + η² − c²].  (5)

The optimal filter is

h = (1/((1 + η²)² − c²)) [1 + η² − c²;  cη²].  (6)
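A quick numeric spot check of (3)-(5), with arbitrary (hypothetical) values c = 0.9 and η² = 0.2:

```python
import numpy as np

c, eta2 = 0.9, 0.2
# Direct solve of (RXn + RWn) hrev = RXn,Xn for hrev = [h1, h0]
R = np.array([[1 + eta2, c], [c, 1 + eta2]])
hrev = np.linalg.solve(R, np.array([c, 1.0]))
# Closed form from Equation (5)
D = (1 + eta2) ** 2 - c ** 2
formula = np.array([c * eta2, 1 + eta2 - c ** 2]) / D
print(np.allclose(hrev, formula))   # True
```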
From Theorem 9.7, the mean square error of the filter output is
Equations (3) and (4) are general expressions for the mean square error of the optimal linear filter that can be applied to any situation described by Theorem 11.11.

To apply this result to the problem at hand, we observe that RX[0] = c0 = 1 and that

←h = (1/((1 + η²)² − c²)) [cη²;  1 + η² − c²],   RXn,Xn = [RX[1]; RX[0]] = [c; 1]  (5)
This implies

e*L = RX[0] − ←h′ RXn,Xn  (6)
    = 1 − (1/((1 + η²)² − c²)) [cη²  1 + η² − c²] [c; 1]  (7)
    = 1 − (c²η² + 1 + η² − c²)/((1 + η²)² − c²)  (8)
    = η² (1 + η² − c²)/((1 + η²)² − c²)  (9)
The remaining question is what value of c minimizes the mean square error e*L. The usual approach is to set the derivative de*L/dc to zero. This would yield the incorrect answer c = 0. In fact, evaluating the second derivative at c = 0 shows that d²e*L/dc² |_{c=0} < 0. Thus the mean square error e*L is maximum at c = 0. For a more careful analysis, we observe that e*L = η² f(x) where
f(x) = (a − x)/(a² − x),  (10)
x = c2 , and a = 1 + η2 . In this case, minimizing f (x) is equivalent to minimizing the mean square
error. Note that for R X [k] to be a respectable autocorrelation function, we must have |c| ≤ 1. Thus
we consider only values of x in the interval 0 ≤ x ≤ 1. We observe that
df(x)/dx = −(a² − a)/(a² − x)²  (11)
Since a > 1, the derivative is negative for 0 ≤ x ≤ 1. This implies the mean square error is
minimized by making x as large as possible, i.e., x = 1. Thus c = 1 minimizes the mean square
error. In fact c = 1 corresponds to the autocorrelation function R X [k] = 1 for all k. Since each X n
has zero expected value, every pair of samples Xn and Xm has correlation coefficient

ρXn,Xm = Cov[Xn, Xm]/√(Var[Xn] Var[Xm]) = RX[n − m]/RX[0] = 1.  (12)
That is, c = 1 corresponds to a degenerate process in which every pair of samples X n and X m are
perfectly correlated. Physically, this corresponds to the case where the random process Xn
is generated by generating a sample of a random variable X and setting X n = X for all n. The
observations are then of the form Yn = X + Z n . That is, each observation is just a noisy observation
of the random variable X. For c = 1, the optimal filter is

h = (1/(2 + η²)) [1; 1].  (13)
By recursive application of Xn = cXn−1 + Zn−1, we obtain

Xn = cⁿ X0 + Σ_{j=1}^{n} c^{j−1} Zn−j  (2)

The expected value of Xn is E[Xn] = cⁿ E[X0] + Σ_{j=1}^{n} c^{j−1} E[Zn−j] = 0. The variance of Xn is

Var[Xn] = c^{2n} Var[X0] + Σ_{j=1}^{n} [c^{j−1}]² Var[Zn−j] = c^{2n} Var[X0] + σ² Σ_{j=1}^{n} [c²]^{j−1}  (3)
where β² = η²/(d²σ²). From Theorem 9.4, the mean square estimation error at step n is

e*L(n) = E[(Xn − X̂n)²] = Var[Xn](1 − ρ²Xn,Yn−1) = σ² (1 + β²)/(1 + β²(1 − c²))  (11)

We see that the mean square estimation error e*L(n) = e*L, a constant for all n. In addition, e*L is an increasing function of β.
Problem 11.5.1 Solution
To use Table 11.1, we write RX(τ) in terms of the function

sinc(x) = sin(πx)/(πx).  (1)
In terms of the sinc(·) function, we obtain
R X (τ ) = 10 sinc(2000τ ) + 5 sinc(1000τ ). (2)
From Table 11.1,

SX(f) = (10/2,000) rect(f/2,000) + (5/1,000) rect(f/1,000)  (3)
Here is a graph of the PSD.

[Figure: SX(f) versus f for −1,500 <= f <= 1,500, a two-step PSD equal to 0.01 for |f| <= 500 and 0.005 for 500 < |f| <= 1,000.]
Problem 11.6.1 Solution
Since the random sequence X n has autocorrelation function
We can find the PSD directly from Table 11.2 with 0.1^{|k|} corresponding to a^{|k|}. The table yields
To complete the problem, we need to show that SX Y (− f ) = [S X Y ( f )]∗ . First we note that since
R X Y (τ ) is real valued, [R X Y (τ )]∗ = R X Y (τ ). This implies
[SXY(f)]* = ∫_{−∞}^{∞} [RXY(τ)]* [e^{−j2πfτ}]* dτ  (4)
          = ∫_{−∞}^{∞} RXY(τ) e^{−j2π(−f)τ} dτ  (5)
          = SXY(−f)  (6)
SX(f) = 2·10⁴/((2πf)² + 10⁴),   H(f) = 1/(a + j2πf)  (1)

By Theorem 11.16,

SY(f) = |H(f)|² SX(f) = 2·10⁴/([(2πf)² + a²][(2πf)² + 10⁴])  (2)
To find RY(τ), we use a form of partial fractions expansion to write

SY(f) = A/((2πf)² + a²) + B/((2πf)² + 10⁴)  (3)

Note that this method will work only if a ≠ 100. This same method was also used in Example 11.22. The values of A and B can be found by

A = 2·10⁴/((2πf)² + 10⁴) |_{f = ja/2π} = −2·10⁴/(a² − 10⁴),   B = 2·10⁴/((2πf)² + a²) |_{f = j100/2π} = 2·10⁴/(a² − 10⁴)  (4)

Since e^{−c|τ|} and 2c/((2πf)² + c²) are Fourier transform pairs for any constant c > 0, we see that

RY(τ) = (−10⁴/a)/(a² − 10⁴) e^{−a|τ|} + 100/(a² − 10⁴) e^{−100|τ|}  (6)
(b) To find a = 1/(RC), we use the fact that

E[Y²(t)] = 100 = RY(0) = (−10⁴/a)/(a² − 10⁴) + 100/(a² − 10⁴)  (7)

Rearranging, we find that a must satisfy

a³ − (10⁴ + 1)a + 100 = 0  (8)

This cubic polynomial has three roots:

a = 100,   a = −50 + √2501,   a = −50 − √2501  (9)

Recall that a = 100 is not a valid solution because our expansion of SY(f) was not valid for a = 100. Also, we require a > 0 in order to take the inverse transform of SY(f). Thus a = −50 + √2501 ≈ 0.01 and RC ≈ 100.
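The cubic in (8) is easy to check numerically:

```python
import numpy as np

# Roots of a^3 - (10^4 + 1) a + 100 = 0
roots = np.roots([1, 0, -(10**4 + 1), 100])
print(np.sort(roots))   # roughly -100.01, 0.01, and 100
```

Only the root near 0.01 is both positive and different from 100, confirming RC = 1/a ≈ 100.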
(d) Since the white noise W (t) has zero mean, the mean value of the filter output is
E [Y (t)] = E [W (t)] H (0) = 0 (3)
Problem 11.8.3 Solution
Since SY(f) = |H(f)|² SX(f), we first find

|H(f)|² = H(f)H*(f)  (1)
        = (a1 e^{−j2πf t1} + a2 e^{−j2πf t2})(a1 e^{j2πf t1} + a2 e^{j2πf t2})  (2)
        = a1² + a2² + a1 a2 (e^{−j2πf(t2−t1)} + e^{−j2πf(t1−t2)})  (3)
(b) From Table 11.1, the input has power spectral density

SX(f) = (1/2) e^{−πf²/4}  (2)

The output power spectral density is

SY(f) = |H(f)|² SX(f) = { (1/2) e^{−πf²/4}, |f| ≤ 2;  0, otherwise }  (3)

This integral cannot be expressed in closed form. However, we can express it in the form of the integral of a standardized Gaussian PDF by making the substitution f = z√(2/π). With this substitution,

E[Y²(t)] = (1/√(2π)) ∫_{−√(2π)}^{√(2π)} e^{−z²/2} dz  (5)
         = Φ(√(2π)) − Φ(−√(2π))  (6)
         = 2Φ(√(2π)) − 1 = 0.9876  (7)

The output power almost equals the input power because the filter bandwidth is sufficiently wide to pass through nearly all of the power of the input.
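Equation (7) is a one-liner to check, since 2Φ(√(2π)) − 1 = erf(√(2π)/√2) = erf(√π):

```python
import math

# 2*Phi(sqrt(2*pi)) - 1 where Phi is the standard normal CDF
power = math.erf(math.sqrt(math.pi))
print(round(power, 4))   # approx 0.988, matching the text's 0.9876 to table precision
```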
Problem 11.8.5 Solution
By making the substitution f = 50 tan θ, we have df = 50 sec²θ dθ. Using the identity 1 + tan²θ = sec²θ, we have

E[Y²(t)] = ∫_0^{tan⁻¹(2)} (100/(10⁸π²)) dθ = 100 tan⁻¹(2)/(10⁸π²) = 1.12 × 10⁻⁷  (7)
SXY(f) = H(f)SX(f)  (1)
(b) From Theorem 11.17,

SXY(f) = H(f)SX(f) = 8/([7 + j2πf][16 + (2πf)²])  (3)

(c) To find the cross correlation, we need to find the inverse Fourier transform of SXY(f). A straightforward way to do this is to use a partial fraction expansion of SXY(f). That is, by defining s = j2πf, we observe that

8/((7 + s)(4 + s)(4 − s)) = (−8/33)/(7 + s) + (1/3)/(4 + s) + (1/11)/(4 − s)  (4)

Hence, we can write the cross spectral density as

SXY(f) = (−8/33)/(7 + j2πf) + (1/3)/(4 + j2πf) + (1/11)/(4 − j2πf)  (5)

Unfortunately, terms like 1/(a − j2πf) do not have an inverse transform. The solution is to write SXY(f) in the following way:

SXY(f) = (−8/33)/(7 + j2πf) + (8/33)/(4 + j2πf) + (1/11)/(4 + j2πf) + (1/11)/(4 − j2πf)  (6)
       = (−8/33)/(7 + j2πf) + (8/33)/(4 + j2πf) + (8/11)/(16 + (2πf)²)  (7)
(a) Since E[N (t)] = µ N = 0, the expected value of the output is µY = µ N H (0) = 0.
(d) Since N (t) is a Gaussian process, Theorem 11.3 says Y (t) is a Gaussian process. Thus the
random variable Y (t) is Gaussian with
E[Y(t)] = 0,   Var[Y(t)] = E[Y²(t)] = 10⁻³  (5)
Thus the impulse response h(v) depends on t. That is, the filter response is linear but not time
invariant. Since Theorem 11.2 requires that h(t) be time invariant, this example does not violate the
theorem.
(a) Note that |H ( f )| = 1. This implies SM̂ ( f ) = S M ( f ). Thus the average power of M̂(t) is
q̂ = ∫_{−∞}^{∞} SM̂(f) df = ∫_{−∞}^{∞} SM(f) df = q  (1)
To find the expected value of the random phase cosine, for an integer n ≠ 0, we evaluate

E[cos(2πfc t + nΘ)] = ∫_{−∞}^{∞} cos(2πfc t + nθ) fΘ(θ) dθ  (5)
 = ∫_0^{2π} cos(2πfc t + nθ) (1/(2π)) dθ  (6)
 = (1/(2nπ)) sin(2πfc t + nθ) |_0^{2π}  (7)
 = (1/(2nπ)) (sin(2πfc t + 2nπ) − sin(2πfc t)) = 0  (8)

Similar steps will show that for any integer n ≠ 0, the random phase sine also has expected value

E[sin(2πfc t + nΘ)] = 0  (9)
Using the trigonometric identity cos²φ = (1 + cos 2φ)/2, we can show

E[cos²(2πfc t + Θ)] = (1/2) E[1 + cos(2π(2fc)t + 2Θ)] = 1/2  (10)

Similarly,

E[sin²(2πfc t + Θ)] = (1/2) E[1 − cos(2π(2fc)t + 2Θ)] = 1/2  (11)

Since M(t) and M̂(t) are independent of Θ, the average power of the upper sideband signal is

E[U²(t)] = E[M²(t)] E[cos²(2πfc t + Θ)] + E[M̂²(t)] E[sin²(2πfc t + Θ)]  (13)
 − E[M(t)M̂(t)] E[2 cos(2πfc t + Θ) sin(2πfc t + Θ)]  (14)
 = q/2 + q/2 + 0 = q  (15)
(c) We cannot initially assume V (t) is WSS so we first find
We see that for all τ ≠ 0, RV(t, t + τ) = 0. Thus we need to find the expected value of
(e) The filter input V(t) has power spectral density SV(f) = (1/2)10⁻¹⁵. The filter output has power spectral density

SY(f) = |L(f)|² SV(f) = { 10⁻¹⁵/2, |f| ≤ B;  0, otherwise }  (14)
Problem 11.9.1 Solution
The system described in this problem corresponds exactly to the system in the text that yielded
Equation (11.146).
SX(f) = 10⁴/((5,000)² + (2πf)²).  (2)

It follows that the optimal filter is

Ĥ(f) = [10⁴/((5,000)² + (2πf)²)] / [10⁴/((5,000)² + (2πf)²) + 10⁻⁵] = 10⁹/(1.025 × 10⁹ + (2πf)²).  (3)

From Table 11.2, we see that the filter Ĥ(f) has impulse response

ĥ(τ) = (10⁹/(2α)) e^{−α|τ|}  (4)

where α = √(1.025 × 10⁹) = 3.20 × 10⁴.
When X is a column of iid Gaussian (0, 1) random variables, the column vector Y = HX is a single
sample path of Y1 , . . . , Yn . When X is an n × m matrix of iid Gaussian (0, 1) random variables,
each column of Y = HX is a sample path of Y1, . . . , Yn. In this case, let matrix entry Yi,j denote a sample Yi of the jth sample path. The samples Yi,1, Yi,2, . . . , Yi,m are iid samples of Yi. We can estimate the mean and variance of Yi using the sample mean Mn(Yi) and sample variance Vm(Yi) of Section 7.3. These estimates are

Mn(Yi) = (1/m) Σ_{j=1}^{m} Yi,j,   V(Yi) = (1/(m−1)) Σ_{j=1}^{m} (Yi,j − Mn(Yi))²  (7)
[Figure: sample mean and sample variance of Yi versus n, 0 <= n <= 500; the sample means fluctuate near 0 and the sample variances near 1/3.]
We see that each sample mean is small, on the order of 0.1. Note that E[Yi] = 0. For m = 100
samples, the sample mean has variance 1/m = 0.01 and standard deviation 0.1. Thus it is to be
expected that we observe sample mean values around 0.1.
Also, it can be shown (in the solution to Problem 11.2.6 for example) that as i becomes large,
Var[Yi ] converges to 1/3. Thus our sample variance results are also not surprising.
Comment: Although within each sample path, Yi and Yi+1 are quite correlated, the sample means
of Yi and Yi+1 are not very correlated when a large number of sample paths are averaged. Exact
calculation of the covariance of the sample means of Yi and Yi+1 might be an interesting exercise.
The same observations apply to the sample variance as well.
a column vector, then R is a column vector. If r is a row vector, then R is a row vector. For
fftc to work the same way, the shape of n must be the same as the shape of R. The instruction
n=reshape(0:(N-1),size(R)) does this.
function x=cospaths(n,m);
%Usage: x=cospaths(n,m)
%Generate m sample paths of
%length n of a Gaussian process
% with ACF R[k]=cos(0.04*pi*k)
k=0:n-1;
rx=cos(0.04*pi*k)’;
x=gaussvector(0,rx,m);
The program is simple because if the second input parameter to gaussvector is a length m vector
rx, then rx is assumed to be the first row of a symmetric Toeplitz covariance matrix.
The commands x=cospaths(100,10);plot(x) will produce a graph like this one:
[Figure: ten overlaid sample paths Xn, 0 <= n <= 100; each path is a pure sinusoid with random amplitude and phase.]
We note that every sample path of the process is a Gaussian random sequence. However, it would also appear from the graph that every sample path is a perfect sinusoid. This may seem strange if you are used to seeing Gaussian processes simply as noisy processes or fluctuating Brownian motion processes. However, in this case, the amplitude and phase of each sample path is random such that over the ensemble of sinusoidal sample functions, each sample Xn is a Gaussian (0, 1) random variable.

Finally, to confirm that each sample path is a perfect sinusoid, rather than just resembling a sinusoid, we calculate the DFT of each sample path. The commands
>> x=cospaths(100,10);
>> X=fft(x);
>> stem((0:99)/100,abs(X));
[Figure: stem plot of the DFT magnitudes |Xk| versus k/100.]
The above plot consists of ten overlaid 100-point DFT magnitude stem plots, one for each Gaussian
sample function. Each plot has exactly two nonzero components at frequencies k/100 = 0.02 and
(100−k)/100 = 0.98 corresponding to each sample path sinusoid having frequency 0.02. Note that
the magnitude of each 0.02 frequency component depends on the magnitude of the corresponding
sinusoidal sample path.
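The ensemble can also be generated directly from its spectral form: Xn = A cos(0.04πn) + B sin(0.04πn) with iid Gaussian (0, 1) amplitudes A and B has ACF R[k] = cos(0.04πk), and its DFT makes the two-spike structure explicit. A Python sketch (our own construction, equivalent in distribution to cospaths):

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(100)
# Random sinusoid with Gaussian (0,1) amplitudes A and B;
# E[X_n X_{n+k}] = cos(0.04*pi*k) by the cosine angle-difference identity.
A, B = rng.standard_normal(2)
x = A * np.cos(0.04 * np.pi * n) + B * np.sin(0.04 * np.pi * n)
X = np.fft.fft(x)
# The sinusoid has frequency 0.02 cycles/sample, i.e. exactly 2 cycles
# in 100 samples, so the energy sits in DFT bins 2 and 98.
print(np.argmax(np.abs(X[:50])))   # 2
```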
To find the power spectral density SX(φ), we need to find the DTFT of sinc(φ0 k). Unfortunately, this was omitted from Table 11.2 so we now take a detour and derive it here. As with any derivation of the transform of a sinc function, we guess the answer and calculate the inverse transform. In this case, suppose

SX(φ) = (1/φ0) rect(φ/φ0) = { 1/φ0, |φ| ≤ φ0/2;  0, otherwise }.  (2)

We find RX[k] from the inverse DTFT. For |φ0| ≤ 1,

RX[k] = ∫_{−1/2}^{1/2} SX(φ) e^{j2πφk} dφ = (1/φ0) ∫_{−φ0/2}^{φ0/2} e^{j2πφk} dφ = (1/φ0)(e^{jπφ0k} − e^{−jπφ0k})/(j2πk) = sinc(φ0 k)  (3)
Now we apply this result to take the transform of RX[k] in Equation (1). This yields

SX(φ) = (10/0.5) rect(φ/0.5) + (5/0.25) rect(φ/0.25).  (4)
Ideally, a (2N + 1)-point DFT would yield a sampled version of the DTFT SX(φ). However, the truncation of the autocorrelation RX[k] to 201 points results in a difference. For N = 100, the DFT will be a sampled version of the DTFT of RX[k] rect(k/(2N + 1)). Here is a MATLAB program that shows the difference when the autocorrelation is truncated to 2N + 1 terms.
function DFT=twosincsdft(N);
%Usage: SX=twosincsdft(N);
%Returns and plots the 2N+1
%point DFT of R(-N) ... R(0) ... R(N)
%for ACF R[k] in Problem 11.2.2
k=-N:N;
rx=10*sinc(0.5*k) + 5*sinc(0.25*k);
DFT=fftc(rx);
M=ceil(0.6*N);
phi=(0:M)/(2*N+1);
stem(phi,abs(DFT(1:(M+1))));
xlabel(’\it \phi’);
ylabel(’\it S_X(\phi)’);
[Figure: stem plot of the DFT magnitudes SX(φ) for 0 <= φ <= 0.3.]
From the stem plot of the DFT, it is easy to see the deviations from the two rectangles that make
up the DTFT S X (φ). We see that the effects of windowing are particularly pronounced at the break
points.
Comment: In twosincsdft, DFT must be real-valued since it is the DFT of an autocorrelation function. Hence the command stem(DFT) should be sufficient. However, due to numerical precision issues, the actual DFT tends to have a tiny imaginary part, hence we use the abs operator.
[Figure: six panels of the actual Xn versus the predicted X̂n for 0 <= n <= 50:
(a) c = 0.9, d = 10    (d) c = 0.6, d = 10
(b) c = 0.9, d = 1     (e) c = 0.6, d = 1
(c) c = 0.9, d = 0.1   (f) c = 0.6, d = 0.1]
For σ = η = 1, the solution to Problem 11.4.5 showed that the optimal linear predictor of Xn given Yn−1 is

X̂n = (cd/(d² + (1 − c²))) Yn−1  (1)

The mean square estimation error at step n was found to be

e*L(n) = e*L = σ² (d² + 1)/(d² + (1 − c²))  (2)
We see that the mean square estimation error is e∗L (n) = e∗L , a constant for all n. In addition,
e∗L is a decreasing function of d. In graphs (a) through (c), we see that the predictor tracks X n
less well as β increases. Decreasing d corresponds to decreasing the contribution of X n−1 to the
measurement Yn−1 . Effectively, the impact of measurement noise variance η2 is increased. As d
decreases, the predictor places less emphasis on the measurement Yn and instead makes predictions
closer to E[X ] = 0. That is, when d is small in graphs (c) and (f), the predictor stays close to zero.
With respect to c, the performance of the predictor is less easy to understand. In Equation (11), the mean square error e*L is the product of

Var[Xn] = σ²/(1 − c²)   and   1 − ρ²Xn,Yn−1 = (d² + 1)(1 − c²)/(d² + (1 − c²))  (3)
As a function of increasing c2 , Var[X n ] increases while 1 − ρ X2 n ,Yn−1 decreases. Overall, the mean
square error e∗L is an increasing function of c2 . However, Var[X ] is the mean square error obtained
using a blind estimator that always predicts E[X ] while 1 − ρ X2 n ,Yn−1 characterizes the extent to
which the optimal linear predictor is better than the blind predictor. When we compare graphs (a)-(c) with c = 0.9 to graphs (d)-(f) with c = 0.6, we see greater variation in Xn for larger c but in both cases, the predictor worked well when d was large.
Note that the performance of our predictor is limited by the fact that it is based on a single
observation Yn−1 . Generally, we can improve our predictor when we use all of the past observations
Y0 , . . . , Yn−1 .
Problem Solutions – Chapter 12
• “. . . each read or write operation reads or writes an entire file and that files contain a geometric
number of sectors with mean 50.”
This statement says that the length L of a file has PMF
PL(l) = { (1 − p)^{l−1} p, l = 1, 2, . . . ;  0, otherwise }  (1)
with p = 1/50 = 0.02. This says that when we write a sector, we will write another sector
with probability 49/50 = 0.98. In terms of our Markov chain, if we are in the write state,
we write another sector and stay in the write state with probability P22 = 0.98. This fact also
implies P20 + P21 = 0.02.
Also, since files that are read obey the same length distribution, P11 = 0.98 and P10 + P12 = 0.02.
• “Further, suppose idle periods last for a geometric time with mean 500.”
This statement simply says that given the system is idle, it remains idle for another unit of
time with probability P00 = 499/500 = 0.998. This also says that P01 + P02 = 0.002.
• “After an idle period, the system is equally likely to read or write a file.”
Given that at time n, X_n = 0, this statement says that the conditional probability that

P[X_{n+1} = 1 | X_n = 0, X_{n+1} ≠ 0] = P01/(P01 + P02) = 0.5    (3)
Combined with the earlier fact that P01 + P02 = 0.002, we learn that P01 = P02 = 0.001.
• “However, on completion of a write operation, a read operation follows with probability 0.6.”
Now we find that given that at time n, X_n = 2, the conditional probability that

P[X_{n+1} = 1 | X_n = 2, X_{n+1} ≠ 2] = P21/(P20 + P21) = 0.6    (7)
Combined with the earlier fact that P20 + P21 = 0.02, we learn that P21 = 0.012 and P20 = 0.008.
[Markov chain diagram for states 0 (idle), 1 (read), and 2 (write), with transition probabilities 0.001, 0.004, 0.008, and 0.012 labeling the edges between distinct states.]
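The mean-50 claim for the geometric file length can be checked numerically; this is a quick sketch in Python rather than the manual's MATLAB:

```python
# PMF of Equation (1): P_L(l) = (1 - p)^(l - 1) * p, l = 1, 2, ...
p = 1 / 50
# Truncate the infinite sum; (0.98)^5000 is negligibly small
mean = sum(l * (1 - p) ** (l - 1) * p for l in range(1, 5001))
assert abs(mean - 50) < 1e-6   # geometric mean is 1/p = 50
```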
Problem 12.1.7 Solution
Under construction.
The way to find Pn is to make the decomposition P = SDS−1 where the columns of S are the
eigenvectors of P and D is a diagonal matrix containing the eigenvalues of P. The eigenvalues are
λ1 = 1 λ2 = 0 λ3 = 1/2 (2)
The decomposition of P is
P = SDS^{-1} = [1 −1 0; 1 1 0; 1 0 1] [1 0 0; 0 0 0; 0 0 1/2] [0.5 0.5 0; −0.5 0.5 0; −0.5 −0.5 1]    (4)
Finally, P^n is

P^n = SD^nS^{-1} = [1 −1 0; 1 1 0; 1 0 1] [1 0 0; 0 0 0; 0 0 (0.5)^n] [0.5 0.5 0; −0.5 0.5 0; −0.5 −0.5 1]    (5)

    = [0.5 0.5 0; 0.5 0.5 0; 0.5 − (0.5)^{n+1}  0.5 − (0.5)^{n+1}  (0.5)^n]    (6)
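As a numerical check, the decomposition and the closed form for P^n can be compared directly; a sketch in Python/NumPy (rather than the manual's MATLAB), with S, D, and S^{-1} transcribed from Equation (4):

```python
import numpy as np

S = np.array([[1.0, -1.0, 0.0],
              [1.0,  1.0, 0.0],
              [1.0,  0.0, 1.0]])
D = np.diag([1.0, 0.0, 0.5])           # eigenvalues 1, 0, 1/2
Sinv = np.array([[ 0.5,  0.5, 0.0],
                 [-0.5,  0.5, 0.0],
                 [-0.5, -0.5, 1.0]])
P = S @ D @ Sinv                        # recover P itself

def P_n_closed(n):
    """Closed form for P^n from Equation (6)."""
    a = 0.5 - 0.5 ** (n + 1)
    return np.array([[0.5, 0.5, 0.0],
                     [0.5, 0.5, 0.0],
                     [a,   a,   0.5 ** n]])

for n in (1, 2, 5, 20):
    # S D^n S^{-1} agrees with repeated multiplication and the closed form
    Pn = S @ np.diag([1.0, 0.0, 0.5 ** n]) @ Sinv
    assert np.allclose(Pn, np.linalg.matrix_power(P, n))
    assert np.allclose(Pn, P_n_closed(n))
```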
Problem 12.3.1 Solution
From Example 12.8, the state probabilities at time n are
p(n) = [7/12 5/12] + λ2^n [(5/12)p0 − (7/12)p1   −(5/12)p0 + (7/12)p1]    (1)

with λ2 = 1 − (p + q) = 344/350. With initial state probabilities [p0 p1] = [1 0],

p(n) = [7/12 5/12] + λ2^n [5/12 −5/12]    (2)
The limiting state probabilities are [π0 π1] = [7/12 5/12]. Note that p_j(n) is within 1% of π_j if

|π_j − p_j(n)| ≤ 0.01 π_j    (3)

These requirements become

λ2^n ≤ 0.01 (7/12)/(5/12)   and   λ2^n ≤ 0.01    (4)
The minimum value of n that meets both requirements is

n = ⌈ln 0.01 / ln λ2⌉ = 267    (5)
Hence, after 267 time steps, the state probabilities are all within one percent of the limiting state
probability vector. Note that in the packet voice system, the time step corresponded to a 10 ms time
slot. Hence, 2.67 seconds are required.
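The count of 267 steps follows from a one-line computation; a quick check in Python (not from the text):

```python
import math

lam2 = 344 / 350   # second eigenvalue from Equation (1)
# The binding requirement is lam2**n <= 0.01 (state 1); state 0 only
# requires lam2**n <= 0.01 * (7/12)/(5/12), which is weaker.
n = math.ceil(math.log(0.01) / math.log(lam2))
assert n == 267
assert lam2 ** 267 <= 0.01 < lam2 ** 266
```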
[Markov chain diagram for the packet error process: after an error, the next packet is in error with probability 0.9; with probability 0.1 the chain enters the error-free states, each of which repeats with probability 0.99 and produces the next error with probability 0.01.]
Assuming that sending a packet takes one unit of time, the time X until the next packet error has the
PMF

P_X(x) = { 0.9                  x = 1
         { 0.001(0.99)^{x−2}    x = 2, 3, . . .
         { 0                    otherwise          (1)
Thus, following an error, the time until the next error always has the same PMF. Moreover, this time
is independent of previous interarrival times since it depends only on the Bernoulli trials following
a packet error. It would appear that N (t) is a renewal process; however, there is one additional
complication. At time 0, we need to know the probability p of an error for the first packet. If
p = 0.9, then X 1 , the time until the first error, has the same PMF as X above and the process is a
renewal process. If p ≠ 0.9, then the time until the first error is different from subsequent renewal
times. In this case, the process is a delayed renewal process.
Keep in mind that even if we make one of these replacements, there will be at least one self transition
probability, either P00 or P11 , that will be nonzero. This will guarantee that the resulting Markov
chain will be aperiodic.
Suppose E[Ti j ] = ∞. Since i and j communicate, we can find n, the smallest nonnegative
integer such that P ji (n) > 0. Given we start in state j, let G i denote the event that we go through
state i on our way back to j. By conditioning on G j ,
E[T_jj] = E[T_jj | G_i] P[G_i] + E[T_jj | G_i^c] P[G_i^c]    (1)
Given G_i, the system passes through state i on its way back to j, and the time to go from i to j has
mean E[T_ij]. Thus E[T_jj | G_i] ≥ E[T_ij]. In addition, P[G_i] ≥ P_ji(n) since there may be paths with more than n hops that take the
system from state j to i. These facts imply
E[T_jj] ≥ E[T_jj | G_i] P[G_i] ≥ E[T_ij] P_ji(n) = ∞    (4)
Thus, state j is not positive recurrent, which is a contradiction. Hence, it must be that E[Ti j ] < ∞.
We can find the stationary probability vector π = [π0 π1 π2] by solving π = πP along with π0 +
π1 + π2 = 1. It's possible to find the solution by hand but it's easier to use MATLAB or a similar
tool. The solution is π = [0.7536 0.1159 0.1304].
Of course, one equation of π = πP will be redundant. The three independent equations are
From the second equation, we see that π2 = 0. This leaves the two equations:
Solving these two equations yields π0 = π1 = 0.5. The stationary probability vector is
π = [π0 π1 π2] = [0.5 0.5 0]    (8)
If you happened to solve Problem 12.2.2, you would have found that the n-step transition matrix is
P^n = [0.5 0.5 0; 0.5 0.5 0; 0.5 − (0.5)^{n+1}  0.5 − (0.5)^{n+1}  (0.5)^n]    (9)
From Theorem 12.21, we know that each row of the n-step transition matrix converges to π. In
this case,

lim_{n→∞} P^n = [0.5 0.5 0; 0.5 0.5 0; 0.5 0.5 0] = [π; π; π]    (10)
[Markov chain diagram: states 0, 1, 2 with self-transition probabilities 1 − p, p² + (1 − p)², and 1 − p, and cross-transition probabilities p and p(1 − p) on the remaining edges.]
We can solve this chain very easily for the stationary probability vector π. In particular,
0 (0, 1) One teller is idle, the other teller has a customer requiring one more minute of service
1 (1, 1) Each teller has a customer requiring one more minute of service.
2 (1, 2) One teller has a customer requiring one minute of service. The other teller has a cus-
tomer requiring two minutes of service.
Writing the stationary probability equations for states 0, 2, and 3 and adding the constraint
Σ_j π_j = 1 yields the following equations:
π0 = π2 (1)
π2 = (1/2)π0 + (1/2)π1 (2)
π3 = (1/4)π0 + (1/4)π1 (3)
1 = π0 + π1 + π2 + π3 (4)
Substituting π2 = π0 in the second equation yields π1 = π0 . Substituting that result in the third
equation yields π3 = π0 /2. Making sure the probabilities add up to 1 yields
π = [π0 π1 π2 π3] = [2/7 2/7 2/7 1/7]    (5)
Both tellers are busy unless the system is in state 0. The stationary probability both tellers are busy
is 1 − π0 = 5/7.
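The stationary vector can be confirmed by solving Equations (1)-(4) as a linear system; a sketch in Python/NumPy rather than the manual's MATLAB:

```python
import numpy as np

# Rows encode: pi0 = pi2; pi2 = pi0/2 + pi1/2; pi3 = pi0/4 + pi1/4; sum = 1
A = np.array([[1.0,   0.0,  -1.0, 0.0],
              [-0.5,  -0.5,  1.0, 0.0],
              [-0.25, -0.25, 0.0, 1.0],
              [1.0,   1.0,   1.0, 1.0]])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
assert np.allclose(pi, [2/7, 2/7, 2/7, 1/7])
# probability that both tellers are busy
assert abs((1 - pi[0]) - 5/7) < 1e-12
```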
counterexample, consider the simple chain on the right with Pii = 0 for each i.
Note that P00(2) > 0 and P00(3) > 0. The largest d that divides both 2 and 3
is d = 1. Hence, state 0 is aperiodic. Since the chain has one communicating
class, the chain is also aperiodic.
Problem 12.6.4 Solution
Under construction.
where α = p(1−q) and δ = q(1− p). To find the stationary probabilities, we apply Theorem 12.13
by partitioning the state space between the sets S = {0, 1, . . . , i} and S′ = {i + 1, i + 2, . . .} as shown
in Figure 12.4. By Theorem 12.13, for state i > 0,
πi α = πi+1 δ, (1)
implying πi+1 = (α/δ)πi . A cut between states 0 and 1 yields π1 = ( p/δ)π0 . Combining these
results, we have for any state i > 0,
π_i = (p/δ)(α/δ)^{i−1} π0    (2)
Under the condition α < δ, it follows that

Σ_{i=0}^∞ π_i = π0 + Σ_{i=1}^∞ (p/δ)(α/δ)^{i−1} π0 = π0 [1 + (p/δ)/(1 − α/δ)]    (3)
since p < q implies α/δ < 1. Thus, applying Σ_i π_i = 1 and noting δ − α = q − p, we have

π0 = (q − p)/q,    π_i = (p(q − p)/(q²(1 − p))) [(p(1 − q))/(q(1 − p))]^{i−1},    i = 1, 2, . . .    (4)
Note that α < δ if and only if p < q, which is both a necessary and sufficient condition for the
Markov chain to be positive recurrent.
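For a concrete sanity check of Equation (4), one can pick sample values p < q (the values below are illustrative choices, not from the text) and verify that the probabilities sum to 1 and satisfy the cut equations; a sketch in Python:

```python
p, q = 0.2, 0.4                      # illustrative choice with p < q
alpha, delta = p * (1 - q), q * (1 - p)

pi0 = (q - p) / q
pi = [pi0] + [(p / delta) * (alpha / delta) ** (i - 1) * pi0
              for i in range(1, 2000)]

assert abs(sum(pi) - 1.0) < 1e-9                   # valid distribution
assert abs(pi[1] - (p / delta) * pi0) < 1e-12      # cut between 0 and 1
assert abs(pi[1] * alpha - pi[2] * delta) < 1e-12  # cut between 1 and 2
```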
Problem 12.9.1 Solution
From the problem statement, we learn that in each state i, the tiger spends an exponential time with
parameter λi . When we measure time in hours,
λ0 = q01 = 1/3 λ1 = q12 = 1/2 λ2 = q20 = 2 (1)
The corresponding continuous-time Markov chain is shown below:
[Continuous-time Markov chain diagram: state 0 → 1 at rate 1/3, state 1 → 2 at rate 1/2, and state 2 → 0 at rate 2.]
Problem 12.9.4 Solution
Under construction.
The parameter ρ = λ/µ is the normalized load. When c = 2, the blocking probability is
P[B] = (ρ²/2)/(1 + ρ + ρ²/2)    (2)
Setting P[B] = 0.1 yields the quadratic equation

ρ² − (2/9)ρ − 2/9 = 0    (3)

The solutions to this quadratic are

ρ = (1 ± √19)/9    (4)
The meaningful nonnegative solution is ρ = (1 + 19)/9 = 0.5954.
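Substituting this root back into Equation (2) confirms the blocking probability; a quick check in Python (not from the text):

```python
import math

rho = (1 + math.sqrt(19)) / 9                          # root from Equation (4)
blocking = (rho ** 2 / 2) / (1 + rho + rho ** 2 / 2)   # Equation (2), c = 2
assert abs(blocking - 0.1) < 1e-12
assert abs(rho - 0.5954) < 1e-4
```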
Note that although the load per server remains the same, doubling the number of circuits to 200
caused the blocking probability to go down by more than a factor of 10 (from 0.004 to 2.76 × 10−4 ).
This is a general property of the Erlang-B formula and is called trunking efficiency by telephone
system engineers. The basic principle is that it's more efficient to share resources among larger
groups.
The hard part of calculating P[B] is that most calculators, including MATLAB, have trouble
calculating 200!. (In MATLAB, the factorial is calculated using the gamma function. That is, 200! =
gamma(201).) To do these calculations, you need to observe that if q_n = ρ^n/n!, then

q_n = (ρ/n) q_{n−1}    (3)
A simple MATLAB program that uses this fact to calculate the Erlang-B formula for large values of
c is
function y=erlangbsimple(r,c)
%load is r=lambda/mu
%number of servers is c
p=1.0;
psum=1.0;
for k=1:c
p=p*r/k;
psum=psum+p;
end
y=p/psum;
Essentially the problems with the calculations of erlangbsimple.m are the same as those of
calculating the Poisson PMF. A better program for calculating the Erlang-B formula uses the im-
provements employed in poissonpmf to calculate the Poisson PMF for large values. Here is the
code:
function pb=erlangb(rho,c);
%Usage: pb=erlangb(rho,c)
%returns the Erlang-B blocking
%probability for an M/M/c/c
%queue with load rho
pn=exp(-rho)*poissonpmf(rho,0:c);
pb=pn(c+1)/sum(pn);
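For readers not using MATLAB, the same recursion-based computation translates directly; erlang_b below is a hypothetical Python port of erlangbsimple, not code from the text:

```python
def erlang_b(rho, c):
    """Erlang-B blocking probability for an M/M/c/c queue with load rho,
    computed with the stable recursion q_n = (rho / n) * q_{n-1}."""
    p = 1.0       # q_0; the common scale factor cancels in the final ratio
    psum = 1.0
    for k in range(1, c + 1):
        p *= rho / k
        psum += p
    return p / psum

# Reproduces the c = 2 example: load (1 + sqrt(19))/9 gives blocking 0.1
assert abs(erlang_b(0.59543, 2) - 0.1) < 1e-4
# Remains stable for large c, where computing 200! directly would overflow
assert 0.0 < erlang_b(180, 200) < 1.0
```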
[Markov chain diagram: states 0, 1, . . . , c − 1, c, with each departure transition at rate µ.]
p_n = ρ^n p0    n = 0, 1, . . . , c    (1)

Σ_{i=0}^c p_i = p0 (1 + ρ + ρ² + · · · + ρ^c) = 1    (2)
This implies
p0 = (1 − ρ)/(1 − ρ^{c+1})    (3)
The stationary probabilities are
p_n = (1 − ρ)ρ^n/(1 − ρ^{c+1})    n = 0, 1, . . . , c    (4)
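Equation (4) can be checked to define a valid PMF for sample values (the numbers below are illustrative choices, not from the text); a sketch in Python:

```python
rho, c = 0.8, 10                               # illustrative load and capacity
p0 = (1 - rho) / (1 - rho ** (c + 1))          # Equation (3)
pn = [(1 - rho) * rho ** n / (1 - rho ** (c + 1)) for n in range(c + 1)]

assert abs(sum(pn) - 1.0) < 1e-12              # probabilities sum to 1
assert all(abs(pn[n] - rho ** n * p0) < 1e-15  # matches p_n = rho^n p0
           for n in range(c + 1))
```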
[Markov chain diagram for the M/M/c queue: states 0, 1, . . . , c, c + 1, . . . with departure rates µ, 2µ, . . . , cµ, cµ, cµ, . . .]
In the solution to Quiz 12.10, we found that the stationary probabilities for the queue satisfied
p_n = { p0 ρ^n/n!               n = 1, 2, . . . , c
      { p0 (ρ/c)^{n−c} ρ^c/c!   n = c + 1, c + 2, . . .     (1)
where ρ = λ/µ = λ. We must be sure that ρ is small enough that there exists p0 > 0 such that

Σ_{n=0}^∞ p_n = p0 [ 1 + Σ_{n=1}^c ρ^n/n! + (ρ^c/c!) Σ_{n=c+1}^∞ (ρ/c)^{n−c} ] = 1    (2)
This requirement is met if and only if the infinite sum converges, which occurs if and only if
Σ_{n=c+1}^∞ (ρ/c)^{n−c} = Σ_{j=1}^∞ (ρ/c)^j < ∞    (3)
That is, p0 > 0 if and only if ρ/c < 1, or λ < c. In short, if the arrival rate in cars per second is
less than the service rate (in cars per second) when all booths are busy, then the Markov chain has
a stationary distribution. Note that if ρ > c, then the Markov chain is no longer positive recurrent
and the backlog of cars will grow to infinity.
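The normalization in Equation (2) can be evaluated numerically for sample values with ρ < c (illustrative choices, not from the text); a sketch in Python:

```python
import math

rho, c = 2.5, 4                                # illustrative values, rho < c
geometric_tail = (rho / c) / (1 - rho / c)     # sum over j >= 1 of (rho/c)^j
p0 = 1.0 / (1 + sum(rho ** n / math.factorial(n) for n in range(1, c + 1))
            + (rho ** c / math.factorial(c)) * geometric_tail)
assert p0 > 0

# Stationary probabilities from Equation (1), truncated far into the tail
pn = [p0]
pn += [p0 * rho ** n / math.factorial(n) for n in range(1, c + 1)]
pn += [p0 * (rho / c) ** (n - c) * rho ** c / math.factorial(c)
       for n in range(c + 1, 500)]
assert abs(sum(pn) - 1.0) < 1e-9
```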
(a) In this case, we have two M/M/1 queues, each with an arrival rate of λ/2. By defining
ρ = λ/µ, each queue has a stationary distribution
pn = (1 − ρ/2) (ρ/2)n n = 0, 1, . . . (1)
Note that in this case, the expected number in queue i is

E[N_i] = Σ_{n=0}^∞ n p_n = (ρ/2)/(1 − ρ/2)    (2)
(b) The combined queue is an M/M/2/∞ queue. As in the solution to Quiz 12.10, the stationary
probabilities satisfy

p_n = { p0 ρ^n/n!               n = 1, 2
      { p0 (ρ/2)^{n−2} ρ²/2     n = 3, 4, . . .     (3)

The requirement Σ_{n=0}^∞ p_n = 1 yields

p0 = [1 + ρ + ρ²/2 + (ρ²/2)(ρ/2)/(1 − ρ/2)]^{−1} = (1 − ρ/2)/(1 + ρ/2)    (4)

The expected number in the system is E[N] = Σ_{n=1}^∞ n p_n. Some algebra will show that

E[N] = ρ/(1 − (ρ/2)²)    (5)

We see that the average number in the combined queue is lower than in the system with individual
queues. The reason for this is that in the system with individual queues, there is a possibility that
one of the queues becomes empty while there is more than one person in the other queue.

The LCFS queue operates in a way that is quite different from the usual first come, first served queue.
However, under the assumptions of exponential service times and Poisson arrivals, customers arrive
at rate λ and depart at rate µ, no matter which service discipline is used. The Markov chain for the
LCFS queue is the same as the Markov chain for the M/M/1 first come, first served queue. It would
seem that the LCFS queue should be less efficient than the ordinary M/M/1 queue because a new
arrival causes us to discard the work done on the customer in service. This is not the case, however,
because the memoryless property of the exponential PDF implies that no matter how much service
had already been performed, the remaining service time remains identical to that of a new customer.

[Markov chain diagram for the admission control system: states 0, 1, . . . , c − r, . . . , c with arrival rates q_{n,n+1} = λ + h for n < c − r and q_{n,n+1} = h for n ≥ c − r, and departure rate n in state n.]

When the number of calls, n, is less than c − r, we admit either type of call and q_{n,n+1} = λ + h.
When n ≥ c − r, we block the new calls and we admit only handoff calls so that q_{n,n+1} = h. Since
the service times are exponential with an average time of 1 minute, the call departure rate in state n
is n calls per minute. Theorem 12.24 says that the stationary probabilities pn satisfy
p_n = { ((λ + h)/n) p_{n−1}    n = 1, 2, . . . , c − r
      { (h/n) p_{n−1}          n = c − r + 1, c − r + 2, . . . , c     (1)
This implies

p_n = { ((λ + h)^n/n!) p0                     n = 1, 2, . . . , c − r
      { ((λ + h)^{c−r} h^{n−(c−r)}/n!) p0     n = c − r + 1, c − r + 2, . . . , c     (2)
The requirement that Σ_{n=0}^c p_n = 1 yields

p0 [ Σ_{n=0}^{c−r} (λ + h)^n/n! + (λ + h)^{c−r} Σ_{n=c−r+1}^c h^{n−(c−r)}/n! ] = 1    (3)
Finally, a handoff call is dropped if and only if it finds the system with c calls in progress. The
probability that a handoff call is dropped is

P[H] = p_c = ((λ + h)^{c−r} h^r/c!) p0    (4)
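A numerical sketch of these stationary probabilities, using the transition rates q_{n,n+1} = λ + h for n < c − r and h otherwise as stated in the solution; the parameter values c, r, λ, h below are illustrative choices, not from the text:

```python
import math

c, r, lam, h = 10, 2, 3.0, 1.0      # illustrative parameters

def unnormalized(n):
    """p_n / p_0, following the form of Equation (2)."""
    if n <= c - r:
        return (lam + h) ** n / math.factorial(n)
    return (lam + h) ** (c - r) * h ** (n - (c - r)) / math.factorial(n)

Z = sum(unnormalized(n) for n in range(c + 1))   # bracket in Equation (3)
pn = [unnormalized(n) / Z for n in range(c + 1)]
assert abs(sum(pn) - 1.0) < 1e-12

P_drop = pn[c]          # a handoff is dropped iff all c channels are busy
assert 0.0 < P_drop < 1.0
```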
Problem 12.11.5 Solution
Under construction.