Published by: moroneyubeda on Aug 28, 2013
Copyright: Attribution Non-commercial

Probability and Stochastic Processes:
A Friendly Introduction for Electrical and Computer Engineers
Second Edition
Roy D. Yates and David J. Goodman
Problem Solutions: Yates and Goodman, 12.1.4 12.1.7 12.3.2 12.3.3 12.5.5 12.5.6 12.6.2 12.8.2 12.9.1 12.10.1 12.10.5 12.11.2 and 12.11.3
Problem 12.1.4 Solution
Based on the problem statement, the state of the wireless LAN is given by the following Markov chain:

[Figure: Markov chain on states 0, 1, 2, 3; the transition probabilities are the entries of the matrix P in Equation (1).]
The Markov chain has state transition matrix

P = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.04 & 0.9 & 0.06 & 0 \\ 0.04 & 0 & 0.9 & 0.06 \\ 0.04 & 0.02 & 0.04 & 0.9 \end{bmatrix}.   (1)
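As a quick numerical check (my own sketch, not part of the text), the matrix in Equation (1) can be verified to be stochastic and iterated to a stationary distribution:

```python
# The transition matrix from Equation (1) of Problem 12.1.4.
P = [
    [0.50, 0.50, 0.00, 0.00],
    [0.04, 0.90, 0.06, 0.00],
    [0.04, 0.00, 0.90, 0.06],
    [0.04, 0.02, 0.04, 0.90],
]

def step(p, P):
    """One step of the state-probability update p(n) = p(n-1) P."""
    return [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

# Every row must sum to 1 for P to be a valid transition matrix.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

# Iterate from an arbitrary initial distribution until it settles.
p = [1.0, 0.0, 0.0, 0.0]
for _ in range(10000):
    p = step(p, P)

print([round(x, 4) for x in p])
```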
Problem 12.1.7 Solution
Let

Y = [\hat{X}_{n-1} \hat{X}_{n-2} \cdots \hat{X}_0]',   y = [x_{n-1} x_{n-2} \cdots x_0]'   (1)

denote the past history of the sampled process \hat{X}_n and a realization of that history. In the conditional space where \hat{X}_n = i and Y = y, we can use the law of total probability to write

P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y] = \sum_k P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y, K_n = k] \, P[K_n = k \mid \hat{X}_n = i, Y = y].   (2)

Since K_n is independent of \hat{X}_n and the past history Y,

P[K_n = k \mid \hat{X}_n = i, Y = y] = P[K_n = k].   (3)

Next we observe that, given K_n = k, the next observation \hat{X}_{n+1} is the underlying chain X_m run k additional steps past the time m of the observation \hat{X}_n = X_m. Thus

P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y, K_n = k] = P[X_{m+k} = j \mid X_m = i, K_n = k, Y = y]   (4)
                                                       = P[X_{m+k} = j \mid X_m = i, K_n = k]   (5)

because the state X_{m+k} is independent of the past history Y given the most recent state X_m. Moreover, by time invariance of the Markov chain,

P[X_{m+k} = j \mid X_m = i, K_n = k] = P[X_{m+k} = j \mid X_m = i] = [P^k]_{ij}.   (6)

Equations (5) and (6) imply

P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y, K_n = k] = [P^k]_{ij}.   (7)

It then follows from Equation (2) that

P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y] = \sum_k P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y, K_n = k] \, P[K_n = k]   (8)
                                              = \sum_k [P^k]_{ij} \, P[K_n = k].   (9)

Thus P[\hat{X}_{n+1} = j \mid \hat{X}_n = i, Y = y] depends only on i and j and is independent of the past history Y, and we conclude that \hat{X}_n is a Markov chain.
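Equation (9) can be checked numerically. The sketch below is my own; the two-state chain P and the PMF of K_n are arbitrary illustrative choices, not from the problem. It builds the sampled chain's transition matrix \sum_k [P^k]_{ij} P[K_n = k] and confirms it is stochastic:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Arbitrary underlying chain and PMF of K_n (illustrative assumptions).
P = [[0.7, 0.3],
     [0.4, 0.6]]
pmf_K = {1: 0.5, 2: 0.3, 3: 0.2}   # P[K_n = k]

# Accumulate P_hat = sum_k [P^k]_{ij} P[K_n = k], as in Equation (9).
n = len(P)
Pk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # P^0
P_hat = [[0.0] * n for _ in range(n)]
for k in range(1, max(pmf_K) + 1):
    Pk = matmul(Pk, P)                 # now Pk = P^k
    w = pmf_K.get(k, 0.0)
    for i in range(n):
        for j in range(n):
            P_hat[i][j] += w * Pk[i][j]

# P_hat is itself a stochastic matrix: the sampled process is a Markov chain.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P_hat)
print(P_hat)
```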
Problem 12.3.2 Solution
At time n - 1, let p_i(n-1) denote the state probabilities. By Theorem 12.4, the probability of state k at time n is

p_k(n) = \sum_{i=0} p_i(n-1) P_{ik}.   (1)

Since P_{ik} = q_k for every state i,

p_k(n) = \sum_{i=0} p_i(n-1) \, q_k = q_k \sum_{i=0} p_i(n-1) = q_k.   (2)

Thus for any time n > 0, the probability of state k is q_k.
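A small sketch (mine; the PMF q and the initial distribution are arbitrary choices) illustrating the result: when every row of P equals q, the state probabilities equal q after a single step and stay there:

```python
# Every row of P is the same PMF q, i.e. P[i][k] = q[k] for all i.
q = [0.2, 0.5, 0.3]                      # arbitrary PMF over three states
P = [q[:] for _ in range(3)]

def step(p, P):
    """One step of p(n) = p(n-1) P."""
    return [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

p0 = [1.0, 0.0, 0.0]                     # arbitrary initial distribution
p1 = step(p0, P)                         # p(1)
p5 = p1
for _ in range(4):                       # iterate on to p(5)
    p5 = step(p5, P)

# For every n > 0 the state probabilities are exactly q.
assert all(abs(a - b) < 1e-12 for a, b in zip(p1, q))
assert all(abs(a - b) < 1e-12 for a, b in zip(p5, q))
```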
Problem 12.3.3 Solution
In this problem, the arrivals are the occurrences of packets in error. It would seem that N(t) cannot be a renewal process because the interarrival times seem to depend on the previous interarrival times. However, following a packet error, the sequence of packets that are correct (c) or in error (e) up to and including the next error is given by the tree

[Tree: after an error, the next packet is in error (e) with probability 0.9 or correct (c) with probability 0.1; each later packet is in error with probability 0.01 or correct with probability 0.99. The branches correspond to X = 1, X = 2, X = 3, ...]

Assuming that sending a packet takes one unit of time, the time X until the next packet error has the PMF

P_X(x) = \begin{cases} 0.9 & x = 1 \\ 0.001(0.99)^{x-2} & x = 2, 3, \ldots \\ 0 & \text{otherwise} \end{cases}   (1)

Thus, following an error, the time until the next error always has the same PMF. Moreover, this time is independent of previous interarrival times since it depends only on the Bernoulli trials following a packet error. It would appear that N(t) is a renewal process; however, there is one additional complication. At time 0, we need to know the probability p of an error for the first packet. If p = 0.9, then X_1, the time until the first error, has the same PMF as X above and the process is a renewal process. If p \neq 0.9, then the time until the first error is different from subsequent renewal times. In this case, the process is a delayed renewal process.
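As a sanity check (my own, not in the text), the PMF in Equation (1) sums to 1, since the geometric tail contributes 0.001/(1 - 0.99) = 0.1:

```python
def pmf_X(x):
    """PMF of the interarrival time X from Equation (1)."""
    if x == 1:
        return 0.9
    if x >= 2:
        return 0.001 * 0.99 ** (x - 2)
    return 0.0

# Partial sum; the geometric tail beyond x = 5000 is negligible (~1e-23).
total = sum(pmf_X(x) for x in range(1, 5000))
assert abs(total - 1.0) < 1e-9
```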
Problem 12.5.5 Solution
For this system, it’s hard to draw the entire Markov chain, since from each state n there are six branches, each with probability 1/6, to states n + 1, n + 2, ..., n + 6. (Of course, if n + k > K - 1, then the transition is to state (n + k) mod K.) Nevertheless, finding the stationary probabilities is not very hard. In particular, the nth equation of \pi = \pi P yields

\pi_n = \frac{1}{6} \left( \pi_{n-6} + \pi_{n-5} + \pi_{n-4} + \pi_{n-3} + \pi_{n-2} + \pi_{n-1} \right).   (1)

Rather than try to solve these equations algebraically, it’s easier to guess that the solution is

\pi = [ 1/K \quad 1/K \quad \cdots \quad 1/K ].   (2)

It’s easy to check that 1/K = (1/6) \cdot 6 \cdot (1/K).
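The guess in Equation (2) is easy to confirm numerically (my own sketch; K = 10 is an arbitrary choice):

```python
# Chain that moves from state n to (n + k) mod K with probability 1/6
# for k = 1, ..., 6.  K = 10 states, chosen arbitrarily for illustration.
K = 10
P = [[0.0] * K for _ in range(K)]
for n in range(K):
    for k in range(1, 7):
        P[n][(n + k) % K] += 1.0 / 6.0

# The uniform vector [1/K, ..., 1/K] satisfies pi = pi P.
pi = [1.0 / K] * K
pi_next = [sum(pi[i] * P[i][j] for i in range(K)) for j in range(K)]
assert all(abs(a - b) < 1e-12 for a, b in zip(pi, pi_next))
```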
Problem 12.5.6 Solution
This system has three states:

0  front teller busy, rear teller idle
1  front teller busy, rear teller busy
2  front teller idle, rear teller busy

We will assume the units of time are seconds. Thus, if a teller is busy one second, the teller will become idle in the next second with probability p = 1/120. The Markov chain for this system is

[Figure: three-state chain. State 0 has self-transition probability 1 - p and goes to state 1 with probability p; state 1 has self-transition probability p^2 + (1-p)^2 and goes to states 0 and 2 each with probability p(1-p); state 2 has self-transition probability 1 - p and goes to state 1 with probability p.]

We can solve this chain very easily for the stationary probability vector \pi. In particular,

\pi_0 = (1 - p)\pi_0 + p(1 - p)\pi_1.   (1)

This implies that \pi_0 = (1 - p)\pi_1. Similarly,

\pi_2 = (1 - p)\pi_2 + p(1 - p)\pi_1.   (2)
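The stationary probabilities can also be checked numerically. This sketch (my own) encodes the three-state chain described above with p = 1/120, finds the stationary vector by power iteration, and verifies the relation \pi_0 = (1 - p)\pi_1 implied by Equation (1):

```python
# Three-state teller chain with p = 1/120, as described above.
p = 1.0 / 120.0
P = [
    [1 - p,       p,                  0.0        ],  # front busy, rear idle
    [p * (1 - p), p**2 + (1 - p)**2,  p * (1 - p)],  # both busy
    [0.0,         p,                  1 - p      ],  # front idle, rear busy
]

# Power iteration: repeatedly apply pi <- pi P until it converges.
pi = [1.0, 0.0, 0.0]
for _ in range(100000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

assert abs(sum(pi) - 1.0) < 1e-9
# The relation pi_0 = (1 - p) pi_1 from Equation (1).
assert abs(pi[0] - (1 - p) * pi[1]) < 1e-9
```

By the symmetry of states 0 and 2 in this chain, the iteration also yields \pi_0 = \pi_2.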
