You are on page 1of 248

Queueing Theory with Applications . . .

:
Solution Manual

QUEUEING THEORY WITH APPLICA-
TIONS TO PACKET TELECOMMUNICA-
TION: SOLUTION MANUAL

JOHN N. DAIGLE
Prof. of Electrical Engineering
The University of Mississippi
University, MS 38677

Kluwer Academic Publishers
Boston/Dordrecht/London

Contents

List of Figures vii
List of Tables ix
1. TERMINOLOGY AND EXAMPLES 1
2. REVIEW OF RANDOM PROCESSES 9
3. ELEMENTARY CTMC-BASED QUEUEING MODELS 47
3.1 Supplementary Problems 82
4. ADVANCED CTMC-BASED QUEUEING MODELS 101
4.1 Supplementary Problems 133
5. THE BASIC M/G/1 QUEUEING SYSTEM 141
6. THE M/G/1 QUEUEING SYSTEM WITH PRIORITY 199
7. VECTOR MARKOV CHAINS ANALYSIS 217
7.1 Supplementary Problems 232

.

2 State diagram for Supplemental Exercise 3.27 129 4.1 Time dependent state probabilities for Exercise 3.6 98 4.1 Complementary distributions 111 4.6 State diagram for Supplemental Exercise 3.3 90 3.2 Complementary distributions 113 4.4 94 3.2 85 3.4 State diagram for Supplemental Exercise 3.3 State Diagram for Problem 4.3 State diagram for Supplemental Exercise 3.22 75 3.List of Figures 3.4 State Diagram for Problem 2a 137 .5 State diagram for Supplemental Exercise 3.5 96 3.

.

170 .1 Occupancy values as a function of the number of units served for the system of Example 5.List of Tables 5.8.

.

Compute τ̃n for 1 ≤ n ≤ 20 and verify that w̃n can be obtained from z̃(t). compute z̃n from (1.3) and w̃n from (1. 4. from which we may solve to find α = 1.3) and w̃n from (1. We must first determine the constants α and β.14358.1 Assume values of x̃ and t̃ are drawn from truncated geo- metric distributions.2) for 1 ≤ n ≤ 100 and compute the average waiting times for the 100 customers.48524892.2) for 1 ≤ n ≤ 20. 1. n=1 n=1 Thus. .292578 and pt = 0. Plot ũ(t) as a function of t.45939316 and β = 1. In particular. Solution Part 1. using the formula for the partial sum of a geometric series. Using your favorite programming language or a spreadsheet. 2. 3. Compute z̃n from (1. we find 10 X 10 X  P {x̃ = n} = P t̃ = n = 1. and let P {x̃ = n} = αpx (1 − px )n and P {t̃ = n} = βpt (1 − pt )n for 1 ≤ n ≤ 10 with px = 0. Since the probabilities must sum to unity. let P {x̃ = n} = 0 and P {t̃ = n} = 0 except for 1 ≤ n ≤ 10.Chapter 1 TERMINOLOGY AND EXAMPLES Exercise 1. generate a sequence of 100 random variates each for x̃ and t̃. we find h i h i α (1 − px ) − (1 − px )11 = β (1 − pt ) − (1 − pt )11 = 1.

ln (1 − px ) We may therefore select random variates from a uniform distribution and use these numbers in place of the above formula in place of P {x̃ ≤ n} and then round up to obtain the variates for x̃. in general. we find ln [(1 − px ) − (1/α)P {x̃ ≤ n}] n= − 1. 100. 2.2 QUEUEING THEORY WITH APPLICATIONS . Pn P {x̃ ≤ n} = = i} = ni=1 αpx (1 − px )i i=1 P {x̃ P = α (1 − px ) − (1 − px )n+1 . and then apply the formulas to obtain the following table: n xn tn zn wn τn 1 3 6 2 0 6 2 5 1 -1 2 7 3 1 6 -4 1 13 4 1 5 -1 0 18 5 7 2 4 0 20 6 2 3 -1 4 23 7 2 3 -3 3 26 8 4 5 3 0 31 9 2 1 1 3 32 10 1 1 0 4 33 11 2 1 -1 4 34 12 1 3 -3 3 37 13 3 4 -3 0 41 14 4 6 2 0 47 15 7 2 3 2 49 16 2 4 -2 5 53 17 5 4 1 3 57 18 1 4 -6 4 61 19 1 7 -6 0 68 20 4 7 -1 0 75 . : SOLUTION MANUAL Now. ln (1 − pt ) By drawing samples  from a uniform distribution and using those values for P {x̃ ≤ n} and P t̃ ≤ n in the above formulas. Solving P {x̃ = n} = αpx (1 − px )n for n. one can obtain the variates for xn and tn for n = 1. · · · . by solving P {t̃ = n} = βpt (1 − pt )n for n we may find the variates for t̃ from    ln (1 − pt ) − (1/β)P t̃ ≤ n n= − 1. . Similarly. .

Terminology and Examples 3 n xn tn zn wn τn 21 1 5 -6 0 80 22 5 7 -1 0 87 23 5 6 2 0 93 24 2 3 -8 2 96 25 4 10 -4 0 106 26 1 8 -1 0 114 27 1 2 -2 0 116 28 4 3 2 0 119 29 3 2 -3 2 121 30 1 6 0 0 127 31 2 1 -2 0 128 32 2 4 -6 0 132 33 1 8 -2 0 140 34 9 3 2 0 143 35 3 7 -6 2 150 36 2 9 -3 0 159 37 5 5 3 0 164 38 3 2 2 3 166 39 4 1 3 5 167 40 2 1 -3 8 168 41 6 5 3 5 173 42 3 3 0 8 176 43 2 3 -2 8 179 44 1 4 -9 6 183 45 5 10 -2 0 193 46 2 7 -8 0 200 47 1 10 -1 0 210 48 1 2 0 0 212 49 2 1 -1 0 213 50 1 3 -4 0 216 51 10 5 8 0 221 52 2 2 -2 8 223 53 3 4 1 6 227 54 1 2 -1 7 229 55 5 2 2 6 231 56 2 3 1 8 234 57 2 1 -1 9 235 58 1 3 -1 8 238 59 1 2 -1 7 240 60 5 2 1 6 242 61 2 4 0 7 246 62 3 2 1 7 248 63 1 2 -1 8 250 64 1 2 -4 7 252 65 5 5 -5 3 257 66 3 10 -2 0 267 67 2 5 0 0 272 68 2 2 -1 0 274 69 1 3 -2 0 277 70 4 3 -2 0 280 .

That is.95 Solution Part 2. τ3 = 13. The values of τn and wn are shown in the above Ta- ble. Therefore. and then would begin the service on the customer that arrived at time t. .07 -0. and u(13− ) = 1.1 shows the unfinished work as a function of time. That is. The values of zn and wn are shown in the above Table.1.From Figure 2.98 3.05 4. For example. it is readily verified that wn = u(τn− ). : SOLUTION MANUAL n xn tn zn wn τn 71 1 6 -2 0 286 72 4 3 -1 0 289 73 5 5 -5 0 294 74 1 10 -3 0 304 75 1 4 -9 0 308 76 1 10 0 0 318 77 4 1 -4 0 319 78 4 8 -4 0 327 79 5 8 -1 0 335 80 4 6 3 0 341 81 6 1 5 3 342 82 2 1 -1 8 343 83 3 3 2 7 346 84 1 1 -3 9 347 85 6 4 0 6 351 86 9 6 2 6 357 87 3 7 -3 8 364 88 7 6 3 5 370 89 3 4 2 8 374 90 4 1 2 10 375 91 2 2 -1 12 377 92 3 3 2 11 380 93 5 1 1 13 381 94 3 4 2 14 385 95 3 1 -4 16 386 96 8 7 4 12 393 97 2 4 1 16 397 98 1 1 0 17 398 99 4 1 -4 17 399 100 2 8 0 13 407 Averages 3. then an amount of time u(t− ) would pass before service on that customer’s job began. the server would first complete the work present in the system at time t− . Solution Part 3. . Figure 2.4 QUEUEING THEORY WITH APPLICATIONS . . if an arrival occurs at time t. w3 = 1.

The average waiting time is shown at the bottom of the above table. 10 7. It is found by simply summing the waiting times and dividing by the number of customers.Terminology and Examples 5 Solution Part 4. .5 u(t) 5 2.1 Unfinished work as a function of time.5 0 0 20 40 60 80 time Figure 2.

The length of the busy period is the total service time accumulated up to the point in time at which the next busy period starts. A busy period starts whenever a customer waits zero time. determine the lengths of all busy periods that occur during the interval. The lengths of the busy periods are given in the following table: 1 9 2 1 3 11 4 10 5 3 6 19 7 1 8 4 9 1 10 5 11 7 12 4 13 1 14 1 15 7 16 1 17 2 18 2 19 1 20 12 21 2 22 26 23 5 24 2 25 1 . whenever some later arriving customer has a waiting time of zero.1. : SOLUTION MANUAL Exercise 1. Using the data obtained in Exercise 1. . Based on the results of this comparison. . 1. 3.6 QUEUEING THEORY WITH APPLICATIONS . speculate about whether or not the average length of the busy period and the average waiting time are related.1. From the table of values given in the previous problem’s solution. we see that a busy period is in progress at the time of arrival of C1 00. Therefore we must decide whether or not to count that busy period.1. We arbitrarily decide not to count that busy period. that is.2 Assume values of x̃ and t̃ are drawn from truncated geo- metric distributions as given in Exercise 1. 2. Compare the average length of the busy period obtained in the previous step to the average waiting time computed in Exercise 1. Determine the average length of the busy period. Solution Part 1.

] Solution. The computed averages are E [w̃] = 3. Discuss the result of reducing the run length below that point. Since we want to have a 90% traffic intensity. the average . Thus. the busy period and the waiting time have the same value.209302326 Solution Part 2. It is sometimes possible to establish a definitive relationship between the two quantities. the average run length will be 1/p10 . Exercise 1.3 For the general case where the packet arrival process has run lengths. Since the run lengths have the geometric distribution. the results are not all that different. there will be a packet arrival in a slot if the system is in the on state.9/N . we choose p01 = p11 = 0. We see that based on this very small simulation. [Hint: A packet arrival occurs whenever the system tran- sitions into the on state. Thus. Solution Part 3.21.Terminology and Examples 7 26 1 27 2 28 1 29 44 30 3 31 2 32 2 33 1 34 4 35 1 36 4 37 5 38 1 39 1 40 1 41 4 42 4 43 5 Average 5. the arrival process becomes a sequence of independent Bernoulli trials. Determine whether or not there exists an average run length at which the packet arrival process becomes a sequence of independent Bernoulli trials. Hence we want to set transition probabilities p01 and p11 equal. The arrival process will be Bernoulli if the probability of having a packet in a slot is independent of the current state. if the probability of transitioning into the on state is independent of the current state. it is seen that the survivor functions decrease with decreasing run lengths. We will see later on that under certain sets of assumptions. If such a choice is possible. In general. find the value of run length at which independence occurs. The average length of the busy period is determined by summing the lengths of the busy periods and dividing by the number of busy periods. The result is shown at the end of the above table.95 and E [ỹ] = 5.

8 QUEUEING THEORY WITH APPLICATIONS . . Similarly the average off time will be N 0. . : SOLUTION MANUAL run length will be N N − 0.9.9. .

1. N }. {C. {N }.70 {T. S. there are a total of four possible outcomes and the number of events is 16. N }. S. N } 0.1.96 {T.15 {T. N }. N } 1. N } 0. {T. C}. C.1. the probability of each event is the sum of the probabilities of its con- stituent events. S. N }.11 {T. N } 0. N } 0.30 {S} 0. {C}. S. For the experiment described in Example 2. Show that it is always true that card (Ω) = 2card (S) . {S}. S.] . where card (A) denotes the car- dinality of the set A. For example. [Hint: The events can be put in one-to-one correspondence with the card (S)-bit binary numbers. {T.81 {C.19 {T. S} 0. Solution. which are mutually ex- clusive. we find the results given in the following table: Event Probability Event Probability ∅ 0. {S.04 {T. C. By proceeding in this manner. S}. C. {C.85 {N } 0.77 {C} 0. {T }. C.19+0. S} 0. {T. {T. Since each of the events is a union of elementary events. S. {T. S}. P {T.89 {T } 0.00 {C. C. in turn. which.11+0. C} 0.23 {T.0. specify all of the event probabilities. {T. {C. C. S} 0.04 = 1.66+0. the events are as fol- lows: ∅. is the number of elements of A.34 {S.1 For the experiment described in Example 2.66 {C.00 Exercise 2. N } 0. N } = P {T }+P {C}+P {S}+P {N } = 0. S. N } 0. {T. S}.Chapter 2 REVIEW OF RANDOM PROCESSES Exercise 2. N }.2 For the experiment described in Example 2. N }. C. N }.

1 2 1 P {ω1 } = P {s1 } + P {s2 } + P {s3 } = + +0 = . We then find for experiment A. we find P {si } = 15 for si ∈ SB and si = 0 otherwise. Exercise 2. The set of all possible outcomes in experiment A is SA = {s1 . in the experiment of Eample 2. . and d0 . s2 . s6 . then these events can be ordered and put in one-to-one correspon- dence with the digits of an card (S)-digit binary number.10 QUEUEING THEORY WITH APPLICATIONS . say.3 Repeat the computations of Example 2. and compute P {ω1 } in each of the experiments. Since there are a total of 2N N -digit binary numbers. for experiment B. s8 }.1. where kA is a normalizing parameter such that X P {si } = 1. we have 0 0 3 1 P {ω1 } = P {s1 } + P {s2 } + P {s3 } = + + = . For example. the di = 1. d1 . there are exactly 2card (S) events in any experiment. : SOLUTION MANUAL Solution. 21 21 7 i Similarly. Then. The total number of events is 24 = 16. we have four events. For experiment A. . 1010 represents the event {T. The idea is to construct two different experiments. Thus. every event is characterized by the presence or absence of elementary events.] Solution. Every event in the event space consists of a union of the elementary events of the experiment. else di = 0. Let T . C. S}. Then.2 by constructing restricted experiments based on SA and SB . If elementary event i is present in some event. and N be represented by the binary digits d3 . [Hint: The probabilities of the elementary events in the restricted experiments must be normalized by di- viding the probabilities of the individual elementary events of the restricted experiment by the sum of the probabilities of the constituent elementary events of the restricted experiment. dcard (S)−1 dcard (S)−2 · · · d0 . . s4 . si ∈SA We find KA = 3621 so that P {si } = i 21 for si ∈ SA and P {si = 0} otherwise. A and B. Suppose an experiment has card (S) elemen- tary events. for experiment B. d2 . we compute the probabilities of each elementary event to i be P {si } = kA 36 . S. 15 15 15 5 We recognize these probabilities to be the same as the conditional probabil- ities previously computed. respectively.

5 Suppose ω1 . 104. . ω2 ∈ Ω with P {ω1 } = 6 0 and P {ω2 } = 6 0. we have P {ω1 |ω2 } P {ω2 } P {ω2 |ω1 } = . the result follows. Since the other event is not the null event. then ω1 is equivalent to the null event. 70.54. then necessarily. then P {ω1 |ω2 } = P {ω1 }.4. Suppose P {ω1 } = 0 but P {ω2 } = 6 0. which implies P {ω1 |ω2 } = 0. P {ω1 } If ω1 is statistically independent of ω2 . From Bayes’ rule. P {ω2 } But. and thus. In other words.07. 99. show that statistical inde- pendence between events is a mutual property. Solution. Thus. Therefore. its unconditional probability of occurrence is greater than zero. Solution. Hence no event other than the null event can be statistically independent of the null event.99.48. Thus. 96.75. We assume ω1 and ω2 are two different events.35. P {ω1 ω2 } = 0.6 Develop the expression for dFc̃ (x) for the random variable c̃ defined in Example 2. and its support set is C = {56.29. 75. that is. ω2 is statistically independent of ω1 . In words. then we are certain no other event has occurred. If P {ω1 } = 0. 132. all subsets of the null event are the null event.Review of Random Processes 11 Exercise 2. 84. if we know the null event has occurred.22} . Exercise 2. Solution. P {ω1 ω2 } ≤ P {ω1 } and P {ω1 ω2 } ≥ 0. ω1 is independent of ω2 . P {ω1 } P {ω2 } P {ω2 |ω1 } = = P {ω2 } . Exercise 2. Then P {ω1 ω2 } P {ω1 |ω2 } = . the probability that any event has occurred given that the null event has actually occurred is zero. Show that if ω1 is statistically independent of ω2 .4 is discrete. Discuss the concept of independence between ω1 and ω2 . ω2 ∈ Ω but P {ω1 } = 0 or P {ω2 } = 0. The random variable c̃ defined in Example 2. P {ω1 } Since P {ω2 |ω1 } = P {ω2 } is the definition of ω2 being statistically independent of ω1 .4 Suppose ω1 .

055555556 132.75. . we find E [c̃] = 79.48 0.22)dx 36 36 Exercise 2.138888889 96. : SOLUTION MANUAL We then find.22} (2.4 and the previous problem. we have the following table: c P {c̃ = c} 56. Solution. X c∈C From Example 2.07)dx + δ(x − 75.1126743.54 0.70.99 0.222222222 70. X E [c̃] = cP {c̃ = c} . and var (c̃) = 321.166666667 84.132.84.54)dx + δ(x − 132. c∈C h i var (c̃) = E c̃2 − E 2 [c̃].104.194444444 75.12 QUEUEING THEORY WITH APPLICATIONS .7 Find E[c̃] and var (c̃) for the random variable c̃ defined in Example 2.35.99)dx + 36 36 36 2 1 δ(x − 104. and h i E c̃2 = c2 P {c̃ = c} .083333333 104.07 0. E c̃2   = 6562.07. . c∈{56.22 0.35 0.75)dx + δ(x − 99.4. .96. 8 7 6 dFc̃ (x) = δ(x − 56. By definition.99.29 0.48)dx + δ(x70.35)dx + δ(x − 96.75.1) Thus.54.027777778 Upon performing the calculations.99.48. X dFc̃ (x) = P {c̃ = c} δ(x − c)dx c∈C X = P {c̃ = c} δ(x − c)dx.29.75 0.111111111 99.29)dx + 36 36 36 5 4 3 δ(x − 84.0025.507681.

But. P {x̃1 = i} > 0 only if i is over only the integers from 5 to 14. . Solution. i=max {5.8 Let X1 and X2 denote the support sets for x̃1 and x̃2 .n−11} X P {x̃1 + x̃2 = n} = P {x̃2 = n − i | x̃1 = i} P {x̃1 = i} . i=5 Exercise 2. Then. We also see that x̃1 + x̃2 can take on all values between 16 and 36. we need i ≥ max {5. Hence. we find immediately that the support set for x̃1 + x̃2 is {16. .n−22} For example. . We wish to find an expression for fx̃1 +x̃2 (x) given the densities d fx̃1 (x) and fx̃2 (x). 6. Specialize (2. Equation (2. . 14} and X2 = {11. . x] into N equal-length intervals an assume that the random variable x̃1 has masses at the points i∆X. 6. and for n ∈ {5.18−11} X P {x̃1 + x̃2 = 18} = P {x̃2 = n − i | x̃1 = i} P {x̃1 = i} i=max {5. . Fx̃1 +x̃2 (x)(x) = P {x̃1 + x̃2 ≤ x}.14) as indicated in the previous paragraph. We. . Solution. min {14. We first note that fx̃1 +x̃2 (x) = dx Fx̃1 +x̃2 (x)(x).9 Derive (2. We also see that since x̃1 ranges only over the integers from 5 to 14. . P {x̃1 + x̃2 = n} = 0. i=0 First we note that the smallest possible value of x̃1 + x̃2 is 5 + 11 = 16. for P {x̃1 = i} > 0. 36}. .12) where X1 = {5. 17. . .Review of Random Processes 13 Exercise 2. therefore have the following result. and the largest possible value x̃1 + x̃2 can take on is 14 + 22 = 36. . 36}.18−22} 7 X = P {x̃2 = n − i | x̃1 = i} .12) is as follows: n X P {x̃1 + x̃2 = n} = P {x̃2 = n − i | x̃1 = i} P {x̃1 = i} . 36}. 6. For n 6= {5. min {14. n − 11. N X P {x̃1 + x̃2 ≤ x} = P {x̃1 + x̃2 ≤ x|x̃1 = i∆x} P {x̃1 = i∆X} . suppose we partition the interval (0. so that probx̃2 = n − i > 0 only if n − 11 ≥ i and n − 22 ≤ i or i ≤ n − 11 and i ≥ n − 22. . . . where ∆X = Nx . Thus. respectively. . Similarly. Now. n − 22} and i ≤ min set14. . 12. 22}. . . probx̃2 = n − i > 0 only if n − i ≥ 11 and n − i ≤ 22. i=1 . . .

Solution. we compute P {x̃ > α + β|x̃ > β} and see whether this quantity is equal to P {x̃ > α}. x]. for every α. Also. that is. : SOLUTION MANUAL But P {x̃1 = i∆X} ≈ fx̃1 (i∆X)∆X. x̃ > β} P {x̃ > α + β|x̃ > β} = . Recall the definition of memoryless: A random variable x̃ is said to be memoryless if. N X P {x̃1 + x̃2 ≤ x} = P {x̃1 + x̃2 ≤ x|x̃1 = i∆x} fx̃1 ∆X i=1 XN = P {x̃2 ≤ x − i∆X|x̃1 = i∆x} fx̃1 (i∆X)∆X. From the definition of conditional probability. so. P {x̃ > β} = e−λβ . we find P {x̃ > α + β. 0 Exercise 2. The sum then passes to the integral over (0. we can pass i∆X to the continuous variable y and ∆X to dy. . P {x̃ > β} But the joint probability of the numerator is just P {x̃ > α + β}.10 Prove Lemma 2. β ≥ 0. . i=1 In the limit. we simply show that the exponential distribution satisfies the definition of memoryless. which yields the result Z x P {x̃1 + x̃2 ≤ x} = P {x̃2 ≤ x − y|x̃1 = y} fx̃1 (y)dy. In order to prove the lemma. and only if. and for the exponential distribution is P {x̃ > α + β} = e−λ(α+β) . 0 Differentiating both sides with respect to x then yields Z x fx̃1 +x̃2 (x) = fx̃2 |x̃1 (x − y|y)fx̃1 (y)dy.1.14 QUEUEING THEORY WITH APPLICATIONS . P {x̃ > α + β|x̃ > β} = P {x̃ > α}. . 0 or Z x Fx̃1 +x̃2 (x) = Fx̃2 |x̃1 (x − y|y)fx̃1 (y)dy.

the same result gives g(1) = g(1/m)m from which it follows that g(1/m) = g(1)1/m . g(t + s) = g(t)g(s) ∀s. n. e−λβ Thus.Review of Random Processes 15 Thus. so assume (n − 1) ∈ T . j=1 m j=1 m m But. 458 .11 Prove Lemma 2.460. m m If we now take λ = − ln [g(1)]. Substitution of the latter result into the conclusion of the proposition yields n−1  1 1   g(n/m) = g g = g(1)n/m for all integer m. [1968] pp. Further. Since the rationals are dense in the reals. t > 0. n ∈ T . we have g(n/m) = e−λ(n/m) for all integer m. j=1 m m and by using the inductive hypothesis. the result of the proposition gives g(n) = g(1)n . Now. we have n−1  n 1 1 1    g(n/m) = g g =g for all integer m. g is a right continuous function so that by analytic continuity. Let T denote the truth set for the proposition.2.] Solution. but it is strongly recommended that the exercise be attempted without going to the reference. Then     n n−1 X 1 X 1 1 g(n/m) = g   = g + . Exercise 2. g(t) = e−λt for all integer t > 0. n. with m = 1. by hypothesis of the Lemma 2. and with n = m. n. [Hint: Start with rational arguments. m m m This completes the proof of the proposition. . the definition of memoryless is satisfied and the conclusion follows. We first prove the proposition that g(n/m) = g(1/m)n for an arbi- trary integer m. Extend to the real line using a continuity argument. each real number is the limit of a sequence of rational numbers. Thus. Trivially.2.   n−1 1 1 X   g(n/m) = g  g . The proof is given in Feller. e−λ(α+β) P {x̃ > α + β|x̃ > β} = = e−λα P {x̃ > α}.

.12 Repeat Example 2. Thus. it is impossible for Alice to finish her call before Charlie. Then dn ∗ .2. Even if Bob finishes his call at precisely the moment Alice walks up.13 Prove Theorem 2. Bob and Charlie will use the phone exactly one time unit each. : SOLUTION MANUAL Exercise 2. Since assuming exponential hold- ing times results in P {Alice before Charlie} = 1/4. exponentiality is not an appropriate assumption in this case. Exercise 2. How do the results compare? Would an exponential assumption on service-time give an adequate explanation of system performance if the service-time is really deterministic? Solution. Charlie’s time remaining is strictly less than one. Let x̃ be a non- negative random variable with distribution Fx̃ (x). assuming all students have a deter- ministic holding time of one unit.5. So. . Charlie has already begun his.16 QUEUEING THEORY WITH APPLICATIONS .2. and let Fx̃∗ (s) be the Laplace-Stieltjes transform of x̃. while Alice’s time to completion is exactly one in the best case. We see this as follows: Alice. Theorem2. P {Alice before Charlie} must be zero.

.

.

2 n E[x̃ ] = (−1) F (s) . dsn x̃ .

8.2) This proves the proposition. ds x̃ 0 and so 1 ∈ T . Fx̃∗ (s) = E[e−sx ]. so that Z ∞ d ∗ F (s) = (−x)e−sx dFx̃ (x). dsn x̃ 0 Let T be the truth set for the proposition. By Definition 2. Then dn+1 ∗ d dn ∗   F (s) = F (s) dsn+1 x̃ ds dsn x̃ Z ∞ d   n n −sx = (−1) x e dFx̃ (x) ds 0 Z ∞ n+1 = (−1) xn+1 e−sx dFx̃ (x). We may now show dn ∗ .s=0 Solution. 0 (2. We first prove the proposition that dn ∗ ∞ Z F (s) = (−1)n xn e−sx dFx̃ (x). Now assume that n ∈ T .

.

.

.

n R ∞ n −sx .

(−1) Fx̃ (s) = 0 x e dFx̃ (x).

dsn .

s=0

s=0

Review of Random Processes 17
R∞
= 0 xn dFx̃ (s)
= E[x̃n ].
(2.3)

Exercise 2.14 Prove Theorem 2.3. Let x̃ and ỹ be independent, non-
negative, random variables having Laplace-Stieltjes transforms Fx̃∗ (s) and
Fỹ∗ (s), respectively. Then, the Laplace-Stieltjes transform for the random
variable z̃ = x̃ + ỹ is given by the product of Fx̃∗ (s) and Fỹ∗ (s).
Solution. By the definition of Laplace-Stieltjes transform,
Fz̃∗ (s) = E[e−sz̃ ] = E[e−s(x̃+ỹ) ]
= E[e−sx̃ e−sỹ ].
Now, because x̃ and ỹ are independent, h̃1 (x̃) and h̃2 (ỹ) are also indepen-
dent for arbitrary functions h̃1 and h̃2 . In particular, e−sx̃ and e−sỹ are inde-
pendent. Since the expectation of the product of independent random variables
is equal to the product of the expectations,
E[e−sx̃ e−sỹ ] = E[e−sx̃ ]E[e−sỹ ].
That is,
Fz̃∗ (s) = Fx̃∗ (s)Fỹ∗ (s).

Exercise 2.15 Let x̃ be an exponentially distributed random variable
with parameter λ. Find Fx̃∗ (s).
Solution. By definition,
h i Z ∞
Fx̃∗ (s) = E e−sx̃ = e−sx dFx̃ (x).
0

For x̃ exponential, dFx̃ (x) = λe−λx dx, so
Z ∞
Fx̃∗ (s) = e−sx λe−λx dx
0
Z ∞
= λe−(s+λ)x dx
0
λ
Z ∞
= (s + λ)e−(s+λ)x dx
s+λ 0
λ
=
s+λ

Exercise 2.16 Let x̃ be an exponentially distributed random variable
with parameter λ. Derive expressions for E[x̃], E[x̃2 ], and V ar(x̃). [Hint:
Use Laplace transforms.]

18 QUEUEING THEORY WITH APPLICATIONS . . . : SOLUTION MANUAL

Solution. By Theorem 2.2,
dn ∗

.

E[x̃n ] = (−1)n F (s).

Fx̃∗ (s) = λ+s . Then d λ  .5. . dsn x̃ s=0 λ where by Exercise 2.

.

E[x̃] = (−1) .

ds λ + s .

s=0 "  −2 # .

1 .

1 = (−1) λ .

λ+s . = .

s=0 λ 2   .

.

E[x̃2 ] = (−1)2 ds d 2 λ λ+s .

.

s=0 " −3 # .

1 2  .

= 2λ .

= . λ+s .

1. Find the distribution of z̃ = min{x̃. P {x̃ > z. Therefore. Note that for the exponential random variable. z̃ = min{x̃. 3. ỹ} and z̃ > z means x̃ > z and ỹ > z. note that P {z̃ ≤ z} = 1 − P {z̃ > z}.17 Let x̃ and ỹ be independent exponentially distributed ran- dom variables with parameters α and β. ỹ > z} . As noted in the hint. s=0 λ2 We may now compute Var(x̃) as follows: Var(x̃) = E[x̃2 ] − E 2 [x̃]  2 2 1 = 2 − λ λ = λ1 . Solution. ỹ > z} = P {x̃ > z} P {ỹ > z} . [Hint: Note that z̃ = min{x̃. the mean and variance are equal. First. ỹ} and z̃ > z means x̃ > z and ỹ > z. Show that the conditional distribution Fz̃|x̃<ỹ (z) = Fz̃ (z).] 2. Since x̃ and ỹ are independent. Exercise 2. P {z̃ > z} = P {x̃ > z. respectively. 1. Find P {x̃ < ỹ}. ỹ}.

We compute the required probability by conditioning on the value of x̃ as follows: R∞ P {x̃ < ỹ} = 0 RP {x̃ < ỹ|x̃ = x} dFx̃ (x) = 0∞R P {x < ỹ} dFx̃ (x) = 0∞ e−βx αe−αx dx α R∞ −(α+β)x = α+β 0 (α + β)e dx α = α+β . We have Fz̃|x̃<ỹ (z) = P {z̃ ≤ z|x̃ < ỹ} . ã. 2. But. Fz̃|x̃<ỹ (z) = 1 − P {z̃ > z|x̃ < ỹ} P {z̃>z. Thus. is exponentially distributed with parameter β. ỹ} P {z̃ > z. is exponentially distributed with parameter α and the time required for Betsy to complete. Thus.18 Suppose Albert and Betsy run a race repeatedly. since z̃ = min{x̃.x̃<ỹ} =1− P {x̃<ỹ} . 3. we find P {z̃ > z|x̃ < ỹ} = e−(α+β)z so that Fz̃|x̃<ỹ (z) = 1 − e−(α+β)z = Fz̃ (z). Thus. b̃. Let ñb denote the number of times Betsy wins before Albert wins his first race. Find P {ñb = n} for n ≥ 0. P {z̃ ≤ z} = 1 − e−(α+β)x . . so that z̃ is exponentially distributed with parameter λ + α. x̃ < ỹ} R ∞= P {x̃ > z. upon using th eresult of part (ii).Review of Random Processes 19 = e−αx e−βx = e−(α+β)x . Exercise 2. The time required for Albert to complete the race. and the proof is complete. We begin with the definition of Fz̃|x̃<ỹ (z). x̃ < ỹ} = z RP {x̃ < ỹ|x̃ = x} dFx̃ (x) = z∞R∞P {x < ỹ} dFx̃ (x) = z e−βx αe−αx dx α R∞ −(α+β)x = α+β z (α + β)e dx α = α+β e−(α+β)z .

That is. b̃i denote the races times of Albert and Betsy. i=1 i then (αy)N −2 fPN−1 x̃ (y) = αe−αy . . So suppose the result holds for n = N − 1. the probability that Betsy wins one race and then Albert wins the second. ñb is geometric with parameter α+β . ãn > b̃n . 2. ·P · · are as in the statement of the exercise and n is a f ixed integer. Let ãi . independent of {x̃i . ã2 < b̃2 n o n o = P ã1 > b̃1 P ã2 < b̃2 β α = α+β · α+β . during the second race they don’t ‘remember’ what happened during the first race. ãn+1 < b̃n+1 n o n o n o n o = P ã1 > b̃1 P ã2 > b̃2 · · · P ãn > b̃n P ãn+1 < b̃n+1  n β α = α+β α+β . Now consider P {ñb = 1}. P ã < b̃ = α+β .} be a sequence of exponentially dis- tributed random variables and let ñ be a geometrically distributed random variable with parameter p. First we prove a lemma.19 Let {x̃i . : SOLUTION MANUAL Solution. the probability that Albert wins the very first race. i=1 i (N − 2)! PN PN −1 Using the fact that i=1 x̃i = i=1 +x̃N leads to R∞ fPN x̃i (y) = 0 fx̃N (y − t)fPN−1 x̃ (t)dt i=1 i=1 i .20 QUEUEING THEORY WITH APPLICATIONS . then ỹ = ni=1 x̃i has the gamma distribution with parameter (n. Then n o P {ñb = 1} = P ã1 > b̃1 . Exercise 2. Since nAlbert’so time must be less than Betsy’s n forohim to win. First consider P {ñb = 0}. the result holds trivially. N ≥ 2. α). i=1 Show that ỹ has the exponential distribution with parameter pα. . ã2 > b̃2 . · · · . i = 1. we see that n o P {ñb = n} = P ã1 > b̃1 . Thus the times for each race for each runner are independent of each other. Repeating this argument for arbitrary n. i = 1. α That is. . this α probability is P ã < b̃ . Solution. on the ith race. If fPN−1 x̃ (y) is the probability density function of ỹ.7 (iii). Let ñ X ỹ = x̃i . i = 1. hence they are memoryless.}. By Exercise 2. Both ã and b̃ are exponential random variables. Proof: If n = 1. . . . . respectively. . Lemma: If x̃i . 2. 2.

respectively. . Denote the ith variate for x̃ and ỹ by xi and yi . and set z̃ = min{x̃. . . respectively. Let x̃ and ỹ denote the two independent exponential random variables with rates 1 and 2. Using a spreadsheet (or a computer program- ming language). Thus. P 100 ȳ = (1/100) i=1 yi . . . for j = 1. This proves the Lemma. The following sequences for x̃. 2. then the time to state change is exponentially distributed with parameter α. Recall that here ñ is a random variable and is independent of x̃i . To complete the exercise. fỹ (y) = fPñ x̃i (y) P∞ i=1 = n=1 x̃i (y)P {ñ = n} fPn i=1 N−1 = n=1 αe−αy (αy) P∞ n−1 (N −1)! p(1 − p) (αy(1−p))n = pαe−αy ∞ P n=0 n! = pαe−αy eαy(1−p) = pαe−pαy . we condition the probability density function of ỹ on the value of ñ.” Solution. that is P 100 compute x̄ = (1/100) i=1 xi . Is w̄ closer to x̄ or z̄? Now give an intuitive explanation for the statement. n. and w̄ = (1/100) j=1 wj . generate a sequence of 100 variables for each of the ran- dom variables. i = 1. and define z̃ = min{x̃. “It is tempting to con- clude that if one knows the state change was caused by the event having its interevent time drawn from the distribution Fx̃ (x). · · ·. Compute P 100 the sample averages for P100 the variates. . 100. Let n denote the number of values of i such that xi < yi . z̃. Exercise 2. .Review of Random Processes 21 N−2 αe−α(y−t) αe−αt (αt) R∞ = 0 (N −2)! dt N−1 = αe−αy (αy) (N −1)! .20 This exercise is intended to reinforce the meaning of Property 2 of exponential random variables. z̄ = (1/100) i=1 zi . ỹ. 2. and w̃ were generated using a spreadsheet package: . but this is false. 2. Compare the results. ỹ} for i = 1. ỹ}. let ij denote the jthe such value and define wj = zi . .

2435585 0.21006062 0.58175933 0.95500193 0.1304179 1.53641231 0.08348682 8 1.74196329 11 0.18397153 0.7438511 0.1339849 22 0.04403085 0.43283182 1.53356533 0.27631063 39 1.15029115 0.81441406 0.20431873 0.1701992 19 0.08247806 0.39268839 6 4.15029115 20 0. .64094628 9 2.28216752 26 0.2989296 0.73515498 28 0.1702069 0.01412904 0.98337703 1.84397193 0.17315955 0.58175933 17 0.32353766 18 1.18449179 0.45688209 0.1929263 1.06286597 0.1339849 0.83148476 0.25982677 0.1339849 0.42970352 0.08376639 0.53356533 34 2.16714174 0.19140255 36 0.19140255 0.21725415 29 1.08376639 5 1.05653863 0.43923471 0. : SOLUTION MANUAL n xn yn zn wn 1 1.31830474 0.27631063 0.7575555 0.38827964 0.32710226 1.05653863 25 1.1929263 10 2.1701992 0.24667042 0.2435585 24 1.64094628 0.2392642 0.07873111 0.31830474 15 1.56052792 0.01878152 1.63894337 35 0.13928522 0.65195416 0.83148476 .28216752 2.55841417 0.36686726 0.77950928 0.55841417 40 0.28216752 1.34361333 7 3.38827964 0.41245136 0.16714174 2 1.27687641 0.63137432 0.11265824 0.18449179 3 2.08640858 0.38827964 13 0.63894337 0.14791652 31 0.08247806 27 0.08348682 0.04403085 0.73515498 1.14791652 1.73515498 0.75348525 0.07468417 0.89695466 0.21725415 0.43212736 0.2989296 23 0.29300628 0.93645887 0.22 QUEUEING THEORY WITH APPLICATIONS .17315955 0.04403085 21 0.46050301 0.64144138 0.17315955 37 0.74196329 0.01412904 14 0.11265824 12 0.54172938 0.34361333 1. .61897506 0.20688382 0.07468417 4 0.58498602 0.46251518 1.7438511 33 6.1304179 16 0.46050301 0.58175933 0.4729621 0.07873111 32 3.32353766 0.39268839 0.06286597 30 1.46050301 38 0.

13168134 0.30356611 0.50033913 54 0.85560022 0.42281911 65 0.38888728 0.28880168 0.57224943 0.10942768 0.03148097 0.42281911 1.27826342 74 1.05800158 57 0.57224943 0.4642691 0.43087249 0.00060148 0.4176623 0.86907458 0.03148097 0.24678196 51 1.31332586 0.44562071 0.36374508 0.92824082 0.79283399 1.33542262 0.Review of Random Processes 23 n xn yn zn wn 41 0.28880168 0.28880168 0.19271322 0.25010753 0.02385716 0.25010753 64 0.52810458 0.26106201 0.13168134 47 2.08457892 0.10829625 0.58957361 46 0.67811659 0.55708733 60 0.28160211 0.13146297 0.75585187 0.06398327 0.42532257 0.93021164 0.83064034 0.56825514 0.34664989 70 0.70889554 0.03148097 55 2.48084293 0.1030878 0.15881452 0.21198004 0.33542262 49 0.02385716 43 0.23169248 0.02385716 0.29128385 0.36727522 69 0.55708733 0.87417563 0.79283399 0.5614103 0.05496071 0.44562071 45 0.5614103 1.79283399 .8008909 0.73587666 0.57224943 68 1.73581018 56 2.03337237 0.42603754 48 0.16509961 63 0.37414692 0.00647539 0.58372847 0.17958999 0.02778907 0.19616038 0.24678196 0.05496071 77 0.01464839 0.00071004 76 0.03337237 79 1.00071004 0.08457892 66 2.03337237 0.06398327 44 0.42603754 0.36374508 71 0.85560022 80 0.21254329 0.73581018 0.42532257 62 0.28880168 59 2.12054774 52 1.00647539 0.27826342 0.05800158 0.48084293 67 0.44562071 1.20835641 0.43420203 0.02321168 0.31306894 0.42281911 0.04647351 50 0.5614103 58 0.04647351 0.15051176 0.08658913 0.48428777 0.34664989 0.50033913 0.1030878 73 0.36374508 0.16509961 0.04647351 0.26106201 72 1.25010753 0.26106201 0.02778907 42 0.23169248 75 0.00647539 61 1.25354251 0.36727522 0.12054774 0.24678196 1.58372847 53 0.58957361 0.43087249 78 0.

25514443 0.9035322 0.00704379 0.98820564 0.02886531 0.46098839 0. Now. and so will not be exponential with rate α. ỹ.58648222 83 1.14403435 0.38083248 0.06881894 0.5 for ỹ. Thus z̃ is not a representation of x̃.01055528 0.06949872 90 0.98820564 as compared to the expected value of 1.28810271 89 0.93930994 0. Hence the same result would hold if w̃ represented those yi that were less than their corresponding xi values.41854311 0.40806585 87 0.37903079 0.3453931 Mean of xi ′ s Mean of yi ′ s Mean of zi ′ s Mean of wi ′ s 0.9430666 0.30661797 86 0.33372236 Solution.00704379 100 0. and therefore of z̃.18747991 0.05314163 97 0.46098839 as compared to the expected value of 0. and z̃ are close to their expected values: 0.06949872 0.40806585 0.08817172 85 0.08817172 0. Observe that the sampled means of x̃. respectively.58648222 0.23571032 0.28810271 0.37903079 0.08515753 0.33333333.21641837 0.8636937 10.66163598 0.18747991 94 0. .68434451 0.10592748 91 0.05314163 0.) Note that the sampled mean of w̃ is very close to the sampled mean of z̃. .08091491 88 2.24 QUEUEING THEORY WITH APPLICATIONS .06949872 0.8205643 46.37903079 96 0.07335002 82 2. (Recall that if α and β are the rates for x̃ and ỹ.33863694 0. hence E[z̃] = 0.08091491 0.01055528 98.07335002 0. The distribution of w̃.23571032 95 0.9035322 92 0.0 for x̃.26009176 98 0.9035322 0. does not depend on whether α < β or β < α. then 1 E[z̃] = α+β .78172674 0.41854311 99 0.68718355 0.90496445 0.0988392 33.08817172 0. the wi are selected from the samples of z̃.07335002 2. .25523489 0.14198992 0.81272977 0.14403435 84 0.44464432 0. In this exercise α = 1 and β = 2.62340183 0.13090162 0.41854311 0.00704379 0. : SOLUTION MANUAL n xn yn zn wn 81 0.27739983 0.33863694 for z̃.30661797 0. Once the samples of z̃ are chosen. 0.26009176 0.26009176 0.10592748 0. and 0.06881894 93 0.74250208 0. it does not matter whether the original variates came from sampling the distribution of x̃ or of ỹ.

then there is a 100% probability it will not burn out in the next. 1. and 4. because of “peak” hours and slow times. This process does not have independent increments. 2. Certainly this is not stationary. however. Pas- sengers usually will not be interested in how many people arrived before them. 3. Then the process can be modeled using stationary and independent increments. independent but not stationary increments. Stationary. Nor is this process independent. it is generally agreed that the birth rate increases with time. Then this process would likely have independent increments. The birth of every individual alive today was dependent upon the births of his or her parents. not stationary: Count the number of customers who enter a store over an hour-long period. Illi- nois (a small farming community). 3. so it is memory- less. 4. Independent. For the final part of this exercise. both stationary and independent increments. this counting process is not likely to have stationary increments. not independent: Count the number of failures of a light bulb in any small time interval. assume the airport has an infinite airplane capacity. and so on. since it can have only one failure. Both stationary and independent: Count the number of phone calls through an operator’s switchboard between 3:00 am and 5:00 am in Watseka. neither stationary nor independent increments. 2. Neither stationary nor independent: Count the Earth’s population. stationary but not independent increments. The number of customers that enter during one 1-hour time interval likely will be independent of the number that enter during a non-overlapping 1-hour period.Review of Random Processes 25 Exercise 2. Assume the store has infinte capacity.21 Define counting processes which you think have the fol- lowing properties: 1. Let the failure rate be exponential. December 31 of the previous year? Solution. Hence the probability it fails in one fixed-length interval is the same as in any other. And their births were dependent upon the births of their parents. However. . If the light burns out in one increment. What would you think would be the properties of the process which counts the number of passengers which arrive to an airport by June 30 of a given year if time zero is defined to be midnight.

limh→0 h = limh→0 h = 0. current. .22 Show that E[ñ(t + s) − ñ(s)] = λt if {ñ(t). 2. 1. t > 0 5. determine whether the function is o(h) or not. f (t) = te−at for a. t > 0 Solution. h2 2. : SOLUTION MANUAL they are concerned in their own.26 QUEUEING THEORY WITH APPLICATIONS . this counting process is not likely to have stationary increments since there will be peak and slow times for flying. f (t) = t. (For example: “red eye” flights are during slow times. Furthermore. limh→0 h = limh→0 1 = 1. Therefore f (t) = t 2 is not o(h). Therefore f (t) = t2 is o(h). f (t) = e−at for a. . t > 0} is a Poisson process with rate λ.) Exercise 2. Solution. h 1. 1 3. f (t) = t 2 . f (t) = t2 . Therefore f (t) = t is not o(h). 1 1 1 h2 3.23 For each of the following functions. Your determination should be in the form of a formal proof. flight. limh→0 h = limh→0 h− 2 = ∞. 4. . ∞ X E [ñ(t + s) − ñ(s)] = nP {ñ(t + s) − ñ(s) = n} n=0 ∞ " # X (λt)n e−λt = n n=0 n! ∞ " # X (λt)n e−λt = n n=1 n! ∞ " # X (λt)n e−λt = n n=1 n − 1! ∞ −λt X (λt)n−1 = (λt) e n − 1! n=1 ∞ (λt)n = (λt) e−λt X n=0 n! = (λt) e−λt eλt = λt Exercise 2.

3. limh→0 he h = limh→0 e−ah = e0 = 1. by the definition of o(h). the scalar is −1. (Here. t > 0 is not o(h). So suppose that f (0) is not equal to 0. t > 0 is not o(h). since e−ah → 1 as h → 0. 1. f (t) is continuous. q(h) limh→0 f (h) 0 lim = = h→0 h limh→0 g(h)h 0 . 4. Hence it must be that a = 0. d(t) = f (t) − g(t) 3. q(t) = f (t)/g(t) Rt 5. then f (0) = 0. so that the required limit is zero. By the Lemma. a > 0. d(t) = f (t) − g(t) is o(h) by the additive and scalar multiplication proper- ties of limits. limh→0 g(h) = limh→0 f (h) = 0. proof: Clearly.24 Suppose that f (t) and g(t) are both o(h). Lemma: If f (t) is o(h).) We now prove an itermediate result.Review of Random Processes 27 −ah −ah 4. Determine whether each of the following functions is o(h). Then f (h) lim =∞ h→0 h since a > 0. Exercise 2. say f (0) = a. But f (t) is o(h). Since limh→0 f (h) and limh→0 g(h) exist independently. So f (h)g(h) f (h)    lim = lim lim g(h) = 0 · 0 = 0 h→0 h h→0 h h→0 So p(t) = f (t)g(t) is o(h). s(t) = f (t) + g(t) 2. s(t) = f (t) + g(t) is o(h) by the additive property of limits. Therefore f (t) = e−at for a. So if f (t) is o(h) then f (0) = 0. 1. 2. Therefore f (t) = te−at for a. −ah 5. i(t) = 0 f (x) dx Solution. p(t) = f (t)g(t) 4. limh→0 e h = limh→0 e h = ∞.

f (t) = t2 . 1 (i)2 : Immediate. there exists some n in [0. By (iii)1 . 3 (iii)2 : By (iii)1 . 1 −λh P {ñ(h) = 1} = λh 1! e = λhe−λh = λhe−λh = λhe −λh + λh − λh   = λh e−λh − 1 + λh = g(h) + λh . Solution. 2 (ii)2 : Property (ii)1 gives us independent increments. : SOLUTION MANUAL The limit is then indeterminant. we see that P {ñ(t + s) − ñ(s) = n} is independent of s. Since f (t) is continuous. and whether or not q(h) is o(h) depends upon the specific form of f (h) and g(h). Denote Property (j) of Poisson process Definition n by (j)n . Hence i(t) = 0 f (x) dx is o(h). For example. h] such that Z h f (x) dx = h · f (n) 0 Therefore i(h) hf (n) limh→0 = limh→0 h h = limh→0 f (n). on the other hand. then q(h) h4 lim = lim 2 = lim h = 0 h→0 h h→0 h h h→0 making q(t) an o(h) function.That is. . it remains to show they are also stationary. If. . This defines stationary increments. if f (t) = t4 .25 Show that definition 1 of the Poisson process implies def- inition 2 of the Poisson process. g(t) = t2 . n in [0.28 QUEUEING THEORY WITH APPLICATIONS . g(t) = t4 then q(h) h2 1 lim = lim 4 = lim 3 = ∞ h→0 h h→0 h h h→0 h and here q(t) is not o(h). g(t) are to proceed further. Exercise 2. we now need to know what f (t). 5. h] =0 Rt by the Lemma above.

establish that P0 (t) = exp{−λt} where Pn (t) = P {ñ(t) = n} and then prove the validity of Property 3 of Definition 1 by induction.Review of Random Processes 29 Note that g(h) λh(e−λh −1) lim = limh→0 h h→0 h   = λlimh→0 e−λh − 1 = λ (1 − 1) =0 by Exercise 2. And by Exercise 2. 1 (i)1 : Immediate. Thus P {ñ(h) = 1} = λh + g(h) = λh + o(h) And so (iii)2 holds. . 4 (iv)2 : By (iii)1 .] Solution.15 (v). So ∞ X (λh)n eλh n=2 n! is o(h). Then (λh)n e−λh lim = limh→0 λn e−λh hn−1 h→0 n! = λn limh→0 e−λh hn−1 = λn · 0 =0 since limh→0 eλh = 1 and limh→0 hn−1 = 0.26 Show that Definition 2 of the Poisson process implies Definition 1 of the Poisson process. Denote propery (j) of Poisson process definition n by (j)n . Exercise 2. [Hint: After satisfying the first two properties of definition 1. This shows (iv)2 .16 (i). ∞ X (λh)n e−λh P {ñ(h) ≥ 2} = n=2 n! Let n ≥ 2 be arbitrary. So g(h) is o(h). n ≥ 2. sums of o(h) functions are o(h).

P {ñ(t + h) − ñ(t) = 0} = P {ñ(h) = 0} = 1 − [P {ñ(h) = 1} + P {ñ(h) ≥ 2}] = 1 − [(λh + o(h)) + o(h)] = 1 − λh + o(h). and any scalar multiplied by a o(h) function is still o(h). we find Pn Pn (t + h) = k=0 Pn P {ñ(t + h) = n|ñ(t) = k} P {ñ(t) = k} = k=0 P{ñ(t + h) − ñ(t) = n − k} Pk (t) = P {ñ(t + h) − ñ(t) = 0} Pn (t) +P {ñ(t + h) − ñ(t) = 1} Pn−1 (t) +P {ñ(t + h) − ñ(t) = 2} Pn−2 (t) +··· +P {ñ(t + h) − ñ(t) = n} P0 (t) But by (iii)2 . Conditioning upon the value of ñ(t). 3 (iii)1 : Define Pn (t) = P {ñ(t) = n}.30 QUEUEING THEORY WITH APPLICATIONS . (iv)2 . . Rearranging the terms. Then Pn (t+h) = P {ñ(t + h) = n}. . and P {ñ(t + h) − ñ(t) ≥ 2} = o(h). : SOLUTION MANUAL 2 (ii)1 : Immediate by the first half of (ii)2 . P {ñ(t + h) − ñ(t) = 1} = λh + o(h). n≥0 with Pn (t) = 0. Pn (t + h) − Pn (t) = −λhPn (t) + λhPn−1 (t) + o(h). Thus. .) Thus Pn (t + h) = Pn (t) [1 − λh + o(h)] + Pn−1 (t) [λh + o(h)] + o(h). Observe that n X Pn P {ñ(t + h) − ñ(t) = k} Pn−k (t) = k=2 o(h)Pn−k (t) ≤ o(h) · 1 k=2 = o(h) since Pn−k (t) is a scalar for all k ≤ n. (Think of the limit definition of o(h) to see why this is true. n < 0.

26. (2. ′ Pn (t) = −λPn (t) + λPn−1 (t) Multiplying by e−λt and rearranging terms: h ′ i e−λt Pn (t) + λPn (t) = λeλt Pn−1 (t). i.1) from above. dt leads to ′ eλt P0 (t) + λeλt P0 (t) = 0 and P0 (t) = Ke−λt . " # (λh)n−1 eλh Pn−1 (t) = . ′ P0 (t) + λP0 (t) = 0.3) (n − 1)! By (2. (2.1) For n = 0.Review of Random Processes 31 We then divide by h and take the limit as h → 0 Pn (t + h) − Pn (t) −λhPn (t) λhPn−1 (t) o(h) lim = lim + lim + lim . Observe that d h λt i e P0 (t) = 0. so that P0 (t) = eλt . (2. Let T denote the truth set for the Proposition that (λt)n eλt Pn (t) = . h→0 h h→0 h h→0 h h→0 h so that ′ Pn (t) = −λPn (t) + λPn−1 (t) + 0 = −λPn (t) + λPn−1 (t). ′ P0 (t) = −λP0 (t).2) But by (i)2 . n ≥ 1. . Now suppose n − 1 ∈ T . n! Then 0 ∈ T .26. Using this result in (2.26.e.26.2) we find K = 1. That is.26.. P0 (0) = 1.

ñ(0) = 0.26.32 QUEUEING THEORY WITH APPLICATIONS . (λt)n−1 e−λt h i = λeλt (n−1)! λ(λt)n−1 = (n−1)! λn t(n−1) = (n−1)! Integrating both sides with respect to t: λn tn e−λt Pn (t) = n(n−1)! + c n = (λt) n! + c. then for all s. we have by (2. n! Thus. n ∈ T . Therefore. in order for no events to have occurred by time t. So for n > 0. the time of the first event must be greater than t. P {ñ(t + s) − ñ(s) = 0} = P {ñ(t) = 0} = e−λt Now. . Pn (0) = 0.3). : SOLUTION MANUAL Hence. That is. . the Proposition is proved. Recall that if {ñ(t). P t̃1 > t = P {ñ(t) = 0} = e−λt  This is equivalent to P t̃1 ≤ t = 1 − e−λt  . d λt e Pn (t) = λeλt Pn−1 (t) dt Since n − 1 ∈ T . Exercise 2. Thus. (λt)n   Pn (t) = e−λt +c n! But by (i)2 . h n i Pn (0) = e−λ(0) (λ·0) n! + c = 1 (c) = 0.27 Show that the sequence of interarrival times for a Poisson process with rate λ forms a set of mutually iid exponential random variables with parameter λ. That is. Solution. (λt)n eλt Pn (t) = . and all properties of Definition 1 are satisfied. t ≥ 0} is a Poisson process with rate λ.

t̃3 . upon reindexing the first summation. . Note that the second interarrival time begins at the end of the first interarrival time.d. Repeating these arguments for n ≥ 3.29 Show that Definition 3 of the Poisson process implies Definition 1 of the Poisson process. · · · are inde- pendent exponential variables with parameter λ.f. t̃2 . is unique. k=n k! since {ñ(t). But this implies d d (λt)k e−λt P∞ P {s̃n ≤ t} = dt k=n k! dt P∞ d h (λt)k e−λt i = k=n dt k! P∞ h kλk tk−1 e−λt λ(λt)k e−λt i = k=n k! − k! P∞ h kλk tk−1 e−λt i P∞ h λ(λt)k e−λt i = k=n k! − k=n k! P∞ λ(λt)k−1 e−λt P∞ λ(λt)k e−λt = k=n (k−1)! − k=n k! . t ≥ 0} is a Poisson process with rate λ. Since the c.28 Show that d λ(λt)n−1 e−λt P {s̃n ≤ t} = . Observe that we may separate the first summation as ∞ " # X kλk tk−1 e−λt λ(λt)n−1 e−λt = (n−1)! k=n k! P∞ λ(λt)k−1 e−λt P∞ λ(λt)k e−λt + k=n+1 (k−1)! − k=n k! . Thus.Review of Random Processes 33 Observe that this is the cumulative distribution function for an exponential ran- dom variable with parameter λ. Hence ñ(t) a Poisson process that ∞ X (λt)k e−λt P {s̃n ≤ t} = P {ñ(t) ≥ n} = . Solution. we see that t̃1 . d λ(λt)n−1 e−λt P∞ λ(λt)k e−λt P∞ λ(λt)k e−λt P {s̃n ≤ t} = (n−1)! + k=n k! − k=n k! dt λ(λt)n−1 e−λt = (n−1)! Exercise 2. First note that s̃n ≤ t if and only if ñ(t) ≥ n. t̃1 is exponentially distributed with rate λ. Furthermore. Exercise 2. the process has stationary and independent increments so that t̃2 has the same distribution as t̃1 and is independent  of t̃1 . dt (n − 1)! [Hint: Start by noting s̃n ≤ t ⇐⇒ ñ(t) ≥ n].

Solution. From the definition of the Poisson distribution. respectively. note that P {ñ(t) = n} = P {ñ(t) ≥ n} + P {ñ(t) ≥ n + 1} = P {s̃n ≤ t} − P {s̃n+1 ≤ t} . Define ñ = ñ1 + ñ2 . since ñ(0) = max {n : s̃n ≤ 0}. Using this result. i ≥ 1 . . : SOLUTION MANUAL Solution. αn e−n P {ñ1 = n} = n! and β n e−n P {ñ2 = n} = n! Now. ñ(0) = 0.34 QUEUEING THEORY WITH APPLICATIONS . because these are exponential random variables and thus are memoryless. prove Property 1 of the Poisson process. Exercise 2. dt n! Therefore (λt)n e−λt P {ñ(t) = n} = n! and the proof is complete.  (ii) This follows immediately from the independence of t̃i . condition on the value of ñ2 to find:  . Show that ñ has the Poisson distribution with rate α + β. ñ has stationary increments.20. . d λ(λt)n−1 e−λt λ(λt)n e−λt P {ñ(t) = n} = (n−1)! − n! dt But the right-hand side is: " # d (λt)n e−λt . (iii) Using Exercise 2. (i) Clearly. Fur- thermore.30 Let ñ1 and ñ2 be independent Poisson random variables with rates α and β.

 Pn .

P {ñ = n} = k=0 P ñ1 + ñ2 = n. .

ñ2 = k P {ñ2 = k} .

= Pnk=0 P {ñ1 + k = n} P {ñ2 = k} P = nk=0 P {ñ h1= n − k}i hP {ñ 2 = i k} n−k −α (α) e k −β (β) e P n = k=0 (n−k)! k! .

ñ(t) is Poisson distributed with parameter (α + β)t.Review of Random Processes 35 since both ñ1 . t ≥ 0} and {ñ2 (t). Since. Condition on the value of ñ. Use this result to prove Property 2 of the Poisson process. It remains to show that {ñ(t). by the result just shown. the distribution of green balls. respectively. by property (i) of Definition 1 of the Poisson process . ñg = g} = P {ñr = r. t ≥ 0} to be Pois- son processes with rates α and β. Suppose the balls are either red or green. it follows that [ñ(t1 ) − ñ(t0 )] and [ñ(t3 ) − ñ(t2 )] are independent. say (t0 . ñ1 (t) and ñ2 (t) are Poisson distributed with rates αt and βt. from Definition 1 of the Poisson process. Then. [Hint: Condition on the total number of balls in the urn and use the fact that the number of successes in a sequence of n repeated Bernoulli trials has the binomial distribution with parameters n and p. ñg = g|ñ = k} P {ñ = k} k=0 . ñr . t ≥ 0} has independent increments. Show that the distribution of the number of red balls. and the proof of Property 1 of Poisson processes is complete. and that ñr and ñg are independent random variables. the total number of balls in the urn: ∞ X P {ñr = r.31 Suppose an urn contains ñ balls. Thus. Now. t3 ). we find ñ(0) = 0. ñ(t1 ) − ñ(t0 ) = [ñ1 (t1 ) − ñ1 (t0 )] + [ñ2 (t1 ) − ñ2 (t0 )]. and ñ(t3 ) − ñ(t2 ) = [ñ1 (t3 ) − ñ1 (t2 )] + [ñ2 (t3 ) − ñ2 (t2 )]. Consider two non-overlapping intervals of time. {ñ(t). respectively. Since the random variables [ñ1 (t1 ) − ñ1 (t0 )] and [ñ2 (t1 ) − ñ2 (t0 )] are inde- pendent. To prove Property 1. and ñ(0) = ñ1 (0) + ñ2 (0). define {ñ1 (t). the proportion of red balls being p. this settles property (iii) of Definition 1 of the Poisson process. ñ2 are Poisson. ñ1 (0) = ñ2 (0) = 0.] Solution. Thus. ñg is Poisson with parameter (1 − p)λ. where ñ is a Poisson random variable with parameter λ. and ñ(t) = ñ1 (t) + ñ2 (t). in the urn is Poisson with parameter pλ. t ≥ 0} has independent increments. t1 ) and (t2 . and [ñ1 (t3 )− ñ1 (t2 )] and [ñ2 (t3 )− ñ2 (t2 )] are independent. e−(α+β) Pn n!(α)n−k (β)k = n! k=0 (n−k)!k! e−(α+β) Pn n n−k β k = n! k=0 (k ) α −(α+β) = e n! (α + β)n n −(α+β) = [(α+β)·t]n!e This shows ñ = ñ1 + ñ2 is Poisson with rate α + β. this settles property (i) of Definition 1 of the Poisson process. Exercise 2.

P {ñr = r. . ñg = g|ñ = r + g} is Binomial having parameters (r + g). we find ∞ X P {ñr = r} = P {ñr = r. since ñ is Poisson with rate λ. By Definition 1 of the Poisson process ñ(t) . P {ñr = r. P {ñg = g} is shown to be Poisson with rate (1 − p)λ. Then. Now show ñr (t) and ñg (t) are Poisson processes. if k = r + g. ñg = g|ñ = r + g} P {ñ = k} is ! " # r+g r e−λt (λt)r+g e−λt (λt)r+g p (1 − p)(r+g)−r = r (r + g)! (r + g)! After some rearranging of terms: " #" # e−pλt (pλt)r e−(1−p)λt {(1 − p)λt}g = r! g! Summing over all possible values of ñg . Hence P {ñr = r. Let {ñ(t). ñg = g} = P {ñr = r. by summing over all possible values of ñr . ñg = g} = P {ñr = r} P {ñg = g} ñr and ñg are independent.36 QUEUEING THEORY WITH APPLICATIONS . given that ñ = k. ñg = g} g=0 ∞ " # (pλt)r e−pλt −(1−p)λt X {(1 − p)λt}g   = e r! g=0 g! " # (pλt)r e−pλt −(1−p)λt (1−p)λt = e e r! (pλt)r e−pλt = r! Therefore ñr is Poisson with rate pλ. ñg = g|ñ = r + g} P {ñ = k} Consider a red ball to be a success and a green ball to be a failure. t ≥ 0} be a Poisson process with rate λ. Similarly. Then we may think of all of the balls in the urn as being a series of ñ Bernoulli exper- iments. ñg = g|ñ = k} = 0. : SOLUTION MANUAL Note that the only way for ñr = r and ñg = g. since P {ñr = r. . is if their sum is r + g. So for k 6= r + g. that is. Hence P {ñr = r. p. Further- more.

t ≥ 0} and {ñ2 (t). the increments are not independent. ñr (t) and ñg (t) are independent random variables. respectively.ñ(t) = ñr (t) + ñg (t). t0 + h) is o(h) because this would require two events from the original Poisson process in a period of length h. Con- sider two non-overlapping intervals of time. the probability that an event will be recorded in (t0 . Therefore. Since ñr (t) and ñg (t) are non-negative for all t ≥ 0. Furthermore. t3 ). ñ(0) = 0.e. ñr (0) = ñg (0) = 0. Suppose all odd num- bered events and no even numbered events are recorded. t ≥ 0} each have independent increments? Do they have stationary increments? Are they Poisson processes? Solution. By property (iii) of Definition 1 of the Poisson process. the processes {ñ1 (t). and ñ(t3 ) − ñ(t2 ) = [ñr (t3 ) − ñr (t2 )] + [ñg (t3 ) − ñg (t2 )]. and ñg (t) to be the number of events not recorded by time t. Hence. Then. Let ñ1 (t) be the number of events recorded by time t and ñ2 (t) be the number of events not recorded by time t. which are then in turn independent of each other. t ≥ 0} and {ñ2 (t). ñ(t1 ) − ñ(t0 ) = [ñr (t1 ) − ñr (t0 )] + [ñg (t1 ) − ñg (t0 )]. [ñr (t1 ) − ñr (t0 )] and [ñr (t3 ) − ñr (t2 )] and independent. since ñ(t) = ñr (t) + ñg (t). It remains to show property (ii) of Definition 1 of the Poisson process.Review of Random Processes 37 is Poisson distributed with parameter λt. {ñ(t). [ñr (t1 ) − ñr (t0 )] and [ñg (t1 ) − ñg (t0 )] are independent. Exercise 2. Do the processes {ñ1 (t). {ñr (t). t1 ) and (t2 . Then by the result just shown. say (t0 . Then. ñr (t) and ñg (t) are independent across the intervals. t ≥ 0} and {ñg (t).32 Events occur at a Poisson rate λ.. t ≥ 0} are clearly not Poisson. and [ñr (t3 ) − ñr (t2 )] and [ñg (t3 ) − ñg (t2 )] are independent. Define ñr (t) to be the number of events recorded by time t. t ≥ 0} is a Poisson process so it has independent increments: ñ(t1 ) − ñ(t0 ) and ñ(t3 ) − ñ(t2 ) are independent. the time between recorded events is the sum of two exponentially distributed random variables with parameter λ. Since all three properties hold. On the other hand. Let each event be recorded with probability p. . That is. suppose an event is recorded at time t0 . and [ñg (t1 ) − ñg (t0 )] and [ñg (t3 ) − ñg (t2 )] and independent. Now. Thus property (i) of Definition 1 of the Poisson process holds. Since only odd numbered events are recorded. i. where ñr (t) and ñg (t) are Poisson distributed with rates pλt and (1 − p)λt. This proves property (ii) of Definition 1 of the Poisson pro- cess. By the result shown above. Since ñr (t) and ñg (t) are independent of each other across the sums. This proves property (iii) of Definition 1 of the Poisson process. t ≥ 0} are Pois- son processes.

. .. In turn.. For i = 0 or i = 1. . N . . The dynamical equations are given as follows: q̃k+1 = (q̃k − 1)+ + ṽk+1 . . . n o P {q̃k+1 = j|q̃k = i} = P (q̃k − 1)+ + ṽk + 1 = j|q̃k = i . n o P ṽk + 1 = j − (i − 1)+ = P {ṽk + 1 = j} . Define aj = P {ṽk + 1 = j} for i = 0.. . . : SOLUTION MANUAL the event probabilities are independent of t0 because the original process is Poisson. . .  P= 0  a0 a1 a2 a3 a4 a5 a6 ··· aN −1 . k = 0. . . . . the one-step transition probability is P {q̃k+1 = j|q̃k = i} for all integers i.2 for the case where {ṽk ... n o P ṽk + 1 = j − (i − 1)+ = P {ṽk + 1 = j + 1 − i} . . identically distributed binomial random variables with parameters N and p.. By definition. .   0  0 a0 a1 a2 a3 a4 a5 ··· aN −2 . Solution. . .} of Section 1.. . . . .2. so the increments are stationary. 2. Then.. we have shown that a0 a1 a2 a3 a4 a5 a6 a7 ··· aN 0 ···    a0  a1 a2 a3 a4 a5 a6 a7 ··· aN 0 ···  . Exercise 2. .38 QUEUEING THEORY WITH APPLICATIONS .. n o n o P (q̃k − 1)+ + ṽk + 1 = j|q̃k = i = P (i − 1)+ + ṽk + 1 = j|q̃k = i n o = P ṽk + 1 = j − (i − 1)+ |q̃k = i But. . 1.. .. . j But. . .. ṽk+1 is independent of q̃k = i.} is assumed to be a sequence of independent. . k = 1. For i ≥ 2. . . where we note that the previous probability is zero whenever j < i − 1.. . It follows that n o n o P ṽk + 1 = j − (i − 1)+ |q̃k = i = P ṽk + 1 = j − (i − 1)+ . 1.  . . . . . ... .33 Determine the one-step transition probability matrix for the Markov chain {qk . ..

[Hint: First calculate P ∞ under the assumption β0 = [ 1 0 0 . The general idea is that π is independent of β0 .4 P= .}. k = 0. that is. −0.5 0. show that the rows of P ∞ must be identical. if we choose the starting state as i with probability 1.5 1 1 Adding 2 of the second equation to the first yields   0. 0. 1. Suppose we choose β0 = [ 1 0 0 .6 0. k = 0. β0 P ∞ is equal to the first row of P ∞ . We have π = πP and πe = 1. then β0 P ∞ is equal to the second row of P ∞ . 0 1 .18).4 [ π1 π2 ] = [ π1 π2 ] . . ].   0. Again.5 0. . . ]. Since π = β0 P ∞ .5 1].} is a Markov chain such that   0.5 or 0.35 Suppose {x̃k . 2 Upon substitution of the last equation into the second equation of the matrix equation. we have   0. Thus. since π = β0 P ∞ . . 0.5 Also πe = 1 means   1 [ π1 π2 ] =.4   [ π1 π2 ] = [0 0].6 0.9 1 [ π1 π2 ] = [ 0. 0 1 Then dividing the first equation by 0. . −0. calculate P ∞ under the assumption β0 = [ 0 1 0 .9 yields   1 1 [ π1 π2 ] = [ 59 1]. . we then have the first row of P ∞ must be π.5 0.Review of Random Processes 39 Exercise 2. ]. Exercise 2. all rows of P ∞ must be equal to π. . In general.4 −0. .5 Determine the stationary vector of {x̃k . Next.34 Staring with (2. then we conclude that the i-th row of P ∞ is equal to π.4 1 [ π1 π2 ] = [0 1]. 1. if we choose β0 = [ 0 1 0 . Then. . ].] Solution. . . . . thus. the value of π is the same no matter how we choose the distribution of the starting state. Continue along these lines. Solution. . we then have the first row of P ∞ must be π.

5}. We wish to find P {ṽk+1 = j|ṽk = i} for i. the number of on sources during period k + 1 given i on sources in period k is equal to ãi + b̃5−i . ! n o 5 − i j−ℓ (5−i)−(j−ℓ) P b̃5−i = j − ℓ = p p . j + i − 5}.j+i−5} Since each of the off source turns on with probability p01 . then P b̃5−i = j − ℓ = 0. 1. Thus. . 1. finally subtracting the first equation from the second yields   1 0 [ π1 π2 ] = [ 59 4 9 ]. Thus. Thus. if ṽk = i. . j − ℓ > 5 − i or ℓ < j + i − 5. . Exercise 2. In addition. Solution. 4. : SOLUTION MANUAL And. In the summation. so we may ter- minate the summationn at min {i. n o P {ṽk+1 = j|ṽk = i} = P ãi + b̃5−i = j . then i sources are in the on state during period k and 5 − i sources are in the off state during that period. Now. if ℓ > j. o j}.40 QUEUEING THEORY WITH APPLICATIONS . ℓ=max {0. we may begin the summation at max {0. Thus min {i. HTen. 2. then P b̃5−i = j − ℓ = 0. . . j − ℓ 01 00 . k = 0. ℓ=0 the latter step following by independence n among theobehavior of the sources. 0 1 or [ π1 π2 ] = [ 59 4 9 ]. But. n o i X n o P ãi + b̃5−i = j = P ãi + b̃5−i = j|ãi = ℓ P {ãi = ℓ} ℓ=0 i X n o = P b̃5−i = j − ℓ P {ãi = ℓ} .} for the special case of N = 5.36 Develop the one-step state probability transition matrix for the special case {ṽk . j ∈ {0. 3. Let ãi denote the number of sources that are in the on state in period k + 1 if i sources are in the on state in period k and b̃i denote the number of sources that are in the on state in period k + 1 if i sources are in the off state in period k. ṽk is the number of sources in the on state during interval k.j} n o X P {ṽk+1 = j|ṽk = i} = P b̃5−i = j − ℓ P {ãi = ℓ} .

. if q̃k = 7 and q̃k−1 = 3. that is show whether or not {q̃k . Some examples are P {ṽk+1 = 0|ṽk = 0} = p500 .} is a DPMC. q̃k = i} = P {q̃k+1 = j|q̃k = i} . but the probability mass function for the queue length at epoch k + 1 can be computed solely form the value of infor- mation is not needed to compute the new value of q̃k . . We are given q̃k+1 = (q̃k − 1)+ + ṽk+1 . ℓ 11 10 The transition probabilities can now be obtained by performing the indicate summations. since each of the on source remains on with probability p11 . 1. and 3 X n o P {ṽk+1 = 3|ṽk = 4} = P b̃1 = 3 − ℓ P {ã3 = ℓ} ℓ=2 n o n o = P b̃1 = 1 P {ã4 = 2} + P b̃1 = 0 P {ã4 = 3} ! ! 4 2 2 4 3 = p01 p11 p10 + p00 p p10 2 3 11 = 6p01 p211 p210 + 4p00 p311 p10 Exercise 2. . determine whether or not {q̃k . then we know that there were 5 arrivals during interval k. the answer is “Yes. . For the case where the arrival process is on-off.Review of Random Processes 41 Similarly. and we want to know whether or not P {q̃k+1 = j|q̃0 = i0 . . For example. . Solution.2. q̃1 = i1 . Defend your conclusion mathematically. which means that there were 5 sources in the on state during interval k. . . .37 For the example discussed in Section 1. 1. k = 0. the sequence of queue lengths reveals the sequence of arrivals. the distribution of the number of new arrivals . P {ṽk+1 = 1|ṽk = 0} = 5p01 p400 . Thus. For the case where the arrival process is an independent sequence of random variables. ! i ℓ i−ℓ P {ãi = ℓ} = p p . k = 0.” The sequence of values of q̃k does reveal the number of arrivals in each interval.} satisfies the definition of a Markov chain. .2 in the case where arrivals occur according to an on off process.

38 Suppose {x̃(t). {q̃k . Hence the time between events is exponential. that is. . otherwise. t ≥ 0} is a time-homogeneous CTMC having infinitesimal generator Q defined as follows:  −λ.42 QUEUEING THEORY WITH APPLICATIONS .1 (t) = λte−λt Repeating this process for each column of P (t).0 (t) = e−λt . . Begin with P0. and therefore the distribution of q̃k+1 . This solves to P0.1 (t) = λP0. . {x̃(t). Hence for the general on-off arrival process. dt Due to independent increments.] Solution. this is strictly a birth process.0 (t) = −λP0. dt We see immediately that P0. 1. [Hint: Simply solve the infinite matrix differential equation term by term starting with P00 (t) and completing each column in turn. .16.1 (t) dt = λe−λt − λP0. Furthermore. This should be apparent from the definition of Q. if j = i + 1. Solve the infinite matrix differential equation: d P (t) = P (t)Q. k = 0. Now solve for the second column of P (t): d P0. t ≥ 0} is a Poisson process. t ≥ 0} is a Poisson process. Thus.0 (t) − λP0. we cannot compute the distribution of q̃k+1 .1 (t). . : SOLUTION MANUAL during interval k + 1. is dependent upon the values of q̃k and q̃k−1 . 0.  if j = i. Qij = λ. the system cannot lose customers once they enter the system.} is not a DPMC. . By Definition 2. Exercise 2.  Show that {x̃1 (t). if we know only that q̃k = i. we find the solution of P (t) to be e−λt (λt)n P0.n (t) = n! Observe that this probability is that of a Poisson random variable with param- eter λt.0 (t): d P0. we need only compute the first row of the matrix P (t).0 (t).

00 1. 1. 1.75   [ P (∞)0 P (∞)1 P (∞)2 ]  0.00 and 1   [ P (∞)0 P (∞)1 P (∞)2 ]  1  = 1 1 yield −2. 4. − 12 21 − 10 3 2 .25 yields 1 −1 1   [ P (∞)0 P (∞)1 P (∞)2 ]  − 14 1 1 = [0 0 1]. −2. 2. Show that the value of π found in part 2 of this problem satisfies π = πP for the P found in part 3 of this problem.00 −3. Solve for P (∞) directly by solving P (∞)Q = 0 P( ∞)e = 1.00 2.39 Let {x̃(t).50  −1.25 0.00 1. t ≥ 0} be a CTMC such that −2.00 2.24).00 1.Review of Random Processes 43 Exercise 2.00 2.50 −1.25 0.25 0.00 −3.50  −1.75   Q = 0.25 0.00 1. 3. Solution.75  . − 21 − 85 1 Then adding the first equation to the second and subtracting the first equa- tion from the third yields: 1 0 0   [ P (∞)0 P (∞)1 P (∞)2 ] − 14  3 4 5 4  = [0 0 1]. 1. Find P for for the DPMC embedded at points of state transition.25 1   [ P (∞)0 P (∞)1 P (∞)2 ] 0.75  = [ 0 0 0 ] . Solve for π for the DPMC embedded at points of state transition using (2.25 1 = [0 0 1]. 1.00 1 Then dividing the first equation by -2 and the second equation by -1.

2.24) P i6=j Pi (∞)Qij πj = P P . − 12 − 14 5 3 2 Subtracting 5/4 of the second equation from the third yields 1 0 0   [ P (∞)0 P (∞)1 P (∞)2 ]  − 14 1 0 = [0 0 1]. . − 21 − 14 5 1 Then adding 14/5 times the third equation to the second yields 1 0 0   [ P (∞)0 P (∞)1 P (∞)2 ]  − 14 1 0 = [0 14 25 0. − 21 0 1 Finally adding 1/4 of the second equation and one half of the third equation to the first equation yields 6 14 5 [ P (∞)0 P (∞)1 P (∞)2 ] = [ 25 25 25 ]. : SOLUTION MANUAL Dividing the second equation by 3/4 yields 1 0 0   [ P (∞)0 P (∞)1 P (∞)2 ] − 14  1 5 4  = [0 0 1]. − 21 − 14 5 5 Dividing the third equation by 5 yields 1 0 0   [ P (∞)0 P (∞)1 P (∞)2 ]  − 14 1 0 = [0 0 0.44 QUEUEING THEORY WITH APPLICATIONS . i6=2 25 4 25 4 100 . ℓ i6=ℓ Pi (∞)Qiℓ Plugging in numbers.2 ] . we find X 14 1 5 12 Pi (∞)Qi0 = + = . . i6=0 25 2 25 25 X 6 5 5 70 Pi (∞)Qi1 = + 2= . We have from (2.2 ] . i6=1 25 4 25 100 and X 6 3 14 3 60 Pi (∞)Qi1 = + = .

We have 5 3 0   8 8 48 70 60 [ 178 178 178 ]  25 0 3 5  = 70 2 [ 178 5 + 60 1 178 3 48 5 178 8 + 60 2 178 3 48 3 178 8 + 70 3 178 5 ] 1 2 3 3 0 48 70 60 = [ 178 178 178 ]. . π1 = . we have immediately 5 3 0   8 8 P =  52 0 3 5 . 48 48 70 60 π0 = = .Review of Random Processes 45 Thus. 1 2 3 3 0 4. From Qij P {x̃k+1 = j|x̃k = i} = . and π2 = . 48 + 70 + 60 178 178 178 3. We wish to show that the value of π computed in Part 2 satisfies π = πP. −Qii for i 6= j and P {x̃k+1 = j|x̃k = j} = 0.

.

Solution. However. 0 Solution. where a wall is erected to the left of position zero.2 Prove Theorem 3. Determine the proba- bility of an increase in the queue length.1 and its continuos analog Z ∞ E[x̃] = P {x̃ > x}dx. λ From Chapter 2. and show that this probability is less than 0. When there is an increase in the occupancy of the queue this is like the walker taking a step to the right.Chapter 3 ELEMENTARY CONTINUOUS-TIME MARKOV CHAIN-BASED QUEUEING MODELS Exercise 3.5 if and only if λ < µ. Consider the occupancy of the M/M/1 queueing system as the posi- tion of a random walker on the nonnegative integers.1 Carefully pursue the analogy between the random walk and the occupancy of the M/M/1 queueing system. this probability is λ+µ . if the queue is empty (the walker is at position zero). then a ‘decrease’ in the occupancy is analogous to the walker attempting a step to the left – but he hits the wall and so remains at position zero (queue empty). It follows that p+ < 21 if and only if λ < µ. Denote the probability of an arrival by p+ and a departure by p− . Exercise 3. Let λ be the rate of arriving customers and µ be the rate of departing cus- tomers. a decrease is a step to the left. ∞ X E [ñ] = nP {ñ = n} n=0 ∞ n ! X X = 1 P {ñ = n} n=0 k=1 . Then p+ is simply the probability that an arrival occurs before a departure.

Then E [min {x̃. ỹ > z} ⊆ {ỹ > z}.z} .3 Prove Theorem 3. E [z̃] ≤ E [x̃] and E [z̃] ≤ E [ỹ] . {x̃ > z. . 0 But. and only if.z} ≤ P {x̃ > z} and P {x̃ > z. ỹ > z} ⊆ {x̃ > z} and {x̃ > z. . ỹ. ỹ}] ≤ min {E [x̃] . Define z̃ = min {x̃. But. 0 y 0 0 Exercise 3. ỹ > z}. 0 0 Changing order of integration yields Z ∞Z ∞ Z ∞ Z ∞ E [x̃] = fx̃ (x)dxdy = P {x̃ > y} dy = P {x̃ > x} dx. Z ∞ Z ∞ E [z̃] ≤ P {x̃ > z} and E [z̃] ≤ P {ỹ > z} . Z x x= dy.48 QUEUEING THEORY WITH APPLICATIONS . : SOLUTION MANUAL ∞ n ! X X = P {ñ = n} n=0 k=1 Changing the orders of summation   ∞ n ∞ ∞ ! X X X X P {ñ = n} =  P {ñ = k} n=0 k=1 n=0 k=n+1 X∞ = P {ñ > n} n=0 For the continuous analog. ỹ. There- fore. Z ∞ Z ∞ E [z̃] = P {z̃ > z} dz = P {x̃ > z. The theorem is as follows: Suppose x̃ and ỹ are any two nonegative random variables. Then. 0 0 Equivalently. 0 so Z ∞Z x E [x̃] = dyfx̃ (x)dx. we have Z ∞ E [x̃] = xfx̃ (x)dx.z} ≤ P {ỹ > z} because the event {x̃ > z. 0 0 But. First we consider the continuous case. E [ỹ]}. ỹ}. P {x̃ > z. ỹ. Solution. . z̃ > z if.2.

as zero. Exercise 3.4 Suppose customers arrive to a system at the end of every even-numbered second and each customer requires exactly one second of service. ỹ}] ≤ min {E x̃. Compute the stochastic equilibrium occupancy distribution. because a customer finishes service at the end of every odd-numbered second. Each of these events is equally likely since the time intervals that they occur in is the same (1 second). Compute the occupancy distribution as seen by arriving customers. Obviously this is not the same as E [s̃] and we conclude that the distributions as seen by an arbitrary arriving customer and that of an arbitrary observer cannot be the same. Hence an arriving customer sees the system occupancy as empty. there will either be one customer in the system or no customers in the system. that is. . Hence E [s̃] = E [s̃|ñ = 0] P {ñ = 0} + E [s̃|ñ = 1] P {ñ = 1} 1 1 = 0· +1· 2 2 1 = 2 On the other hand. she has left the system by the time the next cusomer arrives. The discrete and mixed cases are proved in the same way. that is. Are they the same? Solution. the time-averaged distribution of the number of customers found in the system.Elementary CTMC-Based Queueing Models 49 The previous statement is equivalent to E [min {x̃. E [ỹ]}. Compare the two distributions. Since a customer arrives every two seconds and leaves after one second.

and then compute the stochastic equilibrium distribution for the process {ñ(t). That is. : SOLUTION MANUAL Exercise 3. and 3 are identical.] 3. πi E[s̃i ] i=0 where s̃i denotes the time the systems spends in state i on each visit. t ≥ 0} accord- ing to the following well known result from the theory of Markov chains as discussed in Chapter 2: πi E[s̃i ] Pi = P ∞ . j) − th element of Pd is given by P {ñd (k + 1) = j|ñd (n) = i} .5 For the ordinary M/M/1 queueing system. λ+µ λ+µ . 2. letting A denote the event of an arrival before the k − th service completion. j−(i−1)+ λ µ   + P j − (i − 1) A = . [Hint: Form the system of equa- tions πd = πd Pd .] 2. Define π = [ π0 π1 · · · ] to be the stationary probability vector and P to be the one-step transition probability matrix for this embedded Markov chain. as seen by arriving customers. Observe that the results of parts 1. Solution. defining the state to be the number of customers in the system immediately fol- lowing the state change. as seen by departing customers. at instants of time at which the occupancy changes. Then the (i. Determine π. and then solve the system as was done to obtain P {ñ = n}. Otherwise. since you can’t have a negative number of arrivals. Let ñd (k) denote the number of customers left by the k − th departing customer. For j − (i − 1)+ < 0. P {ñd (n + 1) = j|ñd (n) = i} = 0. determine the limiting distribution of the system occupancy 1. and that these are all equal to the stochastic equilibrium occupancy probabilities determined previously. . This probability is given by the probability that exactly j − (i − 1)+ cus- tomers arrive during the k − th service time. and then try the solution πa = πd . . and [Hint: First form the system of equa- tions πa = πa Pa .50 QUEUEING THEORY WITH APPLICATIONS . embed a Markov chain at the instants at which the occupancy changes. 1.

∞  j ! X λ 1 πd0 = πd0 λ = 1. . 1.. Then.Elementary CTMC-Based Queueing Models 51 This results in   2  µ λ µ λ µ  λ+µ λ+µ λ+µ λ+µ λ+µ . . Repeating this process we see  j λ that πdj = µ πd0 . Then for λ < µ.. 2...  2  µ µ µ   λ λ  λ+µ λ+µ λ+µ λ+µ λ+µ ..   . ∞ X πd0 = πdi Pdi0 i=0 ∞ X = πd0 Pd00 + πd1 Pd10 + πdi Pdi0 i=2 µ λ µ     = πd0 + πd1 + 0. Then the (i. Using this matrix.   λ+µ . Substitute this into the equation for πd1 : P∞ πd1 = i=0 πdi Pdi1 ∞ X = πd1 Pd01 + πd1 Pd11 + πd2 Pd21 πdi Pdi1 i=3 µ λ λ λ µ       = πd0 + πd0 λ+ µ λ+ µ µ λ+µ λ+µ µ +πd2 +0 λ+µ  2 λ which gives the solution πd2 = µ πd0 . λ+µ λ+µ λ+µ Hence πd1 = µλ πd0 . 2.  µ  0 0 . and πdj = λ µ 1− λ µ .. for i = 0.. .. . .. .  Pd =  0 µ λ µ    λ+µ λ+µ λ+µ . j=0 µ 1− µ  j   Thus πd0 = 1 − µλ .. j = 0. .. ... j) − th element of Pa is given by P {ña (k + 1) = j|ñd (n) = i} . P∞ Now use the normalizing constraint j=0 πdj = 1. . . Let ña (k) denote the number of customers as seen by the k − th arriving customer. form the system of equations πd = πd Pd .

... .. λ + µ i=0 i λ+µ Now. . it suffices to show that πd is a solution to πa = πa Pa . Form the system of equations πa = πa Pa .. Substitute πd0 into the equation just found for πa0 : ∞  i  i µ X µ−λ λ µ πa0 = λ + µ i=0 µ µ λ+µ ∞ i µ−λX λ  = λ + µ i=0 λ + µ µ−λ λ+µ = · λ+µ µ µ−λ = µ which is the same result found for πd0 in part (a).. for j = 0.. Then.. λ+µ i+1−j λ µ  P {ña (k + 1) = j|ñd (n) = i} = .  Pa =   µ 3   λ  µ 2 λ µ λ  0 . . . .. and substitute πdi for πai : ∞ i λ X µ  πaj = πaj−1+i λ + µ i=0 λ + µ . . . j > 0. . . Now generalize to πaj . this probability is equal to zero since only departures may occur between arrivals. i>0 λ+µ λ+µ For j > i + 1. .. : SOLUTION MANUAL For j = 0. ∞ X πa0 = πai Pai0 i=0 ∞ i µ X µ  = πa . to show that πa = πd . .. .   λ+µ λ+µ   µ 2 λ µ λ    λ+µ λ+µ λ+µ λ+µ 0 0 .. 1.   λ+µ λ+µ λ+µ λ+µ λ+µ λ+µ   . . . . This results in µ λ 0 0 0 . .52 QUEUEING THEORY WITH APPLICATIONS . i+1 this probability is given by the probability that exactly i + 1 − j customers arrive during the k − th service time: 1−j µ   P {ña (k + 1) = j|ñd (n) = 0} = .

. .. . then the next event will either be a departure or an arrival. The probability of it being in any other state at time (k + 1) is 0.Elementary CTMC-Based Queueing Models 53 ∞    j−1 X i λ µ−λ λ λ  = λ+µ µ µ λ+µ   j−1 i=0 λ µ−λ λ λ+µ  = λ+µ µ µ µ µ−λ λ j   = ... This will λ occur with probability λ+µ . 3. µ µ Thus πa = πd .   0 1 0 0 0 . . λ+µ i=2 That is.. π1 = π0 λ+µ P∞ µ ... . then with probability 1 it will be in state 1 at time (k + 1). Repeating this process we see that for j ≥ 1. . . Thus.... then the system will be in state (i + 1) at time (k + 1).  µ 0 λ 0 0 . P∞ Form the system of equations πj = i=0 πj Pij ..   . . Substituting this into π1 =    i=0 πj P1j gives the λ+µ λ solution π2 = π0 µ µ . Then for j = 0: ∞ µ   X π0 = 0 + π1 + πj · 0.. If it’s a departure. If the system is in state i. .   λ+µ λ+µ . (i 6= 0) at time k. If the next event is an arrival. .. the one-step transition probability matrix. µ µ Now use the normalizing constraint to find π0 : ∞   j−1 λ+µ λ X  1 = π0 + π0 j=0 µ µ " !# λ+µ 1   = π0 1 + λ . Determine P. the system wil be in state (i − 1) µ at time (k + 1). µ 1− µ .  λ+µ λ+µ  µ λ P= 0 0 0 ..   λ+µ λ+µ   µ λ  0 0 0 . If the system is in state 0 at time k. . This happens with probability λ+µ .  j−1 λ+µ λ πj = π0 .

the expected amount of time the sys- tem spends in state i on each visit. since the time spent in state i is determined by when the next ar- rival or departure may occur.54 QUEUEING THEORY WITH APPLICATIONS . . Note that s̃i is an exponential random variable. 1 E[s̃0 ] = λ 1 E[s̃i ] = .e. Thus by the properties of the exponential distribution. The rate for s̃0 is λ. : SOLUTION MANUAL i. i > 0. And so for j ≥ 1. since only an arrival may occur. . first determine E[s̃i ]. . λ+µ Then ∞ ∞    i−1  λ+µ λ µ−λ 1 X X   πi E[s̃i ] = π0 E[s̃0 ] + i=0 i=1 µ µ 2µ λ+µ " # µ−λ 1 µ−λ 1 = · + 2 λ 2µ λ 2µ 1− µ 1 = .   j−1 λ+µ λ µ−λ  πj = . since either an arrival or a departure may occur. µ µ 2µ To compute Pi . i > 0. 2λ Using this result and the πj′ s obtained above in πi E[s̃i ] Pi = P ∞ πi E[s̃i ] i=0 we get the solutions µ−λ 1 2µ · λ λ P0 = 1 =1− µ  2λ   i−1    λ+µ λ µ−λ 1 µ µ 2µ λ+µ Pi = 1  j  2λ λ λ  = 1− . And the rates for all other s̃i . µ µ This is precisely the results of parts (a) and (b).. is λ + µ. π0 = (µ − λ)/2µ.

condition on the value of ñ. without resorting to the use of Laplace-Stieltjes transform techniques. By Little’s Result. E[w̃] = µ1 . E[ñ] = λE[w̃]. Solution. (3.1) n=0 where P {ñ = n} = ρn (1 − ρ). for x ≥ 0. Recall Exercise 2.e. the average waiting time in the system is just the average service time. Thus λ P {B} = E[ñ] = . the number of cus- tomers in the system: ∞ X Fs̃ (x) = P {s̃ ≤ x|ñ = n} P {ñ = n}. Show that Fs̃ (x) = 1 − e−µ(1−ρ)x . t + x]: the original n customers plus herself. Then P {B} = P {ñ = 1} where ñs is the number of customers in the system (i. Hence E[ñ] = 0 · P {ñ = 0} + 1 · P {ñ = 1} = P {ñ = 1} = P {B} . respectively. no queue). To compute Fs̃ (x). If a customer arrives at time t and finds n customers in the system. Let B denote the event that the server is busy. then the probability that she departs the system by time t + x equals the probability that there will be n + 1 service completions in the interval (t. in stochastic equilibrium.20: d µ(µx)n e−µx P {s̃n+1 ≤ x} = . That is. dx n! . show that the probability that the server is busy at an arbitrary point in time is equal to the quantity (λ/µ)..Elementary CTMC-Based Queueing Models 55 Exercise 3.e. and Fw̃ (x) = 1 − ρe−µ(1−ρ)x . ñs is the number in service). Let Fs̃ (x) ≡ P {s̃ ≤ x} and Fw̃ (x) ≡ P {w̃ ≤ x}..6 Using Little’s result.6. µ Exercise 3. for x ≥ 0. This probability is P {s̃n+1 ≤ x}. Furthermore.7 Let w̃ and s̃ denote the length of time an arbitrary cus- tomer spends in the queue and in the system. Define the system to be the server only (i. Solution.

Argue that the interdeparture times are independent so that the departure process for this system is Poisson with the same rate as the arrival process (Burke [1956]). Using this fact. the number of customers in the system and use the fact that the system is in stochastic equilibrium: ∞ µ(µα)n−1 e−µα n xX Z Fw̃ (x) = (1 − ρ) + ρ (1 − ρ)dα 0 n=1 (n − 1)! ∞ (µρα)n Z x µρ(1 − ρ)e−µα X = (1 − ρ) + dα (reindexing) 0 n=0 n! Z x = (1 − ρ) + µρ(1 − ρ)e−µ(1−ρ)α dα 0 −µ(1−ρ)x = 1 − ρe . ∞ µ(µα)n e−µα n xX Z Fs̃ (x) = ρ (1 − ρ)dα 0 n=0 n! ∞ (µρα)n Z x −µα X = µ(1 − ρ)e dα 0 n=0 n! Z x = µ(1 − ρ)e−µ(1−ρ)α dα 0 = 1 − e−µ(1−ρ)x . Show that the distribution of an arbitrary interdeparture time for the M/M/1 system in stochastic equi- librium is exponential with the same parameter as the interarrival-time dis- tribution. Now find Fw̃ (x): Fw̃ (x) = P {w̃ ≤ 0} + P {w̃ < x} . x > 0. the . so Z x Fw̃ (x) = (1 − ρ) + fw̃ (α)dα. : SOLUTION MANUAL By differentiating and then integrating both sides of (3. x ≥ 0.8 M/M/1 Departure Process. x > 0. if the system is not left empty.] Solution. Then condition on whether or not the ith departing customer leaves the system empty. Exercise 3. condition on whether or not the i − th departing customer leaves the system empty. Now. . x ≥ 0. . 0 Condition fw̃ on the value of ñ. [Hint: Use the fact that the Poisson arrival sees the system in stochastic equilibrium.56 QUEUEING THEORY WITH APPLICATIONS . But P {w̃ ≤ 0} = P {w̃ = 0} = P {ñ = 0} = 1 − ρ.6. x>0 = P {w̃ ≤ 0} + intx0 fw̃ (α)dα. Observe that the Poisson arrivals will see the system in stochastic equilibrium.1).

however. which we know to be independent. Because of this ‘rate in equals rate out’ characteristic. 0 Since ã and x̃ are independent of each other. That is.Elementary CTMC-Based Queueing Models 57 time to the next departure is simply the time it takes for the (i+1)−st customer to complete service. noting that P {ã + x̃ ≤ d} = 0 if ã > d. this implies that the interdeparture times are also independent. This is Poisson with parameter µ. Denoting the event that the system is left empty by A. . n o n o n o P d˜ ≤ d = P d˜ ≤ d|B P {B} + P d˜ ≤ d|A P {A} = ρP {x̃ ≤ d} + (1 − ρ)P {ã + x̃ ≤ d} . Then if B denotes the event that the system is not left empty. Recall that in stochastic equilibrium the probability that the system is empty is (1 − ρ). Sum over all possible values of ã to get P {ã + x̃ ≤ d}. this is n o   Z d P d˜ ≤ d = ρ 1−e −µd + (1 − ρ) P {a + x̃ ≤ d} λe−λa da   Z0 d   −µd = ρ 1−e + (1 − ρ) 1 − e−µ(d−a) λe−λa da 0 = 1 − e−λd . the time to the next departure is the time to the next arrival plus the time for that arrival to complete service. n o P d˜ ≤ d|A = P {ã + x̃ ≤ d} . the departure process occurs at the same rate as the arrival process. Hence n o   Z d P d˜ ≤ d = ρ 1 − e−µd + (1 − ρ) P {ã + x̃ ≤ d|ã = a} dFã . then its distribution could not be that of the interarrival times. If the interdepar- ture times were not independent. If the system is left empty. This shows the departure process d˜ is Poisson with parameter λ.

the number of times the customer joins the queue.1) n=1 Now. each potentially departing customer rejoins the service queue. with probability p. (∞ )  X P t̃ ≤ x|ñ = n = P x̃i ≤ x i=1 = P {s̃n ≤ x} . the probability of the customer entering the queue n times is simply pn−1 (1 − p) since he returns to the queue (n − 1) times with probability pn−1 and leaves after the n-th visit with probability (1  − p). Recall Exercise 2. How does your solution compare to that of the M/M/1 queueing system? What explains this behavior? [Hint: Consider the remaining service time required for each customer in the queue. Determine the distribution of the total amount of service time the server renders to an arbitrary customer. (3. . Sup- pose customers that required additional increments of service returned immediately to service rather than joining the tail of the queue. ∞ X  Ft̃ (x) = P t̃ ≤ x|ñ = n P {ñ = n}. A queueing system has exogenous Poisson arrivals with rate λ and exponential service with rate µ. Argue that the departure process from the system is a Poisson process with rate λ.9 M/M/1 with Instantaneous Feedback. Let t̃ be the total amount of service time rendered to an arbitrary customer. What would be the effect on the queue occupancy?] 3.58 QUEUEING THEORY WITH APPLICATIONS . At the instant of service completion. dx n! . then P t̃ ≤ x|ñ = n is the probability that the sum of his n service times will be less than x. Compute the distribution of the number of customers in the system in stochastic equilibrium. Condition on ñ.20: d µ(µx)n e−µx P {s̃n+1 ≤ x} = . Solution. 2. : SOLUTION MANUAL Exercise 3. 1. 4. if the customer was in the queue n times. 1.8. independent of system state. Compute the average sojourn time for this system and comment on com- putation of the distribution of the sojourn time. x̃i ≥ 0. That is. . Furthermore.

As argued in part(b). In effect. 3. where ρ = (1−p)µ . the total amount of service time for each customer is exponential with parameter (1−p)µ. Suppose that instead of joining the end of the queue. 4. With this is mind.1). they immediately reenter service. Observe that the arrival process is now a Poisson process: since customers complete service now in one pass and don’t join the end of the queue as they did before. on their first pass through the server. reentering these customers immediately will have no effect on the queue occupancy. they ’use up‘ all of their service time on the first pass. model this system as an ordinary M/M/1 queue- ing system whose service is exponential with rate (1 − ρ)µ. it will remain a Poisson process with the same rate as that of the arrival process. still have increments of service time remaining. Hence. . x ≥ 0. This will simply rearrange the order of service. The distribution of the number of customers in the system in stochastic equilibrium is the same as that of an ordinary M/M/1 λ system. arrivals are independent. Since the departure process does not depend on the actual rate of the server.8. we see that the system can now be modeled as an ordinary M/M/1 system with no feedback. where ρ = (1−ρ)µ . Since the queue occupancy does not depend on order of service. Then E[s̃] = 1 1 λ (1−p)µ 1−ρ .Elementary CTMC-Based Queueing Models 59 Then. Using the results of part(b). we can model this feedback system as one in which there is no feedback and whose service rate is (1 − p)µ. with this new service rate: Pn = (1 − ρ)ρn . upon differentiating and then integrating both sides of (3. Consider those customers who. 2. they arrive to the queue only once. ∞ µ(µα)n−1 e−µα n−1 xX Z Ft̃ (x) = p (1 − p)dα 0 n=1 (n − 1)! ∞ (µpα)n Z x µe−µα (1 − p) X = dα 0 n=1 n! Z x = µe−µα (1 − p)eµpα dα 0 = 1 − e−µ(1−p)x . From part (a).

find E[h̃]. that is. s + 2a + s t taken from Mathematical Tables from the Handbook of Physics and Chemistry. a generic busy period. will be useful in accomplishing this exercise. √ √ s + 2a − s 1 √ √ ⇐⇒ e−at I1 (at). λ+µ λ+µ Then. E[h̃|D] = 1. no service will be completed during the interarrival period. . Condition E[e−sỹ ] on whether or not the first customer completes service before the first arrival after the busy period has begun. . E[e−sỹ |D] = E[e−sz̃1 ]. Now. find E[e−sỹ ]. Thus. the Laplace-Stieltjes transform of the distribution of the length of a busy period. Show that (d/dy)Fỹ (y) = √ √ 1/(y ρ)e−(λ+µ)y I1 (2y λµ). if ρ = λµ .60 QUEUEING THEORY WITH APPLICATIONS . If D occurs then clearly only the initial customer will complete service during that busy period. D be the same as in part (i). Now. Solution. the expected number of customers served in busy period. E[h̃] = 1 · P {D} + E[h̃ + h̃]P {A} µ λ = + 2E[h̃] . Let D. A Laplace transform pair. 1. and let A denote the comple- ment of D. µ 1 E[h̃] = = . On the other hand. but h̃ service completions will take place during each busy period length ỹ. then the length of the busy period has been shown to be the length of the interarrival time plus twice the length of ỹ. and 2. 1. Then E[h̃] = E[h̃|D]P {D} + E[h̃|A]P {A} . µ−λ 1−ρ 2. if D occurs. : SOLUTION MANUAL Exercise 3.10 For the M/M/1 queueing system. . Then E[e−sỹ ] = E[e−sỹ |D]P {D} + E[e−sỹ |A]P {A} . Let D denote the event that the first customer completes service before the first arrival after the busy period has begun.

 2 E[e−sỹ ] = P {D} E[e−sz̃1 ] + P {A} E[e−sz̃1 ] E[e−sỹ ] . This value is known to be 1. q (s + λ + µ) ± (s + λ + µ)2 − 4λµ E[e−sỹ ] = . Thus. Substituting the expressions for E[e−sz̃1 ]. Then E[e−0·ỹ ] = 1 q (λ + µ) ± (λ + µ)2 − 4λµ = p 2λ (λ + µ) ± (λ − µ)2 = 2λ p (λ + µ) ± (µ − λ)2 = 2λ (λ + µ) ± (µ − λ) = . 2λ This of course implies that the sign should be negative. p dy y ρ We first prove a lemma. and using the quadratic formula. Then G(s) = . P {A}.Elementary CTMC-Based Queueing Models 61 and E[e−sỹ |A] = E[e−s(z̃1 +ỹ+ỹ) ]  2 = E[e−sz̃1 ] E[e−sỹ ] . 2λ We now must decide which sign gives the proper function.. q (s + λ + µ) − (s + λ + µ)2 − 4λµ E[e−sỹ ] = . This implies that  2 0 = P {A} E[e−sz̃1 ] E[e−sỹ ] − E[e−sỹ ] + P {D} E[e−sz̃1 ]. i. and P {D}.e. E[e−sỹ ]|s=0 is simply the cumulative distribution function of the random variable evaluated at infinity. 2λ It remains to show that d 1 Fỹ (y) = √ e−(λ+µ)y I1 (2y λµ). By the definition of the Laplace-Stieltjes transform. Lemma: Let g(t) = e−at f (t).

Define p (s + a + b) − (s + b)(s + 2a + b) γ(s) = β(s + b) = a . This proves the Lemma.62 QUEUEING THEORY WITH APPLICATIONS . t To apply the Lemma to invert α(s). : SOLUTION MANUAL F (s + a). . then Z ∞ F (s) = f (t)e−st dt. we wish to invert q (s + λ + µ) − (s + λ + µ)2 − 4λµ α(s) = 2λ given √ √ s + 2a − s 1 √ √ ⇐⇒ e−at I1 (at). proof: Recall that if F (s) is the Laplace transform of f (t). 0 Observe that if s is replaced by (s + a) we get Z ∞ F (s + a) = f (t)e−(s+a)t dt Z0 ∞ = f (t)e−at e−st dt Z0 ∞ = g(t)e−st dt 0 = G(s) by the definition of a Laplace transform. Using this Lemma. . we first need to manipulate α(s) so it is in the form of β(s). s + 2a + s t First note √ √ s + 2a − s β(s) = √ √ √s + 2a + s   √ √ √  s + 2a − s s + 2a − s = √ √  √ √  s + 2a + s s + 2a − s p (s + a) − s(s + 2a) = a where 1 β(s) ⇐⇒ e−at I1 (at).

replacing t with y. 2.5. ρ 2λ Note that √ 1 γ(s) = α(s) ρ =⇒ α(s) = √ γ(s). Then b = (λ + µ) − 2 λµ. 1 L−1 [α(s)] = √ L−1 [γ(s)] ρ 1 1 −(a+b)t = √ e I1 (at) ρt 1 1 √ e−(λ+µ)t I1 (2t λµ). Now. t By linearity. Exercise 3. Recall that for h̃ to be a stopping time for a sequence of random variables. Solution. h̃ must be independent of x̃h̃+1 . . af (t) ⇐⇒ aL[f (t)]. p dy ρy This is the desired result. p γ(s) = β(s + (λ + µ) − q2 λµ) (s + λ + µ) − (s + λ + µ)2 − 4λµ = √ 2 qλµ   1  (s + λ + µ) − (s + λ + µ)2 − 4λµ = .11 For the M/M/1 queueing system. . L−1 [γ(s)] = e−bt L−1 [β(s)] 1 = e−bt e−at I1 (at) t 1 −(a+b)t = e I1 (at).} illustrated in Figure 3. a √ Let (a + b) = (λ + µ) and a2 = 4λµ. argue that h̃ is a stop- ping time for the sequence {x̃i . ρ Then by applying the Lemma. d 1 1 Fỹ (y) = √ e−(λ+µ)y I1 (2y λµ). Find E[h̃] by using the results given above for E[ỹ] in combination with Wald’s equation. i = 1. . p = ρt Or. h̃ describes the number of .Elementary CTMC-Based Queueing Models 63 p (s + a + b) − (s + 2a + b)2 − a2 = .

12 For the M/M/1 queueing system. then the customer’s sojourn time will just be her service time. then the customer has to wait until the system is empty again to receive service. · · ·}. i = 1. w̃ in this case is equivalent to ỹ.] Solution. if the system is busy upon arrival. . : SOLUTION MANUAL customers served in a busy period and so x̃h̃ is the last customer served during that particular busy period. and so E[s̃|B] = E[w̃] + E x̃] 1 = E[ỹ] + µ 1 µ 1 = + 1 − ρ µ  1 1 = +1 . . i=1 Hence. Then E[s̃] = E[s̃|B]P {B} + E[s̃|I]P {I} . if the system is idle with probability (1 − ρ). 2. all subsequent customers x̃h̃+j . Hence h̃ is independent of those customers. j ≥ 1. We can thus apply Wald’s equation   h̃ X E[ỹ] = E  x̃i  = E[h̃]E[x̃]. and the expected length of a busy period are equal. That is. we see that ρ 1 1−ρ   E[s̃] = +1 + µ 1−ρ µ . On the other hand. This defines h̃ to be a stopping time for the sequence {x̃i . µ 1−ρ Combining the two conditional probabilities. the ex- pected amount of time a customer spends in the system. Now.64 QUEUEING THEORY WITH APPLICATIONS . [Hint: Consider the expected waiting time of an arbitrary customer in the M/M/1 queueing system under a non- preemptive LCFS and then use Little’s result. Let B be the event that an arbitrary customer finds the system empty upon arrival. are served in another busy period. argue that E[s̃]. and let I be the event that the system is found to be idle. E[ỹ] E[h̃] = E[x̃] 1/µ 1   = 1−ρ µ 1 = . 1−ρ Exercise 3.

beginning the busy pe- riod. and an infinite waiting room capacity. the service time.Elementary CTMC-Based Queueing Models 65 1 1   = µ 1−ρ = E[ỹ]. It has already been shown in Exercise 3.11 that in a LCFS discipline the waiting time in queue has the same distribution as ỹ.14 Determine the Laplace-Stieltjes transform for the length of the busy period for the M/M/2 queueing system. however. and let B c denote the event that the server is idle. Solution. [Hint: Condition on whether or not an arrival occurs prior to the completion of the first service of a busy period. Let A denote the event of an arrival before completion of service. then the length of the busy period is the minimum of the service time and the interarrival time. Condition the Laplace-Stieltjes transform for the distribution of s̃LCFS on whether or not the service is busy when an arbitrary customer ar- rives. Hence we . If the server is busy. s+µ 2λ Exercise 3. exponential service. If the original customer departs before this arrival.] Solution. If the server is idle. and let D denote the event the customer completes ser- vice before the next arrival: E[e−sỹ2 ] = E[e−sỹ2 |D]P {D} + E[e−sỹ2 |A]P {A} . Let B denote the event that the customer finds the server is busy. Then note that there is a very close relationship between the time required to reduce the occupancy from two customers to one customer in the M/M/2 and the length of the busy period in the ordinary M/M/1 system. an arbitrary busy period. the customer’s total time in the system will be waiting time in the queue plus service time. two parallel servers. E[e−ss̃LCFS ] = E[e−ss̃LCFS |B c ]P {B c } + E[e−ss̃LCFS |B]P {B} = (1 − ρ)E[e−sx̃ ] + ρ{E[e−sỹ ] + E[e−sx̃ ]} = E[e−sx̃ ] + ρE[e−sỹ ] q µ (s + λ + µ) − (s + λ + µ)2 − 4λµ = +ρ· . Exercise 3. Let ỹ2 denote the length of a generic busy period in the M/M/1 queueing system and suppose that a customer arrives. the system having Pois- son arrivals. Determine the Laplace-Stieltjes transform for the distribution of s̃LCFS . Condition E[e−sỹ2 ] on whether or not the customer finishes service be- fore the next customer arrives. the customer will immediately enter service and so the Laplace-Stieltjes transform of s̃LCFS is that of x̃.13 Let s̃LCFS denote the total amount of time an arbitrary customer spends in the M/M/1 queueing system under a nonpreemptive discipline.

the expected length of the remaining time in the busy period should be the same as it was originally: E[e−sỹ2 ]. E[e−sỹ2 ] = P {D} E[e−sỹ2 |D] + P {A} E[e−sỹ2 |A] ˜ = P {D} E[e−sz̃1 ] + P {A} E[e−sz̃1 ]E[e−sf2. Once the system returns to state 1 again. and rearranging terms. we see that ˜ E[e−sỹ2 |A] = E[e−s(z̃1 +f2. Consider f˜2. the time it takes to return to state 1 from state 2.1 +ỹ2 ) ].1 ]E[e−sỹ2 ) ]. Because of the Markovian properties. one in which the service rate is 2µ. With these observations. : SOLUTION MANUAL see that E[e−sỹ2 |D] = E[e−sz̃1 ]. with 2µ substituted in for the service rate. then there will be two customers in the system. 2µ E[e−sỹ2 ] = q . µ+λ which we know to be s+µ+λ . Call this state of the system ‘state 1’. 2λ This follows directly from the definition of E[e−sỹ ]. . Call this state of the system ‘state 2’.1 . and P {D}. Hence. . If the next arrival occurs before the original customer finishes service. ˜ Substituting the expressions for E[e−sz̃1 ].’) Think of the time spent in state 2 as an ordinary M/M/1 busy period. The second server will be activated and the overall service rate of the system will be 2µ. where z̃1 represents the interarrival time between the original customer and the first arrival after the busy period begins. the Laplace-Stieltjes trans- formof a generic busy period in the M/M/1 system. It will continue to be 2µ until there is only one customer in the system again. E[e−sf2.66 QUEUEING THEORY WITH APPLICATIONS . (This is called the ‘first passage time from state 2 to state 1.1 ]. P {A}. Then q ˜ (s + λ + 2µ) − (s + λ + 2µ)2 − 8λµ E[e−sf2. (s + λ) + (s + λ + 2µ)2 − 8λµ .1 ] = . note that this is ex- actly the state it was in originally.

if no customers arrive. z̃1 .] Solution.Elementary CTMC-Based Queueing Models 67 Exercise 3. represents the interarrival time between the original customer and the first arrival. Then due to the memoryless property of the exponen- tial distribution. That is.15 We have shown that the number of arrivals from a Pois- son process with parameter λ. We’ve already show above that the last term of this sum is equivalent to z̃1 . Determine the mean length of the busy period by conditioning on the number of arrivals that occur during the first service time of the busy period. that occur during an exponentially distributed service time with parameter µ. that is. is geometrically distributed with parameter µ/(µ + λ). the probability of n arrivals during a service time is given by [λ/(λ + µ)]n [µ/(λ + µ)]. then the busy period ends with the completion of the original customer’s service time. 1 E[ỹ|ñ = 0] = E[z̃1 ] = . So the remaining time in this busy period is equal to the length of a busy period in which there are initially two customers present. E[ỹ|ñ = 1] = 2E[z̃1 ] + E[ỹ]. Hence. where the first term. . and 1 customer whose service period ends with no new arrivals. let ñ1 denote the number of arrivals that occur during the first service time. this service time starts over. then this is equivalent to the length of a busy period that has (n+1) initial customers present: n customers we know nothing about (and so have generic busy periods). It has already been shown that ỹ|{x̃1 < t̃1 } = z̃1 . Condition E[ỹ] on the number of customers who arrive during the service time of the original customer. µ+λ Now suppose that exactly one new customer arrives during the original cus- tomer’s service period. i. n=0 [Hint: The arrivals segment the service period into a sequence of intervals. and start your solution with the statement ∞ X E[ỹ] = E[ỹ|ñ1 = n]P {ñ1 = n}. For example.  ỹ|{x̃1 > t̃1 } = z̃1 + ỹ + ỹ|{x̃1 < t̃1 } . we see that if there are n arrivals during the service period of the initial customer. E[ỹ|ñ = n] = nE[z̃1 ] + nE[ỹ] + E[z̃1 ] = (n + 1)E[z̃1 ] + nE[ỹ]. one of whom’s busy period ends service with no new arrivals.e. Thus. Repeating this process. Now.

) Let Di . P {D2 } = P {A2 } = o(h) . In this case we may think of Definition 2 as governing changes in state and not simply the event of births. But these arrivals and departures are Poisson. λ0 P0 (t) + µ1 P1 (t) for n = 0.1) k=0 where ñ(t+h) = n|ñ(t) = k signifies the event of having n−k arrivals or k−n departures in the time interval (t. .  Solution. Show that the dynamical equations are as follows:  −(λn + µn )Pn (t) + λn−1 Pn−1 (t)  Pn′ (t) = +µn+1 Pn+1 (t). so we may apply the properties of Definition 2 of the Poisson process (modified to take into account the possibility of deaths. and let the service rate when there are k customers in the system be µk . µ µ which implies 1 µ E[ỹ] = 1−ρ Exercise 3. Ai denote the events of i departures and i arrivals. in the time interval (t. P {Di } = P {Ai } = o(h). but that their rates depend upon the number of customers in the system. : SOLUTION MANUAL Then. Let Pn (t) denote the probability of having n customers in the system at time t. ∞ X E[ỹ] = E[y|ñ1 = n]P {ñ1 = n} n=0 ∞ n λ µ X  = [(n + 1)E[z̃1 ] + nE[ỹ]] n=0 λ+µ λ + µ ∞ n−1 µ λ λ  X  = E[z̃1 ] + E[ỹ] n λ+µ λ+µ λ+µ n=1 µ λ λ+µ 2    = E[z̃1 ] + E[ỹ] λ+µ λ+µ µ µ 1 λ λ+µ 2    = + E[ỹ] λ+µ λ+µ λ+µ µ 1 λ = + E[ỹ].68 QUEUEING THEORY WITH APPLICATIONS . using the geometric properties of P {ñ1 = n}.16 Suppose that the arrival and service time distributions are memoryless. t + h].15. t + h]. using general λ and µ. And note that for i ≥ 2. respectively. Let the arrival rate when there are k customers in the system be λk . . Hence. for n > 0. Then ∞ X Pn (t + h) = P {ñ(t + h) = n|ñ(t) = k}Pk (t) (3.

and det W (σ) are all continuous functions of σ. P {D0 } = 1 − µh + o(h). · · · .1) and using state-dependent arrival and de- parture rates. Let Q be a (K +1)-dimensional square matrix having distinct eigen- values σ0 . note that for n = 0. · · · . Finally. Pn (t + h) = P {A0 } P {D0 } Pn (t) = +P {A1 } Pn−1 (t) + P {D1 } Pn+1 (t) + o(h) = [1 − λn h + o(h)][1 − µn h + o(h)]Pn (t) = +[λn−1 h + o(h)]Pn−1 (t) + [µn+1 h + o(h)]Pn+1 (t) + o(h) = [1 − (λn + µn )h]Pn (t) + λn−1 hPn−1 (t) = +µn+1 hPn+1 (t) + o(h). σ→σi σ→σi . W (σ). so that P0′ (t) = −λ0 P0 (t) + µ1 P1 (t). lim adj W (σ)W (σ) = lim detW (σ)I. Then.Elementary CTMC-Based Queueing Models 69 P {D1 } = µh + o(h).5.. for detW (σ) 6= 0. for σ 6= σi . Now. (i. σK . σ1 . Rearranging terms and dividing by h: Pn (t + h) + Pn (t) o(h) = −(λn + µn )Pn (t)+λn−1 Pn−1 (t)+µn+1 Pn+1 (t)+ . K). h h Now let h → 0: Pn′ (t) = −(λn + µn )Pn (t) + λn−1 Pn−1 (t) + µn+1 Pn+1 (t). 2. P {A0 } = 1 − λh + o(h). Since W −1 (σ) = adj W (σ)/det W (σ). Exercise 3. and define W (σ) = (σI − Q) . adj W (σ). Solution. Then. adj W (σ)W (σ) = det W (σ)I.e. Therefore.17 Prove Theorem 3. i = 1. Pn−1 (t) = 0 = µ0 . we find W (σ)W −1 (σ) = I.15. P {A1 } = λh + o(h). substituting these in (3.

Since W (σ)W −1 (σ) = I. Similary. : SOLUTION MANUAL so that.6. we find that W (σ)adj W (σ) = det W (σ)I. By Exercise 3. Solution. By the continuity of W (σ). Thus. since the rows of adj (σi I − Q) are proportional to Mi . . Thus. by definition. every column of adj (σi I − Q) is an eigenvector of Q corresponding to σi . Exercise 3. Therefore. adj W (σ). σ→σi σ→σi This implies. X is a left eigenvector of Q corresponding to σi if σi X = XQ or X(σi I − Q) = 0. . where On×m denotes an n × m matrix of zeroes. and W −1 (σ) = adj W (σ)/det W (σ). they must be proportional to each other. the left eigenvector of Q corresponding to σi . that σ→σi adj W (σi )W (σi ) = O(K+1)×(K+1) . .70 QUEUEING THEORY WITH APPLICATIONS . by definition. we have adj (σi I − Q) (σi I − Q) = O(K+1)×(K+1) . every row of adj (σi I − Q) is a left eigenvector of Q corresponding to σi . since lim detW (σ) = 0. Now. lim W (σ)adj W (σ) = lim detW (σ)I. Hence the rows of adj (σi I − Q) are proportional to each other. we may use the same technique of Exercise 3. where On×m denotes an n × m matrix of zeroes. and det W (σ). σ→σi adj W (σi )W (σi ) = O(K+1)×(K+1) .18 Prove Theorem 3. we have (σi I − Q) adj (σi I − Q) = O(K+1)×(K+1) . Hence Mi is proportional to the rows of adj (σi I − Q). X is an eigenvector of Q corresponding to σi if σi X = QX or (σi I − Q)X = 0. since lim detW (σ) = 0. Now.16 to show that the columns of adj (σi I − Q) are proportional to each other. Therefore.16.

we see that Q̂ is as follows: √ −λ   √ µλ µλ −µ It is easily seen that the eigenvalues of Q̂ are σ0 = 0 and σ1 = −(λ + µ). λ+µ . Solve the equation for P0 (t). Then show that the matrix Q̂ is negative semi- definite. dt Show that the eigenvalues of the matrix Q are real and nonnegative. P1 (t) and show that they converge to the solution given in Example 3.2 regardless of the values P0 (0). we can use Definition 2 of the Poisson process to write the dynamical equations as: P0′ (t) = −λ0 P0 (t) + µ1 P1 (t) P1′ (t) = −µn P1 (t) + λ0 P0 (t). do a similarity transformation on the matrix Q.i+1 qi+1. eσ1 t . eσK t M−1 .Elementary CTMC-Based Queueing Models 71 Exercise 3.19 Let K = 1. using the definition √ q̂i. Use Definition 2 of the Poisson process to write an equation of the form d [ P0 (t) P1 (t) ] = [ P0 (t) P1 (t) ] Q. 1. Then.i+1 = qi. and then let t → ∞: µ P0 (t) = . and arbitrary P0 (0) and P1 (0).] Solution. we see that " µ λ # −λ   1 1 0 µ+λ µ+λ P (t) = P (0) . which converts the matrix to a symmetric matrix Q̂. λ+µ λ+µ Note that P0 (0) + P1 (0) = 1. K − 1. · · · . [Hint: First. · · · . As seen in the proof of Exercise 3. 1 µ 0 e−(λ+µ)t −1 µ+λ 1 µ+λ This implies that µ e−(λ+µ)t P0 (t) = [P0 (0) + P1 (0)] + [λP0 (0) − µP1 (0)] . λ+µ λ+µ λ e−(λ+µ)t P1 (t) = [P0 (0) + P1 (0)] + [−λP0 (0) + µP1 (0)]. using the equation   P (t) = P (0) M diag eσ0 t .15. P1 (0).i for i = 0. Then.

. . The one-step transition probability matrix is then 1−ρ   1 PC = . by the very definition of being uniformized. 1 πe 0 = 1+ρ ρ πe 1 = 1+ρ Recall that in the original system. This implies πci E[s̃i ] PCi = 1 P πci E[s̃i ] i=0 . we see that P0 . Exercise 3. λ+µ Since P0 (0) and P1 (0) were arbitrary. Solution.20 For the specific example given here show that the equi- librium probabilities for the embedded Markov chain are the same as those for the continuous-time Markov chain. P1 converge to the solu- tion given in the example regardless of the values P0 (0). As already shown. P1 (0). That is. the continuous-time Markov chain was   0 1 PO = . using the equation πe = πe Pe . 1 0 Now. E[s̃0 ] = E[s̃1 ] = 1/ν. the probability matrix for the discrete-time em- bedded Markov chain is 1−ρ   ρ Pe = . 1+ρ Furthermore. : SOLUTION MANUAL µ P1 (t) = .72 QUEUEING THEORY WITH APPLICATIONS . 1 0 Thus. the system can transition from one state back into itself. we see that 1 πc 0 = πe 0 = 1+ρ ρ πc 1 = πe 1 = . however. 1 0 Since this is precisely the probability matrix for the discrete-time embedded Markov chain. the mean occupancy times for each state on each entry are equal.

Hence for i = 0.21 Show that the equilibrium probabilities for the embedded Markov chain underlying the continuous-time Markov chain are equal to the equilibrium probabilities for the continuous-time Markov chain. · · · . Hence. 0 0  µ ν−µ 0 0 0 0 . Thus we see that the equilibrium probabilities for the embedded discrete-time Markov chain are equal to the equilibrium probabilities for the continuous-time Markov chain when we randomize the system. Solution.. the one-step transition probability matrix PC is equal to Pe . K>1.19. 2. the rate leaving states 1. .... And. . Exercise 3. K − 1. .. ρ= µ By the same reasoning as in the proof of Exercise 3.. Then  ν−λ λ 0 0 . .. . 0 0  ν ν µ ν−(λ+µ) λ   ν ν ν 0 . we must take ν to be at least as large as λ + µ. . 0 0    µ ν−(λ+µ) λ  Pe =   0 ν ν ν .. .. K. πei = πci ..   . 1.. · · · . πci E[s̃i ] PCi = ∞ P πci E[s̃i ] i=0 πc i · 1 = ∞ πc i · 1 P i=0 = πc i = πe i . the expected amount of time spent in each state before transition is unity. In this general case.. Let the capacity of the finite M/M/1 queueing system be K..   . . 0 0 . since the system has been uniformized..Elementary CTMC-Based Queueing Models 73 πci · ν1 = πc0 · ν1 + πc1 · 1 ν πc i = πc 0 + πc 1 = πc i = πe i . ν ν This produces the solution 1−ρ πe 0 = 1 − ρK+1 λ π e i = ρi π e 0 .

−0.32) directly and then using uniformization for t = 0. Compare the quality of the results and the relative diffi- culty of obtaining the numbers.6944272t P2 (t) = 0. 0.6944272t 0 0 e 0. 1.0.3278688 0. respectively.9055728t 0  −2.4417094e−0.9055728t − 0.9055728t + 0.0521367  .3144318 The time dependent state probabilities are then P0 (t) = 0.6944272. The .5777088 −0.8  .1327818e−2. · · · .6944272t P1 (t) = 0.4.2. Solution. 2. 0. .0582906 0. 2.5777088 1 10.42229124 1 −0.0 −1.4.8 0. .74 QUEUEING THEORY WITH APPLICATIONS .0466325e−0. 0. and µ1 = µ2 = 1.9055728. To complete the exercise.2622951  M−1 =  −0. determine the time-dependent state probabilities by first solving the dif- ferential equation (3.0 The eigenvalues of Q are found to be 0. Thus. λ0 = λ1 = 0. [ −7.42229124   M = 1 1 1   1 10. where 1 −7.5901699 ]T .4098361 0. : SOLUTION MANUAL Exercise 3. this would become increasingly difficult. The resulting values of P0 (t). For K = 2.0061539 0.8. . Equation 3. and −2.3515454 0.6944272t .6659772 −0.3278688 + 0. and P2 (t) are shown in Figure 3.8.22 For the special case of the finite capacity M/M/1 queue- ing system with K = 2. we find P (t) = [ P0 (0) P1 (0) P2 (0) ] MeΛt M−1 .8 0. and [ −0. it was easier to solve for the equilibrium probabilities by hand.0 with P0 (0) = 1.48 was programmed on a computer for t = 0. . .0 −1.5901699 1 0 0  eΛt =  0 e−0. For K = 2. 0 1. the matrix Q is: −λ λ 0   Q =  µ −λ − µ λ   0 µ −µ  −0. however.2. 0.5901699 ]T .3950769e−0.9055728t + 0. P1 (t). −. . P1 (t).2812364e−2.0.8 0 =  1.4098361 + 0.5901699 −0. plotting the results for P0 (t).0. and their corresponding eigenvectors are proportional to [ 1 1 1 ]T .1484545e−2.2622951 − 0.1. For larger values of K. and P2 (t).

1−ρ   PB (K) = ρK .4 P_1(t) 0. 1 − ρK+1 . increased to threshold value. . As t. where ǫ de- termines when the summation in Equation 3. depedent on ǫ. And as ǫ decreased. the equilibrium probabilities improved. 1 + ρPB (K − 1) where PB (0) = 1. .2 1.8 2.6 0.8 Time dependent state probabilities P_0(t) 0. Exercise 3.1.2 P_2(t) 0.6 0. 2.0 0.0 1.8 1. Time dependent state probabilities for Exercise 3. .Elementary CTMC-Based Queueing Models 75 1. the values of t were allowed to get larger and larger.23 For the special case of the finite-capacity M/M/1 system. if the finite capacity is K. ρPB (K − 1) PB (K) = .0 0. bringing the equilibrium probabilities closer to those determined by hand.0 Time (sec) Figure 3.4 0. show that for K = 1. .4 1.6 1.22 quality of the computer results differed with the values of t and ǫ.2 0.60). Solution.48 may be stopped. By Equation (3.

. (1 − ρ)ρK−1 = PB (K − 1)(1 − ρK ). then by combining Equations (3. 1 + ρPB (K − 1) Hence. 1 + ρPB (K − 1) Exercise 3. : SOLUTION MANUAL It follows that if the finite capacity is K − 1. 2. 1 + (λK /µK )PB (K − 1) where PB (0) = 1. 1 − ρK That is.24 For the finite-state general birth-death process. If the finite capacity of the M/M/1 queueing system is K. 1−ρ   PB (K − 1) = ρK−1 . K−1   Q λi  1 PK =  n−1   i=0 . .   Q K  Q  K  λi µi  1+ P  i=0 n  i=1 Q n=1  µi i=1 . .58).76 QUEUEING THEORY WITH APPLICATIONS . (λK /µK )PB (K − 1) PB (K) = . show that for K = 1. ρPB (K − 1) PB (K) = . 1 − ρK 1 − ρK = 1 − ρK+1 1 − ρK + ρK − ρK+1 1 − ρK = 1 − ρK + ρK (1 − ρ) 1 = ρρK−1 (1−ρ) 1+ 1−ρK 1 = .56) and (3. So 1−ρ   PB (K) = ρρK−1 1 − ρK+1 ρPB (K − 1)(1 − ρK ) = . 1 − ρK+1 Furthermore. . Solution. . .

  n µK K−1  Q n=1 µi  Q µi i=1 i=1 Dividing both numerator and denominator by n−1   Q K−1 X λi   i=0 1+ . note that the numerator of PK is equal to K−2 Q λi λK−1 i=0 ( ) .   Q  K−1  λi Q K−1 P   µi 1+  i=0 n  i=1 Q n=1  µi i=1 Now. K−2   Q λi  1  PK−1 =  n−1   i=0 . µK K−1 Q µi i=1 and the denominator of PK is n−1 K−2   Q λi  Q K−1 λi X  i=0 λK−1 i=0 1+ +( ) .Elementary CTMC-Based Queueing Models 77 Similarly if the finite capacity is K − 1.   n  Q n=1 µi  i=1 we see that the numerator becomes K−2 Q λi λ i=0 ( µK−1 K ) K−1 Q µi i=1 n−1 . Q K−1 λi P  i=0  1+ n Q   n=1 µi i=1 .

. Q is a diagonal. 1 + (λK−1 /µK )PB (K − 1) where PB (0) = 1. d Solution. Observe that dt P (t) is {the rate of arrivals to the system at time t} minus {the rate of departures from the system at time t}. Hence.25 Let K be arbitrary. Show that the eigenvalues of the matrix Q are real and nonpositive. using the table in Section 3.78 QUEUEING THEORY WITH APPLICATIONS . . Thus. Therefore. .   .   µK−1 −(λK−1 + µK−1 ) λK−1  0 µK −µK Now. Use the balance equation approach to write an equation of the form d P (t) = P (t)Q dt where P (t) = [ P0 (t) P1 (t) · · · PK (t) ].    λK−2 PK−2 (t) + µK PK (t) − (λK−1 + µK−1 ) PK−1 (t)  λK−1 PK−1 (t) − µK PK (t) d We may rewrite this as dt P (t) = P (t)Q. : SOLUTION MANUAL which is (λK−1 /µK )PK−1 .. .4 of the text. And the denominator becomes  K−2  Q  i=0 λi     K−1 Q  µi λK−1 i=1 1+( )  n−1  . (λK−1 /µK )PB (K − 1) PB (K) = . whose main diagonal is negative and whose off-diagonal elements are all of the same sign (positive).. real matrix.  P (t) =    dt  . we may write µ1 P1 (t) − λ0 P0 (t)    λ0 P0 (t) + µ2 P2 (t) − (λ1 + µ1 ) P1 (t)  d  . its eigenvalues are all real and nonpositive. where −λ0 λ0 0    µ1  −(λ1 + µ1 ) λ1   Q=  . Exercise 3. µK Q λ P  i=0 i  K−1 1+  n Q   n=1 µi i=1 which is 1 + (λK−1 /µK )PK−1 .

27 Prove (3. Solution. In Figure 3. we see that λ1 λ0 P2 = P0 . from state 1 into state 2) is λ1 . the local balance is λ0 P1 = P0 .26 Using the concept of local balance. Solution.Elementary CTMC-Based Queueing Models 79 Exercise 3. we must be have λ1 P1 = µ2 P2 λ1 i.e. Since the ‘rate in’ must equal the ‘rate out’. µ2 µ1 Repeating this process by moving the boundary between every pair of states. P2 = P1 .12. µ1 Substituting this into the earlier result for P2 . . We see that the rate going from the left side of the boundary into the right side (that is. the initial boundary is between the nodes represent- ing state 1 and state 2.12. we get the general result Pn−1 λi Pn = Pi=0n P0 . write and solve the balance equations for the general birth-death process shown in Figure 3. µ2 Moving the boundary between state 0 and state 1.66). The rate going from the right side of the boundary into the left side (from state 2 into state 1) is µ2 . i=1 µi Exercise 3.

dn .

E[x̃(x̃ − 1) · · · (x̃ − n + 1)] = n Fx̃ (z).

66) . (3.

dz .

we may differentiate each side with respect to z: d d Fx̃ (z) = E[z x̃ ] dz dz Z d = z x dFx̃ (x) Zdz x = xz x−1 dFx̃ (x) x = E[x̃z x̃−1 ]. z=1 Since Fx̃ (z) = E[z x̃ ]. Hence. d lim Fx̃ (z) = lim E[x̃z x̃−1 ] z→1 dz z→1 .

assume that dn−1 h x̃−n+1 i F x̃ (z) = E x̃(x̃ − 1) · · · (x̃ − n + 2)z . . Solution.80 QUEUEING THEORY WITH APPLICATIONS .67). . dn d Z Fx̃ (z) = x(x − 1) · · · (x − n + 2)z x−n+1 dFx̃ (x) dz n dz x Z = x(x − 1) · · · (x − n + 2)(x − n + 1)z x−n dFx̃ (x) xh i = E x̃(x̃ − 1) · · · (x̃ − n + 1)z x̃−n . Thus. dn h x̃−n i lim Fx̃ (z) = lim E x̃(x̃ − 1) · · · (x̃ − n + 1)z z→1 dz n z→1 = E[x̃(x̃ − 1) · · · (x̃ − n + 1)]. That is. . dz n−1 Then. : SOLUTION MANUAL = E[x̃]. differentiating this expression with respect to z.28 Prove (3. Now use induction on n. assuming the result holds for n − 1. Exercise 3.

1 dn .

67) . P {x̃ = n} = F (z) . (3.

x̃ n! dz n .

.

Fx̃ (z) = E[z x̃ ] ∞ z n P {x̃ = i}. X = i=0 On the other hand. the Maclaurin series expansion for Fx̃ (z) is as follows: ∞ . z=0 By definition of expected value.

1 dn .

X Fx̃ (z) = F (z) . zn.

x̃ n! dz n .

.

Exercise 3. Use equation (3. we may match coefficients of z n . Solution. n=0 z=0 By the uniqueness of power series. The result follows.29 Use (3.66) with n = 1: .66) to find E[ñ] and E[ñ2 ].

d .

E[ñ(ñ − 1) · · · (ñ − 1 + 1)] = E[ñ] = Fñ (z).

.

dz .

z=1 .

Elementary CTMC-Based Queueing Models 81 .

d P0 .

.

= dz (1 − ρz) .

z=1 .

let n = 2: . 1−ρ To solve for E[ñ2 ]. ρ = .

2 d2 .

E[ñ(ñ − 1) · · · (ñ − 2 + 1)] = E[ñ − ñ] = 2 Fx̃ (z).

.

dz .

z=1
2ρ 2
= .
(1 − ρ)2

But E[ñ2 − ñ] = E[ñ2 ] − E[ñ], so after a little arithmetic we find that

ρ2 + ρ
E[ñ2 ] = .
(1 − ρ)2

82 QUEUEING THEORY WITH APPLICATIONS . . . : SOLUTION MANUAL

3.1 Supplementary Problems
3-1 Messages arrive to a statistical multiplexing system according to a Poisson
process having rate λ. Message lengths, denoted by m̃, are specified in
octets, groups of 8 bits, and are drawn from an exponential distribution
having mean 1/µ. Messages are multiplexed onto a single trunk having a
transmission capacity of C bits per second according to a FCFS discipline.

(a) Let x̃ denote the time required for transmission of a message over the
trunk. Show that x̃ has the exponential distribution with parameter
µC/8.
(b) Let E[m̃] = 128 octets and C = 56 kilobits per second (kb/s). Deter-
mine λmax , the maximum message-carrying capacity of the trunk.
(c) Let ñ denote the number of messages in the system in stochastic equi-
librium. Under the conditions of (b), determine P {ñ > n} as a func-
tion of λ. Determine the maximum value of λ such that P {ñ > 50} <
10−2 .
(d) For the value of λ determined in part (c), determine the minimum
value of s such that P {s̃ > s} < 10−2 , where s̃ is the total amount
of time a message spends in the system.
(e) Using the value of λ obtained in part (c), determine the maximum
value of K, the system capacity, such that PB (K) < 10−2 .

Solution:

(a) Since trunk capacity is C, the time to transmit a message of length m
bytes is x = 8m/C. Therefore, x̃ = 8m̃/C, or m̃ = x̃C/8. Thus,
8m̃
 
P {x̃ ≤ x} = P ≤x
 C
xC

= P m̃ ≤
8
−µC
x
= 1−e 8 ,

where the last equation follows from the fact that x̃ is exponential
with parameter µ. Therefore, m̃ is exponential with parameter µC/8.
1
(b) Since E[m̃] = 128 octets, µ = 128 . Therefore, x̃ is exponentially
µC 7 7
distributed with parameter 8 = 128 Kbps, or 128 × 103 bps. Then
E[x̃] = 128
7 × 10
−3 sec. Since λ
max E[x̃] = ρ < 1,

1
λ <
E[x̃]

Elementary CTMC-Based Queueing Models 83

7
= × 103
128
= 54.6875 msg/sec.

(c) We know from (3.10) that P {ñ > n} = ρn+1 . Thus, P {ñ > 50} =
ρ51 . Now, ρ51 < 10−2 implies

51 log 10 ρ < −2.

i.e.,
2
log10 ρ < − .
51
2
Hence, ρ = λE[x̃] < 10− 51 = 0.91366, so that

λ < 49.966 msg/sec.

(d) Now, s̃ is exponential with parameter µ(1 − ρ), so P {s̃ > x} =
e−µ(1−ρ)x . Then for P {s̃ > s} < 10−2 , we must have

e−µ(1−ρ)s < 10−2

or
−µ [1 − ρ] s < −2 (ln 10)
i.e.,
2 (ln 10)
s >
µ (1 − ρ)
2 (ln 10)
= 7 3
128 × 10 (1 − 0.91366)
4.60517
=
4.7217
= 0.9753 sec
= 975.3 ms

(e) Recall the equation for PB (K):
1−ρ
 
PB (K) = ρK
1 − ρK+1

We wish to find K such that PB (K) < 10−2 . First, we set PB (K) =
10−2 and solve for K.
1−ρ
 
ρK = 10−2
1 − ρK+1

84 QUEUEING THEORY WITH APPLICATIONS . . . : SOLUTION MANUAL
 
(1 − ρ) ρK = 10−2 1 − ρK+1
= 10−2 − 10−2 ρK+1
K −2 K+1
(1 − ρ) ρ + 10 ρ = 10−2
1 − ρ + 10−2 ρ ρK = 10−2
10−2
ρK =
1 − 0.99ρ

Therefore,
−2 (ln 10) − ln (1 − 0.99ρ)
K =
ln ρ
−[2 (ln 10) + ln (1 − 0.99ρ)]
=
ln ρ

For ρ = 0.91366,

−[4.60517 − 2.348874]
K =
−0.09029676
2.256296
=
0.09029676
= 25.799

Therefore, for a blocking probability less than 10−2 , we need K ≥
26. From part (c), note that for the non-blocking system, the value
of K such that P {ñ > K} < 10−2 is 50 for this particular value of
λ. Thus, it is seen that the buffer size required to achieve a given
blocking probability cannot be obtained directly from the survivor
function. In this case, approximating buffer requirements from the
survivor function would have resulted in K = 50, but in reality, only
26 storage spots are needed.

3-2 A finite population, K, of users attached to a statistical multiplexing sys-
tem operate in a continuous cycle of think, wait, service. During the think
phase, the length of which is denoted by t̃, the user generates a message.
The message then waits in a queue behind any other messages, if any, that
may be awaiting transmission. Upon reaching the head of the queue, the
user receives service and the corresponding message is transmitted over a
communication channel. Message service times, x̃, and think times, t̃, are
drawn from exponential distributions with rates µ and λ, respectively. Let
the state of the system be defined as the total number of users waiting and
in service and be denoted by ñ.

(b) Determine the distribution of the number of visits from state i to state i + 1 during the first passage time from state i to i − 1. Verify that P0 (K) is given by the ratio of the expected length of the idle period to the sum of the expected lengths of the idle and busy periods. (e) Let E[ı̃K ] denote the expected length of the idle period for the com- munication channel. Kλ (K-1) λ (K-2) λ 2λ λ 0 1 2 • • • K-1 K Figure 3. the expected length of a busy period.2. is given by the following recursion: 1  1 + λ(K − 1)E [ỹK−1 ] E[ỹK ] = with E [ỹ0 ] = 0. Determine the distribution of s̃i . 1 + [(Kλ)/µ] {1 + (K − 1)λ E[ỹK−1 ]} That is. Let s̃i denote the total cumulative time the system spends in state i during the first passage time from state i to state i − 1. that is. Determine P0 (K) using ordinary birth- death process analysis.] (d) Let P0 (K) denote the stochastic equilibrium probability that the com- munication channel is idle. State diagram for Supplemental Exercise 3. (c) Show that E[ỹK ]. µ [Hint: Use the distribution found in part (b) in combination with the result of part (a) as part of the proof.2 Solution: .Elementary CTMC-Based Queueing Models 85 (a) The first passage time from state i to state i − 1 is the total amount of time the systems spends in all states from the time it first enters state i until it makes its first transition to the state i − 1. show that P0 (K) computed by the formula just stated is iden- tical to that obtained in part (d). E[ı̃K ] P0 (K) = E[ı̃K ] + E[ỹK ] which can be determined iteratively by 1 P0 (K) = .

the total time spent in states i + 1 and above during a first passage time from state i to i − 1 is . 2. s̃i is the geometric sum of exponential random variables and is therefore expo- µ nentially distributed with parameter [(K − i)λ + µ] (K−i)λ+µ = µ. page 35 of the text) with parameter µ/ [(K − i)λ + µ] because the probability that a given visit is the last visit is the same as the probability of a service completion before arrival (see Proposition 3 of exponential random variables. Let s̃ij denote the time spent in state i on the j −th visit and ṽi denote the number of visits to state i during the first passage from i to i−1. which has parameter µ. : SOLUTION MANUAL (a) The state diagram is as in Figure 3. 2. and the time to first departure. (c) First note that ỹK−i . Then Pṽi s̃i = j=1 s̃ij . given on page 35 of the text. the number of visits to state i is geometrically distributed (see proof of exponential random variables.2. K − 1. and in particular on the number of visits to state i before transitioning to state i−1. Thus. Thus. · · · . k µ (K − i)λ  P {ṽi = k} = (K − i)λ + µ (K − i)λ + µ so that the mean number of visits is (K − i)λ + µ (K − i)λ −1= for i = 1. In summary. page 35 of the text) and this probability is independent of the number of visits that have occurred up to this point. In this case. which follows directly from Property 5 of exponential random vari- ables. . the busy period for a system having K − i cus- tomers. has the same distribution as the first passage time from state i + 1 to i. say i. the number of visits is exactly 1. the time spent in state i is the minimum of two exponential random variables: the time to first arrival. Thus. the time spent in state i on each visit is exponen- tially distributed with parameter (K − i)λ + µ. Now. K. (b) From arguments similar to those of part (a). the total amount of time spent in any state before transitioning to the next lower state is exponentially distributed with parameter µ.86 QUEUEING THEORY WITH APPLICATIONS . we wish to know the total amount of time spent in state i before the sys- tem transitions from state i to i − 1 for the first time. the time spent in state i on the j −th visit is independent of everything. which has parameter (K − i)λ. the expected minimum number of visits to state i + 1 from state i is (K − i) µλ . Now. For a typical state. From part (b). the number of visits to state i + 1 from state i is also geometric. µ µ For i = K. . on each visit to state i. i = 1. Now. · · · .

Elementary CTMC-Based Queueing Models 87 (K − i) µλ E[ỹK−i ]. we have 1 E[ỹK ] = (1 + (K − 1)λE[ỹK−1 ]) . µ With i = 1. we find from (3. starting with n = 1. Thus. the total time spent in state i is 1/µ. 1 E[ỹn ] = (1 + (K − 1)λE[ỹn−1 ]) . µi = µ and λi = (K − i)λ for this particular case. so that the first passage time from state i to i − 1 is given by 1 λ E[ỹK−(i−1) ] = + (K − i) E[ỹK−i ] µ µ 1 = (1 + (K − i)λE[ỹK−i ]) . Also. (d) Using ordinary birth-death analysis.58) that 1 P0 =    P∞ Qn−1 Qn 1+ n=1 i=0 λi i=1 µi 1 =   . 1 P0 =    PK Qn−1 Qn 1+ n=1 i=0 (K − i)λ i=1 µ . PK Qn−1 Qn 1+ n=1 i=0 λi i=1 µi Now. µ with E[ỹ0 ] = 0. and iterating to K. µ We then have 1 E[ỹ1 ] = µ 1 E[ỹ2 ] = (1 + 1 · λE[ỹ1 ]) µ 1 λ  = 1+ µ µ 1 E[ỹ3 ] = (1 + 2λE[ỹ2 ]) µ 1 λ λ   = 1+2 1+ µ µ µ so that we may compute E[ỹK ] for any value of K by using the re- cursion.

which is clearly correct. 1 E[ỹK ] = [1 + (K − 1)λE[ỹK−1 ]] µ which is always true. by hypothesis. Substituting (∗) into this last expression gives K−2  n #!# " " 1 1 X (K − 2)! λ E[ỹK ] = 1 + λ(K − 1) 1+ µ µ n=1 (K − 2 − n)! µ (K − 1) K−1 "  n # 1 X (K − 1)(K − 2)! λ = 1+λ + µ µ n=2 (K − 1 − n)! µ . λ n−1 1+ n=1 µ i=0 (K − i) Qn−1 K! But i=0 (K − i) = K(K − 1) · · · K − (n − 1) = (K−n)! .88 QUEUEING THEORY WITH APPLICATIONS . So 1 P0 = PK  n . suppose (K − 1) ∈ T . we find E[ỹ1 ] = µ1 . λ K! 1+ n=1 µ (K−n)! (e) The expression given for P0 (K) is readily converted to the final form given as follows: 1 Kλ P0 (K) = 1 Kλ + E[ỹK ] 1 Kλ =   1 1 Kλ + µ 1 + λ(K − 1)E[ỹ( K − 1)] 1 =   Kλ 1+ µ 1 + λ(K − 1)E[ỹ( K − 1)] We now prove the proposition K−1  n # " 1 X (K − 1)! λ E[ỹK ] = 1+ µ n=1 (K − 1 − n)! µ proof: Let T denote the truth set for the proposition. . With K = 1. Then. : SOLUTION MANUAL 1 = PK  n Q . Now. . K−2 "  n # 1 X (K − 2)! λ E[ỹK−1 ] = 1+ (∗) µ n=1 (K − 2 − n)! µ and. from part (c).

We now substitute the expression for E[ỹK−1 ] into the given expres- sion −1 K  P0 (K) = 1+λ (1 + (K − 1)λE[ỹK−1 ]) µ " K−1  n !#−1 K X (K − 1)! λ = 1+λ 1+ µ n=1 (K − 1 − n)! µ " K  n #−1 X K! λ = 1+ .in terms of the system parameters and the ergodic state probabilities. rate β. Each student operates continuously as follows: the student is ini- tially in the work state for an exponential. call holding times have a mean of three minutes.that is the proportion of attempts to join the service sys- tem that will be blocked . . If a telephone is available. that is. Compute the call blocking probability as a function of β for β ∈ (0. 3-3 Traffic engineering with finite population. n=1 (K − n)! µ which is identical to that obtained in part (d). (b) Draw a state diagram for this system showing all transition rates. (e) Specify a formula to compute the average call generation rate.Elementary CTMC-Based Queueing Models 89 K−1 "  n # 1 X (K − 1)! λ = 1+ µ n=1 (K − 1 − n)! µ This proves the proposition. (c) Write the balance equations for the system. 30). If all telephones are busy. (f) Let µ = 1/3 calls per minute. period of time. (a) Define an appropriate state space for this service system. The students are always busy doing one of two activities: doing queueing homework (work state) or using the telephone (service state).ever. no other activities are allowed . the student uses the telephone for a length of time drawn from an exponential distribution with rate µ and then returns to the work state. (d) Specify a method of computing the ergodic blocking probability for the system . Ten students in a certain grad- uate program share an office that has four telephones. The stu- dent then attempts to use one of the telephones. then the student is blocked and returns immediately to the work state.

(a) Define the state of the system as the number of students using the telephones. Over a long period of time T . the number of blocked attempts is 6βP4 T while the total number of ar- rivals is (10βP0 + 9βP1 + 8βP2 + 7βP3 + 6βP4 ) T . 4. 3. This can have values 0. . Taking ratios. 10ß 9ß 8ß 7ß 0 1 2 3 4 1µ 2µ 3µ 4µ Figure 3. we find that the proportion of blocked calls is 6P4 . 1. The arrival rate while in state 4 is 6β. : SOLUTION MANUAL (g) Compare the results of part (f) to those of the Erlang loss system hav- ing 4 servers and total offered traffic equal to that of part (f). State diagram for Supplemental Exercise 3. for each value of β. and plot this result on the same graph as the results obtained in (f). (b) See Figure 3.90 QUEUEING THEORY WITH APPLICATIONS . 10P0 + 9P1 + 8P2 + 7P3 + 6P4 (e) The average call generation rate is 10P0 β + 9P1 β + 8P2 β + 7P3 β + 6P4 β. Solution.3. (d) All arrivals that occur while the system is in state 4 are blocked. there is a total offered traffic rate for the sys- tem specified in this problem. Use this total offered traffic to obtain a value of λ. That is. 10βP0 = µP1 9βP1 = 2µP2 8βP1 = 3µP3 7βP3 = 4µP4 .3 (c) The balance equations can be either local or global. . .3. 2. Then compare the results. we choose local. and then obtain the blocking probability that would result in the Erlang loss system.

4    i X 10 β 1 = P0 . 1 P0 =   P4 10  β i i=0 i µ Therefore.e. 4 P4 10 β j j=0 j µ The call generation rate is then 4 X λ = (10 − i)βPi i=0    P4 10 iβ β i=0 (10 − i) i µ =    . i µ i=0 i.    10 4 β 6β µ 4 PB =    P4 10 βi β i=0 (10 − i) µ i .Elementary CTMC-Based Queueing Models 91 (f) First we solve the balance equations. 3. Hence. 2. 10β 10 β   P1 = P0 = P µ 1 µ 0    2 9β 10 β P2 = P1 = P0 2µ 2 µ    3 8β 10 β P3 = P2 = P0 3µ 3 µ    4 7β 10 β P4 = P3 = P0 4µ 4 µ . P4 10 j β j=0 j µ And the call blocking probability is 6P4 βλ . So. 10 βµ P1 =   P4 10  β i i=0 µ  i   10 β i i µ Pi =    for i = 0.. 1.

λ) using this formula. It can be seen that for the Erlang blocking formula. . As an example.92 QUEUEING THEORY WITH APPLICATIONS . P4 10 4−i µ i=0 (10 − i) i β With µ = 13 . 10 (3β)j P4 j=0 (10 − j) j (g) From (3. if β = 1. a smooth traffic assumption would result in a substantial underestimation of the actual blocking probability.00025. then the actual blocking probability is approximately 0. a) =   P4 ai i=0 i! (3λ)4   4! = P4  (3λ)i  i=0 i! (3λ)4 = h (3λ)2 3 4 i 4! 1 + 3λ + + (3λ) (3λ) 3! + 4! 2! (3λ)4 = 24 + (24 + [12 + (4 + 3λ)3λ]3λ)3λ The idea is to compute λ first and then compute β(4.65). . This will always be true for relatively small numbers of users. we have λ a= = 3λ µ and   a4 4! B(4. we have   10 (3β)i P4 β i=0 (10 − i) i λ(β) =   10 (3β)j P4 j=0 j and   10 6 (3β)4 4 PB (β) =   . : SOLUTION MANUAL   10 6 4 =    . . which is off by a factor of 400.64) and (3.1. but the Erland blocking formula would yield 0.

(e) Draw the state transition rate diagram. 4r. but only one line. Holding times are ex- ponentially distributed with parameter µ. Each employee has a think time which is exponentially distributed with parameter λ. and the letter a indicates that the auxiliary line is actually available for service. 4r. (a) Two waiting employees means that there are three employees waiting and using the system. the extra line is immediately disconnected. At a point in time. Solution. list the events that would cause a change in the state of the process.Elementary CTMC-Based Queueing Models 93 3-4 A company has six employees who use a leased line to access a database. When the third person arrives. If a person arrives while the system is in state 3. use of an auxiliary line is authorized. If the authorization is completed when there are less than three employees waiting or if the number of employees waiting drops below two at any time while the extra line is in use. the system is immediately in the request mode 3a. 6r. 6a}. if an employee requests a line. 3r. The time required for the employee to obtain the authorization is exponentially distributed with rate τ . (d) What is the distribution of the amount of time the system spends in state 4r on each visit? Explain. if no other customers arrive before the line is authorized. 5a. (a) Argue that the set {0. (f) Write the balance equations for the system. When the number of waiting employees reaches a level 2. is a suitable state space for this process. then there is a waiting period before the line is authorized. With the process in state 4r at time t0 . the employee needs the database and joins a queue along with other employees who may be waiting for the leased line to access the database. 2. the line is dropped. But. (c) Compute the probability that each of the possible events listed in part (b) would actually cause the change of state. and specify the new state of the process following the event. leaving 3 in the system. 3. where the numbers indicate the number of employees waiting and in service. (b) The situation in state 4r is that there are employees waiting and in service and an authorization has been requested. 3a. 5r. The idea is that the system behaves differently when the number of wait- ing persons is increasing than when the number of waiting persons is decreasing. the . 1. the letter r indicates that authorization has been requested. Upon completion of the think time.

the possible next events are customer arrival.94 QUEUEING THEORY WITH APPLICATIONS . 6λ 5λ µ 0 1 2 3 µ τ 3λ µ µ 4λ 3λ 2λ λ 3a 4a 5a 6a 2µ µ µ µ τ τ τ 3λ 2λ λ 3r 4r 5r 6r 2µ 2µ 2µ Figure 3. and τ . cus- tomer service. State diagram for Supplemental Exercise 3. and AC denote the events customer arrival. 5a. respectively. line authorization complete. respectively. µ. or 6r. with rates 2λ. Then 2λ P {A} = 2λ + µ + τ µ P {S} = 2λ + µ + τ τ P {AC} = . .4 for the state transition rate diagram. : SOLUTION MANUAL line is again requested. (c) Let A. S. respectively. . line authorization complete. respectively. 2λ + µ + τ (d) The distribution of the amount of time spent in state 4r is exponen- tially distributed with parameter 2λ + µ + τ because this time is the minimum of 3 exponential random variables with parameter 2λ. µ.4 . (b) While in state 4r. 5r. customer ser- vice.4. putting the system in state 4r. and τ . (e) See Figure 3. If the line au- thorization is complete while the system is in 4r. then the system goes into state 4a. or 6a.

the extra line is immediately disconnected. When the waiting messages reach a level 3.5 details two state diagram for the system. specified in octets.Elementary CTMC-Based Queueing Models 95 (f) The balance equations are as follows: state rate leaves rate enters 0 6λP0 = µP1 1 (5λ + µ)P1 = 6λP0 + µP2 2 (4λ + µ)P2 = 5λP1 + µP3 + µP3a + 2µP3r 3 (3λ + µ)P3 = τ P3a 3a (3λ + τ + µ)P3a = 4λP2 + µP4a 3r (3λ + 2µ)P3r = 2µP4r 4a (2λ + τ + µ)P4a = 3λP3 + 3λP3a + µP5a 4r (2λ + 2µ)P4r = 3λP3r + τ P4a + 2µP5r 5a (λ + τ + µ)P5a = 2λP4a + µP6a 5r (λ + 2µ)P5r = 2λP4r + 2µP6r + τ P5a 6a (µ + τ )P6a = λP5a 6r 2µP6r = λP5r + τ P6a 3-5 Messages arrive to a statistical multiplexer at a Poisson rate λ for trans- mission over a communication line having a capacity of C in octets per second. Message lengths. Note that the first passage time from state 2 to state 1 in the first diagram is the same as . (a) Define a suitable state space for this queueing system. the capacity of the transmission line is increased to Ce by adding a dial-up line. Solution. The time required to set up the dial-up line to increase the capacity is exponen- tially distributed with rate τ . and write the vector balance equations for the system. (d) Determine the infinitesimal generator for the underlying Markov chain for this system and comment on its structure relative to matrix geo- metric solutions. If the connection is completed when there are less than three messages waiting or if the number of messages waiting drops below two at any time while the additional capacity is in use. (c) Organize the state vector for this system according to level. (a) Figure 3. where the level corresponds to the number of messages waiting and in service. are exponentially distributed with parameter µ. (b) Draw the state transition-rate diagram.

0 .. 0 1. .e.0 1 2 3 .. E[ỹ] = E[ỹ|A]P {A} + E[ỹ|Ac ]P {Ac } = E[z̃ + ỹ2 ]P {A} + E[z̃]P {Ac } λ µ = E[z̃ + f˜21 + ỹ] + E[z̃] λ+µ λ+µ .. Therefore. λ 2λ 3λ µ µ .5 (b) Let A denote the event of an arrival before the first departure from a busy period. Thus. i.. State diagram for Supplemental Exercise 3.96 QUEUEING THEORY WITH APPLICATIONS .. µ µ 1− 2µ 1   1 1 λ 2µ (c) c̃ = ĩ + ỹ implies that E[c̃] = λ + µ + µ λ 1− 2µ . 2λ 2λ λ Figure 3. the M/M/1 with service rate 2µ. : SOLUTION MANUAL the length of the busy period in the second diagram. 1 2 0 1. 1 ! 1 λ 2µ E[ỹ] = + λ ..5. µ µ µ . . 1 λ P {system is idle} =  1  1 1 λ 2µ λ + µ + µ λ 1− 2µ 1 =  λ  λ 2µ 1+ µ 1+ λ 1− 2µ ...e. and let Ac denote its complement.. f˜21 = E[ỹM/M/1 ] = 2µ 1 /1 − 2µλ . 1 ! µ 1 λ 2µ E[ỹ] = + λ λ+µ λ+µ λ+µ 1− 2µ i. Thus.. Then.

we know that the total amount of time the system spends i state 1 before entering state 0 is exponential with parameter µ because there are a geometric number of visits. [Hint: How does this period of time compare to the length of the busy period for an ordinary M/M/1 queueing system?] (b) Determine the expected length of the busy period for the ordinary M/M/2 queueing system by conditioning on whether or not an arrival occurs before the first service completion of the busy period and by using the result from part (a). Thus. λ 1 1+ µ λ 1− 2µ 3-6 Consider the M/M/2 queueing system. (a) Determine the expected first passage time from state 2 to state 1. Determine the probability that there is exactly one customer in the system by taking the ratio of the expected amount of time that there is exactly one customer in the system during a busy period to the expected length of a cycle. 1 µ P {ñ = 1} =  1  1 1 λ 2µ λ + µ + µ λ 1− 2µ λ = µ . (d) Determine the total expected amount of time the system spends in state 1 during a busy period. (c) Define c̃ as the length of time between successive entries into busy periods. . Determine the probability that the system is idle at an arbitrary point in time by tak- ing the ratio of the expected length of an idle period to the expected length of a cycle.Elementary CTMC-Based Queueing Models 97 1 =  . exponential service. the system having Poisson arrivals. 2 parallel servers. and an infinite waiting room capac- ity. that is. λ 1 1+ µ λ 1− 2µ (d) From an earlier exercise. and for each visit the amount of time spent is exponential with parameter (λ+ µ). (e) Check the results of (c) and (d) using classical birth-death analysis. as the length of one busy/idle cycle.

6. State diagram for Supplemental Exercise 3. occupancy is 4. for an arbitrary customer by conditioning on whether an arbitrary customer finds either zero. where e indicates extra capacity. So. 1. 3e. one. . 2. (b) See Figure 3. if there are 3 customers waiting. Less than 3 mes- sages waiting means occupancy is less than 4. Solution. 3. · · ·. or two or more customers present.6 (c) Organizing the state vector for this system according to level.6. : SOLUTION MANUAL (f) Determine the expected sojourn time. then there are (n − 1) customers waiting. 4e. Consider the nonpreemptive last-come-first-serve discipline together with Little’s result and the fact that the distribution of the number of customers in the system is not affected by order of service.98 QUEUEING THEORY WITH APPLICATIONS . (a) If the occupancy is n. λ λ λ λ λ 0 1 2 3 4 5 µC µC µC µC T µC T λ λ µCe 3e 4e 5e µCe µCe Figure 3. the required states are 0. 4. we see that level 0 : P0 level 1 : P1 level 2 : P2 level n : Pn = [ Pn0 Pn1 ] n>2 And the balance equations for the system are λP0 = µCP1 (λ + µC) P1 = µP2 + λP0   µC (λ + µC) P2 = λP1 + λP3 µCe . . We assume that if the number waiting drops below 3 while attempting to set up the extra capacity. E[s̃]. In this case. then the attempt is aborted.

   µC  −R λ 0 0 0 0 .. . .     0 0 0 0 0 µC 0 ..... Q̃ =  0    0 0 B43 0 A1 A0 .. ..    0 B21 B22 B23 0 0 0 ...   0 0 0 0 A2 A1 A0 . .. . ... ... .    0 0 0 0 0 0 0 . ..    0 0 0 0 0 0 µCe ... ..  0  µC −R λ 0 0 0 .......    0 0 B32 B33 A0 0 0 ..    0 0 0 0 0 0 0 . . . ... .. .. the resulting generator is −λ λ 0 0 0 0 0 ...   ..Elementary CTMC-Based Queueing Models 99     λ + µC 0 µC 0 P3 = P2 [ λ 0 ] + P4 0 λ + µCe 0 µCe For all remaining n..  B10 B11 B12 B13 0 0 0 ... ...       λ + µC + τ 0 λ 0 0 τ Pn = Pn−1 + Pn 0 λ + µCe 0 λ  0 0  µC 0 +Pn+1 0 µCe (d) The infinitesimal generator is readily written by inspection of the bal- ance equations by simply taking all terms to the right-hand side. . except that the boundary conditions are very complex...    0 0 0 µC 0 −Re − τ τ .. We see that this latter form is the same as that given on page 140 of the text.    0 0 0 0 0 A2 A1 ..   Q̃ =   0 0 0 0 µC 0 −Re . . If we let R = (λ + µC) and Re = (λ + µCe ). .. . n ≥ 4.  0  0 µC −R 0 λ 0 . . .. We may also write Q̃ as B00  B01 0 0 0 0 0 ... ... . ...  0 0 µCe 0 −Re 0 λ . The text by Neuts treats problems of this form under the heading of queues with complex boundary conditions..   . . .

.

and let γ be the net arrival rate into the system. ñ = M P i=1 ñi . determine the average time spent in the system for an arbitrary customer when the system is in stochastic equi- librium.2 Argue that the matrix R is stochastic. The element rij represents the probability that a unit currently at node i will proceed next to node j. and that. i.11). γ = Mi=1 γi . Since the system is closed. a unit must go .Chapter 4 ADVANCED CONTINUOUS-TIME MARKOV CHAIN-BASED QUEUEING MODELS Exercise 4. Let s̃ denote the time spent in the systemP for an arbitrary customer.e. Observe that the total number of customers in the system is the sum of the customers at each node. Solution. the vector λ is proportional to the equilibrium probabilities of the Markov chain for which R is the one-step transition probability matrix. Exercise 4. Solution. E[ñ] E[s̃] = γ where ñ is the total number of customers in the system in stochastic equilib- rium.11) and (3. M " # 1 X E[s̃] = E ñi γ i=1 M 1X = E[ñi ] γ i=1 M 1X ρi = . Therefore. By Little’s result. therefore. γ i=1 1 − ρi the latter following from (4..1 Using Little’s result.

102 QUEUEING THEORY WITH APPLICATIONS . where R is a one-step transition matrix and π is the stationary probability vector of the Markov chain for which R is the one-step transition probability matrix. Solution. by direct integration. Thus. Exercise 4. : SOLUTION MANUAL to some node within the network. this equation has exactly the form πR = π. The integral is then 1 φx+1 1 h i · | = rej(θ+2π) |x+1 − rejθ |x+1 j2π x + 1 C j2π 1 h i = r x+1 ej(θ+2π)(x+1) − r x+1 ejθ(x+1) j2π 1 h i = r x+1 ejθ(x+1) · 1 − r x+1 ejθ(x+1) j2π = 0. But. and all solutions λ are proportional to this eigenvector. . .3 Let x denote any integer. we have λR = λ. From λ[I − R] = 0. j2π C j2π x + 1 C Note that the closed path C may be traversed by beginning at the point rejθ and ending at rej(θ+2π) . Therefore. Therefore. Then 1 1 φx+1 I φx dφ = · | . j=1 Hence R is stochastic. Show that 1 for x = −1  1. M X rij = 1∀i. otherwise. Suppose x 6= −1. I x φ dφ = j2π C 0. λ is proportional to π. 1 1 dφ I I −1 φ dφ = j2π C j2π C . the vector λ is the left eigenvector of R corresponding to its eigen- value at 1. Now suppose x = −1.

φ 1 .

= ln φ.

.

j2π C .

rej(θ+2π) 1 .

= ln φ.

.

j2π rejθ .

2) i=1 j=1 (1 − σi φ) Show that (νi −j) 1 1  cij = − (νi − j)! σi d(νi −j) νi . .σ2 . . (4. That is. there are exactly r distinct singular values of P g(N. j2π Exercise 4. φ) can be written as r Y 1 g(N.4 Suppose that the expression for g(N.and the multiplicity of σi is νi . M. (4. We may rewrite (4. φ) = ..these are called σ1 . M. φ).1) i=1 (1 − σi φ)νi where ri=1 νi = M . φ) = j. M.Advanced CTMC-Based Queueing Models 103 1 = [ln r + j(θ + 2π) − ln r − jθ] j2π 1 = j2π = 1.35) as νj r X X cij g(N. M.σr ..

. [(1 − σ φ) g(N. φ)] . M.

i dφ(νi −j) .

φ=1/σi Solution. φ) = j=1 where F (φ) is the remainder of the terms in the sum not involving σm . M. Let m be a fixed but otherwise arbitrary index. d(νm −n) . Now fix n. M. Thus. φ) by (1 − σm φ)νm : νm νm cmj (1 − σm φ)νm −j + (1 − σm φ)νm F (φ). 1 ≤ m ≤ r. Multiply both sides of g(N. 1 ≤ n ≤ νm . X (1 − σm φ) g(N. Differentiate both sides of this expression and evaluate at φ = σ1m .

φ). (1 − σm φ)νm g(N. M.

.

= (−σm )(νm −n) cmn (νm − n)! .

dφ (ν m −n) φ= σ1 i i. φ)] (νm − n)! σm dφ(νm −n) 1 −1 (νi −n) d(νi −n)   ..e. M. 1 −1 (νm −n) d(νm −n)   cmn = [(1 − σm φ)νm g(N.

νi .

φ)] . = [(1 − σ i φ) g(N. M.

. (νi − n)! σi dφ(νi −n) .

φ= σ1 m .

then finding all combinations with n5 = 3. etc. · · · .5 . then. . Show that N +n−1   bnN = σiN . n4 . N Exercise 4. (−n − M + 1)(1)N .6 Verify that the probabilities as specified by (4. Then by the Maclaurin series expansion for h. n2 . n3 . Solution. n5 )| ni = 4 . we’ll substitute n3 . .5 Define bnN to be the coefficient of φN in the expansion of (1 − σi φ)−n . N! But h(N ) (0) = (−n)(−n − 1) . with the last column of the table being the intermediate result I. . Note that I is equal to P {n1 . These members were obtained by finding all possible combinations of n1 . the N −th term of this sum is h(N ) (0)(−σi φ)N . k=0 k! In particular. we must first determine the set 5 ( ) X S4. n2 . n2 . : SOLUTION MANUAL Exercise 4. n5 ) of this set. producing an intermediate result. n2 . .39) sum to unity. . In this exercise. It suffices. ∞ X h(k) (0)(−σi φ)k h(φ) = . so that the coefficient of φN is (−1)N (N + n − 1)!(−σi )N bnN = N !(n− 1)! N +n−1  = σiN . n4 . n5 with n5 = 4. Define h(φ) = (1−σi φ)−n . i=1 Then for each member (n1 . n4 . (4.104 QUEUEING THEORY WITH APPLICATIONS . which we’ll call I.3) N Solution.5 = (n1 . n5 } multiplied by 1497. n3 . and n5 into 2n3 +n4 4n5 . The following table gives all possible members of the set S4. · · · . to show that the sum of these intermediate results is equal to 1497.

Advanced CTMC-Based Queueing Models 105 i n1 n2 n3 n4 n5 I 1 0 0 0 0 4 256 2 0 0 0 1 3 128 3 0 0 1 3 3 128 4 0 1 0 0 3 64 5 1 0 0 0 3 64 6 0 0 0 2 2 64 7 0 0 2 0 2 64 8 0 0 1 1 2 64 9 0 2 0 0 2 16 10 2 0 0 0 2 16 11 1 1 0 0 2 16 12 0 1 0 1 2 32 13 1 0 0 1 2 32 14 0 1 1 0 2 32 15 1 0 1 0 2 32 16 0 0 3 0 1 32 17 0 0 0 3 1 32 18 0 0 1 2 1 32 19 0 0 2 1 1 32 20 0 1 2 0 1 16 21 1 0 2 0 1 16 22 1 0 0 2 1 16 23 0 1 0 2 1 16 24 0 1 1 1 1 16 25 1 0 1 1 1 16 26 0 2 1 0 1 8 27 2 0 1 0 1 8 28 0 2 0 1 1 8 29 2 0 0 1 1 8 30 1 1 1 0 1 8 31 1 1 0 1 1 8 32 0 3 0 0 1 4 33 3 0 0 0 1 4 34 1 2 0 0 1 4 35 2 1 0 0 1 4 36 0 0 4 0 0 16 37 0 0 0 4 0 16 38 0 0 3 1 0 16 39 0 0 1 3 0 16 .

Exercise 4.105) sum to unity. Solution. Hence (4. : SOLUTION MANUAL i n1 n2 n3 n4 n5 I 40 0 0 2 2 0 16 41 1 0 3 0 0 8 42 0 1 3 0 0 8 43 1 0 0 3 0 8 44 0 1 0 3 0 8 45 1 0 2 1 0 8 46 0 1 2 1 0 8 47 1 0 1 2 0 8 48 0 1 1 2 0 8 49 0 2 2 0 0 4 50 2 0 2 0 0 4 51 0 2 0 2 0 4 52 2 0 0 2 0 4 53 1 1 2 0 0 4 54 1 1 0 2 0 4 55 1 1 1 1 0 4 56 0 2 1 1 0 4 57 2 0 1 1 0 4 58 0 3 1 0 0 2 59 3 0 1 0 0 2 60 0 3 0 1 0 2 61 3 0 0 1 0 2 62 1 2 0 1 0 2 63 2 1 0 1 0 2 64 1 2 1 0 0 2 65 2 1 1 0 0 2 66 0 4 0 0 0 1 67 4 0 0 0 0 1 68 1 3 0 0 0 1 69 3 1 0 0 0 1 70 2 2 0 0 0 1 It is easy to verify that the last column does indeed sum to 1497. This shows that the probabilities given by (3.43) to (4. In the development of (4. . Since N − n also represents some integer.43). . Rather.43).7 Carefully develop the argument leading from (4. N was never assigned a specific num- ber.43) is true for all integers.106 QUEUEING THEORY WITH APPLICATIONS .44) follows from (4. .44). it is a variable and so it simply represents some integer. we see immediately that (4.

Letting ρ = 4 as in the example in the chapter.43) together with the initial con- ditions. Each node in the system may have only one customer in service at a time. We then substitute this value into the expression for g(1.47). Repeating this for each value of N and M until g(6.47).8 Using the recursion of (4.109) along with the initial conditions we may compute g(1. 5) is reached. E[ñM ] = P {ñ > 0} g(N − 1. 0) + ρ1 g(0. 2): g(1. respectively. at node M . Define ñsM and s̃sM to be the number in service and time in service. 2) = +1 · 1 = 2. we find that ρ ρ1 = ρ2 = = 1. 1) = g(1. so that by (4. 4 ρ ρ3 = ρ4 = = 2. 5) for the special case N = 6 numer- ically for the example presented in this section.Advanced CTMC-Based Queueing Models 107 Exercise 4. verify the expression for g(N. 1): g(1. M ) = ρM g(N. Solution. we may form a table of normalizing constants: M 0 1 2 3 4 5 0: 0 1 1 1 1 1 1: 0 1 2 4 6 10 2: 0 1 3 11 23 63 N 3: 0 1 4 26 72 324 4: 0 1 5 57 201 1497 5: 0 1 6 120 522 6510 6: 0 1 7 247 1291 27331 where g(6. 2 ρ5 = ρ = 4. Solution. 2) = g(1.9 Develop an expression for throughput at node M using Lit- tle’s result and (4. M ) . Then. 1) + ρ2 g(0. 5) is in the lower right-hand corner of the table. Exercise 4. 1) = 0 + 1 · 1 = 1. using (3.

Substituting β = δ = 2.10 Show that det A(z) is a polynomial. We wish to show det A(z) is at most 2(n + 2). Let A(z) be an (n + 1) × (n + 1) matrix. M ) Exercise 4.11 Repeat the above numerical example for the parameter values β = δ = 2. · · · . the proportion of time spent in each phase is the same and the transition rate between the phases is faster than in the original example. By assumption.j represents its respective cofactor.j is of order at most 2(n + 1).j is of at most order 2. Now. λ = µ = 1. we find −β β −2     2 Q= = . Compare the results to those of the example by plotting curves of the respective complementary distributions.j A0. Now assume that for an n × n matrix. λ = µ = 1. Solution. 1. each A0. First consider the case where A(z) is a 1 × 1 matrix. The throughput may now be found by applying Little’s result to L and W . Thus the result holds for all K ≥ 0. Hence the product a0.j is the element in the jth column of the first row of A(z) and A0. . the determinant of A(z) will be K X det A(z) = a0.j A0. its determinant is at most 2(n + 1).j is the determinant of an n × n matrix. Solution. The proof is by induction. : SOLUTION MANUAL = L. into the example in Chapter 3. Clearly det A(z) is a polynomial whose order is at most 2. . That is. 0 µ 0 1 . Exercise 4. A0. Compute the overall traffic intensity and compare. n.j is of at most order 2(n + 2). Furthermore. g(N. and 1 E[s̃M ] = µ = W. j=0 where a0. the order of which is not greater than 2(K + 1). M ) = λM .j . a0. Note that for all j.108 QUEUEING THEORY WITH APPLICATIONS . L γ = W g(N − 1. j = 0. δ −δ 2 −2     µ 0 1 0 M= = .

Advanced CTMC-Based Queueing Models 109 and     0 0 0 0 Λ= = . 3 3 Thus. 2z z 2 − 4z + 1 z 2 − 4z + 1 −2z   adj A(z) = . −2z 1 − 3z and det A(z) = −3z 3 + 9z 2 −√7z + 1 √ 6 6 = 3(1 − z)(1 + − z)(1 − − z). we find 1 − 3z   2z A(z) = . 0 λ 0 1 Note that the matrices M and Λ remain unchanged from the example in the chapter. upon substitution of these definitions into (3. we find that . Then.137).

3670068   . −0.

2996598 adj A(z). 0.

.

z=1− √ 6 −0. = .4494897 3 .3670068 0.

−2 −2   .

adj A(z).

.

z=1 −2 −2 and det A(z) . = .

.

.

= −2. (1 − z) .

5505103z 0. Upon substitution of this result into (3. this result reduces to [ G0 (z) G1 (z) ] = [ 0.138).140) and (3.0673470   2 [0 1 ] = P0 . +0.2247449 ] .0824829 2 Thus.2752551 0.2247449 ] . we find that −0. we find after some algebra that T T 1 0. G1 (z) 1 − 0.2247449 After some additional algebra.2752551 − 0.2752551 0.141).z=1 Substituting these numbers into (3. we find that P0 = [ 0.0505103z   G0 (z) = .

the overall arrival rate to the transmission line is simple the sum of the arrivals from the computer and from the group of users. : SOLUTION MANUAL ∞ (0.5505103)n z n .5505103)n for n ≥ 1.1 shows the complementary distributions of both the original example and its modification β = δ = 2. in the first case where γ = 0. independent of the com- puter’s activity.5 + 0.5505103)n . Solution.1835034 0. Exercise 4. Assuming that the user packets also require exponential service with rate µ. The probability generating function for the occupancy distribution can now be computed from (3.139) or by simply summing G0 (z) and G1 (z). it should be clear that the matrix Λ will also remain . This is a simple example of integrated traffic. . . 0 λ+γ The matrices Q and M remain unchanged from the example.110 QUEUEING THEORY WITH APPLICATIONS . we find [ Pn0 Pn1 ] = [ 0.5 Figure 4. That is.2247449 ] n=1 Thus. This is simply the complement of the probability that the system is not busy: 1 − P0 . for n ≥ 1. Now.4082483 n=1 From this probability generating function.5505103)n z n .4082483 × (0.1. We find ∞ (0.5 = 0. we find P0 = 0.12 Suppose we want to use the packet switch and transmis- sion line of the above numerical example to serve a group of users who collectively generate packets at a Poisson rate γ. X Fñ (z) = 0.1835034 0. To compute the traffic intensity we determine the probability that the system will be busy.5 and Pn e = 0.2247449 ] (0. in addition to the computer.λ = µ = 1.   γ 0 Λ= . Since the group of users generate packets independent of the com- puter’s activity. Since P0 did not change in this exercise from the example in the chapter. X + [ 0. the traffic intensity for both problems will be the same: 1 − 0. show the impact the user traffic has on the occupancy distribution of the packet switch by plotting curves for the cases of γ = 0 and γ = 0.

i.1.1 modified example . δ −δ 1 −1     µ 0 1 0 M= = .0001 0 3 6 9 12 15 n Figure 4.1z 2 − 2.1z + 1 .01 original example .e. 0 µ 0 1 We proceed in the same manner as in Exercise 3.001 .1z 2 − 3. 0 1. where γ = 0. z 1..39 and in the example in the chapter. the problem is the same as that in the example.Advanced CTMC-Based Queueing Models 111 1 P{occupancy exceeds value shown on abscissa} . Complementary distributions unchanged.1. the matrix Λ will be:   0.1 0 Λ= . In the second case.1 We repeat the definitions of Q and M: −β −1     β 1 Q= = . Substituting these definitions into (3.137) yields: .1z + 1   z A(z) = .

we find that .71z 2 − 5.2z + 1 = 0. : SOLUTION MANUAL 1.112 QUEUEING THEORY WITH APPLICATIONS .62z 3 + 6.2865501 − z). Thus.5091141 − z)(21.0225177 − z)(0. .1z 2 − 2. .1z + 1 and det A(z) = . −z .1z + 1 −z   adj A(z) = .1z 2 − 3.11z 4 − 2.11(1 − z)(1.

2865501   . −0.

0.2020168 adj A(z).

.

= .2865501 0.4064559 . z=0.2865501 −0.

−1 −1   .

adj A(z)

= ,
z=1 −1 −1
and
det A(z)

(1 − z) .8. = −0.

(1 − 0.6626404z)(1 − 0.6626404)n .2346046 0.0475680)n z n . we find that −0. this result reduces to [ G0 (z) G1 (z) ] = [ 0.5 Thus.0845332   2.1704812 ] n=1 ∞ (0.0475680z) 0.z=1 Substituting these numbers into (3.1325209 0.0739485z  . X + [ 0.6626404)n z n X + [ 0.1653954 − 0. for n ≥ 1. we find that P0 = [ 0.1325209 0.1653954 ] . we find after some algebra that the matrix [G0 (z)G1 (z)] is T 1 0.1020837 −0.1199059 2. +0.2346046 0. Upon substitution of this result into (3.1704812 ] (0.138).0047394 After some additional algebra.140) and (3. we find [ Pn0 Pn1 ] = [ 0.2346046 − 0.5 [ 0 1 ] = P0 .141).1653954 ] ∞ (0.0050858 ] n=1 Thus.

0969979 n=1 From this probability generating function. Show that the rows of adj Q are equal. continuous-time Markov chain.001 .0050858 ] (0. 1 P{occupancy exceeds value shown on abscissa} . γ = 0.13 Let Q denote the infinitesimal generator for a finite. Complementary distributions Figure 4.01 gamma = 0.0475680)n . X + 0.6626404)n z n X Fñ (z) = 0. The probability generating function for the occupancy distribution can now be computed from (3.139) or by simply summing G0 (z) and G1 (z).4 + 0.0475680)n z n .1 .3030021×(0.6626404)n +0.0969979×(0. Exercise 4.4 and Pn e = 0.1.0001 0 0 5 10 15 20 25 n Figure 4.0475680)n for n ≥ 1. . we find P0 = 0.2. discrete-valued.1 .0 gamma = 0.1020837 −0.3030021 n=1 ∞ (0. We find ∞ (0.Advanced CTMC-Based Queueing Models 113 + [ 0.2 shows the occupancy distributions for the cases γ = 0.

is a scalar. it follows that the ki are identical so that the rows of adj Q are identical and the proof is complete. . = . jK e ] =  ..   .. Then postmultiply both sides by e. . and noting that Qe = 0 yields the desired result: G(1) [M − Λ] = P0 M. k0 φ k0 φ0 k0 φ1 . Since φl cannot be zero for all i unless all states are transient. So that if φl 6= 0. we see that ′ G (z)A(z) + G(z) [2Λz − (Λ − Q + M)] = −zP0 M. ..73) by starting out with (4.. ′ G (1)Q + G(1) [Λ + Q − M] = −P0 M.. the columns of adj Q are all proportional to each other and these are all proportional to the right eigenvector of Q corresponding to its zero eigenvalue..114 QUEUEING THEORY WITH APPLICATIONS .. kK φK In addition.  . Solution. 0 ≤ i ≤ K. : SOLUTION MANUAL Solution.       . postmultiplying both sides by e.14 Obtain (4. . jK  adj Q = [ j0 e j1 e . j0 j1 .. Now.... a scalar.  kK φ kK φ0 kK φ1 . Comparing the two expressions for adj Q. Recall Equation (4. There- fore..64): G(z)A(z) = (1 − z)P0 M.. then ki = jl /φl for all i. Since Qe = 0. . k1 φK  adj Q =  . in addition. differentiating both sides of (3.  j0 j1 .. jK    j0 j1 .. Let the stationary vector be denoted by φ. and then taking limits as z → 1..137) with respect to z. Exercise 4..    .. Then. . We first note that the rows of adj Q are all proportional to each other and. Taking limits as z → 1. with ki . where A(z) = Λz 2 − (Λ − Q + M)z + M. k0 φK      k1 φ   k1 φ0 k1 φ1 .. . . . . which is impossible for a finite state CTMC. this right eigenvector is proportional to e. differentiating both sides with respect to z. . 0 ≤ j ≤ K. these are proportional to the stationary vector for the CTMC.. we see that ki φl = jl for all i. jK where ji ..64).

Finally. Â(z). Recall the definition of A(z): A(z) = Λz 2 − (Λ − Q + M)z + M. µK ).Advanced CTMC-Based Queueing Models 115 Exercise 4.   δK−1 −(βK−1 + δK−1 ) βK−1  0 δK −δK Now.   . transforming A(z) into a symmetric matrix. postmultiply the matrix by the column vector e: A(z)e = Λz 2 e − (Λ − Q + M) e + Me = −z(1 − z)Λe + (1 − z)Me = (1 − z) (M − zΛ) e. · · · . it is possible to transform Q to a symmetric matrix using a diagonal similarity transformation. . and −β0 β0 0    δ1  −(β1 + δ1 ) β1   Q=  . do a similarity trans- formation. where Xz is the null vector of Â(z) corresponding to z. λ1 . form the inner product < Xz . 1. Exercise 4. λK ). examine the zeros of the resulting quadratic equation. M = diag(µ0 . K 0. This quadratic form results in a scalar quadratic equation whose roots are positive and one of which is a null value of A(z). µ1 . else where l0 = 1. · · · . Â(z)Xz >. and then forming the quadratic form of Â(z). . Then.16 Show that the zeros of the determinant of the λ-matrix AlA(z) are all real and nonnegative. Solution. where Λ = diag(λ0 .. · · · . Define  Qi Wij = j=0 lj for i = j = 0. First we show that the null values are all positive by transforming A(z) into a symmetric matrix.15 Show that the sum of the columns of A(z) is equal to the column vector (1 − z)(M − zΛ)e so that det A(z) has a (1-z) factor.] Solution. [Hint: First. since Q is tridiagonal and the off-diagonal terms have the same sign. To find the sum of the column vectors of A(z). Â(z).

< Xσ .  . since the determinant of a product of matrices is equal to the product of the individual determinants. p is negative semi-definite. suppose that det A(z) = 0.44. Therefore.44. But. where < ·. m =< Xσ . and there exists some nontrivial Xσ . Thus. it follows that detÂ(z) = detW −1 A(z)W. (3. and W is non-singular.. so l and m are nonnegative. Furthermore. Q̂Xσ > . and W are all diagonal. . Â(z)Xσ >= 0. · > denotes the inner product. The discriminant of the quadratic equation for (3. such that Â(z)Xσ = 0. MXσ >. Also. . where Q̂ is the following tridiagonal symmetric matrix: √ √−β0 β0 δ1 √0    β0 δ1 −(β1 + δ1 ) β1 δ2  Q̂ =  . ΛXσ >. < Xσ . Then det Â(z) = 0. Also. M. Â(z)Xσ >= lσ 2 − (l − p + m)σ + m = 0. since Λ. p  0 βK−1 δK −δK It is then straightforward to show that Q̂ is negative semi-definite.   . p = < Xσ .1) where l = < Xσ . βj−1 Then Q̂ = W −1 QW. Now. Λ and M are positive semi-definite.1) is given by (l − p + m)2 − 4lm = (l − m)2 − 2p(l + m) + p2 . so p ≤ 0.116 QUEUEING THEORY WITH APPLICATIONS . . : SOLUTION MANUAL and s δj lj = for 1 ≤ j ≤ K. it follows that h i detÂ(z) = det Λz 2 − (Λ − Q̂ + M)z + M .

2. In the first case. The left-hand side expresses the average rate at which units enter the service the system. both roots of (3. (l − p + m) > (l − p + m)2 − 4lm > 0. analogous to the special case of the M/M/1 system of λ = (1−P0 )µ where ρ = µλ = (1−P0 ). 1. 1. and so the value of σ must be real.Advanced CTMC-Based Queueing Models 117 We see that the right-hand side of this equation is the sum of nonnegative terms. Hence. Let T represent time. Exercise 4. Express the traffic intensity in terms of the system parameters and P0 . Hence all null values of A(z) are real. In addition. the average service time is then [G(1) − P0 ] T e [G(1) − P0 ] e = . 2. m−p In the second case we find from the quadratic equation that 1  q  σ= (l − p + m) ± (l − p + m)2 − 4lm .74) represents the average rate at which customers leave the system. Therefore.44. Solution. We now show that these null values are positive. [G(1) − P0 ] MT e [G(1) − P0 ] Me . we may write the traffic intensity using (4. and the right-hand side expresses the average rate at which units leave the system. Recall (4.74) as ρ = [G(1) − P0 ] e. Therefore.1) that m σ= > 0. Then. Check the result obtained in part 2 for the special case M = µI. if [G(1) − P0 ] e is the traffic intensity. 2l p Clearly.74): G(1)Λe = [G(1) − P0 ] Me. We see from above that either l = 0. by Little’s result. 3.17 The traffic intensity for the system is defined as the prob- ability that the server is busy at an arbitrary point in time. [G(1) − P0 ] T e is the total time spent serving. or l > 0. Determine the average amount of time a customer spends in service us- ing the results of part 1 and Little’s result.1) are positive so that all of the null values of A(z) are positive. the number of customers com- pleting service over time is [G(1) − P0 ] MT e. we find from (3. This is due to the fact that the right-hand side of (4.44.

If M = µI. and A(zi )A−1 (zi ) = I. Then show that it is not possible for det A(z)/(1 − z) to be zero for any choice of δi s unless P0 = 0.18 Show that adj A(zi )e is proportional to the null vector of A(z) corresponding to zi . Now. then the average service time is E[x̃] = [G(1) − P0 ] e 1 = .. we find that A(zi )adj A(zi ) = det A(zi )I. : SOLUTION MANUAL 3. and the fact that zi is a null value of A(z).118 QUEUEING THEORY WITH APPLICATIONS .   . First note that det A(z) is equal to the determinant of the matrix obtained from A(z) by replacing its last column by A(z)e. we have A(zi )adj A(zi )e = 0.  λK z 2 − (λK + µK )z + µK Therefore.57) and (4. 1). [Hint: First show that this is the case if δi = 0 ∀ i. 1) does not change when the δi change. . where B(z) is the matrix obtained by replacing the last column of A(z) by the column vector (M − Λz)e.60) is ergodic. lim A(zi )adj A(zi ) = lim detA(zi )I. since lim detA(zi ) = 0. The result is that the number of zeros in (0. Exercise 4. det A(z) = (1 − z)det B(z). By the continuity of A(zi ). . that σ→σi A(zi )adj A(zi ) = O(K+1)×(K+1) . Solution.] Solution. adj A(zi ). it is easy to show that λ0 z 2 − (λ0 + µ0 )z + µ0    λ1 z 2 − (λ1 + µ1 )z + µ1    A(z)e =    = (1 − z)(M − Λz)e. This defines adj A(zi ) to be a null vector of A(z) corresponding to zi . Since A−1 (zi ) = adj A(zi )/det A(zi ). Exercise 4. then there are exactly K zeros of det AlA(z) in the interval (0. and det A(zi ). [G(1) − P0 ] µe µ which is a familiar result. where On×m denotes an n × m matrix of zeroes. Thus. which implies no equilibrium solution exists. . σ→σi σ→σi This implies.19 Show that if the system described by (4..

1) for some α = φK e 6= 0. the null values of B̂(z. δ2 . Thus.57. we find that the vector of cofactors corresponding to the last column in proportional to µK . µi z= . φK G(1) = . 0. But π0 > 0 if an equi- librium solution exists.1) is zero only if π0 = 0. 0. δK ). and λi > 0. we may choose to examine the behavior of the null values of B(z) as a function of the death rates of the phase process while keeping the birth rates positive. 0) may be obtained by solving the following equations for z: λi z 2 − (λi + βi + µi )z + µi = 0. 0 ≤ i < K. two possibilities must be considered: λi = 0. 0 ≤ i < K. (4. These elements are as follows: λi z 2 − (λi + βi + µi )z + µi . 0) may therefore be found by setting each of the above terms equal to zero. 0. In doing this. we may compute the determinant of B(1) by expanding about its last column.2). Since the lower subdiagonal of this λ-matrix is zero. G(1) [M − Λ] e = π0 Me. 0. The null values of B̂(z. · · · . · · · . Toward this end. δ1 . Equivalently. then B(1) is nonsingular. (4. That is. · · · . if an equilibrium solution exists.2) and µK − λK = 0. · · · . then B(z) has no null value at z = 1. In the first case. µK − λK z. 0).57.57.Advanced CTMC-Based Queueing Models 119 Now. φK e Thus. In the latter case. We note that the null values of a λ-matrix are continuous functions of the parameters of the matrix. But from equation (4. First. The left-hand side of (4. detB(1) = αG(1) [M − Λ] e. 0. we define B̂(z. When solving (4. 0. the left eigenvector of Q corresponding to its zero eigenvalue. Thus.57. the corresponding null value is between zero and one. the determinant of the matrix is equal to the product of the diagonal elements. βi + µi so that for each i. consider det B̂(z. we find that lim λi z 2 − (λi + βi + µi )z + µi = µi > 0. so that if an equilibrium solution exists.74). z→0 .

δ2 . Now. In addition. z→1 and that the expression lim λi z 2 − (λi + βi + µi )z + µi z→∞ tends to +∞. ∞) if λi > 0. B(z). has exactly K null values in the interval (0.∞) ∞ j 1 X X  = Ai j=n+1 zK+i zK+1 ∈Z (1. δ2 . Exercise 4. the number of null values of B̂(1. · · · .2) yield exactly K null values of B̂(z. and consequently A(z).86) through (4. 1) to assume that value 0 for some value of (δ1 . 1). Therefore. 1) is independent of the death rates. δ1 . · · · .∞) 1 − zK+i . δK ). Hence. and between 0 and K null values of B̂(z. δK ) in the inter- val (0. 0. 0. Solution. · · · .∞)   (n+1) X zK+i = Ai  −1 . δ1 .20 Beginning with (4. 1).88). each quadratic equation of (4. · · · . : SOLUTION MANUAL lim λi z 2 − (λi + βi + µi )z + µi = −βi < 0. δK ). δ2 .2) has one root in (0. due to (4. In summary. δK ) such that 0 = det B̂(1.57. δ1 . 0.120 QUEUEING THEORY WITH APPLICATIONS .57. 0) in (1. δ1 .∞) Therefore.57. the equations (4.1). all of the null values of B̂(z. From (4. · · · . δK ) are real if Q is a generator for an ergodic birth and death process.∞) ∞ j 1 X X  = Ai j=n+1 zK+i zK+1 ∈Z (1. δ2 . ∞ −j X X P {ñ > n} = Ai zK+i j=n+1 zK+1 ∈Z (1. 1) and one root in (1. 0) in (0.∞)  n+1 1 X zK+i = Ai 1 1− zK+i zK+1 ∈Z (1. δK ) in (0. · · · . 0. · · · . ∞). · · · . it is impossible for any of the null values of B̂(z. there exists no (δ1 .86). . Therefore. Therefore. −n X Pn = Ai zK+i . δ2 . zK+1 ∈Z (1. . zK+1 ∈Z (1. the latter quantity being equal to the number of positive values of λi . develop expres- sions for the joint and marginal complementary ergodic occupancy distri- butions. δ2 .

Since adj A(z) is a continuous function of z for an arbitrary matrix G(z). First. adj A = det A(z) A(z)−1 −1 −1 = det A(z)U (z)L (z) 1 1 = det A(z) adj U (z) adj L(z) det U (z) det L(z) = adj U (z) adj L(z). zK+1 ∈Z (1. Now solve the linear system Û (zi )y(zi ) = eK . [Hint: The term in the lower right-hand corner. det Û (zi ) . From matrix theory.21 Develop an expression for adj A(zi ) in terms of the outer products of two vectors using LU decomposition. This means the entire last row of U (z) is zero so that all elements of adj U (z) are zero except for its last column. What then is true of its adjoint?] Solution. we then have adj A(zi ) = adj U (zi )adj L(zi ). This means that adj A(zi ) will be given by the product of the last column of adj U (zi ) with the last row of adj L(zi ). We then have 1 y(zi ) = adj Û (zi )eK . K) element of U (zi ) by 1/ K−1 j=0 Ujj (zi ). Call this matrix Û (zi ) and the note that this matrix is nonsingular and that its determinant is 1.∞) 1 − zK+i Exercise 4. of the upper trian- gular matrix will be zero. if zi is a simple null value of U (z). we may determine the last column of adj U (zQ i ) in the following way.   (n+1) X zK+i P {ñ > n} e = Ai e  −1 . we replace the (K. Let y(zi ) denote the last column of adj U (zi ) and x(zi ) denote the last row of adj L(zi ). We let A(z) = L(z)U (z). and consequently the last row. we know that for A(z) nonsingular.Advanced CTMC-Based Queueing Models 121 Similarly. Since the last column of adj U (z) is unaffected by the elements of the last row of U (z). the LU decomposition can be arranged such that the last diagonal element of U (z) is zero. Now. where fi is the row vector of all zeros except for a 1 in position i. where L(z) is lower triangular and U (z) is upper triangular. we may solve for the last row of adj L(zi ) by solving x(zi )L(zi ) = fK . We follow the usual convention that the elements of the major diagonal of L(z) are all unity. Since det L(zi ) = 1.

122 QUEUEING THEORY WITH APPLICATIONS . We may compute the last column ŷ of adj Û (and. so that 2 3 4   Û =  0 2 3 1 0 0 4 Since the determinant of this matrix is 1. hence. We note in passing that solving the linear systems is trivial because of the form of Û (zi ) and L(zi ). The product is then 1 −2 1   x̂ŷ =  −6 12 −6  4 −8 4 . : SOLUTION MANUAL But det Û (zi ) = 1 and. of adj U ) by solving the system 0   Û ŷ =  0  1 It is easy to show that ŷ = [ 1 −6 4 ]T . if x̂ is the last row of adj L. Let 2 3 4   A= 4 8  11  . x(zi )L(zi ) = fK and then we have adj A(zi ) = y(zi )x(zi ). x̂ = [ 1 −2 1 ] 1 We now replace the (3. 6 13 18 We may then decompose A as described above:   1 0 0 L = 2 1 1 2 3 4 U = 0 2 3 0 0 0 Now. then x̂L = [ 0 0 1 ] Thus. so that adj L = L−1 . Consequently y(zi ) is identically the last column of adj U (zi ). That is. . we solve Û (zi )y(zi ) = eK . adj Û = det Û −1 . det L is 1. in addition. . In summary. the last column of adj Û (zi ) is the same as the last column of U (zi ). We illustrate the above method with an example. 3) element of U of 2+2 = 41 .

639709 .0      −1 0.5 0.163) is then Pn = P0 Rn  n 0.165395 ] .1 0.029500 = [ 0. we see that P0 = 0. The following are sample data from the program.234604 0. Using differing values of τ and the parameters from Example 3.5 using the matrix geometric approach.165395 ] . Substituting these result into (3. indeed. Solution. In particular.070500 0.5. we find G(1) = [ 0.1 0.1 −1.5 ] .5 ] I − 0.98) converges.172).1 The matrix R was determined to be   0.460291 0.0 1. The solution expressed in matrix geometric form as in (3. 0.460291 0.029500 R= 0.460291 0. Exercise 4. we find 2. Also. Since the vector G(1) is proportional to the left eigenvector of Q corresponding to the eigenvalue 0. Note that the resultant matrix R was the same for each of the sampled τ .029500 P0 = [ 0. Determine numerically the range of values of τ for which (4.172). Specify the results in terms of the matrix AlR.5 0.639709 for values of τ between 1. verify numerically that the results are the same as those obtained above. equal to adj A. we see that the product x̂ŷ is. −1 3.639709 = [ 0.7. where ǫ = 1 × 10−11 . A program was written to compute the different iterations of Equa- tion (3.22 Solve for the equilibrium state probabilities for Example 4. 0.5).167) yields    0. R0 was taken to be diag(0.029500 R= .7 into (3.070500 0.4.639709 These are the same results as those found in Example 3. and n is the number of iterations it took for the program to converge. 0.070500 0.Advanced CTMC-Based Queueing Models 123 By computing adj A directly by forming the minors of A.7215 and 2795.460291 0.070500 0.234604 0.   0.0 Ri = τ R2i−1 + Ri−1 I − τ −1 +τ −1 .

.167) yields    0. Note that P0. Substituting different values of τ and the parameters from Exercise 3.5 ] I − 0.0 The matrix R was determined to be   0.23 Solve Exercise 3.0 0.0 0.275255 0.0 0.0 P0 = [ 0.0 1. we see that P0 = 0.0 Ri = τ R2i−1 + Ri−1 I − τ −1 +τ −1 .163) is then Pn = P0 Rn  n 0.22 using the matrix geometric approach.39.550510 These are the same results as those found in Exercise 3.124 QUEUEING THEORY WITH APPLICATIONS . : SOLUTION MANUAL τ n 2 300 50 3200 200 11800 500 28000 2795 10928 Exercise 4.39 into the computer program from Exercise 3. In particular. Substituting these result into (3. Since the vector G(1) is proportional to the left eigenvector of Q correspond- ing to the eigenvalue 0.275255 0.5 0.5 ] .0 = [ 0. we find G(1) = [ 0.5 0.550510 . .0      −1 0. we find 3.0 −2.224745 ] .50.449490 0.449490 0. Evaluate the relative difficulty of using the matrix geometric approach to that of using the probability generating function approach.0 0. The solution expressed in matrix geometric form as in (3.550510 = [ 0.5.449490 0.0 R= 0.0 0. 0.224745 ] . . −2 4.0 does not play a role in determining Pn since the first row of R is zero. Solution.

Then using the identity G(z) = P0 [I − zR]−1 . βi . etc. n ≥ 1.0 R= .449490 0.   0. the program should be adaptable to different parameters: matrix dimension. as the di- mension of the matrices increases even slightly. . dz n Suppose n = 1.550510 τ n 3 300 10 600 50 2400 100 4500 500 20400 For this exercise. solving the problem using the matrix-geometric approach involved developing a computer program. d G(z) = P0 [I − zR]−2 R dz = P0 [I − zR]−1 [I − zR]−1 R = G(z) [I − zR]−1 R.0 0. it is reusable.112) for the n-th factorial moment of ñ. and n is the number of iterations it took for the program to converge. this particular program was relatively simple. On the other hand. λi . First we prove a lemma: dn G(z) = n!G(z) [I − zR]−n Rn . the matrix geometric method is much more practical than the probability generating function approach. However. τ . As in Exercise 3. as the dimension of these matrices increases. Exercise 4. the difficulty involved with using this method also increases. 0.24 Prove the result given by (4.50. Solution.Advanced CTMC-Based Queueing Models 125 The following are sample data from the program. Furthermore. the resultant matrix R was the same for each of the sampled τ . Although this is not al- ways an easy task. Plus. So although there is initial overhead. K = 1. using the probability generating function approach was fairly straight forward and did not involve a great deal of computation. Since the matrices involved were 2 × 2. where ǫ = 1 × 10−11 .

. This proves the lemma. dz by assumption. dn . . we wish to show it holds true for n + 1. Thus. we evaluate the n−th derivative of G(z) at z = 1.126 QUEUEING THEORY WITH APPLICATIONS . : SOLUTION MANUAL Now assume the result holds true for n. d(n+1) d dn   G(z) = G(z) dz (n+1) dz dz n d   = n!G(z) [I − zR]−n Rn . Using this result. and recall that Rn = V −1 N n V. d(n+1) G(z) = −(n + 1)n! [I − zR]−(n+1) Rn (−R) dz (n+1) = (n + 1)!G(z) [I − zR]−(n+1) Rn+1 .

.

G(z).

.

e = n!G(z) [I − zR]−n Rn .

.

e .

.

Now. powers of diagonal matrices are the diagonal matrix of powers of the diagonal elements of the original matrix. In particular. dz n z=1 z=1 = n!G(1) [I −"R]−n R n e n ∞ # Ri Rn e Y X = n!G(1) j=1 " i=0 n ∞ # −1 N V V −1 N n Ve i Y X = n!G(1) V j=1 i=0" n ∞ # −1 i N n Ve Y X = n!G(1)V N j=1 i=0 −1 −n = n!G(1)V [I − N ] N n Ve. and so [I − N ]−1 is the matrix 1 1 ···   diag 1−ν0 1−νK . multiplication of diagonal matrices is again a diagonal matrix. Thus. N is a diagonal matrix. Furthermore. dn . whose elements are the products of the diagonal elements of the individual matrices.

h n  n i −1 ν0 νK .

G(z).

dz n . e = n!G(1)ν 1−ν0 . ··· 1−νK Ve.

z=1 .

P1 ). service. Therefore. sum the elementsPof the right hand side of (4. For a system in which the previous condition does not hold. we have "∞ # X 0= Pi [A0 + A1 + A2 ] i=0 from which we see that ∞ X Pi = π i=0 . i=0 But. for our special case. the three stochastic matrices must be the same. the result is in general not true. and [B0 + A0 ] are all stochastic. and each of these stochastic matrices is the infinitesimal generator of the phase process at its level.113).113) from i = 1 to i = ∞. from the boundary condition of (4. because Q̃ is stochastic.25 Beginning with (4. Next. Then complete the proof in the obvious way. [B1 + A1 + A0 ]. i=0 [Hint: First. The implication of the fact that Q̃ is stochastic is that the sum of the block matrices at each level is stochastic.113). This can be found by comparing the first two to see that B1 = A2 and the first and third to see that B0 = A1 + A2 . The arrival. there is no service. and phase processes are independent of the level except that at level zero. Thus the previous summation yields "∞ # X 0= Pi [A0 + A1 + A2 ] − P0 B0 − P1 B1 . the sum of the elements of each of the rows of Q̃ must be a zero matrix to show that θ(P0 .Advanced CTMC-Based Queueing Models 127 Exercise 4. that is [A0 + A1 + A2 ]. Thus. This will yield 0 = [ ∞ i=0 Pi ] [A0 + A1 + A2 ] + θ(P0 . A1 + A2 = B0 and A2 = B1 . show that ∞ X Pi = π. use the fact that. i=0 But. Upon summing the elements of the right hand side of (4.113) from i = 1 to i = ∞. We emphasize that we are dealing here with the specific case that the arrival and service rates are dependent upon the phase of an auxiliary process. in order for the phase process to be independent of the level.] Solution. we find "∞ # X 0= Pi [A0 + A1 + A2 ] − P0 [A1 + A2 ] − P1 A2 . P1 ) = 0. we have P0 B0 − P1 B1 = 0.

whose infinitesimal generator is Q.26 With Q̃ defined as in (4. . For xample. the magnitude of the diagonal elements of S must be at least as large as the corresponding row elements of S 0 . so that the sum of the diagonal elements of S and the corresponding row elements of S 0 cannot exceed 0. . We first note that since Q̃ is stochastic. the arrival and services while in a given phase could be accompanied by a phase transition. Se + S 0 be = 0. But. In a very general case. At the same time. S 0 b is a matrix whose dimensions are the same as those of S so that S and S 0 b may be summed term-by-term. : SOLUTION MANUAL because both are the unique left eigenvector of [A0 + A1 + A2 ] whose ele- ments sum to unity. Solution. It is then easy to see that the phase process is independent of the level if and only if the M = M̂. Let Λ̂ and M̂ denote the more general arrival and service matrices. it follows that Se + S 0 = 0.128 QUEUEING THEORY WITH APPLICATIONS . show that the matrix S + S 0 b is always an infinitesimal generator. the elements of the result follows. To show this. and let Λ and M denote the diagonal matrices whose i-th elements denote the total arrival and service rates while the process is in phase i. Since the off-diagonal elements of S and the elements of S 0 b are all nonnegative. . It is easy to show that we then have the following dynamical equations: 0 = P0 (Q − Λ) + P1 M̂ and 0 = Pi−1 Λ̂ + Pi (Q − Λ − M) + Pi+1 M̂.138) and b any vector of proba- bility masses such that be = 1. The diagonal elements of S 0 b cannot exceed the corresponding row elements of S 0 because b is a vector of probability masses. Alternatively. but then this would contaminate the definition of the independent phase process at each level. respec- tively. the phase process at level zero could be modified to incor- porate the phase changes resulting from the service process at all other levels. the service process must not result in a phase shift. Exercise 4. Since Q̃ is stochastic. Hence we have (S + S 0 b)e = 0. That is. and hence each element cannot exceed 1. it follows immediately that S 0 be = S 0 . Since be = 1. we could have an independently operating phase process. Thus. first note that the matrix S has these properties. and define µij in a parallel way. It remains to show that the elements of the diagonal of S + S 0 b are non- positive and the off diagonal elements are nonnegative. we would denote by λij the arrival rate when the process that starts in phase i and results in an increase in level by 1 and a transition to phase j simultaneously.

the rate from state (i. Since service completions at phase 1 always result in an additional phase 0 service. except for i = 0..0 .3.. State Diagram for Problem 4. j). the infinitesimal generator for the continuous-time Markov chain defining the occupancy process for this system. 1 is the phase of the service process. Solution. 0) to (i − 1. µ1 p µ2 µ1 p µ2 µ1 p µ2 µ1(1-p) µ1 (1-p) µ1 (1-p) 1. The state diagram is detailed in Figure 4. 1) to (i.0 3. 0) is µ2 . where i is the system occupancy and j is the phase.. The state diagram can be determined by first defining the states in the diagram which have the from (i. and determine the matrix Q.3. Increases in level occur due to arrivals only. Define Pi = [ Pi0 Pi1 ] for i > 0 and P0 . respectively.27 .1 . Similarly.27 Consider a single-server queueing system having Poisson arrivals. It is then a matter of determining transition rates.Advanced CTMC-Based Queueing Models 129 Exercise 4.1 2.. Each time a customer receives a type 1 service increment. a scalar. Suppose type 1 and type 2 service increment times are each drawn independently from exponential distributions with parameters µ1 and µ2 . 0) to (i. 1) is µ1 p.0 2. Otherwise. Define the state of the system to be the 0 when the system is empty and by the pair (i. λ 1. This is a direct result of Property 2 of the Poisson Process given on page 44. the rate from (i. the transition rate from (i.. the customer leaves the system with probability (1 − p) or else receives a type 2 service increment followed by an additional type 1 service increment. j) where i > 0 is the system occupancy and j = 0.. Since all phase 0 service completions result in lowering the occupancy by 1 with probability p.1 3. λ λ λ Figure 4. so for every state. Define the phase of the system to be 1 if a customer in service is receiving a type 2 service increment. 0 1. the system is in phase 0. Draw the state diagram. 0) is µ1 (1 − p). Suppose upon entering service.0 . each customer initially receives a type 1 service increment. the rate of transition to the next level is λ.

130 QUEUEING THEORY WITH APPLICATIONS ..  µ2 −µ2  µ1 (1 − p) 0 M = . dt or −µ µ 0   d [ P0 (t) P1 (t) P2 (t) ] = [ P0 (t) P1 (t) P2 (t) ]  0 −µ µ.     0 0 0 µ1 (1 − p) 0 . Therefore P0 (t) = e−µt .28 Suppose −µ     µ 0 0 S= S = . 0 0 It is different in the first level.. so that P0 (t) = ke−µt .. . But P0 (0) = 1. ... and Pt (0) = [ 1 0 ] . and then for P2 (t) = Pa (t).] Solution. . . we find −λ  λ 0 0 0 . d P1 (t) = µP0 (t) − µP1 (t). . −µ1 p µ1 p   P = . −λ −     0 µ 2 µ2 0 λ . . . We have d P (t) = P (t)Q̃. . . . dt 0 0 0 d Thus. .. then for P1 (t).  .  µ1 (1 − p) −µ1 − λ µ 1p λ 0 .. but it is exactly an M/MP/1 system because the service always starts in phase 0. .... This compares to the infinitessimal generator for a queueing system with phase dependent arrivals and service as follows: Λ = λI. : SOLUTION MANUAL By inspection of the state diagram... dt .. . . dt P0 (t) = µP0 (t). dt or d P1 (t) + µP1 (t) = µP0 (t). Exercise 4.. . There is never a need to do matrix exponentiation. and identify the form of fx̃ (t). .   0 0 0 µ2 −µ2 − λ . 0 −µ µ Find Fx̃ (t) = P {x̃ ≤ t} and fx̃ (t).. [Hint: First solve for P0 (t).  Q=  0 µ1 (1 − p) 0 −µ1 − λ µ1 p . so k = 1..

e P1 (t) = µt + k. µt so k = 0 and we have P1 (t) = µte−µt . dt Therefore. Since the probabilities sum to 1. d h µt i e P1 (t) = µeµt e−µt = µ. P1 (0) = 0.140).Advanced CTMC-Based Queueing Models 131 Multiplying both sides by eµt . We may check this by evaluating P2 (t) directly.139) as given. Z ∞ Fx̃∗ (s) = e−sx dFx̃ (x) Z0∞ h i = e−sx −bSeSx e dx 0 . and fx̃ (t) = µ(µt)e−µt . First. d P2 (t) = µP1 (t) dt = µ2 te−µt . it must be the case that P2 (t) = 1 − P0 (t) − P1 (t) = 1 − e−µt µte−µt . That is.29 Starting with (4. Solution. find the Laplace-Stieltjes transform of x̃. Exercise 4. Now. dt Equivalently. d eµt P1 (t) + µeµt P1 (t) = µeµt P0 (t). P2 (t) = P {in state 2 at time t} = P {x̃ ≤ t} . d d h i P2 (t) = 1 − e−µt µte−µt dt dt = 0 + µe−µt + µ2 te−µt − µe−µt = µ(µt)e−µt . Now. prove the validity of (4. or P1 (t) = µte−µt + ke−µt . Fx̃ (t) = 1 − e−µt − µte−µt .

so d . : SOLUTION MANUAL Z ∞ = −bS e−[sI−S]x dxe 0 Z ∞ = bS [sI − S]−1 (− [sI − S]) e−[sI−S]x dxe 0 = −bS [sI − S]−1 e. . d n Now.132 QUEUEING THEORY WITH APPLICATIONS . E[x̃n ] = (−1)n ds ∗ n Fx̃ (s)|s=0 . .

E[x̃] = (−1) Fx̃∗ (s).

.

.

ds s=0 dn h i .

.

= (−1) n −bS [sI − S]−1 e .

.

ds .

s=0 h i.

= (−1) bS [sI − S]−2 e .

.

s=0 = −bS −1 e. We now prove that proposition that dn ∗ F (s) = −n!(−1)n bS [sI − S]−(n+1) e. the truth set for the proposition. Therefore. continuing with the proof of the exercise. and the proof of the proposition is complete. dsn−1 x̃ so that dn ∗ F (s) = −(n − 1)!(−1)(n−1) bS(−n) [sI − S]−(n+1) e dsn x̃ = −n!(−1)n bS [sI − S]−(n+1) e. d ∗ F (s) = bS [sI − S]−2 e ds x̃ = 1!(−1)(−1)bS [sI − S]−2 e. So 1 ∈ T . h i . dsn x̃ For n=1. If (n − 1) ∈ T . then dn−1 ∗ F (s) = −(n − 1)!(−1)(n−1) bS [sI − S]−n e.

.

n n n −(n+1) E[x̃ ] = (−1) −n!(−1) bS [sI − S] e .

.

s=0 = −n!bS(−1)(n+1) e = n!(−1)n bS −n e. .

state m is reachable from all other states in the original chain. . with Pt (0)e = 1. m−1 of the modified chain are all tran- sient. Let T T0   Q̃ = . . state m is still reachable from all states. once in state m. and the remaining terms are chosen to conform. that is. . all states can be reached from all other states. . 1. then the matrix T must be nonsingular. Make use of the fact that if T is singular. (e) Argue that if Q is the infinitesimal generator for an irreducible Markov chain. .Advanced CTMC-Based Queueing Models 133 4. then T has a zero eigenvalue. Solution: (a) Since the Markov chain is irreducible. that is. . (a) Argue that if Q is the infinitesimal generator for an irreducible Markov chain. . . Argue that P {x̃ ≤ t} = Pa (t). P {x̃ ≤ t} = 1 − Pt (0) exp {T t}e. 1. Let P (t) = [ Pt (t) Pa (t) ] denote the state probability vector for the Markov chain for which Q̃ is the infinitesimal generator. . . with Pt (t) a row vector of dimension m and Pa (t) a scalar.1 Supplementary Problems 4-1 Let Q be an (m+1)-square matrix representing the infinitesimal generator for a continuous-time Markov chain with state space {0. . m}. the modified system will never leave state m because its departure rate to all other . then the matrix T̃ = T +T 0 Pt (0) is the infinitesimal generator for an irreducible Markov chain with state space {0. (b) Prove that if Q is the infinitesimal generator for an irreducible Markov chain. . [Hint: Use the fact that Pt (t)e is the probability that the state of the modified Markov chain is in the set {0. . . 1. · · · . therefore.] (c) Show that Pa (t) = 1 − Pt (0) exp {T t}e. then the states 0. But. . T 0 is an m × 1 column vector. 0 0 where T is an m-square matrix. be a matrix obtained by replacing any row of Q by a row of zeros and then exchanging rows so that the final row is a vector of zeros. and state m is an absorbing state. m − 1} in the modified chain. [Hint: Solve for Pt (t). Since the dynamics of the system have not been changed for states {0. then prove by contradiction.] (d) Let x̃ be the time required for the modified Markov chain to reach state m given an initial probability vector P (0) = [ Pt (0) 0 ]. 1. m − 1} at time t. m − 1}. .

then it necessarily has left the transient states by time t. P {x̃ ≤ t} = Pa (t) = 1 − Pt (0)eT t e . all states other than state m are transient states in the modified chain and state m is an absorbing state. (d) The argument is straight-forward. This means that all eigenvalues of T must have strictly negative real parts. Therefore.134 QUEUEING THEORY WITH APPLICATIONS . (c) From the matrix above. so that T −1 T 0 = −e. Pa (t) = 1 − Pt (t)e = 1 − Pt (0)eT t e Furthermore. we know that lim Pt (t) = 0 t→0 is independent of Pt (0). the matrix T cannot be singluar. d/dtPa (t) = Pt (t)T 0 . from the differential equation. Hence. if all eigenvalues have strictly negative real parts. Pa (t) = K − Pt (0)eeT t e. . . Therefore. and Pa (t) = 1 − Pt (0)eT t e. and lim Pa (t) = 1 = 1 − 0 t→0 implies that K = 1. : SOLUTION MANUAL states is 0. If the chain is in the absorbing state at time t. which is equal to the complement of the probability that the system is in some other state at time t. (b) Observe that d [ Pt (t) Pa (t) ] = Pt (t)T dt so that d Pt (t) = Pt (0)eT t dt Now. Thus. But. Pa (t) is the probability that the system is in an absorbing state at time t. d Pa (t) = Pt0 eT t T 0 dt Now. Thus. Pa (t) = Pt (0)eT t T −1 T 0 + K But T e + T 0 = 0.

24-26. Suppose type 1 and type 2 service increment times are each drawn independently from exponential distributions with parameters µ1 and µ2 . so [T + T 0 Pt (0)]e = T e + T 0 . (e) Comment on the structure of the matrix Q relative to that for the phase-dependent arrival and service rate queueing system and to the M/PH/1 system. Define Pi = [ Pi0 Pi1 · · · Pi. Define the state of the system to be 0 when the system is empty and by the pair (i. j) where i ≥ 0 is the system occupancy and j = 0. m − 1}. the customer leaves the system with probability (1 − p) or else receives a type 2 service increment followed by an additional type 1 service incre- ment. T + T 0 Pt (0) is the in- finitesimal generator for an irreducible Markov chain with state space {0. This is because Tii + Ti0 ≤ 0 so that Tii + Ti0 Pti (0) ≤ 0. Suppose upon entering service. respectively. Define the phase of the system to be j if there are j customers receiving a type 2 service increment. The matrix T 0 Pt (0) is then a nonnegative m × m matrix. j = 0. .m} ] for i > 0 and P0 . suppose there are m servers. (d) Determine the matrix Q. it follows that T + T0 Pt (0) is a matrix whose diagonal terms are nonpositive. the infinitesimal generator for the continuous- time Markov chain defining the occupancy process for this system. pp. Thus. . Pt (0)e = 1. (c) Write the matrix balance equations for the case of general values of m. a scalar. What modifications in the solution procedure would have to be made to solve this problem? [Hint: See Neuts [1981a].Advanced CTMC-Based Queueing Models 135 (e) The vector T 0 represents the rates at which the system leaves the transient states to enter the absorbing state. Since [T T 0 ]e = 0. It remains only to determine if T̃e = [ T + T 0 Pt (0) ] e = 0 Now. (b) Write the matrix balance equations for the special case of m = 3. 1. With the service process defined as in Problem 4-1. 1. m. All nondiagonal terms are nonnegative since all off-diagonal terms of T are nonnegative and T 0 is nonnegative. . which we al- ready know is equal to a zero vector. This is due to the fact that Pti (0) ≤ 1. i is the phase of the service process. Each time a customer receives a type 1 service increment. each customer initially receives a type 1 service increment.] . · · · . 4-2 Consider an m-server queueing system having Poisson arrivals. (a) Draw the state transition diagram for the special case of m = 3.min{i. · · · . .

where Ik is the k×k identity matrix: Λ0 = λ. we may compactly write the balance equations as follows: P0 Λ0 = P1 M̂1 P1 [ Λ1 + P1 + M1 ] = P0 Λ̂0 + P2 M̂2 P2 [ Λ2 + P2 + M2 ] = P1 Λ̂0 + P3 M̂3 . 0 0 3µ1 (1 − ρ) 0 0 0    0 2µ1 (1 − ρ) 0 0 M3 =  . we may think of a customer as staying with the same server thoughout its service. Λ̂0 = [ λ 0 ]  λ 0 0 Λ1 = λI2 . Λ̂1 = 0 λ 0 λ 0 0 0  Λ2 = λI3 . 0 0 −3µ2 0 With this notation. Λ̂2 =  0 λ 0 0  0 0 λ 0 Λ3 = λI  4 µ1 (1 − ρ) 0 µ1 (1 − ρ)    M1 = .  0 0 µ1 (1 − ρ)  0 0  0 0 −2µ1 0  −µ1 ρ   0 P1 = . See Figure 4. 0 0  0 2µ1 (1 − ρ) 0 0  M2 =  0 µ1 (1 − ρ) 0  . . . 0 0 0 2µ1 (1 − ρ) 0  M̂2 =  0 µ1 (1 − ρ)  .4 for the state diagram of the system. (b) We first introduce the following notation.  0 0 µ1 (1 − ρ) 0  0 0 0 0 3µ1 (1 − ρ) 0 0    0 2µ1 (1 − ρ) 0  M̂3 =  .136 QUEUEING THEORY WITH APPLICATIONS . −µ2 0 0 −2µ2 0 0 −3µ1 0 0   P3 =  −µ2 0 −2µ1 0  . P2 =  −µ2 0 −µ1 ρ  . M̂1 = . : SOLUTION MANUAL Solution: (a) For this exercise. the server simply does a different task.

2 µ 2 µ 2 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 p µ1 pµ1 p µ1 3.1 µ 2 λ (1-p)µ 1 λ (1-p)µ 1 p µ1 pµ1 2.2 5. .2 4.1 5.3 µ 2 µ 2 µ 2 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 p µ1 pµ1 p µ1 5.Advanced CTMC-Based Queueing Models 137 0 λ (1-p)µ 1 p µ1 1.0 4.4.1 4.0 1.0 2. State Diagram for Problem 2a P3 [ Λ3 + P3 + M3 ] = P2 Λ̂2 + P4 M3 .1 3.3 µ 2 µ 2 µ 2 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 Figure 4.1 2.0 3.2 3.0 5.3 µ 2 µ 2 µ 2 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 λ (1-p)µ 1 p µ1 pµ1 p µ1 4.

... m − 1. This matrix Q has the form   B00 B01 B02 B03 0 .. the starting phase is selected at random.. . Since there is more than one possible starting phase. Λ̂1 ... · · · .. Pn [ Λm + Pm + Mm ] = Pn−1 Λ̂m + Pn+1 Mm (d) Define ∆ = diag[−Λ0 .138 QUEUEING THEORY WITH APPLICATIONS . − [Λ1 + P1 + M1 ) .   . where n = 1. · · · . Λ3 .. · · · h i ∆l = subdiag M̂0 . a new customer starts service at exactly the phase of the previously completed customer as long as there are customers waiting. . and Pi .. · · · . M3 . m. P0 Λ0 = P1 M̂1 . by inspection of the balance equations in part (c). Λ̂2 . : SOLUTION MANUAL and for n ≥ 4. . .. = −h (Λ3 + P3 + M3 ) . . Λ3 . . . .. 2..   ..   20   B30 B31 B32 B33 A0 . m.. m as in part (b).. For n > m.. Q̂ = ∆ + ∆u + ∆l (e) This exercise will be harder to solve than the M/PH/1 system.. . − (Λ2 + P2 + M2 ) . That is. this problem will be more difficult to solve. 2. M3 . 1. ... · · ·] ∆u = superdiag Λ̂0 .     0 0 0 B43 A1 A0 . The equations will then follow the same pattern as in part (b). . M̂3 . . Pn [ Λ3 + P3 + M3 ] = Pn−1 Λ̂3 + Pn+1 M3 (c) To obtain the balance equations for general m. M̂1 . . M̂i . M̂2 .  10   B B21 B22 B23 0 .    0 0 0 0 0 0 A2 . i = 0. Λ̂i . − (Λ3 + i P3 + M3 ) . . 1. . i. i = 0. . i = 1. Pn [ Λn + Pn + Mn ] = Pn−1 Λ̂n−1 + Pn+1 Mn+1 . .. B B11 B12 B13 0 . In the M/PH/1 system. we simply define Λi . Λ3 . · · · Then. ...  0 0 0 0 A2 A1 A0 . Mi . this is due to the fact that there is memory from service to service.. · · · .      0 0 0 0 0 A2 A1 .e..

P2 . (f) We know that P4 = P3 R. we have B00 B01 B02 B03    B10 B11 B12 B13  0 = [ P0 P1 P2 P3 ]   B20  B21 B22 B23  B30 B31 B32 B33 + RB43 = [ P0 P1 P2 P3 ] B(R) Thus. P1 . P3 . X kx0 e + i=1 or kx0 e + kx1 [I − R]−1 e = 1.Advanced CTMC-Based Queueing Models 139 which is similar to the form of the matrix of the G/M/1 type. P4 . On the other hand. We may then write the zero eigenvalue of B(R) as x and [ P0 P1 P2 P3 ] = k [ x0 x1 ]. Thus. where k is a normalizing constant. P1 . That is. but we do not know P0 . we know P Q = 0. Then. ∞ kx1 Ri e = i. we can compute [ P0 P1 P2 ] = kx0 . P0 . we see that the vector [ P0 P1 P2 P3 ] is proportional to the left eigenvector of B(R) corresponding to its zero eigenvalue. so that 1 k= x0 e + kx1 [I − R]−1 e Once k is know. but with more complex boundary conditions. B00 B01 B02 B03    B10 B11 B12 B13    0 = [ P0 P1 P2 P3 P4 ]   B20 B21 B22 B23   B B31 B32 B33  30 0 0 0 B43 But P4 = P3 R. Now consider only those elements of P that are multiplied by the B submatrices. and x1 is proportional to P3 . P2 . P3 . so that B00 B01 B02 B03    B10 B11 B12 B13    0 = [ P0 P1 P2 P3 P3 R ]   B20 B21 B22 B23   B B31 B32 B33  30 0 0 0 B43 Then combining the columns involving P3 .

. . n≥1 .140 QUEUEING THEORY WITH APPLICATIONS . : SOLUTION MANUAL P3 = kx1 and Pn+3 = P3 Rn .

P∞ yP Solution. Recall that Fỹ (z) = y=0 z {ỹ = y}. Then ∞ dn [y(y − 1) · · · (y − n + 1)] z y−n P {ỹ = y} .1 With x̃. show that E[ỹ(ỹ − 1) · · · (ỹ − n + 1)] = λn E[x̃n ]. X Fỹ (z) = dz n ỹ=0 so that dn . t ≥ 0}.2. {ñ(t).Chapter 5 THE BASIC M/G/1 QUEUEING SYSTEM Exercise 5. and ỹ defined as in Theorem 5.

.

Fỹ (z).

= E [ỹ(ỹ − 1) · · · (ỹ − n + 1)] . dz n .

dn ∗ dn ∞ −λ[1−z] Z F (λ[1 − z]) = e dFx̃ (x) dz n x̃ n Zdz∞ 0 = (λx)n e−λ[1−z] dFx̃ (x). z=1 On the other hand. 0 which implies dn ∗ .

= λn E[x̃n ]. .

n Fx̃ (λ[1 − z]).

.

dz z=1 Since Fỹ (z) = Fx̃∗ (λ[1 − z]). . it follows that E [ỹ(ỹ − 1) · · · (ỹ − n + 1)] = λn E[x̃n ] Alternatively. Fỹ (z) = Fx̃∗ (s(z)) .

: SOLUTION MANUAL where s(z) = λ[1 − z] implies n dn dn ∗ d  Fỹ (z) = Fx̃ (s(z)) s(z) dz n ds n dz dn ∗ = F (−λ)n dsn nx̃ = (−λ) [(−1)n E[x̃n ]] = λn E[x̃n ]. both numerator and denominator go to zero.  X  i=1 But (x̃ − 1)+ = 0 when either x̃ = 1 or x̃ = 0. and then take the limit as z → 1 of the resultant expression. z z Exercise 5.3 Starting with (5. Solution. using the properties of Laplace transforms and probability generating func- tions. . Let α(z) denote the numerator and β(z) denote the denominator of the right-hand side of (5. Thus. The proof follows directly from the definition of Fx̃ (z). It is easy to verify using Theo- d ∗ rem 2. Solution.142 QUEUEING THEORY WITH APPLICATIONS .2 that limz→1 dz Fx̃ (λ[1 − z]) = λE[x̃] = ρ. ∞ z i P (x̃ − 1)+ = i X  F(x̃−1)+ (z) = i=0 ∞ = z 0 P (x̃ − 1)+ = 0 + z i P (x̃ − 1)+ = i . dz dz x̃ Now take the limits of both sides as z → 1. use the properties of Laplace trans- forms and probability generating functions to establish (5. Thus we can apply L’Hôpital’s rule.5) as z → 1. after some simple . Then the derivatives are d d   α(z) = −P {q̃ = 0} Fx̃∗ (λ[1 − z]) − (1 − z) Fx̃∗ (λ[1 − z]) dz dz d d ∗ β(z) = F (λ[1 − z]) − 1.5). so that ∞ 1X F(x̃−1)+ (z) = P {x̃ = 0} + P {x̃ = 1} + z (i+1) P {x̃ = i + 1} z i=1 = P {x̃ = 0} + P {x̃ = 1} 1 = + [Fx̃ (z) − P {x̃ = 0} − zP {x̃ = 1}] z 1 1   = 1− P {x̃ = 0} − Fx̃ (z).6).2 Prove Theorem 5. Observe that if we take the limit of the right-hand side of (5. . Exercise 5.3.5).

P {q̃ = 0} = 1 − ρ.. Fq̃n+1 (z) = F(q̃n −1) (z)Fṽn+1 (z) 1 1    = 1− P {q̃n = 0} + Fq̃n (z) Fṽn+1 (z).. It should be clear. Now. It should be easy to see that the basic relationship q̃n+1 = (q̃n − 1)+ + ṽn+1 will remain valid. i.e. z z .e.6) directly by using Little’s result. Then there is either one in the system (i. i. On the other hand. we have by Little’s result that E[ñ] = λE[s̃] = λE[x̃] = ρ. so that E[ñ] = 0 · P {ñ = 0} + 1 · P {ñ = 1} = P {ñ = 1} . we see that P {q̃ = 0} = 1 − ρ. and the batches occur according to a Poisson process at rate λ. In particular. Note that this is the probability that the server is busy. Now. that the proba- bility that no one is in the system is (1 − ρ). that the number of customers who arrive during the (n + 1)-st customer’s service will vary according to the batch size. Thus. ρ−1 i.e. Solution. and so we need to re-specify ṽn+1 . it has already been pointed out that the Poisson arrival’s view of the system is the same as that of a ran- dom observer. Suppose arrivals to the system occur in batches of size b̃. the number of customers left in the system by the (n + 1)-st departing customer will be the sum of the number of customers left by the n-th customer plus the number of customers who arrive during the (n + 1)-st customer’s service. however. P {q̃ = 0} − = 1. Be sure to define each of the variables carefully. in service) or zero.The Basic M/G/1 Queueing System 143 algebra.5) for this case.. equivalently. the probability that the server is busy is ρ. Define the system to be the server only.. P {ñ = n} = P {q̃ = n}.e. Exercise 5. Solution. Develop an expression equivalent to (5.5 Batch Arrivals. or.4 Establish (5. Exercise 5.

we find that Fṽ (z) = E[z ṽ ] Z ∞ ∞X  .144 QUEUEING THEORY WITH APPLICATIONS . . 1 − 1z Fq̃ (z)Fṽ (z) But. . : SOLUTION MANUAL Taking the limit of this equation as n → ∞. conditioning upon the batch size and the length of the service. 1 1    Fq̃ (z) = 1− P {q̃ = 0} + Fq̃ (z) Fṽ (z)  z  z 1 1 − z P {q̃ = 0} Fṽ (z) = .

 E z ṽ .

.

x̃ = x P {ñ = n|x̃ = x} dFx̃ (x) .ñ = n.

= 0 n=0 Z ∞X∞ h Pn i (λx)n b = E z i=1 i e−λx dFx̃ (x) 0 n=0 n! ∞ h i (λx)n ( ) Z ∞ b̃ e−λx dFx̃ (x) X = E z 0 n=0 n! Z ∞ λxFb̃ (z)−λx = e dFx̃ (x) 0 Fx̃∗   = λ 1 − Fb̃ (z) . .

in order to find d lim Fñ (z).The Basic M/G/1 Queueing System 145 Exercise 5. multiply both sides of (5. Note that if we take the limit of β(z) that both the numerator and the denominator of the . and 1 − Fx̃∗ (λ[1 − z]) β(z) = 1 − . dz z→1 first find the limits as z → 1 of α(z).8) to clear fractions and then differ- entiate and take limits. β(z). 1−z Then. and then substitute these limits into the formula for the derivative of a ratio. z→1 dz The limit as z → 1 of β(z) and its derivative are more complicated.8) is first rewritten as Fñ (z) = α(z)/β(z).] α(z) Solution. show that λρ E[x̃2 ] E[ñ] = ρ + 1 − ρ 2E[x̃] ρ Cx̃ 2 + 1    = ρ 1 + . and 1 − Fx̃∗ (λ[1 − z]) β(z) = 1 − . we first rewrite Fñ (z) as β(z) . dα(z)/dz. Alternatively. where α(z) = (1 − ρ)Fx̃∗ (λ[1 − z]). and dβ(z)/dz. where α(z) = (1 − ρ)Fx̃∗ (λ[1 − z]).     (5.6 Using the properties of the probability generating function. α(z) and dz α(z): lim α(z) = 1 − ρ z→1 d lim α(z) = (1 − ρ)ρ z→1 dz d lim Fñ (z) = E[ñ].1) 1−ρ 2  [Hint: The algebra will be greatly simplified if (5. 1−z d d It is straight forward to find the limit as z → 1 of dz Fñ . Following the hint given in the book.

Similarly. we see that d 2 (1 − z)λ2 ds ∗ " # d 2 Fx̃ (s(z)) lim β(z) = lim z→1 dz z→1 2(z − 1) d 3 2 −(1 − z)λ3 ds ∗ 2 d ∗ " # 3 Fx̃ (s(z)) − λ ds2 Fx̃ (s(z)) = lim z→1 2 −λ 2 = E[x̃2 ]. we compute d d 1 − Fx̃∗ (λ[1 − z])   β(z) = − dz dz  1−z  d 1 − Fx̃∗ (s(z)) = − dz 1−z d ∗ −(1 − z)λ ds Fx̃ (s(z)) − [1 − Fx̃∗ (s(z))] = . d d d β(z) dz α(z) − α(z) dz β(z) Fñ (z) = 2 dz β (z).146 QUEUEING THEORY WITH APPLICATIONS . 2 (1 − ρ)2 ρ + (1 − ρ) λ2 E[x̃2 ] E[ñ] = (1 − ρ)2 λ E[x̃2 ] 2 = ρ+ 1−ρ 2 λ2 (E[x̃2 ] − E 2 [x̃] + E 2 [x̃]) = ρ+ 1−ρ 2 . Thus. 2 To find E[ñ]. we substitute these limits into the formula for the derivative of Fñ (z). : SOLUTION MANUAL fraction go to zero. (1 − z)2 Upon taking the limit of this ratio and applying L’Hôpital’s rule twice. . 1 − Fx̃∗ (λ[1 − z])   lim β(z) = 1 − lim z→1 z→1 " 1−z d ∗ # λ dz Fx̃ (λ[1 − z]) = 1 − lim z→1 −1 = 1 − λE[x̃] = 1 − ρ. We may apply L’Hôpital’s rule in this case. .

That is. Starting with this equation. δn and q̃n are not independent. ρ(1 − 2ρ) + E[ṽ 2 ] E[ñ] = . we begin by squaring the equation given in the problem state- ment. E[q̃ 2 ] = E[q̃ 2 ] + E[ṽ 2 ] + E[δ∞ 2 ] + 1 + 2E[q̃ṽ] + 2E[q̃δ] + 2E[ṽδ∞ ] −2E[q̃] − 2E[ṽ] − 2E[δ∞ ].] Solution.1. In addition. Furthermore. To find E[ñ].The Basic M/G/1 Queueing System 147 ! λ2 E[x̃2 ] − E 2 [x̃] + E 2 [x̃] = ρ+ E 2 [x̃] 1−ρ 2E 2 [x̃] ρ2 V ar(x̃) + 1   = ρ+ 1−ρ 2E 2 [x̃]! ρ 2 Cx̃ 2 + 1 = ρ+ 1−ρ 2 ρ Cx̃ 2 + 1    = ρ 1 + . find E[δ∞ ] and E[ñ]. Clearly their limits are also independent.     1−ρ 2 Exercise 5. Using these observations and solving for E[ñ]. But E[q̃] = E[ñ]. so that E[ṽ 2 ] = ρ2 E[x̃2 ] + ρ. and E[δ∞ 2 ] = E[δ ] since δ can take on values of 0 or 1 ∞ n only. start off by squaring both sides of the equation for q̃n+1 . as are δn and ṽn+1 . their product will always be zero. First take expectations of the equation for q̃n+1 .7 Let δn = 1 if q̃n = 0 and let δn = 0 if q̃n > 0 so that q̃n+1 = q̃n − 1 + δn + ṽn+1 . However. Substituting this into the expression for E[ñ]. we know that E[ṽ] = λE[x̃] = ρ. hence their expected value will also be zero. [Hint: To find E[ñ]. E[ṽ 2 ] − E[ṽ] = λ2 E[x̃2 ]. ρ E[ñ] = . Interpret E[δ∞ ]. 1−ρ . E[q̃] = E[q̃] − 1 + E[δ∞ ] + E[ṽ]. 2(1 − ρ) But by Exercise 5. q̃n and ṽn+1 are independent. By Little’s result. Substitute this value into the above equation to find that E[δ∞ ] = 1 − ρ. 2 E[ñ] (2 − 2E[ṽ]) = E[δ∞ ] + E[ṽ 2 ] + 1 + 2E[ṽ]E[δ∞ ] − 2E[ṽ] − 2E[δ∞ ].

8 Using (5. Begin with (5.14) as a starting point. and ds α(s) as s → 0. show that ρ E[x̃2 ] E[s̃] = + E[x̃] 1 − ρ 2E[x̃] ρ Cx̃ 2 + 1   =  + 1  E[x̃]. where d ∗ Fx̃ (s) + [1 − Fx̃∗ (s)] E[x̃] " # d ρ sE[x̃] ds β(s) = . α(s). . : SOLUTION MANUAL Exercise 5. lim α(s) = 1 − ρ s→0 d lim α(s) = −(1 − ρ)E[x̃] s→0 ds d lim Fs̃ (s) = −E[s̃]. [Hint: Use (5. where 1 − Fx̃∗ (s)   β(s) = 1 − ρ .148 QUEUEING THEORY WITH APPLICATIONS . and use the hint for Exercise 5. ds E[x̃] s2 Then. sE[x̃] d ∗ d Simple calculations give us the limits of ds Fs̃ (s). d ∗ d 2 d ∗ ∗ " # d ρ ds Fx̃ (s) + s ds 2 Fx̃ (s) − ds Fx̃ (s) lim β(s) = lim s→0 ds E[x̃] s→0 2s .15) and rewrite Fs̃∗ (s) as α(s)/β(s). We may apply L’Hôpital’s rule in this case to find d ∗ " # − ds Fx̃ (s) lim β(s) = 1 − ρ lim s→0 s→0 E[x̃] E[x̃] = 1−ρ E[x̃] = 1 − ρ. .2) 1−ρ 2 Combine this result with that of Exercise 5.      (5.14) and the properties of the Laplace transform.6 to verify the validity of Little’s result applied to the M/G/1 queueing system.6] Solution. taking the limit of this expression as s → 0 and twice applying L’Hôpital’s rule. s→0 ds Taking the limit of the β(s)as s → 0 results in both a zero numerator and denominator in the second term. ′ We now find the limit of β (s).15) rather than (5.

    1−ρ 2  To complete the exercise. 2E[x̃] Substituting these expressions into the formula for the derivative of Fs̃∗ (s).     1−ρ 2  i.e.. . combine this result with that of Exercise 5. ρ Cx̃ 2 + 1   E[s̃] = E[x̃]  + 1 .The Basic M/G/1 Queueing System 149 d2 ∗ d 3 ∗ " # ρ ds2 Fx̃ (s) + s ds 3 Fx̃ (s) = lim E[x̃] s→0 2 E[x̃2 ] = ρ . This verifies Little’s result applied to the M/G/1 queueing system.   ρ Cx̃ 2 +1  E[ñ] ρ 1 + 1−ρ 2 =  2  E[s̃] ρ Cx̃ +1 E[x̃]  1−ρ + 1 2 ρ = E[x̃] = λ.6. we find that −(1 − ρ)2 E[x̃] − (1 − ρ) λ2 E[x̃2 ] −E[s̃] = (1 − ρ)2 ρ E[x̃2 ] = −E[x̃] − 1 − ρ 2E[x̃] ρ (E[x̃2 ] − E 2 [x̃] + E 2 [x̃]) = −E[x̃] − 1−ρ 2E[x̃] ! ρ E[x̃2 ] − E 2 [x̃] + E 2 [x̃] = −E[x̃] − E[x̃] 1−ρ 2E 2 [x̃] ! ρ V ar(x̃) + E 2 [x̃] = −E[x̃] − E[x̃] 1−ρ 2E 2 [x̃] ! ρ Cx̃ 2 + 1 = −E[x̃] − E[x̃] 1−ρ 2 2   ρ Cx̃ + 1 = −E[x̃]  + 1 .

Solution.17) and the properties of the Laplace transform. : SOLUTION MANUAL Exercise 5. applying L’Hôpital’s rule.17): (1 − ρ) Fw̃∗ (s) = h 1−Fx̃∗ (s) i. where 1 − Fx̃∗ (s)   β(s) = 1 − ρ . this would result in both a zero numerator and denominator. We first calculate β(s) at s = 0. 2E[x̃] . we need to find d ∗ d Fw̃ (s) = −(1 − ρ)β −2 (s) β(s).3) 1−ρ 2 Combine this result with the result of Exercise 5. d ∗ " # − ds Fx̃ (s) lim β(s) = 1 − ρ lim s→0 s→0 E[x̃] = 1 − ρ. Recall equation (5. sE[x̃] To compute E[w̃]. show that ρ Cx̃ 2 + 1 E[w̃] = E[x̃]. ds ds and its limit as s → 0.8. however. (5.150 QUEUEING THEORY WITH APPLICATIONS . d We now calculate the limit of ds β(s) by applying L’Hôpital’s rule twice. Clearly. we rewrite Fw̃∗ (s) as (1 − ρ)β −1 (s). . . Therefore. 1−ρ sE[x̃] As in Exercise 5. This results in d ∗ Fx̃ (s) − [1 − Fx̃∗ (s)] " # d ρ −s ds lim β(s) = lim − s→0 ds s→0 E[x̃] s2 d ∗ d 2 d ∗ ∗ " # ρ ds Fx̃ (s) + s ds 2 Fx̃ (s) − ds Fx̃ (s) = lim s→0 E[x̃] 2s d2 ∗ d 3 ∗ " # ρ ds2 Fx̃ (s) + s ds 3 Fx̃ (s) = lim s→0 E[x̃] 2 E[x̃2 ] = ρ .9 Using (5.6 to verify the validity of Little’s result when applied to the waiting line for the M/G/1 queueing system.

The Basic M/G/1 Queueing System 151 Noting that E[w̃] is −d/dsFw̃∗ (s) at s = 0.21) Fỹ∗ (s) = Fx̃∗ [s + λ − λFỹ∗ (s)]. Recall equation (5.10 Using properties of the Laplace transform. the expected number of customers in queue. Exercise 5. show that E[ñ] − λE[x̃] = E[w̃].6 to complete the exercise. (5. " # E[ñ] − λE[x̃] ρ Cx̃ 2 + 1 = E[x̃] 1 + − E[x̃] λ 1−ρ 2 " # ρ Cx̃ 2 + 1 = E[x̃] 1 + −1 1−ρ 2 ρ Cx̃ 2 + 1 = E[x̃] 1−ρ 2 = E[w̃].4) 1−ρ Solution. ds dg ds . show that E[x̃] E[ỹ] = . we may now specify E[w̃] as d ∗ E[w̃] = − lim F (s) s→0 ds w̃ E[x̃2 ] = (1 − ρ)(1 − ρ)−2 ρ 2E[x̃] ρ E[x̃2 ]   = 1 − ρ" 2E[x̃] # ρ E 2 [x̃] − E 2 [x̃] + E 2 [x̃] = 1−ρ 2E[x̃] ρ Cx̃2 + 1 = E[x̃]. λ where E[ñ] − λE[x̃] = E[ñq ]. 1−ρ 2 Combine this result with that of Exercise 5. so that d d d f (s) = F (g(s)) · g(s). We see that Fỹ∗ (s) has the form f (s) = F (g(s)). That is.

: SOLUTION MANUAL This results in the expression d ∗ d ∗ d   F (s) = Fx̃ (s) s + λ − λFỹ∗ (s) ds ỹ ds ds d ∗ d   = Fx̃ (s) 1 − λ Fỹ∗ (s) . . .152 QUEUEING THEORY WITH APPLICATIONS . ds ds .

d ∗ .

and recall that ds Fỹ (s). Evaluate both sides at s = 0.

.

= −E[ỹ]. and .

s=0 d ∗ .

F ds x̃ (s).

.

That is. condition this expected value on the number of customers who arrive during the first customer’s service. Z ∞ E[ỹ] E[ỹ|x̃ = x]dFx̃ (x).1) 0 Next. ṽ = v] = x + E[ỹi ] i=0 where ỹ0 = 0 with probability 1. Solving for E[ỹ]. [Hint: Condition on the length of the first customer’s service and the number of customers that arrive during that pe- riod of time..2) v=0 v! Now. 1−ρ Exercise 5. ṽ = v] e . i. Thus. ṽ = v] = x + E[ỹ] i=1 . ∞ X (λx)v −λx E[ỹ|x̃ = x] = E[ỹ|x̃ = x. it is easy to see that the length of the busy period will be the length of the first customer’s service plus the length of the sub busy periods generated by those v customers.11. v X E[ỹ|x̃ = x. (5. s=0 −E[ỹ] = −E[x̃] (1 + λE[ỹ]) . condition on the length of the first customer’s service.11. Thus. To determine E[ỹ].11 For the ordinary M/G/1 queueing system determine E[ỹ] without first solving for Fỹ (s). (5. v X E[ỹ|x̃ = x. given that the length of the first customer’s service is x and the number of customers who arrive during this time is v.e. this equation becomes E[x̃] E[ỹ] = . = −E[x̃].] Solution.

By Little’s result applied to E[ṽe ]. Let ṽe denote the number of customers who arrive during the first customer’s service. The service time for the first customer in each busy period is drawn from the service-time distribution Fx̃e (x). E[x̃] E[ỹ] = .The Basic M/G/1 Queueing System 153 = x + vE[ỹ]. 1−ρ Exercise 5. If we substitute this expression into (5.12 M/G/1 with Exceptional First Service. Z ∞ E[ỹ] = (x + ρE[ỹ]) dFx̃ (x) Z0∞ = xdFx̃ (x) + ρE[ỹ] 0 = E[x̃] + ρE[ỹ]. using the result from Exercise 5. Then. Solution. E[ṽe ] = λE[x̃e ]. substitute this result into (5. Then it is easy to see that E[ỹe ] = E[x̃e ] + E[ṽe ]E[ỹ]. 1−ρ where ρ = λE[x̃].11.2). since each of the sub busy periods ỹi are drawn from the same distribution. Thus. Let ỹe denote the length of the busy period for this system.11.1) and solve for E[ỹ]. where ỹ denotes the length of the busy period for each of the subsequent ṽe customers. To complete the derivation. A queueing sys- tem has Poisson arrivals with rate λ.. Show that E[x̃e ] E[ỹe ] = .11.e. ρ E[ỹe ] = E[x̃e ] + E[x̃e ] 1−ρ . i. and the remaining service times in each busy period are drawn from the gen- eral distribution Fx̃ (x). ∞ X (λx)v −λx E[ỹ|x̃ = x] = (x + vE[ỹ]) e v=0 v! = x + E[ṽ]E[ỹ] = x + ρE[ỹ].

where ỹ0 = 0 with probability 1. ṽ = v] = E e−s(x+ i=0 ỹi ) . Now. To complete the proof. Solution.1) 0 where ∞ −sỹe E[e−sỹe |x̃e = x. substitute this into (5. x and v are constants at this stage. ∞ v −sỹe −sx −λx X E[e−sỹ ]λx E[e |x̃e = x] = e e 0 v! −sx −λx (−λxE[e−sỹ ]) = e e e . X E[e |x̃e = x] = (5. Z ∞ −sỹe E[e ] = E[e−sỹe |x̃e = x]dFx̃ (x). Furthermore. As in the proof of Exercise 5.2).154 QUEUEING THEORY WITH APPLICATIONS . Hence.13. . h P∞ i E[e−sỹe |x̃e = x. so that v −sỹe −sx E[e−sỹi ] Y E[e |x̃e = x. That is. Substituting this into (5.2) v=0 Here.13. 1−ρ Exercise 5.11. : SOLUTION MANUAL E[x̃e ] = . Z ∞ e−x(s+λ−λE[e ) dF (x) −sỹe −sỹ ] E[e ] = x̃ 0 .1). (5.13 For the M/G/1 queueing system with exceptional first ser- vice as defined in the previous exercise. we shall first condition this expected value on the length of the first customer’s service. Recall that we may write Fỹ∗ (s) as E[e−sỹe ]. and then combine the ex- ponents of e. and then condition it again on the number of customers who arrive during the first customer’s service.13. .13. ṽ = v]P {ṽ = v} dFx̃ (x). and the ỹi are independent identically distributed random variables. ṽ = v] = e i=1 = e−sx E v [e−sỹ ]. x̃e represents the length of the first customer’s service and ṽ is the num- ber of arrivals during that time. show that Fỹ∗e (s) = Fx̃∗e (s + λ − λFỹ∗ (s)). the length of the busy period will be the length of the first customer’s service plus the length of the sub busy periods generated by those customers who arrive during the first customer’s service.

at the arrival time of an arbitrary customer.] Solution. service discipline and ap- ply Little’s result. if any. We know from Little’s result that E[w̃LCF S ] is the same as E[w̃F CF S ] because E[ñF CF S ] = E[ñLCF S ]. we know that ρ E[x̃2 ] E[w̃] = . First. Exercise 5. What random variable must x̃e represent in this form? [Hint: Consider the operation of the M/G/1 queueing system under a nonpreemptive.The Basic M/G/1 Queueing System 155   = Fỹ∗e s + λ − λE[e−sỹ ]   = Fỹ∗e s + λ − λFỹ∗ (s) . But E[w̃LCF S |no customers present] = 0 and E[w̃LCF S |at least 1 customer present] = E[length of busy period started by customer in service] = E[x̃e ]/(1− ρ). that is. taking into account that an arriving customer may find the system empty.14 Comparison of the formulas for the expected waiting time for the M/G/1 system and the expected length of a busy period for the M/G/1 system with the formula for exceptional first service reveals that they both have the same form. 2E[x̃] Explain why these formulas have this relationship. Then K+1 = m. so that ρ E[w̃] = E[x̃e ]. suppose that ℓ = (K + 1)m + n.28) is K + 1 if ℓ = (K + 1)m + n and zero otherwise. But the probability of at least one customer present is ρ. 1 − ρ 2E[x̃] Therefore E[x̃2 ] E[x̃e ] = 2E[x̃] represents the expected remaining service time of the customer in service. Exercise 5. the expected waiting time in an ordi- nary M/G/1 system is the same as the length of the busy period of an M/G/1 system in which the expected length of the first service is given by E[x̃2 ] E[x̃e ] = ρ . Consider the waiting time under LCFS.15 Show that the final summation on the right-hand side of (5. LCFS. 1−ρ On the other hand. ℓ−n Solution. and the final summation is K K −j2πkm X X e = 1=K+1 k=0 k=0 .

that is. z − Fx̃∗ (λ[1 − z]) x̃ Now. Now.  2πr K+1 K X −j 2πkr 1 − e−j K+1 e K+1 = 2πr k=0 1 − e−j K+1 1 − e−j2πr = 2πr = 0. That is. the denominator of the right-hand side goes to zero by definition of z0 . multiply both sides of (5. 0 < r < (K + 1). and so we have that b1 = lim (z − z0 )Fñ (z) z→z0 " # (1 − ρ)(z − 1)Fx̃∗ (λ[1 − z]) = d ∗ . : SOLUTION MANUAL Now suppose that ℓ is not of the above form. 1 − dz Fx̃ (λ[1 − z]) z=z0 .156 QUEUEING THEORY WITH APPLICATIONS . . . note that 0 < r < K + 1 implies that this summation may be summed as a geometric series. ℓ = (K + 1)m + n + r. K +1 K +1 K +1 and the final summation becomes K 2πk[m(K+1)+r] K 2πkr e−j e−j K+1 . 1 − e−j K+1 Exercise 5.31). Hence we may apply L’Hôpital’s rule to the right-hand side: h i d (z − z0 ) dz (1 − ρ)(z − 1)Fx̃∗ (λ[1 − z]) + (1 − ρ)(z − 1)Fx̃∗ (λ[1 − z]) d ∗ .16 Argue the validity of the expression for b1 in (5. 1− dz Fx̃ (λ[1 − z]) If we evaluate this expression at z = z0 . the first term in the numerator goes to zero. m.30) by (z − z0 ) to obtain (1 − ρ)(z − 1) ∗ (z − z0 )Fñ (z) = (z − z0 ) F (λ[1 − z]). Solution. Then 2πk(ℓ − n) 2πk [m(K + 1) + r] 2πkr = = 2πkm + . X X K+1 = k=0 k=0 since ej2πkm = 1 for all integers k. First. taking the limit as z → z0 .

Since this is not possible we conclude that the denominator has no zero at z = 0. then z = Fx̃∗ (z) = 0 implies that P {ña } = 0. The graph of f1 (z) is a line which starts at 0 and whose slope is 1. (5. pK+1+n ≈ pK+1 r0n ≈ (pK r0 )r0n = pK r0n+1 . we know that Fx̃∗ (λ[1 − z] is the probability generating function for the number of arrivals during a service time. and so on.2.17 Show that the denominator of the right-hand side of (5. Fx̃∗ (λ[1 − z]) can be expressed as a power series in which the coefficients are probabilities and therefore nonnega- tive.29).18 Starting with (5. Hence it can have no poles. The function and all of its derivatives are therefore nonnegative for nonnegative z.] Solution.30) for the probability generating of the occupancy distribution has only one zero for z > 1. X n=0 where each of the P {ña = n} is. dz f2 (z) = ρ < 1. nonnegative. Combining these results. f2 (z) and f1 (z) will intersect only once for z > 1. This means that P {ña = 0} > 0.2. if i = K and n = 1. and its denominator no zeros. That is.33) through (5. At z = 1. This will happen at the point z = z0 . However. f2 (z) will d intersect f1 (z). Consider the graphs of the functions f1 (z) = z and f2 (z) = Fx̃∗ (z). then pK+1+n ≈ pK+1 r0n .18. and whose sum is equal to one. establish the validity of (5. Fx̃∗ (λ[1 − z]) can be expressed as ∞ Fx̃∗ (λ[1 − z]) = P {ña = n} z n . If Fñ (z) does have a zero at z = 0. This means that eventually. d and dz f1 (z) = 1. [Hint: From Theorem 5. noting that the expression Fñ (z) can have poles neither inside nor on the unit circle (Why?). in that region.The Basic M/G/1 Queueing System 157 Exercise 5. since dz f2 (z) d is increasing. then pK+1 ≈ pK r0 . Exercise 5. But the slope of f2 (z) is an increasing function. Solution.36). the probability that the queue is ever empty is zero. Therefore. so that d as z increases. while dz f1 (z) is constant. Therefore.1) .5. Now Fñ (z) is also a probability generating function and so is bounded within the unit circle for z > 0. we know that Fx̃∗ (λ[1 − z]) is the probability generating function for the number of arrivals during a service time. From Theorem 5.29) and (5. In addition. Now compare the functions f1 (z) = z and f2 (z) = Fx̃∗ (λ[1 − z]). with f1 (z) = f2 (z) at z = 1. If i = K + 1 in (5. dz f2 (z) also increases. of course. and so stability is never achieved. the graph of f2 (z) starts above the origin and slowly increases towards d 1 as z → 1.32). On the other hand.

note that ∞ ∞ pm(K+1)+n ≈ r0n X X pm(K+1) . : SOLUTION MANUAL To show the validity of (5.K − p0 = pm(K+1) .K = p0 + pm(K+1) .29) with n = K.158 QUEUEING THEORY WITH APPLICATIONS .K To show the validity of (5.K . .2) and solving for r0 . .35).K = p .29) to get ∞ X c0. m=1 which implies ∞ X c0. m=1 m=1 Furthermore. ∞ X ∞ X pm(K+1) = pK+1 + pm(K+1) − pK+1 m=1 m=1 X∞ = pK+1 + p(m+1)(K+1) m=1 X∞ = pK+1 + pm(K+1)+K+1 m=1 ∞ X ≈ pK r0 + r0 pm(K+1)+K m=1 ∞ " # X = r0 pK + pm(K+1)+K m=1 = r0 cK.34).2) m=1 Now. m=1 .K − p0 r0 ≈ .K − p0 = pm(K+1) . m=0 m(K+1) or ∞ X c0. X∞ c0. (5. let n = 0 in (5. Substituting this result into (5. by the definition of cn.18.18. (5.K . c0.34) cK. where the last equality follows from (5.

∞ .29).35) Finally.K − p0 ) r0n .K − (c0. by the definition of probability generating function.The Basic M/G/1 Queueing System 159 Substituting this into (5.K = pn + pm(K+1)+n m=1 ∞ pm(K+1) r0n X = ≈ pn + m=1 = pn + (c0. pn ≈ cn. (5.K − p0 )r0n . solving for pn . we find ∞ X cn. Thus.

n X .

Fñ (0) = z pn .

.

19 Approximate the distribution of the service time for the previous example by an exponential distribution with an appropriate mean. with a = 1. b = 5000.i ρ i=0 i=0 .2 of the text. Exercise 5. Plot the survivor function for the corresponding M/M/1 system at 95% uti- lization.47). with an exponential service distribution and ρ = .95. the exponential distribution is a good approximation only in the special case where the truncated distribution closely resembles the exponential but not otherwise. but that the survivor function is significantly different for the rest of the curves. Beginning with (5.i so that ∞ ∞ " # 1 Fx̃∗r (s) = i zi X X 1− z Px̃. Fx̃∗ (s) = i=0 z iP .39).19. demonstrate the validity of (5.43).20 Starting with (5. Solution. 1 − Fx̃∗ (s) Fx̃∗r (s) = sE[x̃] 1 − Fx̃∗ (s) = λ[1 − z] µ1  ∞ 1 − Fx̃∗ (s) X  = zi ρ i=0 P∞ By (5. Thus.2. Solution.39). x̃. Observe that this function closely approximates that of the truncated geometric distribution in Figure 5.1 illustrates the survivor function for the M/M/1 system. (5. Compare the result to those shown in Figure 5.36) n=0 z=0 Exercise 5. Figure 5. = p0 .

i X i=0 ∞ i " # 1 X i X = z 1− Px̃. : SOLUTION MANUAL 1 P{occupancy exceeds n} .001 0 25 50 75 100 occupancy.n .19. . n Figure 5.160 QUEUEING THEORY WITH APPLICATIONS .43) to Fx̃∗r (s).n . .1 Survivor function for Exercise 5. ρ n=0 . by applying (5.i = 1− Px̃. ρ i=0 n=0 But.n .1 . ρ i=0 n=0 Matching coefficients of zi .01 . ∞ i " # 1X zi 1 − X = Px̃. we obtain i " # 1 X Px̃r . we see that ∞ Fx̃∗r (s) = z i Px̃r .19.

By hypothesis. where ρ = λ/µ.The Basic M/G/1 Queueing System 161 Exercise 5. Let T denote the truth set for this proposition. Starting with (5. that i λ µ   Px̃.50).94). Therefore.n = i=0 1 + ρ n=0 1 + ρ i+1     ρ 1 1 − 1+ρ = . Px̃. we have by Proposition 4. so that Pi−n = (1 − ρ)ρi−n n > 0.0 Now assume that i ∈ T . (1 − ρ)Px̃.  ρ 1+ρ 1 − 1+ρ  " i+1 # 1 ρ  = 1− . We wish to show this implies (i + 1) ∈ T .i for the special case in which x̃ has the ex- ponential distribution with mean 1/µ. show that the ergodic occupancy distribution for the M/M/1 system is given by Pi = (1 − ρ)ρi . Then 0 ∈ T since from (5. page 35.21 Evaluate Px̃.n Pi−n = n=1 m=0 n=1 1+ρ i  n+1 i+1 X 1 = (1 − ρ)ρ n=1 1+ρ .i = λ+µ λ+µ i ρ 1  = . Solution. rate λ. 1+ρ 1+ρ We now show that Pi = (1 − ρ)ρi where ρ = λ/µ. n < i. 1+ρ 1+ρ Therefore i i n 1 X ρ X  Px̃. i n i  ! n+1 ρ (1 − ρ)ρi−n X X X 1− Px̃. Pn = (1 − ρ)ρn . Since the interarrival times of the Poisson process are exponential.0 P0 = = 1 − ρ.

with the deterministic system summing to 1 faster than that of the exponential.50) to calculate the occupancy distribution. 1+ρ 1+ρ Thus. (Accuracy was to within 10−8 . those of the M/M/1 system for λ = 0.23 Evaluate Px̃. Use (5. e−λ λi Px̃. Solution.9 is shown in Figure 5. The graphs of the complementary probabilities vs.i = . Com- pare the complementary occupancy distribution (P {N > i}) for this system to that of the M/M/1 system with µ = 1. Use (5. . ( ). Exercise 5. .50) and the hypothesis.i = + . the two clearly di- verged. then it should be clear that the distribu- tion describing the number of Poisson arrivals is simply a Poisson distribution with t = 1 (Definition 1.22. respectively. and the proof is complete.) However. For λ = 0. i! To calculate the complementary occupancy probabilities.i is a weighted Poisson distribution.22 Evaluate Px̃. as n increased.1. As in Exercise 5.1.50) using different values of λ.i for the special case in which P {x̃ = 21 } = P {x̃ = 23 } = 12 .e.i for the special case in which P {x̃ = 1} = 1. a computer program was written using the recursive formula (5. with i i   e− 12 λ   e− 23 λ   1 3 1 2λ 1 2λ Px̃. Exercise 5. i.. with the probabilities of the deter- ministic and exponenitial summing to 1 after n = 6 and n = 9. if x̃ is deterministic. If x̃ = 1 with probability 1. part iii).162 QUEUEING THEORY WITH APPLICATIONS . the distributions were very similar. using (5. ρi (1 − ρ)ρi 1 1  Pi = (1 − ρ) i+1 + 1− (1 + ρ) 1+ρ (1 + ρ)i 1+ρ = (1 − ρ)ρi . : SOLUTION MANUAL  2  i+1  1 1 1+ρ − 1+ρ = (1 − ρ)ρi+1    1 1−  1+ρ " i+1 # (1 − ρ)ρi 1  = 1− . Compare the comple- mentary occupancy distribution (P {N > i}) for this system to that of the M/M/1 system with µ = 1. then Px̃.5 and λ = 0.50) to calculate the occupancy distribution.22. 2 i! 2 i! . Solution.5.

(5. Solution.5 0 0 Exponential 0 D eterm inistic 0 0 2 4 6 8 10 12 14 16 n Figure 5. 1.24 Beginning with (5. Show that (5.59 is as follows:   Fq̃ (z) [z − Fã (z)] = π0 z Fb̃ (z) − Fã (z) .001 . The graphs compar- ing the resultant complementary probabilites to those of the M/M/1 system for λ = 0.0001 0 λ=. the graphs of both distribu- tions were very similar for λ = 0.1 λ=. suppose Fx̃e (x) = Fx̃ (x). The computer program used in Exercise 22 was modified and the new occu- pancy probabilites calculated.59). As in Exercise 22.1.22. Equation 5. As λ increased.9 is shown in Figure 5.The Basic M/G/1 Queueing System 163 1 .9 .5 and λ = 0.59) . with the deterministic complementary distribution approaching zero before that of the exponential system.22.1 Survivor function for Exercise 5. the differences between the two systems became apparent.59) reduces to the standard Pollaczek-Khintchine transform equation for the queue length distribution in an ordinary M/G/1 system.01 P{occupancy exceeds n} . however. with the probabilities of both summing to unity within n = 9.23. Exercise 5.

5 0 0 Exponential 0 D eterm inistic 0 0 2 4 6 8 10 12 14 16 n Figure 5. Putting Fã (z) = Fx̃∗ (λ[1 − z]) leads to Fq̃ (z) [z − Fx̃∗ (λ[1 − z])] = π0 (z − 1)Fx̃∗ (λ[1 − z]). Solving the previous equation.164 QUEUEING THEORY WITH APPLICATIONS . Since Fx̃e (x) = Fx̃ (x). we find (1 − z)π0 Fx̃∗ (λ[1 − z]) Fq̃ (z) = . we have Fq̃ (z) [z − Fã (z)] = π0 (z − 1)Fã (z). : SOLUTION MANUAL 1 . .23.01 P{occupancy exceeds n} . .23.1 Survivor function for Exercise 5.1 λ=.0001 0 λ=.001 . we have Fb̃ (z) = Fã (z).59). Upon substitution of this result in (5. Fx̃∗ (λ[1 − z]) − z which is the same as (5. .5) with π = 1 − P {q̃ = 0}.9 .

25 Beginning with (5. .26 Argue rigorously that in order for the M/G/1 queue to be stable.. −am−1      D= 0  . we ned a0 > 0. Thus. In that case the expected number of arrivals during a service time is the traffic intensity.  0 −a0 1 − a1 . N = [ b0 − 1 b1 .. the traffic intensity is at least unity and the system is unstable. Exercise 5.62) is as follows: z0 D = π0 N where z0 = [ π1 π2 . Exercise 5. Suppose the queueing system ever enters level one.59 is as follows:   Fq̃ (z) [z − Fã (z)] = π0 z Fb̃ (z) − Fã (z) .. bm ] and −a0 1 − a1 −a2 ··· −am   ..5) 1 − Fã′ (1) + Fb̃′ (1) Solution.62).. (5.59).... We then have d d d     1− Fã (1) = π0 Fb̃ (1) + 1 − Fã (1) dz dz dz from which the result follows directly..59) with respect to z to find d d d d     Fq̃ (z) [z − Fã (z)]+Fq̃ (z) 1 − Fã (z) = π0 z Fb̃ (z) + Fb̃ (z) − Fã (z) . The restatement of (5. we must have a0 > 0. Since a0 represents the probability that there are no arrivals during an ordinary service time. (5. Fã (1) = 1. Therefore. The latter choice indicates that there are at least one arrival during an ordinary service time and hence the expected number of arrivals during a service time is at least unity.  0 0 0 ··· −a0 .. and Fb̃ (1) = 1.   . we may consider the ordinary M/G/1 queueing system. −am−2   . Solution. we know that for stability of the present queueing system. if a0 = 0. there only two choices..59) First take the derivative of both sides of (5. a0 = 0 or a0 > 0. . Solution.   . Therefore. .  0 −a0 . . . dz dz dz dz Now.27 Verify (5. . . . πm+1 ]. use the fact that limz→1 Fq̃ (z) = 1 to show that 1 − Fã′ (1) π0 = .The Basic M/G/1 Queueing System 165 Exercise 5. take the limit as z → 1 and recognize that Fq̃ (1) = 1. Then the dynamics of returning to level zero are identical to that of an ordinary M/G/1 queueing system. Equation 5.

. . ..166 QUEUEING THEORY WITH APPLICATIONS . . show that ∞ (i) πi+j z j . Since Fq̃ (z) = j=0 πj z we see immediately that Fq̃ (z) − π0 = P∞ j j=1 πj z . . .   0 Im π0 [ 1 0 . . . . .. : SOLUTION MANUAL This is derived directly from (5. πm+1 ]  0  0 a0 · · · am−2  . .  . πm+1 ] . ... 0 ] + [ π1 π2 . . πm ] = π0 [ b0 b1 . . We thus have for (5. . .  . 2.. .  0 0 0 ··· a0 We may write the left side of the previous equation as   0 Im π0 [ 1 0 . . . . . . . .  0 0  . . . 0] and a0 a1 a2 ··· am      0 a0  a1 ··· am−1  0 Im ···  D= − 0 0 a0 am−2  . . 0 ] is an m-vector and Im is an m-square identity matrix. 0 0 where [ 1 0 . .59). . X X [Fq̃ (z) − π0 ] = z j=1 j=0 . bm ] + a0 a1 a2 · · · am    0 a0 a1 · · · am−1    [ π1 π2 ... X Fq̃ (z) = j=0 P∞ j.. 0 ] + z0 = π 0 [ b0 b1 . which is as follows: [ π0 π1 . . bm ] − [ 1 0 . .  0 0 0 ··· a0 The result follows by noting that N = [ b0 b1 .61).. . .. .. bm ] + 0 0 a0 a1 a2 ··· am     0 a0 a1 · · · am−1   z0   0 0 a0 · · · am−2  . . .. .. .   . . ∞ ∞ 1 πj z j−1 = πj+1 z j .   ... Therefore.   . ..  0 0 0 ··· a0 Exercise 5. .. ... . . Solution.28 For i = 1. ..

68). X Fq̃ (z) = j=0 and the proof is complete. Show that at each step.The Basic M/G/1 Queueing System 167 Thus.69). then a func- (2) (1) tion of Fq̃ (z) for Fq̃ (z). z we have     ∞ ∞ (i) 1 X 1 X Fq̃ (z) =  πi−1+j z j − πi−1  =  πi−1+j z j  . Assume the result hold for i − 1. First we solve (1) 1 Fq̃ (z) = [Fq̃ (z) − π0 ] z for Fq̃ (z) to find (1) Fq̃ (z) = zFq̃ (z) + π0 and (i+1) 1 h (i) i Fq̃ (z) = Fq̃ (z) − πi . i ≥ 1. the result holds for i = 1. Solution. i ≥ 1 z (i) for Fq̃ (z) to find (i) (i+1) Fq̃ (z) = zFq̃ (z) + πi . substitute a function of Fq̃ (z) for Fq̃ (z). resulting in (5. . X Fq̃ (z) = j=0 Now. and continue step by step until a function of (C) (C−1) Fq̃ (z) is substituted for Fq̃ (z). one element of C−1 πj z j Fã (z) X j=0 is eliminated. Exercise 5. z z (1) Starting with (5. then ∞ (i−1) πi−1+j z j . (i) 1 h (i−1) i Fq̃ (z) = Fq̃ (z) − πi−1 . z j=0 z j=1 Hence ∞ (i) πi+j z j .29 Define (1) 1 (i+1) 1 h (i) i Fq̃ (z) = [Fq̃ (z) − π0 ] and Fq̃ (z) = Fq̃ (z) − πi .

68) to obtain h i C−1 C−1 (1) C C z j Fã (z)+z C π0 +π0 Fã (z).j (z) − z j Fã (z) .68.68.1) to find h i C−1 C−1 z 2 Fq̃ (z)(2) z C − Fã (z) πj z C Fb̃. (5.j (z)− z j Fã (z)−z C π0 −z C+1 π1 . . X X j=0 j=0 or h i C−1 h i Fq̃ (z)(C) z C − Fã (z) = πj Fb̃.168 QUEUEING THEORY WITH APPLICATIONS .68) is h i C−1 h i Fq̃ (z) z C − Fã (z) = πj z C Fb̃. .j (z) − z j Fã (z) X X = j=0 j=C−1 C−1 z C+j πj + πC−1 z C−1 Fã (z). and simplifying leads to h i C−1 C−1 z 2 Fq̃ (z)(1) z C − Fã (z) = πj z C Fb̃.j (z) − z j Fã (z) X X = j=0 j=1 C C+1 −z π0 − z π1 + π1 zFã (z).1) (2) (1) Similarly.j (z) − z C + jπj .69) j=0 . X X j=0 j=2 (5. X + j=0 and simplifying leads to h i C−1 C−1 z C Fq̃ (z)(C) z C − Fã (z) = πj z C Fb̃.j (z) − z j Fã (z) + z C π0 . we substitute zFq̃ (z) + πi for Fq̃ (z) in (5. we find h i C−1 C−1 zFq̃ (z)(1) z C − Fã (z) = πj z C Fb̃.j (z) − z j . X X zFq̃ (z) z − Fã (z) = πj z Fb̃. X (5.68. we substitute Fq̃ (z) = zFq̃ (z) + π0 into (5.2) Continuing in this way leads to h i C−1 C−1 z C Fq̃ (z)(C) z C − Fã (z) πj z C Fb̃. : SOLUTION MANUAL Now. X X j=0 j=1 (5. X (5.68) j=0 (1) To begin.j (z)− j=0 j=0 Simplifying.

A.30 Suppose Fã (z) and Fb̃ (z) are each polynomials of degree m as discussed in the first part of this section.. and E using (5. 1. . ν = max {nud . (5. . ... Recall that Fã (z) = u(z)/w(z). . . m + 1} = m + 1. u(z) = Fã (z).62) and (5.. 1.. 1. m ≥ 1 because otherwise. Find ν. 1 ≤ i ≤ m.. . Recall that νd and νn are the degrees of d(z) and n(z). .  . −am−1      A = 0  . . d(z) = z − Fã (z) and n0 (z) = n(z) = Fb̃ (z) − 1 for this special case. .  0 0 ··· 1 d1 we find E = diag (1.  1 . Thus.   .. .73). . . Since m ≥ 1.72).The Basic M/G/1 Queueing System 169 Exercise 5.. −am−2   . Similarly.   .  0 −a0 . . .78).78). Thus.. . .. Now. Using (5.63). νn + 1} = max {m. .   1 . 1 0 .  1 0 dν−2    A = 0  .. D. . −a0 ) and 0 0 ··· ··· −am   . νd = νn = m. .66). we then have d1 = 1−a1 . di = −ai . . . . .. 0 ≤ i ≤ m and n0 = b0 −1. . −am−1      D= 0  ... .. With w(z) = 1. We are considering the case where C = 1. . ..  0 −a0 1 − a1 .. v(z) = Fb̃ (z). . i 6= 1.  0 0 ··· 1 (1 − a1 ) These are the same results as presented in (5. . From (5.. . respectively.. We then have from (5. . ···  0 0 0 ··· −a0 and N = [ b0 − 1 b1 b2 · · · bm ].73). . −dν−3   . .. . Compare the results to those presented in (5..62) and (5. . Define w(z) = 1. ni = bi .   . 1.   .. .. . where E and A are de- fined as follows where E = diag (1. .. d0 ). −am−2  .76) −a0 1 − a1 −a2 ··· −am   . Solution. . N . and 0 0 ··· ··· dν−1    . and (5. there would never be any arrivals. . .

2.31 Table 5. i=1 To obtain the probability masses.9861 0. .32 Use a busy period argument to establish the validity of (5. If we repeat this for C = 4.6233 0.8. the expected number of units served will be C−1 X iP {q̃ = i} + CP {q̃ ≥ C} .9. [Hint: Consider the M/G/1 system under the nonpreemptive LCFS service discipline.5254 0. The main idea here is that if there are at least C units present in the system.2472 0. 4.00000 1.9000 0. or equivalently. we will find that the expected number of services that occur during a service time is 3.9920 2 0.9071 5 0.8507 0.9. For example.8.90).9493 0. .6350 0.4419 0.1.7347 0.9762 3 0.8507.5.3123 0.00000 1. Occupancy C=1 C=2 C=4 C=8 . for the case C = 2.5261 0. : SOLUTION MANUAL Table 5.9493 − 0. deter- mine the probability masses for the first few elements of the distributions and then compute the mean number of units served during a service time for C = 1.0986. P {q̃ ≥ 2} = P {q̃ > 1} = 0. Hence.9493 = 0. Otherwise.9985 1 0.4356 0. we see that the system utilization is 0.8507 = 0.00000 1.8547 6 0.3606 0. P {q̃ = 1} = 0.7292 Exercise 5. Exercise 5. From this. Occupancy values as a function of the number of units served during a service period for the system analyzed in Example 5. Hence the expected number of services during a service interval is 1 × 0. then the number of units served will be C.7914 0.4569 0.3715 0.5292 0.6 so that again the system utilization is 0.6107 0. Solution.5 gives numerical values for the survivor function of the occupancy distributions shown in Figure 5. From this table.0507.6997 0.0986 + 2 × 0.9481 4 0. and 8. there will be the same as the number of units present. Analyze the results of your calculations.2985 0.8772 0. we have P {q̃ = 0} = 1 − 0.7633 0. we simply subtract successive values of the survivor functions. the servers will serve all units present at the beginning of the service interval.8507 = 1.] .7942 7 0.170 QUEUEING THEORY WITH APPLICATIONS . 1.9453 0.00000 0 0.

E[w̃] = ρE[w̃|B]. sE[z̃] . Z ∞ Fz̃∗r (s) = e−sz dFz̃r (z) Z0∞ 1 − Fz̃ (z)   −sz = e dz 0 E[z̃] 1 ∞ Z ∞ Z = e−sz dFz̃ (y)dz E[z̃] Z0 Z x 1 ∞ y  −sz = e dz dFz̃ (y) E[z̃] Z0 0 1 ∞ 1 1 − e−sy dFz̃ (y)  = E[z̃] 0 s 1 = [1 − Fz̃ (s)] . and let I represent its complement. 1−ρ which is the desired result. we find that ρ E[w̃] = ρE[w̃|B] = E[x̃e ].33 Show that the Laplace-Stieltjes transform for the distri- bution of the residual life for the renewal process having renewal intervals of length z̃ is given by 1 − Fz̃∗ (s) Fz̃∗r (s) = . the probability that the system is busy upon arrival. Then E[w̃] = E[w̃|B]P {B} + E[w̃|I]P {I} = ρE[w̃|B] + E[w̃|I](1 − ρ).The Basic M/G/1 Queueing System 171 Solution. Now. E[x̃e ] E[w̃|B] = . the total amount of waiting time for the arbitrary customer in a LCFS dis- cipline will be the length of the busy period started by the customer in service and generated by the newly arrived customers. Exercise 5. Hence. Therefore.8. Let B be the event that an arbitrary customer finds the system busy upon arrival. By Definition 2. But E[w̃|I] = 0 since the customer enters service immediately if the system is idle upon arrival. 1 − ρ] Multiplying this expression by ρ. (5.98) s E[z̃] Solution.

show that E x̃n+1   E [x̃nr ] = . E[x̃] 0 x Changing the order of integration. .34 For an arbitrary nonnegative random variable. Recall that [1 − Fx̃ (z)] fx̃r (z) = . E[x̃] We may then write Z∞ E[x̃r ] = xn fx̃r (x)dx. x̃. 0 so that Z∞ 1 E[x̃r ] = xn [1 − Fx̃ (z)] dx E[x̃] 0 Z∞ Z∞   1 = xn  1 − fx̃ (z)dz  dx.172 QUEUEING THEORY WITH APPLICATIONS . .102) (n + 1)E[x̃] Solution. Z∞ Zz   1  xn dx fx̃ (z)dz E[x̃r ] = E[x̃] 0 0 Z∞ . (5. : SOLUTION MANUAL Exercise 5.

z 1 xn+1 .

= .

fx̃ (z)dz .

E[x̃] n + 1 .

(n + 1)E[x̃] Exercise 5. exponentially distributed random variables with parameters µ1 and µ2 .35 For the M/G/1 system.0 0 E[x̃n+1 ] = . µ2 such that E[x̃] = 1. suppose that x̃ = x̃1 + x̃2 where x̃1 and x̃2 are independent. by the fact that E[x̃] = 1. Show that Cx̃2 ≤ 1 for all µ1 . First. Solution. 1 = E 2 [x̃] = E 2 [x̃1 + x̃2 ] . respectively.

Plot on the same graph E[w̃] as a function of ρ for these two distributions and for the M/M/1 queueing system with µ = 1 on the same graph. 2 . Therefore. µ21 µ22 Exercise 5. 1−ρ 2 1−ρ For the Erlang-2 system.101): " # ρE[z̃] 1 + Cz̃2 E[w̃] = . we need only find the variance (which will be equal to Cz̃2 in this case) of each to compute its expected waiting time. so that ρ ρ 1+0   E[w̃d ] = = 2 . Var(x̃) Cx̃2 = E 2 [x̃] Var(x̃1 + x̃2 ) = 1 = Var(x̃1 ) + Var(x̃2 ) 1 1 = + ≤ 1. Var(ze ) = Var(z1 ) + Var(z2 ) 1 1 = 2 + 2 4µ 4µ 1 = . Solution. E 2 [z̃] For all three systems. the variance is zero. note that E[z̃] = 1. Thus.36 Compute the expected waiting time for the M/G/1 system with unit mean deterministic service times and for the M/G/1 system with service times drawn from the unit mean Erlang-2 distribution. µ1 µ1 µ2 µ2 which implies that 1/µ21 + 1/µ22 ≤ 1 since 2/µ1 µ2 ≥ 0. Compare the results. To compute the expected waiting time. recall equation (5. For the deterministic system.The Basic M/G/1 Queueing System 173 = E 2 [x̃1 ] + 2E[x̃1 ]E[x̃2 ] + E 2 [x̃2 ] 1 2 1 = 2 + + 2. 1−ρ 2 where Var(z̃) Cz̃2 = .

1−ρ 2 1−ρ For the M/M/1 system. respectively. we see from the figure that the mean waiting time is also directly proportional to ρ. Let E[x̃] = 1.e. Var(z̃m ) = 1/µ2 = 1. the perfor- mance of the system deteriorates. For the exponential distribution with rate µ. so that dFx̃ (x) = pdFx̃1 (x) + (1 − p)dFx̃2 (x).. putting a greater load on the system.174 QUEUEING THEORY WITH APPLICATIONS . E[x̃2 ] = 2/µ2 . proving Cx̃2 ≥ 1 is equivalent to showing E[x̃2 ] ≥ 2. so that ρ 1+1 ρ   E[w̃m ] = = . By the definition of Cx̃2 . then the arrival rate also increases. . In addition. and so 2 2 p 1−p   E[x̃2 ] = p 2 + (1 − ρ) 2 = 2 2 + . µ1 µ2 µ1 µ22 . : SOLUTION MANUAL and the mean waiting time for this system is 1 3 " # ρ 1+ 2 4ρ E[w̃e ] = = . where x̃1 and x̃2 are independent. Let Di denote the event that distribution i is selected. suppose that x̃ is drawn from the distribution Fx̃1 (x) with probability p and from Fx̃2 (x) otherwise. i = 1. 1−ρ 2 1−ρ From Figure 5. This should be clear since if ρ = µλ increases. Therefore.1. E 2 [x̃] E 2 [x̃] That is. . 1]. Var(x̃) E[x̃2 ] − E 2 [x̃] Cx̃2 = = . i. Exercise 5. E[x̃n ] = pE[x̃n1 ] + (1 − p)E[x̃n2 ]. Solution. exponentially distributed random variables with parameters µ1 and µ2 . we see that as the coefficient of variance increases. 2. the coefficient of variance and the mean waiting time are directly proportional. Then P {x̃ ≤ x} = P {x̃ ≤ x|D1 } P {D1 } + P {x̃ ≤ x|D1 } P {D1 } = pFx̃1 (x) + (1 − p)Fx̃2 (x).37 For the M/G/1 system. Show that Cx̃2 ≥ 1 for all p ∈ [0.

83 0. p 1−p 2 + − 1 ≥ 0.92 0.80 0. p 1−p E[x̃] = + = 1.. µ1 µ2 so that p2 2p(1 − p) (1 − p)2 E 2 [x̃] = + + = 1.36. Therefore.36. µ21 µ1 µ2 µ22 .95 0.98 rho Figure 5.1 Survivor function for Exercise 5.e.89 0. µ1 µ22 i.86 0.The Basic M/G/1 Queueing System 175 100 D eterm inistic Erlang-2 80 Exponential E[waiting time] 60 40 20 0 0. showing E[x̃2 ] ≥ 2 is equivalent to showing p (1 − p) 2 + ≥ 1. µ1 µ22 Now.

38 With x̃ and p defined as in Exercise 5.38.3) to yield 1 − 2p Cx̃2 + 1  1 2 − + = 0. let p = 12 .38. Would it be possible to determine p. so that we have the following two equations p (1 − p) + = 1 (5. Now. Find µ1 and µ2 such that Cx̃2 = 2.37. µ22 µ2 1−p 1 And solving this equation for µ2 . 1 (1 − p)/p Cx̃2 + 1 2 + = .  = 1± (5. . : SOLUTION MANUAL Thus. and squaring both sides " # 1 1 2(1 − p) (1 − p)2 = 1 − + . µ1 µ2 and the proof is complete. Exercise 5.3) µ1 µ22 2p 1 Solving equation (5. (5.38.2) µ21 µ22 Rearranging (5.176 QUEUEING THEORY WITH APPLICATIONS . p 1−p p 1 − p p2 2p(1 − p) (1 − p)2 + −1 = + − 2− − µ21 µ22 µ12 µ22 µ1 µ1 µ2 µ22 p(1 − p) p(1 − p) 2p(1 − p) = + − µ21  µ22 µ1 µ2 1 2 1 = p(1 − p) 2 − + µ µ1 µ2 µ22  1 2 1 1 = p(1 − p) − ≥ 0.4) µ2 1−p 2 .38. 1 = E[x̃]. and then we will substitute for the specific value of p = 21 .38.38.2). µ1 . First. we will compute the general case for any p. Solution. and µ2 uniquely for a given value of Cx̃2 ? Explain. µ21 p2 µ2 µ22 Substitute this expression into (5.1) for µ1 . and Cx̃2 = E[x̃2 ]−1.1) µ1 µ2 2p 2(1 − p) + = Cx̃2 + 1 (5. .38. we obtain s 1 p 1   Cx̃2 − 1 .

the expected cycle length is E[z̃]. Solution. so that Y (t)/t represents the proportion of time the server has spent serving up to time t.  = 1± (5.5). Y (t) is a random variable.2). If we take p 1/µ1 = 1 + 1/2. Now suppose the renewal process is observed at a random point in time. we see that it is necessary to specify p in addition to Cx̃2 to obtain unique values of µ1 and µ2 . dz a E[z̃] as was shown in the previous subsection. Solution. else define the system to be in a y-period. z̃}]. t→∞ t t→∞ t N (t) . µ2 2 From (5. Exercise 5.4) and (5. we may write Y (t) Y (t) N (t) Y (t) N (t) = = .The Basic M/G/1 Queueing System 177 Substitute this result into (1) to yield s 1 1−p  Cx̃2 − 1 . t0 .38. If the age of the observed interval is less than c. Choose a real number c arbitrarily. Exercise 5. Let Y (t) denote the total length of time the server has spent serving up to time t. which is a system of two equations in three unknowns. Thus. define the system to be in an x-period.5) µ1 2p This shows that one can find µ1 and µ2 given p and Cx̃2 if Cx̃2 ≥ 1. So long as there has been at least one busy period up to this time. Now let N (t) denote the total number of busy periods completed up to time t. then r 1 1 =1− . This should be clear from equations (5. z̃}] = [1 − Fz̃ (z)]dz 0 so that d 1 − Fz̃ (z) Fz̃ (z) = .38.40 Formalize the informal discussion of the previous para- graph. the long-term average propor- tion of time the server is busy is Y (t) Y (t) N (t) lim = lim . Show that Z c E[min {c. t t N (t) N (t) t For a fixed t.38.39 (Ross[1989]) Consider an ordinary renewal process with renewal interval z̃.1) and (5. and the expected length of the it x-period is E[min {c.38. Thus.38.

Let cn = xn + yn denote the length of the n-th cycle. We consider the limit of t/N (t). Then PN (t) PN (t)+1 t n=0 cn t n=0 cn lim = lim ≤ ≤ . . . . t→∞ t E[x̃] + E[ỹ] Since the limits exist separately. Suppose the mes- sages have length m̃ (in octets). : SOLUTION MANUAL It remains to show Y (t) N (t) Y (t) N (t) lim = lim lim . t→∞ t N (t) t→∞ N (t) t→∞ t That is. respectively. that these limits exist separately. Y (t) Y (t) N (t) lim = lim lim t→∞ t t→∞ N (t) t→∞ t E[ỹ] = . where m̃ is the number of characters in a message and k is a normalizing constant. N (t) 1 lim = . (a) Given that P {m̃ = m} = kθ(1 − θ)m−1 for a ≤ m ≤ b. and the lengths are drawn from a geomet- ric distribution having a mean of E[m̃] octets. message lengths are drawn from a distribution characterized as follows: P {m̃ = m} = kθ(1 − θ)m−1 for a ≤ m ≤ b.178 QUEUEING THEORY WITH APPLICATIONS . t→∞ N (t) t→∞ N (t) N (t) N (t) so that t lim = E[c̃] = E[x̃] + E[ỹ]. E[x̃] + E[ỹ] 5-1 Consider a communication system in which messages are transmitted over a communication line having a capacity of C octets/sec. but truncated at a and b characters on the lower and upper ends of the distribution. as t → ∞. t→∞ N (t) Therefore. That is.

θi+1 (c) Write a simple program to implement the recursive relationship de- fined in part (b) to solve for θ in the special case of a = 10. assuming a transmission capacity of 30 characters/sec. b X P {m̃ = m} = 1. b. m=a . (e) Using the computer program given in the Appendix. and E[m̃] = 30. θi ). b = 80. Comment on the suitability of making the geo- metric assumption. b. θz 1 − [(1 − θ)z](b−[a−1]) E[z m̃ ] = z (a−1) . θ). where C is the transmission capacity in octets/sec. 1 − (1 − θ)z 1 − (1 − θ)(b−[a−1]) and 1 (b − [a − 1])(1 − θ)(b−[a−1]) E[m̃] = a − 1 + − . Use θ0 = E −1 [m̃] as the starting value for the recursion. (f) Compare this complementary distribution to one obtained under the assumption that the message lengths are drawn from an ordinary ge- ometric distribution. θ and use this expression to obtain a recursive expression for θ of the form 1 = f (E[m̃]. a. Solution: (a) From the laws of total probability. obtain the com- plementary occupancy distribution for the transmission system under its actual message length distribution at a traffic utilization of 95%. (d) Argue that Fx̃∗ (s) = Fm̃ ∗ (s/C). θ 1 − (1 − θ)(b−[a−1]) (b) Rearrange the expression for E[m̃] given above by solving for θ −1 to obtain an equation of the form 1 = f (E[m̃].The Basic M/G/1 Queueing System 179 show that h i−1 k = (1 − θ)a−1 − (1 − θ)b . a.

first note the following probabilities: 1. P P {m̃ > x} =  m=x+1    0. a ≤ x < b. =  0.1). . Therefore. x≥b  1. : SOLUTION MANUAL And from (1. a ≤ x < b. h i−1 k = (1 − θ)a−1 − (1 − θ)b . .180 QUEUEING THEORY WITH APPLICATIONS . . observe that b E[z m̃ ] = z m P {m̃ = m} X m=a b z m kθ(1 − θ)m−1 X = m=a b [z(1 − θ)]m−1 X = kθz m=a [z(1 − θ)]a−1 − [z(1 − θ)]b = kθz h 1 − z(1 − θ) i θz (1 − θ)a−1 − z (b−(a−1) (1 − θ)b = z a−1 1 − z(1 − θ) 1 · a−1 (1 − θ) − (1 − θ)b a−1 θz 1 − [(1 − θ)z]b−(a−1) = z . 1 − z(1 − θ) 1 − (1 − θ)b−(a−1) To compute E[m̃]. i x b k (1 − θ) − (1 − θ) . b b θ(1 − θ)m−1 X X P {m̃ = m} = k m=a m=a (1 − θ)a−1 − (1 − θ)b = kθ h 1 − (1 − θ) i = k (1 − θ)a−1 − (1 − θ)b = 1.   b   P {m̃ = m} . To find E[m̃]. x≥b so that ∞ X E[m̃] = P {m̃ > x} x=0 .h x < a. . x < a.

1. Then E[m̃] = a − 1 + E[m̃1 |m̃1 ≤ c].1) But ∞ θ(1 − θ)i−1 X P {m̃ > c} = i=c θ(1 − θ)c = 1 − (1 − θ) = (1 − θ)c .The Basic M/G/1 Queueing System 181 " b−1 # (1 − θ)x − (b − a)(1 − θ)b + a X = k "x=a # (1 − θ)a − (1 − θ)b = k − (b − a)(1 − θ)b + a 1 − (1 − θ) kh i = (1 − θ)a − (1 − θ)b − θ(b − a)(1 − θ)b + a θ" # 1 (1 − θ)a − (1 − θ)b − θ(b − a)(1 − θ)b = +a θ (1 − θ)a−1 − (1 − θ)b " # 1 (1 − θ) − (1 − θ)b−(a−1) − θ(b − a)(1 − θ)b−(a−1) = θ 1 − (1 − θ)b−(a−1) +a 1 1 + (b − a)(1 − θ)b−(a−1) = − +a θ 1 − (1 − θ)b−(a−1) 1 1 − (1 − θ)b−(a−1) + [b − (a − 1)](1 − θ)b−(a−1) = − θ 1 − (1 − θ)b−(a−1) +a 1 [b − (a − 1)](1 − θ)b−(a−1) = (a − 1) + − . E[m̃1 ] = E[m̃1 |m̃1 ≤ c]P {m̃ ≤ c} +E[m̃1 |m̃1 ≤ c]P {m̃ > c} . . (5. Now. P {m̃ ≤ c} = 1 − (1 − θ)c . and 1 E[m̃1 |m̃ > c] = c + . Define m̃1 to be geometric random variable with parameter θ. θ Thus. θ 1 − (1 − θ)b−(a−1) We may also compute the mean of m̃ by considering the following. where the latter expression is equal to the mean of geometric random variable truncated at the point c = b − (a − 1).

so that Z ∞ Z ∞ Fx̃∗ (x) = e −sx dFx̃ (x) = e−sx dFm̃ (Cx). we then have   1 1 θ − c+ θ (1 − θ)c E[m̃1 |m̃1 ≤ c] = 1 − (1 − θ)c 1 c(1 − θ)c = − . : SOLUTION MANUAL and from (S5.039531. 1 c(1 − θ)c E[m̃] = a − 1 + − θ 1 − (1 − θ)c 1 [b − (a − 1)](1 − θ)b−(a−1) = a−1+ − . (f) The comparison is roughly that shown in Figure 5. the truncated geometric diverges from that of the ordinary geometric and so the geometric assumption is not valid in this case. 1 [b − (a − 1)](1 − θ)b−(a−1) = E[m̃] = (a − 1) + .1 between the (10. . C (e) See the graph in Figure 1. then x = y/C. However. 80) curve and the (1. the two distri- butions can be said to approximate one another. θ 1 − (1 − θ)c Therefore.1. For n small. θi+1 1 − (1 − θi )b−(a−1) (c) The computer program resulted in θ = 0.1).182 QUEUEING THEORY WITH APPLICATIONS . θ 1 − (1 − θ)b−(a−1) or 1 [b − (a − 1)](1 − θi )b−(a−1) = E[m̃] − (a − 1) + . medskip . 0 0 Let y = Cx. . θ 1 − (1 − θ)b−(a−1) (b) From part (a). 5000) curve. n o m̃ m̃ (d) Since x̃ = C. as n in- creases. P {x̃ ≤ x} = P C ≤ x = P {m̃ ≤ Cx}. and Z ∞ y Fx̃∗ (s) = e−s C dFm̃ (y) 0 ∗ s = Fm̃ ( ).

and Fx̃r (x) is the distribution for the forward recurrence time of the ser- vice time.80) . [Hint: The algebra will be greatly simplified if (??) is first rewritten as Fñ (z) = α(z)/β(z).The Basic M/G/1 Queueing System 183 1 P{Occupancy exceeds n} .1 geometric mean 18.001 0 25 50 75 100 Occupancy.70 . Verify the formula for E[ñ] along the way. determine a formula for E[ñ2 ]. the service time distribution. n Figure P1. where α(z) = (1 − ρ)Fx̃∗ (λ[1 − z]). the second moment of the occupancy distribution for the ordinary M/G/1 system.1 Survivor function for Supplementary Problem 1. Then. β(z) = 1 − ρFx̃∗r (λ[1 − z]).90 (10. in terms of the first three moments of Fx̃ (x). in order to find d2 lim Fñ (z). 5-2 Using the properties of the probability generating function.01 mean 12. z→1 dz 2 .

8) as α(z) Fñ (z) = . and then substitute these limits into the formula for the second derivative of the ratio.] Solution: First rewrite (5. and β(z) = 1 − ρFx̃∗r (λ[1 − z]). and β(1) = 1 − ρ. dα(z)/dz. : SOLUTION MANUAL first find the limits as z → 1 of α(z). β(z). dz β(z) β 2 (z) and ′′ ′ ′ d2 α (z) α (z)β (z) Fñ (z) = − dz 2 β(z) β 2 (z) ! ′ ′ ′′ ′ α (z)β (z) α(z)β (z) 2α(z)[β (z)]2 − + − β 2 (z) β 2 (z) β 3 (z) ′′ ′ ′ ′′ ′ α (z) 2α (z)β (z) α(z)β (z) 2α(z)[β (z)]2 = − − + . β(z) β 2 (z) β 2 (z) β 3 (z) Now. α(1) = 1 − ρ. Therefore d E[ñ] = Fñ (1) ds . β(z) where α(z) = (1 − ρ)Fx̃∗ (λ[1 − z]). d2 β(z)/dz 2 . . ds x̃ and ′ d ∗ β (z) = −ρ(−λ) F (x) ds x̃r implies ′ β (1) = −ρ(−λ)(−E[x̃r ]) = −λρE[x̃r ]. ′ ′ d α (z) α(z)β (z) Fñ (z) = − .184 QUEUEING THEORY WITH APPLICATIONS . so that ′ d ∗ α (z) = (1 − ρ)(−λ) F (x) ds x̃ implies ′ d ∗ α (1) = (1 − ρ)(−λ) F (0) = ρ(1 − ρ). Then. d2 α(z)/dz 2 . . where Fx̃r (x) is the distribution for the forward recurrence time of the service time. dβ(z)/dz.

The Basic M/G/1 Queueing System 185 ′ ′ α (1) α(1)β (1) = − β(1) β 2 (1) (1 − ρ)ρ (1 − ρ)λρE[x̃r ] = + 1−ρ (1 − ρ)2 λρ E[x̃ ] 2 = ρ+ " 1 − ρ 2E[x̃] # ρ E[x̃2 ] = ρ 1+ 2(1 − ρ) 2E[x̃] . In terms of Cx̃2 . d2 ∗ . " # ρ 1 + Cx̃2 E[ñ] = ρ 1 + . 1−ρ 2 Taking the second derivatives of α(z) and β(z) evaluated at z = 1.

.

.

.

′′ 2 = (1 − ρ)λ2 E[x̃2 ]. .

α (z).

.

= (1 − ρ)(−λ) F (s) z=1 ds2 x̃ .

s=0 and d2 ∗ .

.

.

.

′′ 2 = −ρλ2 E[x̃2r ]. .

β (z).

.

= (−ρ)(−λ) 2 Fx̃ (s).

d ∗ 1 1 1 d ∗   F (s) = − (1 − Fx̃∗ (s)) 2 − F (s) ds x̃r E[x̃] " s s ds # x̃ d ∗ 1 1 − Fx̃∗ (s) + s ds Fx̃ (s) = − 2 . s→0 ds E[x̃] s→0 2 so that E[x̃2 ] E[x̃r ] = . Fx̃∗r (x) = sE[x̃] . 2E[x̃] Next we obtain d ∗ d 2 2[1 − Fx̃∗ (s) + s ds Fx̃ (s)] + s2 ds ∗ " # d2 ∗ 1 2 Fx̃ (s) F (s) = . E[x̃] s Applying L’Hôpital’s rule twice. Thus. z=1 ds s=0 1−Fx̃∗ (s) Now. 3 d2 s d 3 F ∗ (s) + ∗ " # d 1 ds2 Fx̃ (s) lim Fx̃∗r (s) = − lim ds x̃ . ds2 x̃ E[x̃] s3 .

we find d2 ∗ E[x̃3 ] lim Fx̃ (s) = .186 QUEUEING THEORY WITH APPLICATIONS . ′′ −ρλ2 E[x̃3 ] limz→1 β (z) = . s→0 ds2 r 3E[x̃] Therefore. : SOLUTION MANUAL By applying L’Hôpital’s rule four times. 3 E[x̃] We may now specify d2 . 3E[x̃] Consequently. . E[x̃3 ] E[x̃2r ] = . .

= E[ñ(ñ − 1)] = E[ñ2 ] − E[ñ] .

2 Fñ (z).

.

dz z=1 2] 2ρ(1 − ρ)[−λρ] E[x̃ 2E[x̃] 2 2 = λ E[x̃ ] − (1 − ρ)2 " #2 2 (1 − ρ)ρλ E[x̃3 ] 2(1 − ρ)λ2 ρ2 E[x̃2 ] + + (1 − ρ)2 3E[x̃] (1 − ρ)3 2E[x̃] ρ 3 2 E[x̃ ] ρ 3 E[x̃3 ] = λ2 E[x̃2 ] + + (1 − ρ) 2E 2 [x̃] 3(1 − ρ) E 3 [x̃] " #2 2ρ4 E[x̃2 ] + 4(1 − ρ)2 E[x̃] E[x̃2 ] ρ3 E[x̃2 ] ρ3 E[x̃3 ] = ρ2 2 + + E [x̃] 1 − ρ E 2 [x̃] 3(1 − ρ) E 3 [x̃] " #2 ρ4 E[x̃2 ] + . we may determine a formula for E[ñ2 ] using an operator . 2(1 − ρ)2 E[x̃] Therefore. 2(1 − ρ) E 2 [x̃] 1 − ρ E 2 [x̃] 3(1 − ρ) E 3 [x̃] Alternatively. E[x̃2 ] ρ3 E[x̃2 ] ρ3 E[x̃3 ] E[ñ2 ] = ρ2 + + + E 2 [x̃] 1 − ρ E 2 [x̃] 3(1 − ρ) E 3 [x̃] ( )2 ρ4 E[x̃2 ] ρ2 E[x̃2 ] + ρ + 2(1 − ρ)2 E 2 [x̃] 2(1 − ρ) E 2 [x̃] ( ) ρ2 E[x̃2 ] ρ2 E[x̃2 ] ρ3 E[x̃3 ] = 3 + + + ρ.

where −1 Fñ1 (z) = (1 − ρ) 1 − ρFx̃∗r (λ[1 − z])  .2. d E[ñj ] = Fñ (z)|z=1 dz j and d2 . (S5. Rewrite ñ = ñ1 + ñ2 .The Basic M/G/1 Queueing System 187 argument as follows. Then E[ñ2 ] = E[(ñ1 + ñ2 )2 ] = E[ñ21 ] + 2E[ñ1 ]E[ñ2 ] + E[ñ22 ]. and Fñ2 (z) = Fx̃∗ (λ[1 − z]).1) Now.

.

E[ñj (ñj − 1)] = 2 Fñj (z).

.

ds z=1 or. d2 d E[ñ2j ] = 2 Fñj (z)|z=1 + Fñj (z)|z=1 dz dz d2 = Fñ (z)|z=1 + E[ñj ]. and for s(z) = λ[1 − z]. d ∗    . rearranging terms. dz 2 j Then. for ñ1 and ñ2 as defined above. .

−2 −(1 − ρ) 1 − ρFx̃∗r (λ[1 − z])  .

E[ñ1 ] = ρλ F (s(z)) .

ds x̃r .

(1 − ρ)2 and d  . z=1 1−ρ = (ρλ)E[x̃r ].

E[ñ2 ] = −λ Fx̃∗ (s) .

.

.

ds z=1 The second moments of ñ1 and ñ2 are thus 2 d ∗   −3 E[ñ21 ] = 2(1 − ρ) 1 − ρFx̃∗r (λ[1 − z])  ρλ F (s) −2 ds x̃r +(1 −"ρ) 1 − ρFx̃∗r (λ[1#− z])  d2 ∗ . = ρ.

.

· ρλ 2 Fx̃r (s(z)) .

.

+ E[ñ1 ] ds z=1 .

(1 − ρ) (1 − ρ) 1−ρ and ( ). . . : SOLUTION MANUAL 2(1 − ρ) (1 − ρ) 2 ρλ = 3 (ρλ)2 E 2 [x̃r ] + 2 ρλ E[x̃2r ] + E[x̃r ].188 QUEUEING THEORY WITH APPLICATIONS .

d2 E[ñ22 ] = (−λ)2 2 Fx̃∗ (s(z)) .

.

.

(a) Determine the Laplace-Stieltjes transform of the job service-time dis- tribution. + E[ñ1 ] ds z=1 = λ2 E[x̃2 ] + ρ. (b) Determine the mean forward recurrence time of the service-time dis- tribution using the result of part (a) and transform properties.1) to obtain 2 ρ ρλ E[ñ2 ] = 2 (ρλ)2 E 2 [x̃r ] + λ2 E[x̃2r ] + E[x̃r ] (1 − ρ) 1−ρ 1−ρ ρ2 λ +2 E[x̃r ] + λ2 E[x̃r ] + ρ 1−ρ !2 2ρ2 λ2 E[x̃2 ] ρλ E[x̃2 ] ρ2 λ E[x̃2 ] = + + 2 (1 − ρ)2 2E[x̃] 1 − ρ 2E[x̃] 1 − ρ 2E[x̃] ρλ 2 +λ2 E[x̃2 ] + ρ + E[x̃2r ] ( 1 − ρ ) ρ2 E[x̃2 ] 2ρ2 E[x̃2 ] ρλ2 E[x̃3 ] = + 3(1 − ρ) + ρ + (1 − ρ)2 2E[x̃] 2E 2 [x̃] 1 − ρ 3E[x̃] ( ) ρ2 2 E[x̃ ] 2 ρ E[x̃ ] 2 ρλ2 E[x̃3 ] = + 3 + ρ + . if any. . 5-3 Jobs arrive to a single server system at a Poisson rate λ. (c) Determine the stochastic equilibrium mean sojourn time for jobs in this system. independent of everything. Each task requires a service time drawn from a common distribution. We may now substitute these results into (S5. 2(1 − ρ) E 2 [x̃] 1 − ρ E 2 [x̃] 3(1 − ρ) E[x̃] This is the same expression as was found using the first method. Fx̃t . drawn from a general distribution Fm̃ (m). Each job con- sists of a random number of tasks. (d) Determine the mean number of tasks remaining for a job in service at an arbitrary point in time.2. m̃. independent of everything.

0 Therefore Fñ (z) = Fx̃∗ (β[1 − z]). Let ñ represent the total number of interruptions. the process that counts the interruptions is a Poisson process. which. 0 n! and ∞ z n P {ñ = n} X Fñ (z) = n=0 ∞ (βx)n e−βx zn∞ X = dFx̃ (x) n=0 n! ∞ (zβx)n −βx Z ∞X = e Fx̃ (x) 0 n=0 n! Z ∞ = ezβx e−βx Fx̃ (x). of course.  Pñ  −sc̃ −s x̃+ x̃ i=0 si E[e ] = E e   ñ .The Basic M/G/1 Queueing System 189 Solution: (a) Since the time between interruptions is exponentially distributed with parameter β. n! Therefore. is just the pgf for the distribution of the number of arrivals from a Poisson process over a period x̃. Hence (βx)n e−βx P {ñ = n|x̃ = x} = . ∞ (βx)n e−βx Z P {ñ = n} = dFx̃ (x). Then c̃ = x̃ + (b) P ñ i=0 x̃si . Therefore.

.

 ∞ ( P Z ∞X −s x̃+ x̃si .

.

ñ=n. 0 .x̃=x = E e i=0    0 n=0 ) P {ñ = n|x̃ = x} dFx̃ (x) ∞ ∞X n (βx)n Z e−sx Fx̃∗s (s) e−βx dFx̃ (x)  = 0 n=0 n! Z ∞ −sx βxFx̃∗s (s) −βx = e e e dFx̃ (x).

(d) Since c̃ = x̃ + ñi=0 x̃si . the busy period is over. then the completion time has the same distribution as the length of an ordinary busy period. : SOLUTION MANUAL Thus.190 QUEUEING THEORY WITH APPLICATIONS . Then each interruption of the first customer has the same distribution as the ordinary busy period. Therefore E[c̃] = E[x̃](1 + ρs ). P E[c̃|x] = x + βxE[x̃s ]. the busy period relationship is simply a special case of the completion time. As a check. When service of the first customer is complete. .  (c) The formula for the M/G/1 busy period is as follows:   Fỹ∗ (s) = Fx̃∗ s + λ − λFỹ∗ (s) . Thus. . The relationship is explained by simply allowing the first customer of the busy period to be interrupted by any other arriving customer. which implies E[c̃] = E[x̃] + βE[x̃]E[x̃s ]. The relationship between the two formulas is that if the service time of the special customer has the same distribution as the length of a busy period. Fc̃ (s) = Fx̃∗ s + β − βFx̃∗s (s) . d ∗ . where ρs = βE[x̃s ].

.

d d .

.

 .

= Fx̃∗ (s).

.

.

.

Fc̃ (s).

1 − β Fx̃s (s) .

.

That is. ds2 ds2 x̃ ds x̃ ds x̃ ds2 x̃ Taking the limit as s → 0. P {B} = λE[c̃] = λE[x̃](1 + ρs ) . (e) Note that the probability that the server is busy is just the expected value of the number of customer is service. . " # 2 d2 d2 ∗ d ∗ d ∗ d2 ∗   Fc̃ (s) = F (s) 1 − β F (s) + F (s) −β F (s) . if B represents the event the the server is busy. E[c̃2 ] = E[x̃2 ] (1 + βE[x̃s ])2 + E[x̃]βE[x̃2s ]. ds s=0 ds s=0 ds s=0 It follows that E[c̃] = E[x̃] (1 + βE[x̃s ]) . Similarly.

independently on each entry.] (c) Compare the results of part (b) with the Laplace-Stieltjes transform for the length of the M/G/1 busy period. and Fñ (z). the Laplace-Stieltjes transform for c̃ under this pol- icy. Explain the relationship between these two results. E[x̃] (1 + ρs ) 5-4 Consider a queueing system in which ordinary customers have service times drawn from a general distribution with mean 1/µ. There is a spe- cial customer who receives immediate service whenever she enters the system. length of time. The stability condition is then ρ(1 + ρs ) < 1. (b) Determine Fc̃∗ (s). the probability generating function for the number of interruptions suffered by the ordinary customer. which has mean 1/α. from a general distribution. the ordinary customer’s service resumes from the point of interruption. (d) Determine E[c̃] and E[c̃2 ].The Basic M/G/1 Queueing System 191 = ρ(1 + ρs ). [Hint: Condition on the the length of the service time of the or- dinary customer and the number of service interruptions that occur. Upon completion of service. Following an interruption. and let ñ denote the number of interruptions. Fx̃s (x). her service time being drawn. rate β. and the stability condition for this system. Determine P {ñ = n|x̃ = x}. (e) Determine the probability that the server will be busy at an arbitrary point in time in stochastic equilibrium. let c̃ denote the time that elapses from the instant an ordinary customer enters service until the instant the ordinary customer departs. the con- ditional probability that the number of interruptions is n. . the special customer departs the system and then returns after an exponential. Let x̃si denote the length of the ith interruption of an ordinary customer by the special customer. (a) Suppose that service time for the ordinary customer is chosen once. Also. Rewriting ρ as λE[x̃] the stability condition is then 1 1 λ< .

∞ (βx)n e−βx Z P {ñ = n} = dFx̃ (x). n! Therefore. : SOLUTION MANUAL Solution: (a) Since the time between interruptions is exponentially distributed with parameter β. the process that counts the interruptions is a Poisson process. Then c̃ = x̃ + i=0 x̃si .  Pñ  −sc̃ −s x̃+ x̃ i=0 si E[e ] = E e   ñ . . Therefore. is just the pgf for the distribution of the number of arrivals from a Poisson process over a period x̃. (b) Let Pñ ñ represent the total number of interruptions. of course. which. 0 n! and ∞ z n P {ñ = n} X Fñ (z) = n=0 ∞ (βx)n e−βx zn∞ X = dFx̃ (x) n=0 n! ∞ (zβx)n −βx Z ∞X = e Fx̃ (x) 0 n=0 n! Z ∞ = ezβx e−βx Fx̃ (x). .192 QUEUEING THEORY WITH APPLICATIONS . Hence (βx)n e−βx P {ñ = n|x̃ = x} = . 0 Therefore Fñ (z) = Fx̃∗ (β[1 − z]).

.

 ∞ ( P Z ∞X −s x̃+ x̃si .

.

ñ=n. 0 .x̃=x = E e i=0    0 n=0 ) P {ñ = n|x̃ = x} dFx̃ (x) ∞ ∞X n (βx)n Z e−sx Fx̃∗s (s) e−βx dFx̃ (x)  = 0 n=0 n! Z ∞ −sx βxFx̃∗s (s) −βx = e e e dFx̃ (x).

Therefore E[c̃] = E[x̃](1 + ρs ). the busy period relationship is simply a special case of the completion time.The Basic M/G/1 Queueing System 193 Thus. When service of the first customer is complete. As a check. Fc̃ (s) = Fx̃∗ s + β − βFx̃∗s (s) . the busy period is over. then the completion time has the same distribution as the length of an ordinary busy period. Then each interruption of the first customer has the same distribution as the ordinary busy period. which implies E[c̃] = E[x̃] + βE[x̃]E[x̃s ]. d ∗ .  (c) The formula for the M/G/1 busy period is as follows:   Fỹ∗ (s) = Fx̃∗ s + λ − λFỹ∗ (s) . (d) Since c̃ = x̃ + ñi=0 x̃si . The relationship is explained by simply allowing the first customer of the busy period to be interrupted by any other arriving customer. P E[c̃|x] = x + βxE[x̃s ]. The relationship between the two formulas is that if the service time of the special customer has the same distribution as the length of a busy period. Thus. where ρs = βE[x̃s ].

.

d d .

.

 .

= Fx̃∗ (s).

.

.

.

Fc̃ (s).

1 − β Fx̃s (s) .

.

ds s=0 ds s=0 ds s=0 It follows that E[c̃] = E[x̃] (1 + βE[x̃s ]) . That is. ds2 ds2 x̃ ds x̃ ds x̃ ds2 x̃ Taking the limit as s → 0. (e) Note that the probability that the server is busy is just the expected value of the number of customer is service. E[c̃2 ] = E[x̃2 ] (1 + βE[x̃s ])2 + E[x̃]βE[x̃2s ]. Similarly. . if B represents the event the the server is busy. P {B} = λE[c̃] = λE[x̃](1 + ρs ) . " # 2 d2 d2 ∗ d ∗ d ∗ d2 ∗   Fc̃ (s) = F (s) 1 − β F (s) + F (s) −β F (s) .

] (b) Given the expected length of the busy period with K = 2. describe a procedure for obtaining the expected length of the busy period for the case of K = 3. The stability condition is then ρ(1 + ρs ) < 1. Each customer. i. where λef f is the average arrival rate of jobs to the server. (a) Given the expected length of the busy period for this system. while not being served or waiting. E[ỹ] P {B} = . Now. describe a procedure through which you could obtain the expected waiting time. Rewriting ρ as λE[x̃] the stability condition is then 1 1 λ< . E[ỹ] + E[ĩ] Then. hence λc is equal to the inverse of the expected cycle length. Since the λ f customer are statistically identical. [Hint: Use alternating renewal theory. . . Solution: (a) First note if B denotes the event that the system is busy. Service times are drawn independently from a general service time distribution Fx̃ (x). E[x̃] (1 + ρs ) 5-5 Consider a queueing system that services customers from a finite popula- tion of K identical customers. E[t̃] + E[w̃] + E[x̃] E[ỹ] + E[c̃] . P {B} = λef f E[x̃]. 1 λc = . by Little’s result. thinks for an exponentially distributed length of time with param- eter λ and then joins a FCFS queue to wait for service. : SOLUTION MANUAL = ρ(1 + ρs ). a customer generates one job per cycle. λc = ef K is the job generation rate per customer.e. by Little’s result. E[t̃] + E[w̃] + E[x̃] Therefore..194 QUEUEING THEORY WITH APPLICATIONS . E[x̃] E[ỹ] P {B} = K = .

Thus. Further.The Basic M/G/1 Queueing System 195 We may then solve for E[w̃] to find KE[x̃] E[w̃] = − E[x̃] − E[t̃] P {B} E[x̃] = [K − P {B}] − E[t̃] P {B} K   = E[x̃] − 1 − E[t̃] " P {B} # KE[ỹ] + KE[ĩ] = E[x̃] − E[t̃]] E[ỹ] " # E[ĩ] = E[x̃] K + K − 1 + E[t̃]. because the service can be reordered so that one customer stands aside while the busy period of the second customer completes and then the first customer is brought back and its busy period completes. X = E[x̃] + i=1 In particular. define ỹij t be the length of a busy period starting with i customers in the service system and j total customers in the system. E[ỹ] (b) Let ỹK denote the length of a busy period for a system having K customers.   E[ỹ13 ] = E[x̃] + E[ỹ13 ]P1 + E[ỹ12 ] + E[ỹ13 ] P2 . 1 − P1 − P2 . E[x̃] + E[ỹ12 ]P2 E[ỹ13 ] = . From the point of view of the second customer. ỹ23 = ỹ12 + ỹ13 . Now. for K = 3. 2 h i E[ỹ13 ] = E[x̃] + E ỹi3 |Ai P {Ai } X i=1 = E[x̃] + E[ỹ13 ]P1 + E[ỹ23 ]P2 . but the first customer sees all three customers. and let Ai denote the event that i arrivals occur during x̃1 . Then K−1 h i E[ỹ1K ] E ỹiK |Ai P {Ai } . the whole population is only two. That is. which implies E[ỹ13 ](1 − P1 − P2 ) = E[x̃] + E[ỹ12 ]P2 .

we may consider each arrival stream as being served by an independent. where λ is the arrival rate and x̃ is the holding time. 2. if we denote by ñi the number of busy servers for class i. (d) Suppose that job service times are drawn from an arbitrary distribu- tion Fx̃ (x).e. it is well known that the stochastic equi- librium distribution for the number of busy servers is Poisson with param- eter λE[x̃]. (c) Calculate the mean length of an arbitrary job that is in service at an arbitrary point in time in stochastic equilibrium.  P n 3 P3 λ i=1 ipi −λ i=1 ipi P {ñ = n} = e n! n! .196 QUEUEING THEORY WITH APPLICATIONS . 3. 3 with p1 + p2 + p3 = 1. Repeat part (c). Determine the equilibrium distribution of the number of servers that are busy serving jobs of length i for i = 1. . then (λpi i)ni e−λpi i P {ñi = n} = . Since there are an infinite number of servers. then the arrival process consists of 3 independent arrival processes with arrival rates λpi .. then by Property 1 of the Poisson process (page 44 of the text) we have that ñ is Poisson distributed with parameter λ. 2. namely the length of the busy pe- riod if the population size is two. (e) What can be concluded about the distribution of remaining service time of a customer in service at an arbitrary point in time in stochastic equilibrium for the M/G/∞ system? Solution: (a) From the problem statement. and the distribution of the number of servers that are busy serving all jobs. Since this is the sum of three independent Poisson random variables. 5-6 For the M/G/∞ queueing system. i = 1. Now. (b) Determine the probability that a job selected at random from all of the jobs in service at an arbitrary point in time in stochastic equilibrium will have service time i. 2. the equilibrium distribution of the num- ber of busy servers is Poisson with parameter λE[x̃]. 3. 2. (a) Suppose that x̃ = i with probability pi for i = 1. infinite set of servers. : SOLUTION MANUAL where E[ỹ12 ] is the given quantity. Thus. i = 1. ni ! The number of jobs in service at an arbitrary point in time is simply ñ = ñ1 + ñ2 + ñ3 . i. if x̃ = i with probability pi . . 3.

(c) From part (b). P {x̃0 = x} = ipi /E[x̃]. the probability that a job is type i is ipi /E[x̃]. λ i=1 ipi E[x̃] That is. There- fore.The Basic M/G/1 Queueing System 197 (λE[x̃])n −λE[x̃] = e . (b) Since ñ is a Poisson random variable that is obtained by the addition of 3 independent Poisson random variables. E[x̃] so that ∞ xdFx̃ (x) E[x̃2 ] Z   E[x̃0 ] = x = . E[x̃] (d) By the same argument as in part (c). . 0 E[x̃] E[x̃] (e) The distribution of the remaining service time is the same as the distri- bution of the remaining service time of a renewal interval in a renewal process whose interval distribution is Fx̃ (x). and each item is marked with color i with probability ββi . if x̃0 denotes the length of an observed job. it is as though there is a Poisson random variable with param- eter β. and n n X [ipi ] X [i2 pi ] E[x̃0 ] = i = i=1 E[x̃] i=1 E[x̃] 2 E[x̃ ] = . it follows that the propor- tion of calls of type i is simply λipi ipi P3 = . xdFx̃ (x) P {x ≤ x̃0 ≤ x + dx} = . n! as in the problem statement.

.

For ease of notation.1). [Hint: See Exercise 5.e.13]. But. v=0 v! Now.. each of the v customers have the same busy period distribution ỹ. Solution. 0 But this conditional may in turn be conditioned on the number of customers who arrive during the system time of the arbitrary customer. then the sojourn time will be the service time requirement plus the busy periods generated by those v customers. i.1 Argue the validity of (6. conditioning the Laplace-Stieltjes transform of this variable on the ser- vice time requirement of the customer. ṽ = v] X = . Exercise 6.2 Derive an expression for the Laplace-Stieltjes transform of the sojourn-time distribution for the M/G/1 system under the LCFS-PR discipline conditional on the customer’s service time requirement. ∞ E[e−ss̃ |x̃ = x] = E[e−ss̃ |x̃ = x. Then. if v customers arrive during the system time of the arbitrary customer. with ỹ = 0 with probability 1. ṽ = v]P {ṽ = v} X v=0 ∞ e−λx (λx)v E[e−ss̃ |x̃ = x. Hence. Z ∞ Fs̃∗ (s) = E[e−ss̃ |x̃ = x]dFx̃ (x). ∞ h Pv i e−λx (λx)v E[e−ss̃ |x̃ = x] = E e−s(x+ ỹ) X i=1 v=0 v! . let s̃ represent in this solution only the sojourn time of an arbitrary customer for a system having the LCFS-PR discipline.Chapter 6 THE M/G/1 QUEUEING SYSTEM WITH PRIORITY Exercise 6. Solution.

Hence. recall that Fs̃∗LCF S−P R (s) = Fỹ∗ (s). .   Fs̃∗ (s) = Fx̃∗ s + λ − λFỹ∗ (s) . we may rewrite the LST of s̃LCF S−P R as Fs̃∗LCF S−P R (s) = Fx̃∗ (u(s)) . Therefore.22). : SOLUTION MANUAL ∞ v " # X −sx Y −sỹ e−λx (λx)v = e E e v=0 i=1 v! ∞  v e−λx (λx)v e−sx Fỹ∗ (s) X = v=0 v!  v ∞ λxFỹ∗ (s) = e−sx e−λx X v=0 v! −sx −λx λxFỹ∗ (s) = e e e ∗ = e−x[s+λ−λFỹ (s)] . 1−ρ Now. Exercise 6. .3 Compare the means and variances of the sojourn times for the ordinary M/G/1 system and the M/G/1 system under the LCFS-PR dis- cipline. d2 ∗ d d ∗ d   F (s) = Fx̃ (u(s)) u(s) ds2 ỹ ds du ds  2 d2 ∗ d d ∗ d2  = F (u(s)) u(s) + F (u(s)) u(s) du2 x̃ ds du x̃ ds2 2 d2 ∗ d ∗  = Fx̃ (u(s)) 1 − λ F ỹ (s) du2 ds . To compute the mean of Fs̃∗LCF S−P R (s). since   Fỹ∗ (s) = Fx̃∗ s + λ − λFỹ∗ (s) . by (5. Thus. where u(s) = s + λ − λFỹ∗ (s).200 QUEUEING THEORY WITH APPLICATIONS . Solution. E[x̃] E[s̃LCF S−P R ] = .

du ds We then evaluate this expression at s = 0 to find d2 ∗ .The M/G/1 Queueing System with Priority 201 " # d d2 + Fx̃∗ (u(s)) −λ 2 Fỹ∗ (s) .

.

.

  2 2 2 F (s) = E[x̃ ] (1 + λE[ỹ]) + (−E[x̃]) −λE[ỹ ] ds2 ỹ .

Observe that s̃ = w̃ + x̃. ds . where w̃ and x̃ are independent since the service time in question occurs after the waiting period is over. E[x̃2 ] E[ỹ 2 ] = . Thus Var(s̃) = Var(w̃) + Var(x̃). (1 − ρ)3 so that Var(s̃LCF S−P R ) = Var(ỹ) = E[ỹ 2 ] − E 2 [ỹ] E[x̃2 ] E[x̃] 2   = − (1 − ρ)3 1−ρ E[x̃2 ] − E 2 [x̃] + ρE 2 [x̃] = (1 − ρ)3 2 E [x̃] h 2 i = C x̃ + ρ .s=0 2 ρ    2 = E[x̃ ] 1 + − E[x̃] −λE[ỹ 2 ] 1−ρ = E[ỹ 2 ]. (1 − ρ)3 It remains to compute Var(s̃). 1 − ρFx̃∗r (s) Then d2 ∗ d −2 d ∗   (1 − ρ) 1 − ρFx̃∗r (s)  F (s) = ρ Fx̃r (s) ds2 w̃ ds ds 2 d ∗   ∗ −3 = 2(1 − ρ) 1 − ρFx̃r (s) ρ Fx̃r (s) ds −2 d2 ∗ +(1 − ρ) 1 − ρFx̃∗r (s)  ρ 2 Fx̃r (s). Then solving for E[ỹ 2 ]. where we have used the above result for E[ỹ]. Rewrite Fw̃∗ (s) as 1−ρ Fw̃∗ (s) = .

" # ρ 1 + Cx̃2 E[x̃] E[s̃F CF S ] − E[s̃LCF S−P R ] = E[x̃] 1 + − 1−ρ 2 1−ρ . 1−ρ 2 so that the variance of w̃ is then Var(w̃) = E[w̃2 ] − E 2 [w̃] " !#2  ρE[x̃] Cx̃2 + 1 ρ E[x̃3 ]  = 2 + 1−ρ 2 1 − ρ 3E[x̃] " !#2 ρE[x̃] Cx̃2 + 1 − 1−ρ 2 " #2 ρE[x̃] Cx̃2 + 1 ρ E[x̃3 ]   = + . by (5.  1−ρ 2  1−ρ 3E[x̃] Now.202 QUEUEING THEORY WITH APPLICATIONS . 1−ρ 2 1−ρ 3E[x̃] Therefore.17). we find that 1−ρ 2 2 1−ρ E[w̃2 ] = 2 3 ρ E [x̃r ] + ρE[x̃2r ] (1 − ρ) (1 − ρ)2 !2  ρ2 E[x̃2 ] ρ E[x̃3 ]  = 2 + (1 − ρ)2 2E[x̃] 1 − ρ 3E[x̃] " #2 ρ E[x̃2 ] ρ E[x̃3 ]   = 2 + 1 − ρ 2E[x̃] 1−ρ 3E[x̃] " !#2 ρE[x̃] Cx̃2 + 1 ρ E[x̃3 ]   = 2 + 1−ρ 2 1−ρ 3E[x̃] Now. . Var(s̃) = Var(w̃) + Var(x̃) " #2  ρE[x̃] Cx̃2 + 1 ρ E[x̃3 ]  = + + E[x̃2 ] − E 2 [x̃] 1−ρ 2 1 − ρ 3E[x̃] " #2  ρ (1 + Cx̃2 ρ E[x̃3 ]     = E 2 [x̃] + Cx̃2 + . . ! ρE[x̃] Cx̃2 + 1 E[w̃] = . : SOLUTION MANUAL Upon evaluating this expression at s = 0 and using the relation E[x̃nr ] = E[x̃n+1 ]/(n + 1)E[x̃].

The M/G/1 Queueing System with Priority 203 " # ρ 1 + Cx̃2 1 = E[x̃] 1 + − 1−ρ 2 1−ρ " # ρ 1 + Cx̃2 ρ = E[x̃] − 1−ρ 2 1−ρ " # ρE[x̃] 1 + Cx̃ 2 = −1 1−ρ 2 " # ρE[x̃] Cx̃2 − 1 = . then upon each arrival. then E[s̃LCF S−P R ] > E[s̃F CF S ]. E[x̃3 ] = E 3 [x̃]. so that for all ρ. If Cx̃2 = 1. Since the service time is memoryless. Therefore.   ∆s̃ Cx̃2 = Var(s̃F CF S ) − Var(s̃LCF S−P R ) " !#2   ρ 1 + Cx̃2  ρ E[x̃3 ] = E 2 [x̃] + Cx̃2 +  1−ρ 2  1 − ρ 3E[x̃] E 2 [x̃] h 2 i − Cx̃ + ρ . and it does not matter which order the service occurs. which is reasoned as follows. if Cx̃2 < 1. (1 − ρ)3 That is. and if Cx̃2 > 1. the average number of customers is unaffected by service order. (   E 2 [x̃] ρ2 (1 − ρ)  2 ∆s̃ Cx̃2 = 1 + Cx̃2 (1 − ρ)3 4 ) 3 ρ E[x̃3 ] +(1 − ρ) Cx̃2 − Cx̃2 −ρ + . 12(1 − ρ)3 . ( )   E 2 [x̃] ρ2 (1 − ρ) ρ E[x̃3 ] ∆s̃ Cx̃2 = − ρ + (1 − ρ)3 4 1 − ρ 3E[x̃] 2 E [x̃]ρ = − {8 + 4ρ + ρ(1 − ρ)} < 0. 1−ρ 2 Thus. Thus. all customers are alike. 1 − ρ 3E[x̃] At Cx̃2 = 0. the service time of the customer in service is redrawn again from the same distribution as all other customers.We now compare the variances of the two distributions. This result is clear for exponential service because of Little’s result. then E[s̃LCF S−P R ] < E[s̃F CF S ]. E[s̃LCF S−P R ] = E[s̃F CF S ]. by Little’s result. Thus. the distribution of the number of customers in the system is unaffected by order of service. then E[s̃LCF S−P R ] = E[s̃F CF S ].

we conjecture that E[x̃3 ] is an in- creasing function of Cx̃2 . 4 Since x̃ is a nonnegative random variable. if 6 − 7ρ + 3ρ2 Cx̃2 > . At Cx̃2 = 1. .e. ρ(1 − ρ) . at Cx̃2 = 1 and for all ρ. This question can be answered in part by con- sidering E 2 [x̃] ρ E[x̃3 ] ∆s̃ (Cx̃2 = f (C 2 x̃ ) + . this means that var(s̃LCF S−P R ) > var(s̃F CF S ) at Cx̃2 = 1. That is. f (Cx̃2 ) is an increasing function. The question then arises as to whether Var(s̃LCF S−P R ) < var(s̃F CF S ) for any value of Cx̃2 . 3E[x̃] 3E[x̃] Hence. 2 ρ(1 − ρ) This shows that for sufficiently large Cx̃2 . (1 − ρ)3 1 − ρ 3E[x̃] where ρ2 (1 − ρ) f (Cx̃2 ) = (1 + Cx̃2 ) + (1 − ρ)3 Cx̃2 − Cx̃2 + ρ.. E 2 [x̃] ρ(1 − ρ)4     ∆s̃ Cx̃2 = 3 + (1 − ρ)3 − 1 − ρ + 2ρ(1 − ρ)2 (1 − ρ) ( 4 E 2 [x̃] = ρ − ρ2 + 1 − 3ρ + 3ρ2 − ρ3 − 1 − ρ (1 − ρ)3 ) 2 3 +2ρ − 4ρ + 2ρ ρE 2 [x̃] n 2 o = −1 − 2ρ + ρ (1 − ρ)3 ρE 2 [x̃] n o = − 3 1 + 2ρ − ρ2 < 0.204 QUEUEING THEORY WITH APPLICATIONS . d ρn o f (Cx̃2 ) = ρ(1 − ρ)Cx̃2 − [6 − 7ρ + 3ρ2 ] Cx̃2 2 ( ) ρ2 (1 − ρ) 2 6 − 7ρ + 3ρ2 = Cx̃ − . : SOLUTION MANUAL This means that var(s̃LCF S−P R ) > var(s̃F CF S ) if Cx̃2 = 0 for deterministic service. (d1 − ρ) Again. . with exponential service. Now. E[x̃2 ] 6E 3 [x̃] = = 2E 2 [x̃]. i. with exponential service.

the minimum √ value of (6 − 7ρ + 3ρ2 )/ρ(1 − ρ) √ can be shown to be at ρ = (3 − 3)/2. one would expect that var(s̃LCF S−P R ) > √ var(s̃F CF S )√for Cx̃2 < 5 + 4 3. The probability of no ρ2 Class 2 customers in service is 1 − 1−ρ 1 .2. In the case of the priority system. Solution. Exercise 6.. However. otherwise the customer will leave behind only those who arrive after and during the set-up time. either the customer in question in the first customer of the busy period or not. and at this 2 point. Recall Equation (6. and that var(s̃LCF S−P R ) < var(s̃F CF S ) for Cx̃2 < 5 + 4 3.36) 1 − ρ1 − ρ2 ρ2 Fx̃∗2r (λ1 [1 − z])  (1 − ρ1 )Fx̃∗1 (λ1 [1 − z])   Fñ1 (z) =  + .    ∗   1 − ρ1 1 − ρ1 1 − ρ1 Fx̃1r (λ1 [1 − z])  and (6.4 Compare the probability generating function for the class 1 occupancy distributions for the HOL system to that of the M/G/1 system with set-up times discussed in Section 6. one of which is the probability generating function for the number left in the system in the ordinary M/G/1 system. during the residual of the set-up time. If so. i. Recall Equation .24) 1 ρs   Fñs (z) =  Fx̃∗s (λ[1 − z]) + F ∗ (λ[1 − z])  Fñ (z).24) represent the probability generating functions for the occupancy distribution of HOL Class 1 customers and M/G/1 customers with set-up times.e. These probability generating functions have the exactly the same form in that they both are the probability generating function of the sum of two random variables. the set-up time is either 0 or x̃2r . the customer will leave behind all customers who arrive in the set-up time. since this is before ordinary customers are serviced. Therefore.The M/G/1 Queueing System with Priority 205 then f (Cx̃2 ) is an increasing function. then the probability generating function for the number of Class 1 customers who arrive is the same as for those who arrive during the residual service time of the Class 2 customer in service. the first part of the expression represents the number left due to arrivals that occur prior to the time an ordinary customer begins service in a busy period. namely Fx̃∗2r (λ1 [1 − z]). The result is that in the case of the priority system.36) and (6.   1 + ρs 1 + ρs x̃sr  where (6. the probability generating function is 1. Cx̃ > 5 + 4 3. respectively. either there is a Class 2 customer in service or there isn’t. Otherwise. depending upon whether or not a Class 2 customer is in service or not upon arrival of the first Class 1 customer of a Class 1 busy period. Do they have exactly the same form? Explain why or why not intuitively. In the case of set-up. If so. In each case.

the first part of the expression represents the number left due to arrivals that occur prior to the time an ordinary customer begins service in a busy period. .24) represent the probability generating functions for the occupancy distribution of HOL Class 1 customers and M/G/1 customers with set-up times. namely Fx̃∗2r (λ1 [1 − z]). Otherwise. the term λ1 ∗   1 − γ2  (1 − z) + λ2 {1 − Fỹ11 (λ2 [1 − z])}     1 + γ1  Fx̃∗2c (λ2 [1 − z]) − z    is the probability generating functionfor the distribution of the number of cus- tomers who arrive during the waiting time of the departing customer while the . depending upon whether or not a Class 2 customer is in service or not upon arrival of the first Class 1 customer of a Class 1 busy period. the probability generating function is 1. These probability generating functions have the exactly the same form in that they both are the probability generating function of the sum of two random variables. respectively.. The probability of no ρ2 Class 2 customers in service is 1 − 1−ρ 1 . otherwise the customer will leave behind only those who arrive after and during the set-up time. The result is that in the case of the priority system. Solution. the customer will leave behind all customers who arrive in the set-up time. . Exercise 6. The probability generating function for the number left in the system by a departing class 2 customer is readily derived from (6. the set-up time is either 0 or x̃2r . In each case.    ∗   1 − ρ1 1 − ρ1 1 − ρ1 Fx̃1r (λ1 [1 − z])  and Equation (6.e.24) 1 ρs   Fñs (z) =  Fx̃∗s (λ[1 − z]) + F ∗ (λ[1 − z])  Fñ (z). : SOLUTION MANUAL (6.36) 1 − ρ1 − ρ2 ρ2 Fx̃∗2r (λ1 [1 − z])  (1 − ρ1 )Fx̃∗1 (λ1 [1 − z])   Fñ1 (z) =  + . In the case of the priority system.36) and (6. since this is before ordinary customers are serviced. then the probability generating function for the number of Class 1 customers who arrive is the same as for those who arrive during the residual service time of the Class 2 customer in service. If so. In the case of set-up. during the residual of the set-up time. If so.206 QUEUEING THEORY WITH APPLICATIONS . either there is a Class 2 customer in service or there isn’t. i. As noted earlier.5 Derive the expression for Fñ2 (z) for the case of the HOL- PR discipline with I = 2. one of which is the probability generating function for the number left in the system in the ordinary M/G/1 system.   1 + ρs 1 + ρs x̃sr  where (6.45). either the customer in question in the first customer of the busy period or not.

ñj1 is the sum of two independent random variables. the distribution of which is Fx̃2 c (x). all others are Type 2. each being started by a Type 2 customer and generated by Type 1 and higher priority customers. Since these customers arrive in non-overlapping intervals and are due to Poisson processes. Solution. the distribution of the number of customers who arrive during the waiting time is the same whether or not servicing of the class 2 customer is preemptive. ñj1 and ñj2 are independent. ñj = ñj1 + ñj2 . which has the same distribution as the corresponding quantity in an ordinary M/G/1 queueing system having traffic intensity γj and service time equivalent to the completion time of a class j . Fñ2 (z). As before. the number of class 2 customers who arrive after the time at which the class 2 customer enters service and the same customer’s time of departure is equal to the number of class 2 customers who arrive during a completion time of the class 2 customer. Then.6 Derive expressions for Fñ1 (z).The M/G/1 Queueing System with Priority 207 term Fx̃∗2 (λ2 [1 − z]) is the probability generating functionfor the distribution of the number of cus- tomers who arrive during the service time of the departing customer. and Fñ3 (z) for the ordinary HOL discipline with I = 3. Now. the order of service is that the service takes place as a sub-busy period of a sequence of sub-busy periods. Consider the number of customers left by a class j departing cus- tomer by separating the class j customers remaining into two sub-classes: those who arrive during a sub-busy period started by a customer of their own class are Type 1. On the other hand. Extend the analysis to the case of arbitrary I. The first is the number of Type 1 customers who arrive during the waiting time of the tagged customer. As we have argued previously. The prob- ability generating functionof the number of class 2 arrivals during this time is then Fx̃∗2 c (s)|s=λ2 [1−z] = Fx̃∗2 (s + λ1 − λ1 Fỹ∗11 (s))|s=λ2 [1−z] = Fx̃∗2 (λ2 [1 − z] + λ1 − λ1 Fỹ∗11 (λ2 [1 − z])) The required probabiltiy generating function is then (1 − z) + λλ12 {1 − Fỹ∗11 (λ2 [1 − z])}    1 − γ2  Fñ2 (z) =     1 + γ1 F ∗ (λ [1 − z]) − z     x̃2c 2 Fx̃∗2 (λ2 [1 − z] + λ1 − λ1 Fỹ∗11 (λ2 [1 − z])) Exercise 6.

7) 1 − σj−1 where j X σj = Pk . which is the LST of the length of a busy period started by a high priority cus- tomer and generated by customers whose priority is higher than j.6. (6. : SOLUTION MANUAL customer.208 QUEUEING THEORY WITH APPLICATIONS . (6. (6. Specifically.6.6. Fx̃∗jc (s) = Fỹ∗j (s). the completion time of a class j customer H is simply the length of a sub-busy period started by a class j customer and generated by customers having priority higher than j. j−1 X λH = λk . (6.6. ∗ λj E[x̃j ] 1 − Fx̃jc (s) γj Fx̃∗jcr (s) = sE[x̃j ] 1 − σj−1 h 1−σi j−1 λj 1 − Fx̃∗jc (s) = s .6. Furthermore.6) sE[x̃jc ] and E[x̃j ] E[x̃jc ] = .   Fỹ∗jH (s) = Fx̃∗j s + λH − λH Fỹ∗HH (s) (6. . The pgf for the distribution of the number of Type 1 customers in this category is (1 − γj ) . 1 − Fx̃∗jc (s) Fx̃jcr (s) = (6.1) 1 − γj Fx̃∗jcr (λj [1 − z]) In turn.6. k=1 λj E[x̃j ] Since γj = 1−σj−1 .5) λH k=1 with λH = 0 and Fx̃∗H (s) undefined if j < 2. . In addition.6. That is.2).4) k=1 and j−1 1 X Fx̃∗H (s) = λk Fx̃∗k (s).3). where Fỹ∗HH (s) satisfies   Fỹ∗HH (s) = Fx̃∗H s + λH − λH Fỹ∗HH (s) . (6.

We now turn to the distribution of the number of Type 2 customers left in the system. Thus we find on making appropriate substitutions that (1 − γj )Fx̃∗j (λj [1 − z]) Fñj1 (z) = 1 − γj Fx̃∗jcr (λj [1 − z]) (1 − γj )Fx̃∗j (λj [1 − z]) = h i λj 1−Fỹ∗ (λj [1−z]) jH 1− λj [1−z] (1 − γj )(1 − z)Fx̃∗j (λj [1 − z]) = Fỹ∗jH (λj [1 − z]) − z (1 − γj )(z − 1)Fx̃∗j (λj [1 − z]) =   z − Fx̃∗j λj [1 − z] + λH − λH Fỹ∗HH (λj [1 − z]) by (6. 1 − ρH 1 − ρH and λL E[x̃L ] ρL P {L} = = . The completion time of class j customers is equal to the length of a sub-busy period of higher priority customers started by a class j customer. the event that a higher priority customer is in service excluding periods covered by events E and L. Hence λj E[x̃j ] ρj P {E} = = = γj .The M/G/1 Queueing System with Priority 209 h i λj 1 − Fỹ∗jH (s) = (6.2). Since there can be at most 1 completion time in progress at any given time. the event that the system is idle. H.8) s Returning to the second component of ñj1 . We also define E. L. the event that the system is in a completion time of a lower priority cus- tomer excluding periods of time covered by event E. 1 − ρH 1 − ρH . the completion time of a lower priority customer excluding periods of completion times of class j customers is equal to the length of a sub-busy period of a higher priority customers started by a lower priority customer.6. the event that the system is in a completion time of a class j customer.6. Such customers arrive to the system when one of the fol- lowing three events is occuring: I. Similarly. the event probabilities are readily calculated by applying Little’s result. The pgf for the distribution of the number of such customers is simply Fx̃∗j (λj [1 − z]). this is just the number of class j customers who arrive during the service time of the class j customer.

1 − ρH 1 − ρH (1 − ρ)ρH P {H} = . The number left behind for the event H|Ē is simply the number that arrive in the residual time of a high priority busy period started by a high priority cus- tomer. . Now. We then find P {H} = ρH − [P {E} − ρj ] −  [P{L} − ρL ] ρj ρL  = ρH − − ρj − − ρH . the pgf of which is given by Fỹ∗LHr (λj [1 − z]). Similarly. : SOLUTION MANUAL The proportion of time spent during events E and L serving higher priority customers is readily computed by simply subtracting ρj and ρL from P {E} and P {L} respectively. the number  left behind for the event L|Ē is simply the number that arrive in the residual life of a high priority busy period started by a low priority customer. the number of Type 2 customers left behind if a Type 1 customer  arrives to an empty system is 0. the pfg of which is given by Fỹ∗HHr (λj [1 − z]). 1 − Fỹ∗HH (s) Fỹ∗HHr (s) = sE[ỹHH ] 1 − Fỹ∗HH (s) = s E[x̃ 1−ρH H] h i (1 − ρH ) 1 − Fỹ∗HH (s) = .210 QUEUEING THEORY WITH APPLICATIONS .   Fỹ∗HH (s) = Fx̃∗H s + λH − λH Fỹ∗HH (s) . the number of Type 2 customers left in the system by an arbitrary de- parture from the system is equal to the number of Type 2 customers left in the system by an arbitrary Type 2 arrival.   Fỹ∗LH (s) = Fx̃∗L s + λH − λH Fỹ∗HH (s) . . Now. This is because each Type 2 arrival is associated with exactly one Type 1 sub-busy period. sE[x̃H ] . from exceptional first service results. Also. 1 − ρH Next we find the probabilities  1−ρ P I|Ē = 1 − γj  (1 − ρ)ρH P H|Ē = (1 − ρH )(1 − γj )  ρL P L|Ē = (1 − ρH )(1 − γj ) Now.

s 1 − λj [1 − Fỹ∗jH (s)]/s That is. h  i (1 − ρH ) 1 − Fx̃∗L s + λH − λH Fỹ∗HH (s) Fỹ∗LH (s) = .The M/G/1 Queueing System with Priority 211 Similarly. sE[x̃L ] We then find that h i ∗ 1−ρ (1 − ρ)ρH  1 − FỹHH (λj [1 − z])  Fñj2 (z) = + + 1 − γj (1 − γj )  λj [1 − z]E[x̃H ]  h  i  ρL  1 − Fx̃∗ λj [1 − z] + λH − λH Fỹ∗ (λj [1 − z])  L HH . n h io Fs̃∗j (s) = (1 − ρ) s + λH 1 − Fỹ∗HH (s) + Fx̃∗j (s) ! h  i λL 1 − Fx̃∗L s + λH − λH Fỹ∗HH (s) s − λj + λj Fỹ∗jH (s) . h i (1 − ρ)λH 1 − Fỹ∗HH (λj [1 − z]) ( Fñj (z) = (1 − ρ) + + λj [1 − z] h  i ) λL 1 − Fx̃∗L λL [1 − z] + λH − λH Fỹ∗HH (λj [1 − z]) λj [1 − z] (z − 1)Fx̃∗j (λj [1 − z]) .  h i  λH 1 − Fỹ∗HH (s)  Fs̃∗ (s) = (1 − ρ) 1 + +  s     ! 1 − Fx̃∗L s + λH − λH Fỹ∗HH (s) Fx̃∗j (s) λL   . (1 − γj )  λj [1 − z]E[x̃L ]  Thus. z − Fỹ∗jH (λj [1 − z]) We also note that since Fñj (z) = Fs̃∗j (λj [1 − z]) .

The prob- ability generating function of the number of class j arrivals during this time is .212 QUEUEING THEORY WITH APPLICATIONS . the number of class 2 customers who arrive after the time at which the class 2 customer enters service and the same customer’s time of departure is equal to the number of class j customers who arrive during a completion time of the class j customer.6. while the remainder of the expression repre- sents the probability generating function for the distribution of the number of customers who arrive during the waiting time of the class j customer. On the other hand. Solution. . The dis- tribution of the number of customers who arrive during the waiting time is the same whether or not servicing of the class j customer is preemptive.5. the term Fx̃∗j (λj [1 − z]) is the probability generating functionfor the dis- tribution of the number of customers who arrive during the service time of the departing class j customer. : SOLUTION MANUAL Exercise 6. The solution to this problem is similar to that of Exercise 6.7 Extend the analysis of the previous case to the case of HOL-PR. z − Fỹ∗jH (λj [1 − z]) or h i (1 − ρ)λH 1 − Fỹ∗HH (λj [1 − z]) ( Fñ (z) = (1 − ρ) + + λj [1 − z] h  i ) λL 1 − Fx̃∗L λL [1 − z] + λH − λH Fỹ∗HH (λj [1 − z]) λj [1 − z] ( ) (z − 1) Fx̃∗j (λj [1 − z]) z − Fỹ∗jH (λj [1 − z]) Now. The probability generating function for the number left in the system by a departing class j customer is readily derived from the results of Exercise 6. and proceeds as follows. the distribution of which is Fx̃jc (x). which are as follows h i (1 − ρ)λH 1 − Fỹ∗HH (λj [1 − z]) ( Fñj (z) = (1 − ρ) + + λj [1 − z] h  i ) λL 1 − Fx̃∗L λL [1 − z] + λH − λH Fỹ∗HH (λj [1 − z]) λj [1 − z] (z − 1)Fx̃∗j (λj [1 − z]) . .

.

Fx̃∗jc (s).

.

= Fx̃∗j (s + λH − λH Fỹ∗11 (s)).

.

.

.

s=λj [1−z] s=λj [1−z] = Fx̃∗j (λj [1 − z] + λH − λH Fỹ∗HH (λj [1 − z])). .

i=1 Similarly.8 Suppose that the service time of the customers in an M/G/1 system are drawn from the distribution Fx̃i (x) with probability pi such that PI i=1 pi = 1. I E[x̃2 ] = E[x̃2i ]pi . P 2 E[x̃i ]pi i=1 From (5. we then have ρ E[w̃] = E[x̃r ] 1−ρ I E[x̃2i ]pi P ρ i=1 = · 1−ρ I 2 P E[x̃i ]pi i=1 . Then. given x̃ is drawn from Fx̃i (x) with probability pi . we find I X E[x̃] = E[x̃|Si ]P {Si } i=1 XI = E[x̃i ]pi . I E[x̃2i ]pi P i=1 E[x̃r ] = I . Solution. X i=1 Therefore. Let Si denote the even that distribution i is selected.96). z − Fỹ∗jH (λj [1 − z]) Exercise 6. Determine E[w̃] for this system.The M/G/1 Queueing System with Priority 213 The required probabiltiy generating function is thus h i (1 − ρ)λH 1 − Fỹ∗HH (λj [1 − z]) ( Fñ (z) = (1 − ρ) + + λj [1 − z] h  i ) λL 1 − Fx̃∗L λL [1 − z] + λH − λH Fỹ∗HH (λj [1 − z]) λj [1 − z] ( (z ∗ − 1)Fx̃j (λj [1 − z]) ) + λH − λH Fỹ∗HH (λj [1 − z]).

214 QUEUEING THEORY WITH APPLICATIONS . Explain the implications of this result. Show that Ii=1 ρi E[w̃i ] = ρE[w̃] where E[w̃] is as determined in Exercise 5.9 Conservation Law (Kleinrock [1976]) Under the condi- tions of Exercise 6. i=1 (1 − σi )(1 − σi−1 ) 1−ρ . Does the result imply that the expected waiting time is independent of the priority assignment? Why or why not? If not. : SOLUTION MANUAL I λ E[x̃2i ]pi X = 2(1 − ρ) i=1 I 1 λi E[x̃2i ] X = 2(1 − ρ) i=1 I 1 X E[x̃2i ] = λi E[x̃i ] 1 − ρ i=1 2E[x̃i ] I 1 X = ρi E[x̃ri ]. (1 − σj )(1 − σj−1 ) We wish to show ρE[x̃r ] E[w̃FCFS ] = . i=1 (1 − σi )(1 − σi−1 ) Clearly. Exercise 6. . 1 − ρ i=1 where λi = λpi and ρi = λi E[x̃i ]. .9. with PI i=1 E[x̃ri ]ρi E[w̃j ] = (1 − σj )(1 − σj−1 ) ρE[x̃r ] = . it is sufficient to show I X ρi ρ = . under what conditions would equality hold? Solution. 1−ρ so that I X ρE[w̃FCFS ] = ρ ρi E[w̃i ] i=1 I X ρi = ρE[x̃r ] . This exercise refers to nonpreemptive service or ordinary HOL. suppose the customers whose service times are drawn from the distributionPFx̃i (x) are assigned priority i and the service discipline is HOL.8.

Proposition: For all j. where the weighting factors are equal to the proportion of all service times devoted to the particular class.The M/G/1 Queueing System with Priority 215 We prove the following proposition by induction. j=1 ρ That is. proof: Let T denote the truth set for the proposition. Thus. Clearly 1 ∈ T . we have I X ρi σI ρ = = . we have I X ρE[w̃FCFS ] = ρj E[w̃j ]. Continuing with the exercise. i=1 (1 − σi )(1 − σi−1 ) 1 − σI 1−ρ This proves the proposition. Suppose j − 1 ∈ T . i=1 (1 − σi )(1 − σi−1 ) 1 − σj−1 Then j j−1 X ρi X ρi ρj = + i=1 (1 − σi )(1 − σi−1 ) i=1 (1 − σ i )(1 − σ i−1 ) (1 − σ j )(1 − σj−1 ) σj−1 ρj = + (1 − σj−1 ) " (1 − σj )(1 − σj−1# ) 1 σj−1 (1 − σj ) + ρj = (1 − σj−1 ) (1 − σj )(1 − σj−1 ) " # 1 σj (1 − σj−1 ) = (1 − σj−1 ) 1 − σj σj = . j−1 X ρi σj−1 = . 1 − σj Thus j ∈ T for all j. and with j = I. I X λj E[x̃j ] E[w̃FCFS ] = E[w̃j ]. j=1 λE[x̃] . j X ρi σj = . i=1 (1 − σi )(1 − σi−1 ) 1 − σj with σ0 = 0. and the above result follows as a special case. j=1 or I X ρj E[w̃FCFS ] = E[w̃j ]. E[w̃] can be expressed as a weighted sum of the expected waiting times of individual classes. That is.

Therefore. : SOLUTION MANUAL Now.1).6. Hence I X E[x̃j ] E[ñqFCFS ] = E[ñqj ]. λE[w̃FCFS ] = E[ñq ]. .6. the average waiting time may not be conserved.1) j=1 so that a sufficient condition for E[ñHOL ] = E[ñqFCFS ] is that E[x̃j ] = E[x̃] for all j. j=1 E[x̃] But I X E[ñqHOL ] = E[ñqj ]. if the service times are not identical. a sufficient condition for E[w̃HOL ] = E[w̃F CF S ] is that E[x̃j ] = E[x̃] for all j. we see that I X λE[w̃qHOL ] = λj E[w̃j ]. j=1 λ j=1 ρ That is. and λj E[w̃j ] = E[ñqj ].216 QUEUEING THEORY WITH APPLICATIONS . by Little’s result. . j=1 which implies I I X λj X ρj E[w̃qHOL ] = E[w̃j ] 6= E[w̃j ] = E[w̃qFCFS ]. From (6. (6. .

where δ(x) is the Dirac delta function. . where K is an integer. xK }. . x1 .} as defined by (7. k = 0. 1. . Determine P {ak . then we find that dFx̃ (x) = δ(x − 1)dx. .1 Suppose that P {x̃ = 1} = 1. 2. In order to get started.6). . . 1. k = 0. let dFx̃ (x) = x∈X αk δ(x − xk ).6). Solution: if P {x̃ = 1} = 1.} as defined by (7. Determine {ak . ∞ (λx)k −λx Z ak = e dFx̃ (x) k! Z0∞ (λx)k −λx X = e αj · δ(x − xj )dx 0 k! x ∈X j ∞ (λx)k e−λx −λx X Z = αj e δ(x − xj )dx xj ∈X 0 k! . 1. . from (5. ∞ (λx)k −λx Z ak = e dFx̃ (x) Z0∞ k! (λx)k −λx = e δ(x − 1)dx 0 k! λk −λ = e for k = 0. Solution: By (7.6).6). . Define αk = P {x̃ = xk } for xk ∈ X . the service time is deterministic with mean 1. . . · · · k! Exercise 7.Chapter 7 VECTOR MARKOV CHAIN ANALYSIS: THE M/G/1 AND G/M/1 PARADIGMS Exercise 7. Thus.2 Suppose that x̃ is a discrete valued random variable having support set X = {x0 . where δ(x) is the Dirac delta function. that is.

k = 0.218 QUEUEING THEORY WITH APPLICATIONS .} as defined by (7. · · · λ+µ λ+µ This result is obvious since ak is just the probability that in a sequence of independent trials. Exercise 7.3 Suppose that x̃ is an exponential random variable with pa- rameter µ. where x̃1 and x̃2 are exponen- tial random variables with parameter µ1 and µ2 . Solution: First. 1. Determine {ak .6)). . for k = 0.4 Suppose that x̃ = x̃1 + x̃2 . respectively. : SOLUTION MANUAL X (λxj )k e−λxj = αj xj ∈X k! J X (λxj )k e−λxj = αj for k = 0. . . k µ λ  ak = . there will be k failures prior to the first success. 1. (λx)k −λx ∞ Z ak = e dFx̃ (x) Z0∞ k! (λx)k −λx −λx = e µe dx 0 k! [(λ + µ)x]k −(λ+µ)x k Z ∞ λ  = µ e dx λ+µ 0 k! (λ + µ) [(λ + µ)x]k −(λ+µ)x k Z ∞ µ λ  = e dx λ+µ λ+µ 0 k! But. we find fx̃ (x) = µ1 µ2 e−µ2 x x . . · · · j=0 k! Exercise 7. the expression inside the integral is just the density of the Erlang-(k+1) distribution with parameter (λ + µ). . Solution: By (7.} as defined by (7. Z x fx̃ (x) = fx̃2 (x − y)fx̃1 (y)dy Z0x = µ2 e−µ2 (x−y) µ1 e−µ1 y dy 0 Z x = µ1 µ2 e−µ2 x e−(µ1 −µ2 )y dy 0 If µ1 = µ2 . k = 0.6). Determine {ak . 1. . so that the expression integrates to unity. . we determine dFx̃ (x) = fx̃ (x)dx. . Therefore. 1.6)).

37). Solution: From equation (5. from Exercise 7.Vector Markov Chains Analysis 219 = µ22 xe−µ2 x = µ2 (µ2 x)e−µ2 x . Fq̃ (1) = Fq̃ (1)P Therefore. we find Fq̃ (z) [Iz − PFã (z)] = π0 [z − 1] PFã (z) So. we have that Fq̃ (z) [Iz − PFã (z)] = π0 [z − 1] PFz̃ (z) . we have µ1 µ2 −µ2 x x Z fx̃ (x) = e (µ1 − µ2 ) e−(µ1 −µ2 )y dy µ1 − µ2 0 µ1 µ2 −µ2 x h i = e 1 − e−(µ1 −µ2 )x µ1 − µ2 µ1 µ2  −µ2 x − e−µ1 x  = e µ1 − µ2 Then.e. with z = 1.3. Fã (1) = I.5 Show that the quantity Fq̃ (1) corresponds to the stationary probability vector for the phase process.38). Fq̃ (1) [I − PFã (1)] = 0 But. Solution: From (5. Fq̃ (1) is the stationary probability vector for the Markov Chain having transition matrix P. · · · µ1 − µ2 λ + µ1 λ + µ1 Exercise 7. Exercise 7. for the phase process. we find " k # k µ1 µ2 1 µ2 λ 1 µ1 λ      ak = − µ1 − µ2 µ2 λ + µ2 λ + µ2 µ1 λ + µ1 λ + µ1 k µ1 µ2 λ   = µ1 − µ2 λ + µ2 λ + µ2 k µ2 µ1 λ   − for k = 0. If µ1 6= µ2 .6 Derive equation (7. so the above equation becomes Fq̃ (1) [I − P ] = 0 i. 1. that is.37)..

and recalling that Fã (1) = I. there are ni sources in phase i. if there are a total of N sources and M phases. : SOLUTION MANUAL Differentiating with respect to z.7 Suppose there are N identical traffic sources. n2 . if Q ni is simply M i=1 Fi (z). the totals number of sources in each phase is sufficient to determine the distribution of the number of arrivals. M and i=1 ni = N . Since the latter depends solely upon the phase of the individual sources. each of which has an arrival process that is governed by an M -state Markov chain. nM ) with ni ≥ 0 for i = 1. Let T denote the truth set of N . Now. Also. since the dynamics of the sources are identical. then the pgf for the number of arrivals That is. Next. ′ h ′ i Fq̃ (1) [I − P] e + Fq̃ (1) I − PFã (1) e = π0 Pe That is. since all sources operate independently of each other. ′ h ′ i Fq̃ (1) [I − P] + Fq̃ (1) I − PFã (1) = π0 P Postmultiplying by e. N positive integers. . . · · · . ′ 1 − Fq̃ (1)PFã (1)e = π0 e Exercise 7. N which we shalldo by induction on M . First. the distribution of the total num- ber of arrivals due to all sources is simply the convolution of the distribution of the number of arrivals from the individual sources. · · · . We wish to show that the number of such states is N +M −1   N CM = . we find ′ h ′ i ′ Fq̃ (z) [Iz − PFã (z)]+Fq̃ (z) I − PFã (z) = π0 PFz̃ (z)+π0 [z − 1] PFz̃ (z) Taking the limit as z → 1. we have the proposition CMN = N +M −1  for all M . N Solution: First note that the distribution of the number of arrivals from a given source depends solely upon the current phase of the source. 2. then a typical state is represented PM by the M -tuple (n1 . the future evolution depends only upon the number of sources in each phase and not upon the which sources are in which phase. That is.220 QUEUEING THEORY WITH APPLICATIONS . show  that the number of states of the phase process is N +M −1  given by . argue that this state description completely characterizes the phase of the arrival process. Now. Suppose the state of the combined phase process is defined by an M -vector in which the ith element is the number of sources currently in phase i.

then the remaining elements sum to N − i so that M P −1 j=1 xj = N − i.. we have     0+x x+1 =1= 0 0 Now assume N −1  N −1−i+x N −1+x+1 X    = N −1−i N −1 i=0 . Assume N +M −1   N CM = . If N = 0.= i. 1.e. N for all n = 0. 2. · · · . N −i N i=0 by induction on N . N N N −i X CM = CM −1 i=0 N  N −i+M −1−1 X  = N −i i=0 N  N − i + (M − 1) − 1 N + (M − 1) − 1 X    = + N −i N i=1 N −1  N − i + (M − 1) − 1 N − (M − 1) − 1 X    = + N −i N i=0 Now. Thus. · · · . If the last element of the M -tuple. xM . 2. assume 1. we must show N  N − i + (M − 1) − 1 N +M −1 X    = N −i N i=0 To do this.Vector Markov Chains Analysis 221 the proposition. we prove N  N −i+x    X N +x+1 = . N . M −1. i. N so M = 1 ∈ T . and m = 1. · · · . Then N +1−1   C1N = =1 for all N. 2. M − 1 ∈ T .

222 QUEUEING THEORY WITH APPLICATIONS . . k = 0. 2. For (0− > 1). this transition has probability β 3 .1 for the special case of N = 3.. Each source acts independently of all others.e. Continuing in this manner results in the following table of . Therefore. if the phase vector is ijk. the transition probability is β 2 (1 − 1 β) = 3β 2 (1 − β). 3. For transition (0− > 0). 2 sources remain off andthere  is one 3 transition to the ‘on’ state. and k sources in phase 2. where the states of the phase process have the following interpretations shown in Table 7. Exercise 7. and k sources in phase 2. Phase 0 corresponds to the phase vector 300. all sources must remain in the ‘off’ state. j sources in phase 1. Thus. 1. all sources are in phase 0. and the only transitions from phase 0 are to phase 0 or to phase 1 for individual sources to 0. for i. j sources in phase 1. j. Solution: If the phase vector is (ijk) then there are i sources in phase 0. . 2. i. then there are i sources in phase 0. 1.8 Define the matrices P and Fã (a) for the model defined in Example 7. or 3. In the table.2. : SOLUTION MANUAL Then N  N  N −i+x N −i+x     X X N +x = + N −i N −i N i=0 i=0 N −1  N −1−i+x    X N +x = + N −1−i N i=0 N −1+x+1    N +x = +  N−1   N N +x N +x = + N −1 N (N + x)! (N + x)! = + (N − 1)! (x + 1)!  N !x!  (N + x)! 1 1 = + (N − 1)! (x + 1)! x + 1 N (N + x + 1)! = N ! (x + 1)!  N +x+1 = N The result now follows with x = M − 2.

Vector Markov Chains Analysis 223 transition probabilities: From To Probability 300 −> 300 β3 −> 210 2β(1 − β) −> 120 3β(1 − β) −> 030 (1 − β)3 210 −> 201 β2 −> 111 2β(1 − β) −> 021 (1 − β)2 120 −> 102 β −> 012 (1 − β) 030 −> 003 1 201 −> 300 (1 − α)β 2 −> 210 2β(1 − β)(1 − α) + β 2 α −> 120 2β(1 − β)α + (1 − β)2 (1 − α) −> 030 (1 − β)2 α 111 −> 201 β(1 − α) −> 111 (1 − β)(1 − α) + βα −> 021 (1 − β)α 021 −> 012 α −> 102 (1 − α) 102 −> 300 (1 − α)2 β −> 210 2β(1 − α)α + (1 − β)(1 − α)2 −> 120 βα2 + 2α(1 − β)(1 − α) −> 030 (1 − β)α2 012 −> 201 (1 − α)2 −> 111 2α(1 − α) −> 021 α2 003 −> 300 (1 − α)3 −> 210 3α(1 − α)2 −> 120 3α2 (1 − α) −> 030 α3 .

Prove that the final three columns of Kj (1) are zero columns for all j. Since K0 (1) = 0. Let T denote the truth set for this proposition.1) i=0 with K0 (1) = 0. Consider the proposition that the last 3 columns of Kj (1) are zero columns for all j ≥ 0. we use the formula 2 Ai [Kj−1 (1)]i X Kj (1) = for j ≥ 1. suppose we compute K(1) iteratively. X PFã (z) = i=0 Therefore.224 QUEUEING THEORY WITH APPLICATIONS . (7. .  0 0 0 0 0 0 0 0 (1 − α) α 0 0 0 0 0 0 0 and 0 0 0 0 0 0   0 0 0 0 0 0   0 0 0 0 0 1 A2 =  0   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Now. : SOLUTION MANUAL Exercise 7.  βα + (1 − β)(1 − α) α(1 − β) 0 0 0  0 0 0 0 0 0 (1 − α)2 2α(1 − α) α2 0 0 0 0 0 0 0 0 0   0  0 0 β (1 − β) 0 0 0 0 0 0 0 A1 =  0 . 0 ∈ T . K1 (1) = A0 . From these specifications. X K(1) = i=0 with β2 2β(1 − β) (1 − β)2 0 0 0     0 0 0 0 0 0  0 0 0 0 0 0 A0 =   β(1 − α) . we specified Fã (z) and P. Also. . .9 In the previous example. that is. Solution: The proof is by induction. we have 2 Ai z i . 2 Ai [K(1)]i . so that 1 ∈ T .

Hence. K10 0 and K00 0    e K(1)e =  K10 0 e K00 e = K  10 e e = e Therefore. P with A = B = Kj−1 (1). But K00 0   K(1) = . and with A = A2 . recall that κ is the stationary vector of K(1). we have A2 [Kj−1 (1)]2 with its last 3 columns equal to zero. K(1)e = e. Solution: Recall that K(1) stochastic means that K(1) is a one-step transition probability matrix for a discrete parameter Markov chain. thus κ · K(1) = κ. where κ0 is the stationary probability vector for K00 . Then. and btk = 0 for all k. Bearing in mind that K(1) is stochastic. j ∈ T . Thus. Now. the sum leading to Kj (1) also has its last 3 columns equal to zero. That is. (7.2) K10 0 where K00 is a square matrix.10 Suppose K(1) has the form K00   0 K(1) = .Vector Markov Chains Analysis 225 Now suppose (j − 1) ∈ T . Therefore. the element of the rows of K00 sum to unity and are nonnegative because they come from K(1). with A = A1 . Kj (1) = A0 + A1 Kj−1 (1) + A2 [Kj−1 (1)]2 . and the proof is complete. B = [Kj−1 (1)]2 . then the k-th column of D is also 0 because clk = t alt btk . we have the last 3 columns of A1 Kj−1 (1) = 0. Since K00 is square it is a legitimate one-step transition probability matrix for a discrete paramter Markov chain. Also. Exercise 7. Therefore. prove that K00 is also stochastic and that κ has the form [ κ 0 ]. Now. K00   0 [ κ0 κ1 ] = [ κ0 K00 + κ1 K10 0] K10 0 . if D = AB. we have [Kj−1 (1)]2 with its last 3 columns equal to zero. with κe = 1 . and the k-th column of B is a zero column. Since this is also true of A0 . all entries are nonnegative and the elements of every row sum to unity. B = Kj−1 (1).

Now. We may also show [K0 (1)]n is stochastic by induction on n. Furthermore. n − 1 ∈ T . so that κ1 = 0. since κe = [ κ0 0 ] · [ e e ]T = 1. 1. [Kj−1 ]i e = e. : SOLUTION MANUAL = [ κ0 κ1 ] . ∞ Ai z i X PFã (z) = i=0 Then. Ai is a matrix of probabilities and is therefore nonnegative. Exercise 7. if [K0 (1)]n−1 is stochastic.11 Prove Theorem 7. ∞ X PFã (1) = P = Ai . .226 QUEUEING THEORY WITH APPLICATIONS . and we find that κ has the form [ κ0 0 ]. ∞ X Kj (1)e = Ai e i=0 . Thus. κ0 K00 = κ0 . Thus. [K0 (1)]n is nonnegative for all n. And clearly. · · · . Then 0 ∈ T since [K0 (1)]0 = I. with z = 1. First. Therefore. we find κ0 e = 1. Hence [K0 (1)]n−1 is nonnegative and [K0 (1)]n−1 e = e. Now. X Kj (1) = i=0 so that Kj (1) is nonnegative for all j ≥ 0. This proves the proposition. Thus. Now assume that 0. . ∞ Ai [Kj−1 ]i e. X Kj (1)e = i=0 and by the above proposition. Let T be the truth set for the propostion. ∞ Ai [Kj−1 ]i . which is stochastic. Solution: By definition. note that [K0 (1)]n represents an n-step transition probability matrix for a discrete parameter Markov chain so that this matrix is stochastic. then so is Kn (1). i=0 so that the latter sum is stochastic.3. so that κ0 is the stationary probability vector for K00 . It follows that [K0 (1)]n = K0 (1) [K0 (1)]n−1 = K0 (1)e = e.

and continue step by step until a function of (C) (C−1) Fq̃ (z) is substituted for Fq̃ (z). eφ [I − P + eφ] = eφ − eφ + eφ = eφ.49). Therefore. we find eφ = eφ [I − P + eφ]−1 as required. then a func- (2) (1) tion of Fq̃ (z) for Fq̃ (z).Vector Markov Chains Analysis 227 = Pe = e. For i > 0. therefore. Exercise 7.12 Suppose that P is the one-step transition probability ma- trix for an irreducible discrete-valued. given that [I − P + eφ] is nonsingular. so that φP = φ and φe = 1. .13 Define (1) 1 (i+1) 1 h (i) i Fq̃ (z) = [Fq̃ (z) − π0 ] and Fq̃ (z) = Fq̃ (z) − πi . i ≥ 1.51). z we find (1) Fq̃ (z) = zFq̃ (z) + π0 . Prove that eφ [I − P + eφ] = eφ and that. Show that at each step. eφ = eφ [1 − P + eφ]−1 . Solution: Starting with (1) 1 Fq̃ (z) = [Fq̃ (z) − π0 ] . discrete-parameter Markov chain. Solution: Observe that eφ [I − P + eφ] = eφ − eφP + eφeφ But φ is the stationary probability vector for P. one element of C−1 πj z j A(z) X j=0 is eliminated. we find (i) (i+1) Fq̃ (z) = zFq̃ (z) + πi . resulting in (7. z z (1) Starting with (7. Exercise 7. Define φ to be the stationary probability vector for the Markov chain. so that Kj (1) is also stochastic. Thus. substitute a function of Fq̃ (z) for Fq̃ (z).

X X z zFq̃ (z) + π1 j=0 j=1 Simplifying.228 QUEUEING THEORY WITH APPLICATIONS . X (7. X Fq̃ j=0 . X zFq̃ (z) + π0 j=0 or h i h i C−1 h i (1) zFq̃ (z) z C I − A(z) = −π0 z C I − A(z) + πj z C Bj (z) − z j A(z) . X X j=0 j=1 (2) (1) We now substitute zFq̃ (z) + π1 for Fq̃ (z) in the previous equation to find h ih i C−1 C−1 (2) z C I − A(z) = −π0 z C I + πj z C Bj (z)− z j A(z). X j=0 Upon simplifying. we get h i C−1 C−1 2 (2) C C C C z j A(z). X X j=0 j=0 or h i C−1 h i (C) (z) z C I − A(z) = πj Bj (z) − z j . . X X j=0 j=0 or h i C−1 C−1 (1) zFq̃ (z) z C I − A(z) = −π0 z C I + πj z C Bj (z) − z j A(z). X X z Fq̃ (z) z I − A(z) = −π0 z I−zπ1 z + πj z Bj (z)− j=0 j=2 Continuing in this manner. we find h i h i C−1 C−1 (1) zFq̃ (z) z C I − A(z) = −π0 z C I − A(z) + πj z C Bj (z)− z j A(z).49) j=0 we find h ih i C−1 h i (1) z C I − A(z) = πj z C Bj (z) − z j A(z) . we get h i C−1 C−1 (C) z C Fq̃ (z) z C I − A(z) = −z C πj z j + πj z C Bj (z). . : SOLUTION MANUAL Given h i C−1 h i Fq̃ (z) z C I − A(z) = πj z C Bj (z) − z j A(z) .

E [q̃] = dz Fq̃ (1)e because Fq̃ (z)e is the generat- ing function for the occupancy distribution.Vector Markov Chains Analysis 229 which is the required result. . We then have C−1 d iπi e + Cg [I − F ]−1 He + gF [I − F ]−2 He. nonnegative random d d2 Fx̃ (z)kz=1 and E x̃2 = dz   variable. X E [q̃] = Fq̃ (1)e = dz i=1 and h i C−1 e q̃ 2 = i(i − 1)πi e + C(C − 1)g [I − F ]−1 H X E i=2 +2CgF [I − F ]−2 He + gF 2 [I − F ]−3 He + E [q̃] . The appropriate derivatives are C−1 d iz i−1 πi + Cz C−1 g [I − F z]−1 H + z C gF [I − F z]−2 H.68) i=0 we just use the properties of moment generating functions to compute the required moments. develop an expression for the first and second moments of the queue length distribution. Recall that for any integer valued. X Fq̃ (z) = (7. In this case.68). Solution: Starting with C−1 z i πi + z C g [I − F z]−1 H. X Fq̃ (z) = dz i=1 and C−1 d2 i(i − 1)z i−2 πi + C(C − 1)z C−2 g [I − F z]−1 H X Fq̃ (z) = dz 2 i=2 +2Cz C−1 gF [I − F z]−2 H + z C gF 2 [I − F z]−3 H. x̃ we have E [x̃] = dz 2 Fx̃ (z)kz=1 + d d dz Fx̃ (z)kz=1 . Exercise 7.14 Beginning with (7.

. i = 1. 2. 2. we have   Ae Ao [ φ2e φ2o ] = [ φ2e φ2o ] Ao Ae . .15 Suppose C = 2. .230 QUEUEING THEORY WITH APPLICATIONS . . i=0 2i−1 z A P i=0 A2i z Upon setting z = 1. : SOLUTION MANUAL Exercise 7. X Â(z) = i=0 and φ2 to be the stationary vector of Â(1). (b) "∞ # X φ iAi e = 2ρ. we have φ2 = φ2 Â(1) and φ2 e = 1. Suppose "∞ # X ρ = φ2 iÂi e. we have  P∞ P∞  A i=0 A2i+1 . . i = 1. i=0 i=0 We then have   Ae Ao Â(1) = . . . i=0 Solution: (a) Given   A2i A2i+1 Âi = . . . Define   A2i A2i+1 Âi = . . suppose φ2 = [ φ2e φ2o ] is the stationary vector of Â(1). A2i−1 A2i ∞ Âi z i . A2i−1 A2i we have  P∞ i P∞ i A z i=0 A2i+1 z Â(z) = P∞i=0 2i i ∞ i . Â(1) = P∞i=0 2i P∞ i=0 A2i−1 i=0 A2i Define ∞ X ∞ X Ae = A2i and Ao = A2i+1 . Ao Ae Now. i=0 Show that 1 (a) φ2 = 2 [ φ φ ]. From φ2 = φ2 Â(1). Then. where φ is the stationary vector of A(1).

(b) Given  P∞ P∞ A z i i Â(z) = P∞i=0 2i i i=0 A2i+1 z .Vector Markov Chains Analysis 231 from which we have φ2e = φ2e Ae + φ2o Ao and φ2o = φ2e Ao + φ2o Ae . Therefore. 1 P∞ 1 P∞ P∞ d i=0 2iA2i i=0 (2i + 1)A2i+1 − 12 i=0 A2i+1 . ∞ i i=0 2i−1 z A P i=0 A2i z we have ∞ P∞ d iA z (i−1) (i−1)   P iA z Â(z) = P∞i=0 2i (i−1) i=0 P∞ 2i+1 (i−1) . so we have φA(1) = φ and φe = 1 because φ is the stationary vector of A(1). That is. We then have 1 1 1 1 φ = φAe + φAo = φ [Ae + Ao ] . if we set φ2 = 12 [ φ φ ]. φ2 is the unique stationary vector of Â(1). Now suppose we set φ2e = φ2o = 12 φ. then φ2 satisfies φ2 = φ2 Â(1). φ2 e = 1. 2 2 2 2 But. Ae + Ao = A(1). dz i=1 iA2i−1 z i=0 iA2i z So that ∞ P∞ d  P  iA i=0 iA2i+1 . Â(z)kz=1 = P∞i=0 2i P ∞ dz i=1 iA2i−1 i=0 iA2i Hence.

  Â(z).

.

we find P∞ P∞ d 1 i=0 2iA2i e + 1 (2i + 1)A2i+1 e − 21 ∞ P i=0 A2i+1 e . = 2 2 1 P∞ P∞ 1 P∞ dz z=1 2 i=1 (2i − 1)A2i−1 + 21 i=1 A2i−1 2 i=0 2iA2i If we now postmultiply by e.

  2 2 i=0P Â(z).

e = .

1 P∞ i=1 iA2i−1 e + 21 ∞ 1 i=1 A2i−1 e + 2 ∞ P dz z=1 2 i=0 2iA2i e Then. upon changing the index of summation. we find 1 P∞ 1 P∞ d iAi e − A2i+1 e .

  Â(z).

e = 2 Pi=0 2 i=0 .

1 ∞ 1 P∞ dz z=1 2 i=0 iAi e + 2 i=1 A2i−1 e 1 Then premultiplying by φ2 = 2 [ φ φ ] yields ∞ d .

1 X φ2 Â(z).

e = φ iAi e. .

i=0 dz . dz z=1 2 i=0 Therefore. ∞ X d φ iAi e = 2φ2 Â(z)kz=1 e = 2ρ.

(g) Write a computer program to determine K(1) for the parameters given in part (f). (h) Compute E[q̃]. the system occupancy level is k > 0 and the phase of the arrival process is i. 1.3. j = 0. i = 0. i = 0. the system occupancy level is zero and the phase of the arrival process is i. . (f) Let α = 0.75.2. arrivals to the system occur accord- ing to a Poisson process with rate λi . : SOLUTION MANUAL 7. and then determine the equilibrium probability vector for the Markov chain for which K(1) is the one-step transition probability matrix. k = {0. l = {k − 1. Determine the probability that the system occupancy level at the end of the following time slot will be l.5. 1. (d) Suppose that at the end of a time slot. k. β = 0. j = 0. What can be said about the effect of burstiness on the average system occupancy? Solution: . Determine the equi- librium probability vector for the phase process and ρ for the system.232 QUEUEING THEORY WITH APPLICATIONS . (a) Argue that A(z) = B(z) for this system. Compare the result to the equivalent mean value E[q̃] for the system in which λ0 = λ1 = ρ as computed in part (f). 1} that has the following one-step transition probability matrix: 1−β   β P= . 1−α α While the arrival process is in phase i. k + 1} and the phase of the arrival process will be j. Suppose the arrival process to the system is a Poisson process mod- ulated by a discrete time Markov chain on {0. 1. (e) Suppose that at the end of a time slot. Determine the probability that the system occupancy level at the end of the following time slot will be k. i = 0. · · ·} and the phase of the arrival process will be j. (b) Show that 1−β e−λ0 (1−z)    β 0 A(z) = . . 1.1 Supplementary Problems 7-1 Consider a slotted time. single server queueing system having unit service rate. the expected number of packets in the system at the end of an arbitrary time slot. 1. 1. 1−α α 0 e−λ1 (1−z) (c) Determine Ai for all i ≥ 0. and λ1 = 0. λ0 = 1.

there is no difference between the two. this probability matrix is given by 1−β e−λ0    β 0 A0 = . then the phase will be 0 at the end of the next time slot with probability β. Hence βe−λ0 (1−z) (1 − β)e−λ1 (1−z)   A(z) = −λ (1−z)  (1 − α)e 0 (1 − β)e−λ1 (1−z)    −λ β 1−β e 0 (1−z) 0 = . In particular. Similarly. ℘˜ = 1|ñ = 0. 1−α α 0 e−λ1 (1−z) (c) From the formula for the Poisson distribution. 1−α α 0 e−λ1 Now. ℘˜ = 0|ñ = 0.Vector Markov Chains Analysis 233 (a) The matrices Bj represent the number of arrivals that occur in the first slot of a busy period while the matrices Aj represent the number of arrivals that occur in an arbitrary slot. (b) The matrix A(z) represents the generating function for the number of arrivals that occur during a time slot. ℘˜ = 1} = α k! (e) The level wil be k − 1 if there are no arrivals. 1−α α 0 λ1 e−λ1 . ℘˜ = 0} = β k! λk1 P {ñ = k. ℘˜ = 1|ñ = 0. ℘˜ = 0} = (1 − α) k! λk1 P {ñ = k. Since arrivals are Poisson and the slots are of fixed length. 1−α α 0 i! (d) λk0 P {ñ = k. l = k if exactly one arrival given by 1−β λ0 e−λ0    β 0 A1 = . and the distribution of the number of arrivals will be Poisson with parameter λ0 .  " λi0 # 1−β 0  β i! Ai = λi1 . ℘˜ = 0} = (1 − β) k! λk0 P {ñ = k. then it will be zero with probability (1 − α) and the distribution of the number of arrivals will be Poisson with parameter λ0 . if the phase is initially 1. ℘˜ = 0|ñ = 0. if the phase of the phase process is 0 at the end of a given time slot.

3) = 0. Thus.486555 0. The result is   0. 1. Also.25 1 = = .75 3 Thus.5 − 0. 1−α α where xe = 1.39).75 3 and 1/(1 − α) P {phase 1} = 1/(1 − β) + 1/(1 − α) 1−β = 2−α−β 0.5 − 0. G(1) and K(1) are also identical. 1/(1 − β) P {phase 0} = 1/(1 − β) + 1/(1 − α) 1−α = 2−α−β 0. Therefore. 0. we can simply take ratios of the mean time in each phase to the mean of the cycle time.644321 Note that the result is a legitimate one-step transition probability ma- trix for a discrete parameter Markov chain. i = 0. P {ñ0 = k} = (1 − β)β k−1 implies 1 1 E[ñ0 ] = 1−β . 2 1 ρ = λ0 + λ1 3 3 2 1 = (1. Let ñi be the number of slots in phase i. . E[ñ1 ] = 1−α . 2 − 0. Similarly. We may then solve for K(1) by using (5.2) + (0.5 2 = = . 2 − 0.234 QUEUEING THEORY WITH APPLICATIONS . : SOLUTION MANUAL and l = k + 1 if exactly two arrivals given by  " λ20 e−λ0 # 1−β 0  β 2! A2 = λ21 e−λ1 1−α α 0 2! (f) The equilibrium distribution for the phase process is found by solving 1−β   β x=x .9 3 3 (g) Since A(z) and B(z) are identical.513445 K(1) = . The equilibrium may be . .355679 0.

the result is κ = [ 0. (h) We may compute E[q̃] from (5. we must first define each term in the equation. The relevant adjoint.319.  −λ0 [1−z]  e 0 Fã (z) = 0 e−λ1 [1−z] so that.3 and λ20     (2) 0 1.44 0 Fã (1) = 2 = . 0.0409239 0. 0 λj1 e−λ1 [1−z] In particular.41) yeilds E[q̃] = 6. is   0. All of the terms are now defined. 0 λ1 0 0.409239 0. if normalized so that the row sums are both unity. in turn.355679 0.2 0 Fã (1) = = .10).3333 ].0590761 ] . the appropriate formula is (6.6667 0.41).355679 0. . h i (1) π0 = 1 − Fq̃ (1)Fã (1)e κ = [ 0.     (1) λ0 0 1.590761 ] . in general. For the M/G/1 case. it can be seen that the effect of burstiness is to increase the average system occupancy.47).95. λj0 e−λ0 [1−z] " # (j) 0 Fã (z) = . As has been observed many times before.Vector Markov Chains Analysis 235 obtained by normalizing any row of the adjoint of I − K(1). 0 λ1 0 0. Substituting these into (5.513445 adj I − K(1) = . Also. From this.513445 and. From part g.09 From (5. which is ρ Cx̃2 + 1    E[ñ] = ρ 1 +     1−ρ 2  = 4. Fq̃ (1) = [ 0. In order to do this.

70) as z → 1 to obtain h i (1) (1) Fq̃ (1) [I − P] = π0 P − Fq̃ (1) I − PFã (1) . Use the fact that eFq̃ (1) [I − P + eFq̃ (1)] = eFq̃ (1). (7.73) . take limits on both sides as z → 1. Our point of departure is the expression for the generating function of the occupancy distribution as given in (7.37).70) with respect to z. postmultiply both sides by e. (1) (1) (d) Add Fq̃ (1)eFq̃ (1) to both sides of (7.71).72) [I − P + eFq̃ (1)]−1 . as shown in Exercise 7.12 in Section 7. . . and (1) then postmultiply by PFã (1)e to obtain (1) (1) Fq̃ (1)PFq̃ (1) = Fq̃ (1)eρ (1) +{π0 P − Fq̃ (1)[I − Fã (1)]} (7. for the slotted M/D/1 system having phase- dependent arrivals and unit service times. and then rearrange terms to find (1) (1) (1) 1 (1) (1) Fq̃ (1)PFã (1)e = Fq̃ (1)e − Fq̃ (1)Fã (1)e 2 (1) −π0 PFã (1)e (7. E[q̃].70) (b) Take limits of both sides of (7. (7.71) (c) Define ρ to be the marginal probability that the system is not empty at the end of a slot. and show (1) that ρ = Fq̃ (1)Fã (1)e. solve for Fq̃ (1).236 QUEUEING THEORY WITH APPLICATIONS . 7-2 The objective of this problem is to develop a closed-form expression for the mean queue length. (e) Differentiate both sides of (7.71) by e.3. Postmultiply both sides of (7.69) (a) Differentiate both sides of (7. : SOLUTION MANUAL a Poisson assumption is not valid in many cases where an observed phenomena shows that the traffic process is not smooth. which is now repeated for continuity: Fq̃ (z) [Iz − PFã (z)] = π0 [z − 1] PFã (z).69) to obtain h i (1) (1) Fq̃ (z) Iz − PFã (z) + Fq̃ (z) [Iz − PFã (z)] (1) = π0 [z − 1]PFã (z) + π0 PFã (z) (7.

The [I − P] e = 0. the result follows. Considering the term (1) Fq̃ (1)PFã (1)e. (c) From Part(b).71) Now. Fq̃ (1)P = Fq̃ (1) because Fq̃ (1) is the stationary vector for P. and then solve for Fq̃ (1)e to obtain (1) E[q̃] = Fq̃ (1)e 1 n1 (2) (1) = Fq̃ (1)Fã (1)e + π0 PFã (1)e 1−ρ 2  h i (1) + π0 P − Fq̃ (1) I − Fã (1) o (1) × [I − P + eFq̃ (1)]−1 PFã (1)e .70) from Part (a) and using the fact that limz→1 Fã (z) = I. In summary.Vector Markov Chains Analysis 237 (1) (f) Equate right-hand sides of (7.72) and (7. h i (1) (1) Fq̃ (1) [I − P] = π0 P − Fq̃ (1) I − PFã (1) . Also.69) by setting z = 1. Considering the RHS.71) . this fact can easily be seen from (7. and Pe = e because P is the one-step transition probability matrix for a DPMC. (b) Given h i (1) (1) Fq̃ (z) Iz − PFã (z) + Fq̃ (z) [Iz − PFã (z)] (1) = π0 [z − 1]PFã (z) + π0 PFã (z) (7. π0 Pe = π0 e. Fq̃ (1)Ie = Fq̃ (1)e = 1 because Fq̃ (1) is a vector of probability masses covering all possible states. Solution: (a) This part is accomplished by simply using the formula d d d     u(z)v(z) = u(z) v(z) + u(z) v(z) dz dz dz with u(z) = Fq̃ (z) and v(z) = [Iz − PFã (z)]. which is the probability the occupancy is equal to zero. (d) Starting with h i (1) (1) Fq̃ (1) [I − P + eFq̃ (1)] = π0 P −Fq̃ (1) I − PFã (1) .73). (7. Ie = e. (7. we have h i (1) (1) 0 = π0 e− 1 − Fq̃ (1)Fã (1)e or 1−π0 e = ρ = Fq̃ (1)Fã (1)e.

Hence. : SOLUTION MANUAL (1) we add Fq̃ (1)eFq̃ (1) to both sides to get (1) Fq̃ (1) [I − P + eFq̃ (1)] h i (1) (1) = Fq̃ (1)eFq̃ (1) + π0 P − Fq̃ (1) I − PFã (1) . (1) Again using Fq̃ (1)P = Fq̃ (1) and ρ = Fq̃ (1)Fã (1)e leads to (1) (1) (1) Fq̃ (1)PFã (1)e = Fq̃ (1)eρ n h io (1) + π0 P + Fq̃ (1) I − Fã (1) (1) [I − P + eFq̃ (1)]−1 PFã (1)e. We now postmultiply both sides by [I − P + eFq̃ (1)]−1 to obtain (1) (1) Fq̃ (1) = Fq̃ (1)eFq̃ (1) [I − P + eFq̃ (1)]−1 n h io (1) + π0 P + Fq̃ (1) I − PFã (1) [I − P + eFq̃ (1)]−1 . we have (1) (1) (1) (1) Fq̃ (1)PFã (1)e = Fq̃ (1)eFq̃ (1)PFã (1)e n h io (1) + π0 P + Fq̃ (1) I − Fã (1) (1) [I − P + eFq̃ (1)]−1 PFã (1)e. (1) Upon multiplying both sides by PFã (1)e. . We then note that since eFq̃ (1) [I − P + eFq̃ (1)] = eFq̃ (1). .238 QUEUEING THEORY WITH APPLICATIONS . (1) (1) Fq̃ (1) = Fq̃ (1)eFq̃ (1) n h io (1) + π0 P + Fq̃ (1) I − Fã (1) [I − P + eFq̃ (1)]−1 . In addition. it is also true that eFq̃ (1) = eFq̃ (1) [I − P + eFq̃ (1)] [I − P + eFq̃ (1)]−1 . Fq̃ (1)P = Fq̃ (1) because Fq̃ (1) is the stationary vector of P. .

2 (f) The final result is obtained by simple algebraic operations. These operations do not involve any other equations. This yields (1) 1 (2) (1) Fq̃ (1)e(1 − ρ) = Fq̃ (1)Fã (1)e + π0 PFã (1) n 2h io (1) (1) + π0 P + Fq̃ (1) I − Fã (1) [I − P + eFq̃ (1)]−1 PFã (1)e. Upon (1) (1) solving for Fq̃ (1)PFã (1)e.Vector Markov Chains Analysis 239 (e) Upon taking the derivative of both sides of (7. (2) where we have used the fact that Fq̃ (1) [I − PFã (1)] e = 0. We begin with (1) 1 (2) (1) (1) Fq̃ (1)e − Fq̃ (1)Fã (1)e − π0 PFã (1) = Fq̃ (1)eρ n 2 h io (1) (1) + π0 P + Fq̃ (1) I − Fã (1) [I − P + eFq̃ (1)]−1 PFã (1)e. we find h i (2) (1) (1) Fq̃ (z) [Iz − PFã (z)] + 2Fq̃ (z) I − PFã (z) (2) (2) (1) −Fq̃ (z)PFã (z) = π0 [z − 1]PFã (z) + 2π0 PFã (z) Then multiplying by e and taking the limit as z → 1 yields h i (1) (1) (2) (1) 2Fq̃ (1) I − PFã (1) e − Fq̃ (1)PFã (1)e = +2π0 PFã (1).70). . we find (1) (1) (1) 1 (2) (1) Fq̃ (1)PFã (1)e = Fq̃ (1)e − Fq̃ (1)Fã (1)e − π0 PFã (1)e. The result follows by dividing both sides by (1 − ρ).