QUEUEING SYSTEMS
VOLUME I: THEORY
Leonard Kleinrock
Professor
Computer Science Department
School of Engineering and Applied Science
University of California, Los Angeles
A Wiley-Interscience Publication
John Wiley & Sons
New York • Chichester • Brisbane • Toronto
"Ah, 'All things come to those who wait.'
They come, but often come too late."
From Lady Mary M. Currie: Tout Vient à Qui Sait Attendre (1890)
Copyright © 1975, by John Wiley & Sons, Inc.
All rights reserved. Published simultaneously in Canada.
Reproduction or translation of any part of this work beyond that permitted by Sections 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc.
Library of Congress Cataloging in Publication Data:
Kleinrock, Leonard.
Queueing systems.
"A Wiley-Interscience publication."
CONTENTS: v. 1. Theory.
1. Queueing theory. I. Title.
T57.9.K6 519.8'2
ISBN 0-471-49110-1
Preface
How much time did you waste waiting in line this week? It seems we cannot escape frequent delays, and they are getting progressively worse! In this text we study the phenomena of standing, waiting, and serving, and we call this study queueing theory.
Any system in which arrivals place demands upon a finite-capacity resource may be termed a queueing system. In particular, if the arrival times of these demands are unpredictable, or if the size of these demands is unpredictable, then conflicts for the use of the resource will arise and queues of waiting customers will form. The lengths of these queues depend upon two aspects of the flow pattern: first, they depend upon the average rate at which demands are placed upon the resource; and second, they depend upon the statistical fluctuations of this rate. Certainly, when the average rate exceeds the capacity, then the system breaks down and unbounded queues will begin to form; it is the effect of this average overload which then dominates the growth of queues. However, even if the average rate is less than the system capacity, then here, too, we have the formation of queues due to the statistical fluctuations and spurts of arrivals that may occur; the effect of these variations is greatly magnified when the average load approaches (but does not necessarily exceed) that of the system capacity. The simplicity of these queueing structures is deceptive, and in our studies we will often find ourselves in deep analytic waters. Fortunately, a familiar and fundamental law of science permeates our queueing investigations. This law is the conservation of flow, which states that the rate at which flow increases within a system is equal to the difference between the flow rate into and the flow rate out of that system. This observation permits us to write down the basic system equations for rather complex structures in a relatively easy fashion.
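In symbols (the notation here is ours, not the book's): if N(t) denotes the amount of flow stored within the system at time t, the conservation law reads

```latex
\frac{dN(t)}{dt} \;=\; \lambda_{\mathrm{in}}(t) \;-\; \lambda_{\mathrm{out}}(t)
```

where λ_in(t) and λ_out(t) are the instantaneous flow rates into and out of the system.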
The purpose of this book, then, is to present the theory of queues at the first-year graduate level. It is assumed that the student has been exposed to a first course in probability theory; however, in Appendix II of this text we give a probability theory refresher and state the basic principles that we shall need. It is also helpful (but not necessary) if the student has had some exposure to transforms, although in this case we present a rather complete
transform theory refresher in Appendix I. The student is advised to read both appendices before proceeding with the text itself. Whereas our material is presented in the language of mathematics, we do take great pains to give as informal a presentation as possible in order to strike a balance between the abstractions usually encountered in such a study and the basic need for understanding and applying these tools to practical systems. We feel that a satisfactory middle ground has been established that will neither offend the mathematician nor confound the practitioner. At times we have relaxed the rigor in proofs of uniqueness, existence, and convergence in order not to cloud the main thrust of a presentation. At such times the reader is referred to some of the other books on the subject. We have refrained from using the dull "theorem-proof" approach; rather, we lead the reader through a natural sequence of steps and together we "discover" the result. One finds that previous presentations of this material are usually either too elementary and limited or far too elegant and precise, and almost all of them badly neglect the applications; we feel that a book such as this, which treads the boundary in between, is necessary and useful. This book was written over a period of five years while being used as course notes for a one-year (and later a two-quarter) sequence in queueing systems at the University of California, Los Angeles. The material was developed in the Computer Science Department within the School of Engineering and Applied Science and has been tested successfully in the most critical and unforgiving of all environments, namely, that of the graduate student. This text is appropriate not only for computer science departments, but also for departments of engineering, operations research, mathematics, and many others within science, business, management and planning schools.
In order to describe the contents of this text, we must first describe the very convenient shorthand notation that has been developed for the specification of queueing systems. It basically involves the three-part descriptor A/B/m that denotes an m-server queueing system, where A and B describe the interarrival time distribution and the service time distribution, respectively. A and B take on values from the following set of symbols whose interpretation is given in terms of distributions within parentheses: M (exponential); Er (r-stage Erlangian); HR (R-stage hyperexponential); D (deterministic); G (general). Occasionally, some other specially defined symbols are used. We sometimes need to specify the system's storage capacity (which we denote by K) or perhaps the size of the customer population (which we denote by M), and in these cases we adopt the five-part descriptor A/B/m/K/M; if either of these last two descriptors is absent, then we assume it takes on the value of infinity. Thus, for example, the system D/M/2/20 is a two-server system with constant (deterministic) interarrival times, with exponentially distributed service times, and with a system storage capacity of size 20.
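The shorthand just described can be made concrete with a small illustrative helper. This sketch is ours, not the book's; the function name and defaults are invented for illustration, but the symbol table and the convention that omitted K and M mean infinity follow the description above.

```python
# Illustrative sketch: expand a Kendall-style descriptor A/B/m/K/M into words.
DISTRIBUTIONS = {
    "M": "exponential",
    "Er": "r-stage Erlangian",
    "HR": "R-stage hyperexponential",
    "D": "deterministic",
    "G": "general",
}

def describe(descriptor):
    """Expand e.g. 'D/M/2/20' into a readable description.
    Omitted K (storage) and M (population) default to infinity."""
    parts = descriptor.split("/")
    a, b, m = parts[0], parts[1], parts[2]
    k = parts[3] if len(parts) > 3 else "infinity"
    pop = parts[4] if len(parts) > 4 else "infinity"
    return (f"{m}-server queue, {DISTRIBUTIONS.get(a, a)} interarrival times, "
            f"{DISTRIBUTIONS.get(b, b)} service times, "
            f"storage capacity {k}, customer population {pop}")

print(describe("D/M/2/20"))
```

For the example in the text, D/M/2/20 expands to a two-server queue with deterministic interarrivals, exponential service, storage capacity 20, and an infinite customer population.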
This is Volume I (Theory) of a two-volume series, the second of which is devoted to computer applications of this theory. The text of Volume I (which consists of four parts) begins in Chapter 1 with an introduction to queueing systems, how they fit into the general scheme of systems of flow, and a discussion of how one specifies and evaluates the performance of a queueing system. Assuming a knowledge of (or after reviewing) the material in Appendices I and II, the reader may then proceed to Chapter 2, where he is warned to take care! Section 2.1 is essential and simple. However, Sections 2.2, 2.3, and 2.4 are a bit "heavy" for a first reading in queueing systems, and it would be quite reasonable if the reader were to skip these sections at this point, proceeding directly to Section 2.5, in which the fundamental birth-death process is introduced and where we first encounter the use of z-transforms and Laplace transforms. Once these preliminaries in Part I are established one may proceed with the elementary queueing theory presented in Part II. We begin in Chapter 3 with the general equilibrium solution to birth-death processes and devote most of the chapter to providing simple yet important examples. Chapter 4 generalizes this treatment, and it is here where we discuss the method of stages and provide an introduction to networks of Markovian queues. Whereas Part II is devoted to algebraic and transform-oriented calculations, Part III returns us once again to probabilistic (as well as transform) arguments. This discussion of intermediate queueing theory begins with the important M/G/1 queue (Chapter 5) and then proceeds to the dual G/M/1 queue and its natural generalization to the system G/M/m (Chapter 6). The material on collective marks in Chapter 7 develops the probabilistic interpretation of transforms. Finally, the advanced material in Part IV leads us to the queue G/G/1 in Chapter 8; this difficult system (whose mean wait cannot even be expressed simply in terms of the system parameters) is studied through the use of the spectral solution to Lindley's integral equation. An approximation to the precedence structure among chapters in these two volumes is given below. In this diagram we have represented chapters in Volume I as numbers enclosed in circles and have used small squares for Volume II. The shading for the Volume I nodes indicates an appropriate amount of material for a relatively leisurely first course in queueing systems that can easily be accomplished in one semester or can be comfortably handled in a one-quarter course. The shading of Chapter 2 is meant to indicate that Sections 2.2-2.4 may be omitted on a first reading, and the same applies to Sections 8.3 and 8.4. A more rapid one-semester pace and a highly accelerated one-quarter pace would include all of Volume I in a single course. We close Volume I with a summary of important equations, developed throughout the book, which are grouped together according to the class of queueing system involved; this list of results then serves as a "handbook" for later use by the reader in concisely summarizing the principal results of this text. The results
[Chapter precedence diagram: circled numbers denote Volume I chapters, small squares denote Volume II chapters.]
are keyed to the page where they appear in order to simplify the task of locating the explanatory material associated with each result.
Each chapter contains its own list of references keyed alphabetically to the author and year; for example, [KLEI 74] would reference this book. All equations of importance have been marked with a special symbol, and it is these which are included in the summary of important equations. Each chapter includes a set of exercises which, in some cases, extend the material in that chapter; the reader is urged to work them out.
LEONARD KLEINROCK
the face of the real world's complicated models, the mathematicians proceeded to advance the field of queueing theory rapidly and elegantly. The frontiers of this research proceeded into the far reaches of deep and complex mathematics. It was soon found that the really interesting models did not yield to solution and the field quieted down considerably. It was mainly with the advent of digital computers that once again the tools of queueing theory were brought to bear on a class of practical problems, but this time with great success. The fact is that at present, one of the few tools we have for analyzing the performance of computer systems is that of queueing theory, and this explains its popularity among engineers and scientists today. A wealth of new problems are being formulated in terms of this theory and new tools and methods are being developed to meet the challenge of these problems. Moreover, the application of digital computers in solving the equations of queueing theory has spawned new interest in the field. It is hoped that this two-volume series will provide the reader with an appreciation for and competence in the methods of analysis and application as we now see them.
I take great pleasure in closing this Preface by acknowledging those individuals and institutions that made it possible for me to bring this book into being. First, I would like to thank all those who participated in creating the stimulating environment of the Computer Science Department at UCLA, which encouraged and fostered my effort in this direction. Acknowledgment is due the Advanced Research Projects Agency of the Department of Defense, which enabled me to participate in some of the most exciting and advanced computer systems and networks ever developed. Furthermore, the John Simon Guggenheim Foundation provided me with a Fellowship for the academic year 1971-1972, during which time I was able to further pursue my investigations. Hundreds of students who have passed through my queueing systems courses have in major and minor ways contributed to the creation of this book, and I am happy to acknowledge the special help offered by Arne Nilsson, Johnny Wong, Simon Lam, Fouad Tobagi, Farouk Kamoun, Robert Rice, and Thomas Sikes. My academic and professional colleagues have all been very supportive of this endeavour. To the typists I owe all. By far the largest portion of this book was typed by Charlotte La Roche, and I will be forever in her debt. To Diana Skocypec and Cynthia Ellman I give my deepest thanks for carrying out the enormous task of proofreading and correction-making in a rapid, enthusiastic, and supportive fashion. Others who contributed in major ways are Barbara Warren, Jean Dubinsky, Jean D'Fucci, and Gloria Roy. I owe a great debt of thanks to my family (and especially to my wife, Stella) who have stood by me and supported me well beyond the call of duty or marriage contract. Lastly, I would certainly be remiss in omitting an acknowledgement to my ever-faithful dictating machine, which was constantly talking back to me.
March, 1974
Contents
VOLUME I
PART I: PRELIMINARIES
Chapter 1 Queueing Systems 3
1.1. Systems of Flow 3
1.2. The Specification and Measure of Queueing Systems 8
Chapter 2 Some Important Random Processes 10
2.1. Notation and Structure for Basic Queueing Systems 10
2.2. Definition and Classification of Stochastic Processes 19
2.3. Discrete-Time Markov Chains 26
2.4. Continuous-Time Markov Chains 44
2.5. Birth-Death Processes 53
PART II: ELEMENTARY QUEUEING THEORY
Chapter 3 Birth-Death Queueing Systems in Equilibrium 89
3.1. General Equilibrium Solution 90
3.2. M/M/1: The Classical Queueing System 94
3.3. Discouraged Arrivals 99
3.4. M/M/∞: Responsive Servers (Infinite Number of Servers) 101
3.5. M/M/m: The m-Server Case 102
3.6. M/M/1/K: Finite Storage 103
3.7. M/M/m/m: m-Server Loss Systems 105
3.8. M/M/1//M: Finite Customer Population, Single Server 106
3.9. M/M/∞//M: Finite Customer Population, "Infinite" Number of Servers 107
3.10. M/M/m/K/M: Finite Population, m-Server Case, Finite Storage 108
Chapter 4 Markovian Queues in Equilibrium 115
4.1. The Equilibrium Equations 115
4.2. The Method of Stages: Erlangian Distribution Er 119
4.3. The Queue M/Er/1 126
4.4. The Queue Er/M/1 130
4.5. Bulk Arrival Systems 134
4.6. Bulk Service Systems 137
4.7. Series-Parallel Stages: Generalizations 139
4.8. Networks of Markovian Queues 147
PART III: INTERMEDIATE QUEUEING THEORY
Chapter 5 The Queue M/G/1 167
5.1. The M/G/1 System 168
5.2. The Paradox of Residual Life: A Bit of Renewal Theory 169
5.3. The Imbedded Markov Chain 174
5.4. The Transition Probabilities 177
5.5. The Mean Queue Length 180
5.6. Distribution of Number in System 191
5.7. Distribution of Waiting Time 196
5.8. The Busy Period and Its Duration 206
5.9. The Number Served in a Busy Period 216
5.10. From Busy Periods to Waiting Times 219
5.11. Combinatorial Methods 223
5.12. The Takács Integrodifferential Equation 226
Chapter 6 The Queue G/M/m 241
6.1. Transition Probabilities for the Imbedded Markov Chain (G/M/m) 241
6.2. Conditional Distribution of Queue Size 246
6.3. Conditional Distribution of Waiting Time 250
6.4. The Queue G/M/1 251
6.5. The Queue G/M/m 253
6.6. The Queue G/M/2 256
Chapter 7 The Method of Collective Marks 261
7.1. The Marking of Customers 261
7.2. The Catastrophe Process 267
PART IV: ADVANCED MATERIAL
Chapter 8 The Queue G/G/1 275
8.1. Lindley's Integral Equation 275
8.2. Spectral Solution to Lindley's Integral Equation 283
8.3. Kingman's Algebra for Queues 299
8.4. The Idle Time and Duality 304
Epilogue 319
Appendix I: Transform Theory Refresher: z-Transform and Laplace Transform
I.1. Why Transforms? 321
I.2. The z-Transform 327
I.3. The Laplace Transform 338
I.4. Use of Transforms in the Solution of Difference and Differential Equations 355
Appendix II: Probability Theory Refresher
II.1. Rules of the Game 363
II.2. Random Variables 368
II.3. Expectation 377
II.4. Transforms, Generating Functions, and Characteristic Functions 381
II.5. Inequalities and Limit Theorems 388
II.6. Stochastic Processes 393
Glossary of Notation 396
Summary of Important Results 400
Index 411
VOLUME II
Chapter 1 A Queueing Theory Primer
1. Notation
2. General Results
3. Markov, Birth-Death, and Poisson Processes
4. The M/M/1 Queue
5. The M/M/m Queueing System
6. Markovian Queueing Networks
7. The M/G/1 Queue
8. The G/M/1 Queue
9. The G/M/m Queue
10. The G/G/1 Queue
Chapter 2 Bounds, Inequalities and Approximations
1. The Heavy-Traffic Approximation
2. An Upper Bound for the Average Wait
3. Lower Bounds for the Average Wait
4. Bounds on the Tail of the Waiting Time Distribution
5. Some Remarks for G/G/m
6. A Discrete Approximation
7. The Fluid Approximation for Queues
8. Diffusion Processes
9. Diffusion Approximation for M/G/1
10. The Rush-Hour Approximation
Chapter 3 Priority Queueing
1. The Model
2. An Approach for Calculating Average Waiting Times
3. The Delay Cycle, Generalized Busy Periods, and Waiting Time Distributions
4. Conservation Laws
5. The Last-Come-First-Serve Queueing Discipline
6. Head-of-the-Line Priorities
7. Time-Dependent Priorities
8. Optimal Bribing for Queue Position
9. Service-Time-Dependent Disciplines
Chapter 4 Computer Time-Sharing and Multiaccess Systems
1. Definitions and Models
2. Distribution of Attained Service
3. The Batch Processing Algorithm
4. The Round-Robin Scheduling Algorithm
5. The Last-Come-First-Serve Scheduling Algorithm
6. The FB Scheduling Algorithm
7. The Multilevel Processor Sharing Scheduling Algorithm
8. Selfish Scheduling Algorithms
9. A Conservation Law for Time-Shared Systems
10. Tight Bounds on the Mean Response Time
11. Finite Population Models
12. Multiple-Resource Models
13. Models for Multiprogramming
14. Remote Terminal Access to Computers
Chapter 5 Computer-Communication Networks
1. Resource Sharing
2. Some Contrasts and Trade-Offs
3. Network Structures and Packet Switching
4. The ARPANET: An Operational Description of an Existing Network
5. Definitions, the Model, and the Problem Statements
6. Delay Analysis
7. The Capacity Assignment Problem
8. The Traffic Flow Assignment Problem
9. The Capacity and Flow Assignment Problem
10. Some Topological Considerations: Applications to the ARPANET
11. Satellite Packet Switching
12. Ground Radio Packet Switching
Chapter 6 Computer-Communication Networks: Measurement, Flow Control and ARPANET Traps
1. Simulation and Routing
2. Early ARPANET Measurements
3. Flow Control
4. Lockups, Degradations and Traps
5. Network Throughput
6. One Week of ARPANET Data
7. Line Overhead in the ARPANET
8. Recent Changes to the Flow Control Procedure
9. The Challenge of the Future
Glossary
Summary of Results
Index
PART I
PRELIMINARIES
It is difficult to see the forest for the trees (especially if one is in a mob rather than in a well-ordered queue). Likewise, it is often difficult to see the impact of a collection of mathematical results as you try to master them; it is only after one gains the understanding and appreciation for their application to real-world problems that one can say with confidence that he understands the use of a set of tools.
The two chapters contained in this preliminary part are each extreme in opposite directions. The first chapter gives a global picture of where queueing systems arise and why they are important. Entertaining examples are provided as we lure the reader on. In the second chapter, on random processes, we plunge deeply into mathematical definitions and techniques (quickly losing sight of our long-range goals); the reader is urged not to falter under this siege since it is perhaps the worst he will meet in passing through the text.
Specifically, Chapter 2 begins with some very useful graphical means for
displaying the dynamics of customer behavior in a queueing system. We then
introduce stochastic processes through the study of customer arrival, behavior, and backlog in a very general queueing system and carefully lead the
reader to one of the most significant results in queueing theory, namely,
Little's result, using very simple arguments. Having thus introduced the
concept of a stochastic process we then offer a rather compact treatment
which compares many well-known (but not well-distinguished) processes and
casts them in a common terminology and notation, leading finally to Figure
2.4 in which we see the basic relationships among these processes; the reader
is quickly brought to realize the central role played by the Poisson process
because of its position as the common intersection of all the stochastic
processes considered in this chapter. We then give a treatment of Markov
chains in discrete and continuous time; these sections are perhaps the toughest sledding for the novice, and it is perfectly acceptable if he passes over some of this material on a first reading. At the conclusion of Section 2.4 we find ourselves face to face with the important birth-death processes and it is here where things begin to take on a relationship to physical systems once again. In fact, it is not unreasonable for the reader to begin with Section 2.5 of this chapter since the treatment following is (almost) self-contained from there throughout the rest of the text. Only occasionally do we find a need for the more detailed material in Sections 2.3 and 2.4. If the reader perseveres through Chapter 2 he will have set the stage for the balance of the textbook.
1
Queueing Systems
One of life's more disagreeable activities, namely, waiting in line, is the delightful subject of this book. One might reasonably ask, "What does it profit a man to study such unpleasant phenomena?" The answer, of course, is that through understanding we gain compassion, and it is exactly this which we need since people will be waiting in longer and longer queues as civilization progresses, and we must find ways to tolerate these unpleasant situations. Think for a moment how much time is spent in one's daily activities waiting in some form of a queue: waiting for breakfast; stopped at a traffic light; slowed down on the highways and freeways; delayed at the entrance to one's parking facility; queued for access to an elevator; standing in line for the morning coffee; holding the telephone as it rings, and so on. The list is endless, and too often also are the queues.
The orderliness of queues varies from place to place around the world. For example, the English are terribly susceptible to formation of orderly queues, whereas some of the Mediterranean peoples consider the idea ludicrous (have you ever tried clearing the embarkation procedure at the Port of Brindisi?). A common slogan in the U.S. Army is, "Hurry up and wait." Such is the nature of the phenomena we wish to study.
1.1. SYSTEMS OF FLOW
Queueing systems represent an example of a much broader class of interesting dynamic systems, which, for convenience, we refer to as "systems of flow." A flow system is one in which some commodity flows, moves, or is transferred through one or more finite-capacity channels in order to go from one point to another. For example, consider the flow of automobile traffic through a road network, or the transfer of goods in a railway system, or the streaming of water through a dam, or the transmission of telephone or telegraph messages, or the passage of customers through a supermarket checkout counter, or the flow of computer programs through a time-sharing computer system. In these examples the commodities are the automobiles, the goods, the water, the telephone or telegraph messages, the customers, and the programs, respectively; the channel or channels are the road network, the railway network, the dam, the telephone or telegraph network, the supermarket checkout counter, and the computer processing system, respectively. The "finite capacity" refers to the fact that the channel can satisfy the demands (placed upon it by the commodity) at a finite rate only. It is clear that the analyses of many of these systems require analytic tools drawn from a variety of disciplines and, as we shall see, queueing theory is just one such discipline.
When one analyzes systems of flow, they naturally break into two classes: steady and unsteady flow. The first class consists of those systems in which the flow proceeds in a predictable fashion. That is, the quantity of flow is exactly known and is constant over the interval of interest; the time when that flow appears at the channel, and how much of a demand that flow places upon the channel is known and constant. These systems are trivial to analyze in the case of a single channel. For example, consider a pineapple factory in which empty tin cans are being transported along a conveyor belt to a point at which they must be filled with pineapple slices and must then proceed further down the conveyor belt for additional operations. In this case, assume that the cans arrive at a constant rate of one can per second and that the pineapple-filling operation takes nine-tenths of one second per can. These numbers are constant for all cans and all filling operations. Clearly this system will function in a reliable and smooth fashion as long as the assumptions stated above continue to exist. We may say that the arrival rate R is one can per second and the maximum service rate (or capacity) C is 1/0.9 = 1.11111... filling operations per second. The example above is for the case R < C. However, if we have the condition R > C, we all know what happens: cans and/or pineapple slices begin to inundate and overflow in the factory! Thus we see that the mean capacity of the system must exceed the average flow requirements if chaotic congestion is to be avoided; this is true for all systems of flow. This simple observation tells most of the story. Such systems are of little interest theoretically.
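The arithmetic for the pineapple factory can be sketched as follows (the code and variable names are ours, purely for illustration): with R = 1 can per second and a 0.9-second filling time, the channel is stable and busy ninety percent of the time.

```python
# Illustrative arithmetic for the pineapple-factory example above.
arrival_rate = 1.0               # R: cans per second
service_time = 0.9               # seconds per filling operation
capacity = 1.0 / service_time    # C = 1/0.9 ≈ 1.111 filling operations per second

stable = arrival_rate < capacity             # stability requires R < C
utilization = arrival_rate / capacity        # fraction of time the filler is busy

print(f"C = {capacity:.4f} ops/sec, utilization = {utilization:.2f}, stable = {stable}")
```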
The more interesting case of steady flow is that of a network of channels. For stable flow, we obviously require that R < C on each channel in the network. However we now run into some serious combinatorial problems. For example, let us consider a railway network in the fictitious land of Hatafla. See Figure 1.1. The scenario here is that figs grown in the city of Abra must be transported to the destination city of Cadabra, making use of the railway network shown. The numbers on each channel (section of railway) in Figure 1.1 refer to the maximum number of bushels of figs which that channel can handle per day. We are now confronted with the following fig flow problem: How many bushels of figs per day can be sent from Abra to Cadabra and in what fashion shall this flow of figs take place? The answer to such questions of maximal "traffic" flow in a variety of networks is nicely
[Figure 1.1 Maximal flow problem: a railway network joining the cities Abra, Zeus, Nonabel, Sucsamad, Oriac, and Cadabra, with each channel labeled by its capacity in bushels of figs per day.]
settled by a well-known result in network flow theory referred to as the max-flow-min-cut theorem. To state this theorem, we first define a cut as a set of channels which, once removed from the network, will separate all possible flow from the origin (Abra) to the destination (Cadabra). We define the capacity of such a cut to be the total fig flow that can travel across that cut in the direction from origin to destination. For example, one cut consists of the branches from Abra to Zeus, Sucsamad to Zeus, and Sucsamad to Oriac; the capacity of this cut is clearly 23 bushels of figs per day. The max-flow-min-cut theorem states that the maximum flow that can pass between an origin and a destination is the minimum capacity of all cuts. In our example it can be seen that the maximum flow is therefore 21 bushels of figs per day (work it out). In general, one must consider all cuts that separate a given origin and destination. This computation can be enormously time consuming. Fortunately, there exists an extremely powerful method for finding not only what is the maximum flow, but also which flow pattern achieves this maximum flow. This procedure is known as the labeling algorithm (due to Ford and Fulkerson [FORD 62]) and is efficient in that the computational requirement grows as a small power of the number of nodes; we present the algorithm in Volume II, Chapter 5.
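The augmenting-path idea behind the labeling algorithm can be sketched compactly. The code below is our illustration, not the book's: it uses the shortest-augmenting-path (breadth-first) variant, and the small network at the bottom is made up for the example; its capacities are not those of the Hatafla network in Figure 1.1.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Repeatedly push flow along shortest augmenting paths in the
    residual graph until no augmenting path remains (Ford-Fulkerson
    with breadth-first search, i.e. the Edmonds-Karp variant)."""
    # residual[u][v] = remaining capacity on edge u -> v
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)  # add reverse edges
    total = 0
    while True:
        # breadth-first search for a shortest source -> sink path with spare capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total                     # no augmenting path left: done
        # recover the path, find its bottleneck, update residual capacities
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        total += bottleneck

# A made-up network (NOT the Figure 1.1 capacities); its min cut is
# {a->t, b->t} with capacity 8 + 10 = 18, so the max flow is 18.
net = {"s": {"a": 10, "b": 10}, "a": {"b": 2, "t": 8}, "b": {"t": 10}, "t": {}}
print(max_flow(net, "s", "t"))  # 18
```

Note how the answer confirms the max-flow-min-cut theorem for this toy network: the algorithm stops exactly when the residual graph has no source-to-sink path, and the saturated edges it leaves behind form a minimum cut.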
In addition to maximal flow problems, one can pose numerous other interesting and worthwhile questions regarding flow in such networks. For example, one might inquire into the minimal cost network which will support a given flow if we assign costs to each of the channels. Also, one might ask the same questions in networks when more than one origin and destination exist. Complicating matters further, we might insist that a given network support flow of various kinds, for example, bushels of figs, cartons of cartridges, and barrels of oil. This multicommodity flow problem is an extremely difficult one, and its solution typically requires considerable computational effort. These and numerous other significant problems in network flow theory are addressed in the comprehensive text by Frank and Frisch [FRAN 71] and we shall see them again in Volume II, Chapter 5. Network flow theory itself requires methods from graph theory, combinatorial mathematics, optimization theory, mathematical programming, and heuristic programming.
The second class into which systems of flow may be divided is the class of random or stochastic flow problems. By this we mean that the times at which demands for service (use of the channel) arrive are uncertain or unpredictable, and also that the size of the demands themselves that are placed upon the channel are unpredictable. The randomness, unpredictability, or unsteady nature of this flow lends considerable complexity to the solution and understanding of such problems. Furthermore, it is clear that most real-world systems fall into this category. Again, the simplest case is that of random flow through a single channel; whereas in the case of deterministic or steady flow discussed earlier in which the single-channel problems were trivial, we have now a case where these single-channel problems are extremely challenging and, in fact, techniques for solution to the single-channel or single-server problem comprise much of modern queueing theory.
For example, consider the case of a computer center in which computation requests are served making use of a batch service system. In such a system, requests for computation arrive at unpredictable times, and when they do arrive, they may well find the computer busy servicing other demands. If, in fact, the computer is idle, then typically a new demand will begin service and will be run until it is completed. On the other hand, if the system is busy, then this job will wait on a queue until it is selected for service from among those that are waiting. Until that job is carried to completion, it is usually the case that neither the computation center nor the individual who has submitted the program knows the extent of the demand in terms of computational effort that this program will place upon the system; in this sense the service requirement is indeed unpredictable.
A variety of natural questions present themselves to which we would like intelligent and complete answers. How long, for example, may a job expect to wait on queue before entering service? How many jobs will be serviced before the one just submitted? For what fraction of the day will the computation center be busy? How long will the intervals of continual busy work extend? Such questions require answers regarding the probability of certain periods and numbers or perhaps merely the average values for these quantities. Additional considerations, such as machine breakdown (a not uncommon condition), complicate the issue further; in this case it is clear that some preemptive event prevents the completion of the job currently in service. Other interesting effects can take place where jobs are not serviced according to their order of arrival. Time-shared computer systems, for example, employ rather complex scheduling and servicing algorithms, which, in fact, we explore in Volume II, Chapter 4.
The tools necessary for solving single-channel random-flow problems are contained and described within queueing theory, to which much of this text devotes itself. This requires a background in probability theory as well as an understanding of complex variables and some of the usual transform calculus methods; this material is reviewed in Appendices I and II.
As in the case of deterministic flow, we may enlarge our scope of problems to that of networks of channels in which random flow is encountered. An example of such a system would be that of a computer network. Such a system consists of computers connected together by a set of communication lines where the capacity of these lines for carrying information is finite. Let us return to the fictitious land of Hatafla and assume that the railway network considered earlier is now in fact a computer network. Assume that users located at Abra require computational effort on the facility at Cadabra. The particular times at which these requests are made are themselves unpredictable, and the commands or instructions that describe these requests are also of unpredictable length. It is these commands which must be transmitted to Cadabra over our communication net as messages. When a message is inserted into the network at Abra, and after an appropriate decision rule (referred to as a routing procedure) is accessed, then the message proceeds through the network along some path. If a portion of this path is busy, and it may well be, then the message must queue up in front of the busy channel and wait for it to become free. Constant decisions must be made regarding the flow of messages and routing procedures. Hopefully, the message will eventually emerge at Cadabra, the computation will be performed, and the results will then be inserted into the network for delivery back at Abra.
It is clear that the problems exemplified by our computer network involve a variety of extremely complex queueing problems, as well as network flow and decision problems. In an earlier work [KLEI 64] the author addressed himself to certain aspects of these questions. We develop the analysis of these systems later in Volume II, Chapter 5.
Having thus classified systems of flow,* we hope that the reader understands where in the general scheme of things the field of queueing theory may be placed. The methods from this theory are central to analyzing most stochastic flow problems, and it is clear from an examination of the current literature that the field and in particular its applications are growing in a viable and purposeful fashion.
* The classification described above places queueing systems within the class of systems of flow. This approach identifies and emphasizes the fields of application for queueing theory. An alternative approach would have been to place queueing theory as belonging to the field of applied stochastic processes; this classification would have emphasized the mathematical structure of queueing theory rather than its applications. The point of view taken in this two-volume book is the former one, namely, with application of the theory as its major goal rather than extension of the mathematical formalism and results.
1.2. THE SPECIFICATION AND MEASURE
OF QUEUEING SYSTEMS
In order to completely specify a queueing system, one must identify the stochastic processes that describe the arriving stream as well as the structure and discipline of the service facility. Generally, the arrival process is described in terms of the probability distribution of the interarrival times of customers and is denoted A(t), where*

A(t) = P[time between arrivals ≤ t]   (1.1)

The assumption in most of queueing theory is that these interarrival times are independent, identically distributed random variables (and, therefore, the stream of arrivals forms a stationary renewal process; see Chapter 2). Thus, only the distribution A(t), which describes the time between arrivals, is usually of significance. The second statistical quantity that must be described is the amount of demand these arrivals place upon the channel; this is usually referred to as the service time, whose probability distribution is denoted by B(x), that is,

B(x) = P[service time ≤ x]   (1.2)

Here service time refers to the length of time that a customer spends in the service facility.
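The defining relations (1.1) and (1.2) are simple to explore numerically. The sketch below draws interarrival times from an exponential distribution — an assumption made purely for illustration, since A(t) in the text is completely arbitrary — and compares the empirical fraction of interarrival times not exceeding t against the known exponential CDF:

```python
import math
import random

def empirical_cdf(samples, t):
    """Fraction of samples not exceeding t: an estimate of A(t)."""
    return sum(1 for s in samples if s <= t) / len(samples)

random.seed(1)
rate = 2.0  # assumed (hypothetical) average arrival rate, customers/sec
interarrivals = [random.expovariate(rate) for _ in range(100_000)]

for t in (0.25, 0.5, 1.0):
    a_hat = empirical_cdf(interarrivals, t)
    a_exact = 1.0 - math.exp(-rate * t)  # exponential CDF, for comparison
    print(f"t = {t}: empirical A(t) = {a_hat:.3f}, exact = {a_exact:.3f}")
```

The same estimator applied to observed service times would estimate B(x); nothing in the construction depends on the exponential choice.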
Now regarding the structure and discipline of the service facility, one must specify a variety of additional quantities. One of these is the extent of storage capacity available to hold waiting customers, and typically this quantity is described in terms of the variable K; often K is taken to be infinite. An additional specification involves the number of service stations available, and if more than one is available, then perhaps the distribution of service time will differ for each, in which case the distribution B(x) will include a subscript to indicate that fact. On the other hand, it is sometimes the case that the arriving stream consists of more than one identifiable class of customers; in such a case the interarrival distribution A(t) as well as the service distribution B(x) may each be characteristic of each class and will be identified again by use of a subscript on these distributions. Another important structural description of a queueing system is that of the queueing discipline; this describes the order in which customers are taken from the queue and allowed into service. For example, some standard queueing disciplines are first-come-first-serve (FCFS), last-come-first-serve (LCFS), and random order of service. When the arriving customers are distinguishable according to groups, then we encounter the case of priority queueing disciplines in which priority
* The notation P[A] denotes, as usual, the "probability of the event A."
among groups may be established. A further statement regarding the availability of the service facility is also necessary in case the service facility is occasionally required to pay attention to other tasks (as, for example, its own breakdown). Beyond this, queueing systems may enjoy customer behavior in the form of defections from the queue, jockeying among the many queues, balking before entering a queue, bribing for queue position, cheating for queue position, and a variety of other interesting and not unexpected humanlike characteristics. We will encounter these as we move through the text in an orderly fashion (first-come-first-serve according to page number).
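The disciplines just named differ only in which waiting customer is selected when the server next becomes free. A minimal Python sketch of the three selection rules (the customer labels are hypothetical):

```python
import random
from collections import deque

def select_next(queue, discipline, rng=random):
    """Remove and return the next customer to enter service.

    FCFS takes the longest-waiting customer, LCFS the most recent
    arrival, and "random" chooses uniformly among all waiters.
    """
    if discipline == "FCFS":
        return queue.popleft()
    if discipline == "LCFS":
        return queue.pop()
    if discipline == "random":
        i = rng.randrange(len(queue))
        queue.rotate(-i)          # bring the chosen customer to the front
        customer = queue.popleft()
        queue.rotate(i)           # restore the order of the others
        return customer
    raise ValueError(f"unknown discipline: {discipline}")

waiting = ["C1", "C2", "C3"]      # C1 arrived first
print(select_next(deque(waiting), "FCFS"))  # C1
print(select_next(deque(waiting), "LCFS"))  # C3
```

Priority disciplines would add a comparison on customer class before one of these rules is applied within a class.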
Now that we have indicated how one must specify a queueing system, it is appropriate that we identify the measures of performance and effectiveness that we shall obtain by analysis. Basically, we are interested in the waiting time for a customer, the number of customers in the system, the length of a busy period (the continuous interval during which the server is busy), the length of an idle period, and the current work backlog expressed in units of time. All these quantities are random variables and thus we seek their complete probabilistic description (i.e., their probability distribution function). Usually, however, to give the distribution function is to give more than one can easily make use of. Consequently, we often settle for the first few moments (mean, variance, etc.).
Happily, we shall begin with simple considerations and develop the tools in a straightforward fashion, paying attention to the essential details of analysis. In the following pages we will encounter a variety of simple queueing problems, simple at least in the sense of description and usually rather sophisticated in terms of solution. However, in order to do this properly, we first devote our efforts in the following chapter to describing some of the important random processes that make up the arrival and service processes in our queueing systems.
REFERENCES
FORD 62  Ford, L. R., and D. R. Fulkerson, Flows in Networks, Princeton University Press (Princeton, N.J.), 1962.
FRAN 71  Frank, H., and I. T. Frisch, Communication, Transmission, and Transportation Networks, Addison-Wesley (Reading, Mass.), 1971.
KLEI 64  Kleinrock, L., Communication Nets; Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964, out of print. Reprinted by Dover (New York), 1972.
2
Some Important Random Processes*
We assume that the reader is familiar with the basic elementary notions, terminology, and concepts of probability theory. The particular aspects of that theory which we require are presented in summary fashion in Appendix II to serve as a review for those readers desiring a quick refresher and reminder; it is recommended that the material therein be reviewed, especially Section II.4 on transforms, generating functions, and characteristic functions. Included in Appendix II are the following important definitions, concepts, and results:
• Sample space, events, and probability.
• Conditional probability, statistical independence, the law of total probability, and Bayes' theorem.
• A real random variable, its probability distribution function (PDF), its probability density function (pdf), and their simple properties.
• Events related to random variables and their probabilities.
• Joint distribution functions.
• Functions of a random variable and their density functions.
• Expectation.
• Laplace transforms, generating functions, and characteristic functions and their relationships and properties.†
• Inequalities and limit theorems.
• Definition of a stochastic process.
2.1. NOTATI ON AND STRUCTURE FOR BASIC
QUEUEING SYSTEMS
Before we plunge headlong into a step-by-step development of queueing theory from its elementary notions to its intermediate and then finally to some advanced material, it is important first that we understand the basic

* Sections 2.2, 2.3, and 2.4 may be skipped on a first reading.
† Appendix I is a transform theory refresher. This material is also essential to the proper understanding of this text.
structure of queues. Also, we wish to provide the reader a glimpse as to where we are heading in this journey.
It is our purpose in this section to define some notation, both symbolic and graphic, and then to introduce one of the basic stochastic processes that we find in queueing systems. Further, we will derive a simple but significant result, which relates some first moments of importance in these systems. In so doing, we will be in a position to define the quantities and processes that we will spend many pages studying later in the text.
The system we consider is the very general queueing system G/G/m; recall (from the Preface) that this is a system whose interarrival time distribution A(t) is completely arbitrary and whose service time distribution B(x) is also completely arbitrary (all interarrival times and service times are assumed to be independent of each other). The system has m servers and order of service is also quite arbitrary (in particular, it need not be first-come-first-serve). We focus attention on the flow of customers as they arrive, pass through, and eventually leave this system; as such, we choose to number the customers with the subscript n and define C_n as follows:

C_n denotes the nth customer to enter the system   (2.1)
Thus, we may portray our system as in Figure 2.1 in which the box represents the queueing system and the flow of customers both in and out of the system is shown. One can immediately define some random processes of interest. For example, we are interested in N(t) where*

N(t) ≜ number of customers in the system at time t   (2.2)

Another stochastic process of interest is the unfinished work V(t) that exists in the system at time t, that is,

V(t) ≜ the unfinished work in the system at time t
     ≜ the remaining time required to empty the system of all customers present at time t   (2.3)

Whenever V(t) > 0, then the system is said to be busy, and only when V(t) = 0 is the system said to be idle. The duration and location of these busy and idle periods are also quantities of interest.
Figure 2.1 A general queueing system.
* The notation ≜ is to be read as "equals by definition."
The details of these stochastic processes may be observed first by defining the following variables and then by displaying these variables on an appropriate time diagram to be discussed below. We begin with the definitions. Recalling that the nth customer is denoted by C_n, we define his arrival time to the queueing system as

τ_n ≜ arrival time for C_n   (2.4)

and the interarrival time between C_{n-1} and C_n as

t_n ≜ τ_n − τ_{n-1}   (2.5)

We assume that these interarrival times are drawn from the distribution

P[t_n ≤ t] = A(t)   (2.6)

which is independent of n. Similarly, we define the service time for C_n as

x_n ≜ service time for C_n   (2.7)

and from our assumptions we have

P[x_n ≤ x] = B(x)   (2.8)

The sequences {t_n} and {x_n} may be thought of as input variables for our queueing system; the way in which the system handles these customers gives rise to queues and waiting times that we must now define. Thus, we define the waiting time (time spent in the queue)* as

w_n ≜ waiting time (in queue) for C_n   (2.9)

The total time spent in the system by C_n is the sum of his waiting time and service time, which we denote by

s_n ≜ system time (queue plus service) for C_n
    = w_n + x_n   (2.10)

Thus we have defined for the nth customer his arrival time, "his" interarrival time, his service time, his waiting time, and his system time. We find it

* The terms "waiting time" and "queueing time" have conflicting definitions within the body of queueing-theory literature. The former sometimes refers to the total time spent in system, and the latter then refers to the total time spent on queue; however, these two definitions are occasionally reversed. We attempt to remove that confusion by defining waiting and queueing time to be the same quantity, namely, the time spent waiting on queue (but not being served); a more appropriate term perhaps would be "wasted time." The total time spent in the system will be referred to as "system time" (occasionally known as "flow time").
expedient at this point to elaborate somewhat further on notation. Let us consider the interarrival time t_n once again. We will have occasion to refer to the limiting random variable t̃ defined by

t̃ = lim_{n→∞} t_n   (2.11)

which we denote by t_n → t̃. (We have already required that the interarrival times t_n have a distribution independent of n, but this will not necessarily be the case with many other random variables of interest.) The typical notation for the probability distribution function (PDF) will be

P[t_n ≤ t] = A_n(t)   (2.12)

and for the limiting PDF

P[t̃ ≤ t] = A(t)   (2.13)

This we denote by A_n(t) → A(t); of course, for the interarrival time we have assumed that A_n(t) = A(t), which gives rise to Eq. (2.6). Similarly, the probability density function (pdf) for t_n and t̃ will be a_n(t) and a(t), respectively, and will be denoted as a_n(t) → a(t). Finally, the Laplace transform (see Appendix II) of these pdf's will be denoted by A_n*(s) and A*(s), respectively, with the obvious notation A_n*(s) → A*(s). The use of the letter A (and a) is meant as a cue to remind the reader that they refer to the interarrival time. Of course, the moments of the interarrival time are of interest and they will be denoted as follows*:

E[t_n] ≜ t̄_n   (2.14)

According to our usual notation, the mean interarrival time for the limiting random variable will be given† by t̄ in the sense that t̄_n → t̄. As it turns out, t̄, which is the average interarrival time between customers, is used so frequently in our equations that a special notation has been adopted as follows:

t̄ ≜ 1/λ   (2.15)

Thus λ represents the average arrival rate of customers to our queueing system. Higher moments of the interarrival time are also of interest and so we define the kth moment by

E[t_n^k] ≜ t̄_n^(k) → t̄^(k) = a_k,   k = 0, 1, 2, ...   (2.16)

* The notation E[ ] denotes the expectation of the quantity within square brackets. As shown, we also adopt the overbar notation to denote expectation.
† Actually, we should use the notation t with a tilde and a bar, but this is excessive and will be simplified to t̄. The same simplification will be applied to many of our other random variables.
In this last equation we have introduced the definition of a_k as the kth moment of the interarrival time t̃; this is fairly standard notation and we note immediately from the above that

t̄ = 1/λ = a_1 = a   (2.17)

That is, three special notations exist for the mean interarrival time; in particular, the use of the symbol a is very common and various of these forms will be used throughout the text as appropriate. Summarizing the information with regard to the interarrival time we have the following shorthand glossary:

t_n ≜ interarrival time between C_{n-1} and C_n
t_n → t̃,  A_n(t) → A(t),  a_n(t) → a(t),  A_n*(s) → A*(s)   (2.18)
t̄_n → t̄ = 1/λ = a_1 = a,  t̄_n^(k) → t̄^(k) = a_k
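The moment notation is easy to exercise numerically. The sketch below (Python; the exponential interarrivals and the rate λ = 2 are assumptions made only for illustration) estimates a_k = E[t̃^k] from samples and checks that a_1 approximates 1/λ:

```python
import random

def moment(samples, k):
    """Empirical kth moment: an estimate of a_k = E[t^k]."""
    return sum(s ** k for s in samples) / len(samples)

random.seed(3)
lam = 2.0  # assumed average arrival rate
ts = [random.expovariate(lam) for _ in range(200_000)]

a1 = moment(ts, 1)  # mean interarrival time; should be near 1/lam = 0.5
a2 = moment(ts, 2)  # second moment; equals 2/lam**2 for the exponential
print(f"a_1 ~ {a1:.4f} (1/lambda = {1 / lam:.4f}), a_2 ~ {a2:.4f}")
```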
In a similar manner we identify the notation associated with x_n, w_n, and s_n as follows:

x_n ≜ service time for C_n
x_n → x̃,  B_n(x) → B(x),  b_n(x) → b(x),  B_n*(s) → B*(s)   (2.19)
x̄_n → x̄ = 1/μ = b_1 = b

w_n ≜ waiting time for C_n,  w̄ = W   (2.20)

s_n ≜ system time for C_n
s_n → s̃,  S_n(y) → S(y),  s_n(y) → s(y),  S_n*(s) → S*(s),  s̄ = T   (2.21)

All this notation is self-evident except perhaps for the occasional special symbols used for the first moment and occasionally the higher moments of the random variables involved (that is, the use of the symbols λ, a, μ, b, W, and T). The reader is, at this point, directed to the Glossary for a complete set of notation used in this book.
With the above notation we now suggest a time-diagram notation for queues, which permits a graphical view of the dynamics of our queueing system and also provides the details of the underlying stochastic processes. This diagram is shown in Figure 2.2. This particular figure is shown for a
Figure 2.2 Time-diagram notation for queues.
first-come-first-serve order of service, but it is easy to see how the figure may also be made to represent any order of service. In this time diagram the lower horizontal time line represents the queue and the upper horizontal time line represents the service facility; moreover, the diagram shown is for the case of a single server, although this too is easily generalized. An arrow approaching the queue (or service) line from below indicates that an arrival has occurred to the queue (or service facility). Arrows emanating from the line indicate the departure of a customer from the queue (or service facility). In this figure we see that customer C_{n+1} arrives before customer C_n enters service; only when C_n departs from service may C_{n+1} enter service and, of course, these two events occur simultaneously. Notice that when C_{n+2} enters the system he finds it empty and so immediately proceeds through an empty queue directly into the service facility. In this diagram we have also shown the waiting time and the system time for C_n (note that w_{n+2} = 0). Thus, as time proceeds we can identify the number of customers in the system N(t), the unfinished work V(t), and also the idle and busy periods. We will find much use for this time-diagram notation in what follows.
In a general queueing system one expects that when the number of customers is large then so is the waiting time. One manifestation of this is a very simple relationship between the mean number in the queueing system, the mean arrival rate of customers to that system, and the mean system time for customers. It is our purpose next to derive that relationship and thereby familiarize ourselves a bit further with the underlying behavior of these systems. Referring back to Figure 2.1, let us position ourselves at the input of the queueing system and count how many customers enter as a function of time. We denote this by α(t) where

α(t) ≜ number of arrivals in (0, t)   (2.22)
Figure 2.3 Arrivals and departures. [The figure plots the number of customers against time t, showing the staircase sample functions of the arrival and departure counting processes.]
Alternatively, we may position ourselves at the output of the queueing system and count the number of departures that leave; this we denote by

δ(t) ≜ number of departures in (0, t)   (2.23)

Sample functions for these two stochastic processes are shown in Figure 2.3. Clearly N(t), the number in the system at time t, must be given by

N(t) = α(t) − δ(t)

On the other hand, the total area between these two curves up to some point, say t, represents the total time all customers have spent in the system (measured in units of customer-seconds) during the interval (0, t); let us denote this cumulative area by γ(t). Moreover, let λ_t be defined as the average arrival rate (customers per second) during the interval (0, t); that is,

λ_t = α(t)/t   (2.24)

We may define T_t as the system time per customer averaged over all customers in the interval (0, t); since γ(t) represents the accumulated customer-seconds up to time t, we may divide by the number of arrivals up to that point to obtain

T_t = γ(t)/α(t)

Lastly, let us define N_t as the average number of customers in the queueing system during the interval (0, t); this may be obtained by dividing the accumulated number of customer-seconds by the total interval length t,
thusly

N_t = γ(t)/t

From these last three equations we see that

N_t = λ_t T_t

Let us now assume that our queueing system is such that the following limits exist as t → ∞:

λ = lim_{t→∞} λ_t
T = lim_{t→∞} T_t

Note that we are using our former definitions for λ and T representing the average customer arrival rate and the average system time, respectively. If these last two limits exist, then so will the limit for N_t, which we denote by N, now representing the average number of customers in the system; that is,

N = λT   (2.25)
This last is the result we were seeking and is known as Little's result. It states that the average number of customers in a queueing system is equal to the average arrival rate of customers to that system, times the average time spent in that system.* The above proof does not depend upon any specific assumptions regarding the arrival distribution A(t) or the service time distribution B(x); nor does it depend upon the number of servers in the system or upon the particular queueing discipline within the system. This result existed as a "folk theorem" for many years; the first to establish its validity in a formal way was J. D. C. Little [LITT 61], with some later simplifications by W. S. Jewell [JEWE 67], S. Eilon [EILO 69], and S. Stidham [STID 74]. It is important to note that we have not precisely defined the boundary around our queueing system. For example, the box in Figure 2.1 could apply to the entire system composed of queue and server, in which case N and T as defined refer to quantities for the entire system; on the other hand, we could have considered the boundary of the queueing system to contain only the queue itself, in which case the relationship would have been

N_q = λW   (2.26)

where N_q represents the average number of customers in the queue and, as defined earlier, W refers to the average time spent waiting in the queue. As a third possible alternative, the queueing system defined could have surrounded

* An intuitive proof of Little's result depends on the observation that an arriving customer should find the same average number, N, in the system as he leaves behind upon his departure. This latter quantity is simply the arrival rate λ times his average time in system, T.
only the server (or servers) itself; in this case our equation would have reduced to

N_s = λx̄   (2.27)

where N_s refers to the average number of customers in the service facility (or facilities) and x̄, of course, refers to the average time spent in the service box. Note that it is always true that

T = x̄ + W   (2.28)

The queueing system could refer to a specific class of customers, perhaps based on priority or some other attribute of this class, in which case the same relationship would apply. In other words, the average arrival rate of customers to a "queueing system" times the average time spent by customers in that "system" is equal to the average number of customers in the "system," regardless of how we define that "system."
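Little's result, and the finite-interval identity N_t = λ_t T_t behind it, can be checked directly by simulation. The Python sketch below assumes a single-server FCFS system with exponential interarrival and service times — none of these choices matters for the identity, which is pure bookkeeping — and takes t to be the last departure epoch, so that γ(t) is exactly the sum of the customers' system times:

```python
import random

random.seed(4)
n = 10_000
lam, mu = 0.8, 1.0   # assumed arrival and service rates
t = [random.expovariate(lam) for _ in range(n)]  # interarrival times
x = [random.expovariate(mu) for _ in range(n)]   # service times

tau, w = [], []
clock = 0.0
for i in range(n):
    clock += t[i]
    tau.append(clock)
    w.append(0.0 if i == 0 else max(0.0, w[i - 1] + x[i - 1] - t[i]))
s = [w[i] + x[i] for i in range(n)]              # system times
t_end = max(tau[i] + s[i] for i in range(n))     # last departure epoch

gamma = sum(s)        # accumulated customer-seconds in (0, t_end)
lam_t = n / t_end     # average arrival rate over (0, t_end)
T_t = gamma / n       # average system time per customer
N_t = gamma / t_end   # time-average number in the system

print(f"N_t = {N_t:.4f}, lam_t * T_t = {lam_t * T_t:.4f}")
```

For this accounting the equality N_t = λ_t T_t is exact at t = t_end, not merely asymptotic; Eq. (2.25) asserts that it survives the passage to the limit t → ∞.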
We now discuss a basic parameter ρ, which is commonly referred to as the utilization factor. The utilization factor is in a fundamental sense really the ratio R/C, which we introduced in Chapter 1. It is the ratio of the rate at which "work" enters the system to the maximum rate (capacity) at which the system can perform this work; the work an arriving customer brings into the system equals the number of seconds of service he requires. So, in the case of a single-server system, the definition for ρ becomes

ρ ≜ (average arrival rate of customers) × (average service time)
  = λx̄   (2.29)

This last is true since a single-server system has a maximum capacity for doing work, which equals 1 sec/sec, and each arriving customer brings an average amount of work equal to x̄ sec; since, on the average, λ customers arrive per second, then λx̄ sec of work are brought in by customers each second that passes, on the average. In the case of multiple servers (say, m servers) the definition remains the same when one considers the ratio R/C, where now the work capacity of the system is m sec/sec; expressed in terms of system parameters we then have

ρ ≜ λx̄/m   (2.30)

Equations (2.29) and (2.30) apply in the case when the maximum service rate is independent of the system state; if this is not the case, then a more careful definition must be provided. The rate at which work enters the system is sometimes referred to as the traffic intensity of the system and is usually expressed in Erlangs; in single-server systems, the utilization factor is equal to the traffic intensity, whereas for m multiple servers the traffic intensity equals mρ. So long as 0 ≤ ρ < 1, then ρ may be interpreted as

ρ = E[fraction of busy servers]   (2.31)
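Definitions (2.29) and (2.30) translate into a one-line computation; the numeric values below are hypothetical examples:

```python
def utilization(lam, x_bar, m=1):
    """Utilization factor rho = lam * x_bar / m, per Eqs. (2.29)-(2.30).

    lam   -- average arrival rate (customers per second)
    x_bar -- average service time (seconds)
    m     -- number of servers
    """
    return lam * x_bar / m

# Single server: rho equals the traffic intensity lam * x_bar.
rho_single = utilization(0.5, 1.2)        # 0.5 * 1.2 = 0.6
# Four servers offered 2.4 Erlangs of traffic: rho = 2.4 / 4 = 0.6.
rho_multi = utilization(0.5, 4.8, m=4)
print(rho_single, rho_multi)
```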
[In the case of an infinite number of servers, the utilization factor ρ plays no important part, and instead we are interested in the number of busy servers (and its expectation).]
Indeed, for the system G/G/1 to be stable, it must be that R < C, that is, 0 ≤ ρ < 1. Occasionally, we permit the case ρ = 1 within the range of stability (in particular for the system D/D/1). Stability here once again refers to the fact that limiting distributions for all random variables of interest exist, and that all customers are eventually served. In such a case we may carry out the following simple calculation. We let τ be an arbitrarily long time interval; during this interval we expect (by the law of large numbers) with probability 1 that the number of arrivals will be very nearly equal to λτ. Moreover, let us define p_0 as the probability that the server is idle at some randomly selected time. We may, therefore, say that during the interval τ, the server is busy for τ − τp_0 sec, and so with probability 1, the number of customers served during the interval τ is very nearly (τ − τp_0)/x̄. We may now equate the number of arrivals to the number served during this interval, which gives, for large τ,

λτ = (τ − τp_0)/x̄

Thus, as τ → ∞ we have λx̄ = 1 − p_0; using Definition (2.29) we finally have the important conclusion for G/G/1

ρ = 1 − p_0   (2.32)

The interpretation here is that ρ is merely the fraction of time the server is busy; this supports the conclusion in Eq. (2.27) in which λx̄ = ρ was shown equal to the average number of customers in the service facility.
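The conclusion ρ = 1 − p_0 invites a direct check. In a single-server FCFS simulation the server works for exactly Σ x_n seconds out of the interval from time 0 to the last departure, so the busy fraction of that interval estimates 1 − p_0. The sketch below (Python; the exponential distributions, λ = 0.5, and x̄ = 1 are assumed values chosen so that ρ = 0.5) compares the two:

```python
import random

random.seed(5)
n = 50_000
lam, x_bar = 0.5, 1.0   # assumed rates; rho = lam * x_bar = 0.5
t = [random.expovariate(lam) for _ in range(n)]          # interarrivals
x = [random.expovariate(1.0 / x_bar) for _ in range(n)]  # service times

tau, w = [], []
clock = 0.0
for i in range(n):
    clock += t[i]
    tau.append(clock)
    w.append(0.0 if i == 0 else max(0.0, w[i - 1] + x[i - 1] - t[i]))
t_end = max(tau[i] + w[i] + x[i] for i in range(n))      # last departure

busy_fraction = sum(x) / t_end   # fraction of (0, t_end) the server works
rho = lam * x_bar
print(f"busy fraction ~ {busy_fraction:.3f}, rho = {rho:.3f}")
```

The observed idle fraction 1 − busy_fraction correspondingly estimates p_0 = 1 − ρ.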
This, then, is a rapid look at an overall queueing system in which we have exposed some of the basic stochastic processes, as well as some of the important definitions and notation we will encounter. Moreover, we have established Little's result, which permits us to calculate the average number in the system once we have calculated the average time in the system (or vice versa). Now let us move on to a more careful study of the important stochastic processes in our queueing systems.
2.2*. DEFINITION AND CLASSIFICATION
OF STOCHASTIC PROCESSES
At the end of Appendix II a definition is given for a stochastic process, which in essence states that it is a family of random variables X(t) where the

* The reader may choose to skip Sections 2.2, 2.3, and 2.4 at this point and move directly to Section 2.5. He may then refer to this material only as he feels he needs to in the balance of the text.
random variables are "indexed" by the time parameter t. For example, the number of people sitting in a movie theater as a function of time is a stochastic process, as is also the atmospheric pressure in that movie theater as a function of time (at least those functions may be modeled as stochastic processes). Often we refer to a stochastic process as a random process. A random process may be thought of as describing the motion of a particle in some space. The classification of a random process depends upon three quantities: the state space; the index (time) parameter; and the statistical dependencies among the random variables X(t) for different values of the index parameter t. Let us discuss each of these in order to provide the general framework for random processes.
First we consider the state space. The set of possible values (or states) that X(t) may take on is called its state space. Referring to our analogy with regard to the motion of a particle, if the positions that particle may occupy are finite or countable, then we say we have a discrete-state process, often referred to as a chain. The state space for a chain is usually the set of integers {0, 1, 2, ...}. On the other hand, if the permitted positions of the particle are over a finite or infinite continuous interval (or set of such intervals), then we say that we have a continuous-state process.
Now for the index (time) parameter. If the permitted times at which changes in position may take place are finite or countable, then we say we have a discrete-(time-)parameter process; if these changes in position may occur anywhere within (a set of) finite or infinite intervals on the time axis, then we say we have a continuous-parameter process. In the former case we often write X_n rather than X(t). X_n is often referred to as a random or stochastic sequence, whereas X(t) is often referred to as a random or stochastic process.
The truly distinguishing feature of a stochastic process is the relationship
of the random variables X(t) or X_n to other members of the same family. As
defined in Appendix II, one must specify the complete joint distribution
function among the random variables (which we may think of as vectors
denoted by the use of boldface) X = [X(t_1), X(t_2), ...], namely,

    F_X(x; t) = P[X(t_1) <= x_1, ..., X(t_n) <= x_n]    (2.33)

for all x = (x_1, x_2, ..., x_n), t = (t_1, t_2, ..., t_n), and n. As mentioned there,
this is a formidable task; fortunately, many interesting stochastic processes
permit a simpler description. In any case, it is the function F_X(x; t) that really
describes the dependencies among the random variables of the stochastic
process. Below we describe some of the usual types of stochastic processes
that are characterized by different kinds of dependency relations among their
random variables. We provide this classification in order to give the reader a
global view of this field so that he may better understand in which particular
regions he is operating as we proceed with our study of queueing theory and
its related stochastic processes.
(a) Stationary Processes. As we discuss at the very end of Appendix II,
a stochastic process X(t) is said to be stationary if F_X(x; t) is invariant to
shifts in time for all values of its arguments; that is, given any constant τ
the following must hold:

    F_X(x; t + τ) = F_X(x; t)    (2.34)

where the notation t + τ is defined as the vector (t_1 + τ, t_2 + τ, ..., t_n + τ).
An associated notion, that of wide-sense stationarity, is identified with the
random process X(t) if merely both the first and second moments are inde-
pendent of the location on the time axis, that is, if E[X(t)] is independent of t
and if E[X(t)X(t + τ)] depends only upon τ and not upon t. Observe that all
stationary processes are wide-sense stationary, but not conversely. The
theory of stationary random processes is, as one might expect, simpler than
that for nonstationary processes.
(b) Independent Processes. The simplest and most trivial stochastic
process to consider is the random sequence in which {X_n} forms a set of
independent random variables; that is, the joint pdf defined for our stochastic
process in Appendix II must factor into the product, thusly:

    p_{X_1, ..., X_n}(x_1, ..., x_n) = p_{X_1}(x_1) p_{X_2}(x_2) ··· p_{X_n}(x_n)    (2.35)

In this case we are stretching things somewhat by calling such a sequence a
random process, since there is no structure or dependence among the random
variables. In the case of a continuous random process, such an independent
process may be defined, and it is commonly referred to as "white noise"
(an example is the time derivative of Brownian motion).
(c) Markov Processes. In 1907 A. A. Markov published a paper [MARK
07] in which he defined and investigated the properties of what are now
known as Markov processes. In fact, what he created was a simple and
highly useful form of dependency among the random variables forming a
stochastic process, which we now describe.

A Markov process with a discrete state space is referred to as a Markov
chain. The discrete-time Markov chain is the easiest to conceptualize and
understand. A set of random variables {X_n} forms a Markov chain if the
probability that the next value (state) is x_{n+1} depends only upon the current
value (state) x_n and not upon any previous values. Thus we have a random
sequence in which the dependency extends backwards one unit in time. That
is, the way in which the entire past history affects the future of the process is
completely summarized in the current value of the process.
In the case of a discrete-time Markov chain the instants when state changes
may occur are preordained to be at the integers 0, 1, 2, ..., n, .... In the
case of the continuous-time Markov chain, however, the transitions between
states may take place at any instant in time. Thus we are led to consider the
random variable that describes how long the process remains in its current
(discrete) state before making a transition to some other state. Because the
Markov property insists that the past history be completely summarized in
the specification of the current state, we are not free to require that a
specification also be given as to how long the process has been in its current
state! This imposes a heavy constraint on the distribution of time that the
process may remain in a given state. In fact, as we shall see in Eq. (2.85),
this state time must be exponentially distributed. In a real sense, then, the
exponential distribution is a continuous distribution which is "memoryless"
(we will discuss this notion at considerable length later in this chapter).
Similarly, in the discrete-time Markov chain, the process may remain in the
given state for a time that must be geometrically distributed; this is the only
discrete probability mass function that is memoryless. This memoryless
property is required of all Markov chains and restricts the generality of the
processes one would like to consider.
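The memoryless property of the geometric distribution is easy to verify numerically. The following sketch is ours, not from the text; the parameter p = 0.3 and the particular values of m and n are arbitrary choices. It checks that P[T > m + n | T > m] = P[T > n] when T is the trial number of the first success.

```python
# Numerical check (a sketch, not from the text) that the geometric
# distribution is memoryless: P[T > m + n | T > m] = P[T > n].
# T is the number of trials until the first success, so
# P[T = k] = (1 - p)**(k - 1) * p for k = 1, 2, ...

def tail(p, n):
    """P[T > n] for a geometric random variable with parameter p."""
    return (1 - p) ** n

p = 0.3  # arbitrary success probability for the check
for m in (1, 5, 10):
    for n in (1, 2, 7):
        lhs = tail(p, m + n) / tail(p, m)   # P[T > m + n | T > m]
        rhs = tail(p, n)                    # P[T > n]
        assert abs(lhs - rhs) < 1e-12
```

The same check fails for any other discrete tail function, which is the sense in which the geometric distribution is the *only* memoryless discrete distribution.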
Expressed analytically, the Markov property may be written as

    P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_1) = x_1]
        = P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n]    (2.36)

where t_1 < t_2 < ... < t_n < t_{n+1} and x_i is included in some discrete state
space.
The consideration of Markov processes is central to the study of queueing
theory and much of this text is devoted to that study. Therefore, a good
portion of this chapter deals with discrete- and continuous-time Markov
chains.
(d) Birth-death Processes. A very important special class of Markov
chains has come to be known as the birth-death process. These may be either
discrete- or continuous-time processes in which the defining condition is that
state transitions take place between neighboring states only. That is, one may
choose the set of integers as the discrete state space (with no loss of generality),
and then the birth-death process requires that if X_n = i, then X_{n+1} = i − 1,
i, or i + 1 and no other. As we shall see, birth-death processes have played a
significant role in the development of queueing theory. For the moment,
however, let us proceed with our general view of stochastic processes to see
how each fits into the general scheme of things.
(e) Semi-Markov Processes. We begin by discussing discrete-time
semi-Markov processes. The discrete-time Markov chain had the property
that at every unit interval on the time axis the process was required to make a
transition from the current state to some other state (possibly back to the
same state). The transition probabilities were completely arbitrary; however,
the requirement that a transition be made at every unit time (which really
came about because of the Markov property) leads to the fact that the time
spent in a state is geometrically distributed [as we shall see in Eq. (2.66)].
As mentioned earlier, this imposes a strong restriction on the kinds of
processes we may consider. If we wish to relax that restriction, namely, to
permit an arbitrary distribution of time the process may remain in a state,
then we are led directly into the notion of a discrete-time semi-Markov
process; specifically, we now permit the times between state transitions to
obey an arbitrary probability distribution. Note, however, that at the instants
of state transitions, the process behaves just like an ordinary Markov chain
and, in fact, at those instants we say we have an imbedded Markov chain.

Now the definition of a continuous-time semi-Markov process follows
directly. Here we permit state transitions at any instant in time. However, as
opposed to the Markov process, which required an exponentially distributed
time in state, we now permit an arbitrary distribution. This then affords us
much greater generality, which we are happy to employ in our study of
queueing systems. Here, again, the imbedded Markov process is defined at
those instants of state transition. Certainly, the class of Markov processes is
contained within the class of semi-Markov processes.
(f) Random Walks. In the study of random processes one often en-
counters a process referred to as a random walk. A random walk may be
thought of as a particle moving among states in some (say, discrete) state
space. What is of interest is to identify the location of the particle in that state
space. The salient feature of a random walk is that the next position the
process occupies is equal to the previous position plus a random variable
whose value is drawn independently from an arbitrary distribution; this
distribution, however, does not change with the state of the process.* That is,
a sequence of random variables {S_n} is referred to as a random walk (starting
at the origin) if

    S_n = X_1 + X_2 + ... + X_n        n = 1, 2, ...    (2.37)

where S_0 = 0 and X_1, X_2, ... is a sequence of independent random variables
with a common distribution. The index n merely counts the number of state
transitions the process goes through; of course, if the instants of these
transitions are taken from a discrete set, then we have a discrete-time random
walk, whereas if they are taken from a continuum, then we have a continuous-
time random walk. In any case, we assume that the interval between these
transitions is distributed in an arbitrary way, and so a random walk is a
special case of a semi-Markov process.* In the case when the common
distribution for X_n is a discrete distribution, then we have a discrete-state
random walk; in this case the transition probability p_ij of going from state i
to state j will depend only upon the difference in indices j − i (which we
denote by q_{j−i}).

* Except perhaps at some boundary states.
An example of a continuous-time random walk is that of Brownian motion;
in the discrete-time case an example is the total number of heads observed in a
sequence of independent coin tosses.

A random walk is occasionally referred to as a process with "independent
increments."
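Eq. (2.37) is straightforward to simulate. The sketch below is our illustration, not from the text; the step distribution and the random seed are arbitrary choices. It generates the coin-toss walk just mentioned, in which S_n counts the total number of heads after n tosses.

```python
# Sketch of a discrete-time random walk per Eq. (2.37):
# S_n = X_1 + ... + X_n with i.i.d. steps from a common distribution.
# Coin-toss example: X_i = 1 (heads) with probability 1/2, else 0,
# so S_n is the number of heads after n tosses.
import random

def random_walk(n, step, seed=None):
    """Return the path [S_0, S_1, ..., S_n], with S_0 = 0 as in the text."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n):
        s += step(rng)        # S_n = S_{n-1} + X_n
        path.append(s)
    return path

heads = random_walk(20, lambda rng: rng.randint(0, 1), seed=1)
# for this step distribution S_n is nondecreasing and S_n <= n
assert all(0 <= heads[n] <= n for n in range(len(heads)))
```

Replacing the `step` function (say, with steps of ±1) gives any other discrete-state random walk with the same code.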
(g) Renewal Processes. A renewal process is related† to a random walk.
However, the interest is not in following a particle among many states but
rather in counting transitions that take place as a function of time. That is,
we consider the real time axis on which is laid out a sequence of points; the
distribution of time between adjacent points is an arbitrary common distri-
bution and each point corresponds to an instant of a state transition. We
assume that the process begins in state 0 [i.e., X(0) = 0] and increases by
unity at each transition epoch; that is, X(t) equals the number of state tran-
sitions that have taken place by t. In this sense it is a special case of a random
walk in which q_1 = 1 and q_i = 0 for i ≠ 1. We may think of Eq. (2.37) as
describing a renewal process in which S_n is the random variable denoting the
time at which the nth transition takes place. As earlier, the sequence {X_n} is a
set of independent identically distributed random variables, where X_n now
represents the time between the (n − 1)th and nth transitions. One should be
careful to distinguish the interpretation of Eq. (2.37) when it applies to
renewal processes as here and when it applies to a random walk as earlier.
The difference is that here in the renewal process the equation describes the
time of the nth renewal or transition, whereas in the random walk it describes
the state of the process, and the time between state transitions is some other
random variable.

An important example of a renewal process is the set of arrival instants
to the G/G/m queue. In this case, X_n is identified with the interarrival time.
* Usually, the distribution of time between intervals is of little concern in a random walk;
emphasis is placed on the value (position) S_n after n transitions. Often, it is assumed that
this distribution of interval time is memoryless, thereby making the random walk a special
case of Markov processes; we are more generous in our definition here and permit an
arbitrary distribution.
† It may be considered to be a special case of the random walk as defined in (f) above. A
renewal process is occasionally referred to as a recurrent process.
Figure 2.4 Relationships among the interesting random processes, drawn as a
Venn diagram. SMP: Semi-Markov process (p_ij arbitrary, f_τ arbitrary); MP:
Markov process (p_ij arbitrary, f_τ memoryless); RW: Random walk (p_ij = q_{j−i},
f_τ arbitrary); RP: Renewal process (q_1 = 1, f_τ arbitrary); BD: Birth-Death
Process.
So there we have it: a self-consistent classification of some interesting
stochastic processes. In order to aid the reader in understanding the relation-
ship among Markov processes, semi-Markov processes, and their special
cases, we have prepared the diagram of Figure 2.4, which shows this relation-
ship for discrete-state systems. The figure is in the form of a Venn diagram.
Moreover, the symbol p_ij denotes the probability of making a transition next
to state j given that the process is currently in state i. Also, f_τ denotes the
distribution of time between transitions; to say that "f_τ is memoryless"
implies that if it is a discrete-time process, then f_τ is a geometric distribution,
whereas if it is a continuous-time process, then f_τ is an exponential distri-
bution. Furthermore, it is implied that f_τ may be a function both of the
current and the next state for the process.
The figure shows that birth-death processes form a subset of Markov
processes, which themselves form a subset of the class of semi-Markov
processes. Similarly, renewal processes form a subset of random walk
processes, which also are a subset of semi-Markov processes. Moreover,
there are some renewal processes that may also be classified as birth-death
processes. Similarly, those Markov processes for which p_ij = q_{j−i} (that is,
where the transition probabilities depend only upon the difference of the
indices) overlap those random walks where f_τ is memoryless. A random walk
for which f_τ is memoryless and for which q_{j−i} = 0 when |j − i| > 1 overlaps
the class of birth-death processes. If in addition to this last requirement our
random walk has q_1 = 1, then we have a process that lies at the intersection
of all five of the processes shown in the figure. This is referred to as a "pure
birth" process; although f_τ must be memoryless, it may be a distribution
which depends upon the state itself. If f_τ is independent of the state (thus
giving a constant "birth rate"), then we have a process that is figuratively and
literally at the "center" of the study of stochastic processes and enjoys the
nice properties of each! This very special case is referred to as the Poisson
process and plays a major role in queueing theory. We shall develop its
properties later in this chapter.
So much for the classification of stochastic processes at this point. Let us
now elaborate upon the definition and properties of discrete-state Markov
processes. This will lead us naturally into some of the elementary queueing
systems. Some of the required theory behind the more sophisticated contin-
uous-state Markov processes will be developed later in this work as the need
arises. We begin with the simpler discrete-state, discrete-time Markov
chains in the next section and follow that with a section on discrete-state,
continuous-time Markov chains.
2.3. DISCRETE-TIME MARKOV CHAINS*
As we have said, Markov processes may be used to describe the motion of
a particle in some space. We now consider discrete-time Markov chains,
which permit the particle to occupy discrete positions and permit transitions
between these positions to take place only at discrete times. We present the
elements of the theory by carrying along the following contemporary example.
Consider the hippie who hitchhikes from city to city across the country.
Let X_n denote the city in which we find our hippie at noon on day n. When
he is in some particular city i, he will accept the first ride leaving in the
evening from that city. We assume that the travel time between any two cities
is negligible. Of course, it is possible that no ride comes along, in which
case he will remain in city i until the next evening. Since vehicles heading for
various neighboring cities come along in some unpredictable fashion, the
hippie's position at some time in the future is clearly a random variable.
It turns out that this random variable may properly be described through the
use of a Markov chain.

* See footnote on p. 19.
We have the following definition:

DEFINITION: The sequence of random variables X_1, X_2, ... forms a
discrete-time Markov chain if for all n (n = 1, 2, ...) and all possible
values of the random variables we have (for i_1 < i_2 < ... < i_n) that

    P[X_n = j | X_1 = i_1, X_2 = i_2, ..., X_{n−1} = i_{n−1}]
        = P[X_n = j | X_{n−1} = i_{n−1}]    (2.38)
In terms of our example, this definition merely states that the city next to be
visited by the hippie depends only upon the city in which he is currently
located and not upon all the previous cities he has visited. In this sense the
memory of the random process, or Markov chain, goes back only to the
most recent position of the particle (hippie). When X_n = j (the hippie is in
city j on day n), then the system is said to be in state E_j at time n (or at the
nth step). To get our hippie started on day 0 we begin with some initial prob-
ability distribution P[X_0 = j]. The expression on the right side of Eq. (2.38)
is referred to as the (one-step) transition probability and gives the conditional
probability of making a transition from state E_{i_{n−1}} at step n − 1 to state E_j
at the nth step in the process. It is clear that if we are given the initial state
probability distribution and the transition probabilities, then we can
uniquely find the probability of being in various states at time n [see Eqs.
(2.55) and (2.56) below].
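Given the one-step transition probabilities and an initial distribution, sampling a trajectory of such a chain is mechanical. The following sketch is our illustration, not from the text; the 3×3 matrix used corresponds to the three-city road network of Figure 2.5 developed later in this section, but any row-stochastic matrix would serve.

```python
# Sketch: sampling a trajectory X_0, X_1, ..., X_n of a discrete-time
# Markov chain from an initial distribution P[X_0 = j] and a transition
# matrix P whose (i, j) entry is P[X_n = j | X_{n-1} = i].
import random

# transition matrix of the Figure 2.5 example (states 0, 1, 2)
P = [[0.0,  0.75, 0.25],
     [0.25, 0.0,  0.75],
     [0.25, 0.25, 0.5 ]]

def sample_path(P, initial, n_steps, rng):
    state = rng.choices(range(len(initial)), weights=initial)[0]
    path = [state]
    for _ in range(n_steps):
        # the next state depends only on the current one: the Markov property
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

rng = random.Random(0)
path = sample_path(P, [1.0, 0.0, 0.0], 10, rng)   # start in city 0
assert path[0] == 0 and all(s in (0, 1, 2) for s in path)
```

Note that the sampler consults only the current state when drawing the next one, which is exactly the dependency structure of Eq. (2.38).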
If it turns out that the transition probabilities are independent of n, then
we have what is referred to as a homogeneous Markov chain, and in that case
we make the further definition

    p_ij ≜ P[X_n = j | X_{n−1} = i]    (2.39)

which gives the probability of going to state E_j on the next step, given that
we are currently at state E_i. What follows refers to homogeneous Markov
chains only. These chains are such that their transition probabilities are
stationary with time*; therefore, given the current city or state (pun!), the
probability of various states m steps into the future depends only upon m and
not upon the current time. It is expedient to define the m-step transition
probabilities as

    p_ij^(m) ≜ P[X_{n+m} = j | X_n = i]    (2.40)

From the Markov property given in Eq. (2.38) it is easy to establish the follow-
ing recursive formula for calculating p_ij^(m):

    p_ij^(m) = Σ_k p_ik^(m−1) p_kj        m = 2, 3, ...    (2.41)
This equation merely says that if we are to travel from E_i to E_j in m steps,
then we must do so by first traveling from E_i to some state E_k in m − 1
steps and then from E_k to E_j in one more step; the probability of these last
two independent events (remember this is a Markov chain) is the product of
the probability of each, and if we sum this product over all possible inter-
mediate states E_k, we arrive at p_ij^(m).

* Note that although this is a Markov process with stationary transitions, it need not be a
stationary random process.
We say that a Markov chain is irreducible* if every state can be reached
from every other state; that is, for each pair of states (E_i and E_j) there exists
an integer m_0 (which may depend upon i and j) such that

    p_ij^(m_0) > 0
Further, let A be the set of all states in a Markov chain. Then a subset of
states A_1 is said to be closed if no one-step transition is possible from any
state in A_1 to any state in A_1^c (the complement of the set A_1). If A_1 consists of
a single state, say E_j, then it is called an absorbing state; a necessary and
sufficient condition for E_j to be an absorbing state is p_jj = 1. If A is closed
and does not contain any proper subset which is closed, then we have an
irreducible Markov chain as defined above. On the other hand, if A contains
proper subsets that are closed, then the chain is said to be reducible. If a
closed subset of a reducible Markov chain contains no closed subsets of
itself, then it is referred to as an irreducible sub-Markov chain; these subchains
may be studied independently of the other states.
It may be that our hippie prefers not to return to a previously visited city.
However, due to his mode of travel this may well happen, and it is important
for us to define this quantity. Accordingly, let

    f_j^(n) ≜ P[first return to E_j occurs n steps after leaving E_j]

It is then clear that the probability of our hippie ever returning to city j is
given by

    f_j = Σ_{n=1}^∞ f_j^(n) = P[ever returning to E_j]

It is now possible to classify states of a Markov chain according to the value
obtained for f_j. In particular, if f_j = 1, then state E_j is said to be recurrent;
if, on the other hand, f_j < 1, then state E_j is said to be transient. Furthermore,
if the only possible steps at which our hippie can return to state E_j are
γ, 2γ, 3γ, ... (where γ > 1 and is the largest such integer), then state E_j is
said to be periodic with period γ; if γ = 1, then E_j is aperiodic.
Considering states for which f_j = 1, we may then define the mean recurrence
time of E_j as

    M_j = Σ_{n=1}^∞ n f_j^(n)    (2.42)

* Many of the interesting Markov chains which one encounters in queueing theory are
irreducible.
This is merely the average time to return to E_j. With this we may then classify
states even further. In particular, if M_j = ∞, then E_j is said to be recurrent
null, whereas if M_j < ∞, then E_j is said to be recurrent nonnull. Let us define
π_j^(n) to be the probability of finding the system in state E_j at the nth step,
that is,

    π_j^(n) ≜ P[X_n = j]    (2.43)
We may now state (without proof) two important theorems. The first
comments on the set of states for an irreducible Markov chain.

Theorem 1  The states of an irreducible Markov chain are either all
transient, or all recurrent nonnull, or all recurrent null. If periodic, then
all states have the same period γ.

Assuming that our hippie wanders forever, he will pass through the various
cities of the nation many times, and we inquire as to whether or not there
exists a stationary probability distribution {π_j} describing his probability of
being in city j at some time arbitrarily far into the future. [A probability
distribution P_j is said to be a stationary distribution if, when we choose it for
our initial state distribution (that is, π_j^(0) = P_j), then for all n we will have
π_j^(n) = P_j.] Solving for {π_j} is a most important part of the analysis of
Markov chains. Our second theorem addresses itself to this question.
Theorem 2  In an irreducible and aperiodic homogeneous Markov
chain the limiting probabilities

    π_j = lim_{n→∞} π_j^(n)    (2.44)

always exist and are independent of the initial state probability distri-
bution. Moreover, either

(a) all states are transient or all states are recurrent null, in which
case π_j = 0 for all j and there exists no stationary distribution,
or

(b) all states are recurrent nonnull and then π_j > 0 for all j, in
which case the set {π_j} is a stationary probability distribution and

    π_j = 1 / M_j    (2.45)

In this case the quantities π_j are uniquely determined through the
following equations:

    1 = Σ_i π_i    (2.46)

    π_j = Σ_i π_i p_ij    (2.47)
We now introduce the notion of ergodicity. A state E_j is said to be ergodic
if it is aperiodic, recurrent, and nonnull; that is, if f_j = 1, M_j < ∞, and
γ = 1. If all states of a Markov chain are ergodic, then the Markov chain
itself is said to be ergodic. Moreover, a Markov chain is said to be ergodic if
the probability distribution {π_j^(n)} as a function of n always converges to a
limiting stationary distribution {π_j}, which is independent of the initial state
distribution. It is easy to show that all states of a finite* aperiodic irreducible
Markov chain are ergodic. Moreover, among Foster's criteria [FELL 66]
it can be shown that an irreducible and aperiodic Markov chain is ergodic if
the set of linear equations given in Eq. (2.47) has a nonnull solution for which
Σ_j |π_j| < ∞. The limiting probabilities {π_j} of an ergodic Markov chain are
often referred to as the equilibrium probabilities in the sense that the effect of
the initial state distribution π_j^(0) has disappeared.
By way of example, let us place the hippie in our fictitious land of Hatafla,
and let us consider the network given in Figure 1.1 of Chapter 1. In order to
simplify this example we will assume that the cities of Nonabel, Cadabra,
and Oriac have been bombed out and that the resultant road network is as
given in Figure 2.5. In this figure the ordered links represent permissible
directions of road travel; the numbers on these links represent the probability
(p_ij) that the hippie will be picked up by a car traveling over that road, given
that he is hitchhiking from the city where the arrow emanates. Note that
from the city of Sucsamad our hippie has probability 1/2 of remaining in that
city until the next day. Such a diagram is referred to as a state-transition
diagram. The parenthetical numbers following the cities will henceforth be
used instead of the city names.

Figure 2.5 A Markov chain. [State-transition diagram for the three cities
Abra (0), Zeus (1), and Sucsamad (2).]
* A finite Markov chain is one with a finite number of states. If an irreducible Markov
chain is of type (a) in Theorem 2 (i.e., recurrent null or transient), then it cannot be finite.
In order to continue our example we now define, in general, the transition
probability matrix P as consisting of elements p_ij, that is,

    P ≜ [p_ij]    (2.48)

If we further define the probability vector π as

    π ≜ [π_0, π_1, π_2, ...]    (2.49)

then we may rewrite the set of relations in Eq. (2.47) as

    π = πP    (2.50)

For our example shown in Figure 2.5 we have

        | 0    3/4  1/4 |
    P = | 1/4  0    3/4 |
        | 1/4  1/4  1/2 |
and so we may solve Eq. (2.50) by considering the three equations derivable
from it, that is,

    π_0 = 0·π_0 + (1/4)π_1 + (1/4)π_2
    π_1 = (3/4)π_0 + 0·π_1 + (1/4)π_2    (2.51)
    π_2 = (1/4)π_0 + (3/4)π_1 + (1/2)π_2

Note from Eq. (2.51) that the first of these three equations equals the negative
sum of the second and third, indicating that there is a linear dependence
among them. It always will be the case that one of the equations will be
linearly dependent on the others, and it is therefore necessary to introduce the
additional conservation relationship as given in Eq. (2.46) in order to solve
the system. In our example we then require

    1 = π_0 + π_1 + π_2    (2.52)
Thus the solution is obtained by simultaneously solving any two of the
equations given by Eq. (2.51) along with Eq. (2.52). Solving, we obtain

    π_0 = 1/5 = 0.20
    π_1 = 7/25 = 0.28    (2.53)
    π_2 = 13/25 = 0.52

This gives us the equilibrium (stationary) state probabilities. It is clear that
this is an ergodic Markov chain (it is finite and irreducible).
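The solution of Eq. (2.53) is easily checked against the stationarity equations (2.46) and (2.47). A small sketch (ours, not from the text):

```python
# Check (sketch) that pi = (1/5, 7/25, 13/25) from Eq. (2.53) satisfies
# pi = pi P with components summing to one, for the Figure 2.5 matrix.
P = [[0.0, 0.75, 0.25], [0.25, 0.0, 0.75], [0.25, 0.25, 0.5]]
pi = [1 / 5, 7 / 25, 13 / 25]          # (0.20, 0.28, 0.52)

assert abs(sum(pi) - 1.0) < 1e-12      # Eq. (2.46): probabilities sum to 1
for j in range(3):                     # Eq. (2.47): pi_j = sum_i pi_i p_ij
    assert abs(pi[j] - sum(pi[i] * P[i][j] for i in range(3))) < 1e-12
```

In practice one would solve the singular system by dropping one row of Eq. (2.51), appending the normalization (2.52), and using any linear solver; the check above confirms the hand-computed answer.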
Often we are interested in the transient behavior of the system. The
transient behavior involves solving for π_j^(n), the probability of finding our
hippie in city j at time n. We also define the probability vector at time n as

    π^(n) ≜ [π_0^(n), π_1^(n), π_2^(n), ...]    (2.54)

Now using the definition of transition probability and making use of Defini-
tion (2.48), we have a method for calculating π^(1) expressible in terms of P
and the initial state distribution π^(0). That is,

    π^(1) = π^(0) P

Similarly, we may calculate the state probabilities at the second step by

    π^(2) = π^(1) P

From this last we can then generalize to the result

    π^(n) = π^(n−1) P        n = 1, 2, ...    (2.55)

which may be solved recursively to obtain

    π^(n) = π^(0) P^n        n = 1, 2, ...    (2.56)

Equation (2.55) gives the general method for calculating the state probabilities
n steps into a process, given a transition probability matrix P and an initial
state vector π^(0). From our earlier definitions, we have the stationary prob-
ability vector

    π = lim_{n→∞} π^(n)

assuming the limit exists. (From Theorem 2, we know that this will be the
case if we have an irreducible aperiodic homogeneous Markov chain.)
Then, from Eq. (2.55) we find

    lim_{n→∞} π^(n) = lim_{n→∞} π^(n−1) P

and so

    π = πP

which is Eq. (2.50) again. Note that the solution for π is independent of the
initial state vector. Applying this to our example, let us assume that our
hippie begins in the city of Abra at time 0 with probability 1, that is,

    π^(0) = [1, 0, 0]    (2.57)

From this we may calculate the sequence of values π^(n), and these are given
in the chart below. The limiting value π as given in Eq. (2.53) is also entered
in this chart.

    n          0     1      2      3      4      ∞
    π_0^(n)    1     0      0.250  0.187  0.203  0.20
    π_1^(n)    0     0.75   0.062  0.359  0.254  0.28
    π_2^(n)    0     0.25   0.688  0.454  0.543  0.52
We may alternatively have chosen to assume that the hippie begins in the
city of Zeus with probability 1, which would give rise to the initial state
vector

    π^(0) = [0, 1, 0]    (2.58)

and which results in the following table:

    n          0     1      2      3      4      ∞
    π_0^(n)    0     0.25   0.187  0.203  0.199  0.20
    π_1^(n)    1     0      0.375  0.250  0.289  0.28
    π_2^(n)    0     0.75   0.438  0.547  0.512  0.52
Similarly, beginning in the city of Sucsamad we find

    π^(0) = [0, 0, 1]    (2.59)

and the following table:

    n          0     1      2      3      4      ∞
    π_0^(n)    0     0.25   0.187  0.203  0.199  0.20
    π_1^(n)    0     0.25   0.313  0.266  0.285  0.28
    π_2^(n)    1     0.50   0.500  0.531  0.516  0.52
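The three charts can be reproduced by iterating Eq. (2.55) directly. A sketch (ours, not from the text) follows.

```python
# Sketch: transient state probabilities via pi^(n) = pi^(n-1) P, Eq. (2.55),
# started from each of the three initial vectors (2.57)-(2.59).
P = [[0.0, 0.75, 0.25], [0.25, 0.0, 0.75], [0.25, 0.25, 0.5]]

def step(pi, P):
    """One application of Eq. (2.55): pi_j <- sum_i pi_i p_ij."""
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

for start in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    pi = [float(x) for x in start]
    for n in range(4):
        pi = step(pi, P)
    # after only four steps every starting vector is already close to
    # the limiting distribution (0.20, 0.28, 0.52) of Eq. (2.53)
    assert all(abs(pi[j] - limit) < 0.03
               for j, limit in enumerate([0.20, 0.28, 0.52]))
```

Printing `pi` after each step recovers the tabulated values (e.g., 0.250, 0.187, 0.203, ... for π_0^(n) when starting in Abra).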
From these calculations we may make a number of observations. First, we
see that after only four steps the quantities π_i^(n) for a given value of i are
almost identical regardless of the city in which we began. The rapidity with
which these quantities converge, as we shall soon see, depends upon the
eigenvalues of P. In all cases, however, we observe that the limiting values at
infinity are rapidly approached and, as stated earlier, are independent of the
initial position of the particle.
In order to get a better physical feel for what is occurring, it is instructive to follow the probabilities for the various states of the Markov chain as time evolves. To this end we introduce the notion of barycentric coordinates, which are extremely useful in portraying probability vectors. Consider a probability vector with N components (i.e., a Markov process with N states in our case) and a tetrahedron in N − 1 dimensions. In our example N = 3 and so our tetrahedron becomes an equilateral triangle in two dimensions. In general, we let the height of this tetrahedron be unity. Any probability vector π^(n) may be represented as a point in this (N − 1)-space by identifying each component of that probability vector with a distance from one face of the tetrahedron. That is, we measure from face j a distance equal to the probability associated with that component π_j^(n); if we do this for each face and therefore for each component, we will specify one point within the tetrahedron and that point correctly identifies our probability vector. Each unique probability vector will map into a unique point in this space, and it is easy to determine the probability measure from its location in that space. In our example we may plot the three initial state vectors as given in Eqs. (2.57)-(2.59) as shown in Figure 2.6. The numbers in parentheses represent which probability components are to be measured from the face associated with those numbers. The initial state vector corresponding to Eq. (2.59), for
[Figure 2.6: an equilateral triangle of unit height whose vertices are the probability vectors [1, 0, 0], [0, 1, 0], and [0, 0, 1], containing successively smaller nested triangles.]
Figure 2.6 Representation of the convergence of a Markov chain.
example, will appear at the apex of the triangle and is indicated as such. In our earlier calculations we followed the progress of our probability vectors beginning with three initial state probability vectors. Let us now follow these paths simultaneously and observe, for example, that the vector [0, 0, 1], following Eq. (2.59), moves to the position [0.25, 0.25, 0.5]; the vector [0, 1, 0] moves to the position [0.25, 0, 0.75]; and the vector [1, 0, 0] moves to the position [0, 0.75, 0.25]. Now it is clear that had we started with an initial state vector anywhere within the original equilateral triangle, that point would have been mapped into the interior of the smaller triangle, which now joins the three points just referred to and which represents possible positions of the original state vectors. We note from the figure that this new triangle is a shrunken version of the original triangle. If we now continue to map these three points into the second step of the process as given by the three charts above, we find an even smaller triangle interior to both the first and the second triangles, and this region represents the possible locations of any original state vector after two steps into the process. Clearly, this shrinking will continue until we reach a convergent point. This convergent point will in the limit be exactly that given by Eq. (2.53)! Thus we can see the way in which the possible positions of our probability vectors move around in our space.
The calculation of the transient response π^(n) from Eqs. (2.55) or (2.56) is extremely tedious if we desire more than just the first few terms. In order to obtain the general solution, we often resort to transform methods. Below we demonstrate this method in general and then apply it to our hippie hitchhiking example. This will give us an opportunity to apply the z-transform calculations that we have introduced in Appendix I.* Our point of departure is Eq. (2.55). That equation is a difference equation among vectors. The fact that it is a difference equation suggests the use of z-transforms as in Appendix I, and so we naturally define the following vector transform (the vectors in no way interfere with our transform approach except that we must be careful when taking inverses):

    Π(z) ≜ Σ_{n=0}^∞ π^(n) z^n        (2.60)

This transform will certainly exist in the unit disk, that is, |z| ≤ 1. We now apply the transform method to Eq. (2.55) over its range of application (n = 1, 2, ...); this we do by first multiplying that equation by z^n and then summing from 1 to infinity, thus

    Σ_{n=1}^∞ π^(n) z^n = Σ_{n=1}^∞ π^(n−1) P z^n
* The steps involved in applying this method are summarized on pp. 74-75 of this chapter.
We have now reduced our infinite set of difference equations to a single algebraic equation. Following through with our method we must now try to identify our vector transform Π(z). Our left-hand side contains all but the initial term of this transform. The parenthetical term on the right-hand side of this last equation is recognized as Π(z) simply by changing the index of summation. Thus we find

    Π(z) − π(0) = zΠ(z)P

z is merely a scalar in this vector equation and may be moved freely across vectors and matrices. Solving this matrix equation we immediately come up with a general solution for our vector transform:

    Π(z) = π(0)[I − zP]^(−1)        (2.61)

where I is the identity matrix and the (−1) notation implies the matrix inverse. If we can invert this equation, we will have, by the uniqueness of transforms, the transient solution; that is, using the double-headed, double-barred arrow notation as in Appendix I to denote transform pairs, we have

    Π(z) ⇔ π^(n) = π(0)P^n        (2.62)

In this last we have taken advantage of Eq. (2.56). Comparing Eqs. (2.61) and (2.62) we have the obvious transform pair

    [I − zP]^(−1) ⇔ P^n        (2.63)
Of course P" is precisely what we are looking for in order to obta in our
transient solution since thi s will directly give us 7t( n l from Eq. (2.56). All that
is required, therefore, is that we form the matri x inverse indi cated in Eq.
(2.63). In general this becomes a rather complex task when the number of
states in our Markov chain is at all lar ge. Nevertheless, this is one formal
procedure for carrying out the transient analysis.
Let us apply these techniques to our hippie hitchhiking example. Recall that the transition probability matrix P was given by

    P = [[0,   3/4, 1/4],
         [1/4, 0,   3/4],
         [1/4, 1/4, 1/2]]
First we must form

    I − zP = [[1,        −(3/4)z,  −(1/4)z],
              [−(1/4)z,  1,        −(3/4)z],
              [−(1/4)z,  −(1/4)z,  1 − (1/2)z]]
Next, in order to find the inverse of this matrix we must form its determinant thus:

    det(I − zP) = 1 − (1/2)z − (7/16)z² − (1/16)z³

which factors nicely into

    det(I − zP) = (1 − z)[1 + (1/4)z]²
It is easy to show that z = 1 is always a root of the determinant for an irreducible Markov chain (and, as we shall see, gives rise to our equilibrium solution). We now proceed with the calculation of the matrix inverse using the usual methods to arrive at
    [I − zP]^(−1) = 1/{(1 − z)[1 + (1/4)z]²} ×

        [[1 − (1/2)z − (3/16)z²,  (3/4)z − (5/16)z²,      (1/4)z + (9/16)z²],
         [(1/4)z + (1/16)z²,      1 − (1/2)z − (1/16)z²,  (3/4)z + (1/16)z²],
         [(1/4)z + (1/16)z²,      (1/4)z + (3/16)z²,      1 − (3/16)z²]]
Having found the matrix inverse, we are now faced with finding the inverse transform of this matrix, which will yield P^n. This we do as usual by carrying out a partial-fraction expansion (see Appendix I). The fact that we have a matrix presents no problem; we merely note that each element in the matrix is itself a rational function of z which must be expanded in partial fractions term by term. (This task is simplified if the matrix is written as the sum of three matrices: a constant matrix; a constant matrix times z; and a constant matrix times z².) Since we have three roots in the denominator of our rational functions we expect three terms in our partial-fraction expansion. Carrying
out this expansion and separating the three terms we find

    [I − zP]^(−1) = [1/(1 − z)] (1/25) [[5, 7, 13],
                                        [5, 7, 13],
                                        [5, 7, 13]]
                  + [1/(1 + z/4)²] (1/5) [[0, −8, 8],
                                          [0, 2, −2],
                                          [0, 2, −2]]
                  + [1/(1 + z/4)] (1/25) [[20, 33, −53],
                                          [−5, 8, −3],
                                          [−5, −17, 22]]        (2.64)
We observe immediately from this expansion that the matrix associated with the root (1 − z) gives precisely the equilibrium solution we found by direct methods [see Eq. (2.53)]; the fact that each row of this matrix is identical reflects the fact that the equilibrium solution is independent of the initial state. The other matrices associated with roots greater than unity in absolute value will always be what are known as differential matrices (each of whose rows must sum to zero). Inverting on z we finally obtain (by our tables in Appendix I)
P" ~ ;,[:
This is then the complete solution since application of Eq. (2.56) directly gives π^(n), which is the transient solution we were seeking. Note that for n = 0 we obtain the identity matrix, whereas for n = 1 we must, of course, obtain the transition probability matrix P. Furthermore, we see that in this case we have two transient matrices, which decay in the limit leaving only the constant matrix representing our equilibrium solution. When we think about the decay of the transient, we are reminded of the shrinking triangles in Figure 2.6. Since the transients decay at a rate related to the characteristic values (one over the zeros of the determinant) we therefore expect the permitted positions in Figure 2.6 to decay with n in a similar fashion. In fact, it can be shown that these triangles shrink by a constant factor each time n increases by 1. This shrinkage factor for any Markov process can be shown to be equal to the absolute value of the product of the characteristic values of its transition probability matrix; in our example we have characteristic values equal to 1, −1/4, and −1/4. Their product is 1/16 and this indeed is the factor by which the area of our triangles decreases each time n is increased.
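The closed form in Eq. (2.65) can be checked exactly with rational arithmetic. In this added sketch (not part of the original text) the factors 1/5 and 1/25 are folded into the matrices B and C:

```python
from fractions import Fraction as F

P = [[F(0), F(3, 4), F(1, 4)],
     [F(1, 4), F(0), F(3, 4)],
     [F(1, 4), F(1, 4), F(1, 2)]]

A = [[F(5, 25), F(7, 25), F(13, 25)],
     [F(5, 25), F(7, 25), F(13, 25)],
     [F(5, 25), F(7, 25), F(13, 25)]]          # equilibrium matrix
B = [[F(0), F(-8, 5), F(8, 5)],
     [F(0), F(2, 5), F(-2, 5)],
     [F(0), F(2, 5), F(-2, 5)]]                # (1/5) folded in
C = [[F(20, 25), F(33, 25), F(-53, 25)],
     [F(-5, 25), F(8, 25), F(-3, 25)],
     [F(-5, 25), F(-17, 25), F(22, 25)]]       # (1/25) folded in

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def closed_form(n):
    r = F(-1, 4) ** n
    return [[A[i][j] + (n + 1) * r * B[i][j] + r * C[i][j]
             for j in range(3)] for i in range(3)]

Pn = [[F(int(i == j)) for j in range(3)] for i in range(3)]   # P^0 = I
for n in range(6):
    assert Pn == closed_form(n), n
    Pn = matmul(Pn, P)
print("Eq. (2.65) agrees with P^n for n = 0..5")
```

Using exact fractions makes the comparison an identity rather than a floating-point approximation; n = 0 recovers I and n = 1 recovers P, as the text observes.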
This method of transform analysis is extended in two excellent volumes by Howard [HOWA 71] in which he treats such problems and discusses additional approaches such as the flow-graph method of analysis.
Throughout this discussion of discrete-time Markov chains we have not explicitly addressed ourselves to the memoryless property* of the time that the system spends in a given state. Let us now prove that the number of time units that the system spends in the same state is geometrically distributed; the geometric distribution is the unique discrete memoryless distribution. Let us assume the system has just entered state E_i. It will remain in this state at the next step with probability p_ii; similarly, it will leave this state at the next step with probability 1 − p_ii. If indeed it does remain in this state at the next step, then the probability of its remaining for an additional step is again p_ii, and similarly the conditional probability of its leaving at this second step is given by 1 − p_ii. And so it goes. Furthermore, due to the Markov property, the fact that it has remained in a given state for a known number of steps in no way affects the probability that it leaves at the next step. Since these probabilities are independent, we may then write

    P[system remains in E_i for exactly m additional steps
      given that it has just entered E_i] = (1 − p_ii) p_ii^m        (2.66)

This, of course, is the geometric distribution as we claimed. A similar argument will be given later for the continuous-time Markov chain.
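Eq. (2.66) is easy to confirm by simulation. In this added sketch a chain stays in its current state with probability p_ii = 0.7 at each step (a made-up value), and the observed holding-time pmf is compared with (1 − p_ii)p_ii^m:

```python
import random

random.seed(1)
p_ii = 0.7          # made-up probability of remaining in the state
trials = 200_000

counts = {}
for _ in range(trials):
    m = 0
    while random.random() < p_ii:   # remain for another step
        m += 1
    counts[m] = counts.get(m, 0) + 1

for m in range(4):
    empirical = counts.get(m, 0) / trials
    theory = (1 - p_ii) * p_ii ** m           # Eq. (2.66)
    assert abs(empirical - theory) < 0.01, (m, empirical, theory)
print("holding times are geometric with pmf (1 - p_ii) * p_ii**m")
```

Because each "stay or leave" decision is made independently of how long the chain has already been in the state, the simulated holding times match the geometric pmf.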
So far we have concerned ourselves principally with homogeneous Markov processes. Recall that a homogeneous Markov chain is one for which the transition probabilities are independent of time. Among the quantities we were able to calculate was the m-step transition probability p_ij^(m), which gave the probability of passing from state E_i to state E_j in m steps; the recursive formula for this calculation was given in Eq. (2.41). We now wish to take a more general point of view and permit the transition probabilities to depend upon time. We intend to derive a relationship not unlike Eq. (2.41), which will form our point of departure for many further developments in the application of Markov processes to queueing problems. For the time being we continue to restrict ourselves to discrete-time, discrete-state Markov chains.

Generalizing the homogeneous definition for the multistep transition probabilities given in Eq. (2.40) we now define

    p_ij(m, n) ≜ P[X_n = j | X_m = i]        (2.67)

which gives the probability that the system will be in state E_j at step n, given that it was in state E_i at step m, where n ≥ m.

* The memoryless property is discussed in some detail later.

As discussed in the homogeneous case, it certainly must be true that if our process goes from state E_i at time m to state E_j at time n, then at some intermediate time q it must have passed through some state E_k. This is depicted in Figure 2.7. In this figure we have shown four sample paths of a stochastic process as it moves from state E_i at time m to state E_j at time n. We have plotted the state of the process vertically and the discrete time steps horizontally. (We take the liberty of drawing continuous curves rather than a sequence of points for convenience.) Note that sample paths a and b both pass through state E_k at time q, whereas sample paths c and d pass through other intermediate states at time q. We are certain of one thing only, namely, that we must pass through some intermediate state at time q.

[Figure 2.7: four sample paths a, b, c, d of a stochastic process plotted as state versus time step, moving from state E_i at time m to state E_j at time n, with the intermediate time q marked.]
Figure 2.7 Sample paths of a stochastic process.

We may then express p_ij(m, n) as the sum of probabilities for all of these (mutually exclusive) intermediate states; that is,

    p_ij(m, n) = Σ_k P[X_n = j, X_q = k | X_m = i]        (2.68)

for m ≤ q ≤ n. This last equation must hold for any stochastic process (not necessarily Markovian) since we are considering all mutually exclusive and exhaustive possibilities. From the definition of conditional probability we may rewrite this last equation as

    p_ij(m, n) = Σ_k P[X_q = k | X_m = i] P[X_n = j | X_m = i, X_q = k]        (2.69)

Now we invoke the Markov property and observe that

    P[X_n = j | X_m = i, X_q = k] = P[X_n = j | X_q = k]

Applying this to Eq. (2.69) and making use of our definition in Eq. (2.67) we finally arrive at

    p_ij(m, n) = Σ_k p_ik(m, q) p_kj(q, n)        (2.70)
for m ≤ q ≤ n. Equation (2.70) is known as the Chapman-Kolmogorov equation for discrete-time Markov processes. Were this a homogeneous Markov chain then from the definition in Eq. (2.40) we would have the relationship p_ij(m, n) = p_ij^(n−m), and in the case when n = q + 1 our Chapman-Kolmogorov equation would reduce to our earlier Eq. (2.41). The Chapman-Kolmogorov equation states that we can partition any (n − m)-step transition probability into the sum of products of a (q − m)- and an (n − q)-step transition probability to and from the intermediate states that might have been occupied at some time q within the interval. Indeed we are permitted to choose any partitioning we wish, and we will take advantage of this shortly.
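The partitioning in Eq. (2.70) can be verified numerically. This added sketch uses the homogeneous hitchhiking chain, for which p_ij(m, n) depends only on n − m, and checks the partition at m = 0, q = 2, n = 5:

```python
P = [[0, 3/4, 1/4],
     [1/4, 0, 3/4],
     [1/4, 1/4, 1/2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def power(P, n):
    R = [[float(i == j) for j in range(3)] for i in range(3)]
    for _ in range(n):
        R = matmul(R, P)
    return R

# p_ij(0, 5) should equal sum_k p_ik(0, 2) p_kj(2, 5)
H05, H02, H25 = power(P, 5), power(P, 2), power(P, 3)
for i in range(3):
    for j in range(3):
        partition = sum(H02[i][k] * H25[k][j] for k in range(3))
        assert abs(H05[i][j] - partition) < 1e-12
print("Chapman-Kolmogorov partition holds for m = 0, q = 2, n = 5")
```

The same check passes for any choice of q in the interval, which is exactly the freedom of partitioning the text mentions.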
It is convenient at this point to write the Chapman-Kolmogorov equation in matrix form. We have in the past defined P as the matrix containing the elements p_ij in the case of a homogeneous Markov chain. Since these quantities may now depend upon time, we define P(n) to be the one-step transition probability matrix at time n, that is,

    P(n) ≜ [p_ij(n, n + 1)]        (2.71)

Of course, P(n) = P if the chain is homogeneous. Also, for the homogeneous case we found that the n-step transition probability matrix was equal to P^n. In the nonhomogeneous case we must make a new definition and for this purpose we use the symbol H(m, n) to denote the following multistep transition probability matrix:

    H(m, n) ≜ [p_ij(m, n)]        (2.72)

Note that H(n, n + 1) = P(n) and that in the homogeneous case H(m, m + n) = P^n. With these definitions we may then rewrite the Chapman-Kolmogorov equation in matrix form as

    H(m, n) = H(m, q)H(q, n)        (2.73)

for m ≤ q ≤ n. To complete the definition we require that H(n, n) = I, where I is the identity matrix. All of the matrices we are considering are square matrices with dimensionality equal to the number of states of the Markov chain. A solution to Eq. (2.73) will consist of expressing H(m, n) in terms of the given matrices P(n).
As mentioned above, we are free to choose q to lie anywhere in the interval between m and n. Let us begin by choosing q = n − 1. In this case Eq. (2.70) becomes

    p_ij(m, n) = Σ_k p_ik(m, n − 1) p_kj(n − 1, n)        (2.74)

which in matrix form may be written as

    H(m, n) = H(m, n − 1)P(n − 1)        (2.75)
Equations (2.74) and (2.75) are known as the forward Chapman-Kolmogorov equations for discrete-time Markov chains since they are written at the forward (most recent time) end of the interval. On the other hand, we could have chosen q = m + 1, in which case we obtain

    p_ij(m, n) = Σ_k p_ik(m, m + 1) p_kj(m + 1, n)        (2.76)

whose matrix form is

    H(m, n) = P(m)H(m + 1, n)        (2.77)

These last two are referred to as the backward Chapman-Kolmogorov equations since they occur at the backward (oldest time) end of the interval. Since the forward and backward equations both describe the same discrete-time Markov chain, we would expect their solutions to be the same, and indeed this is the case. The general form of the solution is

    H(m, n) = P(m)P(m + 1) ··· P(n − 1),    m ≤ n − 1        (2.78)

That this solves Eqs. (2.75) and (2.77) may be established by direct substitution. We observe in the homogeneous case that this yields H(m, n) = P^(n−m)
as we have seen earlier. By similar arguments we find that the time-dependent probabilities {π_j^(n)} defined earlier may now be obtained through the following equation:

    π^(n+1) = π^(n)P(n)

whose solution is

    π^(n+1) = π^(0)P(0)P(1) ··· P(n)        (2.79)

These last two equations correspond to Eqs. (2.55) and (2.56), respectively, for the homogeneous case. The Chapman-Kolmogorov equations give us a means for describing the time-dependent probabilities of many interesting queueing systems that we develop in later chapters.*
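Eqs. (2.78) and (2.79) may be illustrated with a small nonhomogeneous chain. The two-state matrices P(0), P(1), P(2) below are made up for the purpose of this added sketch:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vecmat(v, M):
    n = len(v)
    return [sum(v[k] * M[k][j] for k in range(n)) for j in range(n)]

# Hypothetical one-step matrices P(0), P(1), P(2); any stochastic matrices do.
Ps = [[[0.9, 0.1], [0.2, 0.8]],
      [[0.5, 0.5], [0.3, 0.7]],
      [[0.6, 0.4], [0.1, 0.9]]]

# H(0, 3) = P(0) P(1) P(2), as in Eq. (2.78)
H03 = Ps[0]
for Pn in Ps[1:]:
    H03 = matmul(H03, Pn)

# pi^(3) two ways: via H(0, 3), and by stepping pi^(n+1) = pi^(n) P(n)
pi0 = [1.0, 0.0]
via_H = vecmat(pi0, H03)
pi = pi0
for Pn in Ps:
    pi = vecmat(pi, Pn)
assert all(abs(a - b) < 1e-12 for a, b in zip(via_H, pi))
assert abs(sum(pi) - 1.0) < 1e-12
print("pi^(3) =", [round(x, 4) for x in pi])   # -> pi^(3) = [0.34, 0.66]
```

Stepping one matrix at a time and multiplying the matrices first give the same answer, which is precisely the content of Eq. (2.79).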
Before leaving discrete-time Markov chains, we wish to introduce the special case of discrete-time birth-death processes. A birth-death process is an example of a Markov process that may be thought of as modeling changes in the size of a population. In what follows we say that the system is in state E_k when the population consists of k members. We further assume that changes in population size occur by at most one; that is, a "birth" will change the population's size to one greater, whereas a "death" will lower the population size to one less. In considering birth-death processes we do not permit multiple births or bulk disasters; such possibilities will be considered

* It is clear from this development that all Markov processes must satisfy the Chapman-Kolmogorov equations. Let us note, however, that processes that satisfy the Chapman-Kolmogorov equation are not necessarily Markov processes; see, for example, p. 203 of [PARZ 62].
later in the text and correspond to random walks. We will consider the Markov chain to be homogeneous in that the transition probabilities p_ij do not change with time; however, certainly they will be a function of the state of the system. Thus we have that for our discrete-time birth-death process

    p_ij = { d_i,            j = i − 1
           { 1 − b_i − d_i,  j = i
           { b_i,            j = i + 1
           { 0,              otherwise        (2.80)

Here d_i is the probability that at the next time step a single death will occur, driving the population size down to i − 1, given that the population size now is i. Similarly, b_i is the probability that a single birth will occur, given that the current size is i, thereby driving the population size to i + 1 at the next time step. 1 − b_i − d_i is the probability that neither of these events will occur and that at the next time step the population size will not change. Only these three possibilities are permitted. Clearly d_0 = 0, since we can have no deaths when there is no one in the population to die. However, contrary to intuition we do permit b_0 > 0; this corresponds to a birth when there are no members in the population. Whereas this may seem to be spontaneous generation, or perhaps divine creation, it does provide a meaningful model in terms of queueing theory. The model is as follows: The population corresponds to the customers in the queueing system; a death corresponds to a customer departure from that system; and a birth corresponds to a customer arrival to that system. Thus we see it is perfectly feasible to have an arrival (a birth) to an empty system! The stationary probability transition matrix for the general birth-death process then appears as follows:
    P = [[1 − b_0,  b_0,            0,              0,    0,   ··· ],
         [d_1,      1 − b_1 − d_1,  b_1,            0,    0,   ··· ],
         [0,        d_2,            1 − b_2 − d_2,  b_2,  0,   ··· ],
         [                           ···                            ],
         [0,  ···,  d_i,            1 − b_i − d_i,  b_i,  0,   ··· ],
         [                           ···                            ]]

If we are dealing with a finite chain, then the last row of this matrix would be [0 0 ··· 0 d_N 1 − d_N], which illustrates the fact that no births are permitted when the population has reached its maximum size N. We see that the P
matrix has nonzero terms only along the main diagonal and along the diagonals directly above and below it. This is a highly specialized form for the transition probability matrix, and as such we might expect that it can be solved. To solve the birth-death process means to find the solution for the state probabilities π^(n). As we have seen, the general form of solution for these probabilities is given in Eqs. (2.55) and (2.56) and the equation that describes the limiting solution (as n → ∞) is given in Eq. (2.50). We also demonstrated earlier the z-transform method for finding the solution. Of course, due to this special structure of the birth-death transition matrix, we might expect a more explicit solution. We defer discussion of the solution to the material on continuous-time Markov chains, which we now investigate.
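As an added illustration, the following sketch builds a finite discrete-time birth-death chain with made-up values b_i and d_i, iterates π^(n+1) = π^(n)P to its limit, and checks the balance relation π_i b_i = π_{i+1} d_{i+1}, which the limiting probabilities of such a chain satisfy:

```python
N = 4                            # maximum population size (made up)
b = [0.3, 0.3, 0.3, 0.3, 0.0]    # birth probabilities; b_N = 0 at capacity
d = [0.0, 0.2, 0.2, 0.2, 0.2]    # death probabilities; d_0 = 0 when empty

# Build the tridiagonal transition matrix of Eq. (2.80)
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    if i > 0:
        P[i][i - 1] = d[i]
    if i < N:
        P[i][i + 1] = b[i]
    P[i][i] = 1.0 - b[i] - d[i]

pi = [1.0] + [0.0] * N           # start with an empty system
for _ in range(2000):
    pi = [sum(pi[k] * P[k][j] for k in range(N + 1)) for j in range(N + 1)]

# Limiting probabilities balance probability flow between adjacent states
for i in range(N):
    assert abs(pi[i] * b[i] - pi[i + 1] * d[i + 1]) < 1e-9
print("limiting distribution:", [round(x, 4) for x in pi])
```

With b_i/d_{i+1} = 1.5 here, each successive state is 1.5 times as likely as its predecessor in the limit, foreshadowing the explicit product-form solution developed later for birth-death processes.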
2.4. CONTINUOUS-TIME MARKOV CHAINS*
If we allow our particle in motion to occupy positions (take on values) from a discrete set, but permit it to change positions or states at any point in time, then we say we have a continuous-time Markov chain. We may continue to use our example of the hippie hitchhiking from city to city, where now his transitions between cities may occur at any time of day or night. We let X(t) denote the city in which we find our hippie at time t. X(t) will take on values from a discrete set, which we will choose to be the ordered integers and which will be in one-to-one correspondence with the cities which our hippie may visit.
In the case of a continuous-time Markov chain, we have the following definition:

DEFINITION: The random process X(t) forms a continuous-time Markov chain if for all integers n and for any sequence t_1, t_2, ..., t_{n+1} such that t_1 < t_2 < ··· < t_{n+1} we have

    P[X(t_{n+1}) = j | X(t_1) = i_1, X(t_2) = i_2, ..., X(t_n) = i_n]
        = P[X(t_{n+1}) = j | X(t_n) = i_n]        (2.81)
This definition† is the continuous-time version of that given in Eq. (2.38). The interpretation here is also the same, namely, that the future of our hippie's travels depends upon the past only through the current city in which we find him. The development of the theory for continuous time parallels that for discrete time quite directly as one might expect and, therefore, our explanations will be a bit more concise. Moreover, we will not overly concern

* See footnote on p. 19.
† An alternate definition for a discrete-state continuous-time Markov process is that the following relation must hold:

    P[X(t) = j | X(τ) for τ_1 ≤ τ ≤ τ_2 < t] = P[X(t) = j | X(τ_2)]
ourselves with some of the deeper questions of convergence of limits in passing from discrete to continuous time; for a careful treatment the reader is referred to [PARZ 62, FELL 66].

Earlier we stated for any Markov process that the time which the process spends in any state must be "memoryless"; this implies that the discrete-time Markov chains must have geometrically distributed state times [which we have already proved in Eq. (2.66)] and that continuous-time Markov chains must have exponentially distributed state times. Let us now prove this last statement. For this purpose let τ_i be a random variable that represents the time which the process spends in state E_i. Recall the Markov property which states that the way in which the past trajectory of the process influences the future development is completely specified by giving the current state of the process. In particular, we need not specify how long the process has been in its current state. This means that the remaining time in E_i must have a distribution that depends only upon i and not upon how long the process has been in E_i. We may write this in the following form:
    P[τ_i > s + t | τ_i > s] = h(t)

where h(t) is a function only of the additional time t (and not of the expended time s).* We may rewrite this conditional probability as follows:

    P[τ_i > s + t | τ_i > s] = P[τ_i > s + t, τ_i > s] / P[τ_i > s]
                             = P[τ_i > s + t] / P[τ_i > s]

This last step follows since the event τ_i > s + t implies the event τ_i > s. Rewriting this last equation and introducing h(t) once again we find

    P[τ_i > s + t] = P[τ_i > s] h(t)        (2.82)

Setting s = 0 and observing that P[τ_i > 0] = 1 we have immediately that

    P[τ_i > t] = h(t)

Using this last equation in Eq. (2.82) we then obtain

    P[τ_i > s + t] = P[τ_i > s] P[τ_i > t]        (2.83)

for s, t ≥ 0. (Setting s = t = 0 we again require P[τ_i > 0] = 1.) We now show that the only continuous distribution satisfying Eq. (2.83) is the

* The symbol s is used as a time variable in this section only and should not be confused with its use as a transform variable elsewhere.
exponential distribution. First we have, by definition, the following general relationship:

    d/dt (P[τ_i > t]) = d/dt (1 − P[τ_i ≤ t])
                      = −f_{τ_i}(t)        (2.84)

where we use the notation f_{τ_i}(t) to denote the pdf for τ_i. Now let us differentiate Eq. (2.83) with respect to s, yielding

    d P[τ_i > s + t] / ds = −f_{τ_i}(s) P[τ_i > t]

where we have taken advantage of Eq. (2.84). Dividing both sides by P[τ_i > t] and setting s = 0 we have

    d P[τ_i > t] / P[τ_i > t] = −f_{τ_i}(0) dt

If we integrate this last from 0 to t we obtain

    ln P[τ_i > t] = −f_{τ_i}(0) t

or

    P[τ_i > t] = e^{−f_{τ_i}(0) t}

Now we use Eq. (2.84) again to obtain the pdf for τ_i as

    f_{τ_i}(t) = f_{τ_i}(0) e^{−f_{τ_i}(0) t}        (2.85)

which holds for t ≥ 0. There we have it: the pdf for the time the process spends in state E_i is exponentially distributed with the parameter f_{τ_i}(0), which may depend upon the state E_i. We will have much more to say about this exponential distribution and its importance in Markov processes shortly.
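The defining property (2.83) is immediate for the exponential survivor function, as this added numerical check illustrates (the rate a is arbitrary):

```python
import math

a = 1.7                         # arbitrary rate; plays the role of f_tau(0)
def survivor(t):
    """P[tau > t] for an exponentially distributed holding time."""
    return math.exp(-a * t)

for s in (0.0, 0.5, 2.0):
    for t in (0.0, 0.3, 1.1):
        # Eq. (2.83): the time already spent in the state is forgotten
        assert abs(survivor(s + t) - survivor(s) * survivor(t)) < 1e-12
print("P[tau > s + t] = P[tau > s] P[tau > t] for all s, t tested")
```

The check works because e^{−a(s+t)} = e^{−as} e^{−at}; the derivation above shows the converse, that no other continuous distribution has this factoring property.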
In the case of a discrete-time homogeneous Markov chain we defined the transition probabilities as p_ij = P[X_n = j | X_{n−1} = i] and also the m-step transition probabilities as p_ij^(m) = P[X_{n+m} = j | X_n = i]; these quantities were independent of n due to the homogeneity of the Markov chain. In the case of the nonhomogeneous Markov chain we found it necessary to identify points along the time axis in an absolute fashion and were led to the important transition probability definition p_ij(m, n) = P[X_n = j | X_m = i]. In a completely analogous way we must now define for our continuous-time Markov chains the following time-dependent transition probability:

    p_ij(s, t) ≜ P[X(t) = j | X(s) = i]        (2.86)

where X(t) is the position of the particle at time t ≥ s. Just as we considered three successive time instants m ≤ q ≤ n for the discrete case, we may
consider the following three successive time instants for our continuous-time chain: s ≤ u ≤ t. We may then refer back to Figure 2.7 and identify some sample paths for what we will now consider to be a continuous-time Markov chain; the critical observation once again is that in passing from state E_i at time s to state E_j at time t, the process must pass through some intermediate state E_k at the intermediate time u. We then proceed exactly as we did in deriving Eq. (2.70) and arrive at the following Chapman-Kolmogorov equation for continuous-time Markov chains:

    p_ij(s, t) = Σ_k p_ik(s, u) p_kj(u, t)        (2.87)

where i, j = 0, 1, 2, .... We may put this equation into matrix form if we first define the matrix consisting of elements p_ij(s, t) as

    H(s, t) ≜ [p_ij(s, t)]        (2.88)

Then the Chapman-Kolmogorov equation becomes

    H(s, t) = H(s, u)H(u, t)        (2.89)

[We define H(t, t) = I, the identity matrix.]
In the case of a homogeneous discrete-time Markov chain we found that the matrix equation π = πP had to be investigated in order to determine if the chain was ergodic, and so on; also, the transient solution in the nonhomogeneous case could be determined from π^(n+1) = π^(0)P(0)P(1) ··· P(n), which was given in terms of the time-dependent transition probabilities p_ij(m, n). For the continuous-time Markov chain the one-step transition probabilities are replaced by the infinitesimal rates to be defined below; as we shall see they are given in terms of the time derivative of p_ij(s, t) as t → s.
What we wish now to do is to form the continuous-time analog of the forward and backward equations. So far we have reached Eq. (2.89), which is analogous to Eq. (2.73) in the discrete-time case. We wish to extract the analog for Eqs. (2.74)-(2.77), which show both the term-by-term and matrix forms of the forward and backward equations, respectively. We choose to do this in the case of the forward equation, for example, by starting with Eq. (2.75), namely, H(m, n) = H(m, n − 1)P(n − 1), and allowing the unit time interval to shrink toward zero. To this end we use this last equation and form the following difference:

    H(m, n) − H(m, n − 1) = H(m, n − 1)P(n − 1) − H(m, n − 1)
                          = H(m, n − 1)[P(n − 1) − I]        (2.90)
We must now consider some limits. Just as in the discrete case we defined P(n) = H(n, n + 1), we find it convenient in this continuous-time case to define the following matrix:

    P(t) ≜ [p_ij(t, t + Δt)]        (2.91)

Furthermore we identify the matrix H(s, t) as the limit of H(m, n) as our time interval shrinks; similarly we see that the limit of P(n) will be P(t). Returning to Eq. (2.90) we now divide both sides by the time step, which we denote by Δt, and take the limit as Δt → 0. Clearly then the left-hand side limits to the derivative, resulting in

    ∂H(s, t)/∂t = H(s, t)Q(t),    s ≤ t        (2.92)

where we have defined the matrix Q(t) as the following limit:

    Q(t) = lim_{Δt→0} [P(t) − I]/Δt        (2.93)

This matrix Q(t) is known as the infinitesimal generator of the transition matrix function H(s, t). Another more descriptive name for Q(t) is the transition rate matrix; we will use both names interchangeably. The elements of Q(t), which we denote by q_ij(t), are the rates that we referred to earlier. They are defined as follows:

    q_jj(t) = lim_{Δt→0} [p_jj(t, t + Δt) − 1]/Δt        (2.94)

    q_ij(t) = lim_{Δt→0} p_ij(t, t + Δt)/Δt,    i ≠ j        (2.95)
These limits have the following interpretation. If the system at time t is in state E_i then the probability that a transition occurs (to any state other than E_i) during the interval (t, t + Δt) is given by −q_ii(t)Δt + o(Δt).* Thus we may say that −q_ii(t) is the rate at which the process departs from state E_i when it is in that state. Similarly, given that the system is in state E_i at time t, the conditional probability that it will make a transition from this state to state E_j in the time interval (t, t + Δt) is given by q_ij(t)Δt + o(Δt). Thus

* As usual, the notation o(Δt) denotes any function that goes to zero with Δt faster than Δt itself, that is,

    lim_{Δt→0} o(Δt)/Δt = 0

More generally, one states that the function g(t) is o(y(t)) as t → t_1 if

    lim_{t→t_1} |g(t)/y(t)| = 0

See also Chapter 8, p. 284 for a definition of O(·).
q_ij(t) is the rate at which the process moves from E_i to E_j, given that the system is currently in the state E_i. Since it is always true that Σ_j p_ij(s, t) = 1, we see that Eqs. (2.94) and (2.95) imply that

    Σ_j q_ij(t) = 0    for all i        (2.96)

Thus we have interpreted the terms in Eq. (2.92); this is nothing more than the forward Chapman-Kolmogorov equation for the continuous-time Markov chain.
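Eqs. (2.93)-(2.96) can be illustrated by building Q from the transition probabilities over a small interval Δt. The two-state rates a and b below are made up for this added sketch:

```python
a, b = 2.0, 3.0      # made-up departure rates from states 0 and 1
dt = 1e-6

# p_ij(t, t + dt) to first order in dt for a two-state chain
Pdt = [[1 - a * dt, a * dt],
       [b * dt, 1 - b * dt]]

# Eq. (2.93): Q = lim (P(dt) - I) / dt
Q = [[(Pdt[i][j] - (1.0 if i == j else 0.0)) / dt for j in range(2)]
     for i in range(2)]

assert abs(Q[0][0] + a) < 1e-6 and abs(Q[0][1] - a) < 1e-6  # Eqs. (2.94)-(2.95)
assert all(abs(sum(row)) < 1e-6 for row in Q)               # Eq. (2.96)
print("Q =", [[round(q, 6) for q in row] for row in Q])
```

The diagonal entries come out negative (the departure rates with a minus sign) and each row of Q sums to zero, exactly as Eq. (2.96) requires.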
In a similar fashion, beginning with Eq. (2.77) we may derive the backward Chapman-Kolmogorov equation

    ∂H(s, t)/∂s = −Q(s)H(s, t),    s ≤ t        (2.97)

The forward and backward matrix equations just derived may be expressed through their individual terms as follows. The forward equation gives us [with the additional condition that the passage to the limit in Eq. (2.95) is uniform in i for fixed j]

    ∂p_ij(s, t)/∂t = q_jj(t) p_ij(s, t) + Σ_{k≠j} q_kj(t) p_ik(s, t)        (2.98)

The initial state E_i at the initial time s affects the solution of this set of differential equations only through the initial conditions

    p_ij(s, s) = { 1,  if j = i
                 { 0,  if j ≠ i

From the backward matrix equation we obtain

    ∂p_ij(s, t)/∂s = −q_ii(s) p_ij(s, t) − Σ_{k≠i} q_ik(s) p_kj(s, t)        (2.99)

The "initial" conditions for this equation are

    p_ij(t, t) = { 1,  if i = j
                 { 0,  if i ≠ j

These equations [(2.98) and (2.99)] uniquely determine the transition probabilities p_ij(s, t) and must, of course, also satisfy Eq. (2.87) as well as the initial conditions.
In matrix notation we may exhibit the solution to the forward and backward Eqs. (2.92) and (2.97), respectively, in a straightforward manner; the result is*

    H(s, t) = exp [∫_s^t Q(u) du]    (2.100)

We observe that this solution also satisfies Eq. (2.89) and is a continuous-time analog to the discrete-time solution given in Eq. (2.78).
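As a concrete illustration (not from the text), in the homogeneous case Eq. (2.100) reduces to H(t) = e^{Qt}, which can be evaluated directly from the power-series definition of the matrix exponential. The two-state generator below uses arbitrary illustrative rates:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(Q, t, terms=40):
    """H(t) = e^{Qt} = I + Qt + (Qt)^2/2! + ... (power-series definition)."""
    n = len(Q)
    Qt = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    H = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in H]
    for k in range(1, terms):
        term = mat_mul(term, Qt)                        # term <- term * Qt
        term = [[x / k for x in row] for row in term]   # term = (Qt)^k / k!
        H = [[H[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return H

# Illustrative two-state chain: leave state 0 at rate 1, leave state 1 at
# rate 2; each row of Q sums to zero, as Eq. (2.96) requires.
Q = [[-1.0,  1.0],
     [ 2.0, -2.0]]
H = mat_exp(Q, 0.5)  # each row of H(0.5) is a probability distribution
```

For this two-state chain the answer is known in closed form, p_00(t) = 2/3 + (1/3)e^{-3t}, which the series reproduces to machine precision.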
Now for the state probabilities themselves: in analogy with π_j^(n) we now define

    π_j(t) ≜ P[X(t) = j]    (2.101)

as well as the vector of these probabilities

    π(t) ≜ [π_0(t), π_1(t), π_2(t), ...]    (2.102)

If we are given the initial state distribution π(0), then we can solve for the time-dependent state probabilities from

    π(t) = π(0)H(0, t)    (2.103)

where a general solution may be seen from Eq. (2.100) to be

    π(t) = π(0) exp [∫_0^t Q(u) du]    (2.104)
This corresponds to the discrete-time solution given in Eq. (2.79). The matrix differential equation corresponding to Eq. (2.103) is easily seen to be

    dπ(t)/dt = π(t)Q(t)

This last is similar in form to Eq. (2.92) and may be expressed in terms of its elements as

    dπ_j(t)/dt = q_jj(t) π_j(t) + Σ_{k≠j} q_kj(t) π_k(t)    (2.105)
The similarity between Eqs. (2.105) and (2.98) is not accidental. The latter describes the probability that the process is in state E_j at time t given that it was in state E_i at time s. The former merely gives the probability that the system is in state E_j at time t; information as to where the process began is given in the initial state probability vector π(0). If indeed π_k(0) = 1 for k = i and π_k(0) = 0 for k ≠ i, then we are stating for sure that the system was in state E_i at time 0. In this case π_j(t) will be identically equal to p_ij(0, t).
Both forms for this probability are often used; the form p_ij(s, t) is used when we want to specifically show the initial state; the form π_j(t) is used when we choose to neglect or imply the initial state.

* The expression e^{Pt}, where P is a square matrix, is defined as the following matrix power series:

    e^{Pt} = I + Pt + P^2 (t^2/2!) + P^3 (t^3/3!) + ···
We now consider the case where our continuous-time Markov chain is homogeneous. In this case we drop the dependence upon time and adopt the following notation:

    p_ij(t) ≜ p_ij(s, s + t)    (2.106)
    q_ij ≜ q_ij(t)    i, j = 1, 2, ...    (2.107)
    H(t) ≜ H(s, s + t) = [p_ij(t)]    (2.108)
    Q ≜ Q(t) = [q_ij]    (2.109)

In this case we may list in rapid order the corresponding results. First, the Chapman-Kolmogorov equations become

    p_ij(s + t) = Σ_k p_ik(s) p_kj(t)    (2.110)

and in matrix form*

    H(s + t) = H(s)H(t)

The forward and backward equations become, respectively,

    dp_ij(t)/dt = q_jj p_ij(t) + Σ_{k≠j} q_kj p_ik(t)    (2.111)

and

    dp_ij(t)/dt = q_ii p_ij(t) + Σ_{k≠i} q_ik p_kj(t)

and in matrix form these become, respectively,

    dH(t)/dt = H(t)Q    (2.112)

and

    dH(t)/dt = QH(t)    (2.113)

with the common initial condition H(0) = I. The solution for this matrix is given by

    H(t) = e^{Qt}
Now for the state probabilities themselves we have the differential equation

    dπ_j(t)/dt = q_jj π_j(t) + Σ_{k≠j} q_kj π_k(t)    (2.114)

which in matrix form is

    dπ(t)/dt = π(t)Q

* The corresponding discrete-time result is simply P^{m+n} = P^m P^n.
For an irreducible homogeneous Markov chain it can be shown that the following limits always exist and are independent of the initial state of the chain, namely,

    lim_{t→∞} p_ij(t) = π_j

This set {π_j} will form the limiting state probability distribution. For an ergodic Markov chain we will have the further limit, which will be independent of the initial distribution, namely,

    lim_{t→∞} π_j(t) = π_j

This limiting distribution is given uniquely as the solution of the following system of linear equations:

    q_jj π_j + Σ_{k≠j} q_kj π_k = 0    (2.115)

In matrix form this last equation may be expressed as

    πQ = 0    (2.116)

where we have used the obvious notation π = [π_0, π_1, π_2, ...]. This last equation coupled with the probability conservation relation, namely,

    Σ_j π_j = 1    (2.117)

uniquely gives us our limiting state probabilities. We compare Eq. (2.116) with our earlier equation for discrete-time Markov chains, namely, π = πP; there P was the matrix of transition probabilities, whereas the infinitesimal generator Q is a matrix of transition rates.
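A sketch of how Eqs. (2.116) and (2.117) are solved in practice: since the equations πQ = 0 are linearly dependent, we replace one of them by the normalization and solve the resulting linear system. The three-state generator below is illustrative, not from the text:

```python
def stationary_distribution(Q):
    """Solve pi Q = 0 together with sum(pi) = 1 [Eqs. (2.116)-(2.117)].

    pi Q = 0 reads, column by column: sum_k pi_k * Q[k][j] = 0.  For an
    irreducible chain one of these equations is redundant, so we replace
    the last one by the normalization condition.
    """
    n = len(Q)
    A = [[Q[k][j] for k in range(n)] for j in range(n)]  # equation j = column j of Q
    b = [0.0] * n
    A[n - 1] = [1.0] * n   # replace the (dependent) last equation by sum(pi) = 1
    b[n - 1] = 1.0
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

# Illustrative three-state generator (rows sum to zero, as Eq. (2.96) requires).
Q = [[-3.0,  2.0,  1.0],
     [ 1.0, -2.0,  1.0],
     [ 1.0,  1.0, -2.0]]
pi = stationary_distribution(Q)
```

For this Q the solution works out by hand to π = [1/4, 5/12, 1/3].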
This completes our discussion of discrete-state Markov chains. In the table on pp. 402-403 we summarize the major results for the four cases considered here. For a further discussion, the reader is referred to [BHAR 60].

Having considered discrete-state Markov chains (both in discrete and continuous time), it would seem natural that we next consider continuous-state Markov processes. This we will not do, but rather we postpone consideration of such material until we require it [viz., in Chapter 5 we consider Takács' integrodifferential equation for M/G/1, and in Chapter 2 (Volume II) we develop the Fokker-Planck equation for use in the diffusion approximation for queues]. One would further expect that following the study of Markov processes we would then investigate renewal processes, random walks, and finally, semi-Markov processes. Here too, we choose to postpone such discussions until they are needed later in the text (e.g., the discussion in Chapter 5 of Markov chains imbedded in semi-Markov processes).
Indeed it is fair to say that much of the balance of this textbook depends upon additional material from the theory of stochastic processes and will be developed as needed. For the time being we choose to specialize the results we have obtained for continuous-time Markov chains to the class of birth-death processes, which, as we have forewarned, play a major role in queueing systems analysis. This will lead us directly to the important Poisson process.
2.5. BIRTH-DEATH PROCESSES
Earlier in this chapter we said that a birth-death process is the special case of a Markov process in which transitions from state E_k are permitted only to the neighboring states E_{k+1}, E_k, and E_{k-1}. This restriction permits us to carry the solution much further in many cases. These processes turn out to be excellent models for all of the material we will study under elementary queueing theory in Chapter 3, and as such form our point of departure for the study of queueing systems. The discrete-time birth-death process is of less interest to us than the continuous-time case, and, therefore, discrete-time birth-death processes are not considered explicitly in the following development; needless to say, an almost parallel treatment exists for that case. Moreover, transitions of the form from state E_k back to E_k are of direct interest only in the discrete-time Markov chains; in the continuous-time Markov chains, the rate at which the process returns to the state that it currently occupies is infinite, and the astute reader should have observed that we very carefully subtracted this term out of our definition for q_jj(t) in Eq. (2.94). Therefore, our main interest will focus on continuous-time birth-death processes with discrete state space in which transitions only to neighboring states E_{k+1} or E_{k-1} from state E_k are permitted.*
Earlier we described a birth-death process as one that is appropriate for modeling changes in the size of a population. Indeed, when the process is said to be in state E_k we will let this denote the fact that the population at that time is of size k. Moreover, a transition from E_k to E_{k+1} will signify a "birth" within the population, whereas a transition from E_k to E_{k-1} will denote a "death" in the population.

Thus we consider changes in the size of a population where transitions from state E_k take place to nearest neighbors only. Regarding the nature of births and deaths, we introduce the notion of a birth rate λ_k, which describes the
, which describes the
* This is true in the one-dimensional case. Later, in Chapter 4, we consider multidimensional systems for which the states are described by discrete vectors, and then each state has two neighbors in each dimension. For example, in the two-dimensional case, the state descriptor is a couplet (k_1, k_2) denoted by E_{k1,k2}, whose four neighbors are E_{k1-1,k2}, E_{k1,k2-1}, E_{k1+1,k2}, and E_{k1,k2+1}.
rate at which births occur when the population is of size k. Similarly, we define a death rate μ_k, which is the rate at which deaths occur when the population is of size k. Note that these birth and death rates are independent of time and depend only on E_k; thus we have a continuous-time homogeneous Markov chain of the birth-death type. We adopt this special notation since it leads us directly into the queueing system notation; note that, in terms of our earlier definitions, we have

    λ_k = q_{k,k+1}

and

    μ_k = q_{k,k-1}

The nearest-neighbor condition requires that q_kj = 0 for |k - j| > 1. Moreover, since we have previously shown in Eq. (2.96) that Σ_j q_kj = 0, we then require

    q_kk = -(μ_k + λ_k)    (2.118)
Thus our infinitesimal generator for the general homogeneous birth-death process takes the form

        | -λ_0       λ_0         0           0           0      ···
        |  μ_1    -(λ_1+μ_1)     λ_1         0           0      ···
    Q = |  0         μ_2      -(λ_2+μ_2)     λ_2         0      ···
        |  0         0           μ_3      -(λ_3+μ_3)     λ_3    ···
        |  ···

Note that except for the main, upper, and lower diagonals, all terms are zero.
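The tridiagonal structure just displayed is easy to build programmatically; a minimal sketch, truncating the infinite state space to n states purely for illustration:

```python
def birth_death_generator(lam, mu, n):
    """Build the n x n (truncated) infinitesimal generator Q of a birth-death
    process with birth rates lam[k] and death rates mu[k] (mu[0] is unused,
    since there is no state below E_0).  Truncation to n states is an
    illustrative convenience; the chain itself has infinitely many states."""
    Q = [[0.0] * n for _ in range(n)]
    for k in range(n):
        if k + 1 < n:
            Q[k][k + 1] = lam[k]   # birth: E_k -> E_{k+1} at rate lambda_k
        if k - 1 >= 0:
            Q[k][k - 1] = mu[k]    # death: E_k -> E_{k-1} at rate mu_k
        Q[k][k] = -sum(Q[k])       # Eq. (2.118): q_kk = -(lambda_k + mu_k)
    return Q

# Constant illustrative rates: lambda_k = 2, mu_k = 3.
Q = birth_death_generator([2.0] * 5, [3.0] * 5, 5)
```

Setting the diagonal to the negated row sum enforces Eq. (2.96) automatically, including at the two truncation boundaries.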
To be more explicit, the assumptions we need for the birth-death process are that it is a homogeneous Markov chain X(t) on the states 0, 1, 2, ..., that births and deaths are independent (this follows directly from the Markov property), and that

    B1: P[exactly 1 birth in (t, t + Δt) | k in population] = λ_k Δt + o(Δt)

    D1: P[exactly 1 death in (t, t + Δt) | k in population] = μ_k Δt + o(Δt)

    B2: P[exactly 0 births in (t, t + Δt) | k in population] = 1 - λ_k Δt + o(Δt)

    D2: P[exactly 0 deaths in (t, t + Δt) | k in population] = 1 - μ_k Δt + o(Δt)
2.5. BIRTHDEATH PROCESSES 55
From these assumptions we see that multiple births, multiple deaths, or, in fact, both a birth and a death in a small time interval are prohibited, in the sense that each such multiple event is of order o(Δt).

What we wish to solve for is the probability that the population size is k at some time t; this we denote by*

    P_k(t) ≜ P[X(t) = k]    (2.119)
This calculation could be carried out directly by using our result in Eq. (2.114) for π_j(t) and our specific values for q_ij. However, since the derivation of these equations for the birth-death process is so straightforward and follows from first principles, we choose not to use the heavy machinery we developed in the previous section, which tends to camouflage the simplicity of the basic approach, but rather to rederive them below. The reader is encouraged to identify the parallel steps in this development and compare them to the more general steps taken earlier. Note in terms of our previous definition that P_k(t) = π_k(t). Moreover, we are "suppressing" the initial conditions temporarily and will introduce them only when required.
We begin by expressing the Chapman-Kolmogorov dynamics, which are quite trivial in this case. In particular, we focus on the possible motions of our particle (that is, the number of members in our population) during an interval (t, t + Δt). We will find ourselves in state E_k at time t + Δt if one of the three following (mutually exclusive and exhaustive) eventualities occurred:

1. that we had k in the population at time t and no state changes occurred;

2. that we had k - 1 in the population at time t and we had a birth during the interval (t, t + Δt);

3. that we had k + 1 members in the population at time t and we had one death during the interval (t, t + Δt).
These three cases are portrayed in Figure 2.8. The probability for the first of these possibilities is merely the probability P_k(t) that we were in state E_k at time t times the probability that we moved from state E_k to state E_k (i.e., had neither a birth nor a death) during the next Δt seconds; this is represented by the first term on the right-hand side of Eq. (2.120) below. The second and third terms on the right-hand side of that equation correspond, respectively, to the second and third cases listed above. We need not concern ourselves specifically with transitions from states other than nearest neighbors to state E_k since we have assumed that such transitions in an interval of

* We use X(t) here to denote the number in system at time t to be consistent with the use of X(t) for our general stochastic process. Certainly we could have used N(t) as defined earlier; we use N(t) outside of this chapter.
Figure 2.8 Possible transitions into E_k.
duration Δt are of order o(Δt). Thus we may write

    P_k(t + Δt) = P_k(t) p_{k,k}(Δt)
                + P_{k-1}(t) p_{k-1,k}(Δt)
                + P_{k+1}(t) p_{k+1,k}(Δt)
                + o(Δt)    k ≥ 1    (2.120)
We may add the three probabilities above since these events are clearly mutually exclusive. Of course, Eq. (2.120) only makes sense in the case k ≥ 1, since clearly we could not have had -1 members in the population. For the case k = 0 we need the special boundary equation given by

    P_0(t + Δt) = P_0(t) p_{00}(Δt)
                + P_1(t) p_{10}(Δt)
                + o(Δt)    k = 0    (2.121)

Furthermore, it is also clear for all values of t that we must conserve our probability, and this is expressed in the following equation:

    Σ_{k=0}^∞ P_k(t) = 1    (2.122)
To solve the system represented by Eqs. (2.120)-(2.122) we must make use of our assumptions B1, D1, B2, and D2 in order to evaluate the coefficients in these equations. Carrying out this operation, our equations convert to
    P_k(t + Δt) = P_k(t)[1 - λ_k Δt + o(Δt)][1 - μ_k Δt + o(Δt)]
                + P_{k-1}(t)[λ_{k-1} Δt + o(Δt)]
                + P_{k+1}(t)[μ_{k+1} Δt + o(Δt)]
                + o(Δt)    k ≥ 1    (2.123)

    P_0(t + Δt) = P_0(t)[1 - λ_0 Δt + o(Δt)]
                + P_1(t)[μ_1 Δt + o(Δt)]
                + o(Δt)    k = 0    (2.124)

In Eq. (2.124) we have used the assumption that it is impossible to have a death when the population is of size 0 (i.e., μ_0 = 0) and the assumption that one indeed can have a birth when the population size is 0 (λ_0 ≥ 0). Expanding the right-hand sides of Eqs. (2.123) and (2.124) we have

    P_k(t + Δt) = P_k(t) - (λ_k + μ_k) Δt P_k(t) + λ_{k-1} Δt P_{k-1}(t)
                + μ_{k+1} Δt P_{k+1}(t) + o(Δt)

    P_0(t + Δt) = P_0(t) - λ_0 Δt P_0(t) + μ_1 Δt P_1(t) + o(Δt)
If we now subtract P_k(t) from both sides of each equation and divide by Δt, we have the following:

    [P_k(t + Δt) - P_k(t)]/Δt = -(λ_k + μ_k)P_k(t) + λ_{k-1}P_{k-1}(t)
                              + μ_{k+1}P_{k+1}(t) + o(Δt)/Δt    k ≥ 1    (2.125)

    [P_0(t + Δt) - P_0(t)]/Δt = -λ_0 P_0(t) + μ_1 P_1(t) + o(Δt)/Δt    k = 0    (2.126)

Taking the limit as Δt approaches 0, we see that the left-hand sides of Eqs. (2.125) and (2.126) represent the formal derivative of P_k(t) with respect to t, and also that the term o(Δt)/Δt goes to 0. Consequently, we have the resulting equations:

    dP_k(t)/dt = -(λ_k + μ_k)P_k(t) + λ_{k-1}P_{k-1}(t) + μ_{k+1}P_{k+1}(t)    k ≥ 1
                                                                               (2.127)
    dP_0(t)/dt = -λ_0 P_0(t) + μ_1 P_1(t)    k = 0
The set of equations given by (2.127) is clearly a set of differential-difference equations and represents the dynamics of our probability system; we recognize them as Eq. (2.114) and their solution will give the behavior of P_k(t). It remains for us to solve them. (Note that this set was obtained by essentially using the Chapman-Kolmogorov equations.)
In order to solve Eqs. (2.127) for the time-dependent behavior P_k(t) we now require our initial conditions; that is, we must specify P_k(0) for k = 0, 1, 2, .... In addition, we further require that Eq. (2.122) be satisfied.
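Equations (2.127) can also be integrated numerically. The sketch below uses a simple Euler step on a truncated state space with constant illustrative rates λ_k = 1, μ_k = 2 (these choices are assumptions of this sketch, not of the text); starting from an empty system, the solution relaxes toward the geometric equilibrium P_k = (1/2)^{k+1}:

```python
def step_birth_death(P, lam, mu, dt):
    """One Euler step of Eq. (2.127):
    dP_k/dt = -(lam_k + mu_k)P_k + lam_{k-1}P_{k-1} + mu_{k+1}P_{k+1}."""
    n = len(P)
    new = []
    for k in range(n):
        rate = -(lam[k] + mu[k]) * P[k]
        if k > 0:
            rate += lam[k - 1] * P[k - 1]
        if k + 1 < n:
            rate += mu[k + 1] * P[k + 1]
        new.append(P[k] + dt * rate)
    return new

n = 40
lam = [1.0] * n
lam[n - 1] = 0.0               # truncation: no births out of the last state
mu = [0.0] + [2.0] * (n - 1)   # mu_0 = 0: no deaths from an empty system
P = [1.0] + [0.0] * (n - 1)    # initial condition: empty system
for _ in range(40000):         # integrate out to t = 40
    P = step_birth_death(P, lam, mu, 0.001)
```

With the boundary rates zeroed as above, the Euler step conserves total probability exactly, mirroring Eq. (2.122).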
Let us pause temporarily to describe a simple inspection technique for finding the differential-difference equations given above. We begin by observing that an alternate way for displaying the information contained in the Q matrix is by means of the state-transition-rate diagram. In such a diagram the state E_k is represented by an oval surrounding the number k. Each nonzero infinitesimal rate q_ij (the elements of the Q matrix) is represented in the state-transition-rate diagram by a directed branch pointing from E_i to E_j and labeled with the value q_ij. Furthermore, since it is clear that the terms along the main diagonal of Q contain no new information [see Eqs. (2.96) and (2.118)], we do not include the "self"-loop from E_j back to E_j. Thus the state-transition-rate diagram for the general birth-death process is as shown in Figure 2.9.
In viewing this figure we may truly think of a particle in motion moving among these states; the branches identify the permitted transitions and the branch labels give the infinitesimal rates at which these transitions take place. We emphasize that the labels on the ordered links refer to birth and death rates and not to probabilities. If one wishes to convert these labels to probabilities, one must multiply each by the quantity dt to obtain the probability of such a transition occurring in the next interval of time whose duration is dt. In that case it is also necessary to put self-loops on each state indicating the probability that in the next interval of time dt the system remains in the given state. Note that the state-transition-rate diagram contains exactly the same information as does the transition-rate matrix Q.
Concentrating on state E_k we observe that one may enter it only from state E_{k-1} or from state E_{k+1}, and similarly one leaves state E_k only by entering state E_{k-1} or state E_{k+1}. From this picture we see why such processes are referred to as "nearest-neighbor" birth-death processes.
Since we are considering a dynamic situation, it is clear that the difference between the rate at which the system enters E_k and the rate at which the system leaves E_k must be equal to the rate of change of "flow" into that state. This
Figure 2.9 State-transition-rate diagram for the birth-death process.
notion is crucial and provides for us a simple intuitive means for writing down the equations of motion for the probabilities P_k(t). Specifically, if we focus upon state E_k we observe that the rate at which probability "flows" into this state at time t is given by

    Flow rate into E_k = λ_{k-1}P_{k-1}(t) + μ_{k+1}P_{k+1}(t)

whereas the flow rate out of that state at time t is given by

    Flow rate out of E_k = (λ_k + μ_k)P_k(t)

Clearly the difference between these two is the effective probability flow rate into this state, that is,

    dP_k(t)/dt = λ_{k-1}P_{k-1}(t) + μ_{k+1}P_{k+1}(t) - (λ_k + μ_k)P_k(t)    (2.128)
But this is exactly Eq. (2.127)! Of course, we have not attended to the details for the boundary state E_0, but it is easy to see that the rate argument just given leads to the correct equation for k = 0. Observe that each term in Eq. (2.128) is of the form: probability of being in a particular state at time t multiplied by the infinitesimal rate of leaving that state. It is clear that what we have done is to draw an imaginary boundary surrounding state E_k and have calculated the probability flow rates crossing that boundary, where we place opposite signs on flows entering as opposed to leaving; this total computation is then set equal to the time derivative of the probability of that state.
Actually there is no reason for selecting a single state as the "system" for which the flow equations must hold. In fact, one may enclose any number of states within a contour and then write a flow equation for all flow crossing that boundary. The only danger in dealing with such a conglomerate set is that one may write down a dependent set of equations rather than an independent set; on the other hand, if one systematically encloses each state singly and writes down a conservation law for each, then one is guaranteed to have an independent set of equations for the system, with the qualification that the conservation of probability given by Eq. (2.122) must also be applied.* Thus we have a simple inspection technique for arriving at the equations of motion for the birth-death process. As we shall see later, this approach is perfectly suitable for other Markov processes (including semi-Markov processes) and will be used extensively; these observations also lead us to the notion of global and local balance equations (see Chapter 4).
At this point it is important for the reader to recognize and accept the fact that the birth-death process described above is capable of providing the

* When the number of states is finite (say, K states) then any set of K - 1 single-node state equations will be independent. The additional equation needed is Eq. (2.122).
framework for discussing a large number of important and interesting problems in queueing theory. The direct solution for appropriate special cases of Eq. (2.127) provides for us the transient behavior of these queueing systems and is of less interest to this book than the equilibrium or steady-state behavior of queues.* However, for purposes of illustration and to elaborate further upon these equations, we now consider some important examples.
The simplest system to consider is a pure birth system in which we assume μ_k = 0 for all k (note that we have now entered the next-to-innermost circle in Figure 2.4!). Moreover, to simplify the problem we will assume that λ_k = λ for all k = 0, 1, 2, .... (Now we have entered the innermost circle! We therefore expect some marvelous properties to emerge.) Substituting this into our Eqs. (2.127) we have

    dP_k(t)/dt = -λP_k(t) + λP_{k-1}(t)    k ≥ 1
                                                    (2.129)
    dP_0(t)/dt = -λP_0(t)    k = 0
For simplicity we assume that the system begins at time 0 with 0 members, that is,

    P_k(0) = 1 if k = 0,  0 if k ≠ 0    (2.130)

Solving for P_0(t) we have immediately

    P_0(t) = e^{-λt}

Inserting this last into Eq. (2.129) for k = 1 results in

    dP_1(t)/dt = -λP_1(t) + λe^{-λt}

The solution to this differential equation is clearly

    P_1(t) = λt e^{-λt}

Continuing by induction, then, we finally have as a solution to Eq. (2.129)

    P_k(t) = ((λt)^k / k!) e^{-λt}    k ≥ 0, t ≥ 0    (2.131)
This is the celebrated Poisson distribution. It is a pure birth process with constant birth rate λ and gives rise to a sequence of birth epochs which are said to constitute a Poisson process. Let us study the Poisson process more carefully and show its relationship to the exponential distribution.

* Transient behavior is discussed elsewhere in this text, notably in Chapter 2 (Vol. II). For an excellent treatment the reader is referred to [COHE 69].
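The closed form (2.131) can be checked against a direct numerical integration of Eqs. (2.129); the integration step, the truncation, and the rate λ = 3 below are all illustrative choices, not from the text:

```python
import math

def pure_birth_pmf(lam, t, kmax, steps=100000):
    """Euler-integrate the pure-birth equations (2.129), starting from the
    empty-system initial condition (2.130), truncated at state kmax."""
    P = [1.0] + [0.0] * kmax
    dt = t / steps
    for _ in range(steps):
        new = [P[0] * (1.0 - lam * dt)]                       # dP_0/dt = -lam P_0
        for k in range(1, kmax + 1):
            new.append(P[k] + (-lam * P[k] + lam * P[k - 1]) * dt)
        P = new
    return P

lam, t = 3.0, 2.0
P = pure_birth_pmf(lam, t, kmax=30)
# Closed form (2.131): P_k(t) = (lam t)^k e^{-lam t} / k!
closed = [(lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
          for k in range(31)]
```

The two sets of values agree to within the discretization error of the Euler scheme.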
The Poisson process is central to much of elementary and intermediate queueing theory and is widely used in its development. The special position of this process comes about for two reasons. First, as we have seen, it is the "innermost circle" in Figure 2.4 and, therefore, enjoys a number of marvelous and simplifying analytical and probabilistic properties; this will become undeniably apparent in our subsequent development. The second reason for its great importance is that, in fact, numerous natural physical and organic processes exhibit behavior that is meaningfully modeled by Poisson processes. For example, as Fry [FRY 28] so graphically points out, one of the first observations of the Poisson process was that it properly represented the number of army soldiers killed due to being kicked (in the head?) by their horses. Other examples include the sequence of gamma rays emitted from a radioactive particle, and the sequence of times at which telephone calls are originated in the telephone network. In fact, it was shown by Palm [PALM 43] and Khinchin [KHIN 60] that in many cases the sum of a large number of independent stationary renewal processes (each with an arbitrary distribution of renewal time) will tend to a Poisson process. This is an important limit theorem and explains why Poisson processes appear so often in nature, where the aggregate effect of a large number of individuals or particles is under observation.
Since this development is intended for our use in the study of queueing systems, let us immediately adopt queueing notation and also condition ourselves to discussing a Poisson process as the arrival of customers to some queueing facility rather than as the birth of new members in a population. Thus λ is the average rate at which these customers arrive. With the initial condition in Eq. (2.130), P_k(t) gives the probability that k arrivals occur during the time interval (0, t). It is intuitively clear, since the average arrival rate is λ per second, that the average number of arrivals in an interval of length t must be λt. Let us carry out the calculation of this last intuitive statement. Defining K as the number of arrivals in this interval of length t [previously we used α(t)] we have

    E[K] = Σ_{k=0}^∞ k P_k(t)
         = e^{-λt} Σ_{k=0}^∞ k (λt)^k / k!
         = e^{-λt} λt Σ_{k=1}^∞ (λt)^{k-1} / (k-1)!
         = e^{-λt} λt Σ_{k=0}^∞ (λt)^k / k!
By definition, we know that e^x = 1 + x + x²/2! + ··· and so we get

    E[K] = λt    (2.132)

Thus clearly the expected number of arrivals in (0, t) is equal to λt.
We now proceed to calculate the variance of the number of arrivals. In order to do this we find it convenient to first calculate the following moment:

    E[K(K - 1)] = Σ_{k=0}^∞ k(k - 1) P_k(t)
                = e^{-λt} Σ_{k=0}^∞ k(k - 1) (λt)^k / k!
                = e^{-λt} (λt)² Σ_{k=2}^∞ (λt)^{k-2} / (k-2)!
                = e^{-λt} (λt)² Σ_{k=0}^∞ (λt)^k / k!
                = (λt)²

Now forming the variance in terms of this last quantity and in terms of E[K], we have

    σ_K² = E[K(K - 1)] + E[K] - (E[K])²
         = (λt)² + λt - (λt)²

and so

    σ_K² = λt    (2.133)

Thus we see that the mean and variance of the Poisson process are identical and each equal to λt.
In Figure 2.10 we plot the family of curves P_k(t) as a function of k and as a function of λt (a convenient normalizing form for t).
Recollect from Eq. (II.27) in Appendix II that the z-transform (probability generating function) for the probability mass distribution of a discrete random variable K, where

    g_k = P[K = k]

is given by

    G(z) = E[z^K] = Σ_k z^k g_k

for |z| ≤ 1. Applying this to the Poisson distribution derived above we have

    E[z^K] = Σ_{k=0}^∞ z^k P_k(t)
           = e^{-λt} Σ_{k=0}^∞ (λtz)^k / k!
           = e^{-λt + λtz}
Figure 2.10 The Poisson distribution.
and so

    G(z) = E[z^K] = e^{-λt(1-z)}    (2.134)

We shall make considerable use of this result for the z-transform of a Poisson distribution. For example, we may now easily calculate the mean and variance as given in Eqs. (2.132) and (2.133) by taking advantage of the special properties of the z-transform (see Appendix II) as follows*:

    G^(1)(1) = (∂/∂z) E[z^K] |_{z=1} = E[K]

Applying this to the Poisson distribution, we get

    E[K] = λt e^{-λt(1-z)} |_{z=1} = λt

Also

    σ_K² = G^(2)(1) + G^(1)(1) - [G^(1)(1)]²

Thus, for the Poisson distribution,

    σ_K² = (λt)² e^{-λt(1-z)} |_{z=1} + λt - (λt)²
         = λt

This confirms our earlier calculations.

* The shorthand notation for derivatives given in Eq. (II.25) should be reviewed.
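The mean and variance just derived are easy to confirm numerically from the probability mass function itself (λt = 4 is an arbitrary illustrative value; the sum is truncated where the terms become negligible):

```python
import math

lam_t = 4.0  # lambda * t, illustrative
p = [lam_t ** k * math.exp(-lam_t) / math.factorial(k) for k in range(80)]

mean = sum(k * pk for k, pk in enumerate(p))                  # E[K]
fact2 = sum(k * (k - 1) * pk for k, pk in enumerate(p))       # E[K(K-1)]
var = fact2 + mean - mean ** 2                                # Eq. for sigma_K^2
```

Both `mean` and `var` come out equal to λt, as Eqs. (2.132) and (2.133) assert.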
We have introduced the Poisson process here as a pure birth process, and we have found an expression for P_k(t), the probability distribution for the number of arrivals during a given interval of length t. Now let us consider the joint distribution of the arrival instants when it is known beforehand that exactly k arrivals have occurred during that interval. We break the interval (0, t) into 2k + 1 intervals as shown in Figure 2.11. We are interested in A_k, which is defined to be the event that exactly one arrival occurs in each of the intervals {β_i} and that no arrival occurs in any of the intervals {α_i}. We wish to calculate the probability that the event A_k occurs given that exactly k arrivals have occurred in the interval (0, t); from the definition of conditional probability we thus have

    P[A_k | exactly k arrivals in (0, t)]
        = P[A_k, exactly k arrivals in (0, t)] / P[exactly k arrivals in (0, t)]    (2.135)
When we consider Poisson arrivals in nonoverlapping intervals, we are considering independent events whose joint probability may be calculated as the product of the individual probabilities (i.e., the Poisson process has independent increments). We note from Eq. (2.131), therefore, that

    P[one arrival in interval of length β_i] = λβ_i e^{-λβ_i}

and

    P[no arrival in interval of length α_i] = e^{-λα_i}

Using this in Eq. (2.135) we have directly

    P[A_k | exactly k arrivals in (0, t)]
        = (λβ_1 λβ_2 ··· λβ_k e^{-λβ_1} e^{-λβ_2} ··· e^{-λβ_k})(e^{-λα_1} e^{-λα_2} ··· e^{-λα_{k+1}}) / [((λt)^k / k!) e^{-λt}]
        = β_1 β_2 ··· β_k k! / t^k    (2.136)
On the other hand, let us consider a new process that selects k points in the interval (0, t) independently, where each point is uniformly distributed over this interval. Let us now make the same calculation that we did for the Poisson process, namely,

    P[A_k | exactly k arrivals in (0, t)] = (β_1/t)(β_2/t) ··· (β_k/t) k! = β_1 β_2 ··· β_k k! / t^k    (2.137)
where the term k! comes about since we do not distinguish among the permutations of the k points among the k chosen intervals. We observe that the two conditional probabilities given in Eqs. (2.136) and (2.137) are the same and, therefore, conclude that if an interval of length t contains exactly k arrivals from a Poisson process, then the joint distribution of the instants when these arrivals occurred is the same as the distribution of k points uniformly distributed over the same interval.
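This conditional-uniformity result can be checked by simulation: generate Poisson sample paths, keep only those with exactly k arrivals, and compare the first arrival instant with the known mean t/(k+1) of the smallest of k uniform points on (0, t). All parameter values below are illustrative:

```python
import random

random.seed(1)
lam, t, k_target = 2.0, 5.0, 10
first_arrival = []
while len(first_arrival) < 2000:
    # One Poisson(lam) sample path on (0, t), built from exponential gaps.
    arrivals, now = [], random.expovariate(lam)
    while now <= t:
        arrivals.append(now)
        now += random.expovariate(lam)
    if len(arrivals) == k_target:          # condition on exactly k arrivals
        first_arrival.append(arrivals[0])

avg = sum(first_arrival) / len(first_arrival)
# For k = 10 independent uniforms on (0, 5) the smallest has mean
# t/(k+1) = 5/11, which is what the conditioned Poisson paths should show.
```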
Furthermore, it is easy to show from the properties of our birth process that the Poisson process is one with independent increments; that is, defining X(s, s + t) as the number of arrivals in the interval (s, s + t), then the following is true:

    P[X(s, s + t) = k] = ((λt)^k e^{-λt}) / k!

regardless of the location of this interval.
We would now like to investigate the intimate relationship between the Poisson process and the exponential distribution. This distribution also plays a central role in queueing theory. We consider the random variable t̃, which we recall is the time between adjacent arrivals in a queueing system, and whose PDF and pdf are given by A(t) and a(t), respectively, as already agreed for the interarrival times. From its definition, then, a(t)Δt + o(Δt) is the probability that the next arrival occurs at least t sec and at most (t + Δt) sec from the time of the last arrival.

Since the definition of A(t) is merely the probability that the time between arrivals is ≤ t, it must clearly be given by

    A(t) = 1 - P[t̃ > t]

But P[t̃ > t] is just the probability that no arrivals occur in (0, t), that is, P_0(t). Therefore, we have

    A(t) = 1 - P_0(t)

and so from Eq. (2.131), we obtain the PDF (in the Poisson case)

    A(t) = 1 - e^{-λt}    t ≥ 0    (2.138)

Differentiating, we obtain the pdf

    a(t) = λe^{-λt}    t ≥ 0    (2.139)
This is the well-known exponential distribution; its pdf and PDF are shown in Figure 2.12.

What we have shown by Eqs. (2.138) and (2.139) is that for a Poisson arrival process, the time between arrivals is exponentially distributed; thus we say that the Poisson arrival process has exponential interarrival times.
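The connection between assumptions B1-B2 and the exponential interarrival time can also be seen in a small simulation that approximates the process by Bernoulli trials in slots of width Δt; this discretization, and all parameter values, are assumptions of this sketch, not of the text:

```python
import random

random.seed(7)
lam, dt = 1.5, 0.01
# Discretized B1/B2: in each slot of width dt an arrival occurs with
# probability lam*dt (a good approximation only for small dt).
gaps, since_last = [], 0.0
while len(gaps) < 50000:
    since_last += dt
    if random.random() < lam * dt:
        gaps.append(since_last)
        since_last = 0.0

mean_gap = sum(gaps) / len(gaps)
tail = sum(g > 1.0 for g in gaps) / len(gaps)
# Eqs. (2.138)-(2.139) predict mean 1/lam and P[gap > 1] = exp(-lam).
```

The empirical mean gap approaches 1/λ and the empirical tail P[gap > 1] approaches e^{-λ}, up to the discretization and sampling error.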
Figure 2.12 Exponential distribution: (a) pdf; (b) PDF.
The most amazing characteristic of the exponential distribution is that it
has the remarkable memoryless property, which we introduced in our dis-
cussion of Markov processes. As the name indicates, the past history of a
random variable that is distributed exponentially plays no role in predicting
its future; precisely, we mean the following. Consider that an arrival has just
occurred at time 0. If we inquire as to what our feeling is regarding the dis-
tribution of time until the next arrival, we clearly respond with the pdf
given in Eq. (2.139). Now let some time pass, say, t_0 sec, during which no
arrival occurs. We may at this point in time again ask, "What is the prob-
ability that the next arrival occurs t sec from now?" This question is the same
question we asked at time 0 except we now know that the time between
arrivals is at least t_0 sec. To answer the second question, we carry out the
following calculations:
    P[t̃ ≤ t + t_0 | t̃ > t_0] = P[t_0 < t̃ ≤ t + t_0] / P[t̃ > t_0]
                              = (P[t̃ ≤ t + t_0] − P[t̃ ≤ t_0]) / P[t̃ > t_0]

Due to Eq. (2.138) we then have

    P[t̃ ≤ t + t_0 | t̃ > t_0] = (e^{−λt_0} − e^{−λ(t+t_0)}) / e^{−λt_0}

and so

    P[t̃ ≤ t + t_0 | t̃ > t_0] = 1 − e^{−λt}    (2.140)
This result shows that the distribution of remaining time until the next
arrival, given that t_0 sec has elapsed since the last arrival, is identically equal
to the unconditional distribution of interarrival time. The impact of this
statement is that our probabilistic feeling regarding the time until a future
arrival occurs is independent of how long it has been since the last arrival
occurred. That is, the future of an exponentially distributed random variable
Figure 2.13 The memoryless property of the exponential distribution.
is independent of the past history of that variable and this distribution remains
constant in time. The exponential distribution is the only continuous dis-
tribution with this property. (In the case of a discrete random variable we have
seen that the geometric distribution is the only discrete distribution with
that same property.) We may further appreciate the nature of this memory-
less property by considering Figure 2.13. In this figure we show the exponen-
tial density λe^{−λt}. Now given that t_0 sec has elapsed, in order to calculate the
density function for the time until the next arrival, what one must do is to
take that portion of the density function lying to the right of the point t_0
(shown shaded) and recognize that this region represents our probabilistic
feeling regarding the future; the portion of the density function in the
interval from 0 to t_0 is past history and involves no more uncertainty. In
order to make the shaded region into a bona fide density function, we must
magnify it in order to increase its total area to unity; the appropriate magni-
fication takes place by dividing the function representing the tail of this
distribution by the area of the shaded region (which must, of course, be
P[t̃ > t_0]). This operation is identical to the operation of creating a conditional
distribution by dividing a joint distribution by the probability of the condition.
Thus the shaded region magnifies into the second indicated curve in Figure
2.13. This new function is an exact replica of the original density function as
shown from time 0, except that it is shifted t_0 sec to the right. No other
density function has the property that its tail everywhere possesses the exact
same shape as the entire density function.
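The memoryless property can also be demonstrated by simulation. The sketch below (parameter values λ = 1.5, t_0 = 0.4, t = 0.8, and the seed are illustrative assumptions) estimates the conditional distribution of Eq. (2.140) and compares it with the unconditional one:

```python
import random
import math

random.seed(7)
lam = 1.5
samples = [random.expovariate(lam) for _ in range(500_000)]

t0, t = 0.4, 0.8
# Unconditional: P[interarrival <= t]
p_uncond = sum(x <= t for x in samples) / len(samples)
# Conditional: P[interarrival <= t0 + t | interarrival > t0]
survivors = [x for x in samples if x > t0]
p_cond = sum(x <= t0 + t for x in survivors) / len(survivors)
# Both estimates should match the exponential PDF value 1 - exp(-lambda*t)
print(p_uncond, p_cond, 1 - math.exp(-lam * t))
```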
We now use the memoryless property of the exponential distribution in
order to close the circle regarding the relationship between the Poisson and
exponential distributions. Equation (2.140) gives an expression for the PDF
of the interarrival time conditioned on the fact that it is at least as large as t_0.
Let us position ourselves at time t_0 and ask for the probability that the next
arrival occurs within the next Δt sec. From Eq. (2.140) we have

    P[t̃ ≤ t_0 + Δt | t̃ > t_0] = 1 − e^{−λΔt}
                               = 1 − [1 − λΔt + (λΔt)²/2! − ⋯]
                               = λΔt + o(Δt)    (2.141)
Equation (2.141) tells us, given that an arrival has not yet occurred, that the
probability of it occurring in the next interval of length Δt sec is λΔt +
o(Δt). But this is exactly assumption B_1 from the opening paragraphs of this
section. Furthermore, the probability of no arrival in the interval (t_0, t_0 + Δt)
is calculated as

    P[t̃ > t_0 + Δt | t̃ > t_0] = 1 − P[t̃ ≤ t_0 + Δt | t̃ > t_0]
                               = 1 − (1 − e^{−λΔt})
                               = e^{−λΔt}
                               = 1 − λΔt + (λΔt)²/2! − ⋯
                               = 1 − λΔt + o(Δt)
This corroborates assumption B_2. Furthermore,

    P[2 or more arrivals in (t_0, t_0 + Δt)]
        = 1 − P[none in (t_0, t_0 + Δt)] − P[one in (t_0, t_0 + Δt)]
        = 1 − [1 − λΔt + o(Δt)] − [λΔt + o(Δt)]
        = o(Δt)
This corroborates the "multiple-birth" assumption. Our conclusion, then,
is that the assumption of exponentially distributed interarrival times (which
are independent one from the other) implies that we have a Poisson process,
which implies we have a constant birth rate. The converse implications are
also true. This relationship is shown graphically in Figure 2.14, in which the
symbol ⇔ denotes implication in both directions.
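The three Δt assumptions just corroborated can also be read off directly from the Poisson probabilities P_0(Δt) = e^{−λΔt} and P_1(Δt) = λΔt e^{−λΔt}. The sketch below (with an arbitrary λ = 3) shows numerically that P[one arrival]/Δt → λ while P[two or more]/Δt → 0 as Δt shrinks:

```python
import math

lam = 3.0
for dt in (0.1, 0.01, 0.001):
    p0 = math.exp(-lam * dt)              # no arrival in dt
    p1 = lam * dt * math.exp(-lam * dt)   # exactly one arrival in dt
    p2plus = 1 - p0 - p1                  # two or more arrivals in dt
    # P[one arrival]/dt tends to lambda; P[2 or more]/dt tends to 0
    print(dt, p1 / dt, p2plus / dt)
```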
Let us now calculate the mean and variance for the exponential distribution
as we did for the Poisson process. We shall proceed using two methods (the
direct method and the transform method). We have

    E[t̃] ≜ t̄ = ∫_0^∞ t a(t) dt = ∫_0^∞ t λe^{−λt} dt    (2.142)

We use a trick here to evaluate the (simple) integral by recognizing that the
integrand is no more than the partial derivative of the following integral,
Figure 2.14 The memoryless triangle.
which may be evaluated by inspection:

    ∫_0^∞ tλe^{−λt} dt = −λ (∂/∂λ) ∫_0^∞ e^{−λt} dt = −λ (∂/∂λ)(1/λ)

and so

    t̄ = 1/λ
Thus we have that the average interarrival time for an exponential distribution
is given by 1/λ. This result is intuitively pleasing if we examine Eq. (2.141)
and observe that the probability of an arrival in an interval of length Δt is
given by λΔt [+ o(Δt)]; thus λ itself must be the average rate of arrivals,
and so the average time between arrivals must be 1/λ. In order to evaluate the
variance, we first calculate the second moment for the interarrival time as
follows:

    E[(t̃)²] = ∫_0^∞ t² λe^{−λt} dt = 2/λ²
Thus the variance is given by

    σ_t̃² = E[(t̃)²] − (t̄)² = 2/λ² − (1/λ)²

and so

    σ_t̃² = 1/λ²    (2.143)
As usual, these two moments could more easily have been calculated by
first considering the Laplace transform of the probability density function
for this random variable. The notation for the Laplace transform of the
interarrival pdf is A*(s). In this special case of the exponential distribution we
then have the following:

    A*(s) = ∫_0^∞ e^{−st} a(t) dt = ∫_0^∞ e^{−st} λe^{−λt} dt

and so

    A*(s) = λ/(s + λ)    (2.144)
Equation (2.144) thus gives the Laplace transform for the exponential density
function. From Appendix II we recognize that the mean of this density
function is given by

    t̄ = −dA*(s)/ds |_{s=0} = λ/(s + λ)² |_{s=0} = 1/λ

The second moment is also calculated in a similar fashion:

    E[(t̃)²] = d²A*(s)/ds² |_{s=0} = 2λ/(s + λ)³ |_{s=0}

and so

    E[(t̃)²] = 2/λ²

Thus we see the ease with which moments can be calculated by making use
of transforms.
Note also that the coefficient of variation [see Eq. (II.23)] for the exponen-
tial is

    σ_t̃ / t̄ = 1    (2.145)
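The transform calculations above can be mimicked numerically: replacing the symbolic derivatives of A*(s) = λ/(s + λ) at s = 0 by finite differences (the step size h and λ = 2 below are arbitrary choices) recovers the mean, second moment, and variance:

```python
# Moment extraction from the Laplace transform A*(s) = lambda/(s + lambda),
# using central-difference derivatives at s = 0 in place of symbolic ones.
lam = 2.0
A = lambda s: lam / (s + lam)

h = 1e-4
mean = -(A(h) - A(-h)) / (2 * h)            # -dA*/ds at s=0   -> 1/lambda
second = (A(h) - 2 * A(0) + A(-h)) / h**2   # d2A*/ds2 at s=0  -> 2/lambda^2
var = second - mean**2                       # -> 1/lambda^2, cf. Eq. (2.143)
print(mean, second, var)
```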
It will be of further interest to us later in the text to be able to calculate the
pdf for the time interval X required in order to collect k arrivals from a
Poisson process. Let us define this random variable in terms of the random
variables t̃_n, where t̃_n = time between the nth and (n − 1)th arrival (where the
"zeroth" arrival is assumed to occur at time 0). Thus

    X = t̃_1 + t̃_2 + ⋯ + t̃_k
We define f_X(x) to be the pdf for this random variable. From Appendix II
we should immediately recognize that the density of X is given by the
convolution of the densities on each of the t̃_n's, since they are independently
distributed. Of course, this convolution operation is a bit lengthy to carry
out, so let us use our further result in Appendix II, which tells us that the
Laplace transform of the pdf for the sum of independent random variables is
equal to the product of the Laplace transforms of the density for each. In
our case each t̃_n has a common exponential distribution and therefore the
Laplace transform for the pdf of X will merely be the kth power of A*(s),
where A*(s) is given by Eq. (2.144); that is, defining

    X*(s) = ∫_0^∞ e^{−sx} f_X(x) dx

for the Laplace transform of the pdf of our sum, we have

    X*(s) = [A*(s)]^k

thus

    X*(s) = (λ/(s + λ))^k    (2.146)
We must now invert this transform. Fortunately, we identify the needed
transform pair as entry 10 in Table I.4 of Appendix I. Thus the density
function we are looking for, which describes the time required to observe k
arrivals, is given by

    f_X(x) = (λ(λx)^{k−1}/(k − 1)!) e^{−λx}    x ≥ 0    (2.147)

This family of density functions (one for each value of k) is referred to as the
family of Erlang distributions. We will have considerable use for this family
later when we discuss the method of stages, in Chapter 4.
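The Erlang family can be checked by simulating its construction directly: summing k independent exponential interarrival times should reproduce the mean and CDF implied by Eq. (2.147). In the sketch below, λ, k, the evaluation point x, and the seed are illustrative assumptions; the closed-form CDF follows by integrating Eq. (2.147).

```python
import random
import math

random.seed(3)
lam, k = 2.0, 4
n = 200_000
# X = sum of k independent exponential interarrival times -> Erlang-k, Eq. (2.147)
X = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

emp_mean = sum(X) / n
print(emp_mean, k / lam)   # Erlang mean is k/lambda

x = 2.0
emp_cdf = sum(v <= x for v in X) / n
# Closed-form Erlang CDF obtained by integrating Eq. (2.147):
cdf = 1 - sum(math.exp(-lam * x) * (lam * x) ** m / math.factorial(m)
              for m in range(k))
print(emp_cdf, cdf)
```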
So much for the Poisson arrival process and its relation to the exponential
distribution. Let us now return to the birth-death equations and consider a
more general pure birth process in which we permit state-dependent birth
rates λ_k (for the Poisson process, we had λ_k = λ). We once again insist that
the death rates μ_k = 0. From Eq. (2.127) this yields the set of equations

    dP_0(t)/dt = −λ_0 P_0(t)    k = 0
    dP_k(t)/dt = −λ_k P_k(t) + λ_{k−1} P_{k−1}(t)    k ≥ 1    (2.148)
Again, let us assume the initial distribution as given in Eq. (2.130), which
states that (with probability one) the population begins with 0 members at
time 0. Solving for P_0(t) we have

    P_0(t) = e^{−λ_0 t}

The general solution* for P_k(t) is given below with an explicit expression for
the first two values of k:

    P_k(t) = e^{−λ_k t} [λ_{k−1} ∫_0^t P_{k−1}(x) e^{λ_k x} dx + P_k(0)]    k = 0, 1, 2, …    (2.149)

    P_1(t) = λ_0 (e^{−λ_0 t} − e^{−λ_1 t}) / (λ_1 − λ_0)

    P_2(t) = (λ_0 λ_1 / (λ_1 − λ_0)) [(e^{−λ_0 t} − e^{−λ_2 t}) / (λ_2 − λ_0) − (e^{−λ_1 t} − e^{−λ_2 t}) / (λ_2 − λ_1)]
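The expressions above for P_1(t) and P_2(t) can be verified by integrating Eq. (2.148) directly. The sketch below uses a crude forward-Euler scheme with arbitrary distinct rates λ_0, λ_1, λ_2 (illustrative values, not from the text):

```python
import math

# State-dependent pure-birth rates (illustrative; any distinct values work)
lams = [1.0, 2.0, 3.0]
T, dt = 1.0, 1e-5
steps = int(T / dt)

# Forward-Euler integration of Eq. (2.148) for P0, P1, P2, starting empty
P = [1.0, 0.0, 0.0]
for _ in range(steps):
    d0 = -lams[0] * P[0]
    d1 = -lams[1] * P[1] + lams[0] * P[0]
    d2 = -lams[2] * P[2] + lams[1] * P[1]
    P = [P[0] + dt * d0, P[1] + dt * d1, P[2] + dt * d2]

# Closed forms from Eq. (2.149)
l0, l1, l2 = lams
p1 = l0 * (math.exp(-l0 * T) - math.exp(-l1 * T)) / (l1 - l0)
p2 = (l0 * l1 / (l1 - l0)) * (
    (math.exp(-l0 * T) - math.exp(-l2 * T)) / (l2 - l0)
    - (math.exp(-l1 * T) - math.exp(-l2 * T)) / (l2 - l1)
)
print(P[1], p1)
print(P[2], p2)
```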
As a third example of the time-dependent solution to the birth-death
equations, let us consider a pure death process in which a population is
initiated with, say, N members and all that can happen to this population is
that members die; none are born. Thus λ_k = 0 for all k, and μ_k = μ ≠ 0
* The validity of this solution is easily verified by substituting Eq. (2.149) into Eq. (2.148).
for k = 1, 2, …, N. For this constant death rate process we have

    dP_k(t)/dt = −μP_k(t) + μP_{k+1}(t)    0 < k < N
    dP_N(t)/dt = −μP_N(t)    k = N
    dP_0(t)/dt = μP_1(t)    k = 0

Proceeding as earlier and using induction we obtain the solution

    P_k(t) = ((μt)^{N−k} / (N − k)!) e^{−μt}    0 < k ≤ N
    dP_0(t)/dt = (μ(μt)^{N−1} / (N − 1)!) e^{−μt}    k = 0    (2.150)
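Equation (2.150) is easily checked by simulation, since with a constant death rate the deaths form a Poisson process of rate μ that stops once the population empties. The sketch below uses illustrative values μ = 1, N = 5, t = 2 (assumptions for the demonstration, not from the text):

```python
import random
import math

random.seed(11)
mu, N, t = 1.0, 5, 2.0
trials = 200_000

counts = [0] * (N + 1)
for _ in range(trials):
    # Deaths occur at exponential spacings of rate mu until the population empties
    k, elapsed = N, 0.0
    while k > 0:
        elapsed += random.expovariate(mu)
        if elapsed > t:
            break
        k -= 1
    counts[k] += 1

for k in range(1, N + 1):
    # Eq. (2.150): P_k(t) = (mu t)^(N-k) e^(-mu t) / (N-k)!  for 0 < k <= N
    theory = (mu * t) ** (N - k) * math.exp(-mu * t) / math.factorial(N - k)
    print(k, counts[k] / trials, theory)
```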
Note the similarity of Eq. (2.150) to the Erlang distribution.
The last case we consider is a birth-death process in which all birth
coefficients are equal to λ for k ≥ 0 and all death coefficients are equal to μ
for k ≥ 1. This birth-death process with constant coefficients is of primary
importance and forms perhaps the simplest interesting model of a queueing
system. It is the celebrated M/M/1 queue; recall that the notation denotes a
single-server queue with a Poisson arrival process and an exponential
distribution for service time (from our earlier discussion we recognize that
this is the memoryless system). Thus we may say

    M/M/1:    λ_k = λ,  μ_k = μ  ⇒  A(t) = 1 − e^{−λt},  B(x) = 1 − e^{−μx}    (2.151)
It should be clear why A(t) is of exponential form from our earlier discussion
relating the exponential interarrival distribution with the Poisson arrival
process. In a similar fashion, since the death rate is constant (μ_k = μ,
k = 1, 2, …), the same reasoning leads to the observation that the time
between deaths is also exponentially distributed (in this case with param-
eter μ). However, deaths correspond in the queueing system to service
completions and, therefore, the service-time distribution B(x) must be of
exponential form. The interpretation of the condition μ_0 = 0, which says
that the death rate is zero when the population size is zero, corresponds in
our queueing system to the condition that no service may take place when no
customers are present. The behavior of the system M/M/1 will be studied
throughout this text as we introduce new methods and new measures of
performance; we will constantly check our sophisticated advanced tech-
niques against this example since it affords one of the simplest applications
of many of these advanced methods. Moreover, much of the behavior
manifested in this system is characteristic of more complex queueing system
behavior, and so a careful study here will serve to familiarize the reader with
some important queueing phenomena.
Now for our first exposure to M/M/1 system behavior. From the general
equation for P_k(t) given in Eq. (2.127) we find for this case that the corre-
sponding differential-difference equations are

    dP_k(t)/dt = −(λ + μ)P_k(t) + λP_{k−1}(t) + μP_{k+1}(t)    k ≥ 1
    dP_0(t)/dt = −λP_0(t) + μP_1(t)    k = 0    (2.152)
Many methods are available for solving this set of equations. Here, we choose
to use the method of z-transforms developed in Appendix I. We have already
seen one application of this method earlier in this chapter [when we defined
the transform in Eq. (2.60) and applied it to the system of equations (2.55)
to obtain the algebraic equation (2.61)]. Recall that the steps involved in
applying the method of z-transforms to the solution of a set of difference
equations may be summarized as follows:

1. Multiply the kth equation by z^k.
2. Sum all those equations that have the same form (typically true for
   k = K, K + 1, …).
3. In this single equation, attempt to identify the z-transform for the
   unknown function. If all but a finite set of terms for the transform are
   present, then add the missing terms to get the function and then
   explicitly subtract them out in the equation.
4. Make use of the K "boundary" equations (namely, those that were
   omitted in step 2 above for k = 0, 1, …, K − 1) to eliminate
   unknowns in the transformed equation.
5. Solve for the desired transform in the resulting algebraic, matrix, or
   differential* equation. Use the conservation relationship, Eq. (2.122),
   to eliminate the last unknown term.†
6. Invert the solution to get an explicit solution in terms of k.
7. If step 6 cannot be carried out, then moments may be obtained by
   differentiating with respect to z and setting z = 1.
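Before turning the transform crank, it is instructive to integrate Eq. (2.152) directly as a numerical sanity check. The sketch below truncates the state space at an assumed level K and uses forward Euler (the rates, K, step size, and horizon are all illustrative assumptions); the values it settles toward anticipate the equilibrium distribution P_k = (1 − ρ)ρ^k that emerges in Chapter 3:

```python
lam, mu = 1.0, 2.0   # rho = 1/2 < 1, so an equilibrium exists
K = 60               # truncation level (assumed ample for rho = 1/2)
dt, T = 2e-3, 40.0

P = [0.0] * (K + 1)
P[0] = 1.0           # system empty at t = 0
for _ in range(int(T / dt)):
    dP = [0.0] * (K + 1)
    dP[0] = -lam * P[0] + mu * P[1]
    for k in range(1, K):
        dP[k] = -(lam + mu) * P[k] + lam * P[k - 1] + mu * P[k + 1]
    dP[K] = -mu * P[K] + lam * P[K - 1]     # reflecting truncation boundary
    P = [p + dt * d for p, d in zip(P, dP)]

rho = lam / mu
print(sum(P))                       # total probability is conserved: ~1
print(P[0], 1 - rho)                # transient P0(t) has settled near 1 - rho
print(P[3], (1 - rho) * rho ** 3)   # and P_k approaches (1 - rho) rho^k
```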
Let us apply this method to Eq. (2.152). First we define the time-dependent
transform

    P(z, t) ≜ Σ_{k=0}^∞ P_k(t) z^k    (2.153)

Next we multiply the kth differential equation by z^k (step 1) and then sum
over all permitted k (k = 1, 2, …) (step 2) to yield a single differential
equation for the z-transform of P_k(t):

    Σ_{k=1}^∞ (dP_k(t)/dt) z^k = −(λ + μ) Σ_{k=1}^∞ P_k(t) z^k + λ Σ_{k=1}^∞ P_{k−1}(t) z^k + μ Σ_{k=1}^∞ P_{k+1}(t) z^k

Property 14 from Table I.1 in Appendix I permits us to move the differentia-
tion operator outside the summation sign in this last equation. This summa-
tion then appears very much like P(z, t) as defined above, except that it is
missing the term for k = 0; the same is true of the first summation on the
right-hand side of this last equation. In these two cases we need merely add
and subtract the term P_0(t)z^0, which permits us to form the transform we
are seeking. The second summation on the right-hand side is clearly λzP(z, t)
since it contains an extra factor of z, but no missing terms. The last summation
is missing a factor of z as well as the first two terms of this sum. We have now
* We sometimes obtain a differential equation at this stage if our original set of difference
equations was, in fact, a set of differential-difference equations. When this occurs, we are
effectively back to step 1 of this procedure as far as the differential variable (usually time)
is concerned. We then proceed through steps 1-5 a second time using the Laplace transform
for this new variable; our transform multiplier becomes e^{−st}, our sums become integrals,
and our "tricks" become the properties associated with Laplace transforms (see Appendix
I). Similar "returns to step 1" occur whenever a function of more than one variable is
transformed; for each discrete variable, we require a z-transform and, for each continuous
variable, we require a Laplace transform.
† When additional unknowns remain, we must appeal to the analyticity of the transform
and observe that in its region of analyticity the transform must have a zero to cancel each
pole (singularity) if the transform is to remain bounded. These additional conditions
completely remove any remaining unknowns. This procedure will often be used and
explained in the next few chapters.
carried out step 3, which yields

    (∂/∂t)[P(z, t) − P_0(t)] = −(λ + μ)[P(z, t) − P_0(t)] + λzP(z, t) + (μ/z)[P(z, t) − P_0(t) − P_1(t)z]    (2.154)
The equation for k = 0 has so far not been used and we now apply it as
described in step 4 (K = 1), which permits us to eliminate certain terms in
Eq. (2.154):

    ∂P(z, t)/∂t = −λP(z, t) − μ[P(z, t) − P_0(t)] + λzP(z, t) + (μ/z)[P(z, t) − P_0(t)]

Rearranging this last equation we obtain the following linear, first-order
(partial) differential equation for P(z, t):

    z ∂P(z, t)/∂t = (1 − z)[(μ − λz)P(z, t) − μP_0(t)]    (2.155)
This differential equation requires further transforming, as mentioned in the
first footnote to step 5. We must therefore define the Laplace transform for
our function P(z, t) as follows*:

    P*(z, s) ≜ ∫_{0+}^∞ e^{−st} P(z, t) dt    (2.156)

Returning to step 1, applying this transform to Eq. (2.155), and taking advant-
age of property 11 in Table I.3 in Appendix I, we obtain

    z[sP*(z, s) − P(z, 0+)] = (1 − z)[(μ − λz)P*(z, s) − μP_0*(s)]    (2.157)

where we have defined P_0*(s) to be the Laplace transform of P_0(t), that is,

    P_0*(s) ≜ ∫_{0+}^∞ e^{−st} P_0(t) dt    (2.158)
We have now transformed the set of differential-difference equations for
P_k(t) both on the discrete variable k and on the continuous variable t. This
has led us to Eq. (2.157), which is a simple algebraic equation in our twice-
transformed function P*(z, s), and this we may write as

    P*(z, s) = [zP(z, 0+) − μ(1 − z)P_0*(s)] / [sz − (1 − z)(μ − λz)]    (2.159)

* For convenience we take the lower limit of integration to be 0+ rather than our usual
convention of using 0− with the nonnegative random variables we often deal with. As
a consequence, we must include the initial condition P(z, 0+) in Eq. (2.157).
Let us carry this argument just a bit further. From the definition in Eq.
(2.153) we see that

    P(z, 0+) = Σ_{k=0}^∞ P_k(0+) z^k    (2.160)

Of course, P_k(0+) is just our initial condition; whereas earlier we took the
simple point of view that the system was empty at time 0 [that is, P_0(0+) = 1
and all other terms P_k(0+) = 0 for k ≠ 0], we now generalize and permit i
customers to be present at time 0, that is,

    P_k(0+) = 1 for k = i,  P_k(0+) = 0 for k ≠ i    (2.161)

When i = 0 we have our original initial condition. Substituting Eq. (2.161)
into Eq. (2.160) we see immediately that

    P(z, 0+) = z^i

which we may place into Eq. (2.159) to obtain

    P*(z, s) = [z^{i+1} − μ(1 − z)P_0*(s)] / [sz − (1 − z)(μ − λz)]    (2.162)
We are almost finished with step 5 except for the fact that the unknown
function P_0*(s) appears in our equation. The second footnote to step 5 tells
us how to proceed. From here on the analysis becomes a bit complex and it is
beyond our desire at this point to continue the calculation; instead we
relegate the excruciating details to the exercises below (see Exercise 2.20). It
suffices to say that P_0*(s) is determined through the denominator roots of
Eq. (2.162), which then leaves us with an explicit expression for our double
transform. We are now at step 6 and must attempt to invert on both the
transform variables; the exercises require the reader to show that the result
of this inversion yields the final solution for our transient analysis, namely,
    P_k(t) = e^{−(λ+μ)t} [ρ^{(k−i)/2} I_{k−i}(at) + ρ^{(k−i−1)/2} I_{k+i+1}(at) + (1 − ρ)ρ^k Σ_{j=k+i+2}^∞ ρ^{−j/2} I_j(at)]    k ≥ 0    (2.163)

where

    ρ = λ/μ    (2.164)

    a ≜ 2μρ^{1/2}    (2.165)

and

    I_k(x) ≜ Σ_{m=0}^∞ (x/2)^{k+2m} / [(k + m)! m!]    (2.166)
where I_k(x) is the modified Bessel function of the first kind of order k. This
last expression is most disheartening. What it has to say is that an appropriate
model for the simplest interesting queueing system (discussed further in the
next chapter) leads to an ugly expression for the time-dependent behavior of
its state probabilities. As a consequence, we can only hope for greater
complexity and obscurity in attempting to find time-dependent behavior of
more general queueing systems.
More will be said about time-dependent results later in the text. Our main
purpose now is to focus upon the equilibrium behavior of queueing systems
rather than upon their transient behavior (which is far more difficult). In the
next chapter the equilibrium behavior for birth-death queueing systems will
be studied and in Chapter 4 more general Markovian queues in equilibrium
will be considered. Only when we reach Chapter 5, Chapter 8, and then
Chapter 2 (Volume II) will the time-dependent behavior be considered again.
Let us now proceed to the simplest equilibrium behavior.

EXERCISES

2.1. Consider K independent sources of customers where the interarrival
     time between customers for each source is exponentially distributed
     with parameter λ_k (i.e., each source is a Poisson process). Now consider
     the arrival stream, which is formed by merging the input from each of
     the K sources defined above. Prove that this merged stream is also
     Poisson with parameter λ = λ_1 + λ_2 + ⋯ + λ_K.

2.2. Referring back to the previous problem, consider this merged Poisson
     stream and now assume that we wish to break it up into several bran-
     ches. Let p_i be the probability that a customer from the merged stream
     is assigned to substream i. If the overall rate is λ customers per
     second, and if the substream probabilities p_i are chosen for each
     customer independently, then show that each of these substreams is a
     Poisson process with rate λp_i.

2.3. Let {X_j} be a sequence of identically distributed, mutually independent
     Bernoulli random variables (with P[X_j = 1] = p, and P[X_j = 0] =
     1 − p). Let S_N = X_1 + ⋯ + X_N be the sum of a number N of these
     random variables, where N has a Poisson distribution with …
     (c) Solve for the equilibrium probability vector π.
     (d) What is the mean recurrence time for state E_2?
     (e) For which values of α and β will we have π_1 = π_2 = π_3? (Give
         a physical interpretation of this case.)

2.6. Consider the discrete-state, discrete-time Markov chain whose
     transition probability matrix is given by
     (a) Find the stationary state probability vector π.
     (b) Find [I − zP]^{−1}.
     (c) Find the general form for P^n.
2.7. Consider a Markov chain with states E_0, E_1, E_2, … and with transi-
     tion probabilities

         p_{ij} = e^{−λ} Σ_{n=0}^{min(i,j)} \binom{i}{n} p^n q^{i−n} λ^{j−n} / (j − n)!

     where p + q = 1 (0 < p < 1).
     (a) Is this chain irreducible? Periodic? Explain.
     (b) We wish to find

         π_i = equilibrium probability of E_i

         Write π_i in terms of p_{ji} and π_j for j = 0, 1, 2, ….
     (c) From (b) find an expression relating P(z) to P[1 + p(z − 1)],
         where

         P(z) = Σ_{i=0}^∞ π_i z^i

     (d) Recursively (i.e., repeatedly) apply the result in (c) to itself and
         show that the nth recursion gives

         P(z) = e^{λ(z−1)(1 + p + p² + ⋯ + p^{n−1})} P[1 + p^n(z − 1)]

     (e) From (d) find P(z) and then recognize π_i.
2.8. Show that any point in or on the equilateral triangle of unit height
     shown in Figure 2.6 represents a three-component probability vector
     in the sense that the sum of the distances from any such point to each
     of the three sides must always equal unity.
2.9. Consider a pure birth process with constant birth rate λ. Let us
     consider an interval of length T, which we divide up into m segments
     each of length T/m. Define Δt = T/m.
     (a) For Δt small, find the probability that a single arrival occurs in
         each of exactly k of the m intervals and that no arrivals occur in
         the remaining m − k intervals.
     (b) Consider the limit as Δt → 0, that is, as m → ∞ for fixed T,
         and evaluate the probability P_k(T) that exactly k arrivals occur
         in the interval of length T.
2.10. Consider a population of bacteria of size N(t) at time t for which
      N(0) = 1. We consider this to be a pure birth process in which any
      member of the population will split into two new members in the
      interval (t, t + Δt) with probability λΔt + o(Δt) or will remain
      unchanged in this interval with probability 1 − λΔt + o(Δt) as
      Δt → 0.
      (a) Let P_k(t) = P[N(t) = k] and write down the set of differential-
          difference equations that must be satisfied by these probabilities.
      (b) From part (a) show that the z-transform P(z, t) for N(t) must
          satisfy

          P(z, t) = z e^{−λt} / (1 − z + z e^{−λt})

      (c) Find E[N(t)].
      (d) Solve for P_k(t).
      (e) Solve for P(z, t), E[N(t)], and P_k(t) that satisfy the initial con-
          dition N(0) = n ≥ 1.
      (f) Consider the corresponding deterministic problem in which each
          bacterium splits into two every 1/λ sec and compare with the
          answer in part (c).
2.11. Consider a birth-death process with coefficients

          λ_k = λ for k = 0,  λ_k = 0 for k ≠ 0
          μ_k = μ for k = 1,  μ_k = 0 for k ≠ 1

      which corresponds to an M/M/1 queueing system where there is no
      room for waiting customers.
      (a) Give the differential-difference equations for P_k(t) (k = 0, 1).
      (b) Solve these equations and express the answers in terms of P_0(0)
          and P_1(0).
2.12. Consider a birth-death queueing system in which

          λ_k = λ    k ≥ 0
          μ_k = kμ    k ≥ 0

      (a) For all k, find the differential-difference equations for
          P_k(t) = P[k in system at time t].
      (b) Define the z-transform

          P(z, t) = Σ_{k=0}^∞ P_k(t) z^k

          and find the partial differential equation that P(z, t) must satisfy.
      (c) Show that the solution to this equation is

          P(z, t) = exp[(λ/μ)(1 − e^{−μt})(z − 1)]

          with the initial condition P_0(0) = 1.
      (d) Comparing the solution in part (c) with Eq. (2.134), give the
          expression for P_k(t) by inspection.
      (e) Find the limiting values for these probabilities as t → ∞.

2.13. Consider a system in which the birth rate decreases and the death rate
      increases as the number in the system k increases, that is,

          λ_k = (K − k)λ for k ≤ K,  λ_k = 0 for k > K
          μ_k = kμ

      Write down the differential-difference equations for
      P_k(t) = P[k in system at time t].

2.14. Consider the case of a linear birth-death process in which λ_k = kλ and
      μ_k = kμ.
      (a) Find the partial-differential equation that must be satisfied by
          P(z, t) as defined in Eq. (2.153).
      (b) Assuming that the population size is one at time zero, show that
          the function that satisfies the equation in part (a) is …
      (c) Expanding P(z, t) in a power series show that

          P_k(t) = [1 − α(t)][1 − β(t)][β(t)]^{k−1}    k = 1, 2, …
          P_0(t) = α(t)

          and find α(t) and β(t).
      (d) Find the mean and variance for the number in system at time t.
      (e) Find the limiting probability that the population dies out by
          time t for t → ∞.
2.15. Consider a linear birth-death process for which λ_k = kλ + α and
      μ_k = kμ.
      (a) Find the differential-difference equations that must be satisfied
          by P_k(t).
      (b) From (a) find the partial-differential equation that must be
          satisfied by the time-dependent transform defined as

          P(z, t) = Σ_{k=0}^∞ P_k(t) z^k

      (c) What is the value of P(1, t)? Give a verbal interpretation for the
          expression

          N̄(t) = lim_{z→1} ∂P(z, t)/∂z

      (d) Assuming that the population size begins with i members at time
          0, find an ordinary differential equation for N̄(t) and then solve
          for N̄(t). Consider the case λ = μ as well as λ ≠ μ.
      (e) Find the limiting value for N̄(t) in the case λ < μ (as t → ∞).
2.16. Consider the equations of motion in Eq. (2.148) and define the
      Laplace transform

          P_k*(s) = ∫_0^∞ P_k(t) e^{−st} dt

      For our initial condition we will assume P_0(t) = 1 for t = 0. Trans-
      form Eq. (2.148) to obtain a set of linear difference equations in
      {P_k*(s)}.
      (a) Show that the solution to the set of equations is

          P_k*(s) = (Π_{i=0}^{k−1} λ_i) / (Π_{i=0}^{k} (s + λ_i))

      (b) From (a) find P_k(t) for the case λ_i = λ (i = 0, 1, 2, …).
2.17. Consider a time interval (0, t) during which a Poisson process generates
      arrivals at an average rate λ. Derive Eq. (2.147) by considering the
      two events: exactly k − 1 arrivals occur in the interval (0, t − Δt)
      and exactly one arrival occurs in the interval (t − Δt, t).
      Considering the limit as Δt → 0 we immediately arrive at our desired
      result.
2.18. A barber opens up for business at t = 0. Customers arrive at random
      in a Poisson fashion; that is, the pdf of interarrival time is a(t) =
      λe^{−λt}. Each haircut takes X sec (where X is some random variable).
      Find the probability P that the second arriving customer will not have
      to wait and also find W̄, the average value of his waiting time, for the
      two following cases:
      i. X = c = constant.
      ii. X is exponentially distributed with pdf b(x) = μe^{−μx}.
2.19. At t = 0 customer A places a request for service and finds all m
      servers busy and n other customers waiting for service in an M/M/m
      queueing system. All customers wait as long as necessary for service,
      waiting customers are served in order of arrival, and no new requests
      for service are permitted after t = 0. Service times are assumed to be
      mutually independent, identical, exponentially distributed random
      variables, each with mean duration 1/μ.
      (a) Find the expected length of time customer A spends waiting for
          service in the queue.
      (b) Find the expected length of time from the arrival of customer A
          at t = 0 until the system becomes completely empty (all customers
          complete service).
      (c) Let X be the order of completion of service of customer A; that
          is, X = k if A is the kth customer to complete service after
          t = 0. Find P[X = k] (k = 1, 2, …, m + n + 1).
      (d) Find the probability that customer A completes service before
          the customer immediately ahead of him in the queue.
      (e) Let W be the amount of time customer A waits for service. Find
          P[W > x].
2.20. In this problem we wish to proceed from Eq. (2.162) to the transient
      solution in Eq. (2.163). Since P*(z, s) must converge in the region
      |z| ≤ 1 for Re(s) > 0, then, in this region, the zeros of the denomina-
      tor in Eq. (2.162) must also be zeros of the numerator.
      (a) Find those two values of z that give the denominator zeros, and
          denote them by α_1(s), α_2(s), where |α_2(s)| < |α_1(s)|.
      (b) Using Rouché's theorem (see Appendix I) show that the denom-
          inator of P*(z, s) has a single zero within the unit disk |z| ≤ 1.
      (c) Requiring that the numerator of P*(z, s) vanish at z = α_2(s)
          from our earlier considerations, find an explicit expression for
          P_0*(s).
      (d) Write P*(z, s) in terms of α_1(s) = α_1 and α_2(s) = α_2. Then show
          that this equation may be reduced to

          P*(z, s) = [(z^i + α_2 z^{i−1} + ⋯ + α_2^i) + α_2^{i+1}/(1 − α_2)] / [λα_1(1 − z/α_1)]

      (e) Using the fact that |α_2| < 1 and that α_1α_2 = μ/λ, show that the
          inversion on z yields the following expression for P_k*(s), which is
          the Laplace transform for our transient probabilities P_k(t):
      (f) In what follows we take advantage of property 4 in Table I.3
          and also we make use of the following transform pair:

          kρ^{k/2} t^{−1} I_k(at) ⇔ [(s + √(s² − 4λμ)) / 2λ]^{−k}

          where ρ and a are as defined in Eqs. (2.164), (2.165) and where
          I_k(x) is the modified Bessel function of the first kind of order k as
          defined in Eq. (2.166). Using these facts and the simple relations
          among Bessel functions, show that Eq. (2.163) is the inverse
          transform for the expression shown in part (e).
2.21. The random variables X_1, X_2, …, X_i, … are independent, identically
      distributed random variables, each with density f_X(x) and characteristic
      function Φ_X(u). Consider a Poisson process N(t) with
      parameter λ which is independent of the random variables X_i.
      Consider now a second random process of the form

          X(t) = Σ_{i=1}^{N(t)} X_i
86 SOME IMPORTANT RANDOM PROCESSES
This second random process is clearly a family of staircase functions
where the jumps occur at the discontinuities of the random process
N(t); the magnitudes of these jumps are given by the random variables
X_i. Show that the characteristic function of this second random
process is given by

    Φ_{X(t)}(u) = e^{λt[Φ_X(u) − 1]}
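The process X(t) above is the compound Poisson process, and differentiating its characteristic function at u = 0 gives E[X(t)] = λt E[X]. The following simulation sketch (ours, not part of the text; the rate and jump parameters are arbitrary) checks that mean:

```python
import random

def compound_poisson_sample(lam, t, jump, rng):
    """One sample of X(t) = sum_{i=1}^{N(t)} X_i, with N(t) ~ Poisson(lam * t)."""
    n, clock = 0, rng.expovariate(lam)
    while clock <= t:          # count the Poisson arrivals in (0, t]
        n += 1
        clock += rng.expovariate(lam)
    return sum(jump(rng) for _ in range(n))

rng = random.Random(1)
lam, t, mean_jump = 2.0, 5.0, 3.0
samples = [compound_poisson_sample(lam, t,
                                   lambda r: r.expovariate(1.0 / mean_jump), rng)
           for _ in range(20000)]
est_mean = sum(samples) / len(samples)
# Differentiating exp(lam*t*(phi_X(u) - 1)) at u = 0 gives E[X(t)] = lam*t*E[X]
print(est_mean)  # ≈ 30
```

With λ = 2, t = 5, and exponential jumps of mean 3, the estimate settles near λt E[X] = 30.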
2.22. Passengers and taxis arrive at a service point from independent
      Poisson processes at rates λ, μ, respectively. Let the queue size at
      time t be q_t, a negative value denoting a line of taxis, a positive value
      denoting a queue of passengers. Show that, starting with q_0 = 0, the
      distribution of q_t is given by the difference between independent
      Poisson variables of means λt, μt. Show by using the normal approxi-
      mation that if λ = μ, the probability that −k ≤ q_t ≤ k is, for large
      t, (2k + 1)(4πλt)^{−1/2}.
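A short simulation sketch (ours; the parameter values are arbitrary) compares the stated normal approximation against the empirical frequency for λ = μ:

```python
import math, random

def poisson_sample(mean, rng):
    # Poisson variate via unit-rate exponential interarrivals; fine for moderate means
    n, acc = 0, rng.expovariate(1.0)
    while acc <= mean:
        n += 1
        acc += rng.expovariate(1.0)
    return n

rng = random.Random(42)
lam = mu = 1.0
t, k, trials = 100.0, 3, 5000
hits = sum(1 for _ in range(trials)
           if abs(poisson_sample(lam * t, rng) - poisson_sample(mu * t, rng)) <= k)
empirical = hits / trials
normal_approx = (2 * k + 1) / math.sqrt(4 * math.pi * lam * t)
print(empirical, normal_approx)  # both ≈ 0.20
```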
PART II
ELEMENTARY
QUEUEING THEORY
Elementary here means that all the systems we consider are purely Markov-
ian and, therefore, our state description is convenient and manageable. In
Part I we developed the time-dependent equations for the behavior of
birth-death processes; here in Chapter 3 we address the equilibrium solution
for these systems. The key equation in this chapter is Eq. (3.11), and the
balance of the material is the simple application of that formula. It, in fact,
is no more than the solution to the equation π = πP derived in Chapter 2.
The key tool used here is again that which we find throughout the text,
namely, the calculation of flow rates across the boundaries of a closed
system. In the case of equilibrium we merely ask that the rate of flow into a
system be equal to the rate of flow out of it. The application of these basic
results is more than just an exercise, for it is here that we first obtain some
equations of use in engineering and designing queueing systems. The classical
M/M/1 queue is studied and some of its important performance measures
are evaluated. More complex models involving finite storage, multiple
servers, finite customer population, and the like, are developed in the balance
of this chapter. In Chapter 4 we leave the birth-death systems and allow
more general Markovian queues, once again to be studied in equilibrium. We
find that the techniques here are similar to our earlier ones, but that no
general solution such as Eq. (3.11) is available; each system is a case unto
itself, and so we are rapidly led into the solutions of difference equations,
which force us to look carefully at the method of z-transforms for these
solutions. The ingenious method of stages introduced by Erlang is considered
here and its generality discussed. At the end of the chapter we introduce (for
later use in Volume II) networks of Markovian queues, in which we take
exquisite advantage of the memoryless properties that Markovian queues
provide even in a network environment. At this point, however, we have
essentially exhausted the use of the memoryless distribution and we must
depart from that crutch in the following parts.
3
Birth-Death Queueing Systems
in Equilibrium
In the previous chapter we studied a variety of stochastic processes. We
indicated that Markov processes play a fundamental role in the study of
queueing systems, and after presenting the main results from that theory, we
then considered a special form of Markov process known as the birth-
death process. We also showed that birth-death processes enjoy a most
convenient property, namely, that the time between births and the time
between deaths (when the system is nonempty) are each exponentially
distributed.* We then developed Eq. (2.127), which gives the basic equations
of motion for the general birth-death process with stationary birth and death
rates.† The solution of this set of equations gives the transient behavior of
the queueing process, and some important special cases were discussed earlier.
In this chapter we study the limiting form of these equations to obtain the
equilibrium behavior of birth-death queueing systems.
The importance of elementary queueing theory comes from its historical
influence as well as its ability to describe behavior that is to be found in more
complex queueing systems. The methods of analysis to be used in this chapter
in large part do not carry over to the more involved queueing situations;
nevertheless, the results obtained do provide insight into the basic behavior
of many of these other queueing systems.
It is necessary to keep in mind how the birth-death process describes
queueing systems. As an example, consider a doctor's office made up of a
waiting room (in which a queue is allowed to form, unfortunately) and a
service facility consisting of the doctor's examination room. Each time a
patient enters the waiting room from outside the office we consider this to be
an arrival to the queueing system; on the other hand, this arrival may well be
considered to be a birth of a new member of a population, where the popula-
tion consists of all patients present. In a similar fashion, when a patient leaves

* This comes directly from the fact that they are Markov processes.
† In addition to these equations, one requires the conservation relation given in Eq. (2.122)
and a set of initial conditions {P_k(0)}.
the office after being treated, he is considered to be a departure from the
queueing system; in terms of a birth-death process this is considered to be a
death of a member of the population.
We have considerable freedom in constructing a large number of queueing
systems through the choice of the birth coefficients λ_k and death coefficients
μ_k, as we shall see shortly. First, let us establish the general solution for the
equilibrium behavior.
3.1. GENERAL EQUILIBRIUM SOLUTION
As we saw in Chapter 2, the time-dependent solution of the birth-death
system quickly becomes unmanageable when we consider any sophisticated
set of birth-death coefficients. Furthermore, were we always capable of
solving for P_k(t), it is not clear how useful that set of functions would be in
aiding our understanding of the behavior of these queueing systems (too
much information is sometimes a curse!). Consequently, it is natural for us to
ask whether the probabilities P_k(t) eventually settle down as t gets large and
display no more "transient" behavior. This inquiry on our part is analogous
to the questions we asked regarding the existence of π_k as the limit of π_k(t)
as t → ∞. For our queueing studies here we choose to denote the limiting
probability as p_k rather than π_k, purely for convenience. Accordingly, let

    p_k ≜ lim_{t→∞} P_k(t)    (3.1)

where p_k is interpreted as the limiting probability that the system contains
k members (or, equivalently, is in state E_k) at some arbitrary time in the distant
future. The question regarding the existence of these limiting probabilities
is of concern to us, but will be deferred at this point until we obtain the
general steady-state or limiting solution. It is important to understand that
whereas p_k (assuming it exists) is no longer a function of t, we are not claiming
that the process does not move from state to state in this limiting case;
certainly, the number of members in the population will change with time,
but the long-run probability of finding the system with k members will be
properly described by p_k.
Accepting the existence of the limit in Eq. (3.1), we may then set
lim_{t→∞} dP_k(t)/dt equal to zero in the Kolmogorov forward equations (of
motion) for the birth-death system [given in Eqs. (2.127)] and immediately
obtain the result

    0 = −(λ_k + μ_k)p_k + λ_{k−1}p_{k−1} + μ_{k+1}p_{k+1}    k ≥ 1    (3.2)

    0 = −λ_0 p_0 + μ_1 p_1    k = 0    (3.3)

The annoying task of providing a separate equation for k = 0 may be over-
come by agreeing once and for all that the following birth and death
coefficients are identically equal to 0:

    λ_{−1} = λ_{−2} = λ_{−3} = ⋯ = 0

    μ_0 = μ_{−1} = μ_{−2} = ⋯ = 0

Furthermore, since it is perfectly clear that we cannot have a negative number
of members in our population, we will, in most cases, adopt the convention
that

    p_{−1} = p_{−2} = p_{−3} = ⋯ = 0

Thus, for all values of k, we may reformulate Eqs. (3.2) and (3.3) into the
following set of difference equations for k = …, −2, −1, 0, 1, 2, …:

    0 = −(λ_k + μ_k)p_k + λ_{k−1}p_{k−1} + μ_{k+1}p_{k+1}    (3.4)

We also require the conservation relation

    Σ_{k=0}^∞ p_k = 1    (3.5)
Recall from the previous chapter that the limit given in Eq. (3.1) is
independent of the initial conditions.
Just as we used the state-transition-rate diagram as an inspection technique
for writing down the equations of motion in Chapter 2, so may we use the
same concept in writing down the equilibrium equations [Eqs. (3.2) and (3.3)]
directly from that diagram. In this equilibrium case it is clear that flow must
be conserved in the sense that the input flow must equal the output flow from a
given state. For example, if we look at Figure 2.9 once again and concentrate
on state E_k in equilibrium, we observe that

    flow rate into E_k = λ_{k−1}p_{k−1} + μ_{k+1}p_{k+1}

and

    flow rate out of E_k = (λ_k + μ_k)p_k

In equilibrium these two must be the same, and so we have immediately

    λ_{k−1}p_{k−1} + μ_{k+1}p_{k+1} = (λ_k + μ_k)p_k    (3.6)

But this last is just Eq. (3.4) again! By inspection we have established the
equilibrium difference equations for our system. The same comments apply
here as applied earlier regarding the conservation of flow across any closed
boundary; for example, rather than surrounding each state and writing down
its equation, we could choose a sequence of boundaries, the first of which
surrounds E_0, the second of which surrounds E_0 and E_1, and so on, each time
adding the next higher-numbered state to get a new boundary. In such an
example the kth boundary (which surrounds states E_0, E_1, …, E_{k−1}) would
lead to the following simple conservation of flow relationship:

    λ_{k−1}p_{k−1} = μ_k p_k    (3.7)

This last set of equations is equivalent to drawing a vertical line separating
adjacent states and equating flows across this boundary; this set of difference
equations is equivalent to our earlier set.
The solution for p_k in Eq. (3.4) may be obtained by at least two methods.
One way is first to solve for p_1 in terms of p_0 by considering the case k = 0,
that is,

    p_1 = (λ_0/μ_1)p_0    (3.8)

We may then consider Eq. (3.4) for the case k = 1 and, using Eq. (3.8), obtain

    0 = −(λ_1 + μ_1)p_1 + λ_0 p_0 + μ_2 p_2

    0 = −(λ_1 + μ_1)(λ_0/μ_1)p_0 + λ_0 p_0 + μ_2 p_2

    0 = −(λ_1 λ_0/μ_1)p_0 − λ_0 p_0 + λ_0 p_0 + μ_2 p_2

    p_2 = (λ_0 λ_1/μ_1 μ_2)p_0    (3.9)

If we examine Eqs. (3.8) and (3.9) we may justifiably guess that the general
solution to Eq. (3.4) must be

    p_k = p_0 (λ_0 λ_1 ⋯ λ_{k−1})/(μ_1 μ_2 ⋯ μ_k)    k = 0, 1, 2, …    (3.10)

To validate this assertion we need merely use the inductive argument and
apply Eq. (3.10) to Eq. (3.4), solving for p_{k+1}. Carrying out this operation we
do, in fact, find that (3.10) is the solution to the general birth-death process
in this steady-state or limiting case. We have thus expressed all equilibrium
probabilities p_k in terms of a single unknown constant p_0:

    p_k = p_0 ∏_{i=0}^{k−1} λ_i/μ_{i+1}    k = 0, 1, 2, …    (3.11)

(Recall the usual convention that an empty product is unity by definition.)
Equation (3.5) provides the additional condition that allows us to determine
p_0; thus, summing over all k, we obtain

    p_0 = 1 / [1 + Σ_{k=1}^∞ ∏_{i=0}^{k−1} λ_i/μ_{i+1}]    (3.12)
This "product" solution for p_k (k = 0, 1, 2, …), simply obtained, is a
principal equation in elementary queueing theory and, in fact, is the point of
departure for all of our further solutions in this chapter.
A second easy way to obtain the solution to Eq. (3.4) is to rewrite that
equation as follows:

    λ_{k−1}p_{k−1} − μ_k p_k = λ_k p_k − μ_{k+1}p_{k+1}    (3.13)

Defining

    g_k ≜ λ_k p_k − μ_{k+1}p_{k+1}    (3.14)

we have from Eq. (3.13) that

    g_k = g_{k−1}    (3.15)

Clearly Eq. (3.15) implies that

    g_k = constant with respect to k    (3.16)

However, since λ_{−1} = μ_0 = 0, Eq. (3.14) gives

    g_{−1} = 0

and so the constant in Eq. (3.16) must be 0. Setting g_k equal to 0, we
immediately obtain from Eq. (3.14)

    p_{k+1} = (λ_k/μ_{k+1})p_k    (3.17)

Solving Eq. (3.17) successively beginning with k = 0, we obtain the earlier
solution, namely, Eqs. (3.11) and (3.12).
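Either derivation leads to the same product form, which is easy to evaluate numerically for any concrete set of coefficients. The sketch below (illustrative, not from the text; the function name and the M/M/1 check values are ours) computes Eqs. (3.11) and (3.12) for a chain truncated at a large state K:

```python
def birth_death_equilibrium(lam, mu, K):
    """Equilibrium probabilities p_0..p_K from the product form of Eq. (3.11),
    p_k = p_0 * prod_{i=0}^{k-1} lam(i)/mu(i+1), normalized as in Eq. (3.12).
    lam and mu are callables giving the birth and death coefficients."""
    terms = [1.0]  # the empty product for k = 0
    for k in range(1, K + 1):
        terms.append(terms[-1] * lam(k - 1) / mu(k))
    p0 = 1.0 / sum(terms)
    return [p0 * t for t in terms]

# Check against M/M/1 with lam/mu = 0.5, where p_k = (1 - rho) rho^k
p = birth_death_equilibrium(lambda k: 1.0, lambda k: 2.0, K=60)
print(p[0], p[1])  # ≈ 0.5, 0.25
```

Truncation at K is harmless whenever the tail of the distribution is negligible there, as it is for every ergodic example in this chapter.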
We now address ourselves to the existence of the steady-state probabilities
p_k given by Eqs. (3.11) and (3.12). Simply stated, in order for those expres-
sions to represent a probability distribution, we usually require that p_k > 0.
This clearly places a condition upon the birth and death coefficients in those
equations. Essentially, what we are requiring is that the system occasionally
empties; that this is a condition for stability seems quite reasonable when one
interprets it in terms of real-life situations.* More precisely, we may classify
the possibilities by first defining the two sums

    S_1 ≜ Σ_{k=0}^∞ ∏_{i=0}^{k−1} λ_i/μ_{i+1}    (3.18)

    S_2 ≜ Σ_{k=0}^∞ 1/(λ_k ∏_{i=0}^{k−1} λ_i/μ_{i+1})    (3.19)

All states E_k of our birth-death process will be ergodic if and only if

    Ergodic:          S_1 < ∞,  S_2 = ∞

On the other hand, all states will be recurrent null if and only if

    Recurrent null:   S_1 = ∞,  S_2 = ∞

Also, all states will be transient if and only if

    Transient:        S_1 = ∞,  S_2 < ∞

It is the ergodic case that gives rise to the equilibrium probabilities {p_k} and
that is of most interest to our studies. We note that the condition for
ergodicity is met whenever the sequence {λ_k/μ_k} remains below unity from
some k onwards, that is, if there exists some k_0 such that for all k ≥ k_0 we have

    λ_k/μ_k < 1    (3.20)

We will find this to be true in most of the queueing systems we study.

* It is easy to construct counterexamples to this case, and so we require the precise argu-
ments which follow.
We are now ready to apply our general solution as given in Eqs. (3.11)
and (3.12) to some very important special cases. Before we launch headlong
into that discussion, let us put at ease those readers who feel that the birth-
death constraints of permitting only nearest-neighbor transitions are too
confining. It is true that the solution given in Eqs. (3.11) and (3.12) applies
only to nearest-neighbor birth-death processes. However, rest assured that
the equilibrium methods we have described can be extended to more general
than nearest-neighbor systems; these generalizations are considered in
Chapter 4.
3.2. M/M/1: THE CLASSICAL QUEUEING SYSTEM

As mentioned in Chapter 2, the celebrated M/M/1 queue is the simplest
nontrivial interesting system and may be described by selecting the birth-
death coefficients as follows:

    λ_k = λ    k = 0, 1, 2, …

    μ_k = μ    k = 1, 2, 3, …

That is, we set all birth* coefficients equal to a constant λ and all death*
coefficients equal to a constant μ. We further assume that infinite queueing
space is provided and that customers are served in a first-come-first-served
fashion (although this last is not necessary for many of our results). For this
important example the state-transition-rate diagram is as given in Figure 3.1.

Figure 3.1 State-transition-rate diagram for M/M/1.

* In this case, the average interarrival time is t̄ = 1/λ and the average service time is
x̄ = 1/μ; this follows since the interarrival time and the service time are both exponentially
distributed.
Applying these coefficients to Eq. (3.11) we have

    p_k = p_0 ∏_{i=0}^{k−1} λ/μ

or

    p_k = p_0 (λ/μ)^k    (3.21)

The result is immediate. The conditions for our system to be ergodic (and,
therefore, to have an equilibrium solution p_k > 0) are that S_1 < ∞ and
S_2 = ∞; in this case the first condition becomes

    Σ_{k=0}^∞ (λ/μ)^k < ∞

The series on the left-hand side of the inequality will converge if and only if
λ/μ < 1. The second condition for ergodicity becomes

    Σ_{k=0}^∞ (1/λ)(μ/λ)^k = ∞

This last condition will be satisfied if λ/μ ≤ 1; thus the necessary and suffi-
cient condition for ergodicity in the M/M/1 queue is simply λ < μ. In order
to solve for p_0 we use Eq. (3.12) [or Eq. (3.5) as suits the reader] and obtain
    p_0 = 1 / [1 + Σ_{k=1}^∞ (λ/μ)^k]

The sum converges since λ < μ, and so

    p_0 = 1 / [1 + (λ/μ)/(1 − λ/μ)]

Thus

    p_0 = 1 − λ/μ    (3.22)

From Eq. (2.29) we have ρ = λ/μ. From our stability conditions, we there-
fore require that 0 ≤ ρ < 1; note that this insures that p_0 > 0. From Eq.
(3.21) we have, finally,

    p_k = (1 − ρ)ρ^k    k = 0, 1, 2, …    (3.23)

Equation (3.23) is indeed the solution for the steady-state probability of
finding k customers in the system.* We make the important observation that
p_k depends upon λ and μ only through their ratio ρ.
The solution given by Eq. (3.23) for this fundamental system is graphed in
Figure 3.2 for the case of ρ = 1/2. Clearly, this is the geometric distribution
(which shares the fundamental memoryless property with the exponential
distribution). As we develop the behavior of the M/M/1 queue, we shall
continue to see that almost all of its important probability distributions are
of the memoryless type.
An important measure of a queueing system is the average number of
customers in the system, N̄. This is clearly given by

    N̄ = Σ_{k=0}^∞ k p_k = (1 − ρ) Σ_{k=0}^∞ k ρ^k

Using a trick similar to the one used in deriving Eq. (2.142) we have

    N̄ = (1 − ρ)ρ (∂/∂ρ) Σ_{k=0}^∞ ρ^k

       = (1 − ρ)ρ (∂/∂ρ)[1/(1 − ρ)]

    N̄ = ρ/(1 − ρ)    (3.24)

* If we inspect the transient solution for M/M/1 given in Eq. (2.163), we see the term
(1 − ρ)ρ^k; the reader may verify that, for ρ < 1, the limit of the transient solution agrees
with our solution here.
Figure 3.2 The solution for p_k in the system M/M/1.

The behavior of the expected number in the system is plotted in Figure 3.3.
By similar methods we find that the variance of the number in the system is
given by

    σ_N² = Σ_{k=0}^∞ (k − N̄)² p_k = ρ/(1 − ρ)²    (3.25)

Figure 3.3 The average number in the system M/M/1.

We may now apply Little's result directly from Eq. (2.25) in order to obtain
T, the average time spent in the system, as follows:

    T = N̄/λ

      = [ρ/(1 − ρ)](1/λ)

    T = (1/μ)/(1 − ρ)    (3.26)

Figure 3.4 Average delay as a function of ρ for M/M/1.

This dependence of average time on the utilization factor ρ is shown in
Figure 3.4. The value taken on by T when ρ = 0 is exactly the average service
time expected by a customer; that is, he spends no time in queue and 1/μ sec
in service on the average.
The behavior given by Eqs. (3.24) and (3.26) is rather dramatic. As ρ
approaches unity, both the average number in the system and the average time
in the system grow in an unbounded fashion.* Both these quantities have a

* We observe at ρ = 1 that the system behavior is unstable; this is not surprising if one
recalls that ρ < 1 was our condition for ergodicity. What is perhaps surprising is that the
behavior of the average number N̄ and of the average system time T deteriorates so badly as
ρ → 1 from below; we had seen for steady-flow systems in Chapter 1 that so long as R < C
(which corresponds to the case ρ < 1) no queue formed and smooth, rapid flow proceeded
through the system. Here in the M/M/1 queue we find this is no longer true and that we pay
an extreme penalty when we attempt to run the system near (but below) its capacity.
simple pole at ρ = 1. This type of behavior with respect to ρ as ρ approaches
1 is characteristic of almost every queueing system one can encounter. We will
see it again in M/G/1 in Chapter 5 as well as in the heavy-traffic behavior
of G/G/1 (and also in the tight bounds on G/G/1 behavior) in Volume II,
Chapter 2.
Another interesting quantity to calculate is the probability of finding at
least k customers in the system:

    P[≥ k in system] = Σ_{i=k}^∞ p_i

                     = Σ_{i=k}^∞ (1 − ρ)ρ^i

    P[≥ k in system] = ρ^k    (3.27)

Thus we see that the probability of exceeding some limit on the number of
customers in the system is a geometrically decreasing function of that number
and decays very rapidly.
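The handful of M/M/1 formulas just derived can be collected in a few lines; this sketch (ours, with an arbitrarily chosen utilization ρ = 0.8) also confirms Little's result N̄ = λT numerically:

```python
rho = 0.8                # utilization rho = lam/mu; ergodicity requires rho < 1
mu = 2.0
lam = rho * mu

p = lambda k: (1 - rho) * rho**k     # Eq. (3.23): geometric distribution
N = rho / (1 - rho)                  # Eq. (3.24): mean number in system
var_N = rho / (1 - rho)**2           # Eq. (3.25): variance of number in system
T = (1 / mu) / (1 - rho)             # Eq. (3.26): mean time in system
tail = lambda k: rho**k              # Eq. (3.27): P[>= k in system]

print(N, lam * T)   # Little's result: both ≈ 4.0
print(tail(5))      # ≈ 0.328: the tail decays geometrically
```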
With the tools at hand we are now in a position to develop the probability
density function for the time spent in the system. However, we defer that
development until we treat the more general case of M/G/1 in Chapter 5
[see Eq. (5.118)]. Meanwhile, we proceed to discuss numerous other birth-
death queues in equilibrium.
3.3. DISCOURAGED ARRIVALS

This next example considers a case where arrivals tend to get discouraged
when more and more people are present in the system. One possible way to
model this effect is to choose the birth and death coefficients as follows:

    λ_k = α/(k + 1)    k = 0, 1, 2, …

    μ_k = μ    k = 1, 2, 3, …

We are here assuming a harmonic discouragement of arrivals with respect to
the number present in the system. The state-transition-rate diagram in this
The intuitive explanation here is that with random flow (e.g., M/M/1) we get occasional bursts of
traffic which temporarily overwhelm the server; while it is still true that the server will be
idle on the average 1 − ρ = p_0 of the time, this average idle time will not be distributed
uniformly within small time intervals but will only be true in the long run. On the other
hand, in the steady-flow case (which corresponds to our system D/D/1) the system idle time
will be distributed quite uniformly in the sense that after every service time (of exactly 1/μ
sec) there will be an idle time of exactly (1/λ) − (1/μ) sec. Thus it is the variability in both
the interarrival time and in the service time which gives rise to the disastrous behavior near
ρ = 1; any reduction in the variation of either random variable will lead to a reduction in
the average waiting time, as we shall see again and again.
case is as shown in Figure 3.5.

Figure 3.5 State-transition-rate diagram for discouraged arrivals.

We apply Eq. (3.11) immediately to obtain

    p_k = p_0 ∏_{i=0}^{k−1} [α/(i + 1)]/μ

or

    p_k = p_0 (α/μ)^k/k!    (3.28)

Solving for p_0 from Eq. (3.12) we have

    p_0 = 1 / [1 + Σ_{k=1}^∞ (α/μ)^k/k!]

and so

    p_0 = e^{−α/μ}    (3.29)

From Eq. (2.32) we have, therefore,

    ρ = 1 − e^{−α/μ}    (3.30)

Note that the ergodic condition here is merely α/μ < ∞. Going back to
Eq. (3.29) we have the final solution

    p_k = [(α/μ)^k/k!] e^{−α/μ}    k = 0, 1, 2, …    (3.31)

We thus have a Poisson distribution for the number of customers in the
system of discouraged arrivals! From Eqs. (2.131) and (2.132) we have that
the expected number in the system is

    N̄ = α/μ
In order to calculate T, the average time spent in the system, we may use
Little's result again. For this we require λ̄, which is directly calculated from
ρ = λ̄ x̄ = λ̄/μ; thus from Eq. (3.30)

    λ̄ = μρ = μ(1 − e^{−α/μ})

Using this* and Little's result we then obtain

    T = α/[μ²(1 − e^{−α/μ})]    (3.32)

* Note that this result could have been obtained from λ̄ = Σ_k λ_k p_k. The reader should
verify this last calculation.
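As a numerical check (ours; the values α = 3, μ = 2 and the truncation point are arbitrary), the product form of Eq. (3.11) with λ_k = α/(k + 1) reproduces the closed forms (3.29) and (3.32):

```python
import math

alpha, mu = 3.0, 2.0
K = 80  # truncation point; the Poisson tail beyond K is negligible

# Product form, Eq. (3.11), with lam_k = alpha/(k+1) and mu_k = mu
terms = [1.0]
for k in range(1, K + 1):
    terms.append(terms[-1] * (alpha / k) / mu)  # lam_{k-1}/mu_k = (alpha/k)/mu
p0 = 1.0 / sum(terms)
print(p0, math.exp(-alpha / mu))  # Eq. (3.29): both ≈ 0.2231

# Little's result applied to T of Eq. (3.32) recovers N = alpha/mu
lam_bar = mu * (1 - math.exp(-alpha / mu))           # effective arrival rate
T = alpha / (mu**2 * (1 - math.exp(-alpha / mu)))    # Eq. (3.32)
print(lam_bar * T)  # ≈ 1.5 = alpha/mu
```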
3.4. M/M / oo: RESPONSIVE SERVERS (I NFINITE NUMBER OF SERVERS) 101
3.4. M/M/ oo: RESPONSIVE SERVERS (INFINITE NUMBER
OF SERVERS)
Here we consider the case that may be interpreted either as that of a
responsive server who accelerates her service rate linearly when more
customers are wait ing or may be interpreted as the case where there is
always a new clerk or server ava ilable for each arriving customer. In partic
ular , we set
k = 0, 1,2 .
k = 1,2.3, .
Here the statet ransitionrate dia gram is that shown in Figure 3.6. Going
dire ctly to Eq. (3.11) for the solution we obtain
k 1 i,
Pk = Po IT(. + I )
. ~ o I fl
(3.33)
(3.34) k = 0. 1. 2•. ..
Need we go any further? The reader should compare Eq. (3.33) with Eq.
(3.28). These two are in fact equi valent for ex = i., and so we immedi atel y
have the soluti ons for P« and N,
(}' Ifl)k  Al p
P, = z! e
N=!
fl
Her e. too. the ergodi c condition is simply i.l fl < 00. It appears then that a
system of discouraged arriva ls behaves exactly the same as a system that
includ es a responsive server. However, Little ' s result provid es a different
(and simpler) form for T here than that given in Eq. (3.32) ; thus
I
T = 
fl
This answer is, of course. obvio us since if we use t he interpretation where
each arriving customer is gra nted his own server. then his time in system will
be merely his service time which clearl y equals l il t on the ave rage.
Figure 3.6 State-transition-rate diagram for the infinite-server case M/M/∞.
3.5. M/M/m: THE m-SERVER CASE

Here again we consider a system with an unlimited waiting room and with
a constant arrival rate λ. The system provides for a maximum of m servers.
This is within the reach of our birth-death formulation and leads to

    λ_k = λ    k = 0, 1, 2, …

    μ_k = min[kμ, mμ] = { kμ    0 ≤ k ≤ m
                        { mμ    m ≤ k

From Eq. (3.20) it is easily seen that the condition for ergodicity is λ/mμ < 1.
The state-transition-rate diagram is shown in Figure 3.7. When we go to
solve for p_k from Eq. (3.11) we find that we must separate the solution into
two parts, since the dependence of μ_k upon k is also in two parts. Accordingly,
for k ≤ m,

    p_k = p_0 ∏_{i=0}^{k−1} λ/[(i + 1)μ]

        = p_0 (λ/μ)^k (1/k!)    (3.35)

Similarly, for k ≥ m,

    p_k = p_0 ∏_{i=0}^{m−1} λ/[(i + 1)μ] ∏_{i=m}^{k−1} λ/(mμ)

        = p_0 (λ/μ)^k (1/m!)(1/m^{k−m})    (3.36)
Collecting together the results from Eqs. (3.35) and (3.36) we have

    p_k = { p_0 (mρ)^k/k!      k ≤ m
          { p_0 ρ^k m^m/m!     k ≥ m    (3.37)

where

    ρ = λ/(mμ) < 1    (3.38)

Figure 3.7 State-transition-rate diagram for M/M/m.

This expression for ρ follows that in Eq. (2.30) and is consistent with our
definition in terms of the expected fraction of busy servers. We may now
solve for p_0 from Eq. (3.12), which gives us

    p_0 = [1 + Σ_{k=1}^{m−1} (mρ)^k/k! + Σ_{k=m}^∞ (mρ)^k/(m! m^{k−m})]^{−1}

and so

    p_0 = [Σ_{k=0}^{m−1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 − ρ))]^{−1}    (3.39)
The probability that an arriving customer is forced to join the queue is
given by

    P[queueing] = Σ_{k=m}^∞ p_k

Thus

    P[queueing] = [((mρ)^m/m!)(1/(1 − ρ))] / [Σ_{k=0}^{m−1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 − ρ))]    (3.40)

This probability is of wide use in telephony and gives the probability that no
trunk (i.e., server) is available for an arriving call (customer) in a system of m
trunks; it is referred to as Erlang's C formula and is often denoted* by
C(m, λ/μ).
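Equation (3.40) is easy to compute directly. In the sketch below (ours, not from the text), a = λ/μ = mρ denotes the offered load; for m = 1 the formula reduces to ρ, in agreement with Eq. (3.27) evaluated at k = 1:

```python
import math

def erlang_c(m, a):
    """Erlang C formula, Eq. (3.40): P[queueing] in M/M/m, where a = lam/mu
    is the offered load in erlangs; ergodicity requires a < m."""
    rho = a / m
    queued = (a**m / math.factorial(m)) / (1 - rho)
    return queued / (sum(a**k / math.factorial(k) for k in range(m)) + queued)

print(erlang_c(1, 0.5))  # 0.5 = rho for a single server
print(erlang_c(2, 1.0))  # 1/3
```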
3.6. M/M/1/K: FINITE STORAGE

We now consider for the first time the case of a queueing system in which
there is a maximum number of customers that may be stored; in particular,
we assume the system can hold at most a total of K customers (including the
customer in service) and that any further arriving customers will in fact be
refused entry to the system and will depart immediately without service.
Newly arriving customers will continue to be generated according to a
Poisson process, but only those who find the system with strictly fewer than K
customers will be allowed entry. In telephony the refused customers are
considered to be "lost"; for the system in which K = 1 (i.e., no waiting
room at all) this is referred to as a "blocked calls cleared" system with a
single server.

* Europeans use the symbol E_{2,m}(λ/μ).
Figure 3.8 State-transition-rate diagram for the case of finite storage room
M/M/1/K.

It is interesting that we are capable of accommodating this seemingly
complex system description with our birth-death model. In particular, we
accomplish this by effectively "turning off" the Poisson input as soon as the
system fills up, as follows:

    λ_k = { λ    k < K
          { 0    k ≥ K

    μ_k = μ    k = 1, 2, …, K

From Eq. (3.20), we see that this system is always ergodic. The state-transi-
tion-rate diagram for this finite Markov chain is shown in Figure 3.8. Proceed-
ing directly with Eq. (3.11) we obtain

    p_k = p_0 ∏_{i=0}^{k−1} λ/μ    k ≤ K

or

    p_k = p_0 (λ/μ)^k    k ≤ K    (3.41)
Of course, we also have

    p_k = 0    k > K    (3.42)

In order to solve for p_0 we use Eqs. (3.41) and (3.42) in Eq. (3.12) to obtain

    p_0 = [Σ_{k=0}^K (λ/μ)^k]^{−1}

and so

    p_0 = (1 − λ/μ)/[1 − (λ/μ)^{K+1}]

Thus, finally,

    p_k = { [(1 − λ/μ)/(1 − (λ/μ)^{K+1})](λ/μ)^k    k ≤ K
          { 0                                       otherwise    (3.43)
For the case of blocked calls cleared (K = 1) we have

    p_k = { 1/(1 + λ/μ)        k = 0
          { (λ/μ)/(1 + λ/μ)    k = 1 = K
          { 0                  otherwise    (3.44)

Figure 3.9 State-transition-rate diagram for m-server loss system M/M/m/m.
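Equation (3.43) codes directly; the sketch below (ours, function name hypothetical) also covers λ = μ, where p_k ∝ 1 and the K + 1 states are equally likely, and for K = 1 it reproduces Eq. (3.44):

```python
def mm1k(lam, mu, K):
    """Equilibrium distribution p_0..p_K for M/M/1/K, Eq. (3.43)."""
    r = lam / mu
    if r == 1.0:
        return [1.0 / (K + 1)] * (K + 1)  # all K+1 states equally likely
    p0 = (1 - r) / (1 - r**(K + 1))
    return [p0 * r**k for k in range(K + 1)]

print(mm1k(lam=1.0, mu=2.0, K=1))  # ≈ [2/3, 1/3], matching Eq. (3.44)
```

Note that no condition λ < μ is needed: the finite chain is ergodic for any positive rates, exactly as the text observes.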
3.7. M/M/m/m: m-SERVER LOSS SYSTEMS

Here we have again a blocked calls cleared situation in which there are
available m servers. Each newly arriving customer is given his private server;
however, if a customer arrives when all servers are occupied, that customer is
lost. We create this artifact as above by choosing the following birth and
death coefficients:

    λ_k = { λ    k < m
          { 0    k ≥ m

    μ_k = kμ    k = 1, 2, …, m

Here again, ergodicity is always assured. This finite state-transition-rate
diagram is shown in Figure 3.9.
Applying Eq. (3.11) we obtain

    p_k = p_0 ∏_{i=0}^{k−1} λ/[(i + 1)μ]    k ≤ m

or

    p_k = { p_0 (λ/μ)^k (1/k!)    k ≤ m
          { 0                     k > m    (3.45)

Solving for p_0 we have

    p_0 = [Σ_{k=0}^m (λ/μ)^k (1/k!)]^{−1}
This particular system is of great interest to those in telephony [so much so
that a special case of Eq. (3.45) has been tabulated and graphed in many books
on telephony]. Specifically, p_m describes the fraction of time that all m servers
are busy. The name given to this probability expression is Erlang's loss
formula and it is given by

    p_m = [(λ/μ)^m/m!] / [Σ_{k=0}^m (λ/μ)^k/k!]    (3.46)

This equation is also referred to as Erlang's B formula and is commonly
denoted* by B(m, λ/μ). Formula (3.46) was first derived by Erlang in 1917!
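Evaluating Eq. (3.46) through the factorials overflows for large m; a standard remedy (not given in the text) is the stable recurrence B(0, a) = 1, B(k, a) = aB(k−1, a)/(k + aB(k−1, a)), with a = λ/μ:

```python
import math

def erlang_b(m, a):
    """Erlang B (loss) formula, Eq. (3.46), via the standard recurrence."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

# Agreement with the direct form of Eq. (3.46)
m, a = 5, 3.0
direct = (a**m / math.factorial(m)) / sum(a**k / math.factorial(k)
                                          for k in range(m + 1))
print(erlang_b(m, a), direct)  # both ≈ 0.110
```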
3.8. M/M/1//M†: FINITE CUSTOMER POPULATION,
SINGLE SERVER

Here we consider the case where we no longer have a Poisson input
process with an infinite user population, but rather have a finite population
of possible users. The system structure is such that we have a total of M
users; a customer is either in the system (consisting of a queue and a single
server) or outside the system and in some sense "arriving." In particular,
when a customer is in the "arriving" condition, the time it takes him to
arrive is a random variable with an exponential distribution whose mean is
1/λ sec. All customers act independently of each other. As a result, when there
are k customers in the system (queue plus service) then there are M − k
customers in the arriving state and, therefore, the total average arrival rate
in this state is λ(M − k). We see that this system is in a strong sense self-
regulating. By this we mean that when the system gets busy, with many of
these customers in the queue, then the rate at which additional customers
arrive is in fact reduced, thus lowering the further congestion of the system.
We model this quite appropriately with our birth-death process, choosing for
parameters

    λ_k = { λ(M − k)    0 ≤ k ≤ M
          { 0           otherwise

    μ_k = μ    k = 1, 2, …

The system is ergodic. We assume that we have sufficient room to contain M
customers in the system. The finite state-transition-rate diagram is shown in
Figure 3.10. Using Eq. (3.11) we solve for p_k as follows:

    p_k = p_0 ∏_{i=0}^{k−1} λ(M − i)/μ

* Europeans use the notation E_{1,m}(λ/μ).
† Recall that a blank entry in either of the last two optional positions in this notation means
an entry of ∞; thus here we have the system M/M/1/∞/M.
Figure 3.10 State-transition-rate diagram for single-server finite population
system M/M/1//M.

Thus

    p_k = { p_0 (λ/μ)^k M!/(M − k)!    0 ≤ k ≤ M
          { 0                          k > M    (3.47)

In addition, we obtain for p_0

    p_0 = [Σ_{k=0}^M (λ/μ)^k M!/(M − k)!]^{−1}    (3.48)
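Equations (3.47) and (3.48) are likewise a one-liner each; the sketch below (ours, with arbitrary parameter values) normalizes the finite sum and reports the fraction of time the single server is busy, 1 − p_0:

```python
import math

def mm1_finite_population(lam, mu, M):
    """Equilibrium probabilities p_0..p_M for M/M/1//M, Eqs. (3.47)-(3.48)."""
    r = lam / mu
    terms = [r**k * math.factorial(M) / math.factorial(M - k)
             for k in range(M + 1)]
    p0 = 1.0 / sum(terms)
    return [p0 * t for t in terms]

p = mm1_finite_population(lam=0.1, mu=1.0, M=4)
print(p[0], 1 - p[0])  # p_0 ≈ 0.647, so the server is busy ≈ 35% of the time
```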
3.9. M/M/∞//M: FINITE CUSTOMER POPULATION,
"INFINITE" NUMBER OF SERVERS

We again consider the finite population case, but now provide a separate
server for each customer in the system. We model this as follows:

    λ_k = { λ(M − k)    0 ≤ k ≤ M
          { 0           otherwise

    μ_k = kμ    k = 1, 2, …

Clearly, this too is an ergodic system. The finite state-transition-rate diagram
is shown in Figure 3.11. Solving this system, we have from Eq. (3.11)

    p_k = p_0 ∏_{i=0}^{k−1} λ(M − i)/[(i + 1)μ]

        = p_0 (λ/μ)^k C(M, k)    0 ≤ k ≤ M    (3.49)

where the binomial coefficient C(M, k) is defined in the usual way,

    C(M, k) ≜ M!/[k!(M − k)!]

Figure 3.11 State-transition-rate diagram for "infinite"-server finite population
system M/M/∞//M.
108 BIRTHDEATH QUEUEING SYSTEMS IN EQUILIBRIUM
Solving for p_0 we have

p_0 = [ ∑_{k=0}^{M} (λ/μ)^k C(M, k) ]^{−1} = 1/(1 + λ/μ)^M

and so

p_k = (λ/μ)^k C(M, k)/(1 + λ/μ)^M    for 0 ≤ k ≤ M
p_k = 0                              otherwise        (3.50)

We may easily calculate the expected number of people in the system from ∑_k k p_k. Using the partial-differentiation trick such as for obtaining Eq. (3.24) we then have

N̄ = M(λ/μ)/(1 + λ/μ)
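Equation (3.50) is simply a binomial distribution over k = 0, 1, ..., M with success probability (λ/μ)/(1 + λ/μ), which is why the mean takes the form above. A quick numerical check (in Python, with illustrative parameter values not taken from the text):

```python
from math import comb

def mm_inf_finite_population(lam, mu, M):
    """p_k of Eq. (3.50): a binomial distribution over k = 0..M with
    success probability (lam/mu)/(1 + lam/mu)."""
    r = lam / mu
    return [comb(M, k) * r ** k / (1 + r) ** M for k in range(M + 1)]

# Illustrative parameters: M = 10, lam = 1.0, mu = 3.0.
p = mm_inf_finite_population(1.0, 3.0, 10)
mean = sum(k * pk for k, pk in enumerate(p))
# The mean should equal M*(lam/mu)/(1 + lam/mu) = 10*(1/3)/(4/3) = 2.5
```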
3.10. M/M/m/K/M: FINITE POPULATION, m-SERVER CASE, FINITE STORAGE
This rather general system is the most complicated we have so far considered and will reduce to all of the previous cases (except the example of discouraged arrivals) as we permit the parameters of this system to vary. We assume we have a finite population of M customers, each with an "arriving" parameter λ. In addition, the system has m servers, each with parameter μ. The system also has finite storage room such that the total number of customers in the system (queueing plus those in service) is no more than K. We assume M ≥ K ≥ m; customers arriving to find K already in the system are "lost" and return immediately to the arriving state as if they had just completed service. This leads to the following set of birth-death coefficients:
λ_k = λ(M − k)    for 0 ≤ k ≤ K − 1
λ_k = 0           otherwise

μ_k = kμ          for 0 ≤ k ≤ m
μ_k = mμ          for m ≤ k ≤ K        (3.51)
Figure 3.12  State-transition-rate diagram for m-server, finite storage, finite population system M/M/m/K/M.
In Figure 3.12 we see the most complicated of our finite state-transition-rate diagrams. In order to apply Eq. (3.11) we must consider two regions. First, for the range 0 ≤ k ≤ m − 1 we have

p_k = p_0 ∏_{i=0}^{k−1} λ(M − i)/((i + 1)μ)
    = p_0 (λ/μ)^k C(M, k)

For the region m ≤ k ≤ K we have

p_k = p_0 ∏_{i=0}^{m−1} [λ(M − i)/((i + 1)μ)] ∏_{i=m}^{k−1} [λ(M − i)/(mμ)]
    = p_0 (λ/μ)^k C(M, k) k!/(m! m^{k−m})        (3.52)
The expression for p_0 is rather complex and will not be given here, although it may be computed in a straightforward manner. In the case of a pure loss system (i.e., M ≥ K = m), the stationary state probabilities are given by

p_k = C(M, k)(λ/μ)^k / ∑_{i=0}^{m} C(M, i)(λ/μ)^i    for k = 0, 1, ..., m        (3.53)

This is known as the Engset distribution.
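As remarked above, p_0 may be computed in a straightforward manner even though its closed form is unwieldy. The sketch below (Python; all parameter values are illustrative) builds the p_k for the general M/M/m/K/M system directly from the product solution, Eq. (3.11), and confirms that the pure loss case K = m reduces to the Engset distribution of Eq. (3.53):

```python
from math import comb

def mmmkm(lam, mu, M, m, K):
    """Equilibrium probabilities for M/M/m/K/M (with M >= K >= m),
    built from the birth-death coefficients of Eq. (3.51):
    lambda_k = lam*(M - k) for 0 <= k <= K - 1, mu_k = min(k, m)*mu."""
    unnorm = [1.0]
    for k in range(1, K + 1):
        birth = lam * (M - (k - 1))     # lambda_{k-1}
        death = min(k, m) * mu          # mu_k
        unnorm.append(unnorm[-1] * birth / death)
    p0 = 1.0 / sum(unnorm)              # the "straightforward" p_0
    return [p0 * u for u in unnorm]

# Pure loss case K = m: compare with the Engset distribution (3.53).
M, m, lam, mu = 8, 3, 0.4, 1.0          # illustrative values
p = mmmkm(lam, mu, M, m, m)
w = [comb(M, k) * (lam / mu) ** k for k in range(m + 1)]
engset = [x / sum(w) for x in w]
```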
We could continue these examples ad nauseam but we will instead take a benevolent approach and terminate the set of examples here. Additional examples are given in the exercises. It should be clear to the reader by now that a large number of interesting queueing structures can be modeled with the birth-death process. In particular, we have demonstrated the ability to model the multiple-server case, the finite-population case, the finite-storage case, and combinations thereof. The common element in all of these is that the solution for the equilibrium probabilities {p_k} is given in Eqs. (3.11) and (3.12). Only systems whose solutions are given by these equations have been considered in this chapter. However, there are many other Markovian systems that lend themselves to simple solution and which are important in queueing
theory. In the next chapter (4) we consider the equilibrium solution for Markovian queues; in Chapter 5 we will generalize to semi-Markov processes in which the service time distribution B(x) is permitted to be general, and in Chapter 6 we revert back to the exponential service time case, but permit the interarrival time distribution A(t) to be general; in both of these cases an imbedded Markov chain will be identified and solved. Only when both A(t) and B(x) are nonexponential do we require the methods of advanced queueing theory discussed in Chapter 8. (There are some special nonexponential distributions that may be described with the theory of Markov processes and these too are discussed in Chapter 4.)
EXERCISES
3.1. Consider a pure Markovian queueing system in which

λ_k = λ     for 0 ≤ k ≤ K
λ_k = 2λ    for K < k

μ_k = μ     for k = 1, 2, ...

(a) Find the equilibrium probabilities p_k for the number in the system.
(b) What relationship must exist among the parameters of the problem in order that the system be stable and, therefore, that this equilibrium solution in fact be reached? Interpret this answer in terms of the possible dynamics of the system.
3.2. Consider a Markovian queueing system in which

λ_k = λα^k    for k ≥ 0, 0 ≤ α < 1
μ_k = μ       for k = 1, 2, ...

(a) Find the equilibrium probability p_k of having k customers in the system. Express your answer in terms of p_0.
(b) Give an expression for p_0.
3.3. Consider an M/M/2 queueing system where the average arrival rate is λ customers per second and the average service time is 1/μ sec, where λ < 2μ.

(a) Find the differential equations that govern the time-dependent probabilities P_k(t).
(b) Find the equilibrium probabilities

p_k = lim_{t→∞} P_k(t)
3.4. Consider an M/M/1 system with parameters λ, μ in which customers are impatient. Specifically, upon arrival, customers estimate their queueing time w and then join the queue with probability e^{−αw} (or leave with probability 1 − e^{−αw}). The estimate is w = k/μ when the new arrival finds k in the system. Assume 0 ≤ α.

(a) In terms of p_0, find the equilibrium probabilities p_k of finding k in the system. Give an expression for p_0 in terms of the system parameters.
(b) For 0 < α, 0 < μ, under what conditions will the equilibrium solution hold?
(c) For α → ∞, find p_k explicitly and find the average number in the system.
3.5. Consider a birth-death system with the following birth and death coefficients:

λ_k = (k + 2)λ    for k = 0, 1, 2, ...
μ_k = kμ          for k = 1, 2, 3, ...

All other coefficients are zero.

(a) Solve for p_k. Be sure to express your answer explicitly in terms of λ, k, and μ only.
(b) Find the average number of customers in the system.
3.6. Consider a birth-death process with the following coefficients:

λ_k = αk(K₂ − k)    for k = K₁, K₁ + 1, ..., K₂
μ_k = βk(k − K₁)    for k = K₁, K₁ + 1, ..., K₂

where K₁ ≤ K₂ and where these coefficients are zero outside the range K₁ ≤ k ≤ K₂. Solve for p_k (assuming that the system initially contains K₁ ≤ k ≤ K₂ customers).
3.7. Consider an M/M/m system that is to serve the pooled sum of two Poisson arrival streams; the ith stream has an average arrival rate given by λ_i and exponentially distributed service times with mean 1/μ (i = 1, 2). The first stream is an ordinary stream whereby each arrival requires exactly one of the m servers; if all m servers are busy then any newly arriving customer of type 1 is lost. Customers from the second class each require the simultaneous use of m₀ servers (and will occupy them all simultaneously for the same exponentially distributed amount of time whose mean is 1/μ sec); if a customer from this class finds less than m₀ idle servers then he too is lost to the system. Find the fraction of type 1 customers and the fraction of type 2 customers that are lost.
3.8. Consider a finite customer population system with a single server such as that considered in Section 3.8; let the parameters M, λ be replaced by M, λ'. It can be shown that if M → ∞ and λ' → 0 such that lim Mλ' = λ, then the finite population system becomes an infinite population system with exponential interarrival times (at a mean rate of λ customers per second). Now consider the case of Section 3.10; the parameters of that case are now to be denoted M, λ', m, μ, K in the obvious way. Show what value these parameters must take on if they are to represent the earlier cases described in Sections 3.2, 3.4, 3.5, 3.6, 3.7, 3.8, or 3.9.
3.9. Using the definition for B(m, λ/μ) in Section 3.7 and the definition of C(m, λ/μ) given in Section 3.5, establish the following for λ/μ > 0, m = 1, 2, ...

(a) B(m, λ/μ) < ∑_{k=m}^{∞} [(λ/μ)^k/k!] e^{−λ/μ} < C(m, λ/μ)
(b) C(m, λ/μ) = m B(m, λ/μ)/[m − (λ/μ)(1 − B(m, λ/μ))]
3.10. Here we consider an M/M/1 queue in discrete time where time is segmented into intervals of length q sec each. We assume that events can only occur at the ends of these discrete time intervals. In particular, the probability of a single arrival at the end of such an interval is given by λq and the probability of no arrival at that point is 1 − λq (thus at most one arrival may occur). Similarly, the departure process is such that if a customer is in service during an interval he will complete service at the end of that interval with probability 1 − σ or will require at least one more interval with probability σ.

(a) Derive the form for a(t) and b(x), the interarrival time and service time pdf's, respectively.
(b) Assuming FCFS, write down the equilibrium equations that govern the behavior of p_k = P[k customers in system at the end of a discrete time interval], where k includes any arrivals who have occurred at the end of this interval as well as any customers who are about to leave at this point.
(c) Solve for the expected value of the number of customers at these points.
3.11. Consider an M/M/1 system with "feedback"; by this we mean that when a customer departs from service he has probability a of rejoining the tail of the queue after a random feedback time, which is exponentially distributed (with mean 1/γ sec); on the other hand, with probability 1 − a he will depart forever after completing service. It is clear that a customer may return many times to the tail of the queue before making his eventual final departure. Let p_{kj} be the equilibrium probability that there are k customers in the "system" (that is, in the queue and the service facility) and that there are j customers in the process of returning to the system.

(a) Write down the set of difference equations for the equilibrium probabilities p_{kj}.
(b) Defining the double z-transform

P(z₁, z₂) ≜ ∑_{k=0}^{∞} ∑_{j=0}^{∞} p_{kj} z₁^k z₂^j

show that

γ(z₂ − z₁) ∂P(z₁, z₂)/∂z₂ + [λ(1 − z₁) + μ(1 − (1 − a + a z₂)/z₁)] P(z₁, z₂)
    = μ[1 − (1 − a + a z₂)/z₁] P(0, z₂)

(c) By taking advantage of the moment-generating properties of our z-transforms, show that the mean number in the "system" (queue plus server) is given by ρ/(1 − ρ) and that the mean number returning to the tail of the queue is given by μaρ/γ, where ρ = λ/(1 − a)μ.
Consider a " cyclic queue" in which 1\<1 customers circulate around
through two queueing facilities as shown below.
•
3.12.
Stage 1
114 BIRTH DEATH QUEUEING SYSTEMS IN EQUILIBRIUM
Both servers are of the exponential type with rates JlI and Jl2, respec
tively. Let
Pk = P[k customers in stage I and Mk in stage 2]
(a) Draw the statetransitionrate diagram.
(b) Write down the relati onship among {Pk} .
(c) Find
.U
P(z) = 2: Pk
Zk
k=Q
(d) Find Pk.
3.13. Consider an M/M/1 queue with parameters λ and μ. A customer in the queue will defect (depart without service) with probability αΔt + o(Δt) in any interval of duration Δt.

(a) Draw the state-transition-rate diagram.
(b) Express p_{k+1} in terms of p_k.
(c) For α = μ, solve for p_k (k = 0, 1, 2, ...).
3.14. Let us elaborate on the M/M/1/K system of Section 3.6.

(a) Evaluate p_k when λ = μ.
(b) Find N̄ for λ ≠ μ and for λ = μ.
(c) Find T by carefully solving for the average arrival rate to the system.
4
Markovian Queues in Equilibrium
The previous chapter was devoted to the study of the birth-death product solution given in Eq. (3.11). The beauty of that solution lies not only in its simplicity but also in its broad range of application to queueing systems, as we have discussed. When we venture beyond the birth-death process into the more general Markov process, then the product solution mentioned above no longer applies; however, one seeks and often finds some other form of product solution for the pure Markovian systems. In this chapter we intend to investigate some of these Markov processes that are of direct interest to queueing systems. Most of what we say will apply to random walks of the Markovian type; we may think of these as somewhat more general birth-death processes where steps beyond nearest neighbors are permitted, but which nevertheless contain sufficient structure so as to permit explicit solutions. All of the underlying distributions are, of course, exponential.

Our concern here again is with equilibrium results. We begin by outlining a general method for finding the equilibrium equations by inspection. Then we consider the special Erlangian distribution E_r, which is applied to the queueing systems M/E_r/1 and E_r/M/1. We find that the system M/E_r/1 has an interpretation as a bulk arrival process whose general form we study further; similarly, the system E_r/M/1 may be interpreted as a bulk service system, which we also investigate separately. We then consider the more general system E_r/E_r/1 and step beyond that to mention a broad class of G/G/1 systems that are derivable from the Erlangian by "series-parallel" combinations. Finally, we consider the case of queueing networks in which all the underlying distributions once again are of the memoryless type. As we shall see, in most of these cases we obtain a product form of solution.
4.1. THE EQUILIBRIUM EQUATIONS
Our point of departure is Eq. (2.116), namely, πQ = 0, which expresses the equilibrium conditions for a general ergodic discrete-state continuous-time Markov process; recall that π is the row vector of equilibrium state probabilities and that Q is the infinitesimal generator whose elements are the
infinitesimal transition rates of our Markov process. As discussed in the
previous chapter, we adopt the more standard queueing-theory notation and replace the vector π with the row vector p whose kth element is the equilibrium probability p_k of finding the system in state E_k. Our task then is to solve

pQ = 0

with the additional conservation relation given in Eq. (2.117), namely,

∑_k p_k = 1
This vector equation describes the "equations of motion" in equilibrium. In Chapter 3 we presented a graphical inspection method for writing down equations of motion making use of the state-transition-rate diagram. For the equilibrium case that method was based on the observation that the probabilistic flow rate into a state must equal the probabilistic flow rate out of that state. It is clear that this notion of flow conservation applies more generally than only to the birth-death process, but in fact to any Markov chain. Thus we may construct "non-nearest-neighbor" systems and still expect that our flow conservation technique should work; this in fact is the case. Our approach then is to describe our Markov chain in terms of a state diagram and then apply conservation of flow to each state in turn. This graphical representation is often easier for this purpose than, in fact, is the verbal, mathematical, or matrix description of the system. Once we have this graphical representation we can, by inspection, write down the equations that govern the system dynamics. As an example, let us consider the very
simple three-state Markov chain (which clearly is not a birth-death process since the transition E₀ → E₂ is permitted), as shown in Figure 4.1. Writing
down the flow conservation law for each state yields

2λp₀ = μp₁                        (4.1)
(λ + μ)p₁ = λp₀ + μp₂             (4.2)
μp₂ = λp₀ + λp₁                   (4.3)
where Eqs. (4.1), (4.2), and (4.3) correspond to the flow conservation for states E₀, E₁, and E₂, respectively. Observe also that the last equation is exactly the sum of the first two; we always have exactly one redundant equation in these finite Markov chains. We know that the additional equation required is

p₀ + p₁ + p₂ = 1
Figure 4.1  Example of non-nearest-neighbor system.
The solution to this system of equations gives

p₀ = μ²/[(λ + μ)(2λ + μ)],   p₁ = 2λμ/[(λ + μ)(2λ + μ)],   p₂ = λ/(λ + μ)        (4.4)

Voilà! Simple as pie. In fact, it is as "simple" as inverting a set of simultaneous linear equations.

We take advantage of this inspection technique in solving a number of Markov chains in equilibrium in the balance of this chapter.*
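In modern terms, that "inversion" amounts to solving a small linear system: pQ = 0 together with the normalization. The Python fragment below is a minimal sketch; the three-state generator shown is illustrative and is not necessarily that of Figure 4.1.

```python
def equilibrium(Q):
    """Solve p Q = 0 with sum(p) = 1 for a small ergodic chain, where
    Q (the infinitesimal generator) is given as a list of rows.  We
    transpose the system, replace the last (redundant) balance
    equation by the normalization, and run Gaussian elimination."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]   # A = Q transposed
    A[-1] = [1.0] * n                                     # normalization row
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):                                  # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):                        # back substitution
        s = b[r] - sum(A[r][c] * p[c] for c in range(r + 1, n))
        p[r] = s / A[r][r]
    return p

# A hypothetical three-state chain (rates chosen for illustration only):
lam, mu = 1.0, 2.0
Q = [[-2 * lam, lam, lam],
     [mu, -(lam + mu), lam],
     [mu, mu, -2 * mu]]
p = equilibrium(Q)   # gives p = [0.5, 0.3, 0.2] for these rates
```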
As in the previous chapter we are here concerned with the limiting probability defined as p_k = lim_{t→∞} P[N(t) = k], assuming it exists. This probability may be interpreted as giving the proportion of time that the system spends in state E_k. One could, in fact, estimate this probability by measuring how often the system contained k customers as compared to the total measurement time. Another quantity of interest (perhaps of greater interest) in queueing systems is the probability that an arriving customer finds the system in state E_k; that is, we consider the equilibrium probability

r_k = P[arriving customer finds the system in state E_k]
in the case of an ergodic system. One might intuitively feel that in all cases p_k = r_k, but it is easy to show that this is not generally true. For example, let us consider the (non-Markovian) system D/D/1 in which arrivals are uniformly spaced in time such that we get one arrival every t̄ sec exactly; the service-time requirements are identical for all customers and equal, say,

* It should also be clear that this inspection technique permits us to write down the time-dependent state probabilities P_k(t) directly, as we have already seen for the case of birth-death processes; these time-dependent equations will in fact be exactly Eq. (2.114).
to x̄ sec. We recognize this single-server system as an instance of steady flow through a single channel (remember the pineapple factory). For stability we require that x̄ < t̄. Now it is clear that no arrival will ever have to wait once equilibrium is reached and, therefore, r₀ = 1 and r_k = 0 for k = 1, 2, .... Moreover, it is clear that the fraction of time that the system contains one customer (in service) is exactly equal to ρ = x̄/t̄, and the remainder of the time the system will be empty; therefore, we have p₀ = 1 − ρ, p₁ = ρ, p_k = 0 for k = 2, 3, 4, .... So we have a trivial example in which p_k ≠ r_k. However, as is often the case, one's intuition has a basis in fact, and we find that there is a large class of queueing systems for which p_k = r_k for all k. This, in fact, is the class of stable queueing systems with Poisson arrivals!
Actually, we can prove more; as we show below, for any queueing system with Poisson arrivals we must have

R_k(t) = P_k(t)

where P_k(t) is, as before, the probability that the system is in state E_k at time t and where R_k(t) is the probability that a customer arriving at time t finds the system in state E_k. Specifically, for our system with Poisson arrivals we define A(t, t + Δt) to be the event that an arrival occurs in the interval (t, t + Δt); then we have

R_k(t) ≜ lim_{Δt→0} P[N(t) = k | A(t, t + Δt)]        (4.5)
[where N(t) gives the number in system at time t]. Using our definition of conditional probability we may rewrite R_k(t) as

R_k(t) = lim_{Δt→0} P[N(t) = k, A(t, t + Δt)]/P[A(t, t + Δt)]
       = lim_{Δt→0} P[A(t, t + Δt) | N(t) = k] P[N(t) = k]/P[A(t, t + Δt)]

Now for the case of Poisson arrivals we know (due to the memoryless property) that the event A(t, t + Δt) must be independent of the number in the system at time t (and also of the time t itself); consequently P[A(t, t + Δt) | N(t) = k] = P[A(t, t + Δt)], and so we have

R_k(t) = lim_{Δt→0} P[N(t) = k]

or

R_k(t) = P_k(t)        (4.6)
This is what we set out to prove, namely, that the time-dependent probability of an arrival finding the system in state E_k is exactly equal to the time-dependent probability of the system being in state E_k. Clearly this also applies to the equilibrium probability r_k that an arrival finds k customers in the system and the proportion of time p_k that the system finds itself with k customers. This equivalence does not surprise us in view of the memoryless property of the Poisson process, which as we have just shown generates a sequence of arrivals that take a really "random look" at the system.
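This property of Poisson arrivals (in modern terminology, "Poisson Arrivals See Time Averages," or PASTA) can also be checked empirically. The sketch below (Python; the parameters and run length are illustrative) simulates an M/M/1 queue and compares the long-run fraction of time the system is empty with the fraction of arrivals that find it empty:

```python
import random

def mm1_pasta(lam, mu, n_events=200_000, seed=1):
    """Simulate M/M/1 and return (p0_hat, r0_hat): the time-average
    probability of an empty system and the fraction of Poisson
    arrivals finding it empty.  By the argument above, these agree."""
    rng = random.Random(seed)
    n, t = 0, 0.0
    empty_time, arrivals, found_empty = 0.0, 0, 0
    for _ in range(n_events):
        rate = lam + (mu if n > 0 else 0.0)   # total event rate in this state
        dt = rng.expovariate(rate)
        if n == 0:
            empty_time += dt
        t += dt
        if rng.random() < lam / rate:          # next event is an arrival
            arrivals += 1
            if n == 0:
                found_empty += 1
            n += 1
        else:                                  # next event is a departure
            n -= 1
    return empty_time / t, found_empty / arrivals

p0_hat, r0_hat = mm1_pasta(1.0, 2.0)   # rho = 0.5, so both should be near 0.5
```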
4.2. THE METHOD OF STAGES - ERLANGIAN DISTRIBUTION E_r
The "method of stages" permits one to study queueing systems that are
more general than the birthdeath systems . This ingenious meth od is a
fur ther test imonial to the brill iance of A. K. Erlang, who developed it early
in this century long before our tools of modem probability theory were
available . Erlang recognized the extreme simplicity of the exponential
distribution and its great power in solving Markovian queueing systems.
However , he also recognized that the exponential distribution was not always
an appropriate candidate for representing the true situati on with regard to
service times (and interarri val times). He must also have observed that to
allow a more general service distribution would have destroyed the Markovian
property and then would have required some more complicated solution
method. * The inherent beauty of the Markov chain was not to be given up so
easily. What Erlang conceived was the notion of decomposing the service]
time distribution int o a collection of structured exponential distributions.
The principle on which the method of stages is based is the memoryless
pr operty of the exponential distribution ; agai n we repeat that this lack of
memory is reflected by the fact that the distribution of time remaining for an
exponent ially distr ibuted random variable is independent of the acquired
"age" of that random variable.
Consider the diagram of Figure 4.2. In this figure we are defining a service facility with an exponentially distributed service time pdf given by

b(x) = dB(x)/dx = μe^{−μx}    for x ≥ 0        (4.7)
dx
The notation of the figure shows an oval which represent s the service facility
and is labeled with the symbol /1, which represent s the servicerate parameter
* As we shall see in Chapter 5, a newer approach to this problem, the "method of imbedded
Markov chains," was not ava ilable at the time of Erlang.
t Identical observations apply also to the interarrival time distribut ion.
Figure 4.2  The single-stage exponential server.
as in Eq. (4.7). The reader will recall from Chapter 2 that the exponential distribution has a mean and variance given by

E[x̃] = 1/μ,    σ_b² = 1/μ²

where the subscript b on σ_b² identifies this as the service time variance.
Now consider the system shown in Figure 4.3. In this figure the large oval represents the service facility. The internal structure of this service facility is revealed as a series or tandem connection of two smaller ovals. Each of these small ovals represents a single exponential server such as that depicted in Figure 4.2; in Figure 4.3, however, the small ovals are labeled internally with the parameter 2μ, indicating that they each have a pdf given by

h(y) = 2μe^{−2μy}    for y ≥ 0        (4.8)

Thus the mean and variance for h(y) are E[ỹ] = 1/2μ and σ_ỹ² = (1/2μ)².
The fashion in which this two-stage service facility functions is that upon departure of a customer from this facility a new customer is allowed to enter from the left. This new customer enters stage 1 and remains there for an amount of time randomly chosen from h(y). Upon his departure from this first stage he then proceeds immediately into the second stage and spends an amount of time there equal to a random variable drawn independently once again from h(y). After this second random interval expires he then departs from the service facility, and at this point only may a new customer enter the facility from the left. We see then, that one, and only one, customer is

Figure 4.3  The two-stage Erlangian server E₂.
allowed into the box entitled "service facility" at any time.* This implies that at least one of the two service stages must always be empty. We now inquire as to the specific distribution of total time spent in the service facility. Clearly this is a random variable, which is the sum of two independent and identically distributed random variables. Thus, as shown in Appendix II, we must form the convolution of the density function associated with each of the two summands. Alternatively, we may calculate the Laplace transform of the service time pdf as being equal to the product of the Laplace transforms of the pdf's associated with each of the summands. Since both random variables are (independent and) identically distributed we must form the product of a function with itself. First, as always, we define the appropriate transforms as

B*(s) ≜ ∫₀^∞ e^{−sx} b(x) dx        (4.9)

H*(s) ≜ ∫₀^∞ e^{−sy} h(y) dy        (4.10)

From our earlier statements we have

B*(s) = [H*(s)]²        (4.11)

But, we already know the transform of the exponential from Eq. (2.144) and so

H*(s) = 2μ/(s + 2μ)

Thus

B*(s) = [2μ/(s + 2μ)]²
We must now invert Eq. (4.11). However, the reader may recall that we have already seen this form in Eq. (2.146) with its inverse in Eq. (2.147). Applying that result we have

b(x) = 2μ(2μx)e^{−2μx}    for x ≥ 0        (4.12)
We may now calculate the mean and variance of this two-stage system in one of three possible ways: by arguing on the basis of the structure in Figure 4.3; by using the moment-generating properties of B*(s); or by direct calculation

* As an example of a two-stage service facility in which only one stage may be active at a time, consider a courtroom in a small town. A queue of defendants forms, waiting for trial. The judge tries a case (the first service stage) and then fines the defendant. The second stage consists of paying the fine to the court clerk. However, in this small town, the judge is also the clerk and so he moves over to the clerk's desk, collects the fine, releases the defendant, goes back to his bench, and then accepts the next defendant into "service."
from the density function given in Eq. (4.12). We choose the first of these three methods since it is most straightforward (the reader may verify the other two for his own satisfaction). Since the time spent in service is the sum of two random variables, then it is clear that the expected time in service is the sum of the expectations of each. Thus we have

E[x̃] = 2E[ỹ] = 1/μ

Similarly, since the two random variables being summed are independent, we may, therefore, sum their variances to find the variance of the sum:

σ_b² = σ_h² + σ_h² = 1/2μ²

Note that we have arranged matters such that the mean time in service in the single-stage system of Figure 4.2 and the two-stage system of Figure 4.3 is the same. We accomplished this by speeding up each of the two service stages by a factor of 2. Note further that the variance of the two-stage system is one-half the variance of the one-stage system.
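These two moments are easy to spot-check by Monte Carlo sampling of the two-stage facility (a Python sketch; the value of μ and the sample size are illustrative):

```python
import random

def sample_two_stage(mu, n=200_000, seed=7):
    """Total service times from the two-stage Erlangian server of
    Figure 4.3: each stage is exponential with rate 2*mu, and the
    total is the sum of the two independent stage times."""
    rng = random.Random(seed)
    return [rng.expovariate(2 * mu) + rng.expovariate(2 * mu)
            for _ in range(n)]

mu = 1.0                                  # illustrative rate
xs = sample_two_stage(mu)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# Expect mean near 1/mu = 1.0 and variance near 1/(2*mu**2) = 0.5
```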
The previous paragraph introduced the notion of a two-stage service facility but we have yet to discuss the crucial point. Let us consider the state variable for a queueing system with Poisson arrivals and a two-stage exponential server as given in Figure 4.3. As always, as part of our state description, we must record the number of customers waiting in the queue. In addition we must supply sufficient information about the service facility so as to summarize the relevant past history. Owing to the memoryless property of the exponential distribution it is enough to indicate which of the following three possible situations may be found within the service facility: either both stages are idle (indicating an empty service facility); or the first stage is busy and the second stage is idle; or the first stage is idle and the second stage is busy. This service-facility state information may be supplied by identifying the stage of service in which the customer may be found. Our state description then becomes a two-dimensional vector that specifies the number of customers in queue and the number of stages yet to be completed by our customer in service. The time this customer has already spent in his current stage of service is irrelevant in calculating the future behavior of the system. Once again we have a Markov process with a discrete (two-dimensional) state space!
Figure 4.4  The r-stage Erlangian server E_r.

The method generalizes and so now we consider the case in which we provide an r-stage service facility, as shown in Figure 4.4. In this system, of course, when a customer departs by exiting from the right side of the oval service facility a new customer may then enter from the left side and proceed one stage at a time through the sequence of r stages. Upon his departure from
the rth stage a new customer again may then enter, and so on. The time that he spends in the ith stage is drawn from the density function

h(y) = rμe^{−rμy}    for y ≥ 0        (4.13)

The total time that a customer spends in this service facility is the sum of r independent identically distributed random variables, each chosen from the distribution given in Eq. (4.13). We have the following expectation and variance associated with each stage:

E[ỹ] = 1/rμ,    σ_ỹ² = (1/rμ)²
It should be clear to the reader that we have chosen each stage in this system to have a service rate equal to rμ in order that the mean service time remain constant:

E[x̃] = r(1/rμ) = 1/μ

Similarly, since the stage times are independent we may add the variances to obtain

σ_b² = r(1/rμ)² = 1/rμ²
Also, we observe that the coefficient of variation [see Eq. (II.23)] is

C_b = 1/√r        (4.14)
Once again we wish to solve for the pdf of the service time. This we do by generalizing the notions leading up to Eq. (4.11) to obtain

B*(s) = [rμ/(s + rμ)]^r        (4.15)
Figure 4.5  The family of r-stage Erlangian distributions E_r.
Equation (4.15) is easily inverted as earlier to give

b(x) = rμ(rμx)^{r−1}e^{−rμx}/(r − 1)!    for x ≥ 0        (4.16)
This we recognize as the Erlangian distribution given in Eq. (2.147). We have carefully adjusted the mean of this density function to be independent of r. In order to obtain an indication of its width we must examine the standard deviation as given by

σ_b = (1/√r)(1/μ)

Thus we see that the standard deviation for the r-stage Erlangian distribution is 1/√r times the standard deviation for the single stage. It should be clear to the sophisticated reader that as r increases, the density function given by Eq. (4.16) must approach that of the normal or Gaussian distribution due to the central limit theorem. This is indeed true, but we gain more in Eq. (4.16) by specifying the actual sequence of distributions as r increases to show the fashion in which the limit is approached. In Figure 4.5 we show the family of r-stage Erlangian distributions (compare with Figure 2.10). From this figure we observe that the mean holds constant as the width or standard deviation of the density shrinks by 1/√r. Below, we show that the limit (as r goes to infinity) for this density function must, in fact, be a unit impulse function
(see Appendix I) at the point x = 1/μ; this implies that the time spent in an infinite-stage Erlangian service facility approaches a constant with probability 1 (this constant, of course, equals the mean 1/μ). We see further that the peak of the family shown moves to the right in a regular fashion. To calculate the location of the peak, we differentiate the density function as given in Eq. (4.16) and set this derivative equal to zero to obtain

db(x)/dx = [(rμ)²(rμx)^{r−2}/(r − 1)!][(r − 1) − rμx]e^{−rμx} = 0        (4.17)

or

(r − 1) = rμx

and so we have

x_peak = [(r − 1)/r](1/μ)

Thus we see that the location of the peak moves rather quickly toward its final location at 1/μ.
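The peak location x_peak = [(r − 1)/r](1/μ) is easy to confirm numerically by evaluating Eq. (4.16) on a fine grid (the values of r and μ below are illustrative):

```python
from math import exp, factorial

def erlang_pdf(x, r, mu):
    """The r-stage Erlangian density b(x) of Eq. (4.16)."""
    return r * mu * (r * mu * x) ** (r - 1) * exp(-r * mu * x) / factorial(r - 1)

r, mu = 5, 1.0                              # illustrative values
xs = [i / 10_000 for i in range(1, 30_000)]
x_star = max(xs, key=lambda x: erlang_pdf(x, r, mu))
# The maximizer should sit at (r - 1)/(r*mu) = 0.8
```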
We now show that the limiting distribution is, in fact, a unit impulse by considering the limit of the Laplace transform given in Eq. (4.15):

lim_{r→∞} B*(s) = lim_{r→∞} [rμ/(s + rμ)]^r
                = lim_{r→∞} [1/(1 + s/rμ)]^r

lim_{r→∞} B*(s) = e^{−s/μ}        (4.18)

We recognize the inverse transform of this limiting distribution from entry 3 in Table I.4 of Appendix I; it is merely a unit impulse located at x = 1/μ.
Thus the family of Erlangian distributions varies over a fairly broad
range; as such, it is extremely useful for approximating empirical (and even
theoretical) distributions. For example, if one had measured a service-time
operation and had sufficient data to give acceptable estimates of its mean and
variance only, then one could select one member of this two-parameter
family such that 1/\mu matched the mean and 1/r\mu^2 matched the variance; this
would then be a method for approximating B(x) in a way that permits
solution of the queueing system (as we shall see below). If the measured
coefficient of variation exceeds unity, we see from Eq. (4.14) that this
procedure fails, and we must use the hyperexponential distribution described
later or some other distribution.
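The moment-matching recipe just described can be sketched in a few lines. Given a measured mean m and variance v with v ≤ m² (that is, C_b ≤ 1), we set 1/μ = m and choose r so that 1/rμ² ≈ v; since r must be an integer, rounding m²/v is a natural choice (the rounding rule is our own convention, not from the text):

```python
def fit_erlang(m, v):
    """Match an r-stage Erlangian (mean 1/mu, variance 1/(r*mu**2)) to a
    measured mean m and variance v.  Requires v <= m**2, i.e. C_b <= 1."""
    if v > m * m:
        raise ValueError("coefficient of variation exceeds 1; use hyperexponential")
    r = max(1, round(m * m / v))   # from 1/(r*mu**2) = v  =>  r = m**2/v, rounded
    mu = 1.0 / m                   # from 1/mu = m
    return r, mu
```

For example, a service time measured to have mean 2 and variance 4/3 is matched by r = 3, μ = 1/2; with v = m² the fit collapses to the exponential (r = 1).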
It is clear for each member of this family of density functions that we may
describe the state of the service facility by merely giving the number of stages
yet to be completed by a customer in service. We denote the r-stage Erlangian
126 MARKOVIAN QUEUES IN EQUILIBRIUM
distribution by the symbol E_r (not to be confused with the notation for the
state of a random process). Since our state variable is discrete, we are in a
position to analyze the queueing system* M/E_r/1. This we do in the following
section. Moreover, we will use the same technique in Section 4.4 to decompose
the interarrival time distribution A(t) into an r-stage Erlangian distribution.
Note in these next two sections that we neurotically require at least one of
our distributions to be a pure exponential (this is also true for Chapters 5
and 6).
4.3. THE QUEUE M/E_r/1
Here we consider the system for which

a(t) = \lambda e^{-\lambda t}   t \ge 0

b(x) = \frac{r\mu(r\mu x)^{r-1}e^{-r\mu x}}{(r-1)!}   x \ge 0
Since in addition to specifying the number of customers in the system (as in
Chapter 3), we must also specify the number of stages remaining in the service
facility for the man in service, it behooves us to represent each customer in
the queue as possessing r stages of service yet to be completed for him. Thus
we agree to take the state variable as the total number of service stages yet to
be completed by all customers in the system at the time the state is described.†
In particular, if we consider the state at a time when the system contains k
customers and when the ith stage of service contains the customer in service,
we then have that the number of stages contained in the total system is

j \triangleq number of stages left in total system
  = (k - 1)r + (r - i + 1)

Thus

j = rk - i + 1   (4.19)
As usual, p_k is defined as the equilibrium probability for the number of
customers in the system; we further define

P_j \triangleq P[j stages in system]   (4.20)

The relationship between customers and stages allows us to write

p_k = \sum_{j=(k-1)r+1}^{kr} P_j   k = 1, 2, 3, \ldots
* Clearly this is a special case of the system M/G/1, which we will analyze in Chapter 5
using the imbedded Markov chain approach.
† Note that this converts our proposed two-dimensional state vector into a one-dimensional
description.
Figure 4.6  State-transition-rate diagram for number of stages: M/E_r/1.
And now for the beauty of Erlang's approach: we may represent the state-
transition-rate diagram for stages in our system as shown in Figure 4.6.
Focusing on state E_j we see that it is entered from below by a state which is r
positions to its left and also entered from above by state E_{j+1}; the former
transition is due to the arrival of r new stages when a new customer enters,
and the latter is due to the completion of one stage within the r-stage service
facility. Furthermore, we may leave state E_j at a rate \lambda due to an arrival and
at a rate r\mu due to a service completion. Of course, we have special boundary
conditions for states E_0, E_1, \ldots, E_{r-1}. In order to handle the boundary
situation simply, let us agree, as in Chapter 3, that state probabilities with
negative subscripts are in fact zero. We thus define

P_j = 0   j < 0   (4.21)
We may now write down the system state equations immediately by using
our flow conservation inspection method. (Note that we are writing the
forward equations in equilibrium.) Thus we have

\lambda P_0 = r\mu P_1   (4.22)

(\lambda + r\mu)P_j = \lambda P_{j-r} + r\mu P_{j+1}   j = 1, 2, \ldots   (4.23)
Let us now use our "familiar" method of solving difference equations,
namely the z-transform. Thus we define

P(z) = \sum_{j=0}^{\infty} P_j z^j
As usual, we multiply the jth equation given in Eq. (4.23) by z^j and then sum
over all applicable j. This yields

\sum_{j=1}^{\infty}(\lambda + r\mu)P_j z^j = \sum_{j=1}^{\infty}\lambda P_{j-r}z^j + \sum_{j=1}^{\infty}r\mu P_{j+1}z^j

Recognizing P(z), we then have

(\lambda + r\mu)[P(z) - P_0] = \lambda z^r P(z) + \frac{r\mu}{z}[P(z) - P_0 - P_1 z]

The first term on the right-hand side of this last equation is obtained by
taking special note of Eq. (4.21). Simplifying, we have

P(z) = \frac{P_0[\lambda + r\mu - (r\mu/z)] - r\mu P_1}{\lambda + r\mu - \lambda z^r - (r\mu/z)}

We may now use Eq. (4.22) to simplify this last further:

P(z) = \frac{r\mu P_0[1 - (1/z)]}{\lambda + r\mu - \lambda z^r - (r\mu/z)}

yielding finally

P(z) = \frac{r\mu P_0(1 - z)}{r\mu + \lambda z^{r+1} - (\lambda + r\mu)z}   (4.24)
We may evaluate the constant P_0 by recognizing that P(1) = 1 and using
L'Hospital's rule; thus

P(1) = 1 = \frac{r\mu P_0}{r\mu - \lambda r}

giving (observe that P_0 = p_0)

P_0 = 1 - \frac{\lambda}{\mu}   (4.25)
In this system the arrival rate is \lambda and the average service time is held fixed at
1/\mu, independent of r. Thus we recognize that our utilization factor is

\rho \triangleq \lambda\bar{x} = \frac{\lambda}{\mu}

Substituting back into Eq. (4.24) we find

P(z) = \frac{r\mu(1 - \rho)(1 - z)}{r\mu + \lambda z^{r+1} - (\lambda + r\mu)z}   (4.26)
We must now invert this z-transform to find the distribution of the number of
stages in the system.
The case r = 1, which is clearly the system M/M/1, presents no difficulties;
this case yields

P(z) = \frac{\mu(1 - \rho)(1 - z)}{\mu + \lambda z^2 - (\lambda + \mu)z}
     = \frac{(1 - \rho)(1 - z)}{1 + \rho z^2 - (1 + \rho)z}
The denominator factors into (1 - z)(1 - \rho z) and so, canceling the common
term (1 - z), we obtain

P(z) = \frac{1 - \rho}{1 - \rho z}
We recognize this function as entry 6 in Table I.2 of Appendix I, and so we
have immediately

P_k = (1 - \rho)\rho^k   k = 0, 1, 2, \ldots   (4.27)

Now in the case r = 1 it is clear that p_k = P_k, and so Eq. (4.27) gives us the
distribution of the number of customers in the system M/M/1, as we had
seen previously in Eq. (3.23).
For arbitrary values of r things are a bit more complex. The usual ap-
proach to inverting a z-transform such as that given in Eq. (4.26) is to make
a partial fraction expansion and then to invert each term by inspection; let
us follow this approach. Before we can carry out this expansion we must
identify the r + 1 zeroes of the denominator polynomial. Unity is easily
seen to be one such. The denominator may therefore be written as
(1 - z)[r\mu - \lambda(z + z^2 + \cdots + z^r)], where the remaining r zeroes (which
we choose to denote by z_1, z_2, \ldots, z_r) are the roots of the bracketed expression.
Once we have found these roots* (which are unique) we may then write the
denominator as r\mu(1 - z)(1 - z/z_1)\cdots(1 - z/z_r). Substituting this back into
Eq. (4.26) we find

P(z) = \frac{1 - \rho}{(1 - z/z_1)(1 - z/z_2)\cdots(1 - z/z_r)}
Our partial fraction expansion now yields

P(z) = (1 - \rho)\sum_{i=1}^{r}\frac{A_i}{1 - z/z_i}   (4.28)

where

A_i = \prod_{j \ne i}\frac{1}{1 - z_i/z_j}   i = 1, 2, \ldots, r

We may now invert Eq. (4.28) by inspection (from entry 6 in Table I.2) to
obtain the final solution for the distribution of the number of stages in the
system, namely,

P_j = (1 - \rho)\sum_{i=1}^{r}A_i\left(\frac{1}{z_i}\right)^j   j = 0, 1, 2, \ldots   (4.29)
* Many of the analytic problems in queueing theory reduce to the (difficult) task of locating
the roots of a function.
and where as before P_0 = 1 - \rho. Thus we see for the system M/E_r/1 that the
distribution of the number of stages in the system is a weighted sum of
geometric distributions. The waiting-time distribution may be calculated
using the methods developed later in Chapter 5.
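As a numerical sanity check on all of this, one need not find the roots z_i at all: starting from P_0 = 1 − ρ and Eq. (4.22), the balance equations (4.23) generate every P_j directly, and the resulting distribution must sum to one and reproduce the Pollaczek-Khinchin mean value (a Chapter 5 result; for Erlangian service C_b² = 1/r, giving N̄ = ρ + ρ²(1 + 1/r)/2(1 − ρ)). The parameter choices in this sketch are illustrative:

```python
lam, mu, r = 1.0, 2.0, 2          # illustrative parameters; rho = lam/mu = 0.5
rho = lam / mu

J = 400                           # number of stage states carried
P = [0.0] * (J + 1)
P[0] = 1.0 - rho                  # Eq. (4.25)
P[1] = lam * P[0] / (r * mu)      # Eq. (4.22)
for j in range(1, J):
    # Eq. (4.23): (lam + r*mu) P_j = lam P_{j-r} + r*mu P_{j+1}
    back = P[j - r] if j >= r else 0.0
    P[j + 1] = ((lam + r * mu) * P[j] - lam * back) / (r * mu)

total = sum(P)
# customers from stages: p_0 = P_0, p_k = P_{(k-1)r+1} + ... + P_{kr}
p = [P[0]] + [sum(P[(k - 1) * r + 1: k * r + 1]) for k in range(1, J // r)]
mean_n = sum(k * pk for k, pk in enumerate(p))
pk_mean = rho + rho * rho * (1 + 1.0 / r) / (2 * (1 - rho))   # Pollaczek-Khinchin mean
```

With these values the recursion yields a mean of 0.875 customers, exactly the Pollaczek-Khinchin value.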
4.4. THE QUEUE E_r/M/1

Let us now consider the queueing system E_r/M/1 for which

a(t) = \frac{r\lambda(r\lambda t)^{r-1}e^{-r\lambda t}}{(r-1)!}   t \ge 0   (4.30)

b(x) = \mu e^{-\mu x}   x \ge 0   (4.31)
Here the roles of interarrival time and service time are interchanged from those
of the previous section; in many ways these two systems are duals of each
other. The system operates as follows: given that an arrival has just occurred,
then one immediately introduces a new "arriving" customer into an r-stage
Erlangian facility much like that in Figure 4.4; however, rather than consider
this to be a service facility we consider it to be an "arriving" facility. When this
arriving customer is inserted from the left side he must then pass through r
exponential stages, each with parameter r\lambda. It is clear that the pdf of the time
spent in the arriving facility will be given by Eq. (4.30). When he exits from
the right side of the arriving facility he is then said to "arrive" to the queueing
system E_r/M/1. Immediately upon his arrival, a new customer (taken from an
infinite pool of available customers) is inserted into the left side of the arriving
box and the process is repeated. Once having arrived, the customer joins the
queue, waits for service, and is then served according to the distribution
given in Eq. (4.31). It is clear that an appropriate state description for this
system is to specify not only the number of customers in the system, but also
to identify which stage in the arriving facility the arriving customer now
occupies. We will consider that each customer who has already arrived (but
not yet departed) is contributing r stages of "arrival"; in addition we will
count the number of stages so far completed by the arriving customer as a
further contribution to the number of arrival stages in the system. Thus our
state description will consist of the total number of stages of arrival currently
in the system; when we find k customers in the system and when our arriving
customer is in the ith stage of arrival (1 \le i \le r), then the total number of
stages of arrival in the system is given by

j = rk + i - 1
Once again let us use the definition given in Eq. (4.20) so that P_j is defined
to be the probability of j arrival stages in the system; as always p_k will be the
Figure 4.7  State-transition-rate diagram for number of stages: E_r/M/1.
equilibrium probability for the number of customers in the system, and clearly
they are related through

p_k = \sum_{j=rk}^{r(k+1)-1} P_j
The system we have defined is an irreducible ergodic Markov chain with its
state-transition-rate diagram for stages given in Figure 4.7. Note that when a
customer departs from service, he "removes" r stages of "arrival" from the
system. Using our inspection method, we may write down the equilibrium
equations as

r\lambda P_0 = \mu P_r   (4.32)

r\lambda P_j = r\lambda P_{j-1} + \mu P_{j+r}   j = 1, 2, \ldots, r - 1   (4.33)

(r\lambda + \mu)P_j = r\lambda P_{j-1} + \mu P_{j+r}   j \ge r   (4.34)
Again we define the z-transform for these probabilities as

P(z) = \sum_{j=0}^{\infty} P_j z^j
Let us now apply our transform method to the equilibrium equations.
Equations (4.33) and (4.34) are almost identical except that the former is
missing the term \mu P_j; consequently, let us operate upon the equations in the
range j \ge 1, adding and subtracting the missing terms as appropriate. Thus
we obtain

\sum_{j=1}^{\infty}(\mu + r\lambda)P_j z^j - \sum_{j=1}^{r-1}\mu P_j z^j = \sum_{j=1}^{\infty}r\lambda P_{j-1}z^j + \sum_{j=1}^{\infty}\mu P_{j+r}z^j

Identifying the transform in this last equation we have

(\mu + r\lambda)[P(z) - P_0] - \sum_{j=1}^{r-1}\mu P_j z^j = r\lambda zP(z) + \frac{\mu}{z^r}\Big[P(z) - \sum_{j=0}^{r}P_j z^j\Big]

We may now use Eq. (4.32) to eliminate the term P_r and then finally solve
for our transform to obtain

P(z) = \frac{(1 - z^r)\sum_{j=0}^{r-1}P_j z^j}{r\rho z^{r+1} - (1 + r\rho)z^r + 1}   (4.35)
where as always we have defined \rho = \lambda\bar{x} = \lambda/\mu. We must now study the
poles (zeroes of the denominator) for this function. The denominator
polynomial has r + 1 zeroes, of which unity is one such [the factor (1 - z) is
almost always present in the denominator]. Of the remaining r zeroes it can
be shown (see Exercise 4.10) that exactly r - 1 of them lie in the range
|z| < 1 and the last, which we shall denote by z_0, is such that |z_0| > 1. We are
still faced with the numerator summation that contains the unknown prob-
abilities P_j; we must now appeal to the second footnote in step 5 of our z-
transform procedure (see Chapter 2, pp. 74-75), which takes advantage of
the observation that the z-transform of a probability distribution must be
analytic in the range |z| < 1, in the following way. Since P(z) must be bounded
in the range |z| < 1 [see Eq. (II.28)] and since the denominator has r - 1
zeroes in this range, then certainly the numerator must also have zeroes at
the same r - 1 points. The numerator consists of two factors: the first of the
form (1 - z^r), all of whose zeroes have absolute value equal to unity; and the
second in the form of a summation. Consequently, the "compensating"
zeroes in the numerator must come from the summation itself (the summation
is a polynomial of degree r - 1 and therefore has exactly r - 1 zeroes). These
observations, therefore, permit us to equate the numerator sum to the
denominator (after its two roots at z = 1 and z = z_0 are factored out) as
follows:
\frac{r\rho z^{r+1} - (1 + r\rho)z^r + 1}{(1 - z)(1 - z/z_0)} = K\sum_{j=0}^{r-1}P_j z^j
where K is a constant to be evaluated below. This computation permits us to
rewrite Eq. (4.35) as

P(z) = \frac{1 - z^r}{K(1 - z)(1 - z/z_0)}

But since P(1) = 1 we find that

K = \frac{r}{1 - 1/z_0}

and so we have

P(z) = \frac{(1 - z^r)(1 - 1/z_0)}{r(1 - z)(1 - z/z_0)}   (4.36)
We now know all there is to know about the poles and zeroes of P(z); we are,
therefore, in a position to make a partial fraction expansion so that we may
invert on z. Unfortunately, P(z) as expressed in Eq. (4.36) is not in the proper
form for the partial fraction expansion, since the numerator degree is not
less than the denominator degree. However, we will take advantage of
property 8 in Table I.1 of Appendix I, which states that if F(z) \Leftrightarrow f_n then
z^r F(z) \Leftrightarrow f_{n-r}, where we recall that the notation \Leftrightarrow indicates a transform
pair. With this observation, then, we carry out the following partial fraction
expansion:

P(z) = (1 - z^r)\left[\frac{1/r}{1 - z} - \frac{1/rz_0}{1 - z/z_0}\right]
If we denote the inverse transform of the quantity in square brackets by f_j,
then it is clear that the inverse transform for P(z) must be

P_j = f_j - f_{j-r}   (4.37)

By inspection we see that

f_j = \begin{cases} \dfrac{1}{r}\left(1 - z_0^{-(j+1)}\right) & j \ge 0 \\ 0 & j < 0 \end{cases}   (4.38)

First we solve for P_j in the range j \ge r; from Eqs. (4.37) and (4.38) we,
therefore, have

P_j = \frac{1}{r}\,z_0^{r-j-1}\left(1 - z_0^{-r}\right)   (4.39)
We may simplify this last expression by recognizing that the denominator of
Eq. (4.35) must equal zero for z = z_0; this observation leads to the equality
r\rho(z_0 - 1) = 1 - z_0^{-r}, and so Eq. (4.39) becomes

P_j = \rho(z_0 - 1)z_0^{r-j-1}   (4.40)

On the other hand, in the range 0 \le j < r we have that f_{j-r} = 0, and so P_j
is easily found for the rest of our range. Combining this and Eq. (4.40) we
finally obtain the distribution for the number of arrival stages in our system:

P_j = \begin{cases} \dfrac{1}{r}\left(1 - z_0^{-(j+1)}\right) & 0 \le j \le r - 1 \\ \rho(z_0 - 1)z_0^{r-j-1} & j \ge r \end{cases}   (4.41)

Using our earlier relationship between p_k and P_j we find (the reader should
check this algebra for himself) that the distribution of the number of customers
in the system is given by

p_k = \begin{cases} 1 - \rho & k = 0 \\ \rho\left(z_0^{\,r} - 1\right)z_0^{-rk} & k > 0 \end{cases}   (4.42)

We note that this distribution for number of customers is geometric with a
slightly modified first term. We could at this point calculate the waiting-time
distribution, but we will postpone that until we study the system G/M/1 in
Chapter 6.
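The only nontrivial computational step in Eqs. (4.41)-(4.42) is locating z_0. For the parameters used here the denominator of Eq. (4.35) is negative just to the right of z = 1 (its derivative there is r(ρ − 1) < 0) and grows without bound, so the root with |z_0| > 1 can be found by a simple real-axis bisection. A sketch (the choice r = 2, ρ = 1/2 is illustrative; it makes the denominator z³ − 2z² + 1, whose relevant root is the golden ratio):

```python
def find_z0(r, rho, tol=1e-12):
    """Real root z0 > 1 of D(z) = r*rho*z**(r+1) - (1 + r*rho)*z**r + 1."""
    D = lambda z: r * rho * z ** (r + 1) - (1 + r * rho) * z ** r + 1
    a, b = 1.0 + 1e-9, 2.0
    while D(b) <= 0:                 # expand until the sign change is bracketed
        b *= 2.0
    while b - a > tol:               # bisection, keeping D(a) <= 0 < D(b)
        m = 0.5 * (a + b)
        a, b = (m, b) if D(m) <= 0 else (a, m)
    return 0.5 * (a + b)

r, rho = 2, 0.5
z0 = find_z0(r, rho)
# Eq. (4.42): distribution of the number of customers in E_r/M/1
p = [1 - rho] + [rho * (z0 ** r - 1) * z0 ** (-r * k) for k in range(1, 200)]
```

For these values z_0 = (1 + √5)/2, and the probabilities of Eq. (4.42) sum to one, as they must.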
4.5. BULK ARRIVAL SYSTEMS
In Section 4.3 we studied the system M/E_r/1 in which each customer had
to pass through r stages of service to complete his total service. The key
to the solution of that system was to count the number of service stages
remaining in the system, each customer contributing r stages to that number
upon his arrival into the system. We may look at the system from another
point of view in which we consider each "customer" arrival to be in reality
the arrival of r customers. Each of these r customers will require only a
single stage of service (that is, the service time distribution is an exponential*).
Clearly, these two points of view define identical systems: the former is the
system M/E_r/1 and the latter is an M/M/1 system with "bulk" arrivals of
size r. In fact, if we were to draw the state-transition-rate diagram for the
number of customers in the system, then the bulk arrival system would lead
to the diagram given in Figure 4.6; of course, that diagram was for the
number of stages in the system M/E_r/1. As a consequence, we see that the
generating function for the number of customers in the bulk arrival system
must be given by Eq. (4.26) and that the distribution of the number of customers
in the system is given by Eq. (4.29), since we are equating stages in the
original system to customers in the current system.
Since we are considering bulk arrival systems, we may as well be more
generous and permit other than a fixed-size bulk to arrive at each (Poisson)
arrival instant. What we have in mind is to permit a bulk (or group) at each
arrival instant to be of random size, where

g_i \triangleq P[bulk size is i]   (4.43)

(As an example, one may think of random-size families arriving at the doctor's
office for individual vaccinations.) As usual, we will assume that the arrival
rate (of bulks) is \lambda. Taking the number of customers in the system as our
state variable, we have the state-transition-rate diagram of Figure 4.8. In
this figure we have shown details only for state E_k for clarity. Thus we find
that we can enter E_k from any state below it (since we permit bulks of any
size to arrive); similarly, we can move from state E_k to any state above it, the
net rate at which we leave E_k being \lambda g_1 + \lambda g_2 + \cdots = \lambda\sum_{i=1}^{\infty}g_i = \lambda. If,
as usual, we define p_k to be the equilibrium probability for the number of
customers in the system, then we may write down the following equilibrium
* To make the correspondence complete, the parameter for this exponential distribution
should indeed be r\mu. However, in the following development, we will choose the parameter
merely to be \mu and recall this fact whenever we compare the bulk arrival system to the
system M/E_r/1.
Figure 4.8  The bulk arrival state-transition-rate diagram.
equations using our inspection method:

(\lambda + \mu)p_k = \mu p_{k+1} + \sum_{i=0}^{k-1}p_i\lambda g_{k-i}   k \ge 1   (4.44)

\lambda p_0 = \mu p_1   (4.45)
Equation (4.44) has equated the rate out of state E_k (the left-hand side) to
the rate into that state, where the first term refers to a service completion
and the second term (the sum) refers to all possible ways that arrivals may
occur and drive us into state E_k from below. Equation (4.45) is the single
boundary equation for the state E_0. As usual, we shall solve these equations
using the method of z-transforms; thus we have
(\lambda + \mu)\sum_{k=1}^{\infty}p_k z^k = \frac{\mu}{z}\sum_{k=1}^{\infty}p_{k+1}z^{k+1} + \sum_{k=1}^{\infty}\sum_{i=0}^{k-1}p_i\lambda g_{k-i}z^k   (4.46)

We may interchange the order of summation for the double sum such that

\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}p_i g_{k-i}z^k = \sum_{i=0}^{\infty}p_i z^i\sum_{k=i+1}^{\infty}g_{k-i}z^{k-i}

and, regrouping the terms, we have

\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}p_i\lambda g_{k-i}z^k = \lambda\Big(\sum_{i=0}^{\infty}p_i z^i\Big)\Big(\sum_{j=1}^{\infty}g_j z^j\Big)   (4.47)
The z-transform we are seeking is

P(z) \triangleq \sum_{k=0}^{\infty}p_k z^k

and we see from Eq. (4.47) that we should define the z-transform for the
distribution of bulk size as*

G(z) \triangleq \sum_{k=1}^{\infty}g_k z^k   (4.48)
* We could just as well have permitted g_0 > 0, which would then have allowed zero-size
bulks to arrive, and this would have put self-loops in our state-transition diagram corre-
sponding to null arrivals. Had we done so, then the definition for G(z) would have ranged
from zero to infinity, and everything we say below applies for this case as well.
Extracting these transforms from Eq. (4.46) we have

(\lambda + \mu)[P(z) - p_0] = \frac{\mu}{z}[P(z) - p_0 - p_1 z] + \lambda P(z)G(z)

Note that the product P(z)G(z) is a manifestation of property 11 in Table I.1
of Appendix I, since we have in effect formed the transform of the convolution
of the sequence {p_k} with that of {g_k} in Eq. (4.44). Applying the boundary
equation (4.45) and simplifying, we have

P(z) = \frac{\mu p_0(1 - z)}{\mu(1 - z) - \lambda z[1 - G(z)]}

To eliminate p_0 we use P(1) = 1; direct application yields the indeterminate
form 0/0 and so we must use L'Hospital's rule, which gives p_0 = 1 - \rho.
We obtain

P(z) = \frac{\mu(1 - \rho)(1 - z)}{\mu(1 - z) - \lambda z[1 - G(z)]}   (4.49)
This is the final solution for the transform of the number of customers in the bulk
arrival M/M/1 system. Once the sequence {g_k} is given, we may then face the
problem of inverting this transform. One may calculate the mean and variance
of the number of customers in the system in terms of the system parameters
directly from P(z) (see Exercise 4.8). Let us note that the appropriate defini-
tion for the utilization factor \rho must be carefully made here. Recall that \rho
is the average arrival rate of customers times the average service time. In
our case, the average arrival rate of customers is the product of the average
arrival rate of bulks and the average bulk size. From Eq. (II.29) we have
immediately that the average bulk size must be G'(1). Thus we naturally
conclude that the appropriate definition for \rho in this system is

\rho = \frac{\lambda G'(1)}{\mu}   (4.50)

It is instructive to consider the special case where all bulk sizes are the same,
namely,

g_k = \begin{cases} 1 & k = r \\ 0 & k \ne r \end{cases}

Clearly, this is the simplified bulk system discussed in the beginning of this
section; it corresponds exactly to the system M/E_r/1 (where we must make the
minor modification as indicated in our earlier footnote, that \mu must now be
replaced by r\mu). We find immediately that G(z) = z^r, and after substituting
this into our solution Eq. (4.49) we find that it corresponds exactly to our
earlier solution Eq. (4.26) as, of course, it must.
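This reduction can be cross-checked numerically: evaluate Eq. (4.49) with G(z) = z^r and μ replaced by rμ, and compare it with Eq. (4.26) at a handful of points inside the unit disk (avoiding the removable singularity at z = 1). The parameter values below are illustrative only:

```python
lam, mu, r = 1.0, 2.0, 3          # illustrative; rho = lam/mu = 0.5
rho = lam / mu

def P_bulk(z):
    # Eq. (4.49) with bulk-size transform G(z) = z**r and parameter r*mu
    G = z ** r
    return r * mu * (1 - rho) * (1 - z) / (r * mu * (1 - z) - lam * z * (1 - G))

def P_erlang(z):
    # Eq. (4.26), the generating function found for M/E_r/1
    return r * mu * (1 - rho) * (1 - z) / (r * mu + lam * z ** (r + 1) - (lam + r * mu) * z)

pts = [0.1, 0.25, 0.5, 0.9, -0.3]
diffs = [abs(P_bulk(z) - P_erlang(z)) for z in pts]
```

The two transforms agree to machine precision, and P(0) recovers p_0 = 1 − ρ.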
4.6. BULK SERVICE SYSTEMS
In Section 4.4 we studied the system E_r/M/1 in which arrivals were con-
sidered to have passed through r stages of "arrival." We found it expedient
in that case to take as our state variable the number of "arrival stages" that
were in the system (where each fully arrived customer still in the system
contributed r stages to that count). As we found an analogy between bulk
arrival systems and the Erlangian service systems of Section 4.3, here also we
find an analogy between bulk service systems and the Erlangian arrival systems
studied in Section 4.4. Thus let us consider an M/M/1 system which provides
service to groups of size r. That is, when the server becomes free he will
accept a "bulk" of exactly r customers from the queue and administer service
to them collectively; the service time for this group is drawn from an exponen-
tial distribution with parameter \mu. If, upon becoming free, the server finds
less than r customers in the queue, he then waits until a total of r accumulate
and then accepts them for bulk service, and so on.* Customers arrive from a
simple Poisson process, at a rate \lambda, one at a time. It should be clear to the
reader that this bulk service system and the E_r/M/1 are identical. Were we to
draw the state-transition-rate diagram for the number of customers in the
bulk service system, then we would find exactly the diagram of Figure 4.7
(with the parameter r\lambda replaced by \lambda; we must account for this parameter
change, however, whenever we compare our bulk service system with the
system E_r/M/1). Since the two systems are equivalent, then the solution for
the distribution of the number of customers in the bulk service system must be
given by Eq. (4.41) (since stages in the original system correspond to customers
in the current system).
It certainly seems a waste for our server to remain idle when less than r
customers are available for bulk service. Therefore let us now consider a
system in which the server will, upon becoming free, accept r customers for
bulk service if they are available, or if not, will accept less than r if any are
available. We take the number of customers in the system as our state
variable and find Figure 4.9 to be the state-transition-rate diagram. In this
figure we see that all states (except for state E_0) behave in the same way in
that they are entered from their left-hand neighbor by an arrival, and from
their neighbor r units to the right by a group departure, and they are exited
by either an arrival or a group departure; on the other hand, state E_0 can be
entered from any one of the r states immediately to its right and can be
exited only by an arrival. These considerations lead directly to the following
set of equations for the equilibrium probability p_k of finding k customers in
* For example, the shared taxis in Israel do not (usually) depart until they have collected a
full load of customers, all of whom receive service simultaneously.
the system:

(\lambda + \mu)p_k = \mu p_{k+r} + \lambda p_{k-1}   k \ge 1

\lambda p_0 = \mu(p_1 + p_2 + \cdots + p_r)   (4.51)

Figure 4.9  The bulk service state-transition-rate diagram.

Let us now apply our z-transform method; as usual we define

P(z) = \sum_{k=0}^{\infty}p_k z^k

We then multiply by z^k, sum, and then identify P(z) to obtain in the usual way

(\lambda + \mu)[P(z) - p_0] = \frac{\mu}{z^r}\Big[P(z) - \sum_{k=0}^{r}p_k z^k\Big] + \lambda zP(z)

Solving for P(z) we have

P(z) = \frac{\mu\sum_{k=0}^{r}p_k z^k - (\lambda + \mu)p_0 z^r}{\lambda z^{r+1} - (\lambda + \mu)z^r + \mu}

From our boundary Eq. (4.51) we see that the negative term in the numerator
of this last equation may be written as

-(\lambda + \mu)p_0 z^r = -z^r(\lambda p_0 + \mu p_0) = -\mu z^r\sum_{k=0}^{r}p_k

and so we have

P(z) = \frac{\sum_{k=0}^{r-1}p_k(z^k - z^r)}{r\rho z^{r+1} - (1 + r\rho)z^r + 1}   (4.52)

where we have defined \rho = \lambda/r\mu since, for this system, up to r customers may
be served simultaneously in an interval whose average length is 1/\mu sec. We
immediately observe that the denominator of this last equation is precisely
the same as in Eq. (4.35) from our study of the system E_r/M/1. Thus we may
give the same arguments regarding the location of the denominator roots; in
particular, of the r + 1 denominator zeroes, exactly one will occur at the
point z = 1, exactly r - 1 will be such that |z| < 1, and only one will be
found, which we will denote by z_0, such that |z_0| > 1. Now let us study the
numerator of Eq. (4.52). We note that this is a polynomial in z of degree r.
Clearly one root occurs at z = 1. By arguments now familiar to us, P(z)
must remain bounded in the region |z| < 1, and so the r - 1 remaining
zeroes of the numerator must exactly match the r - 1 zeroes of the denomina-
tor for which |z| < 1; as a consequence of this, the two polynomials of degree
r - 1 must be proportional, that is,
K\,\frac{\sum_{k=0}^{r-1}p_k(z^k - z^r)}{1 - z} = \frac{r\rho z^{r+1} - (1 + r\rho)z^r + 1}{(1 - z)(1 - z/z_0)}
Taking advantage of this last equation we may then cancel common factors
in the numerator and denominator of Eq. (4.52) to obtain

P(z) = \frac{1}{K(1 - z/z_0)}

The constant K may be evaluated in the usual way by requiring that P(1) = 1,
which provides the following simple form for our generating function:

P(z) = \frac{1 - 1/z_0}{1 - z/z_0}   (4.53)

This last we may invert by inspection to obtain finally the distribution for the
number of customers in our bulk service system

p_k = (1 - 1/z_0)(1/z_0)^k   k = 0, 1, 2, \ldots   (4.54)
Once again we see the familiar geometric distribution appear in the solution
of our Markovian queueing systems!
4.7. SERIES-PARALLEL STAGES: GENERALIZATIONS
How general is the method of stages studied in Section 4.3 for the system
M/E_r/1 and studied in Section 4.4 for the system E_r/M/1? The Erlangian
distribution is shown in Figure 4.5; recall that we may select its mean by
appropriate choice of \mu and may select a range of standard deviations by
adjusting r. Note, however, that we are restricted to accept a coefficient of
variation that is less than that of the exponential distribution [from Eq.
(4.14) we see that C_b = 1/\sqrt{r}, whereas for r = 1 the exponential gives
C_b = 1], and so in some sense Erlangian random variables are "more regular"
than exponential variables. This situation is certainly less than completely
general.
One direction for generalization would be to remove the restriction that
one of our two basic queueing distributions must be exponential; that is, we
certainly could consider the system E_{r_a}/E_{r_b}/1 in which we have an r_a-stage
Erlangian distribution for the interarrival times and an r_b-stage Erlangian
distribution for the service times.* On the other hand, we could attempt to
generalize by broadening the class of distributions we consider beyond that
of the Erlangian. This we do next.
We wish to find a stage-type arrangement that gives larger coefficients of
variation than the exponential. One might consider a generalization of the
r-stage Erlangian in which we permit each stage to have a different service
rate (say, the ith stage has rate \mu_i). Perhaps this will extend the range of C_b
above unity. In this case we will have, instead of Eq. (4.15), a Laplace trans-
form for the service-time pdf given by

B^*(s) = \left(\frac{\mu_1}{s + \mu_1}\right)\left(\frac{\mu_2}{s + \mu_2}\right)\cdots\left(\frac{\mu_r}{s + \mu_r}\right)   (4.55)
The service time density b(x) will merely be the convolution of r exponential
densities, each with its own parameter \mu_i. The squared coefficient of variation
in this case is easily shown [see Eq. (II.26), Appendix II] to be

C_b^2 = \frac{\sum_i (1/\mu_i)^2}{\left(\sum_i 1/\mu_i\right)^2}

But for real a_i \ge 0, it is always true that \sum_i a_i^2 \le \left(\sum_i a_i\right)^2 since the right-
hand side contains the left-hand side plus the sum of all the nonnegative
cross terms. Choosing a_i = 1/\mu_i, we find that C_b^2 \le 1. Thus, unfortunately,
no generalization to larger coefficients of variation is obtained this way.
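The cross-term argument is easy to spot-check numerically; a small sketch drawing arbitrary rate vectors (the sampling ranges are our own choice) confirms that series stages can never push the coefficient of variation above that of a single exponential:

```python
import random

def cb2_series(mus):
    """Squared coefficient of variation for exponential stages in series,
    stage i with rate mus[i]: variance sum(1/mu_i**2), mean sum(1/mu_i)."""
    var = sum(1.0 / m ** 2 for m in mus)
    mean = sum(1.0 / m for m in mus)
    return var / mean ** 2

random.seed(7)
samples = [[random.uniform(0.1, 10.0) for _ in range(random.randint(1, 8))]
           for _ in range(1000)]
values = [cb2_series(mus) for mus in samples]
```

With equal rates the expression collapses to the Erlangian value 1/r (for example, two equal stages give C_b² = 1/2), and no draw exceeds unity.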
We previously found that sending a customer through an increasing
sequence of faster exponential stages in series tended to reduce the variability
of the service time, and so one might expect that sending him through a
parallel arrangement would increase the variability. This in fact is true. Let
us therefore consider the two-stage parallel service system shown in Figure
4.10. The situation may be contrasted to the service structure shown in
Figure 4.3. In Figure 4.10 an entering customer approaches the large oval
(which represents the service facility) from the left. Upon entry into the
* We consider this shortly.
Figure 4.10  A two-stage parallel server H_2.
facility he will proceed to service stage 1 with probability \alpha_1, or will proceed
to service stage 2 with probability \alpha_2, where \alpha_1 + \alpha_2 = 1. He will then spend
an exponentially distributed interval of time in the ith such stage, whose mean
is 1/\mu_i sec. After that interval the customer departs, and only then is a new
customer allowed into the service facility. It is clear from this description
that the service time pdf will be given by

b(x) = \alpha_1\mu_1 e^{-\mu_1 x} + \alpha_2\mu_2 e^{-\mu_2 x}   x \ge 0

and also we have

B^*(s) = \alpha_1\frac{\mu_1}{s + \mu_1} + \alpha_2\frac{\mu_2}{s + \mu_2}
Of course the more general case with R parallel stages is shown in Figure
4.11. (Contrast this with Figure 4.4.) In this case, as always, at most one
customer at any one time is permitted within the large oval representing the
service facility. Here we assume that

\sum_{i=1}^{R}\alpha_i = 1   (4.56)

Clearly,

b(x) = \sum_{i=1}^{R}\alpha_i\mu_i e^{-\mu_i x}   x \ge 0

and

B^*(s) = \sum_{i=1}^{R}\alpha_i\frac{\mu_i}{s + \mu_i}   (4.57)
The pdf given in Eq. (4.57) is referred to as the hyperexponential distribution
and is denoted by H_R. Hopefully, the coefficient of variation (C_b \triangleq \sigma_b/\bar{x})
is now greater than unity and therefore represents a wider variation than
Figure 4.11  The R-stage parallel server H_R.
that of the exponential. Let us prove this. From Eq. (II.26) we find immed-
iately that

\bar{x} = \sum_{i=1}^{R}\frac{\alpha_i}{\mu_i}

Forming the square of the coefficient of variation we then have

C_b^2 = \frac{\overline{x^2}}{(\bar{x})^2} - 1 = \frac{2\sum_{i=1}^{R}\alpha_i/\mu_i^2}{\left(\sum_{i=1}^{R}\alpha_i/\mu_i\right)^2} - 1   (4.58)
Now, Eq. (II.35), the Cauchy-Schwarz inequality, may also be expressed as
follows (for a_i, b_i real):

\Big(\sum_i a_i b_i\Big)^2 \le \Big(\sum_i a_i^2\Big)\Big(\sum_i b_i^2\Big)   (4.59)
Figure 4.12  State-transition-rate diagram for M/H_2/1.
(This is often referred to as the Cauchy inequality.) If we make the
association a_i = \sqrt{\alpha_i}, b_i = \sqrt{\alpha_i}/\mu_i, then Eq.
(4.59) shows

\left( \sum_{i} \frac{\alpha_i}{\mu_i} \right)^2 \le \left( \sum_{i} \alpha_i \right) \left( \sum_{i} \frac{\alpha_i}{\mu_i^2} \right)

But from Eq. (4.56) the first factor on the right-hand side of this
inequality is just unity; this result along with Eq. (4.58) permits us to
write

C_b \ge 1          (4.60)

which proves the desired result.
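The C_b \ge 1 conclusion is easy to confirm numerically. The following sketch
is our own illustration (the function name and the random parameter ranges are
not from the text): it evaluates Eq. (4.58) for randomly drawn hyperexponential
parameters and checks that the squared coefficient of variation never falls
below unity.

```python
import random

def hyperexp_cv2(alphas, mus):
    """Squared coefficient of variation of an R-stage hyperexponential.

    From the moments of b(x) = sum_i alpha_i mu_i exp(-mu_i x):
    first moment sum_i alpha_i/mu_i, second moment sum_i 2 alpha_i/mu_i**2,
    which gives Eq. (4.58).
    """
    m1 = sum(a / m for a, m in zip(alphas, mus))
    m2 = sum(2 * a / m ** 2 for a, m in zip(alphas, mus))
    return m2 / m1 ** 2 - 1

random.seed(1)
for _ in range(1000):
    R = random.randint(1, 5)
    raw = [random.random() + 1e-6 for _ in range(R)]
    alphas = [w / sum(raw) for w in raw]       # normalized mixing probabilities
    mus = [random.uniform(0.1, 10.0) for _ in range(R)]
    assert hyperexp_cv2(alphas, mus) >= 1.0 - 1e-9   # Eq. (4.60): C_b >= 1
```

For R = 1 the expression collapses to the exponential value C_b^2 = 1, and any
genuine mixing of two or more distinct rates pushes it above unity.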
One might expect that an analysis by the method of stages exists for the
systems M/H_R/1, H_R/M/1, and H_{R_a}/H_{R_b}/1, and this is indeed true.
The reason that the analysis can proceed is that we may take account of the
nonexponential character of the service (or arrival) facility merely by
specifying which stage within the service (or arrival) facility the customer
currently occupies. This information along with a statement regarding the
number of customers in the system creates a Markov chain, which may then be
studied much as was done earlier in this chapter.
For example, the system M/H_2/1 would have the state-transition-rate diagram
shown in Figure 4.12. In this figure the designation k_i implies that the
system contains k customers and that the customer in service is located in
stage i (i = 1, 2). The transitions for higher-numbered states are identical
to the transitions between states 1_i and 2_i.
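A state description such as this can be handed directly to a computer, in the
fashion discussed at the end of this section. The sketch below is our own
construction (the truncation level K, the parameter values, and all names are
illustrative, not from the text): it builds the truncated balance equations
over the states 0 and (k, i) of Figure 4.12, solves them by Gaussian
elimination, and checks the result against the M/G/1 fact that the probability
of an empty system is 1 - \rho with \rho = \lambda \bar{x}.

```python
def mh2_probs(lam, alphas, mus, K=60):
    """Equilibrium probabilities for M/H2/1, truncated at K customers.

    States: 0 (empty) and (k, i) for 1 <= k <= K, i in {1, 2}, meaning k
    customers present with the one in service occupying stage i.
    """
    states = [0] + [(k, i) for k in range(1, K + 1) for i in (1, 2)]
    idx = {s: n for n, s in enumerate(states)}
    n = len(states)
    Q = [[0.0] * n for _ in range(n)]            # Q[a][b] = rate a -> b

    for i in (1, 2):
        Q[idx[0]][idx[(1, i)]] = lam * alphas[i - 1]   # arrival to empty system
    for k in range(1, K + 1):
        for i in (1, 2):
            a = idx[(k, i)]
            if k < K:
                Q[a][idx[(k + 1, i)]] = lam            # arrival (blocked at K)
            if k == 1:
                Q[a][idx[0]] = mus[i - 1]              # departure empties system
            else:
                for j in (1, 2):                       # next customer picks stage j
                    Q[a][idx[(k - 1, j)]] += mus[i - 1] * alphas[j - 1]

    # balance: sum_a p_a Q[a][b] = p_b * (total rate out of b); replace one
    # redundant equation by the normalization sum_a p_a = 1, then eliminate.
    out = [sum(row) for row in Q]
    A = [[Q[a][b] - (out[b] if a == b else 0.0) for a in range(n)] for b in range(n)]
    A[0] = [1.0] * n
    rhs = [1.0] + [0.0] * (n - 1)
    for c in range(n):                               # Gauss-Jordan with pivoting
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        rhs[c], rhs[piv] = rhs[piv], rhs[c]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
                rhs[r] -= f * rhs[c]
    p = [rhs[c] / A[c][c] for c in range(n)]
    return {s: p[idx[s]] for s in states}

probs = mh2_probs(lam=0.5, alphas=(0.4, 0.6), mus=(2.0, 1.0))
rho = 0.5 * (0.4 / 2.0 + 0.6 / 1.0)          # lambda times the mean service time
assert abs(probs[0] - (1 - rho)) < 1e-6      # P[empty] = 1 - rho; tail is negligible
```

The truncation at K = 60 leaves a tail of negligible probability for these
parameters; a production version would choose K adaptively.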
We are now led directly into the following generalization of series stages
and parallel stages; specifically, we are free to combine series and parallel
stages into arbitrarily complex structures such as that shown in Figure 4.13.

Figure 4.13 A series-parallel server.
This diagram shows R parallel "stages," the ith "stage" consisting of an
r_i-stage series system (i = 1, 2, ..., R); each stage in the ith series
branch is an exponential service facility with parameter r_i \mu_i. It is
clear that great generality can be built into such series-parallel systems.
Within the service facility one and only one of the multitude of stages may
be occupied by a customer, and no new customer may enter the large oval
(representing the service facility) until the previous customer departs. In
all cases, however, we note that the state of the service facility is
completely contained in the specification of the particular single stage of
service in which the customer may currently be found. Clearly the pdf for the
service time is calculable directly as above to give
b(x) = \sum_{i=1}^{R} \alpha_i \frac{r_i \mu_i (r_i \mu_i x)^{r_i - 1}}{(r_i - 1)!} e^{-r_i \mu_i x}, \qquad x \ge 0          (4.61)

and has a transform given by

B^*(s) = \sum_{i=1}^{R} \alpha_i \left( \frac{r_i \mu_i}{s + r_i \mu_i} \right)^{r_i}          (4.62)
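As a numerical sanity check on Eqs. (4.61) and (4.62) — a sketch we add here,
with illustrative parameters and names of our own — one can draw service times
from the Figure 4.13 mechanism and compare the empirical mean against
-dB*(s)/ds at s = 0, which for this family is \sum_i \alpha_i/\mu_i.

```python
import random

def sample_series_parallel(alphas, rs, mus, rng):
    """Draw one service time from the series-parallel server: pick branch i
    with probability alpha_i, then pass through r_i exponential stages, each
    with rate r_i * mu_i, so the branch total is Erlang-r_i with mean 1/mu_i."""
    i = rng.choices(range(len(alphas)), weights=alphas)[0]
    return sum(rng.expovariate(rs[i] * mus[i]) for _ in range(rs[i]))

rng = random.Random(42)
alphas, rs, mus = [0.3, 0.7], [2, 4], [1.0, 0.5]
n = 200_000
est = sum(sample_series_parallel(alphas, rs, mus, rng) for _ in range(n)) / n
exact = sum(a / m for a, m in zip(alphas, mus))   # = -dB*/ds at s = 0
assert abs(est - exact) < 0.02
```

The same comparison could be carried out on the second moment, obtained from
the second derivative of Eq. (4.62) at s = 0.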
One further way in which we may generalize our series-parallel server is to
remove the restriction that each stage within the same series branch has the
same service rate (r_i \mu_i); if indeed we permit the jth series stage in
the ith parallel branch to have a service rate given by \mu_{ij}, then we
find that the Laplace transform of the service-time density will be
generalized to

B^*(s) = \sum_{i=1}^{R} \alpha_i \prod_{j=1}^{r_i} \frac{\mu_{ij}}{s + \mu_{ij}}          (4.63)

These generalities lead to rather complex system equations.

Figure 4.14 Another stage-type server.
Another way to create the series-parallel effect is as follows. Consider the
service facility shown in Figure 4.14. In this system there are r service
stages, only one of which may be occupied at a given time. Customers enter
from the left and depart to the right. Before entering the ith stage an
independent choice is made such that with probability \beta_i the customer
will proceed into the ith exponential service stage (with rate \mu_i), and
with probability \alpha_i he will depart from the system immediately; clearly
we require \beta_i + \alpha_i = 1 for i = 1, 2, ..., r. After completing the
rth stage he will depart from the system with probability 1. One may
immediately write down the Laplace transform of the pdf for the system as
follows:

B^*(s) = \alpha_1 + \sum_{i=1}^{r} \beta_1 \beta_2 \cdots \beta_i \, \alpha_{i+1} \prod_{j=1}^{i} \frac{\mu_j}{s + \mu_j}          (4.64)

where \alpha_{r+1} = 1. One is tempted to consider more general transitions
among stages than those shown in this last figure; for example, rather than
choosing only between immediate departure and entry into the next stage one
might consider feedback or feedforward to other stages. Cox [COX 55] has
shown that no further generality is introduced with this feedback and
feedforward concept over that of the system shown in Figure 4.14.
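Because Eq. (4.64) is the transform of a pdf, it must satisfy B*(0) = 1 for
every admissible choice of the \beta_i and the stage rates. A minimal sketch
(our own; the function name and parameter values are illustrative):

```python
def cox_transform(s, betas, mus):
    """Evaluate Eq. (4.64) for the stage-type server of Figure 4.14 with stage
    rates mu_1..mu_r and continuation probabilities beta_i
    (alpha_i = 1 - beta_i, and alpha_{r+1} = 1)."""
    r = len(mus)
    total = 1.0 - betas[0]                 # alpha_1: depart before any service
    prod_beta, prod_stage = 1.0, 1.0
    for i in range(r):
        prod_beta *= betas[i]
        prod_stage *= mus[i] / (s + mus[i])
        alpha_next = 1.0 if i == r - 1 else 1.0 - betas[i + 1]
        total += prod_beta * alpha_next * prod_stage
    return total

# B*(0) must equal 1 (the pdf integrates to one) for any parameter choice
assert abs(cox_transform(0.0, [0.9, 0.5, 0.25], [1.0, 2.0, 4.0]) - 1.0) < 1e-12
```

At s = 0 each stage factor is unity and the branch probabilities telescope to
one, which is exactly what the assertion verifies.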
It is clear that each of these last three expressions for B*(s) may be
rewritten as a rational function of s, that is, as a ratio of polynomials in
s. The positions of the poles (zeroes of the denominator polynomial) of
B*(s) will of necessity be located on the negative real axis of the complex
s-plane. This is not quite as general as we would like, since an arbitrary
pdf for service time may have poles located anywhere in the negative half
s-plane [that is, for Re(s) < 0]. Cox [COX 55] has studied this problem and
suggests that complex values for the exponential parameters r_i \mu_i be
permitted; the argument is that whereas this corresponds to no physically
realizable exponential stage, so long as we provide poles in
complex-conjugate pairs then the entire service facility will have a real
pdf, which corresponds to the feasible cases. If we permit complex-conjugate
pairs of poles then we have complete generality in synthesizing any rational
function of s for our service-time transform B*(s). In addition, we have in
effect outlined a method of solving these systems by keeping track of the
state of the service facility. Moreover, we can similarly construct an
interarrival-time distribution from series-parallel stages, and thereby we
are capable of considering any G/G/1 system where the distributions have
transforms that are rational functions of s.
a very general problem. Let us discuss this meth od of solution. The sta te
descript ion clearly will be the number of customers in the system, the stage
in which the arriving cust omer finds himself within the (stagetype) arriving
box and the stage in which the cust omer finds himself in service. Fr om thi s
we may draw a (horribly complicated) statetransition diagram. Once we
have this diagram we may (by inspect ion) write down the equilibrium
equations in a rather straightforward manner ; this large set of equati ons
will typically have many bound ary conditions. However, these equati ons
will all be linear in the unknowns and so the solution method is straight
forward (albeit extremely tedi ous). What more natural setup for a computer
solutio n could one ask for ? Indeed, a digital computer is extremely adept at
solving large sets of linear equati ons (such a task is much easier for a digit al
computer to handle than is a small set of nonlinear equations). In carrying
out the digital solution of this (typically infinite) set of linear equations, we
must reduce it to a finite set; thi s can only be done in an ap proximate way by
first deciding at what point we ar e satisfied in truncating the sequence
Po, PI> p", .. . . Then we may solve the finite set and perhaps extrapolate the
• In a rea l sense, then, we are faced with an approximation problem; how may we "best"
app roximate a given dist ribution by one that has a rat ional transform. If we are given a
pdf in numerical form then Prony' s method IWHI T 44] is one acceptable procedure. On
the other hand, if the pdf is given analytically it is difficult to describe a genera l procedure
for suitable approxi mation. Of course one would like to make these approximati ons with
the fewest number of stages possib le. We comment that if one wishes to fit the first and
second moment s of a given distributi on by the method of stages then the number of stage s
cannot be significantly less than I / C
b
" ; unfortun ate ly, this implies that when the distri
but ion tends to concentrate ar ound a fixed value, then the number of stages required
grows rather quickly.
4.8. NETWORKS OF MARKOVIAN QUEUES 147
sol ution to the infinite set; all this is in way of approximation and hopefull y
we are abl e to carry out the .computation far enough so that the neglected
terms are indeed negligible.
One must not overemphasize the usefulne ss of this pr ocedure ; thi s solution
method is not as yet automated but does at least in principl e provide a method
of approach. Other anal ytic methods for handling the more complex queueing
situations are discussed in the balance of this book.
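The 1/C_b^2 remark in the footnote above is visible in the simplest series
case: an r-stage Erlang server has C_b^2 = 1/r exactly, so matching a small
C_b^2 forces r to be large. A brief verification (our own illustration; the
function name is not from the text):

```python
def erlang_cv2(r):
    """Squared coefficient of variation of an r-stage Erlang (series) server:
    with stage rate r*mu, the mean is r/(r*mu) and the variance r/(r*mu)**2,
    so C_b^2 = 1/r independent of mu (we take mu = 1 here)."""
    mean = r * (1.0 / r)
    var = r * (1.0 / r) ** 2
    return var / mean ** 2

# matching a target C_b^2 with a pure Erlang takes r = 1/C_b^2 stages, so a
# nearly deterministic service time (C_b^2 -> 0) needs very many stages
for r in (1, 2, 10, 100):
    assert abs(erlang_cv2(r) - 1.0 / r) < 1e-12
```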
4.8. NETWORKS OF MARKOVIAN QUEUES
We have so far considered Markovian systems in which each customer was
demanding a single service operation from the system. We may refer to this
as a "single-node" system. In this section we are concerned with
multiple-node systems in which a customer requires service at more than one
station (node). Thus we may think of a network of nodes, each of which is a
service center (perhaps with multiple servers at some of the nodes) and each
with storage room for queues to form. Customers enter the system at various
points, queue for service, and upon departure from a given node then proceed
to some other node, there to receive additional service. We are now
describing the last category of flow system discussed in Chapter 1, namely,
stochastic flow in a network.

A number of new considerations emerge when one considers networks. For
example, the topological structure of the network is important since it
describes the permissible transitions between nodes. Also the paths taken by
individual customers must somehow be described. Of great significance is the
nature of the stochastic flow in terms of the basic stochastic processes
describing that flow; for example, in the case of a tandem queue where
customers departing from node i immediately enter node i + 1, we see that
the interdeparture times from the former generate the interarrival times to
the latter. Let us for the moment consider the simple two-node tandem
network shown in Figure 4.15. Each oval in that figure describes a queueing
system consisting of a queue and server(s); within each oval is given the
node number. (It is important not to confuse these physical network diagrams
with the abstract state-transition-rate diagrams we have seen earlier.) For
the moment let us assume that a Poisson process generates the arrivals to
the system at a rate \lambda, all of which enter node one; further assume
that node one consists of a single exponential server at rate \mu. Thus node
one is exactly an M/M/1 queueing system.

Figure 4.15 A two-node tandem network.

Also we will assume that node two has a single
exponential server, also of rate \mu. The basic question is to solve for the
interarrival-time distribution feeding node two; this certainly will be
equivalent to the interdeparture-time distribution from node one. Let d(t)
be the pdf describing the interdeparture process from node one and as usual
let its Laplace transform be denoted by D*(s). Let us now calculate D*(s).
When a customer departs from node one either a second customer is available
in the queue and ready to be taken into service immediately, or the queue is
empty. In the first case, the time until this next customer departs from
node one will be distributed exactly as a service time, and in that case we
will have

D^*(s) \big|_{node one nonempty} = B^*(s)

On the other hand, if the node is empty upon this first customer's departure
then we must wait for the sum of two intervals, the first being the time
until the second customer arrives and the next being his service time; since
these two intervals are independently distributed, the pdf of the sum must
be the convolution of the pdf's for each. Certainly then the transform of
the sum pdf will be the product of the transforms of the individual pdf's
and so we have

D^*(s) \big|_{node one empty} = \frac{\lambda}{s + \lambda} B^*(s)

where we have given the explicit expression for the transform of the
interarrival-time density. Since we have an exponential server we may also
write B^*(s) = \mu/(s + \mu); furthermore, as we shall discuss in Chapter 5,
the probability of a departure leaving behind an empty system is the same as
the probability of an arrival finding an empty system, namely, 1 - \rho.
This permits us to write down the unconditional transform for the
interdeparture-time density as

D^*(s) = (1 - \rho) D^*(s) \big|_{node one empty} + \rho D^*(s) \big|_{node one nonempty}

Using our above calculations we then have

D^*(s) = (1 - \rho) \left( \frac{\lambda}{s + \lambda} \right) \left( \frac{\mu}{s + \mu} \right) + \rho \left( \frac{\mu}{s + \mu} \right)

A little algebra gives

D^*(s) = \frac{\lambda}{s + \lambda}          (4.65)

and so the interdeparture-time distribution is given by

D(t) = 1 - e^{-\lambda t}
Thus we find the remarkable conclusion that the interdeparture times are
exponentially distributed with the same parameter as the interarrival times!
In other words (in the case of a stable stationary queueing system), a
Poisson process driving an exponential server generates a Poisson process
for departures. This startling result is usually referred to as Burke's
theorem [BURK 56]; a number of others also studied the problem (see, for
example, the discussion in [SAAT 65]). In fact, Burke's theorem says more,
namely, that the steady-state output of a stable M/M/m queue with input
parameter \lambda and service-time parameter \mu for each of the m channels
is in fact a Poisson process at the same rate \lambda. Burke also
established that the output process was independent of the other processes
in the system. It has also been shown that the M/M/m system is the only such
FCFS system with this property. Returning now to Figure 4.15 we see
therefore that node two is driven by an independent Poisson arrival process
and therefore it too behaves like an M/M/1 system and so may be analyzed
independently of node one. In fact, Burke's theorem tells us that we may
connect many multiple-server nodes (each server with an exponential pdf)
together in a feedforward* network fashion and still preserve this
node-by-node decomposition.
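Burke's theorem invites a simulation check. The sketch below is our own
addition (parameters arbitrary): it drives a FCFS M/M/1 queue with the
recursion "begin service at the later of the arrival instant and the instant
the server frees up," and confirms that the interdeparture gaps have mean
1/\lambda and unit squared coefficient of variation, as an exponential stream
must.

```python
import random

def mm1_departure_times(lam, mu, n, rng):
    """Simulate a FCFS M/M/1 queue and return the first n departure instants."""
    t_arr = 0.0
    server_free = 0.0
    deps = []
    for _ in range(n):
        t_arr += rng.expovariate(lam)              # Poisson arrival stream
        start = max(t_arr, server_free)            # wait if the server is busy
        server_free = start + rng.expovariate(mu)  # exponential service
        deps.append(server_free)
    return deps

rng = random.Random(7)
lam, mu, n = 1.0, 2.0, 200_000
d = mm1_departure_times(lam, mu, n, rng)
gaps = [b - a for a, b in zip(d, d[1:])]
mean = sum(gaps) / len(gaps)
var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
assert abs(mean - 1 / lam) < 0.02        # departures flow at rate lambda ...
assert abs(var * lam ** 2 - 1.0) < 0.05  # ... with exponential-looking gaps
```

Only the marginal behavior is probed here; the independence part of Burke's
statement could be probed similarly through sample autocorrelations of the
gaps.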
Jackson [JACK 57] addressed himself to this question by considering an
arbitrary network of queues. The system he studied consists of N nodes,
where the ith node consists of m_i exponential servers, each with parameter
\mu_i; further, the ith node receives arrivals from outside the system in
the form of a Poisson process at rate \gamma_i. Thus if N = 1 then we have
an M/M/m system. Upon leaving the ith node a customer then proceeds to the
jth node with probability r_{ij}; this formulation permits the case where
r_{ii} > 0. On the other hand, after completing service in the ith node the
probability that the customer departs from the network (never to return
again) is given by 1 - \sum_{j=1}^{N} r_{ij}. We must calculate the total
average arrival rate of customers to a given node. To do so, we must sum the
(Poisson) arrivals from outside the system plus the arrivals (not
necessarily Poisson) from all internal nodes; that is, denoting the total
average arrival rate to node i by \lambda_i, we easily find that this set of
parameters must satisfy the following equations:

\lambda_i = \gamma_i + \sum_{j=1}^{N} \lambda_j r_{ji}, \qquad i = 1, 2, \ldots, N          (4.66)

In order for all nodes in this system to represent ergodic Markov chains we
require that \lambda_i < m_i \mu_i for all i; again we caution the reader
not to confuse the nodes in this discussion with the system states of each
node from our previous discussions. What is amazing is that Jackson was able
to show that each node (say the ith) in the network behaves as if it were an
independent M/M/m system with a Poisson input rate \lambda_i. In general,
the total input will not be a Poisson process. The state variable for this
N-node system consists of the vector (k_1, k_2, \ldots, k_N), where k_i is
the number of customers in the ith node [including the customer(s) in
service]. Let the equilibrium probability associated with this state be
denoted by p(k_1, k_2, \ldots, k_N). Similarly we denote the marginal
distribution of finding k_i customers in the ith node by p_i(k_i). Jackson
was able to show that the joint distribution for all nodes factors into the
product of each of the marginal distributions, that is,

p(k_1, k_2, \ldots, k_N) = p_1(k_1) p_2(k_2) \cdots p_N(k_N)          (4.67)

and p_i(k_i) is given as the solution to the classical M/M/m system [see,
for example, Eqs. (3.37)-(3.39) with the obvious change in notation]! This
last result is commonly referred to as Jackson's theorem. Once again we see
the "product" form of solution for Markovian queues in equilibrium.

* Specifically we do not permit feedback paths since this may destroy the
Poisson nature of the feedback departure stream. In spite of this, the
following discussion of Jackson's work points out that even networks with
feedback are such that the individual nodes behave as if they were fed
totally by Poisson arrivals, when in fact they are not.
A modification of Jackson's network of queues was considered by Gordon and
Newell [GORD 67]. The modification they investigated was that of a closed
Markovian network in the sense that a fixed and finite number of customers,
say K, are considered to be in the system and are trapped in that system in
the sense that no others may enter and none of these may leave; this
corresponds to Jackson's case in which \sum_{j=1}^{N} r_{ij} = 1 and
\gamma_i = 0 for all i. (An interesting example of this class of systems,
known as cyclic queues, had been considered earlier by Koenigsberg
[KOEN 58]; a cyclic queue is a tandem queue in which the last stage is
connected back to the first.) In the general case considered by Gordon and
Newell we do not quite expect a product solution since there is a dependency
among the elements of the state vector (k_1, k_2, \ldots, k_N) as follows:

\sum_{i=1}^{N} k_i = K          (4.68)

As is the case for Jackson's model we assume that this discrete-state Markov
process is irreducible and therefore a unique equilibrium probability
distribution exists for p(k_1, k_2, \ldots, k_N). In this model, however,
there is a finite number of states; in particular it is easy to see that the
number of distinguishable states of the system is equal to the number of
ways in which one can place K customers among the N nodes, and is equal to
the binomial coefficient

\binom{N + K - 1}{N - 1}
The following equations describe the behavior of the equilibrium
distribution of customers in this closed system and may be written by
inspection as

p(k_1, k_2, \ldots, k_N) \sum_{i=1}^{N} \delta_{k_i - 1} \, \alpha_i(k_i) \, \mu_i
    = \sum_{i=1}^{N} \sum_{j=1}^{N} \delta_{k_j - 1} \, \alpha_i(k_i + 1) \, \mu_i \, r_{ij} \, p(k_1, k_2, \ldots, k_i + 1, \ldots, k_j - 1, \ldots, k_N)          (4.69)

where the discrete unit step function defined in Appendix I takes the form

\delta_k = \begin{cases} 1 & k = 0, 1, 2, \ldots \\ 0 & k < 0 \end{cases}          (4.70)

and is included in the equilibrium equations to indicate the fact that the
service rate must be zero when a given node is empty; furthermore we define

\alpha_i(k_i) = \begin{cases} k_i & k_i < m_i \\ m_i & k_i \ge m_i \end{cases}
which merely gives the number of customers in service at the ith node when
there are k_i customers at that node. As usual the left-hand side of Eq.
(4.69) describes the flow of probability out of state
(k_1, k_2, \ldots, k_N) whereas the right-hand side accounts for the flow of
probability into that state from neighboring states. Let us proceed to write
down the solution to these equations. We define the function \beta_i(k_i) as
follows:

\beta_i(k_i) = \begin{cases} k_i! & k_i < m_i \\ m_i! \, m_i^{k_i - m_i} & k_i \ge m_i \end{cases}
,  ,
Consider a set of numbers {Xi}' which are solutions to the foliowing set of
linea r equati ons :
N
# i Xi = Lp j x jrji
;=1
i = 1,2, . . . , lV (4.71)
Note that thi s set of equations is in t he same form as 1t = 1tP where now
the vector 1t may be considered to be (fl, x" . . . , flsxs) and the elements of
the matrix P are considered to be the elements rij. * Since we assume that the
• Again the reader is caut ioned that, on the one hand, we have been considering Markov
cha ins in which the quant ities Pij refer to the transition probabilities among the possible
slates that the system may take on, whereas, on the other hand, we have in this section in
additi on been considering a network of queuein g systems in which the probabilities r ij
refer to tran sitions that customers make between nodes in that network.
(4.73)
•
152 MARKOVIAN QUEUES IN EQUILIBRIUM
matrix of transition probabilities (whose element s are " i) is irreducible, then
by our previous studies we know that there must be a solution to Eqs. (4.71),
all of whose components are positive; of course, they will only be determined
to within a mult iplicati ve constant since there are only N  I independent
equati ons there . With these definitions the solution to Eq. (4.69) can be
shown to equal
 (4.72)
where the normalization constant is given by
N x .kj
G(K) = L II  '
ke.,! ' _ 1 fliCk ,)
Here we imply that the summation is taken over all state vectors k ~ (k" . . . ,
k
N
) that lie in the set A, and this is the set of all state vectors for which Eq.
(4.68) holds. Thi s then is the solution to the closed finite queueing network
pr oblem, and we observe once again that it has the product form.
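Equations (4.71)-(4.73) translate directly into a short computation for small
networks. The sketch below is our own (names, rates, and the damping scheme
are illustrative): it finds the x_i by damped power iteration on the routing
matrix and obtains G(K) by brute-force enumeration of the set A, here for a
three-node cycle with K = 2 customers.

```python
from itertools import product
from math import factorial

def gordon_newell(mu, m, R, K):
    """Closed network: solve Eq. (4.71) for the x_i, then Eqs. (4.72)-(4.73).

    mu[i]: per-server rate at node i; m[i]: servers at node i; R: routing
    matrix r_ij; K: fixed population.  Returns (G, dict of p(k))."""
    N = len(mu)
    y = [1.0 / N] * N                        # y_i plays the role of mu_i * x_i
    for _ in range(5_000):                   # damped iteration of y = y R
        y = [(y[i] + sum(y[j] * R[j][i] for j in range(N))) / 2 for i in range(N)]
        s = sum(y)
        y = [v / s for v in y]               # fix the multiplicative constant
    x = [y[i] / mu[i] for i in range(N)]

    def beta(i, k):                          # k! below m_i servers, then m_i! m_i^(k-m_i)
        return factorial(k) if k < m[i] else factorial(m[i]) * m[i] ** (k - m[i])

    states = [k for k in product(range(K + 1), repeat=N) if sum(k) == K]
    weight = {}
    for k in states:
        w = 1.0
        for i in range(N):
            w *= x[i] ** k[i] / beta(i, k[i])
        weight[k] = w
    G = sum(weight.values())
    return G, {k: w / G for k, w in weight.items()}

# three single-server nodes in a cycle: r_13 = r_32 = r_21 = 1, K = 2
R = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
G, p = gordon_newell(mu=[1.0, 2.0, 4.0], m=[1, 1, 1], R=R, K=2)
assert len(p) == 6                           # (N+K-1 choose N-1) = 6 states
assert abs(sum(p.values()) - 1.0) < 1e-12
assert abs(1.0 * p[(2, 0, 0)] - 2.0 * p[(1, 1, 0)]) < 1e-12   # a balance check
```

The damped iteration is one of many ways to extract the positive solution of
Eq. (4.71); any linear-algebra routine would serve equally well.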
We may expose the product formulation somewhat further by considering the
case where K \to \infty. As it turns out, the quantities x_i/m_i are
critical in this calculation; we will assume that there exists a unique such
ratio that is largest, and we will renumber the nodes such that
x_1/m_1 > x_i/m_i (i \ne 1). It can then be shown that
p(k_1, k_2, \ldots, k_N) \to 0 for any state in which k_1 < \infty. This
implies that an infinite number of customers will form in node one, and this
node is often referred to as the "bottleneck" for the given network. On the
other hand, however, the marginal distribution p(k_2, \ldots, k_N) is
well-defined in the limit and takes the form

p(k_2, \ldots, k_N) = \prod_{i=2}^{N} p_i(k_i)          (4.74)

Thus we see the product solution directly for this marginal distribution
and, of course, it is similar to Jackson's theorem in Eq. (4.67); note that
in one case we have an open system (one that permits external arrivals) and
in the other case we have a closed system. As we shall see in Chapter 4,
Volume II, this model has significant applications in time-shared and
multiaccess computer systems.
Jackson [JACK 63] later considered an even more general open queueing
system, which includes the closed system just considered as a special case.
The new wrinkles introduced by Jackson are, first, that the customer arrival
process is permitted to depend upon the total number of customers in the
system (using this, he easily creates closed networks) and, second, that the
service rate at any node may be a function of the number of customers in
that node. Thus defining

S(k) \triangleq k_1 + k_2 + \cdots + k_N
we then permit the total arrival rate to be a function of S(k) when the
system state is given by the vector k. Similarly we define the exponential
service rate at node i to be \mu_{i k_i} when there are k_i customers at
that node (including those in service). As earlier, we have the node
transition probabilities r_{ij} (i, j = 1, 2, \ldots, N), with the following
additional definitions: r_{0i} is the probability that the next externally
generated arrival will enter the network at node i; r_{i,N+1} is the
probability that a customer leaving node i departs from the system; and
r_{0,N+1} is the probability that the next arrival will require no service
from the system and leave immediately upon arrival. Thus we see that in
this case \gamma_i = r_{0i} \gamma(S(k)), where \gamma(S(k)) is the total
external arrival rate to the system [conditioned on the number of customers
S(k) at the moment] from our external Poisson process. It can be seen that
the probability of a customer arriving at node i_1 and then passing through
the node sequence i_2, i_3, \ldots, i_n and then departing is given by
r_{0 i_1} r_{i_1 i_2} r_{i_2 i_3} \cdots r_{i_{n-1} i_n} r_{i_n, N+1}.
Rather than seek the solution of Eq. (4.66) for the traffic rates, since
they are functions of the total number of customers in the system, we
instead seek the solution for the following equivalent set:

e_i = r_{0i} + \sum_{j=1}^{N} e_j r_{ji}, \qquad i = 1, 2, \ldots, N          (4.75)
[In the case where the arrival rates are independent of the number in the
system, Eqs. (4.66) and (4.75) differ by a multiplicative factor equal to
the total arrival rate of customers to the system.] We assume that the
solution to Eq. (4.75) exists, is unique, and is such that e_i \ge 0 for all
i; this is equivalent to assuming that with probability 1 a customer's
journey through the network is of finite length. e_i is, in fact, the
expected number of times a customer will visit node i in passing through the
network.
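Since e_i counts expected visits, Eq. (4.75) can also be solved by
accumulating the series e = r_0 + r_0 R + r_0 R^2 + \cdots, which converges
precisely when journeys are finite with probability 1. A minimal sketch (our
own illustration; the helper name is not from the text):

```python
def visit_counts(r0, R):
    """Expected visits e_i per customer, Eq. (4.75): e_i = r_0i + sum_j e_j r_ji.

    Computed by summing expected visits over successive hops,
    e = r0 + r0 R + r0 R^2 + ... (a Neumann series for e = r0 (I - R)^-1)."""
    N = len(r0)
    e, term = list(r0), list(r0)
    for _ in range(10_000):
        term = [sum(term[j] * R[j][i] for j in range(N)) for i in range(N)]
        e = [e[i] + term[i] for i in range(N)]
    return e

# a single node revisited with probability 1/2: e = 1 + 1/2 + 1/4 + ... = 2
assert abs(visit_counts([1.0], [[0.5]])[0] - 2.0) < 1e-9
```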
Let us define the time-dependent state probabilities as

P_k(t) = P[system (vector) state at time t is k]          (4.76)

By our usual methods we may write down the differential-difference equations
governing these probabilities as follows:

\frac{dP_k(t)}{dt} = -\left[ \gamma(S(k))(1 - r_{0,N+1}) + \sum_{i=1}^{N} \mu_{i k_i} (1 - r_{ii}) \right] P_k(t)
    + \sum_{i=1}^{N} \gamma(S(k) - 1) \, r_{0i} \, P_{k(i-)}(t)
    + \sum_{i=1}^{N} \mu_{i, k_i + 1} \, r_{i,N+1} \, P_{k(i+)}(t)
    + \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \ne i}}^{N} \mu_{j, k_j + 1} \, r_{ji} \, P_{k(i,j)}(t)          (4.77)

where terms are omitted when any component of the vector argument goes
negative; k(i-) = k except for its ith component, which takes on the value
k_i - 1; k(i+) = k except for its ith component, which takes on the value
k_i + 1; and k(i,j) = k except that its ith component is k_i - 1 and its jth
component is k_j + 1, where i \ne j. Complex as this notation appears, its
interpretation should be rather straightforward for the reader. Jackson
shows that the equilibrium distribution is unique (if it exists) and defines
it in our earlier notation to be
\lim_{t \to \infty} P_k(t) \triangleq p_k \triangleq p(k_1, k_2, \ldots, k_N).
In order to give the equilibrium solution for p_k we must unfortunately
define the following further notation:
F(K) \triangleq \prod_{k=0}^{K-1} \gamma(k), \qquad K = 0, 1, 2, \ldots          (4.78)

f(k) \triangleq \prod_{i=1}^{N} \prod_{j=1}^{k_i} \frac{e_i}{\mu_{ij}}          (4.79)

H(K) \triangleq \sum_{k \in A} f(k)          (4.80)

G \triangleq \begin{cases} \sum_{K=0}^{\infty} F(K) H(K) & \text{if the sum converges} \\ \infty & \text{otherwise} \end{cases}          (4.81)

where the set A shown in Eq. (4.80) is the same as that defined for Eq.
(4.73).
In terms of these definitions, then, Jackson's more general theorem states
that if G < \infty then a unique equilibrium-state probability distribution
exists for the general state-dependent networks and is given by

p_k = \frac{1}{G} f(k) F(S(k))          (4.82)

Again we detect the product form of solution. It is also possible to show
that in the case when arrivals are independent of the total number in the
system [that is, \gamma(S(k)) = \gamma], then even in the case of
state-dependent service rates Jackson's first theorem applies, namely, the
joint pdf factors into the product of the individual pdf's as given in Eq.
(4.67). In fact, p_i(k_i) turns out to be the same as the probability
distribution for the number of customers in a single-node system where
arrivals come from a Poisson process at rate \gamma e_i and with the
state-dependent service rates \mu_{ij}, such as we have derived for our
general birth-death process in Chapter 3. Thus one impact of Jackson's
second theorem is that, for the constant-arrival-rate case, the equilibrium
probability distributions of the number of customers in the system at
individual centers are independent of the other centers; in addition, each
of these distributions is identical to that of the well-known single-node
service center with the same parameters.* A remarkable result!
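The definitions (4.78)-(4.82) can be exercised in the smallest case, N = 1
with constant \gamma and service rate \mu_{1j} = j\mu; the node then behaves
like the infinite-server birth-death system of Chapter 3, whose occupancy
distribution is Poisson with mean \gamma/\mu. A sketch (our own; the function
name, parameters, and truncation level are illustrative):

```python
from math import exp, factorial

def jackson_general_single_node(gamma, mu_of_k, kmax):
    """Eqs. (4.78)-(4.82) for N = 1, constant arrival rate gamma, and
    state-dependent service rate mu_of_k(j) with j customers present
    (so e_1 = 1 and H(K) = f(K))."""
    def f(k):                                  # Eq. (4.79)
        w = 1.0
        for j in range(1, k + 1):
            w /= mu_of_k(j)
        return w
    F = lambda K: gamma ** K                   # Eq. (4.78) with gamma(k) = gamma
    G = sum(f(k) * F(k) for k in range(kmax + 1))   # truncation of Eq. (4.81)
    return [f(k) * F(k) / G for k in range(kmax + 1)]   # Eq. (4.82)

# with mu_of_k(j) = j * mu (an M/M/inf-like node) the result is Poisson(gamma/mu)
mu, gamma = 2.0, 3.0
p = jackson_general_single_node(gamma, lambda j: j * mu, kmax=60)
for k in range(10):
    poisson = exp(-gamma / mu) * (gamma / mu) ** k / factorial(k)
    assert abs(p[k] - poisson) < 1e-9
```

The truncation at kmax = 60 stands in for the infinite sum in Eq. (4.81); the
neglected tail is astronomically small for these parameters.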
This last theorem is perhaps as far as one can go† with simple Markovian
networks, since it seems to extend Burke's theorem in its most general
sense. When one relaxes the Markovian assumption on arrivals and/or service
times, then extreme complexity arises in the interdeparture process, not
only in its marginal distribution but also in its lack of independence from
other state variables.
These Markovian queueing networks lead to rather depressing sets of (linear)
system equations; this is due to the enormous (yet finite) state
description. It is indeed remarkable that such systems do possess reasonably
straightforward solutions. The key to solution lies in the observation that
these systems may be represented as Markovian population processes, as
neatly described by Kingman [KING 69] and as recently pursued by Chandy
[CHAN 72]. In particular, a Markov population process is a continuous-time
Markov chain over the set of finite-dimensional state vectors
k = (k_1, k_2, \ldots, k_N) for which transitions are permitted only between
states‡ k and k(i+) (an external arrival at node i); k and k(i-) (an
external departure from node i); and k and k(i,j) (an internal transfer from
node i to node j). Kingman gives an elegant discussion of the interesting
classes and properties of these processes (using the notion and properties
of reversible Markov chains). Chandy discusses some of these issues by
observing that the equilibrium probabilities for the system states obey not
only the global-balance equations that we have so far seen (and which
typically lead to product-form solutions) but also that this system of
equations may be decomposed into many sets of smaller systems of equations,
each of which is simpler to solve. This transformed set is referred to as
the set of "local"-balance equations, which we now proceed to discuss.
The concept of local balance is most valuable when one deals with a network
of queues. However, the concept does apply to single-node Markovian queues,
and in fact we have already seen an example of local balance at play.

* This model also permits one to handle the closed queueing systems studied
by Gordon and Newell. In order to create the constant total number of
customers one need merely set \gamma(k) = 0 for k \ge K and
\gamma(K - 1) = \infty, where K is the fixed number one wishes to contain
within the system. In order to keep the node transition probabilities
identical in the open and closed systems, let us denote the former as
earlier by r_{ij} and the latter now by r_{ij}'; to make the limit of
Jackson's general system equivalent to the closed system of Gordon and
Newell we then require r_{ij}' = r_{ij} + r_{i,N+1} r_{0j}.

† In Chapter 4, Volume II, we describe some recent results that do in fact
extend the model to handle different customer classes and different service
disciplines at each node (permitting, in some cases, more general
service-time distributions).

‡ See the definitions following Eq. (4.77).
Figure 4.16 A simple cyclic network example: N = 3, K = 2.
Let us recall the global-balance equations (the flow-conservation equations)
for the general birth-death process as exemplified in Eq. (3.6). This
equation was obtained by balancing flow into and out of state E_k in Figure
2.9. We also commented at that time that a different boundary could be
considered across which flow must be conserved, and this led to the set of
equations (3.7). These latter equations are in fact local-balance equations
and have the extremely interesting property that they match terms from the
left-hand side of Eq. (3.6) with corresponding terms on the right-hand side;
for example, the term \lambda_{k-1} p_{k-1} on the left-hand side of Eq.
(3.6) is seen to be equal to \mu_k p_k on the right-hand side of that
equation directly from Eq. (3.7), and by a second application of Eq. (3.7)
we see that the two remaining terms in Eq. (3.6) must be equal. This is
precisely the way in which local balance operates, namely, to observe that
certain sets of terms in the global-balance equation must balance by
themselves, giving rise to a number of "local" balance equations.
The significant observation is that, if we are dealing with an ergodic
Markov process, then we know for sure that there is a unique solution for
the equilibrium probabilities as defined by the generic equation
\pi = \pi P. Second, if we decompose the global-balance equations for such a
process by matching terms of the large global-balance equations into sets of
smaller local-balance equations (and of course account for all the terms in
the global balance), then any solution satisfying this large set of
local-balance equations must also satisfy the global-balance equations; the
converse is not generally true. Thus any solution for the local-balance
equations will yield the unique solution for our Markov process.
In the interesting case of a network of queues we define a local-balance equation (with respect to a given network state and a network node i) as one that equates the rate of flow out of that network state due to the departure of a customer from node i to the rate of flow into that network state due to the arrival of a customer to node i.* This notion in the case of networks is best illustrated by the simple example shown in Figure 4.16. Here we show the case of a three-node network where the service rate in the ith node is given as
* When service is nonexponential but rather given in terms of a stage-type service distribution, then one equates arrivals to and departures from a given stage of service (rather than to and from the node itself).
4.8. NETWORKS OF MARKOVIAN QUEUES 157
Figure 4.17 State-transition-rate diagram for the example in Figure 4.16.
μ_i and is independent of the number of customers at that node; we assume there are exactly K = 2 customers circulating in this closed cyclic network. Clearly we have r_{13} = r_{32} = r_{21} = 1 and r_{ij} = 0 otherwise. Our state description is merely the triplet (k_1, k_2, k_3), where as usual k_i gives the number of customers in node i and where we require, of course, that k_1 + k_2 + k_3 = 2.
For this network we will therefore have exactly

\binom{N + K - 1}{N - 1} = 6

states with state-transition rates as shown in Figure 4.17.
For this system we have six global-balance equations (one of which will be redundant as usual; the extra condition comes from the conservation of probability); these are

\mu_1 p(2,0,0) = \mu_2 p(1,1,0)    (4.83)

\mu_2 p(0,2,0) = \mu_3 p(0,1,1)    (4.84)

\mu_3 p(0,0,2) = \mu_1 p(1,0,1)    (4.85)

\mu_1 p(1,1,0) + \mu_2 p(1,1,0) = \mu_2 p(0,2,0) + \mu_3 p(1,0,1)    (4.86)

\mu_2 p(0,1,1) + \mu_3 p(0,1,1) = \mu_3 p(0,0,2) + \mu_1 p(1,1,0)    (4.87)

\mu_1 p(1,0,1) + \mu_3 p(1,0,1) = \mu_2 p(0,1,1) + \mu_1 p(2,0,0)    (4.88)
Each of these global-balance equations is of the form whereby the left-hand side represents the flow out of a state and the right-hand side represents the flow into that state. Equations (4.83)-(4.85) are already local-balance equations, as we shall see; Eqs. (4.86)-(4.88) have been written so that the first term on the left-hand side of each equation balances the first term on the right-hand side of the equation, and likewise for the second terms. Thus Eq. (4.86) gives rise to the following local-balance equations:

\mu_1 p(1,1,0) = \mu_2 p(0,2,0)    (4.89)

\mu_2 p(1,1,0) = \mu_3 p(1,0,1)    (4.90)

Note, for example, that Eq. (4.89) takes the rate out of state (1,1,0) due to a departure from node 1 and equates it to the rate into that state due to arrivals at node 1; similarly, Eq. (4.90) does likewise for departures and arrivals at node 2. This is the principle of local balance, and we see therefore that Eqs. (4.83)-(4.85) are already of this form. Thus we generate nine local-balance equations* (four of which must therefore be redundant when we consider the conservation of probability), each of which is extremely simple and therefore permits a straightforward solution to be found. If this set of equations does indeed have a solution, then they certainly guarantee that the global equations are satisfied and therefore that the solution we have found is the unique solution to the original global equations. The reader may easily verify the following solution:
p(1,1,0) = \frac{\mu_1}{\mu_2}\, p(2,0,0)

p(1,0,1) = \frac{\mu_1}{\mu_3}\, p(2,0,0)

p(0,2,0) = \left(\frac{\mu_1}{\mu_2}\right)^2 p(2,0,0)

p(0,1,1) = \frac{\mu_1^2}{\mu_2 \mu_3}\, p(2,0,0)

p(0,0,2) = \left(\frac{\mu_1}{\mu_3}\right)^2 p(2,0,0)

p(2,0,0) = \left[1 + \frac{\mu_1}{\mu_2} + \frac{\mu_1}{\mu_3} + \left(\frac{\mu_1}{\mu_2}\right)^2 + \frac{\mu_1^2}{\mu_2 \mu_3} + \left(\frac{\mu_1}{\mu_3}\right)^2\right]^{-1}    (4.91)
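This solution is easy to check numerically. The following Python sketch (with arbitrarily assumed rates μ_1 = 1, μ_2 = 2, μ_3 = 3; any positive values work) enumerates the six states of the cyclic network and verifies that the candidate probabilities of Eq. (4.91) satisfy every global-balance equation:

```python
from itertools import product

# Service rates for the three nodes -- assumed values for illustration.
mu = {1: 1.0, 2: 2.0, 3: 3.0}
dest = {1: 3, 3: 2, 2: 1}   # cyclic routing: r13 = r32 = r21 = 1
K = 2

# All states (k1, k2, k3) with k1 + k2 + k3 = K.
states = [s for s in product(range(K + 1), repeat=3) if sum(s) == K]
assert len(states) == 6     # C(N + K - 1, N - 1) = C(4, 2) = 6

# Candidate solution of Eq. (4.91), unnormalized with p(2,0,0) = 1:
# p(k1,k2,k3) proportional to (mu1/mu2)^k2 (mu1/mu3)^k3.
def unnorm(s):
    k1, k2, k3 = s
    return (mu[1] / mu[2]) ** k2 * (mu[1] / mu[3]) ** k3

G = sum(unnorm(s) for s in states)
p = {s: unnorm(s) / G for s in states}

# Global balance: total rate out of each state = total rate in.
for s in states:
    out = sum(mu[i] for i in (1, 2, 3) if s[i - 1] > 0) * p[s]
    inflow = 0.0
    for i in (1, 2, 3):     # a node-i departure in state t = s + e_i - e_dest(i)
        j = dest[i]
        t = list(s)
        t[i - 1] += 1
        t[j - 1] -= 1
        if t[j - 1] >= 0:
            inflow += mu[i] * p[tuple(t)]
    assert abs(out - inflow) < 1e-12
print("Eq. (4.91) satisfies all six global-balance equations")
```

Changing the assumed rates to any other positive values leaves the assertions satisfied, which is precisely the content of the product-form result.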
Had we allowed all possible transitions among nodes (rather than the cyclic behavior in this example), then the state-transition-rate diagram would have permitted transitions in both directions where now only unidirectional transitions are permitted; however, it will always be true that only transitions to nearest-neighbor states (in this two-dimensional diagram) are permitted, so that such a diagram can always be drawn in a planar fashion. For example, had we allowed four customers in an arbitrarily connected three-node network, then the state-transition-rate diagram would have been as shown in Figure 4.18.

* The reader should write them out directly from Figure 4.17.

Figure 4.18 State-transition-rate diagram showing local balance (N = 3, K = 4).

In this diagram we represent possible transitions between nodes by an undirected branch (representing two one-way branches in opposite directions). Also, we have collected together sets of branches by joining them with a heavy line, and these are meant to represent branches whose contributions appear in the same local-balance equation. These diagrams can be extended to higher dimensions when there are more than three nodes in the system. In particular, with four nodes we get a tetrahedron (that is, a three-dimensional simplex). In general, with N nodes we will get an (N - 1)-dimensional simplex with K + 1 nodes along each edge (where K = number of customers in the closed system). We note in these diagrams that all nodes lying in a given straight line (parallel to any base of the simplex) maintain one component of the state vector at a constant value and that this value increases or decreases by unity as one moves to a parallel set of nodes. The local-balance equations are identified as balancing flow in that set of branches that connects a given node on one of these constant lines to all other nodes on that constant line adjacent and parallel to this node, and that decreases by unity that component that had been held constant. In summary, then, the local-balance equations are trivial to write down, and if one can succeed in finding a solution that satisfies them, then one has found the solution to the global-balance equations as well!
As we see, most of these Markovian networks lead to rather complex systems of linear equations. Wallace and Rosenberg [WALL 66] propose a numerical solution method for a large class of these equations which is computationally efficient. They discuss a computer program which is designed to evaluate the equilibrium probability distributions of state variables in very large finite Markovian queueing networks. Specifically, it is designed to solve the equilibrium equations of the form given in Eqs. (2.50) and (2.116), namely, π = πP and πQ = 0. The procedure is of the "power-iteration" type such that if π(i) is the ith iterate then π(i + 1) = π(i)R is the (i + 1)th iterate; the matrix R is either equal to the matrix αP + (1 - α)I (where α is a scalar) or equal to the matrix βQ + I (where β is a scalar and I is the identity matrix), depending upon which of the two above equations is to be solved. The scalars α and β are chosen carefully so as to give an efficient convergence to the solution of these equations. The speed of solution is quite remarkable and the reader is referred to [WALL 66] and its references for further details.
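The power-iteration idea is easy to sketch. The fragment below is a minimal illustration, not the Wallace-Rosenberg program: the three-state rate matrix and the particular choice of β are assumptions (a simple uniformization-style pick; the text leaves the careful choice of the scalars to [WALL 66]).

```python
import numpy as np

# A small transition-rate matrix Q: an M/M/1 queue truncated to 3 states,
# with illustrative rates lam = 1, mu = 2 (rows sum to zero).
lam, mu = 1.0, 2.0
Q = np.array([[-lam,        lam,  0.0],
              [  mu, -(lam+mu),   lam],
              [ 0.0,         mu,  -mu]])

# Power iteration for pi Q = 0: iterate pi <- pi (beta*Q + I).
# Convergence needs beta < 1 / max|q_ii|.
beta = 0.5 / np.abs(np.diag(Q)).max()
R = beta * Q + np.eye(3)

pi = np.full(3, 1.0 / 3.0)      # any starting probability vector works
for _ in range(500):
    pi = pi @ R
    pi /= pi.sum()              # guard against numerical drift

# Compare with the known truncated-M/M/1 result: pi_k ~ (lam/mu)^k.
rho = lam / mu
exact = np.array([1.0, rho, rho ** 2])
exact /= exact.sum()
print(np.allclose(pi, exact, atol=1e-10))   # -> True
```

Each iteration is one vector-matrix product, which is why the method scales to the very large state spaces mentioned above.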
Thus ends our study of purely Markovian systems in equilibrium. The unifying feature throughout Chapters 3 and 4 has been that these systems give rise to product-type solutions; one is therefore urged to look for solutions of this form whenever Markovian queueing systems are encountered. In the next chapter we permit either A(t) or B(x) (but not both) to be of arbitrary form, requiring the other to remain in exponential form.
REFERENCES

BURK 56 Burke, P. J., "The Output of a Queueing System," Operations Research, 4, 699-704 (1956).
CHAN 72 Chandy, K. M., "The Analysis and Solutions for General Queueing Networks," Proc. Sixth Annual Princeton Conference on Information Sciences and Systems, Princeton University, March 1972.
COX 55 Cox, D. R., "A Use of Complex Probabilities in the Theory of Stochastic Processes," Proceedings Cambridge Philosophical Society, 51, 313-319 (1955).
GORD 67 Gordon, W. J. and G. F. Newell, "Closed Queueing Systems with Exponential Servers," Operations Research, 15, 254-265 (1967).
JACK 57 Jackson, J. R., "Networks of Waiting Lines," Operations Research, 5, 518-521 (1957).
JACK 63 Jackson, J. R., "Jobshop-Like Queueing Systems," Management Science, 10, 131-142 (1963).
KING 69 Kingman, J. F. C., "Markov Population Processes," Journal of Applied Probability, 6, 1-18 (1969).
KOEN 58 Koenigsberg, E., "Cyclic Queues," Operational Research Quarterly, 9, 22-35 (1958).
SAAT 65 Saaty, T. L., "Stochastic Network Flows: Advances in Networks of Queues," Proc. Symp. Congestion Theory, Univ. of North Carolina Press (Chapel Hill), 86-107 (1965).
WALL 66 Wallace, V. L. and R. S. Rosenberg, "Markovian Models and Numerical Analysis of Computer System Behavior," AFIPS Spring Joint Computer Conference Proc., 141-148 (1966).
WHIT 44 Whittaker, E. and G. Robinson, The Calculus of Observations, 4th ed., Blackie (London), (1944).
EXERCISES
4.1. Consider the Markovian queueing system shown below. Branch labels are birth and death rates. Node labels give the number of customers in the system.
(a) Solve for p_k.
(b) Find the average number in the system.
(c) For λ = μ, what values do we get for parts (a) and (b)? Try to interpret these results.
(d) Write down the transition-rate matrix Q for this problem and give the matrix equation relating Q to the probabilities found in part (a).
4.2. Consider an E_k/E_n/1 queueing system where no queue is permitted to form. A customer who arrives to find the service facility busy is "lost" (he departs with no service). Let E_{ij} be the system state in which the "arriving" customer is in the ith arrival stage and the customer in service is in the jth service stage (note that there is always some customer in the arrival mechanism and that if there is no customer in the service facility, then we let j = 0). Let 1/kλ be the average time spent in any arrival stage and 1/nμ be the average time spent in any service stage.
(a) Draw the state-transition diagram showing all the transition rates.
(b) Write down the equilibrium equation for E_{ij} where 1 < i < k, 0 < j < n.
4.3. Consider an M/E_r/1 system in which no queue is allowed to form. Let j = the number of stages of service left in the system and let P_j be the equilibrium probability of being in state E_j.
(a) Find P_j, j = 0, 1, ..., r.
(b) Find the probability of a busy system.

4.4. Consider an M/H_2/1 system in which no queue is allowed to form. Service is of the hyperexponential type as shown in Figure 4.10 with μ_1 = 2μα_1 and μ_2 = 2μ(1 - α_1).
(a) Solve for the equilibrium probability of an empty system.
(b) Find the probability that server 1 is occupied.
(c) Find the probability of a busy system.

4.5. Consider an M/M/1 system with parameters λ and μ in which exactly two customers arrive at each arrival instant.
(a) Draw the state-transition-rate diagram.
(b) By inspection, write down the equilibrium equations for p_k (k = 0, 1, 2, ...).
(c) Let ρ = 2λ/μ. Express P(z) in terms of ρ and z.
(d) Find P(z) by using the bulk arrival results from Section 4.5.
(e) Find the mean and variance of the number of customers in the system from P(z).
(f) Repeat parts (a)-(e) with exactly r customers arriving at each arrival instant (and ρ = rλ/μ).

4.6. Consider an M/M/1 queueing system with parameters λ and μ. At each of the arrival instants one new customer will enter the system with probability 1/2 or two new customers will enter simultaneously with probability 1/2.
(a) Draw the state-transition-rate diagram for this system.
(b) Using the method of non-nearest-neighbor systems write down the equilibrium equations for p_k.
(c) Find P(z) and also evaluate any constants in this expression so that P(z) is given in terms only of λ and μ. If possible eliminate any common factors in the numerator and denominator of this expression [this makes life simpler for you in part (d)].
(d) From part (c) find the expected number of customers in the system.
(e) Repeat part (c) using the results obtained in Section 4.5 directly.

4.7. For the bulk arrival system of Section 4.5, assume (for 0 < α < 1) that

g_i = (1 - α)α^i,    i = 0, 1, 2, ...

Find p_k = equilibrium probability of finding k in the system.
4.8. For the bulk arrival system studied in Section 4.5, find the mean \bar{N} and variance \sigma_N^2 for the number of customers in the system. Express your answers in terms of the moments of the bulk arrival distribution.
4.9. Consider an M/M/1 system with the following variation: Whenever the server becomes free, he accepts two customers (if at least two are available) from the queue into service simultaneously. Of these two customers, only one receives service; when the service for this one is completed, both customers depart (and so the other customer got a "free ride").
If only one customer is available in the queue when the server becomes free, then that customer is accepted alone and is serviced; if a new customer happens to arrive when this single customer is being served, then the new customer joins the old one in service and this new customer receives a "free ride."
In all cases, the service time is exponentially distributed with mean 1/μ sec and the average (Poisson) arrival rate is λ customers per second.
(a) Draw the appropriate state diagram.
(b) Write down the appropriate difference equations for p_k = equilibrium probability of finding k customers in the system.
(c) Solve for P(z) in terms of p_0 and p_1.
(d) Express p_1 in terms of p_0.
4.10. We consider the denominator polynomial in Eq. (4.35) for the system E_r/M/1. Of the r + 1 roots, we know that one occurs at z = 1. Use Rouché's theorem (see Appendix I) to show that exactly r - 1 of the remaining r roots lie in the unit disk |z| ≤ 1 and therefore exactly one root, say z_0, lies in the region |z_0| > 1.
4.11. Show that the solution to Eq. (4.71) gives a set of variables {x_i} which guarantee that Eq. (4.72) is indeed the solution to Eq. (4.69).
4.12. (a) Draw the state-transition-rate diagram showing local balance for the case (N = 3, K = 5) with the following structure:
(b) Solve for p(k_1, k_2, k_3).
4.13. Consider a two-node Markovian queueing network (of the more general type considered by Jackson) for which N = 2, m_1 = m_2 = 1, μ_{k_i} = μ_i (constant service rate), and which has transition probabilities (r_{ij}) as described in the following matrix:

          j:   0      1      2      3
    i = 0 [    0    1 - ξ    ξ      0  ]
    i = 1 [    0      0      0      1  ]
    i = 2 [    0      1      0      0  ]

where 0 < ξ < 1 and nodes 0 and N + 1 = 3 are the "source" and "sink" nodes, respectively. We also have (for some integer K)

…,    k_1 + k_2 ≠ K
…,    k_1 + k_2 = K

and assume the system initially contains K customers.
(a) Find e_i (i = 1, 2) as given in Eq. (4.75).
(b) Since N = 2, let us denote p(k_1, k_2) = p(k_1, K - k_1) by h_{k_1}. Find the balance equations for h_{k_1}.
(c) Solve these equations for h_{k_1} explicitly.
(d) By considering the fraction of time the first node is busy, find the time between customer departures from the network (via node 1, of course).
PART III
INTERMEDIATE
QUEUEING THEORY
We are here concerned with those queueing systems for which we can still apply certain simplifications due to their Markovian nature. We encounter those systems that are representable as imbedded Markov chains, namely, the M/G/1 and the G/M/m queues. In Chapter 5 we rapidly develop the basic equilibrium equations for M/G/1, giving the notorious Pollaczek-Khinchin equations for queue length and waiting time. We next discuss the busy period and, finally, introduce some moderately advanced techniques for studying these systems, even commenting a bit on the time-dependent solutions. Similarly for the queue G/M/m in Chapter 6, we find that we can make some very specific statements about the equilibrium system behavior and, in fact, find that the conditional distribution of waiting time will always be exponential regardless of the interarrival-time distribution! Similarly, the conditional queue-length distribution is shown to be geometric. We note in this part that the methods of solution are quite different from those studied in Part II, but that much of the underlying behavior is similar; in particular the mean queue size, the mean waiting time, and the mean busy-period duration all are inversely proportional to 1 - ρ as earlier. In Chapter 7 we briefly investigate a rather pleasing interpretation of transforms in terms of probabilities.
The techniques we had used in Chapter 3 [the explicit product solution of Eq. (3.11)] and in Chapter 4 (flow conservation) are replaced by an indirect z-transform approach in Chapter 5. However, in Chapter 6, we return once again to the flow conservation inherent in the π = πP solution.
5

The Queue M/G/1
The simplicity of elementary queueing theory derives from the fact that the state description* requires only the specification of the number of customers† present. All other historical information is rendered irrelevant by the memoryless nature of pure Markovian systems. Moreover, the state description is not only one-dimensional but also countable (and in some cases finite). It is this latter property (the countability) that simplifies our calculations.
In this chapter and the next we study queueing systems that are driven by non-Markovian stochastic processes. As a consequence we are faced with new problems for which we must find new methods of solution.
In spite of the non-Markovian nature of these two systems there exists an abundance of techniques for handling them. Our approach in this chapter will be the method of the imbedded Markov chain due to Palm [PALM 43] and Kendall [KEND 51]. However, we have in reality already seen a second approach to this class of problems, namely, the method of stages, in which it was shown that so long as the interarrival-time and service-time pdf's have Laplace transforms that are rational, then the stage method can be applied (see Section 4.7); the disadvantage of that approach is that it merely gives a procedure for carrying out the solution but does not show the solution as an explicit expression, and therefore properties of the solution cannot be studied for a class of systems. The third approach, to be studied in Chapter 8, is to solve Lindley's integral equation [LIND 52]; this approach is suitable for the system G/G/1 and so obviously may be specialized to some of the systems we consider in this chapter. A fourth approach, the method of supplementary
* Usually a state description is given in terms of a vector which describes the system's state at time t. A vector v(t) is a state vector if, given v(t) and all inputs to this system during the interval (t, t_1) (where t < t_1), then we are capable of solving for the state vector v(t_1). Clearly it behooves us to choose a state vector containing that information that permits us to calculate quantities of importance for understanding system behavior.
† We saw in Chapter 4 that occasionally we record the number of stages in the system rather than the number of customers.
168 THE QUEUE M/G/1
variables, is discussed in the exercises at the end of this chapter; more will be said about this method in the next section. We also discuss the busy-period analysis [GAVE 59], which leads to the waiting-time distribution (see Section 5.10). Beyond these there exist other approaches to non-Markovian queueing systems, among which are the random-walk and combinatorial approaches [TAKA 67] and the method of Green's function [KEIL 65].
5.1. THE M/G/1 SYSTEM
The M/G/1 queue is a single-server system with Poisson arrivals and arbitrary service-time distribution denoted by B(x) [and a service-time pdf denoted by b(x)]. That is, the interarrival-time distribution is given by

A(t) = 1 - e^{-\lambda t},    t \ge 0

with an average arrival rate of λ customers per second, a mean interarrival time of 1/λ sec, and a variance σ_a^2 = 1/λ^2. As defined in Chapter 2 we denote the kth moment of service time by

\overline{x^k} \triangleq \int_0^\infty x^k b(x)\, dx

and we sometimes express these service-time moments by b_k \triangleq \overline{x^k}.
Let us discuss the state description (vector) for the M/G/1 system. If at some time t we hope to summarize the complete past history of this system, then it is clear that we must certainly specify N(t), the number of customers present at time t. Moreover, we must specify X_0(t), the service time already received by the customer in service at time t; this is necessary since the service-time distribution is not necessarily of the memoryless type. (Clearly, we need not specify how long it has been since the last arrival entered the system, since the arrival process is of the memoryless type.) Thus we see that the random process N(t) is a non-Markovian process. However, the vector [N(t), X_0(t)] is a Markov process and is an appropriate state vector for the M/G/1 system, since it completely summarizes all past history relevant to the future system development.
We have thus gone from a single-component description of state in elementary queueing theory to what appears to be a two-component description here in intermediate queueing theory. Let us examine the inherent difference between these two state descriptions. In elementary queueing theory, it is sufficient to provide N(t), the number in the system at time t, and we then have a Markov process with a discrete-state space, where the states themselves are either finite or countable in number. When we proceed to the current situation where we need a two-dimensional state description, we find that the number in the system N(t) is still denumerable, but now we must also
provide X_0(t), the expended service time, which is continuous. We have thus evolved from a discrete-state description to a continuous-state description, and this essential difference complicates the analysis.
It is possible to proceed with a general theory based upon the couplet [N(t), X_0(t)] as a state vector, and such a method of solution is referred to as the method of supplementary variables. For a treatment of this sort the reader is referred to Cox [COX 55] and Kendall [KEND 53]; Henderson [HEND 72] also discusses this method, but chooses the remaining service time instead of the expended service time as the supplementary variable. In this text we choose to use the method of the imbedded Markov chain as discussed below. However, before we proceed with the method itself, it is clear that we should understand some properties of the expended service time; this we do in the following section.
5.2. THE PARADOX OF RESIDUAL LIFE: A BIT OF
RENEWAL THEORY
We are concerned here with the case where an arriving customer finds a partially served customer in the service facility. Problems of this sort occur repeatedly in our studies, and so we wish to place this situation in a more general context. We begin with an apparent paradox illustrated through the following example. Assume that our hippie from Chapter 2 arrives at a roadside cafe at an arbitrary instant in time and begins hitchhiking. Assume further that automobiles arrive at this cafe according to a Poisson process at an average rate of λ cars per minute. How long must the hippie wait, on the average, until the next car comes along?
There are two apparently logical answers to this question. First, we might argue that since the average time between automobile arrivals is 1/λ min, and since the hippie arrives at a random point in time, then "obviously" the hippie will wait on the average 1/2λ min. On the other hand, we observe that since the Poisson process is memoryless, the time until the next arrival is independent of how long it has been since the previous arrival and therefore the hippie will wait on the average 1/λ min; this second argument can be extended to show that the average time from the last arrival until the hippie begins hitchhiking is also 1/λ min. The second solution therefore implies that the average time between the last car and the next car to arrive will be 2/λ min! It appears that this interval is twice as long as it should be for a Poisson process! Nevertheless, the second solution is the correct one, and so we are faced with an apparent paradox!
Let us discuss the solution to this problem in the case of an arbitrary interarrival-time distribution. This study properly belongs to renewal theory, and we quote results freely from that field; most of these results can be found
in the excellent monograph by Cox [COX 62] or in the fine expository article by Smith [SMIT 58]; the reader is also encouraged to see Feller [FELL 66]. The basic diagram is that given in Figure 5.1.

Figure 5.1 Life, age, and residual life.

In this figure we let A_k denote the kth automobile, which we assume arrives at time \tau_k. We assume that the intervals \tau_{k+1} - \tau_k are independent and identically distributed random variables with distribution given by

F(x) \triangleq P[\tau_{k+1} - \tau_k \le x]    (5.1)

We further define the common pdf for these intervals as

f(x) \triangleq \frac{dF(x)}{dx}    (5.2)

Let us now choose a random point in time, say t, when our hippie arrives at the roadside cafe. In this figure, A_{n-1} is the last automobile to arrive prior to t and A_n will be the first automobile to arrive after t. We let X denote this "special" interarrival time and we let Y denote the time that our hippie must wait until the next arrival. Clearly, the sequence of arrival points {\tau_k} forms a renewal process; renewal theory discusses the instantaneous replacement of components. In this case, {\tau_k} forms the sequence of instants when the old component fails and is replaced by a new component. In the language of renewal theory X is said to be the lifetime of the component under consideration, Y is said to be the residual life of that component at time t, and X_0 = X - Y is referred to as the age of that component at time t. Let us adopt that terminology and proceed to find the pdf for X and Y, the lifetime and residual life of our selected component. We assume that the renewal process has been operating for an arbitrarily long time since we are interested only in limiting distributions.
The amazing result we will find is that X is not distributed according to F(x). In terms of our earlier example this means that the interval which the hippie happens to select by his arrival at the cafe is not a typical interval. In fact, herein lies the solution to our paradox: A long interval is more likely to be "intercepted" by our hippie than a short one. In the case of a Poisson process we shall see that this bias causes the selected interval to be on the average twice as long as a typical interval.
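A small Monte Carlo experiment makes this concrete. The sketch below (with an assumed arrival rate λ = 1 car per minute) generates a long Poisson stream and measures the mean length of the interval that happens to cover a randomly chosen time point; it comes out near 2/λ rather than the typical 1/λ:

```python
import bisect
import random

random.seed(1)
lam, horizon = 1.0, 200_000.0

# One long Poisson stream of car-arrival times.
arrivals, t = [], 0.0
while t < horizon:
    t += random.expovariate(lam)
    arrivals.append(t)

# For many random inspection instants, record the covering interval length.
lengths = []
for _ in range(100_000):
    u = random.uniform(arrivals[0], arrivals[-2])
    i = bisect.bisect_right(arrivals, u)
    lengths.append(arrivals[i] - arrivals[i - 1])

mean_selected = sum(lengths) / len(lengths)
print(round(mean_selected, 1))   # near 2/lam, twice the typical interval
```

The length bias is exactly the one quantified by Eq. (5.8) below: an interval of length x is sampled with probability proportional to x f(x).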
Let the residual life Y have a distribution

\hat{F}(y) \triangleq P[Y \le y]    (5.3)

with density

\hat{f}(y) = \frac{d\hat{F}(y)}{dy}    (5.4)

Similarly, let the selected lifetime X have a pdf f_X(x) and PDF F_X(x) where

F_X(x) \triangleq P[X \le x]    (5.5)
In Exercise 5.2 we direct the reader through a rigorous derivation for the residual lifetime density \hat{f}(y). Rather than proceed through those details, let us give an intuitive derivation for the density that takes advantage of our physical intuition regarding this problem. Our basic observation is that long intervals between renewal points occupy larger segments of the time axis than do shorter intervals, and therefore it is more likely that our random point t will fall in a long interval. If we accept this, then we recognize that the probability that an interval of length x is chosen should be proportional to the length (x) as well as to the relative occurrence of such intervals [which is given by f(x) dx]. Thus, for the selected interval, we may write

f_X(x)\, dx = K x f(x)\, dx    (5.6)

where the left-hand side is P[x < X \le x + dx] and the right-hand side expresses the linear weighting with respect to interval length and includes a constant K, which must be evaluated so as to properly normalize this density. Integrating both sides of Eq. (5.6) we find that K = 1/m_1, where

m_1 \triangleq E[\tau_k - \tau_{k-1}]    (5.7)

and is the common average time between renewals (between arrivals of automobiles). Thus we have shown that the density associated with the selected interval is given in terms of the density of typical intervals by

f_X(x) = \frac{x f(x)}{m_1}    (5.8)
This is our first result. Let us proceed now to find the density of residual life \hat{f}(y). If we are told that X = x, then the probability that the residual life Y does not exceed the value y is given by

P[Y \le y \mid X = x] = \frac{y}{x}

for 0 \le y \le x; this last is true since we have randomly chosen a point within this selected interval, and therefore this point must be uniformly distributed within that interval. Thus we may write down the joint density of X and Y as

P[y < Y \le y + dy,\; x < X \le x + dx] = \left(\frac{dy}{x}\right)\left(\frac{x f(x)\, dx}{m_1}\right) = \frac{f(x)\, dy\, dx}{m_1}    (5.9)

for 0 \le y \le x. Integrating over x we obtain \hat{f}(y), which is the unconditional density for Y, namely,

\hat{f}(y)\, dy = \int_{x=y}^{\infty} \frac{f(x)\, dy\, dx}{m_1}

This immediately gives the final result:

\hat{f}(y) = \frac{1 - F(y)}{m_1}    (5.10)

This is our second result. It gives the density of residual life in terms of the common distribution of interval length and its mean.*
Let us express this last result in terms of transforms. Using our usual transform notation we have the following correspondences:

f(x) \Leftrightarrow F^*(s)

\hat{f}(x) \Leftrightarrow \hat{F}^*(s)

Clearly, all the random variables we have been discussing in this section are nonnegative, and so the relationship in Eq. (5.10) may be transformed directly by use of entry 5 in Table I.4 and entry 13 in Table I.3 to give

\hat{F}^*(s) = \frac{1 - F^*(s)}{s m_1}    (5.11)
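As a quick sanity check on Eq. (5.11), the sketch below (using sympy for the algebra) specializes it to exponential lifetimes, for which F*(s) = λ/(s + λ) and m_1 = 1/λ; the residual-life transform collapses back to F*(s), showing that the memoryless case is exactly the one in which the hippie's wait has the same distribution as a full interarrival interval:

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)

F_star = lam / (s + lam)    # Laplace transform of f(x) = lam * exp(-lam * x)
m1 = 1 / lam                # mean lifetime

# Residual-life transform from Eq. (5.11).
F_hat_star = (1 - F_star) / (s * m1)

# Memoryless case: residual life distributed exactly as a full lifetime.
print(sp.simplify(F_hat_star - F_star))   # -> 0
```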
It is now a trivial matter to find the moments of residual life in terms of the moments of the lifetimes themselves. We denote the nth moment of the lifetime by m_n and the nth moment of the residual life by r_n; that is,

m_n \triangleq E[(\tau_k - \tau_{k-1})^n]    (5.12)

r_n \triangleq E[Y^n]    (5.13)

Using our moment formula Eq. (II.26), we may differentiate Eq. (5.11) to obtain the moments of residual life. As s \to 0 we obtain indeterminate forms
* It may also be shown that the limiting pdf for age (X_0) is the same as for residual life (Y) given in Eq. (5.10).
which may be evaluated by means of L'Hospital's rule; this computation gives the moments of residual life as

r_n = \frac{m_{n+1}}{(n + 1) m_1}    (5.14)

This important formula is most often used to evaluate r_1, the mean residual life, which is found equal to

r_1 = \frac{m_2}{2 m_1}    (5.15)

and may also be expressed in terms of the lifetime variance (denoted by \sigma^2 \triangleq m_2 - m_1^2) to give

r_1 = \frac{m_1}{2} + \frac{\sigma^2}{2 m_1}    (5.16)

This last form shows that the correct answer to the hippie paradox is m_1/2, half the mean interarrival time, only if the variance is zero (regularly spaced arrivals); however, for Poisson arrivals, m_1 = 1/\lambda and \sigma^2 = 1/\lambda^2, giving r_1 = 1/\lambda = m_1, which confirms our earlier solution to the hippie paradox of residual life. Note that m_1/2 \le r_1 and that r_1 will grow without bound as \sigma^2 \to \infty. The result for the mean residual life (r_1) is a rather counterintuitive result; we will see it appear again and again.
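Equation (5.15) can be checked directly against the residual-life density of Eq. (5.10). The sketch below (with Uniform(0, 2) lifetimes as an arbitrary non-exponential example, so m_1 = 1, m_2 = 4/3, σ² = 1/3) integrates y·f̂(y) numerically and recovers r_1 = m_2/2m_1 = 2/3, which indeed exceeds m_1/2 = 1/2:

```python
# Residual-life mean by numerical integration of y * (1 - F(y)) / m1
# for Uniform(0, 2) lifetimes (an arbitrary example: m1 = 1, m2 = 4/3).
m1, m2 = 1.0, 4.0 / 3.0
F = lambda y: min(y / 2.0, 1.0)      # lifetime CDF

dy = 1e-5
r1_numeric = sum(y * (1.0 - F(y)) / m1 * dy
                 for y in (k * dy for k in range(int(2.0 / dy))))

print(round(r1_numeric, 4))   # -> 0.6667 = m2/(2*m1), not m1/2 = 0.5
```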
Before leaving renewal theory we take this opportunity to quote some other useful results. In the language of renewal theory the age-dependent failure rate r(x) is defined as the instantaneous rate at which a component will fail given that it has already attained an age of x; that is, r(x)\, dx \triangleq P[x < lifetime of component \le x + dx | lifetime > x]. From first principles, we see that this conditional density is

r(x) = \frac{f(x)}{1 - F(x)}    (5.17)

where once again f(x) and F(x) refer to the common distribution of component lifetime. The renewal function H(x) is defined to be

H(x) \triangleq E[number of renewals in an interval of length x]    (5.18)

and the renewal density h(x) is merely the renewal rate at time x, defined by

h(x) \triangleq \frac{dH(x)}{dx}    (5.19)
Renewal theory seems to be obsessed with limit theorems, and one of the important results is the renewal theorem, which states that

lim_{x→∞} h(x) = 1/m₁   (5.20)

This merely says that in the limit one cannot identify when the renewal process began, and so the rate at which components are renewed is equal to the inverse of the average time between renewals (m₁). We note that h(x) is not a pdf; in fact, its integral diverges in the typical case. Nevertheless, it does possess a Laplace transform, which we denote by H*(s). It is easy to show that the following relationship exists between this transform and the transform of the underlying pdf for renewals, namely:

H*(s) = F*(s) / (1 − F*(s))   (5.21)

This last is merely the transform expression of the integral equation of renewal theory, which may be written as

h(x) = f(x) + ∫₀ˣ h(x − t) f(t) dt   (5.22)
More will not be said about renewal theory at this point. Again the reader
is urged to consult the references mentioned above.
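As a parting numerical illustration (a sketch of our own, with the exponential chosen because its answer is known exactly), the integral equation (5.22) can be marched forward on a discrete grid; for exponential lifetimes of rate λ the renewal density is constant at λ, in agreement with the renewal theorem of Eq. (5.20).

```python
import math

def renewal_density(f, x_max, d=0.01):
    """Numerically solve the integral equation of renewal theory,
    h(x) = f(x) + integral_0^x h(x - t) f(t) dt   [Eq. (5.22)],
    by marching along a grid of spacing d; returns the list of h values."""
    n = int(x_max / d) + 1
    h = []
    for i in range(n):
        x = i * d
        # Riemann-sum approximation of the convolution term
        conv = d * sum(h[j] * f(x - j * d) for j in range(i))
        h.append(f(x) + conv)
    return h

lam = 1.0
f = lambda x: lam * math.exp(-lam * x)   # exponential lifetime pdf
h = renewal_density(f, x_max=5.0)
# Renewal theorem, Eq. (5.20): h(x) -> 1/m1 = lam for large x
```

For non-exponential lifetimes the same marching scheme shows h(x) oscillating before settling at 1/m₁.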
5.3. THE IMBEDDED MARKOV CHAIN
We now consider the method of the imbedded Markov chain and apply it to the M/G/1 queue. The fundamental idea behind this method is that we wish to simplify the description of state from the two-dimensional description [N(t), X₀(t)] into a one-dimensional description N(t). If indeed we are to be successful in calculating future values for our state variable, we must also implicitly give, along with this one-dimensional description of the number in system, the time expended on service for the customer in service. Furthermore (and here is the crucial point), we agree that we may gain this simplification by looking not at all points in time but rather at a select set of points in time. Clearly, these special epochs must have the property that, if we specify the number in the system at one such point and also provide future inputs to the system, then at the next suitable point in time we can again calculate the number in system; thus somehow we must implicitly be specifying the expended service for the man in service. How are we to identify a set of points with this property? There are many such sets. An extremely convenient set of points with this property is the set of departure instants from service. It is
clear that if we specify the number of customers left behind by a departing customer, we can calculate this same quantity at some point in the future given only the additional inputs to the system. Certainly, we have specified the expended service time at these instants: it is in fact zero for the customer (if any) currently in service, since he has just at that instant entered service!* (There are other sets of points with this property, for example, the set of points that occur exactly 1 sec after customers enter service; if we specify the number in the system at these instants, then we are capable of solving for the number of customers in the system at such future instants of time. Such a set as just described is not as useful as the departure instants, since we must worry about the case where a customer in service does not remain for a duration exceeding 1 sec.)
The reader should recognize that what we are describing is, in fact, a semi-Markov process in which the state transitions occur at customer departure instants. At these instants we define the imbedded Markov chain to be the number of customers present in the system immediately following the departure. The transitions take place only at the imbedded points and form a discrete-state space. The distribution of time between state transitions is equal to the service-time distribution B(x) whenever a departure leaves behind at least one customer, whereas it equals the convolution of the interarrival-time distribution (exponentially distributed) with b(x) in the case that the departure leaves behind an empty system. In any case, the behavior of the chain at these imbedded points is completely describable as a Markov process, and the results we have discussed in Chapter 2 are applicable.
Our approach then is to focus attention upon departure instants from service and to specify as our state variable the number of customers left behind by such a departing customer. We will proceed to solve for the system behavior at these instants in time. Fortunately, the solution at these imbedded Markov points happens also to provide the solution for all points in time.† In Exercise 5.7 the reader is asked to rederive some M/G/1 results using the method of supplementary variables; this method is good at all points in time and (as it must) turns out to be identical to the results we get here by using the imbedded Markov chain approach. This proves once again that our solution
* Moreover, we assume that no service has been expended on any other customer in the queue.
† This happy circumstance is due to the fact that we have a Poisson input and therefore (as shown in Section 4.1) an arriving customer takes what amounts to a "random" look at the system. Furthermore, in Exercise 5.6 we assist the reader in proving that the limiting distribution for the number of customers left behind by a departure is the same as the limiting distribution of customers found by a new arrival for any system that changes state by unit step values (positive or negative); this result is true for arbitrary arrival and arbitrary service-time distributions! Thus, for M/G/1, arrivals, departures, and random observers all see the same distribution of number in the system.
is good for all time. In the following pages we establish results for the queue-length distribution, the waiting-time distribution, and the busy-period distribution (all in terms of transforms); the waiting-time and busy-period duration results are in no way restricted by the imbedding we have described. So even if the other methods were not available, these results would still hold and would be unconstrained by the imbedding process. As a final reassurance to the reader we now offer an intuitive justification for the equivalence between the limiting distributions seen by departures and arrivals.
Taking the state of the system as the number of customers therein, we may observe the changes in system state as time evolves; if we follow the system state in continuous time, then we observe that these changes are of the nearest-neighbor type. In particular, if we let E_k be the system state when k customers are in the system, then we see that the only transitions from this state are E_k → E_{k+1} and E_k → E_{k−1} (where this last can occur only if k > 0). This is denoted in Figure 5.2. We now make the observation that the number of transitions of the type E_k → E_{k+1} can differ by at most one from the number of transitions of the type E_{k+1} → E_k. The former correspond to customer arrivals and occur at the arrival instants; the latter refer to customer departures and occur at the departure instants. After the system has been in operation for an arbitrarily long time, the number of such transitions upward must essentially equal the number of transitions downward. Since this up-and-down motion with respect to E_k occurs with essentially the same frequency, we may therefore conclude that the system states found by arrivals must have the same limiting distribution (r_k) as the system states left behind by departures (which we denote by d_k). Thus, if we let N(t) be the number in the system at time t, we may summarize our two conclusions as follows:
1. For Poisson arrivals, it is always true that [see Eq. (4.6)]

P[N(t) = k] = P[arrival at time t finds k in system]

that is,

p_k(t) = r_k(t)   (5.23)

2. If in any (perhaps non-Markovian) system N(t) makes only discontinuous changes of size (plus or minus) one, then if either one of the following limiting distributions exists, so does the other and they are equal (see Exercise 5.6):

r_k ≜ lim_{t→∞} P[arrival at t finds k customers in system]

d_k ≜ lim_{t→∞} P[departure at t leaves k customers behind]

Thus, for M/G/1,

p_k = r_k = d_k   (5.24)
Figure 5.2 State transitions for unit step-change systems.
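The up-and-down counting argument is easy to check numerically. In the sketch below (our own illustration, not from the text) we walk a unit-step sample path and count, for a tagged state E_k, the E_k → E_{k+1} transitions against the E_{k+1} → E_k transitions; along any sample path the two counts can differ by at most one, since upward and downward crossings of the boundary between k and k+1 must alternate.

```python
import random

# Simulate a birth-death sample path (unit step changes, as in Figure 5.2)
# and count, for a tagged state k, the transitions E_k -> E_{k+1} versus
# E_{k+1} -> E_k.
rng = random.Random(42)
state, k = 0, 3
up = down = 0
for _ in range(100_000):
    if state == 0 or rng.random() < 0.45:   # an arrival: E_j -> E_{j+1}
        if state == k:
            up += 1
        state += 1
    else:                                   # a departure: E_{j+1} -> E_j
        if state == k + 1:
            down += 1
        state -= 1
# up and down differ by at most one along any path
```

Dividing both counts by the elapsed number of transitions shows the two frequencies converging to the same limit, which is the heart of the r_k = d_k argument.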
Our approach for the balance of this chapter is first to find the mean number in system, a result referred to as the Pollaczek-Khinchin mean-value formula.* Following that we obtain the generating function for the distribution of number of customers in the system and then the transform for both the waiting-time and total system-time distributions. These last transform results we shall refer to as Pollaczek-Khinchin transform equations.* Furthermore, we solve for the transform of the busy-period duration and for the number served in the busy period; we then show how to derive waiting-time results from the busy-period analysis. Lastly, we derive the Takács integrodifferential equation for the unfinished work in the system. We begin by defining some notation and identifying the transition probabilities associated with our imbedded Markov chain.
5.4. THE TRANSITION PROBABILITIES
We have already discussed the use of customer departure instants as a set of imbedded points in the time axis; at these instants we define the imbedded Markov chain as the number of customers left behind by these departures. It should be clear to the reader that this is a complete state description, since we know for sure that zero service has so far been expended on the customer in service, and since the time elapsed since the last arrival is irrelevant to the future development of the process (the interarrival-time distribution is memoryless). Early in Chapter 2 we introduced some symbolic and graphical notation; we ask that the reader refresh his understanding of Figure 2.2 and that he recall the following definitions:
C_n represents the nth customer to enter the system
τ_n = arrival time of C_n
t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
x_n = service time for C_n

In addition, we introduce two new random variables of considerable interest:

q_n = number of customers left behind by the departure of C_n from service
v_n = number of customers arriving during the service of C_n
* There is considerable disagreement within the queueing-theory literature regarding the names for the mean-value and transform equations. Some authors refer to the mean-value expression as the Pollaczek-Khinchin formula, whereas others reserve that term for the transform equations. We attempt to relieve that confusion by adding the appropriate adjectives to these names.
We are interested in solving for the distribution of q_n, namely, P[q_n = k], which is, in fact, a time-dependent probability; its limiting distribution (as n → ∞) corresponds to d_k, which we know is equal to p_k, the basic distribution discussed in Chapters 3 and 4. In carrying out that solution we will find that the number of arriving customers v_n plays a crucial role.
As in Chapter 2, we find that the transition probabilities describe our Markov chain; thus we define the one-step transition probabilities

p_ij ≜ P[q_{n+1} = j | q_n = i]   (5.25)
Since these transitions are observed only at departures, it is clear that q_{n+1} < q_n − 1 is an impossible situation; on the other hand, q_{n+1} ≥ q_n − 1 is possible for all values due to the arrivals v_{n+1}. It is easy to see that the matrix of transition probabilities P = [p_ij] (i, j = 0, 1, 2, ...) takes the following form:

        | α₀  α₁  α₂  α₃  ... |
        | α₀  α₁  α₂  α₃  ... |
    P = | 0   α₀  α₁  α₂  ... |
        | 0   0   α₀  α₁  ... |
        | .   .   .   .       |

where

α_k ≜ P[v_{n+1} = k]   (5.26)
For example, the jth component of the first row of this matrix gives the probability that the previous customer left behind an empty system and that during the service of C_{n+1} exactly j customers arrived (all of whom were left behind by the departure of C_{n+1}); similarly, for other than the first row, the entry p_ij for j ≥ i − 1 gives the probability that exactly j − i + 1 customers arrived during the service period for C_{n+1}, given that C_n left behind exactly i customers; of these i customers one was indeed C_{n+1}, and this accounts for the +1 term in this last computation. The state-transition-probability diagram for this Markov chain is shown in Figure 5.3, in which we show only transitions out of E_i.
Let us now calculate α_k. We observe first of all that the arrival process (a Poisson process at a rate of λ customers per second) is independent of the state of the queueing system.

Figure 5.3 State-transition-probability diagram for the M/G/1 imbedded Markov chain.

Similarly, x_n, the service time for C_n, is independent of n and is distributed according to B(x). Therefore, v_n, the number of arrivals during the service time x_n, depends only upon the duration of x_n and not upon n at all. We may therefore dispense with the subscripts on v_n and x_n, replacing them with the random variables ṽ and x̃, so that we may write P[x_n ≤ x] = P[x̃ ≤ x] = B(x) and P[v_n = k] = P[ṽ = k] = α_k. We may now proceed with the calculation of α_k. We have by the law of total probability

α_k = P[ṽ = k] = ∫₀^∞ P[ṽ = k, x < x̃ ≤ x + dx]

By conditional probabilities we further have

α_k = ∫₀^∞ P[ṽ = k | x̃ = x] b(x) dx   (5.27)
where again b(x) = dB(x)/dx is the pdf for service time. Since we have a Poisson arrival process, we may replace the probability beneath the integral by the expression given in Eq. (2.131), that is,

α_k = ∫₀^∞ ((λx)^k / k!) e^{−λx} b(x) dx   (5.28)

This then completely specifies the transition probability matrix P.
We note that since α_k > 0 for all k ≥ 0, it is possible to reach all other states from any given state; thus our Markov chain is irreducible (and aperiodic). Moreover, let us make our usual definition

ρ = λx̄

and point out that this Markov chain is ergodic if ρ < 1 (unless specified otherwise, we shall assume ρ < 1 below).
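As a sanity check on Eq. (5.28) (our own sketch, not from the text), we can evaluate α_k by direct numerical quadrature. The exponential service case is chosen because the integral then has the simple closed geometric form α_k = (μ/(λ+μ))(λ/(λ+μ))^k.

```python
import math

def alpha_k(k, lam, b, x_max=40.0, n=50_000):
    """Evaluate Eq. (5.28),
    alpha_k = integral of ((lam*x)^k / k!) e^{-lam*x} b(x) dx,
    by midpoint-rule quadrature on [0, x_max]."""
    d = x_max / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * d
        total += ((lam * x) ** k / math.factorial(k)) \
                 * math.exp(-lam * x) * b(x) * d
    return total

lam, mu = 1.0, 2.0                       # rho = lam/mu = 1/2
b = lambda x: mu * math.exp(-mu * x)     # exponential service pdf (M/M/1)

alphas = [alpha_k(k, lam, b) for k in range(6)]
# closed form for exponential service: geometric in k
exact = [(mu / (lam + mu)) * (lam / (lam + mu)) ** k for k in range(6)]
```

Any other service pdf b(x) can be dropped into `alpha_k` unchanged, which is exactly how the transition matrix P would be built for a general M/G/1 chain.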
The stationary probabilities may be obtained from the vector equation p = pP, where p = [p₀, p₁, p₂, ...] and whose kth component p_k (= d_k) is merely the limiting probability that a departing customer will leave behind k customers, namely,

p_k = P[q̃ = k]   (5.29)

In the following section we find the mean value E[q̃], and in the section following that we find the z-transform for p_k.
5.5. THE MEAN QUEUE LENGTH
In this section we derive the Pollaczek-Khinchin formula for the mean value of the limiting queue length. In particular, we define

q̃ = lim_{n→∞} q_n   (5.30)

which certainly will exist in the case where our imbedded chain is ergodic.
Our first step is to find an equation relating the random variable q_{n+1} to the random variable q_n by considering two cases. The first is shown in Figure 5.4 (using our time-diagram notation) and corresponds to the case where C_n leaves behind a nonempty system (i.e., q_n > 0). Note that we are assuming a first-come-first-served queueing discipline, although this assumption affects only waiting times and not queue lengths or busy periods. We see from Figure 5.4 that q_n is clearly greater than zero since C_{n+1} is already in the system when C_n departs. We purposely do not show when customer C_{n+2} arrives since that is unimportant to our developing argument. We wish now to find an expression for q_{n+1}, the number of customers left behind when C_{n+1} departs. This is clearly given as equal to q_n, the number of customers present
when C_n departed, less 1 (since customer C_{n+1} departs himself), plus the number of customers that arrive during the service interval x_{n+1}. This last term is clearly equal to v_{n+1} by definition and is shown as a "set" of arrivals in the diagram.

Figure 5.4 Case where q_n > 0.

Figure 5.5 Case where q_n = 0.

Thus we have

q_{n+1} = q_n − 1 + v_{n+1},   q_n > 0   (5.31)
Now consider the second case, where q_n = 0; that is, our departing customer leaves behind an empty system, as illustrated in Figure 5.5. In this case we see that q_n is clearly zero since C_{n+1} has not yet arrived by the time C_n departs. Thus q_{n+1}, the number of customers left behind by the departure of C_{n+1}, is merely equal to the number of arrivals during his service time. Thus

q_{n+1} = v_{n+1},   q_n = 0   (5.32)

Collecting together Eq. (5.31) and Eq. (5.32) we have

q_{n+1} = { q_n − 1 + v_{n+1},   q_n > 0
          { v_{n+1},             q_n = 0     (5.33)
It is convenient at this point to introduce Δ_k, the shifted discrete step function

Δ_k = { 1,   k = 1, 2, ...
      { 0,   k ≤ 0           (5.34)

which is related to the discrete step function δ_k [defined in Eq. (4.70)] through Δ_k = δ_{k−1}. Applying this definition to Eq. (5.33) we may now write the single defining equation for q_{n+1} as

q_{n+1} = q_n − Δ_{q_n} + v_{n+1}   (5.35)
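The recursion in Eq. (5.35) can be exercised directly. In the sketch below (our own illustration), we specialize to M/M/1, where the number of Poisson(λ) arrivals during an exponential service time is geometrically distributed with P[v = k] = (1/(1+ρ))(ρ/(1+ρ))^k, iterate the recursion, and watch the running mean of q_n settle near ρ/(1−ρ), the M/M/1 value obtained later in Eq. (5.66).

```python
import random

# Iterate the basic M/G/1 recursion q_{n+1} = q_n - Delta_{q_n} + v_{n+1},
# Eq. (5.35), for the special case of exponential service (M/M/1).
rng = random.Random(7)
rho = 0.5
p = rho / (1.0 + rho)     # geometric parameter of the arrival count

def arrivals_in_service():
    """Draw v, the number of arrivals during one service time:
    geometric with P[v = k] = (1 - p) * p**k."""
    k = 0
    while rng.random() < p:
        k += 1
    return k

q, total, steps = 0, 0, 400_000
for _ in range(steps):
    q = q - (1 if q > 0 else 0) + arrivals_in_service()
    total += q
q_mean = total / steps    # should approach rho/(1 - rho) = 1.0 for rho = 0.5
```

Note that the mean of the geometric draw is p/(1−p) = ρ, which is Eq. (5.41) in miniature: the expected number of arrivals per service interval equals ρ.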
Equation (5.35) is the key equation for the study of M/G/1 systems. It remains for us to extract from Eq. (5.35) the mean value* of q_n. As usual, we concern ourselves not with the time-dependent behavior (which is indexed by the subscript n) but rather with the limiting distribution for the random variable q_n, which we denote by q̃. Accordingly we assume that the jth moment of q_n exists in the limit as n goes to infinity, independent of n, namely,

lim_{n→∞} E[q_n^j] = E[q̃^j]   (5.36)

(We are in fact requiring ergodicity here.)
As a first attempt let us hope that forming the expectation of both sides of Eq. (5.35) and then taking the limit as n → ∞ will yield the average value we are seeking. Proceeding as described we have

E[q_{n+1}] = E[q_n] − E[Δ_{q_n}] + E[v_{n+1}]

Using Eq. (5.36) we have, in the limit as n → ∞,

E[q̃] = E[q̃] − E[Δ_{q̃}] + E[ṽ]

Alas, the expectation we were seeking drops out of this equation, which yields instead

E[Δ_{q̃}] = E[ṽ]   (5.37)
What insight does this last equation provide? (Note that since ṽ is the number of arrivals during a customer's service time, which is independent of n, the index on v_n could have been dropped even before we went to the limit.) We have by definition that

E[ṽ] = average number of arrivals in a service time

Let us now interpret the left-hand side of Eq. (5.37). By definition we may calculate this directly as

E[Δ_{q̃}] = Σ_{k=0}^∞ Δ_k P[q̃ = k]
          = Δ₀ P[q̃ = 0] + Δ₁ P[q̃ = 1] + ···

* We could at this point proceed to the next section to obtain the (z-transform of the) limiting distribution for number in system and from that expression evaluate the average number in system. Instead, let us calculate the average number in system directly from Eq. (5.35), following the method of Kendall [KEND 51]; we choose to carry out this extra work to demonstrate to the student the simplicity of the argument.
But, from the definition in Eq. (5.34), we may rewrite this as

E[Δ_{q̃}] = 0{P[q̃ = 0]} + 1{P[q̃ > 0]}

or

E[Δ_{q̃}] = P[q̃ > 0]   (5.38)

Since we are dealing with a single-server system, Eq. (5.38) may also be written as

E[Δ_{q̃}] = P[busy system]   (5.39)

And from our definition of the utilization factor we further have

P[busy system] = ρ   (5.40)

as we had observed* in Eq. (2.32). Thus from Eqs. (5.37), (5.39), and (5.40) we conclude that

E[ṽ] = ρ   (5.41)

We thus have the perfectly reasonable conclusion that the expected number of arrivals per service interval is equal to ρ (= λx̄). For stability we of course require ρ < 1, and so Eq. (5.41) indicates that customers must arrive more slowly than they can be served (on the average).
We now return to the task of solving for the expected value of q̃. Forming the first moment of Eq. (5.35) yielded interesting results but failed to give the desired expectation. Let us now attempt to find this average value by first squaring Eq. (5.35) and then taking expectations, as follows:

q_{n+1}² = q_n² + (Δ_{q_n})² + v_{n+1}² − 2q_n Δ_{q_n} + 2q_n v_{n+1} − 2Δ_{q_n} v_{n+1}   (5.42)

From our definition in Eq. (5.34) we have (Δ_{q_n})² = Δ_{q_n} and also q_n Δ_{q_n} = q_n. Applying this to Eq. (5.42) and taking expectations, we have

E[q_{n+1}²] = E[q_n²] + E[Δ_{q_n}] + E[v_{n+1}²] − 2E[q_n] + 2E[q_n v_{n+1}] − 2E[Δ_{q_n} v_{n+1}]

In this equation we have the expectation of the product of two random variables in the last two terms. However, we observe that v_{n+1} [the number of arrivals during the (n+1)th service interval] is independent of q_n (the number of customers left behind by C_n). Consequently, the last two expectations may each be written as a product of the expectations. Taking the limit as n goes to infinity, and using our limit assumptions in Eq. (5.36), we have

0 = E[Δ_{q̃}] + E[ṽ²] − 2E[q̃] + 2E[q̃]E[ṽ] − 2E[Δ_{q̃}]E[ṽ]

* For any M/G/1 system we see that P[q̃ = 0] = 1 − P[q̃ > 0] = 1 − ρ, and so P[new customer need not queue] = 1 − ρ. This agrees with our earlier observation for G/G/1.
We now make use of Eqs. (5.37) and (5.41) to obtain, as an intermediate result for the expectation of q̃,

E[q̃] = ρ + (E[ṽ²] − E[ṽ]) / (2(1 − ρ))   (5.43)

The only unknown here is E[ṽ²].
Let us solve not only for the second moment of ṽ but, in fact, let us describe a method for obtaining all the moments. Equation (5.28) gives an expression for α_k = P[ṽ = k]. From this expression we should be able to calculate the moments. However, we find it expedient first to define the z-transform for the random variable ṽ as

V(z) ≜ E[z^ṽ] = Σ_{k=0}^∞ P[ṽ = k] z^k   (5.44)

Forming V(z) from Eqs. (5.28) and (5.44) we have
V(z) = Σ_{k=0}^∞ [∫₀^∞ ((λx)^k / k!) e^{−λx} b(x) dx] z^k

Our summation and integral are well behaved, and we may interchange the order of these two operations to obtain

V(z) = ∫₀^∞ e^{−λx} (Σ_{k=0}^∞ (λxz)^k / k!) b(x) dx
     = ∫₀^∞ e^{−λx} e^{λxz} b(x) dx
     = ∫₀^∞ e^{−(λ−λz)x} b(x) dx   (5.45)

At this point we define (as usual) the Laplace transform B*(s) for the service-time pdf as

B*(s) ≜ ∫₀^∞ e^{−sx} b(x) dx

We note that Eq. (5.45) is of this form, with the complex variable s replaced by λ − λz, and so we recognize the important result that

V(z) = B*(λ − λz)   (5.46)
This last equation is extremely useful and represents a relationship between the z-transform of the probability distribution of the random variable ṽ and the Laplace transform of the pdf of the random variable x̃ when the Laplace transform is evaluated at the critical point λ − λz. These two random variables are such that ṽ represents the number of arrivals occurring during the interval x̃, where the arrival process is Poisson at an average rate of λ arrivals per second. We will shortly have occasion to incorporate this interpretation of Eq. (5.46) in our further results.
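For exponential service everything in Eq. (5.46) is available in closed form, so the identity can be verified directly (a sketch of our own, not from the text): B*(s) = μ/(s + μ), while the α_k are geometric, and summing the series of Eq. (5.44) must reproduce B*(λ − λz).

```python
# Check V(z) = B*(lam - lam*z), Eq. (5.46), for exponential service,
# where B*(s) = mu/(s + mu) and alpha_k = (mu/(lam+mu)) * (lam/(lam+mu))**k.
lam, mu = 1.0, 2.0

def B_star(s):
    return mu / (s + mu)

def V(z, terms=2000):
    """z-transform of the alpha_k, summed directly as in Eq. (5.44)."""
    r = lam / (lam + mu)
    return sum((1 - r) * (r * z) ** k for k in range(terms))

for z in (0.0, 0.3, 0.7, 0.99):
    lhs = V(z)
    rhs = B_star(lam - lam * z)
    # lhs and rhs agree to within the truncation of the series
```

The geometric series sums to (1−r)/(1−rz) = μ/(μ + λ − λz), which is exactly B*(λ − λz), so the agreement here is an identity rather than an approximation.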
From Appendix II we note that various derivatives of z-transforms evaluated at z = 1 give the various moments of the random variable under consideration. Similarly, the appropriate derivative of the Laplace transform evaluated at its argument s = 0 also gives rise to moments. In particular, from that appendix we recall that

B*^{(k)}(0) ≜ d^k B*(s)/ds^k |_{s=0} = (−1)^k E[x̃^k]   (5.47)

V^{(1)}(1) ≜ dV(z)/dz |_{z=1} = E[ṽ]   (5.48)

V^{(2)}(1) ≜ d²V(z)/dz² |_{z=1} = E[ṽ²] − E[ṽ]   (5.49)

In order to simplify the notation for these limiting derivative operations, we have used the more usual superscript notation with the argument replaced by its limit. Furthermore, we now resort to the overbar notation to denote the expected value of the random variable below that bar.† Thus Eqs. (5.47)-(5.49) become

B*^{(k)}(0) = (−1)^k \overline{x^k}   (5.50)

V^{(1)}(1) = v̄   (5.51)

V^{(2)}(1) = \overline{v²} − v̄   (5.52)

Of course, we must also have the conservation of probability, given by

B*(0) = V(1) = 1   (5.53)
We now wish to exploit the relationship given in Eq. (5.46) so as to be able to obtain the moments of the random variable ṽ from the expressions given in Eqs. (5.50)-(5.53). Thus from Eq. (5.46) we have

dV(z)/dz = dB*(λ − λz)/dz   (5.54)

† Recall from Eq. (2.19) that E[x_n^k] = \overline{x^k} = b_k (rather than the more cumbersome notation one might expect). We take the same liberties with ṽ and q̃, namely, writing \overline{v^k} and \overline{q^k} for their kth moments.
This last may be calculated as

dB*(λ − λz)/dz = (dB*(λ − λz)/d(λ − λz)) (d(λ − λz)/dz)
              = −λ dB*(y)/dy   (5.55)

where

y = λ − λz   (5.56)

Setting z = 1 in Eq. (5.54) we have

V^{(1)}(1) = −λ dB*(y)/dy |_{z=1}

But from Eq. (5.56) the case z = 1 is the case y = 0, and so we have

V^{(1)}(1) = −λB*^{(1)}(0)   (5.57)

From Eqs. (5.50), (5.51), and (5.57) we finally have

v̄ = λx̄   (5.58)
But λx̄ is just ρ, and we have once again established that which we knew from Eq. (5.41), namely, v̄ = ρ. (This certainly is encouraging.) We may continue to pick up higher moments by differentiating Eq. (5.54) once again to obtain

d²V(z)/dz² = d²B*(λ − λz)/dz²   (5.59)

Using the first derivative of B*(y) we now form its second derivative as follows:

d²B*(λ − λz)/dz² = d/dz [−λ dB*(y)/dy]
                 = −λ (d²B*(y)/dy²)(dy/dz)

or

d²B*(λ − λz)/dz² = λ² d²B*(y)/dy²   (5.60)

Setting z equal to 1 in Eq. (5.59) and using Eq. (5.60) we have

V^{(2)}(1) = λ² B*^{(2)}(0)
Thus, from earlier results in Eqs. (5.50) and (5.52), we obtain

V^{(2)}(1) = λ² \overline{x²}

so that

\overline{v²} − v̄ = λ² \overline{x²}   (5.61)

or

\overline{v²} = λ² \overline{x²} + ρ   (5.62)

We have thus finally solved for \overline{v²}. This clearly is the quantity required in order to evaluate Eq. (5.43). If we so desired (and with suitable energy) we could continue this differentiation game and extract additional moments of ṽ in terms of the moments of x̃; we prefer not to yield to that temptation here.
Returning to Eq. (5.43) we apply Eq. (5.61) to obtain

q̄ = ρ + λ² \overline{x²} / (2(1 − ρ))   (5.63)

This is the result we were after! It expresses the average queue size at customer departure instants in terms of known quantities, namely, the utilization factor (ρ = λx̄), λ, and \overline{x²} (the second moment of the service-time distribution). Let us rewrite this result in terms of C_b² = σ_b²/(x̄)², the squared coefficient of variation for service time:

q̄ = ρ + ρ² (1 + C_b²) / (2(1 − ρ))

This last is the extremely well-known formula for the average number of customers in an M/G/1 system and is commonly* referred to as the Pollaczek-Khinchin (P-K) mean-value formula. Note with emphasis that this average depends only upon the first two moments (x̄ and \overline{x²}) of the service-time distribution. Moreover, observe that q̄ grows linearly with the variance of the service-time distribution (or, if you will, linearly with its squared coefficient of variation).
The P-K mean-value formula provides an expression for q̄ that represents the average number of customers in the system at departure instants; however, we already know that this also represents the average number at the arrival instants and, in fact, at all points in time. We already have a notation for the average number of customers in the system, namely N̄, which we introduced in Chapter 2 and have used in previous chapters; we will continue to use the N̄ notation outside of this chapter. Furthermore, we have defined N̄_q to be the average number of customers in the queue (not counting the customer in service). Let us take a moment to develop a relationship between these two quantities. By definition we have

N̄ = Σ_{k=0}^∞ k P[q̃ = k]   (5.64)

* See footnote on p. 177.
Similarly, we may calculate the average queue size by subtracting unity from this previous calculation so long as there is at least one customer in the system, that is (note the lower limit),

N̄_q = Σ_{k=1}^∞ (k − 1) P[q̃ = k]

This easily gives us

N̄_q = Σ_{k=0}^∞ k P[q̃ = k] − Σ_{k=1}^∞ P[q̃ = k]

But the second sum is merely ρ, and so we have the result

N̄_q = N̄ − ρ   (5.65)

This simple formula gives the general relationship we were seeking.
As an example of the P-K mean-value formula, in the case of an M/M/1 system, we have that the coefficient of variation for the exponential distribution is unity [see Eq. (2.145)]. Thus for this system we have

q̄ = ρ + ρ² (2) / (2(1 − ρ))

or

q̄ = ρ / (1 − ρ)   M/M/1   (5.66)

Equation (5.66) gives the expected number of customers left behind by a departing customer. Compare this to the expression for the average number of customers in an M/M/1 system as given in Eq. (3.24). They are identical and lend validity to our earlier statements that the method of the imbedded Markov chain in the M/G/1 case gives rise to a solution that is good at all points in time. As a second example, let us consider the service-time distribution in which service time is a constant and equal to x̄. Such systems are described by the notation M/D/1, as we mentioned earlier. In this case clearly C_b² = 0, and so we have

q̄ = ρ + ρ² (1) / (2(1 − ρ))

or

q̄ = ρ/(1 − ρ) − ρ²/(2(1 − ρ))   M/D/1   (5.67)

Thus the M/D/1 system has ρ²/(2(1 − ρ)) fewer customers on the average than the M/M/1 system, demonstrating the earlier statement that q̄ increases with the variance of the service-time distribution.
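The two examples can be bundled into a small helper (our own sketch, not from the text) that evaluates the P-K mean-value formula and reproduces the M/M/1 and M/D/1 expressions of Eqs. (5.66) and (5.67).

```python
def pk_mean_queue(rho, Cb2):
    """P-K mean-value formula for the average number in an M/G/1 system:
    q = rho + rho**2 * (1 + Cb2) / (2 * (1 - rho))."""
    return rho + rho * rho * (1.0 + Cb2) / (2.0 * (1.0 - rho))

rho = 0.5
q_mm1 = pk_mean_queue(rho, 1.0)  # exponential service: rho/(1-rho) = 1.0
q_md1 = pk_mean_queue(rho, 0.0)  # constant service: rho + rho^2/(2(1-rho)) = 0.75
# The gap q_mm1 - q_md1 equals rho^2/(2(1-rho)) = 0.25, as stated above.
```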
Figure 5.6 The M/H₂/1 example.

For a third example, we consider an M/H₂/1 system in which

b(x) = (1/4) λ e^{−λx} + (3/4) 2λ e^{−2λx},   x ≥ 0   (5.68)

That is, the service facility consists of two parallel service stages, as shown in Figure 5.6. Note that λ is also the arrival rate, as usual. We may immediately calculate x̄ = 5/(8λ) and σ_b² = 31/(64λ²), which yields C_b² = 31/25. Thus

q̄ = ρ + ρ²(2.24)/(2(1 − ρ)) = ρ/(1 − ρ) + 0.12 ρ²/(1 − ρ)

Thus we see the (small) increase in q̄ for the (small) increase in C_b² over the value of unity for M/M/1. We note in this example that ρ is fixed at ρ = λx̄ = 5/8; therefore q̄ = 1.79, whereas for M/M/1 at this value of ρ we get q̄ = 1.66. We have introduced this M/H₂/1 example here since we intend to carry it (and the M/M/1 example) through our M/G/1 discussion.
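The arithmetic of this example is easy to reproduce (our own sketch; we take a two-stage hyperexponential with stage probabilities 1/4 and 3/4 and rates λ and 2λ, parameters that match the moments quoted in the text, x̄ = 5/(8λ) and σ_b² = 31/(64λ²)).

```python
# Moments of a two-stage hyperexponential service pdf,
# b(x) = p1*r1*exp(-r1*x) + p2*r2*exp(-r2*x), with lam = 1 for convenience.
lam = 1.0
probs, rates = (0.25, 0.75), (lam, 2 * lam)

x1 = sum(p / r for p, r in zip(probs, rates))          # mean = 5/(8 lam)
x2 = sum(2 * p / r**2 for p, r in zip(probs, rates))   # 2nd moment = 7/(8 lam^2)
var = x2 - x1 * x1                                     # sigma_b^2 = 31/(64 lam^2)
Cb2 = var / (x1 * x1)                                  # 31/25 = 1.24

rho = lam * x1                                         # 5/8
q = rho + rho * rho * (1 + Cb2) / (2 * (1 - rho))      # about 1.79
q_mm1 = rho / (1 - rho)                                # about 1.67 for M/M/1
```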
The main result of this section is the Pollaczek-Khinchin formula for the mean number in system, as given in Eq. (5.63). This result becomes a special case of our results in the next section, but we feel that its development has been useful as a pedagogical device. Moreover, in obtaining this result we established the basic equation for M/G/1 given in Eq. (5.35). We also obtained the general relationship between V(z) and B*(s), as given in Eq. (5.46); from this we are able to obtain the moments for the number of arrivals during a service interval.
We have not as yet derived any results regarding time spent in the system; we are now in a position to do so. We recall Little's result:

N̄ = λT

This result relates the expected number of customers N̄ in a system to λ, the arrival rate of customers, and to T, their average time in the system. For M/G/1 we have derived Eq. (5.63), which is the expected number in the system at customer departure instants. We may therefore apply Little's result to this expected number in order to obtain the average time spent in the system (queue plus service). We know that q̄ also represents the average number of customers found at random, and so we may equate q̄ = N̄. Thus we have

N̄ = ρ + ρ² (1 + C_b²)/(2(1 − ρ)) = λT

Solving for T we have

T = x̄ + ρ x̄ (1 + C_b²)/(2(1 − ρ))   (5.69)
This last is easily interpreted. The average total time spent in system is clearly the average time spent in service plus the average time spent in the queue. The first term above is merely the average service time, and thus the second term must represent the average queueing time (which we denote by W). Thus we have that the average queueing time is

W = ρ x̄ (1 + C_b²)/(2(1 − ρ))

or

W = W₀/(1 − ρ)   (5.70)

where W₀ ≜ λ\overline{x²}/2; W₀ is the average remaining service time for the customer (if any) found in service by a new arrival (work it out using the mean-residual-life formula). A particularly nice normalization factor is now apparent. Consider T, the average time spent in system. It is natural to compare this time to x̄, the average service time required of the system by a customer. Thus the ratio T/x̄ expresses the ratio of time spent in system to time required of the system and represents the factor by which the system inconveniences
customers due to the fact that they are sharing the system with other customers.
If we use this normalization in Eqs. (5.69) and (5.70), we arrive at the
following, where now time is expressed in units of average service intervals:

$$\frac{T}{\bar x} = 1 + \frac{\rho\,(1 + C_b^2)}{2(1-\rho)} \qquad (5.71)$$

$$\frac{W}{\bar x} = \frac{\rho\,(1 + C_b^2)}{2(1-\rho)} \qquad (5.72)$$
Each of these last two equations is also referred to as the P-K mean-value
formula [along with Eq. (5.63)]. Here we see the linear fashion in which the
statistical fluctuations of the input processes create delays (i.e., $1 + C_b^2$ is
the sum of the squared interarrival-time and service-time coefficients of
variation). Further, we see the highly nonlinear dependence of delays upon
the average load $\rho$.
Let us now compare the mean normalized queueing time for the systems*
M/M/1 and M/D/1; these have a squared coefficient of variation $C_b^2$ equal to
1 and 0, respectively. Applying this to Eq. (5.72) we have

$$\left.\frac{W}{\bar x}\right|_{\mathrm{M/M/1}} = \frac{\rho}{1-\rho} \qquad (5.73)$$

$$\left.\frac{W}{\bar x}\right|_{\mathrm{M/D/1}} = \frac{\rho}{2(1-\rho)} \qquad (5.74)$$
Note that the system with constant service time (M/D/1) has half the average
waiting time of the system with exponentially distributed service time
(M/M/1). Thus, as we commented earlier, the time in the system and the
number in the system both grow in proportion to the variance of the
service-time distribution.
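The factor-of-two comparison above is easy to verify numerically. The short Python sketch below is ours, not from the text; the function name and the load $\rho = 0.8$ are illustrative choices. It simply evaluates Eq. (5.72) for the two values of $C_b^2$:

```python
# Numerical check of the P-K mean-value formula, Eq. (5.72):
#   W / xbar = rho * (1 + Cb^2) / (2 * (1 - rho))

def normalized_wait(rho: float, cb2: float) -> float:
    """Mean queueing time in units of the average service time, Eq. (5.72)."""
    assert 0 <= rho < 1, "the formula holds only for rho < 1"
    return rho * (1 + cb2) / (2 * (1 - rho))

rho = 0.8
w_mm1 = normalized_wait(rho, cb2=1.0)   # M/M/1: exponential service, Cb^2 = 1
w_md1 = normalized_wait(rho, cb2=0.0)   # M/D/1: constant service,    Cb^2 = 0

print(w_mm1)          # rho/(1 - rho)
print(w_md1)          # rho/(2*(1 - rho))
print(w_mm1 / w_md1)  # M/D/1 waits exactly half as long at every load
```

At any fixed load the ratio is exactly 2, as Eqs. (5.73) and (5.74) state.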
Let us now proceed to find the distribution of the number in the system.
5.6. DISTRIBUTION OF NUMBER IN SYSTEM
In the previous sections we characterized the M/G/1 queueing system as an
imbedded Markov chain and then established the fundamental equation
(5.35), repeated here:

$$q_{n+1} = q_n - \Delta_{q_n} + v_{n+1} \qquad (5.75)$$
By forming the average of this last equation we obtained a result regarding
the utilization factor $\rho$ [see Eq. (5.41)]. By first squaring Eq. (5.75) and then

* Of less interest is our highly specialized M/H₂/1 example, for which we obtain $W/\bar x = 1.12\rho/(1-\rho)$.
taking expectations we were able to obtain P-K formulas that gave the
expected number in the system [Eq. (5.63)] and the normalized expected
time in the system [Eq. (5.71)]. If we were now to seek the second moment of
the number in the system, we could obtain this quantity by first cubing Eq.
(5.75) and then taking expectations. In this operation it is clear that the
expectation $E[\tilde q^3]$ would cancel on both sides of the equation once the limit
on n was taken; this would then leave an expression for the second moment of
$\tilde q$. Similarly, all higher moments can be obtained by raising Eq. (5.75) to
successively higher powers and then forming expectations.* In this section,
however, we choose to go after the distribution for $q_n$ itself (actually we
consider the limiting random variable $\tilde q$). As it turns out, we will obtain a
result which gives the z-transform for this distribution rather than the
distribution itself. In principle, these last two are completely equivalent; in
practice, we sometimes face great difficulty in inverting from the z-transform
back to the distribution. Nevertheless, we can pick off the moments of the
distribution of $\tilde q$ from the z-transform in extremely simple fashion by making
use of the usual properties of transforms and their derivatives.
Let us now proceed to calculate the z-transform for the probability of
finding k customers in the system immediately following the departure of a
customer. We begin by defining the z-transform for the random variable $q_n$
as

$$Q_n(z) \triangleq \sum_{k=0}^{\infty} P[q_n = k]\, z^k \qquad (5.76)$$

From Appendix II (and from the definition of expected value) we have that
this z-transform (or probability generating function) is also given by

$$Q_n(z) = E[z^{q_n}] \qquad (5.77)$$

Of interest is the z-transform for our limiting random variable $\tilde q$:

$$Q(z) = \lim_{n\to\infty} Q_n(z) = \sum_{k=0}^{\infty} P[\tilde q = k]\, z^k = E[z^{\tilde q}] \qquad (5.78)$$
(5.78)
As is usual in these definit ions for tr ansforms, the sum on the righthand side
of Eq. (5.76) converges to Eq. (5.77) only within some circle of convergence
in the zplane which defines a ma ximum value for [z] (certai nly [a] ~ I
is all owed).
The system M/G/1 is characterized by Eq. (5.75). We therefore use both
sides of this equation as an exponent for z as follows:

$$z^{q_{n+1}} = z^{q_n - \Delta_{q_n} + v_{n+1}}$$

* Specifically, the kth power leads to an expression for $E[\tilde q^{\,k-1}]$ that involves the first k moments of service time.
Let us now take expectations:

$$E[z^{q_{n+1}}] = E[z^{q_n - \Delta_{q_n} + v_{n+1}}]$$

Using Eq. (5.77) we recognize the left-hand side of this last as $Q_{n+1}(z)$.
Similarly, we may write the right-hand side of this equation as the expectation
of the product of two factors, giving us

$$Q_{n+1}(z) = E[z^{q_n - \Delta_{q_n}}\, z^{v_{n+1}}] \qquad (5.79)$$

We now observe, as earlier, that the random variable $v_{n+1}$ (which represents
the number of arrivals during the service of $C_{n+1}$) is independent of the
random variable $q_n$ (which is the number of customers left behind upon the
departure of $C_n$). Since this is true, the two factors within the expectation
on the right-hand side of Eq. (5.79) must themselves be independent (since
functions of independent random variables are also independent). We may
thus write the expectation of the product in that equation as the product of
the expectations:

$$Q_{n+1}(z) = E[z^{q_n - \Delta_{q_n}}]\, E[z^{v_{n+1}}] \qquad (5.80)$$

The second of these two expectations we again recognize as being independent
of the subscript n + 1; we thus remove the subscript and consider the random
variable v again. From Eq. (5.44) we then recognize that the second expectation
on the right-hand side of Eq. (5.80) is merely

$$E[z^{v_{n+1}}] = E[z^v] = V(z)$$

We thus have

$$Q_{n+1}(z) = V(z)\, E[z^{q_n - \Delta_{q_n}}] \qquad (5.81)$$
The only complicating factor in this last equation is the expectation. Let us
examine this term separately; from the definition of expectation we have

$$E[z^{q_n - \Delta_{q_n}}] = \sum_{k=0}^{\infty} P[q_n = k]\, z^{k - \Delta_k} \qquad (5.82)$$

The difficult part of this summation is that the exponent on z contains $\Delta_k$,
which takes on one of two values according to the value of k. In order to
simplify this special behavior we write the summation by exposing the first
term separately:

$$E[z^{q_n - \Delta_{q_n}}] = P[q_n = 0]\, z^0 + \sum_{k=1}^{\infty} P[q_n = k]\, z^{k-1} \qquad (5.83)$$

Regarding the sum in this last equation we see that it is almost of the form
given in Eq. (5.76); the differences are that we have one fewer power of z
and also that we are missing the first term in the sum. Both these deficiencies
may be corrected as follows:

$$\sum_{k=1}^{\infty} P[q_n = k]\, z^{k-1} = \frac{1}{z}\sum_{k=0}^{\infty} P[q_n = k]\, z^{k} - \frac{1}{z}\, P[q_n = 0]\, z^0$$

Applying this to Eq. (5.83) and recognizing that the sum on the right-hand
side of this last is merely $Q_n(z)$, we have

$$E[z^{q_n - \Delta_{q_n}}] = P[q_n = 0] + \frac{Q_n(z) - P[q_n = 0]}{z}$$
We may now substitute this last in Eq. (5.81) to obtain

$$Q_{n+1}(z) = V(z)\left( P[q_n = 0] + \frac{Q_n(z) - P[q_n = 0]}{z} \right)$$

We now take the limit as n goes to infinity and recognize the limiting value
expressed in Eq. (5.36). We thus have

$$Q(z) = V(z)\left( P[\tilde q = 0] + \frac{Q(z) - P[\tilde q = 0]}{z} \right) \qquad (5.84)$$

Using $P[\tilde q = 0] = 1 - \rho$, and solving Eq. (5.84) for Q(z), we find

$$Q(z) = V(z)\,\frac{(1-\rho)(1 - 1/z)}{1 - V(z)/z} \qquad (5.85)$$

Finally, we multiply numerator and denominator of this last by (−z) and use
our result in Eq. (5.46) to arrive at the well-known equation that gives the
z-transform for the number of customers in the system,

$$Q(z) = B^*(\lambda - \lambda z)\,\frac{(1-\rho)(1-z)}{B^*(\lambda - \lambda z) - z} \qquad (5.86)$$
We shall refer to this as one form of the Pollaczek-Khinchin (P-K) transform
equation.†

The P-K transform equation readily yields the moments for the distribution
of the number of customers in the system. Using the moment-generating
properties of our transform expressed in Eqs. (5.50)-(5.52) we see that
certainly Q(1) = 1; when we attempt to set z = 1 in Eq. (5.86), we obtain
an indeterminate form,‡ and so we are required to use L'Hospital's rule. In
carrying out this operation we find that we must evaluate $\lim_{z\to 1} dB^*(\lambda - \lambda z)/dz$,
which was carried out in the previous section and shown to be
equal to $\rho$. This computation verifies that Q(1) = 1. In Exercise 5.5, the
reader is asked to show that $Q^{(1)}(1) = \bar q$.
† This formula was found in 1932 by A. Y. Khinchin [KHIN 32]. Shortly we will derive
two other equations (each of which follows from and implies this equation), which we also
refer to as P-K transform equations; these were studied by F. Pollaczek [POLL 30] in
1930 and by Khinchin in 1932. See also the footnote on p. 177.
‡ We note that the denominator of the P-K transform equation must always contain the
factor (1 − z) since B*(0) = 1.
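The moment-generating property just described is easy to check numerically. The Python sketch below is ours, not from the text; the choice of M/D/1 with $\bar x = 1$ and $\rho = 0.5$, the step size h, and the function names are all illustrative. It differentiates the P-K transform of Eq. (5.86) near z = 1 and compares the result with the P-K mean-value formula (5.63):

```python
import math

rho = 0.5
lam, xbar = rho, 1.0  # M/D/1 with unit service time (illustrative choice)

def B_star(s: float) -> float:
    """Laplace transform of a deterministic (constant) service time: e^{-s*xbar}."""
    return math.exp(-s * xbar)

def Q(z: float) -> float:
    """P-K transform equation, Eq. (5.86)."""
    b = B_star(lam - lam * z)
    return b * (1 - rho) * (1 - z) / (b - z)

# Q'(1) is the mean number in system; approximate it by a central difference
# (Q itself is indeterminate exactly at z = 1, so we stay a step h away).
h = 1e-5
q_mean_transform = (Q(1 + h) - Q(1 - h)) / (2 * h)
q_mean_pk = rho + rho**2 * (1 + 0.0) / (2 * (1 - rho))   # Eq. (5.63), Cb^2 = 0

print(q_mean_transform, q_mean_pk)  # the two agree
```

The numerical derivative of the transform reproduces the mean obtained earlier by direct expectation.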
Usually, the inversion of the P-K transform equation is difficult, and
therefore one settles for moments. However, the system M/M/1 yields very
nicely to inversion (and to almost everything else). Thus, by way of example,
we shall find its distribution. We have

$$B^*(s) = \frac{\mu}{s+\mu} \qquad \mathrm{M/M/1} \qquad (5.87)$$

Clearly, the region of convergence for this last form is $\mathrm{Re}(s) > -\mu$. Applying
this to the P-K transform equation we find

$$Q(z) = \frac{\mu}{\lambda - \lambda z + \mu}\cdot\frac{(1-\rho)(1-z)}{[\mu/(\lambda - \lambda z + \mu)] - z}$$
Noting that $\rho = \lambda/\mu$, we have

$$Q(z) = \frac{1-\rho}{1-\rho z} \qquad (5.88)$$

Equation (5.88) is the solution for the z-transform of the distribution of the
number of people in the system. We can reach a point such as this with many
service-time distributions B(x); for the exponential distribution we can
evaluate the inverse transform (by inspection!). We find immediately that

$$P[\tilde q = k] = (1-\rho)\rho^k \qquad \mathrm{M/M/1} \qquad (5.89)$$
This then is the familiar solution for M/M/1. If the reader refers back to
Eq. (3.23), he will find the same function for the probability of k customers
in the M/M/1 system. However, Eq. (3.23) gives the solution for all points
in time, whereas Eq. (5.89) gives the solution only at the imbedded Markov
points (namely, at the departure instants for customers). The fact that these
two answers are identical is no surprise, for two reasons: first, because we
told you so (we said that the imbedded Markov points give solutions that
are good at all points); and second, because we recognize that the M/M/1
system forms a continuous-time Markov chain.
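The imbedded chain can also be iterated numerically. The sketch below is ours, not from the text; $\rho$, the truncation level N, and the iteration count are illustrative. It applies the dynamics of Eq. (5.75) for M/M/1 (where the number of arrivals during one exponential service interval is geometric) and recovers the solution of Eq. (5.89):

```python
rho = 0.5
N = 60  # truncate the infinite state space at N customers (illustrative)

# P[v = j] for M/M/1: geometric with parameter p = rho/(1 + rho)
p = rho / (1 + rho)
a = [(1 - p) * p**j for j in range(N + 1)]

def step(pi):
    """One departure epoch of q_{n+1} = q_n - Delta_{q_n} + v_{n+1}."""
    nxt = [0.0] * (N + 1)
    for k, mass in enumerate(pi):
        src = max(k - 1, 0)               # one departure, unless system was empty
        for j, aj in enumerate(a):
            if src + j <= N:
                nxt[src + j] += mass * aj
    total = sum(nxt)
    return [v / total for v in nxt]       # renormalize the truncated tail

pi = [1.0] + [0.0] * N                    # start with an empty system
for _ in range(2000):
    pi = step(pi)

for k in range(4):
    print(k, pi[k], (1 - rho) * rho**k)   # iterated chain vs. Eq. (5.89)
```

The iterated distribution settles onto the geometric form $(1-\rho)\rho^k$.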
As a second example, we consider the system M/H₂/1, whose pdf for service
time was given in Eq. (5.68). By inspection we may find B*(s), which gives

$$B^*(s) = \left(\frac{1}{4}\right)\frac{\lambda}{s+\lambda} + \left(\frac{3}{4}\right)\frac{2\lambda}{s+2\lambda}
= \frac{7\lambda s + 8\lambda^2}{4(s+\lambda)(s+2\lambda)} \qquad (5.90)$$
where the plane of convergence is $\mathrm{Re}(s) > -\lambda$. From the P-K transform
equation we then have

$$Q(z) = \frac{(1-\rho)(1-z)\,[8 + 7(1-z)]}{8 + 7(1-z) - 4z(2-z)(3-z)}$$

Factoring the denominator and canceling the common term (1 − z), we have

$$Q(z) = \frac{(1-\rho)\,[1 - (7/15)z]}{[1 - (2/5)z][1 - (2/3)z]}$$
We now expand Q(z) in partial fractions, which gives

$$Q(z) = (1-\rho)\left(\frac{1/4}{1 - (2/5)z} + \frac{3/4}{1 - (2/3)z}\right) \qquad (5.91)$$

This last may be inverted by inspection (by now the reader should recognize
the sixth entry in Table I.2). Lastly, we note that the value for $\rho$ has already
been calculated at 5/8, and so for a final solution we have

$$p_k = P[\tilde q = k] = (1-\rho)\left[\frac{1}{4}\left(\frac{2}{5}\right)^k + \frac{3}{4}\left(\frac{2}{3}\right)^k\right] \qquad k = 0, 1, 2, \ldots \qquad (5.92)$$

It should not surprise us to find this sum of geometric terms for our solution.
Further examples will be found in the exercises. For now we terminate the
discussion of how many customers are in the system and proceed with the
calculation of how long a customer spends in the system.
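As a quick sanity check of this example, the Python sketch below (ours, not from the text; the evaluation point z = 0.5 and the truncation at 200 terms are illustrative) confirms that Eq. (5.92) is a proper distribution and that its generating function matches Eq. (5.91):

```python
rho = 5 / 8  # utilization of the M/H2/1 example, computed earlier in the text

def p_k(k: int) -> float:
    """Eq. (5.92): distribution of the number in system for M/H2/1."""
    return (1 - rho) * (0.25 * (2 / 5) ** k + 0.75 * (2 / 3) ** k)

def Q(z: float) -> float:
    """Eq. (5.91) in its pre-partial-fraction form."""
    return (1 - rho) * (1 - (7 / 15) * z) / ((1 - (2 / 5) * z) * (1 - (2 / 3) * z))

total = sum(p_k(k) for k in range(200))          # tail beyond 200 is negligible
pgf_at_half = sum(p_k(k) * 0.5**k for k in range(200))

print(total)                 # sums to 1: a proper distribution
print(pgf_at_half, Q(0.5))   # the series and the transform agree
```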
5.7. DISTRIBUTION OF WAITING TIME
Let us now set out to find the distribution of time spent in the system and
in the queue. These particular quantities are rather easy to obtain from our
earlier principal result, namely, the P-K transform equation (and, as we
have said, lead to expressions which share that name). Note that the order in
which customers receive service has so far not affected our results. Now,
however, we must use our assumption that the order of service is first-come-
first-served.

In order to proceed in the simplest possible fashion, let us reexamine the
derivation of the following equation:

$$V(z) = B^*(\lambda - \lambda z) \qquad (5.93)$$
Figure 5.7 Derivation of V(z) = B*(λ − λz).
In Figure 5.7, the reader is reminded of the structure from which we obtained
this equation. Recall that V(z) is the z-transform of the number of customer
arrivals in a particular interval, where the arrival process is Poisson at a rate $\lambda$
customers per second. The particular time interval involved happens to be
the service interval for $C_n$; this interval has distribution B(x) with Laplace
transform B*(s). The derived relation between V(z) and B*(s) is given in
Eq. (5.93). The important observation to make now is that a relationship
of this form must exist between any two random variables where the one
identifies the number of customer arrivals from a Poisson process and the
other describes the time interval over which we are counting these customer
arrivals. It clearly makes no difference what the interpretation of this time
interval is, only that we give the distribution of its length; in Eq. (5.93) it
just so happens that the interval involved is a service interval. Let us now
direct our attention to Figure 5.8, which concentrates on the time spent in the
system for $C_n$. In this figure we have traced the history of $C_n$. The interval
labeled $w_n$ identifies the time from when $C_n$ enters the queue until that
customer leaves the queue and enters service; it is clearly the waiting time in queue
for $C_n$. We have also identified the service time $x_n$ for $C_n$. We may thus
Figure 5.8 Derivation of Q(z) = S*(λ − λz).
identify the total time spent in system $s_n$ for $C_n$:

$$s_n = w_n + x_n \qquad (5.94)$$
We have earlier defined $q_n$ as the number of customers left behind upon the
departure of $C_n$. In considering a first-come-first-served system it is clear
that all those customers present upon the arrival of $C_n$ must depart before he
does; consequently, those customers that $C_n$ leaves behind him (a total of
$q_n$) must be precisely those who arrive during his stay in the system. Thus,
referring to Figure 5.8, we may identify those customers who arrive during
the time interval $s_n$ as being our previously defined random variable $q_n$.

The reader is now asked to compare Figures 5.7 and 5.8. In both cases we
have a Poisson arrival process at rate $\lambda$ customers per second. In Figure 5.7 we
inquire into the number of arrivals ($v_n$) during the interval whose duration
is given by $x_n$; in Figure 5.8 we inquire into the number of arrivals ($q_n$)
during an interval whose duration is given by $s_n$. We now define the
distribution for the total time spent in system for $C_n$ as

$$S_n(y) \triangleq P[s_n \le y] \qquad (5.95)$$

Since we are assuming ergodicity, we recognize immediately that the limit of
this distribution (as n goes to infinity) must be independent of n. We denote
this limit by S(y) and the limiting random variable by s [i.e., $S_n(y) \to S(y)$
and $s_n \to s$]. Thus

$$S(y) \triangleq P[s \le y] \qquad (5.96)$$

Finally, we define the Laplace transform of the pdf for total time in system as

$$S^*(s) \triangleq \int_0^{\infty} e^{-sy}\, dS(y) = E[e^{-ss}] \qquad (5.97)$$

With these definitions we go back to the analogy between Figures 5.7 and
5.8. Clearly, since $v_n$ is analogous to $q_n$, then V(z) must be analogous to
Q(z), since each describes the generating function for the respective number
distribution. Similarly, since $x_n$ is analogous to $s_n$, then B*(s) must be
analogous to S*(s). We have therefore, by direct analogy from Eq. (5.93),
that†

$$Q(z) = S^*(\lambda - \lambda z) \qquad (5.98)$$

Since we already have an explicit expression for Q(z) as given in the P-K
transform equation, we may therefore use that with Eq. (5.98) to give an
explicit expression for S*(s) as

$$S^*(\lambda - \lambda z) = B^*(\lambda - \lambda z)\,\frac{(1-\rho)(1-z)}{B^*(\lambda - \lambda z) - z} \qquad (5.99)$$

† This can be derived directly by the unconvinced reader in a fashion similar to that which
led to Eqs. (5.28) and (5.46).
This last equation is just crying for the obvious change of variable

$$s = \lambda - \lambda z$$

which gives

$$z = 1 - \frac{s}{\lambda}$$

Making this change of variable in Eq. (5.99) we then have

$$S^*(s) = B^*(s)\,\frac{s(1-\rho)}{s - \lambda + \lambda B^*(s)} \qquad (5.100)$$

Equation (5.100) is the desired explicit expression for the Laplace transform
of the distribution of total time spent in the M/G/1 system. It is given in
terms of known quantities derivable from the initial statement of the problem
[namely, the specification of the service-time distribution B(x) and the
parameters $\lambda$ and $\bar x$]. This is the second of the three equations that we refer
to as the P-K transform equation.
From Eq. (5.100) it is trivial to derive the Laplace transform of the
distribution of waiting time, which we shall denote by W*(s). We define the
PDF for $C_n$'s waiting time (in queue) to be $W_n(y)$, that is,

$$W_n(y) \triangleq P[w_n \le y] \qquad (5.101)$$

Furthermore, we define the limiting quantities (as $n \to \infty$), $W_n(y) \to W(y)$
and $w_n \to w$, so that

$$W(y) \triangleq P[w \le y] \qquad (5.102)$$

The corresponding Laplace transform is

$$W^*(s) \triangleq \int_0^{\infty} e^{-sy}\, dW(y) = E[e^{-sw}] \qquad (5.103)$$
From Eq. (5.94) we may derive the distribution of w from the distributions
of s and x (we drop subscript notation now since we are considering
equilibrium behavior). Since a customer's service time is independent of his
queueing time, we have that s, the time spent in system for some customer,
is the sum of two independent random variables: w (his queueing time) and x
(his service time). That is, Eq. (5.94) has the limiting form

$$s = w + x \qquad (5.104)$$

As derived in Appendix II, the Laplace transform of the pdf of a random
variable that is itself the sum of two independent random variables is equal
to the product of the Laplace transforms for the pdf of each. Consequently,
we have

$$S^*(s) = W^*(s)\, B^*(s)$$
Thus from Eq. (5.100) we obtain immediately that

$$W^*(s) = \frac{s(1-\rho)}{s - \lambda + \lambda B^*(s)} \qquad (5.105)$$

This is the desired expression for the Laplace transform of the queueing-
(waiting-) time distribution. Here we have the third equation that will be
referred to as the P-K transform equation.
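Equation (5.105) can be put to immediate numerical use: the mean wait is $-dW^*(s)/ds$ at $s = 0$. The sketch below is ours, not from the text; the M/D/1 choice ($B^*(s) = e^{-s\bar x}$ with $\bar x = 1$), the load, and the step size h are illustrative. It recovers the P-K mean of Eq. (5.70) from the transform:

```python
import math

lam, xbar = 0.5, 1.0
rho = lam * xbar

def W_star(s: float) -> float:
    """Eq. (5.105) for M/D/1, where B*(s) = exp(-s*xbar)."""
    return s * (1 - rho) / (s - lam + lam * math.exp(-s * xbar))

# Mean wait = -dW*/ds at s = 0, approximated by a central difference
# (W* is indeterminate exactly at s = 0, so we stay a step h away).
h = 1e-4
mean_wait = -(W_star(h) - W_star(-h)) / (2 * h)
pk_mean = rho * xbar / (2 * (1 - rho))   # Eq. (5.70) with Cb^2 = 0

print(mean_wait, pk_mean)  # the two agree
```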
Let us rewrite the P-K transform equation for waiting time as follows:

$$W^*(s) = \frac{1-\rho}{1 - \rho\left[\dfrac{1 - B^*(s)}{s\bar x}\right]} \qquad (5.106)$$

We recognize the bracketed term in the denominator of this equation to be
exactly the Laplace transform associated with the density of residual service
time from Eq. (5.11). Using our special notation for residual densities and
their transforms, we define

$$\hat B^*(s) \triangleq \frac{1 - B^*(s)}{s\bar x} \qquad (5.107)$$

and are therefore permitted to write

$$W^*(s) = \frac{1-\rho}{1 - \rho \hat B^*(s)} \qquad (5.108)$$
This observation is truly amazing, since we recognized at the outset that the
problem with the M/G/1 analysis was to take account of the expended
service time for the man in service. From that investigation we found that the
residual service time remaining for the customer in service had a pdf given
by $\hat b(x)$, whose Laplace transform is given in Eq. (5.107). In a sense there is a
poetic justice in its appearance at this point in the final solution. Let us
follow Beneš [BENE 56] in inverting this transform in terms of these residual
service-time densities. Equation (5.108) may be expanded as the following
power series:

$$W^*(s) = (1-\rho)\sum_{k=0}^{\infty} \rho^k\, [\hat B^*(s)]^k \qquad (5.109)$$
From Appendix I we know that the kth power of a Laplace transform
corresponds to the k-fold convolution of the inverse transform with itself. As in
Appendix I the symbol $\circledast$ is used to denote the convolution operator, and
we now choose to denote the k-fold convolution of a function f(x) with
itself by the use of a parenthetical subscript as follows:

$$f_{(k)}(x) \triangleq \underbrace{f(x) \circledast f(x) \circledast \cdots \circledast f(x)}_{k\text{-fold convolution}} \qquad (5.110)$$
Using this notation we may by inspection invert Eq. (5.109) to obtain the
waiting-time pdf, which we denote by $w(y) \triangleq dW(y)/dy$; it is given by

$$w(y) = \sum_{k=0}^{\infty} (1-\rho)\rho^k\, \hat b_{(k)}(y) \qquad (5.111)$$
This is a most intriguing result! It states that the waiting-time pdf is given by
a weighted sum of convolved residual service-time pdf's. The interesting
observation is that the weighting factor is simply $(1-\rho)\rho^k$, which we now
recognize to be the probability distribution for the number of customers in
an M/M/1 system. Tempting as it is to try to give a physical explanation for
the simplicity of this result and its relation to M/M/1, no satisfactory,
intuitive explanation has been found to explain this dramatic form. We
note that the contribution to the waiting-time density decreases geometrically
with $\rho$ in this series. Thus, for $\rho$ not especially close to unity, we expect the
high-order terms to be of less and less significance, and one practical
application of this equation is to provide a rapidly converging approximation to the
density of waiting time.
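The Beneš series is easy to evaluate in a concrete case. The Python sketch below is ours, not from the text; the M/M/1 parameters and the truncation at 80 terms are illustrative. For M/M/1 the residual service pdf is again exponential, so its k-fold convolution is the Erlang-k density, and the series of Eq. (5.111) can be summed term by term (the k = 0 term is the impulse $(1-\rho)u_0(y)$, which we keep separate and omit from the continuous part):

```python
import math

lam, mu = 0.5, 1.0
rho = lam / mu

def w_series(y: float, terms: int = 80) -> float:
    """Continuous part of Eq. (5.111): sum over k >= 1 of (1-rho) rho^k b_(k)(y),
    where b_(k) is the Erlang-k density with rate mu."""
    total = 0.0
    for k in range(1, terms):
        erlang_k = mu**k * y**(k - 1) * math.exp(-mu * y) / math.factorial(k - 1)
        total += (1 - rho) * rho**k * erlang_k
    return total

y = 2.0
# The series collapses to lam*(1-rho)*exp(-mu*(1-rho)*y), the continuous
# part of the M/M/1 waiting pdf derived later in this section (Eq. 5.122).
closed_form = lam * (1 - rho) * math.exp(-mu * (1 - rho) * y)
print(w_series(y), closed_form)  # the truncated series agrees closely
```

Only a handful of terms matter at moderate load, which is the rapid convergence noted above.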
So far in this section we have established two principal results, namely,
the P-K transform equations for time in system and time in queue given in
Eqs. (5.100) and (5.105), respectively. In the previous section we have already
given the first moment of these two random variables [see Eqs. (5.69) and
(5.70)]. We wish now to give a recurrence formula for the moments of the
waiting time. We denote the kth moment of the waiting time $E[w^k]$, as usual,
by $\overline{w^k}$. Takács [TAKA 62b] has shown that if $\overline{x^{i+1}}$ is finite, then so also are
$\bar w, \overline{w^2}, \ldots, \overline{w^i}$; we now adopt our slightly simplified notation for the ith
moment of service time as follows: $b_i \triangleq \overline{x^i}$. The Takács recurrence formula is

$$\overline{w^k} = \frac{\lambda}{1-\rho}\sum_{i=1}^{k}\binom{k}{i}\frac{b_{i+1}}{i+1}\,\overline{w^{k-i}} \qquad (5.112)$$

where $\overline{w^0} = 1$. From this formula we may write down the first couple of
moments for waiting time (and note that the first moment of waiting time
agrees with the P-K formula):

$$\bar w\ (= W) = \frac{\lambda b_2}{2(1-\rho)} \qquad (5.113)$$

$$\overline{w^2} = 2(\bar w)^2 + \frac{\lambda b_3}{3(1-\rho)} \qquad (5.114)$$
In order to obtain similar moments for the total time in system, that is,
$E[s^k]$, which we denote by $\overline{s^k}$, we need merely take advantage of Eq. (5.104);
from this equation we find

$$s^k = (w + x)^k \qquad (5.115)$$
Using the binomial expansion and the independence between waiting time
and service time for a given customer, we find

$$\overline{s^k} = \sum_{i=0}^{k}\binom{k}{i}\,\overline{w^i}\;\overline{x^{k-i}} \qquad (5.116)$$

Thus calculating the moments of the waiting time from Eq. (5.112) also
permits us to calculate the moments of time in system from this last equation.
In Exercise 5.25, we derive a relationship between $\overline{s^k}$ and the moments of
the number in system; the simplest of these is Little's result, and the others
are useful generalizations.

At the end of Section 3.2, we promised the reader that we would develop
the pdf for the time spent in the system for an M/M/1 queueing system. We
are now in a position to fulfill that promise. Let us in fact find both the
distribution of waiting time and the distribution of system time for customers in
M/M/1. Using Eq. (5.87) for the system M/M/1 we may calculate S*(s) from
Eq. (5.100) as follows:

$$S^*(s) = \frac{\mu}{s+\mu}\left[\frac{s(1-\rho)}{s - \lambda + \lambda\mu/(s+\mu)}\right]$$

which gives

$$S^*(s) = \frac{\mu(1-\rho)}{s + \mu(1-\rho)} \qquad \mathrm{M/M/1} \qquad (5.117)$$

This equation gives the Laplace transform of the pdf for time in the system,
which we denote, as usual, by $s(y) \triangleq dS(y)/dy$. Fortunately (as is usual with
the case M/M/1), we recognize the inverse of this transform by inspection.
Thus we have immediately that

$$s(y) = \mu(1-\rho)\,e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \mathrm{M/M/1} \qquad (5.118)$$

The corresponding PDF is given by

$$S(y) = 1 - e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \mathrm{M/M/1} \qquad (5.119)$$

Similarly, from Eq. (5.105) we may obtain W*(s) as

$$W^*(s) = \frac{s(1-\rho)}{s - \lambda + \lambda\mu/(s+\mu)} = \frac{(s+\mu)(1-\rho)}{s + (\mu-\lambda)} \qquad (5.120)$$

Before we can invert this we must place the right-hand side in proper form,
namely, where the numerator polynomial is of lower degree than the
denominator. We do this by dividing out the constant term and obtain

$$W^*(s) = (1-\rho) + \frac{\lambda(1-\rho)}{s + \mu(1-\rho)} \qquad \mathrm{M/M/1} \qquad (5.121)$$
Figure 5.9 Waiting-time distribution for M/M/1.
This expression gives the Laplace transform for the pdf of waiting time, which
we denote, as usual, by $w(y) \triangleq dW(y)/dy$. From entry 2 in Table I.4 of
Appendix I, we recognize that the inverse transform of $(1-\rho)$ must be an
impulse at the origin; thus by inspection we have

$$w(y) = (1-\rho)u_0(y) + \lambda(1-\rho)\,e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \mathrm{M/M/1} \qquad (5.122)$$

From this we find the PDF of waiting time simply as

$$W(y) = 1 - \rho\,e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \mathrm{M/M/1} \qquad (5.123)$$
This distribution is shown in Figure 5.9.

Observe that the probability of not queueing is merely $1 - \rho$; compare this
to Eq. (5.89) for the probability that $\tilde q = 0$. Clearly, they are the same; both
represent the probability of not queueing. This also was found in Eq. (5.40).
Recall further that the mean normalized queueing time was given in Eq.
(5.73); we obtain the same answer, of course, if we calculate this mean value
from (5.123). It is interesting to note for M/M/1 that all of the interesting
distributions are memoryless: this applies not only to the given interarrival-
time and service-time processes, but also to the distribution of the number in
the system given by Eq. (5.89), the pdf of time in the system given by Eq.
(5.118), and the pdf of waiting time* given by Eq. (5.122).
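As a numerical cross-check of Eq. (5.123) against the P-K mean, the Python sketch below (ours, not from the text; $\lambda = 0.75$, $\mu = 1$, and the integration grid are illustrative) computes the mean wait as the integral of the complementary PDF and compares it with $\rho\bar x/(1-\rho)$:

```python
import math

lam, mu = 0.75, 1.0
rho = lam / mu

def W_cdf(y: float) -> float:
    """Eq. (5.123): waiting-time PDF (distribution function) for M/M/1."""
    return 1 - rho * math.exp(-mu * (1 - rho) * y)

# E[w] = integral of (1 - W(y)) over y >= 0, via a simple Riemann sum
dy, ymax = 1e-3, 80.0
mean_wait = sum((1 - W_cdf(i * dy)) * dy for i in range(int(ymax / dy)))

print(W_cdf(0.0))   # probability of no wait: 1 - rho
print(mean_wait)    # close to rho/(mu*(1 - rho)), the P-K mean of Eq. (5.73)
```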
It turns out that it is possible to find the density given in Eq. (5.118) by a
more direct calculation, and we display this method here to indicate its
simplicity. Our point of departure is our early result given in Eq. (3.23) for
the probability of finding k customers in system upon arrival, namely,

$$p_k = (1-\rho)\rho^k \qquad (5.124)$$

* A simple exponential form for the tail of the waiting-time distribution (that is, the
probabilities associated with long waits) can be derived for the system M/G/1. We postpone
a discussion of this asymptotic result until Chapter 2, Volume II, in which we establish
this result for the more general system G/G/1.
We repeat again that this is the same expression we found in Eq. (5.89), and
we know by now that this result applies for all points in time. We wish to form
the Laplace transform of the pdf of total time in the system by considering
this Laplace transform conditioned on the number of customers found in the
system upon arrival of a new customer. We begin as generally as possible
and first consider the system M/G/1. In particular, we define the conditional
distribution

$$S(y \mid k) \triangleq P[\text{customer's total time in system} \le y \mid \text{he finds } k \text{ in system upon his arrival}]$$

We now define the Laplace transform of this conditional density:

$$S^*(s \mid k) \triangleq \int_0^{\infty} e^{-sy}\, dS(y \mid k) \qquad (5.125)$$
Now it is clear that if a customer finds no one in system upon his arrival,
then he must spend an amount of time in the system exactly equal to his own
service time, and so we have

$$S^*(s \mid 0) = B^*(s)$$

On the other hand, if our arriving customer finds exactly one customer
ahead of him, then he remains in the system for a time equal to the time to
finish the man in service, plus his own service time; since these two intervals
are independent, the Laplace transform of the density of this sum must
be the product of the Laplace transforms of each density, giving

$$S^*(s \mid 1) = \hat B^*(s)\, B^*(s)$$

where $\hat B^*(s)$ is, again, the transform for the pdf of residual service time.
Similarly, if our arriving customer finds k in front of him, then his total
system time is the sum of the k service times associated with each of these
customers plus his own service time. These k + 1 random variables are all
independent, and k of them are drawn from the same distribution B(x).
Thus we have the k-fold product of B*(s) with $\hat B^*(s)$, giving

$$S^*(s \mid k) = [B^*(s)]^k\, \hat B^*(s) \qquad k \ge 1 \qquad (5.126)$$
Equation (5.126) holds for M/G/1. Now for our M/M/1 problem, we have
that $B^*(s) = \mu/(s+\mu)$ and, similarly, for $\hat B^*(s)$ (memoryless); thus we have

$$S^*(s \mid k) = \left(\frac{\mu}{s+\mu}\right)^{k+1} \qquad (5.127)$$

In order to obtain S*(s) we need merely weight the transform $S^*(s \mid k)$ with
the probability $p_k$ of our customer finding k in the system upon his arrival,
namely,

$$S^*(s) = \sum_{k=0}^{\infty} S^*(s \mid k)\, p_k \qquad (5.128)$$
Substituting Eqs. (5.127) and (5.124) into this last we have

$$S^*(s) = \sum_{k=0}^{\infty} \left(\frac{\mu}{s+\mu}\right)^{k+1} (1-\rho)\rho^k = \frac{\mu(1-\rho)}{s + \mu(1-\rho)}$$
We recognize that Eq. (5.128) is identical to Eq. (5.117), and so the remaining
steps leading to Eq. (5.118) follow immediately. This demonstration of a
simpler method for calculating the distribution of system time in the M/M/1
queue demonstrates the following important fact: In the development of
Eq. (5.128) we were required to consider a sum of random variables, each
distributed by the same exponential distribution; the number of terms in
that sum was itself a random variable distributed geometrically. What we
found was that this geometric weighting on a sum of identically distributed
exponential random variables was itself exponential [see Eq. (5.118)]. This
result is true in general, namely, that a geometric sum of exponential random
variables is itself exponentially distributed.
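This collapse can be seen directly in the transform domain. The sketch below is ours, not from the text; $\mu$, $\rho$, the truncation, and the sample points are illustrative. It sums the geometric mixture of Eq. (5.128) numerically and compares it with the transform of a single exponential of rate $\mu(1-\rho)$:

```python
mu, rho = 1.0, 0.6

def mixture_transform(s: float, terms: int = 400) -> float:
    """Eq. (5.128) for M/M/1: sum_k (1-rho) rho^k (mu/(s+mu))^(k+1)."""
    return sum((1 - rho) * rho**k * (mu / (s + mu)) ** (k + 1)
               for k in range(terms))

def exponential_transform(s: float) -> float:
    """Laplace transform of an exponential pdf with rate mu*(1 - rho)."""
    r = mu * (1 - rho)
    return r / (s + r)

for s in (0.1, 1.0, 5.0):
    print(mixture_transform(s), exponential_transform(s))  # agree at every s
```

Since the two transforms coincide for all s, the geometric sum of exponentials is itself exponential.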
Let us now carry out the calculations for our M/H₂/1 example. Using the
expression for B*(s) given in Eq. (5.90), and applying this to the P-K
transform equation for waiting-time density, we have

$$W^*(s) = \frac{4s(1-\rho)(s+\lambda)(s+2\lambda)}{4(s-\lambda)(s+\lambda)(s+2\lambda) + 8\lambda^3 + 7\lambda^2 s}$$

This simplifies, upon factoring the denominator, to give

$$W^*(s) = \frac{(1-\rho)(s+\lambda)(s+2\lambda)}{[s + (3/2)\lambda][s + (1/2)\lambda]}$$

Once again, we must divide numerator by denominator to reduce the degree
of the numerator by one, giving

$$W^*(s) = (1-\rho) + \frac{\lambda(1-\rho)[s + (5/4)\lambda]}{[s + (3/2)\lambda][s + (1/2)\lambda]}$$

We may now carry out our partial-fraction expansion:

$$W^*(s) = (1-\rho)\left[1 + \frac{\lambda/4}{s + (3/2)\lambda} + \frac{3\lambda/4}{s + (1/2)\lambda}\right] \qquad (5.129)$$
This we may now invert by inspection to obtain the pdf for waiting time
(recalling that $\rho = 5/8$):

$$w(y) = \frac{3}{8}u_0(y) + \frac{3\lambda}{32}e^{-(3/2)\lambda y} + \frac{9\lambda}{32}e^{-(1/2)\lambda y} \qquad y \ge 0$$
This completes our discussion of the waiting-time and system-time
distributions for M/G/1. We now introduce the busy period, an important
stochastic process in queueing systems.
5.8. THE BUSY PERIOD AND ITS DURATION
We now choose to study queueing systems from a different point of view.
We make the observation that the system passes through alternating cycles
of busy period, idle period, busy period, idle period, and so on. Our purpose
in this section is to derive the distribution for the length of the idle period
and the length of the busy period for the M/G/1 queue.

As we already understand, the pertinent sequences of random variables
that drive a queueing system are the instants of arrival and the sequence of
service times. As usual let

$C_n$ = the nth customer
$\tau_n$ = arrival time of $C_n$
$t_n = \tau_n - \tau_{n-1}$ = interarrival time between $C_{n-1}$ and $C_n$
$x_n$ = service time for $C_n$
We now recall the important stochastic process U(t) as defined in Eq. (2.3):

U(t) ≜ the unfinished work in the system at time t
     ≜ the remaining time required to empty the system of all
       customers present at time t

This function U(t) is appropriately referred to as the unfinished work at time t
since it represents the interval of time that is required to empty the system
completely if no new customers are allowed to enter after the instant t. This
function is sometimes referred to as the "virtual" waiting time at time t
since, for a first-come-first-served system, it represents how long a (virtual)
customer would wait in queue if he entered at time t; however, this waiting-
time interpretation is good only for first-come-first-served disciplines, whereas
the unfinished-work interpretation applies for all disciplines. Behavior of
this function is extremely important in understanding queueing systems when
one studies them from the point of view of the busy period.
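The evolution of U(t) — a jump of $x_n$ at each arrival instant $\tau_n$, and a drain of 1 sec/sec in between — is simple enough to compute directly. The Python sketch below is ours, not from the text, and the sample path is illustrative (it is not the one in Figure 5.10); it extracts the busy-period start times and durations $Y_i$ from a given arrival/service sequence:

```python
arrivals = [1.0, 2.5, 3.0, 7.0, 9.0, 9.5]   # tau_n
services = [2.0, 1.0, 1.5, 1.0, 0.5, 1.0]   # x_n

def busy_periods(tau, x):
    """Return (start, duration) of each busy period of the U(t) sample path.
    An arrival that lands exactly as U(t) hits zero starts a new busy period."""
    periods, u, t, start = [], 0.0, 0.0, None
    for a, s in zip(tau, x):
        if start is not None and u <= a - t:   # U(t) reached zero before tau_n
            periods.append((start, t + u - start))
            start = None
        u = max(u - (a - t), 0.0) + s          # drain since t, then jump by x_n
        if start is None:
            start = a                          # idle period ends here
        t = a
    periods.append((start, t + u - start))     # close the final busy period
    return periods

print(busy_periods(arrivals, services))
# [(1.0, 4.5), (7.0, 1.0), (9.0, 0.5), (9.5, 1.0)]
```

The first busy period absorbs three arrivals and lasts exactly the total work brought in (4.5 sec), illustrating that a busy period ends only when all accumulated work is drained.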
Let us refer to Figure 5.10a, which shows the fashion in which busy
periods alternate with idle periods. The busy-period durations are denoted
by $Y_1, Y_2, Y_3, \ldots$ and the idle-period durations by $I_1, I_2, \ldots$. Customer $C_1$
Figure 5.10 (a) The unfinished work, the busy period, and (b) the customer history.
enters the system at time τ1 and brings with him an amount of work (that is, a required service time) of size x1. This customer finds the system idle and therefore his arrival terminates the previous idle period and initiates a new busy period. Prior to his arrival we assumed the system to be empty, and therefore the unfinished work was clearly zero. At the instant of the arrival of C1 the system backlog or unfinished work jumps to the size x1, since it would take this long to empty the system if we allowed no further entries beyond this instant. As time progresses from τ1 and the server works on C1, this unfinished work reduces at the rate of 1 sec/sec and so U(t) decreases with slope equal to −1. t2 sec later, at time τ2, we observe that C2 enters the system and forces the unfinished work U(t) to make another vertical jump of magnitude x2, equal to the service time for C2. The function then decreases again at a rate of 1 sec/sec until customer C3 enters at time τ3, forcing a vertical jump again of size x3. U(t) continues to decrease as the server works on the customers in the system until it reaches the instant τ1 + Y1, at which time he has successfully emptied the system of all customers and of all work. This
then terminates the busy period and initiates a new idle period. The idle period is terminated at time τ4 when C4 enters. This second busy period serves only one customer before the system goes idle again. The third busy period serves two customers. And so it continues. For reference we show in Figure 5.10b our usual double-time-axis representation for the same sequence of customer arrivals and service times, drawn to the same scale as Figure 5.10a and under an assumed first-come-first-served discipline. Thus we can say that U(t) is a function which has vertical jumps at the customer-arrival instants (these jumps equaling the service times for those customers) and decreases at a rate of 1 sec/sec so long as it is positive; when it reaches a value of zero, it remains there until the next customer arrival. This stochastic process is a continuous-state Markov process subject to discontinuous jumps; we have not seen such a process before.

Observe from Figure 5.10a that the departure instants may be obtained by extrapolating the linearly decreasing portions of U(t) down to the horizontal axis; at these intercepts, a customer departure occurs and a new customer service begins. Again we emphasize that this last observation is good only for the first-come-first-served system. What is important, however, is to observe that the function U(t) itself is independent of the order of service! The only requirement for this last statement to hold is that the server remain busy as long as some customer is in the system and that no customers depart before they are completely served; such a system is said to be "work conserving" (see Chapter 3, Volume II). The truth of this independence is evident when one considers the definition of U(t).
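The discipline-independence of U(t) is easy to see computationally. The following Python sketch evaluates U(t) from the arrival instants and service times alone; no service discipline enters the computation anywhere, which is precisely the point just made. The sample-path numbers are illustrative assumptions.

```python
# Sketch: evaluate the unfinished work U(t) for a given sample path.
# U(t) jumps by x_n at each arrival instant tau_n and drains toward
# zero at 1 sec/sec; no service discipline appears below, so any
# work-conserving order of service yields the same U(t).

def unfinished_work(arrivals, services, t):
    """U(t) given arrival instants tau_n and service times x_n."""
    u = 0.0
    last = 0.0
    for tau, x in zip(arrivals, services):
        if tau > t:
            break
        u = max(u - (tau - last), 0.0)  # drain at unit rate since last arrival
        u += x                          # vertical jump of size x_n
        last = tau
    return max(u - (t - last), 0.0)

arrivals = [1.0, 2.5, 3.0]   # tau_1, tau_2, tau_3 (assumed sample path)
services = [2.0, 1.0, 1.5]   # x_1, x_2, x_3

print(unfinished_work(arrivals, services, 3.0))  # → 2.5
```

At t = 3.0 the backlog is the residue of the first two jobs plus the jump x3 = 1.5; well after the busy period ends the function returns 0.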
Now for the idle-period and busy-period distributions. Recall

    A(t) = P[t_n ≤ t] = 1 − e^(−λt),   t ≥ 0        (5.130)
    B(x) = P[x_n ≤ x]                               (5.131)

where A(t) and B(x) are each independent of n. Our interest lies in the two following distributions:

    F(y) ≜ P[I_n ≤ y] ≜ idle-period distribution
    G(y) ≜ P[Y_n ≤ y] ≜ busy-period distribution    (5.132)

The calculation of the idle-period distribution is trivial for the system M/G/1. Observe that when the system terminates a busy period, a new idle period must begin, and this idle period will terminate immediately upon the arrival of the next customer. Since we have a memoryless interarrival-time distribution, the time until the next customer arrival is distributed according to Eq. (5.130), and therefore we have

    F(y) = 1 − e^(−λy),   y ≥ 0                     (5.133)

So much for the idle-time distribution in M/G/1.
Figure 5.11 The busy period, last-come-first-served: (a) decomposition of the busy period generated by C1 into the sub-busy periods generated by C4, C3, and C2; (b) number in the system; (c) customer history.
Now for the busy-period distribution; this is not quite so simple. The reader is referred to Figure 5.11. In part (a) of this figure we once again observe the unfinished work U(t). We assume that the system is empty just prior to the instant τ1, at which time customer C1 initiates a busy period of duration Y. His service time is equal to x1. It is clear that this customer will depart from the system at time τ1 + x1. During his service other customers
may arrive to the system and it is they who will continue the busy period. For the function shown, three other customers (C2, C3, and C4) arrive during the interval of C1's service. We now make use of a brilliant device due to Takács [TAKA 62a]. In particular, we choose to permute the order in which customers are served so as to create a last-come-first-served (LCFS) queueing discipline* (recall that the duration of a busy period is independent of the order in which customers are served). The motivation for the reordering of customers will soon be apparent. At the departure of C1 we then take into service the newest customer, which in our example is C4. In addition, since all future arrivals during this busy period must be served before (LCFS!) any customers (besides C4) who arrived during C1's service (in this case C2 and C3), then we may as well consider them to be (temporarily) out of the system. Thus, when C4 enters service, it is as if he initiated a new busy period, which we will refer to as a "sub-busy period"; the sub-busy period generated by C4 will have a duration X4 exactly as long as it takes to serve C4 and all those who enter the system to find it busy (remember that C2 and C3 are not considered to be in the system at this time). Thus in Figure 5.11a we show the sub-busy period generated by C4, during which customers C4, C6, and C5 get serviced in that order. At time τ1 + x1 + X4 this sub-busy period ends, and we now continue the last-come-first-served order of service by bringing C3 back into the system. It is clear that he may be considered as generating his own sub-busy period, of duration X3, during which all of his "descendants" receive service in the last-come-first-served order (namely, C3, C7, C8, and C9). Finally, then, the system empties again, we reintroduce C2, and permit his sub-busy period (of length X2) to run its course (and complete the major busy period), in which customers get serviced in the order C2, C10, and finally C11.
Figure 5.11a shows that the contour of any sub-busy period is identical with the contour of the main busy period over the same time interval and is merely shifted down by a constant amount; this shift, in fact, is equal to the summed service times of all those customers who arrived during C1's service time and who have not yet been allowed to generate their own sub-busy periods. The details of customer history are shown in Figure 5.11c, and the total number in the system at any time under this discipline is shown in Figure 5.11b. Thus, as far as the queueing system is concerned, it is strictly a last-come-first-served system from start to finish. However, our analysis is simplified if we focus upon the sub-busy periods and observe that each behaves statistically in a fashion identical to the major busy period generated by C1. This is clear since all the sub-busy periods as well as the major busy period

* This is a "push-down" stack. This is only one of many permutations that "work"; it happens that LCFS is convenient for pedagogical purposes.
are each initiated by a single customer, whose service times are all drawn independently from the same distribution; each sub-busy period continues until the system catches up with the work load, in the sense that the unfinished-work function U(t) drops to zero. Thus we recognize that the random variables {X_k} are each independent and identically distributed and have the same distribution as Y, the duration of the major busy period.

In Figure 5.11c the reader may follow the customer history in detail; the solid black region in this figure identifies the customer being served during that time interval. At each customer departure the server "floats up" to the top of the customer contour to engage the most recent arrival at that time; occasionally the server "floats down" to the customer directly below him, such as at the departure of C6. The server may truly be thought of as floating up to the highest customer, there to be held by him until his departure, and so on. Occasionally, however, we see that our server "falls down" through a gap in order to pick up the most recent arrival to the system, for example, at the departure of C5. It is at such instants that new sub-busy periods begin, and only when the server falls down to hit the horizontal axis does the major busy period terminate.
Our point of view is now clear: the duration of a busy period Y is the sum of 1 + ν random variables, the first of which is the service time for C1 and the remainder of which are random variables describing the durations of the sub-busy periods, each of which is distributed as a busy period itself. Here ν is a random variable equal to the number of customer arrivals during C1's service interval. Thus we have the important relation

    Y = x1 + X_(ν+1) + X_ν + ⋯ + X3 + X2            (5.134)

We define the busy-period distribution as G(y):

    G(y) ≜ P[Y ≤ y]                                 (5.135)
We also know that x1 is distributed according to B(x) and that each X_k is distributed as G(y), from our earlier comments. We next derive the Laplace transform for the pdf associated with Y, which we define, as usual, by

    G*(s) ≜ ∫₀^∞ e^(−sy) dG(y)                      (5.136)

Once again we remind the reader that these transforms may also be expressed as expectation operators, namely,

    G*(s) = E[e^(−sY)]

Let us now take advantage of the powerful technique of conditioning used so often in probability theory; this technique permits one to write down the probability associated with a complex event by conditioning that event on
enough given conditions so that the conditional probability may be written down by inspection. The unconditional probability is then obtained by multiplying by the probability of each condition and summing over all mutually exclusive and exhaustive conditions. In our case we choose to condition Y on two events: the duration of C1's service and the number of customer arrivals during his service. With this point of view we then calculate the following conditional transform:

    E[e^(−sY) | x1 = x, ν = k] = E[e^(−s(x + X_(k+1) + ⋯ + X2))]
                               = E[e^(−sx) e^(−sX_(k+1)) ⋯ e^(−sX2)]

Since the sub-busy periods have durations that are independent of each other, we may write this last as

    E[e^(−sY) | x1 = x, ν = k] = E[e^(−sx)] E[e^(−sX_(k+1))] ⋯ E[e^(−sX2)]

Since x is a given constant we have E[e^(−sx)] = e^(−sx), and further, since the sub-busy periods are identically distributed with common transform G*(s), we have

    E[e^(−sY) | x1 = x, ν = k] = e^(−sx) [G*(s)]^k

Since ν represents the number of arrivals during an interval of length x, ν must have a Poisson distribution whose mean is λx. We may therefore remove the condition on ν as follows:

    E[e^(−sY) | x1 = x] = Σ_(k=0)^∞ E[e^(−sY) | x1 = x, ν = k] P[ν = k]
                        = Σ_(k=0)^∞ e^(−sx) [G*(s)]^k ((λx)^k/k!) e^(−λx)
                        = e^(−x[s + λ − λG*(s)])

Similarly, we may remove the condition on x1 by integrating with respect to B(x), finally to obtain G*(s) thusly:

    G*(s) = ∫₀^∞ e^(−x[s + λ − λG*(s)]) dB(x)

This last we recognize as the transform of the pdf for service time evaluated at a point equal to the bracketed term in the exponent, that is,

    G*(s) = B*[s + λ − λG*(s)]                      (5.137)

This major result gives the transform for the M/G/1 busy-period distribution (for any order of service), expressed as a functional equation (which is usually impossible to invert). It was obtained by identifying sub-busy periods within the busy period, all of which have the same distribution as the busy period itself.
Later in this chapter we give an explicit expression for the busy-period PDF G(y), but unfortunately it is not in closed form [see Eq. (5.169)]. We point out, however, that it is possible to solve Eq. (5.137) numerically for G*(s) at any given value of s through the following iterative equation:

    G*_(n+1)(s) = B*[s + λ − λG*_n(s)]              (5.138)

in which we choose 0 ≤ G*_0(s) ≤ 1; for ρ = λx̄ < 1 this iterative scheme converges to G*(s), and so one may attempt a numerical inversion of these calculated values if so desired.
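As a small numerical illustration, the iteration of Eq. (5.138) can be carried out for M/M/1, where the quadratic solved later in this section provides a closed-form check. The parameter values λ = 0.5, μ = 1 are illustrative assumptions.

```python
# Sketch: solve the functional equation (5.137) by the iteration (5.138)
# for M/M/1, where B*(s) = mu/(s + mu). For rho < 1 the iterates converge
# to G*(s), whose M/M/1 closed form is
#   (mu + lam + s - sqrt((mu + lam + s)**2 - 4*mu*lam)) / (2*lam).
import math

lam, mu = 0.5, 1.0            # rho = 0.5 (illustrative assumption)

def B_star(s):
    return mu / (s + mu)

def G_star(s, iterations=200):
    g = 1.0                   # any starting value in [0, 1]
    for _ in range(iterations):
        g = B_star(s + lam - lam * g)   # Eq. (5.138)
    return g

s = 0.7
closed = (mu + lam + s - math.sqrt((mu + lam + s)**2 - 4*mu*lam)) / (2*lam)
print(abs(G_star(s) - closed) < 1e-10)  # → True
```

Because the iteration is a contraction for ρ < 1, a few hundred iterations reach machine precision.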
In view of these inversion difficulties we obtain what we can from our functional equation, and one calculation we can make is for the moments of the busy period. We define

    g_k ≜ E[Y^k]                                    (5.139)

as the kth moment of the busy-period distribution, and we intend to express the first few moments in terms of the moments of the service-time distribution, namely the x̄^k (where x̄^k denotes the kth moment of the service time). As usual we have

    g_k = (−1)^k G*^(k)(0)
    x̄^k = (−1)^k B*^(k)(0)                          (5.140)

From Eq. (5.137) we then obtain directly

    g1 = −G*^(1)(0) = −B*^(1)(0) (d/ds)[s + λ − λG*(s)] |_(s=0)
       = −B*^(1)(0)[1 − λG*^(1)(0)]

[note, for s = 0, that s + λ − λG*(s) = 0] and so

    g1 = x̄(1 + λg1)

Solving for g1 and recalling that ρ = λx̄, we then have

    g1 = x̄/(1 − ρ)                                  (5.141)
If we compare this last result with Eq. (3.26), we find that the average length of a busy period for the system M/G/1 is equal to the average time a customer spends in an M/M/1 system, and depends only on λ and x̄.
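The decomposition behind Eq. (5.134) also gives a direct way to sample busy periods: draw the initiator's service time, then add one independently sampled busy period per arrival during it. The Python sketch below does this for M/M/1 and compares the sample mean with x̄/(1 − ρ) = 2; the parameters, seed, and sample size are illustrative assumptions.

```python
# Monte Carlo sketch of the sub-busy-period decomposition: a busy period
# is the initiator's service time x1 plus one i.i.d. sub-busy period per
# arrival during x1. Sample mean is compared with g1 = xbar/(1 - rho)
# from Eq. (5.141).
import random

lam, mu = 0.5, 1.0                     # rho = 0.5, xbar = 1 (assumptions)
rng = random.Random(1)

def busy_period():
    x = rng.expovariate(mu)            # service time of the initiator
    y = x
    # number of arrivals during the initiator's service: Poisson(lam * x)
    k = 0
    t = rng.expovariate(lam)
    while t < x:
        k += 1
        t += rng.expovariate(lam)
    for _ in range(k):                 # each spawns an i.i.d. sub-busy period
        y += busy_period()
    return y

n = 20000
sample_mean = sum(busy_period() for _ in range(n)) / n
g1 = 1.0 / (1.0 - 0.5)                 # xbar/(1 - rho) = 2
print(round(sample_mean, 2))
```

With 20,000 samples the estimate lands well within a few percent of g1 = 2.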
Let us now chase down the second moment of the busy period. Proceeding from Eqs. (5.140) and (5.137) we obtain

    g2 = G*^(2)(0) = (d/ds){B*^(1)(s + λ − λG*(s))[1 − λG*^(1)(s)]} |_(s=0)
       = B*^(2)(0)[1 − λG*^(1)(0)]² + B*^(1)(0)[−λG*^(2)(0)]
and so

    g2 = x̄²(1 + λg1)² + x̄λg2

Solving for g2 and using our result for g1, we have

    g2 = x̄²(1 + λg1)²/(1 − λx̄)
       = x̄²[1 + λx̄/(1 − ρ)]²/(1 − ρ)

and so finally

    g2 = x̄²/(1 − ρ)³                                (5.142)
This last result gives the second moment of the busy period, and it is interesting to note the cube in the denominator; this effect does not occur when one calculates the second moment of the wait in the system, where only a square power appears [see Eq. (5.114)]. We may now easily calculate the variance of the busy period, denoted by σ_Y², as follows:

    σ_Y² = g2 − (g1)²
         = x̄²/(1 − ρ)³ − (x̄)²/(1 − ρ)²

and so

    σ_Y² = (σ_b² + ρ(x̄)²)/(1 − ρ)³                  (5.143)

where σ_b² is the variance of the service-time distribution.
Proceeding as above we find that

    g3 = x̄³/(1 − ρ)⁴ + 3λ(x̄²)²/(1 − ρ)⁵

and

    g4 = x̄⁴/(1 − ρ)⁵ + 10λx̄²x̄³/(1 − ρ)⁶ + 15λ²(x̄²)³/(1 − ρ)⁷

We observe that the exponent of the factor (1 − ρ) goes up by 2 in the dominant term of each succeeding moment of the busy period, and this determines the behavior as ρ → 1.
We now consider some examples of inverting Eq. (5.137). We begin with the M/M/1 queueing system. We have

    B*(s) = μ/(s + μ)                               (5.144)
which we apply to Eq. (5.137) to obtain

    G*(s) = μ/[s + λ − λG*(s) + μ]

or

    λ[G*(s)]² − (μ + λ + s)G*(s) + μ = 0

Solving for G*(s), and restricting our solution to the required (stable) case for which |G*(s)| ≤ 1 for Re(s) ≥ 0, gives

    G*(s) = (μ + λ + s − [(μ + λ + s)² − 4μλ]^(1/2))/(2λ)

This equation may be inverted (by referring to transform tables) to obtain the pdf for the busy period, namely,

    g(y) ≜ dG(y)/dy = (1/(y√ρ)) e^(−(λ+μ)y) I1[2y(λμ)^(1/2)]    (5.145)

where I1 is the modified Bessel function of the first kind of order one.
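Equation (5.145) is straightforward to evaluate numerically. The sketch below implements I1 by its power series and checks that g(y) integrates to 1 and has mean g1 = 1/(μ − λ) for ρ < 1; the parameters, series length, and integration grid are illustrative assumptions.

```python
# Sketch: evaluate the M/M/1 busy-period pdf of Eq. (5.145) with
# I1(z) = sum_k (z/2)**(2k+1) / (k! (k+1)!), then integrate numerically.
import math

lam, mu = 0.5, 1.0                      # rho = 0.5 (assumption)

# precomputed series coefficients 1/(k! (k+1)!)
_coef = [1.0 / (math.factorial(k) * math.factorial(k + 1)) for k in range(80)]

def I1(z):
    half = z / 2.0
    return sum(c * half**(2*k + 1) for k, c in enumerate(_coef))

def g_pdf(y):
    rho = lam / mu
    return math.exp(-(lam + mu) * y) * I1(2.0 * y * math.sqrt(lam * mu)) / (y * math.sqrt(rho))

h = 0.01
ys = [h * (i + 0.5) for i in range(8000)]     # midpoint rule on (0, 80]
total = h * sum(g_pdf(y) for y in ys)
mean = h * sum(y * g_pdf(y) for y in ys)
print(round(total, 2), round(mean, 2))        # → 1.0 2.0
```

The total mass comes out to 1 (the busy period ends with probability one for ρ < 1), and the mean matches x̄/(1 − ρ) = 1/(μ − λ) = 2.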
Consider the limit

    lim_(s→0⁺) G*(s) = lim_(s→0⁺) ∫₀^∞ e^(−sy) dG(y)    (5.146)

Examining the right side of this equation, we observe that this limit is merely the probability that the busy period is finite, which is equivalent to the probability that the busy period ends. Clearly, for ρ < 1 the busy period ends with probability one, but Eq. (5.146) provides information in the case ρ > 1. We have

    P[busy period ends] = G*(0)
Let us examine this computation in the case of the system M/M/1. We have, directly from our expression for G*(s),

    G*(0) = (μ + λ − [(μ + λ)² − 4μλ]^(1/2))/(2λ)

and, since (μ + λ)² − 4μλ = (μ − λ)², this gives

    G*(0) = { 1      ρ ≤ 1
            { 1/ρ    ρ > 1

Thus

    P[busy period ends in M/M/1] = { 1      ρ ≤ 1
                                   { 1/ρ    ρ > 1        (5.147)
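Equation (5.147) can also be observed numerically: iterating Eq. (5.138) at s = 0 from G*_0 = 0 mimics the extinction-probability recursion of a branching process and converges to 1/ρ when ρ > 1. The values λ = 2, μ = 1 below are illustrative assumptions.

```python
# Sketch: the probability that the busy period ends, via Eq. (5.138)
# at s = 0 for M/M/1 with rho = 2 > 1; Eq. (5.147) predicts 1/rho.
lam, mu = 2.0, 1.0             # rho = 2 (illustrative assumption)

def B_star(s):
    return mu / (s + mu)

g = 0.0
for _ in range(500):
    g = B_star(lam - lam * g)  # Eq. (5.138) with s = 0
print(round(g, 6))             # → 0.5, i.e., 1/rho
```

Starting from 0, the iteration converges to the smaller root 1/ρ of the fixed-point equation, exactly as the extinction probability of a branching process does.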
The busy-period pdf given in Eq. (5.145) is much more complex than we would have wished for this simplest of interesting queueing systems! It is indicative of the fact that Eq. (5.137) is usually uninvertible for more general service-time distributions.

As a second example, let us see how well we can do with our M/H2/1 example. Using the expression for B*(s) in our functional equation for the busy period we get

    G*(s) = (8λ² + 7λ[s + λ − λG*(s)]) / (4[s + λ − λG*(s) + λ][s + λ − λG*(s) + 2λ])

which leads directly to the cubic equation

    4[G*(s)]³ − 4(2s + 5)[G*(s)]² + (4s² + 20s + 31)G*(s) − (15 + 7s) = 0
This last is not easily solved, and so we stall at this point in our attempt to invert G*(s). We will return to the functional equation for the busy period when we discuss priority queueing in Chapter 3, Volume II. This will lead us to the concept of a delay cycle, which is a slight generalization of the busy-period analysis we have just carried out and which greatly simplifies priority queueing calculations.
5.9. THE NUMBER SERVED IN A BUSY PERIOD
In this section we discuss the distribution of the number of customers served in a busy period. The development parallels that of the previous section very closely, both in the spirit of the derivation and in the nature of the result we will obtain.

Let N_bp be the number of customers served in a busy period. We are interested in its probability distribution f_n, defined as

    f_n = P[N_bp = n]                               (5.148)

The best we can do is to obtain a functional equation for its z-transform, defined as

    F(z) ≜ Σ_(n=1)^∞ f_n z^n                        (5.149)

The term for n = 0 is omitted from this definition since at least one customer must be served in a busy period. We recall that the random variable ν represents the number of arrivals during a service period and that its z-transform V(z) obeys the equation derived earlier, namely,

    V(z) = B*(λ − λz)                               (5.150)
Proceeding as we did for the duration of the busy period, we condition our argument on the event ν = k; that is, we assume that k customers arrive
during the service of C1. Moreover, we recognize immediately that each of these arrivals will generate a sub-busy period, and the number of customers served in each of these sub-busy periods will have a distribution given by f_n. Let the random variable M_i denote the number of customers served in the ith sub-busy period. We may then write down immediately

    E[z^(N_bp) | ν = k] = E[z^(1 + M1 + M2 + ⋯ + M_k)]

and since the M_i are independent and identically distributed we have

    E[z^(N_bp) | ν = k] = z ∏_(i=1)^k E[z^(M_i)]

But each of the M_i is distributed exactly the same as N_bp and, therefore,

    E[z^(N_bp) | ν = k] = z[F(z)]^k

Removing the condition on the number of arrivals we have

    F(z) = Σ_(k=0)^∞ E[z^(N_bp) | ν = k] P[ν = k]
         = z Σ_(k=0)^∞ P[ν = k][F(z)]^k

From Eq. (5.44) we recognize this last summation as V(z) (the z-transform associated with ν) evaluated at the point F(z); thus we have

    F(z) = zV[F(z)]                                 (5.151)

But from Eq. (5.150) we may finally write

    F(z) = zB*[λ − λF(z)]                           (5.152)

This functional equation for the z-transform of the number served in a busy period is not unlike the equation given earlier in Eq. (5.137).

From this fundamental equation we may easily pick off the moments of the number served in a busy period. We define the kth moment of the number served in a busy period as h_k. We recognize then

    h1 = F^(1)(1)                                   (5.153)

Carrying out this differentiation of Eq. (5.152) at z = 1, we obtain

    h1 = B*^(1)(0)[−λF^(1)(1)] + B*(0)              (5.154)

Thus

    h1 = 1 + ρh1

which immediately gives us

    h1 = 1/(1 − ρ)                                  (5.155)
We further recognize

    F^(2)(1) = h2 − h1

Carrying out this computation in the usual way, we obtain the second moment and the variance of the number served in the busy period:

    h2 = (2ρ(1 − ρ) + λ²x̄²)/(1 − ρ)³ + 1/(1 − ρ)

    σ_(N_bp)² = (ρ(1 − ρ) + λ²x̄²)/(1 − ρ)³
As an example we again use the simple case of the M/M/1 system to solve for F(z) from Eq. (5.152). Carrying this out we find

    F(z) = zμ/[μ + λ − λF(z)]

or

    λF²(z) − (μ + λ)F(z) + μz = 0                   (5.156)

Solving,

    F(z) = ((1 + ρ)/(2ρ))[1 − (1 − 4ρz/(1 + ρ)²)^(1/2)]

Fortunately, it turns out that Eq. (5.156) can be inverted to obtain f_n, the probability of having n served in the busy period:

    f_n = (1/n) (2n−2 choose n−1) ρ^(n−1)/(1 + ρ)^(2n−1)    (5.157)
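The terms of Eq. (5.157) can be checked numerically against what we already know: they must sum to one (for ρ < 1) and have mean h1 = 1/(1 − ρ) from Eq. (5.155). The value ρ = 0.5 and the truncation point are illustrative assumptions.

```python
# Sketch: numeric check of the M/M/1 number-served distribution (5.157):
# f_n = (1/n) C(2n-2, n-1) rho**(n-1) / (1+rho)**(2n-1)
from math import comb

rho = 0.5                               # illustrative assumption

def f(n):
    return comb(2*n - 2, n - 1) * rho**(n - 1) / (n * (1 + rho)**(2*n - 1))

total = sum(f(n) for n in range(1, 400))
mean = sum(n * f(n) for n in range(1, 400))
print(round(total, 4), round(mean, 4))  # → 1.0 2.0
```

The terms decay geometrically at rate 4ρ/(1 + ρ)² < 1 for ρ ≠ 1, so the truncation at n = 400 is far more than enough.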
As a second example we consider the system M/D/1. For this system we have b(x) = u0(x − x̄), and from entry three in Table 1.4 we have immediately that

    B*(s) = e^(−sx̄)

Using this in our functional equation we obtain

    F(z) = z e^(−ρ) e^(ρF(z))                       (5.158)

where as usual ρ = λx̄. It is convenient to make the substitution u = zρe^(−ρ) and H(u) = ρF(z), which then permits us to rewrite Eq. (5.158) as

    u = H(u)e^(−H(u))

The solution to this equation may be obtained [RIOR 62], and then our original function may be evaluated to give

    F(z) = Σ_(n=1)^∞ ((nρ)^(n−1)/n!) e^(−nρ) z^n
From this power series we recognize immediately that the distribution for the number served in the M/D/1 busy period is given explicitly by

    f_n = ((nρ)^(n−1)/n!) e^(−nρ)                   (5.159)
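Equation (5.159) is the Borel distribution; a quick numeric check (done in logarithms to avoid overflow) confirms that its terms sum to one and have mean 1/(1 − ρ), in agreement with Eq. (5.155). The value ρ = 0.5 and the truncation point are illustrative assumptions.

```python
# Sketch: numeric check of the M/D/1 (Borel) distribution (5.159):
# f_n = (n*rho)**(n-1) * exp(-n*rho) / n!, computed in log form.
import math

rho = 0.5                               # illustrative assumption

def f(n):
    # log of (n*rho)**(n-1) - n*rho - log(n!)
    return math.exp((n - 1) * math.log(n * rho) - n * rho - math.lgamma(n + 1))

total = sum(f(n) for n in range(1, 300))
mean = sum(n * f(n) for n in range(1, 300))
print(round(total, 4), round(mean, 4))  # → 1.0 2.0
```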
For the case of a constant service time we know that if the busy period serves n customers then it must be of duration nx̄, and therefore we may immediately write down the solution for the M/D/1 busy-period distribution as

    G(y) = Σ_(n=1)^([y/x̄]) ((nρ)^(n−1)/n!) e^(−nρ)  (5.160)

where [y/x̄] is the largest integer not exceeding y/x̄.

5.10. FROM BUSY PERIODS TO WAITING TIMES
We had mentioned in the opening paragraphs of this chapter that waiting times could be obtained from the busy-period analysis. We are now in a position to fulfill that claim. As the reader may be aware (and as we shall show in Chapter 3, Volume II), whereas the distribution of the busy-period duration is independent of the queueing discipline, the distribution of waiting time is strongly dependent upon the order of service. Therefore, in this section we consider only first-come-first-served M/G/1 systems. Since we restrict ourselves to this discipline, the reordering of customers used in Section 5.8 is no longer permitted. Instead, we must now decompose the busy period into a sequence of intervals whose lengths are dependent random variables, as follows. Consider Figure 5.12, in which we show a single busy period for the first-come-first-served system [in terms of the unfinished work U(t)]. Here we see that customer C1 initiates the busy period upon his arrival at time τ1. The first interval we consider is his service time x1, which we denote by X0; during this interval more customers arrive (in this case C2 and C3).
All those customers who arrive during X0 are served during the next interval, whose duration is X1 and which equals the sum of the service times of all arrivals during X0 (in this case C2 and C3). At the expiration of X1, we then create a new interval of duration X2 in which all customers arriving during X1 are served, and so on. Thus X_i is the length of time required to service all those customers who arrive during the previous interval, whose duration is X_(i−1). If we let n_i denote the number of customer arrivals during the interval X_i, then n_i customers are served during the interval X_(i+1). We let n_0 equal the number of customers who arrive during X0 (the first customer's service time).
Figure 5.12 The busy period: first-come-first-served.
Thus we see that Y, the duration of the total busy period, is given by

    Y = Σ_(i=0)^∞ X_i

where we permit the possibility of an infinite sequence of such intervals. Clearly, we define X_i = 0 for those intervals that fall beyond the termination of this busy period; for ρ < 1 we know that with probability 1 there will be a finite i0 for which X_(i0) (and all its successors) will be 0. Furthermore, we know that X_(i+1) will be the sum of n_i service intervals [each of which is distributed as B(x)].
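The interval cascade is easy to simulate: X0 is the initiator's service time and each X_(i+1) collects the service times of the Poisson arrivals during X_i, so that E[X_i] = ρ^i x̄ and, summing, E[Y] = x̄/(1 − ρ). The Python sketch below checks these means for M/M/1; the parameters, seed, and sample size are illustrative assumptions.

```python
# Monte Carlo sketch of the FCFS interval cascade X0, X1, X2, ...:
# E[X1] should be rho*xbar and E[sum X_i] = xbar/(1 - rho).
import random

lam, mu = 0.5, 1.0                      # rho = 0.5, xbar = 1 (assumptions)
rng = random.Random(7)

def intervals():
    xs = [rng.expovariate(mu)]          # X0 = x1
    while xs[-1] > 0:
        # count the Poisson arrivals during the previous interval
        n = 0
        t = rng.expovariate(lam)
        while t < xs[-1]:
            n += 1
            t += rng.expovariate(lam)
        # next interval = total service time of those arrivals
        xs.append(sum(rng.expovariate(mu) for _ in range(n)))
    return xs

runs = [intervals() for _ in range(20000)]
x1_mean = sum(r[1] for r in runs) / len(runs)
y_mean = sum(sum(r) for r in runs) / len(runs)
print(round(x1_mean, 2), round(y_mean, 2))
```

The two estimates should land near ρx̄ = 0.5 and x̄/(1 − ρ) = 2, respectively.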
We now define X_i(y) to be the PDF for X_i, that is,

    X_i(y) = P[X_i ≤ y]

and the corresponding Laplace transform of the associated pdf to be

    X_i*(s) ≜ ∫₀^∞ e^(−sy) dX_i(y)
            = E[e^(−sX_i)]

We wish to derive a recurrence relation among the X_i*(s). This derivation is much like that in Section 5.8, which led up to Eq. (5.137). That is, we first condition our transform sufficiently so that we may write it down by inspection; the conditions are on the interval length X_(i−1) and on the number of
arrivals n_(i−1) during that interval; that is, we may write

    E[e^(−sX_i) | X_(i−1) = y, n_(i−1) = n] = [B*(s)]^n

This last follows from our convolution property leading to the multiplication of transforms in the case when the variables are independent; here we have n independent service times, all with identical distributions. We may uncondition first on n:

    E[e^(−sX_i) | X_(i−1) = y] = Σ_(n=0)^∞ ((λy)^n/n!) e^(−λy) [B*(s)]^n

and next on y:

    X_i*(s) = ∫₀^∞ Σ_(n=0)^∞ ((λy)^n/n!) e^(−λy) [B*(s)]^n dX_(i−1)(y)

Clearly, the left-hand side is X_i*(s); evaluating the sum on the right-hand side leads us to

    X_i*(s) = ∫₀^∞ e^(−y[λ − λB*(s)]) dX_(i−1)(y)

This integral is recognized as the transform of the pdf for X_(i−1) evaluated at the point λ − λB*(s), namely,

    X_i*(s) = X*_(i−1)[λ − λB*(s)]                  (5.161)

This is the first step.
We now condition our calculations on the event that a new ("tagged") arrival occurs during the busy period and, in particular, while the busy period is in its ith interval (of duration X_i). From our observations in Section 4.1, we know that Poisson arrivals find the system in a given state with a probability equal to the equilibrium probability of the system being in that state. Now we know that if the system is in a busy period, then the fraction of time it spends in the interval of duration X_i is given by E[X_i]/E[Y] (this can be made rigorous by renewal theory arguments). Consider a customer who arrives during an interval of duration X_i. Let his waiting time be denoted by w; it is clear that this waiting time will equal the sum of the remaining time (residual life) of the ith interval plus the sum of the service times of all jobs who arrived before he did during the ith interval. We wish to calculate E[e^(−sw) | i], which is the transform of the waiting-time pdf for an arrival during the ith interval; again, we perform this calculation by conditioning on the three variables X_i, Y_i (defined to be the residual life of this ith interval), and N_i (defined to be the number of arrivals during the ith interval but prior to our customer's arrival, that is, in the interval X_i − Y_i). Thus, using our convolution property as before, we may write

    E[e^(−sw) | i, X_i = y, Y_i = y′, N_i = n] = e^(−sy′)[B*(s)]^n
Now since we assume that n customers have arrived during an interval of duration y − y′, we uncondition on N_i as follows:

    E[e^(−sw) | i, X_i = y, Y_i = y′] = Σ_(n=0)^∞ e^(−sy′) ([λ(y − y′)]^n/n!) e^(−λ(y−y′)) [B*(s)]^n
                                      = e^(−sy′ − λ(y−y′) + λ(y−y′)B*(s))        (5.162)
We have already observed that Y_i is the residual life of the lifetime X_i. Equation (5.9) gives the joint density for the residual life Y and the lifetime X; in that equation Y and X play the roles of Y_i and X_i in our problem. Therefore, replacing f(x) dx in Eq. (5.9) by dX_i(y), and noting that y and y′ have replaced x and y in that development, we see that the joint density for X_i and Y_i is given by dX_i(y) dy′/E[X_i] for 0 ≤ y′ ≤ y < ∞. By means of this joint density we may remove the condition on X_i and Y_i in Eq. (5.162) to obtain

    E[e^(−sw) | i] = ∫_(y=0)^∞ ∫_(y′=0)^y e^(−sy′ − λ(y−y′) + λ(y−y′)B*(s)) dX_i(y) dy′/E[X_i]

                   = ∫_(y=0)^∞ (e^(−y[λ − λB*(s)]) − e^(−sy))/([s − λ + λB*(s)]E[X_i]) dX_i(y)

These last integrals we recognize as transforms, and so

    E[e^(−sw) | i] = (X_i*[λ − λB*(s)] − X_i*(s))/([s − λ + λB*(s)]E[X_i])

But now Eq. (5.161) permits us to rewrite the first of these transforms to obtain

    E[e^(−sw) | i] = (X*_(i+1)(s) − X_i*(s))/([s − λ + λB*(s)]E[X_i])
Now we may remove the condition on our arrival entering during the ith interval by weighting this last expression with the probability we have formerly expressed for the occurrence of this event (still conditioned on our arrival entering during a busy period), and so we have

    E[e^(−sw) | enter in busy period] = Σ_(i=0)^∞ E[e^(−sw) | i] E[X_i]/E[Y]

        = (1/([s − λ + λB*(s)]E[Y])) Σ_(i=0)^∞ [X*_(i+1)(s) − X_i*(s)]        (5.163)
This last sum nicely collapses to yield 1 − X0*(s), since X_i*(s) = 1 for those intervals beyond the busy period (recall X_i = 0 for i ≥ i0); also, since X0 = x1, a service time, then X0*(s) = B*(s), and so we arrive at

    E[e^(−sw) | enter in busy period] = (1 − B*(s))/([s − λ + λB*(s)]E[Y])

From previous considerations we know that the probability of an arrival entering during a busy period is merely ρ = λx̄ (and for sure he must wait for service in such a case); further, we may evaluate the average length of the busy period E[Y] either from our previous calculation in Eq. (5.141) or from elementary considerations* to give E[Y] = x̄/(1 − ρ). Thus, unconditioning on an arrival finding the system busy, we finally have
    E[e^(−sw)] = (1 − ρ)E[e^(−sw) | enter in idle period] + ρE[e^(−sw) | enter in busy period]

               = (1 − ρ) + ρ([1 − B*(s)](1 − ρ))/([s − λ + λB*(s)]x̄)

               = s(1 − ρ)/(s − λ + λB*(s))

Voilà! This is exactly the P-K transform equation for the waiting time, namely W*(s) ≜ E[e^(−sw)], given in Eq. (5.105).

Thus we have shown how to go from a busy-period analysis to the calculation of waiting time in the system. This method is reported upon in [CONW 67], and we will have occasion to return to it in Chapter 3, Volume II.
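As a sanity check on the final transform, one can evaluate W*(s) = s(1 − ρ)/(s − λ + λB*(s)) for M/M/1 and recover the mean wait −dW*/ds at s = 0 by a central difference; for M/M/1 this mean is known to be ρ/(μ − λ). The parameters and step size below are illustrative assumptions.

```python
# Sketch: evaluate the P-K waiting-time transform for M/M/1 and recover
# the mean wait by numerical differentiation at s = 0.
lam, mu = 0.5, 1.0                       # rho = 0.5 (assumption)

def B_star(s):
    return mu / (s + mu)

def W_star(s):
    rho = lam / mu
    return s * (1 - rho) / (s - lam + lam * B_star(s))

h = 1e-4
mean_wait = -(W_star(h) - W_star(-h)) / (2 * h)
print(round(mean_wait, 4))               # → 1.0, i.e., rho/(mu - lam)
```

For these values ρ/(μ − λ) = 0.5/0.5 = 1, in agreement with the finite-difference estimate.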
5.11. COMBINATORIAL METHODS
We had mentioned in the opening remarks of this chapter that consideration of random walks and combinatorial methods is applicable to the study of the M/G/1 queue. We take this opportunity to indicate some aspects of those methods. In Figure 5.13 we have reproduced U(t) from Figure 5.10a. In addition, we have indicated the "random walk" R(t), which is the same as U(t) except that it does not saturate at zero but rather continues to decline at a rate of 1 sec/sec below the horizontal axis; of course, it too takes vertical jumps at the customer-arrival instants. We introduce this diagram in order to define what are known as ladder indices. The kth (descending) ladder index
* The following simple argument enables us to calculate E[Y]. In a long interval (say, t) the server is busy a fraction ρ of the time. Each idle period in M/G/1 is of average length 1/λ sec, and therefore we expect to have (1 − ρ)t/(1/λ) idle periods. This will also be the number of busy periods, approximately; therefore, since the time spent in busy periods is ρt, the average duration of each must be ρt/[λt(1 − ρ)] = x̄/(1 − ρ). As t → ∞, this argument becomes exact.
Figure 5.13 The descending ladder indices.
is defined as the instant when the random walk R(t) rises from its kth new minimum (and the value of this minimum is referred to as the ladder height). In Figure 5.13 the first three ladder indices are indicated by heavy dots. Fluctuation theory concerns itself with the distribution of such ladder indices and is amply discussed both in Feller [FELL 66] and in Prabhu [PRAB 65], in which the applications of that theory to queueing processes are considered. Here we merely make the observation that each ladder index identifies the arrival instant of a customer who begins a new busy period, and it is this observation that makes them interesting for queueing theory. Moreover, whenever R(t) drops below its previous ladder height, a busy period terminates, as shown in Figure 5.13. Thus, between the occurrence of a ladder index and the first time R(t) drops below the corresponding ladder height, a busy period ensues, and both R(t) and U(t) have exactly the same shape, where the former is shifted down from the latter by an amount exactly equal to the accumulated idle time since the end of the first busy period. One sees that we are quickly led into methods from combinatorial theory when we deal with such indices.
In a similar vein, Takacs has successfully applied combinatorial theory to the study of the busy period. He considers this subject in depth in his book [TAKA 67] on combinatorial methods as applied to queueing theory and develops, as his cornerstone, a generalization of the classical ballot theorem. The classical ballot theorem concerns itself with the counting of votes in a two-way contest involving candidate A and candidate B. Assume that A scores a votes and B scores b votes, where a ≥ mb for a nonnegative integer m, and that all possible sequences of voting records are equally likely. If we let P be the probability that throughout the counting of votes A continually leads B by a factor greater than m, then the classical ballot theorem states that

    P = (a − mb)/(a + b)        (5.164)
This theorem originated in 1887 (see [TAKA 67] for its history). Takacs generalized this theorem and phrased it in terms of cards drawn from an urn in the following way. Consider an urn with n cards, where the cards are marked with the nonnegative integers k_1, k_2, ..., k_n and where

    Σ_{i=1}^{n} k_i = k ≤ n

(that is, the ith card in the set is marked with the integer k_i). Assume that all n cards are drawn without replacement from the urn. Let ν_r (r = 1, ..., n) be the number on the card drawn at the rth drawing. Let

    N_r = ν_1 + ν_2 + ··· + ν_r        r = 1, 2, ..., n

N_r is thus the sum of the numbers on all cards drawn up through the rth draw. Takacs' generalization of the classical ballot theorem states that

    P[N_r < r for all r = 1, 2, ..., n] = (n − k)/n        (5.165)
The proof of this theorem is not especially difficult but will not be reproduced here. Note the simplicity of the theorem and, in particular, that the probability expressed is independent of the particular set of integers k_i and depends only upon their sum k. We may identify ν_r as the number of customer arrivals during the service of the rth customer in a busy period of an M/G/1 queueing system. Thus N_r + 1 is the cumulative number of arrivals up to the conclusion of the rth customer's service during a busy period. We are thus involved in a race between N_r + 1 and r: As soon as r equals N_r + 1, the busy period must terminate since, at this point, we have served exactly as many as have arrived (including the customer who initiated the busy period) and so the system empties. If we now let N_bp be the number of customers served in a busy period, it is possible to apply Eq. (5.165) and obtain the following result [TAKA 67]:
    P[N_bp = n] = (1/n) P[N_n = n − 1]        (5.166)
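The urn theorem invites a brute-force check: for a small deck one can enumerate every drawing order and count those in which N_r < r for all r. The following sketch (in Python; the sample decks are our own illustrative choices, not from the text) does this and compares the exact fraction with (n − k)/n.

```python
from fractions import Fraction
from itertools import permutations

def ladder_prob(cards):
    """Exact P[N_r < r for all r = 1..n] over all equally likely orders."""
    good = total = 0
    for order in permutations(cards):
        total += 1
        s, ok = 0, True
        for r, v in enumerate(order, start=1):
            s += v
            if s >= r:          # the partial sum N_r has caught up with r
                ok = False
                break
        if ok:
            good += 1
    return Fraction(good, total)

# Each deck is a set of card values k_1, ..., k_n with k = sum(k_i) <= n.
for cards in [(0, 1), (0, 0, 2), (0, 0, 2, 1, 0), (1, 0, 2, 0, 0, 0)]:
    n, k = len(cards), sum(cards)
    assert ladder_prob(cards) == Fraction(n - k, n), cards
print("urn theorem confirmed on all sample decks")
```

Note that the answer depends only on n and k, never on how the card values are arranged, exactly as the theorem asserts.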
It is easy to calculate the probability on the right-hand side of Eq. (5.166) since we have Poisson arrivals: All we need do is condition this number of arrivals on the duration of the busy period, multiply by the probability that n service intervals will, in fact, sum to this length, and then integrate over all possible lengths. Thus

    P[N_n = n − 1] = ∫_0^∞ [(λy)^(n−1) / (n − 1)!] e^(−λy) b_(n)(y) dy        (5.167)

where b_(n)(y) is the n-fold convolution of b(y) with itself [see Eq. (5.110)] and represents the pdf for the sum of n independent random variables, each drawn from the common density b(y). Thus we arrive at an explicit expression for the probability distribution of the number served in a busy period:

    P[N_bp = n] = ∫_0^∞ [(λy)^(n−1) / n!] e^(−λy) b_(n)(y) dy        (5.168)
We may go further and calculate G(y), the distribution of the busy period, by integrating in Eq. (5.168) only up to some point y (rather than ∞) and then summing over all possible numbers served in the busy period; that is,

    G(y) = Σ_{n=1}^∞ ∫_0^y [(λx)^(n−1) / n!] e^(−λx) b_(n)(x) dx        (5.169)
Thus Eq. (5.169) is an explicit expression, in terms of known quantities, for the distribution of the busy period and in fact may be used in place of the expression given in Eq. (5.137), the Laplace transform of dG(y)/dy. This is the expression we had promised earlier; although we have expressed it as an infinite summation, it nevertheless provides the ability to approximate the busy-period distribution numerically in any given situation. Similarly, Eq. (5.168) gives an explicit expression for the number served in the busy period.
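For a concrete case, take M/M/1: there b_(n)(y) is the Erlang-n density μ(μy)^(n−1)e^(−μy)/(n − 1)!, and carrying out the integration in Eq. (5.168) analytically (a gamma integral) yields the closed form P[N_bp = n] = (1/n) C(2n−2, n−1) ρ^(n−1)/(1 + ρ)^(2n−1). The sketch below (Python; the parameters are purely illustrative) evaluates (5.168) by a crude midpoint rule and checks it against this closed form.

```python
import math

lam, mu = 0.5, 1.0          # illustrative M/M/1 parameters; rho = 0.5
rho = lam / mu

def p_nbp(n, upper=60.0, steps=300_000):
    """P[N_bp = n] by midpoint-rule integration of Eq. (5.168)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        y = (i + 0.5) * h
        # Erlang-n density: the n-fold convolution of b(y) = mu*exp(-mu*y)
        erlang = mu * (mu * y) ** (n - 1) / math.factorial(n - 1) * math.exp(-mu * y)
        total += (lam * y) ** (n - 1) / math.factorial(n) * math.exp(-lam * y) * erlang
    return total * h

def p_closed(n):
    """Closed form obtained by doing the integral in (5.168) analytically."""
    return math.comb(2 * n - 2, n - 1) / n * rho ** (n - 1) / (1 + rho) ** (2 * n - 1)

for n in (1, 2, 3, 5):
    assert abs(p_nbp(n) - p_closed(n)) < 1e-5
print([round(p_closed(n), 4) for n in (1, 2, 3, 5)])
```

The same quadrature applied to Eq. (5.169) (truncating the sum over n) approximates the busy-period distribution G(y) itself.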
The reader may have observed that our study of the busy period has really been the study of a transient phenomenon, and this is one of the reasons that the development bogged down. In the next section we consider certain aspects of the transient solution for M/G/1 a bit further.
5.12. THE TAKACS INTEGRODIFFERENTIAL EQUATION

In this section we take a closer look at the unfinished work and derive the forward Kolmogorov equation for its time-dependent behavior. A moment's reflection will reveal that the unfinished work U(t) is a continuous-time, continuous-state Markov process that is subject to discontinuous
changes. It is a Markov process since the entire past history of its motion is summarized in its current value as far as its future behavior is concerned. That is, its vertical discontinuities occur at instants of customer arrivals, and for M/G/1 these arrivals form a Poisson process (therefore, we need not know how long it has been since the last arrival), and the current value of U(t) tells us exactly how much work remains in the system at each instant.
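Because U(t) declines at 1 sec/sec between arrivals and jumps by the new service demand at each arrival, a sample path costs one line of recursion per customer. The following sketch (Python; an M/M/1 run with illustrative parameters λ = 0.75, μ = 1 of our own choosing) walks such a path and recovers both the busy fraction ρ and the mean busy-period length x̄/(1 − ρ) = 4 sec quoted in the earlier footnote.

```python
import random

random.seed(1)
lam, mu, N = 0.75, 1.0, 200_000     # illustrative M/M/1; rho = 0.75

U = 0.0                             # unfinished work just after an arrival
busy = total = 0.0
idle_periods = 0
for _ in range(N):
    U += random.expovariate(mu)     # work brought by the arriving customer
    gap = random.expovariate(lam)   # time until the next arrival
    total += gap
    if U > gap:                     # still busy when the next customer comes
        U -= gap
        busy += gap
    else:                           # the system empties during this gap
        busy += U
        U = 0.0
        idle_periods += 1

print(busy / total)                 # fraction of time busy: near rho = 0.75
print(busy / idle_periods)          # mean busy period: near 1/(mu*(1-rho)) = 4
```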
We wish to derive the probability distribution function for U(t), given its initial value at time t = 0. Accordingly we define

    F(w, t; w_0) ≜ P[U(t) ≤ w | U(0) = w_0]        (5.170)
This notation is a bit cumbersome and so we choose to suppress the initial value of the unfinished work and use the shorthand notation F(w, t) ≜ F(w, t; w_0), with the understanding that the initial value is w_0. We wish to relate the probability F(w, t + Δt) to its possible values at time t. We observe that we can reach this state from t if, on the one hand, there had been no arrivals during this increment in time [which occurs with probability 1 − λΔt + o(Δt)] and the unfinished work was no larger than w + Δt at time t; or if, on the other hand, there had been an arrival in this interval [with probability λΔt + o(Δt)] such that the unfinished work at time t, plus the new increment of work brought in by this customer, together do not exceed w. These observations lead us to the following equation:

    F(w, t + Δt) = (1 − λΔt)F(w + Δt, t) + λΔt ∫_{x=0}^{w} B(w − x) [∂F(x, t)/∂x] dx + o(Δt)        (5.171)
Clearly, [∂F(x, t)/∂x] dx ≜ d_x F(x, t) is the probability that at time t we have x < U(t) ≤ x + dx. Expanding our distribution function in its first variable we have

    F(w + Δt, t) = F(w, t) + [∂F(w, t)/∂w] Δt + o(Δt)

Using this expansion for the first term on the right-hand side of Eq. (5.171) we obtain

    F(w, t + Δt) = F(w, t) + [∂F(w, t)/∂w] Δt − λΔt {F(w, t) + [∂F(w, t)/∂w] Δt} + λΔt ∫_{x=0}^{w} B(w − x) d_x F(x, t) + o(Δt)

Subtracting F(w, t), dividing by Δt, and passing to the limit as Δt → 0, we finally obtain the Takacs integrodifferential equation for U(t):

    ∂F(w, t)/∂t = ∂F(w, t)/∂w − λF(w, t) + λ ∫_{x=0}^{w} B(w − x) d_x F(x, t)        (5.172)
Takacs [TAKA 55] derived this equation for the more general case of a nonhomogeneous Poisson process, namely, where the arrival rate λ(t) depends upon t. He showed that this equation is good for almost all w ≥ 0 and t ≥ 0; it does not hold at those w and t for which ∂F(w, t)/∂w has an accumulation of probability (namely, an impulse). This occurs, in particular, at w = 0 and would give rise to the term F(0, t)u_0(w) in ∂F(w, t)/∂w, whereas no other term in the equation contains such an impulse.
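Away from such impulse points the equation can be marched forward numerically. The sketch below (Python; a first-order explicit scheme on a truncated w-grid, whose grid cell at w = 0 carries the atom of probability, with illustrative M/M/1 parameters λ = 0.5, μ = 1) starts from an empty system, F(w, 0) = 1, and relaxes toward the known M/M/1 unfinished-work limit F(w) = 1 − ρe^(−(μ−λ)w).

```python
import math

lam, mu = 0.5, 1.0                   # illustrative M/M/1; rho = 0.5
dw, J = 0.1, 120                     # w-grid: 0, dw, ..., J*dw
dt, steps = 0.05, 500                # march Eq. (5.172) out to t = 25
B = [1.0 - math.exp(-mu * j * dw) for j in range(J + 1)]  # service-time CDF

F = [1.0] * (J + 1)                  # F(w, 0) = 1: system starts empty
for _ in range(steps):
    conv = []
    for j in range(J + 1):           # Stieltjes integral of B(w - x) d_x F
        c = B[j] * F[0]              # contribution of the atom at x = 0
        for i in range(1, j + 1):
            c += B[j - i] * (F[i] - F[i - 1])
        conv.append(c)
    nxt = []
    for j in range(J + 1):           # upwind difference for dF/dw; F -> 1 at top
        slope = ((F[j + 1] - F[j]) if j < J else (1.0 - F[j])) / dw
        nxt.append(F[j] + dt * (slope - lam * F[j] + lam * conv[j]))
    F = nxt

limit = lambda w: 1.0 - (lam / mu) * math.exp(-(mu - lam) * w)
print(F[0], limit(0.0))              # atom at zero: near 1 - rho = 0.5
print(F[20], limit(2.0))             # F(2): near 0.816
```

The scheme is only first-order accurate, so the agreement is to a few percent; it does, however, show the impulse at w = 0 building up to the value 1 − ρ that we derive below.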
We may gain more information from the Takacs integrodifferential equation if we transform it on the variable w (and not on t); thus, using the transform variable r, we define

    W*·(r, t) ≜ ∫_0^∞ e^(−rw) d_w F(w, t)        (5.173)

We use the notation (*·) to denote transformation on the first, but not the second, argument. The symbol W is chosen since, as we shall see, lim_{t→∞} W*·(r, t) = W*(r), which is our former transform for the waiting-time pdf [see, for example, Eq. (5.103)].
Let us examine the transform of each term in Eq. (5.172) separately. First we note that since F(w, t) = ∫_{0⁻}^{w} d_x F(x, t), then from entry 13 in Table I.3 of Appendix I (and its footnote) we must have

    ∫_0^∞ F(w, t) e^(−rw) dw = [W*·(r, t) + F(0⁻, t)] / r

and, similarly, we have

    ∫_0^∞ B(w) e^(−rw) dw = [B*(r) + B(0⁻)] / r

However, since the unfinished work and the service time are both nonnegative random variables, it must be that F(0⁻, t) = B(0⁻) = 0 always. We recognize that the last term in the Takacs integrodifferential equation is a convolution between B(w) and ∂F(w, t)/∂w, and therefore the transform of this convolution (including the constant multiplier λ) must be (by properties 10 and 13 in that same table) λW*·(r, t)[B*(r) + B(0⁻)]/r = λW*·(r, t)B*(r)/r.
Now it is clear that the transform of the term ∂F(w, t)/∂w will be W*·(r, t); but this transform includes F(0⁺, t), the transform of the impulse located at the origin for this partial derivative, and since we know that the Takacs integrodifferential equation does not contain that impulse, it must be subtracted out. Thus we have, from Eq. (5.172),

    (1/r) ∂W*·(r, t)/∂t = W*·(r, t) − F(0⁺, t) − λW*·(r, t)/r + λW*·(r, t)B*(r)/r        (5.174)
which may be rewritten as

    ∂W*·(r, t)/∂t = [r − λ + λB*(r)] W*·(r, t) − rF(0⁺, t)        (5.175)

Takacs gives the solution to this equation {p. 51, Eq. (8) in [TAKA 62b]}.
We may now transform on our second variable t by first defining the double transform

    F**(r, s) ≜ ∫_0^∞ e^(−st) W*·(r, t) dt        (5.176)

We also need the definition

    F_0*(s) ≜ ∫_0^∞ e^(−st) F(0⁺, t) dt        (5.177)
We may now transform Eq. (5.175) using the transform property given as entry 11 in Table I.3 (and its footnote) to obtain

    sF**(r, s) − W*·(r, 0) = [r − λ + λB*(r)] F**(r, s) − rF_0*(s)

From this we obtain

    F**(r, s) = [W*·(r, 0) − rF_0*(s)] / [s − r + λ − λB*(r)]        (5.178)
The unknown function F_0*(s) may be determined by insisting that the transform F**(r, s) be analytic in the region Re(s) > 0, Re(r) > 0. This implies that the zeroes of the numerator and denominator must coincide in this region; Benes [BENE 56] has shown that in this region r = η(s) is the unique root of the denominator in Eq. (5.178). Thus W*·(η, 0) = ηF_0*(s), and so (writing η for η(s)) we have

    F**(r, s) = [W*·(r, 0) − (r/η)W*·(η, 0)] / [s − r + λ − λB*(r)]        (5.179)
Now we recall that U(0) = w_0 with probability one, and so from Eq. (5.173) we have W*·(r, 0) = e^(−rw_0). Thus F**(r, s) takes the final form

    F**(r, s) = [(r/η)e^(−ηw_0) − e^(−rw_0)] / [λB*(r) − λ + r − s]        (5.180)
We will return to this equation in Chapter 2, Volume II, when we discuss the diffusion approximation.
For now it behooves us to investigate the steady-state value of these functions; in particular, it can be shown that F(w, t) has a limit as t → ∞ so long as ρ < 1, and this limit will be independent of the initial condition
F(w, 0); we denote this limit by F(w) = lim_{t→∞} F(w, t), and from Eq. (5.172) we find that it must satisfy the following equation:

    dF(w)/dw = λF(w) − λ ∫_{x=0}^{w} B(w − x) dF(x)        (5.181)
Furthermore, for ρ < 1, W*(r) ≜ lim_{t→∞} W*·(r, t) will exist and be independent of the initial distribution. Taking the transform of Eq. (5.181), we find, as we did in deriving Eq. (5.174),

    W*(r) − F(0⁺) = λW*(r)/r − λB*(r)W*(r)/r

where F(0⁺) = lim_{t→∞} F(0⁺, t) and equals the probability that the unfinished work is zero. This last may be rewritten to give

    W*(r) = rF(0⁺) / [r − λ + λB*(r)]
However, we require W*(0) = 1, which demands that the unknown constant F(0⁺) have the value F(0⁺) = 1 − ρ. Finally, we have

    W*(r) = r(1 − ρ) / [r − λ + λB*(r)]        (5.182)

which is exactly the Pollaczek–Khinchin transform equation for waiting time, as we promised!
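As a quick check on (5.182), take M/M/1, where B*(r) = μ/(μ + r). A little algebra reduces (5.182) to W*(r) = (1 − ρ)(μ + r)/(μ + r − λ), the familiar M/M/1 waiting-time transform, and −dW*/dr at r = 0 gives the mean wait ρ/(μ − λ). The sketch below (Python, with illustrative parameters of our own) verifies both numerically.

```python
lam, mu = 0.5, 1.0                   # illustrative M/M/1; rho = 0.5
rho = lam / mu

def W_pk(r):                         # Eq. (5.182) with B*(r) = mu/(mu + r)
    return r * (1 - rho) / (r - lam + lam * mu / (mu + r))

def W_mm1(r):                        # the known M/M/1 closed form
    return (1 - rho) * (mu + r) / (mu + r - lam)

for r in (0.1, 0.3, 1.0, 2.5):
    assert abs(W_pk(r) - W_mm1(r)) < 1e-9

h = 1e-3                             # mean wait = -dW*/dr at r = 0
mean_wait = -(W_pk(h) - W_pk(-h)) / (2 * h)
assert abs(mean_wait - rho / (mu - lam)) < 1e-4
print("mean wait:", round(mean_wait, 4))
```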
This completes our discussion of the system M/G/1 (for the time being). Next we consider the "companion" system, G/M/m.
REFERENCES

BENE 56  Benes, V. E., "On Queues with Poisson Arrivals," Annals of Mathematical Statistics, 28, 670-677 (1956).
CONW 67  Conway, R. W., W. L. Maxwell, and L. W. Miller, Theory of Scheduling, Addison-Wesley (Reading, Mass.), 1967.
COX 55  Cox, D. R., "The Analysis of Non-Markovian Stochastic Processes by the Inclusion of Supplementary Variables," Proc. Cambridge Philosophical Society (Math. and Phys. Sci.), 51, 433-441 (1955).
COX 62  Cox, D. R., Renewal Theory, Methuen (London), 1962.
FELL 66  Feller, W., An Introduction to Probability Theory and Its Applications, Vol. II, Wiley (New York), 1966.
GAVE 59  Gaver, D. P., Jr., "Imbedded Markov Chain Analysis of a Waiting Line Process in Continuous Time," Annals of Mathematical Statistics, 30, 698-720 (1959).
HEND 72  Henderson, W., "Alternative Approaches to the Analysis of the M/G/1 and G/M/1 Queues," Operations Research, 15, 92-101 (1972).
KEIL 65  Keilson, J., "The Role of Green's Functions in Congestion Theory," Proc. Symposium on Congestion Theory, Univ. of North Carolina Press, 43-71 (1965).
KEND 51  Kendall, D. G., "Some Problems in the Theory of Queues," Journal of the Royal Statistical Society, Ser. B, 13, 151-185 (1951).
KEND 53  Kendall, D. G., "Stochastic Processes Occurring in the Theory of Queues and their Analysis by the Method of the Imbedded Markov Chain," Annals of Mathematical Statistics, 24, 338-354 (1953).
KHIN 32  Khinchin, A. Y., "Mathematical Theory of Stationary Queues," Mat. Sbornik, 39, 73-84 (1932).
LIND 52  Lindley, D. V., "The Theory of Queues with a Single Server," Proc. Cambridge Philosophical Society, 48, 277-289 (1952).
PALM 43  Palm, C., "Intensitätsschwankungen im Fernsprechverkehr," Ericsson Technics, 6, 1-189 (1943).
POLL 30  Pollaczek, F., "Über eine Aufgabe der Wahrscheinlichkeitstheorie," Math. Zeitschrift, 32, 64-100, 729-750 (1930).
PRAB 65  Prabhu, N. U., Queues and Inventories, Wiley (New York), 1965.
RIOR 62  Riordan, J., Stochastic Service Systems, Wiley (New York), 1962.
SMIT 58  Smith, W. L., "Renewal Theory and its Ramifications," Journal of the Royal Statistical Society, Ser. B, 20, 243-302 (1958).
TAKA 55  Takacs, L., "Investigation of Waiting Time Problems by Reduction to Markov Processes," Acta Math. Acad. Sci. Hung., 6, 101-129 (1955).
TAKA 62a  Takacs, L., Introduction to the Theory of Queues, Oxford University Press (New York), 1962.
TAKA 62b  Takacs, L., "A Single-Server Queue with Poisson Input," Operations Research, 10, 388-397 (1962).
TAKA 67  Takacs, L., Combinatorial Methods in the Theory of Stochastic Processes, Wiley (New York), 1967.
EXERCISES
5.1. Prove Eq. (5.14) from Eq. (5.11).
5.2. Here we derive the residual lifetime density f̂(x) discussed in Section 5.2. We use the notation of Figure 5.1.
(a) Observing that the event {Y ≤ y} can occur if and only if t < T_k ≤ t + y < T_{k+1} for some k, show that

    F_t(y) ≜ P[Y ≤ y | t] = Σ_{k=1}^∞ ∫_t^{t+y} [1 − F(t + y − x)] dP[T_k ≤ x]
232 TH E QUEUE MjGjl
(b) Observing that T_k ≤ x if and only if α(x), the number of "arrivals" in (0, x), is at least k, that is, P[T_k ≤ x] = P[α(x) ≥ k], show that

    Σ_{k=1}^∞ P[T_k ≤ x] = Σ_{k=1}^∞ k P[α(x) = k]

(c) For large x, the mean-value expression in (b) is x/m_1. Let F̂(y) = lim_{t→∞} F_t(y), with corresponding pdf f̂(y). Show that we now have

    f̂(y) = [1 − F(y)] / m_1
5.3. Let us rederive the P-K mean-value formula (5.72).
(a) Recognizing that a new arrival is delayed by one service time for each queued customer plus the residual service time of the customer in service, write an expression for W in terms of N_q, ρ, x̄, σ_b², and P[w > 0].
(b) Use Little's result in (a) to obtain Eq. (5.72).
5.4. Replace 1 − ρ in Eq. (5.85) by an unknown constant and show that Q(1) = V(1) = 1 easily gives us the correct value of 1 − ρ for this constant.
5.5. (a) From Eq. (5.86) form Q⁽¹⁾(1) and show that it gives the expression for q̄ in Eq. (5.63). Note that L'Hospital's rule will be required twice to remove the indeterminacies in the expression for Q⁽¹⁾(1).
(b) From Eq. (5.105), find the first two moments of the waiting time and compare with Eqs. (5.113) and (5.114).
5.6. We wish to prove that the limiting probability r_k for the number of customers found by an arrival is equal to the limiting probability d_k for the number of customers left behind by a departure, in any queueing system in which the state changes by unit step values only (positive or negative). Beginning at t = 0, let x_n be those instants when N(t) (the number in system) increases by one and y_n be those instants when N(t) decreases by unity, n = 1, 2, .... Let N(x_n⁻) be denoted by α_n and N(y_n⁺) by β_n. Let N(0) = i.
(a) Show that if β_{n+i} ≤ k, then α_{n+k+1} ≤ k.
(b) Show that if α_{n+k+1} ≤ k, then β_{n+i} ≤ k.
(c) Show that (a) and (b) must therefore give, for any k,

    lim_{n→∞} P[β_n ≤ k] = lim_{n→∞} P[α_n ≤ k]

which establishes that r_k = d_k.
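The essence of the argument is that, along any sample path, arrivals finding k customers are precisely the k → k + 1 transitions and departures leaving k behind are precisely the k + 1 → k transitions, and these two counts can differ by at most one. A short M/M/1 simulation (Python; the parameters are illustrative choices of our own) makes the agreement visible.

```python
import collections
import random

random.seed(7)
lam, mu, T = 0.6, 1.0, 100_000.0     # illustrative M/M/1; rho = 0.6

t, n = 0.0, 0
next_arr, next_dep = random.expovariate(lam), float("inf")
found = collections.Counter()        # state found by each arrival
left = collections.Counter()         # state left behind by each departure
arrivals = departures = 0
while t < T:
    if next_arr < next_dep:          # an arrival occurs
        t = next_arr
        found[n] += 1
        arrivals += 1
        n += 1
        if n == 1:                   # server was idle: start a service
            next_dep = t + random.expovariate(mu)
        next_arr = t + random.expovariate(lam)
    else:                            # a departure occurs
        t = next_dep
        n -= 1
        left[n] += 1
        departures += 1
        next_dep = t + random.expovariate(mu) if n > 0 else float("inf")

for k in range(6):
    r_k, d_k = found[k] / arrivals, left[k] / departures
    assert abs(r_k - d_k) < 0.002    # r_k = d_k; here both near (1-rho)*rho**k
print("arrival and departure distributions agree")
```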
5.7. In this exercise, we explore the method of supplementary variables as applied to the M/G/1 queue. As usual, let P_k(t) = P[N(t) = k]. Moreover, let P_k(t, x_0) dx_0 = P[N(t) = k, x_0 < X_0(t) ≤ x_0 + dx_0], where X_0(t) is the service already received by the customer in service at time t.
(a) Show that

    ∂P_0(t)/∂t = −λP_0(t) + ∫_0^∞ P_1(t, x_0) r(x_0) dx_0

where

    r(x_0) ≜ b(x_0) / [1 − B(x_0)]

(b) Let p_k = lim_{t→∞} P_k(t) and p_k(x_0) = lim_{t→∞} P_k(t, x_0). From (a) we have the equilibrium result

    λp_0 = ∫_0^∞ p_1(x_0) r(x_0) dx_0

Show the following equilibrium results [where p_0(x_0) ≜ 0]:

    (i) ∂p_k(x_0)/∂x_0 = −[λ + r(x_0)] p_k(x_0) + λp_{k−1}(x_0)        k ≥ 1
    (ii) p_k(0) = ∫_0^∞ p_{k+1}(x_0) r(x_0) dx_0        k > 1
    (iii) p_1(0) = ∫_0^∞ p_2(x_0) r(x_0) dx_0 + λp_0

(c) The four equations in (b) determine the equilibrium probabilities when combined with an appropriate normalization equation. In terms of p_0 and p_k(x_0) (k = 1, 2, ...), give this normalization equation.
(d) Let R(z, x_0) = Σ_{k=1}^∞ p_k(x_0) z^k. Show that

    ∂R(z, x_0)/∂x_0 = [λz − λ − r(x_0)] R(z, x_0)

and

    zR(z, 0) = ∫_0^∞ r(x_0) R(z, x_0) dx_0 + λz(z − 1)p_0

(e) Show that the solution for R(z, x_0) from (d) must be

    R(z, x_0) = R(z, 0) e^(−λx_0(1−z) − ∫_0^{x_0} r(v) dv)

    R(z, 0) = λz(z − 1)p_0 / [z − B*(λ − λz)]
(f) Defining R(z) ≜ ∫_0^∞ R(z, x_0) dx_0, show that

    R(z) = R(z, 0) [1 − B*(λ − λz)] / [λ(1 − z)]

(g) From the normalization equation of (c), now show that

    p_0 = 1 − ρ        (ρ = λx̄)

(h) Consistent with Eq. (5.78) we now define

    Q(z) = p_0 + R(z)

Show that Q(z) expressed this way is identical to the P-K transform equation (5.86). (See [COX 55] for additional details of this method.)
5.8. Consider the M/G/∞ queue, in which each customer always finds a free server; thus s(y) = b(y) and T = x̄. Let P_k(t) = P[N(t) = k] and assume P_0(0) = 1.
(a) Show that

    P_k(t) = Σ_{n=k}^∞ e^(−λt) [(λt)^n / n!] (n choose k) [(1/t) ∫_0^t [1 − B(x)] dx]^k [(1/t) ∫_0^t B(x) dx]^(n−k)

[HINT: (1/t) ∫_0^t B(x) dx is the probability that a customer's service terminates by time t, given that his arrival time was uniformly distributed over the interval (0, t). See Eq. (2.137) also.]
(b) Show that p_k ≜ lim_{t→∞} P_k(t) is

    p_k = [(λx̄)^k / k!] e^(−λx̄)

regardless of the form of B(x)!
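The insensitivity claimed in (b) invites an empirical check: simulate M/G/∞ under two service laws with the same mean, say deterministic and exponential, and compare the number found in system by arrivals against the Poisson distribution with mean λx̄. A sketch (Python; the parameters are illustrative choices of our own):

```python
import collections
import heapq
import math
import random

random.seed(3)
lam, xbar, T = 2.0, 1.5, 50_000.0    # illustrative parameters; lam*xbar = 3

def counts_at_arrivals(draw):
    """M/G/infinity: record N just before each Poisson arrival instant."""
    t, active, hist, obs = 0.0, [], collections.Counter(), 0
    while t < T:
        t += random.expovariate(lam)
        while active and active[0] <= t:     # purge completed services
            heapq.heappop(active)
        hist[len(active)] += 1
        obs += 1
        heapq.heappush(active, t + draw())   # this arrival's departure time
    return hist, obs

a = lam * xbar
for draw in (lambda: xbar,                          # deterministic service
             lambda: random.expovariate(1 / xbar)): # exponential service
    hist, obs = counts_at_arrivals(draw)
    for k in range(7):
        poisson_k = math.exp(-a) * a ** k / math.factorial(k)
        assert abs(hist[k] / obs - poisson_k) < 0.015
print("both service laws give the Poisson distribution with mean", a)
```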
5.9. Consider M/E_2/1.
(a) Find the polynomial for G*(s).
(b) Solve for S(y) = P[time in system ≤ y].
5.10. Consider an M/D/1 system for which x̄ = 2 sec.
(a) Show that the residual service-time pdf h(x) is a rectangular distribution.
(b) For ρ = 0.25, show that the result of Eq. (5.111) with four terms may be used as a good approximation to the distribution of queueing time.
5.11. Consider an M/G/1 queue in which bulk arrivals occur at rate λ, with probability g_r that r customers arrive together at an arrival instant.
(a) Show that the z-transform of the number of customers arriving in an interval of length t is e^(−λt[1 − G(z)]), where G(z) = Σ_r g_r z^r.
(b) Show that the z-transform of the random variable v_n, the number of arrivals during the service of a customer, is B*[λ − λG(z)].
5.12. Consider the M/G/1 bulk arrival system of the previous problem. Using the method of imbedded Markov chains:
(a) Find the expected queue size. [HINT: Show that v̄ ≜ dV(z)/dz |_{z=1} = ρ and

    d²V(z)/dz² |_{z=1} = ρ²(C_b² + 1) + ρ(ḡC_g² + ḡ − 1)

where C_g is the coefficient of variation of the bulk group size and ḡ is the mean group size.]
(b) Show that the generating function for queue size is

    Q(z) = (1 − ρ)(1 − z)B*[λ − λG(z)] / {B*[λ − λG(z)] − z}

Using Little's result, find the ratio W/x̄ of the expected wait on queue to the average service time.
(c) Using the same method (imbedded Markov chain), find the expected number of groups in the queue (averaged over departure times). [HINTS: Show that D(z) = β*(λ − λz), where D(z) is the generating function for the number of groups arriving during the service time for an entire group and where β*(s) is the Laplace transform of the service-time density for an entire group. Also note that β*(s) = G[B*(s)], which allows us to show that r̄² = (x̄)²(g̅² − ḡ) + x̄²ḡ, where r̄² is the second moment of the group service time and g̅² the second moment of the group size.]
(d) Using Little's result, find W_g, the expected wait on queue for a group (measured from the arrival time of the group until the start of service of the first member of the group), and show that

    W_g = [ρḡx̄ / 2(1 − ρ)] [1 + C_g² + C_b²/ḡ]

(e) If the customers within a group arriving together are served in random order, show that the ratio of the mean waiting time for a single customer to the average service time for a single customer is W_g/x̄ from (d) increased by (1/2)ḡ(1 + C_g²) − 1/2.
5.13. Consider an M/G/1 system in which service is instantaneous but is only available at "service instants," the intervals between successive service instants being independently distributed with PDF F(x). The maximum number of customers that can be served at any service instant is m. Note that this is a bulk service system.
(a) Show that if q_n is the number of customers in the system just before the nth service instant, then

    q_{n+1} = q_n + v_n − m        q_n ≥ m
    q_{n+1} = v_n        q_n < m

where v_n is the number of arrivals in the interval between the nth and (n + 1)th service instants.
(b) Prove that the probability generating function of v_n is F*(λ − λz). Hence show that Q(z) is

    Q(z) = Σ_{k=0}^{m−1} p_k(z^m − z^k) / {z^m [F*(λ − λz)]^(−1) − 1}

where p_k = P[q̃ = k] (k = 0, ..., m − 1).
(c) The {p_k} can be determined from the condition that within the unit disk of the z-plane, the numerator must vanish when the denominator does. Hence show that if F(x) = 1 − e^(−μx),

    Q(z) = (z_m − 1)/(z_m − z)

where z_m is the zero of z^m[1 + λ(1 − z)/μ] − 1 outside the unit disk.
5.14. Consider an M/G/1 system with bulk service. Whenever the server becomes free, he accepts two customers from the queue into service simultaneously, or, if only one is on queue, he accepts that one; in either case, the service time for the group (of size 1 or 2) is taken from B(x). Let q_n be the number of customers remaining after the nth service instant. Let v_n be the number of arrivals during the nth service. Define B*(s), Q(z), and V(z) as transforms associated with the random variables x̃, q̃, and ṽ as usual. Let ρ = λx̄/2.
(a) Using the method of imbedded Markov chains, find

    E[q̃] = lim_{n→∞} E[q_n]

in terms of ρ, σ_b², and P[q̃ = 0] ≜ p_0.
(b) Find Q(z) in terms of B*(·), p_0, and p_1 ≜ P[q̃ = 1].
(c) Express p_1 in terms of p_0.
5.15. Consider an M/G/1 queueing system with the following variation. The server refuses to serve any customers unless at least two customers are ready for service, at which time both are "taken into" service. These two customers are served individually and independently, one after the other. The instant at which the second of these two is finished is called a "critical" time, and we shall use these critical times as the points in an imbedded Markov chain. Immediately following a critical time, if there are two more ready for service, they are both "taken into" service as above. If one or none is ready, then the server waits until a pair is ready, and so on. Let

    q_n = number of customers left behind in the system immediately following the nth critical time
    v_n = number of customers arriving during the combined service time of the nth pair of customers

(a) Derive a relationship between q_{n+1}, q_n, and v_{n+1}.
(b) Find

    V(z) = Σ_{k=0}^∞ P[v_n = k] z^k

(c) Derive an expression for Q(z) = lim_{n→∞} Q_n(z) in terms of p_0 = P[q̃ = 0], where

    Q_n(z) = Σ_{k=0}^∞ P[q_n = k] z^k

(d) How would you solve for p_0?
(e) Describe (do not calculate) two methods for finding q̄.
5.16. Consider an M/G/1 queueing system in which service is given as follows. Upon entry into service, a coin is tossed, which has probability p of giving Heads. If the result is Heads, then the service time for that customer is zero seconds. If Tails, his service time is drawn from the following exponential distribution:

    b(x) = μe^(−μx)        x ≥ 0

(a) Find the average service time x̄.
(b) Find the variance of the service time, σ_b².
(c) Find the expected waiting time W.
(d) Find W*(s).
(e) From (d), find the expected waiting time W.
(f) From (d), find W(t) = P[waiting time ≤ t].
5.17. Consider an M/G/1 queue. Let E be the event that T sec have elapsed since the arrival of the last customer. We begin at a random time and
measure the time w until event E next occurs. This measurement may involve the observation of many customer arrivals before E occurs.
(a) Let A(t) be the interarrival-time distribution for those intervals during which E does not occur. Find A(t).
(b) Find A*(s) = ∫_0^∞ e^(−st) dA(t).
(c) Find W*(s | n) = ∫_0^∞ e^(−sw) dW(w | n), where W(w | n) = P[time to event E ≤ w | n arrivals occur before E].
(d) Find W*(s) = ∫_0^∞ e^(−sw) dW(w), where W(w) = P[time to event E ≤ w].
(e) Find the mean time to event E.
5.18. Consider an M/G/1 system in which time is divided into intervals of length q sec each. Assume that arrivals are Bernoulli, that is,

    P[1 arrival in any interval] = λq
    P[0 arrivals in any interval] = 1 − λq
    P[> 1 arrival in any interval] = 0

Assume that a customer's service time x̃ is some multiple of q sec, such that

    P[service time = nq sec] = g_n        n = 0, 1, 2, ...
(a) Find E[number of arrivals in an interval].
(b) Find the average arrival rate.
(c) Express E[x̃] ≜ x̄ and E[x̃(x̃ − q)] ≜ x̄² − x̄q in terms of the moments of the g_n distribution (i.e., let g̅ᵏ ≜ Σ_{n=0}^∞ nᵏ g_n).
(d) Find γ_mn = P[m customers arrive in nq sec].
(e) Let v_m = P[m customers arrive during the service of a customer] and let

    V(z) = Σ_{m=0}^∞ v_m z^m        and        G(z) = Σ_{m=0}^∞ g_m z^m

Express V(z) in terms of G(z) and the system parameters λ and q.
(f) Find the mean number of arrivals during a customer service time from (e).
5.19. Suppose that in an M/G/1 queueing system the cost of making a customer wait t sec is c(t) dollars, where c(t) = αe^(βt). Find the average cost of queueing for a customer. Also determine the conditions necessary to keep the average cost finite.
5.20. We wish to find the interdeparture-time probability density function d(t) for an M/G/1 queueing system.
(a) Find the Laplace transform D*(s) of this density, conditioned first on a nonempty queue left behind and second on an empty queue left behind by a departing customer. Combine these results to get the Laplace transform of the interdeparture-time density, and from this find the density itself.
(b) Give an explicit form for the probability distribution D(t), or density d(t) = dD(t)/dt, of the interdeparture time when we have a constant service time, that is,

    B(x) = 0        x < T
    B(x) = 1        x ≥ T
5.21. Consider the following modified order of service for M/G/1. Instead of LCFS as in Figure 5.11, assume that after the interval x_1, the sub-busy period generated by C_2 occurs, which is followed by the sub-busy period generated by C_3, and so on, until the busy period terminates. Using the sequence of arrivals and service times shown in the upper contour of Figure 5.11a, redraw parts a, b, and c to correspond to the above order of service.
5.22. Consider an M/G/1 system in which a departing customer immediately joins the queue again with probability p, or departs forever with probability q = 1 − p. Service is FCFS, and the service time for a returning customer is independent of his previous service times. Let B*(s) be the transform for the service-time pdf and let B_T*(s) be the transform for a customer's total service-time pdf.
(a) Find B_T*(s) in terms of B*(s), p, and q.
(b) Let x̄_Tⁿ be the nth moment of the total service time. Find x̄_T and x̄_T² in terms of x̄, x̄², p, and q.
(c) Show that the following recurrence formula holds:

    x̄_Tⁿ = x̄ⁿ + (p/q) Σ_{k=1}^n (n choose k) x̄ᵏ x̄_T^(n−k)

(d) Let

    Q_T(z) = Σ_{k=0}^∞ p_kT z^k

where p_kT = P[number in system = k]. For λx̄ < q, prove that

    Q_T(z) = (1 − λx̄/q) q(1 − z)B*[λ(1 − z)] / {(q + pz)B*[λ(1 − z)] − z}

(e) Find N̄, the average number of customers in the system.
5.23. Consider a first-come-first-served M/G/1 queue with the following changes. The server serves the queue as long as someone is in the system. Whenever the system empties, the server goes away on vacation for a certain length of time, which may be a random variable. At the end of his vacation the server returns and begins to serve customers again; if he returns to an empty system, then he goes away on vacation again. Let F(z) = Σ_{j=1}^∞ f_j z^j be the z-transform for the number of customers awaiting service when the server returns from vacation to find at least one customer waiting (that is, f_j is the probability that at the initiation of a busy period the server finds j customers awaiting service).
(a) Derive an expression which gives q_{n+1} in terms of q_n, v_{n+1}, and j (the number of customer arrivals during the server's vacation).
(b) Derive an expression for Q(z) = lim_{n→∞} E[z^(q_n)] in terms of p_0 (equal to the probability that a departing customer leaves 0 customers behind). [HINT: Condition on j.]
(c) Show that p_0 = (1 − ρ)/F⁽¹⁾(1), where F⁽¹⁾(1) = ∂F(z)/∂z |_{z=1} and ρ = λx̄.
(d) Assume now that the service vacation will end whenever a new customer enters the empty system. For this case find F(z) and show that when we substitute it back into our answer for (b), we arrive at the classical M/G/1 solution.
5.24. We recognize that an arriving customer who finds k others in the system is delayed by the remaining service time of the customer in service plus the sum of (k − 1) complete service times.
(a) Using the notation and approach of Exercise 5.7, show that we may express the transform of the waiting-time pdf as

    W*(s) = p_0 + Σ_{k=1}^∞ ∫_0^∞ p_k(x_0) [B*(s)]^(k−1) {∫_0^∞ e^(−sy) r(y + x_0) e^(−∫_0^y r(x_0 + u) du) dy} dx_0

(b) Show that the expression in (a) reduces to W*(s) as given in Eq. (5.106).
5.25. Let us relate sk, the e
h
moment of the time in system to N", the k
t h
moment of the number in system.
(a) Show that Eq. (5.98) leads directly to Little's result, namely

    E[N] = λT
(b) From Eq. (5.98) establish the second-moment relationship

    E[N²] − E[N] = λ² E[s²]
(c) Prove that the general relationship is

    E[N(N − 1)(N − 2) ··· (N − k + 1)] = λ^k E[s^k]
6
The Queue G/M/m
We have so far studied systems of the type M/M/1 and its variants
(elementary queueing theory) and M/G/1 (intermediate queueing theory).
The next natural system to study is G/M/1, in which we have an arbitrary
interarrival-time distribution A(t) and an exponentially distributed service
time. It turns out that the m-server system G/M/m is almost as easy to study
as is the single-server system G/M/1, and so we proceed directly to the
m-server case. This study falls within intermediate queueing theory along
with M/G/1, and it too may be solved using the method of the imbedded
Markov chain, as elegantly presented by Kendall [KEND 51].
6.1. TRANSITION PROBABILITIES FOR THE IMBEDDED
MARKOV CHAIN (G/M/m)
The system under consideration contains m servers, who render service in
order of arrival. Customers arrive singly with interarrival times identically
and independently distributed according to A(t) and with a mean time be-
tween arrivals equal to 1/λ. Service times are distributed exponentially with
mean 1/μ, the same distribution applying to each server independently. We
consider steady-state results only (see discussion below).

As was the case in M/G/1, where the state variable became a continuous
variable, so too in the system G/M/m we have a continuous-state variable
in which we are required to keep track of the elapsed time since the last
arrival, as well as the number in system. This is true since the probability of
an arrival in any particular time interval depends upon the elapsed time (the
"age") since the last arrival. It is possible to proceed with the analysis by
considering the two-dimensional state description consisting of the age since
the last arrival and the number in system; such a procedure is again referred
to as the method of supplementary variables. A second approach, very much
like that which we used for M/G/1, is the method of the imbedded Markov
chain, which we pursue below. We have already seen a third approach,
namely, the method of stages from Chapter 4.
Figure 6.1 The imbedded Markov points.
If we are to use the imbedded Markov chain approach then it must be
that the points we select as the regeneration points implicitly inform us of the
elapsed time since the last arrival, in a way analogous to the expended
service time in the case M/G/1. The natural set of points to choose for this
purpose is the set of arrival instants. It is certainly clear that at these epochs
the elapsed time since the last arrival is zero. Let us therefore define

    q'_n = number of customers found in the system immediately prior
           to the arrival of C_n

We use q'_n for this random variable to distinguish it from q_n, the number of
customers left behind by the departure of C_n in the M/G/1 system. In Figure
6.1 we show a sequence of arrival times and identify them as critical points
imbedded in the time axis. It is clear that the sequence {q'_n} forms a discrete-
state Markov chain. Defining
    v'_{n+1} = the number of customers served between the arrival of C_n
               and C_{n+1}

we see immediately that the following fundamental relation must hold:

    q'_{n+1} = q'_n + 1 − v'_{n+1}   (6.1)
We must now calculate the transition probabilities associated with this
Markov chain, and so we define

    p_ij = P[q'_{n+1} = j | q'_n = i]   (6.2)

It is clear that p_ij is merely the probability that i + 1 − j customers are
served during an interarrival time. It is further clear that

    p_ij = 0   for j > i + 1   (6.3)

since there are at most i + 1 present between the arrival of C_n and C_{n+1}.
The Markov state-transition-probability diagram has transitions such as
shown in Figure 6.2; in this figure we show only the transitions out of state E_i.
Figure 6.2 State-transition-probability diagram for the G/M/m imbedded
Markov chain.
We are concerned with steady-state results only and so we must inquire as
to the conditions under which this Markov chain will be ergodic. It may
easily be shown that the condition for ergodicity is, as we would expect,
λ < mμ, where λ is the average arrival rate associated with our input dis-
tribution and μ is the parameter associated with our exponential service time
(that is, x̄ = 1/μ). As defined in Chapter 2 and as used in Section 3.5, we
define the utilization factor for this system as

    ρ ≜ λ/(mμ)   (6.4)

Once again this is the average rate at which work enters the system (λx̄ = λ/μ
sec of work per elapsed second) divided by the maximum rate at which the
system can do work (m sec of work per elapsed second). Thus our condition
for ergodicity is simply ρ < 1. In the ergodic case we are assured that an
equilibrium probability distribution will exist describing the number of
customers present at the arrival instants; thus we define

    r_k = lim_{n→∞} P[q'_n = k]   (6.5)
and it is this probability distribution we seek for the system G/M/m. As we
know from Chapter 2, the direct method of solution for this equilibrium
distribution requires that we solve the following system of linear equations:

    r = rP   (6.6)

where

    Σ_j r_j = 1   (6.7)

and P is the matrix whose elements are the one-step transition probabilities
p_ij.
Our first task then is to find these one-step transition probabilities. We
must consider four regions in the (i, j) plane as shown in Figure 6.3, which gives
the case m = 6. Regarding the region labeled 1, we already know from Eq.
(6.3) that p_ij = 0 for i + 1 < j. Now for region 2 let us consider the range
j ≤ i + 1 ≤ m, which is the case in which no customers are waiting and all
present are engaged with their own server. During the interarrival period, we
Figure 6.3 Range of validity for the p_ij equations
(equation numbers are also given in parentheses).
see that i + I  j customers will complete their service. Since service times
are exponentially distributed, the probability that any given cust omer will
dep art within I sec after the arrival of C
n
is given by I  e
P
' ; similarly the
probability that a given customer will not depart by this time is «r'. There
fore, in thi s region we have
P[i + I  j departures within I sec after enarrives I qn' = ij
= ( . i + I .) [I _ eP']i+1i [eP'F (6.8)
1+ IJ
where the binomi al coefficient
(
. i+ 1
I+IJ J
merel y counts the number of ways in which we can choose the i + I  j
cust omers to depart out of the i + I that are available in the system. With
t
n
+! as the interarri val time between C; and C
n
+" Eq . (6.8) gives =
j I q; = i , I
n
+
1
= Ij. Removing the condition on I n+! we then ha ve the one
step tr ansition probability in this ran ge, namel y,
Po = L'" (i I) (I  e
P
' ji+1ie
p
t; dA(/) j i + I m  (6.9)
Next consider the range m ≤ j ≤ i + 1, i ≥ m (region 3),* which corre-
sponds to the simple case in which all m servers are busy throughout the
interarrival interval. Under this assumption (that all m servers remain busy),
since each service time is exponentially distributed (memoryless), the
number of customers served during this interval will be Poisson distributed
(in fact it is a pure Poisson death process) with parameter mμ; that is, defining
"t, all m busy" as the event that t_{n+1} = t and all m servers remain busy during
t_{n+1}, we have

    P[k customers served | t, all m busy] = ((mμt)^k / k!) e^{−mμt}

* The point i = m − 1, j = m can properly lie either in region 2 or region 3.
As pointed out earlier, if we are to go from state i to state j, then exactly
i + 1 − j customers must have been served during the interarrival time;
taking account of this and removing the condition on t_{n+1} we have

    p_ij = ∫_0^∞ P[i + 1 − j served | t, all m busy] dA(t)

or

    p_ij = ∫_0^∞ ((mμt)^{i+1−j} / (i+1−j)!) e^{−mμt} dA(t)   (6.10)
Note that in Eq. (6.10) the indices i and j appear only as the difference
i + 1 − j, and so it behooves us to define a new quantity with a single index:

    β_{i+1−j} ≜ p_ij,   m ≤ j ≤ i + 1, m ≤ i   (6.11)

where β_n = the probability of serving n customers during an interarrival time
given that all m servers remain busy during this interval; thus, with n = i +
1 − j, we have
    β_n = p_{i,i+1−n} = ∫_{t=0}^∞ ((mμt)^n / n!) e^{−mμt} dA(t)   (6.12)
The last case we must consider (region 4) is j < m < i + 1, which describes
the situation where C_n arrives to find m customers in service and i − m
waiting in queue (which he joins); upon the arrival of C_{n+1} there are exactly j
customers, all of whom are in service. If we assume that it requires y sec until
the queue empties then one may calculate p_ij in a straightforward manner to
yield (see Exercise 6.1)

    p_ij = ∫_0^∞ [∫_0^t (m choose j)(1 − e^{−μ(t−y)})^{m−j} e^{−jμ(t−y)}
           × ((mμy)^{i−m}/(i−m)!) mμ e^{−mμy} dy] dA(t),   j < m < i + 1   (6.13)
Thus Eqs. (6.3), (6.9), (6.12), and (6.13) give the complete description of the
one-step transition probabilities for the G/M/m system.

Having established the form for our one-step transition probabilities we
may place them in the transition matrix

        | p_00        p_01        0            0        ...                       |
        | p_10        p_11        p_12         0        ...                       |
        | p_20        p_21        p_22         p_23     ...                       |
        |   .                                                                     |
    P = | p_{m−2,0}   p_{m−2,1}   ...  p_{m−2,m−1}   0        0      ...          |
        | p_{m−1,0}   p_{m−1,1}   ...  p_{m−1,m−1}   β_0      0      ...          |
        | p_{m,0}     p_{m,1}     ...  p_{m,m−1}     β_1      β_0    ...          |
        |   .                                                                     |
        | p_{m+n,0}   p_{m+n,1}   ...  p_{m+n,m−1}   β_{n+1}  β_n    ...  β_0     |
        |   .                                                                     |

In this matrix all terms above the upper diagonal are zero, and the terms
β_n are given through Eq. (6.12). The "boundary" terms denoted in this
matrix by their generic symbol p_ij are given either by Eq. (6.9) or Eq. (6.13)
according to the range of subscripts i and j. Of most importance to us are the
transition probabilities β_n.
6.2. CONDITIONAL DISTRIBUTION OF QUEUE SIZE

Now we are in a position to find the equilibrium probabilities r_k, which
must satisfy the system of linear equations given in Eq. (6.6). At this point we
perhaps could guess at the form for r_k that satisfies these equations, but
rather than that we choose to motivate the results that we obtain by the
following intuitive arguments. In order to do this we define

    N_k(t) = number of arrival instants in the interval (0, t) in which
             the arriving customer finds the system in state E_k, given
             0 customers at t = 0   (6.14)
Note from Figure 6.2 that the system can move up by at most one state, but
may move down by many states in any single transition. We consider this
motion between states and define (for m − 1 ≤ k)

    u_k = E[number of times state E_{k+1} is reached between two
            successive visits to state E_k]   (6.15)

We have that the probability of reaching state E_{k+1} no times between returns
to state E_k is equal to 1 − β_0 (that is, given we are in state E_k the only way
we can reach state E_{k+1} before our next visit to state E_k is for no customers to
be served, which has probability β_0, and so the probability of not getting to
E_{k+1} first is 1 − β_0, the probability of serving at least one). Furthermore, let

    γ = P[leave state E_{k+1} and return to it some time later without passing
          through states E_j where j ≤ k]
      = P[leave state E_{k+1} and return to it later without passing through
          state E_k]

This last is true since a visit to state E_j for j ≤ k must result in a visit to state
E_k before next returning to state E_{k+1} (we move up only one state at a time).
We note that γ is independent of k so long as k ≥ m − 1 (i.e., all m servers
are busy). We have the simple calculation

    P[n occurrences of state E_{k+1} between two successive visits to state E_k]
        = γ^{n−1}(1 − γ)β_0
This last equation is calculated as the probability (β_0) of reaching state
E_{k+1} at all, times the probability (γ^{n−1}) of returning to E_{k+1} a total of n − 1
times without first touching state E_k, times the probability (1 − γ) of then
visiting state E_k without first returning to state E_{k+1}. From this we may cal-
culate

    u_k = Σ_{n=1}^∞ n γ^{n−1}(1 − γ)β_0

as the average number of visits to E_{k+1} between successive visits to state E_k.
Thus

    u_k = β_0/(1 − γ)   for k ≥ m − 1

Note that u_k is independent of k and so we may drop the subscript, in which
case we have

    u ≜ u_k = β_0/(1 − γ)   for k ≥ m − 1   (6.16)

From the definition in Eq. (6.15), u must be the limit of the ratio of the
number of times we find ourselves in state E_{k+1} to the number of times we find
ourselves in state E_k; thus we may write

    u = lim_{t→∞} N_{k+1}(t)/N_k(t) = β_0/(1 − γ),   k ≥ m − 1   (6.17)

However, the limit is merely the ratio of the steady-state probability of finding
the system in state E_{k+1} to the probability of finding it in state E_k. Con-
sequently, we have established

    r_{k+1}/r_k = u ≜ σ,   k ≥ m − 1   (6.18)

The solution to this last set of equations is clearly

    r_k = K σ^k,   k ≥ m − 1   (6.19)

for some constant K. This is a basic result, which says that the distribution of
the number of customers found at the arrival instants is geometric for the case
k ≥ m − 1. It remains for us to find σ and K, as well as r_k for k < m − 1.
Our intuitive reasoning (which may easily be made rigorous by results
from renewal theory) has led us to the basic equation (6.19). We could have
"pulled this out of a hat" by guessing that the solution to Eq. (6.6) for the
probability vector r ≜ [r_0, r_1, r_2, ...] might perhaps be of the form

    r_k = K σ^k   (6.20)

This flash of brilliance would, of course, have been correct (as our calculations
have just shown); once we suspect this result we may easily verify it by
considering the kth equation (k ≥ m) in the set (6.6), which reads

    r_k = K σ^k = Σ_{i=0}^∞ r_i p_ik
               = Σ_{i=k−1}^∞ r_i p_ik
               = Σ_{i=k−1}^∞ K σ^i β_{i+1−k}

Canceling the constant K as well as common factors of σ we have

    σ = Σ_{i=k−1}^∞ σ^{i+1−k} β_{i+1−k}

Changing the index of summation (n = i + 1 − k) we finally have

    σ = Σ_{n=0}^∞ β_n σ^n
Of course we know β_n from Eq. (6.12), which permits the following calcula-
tion:

    σ = Σ_{n=0}^∞ σ^n ∫_{t=0}^∞ ((mμt)^n / n!) e^{−mμt} dA(t)
      = ∫_0^∞ e^{−(mμ − mμσ)t} dA(t)

This equation must be satisfied if our assumed ("calculated") guess is to be
correct. However, we recognize this last integral as the Laplace transform
for the pdf of interarrival times evaluated at a special point; thus we have

    σ = A*(mμ − mμσ)   (6.21)

This functional equation for σ must be satisfied if our assumed solution is to
be acceptable. It can be shown [TAKA 62] that so long as ρ < 1 then there
is a unique real solution for σ in the range 0 < σ < 1, and it is this solution
which we seek; note that σ = 1 must always be a solution of the functional
equation since A*(0) = 1.
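The root of Eq. (6.21) rarely has a closed form, but it is easy to find numerically. A minimal sketch of ours (the function name is an assumption, not the text's): since the derivative of A*(mμ − mμσ) at σ = 1 equals mμx̄·(m arrivals)... more simply, the fixed point at σ = 1 is repelling when ρ < 1, so simple fixed-point iteration started inside (0, 1) converges to the root we seek.

```python
def sigma_root(A_star, m, mu, tol=1e-12):
    """Solve sigma = A*(m*mu - m*mu*sigma), Eq. (6.21), by fixed-point
    iteration; A_star is the Laplace transform of the interarrival pdf."""
    sigma = 0.5
    for _ in range(100_000):
        nxt = A_star(m * mu * (1.0 - sigma))
        if abs(nxt - sigma) < tol:
            return nxt
        sigma = nxt
    return sigma

# M/M/1 check: A*(s) = lam/(s + lam) should give sigma = rho = lam/mu
lam, mu = 1.0, 2.0
print(sigma_root(lambda s: lam / (s + lam), 1, mu))  # ≈ 0.5
```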
We now have the defining equation for σ and it remains for us to find the
unknown constant K as well as r_k for k = 0, 1, 2, ..., m − 2. Before we
settle these questions, however, let us establish some additional important
results for the G/M/m system using Eq. (6.19), our basic result so far. This
basic result establishes that the distribution for number in system is geo-
metrically distributed in the range k ≥ m − 1. Working from there let us now
calculate the probability that an arriving customer must wait for service.
Clearly

    P[arrival queues] = Σ_{k=m}^∞ r_k = Σ_{k=m}^∞ K σ^k
                      = K σ^m/(1 − σ)   (6.22)
(This operation is permissible since 0 < σ < 1 as discussed above.) The
conditional probability of finding a queue length of size n, given that a
customer must queue, is

    P[queue size = n | arrival queues] = r_{m+n} / P[arrival queues]

and so

    P[queue size = n | arrival queues] = K σ^{n+m} / (K σ^m/(1 − σ))
                                       = (1 − σ)σ^n,   n ≥ 0   (6.23)

Thus we conclude that the conditional queue-length distribution (given that a
queue exists) is geometric for any G/M/m system.
6.3. CONDITIONAL DISTRIBUTION OF WAITING TIME

Let us now seek the distribution of queueing time, given that a customer
must queue. From Eq. (6.23), a customer who queues will find m + n in the
system with probability (1 − σ)σ^n. Under such conditions our arriving
customer must wait until n + 1 customers depart from the system before he
is allowed into service, and this interval will constitute his waiting time. Thus
we are asking for the distribution of an interval whose length is made up of
the sum of n + 1 independently and exponentially distributed random varia-
bles (each with parameter mμ). The resulting convolution is most easily
expressed as a transform, which gives rise to the usual product of transforms.
Thus defining W*(s) to be the Laplace transform of the queueing-time pdf as in
Eq. (5.103) (i.e., as E[e^{−sw̃}]), and defining

    W*(s | n) = E[e^{−sw̃} | arrival queues and queue size = n]   (6.24)

we have

    W*(s | n) = (mμ/(s + mμ))^{n+1}   (6.25)
But clearly

    W*(s | arrival queues) = Σ_{n=0}^∞ W*(s | n) P[queue size = n | arrival queues]

and so from Eqs. (6.25) and (6.23) we have

    W*(s | arrival queues) = Σ_{n=0}^∞ (1 − σ)σ^n (mμ/(s + mμ))^{n+1}
                           = (1 − σ) mμ/(s + mμ − mμσ)

Luckily, we recognize the inverse of this Laplace transform by inspection,
thereby yielding the following conditional pdf for queueing time:

    w(y | arrival queues) = (1 − σ) mμ e^{−mμ(1−σ)y},   y ≥ 0   (6.26)

Quite a surprise! The conditional pdf for queueing time is exponentially
distributed for the system G/M/m!

Thus far we have two principal results: first, that the conditional queue
size is geometrically distributed with parameter σ as given in Eq. (6.23); and
second, that the conditional pdf for queueing time is exponentially distributed
with parameter mμ(1 − σ) as given in Eq. (6.26). The parameter σ is found as
the unique root in the range 0 < σ < 1 of the functional equation (6.21).
We are still searching for the distribution r_k and have carried that solution to
the point of Eq. (6.20); we have as yet to evaluate the constant K as well as
the first m − 1 terms in that distribution. Before we proceed with these last
steps let us study an important special case.
6.4. THE QUEUE G/M/1

This is perhaps the most important system and forms the "dual" to the
system M/G/1. Since m = 1, Eq. (6.19) gives us the solution for r_k
for all values of k, that is,

    r_k = K σ^k,   k = 0, 1, 2, ...

K is now easily evaluated since these probabilities must sum to unity. From
this we obtain immediately

    r_k = (1 − σ)σ^k,   k = 0, 1, 2, ...   (6.27)

where, of course, σ is the unique root of

    σ = A*(μ − μσ)   (6.28)

in the range 0 < σ < 1. Thus the system G/M/1 gives rise to a geometric
distribution for the number of customers found in the system by an arrival;
this applies as an unconditional statement regardless of the form of the
interarrival-time distribution. We have already seen an example of this in Eq.
(4.42) for the system E_r/M/1. We comment that the state probabilities,
p_k = P[k in system], differ from Eq. (6.27) in that p_0 = 1 − ρ whereas
r_0 = 1 − σ, and p_k = ρ(1 − σ)σ^{k−1} = ρ r_{k−1} for k = 1, 2, ... {see Eq.
(3.24), p. 209 of [COHE 69]}; in the M/G/1 queue we found p_k = r_k.
A customer will be forced to wait for service with probability 1 − r_0 = σ,
and so we may use Eq. (6.26) to obtain the unconditional distribution of
waiting time as follows (where we define A to be the event "arrival queues"
and A', the complementary event):

    W(y) = P[queueing time ≤ y]
         = 1 − P[queueing time > y | A]P[A]
             − P[queueing time > y | A']P[A']   (6.29)

Clearly, the last term in this equation is zero; the remaining conditional
probability in this last expression may be obtained by integrating Eq. (6.26)
from y to infinity for m = 1; this computation gives e^{−μ(1−σ)y}, and since σ is the
probability of queueing we have immediately from Eq. (6.29) that

    W(y) = 1 − σ e^{−μ(1−σ)y},   y ≥ 0   (6.30)

We have the remarkable conclusion that the unconditional waiting-time
distribution is exponential (with a jump of size 1 − σ at the origin) for the
system G/M/1. If we compare this result to Eq. (5.123) and Figure 5.9, which
gives the waiting-time distribution for M/M/1, we see that the results agree
with σ replacing ρ. That is, the queueing-time distribution for G/M/1 is of
the same form as for M/M/1!

By straightforward calculation, we also have that the mean wait in G/M/1 is

    W = σ/(μ(1 − σ))   (6.31)
Example

Let us now illustrate this method for the example M/M/1. Since A(t) =
1 − e^{−λt} (t ≥ 0) we have immediately

    A*(s) = λ/(s + λ)

Using Eq. (6.28) we find that σ must satisfy

    σ = λ/(μ − μσ + λ)

or

    μσ² − (μ + λ)σ + λ = 0

which yields

    (σ − 1)(μσ − λ) = 0   (6.32)

Of these two solutions for σ, the case σ = 1 is unacceptable due to stability
conditions (0 < σ < 1) and therefore the only acceptable solution is

    σ = λ/μ = ρ   (6.33)

which yields from Eq. (6.27)

    r_k = (1 − ρ)ρ^k,   k = 0, 1, 2, ...   M/M/1   (6.34)

This, of course, is our usual solution for M/M/1. Further, using σ = ρ as the
value for σ in our waiting-time distribution [Eq. (6.30)] we come up imme-
diately with the known solution given in Eq. (5.123).
Example

As a second (slightly more interesting) example let us consider a G/M/1
system with an interarrival-time distribution such that

    A*(s) = 2μ²/((s + μ)(s + 2μ))   (6.35)

Note that this corresponds to an E_2/M/1 system in which the two arrival
stages have different death rates; we choose these rates to be multiples
of the service rate μ. As always our first step is to evaluate σ from Eq. (6.28)
and so we have

    σ = 2μ²/((μ − μσ + μ)(μ − μσ + 2μ))

This leads directly to the cubic equation

    σ³ − 5σ² + 6σ − 2 = 0

We know for sure that σ = 1 is always a root of Eq. (6.28), and this permits
the straightforward factoring

    (σ − 1)(σ − 2 − √2)(σ − 2 + √2) = 0

Of these three roots it is clear that only σ = 2 − √2 is acceptable (since
0 < σ < 1 is required). Therefore Eq. (6.27) immediately gives the distri-
bution for the number in system (seen by arrivals)

    r_k = (√2 − 1)(2 − √2)^k,   k = 0, 1, 2, ...   (6.36)

Similarly we find

    W(y) = 1 − (2 − √2) e^{−(√2−1)μy},   y ≥ 0   (6.37)

for the waiting-time distribution.
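This root is quick to confirm numerically (a check of ours, not part of the text): σ = 2 − √2 should satisfy the fixed-point equation (6.28) exactly, here with μ = 1.

```python
import math

mu = 1.0
A_star = lambda s: 2 * mu**2 / ((s + mu) * (s + 2 * mu))   # Eq. (6.35)
sigma = 2 - math.sqrt(2)                                   # claimed root
# Eq. (6.28): sigma = A*(mu - mu*sigma)
print(sigma, A_star(mu - mu * sigma))   # both ≈ 0.585786...
```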
Let us now return to the more general system G/M/m.
6.5. THE QUEUE G/M/m

At the end of Section 6.3 we pointed out that the only remaining unknowns
for the general G/M/m solution were: K, an unknown constant, and the
m − 1 "boundary" probabilities r_0, r_1, ..., r_{m−2}. That is, our solution
appears in the form of Eq. (6.20); we may factor out the term Kσ^{m−1} to obtain

    r_k = J σ^{k−m+1},   k ≥ m − 1   (6.38)

where

    R_k ≜ r_k/J,   k = 0, 1, ..., m − 2   (6.39)

Furthermore, for convenience we define

    J ≜ K σ^{m−1}   (6.40)
We have as yet not used the first m − 1 equations represented by the matrix
equation (6.6). We now require them for the evaluation of our unknown
terms (of which there are m − 1). In terms of our one-step transition prob-
abilities p_ik we then have

    R_k = Σ_{i=k−1}^∞ R_i p_ik,   k = 0, 1, ..., m − 2

where we may extend the definition for R_k in Eq. (6.39) beyond k = m − 2
by use of Eq. (6.19), that is, R_i = σ^{i−m+1} for i ≥ m − 1. The tail of the
sum above may be evaluated to give

    R_k = Σ_{i=k−1}^{m−2} R_i p_ik + Σ_{i=m−1}^∞ σ^{i+1−m} p_ik

Solving for R_{k−1}, the lowest-order term present, we have

    R_{k−1} = [R_k − Σ_{i=k}^{m−2} R_i p_ik − Σ_{i=m−1}^∞ σ^{i+1−m} p_ik] / p_{k−1,k}   (6.41)
for k = 1, 2, ..., m − 1. The set of equations (6.41) is a triangular set in the
unknowns R_k; in particular we may start with the fact that R_{m−1} = 1 [see
Eq. (6.38)] and then solve recursively over the range k = m − 1, m − 2, ...,
1, 0 in order. Finally we may use the conservation of probability to evaluate
the constant J (this being equivalent to evaluating K) as

    J Σ_{k=0}^{m−2} R_k + J Σ_{k=m−1}^∞ σ^{k−m+1} = 1

or

    J = 1 / [1/(1 − σ) + Σ_{k=0}^{m−2} R_k]   (6.42)
This then provides a complete prescription for evaluating the distribution of
the number of customers in the system. We point out that Takács [TAKA 62]
gives an explicit (albeit complex) expression for these boundary probabilities.
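To see the whole prescription in action, here is a sketch (our own worked example, not from the text) for D/M/2: deterministic interarrivals of length T and two exponential servers of rate μ. The integrals against dA(t) collapse to evaluation at t = T; the inner integral of Eq. (6.50) is done numerically; the resulting r_0 is cross-checked against the closed form derived in Section 6.6.

```python
import math

mu, T = 1.0, 1.0                              # rho = 1/(2*mu*T) = 0.5

# sigma = A*(2mu - 2mu*sigma) = exp(-2*mu*T*(1 - sigma))    [Eq. (6.21)]
sigma = 0.5
for _ in range(1000):
    sigma = math.exp(-2.0 * mu * T * (1.0 - sigma))

p01 = math.exp(-mu * T)                       # Eq. (6.9): i = 0, j = 1
p11 = 2.0 * (1.0 - math.exp(-mu * T)) * math.exp(-mu * T)   # Eq. (6.49)

def p_i1(i, steps=4000):
    """Eq. (6.50) with dA(t) a unit mass at t = T (midpoint-rule inner
    integral)."""
    h, total = T / steps, 0.0
    for k in range(steps):
        y = (k + 0.5) * h
        total += ((2 * mu * y) ** (i - 2) / math.factorial(i - 2)
                  * (math.exp(-mu * y) - math.exp(-mu * T)) * 2 * mu * h)
    return 2.0 * math.exp(-mu * T) * total

# Eq. (6.41) for m = 2, k = 1, with R_1 = 1:
tail = p11 + sum(sigma ** (i - 1) * p_i1(i) for i in range(2, 60))
R0 = (1.0 - tail) / p01
J = 1.0 / (1.0 / (1.0 - sigma) + R0)          # Eq. (6.42)
r0 = J * R0
W_mean = J * sigma / (2 * mu * (1.0 - sigma) ** 2)   # mean wait

# cross-check r_0 against the Section 6.6 closed form, A*(s) = e^{-sT}:
A = lambda s: math.exp(-s * T)
r0_closed = (1 - sigma) * (1 - 2 * A(mu)) / (1 - sigma - A(mu))
print(r0, r0_closed)                          # should agree closely
```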
Let us now determine the distribution of waiting time in this system [we
have already seen the conditional distribution in Eq. (6.26)]. First we have the
probability that an arriving customer need not queue, given by

    W(0) = Σ_{k=0}^{m−1} r_k = J Σ_{k=0}^{m−1} R_k   (6.43)

On the other hand, if a customer arrives to find k ≥ m others in the system
he must wait until exactly k − m + 1 customers depart before he may enter
service. Since there are m servers working continuously during his wait,
the interdeparture times must be exponentially distributed with parameter
mμ, and so his waiting time must be of the form of a (k − m + 1)-stage
Erlangian distribution as given in Eq. (2.147). Thus for this case (k ≥ m) we
may write

    P[w̃ ≤ y | customer finds k in system] = ∫_0^y (mμ(mμx)^{k−m}/(k−m)!) e^{−mμx} dx
If we now remove the condition on k we may write the unconditional dis-
tribution as

    W(y) = W(0) + J Σ_{k=m}^∞ σ^{k−m+1} ∫_0^y (mμ(mμx)^{k−m}/(k−m)!) e^{−mμx} dx
         = W(0) + Jσ ∫_0^y mμ e^{−mμ(1−σ)x} dx   (6.44)

We may now use the expression for J in Eq. (6.42) and for W(0) in Eq. (6.43)
and carry out the integration in Eq. (6.44) to obtain

    W(y) = 1 − (σJ/(1 − σ)) e^{−mμ(1−σ)y},   y ≥ 0   (6.45)

This is the final solution for our waiting-time distribution and shows that in
the general case G/M/m we still have the exponential distribution (with an
accumulation point at the origin) for waiting time!
We may calculate the average waiting time either from Eq. (6.45) or as
follows. As we saw, a customer who arrives to find k ≥ m others in the
system must wait until k − m + 1 services are complete, each of which
takes on the average 1/mμ sec. We now sum over all those cases where our
customer must wait to obtain

    E[w̃] = W = Σ_{k=m}^∞ (1/mμ)(k − m + 1) r_k

But in this range we know that r_k = Kσ^k and so

    W = (K/mμ) Σ_{k=m}^∞ (k − m + 1)σ^k

and this is easily calculated to yield

    W = Jσ/(mμ(1 − σ)²)
6.6. THE QUEUE G/M/2

Let us see how far we can get with the system G/M/2. From Eq. (6.19)
we have immediately

    r_k = K σ^k,   k = 1, 2, ...

Conserving probability we find

    Σ_{k=0}^∞ r_k = 1 = r_0 + Σ_{k=1}^∞ K σ^k

This yields the following relationship between K and r_0:

    K = (1 − r_0)(1 − σ)/σ   (6.46)
Our task now is to find another relation between K and r_0. This we may do
from Eq. (6.41), which states

    R_0 = [R_1 − Σ_{i=1}^∞ σ^{i−1} p_{i1}] / p_{01}   (6.47)

But R_1 = 1. The denominator is given by Eq. (6.9), namely,

    p_{01} = ∫_0^∞ [1 − e^{−μt}]^0 e^{−μt} dA(t)

This we recognize as

    p_{01} = A*(μ)   (6.48)
Regarding the one-step transition probabilities in the numerator sum of
Eq. (6.47) we find they break into two regions: the term p_{11} must be calculated
from Eq. (6.9) and the terms p_{i1} for i = 2, 3, 4, ... must be calculated from
Eq. (6.13). Proceeding we have

    p_{11} = ∫_0^∞ (2 choose 1)[1 − e^{−μt}] e^{−μt} dA(t)

Again we recognize this as the transform

    p_{11} = 2A*(μ) − 2A*(2μ)   (6.49)

Also for i = 2, 3, 4, ..., we have

    p_{i1} = ∫_0^∞ 2e^{−μt} [∫_0^t ((2μy)^{i−2}/(i−2)!)(e^{−μy} − e^{−μt}) 2μ dy] dA(t)   (6.50)

Substituting these last equations into Eq. (6.47) we then have

    R_0 = (1/A*(μ))[1 − 2A*(μ) + 2A*(2μ) − Σ_{i=2}^∞ σ^{i−1} p_{i1}]   (6.51)
The summation in this equation may be carried out within the integral signs
of Eq. (6.50) to give

    Σ_{i=2}^∞ σ^{i−1} p_{i1} = 2A*(2μ) + [2A*(2μ − 2μσ) − 4σA*(μ)]/(2σ − 1)   (6.52)

But from Eq. (6.21) we recognize that σ = A*(2μ − 2μσ) and so we have

    Σ_{i=2}^∞ σ^{i−1} p_{i1} = 2A*(2μ) + (2σ/(2σ − 1))[1 − 2A*(μ)]

Substituting back into Eq. (6.51) we find

    R_0 = [2A*(μ) − 1]/[(2σ − 1)A*(μ)]

However from Eq. (6.39) we know that

    R_0 = r_0/(Kσ)

and so we may express r_0 as

    r_0 = Kσ[1 − 2A*(μ)]/[(1 − 2σ)A*(μ)]   (6.53)
Thus Eqs. (6.46) and (6.53) give us two equations in our two unknowns K
and r_0, which when solved simultaneously lead to

    r_0 = (1 − σ)[1 − 2A*(μ)] / [1 − σ − A*(μ)]

    K = A*(μ)(1 − σ)(1 − 2σ) / (σ[1 − σ − A*(μ)])
Comparing Eq. (6.56) with our results from Chapter 3 [Eqs. (3.37) and (3.39)]
we find that they agree for m = 2.

This completes our study of the G/M/m queue. Some further results of
interest may be found in [DESM 73]. In the next chapter, we view transforms
as probabilities and gain considerable reduction in the analytic effort required
to solve equilibrium and transient queueing problems.
REFERENCES

COHE 69  Cohen, J. W., The Single Server Queue, Wiley (New York), 1969.
DESM 73  De Smit, J. H. A., "On the Many Server Queue with Exponential
         Service Times," Advances in Applied Probability, 5, 170-182 (1973).
KEND 51  Kendall, D. G., "Some Problems in the Theory of Queues," Journal
         of the Royal Statistical Society, Ser. B, 13, 151-185 (1951).
TAKA 62  Takács, L., Introduction to the Theory of Queues, Oxford University
         Press (New York), 1962.
EXERCISES

6.1. Prove Eq. (6.13). [HINT: condition on an interarrival time of duration
     t and then further condition on the time (≤ t) it will take to empty the
     queue.]

6.2. Consider E_2/M/1 (with infinite queueing room).
     (a) Solve for r_k in terms of σ.
     (b) Evaluate σ explicitly.

6.3. Consider M/M/m.
     (a) How do p_k and r_k compare?
     (b) Compare Eqs. (6.22) and (3.40).
6.4. Prove Eq. (6.31).
6.5. Show that Eq. (6.52) follows from Eq. (6.50).
6.6. Consider an H_2/M/1 system in which λ_1 = 2, λ_2 = 1, μ = 2, and
     α_1 = 5/8.
     (a) Find σ.
     (b) Find r_k.
     (c) Find w(y).
     (d) Find W.
6.7. Consider a D/M/1 system with μ = 2 and with the same ρ as in the
     previous exercise.
     (a) Find σ (correct to two decimal places).
     (b) Find r_k.
     (c) Find w(y).
     (d) Find W.
6.8. Consider a G/M/1 queueing system with room for at most two customers
     (one in service plus one waiting). Find r_k (k = 0, 1, 2) in terms of μ
     and A*(s).
6.9. Consider a G/M/1 system in which the cost of making a customer wait
     y sec is

         c(y) = ae^{by}

     (a) Find the average cost of queueing for a customer.
     (b) Under what conditions will the average cost be finite?
7
The Method of Collective Marks
When one studies stochastic processes such as in queueing theory, one
finds that the work divides into two parts. The first part typically requires a
careful probabilistic argument in order to arrive at expressions involving the
random variables of interest.* The second part is then one of analysis, in which
the formal manipulation of symbols takes place either in the original domain
or in some transformed domain. Whereas the probabilistic arguments typi-
cally must be made with great care, they nevertheless leave one with a com-
fortable feeling that the "physics" of the situation is constantly within one's
understanding and grasp. On the other hand, whereas the analytic manipula-
tions that one carries out in the second part tend to be rather straightforward
(albeit difficult) formal operations, one is unfortunately left with the uneasy
feeling that these manipulations relate back to the original problem in no
clearly understandable fashion. This "nonphysical" aspect of problem
solving typically is taken on when one moves into the domain of transforms
(either Laplace or z-transforms).

In this chapter we demonstrate that one may deal with transforms and still
maintain a handle on the probabilistic arguments taking place as these
transforms are manipulated. There are two separate operations involved: the
"marking" of customers, and the observation of "catastrophe" processes.
Together these methods are referred to as the method of collective marks.
Both operations need not necessarily be used simultaneously, and we study
them separately below. This material is drawn principally from [RUNN 65];
these ideas were introduced by van Dantzig [VAN 48] in order to expose the
probabilistic interpretation for transforms.
7.1. THE MARKING OF CUSTOMERS

Assume that, at the entrance to a queueing system, there is a gremlin who
marks (i.e., tags) arriving customers with the following probabilities:

    P[customer is marked] = 1 − z   (7.1)
    P[customer is not marked] = z   (7.2)

* As, for example, the arguments leading up to Eqs. (5.31) and (6.1).
where 0 ≤ z ≤ 1. We assume that the gremlin marks customers with these
probabilities independently of all other aspects of the queueing process. As we
shall see below, this marking process allows us to create generating functions
in a very natural way.

It is most instructive if we illustrate the use of this marking process by
examples:
Example 1: Poisson Arrivals
We first consider a Poisson arrival process with a mean arrival rate of λ customers per second. Assume that customers are marked as above. Let us consider the probability

q(z, t) ≜ P[no marked customers arrive in (0, t)]   (7.3)

It is clear that k customers will arrive in the interval (0, t) with the probability (λt)^k e^{−λt}/k!. Moreover, with probability z^k, none of these k customers will be marked; this last is true since marking takes place independently among customers. Now summing over all values of k we have immediately that

q(z, t) = Σ_{k=0}^∞ z^k (λt)^k e^{−λt}/k! = e^{−λt(1−z)}   (7.4)

Going back to Eq. (2.134) we see that Eq. (7.4) is merely the generating function for a Poisson arrival process. We thus conclude that the generating function for this arrival process may also be interpreted as the probabilistic quantity expressed in Eq. (7.3). This will not be the first time we may give a probabilistic interpretation for a generating function!
Example 2: M/M/∞
We consider the birth-death queueing system with an infinite number of servers. We also assume at time t = 0 that there are i customers present. The parameters of our system as usual are λ and μ [i.e., A(t) = 1 − e^{−λt} and B(x) = 1 − e^{−μx}]. We are interested in the quantity

P_k(t) = P[k customers in the system at time t]   (7.5)

and we define its generating function as we did in Eq. (2.153) to be

P(z, t) = Σ_{k=0}^∞ P_k(t) z^k   (7.6)
Once again we mark customers according to Eqs. (7.1) and (7.2). In analogy with Example 1, we recognize that Eq. (7.6) may be interpreted as the probability that the system contains no marked customers at time t (where the term z^k again represents the probability that none of the k customers present is marked). Here then is our crucial observation: We may calculate P(z, t) directly by finding the probability that there are no marked customers in the system at time t, rather than calculating P_k(t) and then finding its z-transform!
We proceed as follows: We need merely find the probability that none of the customers still present in the system at time t is marked, and this we do by accounting for all customers present at time 0 as well as all customers who arrive in the interval (0, t). For any customer present at time 0 we may calculate the probability that he is still present at time t and is marked as (1 − z)[1 − B(t)], where the first factor gives the probability that our customer was marked in the first place and the second factor gives the probability that his service time is greater than t. Clearly, then, this quantity subtracted from unity is the probability that a customer originally present is not a marked customer present at time t; and so we have

P[customer present initially is not a marked customer present at time t] = 1 − (1 − z)e^{−μt}
Now for the new customers who enter in the interval (0, t), we have as before P[k arrivals in (0, t)] = (λt)^k e^{−λt}/k!. Given that k have arrived in this interval then their arrival instants are uniformly distributed over this interval [see Eq. (2.136)]. Let us consider one such arriving customer and assume that he arrives at a time τ < t. Such a customer will not be a marked customer present at time t with probability

P[new arrival is not a marked customer present at time t given he arrived at τ ≤ t] = 1 − (1 − z)[1 − B(t − τ)]   (7.7)

However, we have that

P[arrival time ≤ τ] = τ/t   for 0 ≤ τ ≤ t

and so

P[new arrival still in system at t] = ∫_{τ=0}^{t} e^{−μ(t−τ)} (1/t) dτ = (1 − e^{−μt})/(μt)   (7.8)
Unconditioning the arrival time from Eq. (7.7) as shown in Eq. (7.8) we have

P[new arrival is not a marked customer present at t] = 1 − (1 − z)(1 − e^{−μt})/(μt)   (7.9)

Thus we may calculate the probability that there are no marked customers at time t as follows:

P(z, t) = Σ_{k=0}^∞ P[k arrive in (0, t)] × {P[new arrival is not a marked customer present at t]}^k × {P[initial customer is not a marked customer present at t]}^i

Using our established relationships we arrive at

P(z, t) = Σ_{k=0}^∞ e^{−λt} ((λt)^k/k!) [1 − (1 − z)(1 − e^{−μt})/(μt)]^k [1 − (1 − z)e^{−μt}]^i

which then gives the known result

P(z, t) = [1 − (1 − z)e^{−μt}]^i e^{−(λ/μ)(1−z)(1−e^{−μt})}   (7.10)

It should be clear to the student that the usual method for obtaining this result would have been extremely complex.
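Equation (7.10) can also be checked by simulation. The sketch below (hypothetical parameter values; the sampling mirrors the structure of the derivation: exponential survival for the i initial customers, Poisson arrivals at uniformly distributed instants) estimates E[z^N(t)] by Monte Carlo and compares it with the closed form:

```python
import math, random

def sample_poisson(rng, mean):
    # Poisson sampler via exponential inter-event gaps
    n, s = 0, rng.expovariate(1.0)
    while s < mean:
        n += 1
        s += rng.expovariate(1.0)
    return n

def gf_estimate(z, t, lam, mu, i, reps=100_000, seed=1):
    # Monte Carlo estimate of E[z^N(t)] for M/M/inf with i initial customers
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        n = sum(1 for _ in range(i) if rng.expovariate(mu) > t)
        for _ in range(sample_poisson(rng, lam * t)):
            tau = rng.uniform(0.0, t)       # arrival instant, uniform on (0, t)
            if rng.expovariate(mu) > t - tau:
                n += 1                      # still in service at time t
        total += z ** n
    return total / reps

z, t, lam, mu, i = 0.5, 1.0, 2.0, 1.5, 2    # illustrative values
exact = ((1 - (1 - z) * math.exp(-mu * t)) ** i
         * math.exp(-(lam / mu) * (1 - z) * (1 - math.exp(-mu * t))))  # Eq. (7.10)
est = gf_estimate(z, t, lam, mu, i)
assert abs(est - exact) < 0.01
```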
Example 3: M/G/1
In this example we consider the FCFS M/G/1 system. Recall that the random variables w_n, t_{n+1}, t_{n+2}, …, x_n, x_{n+1}, … are all independent of each other. As usual, we define B*(s) and W_n*(s) as the Laplace transforms for the service-time pdf b(x) and the waiting-time pdf w_n(y) for C_n, respectively. We define the event

{no M in w_n} ≜ {no customers who arrive during the waiting time of C_n are marked}

We wish to find the probability of this event, that is, P[no M in w_n]. Conditioning on the number of arriving customers and on the waiting time w_n, and then removing these conditions, we have

P[no M in w_n] = ∫_0^∞ Σ_{k=0}^∞ ((λy)^k/k!) e^{−λy} z^k dW_n(y) = ∫_0^∞ e^{−λy(1−z)} dW_n(y)

We recognize the integral as W_n*(λ − λz) and so

P[no M in w_n] = W_n*(λ − λz)   (7.11)

Thus once again we have a very simple probabilistic interpretation for the (Laplace) transform of an important distribution. By identical arguments we may arrive at

P[no M in x_n] = B*(λ − λz)   (7.12)
This last gives us another interpretation for an old expression we have seen
in Chapter 5.
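The interpretation P[no M in x_n] = B*(λ − λz) of Eq. (7.12) can be illustrated numerically for exponential service, where B*(s) = μ/(s + μ). The sketch below (illustrative values of λ, μ, and z) conditions on the service time x and uses the fact that the marked arrivals during x form a Poisson process of rate λ(1 − z), so none occur with probability e^{−λ(1−z)x}:

```python
import math, random

lam, mu, z = 2.0, 3.0, 0.6      # illustrative rates and marking parameter

def p_no_marked_in_service(reps=300_000, seed=4):
    # average e^{-lam*(1-z)*x} over service times x ~ exp(mu)
    rng = random.Random(seed)
    return sum(math.exp(-lam * (1 - z) * rng.expovariate(mu))
               for _ in range(reps)) / reps

exact = mu / (mu + lam * (1 - z))   # B*(lam - lam*z) when B*(s) = mu/(s+mu)
est = p_no_marked_in_service()
assert abs(est - exact) < 0.005
```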
Now comes a startling insight! It is clear that the arrival of customers during the waiting time of C_n and the arrival of customers during the service time of C_n must be independent events since these are nonoverlapping intervals and our arrival process is memoryless. Thus the events of no marked customers arriving in each of these two disjoint intervals of time must be independent, and so the probability that no marked customers arrive in the union of these two disjoint intervals must be the product of the probabilities that none such arrive in each of the intervals separately. Thus we may write

P[no M in w_n + x_n] = P[no M in w_n] P[no M in x_n]   (7.13)
                     = W_n*(λ − λz)B*(λ − λz)   (7.14)
This last result is pleasing in two ways. First, because it says that the probability of two independent joint events is equal to the product of the probabilities of the individual events [Eq. (7.13)]. Second, because it says that the transform of the pdf of the sum of two independent random variables is equal to the product of the transforms of the pdfs of the individual random variables [Eq. (7.14)]. Thus two familiar results (regarding disjoint events and regarding sums of independent random variables) have led to a meaningful new insight, namely, that multiplication of transforms implies not only the sum of two independent random variables, but also implies the product of the probabilities of two independent events! Not often are we privileged to see such fundamental principles related.
Let us now continue with the argument. At the moment we have Eq. (7.14) as one means for expressing the probability that no marked customers arrive during the interval w_n + x_n. We now proceed to calculate this probability by a second argument. Of course we have

P[no M in w_n + x_n] = P[no M in w_n + x_n and C_{n+1} marked]
                     + P[no M in w_n + x_n and C_{n+1} not marked]   (7.15)

Furthermore, we have

P[no M in w_n + x_n and C_{n+1} marked] = 0   if w_{n+1} > 0

since if C_{n+1} must wait, then he must arrive in the interval w_n + x_n and it is impossible for him to be marked and still to have the event {no M in w_n + x_n}. Thus the first term on the right-hand side of Eq. (7.15) must be P[w_{n+1} = 0](1 − z) where this second factor is merely the probability that C_{n+1} is marked. Now consider the second term on the right-hand side of Eq. (7.15); as shown in Figure 7.1 it is clear that no customers arrive between C_n and C_{n+1}, and therefore the customers of interest (namely, those arriving after
Figure 7.1 Arrivals of interest during w_n + x_n.
C_{n+1} does, but yet in the interval w_n + x_n) must arrive in the interval w_{n+1} since this interval will end when w_n + x_n ends. Thus this second term must be

P[no M in w_n + x_n and C_{n+1} not marked] = P[no M in w_{n+1}] × P[C_{n+1} not marked]
                                            = P[no M in w_{n+1}] z

From these observations and the result of Eq. (7.11) we may write Eq. (7.15) as

P[no M in w_n + x_n] = (1 − z)P[C_{n+1} arrives after w_n + x_n] + zW*_{n+1}(λ − λz)
Now if we think of a second separate marking process in which all the customers are marked (with an additional tag) with probability one, and ask that no such marked customers arrive during the interval w_n + x_n, then we are asking that no customers at all arrive during this interval (which is the same as asking that C_{n+1} arrive after w_n + x_n); we may calculate this using Eq. (7.14) with z = 0 (since this guarantees that all customers be marked) and obtain W_n*(λ)B*(λ) for this probability. Thus we arrive at

P[no M in w_n + x_n] = (1 − z)W_n*(λ)B*(λ) + zW*_{n+1}(λ − λz)   (7.16)
We now have two expressions for P[no M in w_n + x_n], which may be equated to obtain

W_n*(λ − λz)B*(λ − λz) = (1 − z)W_n*(λ)B*(λ) + zW*_{n+1}(λ − λz)   (7.17)

The interesting part is over. The use of the method of collective marks has brought us to Eq. (7.17), which is not easily obtained by other methods, but which in fact checks with the result due to other methods. Rather than dwell on the techniques required to carry this equation further we refer the reader to Runnenburg [RUNN 65] for additional details of the time-dependent solution.
Now, for ρ < 1, we have an ergodic process with W*(s) = lim_{n→∞} W_n*(s). Equation (7.17) then reduces to

W*(λ − λz)B*(λ − λz) = (1 − z)W*(λ)B*(λ) + zW*(λ − λz)   (7.18)

If we make the change of variable s = λ − λz and solve for W*(s), we obtain

W*(s) = sW*(λ)B*(λ) / [s − λ + λB*(s)]

Since W*(0) = 1, we evaluate W*(λ)B*(λ) = (1 − ρ) and arrive at

W*(s) = s(1 − ρ) / [s − λ + λB*(s)]

which, of course, is the P-K transform equation for waiting time.
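As a quick sanity check on this transform equation, for exponential service [B*(s) = μ/(s + μ)] the P-K expression should collapse to the familiar M/M/1 waiting-time transform (1 − ρ) + ρ(μ − λ)/(s + μ − λ). The sketch below (illustrative λ and μ) compares the two numerically at a few points:

```python
lam, mu = 1.0, 3.0              # illustrative rates (rho = 1/3 < 1)
rho = lam / mu

def B_star(s):                  # exponential service: B*(s) = mu/(s + mu)
    return mu / (s + mu)

def W_star_pk(s):               # the P-K transform equation derived above
    return s * (1 - rho) / (s - lam + lam * B_star(s))

def W_star_mm1(s):              # known M/M/1 waiting-time transform
    return (1 - rho) + rho * (mu - lam) / (s + mu - lam)

for s in (0.1, 1.0, 2.5, 10.0):
    assert abs(W_star_pk(s) - W_star_mm1(s)) < 1e-12
```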
We have demonstrated three examples where the marking of customers has allowed us to argue purely with probabilistic reasoning to derive expressions relating transforms. What we have traded here has been straightforward but tedious analysis for deep but physical probabilistic reasoning. We now consider the catastrophe process.
7.2. THE CATASTROPHE PROCESS
Let us pursue the method of collective marks a bit further by observing "catastrophe" processes. Measuring from time 0 let us consider that some event occurs at time t (t ≥ 0), where the pdf associated with the time of occurrence of this event is given by f(t). Furthermore, let there be an independent "catastrophe" process taking place simultaneously which generates catastrophes† at a rate γ according to a Poisson process.
We wish to calculate the probability that the event at time t takes place before the first catastrophe (measuring from time 0). Conditioning on t and integrating over all t, we get

P[event occurs before catastrophe] = ∫_0^∞ e^{−γt} f(t) dt = F*(γ)   (7.19)

where, as usual, f(t) ⇔ F*(s) are Laplace transform pairs. Thus we have a probabilistic interpretation for the Laplace transform (evaluated at the point γ) of the pdf for the time of occurrence of the event, namely, it is the probability that an event with this pdf occurs before a Poisson catastrophe at rate γ occurs.

† A catastrophe is merely an impressive name given to these generated times to distinguish them from the "event" of interest at time t.
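The interpretation in Eq. (7.19) invites a simulation check. For an exponential event time with pdf f(t) = ηe^{−ηt} we have F*(γ) = η/(η + γ); the sketch below (illustrative rates η and γ) races the event against the first catastrophe:

```python
import random

def p_event_first(eta, gamma, reps=200_000, seed=5):
    # race an exp(eta) event time against the first catastrophe of a
    # rate-gamma Poisson process (itself an exp(gamma) random variable)
    rng = random.Random(seed)
    wins = sum(rng.expovariate(eta) < rng.expovariate(gamma)
               for _ in range(reps))
    return wins / reps

eta, gamma = 1.5, 1.0
exact = eta / (eta + gamma)     # F*(gamma) when f(t) = eta*e^{-eta*t}
assert abs(p_event_first(eta, gamma) - exact) < 0.005
```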
As a second illustration using catastrophe processes, consider a sequence of events (that is, a point process) on the interval (0, ∞). Measuring from time 0 we would like to calculate the pdf of the time until the nth event, which we denote† by f_(n)(t), and with distribution F_(n)(t), where the time between events is given as before with density f(t). That is,

F_(n)(t) = P[nth event has occurred by time t]

We are interested in deriving an expression for the renewal function H(t), which we recall from Section 5.2 is equal to the expected number of events (renewals) in an interval of length t. We proceed by defining

P_n(t) = P[exactly n events occur in (0, t)]   (7.20)

The renewal function may therefore be calculated as

H(t) = E[number of events in (0, t)] = Σ_{n=0}^∞ n P_n(t)

But from its definition we see that P_n(t) = F_(n)(t) − F_(n+1)(t) and so we have

H(t) = Σ_{n=0}^∞ n[F_(n)(t) − F_(n+1)(t)] = Σ_{n=1}^∞ F_(n)(t)   (7.21)
If we now permit a Poisson catastrophe process (at rate γ) to develop we may ask for the expectation of the following random variable:

N_c ≜ number of events occurring before the first catastrophe   (7.22)

With probability γe^{−γt} dt the first catastrophe will occur in the interval (t, t + dt) and then H(t) will give the expected number of events occurring before this first catastrophe, that is,

H(t) = E[N_c | first catastrophe occurs in (t, t + dt)]

Summing over all possibilities we may then write

E[N_c] = ∫_0^∞ H(t) γe^{−γt} dt   (7.23)

In Section 5.2 we had defined H*(s) to be the Laplace transform of the renewal density h(t) defined as h(t) ≜ dH(t)/dt, that is,

H*(s) ≜ ∫_0^∞ h(t) e^{−st} dt   (7.24)

† We use the subscript (n) to remind the reader of the definition in Eq. (5.110) denoting the n-fold convolution. We see that f_(n)(t) is indeed the n-fold convolution of the lifetime density f(t).
If we integrate this last equation by parts, we see that the right-hand side of Eq. (7.24) is merely ∫_0^∞ sH(t)e^{−st} dt and so from Eq. (7.23) we have (making the substitution s = γ)

E[N_c] = H*(γ)   (7.25)

Let us now calculate E[N_c] by an alternate means. From Eq. (7.19) we see that the catastrophe will occur before the first event with probability 1 − F*(γ) and in this case N_c = 0. On the other hand, with probability F*(γ) we will get at least one event occurring before the catastrophe. Let N_c′ be the random variable N_c − 1 conditioned on at least one event; then we have N_c = 1 + N_c′. Because of the memoryless property of the Poisson process as well as the fact that the event occurrences generate an imbedded Markov process we see that N_c′ must have the same distribution as N_c itself. Forming expectations on N_c we may therefore write

E[N_c] = 0·[1 − F*(γ)] + {1 + E[N_c]}F*(γ)   (7.26)

This gives immediately

E[N_c] = F*(γ) / [1 − F*(γ)]   (7.27)

We now have two expressions for E[N_c] and so by equating them (and making the change of variable s = γ) we have the final result

H*(s) = F*(s) / [1 − F*(s)]

This last we recognize as the transform expression for the integral equation of renewal theory [see Eq. (5.21)]; its integral formulation is given in Eq. (5.22).
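The two expressions for E[N_c] can be compared by simulation. For exponential lifetimes with rate η we have F*(γ) = η/(η + γ), so Eq. (7.27) predicts E[N_c] = η/γ; the sketch below (illustrative rates) counts renewals before the first catastrophe:

```python
import random

def mean_events_before_catastrophe(eta, gamma, reps=100_000, seed=2):
    # renewal process with exp(eta) lifetimes; first catastrophe ~ exp(gamma)
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        cat = rng.expovariate(gamma)
        t, n = rng.expovariate(eta), 0
        while t < cat:
            n += 1
            t += rng.expovariate(eta)
        total += n
    return total / reps

eta, gamma = 2.0, 1.0
F_star = eta / (eta + gamma)            # F*(gamma) for exp(eta) lifetimes
expected = F_star / (1 - F_star)        # Eq. (7.27); here equal to eta/gamma
assert abs(mean_events_before_catastrophe(eta, gamma) - expected) < 0.05
```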
It is fair to say that the method of collective marks is a rather elegant way to get some useful and important results in the theory of stochastic processes. On the other hand, this method has as yet yielded no results that were not previously known through the application of other methods. Thus at present its principal use lies in providing an alternative way for viewing the fundamental relationships, thereby enhancing one's insight into the probabilistic structure of these processes.
Thus ends our treatment of intermediate queueing theory. In the next part, we venture into the kingdom of the G/G/1 queue.
REFERENCES
RUNN 65  Runnenburg, J. Th., "On the Use of the Method of Collective Marks in Queueing Theory," Proc. Symposium on Congestion Theory, eds. W. L. Smith and W. E. Wilkinson, University of North Carolina Press (1965).
VAN 48   van Dantzig, D., "Sur la méthode des fonctions génératrices," Colloques internationaux du CNRS, 13, 29–45 (1948).
EXERCISES
7.1. Consider the M/G/1 system shown in the figure below with average arrival rate λ and service-time distribution B(x). Customers are served first-come-first-served from queue A until they either leave or receive a sec of service, at which time they join an entrance box as shown in the figure. Customers continue to collect in the entrance box forming a group until queue A empties and the server becomes free. At this point, the entrance box "dumps" all it has collected as a bulk arrival to queue B. Queue B will receive service until a new arrival (to be referred to as a "starter") joins queue A at which time the server switches from queue B to serve queue A and the customer who is preempted returns to the head of queue B. The entrance box then begins to fill and the process repeats.

[Figure: the server draws from queue A; after a sec of service a customer joins the entrance box, customers with service completed depart, and the entrance box dumps its group into queue B.]

Let

g_n = P[entrance box delivers bulk of size n to queue B]

G(z) = Σ_{n=0}^∞ g_n z^n
(a) Give a probabilistic interpretation for G(z) using the method of collective marks.
(b) Given that the "starter" reaches the entrance box, and using the method of collective marks find [in terms of λ, a, B(·), and G(z)]

P_k = P[k customers arrive to queue A during the "starter's" service time and no marked customers arrive to the entrance box from the k sub-busy periods created in queue A by each of these customers]

(c) Given that the "starter" does not reach the entrance box, find P_k as defined above.
(d) From (b) and (c), give an expression (involving an integral) for G(z) in terms of λ, a, B(·), and itself.
(e) From (d) find the average bulk size ḡ = Σ_{n=0}^∞ n g_n.
7.2. Consider the M/G/∞ system. We wish to find P(z, t) as defined in Eq. (7.6). Assume the system contains i = 0 customers at t = 0. Let p(t) be the probability that a customer who arrived in the interval (0, t) is still present at t. Proceed as in Example 2 of Section 7.1.
(a) Express p(t) in terms of B(x).
(b) Find P(z, t) in terms of λ, t, z, and p(t).
(c) From (b) find P_k(t) defined in Eq. (7.5).
(d) From (c), find lim_{t→∞} P_k(t) = p_k.
7.3. Consider an M/G/1 queue, which is idle at time 0. Let p = P[no catastrophe occurs during the time the server is busy with those customers who arrived during (0, t)] and let q = P[no catastrophe occurs during (0, t + U(t))] where U(t) is the unfinished work at time t. Catastrophes occur at a rate γ.
(a) Find p.
(b) Find q.
(c) Interpret p − q as a probability and find an independent expression for it. We may then use (a) and (b) to relate the distribution of unfinished work to B*(s).
7.4. Consider the G/M/m system. The root σ, which is defined in Eq. (6.21), plays a central role in the solution. Examine Eq. (6.21) from the viewpoint of collective marks and give a probabilistic interpretation for σ.
PART IV
ADVANCED MATERIAL
We find ourselves in difficult terrain as we enter the foothills of G/G/1. Not even the average waiting time is known for this queue! In Chapter 8, we nevertheless develop a "spectral" method for handling these systems which often leads to useful results. The difficult part of this method reduces to locating the roots of a function, as we have so often seen before. The spectral method suffers from the disadvantage of not providing one with the general behavior pattern of the system; each new queue must be studied by itself. However, we do discuss Kingman's algebra for queues, which so nicely exposes the common framework for all of the various methods so far used to attack the G/G/1 queue. Finally, we introduce the concept of a dual queue, and express some of our principal results in terms of idle times and dual queues.
8
The Queue G/G/1
We have so far made effective use of the Markovian property in the queueing systems M/M/1, M/G/1, and G/M/m. We must now leave behind many (but not all) of the simplifications that derive from the Markovian property and find new methods for studying the more difficult system G/G/1.
In this chapter we solve the G/G/1 system equations by spectral methods, making use of transform and complex-variable techniques. There are, however, numerous other approaches: In Section 5.11 we introduced the ladder indices and pointed out the way in which they were related to important events in queueing systems; these ideas can be extended and applied to the general system G/G/1. Fluctuations of sums of random variables (i.e., the ladder indices) have been studied by Andersen [ANDE 53a, ANDE 53b, ANDE 54] and also by Spitzer [SPIT 56, SPIT 60], who simplified and expanded Andersen's work. This led, among other things, to Spitzer's identity, of great importance in that approach to queueing theory. Much earlier (in the 1930's) Pollaczek considered a formalism for solving these systems and his approach (summarized in 1957 [POLL 57]) is now referred to as Pollaczek's method. More recently, Kingman [KING 66] has developed an algebra for queues, which places all these methods in a common framework and exposes the underlying similarity among them; he also identifies where the problem gets difficult and why, but unfortunately he shows that this method does not extend to the multiple-server system. Keilson [KEIL 65] applies the method of Green's function. Beneš [BENE 63] studied G/G/1 through the unfinished work and its "relatives."
Let us now establish the basic equations for this system.
8.1. LINDLEY'S INTEGRAL EQUATION
The system under consideration is one in which the interarrival times between customers are independent and are given by an arbitrary distribution A(t). The service times are also independently drawn from an arbitrary distribution given by B(x). We assume there is one server available and that service is offered in a first-come-first-served order. The basic relationship
among the pertinent random variables is derived in this section and leads to Lindley's integral equation, whose solution is given in the following section.
We consider a sequence of arriving customers indexed by the subscript n and remind the reader of our earlier notation:

C_n = the nth customer arriving to the system
t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
x_n = service time for C_n
w_n = waiting time (in queue) for C_n

We assume that the random variables {t_n} and {x_n} are independent and are given, respectively, by the distribution functions A(t) and B(x) independent of the subscript n. As always, we look for a Markov process to simplify our analysis. Recall for M/G/1 that the unfinished work U(t) is a Markov process for all t. For G/G/1, it should be clear that although U(t) is no longer Markovian, imbedded within U(t) is a crucial Markov process defined at the customer-arrival times. At these regeneration points, all of the past history that is pertinent to future behavior is completely summarized in the current value of U(t). That is, for FCFS systems, the value of the unfinished work just prior to the arrival of C_n is exactly equal to his waiting time (w_n) and this Markov process is the object of our study. In Figures 8.1 and 8.2 we use the time-diagram notation for queues (as defined in Figure 2.2) to illustrate the history of C_n in two cases: Figure 8.1 displays the case where C_{n+1} arrives to the system before C_n departs from the service facility; and Figure 8.2 shows the case in which C_{n+1} arrives to an empty system. For the conditions of Figure 8.1 it is clear that

w_{n+1} = w_n + x_n − t_{n+1}   if w_n + x_n − t_{n+1} ≥ 0   (8.1)

The condition expressed in Eq. (8.1) assures that C_{n+1} arrives to find a busy
n
+! ar rives to find a busy
C
n
_
1
c,
I •
"'n
X
n
Server
c; Cn + 1
Ti me;:
Queue
.. I
t,..,
'Ulnf.'
C.
c..,,
Figure 8.1 The case where Cn+1 arrives to find a busy system.
J
8.1. LINDLEY'S INTEGRAL EQUATION 277
Figure 8.2 The case where C_{n+1} arrives to find an idle system.
system. From Figure 8.2 we see immediately that

w_{n+1} = 0   if w_n + x_n − t_{n+1} ≤ 0   (8.2)

where the condition in Eq. (8.2) assures that C_{n+1} arrives to find an idle system. For convenience we now define a new (key) random variable u_n as

u_n ≜ x_n − t_{n+1}   (8.3)

This random variable is merely the difference between the service time for C_n and the interarrival time between C_{n+1} and C_n (for a stable system we will require that the expectation of u_n be negative). We may thus combine Eqs. (8.1)–(8.3) to obtain the following fundamental and yet elementary relationship, first established by Lindley [LIND 52]:

w_{n+1} = w_n + u_n   if w_n + u_n ≥ 0
w_{n+1} = 0           if w_n + u_n ≤ 0      (8.4)
The term w_n + u_n is merely the sum of the unfinished work (w_n) found by C_n plus the service time (x_n), which he now adds to the unfinished work, less the time duration (t_{n+1}) until the arrival of the next customer C_{n+1}; if this quantity is nonnegative then it represents the amount of unfinished work found by C_{n+1} and therefore represents his waiting time w_{n+1}. However, if this quantity goes negative it indicates that an interval of time has elapsed since the arrival of C_n which exceeds the amount of unfinished work present in the system just after the arrival of C_n, thereby indicating that the system has gone idle by the time C_{n+1} arrives.
We may write Eq. (8.4) as

w_{n+1} = max [0, w_n + u_n]   (8.5)

We introduce the notation (x)+ ≜ max [0, x]; we then have

w_{n+1} = (w_n + u_n)+   (8.6)
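Equation (8.6) is also the natural way to compute waiting times numerically. The sketch below (an M/M/1 instance with illustrative rates, chosen only because its mean wait λ/[μ(μ − λ)] is known in closed form) iterates the recursion over a long run of customers:

```python
import random

def lindley_waits(interarrivals, services):
    # Eq. (8.6): w_{n+1} = (w_n + x_n - t_{n+1})^+ , starting from w_0 = 0
    w = [0.0]
    for t_next, x in zip(interarrivals, services):
        w.append(max(0.0, w[-1] + x - t_next))
    return w

rng = random.Random(3)
lam, mu, n = 1.0, 2.0, 200_000          # illustrative M/M/1 with rho = 0.5
t_seq = [rng.expovariate(lam) for _ in range(n)]
x_seq = [rng.expovariate(mu) for _ in range(n)]
w = lindley_waits(t_seq, x_seq)
mean_w = sum(w) / len(w)
assert abs(mean_w - lam / (mu * (mu - lam))) < 0.05   # mean wait = 0.5 here
```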
Since the random variables {t_n} and {x_n} are independent among themselves and each other, one observes that the sequence of random variables {w_0, w_1, w_2, …} forms a Markov process with stationary transition probabilities. This can be seen immediately from Eq. (8.4) since the new value w_{n+1} depends upon the previous sequence of random variables w_i (i = 0, 1, …, n) only through the most recent value w_n plus a random variable u_n, which is independent of the random variables w_i for all i ≤ n.
Let us solve Eq. (8.5) recursively beginning with w_0 as an initial condition. We have (defining C_0 to be our initial arrival)

w_1 = (w_0 + u_0)+

w_2 = (w_1 + u_1)+ = max [0, w_1 + u_1]
    = max [0, u_1 + max (0, w_0 + u_0)]
    = max [0, u_1, u_1 + u_0 + w_0]

w_3 = (w_2 + u_2)+ = max [0, w_2 + u_2]
    = max [0, u_2 + max (0, u_1, u_1 + u_0 + w_0)]
    = max [0, u_2, u_2 + u_1, u_2 + u_1 + u_0 + w_0]

⋮

w_n = (w_{n−1} + u_{n−1})+ = max [0, w_{n−1} + u_{n−1}]
    = max [0, u_{n−1}, u_{n−1} + u_{n−2}, …, u_{n−1} + ⋯ + u_1, u_{n−1} + ⋯ + u_1 + u_0 + w_0]   (8.7)
However, since the sequence of random variables {u_i} is a sequence of independent and identically distributed random variables, they are "interchangeable" and we may consider a new random variable w_n′ with the same distribution as w_n, where

w_n′ ≜ max [0, u_0, u_0 + u_1, u_0 + u_1 + u_2, …, u_0 + u_1 + ⋯ + u_{n−2}, u_0 + u_1 + ⋯ + u_{n−2} + u_{n−1} + w_0]   (8.8)

Equation (8.8) is obtained from Eq. (8.7) by relabeling the random variables u_i. It is now convenient to define the quantities U_n as

U_n = Σ_{i=0}^{n−1} u_i ,   U_0 = 0   (8.9)

We thus have from Eq. (8.8)

w_n′ = max [U_0, U_1, U_2, …, U_{n−1}, U_n + w_0]   (8.10)
From this last form we see for w_0 = 0 that w_n′ can only increase with n. Therefore the limiting random variable lim_{n→∞} w_n′ must converge to the (possibly infinite) random variable w̃:

w̃ ≜ sup_{n≥0} U_n   (8.11)
Our imbedded Markov chain is ergodic if, with probability one, w̃ is finite, and if so, then the distributions of w_n′ and of w_n both converge to the distribution of w̃; in this case, the distribution of w̃ is the waiting-time distribution. Lindley [LIND 52] has shown that for 0 < E[|u_n|] < ∞ the system is stable if and only if E[u_n] < 0. Therefore, we will henceforth assume

E[u_n] < 0   (8.12)

Equation (8.12) is our usual condition for stability, as may be seen from the following:

E[u_n] = E[x_n − t_{n+1}] = E[x_n] − E[t_{n+1}] = x̄ − t̄ = t̄(ρ − 1)   (8.13)

where as usual we assume that the expected service time is x̄ and the expected interarrival time is t̄ (and we have ρ = x̄/t̄). From Eqs. (8.12) and (8.13) we see we have required that ρ < 1, as is our usual condition for stability. Let us denote (as usual) the stationary distribution for w_n (and also therefore for w_n′) by

lim_{n→∞} P[w_n ≤ y] = lim_{n→∞} P[w_n′ ≤ y] = W(y)   (8.14)

which must exist for ρ < 1 [LIND 52]. Thus W(y) will be our assumed stationary distribution for time spent in queue; we will not dwell upon the proof of its existence but rather upon the method for its calculation. As we know for such Markov processes, this limiting distribution is independent of the initial state w_0.
Before proceeding to the formal derivation of results let us investigate the way in which Eq. (8.7) in fact produces the waiting time. This we do by example; consider Figure 8.3, which represents the unfinished work U(t). For the sequence of arrivals and departures given in this figure, we present the table below showing the interarrival times t_{n+1}, the service times x_n, the random variables u_n, and the waiting times w_n as measured from the diagram; in the last row of this table we give the waiting times w_n as calculated from Eq. (8.7) as follows.
Figure 8.3 Unfinished work U(t) showing the sequence of arrivals C_0, …, C_9 and departures over (0, 26).

Table of values from Figure 8.3.

n         0   1   2   3   4   5   6   7   8   9
t_{n+1}   1   2   3   2   5   1   3   7   2
x_n       2   3   2   3   1   3   2   1   3
u_n       1   1  −1   1  −4   2  −1  −6   1
w_n       0   1   2   1   2   0   2   1   0   1    measured from Fig. 8.3
w_n       0   1   2   1   2   0   2   1   0   1    calculated from Eq. 8.7
w_0 = 0
w_1 = max (0, w_0 + u_0) = max (0, 1) = 1
w_2 = max (0, u_1, u_1 + u_0 + w_0) = max (0, 1, 2) = 2
w_3 = max (0, u_2, u_2 + u_1, u_2 + u_1 + u_0 + w_0) = max (0, −1, 0, 1) = 1
w_4 = max (0, u_3, u_3 + u_2, u_3 + u_2 + u_1, u_3 + u_2 + u_1 + u_0 + w_0) = max (0, 1, 0, 1, 2) = 2
w_5 = max (0, u_4, u_4 + u_3, u_4 + u_3 + u_2, u_4 + u_3 + u_2 + u_1, u_4 + u_3 + u_2 + u_1 + u_0 + w_0) = max (0, −4, −3, −4, −3, −2) = 0
w_6 = max (0, 2, −2, −1, −2, −1, 0) = 2
w_7 = max (0, −1, 1, −3, −2, −3, −2, −1) = 1
w_8 = max (0, −6, −7, −5, −9, −8, −9, −8, −7) = 0
w_9 = max (0, 1, −5, −6, −4, −8, −7, −8, −7, −6) = 1
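These two routes to w_n can be checked against each other in a few lines. The sketch below implements both the recursion of Eq. (8.5) and the max-over-partial-sums form of Eq. (8.7), using the u_n values of this example:

```python
def waits_recursive(u_seq, w0=0.0):
    # Eq. (8.5): w_{n+1} = max(0, w_n + u_n)
    w = [w0]
    for u in u_seq:
        w.append(max(0.0, w[-1] + u))
    return w

def wait_direct(n, u_seq, w0=0.0):
    # Eq. (8.7): w_n = max[0, u_{n-1}, u_{n-1}+u_{n-2}, ...,
    #                      u_{n-1}+...+u_0 + w_0]
    if n == 0:
        return w0
    candidates, s = [0.0], 0.0
    for u in reversed(u_seq[:n]):
        s += u
        candidates.append(s)
    candidates[-1] += w0          # only the full sum carries the +w_0 term
    return max(candidates)

u = [1, 1, -1, 1, -4, 2, -1, -6, 1]    # the u_n values of this example
rec = waits_recursive(u)
assert rec == [0, 1, 2, 1, 2, 0, 2, 1, 0, 1]
assert all(wait_direct(n, u) == rec[n] for n in range(len(rec)))
```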
These calculations are quite revealing. For example, whenever we find an m for which w_m = 0, then the m rightmost calculations in Eq. (8.7) need be made no more in calculating w_n for all n > m; this is due to the fact that a busy period has ended and the service times and interarrival times from that busy period cannot affect the calculations in future busy periods. Thus we see the isolating effect of idle periods which ensue between busy periods. Furthermore, when w_m = 0, then the rightmost term (U_m + w_0) gives the (negative of the) total accumulated idle time of the system during the interval (0, τ_m).
Let us now proceed with the theory for calculating W(y). We define C_n(u) as the PDF for the random variable u_n, that is,

C_n(u) ≜ P[u_n ≤ u]      (8.15)

and we note that u_n is not restricted to a half line. We now derive the expression for C_n(u) in terms of A(t) and B(x):

C_n(u) = P[x_n - t_{n+1} ≤ u]
       = ∫_{t=0}^{∞} P[x_n ≤ u + t | t_{n+1} = t] dA(t)      (8.16)

However, the service time for C_n is independent of t_{n+1} and therefore

C_n(u) = ∫_{t=0}^{∞} B(u + t) dA(t)

Thus, as we expected, C_n(u) is independent of n and we therefore write

C_n(u) = C(u) = ∫_{t=0}^{∞} B(u + t) dA(t)      (8.17)

Also, let ũ denote a random variable whose PDF is C(u). Note that the integral given in Eq. (8.17) is very much like a convolution form for a(t) and b(x); it is not quite a straight convolution, since the distribution C(u) represents the difference between x_n and t_{n+1} rather than the sum. Using our convolution notation (⊛), and defining c_n(u) ≜ dC_n(u)/du, we have

c_n(u) = c(u) = a(-u) ⊛ b(u)      (8.18)
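The integral in Eq. (8.17) is simple to evaluate numerically. As a sketch, take both the interarrival and service times exponential (the rates λ = 1 and μ = 2 are assumed purely for illustration); C(u) then has a closed two-sided exponential form, which a midpoint-rule evaluation of ∫ B(u + t) dA(t) matches:

```python
import math

def C(u, lam=1.0, mu=2.0, dt=1e-4, tmax=40.0):
    """Numerically evaluate C(u) = integral_0^inf B(u + t) dA(t), Eq. (8.17),
    for exponential interarrival (rate lam) and service (rate mu) distributions."""
    B = lambda x: 1.0 - math.exp(-mu * x) if x > 0 else 0.0   # service-time PDF
    a = lambda t: lam * math.exp(-lam * t)                    # interarrival pdf
    total, t = 0.0, 0.5 * dt                                  # midpoint rule
    while t < tmax:
        total += B(u + t) * a(t) * dt
        t += dt
    return total

def C_exact(u, lam=1.0, mu=2.0):
    """Closed form of C(u) for this exponential/exponential case."""
    if u >= 0:
        return 1.0 - (lam / (lam + mu)) * math.exp(-mu * u)
    return (mu / (lam + mu)) * math.exp(lam * u)

for u in (-1.0, 0.0, 1.5):
    assert abs(C(u) - C_exact(u)) < 1e-3
```

As the closed form shows, C(u) is nonzero for negative u as well, which is exactly the sense in which u_n "is not restricted to a half line."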
It is (again) convenient to define the waiting-time distribution for customer C_n as

W_n(y) ≜ P[w_n ≤ y]      (8.19)
For y ≥ 0 we have from Eq. (8.4)

W_{n+1}(y) = P[w_n + u_n ≤ y]
           = ∫_{w=0-}^{∞} P[u_n ≤ y - w | w_n = w] dW_n(w)

And now once again, since u_n is independent of w_n, we have

W_{n+1}(y) = ∫_{w=0-}^{∞} C_n(y - w) dW_n(w)    for y ≥ 0      (8.20)

However, as postulated in Eq. (8.14), this distribution has a limit W(y) and therefore we have the following integral equation, which defines the limiting distribution of waiting time for customers in the system G/G/1:

W(y) = ∫_{w=0-}^{∞} C(y - w) dW(w)    for y ≥ 0

Further, it is clear that

W(y) = 0    for y < 0

Combining these last two we have Lindley's integral equation [LIND 52], which is seen to be an integral equation of the Wiener-Hopf type [SPIT 57]:

       { ∫_{w=0-}^{∞} C(y - w) dW(w)    y ≥ 0
W(y) = {
       { 0                              y < 0      (8.21)
Equation (8.21) may be rewritten in at least two other useful forms, which we now proceed to derive. Integrating by parts, we have (for y ≥ 0)

W(y) = C(y - w)W(w) |_{w=0-}^{w=∞} + ∫_{w=0-}^{∞} W(w) dC(y - w)
     = lim_{w→∞} C(y - w)W(w) - C(y)W(0-) + ∫_{w=0-}^{∞} W(w) dC(y - w)

[where dC(y - w) denotes the differential with respect to the argument y - w]. We see that lim C(y - w)W(w) = 0 as w → ∞ since the limit of C(u) as u → -∞ is the probability that an interarrival time approaches infinity, which clearly must go to zero if the interarrival time is to have finite moments. Similarly, we have W(0-) = 0 and so our form for Lindley's integral equation may be rewritten as

       { ∫_{w=0-}^{∞} W(w) dC(y - w)    y ≥ 0
W(y) = {
       { 0                              y < 0      (8.22)
Let us now show a third form for this equation. By the simple variable change u = y - w for the argument of our distributions we finally arrive at

       { ∫_{u=-∞}^{y} W(y - u) dC(u)    y ≥ 0
W(y) = {
       { 0                              y < 0      (8.23)
Equations (8.21), (8.22), and (8.23) all describe the basic integral equation which governs the behavior of G/G/1. These integral equations, as mentioned above, are Wiener-Hopf-type integral equations and are not unfamiliar in the theory of stochastic processes.

One observes from these forms that Lindley's integral equation is almost, but not quite, a convolution integral. The important distinction between a convolution integral and that given in Lindley's equation is that the latter integral form holds only when the variable is nonnegative; the distribution function is identically zero for values of negative argument. Unfortunately, since the integral holds only for the half-line, we must borrow techniques from the theory of complex variables and from contour integration in order to solve our system. We find a similar difficulty in the design of optimal linear filters in the mathematical theory of communication; there too, a Wiener-Hopf integral equation describes the optimal solution, except that for linear filters the unknown appears as one factor in the integrand, whereas in our case in queueing theory the unknown appears on both sides of the integral equation. Nevertheless, the solution techniques are amazingly similar, and the reader acquainted with the theory of optimal realizable linear filters will find the following arguments familiar.

In the next section we give a fairly general solution to Lindley's integral equation by the use of spectral (transform) methods. In Exercise 8.6 we examine a solution approach by means of an example that does not require transforms; the example chosen is the system D/E_r/1 considered by Lindley. In that (direct) approach it is required to assume the solution form. We now consider the spectral solution to Lindley's equation, in which such assumed solution forms will not be necessary.
8.2. SPECTRAL SOLUTION TO LINDLEY'S INTEGRAL EQUATION
In this section we describe a method for solving Lindley's integral equation by means of spectrum factorization [SMIT 53]. Our point of departure is the form for this equation given by (8.23). As mentioned earlier, it would be rather straightforward to solve this equation if the right-hand side were a true convolution (it is, in fact, a convolution for the nonnegative half-line on the variable y but not so otherwise). In order to get around this difficulty we use the following ingenious device, whereby we define a "complementary" waiting time, which completes the convolution, and which takes on the value of the integral for negative y only, that is,

         { 0                              y ≥ 0
W_-(y) = {
         { ∫_{u=-∞}^{y} W(y - u) dC(u)    y < 0      (8.24)

Note that the left-hand side of Eq. (8.23) might consistently be written as W_+(y) in the same way in which we defined the left-hand side of Eq. (8.24). We now observe that if we add Eqs. (8.23) and (8.24), then the right-hand side takes on the integral expression for all values of the argument, that is,

W(y) + W_-(y) = ∫_{-∞}^{∞} W(y - u)c(u) du    for all real y      (8.25)

where we have denoted the pdf for ũ by c(u) [≜ dC(u)/du].
To proceed, we assume that the pdf of the interarrival time is* O(e^{-Dt}) as t → ∞ (where D is any real number greater than zero), that is,

lim_{t→∞} a(t)/e^{-Dt} < ∞      (8.26)

The condition (8.26) really insists that the pdf associated with the interarrival time drops off at least as fast as an exponential for very large interarrival times. From this condition it may be seen from Eq. (8.17) that the behavior of C(u) as u → -∞ is governed by the behavior of the interarrival time; this is true since, as u takes on large negative values, the argument for the service-time distribution can be made positive only for large values of t, which also appears as the argument for the interarrival-time density. Thus we can show

lim_{u→-∞} C(u)/e^{Du} < ∞

That is, C(u) is O(e^{Du}) as u → -∞. If we now use this fact in Eq. (8.24), it is easy to establish that W_-(y) is also O(e^{Dy}) as y → -∞.
* The notation O(g(x)) as x → x_0 refers to any function that (as x → x_0) decays to zero at least as rapidly as g(x) [where g(x) > 0], that is,

lim_{x→x_0} |O(g(x))| / g(x) = K < ∞
Let us now define some (bilateral) transforms for various of our functions. For the Laplace transform of W_-(y) we define

Φ_-(s) ≜ ∫_{-∞}^{∞} W_-(y)e^{-sy} dy      (8.27)

Due to the condition we have established regarding the asymptotic property of W_-(y), it is clear that Φ_-(s) is analytic in the region Re(s) < D. Similarly, for the distribution of our waiting time W(y) we define

Φ_+(s) ≜ ∫_{-∞}^{∞} W(y)e^{-sy} dy      (8.28)

Note that Φ_+(s) is the Laplace transform of the PDF for waiting time, whereas in previous chapters we defined W*(s) as the Laplace transform of the pdf for waiting time; thus by entry 11 of Table I.3, we have

sΦ_+(s) = W*(s)      (8.29)
Since there are regions for Eqs. (8.23) and (8.24) in which the functions drop to zero, we may therefore rewrite these transforms as

Φ_-(s) = ∫_{-∞}^{0} W_-(y)e^{-sy} dy      (8.30)

Φ_+(s) = ∫_{0}^{∞} W(y)e^{-sy} dy      (8.31)

Since W(y) is a true distribution function (and therefore it remains bounded as y → ∞), Φ_+(s) is analytic for Re(s) > 0. As usual, we define the transforms for the pdf of the interarrival time and for the pdf of the service time as A*(s) and B*(s), respectively. Note from the condition (8.26) that A*(-s) is analytic in the region Re(s) < D, just as was Φ_-(s).

From Appendix I we recall that the Laplace transform for the convolution of two functions is the product of the transforms of each. Equation (8.18) is almost the convolution of the service-time density with the interarrival-time density; the only difficulty is the negative argument for the interarrival-time density. Nevertheless, the above-mentioned fact regarding products of transforms goes through merely with the negative argument (this is Exercise 8.1). Thus for the Laplace transform of c(u) we find

C*(s) = A*(-s)B*(s)      (8.32)
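Equation (8.32) can be spot-checked numerically. Here c(u) is the closed-form density of u = x - t for exponential interarrivals and services (the rates λ = 1, μ = 2 and the test point s = 0.5 are assumed for illustration; note that Re(s) must lie below D = λ for the bilateral transform to converge):

```python
import math

lam, mu, s = 1.0, 2.0, 0.5   # assumed illustrative values, with 0 < s < lam = D

def c(u):
    """pdf of u = x - t for exponential interarrivals (lam) and services (mu)."""
    k = lam * mu / (lam + mu)
    return k * math.exp(-mu * u) if u >= 0 else k * math.exp(lam * u)

# Bilateral Laplace transform of c(u), by midpoint-rule integration:
du, lo, hi = 1e-3, -40.0, 40.0
Cstar = sum(c(lo + (i + 0.5) * du) * math.exp(-s * (lo + (i + 0.5) * du)) * du
            for i in range(int((hi - lo) / du)))

Astar = lambda z: lam / (z + lam)   # A*(z) for exponential interarrivals
Bstar = lambda z: mu / (z + mu)     # B*(z) for exponential services
assert abs(Cstar - Astar(-s) * Bstar(s)) < 1e-3
```

Note the sign reversal on the argument of A*: it is precisely the negative argument in Eq. (8.18) that produces A*(-s) rather than A*(s) in the product.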
Let us now return to Eq. (8.25), which expresses the fundamental relationship among the variables of our problem and the waiting-time distribution W(y). Clearly, the time spent in queue must be a nonnegative random variable, and so we recognize the right-hand side of Eq. (8.25) as a convolution between the waiting-time PDF and the pdf for the random variable ũ. The Laplace transform of this convolution must therefore give the product of the Laplace transform Φ_+(s) (for the waiting-time distribution) and C*(s) (for the density on ũ). The transform of the left-hand side we recognize from Eqs. (8.30) and (8.31) as being Φ_+(s) + Φ_-(s); thus

Φ_+(s) + Φ_-(s) = Φ_+(s)C*(s)

From Eq. (8.32) we therefore obtain

Φ_+(s) + Φ_-(s) = Φ_+(s)A*(-s)B*(s)

which gives us

Φ_-(s) = Φ_+(s)[A*(-s)B*(s) - 1]      (8.33)
We have already established that both Φ_-(s) and A*(-s) are analytic in the region Re(s) < D. Furthermore, since Φ_+(s) and B*(s) are transforms of bounded functions of nonnegative variables, both functions must be analytic in the region Re(s) > 0.

We now come to the spectrum factorization. The purpose of this factorization is to find a suitable representation for the term

A*(-s)B*(s) - 1      (8.34)

in the form of two factors. Let us pause for a moment and recall the method of stages, whereby Erlang conceived the ingenious idea of approximating a distribution by means of a collection of series and parallel exponential stages. The Laplace transform for the pdf's obtainable in this fashion was generally given in Eq. (4.62) or Eq. (4.64); we immediately recognize these to be rational functions of s (that is, a ratio of a polynomial in s divided by a polynomial in s). We may similarly conceive of approximating the Laplace transforms A*(-s) and B*(s) each in such forms; if we so approximate, then the term given by Eq. (8.34) will also be a rational function of s. We thus choose to consider those queueing systems for which A*(s) and B*(s) may be suitably approximated with (or which are given initially as) such rational functions of s, in which case we then propose to form the following spectrum factorization:

A*(-s)B*(s) - 1 = Ψ_+(s)/Ψ_-(s)      (8.35)

Clearly Ψ_+(s)/Ψ_-(s) will be some rational function of s, and we are now desirous of finding a particular factored form for this expression. We specifically wish to find a factorization such that:

    For Re(s) > 0, Ψ_+(s) is an analytic function of s with
    no zeroes in this half-plane.
                                                          (8.36)
    For Re(s) < D, Ψ_-(s) is an analytic function of s with
    no zeroes in this half-plane.
Furthermore, we wish to find these functions with the additional properties:

    For Re(s) > 0,  lim_{|s|→∞} Ψ_+(s)/s = 1.
                                                          (8.37)
    For Re(s) < D,  lim_{|s|→∞} Ψ_-(s)/s = -1.

The conditions in (8.37) are convenient and must have opposite polarity in the limit, since we observe that as s runs off to infinity along the imaginary axis, both A*(-s) and B*(s) must decay to 0 [if they are to have finite moments and if A(t) and B(x) do not contain a sequence of discontinuities, which we will not permit], leaving the left-hand side of Eq. (8.35) equal to -1, which we have suitably matched by the ratio of limits given by Conditions (8.37).
We shall find that this spectrum factorization, which requires us to find Ψ_+(s) and Ψ_-(s) with the appropriate properties, contains the difficult part of this method of solution. Nevertheless, assuming that we have found such a factorization, it is then clear that we may write Eq. (8.33) as

Φ_-(s) = Φ_+(s) Ψ_+(s)/Ψ_-(s)

or

Φ_-(s)Ψ_-(s) = Φ_+(s)Ψ_+(s)      (8.38)

where the common region of analyticity for both sides of Eq. (8.38) is within the strip

0 < Re(s) < D      (8.39)
That this last is true may be seen as follows. We have already assumed that Ψ_+(s) is analytic for Re(s) > 0, and it is further true that Φ_+(s) is analytic in this same region since it is the Laplace transform of a function that is identically zero for negative arguments; the product of these two must therefore be analytic for Re(s) > 0. Similarly, Ψ_-(s) has been given to be analytic for Re(s) < D, and we have that Φ_-(s) is analytic here, as explained earlier following Eq. (8.27); thus the product of these two will be analytic in Re(s) < D. Thus the common region is as stated in Eq. (8.39). Now, Eq. (8.38) establishes that these two functions are equal in the common strip, and so they must represent functions which, when continued in the region Re(s) < 0, are analytic and, when continued in the region Re(s) > D, are also analytic; therefore their analytic continuation contains no singularities in the entire finite s-plane. Since we have established the behavior of the function Φ_+(s)Ψ_+(s) = Φ_-(s)Ψ_-(s) to be analytic and bounded in the finite s-plane, and since we assume Condition (8.37), we may then apply Liouville's theorem*

* Liouville's theorem states, "If f(z) is analytic and bounded for all finite values of z, then f(z) is a constant."
[TITC 52], which immediately establishes that this function must be a constant (say, K). We thus have

Φ_-(s)Ψ_-(s) = Φ_+(s)Ψ_+(s) = K      (8.40)

This immediately yields

Φ_+(s) = K / Ψ_+(s)      (8.41)

The reader should recall that what we are seeking in this development is an expression for the distribution of queueing time whose Laplace transform is exactly the function Φ_+(s), which is now given through Eq. (8.41). It remains for us to demonstrate a method for evaluating the constant K.
Since sΦ_+(s) = W*(s), we have

sΦ_+(s) = ∫_{0-}^{∞} e^{-sy} dW(y)

Let us now consider the limit of this equation as s → 0; working with the right-hand side we have

lim_{s→0} ∫_{0-}^{∞} e^{-sy} dW(y) = ∫_{0-}^{∞} dW(y) = 1

We have thus established

lim_{s→0} sΦ_+(s) = 1      (8.42)

This is nothing more than the final value theorem (entry 18, Table I.3) and comes about since W(∞) = 1. From Eq. (8.41) and this last result we then have

lim_{s→0} sK/Ψ_+(s) = 1

and so we may write

K = lim_{s→0} Ψ_+(s)/s      (8.43)
Equation (8.43) provides a means of calculating the constant K in our solution for Φ_+(s) as given in Eq. (8.41). If we make a Taylor expansion of the function Ψ_+(s) around s = 0 [viz., Ψ_+(s) = Ψ_+(0) + sΨ_+^(1)(0) + (s^2/2!)Ψ_+^(2)(0) + ···] and note from Eqs. (8.35) and (8.36) that Ψ_+(0) = 0, we then recognize that this limit may also be written as

K = lim_{s→0} dΨ_+(s)/ds      (8.44)
and this provides us with an alternate way of calculating the constant K.

We may further explore this constant K by examining the behavior of Φ_+(s)Ψ_+(s) anywhere in the region Re(s) > 0 [i.e., see Eq. (8.40)]; we choose to examine this behavior in the limit as s → ∞, where we know from Eq. (8.37) that Ψ_+(s) behaves as s does; that is,

K = lim_{s→∞} Φ_+(s)Ψ_+(s) = lim_{s→∞} sΦ_+(s)
  = lim_{s→∞} s ∫_{0}^{∞} e^{-sy} W(y) dy

Making the change of variable sy = x we have

K = lim_{s→∞} ∫_{0}^{∞} e^{-x} W(x/s) dx

As s → ∞ we may pull the constant term W(0+) outside the integral and then obtain the value of the remaining integral, which is unity. We thus obtain

K = W(0+)      (8.45)

This establishes that the constant K is merely the probability that an arriving customer need not queue.†
In conclusion then, assuming that we can find the appropriate spectrum factorization in Eq. (8.35), we may immediately solve for the Laplace transform of the waiting-time distribution through Eq. (8.41), where the constant K is given in any of the three forms (8.43), (8.44), or (8.45). Of course, it then remains to invert the transform, but the problems involved in that calculation have been faced before in numerous of our other solution forms.
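The agreement of the three forms (8.43), (8.44), and (8.45) for K is easy to check numerically once a factorization is in hand. The sketch below uses the M/M/1 factorization Ψ_+(s) = s(s + μ - λ)/(s + μ) derived in Example 1 below, with assumed illustrative rates λ = 1, μ = 2 (for M/M/1, W(0+) = 1 - ρ):

```python
lam, mu = 1.0, 2.0                                    # assumed rates, rho = 1/2
psi_plus = lambda s: s * (s + mu - lam) / (s + mu)    # Psi_+(s) for M/M/1 (Example 1)

eps = 1e-8
K_843 = psi_plus(eps) / eps                       # Eq. (8.43): lim Psi_+(s)/s
K_844 = (psi_plus(eps) - psi_plus(0.0)) / eps     # Eq. (8.44): dPsi_+/ds at s = 0
K_845 = 1.0 - lam / mu                            # Eq. (8.45): W(0+) = 1 - rho for M/M/1

assert abs(K_843 - 0.5) < 1e-6
assert abs(K_844 - 0.5) < 1e-6
assert K_845 == 0.5
```

All three forms give K = 1 - ρ = 1/2 here, as Eq. (8.55) below confirms analytically.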
It is possible to carry out the solution of this problem by concentrating on Ψ_-(s) rather than Ψ_+(s), and in some cases this simplifies the calculations. In such cases we may proceed from Eq. (8.35) to obtain

Ψ_+(s) = Ψ_-(s)[A*(-s)B*(s) - 1]      (8.46)

From Eq. (8.41) we then have

Φ_+(s) = K / {[A*(-s)B*(s) - 1]Ψ_-(s)}      (8.47)
† Note that W(0+) is not necessarily equal to 1 - ρ, which is the fraction of time the server is idle. (These two are equal for the system M/G/1.)
In order to evaluate the constant K in this case we differentiate Eq. (8.46) at s = 0, that is,

dΨ_+(s)/ds |_{s=0} = Ψ_-^(1)(0)[A*(0)B*(0) - 1] + Ψ_-(0)[A*(0)B*^(1)(0) - A*^(1)(0)B*(0)]      (8.48)

From Eq. (8.44) we recognize the left-hand side of Eq. (8.48) as the constant K, and we may now evaluate the right-hand side to obtain

K = 0 + Ψ_-(0)[-x̄ + t̄]

giving

K = Ψ_-(0)(1 - ρ)t̄      (8.49)

Thus, if we wish to use Ψ_-(s) in our solution form, we obtain the transform of the waiting-time distribution from Eq. (8.47), where the unknown constant K is evaluated in terms of Ψ_-(s) through Eq. (8.49).
Summarizing then, once we have carried out the spectrum factorization as indicated in Eq. (8.35), we may proceed in one of two directions in solving for Φ_+(s), the transform of the waiting-time distribution. The first method gives us

Φ_+(s) = [lim_{s'→0} Ψ_+(s')/s'] / Ψ_+(s)      (8.50)

and the second provides us with

Φ_+(s) = Ψ_-(0)(1 - ρ)t̄ / {[A*(-s)B*(s) - 1]Ψ_-(s)}      (8.51)
We now proceed to demonstrate the use of these results in some examples.

Example 1: M/M/1

Our old friend M/M/1 is extremely straightforward and should serve to clarify the meaning of spectrum factorization. Since both the interarrival time and the service time are exponentially distributed random variables, we immediately have A*(s) = λ/(s + λ) and B*(s) = μ/(s + μ), where x̄ = 1/μ and t̄ = 1/λ. In order to solve for Φ_+(s) (the transform of the waiting-time distribution), we must first form the expression given in Eq. (8.34), that is,

A*(-s)B*(s) - 1 = [λ/(λ - s)][μ/(s + μ)] - 1
                = [s^2 + s(μ - λ)] / [(λ - s)(s + μ)]
Thus, from Eq. (8.35), we obtain

Ψ_+(s)/Ψ_-(s) = A*(-s)B*(s) - 1 = s(s + μ - λ) / [(s + μ)(λ - s)]      (8.52)
In Figure 8.4 we show the location of the zeroes (denoted by a circle) and poles (denoted by a cross) in the complex s-plane for the function given in Eq. (8.52). Note that in this particular example the roots of the numerator (zeroes of the expression) and the roots of the denominator (poles of the expression) are especially simple to find; in general, one of the most difficult parts of this method of spectrum factorization is to solve for the roots. In order to factorize we require that Conditions (8.36) and (8.37) be maintained. Inspecting the pole-zero plot in Figure 8.4 and remembering that Ψ_+(s) must be analytic and zero-free for Re(s) > 0, we may collect together the two zeroes (at s = 0 and s = -μ + λ) and one pole (at s = -μ) and still satisfy this required condition. Similarly, Ψ_-(s) must be analytic and free from zeroes for Re(s) < D for some D > 0; we can obtain such a condition if we allow this function to contain the remaining pole (at s = λ) and choose D = λ. This we show in Figure 8.5.
Thus we have

Ψ_+(s) = s(s + μ - λ)/(s + μ)      (8.53)

Ψ_-(s) = λ - s      (8.54)

Note that Condition (8.37) is satisfied for the limit as s → ∞.

Figure 8.4 Zeroes (o) and poles (x) of Ψ_+(s)/Ψ_-(s) for M/M/1.
Figure 8.5 Factorization into Ψ_+(s) and 1/Ψ_-(s) for M/M/1.
We are now faced with finding K. From Eq. (8.43) we have

K = lim_{s→0} Ψ_+(s)/s = lim_{s→0} (s + μ - λ)/(s + μ)
  = (μ - λ)/μ = 1 - ρ      (8.55)

Our expression for the Laplace transform of the waiting-time PDF for M/M/1 is therefore, from Eq. (8.41),

Φ_+(s) = (1 - ρ)(s + μ) / [s(s + μ - λ)]      (8.56)
At this point, typically, we attempt to invert the transform to get the waiting-time distribution. However, for this M/M/1 example, we have already carried out this inversion for W*(s) = sΦ_+(s) in going from Eq. (5.120) to Eq. (5.123). The solution we obtain is the familiar form

W(y) = 1 - ρe^{-μ(1-ρ)y}    y ≥ 0      (8.57)
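This familiar M/M/1 result can also be checked against a direct simulation of the recurrence w_{n+1} = (w_n + u_n)^+; the rates λ = 1, μ = 2 and the sample size below are assumptions chosen for illustration:

```python
import math, random

random.seed(1)
lam, mu = 1.0, 2.0          # assumed illustrative rates, rho = 1/2
rho, N = lam / mu, 200_000

w, waits = 0.0, []
for _ in range(N):
    waits.append(w)
    u = random.expovariate(mu) - random.expovariate(lam)   # u_n = x_n - t_{n+1}
    w = max(0.0, w + u)                                    # Lindley recurrence

W = lambda y: 1.0 - rho * math.exp(-mu * (1.0 - rho) * y)  # Eq. (8.57)
for y in (0.0, 0.5, 2.0):
    empirical = sum(1 for v in waits if v <= y) / N
    assert abs(empirical - W(y)) < 0.02
```

The empirical distribution of the simulated waits agrees with Eq. (8.57) to within sampling error, including the atom of size 1 - ρ at the origin.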
Example 2: G/M/1†
In this case B*(s) = μ/(s + μ), but now A*(s) is completely arbitrary, giving us

A*(-s)B*(s) - 1 = μA*(-s)/(s + μ) - 1

and so we have

Ψ_+(s)/Ψ_-(s) = [μA*(-s) - s - μ] / (s + μ)      (8.58)

† This example forces us to locate roots using Rouché's theorem in a way often necessary for specific G/G/1 problems when the spectrum-factorization method is used. Of course, we have already studied this system in Section 6.4 and will compare the results for both methods.
In order to factorize we must find the roots of the numerator in this equation. We need not concern ourselves with the poles due to A*(-s), since they must lie in the region Re(s) > 0 [i.e., A(t) = 0 for t < 0] and we are attempting to find Ψ_+(s), which cannot include any such poles. Thus we study only the zeroes of the function

s + μ - μA*(-s) = 0      (8.59)

Clearly, one root of this equation occurs at s = 0. In order to find the remaining roots, we make use of Rouché's theorem (given in Appendix I but which we repeat here):

Rouché's Theorem: If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeroes inside C.

In solving for the roots of Eq. (8.59) we make the identification

f(s) = s + μ
g(s) = -μA*(-s)

We have by definition

A*(-s) = ∫_{0}^{∞} e^{st} dA(t)

We now choose C to be the contour that runs up the imaginary axis and then forms an infinite-radius semicircle moving counterclockwise and surrounding the left half of the s-plane, as shown in Figure 8.6. We consider this contour since we are concerned about all the poles and zeroes in Re(s) ≤ 0, so that we may properly include them in Ψ_+(s) [recall that Ψ_-(s) may contain none such]; Rouché's theorem will give us information concerning the number of zeroes in Re(s) ≤ 0, which we must consider. As usual, we assume that the real and imaginary parts of the complex variable s are given by σ and ω, respectively, that is, for j = √(-1),

s = σ + jω
Figure 8.6 The contour C for G/M/1.
Now for Re(s) = σ ≤ 0 we have |e^{σt}| ≤ 1 (t ≥ 0) and so

|g(s)| = |μ ∫_{0}^{∞} e^{st} dA(t)|
       = |μ ∫_{0}^{∞} e^{σt} e^{jωt} dA(t)|
       ≤ |μ ∫_{0}^{∞} e^{jωt} dA(t)|
       ≤ |μ ∫_{0}^{∞} dA(t)| = μ      (8.60)

Similarly we have

|f(s)| = |s + μ|      (8.61)

Now, examining the contour C as shown in Figure 8.6, we observe that for all points on the contour, except at s = 0, we have from Eqs. (8.60) and (8.61) that

|f(s)| = |s + μ| > μ ≥ |g(s)|      (8.62)

This follows since s + μ (for s on C) is a vector whose length is the distance from the point -μ to the point on C where s is located.

Figure 8.7 The excursion around the origin.

We are almost in a
position to apply Rouché's theorem; the only remaining consideration is to show that |f(s)| > |g(s)| in the vicinity of s = 0. For this purpose we allow the contour C to make a small semicircular excursion to the left of the origin, as shown in Figure 8.7. We note at s = 0 that |g(0)| = |f(0)| = μ, which does not satisfy the conditions for Rouché's theorem. The small semicircular excursion of radius ε (ε > 0) that we take to the left of the origin overcomes this difficulty as follows. Considering an arbitrary point s on this semicircle (see the figure), which lies at an angle θ with the σ-axis, we may write s = σ + jω = -ε cos θ + jε sin θ and so we have

|f(s)|^2 = |s + μ|^2 = |-ε cos θ + jε sin θ + μ|^2

Forming the product of (s + μ) and its complex conjugate, we get

|f(s)|^2 = (μ - ε cos θ)^2 + O(ε^2)
         = μ^2 - 2με cos θ + O(ε^2)      (8.63)

Note that the smallest value for |f(s)| occurs for θ = 0. Evaluating g(s) on this same semicircular excursion, we have

|g(s)|^2 = μ^2 |∫_{0}^{∞} e^{st} dA(t)|^2

From the power-series expansion of the exponential inside the integral we have

|g(s)|^2 = μ^2 |∫_{0}^{∞} [1 + (-ε cos θ + jε sin θ)t + ···] dA(t)|^2
We recognize the integrals in this series as proportional to the moments of the interarrival time, and so

|g(s)|^2 = μ^2 |1 - ε t̄ cos θ + jε t̄ sin θ + O(ε^2)|^2

Forming |g(s)|^2 by multiplying g(s) by its complex conjugate, we have

|g(s)|^2 = μ^2 (1 - 2ε t̄ cos θ + O(ε^2))      (8.64)

where, as usual, ρ = x̄/t̄ = 1/μt̄. Now since θ lies in the range -π/2 ≤ θ ≤ π/2, which gives cos θ ≥ 0, we have as ε → 0 that on the shrinking semicircle surrounding the origin

μ^2 - 2με cos θ > μ^2 - (2με/ρ) cos θ      (8.65)

This last is true since ρ < 1 for our stable system. The left-hand side of Inequality (8.65) is merely the expression given in Eq. (8.63) for |f(s)|^2, correct up to the first order in ε, and the right-hand side is merely the expression in Eq. (8.64) for |g(s)|^2, again correct up to the first order in ε. Thus we
have shown that in the vicinity of s = 0, |f(s)| > |g(s)|. This fact now having been established for all points on the contour C, we may apply Rouché's theorem and state that f(s) and f(s) + g(s) have the same number of zeroes inside the contour C. Since f(s) has only one zero (at s = -μ) it is clear that the expression given in Eq. (8.59) [f(s) + g(s)] has only one zero for Re(s) < 0; let this zero occur at the point s = -s_1. As discussed above, the point s = 0 is also a root of Eq. (8.59). We may therefore write Eq. (8.58) as

Ψ_+(s)/Ψ_-(s) = [ (μA*(-s) - s - μ) / (s(s + s_1)) ] [ s(s + s_1) / (s + μ) ]      (8.66)
where the first bracketed term contains no poles and no zeroes in Re(s) ≤ 0 (we have divided out the only two zeroes, at s = 0 and s = -s_1, in this half-plane). We now wish to extend the region Re(s) ≤ 0 into the region Re(s) < D, and we choose D (> 0) such that no new zeroes or poles of Eq. (8.59) are introduced as we extend to this new region. The first bracket qualifies for [Ψ_-(s)]^{-1}, and we see immediately that the second bracket qualifies for Ψ_+(s), since none of its zeroes (s = 0, s = -s_1) or poles (s = -μ) are in Re(s) > 0. We may then factorize Eq. (8.66) in the following form:

Ψ_+(s) = s(s + s_1)/(s + μ)      (8.67)

Ψ_-(s) = s(s + s_1)/[μA*(-s) - s - μ]      (8.68)
We have now assured that the functions given in these last two equations satisfy Conditions (8.36) and (8.37). We evaluate the unknown constant K as follows:

K = lim_{s→0} Ψ_+(s)/s = lim_{s→0} (s + s_1)/(s + μ)
  = s_1/μ = W(0+)      (8.69)

Thus we have from Eq. (8.41)

Φ_+(s) = s_1(μ + s) / [μs(s + s_1)]      (8.70)

The partial-fraction expansion for this last function gives us

Φ_+(s) = 1/s - (1 - s_1/μ)/(s + s_1)

Inverting by inspection we obtain the final solution for G/M/1:

W(y) = 1 - (1 - s_1/μ)e^{-s_1 y}    y ≥ 0      (8.71)
The reader is urged to compare this last result with that given in Eq. (6.30), also for the system G/M/1; the comparison is clear, and in both cases there is a single constant that must be solved for. In the solution given here that constant is found as the root of Eq. (8.59) with Re(s) < 0; in the development given in Chapter 6, one must solve Eq. (6.28), which is equivalent to Eq. (8.59).
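In practice the single left-half-plane root s = -s_1 of Eq. (8.59) must usually be located numerically. The sketch below does this by bisection for an assumed D/M/1 example (deterministic interarrival time t̄ = 1, so that A*(-s) = e^{s t̄}, with μ = 2 and hence ρ = 1/2):

```python
import math

mu, tbar = 2.0, 1.0     # assumed: D/M/1 with service rate 2, interarrival exactly 1

# f(s) = s + mu - mu*A*(-s); for deterministic arrivals A*(-s) = exp(s * tbar).
f = lambda s: s + mu - mu * math.exp(s * tbar)

# The nonzero root in Re(s) < 0 is bracketed by f(-mu) < 0 < f(-1e-9):
lo, hi = -mu, -1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
s1 = -0.5 * (lo + hi)   # the root lies at s = -s1

# Eq. (8.71) then gives W(y) = 1 - (1 - s1/mu) * exp(-s1 * y); here s1 is about 1.59.
```

For these assumed parameters the computed W(0+) = s_1/μ agrees with the classical D/M/1 result obtained from Eq. (6.28).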
Example 3: E_2/M/1

The example for G/M/1 can be carried no further in the general case. We find it instructive therefore to consider a more specific G/M/1 example and finish the calculations; the example we choose is the one we used in Chapter 6, for which A*(s) is given in Eq. (6.35) and corresponds to an E_2/M/1 system, where the two arrival stages have different death rates. For that example we
Figure 8.8 Pole-zero pattern for the E_2/M/1 example.
note that the poles of A*(-s) occur at the points s = μ, s = 2μ, which as promised lie in the region Re(s) > 0. As our first step in factorizing we form

Ψ_+(s)/Ψ_-(s) = A*(-s)B*(s) - 1
              = [2μ^2 / ((μ - s)(2μ - s))][μ/(s + μ)] - 1
              = -s(s - μ + μ√2)(s - μ - μ√2) / [(s + μ)(μ - s)(2μ - s)]      (8.72)
The spectrum factorization is considerably simplified if we plot these poles and zeroes in the complex plane, as shown in Figure 8.8. It is clear that the two poles and one zero in the right half-plane must be associated with Ψ_-(s). Furthermore, since the strip 0 < Re(s) < μ contains no zeroes and no poles, we choose D = μ and identify the remaining two zeroes and the single pole in the region Re(s) < D as being associated with Ψ_+(s). Note well that the zero located at s = (1 - √2)μ is in fact the single root of the expression μA*(-s) - s - μ located in the left half-plane, as discussed above, and therefore s_1 = (√2 - 1)μ. Of course, we need go no further to solve our problem, since the solution is now given through Eq. (8.71); however, let us continue identifying the various forms in our solution to clarify the remaining steps. With this factorization we may rewrite Eq. (8.72) as

Ψ_+(s)/Ψ_-(s) = [ -(s - μ - μ√2) / ((μ - s)(2μ - s)) ] [ s(s - μ + μ√2) / (s + μ) ]
In this form we recognize the first bracket as 1/Ψ_-(s) and the second bracket as Ψ_+(s). Thus we have

Ψ_+(s) = s(s - μ + μ√2)/(s + μ)      (8.73)

We may evaluate the constant K from Eq. (8.69) to find

K = s_1/μ = √2 - 1      (8.74)

and this of course corresponds to W(0+), which is the probability that a new arrival need not wait for service. Finally, then, we substitute these values into Eq. (8.71) to find

W(y) = 1 - (2 - √2)e^{-(√2-1)μy}    y ≥ 0      (8.75)

which as expected corresponds exactly to Eq. (6.37).
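As a check on Eq. (8.75), one can simulate this E_2/M/1 system directly (μ = 1 and the sample size are assumed for illustration): the fraction of customers who do not wait should approach W(0+) = √2 - 1 ≈ 0.414.

```python
import math, random

random.seed(7)
mu = 1.0                        # service rate; the two arrival stages have rates mu, 2*mu
N, w, no_wait = 200_000, 0.0, 0

for _ in range(N):
    if w < 1e-12:
        no_wait += 1
    t = random.expovariate(mu) + random.expovariate(2 * mu)   # two-stage interarrival
    x = random.expovariate(mu)                                # exponential service
    w = max(0.0, w + x - t)                                   # Lindley recurrence

# Eq. (8.75) at y = 0: W(0+) = 1 - (2 - sqrt(2)) = sqrt(2) - 1
assert abs(no_wait / N - (math.sqrt(2) - 1.0)) < 0.02
```

Here ρ = x̄/t̄ = 2/3, and the simulated no-wait fraction matches the spectral result to within sampling error.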
This method of spectrum factorization has been used successfully by Rice [RICE 62], who considers the busy period for the G/G/1 system. Among the interesting results available, there is one corresponding to the limiting distribution of long waiting times in the heavy-traffic case (which we develop in Section 2.1 of Volume II); Rice gives a similar approximation for the duration of a busy period in the heavy-traffic case.
8.3. KINGMAN'S ALGEBRA FOR QUEUES

Let us once again state the fundamental relationships underlying the G/G/1 queue. For u_n = x_n - t_{n+1} we have the basic relationship

w_{n+1} = (w_n + u_n)^+      (8.76)

and we have also seen that

w_n = max [0, u_{n-1}, u_{n-1} + u_{n-2}, . . . , u_{n-1} + ··· + u_1, u_{n-1} + ··· + u_0 + w_0]
We observed earlier that {w_n} is a Markov process with stationary transition probabilities; its total stochastic structure is given by P[w_{m+n} ≤ y | w_m = x], which may be calculated as an n-fold integral over the n-dimensional joint distribution of the n random variables w_{m+1}, . . . , w_{m+n} over that region of the space which results in w_{m+n} ≤ y. This calculation is much too complicated, and so we look for alternative means to solve this problem. Pollaczek [POLL 57] used a spectral approach and complex integrals to carry out the solution. Lindley [LIND 52] observed that w_n has the same distribution as w_n^s, defined earlier as

w_n^s = max [0, U_1, U_2, . . . , U_n]

where U_n is the n-fold partial sum of the u_i (with U_0 = 0).
If we have the case E[u_n] < 0, which corresponds to ρ = x̄/t̄ < 1, then a stable solution exists for the limiting random variable w̃ such that

w̃ = sup_{n≥0} U_n      (8.77)

independent of w_0. The method of spectrum factorization given in the previous section is Smith's [SMIT 53] approach to the solution of Lindley's Wiener-Hopf integral equation. Another approach, due to Spitzer, uses combinatorial methods and leads to Spitzer's identity [SPIT 57]. Many proofs for this identity exist, and Wendel [WEND 58] carried it out by exposing the underlying algebraic structure of the problem. Keilson [KEIL 65] demonstrated the application of Green's functions to the solution of G/G/1. Beneš [BENE 63] also considered the G/G/1 system by investigating the unfinished work and its variants. These many approaches, each of which is rather complicated, force one to inquire whether or not there is a larger underlying structure which places these solution methods in a common framework. In 1966 Kingman [KING 66] addressed this problem and introduced his algebra for queues to expose the common structure; we study this algebra briefly in this section.
From Eq. (8.76) we clearly could solve for the pdf of w_{n+1} iteratively,
starting with n = 0 and with a given pdf for w_0; recall that the pdf for u_n
[i.e., c(u)] is independent of n. Our iterative procedure would proceed as
follows. Suppose we had already calculated the pdf for w_n, which we denote
by w_n(y) ≜ dW_n(y)/dy, where W_n(y) = P[w_n ≤ y]. To find w_{n+1}(y) we follow
the prescription given in Eq. (8.76) and begin by forming the pdf for the sum
w_n + u_n, which, due to the independence of these two random variables, is
clearly the convolution w_n(y) ⊗ c(y). This convolution will result in a density
function that has nonnegative values for negative as well as positive values of
its argument. However, Eq. (8.76) requires that our next step in the calculation
of w_{n+1}(y) is to calculate the pdf associated with (w_n + u_n)⁺; this requires
that we take the total probability associated with all negative arguments
for the density just found [i.e., for w_n(y) ⊗ c(y)] and collect it
together as an impulse of probability located at the origin for w_{n+1}(y). The
value of this impulse will just be the integral of our former density on the
negative half line. We say in this case that "we sweep the probability in the
negative half line up to the origin." The values found from the convolution
on the positive half line are correct for w_{n+1} in that region. The algebra that
describes this operation is that which Kingman introduces for studying the
system G/G/1. Our iterative procedure continues by next forming the
convolution of w_{n+1}(y) with c(y), sweeping the probability in the negative
half line up to the origin to form w_{n+2}(y), then proceeding to form w_{n+3}(y)
in a like fashion, and so on.
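The procedure just described is easy to carry out numerically. A minimal sketch (distributions restricted to an integer lattice purely for simplicity, which is an assumption of this sketch, not of the text): each pmf is a dict of point masses; we convolve with c and then sweep the nonpositive mass into an atom at the origin.

```python
def convolve(p, q):
    # pdf of w_n + u_n: discrete convolution of two pmfs ({value: prob} dicts)
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, 0.0) + pa * qb
    return out

def sweep(p):
    # "sweep the probability in the negative half line up to the origin"
    out = {v: pr for v, pr in p.items() if v > 0}
    out[0] = out.get(0, 0.0) + sum(pr for v, pr in p.items() if v <= 0)
    return out

c = {-2: 0.5, -1: 0.25, 1: 0.25}   # assumed lattice pdf for u_n, with E[u] = -1 < 0
w = {0: 1.0}                       # w_0 = 0 with probability 1
for _ in range(60):                # iterate w_{n+1} = pi(c (x) w_n)
    w = sweep(convolve(w, c))
mean_wait = sum(v * pr for v, pr in w.items())
```

After enough iterations w is, to numerical accuracy, a fixed point of the convolve-and-sweep map, i.e., the stationary waiting-time pmf of Eq. (8.79) for this toy lattice queue.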
8.3. KINGMAN'S ALGEBRA FOR QUEUES 301
The elements of this algebra consist of all finite signed measures on the
real line (for example, a pdf on the real line). For any two such measures,
say h_1 and h_2, the sum h_1 + h_2 and also all scalar multiples of either belong
to this algebra. The product operation h_1 ⊗ h_2 is defined as the convolution
of h_1 with h_2. It can be shown that this algebra is a real commutative algebra.
There also exists an identity element denoted by e such that e ⊗ h = h
for any h in the algebra, and it is clear that e will merely be a unit impulse
located at the origin. We are interested in operators that map real functions
into other real functions and that are measurable. Specifically we are interested
in the operator that takes a value x and maps it into the value (x)⁺,
where as usual we have (x)⁺ ≜ max [0, x]. Let us denote this operator by π,
which is not to be confused with the matrix of the transition probabilities
used in Chapter 2; thus, if we let A denote some event which is measurable,
and let h(A) = P{ω : X(ω) ∈ A} denote the measure of this event, then π is
defined through

π[h](A) = P{ω : X⁺(ω) ∈ A}
We note the linearity of this operator, that is, π(ah) = aπ(h) and
π(h_1 + h_2) = π(h_1) + π(h_2). Thus we have a commutative algebra (with
identity) along with the linear operator π that maps this algebra into itself.
Since [(x)⁺]⁺ = (x)⁺ we see that an important property of this operator π
is that

π(π(h)) = π(h)
A linear operator satisfying such a condition is referred to as a projection.
Furthermore, a projection whose range and null space are both subalgebras of
the underlying algebra is called a Wendel projection; it can be shown that π
has this property, and it is this that makes the solution for G/G/1 possible.
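The projection property is easy to check numerically on discrete measures. In this sketch (an illustration on a lattice, not the text's measure-theoretic formalism) π is again the "sweep" of nonpositive mass to the origin:

```python
def sweep(h):
    # pi: collect all mass on the nonpositive axis into an atom at the origin
    out = {v: pr for v, pr in h.items() if v > 0}
    out[0] = out.get(0, 0.0) + sum(pr for v, pr in h.items() if v <= 0)
    return out

h = {-3: 0.2, -1: 0.3, 0: 0.1, 2: 0.4}      # a discrete measure
assert sweep(sweep(h)) == sweep(h)          # pi(pi(h)) = pi(h): a projection
assert sweep({v: 2 * pr for v, pr in h.items()}) == \
       {v: 2 * pr for v, pr in sweep(h).items()}   # linearity under scaling
```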
Now let us return to considerations of the queue G/G/1. Recall that the
random variable u_n has pdf c(u) and that the waiting time for the nth customer,
w_n, has pdf w_n(y). Again, since u_n and w_n are independent, w_n + u_n has
pdf c(y) ⊗ w_n(y). Furthermore, since w_{n+1} = (w_n + u_n)⁺ we have therefore

w_{n+1}(y) = π(c(y) ⊗ w_n(y))    n = 0, 1, ...    (8.78)

and this equation gives the pdf for waiting times by induction. Now if ρ < 1
the limiting pdf w(y) exists and is independent of w_0. That is, w̃ must have the
same pdf as (w̃ + ũ)⁺ (a remark due to Lindley [LIND 52]). This gives us the
basic equation defining the stationary pdf for waiting time in G/G/1:

w(y) = π(c(y) ⊗ w(y))    (8.79)

The solution of this equation is of main interest in solving G/G/1. The
remaining portion of this section gives a succinct summary of some elegant
results involving this algebra; only the courageous are encouraged to continue.
The particular formalism used for constructing this algebra and carrying
out the solution of Eq. (8.79) is what distinguishes the various methods we
have mentioned above. In order to see the relationship among the various
approaches we now introduce Spitzer's identity. In order to state this identity,
which involves the recurrence relation given in Eq. (8.78), we must introduce
the following z-transform:

X(z, y) = Σ_{n=0}^{∞} w_n(y) zⁿ    (8.80)
Addition and scalar multiplication may be defined in the obvious way for
this power series, and "multiplication" will be defined as corresponding to
convolution, as is the usual case for transforms. Spitzer's identity is then given
as

X(z, y) = e^{−π(Y)}    (8.81)

where

Y ≜ log [e − zc(y)]    (8.82)

and where the exponential and the logarithm are interpreted through their
power series in this algebra.
Thus w_n(y) may be found by expanding X(z, y) as a power series in z and
picking out the coefficient of zⁿ. It is not difficult to show that

X(z, y) = w_0(y) + zπ(c(y) ⊗ X(z, y))    (8.83)

We may also form a generating function on the sequence E[e^{−sw_n}] ≜ W_n*(s),
which permits us to find the transform of the limiting waiting time; that is,

lim_{n→∞} W_n*(s) = W*(s) ≜ E[e^{−sw̃}]

so long as ρ < 1. This leads us to the following equation, which is also
referred to as Spitzer's identity and is directly applicable to our queueing
problem:

W*(s) = exp( −Σ_{n=1}^{∞} (1/n) E[1 − e^{−s(U_n)⁺}] )    (8.84)
We never claimed it would be simple!†

† From this identity we easily find that

E[w̃] = W = Σ_{n=1}^{∞} (1/n) E[(U_n)⁺]    (8.86)

If we deal with W_n*(s) it is possible to define another real commutative
algebra (in which the product is defined as multiplication rather than
convolution, as one might expect). The algebraic solution to our basic equation
(8.79) may be carried out in either of these two algebras; in the transformed
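The footnote formula lends itself to a quick Monte Carlo check. The sketch below assumes an M/M/1 queue with λ = 0.5 and μ = 1 (so ρ = 0.5 and the exact mean wait is ρ/(μ(1 − ρ)) = 1); the infinite sum is truncated, which is harmless here because E[(U_n)⁺] dies off rapidly under the negative drift.

```python
import random

random.seed(1)
lam, mu = 0.5, 1.0          # assumed M/M/1 parameters, rho = 0.5
paths, N = 10000, 60        # Monte Carlo paths; truncation of the n-sum

total = 0.0
for _ in range(paths):
    U = 0.0
    for n in range(1, N + 1):
        U += random.expovariate(mu) - random.expovariate(lam)  # U_n, drift -1
        if U > 0.0:
            total += U / n                                     # (U_n)^+ / n
W_est = total / paths

W_exact = (lam / mu) / (mu * (1 - lam / mu))   # rho/(mu(1 - rho)) = 1.0
```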
case one deals with the power series

X*(z, s) = Σ_{n=0}^{∞} W_n*(s) zⁿ    (8.85)

rather than with the series given in Eq. (8.80).
Pollaczek considers this latter case and for G/G/1 obtains the following
equation, which serves to define the system behavior:

X*(z, s) = W_0*(s) + (zs/2πj) ∫_{−j∞}^{+j∞} [C*(s′)X*(z, s′) / (s′(s − s′))] ds′

and he then shows, after considerable complexity, that this solution must be of
the form

X*(z, s) = e^{−π(Y(s))}    (8.87)

where

Y(s) ≜ log (1 − zC*(s))

and

π(Y(s)) ≜ (s/2πj) ∫_{−j∞}^{+j∞} [Y(s′) / (s′(s′ − s))] ds′

When C*(s) is simple enough these expressions can be evaluated by
contour integrals.
On the other hand, the method we have described in the previous section
using spectrum factorization may be phrased in terms of this algebra as
follows. If we replace sΦ₊(s) by W*(s) and sΦ₋(s) by W₋*(s), then our basic
equation reads

W*(s) + W₋*(s) = C*(s)W*(s)

Corresponding to Eq. (8.83), the transformed version becomes

π(X*(z, s) − W_0*(s) − zC*(s)X*(z, s)) = 0

and the spectrum factorization takes the form

1 − zC*(s) = e^{π(Y(s))} e^{Y(s) − π(Y(s))}    (8.88)
This spectrum factorization, of course, is the critical step.
This unification as an algebra for queues is elegant but as yet has provided
little in the way of extending the theory. In particular, Kingman points out
that this approach does not easily extend to the system G/G/m since, whereas
the range of this algebra is a subalgebra, its null space is not; therefore we
do not have a Wendel projection. Perhaps the most enlightening aspect of
this discussion is the significant equation (8.79), which gives the basic
condition that must be satisfied by the pdf of waiting time. We take advantage
of its recurrence form, Eq. (8.78), in Chapter 2, Volume II.
8.4. THE IDLE TIME AND DUALITY
Here we obtain an expression for W*(s) in terms of the transform of the
idle-time pdf and interpret this result in terms of duality in queues.
Let us return to the basic equation given in (8.5), that is,

w_{n+1} = max [0, w_n + u_n]

We now define a new random variable which is the "other half" of the
waiting time, namely,

y_n = −min [0, w_n + u_n]    (8.89)

This random variable in some sense corresponds to the random variable
whose distribution is W₋(y), which we studied earlier. Note from these last
two equations that when y_n > 0 then w_{n+1} = 0, in which case y_n is merely
the length of the idle period, which is terminated with the arrival of C_{n+1}.
Moreover, since either w_{n+1} or y_n must be 0, we have that

w_{n+1} y_n = 0    (8.90)

We adopt the convention that in order for an idle period to exist, it must have
nonzero length, and so if y_n and w_{n+1} are both 0, then we say that the busy
period continues (an annoying triviality).
From the definitions we observe the following to be true in all cases:

w_{n+1} − y_n = w_n + u_n    (8.91)
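Equations (8.89)-(8.91) are pathwise identities, so they can be verified exactly on simulated sample paths. A small sketch (exponential interarrival and service laws are an assumption of the test only):

```python
import random

random.seed(7)
w = 0.0                    # w_0 = 0
for _ in range(1000):
    u = random.expovariate(1.0) - random.expovariate(0.5)  # u_n = x_n - t_{n+1}
    w_next = max(0.0, w + u)       # Eq. (8.5)
    y = -min(0.0, w + u)           # Eq. (8.89)
    assert w_next * y == 0.0                       # Eq. (8.90): one is zero
    assert abs((w_next - y) - (w + u)) < 1e-12     # Eq. (8.91)
    w = w_next
```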
From this last equation we may obtain a number of important results, and we
proceed here as we did in Chapter 5, where we derived the expected queue size
for the system M/G/1 using the imbedded Markov chain approach. In
particular, let us take the expectation of both sides of Eq. (8.91) to give

E[w_{n+1}] − E[y_n] = E[w_n] + E[u_n]

We assume E[u_n] < 0, which (except for D/D/1, where ≤ 0 will do) is the
necessary and sufficient condition for there to be a stationary (and unique)
waiting-time distribution independent of n; this is the same as requiring
ρ = x̄/t̄ < 1. In this case we have*

lim_{n→∞} E[w_{n+1}] = lim_{n→∞} E[w_n]

* One must be cautious in claiming that lim_{n→∞} E[w_nᵏ] = lim_{n→∞} E[w_{n+1}ᵏ]
since these are distinct random variables. We permit that step here, but refer the interested
reader to Wolff [WOLF 70] for a careful treatment.
and so our earlier equation gives

E[ỹ] = −E[ũ]    (8.92)

where y_n → ỹ and u_n → ũ. (We note that the idle periods are independent and
identically distributed, but the duration of an idle period does depend upon
the duration of the previous busy period.) Now from Eq. (8.13) we have
E[ũ] = t̄(ρ − 1) and so

E[ỹ] = t̄(1 − ρ)    (8.93)

Let us now square Eq. (8.91) and then take expected values as follows:

E[(w_{n+1} − y_n)²] = E[(w_n + u_n)²]

Using Eq. (8.90) and recognizing that the moments of the limiting distribution
on w_n must be independent of the subscript, we have

E[(ỹ)²] = 2E[w̃ũ] + E[(ũ)²]

We now revert to the simpler notation for moments, \overline{wᵏ} ≜ E[(w̃)ᵏ], etc. Since
w_n and u_n are independent random variables we have \overline{wu} = w̄ū; using this
and Eq. (8.92) we find

W ≜ w̄ = \overline{u²}/(−2ū) − \overline{y²}/(2ȳ)    (8.94)

Recalling that the mean residual life of a random variable X is given by
\overline{X²}/2X̄, we observe that W is merely the mean residual life of u less the
mean residual life of y! We must now evaluate the second moment of ũ.
Since ũ = x̃ − t̃, then \overline{u²} = \overline{(x − t)²}, which gives

\overline{u²} = σ_a² + σ_b² + (t̄)²(1 − ρ)²    (8.95)

where σ_a² and σ_b² are the variances of the interarrival-time and service-time
densities, respectively. Using this expression and our previous result for ū
we may thus convert Eq. (8.94) to

W = [σ_a² + σ_b² + (t̄)²(1 − ρ)²] / [2t̄(1 − ρ)] − \overline{y²}/(2ȳ)    (8.96)
We must now calculate the first two moments of ỹ [we already know that
ȳ = t̄(1 − ρ) but wish to express it differently to eliminate a constant].
This we do by conditioning these moments with respect to the occurrence of
an idle period. That is, let us define

σ₀ ≜ P[ỹ > 0] = P[arrival finds the system idle]    (8.97)

It is clear that we have a stable system when σ₀ > 0. Furthermore, since we
have defined an idle period to occur only when the system remains idle for a
nonzero interval of time, we have that

P[ỹ ≤ y | ỹ > 0] = P[idle period ≤ y]    (8.98)

and this last is just the idle-period distribution earlier denoted by F(y). We
denote by Ĩ the random variable representing the idle period. Now we may
calculate the following:

ȳ = E[ỹ | ỹ = 0]P[ỹ = 0] + E[ỹ | ỹ > 0]P[ỹ > 0] = 0 + σ₀E[ỹ | ỹ > 0]

The expectation in this last equation is merely the expected value of Ĩ and
so we have

ȳ = σ₀Ī    (8.99)

Similarly, we find

\overline{y²} = σ₀\overline{I²}    (8.100)

Thus, in particular, \overline{y²}/2ȳ = \overline{I²}/2Ī (σ₀ cancels!) and so we may rewrite the
expression for W in Eq. (8.96) as

W = [σ_a² + σ_b² + (t̄)²(1 − ρ)²] / [2t̄(1 − ρ)] − \overline{I²}/2Ī    (8.101)
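As a consistency check on Eq. (8.101), the M/M/1 moments (derived in Example 1 of the next section) can be substituted directly; with assumed values λ = 0.5 and μ = 1 the expression collapses to the familiar M/M/1 mean wait ρ/(μ(1 − ρ)):

```python
lam, mu = 0.5, 1.0                      # assumed M/M/1 parameters
t_bar, x_bar = 1 / lam, 1 / mu
rho = x_bar / t_bar
sa2, sb2 = 1 / lam**2, 1 / mu**2        # interarrival and service variances
I_bar, I2_bar = 1 / lam, 2 / lam**2     # idle period ~ exponential(lam)

# Eq. (8.101)
W = (sa2 + sb2 + t_bar**2 * (1 - rho)**2) / (2 * t_bar * (1 - rho)) \
    - I2_bar / (2 * I_bar)
```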
Unfortunately, this is as far as we can go in establishing W for G/G/1. The
calculation now involves the determination of the first two moments of the
idle period. In general, for G/G/1 we cannot easily solve for these moments
since the idle period depends upon the particular way in which the previous
busy period terminated. However, in Chapter 2, Volume II, we place bounds
on the second term in this equation, thereby bounding the mean wait W.
As we did for M/G/1 in Chapter 5, we now return to our basic equation
(8.91) relating the important random variables and attempt to find the
transform of the waiting-time density W*(s) ≜ E[e^{−sw̃}] for G/G/1. As one
might expect, this will involve the idle-time distribution as well. Forming the
transform on both sides of Eq. (8.91) we have

E[e^{−s(w_{n+1} − y_n)}] = E[e^{−s(w_n + u_n)}]

However, since w_n and u_n are independent, we find

E[e^{−s(w_{n+1} − y_n)}] = E[e^{−sw_n}]C*(s)    (8.102)
In order to evaluate the left-hand side of this transform expression we take
advantage of the fact that only one or the other of the random variables
w_{n+1} and y_n may be nonzero. Accordingly, we have

E[e^{−s(w_{n+1} − y_n)}] = E[e^{sy_n} | y_n > 0]P[y_n > 0]
                          + E[e^{−sw_{n+1}} | y_n = 0]P[y_n = 0]    (8.103)

To determine the right-hand side of this last equation we may use the following
similar expansion:

E[e^{−sw_{n+1}}] = E[e^{−sw_{n+1}} | y_n = 0]P[y_n = 0]
                 + E[e^{−sw_{n+1}} | y_n > 0]P[y_n > 0]    (8.104)

However, since w_{n+1} y_n = 0, we have E[e^{−sw_{n+1}} | y_n > 0] = 1. Making use
of the definition for σ₀ in Eq. (8.97) and allowing the limit as n → ∞ we
obtain the following transform expression from Eq. (8.104):

E[e^{−sw̃} | ỹ = 0]P[ỹ = 0] = W*(s) − σ₀

We may then write the limiting form of Eq. (8.103) as

E[e^{−s(w̃ − ỹ)}] = I*(−s)σ₀ + W*(s) − σ₀

where I*(s) is the Laplace transform of the idle-time pdf [see Eq. (8.98)
for the definition of this distribution]. Thus, from this last and from Eq.
(8.102), we obtain immediately

W*(s)C*(s) = σ₀I*(−s) + W*(s) − σ₀    (8.105)

where, as in the past, C*(s) is the Laplace transform for the density describing
the random variable ũ. This last equation finally gives us [MARS 68]

W*(s) = σ₀[1 − I*(−s)] / [1 − C*(s)]    (8.106)

which represents the generalization of the Pollaczek-Khinchin transform
equation given in Chapter 5 and which now applies to the system G/G/1.
Clearly this equation holds at least along the imaginary axis of the complex s-
plane, since in that case it becomes the characteristic function of the various
distributions, which are known to exist.
Let us now consider some examples.
Example 1: M/M/1

For this system we know that the idle-period distribution is the same as the
interarrival-time distribution, namely,

F(y) = P[Ĩ ≤ y] = 1 − e^{−λy}    y ≥ 0    (8.107)

and so we have the first two moments Ī = 1/λ and \overline{I²} = 2/λ²; we also have
σ_a² = 1/λ² and σ_b² = 1/μ². Using these values in Eq. (8.101) we find

W = [λ²(1/λ² + 1/μ²) + (1 − ρ)²] / [2λ(1 − ρ)] − 1/λ    (8.108)

W = (ρ/μ) / (1 − ρ)

which of course checks with our earlier results for M/M/1.
We know that I*(s) = λ/(s + λ) and C*(s) = λμ/[(λ − s)(s + μ)]. Moreover,
since the probability that a Poisson arrival finds the system empty is the
same as the long-run proportion of time the system is empty, we have that
σ₀ = 1 − ρ, and so Eq. (8.106) yields

W*(s) = (1 − ρ)[1 − λ/(λ − s)] / {1 − λμ/[(λ − s)(s + μ)]}
      = −(1 − ρ)s(s + μ) / [(λ − s)(s + μ) − λμ]
      = (1 − ρ)(s + μ) / (s + μ − λ)    (8.109)

which is the same as Eq. (5.120).
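The reduction just carried out can be spot-checked numerically at arbitrary points of the s-plane (λ = 1 and μ = 2 are assumed values for the test):

```python
lam, mu = 1.0, 2.0                 # assumed M/M/1 parameters, rho = 0.5
rho = lam / mu
for s in (0.25, 3.0, 0.1 + 0.5j, 1.5j):
    I_minus = lam / (lam - s)                    # I*(-s)
    C = lam * mu / ((lam - s) * (s + mu))        # C*(s) = A*(-s)B*(s)
    lhs = (1 - rho) * (1 - I_minus) / (1 - C)    # Eq. (8.106)
    rhs = (1 - rho) * (s + mu) / (s + mu - lam)  # Eq. (8.109)
    assert abs(lhs - rhs) < 1e-12
```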
Example 2: M/G/1

In this case the idle-time distribution is as in M/M/1; however, we must
leave the variance of the service-time distribution as an unknown. We
obtain

W = [λ²(1/λ² + σ_b²) + (1 − ρ)²] / [2λ(1 − ρ)] − 1/λ
  = ρ(1 + C_b²) / [2μ(1 − ρ)]    (8.110)

which is the P-K formula. Also, C*(s) = B*(s)λ/(λ − s) and again σ₀ =
1 − ρ. Equation (8.106) then gives

W*(s) = (1 − ρ)[1 − λ/(λ − s)] / {1 − [λ/(λ − s)]B*(s)}
      = s(1 − ρ) / [s − λ + λB*(s)]    (8.111)

which is the P-K transform equation for waiting time!
Example 3: D/D/1

In this case we have that the length of the idle period is a constant and is
given by Ĩ = t̄ − x̄ = t̄(1 − ρ); therefore Ī = t̄(1 − ρ) and \overline{I²} = (Ī)².
Moreover, σ_a² = σ_b² = 0. Therefore Eq. (8.101) gives

W = [0 + (t̄)²(1 − ρ)²] / [2t̄(1 − ρ)] − (t̄/2)(1 − ρ)

and so

W = 0    (8.112)

This last is of course correct since the equilibrium waiting time in the (stable)
system D/D/1 is always zero.
Since t̃, x̃, and Ĩ are all constants, we have B*(s) = e^{−sx̄}, A*(s) = e^{−st̄}, and
I*(s) = e^{−sĪ} = e^{−st̄(1−ρ)}. Also, with probability one an arrival finds the system
empty; thus σ₀ = 1. Then Eq. (8.106) gives

W*(s) = 1·[1 − e^{st̄(1−ρ)}] / [1 − e^{st̄}e^{−sx̄}] = 1    (8.113)

and so w(y) = u_0(y), an impulse at the origin, which of course checks with the
result that no waiting occurs.
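A numerical spot-check of Eq. (8.113) (with assumed constants t̄ = 2 and x̄ = 1.5): the numerator and denominator of Eq. (8.106) are identical for every s, so the ratio is 1:

```python
import cmath

t_bar, x_bar = 2.0, 1.5            # assumed D/D/1 constants, rho = 0.75
rho = x_bar / t_bar
for s in (0.3, 1.0, 0.7 + 0.4j):
    I_minus = cmath.exp(s * t_bar * (1 - rho))        # I*(-s), I = t_bar(1-rho)
    C = cmath.exp(s * t_bar) * cmath.exp(-s * x_bar)  # C*(s) = A*(-s)B*(s)
    W_star = (1 - I_minus) / (1 - C)                  # sigma_0 = 1 in (8.106)
    assert abs(W_star - 1) < 1e-9
```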
Considerations of the idle-time distribution naturally lead us to the study
of duality in queues. This material is related to the ladder indices we
defined in Section 5.11. The random walk we are interested in is the sequence
of values taken on by U_n [as given in Eq. (8.9)]. Let us denote by U_{n_k} the
value taken on by U_n at the kth ascending ladder index (instants when the
function first exceeds its previous maximum). Since ū < 0 it is clear that
lim_{n→∞} U_n = −∞. Therefore, there will exist a (finite) integer K such
that n_K is the largest ascending ladder index for U_n. Now, from Eq. (8.11),
repeated below,

w̃ = sup_{n≥0} U_n

it is clear that

w̃ = U_{n_K}

Now let us define the random variable l_k (which, as we shall see, is related to
an idle time) as

l_k ≜ U_{n_k} − U_{n_{k−1}}    (8.114)
for k ≤ K. That is, l_k is merely the amount by which the new ascending ladder
height exceeds the previous ascending ladder height. Since all of the random
variables u_n are independent, the random variables l_k conditioned on K
are independent and identically distributed. If we now let 1 − σ = P[U_n ≤
U_{n_k} for all n > n_k], then we may easily calculate the distribution for K as

P[K = k] = (1 − σ)σᵏ    (8.115)

In Exercise 8.16 we show that 1 − σ = P[w̃ = 0]. Also it is clear that

l_1 + l_2 + ··· + l_K = (U_{n_1} − U_{n_0}) + (U_{n_2} − U_{n_1}) + ··· + (U_{n_K} − U_{n_{K−1}}) = U_{n_K}

where n_0 ≜ 0 and U_0 ≜ 0. Thus we see that w̃ has the same distribution as
l_1 + ··· + l_K and so we may write

E[e^{−sw̃}] = E[ E[e^{−s(l_1+···+l_K)} | K] ] = E[(Î*(s))ᴷ]    (8.116)

where Î*(s) is the Laplace transform for the pdf of each of the l_k (each of
which we now denote simply by l̂). We may now evaluate the expectation in
Eq. (8.116) by using the distribution for K in Eq. (8.115) finally to yield

W*(s) = (1 − σ) / [1 − σÎ*(s)]    (8.117)

Here, then, is yet another expression for W*(s) in the G/G/1 system.
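The decomposition of w̃ into ascending-ladder increments can be illustrated on simulated paths; the identity "running maximum = sum of ladder increments" is a telescoping one and holds exactly on every path (the exponential increments below are an assumption of this sketch):

```python
import random

random.seed(3)
for _ in range(200):
    U, record, ladder_sum = 0.0, 0.0, 0.0
    for _ in range(500):
        U += random.expovariate(1.0) - random.expovariate(0.5)  # u_n, E[u] = -1
        if U > record:                  # an ascending ladder index is reached
            ladder_sum += U - record    # ladder increment l_k
            record = U
    assert abs(record - ladder_sum) < 1e-9   # sup U_n = l_1 + ... + l_K
```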
We now wish to interpret the random variable l̂ by considering a "dual"
queue (whose variables we will distinguish by the use of the symbol ˆ). The
dual queue for the G/G/1 system considered above is the queue in which the
service times x_n in the original system become the interarrival times t̂_{n+1} in
the dual queue, and the interarrival times t_{n+1} from the original queue
become the service times x̂_n in the dual queue.† It is clear then that the
random variable û_n for the dual queue will merely be û_n = x̂_n − t̂_{n+1} =
t_{n+1} − x_n = −u_n and, defining Û_n = û_0 + ··· + û_{n−1} for the dual queue, we
have

Û_n = −U_n    (8.118)

† Clearly, if the original queue is stable, the dual must be unstable, and conversely (except
that both may be unstable if ρ = 1).
as the relationship between the dual and the original queues. It is then clear
from our discussion in Section 5.11 that the ascending and descending ladder
indices are interchanged for the original and the dual queue (the same is
true of the ladder heights). Therefore the first ascending ladder index n_1 in the
original queue will correspond to the first descending ladder index in the dual
queue; however, we recall that descending ladder indices correspond to the
arrival of a customer who terminates an idle period. We denote this customer
by C_{n_1}. Clearly the length of the idle period that he terminates in the dual queue
is the difference between the accumulated interarrival times and the accumulated
service times for all customers up to his arrival (these services must
have taken place in the first busy period); that is, for the dual queue,

{Length of first idle period following first busy period}
    = Σ_{n=0}^{n_1−1} t̂_{n+1} − Σ_{n=0}^{n_1−1} x̂_n
    = Σ_{n=0}^{n_1−1} x_n − Σ_{n=0}^{n_1−1} t_{n+1}
    = U_{n_1}    (8.119)

where we have used Eq. (8.114) at the last step. Thus we see that the random
variable l̂ is merely the idle period in the dual queue, and so our Eq. (8.117)
relates the transform of the waiting time in the original queue to the transform
of the idle-time pdf in the dual queue [contrast this with Eq. (8.106), which
relates this waiting-time transform to the transform of the idle time in its own
queue].
This duality observation permits some rather powerful conclusions to be
drawn in simple fashion (and these are discussed at length in [FELL 66],
especially Sections VI.9 and XII.5). Let us discuss two of these.
Example 4: G/M/1

If we have a stable G/M/1 queue (with t̄ = 1/λ and x̄ = 1/μ) then the dual
is an unstable queue of the type M/G/1 (with mean interarrival time 1/μ and
mean service time 1/λ), and so l̂ (the idle time in the dual queue) will be of
exponential form; therefore Î*(s) = μ/(s + μ), which gives from Eq. (8.117) the
following result for the original G/M/1 queue:

W*(s) = (1 − σ)(s + μ) / (s + μ − σμ)

Inverting this and forming the PDF for waiting time we have

W(y) = 1 − σe^{−μ(1−σ)y}    y ≥ 0    (8.120)

which corresponds exactly to Eq. (6.30).
Example 5: M/G/1
As a second example let the original queue be of the form M/G/1 and therefore
the dual is of the form G/M/1. Since σ = P[w̃ > 0] it must be that
σ = ρ for M/G/1. Now in the dual system, since a busy period ends at a
random point in time (and since the service time in this dual queue is memoryless),
an idle period will have a duration equal to the residual life of an
interarrival time; therefore from Eq. (5.11) we see that

Î*(s) = [1 − B*(s)] / (sx̄)    (8.121)

and when these calculations are applied to Eq. (8.117) we have

W*(s) = (1 − ρ) / {1 − ρ[1 − B*(s)]/(sx̄)}    (8.122)

which is the P-K transform equation for waiting time rewritten as in Eq.
(5.106).
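The equivalence of Eq. (8.122) and the P-K transform is a one-line algebraic identity, but a numeric spot-check is reassuring; the sketch below assumes an Erlang-2 service transform purely for the test:

```python
lam, mu = 0.5, 1.0                 # assumed arrival rate and mean service rate
x_bar = 1 / mu
rho = lam * x_bar
for s in (0.3, 1.0, 2.5):
    B = (2 * mu / (s + 2 * mu)) ** 2            # Erlang-2 B*(s), mean 1/mu
    I_hat = (1 - B) / (s * x_bar)               # dual queue's idle transform
    lhs = (1 - rho) / (1 - rho * I_hat)         # Eqs. (8.117) and (8.121)
    rhs = s * (1 - rho) / (s - lam + lam * B)   # P-K transform, Eq. (8.111)
    assert abs(lhs - rhs) < 1e-12
```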
This concludes our study of G/G/1. Sad to say, we have been unable to
give analytic expressions for the waiting-time distribution explicitly in terms
of known quantities. In fact, we have not even succeeded for the mean wait
W! Nevertheless, we have given a method for handling the rational case by
spectrum factorization, which is quite effective. In Chapter 2, Volume II,
we return to G/G/1 and succeed in extracting many of its important properties
through the use of bounds, inequalities, and approximations.
REFERENCES
ANDE 53a Andersen, S. E., "On Sums of Symmetrically Dependent Random Variables," Skand. Aktuar., 36, 123-138 (1953).
ANDE 53b Andersen, S. E., "On the Fluctuations of Sums of Random Variables I," Math. Scand., 1, 263-285 (1953).
ANDE 54 Andersen, S. E., "On the Fluctuations of Sums of Random Variables II," Math. Scand., 2, 195-223 (1954).
BENE 63 Beneš, V. E., General Stochastic Processes in the Theory of Queues, Addison-Wesley (Reading, Mass.), 1963.
FELL 66 Feller, W., Probability Theory and Its Applications, Vol. II, Wiley (New York), 1966.
KEIL 65 Keilson, J., "The Role of Green's Functions in Congestion Theory," Proc. Symp. on Congestion Theory (edited by W. L. Smith and W. E. Wilkinson), Univ. of North Carolina Press (Chapel Hill), 43-71 (1965).
KING 66 Kingman, J. F. C., "On the Algebra of Queues," Journal of Applied Probability, 3, 285-326 (1966).
LIND 52 Lindley, D. V., "The Theory of Queues with a Single Server," Proc. Cambridge Philosophical Society, 48, 277-289 (1952).
MARS 68 Marshall, K. T., "Some Relationships between the Distributions of Waiting Time, Idle Time, and Interoutput Time in the GI/G/1 Queue," SIAM Journal on Applied Mathematics, 16, 324-327 (1968).
POLL 57 Pollaczek, F., Problèmes Stochastiques Posés par le Phénomène de Formation d'une Queue d'Attente à un Guichet et par des Phénomènes Apparentés, Gauthier-Villars (Paris), 1957.
RICE 62 Rice, S. O., "Single Server Systems," Bell System Technical Journal, 41, Part I: "Relations Between Some Averages," 269-278, and Part II: "Busy Periods," 279-310 (1962).
SMIT 53 Smith, W. L., "On the Distribution of Queueing Times," Proc. Cambridge Philosophical Society, 49, 449-461 (1953).
SPIT 56 Spitzer, F., "A Combinatorial Lemma and its Application to Probability Theory," Transactions of the American Mathematical Society, 82, 323-339 (1956).
SPIT 57 Spitzer, F., "The Wiener-Hopf Equation whose Kernel is a Probability Density," Duke Mathematical Journal, 24, 327-344 (1957).
SPIT 60 Spitzer, F., "A Tauberian Theorem and its Probability Interpretation," Transactions of the American Mathematical Society, 94, 150-160 (1960).
SYSK 62 Syski, R., Introduction to Congestion Theory in Telephone Systems, Oliver and Boyd (London), 1962.
TITC 52 Titchmarsh, E. C., Theory of Functions, Oxford Univ. Press (London), 1952.
WEND 58 Wendel, J. G., "Spitzer's Formula: A Short Proof," Proc. American Mathematical Society, 9, 905-908 (1958).
WOLF 70 Wolff, R. W., "Bounds and Inequalities in Queueing," unpublished notes, Department of Industrial Engineering and Operations Research, University of California (Berkeley), 1970.
EXERCISES
8.1. From Eq. (8.18) show that C*(s) = A*(−s)B*(s).
8.2. Find c(u) for M/M/1.
8.3. Consider the system M/D/1 with a fixed service time of x̄ sec.
(a) Find c(u) = P[u_n ≤ u] and sketch its shape.
(b) Find E[u_n].
8.4. For the sequence of random variables given below, generate the figure
corresponding to Figure 8.3 and complete the table.

n               0  1  2  3  4  5  6  7  8  9
t_n             -  2  1  1  5  7  2  2  6
x_n             3  4  2  3  3  4  2  3
u_n
w_n measured
w_n calculated
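Completing such a table is mechanical once the recurrence w_{n+1} = (w_n + u_n)⁺ is coded; a sketch (the short arrays here are illustrative stand-ins, not the exercise's data):

```python
def lindley_table(t, x, w0=0.0):
    # u_n = x_n - t_{n+1}; w_{n+1} = max(0, w_n + u_n)  [Eq. (8.5)]
    # t[n] here is the interarrival time t_{n+1} between C_n and C_{n+1}
    u = [xn - tn for xn, tn in zip(x, t)]
    w = [w0]
    for un in u:
        w.append(max(0.0, w[-1] + un))
    return u, w

# hypothetical stand-in data, NOT the table's values
u, w = lindley_table(t=[2, 1, 1, 5], x=[3, 4, 2, 3])
# u = [1, 3, 1, -2]; w = [0, 1, 4, 5, 3]
```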
8.5. Consider the case where ρ = 1 − ε for 0 < ε ≪ 1. Let us expand
W(y − u) in Eq. (8.23) as

W(y − u) = W(y) − uW⁽¹⁾(y) + (u²/2)W⁽²⁾(y) + R(u, y)

where W⁽ⁿ⁾(y) is the nth derivative of W(y) and R(u, y) is such that
∫ R(u, y) dC(u) is negligible due to the slow variation of W(y)
when ρ = 1 − ε. Let ū_k denote the kth moment of ũ.
(a) Under these conditions convert Lindley's integral equation to a
second-order linear differential equation involving ū₁ and ū₂.
(b) With the boundary condition W(0) = 0, solve the equation
found in (a) and express the mean wait W in terms of the first
two moments of t̃ and x̃.
8.6. Consider the D/E_r/1 queueing system, with a constant interarrival
time (of t̄ sec) and a service-time pdf given as in Eq. (4.16).
(a) Find c(u).
(b) Show that Lindley's integral equation yields W(y − t̄) = 0
for y < t̄ and

W(y − t̄) = ∫ W(y − w) dB(w)    y ≥ t̄

(c) Assume the following solution for W(y):

W(y) = 1 + Σ_{i=1}^{r} a_i e^{α_i y}    y ≥ 0

where a_i and α_i may both be complex, but where Re(α_i) < 0
for i = 1, 2, ..., r. Using this assumed solution, show that the
following equations must hold:

e^{−α_i t̄} = (rμ/(rμ + α_i))^r    i = 1, 2, ..., r

Σ_{i=0}^{r} a_i/(rμ + α_i)^{j+1} = 0    j = 0, 1, ..., r − 1

where a_0 = 1 and α_0 = 0. Note that {α_i} may be found from the
first set of (transcendental) equations, and then the second set
gives {a_i}. It can be shown that the α_i are distinct. See [SYSK 62].
8.7. Consider the following queueing systems in which no queue is permitted.
Customers who arrive to find the system busy must leave
without service.
(a) M/M/1: Solve for p_k = P[k in system].
(b) M/H₂/1: As in Figure 4.10 with α₁ = α, α₂ = 1 − α, μ₁ = 2μα,
and μ₂ = 2μ(1 − α).
(i) Find the mean service time x̄.
(ii) Solve for p₀ (an empty system), p₁ₐ (a customer in the 2μα
box), and p₁ᵦ (a customer in the 2μ(1 − α) box).
(c) H₂/M/1: Where A(t) is hyperexponential as in (b), but with
parameters λ₁ = 2λα and λ₂ = 2λ(1 − α) instead. Draw the
state-transition diagram (with labels on branches) for the
following four states: E_{ij} is the state with an "arriving" customer in
arrival stage i and j customers in service, i = 1, 2 and j = 0, 1.
(d) M/E_r/1: Solve for p_j = P[j stages of service left to go].
(e) M/D/1: With all service times equal to x̄,
(i) Find the probability of an empty system.
(ii) Find the fraction of lost customers.
(f) E₂/M/1: Define the four states as E_{ij} where i is the number of
"arrival" stages left to go and j is the number of customers in
service. Draw the labeled state-transition diagram.
8.8. Consider a single-server queueing system in which the interarrival
time is chosen with probability α from an exponential distribution of
mean 1/λ and with probability 1 − α from an exponential distribution
of mean 1/μ. Service is exponential with mean 1/μ.
(a) Find A*(s) and B*(s).
(b) Find the expression for Ψ₊(s)/Ψ₋(s) and show the pole-zero
plot in the s-plane.
(c) Find Ψ₊(s) and Ψ₋(s).
(d) Find Φ₊(s) and W(y).
8.9. Consider a G/G/1 system in which

A*(s) = 2/[(s + 1)(s + 2)]    B*(s) = 1/(s + 1)

(a) Find the expression for Ψ₊(s)/Ψ₋(s) and show the pole-zero plot
in the s-plane.
(b) Use spectrum factorization to find Ψ₊(s) and Ψ₋(s).
(c) Find Φ₊(s).
(d) Find W(y).
(e) Find the average waiting time W.
(f) We solved for W(y) by the method of spectrum factorization.
Can you describe another way to find W(y)?
8.10. Consider the system M/G/1. Using the spectral solution method for
Lindley's integral equation, find
(a) Ψ₊(s). [HINT: Interpret [1 − B*(s)]/sx̄.]
(b) Ψ₋(s).
(c) sΦ₊(s).
8.11. Consider the queue E_q/E_r/1.
(a) Show that

Ψ₊(s)/Ψ₋(s) = F(s)/[1 − F(s)]

where F(s) = 1 − (1 − s/qλ)^q (1 + s/rμ)^r.
(b) For ρ < 1, show that F(s) has one zero at the origin, zeroes
s₁, s₂, ..., s_r in Re(s) < 0, and zeroes s_{r+1}, s_{r+2}, ..., s_{r+q−1} in
Re(s) > 0.
(c) Express Ψ₊(s) and Ψ₋(s) in terms of the s_i.
(d) Express W*(s) in terms of s_i (i = 1, 2, ..., r + q − 1).
8.12. Show that Eq. (8.71) is equivalent to Eq. (6.30).
8.13. Consider a D/D/1 queue with ρ < 1. Assume w₀ = 4t̄(1 − ρ).
(a) Calculate w_n(y) using the procedure defined by Eq. (8.78) for
n = 0, 1, 2, ....
(b) Show that the known solution for

w(y) = lim_{n→∞} w_n(y)

satisfies Eq. (8.79).
8.14. Consider an M/M/1 queue with ρ < 1. Assume w₀ = 0.
(a) Calculate w₁(y) using the procedure defined by Eq. (8.78).
(b) Repeat for w₂(y).
(c) Show that our known solution for

w(y) = lim_{n→∞} w_n(y)

satisfies Eq. (8.79).
(d) Compare w₂(y) with w(y).
8.15. By first cubing Eq. (8.91) and then forming expectations, express σ_w²
(the variance of the waiting time) in terms of the first three moments
of t̃, x̃, and Ĩ.
8.16. Show that P[w̃ = 0] = 1 − σ from Eq. (8.117) by finding the constant
term in a power-series expansion of W*(s).
8.17. Consider a G/G/1 system.
(a) Express Î*(s) in terms of the transform of the pdf of idle time
in the given system.
(b) Using (a), find Î*(s) when the original system is the ordinary
M/M/1.
(c) Using (a), show that the transform of the idle-time pdf in a
G/M/1 queue is given by

I*(s) = [1 − A*(s)]/(st̄)

thereby reaffirming Eq. (8.121).
(d) Since either the original or the dual queue must be unstable
(except for D/D/1), discuss the existence of the transform of the
idle-time pdf for the unstable queue.
Epilogue
We have invested eight chapters (and two appendices!) in studying the
theory of queueing systems. Occasionally we have been overjoyed at the
beaut y and generality of the results, but more often we have been overcome
(with frustration) at the lack of real pr ogress in the theory. (No, we never
promi sed you a rose garden.) However, we did seduce you int o believing that
thi s study would provide worthwhile methods for pract ical applicati on to
many of today' s pressing congestion problems, We confirm that belief in
Volume II.
In the next volume, after a brief review of this one, we begin by taking a
more relaxed view of G/G/1. In Chapter 2, we enter a new world, leaving
behind the rigor (and pain) of exact solutions to exact problems. Here we are
willing to accept the raw facts of life, which state that our models are not
perfect pictures of the systems we wish to analyze, so we should be willing to
accept approximations and bounds in our problem solution. Upper and lower
bounds are found for the average delay in G/G/1, and we find that these are
related to a very useful heavy-traffic approximation for such queues. This
approximation, in fact, predicts that the long waiting times are exponentially
distributed. A new class of models is then introduced whereby the discrete
arrival and departure processes of queueing systems are replaced first by a
fluid approximation (in which these stochastic processes are replaced by their
mean values as a function of time), and then secondly by a diffusion approxi-
mation (in which we permit a variation about these means). We happily find
that these approximations give quite reasonable results for rather general
queueing systems. In fact they even permit us to study the transient behavior
not only of stable queues but also of saturated queues, and this is the material
in the final section of Chapter 2, whereby we give Newell's treatment of the
rush-hour approximation, an effective method indeed.
Chapter 3 points the way to our applications in time-shared computer
systems by presenting some of the principal results for priority queueing
systems. We study general methods and apply them to a number of important
queueing disciplines. The conservation law for priority systems is established,
preventing the useless search for nonrealizable disciplines.
In the remainder, we choose applications principally from the computer
field, since these applications are perhaps the most recent and successful for
the theory of queues. In fact, the queueing analysis of allocation of resources
and job flow through computer systems is perhaps the only tool available
to computer scientists in understanding the behavior of the complex inter-
action of users, programs, processes, and resources. In Chapter 4 we empha-
size multiaccess computer systems in isolation, handling demands of a large
collection of competing users. We look for throughput and response time as
well as utilization of resources. The major portion of this chapter is devoted
to a particular class of algorithms known as processor-sharing algorithms,
since they are singularly suited to queueing analysis and capture the essence
of more difficult and more complex algorithms seen in real scheduling prob-
lems. Chapter 5 addresses itself to computers in networks, a field that is
perhaps the fastest growing in the young computer industry itself (most of the
references there are drawn from the last three years, a telltale indicator
indeed). The chapter is devoted to developing methods of analysis and design
for computer-communication networks and identifies many unsolved impor-
tant problems. A specific existing network, the ARPANET, is used through-
out as an example to guide the reader through the motivation and evaluation
of the various techniques developed.
Now it remains for you, the reader, to sharpen and apply your new set of
tools. The world awaits and you must serve!
APPENDIX I
Transform Theory Refresher:
z-Transform and Laplace Transform
In this appendix we develop some of the properties and expressions for
the z-transform and the Laplace transform as they apply to our studies in
queueing theory. We begin with the z-transform since it is easier to visualize its
operation. The forms and properties of both transforms are very similar,
and we compare them later under the discussion of Laplace transforms.
I.1. WHY TRANSFORMS?
So often as we progress through the study of interesting physical systems
we find that transforms appear in one form or another. These transforms
occur in many varieties (e.g., z-transform, Laplace transform, Fourier
transform, Mellin transform, Hankel transform, Abel transform) and with a
variety of names (e.g., transform, characteristic function, generating func-
tion). Why is it that they appear so often? The answer has two parts: first,
because they arise naturally in the formulation and the solution of systems
problems; and second, because when we observe or introduce them into our
solution method, they greatly simplify the calculations. Moreover, oftentimes
they are the only tools we have available for proceeding with the solution at
all.
Since transforms do appear naturally, we should inquire as to what gives
rise to their appearance. The answer lies in the consideration of linear time-
invariant systems. A system, in the sense that we use it here, is merely a
transformation, or mapping, or input-output relationship between two
functions. Let us represent a general system as a "black" box with an input
f and an output g, as shown in Figure I.1. Thus the system operates on the
function f to produce the function g. In what follows we will assume that
these functions depend upon an independent time parameter t; this arbitrary
choice results in no loss of generality but is convenient so that we may discuss
certain notions more explicitly. Thus we assume that f = f(t). In order to
represent the input-output relationship between the functions f(t) and g(t)
Figure I.1 A general system.
we use the notation
f(t) → g(t)   (I.1)
to denote the fact that g(t) is the output of our system when f(t) is applied as
input. A system is said to be linear if, when
f₁(t) → g₁(t)
and
f₂(t) → g₂(t)
then also the following is true:
a f₁(t) + b f₂(t) → a g₁(t) + b g₂(t)   (I.2)
where a and b are independent of the time variable t. Further, a system is said
to be time-invariant if, when Eq. (I.1) holds, then the following is also true:
f(t + T) → g(t + T)   (I.3)
for any T. If the above two properties both hold, then our system is said to be
a linear time-invariant system, and it is these with which we concern ourselves
for the moment.
Whenever one studies such systems, one finds that complex exponential
functions of time appear throughout the solution. Further, as we shall see,
the transforms of interest merely represent ways of decomposing functions of
time into sums (or integrals) of complex exponentials. That is, complex
exponentials form the building blocks of our transforms, and so, we must
inquire further to discover why these complex exponentials pervade our
thinking with such systems. Let us now pose the fundamental question,
namely, which functions of time f(t) may pass through our linear time-
invariant systems with no change in form; that is, for which f(t) will g(t) =
Hf(t), where H is some scalar multiplier (with respect to t)? If we can discover
such functions f(t) we will then have found the "eigenfunctions," or "char-
acteristic functions," or "invariants" of our system. Denoting these eigen-
functions by f_e(t), it will be shown that they must be of the following form (to
within an arbitrary scalar multiplier):
f_e(t) = e^{st}   (I.4)
where s is, in general, a complex variable. That is, the complex exponentials
given in (I.4) form the set of eigenfunctions for all linear time-invariant
systems. This result is so fundamental that it is worthwhile devoting a few
lines to its derivation. Thus let us assume when we apply f_e(t) that the output
is of the form g_e(t), that is,
f_e(t) = e^{st} → g_e(t)
But, by the linearity property we have
e^{sT} f_e(t) = e^{s(t+T)} → e^{sT} g_e(t)
where T and therefore e^{sT} are both constants. Moreover, from the time-
invariance property we must have
f_e(t + T) = e^{s(t+T)} → g_e(t + T)
From these last two, it must be that
e^{sT} g_e(t) = g_e(t + T)
The unique solution to this equation is given by
g_e(t) = H e^{st}
which confirms our earlier hypothesis that the complex exponentials pass
through our linear time-invariant systems unchanged except for the scalar
multiplier H. H is independent of t but may certainly be a function of s and so
we choose to write it as H = H(s). Therefore, we have the final conclusion
that
e^{st} → H(s) e^{st}   (I.5)
and this fundamental result exposes the eigenfunctions of our systems.
In this way the complex exponentials are seen to be the basic functions in
the study of linear time-invariant systems. Moreover, if it is true that the
input to such a system is a complex exponential, then it is a trivial computa-
tion to evaluate the output of that system from Eq. (I.5) if we are given the
function H(s). Thus it is natural to ask that for any input f(t) we would hope
to be able to decompose f(t) into a sum (or integral) of complex exponentials,
each of which contributes to the overall output g(t) through a computation
of the form given in Eq. (I.5). Then the overall output may be found by
summing (integrating) these individual components of the output. (The
fact that the sum of the individual outputs is the same as the output of the
sum of the individual inputs, that is, the complex exponential decom-
position, is due to the linearity of our system.) The process of decomposing
our input into sums of exponentials, computing the response to each from
Eq. (I.5), and then reconstituting the output from sums of exponentials is
referred to as the transform method of analysis. This approach, as we can see,
arises very naturally from our foregoing statements. In this sense, transforms
arise in a perfectly natural way. Moreover, we know that such systems are
described by constant-coefficient linear differential equations, and so the
common use of transforms in the solution of such equations is not surprising.
We still have not given a precise definition of the transform itself; be
patient, for we are attempting to answer the question "why transforms?"
If we were to pursue the line of reasoning that follows from Eq. (I.5), we
would quickly encounter Laplace transforms. However, it is convenient at
this point to consider only functions of discrete time rather than functions of
continuous time, as we have so far been discussing. This change in direction
brings us to a consideration of z-transforms and we will return to Laplace
transforms later in this appendix. The reason for this switch is that it is easier
to visualize operations on a discrete-time axis as compared to a continuous-
time axis (and it also delays the introduction of the unit impulse function
temporarily).
Thus we consider functions f that are defined only at discrete instants in
time, which, let us say, are multiples of some basic time unit T. That is,
f(t) = f(t = nT), where n = . . . , −2, −1, 0, 1, 2, . . . . In order to
incorporate this discrete-time axis into our notation we will denote the
function f(t = nT) by f_n. We assume further that our systems are also dis-
crete in time. Thus we are led to consider linear time-invariant systems with
inputs f and outputs g (also functions of discrete time) for which we obtain
the following three equations corresponding to Eqs. (I.1)-(I.3):
f_n → g_n   (I.6)
a f_n^(1) + b f_n^(2) → a g_n^(1) + b g_n^(2)   (I.7)
f_{n+m} → g_{n+m}   (I.8)
where m is some integer constant. Here Eq. (I.7) is the expression of linearity
whereas Eq. (I.8) is the expression of time-invariance for our discrete systems.
We may ask the same fundamental question for these discrete systems and,
of course, the answer will be essentially the same, namely, that the eigen-
functions are given by
f_n = e^{snT}
Once again the complex exponentials are the eigenfunctions. At this point it
is convenient to introduce the definition
z ≜ e^{−sT}   (I.9)
and so the eigenfunctions take the form
f_n = z^{−n}
Since s is a complex variable, so, too, is z. Following through steps essentially
identical to those which led from Eq. (I.4) to Eq. (I.5) we find that
z^{−n} → H(z) z^{−n}   (I.10)
where H(z) is a function independent of n. This merely expresses the fact that
the set of functions {z^{−n}} for any value of z form the set of eigenfunctions for
discrete linear time-invariant systems. Moreover, the function (constant) H
either in Eq. (I.5) or (I.10) tells us precisely how much of a given complex
exponential we get out of our linear system when we insert a unit amount of
that exponential at the input. That is, H really describes the effect of the
system on these exponentials; for this reason it is usually referred to as the
system (or transfer) function.
Let us pursue this line of reasoning somewhat further. As we all know, a
common way to discover what is inside a system is to kick it, hard and
quickly. For our systems this corresponds to providing an input only at time
t = 0 and then observing the subsequent output. Thus let us define the
Kronecker delta function (also known as the unit function) as
u_n = { 1   n = 0
      { 0   n ≠ 0   (I.11)
When we apply u_n to our system it is common to refer to the output as the
unit response, and this is usually denoted by h_n. That is,
u_n → h_n
From Eq. (I.8) we may therefore also write
u_{n+m} → h_{n+m}
From the linearity property in Eq. (I.7) we have therefore
z^m u_{n+m} → z^m h_{n+m}
Certainly we may multiply both expressions by unity (z^{−n} z^n = 1), and so
z^{−n} z^n z^m u_{n+m} → z^{−n} z^n z^m h_{n+m}   (I.12)
Furthermore, if we consider a set of inputs f_n^(i) and if we define the output
for each of these by
f_n^(i) → g_n^(i)
then by the linearity of our system we must have
Σ_i f_n^(i) → Σ_i g_n^(i)   (I.13)
If we now apply this last observation to Eq. (I.12) we have
Σ_m z^{−n} z^{n+m} u_{n+m} → z^{−n} Σ_m z^{n+m} h_{n+m}
where the sum ranges over all integer values of m. From the definition in
Eq. (I.11) it is clear that the sum on the left-hand side of this equation has
only one nonzero term, namely, for m = −n, and this term is equal to unity;
moreover, let us make a change of variable (k = n + m) for the sum on the
right-hand side of this expression, giving
z^{−n} · 1 → z^{−n} Σ_k h_k z^k
This last equation is now in the same form as Eq. (I.10); it is obvious then
that we have the relationship
H(z) = Σ_k h_k z^k   (I.14)
This last equation relates the system function H(z) to the unit response h_k.
Recall that our linear time-invariant system was completely* specified by
knowledge of H(z), since we could then determine the output for any of our
eigenfunctions; similarly, knowledge of the unit response also completely*
determines the operation of our linear time-invariant system. Thus it is no
surprise that some explicit relationship must exist between the two, and,
of course, this is given in Eq. (I.14).
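The eigenfunction relations (I.10) and (I.14) are easy to check numerically. The sketch below is our own illustration, not the text's: it takes a sample unit response h_k = (1/2)^k (an assumption chosen so that H(z) converges for |z| < 2), convolves it with the input z^{−n}, and compares the output against H(z) z^{−n}:

```python
def h(k):
    # sample unit response of a discrete LTI system (illustrative assumption)
    return 0.5 ** k if k >= 0 else 0.0

def lti_output(n, z, n_terms=60):
    # g_n = sum_k h_k * f_{n-k}, with input f_n = z**(-n)
    return sum(h(k) * z ** (-(n - k)) for k in range(n_terms))

z = 1.5                                        # any z with |z| < 2 so H(z) converges
H = sum(h(k) * z ** k for k in range(60))      # H(z) = sum_k h_k z^k, Eq. (I.14)
g5 = lti_output(5, z)                          # system response at n = 5
# Eq. (I.10) predicts g_n = H(z) * z**(-n)
```

The input passes through unchanged in form, scaled by the single constant H(z), exactly as the derivation above promises.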
Finally, we are in a position to answer the question: why transforms?
The key lies in the expression (I.14), which is, itself, a transform (in this case
a z-transform), which converts† the time function h_k into a function of a
complex variable H(z). This transform arose naturally in our study of linear
time-invariant systems and was not introduced into the analysis in an arti-
ficial way. We shall see later that a similar relationship exists for continuous-
time systems, as well, and this gives rise to the Laplace transform. Recalling
that continuous-time systems may be described by constant-coefficient linear
differential equations and that the use of transforms greatly simplifies the
solution of these equations, we are not surprised that discrete-time systems
lead to sets of constant-coefficient linear difference equations whose solution
is simplified by the use of z-transforms. Lastly, we comment that the inputs f
* Completely specified in the sense that the only additional required information is the
initial state of the system (e.g., the initial conditions of all the energy storage elements).
Usually, the system is assumed to be in the zero-energy state, in which case we truly have a
complete specification.
† Transforms not only change the form in which the information describing a given func-
tion is presented, but they also present this information in a simplified form which is
convenient for mathematical manipulation.
and the outputs g are easily decomposed into weighted sums of complex
exponentials by means of transforms, and of course, once this is done, then
results such as (I.5) or (I.10) immediately give us the component-by-compo-
nent output of our system for each of these inputs; the total output is then
formed by summing the output components as in Eq. (I.13).
The fact that these transforms arise naturally in our system studies is really
only a partial answer to our basic question regarding their use in analysis.
The other and more pragmatic reason is that they greatly simplify the analysis
itself; most often, in fact, the analysis can only proceed with the use of trans-
forms, leading us to a partial solution from which properties of the system
behavior may be derived.
The remainder of this appendix is devoted to giving examples and proper-
ties of these two principal transforms which are so useful in queueing theory.
I.2. THE z-TRANSFORM [JURY 64, CADZ 73]
Let us consider a function of discrete time f_n, which takes on nonzero
values only for the nonnegative integers, that is, for n = 0, 1, 2, . . . (i.e.,
for convenience we assume that f_n = 0 for n < 0). We now wish to compress
this semi-infinite sequence into a single function in a way such that we can
expand the compressed form back into the original sequence when we so
desire. In order to do this, we must place a "tag" on each of the terms in
the sequence f_n. We choose to tag the term f_n by multiplying it by z^n; since
n is then unique for each term in the sequence, each tag is also unique.
z will be chosen as some complex variable whose permitted range of values
will be discussed shortly. Once we tag each term, we may then sum over all
tagged terms to form our compressed function, which represents the original
sequence. Thus we define the z-transform (also known as the generating
function or geometric transform) for f_n as follows:
F(z) ≜ Σ_{n=0}^∞ f_n z^n   (I.15)
F(z) is clearly only a function of our complex variable z since we have summed
over the index n; the notation we adopt for the z-transform is to use a capital
letter that corresponds to the lowercase letter describing the sequence, as in
Eq. (I.15). We recognize that Eq. (I.14) is, of course, in exactly this form. The
z-transform for a sequence will exist so long as the terms in that sequence
grow no faster than geometrically, that is, so long as there is some a > 0 such
that
|f_n| ≤ a^n   for all n
Furthermore, given a sequence f_n, its z-transform F(z) is unique.
If the sum over all terms in the sequence f_n is finite, then certainly the unit
disk |z| ≤ 1 represents a range of analyticity for F(z).* In such a case we have
F(1) = Σ_{n=0}^∞ f_n   (I.16)
We now consider some important examples of z-transforms. It is convenient
to denote the relationship between a sequence and its transform by means of a
double-barred, double-headed arrow†; thus Eq. (I.15) may be written as
f_n ⇔ F(z)   (I.17)
For our first example, let us consider the unit function as defined in Eq. (I.11).
For this function and from the definition given in Eq. (I.15) we see that
exactly one term in the infinite summation is nonzero, and so we immediately
have the transform pair
u_n ⇔ 1   (I.18)
For a related example, let us consider the unit function shifted to the right
by k units, that is,
u_{n−k} = { 1   n = k
          { 0   n ≠ k
From Eq. (I.15) again, exactly one term will be nonzero, giving
u_{n−k} ⇔ z^k
As a third example, let us consider the unit step function defined by
δ_n = 1   for n = 0, 1, 2, . . .
(recall that all functions are zero for n < 0). In this case we have a geometric
series, that is,
δ_n ⇔ Σ_{n=0}^∞ z^n = 1/(1 − z)   (I.19)
We note in this case that |z| < 1 in order for the z-transform to exist. An
extremely important sequence often encountered is the geometric series
f_n = A α^n   n = 0, 1, 2, . . .
* A function of a complex variable is said to be analytic at a point in the complex plane if
that function has a unique derivative at that point. The Cauchy-Riemann necessary and
sufficient condition for analyticity of such functions may be found in any text on functions
of a complex variable [AHLF 66].
† The double bar denotes the transform relationship whereas the double heads on the arrow
indicate that the journey may be made in either direction, f ⇒ F and F ⇒ f.
Its z-transform may be calculated as
F(z) = Σ_{n=0}^∞ A α^n z^n = A / (1 − αz)
And so
A α^n ⇔ A / (1 − αz)   (I.20)
where, of course, the region of analyticity for this function is |z| < 1/|α|;
note that α may be greater or less than unity.
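A truncated sum confirms pair (I.20) numerically; this check (ours, not the text's) also illustrates the remark that α > 1 is harmless so long as |αz| < 1:

```python
def z_transform(f, z, n_terms=200):
    # truncated version of Eq. (I.15): F(z) = sum_{n>=0} f(n) * z**n
    return sum(f(n) * z ** n for n in range(n_terms))

A, alpha, z = 3.0, 1.5, 0.4        # |alpha * z| = 0.6 < 1, so the series converges
F = z_transform(lambda n: A * alpha ** n, z)
closed_form = A / (1 - alpha * z)  # Eq. (I.20)
```

The 200-term truncation error is on the order of (αz)^200, far below floating-point precision here.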
Linear transformations such as the z-transform enjoy a number of impor-
tant properties. Many of these are listed in Table I.1. However, it is instructive
for us to derive the convolution property, which is most important in queueing
systems. Let us consider two functions of discrete time f_n and g_n, which may
take on nonzero values only for the nonnegative integers. Their respective
z-transforms are, of course, F(z) and G(z). Let ⊗ denote the convolution
operator, which is defined for f_n and g_n as follows:
f_n ⊗ g_n ≜ Σ_{k=0}^n f_{n−k} g_k
We are interested in deriving the z-transform of the convolution of f_n and g_n,
and this we do as follows:
f_n ⊗ g_n ⇔ Σ_{n=0}^∞ (f_n ⊗ g_n) z^n
          = Σ_{n=0}^∞ Σ_{k=0}^n f_{n−k} g_k z^{n−k} z^k
However, since
Σ_{n=0}^∞ Σ_{k=0}^n = Σ_{k=0}^∞ Σ_{n=k}^∞
we have
f_n ⊗ g_n ⇔ Σ_{k=0}^∞ g_k z^k Σ_{n=k}^∞ f_{n−k} z^{n−k}
          = G(z)F(z)
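The convolution property just derived can be verified numerically on truncated sequences. The sketch below (our own; the sequences are arbitrary choices, not from the text) compares the transform of f ⊗ g against the product F(z)G(z):

```python
def convolve(f, g, n):
    # (f ⊗ g)_n = sum_{k=0}^{n} f_{n-k} * g_k
    return sum(f[n - k] * g[k] for k in range(n + 1))

N = 40
f = [0.5 ** n for n in range(N)]         # f_n = (1/2)^n
g = [1.0 / (n + 1) for n in range(N)]    # an arbitrary second sequence

z = 0.3
F = sum(f[n] * z ** n for n in range(N))
G = sum(g[n] * z ** n for n in range(N))
FG = sum(convolve(f, g, n) * z ** n for n in range(N))   # transform of f ⊗ g
```

With |z| < 1 the truncation at N terms introduces error of order z^N, so FG and F·G agree to machine precision.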
Table I.1  Some Properties of the z-Transform

     SEQUENCE                                       z-TRANSFORM

 1.  f_n     n = 0, 1, 2, . . .                     F(z) = Σ_{n=0}^∞ f_n z^n
 2.  a f_n + b g_n                                  aF(z) + bG(z)
 3.  a^n f_n                                        F(az)
 4.  f_{n/k}     n = 0, k, 2k, . . .                F(z^k)
 5.  f_{n+1}                                        (1/z)[F(z) − f_0]
 6.  f_{n+k}     k > 0                              F(z)/z^k − Σ_{i=0}^{k−1} f_i z^{i−k}
 7.  f_{n−1}                                        zF(z)
 8.  f_{n−k}     k > 0                              z^k F(z)
 9.  n f_n                                          z (d/dz) F(z)
10.  n(n − 1)(n − 2) · · · (n − m + 1) f_n          z^m d^m F(z)/dz^m
11.  f_n ⊗ g_n                                      F(z)G(z)
12.  f_n − f_{n−1}                                  (1 − z)F(z)
13.  Σ_{k=0}^n f_k     n = 0, 1, 2, . . .           F(z)/(1 − z)
14.  ∂f_n/∂a     (a is a parameter of f_n)          ∂F(z)/∂a
15.  Series sum property                            F(1) = Σ_{n=0}^∞ f_n
16.  Alternating sum property                       F(−1) = Σ_{n=0}^∞ (−1)^n f_n
17.  Initial value theorem                          F(0) = f_0
18.  Intermediate value theorem                     (1/n!) d^n F(z)/dz^n |_{z=0} = f_n
19.  Final value theorem                            lim_{z→1} (1 − z)F(z) = f_∞
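Any entry in Table I.1 can be spot-checked in the same numerical fashion; here is a sketch (ours, with an arbitrarily chosen sample sequence) for Property 13, the cumulative-sum property:

```python
z, N = 0.4, 120
f = [0.7 ** n for n in range(N)]            # sample sequence f_n = (0.7)^n
s = [sum(f[: n + 1]) for n in range(N)]     # s_n = sum_{k=0}^{n} f_k

F = sum(f[n] * z ** n for n in range(N))    # truncated F(z)
S = sum(s[n] * z ** n for n in range(N))    # truncated transform of the partial sums
# Property 13 of Table I.1 predicts S = F / (1 - z)
```

This is really just the convolution property in disguise: summing f_k up to n is convolving f_n with the unit step, whose transform is 1/(1 − z).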
Table I.2  Some z-Transform Pairs

     SEQUENCE                                       z-TRANSFORM

 1.  f_n     n = 0, 1, 2, . . .                     F(z) = Σ_{n=0}^∞ f_n z^n
 2.  u_n (unit function)                            1
 3.  u_{n−k}                                        z^k
 4.  δ_n (unit step)     n = 0, 1, 2, . . .         1/(1 − z)
 5.  δ_{n−k}                                        z^k/(1 − z)
 6.  A α^n                                          A/(1 − αz)
 7.  n α^n                                          αz/(1 − αz)^2
 8.  n                                              z/(1 − z)^2
 9.  n^2 α^n                                        αz(1 + αz)/(1 − αz)^3
10.  n^2                                            z(1 + z)/(1 − z)^3
11.  (n + 1) α^n                                    1/(1 − αz)^2
12.  n + 1                                          1/(1 − z)^2
13.  [(n + m)(n + m − 1) · · · (n + 1)/m!] α^n      1/(1 − αz)^{m+1}

Then, we have that the z-transform of the convolution of two sequences is
equal to the product of the z-transform of each of the sequences. In Table I.1
we list a number of important properties of the z-transform, noting that in
Table I.2 we provide a list of important common transform pairs.
Some comments regarding these tables are in order. First, in the property
table we note that Property 2 is a statement of linearity, and Properties 3 and
4 are statements regarding scale change in the transform and time domain,
respectively. Properties 5-8 regard translation in time and are most useful.
In particular, note from Property 7 that the unit delay (delay by one unit of
time) results in multiplication of the transform by the factor z whereas
Property 5 states that a unit advance involves division by the factor z.
Properties 9 and 10 show multiplication of the sequence by terms of the form
n(n − 1) · · · (n − m + 1). Combinations of these may be used in order to find,
for example, the transform of n^2 f_n; this may be done by recognizing that
n^2 = n(n − 1) + n, and so the transform of n^2 f_n is merely z^2 d^2F(z)/dz^2 +
z dF(z)/dz. This shows the simple differentiation technique of obtaining
more complex transforms. Perhaps the most important, however, is Property
11 showing that the convolution of two time sequences has a transform that
is the product of the transform of each time sequence separately. Properties
12 and 13 refer to the difference and summation of various terms in the
sequence. Property 14 shows that if a is an independent parameter of f_n, differen-
tiating the sequence with respect to this parameter is equivalent to differentiat-
ing the transform. Property 15 is also important and shows that the transform
expression may be evaluated at z = 1 directly to give the sum of all terms in
the sequence. Property 16 merely shows how to calculate the alternating sum.
From the definition of the z-transform, the initial value theorem given in
Property 17 is obvious and shows how to calculate the initial term of the
sequence directly from the transform. Property 18, on the other hand, shows
how to calculate any term in the original sequence directly from its z-transform
by successive differentiation; this then corresponds to one method for
calculating the sequence given its transform. It can be seen from Property 18
that the sequence f_n forms the coefficients in the Taylor-series expansion
of F(z) about the point 0. Since this power-series expansion is unique,
then it is clear that the inversion process is also unique. Property 19 gives
a direct method for calculating the final value of a sequence from its
z-transform.
Table I.2 lists some useful transform pairs. This table can be extended
considerably by making use of the properties listed in Table I.1; in some cases
this has already been done. For example, Pair 5 is derived from Pair 4 by use
of the delay theorem given as entry 8 in Table I.1. One of the more useful
relationships is given in Pair 6 considered earlier.
Thus we see the effect of compressing a time sequence f_n into a single
function of the complex variable z. Recall that the use of the variable z
was to tag the terms in the sequence f_n so that they could be recovered from
the compressed function; that is, f_n was tagged with the factor z^n. We have
seen how to form the z-transform of the sequence [through Eq. (I.15)]. The
problem confronting us now is to find the sequence f_n given the z-transform
F(z). There are basically three methods for carrying out this inversion. The
first is the power-series method, which attempts to take the given function
F(z) and express it as a power series in z; once this is done the terms in the
sequence f_n may be picked off by inspection since the tagging is now explicitly
exposed. The power series may be obtained in one of two ways: the first way
we have already seen through our intermediate value theorem expressed
as Item 18 in Table I.1, that is,
f_n = (1/n!) d^n F(z)/dz^n |_{z=0}
(this method is useful if one is only interested in a few terms but is rather
tedious if many terms are required); the second way is useful if F(z) is
expressible as a rational function of z (that is, as the ratio of a polynomial
in z over a polynomial in z) and in this case one may divide the denominator
into the numerator to pick off the sequence of leading terms in the power
series directly. The power-series expansion method is usually difficult when
many terms are required.
The second and most useful method for inverting z-transforms [that is, to
calculate f_n from F(z)] is the inspection method. That is, one attempts to express
F(z) in a fashion such that it consists of terms that are recognizable as trans-
form pairs, for example, from Table I.2. The standard approach for placing
F(z) in this form is to carry out a partial-fraction expansion,* which we now
discuss. The partial-fraction expansion is merely an algebraic technique for
expressing rational functions of z as sums of simple terms, each of which is
easily inverted. In particular, we will attempt to express a rational F(z) as a
sum of terms, each of which looks either like a simple pole (see entry 6 in
Table I.2) or like a multiple pole (see entry 13). Since the sum of the transforms
equals the transform of the sum we may apply Property 2 from Table I.1 to
invert each of these now recognizable forms separately, thereby carrying out
the required inversion. To carry out the partial-fraction expansion we proceed
as follows. We assume that F(z) is in rational form, that is,
F(z) = N(z)/D(z)
where both the numerator N(z) and the denominator D(z) are each
* This procedure is related to the Laurent expansion of F(z) around each pole [GUIL 49].
polynomials in z.* Furthermore we will assume that D(z) is already in
factored form, that is,
D(z) = Π_{i=1}^k (1 − α_i z)^{m_i}   (I.21)
The product notation used in this last equation is defined as
Π_{i=1}^k x_i ≜ x_1 x_2 · · · x_k
Equation (I.21) implies that the ith root at z = 1/α_i occurs with multiplicity
m_i. [We note here that in most problems of interest, the difficult part of the
solution is to take an arbitrary polynomial such as D(z) and to find its roots
α_i so that it may be put in the factored form given in Eq. (I.21). At this point
we assume that that difficult task has been accomplished.] If F(z) is in this
form then it is possible to express it as follows [GUIL 49]:
F(z) = Σ_{i=1}^k Σ_{j=1}^{m_i} A_{ij} / (1 − α_i z)^{m_i+1−j}   (I.22)
This last form is exactly what we were looking for, since each term in this
sum may be found in our table of transform pairs; in particular it is Pair 13
(and in the simplest case it is Pair 6). Thus if we succeed in carrying out the
partial-fraction expansion, then by inspection we have our time sequence f_n.
It remains now to describe the method for calculating the coefficients A_{ij}.
The general expression for such a term is given by
A_{ij} = [1/(j − 1)!] (−1/α_i)^{j−1} d^{j−1}/dz^{j−1} [(1 − α_i z)^{m_i} F(z)] |_{z=1/α_i}   (I.23)
This rather formidable procedure is, in fact, rather straightforward as long as
the function F(z) is not terribly complex.
* We note here that a partial-fraction expansion may be carried out only if the degree of the
numerator polynomial is strictly less than the degree of the denominator polynomial; if
this is not the case, then it is necessary to divide the denominator into the numerator until
the remainder is of lower degree than the denominator. This remainder divided by the
original denominator may then be expanded in partial fractions by the method shown; the
terms generated from the division also may be inverted by inspection making use of
transform pair 3 in Table I.2. An alternative way of satisfying the degree condition is to
attempt to factor out enough powers of z from the numerator if possible.
It is worthwhile at this point to carry out an example in order to demonstrate
the method. Let us assume F(z) is given by
F(z) = 4z^2 (1 − 8z) / [(1 − 4z)(1 − 2z)^2]   (I.24)
In this example the numerator and denominator both have the same degree
and so it is necessary to bring the expression into proper form (numerator
degree less than denominator degree). In this case our task is simple since
we may factor out two powers of z (we are required to factor out only one
power of z in order to bring the numerator degree below that of the denomina-
tor, but obviously in this case we may as well factor out both and simplify
the calculations). Thus we have
F(z) = z^2 [ 4(1 − 8z) / ((1 − 4z)(1 − 2z)^2) ]
Let us define the term in square brackets as G(z). We note in this example
that the denominator has three poles: one at z = 1/4; and two (that is, a
double pole) at z = 1/2. Thus in terms of the variables defined in Eq. (I.21)
we have k = 2, α_1 = 4, m_1 = 1, α_2 = 2, m_2 = 2. From Eq. (I.22) we are
therefore seeking the following expansion:
G(z) ≜ 4(1 − 8z) / [(1 − 4z)(1 − 2z)^2]
     = A_{11}/(1 − 4z) + A_{21}/(1 − 2z)^2 + A_{22}/(1 − 2z)
Terms such as A_{11} (that is, coefficients of simple poles) are easily obtained
from Eq. (I.23) by multiplying the original function by the factor correspond-
ing to the pole and then evaluating the result at the pole itself (that is, when
z takes on a value that drives the factor to 0). Thus in our example we have
A_{11} = (1 − 4z)G(z) |_{z=1/4} = 4[1 − (8/4)] / [1 − (2/4)]^2 = −16
A_{21} may be evaluated in a similar way from Eq. (I.23) as follows:
A_{21} = (1 − 2z)^2 G(z) |_{z=1/2} = 4[1 − (8/2)] / [1 − (4/2)] = 12
Finall y, in order to evaluate A
22
we must apply the differenti ati on formula
given Eq. (1.23) once , that is,
I d I
A
22
=    [(I  2Z)2G(Z)]
2 dz % 1/ 2
I d 4(1  8z) I
=  2: dz ( I  4z) % 1/2
= _ !.(I  4z)( <32)  4 { ~  8z)(4) I
2 (I  4z)" % ~ 1 / 2
=8
Thus we conclude that
16 12 8
G(z )= + +
I  4z (I  2Z)2 I  2z
This is easily shown to be equal to the original fact ored form of G(z) by
placing these terms over a common denominator. Our next step is to invert
G(z) by inspecti on. Thi s we do by observing that the first and third terms are
of the form given by transform pair 6 in Table 1.2 and that the second term
is given by transform pair 13. This, coupled with the linearity property 2
in Table 1.1 gives immediately that
{
0
G(z) <=> gn =
16(4)' + 12(n + 1)(2)' + 8(2)n
n < 0
(1.25)
n = 0, I , 2, .. .
Of course, we must now account for the factor Z2 to give the expression for
f n. As menti oned above we do thi s by taking ad vantage of Property 8 in
Table 1.1 and so we have (for n = 2, 3, . . .)
f . = 16(4)n 2 + 12(n  l) (2)n 2 + 8(2) n 2
and so
{
0
f,
n  (3/1.1 )2'4'
n <2
n = 2, 3,4, ...
Thi s completes our example.
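The result of this example is easy to check mechanically. The sketch below (our own, not part of the text; it assumes the sympy library is available) expands F(z) of Eq. (I.24) in a power series about z = 0 and compares each Taylor coefficient with the closed form f_n = (3n - 1)2^n - 4^n obtained above:

```python
# Verify the z-transform inversion of Eq. (I.24) by series expansion.
# The helper name f_closed is ours, not the text's.
import sympy as sp

z = sp.symbols('z')
F = 4*z**2*(1 - 8*z) / ((1 - 4*z)*(1 - 2*z)**2)

def f_closed(n):
    # closed form derived in the text: 0 for n < 2, (3n - 1)2^n - 4^n otherwise
    return 0 if n < 2 else (3*n - 1)*2**n - 4**n

# Taylor coefficients of F(z) about z = 0 are exactly the sequence f_n.
poly = sp.Poly(sp.series(F, z, 0, 10).removeO(), z)
coeffs = poly.all_coeffs()[::-1]      # ascending order: f_0, f_1, ..., f_9
assert all(coeffs[n] == f_closed(n) for n in range(10))
```

Running this confirms that the first ten coefficients agree with the closed form (the first nonzero one is f_2 = 4).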
The third method for carrying out the inversion process is to use the inversion formula. This involves evaluating the following integral:

    f_n = -(1/2πj) ∮_C F(z) z^{-n-1} dz    (I.26)

where j = √-1 and the integral is evaluated in the complex z-plane around a closed circular contour C, which is large enough* to surround all poles of F(z). This method of evaluation works properly when factors of the form z^k are removed from the expression; the reduced expression is then evaluated, and the final solution is obtained by taking advantage of Property 9 in Table I.1, as we shall see below. This contour integration is most easily performed by making use of the Cauchy residue theorem [GUIL 49]. This theorem may be stated as follows:
Cauchy Residue Theorem  The integral of g(z) over a closed contour C containing within it only isolated singular points of g(z) is equal to 2πj times the sum of the residues at these points, whenever g(z) is analytic† on and within the closed contour C.

An isolated singular point of an analytic function is a singular point whose neighborhood contains no other singular points; a simple pole (i.e., a pole of order one; see below) is the classical example. If z = a is an isolated singular point of g(z) and if y(z) = (z - a)^m g(z) is analytic at z = a and y(a) ≠ 0, then g(z) is said to have a pole of order m at z = a, with the residue r_a given by

    r_a = (1/(m - 1)!) (d^{m-1}/dz^{m-1}) [(z - a)^m g(z)] |_{z=a}    (I.27)
We note that the residue given in this last equation is almost the same as the A_ij given in Eq. (I.23), the main difference being the form in which we write the pole. Thus we have now defined all that we need to apply the Cauchy residue theorem in order to evaluate the integral in Eq. (I.26) and thereby to recover the time function f_n from our z-transform. By way of illustration we carry out the calculation for our previous example, given in Eq. (I.24). Making use of Eq. (I.26) we have

    g_n = -(1/2πj) ∮_C 4z^{-1-n}(1 - 8z) / [(1 - 4z)(1 - 2z)^2] dz
* Since Jordan's lemma (see p. 353) requires that F(z) → 0 as z → ∞ if we are to let the contour grow, we require that any function F(z) that we consider have this property; thus, for rational functions of z, if the numerator degree is not less than the denominator degree, then we must divide the numerator by the denominator until the remainder is of lower degree than the denominator, as we have seen earlier. The terms generated by this division are easily transformed by inspection, as discussed above, and it is only the remaining function which we now consider in this inversion method for the z-transform.
† A function F(z) of a complex variable z is said to be analytic in a region of the complex plane if it is single-valued and differentiable at every point in that region.
where C is a circle large enough to enclose the poles of F(z) at z = 1/4 and z = 1/2. Using the residue theorem and Eq. (I.27) we find that the residue at z = 1/4 is given by

    r_{1/4} = (z - 1/4) 4z^{-1-n}(1 - 8z) / [(1 - 4z)(1 - 2z)^2] |_{z=1/4}
            = -(1/4)^{-1-n} [1 - (8/4)] / [1 - (2/4)]^2
            = 16(4)^n
whereas the residue at z = 1/2 is calculated as

    r_{1/2} = (d/dz) [ (z - 1/2)^2 4z^{-1-n}(1 - 8z) / ((1 - 4z)(1 - 2z)^2) ] |_{z=1/2}
            = (d/dz) [ z^{-1-n}(1 - 8z) / (1 - 4z) ] |_{z=1/2}
            = { (1 - 4z)[(-1 - n)z^{-2-n}(1 - 8z) + z^{-1-n}(-8)] - z^{-1-n}(1 - 8z)(-4) } / (1 - 4z)^2 |_{z=1/2}
            = (-1)[(-1 - n)(1/2)^{-2-n}(-3) + (1/2)^{-1-n}(-8)] - (1/2)^{-1-n}(-3)(-4)
            = -12(n + 1)2^n + 16(2)^n - 24(2)^n

Now we must take 2πj times the sum of the residues and then multiply by the factor preceding the integral in Eq. (I.26) (thus we must take -1 times the sum of the residues) to yield

    g_n = -16(4)^n + 12(n + 1)2^n + 8(2)^n    n = 0, 1, 2, ...

But this last is exactly equal to the form for g_n in Eq. (I.25) found by the method of partial-fraction expansions. From here the solution proceeds as in that method, thus confirming the consistency of these two approaches.

Thus we have reviewed some of the techniques for applying and inverting the z-transform in the handling of discrete-time functions. The application of these methods in the solution of difference equations is carefully described in Sect. I.4 below.
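The residue calculation above can be confirmed the same way. The following sketch (ours, assuming the sympy library, whose residue routine handles both the simple pole and the double pole) checks, for the first few n, that -1 times the sum of the residues reproduces g_n of Eq. (I.25):

```python
# Residues of G(z) z^{-n-1} at z = 1/4 and z = 1/2, summed and negated,
# should equal g_n = -16(4)^n + 12(n+1)2^n + 8(2)^n.
import sympy as sp

z = sp.symbols('z')

def g_by_residues(n):
    integrand = 4*(1 - 8*z) / ((1 - 4*z)*(1 - 2*z)**2) * z**(-n - 1)
    total = (sp.residue(integrand, z, sp.Rational(1, 4))
             + sp.residue(integrand, z, sp.Rational(1, 2)))
    return -total

for n in range(6):
    assert g_by_residues(n) == -16*4**n + 12*(n + 1)*2**n + 8*2**n
```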
I.3. THE LAPLACE TRANSFORM [WIDD 46]

The Laplace transform, defined below, enjoys many of the same properties as the z-transform. As a result, the following discussion very closely parallels that given in the previous section.

We now consider functions of continuous time f(t), which take on nonzero values only for nonnegative values of the continuous parameter t. [Again for convenience we are assuming that f(t) = 0 for t < 0. For the more general case, most of these techniques apply as discussed in the paragraph containing Eq. (I.38) below.] As with discrete-time functions, we wish to take our continuous-time function and transform it from a function of t to a function of a new complex variable (say, s). At the same time we would like to be able to "untransform" back into the t domain, and in order to do this it is clear we must somehow "tag" f(t) at each value of t. For reasons related to those described in Section I.1, the tag we choose to use is e^{-st}. The complex variable s may be written in terms of its real and imaginary parts as s = σ + jω where, again, j = √-1. Having multiplied by this tag, we then integrate over all nonzero values in order to obtain our transform function, defined as follows:

    F*(s) ≜ ∫_{-∞}^{∞} f(t) e^{-st} dt    (I.28)
Again, we have adopted the notation for general Laplace transforms in which we use a capital letter for the transform of a function of time, which is described in terms of a lower-case letter. This is usually referred to as the "two-sided" or "bilateral" Laplace transform since it operates on both the negative and positive time axes. We have assumed that f(t) = 0 for t < 0, and in this case the lower limit of integration may be replaced by 0⁻, which is defined as the limit of 0 - ε as ε (> 0) goes to zero; further, we often denote this lower limit merely by 0 with the understanding that it is meant as 0⁻ (usually this will cause no confusion). There also exists what is known as the "one-sided" Laplace transform, in which the lower limit is replaced by 0⁺, which is defined as the limit of 0 + ε as ε (> 0) goes to zero; this one-sided transform has application in the solution of transient problems in linear systems. It is important that the reader distinguish between these two transforms with zero as their lower limit, since in the former case (the bilateral transform) any accumulation at the origin (as, for example, the unit impulse defined below) will be included in the transform, whereas in the latter case (the one-sided transform) it will be omitted.

For our assumed case in which f(t) = 0 for t < 0 we may write our transform as

    F*(s) = ∫_{0⁻}^{∞} f(t) e^{-st} dt    (I.29)

where, we repeat, the lower limit is to be interpreted as 0⁻. This Laplace transform will exist so long as f(t) grows no faster than an exponential, that is, so long as there is some real number σ_a for which

    ∫_{0⁻}^{∞} |f(t)| e^{-σ_a t} dt < ∞

The smallest possible value for σ_a is referred to as the abscissa of absolute convergence. Again we state that the Laplace transform F*(s) for a given function f(t) is unique.
If the integral of f(t) is finite, then certainly the right half-plane Re(s) ≥ 0 represents a region of analyticity for F*(s); the notation Re( ) reads as "the real part of the complex function within the parentheses." In such a case we have, corresponding to Eq. (I.16),

    F*(0) = ∫_{0⁻}^{∞} f(t) dt    (I.30)

From our earlier definition in Eq. (I.9) we see that properties for the z-transform when z = 1 will correspond to properties for the Laplace transform when s = 0 as, for example, in Eqs. (I.16) and (I.30).

Let us now consider some important examples of Laplace transforms. We use notation here identical to that used in Eq. (I.17) for z-transforms, namely, we use a double-barred, double-headed arrow to denote the relationship between a function and its transform; thus, Eq. (I.29) may be written as

    f(t) ⇔ F*(s)    (I.31)

The use of the double arrow is a statement of the uniqueness of the transform, as earlier.
As in the case of z-transforms, the most useful method for finding the inverse [that is, calculating f(t) from F*(s)] is the inspection method, namely, looking up the inverse in a table. Let us, therefore, concentrate on the calculation of some Laplace transform pairs. By far the most important Laplace transform pair to consider is that for the one-sided exponential function, namely,

    f(t) = { Ae^{-at}    t ≥ 0
           { 0           t < 0

Let us carry out the computation of this transform, as follows:

    f(t) ⇔ F*(s) = ∫_{0⁻}^{∞} Ae^{-at} e^{-st} dt
                 = A ∫_{0⁻}^{∞} e^{-(s+a)t} dt
                 = A/(s + a)

And so we have the fundamental relationship

    Ae^{-at} δ(t) ⇔ A/(s + a)    (I.32)

where we have defined the unit step function in continuous time as

    δ(t) = { 1    t ≥ 0
           { 0    t < 0    (I.33)

In fact, we observe that the unit step function is a special case of our one-sided exponential function when A = 1, a = 0, and so we have immediately the additional pair

    δ(t) ⇔ 1/s    (I.34)

We note that the transform in Eq. (I.32) has an abscissa of convergence σ_a = -a.
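The fundamental pair (I.32) can be checked symbolically. The sketch below (ours, using the sympy library's one-sided laplace_transform, which suffices here since the function has no impulse at the origin) recovers A/(s + a):

```python
# Check that A e^{-at}, t >= 0, transforms to A/(s + a), as in Eq. (I.32).
import sympy as sp

t, s, A, a = sp.symbols('t s A a', positive=True)

F = sp.laplace_transform(A*sp.exp(-a*t), t, s, noconds=True)
assert sp.simplify(F - A/(s + a)) == 0

# The unit step (A = 1, a = 0) gives the pair of Eq. (I.34):
assert sp.laplace_transform(1, t, s, noconds=True) == 1/s
```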
Thus we have calculated analogous z-transform and Laplace-transform pairs: the geometric series given in Eq. (I.20) with the exponential function given in Eq. (I.32), and also the unit step function in Eqs. (I.19) and (I.34), respectively. It remains to find the continuous analog of the unit function defined in Eq. (I.11) and whose z-transform is given in Eq. (I.18). This brings us face to face with the unit impulse function. The unit impulse function plays an important part in transform theory and linear system theory, as well as in probability and queueing theory. It therefore behooves us to learn to work with this function. Let us adopt the following notation:

    u_0(t) ≜ unit impulse function occurring at t = 0

u_0(t) corresponds to highly concentrated unit-area pulses that are of such short duration that they cannot be distinguished by available measurement instruments from other, perhaps briefer, pulses. Therefore, as one might expect, the exact shape of the pulse is unimportant; rather, only its time of occurrence and its area matter. This function has been studied and utilized by scientists for many years [GUIL 49], among them Dirac, and so the unit impulse function is often referred to as the Dirac delta function. For a long time pure mathematicians refrained from using u_0(t) since it is a highly improper function, but years ago Schwartz's theory of distributions [SCHW 59] put the concept of a unit impulse function on firm mathematical ground. Part of the difficulty lies with the fact that the unit impulse function is not a function at all, but merely provides a notational way of handling discontinuities and their derivatives. In this regard we will introduce the unit impulse as the limit of a sequence, without appealing to the more sophisticated generalized functions that place much of what we do in a more rigorous framework.
As we mentioned earlier, the exact shape of the pulse is unimportant. Let us therefore choose the following representative pulse shape for our discussion of impulses:

    f_α(t) = { α    |t| ≤ 1/2α
             { 0    |t| > 1/2α

This rectangular waveform has a height and width dependent upon the parameter α, as shown in Figure I.2. Note that this function has a constant area of unity (hence the name unit impulse function). As α increases, we note that the pulse gets taller and narrower. The limit of this sequence as α → ∞ (or the limit of any one of an infinite number of other sequences with similar properties, i.e., increasing height, decreasing width, unit area) is what we mean by the unit impulse "function." Thus we are led to the following description of the unit impulse function:

    u_0(t) = { ∞    t = 0
             { 0    t ≠ 0

    ∫_{-∞}^{∞} u_0(t) dt = 1
This function is represented graphically by a vertical arrow located at the instant of the impulse and with a number adjacent to the head of the arrow
Figure I.2 A sequence of functions whose limit is the unit impulse function u_0(t).
Figure I.3 Graphical representation of Au_0(t - a).
indicating the area of the impulse; that is, A times a unit impulse function located at the point t = a is denoted by Au_0(t - a) and is depicted as in Figure I.3.

Let us now consider the integral of the unit impulse function. It is clear that if we integrate from -∞ to a point t where t < 0, then the total integral must be 0, whereas if t > 0 then we will have successfully integrated past the unit impulse and thereby will have accumulated a total area of unity. Thus we conclude

    ∫_{-∞}^{t} u_0(x) dx = { 1    t ≥ 0
                           { 0    t < 0

But we note immediately that the right-hand side is the same as the definition of the unit step function given in Eq. (I.33). Therefore, we conclude that the unit step function is the integral of the unit impulse function, and so the "derivative" of the unit step function must therefore be a unit impulse function. However, we recognize that the derivative of this discontinuous function (the step function) is not properly defined; once again we appeal to the theory of distributions to place this operation on a firm mathematical foundation. We will therefore assume this is a proper operation and proceed to use the unit impulse function as if it were an ordinary function.

One of the very important properties of the unit impulse function is its sifting property; that is, for an arbitrary differentiable function g(t) we have

    ∫_{-∞}^{∞} u_0(t - x) g(x) dx = g(t)

This last equation merely says that the integral of the product of our function g(x) with an impulse located at x = t "sifts" the function g(x) to produce its value at t, g(t). We note that it is possible also to define the derivative of the unit impulse, which we denote by u_1(t) = du_0(t)/dt; this is known as the unit doublet and has the property that it is everywhere 0 except in the vicinity of the origin, where it runs off to ∞ just to the left of the origin and off to -∞
just to the right of the origin and, in addition, has a total area equal to zero. Such functions correspond to electrostatic dipoles, for example, as used in physics. In fact, an impulse function may be likened to the force placed on a piece of paper when it is laid over the edge of a knife and pressed down, whereas a unit doublet is similar to the force the paper experiences when cut with scissors. Higher-order derivatives are possible, and in general we may have u_n(t) = du_{n-1}(t)/dt. In fact, as we have seen, we may also go back down the sequence by integrating these functions as, for example, by generating the unit step function as the integral of the unit impulse function; the obvious notation for the unit step function, therefore, would be u_{-1}(t), and so we may write u_0(t) = du_{-1}(t)/dt. [Note, from Eq. (I.33), that we have also reserved the notation δ(t) to represent the unit step function.] Thus we have defined an infinite sequence of specialized functions beginning with the unit impulse and proceeding to higher-order derivatives such as the doublet, and so on, as well as integrating the unit impulse and thereby generating the unit step function; the ramp, namely,

    u_{-2}(t) = ∫_{-∞}^{t} u_{-1}(x) dx = { t    t ≥ 0
                                          { 0    t < 0

the parabola, namely,

    u_{-3}(t) = ∫_{-∞}^{t} u_{-2}(x) dx = { t^2/2    t ≥ 0
                                          { 0        t < 0

and in general

    u_{-n}(t) = { t^{n-1}/(n - 1)!    t ≥ 0
                { 0                   t < 0    (I.35)

This entire family is called the family of singularity functions, and the most important members are the unit step function and the unit impulse function.
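The limiting behavior of such pulse sequences can be watched numerically. In the sketch below (our own construction, plain Python), the rectangular pulse f_α defined earlier is convolved with a smooth function g by a midpoint rule; as α grows, the result approaches g(t), which is precisely the sifting property discussed above:

```python
import math

def sift(g, t, alpha, steps=10_000):
    """Midpoint-rule approximation of the integral of f_alpha(t - x) g(x) dx,
    where f_alpha = alpha on |t| <= 1/(2 alpha) and 0 elsewhere."""
    lo, hi = t - 1/(2*alpha), t + 1/(2*alpha)
    h = (hi - lo) / steps
    return sum(alpha * g(lo + (k + 0.5)*h) for k in range(steps)) * h

# The error at t = 0.7 shrinks as the pulse gets taller and narrower.
errs = [abs(sift(math.cos, 0.7, a) - math.cos(0.7)) for a in (1, 10, 100)]
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 1e-4
```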
Let us now return to our main discussion and consider the Laplace transform of u_0(t). We proceed directly from Eq. (I.28) to obtain

    u_0(t) ⇔ ∫_{0⁻}^{∞} u_0(t) e^{-st} dt = 1

(Note that the lower limit is interpreted as 0⁻.) Thus we see that the unit impulse has a Laplace transform equal to the constant unity.
Let us now consider some of the important properties of the transformation. As with the z-transform, the convolution property is the most important, and we proceed to derive it here in the continuous-time case. Thus, consider two functions of continuous time f(t) and g(t), which take on nonzero values only for t ≥ 0; we denote their Laplace transforms by F*(s) and G*(s), respectively. Defining ⊛ once again as the convolution operator, that is,

    f(t) ⊛ g(t) ≜ ∫_{-∞}^{∞} f(t - x) g(x) dx    (I.36)

which in our case reduces to

    f(t) ⊛ g(t) = ∫_{0⁻}^{t} f(t - x) g(x) dx

we may then ask for the Laplace transform of this convolution. We obtain this formally by plugging into Eq. (I.28) as follows:

    f(t) ⊛ g(t) ⇔ ∫_{0⁻}^{∞} [f(t) ⊛ g(t)] e^{-st} dt
                = ∫_{0⁻}^{∞} ∫_{0⁻}^{∞} f(t - x) g(x) dx e^{-st} dt
                = ∫_{0⁻}^{∞} [ ∫_{0⁻}^{∞} f(t - x) e^{-s(t-x)} dt ] g(x) e^{-sx} dx
                = ∫_{0⁻}^{∞} g(x) e^{-sx} dx ∫_{0⁻}^{∞} f(v) e^{-sv} dv

And so we have

    f(t) ⊛ g(t) ⇔ F*(s)G*(s)
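The property just derived can be exercised on a concrete pair of one-sided functions. The sketch below (ours, assuming the sympy library) convolves f(t) = e^{-t} and g(t) = e^{-2t} explicitly and checks that the transform of the convolution equals the product of the individual transforms:

```python
import sympy as sp

t, x, s = sp.symbols('t x s', positive=True)

f = sp.exp(-t)
g = sp.exp(-2*t)

# Explicit convolution over 0 <= x <= t (both functions vanish for t < 0).
conv = sp.integrate(f.subs(t, t - x) * g.subs(t, x), (x, 0, t))

lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(f, t, s, noconds=True)
       * sp.laplace_transform(g, t, s, noconds=True))
assert sp.simplify(lhs - rhs) == 0
```

Here the convolution works out to e^{-t} - e^{-2t}, whose transform 1/[(s + 1)(s + 2)] is indeed the product of 1/(s + 1) and 1/(s + 2).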
Once again we see that the transform of the convolution of two functions equals the product of the transforms of each. In Table I.3 we list a number of important properties of the Laplace transform, and in Table I.4 we list some of the important transforms themselves. In these tables we adopt the usual notation, as follows:

    d^n f(t)/dt^n ≜ f^{(n)}(t)

    ∫_{-∞}^{t} ⋯ ∫_{-∞}^{t} f(x)(dx)^n ≜ f^{(-n)}(t)    (n times)    (I.37)

For example, f^{(-1)}(t) = ∫_{-∞}^{t} f(x) dx; when we deal with functions that are zero for t < 0, then f^{(-1)}(0⁻) = 0. We comment here that the one-sided transform that uses 0⁺ as a lower limit in its definition is quite commonly used in transient analysis, but we prefer 0⁻ so as to include impulses at the origin.

The table of properties permits one to compute many transform pairs from a given pair. Property 2 is the statement of linearity, and Property 3 describes the effect of a scale change. Property 4 gives the effect of a translation in time,
Table I.3
Some Properties of the Laplace Transform

    FUNCTION                                      TRANSFORM
                                                  F*(s) = ∫_{0⁻}^{∞} f(t)e^{-st} dt
    1.   f(t)   t ≥ 0                             F*(s)
    2.   af(t) + bg(t)                            aF*(s) + bG*(s)
    3.   f(t/a)   (a > 0)                         aF*(as)
    4.   f(t - a)                                 e^{-as} F*(s)
    5.   e^{-at} f(t)                             F*(s + a)
    6.   t f(t)                                   -dF*(s)/ds
    7.   t^n f(t)                                 (-1)^n d^n F*(s)/ds^n
    8.   f(t)/t                                   ∫_{s}^{∞} F*(s_1) ds_1
    9.   f(t)/t^n                                 ∫_{s}^{∞} ds_1 ∫_{s_1}^{∞} ds_2 ⋯ ∫_{s_{n-1}}^{∞} ds_n F*(s_n)
    10.  f(t) ⊛ g(t)                              F*(s)G*(s)
    11.† df(t)/dt                                 sF*(s)
    12.† d^n f(t)/dt^n                            s^n F*(s)
    13.† ∫_{-∞}^{t} f(t) dt                       F*(s)/s
    14.† ∫_{-∞}^{t} ⋯ ∫_{-∞}^{t} f(t)(dt)^n      F*(s)/s^n
         (n times)
    15.  ∂f(t)/∂a   [a is a parameter]            ∂F*(s)/∂a
    16.  Integral property                        F*(0) = ∫_{0⁻}^{∞} f(t) dt
    17.  Initial value theorem                    lim_{s→∞} sF*(s) = lim_{t→0} f(t)
    18.  Final value theorem                      lim_{s→0} sF*(s) = lim_{t→∞} f(t)
                                                  if sF*(s) is analytic for Re(s) ≥ 0

† To be complete, we wish to show the form of the transform for entries 11-14 in the case when f(t) may have nonzero values for t < 0 also:

    d^n f(t)/dt^n ⇔ s^n F*(s) - s^{n-1} f(0) - s^{n-2} f^{(1)}(0) - ⋯ - f^{(n-1)}(0)

    ∫_{-∞}^{t} ⋯ ∫_{-∞}^{t} f(t)(dt)^n  (n times) ⇔ F*(s)/s^n + f^{(-1)}(0)/s^n + f^{(-2)}(0)/s^{n-1} + ⋯ + f^{(-n)}(0)/s
Table I.4
Some Laplace Transform Pairs

    FUNCTION                                      TRANSFORM
    1.   f(t)   t ≥ 0                             F*(s) = ∫_{0⁻}^{∞} f(t)e^{-st} dt
    2.   u_0(t)  (unit impulse)                   1
    3.   u_0(t - a)                               e^{-as}
    4.   u_n(t) ≜ (d/dt) u_{n-1}(t)               s^n
    5.   u_{-1}(t) ≜ δ(t)  (unit step)            1/s
    6.   u_{-1}(t - a)                            e^{-as}/s
    7.   u_{-n}(t) = t^{n-1}/(n - 1)!   t ≥ 0     1/s^n
    8.   Ae^{-at} δ(t)                            A/(s + a)
    9.   te^{-at} δ(t)                            1/(s + a)^2
    10.  (t^n/n!) e^{-at} δ(t)                    1/(s + a)^{n+1}
whereas Property 5, its dual, gives the effect of a parameter shift in the transform domain. Properties 6 and 7 show the effect of multiplication by t (to some power), which corresponds to differentiation in the transform domain; similarly, Properties 8 and 9 show the effect of division by t (to some power), which corresponds to integration. Property 10, a most important property (derived earlier), shows the effect of convolution in the time domain going over to simple multiplication in the transform domain. Properties 11 and 12 give the effect of time differentiation; it should be noted that this corresponds to multiplication by s (to a power equal to the number of differentiations in time) times the original transform. In a similar way, Properties 13 and 14 show the effect of time integration going over to division by s in the transform domain. Property 15 shows that differentiation with respect to a parameter of f(t) corresponds to differentiation in the transform domain as well. Property 16, the integral property, shows the simple way in which the transform may be evaluated at the origin to give the total integral

r
p
"
\'
ta
cc
t a
As
Let
tha
wil
wh
th,
fOI
R,
t<O
t ~ O
t<O
t ~ O
{
J ( /)
JC
t
) = 0
J+(t) = { 0
J (t )
348 APPENDIX I
of f(t). Properties 17 and 18, the initial and final value theorems, show how to compute the values of f(t) at t = 0 and t = ∞ directly from the transform.
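Properties 17 and 18 are easy to exercise on the fundamental pair. The sketch below (ours, assuming the sympy library) applies both limit theorems to F*(s) = 1/(s + 3), whose time function e^{-3t} δ(t) has f(0) = 1 and f(t) → 0 as t → ∞:

```python
import sympy as sp

s = sp.symbols('s')
F = 1/(s + 3)                        # transform of e^{-3t}, t >= 0

initial = sp.limit(s*F, s, sp.oo)    # Property 17: should give f(0) = 1
final = sp.limit(s*F, s, 0)          # Property 18: should give f(oo) = 0
assert initial == 1 and final == 0
```

Note that Property 18 is legitimate here because sF*(s) = s/(s + 3) is analytic for Re(s) ≥ 0.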
In Table I.4 we have a rather short list of important Laplace transform pairs. Much more extensive tables exist and may be found elsewhere [DOET 61]. Of course, as we said earlier, the table shown can be extended considerably by making use of the properties listed in Table I.3. We note, for example, that transform pair 3 in Table I.4 is obtained from transform pair 2 by application of Property 4 in Table I.3. We point out again that this table is limited in length since we have included only those functions that find relevance to the material contained in this text.
So far in this discussion of Laplace transforms we have been considering only functions f(t) for which f(t) = 0 for t < 0. This will be satisfactory for most of the work we consider in this text. However, there is an occasional need for transforming a function of time that may be nonzero anywhere on the real time axis. For this purpose we must once again consider the lower limit of integration to be -∞, that is,

    F*(s) = ∫_{-∞}^{∞} f(t) e^{-st} dt    (I.38)

One can easily show that this (bilateral) Laplace transform may be calculated in terms of one-sided time functions and their transforms as follows. First we define

    f_-(t) = { f(t)    t < 0
             { 0       t ≥ 0

    f_+(t) = { 0       t < 0
             { f(t)    t ≥ 0

and so it immediately follows that

    f(t) = f_-(t) + f_+(t)

We now observe that f_-(t) is a function that is nonzero only for negative values of t, and f_+(t) is nonzero only for nonnegative values of t. Thus we have

    f_+(t) ⇔ F_+*(s)
    f_-(-t) ⇔ F_-*(s)

where these transforms are defined as in Eq. (I.29). However, we need the transform of f_-(t), which is easily shown to be

    f_-(t) ⇔ F_-*(-s)

Thus, by the linearity of transforms, we may finally write the bilateral transform in terms of one-sided transforms:

    F*(s) = F_-*(-s) + F_+*(s)
As always, these Laplace transforms have abscissas of absolute convergence. Let us therefore define σ_+ as the convergence abscissa for F_+*(s); this implies that the region of convergence for F_+*(s) is Re(s) > σ_+. Similarly, F_-*(s) will have some abscissa of absolute convergence, which we will denote by σ_-, which implies that F_-*(s) converges for Re(s) > σ_-. It then follows directly that F_-*(-s) will have the same convergence abscissa (σ_-) but will converge for Re(s) < σ_-. Thus we have a situation where F*(s) converges for σ_+ < Re(s) < σ_-, and therefore we will have a "convergence strip" if and only if σ_+ < σ_-; if such is not the case, then it is not useful to define F*(s). Of course, a similar argument can be made in the case of z-transforms for functions that take on nonzero values for negative time indices.
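As an illustration of this decomposition (our own example, not the text's), take f(t) = e^{-|t|}. Then f_+(t) and f_-(-t) both equal e^{-t} for t ≥ 0, so both one-sided transforms are 1/(s + 1), and the bilateral transform assembles on the strip -1 < Re(s) < 1:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

one_sided = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)  # 1/(s + 1)

# F*(s) = F_-*(-s) + F_+*(s), with both one-sided pieces equal here.
F_bilateral = one_sided.subs(s, -s) + one_sided
assert sp.simplify(F_bilateral - 2/(1 - s**2)) == 0
```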
So far we have seen the effect of tagging our time function f(t) with the complex exponential e^{-st} and then compressing (integrating) over all such tagged functions to form a new function, namely, the transform F*(s). The purpose of the tagging was so that we could later "untransform" or, if you will, "unwind" the transform in order to obtain f(t) once again. In principle we know this is possible since a transform and its time function are uniquely related. So far, we have specified how to go in the direction from f(t) to F*(s). Let us now discuss the problem of inverting the Laplace transform F*(s) to recover f(t). There are basically two methods for conducting this inversion: the inspection method and the formal inversion integral method. These two methods are very similar.

First let us discuss the inspection method, which is perhaps the most useful scheme for inverting transforms. Here, as with z-transforms, the approach is to rewrite F*(s) as a sum of terms, each of which can be recognized from the table of Laplace transform pairs. Then, making use of the linearity property, we may invert the transform term by term and then sum the results to recover f(t). Once again, the basic method for writing F*(s) as a sum of recognizable terms is that of the partial-fraction expansion. Our description of that method will be somewhat shortened here since we have discussed it at some length in the z-transform section. First, we will assume that F*(s) is a rational function of s, namely,

    F*(s) = N(s)/D(s)

where both the numerator N(s) and denominator D(s) are polynomials in s. Again, we assume that the degree of N(s) is less than the degree of D(s); if this is not the case, N(s) must be divided by D(s) until the remainder is of degree less than the degree of D(s), and then the partial-fraction expansion is carried out for this remainder, whereas the terms of the quotient resulting from the division will be simple powers of s, which may be inverted by appealing to transform pair 4 in Table I.4. In addition, we will assume that the
"hard" part of the problem has been done, namely, that D(s) has been put in factored form

    D(s) = ∏_{i=1}^{k} (s + a_i)^{m_i}    (I.39)

Once F*(s) is in this form we may then express it as the following sum:

    F*(s) = B_{11}/(s + a_1)^{m_1} + B_{12}/(s + a_1)^{m_1 - 1} + ⋯ + B_{1m_1}/(s + a_1) + ⋯
          + B_{k1}/(s + a_k)^{m_k} + B_{k2}/(s + a_k)^{m_k - 1} + ⋯ + B_{km_k}/(s + a_k)    (I.40)
Once we have expressed F*(s) as above, we are then in a position to invert each term in this sum by inspection from Table I.4. In particular, pairs 8 (for simple poles) and 10 (for multiple poles) give us the answer directly. As before, the method for calculating the coefficients B_ij is given in general by

    B_ij = (1/(j - 1)!) (d^{j-1}/ds^{j-1}) [ (s + a_i)^{m_i} N(s)/D(s) ] |_{s=-a_i}    (I.41)

Thus we have a complete prescription for finding f(t) from F*(s) by inspection in those cases where F*(s) is rational and where D(s) has been factored as in Eq. (I.39). This method works very well in those cases where F*(s) is not overly complex.
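Equation (I.41) translates directly into a few lines of symbolic computation. In the sketch below (ours, assuming the sympy library; the function name and the sample transform are made up for illustration), each coefficient is obtained by differentiating (s + a_i)^{m_i} F*(s) and evaluating at s = -a_i:

```python
import sympy as sp

s = sp.symbols('s')

def pf_coefficient(F, a_i, m_i, j):
    """B_ij of Eq. (I.41) for the pole at s = -a_i of multiplicity m_i."""
    g = sp.cancel((s + a_i)**m_i * F)
    return sp.diff(g, s, j - 1).subs(s, -a_i) / sp.factorial(j - 1)

# A small made-up example: F*(s) = (2s + 3)/[(s + 2)(s + 1)^2]
F = (2*s + 3) / ((s + 2)*(s + 1)**2)
assert pf_coefficient(F, 2, 1, 1) == -1   # coefficient of 1/(s + 2)
assert pf_coefficient(F, 1, 2, 1) == 1    # coefficient of 1/(s + 1)^2
assert pf_coefficient(F, 1, 2, 2) == 1    # coefficient of 1/(s + 1)
```

So F*(s) = -1/(s + 2) + 1/(s + 1)^2 + 1/(s + 1), which recombines to the original transform.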
To elucidate some of these principles let us carry out a simple example. Assume that F*(s) is given by

    F*(s) = 8(s^2 + 3s + 1) / [(s + 3)(s + 1)^3]    (I.42)

We have already written the denominator in factored form, and so we may proceed directly to expand F*(s) as in Eq. (I.40). Note that we have k = 2, a_1 = 3, m_1 = 1, a_2 = 1, m_2 = 3. Since the denominator degree (4) is greater than the numerator degree (2), we may immediately expand F*(s) in partial fractions as given by Eq. (I.40), namely,

    F*(s) = B_11/(s + 3) + B_21/(s + 1)^3 + B_22/(s + 1)^2 + B_23/(s + 1)
Evaluation of the coefficients B_ij proceeds as follows. B_11 is especially simple since no differentiations are required, and we obtain

    B_11 = (s + 3)F*(s)|_{s=-3} = 8 (9 - 9 + 1)/(-2)^3 = -1

B_21 is also easy to evaluate:

    B_21 = (s + 1)^3 F*(s)|_{s=-1} = 8 (1 - 3 + 1)/2 = -4

For B_22 we must differentiate once, namely,

    B_22 = (d/ds) [ 8(s^2 + 3s + 1)/(s + 3) ] |_{s=-1}
         = 8 [ (s + 3)(2s + 3) - (s^2 + 3s + 1)(1) ] / (s + 3)^2 |_{s=-1}
         = 8 (s^2 + 6s + 8)/(s + 3)^2 |_{s=-1} = 8 (1 - 6 + 8)/(2)^2
         = 6

Lastly, the calculation of B_23 involves two differentiations; however, we have already carried out the first differentiation, and so we take advantage of the form we derived for B_22 just prior to evaluation at s = -1; furthermore, we note that since j = 3, we have for the first time an effect due to the term (j - 1)! from Eq. (I.41). Thus

    B_23 = (1/2!) (d^2/ds^2) [ 8(s^2 + 3s + 1)/(s + 3) ] |_{s=-1}
         = (1/2)(8) (d/ds) [ (s^2 + 6s + 8)/(s + 3)^2 ] |_{s=-1}
         = 4 [ (s + 3)^2 (2s + 6) - (s^2 + 6s + 8)(2)(s + 3) ] / (s + 3)^4 |_{s=-1}
         = 4 [ (2)^2 (4) - (1 - 6 + 8)(2)(2) ] / (2)^4
         = 1
This completes the evaluation of the constants B_ij, to give the partial-fraction expansion

    F*(s) = -1/(s + 3) - 4/(s + 1)^3 + 6/(s + 1)^2 + 1/(s + 1)    (I.43)

This last form lends itself to inversion by inspection, as we had promised. In particular, we observe that the first and last terms invert directly according to transform pair 8 from Table I.4, whereas the second and third terms invert directly from pair 10 of that table; thus we have for t ≥ 0 the following:

    f(t) = -e^{-3t} - 2t^2 e^{-t} + 6te^{-t} + e^{-t}    (I.44)

and of course, f(t) = 0 for t < 0.
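Both the expansion (I.43) and the inversion (I.44) can be confirmed mechanically. The sketch below (ours, assuming the sympy library) checks the partial-fraction form and then transforms f(t) forward again to recover the original F*(s):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = 8*(s**2 + 3*s + 1) / ((s + 3)*(s + 1)**3)

# Eq. (I.43): the partial-fraction expansion found above.
expansion = -1/(s + 3) - 4/(s + 1)**3 + 6/(s + 1)**2 + 1/(s + 1)
assert sp.simplify(F - expansion) == 0
assert sp.simplify(sp.apart(F, s) - expansion) == 0   # sympy finds the same expansion

# Eq. (I.44): transform f(t) forward and recover F*(s).
f = -sp.exp(-3*t) - 2*t**2*sp.exp(-t) + 6*t*sp.exp(-t) + sp.exp(-t)
assert sp.simplify(sp.laplace_transform(f, t, s, noconds=True) - F) == 0
```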
In the course of carrying out an inversion by partial-fraction expansion there are two natural points at which one can conduct a test to see if any errors have been made. First, once we have the partial-fraction expansion [as in our example, the result given in Eq. (I.43)], one can combine this sum of terms into a single term over a common denominator and check that this single term corresponds to the original given F*(s); the other check is to take the final form for f(t), carry out the forward transformation, and confirm that it gives the original F*(s) [of course, one then gets F*(s) expanded directly as a partial fraction].
The second method for finding f(t) from F*(s) is to use the inversion integral

    f(t) = (1/2πj) ∫_{σ_c - j∞}^{σ_c + j∞} F*(s) e^{st} ds    (I.45)

for t ≥ 0 and σ_c > σ_a. The integration in the complex s-plane is taken to be a straight-line integration parallel to the imaginary axis and lying to the right of σ_a, the abscissa of absolute convergence for F*(s). The usual means for carrying out this integration is to make use of the Cauchy residue theorem as applied to the integral in the complex domain around a closed contour. The closed contour we choose for this purpose is a semicircle of infinite radius, as shown in Figure I.4. In this figure we see that the path of integration required for Eq. (I.45) is s_3 → s_1, and the semicircle of infinite radius closing this contour is given as s_1 → s_2 → s_3. If the integral along the path s_1 → s_2 → s_3 is 0, then the integral along the entire closed contour will in fact give us f(t) from Eq. (I.45). To establish that this contribution is 0, we need
Figure I.4 Closed contour for inversion integral.

Jordan's Lemma  If |F*(s)| → 0 as R → ∞ on s_1 → s_2 → s_3, then

    ∫_{s_1 → s_2 → s_3} F*(s) e^{st} ds → 0    for t > 0
Thus, in order to carry out the complex inversion int egral shown in Eq.
(1.45) , we must first express F*(s) in a form for which J ordan' s lemma
applies. Ha ving done thi s we may then evaluate the integral a round the
closed conto ur C by calculating residues and using Cauchy' s residu e the orem.
Thi s is most easily ca rried out if F*(s) is in rational form with a fact or ed
denominat or as in Eq. (1.39). In order for Jordan' s lemma to apply, we will
require, as we did before, th at th e degree of the numerator be st rictly less than
the degree of the denominator, a nd if thi s is not so , we mustdividetherational .
functi on until the remainder has th is property. That is all there is to the
meth od . Let us carry this out on our previous example, namely that given in
Eq . (1.42). We note t his is already in a form for which Jordan' s lemma
applies, and so we may proceed directly with Cauchy' s residue theorem.
Our poles are located at s = −3 and s = −1. We begin by calculating the residue at s = −3, thus

r_{−3} = (s + 3)F*(s)e^{st} |_{s=−3}
       = 8(s² + 3s + 1)e^{st}/(s + 1)³ |_{s=−3}
       = 8(9 − 9 + 1)e^{−3t}/(−2)³
       = −e^{−3t}
Similarly, we must calculate the residue at s = −1, which requires the differentiations indicated in our residue formula Eq. (1.27):

r_{−1} = (1/2!) (d²/ds²) (s + 1)³F*(s)e^{st} |_{s=−1}

       = (1/2) (d²/ds²) [8(s² + 3s + 1)e^{st}/(s + 3)] |_{s=−1}

       = (1/2) (d/ds) { [(s + 3)[8(2s + 3)e^{st} + 8(s² + 3s + 1)te^{st}] − 8(s² + 3s + 1)e^{st}] / (s + 3)² } |_{s=−1}

       = (1/[2(s + 3)⁴]) { (s + 3)²[8(2s + 3)e^{st} + 8(s² + 3s + 1)te^{st} + (s + 3)8[2e^{st} + (2s + 3)te^{st}] + (s + 3)8[(2s + 3)te^{st} + (s² + 3s + 1)t²e^{st}] − 8(2s + 3)e^{st} − 8(s² + 3s + 1)te^{st}] − [(s + 3)8[(2s + 3)e^{st} + (s² + 3s + 1)te^{st}] − 8(s² + 3s + 1)e^{st}]2(s + 3) } |_{s=−1}

       = e^{−t} + 6te^{−t} − 2t²e^{−t}
Combining these residues we have

f(t) = −e^{−3t} + e^{−t} + 6te^{−t} − 2t²e^{−t},    t ≥ 0

Thus we see that our solution here is the same as in Eq. (1.44), as it must be; we have once again that f(t) = 0 for t < 0.
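This residue calculation can be checked with a short computer-algebra sketch (sympy, not part of the text). It assumes, from the worked example, that the transform of Eq. (1.42) is F*(s) = 8(s² + 3s + 1)/[(s + 3)(s + 1)³], and applies the residue formula of Eq. (1.27) at each pole:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

# Assumed form of the example transform, Eq. (1.42):
F = 8 * (s**2 + 3*s + 1) / ((s + 3) * (s + 1)**3)

# Residue at the simple pole s = -3: (s + 3) F*(s) e^{st} evaluated at s = -3.
r_m3 = (sp.cancel((s + 3) * F) * sp.exp(s*t)).subs(s, -3)

# Residue at the order-3 pole s = -1: (1/2!) d^2/ds^2 [(s + 1)^3 F*(s) e^{st}].
r_m1 = sp.diff(sp.cancel((s + 1)**3 * F) * sp.exp(s*t), s, 2).subs(s, -1) / 2

f = sp.expand(sp.simplify(r_m3 + r_m1))
print(f)   # should match -e^{-3t} + e^{-t} + 6t e^{-t} - 2t^2 e^{-t}
```

The two residues are exactly the terms found by hand above, so their sum is the inverse transform for t ≥ 0.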
In our earlier discussion of the (bilateral) Laplace transform we discussed functions of time f_−(t) and f_+(t) defined for the negative and positive real time axis, respectively. We also observed that the transform for each of these functions was analytic in a left half-plane and a right half-plane, respectively, as measured from their appropriate abscissas of absolute convergence. Moreover, in our last inversion method [the application of Eq. (1.45)] we observed that closing the contour by a semicircle of infinite radius in a counterclockwise direction gave a result for t > 0. We comment now that had we closed the contour in a clockwise fashion to the right, we would have obtained the result applicable for t < 0, assuming that the contribution of this contour could be shown to be 0 by Jordan's lemma. In order to invert a bilateral transform, we proceed by obtaining first f(t) for positive values of t and then for negative values of t. For the first we take a path of integration within the convergence strip defined by σ_{a+} < σ_c < σ_{a−} and then close the contour with a counterclockwise semicircle; for t < 0, we take the same vertical contour but close it with a semicircle to the right.
  
As may be anticipated from our contour integration methods, it is sometimes necessary to determine exactly how many singularities of a function exist within a closed region. A very powerful and convenient theorem which aids us in this determination is given as follows:

Rouché's Theorem [GUIL 49]   If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeroes inside C.
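The theorem lends itself to a quick numerical illustration (an arbitrarily chosen polynomial, not one from the text). Take f(s) = 3s and g(s) = s⁵ − 1 on the unit circle: there |g(s)| ≤ |s|⁵ + 1 = 2 < 3 = |f(s)|, so s⁵ + 3s − 1 has exactly as many zeroes inside the unit circle as 3s does, namely one. A root finder confirms this:

```python
import numpy as np

# Rouché sketch: f(s) = 3s, g(s) = s^5 - 1, contour C = unit circle.
# On C, |g| <= 2 < 3 = |f|, so h(s) = f(s) + g(s) = s^5 + 3s - 1 has
# exactly one zero in |s| < 1 (the single zero of 3s at the origin).
coeffs = [1, 0, 0, 0, 3, -1]          # h(s) = s^5 + 3s - 1
roots = np.roots(coeffs)
inside = sum(1 for r in roots if abs(r) < 1)
print(inside)   # exactly one root lies inside the unit circle
```

Exactly this style of argument is used later in the text to count zeroes of a queueing system's denominator inside the unit disk.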
1.4. USE OF TRANSFORMS IN THE SOLUTION OF
DIFFERENCE AND DIFFERENTIAL EQUATIONS
As we have already mentioned, transforms are extremely useful in the solution of both differential and difference equations with constant coefficients. In this section we illustrate that technique; we begin with difference equations using z-transforms and then move on to differential equations using Laplace transforms, preparing us for the more complicated differential-difference equations encountered in the text, for which we need both methods simultaneously.

Let us consider the following general Nth-order linear difference equation with constant coefficients:

a_N g_n + a_{N−1} g_{n−1} + ··· + a_0 g_{n−N} = e_n    (1.46)

where the a_i are known constant coefficients, the g_n are unknown functions to be found, and e_n is a given function of n. In addition, we assume we are given N boundary equations (e.g., initial conditions). As always with such equations, the solution which we are seeking consists of both a homogeneous and a particular solution, namely,

g_n = g_n^(h) + g_n^(p)
just as with differential equations. We know that the homogeneous solution must satisfy the homogeneous equation

a_N g_n + a_{N−1} g_{n−1} + ··· + a_0 g_{n−N} = 0    (1.47)

The general form of solution to Eq. (1.47) is

g_n^(h) = Aα^n

where A and α are yet to be determined. If we substitute the proposed solution into Eq. (1.47), we find

a_N Aα^n + a_{N−1} Aα^{n−1} + ··· + a_0 Aα^{n−N} = 0    (1.48)
This Nth-order polynomial clearly has N solutions, which we will denote by α_1, α_2, ..., α_N, assuming for the moment that all the α_i are distinct. Associated with each such solution is an arbitrary constant A_i which will be determined from the initial conditions for the difference equation (of which there must be N). By cancelling the common term Aα^{n−N} from Eq. (1.48) we finally arrive at the characteristic equation which determines the values α_i:

a_N α^N + a_{N−1} α^{N−1} + ··· + a_0 = 0    (1.49)

Thus the search for the homogeneous solution is now reduced to finding the N roots of our characteristic equation (1.49). If all N of the α_i are distinct, then the homogeneous solution is

g_n^(h) = A_1 α_1^n + A_2 α_2^n + ··· + A_N α_N^n
In the case of nondistinct roots, we have a slightly different situation. In particular, let α_1 be a multiple root of order k; in this case the k equal roots will contribute to the homogeneous solution in the following form:

(A_11 n^{k−1} + A_12 n^{k−2} + ··· + A_{1,k−1} n + A_{1k}) α_1^n

and similarly for any other multiple roots. As far as the particular solution g_n^(p) is concerned, we know that it must be found by an appropriate guess from the form of e_n.
Let us illustrate some of these principles by means of an example. Consider the second-order difference equation

6g_n − 5g_{n−1} + g_{n−2} = 6(1/5)^n,    n = 2, 3, 4, ...    (1.50)

This equation gives the relationship among the unknown functions g_n for n = 2, 3, 4, .... Of course, we must give two initial conditions (since the order is 2) and we choose these to be g_0 = 0, g_1 = 6/5. In order to find the homogeneous solution we must form Eq. (1.49), which in this case becomes

6α² − 5α + 1 = 0

and so the two values of α which solve this equation are

α_1 = 1/2,    α_2 = 1/3

and thus we have the homogeneous solution

g_n^(h) = A_1 (1/2)^n + A_2 (1/3)^n
The particular solution must be guessed at, and the correct guess in this case is

g_n^(p) = B(1/5)^n

If we plug this back into our basic equation, namely, Eq. (1.50), we find that B = 1 and so we are convinced that the particular solution is correct. Thus our complete solution is given by

g_n = A_1 (1/2)^n + A_2 (1/3)^n + (1/5)^n

We use the initial conditions to solve for A_1 and A_2 and find A_1 = 8 and A_2 = −9. Thus our final solution is

g_n = 8(1/2)^n − 9(1/3)^n + (1/5)^n,    n = 0, 1, 2, ...    (1.51)
This completes the standard way for solving our difference equation.
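The closed form in Eq. (1.51) is easy to check numerically; the sketch below iterates the recurrence (1.50) from the two given initial conditions and compares against the closed form:

```python
# Numerical check of (1.51): iterate 6 g_n - 5 g_{n-1} + g_{n-2} = 6 (1/5)^n
# with g_0 = 0, g_1 = 6/5, and compare against the closed-form solution.
def g_closed(n):
    return 8 * 0.5**n - 9 * (1/3)**n + 0.2**n

g = [0.0, 6/5]                     # initial conditions g_0, g_1
for n in range(2, 20):
    g.append((5 * g[n-1] - g[n-2] + 6 * 0.2**n) / 6)

assert all(abs(g[n] - g_closed(n)) < 1e-12 for n in range(20))
print("closed form matches the recurrence")
```

The first few values, g_2 = 26/25, g_3 = 253/375, and so on, agree to machine precision.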
Let us now describe the method of z-transforms for solving difference equations. Assume once again that we are given Eq. (1.46) and that it holds in the range n = k, k + 1, .... Our approach begins by defining the following z-transform:

G(z) = Σ_{n=0}^{∞} g_n z^n    (1.52)

From our earlier discussion we know that once we have found G(z) we may then apply our inversion techniques to find the desired solution g_n. Our next step is to multiply the nth equation from Eq. (1.46) by z^n and then form the sum of all such multiplied equations from k to infinity; that is, we form

Σ_{n=k}^{∞} Σ_{i=0}^{N} a_i g_{n−i} z^n = Σ_{n=k}^{∞} e_n z^n

We then carry out the summations and attempt to recognize G(z) in this single equation. Next we solve for G(z) algebraically and then proceed with our inversion techniques to obtain the solution. This method does not require that we guess at the particular solution, and so in that sense it is simpler than the direct method; however, as we shall see, it still has the basic difficulty that we must solve the characteristic equation [Eq. (1.49)], and in general this is the difficult part of the solution. However, even if we cannot solve for the roots α_i, it is possible to obtain meaningful properties of the solution g_n from the perhaps unfactored form for G(z).
Let us solve our earlier example using the method of z-transforms. Accordingly we begin with Eq. (1.50), multiply by z^n and then sum; the sum will go from 2 to infinity since this is the applicable range for that equation. Thus

Σ_{n=2}^{∞} 6g_n z^n − Σ_{n=2}^{∞} 5g_{n−1} z^n + Σ_{n=2}^{∞} g_{n−2} z^n = Σ_{n=2}^{∞} 6(1/5)^n z^n

We now factor out enough powers of z from each sum so that these powers match the subscript on g thusly:

6 Σ_{n=2}^{∞} g_n z^n − 5z Σ_{n=2}^{∞} g_{n−1} z^{n−1} + z² Σ_{n=2}^{∞} g_{n−2} z^{n−2} = 6 Σ_{n=2}^{∞} (1/5)^n z^n

Focusing on the first summation we see that it is almost of the form G(z) except that it is missing the terms for n = 0 and n = 1 [see Eq. (1.52)]; applying this observation to each of the sums on the left-hand side and carrying out the summation on the right-hand side directly, we find

6[G(z) − g_0 − g_1 z] − 5z[G(z) − g_0] + z²G(z) = 6(1/5)²z² / [1 − (1/5)z]

Observe how the first term in this last equation reflects the fact that our summation was missing the first two terms of G(z). Solving for G(z) algebraically we find

G(z) = { 6g_0 + 6g_1 z − 5g_0 z + (6/25)z²/[1 − (1/5)z] } / (6 − 5z + z²)

If we now use our given values for g_0 and g_1, we have

G(z) = (1/5) z(6 − z) / { [1 − (1/3)z][1 − (1/2)z][1 − (1/5)z] }
Proceeding with a partial-fraction expansion of this last form we obtain

G(z) = −9/[1 − (1/3)z] + 8/[1 − (1/2)z] + 1/[1 − (1/5)z]

which by our usual inversion methods yields the final solution

g_n = 8(1/2)^n − 9(1/3)^n + (1/5)^n,    n = 0, 1, 2, ...

Note that this is exactly the same as Eq. (1.51) and so our method checks. We comment here that even were we not able to invert the given form for G(z) we could still have found certain of its properties; for example, we could find that the sum of all terms is given immediately by G(1), that is,

Σ_{n=0}^{∞} g_n = G(1) = −9/(2/3) + 8/(1/2) + 1/(4/5) = 15/4
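Both the partial-fraction expansion and the value of G(1) can be checked symbolically; the sketch below (sympy, not part of the text) takes the factored G(z) found above, verifies that its power-series coefficients reproduce g_n, and evaluates G(1):

```python
import sympy as sp

z = sp.symbols('z')
# Factored form of G(z) from the worked example
G = sp.Rational(1, 5) * z * (6 - z) / ((1 - z/3) * (1 - z/2) * (1 - z/5))

print(sp.apart(G, z))                        # partial-fraction expansion

# Power-series coefficients of G(z) are the g_n themselves
g = lambda k: 8*sp.Rational(1, 2)**k - 9*sp.Rational(1, 3)**k + sp.Rational(1, 5)**k
series = sp.series(G, z, 0, 8).removeO()
print([series.coeff(z, k) for k in range(3)])   # [0, 6/5, 26/25]

print(G.subs(z, 1))                          # sum of all g_n, equals 15/4
```

This illustrates the remark above: even an unfactored G(z) yields properties of g_n, here the total sum via G(1).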
Let us now consider the application of the Laplace transform to the solution of constant-coefficient linear differential equations. Consider an Nth-order equation of the following form:

a_N d^N f(t)/dt^N + a_{N−1} d^{N−1} f(t)/dt^{N−1} + ··· + a_1 df(t)/dt + a_0 f(t) = e(t)    (1.53)

Here the coefficients a_i are given constants, and e(t) is a given driving function. Along with this equation we must also be given N initial conditions in order to carry out a complete solution; these conditions typically are the values of the first N derivatives at some instant, usually at time zero. It is required to find the function f(t). As usual, we will have a homogeneous solution f^(h)(t), which solves the homogeneous equation [when e(t) = 0], as well as a particular solution f^(p)(t) that corresponds to the nonhomogeneous
equation. The form for the homogeneous solution will be

f^(h)(t) = Ae^{αt}

If we substitute this into Eq. (1.53) we obtain

a_N Aα^N e^{αt} + a_{N−1} Aα^{N−1} e^{αt} + ··· + a_0 Ae^{αt} = 0

This equation will have N solutions α_1, α_2, ..., α_N, which must solve the characteristic equation

a_N α^N + a_{N−1} α^{N−1} + ··· + a_0 = 0

which is equivalent to Eq. (1.49) with a change in subscripts. If all of the α_i are distinct, then the general form for our homogeneous solution will be

f^(h)(t) = A_1 e^{α_1 t} + A_2 e^{α_2 t} + ··· + A_N e^{α_N t}

The evaluation of the coefficients A_i is carried out making use of the initial conditions. In the case of multiple roots we have the following modification. Let us assume that α_1 is a repeated root of order k; this multiple root will contribute to the homogeneous solution in the following way:

(A_11 t^{k−1} + A_12 t^{k−2} + ··· + A_{1k}) e^{α_1 t}
and in the case of more than one multiple root the modification is obvious. As usual, one must guess in order to find the particular solution f^(p)(t). The complete solution then is, of course, the sum of the homogeneous and particular solutions, namely,

f(t) = f^(h)(t) + f^(p)(t)

Let us apply this method to the solution of the following differential equation for illustrative purposes:

d²f(t)/dt² − 6 df(t)/dt + 9f(t) = 2t    (1.54)

with the two initial conditions f(0) = 0 and df(0)/dt = 0. Forming the characteristic equation

α² − 6α + 9 = 0    (1.55)

we find the multiple root

α_1 = α_2 = 3

and so the homogeneous solution must be of the form

f^(h)(t) = (A_11 t + A_12)e^{3t}

Making an appropriate guess for the particular solution we try

f^(p)(t) = B_1 + B_2 t

Substituting this back into the basic equation (1.54) we find that B_1 = 4/27 and B_2 = 2/9. Thus our complete solution takes the form

f(t) = (A_11 t + A_12)e^{3t} + 4/27 + (2/9)t

Since our initial conditions state that both f(t) and its first derivative must be zero at t = 0, we find that A_11 = 2/9 and A_12 = −4/27, which gives for our final and complete solution

f(t) = (2/9)(t − 2/3)e^{3t} + (2/9)(t + 2/3),    t ≥ 0    (1.56)
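The direct method just carried out can be confirmed with a computer-algebra sketch (sympy's ODE solver, not part of the text), using the same equation and initial conditions:

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')

# Eq. (1.54) with f(0) = 0 and f'(0) = 0
ode = sp.Eq(f(t).diff(t, 2) - 6*f(t).diff(t) + 9*f(t), 2*t)
sol = sp.dsolve(ode, f(t), ics={f(0): 0, f(t).diff(t).subs(t, 0): 0})
print(sp.expand(sol.rhs))   # should match (2/9)(t - 2/3)e^{3t} + (2/9)(t + 2/3)
```

Expanding Eq. (1.56) gives (2/9)t e^{3t} − (4/27)e^{3t} + (2/9)t + 4/27, which is what the solver returns.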
The Laplace transform provides an alternative method for solving constant-coefficient linear differential equations. The method is based upon Properties 11 and 12 of Table I.3, which relate the derivative of a time function to its Laplace transform. The approach is to make use of these properties to transform both sides of the given differential equation into an equation involving the Laplace transform of the unknown function f(t) itself, which we denote as usual by F*(s). This algebraic equation is then solved for F*(s), and is then
inverted by any of our methods in order to immediately yield the complete solution for f(t). No guess is required in order to find the particular solution, since it comes out of the inversion procedure directly.

Let us apply this technique to our previous example. We begin by transforming both sides of Eq. (1.54), which will require that we take advantage of our initial conditions as follows:

s²F*(s) − sf(0) − df(0)/dt − 6sF*(s) + 6f(0) + 9F*(s) = 2/s²

In carrying out this last operation we have taken advantage of Laplace transform pair 7 from Table I.4. Since our initial conditions are both zero, we may eliminate certain terms in this last equation and proceed directly to solve for F*(s) thusly:

F*(s) = (2/s²) / (s² − 6s + 9)
We must now factor this last equation, which is the same problem we faced in finding the roots of Eq. (1.55) in the direct method, and as usual this forms the basically difficult part of all direct and indirect methods. Carrying this out we have

F*(s) = 2 / [s²(s − 3)²]

We are now in a position to make a partial-fraction expansion yielding

F*(s) = (2/9)/s² + (4/27)/s + (2/9)/(s − 3)² − (4/27)/(s − 3)

Inverting as usual, we then obtain, for t ≥ 0,

f(t) = (2/9)t + 4/27 + (2/9)te^{3t} − (4/27)e^{3t}
which is identical to our former solution given in Eq. (1.56).
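The transform route can likewise be sketched in sympy (again, a check rather than part of the text): form F*(s), expand it in partial fractions, and invert; for t ≥ 0 the inverse transform reproduces Eq. (1.56):

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

F = 2 / (s**2 * (s - 3)**2)      # F*(s) found above
print(sp.apart(F, s))            # the four partial-fraction terms
f_t = sp.inverse_laplace_transform(F, s, t)
print(sp.expand(f_t))            # (2/9)t + 4/27 + (2/9)t e^{3t} - (4/27)e^{3t}
```

Note that no particular-solution guess was needed anywhere; the driving term 2t entered only through its transform 2/s².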
In our study of queueing systems we often encounter not only difference equations and differential equations but also the combination in the form of differential-difference equations. That is, if we refer back to Eq. (1.53) and replace the time functions by time functions that depend upon an index, say n, and if we then display a set of differential equations for various values of n, then we have an infinite set of differential-difference equations. The solution to such equations often requires that we take both the z-transform on the discrete index n and the Laplace transform on the continuous time parameter t. Examples of this type of analysis are to be found in the text itself.
REFERENCES

AHLF 66  Ahlfors, L. V., Complex Analysis, 2nd Edition, McGraw-Hill (New York), 1966.
CADZ 73  Cadzow, J. A., Discrete-Time Systems, Prentice-Hall (Englewood Cliffs, N.J.), 1973.
DOET 61  Doetsch, G., Guide to the Applications of Laplace Transforms, Van Nostrand (Princeton), 1961.
GUIL 49  Guillemin, E. A., The Mathematics of Circuit Analysis, Wiley (New York), 1949.
JURY 64  Jury, E. I., Theory and Application of the z-Transform Method, Wiley (New York), 1964.
SCHW 59  Schwartz, L., Théorie des Distributions, 2nd printing, Actualités scientifiques et industrielles Nos. 1245 and 1122, Hermann et Cie. (Paris), Vol. 1 (1957), Vol. 2 (1959).
WIDD 46  Widder, D. V., The Laplace Transform, Princeton University Press (Princeton), 1946.
APPENDIX II
Probability Theory Refresher
In this appendix we review selected topics from probability theory which are relevant to our discussion of queueing systems. Mostly, we merely list the important definitions and results, with an occasional example. The reader is expected to be familiar with this material, which corresponds to a good first course in probability theory. Such a course would typically use one of the following texts, which contain additional details and derivations: Feller, Volume I [FELL 68]; Papoulis [PAPO 65]; Parzen [PARZ 60]; or Davenport [DAVE 70].

Probability theory concerns itself with describing random events. A typical dictionary definition of a random event is an event lacking aim, purpose, or regularity. Nothing could be further from the truth! In fact, it is the extreme regularity that manifests itself in collections of random events that makes probability theory interesting and useful. The notion of statistical regularity is central to our studies. For example, if one were to toss a fair coin four times, one expects on the average two heads and two tails. Of course, there is one chance in sixteen that no heads will occur. As a consequence, if an unusual sequence came up (that is, no heads), we would not be terribly surprised, nor would we suspect the coin was unfair. On the other hand, if we tossed the coin a million times, then once again we expect approximately half heads and half tails; but in this case, if no heads occurred, we would be more than surprised, we would be indignant and with overwhelming assurance could state that this coin was clearly unfair. In fact, the odds are better than 10^88 to 1 that at least 490,000 heads will occur! This is what we mean by statistical regularity, namely, that we can make some very precise statements about large collections of random events.
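The "10^88 to 1" figure can be checked with the normal approximation to the binomial (a sketch; for a million tosses of a fair coin the mean is 500,000 heads with standard deviation 500, so 490,000 heads sits 20 standard deviations below the mean):

```python
import math

# Normal approximation to Binomial(n = 10**6, p = 1/2).
# P[heads < 490,000] ~ Phi(-20), the standard normal tail at -20.
p_fewer = 0.5 * math.erfc(20 / math.sqrt(2))
print(p_fewer)   # roughly 2.8e-89, i.e., odds better than 10^88 to 1
```

The approximation understates nothing essential here; the exact binomial tail is of the same minuscule order.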
II.1. RULES OF THE GAME

We now describe the rules of the game for creating a mathematical model for probabilistic situations, which is to correspond to real-world experiments. Typically one examines three features of such experiments:

1. A set of possible experimental outcomes.
2. A grouping of these outcomes into classes called results.
3. The relative frequency of these classes in many independent trials of the experiment.

The relative frequency f_c of a class c is merely the number of times the experimental outcome falls into that class, divided by the number of times the experiment is performed; as the number of experimental trials increases, we expect f_c to reach a limit due to our notion of statistical regularity.
The mathematical model we create also has three quantities of interest that are in one-to-one relation with the three quantities listed above in the experimental world. They are, respectively:

1. A sample space, which is a collection of objects which we denote by S. S corresponds to the set of mutually exclusive exhaustive outcomes of the model of an experiment. Each object (i.e., possible outcome) ω in the set S is referred to as a sample point.
2. A family of events ℰ denoted {A, B, C, ...} in which each event is a set of sample points {ω}. An event corresponds to a class or result in the real world.
3. A probability measure P, which is an assignment (mapping) of the events defined on S into the set of real numbers. P corresponds to the relative frequency in the experimental situation. The notation P[A] is used to denote the real number associated with the event A. This assignment must satisfy the following properties (axioms):
   (a) For any event A, 0 ≤ P[A] ≤ 1.    (II.1)
   (b) P[S] = 1.    (II.2)
   (c) If A and B are "mutually exclusive" events [see (II.4) below], then P[A ∪ B] = P[A] + P[B].    (II.3)

It is appropriate at this point to define some set-theoretic notation [for example, the use of the symbol ∪ in property (c)]. Typically, we describe an event A as follows: A = {ω : ω satisfies the membership property for the event A}; this is read as "A is the set of sample points ω such that ω satisfies the membership property for the event A." We further define

A^c = {ω : ω not in A} = complement of A
A ∪ B = {ω : ω in A or B or both} = union of A and B
A ∩ B = AB = {ω : ω in A and B} = intersection of A and B
φ = S^c = null event (contains no sample points since S contains all the points)
If AB = φ, then A and B are said to be mutually exclusive (or disjoint). A set of events whose union forms the sample space S is said to be an exhaustive set of events. We are therefore led to the definition of a set of mutually exclusive exhaustive events {A_1, A_2, ..., A_n}, which have the properties

A_i A_j = φ    for all i ≠ j
A_1 ∪ A_2 ∪ ··· ∪ A_n = S    (II.4)

We note further that A ∪ A^c = S, AA^c = φ, AS = A, Aφ = φ, A ∪ S = S, A ∪ φ = A, S^c = φ, and φ^c = S. Also, we comment that the union and intersection operators are commutative, associative, and distributive.

The triplet (S, ℰ, P) along with Axioms (II.1)-(II.3) forms a probability system. These three axioms are all that one needs in order to develop an axiomatic theory of probability whenever the number of events that can be defined on the sample space S is finite. [When the number of such events is infinite it is necessary to include an additional axiom which extends Axiom (II.3) to include the infinite union of disjoint events. This leads us to the notion of a Borel field and of infinite additivity of probability measures. We do not discuss the details further in this refresher.] Lastly, we comment that Axiom (II.2) is nothing more than a normalization statement, and the choice of unity for this normalization is quite arbitrary (but also very natural).
Two other definitions are now in order. The first is that of conditional probability. The conditional probability of the event A given that the event B occurred (denoted by P[A | B]) is defined as

P[A | B] ≜ P[AB] / P[B]

whenever P[B] ≠ 0. The introduction of the conditional event B forces us to restrict attention from the original sample space S to a new sample space defined by the event B; since this new constrained sample space must now have a total probability of unity, we magnify the probabilities associated with conditional events by dividing by the term P[B] as given above.
The second additional notion we need is that of statistical independence of events. Two events A and B are said to be statistically independent if and only if

P[AB] = P[A]P[B]    (II.5)

For three events A, B, C we require that each pair of events satisfies Eq. (II.5) and in addition

P[ABC] = P[A]P[B]P[C]

This definition extends of course to n events, requiring the n-fold factoring of the probability expression as well as all the (n − 1)-fold factorings, all the way down to all the pairwise factorings. It is easy to see for two independent events A, B that P[A | B] = P[A], which merely says that knowledge of the occurrence of the event B in no way affects the probability of the occurrence of the independent event A.
The theorem of total probability is especially simple and important. It relates the probability of an event B and a set of mutually exclusive exhaustive events {A_i} as defined in Eq. (II.4). The theorem is

P[B] = Σ_{i=1}^{n} P[A_i B]

which merely says that if the event B is to occur it must occur in conjunction with exactly one of the mutually exclusive exhaustive events A_i. However, from the definition of conditional probability we may always write

P[A_i B] = P[A_i | B] P[B] = P[B | A_i] P[A_i]

Thus we have the second important form of the theorem of total probability, namely,

P[B] = Σ_{i=1}^{n} P[B | A_i] P[A_i]

This last equation is perhaps one of the most useful for us in studying queueing theory. It suggests the following approach for finding the probability of some complex event B: first condition the event B on some event A_i in such a way that the calculation of the occurrence of B given this condition is less complex, and then multiply by the probability of the conditioning event A_i to yield the joint probability P[A_i B]; this having been done for a set of mutually exclusive exhaustive events {A_i}, we may then sum these probabilities to find the probability of the event B. Of course, this approach can be extended: we may wish to condition the event B on more than one event, then uncondition each of these events suitably (by multiplying by the probability of the appropriate condition), and then sum all possible forms of all conditions. We will use this approach many times in the text.
We now come to the well-known Bayes' theorem. Once again we consider a set of events {A_i} which are mutually exclusive and exhaustive. The theorem says

P[A_i | B] = P[B | A_i] P[A_i] / Σ_{j=1}^{n} P[B | A_j] P[A_j]

This theorem permits us to calculate the probability of one event conditioned on a second by calculating the probability of the second conditioned on the first, along with other similar terms.
A simple example is in order here to illustrate some of these ideas. Consider that you have just entered a gambling casino in Las Vegas. You approach a dealer who is known to have an identical twin brother; the twins cannot be distinguished. It is further known that one of the twins is an honest dealer whereas the second twin is a cheating dealer, in the sense that when you play with the honest dealer you lose with probability one-half, whereas when you play with the cheating dealer you lose with probability p (if p is greater than one-half, he is cheating against you, whereas if p is less than one-half he is cheating for you). Furthermore, it is equally likely that upon entering the casino you will find one or the other of these two dealers. Consider that you now play one game with the particular twin whom you encounter and further that you lose. Of course you are disappointed, and you would now like to calculate the probability that the dealer you faced was in fact the cheat, for if you can establish that this probability is close to unity, you have a case for suing the casino. Let D_H be the event that you play with the honest dealer and let D_C be the event that you play with the cheating dealer; further let L be the event that you lose. What we are then asking for is P[D_C | L]. It is not immediately obvious how to make this calculation; however, if we apply Bayes' theorem the calculation itself is trivial, for

P[D_C | L] = P[L | D_C] P[D_C] / ( P[L | D_C] P[D_C] + P[L | D_H] P[D_H] )

In this application of Bayes' theorem the collection of mutually exclusive exhaustive events is the set {D_H, D_C}, for one of these two events must occur but both cannot occur simultaneously. Our problem is now trivial since each term on the right-hand side is easily calculated and leads us to

P[D_C | L] = p(1/2) / [ p(1/2) + (1/2)(1/2) ] = 2p / (2p + 1)

This is the answer we were seeking, and we find that the probability of having faced a cheating dealer, given that we lost in one play, ranges from 0 (p = 0) to 2/3 (p = 1). Thus, even if we know that the cheating dealer is completely dishonest (p = 1), we can only say that with probability 2/3 we faced this dealer, given that we lost one play.
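The dealer calculation is a one-line application of Bayes' theorem; here it is as a small sketch in code (the prior of 1/2 on each twin comes from the problem statement):

```python
def p_cheat_given_loss(p):
    """P[D_C | L] when the cheating dealer's loss probability is p."""
    prior = 0.5                       # equally likely to meet either twin
    num = p * prior                   # P[L | D_C] P[D_C]
    den = p * prior + 0.5 * prior     # + P[L | D_H] P[D_H]
    return num / den                  # equals 2p / (2p + 1)

print(p_cheat_given_loss(1.0))   # 2/3: even a sure-loss cheat is only 2/3 likely
print(p_cheat_given_loss(0.0))   # 0: a cheat who never wins is exonerated by a loss
```

Sweeping p from 0 to 1 traces out the curve 2p/(2p + 1) derived above.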
As a final word on elementary topics, let us remind the reader that the number of permutations of N objects taken K at a time is

N!/(N − K)! = N(N − 1) ··· (N − K + 1)

whereas the number of combinations of N things taken K at a time is denoted by the binomial coefficient (N over K) and is given by

N!/[K!(N − K)!]

II.2. RANDOM VARIABLES
So far we have described a probability system which consists of the triplet (S, ℰ, P), that is, a sample space, a set of events, and a probability assignment to the events of that sample space. We are now in a position to define the important concept of a random variable. A random variable is a variable whose value depends upon the outcome of a random experiment. Since the outcomes of our random experiments are represented as points ω ∈ S, then to each such outcome ω we associate a real number X(ω), which is in fact the value the random variable takes on when the experimental outcome is ω. Thus our (real) random variable X(ω) is nothing more than a function defined on the sample space, or if you will, a mapping from the points of the sample space into the (real) line.
As an example, let us consider the random experiment which consists of one play of a game of blackjack in Las Vegas. The sample space consists of all possible pairs of scores that can be obtained by the dealer and the player. Let us assume that we have grouped all such sample points into three (mutually exclusive) events of interest: lose (L), draw (D), or win (W). In order to complete the probability system we must assign probabilities to each of these events as follows*: P[L] = 3/8, P[D] = 1/4, P[W] = 3/8. Thus our probability system may be represented as in the Venn diagram of Figure II.1. The numbers in parentheses are of course the probabilities. Now for the random variable X(ω). Let us assume that if we win the game we win $5, if we draw we win $0, and if we lose we win −$5 (that is, we lose $5). Let our winnings on this single play of blackjack be the random variable X(ω). We may therefore define this variable as follows:

X(ω) = +5,    ω ∈ W
X(ω) =  0,    ω ∈ D
X(ω) = −5,    ω ∈ L

Similarly, we may represent this random variable as the mapping shown in Figure II.2.

* This is the most difficult step in practice, that is, determining appropriate numbers to use in our model of the real world.
Figure II.1 The probability system for the blackjack example.
The domain of the random variable X(ω) is the set of events ℰ, and the values it takes on form its range. We note in passing that the probability assignment P may itself be thought of as a random variable since it satisfies the definition; this particular assignment P, however, has further restrictions on it, namely, those given in Axioms (II.1)-(II.3).

We are mainly interested in describing the probability that the random variable X(ω) takes on certain values. To this end we define the following shorthand notation for events:

[X = x] ≜ {ω : X(ω) = x}    (II.6)

We may discuss the probability of this event, which we define as

P[X = x] = probability that X(ω) is equal to x

which is merely the sum of the probabilities associated with each point ω for which X(ω) = x. For our example we have

P[X = −5] = 3/8
P[X = 0] = 1/4    (II.7)
P[X = +5] = 3/8
Another convenient form for expressing the probabilities associated with the random variable is the probability distribution function (PDF), also known
Figure II.2 The random variable X(ω).
as the cumulative distribution function. For this purpose we define notation similar to that given in Eq. (II.6), namely,

[X ≤ x] = {ω : X(ω) ≤ x}

We then have that the PDF is defined as

F_X(x) ≜ P[X ≤ x]

which expresses the probability that the random variable X takes on a value less than or equal to x. The important properties of this function are

F_X(x) ≥ 0    (II.8)
F_X(∞) = 1
F_X(−∞) = 0
F_X(b) − F_X(a) = P[a < X ≤ b]    for a < b    (II.9)
F_X(b) ≥ F_X(a)    for b ≥ a

Thus F_X(x) is a nonnegative monotonically nondecreasing function with limits 0 and 1 at −∞ and +∞, respectively. In addition, F_X(x) is assumed to be continuous from the right. For our blackjack example we then have the function given in Figure II.3. We note that at points of discontinuity the PDF takes on the upper value (as indicated by the dot) since the function is piecewise continuous from the right. From Property (II.9) we may easily calculate the probability that our random variable lies in a given interval. Thus for our blackjack example, we may write P[−2 < X ≤ 6] = 5/8, P[1 < X ≤ 4] = 0, and so on.
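Property (II.9) reduces any interval probability to two PDF evaluations; a small sketch for the blackjack variable makes this concrete:

```python
# Discrete distribution of the blackjack winnings X:
# atoms at -5, 0, +5 with masses 3/8, 1/4, 3/8.
atoms = [(-5, 3/8), (0, 1/4), (5, 3/8)]

def F(x):
    """F_X(x) = P[X <= x], a right-continuous step function."""
    return sum(p for v, p in atoms if v <= x)

def prob_interval(a, b):
    """P[a < X <= b] = F_X(b) - F_X(a), per property (II.9)."""
    return F(b) - F(a)

print(prob_interval(-2, 6))   # 0.625, i.e., 5/8
print(prob_interval(1, 4))    # 0.0
```

The half-open convention (a, b] matters: prob_interval(-6, -5) picks up the atom at −5, while prob_interval(-5, -1) does not.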
For purposes of calculati on it is much more convenient to work with a
funct ion closely related to the PDF rather than with the PDF itself. Thus we
FX (x )
3
8
5
,
8 ,
1
...
3
'
a
I
 5 o +5
Figure 11.3 The PDF for the blackjack example.
11.2. RANDOM VARIABLES 371
are led to the definition of the probability density function (pdf), defined as

f_X(x) ≜ dF_X(x)/dx    (II.10)

Of course, we are immediately faced with the question of whether or not such a derivative exists and, if so, over what interval. We temporarily avoid that question and assume that F_X(x) possesses a continuous derivative everywhere (which is false for our blackjack example). As we shall see later, it is possible to define the pdf even when the PDF contains jumps. We may "invert" Eq. (II.10) to yield

F_X(x) = ∫_{−∞}^{x} f_X(y) dy    (II.11)

Since F_X(∞) = 1, we have from Eq. (II.11)

∫_{−∞}^{∞} f_X(x) dx = 1

Thus the pdf is a function which, when integrated over an interval, gives the probability that the random variable X lies in that interval; namely, for a < b, we have

P[a < X ≤ b] = ∫_{a}^{b} f_X(x) dx

From this, and the axiom stated in Eq. (II.1), we see that this last equation implies

f_X(x) ≥ 0
As an example, let us consider an exponentially distributed random variable, defined as one for which

F_X(x) = { 1 − e^{−λx},   0 ≤ x
         { 0,             x < 0        (II.12)

where λ > 0. The corresponding pdf is given by

f_X(x) = { λe^{−λx},   0 ≤ x
         { 0,          x < 0           (II.13)
372 APPENDI X II
For this example, the probability that the random variable lies between the values a (>0) and b (>a) may be calculated in either of the two following ways:

P[a < X ≤ b] = F_X(b) − F_X(a) = e^{−λa} − e^{−λb}

P[a < X ≤ b] = ∫_{a}^{b} f_X(x) dx = e^{−λa} − e^{−λb}
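These two routes to the same probability can be compared numerically; the sketch below (ours, with arbitrary illustrative values of λ, a, and b) evaluates the PDF difference and a crude Riemann sum of the pdf:

```python
# Two ways to compute P[a < X <= b] for an exponential random variable:
# via F(b) - F(a) from Eq. (II.12), and by integrating the pdf (II.13).
# lam = 1.5, a = 0.5, b = 2.0 are arbitrary illustrative choices.
import math

lam, a, b = 1.5, 0.5, 2.0

def F(x):          # PDF, Eq. (II.12)
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def f(x):          # pdf, Eq. (II.13)
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

p_cdf = F(b) - F(a)            # equals e^{-lam a} - e^{-lam b}

# midpoint Riemann sum of the pdf over (a, b]
n = 100_000
dx = (b - a) / n
p_int = sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx

print(p_cdf, p_int)            # the two agree closely
```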
From our blackjack example we notice that the PDF has a derivative which is everywhere 0 except at the three critical points (x = −5, 0, +5). In order to complete the definition for the pdf when the PDF is discontinuous, we recognize that we must introduce a function such that when it is integrated over the region of the discontinuity it yields a value equal to the size of the discontinuous jump; that is, in the blackjack example the probability density function must be such that when integrated from −5 − ε to −5 + ε (for small ε > 0) it should yield a probability equal to 3/8. Such a function has already been studied in Appendix I and is, of course, the impulse function (or Dirac delta function). Recall that such a function u₀(x) is given by

u₀(x) = { ∞,   x = 0
        { 0,   x ≠ 0

∫_{−∞}^{∞} u₀(x) dx = 1

and also that it is merely the derivative of the unit step function, as can be seen from

∫_{−∞}^{x} u₀(y) dy = { 0,   x < 0
                      { 1,   x ≥ 0
Using the graphical notation in Figure I.3, we may properly describe the pdf for our blackjack example as in Figure II.4. We note immediately that this representation gives exactly the information we had in Eq. (II.7), and therefore the use of impulse functions is overly cumbersome for such problems. In particular, if we define a discrete random variable as one that takes on values over a discrete set (finite or countable), then the use of the pdf* is a bit heavy and unnecessary, although it does fit into our general definition in the obvious way. On the other hand, in the case of a random variable that takes on values over a continuum it is perfectly natural to use the pdf, and in the
* In the discrete case, the function P[X = x_k] is often referred to as the probability mass function. The generalization to the pdf leads one to the notion of a mass density function.
Figure II.4 The pdf for the blackjack example.
case where there is also a nonzero probability that the random variable takes on a specific value (i.e., that the PDF contains jumps), then the use of the pdf is necessary, as is the introduction of the impulse function to account for these points of accumulation. We are thus led to distinguish between a discrete random variable, a purely continuous random variable (one whose PDF is continuous and everywhere differentiable), and the third case of a mixed random variable which contains some discrete as well as continuous portions.* So, for example, let us consider a random variable that represents the lifetime of an automobile. We will assume that there is a finite probability, say of value p, that the automobile will be inoperable immediately upon delivery, and therefore will have a lifetime of length zero. On the other hand, if the automobile is operable upon delivery then we will assume that the remainder of its lifetime is exponentially distributed as given in Eqs. (II.12) and (II.13). Thus for this automobile lifetime we have a PDF and a pdf as given in Figure II.5. Thus we clearly see the need for impulse functions in describing interesting random variables.
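A mixed random variable like this automobile lifetime is easy to simulate; the sketch below (ours; the values p = 0.2 and λ = 0.5 are arbitrary illustrations) confirms that a fraction p of the mass sits exactly at zero and that the mean lifetime is (1 − p)/λ:

```python
# Monte Carlo sketch of the mixed "automobile lifetime" random variable:
# with probability p the lifetime is exactly 0 (the impulse in the pdf),
# otherwise it is exponential with rate lam.
import random

random.seed(1)
p, lam, n = 0.2, 0.5, 200_000

samples = [0.0 if random.random() < p else random.expovariate(lam)
           for _ in range(n)]

frac_zero = sum(1 for s in samples if s == 0.0) / n
mean_life = sum(samples) / n

print(frac_zero)   # close to p = 0.2 (the jump of height p at the origin)
print(mean_life)   # close to (1 - p)/lam = 1.6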
We have now discussed the notion of a probability system (S, ℰ, P) and the notion of a random variable X(ω) defined upon the sample space S. There is, of course, no reason why we cannot define many random variables on the same sample space. Let us consider the case of two random variables X and Y defined for some probability system (S, ℰ, P). In this case we have
* It can be shown that any PDF may be decomposed into a sum of three parts, namely, a pure jump function (containing only discontinuous jumps), a purely continuous portion, and a singular portion (which rarely occurs in distribution functions of interest and which will be considered no further in this text).
Figure II.5 PDF and pdf for automobile lifetime: (a) PDF; (b) pdf.
the natural extension of the PDF for two random variables, namely,

F_{XY}(x, y) ≜ P[X ≤ x, Y ≤ y]

which is merely the probability that X takes on a value less than or equal to x at the same time Y takes on a value less than or equal to y; that is, it is the sum of the probabilities associated with all sample points in the intersection of the two events {ω : X(ω) ≤ x}, {ω : Y(ω) ≤ y}. F_{XY}(x, y) is referred to as the joint PDF. Of course, associated with this function is a joint probability density function, defined as

f_{XY}(x, y) ≜ ∂²F_{XY}(x, y) / ∂x ∂y
Given a joint pdf, one naturally inquires as to the "marginal" density function for one of the variables, and this is clearly given by integrating over all possible values of the second variable; thus

f_X(x) = ∫_{−∞}^{∞} f_{XY}(x, y) dy    (II.14)
We are now in a position to define the notion of independence between random variables. Two random variables X and Y are said to be independent if and only if

f_{XY}(x, y) = f_X(x) f_Y(y)

that is, if their joint pdf factors into the product of the one-dimensional pdf's. This is very much like the definition for two independent events as given in Eq. (II.5). However, for three or more random variables, the definition is essentially the same as for two, namely, X₁, X₂, …, X_n are said to be independent random variables if and only if

f_{X₁X₂⋯X_n}(x₁, x₂, …, x_n) = f_{X₁}(x₁) f_{X₂}(x₂) ⋯ f_{X_n}(x_n)

This last is a much simpler test than that required for multiple events to be
independent.
With more than one random variable, we can now define conditional distributions and densities as follows. For example, we could ask for the PDF of the random variable X conditioned on some given value of the random variable Y, which would be expressed as P[X ≤ x | Y = y]. Similarly, the conditional pdf on X, given Y, is defined as

f_{X|Y}(x | y) ≜ (d/dx) P[X ≤ x | Y = y] = f_{XY}(x, y) / f_Y(y)

much as the definition for conditional probability of events.
To review again, we see that a random variable is defined as a mapping from the sample space for some probability system into the real line, and from this mapping the PDF may easily be determined. Usually, however, a random variable is not given in terms of its sample space and the mapping, but rather directly in terms of its PDF or pdf.
It is possible to define one random variable Y in terms of a second random variable X, in which case Y would be referred to as a function of the random variable X. In its most general form we then have

Y = g(X)    (II.15)

where g(·) is some given function of its argument. Thus, once the value for X is determined, then the value for Y may be computed; however, the value for X depends upon the sample point ω, and therefore so does the value of Y, which we may therefore write as Y = Y(ω) = g(X(ω)). Given the random variable X and its PDF, one should be able to calculate the PDF for the random variable Y, once the function g(·) is known. In principle, the computation takes the following form:

F_Y(y) = P[Y ≤ y] = P[{ω : g(X(ω)) ≤ y}]

In general, this computation is rather complex.
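For a case where the computation can be done in closed form, the result is easy to verify by simulation. The sketch below (ours, not from the text) takes g(x) = x² with X exponential(λ), for which F_Y(y) = P[X ≤ √y] = 1 − e^{−λ√y}:

```python
# F_Y(y) = P[g(X) <= y] for g(x) = x^2 and X exponential(lam), checked
# against simulation. The choices g(x) = x^2 and lam = 1.0 are illustrative.
import math
import random

random.seed(2)
lam, y0, n = 1.0, 2.0, 200_000

def F_Y(y):
    return 1.0 - math.exp(-lam * math.sqrt(y)) if y >= 0 else 0.0

hits = sum(1 for _ in range(n) if random.expovariate(lam) ** 2 <= y0)
print(hits / n, F_Y(y0))   # empirical frequency vs. exact PDF value
```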
One random variable may be a function of many other random variables rather than just one. A particularly important form which often arises is in fact the sum of a collection of independent random variables {X_i}, namely,

Y = Σ_{i=1}^{n} X_i    (II.16)

Let us derive the distribution function of the sum of two independent random variables (n = 2). It is clear that this distribution is given by

F_Y(y) = P[Y ≤ y] = P[X₁ + X₂ ≤ y]
Figure II.6 The integration region for X₁ + X₂ ≤ y.
We have the situation shown in Figure II.6. Integrating over the indicated region we have

F_Y(y) = ∫∫_{x₁+x₂≤y} f_{X₁X₂}(x₁, x₂) dx₁ dx₂

Due to the independence of X₁ and X₂ we then obtain the PDF for Y as

F_Y(y) = ∫_{−∞}^{∞} [ ∫_{−∞}^{y−x₂} f_{X₁}(x₁) dx₁ ] f_{X₂}(x₂) dx₂
       = ∫_{−∞}^{∞} F_{X₁}(y − x₂) f_{X₂}(x₂) dx₂

Finally, forming the pdf from this PDF, we have

f_Y(y) = ∫_{−∞}^{∞} f_{X₁}(y − x₂) f_{X₂}(x₂) dx₂

This last equation is merely the convolution of the density functions for X₁ and X₂ and, as in Eq. (I.36), we denote this convolution operator (which is both associative and commutative) by an asterisk enclosed within a circle. Thus

f_Y(y) = f_{X₁}(y) ⊛ f_{X₂}(y)
In a similar fashion, one easily shows for the case of arbitrary n that the pdf for Y as defined in Eq. (II.16) is given by the convolution of the pdf's for the X_i's, that is,

f_Y(y) = f_{X₁}(y) ⊛ f_{X₂}(y) ⊛ ⋯ ⊛ f_{X_n}(y)    (II.17)
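The discrete analogue of Eq. (II.17) is easy to see computationally: the distribution of a sum of independent discrete random variables is the convolution of their individual distributions. A small sketch (ours; two fair six-sided dice are an illustrative choice):

```python
# Distribution of the sum of two fair dice via discrete convolution.
die = [1.0 / 6] * 6            # P[X = k] for k = 1..6

def convolve(p, q):
    """Discrete convolution of two probability vectors."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

two_dice = convolve(die, die)   # P[X1 + X2 = s] for s = 2..12
print(two_dice[5])              # P[sum = 7] = 6/36
```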
II.3. EXPECTATION
In this section we discuss certain measures associated with the PDF and the pdf for a random variable. These measures will in general be called expectations, and they deal with integrals of the pdf. As we saw in the last section, the pdf involves certain difficulties in its definition, and these difficulties were handily resolved by the use of impulse functions. However, in much of the literature on probability theory and in most of the literature on queueing theory the use of impulses is either not accepted, not understood, or not known; as a result, special care and notation have been built up to get around the problem of differentiating discontinuous functions. The result is that many of the integrals encountered are Stieltjes integrals rather than the usual Riemann integrals with which we are most familiar. Let us take a moment to
define the Stieltjes integral. A Stieltjes integral is defined in terms of a nondecreasing function F(x) and a continuous function φ(x); in addition, two sets of points {t_k} and {ξ_k} such that t_{k−1} ≤ ξ_k ≤ t_k are defined, and a limit is considered where max |t_k − t_{k−1}| → 0. From these definitions, consider the sum

Σ_k φ(ξ_k)[F(t_k) − F(t_{k−1})]

This sum tends to a limit as the intervals shrink to zero, independent of the sets {t_k} and {ξ_k}, and the limit is referred to as the Stieltjes integral of φ with respect to F. This Stieltjes integral is written as

∫ φ(x) dF(x)
Of course, we recognize that the PDF may be identified with the function F in this definition, and that dF(x) may be identified with the pdf [say, f(x)] through

dF(x) = f(x) dx

by definition. Without the use of impulses the pdf may not exist; however, the Stieltjes integral will always exist, and therefore it avoids the issue of impulses. However, in this text we will feel free to incorporate impulse functions and therefore will work with both the Riemann and Stieltjes integrals; when impulses are permitted in the function f(x) we then have the following identity:

∫ φ(x) dF(x) = ∫ φ(x) f(x) dx

We will use both notations throughout the text in order to familiarize the student with the more common Stieltjes integral for queueing theory, as well as with the more easily manipulated Riemann integral with impulse functions.
Having said all this we may now introduce the definition of expectation. The expectation of a real random variable X(ω), denoted by E[X] and also by X̄, is given by the following:

E[X] = X̄ ≜ ∫_{−∞}^{∞} x dF_X(x)    (II.18)

This last is given in the form of a Stieltjes integral; in the form of a Riemann integral we have, of course,

E[X] = X̄ = ∫_{−∞}^{∞} x f_X(x) dx

The expectation of X is also referred to as the mean or average value of X. We may also write

E[X] = ∫_{0}^{∞} [1 − F_X(x)] dx − ∫_{−∞}^{0} F_X(x) dx

which, upon integrating by parts, is easily shown to be equal to Eq. (II.18) so long as E[X] < ∞. Similarly, for X a nonnegative random variable, this form becomes

E[X] = ∫_{0}^{∞} [1 − F_X(x)] dx,    X ≥ 0

In general, the expectation of a random variable is equal to the product of the value the random variable may take on and the probability it takes on this value, summed (integrated) over all possible values.
Now let us consider once again a new random variable Y, which is a function of our first random variable X, namely, as in Eq. (II.15),

Y = g(X)

We may define the expectation E_Y[Y] for Y in terms of its PDF just as we did for X; the subscript Y on the expectation is there to distinguish expectation with respect to Y as opposed to any other random variables (in this case X). Thus we have

E_Y[Y] = ∫_{−∞}^{∞} y f_Y(y) dy
This last computation requires that we find either F_Y(y) or f_Y(y), which, as mentioned in the previous section, may be a rather complex computation. However, the fundamental theorem of expectation gives a much more straightforward calculation for this expectation in terms of the distribution of the underlying random variable X, namely,

E_Y[Y] = E_X[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx
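The fundamental theorem of expectation can be checked numerically: integrate g(x)f_X(x) directly, never computing the pdf of Y. A sketch (ours; the choices g(x) = x² and λ = 2 are illustrative, for which the answer is the second moment 2/λ²):

```python
# E[g(X)] computed directly as the integral of g(x) f_X(x),
# for g(x) = x^2 and X exponential(lam).
import math

lam = 2.0
g = lambda x: x * x
f = lambda x: lam * math.exp(-lam * x)      # pdf of X for x >= 0

# midpoint Riemann sum over [0, 40/lam]; the tail beyond is negligible
n, hi = 400_000, 40.0 / lam
dx = hi / n
E_gX = sum(g((k + 0.5) * dx) * f((k + 0.5) * dx) for k in range(n)) * dx

print(E_gX)        # close to 2/lam**2 = 0.5
```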
We may define the expectation of the sum of two random variables given by the following obvious generalization of the one-dimensional case:

E[X + Y] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x + y) f_{XY}(x, y) dx dy
         = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f_{XY}(x, y) dx dy + ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f_{XY}(x, y) dx dy
         = ∫_{−∞}^{∞} x f_X(x) dx + ∫_{−∞}^{∞} y f_Y(y) dy
         = E[X] + E[Y]    (II.19)

This may also be written as (X + Y)‾ = X̄ + Ȳ. In going from the second line to the third line we have taken advantage of Eq. (II.14) of the previous section, in which the marginal density was defined from the joint density. We have shown the very important result that the expectation of the sum of two random variables is always equal to the sum of the expectations of each; this is true whether or not these random variables are independent. This very nice property comes from the fact that the expectation operator is a linear operator. The more general statement of this property for any number of random variables, independent or not, is that the expectation of the sum is always equal to the sum of the expectations, that is,

E[X₁ + X₂ + ⋯ + X_n] = E[X₁] + E[X₂] + ⋯ + E[X_n]
A similar question may be asked about the product of two random variables, that is,

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f_{XY}(x, y) dx dy

In the special case where the two random variables X and Y are independent, we may write the pdf for this joint random variable as the product of the pdf's for the individual random variables, thus obtaining

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f_X(x) f_Y(y) dx dy = E[X]E[Y]    (II.20)
This last equation (which may also be written as (XY)‾ = X̄ Ȳ) states that the expectation of the product is equal to the product of the expectations if the random variables are independent. A result similar to that expressed in Eq. (II.20) applies also to functions of independent random variables. That is, if we have two independent random variables X and Y and functions of each denoted by g(X) and h(Y), then by arguments exactly the same as those leading to Eq. (II.20) we may show

E[g(X)h(Y)] = E[g(X)]E[h(Y)]    (II.21)

Often we are interested in the expectation of the power of a random variable. In fact, this is so common that a special name has been coined: the expected value of the nth power of a random variable is referred to as its nth moment. Thus, by definition (really this follows from the fundamental theorem of expectation), the nth moment of X is given by

E[Xⁿ] = ∫_{−∞}^{∞} xⁿ f_X(x) dx

Furthermore, the nth central moment of this random variable is given as follows:

E[(X − X̄)ⁿ] ≜ ∫_{−∞}^{∞} (x − X̄)ⁿ f_X(x) dx    (II.22)
The nth central moment may be expressed in terms of the first n moments themselves; to show this we first write down the following identity, making use of the binomial theorem:

(X − X̄)ⁿ = Σ_{k=0}^{n} (n choose k) X^k (−X̄)^{n−k}

Taking expectations on both sides we then have

E[(X − X̄)ⁿ] = Σ_{k=0}^{n} (n choose k) E[X^k] (−X̄)^{n−k}

In going from the first to the second line in this last equation we have taken advantage of the fact that the expectation of a sum is equal to the sum of the expectations, and that the expectation of a constant is merely the constant itself. Now for a few observations. First we note that the 0th moment of a random variable is just unity. Also, the 0th central moment must be 1. The first central moment must be 0 since

E[X − X̄] = X̄ − X̄ = 0
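The binomial identity above gives the central moments directly from the raw moments. A sketch (ours; the exponential distribution with an arbitrary λ is used, for which the raw moments are E[X^k] = k!/λ^k):

```python
# nth central moment from the first n raw moments via the binomial identity.
from math import comb, factorial

lam = 1.5
raw = lambda k: factorial(k) / lam ** k          # E[X^k] for exponential(lam)
mean = raw(1)

def central(n):
    """E[(X - mean)^n] via the binomial expansion."""
    return sum(comb(n, k) * raw(k) * (-mean) ** (n - k) for k in range(n + 1))

print(central(0))   # 1 (0th central moment)
print(central(1))   # 0 (first central moment)
print(central(2))   # variance of the exponential, 1/lam^2
```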
The second central moment is extremely important and is referred to as the variance; a special notation has been adopted for the variance and is given by

σ_X² ≜ E[(X − X̄)²] = E[X²] − (X̄)²

In the second line of this last equation we have taken advantage of Eq. (II.22) and have expressed the variance (a central moment) in terms of the first two moments themselves. The square root of the variance, σ_X, is referred to as the standard deviation. The ratio of the standard deviation to the mean of a random variable is a most important quantity in statistics and also in queueing theory; this ratio is referred to as the coefficient of variation and is denoted by

C_X ≜ σ_X / X̄    (II.23)
II.4. TRANSFORMS, GENERATING FUNCTIONS, AND CHARACTERISTIC FUNCTIONS
In probability theory one encounters a variety of functions (in particular, expectations), all of which are close relatives of each other. Included in this class are the characteristic function of a random variable, its moment generating function, the Laplace transform of its probability density function, and its probability generating function. In this section we wish to define and distinguish these various forms and to indicate a common central property that they share.
The characteristic function of a random variable X, denoted by φ_X(u), is given by

φ_X(u) ≜ E[e^{juX}] = ∫_{−∞}^{∞} e^{jux} f_X(x) dx

where j = √−1 and where u is an arbitrary real variable. (Note that except for the sign of the exponent, the characteristic function is the Fourier transform of the pdf for X.) Clearly,

|φ_X(u)| ≤ ∫_{−∞}^{∞} |e^{jux}| |f_X(x)| dx

and since |e^{jux}| = 1, we have |φ_X(u)| ≤ ∫_{−∞}^{∞} f_X(x) dx = 1, which shows that

|φ_X(u)| ≤ 1
An important property of the characteristic function may be seen by expanding the exponential in the integrand in terms of its power series and then integrating each term separately, as follows:

φ_X(u) = ∫_{−∞}^{∞} f_X(x) [ 1 + jux + (jux)²/2! + ⋯ ] dx
       = 1 + juX̄ + (ju)² E[X²]/2! + ⋯

From this expansion, we see that the characteristic function is expressed in terms of all the moments of X. Now, if we set u = 0 we find that φ_X(0) = 1. Similarly, if we first form dφ_X(u)/du and then set u = 0, we obtain jX̄. Thus, in general, we have

dⁿφ_X(u)/duⁿ |_{u=0} = jⁿ E[Xⁿ]    (II.24)

This last important result gives a rather simple way for calculating a constant times the nth moment of the random variable X.
Since this property is frequently used, we find it convenient to adopt the following simplified notation [consistent with that in Eq. (I.37)] for the nth derivative of an arbitrary function g(x), evaluated at some fixed value x = x₀:

g⁽ⁿ⁾(x₀) ≜ dⁿg(x)/dxⁿ |_{x=x₀}

Thus the result in Eq. (II.24) may be rewritten as

φ_X⁽ⁿ⁾(0) = jⁿ E[Xⁿ]    (II.25)
The moment generating function, denoted by M_X(v), is given below along with the appropriate differential relationship that yields the nth moment of X directly:

M_X(v) ≜ E[e^{vX}] = ∫_{−∞}^{∞} e^{vx} f_X(x) dx

M_X⁽ⁿ⁾(0) = E[Xⁿ]

where v is a real variable. From this last property it is easy to see where the name "moment generating function" comes from. The derivation of this moment relationship is the same as that for the characteristic function.
Another important and useful function is the Laplace transform of the pdf of a random variable X. We find it expedient to use a notation now in which the PDF for a random variable is labeled in a way that identifies the random variable without the use of subscripts. Thus, for example, if we have a
random variable X, which represents, say, the interarrival time between adjacent customers to a system, then we define A(x) to be the PDF for X:

A(x) = P[X ≤ x]

where the symbol A is keyed to the word "Arrival." Further, the pdf for this example would be denoted a(x). Finally, then, we denote the Laplace transform of a(x) by A*(s), and it is given by the following:

A*(s) ≜ E[e^{−sX}] = ∫_{−∞}^{∞} e^{−sx} a(x) dx

where s is a complex variable. Here we are using the "two-sided" transform; however, as mentioned in Section I.3, since most of the random variables we deal with are nonnegative, we often write

A*(s) = ∫_{0⁻}^{∞} e^{−sx} a(x) dx

The reader should take special note that the lower limit 0 is defined as 0⁻; that is, the limit comes in from the left, so that we specifically mean to include any impulse functions at the origin. In a fashion identical to that for the moment generating function and for the characteristic function, we may find the moments of X through the following formula:

A*⁽ⁿ⁾(0) = (−1)ⁿ E[Xⁿ]    (II.26)

For nonnegative random variables
|A*(s)| ≤ ∫_{0⁻}^{∞} |e^{−sx}| |a(x)| dx

But the complex variable s consists of a real part Re(s) = σ and an imaginary part Im(s) = ω such that s = σ + jω. Then we have

|e^{−sx}| = |e^{−σx} e^{−jωx}| ≤ |e^{−σx}| |e^{−jωx}| = |e^{−σx}|

Moreover, for Re(s) ≥ 0 and x ≥ 0, |e^{−σx}| ≤ 1, and so we have from these last two equations and from ∫_{0⁻}^{∞} a(x) dx = 1,

|A*(s)| ≤ 1    for Re(s) ≥ 0
It is clear that the three functions φ_X(u), M_X(v), A*(s) are all close relatives of each other. In particular, we have the following relationship:

φ_X(js) = M_X(−s) = A*(s)
Thus we are not surprised that the moment generating properties (by differentiation) are so similar for each; this property is the central property that we will take advantage of in our studies. Thus the nth moment of X is calculable from any of the following expressions:

E[Xⁿ] = j⁻ⁿ φ_X⁽ⁿ⁾(0)
E[Xⁿ] = M_X⁽ⁿ⁾(0)
E[Xⁿ] = (−1)ⁿ A*⁽ⁿ⁾(0)
It is perhaps worthwhile to carry out an example demonstrating these properties. Consider the continuous random variable X, which represents, say, the interarrival time of customers to a system and which is exponentially distributed, that is,

f_X(x) = a(x) = { λe^{−λx},   x ≥ 0
               { 0,          x < 0

By direct substitution into the defining integrals we find immediately that

φ_X(u) = λ / (λ − ju)

M_X(v) = λ / (λ − v)

A*(s) = λ / (λ + s)
It is always true that

φ_X(0) = M_X(0) = A*(0) = 1

and, of course, this checks out for our example as well. Using our expression for the first moment we find through any one of our three functions that

X̄ = 1/λ

and we may also verify that the second moment may be calculated from any of the three to yield

E[X²] = 2/λ²

and so it goes in calculating all of the moments.
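These transform results for the exponential can be confirmed numerically; the sketch below (ours, with an arbitrary λ) evaluates A*(s) by quadrature and recovers the first moment 1/λ as −dA*/ds at s = 0 via a central difference:

```python
# Numerical check of A*(s) = lam/(lam + s) for the exponential pdf,
# and of the moment property E[X] = -dA*/ds at s = 0.
import math

lam = 2.0

def A_star(s):
    """Laplace transform of a(x) = lam*exp(-lam*x), by midpoint Riemann sum."""
    n, hi = 200_000, 40.0 / lam
    dx = hi / n
    return sum(math.exp(-s * x) * lam * math.exp(-lam * x)
               for x in ((k + 0.5) * dx for k in range(n))) * dx

print(A_star(1.0))                              # close to lam/(lam + 1)
h = 1e-4
first_moment = -(A_star(h) - A_star(-h)) / (2 * h)
print(first_moment)                             # close to 1/lam
```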
In the case of a discrete random variable described, for example, by

g_k ≜ P[X = k]

we make use of the probability generating function, denoted by G(z), as follows:

G(z) ≜ E[z^X] = Σ_k z^k g_k    (II.27)
where z is a complex variable. It should be clear from our discussion in Appendix I that G(z) is nothing more than the z-transform of the discrete sequence g_k. As with the continuous transforms, we have for |z| ≤ 1

|G(z)| ≤ Σ_k |z^k| |g_k| ≤ Σ_k g_k

and so

|G(z)| ≤ 1    for |z| ≤ 1    (II.28)

Note that the first derivative evaluated at z = 1 yields the first moment of X,

G⁽¹⁾(1) = X̄    (II.29)

and that the second derivative yields

G⁽²⁾(1) = E[X²] − X̄

in a fashion similar to that for continuous random variables.* Note that in all cases

G(1) = 1
Let us apply these methods to the blackjack example considered earlier in this appendix. Working either with Eq. (II.7), which gives the probability of various winnings, or with the impulsive pdf given in Figure II.4, we find that the probability generating function for the number of dollars won in a game of blackjack is given by

G(z) = (3/8) z⁻⁵ + 1/4 + (3/8) z⁵

We note here that, of course, G(1) = 1 and further, that the mean winnings may be calculated as

X̄ = G⁽¹⁾(1) = 0
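Both facts about this generating function are easy to confirm in a short sketch (ours): G(1) = 1, and the mean winnings G′(1) = 0, here approximated by a central difference rather than symbolic differentiation:

```python
# The blackjack generating function G(z) = (3/8) z^-5 + 1/4 + (3/8) z^5:
# check G(1) = 1 and the mean winnings G'(1) = 0.
def G(z):
    return (3 / 8) * z ** -5 + 1 / 4 + (3 / 8) * z ** 5

h = 1e-6
G1 = G(1.0)
mean = (G(1.0 + h) - G(1.0 - h)) / (2 * h)   # central-difference G'(1)

print(G1)     # 1.0
print(mean)   # ~0.0
```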
Let us now consider the sum of n independent variables X_i, namely, Y = Σ_{i=1}^{n} X_i, as defined in Eq. (II.16). If we form the characteristic function
* Thus we have that σ_X² = G⁽²⁾(1) + G⁽¹⁾(1) − [G⁽¹⁾(1)]².
for Y, we have by definition

φ_Y(u) ≜ E[e^{juY}] = E[e^{ju Σ_{i=1}^{n} X_i}] = E[e^{juX₁} e^{juX₂} ⋯ e^{juX_n}]

Now in Eq. (II.21) we showed that the expectation of the product of functions of independent random variables is equal to the product of the expectations of each function separately; applying this to the above we have

φ_Y(u) = E[e^{juX₁}] E[e^{juX₂}] ⋯ E[e^{juX_n}]

Of course the right-hand side of this equation is just a product of characteristic functions, and so

φ_Y(u) = φ_{X₁}(u) φ_{X₂}(u) ⋯ φ_{X_n}(u)    (II.30)

In the case where each of the X_i is identically distributed, then, of course, the characteristic functions will all be the same, and so we may as well drop the subscript on X_i and conclude

φ_Y(u) = [φ_X(u)]ⁿ    (II.31)
We have thus shown that the characteristic function of a sum of n identically distributed independent random variables is the nth power of the characteristic function of the individual random variable itself. This important result also applies to our other transforms, namely, the moment generating function, the Laplace transform, and the z-transform. It is this significant property that accounts, in no small way, for the widespread use of transforms in probability theory and in the theory of stochastic processes.
Let us say a few more words now about sums of independent random variables. We have seen in Eq. (II.17) that the pdf of a sum of independent variables is equal to the convolution of the pdf for each; also, we have seen in Eq. (II.30) that the transform of the sum is equal to the product of the transforms for each. From Eq. (II.19) it is clear (regardless of the independence) that the expectation of the sum equals the sum of the expectations; namely, for

Y = X₁ + X₂ + ⋯ + X_n

we have

Ȳ = X̄₁ + X̄₂ + ⋯ + X̄_n    (II.32)

For n = 2 we see that the second moment of Y must be

E[Y²] = E[(X₁ + X₂)²] = E[X₁²] + 2E[X₁X₂] + E[X₂²]

And also in this case

(Ȳ)² = (X̄₁ + X̄₂)² = (X̄₁)² + 2X̄₁X̄₂ + (X̄₂)²
Forming the variance of Y and then using these last two equations we have

σ_Y² = E[Y²] − (Ȳ)²
    = E[X₁²] − (X̄₁)² + E[X₂²] − (X̄₂)² + 2(E[X₁X₂] − X̄₁X̄₂)
    = σ_{X₁}² + σ_{X₂}² + 2(E[X₁X₂] − X̄₁X̄₂)

Now if X₁ and X₂ are also independent, then E[X₁X₂] = X̄₁X̄₂, giving the final result

σ_Y² = σ_{X₁}² + σ_{X₂}²

In a similar fashion it is easy to show that the variance of the sum of n independent random variables is equal to the sum of the variances of each, that is,

σ_Y² = Σ_{i=1}^{n} σ_{Xᵢ}²
Continuing with sums of independent random variables, let us now assume that the number of these variables that are to be summed together is itself a random variable; that is, we define

Y = Σ_{i=1}^{N} X_i

where {X_i} is a set of identically distributed independent random variables, each with mean X̄ and variance σ_X², and where N is also a random variable with mean and variance N̄ and σ_N², respectively; we assume that N is also independent of the X_i. In this case, F_Y(y) is said to be a compound distribution. Let us now find Y*(s), which is the Laplace transform of the pdf for Y. By definition of the transform and due to the independence of all the random variables we may write down

Y*(s) = Σ_{n=0}^{∞} E[e^{−sX₁}] ⋯ E[e^{−sX_n}] P[N = n]

and since {X_i} is a set of identically distributed random variables, we have

Y*(s) = Σ_{n=0}^{∞} [X*(s)]ⁿ P[N = n]    (II.33)

where we have denoted the Laplace transform of the pdf for each of the X_i by X*(s). The final expression given in Eq. (II.33) is immediately recognized
as the z-transform for the random variable N, which we choose to denote by N(z) as defined in Eq. (II.27); in Eq. (II.33), z has been replaced by X*(s). Thus we finally conclude

Y*(s) = N(X*(s))    (II.34)

Thus a random sum of identically distributed independent random variables has a transform that is related to the transforms of the sum's random variables and of the number of terms in the sum, as given above. Let us now find an expression similar to that in Eq. (II.32); in that equation, for the case of identically distributed X_i, we had Ȳ = nX̄, where n was a given constant. Now, however, the number of terms in the sum is a random quantity and we must find the new mean Ȳ. We proceed by taking advantage of the moment generating properties of our transforms [Eq. (II.26)]. Thus, differentiating Eq. (II.34), setting s = 0, and then taking the negative of the result, we find

Ȳ = N̄ X̄

which is a perfectly reasonable result. Similarly, one can find the variance of this random sum by differentiating twice and then subtracting off the mean squared to obtain

σ_Y² = N̄ σ_X² + (X̄)² σ_N²

This last result perhaps is not so intuitive.
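The two random-sum results can be checked by simulation. In the sketch below (ours; the choice of N Poisson and X exponential, with the particular rates, is purely illustrative) the Poisson variate is drawn by counting rate-1 exponential interarrivals, a standard method:

```python
# Random sum Y = X_1 + ... + X_N with N ~ Poisson(lam_N), X ~ exp(lam_X),
# checking mean = N_bar * X_bar and var = N_bar*sigma_X^2 + X_bar^2*sigma_N^2.
import random

random.seed(3)
lam_N, lam_X, trials = 3.0, 2.0, 100_000

def poisson(mu):
    # count rate-1 exponential interarrivals falling inside [0, mu]
    k, t = 0, random.expovariate(1.0)
    while t < mu:
        k += 1
        t += random.expovariate(1.0)
    return k

ys = []
for _ in range(trials):
    n = poisson(lam_N)
    ys.append(sum(random.expovariate(lam_X) for _ in range(n)))

mean_y = sum(ys) / trials
var_y = sum((y - mean_y) ** 2 for y in ys) / trials

# for Poisson N: N_bar = sigma_N^2 = lam_N; for exp X: X_bar = 1/lam_X
mean_th = lam_N / lam_X                                    # = 1.5
var_th = lam_N / lam_X ** 2 + (1 / lam_X) ** 2 * lam_N     # = 1.5
print(mean_y, var_y)   # both near 1.5
```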
II.5. INEQUALITIES AND LIMIT THEOREMS
In this section we present some of the classical inequalities and limit theorems in probability theory.
Let us first consider bounding the probability that a random variable exceeds some value. If we know only the mean value of the random variable, then the following Markov inequality can be established for a nonnegative random variable X:

P[X ≥ x] ≤ X̄ / x

Since only the mean value of the random variable is utilized, this inequality is rather weak. The Chebyshev inequality makes use of the mean and variance and is somewhat tighter; it states that for any x > 0,

P[|X − X̄| ≥ x] ≤ σ_X² / x²

Other simple inequalities involve moments of two random variables, as follows. First we have the Cauchy-Schwarz inequality, which makes a statement about the expectation of a product of random variables in terms of
the second moments of each:

(E[XY])² ≤ E[X²] E[Y²]    (II.35)
A generalization of this last is Hölder's inequality, which states for α > 1, β > 1, α⁻¹ + β⁻¹ = 1, and X > 0, Y > 0 that

E[XY] ≤ (E[X^α])^{1/α} (E[Y^β])^{1/β}

whenever the indicated expectations exist. Note that the Cauchy-Schwarz inequality is the (important) special case in which α = β = 2. The triangle inequality relates the expectation of the absolute value of a sum to the sum of the expectations of the absolute values, namely,

E[|X + Y|] ≤ E[|X|] + E[|Y|]

A generalization of the triangle inequality, which is known as the C_r-inequality, is

E[|X + Y|^r] ≤ C_r (E[|X|^r] + E[|Y|^r])

where

C_r = { 1,         0 < r ≤ 1
      { 2^{r−1},   1 < r
Next we bound the expectation of a convex function g of an arbitrary random variable X (whose first moment X̄ is assumed to exist). A convex function g(x) is one that lies on or below all of its chords; that is, for any x₁ ≤ x₂ and 0 ≤ α ≤ 1,

g(αx₁ + (1 − α)x₂) ≤ αg(x₁) + (1 − α)g(x₂)

For such convex functions g and random variables X, we have Jensen's inequality as follows:

E[g(X)] ≥ g(X̄)
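Three of these inequalities are easy to spot-check against a distribution whose tails are known in closed form. A sketch (ours; X exponential(1), for which X̄ = 1, σ² = 1, E[X²] = 2, and P[X ≥ x] = e^{−x}):

```python
# Spot-check of the Markov, Chebyshev, and Jensen inequalities
# for X ~ exponential(1).
import math

x_bar, var, second = 1.0, 1.0, 2.0

# Markov: P[X >= x] = e^{-x} <= x_bar / x
for x in (1.0, 2.0, 5.0):
    assert math.exp(-x) <= x_bar / x

# Chebyshev, checked for x > 1 where the tail is exactly e^{-(1+x)}
# (X >= 0, so the event |X - 1| >= x > 1 reduces to X >= 1 + x)
for x in (1.5, 2.0, 4.0):
    assert math.exp(-(1.0 + x)) <= var / x ** 2

# Jensen with the convex function g(x) = x^2: E[X^2] >= (E[X])^2
assert second >= x_bar ** 2
print("all inequalities hold")
```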
When we deal with sums of random variables, we find that some very nice limiting properties exist. Let us once again consider the sum of n independent identically distributed random variables X_i, but let us now divide that sum by the number of terms n, thusly:

W_n = (1/n) Σ_{i=1}^{n} X_i

This arithmetic mean is often referred to as the sample mean. We assume that each of the X_i has a mean given by X̄ and a variance σ_X². From our earlier discussion regarding means and variances of sums of independent
ran dom variables we have
W
n
= X
2 a . ~ /
a W = 
n n
If we now apply the Chebyshev inequality to the random variable W
n
and
make use of these last two observations, we may express our bound in terms
of the mean and variance of the random variable X itself thusly
(I 1.36)
This very important result says that the arithmetic mean of the sum of n
independent and identically distributed rand om variables will approach
its expected value as n increases. This is due to the decreasing value of
a X2/nx2as n grows (a X
2
/X
2
remains constant). In fact , this leads us directly to
the weak law of large numbers, namely, that for any £ > 0 we have
lim P[l W
n
 X/ ;::: £] = 0
The strong law of large numbers states that

$$\lim_{n\to\infty} W_n = \overline{X} \quad \text{with probability one}$$
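These limit laws are easy to observe numerically. The sketch below (an illustration, not from the text) draws many independent sample means of uniform(0, 1) variables, whose mean is $1/2$ and variance $1/12$, and confirms that the fraction of large deviations respects the Chebyshev bound $\sigma_X^2/n\epsilon^2$ of Eq. (II.36):

```python
import random

# Draw many independent sample means W_n of uniform(0,1) variables
# (mean 1/2, variance 1/12) and compare the observed frequency of
# deviations |W_n - 1/2| >= eps against the Chebyshev bound (II.36).
random.seed(2)
n, eps, trials = 1000, 0.05, 2000
var_x = 1.0 / 12.0

def sample_mean(n):
    return sum(random.random() for _ in range(n)) / n

hits = sum(abs(sample_mean(n) - 0.5) >= eps for _ in range(trials))
observed = hits / trials
chebyshev_bound = var_x / (n * eps * eps)   # = 1/30 here

assert observed <= chebyshev_bound
```

The observed frequency is in fact far below the bound, which is typical: Chebyshev uses only the variance and so is very loose.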
Once again, let us consider the sum of $n$ independent identically distributed random variables $X_i$, each with mean $\overline{X}$ and variance $\sigma_X^2$. The central limit theorem concerns itself with the normalized random variable $Z_n$ defined by

$$Z_n = \frac{\displaystyle\sum_{i=1}^{n} X_i - n\overline{X}}{\sigma_X \sqrt{n}} \qquad \text{(II.37)}$$

and states that the PDF for $Z_n$ tends to the standard normal distribution as $n$ increases; that is, for any real number $x$ we have

$$\lim_{n\to\infty} P[Z_n \le x] = \Phi(x)$$

where

$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{(2\pi)^{1/2}}\, e^{-y^2/2}\, dy$$

That is, the appropriately normalized sum of a large number of independent random variables tends to a Gaussian, or normal, distribution. There are many other forms of the central limit theorem that deal, for example, with dependent random variables.
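The convergence is remarkably fast even for modest $n$; the sketch below (an illustration, not from the text) normalizes sums of uniform variables as in (II.37) and compares the empirical distribution with $\Phi$:

```python
import math
import random

# Normalize the sum of n i.i.d. uniform(0,1) variables as in (II.37)
# and compare the empirical P[Z_n <= x] with Phi(x) via math.erf.
random.seed(3)
n, trials, x = 50, 20_000, 1.0
mean_x = 0.5
sigma_x = math.sqrt(1.0 / 12.0)

def z_n():
    s = sum(random.random() for _ in range(n))
    return (s - n * mean_x) / (sigma_x * math.sqrt(n))

empirical = sum(z_n() <= x for _ in range(trials)) / trials
phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # normal distribution function; Phi(1) ~ 0.8413

assert abs(empirical - phi) < 0.02
```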
A rather sophisticated means for bounding the tail of the sum of a large number of independent random variables is available in the form of the Chernoff bound. It involves an inequality similar to the Markov and Chebyshev inequalities, but makes use of the entire distribution of the random variable itself (in particular, the moment generating function). Thus let us consider the sum of $n$ independent identically distributed random variables $X_i$ as given by

$$Y = \sum_{i=1}^{n} X_i$$
From Eq. (II.31) we know that the moment generating function for $Y$, $M_Y(v)$, is related to the moment generating function for each of the random variables $X_i$ [namely, $M_X(v)$] through the relationship

$$M_Y(v) = [M_X(v)]^n \qquad \text{(II.38)}$$
As with our earlier inequalities, we are interested in the probability that our sum exceeds a certain value, and this may be calculated as

$$P[Y \ge y] = \int_{y}^{\infty} f_Y(w)\, dw \qquad \text{(II.39)}$$

Clearly, for $v \ge 0$ we have that the unit step function [see Eq. (I.33)] is bounded above by the following exponential:

$$u_{-1}(w - y) \le e^{v(w - y)} \qquad v \ge 0$$

Applying this inequality to Eq. (II.39) we have

$$P[Y \ge y] \le e^{-vy}\int_{-\infty}^{\infty} e^{vw} f_Y(w)\, dw$$
However, the integral on the right-hand side of this equation is merely the moment generating function for $Y$, and so we have

$$P[Y \ge y] \le e^{-vy} M_Y(v) \qquad v \ge 0 \qquad \text{(II.40)}$$

Let us now define the "semi-invariant" generating function

$$\gamma(v) \triangleq \log M(v)$$

(Here we are considering natural logarithms.) Applying this definition to Eq. (II.38) we immediately have

$$\gamma_Y(v) = n\gamma_X(v)$$

and applying these last two to Eq. (II.40) we arrive at

$$P[Y \ge y] \le e^{-vy + n\gamma_X(v)} \qquad v \ge 0$$
Since this last is good for any value of $v$ ($\ge 0$), we should choose $v$ to create the tightest possible bound; this is simply carried out by differentiating the exponent and setting it equal to zero. We thus find the optimum relationship between $v$ and $y$ as

$$y = n\gamma_X^{(1)}(v) \qquad \text{(II.41)}$$

where $\gamma_X^{(1)}$ denotes the first derivative of $\gamma_X$. Thus the Chernoff bound for the tail of a density function takes the final form*

$$P[Y \ge n\gamma_X^{(1)}(v)] \le e^{n[\gamma_X(v) - v\gamma_X^{(1)}(v)]} \qquad v \ge 0 \qquad \text{(II.42)}$$
It is perhaps worthwhile to carry out an example demonstrating the use of this last bounding procedure. For this purpose, let us go back to the second paragraph in this appendix, in which we estimated the odds that at least 490,000 heads would occur in a million tosses of a fair coin. Of course, that calculation is the same as calculating the probability that no more than 510,000 heads will occur in the same experiment, assuming the coin is fair. In this example the random variable $X$ may be chosen as follows:

$$X = \begin{cases} 1 & \text{heads} \\ 0 & \text{tails} \end{cases}$$

Since $Y$ is the sum of a million trials of this experiment, we have that $n = 10^6$, and we now ask for the complementary probability that $Y$ add up to 510,000 or more, namely, $P[Y \ge 510{,}000]$. The moment generating function for $X$ is
$$M_X(v) = \tfrac{1}{2} + \tfrac{1}{2}e^v$$

and so

$$\gamma_X(v) = \log \tfrac{1}{2}(1 + e^v)$$

Similarly

$$\gamma_X^{(1)}(v) = \frac{e^v}{1 + e^v}$$
From our formula (II.41) we then must have

$$n\gamma_X^{(1)}(v) = 10^6\,\frac{e^v}{1 + e^v} = 510{,}000 = y$$

Thus we have

$$e^v = \frac{51}{49}$$

and

$$v = \log\frac{51}{49}$$
* The same derivation leads to a bound on the "lower tail," in which all three inequalities from Eq. (II.42) face the other way; for example, $v \le 0$.
Thus we see typically how $v$ might be calculated. Plugging these values back into Eq. (II.42) we conclude

$$P[Y \ge 510{,}000] \le e^{10^6\left[\log(50/49)\, -\, 0.51\log(51/49)\right]}$$
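The arithmetic in this example can be verified directly; the sketch below (not from the text) recomputes $\gamma_X(v)$, the optimizing $v$, and the resulting exponent, which comes out near $-200$:

```python
import math

# Recompute the Chernoff-bound quantities for the fair-coin example:
# gamma_X(v) = log((1 + e^v)/2); v is chosen so that n * gamma_X'(v) = 510,000.
n = 10**6
v = math.log(51.0 / 49.0)
gamma = math.log((1.0 + math.exp(v)) / 2.0)    # gamma_X(v) = log(50/49)
gamma1 = math.exp(v) / (1.0 + math.exp(v))     # gamma_X'(v) = 0.51

exponent = n * (gamma - v * gamma1)            # exponent in Eq. (II.42)
bound = math.exp(exponent)

assert abs(n * gamma1 - 510_000) < 1.0         # optimum condition (II.41)
assert -201.0 < exponent < -199.0              # roughly e^(-200)
assert bound < 1e-86
```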
This computation shows that the probability of exceeding 510,000 heads in a million tosses of a fair coin is less than $10^{-86}$ (this is where the number in our opening paragraphs comes from). An alternative way of carrying out
this computation would be to make use of the central limit theorem. Let us do so as an example. For this we require the calculation of the mean and variance of $X$, which are easily seen to be $\overline{X} = 1/2$, $\sigma_X^2 = 1/4$. Thus from Eq. (II.37) we have

$$Z_n = \frac{Y - 10^6(1/2)}{(1/2)10^3}$$
If we require $Y$ to be greater than 510,000, then we are requiring that $Z_n$ be greater than 20. If we now go to a table of the cumulative normal distribution, we find that

$$P[Z_n \ge 20] = 1 - \Phi(20) \simeq 25 \times 10^{-90}$$
Again we see the extreme implausibility of such an event occurring. On the other hand, the Chebyshev inequality, as given in Eq. (II.36), yields the following:

$$P\left[\left|W_n - \frac{1}{2}\right| \ge 0.01\right] \le \frac{0.25}{10^6 \times 10^{-4}} = 25 \times 10^{-4}$$

This result is twice as large as it should be for our calculation since we have effectively calculated both tails (namely, the probability that more than 510,000 or less than 490,000 heads would occur); thus the appropriate answer for the Chebyshev inequality would be that the probability of exceeding 510,000 heads is less than or equal to $12.5 \times 10^{-4}$. Note what a poor result this inequality gives compared to the central limit theorem approximation, which in this case is comparable to the Chernoff bound.
11.6. STOCHASTIC PROCESSES

It is often said that queueing theory is part of the theory of applied stochastic processes. As such, the main portion of this text is really the proper sequel to this section on stochastic processes; here we merely state some of the fundamental definitions and concepts.

We begin by considering a probability system $(S, \mathscr{E}, P)$, which consists of a sample space $S$, a set of events $\{A, B, \ldots\}$, and a probability measure $P$. In addition, we have already introduced the notion of a random variable $X(\omega)$. A stochastic process may be defined as follows: For each sample point $\omega \in S$ we assign a time function $X(t, \omega)$. This family of functions forms a stochastic process; alternatively, we may say that for each $t$ included in some appropriate parameter set, we choose a random variable $X(t, \omega)$. This is a collection of random variables depending upon $t$. Thus a stochastic process (or random function) is a function* $X(t)$ whose values are random variables. An example of a random process is the sequence of closing prices for a given security on the New York Stock Exchange; another example is the temperature at a given point on the earth as a function of time.
We are immediately confronted with the problem of completely specifying a random process $X(t)$. For this purpose we define, for each allowed $t$, a PDF, which we denote by $F_X(x, t)$ and which is given by

$$F_X(x, t) = P[X(t) \le x]$$

Further we define, for each set of $n$ allowable values $\{t_1, t_2, \ldots, t_n\}$, a joint PDF, given by

$$F_{X_1,\ldots,X_n}(x_1, x_2, \ldots, x_n;\, t_1, t_2, \ldots, t_n) \triangleq P[X(t_1) \le x_1,\, X(t_2) \le x_2,\, \ldots,\, X(t_n) \le x_n]$$

and we use the vector notation $F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t})$ to denote this function.
A stochastic process $X(t)$ is said to be stationary if all $F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t})$ are invariant to shifts in time; that is, for any given constant $\tau$ the following holds:

$$F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t} + \tau) = F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t})$$

where the notation $\mathbf{t} + \tau$ implies the vector $(t_1 + \tau,\, t_2 + \tau,\, \ldots,\, t_n + \tau)$. Of most interest in the theory of stochastic processes are these stationary random functions.

In order to completely specify a stochastic process, then, one must give $F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t})$ for all possible subsets of $\{x_i\}$, $\{t_i\}$, and all $n$. This is a monstrous task in general! Fortunately, for many of the interesting stochastic processes, it is possible to provide this specification in very simple terms.
Some other definitions are in order. The first is the definition of the pdf for a stochastic process, which is defined by

$$f_{\mathbf{X}}(\mathbf{x};\, \mathbf{t}) = \frac{\partial^n F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t})}{\partial x_1\, \partial x_2 \cdots \partial x_n}$$

Second, we often discuss the mean value of a stochastic process, given by

$$\overline{X(t)} = E[X(t)] = \int_{-\infty}^{\infty} x f_X(x;\, t)\, dx$$

* Usually we denote $X(t, \omega)$ by $X(t)$ for simplicity.
Next, we introduce the autocorrelation of $X(t)$, given by

$$R_{XX}(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X_1 X_2}(x_1, x_2;\, t_1, t_2)\, dx_1\, dx_2$$

A large theory of stochastic processes has been developed, known as second-order theory, in which these processes are classified and distinguished only on the basis of their mean $\overline{X(t)}$ and autocorrelation $R_{XX}(t_1, t_2)$. In the case of stationary random processes, we have

$$\overline{X(t)} = \overline{X} \qquad \text{(II.43)}$$

and

$$R_{XX}(t_1, t_2) = R_{XX}(t_2 - t_1) \qquad \text{(II.44)}$$

that is, $R_{XX}$ is a function only of the time difference $\tau = t_2 - t_1$. In the stationary case, then, random processes are characterized in the second-order theory only by a constant (their mean $\overline{X}$) and a one-dimensional function $R_{XX}(\tau)$. A random process is said to be wide-sense stationary if Eqs. (II.43) and (II.44) hold. Note that all stationary processes are wide-sense stationary, but not conversely.
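As a concrete (hypothetical) example of these definitions, the process $X(t) = \cos(t + \theta)$ with $\theta$ uniform on $(0, 2\pi)$ is wide-sense stationary, with mean $0$ and autocorrelation $R_{XX}(\tau) = \tfrac{1}{2}\cos\tau$; the sketch below estimates both:

```python
import math
import random

# X(t) = cos(t + theta), theta ~ uniform(0, 2*pi): mean 0 and
# autocorrelation R_XX(tau) = cos(tau)/2, depending only on the lag.
random.seed(4)
trials = 200_000
t1, t2 = 0.7, 1.9          # arbitrary sample times; tau = t2 - t1

acc_mean, acc_corr = 0.0, 0.0
for _ in range(trials):
    theta = random.uniform(0.0, 2.0 * math.pi)
    x1 = math.cos(t1 + theta)
    x2 = math.cos(t2 + theta)
    acc_mean += x1
    acc_corr += x1 * x2

mean_est = acc_mean / trials
r_est = acc_corr / trials
r_theory = math.cos(t2 - t1) / 2.0

assert abs(mean_est) < 0.01
assert abs(r_est - r_theory) < 0.01
```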
REFERENCES

DAVE 70  Davenport, W. B., Jr., Probability and Random Processes, McGraw-Hill (New York), 1970.
FELL 68  Feller, W., An Introduction to Probability Theory and Its Applications, 3rd Edition, Vol. I, Wiley (New York), 1968.
PAPO 65  Papoulis, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill (New York), 1965.
PARZ 60  Parzen, E., Modern Probability Theory and Its Applications, Wiley (New York), 1960.
Glossary of Notation*

(Only the notation used often in this book is included below.)

NOTATION†    DEFINITION    TYPICAL PAGE REFERENCE

A_n(t) = A(t)    P[t_n ≤ t] = P[t̃ ≤ t]    13
A_n*(s) = A*(s)    Laplace transform of a(t)    14
a_k    kth moment of a(t)    14
a_n(t) = a(t)    dA_n(t)/dt = dA(t)/dt    14
B_n(x) = B(x)    P[x_n ≤ x] = P[x̃ ≤ x]    14
B_n*(s) = B*(s)    Laplace transform of b(x)    14
b_k    kth moment of b(x)    14
b_n(x) = b(x)    dB_n(x)/dx = dB(x)/dx    14
C_b²    Coefficient of variation for service time    187
C_n    nth customer to enter the system    11
C_n(u) = C(u)    P[u_n ≤ u]    281
C_n*(s) = C*(s)    Laplace transform of c_n(u) = c(u)    285
c_n(u) = c(u)    dC_n(u)/du = dC(u)/du    281
D    Denotes deterministic distribution    viii
d_k    P[q̃ = k]    176
E[X] = X̄    Expectation of the random variable X    378
E_i    System state i    27
E_r    Denotes r-stage Erlangian distribution    124
FCFS    First-come-first-served    8
F_X(x)    P[X ≤ x]    370
* In those few cases where a symbol has more than one meaning, the context (or a specific statement) resolves the ambiguity.
† The use of the notation y_n → y is meant to indicate that y = lim y_n as n → ∞, whereas y(t) → y indicates that y = lim y(t) as t → ∞.
f_X(x)    dF_X(x)/dx    371
G    Denotes general distribution    viii
G(y)    Busy-period distribution    208
G*(s)    Laplace transform of g(y)    211
g_k    kth moment of busy-period duration    213
g(y)    dG(y)/dy    215
H_R    Denotes R-stage hyperexponential distribution    141
Im(s)    Imaginary part of the complex variable s    293
I_n → Ĩ    Duration of the (nth) idle period    206
I*(s)    Laplace transform of idle-period density    307
Î*(s)    Laplace transform of idle-time density in the dual system    310
K    Size of finite storage    viii
LCFS    Last-come-first-served    8
M    Denotes exponential distribution    viii
M    Size of finite population    viii
m    Number of servers    viii
N_q(t) → N_q    Number of customers in queue at time t    17
N(t) → N    Number of customers in system at time t    11
O(x)    lim_{x→0} O(x)/x = K < ∞    284
o(x)    lim_{x→0} o(x)/x = 0    48
P    Matrix of transition probabilities    31
P[A]    Probability of the event A    364
P[A | B]    Probability of the event A conditioned on the event B    365
PDF    Probability distribution function    369
pdf    Probability density function    371
P_k(t)    P[N(t) = k]    55
p_ij    P[next state is E_j | current state is E_i]    27
p_ij(s, t)    P[X(t) = j | X(s) = i]    46
p_k    P[k customers in system]    90
Q(z)    z-transform of P[q̃ = k]    192
q_ij(t)    Transition rates at time t    48
q_n → q̃    Number left behind by departure (of C_n)    177
q_n' → q̃'    Number found by arrival (of C_n)    242
Re(s)    Real part of the complex variable s    340
r_ij    P[next node is j | current node is i]    149
r_k    P[q̃' = k]    176
S_n(y) → S(y)    P[s_n ≤ y] → P[s̃ ≤ y]    14
S_n*(s) → S*(s)    Laplace transform of s_n(y) → s(y)    14
s    Laplace transform variable    339
s_n → s̃    Time in system (for C_n)    14
s_n(y) → s(y)    dS_n(y)/dy → dS(y)/dy    14
s̄_n → s̄ = T    Average time in system (for C_n)    14
s̄_n^k → s̄^k    kth moment of s_n(y)    14
T    Average time in system    14
t_n → t̃    Interarrival time (between C_{n−1} and C_n)    14
t̄_n = t̄ = 1/λ    Average interarrival time    14
t̄^k    kth moment of a(t)    14
U(t)    Unfinished work in system at time t    206
u_0(t)    Unit impulse function    341
u_n → ũ    u_n = x_n − t_{n+1};  ū = x̄ − t̄    277
V(z)    z-transform of P[ṽ = k]    184
v_n → ṽ    Number of arrivals during service time (of C_n)    177
W    Average time in queue    14
W_0    Average remaining service time    190
W_−(y)    Complementary waiting time    284
W_n(y) → W(y)    P[w_n ≤ y] → P[w̃ ≤ y]    14
W_n*(s) → W*(s)    Laplace transform of w_n(y)    14
w_n → w̃    Waiting time (for C_n) in queue    14
w_n(y) → w(y)    dW_n(y)/dy → dW(y)/dy    14
w̄_n → w̄ = W    Average waiting time (for C_n)    14
w̄_n^k → w̄^k    kth moment of w_n(y)    14
X(t)    State of stochastic process X(t) at time t    19
x_n → x̃    Service time (of C_n)    14
x̄^k    kth moment of b(x)    14
x̄_n = x̄ = 1/μ    Average service time    14
Y    Busy-period duration    206
z    z-transform variable    327
α(t)    Number of arrivals in (0, t)    15
γ_i    (External) input rate to node i    149
δ(t)    Number of departures in (0, t)    16
λ    Average arrival rate    14
λ_k    Birth (arrival) rate when N = k    53
μ    Average service rate    14
μ_k    Death (service) rate when N = k    54
π^(n) → π    Vector of state probabilities [π_k^(n)]    31
π_k^(n) → π_k    P[system state (at nth step) is E_k]    29
∏_{i=1}^{k} a_i    a_1 a_2 ⋯ a_k (product notation)    334
ρ    Utilization factor    18
σ    Root for G/M/m    249
σ_a²    Variance of interarrival time    305
σ_b²    Variance of service time    305
τ_n    Arrival time of C_n    12
Φ_+(s)    Laplace transform of w(y)    285
Φ_−(s)    Laplace transform of W_−(y)    285
≜    Equals by definition    11
(0, t)    The interval from 0 to t    15
X̄ = E[X]    Expectation of the random variable X    378
(y)^+    max[0, y]    277
\binom{n}{k}    Binomial coefficient = n!/[k!(n − k)!]    368
A/B/m/K/M    m-server queue with A(t) and B(x) identified by A and B, respectively, with storage capacity of size K and a customer population of size M (if either of the last two descriptors is missing, it is assumed to be infinite)    viii
F^(n)(a)    dⁿF(y)/dyⁿ evaluated at y = a    382
f^(k)(x)    f(x) ⊛ ⋯ ⊛ f(x), k-fold convolution    200
⊛    Convolution operator    376
f → g    Input f gives output g    322
A ⇔ B    Statement A implies statement B and conversely    68
f ⇔ F    f and F form a transform pair    328
Summary of Important Results

Following is a collection of the basic results (those marked in the text) in the form of a list of equations. To the right of each equation is the page number where it first appears in a meaningful way; this is to aid the reader in locating the descriptive text and theory relevant to that equation.

GENERAL SYSTEMS

ρ = λx̄  (G/G/1)    18
ρ ≜ λx̄/m  (G/G/m)    18
T = x̄ + W    18
N = λT  (Little's result)    17
N_q = λW    17
N_q = N − ρ    188
dP_k(t)/dt = flow rate into E_k − flow rate out of E_k    59
p_k = r_k  (for Poisson arrivals)    176
r_k = d_k  [N(t) makes unit changes]    176
MARKOV PROCESSES

For a summary of discrete-state Markov chains, see the table on pp. 402–403.

POISSON PROCESSES

P_k(t) = [(λt)^k / k!] e^{−λt},  k ≥ 0, t ≥ 0    60
N̄(t) = λt    62
σ²_{N(t)} = λt    62
BIRTH-DEATH SYSTEMS

dP_k(t)/dt = −(λ_k + μ_k)P_k(t) + λ_{k−1}P_{k−1}(t) + μ_{k+1}P_{k+1}(t),  k ≥ 1    57
dP_0(t)/dt = −λ_0 P_0(t) + μ_1 P_1(t),  k = 0    57
p_k = p_0 ∏_{i=0}^{k−1} (λ_i / μ_{i+1})  (equilibrium solution)    92
p_0 = 1 / [1 + Σ_{k=1}^{∞} ∏_{i=0}^{k−1} (λ_i / μ_{i+1})]    92

M/M/1
P_k(t) = e^{−(λ+μ)t} [ρ^{(k−i)/2} I_{k−i}(at) + ρ^{(k−i−1)/2} I_{k+i+1}(at) + (1 − ρ)ρ^k Σ_{j=k+i+2}^{∞} ρ^{−j/2} I_j(at)]    77
  (transient solution from initial state i; a ≜ 2μ√ρ, I_k the modified Bessel function of the first kind)
p_k = (1 − ρ)ρ^k    96
N̄ = ρ/(1 − ρ)    96
σ_N² = ρ/(1 − ρ)²    97
W = ρ/[μ(1 − ρ)]    191
P[k or more customers in system] = ρ^k    98
T = (1/μ)/(1 − ρ)    99
s(y) = μ(1 − ρ)e^{−μ(1−ρ)y},  y ≥ 0    202
S(y) = 1 − e^{−μ(1−ρ)y},  y ≥ 0    202
w(y) = (1 − ρ)u_0(y) + λ(1 − ρ)e^{−μ(1−ρ)y},  y ≥ 0    203
W(y) = 1 − ρe^{−μ(1−ρ)y},  y ≥ 0    203
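A minimal helper (not from the text) that packages the M/M/1 equilibrium results above, with Little's result N = λT serving as a consistency check:

```python
# Package the M/M/1 equilibrium results listed above; lam and mu are
# hypothetical arrival and service rates with rho = lam/mu < 1.
def mm1(lam, mu):
    rho = lam / mu
    assert rho < 1.0, "stable queue requires rho < 1"
    return {
        "rho": rho,
        "p0": 1.0 - rho,                 # P[empty system]
        "N": rho / (1.0 - rho),          # mean number in system
        "T": (1.0 / mu) / (1.0 - rho),   # mean time in system
        "W": rho / (mu * (1.0 - rho)),   # mean wait in queue
    }

m = mm1(lam=1.0, mu=2.0)

# Consistency checks: Little's result N = lam * T, and T = xbar + W.
assert abs(m["N"] - 1.0 * m["T"]) < 1e-12
assert abs(m["T"] - (0.5 + m["W"])) < 1e-12
```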
Summary of Discrete-State Markov Chains (pp. 402–403)

DISCRETE TIME, HOMOGENEOUS
One-step transition probability: p_ij ≜ P[X_{n+1} = j | X_n = i]
Matrix of one-step transition probabilities: P ≜ [p_ij]
Multiple-step transition probabilities: p_ij^(m) ≜ P[X_{n+m} = j | X_n = i]
Matrix of multiple-step transition probabilities: P^(m) ≜ [p_ij^(m)]
Chapman-Kolmogorov equation: p_ij^(n) = Σ_k p_ik^(m) p_kj^(n−m);  P^(n) = P^(m) P^(n−m)
Forward equation: P^(n) = P^(n−1) P
Backward equation: P^(n) = P P^(n−1)
Solution: P^(n) = P^n
State probability: π_j^(n) ≜ P[X_n = j];  π^(n) ≜ [π_j^(n)]
Forward equation: π^(n) = π^(n−1) P;  solution: π^(n) = π^(0) P^n
Equilibrium solution: π = πP
Transform relationship: [I − zP]^{−1} ⇔ P^n

DISCRETE TIME, NONHOMOGENEOUS
One-step: p_ij(n, n+1) ≜ P[X_{n+1} = j | X_n = i];  P(n) ≜ [p_ij(n, n+1)]
Multiple-step: p_ij(m, n) ≜ P[X_n = j | X_m = i];  H(m, n) ≜ [p_ij(m, n)]
Chapman-Kolmogorov: p_ij(m, n) = Σ_k p_ik(m, q) p_kj(q, n);  H(m, n) = H(m, q)H(q, n)
Forward equation: H(m, n) = H(m, n−1) P(n−1)
Backward equation: H(m, n) = P(m) H(m+1, n)
Solution: H(m, n) = P(m)P(m+1) ⋯ P(n−1)
State probabilities: π^(n) = π^(0) P(0)P(1) ⋯ P(n−1)

CONTINUOUS TIME, HOMOGENEOUS
One-step: p_ij(Δt) ≜ P[X(t + Δt) = j | X(t) = i];  P(Δt) ≜ [p_ij(Δt)]
Multiple-step: p_ij(t) ≜ P[X(s + t) = j | X(s) = i];  H(t) ≜ [p_ij(t)]
Chapman-Kolmogorov: p_ij(t) = Σ_k p_ik(t − s) p_kj(s);  H(t) = H(t − s)H(s)
Transition-rate matrix: Q = lim_{Δt→0} [P(Δt) − I]/Δt
Forward equation: dH(t)/dt = H(t)Q
Backward equation: dH(t)/dt = QH(t)
Solution: H(t) = e^{Qt}
State probability: π_j(t) ≜ P[X(t) = j];  π(t) ≜ [π_j(t)]
Forward equation: dπ(t)/dt = π(t)Q;  solution: π(t) = π(0)e^{Qt}
Equilibrium solution: πQ = 0
Transform relationship: [sI − Q]^{−1} ⇔ H(t)

CONTINUOUS TIME, NONHOMOGENEOUS
One-step: p_ij(t, t + Δt) ≜ P[X(t + Δt) = j | X(t) = i]
Multiple-step: p_ij(s, t) ≜ P[X(t) = j | X(s) = i];  H(s, t) ≜ [p_ij(s, t)]
Chapman-Kolmogorov: p_ij(s, t) = Σ_k p_ik(s, u) p_kj(u, t);  H(s, t) = H(s, u)H(u, t)
Transition-rate matrix: Q(t) = lim_{Δt→0} [P(t, t + Δt) − I]/Δt
Forward equation: ∂H(s, t)/∂t = H(s, t)Q(t)
Backward equation: ∂H(s, t)/∂s = −Q(s)H(s, t)
Solution: H(s, t) = exp[∫_s^t Q(u) du]
State probabilities: dπ(t)/dt = π(t)Q(t);  π(t) = π(0) exp[∫_0^t Q(u) du]
P[interdeparture time ≤ t] = 1 − e^{−λt}  (M/M/1 output)    148
g(y) = [1/(y√ρ)] e^{−(λ+μ)y} I_1[2y(λμ)^{1/2}]  (busy-period density)    215
f_n = (1/n) \binom{2n−2}{n−1} ρ^{n−1}(1 + ρ)^{1−2n}  (number served in busy period)    218

M/M/1/K
p_k = [(1 − λ/μ)/(1 − (λ/μ)^{K+1})](λ/μ)^k,  0 ≤ k ≤ K    104

M/M/1//M
p_k = [M!/(M − k)!](λ/μ)^k / [Σ_{i=0}^{M} (M!/(M − i)!)(λ/μ)^i]    107

P(z) = μ(1 − ρ)(1 − z) / {μ(1 − z) − λz[1 − G(z)]}  (M/M/1 bulk arrival)    136
p_k = (1 − 1/z_0)(1/z_0)^k,  k = 0, 1, 2, …  (M/M/1 bulk service; z_0 the relevant root, |z_0| > 1)    139

M/M/m
p_k = p_0 (mρ)^k/k!,  k ≤ m;  p_k = p_0 ρ^k m^m/m!,  k ≥ m    102
p_0 = [Σ_{k=0}^{m−1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 − ρ))]^{−1}    103
P[queueing] = [(mρ)^m/m!][1/(1 − ρ)] p_0  (Erlang's C formula)    103

M/M/m/m
p_k = [(λ/μ)^k/k!] / [Σ_{i=0}^{m} (λ/μ)^i/i!]    105
p_m = [(λ/μ)^m/m!] / [Σ_{i=0}^{m} (λ/μ)^i/i!]  (Erlang's loss formula)    106
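Erlang's loss formula is usually evaluated with a numerically stable recursion rather than the factorials above; the sketch below (an assumed implementation, not from the text) checks the standard recursion against the direct formula:

```python
import math

# Erlang's loss formula p_m for M/M/m/m, computed both directly and by
# the standard stable recursion:
#   B(0, a) = 1,  B(m, a) = a*B(m-1, a) / (m + a*B(m-1, a)),  a = lam/mu.
def erlang_b_direct(m, a):
    denom = sum(a**i / math.factorial(i) for i in range(m + 1))
    return (a**m / math.factorial(m)) / denom

def erlang_b_recursive(m, a):
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

a = 5.0           # offered load in Erlangs (hypothetical)
for m in range(1, 15):
    assert abs(erlang_b_direct(m, a) - erlang_b_recursive(m, a)) < 1e-12
```

The recursion avoids the large factorials and powers that overflow for big m, which is why it is preferred in practice.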
M/D/1
N̄ = ρ/(1 − ρ) − ρ²/[2(1 − ρ)]    188
W = ρx̄/[2(1 − ρ)]    191
G(y) = Σ_{n=1}^{⌊y/x̄⌋} e^{−nρ}(nρ)^{n−1}/n!  (busy-period distribution)    219
f_n = [(nρ)^{n−1}/n!] e^{−nρ}  (number served in busy period)    219

E_r  (r-stage Erlang distribution)
b(x) = rμ(rμx)^{r−1} e^{−rμx}/(r − 1)!,  x ≥ 0    124
σ_b = 1/[μ(r)^{1/2}]    124
p_j = (1 − ρ) Σ_{i=1}^{r} A_i (…),  j = 1, 2, …, r    129
p_k = { 1 − ρ,  k = 0;  ρ(z_0^r − 1)z_0^{−rk},  k > 0 }    133

H_R  (R-stage hyperexponential distribution)
b(x) = Σ_{i=1}^{R} α_i μ_i e^{−μ_i x},  x ≥ 0,  Σ_i α_i = 1    141

MARKOVIAN NETWORKS
λ_i = γ_i + Σ_{j=1}^{N} λ_j r_ji    149
p(k_1, k_2, …, k_N) = p_1(k_1) p_2(k_2) ⋯ p_N(k_N)  (open), where p_i(k_i) is the solution to the isolated M/M/m_i queue    150
p(k_1, k_2, …, k_N)  (closed)    152
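The traffic equations λ_i = γ_i + Σ_j λ_j r_ji can be solved by simple fixed-point iteration; the sketch below does this for a hypothetical three-node open network (the rates and routing matrix are invented for illustration):

```python
# Solve lambda_i = gamma_i + sum_j lambda_j * r[j][i] by fixed-point
# iteration for a hypothetical 3-node open network.
gamma = [1.0, 0.5, 0.0]                      # external input rates (assumed)
r = [[0.0, 0.5, 0.3],                        # r[j][i] = P[next node is i | at node j]
     [0.2, 0.0, 0.4],
     [0.0, 0.0, 0.0]]                        # node 2 routes everything outside

lam = gamma[:]                               # initial guess
for _ in range(200):
    lam = [gamma[i] + sum(lam[j] * r[j][i] for j in range(3)) for i in range(3)]

# Verify the fixed point: lambda_i = gamma_i + sum_j lambda_j * r[j][i].
for i in range(3):
    resid = gamma[i] + sum(lam[j] * r[j][i] for j in range(3)) - lam[i]
    assert abs(resid) < 1e-9
```

The iteration converges because the routing matrix is substochastic (some flow always leaves the network), so the map is a contraction.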
LIFE AND RESIDUAL LIFE

f_X̃(x) = x f(x)/m_1  (lifetime density of sampled interval)    171
f̂(y) = [1 − F(y)]/m_1  (residual-life density)    172
f̂*(s) = [1 − F*(s)]/(s m_1)  (residual-life transform)    172
r_n = m_{n+1}/[(n + 1)m_1]  (nth moment of residual life)    173
m_2/(2m_1)  (mean residual life)    173
r(x) = f(x)/[1 − F(x)]  (failure rate)    173
M/G/1

q_{n+1} = q_n − Δ(q_n) + v_{n+1}  (imbedded-chain recurrence)    176
v̄ = ρ    181
v̄² − v̄ = λ²x̄² = ρ²(1 + C_b²)    183
V(z) = B*(λ − λz)    184
q̄ = ρ + ρ²(1 + C_b²)/[2(1 − ρ)]  (P-K mean value formula)    187
T/x̄ = 1 + ρ(1 + C_b²)/[2(1 − ρ)]  (P-K mean value formula)    187
W/x̄ = ρ(1 + C_b²)/[2(1 − ρ)]  (P-K mean value formula)    191
W = W_0/(1 − ρ)  (P-K mean value formula)    190
W_0 = λx̄²/2    190
Q(z) = B*(λ − λz)(1 − ρ)(1 − z)/[B*(λ − λz) − z]  (P-K transform equation)    194
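The P-K mean-value formula W = ρx̄(1 + C_b²)/2(1 − ρ) is easy to exercise numerically; in the sketch below (an illustration, not from the text) the familiar comparison appears: deterministic service (C_b² = 0) gives half the mean wait of exponential service (C_b² = 1):

```python
# Evaluate W = rho * xbar * (1 + Cb2) / (2 * (1 - rho)) for two
# hypothetical service distributions at the same load.
def pk_wait(lam, xbar, cb2):
    rho = lam * xbar
    assert rho < 1.0
    return rho * xbar * (1.0 + cb2) / (2.0 * (1.0 - rho))

lam, xbar = 0.5, 1.0
w_exp = pk_wait(lam, xbar, cb2=1.0)   # exponential service (M/M/1), Cb^2 = 1
w_det = pk_wait(lam, xbar, cb2=0.0)   # deterministic service (M/D/1), Cb^2 = 0

assert abs(w_exp - 1.0) < 1e-12       # matches rho/(mu*(1-rho)) with mu = 1
assert abs(w_det - 0.5 * w_exp) < 1e-12
```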
W*(s) = s(1 − ρ)/[s − λ + λB*(s)]  (P-K transform equation)    200
S*(s) = B*(s) · s(1 − ρ)/[s − λ + λB*(s)]    199
P[Ĩ ≤ y] = 1 − e^{−λy},  y ≥ 0    208
G*(s) = B*(s + λ − λG*(s))    212
G(y) = ∫_0^y Σ_{n=1}^{∞} e^{−λx} [(λx)^{n−1}/n!] b_{(n)}(x) dx    226
g_1 = x̄/(1 − ρ)    213
g_2 = x̄²/(1 − ρ)³    214
σ_G² = [σ_b² + ρ(x̄)²]/(1 − ρ)³    214
g_3 = x̄³/(1 − ρ)⁴ + 3λ(x̄²)²/(1 − ρ)⁵    214
g_4 = x̄⁴/(1 − ρ)⁵ + 10λx̄²x̄³/(1 − ρ)⁶ + 15λ²(x̄²)³/(1 − ρ)⁷    214
F(z) = zB*[λ − λF(z)]    217
P[N_bp = n] = ∫_0^∞ [(λy)^{n−1}/n!] e^{−λy} b_{(n)}(y) dy    226
h_1 = 1/(1 − ρ)    217
h_2 = [ρ(1 − ρ) + λ²x̄²]/(1 − ρ)³ + 1/(1 − ρ)²    218
σ_h² = [ρ(1 − ρ) + λ²x̄²]/(1 − ρ)³    218
∂F(w, t)/∂t = ∂F(w, t)/∂w − λF(w, t) + λ∫_0^w B(w − x) d_x F(x, t)  (Takács integrodifferential equation)    227
F**(r, s) = [(r/s)e^{−sw_0} − e^{−rw_0}]/[λB*(s) − λ + r − s]    229
Q(z) = (1 − ρ)(1 − z)B*[λ − λG(z)]/{B*[λ − λG(z)] − z}  (bulk arrival)    235
M/G/∞

T = x̄    234
s(y) = b(y)    234
p_k = [(λx̄)^k/k!] e^{−λx̄}    234

G/M/1

r_k = (1 − σ)σ^k,  k = 0, 1, 2, …    251
σ = A*(μ − μσ)    251
W(y) = 1 − σe^{−μ(1−σ)y},  y ≥ 0    252
W = σ/[μ(1 − σ)]    252

G/M/m

σ = A*(mμ − mμσ)    242
P[queue size = n | arrival queues] = (1 − σ)σ^n    249
R_k = [Σ_{i=k}^{m−2} R_i p_ik + Σ_{i=m−1}^{∞} σ^{i+1−m} p_ik]/p_{k−1,k}    254
p_ij = 0  for j > i + 1    242
p_ij = ∫_0^∞ \binom{i+1}{j} e^{−jμt}(1 − e^{−μt})^{i+1−j} dA(t),  j ≤ i + 1 ≤ m    244
β_n = p_{i,i+1−n} = ∫_0^∞ [(mμt)^n/n!] e^{−mμt} dA(t),  0 ≤ n ≤ i + 1 − m,  m ≤ i    245
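The functional equation σ = A*(μ − μσ) for G/M/1 is typically solved by fixed-point iteration; the sketch below (illustrative only) uses Poisson arrivals, A*(s) = λ/(λ + s), for which the root in (0, 1) is known to be σ = ρ:

```python
# Solve sigma = A*(mu - mu*sigma) by fixed-point iteration for the
# special case of Poisson arrivals, where the root in (0,1) is rho = lam/mu.
lam, mu = 0.6, 1.0

def a_star(s):
    return lam / (lam + s)        # Laplace transform of the interarrival pdf

sigma = 0.5                       # any starting point in (0, 1)
for _ in range(200):
    sigma = a_star(mu - mu * sigma)

assert abs(sigma - lam / mu) < 1e-9
```

For general interarrival distributions the same iteration applies with the appropriate A*(s); it converges for ρ < 1 since the map is a contraction near the root.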
J = 1/(1 − σ) + Σ_{k=0}^{m−2} R_k    254
W = σ/[J mμ(1 − σ)²]    256
w(y | arrival queues) = (1 − σ)mμ e^{−mμ(1−σ)y},  y ≥ 0    250
W(y) = 1 − σe^{−mμ(1−σ)y}/[1 + (1 − σ)Σ_{k=0}^{m−2} R_k],  y ≥ 0    255

G/G/1

w_{n+1} = (w_n + u_n)^+    277
w̃ = sup_{n≥0} U_n  (U_n the n-step sum of the u_i, with U_0 = 0)    279
c(u) = a(−u) ⊛ b(u)    281
W(y) = { ∫_{−∞}^{y} W(y − u) dC(u),  y ≥ 0;  0,  y < 0 }  (Lindley's integral equation)    283
A*(−s)B*(s) − 1 = Ψ_+(s)/Ψ_−(s)  (spectrum factorization)    286
Φ_+(s) = [A*(−s)B*(s) − 1]Ψ_−(s)    290
Φ_+(s) = [lim_{s→0} Ψ_+(s)/s]/Ψ_+(s) = Ψ_−(0)(1 − ρ)t̄/Ψ_+(s)    290
W = ū²̄/(−2ū) − ȳ²̄/(2ȳ)    305
W = [σ_a² + σ_b² + t̄²(1 − ρ)²]/[2t̄(1 − ρ)] − ȳ²̄/(2ȳ)    306
W(y) = π[c(y) ⊛ w(y)]    301
W*(s) = a_0[1 − I*(−s)]/[1 − C*(s)]    307
W*(s) = (1 − a)/[1 − a Î*(s)]    310
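The defining recursion w_{n+1} = (w_n + u_n)^+ above can be simulated directly; as a sanity check (not from the text), the sketch below runs it for the M/M/1 special case, where the exact mean wait ρ/μ(1 − ρ) is known:

```python
import random

# Simulate the G/G/1 waiting-time recursion w_{n+1} = (w_n + u_n)^+
# with u_n = x_n - t_{n+1}, for an M/M/1 special case.
random.seed(5)
lam, mu = 0.5, 1.0
rho = lam / mu

w, total, count = 0.0, 0.0, 0
for _ in range(400_000):
    u = random.expovariate(mu) - random.expovariate(lam)   # x_n - t_{n+1}
    w = max(0.0, w + u)
    total += w
    count += 1

w_exact = rho / (mu * (1.0 - rho))   # = 1.0 for these rates
assert abs(total / count - w_exact) < 0.1
```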
Index

Abel transform, 321
Abscissa of absolute convergence, 340, 349
Absorbing state, 28
Age, 170
Algebra, real commutative, 301
Algebra for queues, 299–303
Alternating sum property, 330
Analytic continuation, 287
Analytic function, 328, 337
  isolated singular point, 337
Analyticity, 328
  common region of, 287
Aperiodic state, 28
Approximations, 319
ARPANET, 320
Arrival rate, 4
  average, 13
Arrival time, 12
Arrivals, 15
Autocorrelation, 395
Availability, 9
Average value, 378
Axiomatic theory of probability, 365
Axioms of probability theory, 364
Backward Chapman-Kolmogorov equation, 42, 47, 49
Balking, 9
Ballot theorem, classical, 224–225
  generalization, 225
Barycentric coordinates, 34
Bayes' theorem, 366
Birth-death process, 22, 25, 42, 53–78
  assumption, 54
  equilibrium solution, 90–94
  existence of, 93–94
  infinitesimal generator, 54
  linear, 82
  probability transition matrix, 43
  summary of results, 401
  transitions, 55–56
Birth rate, 53
Borel field, 365
Bottleneck, 152
Bound, 319
  Chernoff, 391
Bribing, 9
Bulk arrivals, 134–136, 162–163, 235, 270
Bulk service, 137–139, 163, 236
Burke's theorem, 149
Capacity, 4, 5, 15
Catastrophe process, 267–269
Cauchy inequality, 143
Cauchy residue theorem, 337, 352
Cauchy-Riemann, 328
Central limit theorem, 390
Chain, 20
Channels, 3
Chapman-Kolmogorov equation, 41, 47, 51
Characteristic equation, 356, 359
Characteristic function, 321, 381
Cheating, 9
Chernoff bound, 391
Closed queueing network, 150
Closed subset, 28
Coefficient of variation, 381
Combinatorial methods, 223–226
Commodity, 3
Complement, 364
Complete solution, 357, 360
Complex exponentials, 322–324
Complex s-plane, 291
Complex variable, 325
Compound distribution, 387
Computer center example, 6
Computer-communication networks, 320
Computer network, 7
Conditional pdf, 375
Conditional PDF, 375
Conditional probability, 365
Continuous-parameter process, 20
Continuous-state process, 20
Contour, 293
Convergence strip, 354
Convex function, 389
Convolution, density functions, 376
  notation, 376
  property, 329, 344–345
Cumulative distribution function, 370
Cut, 5
Cyclic queue, 113, 156–158
D/D/1, example, 309
Death process, 245
Death rate, 54
Decomposition, 119, 323, 327
Defections, 9
Delta function, Dirac, 341
  Kronecker, 325
Departures, 16, 174–176
D/E_r/1 queueing system, 314
Difference equations, 355–359
  standard solution, 355–357
  z-transform solution, 357–359
Differential-difference equation, 57, 361
Differential equations, 359–361
  Laplace transform solution, 360–361
  linear constant coefficient, 324
  standard solution, 359–360
Differential matrix, 38
Diffusion approximation, 319
Dirac delta function, 341
Discouraged arrivals, 99
Discrete-parameter process, 20
Discrete-state process, 20
Disjoint, 365
Domain, 369
Duality, 304, 309
Dual queue, 310–311
E_2/M/1, 259
  spectrum factorization, 297
Eigenfunctions, 322–324
Engset distribution, 109
Equilibrium equation, 91
Equilibrium probability, 30
Ergodic Markov chain, 30, 52
  process, 94
  state, 30
Erlang, 119, 286
  B formula, 106
  C formula, 103
  distribution, 72, 124
  loss formula, 106
E_r/M/1, 130–133, 405
E_r (r-stage Erlang distribution), 405
Event, 364
Exhaustive set of events, 365
Expectation, 13, 377–381
  fundamental theorem of, 379
Exponential distribution, 65–71
  coefficient of variation, 71
  Laplace transform, 70
  mean, 69
  memoryless property, 66–67
  variance, 70
Exponential function, 340
FCFS, 8
Fig flow example, 5
Final value theorem, 330, 346
Finite capacity, 4
Flow, 58
  conservation, 91–92
  rate, 59, 87, 91
  system, 3
  time, 12
Fluid approximation, 319
Forward Chapman-Kolmogorov equation, 42, 47, 49, 90
Foster's criteria, 30
Fourier transform, 321, 381
Function of a random variable, 375, 380
Gaussian distribution, 390
Generating function, 321, 327
  probabilistic interpretation, 262
Geometric distribution, 39
Geometric series, 328
Geometric transform, 327
G/G/1, 19, 275–312
  defining equation, 277
  mean wait, 306
  summary of results, 409
  waiting time transform, 307, 310
G/G/m, 11
Global-balance equations, 155
Glossary of Notation, 396–399
G/M/1, 251–253
  dual queue, 311
  mean wait, 252
  spectrum factorization, 292
  summary of results, 408
  waiting time distribution, 252
G/M/2, 256–259
  distribution of number of customers, 258
  distribution of waiting time, 258
G/M/m, 241–259
  conditional pdf for queueing time, 250
  conditional queue length distribution, 249
  functional equation, 249
  imbedded Markov chain, 241
  mean wait, 256
  summary of results, 408–409
  transition probabilities, 241–246
  waiting-time distribution, 255
Gremlin, 261
Group arrivals, see Bulk arrivals
Group service, see Bulk service
Heavy traffic approximation, 319
Hippie example, 26–27, 30–38
Homogeneous Markov chain, 27
Homogeneous solution, 355
H_R (R-stage hyperexponential distribution), 141, 405
Idle period, 206, 305, 311
  isolating effect, 281
Idle time, 304, 309
Imbedded Markov chain, 23, 167, 169, 241
  G/G/1, 276–279
  G/M/m, 241–246
  M/G/1, 174–177
Independence, 374
Independent process, 21
Independent random variables, product of functions, 386
  sums, 386
Inequality, Cauchy-Schwarz, 388
  Chebyshev, 388
  C_r, 389
  Hölder, 389
  Jensen, 389
  Markov, 388
  triangle, 389
Infinitesimal generator, 48
Initial value theorem, 330, 346
Input-output relationship, 321
Input variables, 12
Inspection technique, 58
Integral property, 346
Interarrival time, 8, 12
Interchangeable random variables, 278
Interdeparture time distribution, 148
Intermediate value theorem, 330
Intersection, 364
Irreducible Markov chain, 28
Jackson's theorem, 150
Jockeying, 9
Jordan's lemma, 353
Kronecker delta function, 325
Labeling algorithm, 5
Ladder height, 224
Ladder index, 223
  ascending, 309
  descending, 311
Laplace transform, 321, 338–355
  bilateral, 339, 348
  one-sided, 339
  probabilistic interpretation, 264, 267
  table of properties, 346
  table of transform pairs, 347
  two-sided, 339
Laplace transform inversion, inspection method, 340, 349
  inversion integral, 352–354
Laplace transform of the pdf, 382
Laurent expansion, 333
Law of large numbers, strong, 390
  weak, 390
LCFS, 8, 210
Life, summary of results, 406
Lifetime, 170
Limiting probability, 90
Lindley's integral equation, 282–283, 314
Linearity, 332, 345
Linear system, 321, 322, 324
Liouville's theorem, 287
Little's result, 17
  generalization, 240
Local-balance equations, 155–160
Loss system, 105
Marginal density function, 374
Marking of customers, 261–267
Markov chain, 21
  continuous-time, 22, 44–53
    definition, 44
  discrete-time, 22, 26–44
    definition, 27
  homogeneous, 27, 51
  nonhomogeneous, 39–42
  summary of results, 402–403
  theorems, 29
  transient behavior, 32, 35–36
Markovian networks, summary of results, 405
Markovian population processes, 155
Markov process, 21, 25, 402–403
Markov property, 22
Max-flow-min-cut theorem, 5
M/D/1, 188
  busy period, number served, 219
  pdf, 219
  mean wait, 191
  summary of results, 405
M/E_2/1, 234
Mean, 378
Mean recurrence time, 28
Measures, 301
  finite signed, 301
Memoryless property, 45
M/E_r/1, 126–130
  summary of results, 405
Merged Poisson stream, 79
Method of stages, 119–126
Method of supplementary variables