

UNIVERSITY OF MARIBOR
FACULTY OF LOGISTICS

Working paper

INTRODUCTION TO QUEUING MODELS

Dejan Dragan

Borut Jereb

Celje, June 2013


CONTENT

1 BASICS OF MODELING AND SIMULATION
1.1 Description of the system
1.2 Definition of the model and simulation
1.3 Relationship between the system and the model
1.4 Modeling and simulation
1.5 Modeling methodology
1.6 Model classification
1.7 Mathematical modeling
1.8 Theoretical and experimental modeling
1.9 Computer simulation methodology
1.10 Modeling and simulation iterative procedure
1.11 The classification of simulation
1.12 The methodology of system dynamics

2 DISCRETE EVENT SIMULATION
2.1 Introduction of time
2.2 Inter-arrival times distribution
2.2.1 Poisson distribution
2.3 Distribution of the number of arrivals
2.4 Introduction to queueing systems
2.4.1 Basic characteristics of queueing system
2.4.2 Queueing terminology and basic parameters
2.4.3 Types of queueing systems
2.5 Some probability basics of simulation
2.6 Random generators

3 THEORY OF STOCHASTIC PROCESSES
3.1 Definition of stochastic processes and basic properties
3.2 Markov processes
3.3 Markov chains
3.4 Poisson processes
3.4.1 Derivation of distribution of the number of events
3.4.2 Derivation of distribution of the times between events
3.5 Birth processes
3.6 Death processes
3.7 Birth-Death processes

4 INTRODUCTION TO BASIC QUEUEING MODELS
4.1 Single channel queueing models
4.1.1 Basic model M/M/1
4.1.2 Model M/M/1 with limited waiting space
4.1.3 Model M/M/1 (gradually "scared" or "balked" customers)
4.1.4 Model M/M/1 with limited number of customers
4.1.5 Model M/M/1 with additional server for longer queues
4.2 Multiple channel queueing models
4.2.1 Basic model M/M/r
4.2.2 Model M/M/r with limited waiting space
4.2.3 Model M/M/r with large number of servers
4.2.4 Model M/M/r (waiting space is not allowed)
4.2.5 Model M/M/r with limited number of customers
4.3 Conclusion about the queueing models

LITERATURE

1 BASICS OF MODELING AND SIMULATION

The building of models by means of observation, and the study of model properties, are main components of modern science. Models can have a more or less formal character ("hypotheses", "laws of nature", "paradigms", etc.) and try to unite observations into a pattern which has the same characteristics as the observed system [Ljung]. A system is a confined arrangement of mutually affecting entities (processes) that influence one another, where a process denotes the conversion and/or transport of material, energy and/or information [Isermann].

When we interact with a system, we need some concept of how its variables relate to each other. With a broad definition, we call such an assumed relationship among observed signals a model of the system. A model of a system is anything to which an "experiment" can be applied in order to answer questions about that system. This implies that a model can be used to answer questions about a system without doing experiments on the real system. Instead, we perform simplified "experiments" on the model, which in turn can be regarded as a kind of simplified system that reflects properties of the real system. In the simplest case, a model can just be a piece of information that is used to answer questions about the system [Fritzson].

Models, just like systems, are hierarchical in nature. We can cut out a piece of a model, which becomes a new model that is valid for a subset of the experiments for which the original model is valid. Naturally, there are different kinds of models depending on how the model is represented: mental models, verbal models, physical models, mathematical models, etc. Mental models help us answer questions about a person's behaviour, verbal models are expressed in words, physical models are physical objects that mimic some properties of the real system, and mathematical models are descriptions of a system in which the relationships between the variables of the system are expressed in mathematical form [Fritzson].

In many cases, there is a possibility of performing "experiments" on models instead of on the real systems corresponding to the models. This is actually one of the main uses of models, and it is denoted by the term simulation, from the Latin "simulare", which means to pretend. We usually define a simulation as follows: "A simulation is an experiment performed on a model".

Simulation allows repeated observation of the model, where one or many simulation runs can be performed. After that, an analysis can be carried out, which aids in drawing conclusions, in verification and validation of the research, and in making recommendations based on the various iterations or simulations of the model. In this way, modeling and simulation give us a strong problem-based discipline, which allows the repeated testing of a hypothesis [Sokolowski].

Simulation, as a way of solving important and complex problems, is a very old research discipline. For example, princes and rulers in the centuries before our era rehearsed the different possible strategies of their enemies, and the answers to them, while conducting military exercises. Also, in many important, complex and interconnected industries, simulation methods that were more intuitive than scientific were used in the broadest sense. With the advent of the first computers, simulation became a scientific discipline and a part of the system theory approach. Nowadays, the range of application of simulation models is wide and extends to all areas of science, especially the organizational, industrial, economic, transportation, technical and other important sciences.

One of the main purposes of simulation is the analysis of the responses of a system in the future, or increasing the understanding of the system under consideration. In this way, the costs of experiments on real systems and their potential hazards can be avoided. In addition, simulation is used when the treated problem is very complex and cannot be solved by other methods. Simulation models are not really bounded exclusively to the computer, but the use of computer simulation is nowadays so widespread that the term "computer simulation" has become synonymous with simulation as a problem-solving technique [Kljajič].

For the wider use of simulation in business environments, simple and efficient solutions are needed. They must be coordinated with real data and should not demand too high a level of specialized computer knowledge. This is especially important because business simulation data are dedicated mostly to managers, who must take quick decisions based on reliable and up-to-date information. The accelerating deployment of business-system simulation is a consequence of improved communication between the human and the machine, which has enabled more frequent use of simulation models by managers as well, especially where business models are concerned [Kljajič].

Nowadays, highly sophisticated graphical interfaces enable much easier model building than before. Thus, simulation, design of experiments, testing of different scenarios and analysis of system behavior are possible for almost everybody with a basic knowledge of informatics. Consequently, business simulation has actually been transferred from highly specialized laboratories to the desks of common users [Kljajič].

Different studies around the world and the existing literature [Dijk, Kljajič, Saltman, Tung] show that the combination of simulation and decision support systems enables decisions of higher quality. Simulation supplemented by animation, which shows the operations of the modeled system, can help users learn the specifics of the system's working mechanism very quickly. Moreover, the research in [Dijk] confirms that decision makers understand simulation results better when they are also represented with animation. Thus, combined simulation and animation motivate decision makers to search for new solutions, since the testing of scenarios that would be impossible in the real world is enabled in the simulated world [Kljajič].

Historically, scientists and engineers have concentrated on studying natural phenomena which are well modeled by the laws of gravity, classical and non-classical mechanics, physical chemistry, etc. In so doing, we typically deal with quantities such as the displacement, velocity, and acceleration of particles and rigid bodies, or the pressure, temperature, and flow rates of fluids and gases. These are "continuous variables" in the sense that they can take on any real value as time itself "continuously" evolves. Based on this fact, a vast body of mathematical tools and techniques has been developed to model, analyze, control and simulate such systems (continuous simulation). It is fair to say that the study of ordinary and partial differential equations currently provides the main infrastructure for system analysis and control. But in today's technological and increasingly computer-dependent world, we notice two things. Firstly, many of the quantities we deal with are "discrete", typically involving the counting of integer numbers (how many parts are in an inventory, how many planes are on a runway, how many telephone calls are active). Secondly, many of the processes depend on instantaneous "events" such as the pushing of a button, the hitting of a keyboard key, or a traffic light turning green. In fact, much of the technology we have invented is event-driven, like communication networks, manufacturing facilities, or the execution of computer programs [Cassandras].

In this case, we are talking about discrete and/or event-driven simulation, where the states of the system change at discrete time moments. These changes are called discrete events, and they happen either periodically at specific time moments or asynchronously, depending on conditions determined by the values of the state variables [Zupančič].

Discrete event simulation, among other functionalities, enables the determination of the efficiency of existing technology and the identification of bottlenecks, the determination of time dependence for the supply of products in inventory control, the design of models for operational planning of production, the analysis of the system in the future, etc. [Kljajič].

In the past, the complexity of model construction and simulation experiments limited the use of simulation in business environments. Nowadays, these problems are mostly overcome, especially where visual interactive modeling is possible. The latter enables us to develop the model and run simulations in an interactive environment, which simplifies the model design and improves the perception of the system performance [Kljajič].

1.1 Description of the system

A system is a group of elements or units, which are connected into a certain whole. Each element has certain properties (attributes) and activities. Figure 1 shows the basic classification of systems.

Figure 1: The basic classification of systems.

Naturally, there are more or less complex systems in the real world. For example, figure 2
shows a relatively complex system: a house with solar-heated warm tap water, together with
clouds and sunshine [Fritzson].

Figure 2: A house with solar-heated warm tap water

Figure 3 shows two less complex second-order systems: a mechanical system and an electrical system.

Figure 3: A mechanical system and an electrical system

In general, the system can be described by (c.f. figure 4):

• Input variables $X = \{X_i\},\ i = 1, 2, 3, \dots, m$

• States of the system $Z = \{Z_i\},\ i = 1, 2, 3, \dots, n$

• Outputs from the system $Y = \{Y_i\},\ i = 1, 2, 3, \dots, l$

Figure 4: System

Table 1 shows examples of different systems, their elements, properties, and activities.

Table 1: Examples of different systems, their elements, properties, and activities

The state of the system is defined as the value of the state variables at a certain time. It can be described by the elements of the system, their properties and activities. A process is a change of the state of the system, induced by the influence of input variables or internal events in the system [Kljajič].

The behavior of the system can be defined as a response (reaction) of the system to input signals (stimuli). Figure 5 shows the response $y(t)$ of the system to a step input signal $x(t)$ [Zupančič].

Figure 5: The response $y(t)$ of the system to a step input signal $x(t)$

1.2 Definition of the model and simulation

A model of a system is a simplified representation (concept) of the real system. System simulation is a methodology for problem solving by means of experimentation on a computer model, where the main purpose is to analyze the operation of the whole system, or the operation of particular parts of the system, under certain conditions [Kljajič].

Simulation is a dynamic visualization of the system behavior for the following purposes
[Kljajič]:
1. The description of the system or its parts,
2. The explanation of the system behavior in the past,
3. The prediction of the system behavior in the future, and
4. The understanding of the system principles.

When treating the computer simulation, the modeling procedure consists of the following
steps [Kljajič]:
• definition of the problem,
• determination of objectives,
• a draft of study,
• creation of the mathematical model,
• creation of the computer program,
• validation of the model,
• preparation of the experiment (simulation scenarios),
• simulation and analysis of the results.

1.3 Relationship between the system and the model

Relationship between the system and the model can be defined in the following way (c.f.
figure 6) [Kljajič]:

1. The system is deterministic and the model is deterministic. Examples of this are models of simple mechanical systems, for instance the second-order differential equation:

$$\ddot{x} + B \cdot \dot{x} + C \cdot x = U(t) \qquad (1.1)$$

2. The system is deterministic and the model is stochastic. An example of this is the methodology of simplification of complicated functions by means of the Monte Carlo method (see the sketch after this list).

3. The system is stochastic and the model is deterministic. Examples of this are congruential generators of random numbers (also illustrated in the sketch below).

4. The system is stochastic and the model is stochastic. Examples of this are complex organizational systems, where solving by means of system simulation is needed.
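To make cases 2 and 3 concrete, the following minimal Python sketch (an illustration added here, not part of the original text; the generator constants are assumed, arbitrarily chosen values) combines both: a linear congruential generator, a deterministic recurrence that models a stochastic source (case 3), drives a Monte Carlo estimate of a deterministic quantity, the integral of x^2 over [0, 1] (case 2).

    # Illustrative sketch only; the LCG constants are assumptions, not prescribed.
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Linear congruential generator: a deterministic model of randomness."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m                    # pseudo-random number in [0, 1)

    # Monte Carlo: a stochastic model of the deterministic integral of x^2 on [0, 1].
    gen = lcg(seed=12345)
    n = 100_000
    estimate = sum(next(gen) ** 2 for _ in range(n)) / n
    print(estimate)                        # close to the exact value 1/3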

Figure 6: Relationship between the system and the model

1.4 Modeling and simulation

Modeling represents the relation between the simulated system and the model, while simulation represents the relation between the model and the computer process. Within this philosophy, the following can be enunciated [Kljajič]:

Object X simulates object Y only if:

a) X and Y are both systems,
b) Y represents the simulated system (the system),
c) X represents a simplification of the simulated system (the model),
d) the validity of X with respect to Y is not necessarily complete.

The following can also be stated: simulation is the process of generating model behavior. In other words, it represents experimenting on the model [Kljajič].

1.5 Modeling methodology


When the problem that is the subject of study is precisely defined, the modeling procedure can begin. Within this framework, all important variables and their interconnections must first be defined. We focus only on relevant data, which are dominant for the chosen aspect of the system study. At this stage, we cannot define specific rules for how to approach this matter. The only exception are natural systems, where the conservation and continuity laws are standardized. The basic procedure of modeling is shown in figure 7 [Kljajič].

Figure 7: The basic procedure of modeling

The researcher can make certain simplifications with respect to the objectives and his knowledge of the system. He can determine the structure of the system, collect all the data and then build a suitable model within the framework of existing theory. In the sequel, he can study the properties of the real system by use of the model, or try to choose the most appropriate strategy for influencing the real system. A view of the modeling and simulation procedure from a certain angle is shown in figure 8 [Kljajič].

Figure 8: The modeling and simulation procedure

With respect to the way of description and the sequence of formation, the following classification of models can be introduced [Kljajič]:
• Verbal models, where the principles of the observed system are described in natural language.
• Physical models, which are usually miniature images of the observed system. They can be very useful for research where experimenting with the original system would be expensive or even dangerous.
• Mathematical or formal models, which are the most precise descriptions of a certain system.

1.6 Model classification

There are many classifications of models. One possible classification was introduced by [Forrester] and is shown in figure 9.

Figure 9: Classification of models by [Forrester]

The choice of model depends on the system which is taken into consideration. From figure 9 it is evident that models can be divided into physical and abstract models as possible ways of studying systems. Naturally, every chosen model depends on the theory which was used to construct it. With respect to the abstraction of the used theory, we can talk about exact or verbal theories, depending on the phenomenon which we want to interpret. Physical models are usually simplified and reduced real systems, which precisely define the behavior of the real system, especially its more important properties, in a certain working environment and under specific conditions. Static physical models are, for instance, illustrations of urban or architectural solutions in the form of scale models, which enable visual imagination of a certain spatial form. Dynamical physical models are, for example, aerodynamic tunnels for the investigation of aircraft properties, while hydrodynamic channels are used for the investigation of the hydrodynamic properties of ships. The good properties of dynamical physical models are their clearness and transparency. But they also have bad properties, since they are usually too big and inflexible, and often do not reflect the causal dependencies between certain phenomena and variables.

1.7 Mathematical modeling

A mathematical model is an abstract representation of a certain system and is useful for the investigation of system properties. It enables some conclusions about the characteristics and behavior of the system.

Mathematical models are more or less homomorphic to the real system. Whether the achieved results and conclusions are reliable and significant enough depends on the level of model suitability. It must be noted here that practice and the developer's experience are crucial in attempts to design an adequate model which sufficiently reflects the properties of the system.

Mathematical models can be classified into very different categories, like graphs, tables, logical symbols, etc., which represent a certain state of the system and its behavior. Due to their precision in expressive power and the possibility of analyzing the future behavior of systems in quantitative form, they are of great interest for system theory. With their help, the behavior of systems can be analyzed, and decision making or system control can be applied. But we must be aware that it is not always possible to find an appropriate mathematical model.

1.8 Theoretical and experimental modeling

For the derivation of mathematical models of dynamic systems, one typically discriminates
between theoretical and experimental modeling [Isermann]. For the theoretical analysis,
also termed theoretical modeling, the model is obtained by applying methods from calculus
to equations as e.g. derived from physics. One typically has to apply simplifying assumptions
concerning the system, as only this will make the mathematical treatment feasible in most
cases. In general, the following types of equations are combined to build the model
[Isermann]:
1. Balance equations: Balance of mass, energy, momentum. For distributed parameter systems, one typically considers infinitesimally small elements; for lumped parameter systems, a larger (confined) element is considered.
2. Physical or chemical equations of state: These are the so-called constitutive equations
and describe reversible events, such as e.g. inductance or the second Newtonian postulate.
3. Phenomenological equations: Describing irreversible events, such as friction and heat
transfer. An entropy balance can be set up if multiple irreversible processes are present.
4. Interconnection equations according to e.g. Kirchhoff’s node and mesh equations, torque
balance, etc.

By applying these equations, one obtains a set of ordinary or partial differential equations, which finally leads to a theoretical model with a certain structure and defined parameters if all equations can be solved explicitly. In many cases, the model is too complex or too complicated, so that it needs to be simplified to be suitable for the subsequent application. Even if this is not possible, the individual model equations in many cases still give us important hints concerning the model structure and thus can still be useful [Isermann].

In the case of an experimental analysis, which is also termed identification, a mathematical model is derived from measurements [Isermann]. Here, one typically has to rely on certain a priori assumptions, which can either stem from theoretical analysis or from previous (initial) experiments. Measurements are carried out, and the input as well as the output signals are subjected to some identification method in order to find a mathematical model that describes the relation between the input and the output [Isermann].

The theoretically and the experimentally derived models can also be compared if both approaches can be applied. If the models do not match, then the character and the size of the deviation give hints as to which steps of the theoretical or the experimental modeling have to be corrected [Isermann].

The system analysis can typically be neither completely theoretical nor completely experimental. To benefit from the advantages of both approaches, one rarely uses only theoretical modeling (leading to so-called white-box models) or only experimental modeling (leading to so-called black-box models), but rather a mixture of both, leading to what are called gray-box models (c.f. figure 10) [Isermann].

Despite the fact that theoretical analysis can in principle deliver more information about the system, provided that the internal behavior is known and can be described mathematically, experimental analysis has received ever-increasing attention over the past 50 years. The main reasons are the following [Isermann]:
• Theoretical analysis can become quite complex even for simple systems.
• Mostly, model coefficients derived from the theoretical considerations are not precise
enough.
• Not all actions taking place inside the system are known.
• The actions taking place cannot be described mathematically with the required accuracy.
• Some systems are very complex, making the theoretical analysis too time-consuming.
• Identified models can be obtained in shorter time with less effort compared to theoretical
modeling.

The experimental analysis allows the development of mathematical models by measurement of the input and output of systems of arbitrary composition [Isermann]. One major advantage is the fact that the same experimental analysis methods can be applied to diverse and arbitrarily complex systems. By measuring the input and output only, one does however only obtain models governing the input-output behavior of the system, i.e. the models will in general not describe the precise internal structure of the system. These input-output models are approximations, but are still sufficient for many areas of application [Isermann].

Figure 10: Different kinds of mathematical models ranging from white box models to
black box models [Isermann]

1.9 Computer simulation methodology

As mentioned in the previous chapter, the analytical solution of the differential equations which describe a certain dynamical system can be found only for the simplest and most idealized systems. In the case of more complicated systems of differential equations, the use of numerical methods is usually the only possible way to find their solutions. Within this framework, computer simulation is definitely the most popular approach. In the last two decades, many problem-oriented simulation languages have been developed. Naturally, the corresponding modeling demands very good knowledge of the problem which we want to investigate by means of computer simulation. Figure 11 shows the basic modeling approach for the purpose of computer simulation.

Figure 11: The basic modeling approach for the purpose of computer simulation.

The basic modeling approach for the purpose of computer simulation consists of the following
stages [Kljajič]:

a) Definition of the problem: Define the level and the objective of the modeling, the volume
of the treated system, etc.
b) Define the variables, the feedback loops, and interactions between variables and the parts
of the system.
c) Analyze the problem in the wider frame, which establishes the connection between the
observed system and concrete solutions from the more general point of view.
d) Construct mathematical details of the system with corresponding equations, suitable for
the chosen simulation language.
e) Repeat all the stages until a satisfactory solution is found.

1.10 Modeling and simulation iterative procedure

In this chapter, let us describe the modeling and simulation procedure more precisely. As mentioned before, every real object is studied through observation of available data and through the experiments which can be performed on it. Mathematical (conceptual) models are then constructed on the basis of data analysis. As we know, they represent an imitation of the real object and behave similarly for the purposes which they serve. When building the conceptual mathematical model, the following issues must also be taken into consideration [Zupančič]:

• The purpose of the modeling must be clearly defined,
• The constraints and the limitations of the model must also be defined,
• The attributes of the object which will be included in the model must be chosen, while insignificant details are neglected,
• The assumptions about certain real principles must be idealized,
• The structure and parameters of the model, which represent the connections between particular attributes in the system (for example differential or difference equations), must be defined.

When the conceptual model is constructed, as much information as possible about its behavior must be collected. We know that only in simple cases can analytical solutions be derived. In general, this is not possible, and a simulation model, based on the conceptual model, must be constructed. The data about the model behavior can be collected by deduction (analytical treatment) from the conceptual model or by experimenting with the simulation model. Afterwards, validation of the model must be applied, where the fit between real data and simulated data is analyzed. Naturally, verification of the simulation model must also be executed, where it is tested whether the simulation model reflects the properties of the conceptual model in the proper way (the computer simulation program is without any faults).

The analysis of the system, the construction of the model, the experimenting on it, and its verification and validation must usually be done repeatedly, until the demanded results are achieved. Thus, [Neelemkavil] defines simulation as a slow, iterative and experimentally oriented technique (c.f. figure 12).

Figure 12: Iterative procedure of modeling and simulation [Neelemkavil]

1.11 The classification of simulation

Simulation as a methodology for the analysis and design of systems can be classified into continuous, discrete and combined simulation [Zupančič], if the classification is related to the type of the model used. With respect to the tool or technique (type of computer) by which it is executed, simulation can be classified into analog, digital, or hybrid simulation [Zupančič]. Continuous simulation can be analog or digital, discrete simulation is always digital, while combined simulation can be digital or hybrid. Figure 13 shows the classification of simulation [Zupančič].

The speed of simulation execution with respect to real system time determines the way the model can be simulated:
• slower than in real system time,
• in real system time,
• faster than in real system time.

Figure 13: The classification of simulation [Zupančič]

Simulation which is not executed in real system time is the most common and can be performed on general-purpose computers. Whether the execution is faster or slower than real system time depends on the time constants of the real system and on how fast the simulation tool can perform the simulation. If the simulation is done in real system time, the computer must usually be connected to the real system. Efficient simulation in real system time is possible only if we have specific software and hardware (modern simulation workstations, analog-hybrid computers) [Zupančič].

Continuous simulation makes it possible to simulate systems which can be described by linear or nonlinear ordinary (ODE) or partial (PDE) differential equations with constant or varying coefficients. The condition for this kind of simulation is that the state variables and their derivatives are continuous throughout the entire simulation run, where the independent variable is usually time.

In discrete simulation, the states of the system can change only at discrete time instants. These changes are called discrete events, and they happen either periodically at specific time moments (usually in the theory of automatic control [Zupančič]) or asynchronously, depending on conditions determined by the values of state variables [Zupančič]. A classical example of asynchronous simulation is a post office with one server and a waiting line (queue). The customers arrive randomly into the system and are served according to the FIFO (first in, first out) discipline. In this case, the computer can simulate the arrivals of the customers into the system and the working mechanism inside the queueing system. In this way, very complex systems can be treated and simulated, for which analytical solutions usually could not be found at all.
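As an illustration of this mechanism, the post office example can be coded as a small event-driven program. The Python sketch below is a minimal illustration under assumed parameters (Poisson arrivals, exponential service times, one server, FIFO discipline) and is not the authors' own code; it advances the simulation clock from event to event rather than in fixed steps.

    import random

    def simulate_post_office(lam, mu, num_customers, seed=1):
        """Event-driven sketch of a single-server FIFO queue (post office)."""
        random.seed(seed)
        t_arrival = 0.0        # arrival time of the current customer
        t_free = 0.0           # time at which the server next becomes free
        total_wait = 0.0
        for _ in range(num_customers):
            t_arrival += random.expovariate(lam)     # next arrival event
            start = max(t_arrival, t_free)           # wait if the server is busy
            total_wait += start - t_arrival
            t_free = start + random.expovariate(mu)  # departure event
        return total_wait / num_customers            # mean waiting time in queue

    # Assumed rates: lam = 4 arrivals/h, mu = 5 services/h. Standard M/M/1
    # theory (cf. section 4.1.1) predicts E(Wq) = lam/(mu*(mu - lam)) = 0.8 h.
    print(simulate_post_office(4.0, 5.0, 200_000))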

According to [Cellier], combined or hybrid simulation is simulation which can be described on the whole observation interval by means of differential equations, where at least one state variable or its derivative is not a continuous quantity. By this definition, we can see that in reality almost every simulation of a real system is actually a combined continuous-discrete simulation.

1.12 The methodology of system dynamics

There are several equivalent ways of representing a system suitably for computer simulation. In the simulation of business systems, the so-called methodology of system dynamics, proposed by [Forrester], has gained the most prominent position. Within this methodology, the so-called block diagrams for discrete event simulation have also found their place.

The methodology of system dynamics is not only the generation of the equations which describe the dynamics of business processes; it is a whole methodology for solving dynamical problems (c.f. figure 14) [Forrester].

Figure 14: The concept of problem solving in the methodology of system dynamics [Forrester]

As can be noticed from figure 14, the arrows show the step-by-step solving procedure, where the next step can iteratively influence the previous step, which means that the corresponding steps are mutually dependent.

The methodology of system dynamics can be very useful for the development of decision support systems in business environments, where the following can be achieved [Kljajič]:

• Learning about the behaviour of the integral simulation system, which can support business decisions, strategic planning and the analysis of organizational systems,
• Improvement of the process of planning and decision making,
• Acquisition of new knowledge about the behaviour and management of complex systems, and
• Education of professional personnel for the planning and governance of the company.

2 DISCRETE EVENT SIMULATION

The focus of discrete event simulation is on events which can influence the system. These events can [Kljajič]:
• cause a change of the value of a certain system variable,
• trigger or disconnect a system variable,
• activate or deactivate a certain process.

Discrete events can be treated from two points of view [Kljajič]:
• In the case of element orientation (particle orientation), the system elements represent the starting point for the simulation analysis,
• In the case of event orientation, the system events represent the starting point for the simulation analysis.

2.1 Introduction of time

In discrete event simulation, time is represented by an internal simulation clock. The relationship between the simulation time and the real time depends on the nature of the observed system, where the generation of the arrival times of elements (transactions) depends on the system conditions.

The basic terms, when we study the ways of waiting line formations, are [Kljajič]:
• The sequence of arrivals of elements (transactions) (arrival patterns),
• The processing of system elements (transactions) (service process),
• The ways of waiting line formations (queueing disciplines).

The processing (server service) of the system elements (transactions) can be described by the service time and the capacity of processing (capacity of the server, service capacity).

The processing time (time of service) is the time needed for the processing of a system element (transaction, dynamic entity). The capacity of processing (capacity of server service) represents the number of system elements (transactions) which can be processed simultaneously.

When modeling the system, the probability distributions of the times between two consecutive element arrivals (inter-arrival times) and of the service processing time must be given.

There are several possible waiting line disciplines, like FIFO, LIFO, random, etc., which will be defined more precisely in the sequel.

2.2 Inter-arrival times distribution

The arrivals of elements into the system are usually described by inter-arrival times. In real systems, the number of different inter-arrival time distributions is practically unlimited. Theoretical distributions are used to describe the real ones; naturally, they are only more or less accurate approximations of the real distributions. The most frequent theoretical distributions which can be used for the purpose of simulation are [Kljajič] (a short sampling sketch follows the list):

• uniform distribution,
• exponential distribution,
• normal distribution, and
• Erlang distribution.
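For simulation purposes, samples from all four families are easy to generate. The sketch below is an illustration (assuming NumPy; the parameter values are arbitrary); note that an Erlang distribution with shape k is a gamma distribution with an integer shape parameter, and that a normal distribution must in practice be truncated at zero when it models times.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 10_000                                          # number of samples

    t_uniform = rng.uniform(0.5, 1.5, size=n)           # uniform distribution
    t_exp     = rng.exponential(scale=1.0, size=n)      # exponential, mean 1/lambda = 1
    t_normal  = rng.normal(loc=1.0, scale=0.2, size=n)  # normal (mean 1, std 0.2)
    t_erlang  = rng.gamma(shape=3, scale=1.0/3, size=n) # Erlang: gamma, integer shape

    print(t_exp.mean())                                 # close to 1.0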

For the description of the arrivals of elements (transactions, customers) into the system, the following parameters are usually used [Winston]:

• $T_a$ ... the time interval between two consecutive arrivals,

• $E(T_a) = \frac{1}{\lambda} = \int_0^{\infty} t \cdot f(t)\,dt$ ... the mean or average inter-arrival time,

• $\lambda = \frac{1}{E(T_a)}$ ... the arrival rate (units of arrivals per time unit).

Naturally, $T_a$ is a random variable, and $f(t)$ is supposed to be the probability density function of the random variable $T_a$.

2.2.1 Poisson distribution

Suppose we are dealing with the random variable $T_a$, which represents the time interval $(0, t]$ between the time origin and the first following event (arrival). In the case of a Poisson distribution of the number of arrivals, it can be shown that the probability for $T_a$ to take a value between $t$ and $t + dt$ follows the exponential law [Dragan, Winston, Taha, Hillier]:

$$f_{T_a}(t) = f(t) = \lambda \cdot e^{-\lambda t}, \quad t > 0 \qquad (2.1)$$

where $f(t)$ is the exponential probability density function of the random variable $T_a$, and $\lambda$ is a positive constant.
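A direct consequence of (2.1), useful later for random generators (section 2.6), is the inverse-transform recipe for sampling exponential inter-arrival times; this short textbook derivation is added here for clarity:

$$F_{T_a}(t) = \int_0^{t} \lambda e^{-\lambda s}\,ds = 1 - e^{-\lambda t}, \qquad T_a = F_{T_a}^{-1}(U) = -\frac{\ln(1-U)}{\lambda}, \quad U \sim \mathrm{Uniform}(0,1),$$

so that $-\ln(U)/\lambda$ works equally well, since $1-U$ is also uniform on $(0,1)$.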

If the inter-arrival times have an exponential distribution, then it can be shown that they also have the so-called no-memory property [Winston]. This finding is very important, because it implies that if we want to know the probability distribution of the time until the next arrival, then it does not matter how long it has been since the last arrival [Winston].
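The no-memory property follows from (2.1) in one line; this short derivation is added for completeness. Using $P(T_a > t) = e^{-\lambda t}$:

$$P(T_a > s + t \mid T_a > s) = \frac{P(T_a > s + t)}{P(T_a > s)} = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(T_a > t).$$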

Figure 15 shows four examples of the exponential distribution for arrival rates $\lambda = 2, 1, 0.5, 0.25$.

Figure 15: Four examples of the exponential distribution for arrival rates $\lambda = 2, 1, 0.5, 0.25$

Figure 15 conveys information about the chance that the new event has not yet happened by time t. From figure 15 we can conclude that very long inter-arrival times are very unlikely [Winston, Dragan]. The longer the inter-arrival time, the smaller the chance that the new event has not yet happened.

2.3 Distribution of the number of arrivals

It can be shown that there is a strong connection between the Poisson distribution of the number of arrivals and the exponential distribution of the inter-arrival times. If the inter-arrival times are exponential, the probability distribution of the number of arrivals occurring in any time interval of length t is given by the following important theorem [Winston]:

Inter-arrival times are exponential with parameter $\lambda$ if and only if the number of arrivals that occur in an interval of length $t$ follows a Poisson distribution with parameter $\lambda \cdot t$.

In general, a discrete random variable $N$ has a Poisson distribution with parameter $\lambda$ if its probability distribution function has the form [Winston]:

$$P[N = n] = \frac{\lambda^n \cdot e^{-\lambda}}{n!}, \quad n = 0, 1, 2, \dots \qquad (2.2)$$

If we define $N(t)$ to be the number of arrivals that occur during any time interval of length $t$, from the previous theorem we can apply the following expression [Winston]:

$$P[N(t) = n] = \frac{(\lambda t)^n \cdot e^{-\lambda t}}{n!}, \quad n = 0, 1, 2, \dots \qquad (2.3)$$

Since $N(t)$ has a Poisson distribution with parameter $\lambda \cdot t$, it can be shown that the expectation and variance are [Winston]:

$$E[N(t)] = \mathrm{VAR}[N(t)] = \lambda \cdot t \qquad (2.4)$$

It follows that an average of $\lambda \cdot t$ arrivals occurs during a time interval of length $t$, so $\lambda$ may be thought of as the average number of arrivals per time unit, or the arrival rate [Winston].
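The theorem and (2.4) can also be checked numerically. The following sketch (illustrative only, with assumed parameters lam = 2 and t = 3) generates exponential inter-arrival times, counts the arrivals that fall in a window of length t, and compares the empirical mean and variance of the count with $\lambda \cdot t$:

    import random

    random.seed(42)
    lam, t, runs = 2.0, 3.0, 50_000           # assumed parameters

    counts = []
    for _ in range(runs):
        clock, n = 0.0, 0
        while True:
            clock += random.expovariate(lam)  # exponential inter-arrival time
            if clock > t:
                break
            n += 1
        counts.append(n)

    mean = sum(counts) / runs
    var = sum((c - mean) ** 2 for c in counts) / runs
    print(mean, var)                          # both close to lam * t = 6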

Figure 16 shows six examples of the Poisson distribution for arrival rates $\lambda = 0.25, 1, 2, 3, 5, 12$.

Figure 16: Six examples of the Poisson distribution for arrival rates $\lambda = 0.25, 1, 2, 3, 5, 12$

2.4 Introduction to queueing systems

Queues (waiting lines) are a part of everyday life. We all wait in queues to buy a movie ticket,
make a bank deposit, pay for groceries, mail a package, obtain food in a cafeteria, start a ride
in an amusement park, etc. We have become accustomed to considerable amounts of waiting,
but still get annoyed by unusually long waits [Hillier].

However, having to wait is not just a petty personal annoyance. The amount of time that a
nation’s populace wastes by waiting in queues is a major factor in both the quality of life there
and the efficiency of the nation’s economy. For example, before its dissolution, the U.S.S.R.
was notorious for the tremendously long queues that its citizens frequently had to endure just
to purchase basic necessities. Even in the United States today, it has been estimated that
Americans spend 37,000,000,000 hours per year waiting in queues. If this time could be spent
productively instead, it would amount to nearly 20 million person-years of useful work each
year! [Hillier]

Even this staggering figure does not tell the whole story of the impact of causing excessive
waiting. Great inefficiencies also occur because of other kinds of waiting than people standing
in line. For example, making machines wait to be repaired may result in lost production.
Vehicles (including ships and trucks) that need to wait to be unloaded may delay subsequent
shipments. Airplanes waiting to take off or land may disrupt later travel schedules. Delays in
telecommunication transmissions due to saturated lines may cause data glitches. Causing
manufacturing jobs to wait to be performed may disrupt subsequent production. Delaying
service jobs beyond their due dates may result in lost future business [Hillier].

Let us list some of the most typical real cases of queueing systems [Dragan]:

• Waiting for service in restaurants, banks, shops, or at the doctor's office,
• Waiting for transfer by bus, train, or plane,
• Waiting for a ticket to the cinema, theater or a game,
• Waiting of cars at a gas station,
• Waiting of aircraft for takeoff or landing,
• Waiting of machinery to be repaired,
• Waiting of material in the warehouse for sale,
• Waiting for a telephone call to establish a connection, and so on.

The limited number of servers is usually the reason for waiting lines, which form when not all customers can be served simultaneously. But in some cases, waiting lines are also formed as a consequence of time limitations, when the service is possible only in certain time intervals. For example, passing through an intersection with a traffic light is possible only when the light is green.

The entities that queue for service are called customers, or users, or jobs, depending on what is appropriate for the situation at hand [Stewart]. Customers who require service are said to "arrive" at the service facility and place service "demands" on the resource. At a doctor's office, the waiting line consists of the patients awaiting their turn to see the doctor; the doctor is the server who is subject to a limited resource, in this case time. Planes may be viewed as users and the runway as a resource that is assigned by an air traffic controller. The resources are of finite capacity, meaning that there is not an infinity of them, nor can they work infinitely fast. Furthermore, arrivals place demands upon the resource, and these demands are unpredictable in their arrival times and unpredictable in their size. Taken together, limited resources and unpredictable demands imply a conflict for the use of the resource and hence queues of waiting customers [Stewart].

Our ability to model and analyze systems of queues helps to minimize their inconvenience, to
maximize the use of the limited resources and to enable the possible improvements of the
existing system. An analysis may tell us something about the expected time that a resource
will be in use, or the expected time that a customer must wait. This information may then be
used to make decisions as to when and how to upgrade the system: for an overworked doctor
to take on an associate or an airport to add a new runway, for example [Stewart].

For the efficient analysis and optimization of queueing systems, corresponding models must be built. They can be used to answer questions like the following [Winston]:

1. What fraction of the time is each server idle?
2. What is the expected number of customers present in the queue?
3. What is the expected time that a customer spends in the queue?
4. What is the probability distribution of the number of customers present in the queue?
5. What is the probability distribution of a customer's waiting time?

Queueing theory is the study of waiting systems [Hillier]. It uses queueing models to
represent the various types of queueing systems (systems that involve queues of some kind)
that arise in practice. Formulas for each model indicate how the corresponding queueing
system should perform. Therefore, they are very helpful for determining how to operate a
queueing system in the most effective way. Providing too much service capacity to operate
the system involves excessive costs. But not providing enough service capacity results in
excessive waiting and all its unfortunate consequences. So the models enable finding an
appropriate balance between the cost of service and the amount of waiting [Hillier].

2.4.1 Basic characteristics of queueing system

The basic characteristics of queueing systems are [Kljajič]:


• The distribution of inter-arrival times of customers
• The distribution of service times
• The number of servers
• The capacity of queueing system
• The discipline of queue
• The number of serving levels.

In the sequel, we are going to look at the characteristics of queueing systems more closely.

The Basic Queueing Process

The basic process assumed by most queueing models is the following [Hillier]. Customers
requiring service are generated over time by an input source. These customers enter the
queueing system and join a queue. At certain times, a member of the queue is selected for
service by some rule known as the queue discipline. The required service is then performed
for the customer by the service mechanism, after which the customer leaves the queueing
system. This process is depicted in Figure 17 [Hillier].

Figure 17: The Basic Queueing Process

1. Input source: One characteristic of the input source is its size. The size is the total number
of customers that might require service from time to time, i.e., the total number of distinct
potential customers. This population from which arrivals come is referred to as the calling

population. The size may be assumed to be either infinite or finite (so that the input source
also is said to be either unlimited or limited). The finite case is more difficult analytically
because the number of customers in the queueing system affects the number of potential
customers outside the system at any time. However, the finite assumption must be made if
the rate at which the input source generates new customers is significantly affected by the
number of customers in the queueing system [Hillier].

2. Queue: The queue is where customers wait before being served. A queue is characterized
by the maximum permissible number of customers that it can contain. Queues are called
infinite or finite, according to whether this number is infinite or finite. The assumption of an
infinite queue is the standard one for most queueing models, even for situations where there
actually is a (relatively large) finite upper bound on the permissible number of customers,
because dealing with such an upper bound would be a complicating factor in the analysis.
However, for queueing systems where this upper bound is small enough that it actually would
be reached with some frequency, it becomes necessary to assume a finite queue [Hillier].

3. Queue Discipline: The queue discipline refers to the order in which members of the queue
are selected for service. For example, it may be first-come-first-served, random, according to
some priority procedure, or some other order. First-come-first-served usually is assumed by
queueing models, unless it is stated otherwise [Hillier].

4. Service Mechanism: The service mechanism consists of one or more service facilities,
each of which contains one or more parallel service channels, called servers [Hillier]. If there
is more than one service facility, the customer may receive service from a sequence of these
(service channels in series). At a given facility, the customer enters one of the parallel service
channels and is completely serviced by that server. A queueing model must specify the
arrangement of the facilities and the number of servers (parallel channels) at each one. Most
elementary models assume one service facility with either one server or a finite number of
servers. The time elapsed from the commencement of service to its completion for a customer
at a service facility is referred to as the service time (or holding time). A model of a particular
queueing system must specify the probability distribution of service times for each server,
although it is common to assume the same distribution for all servers. The service-time distribution most frequently assumed in practice is the exponential distribution. Other important service-time distributions are the degenerate distribution (constant service time) and the Erlang (gamma) distribution [Hillier].

5. Servers in parallel and servers in series: This kind of servers classification is most usual
in the queueing theory [Winston]. Servers are in parallel if all servers provide the same type
of service and a customer need only pass through one server to complete service. For example,
the tellers in a bank are usually arranged in parallel; any customer need only be serviced by
one teller, and any teller can perform the desired service. Servers are in series if a customer
must pass through several servers before completing service. An assembly line is an example of a series queueing system.

6. Finite source models and the phenomenon of balking: These are two typical situations which can happen in reality. In the first case, the arrivals are drawn from a small population; for example, we have machines which are waiting to be repaired (a limited number of customers). In the second case, the arrival process depends on the number of customers already present in the system: the rate at which customers arrive at the facility decreases when the facility becomes too crowded. For example, if you see that the bank parking lot is full, you might pass by and come another day. If a customer arrives but fails to enter the system, we say that the customer has balked. Here we can distinguish between two different situations. In the first situation, the arrival rate gradually decreases, which means that the more customers are already present in the system, the more likely a new potential customer is to balk. In the second situation, the arrival rate is constant until the waiting space becomes fully loaded; at that moment, the arrival rate immediately falls to 0, and new potential customers will definitely go away un-served (an example of limited waiting space). Both regimes can be written compactly as state-dependent arrival rates, as sketched below.
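The following compact formalization is added for clarity (an illustrative assumption: $n$ denotes the number of customers present, $K$ the size of the waiting space, and the form $\lambda/(n+1)$ is one common textbook choice for gradual balking, not the only possible one):

$$\text{gradual balking:}\quad \lambda_n = \frac{\lambda}{n+1}, \qquad \text{limited waiting space:}\quad \lambda_n = \begin{cases} \lambda, & n < K \\ 0, & n \ge K \end{cases}$$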

2.4.2 Queueing terminology and basic parameters

According to Kendall, each queueing system is described by six characteristics, $1/2/3/4/5/6$, which are [Winston]:

1 - the nature of the arrival process,
2 - the nature of the service times,
3 - the number of parallel servers,
4 - the queue discipline,
5 - the maximum allowable number of customers in the system (including customers who are waiting and customers who are in service),
6 - the size of the population from which customers are drawn.

Another version of the description of the queueing system can be given as follows [Kljajič]:

Notation = A / B / X / Y / Z (2.5)

where the symbols are:

A - the distribution of inter-arrival times
B - the distribution of service times
X - the number of parallel servers          (2.6)
Y - the limitations of service capacity
Z - the discipline of the queue

Table 2 shows the meaning of symbols in (2.6), which denote the basic characteristics of
queueing systems [Kljajič].

Table 2: The basic characteristics of queueing systems

Example:

Let us consider the following queueing system:

Notation = M / D / 1 / 5 / PRI    (2.7)

In this case, the inter-arrival times are exponentially distributed, the service time is deterministic, there is one server, the total capacity of the system is five, and the service has a priority discipline.

An even simpler version of the description of a queueing system can be presented at a given capacity, if the default FIFO discipline is assumed [Winston, Hillier]:

Notation = inter-arrival distribution / service times distribution / number of servers    (2.8)

The following standard abbreviations are used (either for inter-arrival times or service times) [Winston]:

M: Poisson random process [Winston] with the exponential distribution of times,
G: some general distribution of times,
D: deterministic distribution of times,
Ek: Erlang distribution of times with shape parameter k.

If we are dealing, for instance, with the system M/M/r, the abbreviations mean the following: the input arrival process is a Poisson process, the service times are distributed exponentially, and the number of servers is r. On the other hand, if we are dealing, for instance, with the system M/G/1, the input arrival process is a Poisson process, the service times follow some general distribution, and the number of servers is 1. Let us list some other typical queueing systems: G/M/r, M/D/r, G/G/r, etc.

Queueing systems can be treated as so-called birth-death systems [Winston, Dragan], where the customers in the system represent the observed population, the arrivals are the births, and the departures are the deaths. Birth-death processes are stochastic processes, which will be explained more precisely in chapter 3.

Systems of type M/M/r have the so-called Markovian property (the already mentioned no-memory property), so the theory of Markov processes can be applied to them. On the contrary, the other types of queueing systems do not have this property and must be treated by special, more difficult approaches [Winston, Dragan].

The basic quantities which deserve a lot of attention in the theory of queueing systems are [Schaums, Dragan, Winston]:

• the number of customers in the system and in the queue, and
• the customer's time spent in the system and the customer's waiting time in the queue.

The basic parameters which define the properties of a queueing channel are [Kljajič, Dragan, Hillier, Winston]:

$E(N) = L$ ..................... average number of customers in the system (including those which are already in the service process),
$E(N_q) = L_q$ ................. average number of customers in the queue,
$E(N_s) = L_s$ ................. average number of customers in the service process,
$E(W)$ ......................... average customer time spent in the system (sum of the waiting time in the queue and the time spent in the service process),
$E(W_q)$ ....................... average customer waiting time in the queue,
$E(W_s)$ ....................... average customer time spent in the service process.
(2.9)

where »E« denotes the expectation.

Let us define some other quantities which are also important in queueing theory [Kljajič, Hillier]:

State of the system ............ number of customers in the system,
Queue length ................... number of customers waiting for service to begin,
$N(t)$ ......................... number of customers in the system at time $t$,
$p_n(t)$ ....................... probability of exactly $n$ customers in the queueing system at time $t$,
$r$ ............................ number of servers in the queueing system,
$\lambda_n$ .................... mean arrival rate (expected number of arrivals per time unit) of new customers, when $n$ customers are already in the system,
$\mu_n$ ........................ mean service rate for the overall system (expected number of customers completing service per time unit), when $n$ customers are already in the system.
(2.10)

When λ_n is constant for all n, this constant is denoted by λ. Similarly, when the mean service rate is constant for all n, this constant is denoted by µ [Hillier]. The quantity λ can also be treated as the speed of arrivals of transactions into the system, while the quantity µ can be treated as the speed of the service, for example the speed of the single server in a one-channel system [Kljajič].

In the case of single channel system, we can introduce the following utilization factor
[Hillier]:
ρ = λ/µ (2.11)

which represents the fraction of the system’s service capacity ( µ ), that is being utilized on
the average by arriving customers ( λ ) [Hillier].

In the case of multiple channel system, we can introduce the following utilization factor
[Hillier]:
ρ = λ/(r·µ) (2.12)

Certain notation also is required to describe steady-state results [Hillier]. When a queueing
system has recently begun operation, the state of the system (number of customers in the
system) will be greatly affected by the initial state and by the time that has since elapsed. The
system is said to be in a transient condition. However, after sufficient time has elapsed, the
state of the system becomes essentially independent of the initial state and the elapsed time. The system has now essentially reached a steady-state condition, where the probability
distribution of the state of the system remains the same (the steady-state or stationary
distribution) over time. Queueing theory has tended to focus largely on the steady-state
condition, partially because the transient case is more difficult analytically [Hillier].

Relationships between L, L_q, E(W), E(W_q):

It has been proved that in steady state the so-called Little's formula can be applied [Hillier]:
L = λ ⋅ E (W ) (2.13)

Also, we can introduce the following expression:


Lq = λ ⋅ E (Wq ) (2.14)
and:
E(W) = E(W_q) + 1/µ (2.15)

These relationships are extremely important, because they enable all four of the fundamental quantities (L, L_q, E(W), E(W_q)) to be immediately determined as soon as one of them is found analytically. This situation is fortunate, because some of these quantities are often much easier to find than others when a queueing model is solved from basic principles [Hillier].
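To make the use of these relationships concrete, here is a minimal Python sketch; the arrival rate, service rate and the value of L below are assumed purely for illustration (L would normally come from a model or a measurement).

    # Minimal sketch of the relationships (2.13)-(2.15); lam, mu and L are assumed values.
    lam = 1.5          # arrival rate, customers per time unit
    mu = 2.0           # service rate, customers per time unit
    L = 3.0            # average number of customers in the system

    W = L / lam        # Little's formula (2.13): L = lam * E(W)
    Wq = W - 1.0 / mu  # relation (2.15): E(W) = E(Wq) + 1/mu
    Lq = lam * Wq      # relation (2.14): Lq = lam * E(Wq)

    print(f"W = {W:.3f}, Wq = {Wq:.3f}, Lq = {Lq:.3f}")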

2.4.3 Types of queueing systems

In this chapter, let us shortly introduce some typical queueing systems [Kljajič]. Figure 18
shows the simple queueing system with one queue and single server.

Figure 18: The simple queueing system with one queue and single server

Figure 19 shows the simple queueing system with one queue and multiple parallel servers.

Figure 19: The simple queueing system with one queue and multiple parallel servers

Figure 20 shows the closed queueing system of maintenance of machinery.

Figure 20: The closed queueing system of maintenance of machinery

Figure 21 shows a more complex queueing system with many queues and many channels, where priority logic is also involved.

Figure 21: More complex queueing system

2.5 Some probability basics of simulation

As mentioned before, the behavior of complex organization systems must be described by means of stochastic functions. Within this framework, the following situations are possible [Kljajič]:
1. The knowledge about the system is incomplete, so the system must be described randomly with an appropriate distribution function,
2. The description of the system is very complicated, thus it must be approximated by use of a specific stochastic function,
3. The behavior of the system has a significantly stochastic character.

Since computer simulation represents experimenting on the computer, random events must be generated for the inputs and outputs of the system model. The variable, which represents the outcome of a random event, is called the random (stochastic) variable. For the events described by a random variable, only the set of possible values and the probability distribution are known. Based on these two characteristics, the mean (expected) value and the variance can also be found. If the set of values and the probability distribution are discrete, then we are talking about discrete random variables, otherwise we are talking about continuous random variables.

Let us have some continuous random variable X with the probability density function f(x). The probability that the random variable X takes some value in the interval a ≤ X ≤ b can be derived as follows (c.f. figure 22):


P(a ≤ X ≤ b) = ∫_a^b f(x) dx (2.16)

where the following expression is always valid:


∫_{−∞}^{∞} f(x) dx = 1 (2.17)

Figure 22: Illustration of probability P ( a ≤ X ≤ b )

Particularly important for the description of the random variable X is the so-called cumulative
distribution function, which can be expressed as follows (c.f. figure 23):
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt (2.18)

Figure 23: Illustration of the cumulative distribution function

We can give some additional properties, which are valid for the random variable X [Dragan]:

1. Since F(x) is monotonically increasing, it follows: f(x) = dF(x)/dx ≥ 0,
2. F(∞) = P(X ≤ ∞) = ∫_{−∞}^{∞} f(t) dt = 1,
3. F(b) − F(a) = P(a ≤ X ≤ b).
(2.19)

The expectation of the random variable can be defined as:


E(X) = ∫_{−∞}^{∞} x·f(x) dx (2.20)

and the variance can be defined as [Dragan]:

VAR(X) = E(X²) − E(X)² = ∫_{−∞}^{∞} x²·f(x) dx − ( ∫_{−∞}^{∞} x·f(x) dx )² (2.21)
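As a small numeric illustration of (2.20) and (2.21), the following sketch integrates an assumed exponential density f(x) = λ·e^(−λ·x); any other density could be substituted.

    # Numeric check of (2.20) and (2.21) for an assumed exponential density;
    # the expected results are E(X) = 1/lam and VAR(X) = 1/lam^2.
    import numpy as np
    from scipy.integrate import quad

    lam = 2.0
    f = lambda x: lam * np.exp(-lam * x)

    mean, _ = quad(lambda x: x * f(x), 0, np.inf)        # E(X), eq. (2.20)
    second, _ = quad(lambda x: x**2 * f(x), 0, np.inf)   # E(X^2)
    var = second - mean**2                               # VAR(X), eq. (2.21)

    print(mean, var)   # approximately 0.5 and 0.25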

In the case of a discrete random variable X, where the set of values is x_1, x_2, x_3, ..., x_n and the corresponding probabilities are p_1, p_2, p_3, ..., p_n, with the property Σ_{i=1}^{n} p_i = 1, the following expressions can be given:

F(x) = P(X ≤ x) = Σ_{x_i ≤ x} p(x_i) = Σ_{x_i ≤ x} p_i

E(X) = Σ_{i=1}^{n} x_i·p(x_i) = Σ_{i=1}^{n} x_i·p_i (2.22)

VAR(X) = E(X²) − E(X)² = Σ_{i=1}^{n} x_i²·p_i − ( Σ_{i=1}^{n} x_i·p_i )²
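A minimal sketch of the discrete case (2.22); the values and probabilities below are assumed purely as an example.

    # Minimal sketch of (2.22) for a discrete random variable.
    values = [1, 2, 3, 4]
    probs = [0.1, 0.2, 0.3, 0.4]   # must sum to 1

    assert abs(sum(probs) - 1.0) < 1e-12

    mean = sum(x * p for x, p in zip(values, probs))        # E(X)
    second = sum(x * x * p for x, p in zip(values, probs))  # E(X^2)
    var = second - mean**2                                  # VAR(X)

    print(mean, var)   # 3.0 and 1.0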

2.6 Random generators

For the purpose of computer simulation, the so-called random generator is used [Kljajič, Winston]. It serves for generating random numbers, which represent the time instants of customer arrivals into the system, and for generating service times with respect to some probability law. One of the typical random generators is the so-called roulette random generator, whose behavior resembles that of a roulette wheel. This behavior is also the reason why the famous Monte Carlo simulation method got its name [Winston]. The procedure of segmenting and using a roulette wheel is equivalent to generating integer random numbers in a certain interval, for example between 00 and 99 [Winston]. This follows from the fact that each random number in the sequence has an equal probability of showing up (for example, in the case of the interval between 0 and 99 the probability is 0.01). In addition, each random number is independent of the numbers that precede and follow it.

The definition of random number generator is [Hillier]:


A random number generator is an algorithm that produces sequences of numbers that
follow a specified probability distribution and possess the appearance of randomness.

Random numbers usually come from some form of uniform random observations, where all
possible numbers are equally likely. When we are interested in some other probability
distribution, we shall refer to random observations from that distribution [Hillier].

Uniform random numbers can be divided into two main categories [Hillier]:
• A random integer number is a random observation from a discretized uniform distribution
over some range,
• A uniform random number is a random observation from a continuous uniform distribution
over some interval [a, b]. If a random number is defined as an independent random sample
drawn from a continuous uniform distribution, the probability density function is given by
the expression [Winston]:

 1
 a≤ x≤b (2.23)
f ( x) = b − a
 0 sicer

When a and b are not specified, they are assumed to be: a = 0, b = 1 .

Pseudo-random generator:

There are several ways and methods for numeric generation of random numbers, which are usually based on recursive algorithms. From the initial number (called the "seed" [Kljajič]), the next number is generated. Since it is demanded that the numbers are uniformly distributed over some interval and independent of each other, we are talking about a deterministic procedure for generating a stochastic process. Therefore, the corresponding numerical generators are called pseudo-random, since they are not purely random.

Typical generators for generating random numbers are [Kljajič]:


• The mid-square generator,
• The mid-product generator,
• The logistic equation,
• The congruential generator, etc.

Most random number generators use some form of a congruential relationship [Winston].
Examples of such generators include the linear congruential generator, the multiplicative
generator, and the mixed generator. The linear congruential generator is by far the most
widely used [Winston]. In fact, most built-in random number functions on computer systems
use this generator. With this method, we produce a sequence of integers x1 , x2 , x3 ,...xn

between 0 and m -1 according to the following recursive relation [Winston]:

x_{i+1} = (a·x_i + c) mod m, i = 0, 1, 2, ... (2.24)

The initial value x_0 is called the seed, a is the constant multiplier, c is the increment, and m is the modulus. These four values are called the parameters of the generator. Using this relation, the value of x_{i+1} equals the remainder of the division of (a·x_i + c) by m. The random number between 0 and 1 is then generated using the equation [Winston]:

R_i = x_i / m, i = 1, 2, ... (2.25)

The generator introduced by (2.24) and (2.25) is the so-called mixed generator. It becomes an additive generator if a = 1, and a multiplicative generator if c = 0.
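A minimal Python sketch of the linear congruential generator (2.24)-(2.25) follows; the parameter values a = 1664525, c = 1013904223, m = 2^32 are one commonly quoted textbook choice and are assumed here only for illustration.

    # Minimal sketch of the mixed linear congruential generator (2.24)-(2.25).
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Yield an endless stream of pseudo-random numbers R_i in [0, 1)."""
        x = seed
        while True:
            x = (a * x + c) % m   # recursion (2.24)
            yield x / m           # normalization (2.25)

    gen = lcg(seed=42)
    print([round(next(gen), 4) for _ in range(5)])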

By carefully selecting the values of a, c, m, x_0, the pseudorandom numbers can be made to meet all the statistical properties of true random numbers [Winston]. In addition to the statistical properties, random number generators must have several other important characteristics if they are to be used efficiently within computer simulations [Winston]: (1) the routine must be fast; (2) the routine should not require a lot of core storage; (3) the random numbers should be replicable; and (4) the routine should have a sufficiently long cycle, that is, we should be able to generate a long sequence without repetition of the random numbers.

Most programming languages have built-in library functions that provide random (or pseudorandom) numbers directly [Winston]. Therefore, most users need only know the library function for a particular system. In some systems, a user may have to specify a value for the seed, x_0, but it is unlikely that a user would have to develop or design a random number generator.

In the following two chapters, Theory of stochastic processes and Introduction to basic queueing models, we are going to give some theoretical basics, which can also be very useful for a deeper understanding of discrete-event simulation of queueing systems.

3 THEORY OF STOCHASTIC PROCESSES

The theory of stochastic processes is generally defined as the "dynamic" part of probability
theory, in which one studies a collection of random variables (called a stochastic process)
from the point of view of their interdependence and limiting behavior [Parzen]. One is
observing a stochastic process whenever one examines a process developing in time in a
manner controlled by probabilistic laws. Examples of stochastic processes are provided by the
path of a particle in Brownian motion, the growth of a population such as a bacterial colony,
the fluctuating number of particles emitted by a radioactive source, and the fluctuating output
of gasoline in successive runs of an oil-refining mechanism. Stochastic or random processes
often occur in nature. They also occur in medicine, biology, physics, oceanography,
economics, and psychology, to name only a few scientific disciplines. If a scientist is to take
account of the probabilistic nature of the phenomena with which he is dealing, he should
undoubtedly make use of the theory of stochastic processes [Parzen].

3.1 Definition of stochastic processes and basic properties

Stochastic processes are the processes, which change in dependence of time and/or space with
respect to probability laws [Hillier, Winston, Dragan].

Their definition is as follows [Hillier]:

Random (stochastic) process is the family (sequence) of the random variables:

{ X t = X (t ), t ∈ T } (3.1)

where the time t is the parameter, which in general takes its values from the set of real numbers.

In other words, this means that a stochastic process is defined to be an indexed collection of random variables {X_t = X(t), t ∈ T}, where the index t runs through a given set T [Hillier].

Often T is taken to be the set of nonnegative integers, and X_t represents a measurable characteristic of interest at time t. For example, X_t might represent the inventory level of a particular product at the end of week t. So the stochastic processes are of interest for describing the behavior of a system operating over some period of time [Hillier].

Figure 24 represents two typical examples of stochastic processes: a) Random behavior of the temperature as a function of time in hours, b) Stock value behavior as a function of time in days.

Figure 24: Illustration of stochastic processes: a) Random behavior of the temperature as a function of time in hours, b) Stock value behavior as a function of time in days.

The stochastic processes are usually separated into two main categories [Winston]:
• Discrete stochastic processes ( X t is the function of discrete time moments),

• Continuous stochastic processes ( X t is the function of continuous time moments).

A discrete-time stochastic process is simply a stochastic process in which the state of the
system can be viewed at discrete instants in time [Winston]. On the contrary, a continuous-
time stochastic process is a stochastic process in which the state of the system can be viewed
at any time. For example, the number of people in a supermarket t minutes after the store
opens for business may be viewed as a continuous-time stochastic process. Since the price of
a share of stock can be observed at any time (not just the beginning of each trading day), it
may be also viewed as a continuous-time stochastic process. Viewing the price of a share of
stock as a continuous time stochastic process has led to many important results in the theory
of finance, including the famous Black–Scholes option pricing formula [Winston].

Let us list some more stochastic processes, which are typical in real practice [Hudoklin, Dragan]:
• Markov processes,
• Random walk processes,
• Poisson processes,
• Birth-death processes,
• Epidemic processes,
• Diffusion processes, etc.

With respect to the nature of time and the nature of state space, the stochastic processes can
be divided into the following categories [Hudoklin, Dragan]:

• Processes with discrete time and discrete state space,


• Processes with discrete time and continuous state space,
• Processes with continuous time and discrete state space,
• Processes with continuous time and continuous state space.

3.2 Markov processes

Markov processes represent an important group of stochastic processes, since they can describe many real situations. They got their name after the scientist Andrey Markov, who introduced the so-called Markov chains in the year 1907 [Hudoklin].

A certain random process is a Markov process if the following expression applies (the Markovian property) [Winston, Hillier]:

P(X_{t_n} | X_{t_1}, X_{t_2}, ..., X_{t_{n−1}}) = P(X_{t_n} | X_{t_{n−1}}) (3.2)

It means that the conditional probability of the random variable X_{t_n} taking some value at the time t_n depends only on the last state of the process X_{t_{n−1}}, and does not depend on the process states at the previous times t_1, ..., t_{n−2}. It means that the Markov process has no memory: if we want to predict the process behavior in the future, we only need to know the present and not the past.

In other words, the Markovian property implies that the conditional probability of any future "event," given any past "event" and the present state, is independent of the past event and depends only upon the present state.

The Markov processes can be divided into the two main categories [Hudoklin, Winston]:
• The Markov processes with the discrete states in discrete time (Markov chains), and
• The Markov processes with the discrete states in continuous time.

In what follows, let us briefly introduce Markov chains.

3.3 Markov chains

Markov chains are Markov processes with discrete states in discrete time, where the time is
defined as:

tn+1 → n + 1, tn → n, ..., t1 → 1, t0 → 0 (3.3)

Random variables can then be defined in the form {X_n, n ≥ 0}, where some space of possible states S = {1, 2, ..., m} is given, in which the number of states can be finite (finite chains) or countably infinite (infinite chains).

If the relation X_n = i holds, it means that the given Markov chain is located in state i at time n. A discrete Markov chain can be characterized by the following expression [Winston, Hillier]:

P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_n = i) = P(X_{n+1} = j | X_n = i) (3.4)

where the conditional probabilities:

P(X_{n+1} = j | X_n = i) = p_ij = P(i → j) (3.5)

are called (one step) transition probabilities.

The probability p_ij is the probability that, given the system is in state i at time t, it will be in state j at time t + 1. If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred. Equation (3.5) implies that the probability law relating the next period's state to the current state does not change (or remains stationary) over time. For this reason, the relation (3.5) is often called the Stationary Assumption. Any Markov chain that satisfies (3.5) is called a stationary Markov chain [Winston].

In most applications, the transition probabilities are displayed as an (N + 1) × (N + 1) dimensional transition probability matrix P over the states {0, 1, 2, ..., N}. The transition probability matrix P may be written as [Winston, Hillier]:

P = {p_ij} =

        0      1     ...    N
  0  [ p_00   p_01   ...   p_0N ]
  1  [ p_10   p_11   ...   p_1N ]
  .  [  .      .     ...     .  ]      (3.6)
  .  [  .      .     ...     .  ]
  N  [ p_N0   p_N1   ...   p_NN ]

where {0, 1, 2, ..., N} are the discrete states of the process.

For every transition matrix, the following properties hold [Winston, Hillier]:
1. It is a stochastic, square matrix with non-negative elements,
2. The sum of probabilities in every row is equal to 1.

To get more familiar with the use of Markov chains, let us show one example, which is called
the Gambler's Ruin [Winston].

Example:
At time 0, I have 2 EUROS. At times 1, 2, ..., I play a game in which I bet 1 EURO. With probability p, I win the game, and with probability 1 − p, I lose the game. My goal is to increase my capital to 4 EUROS, and as soon as I do, the game is over. The game is also over if my capital is reduced to 0 EUROS. If we define X_n to be my capital position after the n-th game (if any) is played, then X_0, X_1, ..., X_n may be viewed as a discrete-time stochastic process. Note that X_0 = 2 is a known constant, but X_1 and later X_n are random. For example, with probability p, X_1 = 3, and with probability 1 − p, X_1 = 1. Note that if X_n = 4, then X_{n+1} and all later X_n will also equal 4. Similarly, if X_n = 0, then X_{n+1} and all later X_n will also equal 0. For obvious reasons, this type of situation is called a gambler's ruin problem.
Find the transition matrix and state-transition diagram!

Since the amount of money I have after n + 1 plays of the game depends on the past history of the game only through the amount of money I have after n plays, we definitely have a Markov chain. Since the rules of the game do not change over time, we also have a stationary Markov chain. The transition matrix is as follows (state i means that we have i EUROS):
        0     1     2     3     4
  0  [  1     0     0     0     0  ]
  1  [ 1−p    0     p     0     0  ]
P = 2  [  0    1−p    0     p     0  ]      (3.7)
  3  [  0     0    1−p    0     p  ]
  4  [  0     0     0     0     1  ]

where {0, 1, 2, 3, 4} is the state space, observed in EUROS.

If the state is 0 or 4 EUROS, I do not play the game anymore, so the state cannot change; hence p_00 = p_44 = 1. For all other states, we know that with probability p, the next period's state will exceed the current state by 1, and with probability 1 − p, the next period's state will be 1 less than the current state.

The corresponding state-transition diagram is shown in figure 25.

Figure 25: The state-transition diagram for the Gambler's Ruin problem
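A minimal simulation sketch of this chain is given below; the value p = 0.5 and the fixed seed are assumptions made only for reproducible illustration.

    # Minimal sketch simulating the Gambler's Ruin chain with transition matrix (3.7).
    import random

    def gamblers_ruin(p=0.5, start=2, goal=4, seed=1):
        """Play the game until absorption in state 0 (ruin) or state `goal`."""
        rng = random.Random(seed)
        state, path = start, [start]
        while state not in (0, goal):
            state += 1 if rng.random() < p else -1   # win +1 with prob. p, else -1
            path.append(state)
        return path

    print(gamblers_ruin())   # one realization of the capital path, e.g. [2, 1, 2, ...]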

3.4 Poisson processes

Poisson processes can be ranked into the category of Markov processes with discrete states and continuous time [Kay]. In contrast to Markov chains, the transitions between states are observed in the short time interval (t, t + ∆t).

The Poisson process is one of the simplest Markov processes with discrete states and continuous time. Since the theory of this kind of processes is much harder to understand, the Poisson process represents a welcome asset for understanding more complex stochastic processes [Hudoklin, Kay]. It can be used as a model of a great number of real stochastic processes, such as [Hudoklin]:

• The decay of radioactive nuclei,


• Thermal emission of electrons,
• Machinery repair and maintenance,
• Inventory resupply,
• Road accidents,
• Queueing theory,
• Space distribution of animals or vegetables, etc.

So the Poisson process is a random process that is very useful for modeling events occurring
in time [Kay]. A typical realization is shown in Figure 26 in which the events occur randomly
in time. The random process that counts the number of events in the time interval [0, t] and
which is denoted by N ( t ) , is called the Poisson counting random process. It is clear from

Figure 26 that the two random processes are equivalent descriptions of the same random
phenomenon. Note that N ( t ) is a continuous-time/discrete-valued random process. Also,

because N ( t ) counts the number of events from the initial time t = 0 up to and including the

time t, the value of N ( t ) at a jump is N ( t + ) . Thus, N ( t ) is right-continuous [Kay].

In what follows, we are going to derive some important results for a Poisson process. These results will represent a good basis for a better understanding of the findings already expressed in chapters 2.2 and 2.3, where the inter-arrival times distribution and the distribution of the number of arrivals were introduced.

Figure 26: A typical realization of the Poisson counting process (N(t) counts the number of events, the time points t_1, t_2, ... represent the occurrence of new events, while z_1, z_2, ... are the times between these events).

3.4.1 Derivation of distribution of the number of events

At first, we are going to give the following definition of the Poisson process [Kay, Winston,
Hudoklin]:

Suppose we are dealing with a sequence of events, which occur individually and completely at random. Let us denote by N(t, t + ∆t) the number of events in the time interval (t, t + ∆t], and by N(t) the number of events in the time interval (0, t]. The Poisson process is then the family {N(t)}, which for a certain positive constant λ and ∆t → 0 satisfies the following requirements [Kay, Winston, Hudoklin]:

a) P[N(t, t + ∆t) = 0] = 1 − λ·∆t + o(∆t) .............. no event in (t, t + ∆t],
b) P[N(t, t + ∆t) = 1] = λ·∆t + o(∆t) .................. one event in (t, t + ∆t],
c) P[N(t, t + ∆t) > 1] = o(∆t) ......................... more than one event in (t, t + ∆t],
d) the number N(t, t + ∆t) is completely independent of the number of events in the interval (0, t] (Markovian property),
(3.8)

where o(∆t) is some function which tends to 0 faster than ∆t, and λ has the dimension (number of events)/(time unit). Thus λ represents the frequency of events and is the parameter of the process.

Let us now denote by N(t + ∆t) the number of events in the time interval (0, t + ∆t]. The probability p_i(t + ∆t) = P[N(t + ∆t) = i], i.e. the probability that i events have happened until the time t + ∆t, can be expressed in the following way [Kay, Hudoklin]:

P  N ( t + ∆t ) = i  =
{ N ( t ) = i and N ( t , t + ∆t ) = 0 } or 
 
{ N ( t ) = i − 1 and N ( t , t + ∆t ) = 1 } or 
 
{ N ( t ) = i − 2 and N ( t , t + ∆t ) = 2 } or  (3.9)
= P 
 { N ( t ) = i − 3 and N ( t , t + ∆t ) = 3} or 
 
.................................................. or 
 { N ( t ) = i − i and N ( t , t + ∆t ) = i } 
 

which means the probability that i events happened until the time t and 0 events happened in the interval (t, t + ∆t], or the probability that i − 1 events happened until the time t and 1 event happened in the interval (t, t + ∆t], or the probability that i − 2 events happened until the time t and 2 events happened in the interval (t, t + ∆t], etc.

The expression (3.9) can also be written in the form:

P  N ( t + ∆t ) = i  =
= P  N ( t ) = i  ⋅ P  N ( t , t + ∆t ) = 0 N ( t ) = i  +
+ P  N ( t ) = i − 1 ⋅ P  N ( t , t + ∆t ) = 1 N ( t ) = i − 1 +
(3.10)
+ P  N ( t ) = i − 2  ⋅ P  N ( t , t + ∆t ) = 2 N ( t ) = i − 2  +
+............. +
+ P  N ( t ) = 0  ⋅ P  N ( t , t + ∆t ) = i N ( t ) = 0 

Since we are dealing with independent events, the conditional probabilities can be dropped, thus we have:

P[N(t + ∆t) = i] =
= P[N(t) = i]·P[N(t, t + ∆t) = 0] +
+ P[N(t) = i − 1]·P[N(t, t + ∆t) = 1] +
+ P[N(t) = i − 2]·P[N(t, t + ∆t) = 2] +      (3.11)
+ ............. +
+ P[N(t) = 0]·P[N(t, t + ∆t) = i]

Based on the expressions in (3.8), we can now write:

P[N(t + ∆t) = i] =
= P[N(t) = i]·[1 − λ·∆t + o(∆t)] +
+ P[N(t) = i − 1]·[λ·∆t + o(∆t)] +
+ P[N(t) = i − 2]·o(∆t) +      (3.12)
+ ............. +
+ P[N(t) = 0]·o(∆t)

It can be shown that the probabilities which are multiplied by o(∆t) contribute terms that vanish when ∆t → 0. Thus we have:

P[N(t + ∆t) = i] =
= P[N(t) = i]·[1 − λ·∆t + o(∆t)] + P[N(t) = i − 1]·[λ·∆t + o(∆t)]      (3.13)

and consequently:

P[N(t + ∆t) = i] =
= P[N(t) = i]·(1 − λ·∆t) + P[N(t) = i − 1]·λ·∆t +
+ {P[N(t) = i] + P[N(t) = i − 1]}·o(∆t) =      (3.14)
= P[N(t) = i]·(1 − λ·∆t) + P[N(t) = i − 1]·λ·∆t + o(∆t)

If we now (for the sake of simplicity) apply the relations p_i(t + ∆t) = P[N(t + ∆t) = i] and p_i(t) = P[N(t) = i], we have:

p_i(t + ∆t) = p_i(t) − p_i(t)·λ·∆t + p_{i−1}(t)·λ·∆t + o(∆t)

p_i(t + ∆t) − p_i(t) = −p_i(t)·λ·∆t + p_{i−1}(t)·λ·∆t + o(∆t)        | : ∆t

[p_i(t + ∆t) − p_i(t)]/∆t = −p_i(t)·λ + p_{i−1}(t)·λ + o(∆t)/∆t        | lim ∆t → 0      (3.15)

lim_{∆t→0} [p_i(t + ∆t) − p_i(t)]/∆t = dp_i(t)/dt = −p_i(t)·λ + p_{i−1}(t)·λ

where lim_{∆t→0} o(∆t)/∆t = 0.

Thus, we get the following system of differential equations [Hudoklin, Kay]:

dp_i(t)/dt = −λ·p_i(t) + λ·p_{i−1}(t), i = 0, 1, 2, ...
(3.16)
where: p_i(t) = P[N(t) = i]

which can be also written in the form:

dp_0(t)/dt = −λ·p_0(t) + λ·p_{−1}(t) = −λ·p_0(t)        (since p_{−1}(t) = 0)
dp_1(t)/dt = −λ·p_1(t) + λ·p_0(t)      (3.17)
dp_2(t)/dt = −λ·p_2(t) + λ·p_1(t)
..........................

In what follows, the corresponding system of differential equations must be solved, for example by use of the Laplace transformation. For this purpose, we can first try to solve the first equation:

dp_0(t)/dt = −λ·p_0(t)        (applying the Laplace transform L)
s·p_0(s) − p_0(0) = −λ·p_0(s)
p_0(s) = p_0(0)/(s + λ) = 1/(s + λ)        (applying the inverse transform L⁻¹)      (3.18)
p_0(t) = e^(−λ·t)

where "L" denotes the Laplace operator and is p0 ( 0 ) = 1 , since it is hundred percent, that no

event happened until the time 0.

The result from (3.18) can now be substituted into the second differential equation, which can also be solved by use of the Laplace transformation. The solution of the second differential equation is [Dragan]:

p_1(t) = λ·t·e^(−λ·t) (3.19)

Similarly, we can solve the third, fourth, and all other differential equations. The solutions are [Dragan]:

p_2(t) = (λ²/2)·t²·e^(−λ·t)
p_3(t) = (λ³/(2·3))·t³·e^(−λ·t)      (3.20)
p_4(t) = (λ⁴/(2·3·4))·t⁴·e^(−λ·t)
etc.

If we now combine the results from (3.18), (3.19) and (3.20), we get the following expression [Dragan, Hudoklin, Kay]:

p_i(t) = (λ^i/i!)·t^i·e^(−λ·t)

and:      (3.21)

p_i(t) = P[N(t) = i] = ((λ·t)^i/i!)·e^(−λ·t), i = 0, 1, 2, ...

The result (3.21) can now be compared with the result (2.3). Obviously, both results are the same, which proves that the distribution of the number of arrivals is the Poisson distribution, actually governed by the Poisson stochastic process.
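The result (3.21) can also be checked by a small Monte Carlo experiment: generating exponential inter-arrival times and counting the events in [0, t] should reproduce the Poisson probabilities. The values of λ, t and the number of runs below are assumptions made for illustration.

    # Monte Carlo sketch checking (3.21): counts of events on [0, t] with
    # exponential inter-arrival times follow the Poisson distribution.
    import math
    import random

    lam, t, runs = 2.0, 3.0, 100_000
    rng = random.Random(0)

    counts = {}
    for _ in range(runs):
        elapsed, n = 0.0, 0
        while True:
            elapsed += rng.expovariate(lam)   # exponential inter-arrival time
            if elapsed > t:
                break
            n += 1
        counts[n] = counts.get(n, 0) + 1

    for i in range(5):
        simulated = counts.get(i, 0) / runs
        exact = (lam * t) ** i * math.exp(-lam * t) / math.factorial(i)  # (3.21)
        print(i, round(simulated, 4), round(exact, 4))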

3.4.2 Derivation of distribution of the times between events

Let us go back to the typical realization of a Poisson random process as it was shown in Figure 26. The times t_1, t_2, ... can be called the arrival times, while the time intervals z_1, z_2, ... can be called the inter-arrival times. The inter-arrival times shown in Figure 26 are realizations of the continuous random variables Z_1, Z_2, .... We wish to be able to compute probabilities for a finite set, say Z_1, Z_2, ..., Z_k. To begin, we first determine the probability density function f_{Z_1}(z_1). Note that Z_1 = T_1, where T_1 is the random variable denoting the first arrival. By the definition of the first arrival, we can conclude (c.f. figure 26): if Z_1 > ξ_1, then N(ξ_1) = 0 (the first arrival has not yet occurred). So we can introduce the following expression [Kay]:

P(Z_1 > ξ_1) = P[N(ξ_1) = 0] = ((λ·ξ_1)⁰/0!)·e^(−λ·ξ_1) = e^(−λ·ξ_1) (3.22)

Based on expression (3.22), the probability density function f_{Z_1}(z_1) can be derived [Kay]:

f_{Z_1}(z_1) = dF_{Z_1}(z_1)/dz_1 = d/dz_1 [P(Z_1 ≤ z_1)] = d/dz_1 [1 − P(Z_1 > z_1)] =
             = d/dz_1 [1 − e^(−λ·z_1)] = λ·e^(−λ·z_1)
(3.23)

Thus, for the probability density function of the time of the first event ("arrival"), which is also the inter-arrival time between 0 and the occurrence of the first event, we can write [Kay]:

f_{Z_1}(z_1) = λ·e^(−λ·z_1) for z_1 ≥ 0, and f_{Z_1}(z_1) = 0 for z_1 < 0 (3.24)

The result (3.24) can now be compared with the result (2.1). Obviously, both results are the same, which proves that the distribution of the inter-arrival time between the time origin and the first following event (arrival) follows the exponential law.

3.5 Birth processes

Pure birth processes can be used as models of real situations such as the reproduction of bacteria, the chain reaction of nuclear fission, etc. [Hudoklin, Taha].

In order to model the pure birth process, the following assumptions must be made [Hudoklin]:

• Let us have a population of specimens, where the probability that a specimen present at time t gives birth to a new specimen in the interval (t, t + ∆t] is λ·∆t + o(∆t), while the probability that it does not give birth to a new specimen in the interval (t, t + ∆t] is 1 − λ·∆t + o(∆t).
• The probability of birth should be the same for all specimens, independent of their age. Of course, the births originating from different specimens should also be independent of each other.
• The births of each specimen represent an independent Poisson process. Since there are more specimens observed in a population, we obviously have a combination of several independent Poisson processes, where the events represent the individual births. This kind of process is the so-called combined Poisson process [Hudoklin, Dragan], with the frequency of events equal to the sum of the frequencies of events of the individual processes.
• The frequency of events of the individual processes in our case is presumably the same for all specimens and is equal to λ. If at the time t there are i specimens present in the population, then the frequency of events of the combined process is obviously equal to i·λ. And the probability of a new birth of the combined process in the interval (t, t + ∆t] is equal to i·λ·∆t + o(∆t), while the probability of not having a new birth is 1 − i·λ·∆t + o(∆t).

The number of specimens present in the population at time t is the random variable N(t); let us denote p_i(t) = P[N(t) = i]. Let us presume that the size of the population at time 0 is equal to n_0. Then, with respect to some additional assumptions, the following expression can be applied for the probability that we have n_0 specimens in the population at time t + ∆t [Hudoklin]:

p_{n_0}(t + ∆t) = p_{n_0}(t)·P(not a new birth) + p_{n_0−1}(t)·P(a new birth) + o(∆t)
(3.25)
p_{n_0}(t + ∆t) = p_{n_0}(t)·[1 − n_0·λ·∆t] + p_{n_0−1}(t)·(n_0 − 1)·λ·∆t + o(∆t)

where o(∆t) is some function which tends to 0 faster than ∆t. So either there were n_0 specimens and none of them gave birth to a new specimen, or there were n_0 − 1 specimens and one of them gave birth to a new specimen.

In a similar manner, we can draw further conclusions. The probability that we have n_0 + 1 specimens in the population at time t + ∆t can be written in the following way:

p_{n_0+1}(t + ∆t) = p_{n_0+1}(t)·P(not a new birth) + p_{n_0}(t)·P(a new birth) + o(∆t)
(3.26)
p_{n_0+1}(t + ∆t) = p_{n_0+1}(t)·[1 − (n_0 + 1)·λ·∆t] + p_{n_0}(t)·n_0·λ·∆t + o(∆t)

The probability that we have n_0 + 2 specimens in the population at time t + ∆t can be written in the following way:

p_{n_0+2}(t + ∆t) = p_{n_0+2}(t)·P(not a new birth) + p_{n_0+1}(t)·P(a new birth) + o(∆t)
(3.27)
p_{n_0+2}(t + ∆t) = p_{n_0+2}(t)·[1 − (n_0 + 2)·λ·∆t] + p_{n_0+1}(t)·(n_0 + 1)·λ·∆t + o(∆t)

In general, we obviously have [Hudoklin]:

p_i(t + ∆t) = p_i(t)·P(not a new birth) + p_{i−1}(t)·P(a new birth) + o(∆t)
p_i(t + ∆t) = p_i(t)·[1 − i·λ·∆t] + p_{i−1}(t)·(i − 1)·λ·∆t + o(∆t),      (3.28)
i = n_0, n_0 + 1, n_0 + 2, ...

Naturally, during this derivation it was assumed that in a very short time ∆t → 0 a specimen cannot give birth to more than one new specimen. The expression (3.28) can be modified, which leads us to:

p_i(t + ∆t) − p_i(t) = −i·λ·∆t·p_i(t) + p_{i−1}(t)·(i − 1)·λ·∆t + o(∆t)        | : ∆t
(3.29)
[p_i(t + ∆t) − p_i(t)]/∆t = −i·λ·p_i(t) + (i − 1)·λ·p_{i−1}(t) + o(∆t)/∆t        | lim ∆t → 0

where the terms with o(∆t) vanish in the limit for the reasons already mentioned before. This way, we have obtained the following system of differential equations [Hudoklin]:

dp_i(t)/dt = −i·λ·p_i(t) + (i − 1)·λ·p_{i−1}(t), i = n_0, n_0 + 1, n_0 + 2, ... (3.30)

The corresponding system could be solved by use of a similar procedure (sequential use of the Laplace transformation) as in the case of the Poisson process derivation, where we would get the following solution [Hudoklin, Dragan]:

p_i(t) = C(i − 1, i − n_0)·e^(−n_0·λ·t)·(1 − e^(−λ·t))^(i − n_0), i = n_0, n_0 + 1, n_0 + 2, ... (3.31)

where C(·,·) denotes the binomial coefficient.
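A small numeric sanity check of (3.31) is sketched below; the initial population n_0 and the product λ·t are assumed values, and the infinite sum is truncated, so the total should come out close to 1.

    # Numeric sanity check of (3.31): the pure birth probabilities sum to 1.
    from math import comb, exp

    lam, t, n0 = 0.5, 2.0, 3

    def p(i):
        # eq. (3.31), with comb(i - 1, i - n0) the binomial coefficient
        return comb(i - 1, i - n0) * exp(-n0 * lam * t) * (1 - exp(-lam * t)) ** (i - n0)

    total = sum(p(i) for i in range(n0, 400))   # truncated infinite sum
    print(round(total, 6))                      # close to 1.0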

3.6 Death processes


Pure death processes can be used as models of real situations such as the emptying of inventory in a warehouse, the failure of a group of non-repairable products, etc.

In order to model the pure death process, the following assumptions must be made
[Hudoklin]:

• Let us have a population of specimens, where the probability that a specimen present at time t dies in the interval (t, t + ∆t] is µ·∆t + o(∆t), while the probability that it does not die in the interval (t, t + ∆t] is 1 − µ·∆t + o(∆t).
• The probability of death should be the same for all specimens, independent of their age. Of course, the deaths of different specimens should also be independent of each other.
• The death of each specimen represents an independent Poisson process. Since there are more specimens observed in a population, we obviously have a combination of several independent Poisson processes, where the events represent the individual deaths. This kind of process is also called the combined Poisson process, as in the case of birth processes, with the frequency of events equal to the sum of the frequencies of events of the individual processes.
• The frequency of events of the individual processes in our case is presumably the same for all specimens and is equal to µ. If at the time t there are i specimens present in the population, then the frequency of events (deaths) of the combined process is obviously equal to i·µ. And the probability of a new death of the combined process in the interval (t, t + ∆t] is equal to i·µ·∆t + o(∆t), while the probability of not having a new death is 1 − i·µ·∆t + o(∆t).

The number of specimens present in the population at time t is the random variable N(t); let us denote p_i(t) = P[N(t) = i]. Let us presume that the size of the population at time 0 is equal to n_0. Then, with respect to some additional assumptions, the following expression can be applied for the probability that we have n_0 specimens in the population at time t + ∆t [Hudoklin]:

p_{n_0}(t + ∆t) = p_{n_0}(t)·P(not a new death) + p_{n_0+1}(t)·P(a new death) + o(∆t)
(3.32)
p_{n_0}(t + ∆t) = p_{n_0}(t)·[1 − n_0·µ·∆t] + p_{n_0+1}(t)·(n_0 + 1)·µ·∆t + o(∆t)

The probability that we have n_0 − 1 specimens in the population at time t + ∆t is:

p_{n_0−1}(t + ∆t) = p_{n_0−1}(t)·P(not a new death) + p_{n_0}(t)·P(a new death) + o(∆t)
(3.33)
p_{n_0−1}(t + ∆t) = p_{n_0−1}(t)·[1 − (n_0 − 1)·µ·∆t] + p_{n_0}(t)·n_0·µ·∆t + o(∆t)

The probability that we have n_0 − 2 specimens in the population at time t + ∆t is:

p_{n_0−2}(t + ∆t) = p_{n_0−2}(t)·P(not a new death) + p_{n_0−1}(t)·P(a new death) + o(∆t)
(3.34)
p_{n_0−2}(t + ∆t) = p_{n_0−2}(t)·[1 − (n_0 − 2)·µ·∆t] + p_{n_0−1}(t)·(n_0 − 1)·µ·∆t + o(∆t)

In general, we obviously have [Hudoklin]:

p_i(t + ∆t) = p_i(t)·P(not a new death) + p_{i+1}(t)·P(a new death) + o(∆t)
p_i(t + ∆t) = p_i(t)·[1 − i·µ·∆t] + p_{i+1}(t)·(i + 1)·µ·∆t + o(∆t),      (3.35)
i = n_0, n_0 − 1, n_0 − 2, ..., 2, 1, 0

Similarly as in the case of the birth process, we can derive the following system of differential equations from the expression (3.35) [Hudoklin]:

dp_i(t)/dt = −i·µ·p_i(t) + (i + 1)·µ·p_{i+1}(t), i = n_0, n_0 − 1, n_0 − 2, ..., 1, 0 (3.36)

The corresponding system could be solved by use of a similar procedure (sequential use of the Laplace transformation) as in the case of the Poisson process derivation, where we would get the following solution [Hudoklin, Dragan]:

p_i(t) = C(n_0, i)·e^(−i·µ·t)·(1 − e^(−µ·t))^(n_0 − i), i = n_0, n_0 − 1, n_0 − 2, ..., 1, 0 (3.37)

3.7 Birth-Death processes

In this case, we are observing a population which can change either by new births or by new deaths. For easier understanding of the subject, the birth-death process will be explained through the treatment of a queueing system, where the customers in the waiting line and in service will be considered. For this purpose, the model in the form of a state transition diagram (c.f. figure 27) is also introduced [Winston, Stewart, Hillier].

Figure 27: The model of the birth-death process in the form of state transition diagram

The individual states represent the current number of customers (specimens) in the system. When a new customer enters the system, a new birth happens, and when a served customer departs (leaves) the system, a new death happens. Let us assume that in a very short time ∆t → 0 only one customer can enter the system (only one birth can happen). Similarly, let us assume that in a very short time ∆t → 0 only one customer can depart the system (only one death can happen). Thus, in a very short time, the number of specimens in the system can be increased or decreased only by one specimen, which means that only transitions between neighboring states are possible. In figure 27, the quantities p_i denote the probabilities that the system is in the i-th state. In other words, the i-th state of the system means that there are i specimens present in the system.

Let us assume that we have i specimens in the system. The probability of a birth in the interval (t, t + ∆t] is λ_i·∆t + o(∆t), the probability of not having a birth in this interval is 1 − λ_i·∆t + o(∆t), the probability of a death in the interval (t, t + ∆t] is µ_i·∆t + o(∆t), and the probability of not having a death in this interval is 1 − µ_i·∆t + o(∆t).

The number of specimens present in the population at time t is the random variable N(t); let us denote p_i(t) = P[N(t) = i]. Then, with respect to some additional assumptions, the following expression can be applied for the probability that we have 0 specimens in the population at time t + ∆t [Hudoklin]:

p_0(t + ∆t) = p_0(t)·P(not a new birth) + p_1(t)·P(a new death)
(3.38)
p_0(t + ∆t) = p_0(t)·[1 − λ_0·∆t + o(∆t)] + p_1(t)·[µ_1·∆t + o(∆t)]

The probability that we have 1 specimen in the population at time t + ∆t is [Hudoklin]:

p_1(t + ∆t) = p_1(t)·P(not a new birth)·P(not a new death) + p_0(t)·P(a new birth) + p_2(t)·P(a new death)
(3.39)
p_1(t + ∆t) = p_1(t)·[1 − λ_1·∆t + o(∆t)]·[1 − µ_1·∆t + o(∆t)] + p_0(t)·[λ_0·∆t + o(∆t)] + p_2(t)·[µ_2·∆t + o(∆t)]

The probability that we have 2 specimens in the population at time t + ∆t is [Hudoklin]:

p_2(t + ∆t) = p_2(t)·P(not a new birth)·P(not a new death) + p_1(t)·P(a new birth) + p_3(t)·P(a new death)
(3.40)
p_2(t + ∆t) = p_2(t)·[1 − λ_2·∆t + o(∆t)]·[1 − µ_2·∆t + o(∆t)] + p_1(t)·[λ_1·∆t + o(∆t)] + p_3(t)·[µ_3·∆t + o(∆t)]

In general, we obviously have [Hudoklin, Stewart, Winston]:

p_0(t + ∆t) = p_0(t)·[1 − λ_0·∆t + o(∆t)] + p_1(t)·[µ_1·∆t + o(∆t)]
p_i(t + ∆t) = p_i(t)·[1 − λ_i·∆t + o(∆t)]·[1 − µ_i·∆t + o(∆t)] + p_{i−1}(t)·[λ_{i−1}·∆t + o(∆t)] + p_{i+1}(t)·[µ_{i+1}·∆t + o(∆t)]
(3.41)

Similarly as in the cases of the birth and the death processes, we can derive the following system of differential equations from the expression (3.41) [Hudoklin, Stewart, Winston]:

dp_0(t)/dt = −p_0(t)·λ_0 + p_1(t)·µ_1
dp_i(t)/dt = −p_i(t)·(µ_i + λ_i) + p_{i−1}(t)·λ_{i−1} + p_{i+1}(t)·µ_{i+1},      (3.42)
i = 1, 2, ..., n − 1

where n is the capacity of the system (naturally, n can also go to infinity).

As it turns out, in contrast to the cases of the pure birth and pure death processes, it is very difficult to find the solution of the system (3.42). Thus, in what follows we will rather observe the situation in stationary conditions, where the analysis of steady-state probabilities will be applied. In stationary conditions, the derivatives can be set to 0, so we have [Hudoklin, Stewart, Winston, Hillier]:

0 = −p_0·λ_0 + p_1·µ_1
0 = −p_i·(µ_i + λ_i) + p_{i−1}·λ_{i−1} + p_{i+1}·µ_{i+1},      (3.43)
i = 1, 2, ..., n − 1

It follows:

p_0·λ_0 = p_1·µ_1
p_i·(µ_i + λ_i) = p_{i−1}·λ_{i−1} + p_{i+1}·µ_{i+1},      (3.44)
i = 1, 2, ..., n − 1

and:

p_0·λ_0 = p_1·µ_1
p_1·(µ_1 + λ_1) = p_0·λ_0 + p_2·µ_2
p_2·(µ_2 + λ_2) = p_1·λ_1 + p_3·µ_3
p_3·(µ_3 + λ_3) = p_2·λ_2 + p_4·µ_4      (3.45)
...
p_i·(µ_i + λ_i) = p_{i−1}·λ_{i−1} + p_{i+1}·µ_{i+1}
...
p_{n−1}·(µ_{n−1} + λ_{n−1}) = p_{n−2}·λ_{n−2} + p_n·µ_n

The expressions in (3.45) are obviously the balance equations for the states in figure 27, which follow the Rate In = Rate Out principle [Hillier]. The system (3.45) can be relatively easily solved. In what follows, let us state the corresponding solutions [Dragan, Hudoklin, Hillier, Stewart].

The steady-state probability for 0 specimens in the system is:

p_0 = 1/S = 1 / (1 + Σ_{i=1}^{n} Π_{j=0}^{i−1} λ_j/µ_{j+1}) = 1 / (1 + Σ_{i=1}^{n} (λ_0·λ_1·λ_2·λ_3·...·λ_{i−1})/(µ_1·µ_2·µ_3·µ_4·...·µ_i)) (3.46)

The steady-state probabilities for one, two and more specimens in the system are:

p_1 = (λ_0/µ_1)·p_0
p_2 = (λ_0·λ_1)/(µ_1·µ_2)·p_0
p_3 = (λ_0·λ_1·λ_2)/(µ_1·µ_2·µ_3)·p_0
...      (3.47)
p_i = (λ_0·λ_1·λ_2·λ_3·...·λ_{i−1})/(µ_1·µ_2·µ_3·µ_4·...·µ_i)·p_0
...
p_n = (λ_0·λ_1·λ_2·λ_3·...·λ_{n−1})/(µ_1·µ_2·µ_3·µ_4·...·µ_n)·p_0

The probabilities in (3.47) can also be written in more compact form:

p_i = p_0·Π_{j=0}^{i−1} λ_j/µ_{j+1}, i = 1, 2, ..., n (3.48)
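For a finite system, the steady-state probabilities (3.46)-(3.48) are easy to evaluate numerically. The sketch below assumes illustrative rate sequences; any λ_i, µ_i could be substituted.

    # Minimal sketch evaluating (3.46)-(3.48) for a finite birth-death system.
    def birth_death_steady_state(lambdas, mus):
        """lambdas[i] is the birth rate lambda_i; mus[i] is the death rate mu_{i+1}."""
        coeffs = [1.0]                                   # state 0 coefficient
        for lam_j, mu_j1 in zip(lambdas, mus):
            coeffs.append(coeffs[-1] * lam_j / mu_j1)    # running product in (3.48)
        p0 = 1.0 / sum(coeffs)                           # normalization (3.46)
        return [c * p0 for c in coeffs]                  # probabilities (3.47)

    # Example with n = 4 and constant rates (assumed values):
    probs = birth_death_steady_state([2.0] * 4, [3.0] * 4)
    print([round(p, 4) for p in probs], round(sum(probs), 6))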

At this point, we have finished the short overview of the theory of stochastic processes. The results derived in this chapter will help us to better understand the methodology of the following chapter, which introduces the basic queueing models.

4 INTRODUCTION TO BASIC QUEUEING MODELS

In this chapter, we will get more familiar with some basic queueing models. The basic characteristics of queueing systems, the terminology and the basic parameters were already shortly introduced in chapter 2.4. So in this chapter, we will firstly focus on the derivation of the basic M/M/1 model, where all the significant details of the modeling will be shown. Secondly, the results for some other most typical models of the type "M/M/..." (models with the Markovian property) will also be shortly stated. Naturally, as already mentioned in the previous chapter, we will observe only the situation in stationary conditions, where the analysis of steady-state probabilities and other important statistical quantities will be applied.

With the theoretical background acquired in this chapter, a deeper understanding of the working mechanisms of queueing simulation models will definitely also emerge.

4.1 Single channel queueing models

The main property of the single channel queueing models is that they have only a single waiting line and a single server. For all models treated in this chapter, we will suppose that the input arrival process is the Poisson process, the service times are distributed exponentially, and the queue discipline is FIFO (first in, first out).

4.1.1 Basic model M/M/1

For the basic model, the following assumptions must be made [Hudoklin]:
• The population of customers is infinite, and
• The waiting space is infinitely large.

The arrival rate λ is governed by the Poisson process, the service times are exponentially distributed random variables, and the departure rate (mean service rate) is equal to µ. The corresponding system is shown in figure 28, and its state transition diagram is shown in figure 29.

Figure 28: The basic model M/M/1

Figure 29: The state transition diagram of the basic model M/M/1

As can be seen from figure 29, we assume arrival and departure rates which are always constant, regardless of the number of customers already present in the system. So we have:

λ_0 = λ_1 = ... = λ_{n−1} = ... = λ
µ_1 = µ_2 = ... = µ_n = ... = µ
(4.1)

The derivation of steady-state probabilities

Firstly, we are going to calculate the steady-state probabilities for the number of customers
being present in the system. The expressions in (3.47) take the following form:

p_1 = (λ_0/µ_1)·p_0 = (λ/µ)·p_0
p_2 = (λ_0·λ_1)/(µ_1·µ_2)·p_0 = (λ·λ)/(µ·µ)·p_0 = (λ²/µ²)·p_0
p_3 = (λ³/µ³)·p_0
...      (4.2)
p_i = (λ^i/µ^i)·p_0
...
p_n = (λ^n/µ^n)·p_0
...

Now let us try to calculate the probability p_0. For this purpose, the following expression can be given:

p_0 + p_1 + p_2 + ... + p_n + ... = 1
p_0 + (λ/µ)·p_0 + (λ²/µ²)·p_0 + ... + (λ^n/µ^n)·p_0 + ... = 1
p_0·[1 + λ/µ + λ²/µ² + ... + λ^n/µ^n + ...] = 1      (4.3)
p_0 = 1 / (1 + λ/µ + λ²/µ² + ... + λ^n/µ^n + ...) = 1/S

Naturally, the result (4.3) could also be directly calculated from the result (3.46). Now, if we introduce the utilization factor from (2.11), the expression for p_0 takes the following form:

p_0 = 1 / (1 + ρ + ρ² + ... + ρ^n + ...) = 1/S (4.4)

The condition for the existence of the stationary distribution is [Hudoklin, Winston]:

S = 1 + ρ + ρ² + ρ³ + ... < ∞ (4.5)

which implies:

ρ < 1, i.e. λ/µ < 1, i.e. λ < µ (4.6)

Since the series S is a geometric series, it can also be written in the following form:

S = 1/(1 − ρ) (4.7)

which implies:

p_0 = 1/S = 1 − ρ = 1 − λ/µ (4.8)

If the result (4.8) is considered in the probabilities (4.2), we get:

p_1 = ρ·(1 − ρ)
p_2 = ρ²·(1 − ρ)
p_3 = ρ³·(1 − ρ)
...      (4.9)
p_i = ρ^i·(1 − ρ)
...
p_n = ρ^n·(1 − ρ)
...

The derived results can also be written more compactly:

p_i = ρ^i·(1 − ρ), i = 0, 1, 2, ..., with ρ = λ/µ (4.10)

and represent the stationary probability distribution for the number of customers in the system (the stationary probability of i customers in the system, i.e. of the i-th state of the system).

The derivation of basic statistical parameters

In what follows, we are going to derive the basic statistical parameters, which are the average number of customers in the system E(N) = L, the average number of customers in the queue E(N_q) = L_q, the average customer time spent in the system E(W), and the average customer waiting time in the queue E(W_q) [Bose, Gross].

The average number of customers in the system E(N) = L is calculated by means of the following expression [Hudoklin, Bose, Gross]:

E(N(t)) = L = Σ_{i=0}^{∞} i·p_i = 0·p_0 + 1·p_1 + 2·p_2 + ... =
            = 0·(1 − ρ) + 1·ρ·(1 − ρ) + 2·ρ²·(1 − ρ) + ... =
            = (1 − ρ)·{0 + ρ + 2·ρ² + 3·ρ³ + ...} =      (4.11)
            = (1 − ρ)·Σ_{i=0}^{∞} i·ρ^i

It can be shown that the following relation holds [Bose, Gross]:

Σ_{i=0}^{∞} i·ρ^i = ρ/(1 − ρ)², if ρ < 1 (4.12)

which implies:

E(N(t)) = L = (1 − ρ)·ρ/(1 − ρ)² = ρ/(1 − ρ) = (λ/µ)/(1 − λ/µ) = λ/(µ − λ) (4.13)

Now, the next step is to calculate the average customer time spent in the system E(W), where Little's law (2.13) can be used:

E(W) = L/λ = [λ/(µ − λ)]/λ = 1/(µ − λ) (4.14)

Based on the expression (2.15), the average customer waiting time in the queue E(W_q) can also be calculated:

E(W_q) = E(W) − 1/µ = 1/(µ − λ) − 1/µ = [µ − (µ − λ)]/[µ·(µ − λ)] = λ/[µ·(µ − λ)] (4.15)

Finally, the average number of customers in the queue E(N_q) = L_q is going to be calculated. For this purpose, Little's law is used again, this time based on the expression (2.14), which leads us to the following result:

L_q = E(N_q) = λ·E(W_q) = λ·λ/[µ·(µ − λ)] = λ²/[µ·(µ − λ)] = λ²/[µ²·(1 − λ/µ)] = ρ²/(1 − ρ) (4.16)

The probability of the server being occupied

Now let us calculate the probability that the server is occupied. Since we have:

P(server occupied) + P(server free) = 1, where P(server free) = p_0 (4.17)

and p_0 is the probability of the empty system, it follows:

P(server occupied) = 1 − p_0 = 1 − (1 − ρ) = ρ (4.18)

So the probability of the server being occupied is equal to the utilization factor.

The probability of more than N customers in the system

Finally, we are going to calculate the probability P(i > N), which means that there are more than N customers in the system:

P(i > N) = Σ_{i=N+1}^{∞} p_i = Σ_{i=N+1}^{∞} ρ^i·(1 − ρ) = ρ^(N+1)·(1 − ρ)·Σ_{i=N+1}^{∞} ρ^i/ρ^(N+1) =
         = ρ^(N+1)·(1 − ρ)·Σ_{i=N+1}^{∞} ρ^(i−(N+1))
(4.19)

Let us introduce a new variable s = i − (N + 1), which leads us to:

P(i > N) = ρ^(N+1)·(1 − ρ)·Σ_{s=0}^{∞} ρ^s = ρ^(N+1)·(1 − ρ)·1/(1 − ρ)
(4.20)
P(i > N) = ρ^(N+1) = (λ/µ)^(N+1)
where the relationship for the geometric series was also used.
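The main M/M/1 results of this section can be collected into a short function; the parameter values in the example call are assumed for illustration.

    # Minimal sketch of the M/M/1 results (4.13)-(4.16), (4.18) and (4.20).
    def mm1_metrics(lam, mu, N):
        rho = lam / mu                      # utilization factor (2.11)
        assert rho < 1, "steady state requires lam < mu"
        L = rho / (1 - rho)                 # avg. number in system (4.13)
        W = 1 / (mu - lam)                  # avg. time in system (4.14)
        Wq = lam / (mu * (mu - lam))        # avg. waiting time in queue (4.15)
        Lq = rho**2 / (1 - rho)             # avg. number in queue (4.16)
        p_busy = rho                        # server occupied (4.18)
        p_more_than_N = rho ** (N + 1)      # P(i > N), eq. (4.20)
        return L, W, Wq, Lq, p_busy, p_more_than_N

    print(mm1_metrics(lam=2.0, mu=3.0, N=4))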

4.1.2 Model M/M/1 with limited waiting space

Similarly as in the case of the basic model, a modified model can be derived for the situation when there is limited waiting space. Here, we are going to treat the situation where the arrival rate is constant until the waiting space becomes fully loaded. At that moment, the arrival rate immediately falls to 0, and new potential customers will definitely go away unserved. Typical systems of this kind are, for example, automotive services, laundry services, etc. [Hudoklin]. The state transition diagram of the model M/M/1 with limited waiting space is shown in figure 30, where the maximum possible number of customers in the system is equal to n, i.e. the states are 0, 1, ..., n.

Figure 30: The state transition diagram of the model M/M/1 with limited waiting space (the total number of customers in the system can be at most n, i.e. the states 0, 1, ..., n)

So, we are dealing with the situation where maximally n − 1 customers can be in the waiting queue, while one customer is in service. For the arrival and departure rates, the following expressions can be written [Hudoklin, Gross]:

λ_i = λ for i < n, and λ_i = 0 for i ≥ n   ⇒   λ_0 = λ_1 = λ_2 = ... = λ_{n−1} = λ
µ_i = µ, i = 1, 2, ..., n.
(4.21)

The derivation of steady-state probabilities

As in the case of the basic model, we are first going to calculate the steady-state probabilities for the number of customers present in the system. The expressions in (4.2) now take the following form:

p_i = (λ^i/µ^i)·p_0 = ρ^i·p_0, i = 1, 2, ..., n (4.22)

The expression for p_0, which was given for the basic model in (4.4), now takes the following form:

p_0 = 1/(1 + ρ + ρ² + ... + ρ^n) = 1/S (4.23)

It can be shown that for the series S the following expression can be given [Gross, Bose]:

S = 1 + ρ + ρ² + ... + ρ^n = Σ_{i=0}^{n} ρ^i = (1 − ρ^(n+1))/(1 − ρ) (4.24)

Thus, the probability p_0 is:

p_0 = 1/S = (1 − ρ)/(1 − ρ^(n+1)) (4.25)

and the stationary probability distribution for the number of i ≤ n customers in the system is:

p_i = ρ^i·(1 − ρ)/(1 − ρ^(n+1)), i = 0, 1, 2, ..., n (4.26)

The derivation of basic statistical parameters

The derivation of the basic statistical parameters is now a little more complicated than it was in the case of the basic model. In what follows, we will just show how the average number of customers in the system E(N) = L can be derived. The details of the derivation of the other statistical parameters (E(N_q) = L_q, E(W) and E(W_q)) can be found in the literature [Gross, Bose, Hudoklin, Stewart, Dragan].

At first, let us apply the following expression:

L = E(N) = Σ_{i=0}^{n} i·p_i = Σ_{i=0}^{n} i·ρ^i·(1 − ρ)/(1 − ρ^(n+1)) = (1 − ρ)/(1 − ρ^(n+1))·Σ_{i=0}^{n} i·ρ^i =
         = (1 − ρ)/(1 − ρ^(n+1))·ρ·Σ_{i=0}^{n} i·ρ^(i−1)
(4.27)

Since we know that the following relation holds:

d(ρ^i)/dρ = i·ρ^(i−1) (4.28)
we obviously have:

L = E(N) = (1 − ρ)/(1 − ρ^(n+1))·ρ·Σ_{i=0}^{n} d(ρ^i)/dρ =
         = (1 − ρ)/(1 − ρ^(n+1))·ρ·d/dρ[Σ_{i=0}^{n} ρ^i]
(4.29)

Due to the relation (4.24), we can write:

L = (1 − ρ)/(1 − ρ^(n+1))·ρ·d/dρ[(1 − ρ^(n+1))/(1 − ρ)] (4.30)
which leads to:

L = (1 − ρ)/(1 − ρ^(n+1))·ρ·[−(n + 1)·ρ^n·(1 − ρ) − (1 − ρ^(n+1))·(−1)]/(1 − ρ)² =
  = (1 − ρ)/(1 − ρ^(n+1))·ρ·[−(n + 1)·ρ^n·(1 − ρ) + (1 − ρ^(n+1))]/(1 − ρ)² =
  = (1 − ρ)/(1 − ρ^(n+1))·ρ·[−(n + 1)·ρ^n + (n + 1)·ρ^(n+1) + 1 − ρ^(n+1)]/(1 − ρ)² =
  = (1 − ρ)/(1 − ρ^(n+1))·ρ·[1 − (n + 1)·ρ^n + n·ρ^(n+1)]/(1 − ρ)² =      (4.31)
  = 1/(1 − ρ^(n+1))·ρ·[1 − (n + 1)·ρ^n + n·ρ^(n+1)]/(1 − ρ) =
  = ρ·[1 − (n + 1)·ρ^n + n·ρ^(n+1)] / [(1 − ρ)·(1 − ρ^(n+1))]

So the average number of customers in the system E(N) = L is equal to:

L = E(N) = ρ·[1 − (n + 1)·ρ^n + n·ρ^(n+1)] / [(1 − ρ)·(1 − ρ^(n+1))] (4.32)

4.1.3 Model M/M/1 (gradually "scared" or "balked" customers)

In this case, the arrival process depends on the number of customers already present in the system. The rate at which customers arrive at the facility decreases gradually when the facility becomes too crowded. For example, if a customer sees that the bank parking lot is almost full, he might change his mind, pass by, and come another day. If a customer arrives but fails to enter the system, we say that the customer has balked. In this kind of system, we treat the situation where the arrival rate gradually decreases: the larger the number of customers already present in the system, the more "balked" ("scared") a new potential customer will become. Let us assume that in this case the arrival rate decreases in proportion to the increasing queue length. The state transition diagram of the model M/M/1 with gradually "scared" or "balked" customers is shown in figure 31 [Hudoklin].

Figure 31: The state transition diagram of the model M/M/1 with gradually "scared" or
"balked" customers.

For the arrival and departure rates the following expressions can be written [Hudoklin]:

$$
\lambda_i = \frac{\lambda}{i+1}, \quad i = 0, 1, \dots \qquad (4.33)
$$
$$
\mu_i = \mu, \quad i = 1, 2, \dots
$$

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 31 (Rate in = Rate out
Principle):

$$
\begin{aligned}
\mu \, p_1 &= \lambda \, p_0 \\
\mu \, p_2 &= \frac{\lambda}{2} \, p_1 \\
\mu \, p_3 &= \frac{\lambda}{3} \, p_2 \\
&\;\;\vdots \\
\mu \, p_i &= \frac{\lambda}{i} \, p_{i-1} \\
&\;\;\vdots \\
\mu \, p_n &= \frac{\lambda}{n} \, p_{n-1} \\
&\;\;\vdots
\end{aligned} \qquad (4.34)
$$

which leads us to:

$$
\begin{aligned}
p_1 &= \frac{\lambda}{\mu} \, p_0 \\
p_2 &= \frac{\lambda}{2\mu} \, p_1 = \frac{\lambda}{2\mu} \cdot \frac{\lambda}{\mu} \, p_0 = \frac{\lambda^2}{2! \, \mu^2} \, p_0 \\
p_3 &= \frac{\lambda}{3\mu} \, p_2 = \frac{\lambda}{3\mu} \cdot \frac{\lambda}{2\mu} \cdot \frac{\lambda}{\mu} \, p_0 = \frac{\lambda^3}{3! \, \mu^3} \, p_0 \\
&\;\;\vdots \\
p_i &= \frac{\lambda^i}{i! \, \mu^i} \, p_0 = \frac{\rho^i}{i!} \, p_0 \\
&\;\;\vdots \\
p_n &= \frac{\lambda^n}{n! \, \mu^n} \, p_0 = \frac{\rho^n}{n!} \, p_0
\end{aligned} \qquad (4.35)
$$

Thus, in general we have:


$$
p_i = \frac{\rho^i}{i!} \, p_0, \quad i = 0, 1, 2, \dots \qquad (4.36)
$$

The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

$$
\begin{aligned}
p_0 + p_1 + p_2 + \dots + p_i + \dots &= 1 \\
p_0 + \frac{\rho}{1!} \, p_0 + \frac{\rho^2}{2!} \, p_0 + \dots + \frac{\rho^i}{i!} \, p_0 + \dots &= 1 \\
p_0 \left(1 + \frac{\rho}{1!} + \frac{\rho^2}{2!} + \dots + \frac{\rho^i}{i!} + \dots \right) &= 1 \\
p_0 = \frac{1}{S} = \frac{1}{\displaystyle\sum_{i=0}^{\infty} \frac{\rho^i}{i!}} = \frac{1}{e^{\rho}} &= e^{-\rho}
\end{aligned} \qquad (4.37)
$$

and the stationary probability distribution for the number of customers i in the system
is:

$$
p_i = \frac{\rho^i}{i!} \, e^{-\rho}, \quad i = 0, 1, 2, \dots \qquad (4.38)
$$

So, the stationary probability of the system being in the i-th state is in this case governed
by the Poisson distribution.

The derivation of basic statistical parameters

In the sequel, we only show how the average number of customers in the system
E(N) = L can be derived. The details of the derivation of the other statistical parameters
(E(N_q) = L_q, E(W) and E(W_q)) can be found in the literature [Bhat, Gross, Bose,
Stewart, Ross, Takacz].

At first, let us apply the following expression:

$$
L = E(N) = \sum_{i=0}^{\infty} i \, p_i = \underbrace{0 \cdot p_0}_{0} + \sum_{i=1}^{\infty} i \, \frac{\rho^i}{i!} \, e^{-\rho}
= e^{-\rho} \, \rho \sum_{i=1}^{\infty} \frac{i \, \rho^{i-1}}{i!}
= e^{-\rho} \, \rho \sum_{i=1}^{\infty} \frac{\rho^{i-1}}{(i-1)!} \qquad (4.39)
$$

Now, let us introduce a new variable m = i − 1 , which leads us to:


$$
L = e^{-\rho} \, \rho \sum_{m=0}^{\infty} \frac{\rho^m}{m!} = e^{-\rho} \, \rho \, e^{\rho} = \rho \qquad (4.40)
$$


So the average number of customers in the system is equal to the utilization factor ρ = λ/μ.

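A minimal numerical check of (4.38) and (4.40) follows; λ and μ are illustrative assumptions, and the infinite sum is truncated, since the Poisson terms decay very quickly.

```python
import math

# Sketch of the balking model: the stationary distribution (4.38) is Poisson
# and its mean (4.40) equals rho; lam and mu are illustrative assumptions.
lam, mu = 2.0, 3.0
rho = lam / mu

p = [rho**i / math.factorial(i) * math.exp(-rho) for i in range(60)]
L = sum(i * pi for i, pi in enumerate(p))   # truncated version of the sum (4.39)
print(L, rho)                               # L should equal rho, eq. (4.40)
```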
4.1.4 Model M/M/1 with limited number of customers

In this case we are dealing with a limited number n of customers which can visit the
queueing system. Naturally, if there are k customers already in the system, only n − k
customers remain in the origin population. The state transition diagram of the model M/M/1
with a limited number of customers is shown in figure 32 [Bose, Gross, Stewart].

Figure 32: The state transition diagram of the model M/M/1 with limited number of
customers.

A typical application of this kind of system is the maintenance and repair of machinery. Let
us suppose that we have n machines of the same type. We also assume that the mechanic
works all the time and does not lose any time walking from one machine to another. Let the
failure rate of each machine be λ, and the rate of completions of repairs be μ. The faulty
machines represent the customers waiting for repair. Naturally, if all machines are working
properly, the queueing system is empty (state 0 in figure 32), and the arrival rate is n·λ
(which means that n machines can become faulty at any time in the future). If one machine
becomes faulty and goes to repair, the system jumps into state 1 in figure 32, and the
arrival rate is now (n − 1)·λ (which means that n − 1 machines can become faulty at any time
in the future). The same logic can be applied further to the other states and arrival rates
of the system. The working mechanism of the model M/M/1 with a limited number of customers
is shown in figure 33.

Figure 33: The working mechanism of the model M/M/1 with limited number of customers.

For the arrival and departure rates the following expressions can be written [Hudoklin,
Gross]:


$$
\lambda_i = \begin{cases} (n-i)\,\lambda, & i = 0, 1, 2, \dots, n \\ 0, & i > n \end{cases} \qquad (4.41)
$$
$$
\mu_i = \mu, \quad i = 1, 2, \dots
$$

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 32 (Rate in = Rate out
Principle):
$$
\begin{aligned}
\mu \, p_1 &= n \, \lambda \, p_0 \\
\mu \, p_2 &= (n-1) \, \lambda \, p_1 \\
\mu \, p_3 &= (n-2) \, \lambda \, p_2 \\
&\;\;\vdots \\
\mu \, p_k &= \bigl(n-(k-1)\bigr) \, \lambda \, p_{k-1} \\
&\;\;\vdots \\
\mu \, p_n &= \bigl(n-(n-1)\bigr) \, \lambda \, p_{n-1}
\end{aligned} \qquad (4.42)
$$

which leads us to:
$$
\begin{aligned}
p_1 &= \frac{n\,\lambda}{\mu} \, p_0 \\
p_2 &= \frac{(n-1)\,\lambda}{\mu} \, p_1 \\
p_3 &= \frac{(n-2)\,\lambda}{\mu} \, p_2 \\
&\;\;\vdots \\
p_k &= \frac{\bigl(n-(k-1)\bigr)\,\lambda}{\mu} \, p_{k-1} \\
&\;\;\vdots \\
p_n &= \frac{\bigl(n-(n-1)\bigr)\,\lambda}{\mu} \, p_{n-1}
\end{aligned} \qquad (4.43)
$$

If these expressions are slightly modified, we have:

$$
\begin{aligned}
p_1 &= \frac{n\,\lambda}{\mu} \, p_0 \\
p_2 &= \frac{(n-1)\,\lambda}{\mu} \cdot \frac{n\,\lambda}{\mu} \, p_0 = (n-1)\,n\,\frac{\lambda^2}{\mu^2} \, p_0 \\
p_3 &= \frac{(n-2)\,\lambda}{\mu} \cdot \frac{(n-1)\,\lambda}{\mu} \cdot \frac{n\,\lambda}{\mu} \, p_0 = (n-2)(n-1)\,n\,\frac{\lambda^3}{\mu^3} \, p_0 \\
&\;\;\vdots \\
p_k &= \underbrace{(n-k+1)(n-k+2) \cdots (n-2)(n-1)\,n}_{n!/(n-k)!} \, \frac{\lambda^k}{\mu^k} \, p_0 \\
&\;\;\vdots \\
p_n &= \underbrace{1 \cdot 2 \cdots n}_{n!} \, \frac{\lambda^n}{\mu^n} \, p_0
\end{aligned} \qquad (4.44)
$$
Thus, in general we have:

$$
p_k = \underbrace{(n-k+1) \cdots (n-2)(n-1)\,n}_{n!/(n-k)!} \, \rho^k \, p_0, \quad k = 1, \dots, n \;\text{ and }\; \rho = \frac{\lambda}{\mu} \qquad (4.45)
$$

The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

$$
\begin{aligned}
p_0 + p_1 + \dots + p_k + \dots + p_n &= 1 \\
p_0 + n\,\rho\,p_0 + (n-1)\,n\,\rho^2\,p_0 + \dots + n!\,\rho^n\,p_0 &= 1 \\
p_0 \left[1 + n\,\rho + (n-1)\,n\,\rho^2 + \dots + n!\,\rho^n\right] &= 1 \\
p_0 = \frac{1}{1 + n\,\rho + (n-1)\,n\,\rho^2 + \dots + n!\,\rho^n} &= \frac{1}{S}
\end{aligned} \qquad (4.46)
$$

and the stationary probability distribution for the number of customers in the system is:

$$
p_k = \frac{\dfrac{n!}{(n-k)!}\,\rho^k}{1 + n\,\rho + (n-1)\,n\,\rho^2 + \dots + n!\,\rho^n}, \quad k = 1, \dots, n \;\text{ and }\; \rho = \frac{\lambda}{\mu} \qquad (4.47)
$$

The derivation of basic statistical parameters

In general, the average number of customers in the system E(N) = L can be derived by
using the expression:

$$
L = E(N) = \sum_{k=0}^{n} k \, p_k = \sum_{k=0}^{n} k \, (n-k+1) \cdots (n-1)\,n \, \rho^k \, p_0
= \sum_{k=0}^{n} k \, \frac{n!}{(n-k)!} \, \rho^k \, p_0 \qquad (4.48)
$$

It is not easy to simplify the expression (4.48). It can be shown that, after a more extensive
derivation, we get the following result [Hillier, Hudoklin]:

$$
L = E(N) = n - \frac{1}{\rho}\,(1 - p_0) \qquad (4.49)
$$

The details of the derivation of the other statistical parameters (E(N_q) = L_q, E(W) and
E(W_q)) can be found in the literature [Bhat, Gross, Bose, Stewart, Hillier, Winston].

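A minimal numerical sketch of the machine-repair example follows; the rates λ, μ and the number of machines n are illustrative assumptions, not values from the text.

```python
import math

# Sketch of the machine-repair model with one mechanic: p0 from (4.46),
# p_k from (4.47), and the closed form (4.49); lam, mu, n are illustrative.
lam, mu, n = 0.1, 1.0, 6
rho = lam / mu

# Unnormalized terms n!/(n-k)! * rho**k from (4.45)
terms = [math.factorial(n) // math.factorial(n - k) * rho**k for k in range(n + 1)]
p0 = 1.0 / sum(terms)                       # eq. (4.46)
p = [t * p0 for t in terms]                 # eq. (4.47)

L_direct = sum(k * pk for k, pk in enumerate(p))   # eq. (4.48)
L_closed = n - (1.0 / rho) * (1 - p0)              # eq. (4.49)
print(L_direct, L_closed)                          # both values should coincide
```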
4.1.5 Model M/M/1 with additional server for longer queues

In practice it can often happen that the number of serving places depends on the number of
customers in the system (for example, the number of store cashiers, the number of bank
counters, etc.). Let us suppose that we have a single server spot as long as the number of
customers is ≤ n. As soon as the number of customers exceeds the value n, an additional
server spot is employed. The latter is closed again as soon as the number of customers
drops to ≤ n. The state transition diagram of the model M/M/1 with an additional server
for longer queues is shown in figure 34.

Figure 34: The state transition diagram of the model M/M/1 with additional server for
longer queues

For the arrival and departure rates the following expressions can be written [Hudoklin]:

$$
\lambda_i = \lambda, \quad i = 0, 1, 2, \dots \qquad (4.50)
$$
$$
\mu_i = \begin{cases} \mu, & i = 1, \dots, n \\ 2\mu, & i = n+1, n+2, \dots \end{cases}
$$

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 34 (Rate in = Rate out
Principle):
$$
\begin{aligned}
\mu \, p_1 &= \lambda \, p_0 \\
\mu \, p_2 &= \lambda \, p_1 \\
\mu \, p_3 &= \lambda \, p_2 \\
&\;\;\vdots \\
\mu \, p_i &= \lambda \, p_{i-1} \\
&\;\;\vdots \\
\mu \, p_n &= \lambda \, p_{n-1} \\
2\mu \, p_{n+1} &= \lambda \, p_n \\
2\mu \, p_{n+2} &= \lambda \, p_{n+1} \\
&\;\;\vdots
\end{aligned} \qquad (4.51)
$$

which implies for the states between 0 and n:

$$
\begin{aligned}
p_1 &= \frac{\lambda}{\mu} \, p_0 \\
p_2 &= \frac{\lambda}{\mu} \, p_1 = \left(\frac{\lambda}{\mu}\right)^2 p_0 \\
p_3 &= \frac{\lambda}{\mu} \, p_2 = \left(\frac{\lambda}{\mu}\right)^3 p_0 \\
&\;\;\vdots \\
p_i &= \left(\frac{\lambda}{\mu}\right)^i p_0 \\
&\;\;\vdots \\
p_n &= \left(\frac{\lambda}{\mu}\right)^n p_0
\end{aligned} \qquad (4.52)
$$

while for the states bigger than n we have:

$$
\begin{aligned}
p_{n+1} &= \frac{\lambda}{2\mu} \, p_n = \frac{\lambda}{2\mu} \left(\frac{\lambda}{\mu}\right)^n p_0 \\
p_{n+2} &= \frac{\lambda}{2\mu} \, p_{n+1} = \left(\frac{\lambda}{2\mu}\right)^2 \left(\frac{\lambda}{\mu}\right)^n p_0 \\
p_{n+3} &= \frac{\lambda}{2\mu} \, p_{n+2} = \left(\frac{\lambda}{2\mu}\right)^3 \left(\frac{\lambda}{\mu}\right)^n p_0
\end{aligned} \qquad (4.53)
$$

Thus, in general we have [Hudoklin]:

$$
p_i = \begin{cases} \left(\dfrac{\lambda}{\mu}\right)^i p_0, & i = 0, 1, \dots, n \\[2mm] \left(\dfrac{\lambda}{2\mu}\right)^{i-n} \left(\dfrac{\lambda}{\mu}\right)^n p_0, & i = n+1, n+2, \dots \end{cases} \qquad (4.54)
$$

If the utilization factor ρ = λ/μ is additionally applied, we get the following form:

$$
p_i = \begin{cases} \rho^i \, p_0, & i = 0, 1, \dots, n \\[1mm] \left(\dfrac{\rho}{2}\right)^{i-n} \rho^n \, p_0, & i = n+1, n+2, \dots \end{cases}
\;\Rightarrow\;
p_i = \begin{cases} \rho^i \, p_0, & i = 0, 1, \dots, n \\[1mm] \dfrac{\rho^{i-n} \cdot \rho^n}{2^{i-n}} \, p_0 = \dfrac{\rho^i \, p_0}{2^{i-n}}, & i = n+1, n+2, \dots \end{cases} \qquad (4.55)
$$

The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

$$
\begin{aligned}
p_0 + p_1 + \dots + p_n + p_{n+1} + p_{n+2} + \dots &= 1 \\
p_0 + \rho \, p_0 + \rho^2 \, p_0 + \dots + \rho^n \, p_0 + \frac{\rho^{n+1}}{2^1} \, p_0 + \frac{\rho^{n+2}}{2^2} \, p_0 + \dots &= 1 \\
p_0 \Bigl(\underbrace{1 + \rho + \rho^2 + \dots + \rho^n}_{\sum_{i=0}^{n} \rho^i} + \underbrace{\frac{\rho^{n+1}}{2^1} + \frac{\rho^{n+2}}{2^2} + \dots}_{\sum_{i=n+1}^{\infty} \rho^i / 2^{i-n}}\Bigr) &= 1 \\
p_0 = \frac{1}{\displaystyle\sum_{i=0}^{n} \rho^i + \sum_{i=n+1}^{\infty} \frac{\rho^i}{2^{i-n}}} &= \frac{1}{S}
\end{aligned} \qquad (4.56)
$$

The series S can be slightly modified, which leads us to:

$$
S = \sum_{i=0}^{n} \rho^i + \sum_{i=n+1}^{\infty} \frac{\rho^i}{2^{i-n}}
= \underbrace{\sum_{i=0}^{n-1} \rho^i}_{\frac{1-\rho^n}{1-\rho}} + \sum_{i=n}^{\infty} \frac{\rho^i}{2^{i-n}}
= \frac{1-\rho^n}{1-\rho} + \rho^n \sum_{i=n}^{\infty} \left(\frac{\rho}{2}\right)^{i-n} \qquad (4.57)
$$

If now the new variable i − n = m is introduced, we get:

$$
S = \frac{1-\rho^n}{1-\rho} + \rho^n \underbrace{\sum_{m=0}^{\infty} \left(\frac{\rho}{2}\right)^m}_{\text{geometric series}}
= \frac{1-\rho^n}{1-\rho} + \frac{\rho^n}{1-\frac{\rho}{2}}
= \frac{1-\rho^n}{1-\rho} + \frac{2\,\rho^n}{2-\rho} \qquad (4.58)
$$

and consequently:

$$
S = \frac{(1-\rho^n)(2-\rho) + 2\,\rho^n (1-\rho)}{(1-\rho)(2-\rho)}
= \frac{2 - \rho - 2\,\rho^n + \rho^{n+1} + 2\,\rho^n - 2\,\rho^{n+1}}{(1-\rho)(2-\rho)}
= \frac{2 - \rho - \rho^{n+1}}{(1-\rho)(2-\rho)} \qquad (4.59)
$$

So, the expression for p0 in this case can be written as follows [Hudoklin, Gross]:

$$
p_0 = \frac{1}{S} = \frac{(1-\rho)(2-\rho)}{2 - \rho - \rho^{n+1}}, \qquad \rho = \frac{\lambda}{\mu} \qquad (4.60)
$$

and the stationary probability distribution for the number of customers i in the system
is [Hudoklin, Gross]:

$$
p_i = \begin{cases} \rho^i \cdot \dfrac{(1-\rho)(2-\rho)}{2-\rho-\rho^{n+1}}, & i = 0, 1, \dots, n \\[3mm] \dfrac{\rho^i}{2^{i-n}} \cdot \dfrac{(1-\rho)(2-\rho)}{2-\rho-\rho^{n+1}}, & i = n+1, n+2, \dots \end{cases}
\quad \text{where } \rho = \frac{\lambda}{\mu} \qquad (4.61)
$$

The derivation of basic statistical parameters

As it turns out, the derivation of the basic statistical parameters is in this case not so
easy to carry out. The details of the derivation of the statistical parameters (E(N_q) = L_q,
E(W) and E(W_q)) can be found in the literature [Bhat, Gross, Bose, Stewart, Ross, Takacz].

Let us just mention that, for instance, the most suitable way to calculate the average number
of customers in the system E(N) = L is to use the following expression [Hudoklin, Gross]:

$$
E(N) = L = \sum_{i=0}^{\infty} i \, p_i \qquad (4.62)
$$

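A numerical sketch of this model follows; λ, μ, n and the truncation point of the infinite sum (4.62) are illustrative assumptions (the tail terms decay like (ρ/2)^i, so the truncation error is negligible as long as ρ < 2).

```python
# Sketch of the M/M/1 model with an additional server for longer queues:
# p0 from (4.60), p_i from (4.61), L from (4.62); lam, mu, n are illustrative
# assumptions. Stability requires rho < 2, since two servers work for i > n.
lam, mu, n = 3.0, 2.5, 4
rho = lam / mu

p0 = (1 - rho) * (2 - rho) / (2 - rho - rho**(n + 1))      # eq. (4.60)

def p(i):
    # stationary probabilities from (4.61)
    if i <= n:
        return rho**i * p0
    return rho**i / 2**(i - n) * p0

# The infinite sum (4.62) is truncated; its terms decay geometrically
L = sum(i * p(i) for i in range(300))
print(p0, L, sum(p(i) for i in range(300)))   # the last value should be ~1
```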
4.2 Multiple channel queueing models

The main property of the multiple channel queueing models is that they have multiple
servers (more than one server). For all models treated in this chapter we suppose that the
input arrival process is a Poisson process and the service times of the servers are distributed
exponentially, while the queue discipline is FIFO (first in, first out).

4.2.1 Basic model M/M/r

For the basic model, the following assumptions must be made [Hudoklin]:
• The population of customers is infinite, and
• The waiting space is infinitely big.

Let us suppose that we have r servers in the system, which operate independently of
each other. We also assume that the arriving customers form one single queue in the sequence
in which they arrive into the system. As soon as a certain service spot becomes empty, the
customer who is first in the queue enters that spot. If more than one server is free, the
customer can go to any of them. The input arrival process is a Poisson process with the
arrival rate λ, while the service times of the servers are random variables which are, due to
the equivalence of the servers, distributed exponentially, where the departure rate (mean
service rate) of each server is μ. While the number of customers is lower than or equal to the
number of servers (i ≤ r), all the customers are simultaneously in service. But as soon as
i > r, the number of customers becomes bigger than the number of servers, which implies that
besides the r simultaneously served customers, i − r customers will have to wait. The
corresponding system is shown in figure 35, and its state transition diagram is shown in
figure 36.

Figure 35: The basic model M/M/r

Figure 36: The state transition diagram of the basic model M/M/r

For the arrival and departure rates the following expressions can be written [Hudoklin, Gross,
Winston, Hillier]:
$$
\lambda_i = \lambda, \quad i = 0, 1, 2, \dots \qquad (4.63)
$$
$$
\mu_i = \begin{cases} i\,\mu, & 1 \le i < r \\ r\,\mu, & i \ge r \end{cases}
$$

where μ is the mean service rate of an individual server.

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 36 (Rate in = Rate out
Principle):

$$
\begin{aligned}
\mu \, p_1 &= \lambda \, p_0 \\
2\mu \, p_2 &= \lambda \, p_1 \\
3\mu \, p_3 &= \lambda \, p_2 \\
&\;\;\vdots \\
i\,\mu \, p_i &= \lambda \, p_{i-1} \\
&\;\;\vdots \\
(r-1)\,\mu \, p_{r-1} &= \lambda \, p_{r-2} \\
r\,\mu \, p_r &= \lambda \, p_{r-1} \\
r\,\mu \, p_{r+1} &= \lambda \, p_r \\
r\,\mu \, p_{r+2} &= \lambda \, p_{r+1} \\
&\;\;\vdots
\end{aligned} \qquad (4.64)
$$
which leads us to:

$$
\begin{aligned}
p_1 &= \frac{\lambda}{\mu} \, p_0 \\
p_2 &= \frac{\lambda}{2\mu} \, p_1 = \frac{\lambda}{2\mu} \cdot \frac{\lambda}{\mu} \, p_0 = \frac{\lambda^2}{2! \, \mu^2} \, p_0 \\
p_3 &= \frac{\lambda}{3\mu} \, p_2 = \frac{\lambda^3}{3! \, \mu^3} \, p_0 \\
&\;\;\vdots \\
p_i &= \frac{\lambda}{i\,\mu} \, p_{i-1} = \frac{\lambda^i}{i! \, \mu^i} \, p_0 \\
&\;\;\vdots \\
p_{r-1} &= \frac{\lambda}{(r-1)\,\mu} \, p_{r-2} = \frac{\lambda^{r-1}}{(r-1)! \, \mu^{r-1}} \, p_0 \\
p_r &= \frac{\lambda}{r\,\mu} \, p_{r-1} = \frac{\lambda^r}{r! \, \mu^r} \, p_0 \\
p_{r+1} &= \frac{\lambda}{r\,\mu} \, p_r = \frac{\lambda}{r\,\mu} \cdot \frac{\lambda^r}{r! \, \mu^r} \, p_0 \\
p_{r+2} &= \frac{\lambda}{r\,\mu} \, p_{r+1} = \left(\frac{\lambda}{r\,\mu}\right)^2 \frac{\lambda^r}{r! \, \mu^r} \, p_0 \\
&\;\;\vdots \text{ etc.}
\end{aligned} \qquad (4.65)
$$

Thus, in general we have:

$$
\begin{aligned}
p_i &= \frac{\lambda^i}{i! \, \mu^i} \, p_0, \quad i = 0, 1, 2, \dots, r \\
p_r &= \frac{\lambda^r}{r! \, \mu^r} \, p_0 \\
p_j &= \left(\frac{\lambda}{r\,\mu}\right)^{j-r} p_r = \left(\frac{\lambda}{r\,\mu}\right)^{j-r} \frac{\lambda^r}{r! \, \mu^r} \, p_0, \quad j = r+1, r+2, \dots
\end{aligned} \qquad (4.66)
$$

In the case of M/M/r system, the utilization factor is defined in the following way [Hudoklin,
Winston, Hillier]:
$$
\rho = \frac{\lambda}{r\,\mu} \;\Rightarrow\; \frac{\lambda}{\mu} = \rho \, r \qquad (4.67)
$$

so we have:

$$
\begin{aligned}
p_i &= \frac{(\rho\,r)^i}{i!} \, p_0, \quad i = 0, 1, \dots, r \\
p_r &= \frac{(\rho\,r)^r}{r!} \, p_0 \\
p_j &= \rho^{\,j-r} \, p_r = \rho^{\,j-r} \, \frac{(\rho\,r)^r}{r!} \, p_0 = \frac{\rho^{\,j} \, r^r}{r!} \, p_0, \quad j = r+1, r+2, \dots
\end{aligned} \qquad (4.68)
$$

and finally:

$$
p_i = \begin{cases} \dfrac{(\rho\,r)^i}{i!} \, p_0, & i = 0, 1, \dots, r \\[2mm] \dfrac{\rho^i \, r^r}{r!} \, p_0, & i = r+1, r+2, \dots \end{cases}
\quad \text{where } \rho = \frac{\lambda}{r\,\mu} \qquad (4.69)
$$

The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

$$
\begin{aligned}
p_0 + p_1 + \dots + p_r + p_{r+1} + p_{r+2} + \dots &= 1 \\
p_0 + \frac{\rho\,r}{1!} \, p_0 + \frac{(\rho\,r)^2}{2!} \, p_0 + \dots + \frac{(\rho\,r)^r}{r!} \, p_0 + \frac{\rho^{r+1} r^r}{r!} \, p_0 + \frac{\rho^{r+2} r^r}{r!} \, p_0 + \dots &= 1 \\
p_0 \left(1 + \frac{\rho\,r}{1!} + \frac{(\rho\,r)^2}{2!} + \dots + \frac{(\rho\,r)^r}{r!} + \frac{\rho^{r+1} r^r}{r!} + \frac{\rho^{r+2} r^r}{r!} + \dots \right) &= 1 \\
p_0 = \frac{1}{1 + \frac{\rho\,r}{1!} + \frac{(\rho\,r)^2}{2!} + \dots + \frac{(\rho\,r)^r}{r!} + \frac{\rho^{r+1} r^r}{r!} + \frac{\rho^{r+2} r^r}{r!} + \dots} &= \frac{1}{S}
\end{aligned} \qquad (4.70)
$$

The series S can be slightly modified:

$$
\begin{aligned}
S &= 1 + \frac{\rho\,r}{1!} + \frac{(\rho\,r)^2}{2!} + \dots + \frac{(\rho\,r)^r}{r!} + \frac{\rho^{r+1} r^r}{r!} + \frac{\rho^{r+2} r^r}{r!} + \dots \\
&= 1 + \sum_{i=1}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{\rho^r r^r}{r!} + \frac{\rho^{r+1} r^r}{r!} + \frac{\rho^{r+2} r^r}{r!} + \dots \\
&= 1 + \sum_{i=1}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{\rho^r r^r}{r!} \underbrace{\left(1 + \rho + \rho^2 + \dots\right)}_{\frac{1}{1-\rho}} \\
&= \sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r! \, (1-\rho)}
\end{aligned} \qquad (4.71)
$$

So the probability p0 is equal to:

$$
p_0 = \frac{1}{S} = \frac{1}{\displaystyle\sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!\,(1-\rho)}}, \qquad \rho = \frac{\lambda}{r\,\mu} \qquad (4.72)
$$

and the stationary probability distribution for the number of customers i in the system
is [Hudoklin, Gross]:

$$
p_i = \begin{cases} \dfrac{(\rho\,r)^i}{i!} \cdot \dfrac{1}{\displaystyle\sum_{j=0}^{r-1} \frac{(\rho\,r)^j}{j!} + \frac{(\rho\,r)^r}{r!\,(1-\rho)}}, & i = 0, 1, \dots, r \\[7mm] \dfrac{\rho^i \, r^r}{r!} \cdot \dfrac{1}{\displaystyle\sum_{j=0}^{r-1} \frac{(\rho\,r)^j}{j!} + \frac{(\rho\,r)^r}{r!\,(1-\rho)}}, & i = r+1, r+2, \dots \end{cases}
\quad \text{where } \rho = \frac{\lambda}{r\,\mu} \qquad (4.73)
$$

The condition for the existence of the stationary distribution is [Hudoklin, Winston]:

$$
\rho = \frac{\lambda}{r\,\mu} < 1 \qquad \text{or} \qquad \frac{\lambda}{\mu} < r \qquad (4.74)
$$

The derivation of basic statistical parameters

In the sequel, we derive the basic statistical parameters, which are, as we know, the
average number of customers in the system E(N) = L, the average number of customers in the
queue E(N_q) = L_q, the average customer time in the system E(W), and the average customer
waiting time in the queue E(W_q) [Bose, Gross].

For the average number of the customers in the system E ( N ) = L we can write the following

expression:

$$
\begin{aligned}
L = E(N) &= \sum_{i=0}^{\infty} i \, p_i = \sum_{i=0}^{r} i \, \frac{(\rho\,r)^i}{i!} \, p_0 + \sum_{i=r+1}^{\infty} i \, \frac{\rho^i \, r^r}{r!} \, p_0 \\
&= \left(\rho\,r \sum_{i=1}^{r} \frac{(\rho\,r)^{i-1}}{(i-1)!} + \frac{r^r}{r!} \sum_{i=r+1}^{\infty} i \, \rho^i \right) p_0
\end{aligned} \qquad (4.75)
$$

Since the following relation holds:

$$
\sum_{i=0}^{r} i\,\rho^i + \sum_{i=r+1}^{\infty} i\,\rho^i = \sum_{i=0}^{\infty} i\,\rho^i
\;\Rightarrow\;
\sum_{i=r+1}^{\infty} i\,\rho^i = \sum_{i=0}^{\infty} i\,\rho^i - \sum_{i=0}^{r} i\,\rho^i \qquad (4.76)
$$

after the introduction of a new variable i − 1 = m, we have:

$$
L = E(N) = p_0 \left(\rho\,r \sum_{m=0}^{r-1} \frac{(\rho\,r)^m}{m!} + \frac{r^r}{r!} \left[\sum_{i=0}^{\infty} i\,\rho^i - \sum_{i=0}^{r} i\,\rho^i\right]\right) \qquad (4.77)
$$

It can be shown that the following relation can be applied [Gross, Hsu, Stewart]:

$$
\sum_{i=0}^{r} i\,\rho^i = \frac{\rho \left[r\,\rho^{r+1} - (r+1)\,\rho^r + 1\right]}{(1-\rho)^2}, \quad \rho < 1 \qquad (4.78)
$$

so we have:

$$
L = E(N) = p_0 \left(\rho\,r \sum_{m=0}^{r-1} \frac{(\rho\,r)^m}{m!} + \frac{r^r}{r!} \left[\sum_{i=0}^{\infty} i\,\rho^i - \frac{\rho \left[r\,\rho^{r+1} - (r+1)\,\rho^r + 1\right]}{(1-\rho)^2}\right]\right) \qquad (4.79)
$$

If we now also consider the relation (4.12), we get the following expression:

 
 
$$
L = E(N) = p_0 \left(\rho\,r \sum_{m=0}^{r-1} \frac{(\rho\,r)^m}{m!} + \frac{r^r}{r!} \underbrace{\left[\frac{\rho}{(1-\rho)^2} - \frac{\rho \left[r\,\rho^{r+1} - (r+1)\,\rho^r + 1\right]}{(1-\rho)^2}\right]}_{F}\right) \qquad (4.80)
$$

In the sequel, let us try to simplify the expression for F:

$$
F = \frac{\rho - r\,\rho^{r+2} + (r+1)\,\rho^{r+1} - \rho}{(1-\rho)^2}
= \frac{-r\,\rho^{r+2} + r\,\rho^{r+1} + \rho^{r+1}}{(1-\rho)^2}
= \frac{r\,\rho^{r+1}(1-\rho) + \rho^{r+1}}{(1-\rho)^2}
= \frac{r\,\rho^{r+1}}{1-\rho} + \frac{\rho^{r+1}}{(1-\rho)^2} \qquad (4.81)
$$

so the expression for L becomes:

$$
\begin{aligned}
L &= p_0 \left(\rho\,r \sum_{m=0}^{r-1} \frac{(\rho\,r)^m}{m!} + \frac{r^r}{r!} \left[\frac{r\,\rho^{r+1}}{1-\rho} + \frac{\rho^{r+1}}{(1-\rho)^2}\right]\right) \\
&= p_0 \left(\rho\,r \sum_{m=0}^{r-1} \frac{(\rho\,r)^m}{m!} + \frac{(\rho\,r)^r \, \rho\,r}{r!\,(1-\rho)} + \frac{r^r \rho^{r+1}}{r!\,(1-\rho)^2}\right) \\
&= p_0 \left(\rho\,r \underbrace{\left[\sum_{m=0}^{r-1} \frac{(\rho\,r)^m}{m!} + \frac{(\rho\,r)^r}{r!\,(1-\rho)}\right]}_{S \, = \, 1/p_0} + \frac{\rho\,(\rho\,r)^r}{r!\,(1-\rho)^2}\right)
\end{aligned} \qquad (4.82)
$$

So, for the average number of customers in the system E(N) = L we have:

$$
L = E(N) = p_0 \left(\rho\,r \cdot \frac{1}{p_0} + \frac{\rho\,(\rho\,r)^r}{r!\,(1-\rho)^2}\right) = \rho\,r + p_0 \, \frac{\rho\,(\rho\,r)^r}{r!\,(1-\rho)^2}, \quad \text{where } \rho = \frac{\lambda}{r\,\mu} \qquad (4.83)
$$

Now, the next step is to calculate the average customer time in the system E(W), where
Little's law (2.13) can be used:

$$
E(W) = \frac{L}{\lambda} = \frac{1}{\lambda}\left(\rho\,r + p_0 \, \frac{\rho\,(\rho\,r)^r}{r!\,(1-\rho)^2}\right)
= \frac{1}{r\,\mu}\left(r + p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2}\right)
= \frac{1}{\mu} + p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2 \, r\,\mu} \qquad (4.84)
$$

Based on the expression (2.15), the average customer waiting time in the queue E(W_q) can
also be calculated:

$$
E(W_q) = E(W) - \frac{1}{\mu} = \frac{1}{\mu} + p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2 \, r\,\mu} - \frac{1}{\mu} = p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2 \, r\,\mu} \qquad (4.85)
$$

Finally, the average number of customers in the queue E(N_q) = L_q is calculated. For this
purpose, Little's law is used again, this time based on the expression (2.14), which leads us
to the following result:

$$
L_q = E(N_q) = \lambda \, E(W_q) = \lambda \, p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2 \, r\,\mu}
= \frac{\lambda}{r\,\mu} \, p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2}
= \rho \, p_0 \, \frac{(\rho\,r)^r}{r!\,(1-\rho)^2} \qquad (4.86)
$$

The probability of all servers being occupied

In the sequel, let us calculate the probability that all servers are occupied, which means
that the customer will have to wait.

For this purpose, we can apply the following expression:

$$
P(\text{all } r \text{ servers occupied}) = p_r + p_{r+1} + p_{r+2} + \dots = p_{\text{WAIT}} \qquad (4.87)
$$

If we consider (4.73) for i ≥ r , we get:

$$
p_{\text{WAIT}} = \frac{(\rho\,r)^r}{r!} \, p_0 + \frac{\rho^{r+1} r^r}{r!} \, p_0 + \frac{\rho^{r+2} r^r}{r!} \, p_0 + \dots
= \frac{\rho^r r^r}{r!} \, p_0 \underbrace{\left(1 + \rho + \rho^2 + \rho^3 + \dots\right)}_{\frac{1}{1-\rho}} \qquad (4.88)
$$

So, the probability that all servers are occupied and the customer must wait is equal to:

$$
p_{\text{WAIT}} = \frac{(\rho\,r)^r}{r!} \cdot \frac{p_0}{1-\rho} = \frac{\dfrac{\rho^r r^r}{r!\,(1-\rho)}}{\displaystyle\sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!\,(1-\rho)}}, \quad \text{where } \rho = \frac{\lambda}{r\,\mu} \qquad (4.89)
$$

which is the so-called Erlang delay formula (Erlang C) [Gross, Hsu, Stewart], well known in telephony.

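The following minimal Python sketch evaluates the basic M/M/r quantities derived above; λ, μ and r are illustrative assumptions (ρ must be below 1 for the stationary regime to exist, as required by (4.74)).

```python
import math

# Sketch of the basic M/M/r model: p0 (4.72), L (4.83), W (4.84), Wq (4.85),
# Lq (4.86) and the Erlang C probability (4.89); lam, mu, r are illustrative.
lam, mu, r = 8.0, 3.0, 4
rho = lam / (r * mu)                 # utilization factor (4.67); must be < 1
a = rho * r                          # offered load lam/mu

S = sum(a**i / math.factorial(i) for i in range(r)) \
    + a**r / (math.factorial(r) * (1 - rho))
p0 = 1.0 / S                                                        # eq. (4.72)

L = a + p0 * rho * a**r / (math.factorial(r) * (1 - rho)**2)        # eq. (4.83)
W = L / lam                                                         # Little's law, (4.84)
Wq = W - 1.0 / mu                                                   # eq. (4.85)
Lq = lam * Wq                                                       # eq. (4.86)
p_wait = a**r / (math.factorial(r) * (1 - rho)) * p0                # Erlang C, (4.89)
print(L, W, Wq, Lq, p_wait)
```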
4.2.2 Model M/M/r with limited waiting space

Similarly as in the case of the M/M/1 system with limited waiting space, we can also derive
the model M/M/r with limited waiting space. In this case, we again treat the situation where
the arrival rate is constant until the waiting space becomes fully loaded. At that moment,
the arrival rate immediately falls to 0 and new potential customers will definitely go away
unserved. The state transition diagram of the model M/M/r with limited waiting space is
shown in figure 37, where the maximum possible number of customers in the system equals
r + n, which means r + n + 1 states of the system. This means that r customers can be in
service, while n customers wait in the "waiting room". So, a potentially new customer will
go away if there are already r + n customers in our system.

Figure 37: The state transition diagram of the model M/M/r with limited waiting space (total
number of customers in the system can be n+r)

For the arrival and departure rates the following expressions can be written [Hudoklin,
Gross]:
$$
\lambda_i = \begin{cases} \lambda, & 0 \le i < r+n \\ 0, & i \ge r+n \end{cases} \qquad (4.90)
$$
$$
\mu_i = \begin{cases} i\,\mu, & 1 \le i < r \\ r\,\mu, & r \le i \le r+n \end{cases}
$$

The derivation of steady-state probabilities

Similarly as in the case of the basic M/M/r model, the steady-state probabilities can be
derived. We obtain a result similar to (4.69):

$$
p_i = \begin{cases} \dfrac{(\rho\,r)^i}{i!} \, p_0, & i = 0, 1, \dots, r \\[2mm] \dfrac{\rho^i \, r^r}{r!} \, p_0, & i = r+1, r+2, \dots, r+n \end{cases}
\quad \text{where } \rho = \frac{\lambda}{r\,\mu} \qquad (4.91)
$$

Now let us try to calculate the probability p0 . For this purpose, the following expression can

be given:
$$
\begin{aligned}
p_0 + p_1 + \dots + p_r + p_{r+1} + \dots + p_{r+n} &= 1 \\
p_0 + \frac{\rho\,r}{1!} \, p_0 + \frac{(\rho\,r)^2}{2!} \, p_0 + \dots + \frac{(\rho\,r)^r}{r!} \, p_0 + \frac{\rho^{r+1} r^r}{r!} \, p_0 + \frac{\rho^{r+2} r^r}{r!} \, p_0 + \dots + \frac{\rho^{r+n} r^r}{r!} \, p_0 &= 1
\end{aligned} \qquad (4.92)
$$

which implies
$$
p_0 = \frac{1}{S} = \frac{1}{1 + \dfrac{\rho\,r}{1!} + \dfrac{(\rho\,r)^2}{2!} + \dots + \dfrac{(\rho\,r)^r}{r!} + \dfrac{\rho^{r+1} r^r}{r!} + \dots + \dfrac{\rho^{r+n} r^r}{r!}} \qquad (4.93)
$$

For the series S we have:

$$
S = \sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!} \left(1 + \rho + \rho^2 + \dots + \rho^n\right)
= \sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!} \sum_{i=0}^{n} \rho^i \qquad (4.94)
$$

Due to the relation

$$
\sum_{i=0}^{n} \rho^i = \frac{1 - \rho^{n+1}}{1 - \rho}
$$

we can write:

$$
S = \sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!} \cdot \frac{1 - \rho^{n+1}}{1 - \rho}
\qquad \text{and} \qquad p_0 = \frac{1}{S} \qquad (4.95)
$$

and the stationary probability distribution for the number of customers i in the system
is:

$$
p_i = \begin{cases} \dfrac{(\rho\,r)^i}{i!} \cdot \dfrac{1}{S}, & i = 0, 1, \dots, r \\[3mm] \dfrac{\rho^i \, r^r}{r!} \cdot \dfrac{1}{S}, & i = r+1, \dots, r+n \end{cases}
\qquad S = \sum_{j=0}^{r-1} \frac{(\rho\,r)^j}{j!} + \frac{(\rho\,r)^r}{r!} \cdot \frac{1-\rho^{n+1}}{1-\rho} \qquad (4.96)
$$

where ρ = λ/(r·μ).

The derivation of basic statistical parameters

As in the case of the M/M/1 system with limited waiting space, the derivation of the basic
statistical parameters is now a little more complicated than, for example, in the case of the
basic M/M/r model. In the sequel, we only show how the average number of customers in the
queue E(N_q) = L_q can be derived. The details of the derivation of the other statistical
parameters (E(N) = L, E(W) and E(W_q)) can be found in the literature [Gross, Bose, Stewart,
Dragan].

For the purpose of calculating E(N_q) = L_q, we first introduce a variable N = r + n,
which represents the maximum possible number of customers in the system. So we have
[Hsu, Gross, Bose]:

$$
L_q = E(N_q) = \sum_{i=r}^{N} (i-r) \, p_i = \sum_{i=r}^{N} (i-r) \, \frac{\rho^i \, r^r}{r!} \, p_0
= \frac{r^r \rho^r}{r!} \, p_0 \sum_{i=r}^{N} (i-r) \, \rho^{i-r} \qquad (4.97)
$$

Now, let us introduce a new variable m = i − r , which leads us to:

$$
L_q = \frac{(\rho\,r)^r}{r!} \, p_0 \sum_{m=0}^{N-r} m \, \rho^m
= \frac{(\rho\,r)^r}{r!} \, p_0 \, \rho \underbrace{\sum_{m=0}^{N-r} m \, \rho^{m-1}}_{\frac{d}{d\rho}\sum_{m=0}^{N-r} \rho^m}
= \frac{(\rho\,r)^r}{r!} \, p_0 \, \rho \, \frac{d}{d\rho}\left(\sum_{m=0}^{N-r} \rho^m\right) \qquad (4.98)
$$
If we apply the relation:
$$
\sum_{m=0}^{N-r} \rho^m = \frac{1 - \rho^{N-r+1}}{1 - \rho} \qquad (4.99)
$$

we get:
$$
L_q = \frac{(\rho\,r)^r}{r!} \, p_0 \, \rho \, \frac{d}{d\rho}\left(\frac{1 - \rho^{N-r+1}}{1 - \rho}\right) \qquad (4.100)
$$
and:

$$
\begin{aligned}
L_q &= \frac{(\rho\,r)^r}{r!} \, p_0 \, \rho \cdot \frac{-(N-r+1)\,\rho^{N-r}(1-\rho) - (1-\rho^{N-r+1})(-1)}{(1-\rho)^2} \\
&= \frac{(\rho\,r)^r}{r!} \, p_0 \, \rho \cdot \frac{-(N-r+1)\,\rho^{N-r} + (N-r+1)\,\rho^{N-r+1} + 1 - \rho^{N-r+1}}{(1-\rho)^2} \\
&= \frac{(\rho\,r)^r}{r!} \, p_0 \, \rho \cdot \frac{1 - (N-r+1)\,\rho^{N-r} + (N-r)\,\rho^{N-r+1}}{(1-\rho)^2} \\
&= \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - (N-r)\,\rho^{N-r} - \rho^{N-r} + (N-r)\,\rho^{N-r+1}\right] \\
&= \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{N-r} \left\{(N-r) + 1 - (N-r)\,\rho\right\}\right] \\
&= \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{N-r} \left\{1 + (N-r)(1-\rho)\right\}\right]
\end{aligned} \qquad (4.101)
$$

The expression (4.101) can also be written in the following form:

$$
\begin{aligned}
L_q &= \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{N-r} - \rho^{N-r}(N-r)(1-\rho)\right] \\
&= \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{N-r} - \rho^{N-r}(N-r) + \rho^{N-r+1}(N-r)\right] \\
&= \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{N-r}\left(1 + N - r\right) + \rho^{N-r+1}\left(N - r\right)\right]
\end{aligned} \qquad (4.102)
$$

If we now consider that N = r + n , we get the following expression:

$$
L_q = \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{n+r-r}\left(1 + n + r - r\right) + \rho^{n+r-r+1}\left(n + r - r\right)\right]
$$
$$
L_q = E(N_q) = \frac{(\rho\,r)^r \, p_0 \, \rho}{r!\,(1-\rho)^2} \left[1 - \rho^{n}\,(1+n) + \rho^{n+1}\,n\right] \qquad (4.103)
$$

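The following minimal sketch evaluates p_0 via (4.95) and L_q via (4.103); λ, μ, r and the number of waiting places n are illustrative assumptions.

```python
import math

# Sketch of the M/M/r model with limited waiting space: p0 via (4.95) and
# Lq via (4.103); lam, mu, r and n (waiting places) are illustrative values.
lam, mu, r, n = 5.0, 2.0, 3, 4
rho = lam / (r * mu)
a = rho * r

S = sum(a**i / math.factorial(i) for i in range(r)) \
    + a**r / math.factorial(r) * (1 - rho**(n + 1)) / (1 - rho)     # eq. (4.95)
p0 = 1.0 / S

Lq = a**r * p0 * rho / (math.factorial(r) * (1 - rho)**2) \
     * (1 - rho**n * (1 + n) + rho**(n + 1) * n)                    # eq. (4.103)
print(p0, Lq)
```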
4.2.3 Model M/M/r with large number of servers

In some real cases, for instance in department stores, big marketplaces, etc., the number of
serving spots can be enormously big, and then the basic model M/M/r can be simplified by
setting r → ∞ [Hudoklin, Winston, Hillier]. In this case, we are dealing with the so-called
M/M/∞ system with unlimited service capacity.

Naturally, in this case we assume that the number of customers is always lower than the
number of servers, so each customer is served immediately and waiting lines are actually
never formed.

For the arrival and departure rates the following expressions can be written [Hudoklin,
Gross]:
$$
\lambda_i = \lambda, \quad i = 0, 1, 2, \dots \qquad (4.104)
$$
$$
\mu_i = i\,\mu, \quad i = 1, 2, \dots
$$

The derivation of steady-state probabilities

The steady-state probabilities can be derived similarly as in the case of the basic model
M/M/r. Starting from the expression (4.69), we can write:

$$
p_i = \frac{(\lambda/\mu)^i}{i!} \, p_0 = \frac{\lambda^i}{i!\,\mu^i} \, p_0, \quad i = 0, 1, 2, \dots \qquad (4.105)
$$

Now let us try to calculate the probability p0 . For this purpose, the following expression can

be given:
$$
\begin{aligned}
p_0 + p_1 + \dots + p_i + \dots &= 1 \\
p_0 + \frac{\lambda^1}{1!\,\mu^1} \, p_0 + \frac{\lambda^2}{2!\,\mu^2} \, p_0 + \dots + \frac{\lambda^i}{i!\,\mu^i} \, p_0 + \dots &= 1 \\
p_0 \left(1 + \frac{\lambda^1}{1!\,\mu^1} + \frac{\lambda^2}{2!\,\mu^2} + \dots + \frac{\lambda^i}{i!\,\mu^i} + \dots \right) &= 1
\end{aligned} \qquad (4.106)
$$

which implies:
$$
p_0 = \frac{1}{1 + \dfrac{\lambda^1}{1!\,\mu^1} + \dfrac{\lambda^2}{2!\,\mu^2} + \dots + \dfrac{\lambda^i}{i!\,\mu^i} + \dots} = \frac{1}{S} \qquad (4.107)
$$

The series S can be expressed as follows:


$$
S = 1 + \frac{\lambda^1}{1!\,\mu^1} + \frac{\lambda^2}{2!\,\mu^2} + \dots + \frac{\lambda^i}{i!\,\mu^i} + \dots = \sum_{i=0}^{\infty} \frac{\lambda^i}{i!\,\mu^i} = \sum_{i=0}^{\infty} \frac{(\lambda/\mu)^i}{i!} = e^{+\frac{\lambda}{\mu}} \qquad (4.108)
$$

So the probability p_0 takes the form:

$$
p_0 = \frac{1}{e^{+\frac{\lambda}{\mu}}} = e^{-\frac{\lambda}{\mu}} \qquad (4.109)
$$

and the stationary probability distribution for the number of customers i in the system
is:

$$
p_i = \frac{(\lambda/\mu)^i}{i!} \, e^{-\frac{\lambda}{\mu}}, \quad i = 0, 1, 2, \dots \qquad (4.110)
$$

The derivation of basic statistical parameters

The average number of the customers in the system E ( N ) = L can be derived as follows:

$$
L = E(N) = \sum_{i=0}^{\infty} i \, p_i = \sum_{i=1}^{\infty} i \, \frac{(\lambda/\mu)^i}{i!} \, e^{-\frac{\lambda}{\mu}}
= e^{-\frac{\lambda}{\mu}} \, \frac{\lambda}{\mu} \sum_{i=1}^{\infty} \frac{(\lambda/\mu)^{i-1}}{(i-1)!} \qquad (4.111)
$$

Let us introduce the new variable m = i - 1, which leads us to:

$$
L = E(N) = e^{-\frac{\lambda}{\mu}} \, \frac{\lambda}{\mu} \underbrace{\sum_{m=0}^{\infty} \frac{(\lambda/\mu)^m}{m!}}_{e^{\lambda/\mu}} = e^{-\frac{\lambda}{\mu}} \, \frac{\lambda}{\mu} \, e^{\frac{\lambda}{\mu}} = \frac{\lambda}{\mu} \qquad (4.112)
$$

The average customer time in the system E(W) can be derived by use of Little's law:

$$
E(N) = \lambda \, E(W) \;\Rightarrow\; E(W) = \frac{1}{\lambda} \, E(N) = \frac{1}{\mu} \qquad (4.113)
$$

Since waiting lines cannot be formed in this case, it follows:

$$
L_q = E(N_q) = 0, \qquad E(W_q) = 0 \qquad (4.114)
$$

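A quick numerical check of the M/M/∞ results follows; λ and μ are illustrative assumptions, and the Poisson sum is truncated, since its terms decay very quickly.

```python
import math

# Check that the M/M/inf distribution (4.110) is Poisson with mean lam/mu,
# eq. (4.112); lam and mu are illustrative values.
lam, mu = 4.0, 1.5
a = lam / mu

p = [a**i / math.factorial(i) * math.exp(-a) for i in range(80)]
L = sum(i * pi for i, pi in enumerate(p))
print(L, a, L / lam, 1 / mu)   # L = lam/mu and W = 1/mu, eqs. (4.112)-(4.113)
```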
4.2.4 Model M/M/r (waiting space is not allowed)

In practice, this kind of system can, for instance, represent a system of phone calls using r
phone lines. In this kind of system, arriving phone calls are lost (not registered) if all the
lines are already occupied.

The derivation of steady-state probabilities

The starting point of the derivation of the steady-state probabilities is the expression
(4.95), which appeared when the M/M/r system with limited waiting space was treated. The
series S takes the following form if we set n = 0 (waiting space is not allowed):

$$
S = \sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!} \cdot \frac{1-\rho^{0+1}}{1-\rho}
= \sum_{i=0}^{r-1} \frac{(\rho\,r)^i}{i!} + \frac{(\rho\,r)^r}{r!} \cdot 1
= \sum_{i=0}^{r} \frac{(\rho\,r)^i}{i!} \qquad (4.115)
$$

The probability p_0 = 1/S takes the following form:

$$
p_0 = \frac{1}{S} = \frac{1}{\displaystyle\sum_{i=0}^{r} \frac{(\rho\,r)^i}{i!}}, \quad \text{where } \rho = \frac{\lambda}{r\,\mu} \qquad (4.116)
$$

If this is considered in the expression (4.91), we get the following expression for pi :

$$
p_i = \begin{cases} \dfrac{(\rho\,r)^i}{i!} \, p_0, & i = 0, 1, \dots, r \\[2mm] 0, & i > r \;\text{ (since waiting space is not allowed)} \end{cases} \qquad (4.117)
$$

which leads us to:

$$
p_i = \frac{\dfrac{(\rho\,r)^i}{i!}}{\displaystyle\sum_{j=0}^{r} \frac{(\rho\,r)^j}{j!}}, \quad i = 0, 1, \dots, r \;\text{ and }\; \rho = \frac{\lambda}{r\,\mu} \qquad (4.118)
$$

For the case of phone calls, the probabilities in (4.118) represent the probabilities that i
phone lines are occupied (the Erlang B loss formula [Gross, Bose, Hsu]).

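A minimal sketch of the Erlang B formula follows; λ, μ and r are illustrative assumptions.

```python
import math

# Sketch of the Erlang B loss formula (4.118) for r phone lines without
# waiting space; lam, mu, r are illustrative values.
lam, mu, r = 6.0, 2.0, 5
a = lam / mu                       # offered load

S = sum(a**i / math.factorial(i) for i in range(r + 1))
p = [a**i / math.factorial(i) / S for i in range(r + 1)]   # eq. (4.118)
print(p[r])                        # probability that an arriving call is lost
```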
4.2.5 Model M/M/r with limited number of customers

Already in the case of the M/M/1 system with a limited number of customers we became
familiar with the typical example of a system for the maintenance and repair of machines of
the same type. Here we are dealing with n machines and r mechanics (r < n), who promptly
repair the machines and do not lose time walking from one machine to another. The failure
rate is λ and equal for all machines, while the mean service rate is μ and equal for all
mechanics. In this case, the state transition diagram has the structure shown in figure 38
[Bose, Gross, Hsu, Winston, Hillier].

Figure 38: The state transition diagram of the model M/M/r with the limited number of
customers

For the arrival and departure rates the following expressions can be written [Hudoklin,
Gross]:

$$
\lambda_i = \begin{cases} \lambda\,(n-i), & 0 \le i < n \\ 0, & i \ge n \end{cases} \qquad (4.119)
$$
$$
\mu_i = \begin{cases} i\,\mu, & 1 \le i < r \\ r\,\mu, & i \ge r \end{cases}
$$

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 38 (Rate in = Rate out
Principle). First we write the expressions for the states with i ≤ r:

$$
\begin{aligned}
\mu \, p_1 &= n\,\lambda \, p_0 \\
2\mu \, p_2 &= (n-1)\,\lambda \, p_1 \\
3\mu \, p_3 &= (n-2)\,\lambda \, p_2 \\
&\;\;\vdots \\
i\,\mu \, p_i &= \bigl(n-(i-1)\bigr)\,\lambda \, p_{i-1} \\
&\;\;\vdots \\
r\,\mu \, p_r &= \bigl(n-(r-1)\bigr)\,\lambda \, p_{r-1}
\end{aligned} \qquad (4.120)
$$

which leads us to:

λ n λ
p1 = n ⋅ ⋅ p0 = ⋅   ⋅ p0
µ 1 µ
2

p2 = ( n − 1) ⋅
λ
⋅ p1 = ( n − 1) ⋅
λ λ
⋅ n ⋅ ⋅ p0 =
( n − 1) ⋅ n ⋅  λ  ⋅ p
  0
2µ 2µ µ 1⋅ 2 µ
3

p3 = ( n − 2 ) ⋅
λ
⋅ p2 =
( n − 2 ) ⋅ ( n − 1) ⋅ n ⋅  λ  ⋅ p
  0
3µ 1⋅ 2 ⋅ 3 µ

i i i

pi =
( n − i + 1) ⋅ ... ⋅ ( n − 1) ⋅ n ⋅  λ  ⋅ p = n ! ⋅  λ  ⋅ p =  n  ⋅  λ  ⋅ p
1
⋅2 ⋅
3 ⋅ ...⋅i
  0
µ ( n − i )!⋅ i !  µ  0  i   µ  0 (4.121)
i!

( ( ))
n − r −1

 
( 1) ⋅ ( n − ( r − 2 ) ) ⋅ ... ⋅ ( n − 1) ⋅ n  λ 
r
n − r +
pr = ⋅   ⋅ p0 =
1
⋅ 2 ⋅
3 ⋅ ...
⋅r µ
r!
r r
n! λ n  λ 
= ⋅   ⋅ p0 =   ⋅   ⋅ p0
( n − r )!⋅ r !  µ  r  µ 

If we now apply the utilization factor:

$$
\rho = \frac{\lambda}{r\,\mu} \;\Rightarrow\; \frac{\lambda}{\mu} = \rho\,r \qquad (4.122)
$$
we get the following stationary probability distribution for the number of customers i ≤ r
in the system:

n i n! i
pi =   ⋅ ( ρ ⋅ r ) ⋅ p0 = ⋅ ( ρ ⋅ r ) ⋅ p0 , i = 0, 1,..., r (4.123)
i ( n − i )!⋅ i !

Now we write the expressions for the probabilities of the states with i > r. As it turns out,
the derivation of the final form is not so easy to carry out. It can be shown that the result
is [Gross, Hudoklin]:

$$
p_i = \frac{r^r \, n! \, \rho^i}{r!\,(n-i)!} \, p_0, \quad i = r, r+1, \dots, n \qquad (4.124)
$$

and the joint result for the steady-state probability distribution is:

$$
p_i = \begin{cases} \dbinom{n}{i} (\rho\,r)^i \, p_0, & i = 0, 1, \dots, r \\[2mm] \dfrac{r^r \, n! \, \rho^i}{r!\,(n-i)!} \, p_0, & i = r, r+1, \dots, n \end{cases} \qquad (4.125)
$$

Now let us try to calculate the probability p0 . For this purpose, the following expression can

be given:
$$
\begin{aligned}
p_0 + p_1 + \dots + p_r + p_{r+1} + \dots + p_n &= 1 \\
\binom{n}{0} p_0 + \binom{n}{1} (\rho\,r)^1 \, p_0 + \dots + \binom{n}{r-1} (\rho\,r)^{r-1} \, p_0 + \frac{r^r n! \, \rho^r}{r!\,(n-r)!} \, p_0 + \frac{r^r n! \, \rho^{r+1}}{r!\,\bigl(n-(r+1)\bigr)!} \, p_0 + \dots + \frac{r^r n! \, \rho^n}{r!\,(n-n)!} \, p_0 &= 1 \\
p_0 \left[\sum_{i=0}^{r-1} \binom{n}{i} (\rho\,r)^i + \sum_{i=r}^{n} \frac{r^r n! \, \rho^i}{r!\,(n-i)!}\right] &= 1 \\
p_0 = \frac{1}{S} = \frac{1}{\displaystyle\sum_{i=0}^{r-1} \binom{n}{i} (\rho\,r)^i + \sum_{i=r}^{n} \frac{r^r n! \, \rho^i}{r!\,(n-i)!}}
\end{aligned} \qquad (4.126)
$$

The derivation of basic statistical parameters

As it turns out, the derivation of the basic statistical parameters is now much more
difficult than in the previous cases [Hudoklin, Gross, Hsu, Bose, Stewart].

For the average number of the customers in the system E ( N ) = L we can write the following

expression:
$$
L = E(N) = \sum_{i=0}^{n} i \, p_i \qquad (4.127)
$$

and for the average number of the customers in the queue E ( N q ) = Lq we can write the

following expression:

$$
E(N_q) = L_q = \sum_{i=r}^{n} (i-r) \, p_i \qquad (4.128)
$$

As it turns out, further analytical simplification of the expressions (4.127) and (4.128)
would be quite difficult. The same difficulty would appear for E(W) and E(W_q) as well. The
details of the derivation of the statistical parameters can be investigated in the literature
[Gross, Bose, Stewart, Bhat]. A small numerical sketch is given below.

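The following sketch evaluates the stationary distribution (4.125)-(4.126) numerically and then computes the sums (4.127) and (4.128) directly; λ, μ, n and r are illustrative assumptions.

```python
import math

# Sketch of the machine-repair model with r mechanics: stationary
# distribution (4.125)-(4.126) and the sums (4.127)-(4.128);
# lam, mu, n, r are illustrative values.
lam, mu, n, r = 0.2, 1.0, 8, 2
rho = lam / (r * mu)

def term(i):
    # unnormalized probabilities from (4.125); both branches agree at i = r
    if i < r:
        return math.comb(n, i) * (rho * r)**i
    return r**r * math.factorial(n) * rho**i / (math.factorial(r) * math.factorial(n - i))

S = sum(term(i) for i in range(n + 1))      # eq. (4.126): p0 = 1/S
p = [term(i) / S for i in range(n + 1)]

L = sum(i * p[i] for i in range(n + 1))                 # eq. (4.127)
Lq = sum((i - r) * p[i] for i in range(r, n + 1))       # eq. (4.128)
print(L, Lq)
```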
4.3 Conclusion about the queueing models

In many cases Poisson processes are a good approximation for the real process of
customer arrivals into the system. On the other hand, the assumption about the
exponential distribution of service times is not justified as often. Thus, it is sometimes
also important to study the model behavior when the interarrival times are independent and
exponentially distributed, while the distribution of service times is governed by some
distribution more general than the exponential one. Such processes are still birth-death
processes, but they are not pure Markovian processes anymore.

If we want to derive these processes analytically, it turns out that the theory in the
background is much harder than in the case of pure Markovian processes. A much harder
analytical theory is also present in the case of more complicated queueing systems, which
have a more difficult structure (many channels and/or queues) and are structured in some kind
of network form.

In these cases, instead of the analytical derivation and mathematical analysis of the
properties of the queueing system, the use of simulation, where the system behavior is
simulated, looks like the better solution.

The simulation approach is particularly appropriate in cases when the system is so
complicated that the analytical solutions for the model cannot be found at all, or the
modeling would require a lot of effort. So, simulation often gives all the necessary
answers in a shorter time than the analytical modeling approach. Besides this, in general,
the simulation approach gives models whose behavior can be very close to the original real
queueing systems. A minimal sketch of such a simulation is given below.

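The following minimal discrete-event sketch simulates an M/M/1 queue and compares the simulated mean time in the system with the analytical M/M/1 result E(W) = 1/(μ − λ); the rates, the seed and the run length are illustrative assumptions.

```python
import random

# Minimal discrete-event sketch of an M/M/1 queue (FIFO, single server),
# illustrating the simulation approach; lam, mu and the run length are
# illustrative values.
random.seed(1)
lam, mu, num_customers = 3.0, 4.0, 200_000

arrival = depart_prev = 0.0
total_time = 0.0
for _ in range(num_customers):
    arrival += random.expovariate(lam)            # Poisson arrival stream
    start = max(arrival, depart_prev)             # wait if the server is busy
    depart_prev = start + random.expovariate(mu)  # exponential service time
    total_time += depart_prev - arrival

print(total_time / num_customers, 1 / (mu - lam))   # simulated vs. analytical E(W)
```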
LITERATURE

[1] Bhat N.: An Introduction to Queueing Theory: Modeling and Analysis in Applications,
Birkhäuser Boston, 2008.

[2] Bose S.K.: An Introduction to Queueing Systems, Springer, 2001.

[3] Dragan D.: Stochastic Processes in Logistics (Stohastični procesi v logistiki), Textbook
in Slovene, Faculty of Logistics, University of Maribor, 2013.

[4] Gross D.: Fundamentals of Queueing Theory, John Wiley & Sons, 2009.

[5] Hillier F.S.: Introduction to Operations Research, McGraw-Hill, 2001.

[6] Hsu H.: Schaum's Outline of Probability, Random Variables, and Random Processes,
McGraw-Hill, 1997.

[7] Hudoklin-Božič A.: Stochastic Processes (Stohastični procesi), Textbook in Slovene,
Založba Moderna organizacija, Kranj, 2003.

[8] Kljajić M., Bernik I., Škraba A.: Event Driven Simulation of Systems (Dogodkovna
simulacija sistemov), Textbook in Slovene, Faculty of Organizational Sciences, University of
Maribor, 1999.

[9] Ross S.M.: Introduction to Probability Models, Academic Press, 1997.

[10] Stewart W.J.: Probability, Markov Chains, Queues, and Simulation: The Mathematical
Basis of Performance Modeling, Princeton University Press, 2009.

[11] Taha H.: Operations Research: An Introduction, Sixth Edition, Prentice-Hall, 1997.

[12] Takacz L.: Stochastic Processes: Problems and Solutions, Wiley, 1960.

[13] Winston W.L.: Operations Research: Applications and Algorithms, Duxbury Press,
International Thomson Publishing, 1994.

