
Queuing theory

Introduction:
Queuing theory is the mathematical study of a queue’s features, functions, and imperfections. It is highly relevant in operations research, since its appropriate application helps eliminate operational bottlenecks and service failures. Queuing theory is a powerful tool for analyzing the everyday phenomenon of waiting in line: how it is defined, how it started, why it’s important, and how it can be applied to real-life situations. It offers a mathematical framework for analyzing and optimizing queues, making it a valuable tool for financial institutions striving to enhance customer service and operational efficiency.

Who Invented Queuing Theory?


Agner Krarup Erlang, a Danish mathematician, statistician, and engineer, is credited with creating not only queuing theory but the entire field of telephone traffic engineering. In the early 20th century, Erlang was head of a technical laboratory at the Copenhagen Telephone Co. His extensive studies of wait times in automated telephone services, and his proposals for more efficient networks, were widely adopted by telephone companies.

How does Queuing Theory work?


Queuing theory in operations research contributes to designing an efficient queuing system for a business. The theory guides practitioners in systematically exploring the best way to arrange the setup, giving primary importance to balancing efficient service against the system’s economic viability. An efficient queuing system enhances customer service and competitive advantage.
Application of Queuing Theory:
The application of queuing theory is not inherent to any specific sector, though it is predominantly applied in industries like retail, logistics, and hospitality. Its relevance peaked during the COVID-19 pandemic, when dealing with long lines and resolving service issues became especially hectic. Businesses introduced different queue management systems with public safety as a priority, and their use is evident in cases where processes were changed with customers’ safety in mind.
There is software on the market that supports virtual waiting. A virtual queue management system organizes clients in a virtual waiting line, or queue, so that they are not visibly waiting in line to receive a product or service. Customers can wait virtually because they are not bound to a specific waiting area.

Importance of Queuing Theory:


 Waiting in line is a common occurrence in everyday life because it serves several key functions as a process. When resources are limited, queues are a fair and necessary way of dealing with the flow of clients. If there isn’t a queuing process in place to deal with overcapacity, failures follow.
 For example, a website with too many visitors will slow down and fail if it has no mechanism to adjust the speed at which requests are processed, or to queue visitors. Consider planes waiting to land on a runway: when there are too many planes to land at once, the lack of a queue creates real safety issues as jets try to land at the same time.
 Queuing theory is significant because it helps to describe queue
characteristics such as average wait time and gives tools for queue
optimization.
 Queuing theory influences the design of efficient and cost-effective
workflow systems from a commercial standpoint.

Little’s Law:
Little’s Law connects the average number of customers in a queuing system, the average time spent in the system, and the average arrival rate into the system, without requiring knowledge of any other features of the queue. The formula is quite simple and is written as follows:

L = λW

or transformed to solve for the other two variables, so that:

λ = L / W and W = L / λ

Where:

 L is the average number of customers in the system
 λ (lambda) is the average arrival rate into the system
 W is the average amount of time spent in the system
Project management processes like Lean and Kanban wouldn’t exist without the Little’s Law queuing models. They’re critical for business applications, in which Little’s Law can be written in plain English as:

the average number of items in a system = the average arrival rate × the average time an item spends in the system

Queuing theory examples & types of queuing models:
Little’s Law gives powerful insights because it lets us solve for important variables like the average wait in a queue, or the number of customers in the queue, based simply on two other inputs.

A line at a cafe

For example, if you’re waiting in line at a Starbucks, Little’s Law can estimate how long it would take to get your coffee.

Assume there are 15 people in line, one server, and 2 people are served per minute. To estimate this, you’d use Little’s Law in the form:

W = L / λ = 15 / 2 = 7.5

showing that you could expect to wait 7.5 minutes for your coffee.
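This calculation can be sketched in a few lines of Python (the function name is illustrative):

```python
def wait_time(customers_in_line: float, service_rate_per_min: float) -> float:
    """Little's Law solved for W: W = L / lambda."""
    return customers_in_line / service_rate_per_min

# 15 people in line, 2 served per minute
print(wait_time(15, 2))  # 7.5
```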

A line in a virtual waiting room

At Queue-it, we show visitors their wait time in the online queue using a queuing model based on Little’s Law, adding in factors to account for no-shows and re-entries:

Where:

 L is the number of users ahead in line
 λ (lambda) is the rate of redirects to the website or app, in users per minute
 N is the no-show ratio
 R is the re-entry rate to the queue
 W is the estimated wait time, in minutes
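The exact adjustment is not reproduced in this excerpt, but a minimal Python sketch of this kind of adjusted estimate might look like the following. The adjustments here are assumptions for illustration (inflating the queue length for re-entries, discounting the redirect rate for no-shows), not Queue-it’s published formula:

```python
def estimated_wait(L: float, lam: float, N: float, R: float) -> float:
    """Illustrative only: an assumed adjustment to W = L / lambda,
    not Queue-it's actual formula.
    Effective queue length grows with the re-entry rate R;
    effective redirect rate shrinks with the no-show ratio N."""
    effective_length = L * (1 + R)
    effective_rate = lam * (1 - N)
    return effective_length / effective_rate

# 100 users ahead, 10 redirects/min, 20% no-shows, 5% re-entry
print(estimated_wait(100, 10, 0.20, 0.05))
```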

Military process optimization

We can look at a process optimization example from the military, courtesy of Process.st.

In this real-life example, the military needed to determine the ideal amount of time B-2 stealth bombers would be in maintenance. There are only 20 B-2 aircraft, and they need to be ready at a moment’s notice. But they require frequent maintenance, which can range anywhere from 18 to 45 days.

Using Little’s Law helps find the balance of aircraft in use versus
aircraft under maintenance.
Based on flight schedule analysis, it was calculated that three B-2 bombers would be under maintenance at any given time. The rate at which bombers entered maintenance was also calculated to be roughly one every 7 days. So:

 L = number of items in WIP (maintenance) = 3
 λ = arrival/departure rate = 1 every 7 days = 1/7 per day
 W = the average amount of time spent in maintenance = ?
Put into Little’s Law, this leaves us with:

W = L / λ = 3 ÷ (1/7) = 21 days

Therefore, the target lead time for B-2 bomber maintenance needed to be 21 days to meet the demands of both aircraft availability and the regular flight schedules.

Conclusion:
The objective of a queuing model is to find the optimum service rate and number of servers so that the average cost of waiting in the queuing system and the cost of service are minimised. A queuing problem is identified by the presence of a group of customers who arrive randomly to receive some service. By applying queuing theory, a business can develop more efficient systems, processes, pricing mechanisms, staffing solutions, and arrival management strategies to reduce customer wait times and increase the number of customers that can be served.
Markov chain

Introduction:
A Markov chain is a stochastic model, created by Andrey Markov, that describes the probability of a sequence of events in which each event depends only on the state attained in the previous event. It’s a very common and easy-to-understand model that’s frequently used in industries that deal with sequential data, such as finance. Even Google’s PageRank algorithm, which determines which links to show first in its search engine, is a type of Markov chain. Through mathematics, this model uses our observations to predict an approximation of future events.

Main Characteristics of a Markov Chain :


As stated above, a Markov process is a stochastic process with the memoryless property. The term “memorylessness” in mathematics is a property of probability distributions. It generally refers to scenarios in which the time until a certain event occurs does not depend on how much time has already elapsed. In other words, when a model has the memoryless property, it has “forgotten” how the system arrived at its current state. Hence, previous states of the process do not influence the probabilities.

The main characteristic of a Markov process is this property of memorylessness: predictions about its future are conditional on its current state alone, independent of how that state was reached.

This memorylessness attribute is both a blessing and a curse for the Markov model in application. Imagine a scenario in which you wish to predict words or sentences based on previously entered text, similar to how Google does for Gmail. The benefit of using a Markov process here is that the newly generated predictions do not depend on something you wrote paragraphs ago. The downside is that you won’t be able to predict text based on context from a previous state of the model. This is a common problem in natural language processing (NLP) and an issue many models face.

Properties of Markov Chain:


There are also several properties that Markov chains can have, including:

1. Irreducibility: A Markov chain is irreducible when it is possible to reach any state from any other state in a finite number of steps.
2. Aperiodicity: A Markov chain is aperiodic when returns to a state are not confined to multiples of some fixed period; formally, the greatest common divisor of the possible return times to each state is 1.
3. Recurrence: A state in a Markov chain is recurrent if, starting from that state, the chain returns to it with probability 1.
4. Transience: A state in a Markov chain is transient if, starting from that state, there is a nonzero probability that the chain never returns to it.
5. Ergodicity: A Markov chain is ergodic if it is both irreducible and aperiodic, so that the long-term behaviour of the system is independent of the starting state.
6. Reversibility: A Markov chain is reversible if, under its stationary distribution π, the probability flow from one state to another equals the flow back: π(i)P(i→j) = π(j)P(j→i) (the detailed balance condition).
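Irreducibility, the first property above, can be checked mechanically: it holds exactly when every state is reachable from every other state through transitions of positive probability. A minimal sketch (the function name and example matrices are illustrative):

```python
from collections import deque

def is_irreducible(P):
    """A chain is irreducible if every state can reach every other state
    via transitions with positive probability (BFS on the transition graph)."""
    n = len(P)
    for start in range(n):
        seen = {start}
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if P[i][j] > 0 and j not in seen:
                    seen.add(j)
                    queue.append(j)
        if len(seen) < n:  # some state is unreachable from `start`
            return False
    return True

# A two-state chain that flips every step: irreducible
print(is_irreducible([[0, 1], [1, 0]]))       # True
# State 1 is absorbing, so state 0 is unreachable from it
print(is_irreducible([[0.5, 0.5], [0, 1]]))   # False
```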
Types of Markov Chain:
There are several different types of Markov chains,
including:

 Finite Markov chains: These are Markov chains with a finite number of states. The transition probabilities between states are fixed, and if the chain is irreducible and aperiodic, the system will eventually reach a steady state in which the probabilities of being in each state become constant. An example of a finite-state Markov chain might be a model of a traffic light, with three states (red, yellow, green) and transitions governed by the rules of the traffic light.
 Infinite Markov chains: These are Markov chains with an infinite number of states. The transition probabilities between states are fixed, but the system may not reach a steady state. For example, consider a model of the spread of a virus through a population: the states represent the number of people who have been infected at any given time, and the transitions between states are governed by the rate of infection and the rate of recovery.
 Continuous-time Markov chains (CTMC): These are Markov
chains in which the transitions between states occur at random
times rather than at discrete time intervals. The transition
probabilities are defined by rate functions rather than
probabilities. For example, the state of a CTMC could represent
the number of customers in a store at a given time, and the
transitions between states could represent the arrival and
departure of customers. The probability of a system being in a
particular state (e.g., the probability that there are a certain
number of customers in the store) can be calculated using the
CTMC model.
 Discrete-time Markov chains (DTMC): These are Markov
chains in which the transitions between states occur at discrete
time intervals. The transition probabilities are defined by a
transition matrix. For example, the state of a DTMC could
represent the weather on a particular day (e.g., sunny, cloudy, or
rainy), and the transitions between states could represent the
change in weather from one day to the next. The probability of a
system being in a particular state (e.g., the probability of it being
sunny on a particular day) can be calculated using the DTMC
model.
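The weather DTMC described above can be sketched in Python. The transition probabilities here are assumptions for illustration, since the text does not give numbers:

```python
import random

# Hypothetical transition probabilities between weather states
# (illustrative values; each row sums to 1).
STATES = ["sunny", "cloudy", "rainy"]
P = {
    "sunny":  {"sunny": 0.7, "cloudy": 0.2, "rainy": 0.1},
    "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
    "rainy":  {"sunny": 0.2, "cloudy": 0.4, "rainy": 0.4},
}

def next_state(current, rng=random):
    """Sample tomorrow's weather from today's row of the transition matrix."""
    r = rng.random()
    cumulative = 0.0
    for state in STATES:
        cumulative += P[current][state]
        if r < cumulative:
            return state
    return STATES[-1]  # guard against floating-point drift

random.seed(0)
print([next_state("sunny") for _ in range(5)])
```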

Example:
Consider a Markov chain with states A, B, and C and a transition matrix whose entries give the probability of transitioning from one state to another. For example, the value in the second row and third column (0.7) represents the probability of transitioning from state B to state C.
In order to analyse the behaviour of this Markov chain over time, it is necessary to define a starting state and calculate the probabilities of transitioning to other states at each time step. This can be done using matrix multiplication. For example, if the starting state is A, the probability of being in state B after one time step is obtained by multiplying the starting state vector by the transition matrix.
To calculate the probability of being in state B after two time steps, we can simply multiply the transition matrix by itself and then multiply the result by the starting state vector.

This process can be repeated to calculate the probabilities of being in each state at any time step.
One important property of Markov chains (provided they are irreducible and aperiodic) is that they will eventually reach a steady state, where the probabilities of being in each state no longer change. This steady state can also be calculated with matrix methods, by finding the eigenvector of the transition matrix with eigenvalue 1 and normalizing it. The resulting vector represents the long-term behaviour of the Markov chain.
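As a sketch, the steady state can also be approximated by repeated matrix-vector multiplication (power iteration), which for an ergodic chain converges to the same distribution as the normalized eigenvector. The 3×3 matrix below is an assumed example: only the 0.7 entry for B→C comes from the text, the other entries are illustrative:

```python
def step(dist, P):
    """One time step: multiply the row distribution vector by the matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def steady_state(P, iterations=1000):
    """Power iteration: repeatedly apply P until the distribution settles."""
    n = len(P)
    dist = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iterations):
        dist = step(dist, P)
    return dist

# Assumed transition matrix over states A, B, C; rows sum to 1,
# with P[B][C] = 0.7 as in the text.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.2, 0.7],
    [0.3, 0.3, 0.4],
]
pi = steady_state(P)
print([round(x, 4) for x in pi])
# A steady state is unchanged by one more step:
assert all(abs(a - b) < 1e-9 for a, b in zip(pi, step(pi, P)))
```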

Advantages and Disadvantages of Markov Chain:


A Markov chain undergoes transitions from one state to another according to specific probabilistic rules. The defining characteristic of a Markov chain is that no matter how the system arrived at its current state, the probabilities of the possible future states are fixed. This specific kind of memorylessness is known as the “Markov property”.

Some advantages of Markov chains include:


1. They can be used to model and analyse a wide variety of systems in many different fields, including economics, biology, and computer science.
2. They are relatively simple to understand and use, making them a popular choice for modelling complex systems.
3. They can be used to predict the long-term behaviour of a system, even if the system is subject to random fluctuations in the short term.

Some disadvantages of Markov chains include:

1. They are only able to model systems that exhibit the Markov property, which means that the future state of the system depends only on the current state and not on the sequence of events that led to it.
2. They can be computationally intensive, especially for large systems or systems with many possible states.
3. They may not accurately capture the behaviour of systems that exhibit complex dependencies or long-range correlations.
4. They may not be able to capture the full complexity of real-world systems, which can have many interacting variables and non-linear relationships.

Conclusion:
Markov chain theory also provides a general framework for aggregation in agent-based and related computational models, making use of Markov chain aggregation and lumpability theory to link micro-level dynamical behaviour with higher-level processes defined by macroscopic observables. The starting point is a formal representation of a class of agent-based models (ABMs) as Markov chains, so-called micro chains, obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain.
NAME: D. Mrishika Dhinakaran
REG NO: RA2211026050061
DEPT: B.Tech CSE-AIML-C
SUBJECT: Probability Queuing Theory
