
Decision Analysis

– quantitative analysis is the scientific approach to managerial decision making


– aka quantitative analysis/management science/operations research
– business analytics/big data
– a data-driven approach to decision making that allows companies to make
better decisions
– statistical and quantitative methods are used to analyze the data and provide
useful information to the decision maker
– business analytics is often broken into three categories: descriptive,
predictive, and prescriptive

Introduction to decision analysis

– Descriptive analytics
– involves the study and consolidation of historical data for a business and
an industry
– it helps a company measure how it has performed in the past and how it is
performing now (Stat 101)
– Predictive analytics
– aimed at forecasting future outcomes based on patterns in the past data
– statistical and mathematical models are used extensively for this purpose
(BA 182)
– Prescriptive analytics
– involves the use of optimization methods to provide new and better ways
to operate based on specific business objectives (BA 181)

Quantitative Analysis approach


1. defining a problem
2. developing a model
3. acquiring input data
4. developing a solution
5. testing the solution
6. analyzing the results
7. implementing the results

Decision Analysis
– What lies in the future?
– How do you think your professional and career goals will be affected by these
developments?
Six steps in Decision Making
1. Clearly define the problem at hand
2. List the possible alternatives (internal)
3. Identify the possible outcomes or states of nature (external)
4. List the payoff (typically profit) of each combination of alternatives and
outcomes (estimate the interaction between options and outcomes)
5. Select one of the mathematical decision theory models (choose how to
analyze the problem)
6. Apply the model and make your decision (follow the decision rule of the
model)

Types of Decision making environments


1. Decision making under certainty
- the decision makers know with certainty the consequence of every
alternative or decision choice
2. Decision making under uncertainty
- there are several possible outcomes for each alternative, and the decision
maker does not know the probabilities of the various outcomes
3. Decision making under risk
- there are several possible outcomes for each alternative, and the decision
maker knows the probability of occurrence of each outcome.

Decision Analysis under Uncertainty

Criteria for decision making under uncertainty


1. Optimistic (Maximax criterion)
2. Pessimistic (Maximin criterion)
3. Criterion of realism (Hurwicz)
4. Equally likely (Laplace)
5. Minimax regret

Criteria 1-4 can be computed directly from the decision (payoff) table, whereas the
minimax regret criterion requires use of an opportunity loss table

1. Maximax Criterion
- find the maximum payoff in each row, then choose the alternative with the
maximum of these row maximums
2. Maximin criterion
- find the minimum payoff in each row, then choose the alternative with the
maximum of these row minimums
3. Criterion of realism (Hurwicz)
- often called the weighted average: a compromise between an optimistic and
a pessimistic decision
- a coefficient of realism, α, is selected, which measures the degree of
optimism of the decision maker about the future and is between 0 and 1
- when α = 1, the decision maker is 100% optimistic about the future; when
α = 0, 100% pessimistic
- weighted average = α(maximum in row) + (1 - α)(minimum in row)
- choose the alternative with the highest weighted average
4. Equally likely (Laplace)
- assume the states of nature are equally likely; find the average payoff for
each alternative and select the alternative with the best (highest) average
5. Minimax regret
- this is based on opportunity loss (or regret), which refers to the difference
between the optimal profit or payoff for a given state of nature and the actual
payoff received for a particular decision for that state of nature
- in short, itʼs the amount lost by not picking the best alternative in a given
state of nature
a.) Opportunity loss is calculated by subtracting each payoff in the column
from the best payoff in the column
b.) Find the maximum opportunity loss for each alternative and pick the
alternative with the minimum number

By doing (b), the maximum opportunity loss for the alternative selected can be no
more than the minimum of the maximum regrets.
As with maximization problems, the opportunity loss can never be negative.
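A minimal Python sketch of the five criteria above, assuming a hypothetical payoff
table (two states of nature) and a hypothetical coefficient of realism; none of these
numbers come from the notes:

```python
# Hypothetical payoff table: rows = alternatives, columns = states of nature.
payoffs = {
    "large plant": [200000, -180000],
    "small plant": [100000, -20000],
    "do nothing":  [0, 0],
}
alpha = 0.8  # assumed coefficient of realism

maximax = max(payoffs, key=lambda a: max(payoffs[a]))
maximin = max(payoffs, key=lambda a: min(payoffs[a]))
hurwicz = max(payoffs, key=lambda a: alpha * max(payoffs[a]) + (1 - alpha) * min(payoffs[a]))
laplace = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

# Minimax regret: build the opportunity loss table column by column, then pick
# the alternative whose worst (maximum) regret is smallest.
best_per_state = [max(p[j] for p in payoffs.values()) for j in range(2)]
regret = {a: [best_per_state[j] - payoffs[a][j] for j in range(2)] for a in payoffs}
minimax_regret = min(regret, key=lambda a: max(regret[a]))

print(maximax, maximin, hurwicz, laplace, minimax_regret)
```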

Decision Analysis under Risk


– a decision situation in which several possible states of nature may occur and
the probabilities of these states of nature are known
– the most popular method is to choose the alternative with the highest
expected monetary value (EMV)
– when using the EMV criterion with minimization problems, the calculations are
the same, but the alternative with the smallest EMV is selected
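A minimal sketch of the EMV calculation, using an assumed payoff table and assumed
state-of-nature probabilities:

```python
# Assumed payoff table and state-of-nature probabilities (not from the notes).
payoffs = {
    "large plant": [200000, -180000],
    "small plant": [100000, -20000],
    "do nothing":  [0, 0],
}
prob = [0.5, 0.5]  # P(favorable market), P(unfavorable market)

def emv(row):
    # EMV = sum over states of (payoff in that state * probability of that state)
    return sum(x * p for x, p in zip(row, prob))

emvs = {a: emv(v) for a, v in payoffs.items()}
best = max(emvs, key=emvs.get)
print(emvs, "-> choose", best)
```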

Expected Value of Perfect Information (EVPI)


– places an upper bound on what you should pay for additional information
(information that would possibly change a decision under risk to a decision
under certainty)
– EVwPI = Expected Value [of the decision] with Perfect Information
– expected or average return, in the long run, [of the choice made] if we
had perfect information before a decision had to be made

EVPI = EVwPI - Maximum EMV

EVPI tells us the most we would pay for any information (perfect or imperfect)

For minimization problems, the approach is similar.


– the best payoff in each state of nature is found, but instead this would be the
lowest payoff for that state of nature rather than the highest
– EVwPI is calculated from these lowest payoffs, and this is compared to the
best (lowest) EMV without perfect information
– EVPI is the improvement that results: EVPI = the best EMV - EVwPI
Expected Opportunity Loss (EOL)
– cost of not picking the best solution
– an alternative approach to maximizing EMV is to minimize EOL
– Steps
1.) An opportunity loss table is constructed
2.) The EOL is computed for each alternative by multiplying the opportunity
loss by the probability and adding these together
– Note: The minimum EOL will always equal EVPI
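A minimal sketch showing EVwPI, EVPI, and EOL on the same assumed payoff table and
probabilities as the EMV sketch above; the printed minimum EOL equals EVPI:

```python
# Same assumed payoff table and probabilities as the EMV sketch above.
payoffs = {
    "large plant": [200000, -180000],
    "small plant": [100000, -20000],
    "do nothing":  [0, 0],
}
prob = [0.5, 0.5]
n = len(prob)

emv = {a: sum(x * p for x, p in zip(v, prob)) for a, v in payoffs.items()}
best_emv = max(emv.values())

# EVwPI: best payoff in each state of nature, weighted by that state's probability.
best_per_state = [max(v[j] for v in payoffs.values()) for j in range(n)]
ev_wpi = sum(b * p for b, p in zip(best_per_state, prob))
evpi = ev_wpi - best_emv                      # EVPI = EVwPI - maximum EMV

# EOL: expected opportunity loss of each alternative; its minimum equals EVPI.
eol = {a: sum((best_per_state[j] - v[j]) * prob[j] for j in range(n))
       for a, v in payoffs.items()}
print(evpi, min(eol.values()))                # the two values coincide
```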

Sensitivity Analysis
– investigates how the decision might change given a change in the problem
data or any of the input data

Decision Trees
– any problem that can be presented in a decision table can also be graphically
illustrated in a decision tree
– contain decision points or nodes and state-of-nature points or nodes
– decision node: a point from which one of several alternatives may be
chosen
– state-of-nature node: a point from which one state of nature will occur
Five Steps of Decision Tree Analysis
1. Define the problem
2. Structure or draw the decision tree
3. Assign probabilities to the states of nature
4. Estimate payoffs for each possible combination of alternatives and states of
nature
5. Solve the problem by computing EMVs for each state-of-nature node (start at
the right of the tree and work back to the decision nodes on the left; at each
decision node, the alternative with the best EMV is selected)

Expected Value of Sample Information


– increase in expected value resulting from the sample information
– the expected value with sample information (EV with SI) is found from the
decision tree, and the cost of the sample information is added to this, since
it was subtracted from all the payoffs before the EV with SI was calculated.
The expected value without sample information (EV without SI) is then
subtracted from this to find the value of the sample information
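Stated as a formula (restating the description above):

EVSI = (EV with SI + cost of sample information) - (EV without SI)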

Efficiency of Sample Information


– evaluates sources of information (market research, survey, FGD, test market,
etc.)
ESI = EVSI/EVPI x 100%

Bayesian Analysis
– incorporates both initial estimates of the probabilities and information about
the accuracy of the information source (e.g. market research survey)
– recognizes that a decision maker does not know with certainty what state of
nature will occur (ex. prior probabilities)
– allows manager to revise his initial or prior probability assessments based on
new information
– revised probabilities are called posterior probabilities
– Bayes' Theorem: extension of the conditional law
– Refers to mutually exclusive and collectively exhaustive events
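A minimal sketch of the posterior revision, assuming hypothetical prior probabilities
and hypothetical survey reliabilities (the conditional probabilities of a positive
survey result given each state of nature):

```python
# Assumed prior probabilities and assumed survey reliabilities.
prior = {"favorable": 0.5, "unfavorable": 0.5}
p_positive_given = {"favorable": 0.7, "unfavorable": 0.2}  # P(positive survey | state)

# P(positive survey) over mutually exclusive, collectively exhaustive states
p_positive = sum(p_positive_given[s] * prior[s] for s in prior)

# Posterior (revised) probability = likelihood * prior / evidence
posterior = {s: p_positive_given[s] * prior[s] / p_positive for s in prior}
print(posterior)  # with these assumed numbers: favorable ~ 0.78, unfavorable ~ 0.22
```
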
Queuing Theory
– study of waiting lines is one of the oldest and most widely used quantitative
analysis techniques
– waiting lines form in serving customers whenever arrivals outpace the
available service capacity
– queue: a line of customers waiting for service
– three basic components: arrivals, service facilities, the actual waiting line

The Main Issue:


– trade off between the cost of providing the best service (minimal waiting) and
the cost of making the client wait in a queue (inconvenience)
– total expected cost: sum of expected service cost plus expected waiting costs
– service costs are seen to increase as a firm attempts to raise its level of
service: as service improves in speed, the cost of time spent waiting
decreases

Elements of Cost in a Waiting Line


1. How many arrive in a given time period
- items: cars, broken machines, packages
- persons: shoppers, clients, sick people
2. Average waiting time per item or person
- waiting time in the queue (prior to service)
- waiting time in queue & service time combined
3. Total waiting time is obtained from 1 and 2

Characteristics of a Queuing System (3 parts)


1. the arrivals or inputs to the system (sometimes referred to as the calling
population)
2. the queue or the waiting line itself
3. the service facility

Arrivals: input source has 3 major characteristics


1. size of calling population (unlimited/infinite or limited/finite)
- when the number of arrivals at any given moment is just a small portion of
potential arrivals, the calling population is considered unlimited
- Unlimited: cars at tollbooth, shoppers at supermarket, students at university
- Limited: shop with 8 machines
2. pattern of arrivals (according to a known schedule, or random)
- random arrivals are commonly modeled with the Poisson distribution

3. behavior of arrivals
- assumption: arriving customers are patient and wait for service
- balking: customers who refuse to join the waiting line because it is too long
to suit their needs or interests
- reneging: customers who enter the queue but then become impatient and
leave without completing their transaction

Waiting Line Characteristics


1. Length of line can be either limited or unlimited
- limited length: when, because of physical restrictions, the queue cannot
grow to an infinite length (ex. eatery with 10 tables enough for 50 diners)
- unlimited length: all analytic queuing models in this chapter are treated
under this assumption (ex. toll booth)
2. Queue Discipline
- rule by which customers in the line are to receive service
- ex. FIFO, express lane, Senior citizen's lane

Service Facility Characteristics: Two Basic Properties


1. configuration of the service system
- service systems are classified in terms of the number of channels, or
servers, and number of phases, or service stops
- a single-channel system with one server is quite common: the customer
receives service from just one server
- multichannel systems exist when multiple servers are fed by one common
waiting line
- single-channel but multiphase system: the customer goes through several
steps or phases to be serviced
- multichannel, multiphase systems combine the two

2. pattern of service times


- service patterns can be either constant or random
- constant service times are often machine controlled (e.g. automated parking
ticket)
- more often, service times are randomly distributed according to a negative
exponential probability distribution
- analysts should observe, collect, and plot service time data to ensure that
the observations fit the assumed distributions when applying these models

Using Kendall Notation to identify models


– D.G. Kendall developed a notation for queuing models

– specific letters are used to represent probability distributions



M = Poisson distribution for number of occurrences
D = constant (deterministic) rate
G = general distribution with known mean and variance

Single-channel model with Poisson arrivals and exponential service times:


M/M/1

2-channel system:
M/M/2

3-channel system with Poisson arrivals and constant service time:


M/D/3

4-channel system with Poisson arrivals and normally distributed service times:
M/G/4

Single-channel queuing model with Poisson Arrivals and exponential service times
(M/M/1)
Assumptions of the model
– arrivals are served on a FIFO basis
– there is no balking or reneging
– arrivals are independent of each other but the arrival rate is constant over time
– arrivals follow a Poisson distribution
– service times are variable and independent but the average is known
– service times follow a negative exponential distribution
– average service rate is greater than the average arrival rate
If these seven conditions are met, a series of equations that define the queueʼs
operating characteristics can be developed

λ = mean number of arrivals per time period


µ = mean number of customers or units served per time period
The arrival rate and the service rate must be defined for the same time period.
Example: If λ is the average number of arrivals per hour, then µ must indicate the
average number that could be served per hour.
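A minimal sketch of the standard M/M/1 operating characteristics; the rates are
assumed values, not from the notes:

```python
# Assumed rates; lam and mu must refer to the same time period, and lam < mu.
lam, mu = 2.0, 3.0            # arrivals per hour, services per hour

rho = lam / mu                # utilization factor
L = lam / (mu - lam)          # average number of units in the system
W = 1 / (mu - lam)            # average time a unit spends in the system
Lq = lam**2 / (mu * (mu - lam))   # average number of units waiting in the queue
Wq = lam / (mu * (mu - lam))      # average time a unit waits in the queue
P0 = 1 - rho                  # probability of zero units in the system

print(rho, L, W, Lq, Wq, P0)
```
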
Multichannel Queuing Model with Poisson Arrivals and Exponential Service Times
(M/M/m)
Assumptions of the model:
– arrivals are served on a FIFO basis
– there is no balking or reneging
– arrivals are independent of each other but the arrival rate is constant over time
– arrivals follow a Poisson distribution
– service times are variable and independent but the average is known
– service times follow a negative exponential distribution
– average service rate is greater than the average arrival rate
m = number of channels open
λ = average arrival rate
µ = average service rate at each channel
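A minimal sketch of the standard multichannel (M/M/m) operating characteristics,
again with assumed values for the rates and the number of channels:

```python
# Assumed values; a steady state requires lam < m * mu.
from math import factorial

lam, mu, m = 2.0, 3.0, 2      # arrival rate, service rate per channel, channels open

r = lam / mu
P0 = 1 / (sum(r**n / factorial(n) for n in range(m))
          + r**m / (factorial(m) * (1 - lam / (m * mu))))

Lq = (lam * mu * r**m) / (factorial(m - 1) * (m * mu - lam)**2) * P0  # number waiting
L = Lq + r                    # number in the system
Wq = Lq / lam                 # time waiting in the queue
W = Wq + 1 / mu               # time in the system

print(P0, Lq, L, Wq, W)
```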

Single-Channel Queuing Model with Poisson Arrivals and Constant Service Times
(M/D/1)
– constant service times are used when customers or units are processed
according to a fixed cycle
– the values for Lq, Wq, L, and W are always less than those of models with
variable service time: the average queue length and average waiting time are
halved in constant service rate models
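A minimal sketch of the standard M/D/1 formulas with assumed rates; note that Lq and
Wq come out at half their M/M/1 counterparts:

```python
# Assumed rates; service time is constant (deterministic).
lam, mu = 2.0, 3.0            # arrival rate, service rate

Lq = lam**2 / (2 * mu * (mu - lam))   # average queue length (half the M/M/1 value)
Wq = lam / (2 * mu * (mu - lam))      # average waiting time (half the M/M/1 value)
L = Lq + lam / mu                     # average number in the system
W = Wq + 1 / mu                       # average time in the system

print(Lq, Wq, L, W)
```
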
Finite Population Model Single-channel (M/M/1 with Finite Source)
When the population of potential customers is limited, the models are different.
There is now a dependent relationship between the length of the queue and the
arrival rate. The model has the following assumptions
– there is only one server
– the population of units seeking service is finite
– arrivals follow a Poisson distribution and service times are exponentially
distributed
– customers are served on a first-come, first-served basis
Equations for the finite population model:
Using λ = mean arrival rate, µ = mean service rate, and N = size of the population,
the operating characteristics are:
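A minimal sketch using the standard finite-source (machine-repair) formulas, with
assumed values for λ, µ, and N:

```python
# Assumed values; lam is the arrival rate per unit in the population.
from math import factorial

lam, mu, N = 0.05, 0.5, 5     # per-unit arrival rate, service rate, population size

P0 = 1 / sum(factorial(N) / factorial(N - n) * (lam / mu)**n for n in range(N + 1))

Lq = N - ((lam + mu) / lam) * (1 - P0)   # average number waiting
L = Lq + (1 - P0)                        # average number in the system
Wq = Lq / ((N - L) * lam)                # average time waiting in the queue
W = Wq + 1 / mu                          # average time in the system

print(P0, Lq, L, Wq, W)
```
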
General Operating Characteristic Relationships for any queuing system in a steady
state
– a steady state condition exists when a system is in its normal stabilized
condition, usually after an initial transient state that may occur (e.g. customers
waiting at the door when a business opens in the morning)
– this means that both the arrival rate and the service rate should be stable in
steady state
– the first two of these relationships are Little's flow equations:
L = λW and Lq = λWq
– third relationship: average time in system = average time waiting in queue +
average time receiving service
W = Wq + 1/µ

More Complex Queuing models and the use of simulation


– in the real world there are often variations from basic queuing models
– computer simulation can be used to solve these more complex problems
– simulation allows the analysis of controllable factors
– simulation should be used when standard queuing models provide only a
poor approximation of the actual service system

Simulation
– one of the most widely used quantitative analysis tools
– to simulate is to try to duplicate the features, appearance, and characteristics
of a real system
– physical models can also be built to test systems
– one of the oldest quantitative analysis tools, but it was not until the
introduction of computers that it became a practical means of solving
management and military problems
Advantages of simulation
– relatively straightforward and flexible
– recent advances in computer software make simulation models very easy to
develop
– it can be used to analyze large and complex real-world situations
– allows “what-if” type questions: scenario planning
– does not interfere with real-world system
– enables the study of interactions between components
– enables time compression
– enables inclusion of real-world complications

Disadvantages of simulation
– often expensive as it may require a long, complicated process to develop the
model
– does not generate optimal solutions; trial and error approach
– requires managers to generate all conditions and constraints of the real world
problem
– each model is unique and the solutions and inferences are not usually
transferable to other problems

When systems contain elements that exhibit chance in their behavior, the Monte
Carlo method can be applied. Examples:
– Inventory demand
– Lead time for inventory
– Times between machine breakdowns
– Times between arrivals
– Service times
– Times to complete project activities
– Number of employees absent
Monte Carlo simulation is based on experimentation with the probabilistic
elements through random sampling. It has the following five steps:
1. Establishing a probability distribution for important variables
2. Building a cumulative probability distribution for each variable
3. Establishing an interval of random numbers for each variable
4. Generating random numbers
5. Actually simulating a series of trials

Inventory Demand

1. Establishing probability distributions
- one way to establish a probability distribution for a given variable is to examine
historical outcomes
- managerial estimates based on judgement and experience can also be used
2. Building a cumulative probability distribution for each variable
- convert a regular probability distribution to a cumulative probability
distribution
- a cumulative probability is the probability that a variable will be less than or
equal to a particular value
3. Setting random number intervals
- assign a set of numbers to represent each possible value or outcome
- these are random number intervals
- a random number is a series of digits that have been selected by a totally
random process
- the range of the random number intervals corresponds exactly to the
probability of the outcomes as shown by the cumulative probability
distribution
4. Generating random numbers
- random numbers can be generated in several ways
- large problems need to use computer programs to generate the needed
random numbers
- small problems: random processes like roulette wheels or pulling chips from
a hat may be used
- the most common manual method is to use a random number table
- because everything is random in a random number table, we can select
numbers from anywhere in the table to use in the simulation
5. Simulating the experiment
- we select random numbers from the random number table
- the number selected has a corresponding range in the cumulative
distribution table
- expected daily demand is the sum of each probability multiplied by its
corresponding variable value
- the expected value is the long-term average (and not a short-term simulated
average)
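A minimal sketch of the five Monte Carlo steps applied to a hypothetical daily-demand
distribution; the probabilities and the number of simulated days are assumptions:

```python
import random

# Step 1: probability distribution for the variable (assumed daily demand).
demand_prob = {0: 0.05, 1: 0.10, 2: 0.20, 3: 0.30, 4: 0.20, 5: 0.15}

# Step 2: cumulative probabilities; step 3: the random-number intervals follow
# directly from the cumulative values.
cumulative = []
running = 0.0
for d, p in demand_prob.items():
    running += p
    cumulative.append((running, d))

def simulate_day():
    r = random.random()              # step 4: generate a random number
    for cutoff, d in cumulative:     # step 5: map it to a demand via the intervals
        if r <= cutoff:
            return d
    return cumulative[-1][1]

trials = [simulate_day() for _ in range(1000)]         # assumed 1,000 simulated days
simulated_avg = sum(trials) / len(trials)
expected = sum(d * p for d, p in demand_prob.items())  # long-run expected demand
print(simulated_avg, expected)       # the simulated average approaches the expected value
```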
