
Operations Management

Class 1: Introduction to OM
Best Box Builders
Process Analysis / Theory of Constraints

Class 2: Decision Analysis

1
Decision Analysis
Decision analysis is suitable for a wide range of operations
management decisions where uncertainty is present,
e.g.,
• capacity planning,
• product design,
• equipment selection,
• location planning.

2
Decision Analysis
 An analytic and systematic approach to decision making
In this chapter: decision making under an uncertain environment (i.e., the decision maker does not know which event will occur in the future)

 Considers all the possible alternatives and possible outcomes (events), and chooses the best among alternatives

 The payoff for each combination of a chosen alternative and an occurring event should be available

 Probabilities for the possible events may or may not be provided

3
Five Steps in Decision Making
1. Clearly define the problem
2. List all possible decision alternatives
3. Identify all possible outcomes for each decision
alternative
4. Identify the payoff for each alternative &
outcome combination
5. Use a decision modeling technique to choose
an alternative

4
Thompson Lumber Company (TLC)
1. Decision: Whether or not to make and sell
backyard storage sheds
2. Alternatives:
• Build a large plant
• Build a small plant
• Do nothing (don’t forget this option)

3. Outcomes (events): Demand for sheds will be high, moderate, or low

5
Thompson Lumber Company (TLC)
4. Payoffs:
Outcomes (Demand)
Alternatives High Moderate Low
Build large plant 200 000 100 000 -120 000
Build small plant 90 000 50 000 -20 000
No plant 0 0 0

5. Apply a decision modeling method

6
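Before applying any of the decision modeling methods that follow, it helps to capture the payoff table in a small data structure. A minimal Python sketch (the variable name `payoffs` and the outcome labels are illustrative, not from the slides); the later sketches re-declare this table locally so each one runs on its own:

```python
# TLC payoff table: alternative -> payoff under each demand outcome
payoffs = {
    "large plant": {"high": 200_000, "moderate": 100_000, "low": -120_000},
    "small plant": {"high": 90_000,  "moderate": 50_000,  "low": -20_000},
    "no plant":    {"high": 0,       "moderate": 0,       "low": 0},
}
```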
Decision-Making Environments
Type 1: Decision making under certainty

Type 2: Decision making under uncertainty

Type 3: Decision making under risk

7
Decision Making Under Certainty
 The consequence (payoff) of every alternative is
known

 Usually there is only one outcome (state of nature) for each alternative

 This seldom occurs in reality

 Decision making methods used: linear programming (LP), integer programming (IP)

8
Decision Making Under Uncertainty
 Possible alternatives and possible outcomes
(events) are known; however, probabilities of
the possible outcomes are not known

 Decision making methods:


1. Maximax profit
2. Maximin profit
3. Criterion of realism
4. Equally likely
5. Minimax regret

9
Maximax Profit Criterion: TLC
 The optimistic approach
 Assume the best scenario will occur for each alternative
 Maximize the maximum profits

Outcomes (Demand)
Alternatives High Moderate Low
Build large plant 200 000 100 000 -120 000
Build small plant 90 000 50 000 -20 000
No plant 0 0 0
Thus, choose large plant (largest maximum payoff)

10
Maximin Profit Criterion: TLC
 The pessimistic approach
 Assume the worst scenario will occur for each alternative
 Maximize the minimum profits

Outcomes (Demand)
Alternatives High Moderate Low
Build large plant 200 000 100 000 -120 000
Build small plant 90 000 50 000 -20 000
No plant 0 0 0

Thus, choose no plant (largest minimum payoff)

11
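A minimal Python sketch of the maximax and maximin criteria applied to the TLC payoff table (variable names are illustrative):

```python
# Payoffs per alternative under high, moderate, and low demand
payoffs = {
    "large plant": [200_000, 100_000, -120_000],
    "small plant": [90_000, 50_000, -20_000],
    "no plant":    [0, 0, 0],
}

# Maximax (optimistic): maximize the maximum payoff of each alternative
maximax_choice = max(payoffs, key=lambda a: max(payoffs[a]))   # 'large plant'

# Maximin (pessimistic): maximize the minimum payoff of each alternative
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a]))   # 'no plant'

print(maximax_choice, maximin_choice)
```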
Criterion of Realism /
Hurwicz Criterion
 Uses the coefficient of realism (α) to estimate
the decision maker’s optimism (i.e., 0 ≤ α ≤ 1)

 α = 1: the decision maker is totally optimistic
 α = 0: the decision maker is totally pessimistic

 Realism payoff for alternative = α x (Maximum payoff for alternative) + (1-α) x (Minimum payoff for alternative)

12
Criterion of Realism /
Hurwicz Criterion: TLC
Suppose α = 0.45

Alternatives Realism Payoff
Build large plant 24 000
Build small plant 29 500
No plant 0

Thus, choose small plant

13
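A minimal Python sketch of the Hurwicz criterion with α = 0.45, reproducing the realism payoffs above (names are illustrative):

```python
alpha = 0.45
payoffs = {
    "large plant": [200_000, 100_000, -120_000],
    "small plant": [90_000, 50_000, -20_000],
    "no plant":    [0, 0, 0],
}

# Realism payoff = alpha * (maximum payoff) + (1 - alpha) * (minimum payoff)
realism = {a: alpha * max(p) + (1 - alpha) * min(p) for a, p in payoffs.items()}
# {'large plant': 24000.0, 'small plant': 29500.0, 'no plant': 0.0}

best = max(realism, key=realism.get)   # 'small plant'
```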
Equally Likely Criterion /
Laplace Criterion: TLC
Assumes all outcomes equally likely and uses
the average payoff

Alternatives Average Payoff
Build large plant 60 000
Build small plant 40 000
No plant 0

Thus, choose large plant

14
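A minimal Python sketch of the equally likely (Laplace) criterion, reproducing the average payoffs above:

```python
payoffs = {
    "large plant": [200_000, 100_000, -120_000],
    "small plant": [90_000, 50_000, -20_000],
    "no plant":    [0, 0, 0],
}

# Treat every outcome as equally likely and average the payoffs
average = {a: sum(p) / len(p) for a, p in payoffs.items()}
# {'large plant': 60000.0, 'small plant': 40000.0, 'no plant': 0.0}

best = max(average, key=average.get)   # 'large plant'
```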
Minimax Regret Criterion: TLC
 Regret or opportunity loss measures how much better
we could have done
Regret = (best payoff) – (actual payoff)

Outcomes (Demand)
Alternatives High Moderate Low
Large plant 200 000 100 000 -120 000
Small plant 90 000 50 000 -20 000
No plant 0 0 0

The best payoff for each outcome is 200 000 (high), 100 000 (moderate), and 0 (low); these values are used to compute the regret


15
Minimax Regret Criterion: TLC
Regret Table
Outcomes (Demand)
Alternatives High Moderate Low Max Regret
Large plant 0 0 120 000 120 000
Small plant 110 000 50 000 20 000 110 000
No plant 200 000 100 000 0 200 000

We want to minimize the amount of regret we might experience, so choose small plant

16
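A minimal Python sketch of the minimax regret criterion, reproducing the regret table above (names are illustrative):

```python
payoffs = {
    "large plant": [200_000, 100_000, -120_000],
    "small plant": [90_000, 50_000, -20_000],
    "no plant":    [0, 0, 0],
}

# Best payoff for each demand outcome (column-wise maximum)
best_per_outcome = [max(col) for col in zip(*payoffs.values())]   # [200000, 100000, 0]

# Regret = (best payoff for that outcome) - (actual payoff); keep the worst case
max_regret = {a: max(b - x for b, x in zip(best_per_outcome, p))
              for a, p in payoffs.items()}
# {'large plant': 120000, 'small plant': 110000, 'no plant': 200000}

best = min(max_regret, key=max_regret.get)   # 'small plant'
```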
Using Excel for Solution: TLC
SCREENSHOT 9.1B: Excel Solution for Thompson Lumber

17
Decision Making Under Risk
 Where probabilities of outcomes (events or states of
nature) are available

 Expected Monetary Value (EMV) uses the probabilities to calculate the average payoff for each alternative

EMV (for alternative i) = ∑ { (probability of outcome j) x (payoff of outcome j) }

18
EMV Method: TLC

Outcomes (Demand)
Alternatives High (PH=0.3) Moderate (PM=0.5) Low (PL=0.2) EMV
Large plant 200 000 100 000 -120 000 86 000
Small plant 90 000 50 000 -20 000 48 000
No plant 0 0 0 0

Since the large plant alternative has the largest EMV, we choose the large plant

19
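A minimal Python sketch of the EMV calculation, reproducing the values above (names are illustrative):

```python
probs = {"high": 0.3, "moderate": 0.5, "low": 0.2}
payoffs = {
    "large plant": {"high": 200_000, "moderate": 100_000, "low": -120_000},
    "small plant": {"high": 90_000, "moderate": 50_000, "low": -20_000},
    "no plant":    {"high": 0, "moderate": 0, "low": 0},
}

# EMV_i = sum over outcomes j of P(j) * payoff(i, j)
emv = {a: sum(probs[j] * p[j] for j in probs) for a, p in payoffs.items()}
# {'large plant': 86000.0, 'small plant': 48000.0, 'no plant': 0.0}

best = max(emv, key=emv.get)   # 'large plant'
```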
Expected Opportunity Loss (EOL)
 An alternative approach which minimizes EOL
 How much regret do we expect based on the
probabilities?

EOL (for alternative i) = ∑ { (probability of outcome j) x (regret of outcome j) }

Note: The expected value and expected opportunity loss criteria always result in the same decision.

20
EOL Method: TLC

Outcomes (Demand)
Alternatives High (PH=0.3) Moderate (PM=0.5) Low (PL=0.2) EOL
Large plant 0 0 120 000 24 000
Small plant 110 000 50 000 20 000 62 000
No plant 200 000 100 000 0 110 000

Since the large plant alternative has the smallest EOL, we choose the large plant

21
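A minimal Python sketch of the EOL calculation, using the regret table from slide 16 (names are illustrative):

```python
probs = {"high": 0.3, "moderate": 0.5, "low": 0.2}
regret = {
    "large plant": {"high": 0, "moderate": 0, "low": 120_000},
    "small plant": {"high": 110_000, "moderate": 50_000, "low": 20_000},
    "no plant":    {"high": 200_000, "moderate": 100_000, "low": 0},
}

# EOL_i = sum over outcomes j of P(j) * regret(i, j)
eol = {a: sum(probs[j] * r[j] for j in probs) for a, r in regret.items()}
# {'large plant': 24000.0, 'small plant': 62000.0, 'no plant': 110000.0}

best = min(eol, key=eol.get)   # 'large plant' -- the same decision as EMV
```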
Perfect Information (PI)
 Perfect Information would tell us with certainty
which outcome (event) is going to occur

 Having perfect information before making a decision would allow choosing the best payoff for the outcome

 In reality, perfect information is rarely available.

 If perfect information exists, however, how much is the decision maker willing to pay for it?

22
Expected Value With PI (EVwPI)
 The expected payoff when we have and use perfect information before making a decision

EVwPI = ∑ { (probability of outcome j) x (best payoff of outcome j) }

23
Expected Value of PI (EVPI)
 The amount by which perfect information would
increase our expected payoff
 The maximum amount the decision maker is
willing to pay for perfect information

EVPI = EVwPI – EVwoPI

Note: EVwPI = Expected value with perfect information
EVwoPI = the best EMV without perfect information

Note: EVPI always equals the EOL for the best decision.
24
Expected Value of PI (EVPI)

Outcomes (Demand)
Alternatives High (PH=0.3) Moderate (PM=0.5) Low (PL=0.2)
Build large plant 200 000 100 000 -120 000
Build small plant 90 000 50 000 -20 000
No plant 0 0 0
* The best payoff in each demand column (200 000, 100 000, and 0) would be chosen based on perfect information

EVwPI = 0.3 x 200 000 + 0.5 x 100 000 + 0.2 x 0 = $110 000
25
Expected Value of PI (EVPI)

EVPI = EVwPI – EVwoPI = $110 000 – $86 000 = $24 000

 Use of the “perfect information” increases the expected value by $24 000

 In other words, the decision maker is willing to pay up to $24 000 for the perfect information.

 If perfect information costs less than $24 000, the decision maker should purchase it; otherwise, he shouldn't.

26
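A minimal Python sketch of the EVwPI and EVPI calculations, reproducing the figures above (names are illustrative):

```python
probs = {"high": 0.3, "moderate": 0.5, "low": 0.2}
payoffs = {
    "large plant": {"high": 200_000, "moderate": 100_000, "low": -120_000},
    "small plant": {"high": 90_000, "moderate": 50_000, "low": -20_000},
    "no plant":    {"high": 0, "moderate": 0, "low": 0},
}

# With perfect information, pick the best payoff for each outcome first,
# then weight by the outcome probabilities
evwpi = sum(probs[j] * max(p[j] for p in payoffs.values()) for j in probs)   # 110000.0

# Best EMV without perfect information
evwopi = max(sum(probs[j] * p[j] for j in probs) for p in payoffs.values())  # 86000.0

evpi = evwpi - evwopi   # 24000.0 -- equals the EOL of the best decision
```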
Decision Trees
A decision tree can be used instead of a table to
show alternatives, outcomes, and payoffs

A decision tree consists of nodes and arcs
» □ = decision node; ○ = outcome node
» Arcs (lines) indicate decision alternatives or outcomes (states of nature)

 It shows the order of decisions and outcomes

27
Thompson Lumber Company (TLC)
Payoff Table:
Outcomes (Demand)
Alternatives High Moderate Low
Build large plant 200 000 100 000 -120 000
Build small plant 90 000 50 000 -20 000
No plant 0 0 0

28
Decision Tree: TLC
FIGURE 1:

29
Folding Back a Decision Tree
A process of identifying the optimal decision in a
decision tree
 The process begins after a complete decision
tree has been developed
 Moving from right to left, calculate the expected
payoff at each outcome node
 At each decision node, select the best decision
alternative (based on expected payoff)
– Largest payoff for the maximization problem
– Lowest cost for the minimization problem

30
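A minimal Python sketch of the fold-back procedure on the single-stage TLC tree, assuming a simple nested-dictionary representation (node layout and names are illustrative, not the Simple Decision Tree add-in):

```python
def fold_back(node):
    """Return the expected payoff of a node, folding back from right to left."""
    if not isinstance(node, dict):                  # leaf: a payoff value
        return node
    if node["type"] == "chance":                    # outcome node: expected payoff
        return sum(p * fold_back(child) for p, child in node["branches"])
    # decision node: keep the alternative with the largest expected payoff
    return max(fold_back(child) for child in node["branches"].values())

tlc_tree = {
    "type": "decision",
    "branches": {
        "large plant": {"type": "chance",
                        "branches": [(0.3, 200_000), (0.5, 100_000), (0.2, -120_000)]},
        "small plant": {"type": "chance",
                        "branches": [(0.3, 90_000), (0.5, 50_000), (0.2, -20_000)]},
        "no plant": 0,
    },
}

print(fold_back(tlc_tree))   # 86000.0 -> build the large plant
```

Because the function is recursive, the same sketch would also fold back multistage trees such as the expanded survey tree on the later slides; for a cost-minimization tree the `max` at decision nodes would become `min`.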
Decision Tree with EMVs: TLC
FIGURE 2: Completed Decision Tree

We select large plant because it provides the largest expected payoff (i.e., $86 000)

31
Using Excel Simple Decision Tree
 We will use an Excel add-in called ‘Simple
Decision Tree’ to create and solve decision
trees
 Download it from
https://sites.google.com/site/simpledecisiontree/
 Load the file SimpleDecisionTree_V1.4.xla into
Excel
 Click on Add-Ins to see the Simple Decision
Tree Toolbar

32
Multistage Decision Problems

 Multistage problems involve a sequence of several decision alternatives and outcomes (states of nature)

 It is possible for a decision alternative (or outcome) to be immediately followed by another decision alternative (or outcome)

 Decision trees are best for showing multistage decision making problems because they can show the sequential arrangement at each stage
33
Expanded Decision Tree: TLC
 Suppose they will first decide (prior to the plant decision)
whether to pay $4 000 to conduct a market survey
 Survey results may be positive or negative in association
with the demand for the product
 Survey results will be imperfect (i.e., not perfect information)
 Then they will decide whether to build a large plant,
small plant, or no plant
 Then they will find out what the actual outcome and
payoff are

34
Expanded Decision Tree: TLC
FIGURE 3:

Note the probabilities for “high demand”, “moderate demand”, and “low demand” in the “conduct survey” subtree for “positive result” and “negative result” -> revised probabilities

35
Decision Tree with EMVs: TLC
FIGURE 4:

Note: the non-optimal branches are pruned.

36
Decision Tree Analysis: TLC
Based on the decision tree with EMVs shown in Figure 4,
we have the following strategy:

1. Conduct the survey


2. If the survey results are positive, then build the large
plant (EMV = $141 840)
If the survey results are negative, then build the small
plant (EMV = $16 540)

Note: The actual payoff for TLC would be different from the
calculated EMVs.
37
Expected Value of
Sample Information (EVSI)
 The Thompson Lumber survey provides sample
information (which is not perfect information)
 What is the value of this sample information (SI)?
 In other words, what is the maximum amount of money TLC is willing to pay for the sample information?

EVSI = (EMV with sample information at no cost) – (EMV without any sample information)

38
EVSI: TLC
If sample information were available at no cost:
EMV (with free SI) = $87 961 + $4 000 = $91 961
(the $4 000 survey cost we assumed is added back)
EMV (no SI) = $86 000 (previously obtained)

Thus EVSI = $91 961 – $86 000 = $5 961

Implication:
If the survey costs less than $5 961, then TLC should
purchase it; otherwise, do not purchase it.
39
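A minimal arithmetic sketch in Python of the EVSI calculation, using the figures from the slides ($87 961 is the root EMV of the expanded tree with the $4 000 survey cost already deducted):

```python
emv_with_survey = 87_961      # root EMV of the expanded tree (survey cost included)
survey_cost = 4_000
emv_without_si = 86_000       # best EMV without any sample information (slide 19)

emv_with_free_si = emv_with_survey + survey_cost    # 91_961
evsi = emv_with_free_si - emv_without_si            # 5_961
print(evsi)
```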
Efficiency of Sample Information
 How close does the sample information come to
perfect information?
 Efficiency of sample information is the proportion
(or percentage) of the expected value of sample
information to the expected value of perfect
information
Efficiency of Sample Information = EVSI / EVPI

 Efficiency = 5 961 / 24 000 = 0.2484 or 24.84%
– See slide 26: EVPI = 24 000.
40
Estimating Revised Probabilities

 Allows probability values to be revised based on new information (from a survey or test market)

 Prior probabilities are the probability values before new information

 Revised/posterior probabilities are obtained by combining the prior probabilities with the new information

 In the decision trees (Figures 3 and 4), both prior and revised probabilities for demand outcomes were used for calculating EMVs

41
Estimating Revised Probabilities
 Given Known Prior Probabilities (HD = high demand, MD = moderate demand, LD = low demand):
P(HD) = 0.30
P(MD) = 0.50
P(LD) = 0.20

 How do we find the revised probabilities when the survey result is given?
For example: P(HD|PS) = ?

 We are using Bayes' theorem, which allows decision makers to revise probability values.

42
Estimating Revised Probabilities
 It is necessary to understand the conditional probability formula: P(A|B) = P(A and B) / P(B)

 P(A|B) is the probability of event A occurring, given that event B has occurred

 When P(A|B) ≠ P(A), this means the probability of event A has been revised based on the fact that event B has occurred

43
Estimating Revised Probabilities
 The marketing research firm provided the following probabilities, based on its track record of survey accuracy (PS = positive survey result, NS = negative survey result):
P(PS|HD) = 0.967 P(NS|HD) = 0.033
P(PS|MD) = 0.533 P(NS|MD) = 0.467
P(PS|LD) = 0.067 P(NS|LD) = 0.933

 Here the demand is “given”, but we need to reverse the events so the survey result is “given”

 We need to use the above conditional probabilities as well as the known prior probabilities in order to calculate the revised probabilities
44
Estimating Revised Probabilities
 Finding probability of the demand outcome given the
survey result:
P(HD|PS) = P(HD and PS) / P(PS)
= {P(PS|HD) x P(HD)} / P(PS)

 The prior and conditional probability values above are known, so we only need to find P(PS):

P(PS) = P(PS|HD) x P(HD) + P(PS|MD) x P(MD) + P(PS|LD) x P(LD)
      = 0.967 x 0.30 + 0.533 x 0.50 + 0.067 x 0.20
      = 0.570

45
Estimating Revised Probabilities
 Now we can calculate P(HD|PS):

P(HD|PS) = {P(PS|HD) x P(HD)} / P(PS)
         = {0.967 x 0.30} / 0.570
         = 0.509
 Notice that the probability of HD increased from 0.30 to
0.509 given the positive survey result
 The other five conditional probabilities are found in the
same manner
 Instead of using Bayes’ theorem formula, we can use the
tableau approach shown on the next slides

46
Tableau Approach: TLC

47
Tableau Approach: TLC

48
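The tableau figures are not reproduced here, but all six revised probabilities can be computed directly from Bayes' theorem. A minimal Python sketch (dictionary names are illustrative):

```python
prior = {"HD": 0.30, "MD": 0.50, "LD": 0.20}

# P(survey result | demand), from the marketing research firm's track record
likelihood = {
    "PS": {"HD": 0.967, "MD": 0.533, "LD": 0.067},
    "NS": {"HD": 0.033, "MD": 0.467, "LD": 0.933},
}

revised = {}
for survey, cond in likelihood.items():
    # Total probability of the survey result: P(S) = sum over d of P(S|d) * P(d)
    p_survey = sum(cond[d] * prior[d] for d in prior)     # P(PS) = 0.570, P(NS) = 0.430
    # Bayes' theorem: P(d|S) = P(S|d) * P(d) / P(S)
    revised[survey] = {d: round(cond[d] * prior[d] / p_survey, 3) for d in prior}

print(revised["PS"])   # {'HD': 0.509, 'MD': 0.468, 'LD': 0.024}
print(revised["NS"])   # {'HD': 0.023, 'MD': 0.543, 'LD': 0.434}
```

These values match the revised probabilities computed on slides 45–46 (e.g., P(HD|PS) ≈ 0.509).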
Readings
 If you want to get a better grasp of the key concepts, the following readings are recommended:
 Chapter 3. Decision Analysis
– Originally Chapter 19 of Business Statistics: For
Contemporary Decision Making, Second Canadian Edition

49
