
Decision Theory

In today’s lecture
Decision Analysis
For evaluating and choosing among alternatives
Considers all the possible alternatives and possible outcomes
Five Steps in Decision Making
1. Clearly define the problem
2. List all possible alternatives
3. Identify all possible outcomes for each alternative
4. Identify the payoff for each alternative & outcome combination
5. Use a decision modeling technique to choose an alternative
Thompson Lumber Co. Example

1. Decision: Whether or not to make and sell storage sheds
2. Alternatives:
• Build a large plant
• Build a small plant
• Do nothing
3. Outcomes: Demand for sheds will be high, moderate, or low
4. Payoffs
Outcomes (Demand)
Alternatives High Moderate Low
Large plant 200,000 100,000 -120,000
Small plant 90,000 50,000 -20,000
No plant 0 0 0

5. Apply a decision modeling method
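To make the later calculations concrete, the payoff table from step 4 can be written as a small data structure. The sketch below is Python with hypothetical names; the decision criteria discussed next all work from a table like this.

```python
# Thompson Lumber payoff table (step 4): alternative -> outcome -> payoff.
payoffs = {
    "Large plant": {"High": 200_000, "Moderate": 100_000, "Low": -120_000},
    "Small plant": {"High": 90_000, "Moderate": 50_000, "Low": -20_000},
    "No plant":    {"High": 0, "Moderate": 0, "Low": 0},
}

for alternative, row in payoffs.items():
    print(alternative, row)
```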


Types of Decision
Modeling Environments

Type 1: Decision making under certainty

Type 2: Decision making under uncertainty

Type 3: Decision making under risk


Decision Making Under Certainty
• The consequence of every alternative is known
• Usually there is only one outcome for each alternative
• This seldom occurs in reality
Decision Making Under Uncertainty

• Probabilities of the possible outcomes are not known
• Decision making methods:
1.Maximax
2.Maximin
3.Criterion of realism
4.Equally likely
5.Minimax regret
Maximax Criterion
• The optimistic approach
• Assume the best payoff will occur for each
alternative
Outcomes (Demand)
Alternatives High Moderate Low
Large plant 200,000 100,000 -120,000
Small plant 90,000 50,000 -20,000
No plant 0 0 0

Choose the large plant (its best payoff, $200,000, is the largest)


Maximin Criterion
• The pessimistic approach
• Assume the worst payoff will occur for each
alternative
Outcomes (Demand)
Alternatives High Moderate Low
Large plant 200,000 100,000 -120,000
Small plant 90,000 50,000 -20,000
No plant 0 0 0

Choose no plant (its worst payoff, $0, is the best of the worst payoffs)


Criterion of Realism

• Uses the coefficient of realism (α) to estimate the decision maker’s optimism
• 0 < α < 1

α x (max payoff for alternative) + (1 – α) x (min payoff for alternative)
= Realism payoff for alternative
Suppose α = 0.45
Realism
Alternatives Payoff
Large plant 24,000
Small plant 29,500
No plant 0

Choose small plant


Equally Likely Criterion
Assumes all outcomes equally likely and uses the
average payoff
Average
Alternatives Payoff
Large plant 60,000
Small plant 40,000
No plant 0
Choose the large plant
Minimax Regret Criterion
• Regret or opportunity loss measures how much
better we could have done
Regret = (best payoff) – (actual payoff)
Outcomes (Demand)
Alternatives High Moderate Low
Large plant 200,000 100,000 -120,000
Small plant 90,000 50,000 -20,000
No plant 0 0 0

The regret calculation uses the best payoff for each outcome: 200,000 (high), 100,000 (moderate), and 0 (low)


Regret Values
Outcomes (Demand)
Alternatives High Moderate Low Max Regret
Large plant 0 0 120,000 120,000
Small plant 110,000 50,000 20,000 110,000
No plant 200,000 100,000 0 200,000

We want to minimize the maximum regret we might experience, so choose the small plant
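A minimal Python sketch of the five criteria applied to the Thompson Lumber payoffs (hypothetical function names, not from the slides); it reproduces the choices reached above.

```python
# Payoff table: alternative -> outcome -> payoff.
payoffs = {
    "Large plant": {"High": 200_000, "Moderate": 100_000, "Low": -120_000},
    "Small plant": {"High": 90_000, "Moderate": 50_000, "Low": -20_000},
    "No plant":    {"High": 0, "Moderate": 0, "Low": 0},
}

def maximax(p):
    # Optimistic: pick the alternative with the largest best payoff.
    return max(p, key=lambda a: max(p[a].values()))

def maximin(p):
    # Pessimistic: pick the alternative with the largest worst payoff.
    return max(p, key=lambda a: min(p[a].values()))

def realism(p, alpha):
    # Weighted average of each alternative's best and worst payoff.
    return max(p, key=lambda a: alpha * max(p[a].values())
                                + (1 - alpha) * min(p[a].values()))

def equally_likely(p):
    # Average payoff across outcomes.
    return max(p, key=lambda a: sum(p[a].values()) / len(p[a]))

def minimax_regret(p):
    # Regret = (best payoff for the outcome) - (actual payoff).
    outcomes = next(iter(p.values()))
    best = {o: max(p[a][o] for a in p) for o in outcomes}
    return min(p, key=lambda a: max(best[o] - p[a][o] for o in outcomes))

print(maximax(payoffs))         # Large plant
print(maximin(payoffs))         # No plant
print(realism(payoffs, 0.45))   # Small plant
print(equally_likely(payoffs))  # Large plant
print(minimax_regret(payoffs))  # Small plant
```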
Go to file 8-1.xls
Decision Making Under Risk

• Where probabilities of outcomes are available


• Expected Monetary Value (EMV) uses the
probabilities to calculate the average payoff for
each alternative
EMV (for alternative i) =
∑(probability of outcome) x (payoff of outcome)
Expected Monetary Value (EMV) Method
Outcomes (Demand)
Alternatives High Moderate Low EMV
Large plant 200,000 100,000 -120,000 86,000
Small plant 90,000 50,000 -20,000 48,000
No plant 0 0 0 0

Probability
of outcome 0.3 0.5 0.2

Choose the large plant (highest EMV)
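The EMV computation above can be checked with a short Python sketch (hypothetical names).

```python
payoffs = {
    "Large plant": {"High": 200_000, "Moderate": 100_000, "Low": -120_000},
    "Small plant": {"High": 90_000, "Moderate": 50_000, "Low": -20_000},
    "No plant":    {"High": 0, "Moderate": 0, "Low": 0},
}
probs = {"High": 0.3, "Moderate": 0.5, "Low": 0.2}

# EMV = sum over outcomes of probability x payoff.
for alt, row in payoffs.items():
    emv = sum(probs[o] * row[o] for o in probs)
    print(alt, emv)   # 86000.0, 48000.0, 0.0 -> choose the large plant
```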


Expected Opportunity Loss (EOL)

• How much regret do we expect based on the probabilities?

EOL (for alternative i) =
∑(probability of outcome) x (regret of outcome)
Regret (Opportunity Loss) Values
Outcomes (Demand)
Alternatives High Moderate Low EOL
Large plant 0 0 120,000 24,000
Small plant 110,000 50,000 20,000 62,000
No plant 200,000 100,000 0 110,000

Probability
of outcome 0.3 0.5 0.2

Choose the large plant (lowest EOL)
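The EOL numbers can be reproduced the same way (Python sketch, hypothetical names): compute the best payoff for each outcome, then the probability-weighted regret for each alternative.

```python
payoffs = {
    "Large plant": {"High": 200_000, "Moderate": 100_000, "Low": -120_000},
    "Small plant": {"High": 90_000, "Moderate": 50_000, "Low": -20_000},
    "No plant":    {"High": 0, "Moderate": 0, "Low": 0},
}
probs = {"High": 0.3, "Moderate": 0.5, "Low": 0.2}

# Best payoff for each outcome (the regret reference point).
best = {o: max(row[o] for row in payoffs.values()) for o in probs}

# EOL = sum over outcomes of probability x regret.
for alt, row in payoffs.items():
    eol = sum(probs[o] * (best[o] - row[o]) for o in probs)
    print(alt, eol)   # 24000.0, 62000.0, 110000.0 -> choose the large plant
```

Note that the alternative with the lowest EOL is the same one that has the highest EMV.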


Perfect Information
• Perfect Information would tell us
with certainty which outcome is
going to occur
• Having perfect information before
making a decision would allow
choosing the best payoff for the
outcome
Expected Value With
Perfect Information (EVwPI)

The expected payoff of having perfect information before making a decision

EVwPI = ∑ (probability of outcome) x (best payoff of outcome)
Expected Value of
Perfect Information (EVPI)

• The amount by which perfect information would increase our expected payoff
• Provides an upper bound on what to pay for additional
information

EVPI = EVwPI – EMV


EVwPI = Expected value with perfect information
EMV = the best EMV without perfect information
With perfect information (knowing the demand level), the best payoff for each
demand level would be chosen
Demand
Alternatives High Moderate Low
Large plant 200,000 100,000 -120,000
Small plant 90,000 50,000 -20,000
No plant 0 0 0

Probability 0.3 0.5 0.2

EVwPI = $110,000
Expected Value of Perfect Information

EVPI = EVwPI – EMV = $110,000 – $86,000 = $24,000

• The “perfect information” increases the expected value by $24,000
• Would it be worth $30,000 to obtain this perfect information for demand?
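A short Python sketch (hypothetical names) of the EVwPI and EVPI arithmetic above.

```python
payoffs = {
    "Large plant": {"High": 200_000, "Moderate": 100_000, "Low": -120_000},
    "Small plant": {"High": 90_000, "Moderate": 50_000, "Low": -20_000},
    "No plant":    {"High": 0, "Moderate": 0, "Low": 0},
}
probs = {"High": 0.3, "Moderate": 0.5, "Low": 0.2}

# With perfect information, the best payoff for each outcome is always chosen.
ev_wpi = sum(probs[o] * max(row[o] for row in payoffs.values()) for o in probs)

# Best EMV without perfect information.
best_emv = max(sum(probs[o] * row[o] for o in probs) for row in payoffs.values())

evpi = ev_wpi - best_emv
print(ev_wpi, best_emv, evpi)   # 110000.0 86000.0 24000.0
```

Since EVPI is only $24,000, paying $30,000 for the perfect demand information would not be worthwhile.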
Decision Trees

• Can be used instead of a table to show alternatives, outcomes, and payoffs
• Consist of nodes and arcs
• Show the order of decisions and outcomes
Decision Tree for Thompson Lumber
Folding Back a Decision Tree

• For identifying the best decision in the tree


• Work from right to left
• Calculate the expected payoff at each outcome node
• Choose the best alternative at each decision node (based on
expected payoff)
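Folding back can be expressed as a small recursive function. The node representation below is an assumption made for this sketch (it is not TreePlan's format): a decision node holds labeled alternatives, a chance node holds (probability, child) pairs, and a leaf is a payoff.

```python
def fold_back(node):
    # Leaf: the payoff itself.
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "chance":
        # Outcome node: expected payoff of its branches.
        return sum(p * fold_back(child) for p, child in branches)
    if kind == "decision":
        # Decision node: best expected payoff among the alternatives.
        return max(fold_back(child) for _, child in branches)

# Thompson Lumber without the survey:
tree = ("decision", [
    ("Large plant", ("chance", [(0.3, 200_000), (0.5, 100_000), (0.2, -120_000)])),
    ("Small plant", ("chance", [(0.3, 90_000), (0.5, 50_000), (0.2, -20_000)])),
    ("No plant", 0),
])
print(fold_back(tree))   # 86000.0 -> build the large plant
```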
Thompson Lumber Tree with EMV’s
Using TreePlan With Excel

• An add-in for Excel to create and solve decision trees

• Load the file Treeplan.xla into Excel (from the CD-ROM)
Decision Trees for Multistage
Decision-Making Problems
• Multistage problems involve a sequence of several decisions and outcomes
• It is possible for a decision to be immediately followed by another decision
• Decision trees are best for showing the sequential arrangement
Expanded Thompson
Lumber Example

• Suppose they will first decide whether to pay $4,000 to conduct a market survey
• Survey results will be imperfect
• Then they will decide whether to build a large plant, small plant, or
no plant
• Then they will find out what the outcome and payoff are
Thompson Lumber
Optimal Strategy
1. Conduct the survey

2. If the survey results are positive, then build the large plant (EMV =
$141,840)

If the survey results are negative, then build the small plant (EMV =
$16,540)
Expected Value of
Sample Information (EVSI)
• The Thompson Lumber survey provides sample information (not
perfect information)
• What is the value of this sample information?
EVSI = (EMV with free sample information)
- (EMV w/o any information)
EVSI for Thompson Lumber

If sample information had been free:
EMV (with free SI) = 87,961 + 4,000 = $91,961

EVSI = 91,961 – 86,000 = $5,961

How close does the sample information come to perfect information?
EVSI vs. EVPI

Efficiency of sample information = EVSI / EVPI

Thompson Lumber: 5,961 / 24,000 = 0.248
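The EVSI and efficiency arithmetic as a tiny Python sketch, using the figures quoted on the slides (hypothetical variable names).

```python
emv_with_free_si = 87_961 + 4_000   # add back the $4,000 survey cost
best_emv_no_info = 86_000           # best EMV without any information
evpi = 24_000

evsi = emv_with_free_si - best_emv_no_info
print(evsi)          # 5961
print(evsi / evpi)   # ~0.248: the survey captures about a quarter of EVPI
```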
Estimating Probability
Using Bayesian Analysis

• Allows probability values to be revised based on new information (from a survey or test market)
• Prior probabilities are the probability values before new
information
• Revised probabilities are obtained by combining the prior
probabilities with the new information
Known Prior Probabilities

P(HD) = 0.30
P(MD) = 0.50
P(LD) = 0.20

How do we find the revised probabilities, where the survey result is given?
For example: P(HD|PS) = ?
• It is necessary to understand the conditional probability formula:
P(A|B) = P(A and B) / P(B)
• P(A|B) is the probability of event A
occurring, given that event B has occurred
• When P(A|B) ≠ P(A), this means the
probability of event A has been revised
based on the fact that event B has occurred
The marketing research firm provided the
following probabilities based on its track
record of survey accuracy:
P(PS|HD) = 0.967 P(NS|HD) = 0.033
P(PS|MD) = 0.533 P(NS|MD) = 0.467
P(PS|LD) = 0.067 P(NS|LD) = 0.933

Here the demand is “given,” but we need to reverse the events so the survey result is “given”
• Finding the probability of the demand outcome given the survey result:
P(HD|PS) = P(HD and PS) / P(PS) = [P(PS|HD) x P(HD)] / P(PS)

• The probabilities on the right are already known, except P(PS), which we still need to find
P(PS) = P(PS|HD) x P(HD) + P(PS|MD) x P(MD) + P(PS|LD) x P(LD)
= 0.967 x 0.30 + 0.533 x 0.50 + 0.067 x 0.20
= 0.57
Now we can calculate P(HD|PS):
P(HD|PS) = [P(PS|HD) x P(HD)] / P(PS) = (0.967 x 0.30) / 0.57 = 0.509

• The other five conditional probabilities are found in the same manner
• Notice that the probability of HD
increased from 0.30 to 0.509 given
the positive survey result
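A short Python sketch (hypothetical names) of the revision for a positive survey result; it reproduces P(PS) = 0.57 and P(HD|PS) = 0.509, and gives the other two revised probabilities as well.

```python
priors = {"HD": 0.30, "MD": 0.50, "LD": 0.20}          # prior demand probabilities
ps_given = {"HD": 0.967, "MD": 0.533, "LD": 0.067}     # P(PS | demand) from the survey firm

# Total probability of a positive survey result.
p_ps = sum(ps_given[d] * priors[d] for d in priors)

# Bayes' theorem: P(demand | PS) = P(PS | demand) x P(demand) / P(PS).
posterior = {d: ps_given[d] * priors[d] / p_ps for d in priors}
print(round(p_ps, 2))                                  # 0.57
print({d: round(v, 3) for d, v in posterior.items()})  # HD 0.509, MD 0.468, LD 0.024
```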
Utility Theory

• An alternative to EMV
• People view risk and money differently, so
EMV is not always the best criterion
• Utility theory incorporates a person’s
attitude toward risk
• A utility function converts a person’s
attitude toward money and risk into a
number between 0 and 1
Jane’s Utility Assessment

Jane is asked: What is the minimum amount that would cause you to choose alternative 2?
• Suppose Jane says $15,000
• Jane would rather have the certainty of getting $15,000 than the risky possibility of getting $50,000 (or nothing)
• Utility calculation:
U($15,000) = U($0) x 0.5 + U($50,000) x 0.5
Where, U($0) = U(worst payoff) = 0
U($50,000) = U(best payoff) = 1

U($15,000) = 0 x 0.5 + 1 x 0.5 = 0.5 (for Jane)


• The same gamble is presented to
Jane multiple times with various
values for the two payoffs
• Each time Jane chooses her
minimum certainty equivalent
and her utility value is calculated
• A utility curve plots these values
Jane’s Utility Curve
• Different people will have different curves
• Jane’s curve is typical of a risk avoider
• Risk premium is the amount of EMV a person is willing to give up to avoid the risk
Risk premium = (EMV of gamble) – (certainty equivalent)

Jane’s risk premium = $25,000 – $15,000 = $10,000
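The risk premium calculation for Jane as a tiny Python sketch (hypothetical names).

```python
# Jane's gamble: 50/50 chance of $50,000 or $0; her certainty equivalent is $15,000.
emv_of_gamble = 0.5 * 50_000 + 0.5 * 0     # 25000.0
certainty_equivalent = 15_000

risk_premium = emv_of_gamble - certainty_equivalent
print(risk_premium)   # 10000.0 -> positive, so Jane is a risk avoider
```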
Types of Decision Makers
Risk Premium
• Risk avoiders: >0
• Risk neutral people: =0
• Risk seekers: <0
Utility Curves for Different Risk Preferences
Utility as a
Decision Making Criterion
• Construct the decision tree as usual with the same alternatives, outcomes, and probabilities
• Utility values replace monetary values
• Fold back as usual calculating expected utility values
Decision Tree Example for Mark
Utility Curve for Mark the Risk Seeker
Mark’s Decision Tree With Utility Values
The Bayesian Approach
• The key to the Bayesian approach is model
comparison (null and alternative hypotheses) and
your estimate of prior evidence

P(Hi | D) = P(D | Hi) P(Hi) / [ P(D | H1) P(H1) + ... + P(D | Hk) P(Hk) ]

• However, we can go ‘uninformed’ and treat all hypotheses as equally likely
Proportions: M & M’s
Let’s say we want to make a guess as to the proportion of
brown M&M’s in a bag*
They are fairly common but don’t make a majority in and of
themselves
Guesses? Somewhere between .1 and .5
Posit three null hypotheses
H01: π = .2
H02: π = .3
H03: π = .4
What now?
Take a sampling distribution approach as we typically would
and test each hypothesis
Proportions

Using the normal approximation, test each hypothesis:

Bag 1: n = 61, brown = 21, proportion = .344
H0: π = .2, Ha: π ≠ .2
p-value = .001, 95% CI: (.23, .46)

Bag 2: n = 59, brown = 15, proportion = .254
H0: π = .3, Ha: π ≠ .3
p-value = .481, 95% CI: (.14, .37)

Bag 3: n = 60, brown = 21, proportion = .350
H0: π = .4, Ha: π ≠ .4
p-value = .435, 95% CI: (.23, .47)

At this point we might reject the hypothesis of π = .2, but are still not sure about the other two.
Proportions: The Bayesian Way
Bag 1: n = 61 x = 21
Hypothesis Prior probability P(D|Hi) Prior X P(D|Hi) Posterior Probability

H1: π = .2 .333 .003394 .001130 .02


H2: π = .3 .333 .081093 .027044 .52
H3: π = .4 .333 .071586 .023838 .46
Total 1.0 .051972 1.0

n  61  2 1 4 0
P( D | H1 )    0  1   0    .2 .8  .003394
x n x

 x  21 
n  61  2 1 4 0
P( D | H 2 )    0  1   0    .3 .7  .081093
x n x

 x  21 
n  61  x n x
P( D | H 3 )    0  1   0    .4 .6  .071586
x n x

x
   21 

P(Hi | D) = P(D | Hi) P(Hi) / [ P(D | H1) P(H1) + ... + P(D | Hk) P(Hk) ]

For H1: .001130 / .051972 = .02
The rest based on updated priors
Bag 2: n = 59 x = 15
Hypothesis Prior probability P(D|Hi) Prior X P(D|Hi) Posterior Probability

H1: π = .2 .021742 .071176 .001548 .03


H2: π = .3 .520357 .087510 .045536 .90
H3: π = .4 .458670 .007421 .003404 .07
Total 1.0 .050488 1.0

Bag 3: n = 60 x = 21
Hypothesis Prior probability P(D|Hi) Prior X P(D|Hi) Posterior Probability

H1: π = .2 .030661 .002782 .000085 .00


H2: π = .3 .901917 .075965 .068514 .93
H3: π = .4 .067422 .078236 .005275 .07
Total 1.0 .073874 1.0

Based on 3 samples of N ≈ 60, which hypothesis do we go with?
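The three-bag analysis can be reproduced with a short Python sketch (hypothetical names): binomial likelihoods for each hypothesized π, with each bag's posterior carried forward as the next bag's prior.

```python
from math import comb

def binom_pmf(x, n, pi):
    # Likelihood of x brown M&M's in n draws if the true proportion is pi.
    return comb(n, x) * pi**x * (1 - pi)**(n - x)

hypotheses = [0.2, 0.3, 0.4]
priors = [1/3, 1/3, 1/3]                 # start 'uninformed'
bags = [(61, 21), (59, 15), (60, 21)]    # (n, brown) for each bag

for n, x in bags:
    likelihoods = [binom_pmf(x, n, pi) for pi in hypotheses]
    joint = [p * lk for p, lk in zip(priors, likelihoods)]
    total = sum(joint)
    priors = [j / total for j in joint]  # posterior becomes the next prior
    print([round(p, 2) for p in priors])
# Approximately [0.02, 0.52, 0.46], then [0.03, 0.9, 0.07], then [0.0, 0.93, 0.07]
```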


The Bayesian Approach: Means
• Say you had results from high school
records that suggested an average IQ of
about 108 for your friends
• You test them now (N = 9) and obtain a value of 110; which hypothesis is more
likely: that this is random deviation from an IQ of 100, or from an IQ of 108?
• z = (110 - 100)/5 = 2.00; p(D|H0) = .05*
• z = (110 - 108)/5 = 0.40; p(D|H1) = .70*

*two-tailed; recall that the standard error of the sampling distribution here is σ/√N = 15/√9 = 5
The Bayesian Approach
• So if we assume the null (IQ = 100) and the alternative (IQ = 108) are equally
likely, i.e. our priors are .50 for each…

p(Ho | D) = p(D | Ho) p(Ho) / [ p(D | Ho) p(Ho) + p(D | H1) p(H1) ]

p(Ho | D) = (.05 x .50) / (.05 x .50 + .70 x .50) = .067
p(H1 | D) = (.70 x .50) / (.05 x .50 + .70 x .50) = .933
The Bayesian Approach
• Now think: well, they did score 108 before and will probably be closer to that
than to 100, so maybe I’ll weight the alternative hypothesis as more likely (.75)

p(Ho | D) = (.05 x .25) / (.05 x .25 + .70 x .75) = .023
p(H1 | D) = (.70 x .75) / (.05 x .25 + .70 x .75) = .977

• The result is that the probability of the null hypothesis given the data is even lower
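A small Python sketch (hypothetical names) of the two-hypothesis update, using the two-tailed p-values from the slides as the likelihoods; it reproduces the .067 and .023 results.

```python
p_d_given_h0 = 0.05   # z = (110 - 100)/5 = 2.00
p_d_given_h1 = 0.70   # z = (110 - 108)/5 = 0.40

def posterior_h0(prior_h0):
    # Bayes' theorem with two exhaustive hypotheses.
    prior_h1 = 1 - prior_h0
    numerator = p_d_given_h0 * prior_h0
    return numerator / (numerator + p_d_given_h1 * prior_h1)

print(round(posterior_h0(0.50), 3))   # 0.067 with equal priors
print(round(posterior_h0(0.25), 3))   # 0.023 when the alternative is weighted .75
```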
Summary
Gist is that by taking a Bayesian approach we can get
answers regarding hypotheses
It involves knowing enough about your research situation
to posit multiple viable hypotheses
While one can go in ‘uninformed’ the real power is using
prior research to make worthwhile guesses regarding
priors
Though it may seem subjective, it is no more so than other research decisions, e.g. claiming p < .05 is ‘significant’
And possibly less so, since it is weighted by evidence rather than a heuristic
