
DECISION ANALYSIS PART 1

Topics Outline
Elements of Decision Analysis
Decision Criteria
Risk Profiles
Sensitivity Analysis
Decision Trees
Bayes' Rule
Value of Information
Elements of Decision Analysis
Many examples of decision making under uncertainty exist in the business world, including the following:
Bidding competitions
The trade-off is between bidding low to win the bid and bidding high to make a larger profit.
Introducing a new product into the market
If the product generates high customer demand, the company will make a large profit.
But if demand is low the company could fail to recoup its development costs.
Capacity expansions of manufacturing companies
If they don't expand and demand for their products is higher than expected, they will lose
revenue because of insufficient capacity. If they do expand and demand for their products is
lower than expected, they will be stuck with expensive underutilized capacity.
Although decision making under uncertainty occurs in a wide variety of contexts, all problems
have three common elements:
a set of decisions (or strategies) available to the decision maker
a set of possible outcomes and the probabilities of these outcomes
a value model that prescribes monetary values for the various decision-outcome combinations.
Once these elements are known, the decision maker can find an optimal decision, depending on
the optimality criterion chosen.
Decision Criteria
Example 1
A decision maker must choose among three decisions, labeled D1, D2, and D3.
Each of these decisions has three possible outcomes, labeled O1, O2, and O3.
A payoff table lists the payoff for each decision-outcome pair. Positive values correspond to
rewards (or gains) and negative values correspond to costs (or losses).
Here is the payoff table for our decision problem.
                  Outcome
Decision     O1     O2     O3
D1           10     10     10
D2          -10     20     30
D3          -30     30     80
This table shows that the decision maker can play it safe by choosing decision D1. This provides a sure
$10 payoff. With decision D2, rewards of $20 or $30 are possible, but a loss of $10 is also possible.
Decision D3 is even riskier; the possible loss is greater, and the maximum gain is also greater.
Which decision would you choose?
Would your choice change if the values in the payoff table were measured in thousands of dollars?
There are several possible criteria for making decisions. Some criteria involve the assignment of
probabilities to each event, but others do not. We will explore two criteria that do not use
probabilities and one that does.
Maximin Criterion
The maximin criterion finds the worst payoff in each row of the payoff table and chooses the
decision corresponding to the best of these.
This criterion is appropriate for a very conservative (or pessimistic) decision maker.
Such a criterion tends to avoid large losses, but it fails to even consider large rewards.
Example 1 (continued)
Determine the best decision according to the maximin criterion.
                  Outcome           Minimum
Decision     O1     O2     O3       profit
D1           10     10     10         10
D2          -10     20     30        -10
D3          -30     30     80        -30
Because the maximum of the minimum profits is $10, the decision maker chooses decision D1.
Maximax Criterion
The maximax criterion finds the best payoff in each row of the payoff table and chooses the
decision corresponding to the best of these.
This criterion is appropriate for a risk taker (or optimist).
This criterion looks tempting because it focuses on large gains, but its very serious downside is
that it ignores possible losses.
Example 1 (continued)
Determine the best decision according to the maximax criterion.
                  Outcome           Maximum
Decision     O1     O2     O3       profit
D1           10     10     10         10
D2          -10     20     30         30
D3          -30     30     80         80
Because the maximum of the maximum profits is $80, the decision maker chooses decision D3.
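As a quick illustration (not part of the original notes), both criteria can be sketched in a few lines of Python; the payoff lists restate the table above.

```python
# A minimal Python sketch of the maximin and maximax criteria for Example 1.
# Rows of the payoff table, as lists ordered O1, O2, O3.
payoffs = {
    "D1": [10, 10, 10],
    "D2": [-10, 20, 30],
    "D3": [-30, 30, 80],
}

# Maximin: the decision whose worst payoff is best (the pessimist's choice).
maximin = max(payoffs, key=lambda d: min(payoffs[d]))

# Maximax: the decision whose best payoff is best (the optimist's choice).
maximax = max(payoffs, key=lambda d: max(payoffs[d]))

print(maximin, maximax)  # D1 D3
```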
The maximin and maximax criteria make no reference to how likely the various outcomes are.
However, decision makers typically have at least some idea of these likelihoods, and they ought
to use this information in the decision-making process. After all, if outcome O1 in our problem is
extremely unlikely, then the pessimist who uses maximin is being overly conservative.
Similarly, if outcome O3 is quite unlikely, then the optimist who uses maximax is taking an
unnecessary risk. The following criterion uses probabilities and is generally regarded as the
preferred criterion in most decision problems.

Expected Monetary Value (EMV) Criterion

The expected monetary value (EMV) for a decision is the expected payoff from this decision.
That is, EMV for any decision is a weighted average of the possible payoffs for this decision,
weighted by the probabilities of the outcomes.
EMV(D) = Σ_{j=1}^{N} Xj · P(Oj)

where
Xj = the payoff for decision D when outcome Oj occurs
P(Oj) = the probability of outcome Oj if decision D is made
N = the number of possible outcomes

Where do the probabilities come from?


The probabilities assigned are based on information available from historical data, expert
opinions, government forecasts, or knowledge about the probability distribution that the event
may follow.

Using the expected monetary value (EMV) criterion, you do the following:
1. Calculate the EMV for each decision.
2. Choose the decision with the largest EMV.

Example 1 (continued)
The decision maker assesses the probabilities of the three outcomes as
0.3, 0.5, 0.2 if decision D2 is made,
0.5, 0.2, 0.3 if decision D3 is made.
Determine the best decision according to the EMV criterion.

The EMV for each decision is:


EMV(D1) = 10 (a sure thing)
EMV(D2) = -10(0.3) + 20(0.5) + 30(0.2) = 13
EMV(D3) = -30(0.5) + 30(0.2) + 80(0.3) = 15

The optimal decision is D3 because it has the largest EMV.
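The same calculation can be sketched in Python (an illustration, not part of the notes). Because D1 pays a sure $10, any probabilities give it an EMV of 10; the D2 probabilities are reused for it here.

```python
# A minimal sketch of the EMV criterion for Example 1.
payoffs = {"D1": [10, 10, 10], "D2": [-10, 20, 30], "D3": [-30, 30, 80]}
probs   = {"D1": [0.3, 0.5, 0.2],   # D1 is a sure 10, so any probabilities work
           "D2": [0.3, 0.5, 0.2],
           "D3": [0.5, 0.2, 0.3]}

def emv(decision):
    # Weighted average of payoffs: sum of Xj * P(Oj).
    return sum(x * p for x, p in zip(payoffs[decision], probs[decision]))

emvs = {d: emv(d) for d in payoffs}
best = max(emvs, key=emvs.get)
print(best, emvs[best])  # D3 15.0
```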

Interpretation of EMV
The EMV of 15 for decision D3 does not mean that you expect to gain $15 from this decision.
It is true only on average. That is, if this situation can occur many times and decision D3 is used
each time, then on average, you will make a gain of about $15. About 50% of the time you will
lose $30, about 20% of the time you will gain $30, and about 30% of the time you will gain $80.
These average to $15. For this reason, using the EMV criterion is sometimes referred to as
playing the averages.

Risk Profiles

The risk profile for a decision is a spike chart that represents the probability distribution of
monetary outcomes for this decision. The spikes are located at the possible monetary values,
and the heights of the spikes correspond to the probabilities.

The EMV for any decision is a summary measure of the complete risk profile; it is the mean of
the corresponding probability distribution. Therefore, when you use the EMV criterion for
making decisions, you are not using all of the information in the risk profiles; you are comparing
only their means.

Risk profiles can be useful as extra information for making decisions. For example, a manager
who sees too much risk in the risk profile of the EMV-maximizing decision might choose to
override this decision and instead choose a somewhat less risky alternative.

Example 1 (continued)
Construct the risk profile for the EMV-maximizing decision D3.
Here is the probability distribution for decision D3.

Monetary value   -30    30    80
Probability      0.5   0.2   0.3

Here is the risk profile.

It shows that a loss of $30 has probability 0.5, a gain of $30 has probability 0.2, and a gain of
$80 has probability 0.3.

The risk profile for decision D2 is similar, except that its spikes are above the values -10, 20, and 30.

The risk profile for decision D1 is a single spike of height 1 over the value 10.
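As a sketch (not from the notes), a risk profile is just a probability distribution over monetary values, and its mean reproduces the EMV:

```python
# Risk profile for decision D3: probabilities over its monetary outcomes.
risk_profile = {-30: 0.5, 30: 0.2, 80: 0.3}

# The spike heights must sum to 1, and the mean of the distribution is the EMV.
assert abs(sum(risk_profile.values()) - 1.0) < 1e-9
mean = sum(value * prob for value, prob in risk_profile.items())
print(mean)  # 15.0, matching EMV(D3)
```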

Sensitivity Analysis

Some of the quantities in a decision analysis, particularly the probabilities, are often intelligent
guesses at best. Therefore, it is important, especially in real-world business problems, to assess
the sensitivity of the conclusions. Here we systematically vary inputs to the problem to see how
the outputs change.

Usually the most important information from a sensitivity analysis is whether the optimal
decision continues to be optimal as one or more inputs change. If the optimal decision does not
change, you can be more confident in it. But if small changes in the inputs result in large
differences in the outputs, you should take great care and not rely too heavily on your model.

Example 1 (continued)
Recall the probability distributions and the EMVs computed above for our example.

How sensitive is the optimal decision D3 to small changes in the inputs?
For example, what if the probabilities for D3 change only slightly to 0.6, 0.2, and 0.2?

EMV(D3) = -30(0.6) + 30(0.2) + 80(0.2) = 4

The EMV for D3 changes to 4. Now D3 is the worst decision and D2 is the best.
So it appears that the optimal decision is quite sensitive to the assessed probabilities.

Is D3 still the best decision if the probabilities remain the same but the last payoff for D2
changes from 30 to 45?

EMV(D2) = -10(0.3) + 20(0.5) + 45(0.2) = 16

The EMV for D2 changes to 16, and D2 becomes the best decision.
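A one-at-a-time sensitivity sweep like the two above can be sketched in Python (illustration only):

```python
# Sensitivity of EMV(D3) to its assessed probabilities.
payoffs_d3 = [-30, 30, 80]

def emv_d3(p1, p2, p3):
    # Weighted average of D3's payoffs under probabilities (p1, p2, p3).
    return sum(x * p for x, p in zip(payoffs_d3, (p1, p2, p3)))

print(emv_d3(0.5, 0.2, 0.3))  # 15.0 (original assessment; D3 is best)
print(emv_d3(0.6, 0.2, 0.2))  # 4.0  (D3 drops below both D1 and D2)
```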

Decision Trees
Decision trees represent the events (decisions and outcomes) of a problem graphically.
They are composed of nodes and branches and show the sequence of events, as well as
probabilities and monetary values. There are three types of nodes:
a decision node (a square) represents a time when the decision maker makes a decision;
a probability node (a circle) represents a time when the result of an uncertain outcome becomes known;
an end node (a triangle) indicates that the problem is completed: all decisions have been
made, all uncertainty has been resolved, and all payoffs and costs have been incurred.
The nodes represent points in time. Time proceeds from left to right. This means that any
branches leading into a node (from the left) have already occurred. Any branches leading out of a
node (to the right) have not yet occurred. The optimal decision branch(es) are usually marked in
some way, for example with a notch.
Probabilities are listed on probability branches.
These probabilities are conditional on the events that have already been observed (those to the left).
Note that the probabilities on branches leading out of any probability node must sum to 1.
Monetary values are shown to the right of the end nodes.
(Some monetary values are also placed under the branches where they occur in time.)
EMVs are shown above the various nodes.
They are calculated using the so-called folding-back procedure:
Starting from the right of the decision tree and working back to the left:
1. At each probability node, calculate an EMV (a sum of products of monetary values and probabilities).
2. At each decision node, take a maximum of EMVs to identify the optimal decision.
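The folding-back procedure can be sketched recursively in Python. The nested-tuple tree encoding below is a hypothetical representation chosen for the sketch, not a standard one:

```python
# A sketch of folding back a decision tree. Nodes are nested tuples:
# ("decision", {name: child}) or ("chance", [(prob, child)]);
# a bare number is an end node's payoff.
def fold_back(node):
    if isinstance(node, (int, float)):        # end node: the payoff itself
        return node
    kind, children = node
    if kind == "chance":                      # EMV: sum of prob * value
        return sum(p * fold_back(child) for p, child in children)
    if kind == "decision":                    # take the maximum EMV
        return max(fold_back(child) for child in children.values())

tree = ("decision", {
    "D1": 10,
    "D2": ("chance", [(0.3, -10), (0.5, 20), (0.2, 30)]),
    "D3": ("chance", [(0.5, -30), (0.2, 30), (0.3, 80)]),
})
print(fold_back(tree))  # 15.0, the EMV of the optimal decision D3
```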
Example 1 (continued)
The decision node comes first (to the left) because
the decision maker must make a decision before
observing the uncertain outcome. The probability
nodes then follow the decision branches, and the
probabilities appear above their branches. (There is
no need for a probability node after the D1 branch
because its monetary value is a sure 10.)
The ultimate payoffs appear next to the end nodes,
to the right of the probability branches.
The EMVs above the probability nodes are for the
various decisions. For example, using the folding-
back procedure the EMV for the D2 branch is
EMV(D2) = -10(.3) + 20(.5) + 30(.2) = 13
The maximum of the EMVs is for the D3 branch
written above the decision node. Because it
corresponds to D3, we put a notch on the D3
branch to indicate that this decision is optimal.

Bayes' Rule

The purpose of Bayes' rule is to revise probabilities as new information becomes available.

                 New information B
Prior            Application of Bayes'      Posterior
probabilities    rule with likelihoods      probabilities

P(A1)            P(B|A1)                    P(A1|B)
P(A2)            P(B|A2)                    P(A2|B)
 ...              ...                        ...
P(An)            P(B|An)                    P(An|B)

Often we begin a decision analysis process by estimating initial or prior probabilities
P(A1), P(A2), . . . , P(An) for specific events of interest A1, A2, . . . , An. Then, from sources such
as a sample, a special report, or a product test, we obtain additional information B about the events
together with the conditional probabilities P(B|A1), P(B|A2), . . . , P(B|An) called likelihoods.

Given this new information, we update the prior probability values by calculating revised
probabilities P(A1|B), P(A2|B), . . . , P(An|B), referred to as posterior probabilities.
Bayes' rule provides a means for making these probability calculations.

Example 2

A manufacturing firm receives shipments of parts from two different suppliers.


Let A1 and A2 denote the events that a part is from supplier 1 and supplier 2, respectively.
Currently, 65% of the parts purchased by the firm are from supplier 1 and the remaining 35% are
from supplier 2. Historical data suggest that the two suppliers provide 2% and 5% bad parts, respectively.

(a) What are the prior probabilities of events A1 and A2?

If a part is selected at random, we would assign the prior probabilities

P(A1) = 0.65 and P(A2) = 0.35.

(b) Let B denote the event that a part is bad. What are the likelihoods P(B|A1) and P(B|A2)?

The likelihoods that a part is bad are given by the conditional probabilities

P(B|A1) = 0.02 and P(B|A2) = 0.05

Let A1, A2, . . . , An be mutually exclusive events whose union is the entire sample space, with
P(A1) > 0, P(A2) > 0, . . . , P(An) > 0.

Law of Total Probability


P(B) = P(B|A1)P(A1) + P(B|A2)P(A2) + . . . + P(B|An)P(An)
That is, the probability of event B is the sum of likelihoods times priors.
For B to occur, it must occur along with one of the As. The Law of Total Probability simply
decomposes the probability of B into all of these possibilities.

Bayes' Rule

P(Ai|B) = P(B|Ai)P(Ai) / P(B)

In words, the posterior probability of event Ai is the likelihood times the prior, divided by the total probability P(B).

Example 2 (continued)
(c) What is the probability that a randomly selected part is bad?

Using the Law of Total Probability,

P(B) = P(B|A1)P(A1) + P(B|A2)P(A2)

     = 0.02(0.65) + 0.05(0.35) = 0.013 + 0.0175 = 0.0305

There is about a 3% chance that a randomly selected part is bad.

(d) The parts from the two suppliers are used in the firm's manufacturing process and a machine
breaks down because it attempts to process a bad part.
What are the probabilities that the bad part came from supplier 1 and from supplier 2, respectively?

We are looking for the posterior probabilities P(A1|B) and P(A2|B).
Bayes' rule can be used to find them.

P(A1|B) = P(B|A1)P(A1) / P(B) = 0.02(0.65) / 0.0305 = 0.4262

P(A2|B) = P(B|A2)P(A2) / P(B) = 0.05(0.35) / 0.0305 = 0.5738

Note that we started with a probability of 0.65 that a part selected at random was from
supplier 1. However, given the information that the part is bad, the probability that the part is
from supplier 1 drops to 0.4262. In fact, if the part is bad, there is a better than 50/50 chance
that it came from supplier 2. That is, P(A2|B) = 0.5738.
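The whole of Example 2 can be sketched in a few lines of Python (an illustration, not part of the notes):

```python
# Priors, likelihoods of a bad part, total probability, and posteriors
# for the two-supplier example.
priors      = {"A1": 0.65, "A2": 0.35}   # supplier 1, supplier 2
likelihoods = {"A1": 0.02, "A2": 0.05}   # P(bad part | supplier)

# Law of Total Probability: P(B) = sum of likelihood * prior.
p_bad = sum(likelihoods[a] * priors[a] for a in priors)

# Bayes' rule: posterior = likelihood * prior / P(B).
posteriors = {a: likelihoods[a] * priors[a] / p_bad for a in priors}

print(round(p_bad, 4))                                  # 0.0305
print({a: round(p, 4) for a, p in posteriors.items()})  # {'A1': 0.4262, 'A2': 0.5738}
```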

Value of Information

Many decision problems are solved in stages. The first-stage decision is whether to obtain some
information that could be useful. If you decide not to obtain the information, you make a
decision right away, based on prior probabilities. If you do decide to obtain the information, then
you first observe the information and then make the second-stage decision, based on posterior
probabilities. However, information usually comes at a price.

How much does the information cost? Is the information worth its price?

Example 3
Pittsburgh Development Corporation (PDC)

Pittsburgh Development Corporation (PDC) purchased land that will be the site of a new luxury
condominium complex. PDC commissioned preliminary architectural drawings for three
different-sized projects: small, medium, and large with 30, 60, and 90 condominiums, respectively.

PDC is optimistic about the potential for the luxury condominium complex and has an initial
subjective probability assessment of 0.8 that demand will be strong and a corresponding
probability of 0.2 that demand will be weak. The payoff table with profits expressed in millions
of dollars is shown below.

                              Outcome
Decision               Strong demand (O1)   Weak demand (O2)
Small complex (D1)              8                   7
Medium complex (D2)            14                   5
Large complex (D3)             20                  -9

(a) Select the size of the condominium complex that will lead to the largest profit.

The expected monetary value for each of the three decisions follows:

EMV(D1) = 0.8(8) + 0.2(7) = 7.8


EMV(D2) = 0.8(14) + 0.2(5) = 12.2
EMV(D3) = 0.8(20) + 0.2(-9) = 14.2

Thus, using the EMV criterion, we find that the large condominium complex,
with an expected monetary value of $14.2 million, is the recommended decision.

Perfect information is information that will indicate with certainty which ultimate outcome will occur.
The Expected Value of Perfect Information (EVPI) is the maximum amount you might be
willing to pay for perfect information about the outcome.
EVPI = EMV with (free) perfect information - EMV without information
That is, the amount you should be willing to spend for information is the expected increase in
EMV you can obtain from having the information. If the actual price of the information is less
than or equal to this amount, you should purchase it; otherwise, the information is not worth its
price. In addition, information that never affects your decision is worthless, and it should not be
purchased at any price.
Example 3 (continued)
(b) Suppose that PDC could determine with certainty, prior to making a decision, which of the
two outcomes strong demand (O1) or weak demand (O2) is going to occur.
What is PDC's optimal strategy if this perfect information becomes available?

We can state PDC's optimal decision strategy if the perfect information becomes available as follows:
If O1 occurs, select D3 and receive a payoff of $20 million.
If O2 occurs, select D1 and receive a payoff of $7 million.

(c) What is the expected monetary value for this decision strategy?
There is a 0.8 probability that the perfect information will indicate outcome O1 and the
resulting decision alternative D3 will provide a $20 million profit. Similarly, with a 0.2
probability for outcome O2, the optimal decision D1 will provide a $7 million profit.
Thus, the expected monetary value of the decision strategy based on perfect information is
EMV with perfect information = 0.8(20) + 0.2(7) = 17.4
(d) What is the expected value of the perfect information (EVPI)?
EVPI = EMV with perfect information - EMV without information
In (a) we showed that the recommended decision is D3 with an EMV of $14.2 million.
Because this decision recommendation and EMV computation were made without the benefit
of perfect information, $14.2 million is the expected monetary value without information.

In (c) we showed that the EMV with perfect information is $17.4 million.
Therefore, the expected value of the perfect information is
EVPI = $17.4 million - $14.2 million = $3.2 million
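The EVPI calculation for Example 3 can be sketched in Python (illustration only):

```python
# EVPI for the PDC example: compare the best EMV chosen in advance
# with the expected payoff when each outcome's best decision is known.
probs   = {"O1": 0.8, "O2": 0.2}
payoffs = {"D1": {"O1": 8, "O2": 7},
           "D2": {"O1": 14, "O2": 5},
           "D3": {"O1": 20, "O2": -9}}

# EMV without information: pick one decision before the outcome is known.
emv_without = max(sum(probs[o] * row[o] for o in probs)
                  for row in payoffs.values())

# EMV with perfect information: pick the best decision for each outcome.
emv_with = sum(probs[o] * max(row[o] for row in payoffs.values())
               for o in probs)

print(round(emv_without, 1), round(emv_with, 1),
      round(emv_with - emv_without, 1))  # 14.2 17.4 3.2
```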
(e) Interpret the EVPI.

The EVPI of $3.2 million represents the additional expected monetary value that could be
obtained if perfect information were available. Generally speaking, a market research study
will not provide perfect information; however, if the market research study is a good one,
the information gathered might be worth a sizable portion of the $3.2 million. Given the
EVPI of $3.2 million, PDC might seriously consider a market survey as a way to obtain more
information about the possible outcomes.

Admittedly, perfect information is almost never available at any price, but finding its value is
still useful because it provides an upper bound on the value of any information.
In other words, the value of any information can never be greater than the value of perfect
information that would eliminate all uncertainty.

Most often, additional (imperfect) information is obtained through experiments designed to
provide sample information. Raw material sampling, product testing, and market research
studies are examples of experiments (or studies) that may enable management to revise or update
the probabilities of the possible outcomes. The Expected Value of Sample Information (EVSI)
is the most you would be willing to pay for the sample information.

EVSI = EMV with (free) sample information - EMV without information

Example 3 (continued)

(f) The PDC management is considering a six-month market research study with one of the
following two results:

1. Favorable report: A significant number of the individuals contacted express interest in
purchasing a PDC condominium.
2. Unfavorable report: Very few of the individuals contacted express interest in purchasing
a PDC condominium.

Suppose that the decision analysis (after constructing the corresponding decision tree and
finding the respective EMVs) shows that the optimal decision for PDC is to conduct the
market research study and has an EMV of $15.93 million.
What is the expected value of the sample information (EVSI)?

The market research study is the sample information used to determine the optimal decision
strategy. The EMV associated with the market research study is $15.93 million.

In (a), we showed that the best expected monetary value if the market research study is not
undertaken is $14.20 million. Thus, we can conclude that

EVSI = EMV with sample information - EMV without information

     = $15.93 million - $14.20 million = $1.73 million

In other words, conducting the market research study adds $1.73 million to the PDC expected
monetary value.

