
Dr. Tchantchane

Decision Analysis with Additional Information
Bayesian Analysis

Decision Analysis with Additional Information
Bayesian Analysis

P(A | B) = P(A and B) / P(B) = P(B | A) P(A) / P(B)

P(B1) = P(B1 | A1) P(A1) + P(B1 | A2) P(A2) + ... + P(B1 | An) P(An)

P(A1 | B1) = P(B1 | A1) P(A1) / P(B1)
           = P(B1 | A1) P(A1) / [P(B1 | A1) P(A1) + P(B1 | A2) P(A2) + ... + P(B1 | An) P(An)]

P(A1 | B1) + P(A2 | B1) + ... + P(An | B1) = 1

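To make the formulas concrete, here is a minimal Python sketch (not part of the original slides). The helper name `posterior` is illustrative; it takes the priors P(A_i) and the conditionals P(B | A_i) and returns the revised probabilities.

```python
def posterior(priors, likelihoods):
    """Apply Bayes' rule: return P(A_i | B) for every state A_i.

    priors      -- prior probabilities P(A_i)
    likelihoods -- conditional probabilities P(B | A_i)
    """
    # Marginal probability of the evidence: P(B) = sum of P(B | A_i) P(A_i)
    p_b = sum(l * p for l, p in zip(likelihoods, priors))
    # Bayes' rule for each state
    return [l * p / p_b for l, p in zip(likelihoods, priors)]

# The revised probabilities always sum to 1, as the last line of the slide states.
print(posterior([0.60, 0.40], [0.80, 0.10]))   # [0.923..., 0.076...]
```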

Decision Analysis with Additional Information
Bayesian Analysis
■ Bayesian analysis uses additional information to alter the marginal probability of the occurrence of an event.
■ In the real estate investment example, using the expected value criterion, the best decision was to purchase the office building, with an expected value of $44,000 and an EVPI of $28,000.

Decision Analysis with Additional Information
Bayesian Analysis (2 of 3)
■ A conditional probability is the probability that an event will occur given that another event has already occurred.
■ An economic analyst provides additional information for the real estate investment decision, forming conditional probabilities:
g = good economic conditions
p = poor economic conditions
P = positive economic report
N = negative economic report

Additional Information: Bayesian Analysis
P(P | g) = .80   P(N | g) = .20
P(P | p) = .10   P(N | p) = .90
■ A posterior probability is the altered marginal probability of an event based on additional information.
■ Prior probabilities for good or poor economic conditions in the real estate decision: P(g) = .60, P(p) = .40
■ Posterior probabilities by Bayes' rule (or use a probability tree):
P(g | P) = P(g and P) / P(P) = P(P | g) P(g) / [P(P | g) P(g) + P(P | p) P(p)] = (.80)(.60) / [(.80)(.60) + (.10)(.40)] = .923
■ Posterior (revised) probabilities for the decision:
P(p | P) = .077   P(g | N) = .250   P(p | N) = .750
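A quick numerical check of the four revised probabilities, using only the priors and report reliabilities stated above (a sketch; the variable names are illustrative):

```python
# Priors and conditional probabilities from the slide
p_g, p_p = 0.60, 0.40              # P(g), P(p)
p_P_g, p_P_p = 0.80, 0.10          # P(P | g), P(P | p)
p_N_g, p_N_p = 0.20, 0.90          # P(N | g), P(N | p)

# Marginal probabilities of each report outcome
p_P = p_P_g * p_g + p_P_p * p_p    # 0.52
p_N = p_N_g * p_g + p_N_p * p_p    # 0.48

# Posterior (revised) probabilities by Bayes' rule
print(round(p_P_g * p_g / p_P, 3))   # P(g | P) = 0.923
print(round(p_P_p * p_p / p_P, 3))   # P(p | P) = 0.077
print(round(p_N_g * p_g / p_N, 3))   # P(g | N) = 0.25
print(round(p_N_p * p_p / p_N, 3))   # P(p | N) = 0.75
```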

Decision Analysis with Additional Information
Decision Trees with Posterior Probabilities

Decision Analysis with Additional Information
Decision Tree with Posterior Probabilities
■ The decision tree with posterior probabilities differs from earlier versions in that two new branches at the beginning of the tree represent the report outcomes.

Decision Analysis with Additional Information
Decision Trees with Posterior Probabilities (4 of 4)
EV(apartment building) = $50,000(.923) + 30,000(.077) = $48,460
EV(strategy) = $89,220(.52) + 35,000(.48) = $63,194
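The same node calculations in Python. The payoffs and posterior probabilities come from the slides above; the branch values of $89,220 and $35,000 and the report probabilities P(P) = .52 and P(N) = .48 are read off the decision tree, which appears only as a figure in the original deck.

```python
# Posterior probabilities after a positive report, and the report probabilities
p_g_P, p_p_P = 0.923, 0.077
p_P, p_N = 0.52, 0.48

# Expected value of the apartment building given a positive report
ev_apartment = 50_000 * p_g_P + 30_000 * p_p_P
print(round(ev_apartment))                 # 48460

# Expected value of the overall strategy: best branch value after each report
ev_strategy = 89_220 * p_P + 35_000 * p_N
print(round(ev_strategy))                  # 63194
```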

Decision Analysis with Additional Information
Computing Posterior Probabilities with Tables
(Table: computation of posterior probabilities)

Decision Analysis with Additional Information
Expected Value of Sample Information
■ The expected value of sample information (EVSI) is the difference between the expected value with and without the information. For the example problem:
EVSI = $63,194 - 44,000 = $19,194
■ The efficiency of sample information is the ratio of the expected value of sample information to the expected value of perfect information:
efficiency = EVSI / EVPI = $19,194 / 28,000 = .68
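The same arithmetic as a short check, using the figures quoted on this slide ($44,000 is the expected value of the best decision without the report, $28,000 is the EVPI from the earlier analysis):

```python
ev_with_info    = 63_194   # expected value of the strategy using the report
ev_without_info = 44_000   # expected value of the best decision without it
evpi            = 28_000   # expected value of perfect information

evsi = ev_with_info - ev_without_info
print(evsi)                # 19194
print(evsi / evpi)         # 0.6855..., i.e. about .68 as on the slide
```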

Example 2

Example 2 (to solve)
A doctor is considering construction of a clinic. If the medical demand is high (favorable market), the profit is 100 K$; otherwise there is a loss of 40 K$. Of course they may not construct, in which case there is no cost. Construct the decision tree and determine the course of action.

Market research (Example 2 continued)
A market research firm offers to perform a study at a fee of 5 K$, with
P(FavMark | FavResea) = 0.82
P(FavMark | UnfResea) = 0.11
a) Develop a new decision tree.
b) Determine the course of action.
c) Compute the EVSI. How much might the doctor be willing to pay for the market research?
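A sketch of how the calculation for parts (a) to (c) can be organized in Python. Only the payoffs, the 5 K$ fee, and the two conditional probabilities come from the example; the prior probability of a favorable market and the probability of a favorable research report are not given on these slides, so the values marked ASSUMED below are placeholders for illustration only.

```python
# Payoffs in K$ from the example; probabilities marked ASSUMED are placeholders.
PROFIT_HIGH, LOSS_LOW, FEE = 100, -40, 5
P_FAV_MARKET   = 0.50        # ASSUMED prior P(favorable market), not given on the slide
P_FAV_RESEARCH = 0.55        # ASSUMED P(favorable research report), not given on the slide
P_FAV_IF_FAV_RES = 0.82      # P(FavMark | FavResea), from the slide
P_FAV_IF_UNF_RES = 0.11      # P(FavMark | UnfResea), from the slide

def best_ev(p_fav):
    """EV of the better action (construct vs. do nothing) for a given P(favorable market)."""
    ev_construct = p_fav * PROFIT_HIGH + (1 - p_fav) * LOSS_LOW
    return max(ev_construct, 0)   # not constructing costs nothing

ev_without_study = best_ev(P_FAV_MARKET)
ev_with_study = (P_FAV_RESEARCH * best_ev(P_FAV_IF_FAV_RES)
                 + (1 - P_FAV_RESEARCH) * best_ev(P_FAV_IF_UNF_RES))

evsi = ev_with_study - ev_without_study
print(evsi, "K$: the study is worth buying only if EVSI exceeds the", FEE, "K$ fee")
```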

Solution

Example 3

Decision Analysis Example Problem Solution (1 of 9)

Decision Analysis: Questions
Given the following:
a. Determine the best decision without probabilities, using the 5 criteria of the chapter.
b. Determine the best decision with probabilities, assuming a .70 probability of good conditions and .30 of poor conditions. Use the expected value and expected opportunity loss criteria.
c. Compute the expected value of perfect information.
d. Develop a decision tree with expected values at the nodes.
e. Determine posterior probabilities given P(P | g) = .70, P(N | g) = .30, P(P | not good) = .20, P(N | not good) = .80.
f. Perform decision tree analysis using the posterior probabilities.

Decision Analysis Example Problem Solution (3 of 9)
Step 1 (part a): Determine decisions without probabilities.
1. Maximax Decision: Maintain status quo
Decisions      Maximum Payoffs
Expand         $800,000
Status quo     1,300,000 (maximum)
Sell           320,000
2. Maximin Decision: Expand
Decisions      Minimum Payoffs
Expand         $500,000 (maximum)
Status quo     -150,000
Sell           320,000
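These two criteria can be reproduced with a short Python sketch. The payoff table is the one used throughout this example (payoffs under good and poor conditions); the dictionary layout is simply a convenient representation, not part of the original slides.

```python
# Payoffs under (good conditions, poor conditions)
payoffs = {
    "Expand":     (800_000,    500_000),
    "Status quo": (1_300_000, -150_000),
    "Sell":       (320_000,    320_000),
}

# Maximax: choose the decision whose best payoff is largest
print(max(payoffs, key=lambda d: max(payoffs[d])))   # Status quo

# Maximin: choose the decision whose worst payoff is largest
print(max(payoffs, key=lambda d: min(payoffs[d])))   # Expand
```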

Decision Analysis Example Problem Solution (4 of 9)
3. Minimax Regret Decision: Expand
Decisions      Maximum Regrets
Expand         $500,000 (minimum)
Status quo     650,000
Sell           980,000
4. Hurwicz (α = .3) Decision: Expand
Expand:     $800,000(.3) + 500,000(.7) = $590,000
Status quo: $1,300,000(.3) - 150,000(.7) = $285,000
Sell:       $320,000(.3) + 320,000(.7) = $320,000
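A continuation of the same sketch for criteria 3 and 4; the payoff table is repeated so the snippet runs on its own.

```python
payoffs = {
    "Expand":     (800_000,    500_000),
    "Status quo": (1_300_000, -150_000),
    "Sell":       (320_000,    320_000),
}

# Regret = best payoff in each state minus the decision's payoff in that state
best = [max(col) for col in zip(*payoffs.values())]
regrets = {d: [b - p for b, p in zip(best, row)] for d, row in payoffs.items()}

# Minimax regret: smallest maximum regret
print(min(regrets, key=lambda d: max(regrets[d])))   # Expand

# Hurwicz criterion: alpha * best payoff + (1 - alpha) * worst payoff, alpha = .3
alpha = 0.3
hurwicz = {d: alpha * max(row) + (1 - alpha) * min(row) for d, row in payoffs.items()}
print(max(hurwicz, key=hurwicz.get), hurwicz)        # Expand: 590,000 / 285,000 / 320,000
```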

Decision Analysis Example Problem Solution (5 of 9)
5. Equal Likelihood Decision: Expand
Expand:     $800,000(.5) + 500,000(.5) = $650,000
Status quo: $1,300,000(.5) - 150,000(.5) = $575,000
Sell:       $320,000(.5) + 320,000(.5) = $320,000
Step 2 (part b): Determine decisions with EV and EOL.
6. Expected Value Decision: Maintain status quo
Expand:     $800,000(.7) + 500,000(.3) = $710,000
Status quo: $1,300,000(.7) - 150,000(.3) = $865,000
Sell:       $320,000(.7) + 320,000(.3) = $320,000
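A self-contained check of the equal likelihood and expected value figures above (again, the table layout is just illustrative):

```python
payoffs = {
    "Expand":     (800_000,    500_000),
    "Status quo": (1_300_000, -150_000),
    "Sell":       (320_000,    320_000),
}

def expected_values(probs):
    """Expected value of each decision for given probabilities of (good, poor) conditions."""
    return {d: sum(p * x for p, x in zip(probs, row)) for d, row in payoffs.items()}

# Equal likelihood: weight both states of nature .5
print(expected_values((0.5, 0.5)))   # Expand 650,000; Status quo 575,000; Sell 320,000

# Expected value with P(good) = .7 and P(poor) = .3
ev = expected_values((0.7, 0.3))
print(max(ev, key=ev.get), ev)       # Status quo 865,000; Expand 710,000; Sell 320,000
```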

Decision Analysis Example Problem Solution (6 of 9)
Expected Opportunity Loss Decision: Maintain status quo
Expand:     $500,000(.7) + 0(.3) = $350,000
Status quo: 0(.7) + 650,000(.3) = $195,000
Sell:       $980,000(.7) + 180,000(.3) = $740,000
Step 3 (part c): Compute EVPI.
EV given perfect information = 1,300,000(.7) + 500,000(.3) = $1,060,000
EV without perfect information = $865,000 (maintain status quo)
EVPI = $1,060,000 - 865,000 = $195,000
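A short check of the EOL and EVPI figures; as expected, EVPI comes out equal to the minimum expected opportunity loss.

```python
payoffs = {
    "Expand":     (800_000,    500_000),
    "Status quo": (1_300_000, -150_000),
    "Sell":       (320_000,    320_000),
}
probs = (0.7, 0.3)

# Expected opportunity loss: expected regret of each decision
best = [max(col) for col in zip(*payoffs.values())]
eol = {d: sum(p * (b - x) for p, b, x in zip(probs, best, row))
       for d, row in payoffs.items()}
print(min(eol, key=eol.get), eol)    # Status quo 195,000; Expand 350,000; Sell 740,000

# EVPI = EV with perfect information - EV of the best decision without it
ev_perfect = sum(p * b for p, b in zip(probs, best))                               # 1,060,000
ev_best = max(sum(p * x for p, x in zip(probs, row)) for row in payoffs.values())  # 865,000
print(ev_perfect - ev_best)          # 195,000
```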

Decision Analysis Example Problem Solution (7 of 9)
Step 4 (part d): Develop a decision tree.

Decision Analysis Example Problem Solution (8 of 9)
Step 5 (part e): Determine posterior probabilities.
P(g | P) = P(P | g) P(g) / [P(P | g) P(g) + P(P | p) P(p)] = (.70)(.70) / [(.70)(.70) + (.20)(.30)] = .891
P(p | P) = 1 - .891 = .109
P(g | N) = P(N | g) P(g) / [P(N | g) P(g) + P(N | p) P(p)] = (.30)(.70) / [(.30)(.70) + (.80)(.30)] = .467
P(p | N) = 1 - .467 = .533
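The Step 5 posteriors can be verified the same way as in the real estate example (a sketch with illustrative variable names):

```python
# Priors and report reliabilities from part (e) of the question
p_g, p_p = 0.70, 0.30
p_P_g, p_P_p = 0.70, 0.20      # P(P | g), P(P | not good)
p_N_g, p_N_p = 0.30, 0.80      # P(N | g), P(N | not good)

p_P = p_P_g * p_g + p_P_p * p_p          # 0.55
p_N = p_N_g * p_g + p_N_p * p_p          # 0.45

print(round(p_P_g * p_g / p_P, 3))       # P(g | P) = 0.891
print(round(1 - p_P_g * p_g / p_P, 3))   # P(p | P) = 0.109
print(round(p_N_g * p_g / p_N, 3))       # P(g | N) = 0.467
print(round(1 - p_N_g * p_g / p_N, 3))   # P(p | N) = 0.533
```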

Decision Analysis Example Problem Solution (9 of 9)
Step 6 (part f): Decision tree analysis.