
1. Discuss in detail Decision Making under uncertainty using valid real world examples.

[30]
Decision analysis refers to a set of methodologies, based on expected values, max-min and related criteria, that are used to select the best alternative when a decision maker is faced with uncertainty. Uncertainty can be defined as the lack of the exact knowledge that would enable us to reach a perfectly reliable conclusion (Stephanou and Sage, 1987).
Classical logic permits only exact reasoning. It assumes that perfect knowledge always exists and that the law of the excluded middle can always be applied:

IF A is true
THEN A is not false
AND
IF B is false
THEN B is not true
Unfortunately, most real-world problems where expert systems could be used do not provide us with such clear-cut knowledge. The available information often contains inexact, incomplete or even unmeasurable data.

There are several techniques used for decision making under uncertainty: non-probabilistic measures (Max-Max, Max-Min and Min-Max Regret), probabilistic measures (the maximum likelihood payoff approach and the maximum expected value approach, also known as the Bayes rule based decision approach), the Bayesian theorem, decision trees and utility based decision models.

An example using non-probabilistic measures: a business currently holds a product with a patent worth three thousand dollars. The CEO is proposing a new venture that will cost the business five thousand dollars. If the venture proves successful, the new product offered by the firm will return ten thousand dollars. If the venture does not succeed, the value of the business will decrease from three thousand to two thousand dollars due to technological obsolescence.
In this situation the company is faced with the challenge of not knowing what the future holds. In real-world situations, decision making by management is aided by computer systems such as Management Information Systems (MIS) and Decision Support Systems (DSS). One prerequisite for the design and application of a DSS is that the problem, or the solution process for the problem, can be modelled mathematically.

Using the non-probabilistic measures, I will illustrate how these can solve problems under uncertainty. Below is a table showing the payoff of each action under each condition.

                       Success                   Failure
New venture            10000 - 5000 = $5000      2000 - 5000 = ($3000)
Maintain old product   $2000                     $2000

The Max-Max approach is for risk takers. This methodology looks for the maximum of the maximums: for every action, find the best possible outcome and choose the action whose best outcome is largest. In this case the best outcome is the gain of $5000 from the new venture.

                       Success   Failure   Max
New venture            5000      (3000)    5000
Maintain old product   2000      2000      2000
Max                                        5000

The second approach is Max-Min (risk-averse). This is the best worst-case scenario: for every action, find the worst outcome and choose the maximum of these minimums. In this case the system will choose to maintain the old product; either way the business will realise $2000 instead of risking a loss of $3000.

                       Success   Failure   Min
New venture            5000      (3000)    (3000)
Maintain old product   2000      2000      2000
Max                                        2000
The third non-probabilistic approach is the Min-Max Regret approach (risk-neutral). Regret is the difference between the payoff of the best action for a given outcome and the payoff of the action actually taken. If the business implements the new venture and it succeeds, the regret is zero; if it fails, the regret is the $2000 the old product would have earned plus the $3000 loss, totalling $5000. If the business maintains the old product, the regret on failure is zero and the regret on success is $5000 - $2000 = $3000. The maximum regret is therefore $5000 for the venture and $3000 for maintaining, so the decision is to maintain the old product.

                       Success regret     Failure regret     Max regret
New venture            0                  5000 (3000+2000)   5000
Maintain old product   3000 (5000-2000)   0                  3000
Min                                                          3000
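The three criteria above can be sketched in a few lines of Python. This is a minimal illustration of the payoff table, not code from any library; the function and dictionary names are my own.

```python
# Payoff table from the example above (dollars; negatives are losses).
PAYOFFS = {
    "new venture": {"success": 5000, "failure": -3000},
    "maintain old product": {"success": 2000, "failure": 2000},
}

def maxmax(payoffs):
    """Risk-taking: pick the action with the best best-case payoff."""
    return max(payoffs, key=lambda a: max(payoffs[a].values()))

def maximin(payoffs):
    """Risk-averse: pick the action with the best worst-case payoff."""
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

def minimax_regret(payoffs):
    """Risk-neutral: pick the action with the smallest maximum regret."""
    states = next(iter(payoffs.values())).keys()
    best_in_state = {s: max(p[s] for p in payoffs.values()) for s in states}
    max_regret = {a: max(best_in_state[s] - payoffs[a][s] for s in states)
                  for a in payoffs}
    return min(max_regret, key=max_regret.get)

print(maxmax(PAYOFFS))          # new venture (best case 5000)
print(maximin(PAYOFFS))         # maintain old product (worst case 2000)
print(minimax_regret(PAYOFFS))  # maintain old product (max regret 3000)
```

Running this reproduces the three decisions derived from the tables.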

The most common and popular methodology used in expert systems is Bayes' theorem. P(B|A) is the probability of B given A. These are conditional probabilities: the probability that B occurs given that A has already occurred. An example is given below:

A: represents alcohol
B: represents an accident
The condition becomes: IF A is true THEN B is true. Suppose the probability of drinking alcohol is 0.2, the probability of an accident occurring is 0.1, and the probability of an accident after taking alcohol is 0.3. We may want to verify whether it really was alcohol that caused the accident, that is, to find P(A|B):

P(A) = 0.2, P(B) = 0.1, P(B|A) = 0.3; P(A|B) is required.


P (AПB) = P (A)* P (B|A)
P (AПB) = P (B)* P (A|B)
P (B)* P (A|B) = P (A)* P (B|A)
So to find if the accident was caused by drinking alcohol P (A|B), we divide both sides by the
P (B):
P (A|B) = P (A)* P (B|A) / P (B)
P (A|B) = 0.2 * (0.3) / 0.1 = 0.6
Suppose E1, E2, ..., En are mutually exclusive events whose union is the entire sample space. Then, for any other event H,

P(Ei|H) = P(Ei) * P(H|Ei) / P(H)

and, since the Ei are exhaustive, P(H) can be expanded by the law of total probability, resulting in:

P(Ei|H) = P(Ei) * P(H|Ei) / (P(E1)*P(H|E1) + ... + P(En)*P(H|En))
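The drink-driving calculation and the general form of the theorem can both be checked numerically. This is a small sketch using only the numbers given above; the `bayes` helper is illustrative, not a library function.

```python
# Numbers from the alcohol/accident example above.
p_a = 0.2          # P(A): probability of drinking alcohol
p_b = 0.1          # P(B): probability of an accident
p_b_given_a = 0.3  # P(B|A): probability of an accident given alcohol

# Bayes' theorem: P(A|B) = P(A) * P(B|A) / P(B)
p_a_given_b = p_a * p_b_given_a / p_b
print(round(p_a_given_b, 4))  # 0.6

def bayes(priors, likelihoods, i):
    """P(Ei|H) for mutually exclusive, exhaustive events E1..En.
    P(H) is expanded by the law of total probability."""
    p_h = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[i] * likelihoods[i] / p_h
```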

Whilst the Bayesian theorem uses conditional probability, basic probability may also be used in decision making by expert systems. The probability of an event is the proportion of cases in which the event occurs (Good, 1959). Probability can also be defined as a scientific measure of chance, expressed mathematically as a numerical index ranging from zero (an absolute impossibility) to unity (an absolute certainty). Most events have a probability index strictly between 0 and 1, which means that each event has at least two possible outcomes: a favourable outcome (success) and an unfavourable outcome (failure).

In the previous paragraphs I gave the example of a business that wishes to venture into a new product to fight off competition and survive in its industry. Given the prior knowledge that there is a 0.6 chance of success, the probability of failure is the complement, 0.4. In this case we have been given the probability prior to any evidence being considered.

Under the maximum likelihood payoff approach, the system considers only the most likely state (success, with probability 0.6) and selects the decision with the maximum payoff in that state; hence it chooses the new venture, whose return is $5000.

In the Maximum Expected Value approach, however, each payoff is weighted by its probability and summed, and the decision with the highest expected value is selected. Using the same example, the new venture has an expected payoff of 0.6(5000) + 0.4(-3000) = $1800, while maintaining the old product yields 0.6(2000) + 0.4(2000) = $2000, so under this criterion the system will choose to maintain the old product.

                       Success   Failure   Expected payoff
New venture            5000      (3000)    0.6(5000) + 0.4(-3000) = 1800
Maintain old product   2000      2000      0.6(2000) + 0.4(2000)  = 2000
Probability            0.6       0.4
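The two probabilistic criteria can be sketched on the same payoff table. Note that the probability-weighted sums work out to $1800 for the venture against $2000 for maintaining the old product; the function names below are my own, for illustration only.

```python
# Payoff table and state probabilities from the example above.
PAYOFFS = {
    "new venture": {"success": 5000, "failure": -3000},
    "maintain old product": {"success": 2000, "failure": 2000},
}
PROBS = {"success": 0.6, "failure": 0.4}

def max_likelihood_payoff(payoffs, probs):
    """Consider only the most likely state; pick the best payoff in it."""
    likely = max(probs, key=probs.get)
    return max(payoffs, key=lambda a: payoffs[a][likely])

def max_expected_value(payoffs, probs):
    """Weight every payoff by its probability; pick the highest sum."""
    ev = {a: sum(probs[s] * payoffs[a][s] for s in probs) for a in payoffs}
    return max(ev, key=ev.get), ev

print(max_likelihood_payoff(PAYOFFS, PROBS))   # new venture
choice, ev = max_expected_value(PAYOFFS, PROBS)
print(choice)                                  # maintain old product
```

The two criteria disagree here, which is exactly why the choice of criterion matters under uncertainty.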

Decision trees provide a useful paradigm for solving certain types of classification problems. Decision trees derive solutions by reducing the set of possible solutions with a series of decisions or questions that prune their search space. Problems suitable for decision trees are those whose answer is drawn from a predetermined set of possible answers.

Decision trees consist of nodes and branches connecting parent nodes to child nodes from top to bottom. The top node (the root) has no parent; every other node has exactly one parent. Nodes with no children are leaves. Leaf nodes represent all possible solutions that can be derived from the tree (answer nodes); all other nodes are decision nodes.

In general, a decision node may use any criterion to select which branch to follow, as long as it yields only one branch; the branch selected may correspond to a set or range of values. The procedure to traverse the tree to reach an answer is simple: begin at the root; if the current node is a decision node, answer its question (YES moves left, NO moves right); when an answer node becomes the current location, its value is the result derived from the tree.
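The traversal procedure above can be sketched in Python. The tree structure and the animal questions are illustrative stand-ins, assuming a binary tree stored as nested tuples with bare strings as answer nodes.

```python
# A binary decision tree as nested tuples: (question, yes_subtree, no_subtree).
# A bare string is an answer (leaf) node. The animals are illustrative.
TREE = ("Is it warm-blooded?",
        ("Does it purr?", "cat", "dog"),
        ("Does it have scales?", "fish", "insect"))

def traverse(node, answer):
    """Walk from the root: YES takes the left (yes) branch, NO the right,
    until an answer node is reached; its value is the result."""
    while isinstance(node, tuple):
        question, yes_branch, no_branch = node
        node = yes_branch if answer(question) else no_branch
    return node

# Canned answers stand in for interactive user input:
print(traverse(TREE, lambda q: q in {"Is it warm-blooded?", "Does it purr?"}))
# prints "cat"
```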

A binary decision tree may prove inefficient, since it does not allow for a set of responses or a series of cases. A modified decision tree allows for multiple branches, giving a series of possible decisions.

Figure 1: Binary decision tree

Sometimes it is useful to add new knowledge to a decision tree, although learning can result in the tree becoming unbalanced, and efficient decision trees are balanced.

Figure 2: Decision tree after learning "bird"

The first step in implementing the learning process for a decision tree in CLIPS is to decide how the knowledge should be represented. Since the tree must learn, it should be represented as facts rather than rules, because facts are easily added to and deleted from the tree. A set of CLIPS rules can then traverse the decision tree by implementing the Solve_Tree_and_Learn algorithm using a rule-based approach. Each node of the tree is represented by a fact, and from one run to the next, information about what has been learned is stored in a file. Finally, the rules for traversal of the tree must be determined.
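The CLIPS rules themselves are beyond the scope of this answer, but the learn step of the fact-based approach can be sketched in Python, with each node held as a mutable record standing in for a fact. The record layout and names here are my own assumptions, not the CLIPS representation itself.

```python
# Nodes are dicts standing in for facts. When the tree guesses wrong, the
# offending answer leaf is rewritten in place into a decision node that
# asks a new distinguishing question (this is the learning step).

def make_leaf(answer):
    return {"type": "answer", "value": answer}

def make_node(question, yes, no):
    return {"type": "decision", "question": question, "yes": yes, "no": no}

def learn(leaf, new_answer, new_question):
    """Turn a wrong answer leaf into a decision node: YES now leads to
    the new answer, NO to the old one."""
    old = make_leaf(leaf["value"])
    leaf.update(make_node(new_question, make_leaf(new_answer), old))

tree = make_node("Does it fly?", make_leaf("bird"), make_leaf("dog"))
# Suppose the NO branch wrongly answered "dog" when the animal was a cat:
learn(tree["no"], "cat", "Does it purr?")
print(tree["no"]["question"])  # prints "Does it purr?"
```

Because the nodes are data rather than code, new knowledge can be added between runs and, as the text notes, saved to a file and reloaded.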
References

Cheyer, A., Martin, D. and Moran, D. (1999). The Open Agent Architecture: A framework for building distributed software systems. Applied Artificial Intelligence, 13(1-2).
Davidsson, P. and Boman, M. (1998). Energy saving and value added services: Controlling intelligent buildings using a multi-agent systems approach. In DA/DSM Europe DistribuTECH. PennWell.
Ekenberg, L., Danielson, M. and Boman, M. (1996). From local assessments to global rationality. International Journal of Intelligent Cooperative Information Systems, 5(2&3), 315-331.
Negnevitsky, M. (2005). Artificial Intelligence: A Guide to Intelligent Systems, 2nd edition.
