# AI’99 Tutorial 4: Bayesian AI


Bayesian AI

AI’99, Sydney, 6 December 1999

Ann E. Nicholson and Kevin B. Korb
School of Computer Science and Software Engineering
Monash University, Clayton, VIC 3168 AUSTRALIA
{annn,korb}@csse.monash.edu.au
Overview
1. Introduction to Bayesian AI (20 min)
2. Bayesian networks (50 min)
   Break (10 min)
3. Applications (50 min)
   Break (10 min)
4. Learning Bayesian networks (50 min)
5. Current research issues (10 min)
6. Bayesian Net Lab (60 min: Optional)
7. Dinner (Optional)
Introduction to Bayesian AI

- Reasoning under uncertainty
- Probabilities
- Alternative formalisms
  - Fuzzy logic
  - MYCIN’s certainty factors
  - Default Logic
- Bayesian philosophy
  - Dutch book arguments
  - Bayes’ Theorem
  - Conditionalization
  - Confirmation theory
- Bayesian decision theory
- Towards a Bayesian AI
Reasoning under uncertainty

Uncertainty: the quality or state of being not clearly known.

This encompasses most of what we understand about the world, and most of what we would like our AI systems to understand. It distinguishes deductive knowledge (e.g., mathematics) from inductive belief (e.g., science).

Sources of uncertainty:

- Ignorance (which side of this coin is up?)
- Physical randomness (which side of this coin will land up?)
- Vagueness (which tribe am I closest to genetically? Picts? Angles? Saxons? Celts?)
Probabilities

The classic approach to reasoning under uncertainty (Blaise Pascal and Fermat).

Kolmogorov’s Axioms:

1. P(U) = 1
2. ∀X ⊆ U, P(X) ≥ 0
3. ∀X, Y ⊆ U, if X ∩ Y = ∅, then P(X ∨ Y) = P(X) + P(Y)

Conditional Probability:

P(X|Y) = P(X ∧ Y) / P(Y)

Independence:

X ⫫ Y iff P(X|Y) = P(X)
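The axioms and definitions above can be sketched over a finite sample space. The two-fair-coin-flips space below is an illustrative assumption of mine, not an example from the tutorial:

```python
from itertools import product

# A finite sample space U: two fair coin flips, each outcome
# a pair like ('H', 'T') with probability 0.25.
U = {outcome: 0.25 for outcome in product("HT", "HT")}

def P(event):
    """Probability of an event (a subset of U), per Kolmogorov's axioms."""
    return sum(U[outcome] for outcome in event)

def P_cond(X, Y):
    """Conditional probability: P(X|Y) = P(X and Y) / P(Y)."""
    return P(X & Y) / P(Y)

first_heads = {o for o in U if o[0] == "H"}
second_heads = {o for o in U if o[1] == "H"}

assert abs(P(set(U)) - 1.0) < 1e-9   # axiom 1: P(U) = 1
# Independence of the two flips: P(X|Y) = P(X)
assert abs(P_cond(first_heads, second_heads) - P(first_heads)) < 1e-9
```

Events are sets of outcomes, so axiom 3 falls out of summation over disjoint sets.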
Fuzzy Logic

Designed to cope with vagueness: is Fido a Labrador or a Shepherd?

Fuzzy set theory:

m(Fido ∈ Labrador) = m(Fido ∈ Shepherd) = 0.5

Extended to fuzzy logic, which takes intermediate truth values:

T(Labrador(Fido)) = 0.5

Combination rules:

- T(p ∧ q) = min(T(p), T(q))
- T(p ∨ q) = max(T(p), T(q))
- T(¬p) = 1 − T(p)

Not suitable for coping with randomness or ignorance. Obviously not:

Uncertainty(inclement weather) = max(Uncertainty(rain), Uncertainty(hail), ...)
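The combination rules above are simple enough to state directly in code. The membership value 0.5 is the slide’s own; the final check, that T(p ∧ ¬p) comes out as 0.5 rather than 0, is a standard observation illustrating why intermediate truth values do not behave like probabilities:

```python
def t_and(tp, tq):
    """Fuzzy conjunction: T(p and q) = min(T(p), T(q))."""
    return min(tp, tq)

def t_or(tp, tq):
    """Fuzzy disjunction: T(p or q) = max(T(p), T(q))."""
    return max(tp, tq)

def t_not(tp):
    """Fuzzy negation: T(not p) = 1 - T(p)."""
    return 1.0 - tp

# Fido is half Labrador (the slide's 0.5 membership value):
labrador = 0.5
# T(p and not-p) = 0.5, not 0, as it would be for a contradiction
# under a probabilistic reading.
assert t_and(labrador, t_not(labrador)) == 0.5
```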
MYCIN’s Certainty Factors

Uncertainty formalism developed for the early expert system MYCIN (Buchanan and Shortliffe, 1984). Elicit for (h, e):

- measure of belief: MB(h, e) ∈ [0, 1]
- measure of disbelief: MD(h, e) ∈ [0, 1]

CF(h, e) = MB(h, e) − MD(h, e) ∈ [−1, 1]

Special functions provided for combining evidence.

Problems:

- No semantics ever given for ‘belief’/‘disbelief’.
- Heckerman (1986) proved that restrictions required for a probabilistic semantics imply absurd independence assumptions.
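A minimal sketch of the certainty-factor definition. The parallel combination rule for two positive factors is the one commonly cited for MYCIN/EMYCIN; the tutorial itself only says “special functions provided”, so treat that rule as an assumption here:

```python
def cf(mb, md):
    """Certainty factor: CF(h,e) = MB(h,e) - MD(h,e), in [-1, 1]."""
    assert 0.0 <= mb <= 1.0 and 0.0 <= md <= 1.0
    return mb - md

def combine_positive(cf1, cf2):
    """Parallel combination of two positive CFs for the same hypothesis
    (commonly cited MYCIN/EMYCIN rule; an assumption, not from the slide)."""
    assert cf1 >= 0.0 and cf2 >= 0.0
    return cf1 + cf2 * (1.0 - cf1)
```

Note that `combine_positive` is commutative and never exceeds 1, but, as Heckerman showed, giving such rules a probabilistic semantics forces implausible independence assumptions.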
Default Logic

Intended to reflect “stereotypical” reasoning under uncertainty (Reiter 1980). Example:

    Bird(Tweety) : Bird(x) → Flies(x)
    ∴ Flies(Tweety)

Problems:

- Best semantics for default rules are probabilistic (Pearl 1988, Korb 1995).
- Mishandles combinations of low-probability events. E.g.,

      ApplyforJob(me) : ApplyforJob(x) → Reject(x)
      ∴ Reject(me)

  I.e., the dole always looks better than applying for a job!
Probability Theory

So, why not use probability theory to represent uncertainty? That’s what it was invented for: dealing with physical randomness and degrees of ignorance.

Furthermore, if you make bets which violate probability theory, you are subject to Dutch books: a Dutch book is a sequence of “fair” bets which collectively guarantee a loss.

Fair bets are bets based upon the standard odds-probability relation:

O(h) = P(h) / (1 − P(h))

P(h) = O(h) / (1 + O(h))
A Dutch Book

Payoff table on a bet for h (Odds = p/(1 − p); S = betting unit):

h   Payoff
T   $(1 − p)S
F   −$pS

Given a fair bet, the expected value from such a payoff is always $0.

Now, let’s violate the probability axioms. Example: say P(A) = −0.1 (violating A2). Payoff table against A (inverse of: for A), with S = 1:

¬A   Payoff
T    $pS = −$0.10
F    −$(1 − p)S = −$1.10
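The two payoff tables can be checked numerically. This sketch uses the slide’s own figures (P(A) = −0.1, S = 1); the function names are mine:

```python
def fair_bet_expectation(p, S=1.0):
    """Expected payoff of a fair bet for h: win $(1 - p)S with
    probability p, lose $pS with probability 1 - p."""
    return p * (1 - p) * S + (1 - p) * (-p * S)

def payoffs_against(p, S=1.0):
    """Payoffs for a bet against a proposition priced at probability p:
    win $pS if the proposition is false, lose $(1 - p)S if it is true."""
    return {"A_false": p * S, "A_true": -(1.0 - p) * S}

# Fair bets break even on average:
assert fair_bet_expectation(0.3) == 0.0

# An agent who prices A at P(A) = -0.1 (violating axiom A2) is
# Dutch-booked: the bet against A loses in *every* state.
book = payoffs_against(p=-0.1, S=1.0)
assert all(payoff < 0 for payoff in book.values())
```

The guaranteed loss is exactly the slide’s table: −$0.10 if A is false, −$1.10 if A is true.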
Bayes’ Theorem; Conditionalization

Due to Reverend Thomas Bayes (1764).

P(h|e) = P(e|h) P(h) / P(e)

Conditionalization:

P′(h) = P(h|e)

Or, read Bayes’ theorem as:

Posterior = (Likelihood × Prior) / Prob of evidence

Assumptions:

1. Joint priors over {h} and e exist.
2. Total evidence: e, and only e, is learned.
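A worked instance of the theorem. The numbers below are my own illustrative assumptions (a rare hypothesis with a fairly reliable evidence source), not figures from the tutorial:

```python
def posterior(prior, likelihood, p_evidence):
    """Bayes' theorem: P(h|e) = P(e|h) * P(h) / P(e)."""
    return likelihood * prior / p_evidence

p_h = 0.01               # prior P(h)
p_e_given_h = 0.9        # likelihood P(e|h)
p_e_given_not_h = 0.1    # false-positive rate P(e|not-h)

# Probability of the evidence, by total probability:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

p_h_given_e = posterior(p_h, p_e_given_h, p_e)   # posterior, roughly 0.083
```

Even strong evidence leaves the posterior modest when the prior is low, which is why assumption 1 (joint priors exist) matters.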
Bayesian Decision Theory

Frank Ramsey (1931). Decision making under uncertainty: what action to take (plan to adopt) when the future state of the world is not known. Find the utility of each possible outcome (action-state pair) and take the action that maximizes expected utility.

Example:

Action           Rain (p = .4)   Shine (1 − p = .6)
Take umbrella         30               10
Leave umbrella      −100               50

Expected utilities:

E(Take umbrella) = (30)(.4) + (10)(.6) = 18
E(Leave umbrella) = (−100)(.4) + (50)(.6) = −10

So taking the umbrella maximizes expected utility.