Bayesian AI Tutorial

Kevin B. Korb and Ann E. Nicholson
School of Computer Science and Software Engineering
Monash University
Clayton, VIC 3168 AUSTRALIA
{korb,annn}@csse.monash.edu.au
http://www.csse.monash.edu.au/~korb
Overview

1. Introduction to Bayesian AI (20 min)
2. Bayesian networks (40 min)
Lunch
3. Bayesian networks cont’d (10 min)
4. Applications (50 min)
Break (10 min)
5. Learning Bayesian networks (50 min)
6. Current research issues (10 min)
Introduction to Bayesian AI

- Reasoning under uncertainty
- Probabilities
- Alternative formalisms
  - Fuzzy logic
  - MYCIN’s certainty factors
  - Default logic
- Bayesian philosophy
  - Dutch book arguments
  - Bayes’ Theorem
  - Conditionalization
  - Confirmation theory
- Bayesian decision theory
- Towards a Bayesian AI
Reasoning under uncertainty

Uncertainty: the quality or state of being not clearly known.

This encompasses most of what we understand about the world, and most of what we would like our AI systems to understand. Distinguishes deductive knowledge (e.g., mathematics) from inductive belief (e.g., science).

Sources of uncertainty:

- Ignorance (which side of this coin is up?)
- Physical randomness (which side of this coin will land up?)
- Vagueness (which tribe am I closest to genetically? Picts? Angles? Saxons? Celts?)
Probabilities

The classic approach to reasoning under uncertainty (Blaise Pascal and Fermat).

Kolmogorov’s Axioms:

1. P(U) = 1
2. For all X ⊆ U, P(X) ≥ 0
3. For all X, Y ⊆ U, if X ∩ Y = ∅, then P(X ∪ Y) = P(X) + P(Y)

Conditional Probability:

P(X|Y) = P(X ∧ Y) / P(Y)

Independence:

X ⊥ Y iff P(X|Y) = P(X)
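To make these definitions concrete, here is a minimal Python sketch (an illustration, not from the tutorial; the joint distribution is an assumed example) that checks the axioms on a small joint distribution and tests independence via P(X|Y) = P(X):

    from itertools import product

    # Hypothetical joint distribution over two binary variables X and Y.
    joint = {
        (True, True): 0.12, (True, False): 0.28,
        (False, True): 0.18, (False, False): 0.42,
    }

    assert all(p >= 0 for p in joint.values())      # Axiom 2: P(X) >= 0
    assert abs(sum(joint.values()) - 1.0) < 1e-9    # Axiom 1: P(U) = 1
    # Axiom 3 is respected implicitly: probabilities of disjoint
    # atomic events are summed to form marginals below.

    def p_x(x):
        # marginal P(X = x)
        return sum(joint[(x, y)] for y in (True, False))

    def p_y(y):
        # marginal P(Y = y)
        return sum(joint[(x, y)] for x in (True, False))

    def p_x_given_y(x, y):
        # conditional probability P(X = x | Y = y) = P(X, Y) / P(Y)
        return joint[(x, y)] / p_y(y)

    independent = all(
        abs(p_x_given_y(x, y) - p_x(x)) < 1e-9
        for x, y in product((True, False), repeat=2)
    )
    print(independent)  # True: here P(X|Y) = P(X) for all values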
Fuzzy Logic

Designed to cope with vagueness: is Fido a Labrador or a Shepherd?

Fuzzy set theory:

m(Fido ∈ Labrador) = m(Fido ∈ Shepherd) = 0.5

Extended to fuzzy logic, which takes intermediate truth values:

T(Labrador(Fido)) = 0.5

Combination rules:

- T(p ∧ q) = min(T(p), T(q))
- T(p ∨ q) = max(T(p), T(q))
- T(¬p) = 1 - T(p)

Not suitable for coping with randomness or ignorance. Obviously not:

Uncertainty(inclement weather) = max(Uncertainty(rain), Uncertainty(hail), ...)
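A minimal Python sketch of these combination rules (an illustrative assumption, not the tutorial’s code), applied to the Fido example:

    def t_and(p, q):
        # T(p AND q) = min(T(p), T(q))
        return min(p, q)

    def t_or(p, q):
        # T(p OR q) = max(T(p), T(q))
        return max(p, q)

    def t_not(p):
        # T(NOT p) = 1 - T(p)
        return 1.0 - p

    t_labrador = 0.5  # T(Labrador(Fido))
    t_shepherd = 0.5  # T(Shepherd(Fido))

    # The disjunction stays at 0.5, even though Fido is surely one
    # breed or the other; max() does not accumulate evidence the way
    # probabilistic disjunction of exclusive alternatives does.
    print(t_or(t_labrador, t_shepherd))  # 0.5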
MYCIN’s Certainty Factors

Uncertainty formalism developed for the early expert system MYCIN (Buchanan and Shortliffe, 1984). Elicit for each rule (h, e):

- a measure of belief: MB(h, e) ∈ [0, 1]
- a measure of disbelief: MD(h, e) ∈ [0, 1]

CF(h, e) = MB(h, e) - MD(h, e) ∈ [-1, 1]

Special functions provided for combining evidence.

Problems:

- No semantics ever given for ‘belief’/‘disbelief’.
- Heckerman (1986) proved that restrictions required for a probabilistic semantics imply absurd independence assumptions.
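The slide does not spell those combining functions out; the following Python sketch shows the parallel-combination rule as it is commonly presented in the MYCIN literature (treat it as an assumption, not the tutorial’s definition):

    def combine_cf(cf1, cf2):
        # Combine two certainty factors bearing on the same hypothesis.
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    print(combine_cf(0.6, 0.4))   # 0.76: two supporting rules reinforce
    print(combine_cf(0.6, -0.4))  # ~0.33: conflicting evidence partly cancels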
Default Logic

Intended to reflect “stereotypical” reasoning under uncertainty (Reiter 1980). Example:

    Bird(Tweety),  Bird(x) : Flies(x)
    ⊢ Flies(Tweety)

Problems:

- Best semantics for default rules are probabilistic (Pearl 1988, Korb 1995).
- Mishandles combinations of low-probability events. E.g.,

    ApplyForJob(m),  ApplyForJob(x) : Rejected(x)
    ⊢ Rejected(m)

I.e., the dole always looks better than applying for a job!
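To see why always concluding Rejected(m) goes wrong, here is a small numeric illustration in Python (the probabilities are assumptions chosen for the example, not the tutorial’s):

    # Each single application is very likely to be rejected, yet a
    # campaign of many applications is more likely than not to succeed.
    # A default rule that always concludes Rejected(m) misses this.

    p_reject_one = 0.99   # assumed probability one application is rejected
    n_applications = 100

    p_all_rejected = p_reject_one ** n_applications
    print(f"P(all rejected)       = {p_all_rejected:.3f}")      # ~0.366
    print(f"P(at least one offer) = {1 - p_all_rejected:.3f}")  # ~0.634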
Probability Theory

So, why not use probability theory to represent uncertainty? That’s what it was invented for... dealing with physical randomness and degrees of ignorance.

Furthermore, if you make bets which violate probability theory, you are subject to Dutch books: a Dutch book is a sequence of “fair” bets which collectively guarantee a loss.

Fair bets are bets based upon the standard odds-probability relation:

O(h) = P(h) / (1 - P(h))
P(h) = O(h) / (1 + O(h))
A Dutch Book

Payoff table on a bet for h (odds = p/(1 - p); S = betting unit):

h   Payoff
T   $(1 - p) × S
F   -$p × S

Given a fair bet, the expected value from such a payoff is always $0.

Now, let’s violate the probability axioms. Example: say P(A) = -0.1 (violating Axiom 2). Payoff table against A (inverse of: for A), with S = 1:

A   Payoff
T   $pS = -$0.10
F   -$(1 - p)S = -$1.10

The bettor loses whichever way A turns out: a guaranteed loss.
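A minimal Python sketch of this example (an illustration following the slide’s table, not the tutorial’s code):

    def odds(p):
        # Standard odds-probability relation: O(h) = p / (1 - p).
        return p / (1 - p)

    def payoff_against_a(p, stake, a_is_true):
        # Payoff of the bet against A priced from "probability" p,
        # per the table above: pS if A is true, -(1 - p)S if false.
        return p * stake if a_is_true else -(1 - p) * stake

    p, stake = -0.1, 1.0              # violates Axiom 2: P(A) >= 0
    print(odds(p))                    # ~ -0.091: negative "fair" odds
    for a_is_true in (True, False):
        print(a_is_true, payoff_against_a(p, stake, a_is_true))
    # True  -0.10
    # False -1.10  -> a loss either way: a Dutch book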
Bayes’ Theorem; Conditionalization

Due to Reverend Thomas Bayes (1764).

P(h|e) = P(e|h) P(h) / P(e)

Conditionalization:

P′(h) = P(h|e)

Or, read Bayes’ theorem as:

Posterior = Likelihood × Prior / Prob(evidence)

Assumptions:

1. Joint priors over {h_i} and e exist.
2. Total evidence: e, and only e, is learned.
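A minimal Python sketch of a single Bayesian update (the numbers are assumptions chosen for illustration):

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        # Bayes' theorem: P(h|e) = P(e|h)P(h) / P(e), with P(e)
        # expanded by total probability over h and not-h.
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    p_h = 0.01                               # assumed prior P(h)
    p_h_given_e = posterior(p_h, 0.9, 0.05)  # assumed likelihoods
    print(round(p_h_given_e, 3))             # 0.154

    # Conditionalization: having learned e (and only e), adopt
    # P'(h) = P(h|e) as the prior for the next piece of evidence.
    p_h = p_h_given_e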
Bayesian Decision Theory

Frank Ramsey (1931). Decision making under uncertainty: what action to take (plan to adopt) when the future state of the world is not known.

Bayesian answer: find the utility of each possible outcome (action-state pair) and take the action that maximizes expected utility.

Example:

Action            Rain (p = .4)   Shine (1 - p = .6)
Take umbrella          30                 10
Leave umbrella       -100                 50

Expected utilities:

E(Take umbrella)  = (30)(.4) + (10)(.6) = 18
E(Leave umbrella) = (-100)(.4) + (50)(.6) = -10
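The same calculation as a minimal Python sketch (an illustration of the example above, not the tutorial’s code):

    p_rain = 0.4

    # utility[action][state], from the example table
    utility = {
        "take umbrella":  {"rain": 30,   "shine": 10},
        "leave umbrella": {"rain": -100, "shine": 50},
    }

    def expected_utility(action):
        u = utility[action]
        return u["rain"] * p_rain + u["shine"] * (1 - p_rain)

    for action in utility:
        print(action, expected_utility(action))   # 18.0 and -10.0

    # Maximize expected utility: take the umbrella.
    print(max(utility, key=expected_utility))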