
AI-Unit8

Problem Solving vs Planning


Environments where plain problem solving (plan first, then execute blindly) is not enough, so planning must be interleaved with execution:
Stochastic
Multiagent
Partial Observability    (example plan: [A, S, F, B])
Unknown
Hierarchical
Vacuum world actions:
Right - R
Left - L
Suck - S
In the stochastic vacuum world, which plans always reach the goal, and which only maybe do?

Plan      Always   Maybe
SRS       ( )      (*)
RSLS      ( )      (*)
SRRS      ( )      (*)
SRSRS     ( )      (*)
_____     ( )      (*)

No finite plan is guaranteed when actions can fail, so each of these only "maybe" reaches the goal.
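A small simulation sketch of why the finite plans above only "maybe" succeed, assuming a two-cell vacuum world in which Suck fails with some probability; the state encoding and the failure rate are choices made here for illustration, not given in the notes.

```python
import random

SUCK_FAIL_PROB = 0.25  # assumed failure rate, not from the notes

def step(state, action):
    """state = (position 'A' or 'B', dirt in A, dirt in B)."""
    pos, dirt_a, dirt_b = state
    if action == 'R':
        return ('B', dirt_a, dirt_b)
    if action == 'L':
        return ('A', dirt_a, dirt_b)
    if action == 'S':
        if random.random() < SUCK_FAIL_PROB:   # stochastic: Suck sometimes does nothing
            return state
        return (pos, False, dirt_b) if pos == 'A' else (pos, dirt_a, False)
    raise ValueError(action)

def success_rate(plan, start, trials=10_000):
    """Fraction of trials in which the fixed plan ends with both cells clean."""
    wins = 0
    for _ in range(trials):
        s = start
        for a in plan:
            s = step(s, a)
        if not s[1] and not s[2]:
            wins += 1
    return wins / trials

start = ('A', True, True)                      # dirt in both cells, vacuum in A
for plan in ["SRS", "RSLS", "SRRS", "SRSRS"]:
    print(plan, success_rate(plan, start))     # each rate < 1: the plan only "maybe" works
```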

The linear plan [S, R, S] drawn as states (boxes) and actions, with an observation node (circle) that branches on whether the vacuum is still in A:

[ ]--S-->[ ]--R-->( )--B-->[ ]--S-->[ ]
          ^        |
          '----A---'

With the loop, the plan is written [S, while A: R, S].
Unbounded Solution (the plan is guaranteed to work, but with no bound on the number of steps) iff:
( ) some leaf is a goal
(*) every leaf is a goal
( ) no
Bounded Solution (guaranteed to work within a bounded number of steps) iff the plan additionally has:
( ) no branches
(*) no loops
( ) no
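A sketch of the two checks under an assumed plan-graph representation (node dictionaries with a "state" label and a "children" list, all invented here): an unbounded solution requires every reachable leaf to be a goal; a bounded solution additionally requires no loops.

```python
def every_leaf_is_goal(node, goal_states, visited=None):
    """Unbounded-solution test: every leaf reachable in the plan graph is a goal."""
    if visited is None:
        visited = set()
    if id(node) in visited:          # loop back-edge: this node is checked elsewhere
        return True
    visited.add(id(node))
    children = node.get("children", [])
    if not children:                 # leaf node
        return node["state"] in goal_states
    return all(every_leaf_is_goal(c, goal_states, visited) for c in children)

def has_loop(node, path=None):
    """Bounded solutions additionally require that the plan graph has no loops."""
    if path is None:
        path = set()
    if id(node) in path:
        return True
    path = path | {id(node)}
    return any(has_loop(c, path) for c in node.get("children", []))

# Tiny made-up plan graph: Suck, then (while observing A) Right, then Suck.
suck2 = {"state": "goal", "children": []}
observe = {"state": "after R", "children": [suck2]}
observe["children"].append(observe)          # observed A: go back and try Right again
plan = {"state": "start", "children": [{"state": "after S", "children": [observe]}]}

print(every_leaf_is_goal(plan, {"goal"}))    # True  -> an unbounded solution
print(has_loop(plan))                        # True  -> not a bounded solution
```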
Deterministic, fully observable case: the plan [A, S, F] ends in the state
  Result(Result(A, A->S), S->F)
and the plan succeeds if that final state ∈ Goal.
General transition: s' = Result(s, a)
Partially observable case: belief-state update  b' = Update(Predict(b, a), o)
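A minimal sketch of the belief-state update b' = Update(Predict(b, a), o), treating a belief state as a set of possible world states; the toy transition and percept models below are assumptions made only for illustration.

```python
def predict(belief, action, results):
    """Predict: apply the (possibly nondeterministic) action model to every possible state."""
    nxt = set()
    for s in belief:
        nxt |= results(s, action)
    return nxt

def update(belief, observation, perceive):
    """Update: keep only the states consistent with the observation."""
    return {s for s in belief if perceive(s) == observation}

# Toy two-cell vacuum model, state = (position, dirt in A, dirt in B) -- assumed encoding.
def results(state, action):
    pos, da, db = state
    if action == 'R':
        return {('B', da, db), state}        # moving right may fail
    if action == 'S':
        return {(pos, False, db) if pos == 'A' else (pos, da, False)}
    return {state}

def perceive(state):
    pos, da, db = state
    return (pos, da if pos == 'A' else db)   # we only sense the local square

b = {('A', True, True), ('A', True, False)}  # initial uncertainty about B's dirt
b1 = update(predict(b, 'R', results), ('B', False), perceive)
print(b1)                                    # the belief state b' after acting and observing
```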

Classical Planning
State space: k Boolean variables give 2^k states.
  Example variables: Dirt A, Dirt B, Vac A (vacuum in A)
World state: a complete assignment to the variables
Belief state: a complete assignment, a partial assignment, or an arbitrary formula over the variables
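A quick illustration of the 2^k counting and of the three belief-state representations, using the three Boolean variables from the notes (Dirt A, Dirt B, Vac A); the Python encoding is just one chosen for this sketch.

```python
from itertools import product

variables = ["DirtA", "DirtB", "VacA"]           # k = 3 Boolean variables

# World states: all complete assignments -> 2^k of them.
world_states = [dict(zip(variables, bits))
                for bits in product([False, True], repeat=len(variables))]
print(len(world_states))                         # 8 == 2**3

# Belief state as a complete assignment (full knowledge): exactly one world state.
complete = {"DirtA": True, "DirtB": False, "VacA": True}

# Belief state as a partial assignment (VacA unknown): all matching world states.
partial = {"DirtA": True, "DirtB": False}
print([s for s in world_states
       if all(s[v] == val for v, val in partial.items())])   # 2 matching states

# Belief state as an arbitrary formula, e.g. DirtA ∨ DirtB.
def formula(s):
    return s["DirtA"] or s["DirtB"]
print(sum(1 for s in world_states if formula(s)))             # 6 states satisfy it
```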
Action(Fly(p, x, y),
  Precond: Plane(p) ∧ Airport(x) ∧ Airport(y) ∧ At(p, x),
  Effect: ¬At(p, x) ∧ At(p, y))

Forward (progression) search over concrete world states:
[At(p1, sfo); At(c1, sfo); ...] --Load(c1, p1, sfo)--> [...; ...; ...] --...--> [Goal]
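A sketch of one progression step: a state is a set of ground positive literals, and applying a grounded action deletes and adds literals. The preconditions and effects of Load(c1, p1, sfo) are filled in here by mirroring the Unload schema below; the notes only name the action.

```python
# State = frozenset of ground positive literals (closed-world assumption).
def applicable(state, precond):
    return precond <= state

def apply_action(state, precond, add, delete):
    """Progression: if the preconditions hold, remove the delete-list and add the add-list."""
    if not applicable(state, precond):
        raise ValueError("action not applicable in this state")
    return (state - delete) | add

init = frozenset({"At(c1,sfo)", "At(p1,sfo)", "Cargo(c1)", "Plane(p1)",
                  "Airport(sfo)", "Airport(jfk)"})

# Load(c1, p1, sfo), written by analogy with Unload -- an assumption, not from the notes.
load_pre = frozenset({"At(c1,sfo)", "At(p1,sfo)", "Cargo(c1)", "Plane(p1)", "Airport(sfo)"})
load_add = frozenset({"In(c1,p1)"})
load_del = frozenset({"At(c1,sfo)"})

s1 = apply_action(init, load_pre, load_add, load_del)
print(sorted(s1))   # the successor state after Load(c1, p1, sfo)
```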

Backward (regression) search from the goal:
Goal: [At(c1, jfk); At(c2, sfo)]
Regressing through Unload(c1, p, jfk) gives the subgoal [In(c1, p); At(p, jfk); ...]

Action(Unload(c, p, a),
  Precond: In(c, p) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a),
  Effect: At(c, a) ∧ ¬In(c, p))

Action(Buy(b),
  Precond: ISBN(b),
  Effect: Own(b))
Goal(Own(0136042597))
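A sketch of one regression step on the Buy example: regressing a goal through an action replaces what the action achieves with the action's preconditions. The frozenset representation and the regress helper are assumptions made for this sketch.

```python
# Goals and action descriptions are frozensets of ground literals.
def regress(goal, precond, effects):
    """Predecessor subgoal = (goal minus what the action achieves) plus its preconditions."""
    if not (goal & effects):
        raise ValueError("action is not relevant to this goal")
    return (goal - effects) | precond

# Buy(b) grounded with b = 0136042597, matching the notes' Goal(Own(0136042597)).
buy_pre = frozenset({"ISBN(0136042597)"})
buy_eff = frozenset({"Own(0136042597)"})

goal = frozenset({"Own(0136042597)"})
subgoal = regress(goal, buy_pre, buy_eff)
print(subgoal)   # frozenset({'ISBN(0136042597)'})
```

Going backward, only actions whose effects mention a goal literal are relevant, so the single grounded action Buy(0136042597) suffices here, instead of branching forward over every possible ISBN.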
Action(Slide(t, a, b),
  Precond: On(t, a) ∧ Tile(t) ∧ Blank(b) ∧ Adj(a, b),
  Effect: On(t, b) ∧ Blank(a) ∧ ¬On(t, a) ∧ ¬Blank(b))

Situation Calculus
First-order logic (FOL) representation:
  Actions are objects: Fly(p, x, y)
  Situations are objects: S0 (initial situation), s' = Result(s, a)
  Instead of an Actions(s) function, a Poss(a, s) predicate: action a is possible in situation s
Possibility axioms have the form SomePrecond(s) => Poss(a, s), e.g.:
  Plane(p, s) ∧ Airport(x, s) ∧ Airport(y, s) ∧ At(p, x, s) => Poss(Fly(p, x, y), s)

Situation Calculus
Successor-State Axioms (one per fluent, e.g. At(p, x, s)):
  ∀a, s  Poss(a, s) => (fluent true afterwards <=> a made it true ∨ (it was already true ∧ a did not undo it))
Example for In:
  Poss(a, s) => (In(c, p, Result(s, a)) <=> (a = Load(c, p, x) ∨ (In(c, p, s) ∧ a ≠ Unload(c, p, x))))
Initial state S0:
  At(p1, jfk, S0)
  ∀c Cargo(c) => At(c, jfk, S0)
Goal: ∃s ∀c Cargo(c) => At(c, sfo, s)
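A sketch that runs the In(c, p) successor-state axiom over a tiny invented action trace; the tuple encoding of actions is an assumption, but the axiom itself is the one stated above.

```python
def in_after(c, p, a, in_before):
    """Successor-state axiom for In:
    In(c, p, Result(s, a)) <=> a = Load(c, p, x) ∨ (In(c, p, s) ∧ a ≠ Unload(c, p, x))."""
    is_load   = a[0] == "Load"   and a[1] == c and a[2] == p
    is_unload = a[0] == "Unload" and a[1] == c and a[2] == p
    return is_load or (in_before and not is_unload)

# Tiny trace from S0: load c1 into p1 at jfk, fly to sfo, then unload.
in_c1_p1 = False                                   # In(c1, p1, S0) is false
for action in [("Load", "c1", "p1", "jfk"),
               ("Fly", "p1", "jfk", "sfo"),
               ("Unload", "c1", "p1", "sfo")]:
    in_c1_p1 = in_after("c1", "p1", action, in_c1_p1)
    print(action, "->", in_c1_p1)                  # True, True, False
```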

