
CLASSICAL PLANNING

Dr. J. Ujwala Rekha


Outline
• The challenges in planning with standard
search algorithm
• Representing plans-the PDDL language
• Planning as state-space search
• Planning graphs and the GRAPHPLAN
algorithm
Introduction
• Devising a plan of action to achieve one’s goals is a critical
part of AI
• The search-based problem-solving agent and the hybrid
logical agent are examples of planning agents.
• The problem-solving agent deals with atomic representations
of states and thus needs good domain-specific heuristics.
• The hybrid propositional logic agent uses
domain-independent heuristics
– But it relies on variable-free propositional inference, resulting
in many actions and states
• PDDL: a representation for planning problems
that scales up to problems that could not be
handled by search-based and logical agents.
• Classical Planning
– Fully observable
– Deterministic
– Static environments
– Single agent
Planning Domain Definition Language (PDDL)

• A state is represented as a conjunction of fluents
that are ground, functionless atoms.
– Poor ∧ Unknown
– At(Truck1, Melbourne) ∧ At(Truck2, Sydney)
• The following fluents are not allowed in a state:
– At(x,y) because it is non-ground
– ¬Poor because it is a negation
– At(Father(Fred),Sydney) because it uses a function
symbol
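The three restrictions above can be sketched as a small validity check in Python. This is an illustrative sketch, assuming the slides' conventions: variables are lowercase (x, y), constants are capitalized (Truck1, Sydney), and negation is written with a leading ¬.

```python
import re

def valid_state_fluent(fluent):
    """Check that a fluent is a positive, ground, functionless atom."""
    if fluent.startswith("¬"):
        return False                              # negations are not allowed
    m = re.fullmatch(r"\w+(?:\(([^()]*)\))?", fluent)
    if m is None:
        return False                              # nested parens => function symbol
    args = [a.strip() for a in m.group(1).split(",")] if m.group(1) else []
    return all(a[:1].isupper() for a in args)     # ground: no lowercase variables

assert valid_state_fluent("At(Truck1,Melbourne)")
assert not valid_state_fluent("At(x,y)")                   # non-ground
assert not valid_state_fluent("¬Poor")                     # negation
assert not valid_state_fluent("At(Father(Fred),Sydney)")   # function symbol
```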
PDDL
• Actions are represented by a set of action schemas
• The schema consists of the action name, a list of all the variables
used in the schema, a precondition and an effect.
– Action(Fly(p, from, to),
PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
EFFECT: ¬At(p, from) ∧ At(p, to))
• The result of executing action a in state s is defined as the state s′,
represented by the set of fluents formed by starting with
s, removing the fluents that appear as negative literals in the
action’s effect (the delete list), and adding the fluents that appear
as positive literals (the add list):
RESULT(s, a) = (s − DEL(a)) ∪ ADD(a)
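The RESULT equation can be sketched directly with Python sets. The dict-of-sets action encoding and the hand-ground Fly(P1, SFO, JFK) instance below are illustrative choices, not PDDL syntax.

```python
def result(state, action):
    """RESULT(s, a) = (s - DEL(a)) ∪ ADD(a), with a state as a frozenset
    of ground fluent strings and an action as a dict of fluent sets."""
    return frozenset((state - action["del"]) | action["add"])

# A hand-ground instance of Fly(P1, SFO, JFK); names are illustrative.
fly = {
    "precond": {"At(P1,SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"},
    "add":     {"At(P1,JFK)"},
    "del":     {"At(P1,SFO)"},
}

s = frozenset({"At(P1,SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"})
assert fly["precond"] <= s            # the action is applicable in s
s2 = result(s, fly)
assert "At(P1,JFK)" in s2 and "At(P1,SFO)" not in s2
```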
Complexity of Classical Planning
• PlanSAT: is there a plan that solves the problem?
• Bounded PlanSAT: is there a plan of length k or less?
• Both are decidable if the planning language is function-free –
finitely many states.
• Both PlanSAT and Bounded PlanSAT are in the complexity class
PSPACE
– A class that is larger than NP and refers to problems that can be solved
by a deterministic Turing machine with a polynomial amount of space.
• But for many domains:
– Bounded PlanSAT: NP-complete
– PlanSAT: in P; optimal planning is usually hard, but sub-optimal
planning is not so hard
A Specific Planning Problem
• Initial state- a conjunction of ground atoms
• Goal – a conjunction of literals (positive or
negative) that may contain (existentially
quantified) variables.
• A set of action schemas
• Clearly a planning problem can be seen as a
search problem.
Planning as a State-Space Search

(a) Forward (progression) search


(b) Backward (regression) search
Forward (Progression) State-Space Search

• A planning problem maps directly into a search problem

• We can solve planning problems with any of
the heuristic search algorithms or local search
algorithms
• Forward state-space search is too inefficient:
– Prone to exploring irrelevant actions
– Planning problems often have large state spaces.
Backward (Regression) Relevant-States
Search
• In regression search we start at the goal and apply the
actions backward until we find a sequence of steps
that reaches the initial state
• It is called relevant-states search because we only
consider actions that are relevant to the goal.
• The PDDL representation is designed to make it easy to
regress actions. Given a ground goal g and a ground
action a, the regression from g over a gives us a state
description g′ defined by
g′ = (g − ADD(a)) ∪ PRECOND(a)
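The regression equation can be sketched with the same dict-of-sets action encoding; the Fly instance is again illustrative.

```python
def regress(goal, action):
    """g' = (g - ADD(a)) ∪ PRECOND(a): regress goal g backward over action a."""
    return frozenset((goal - action["add"]) | action["precond"])

# Hand-ground Fly(P1, SFO, JFK); names are illustrative.
fly = {
    "precond": {"At(P1,SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"},
    "add":     {"At(P1,JFK)"},
    "del":     {"At(P1,SFO)"},
}

g = frozenset({"At(P1,JFK)"})
# Fly is relevant to g: it achieves a goal literal and deletes none of them.
assert g & fly["add"] and not (g & fly["del"])
g_prev = regress(g, fly)
assert g_prev == frozenset(fly["precond"])
```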
Heuristics for Planning
• Neither forward nor backward search is efficient without a good
heuristic function.
• An admissible heuristic can be derived by defining a relaxed problem
that is easier to solve.
• The exact cost of a solution to the relaxed problem serves as an
admissible heuristic for the original problem
• Think of a search problem as a graph where nodes are states and the
edges are actions. The problem is to find a path connecting the initial
state to a goal state.
• There are two ways we can relax this problem:
– By adding more edges to the graph
– By grouping multiple nodes together forming an abstraction of the state space
that has fewer states
Heuristics for Planning
• Adding more edges to the graph:
– Ignore preconditions: drop all preconditions, so every
action becomes applicable in every state and any single
goal fluent can be achieved in one step.
– Ignore delete lists: no action will ever undo progress
made by another action
• Grouping multiple nodes: forming a state abstraction
– A key idea in defining heuristics is decomposition:
dividing a problem into parts, solving each part
independently, and then combining the parts.
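The ignore-delete-lists relaxation can be sketched as greedy forward chaining: with DEL(a) dropped, fluents only accumulate, so the number of layers until the goal appears gives a simple heuristic estimate. The action encoding and the Fly instance are illustrative assumptions.

```python
def relaxed_layers(init, goal, actions):
    """Forward-chain with delete lists ignored; return the number of
    layers until every goal fluent appears, or None if unreachable."""
    state, layers = set(init), 0
    while not goal <= state:
        new = set()
        for a in actions:
            if a["precond"] <= state:     # applicability; DEL(a) is ignored
                new |= a["add"]
        if new <= state:
            return None                   # fixed point: unreachable even relaxed
        state |= new
        layers += 1
    return layers

fly = {"precond": {"At(P1,SFO)"}, "add": {"At(P1,JFK)"}, "del": {"At(P1,SFO)"}}
s0 = {"At(P1,SFO)"}
assert relaxed_layers(s0, {"At(P1,JFK)"}, [fly]) == 1
assert relaxed_layers(s0, {"At(P1,LAX)"}, [fly]) is None
```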
Planning Graphs-Motivation
• A big source of inefficiency in search algorithms is the
branching factor
• One way to reduce branching factor:
– First create a relaxed problem
• Remove some restrictions of the original problem
– Every solution to the original problem is also a solution
to the relaxed problem
• Then do a modified version of the original search
(backward search)
• Restrict the search space to include only those actions that
occur in solutions to the relaxed problem
Planning Graphs
• Search space for a relaxed version of the planning
problem
• Alternating layers of states (ground literals) and actions.
• Nodes at action-level i: actions that might be possible
to execute at time i
• Nodes at state-level i: literals that might possibly be
true at time i
• Edges: preconditions and effects
• Mutual exclusion links
Mutual Exclusions
• Two actions at the same action-level are mutex if:
– Inconsistent effects: an effect of one negates an
effect of the other
– Interference: one deletes a precondition of the other
– Competing needs: they have mutually exclusive
preconditions
• Two literals at the same state-level are mutex if:
– Inconsistent support: one is the negation of the other
– Or all ways of achieving them are pairwise mutex
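The action-level mutex test can be sketched as follows, assuming actions with positive-fluent precond/add/del sets and a `literal_mutex` set of mutex literal pairs carried over from the previous state level (needed for "competing needs"). The Eat/Bake actions are illustrative.

```python
def actions_mutex(a, b, literal_mutex=frozenset()):
    """True if actions a and b are mutex at the same action level."""
    if a["add"] & b["del"] or b["add"] & a["del"]:
        return True                                   # inconsistent effects
    if a["del"] & b["precond"] or b["del"] & a["precond"]:
        return True                                   # interference
    return any((p, q) in literal_mutex or (q, p) in literal_mutex
               for p in a["precond"] for q in b["precond"])  # competing needs

eat  = {"precond": {"Have(Cake)"}, "add": {"Eaten(Cake)"}, "del": {"Have(Cake)"}}
bake = {"precond": set(),          "add": {"Have(Cake)"},  "del": set()}
assert actions_mutex(eat, bake)   # Bake adds Have(Cake), which Eat deletes
```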
Planning Graphs
• Special data structure called a planning graph can be
used to give better heuristic estimates.
• A planning graph is a directed graph organized into
levels:
– first a level S0 for the initial state, consisting of nodes
representing each fluent that holds in S0;
– then a level A0 consisting of nodes for each ground action
that might be applicable in S0;
– then alternating levels Si followed by Ai, until we reach a
termination condition.
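One expansion step from Si to Ai and Si+1 can be sketched as below. This is a simplified sketch: mutex links are omitted, and negative literals introduced by delete effects are not tracked; persistence ("no-op") actions are modeled by carrying every literal forward, which is why Si ⊆ Si+1.

```python
def expand_level(s_level, actions):
    """Return (A_i, S_{i+1}): the actions applicable in S_i, and the next
    state level formed by their add effects plus persisted literals."""
    a_level = [a for a in actions if a["precond"] <= s_level]
    next_s = set(s_level)                 # no-op actions preserve all literals
    for a in a_level:
        next_s |= a["add"]
    return a_level, frozenset(next_s)

fly = {"precond": {"At(P1,SFO)"}, "add": {"At(P1,JFK)"}, "del": {"At(P1,SFO)"}}
s0 = frozenset({"At(P1,SFO)"})
a0, s1 = expand_level(s0, [fly])
assert fly in a0 and "At(P1,JFK)" in s1 and s0 <= s1
```

Because literals only accumulate, repeated expansion eventually reaches a fixed point where Si+1 = Si, i.e. the graph "levels off".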
The GRAPHPLAN Algorithm
• GRAPHPLAN extracts a plan directly from the planning
graph, rather than just using the graph to provide a
heuristic.
• The GRAPHPLAN algorithm repeatedly adds a level to the
planning graph with EXPAND-GRAPH.
• Once all the goals show up as non-mutex in the graph,
GRAPHPLAN calls EXTRACT-SOLUTION to search for a plan
that solves the problem.
• If that fails, it expands another level and tries again,
terminating with failure when there is no reason to go on.
Termination of GRAPHPLAN
Extract-Solution as a Search Problem
• Initial state: the last level of the planning graph, Sn
• Actions at state level Si: select any conflict-free subset of
the actions in Ai-1 that covers the goals at that level.
• The resultant state: level Si-1; the goals at the new state are
the preconditions of the selected actions
• Goal of the backward search: reach a state at S0 in which all
goals are satisfied
• The cost of each action is 1
• When EXTRACT-SOLUTION fails to find a solution for a set of
goals at a level, the pair (level, goals) is recorded as a no-good.
