
Learning Objectives

At the end of the class you should be able to:

explain the model of deterministic planning
represent a problem using the STRIPS representation of actions.

State-space Search

deterministic or stochastic dynamics
fully observable or partially observable
explicit states or features or individuals and relations
static or finite stage or indefinite stage or infinite stage
goals or complex preferences
perfect rationality or bounded rationality
flat or modular or hierarchical
single agent or multiple agents
knowledge is given or knowledge is learned
reason offline or reason while interacting with environment

Classical Planning

deterministic or stochastic dynamics
fully observable or partially observable
explicit states or features or individuals and relations
static or finite stage or indefinite stage or infinite stage
goals or complex preferences
perfect rationality or bounded rationality
flat or modular or hierarchical
single agent or multiple agents
knowledge is given or knowledge is learned
reason offline or reason while interacting with environment

Planning

Planning is deciding what to do based on an agent’s ability, its goals, and the state of the world.

Initial assumptions:
► The world is deterministic. There are no exogenous events outside of the control of the robot that change the state of the world.
► The agent knows what state it is in. (Fully observable)
► Time progresses discretely from one state to the next.
► Goals are predicates of states that need to be achieved.

Planning is finding a sequence of actions to solve a goal.

Actions

A deterministic action is a partial function from states to states.
The preconditions of an action specify when the action can be carried out.
The effect of an action specifies the resulting state.


Delivery Robot Example

[Map: four locations arranged in a loop — coffee shop (cs), Sam’s office (off), lab, and mail room (mr).]

Actions:
mc – move clockwise
mcc – move counterclockwise
puc – pick up coffee
dc – deliver coffee
pum – pick up mail
dm – deliver mail

Features:
RLoc – Rob’s location
RHC – Rob has coffee
SWC – Sam wants coffee
MW – Mail is waiting
RHM – Rob has mail

Explicit State-space Representation

State                         Action   Resulting State
⟨lab, ¬rhc, swc, ¬mw, rhm⟩    mc       ⟨mr, ¬rhc, swc, ¬mw, rhm⟩
⟨lab, ¬rhc, swc, ¬mw, rhm⟩    mcc      ⟨off, ¬rhc, swc, ¬mw, rhm⟩
⟨off, ¬rhc, swc, ¬mw, rhm⟩    dm       ⟨off, ¬rhc, swc, ¬mw, ¬rhm⟩
⟨off, ¬rhc, swc, ¬mw, rhm⟩    mcc      ⟨cs, ¬rhc, swc, ¬mw, rhm⟩
⟨off, ¬rhc, swc, ¬mw, rhm⟩    mc       ⟨lab, ¬rhc, swc, ¬mw, rhm⟩
...                           ...      ...

With 4 locations and 4 Boolean features there are 4 × 2⁴ = 64 states, and every new Boolean feature doubles the table. What happens when we also want to model the robot’s battery charge?
Want “elaboration tolerance”. . . .

STRIPS Representation

The state is a function from features into values. So it can be represented as a set of feature-value pairs.
For each action:
precondition a set of feature-value pairs that must be true for the action to be carried out.
effect a set of feature-value pairs that are made true by this action.
STRIPS assumption: every feature not mentioned in the effect is unaffected by the action.
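
A minimal Python sketch (not from the slides) of this representation: a state is a dict from features to values, an action is a dict with a precondition set and an effect set, and the STRIPS assumption appears as copying the unchanged features forward. The names applicable and apply are illustrative.

    def applicable(state, action):
        # An action can be carried out iff every precondition pair holds.
        return all(state.get(f) == v for f, v in action["precondition"].items())

    def apply(state, action):
        # Resulting state: effect pairs override old values; by the STRIPS
        # assumption, every feature not mentioned in the effect is unchanged.
        new_state = dict(state)  # copy, so unmentioned features persist
        new_state.update(action["effect"])
        return new_state
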
Example STRIPS representation

Pick-up coffee (puc):
precondition: {RLoc = cs, RHC = False}. “The robot needs to be at the coffee shop and not holding coffee in order to pick up coffee.”
effect: {RHC = True}. “After the robot picks up coffee, it is holding coffee. Nothing else has changed.”

Deliver coffee (dc):
precondition: {RLoc = off, RHC = True}. “The robot needs to be at the office and holding coffee in order to deliver coffee.”
effect: {RHC = False, SWC = False}. “After the robot delivers coffee, it is not holding coffee, and Sam no longer wants coffee. Nothing else has changed.”
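
The two actions above, encoded in the hypothetical dict format of the earlier sketch (feature names follow the slides):

    puc = {"precondition": {"RLoc": "cs", "RHC": False},
           "effect": {"RHC": True}}
    dc = {"precondition": {"RLoc": "off", "RHC": True},
          "effect": {"RHC": False, "SWC": False}}

    state = {"RLoc": "cs", "RHC": False, "SWC": True, "MW": True, "RHM": False}
    assert applicable(state, puc)
    state = apply(state, puc)  # RHC becomes True; nothing else changes
    assert state == {"RLoc": "cs", "RHC": True, "SWC": True, "MW": True, "RHM": False}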


Deterministic Planning

Given:
A description of the effects and preconditions of the actions
A description of the initial state
A goal to achieve
find a sequence of actions that is possible and will result in a state satisfying the goal.


Forward Planning

Idea: search in the state-space graph.
The nodes represent the states.
There is an arc ⟨s, s′⟩ labeled with action A if
► A is an action that can be carried out in state s and
► s′ is the state resulting from doing A in state s
A plan is a path from the state representing the initial state to a state that satisfies the goal.


Example state-space graph

[Figure: part of the forward search state-space graph. From the start state ⟨cs, ¬rhc, swc, mw, ¬rhm⟩, puc leads to ⟨cs, rhc, swc, mw, ¬rhm⟩, mc to ⟨off, ¬rhc, swc, mw, ¬rhm⟩, and mcc to ⟨mr, ¬rhc, swc, mw, ¬rhm⟩; from ⟨off, rhc, swc, mw, ¬rhm⟩, dc leads to ⟨off, ¬rhc, ¬swc, mw, ¬rhm⟩.]

Actions: mc – move clockwise, mcc – move counterclockwise, nm – no move, puc – pick up coffee, dc – deliver coffee, pum – pick up mail, dm – deliver mail.
Locations: cs – coffee shop, off – office, lab – laboratory, mr – mail room.
Feature values: rhc – robot has coffee, swc – Sam wants coffee, mw – mail waiting, rhm – robot has mail.


Forward planning representation

The search graph can be constructed on demand: you only construct reachable states.
If you want a cycle check or multiple-path pruning, you need to be able to find repeated states.
There are a number of ways to represent states:
► As a map from features into their values
► As a path from the start state
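
A breadth-first forward planner, sketched under the assumption that states and actions use the dict format from the earlier sketches (with applicable and apply as defined there). Representing a state as a frozenset of feature-value pairs makes repeated states hashable, which is what the cycle check / multiple-path pruning above needs.

    from collections import deque

    def forward_plan(initial, goal, actions):
        # actions: dict mapping action names to {"precondition": ..., "effect": ...}
        start = frozenset(initial.items())
        frontier = deque([(initial, [])])
        visited = {start}  # repeated-state detection (multiple-path pruning)
        while frontier:
            state, plan = frontier.popleft()
            if all(state.get(f) == v for f, v in goal.items()):
                return plan  # a path from the initial state to a goal state
            for name, act in actions.items():
                if applicable(state, act):
                    nxt = apply(state, act)
                    key = frozenset(nxt.items())
                    if key not in visited:
                        visited.add(key)
                        frontier.append((nxt, plan + [name]))
        return None  # no plan exists within the reachable states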


Improving Search Efficiency

Forward search can use domain-specific knowledge specified as:
a heuristic function that estimates the cost from a complete state description to a goal.
domain-specific pruning of neighbors (see the sketch below):
► don’t pick up coffee unless Sam wants coffee.
► unless the goal involves time constraints, don’t do a “no move” action.
► don’t go to the coffee shop unless “Sam wants coffee” is part of the goal and Rob doesn’t have coffee.
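
Such pruning rules could be wrapped in a predicate consulted before expanding a neighbor; a hypothetical sketch of the first two bullets (names are illustrative):

    def prune(state, action_name):
        # Return True if the action should be skipped in this state.
        if action_name == "puc" and not state.get("SWC"):
            return True  # don't pick up coffee unless Sam wants coffee
        if action_name == "nm":
            return True  # no time constraints in the goal, so never "no move"
        return False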


Regression Planning

Idea: search backwards from the goal description: nodes correspond to subgoals, and arcs to actions.
A subgoal is an assignment of values to some features.

Search problem:
Nodes are subgoals
There is an arc ⟨g, g′⟩ labeled with action A if
► A achieves one of the assignments in g
► g′ is a proposition that must be true immediately before action A so that g is true immediately after.
The start node is the goal to be achieved.
goal(g) is true if g is a proposition that is true of the initial state.


Defining nodes and arcs

A node g can be represented as a set of assignments of values to variables:

[X1 = v1, . . . , Xn = vn]

This is a set of assignments you want to hold.

The last action achieves one of the Xi = vi, and does not achieve Xj = v′j where v′j is different from vj.
The neighbor of g along arc A must contain:
► The preconditions of action A
► All of the elements of g that were not achieved by A
and it must be consistent: it has at most one value for each feature.


Formalizing arcs using STRIPS notation

⟨g, g′⟩ is an arc labeled with action A, where g is [X1 = v1, . . . , Xn = vn] and A is an action, if:
∃i Xi = vi is on the effects list of action A
∀j Xj = v′j is not on the effects list for A, where v′j ≠ vj
g′ = preconditions(A) ∪ {Xk = vk ∈ g : Xk = vk ∉ effects(A)}, if it is consistent
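
A sketch of this regression step in Python, assuming the dict-based actions from the earlier sketches; a subgoal is a dict of feature-value assignments, and consistency means at most one value per feature. The name regress is illustrative.

    def regress(goal, action):
        # Returns the subgoal g' that must hold before `action` so that
        # `goal` holds after it, or None if `action` is not a suitable
        # (or consistent) last action.
        eff, pre = action["effect"], action["precondition"]
        # A must achieve at least one assignment in g ...
        if not any(eff.get(f) == v for f, v in goal.items()):
            return None
        # ... and must not give any goal feature a different value.
        if any(f in eff and eff[f] != v for f, v in goal.items()):
            return None
        # Elements of g not achieved by A must still hold beforehand.
        survivors = {f: v for f, v in goal.items() if f not in eff}
        # g' = preconditions(A) ∪ survivors, if consistent.
        for f, v in pre.items():
            if f in survivors and survivors[f] != v:
                return None  # inconsistent: two values for one feature
        survivors.update(pre)
        return survivors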


Regression example

[Figure: part of the regression search graph. The start node is the goal [¬swc]; the arc dc leads to [off, rhc]; from [off, rhc], mc and mcc lead to [cs, rhc] and [lab, rhc]; puc from [cs, rhc] leads to [cs], and further move arcs lead to subgoals such as [mr, rhc] and [off, rhc].]

Actions: mc – move clockwise, mcc – move counterclockwise, puc – pick up coffee, dc – deliver coffee, pum – pick up mail, dm – deliver mail.
Locations: cs – coffee shop, off – office, lab – laboratory, mr – mail room.
Feature values: rhc – robot has coffee, swc – Sam wants coffee, mw – mail waiting, rhm – robot has mail.


Loop detection and multiple-path pruning

Goal G1 is simpler than goal G2 if G1 is a subset of G2.
► It is easier to solve [cs] than [cs, rhc].
If you have a path to node N and have already found a path to a simpler goal, you can prune the path to N.
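
With subgoals as dicts, as in the regress sketch, the simpler-than test is a subset check (illustrative):

    def simpler(g1, g2):
        # g1 is simpler than g2 iff every assignment in g1 also appears in g2.
        return all(g2.get(f) == v for f, v in g1.items())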


Improving Efficiency

You can define a heuristic function that estimates how difficult it is to solve a goal from a state.
You can use domain-specific knowledge to remove impossible goals, e.g.
► It is often not obvious from an action description alone whether an agent can hold multiple items at a time.


Comparing forward and regression planners

Which is more efficient depends on:
► The branching factor
► How good the heuristics are
Forward planning is unconstrained by the goal (except as a source of heuristics).
Regression planning is unconstrained by the initial state (except as a source of heuristics).


Planning as a CSP

In forward planning, the nodes considered are constrained to be reachable, even if they don’t lead to a goal.
In regression planning, the nodes considered are constrained to be ones from which we can achieve the goal, even if they are not reachable.
When representing planning as a CSP, we can constrain the states by both the starting state and the goal.
. . . but we can only do this if we know the number of steps.
Search over planning horizons (number of time steps).
For each planning horizon, create a CSP constraining possible actions and features.


CSP Variables

Choose a planning horizon k.
Create a variable for each state feature and each time from 0 to k.
Create a variable for the action for each time in the range 0 to k − 1.


CSP for Delivery Robot for a planning horizon of 2

[Figure: a CSP network with state variables RLoc0–RLoc2, RHC0–RHC2, SWC0–SWC2, MW0–MW2, RHM0–RHM2 (grouped as State 0, State 1, State 2), linked by action variables Action0 and Action1.]

RLoci — Rob’s location
RHCi — Rob has coffee
SWCi — Sam wants coffee
MWi — Mail is waiting
RHMi — Rob has mail
Movei — Rob’s move action
PUCi — Rob picks up coffee
DelCi — Rob delivers coffee
PUMi — Rob picks up mail
DelMi — Rob delivers mail


Constraints

precondition constraints, between state variables at time t and the action variable at time t, specify what actions are available from a state.
effect constraints, between state variables at time t, the action variable at time t, and state variables at time t + 1, constrain the resulting state to be one that satisfies the effects.
frame constraints, among state variables at time t, the action variable at time t, and state variables at time t + 1, for values of variables that do not change.
initial state constraints, which are usually domain constraints on the initial state (at time 0).
goal constraints, which constrain the final state to be a state that satisfies the goals that are to be achieved.
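
To make the constraint types concrete, here is a brute-force sketch, an explicit check over candidate action assignments rather than an efficient constraint solver, assuming the dict-based actions from the earlier sketches; names are illustrative.

    from itertools import product

    def plan_as_csp(initial, goal, actions, k):
        # One action variable per step 0..k-1 and one state per time 0..k.
        for seq in product(list(actions), repeat=k):
            states = [dict(initial)]  # initial state constraint (time 0)
            consistent = True
            for t, name in enumerate(seq):
                act = actions[name]
                # precondition constraint: state at t must allow the action at t.
                if not all(states[t].get(f) == v
                           for f, v in act["precondition"].items()):
                    consistent = False
                    break
                # effect constraint: changed features take their new values;
                # frame constraint: every other feature keeps its time-t value.
                nxt = dict(states[t])
                nxt.update(act["effect"])
                states.append(nxt)
            # goal constraint: the final state must satisfy the goal.
            if consistent and all(states[-1].get(f) == v
                                  for f, v in goal.items()):
                return list(seq)
        return None

    def plan_over_horizons(initial, goal, actions, max_k=10):
        # Search over planning horizons: try k = 0, 1, 2, ...
        for k in range(max_k + 1):
            plan = plan_as_csp(initial, goal, actions, k)
            if plan is not None:
                return plan
        return None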


Example Constraints

precondition constraint: RLoci = cs if Actioni = puc
is violated when RLoci ≠ cs ∧ Actioni = puc
effect constraint: RHCi+1 = true if Actioni = puc
is violated when RHCi+1 = false ∧ Actioni = puc
frame constraint: Rob has mail at any time if it had mail before and the action wasn’t to pick up mail or deliver mail:

RHMi+1 = RHMi if Actioni ∉ {pum, dm}

is violated when RHMi+1 ≠ RHMi ∧ Actioni ≠ pum ∧ Actioni ≠ dm
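
The violation conditions above can be read directly as Boolean tests; a hypothetical sketch, one predicate per constraint, returning True exactly when the constraint is violated:

    def precondition_violated(rloc_i, action_i):
        return action_i == "puc" and rloc_i != "cs"

    def effect_violated(rhc_next, action_i):
        return action_i == "puc" and rhc_next is False

    def frame_violated(rhm_i, rhm_next, action_i):
        return rhm_next != rhm_i and action_i not in ("pum", "dm")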
