
Artificial Intelligence
Solving Problems by Searching
Chapter 3

Problem-Solving Agents

Intelligent agents are supposed to maximize their performance measure
This is simplified if the agent can adopt a goal
The decision problem is complex – many tradeoffs
A goal-based agent might be a problem-solving agent

Problem-Solving Agents

We are interested in how we can design goal-based agents to solve problems

There are three major questions to consider:
– What goal does the agent need to achieve?
– What knowledge does the agent need?
– What actions does the agent need to do?

Problem-Solving Agents

What goal does the agent need to achieve?

How do you describe the goal?
– A situation to be reached
– The answer to a question
– A set of properties to be acquired

How do you know when the goal is reached?
– With a goal test that defines what it means to have achieved/satisfied the goal

Problem-Solving Agents

What knowledge does the agent need?

The information needs to be:
– Sufficient to describe everything relevant to reaching the goal
– Adequate to describe the world state/situation

Use a closed world assumption:
– All necessary information about a problem domain is observable in each percept, so that each state is a complete description of the world
• i.e. there is never any hidden information

Problem-Solving Agents

What actions does the agent need to do?

Given:
– A set of available actions
– A description of the current state of the world

Determine:
– Which actions can be applied (those that are applicable/legal)
– What the exact state of the world will be (or is likely to be) after an action is performed in the current state
– Which action is likely to lead to the goal

Formalizing a Search

Initial state: designated start state

Successor states: collection of states generated by applying actions to a particular state

Goal state: state which satisfies the objective of our search task

Goal test: way of deciding whether a state is a goal state

Open list: list of states which are waiting to be considered in the search

Closed list: list of states which have already been considered in the search

Solution: path from the initial state to a goal

Feasible and Infeasible Solutions 3)

[Figure: the search space S divided into a feasible part F and an infeasible part U]

Typical tasks:
– Finding a solution
– Finding a feasible solution
– Finding an optimal solution
– Finding an optimal feasible solution
– Finding a near optimal solution
– Finding a near optimal feasible solution

Performance:
– Minimum time to reach an adequate solution
– Maximum quality in specified time

Solving Problems by Searching

We want our agent to search for a sequence of actions that lead to a solution

To do this we formalize the problem as a search task, considering our three questions:
– What goal does the agent need to achieve?
– What knowledge does the agent need?
– What actions does the agent need to do?

Problem-Solving Agents

Before an agent can start searching for solutions:
– Formulate a goal
– Use the goal to formulate a problem

A problem consists of four parts:
– the initial state
– a set of actions
– a goal test function
– a path cost function

The environment of the problem is represented by a state space

A path through the state space from the initial state to a goal state is a solution
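The four-part problem definition can be sketched as a small data structure. This is an illustrative sketch, not code from the lecture; all names (`Problem`, `successors`, `step_cost`) are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class Problem:
    """The four parts of a problem: initial state, actions (as a
    successor function), goal test, and path cost (as a step cost)."""
    initial: object
    successors: Callable[[object], Iterable[Tuple[object, object]]]  # state -> (action, next_state) pairs
    goal_test: Callable[[object], bool]
    step_cost: Callable[[object, object, object], float] = lambda s, a, t: 1.0  # c(x, a, y)

    def path_cost(self, path):
        """Sum step costs along a path of alternating states and actions:
        [s0, a1, s1, a2, s2, ...]."""
        total = 0.0
        for i in range(0, len(path) - 2, 2):
            total += self.step_cost(path[i], path[i + 1], path[i + 2])
        return total
```

With the default unit step cost, the path cost is simply the number of actions taken.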

Single-State Problem Formulation

Example:
1. Initial state, e.g. "at Arad"
2. Successor function S(x) = set of action–state pairs
• e.g. S(Arad) = {<Arad -> Zerind, Zerind>, ...}
3. Goal test, which can be
• Explicit, e.g. x = "Bucharest"
• Implicit, e.g. NoDirt(x)
4. Path cost (additive)
• e.g. sum of distances, number of actions executed, etc.
• c(x,a,y) is the step cost, assumed to be >= 0

A solution is a sequence of actions leading from the initial state to a goal state

Abstracting the State Space

The real world is absurdly complex
– The state space must be abstracted for problem solving

(Abstract) state = set of real states

(Abstract) action = complex combination of real actions
– e.g. "Arad -> Zerind" represents a complex set of possible routes, detours, rest stops, etc.
– For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"

(Abstract) solution = set of real paths that are solutions in the real world

Each abstract action should be "easier" than the original problem!
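The Romania formulation above can be made concrete. The sketch below uses a small fragment of the AIMA road map (distances in km, taken as illustrative figures); the function names are assumptions, not the lecture's own code.

```python
# A fragment of the Romania road map (distances in km, as in the
# AIMA route-finding example; only a few cities included).
ROADS = {
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu Vilcea'): 80,
    ('Fagaras', 'Bucharest'): 211,
    ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101,
}

def successors(state):
    """Successor function S(x): the set of <action, next-state> pairs from x."""
    result = []
    for (a, b), dist in ROADS.items():
        if a == state:
            result.append((f'{a} -> {b}', b))
        elif b == state:                      # roads are two-way
            result.append((f'{b} -> {a}', a))
    return result

def goal_test(state):
    """Explicit goal test: x = 'Bucharest'."""
    return state == 'Bucharest'

def step_cost(x, a, y):
    """c(x, a, y) >= 0: the road distance between adjacent cities."""
    return ROADS.get((x, y), ROADS.get((y, x)))
```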

Searching

How can we "search"?

No knowledge
– Random search

Knowledge as a map or model
– More intelligent methods

Search Methods

Search methods are either used on their own or as part of other problem-solving methods

Search methods either operate on complete solutions or on partial solutions

Some search methods are biologically inspired

Search methods are a central topic in AI, classical computer science, applied mathematics, etc.

Problem Types

Single-state problem: deterministic, accessible
– The agent knows everything about the world, and can thus calculate the optimal action sequence to reach the goal state
– The state is always known with certainty

Multiple-state problem: deterministic, inaccessible
– The agent must reason about the sequences of actions and states assumed while working towards the goal state
– Not full knowledge of which state the agent is in

Contingency problem: nondeterministic, inaccessible
– Must use sensors during execution
– The solution can be a tree
– Not full knowledge of the new state caused by an action

Exploration problem: unknown state space
– Discover and learn about the environment while taking actions
– The agent must learn the effect of actions and what sort of states exist

Example Problem Types

Examples:
– Route Finding
– Traveling Salesperson
– VLSI Layout
– Robot Navigation
– Assembly Sequencing

Search Algorithm

A general search algorithm:

Task: find a sequence of actions leading from the initial state to a goal state

1. Initialize the search tree with the initial state.
2. Report failure if the search tree is empty.
3. Move to a leaf node according to a strategy.
4. Done if it is a goal state.
5. Expand the current state by generating successors to the current state. Add them to the search tree as leaves.
6. Repeat from 2.
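The six steps above can be sketched as a single skeleton in which only the strategy varies. This is an illustrative sketch (names and the path-as-node representation are assumptions), with no repeated-state checking.

```python
def general_search(initial, successors, goal_test, strategy):
    """General search skeleton. `strategy` picks (and removes) the next
    leaf from the frontier; a node is a path [s0, s1, ..., sn]."""
    frontier = [[initial]]                    # 1. initialize with the initial state
    while True:
        if not frontier:                      # 2. frontier empty -> failure
            return None
        path = strategy(frontier)             # 3. pick a leaf according to the strategy
        if goal_test(path[-1]):               # 4. done if it is a goal state
            return path
        for _action, s in successors(path[-1]):   # 5. expand, add successors as leaves
            frontier.append(path + [s])
                                              # 6. loop repeats from step 2

# Example strategies: FIFO gives breadth-first, LIFO gives depth-first.
fifo = lambda frontier: frontier.pop(0)
lifo = lambda frontier: frontier.pop()
```

The only difference between the classical uninformed methods is this strategy function, which is the point made in the Search Strategy slides below.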

Example: Romania

[Figure: the Romania road map used as the route-finding example]

General Search Example

[Figures: successive expansions of the search tree for the Romania example]

4
Search Strategy

A strategy is defined by picking the order of node expansion

Strategies are evaluated along the following dimensions:
– Completeness – does it always find a solution if one exists?
– Time complexity – number of nodes generated/expanded
– Space complexity – maximum number of nodes in memory
– Optimality – does it always find a least-cost solution?

Time and space complexity are measured in terms of:
– b – maximum branching factor of the search tree
– d – depth of the least-cost solution
– m – maximum depth of the state space (may be infinite)

Search Strategy

Note!!!

An optimal algorithm is only guaranteed to find the cheapest solution. The time and space requirements may still be a disadvantage.

Complexity

Why worry about the complexity of algorithms?

Because a problem may be solvable in principle but may take too long to solve in practice

How can we evaluate the complexity of algorithms?

Through asymptotic analysis, i.e., estimating the time (or number of operations) necessary to solve an instance of size n of a problem when n tends towards infinity

See AIMA2e, Appendix A.

Complexity Example: Traveling Salesman Problem

There are n cities, with a road of length Lij joining city i to city j. The salesman wishes to find a way to visit all cities that is optimal in two ways: each city is visited only once, and the total route is as short as possible.

This is a hard problem: the only known algorithms (so far) to solve it have exponential complexity, that is, the number of operations required to solve it grows as exp(n) for n cities.

Why is Exponential Complexity "Hard"?

It means that the number of operations necessary to compute the exact solution of the problem grows exponentially with the size of the problem (here, the number of cities)

In general, exponential-complexity problems cannot be solved for any but the smallest instances!

So…

• exp(1) = 2.72
• exp(10) = 2.20 x 10^4 (daily salesman trip)
• exp(100) = 2.69 x 10^43 (monthly salesman planning)
• exp(500) = 1.40 x 10^217 (music band worldwide tour)
• exp(250,000) = 10^108,573 (FedEx, postal services)
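These figures can be reproduced with logarithms, since exp(500) already overflows ordinary floating point. A small sketch (the function name is an assumption):

```python
import math

def log10_of_exp(n):
    """Number of decimal digits of exp(n), via log10(exp(n)) = n / ln(10)."""
    return n / math.log(10)

# exp(500) is about 10^217, and exp(250,000) about 10^108,573 -- while a
# polynomial algorithm at n = 500 needs only ~500**3 = 1.25e8 operations.
```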

5
Complexity

Polynomial-time (P) problems: we can find algorithms that will solve them in a time (= number of operations) that grows polynomially with the size of the input
– for example, sorting n numbers into increasing order: poor algorithms have n^2 complexity, better ones have n log(n) complexity

Are there problems that require more than polynomial time?

Yes (until proof of the contrary); for some problems, we do not know of any polynomial-time algorithm to solve them. These are referred to as nondeterministic-polynomial-time (NP) problems.
– for example, the traveling salesman problem

In particular, such problems are believed to require exponential-time algorithms

Polynomial-Time Hierarchy 2)

[Figure: nested complexity classes P, NP, and PH, with the P-complete and NP-complete problems marked]

P: can be solved in polynomial time
P-complete: hardest problems in P
NP: nondeterministic-polynomial algorithms
NP-complete: hardest NP problems; if one of them can be proven to be in P, then NP = P
PH: polynomial-time hierarchy

Solving Problems by Searching

Branching factor b: the number of new states generated when expanding a state.

The maximum number of states in a search tree of depth d is:

1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)
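The closed form above can be checked against the explicit sum. A minimal sketch (the function name is an assumption):

```python
def max_tree_nodes(b, d):
    """Maximum number of states in a search tree of depth d with
    branching factor b: 1 + b + ... + b^d = (b^(d+1) - 1) / (b - 1),
    for b > 1 (integer division is exact here)."""
    return (b ** (d + 1) - 1) // (b - 1)
```

For b = 10 and d = 5 this already gives 111,111 nodes, showing how quickly the tree grows with depth.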

Search Strategy

Expanding the nodes in the search tree:

The leaf nodes are normally collected in a queue for reasons of efficiency

The way new nodes are added to the queue distinguishes the different search methods

Uninformed Search

Uninformed search strategies (blind search) have no information about the distance or cost to the goal state. They can only distinguish a goal state from a non-goal state.

– Breadth first
– Uniform cost
– Depth first
– Depth limited
– Iterative deepening

Breadth-First Search (BFS)

1. Set N to be a list of initial nodes.
2. If N is empty, then exit and signal failure.
3. Set n to be the first node in N, and remove n from N.
4. If n is a goal node, then exit and signal success.
5. Otherwise, add the children of n to the end of N and return to step 2. (FIFO)

Complete, i.e. finds a goal node if such a node exists, even if the tree has infinite depth.

Uses space proportional to b^d, which may be a lot!

Breadth-First Snapshot 1 1)

[Figure: search tree with root 1 and children 2, 3; legend: Initial, Visited, Fringe, Current, Visible, Goal. Fringe: [] + [2,3]]
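The five numbered BFS steps can be sketched directly in Python. An illustrative sketch, not the lecture's own code; it keeps whole paths as nodes and has no closed list, so it may re-visit states reachable along several paths.

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """BFS: N is a FIFO queue of paths; children go to the END of N."""
    N = deque([[initial]])                    # 1. list of initial nodes
    while N:                                  # 2. empty -> failure (None)
        path = N.popleft()                    # 3. first node n, removed from N
        if is_goal(path[-1]):                 # 4. goal -> success
            return path
        for child in successors(path[-1]):    # 5. children to the end (FIFO)
            N.append(path + [child])
    return None
```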

Breadth-First Snapshots 2 and 3 1)

[Figures: the fringe grows level by level. Fringe: [3] + [4,5], then [4,5] + [6,7]]

Breadth-First Snapshot 24 1)

[Figure: search tree with nodes 1–31. The goal test is positive for the current node, and a solution is found in 24 steps. Fringe: [25,26,27,28,29,30,31]]

Properties of Breadth-First Search

Complete? Yes (if b is finite)

Time? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d

Space? O(b^(d+1)) (keeps every node in memory)

Optimal? Yes (if cost = 1 per step); not optimal in general

The space complexity makes it impractical in most cases

Uniform-Cost Search

Breadth-first search finds the shallowest goal state, but this may not always be the least-cost goal state.

The path cost to get to node n is given by g(n).

Uniform-cost search modifies the breadth-first strategy by always expanding the lowest-cost leaf.

It is complete and optimal if the cost of each step exceeds some positive bound ε.

Uniform-Cost Search

Uniform-cost search is a strategy we use if there are costs associated with actions (edges), and we want the least expensive solution to a goal.

We use a priority queue for the open list, where states are ranked by the total cost of the path from the initial state.

Uniform-cost search is complete and optimal iff g never decreases along any path (e.g. g(n) is the accumulated distance from the start node to node n).

Implemented by sorting N by increasing g.

Examples with Costs

Suppose we want to travel by train to the Armadillo Convention in El Paso, and we want to find the least expensive series of tickets from Madison.

Cities are our states, and tickets are actions.

Depth-First Search (DFS)

1. Set N to be a list of initial nodes.
2. If N is empty, then exit and signal failure.
3. Set n to be the first node in N, and remove n from N.
4. If n is a goal node, then exit and signal success.
5. Otherwise, add the children of n to the front of N and return to step 2. (LIFO)

Depth-First Search

Depth-first search is space efficient: d(b-1)+1 nodes stored

May not terminate.

If the tree is of the same depth as the goal, in the worst case it examines every node. The total time is

1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)
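The only change from BFS is step 5: children go to the FRONT of N. A sketch (names are assumptions; the optional `limit` parameter is an addition that previews depth-limited search):

```python
def depth_first_search(initial, successors, is_goal, limit=None):
    """DFS: children are added to the FRONT of N (LIFO). An optional
    depth limit turns this into depth-limited search; without it the
    search may not terminate on infinite trees."""
    N = [[initial]]                           # 1. list of initial nodes
    while N:                                  # 2. empty -> failure
        path = N.pop(0)                       # 3. take the first node
        if is_goal(path[-1]):                 # 4. goal -> success
            return path
        if limit is None or len(path) - 1 < limit:
            children = [path + [c] for c in successors(path[-1])]
            N[:0] = children                  # 5. children to the FRONT (LIFO)
    return None
```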

Depth-First Snapshot 1)

[Figure: search tree with nodes 1–31; the search has gone deep down one branch. Fringe: [3] + [22,23]]

Properties of Depth-First Search

Complete? No: fails in infinite-depth spaces and spaces with loops
– Modify to avoid repeated states along the path ⇒ complete in finite spaces

Time? O(b^m): terrible if m is much larger than d, but if solutions are dense, it may be much faster than breadth-first

Space? O(bm), i.e., linear space!

Optimal? No

Other Search Strategies

Depth-limited search (DLS)
– If memory space is a major concern, one can conduct a simple DFS with a fixed depth limit ℓ

Iterative deepening search (IDS)
– Conduct a depth-limited search at increasing depth limits until a solution is found
– In general, iterative deepening is preferred to depth-first or breadth-first when the search space is large and the depth of the solution is not known

Bi-directional search (BDS)
– If we want to find a particular goal node, we can search from both ends of the search space
– Conducts a BFS from both the start and goal states until they meet somewhere in the middle

Properties of Other Search Strategies

Depth-limited search is depth-first search down to some cut-off value for depth
– It is complete if the cut-off is big enough
– It is not optimal

Iterative deepening search is complete, optimal for unit step costs, and has time complexity of O(b^d) and space complexity of O(bd)

Bidirectional search can enormously reduce time complexity, but it is not always applicable and may require too much space
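Iterative deepening is simply depth-limited search run at limits 0, 1, 2, … until a solution appears. A sketch under assumed names (recursive form, so the space used is just the current path):

```python
def depth_limited(path, successors, is_goal, limit):
    """Recursive depth-limited search continuing from the end of `path`."""
    if is_goal(path[-1]):
        return path
    if limit == 0:
        return None
    for child in successors(path[-1]):
        if child not in path:                 # avoid cycles along the current path
            result = depth_limited(path + [child], successors, is_goal, limit - 1)
            if result is not None:
                return result
    return None

def iterative_deepening(initial, successors, is_goal, max_depth=50):
    """IDS: repeat depth-limited search at increasing limits. Linear
    space like DFS, complete like BFS (for finite branching factor)."""
    for limit in range(max_depth + 1):
        result = depth_limited([initial], successors, is_goal, limit)
        if result is not None:
            return result
    return None
```

Because each pass restarts from the root, the shallowest solution is found first, which is why IDS is optimal for unit step costs.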

Repeated States

Repeated states are unavoidable for many problems

Three ways to deal with the problem:
1. Avoid returning to the state you just came from
2. Check if new nodes contain a state already in the path
3. Check if new nodes have been generated before (in any path)

Belief States and C-Plans

When the environment is partially observable, the agent can apply search algorithms in the space of belief states, or sets of possible states that the agent might be in

In some cases, a single solution sequence can be constructed; in other cases the agent needs a contingency plan to handle unknown circumstances that may arise
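The third repeated-state check (remember every state generated in any path, i.e. a closed set) can be sketched on top of BFS. An illustrative sketch with assumed names; on a cyclic graph it terminates where the naive version would loop.

```python
from collections import deque

def graph_search_bfs(initial, successors, is_goal):
    """BFS with a closed set: each state is generated at most once,
    so repeated states never re-enter the frontier."""
    frontier = deque([[initial]])
    seen = {initial}                          # states generated so far, in any path
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for child in successors(path[-1]):
            if child not in seen:             # skip states generated before
                seen.add(child)
                frontier.append(path + [child])
    return None
```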

References

1) Franz J. Kurfess, Associate Professor, Computer Science Department, California Polytechnic State University, USA
2) Handbook of Brain Theory & Neural Networks (Michael A. Arbib, ed.; MIT Press, 1995)
3) Roger Eriksson, Department of Computing Science, University of Skövde, Sweden

Summary

Problem formulation usually requires abstracting away real-world details to define a state space that can be explored using computer algorithms

Once the problem is formulated in a more concrete form, complexity analysis helps us pick the best algorithm to solve the problem

There is a variety of uninformed search strategies; the difference lies in the method used to pick the node that will be expanded next

Summary

Uninformed search strategies (blind search):
– Breadth first
– Uniform cost
– Depth first
– Depth limited
– Iterative deepening

Iterative deepening search only uses linear space and not much more time than other uninformed search strategies

If the environment is partially observable, search in the space of belief states, or sets of possible states that the agent might be in; or the agent needs a contingency plan to handle unknown circumstances that may arise

Summary

All the strategies discussed so far are called uninformed search strategies because there is no information provided other than the problem definition

Next we'll discuss informed or heuristic search strategies that try to speed things up by using domain knowledge to guide the search

Next!

Informed Search and Exploration!!
