Artificial Intelligence
Intelligent agents are supposed to maximize their performance measure
Solving Problems by Searching
Chapter 3
Simplified if the agent can adopt a goal

We are interested in how we can design goal-based agents to solve problems

There are three major questions to consider:
– What goal does the agent need to achieve?
– What knowledge does the agent need?
– What actions does the agent need to do?

What goal does the agent need to achieve?
How do you describe the goal?
– A situation to be reached
– The answer to a question
– A set of properties to be acquired

How do you know when the goal is reached?
– With a goal test that defines what it means to have achieved/satisfied the goal
Formalizing a Search

Initial state: designated start state
Successor states: collection of states generated by applying actions to a particular state
Goal state: state which satisfies the objective of our search task

We want our agent to search for a sequence of actions that lead to a solution
A path through the state space from the initial state to a goal state is a solution

Open list: list of states which are waiting to be considered in the search
Closed list: list of states which have already been considered in the search
Solution: path from the initial state to a goal

Before an agent can start searching for solutions:
– Formulate a goal
– Use the goal to formulate a problem
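The open-list/closed-list bookkeeping above can be sketched as a generic search loop. This is a minimal sketch, not a specific algorithm from the slides; the `successors` and `is_goal` callables are hypothetical placeholders that a concrete problem would supply:

```python
def graph_search(initial_state, successors, is_goal):
    """Generic search: `successors` maps a state to its successor states,
    `is_goal` is the goal test. Returns a path (list of states) or None."""
    open_list = [[initial_state]]   # paths whose end states await consideration
    closed_list = set()             # states already considered
    while open_list:
        path = open_list.pop(0)     # take the first waiting path
        state = path[-1]
        if state in closed_list:
            continue
        if is_goal(state):
            return path             # solution: path from initial state to goal
        closed_list.add(state)
        for s in successors(state):
            open_list.append(path + [s])
    return None                     # open list exhausted: failure
```

The order in which paths are taken from the open list is exactly what distinguishes the strategies discussed later (FIFO gives breadth-first, LIFO gives depth-first, cost-ordered gives uniform-cost).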
Single State Problem Formulation

Example:
1. Initial state, e.g. "at Arad"
2. Successor function S(x) = set of action-state pairs
   • e.g. S(Arad) = {<Arad -> Zerind, Zerind>, ...}
3. Goal test, can be
   • Explicit, e.g. x = "Bucharest"
   • Implicit, e.g. NoDirt(x)
4. Path cost (additive)
   • E.g., sum of distances, number of actions executed, etc.
   • c(x,a,y) is the step cost, assumed to be >= 0

A solution is a sequence of actions leading from the initial state to a goal state

Abstracting State Space

Real world is absurdly complex
– State space must be abstracted for problem solving

(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
– e.g., "Arad -> Zerind" represents a complex set of possible routes, detours, rest stops, etc.
– For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"

(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem!
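The four components of a formulation can be written down concretely. This sketch uses a small fragment of the Romania touring example; the road distances are illustrative values, not taken from the slides:

```python
# Road map fragment for the Romania example (distances are illustrative).
ROADS = {
    ('Arad', 'Zerind'): 75,
    ('Arad', 'Sibiu'): 140,
    ('Sibiu', 'Fagaras'): 99,
    ('Fagaras', 'Bucharest'): 211,
}

initial_state = 'Arad'                      # 1. initial state

def successors(x):
    """2. Successor function S(x) = set of (action, resulting state) pairs."""
    result = []
    for (a, b), cost in ROADS.items():      # roads are two-way
        if a == x:
            result.append((f'{a} -> {b}', b))
        if b == x:
            result.append((f'{b} -> {a}', a))
    return result

def goal_test(x):
    """3. Explicit goal test."""
    return x == 'Bucharest'

def step_cost(x, action, y):
    """4. Step cost c(x, a, y) >= 0; path cost is the additive sum."""
    return ROADS.get((x, y), ROADS.get((y, x)))
```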
How can we "search"?
– No knowledge: random search
– Knowledge as a map or model: more intelligent methods

Search methods are either used on their own or as part of other problem solving methods
Search methods either operate on complete solutions or on partial solutions
Some search methods are biologically inspired
Example Problem Types

Search Algorithm
Search Strategy

Complexity

Why worry about complexity of algorithms?
Because a problem may be solvable in principle but may take too long to solve in practice

Complexity Example: Traveling Salesman Problem

There are n cities, with a road of length Lij joining city i to city j. The salesman wishes to find a way to visit all cities that is optimal in two ways: each city is visited only once, and the total route is as short as possible.
Complexity

Polynomial-time (P) problems: we can find algorithms that will solve them in a time (= number of operations) that grows polynomially with the size of the input
– for example: sort n numbers into increasing order: poor algorithms have n^2 complexity, better ones have n log(n) complexity

Are there algorithms that require more than polynomial time?
Yes (until proof of the contrary); for some problems, we do not know of any polynomial-time algorithm to solve them. These are referred to as nondeterministic-polynomial-time (NP) problems.
– for example: traveling salesman problem

P: can be solved in polynomial time
P-complete: hardest problems in P
NP: nondeterministic-polynomial algorithms
NP-complete: hardest NP problems; if one of them can be proven to be in P, then NP = P
PH: polynomial-time hierarchy

Branching factor b: the number of new states generated when expanding a state. A complete tree of depth d then contains

1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)

nodes.
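The geometric-series count above can be checked numerically; this short sketch sums the levels directly and compares against the closed form:

```python
def nodes_in_tree(b, d):
    """Number of nodes in a complete tree with branching factor b and depth d:
    1 + b + b^2 + ... + b^d, level by level."""
    return sum(b ** i for i in range(d + 1))

# Agrees with the closed form (b^(d+1) - 1) / (b - 1), and is exponential in d:
assert nodes_in_tree(10, 3) == (10 ** 4 - 1) // (10 - 1)  # 1111 nodes
```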
Breadth-First Search (BFS)

1. Set N to be a list of initial nodes.
2. If N is empty, then exit and signal failure.
3. Set n to be the first node in N, and remove n from N.
4. If n is a goal node, then exit and signal success.
5. Otherwise, add the children of n to the end of N and return to step 2. (FIFO)

Breadth-First Snapshot 1 1)
[Figure: search tree after the first expansions; legend: Initial, Visited, Fringe, Current, Visible, Goal]
Properties of Breadth-First Search

Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step); not optimal in general

The space complexity makes it impractical in most cases

Breadth-First Snapshot 24 1)
[Figure: search tree with nodes 1-31; the goal test is positive for node 24, and a solution is found in 24 steps. Fringe: [25, 26, 27, 28, 29, 30, 31]]
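The five numbered steps of BFS translate directly into Python. A minimal sketch; `successors` and `is_goal` are hypothetical callables supplied by a concrete problem:

```python
from collections import deque

def bfs(initial, successors, is_goal):
    """Breadth-first search following the five steps above:
    N is a FIFO queue; children are appended at the end."""
    N = deque([(initial, [initial])])     # step 1: (node, path to node)
    visited = {initial}
    while N:                              # step 2: empty N means failure
        n, path = N.popleft()             # step 3: take the first node
        if is_goal(n):                    # step 4: goal found
            return path
        for child in successors(n):       # step 5: children to the end (FIFO)
            if child not in visited:
                visited.add(child)
                N.append((child, path + [child]))
    return None
```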
Uniform-Cost Search

Breadth-first search finds the shallowest goal state, but this may not always be the least-cost goal state.
Uniform-cost search modifies the breadth-first strategy by always expanding the lowest-cost leaf.
Uniform-cost search is complete and optimal iff g never decreases along any path (e.g.: g(n) is the accumulated distance from the start node to node n).
Implemented by sorting N by increasing g.

Uniform-cost search is a strategy we use if there are costs associated with actions (edges), and we want the least expensive solution to a goal.
Path cost to get to node n is given by g(n).
We use a priority queue for the open list, where states are ranked by the total cost of the path from the initial state.

Suppose we want to travel by train to the Armadillo Convention in El Paso, and we want to find the least expensive series of tickets from Madison. Cities are our states, and tickets are actions.
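The priority-queue open list ranked by g(n) can be sketched with Python's `heapq`. A minimal sketch; here `successors` is a hypothetical callable yielding (next_state, step_cost) pairs:

```python
import heapq

def uniform_cost_search(initial, successors, is_goal):
    """Uniform-cost search: the open list is a priority queue ranked by
    g(n), the accumulated path cost from the initial state to n."""
    frontier = [(0, initial, [initial])]          # (g, state, path)
    best_g = {initial: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)  # lowest-cost leaf first
        if is_goal(state):
            return g, path
        for nxt, step in successors(state):       # step costs assumed >= 0
            new_g = g + step
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None
```

Because g never decreases along a path (step costs are non-negative), the first time a goal state is popped its path is the cheapest one.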
Depth-First Search (DFS)

1. Set N to be a list of initial nodes.
2. If N is empty, then exit and signal failure.
3. Set n to be the first node in N, and remove n from N.
4. If n is a goal node, then exit and signal success.
5. Otherwise, add the children of n to the front of N and return to step 2. (LIFO)

Depth-first search is space efficient: d(b - 1) + 1. If the tree is of the same depth as the goal, in the worst case it examines every node. The total time is

1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)
Depth-First Snapshot 1)
[Figure: search tree with nodes 1-7; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Properties of Depth-First Search

Complete? No: fails in infinite-depth spaces and spaces with loops
– Modify to avoid repeated states along the path ⇒ complete in finite spaces
Depth-limited search (DLS)
– If memory space is a major concern, one can conduct a simple DFS with a fixed depth limit ℓ
– Depth-first search down to some cut-off value for depth
– Is complete if the cut-off is big enough
– Is not optimal

Iterative deepening search (IDS)
– Conduct a depth-limited search at increasing depth limits until a solution is found
– In general, iterative deepening is preferred to depth-first or breadth-first when the search space is large and the depth of the solution is not known
– Iterative deepening search is complete, optimal for unit step costs, and has time complexity of O(b^d) and space complexity of O(bd)

Repeated states
– Unavoidable for many problems

When the environment is partially observable, the agent can apply search algorithms in the space of belief states, or sets of possible states that the agent might be in
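DLS and IDS compose naturally: a depth-limited DFS plus a loop over increasing limits. A minimal sketch with the same hypothetical `successors`/`is_goal` interface as before:

```python
def depth_limited_search(node, successors, is_goal, limit, path=None):
    """DFS down to a fixed cut-off `limit`; returns a path or None."""
    path = (path or []) + [node]
    if is_goal(node):
        return path
    if limit == 0:
        return None                       # cut-off reached
    for child in successors(node):
        result = depth_limited_search(child, successors, is_goal,
                                      limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening_search(initial, successors, is_goal, max_depth=50):
    """Run depth-limited search at increasing limits: finds the shallowest
    goal (optimal for unit step costs) using only O(bd) space."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(initial, successors, is_goal, limit)
        if result is not None:
            return result
    return None
```

Re-expanding the shallow levels at every iteration looks wasteful, but since the deepest level dominates the b^d node count, the total work stays O(b^d).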
References

1) Franz J. Kurfess, Associate Professor, Computer Science Department, California Polytechnic State University, USA
2) Handbook of Brain Theory & Neural Networks (Michael A. Arbib, ed.; MIT Press, 1995)
3) Roger Eriksson, Department of Computing Science, College Skövde, Sweden

Summary

Problem formulation usually requires abstracting away real-world details to define a state space that can be explored using computer algorithms.
Once a problem is formulated in a more concrete form, complexity analysis helps us pick the best algorithm to solve it.
There is a variety of uninformed search strategies; the difference lies in the method used to pick the node that will be further expanded.
Summary

Uninformed search strategies (blind search):
– Breadth first
– Uniform cost
– Depth first
– Depth limited
– Iterative deepening

Iterative deepening search uses only linear space and not much more time than other uninformed search strategies.

If the environment is partially observable, search algorithms operate in the space of belief states, or sets of possible states that the agent might be in.
Summary

Next we'll discuss informed or heuristic search strategies that try to speed things up by using domain knowledge to guide the search.

Next: Informed Search and Exploration!