
Lec04: Search & Uninformed Search
Dr Humera

 We have some actions that can change the state of the world
◦ Change resulting from an action is perfectly predictable
 Try to come up with a sequence of actions that will lead us to a goal state
◦ May want to minimize number of actions
◦ More generally, may want to minimize total cost of actions
 Do not need to execute actions in real life while searching for a solution!
◦ Everything perfectly predictable anyway
One of the most basic techniques in AI
• Underlying sub-module in most AI systems
• Search only shows how to solve the problem once we have it correctly formulated
 Suppose an agent can execute several actions immediately in a given state
 It doesn’t know the utility of these actions. Then, for each action, it can execute a sequence of actions until it reaches the goal
 The immediate action which has the best sequence (according to the performance measure) is then the solution
 Finding this sequence of actions is called search, and the agent which does this is called the problem-solver.
 NB: It’s possible that some sequence might fail, e.g., getting stuck in an infinite loop, or being unable to find the goal at all.

[Figure: example state-space graph. Start state A, goal state F; edges and costs: A–B (3), B–C (2), C–A (2), A–D (3), D–E (4), E–F (4), D–F (9).]

[Figure: the corresponding search tree. The root is state A with cost 0; it expands to B (cost 3) and D (cost 3). B expands to C (cost 5); D expands to F (cost 12, a goal state) and E (cost 7). C expands back to A (cost 7), E expands to F (cost 11, a goal state), and the repeated A expands to B and D again (cost 10 each), and so on. Note: search tree nodes and states are not the same; the same state can appear in several tree nodes, reached along different paths with different costs.]
 You can begin to visualize the concept of a graph
 Searching along different paths of the graph until you reach the solution
 The nodes can be considered analogous to the states
 The whole graph can be the state space
 The links can be considered analogous to the actions, as sketched below
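
As a concrete (and purely illustrative) sketch, the example graph above could be written down in Python as a weighted adjacency dictionary; the state names and edge costs below are simply read off the figure.

```python
# The example state-space graph as an adjacency dictionary:
# each state maps to its successors and the cost of the connecting action.
# Values are read off the example figure above (illustrative only).
GRAPH = {
    "A": {"B": 3, "D": 3},
    "B": {"C": 2},
    "C": {"A": 2},
    "D": {"E": 4, "F": 9},
    "E": {"F": 4},
    "F": {},   # goal state
}

START_STATE, GOAL_STATE = "A", "F"
```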

 Set of states that we can be in
◦ Including an initial state…
◦ … and goal states (equivalently, a goal test)
 For every state, a set of actions that we can take
◦ Each action results in a new state
◦ Typically defined by successor function
 Given a state, produces all states that can be reached from it
 Cost function that determines the cost of each action (or path = sequence of actions)
 Solution: path from initial state to a goal state
◦ Optimal solution: solution with minimal cost
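
A minimal Python sketch of how the ingredients above could be grouped into a single problem object; the class and method names are my own, not from the slides.

```python
class SearchProblem:
    """A search problem: states, actions, successors, goal test, costs."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        """Actions that can be taken in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Successor state reached by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """True if `state` is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one action; a path cost is the sum of its step costs."""
        return 1   # default: every action costs 1
```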
 On holiday in Romania; currently in Arad.
 Flight leaves tomorrow from Bucharest
 Formulate goal: Be in Bucharest
 Formulate problem:
◦ States: various cities
◦ Actions: drive between cities

 Find solution:
◦ Sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

 Static: The configuration of the graph (the city map) is unlikely to change during search

 Observable: The agent knows the state (node) completely, e.g., which city I am in currently

 Discrete: There is a discrete number of cities and routes between them

 Deterministic: Transiting from one city (node) on one route can lead to only one possible city

 Single-Agent: We assume only one agent searches at one time, but multiple agents can also be used.
 A problem is defined by five items:
1. An initial state, e.g., "In Arad"
2. Possible actions available: ACTIONS(s) returns the set of actions that can be executed in s.
3. A successor function S(x) = the set of all possible {Action–State} pairs from some state, e.g., Succ(Arad) = {<Arad → Zerind, In Zerind>, … }
4. Goal test, which can be
 explicit, e.g., x = "In Bucharest"
 implicit, e.g., Checkmate(x)
5. Path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 c(x,a,y) is the step cost, assumed to be ≥ 0
 A solution is a sequence of actions leading from the initial state to a goal state, as sketched below.
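
As a sketch only, the Romania route-finding problem can be plugged into the SearchProblem interface sketched earlier; the road-map fragment and distances below follow the usual textbook map and should be treated as illustrative.

```python
# Fragment of the Romania road map: city -> {neighbouring city: distance}.
ROADS = {
    "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":         {"Arad": 75, "Oradea": 71},
    "Oradea":         {"Zerind": 71, "Sibiu": 151},
    "Timisoara":      {"Arad": 118},
    "Sibiu":          {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {},
}


class RomaniaProblem(SearchProblem):
    """Route finding: initial state 'In Arad', goal test 'In Bucharest'."""

    def actions(self, state):
        # An action is "drive to a neighbouring city".
        return list(ROADS[state])

    def result(self, state, action):
        # Driving to a city leaves us in that city.
        return action

    def goal_test(self, state):
        return state == "Bucharest"

    def step_cost(self, state, action, next_state):
        # Additive path cost: sum of road distances.
        return ROADS[state][next_state]
```

With these (illustrative) distances, the solution Arad, Sibiu, Fagaras, Bucharest corresponds to the action sequence ["Sibiu", "Fagaras", "Bucharest"] with path cost 140 + 99 + 211 = 450.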

 Readings
◦ Introduction: Chapter 3.1 – 3.3
◦ Uninformed Search: Chapter 3.4
 State Space Search
◦ Uninformed Search/ Blind Search
◦ Informed / Heuristic Search
 Problem Reduction Search
 Game Tree Search
 Advances
◦ Memory Bounded Search
◦ Multi Objective Search
◦ Learning how to search
 Blind (or uninformed) strategies do not exploit any of the information contained in a state

 Heuristic (or informed) strategies exploit such information to assess that one node is “more promising” than another
 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:
◦ Completeness: Does it always find a solution if one exists?
◦ Time complexity: Number of nodes generated
◦ Space complexity: Maximum number of nodes in memory
◦ Optimality: Does it always find a least-cost solution?
 Time and space complexity are measured in terms of
◦ b: maximum no. of successors of any node (branching factor)
◦ d: depth of the shallowest goal node
◦ m: maximum length of any path in the state space (may be ∞).
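
To get a feel for these quantities (a back-of-the-envelope illustration, not from the slides): a strategy that expands every node down to the goal depth generates up to 1 + b + b² + … + b^d nodes, so with b = 10 and d = 3 that is 1,111 nodes, while d = 10 already gives more than 10^10 nodes; this is why space complexity is often the limiting factor in practice.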

 Fringe = set of nodes generated but not expanded = nodes we know we still have to explore

 fringe := {node corresponding to initial state}

 loop:
◦ if fringe empty, declare failure
◦ choose and remove a node v from fringe
◦ check if v’s state s is a goal state; if so, declare success
◦ if not, expand v, insert resulting nodes into fringe

 Key question in search: Which of the generated nodes do we expand next? (See the sketch below.)
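
A minimal Python sketch of this loop, assuming the SearchProblem interface from earlier; the function name and node layout are my own, and the choice of which node to pop is deliberately left to the caller.

```python
from collections import deque


def tree_search(problem, pop):
    """Generic search loop: `pop(fringe)` picks which node to expand next.

    Nodes are (state, path, path_cost) triples; the fringe is a deque so a
    strategy can pop from either end (or scan it for the cheapest node).
    """
    fringe = deque([(problem.initial_state, [], 0)])
    while fringe:
        state, path, cost = pop(fringe)          # choose and remove a node
        if problem.goal_test(state):             # is its state a goal state?
            return path, cost                    # declare success
        for action in problem.actions(state):    # expand the node and insert
            nxt = problem.result(state, action)  # its successors into fringe
            step = problem.step_cost(state, action, nxt)
            fringe.append((nxt, path + [action], cost + step))
    return None                                  # fringe empty: declare failure
```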
 Uninformed search: given a state, we only
know whether it is a goal state or not
 Cannot say one nongoal state looks better
than another nongoal state
 Can only traverse state space blindly in hope
of somehow hitting a goal state at some
point
◦ Also called blind search
◦ Blind does not imply unsystematic!
 Uninformed search strategies use only the information available in the problem definition

 Random Search
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Iterative deepening search
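
To tie these strategies to the generic loop sketched earlier: for several of them, the only difference is the order in which nodes are popped from the fringe. A rough sketch (names are mine; uniform-cost search would instead always pop the node with the smallest path cost so far):

```python
def bfs_pop(fringe):
    # Breadth-first search: expand the shallowest unexpanded node (FIFO).
    return fringe.popleft()


def dfs_pop(fringe):
    # Depth-first search: expand the deepest unexpanded node (LIFO).
    return fringe.pop()


# Example (using the RomaniaProblem sketch from earlier):
#   path, cost = tree_search(RomaniaProblem("Arad"), bfs_pop)
# Caution: plain tree search can loop forever on graphs with cycles
# (as noted earlier) unless repeated states are handled.
```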
