
Solving problems by searching

• Problem-Solving
  ➢ problem formulation
  ➢ problem types

• Uninformed Search
  ✓ breadth-first search
  ✓ uniform-cost search
  ✓ depth-first search
  ✓ depth-limited search
  ✓ iterative deepening
  ✓ bi-directional search
Reflex agent
Re-planning agent
Mastermind agent
Problem formulation
Well-Defined Problems

• A problem can be defined by:


– Initial state
• Starting point from which the agent sets out
– Actions (operators, successor functions)
• Describe the set of possible actions
• Actions move the agent from one state to another one by applying an
operator to a state
– State space
• Set of all states reachable from the initial state by any sequence of
actions
– Path
• Sequence of actions leading from one state in the state space to another
– Goal test
• Determines if a given state is the goal state
Problem Definition
◼ Initial State
The initial state of the problem (the starting point from which the agent
sets out)
◼ Operator
A set of actions that moves the problem from one state to another

◼ Neighbourhood (Successor Function)


 The set of all states reachable from a given state by a single action
◼ State Space
 The set of all states reachable from the initial state by any
sequence of actions
Problem Definition

◼ Goal Test
A test applied to a state which determines if a given state is the goal
state
◼ Path Cost
A function that determines how much it costs to take a particular path
(i.e. it assigns a numeric cost to each path)
Problem Definition

– Solution
• path from the initial state to a goal state
– Search cost
• time and memory required to calculate a solution
– Total cost
• sum of search cost and path cost
• overall cost for finding a solution
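As a concrete illustration of these components, here is a minimal Python sketch of a problem-definition interface. The class and method names (SearchProblem, initial_state, actions, result, goal_test, path_cost) are illustrative assumptions, not something given in the slides.

from abc import ABC, abstractmethod

class SearchProblem(ABC):
    """Bundles the components of a well-defined problem."""

    @abstractmethod
    def initial_state(self):
        """Starting point from which the agent sets out."""

    @abstractmethod
    def actions(self, state):
        """Set of actions applicable in `state`."""

    @abstractmethod
    def result(self, state, action):
        """State reached by applying `action` to `state`."""

    @abstractmethod
    def goal_test(self, state):
        """True if `state` is a goal state."""

    def path_cost(self, cost_so_far, state, action, next_state):
        """Numeric cost of a path; by default every step costs 1."""
        return cost_so_far + 1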
Example: Traveling in Romania
Romania
Example: The agent is driving to Bucharest from Arad.

[Map of Romania showing the roads between Oradea, Zerind, Arad, Timisoara,
Lugoj, Mehadia, Dobreta, Craiova, Sibiu, Fagaras, Rimnicu Vilcea, Pitesti,
Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi and Neamt]
If the agent has no knowledge of the roads, it can only choose a road at
random.
If a map is given, the agent has
 information about the states it might get into, and
 the actions it can take.
“An agent with several immediate options of unknown value can decide what
to do by first examining different possible sequences of actions that lead
to states of known value, and then choosing the best sequence.”
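To make "having the map" concrete, here is a minimal Python sketch of an illustrative fragment of the Romania road map as an adjacency list, together with the initial state and goal test for the Arad-to-Bucharest trip. Only a hand-picked subset of the roads is shown, and distances are omitted.

# Fragment of the Romania road map as an adjacency list (illustrative subset).
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti", "Craiova"],
    "Pitesti": ["Rimnicu Vilcea", "Craiova", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti", "Giurgiu", "Urziceni"],
}

initial_state = "Arad"

def goal_test(state):
    """The agent is driving to Bucharest."""
    return state == "Bucharest"

def actions(state):
    """Actions available in a state: drive to any directly connected city."""
    return roads.get(state, [])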
Problem-solving Agents
Assumptions on the environment:

Static (formulation, searching, & execution)


Observable (initial state)
Discrete (enumeration of actions)
Deterministic

We are dealing with a very easy environment.


Search problems
Example: Traveling in Romania
A problem can be defined formally by the following components:
Search problems

▪ Start state and Goal state


Problem type
Example Problems

• Toy Problems
  – vacuum world
  – 8-puzzle
  – 8-queens
  – vacuum agent

• Real-world Problems
  – route finding
  – touring problems
    • traveling salesperson
  – VLSI layout
  – robot navigation
  – assembly sequencing
  – Web search
Problem Definition - Example
–vacuum world
[Figure: the two-square vacuum world, squares A and B]

A simple agent function is:

If current square is dirty then suck,


else move to the other square
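A minimal Python sketch of this agent function (the names reflex_vacuum_agent, "Suck", "Left" and "Right" are illustrative assumptions, not from the slides):

def reflex_vacuum_agent(location, status):
    """If the current square is dirty then suck, else move to the other square."""
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Example calls:
print(reflex_vacuum_agent("A", "Dirty"))   # -> Suck
print(reflex_vacuum_agent("A", "Clean"))   # -> Right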
Problem Definition - Example
➢ vacuum world (squares A and B)

The formulation:
States: the agent can be in one of two locations, and each location might
or might not contain dirt: 2 locations × 2^2 dirt configurations = 8
possible states (n × 2^n states for n locations)
Initial state: any state can be considered an initial state
Successor function: left, right and suck
Goal test: check whether all the squares are clean
Path cost: number of steps in solution
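A minimal Python sketch of this formulation, enumerating the 8 states and implementing the successor function and goal test. The state representation (agent location plus the set of dirty squares) and the function names are assumptions made for illustration.

from itertools import combinations

LOCATIONS = ("A", "B")

# Enumerate the 2 x 2^2 = 8 states: (agent location, frozenset of dirty squares).
STATES = [(loc, frozenset(dirty))
          for loc in LOCATIONS
          for r in range(len(LOCATIONS) + 1)
          for dirty in combinations(LOCATIONS, r)]
assert len(STATES) == 8

def successor(state, action):
    """Successor function for the three actions Left, Right and Suck."""
    loc, dirty = state
    if action == "Left":
        return ("A", dirty)
    if action == "Right":
        return ("B", dirty)
    if action == "Suck":
        return (loc, dirty - {loc})   # remove dirt from the current square
    raise ValueError(f"unknown action: {action}")

def goal_test(state):
    """Goal test: every square is clean."""
    _, dirty = state
    return not dirty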
➢ vacuum world
Problem Definition - Example
➢ 8-puzzle
[Figure: an example 8-puzzle board as the initial state, and the goal state]


Problem Definition - Example
◼ States
A description of the location of each of the eight tiles; it is also
useful to include the location of the blank

◼ Operators
 The blank moves left, right, up or down

◼ Goal Test
 The current state matches a certain goal configuration (e.g. the one
shown on the previous slide)

◼ Path Cost
 Each move of the blank costs 1
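A minimal Python sketch of this 8-puzzle formulation. The board encoding (a row-major tuple with 0 for the blank), the assumed goal layout, and the function names are illustrative, not taken from the slides.

# A state is a tuple of 9 entries in row-major order, with 0 for the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, for illustration only

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def successors(state):
    """Operators: the blank moves left, right, up or down (each move costs 1)."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if (action == "Up" and row == 0) or (action == "Down" and row == 2) \
                or (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue   # the blank cannot move off the board
        board = list(state)
        target = blank + delta
        board[blank], board[target] = board[target], board[blank]
        yield action, tuple(board)

def goal_test(state):
    return state == GOAL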
How Good is a Solution?
◼ Does our search method actually find a solution?

◼ Is it a good solution?
 Path Cost
 Search Cost (Time and Memory)

◼ Does it find the optimal solution?


 But what is optimal?
Measuring problem-solving performance

◼ Completeness
Is the strategy guaranteed to find a solution?
◼ Time Complexity
How long does it take to find a solution?
Measuring problem-solving performance

◼ Space Complexity
How much memory does it take to perform
the search?
◼ Optimality
Does the strategy find the optimal solution
when there are several solutions?
Search Terminology
• Search Tree
– Generated as the search space is traversed
• The search space itself is not necessarily a tree, frequently it is a graph
• The tree specifies possible paths through the search space
– Expansion of nodes
• As states are explored, the corresponding nodes are expanded by applying the
successor function
–This generates a new set of (child) nodes
• The fringe (frontier) is the set of nodes that have been generated but not yet expanded
–Newly generated nodes are added to the fringe
– Search strategy
• Determines the selection of the next node to be expanded
• Can be achieved by ordering the nodes in the fringe
–e.g. queue (FIFO), stack (LIFO)
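A minimal Python sketch of this terminology, reusing the illustrative SearchProblem interface sketched earlier (again an assumption, not the slides' own code): a node carries a state and the path of actions that reached it, the fringe holds generated but not-yet-expanded nodes, and the search strategy is simply the order in which the fringe is popped.

from collections import deque

def tree_search(problem, lifo=False):
    """Expand nodes from the fringe until a goal state is found.
    A node is a (state, path-of-actions) pair; the fringe holds nodes that
    have been generated but not yet expanded.  Popping FIFO (a queue) gives
    breadth-first search, popping LIFO (a stack) gives depth-first search."""
    fringe = deque([(problem.initial_state(), [])])
    while fringe:
        state, path = fringe.pop() if lifo else fringe.popleft()
        if problem.goal_test(state):
            return path                                  # solution found
        for action in problem.actions(state):            # expand the node
            child = problem.result(state, action)
            fringe.append((child, path + [action]))      # add children to the fringe
    return None                                          # no solution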
Selection of a Search Strategy

• Most of the effort is often spent on the selection of an


appropriate search strategy for a given problem
– Uninformed search (blind search)
• Number of steps, path cost unknown
• Agent knows when it reaches a goal
– Informed search (heuristic search)
• Agent has background information about the problem
– Map, costs of actions
End of this part
