
Solving Problems by

Searching
Dr. Azhar Mahmood
Associate Professor
Email: azhar.mahmood@cust.edu.pk
Outline
• Problem-solving Agents

• Problem Formulation

• Example Problems

• Basic search Algorithms

Solving Problems by Searching
• Reflex agents are simple
– they base their actions on a direct mapping from states to
actions
– they have great difficulty learning desired action sequences
– they cannot work well in environments
• in which this mapping would be too large to store
• or would take too long to learn

• Hence, a goal-based agent is used

– Goal-based agents can succeed by considering future actions
and the desirability of their outcomes
Problem-solving Agent
• Problem-solving agent
– A kind of goal-based agent

– It solves problem by
• finding sequences of actions that lead to desirable states
(goals)

– To solve a problem,
• the first step is the goal formulation, based on the current
situation
Goal Formulation
• The goal is formulated
– As a set of world states, in which the goal is satisfied

• Reaching from initial state → goal state


– Actions are required

• Actions are the operators


– causing transitions between world states
– Actions should be at a suitable level of abstraction,
instead of very detailed
– E.g., turn left vs. turn left 30 degrees, etc.
Problem Formulation
• The process of deciding
– what actions and states to consider

• E.g., driving Loc1 → Loc2


– in-between states and actions defined
– States: Some places in between Loc1 & Loc2
– Actions: Turn left, Turn right, go straight,
accelerate & brake, etc.
Example: Romania
• On holiday in Romania; currently in Arad. Flight
leaves tomorrow from Bucharest

• Formulate goal:
– be in Bucharest

• Formulate problem:
– states: various cities
– actions: drive between cities

• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
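The formulation above can be sketched in Python. The adjacency dictionary below follows the standard AIMA road map of Romania (distances in km, only a few cities shown); it is an illustration, not code from the slides:

```python
# Partial road map of Romania as an adjacency dict (distances in km,
# following the standard AIMA map; only a few cities shown).
ROAD_MAP = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def is_solution(path, start, goal, road_map):
    """Check that a sequence of cities is a valid drive from start to goal."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    return all(b in road_map.get(a, {}) for a, b in zip(path, path[1:]))

def path_cost(path, road_map):
    """Sum of the step costs (road distances) along the path."""
    return sum(road_map[a][b] for a, b in zip(path, path[1:]))

solution = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
print(is_solution(solution, "Arad", "Bucharest", ROAD_MAP))  # True
print(path_cost(solution, ROAD_MAP))  # 140 + 99 + 211 = 450
```

Any sequence of adjacent cities from Arad to Bucharest is a solution; its cost is the sum of the step costs along the way.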
Example: Romania

(figure: road map of Romania with driving distances between cities)
Search
• Because there are many ways to achieve the same
goal
– those ways can together be expressed as a tree
– at a point with multiple options of unknown value,
• the agent can examine the different possible sequences of
actions, and choose the best
– this process of looking for the best sequence is called
search
– the best sequence is then a list of actions, called a solution
Search Algorithm
• Defined as a function that
– takes a problem
– and returns a solution

• Once a solution is found


– the agent follows the solution
– and carries out the list of actions – execution phase

• Design of an agent
– “Formulate, search, execute”
Problem-solving Agents

Well-defined problems and solutions

A problem is defined by 5 components:

1. Initial state
2. Actions
3. Transition model (successor function)
4. Goal test
5. Path cost
Well-defined problems and solutions
• A problem is defined by 5 components:
– The initial state
• that the agent starts in: e.g., the initial state for our agent in
Romania might be described as In(Arad).
– The set of possible actions available to the agent
• Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in
s. We say that each of these actions is applicable in s. For example, from the state
In(Arad), the applicable actions are {Go(Sibiu),Go(Timisoara),Go(Zerind)}.
– Transition model: description of what each action does
• RESULT(In(Arad), Go(Zerind)) = In(Zerind)
• Successor: any state reachable from a given state by a
single action

– Initial state, actions, and transition model define the state
space
• the set of all states reachable from the initial state by any sequence of
actions.
Well-defined Problems and Solutions
• A path in the state space:
– any sequence of states connected by a sequence of actions.
• The goal test
– Applied to the current state to test whether a given state is a
goal state
• Sometimes there is an explicit set of possible goal states and the test
simply checks whether the given state is one of them.
• The agent’s goal in Romania is the singleton set {In(Bucharest)}.

– Sometimes the goal is described implicitly by properties

• instead of stating explicitly the set of goal states
• For example: in chess, the agent wins (checkmate) if it reaches a state
where the opponent's KING is attacked and cannot escape
• no matter what the opponent does
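Both kinds of goal test are simply predicates over states. A minimal sketch (the state field used in the implicit test is hypothetical, chosen only for illustration):

```python
# Explicit goal test: membership in a given set of goal states.
GOALS = {"In(Bucharest)"}

def explicit_goal_test(state):
    return state in GOALS

# Implicit goal test: a property of the state, with no enumerated set.
# The "checkmated" field here is a hypothetical stand-in for a real
# chess-position check.
def implicit_goal_test(board_state):
    return board_state.get("checkmated", False)

print(explicit_goal_test("In(Bucharest)"))          # True
print(implicit_goal_test({"checkmated": True}))     # True
print(implicit_goal_test({}))                       # False
```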
Well-defined Problems and Solutions
• A path cost function,
– assigns a numeric cost to each path
– reflects the agent's performance measure
– denoted by g
– used to distinguish the best path from others

• Usually the path cost is


– the sum of the step costs of the individual actions (in the
action list)
Well-defined Problems and Solutions
• Together a problem is defined by
– Initial state
– Actions
– Successor function
– Goal test
– Path cost function

• The solution of a problem is then


– a path from the initial state to a state satisfying the goal test

• Optimal solution
– the solution with lowest path cost among all solutions
Single-state Problem Formulation
A problem is defined by four items:
1. initial state e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }

3. goal test, can be


– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)

4. path cost (additive)


– e.g., sum of distances, number of actions executed, etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0

• A solution is a sequence of actions leading from the initial state


to a goal state
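The four items can be collected into a small problem class. This is a sketch in the spirit of the AIMA code, not an official API; the `TwoCities` subclass is a toy example:

```python
# Generic problem definition mirroring the four components above (a sketch).
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial          # 1. initial state
        self.goal = goal

    def actions(self, state):
        """2. Actions applicable in `state` (override per problem)."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """3. Explicit goal test; override for implicit, property-based goals."""
        return state == self.goal

    def step_cost(self, state, action, next_state):
        """4. Step cost c(x, a, y), assumed >= 0; default 1 per action."""
        return 1

# Toy instance: a single drive from Arad to Bucharest.
class TwoCities(Problem):
    def actions(self, state):
        return ["Go(Bucharest)"] if state == "Arad" else []
    def result(self, state, action):
        return "Bucharest"

p = TwoCities("Arad", "Bucharest")
print(p.goal_test(p.result("Arad", "Go(Bucharest)")))  # True
```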
Formulating Problems

• Besides the four components for problem formulation


– anything else?

• Abstraction
– the process of removing irrelevant information
– leaving only the parts essential to describing the states
(removing details from the representation)

– Conclusion: only the most important parts that contribute
to searching are used
Selecting a State Space
• Real world is absurdly complex
→ state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
– e.g., "Arad → Zerind" represents a complex set of possible routes,
detours, rest stops, etc.
• For guaranteed realizability, any real state "in Arad" must get
to some real state "in Zerind"
• (Abstract) solution = set of real paths that are solutions in the
real world
• Each abstract action should be "easier" than the original
problem

Vacuum world state space graph

• states?
• actions?
• goal test?
• path cost?

Vacuum world state space graph

• states? integer dirt and robot location


• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
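These answers can be made concrete in a short sketch. The state encoding below, a tuple (agent location, dirt at A, dirt at B), is one possible choice, not the only one:

```python
from itertools import product

# Vacuum world sketch: state = (agent_loc, dirt_a, dirt_b), locations "A"/"B".
def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                      # clean the current location
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    return state

def goal_test(state):
    _, dirt_a, dirt_b = state                 # goal: no dirt at all locations
    return not dirt_a and not dirt_b

states = list(product("AB", [True, False], [True, False]))
print(len(states))                            # 8 states in the state space

s = result(("A", True, True), "Suck")         # clean A
s = result(result(s, "Right"), "Suck")        # move right, clean B
print(goal_test(s))                           # True; path cost = 3 actions
```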
Example: The 8-puzzle

• states?
• actions?
• goal test?
• path cost?

Example: The 8-puzzle

• states? locations of tiles


• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move

[Note: optimal solution of n-Puzzle family is NP-hard]
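A successor function for the 8-puzzle can be sketched as follows (one common encoding: the state is a tuple of 9 entries in row-major order, with 0 standing for the blank):

```python
# 8-puzzle sketch: state is a tuple of 9 entries, 0 marks the blank.
def moves(state):
    """Return {action: successor state} for the legal blank moves."""
    i = state.index(0)
    row, col = divmod(i, 3)
    successors = {}
    for action, (dr, dc) in {"Left": (0, -1), "Right": (0, 1),
                             "Up": (-1, 0), "Down": (1, 0)}.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:         # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]           # slide the neighboring tile
            successors[action] = tuple(s)
    return successors

start = (1, 2, 3,
         4, 0, 5,
         6, 7, 8)
succ = moves(start)
print(sorted(succ))          # ['Down', 'Left', 'Right', 'Up']
print(succ["Left"][3:6])     # (0, 4, 5): blank swapped with the 4
```

With the blank in the center, all four actions are applicable; in a corner only two would be.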

Searching for Solutions

• Finding a solution is done by

– searching through the state space

• A problem is transformed

– into a search tree
– generated by the initial state and successor
function
Search tree
• Initial state
– The root of the search tree is a search node

• Expanding
– Applying successor function to the current state
– Thereby generating a new set of states

• Leaf nodes
– The states having no successors
Fringe / frontier: set of search nodes that have not been expanded yet.

• Refer to next figure


Tree Search Example

(figures: three successive expansions of the search tree, one per slide)
Tree Search Algorithms
• Basic idea:
– Offline, simulated exploration of state space by generating
successors of already-explored states (a.k.a. expanding
states)

Implementation: General Tree Search

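The general tree search can be sketched as follows. A FIFO fringe is assumed here (giving breadth-first behavior), and the example graph is hypothetical:

```python
from collections import deque

# Generic tree search sketch: the fringe policy determines the strategy.
# The problem is given as (initial, successors, goal_test) for brevity.
def tree_search(initial, successors, goal_test):
    frontier = deque([(initial, [initial])])   # FIFO fringe -> breadth-first
    while frontier:
        state, path = frontier.popleft()       # choose a node to expand
        if goal_test(state):
            return path                        # solution: list of states
        for nxt in successors(state):          # expand: generate successors
            frontier.append((nxt, path + [nxt]))
    return None                                # fringe exhausted: no solution

# Tiny hypothetical graph; note tree search may revisit states (A-B-A-...).
GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": []}
print(tree_search("A", lambda s: GRAPH[s], lambda s: s == "D"))
# ['A', 'B', 'D']
```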
Search tree
• A node has five components:
– STATE: which state it is in the state space

– PARENT-NODE: from which node it is generated

– ACTION: which action applied to its parent-node to generate it

– PATH-COST: the cost, g(n), from initial state to the node n itself

– DEPTH: number of steps along the path from the initial state
Implementation: States vs. Nodes
• A state is a (representation of) a physical configuration
• A node is a data structure constituting part of a search tree; it includes
state, parent node, action, path cost g(n), and depth

• The Expand function creates new nodes, filling in the various fields
and using the SuccessorFn of the problem to create the
corresponding states.
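The node structure and the Expand function can be sketched as follows (a simplified version; the successor function passed in is a hypothetical stand-in returning (action, state, step cost) triples):

```python
# Node with the five components listed above (a minimal sketch).
class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state                     # STATE in the state space
        self.parent = parent                   # PARENT-NODE
        self.action = action                   # ACTION that generated it
        self.path_cost = (parent.path_cost if parent else 0) + step_cost  # g(n)
        self.depth = parent.depth + 1 if parent else 0                    # DEPTH

def expand(node, successor_fn):
    """Create child nodes from (action, state, step_cost) successors."""
    return [Node(s, node, a, c) for a, s, c in successor_fn(node.state)]

root = Node("Arad")
children = expand(root, lambda s: [("Go(Sibiu)", "Sibiu", 140),
                                   ("Go(Zerind)", "Zerind", 75)])
print([(n.state, n.path_cost, n.depth) for n in children])
# [('Sibiu', 140, 1), ('Zerind', 75, 1)]
```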

Search Strategies
• The essence of searching
– choosing one option and keeping the others for later inspection
– in case the first choice turns out to be wrong
• Hence, we have the search strategy
– which determines the choice of which state to expand
– good choice → less work → faster
• Important:
– state space ≠ search tree
• The state space has unique states {A, B}
• while a search tree may have cyclic paths: A-B-A-B-A-B-…
• A good search strategy should avoid such paths
Measuring Problem-Solving
Performance
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
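For intuition, the worst-case number of nodes generated in a uniform tree of branching factor b down to depth d can be computed directly (a small illustration of why b and d dominate time and space, not a bound for any specific algorithm):

```python
# Nodes in a uniform tree of branching factor b, depths 0..d:
# 1 + b + b^2 + ... + b^d.
def nodes_generated(b, d):
    return sum(b**i for i in range(d + 1))

print(nodes_generated(10, 2))  # 111
print(nodes_generated(10, 6))  # 1111111 -- exponential growth in d
```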

Measuring Problem-Solving
Performance
• For effectiveness of a search algorithm
– we can just consider the total cost
– The total cost = path cost (g) of the solution found
+ search cost
• search cost = time necessary to find the solution
• Tradeoff:
– (long time, optimal solution with least g)
– vs. (shorter time, solution with slightly larger path
cost g)
Reading
• Artificial Intelligence: A Modern Approach, 3rd
edition
– Chapter 3: Solving Problems by Searching
– Sections 3.1–3.4
Next
• Informed/Heuristic Search
– Greedy best-first search
– A* search
– Heuristic functions
