
Week 3

State Space Search


Knowledge Representation And
Search (Conventional AI)

[Diagram: a conventional AI system. A control unit with memory, ALU and I/O operates over a knowledge base, using algorithms for search and inference.]
Problem Solving Agent
• A particular goal-based agent that decides
what to do by searching for sequences of
actions that lead to a goal state.
• A simple problem-solving agent does
three things:
– Formulates the problem and a goal description
– Searches for the sequence of actions that
would solve the problem
– Executes the actions one at a time
Problem Formulation
• INITIAL STATE: the state that the agent starts in; it could be any state.
• ACTIONS: the set of actions available to the agent in state s, ACTIONS(s).
• TRANSITION MODEL: a description of what each action does, specified by a function RESULT(s, a).
• PATH: a path in the state space is a sequence of states connected by a sequence of actions.
• GOAL TEST: determines whether a given state is a goal state.
• PATH COST: a function that assigns a numeric cost to each path; a solution with the lowest path cost is an OPTIMAL SOLUTION.
• STATE SPACE: together, the initial state, actions, and transition model implicitly define the state space of the problem.
State Space
• A “state space” is a graphical
representation of a problem.
• The state space includes all possible states of
the problem, including the solution state,
as nodes.
• Arcs between nodes denote the legal
moves, i.e. node-to-node transitions.
• The state space is also known as the solution
space or problem space.
Example (Vacuum Cleaner)
• States: 2 × 2^2 = 8. A larger environment with n
locations has n · 2^n states.
• Initial state: Any state can be designated as the
initial state.
• Actions: In this environment each state has just three
actions: Left, Right, and Suck. Larger environments
might also include Up and Down.
• Transition model: The actions have their
expected effects.
• Goal test: This checks whether all the squares
are clean.
• Path cost: Each step costs 1, so the path cost is
the number of steps in the path.
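As a concrete instance of the formulation above, here is a small sketch of the two-location vacuum world in Python. The state encoding (agent location plus the dirt status of squares A and B) is an assumption made for illustration.

```python
from itertools import product

# A state is (agent_location, dirt_in_A, dirt_in_B): 2 * 2 * 2 = 8 states.
STATES = list(product(['A', 'B'], [True, False], [True, False]))
ACTIONS = ['Left', 'Right', 'Suck']

def result(state, action):
    """Transition model: the actions have their expected effects."""
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Suck':
        return (loc, False, dirt_b) if loc == 'A' else (loc, dirt_a, False)
    return state

def goal_test(state):
    """Goal: all squares are clean."""
    return not state[1] and not state[2]

print(len(STATES))                          # 8
print(result(('A', True, True), 'Suck'))    # ('A', False, True)
```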
8-Puzzle Problem

Start State        Goal State
 1 4 3              _ 1 2
 7 _ 6              3 4 5
 5 8 2              6 7 8

State Space Examples
[Figure: one expansion step of the 8-puzzle state space. The start state
 1 4 3
 7 _ 6
 5 8 2
is expanded by sliding the blank Up, Right, Down or Left, giving four successor states; the Up successor (1 _ 3 / 7 4 6 / 5 8 2) is expanded further in turn.]
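The expansion shown above can be generated mechanically. The sketch below gives one possible successor function for the 8-puzzle, assuming a state is stored as a tuple of nine numbers in row-major order with 0 standing for the blank, and that a move names the direction the blank slides.

```python
# State: tuple of 9 numbers in row-major order, 0 = blank.
MOVES = {'Up': -3, 'Down': +3, 'Left': -1, 'Right': +1}

def successors(state):
    """Yield (action, next_state) pairs obtained by sliding the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Skip moves that would push the blank off the board.
        if (action == 'Up' and row == 0) or (action == 'Down' and row == 2):
            continue
        if (action == 'Left' and col == 0) or (action == 'Right' and col == 2):
            continue
        board = list(state)
        target = blank + delta
        board[blank], board[target] = board[target], board[blank]
        yield action, tuple(board)

start = (1, 4, 3, 7, 0, 6, 5, 8, 2)     # the start state from the figure
for action, nxt in successors(start):
    print(action, nxt)                   # the four successors shown above
```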
8-Puzzle Problem
• The 8-puzzle belongs to the family
of SLIDING-BLOCK puzzles, which
is known to be NP-complete.
• The 8-puzzle has 9!/2 = 181,440
reachable states and is easily
solved.
• The 15-puzzle (on a 4 × 4 board)
has around 1.3 trillion states.
• The 24-puzzle (on a 5 × 5 board)
has around 10^25 states, and
random instances take several
hours to solve optimally.
8-QUEENS PROBLEM
• The goal of the 8-queens problem is to place
eight queens on a chessboard such that no
queen attacks any other.
• A queen attacks any piece in the same
row, column or diagonal.
8-QUEENS PROBLEM

• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the
specified square.
• Goal test: 8 queens are on the board and no queen attacks another.
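The goal test can be written down directly. A small sketch, assuming the common representation in which a state lists the row of the queen placed in each column (the formulation above allows any arrangement, so this is a simplification):

```python
def attack(col1, row1, col2, row2):
    """Two queens attack each other on the same row, column or diagonal."""
    return (row1 == row2 or col1 == col2 or
            abs(row1 - row2) == abs(col1 - col2))

def goal_test(rows):
    """rows[c] is the row of the queen in column c.
    Goal: eight queens placed and no queen attacks another."""
    if len(rows) != 8:
        return False
    return not any(attack(c1, rows[c1], c2, rows[c2])
                   for c1 in range(8) for c2 in range(c1 + 1, 8))

print(goal_test([0, 4, 7, 5, 2, 6, 1, 3]))   # True: a valid placement
print(goal_test([0, 1, 2, 3, 4, 5, 6, 7]))   # False: queens share diagonals
```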
State Space Examples
[Figure: a weighted graph of five cities A, B, C, D and E with edge costs between 50 and 125, used as a travelling salesperson instance.]
State Space Examples
[Figure: part of the search tree for the travelling salesperson problem starting at city A; each branch extends a partial tour and is labelled with its accumulated path cost (100, 150, 250, 275, 300, 325, 375, 425, ...).]
The complete state space of the travelling salesperson problem guarantees the optimal solution, since the complete tours and their costs are stored in the leaves of the tree.
State Space Examples
State Space for TIC TAC TOE
[Figure: the first few levels of the Tic-Tac-Toe state space, with X and O moves as arcs.]
Homework: draw the state space to 5 levels.
Start : Goal
[Figure: a blocks-world puzzle with blocks A, B, C and D distributed over three stacks in the start configuration, to be rearranged into the goal configuration.]
Rules:
• Pick or place one block at a time from the top of a stack.
• You have to use the given three stacks to achieve the goal.
Search Algorithms
• Having formulated a problem, we now
need to solve it. A solution is an action
sequence, so search algorithms work by
considering various possible action
sequences that lead from the initial state to a
goal state.
• The possible action sequences starting at the
initial state form a SEARCH TREE with the
initial state as the NODE at the root.
Strategies for Search
• Data-Driven Search
– Forward chaining
– Start from the available data
– Search towards the goal
• Goal-Driven Search
– Backward chaining
– Start from the goal
– Generate sub-goals until they arrive at the current
state
Generating State space
• LEAF NODE: a node of the search tree that has been
generated but not yet expanded (it has no children).
• FRONTIER (also called the open list): the set of all
leaf nodes available for expansion.
• EXPLORED SET (also known as the CLOSED LIST):
remembers every node that has already been expanded.
Graph of Romania

[Figure: the road map of Romania with a start state and a goal state marked, followed by slides showing the corresponding state space being generated.]
Coloring Method
• A coloring method, which marks each node (for
example white for unvisited, gray for frontier, black
for explored), can be used instead of open and closed lists.
Uninformed Search
Breadth First Search
Uniform Cost Search
Depth First Search
Backtrack
Depth Limited Search
Depth First with iterative deepening
Bi-directional Search
Properties of a NODE
For each node n of the tree, we have a
structure/object that contains four components:
• n.STATE: the state in the state space to which
the node corresponds;
• n.PARENT: the node in the search tree that
generated this node;
• n.ACTION: the action that was applied to the
parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by
g(n), of the path from the initial state to the
node, as indicated by the parent pointers.
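These four components translate directly into a small class. The sketch below assumes the Problem interface sketched earlier; the child and solution helpers are illustrative additions.

```python
class Node:
    """A node in the search tree (not the same thing as a state)."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, g(n)

    def child(self, problem, action):
        """The node reached by applying `action` in this node's state."""
        next_state = problem.result(self.state, action)
        g = self.path_cost + problem.step_cost(self.state, action, next_state)
        return Node(next_state, parent=self, action=action, path_cost=g)

    def solution(self):
        """Follow parent pointers back to the root; return the action sequence."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```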
Measuring Performance
• Completeness: Is the algorithm guaranteed
to find a solution when there is one?
• OPTIMALITY : Does the strategy find the
optimal solution?
• TIME COMPLEXITY : How long does it take to
find a solution?
• SPACE COMPLEXITY: How much memory is
needed to perform the search?
Measuring Performance
• In AI, the graph is often represented implicitly by
the initial state, actions, and transition model and is
frequently infinite.
• For these reasons, complexity is expressed in terms
of three quantities:
• b, the BRANCHING FACTOR: the maximum number of
successors of any node;
• d, the DEPTH of the shallowest goal node
(i.e., the number of steps along the path from the
root);
• and m, the maximum length of any path in the
state space.
Breadth First Search (BFS)
• Breadth-first search is a simple strategy in
which the root node is expanded first, then
all the successors of the root node are
expanded next, then their successors, and so
on.
• This is achieved very simply by using a FIFO
queue for the frontier.
• We can easily see that it is complete: if the
shallowest goal node is at some finite depth
d, BFS will eventually find it after generating all
shallower nodes (provided the branching factor b is finite).
BFS Algorithms
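A minimal sketch of breadth-first graph search in Python, using a FIFO queue for the frontier and an explored (closed) set; it assumes the Problem and Node sketches given earlier, not any particular library.

```python
from collections import deque

def breadth_first_search(problem):
    """Return the action sequence to a goal state, or None on failure."""
    node = Node(problem.initial)
    if problem.goal_test(node.state):
        return node.solution()
    frontier = deque([node])             # FIFO queue
    frontier_states = {node.state}
    explored = set()                     # closed list
    while frontier:
        node = frontier.popleft()
        frontier_states.discard(node.state)
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = node.child(problem, action)
            if child.state not in explored and child.state not in frontier_states:
                # Goal test applied when a node is generated (shallowest goal wins).
                if problem.goal_test(child.state):
                    return child.solution()
                frontier.append(child)
                frontier_states.add(child.state)
    return None
```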
BFS ISSUES
• Time and space complexity are exponential, O(b^d),
which is scary.
• Assume branching factor b = 10, that 1 million nodes
can be generated per second, and that a node requires
1000 bytes.
• The following happens when BFS is run on a modern
personal computer.
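Under those assumptions the numbers can be worked out directly. A small sketch of the arithmetic (the depth-16 figures, roughly 350 years and about 10 exabytes, are the ones referred to again on the depth-first search slide):

```python
# Assumptions from the slide: b = 10, 1e6 nodes generated per second,
# 1000 bytes of storage per node.
def bfs_requirements(depth, b=10, nodes_per_sec=1e6, bytes_per_node=1000):
    nodes = sum(b ** level for level in range(1, depth + 1))  # b + b^2 + ... + b^d
    seconds = nodes / nodes_per_sec
    memory = nodes * bytes_per_node
    return nodes, seconds, memory

for d in (2, 4, 8, 12, 16):
    nodes, secs, mem = bfs_requirements(d)
    print(f"d={d:2d}: {nodes:.2e} nodes, {secs:.2e} seconds, {mem:.2e} bytes")

# At d = 16 this gives about 1.1e16 nodes, 1.1e10 seconds (roughly 350 years)
# and 1.1e19 bytes (about 10 exabytes) of memory.
```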
Uniform Cost Search
• When all step costs are equal, breadth-first search is optimal.
• Instead of expanding the shallowest node, UNIFORM-COST
search expands the node n with the lowest path cost g(n).
Uniform Cost Search

Frontier after each expansion (path cost g(n) in parentheses),
searching from Sibiu to Bucharest:
1. Sibiu (0)
2. Rimnicu Vilcea (80), Fagaras (99)
3. Fagaras (99), Pitesti (177)
4. Pitesti (177), Bucharest (310)
5. Bucharest (278), Bucharest (310)
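A sketch of uniform-cost search with a priority queue ordered by g(n); as above it assumes the Problem and Node sketches rather than a particular library, and it applies the goal test when a node is expanded rather than when it is generated.

```python
import heapq
import itertools

def uniform_cost_search(problem):
    """Always expand the frontier node with the lowest path cost g(n)."""
    counter = itertools.count()       # tie-breaker so the heap never compares Nodes
    root = Node(problem.initial)
    frontier = [(root.path_cost, next(counter), root)]
    best_g = {root.state: 0}          # cheapest path cost found so far per state
    explored = set()
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):   # test on expansion, so the answer is optimal
            return node.solution()
        if node.state in explored:
            continue                        # a stale, more expensive entry
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = node.child(problem, action)
            if child.state not in explored and \
               child.path_cost < best_g.get(child.state, float('inf')):
                best_g[child.state] = child.path_cost
                heapq.heappush(frontier, (child.path_cost, next(counter), child))
    return None
```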
Depth-first search
• DEPTH-FIRST search always expands the
deepest node in the current
frontier of the search tree.
• Breadth-first search uses a FIFO queue;
depth-first search uses a LIFO queue (a stack).
• A LIFO queue means that the most recently
generated node is chosen for expansion.
• It is common to implement depth-first search
with a recursive function (see the sketch below).
• It is neither complete nor optimal,
• but it has linear space complexity.
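A minimal recursive sketch of depth-first tree search, assuming the Problem and Node sketches given earlier. Without a depth limit or cycle check it can descend forever in infinite or looping state spaces, which is why it is neither complete nor optimal.

```python
def depth_first_search(problem, node=None):
    """Recursive depth-first tree search: always expand the deepest node first."""
    if node is None:
        node = Node(problem.initial)
    if problem.goal_test(node.state):
        return node.solution()
    for action in problem.actions(node.state):
        # Recurse immediately on each child (LIFO behaviour via the call stack).
        result = depth_first_search(problem, node.child(problem, action))
        if result is not None:
            return result
    return None
```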
Depth First Search (DFS)
• The time complexity of depth-first tree search is O(b^m),
which is unbounded if the maximum depth m is infinite
(depth-first graph search is bounded by the size of the state space).
• The main advantage of DFS over BFS is its space complexity.
• At maximum depth m, depth-first search requires
storage of only O(bm) nodes.
• Depth-first search would require 156 kilobytes instead
of 10 exabytes at depth d = 16, a factor of 7 trillion
times less space.
• This has led to the adoption of depth-first tree search
as the basic workhorse of many areas of AI, including
constraint satisfaction, propositional satisfiability, and
logic programming
Backtracking Search
• A variant of depth-first search called
backtracking search uses still less
memory.
• In backtracking, only one successor is
generated at a time rather than all
successors; each partially expanded
node remembers which successor to
generate next (see the sketch after this list).
• In this way, only O(m) memory is
needed rather than O(bm).
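One way to realise this is to keep a single mutable state and undo each action on the way back up, so that only the current path of at most m actions is stored. The apply and undo operations below are hypothetical names, not part of the Problem interface used so far.

```python
def backtracking_search(problem, state, path=None, limit=50):
    """Depth-first search over a single state description that is modified
    in place and restored on backtracking, giving O(m) memory."""
    if path is None:
        path = []
    if problem.goal_test(state):
        return list(path)
    if len(path) >= limit:                  # simple guard against infinite descent
        return None
    for action in problem.actions(state):
        problem.apply(state, action)        # generate ONE successor in place (assumed method)
        path.append(action)
        result = backtracking_search(problem, state, path, limit)
        if result is not None:
            return result
        path.pop()
        problem.undo(state, action)         # backtrack: restore the state (assumed method)
    return None
```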
Backtracking Example
Tree: A has children B, C, D; B has children E, F;
E has children H, I; F has a child J; C has a child G.

#   CS   BT          DFS
0   A    [A]         [A]
1   B    [B A]       [B C D A]
2   E    [E B A]     [E F B C D A]
3   H    [H E B A]   [H I E F B C D A]
4   I    [I E B A]   [I E F B C D A]
5   F    [F B A]     [F B C D A]
6   J    [J F B A]   [J F B C D A]
7   C    [C A]       [C D A]
8   G    [G C A]     [G C D A]

(CS = current state; BT = backtracking search's stack; DFS = depth-first search's open list.)
Depth Limited Search (DLS)
• The failure of depth-first search in infinite state spaces can be
alleviated by supplying a predetermined depth limit l.
• Nodes at depth l are treated as if they have no successors.
• This approach is called depth-limited search.
• The depth limit l solves the infinite-path problem.
• It is incomplete if we choose l < d,
• and nonoptimal if we choose l > d.
• Its time complexity is O(b^l) and its space complexity is O(bl).
• A depth limit is helpful if it is based on knowledge of the
problem.
• Depth-limited search can terminate with two kinds of
failure: the standard failure value indicates no solution;
• the cutoff value indicates no solution within the depth limit.
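A recursive sketch of depth-limited search that distinguishes the standard failure value (returned here as None) from the cutoff value; it assumes the Problem and Node sketches given earlier.

```python
def depth_limited_search(problem, limit):
    """Return a solution, 'cutoff' if the depth limit was reached, or None."""

    def recurse(node, limit):
        if problem.goal_test(node.state):
            return node.solution()
        if limit == 0:
            return 'cutoff'                  # nodes at depth l are treated as leaves
        cutoff_occurred = False
        for action in problem.actions(node.state):
            result = recurse(node.child(problem, action), limit - 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result
        return 'cutoff' if cutoff_occurred else None

    return recurse(Node(problem.initial), limit)
```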
Iterative deepening search
• Iterative deepening search (or iterative deepening
depth-first search) is a general strategy,
• often used in combination with depth-first tree
search, that finds the best depth limit. It does this
by gradually increasing the limit: first 0, then 1,
then 2, and so on, until a goal is found.
• Iterative deepening combines the benefits of depth-
first and breadth-first search.
• Like depth-first search, its memory requirements
are modest: O(bd).
• Like breadth-first search, it is complete when the
branching factor is finite, and
• optimal when the path cost is a nondecreasing
function of the depth of the node.
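A sketch of iterative deepening built on the depth-limited search sketched above, so it inherits the same assumed interfaces; the limit grows 0, 1, 2, ... until something other than cutoff comes back.

```python
import itertools

def iterative_deepening_search(problem):
    """Run depth-limited search with limits 0, 1, 2, ... until a goal is found."""
    for depth in itertools.count():
        result = depth_limited_search(problem, depth)
        if result != 'cutoff':               # either a solution or definite failure
            return result
```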
Iterative deepening search
• Iterative deepening search may seem
wasteful because states are generated
multiple times.
• It turns out this is not too costly. Only the
upper levels are generated multiple times.
• Iterative deepening is the preferred
uninformed search method when the search
space is large and the depth of the solution
is not known.
Bi-directional Search
• Run two simultaneous searches
• One forward: the initial state to goal state
• One backward: the goal state to initial state
• The search stops where both search
strategies meet
• The motivation is that b^(d/2) + b^(d/2) is much less
than b^d.
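A sketch of the idea for breadth-first bidirectional search over an explicit undirected graph given as an adjacency dictionary; this is an assumption for illustration, since with an implicit transition model the backward search would need a way to compute predecessors. The Romania fragment in the usage example is only a partial adjacency list.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Alternate one BFS layer from the start and one from the goal;
    stop as soon as the two frontiers meet and return the meeting state."""
    if start == goal:
        return start
    frontiers = {start: deque([start]), goal: deque([goal])}
    visited = {start: {start}, goal: {goal}}
    while frontiers[start] and frontiers[goal]:
        for side, other in ((start, goal), (goal, start)):
            # Expand one full layer of this side's frontier.
            for _ in range(len(frontiers[side])):
                state = frontiers[side].popleft()
                for neighbour in graph[state]:
                    if neighbour in visited[other]:
                        return neighbour      # the two searches meet here
                    if neighbour not in visited[side]:
                        visited[side].add(neighbour)
                        frontiers[side].append(neighbour)
    return None

romania_fragment = {                          # partial road map, undirected
    'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
    'Sibiu': ['Arad', 'Fagaras', 'Rimnicu Vilcea', 'Oradea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Bucharest': ['Fagaras', 'Pitesti'],
    'Timisoara': ['Arad'],
    'Zerind': ['Arad'],
    'Oradea': ['Sibiu'],
}
print(bidirectional_search(romania_fragment, 'Arad', 'Bucharest'))
# 'Fagaras': where the forward and backward searches meet in this fragment.
```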
