
8-Puzzle

Given an initial configuration of 8 numbered tiles on a 3 x 3
board, move the tiles so as to produce a desired goal
configuration of the tiles.
Representing actions
• The number of actions / operators depends on the
representation used in describing a state.
– In the 8-puzzle, we could specify 4 possible moves for each of the 8
tiles, resulting in a total of 4*8=32 operators.
– On the other hand, we could specify four moves for the “blank” square
and we would only need 4 operators.
• Representational shift can greatly simplify a problem!
Representing states
• What information is necessary to encode about the world to
sufficiently describe all relevant aspects to solving the goal? That
is, what knowledge needs to be represented in a state description to
adequately describe the current state or situation of the world?
• The size of a problem is usually described in terms of the number
of states that are possible.
– The 8-puzzle has 181,440 states.
– Tic-Tac-Toe has about 3^9 states.
– Rubik’s Cube has about 10^19 states.
– Checkers has about 10^40 states.
– Chess has about 10^120 states in a typical game.
Some example problems

• Toy problems and micro-worlds


– 8-Puzzle
– Missionaries and Cannibals
– Cryptarithmetic
– Remove 5 Sticks
– Water Jug Problem
• Real-world problems
8-Puzzle
• State Representation: 3 x 3 array configuration of the
tiles on the board.
• Operators: Move Blank Square Left, Right, Up or Down.
– This is a more efficient encoding of the operators than one in
which each of four possible moves for each of the 8 distinct tiles is
used.
• Initial State: A particular configuration of the board.
• Goal: A particular configuration of the board.
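A minimal Python sketch of the blank-square operator encoding described above; the tuple-of-tuples state format, with 0 standing for the blank, is an illustrative assumption:

```python
# Minimal sketch of the "move the blank" operator encoding for the 8-puzzle.
# A state is a 3x3 tuple of tuples; 0 denotes the blank square (illustrative choice).

MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}

def successors(state):
    """Yield (move_name, new_state) pairs reachable by sliding the blank."""
    # Locate the blank (the single 0 tile).
    br, bc = next((r, c) for r in range(3) for c in range(3) if state[r][c] == 0)
    for name, (dr, dc) in MOVES.items():
        r, c = br + dr, bc + dc
        if 0 <= r < 3 and 0 <= c < 3:
            grid = [list(row) for row in state]
            grid[br][bc], grid[r][c] = grid[r][c], grid[br][bc]  # swap blank with neighbour
            yield name, tuple(tuple(row) for row in grid)

start = ((1, 2, 3),
         (4, 0, 6),
         (7, 5, 8))
for move, nxt in successors(start):
    print(move, nxt)   # at most 4 operators, regardless of the number of tiles
```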
The 8-Queens Problem
State Representation: ?

Initial State: ?

Operators: ?

Goal: Place eight queens on a chessboard such that no queen attacks any other!
Missionaries and Cannibals
Three missionaries and three cannibals wish to cross the river.
They have a small boat that will carry up to two people.
Everyone can navigate the boat. If at any time the Cannibals
outnumber the Missionaries on either bank of the river, they
will eat the Missionaries. Find the smallest number of crossings
that will allow everyone to cross the river safely.
Missionaries and Cannibals
• Goal: Move all the missionaries and
cannibals across the river.
• Constraint: Missionaries can never be
outnumbered by cannibals on either side
of river, or else the missionaries are
killed.
• State: configuration of missionaries and
cannibals and boat on each side of river.
• Initial State: 3 missionaries, 3 cannibals,
and the boat are on the near bank.
• Operators: Move boat containing some
set of occupants across the river (in
either direction) to the other side.
Missionaries and Cannibals Solution
Near side Far side
0 Initial setup: MMMCCC B -
1 Two cannibals cross over: MMMC B CC
2 One comes back: MMMCC B C
3 Two cannibals go over again: MMM B CCC
4 One comes back: MMMC B CC
5 Two missionaries cross: MC B MMCC
6 A missionary & cannibal return: MMCC B MC
7 Two missionaries cross again: CC B MMMC
8 A cannibal returns: CCC B MMM
9 Two cannibals cross: C B MMMCC
10 One returns: CC B MMMC
11 And brings over the third: - B MMMCCC
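A breadth-first sketch of this formulation; the (missionaries, cannibals, boat) state encoding for the near bank is an illustrative assumption. It finds a shortest plan with the 11 crossings listed above:

```python
from collections import deque

# Breadth-first search sketch for Missionaries and Cannibals.
# State: (m, c, b) = missionaries, cannibals and boat (1/0) on the near bank.

def safe(m, c):
    # Missionaries may not be outnumbered on either bank (unless none are there).
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:   # boat carries 1 or 2 people
        nm, nc = (m - dm, c - dc) if b else (m + dm, c + dc)  # away from / back to near bank
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, 1 - b)

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:                       # reconstruct the path of states
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)

print(bfs())   # 12 states, i.e. 11 crossings, matching the listing above
```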
Cryptarithmetic
• Find an assignment of digits (0, ..., 9) to letters so that a
given arithmetic expression is true. Examples: SEND + MORE = MONEY, and:

      FORTY          29786
    +   TEN        +   850
    +   TEN        +   850
    -------        -------
      SIXTY          31486

Solution: F=2, O=9, R=7, etc.
Cryptarithmetic
• State: mapping from letters to digits
• Initial State: empty mapping
• Operators: assign a digit to a letter
• Goal Test: whether the expression is true given the complete mapping

Note: In this problem, the solution is NOT a
sequence of actions that transforms the initial
state into the goal state; rather, the solution is a
goal node that includes an assignment of a digit to
each letter in the given problem.
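A brute-force sketch of the goal test over complete assignments for SEND + MORE = MONEY; it simply tries permutations of digits rather than building the mapping incrementally with the operator above:

```python
from itertools import permutations

# Brute-force sketch for SEND + MORE = MONEY: try assignments of distinct digits
# to the letters and test whether the arithmetic expression becomes true.
# (About 1.8 million permutations - fine for a one-off sketch.)

letters = "SENDMORY"                      # the 8 distinct letters in the puzzle
for digits in permutations(range(10), len(letters)):
    a = dict(zip(letters, digits))
    if a["S"] == 0 or a["M"] == 0:        # leading digits may not be zero
        continue
    send  = int("".join(str(a[ch]) for ch in "SEND"))
    more  = int("".join(str(a[ch]) for ch in "MORE"))
    money = int("".join(str(a[ch]) for ch in "MONEY"))
    if send + more == money:              # goal test on the complete mapping
        print(send, "+", more, "=", money)
        break
```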
Remove 5 Sticks
Given the following
configuration of sticks, remove
exactly 5 sticks in such a way
that the remaining configuration
forms exactly 3 squares.

• State: ?

• Initial State: ?

• Operators: ?

• Goal Test: ?
Water Jug Problem
Given a full 5-gallon jug and a full 2-gallon jug, fill the 2-gallon jug with
exactly one gallon of water.

• State: ?

• Initial State: ?
• Operators: ?

• Goal State: ?
Water Jug Problem
• State = (x, y), where x is the number of gallons of water in the
5-gallon jug and y is the number of gallons in the 2-gallon jug.
• Initial State = (5, 2)
• Goal State = (*, 1), where * means any amount.

Operator table

Name      Cond.   Transition         Effect
Empty5    –       (x,y) → (0,y)      Empty 5-gal. jug
Empty2    –       (x,y) → (x,0)      Empty 2-gal. jug
2to5      x ≤ 3   (x,2) → (x+2,0)    Pour 2-gal. into 5-gal.
5to2      x ≥ 2   (x,0) → (x-2,2)    Pour 5-gal. into 2-gal.
5to2part  y < 2   (1,y) → (0,y+1)    Pour partial 5-gal. into 2-gal.
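A breadth-first sketch over this state space, applying the operator table directly; the function and operator names mirror the table, and the representation details are illustrative:

```python
from collections import deque

# Sketch of the water jug formulation above: state (x, y) = gallons in the
# 5-gallon and 2-gallon jugs; operators follow the operator table.

def successors(state):
    x, y = state
    ops = [("Empty5", (0, y)),                            # empty the 5-gal. jug
           ("Empty2", (x, 0))]                            # empty the 2-gal. jug
    if y == 2 and x <= 3:
        ops.append(("2to5", (x + 2, 0)))                  # pour 2-gal. into 5-gal.
    if y == 0 and x >= 2:
        ops.append(("5to2", (x - 2, 2)))                  # pour 5-gal. into 2-gal.
    if x == 1 and y < 2:
        ops.append(("5to2part", (0, y + 1)))              # pour partial 5-gal. into 2-gal.
    return ops

def bfs(start=(5, 2)):
    frontier, parent = deque([start]), {start: (None, None)}
    while frontier:
        s = frontier.popleft()
        if s[1] == 1:                                     # goal: exactly 1 gallon in the 2-gal. jug
            plan = []
            while parent[s][0] is not None:
                plan.append(parent[s][1])
                s = parent[s][0]
            return plan[::-1]
        for op, t in successors(s):
            if t not in parent:
                parent[t] = (s, op)
                frontier.append(t)

print(bfs())   # ['Empty2', '5to2', 'Empty2', '5to2', 'Empty2', '5to2part']
```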
Some more real-world problems
• Route finding
• Touring (traveling salesman)
• Logistics
• VLSI layout
• Robot navigation
• Learning
Problem Solving
• The process of problem-solving using searching consists of the
following steps.
– Define the problem
– Analyze the problem
– Identify possible solutions
– Choose the optimal solution
– Implementation
• Properties of search algorithms
– Completeness: A search algorithm is complete if it is guaranteed to find a
solution whenever one exists for the given input.
– Optimality: A solution is optimal if it has the lowest path cost among all
solutions; an algorithm is optimal if it always finds such a solution.
– Time complexity
– Space complexity
Types of search algorithms
• Based on the search problems, we can classify the search
algorithm as
– Uninformed search
– Informed search

• Uninformed search algorithm:


– It does not have any domain knowledge, such as the closeness or location
of the goal state; it behaves in a brute-force way.
– It only knows how to traverse the given tree
and how to identify the goal state.
– This algorithm is also known as the Blind search algorithm
or Brute-Force algorithm.
– Example: BFS, DFS, Depth-limited search, Iterative deepening
depth-first search, Bidirectional search, Uniform cost search
Comparison of uninformed search algorithms

Algorithm              Time                 Space                Complete   Optimality
Breadth First          O(b^d)               O(b^d)               Yes        Yes
Depth First            O(b^m)               O(bm)                No         No
Depth Limited          O(b^l)               O(bl)                No         No
Iterative Deepening    O(b^d)               O(bd)                Yes        Yes
Bidirectional          O(b^(d/2))           O(b^(d/2))           Yes        Yes
Uniform Cost           O(b^(1+⌊C*/ε⌋))      O(b^(1+⌊C*/ε⌋))      Yes        Yes

b – maximum branching factor of the search tree
d – depth of the least-cost solution
m – maximum depth of the state space (may be infinite)
l – depth limit;  C* – cost of the optimal solution;  ε – minimum step cost
Advantages
• DFS requires very little memory as it
only needs to store a stack of the nodes
on the path from the root node to the
current node.
• It takes less time to reach the goal node
than the BFS algorithm (if it traverses in
the right path).
Disadvantages
• There is the possibility that many states
keep reoccurring, and there is no
guarantee of finding a solution.
• The DFS algorithm goes for deep-down
searching, and sometimes it may go to Complete: No
the infinite loop. Time Complexity: O(b^m)
Space complexity: O(bm)
Optimal: Yes
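A minimal recursive DFS sketch; the adjacency-dictionary graph format is an illustrative assumption:

```python
# Minimal depth-first search sketch on an adjacency-list graph (illustrative format).
# Returns a path from start to goal, or None; a visited set avoids re-expanding states.

def dfs(graph, start, goal, visited=None):
    visited = visited if visited is not None else set()
    if start == goal:
        return [start]
    visited.add(start)
    for nxt in graph.get(start, []):
        if nxt not in visited:
            path = dfs(graph, nxt, goal, visited)
            if path is not None:
                return [start] + path
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "C": [], "D": ["G"], "E": []}
print(dfs(graph, "S", "G"))   # ['S', 'A', 'D', 'G']
```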
Breadth-First Search (BFS)
Advantages
• BFS will provide a solution if any
solution exists.
• If there is more than one solution for a
given problem, then BFS will provide the
minimal solution which requires the least
number of steps.
Disadvantages
• It requires lots of memory since each
level of the tree must be saved in
memory to expand to the next level.
• BFS needs lots of time if the solution is
far away from the root node.
Complete: Yes (assuming b is finite)
Time Complexity: O(b^d)
Space complexity: O(b^d)
Optimal: Yes, if all step costs are equal (e.g., step cost = 1)
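A minimal BFS sketch on the same adjacency-dictionary format (illustrative); because it expands level by level, the first path returned uses the fewest steps:

```python
from collections import deque

# Minimal breadth-first search sketch on an adjacency-list graph (illustrative format).
# Expands level by level, so the first path found uses the fewest steps.

def bfs(graph, start, goal):
    frontier = deque([[start]])           # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
print(bfs(graph, "S", "G"))   # ['S', 'B', 'G'] - the shallowest solution
```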
Uniform Cost Search (UCS)
• S → start node and G → goal node
• UCS expands the node with the lowest path cost.
• So node A becomes the successor rather than
the required goal node G.
• Solution path: S → A → C → D → G (solution cost 6)

Advantages
• Uniform cost search is an optimal search method because at every state, the path with
the least cost is chosen.
Disadvantages
• It does not care about the number of steps or the length of the path involved in the
search; it is only concerned with path cost. The algorithm may get stuck
in an infinite loop.

Complete: Yes (if b is finite and every step cost is at least some positive ε)
Time Complexity: O(b^(1+⌊C*/ε⌋)), where ε is the minimum step cost and C* is the cost of the optimal solution
Space complexity: O(b^(1+⌊C*/ε⌋))
Optimal: Yes (even with non-uniform step costs)
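A UCS sketch ordered by path cost g(n). The edge weights below are assumptions chosen so that the cheapest route matches the S → A → C → D → G (cost 6) outcome described above, since the slide's figure is not reproduced here:

```python
import heapq

# Uniform cost search sketch on a weighted graph {node: [(neighbour, step_cost), ...]}.
# Always expands the frontier node with the lowest path cost g(n).

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]          # (path cost, node, path)
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Illustrative weights: the cheapest route to G goes through A, C and D (cost 6),
# so A is expanded before the directly reachable but expensive goal G.
graph = {"S": [("A", 1), ("G", 12)], "A": [("C", 1)], "C": [("D", 1)], "D": [("G", 3)]}
print(ucs(graph, "S", "G"))   # (6, ['S', 'A', 'C', 'D', 'G'])
```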
Depth-Limited Search (DLS)
• The failure of DFS in infinite state spaces is alleviated by
supplying depth-first search with a predetermined depth limit.
• The depth limit solves the infinite-path problem.

Advantages
• Depth-limited search is memory efficient.
Disadvantages
• DLS is incomplete if the shallowest solution lies beyond the depth limit, and it is not
optimal when the problem has more than one solution.

Complete: Yes (if the solution lies within the depth limit l)
Time Complexity: O(b^l), where l is the depth limit
Space complexity: O(bl)
Optimal: No (the first solution found within the limit need not be the shallowest, even if l > d)
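A depth-limited search sketch: plain DFS that refuses to descend below the limit l (graph format illustrative):

```python
# Depth-limited search sketch: ordinary DFS that refuses to go deeper than `limit`.
# Returns a path to the goal, or None if nothing is found within the limit.

def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                              # cutoff reached
    for nxt in graph.get(node, []):
        path = dls(graph, nxt, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
print(dls(graph, "S", "G", limit=2))   # ['S', 'B', 'G']
print(dls(graph, "S", "G", limit=1))   # None - the solution lies beyond the limit
```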
Iterative Deepening Depth-First Search (IDDFS)
• It combines the strengths of the BFS and DFS algorithms.
• It runs a depth-limited search with an increasing
depth limit in each iteration.
• It repeats this until the goal node is found.
Advantages
• It combines the benefits of BFS and DFS search algorithms in terms of fast search and
memory efficiency.
Disadvantages
• The main drawback of IDDFS is that it repeats all the work from the previous phase.

Complete: Yes (by limiting the depth)


Time Complexity: O(b^d)
Space complexity: O(bd)
Optimal: Yes (if step cost is uniform)
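An IDDFS sketch that reruns a depth-limited search with limits 0, 1, 2, ...; the repeated shallow work mentioned above is visible in the outer loop:

```python
# Iterative deepening sketch: run depth-limited search with limits 0, 1, 2, ...
# until a solution appears; shallow levels are re-explored on every round.

def dls(graph, node, goal, limit):
    """Plain depth-first search that stops descending at the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        path = dls(graph, nxt, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        path = dls(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
print(iddfs(graph, "S", "G"))   # ['S', 'B', 'G'], found with limit 2
```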
Bidirectional Search
• Bidirectional search is a combination of forward and backward search.
• Traverse the tree from the start node and from the goal node; wherever
the two searches meet, the path from the start node to the goal
through the intersection is the optimal solution.

Advantages
• Since bidirectional search can use techniques like DFS, BFS, DLS, etc.,
it is efficient and requires less memory.
Disadvantages
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.

Complete: Yes
Time Complexity: O(b^(d/2))
Space complexity: O(b^(d/2))
Optimal: Yes (if step cost is uniform in both forward and backward directions)
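A bidirectional BFS sketch with one frontier from the start and one from the goal; it assumes an undirected adjacency-dictionary graph (illustrative):

```python
from collections import deque

# Bidirectional search sketch: two breadth-first frontiers, one from the start and
# one from the goal, stopping as soon as they meet (undirected graph assumed).

def bidirectional(graph, start, goal):
    if start == goal:
        return [start]
    parents_s, parents_g = {start: None}, {goal: None}
    frontier_s, frontier_g = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                if nxt in other_parents:      # the two searches meet here
                    return nxt
                frontier.append(nxt)
        return None

    while frontier_s and frontier_g:
        meet = expand(frontier_s, parents_s, parents_g) or \
               expand(frontier_g, parents_g, parents_s)
        if meet:
            left, right, n = [], [], meet
            while n is not None:              # walk back to the start
                left.append(n)
                n = parents_s[n]
            n = parents_g[meet]
            while n is not None:              # walk forward to the goal
                right.append(n)
                n = parents_g[n]
            return left[::-1] + right
    return None

graph = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "D"],
         "C": ["A", "G"], "D": ["B", "G"], "G": ["C", "D"]}
print(bidirectional(graph, "S", "G"))   # ['S', 'A', 'C', 'G']
```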
• Uninformed algorithms are used in search problems, where the
goal is to find a solution by exploring a large search space.
• Uninformed algorithms are often simple to implement and can be
effective in solving certain problems, but they may also be less
efficient than informed algorithms that use heuristics to guide their
search.
• Informed search algorithms use additional knowledge or
heuristics to guide the search process.
• The most popular way to give the search algorithm more
information about the problem is to use a heuristic function.
• A heuristic function h(n) estimates the cost of reaching the goal state
from a particular node n; h(n) = 0 if n is a goal node.
• Any problem-specific function is acceptable, as long as it is
nonnegative.
Best First Search Algorithm

• The Informed BFS follows a greedy approach for state transitions to reach
a goal.
• For every node here, an evaluation function(f(n)) is maintained, which
provides a cost estimate.
• The idea is to expand the node with the lowest f(n) every time.

• The Evaluation function here has a heuristic function h(n) component. i.e.
f(n)=h(n)
• The node that seems to be closest to the goal is therefore expanded first.

• The implementation of this algorithm is done using a priority queue


ordered by the evaluation function for each node. 
• Step 1: Place the starting node into the OPEN list.
• Step 2: If the OPEN list is empty, Stop and return failure.
• Step 3: Remove the node n that has the lowest value of h(n) from the OPEN list,
and place it in the CLOSED list.
• Step 4: Expand the node n, and generate the successors of node n.
• Step 5: Check each successor of node n, and find whether any node is a goal node or
not. If any successor node is the goal node, then return success and stop the search,
else continue to next step.
• Step 6: For each successor node, the algorithm computes the evaluation function f(n)
and then checks whether the node is already in the OPEN or CLOSED list. If it is in
neither list, add it to the OPEN list.
• Step 7: Return to Step 2.

Completeness: No. It can get stuck in a loop.
Space Complexity: O(b^m)
Time Complexity: O(b^m)
(but a good heuristic can make a drastic improvement)
Optimal: No
• Apply Best First Search (greedy approach)

Open : [S]
Closed: []

Open: [A,B]
Closed: [S]

Open: [E,F,A]
Closed: [S,B]

Open: [I,G,E,A]
Closed: [S,B,F]

Open: [I,E,A]
Closed: [S,B,F,G]

The path taken is S->B->F->G
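A greedy best-first sketch whose priority queue is ordered by h(n) alone. The graph and heuristic values are assumptions chosen to reproduce the S → B → F → G trace above (the slide's figure is not reproduced), and the goal is tested when a node is removed from OPEN, a common variant of the steps listed earlier:

```python
import heapq

# Greedy best-first search sketch: the priority queue is ordered by h(n) alone.
# Graph and heuristic values are illustrative assumptions matching the trace above.

graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"],
         "A": [], "E": [], "I": [], "G": []}
h = {"S": 9, "A": 8, "B": 6, "E": 7, "F": 4, "I": 5, "G": 0}

def greedy_best_first(start, goal):
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)      # node with the lowest h(n)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nxt in graph[node]:
            if nxt not in closed:
                heapq.heappush(open_list, (h[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("S", "G"))   # ['S', 'B', 'F', 'G']
```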


A* Search Algorithm
• A* Search Strategy guarantees an optimal solution.

• The idea is to avoid expanding paths that are already expensive.

• It uses an evaluation function f(n) = g(n) + h(n) to evaluate every node (state) on the path,
where g(n) is the actual cost from the initial state to the current node and
h(n) is the estimated cost from the current node to the goal state.

• The optimality of the solution for the A* search depends on the admissibility
of the heuristic function we choose. 
Admissible Heuristic

• A heuristic function h(n) is admissible if, for every node n, h(n) is less
than or equal to h*(n), the true cost of reaching the goal from n.
• An admissible heuristic never overestimates the cost of reaching the
goal.
• It is always optimistic about finding the best path to the goal node.

• If h(n) is admissible for the A* algorithm, we get the optimal solution


to our problem.
• If we have two admissible functions h1 and h2, where h2(n) ≥ h1(n) for all n,
we say h2 dominates h1.
• h1 (n) = number of misplaced tiles = 8
• h2 (n) = total Manhattan distance (i.e., no. of squares from
desired location of each tile) = 3+1+2+2+2+3+3+2 = 18
• If h2 (n) ≥ h1 (n) for all n (both admissible) then h2
dominates h1 (i.e. h2 is better for search)
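Sketches of h1 (misplaced tiles) and h2 (Manhattan distance). The start and goal boards below are the standard textbook configuration that gives h1 = 8 and h2 = 18, assumed here to match the slide's figure:

```python
# Two admissible 8-puzzle heuristics: h1 = misplaced tiles, h2 = total Manhattan
# distance. States are 3x3 tuples with 0 as the blank (illustrative encoding).

GOAL = ((0, 1, 2),
        (3, 4, 5),
        (6, 7, 8))
GOAL_POS = {GOAL[r][c]: (r, c) for r in range(3) for c in range(3)}

def h1(state):
    """Number of tiles (excluding the blank) not in their goal position."""
    return sum(state[r][c] != 0 and state[r][c] != GOAL[r][c]
               for r in range(3) for c in range(3))

def h2(state):
    """Sum of Manhattan distances of each tile from its goal position."""
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:
                gr, gc = GOAL_POS[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

start = ((7, 2, 4),
         (5, 0, 6),
         (8, 3, 1))
print(h1(start), h2(start))   # 8 18 -- h2(n) >= h1(n) everywhere, so h2 dominates h1
```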
g(n) = Depth of node

h(n) = Number of misplaced tiles


• The numbers written on edges represent the distance between the nodes.
• The numbers written on nodes represent the heuristic value.
• Find the most cost-effective path to reach from start state A to final state J using
A* Algorithm.
• We start with node A. Node B and Node F can be reached from node A.
• f(B) = 6 + 8 = 14, f(F) = 3 + 6 = 9
• Since f(F) < f(B), it decides to go to node F. Path: A → F

• Node G and Node H can be reached from node F.
• f(G) = (3+1) + 5 = 9, f(H) = (3+7) + 3 = 13
• Since f(G) < f(H), it decides to go to node G. Path: A → F → G
• Node I can be reached from node G.
• f(I) = (3+1+3) + 1 = 8. It decides to go to node I. Path: A → F → G → I

• Node E, Node H and Node J can be reached from node I.
• f(E) = (3+1+3+5) + 3 = 15, f(H) = (3+1+3+2) + 3 = 12, f(J) = (3+1+3+3) + 0 = 10
• Since f(J) is the least, it decides to go to node J.
• Path: A → F → G → I → J
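A generic A* sketch with f(n) = g(n) + h(n). The edge costs and heuristic values are reconstructed from the f-computations in the trace above; h(A) and the overall graph shape are assumptions where the trace does not pin them down:

```python
import heapq

# A* search sketch using f(n) = g(n) + h(n). Graph and heuristics mirror the
# A -> F -> G -> I -> J worked example above (values are illustrative assumptions).

graph = {
    "A": [("B", 6), ("F", 3)],
    "B": [],
    "F": [("G", 1), ("H", 7)],
    "G": [("I", 3)],
    "H": [],
    "I": [("E", 5), ("H", 2), ("J", 3)],
    "E": [],
    "J": [],
}
h = {"A": 10, "B": 8, "F": 6, "G": 5, "H": 3, "I": 1, "E": 3, "J": 0}

def astar(start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for nxt, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(open_list, (new_g + h[nxt], new_g, nxt, path + [nxt]))
    return None

print(astar("A", "J"))   # (10, ['A', 'F', 'G', 'I', 'J'])
```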
Local Search Algorithm

• In many problems, it is unimportant how the goal is reached - only


the goal itself matters (8-queens problem, VLSI Layout, TSP).
• If in addition a quality measure for states is given, a local search can
be used to find solutions.
• Local search operates using a single current node (rather than multiple paths) and uses
very little memory.
• Idea: Begin with a randomly-chosen configuration and improve on it
stepwise → Hill Climbing.
• It can be used for maximization or minimization
Hill Climbing Algorithm
• Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make
the initial state as the current state. 
• Loop until a solution state is found or there are no new operators left that can be
applied to the current state.
– Select an operator that has not yet been applied to the current state and apply it to produce a
new state.
– Evaluate the new state:
• If it is a goal state, then stop and return success.
• If it is better than the current state, then make it the current state and proceed further.
• If it is not better than the current state, then continue in the loop until a solution is
found.
• Exit from the function.
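A sketch of this loop in its steepest-ascent form (the variants listed later distinguish the flavours); neighbours() and value() are placeholder functions supplied by the caller:

```python
import random

# Hill climbing sketch: repeatedly move to the best-scoring neighbour and stop
# when no neighbour improves on the current state (a local maximum).

def hill_climbing(initial, neighbours, value):
    current = initial
    while True:
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=value)        # steepest-ascent choice of move
        if value(best) <= value(current):
            return current                       # no improvement: local maximum reached
        current = best

# Illustrative use: maximise a simple objective over integer states.
def value(x):
    return -(x - 7) ** 2 + 50

def neighbours(x):
    return [x - 1, x + 1]

start = random.randint(0, 20)
print(start, "->", hill_climbing(start, neighbours, value))   # climbs to x = 7, the single peak
```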
N-Queen Problem
• The heuristic cost function h is the number of pairs of
queens that are attacking each other, either directly or
indirectly
• The global minimum of this function is zero, which occurs
only at perfect solutions.
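A sketch of this heuristic for 8 queens, with a board encoded as board[col] = row (an illustrative encoding, one queen per column):

```python
from itertools import combinations

# Heuristic for N-queens hill climbing: h = number of pairs of queens attacking
# each other (same row or same diagonal). board[col] = row of the queen in that column.

def attacking_pairs(board):
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or same diagonal
            pairs += 1
    return pairs

print(attacking_pairs((0, 1, 2, 3, 4, 5, 6, 7)))   # 28: all queens on one diagonal
print(attacking_pairs((0, 4, 7, 5, 2, 6, 1, 3)))   # 0: a perfect solution (global minimum)
```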
• Hill Climbing is NOT complete and NOT optimal.
• But it has low memory requirements – usually constant.
• It is effective – it can often find good solutions in extremely large state spaces.
• Randomized variants of hill climbing can solve many of the drawbacks in
practice.
Variants
• In Steepest Ascent Hill Climbing, the algorithm evaluates all the
possible moves from the current solution and selects the one that
leads to the best improvement.
• In First-Choice Hill Climbing, the algorithm selects the first
move that leads to an improvement, regardless of whether it is
the best move.
• In Stochastic Hill Climbing, the algorithm randomly selects a
move and accepts it if it leads to an improvement, regardless of
whether it is the best move.
• Simulated annealing is a probabilistic variation of Hill Climbing
that allows the algorithm to occasionally accept worse moves in
order to avoid getting stuck in local maxima.
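A simulated-annealing sketch of that idea: worse moves are accepted with probability exp(Δ/T), where the temperature T decays over time; all parameters and the toy objective are illustrative:

```python
import math
import random

# Simulated annealing sketch: like hill climbing, but a worse move may still be
# accepted with probability exp(delta / T); T (the "temperature") is lowered over
# time, which lets the search escape local maxima early on.

def simulated_annealing(initial, neighbour, value, t0=10.0, cooling=0.995, steps=5000):
    current, t = initial, t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                  # accept improvements, and some worse moves
        t *= cooling                             # cool down gradually
    return current

# Illustrative bumpy objective: many local maxima, global maximum near x = 6.6.
def value(x):
    return -0.1 * (x - 7) ** 2 + math.sin(5 * x)

best = simulated_annealing(0.0, lambda x: x + random.uniform(-1, 1), value)
print(round(best, 2), round(value(best), 2))   # typically ends near the best region around x = 6.6
```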
• Solve the blocks world problem using the hill
climbing algorithm.
• h(n) = Add one point for every block that is resting
on the thing it is supposed to be resting on and
subtract one point for every block that is sitting on
the wrong thing.
• h(initial state S0) = -1-1+1+1+1+1+1+1 = 4
• h(Goal state Sg) = 8
• S1 → take block A and place it on the floor
• h(S1) = 1-1+1+1+1+1+1+1 = 6
• S2 → 3 possibilities: (a), (b) and (c)
• h(S2(a)) = -1-1+1+1+1+1+1+1 = 4
• h(S2(b)) = 1-1+1+1+1+1+1-1 = 4
• h(S2(c)) = 1-1+1+1+1+1+1-1 = 4

Hill climbing will halt because all these states have lower scores than the
current state. The process has reached a local maximum.
Blocks world problem: an alternative (global) heuristic
• h(n): for each block whose support structure is entirely correct, add one point for
every block in that structure; for each block whose support structure is incorrect,
subtract one point for every block in that structure.

• h(initial state) = -3-2-1 = -6


• h(goal state) = 0+1+2+3 = 6
