Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
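A minimal sketch of this implementation in Python (the successor function, goal test, and the small example tree are illustrative assumptions, not from the slides):

```python
def depth_first_search(start, goal_test, successors):
    """Tree-style DFS: the fringe is a LIFO stack, so the successors of the
    most recently expanded node are expanded first (deepest-first)."""
    fringe = [start]                      # LIFO: push and pop at the same end
    while fringe:
        node = fringe.pop()               # take the deepest unexpanded node
        if goal_test(node):
            return node
        fringe.extend(successors(node))   # put successors at the front of the fringe
    return None

# Illustrative usage on a small tree given as an adjacency dict.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
print(depth_first_search("A", lambda n: n == "E", lambda n: tree.get(n, [])))  # E
```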
AI - Popular Search Algorithms
Depth-First Iterative Deepening
Iterative deepening search with depth limits l = 1, l = 2, and l = 3 (successive search-tree figures)
Iterative deepening search
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d
Iterative deepening regenerates the nodes at depth i once for every limit from i up to d, i.e. (d + 1 - i) times, so:
N_IDS = (d+1)*b^0 + d*b^1 + (d-1)*b^2 + ... + 3*b^(d-2) + 2*b^(d-1) + 1*b^d
For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
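These counts can be reproduced directly; a small sketch using the values of b and d from the example above:

```python
b, d = 10, 5

# Depth-limited search to depth d generates each node once.
n_dls = sum(b**i for i in range(d + 1))

# Iterative deepening regenerates the nodes at depth i once for every
# limit from i up to d, i.e. (d + 1 - i) times.
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_dls)  # 111111
print(n_ids)  # 123456
```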
Bidirectional Search
It searches forward from the initial state and backward from the goal state until the two searches meet at a common state. The path from the initial state is then concatenated with the inverse of the path from the goal state, so each search only has to cover about half of the total path.
https://efficientcodeblog.wordpress.com/2017/12/13/bidirectional-search-two-end-bfs/
Comparison of complexities (d = depth of the shallowest solution, m = maximum depth of the search tree):
Criterion   Breadth-First   Depth-First   Bidirectional   Uniform Cost   Iterative Deepening
Time        b^d             b^m           b^(d/2)         b^d            b^d
Space       b^d             b^m           b^(d/2)         b^d            b^d
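A minimal bidirectional BFS sketch (the adjacency-dict graph representation and the meeting-point bookkeeping are illustrative choices, not prescribed by the slides):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS forward from start and backward from goal until the frontiers
    meet; return the joined path, or None if there is no path."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    queue_f, queue_b = deque([start]), deque([goal])

    def expand(queue, parents, other_parents):
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
                if nxt in other_parents:          # the two searches meet here
                    return nxt
        return None

    while queue_f and queue_b:
        meet = expand(queue_f, parents_f, parents_b)
        if meet is None:
            meet = expand(queue_b, parents_b, parents_f)
        if meet is not None:
            path = []                             # path from start to the meeting state ...
            n = meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]                   # ... concatenated with the inverse
            while n is not None:                  # path from the goal side
                path.append(n)
                n = parents_b[n]
            return path
    return None

# Illustrative usage.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_search(g, "A", "D"))  # ['A', 'B', 'C', 'D']
```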
Travelling Salesman Problem
Brute-force approach:
Start
1. Find all (n-1)! possible tours, where n is the total number of cities.
2. Determine the minimum-cost tour by computing the cost of each of these (n-1)! tours.
End
This requires (n-1)! paths to be examined for n cities, so as the number of cities grows, the time the salesman must wait to learn the shortest path quickly becomes impractical.
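A brute-force sketch of the above procedure (the 4-city distance matrix is an illustrative example, not from the text):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Fix city 0 as the start and examine all (n-1)! orderings of the rest."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Illustrative 4-city instance: 3! = 6 candidate tours are examined.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))  # (80, (0, 1, 3, 2, 0))
```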
An improved approach:
1. Start generating complete paths, keeping track of the shortest path found so far.
2. Stop exploring any path as soon as its partial length becomes greater than the shortest path length found so far.
Solution: path 6 and path 10.
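A sketch of that pruning idea (depth-first generation of tours that abandons any partial path longer than the best complete tour found so far; the same illustrative 4-city matrix is repeated so the snippet stands alone):

```python
def tsp_branch_and_bound(dist):
    """Generate tours depth-first, pruning partial paths that already exceed
    the length of the shortest complete tour found so far."""
    n = len(dist)
    best = {"cost": float("inf"), "tour": None}

    def extend(path, cost):
        if cost >= best["cost"]:                  # prune: partial path already too long
            return
        if len(path) == n:                        # complete tour: close the cycle
            total = cost + dist[path[-1]][path[0]]
            if total < best["cost"]:
                best["cost"], best["tour"] = total, path + [path[0]]
            return
        for city in range(n):
            if city not in path:
                extend(path + [city], cost + dist[path[-1]][city])

    extend([0], 0)
    return best["cost"], best["tour"]

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_branch_and_bound(dist))  # (80, [0, 1, 3, 2, 0])
```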
Informed (Heuristic) Search Strategies
To solve large problems with a large number of possible states, problem-specific knowledge needs to be added to increase the efficiency of search algorithms.
Heuristic Evaluation Functions
They estimate the cost of an optimal path between two states. A heuristic function for sliding-tile games can be computed by counting, for each tile, the number of moves it is away from its goal position and adding up these counts over all tiles.
Pure Heuristic Search
It expands nodes in order of their heuristic values h(n), keeping a closed list of nodes that have already been expanded and an open list of nodes that have been generated but not yet expanded.
A* Search Algorithm
Example: consider the eight-puzzle problem below and solve it by the A* algorithm.
Define the evaluation function as:
f(X) = g(X) + h(X), where
h(X) = the number of tiles not in their goal position in a given state X (h(start state) = 4)
g(X) = the depth of node X in the search tree (g(start state) = 0)
so f(start state) = 0 + 4 = 4.
Start state        Goal state
3 7 6              5 3 6
5 1 2              7 _ 2
4 _ 8              4 1 8
A* Search Algorithm example: search tree
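A compact A* sketch that solves this instance with the misplaced-tiles heuristic defined above (the flat tuple encoding, with 0 standing for the blank, is an illustrative choice):

```python
import heapq

def misplaced(state, goal):
    """h(X): number of tiles (blank excluded) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def neighbors(state):
    """States reachable by sliding one tile into the blank (0)."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start, goal, h):
    """Expand nodes in order of f(X) = g(X) + h(X), where g is the depth."""
    frontier = [(h(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt, goal), g + 1, nxt, [*path, nxt]))
    return None

start = (3, 7, 6, 5, 1, 2, 4, 0, 8)   # the start grid above, 0 = blank
goal  = (5, 3, 6, 7, 0, 2, 4, 1, 8)   # the goal grid above
print(len(astar(start, goal, misplaced)) - 1, "moves")  # 5 moves
```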
A* Search Algorithm:
Note: the quality of the solution depends on the heuristic function, so the heuristic used above may not work well for harder eight-puzzle problems.
Harder instances, such as the one below, are not handled well by the simple heuristic function given above; we need a better heuristic function that estimates the h() value more accurately.
Keep the function g(X) the same and modify the function h(X) as:
h(X) = the sum of the distances of the tiles (1 to 8) from their goal positions in a given state X (the Manhattan-distance heuristic).
Start state        Goal state
3 5 1              5 3 6
2 _ 7              7 _ 2
4 8 6              4 1 8
h(start state) = m(1) + m(2) + m(3) + m(4) + m(5) + m(6) + m(7) + m(8)
               = 3 + 2 + 1 + 0 + 1 + 2 + 2 + 1 = 12
where m(i) is the distance of tile i from its goal position.
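A sketch of this improved heuristic, reproducing the value 12 for the start state above (same flat tuple encoding, with 0 for the blank, as in the earlier A* sketch):

```python
def manhattan(state, goal):
    """h(X): sum over tiles 1..8 of the row distance plus the column distance
    between the tile's position in state and its position in goal."""
    total = 0
    for tile in range(1, 9):
        si, gi = state.index(tile), goal.index(tile)
        total += abs(si // 3 - gi // 3) + abs(si % 3 - gi % 3)
    return total

start = (3, 5, 1, 2, 0, 7, 4, 8, 6)   # the start grid above, 0 = blank
goal  = (5, 3, 6, 7, 0, 2, 4, 1, 8)
print(manhattan(start, goal))          # 12
```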
Optimal Solution by the A* Algorithm:
The A* algorithm finds an optimal solution if the heuristic function is carefully designed and underestimates the true cost.
Underestimation: if we can guarantee that h never overestimates the actual cost from the current state to the goal, then the A* algorithm is guaranteed to find an optimal path to a goal, if one exists.
Overestimation: the situation where we overestimate the heuristic value h. By overestimating h, we are no longer guaranteed to find the shortest path.
Admissibility of A*:
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the start state to a goal state, whenever such a path exists.
A* always terminates with the optimal path when h is an admissible heuristic function.
Monotonic Function:
A heuristic function h is monotone if
1. for all states Xi and Xj such that Xj is a successor of Xi,
   h(Xi) - h(Xj) <= cost(Xi, Xj), the actual cost of going from Xi to Xj; and
2. h(Goal) = 0.
The monotone property ensures that each state is reached along the shortest path from its ancestors.
Every monotonic heuristic function is admissible.
• A cost function is monotone if f(N) <= f(succ(N)) for every successor succ(N) of N.
• For any admissible cost function f, we can construct a monotonic admissible function.
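These two conditions can be checked mechanically on a sample of states; a small generic sketch (the toy line graph, unit edge costs, and exact-distance heuristic in the usage example are illustrative assumptions):

```python
def is_monotone(h, successors, cost, states, goal):
    """Check h(Xi) - h(Xj) <= cost(Xi, Xj) for every successor Xj of every
    sampled state Xi, together with h(goal) == 0."""
    if h(goal) != 0:
        return False
    return all(h(x) - h(y) <= cost(x, y)
               for x in states for y in successors(x))

# Illustrative usage: line graph A-B-C-D with unit edge costs; the exact
# distance to the goal D is a monotone (and hence admissible) heuristic.
succ = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
dist_to_goal = {"A": 3, "B": 2, "C": 1, "D": 0}
print(is_monotone(dist_to_goal.get, succ.get, lambda x, y: 1, list(succ), "D"))  # True
```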
Iterative-Deepening A* (IDA*) Algorithm:
IDA* is a combination of depth-first iterative deepening and the A* algorithm.
Algorithmic steps: see the sketch below.
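Since the individual steps were not preserved here, the following is a standard IDA* sketch: repeated cost-bounded depth-first searches, where each new bound is the smallest f-value that exceeded the previous one (the generic successors and h callables, and the toy graph in the usage example, are illustrative assumptions):

```python
def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first searches bounded by f = g + h."""
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                       # report the f-value that broke the bound
        if goal_test(node):
            return True
        minimum = float("inf")
        for nxt, step_cost in successors(node):
            if nxt in path:                # avoid cycles along the current path
                continue
            path.append(nxt)
            result = search(g + step_cost, bound)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0, bound)
        if result is True:
            return path
        if result == float("inf"):
            return None                    # no solution exists
        bound = result                     # raise the bound and search again

# Illustrative usage on a small weighted graph with a zero heuristic.
graph = {"A": [("B", 1), ("C", 3)], "B": [("D", 1)], "C": [("D", 1)], "D": []}
print(ida_star("A", lambda n: n == "D", lambda n: graph[n], lambda n: 0))  # ['A', 'B', 'D']
```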
Greedy Best-First Search
It expands the node that is estimated to be closest to the goal, i.e., it expands nodes based on f(n) = h(n) alone. It is implemented using a priority queue.
Disadvantage: it can get stuck in loops, and it is not optimal.
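A minimal greedy best-first sketch using a priority queue ordered by h(n) alone (the toy graph and heuristic values are illustrative assumptions):

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Always expand the node with the smallest h(n); the path cost g(n) is ignored."""
    frontier = [(h(start), start, [start])]
    visited = set()                        # guards against expanding a node twice (loops)
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in successors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, [*path, nxt]))
    return None

# Illustrative usage.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
print(greedy_best_first("S", "G", graph.get, h.get))  # ['S', 'A', 'G']
```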
Local Search Algorithms
They start from a prospective solution and then move to a neighboring solution. They can return a valid solution even if they are interrupted at any time before they end.
Hill-Climbing Search
It is an iterative algorithm that starts with an arbitrary solution to a problem and attempts to find a
better solution by changing a single element of the solution incrementally. If the change
produces a better solution, an incremental change is taken as a new solution. This process is
repeated until there are no further improvements.
1. Local maximum
2. Plateau
3. Ridge
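A minimal hill-climbing sketch (the integer state, the +/-1 neighbor moves, and the toy objective are illustrative assumptions):

```python
import random

def hill_climbing(objective, neighbors, state):
    """Repeatedly move to the best neighboring solution; stop when no neighbor improves."""
    while True:
        best = max(neighbors(state), key=objective)
        if objective(best) <= objective(state):
            return state                   # local maximum (or plateau): no improvement
        state = best

# Illustrative usage: maximize f(x) = -(x - 3)^2 over the integers,
# changing the single element x by +/-1 at each step.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(f, step, random.randint(-10, 10)))  # 3
```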
Local Beam Search
This algorithm holds k states at any given time. At the start, these states are generated randomly. The successors of these k states are computed with the help of an objective function. If any of these successors reaches the maximum value of the objective function, the algorithm stops.
Otherwise, the initial k states and the k successors of those states (2k states in total) are placed in a pool. The pool is then sorted numerically, and the highest k states are selected as the new initial states. This process continues until a maximum value is reached.
In beam search, the w best nodes at each level are always expanded; w is called the width of the beam. If b is the branching factor, then there are w*b nodes under consideration at any depth, but only w of them are selected. If w = 1, the method reduces to hill climbing. A sketch is given below.
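A sketch of local beam search with width w (the randomly generated integer states and the same toy objective as in the hill-climbing sketch are illustrative assumptions):

```python
import random

def beam_search(objective, neighbors, w, goal_value, max_levels=100):
    """Keep only the w best states at each level; stop when the objective
    reaches its maximum (goal) value or after max_levels levels."""
    states = [random.randint(-10, 10) for _ in range(w)]   # k (= w) random start states
    for _ in range(max_levels):
        if any(objective(s) >= goal_value for s in states):
            return max(states, key=objective)
        pool = set(states)                 # pool the current states with their successors
        for s in states:
            pool.update(neighbors(s))
        states = sorted(pool, key=objective, reverse=True)[:w]   # keep the best w
    return max(states, key=objective)

f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(beam_search(f, step, w=3, goal_value=0))  # 3, where f attains its maximum 0
```

With w = 1 this keeps only the single best state at each level, which behaves like the hill-climbing method noted above.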
Simulated Annealing
Annealing is the process of heating and cooling a metal to change its internal structure and thereby modify its physical properties. When the metal cools, its new structure is fixed, and the metal retains its newly obtained properties. In the simulated annealing process, the temperature is kept variable.
We initially set the temperature high and then allow it to 'cool' slowly as the algorithm proceeds. When the temperature is high, the algorithm is allowed to accept worse solutions with high frequency.
Start
Initialize k = 0; L = an integer number of variables.
1. Moving from state i to state j, compute the performance difference ∆.
2. If ∆ <= 0 then accept the move; else accept it only if exp(-∆ / T(k)) > random(0, 1).
3. Repeat steps 1 and 2 for L(k) steps.
4. Set k = k + 1.
Repeat steps 1 through 4 until the stopping criterion is met.
End
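A sketch of these steps for a small numeric minimization problem (the geometric cooling schedule T(k+1) = 0.9 T(k), the neighbor move, and the objective are illustrative assumptions):

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.9,
                        l_steps=20, t_min=1e-3):
    """Accept a move if it improves the cost (delta <= 0); otherwise accept it
    with probability exp(-delta / T). The temperature T is lowered slowly."""
    t = t0
    while t > t_min:
        for _ in range(l_steps):           # L(k) steps at the current temperature
            candidate = neighbor(state)
            delta = cost(candidate) - cost(state)
            if delta <= 0 or math.exp(-delta / t) > random.random():
                state = candidate
        t *= cooling                        # k = k + 1: cool down
    return state

# Illustrative usage: minimize (x - 3)^2 with small random moves; the result
# should end up close to x = 3.
cost = lambda x: (x - 3) ** 2
move = lambda x: x + random.uniform(-1, 1)
print(round(simulated_annealing(cost, move, state=random.uniform(-10, 10)), 2))
```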