Informed Search
Acknowledgement:
Tom Lenaerts, SWITCH, Vlaams Interuniversitair Instituut voor Biotechnologie
Outline
Previously: tree-search
Best-first search
A heuristic function
hSLD = the straight-line-distance heuristic.
hSLD can NOT be computed from the problem description itself.
Goal is Bucharest. In this example f(n) = h(n) (greedy best-first search):
– Expand the node that appears closest to the goal.
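Greedy best-first search can be sketched in Python with a priority queue ordered by h(n) alone (an illustrative implementation; the graph interface and function names are assumptions, not from the slides):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node with
    the lowest h(n). `neighbors(s)` returns the successor states of s;
    `h(s)` is the heuristic estimate from s to the goal."""
    frontier = [(h(start), start)]
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Reconstruct the path by walking the parent links backwards.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                        # goal unreachable
```

Note that ordering by h(n) alone makes the search fast but not optimal: it commits to whatever looks closest to the goal.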
A* search
Formally:
1. h(n) ≤ h*(n), where h*(n) is the true cost from n to the nearest goal.
2. h(n) ≥ 0, so h(G) = 0 for any goal G.
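A minimal A* sketch under these two conditions, ordering the frontier by f(n) = g(n) + h(n) (the names and the `neighbors` interface are illustrative assumptions, not from the slides):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the frontier node with the lowest
    f(n) = g(n) + h(n). `neighbors(s)` yields (successor, step_cost)
    pairs; with an admissible h the returned cost is optimal."""
    # Each entry is (f, g, state, path-so-far).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}                # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None                        # goal unreachable
```

On a graph where the greedy choice is misleading, A* still returns the cheapest path because g(n) keeps the accumulated cost in the ordering.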
Romania example
A* search example
A* search, evaluation
Optimality: YES (given an admissible heuristic)
Completeness: YES
Time complexity: exponential in the length of the solution path
Space complexity: all generated nodes are stored in memory
Recent developments:
– Iterative-deepening A* (IDA*)
– Recursive best-first search (RBFS)
– Memory-bounded A* (MA*)
Heuristic functions
Heuristic quality
Inventing admissible heuristics
Learning heuristics
Local search and optimization
Hill-climbing search
current ← MAKE-NODE(INITIAL-STATE[problem])
loop do
  neighbor ← a highest-valued successor of current
  if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
  current ← neighbor
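The pseudocode above translates almost line-for-line into Python (the `problem` interface with `initial_state`, `successors`, and `value` is an assumed convention, not defined in the slides):

```python
def hill_climbing(problem):
    """Steepest-ascent hill climbing: repeatedly move to the best
    successor, and stop as soon as no successor improves on the
    current state (i.e., at a local maximum or plateau)."""
    current = problem.initial_state
    while True:
        # Pick a highest-valued successor of the current state.
        neighbor = max(problem.successors(current),
                       key=problem.value, default=None)
        if neighbor is None or problem.value(neighbor) <= problem.value(current):
            return current
        current = neighbor
```

Because it keeps no frontier and no visited set, this uses constant memory, but it can return a local maximum rather than the global one.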
Hill-climbing example
8-queens problem
Complete-state formulation: each state has 8 queens on the board, one per column.
Successor function: returns all possible states generated by moving a single queen to another square in the same column (each state has 8×7 = 56 successors).
Heuristic function h(n): the number of pairs of queens that are attacking each other (directly or indirectly).
Global minimum: h(n) = 0
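The heuristic h(n) can be computed directly from the one-queen-per-column representation (a hypothetical helper; encoding `state[c]` as the row of the queen in column c is an assumption, not from the slides):

```python
def attacking_pairs(state):
    """h(n) for the complete-state 8-queens formulation: the number of
    pairs of queens attacking each other, directly or indirectly.
    `state[c]` is the row of the queen in column c, so two queens
    attack iff they share a row or a diagonal."""
    h = 0
    n = len(state)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diagonal = abs(state[c1] - state[c2]) == c2 - c1
            if same_row or same_diagonal:
                h += 1
    return h
```

Placing all queens on the main diagonal makes every one of the 28 pairs attack, while a valid solution reaches the global minimum h(n) = 0.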
Drawbacks
Hill-climbing variations
Stochastic hill-climbing
– Random selection (rather than the best) from among the uphill moves.
– The selection probability can vary with the steepness of the uphill move.
– Usually converges more slowly, but in some state landscapes it finds better solutions.
First-choice hill-climbing
– Implements stochastic hill climbing by generating successors randomly until one better than the current state is found.
– This is a good strategy when a state has many successors.
Random-restart hill-climbing
– Tries to avoid getting stuck in local maxima.
– Conducts a series of hill-climbing searches from randomly generated initial states, stopping when a goal is found.
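Random-restart hill climbing can be sketched as a thin wrapper around any hill climber (the `hill_climb`, `random_state`, and `is_goal` callables are an assumed interface, not defined in the slides):

```python
def random_restart(hill_climb, random_state, is_goal, max_restarts=100):
    """Random-restart hill climbing: repeatedly run hill climbing from
    fresh randomly generated initial states, stopping as soon as one
    of the runs reaches a goal state."""
    for _ in range(max_restarts):
        result = hill_climb(random_state())
        if is_goal(result):
            return result
    return None  # no goal found within the restart budget
```

Each restart is independent, so if a single run succeeds with probability p, about 1/p restarts are needed on average.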
Simulated annealing
current ← MAKE-NODE(INITIAL-STATE[problem])
for t ← 1 to ∞ do
  T ← schedule[t]
  if T = 0 then return current
  next ← a randomly selected successor of current
  ∆E ← VALUE[next] − VALUE[current]
  if ∆E > 0 then current ← next
  else current ← next only with probability e^(∆E/T)
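The simulated-annealing pseudocode above maps directly to Python; since e^(∆E/T) is only evaluated for ∆E ≤ 0, the acceptance probability is at most 1 (the `successors`, `value`, and `schedule` callables are an assumed interface, not defined in the slides):

```python
import math
import random

def simulated_annealing(initial, successors, value, schedule):
    """Simulated annealing: always accept an improving move; accept a
    worsening move (∆E < 0) only with probability e^(∆E/T), where the
    temperature T = schedule(t) decreases over time. Terminates when
    the schedule reaches T = 0."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = random.choice(successors(current))
        delta_e = value(nxt) - value(current)
        # Short-circuit: exp() is only reached when delta_e <= 0.
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1
```

Early on (large T) the walk accepts many downhill moves and explores freely; as T falls, it behaves more and more like hill climbing.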
Genetic algorithms