
HEURISTIC SEARCH METHODS

Generate and Test


1. Generate a solution
(a state or a path to a state)
2. Test whether it is the required
solution
3. If it is a solution, return and quit
otherwise, go-to step 1
This may find a solution, but it may
take a long time.
1
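The three steps above can be sketched as a generic Python procedure. The generator, the tester, and the toy sorted-list goal below are hypothetical stand-ins for illustration, not part of the original material.

```python
import itertools

def generate_and_test(generate, is_solution):
    # 1. Generate a candidate; 2. test it; 3. quit on success, else repeat.
    for candidate in generate():
        if is_solution(candidate):
            return candidate
    return None  # generator exhausted without finding a solution

# Toy usage: find an ordering of [3, 1, 2] that is sorted.
gen = lambda: itertools.permutations([3, 1, 2])
result = generate_and_test(gen, lambda c: list(c) == sorted(c))
# result == (1, 2, 3)
```

Systematic generators terminate; a random generator may loop for a very long time, which is the weakness noted above.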
Generate and Test (Contd..)
Example: the Cubes puzzle problem
Four cubes in a puzzle; each cube has
six faces.
Each face is painted with one of four colours.
GOAL: arrange the cubes in a row such that on
all four sides of the row, one face of each
colour is showing.
The heuristic is to use as few faces of the
same colour as possible as outside faces.
2
Generate and Test (Contd..)
Generate-and-test can be combined with
other techniques.
Example: Dendral uses Plan-Generate-Test
• The planning process uses constraint-
satisfaction techniques to create a list of
recommended substructures.
• Generate-and-test is applied only to this
limited set of substructures.
3
HEURISTIC SEARCH METHODS
(Contd..)
Hill-Climbing
• Variation of Generate-and-Test
• Feedback from the test procedure is used to
help decide which direction to move in
• Used when a good heuristic function is available
for evaluating states.
Example : Getting downtown in an unfamiliar city
without a map.
Heuristic Function : Distance between current
location and location of tall buildings.
Desirable States: Distance is minimized.
4
Algorithm : Simple Hill Climbing
1. Evaluate the initial state. If it is a goal
state return and quit. Otherwise continue
with initial state as current state.
2. Repeat until a solution is found or there
are no new operators available to apply
to current state.
a. Select an operator and apply it to
produce a new state.
5
Algorithm : Simple Hill Climbing
(Contd..)
b. Evaluate the new state
(i) If it is a goal state, return and quit
(ii) If not, but better than the current
state, make it the current state.
(iii) If it is not better than the current
state, continue in the loop.
Heuristic function (for the cubes puzzle): the sum of the
number of different colours showing on each of the four sides.
6
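The simple hill-climbing algorithm above can be sketched as follows. The helper names and the toy objective f(x) = -(x - 3)^2 being maximised are hypothetical, a minimal sketch rather than a full problem solver.

```python
def simple_hill_climb(initial, neighbors, value, is_goal):
    # Step 1: evaluate the initial state.
    current = initial
    if is_goal(current):
        return current
    while True:
        # Step 2: apply operators to the current state one at a time.
        for new in neighbors(current):
            if is_goal(new):                 # (b)(i): goal state, return and quit
                return new
            if value(new) > value(current):  # (b)(ii): better, make it current
                current = new
                break
        else:
            return current                   # no operator improves: stop

# Toy usage: maximise f(x) = -(x - 3)**2 over the integers.
f = lambda x: -(x - 3) ** 2
print(simple_hill_climb(0, lambda x: [x + 1, x - 1], f, lambda x: f(x) == 0))
# prints 3
```

Note that the first better successor is taken immediately, without examining the remaining operators.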
Steepest Ascent Hill Climbing

• It is a variation of simple hill climbing.
• All the moves from the current state are
considered, and the best one is selected as
the next state.
7
Steepest Ascent Hill Climbing (Contd..)

[Figure: the initial state (IS) becomes the current state (CS);
all of its successors NS1, NS2, NS3, NS4 are generated,
and the best one is stored in Succ.]

8
Algorithm - Steepest Ascent Hill
Climbing
1. Evaluate the initial state. If it is a goal state
return and quit. Otherwise continue with initial
state as current state.
2. Loop until a solution is found or until a
complete iteration produces no change to
current state.
(a) Let Succ be a state such that any possible
successor of the current state (CS) will be better
than Succ (i.e. initialize Succ to a worst-case value;
it is then replaced by any better state found).

9
Algorithm - Steepest Ascent Hill
Climbing (Contd..)
(b) For each operator that applies to CS do:
(i) Apply the operator and generate a new state NS.
(ii) If NS is a goal state, return it and quit. If
not, compare NS to Succ. If NS is better
than Succ, set Succ to NS; otherwise leave
Succ unchanged.
(c) If Succ is better than the current state, then
set the current state to Succ.

10
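The steepest-ascent variant can be sketched the same way; every neighbour is examined before any move is made. The helper names and the toy objective are again hypothetical.

```python
def steepest_ascent(initial, neighbors, value, is_goal):
    current = initial
    if is_goal(current):                 # step 1
        return current
    while True:
        succ = None                      # step 2(a): worse than any successor
        for new in neighbors(current):   # step 2(b): try every operator
            if is_goal(new):
                return new
            if succ is None or value(new) > value(succ):
                succ = new               # keep the best successor seen so far
        if succ is not None and value(succ) > value(current):
            current = succ               # step 2(c): move to the best successor
        else:
            return current               # a complete iteration changed nothing

# Toy usage: maximise f(x) = -(x - 3)**2 over the integers.
f = lambda x: -(x - 3) ** 2
steepest_ascent(0, lambda x: [x + 1, x - 1], f, lambda x: f(x) == 0)  # -> 3
```

This illustrates the trade-off in the comparison below: each move costs more evaluations, but fewer moves are needed.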
Differences in the two hill-climbing algorithms

Basic Hill-Climbing                Steepest-Ascent Hill-Climbing
The number of moves needed         Fewer moves are needed
to reach a solution is larger      to reach a solution
The time required to select        The time required to select
a move is smaller                  a move is longer

11
Problems with Hill Climbing
Hill-climbing methods may fail when the
program has reached one of the following:

• a local maximum,
• a plateau (a high, flat area), or
• a ridge (a long, narrow hilltop: the line along
which two surfaces sloping upwards towards
each other meet).
12
Problems with Hill Climbing (Contd..)
Local Maximum
• A state better than all its neighbours, but not
better than states farther away.
• Often occurs within sight of a solution.
• Also called a foothill.
How to deal with a local maximum?
• Backtrack to some earlier node and try going in
a different direction.
• The implementation requires maintaining a list of
paths.
13
Plateau
• A flat area in which a set of neighbouring
states have the same value.
• It is not possible to determine the best
direction to move in.
How to deal with a plateau?
• Make a big jump in some direction to reach a
new part of the search space.
• Apply the same small step several times to
make a big jump.
14
Ridge
• A special kind of local maximum.
• It is higher than the surrounding areas and
itself has a slope.
• It is impossible to traverse a ridge by single
moves.
How to deal with ridges?
• Apply two or more rules before doing the
test. This corresponds to moving in
several directions at once.
15
Drawbacks of Hill-Climbing
• Local method: it looks only at the immediate
consequences of a move when choosing a new state.
• It is not guaranteed to find a solution.
Advantage: it is less combinatorially explosive.
The following example demonstrates how hill
climbing fails because the search is local.

16
Blocks World Problem
 Initial    Goal      Rules/Operators:
  State     State     i)  Pick up one block and put it on the table.
    A         H       ii) Pick up one block and put it on another block.
    H         G
    G         F
    F         E
    E         D
    D         C
    C         B
    B         A
17
Blocks World Problem
(Contd..)
• Local heuristic function: add one point
for every block that is resting on the thing
that it is supposed to be resting on.
• Subtract one point for every block that is
sitting on the wrong thing.

18
Blocks World Problem
(Contd..)
• The initial state has score
= -1 +1+1+1+1+1+1 -1 = -2 + 6 = 4
• Goal state score = 1+1+1+1+1+1+1+1 = 8
• Only one move is possible from the initial
state: put A on the table.

19
Blocks World Problem
(Contd..)
   H       This is accepted as the current state.
   G
   F       Three possible moves from the current
   E       state are (a), (b), (c).
   D
   C
 B   A     Score = -1 + 6 + 1 = 6 > 4
20
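The local-heuristic scores above can be checked with a short sketch. Representing a state as a map from each block to its support ('table' or another block) is an assumption made here for illustration.

```python
def local_score(on, goal_on):
    # +1 for each block resting on what it should rest on, -1 otherwise.
    return sum(1 if on[b] == goal_on[b] else -1 for b in on)

# 'on' maps each block to its support ('table' or another block).
goal    = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C',
           'E': 'D', 'F': 'E', 'G': 'F', 'H': 'G'}
initial = {'B': 'table', 'C': 'B', 'D': 'C', 'E': 'D',
           'F': 'E', 'G': 'F', 'H': 'G', 'A': 'H'}
after   = dict(initial, A='table')    # the single move: put A on the table

print(local_score(initial, goal))  # -1 + 6 - 1 = 4
print(local_score(goal, goal))     # 8
print(local_score(after, goal))    # -1 + 6 + 1 = 6
```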
Blocks World Problem
(Contd..)
  A      G       G
  H      F       F       Score (a) = -1 + 6 - 1 = 4
  G      E       E       Score (b) = -1 + 5 + 1 - 1 = 4
  F      D       D       Score (c) = -1 + 5 + 1 - 1 = 4
  E      C       C
  D      B       B
  C      H
  B      A      A  H
 (a)    (b)     (c)

21
Blocks World Problem
(Contd..)
• The process has reached a local
maximum that is not the global maximum.
• By changing the heuristic function, a
solution may be found.

22
Global (Heuristic Function)
• For each block that has a correct
support structure (i.e. the complete
structure underneath it is exactly as it
should be), add one point for every block
in that support structure.
• For each block that has an incorrect
support structure, subtract one point for
every block in the existing support
structure.

23
Global (Heuristic Function)
(Contd..)
  A        H
  H        G      Score for the initial state
  G        F      = -1 -2 -3 -4 -5 -6 -7
  F        E      = -(1 + 2 + ... + 7)
  E        D      = -(7 x 8 / 2) = -28
  D        C      Score for the goal state
  C        B      = 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28
  B        A
Initial   Goal

Small example (A on the table, B on A, C on B):
  C      B has a correct support structure: +1
  B      C has a correct support structure: 1 + 1 = 2
  A
24
Global (Heuristic Function)
(Contd..)
   H       The only possible move from the
   G       initial state gives:
   F
   E       Score = -1 -2 -3 -4 -5 -6
   D             = -(6 x 7 / 2) = -21
   C
 B   A     -21 > -28, so this becomes the current state.
25
Global (Heuristic Function)
(Contd..)
  A      G       G       The three possible moves
  H      F       F       from CS are:
  G      E       E       Score (a) = -28
  F      D       D       Score (b) = -16
  E      C       C       Score (c) = -15
  D      B       B                 = -(1+2+3+4+5)
  C      H
  B      A      A  H     State (c) is chosen as the current one.
 (a)    (b)     (c)

26
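The global heuristic can be sketched similarly; the support-chain representation is again an assumed encoding, used here only to reproduce the scores on the preceding slides.

```python
def support_chain(on, block):
    # The complete structure underneath a block, from its support downwards.
    chain = []
    while on[block] != 'table':
        block = on[block]
        chain.append(block)
    return chain

def global_score(on, goal_on):
    score = 0
    for b in on:
        n = len(support_chain(on, b))
        if support_chain(on, b) == support_chain(goal_on, b):
            score += n   # correct support structure: +1 per block in it
        else:
            score -= n   # incorrect: -1 per block in the existing structure
    return score

goal    = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C',
           'E': 'D', 'F': 'E', 'G': 'F', 'H': 'G'}
initial = {'B': 'table', 'C': 'B', 'D': 'C', 'E': 'D',
           'F': 'E', 'G': 'F', 'H': 'G', 'A': 'H'}
cs      = dict(initial, A='table')    # A moved to the table
state_b = dict(cs, H='A')             # move (b): H placed on A
state_c = dict(cs, H='table')         # move (c): H placed on the table

print(global_score(initial, goal))  # -28
print(global_score(goal, goal))     # 28
print(global_score(cs, goal))       # -21
print(global_score(state_b, goal))  # -16
print(global_score(state_c, goal))  # -15
```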
Global (Heuristic Function)
(Contd..)
• The new heuristic function rewards correct
structures and penalizes incorrect ones.
• Due to lack of knowledge about the problem, it is
not always possible to construct such a perfect
heuristic function.
• In the extreme case, even if perfect knowledge is
available, it may not be computationally tractable
to use.
• Hill climbing is most useful when combined with
other methods.

27
Simulated Annealing
• It is a variation of hill climbing.
• It lowers the chances of getting caught at a local
maximum, a plateau, or a ridge.
• In this method, enough exploration of the whole
space is done early on, so the final solution is
relatively insensitive to the initial state.
Annealing is a physical process in which physical
substances such as metals are melted and then
gradually cooled until some desired state is reached.

28
Simulated Annealing (Contd..)
Viewing the objective function as an energy
level, this is like valley descending,
as the goal is to produce a minimal-energy
final state.
• Physical substances usually move from a
higher energy state to a lower one.
• But there is some probability that a transition
from lower to higher energy will occur.
29
Simulated Annealing (Contd..)

P = - E/KT
e
E is positive change in energy level T is the
temperature, k is Boltzmann’s constant.
The Revised probability

30
Simulated Annealing (Contd..)
p′ = e^(-ΔE/T), where ΔE is the change in the value
of the heuristic function.

We need to choose a schedule of values for T.

Annealing schedule: the rate at which the system is
cooled.
To compute an annealing schedule we need:
– the initial value of T
– criteria for changing the value of T
31
Simulated Annealing (Contd..)
As T approaches zero, the probability of
accepting a move to a worse state
approaches zero, and simulated annealing
becomes hill climbing.
– The values of T should be scaled so that the
ratio ΔE/T is meaningful; for example, T could be
initialized to a value such that, for an
average ΔE, p′ = 0.5.

32
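For instance, solving e^(-ΔE/T) = 0.5 for T gives T = ΔE / ln 2. The average ΔE value below is a hypothetical stand-in.

```python
import math

avg_dE = 2.0                 # assumed average uphill change (hypothetical)
T0 = avg_dE / math.log(2)    # solve exp(-avg_dE / T0) = 0.5 for T0
p = math.exp(-avg_dE / T0)
print(round(p, 3))  # 0.5
```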
Algorithm : Simulated Annealing
1. Evaluate the initial state. If it is a goal
state, return it and quit. Otherwise continue
with the initial state as the current state (CS).
2. Initialize Best-so-far to CS.
3. Initialize T.
4. Loop until a solution is found or there are
no new operators left to apply to CS.

33
Algorithm : Simulated Annealing
(Contd..)
(a) Select an operator and apply it to CS to
produce a new state NS.
(b) Evaluate NS. Compute ΔE = (value of
current state) – (value of new state).
• If NS is a goal state, return it and quit.
• If NS is not a goal state but is better than CS,
make it the current state. Also set Best-so-far
to this state.
34
Algorithm : Simulated Annealing
(Contd..)
• If NS is not better than CS, make it the current
state with probability p′: generate a random
number r in [0, 1]; if r is less than p′, the new
state is accepted.
(c) Revise T as necessary according to the annealing
schedule.
5. Return Best-So-Far as the answer.
Simulated-Annealing is often used to solve
problems in which the number of moves from a
given state is very large

35
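The algorithm can be sketched as follows; the cooling schedule, step count, and toy objective below are hypothetical choices, not prescribed by the algorithm.

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule, steps, seed=0):
    rng = random.Random(seed)
    current = best = initial                 # steps 1-2: CS and Best-so-far
    for t in range(steps):
        T = schedule(t)                      # steps 3, 4(c): annealing schedule
        succ = rng.choice(neighbors(current))
        dE = value(current) - value(succ)    # positive when succ is worse
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            current = succ                   # worse states accepted with p' = e^(-dE/T)
        if value(current) > value(best):
            best = current                   # track Best-so-far
    return best                              # step 5: return Best-so-far

# Toy usage: maximise f(x) = -(x - 3)**2 with a geometric cooling schedule.
f = lambda x: -(x - 3) ** 2
schedule = lambda t: max(0.01, 5.0 * 0.9 ** t)
simulated_annealing(0, lambda x: [x + 1, x - 1], f, schedule, steps=500)
```

Early on, T is high and worse moves are often accepted (exploration); as T cools, the loop behaves like ordinary hill climbing.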
Annealing Schedule
• A schedule of values for T is known as an annealing
schedule.
• The initial value of T is selected so that, for an average
change in the heuristic function, the acceptance
probability is 0.5.
• The optimal annealing schedule for each problem
must be discovered empirically.
• The best way to select an annealing schedule is to
try several alternatives and observe the effect on
both the quality of the solution and the rate at which
the process converges.

36
Simple Hill Climbing vs. Simulated Annealing

Simple Hill Climbing            Simulated Annealing
Moves to worse states           Moves to worse states
are not accepted                may be accepted
No annealing schedule           An annealing schedule
is required                     must be maintained
Best-so-far is not stored       Best-so-far is stored

37
Best – First Search (BFS)
BFS combines the advantages of breadth-first
search and depth-first search.
Breadth-first: does not get trapped on
dead-end paths.
Depth-first: allows a solution to be found
without expanding all competing branches.
Best-first search follows a single path at a time,
but switches paths whenever some competing path
becomes more promising than the current one.

38
Best – First Search (BFS)
(Contd..)
Heuristic function: the estimated cost of getting
to a solution from a given node.
The successors of a node are generated.
If one of them is a solution, quit;
otherwise the new nodes are added to the list of
nodes generated so far.
The most promising node is then selected and
the process continues.
39
Best – First Search (BFS)
(Contd..)
Step 1: Only the root A.
Step 2: Expand A → B(3), C(5), D(1). D(1) is the most promising node.
Step 3: Expand D → E(4), F(6). B(3) is now the best node.
40
Best – First Search (BFS) (Contd..)

Step 4: Expand B → G(6), H(5). E(4) is now the best node.
Step 5: Expand E → I(2), J(1).
(The number in parentheses is a node's heuristic value; lower is more promising.)
41
Best – First Search (BFS) (Contd..)

• It is important to search a graph (not a tree) so
that duplicate paths are not pursued.
• Two lists of nodes are needed for the
implementation.
Open: nodes that have been generated but not
yet examined. Open is a priority queue
in which the node with the most promising
heuristic value has the highest priority.
Closed: nodes that have already been examined; they
must be kept in memory to detect duplicate nodes.

42
Best – First Search (BFS) (Contd..)

Each node in the directed graph contains:
• the problem state
• its priority
• its parent node
• a list of the nodes generated from it.

43
Best – First Search (BFS) (Contd..)

Algorithm : Best-First Search
1. Start with open containing just the initial
node.
2. Until a goal is found or there are no nodes
left on open, DO
(a) Pick the best node on open.
(b) Generate its successors.
(c) For each successor DO
44
Best – First Search (BFS) (Contd..)

(i) If it has not been generated before,
evaluate it, add it to open, and record its
parent.
(ii) If it has been generated before, change
the parent if this new path is better than
the previous one. Also update the cost of
getting to this node and to any
successors that this node may already
have.

45
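The best-first algorithm above can be sketched with a priority queue. The tree and h-values below reproduce the earlier step-by-step example; the root's h-value is an assumption, since the slides do not give one.

```python
import heapq

def best_first(graph, h, start, goal):
    # open: priority queue keyed on the heuristic value (lower = better here).
    open_list = [(h[start], start)]
    closed = set()
    expanded = []                       # expansion order, for illustration
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            return expanded, node
        if node in closed:
            continue                    # duplicate entry already examined
        closed.add(node)                # examined nodes go on closed
        expanded.append(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ))
    return expanded, None

# The tree from the example slides (h-values shown there in parentheses).
graph = {'A': 'BCD', 'D': 'EF', 'B': 'GH', 'E': 'IJ'}
h = {'A': 9, 'B': 3, 'C': 5, 'D': 1, 'E': 4, 'F': 6,
     'G': 6, 'H': 5, 'I': 2, 'J': 1}
print(best_first(graph, h, 'A', 'J'))  # (['A', 'D', 'B', 'E'], 'J')
```

The expansion order A, D, B, E matches Steps 2 through 5 above.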
A* Algorithm

Best-first search is a simplification of A*.
The heuristic function in A* is given by
f′ = g + h′
where g is a measure of the cost of getting from the
initial node to the current node, and
h′ is an estimate of the cost of getting from the
current node to a goal node.
46
A* Algorithm (Contd..)
In BFS, the heuristic function is h′ only; BFS
prefers nodes that appear closer to the goal, so it
tends to reach a goal in fewer steps.
A* gives highest priority to nodes that appear to lie
on the shortest path from the start node to a goal
node.

47
A* Algorithm (Contd..)
Algorithm – A*
1. Place the start node S on open.
2. If open is empty, stop and return failure.
3. Remove from open the node n that has
the smallest value of f′(n). If n is a goal
node, return it and stop; otherwise go to
step 4.

48
A* Algorithm (Contd..)
4. Expand n, generating all of its successors
n′, and place n on closed. For every
successor n′ that is not already on open
or closed, attach a back pointer to n,
compute f′(n′), and place it on open.
5. For each n′ that is already on open or closed,
adjust its back pointer so that it
reflects the lowest-g(n′) path.
6. Return to step 2.
49
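The A* algorithm above can be sketched as follows; the weighted graph and heuristic values below are hypothetical, chosen so that h′ never overestimates the true cost.

```python
import heapq

def a_star(graph, h, start, goal):
    # graph: node -> list of (successor, edge cost); h: estimated cost to goal.
    open_heap = [(h[start], 0, start)]      # entries are (f' = g + h', g, node)
    g = {start: 0}
    parent = {start: None}
    while open_heap:                         # step 2: fail if open empties
        f, gc, n = heapq.heappop(open_heap)  # step 3: smallest f'(n)
        if n == goal:
            path = []
            while n is not None:             # follow back pointers to S
                path.append(n)
                n = parent[n]
            return path[::-1], gc
        if gc > g.get(n, float('inf')):
            continue                         # stale entry: a cheaper path exists
        for n1, cost in graph.get(n, []):    # step 4: expand n
            new_g = gc + cost
            if new_g < g.get(n1, float('inf')):
                g[n1] = new_g                # step 5: keep lowest-g back pointer
                parent[n1] = n
                heapq.heappush(open_heap, (new_g + h[n1], new_g, n1))
    return None, float('inf')

# Hypothetical weighted graph; h never overestimates the true cost to G.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('C', 2)],
         'B': [('C', 1)], 'C': [('G', 3)]}
h = {'S': 5, 'A': 4, 'B': 3, 'C': 3, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'A', 'C', 'G'], 6)
```

Because h′ underestimates here, the path found (cost 6) is optimal, as the observations below explain.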
A* Algorithm (Contd..)
Observations about the A* algorithm
1. Role of the g function
– It allows us to choose which node to expand based on
how good the path to that node is.
– If the requirement is only to reach a solution somehow,
define g = 0.
– If the requirement is to find a path with the fewest
steps, set the cost of going from a node to a
successor to 1.
– If the requirement is to find the cheapest path, set the
cost of going from one node to another to reflect the
actual costs.
50
A* Algorithm (Contd..)
A* can be used to find a minimal-cost overall path, or
simply any path as quickly as possible.
2. Role of h′
– If h′ is perfect, A* will converge to the goal with no
search.
– If h′ = 0, the search is controlled by g.
– If h′ = 0 and g = 0, the search is random.
– If h′ = 0 and g = 1 per step, the search is breadth-first.
– If h′ ≠ 0 and h′ never overestimates h (i.e. the
estimated value of h ≤ the actual value of h),
A* is guaranteed to find an optimal path to the goal.

51
A* Algorithm (Contd..)
        A                          A

   B    C    D                B    C    D
 (3+1)(4+1)(5+1)            (3+1)(4+1)(5+1)
   E                          E
 (2+2)                      (2+2)         f′ = h′ + g
   F                          F
 (1+3)                      (3+3)
   G
 (0+4)
 Fig. 4-a                   Fig. 4-b
 h′ overestimates h         h′ underestimates h

52
A* Algorithm (Contd..)
Fig. 4-a:
• Suppose there is a directed path from D to
a solution.
• By overestimating h′(D), we may find a
worse solution without ever expanding D.
Fig. 4-b: f′(F) = 6 > f′(C) = 5, so C is expanded
next; by underestimating h′(B), some effort is
wasted, but the optimal path is still found.

53
A* Algorithm (Contd..)
3. The third observation: A* as stated applies
to graphs. It can be simplified to
apply to trees.
• There is then no need to check whether
a new node is already on open or closed.
• Nodes are generated faster, but the same
search may be conducted many times.

54
