Steepest Ascent Hill Climbing (Contd..)
Algorithm - Steepest Ascent Hill Climbing
1. Evaluate the initial state. If it is a goal state, return it and quit. Otherwise continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
(a) Let SUCC be a state such that any possible successor of the current state (CS) will be better than SUCC.
Algorithm - Steepest Ascent Hill Climbing (Contd..)
(b) For each operator that applies to CS:
(i) Apply the operator and generate a new state NS.
(ii) If NS is a goal state, return it and quit. If not, compare NS to SUCC; if NS is better than SUCC, set SUCC to NS, otherwise leave SUCC unchanged.
(c) If SUCC is better than the current state, set the current state to SUCC.
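The steps above can be sketched in Python. The state representation and the `evaluate`, `successors`, and `is_goal` functions are assumptions supplied by the caller, not part of the slides:

```python
def steepest_ascent_hill_climbing(initial, evaluate, successors, is_goal):
    """Steepest-ascent hill climbing, following steps 1-2 above."""
    # Step 1: evaluate the initial state.
    current = initial
    if is_goal(current):
        return current
    # Step 2: loop until an iteration produces no change to the current state.
    while True:
        # (a) SUCC starts out worse than any possible successor.
        best, best_value = None, float("-inf")
        # (b) apply every operator and keep the best new state seen.
        for ns in successors(current):
            if is_goal(ns):                      # (b)(ii) goal check
                return ns
            if evaluate(ns) > best_value:
                best, best_value = ns, evaluate(ns)
            # otherwise leave SUCC unchanged
        # (c) move only if SUCC is better than the current state.
        if best is None or best_value <= evaluate(current):
            return current      # stuck: local maximum, plateau, or ridge
        current = best
```

Note that the loop terminates at the first state none of whose successors improve on it, which is exactly where the failure modes discussed next arise.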
Differences in the two hill-climbing algorithms
• Simple hill climbing moves to the first successor found that is better than the current state.
• Steepest-ascent hill climbing examines all successors of the current state and moves to the best of them.
Algorithm - Steepest Ascent Hill Climbing (Contd..)
Hill-climbing methods may fail when the search reaches one of the following:
• Local Maximum
• Plateau (a high, flat area), or
• Ridge (a long, narrow hilltop; a line of junction of two surfaces sloping upwards towards each other)
Algorithm - Steepest Ascent Hill Climbing (Contd..)
Local Maximum
• A state better than all its neighbors, but not better than states farther away.
• Often occurs within sight of a solution.
• Also called a foothill.
How to deal with a Local Maximum?
• Backtrack to some earlier node and try going in a different direction.
• Implementation requires maintaining a list of paths.
Plateau
• A flat area in which a set of neighbouring states all have the same value.
• It is not possible to determine the best direction in which to move.
How to deal with a Plateau?
• Make a big jump in some direction to reach a new region of the search space.
• Apply the same small step several times in one direction to make a big jump.
Ridge
• A special kind of local maximum.
• Higher than surrounding areas, yet itself has a slope.
• It is impossible to traverse a ridge by single moves.
How to deal with Ridges?
• Apply two or more rules before doing the test; this corresponds to moving in several directions at once.
Drawbacks of Hill-Climbing
• Local method – it looks only at the immediate consequences of a move when choosing a new state.
• It lacks any guarantee of finding a solution.
Advantage: less combinatorially explosive than exhaustive search.
The following example demonstrates hill-climbing failure due to purely local searching.
Blocks World Problem
Initial state (top to bottom): A H G F E D C B
Goal state (top to bottom): H G F E D C B A
Rules/Operators:
i) Pick up one block and put it on the table.
ii) Pick up one block and put it on another block.
Blocks World Problem (Contd..)
• Local heuristic function: add one point for every block that is resting on the thing it is supposed to be resting on, and subtract one point for every block that is not.
Blocks World Problem (Contd..)
• Initial state score = (-1) + 1 + 1 + 1 + 1 + 1 + 1 + (-1) = 6 - 2 = 4
• Goal state score = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 8
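As a sketch, the local heuristic can be computed in Python; the dictionary encoding of a state (each block mapped to what it rests on) is an illustrative assumption, not taken from the slides:

```python
# Each state maps a block to the thing it rests on ("table" or a block).
GOAL = {"A": "table", "B": "A", "C": "B", "D": "C",
        "E": "D", "F": "E", "G": "F", "H": "G"}

def local_score(state, goal=GOAL):
    """+1 for every block resting on the correct thing, -1 otherwise."""
    return sum(1 if state[b] == goal[b] else -1 for b in state)

initial = {"B": "table", "C": "B", "D": "C", "E": "D",
           "F": "E", "G": "F", "H": "G", "A": "H"}
print(local_score(initial))   # 4, matching the calculation above
print(local_score(GOAL))      # 8
```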
Blocks World Problem (Contd..)
Moving A to the table yields the stack H G F E D C B with A on the table.
Score = (-1) + 6 + 1 = 6 > 4, so this is accepted as the current state.
Three possible moves are then available from the current state: (a), (b), and (c).
Blocks World Problem (Contd..)
(a) Put A back on H, giving the stack A H G F E D C B. Score (a) = (-1) + 6 + (-1) = 4
(b) Put H on the table, giving the stack G F E D C B with A and H on the table. Score (b) = (-1) + 5 + 1 + (-1) = 4
(c) Put H on A, giving the stack G F E D C B with H resting on A. Score (c) = (-1) + 5 + 1 + (-1) = 4
Blocks World Problem (Contd..)
• The process has reached a local maximum that is not the global maximum.
• By changing the heuristic function, a solution may be found.
Global Heuristic Function
• For each block that has the correct support structure (i.e. the complete structure underneath it is exactly as it should be), add one point for every block in the support structure.
• For each block that has an incorrect support structure, subtract one point for every block in the existing support structure.
Global Heuristic Function (Contd..)
Score for the initial state (A H G F E D C B):
= -1 - 2 - 3 - 4 - 5 - 6 - 7
= -(1 + 2 + ... + 7)
= -(7 x 8 / 2) = -28
Score for the goal state (H G F E D C B A):
= 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28
(For example, in the small stack C B A, B has a correct support structure, and so does C.)
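Under the same illustrative state encoding used earlier (each block mapped to what it rests on, an assumption for the sketch), the global heuristic can be written as:

```python
GOAL = {"A": "table", "B": "A", "C": "B", "D": "C",
        "E": "D", "F": "E", "G": "F", "H": "G"}

def support_chain(state, block):
    """Blocks underneath `block`, listed from just below it downwards."""
    chain, below = [], state[block]
    while below != "table":
        chain.append(below)
        below = state[below]
    return chain

def global_score(state, goal=GOAL):
    """+1 per block in a correct support structure, -1 per block in an
    incorrect one, summed over every block."""
    score = 0
    for b in state:
        chain = support_chain(state, b)
        if chain == support_chain(goal, b):
            score += len(chain)
        else:
            score -= len(chain)
    return score

initial = {"B": "table", "C": "B", "D": "C", "E": "D",
           "F": "E", "G": "F", "H": "G", "A": "H"}
print(global_score(initial))   # -28
print(global_score(GOAL))      # 28
```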
Global Heuristic Function (Contd..)
The possible move from the initial state is to put A on the table, giving the stack H G F E D C B with A on the table.
Score = -1 - 2 - 3 - 4 - 5 - 6 = -(6 x 7 / 2) = -21
Since -21 > -28, this becomes the current state.
Global Heuristic Function (Contd..)
The three possible moves from the current state are the same states (a), (b), and (c) as before:
Score (a) = -28
Score (b) = -16
Score (c) = -15 = (-1 - 2 - 3 - 4 - 5)
State (c) is chosen as the new current state.
Global Heuristic Function (Contd..)
• The new heuristic function rewards correct structures and penalizes incorrect ones.
• Due to lack of knowledge about the problem, it is not always possible to construct such a perfect heuristic function.
• In the extreme case, even when perfect knowledge is available, the heuristic may not be computationally tractable to use.
• Hill climbing is useful when combined with other methods.
Simulated Annealing
• A variation of hill climbing.
• It lowers the chances of getting caught at a local maximum, a plateau, or a ridge.
• Enough exploration of the whole space is done early on, so that the final solution is relatively insensitive to the initial state.
• Annealing is a physical process in which substances such as metals are melted and then gradually cooled until some desired state is reached.
Simulated Annealing (Contd..)
• Viewing the objective function as an energy level, this is like valley descending, as the goal is to produce a minimal-energy final state.
• Physical substances usually move from a high energy state to a lower one.
• But there is some probability that a transition from a lower to a higher energy state will occur.
Simulated Annealing (Contd..)
In physics, the probability of an uphill (energy-increasing) transition is
p = e^(-ΔE/kT)
where ΔE is the positive change in energy level, T is the temperature, and k is Boltzmann's constant.
The revised probability used in simulated-annealing search is
p' = e^(-ΔE/T)
where ΔE is the change in the value of the heuristic function.
Simulated Annealing (Contd..)
• As T approaches zero, the probability of accepting a move to a worse state approaches zero, and simulated annealing becomes identical to simple hill climbing.
• The values of T should be scaled so that the ratio ΔE/T is meaningful; for example, T could be initialized to a value such that, for an average ΔE, p' = 0.5.
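Solving p' = e^(-ΔE/T) = 0.5 for T gives T = ΔE / ln 2. A quick sketch (the average ΔE of 2.0 is an arbitrary illustrative value):

```python
import math

def initial_temperature(avg_delta_e):
    """Pick T so that an average uphill move is accepted with p' = 0.5."""
    return avg_delta_e / math.log(2)

T = initial_temperature(2.0)
print(math.exp(-2.0 / T))   # ≈ 0.5
```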
Algorithm : Simulated Annealing
1. Evaluate the initial state. If it is a goal state, return it and quit. Otherwise continue with the initial state as the current state (CS).
2. Initialize Best-So-Far to CS.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or there are no new operators left to apply to CS:
Algorithm : Simulated Annealing (Contd..)
(a) Select an operator and apply it to CS to produce a new state NS.
(b) Evaluate NS and compute ΔE = (value of current state) - (value of new state).
• If NS is a goal state, return it and quit.
• If NS is not a goal state but is better than CS, make it the current state and set Best-So-Far to this state.
Algorithm : Simulated Annealing (Contd..)
• If NS is not better than CS, make it the current state with probability p' = e^(-ΔE/T): generate a random number r in [0, 1]; if r is less than p', the move is accepted.
(c) Revise T as necessary according to the annealing schedule.
5. Return Best-So-Far as the answer.
Simulated annealing is often used to solve problems in which the number of moves from a given state is very large.
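A minimal Python sketch of the algorithm above. The geometric cooling schedule (`t0`, `cooling`, `t_min`) and the problem-interface functions (`evaluate`, `successors`, `is_goal`) are illustrative assumptions; real schedules must be tuned empirically, as the next slide notes:

```python
import math
import random

def simulated_annealing(initial, evaluate, successors, is_goal,
                        t0=10.0, cooling=0.95, t_min=1e-3):
    """Simulated annealing with higher evaluate() values being better."""
    current = initial
    if is_goal(current):
        return current
    best = current                       # step 2: Best-So-Far
    t = t0                               # step 3: initialize T
    while t > t_min:                     # step 4: main loop
        moves = successors(current)
        if not moves:
            break                        # no new operators left
        ns = random.choice(moves)        # (a) apply an operator
        if is_goal(ns):
            return ns
        delta_e = evaluate(current) - evaluate(ns)   # (b)
        if delta_e < 0:                  # NS is better: always accept
            current = ns
            if evaluate(current) > evaluate(best):
                best = current           # update Best-So-Far
        elif random.random() < math.exp(-delta_e / t):
            current = ns                 # worse move accepted with p'
        t *= cooling                     # (c) revise T per the schedule
    return best                          # step 5
```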
Annealing Schedule
• A schedule of values for T is known as an annealing schedule.
• The initial value of T is selected such that, for an average change in the heuristic function, the acceptance probability is 0.5.
• The optimal annealing schedule for each problem must be discovered empirically.
• The best way to select an annealing schedule is to try several alternatives and observe the effect on both the quality of the solution found and the rate at which the process converges.
Simulated Annealing (Contd..)
Simple Hill Climbing                   | Simulated Annealing
Moves to worse states are not accepted | Moves to worse states may be accepted
No annealing schedule is required      | An annealing schedule must be maintained
Best-So-Far is not stored              | Best-So-Far is stored
Best-First Search (BFS)
Best-first search combines the advantages of breadth-first search and depth-first search:
• Breadth-first – does not get trapped at dead ends.
• Depth-first – allows a solution to be found without expanding all competing branches.
Best-first search follows a single path at a time, but switches paths whenever some competing path becomes more promising than the current one.
Best-First Search (BFS) (Contd..)
• Heuristic function – an estimate of the cost of getting to a solution from a given node.
• The successors of the chosen node are generated.
• If one of them is a solution, quit; otherwise the new nodes are added to the list of nodes generated so far.
• The most promising node on the list is selected, and the process continues.
Best-First Search (BFS) (Contd..)
Step 1: Start with node A.
Step 2: Expand A, generating B (3), C (5), and D (1).
Step 3: D (1) is the most promising node; expand it, generating E (4) and F (6).
Step 4: B (3) is now the most promising; expand it, generating G (6) and H (5).
Step 5: E (4) is now the most promising; expand it, generating I (2) and J (1).
(Lower values are more promising, since the heuristic estimates the cost to a solution.)
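The trace above can be reproduced with a priority queue. The graph and h-values come from the worked example; treating J as the goal node, and all function and variable names, are illustrative assumptions:

```python
import heapq

def best_first_search(start, h, children, goal):
    """Always expand the open node with the lowest heuristic value."""
    open_list = [(h[start], start)]
    order = []                      # order in which nodes are expanded
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            return order + [node]
        order.append(node)
        for child in children.get(node, []):
            heapq.heappush(open_list, (h[child], child))
    return order

# Graph and h-values from the worked example above.
children = {"A": ["B", "C", "D"], "D": ["E", "F"],
            "B": ["G", "H"], "E": ["I", "J"]}
h = {"A": 0, "B": 3, "C": 5, "D": 1, "E": 4,
     "F": 6, "G": 6, "H": 5, "I": 2, "J": 1}
print(best_first_search("A", h, children, goal="J"))
# ['A', 'D', 'B', 'E', 'J'] — the expansion order A, D, B, E from the trace
```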
A* Algorithm
Best-first search is a simplification of A*. The heuristic function in A* is
f' = g + h'
where g is a measure of the cost of getting from the initial node to the current node, and h' is an estimate of the cost of getting from the current node to a goal node.
A* Algorithm (Contd..)
In best-first search the heuristic function is h' alone; best-first search examines the nodes that appear closest to the goal, so it tends to reach a goal in fewer steps.
A* gives highest priority to nodes that appear to lie on the cheapest path from the start node to a goal node.
A* Algorithm (Contd..)
Algorithm A*
1. Place the start node S on OPEN.
2. If OPEN is empty, stop and return failure.
3. Remove from OPEN the node n that has the smallest value of f'(n). If n is a goal node, return it and stop; otherwise go to step 4.
A* Algorithm (Contd..)
4. Expand n, generating all of its successors n', and place n on CLOSED. For every successor n' that is not already on OPEN or CLOSED, attach a back pointer to n, compute f'(n'), and place it on OPEN.
5. For each n' that is already on OPEN or CLOSED, redirect its back pointer so that it reflects the lowest-g(n') path.
6. Return to step 2.
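A Python sketch of the algorithm; instead of explicit back pointers and OPEN/CLOSED reopening, it records the lowest g found per node and skips stale queue entries, a common equivalent formulation. The example graph, its costs, and the heuristic values are invented for illustration:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* over a graph with f'(n) = g(n) + h'(n).
    `neighbors(n)` yields (successor, edge_cost) pairs."""
    open_heap = [(h(start), 0, start, [start])]   # (f', g, node, path)
    best_g = {start: 0}                           # lowest g seen per node
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                 # stale entry: a cheaper path exists
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2    # step 5: keep the lowest-g path
                heapq.heappush(open_heap,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")        # step 2: OPEN exhausted, failure

# Invented example graph: edge costs and an admissible heuristic.
edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)]}
hvals = {"S": 3, "A": 2, "B": 1, "G": 0}
path, cost = a_star("S", "G", lambda n: edges.get(n, []), lambda n: hvals[n])
print(path, cost)   # ['S', 'A', 'B', 'G'] 4
```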
A* Algorithm (Contd..)
Observations about the A* algorithm
1. Role of the g function
– It allows us to choose a node based on how good the path to that node has been.
– If the requirement is only to get to a solution somehow, define g = 0.
– If the requirement is to find a path with the fewest number of steps, set the cost of going from a node to any successor to 1.
– If the requirement is to find the cheapest path, set the cost of going from one node to another to reflect the actual costs.
A* Algorithm (Contd..)
A* can thus be used to find a minimal-cost overall path, or simply any path as quickly as possible.
2. Role of h'
– If h' is perfect, A* converges to the goal with no search.
– If h' = 0, the search is controlled by g.
– If h' = 0 and g = 0, the search is random.
– If h' = 0 and g = 1 per step, the search is breadth-first.
– If h' ≠ 0 and h' never overestimates h (i.e. the estimated value of h is ≤ the actual value of h), A* is guaranteed to find an optimal path to a goal.
A* Algorithm (Contd..)
Fig. 4-a (h' overestimates h): A has successors B (3+1), C (4+1), and D (5+1). Expanding the best node at each step gives E (2+2) under B, then F (1+3), then the goal G (0+4). (Values shown are f' = h' + g.)
Fig. 4-b (h' underestimates h): A has successors B (3+1), C (4+1), and D (5+1). Expanding B gives E (2+2), and expanding E gives F (3+3).
A* Algorithm (Contd..)
Fig. 4-a:
• Suppose there is a direct path from D to a solution.
• By overestimating h'(D), we may find a worse solution without ever expanding D.
Fig. 4-b:
• f'(F) = 6 > f'(C) = 5, so C is expanded next; by underestimating h'(B), some effort is wasted, but the optimal path is still found.
A* Algorithm (Contd..)
3. The third observation: A* as stated applies to graphs. It can be simplified to apply to trees:
• There is then no need to check whether a new node is already on OPEN or CLOSED.
• Nodes are generated faster, but the same search may be conducted many times.