
Chapter 3 Heuristic Search Techniques

Contents
• A framework for describing search methods is provided and several general-purpose search techniques are discussed.
• All are varieties of heuristic search:
  – Generate and Test
  – Hill Climbing
  – Best First Search
  – Problem Reduction
  – Constraint Satisfaction
  – Means-Ends Analysis

Generate-and-Test
Algorithm:
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space. For others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to step 1.
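As a rough illustration of the loop above, here is a minimal Python sketch. The names generate_candidate and is_goal, and the max_tries cutoff, are illustrative assumptions and not part of the slide's algorithm.

import random

def generate_and_test(generate_candidate, is_goal, max_tries=100000):
    # Steps 1-3 of the algorithm: generate, test, repeat until a solution is found.
    for _ in range(max_tries):
        candidate = generate_candidate()      # 1. generate a possible solution
        if is_goal(candidate):                # 2. test it against the goal states
            return candidate                  # 3. solution found: quit
    return None                               # cutoff added so the sketch always terminates

# Toy usage: blindly guess a hidden 3-digit combination.
secret = (4, 2, 7)
print(generate_and_test(
    generate_candidate=lambda: tuple(random.randint(0, 9) for _ in range(3)),
    is_goal=lambda c: c == secret))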

Generate-and-Test
• It is a depth-first search procedure, since complete solutions must be generated before they can be tested.
• In its most systematic form, it is simply an exhaustive search of the problem space.
• It can also operate by generating solutions randomly.
• Also called the British Museum algorithm: if a sufficient number of monkeys were placed in front of a set of typewriters and left alone long enough, then they would eventually produce all the works of Shakespeare.
• Dendral, which infers the structure of organic compounds using NMR spectrograms, uses a plan-generate-test strategy.

Hill Climbing
• Hill climbing is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space.
• The test function is augmented with a heuristic function that provides an estimate of how close a given state is to the goal state.
• The heuristic function can be computed with a negligible amount of computation.
• Hill climbing is often used when a good heuristic function is available for evaluating states but no other useful knowledge is available.

Simple Hill Climbing
Algorithm:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the current state:
   a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.
   b. Evaluate the new state:
      i. If it is a goal state, then return it and quit.
      ii. If it is not a goal state but it is better than the current state, then make it the current state.
      iii. If it is not better than the current state, then continue in the loop.
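A minimal Python sketch of simple hill climbing, assuming the caller supplies a successor generator, a heuristic (higher is better) and a goal test; these names are illustrative, not from the slides.

def simple_hill_climbing(initial_state, successors, heuristic, is_goal):
    # Accept the first successor that is better than the current state (step 2b.ii).
    current = initial_state
    if is_goal(current):
        return current
    while True:
        improved = False
        for new_state in successors(current):     # 2a: apply operators one at a time
            if is_goal(new_state):
                return new_state                  # 2b.i
            if heuristic(new_state) > heuristic(current):
                current = new_state               # 2b.ii: first better successor wins
                improved = True
                break                             # restart the loop from the new state
        if not improved:
            return current                        # no operator improves the state: stop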

Simple Hill Climbing
• The key difference between simple hill climbing and generate-and-test is the use of an evaluation function as a way to inject task-specific knowledge into the control process.
• Is one state better than another? For this algorithm to work, a precise definition of "better" must be provided.

Steepest-Ascent Hill Climbing
• This is a variation of simple hill climbing which considers all the moves from the current state and selects the best one as the next state.
• Also known as gradient search.

Algorithm: Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
   a. Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
   b. For each operator that applies to the current state do:
      i. Apply the operator and generate a new state.
      ii. Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
   c. If SUCC is better than the current state, then set the current state to SUCC.
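For comparison with the simple variant, here is a Python sketch of steepest-ascent hill climbing under the same illustrative assumptions (successors, a higher-is-better heuristic and a goal test supplied by the caller).

def steepest_ascent_hill_climbing(initial_state, successors, heuristic, is_goal):
    current = initial_state
    if is_goal(current):
        return current
    while True:
        succ = None                               # 2a: SUCC, the best successor seen so far
        for new_state in successors(current):     # 2b: examine every applicable operator
            if is_goal(new_state):
                return new_state
            if succ is None or heuristic(new_state) > heuristic(succ):
                succ = new_state                  # 2b.ii: keep the best successor
        if succ is not None and heuristic(succ) > heuristic(current):
            current = succ                        # 2c: move to SUCC only if it improves
        else:
            return current                        # a full iteration produced no change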

Hill Climbing: Drawbacks
This simple policy has three well-known drawbacks:
1. Local maxima: a local maximum, as opposed to the global maximum.
2. Plateaus: an area of the search space where the evaluation function is flat, thus requiring a random walk.
3. Ridges: where there are steep slopes and the search direction is not towards the top but towards the side.

Figure 5.9: Local maximum (a), plateau (b) and ridge (c) situations for hill climbing.

Hill Climbing
• In each of the previous cases (local maxima, plateaus and ridges), the algorithm reaches a point at which no progress is being made.
• A solution is to do random-restart hill climbing, where random initial states are generated, running each until it halts or makes no discernible progress. The best result is then chosen.

Figure 5.10: Random-restart hill climbing (6 initial values) for the situation of Figure 5.9(a).

Simulated Annealing
• An alternative to random-restart hill climbing when stuck on a local maximum is to do a 'reverse walk' to escape the local maximum.
• This is the idea of simulated annealing.
• The term simulated annealing derives from the roughly analogous physical process of heating and then slowly cooling a substance to obtain a strong crystalline structure.
• The simulated annealing process lowers the temperature by slow stages until the system "freezes" and no further changes occur.

Simulated Annealing
Figure 5.11: Simulated Annealing Demo (http://www.taygeta.com/annealing/demo1.html)

Simulated Annealing
• The probability of a transition to a higher energy state is given by the function:
  P = e^(-ΔE/kT)
  where ΔE is the positive change in the energy level, T is the temperature, and k is Boltzmann's constant.
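To make the formula concrete, the short snippet below (not from the slides; k = 1 and the ΔE and T values are arbitrary) evaluates P = e^(-ΔE/kT) at a few temperatures and shows that uphill moves become less likely as T drops.

import math

k = 1.0          # Boltzmann-like constant, set to 1 for illustration
delta_E = 2.0    # positive change in the energy level (how much worse the new state is)

for T in (100.0, 10.0, 1.0, 0.1):
    p = math.exp(-delta_E / (k * T))
    print(f"T = {T:6.1f}  ->  P = {p:.6f}")
# As T falls the probability drops toward 0, so worse moves are accepted less and less often.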

Differences
• The algorithm for simulated annealing is slightly different from the simple hill-climbing procedure. The three differences are:
  – The annealing schedule must be maintained.
  – Moves to worse states may be accepted.
  – It is a good idea to maintain, in addition to the current state, the best state found so far.

Algorithm: Simulated Annealing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the current state:
   a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.
   b. Evaluate the new state. Compute ΔE = (value of current state) – (value of new state).
      • If the new state is a goal state, then return it and quit.
      • If it is not a goal state but is better than the current state, then make it the current state. Also set BEST-SO-FAR to this new state.
      • If it is not better than the current state, then make it the current state with probability p' as defined above. This step is usually implemented by invoking a random number generator to produce a number in the range [0, 1]. If the number is less than p', then the move is accepted. Otherwise, do nothing.
   c. Revise T as necessary according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.
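The algorithm compresses into the following Python sketch. The geometric schedule (T0, cooling, T_min), the random_successor helper and the higher-is-better value function are illustrative assumptions; the slides deliberately leave the annealing schedule open.

import math
import random

def simulated_annealing(initial_state, random_successor, value, is_goal,
                        T0=10.0, cooling=0.95, T_min=1e-3):
    current = initial_state
    if is_goal(current):
        return current                             # step 1
    best_so_far = current                          # step 2
    T = T0                                         # step 3: initialize T
    while T > T_min:                               # step 4 (stopping on a minimum temperature)
        new_state = random_successor(current)      # 4a: apply an operator
        if is_goal(new_state):
            return new_state
        delta_E = value(current) - value(new_state)    # 4b
        if delta_E < 0:                            # new state is better
            current = new_state
            if value(current) > value(best_so_far):
                best_so_far = current
        elif random.random() < math.exp(-delta_E / T):
            current = new_state                    # worse move accepted with probability e^(-dE/T)
        T *= cooling                               # 4c: revise T according to the schedule
    return best_so_far                             # step 5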

Simulated Annealing: Implementation
• It is necessary to select an annealing schedule, which has three components:
  – The initial value to be used for the temperature
  – The criteria that will be used to decide when the temperature will be reduced
  – The amount by which the temperature will be reduced

Best First Search
• Combines the advantages of both DFS and BFS into a single method.
• DFS is good because it allows a solution to be found without all competing branches having to be expanded.
• BFS is good because it does not get trapped on dead-end paths.
• One way of combining the two is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one does.

Best First Search
• At each step of the best-first search process, we select the most promising of the nodes we have generated so far.
• This is done by applying an appropriate heuristic function to each of them.
• We then expand the chosen node by using the rules to generate its successors.
• Similar to steepest-ascent hill climbing, with two exceptions:
  – In hill climbing, one move is selected and all the others are rejected, never to be reconsidered. This produces the straight-line behaviour that is characteristic of hill climbing.
  – In best-first search, one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising. Further, the best available state is selected, even if that state has a value that is lower than the value of the state that was just explored. This contrasts with hill climbing, which will stop if there are no successor states with better values than the current state.

OR-Graph
• It is sometimes important to search graphs so that duplicate paths will not be pursued.
• An algorithm to do this will operate by searching a directed graph in which each node represents a point in the problem space.
• Each node will contain:
  – A description of the problem state it represents
  – An indication of how promising it is
  – A parent link that points back to the best node from which it came
  – A list of the nodes that were generated from it
• The parent link will make it possible to recover the path to the goal once the goal is found.
• The list of successors will make it possible, if a better path is found to an already existing node, to propagate the improvement down to its successors.
• This is called an OR-graph, since each of its branches represents an alternative problem-solving path.

Implementation of OR-Graphs
• We need two lists of nodes:
  – OPEN: nodes that have been generated and have had the heuristic function applied to them but which have not yet been examined. OPEN is actually a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.
  – CLOSED: nodes that have already been examined. We need to keep these nodes in memory if we want to search a graph rather than a tree, since whenever a new node is generated, we need to check whether it has been generated before.

Algorithm: Best First Search
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
   a. Pick the best node on OPEN.
   b. Generate its successors.
   c. For each successor do:
      i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
      ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
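A Python sketch of the loop above, using a priority queue for OPEN keyed on the heuristic value (lower h' treated as more promising) and a parent map to record where each node came from. States are assumed to be hashable, and the path-improvement bookkeeping of step c.ii is omitted for brevity.

import heapq

def best_first_search(initial_state, successors, heuristic, is_goal):
    counter = 0                                    # tie-breaker so the heap never compares states
    open_list = [(heuristic(initial_state), counter, initial_state)]   # step 1
    parent = {initial_state: None}                 # also serves as the "generated before" check
    while open_list:                               # step 2
        _, _, state = heapq.heappop(open_list)     # 2a: pick the best node on OPEN
        if is_goal(state):
            path = []                              # recover the path via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for succ in successors(state):             # 2b: generate its successors
            if succ not in parent:                 # 2c.i: not generated before
                counter += 1
                parent[succ] = state
                heapq.heappush(open_list, (heuristic(succ), counter, succ))
    return None                                    # no nodes left on OPEN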

Best First Search: Simple Explanation
• It proceeds in steps, expanding one node at each step, until it generates a node that corresponds to a goal state.
• At each step, it picks the most promising of the nodes that have so far been generated but not expanded.
• It generates the successors of the chosen node, applies the heuristic function to them, and adds them to the list of open nodes, after checking to see if any of them have been generated before.
• By doing this check, we can guarantee that each node only appears once in the graph, although many nodes may point to it as a successor.

Best First Search: Example
[Figure: the search tree after Steps 1–5, with heuristic values attached to nodes A–J.]

A* Algorithm
• Best First Search is a simplification of the A* Algorithm.
• Presented by Hart et al.
• The algorithm uses:
  – f': a heuristic function that estimates the merit of each node we generate. f' is the sum of two components, g and h', and represents an estimate of the cost of getting from the initial state to a goal state along the path that generated the current node.
  – g: a measure of the cost of getting from the initial state to the current node.
  – h': an estimate of the additional cost of getting from the current node to a goal state.
  – OPEN
  – CLOSED

A* Algorithm
1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h' value to whatever it is, and its f' value to h' + 0, or h'. Set CLOSED to the empty list.
2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report failure. Otherwise, pick the node on OPEN with the lowest f' value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. See if BESTNODE is a goal node. If so, exit and report a solution. Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet.

A* Algorithm (contd.)
For each SUCCESSOR, do the following:
a. Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to recover the path once a solution is found.
b. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
c. See if SUCCESSOR is the same as any node on OPEN. If so, call that node OLD.
d. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors.
e. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors. Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR).
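Putting steps 1–2 and the successor handling together, here is a compact Python sketch of A*. It assumes successors(state) yields (next_state, step_cost) pairs and that states are hashable; the OLD/parent-relinking bookkeeping above is replaced by a best_g map plus a stale-entry check, which is a simplification of the slide's procedure.

import heapq
import itertools

def a_star(initial_state, successors, h, is_goal):
    tie = itertools.count()                              # tie-breaker so the heap never compares states
    open_list = [(h(initial_state), next(tie), 0, initial_state)]   # f' = h' + 0 for the initial node
    best_g = {initial_state: 0}                          # cheapest known g for each generated state
    parent = {initial_state: None}                       # back links to recover the path
    while open_list:
        f, _, g, bestnode = heapq.heappop(open_list)     # node on OPEN with the lowest f'
        if is_goal(bestnode):
            path = []                                    # goal found: rebuild the path
            while bestnode is not None:
                path.append(bestnode)
                bestnode = parent[bestnode]
            return list(reversed(path)), g
        if g > best_g.get(bestnode, float("inf")):
            continue                                     # stale queue entry: a cheaper path was found later
        for succ, cost in successors(bestnode):
            g_succ = g + cost                            # g(SUCCESSOR) = g(BESTNODE) + step cost
            if g_succ < best_g.get(succ, float("inf")):  # new node, or a better path to a known one
                best_g[succ] = g_succ
                parent[succ] = bestnode
                heapq.heappush(open_list, (g_succ + h(succ), next(tie), g_succ, succ))
    return None, float("inf")                            # no nodes left on OPEN: report failure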

Observations about A*
• Role of the g function: it lets us choose which node to expand next on the basis not only of how good the node itself looks, but also of how good the path to the node was.
• h' is an estimate of the distance of a node to the goal. If h' is a perfect estimator of h, then A* will converge immediately to the goal with no search.

Graceful Decay of Admissibility
• If h' rarely overestimates h by more than δ, then the A* algorithm will rarely find a solution whose cost is more than δ greater than the cost of the optimal solution.
• Under certain conditions, the A* algorithm can be shown to be optimal in that it generates the fewest nodes in the process of finding a solution to a problem.

AND-OR Graphs
• An AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved.
• One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution.
• Example: the goal "Acquire TV set" can be achieved either by "Steal a TV set", or by the AND combination of "Earn some money" and "Buy a TV set".

AND-OR Graph Examples
[Figure: two example AND-OR graphs with estimated costs attached to the nodes.]

Problem Reduction
FUTILITY is chosen to correspond to a threshold such that any solution with a cost above it is too expensive to be practical, even if it could ever be found.

Algorithm: Problem Reduction
1. Initialize the graph to the starting node.
2. Loop until the starting node is labeled SOLVED or until its cost goes above FUTILITY:
   a. Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded or labeled as solved.
   b. Pick one of these nodes and expand it. If there are no successors, assign FUTILITY as the value of this node. Otherwise, add its successors to the graph and for each of them compute f'. If the f' of any node is 0, mark that node as SOLVED.
   c. Change the f' estimate of the newly expanded node to reflect the new information provided by its successors. Propagate this change backward through the graph. This propagation of revised cost estimates back up the tree was not necessary in the best-first search algorithm because only unexpanded nodes were examined. But now expanded nodes must be re-examined so that the best current path can be selected.

The Operation of Problem Reduction
[Figure: the search graph before Steps 1–4, with f' values attached to the nodes.]

AO* Algorithm
• Rather than the two lists OPEN and CLOSED, the algorithm uses a single structure GRAPH, representing the part of the search graph that has been explicitly generated so far.
• Each node in the graph will point down to its immediate successors and up to its immediate predecessors.
• Each node in the graph will also have associated with it an h' value, an estimate of the cost of a path from itself to a set of solution nodes.
• We will not store g as we did in the A* algorithm. It is not possible to compute a single such value, since there may be many paths to the same state. Such a value is also not necessary because of the top-down traversal of the best-known path, which guarantees that only nodes that are on the best path will ever be considered for expansion.
• So h' will serve as the estimate of the goodness of a node.
• If the estimated cost of a solution is greater than FUTILITY, then the search is abandoned as too expensive to be practical.

AO* Algorithm
1. Let GRAPH consist only of the node representing the initial state. Call this node INIT. Compute h'(INIT).
2. Until INIT is labeled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure:
   (I) Trace the marked arcs from INIT and select an unexpanded node NODE.
   (II) Generate the successors of NODE. If there are no successors, then assign FUTILITY as h'(NODE). This means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE do the following:
      (a) Add SUCCESSOR to GRAPH.
      (b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.
      (c) If SUCCESSOR is not a terminal node, compute its h' value.

   (III) Propagate the newly discovered information up the graph by doing the following: Let S be a set of nodes that have been marked SOLVED or whose h' values have changed. Initialize S to NODE. Until S is empty, repeat the following procedure:
      (a) Select a node from S, call it CURRENT, and remove it from S.
      (b) Compute the cost of each of the arcs emerging from CURRENT. Assign the minimum of these costs as the new h' of CURRENT.
      (c) Mark the minimum-cost path as the best path out of CURRENT.
      (d) Mark CURRENT SOLVED if all of the nodes connected to it through the newly marked arcs have been labeled SOLVED.
      (e) If CURRENT has been marked SOLVED or if its h' value has just changed, its new status must be propagated backwards up the graph, so all of the ancestors of CURRENT are added to S.

AO* Search Procedure
1. Place the start node on OPEN.
2. Using the search tree, compute the most promising solution tree TP.
3. Select a node n that is both on OPEN and a part of TP. Remove n from OPEN and place it on CLOSED.
4. If n is a goal node, label n as SOLVED. If the start node is solved, exit with success, where TP is the solution tree. Remove all nodes from OPEN with a solved ancestor.
5. If n is not a solvable node, label n as unsolvable. If the start node is labeled as unsolvable, exit with failure. Remove all nodes from OPEN with unsolvable ancestors.
6. Otherwise, expand node n, generating all of its successors. Compute the cost of each newly generated node and place all such nodes on OPEN.
7. Go back to step 2.

Note: AO* will always find a minimum-cost solution.

The working of the AO* algorithm is illustrated in the figure as follows:
• Referring to the figure, the initial node is expanded and D is marked initially as the most promising node.
• D is expanded, producing an AND arc E-F. The f' value of D is updated to 10.
• Going backwards, we can see that the AND arc B-C is better, so it is now marked as the current best path. B and C have to be expanded next.
• This process continues until a solution is found or all paths have led to dead ends, indicating that there is no solution.
• In the A* algorithm, the path from one node to another is always that of the lowest cost, and it is independent of the paths through other nodes.

A/O* Algorithm
• Data structures:
  – Graph
  – Marked connectors (pointing down, unlike A*)
  – Costs q() maintained on nodes
  – SOLVED markings

[Figure: an OR connector versus an AND connector.]

[Figure: an AND/OR graph.]

[Figure: a solution subgraph.]

[Figure: another solution subgraph.]

Solution Subgraph G' of G from n to Terminals T
• If n is in T, G' is just the singleton n.
• Otherwise n has one connector to a set of nodes n1, n2, ..., nk, such that:
  – There is a solution graph from each ni to T.
  – G' is n, that connector, the nodes n1, ..., nk, plus the solution graphs from each of the ni.

Heuristic Values: estimated cost to the solution set
[Figure: example graph with h values attached to the nodes.]

Monotone Restriction
• h(n) <= c + h(n1) + h(n2) + ... + h(nk), where c is the cost of the connector between n and the set n1, ..., nk.
• This guarantees that h(n) <= h*(n).
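A one-line check of the condition for a single connector, assuming h is given as a dictionary of estimates; the names and numbers below are hypothetical, not from the slides.

def monotone_ok(h, node, connector_cost, successors):
    # True if h(n) <= c + h(n1) + ... + h(nk) holds for this connector.
    return h[node] <= connector_cost + sum(h[s] for s in successors)

# Hypothetical example: h(n) = 4, connector cost 2, successors with h = 2 and h = 1.
print(monotone_ok({"n": 4, "a": 2, "b": 1}, "n", 2, ["a", "b"]))   # True, since 4 <= 2 + 2 + 1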

Cost Values q(n)
• If n has no successors, q(n) = h(n).
• Otherwise, working from the bottom up: for each connector, q(n) = connector cost + sum of q(successors). Pick the smallest of these and mark that direction.
• If the marked direction has all successors SOLVED, then n is marked SOLVED.
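The cost-revision rule for a single node can be sketched as follows. This is only the bottom-up q/SOLVED update for one node, not the full A/O* algorithm; the connector representation (a list of (cost, successor_list) pairs per node) and the dictionaries are illustrative assumptions, and the q values of the successors are assumed to be up to date.

def revise_node(node, connectors, h, q, solved):
    # connectors[node]: list of (connector_cost, successor_list) pairs leaving `node`.
    # h, q: dicts of heuristic and revised-cost values; solved: set of SOLVED nodes.
    if not connectors.get(node):                  # no successors: q(n) = h(n)
        q[node] = h[node]
        return None
    best_cost, best = float("inf"), None
    for cost, succs in connectors[node]:
        total = cost + sum(q[s] for s in succs)   # connector cost + sum of successor q values
        if total < best_cost:
            best_cost, best = total, (cost, succs)
    q[node] = best_cost                           # pick the smallest and mark that connector
    if all(s in solved for s in best[1]):
        solved.add(node)                          # every successor in the marked direction is SOLVED
    return best                                   # the marked (chosen) connector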

Basic Idea of A/O*
• First, a top-down graph-growing step picks out the best available partial solution subgraph from the explicit graph.
• One leaf node of this subgraph is expanded.
• Second, a bottom-up cost-revising, connector-marking, SOLVE-labeling step propagates the new information.

Tracing the Algorithm
[Figures: six successive snapshots of the A/O* trace on an example graph, showing the revised q values at each step.]

Problem Reduction: AO*
[Figure: example graphs with node costs, illustrating necessary backward propagation of revised cost estimates.]
