
Heuristic Search

So far we have looked at two search algorithms that can in principle be used to systematically search the whole search space. Sometimes, however, it is not feasible to search the whole search space: it's just too big. The basic idea of heuristic search is that, rather than trying all possible search paths, you focus on paths that seem to be getting you nearer your goal state. Of course, you generally can't be sure that you are really near your goal state; it could be that you'll have to take some amazingly complicated and circuitous sequence of steps to get there. But we might be able to make a good guess, and heuristics are used to help us make that guess.

To use heuristic search you need an evaluation function that scores a node in the search tree according to how close to the target/goal state it seems to be. This will just be a guess, but it should still be useful. For example, for finding a route between two towns a possible evaluation function might be the "as the crow flies" distance between the town being considered and the target town. It may turn out that this does not accurately reflect the actual (by road) distance; maybe there aren't any good roads from this town to your target town. However, it provides a quick way of guessing that helps in the search.
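As a sketch of what such an evaluation function might look like, here is a straight-line ("as the crow flies") distance heuristic in Python. The town names and coordinates are made up for the example; in a real route finder they would come from map data.

```python
import math

# Illustrative map coordinates for some towns (assumed values, not real data).
towns = {"A": (0, 0), "B": (3, 4), "C": (6, 8), "D": (6, 0)}

def crow_flies(town, goal):
    """Evaluation function: straight-line distance from town to goal.
    A lower value suggests (but does not guarantee) we are closer."""
    (x1, y1), (x2, y2) = towns[town], towns[goal]
    return math.hypot(x2 - x1, y2 - y1)
```

Note that the heuristic can mislead: a town may be close in a straight line but far by road, which is exactly the caveat above.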

Hill Climbing
In hill climbing the basic idea is to always head towards a state which is better than the current one. So, if you are at town A and you can get to town B and town C (and your target is town D), then you should make a move IF town B or C appears nearer to town D than town A does. In steepest ascent hill climbing you will always make your next state the best successor of your current state, and will only make a move if that successor is better than your current state. This can be described as follows:

1. Start with current-state = initial-state.
2. Until current-state = goal-state OR there is no change in current-state do:
   1. Get the successors of the current state and use the evaluation function to assign a score to each successor.
   2. If one of the successors has a better score than the current-state, then set the new current-state to be the successor with the best score.

Note that the algorithm does not attempt to exhaustively try every node and path, so no node list or agenda is maintained; just the current state. If there are loops in the search space then using hill climbing you shouldn't encounter them: you can't keep going up and still get back to where you were before. Hill climbing terminates when there are no successors of the current state which are better than the current state itself.
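The steps above can be sketched in Python. The `successors`, `score`, and `is_goal` functions are assumptions supplied by the caller; higher scores are taken to mean "closer to the goal".

```python
def hill_climb(initial_state, successors, score, is_goal):
    """Steepest-ascent hill climbing: repeatedly move to the best-scoring
    successor, stopping when no successor beats the current state.
    Only the current state is kept -- no agenda or node list."""
    current = initial_state
    while not is_goal(current):
        candidates = successors(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current  # no successor is better: terminate
        current = best
    return current
```

On a toy one-dimensional problem (states are integers, the goal is 10, and the score is the negated distance to 10) this walks straight to the goal; on a problem with local maxima it would stop short, which is the known weakness of hill climbing.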

Best First Search


Best first search is a little like hill climbing, in that it uses an evaluation function and always chooses the next node to be the one with the best score. However, it is exhaustive, in that it should eventually try all possible paths. It uses an agenda as in breadth/depth first search, but instead of taking the first node off the agenda (and generating its successors) it will take the best node off, i.e. the node with the best score. The successors of the best node will be evaluated (i.e. have a score assigned to them) and added to the list. The basic algorithm is as follows:

1. Start with open = [initial-state].
2. While open ≠ [] do:
   1. Pick the best node on open.
   2. If it is the goal node then return with success. Otherwise find its successors.
   3. Assign the successor nodes a score using the evaluation function and add the scored nodes to open.

(Remember, "open" is just what we have called the agenda.)
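A minimal Python sketch of this algorithm follows, using a heap so that popping always yields the best (lowest-scoring) node. The `successors`, `score`, and `is_goal` names are assumptions; a `visited` set is added to guard against loops, which the bare algorithm above does not mention.

```python
import heapq

def best_first_search(initial_state, successors, score, is_goal):
    """Best first search: "open" is the agenda, kept as a heap of
    (score, node) pairs so the best node is always taken off first."""
    open_list = [(score(initial_state), initial_state)]
    visited = set()
    while open_list:  # while open is not empty
        _, node = heapq.heappop(open_list)  # pick the best node on open
        if is_goal(node):
            return node  # goal node: return with success
        if node in visited:
            continue
        visited.add(node)
        for succ in successors(node):  # score successors, add them to open
            heapq.heappush(open_list, (score(succ), succ))
    return None  # open is empty: no solution found
```

Unlike hill climbing, nothing is ever discarded: a poorly scored node stays on the agenda and will eventually be tried if the better paths run out.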

The A* Algorithm
In its simplest form as described above, best first search is useful, but it doesn't take into account the cost of the path so far when choosing which node to search from next. So you may find a solution, but it may not be a very good solution. There is a variant of best first search known as A* which attempts to find a solution which minimizes the total length or cost of the solution path. It combines the advantages of breadth first search, where the shortest path is found first, with the advantages of best first search, where the node that we guess is closest to the solution is explored next.

In the A* algorithm the score assigned to a node is a combination of the cost of the path so far and the estimated cost to solution. This is normally expressed as an evaluation function f, which is the sum of the values returned by two functions g and h: g returns the cost of the path (from the initial state) to the node in question, and h returns an estimate of the remaining cost to the goal state:

f(Node) = g(Node) + h(Node)

The A* algorithm then looks the same as the simple best first algorithm, but we use this slightly more complex evaluation function. (Our best node now will be the one with the lowest cost/score.)
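A Python sketch of A* under the assumptions that `successors(node)` yields `(child, step_cost)` pairs and `h(node)` estimates the remaining cost to the goal (both names are illustrative, not from any library):

```python
import heapq

def a_star(initial_state, successors, h, is_goal):
    """A*: rank agenda entries by f(n) = g(n) + h(n), where g is the
    path cost so far and h the estimated remaining cost. Returns the
    goal node and its path cost, or (None, inf) if open empties."""
    open_list = [(h(initial_state), 0, initial_state)]  # (f, g, node)
    best_g = {initial_state: 0}  # cheapest known path cost to each node
    while open_list:
        f, g, node = heapq.heappop(open_list)  # lowest f = our best node
        if is_goal(node):
            return node, g
        for child, step_cost in successors(node):
            g2 = g + step_cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(open_list, (g2 + h(child), g2, child))
    return None, float("inf")
```

Compare this with the best first sketch: the shape is identical, and only the score has changed from h alone to g + h, so a short-but-cheap-looking path no longer beats a genuinely cheaper one.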
