Hill Climbing Methods

Local Search Algorithms & Optimization Problems
Hill Climbing
Hill climbing
is an optimization technique which belongs to the family of local search. It is relatively simple to implement, making it a popular first choice. Although more advanced algorithms may give better results, in some situations hill climbing works well.

Hill climbing can be used to solve problems that have many solutions, some of which are better than others. It starts with a random (potentially poor) solution and iteratively makes small changes to the solution, each time improving it a little. When the algorithm can no longer find any improvement, it terminates. Ideally, at that point the current solution is close to optimal, but there is no guarantee that hill climbing will ever come close to the optimal solution.

For example, hill climbing can be applied to the travelling salesman problem. It is easy to find a solution that visits all the cities but is very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much better route is obtained.

Hill climbing is used widely in artificial intelligence for reaching a goal state from a starting node. The choice of next node and starting node can be varied to give a list of related algorithms.

Hill climbing attempts to maximize (or minimize) a function f(x), where x ranges over discrete states. These states are typically represented by vertices in a graph, where edges encode nearness or similarity of the states. Hill climbing follows the graph from vertex to vertex, always locally increasing (or decreasing) the value of f, until a local maximum x_m is reached. Hill climbing can also operate on a continuous space: in that case, the algorithm is called gradient ascent (or gradient descent if the function is minimized).

Problems with hill climbing: local maxima (we've climbed to the top of the hill and missed the mountain), plateaus (everything around is about as good as where we are), and ridges (we're on a ridge leading up, but we can't directly apply an operator to improve our situation, so we have to apply more than one operator to get there).

Solutions include: backtracking, making big jumps (to handle plateaus or poor local maxima), and applying multiple rules before testing (helps with ridges).

Hill climbing is best suited to problems where the heuristic gradually improves the closer it gets to the solution; it works poorly where there are sharp drop-offs. It assumes that local improvement will lead to global improvement.
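The loop below is a minimal Python sketch of this idea for the travelling salesman problem, assuming a symmetric, nonnegative distance matrix dist (dist[i][j] is the cost of travelling between cities i and j). The move operator swaps the positions of two cities in the tour, as described above; for simplicity the sketch stops after a fixed iteration budget rather than detecting that no further improvement is visible.

import random

def tour_length(tour, dist):
    # Total length of the closed tour, including the edge back to the start.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb_tsp(dist, max_iters=10000):
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)                          # random (potentially poor) starting tour
    best = tour_length(tour, dist)
    for _ in range(max_iters):
        i, j = random.sample(range(n), 2)         # try switching the order of two cities
        tour[i], tour[j] = tour[j], tour[i]
        new = tour_length(tour, dist)
        if new < best:
            best = new                            # keep the change only if it improves the tour
        else:
            tour[i], tour[j] = tour[j], tour[i]   # otherwise undo the swap
    return tour, best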
 
Local maxima
A problem with hill climbing is that it will find only local maxima. Unless the heuristic is convex, it may not reach a global maximum. Other local search algorithms try to overcome this problem, such as stochastic hill climbing, random walks and simulated annealing. It can also be reduced by random-restart hill climbing, which repeats the search from several different random starting states and keeps the best result.
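As a small sketch of the random-restart idea, the wrapper below reuses the hypothetical hill_climb_tsp and dist from the example above: each restart begins from a fresh random tour, and only the best tour found across all runs is kept.

def random_restart_tsp(dist, restarts=20):
    best_tour, best_len = None, float("inf")
    for _ in range(restarts):
        tour, length = hill_climb_tsp(dist)      # each run starts from a new random tour
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len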
Ridges
A ridge is a curve in the search space that leads to a maximum, but the orientation of the ridge relative to the available moves used to climb is such that each individual move leads to a lower point. In other words, each point on the ridge looks to the algorithm like a local maximum, even though the point is part of a curve leading to a better optimum.
Plateau
Another problem with hill climbing is that of a plateau, which occurs when we get to a "flat" part of the search space, i.e. a region where the heuristic values are all very close together. This kind of flatness can cause the algorithm to cease making progress and wander aimlessly.
Steepest Ascent
Hill climbing in which you generate all successors of the current state and choose the best one. Many texts treat simple hill climbing and steepest-ascent hill climbing as identical.
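A generic steepest-ascent loop might look like the sketch below, where successors(state) and value(state) are hypothetical problem-specific callables: all successors of the current state are generated, the search moves to the best one, and it stops as soon as no successor improves on the current state.

def steepest_ascent(state, successors, value):
    while True:
        neighbours = successors(state)
        if not neighbours:
            return state                          # nowhere to go
        best = max(neighbours, key=value)
        if value(best) <= value(state):           # no uphill move available: local maximum
            return state
        state = best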
Branch and Bound
 
Generally, in search we want to find the move that results in the lowest cost (or highest, depending on the problem). Branch and bound techniques rely on the idea that we can partition our choices into sets using some domain knowledge, and ignore a set when we can determine that the optimal element cannot be in it. In this way we can avoid examining most elements of most sets. This can be done if we know that an upper bound on set X is lower than a lower bound on set Y (in which case Y can be pruned).
 Example: Travelling Salesman Problem.
We decompose our set of choices into a set of sets, in each one of which we've taken a different route out of the current city. We continue to decompose until we have complete paths in the graph. If, while we're decomposing the sets, we find two paths that lead to the same node, we can eliminate the more expensive one.

Best-first B&B is a variant in which we can give a lower bound on a set of possible solutions. In every cycle, we branch on the class with the least lower bound. When a singleton is selected we can stop.

Depth-first B&B selects the most recently generated set; it produces DFS behavior but saves memory.

Some types of branch-and-bound algorithms: A*, AO*, alpha-beta, SSS*, B*.
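As a sketch of the idea, the depth-first branch-and-bound routine below solves small travelling salesman instances. Each "set" is the collection of complete tours sharing a common prefix; the cost of the prefix serves as a (weak) lower bound, so any prefix already costlier than the best complete tour found so far is pruned. A nonnegative, symmetric distance matrix dist is assumed.

def tsp_branch_and_bound(dist):
    n = len(dist)
    best_len, best_tour = float("inf"), None

    def extend(tour, cost):
        nonlocal best_len, best_tour
        if cost >= best_len:                      # bound: this whole set cannot contain the optimum
            return
        if len(tour) == n:                        # complete tour: close the cycle back to the start
            total = cost + dist[tour[-1]][tour[0]]
            if total < best_len:
                best_len, best_tour = total, tour[:]
            return
        for city in range(n):                     # branch: one subset per choice of next city
            if city not in tour:
                extend(tour + [city], cost + dist[tour[-1]][city])

    extend([0], 0)                                # fix the starting city
    return best_tour, best_len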
Best-First Search
Expand the node that has the best evaluation according to the heuristic function. An OPEN list contains states that haven't been visited; a CLOSED list contains those that have, to prevent loops. This approach doesn't necessarily find the shortest path. When the evaluation function is just the path cost g, this is blind (uniform-cost) search; when it is just h', the estimated cost to the goal, this is greedy best-first search; when it is g + h', this is A*.
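A compact sketch of best-first search with OPEN and CLOSED lists is given below. neighbours(state) is assumed to yield (successor, step_cost) pairs and h(state) to return the heuristic estimate; both are hypothetical, problem-specific callables. The priority used here is g + h, i.e. the A* variant; using g alone or h alone gives the other behaviours described above.

import heapq, itertools

def best_first(start, goal, neighbours, h):
    counter = itertools.count()                   # tie-breaker so states themselves are never compared
    open_list = [(h(start), next(counter), 0, start, [start])]   # (priority, tie, g, state, path)
    closed = set()
    while open_list:
        _, _, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path
        if state in closed:                       # already expanded: skip to prevent loops
            continue
        closed.add(state)
        for succ, step in neighbours(state):
            if succ not in closed:
                heapq.heappush(open_list,
                               (g + step + h(succ), next(counter), g + step, succ, path + [succ]))
    return None                                   # OPEN list exhausted: no path found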
Local search: use a single current state and move to neighboring states.

Advantages:
– Uses very little memory.
– Often finds reasonable solutions in large or infinite state spaces.

Local search algorithms are also useful for pure optimization problems:
– Find the best state according to some objective function.
– E.g. survival of the fittest as a metaphor for optimization.

Example: n-queens
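A small sketch of hill climbing on the n-queens problem: a state is a list where queens[c] is the row of the queen in column c, and the heuristic counts attacking pairs (so the search minimizes rather than maximizes). Each step moves the single queen whose move reduces conflicts the most, stopping at a solution or when stuck on a local optimum or plateau.

import random

def conflicts(queens):
    # Number of pairs of queens that attack each other (same row or same diagonal).
    n = len(queens)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if queens[a] == queens[b] or abs(queens[a] - queens[b]) == b - a)

def hill_climb_queens(n):
    queens = [random.randrange(n) for _ in range(n)]   # random start, one queen per column
    while True:
        current = conflicts(queens)
        if current == 0:
            return queens                              # goal: no attacking pairs
        best_move, best_score = None, current
        for col in range(n):
            original = queens[col]
            for row in range(n):
                if row == original:
                    continue
                queens[col] = row                      # try moving this queen within its column
                score = conflicts(queens)
                if score < best_score:
                    best_move, best_score = (col, row), score
            queens[col] = original
        if best_move is None:
            return queens                              # stuck on a local optimum or plateau
        queens[best_move[0]] = best_move[1]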
