
Local Searches

Dr. Azhar Mahmood


Associate Professor
Email: azhar.mahmood@cust.edu.pk
Outline

• Local Search
• Optimization
• Hill Climbing and its variants
• Beam Search
• Simulated Annealing
Local Search Algorithms
• Previous searches
– keep paths in memory, and remember
alternatives so search can backtrack.
– Solution is a path to a goal.

• In many optimization problems, the path to the goal is
irrelevant; the goal state itself is the solution
– Local search: widely used for very big problems
– Returns good but not necessarily optimal solutions
Local Search Algorithms
• State space = set of "complete" configurations
(solutions)
• Find configuration satisfying some objective
function
• Local search algorithms
– Keep a single "current" state, or small set of
states
– Iteratively try to improve it / them
– Very memory efficient
• keeps only one or a few states
• You control how much memory you use
Optimization
• Local search is often suitable for optimization problems: search
for the best state by optimizing an objective function
– F(x), where x is often a vector of continuous or discrete values
• Begin with a complete configuration
• A successor of state S is S’ with a single element
changed
• Move from the current state to a successor state
• Low memory requirements, because the search tree or
graph is not maintained in memory (paths are not saved)

Algorithm Design Considerations
• How do you represent your problem?
– Rules for agent, e.g. in 8 puzzle problem only moving empty tile
is a legal move
• What is a “complete state”? (the state landscape consists of complete solutions)

• What is your objective function?


– How do you measure cost or value of a state?

• What is a “neighbor” of a state?


– Or, what is a “step” from one state to another?
– How can you compute a neighbor or a step?
• Are there any constraints you can exploit?
Hill Climbing – Algorithm
(Greedy Local Search)
• Steps:
1. Pick a random point in the search space (start)
2. Consider all the neighbours of the current state
3. Choose the neighbour with the best quality and
replace the current state with that one
4. Loop: repeat steps 2 and 3 until all the neighbouring
states are of lower quality
5. Return the current state as the solution state
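The steps above can be sketched as a short Python routine; the objective function and neighbour generator below are toy assumptions chosen only for illustration:

```python
import random

def hill_climb(start, neighbors, value):
    """Greedy local search: repeatedly move to the best neighbor,
    stopping as soon as no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current          # all neighbors are worse or equal
        current = best

# Toy problem (assumed for illustration): maximize f(x) = -(x - 7)**2
# over the integers, where a state's neighbors are x - 1 and x + 1.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]

print(hill_climb(random.randint(0, 20), step, f))  # always reaches x = 7
```

Because this toy landscape is unimodal, every starting point climbs to the global maximum; the pitfalls discussed on the following slides arise only when the landscape has several peaks.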
Hill Climbing: Example

[Figure: search tree rooted at S, expanding through nodes A–F, G–J, and
K–N, each labelled with its heuristic value.]

Hill Climbing is DFS with a heuristic measurement that orders choices.
The numbers beside the nodes are straight-line distances from the
path-terminating city to the goal city.
Example 1: Best (global maximum)
Example 2: Local maximum
Example 3: Plateau
Hill-Climbing Algorithm
• The algorithm does not maintain a search tree, so the data structure
for the current node need only record the state and the value of the
objective function.
• Hill climbing does not look ahead beyond the immediate neighbors of the
current state (unlike greedy best-first search, whose heuristic estimates
the distance all the way to the goal).
• This resembles trying to find the top of Mount Everest in a
thick fog while suffering from amnesia.
Hill Climbing
 Choose the neighbor with the largest improvement as the
next state

[Figure: f-value = evaluation(state) plotted over the states]

while f-value(next-best(state)) > f-value(state)
    state := next-best(state)
Hill-Climbing Difficulties
Note: these difficulties apply to all local search algorithms, and usually
become much worse as the search space becomes higher dimensional
Problem: depending on initial state, can get stuck in local maxima
Hill-Climbing Search Problems
(this slide assumes maximization rather than minimization)

• Local maximum: a peak that is lower than the highest peak, so a
suboptimal solution is returned
• Plateau: a flat area of the state-space landscape. It can be a flat
local maximum, from which no uphill exit exists, or a shoulder, from
which progress is possible.
• Ridge: a slope that rises very slowly toward a peak, which is very
difficult for greedy algorithms to navigate

[Figures: local maximum, plateau, ridge]
Variants of Hill Climbing
• Stochastic hill climbing
– Does not examine all neighbors before deciding how
to move.
– Rather, it selects a neighbor at random, and decides
(based on the amount of improvement in that
neighbor or probability) whether to move to that
neighbor or to examine another.
– This usually converges more slowly than steepest
ascent, but in some state landscapes, it finds better
solutions
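A minimal sketch of this variant; the acceptance rule (probability growing with the amount of improvement) and the fixed iteration budget are assumptions, since implementations vary:

```python
import random

def stochastic_hill_climb(start, neighbors, value, steps=5000):
    """Pick one random neighbor per step and move uphill with a
    probability that grows with the amount of improvement."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        gain = value(candidate) - value(current)
        # Accept uphill moves with probability 1 - 2**(-gain):
        # larger improvements are accepted more often.
        if gain > 0 and random.random() < 1 - 2 ** (-gain):
            current = candidate
    return current

# Toy landscape (an assumption): maximize f(x) = -(x - 5)**2 on the integers.
f = lambda x: -(x - 5) ** 2
step = lambda x: [x - 1, x + 1]
print(stochastic_hill_climb(0, step, f))
```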
Variants of Hill Climbing

• First-choice hill climbing
– Implements stochastic hill climbing by generating successors
randomly until one is generated that is better than the current
state.
– This is a good strategy when a state has many (e.g., thousands of)
successors.
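First-choice hill climbing can be sketched as below; the give-up cutoff is an added assumption (the textbook version keeps generating successors indefinitely):

```python
import random

def first_choice_hill_climb(start, random_successor, value, give_up=1000):
    """Generate random successors one at a time and take the first one
    that improves the current state; stop after `give_up` consecutive
    failures (an assumed cutoff, not part of the original algorithm)."""
    current, misses = start, 0
    while misses < give_up:
        candidate = random_successor(current)
        if value(candidate) > value(current):
            current, misses = candidate, 0
        else:
            misses += 1
    return current

# Illustration: each state has several successors (11 here, thousands
# in practice); maximize f(x) = -abs(x - 42).
f = lambda x: -abs(x - 42)
succ = lambda x: x + random.randint(-5, 5)
print(first_choice_hill_climb(0, succ, f))
```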
Variants of Hill Climbing
• The hill-climbing algorithms described so far are incomplete
– they often fail to find a goal when one exists because
they can get stuck on local maxima.
• Random restart hill-climbing
– Start different hill-climbing searches from random starting positions until a
goal is found
– Save the best result from any search so far
– If all states have equal probability of being generated, it is complete with
probability approaching 1 (a goal state will eventually be generated).
– Finding an optimal solution becomes a question of allowing a sufficient
number of restarts
– Surprisingly effective, if there aren’t too many local maxima or plateaux
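The strategy above can be sketched as follows; the restart count and the ring-shaped toy landscape (with one local and one global maximum) are assumptions for illustration:

```python
import random

def random_restart_hill_climb(random_state, neighbors, value, restarts=20):
    """Run greedy hill climbing from several random starting states and
    keep the best local maximum found (restart count is an assumption)."""
    def climb(state):
        while True:
            best = max(neighbors(state), key=value)
            if value(best) <= value(state):
                return state
            state = best
    return max((climb(random_state()) for _ in range(restarts)), key=value)

# Toy landscape on a ring of 10 states with a local maximum (value 2 at
# state 2) and the global maximum (value 4 at state 8).
heights = [0, 1, 2, 1, 0, 1, 2, 3, 4, 3]
f = lambda x: heights[x]
step = lambda x: [(x - 1) % 10, (x + 1) % 10]
print(random_restart_hill_climb(lambda: random.randint(0, 9), step, f))
```

A single climb started inside the basin of state 2 gets stuck there; with 20 random restarts the chance of never landing in the global basin is negligible.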

[Figure: f-value = evaluation(state) plotted over the state landscape]
Local Beam Search
• Keep track of k states rather than just one, as in
hill climbing
• Steps:
– Begins with k randomly generated states
– At each step, all successors of all k states are
generated
– If any one is a goal, algorithm halts
– Otherwise, selects best k successors from the
complete list, and repeats
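The steps above can be sketched in Python; the iteration cap and the toy goal problem are assumptions for illustration:

```python
import heapq

def local_beam_search(starts, neighbors, value, is_goal, max_iters=100):
    """Track k states at once (k = len(starts)); each round, expand all
    of them and keep only the k best successors, halting on a goal."""
    k, states = len(starts), list(starts)
    for _ in range(max_iters):
        for s in states:
            if is_goal(s):
                return s
        pool = {n for s in states for n in neighbors(s)}
        states = heapq.nlargest(k, pool, key=value)
    return max(states, key=value)   # best state found if no goal reached

# Toy run with k = 2: walk toward the goal state x = 50.
f = lambda x: -abs(x - 50)
step = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 90], step, f, lambda x: x == 50))  # → 50
```

Note how successors of all k states compete in one pool: useful information is shared across the beam, unlike k independent hill-climbing runs.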

Beam Search Example
K = 2: at every level keep only the 2 best nodes.

[Figure: the search tree from the earlier hill-climbing example, with
only the two best nodes expanded at each level.]
Simulated Annealing
Combinatorial search technique inspired by the physical
process of annealing [Kirkpatrick et al. 1983, Cerny 1985]
Outline
 Select a neighbor at random.

 If better than current state go there.

 Otherwise, go there with some probability.

 The probability goes down with time (analogous to the temperature
cooling) and with the size of the worsening change ΔE.
• Annealing: harden metals and glass by heating them to a
high temperature and then gradually cooling them
– At the start, make lots of moves and then gradually slow down
Simulated Annealing: Algorithm
function Simulated-Annealing(start, schedule)
    current ← start
    for t ← 1 to ∞ do
        T ← schedule[t]
        if T = 0 then return current
        next ← a randomly selected successor of current
        ΔE ← Value[next] − Value[current]
        if ΔE > 0 then current ← next
        else current ← next only with probability e^(ΔE/T)

• Probability of a move decreases with the amount ΔE by which the
evaluation is worsened
• A second parameter T is also used to determine the probability: high T
allows more worse moves; T close to zero results in few or no bad moves
• The schedule input determines the value of T as a function of the
completed cycles, i.e., how the temperature changes over time
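The pseudocode translates almost directly to Python; here the schedule is written as a function of t rather than an array, and the geometric cooling parameters are assumptions for illustration:

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule):
    """Always accept improving moves; accept a worsening move with
    probability e^(ΔE/T), where T follows the cooling schedule."""
    current, t = start, 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Assumed schedule: geometric cooling from T = 10, treated as 0 (stop)
# once the temperature drops below 0.01.
def cool(t, T0=10.0, alpha=0.95, T_min=0.01):
    T = T0 * alpha ** t
    return T if T > T_min else 0

f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(simulated_annealing(0, step, f, cool))
```

Early on (high T) the state wanders freely, even downhill; as T cools, e^(ΔE/T) for negative ΔE shrinks toward zero and the walk settles near the maximum at x = 3.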
Conclusions
• Local Search Algorithms
– Hill climbing
– Beam search
– Simulated Annealing

• Reading: Chapter 4.1 of the book

Next
Genetic Algorithms