
Introduction to Artificial Intelligence

Local Search
(updated 4/30/2006)

Henry Kautz
Local Search in Continuous Spaces

S ← initial state vector
f(S) = quantity to be optimized
α = step size
until GoalTest(S) do
S ← S ± α (∂f/∂S)(S)

negative step (–α) to minimize f
positive step (+α) to maximize f
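The continuous update loop above can be sketched in Python. This is a hedged illustration, not code from the slides: the quadratic objective, step size, and stopping tolerance are all assumptions.

```python
def gradient_descent(f_grad, s, alpha=0.1, tol=1e-6, max_steps=10_000):
    """Repeat S := S - alpha * grad f(S); stop when the step becomes tiny.

    Negate alpha to maximize instead of minimize, as on the slide.
    """
    for _ in range(max_steps):
        step = [alpha * g for g in f_grad(s)]     # alpha * (df/dS)(S)
        s = [x - d for x, d in zip(s, step)]      # negative step: minimize f
        if max(abs(d) for d in step) < tol:       # crude GoalTest(S)
            break
    return s

# Example: f(x, y) = x^2 + y^2 has gradient (2x, 2y) and minimum at (0, 0).
minimum = gradient_descent(lambda s: [2 * s[0], 2 * s[1]], [3.0, -4.0])
```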
Local Search in Discrete State Spaces
state = choose_start_state();
while ! GoalTest(state) do
state := arg min { h(s) | s in Neighbors(state) }
end
return state;

• Terminology:
– “neighbors” instead of “children”
– heuristic h(s) is the “objective function”, no need to be
admissible
• No guarantee of finding a solution
– sometimes: probabilistic guarantee
• Best for goal-finding, not path-finding
• Many variations
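The greedy loop above can be made concrete in Python. A minimal sketch, assuming the caller supplies `h`, `neighbors`, and `goal_test`; the integer toy problem at the end is an invented example, not from the slides.

```python
def greedy_local_search(start, h, neighbors, goal_test, max_steps=1000):
    """Repeatedly move to the neighbor minimizing the objective function h."""
    state = start
    for _ in range(max_steps):
        if goal_test(state):
            return state
        state = min(neighbors(state), key=h)   # arg min over Neighbors(state)
    return state   # no guarantee this satisfies the goal

# Toy example: walk the integers toward 0 by minimizing h(s) = |s|.
result = greedy_local_search(
    7, h=abs, neighbors=lambda s: [s - 1, s + 1], goal_test=lambda s: s == 0
)
```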
Local Search versus Systematic Search
• Systematic Search
– BFS, DFS, IDS, Best-First, A*
– Keeps some history of visited nodes
– Always complete for finite search spaces,
some versions complete for infinite spaces
– Good for building up solutions incrementally
• State = partial solution
• Action = extend partial solution
Local Search versus Systematic Search
• Local Search
– Gradient descent, Greedy local search,
Simulated Annealing, Genetic Algorithms
– Does not keep history of visited nodes
– Not complete; may be able to argue it will
terminate with “high probability”
– Good for “fixing up” candidate solutions
• State = complete candidate solution that may not
satisfy all constraints
• Action = make a small change in the candidate
solution
N-Queens Problem
N-Queens Systematic Search
state = choose_start_state();
add state to Fringe;
while ! GoalTest(state) do
choose state from Fringe according to h(state);
Fringe = Fringe U { Children(state) }
end
return state;

• start = empty board


• GoalTest = N queens are on the board
• h = (N – number of queens on the board)
• children = all ways of adding one queen without creating
any attacks
N-Queens Local Search, V1
state = choose_start_state();
while ! GoalTest(state) do
state := arg min { h(s) | s in Neighbors(state) }
end
return state;

• start = put down N queens randomly


• GoalTest = Board has no attacking pairs
• h = number of attacking pairs
• neighbors = move one queen to a different square on the
board
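A sketch of V1 in Python. One assumption to flag: this version uses the common simplification of fixing each queen to its own column, so a neighbor moves one queen within its column; the slide's neighborhood (move a queen to any square) is larger but works the same way.

```python
import random

def attacking_pairs(board):
    """h: number of attacking pairs, where board[c] = row of column c's queen."""
    n = len(board)
    return sum(
        1
        for c1 in range(n)
        for c2 in range(c1 + 1, n)
        if board[c1] == board[c2]                    # same row
        or abs(board[c1] - board[c2]) == c2 - c1     # same diagonal
    )

def n_queens_greedy(n, max_flips=1000):
    # start: put down N queens randomly (one per column)
    board = [random.randrange(n) for _ in range(n)]
    for _ in range(max_flips):
        if attacking_pairs(board) == 0:              # GoalTest
            return board
        # best neighbor: move one queen to another row in its column
        c, r = min(
            ((c, r) for c in range(n) for r in range(n) if r != board[c]),
            key=lambda cr: attacking_pairs(
                board[:cr[0]] + [cr[1]] + board[cr[0] + 1:]
            ),
        )
        board[c] = r
    return board   # may be stuck in a local minimum

random.seed(0)
solution = n_queens_greedy(6)
```

Note that pure greedy descent can stall on a plateau or local minimum, which is exactly what the later slides address.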
N-Queens Local Search, V2
state = choose_start_state();
while ! GoalTest(state) do
state := arg min { h(s) | s in Neighbors(state) }
end
return state;

• start = put a queen on each square with 50% probability


• GoalTest = Board has N queens, no attacking pairs
• h = (number of attacking pairs + max(0, N - # queens))
• neighbors = add or delete one queen
N Queens Demo
States Where Greedy Search Must Succeed
[figure: objective-function landscape]
States Where Greedy Search Might Succeed
[figure: objective-function landscape]
Local Search Landscape
[figure: objective-function landscape showing a plateau and a local minimum]
Variations of Greedy Search
• Where to start?
– RANDOM STATE
– PRETTY GOOD STATE
• What to do when a local minimum is reached?
– STOP
– KEEP GOING
• Which neighbor to move to?
– BEST neighbor
– Any BETTER neighbor (Hill Climbing)
• How to make local search more robust?
Restarts
for run = 1 to max_runs do
state = choose_start_state();
flip = 0;
while ! GoalTest(state) && flip++ < max_flips do
state := arg min { h(s) | s in Neighbors(state) }
end
if GoalTest(state) return state;
end
return FAIL
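The restart scheme above can be sketched in Python; the helper names (`random_start`, `h`, `neighbors`, `goal_test`) are placeholders for the caller's problem, and the integer toy at the end is an invented example.

```python
import random

def restart_search(random_start, h, neighbors, goal_test,
                   max_runs=10, max_flips=100):
    """Greedy descent with random restarts; returns None on failure (FAIL)."""
    for _ in range(max_runs):
        state = random_start()
        for _ in range(max_flips):
            if goal_test(state):
                return state
            state = min(neighbors(state), key=h)
        if goal_test(state):
            return state
    return None   # FAIL

# Toy: find 0 on the integers, starting each run from a random point.
random.seed(1)
found = restart_search(
    lambda: random.randrange(-50, 50), h=abs,
    neighbors=lambda s: [s - 1, s + 1], goal_test=lambda s: s == 0,
)
```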
Uphill Moves: Random Noise
state = choose_start_state();
while ! GoalTest(state) do
with probability noise do
state = random member Neighbors(state)
else
state := arg min { h(s) | s in Neighbors(state) }
end
end
return state;
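A Python sketch of the noise strategy above. The five-state landscape in the example is invented: it has a local minimum at s = 2 where pure greedy descent oscillates, so only a random uphill step reaches the goal at s = 0.

```python
import random

def noisy_search(start, h, neighbors, goal_test, noise=0.2, max_steps=10_000):
    state = start
    for _ in range(max_steps):
        if goal_test(state):
            return state
        if random.random() < noise:
            state = random.choice(neighbors(state))   # random-walk step
        else:
            state = min(neighbors(state), key=h)      # greedy step
    return state

# Toy landscape: goal at 0, local minimum at 2 (h(2) = 1 < h(1) = 3).
h = {0: 0, 1: 3, 2: 1, 3: 2, 4: 5}.get
random.seed(0)
found = noisy_search(4, h, lambda s: [max(s - 1, 0), min(s + 1, 4)],
                     lambda s: s == 0)
```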
Uphill Moves: Simulated Annealing (Constant Temperature)
state = start;
while ! GoalTest(state) do
next = random member Neighbors(state);
deltaE = h(next) – h(state);
if deltaE ≤ 0 then
state := next;
else
with probability e^(–deltaE/temperature) do
state := next;
end
endif
end
return state;

• Note: the book reverses the sign of deltaE because it maximizes h, whereas here we minimize it.
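The constant-temperature loop can be written directly in Python; the toy landscape reused below is an assumption for illustration, not from the slides.

```python
import math
import random

def anneal_const(start, h, neighbors, goal_test,
                 temperature=1.0, max_steps=10_000):
    """Always take downhill moves; accept uphill moves with prob e^(-dE/T)."""
    state = start
    for _ in range(max_steps):
        if goal_test(state):
            return state
        nxt = random.choice(neighbors(state))
        delta_e = h(nxt) - h(state)
        if delta_e <= 0 or random.random() < math.exp(-delta_e / temperature):
            state = nxt
    return state

# Toy landscape with a local minimum at s = 2 and the goal at s = 0.
h = {0: 0, 1: 3, 2: 1, 3: 2, 4: 5}.get
random.seed(0)
found = anneal_const(4, h, lambda s: [max(s - 1, 0), min(s + 1, 4)],
                     lambda s: s == 0, temperature=2.0, max_steps=50_000)
```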
Uphill Moves: Simulated Annealing (Geometric Cooling Schedule)
temperature := start_temperature;
state = choose_start_state();
while ! GoalTest(state) do
next = random member Neighbors(state);
deltaE = h(next) – h(state);
if deltaE ≤ 0 then
state := next;
else
with probability e^(–deltaE/temperature) do
state := next;
end
temperature := cooling_rate * temperature;
end
return state;
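The geometric schedule differs from the constant-temperature version only in the per-step update `temperature := cooling_rate * temperature`. A sketch, again on the same invented toy landscape:

```python
import math
import random

def anneal_geometric(start, h, neighbors, goal_test,
                     start_temperature=10.0, cooling_rate=0.9995,
                     max_steps=20_000):
    """Simulated annealing with temperature shrinking geometrically each step."""
    temperature = start_temperature
    state = start
    for _ in range(max_steps):
        if goal_test(state):
            return state
        nxt = random.choice(neighbors(state))
        delta_e = h(nxt) - h(state)
        if delta_e <= 0 or random.random() < math.exp(-delta_e / temperature):
            state = nxt
        temperature *= cooling_rate   # uphill moves become steadily rarer
    return state

h = {0: 0, 1: 3, 2: 1, 3: 2, 4: 5}.get
random.seed(0)
found = anneal_geometric(4, h, lambda s: [max(s - 1, 0), min(s + 1, 4)],
                         lambda s: s == 0)
```

Early hot steps behave like a random walk that escapes the local minimum; late cold steps behave like greedy descent.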
Simulated Annealing

• For any finite problem with a fully-connected
state space, will provably converge to the optimum
as the length of the schedule increases:

lim (cooling_rate → 1) Pr(optimum) = 1
• But: the formal bound requires exponential search
time
• In many practical applications, can solve
problems with a faster, non-guaranteed schedule
Other Local Search Strategies
• Tabu Search
– Keep a history of the last K visited states
– Revisiting a state on the history list is “tabu”
• Genetic algorithms
– Population = a set of K search points
– Neighborhood = population U mutations U crossovers
• Mutation = random change in a state
• Crossovers = random mix of assignments from two states
• Typically only a portion of the neighborhood is generated
– Search step: new population = K best members of
neighborhood
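The tabu-search idea can be sketched in Python; the deque-based history and the toy landscape are illustrative assumptions. Here the tabu list blocks the 2↔3 oscillation of pure greedy descent, forcing the search out of the local minimum.

```python
from collections import deque

def tabu_search(start, h, neighbors, goal_test, k=5, max_steps=1000):
    state = start
    tabu = deque([start], maxlen=k)   # history of the last K visited states
    for _ in range(max_steps):
        if goal_test(state):
            return state
        # Revisiting a state on the history list is "tabu".
        candidates = [s for s in neighbors(state) if s not in tabu]
        if not candidates:
            break
        state = min(candidates, key=h)
        tabu.append(state)
    return state

# Toy landscape: goal at 0, local minimum at 2 (h(2) = 1 < h(1) = 3).
h = {0: 0, 1: 3, 2: 1, 3: 2, 4: 5}.get
found = tabu_search(4, h, lambda s: [max(s - 1, 0), min(s + 1, 4)],
                    lambda s: s == 0)
```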
