
Informed Search

SE-805 Advanced AI – Lecture

1
Informed search
• Use domain knowledge!
– Are we getting close to the goal?
– Use a heuristic function that estimates how close a state is to the goal
– A heuristic does NOT have to be perfect!
– Examples of strategies:
▪ Greedy best-first search
▪ A* search

2
Informed search

Heuristic: all distances shown on the map are straight-line distances from each city to Sault Ste Marie.
3
Greedy search
– Evaluation function h(n) (heuristic)
▪ h(n) estimates the cost from n to the closest goal
– Example: h_SLD(n) = straight-line distance from n to Sault Ste Marie
– Greedy search expands the node that appears to be closest to the goal

4
Greedy Search Algorithm
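A minimal Python sketch of greedy best-first search, assuming hypothetical neighbors(n) and h(n) callables; the frontier is ordered by h alone, so the node that appears closest to the goal is expanded first:

import heapq
from itertools import count

def greedy_best_first_search(start, goal, neighbors, h):
    """Expand the frontier node with the lowest heuristic value h(n).
    Returns a path to the goal (not necessarily the cheapest) or None."""
    tie = count()  # tie-breaker so heapq never compares states directly
    frontier = [(h(start), next(tie), start, [start])]
    explored = set()
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ in neighbors(node):
            if succ not in explored:
                heapq.heappush(frontier, (h(succ), next(tie), succ, [*path, succ]))
    return None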

5
Greedy Search Example
The initial state:

Saint Louis  (h = 180)

6
Greedy Search Example
After expanding St Louis:

Saint Louis (180)
├─ Chicago      (h = 107)   ← lowest h, expanded next
├─ Kansas City  (h = 176)
├─ Little Rock  (h = 240)
└─ Nashville    (h = 221)

7
Greedy Search Example
After expanding Chicago:

Saint Louis (180)
├─ Chicago (107)
│   ├─ Duluth      (h = 110)   ← lowest h, expanded next
│   ├─ Omaha       (h = 150)
│   └─ Pittsburgh  (h = 152)
├─ Kansas City  (176)
├─ Little Rock  (240)
└─ Nashville    (221)

8
Greedy Search Example
After expanding Duluth:

Saint Louis (180)
├─ Chicago (107)
│   ├─ Duluth (110)
│   │   ├─ Helena           (h = 254)
│   │   ├─ Omaha            (h = 150)
│   │   ├─ Sault Ste Marie  (h = 0)   ← goal reached
│   │   └─ Winnipeg         (h = 156)
│   ├─ Omaha       (150)
│   └─ Pittsburgh  (152)
├─ Kansas City  (176)
├─ Little Rock  (240)
└─ Nashville    (221)

9
A* search
• Minimize the total estimated solution cost
• Combines
– g(n): the cost to reach node n
– h(n): the estimated cost to get from n to the goal
– f(n) = g(n) + h(n)

f(n) is the estimated cost of the cheapest solution through n

10
A* search
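A minimal Python sketch of A*, assuming a hypothetical neighbors(n) that yields (successor, step_cost) pairs and a heuristic h(n); with an admissible h, the first goal popped off the frontier is an optimal solution:

import heapq
from itertools import count

def a_star_search(start, goal, neighbors, h):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    tie = count()        # tie-breaker for the heap
    best_g = {start: 0}  # cheapest known cost to reach each node
    frontier = [(h(start), next(tie), 0, start, [start])]
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue     # stale entry: a cheaper path was found later
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), next(tie), g2, succ, [*path, succ]))
    return None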

11
A* search
The initial state:

St Louis   f = g + h = 0 + 180 = 180

12
A* search
After expanding St Louis:

St Louis (180) generates:
  Chicago      f = 104 + 107 = 211
  Kansas City  f = 68 + 176 = 244
  Little Rock  f = 60 + 240 = 300
  Nashville    f = 85 + 221 = 306

Lowest f on the frontier: Chicago (211), expanded next.

13
A* search
After expanding Chicago:

Chicago (211) generates:
  Duluth      f = (104 + 157) + 110 = 371
  Omaha       f = (104 + 142) + 150 = 396
  Pittsburgh  f = (104 + 81) + 152 = 337

Lowest f on the frontier: Kansas City (244), expanded next.

14
A* search
After expanding Kansas City:

Kansas City (244) generates:
  Denver         f = (68 + 135) + 270 = 473
  Oklahoma City  f = (68 + 61) + 237 = 366
  Dallas         f = 437

Lowest f on the frontier: Little Rock (300), expanded next.

15
A* search
After expanding Little Rock:

Little Rock (300) generates:
  Nashville      f = 375
  New Orleans    f = 482
  Oklahoma City  f = 369

Lowest f on the frontier: Nashville (306, the child of St Louis), expanded next.

16
A* search
After expanding Nashville:

Nashville (306) generates:
  Atlanta      f = 424
  Little Rock  f = 419
  Raleigh      f = 464

Lowest f on the frontier: Pittsburgh (337), expanded next.

17
A* search
After expanding Pittsburgh:

Pittsburgh (337) generates:
  New York    f = 449
  Toronto     f = 355
  Washington  f = 508

Lowest f on the frontier: Toronto (355), expanded next.

18
A* search
After expanding Toronto:

Toronto (355) generates:
  Montreal         f = 573
  Sault Ste Marie  f = 355

Lowest f on the frontier: Sault Ste Marie (355); since it is the goal (h = 0), A* returns the path St Louis → Chicago → Pittsburgh → Toronto → Sault Ste Marie with cost 355.

19
Admissible heuristics
• A heuristic is powerful only if it is of good quality
• A good heuristic must be admissible
– An admissible heuristic never overestimates the cost to reach the goal
→ it is optimistic
– A heuristic h is admissible if
for every node n,  h(n) ≤ h*(n)
where h*(n) is the true cost to reach the goal from n

– h_SLD (used as a heuristic in the map example) is admissible because a straight line is by definition the shortest distance between two points, so it can never overestimate the actual travel cost
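One way to sanity-check admissibility on a small explicit graph is to compute h* exactly and compare; a brute-force sketch, assuming symmetric step costs so that Dijkstra from the goal yields every node's true cost-to-goal:

import heapq

def is_admissible(h, nodes, neighbors, goal):
    """Check h(n) <= h*(n) for all n, where h* is computed exactly by
    running Dijkstra from the goal (assumes undirected/symmetric costs
    and orderable node labels, e.g. city-name strings)."""
    h_star = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > h_star.get(node, float("inf")):
            continue
        for succ, cost in neighbors(node):
            if d + cost < h_star.get(succ, float("inf")):
                h_star[succ] = d + cost
                heapq.heappush(frontier, (d + cost, succ))
    return all(h(n) <= h_star.get(n, float("inf")) for n in nodes)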

20
A* search criteria
• Complete
– Yes (on finite graphs with positive step costs)
• Time
– Exponential in the worst case
• Space
– Keeps every node in memory → the biggest problem
• Optimal
– Yes, provided the heuristic is admissible!

21
Heuristics
Start State    Goal State
7 2 4          1 2 3
5 _ 6          4 5 6
8 3 1          7 8 _

• The solution is 20 steps long
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (sum of the horizontal & vertical distances)
• h1(start) = 6
• Tiles 1 to 8 in the start state give h2(start) = 4 + 0 + 3 + 3 + 1 + 0 + 2 + 1 = 14,
which does not overestimate the true solution cost
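Both heuristics, sketched in Python for the exact start and goal states above; the printed values match the slide:

START = ((7, 2, 4),
         (5, 0, 6),
         (8, 3, 1))

GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))

def positions(state):
    """Map each tile value to its (row, col) position; 0 is the blank."""
    return {v: (r, c) for r, row in enumerate(state) for c, v in enumerate(row)}

def h1(state, goal=GOAL):
    """Number of misplaced tiles (blank excluded)."""
    s, g = positions(state), positions(goal)
    return sum(1 for tile in range(1, 9) if s[tile] != g[tile])

def h2(state, goal=GOAL):
    """Total Manhattan distance of tiles 1-8 from their goal squares."""
    s, g = positions(state), positions(goal)
    return sum(abs(s[t][0] - g[t][0]) + abs(s[t][1] - g[t][1]) for t in range(1, 9))

print(h1(START))  # 6
print(h2(START))  # 14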
22
Recap
• Uninformed search: use no domain knowledge
– BFS, DFS, DLS, IDS, UCS
• Informed search: use a heuristic function that estimates how close
a state is to the goal
– Greedy search, A*
• Which cost function?
– UCS searches layers of increasing path cost g(n)
– Greedy best-first search searches layers of increasing heuristic value h(n)
– A* searches layers of increasing f(n) = g(n) + h(n)

23
Local Search

24
Local Search
• The search algorithms seen so far are designed to explore search spaces systematically
– Problems: observable, deterministic, known environments where the solution is a sequence of actions
• Real-world problems are more complex
– When a goal is found, the path to that goal constitutes a solution to the problem
▪ But, depending on the application, the path may or may not matter
• If the path does not matter OR systematic search is not possible
– we can use iterative improvement algorithms → Local search

25
Local Search
• Also useful in pure optimization problems, where the goal is to find the best state according to an objective function, e.g.
– integrated circuit design, telecommunications network optimization, etc.
– N-puzzle or 8-queens: what matters is the final configuration, not the intermediate steps used to reach it
• Idea: keep a single “current” state, and try to improve it
– Move only to neighbors of that state
• Advantages:
1. No need to maintain a search tree
2. Very little memory is used
3. Can often find good-enough solutions in continuous or large state spaces
26
Local Search Algorithms
• Hill climbing (steepest ascent/descent)
• Simulated Annealing: inspired by statistical physics
• Local beam search
• Genetic algorithms: inspired by evolutionary biology

27
State Space Landscape

28
Hill Climbing
• Also called greedy local search
• Looks only at immediate neighbors, not beyond
• Search moves uphill:
– in the direction of increasing elevation/value, to find the top of the mountain
• Terminates when it reaches a peak
• May terminate at a local maximum rather than the global maximum, or get stuck on a plateau where no progress is possible

29
Hill Climbing – Algorithm
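A minimal Python sketch of steepest-ascent hill climbing, assuming hypothetical neighbors(state) and value(state) callables:

def hill_climbing(initial, neighbors, value):
    """Steepest ascent: move to the best neighbor until no neighbor
    improves on the current state (a peak, not necessarily global)."""
    current = initial
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current  # local maximum or plateau edge
        current = best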

30
Hill Climbing – Variants
• Steepest-ascent
– choose the highest-valued neighbor
• Stochastic
– choose randomly from among the higher-valued neighbors
• First-choice
– choose the first higher-valued neighbor found
• Random-restart
– run hill climbing multiple times from random initial states (see the sketch after this list)
• Local beam search
– keep k states and choose the k highest-valued successors
• Stochastic beam search
– the k successors are chosen at random, weighted by their value
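The random-restart variant, as a thin wrapper over the previous sketch; random_state() is an assumed generator of random initial states:

def random_restart_hill_climbing(random_state, neighbors, value, restarts=25):
    """Run hill climbing from several random starts and keep the best peak;
    the probability of finding the global maximum grows with each restart."""
    best = hill_climbing(random_state(), neighbors, value)
    for _ in range(restarts - 1):
        candidate = hill_climbing(random_state(), neighbors, value)
        if value(candidate) > value(best):
            best = candidate
    return best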

31
Genetic algorithms
• Genetic algorithms (GA) are a variant of stochastic beam search
• Successor states are generated by combining two parent states rather than by modifying a single state
• The process is inspired by natural selection
• Starts with k randomly generated states, called the population
– Each state is an individual
• An individual is usually represented as a string over a finite alphabet, e.g. 0’s and 1’s, or digits
• The objective function is called the fitness function
– better states have higher fitness values

32
Genetic algorithms
• In the 8-queens problem, an individual state can be represented by an array giving the row of the queen in each of the 8 columns, e.g.
[3, 2, 7, 4, 8, 5, 5, 2]
• A possible fitness function is the number of non-attacking pairs of queens
• Fitness of a solution: 28 (all C(8,2) = 28 pairs are non-attacking)
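The fitness function above, sketched in Python for the column-array representation (individual[c] is the row of the queen in column c):

from itertools import combinations

def fitness(individual):
    """Number of non-attacking queen pairs; a solution scores C(8,2) = 28."""
    attacks = sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(individual), 2)
                  if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))
    return 28 - attacks

print(fitness([3, 2, 7, 4, 8, 5, 5, 2]))  # 23 (5 attacking pairs)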

33
Genetic algorithms

The GA cycle (flowchart):

Problem → Encoding → i-th population (chromosomes)
1. Evaluation: compute the fitness of each chromosome
2. End? If yes → take the best chromosome → Decoding → best solution
3. If no → Selection → Crossover → Mutation → next generation (i = i + 1), and back to step 1
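A compact sketch of this cycle for the 8-queens encoding from the previous slide; individuals are lists of 8 row numbers, the fitness function is the one defined above, and every parameter value here is illustrative:

import random

def genetic_algorithm(population, fitness, generations=1000, p_mutate=0.1):
    """GA cycle: evaluate, select parents in proportion to fitness,
    crossover at a random cut point, mutate, repeat."""
    for _ in range(generations):
        if any(fitness(ind) == 28 for ind in population):  # 8-queens solved
            break
        weights = [fitness(ind) + 1 for ind in population]  # +1 keeps weights positive
        next_gen = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=weights, k=2)  # selection
            cut = random.randrange(1, len(mom))                          # crossover point
            child = mom[:cut] + dad[cut:]
            if random.random() < p_mutate:                               # mutation
                child[random.randrange(len(child))] = random.randint(1, 8)
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# Usage: start from k = 20 random individuals.
pop = [[random.randint(1, 8) for _ in range(8)] for _ in range(20)]
best = genetic_algorithm(pop, fitness)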

34
Genetic algorithms

35
Genetic algorithms

36
Simulated Annealing
• Early on, higher "temperature":
– more likely to accept neighbors that are worse than current state
• Later on, lower "temperature":
– less likely to accept neighbors that are worse than current state
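A minimal sketch, assuming hypothetical random_neighbor(state) and value(state) callables and a simple geometric cooling schedule:

import math
import random

def simulated_annealing(initial, random_neighbor, value,
                        t0=1.0, cooling=0.995, steps=10000):
    """Always accept uphill moves; accept a downhill move with probability
    exp(delta / T), which shrinks as the temperature T cools."""
    current, t = initial, t0
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling  # lower temperature: worse moves accepted less often
    return current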

37
Simulated Annealing

38
Credit
• Artificial Intelligence: A Modern Approach, 3rd Edition, by Stuart
J. Russell and Peter Norvig
– Chapters 3 & 4

39
