
6.825 Techniques in Artificial Intelligence

Problem Solving and Search

Problem Solving
• Agent knows world dynamics [learning]
• World state is finite, small enough to enumerate [logic]
• World is deterministic [uncertainty]
• Utility for a sequence of states is a sum over the path
• Agent knows current state [logic, uncertainty]

Few real problems are like this, but this may be a useful abstraction of a real problem. These assumptions are relaxed later in the course.

Example: Route Planning in a Map
A map is a graph where nodes are cities and links are roads. This is an abstraction of the real world.
• Map gives world dynamics: starting at city X on the map and taking some road gets you to city Y.
• World (set of cities) is finite and enumerable.
• World is deterministic: taking a given road from a given city leads to only one possible destination.
• Utility for a sequence of states is usually either total distance traveled on the path or total time for the path.
• We assume the current state is known.

Formal Definition
Problem:
• Set of states: S
• Initial state ∈ S
• Operators (actions): S → S
• Goal test: S → {t, f}
• Path cost: (S, O)* → ℝ, the sum of step costs: Σ c(s, o)
Criteria for algorithms:
• Computation time/space
• Solution quality

Route Finding
[Figure: a simplified map of Romania with cities labeled A, B, C, D, F, L, M, O, P, R, S, T, Z and road distances, e.g. A–Z 75, A–T 118, A–S 140, Z–O 71, O–S 151, S–R 90, S–F 99, T–L 111, L–M 70, F–B 211.]

Romania Map
[Figure: the same Romania road map, shown full-slide.]

Search
• Put start state in the agenda
• Loop:
  – Get a state from the agenda
  – If goal, then return
  – Expand state (put children in agenda)
Which state is chosen from the agenda defines the type of search and may have a huge impact on effectiveness. A minimal sketch of this loop follows below.

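Here is a minimal Python sketch of this generic agenda loop (illustrative names, not from the slides; the duplicate-expansion check anticipates Method 2 of the "Avoiding Loops" slide below):

```python
def tree_search(start, is_goal, successors, choose):
    """Generic agenda search. `choose` removes and returns one path from
    the agenda; that single choice determines the type of search."""
    agenda = [[start]]                  # each agenda entry is a path of states
    expanded = set()                    # Method 2 from "Avoiding Loops" below
    while agenda:
        path = choose(agenda)           # e.g. stack pop for DFS, FIFO pop for BFS
        state = path[-1]
        if is_goal(state):
            return path
        if state not in expanded:       # never expand the same state twice
            expanded.add(state)
            for child in successors(state):
                agenda.append(path + [child])
    return None                         # agenda exhausted: no path to a goal
```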

Depth-First Search
• Treat agenda as a stack (get most recently added node)
• Expansion: put children at top of stack
• Get new nodes from top of stack
[Figure: the map, plus the first step of the agenda: A, then Z_A  S_A  T_A (each entry is a node subscripted by the path taken to reach it).]

Avoiding Loops
• Method 1: don't add a node to the agenda if it's already in the agenda.
  – Causes problems when there are multiple paths to a node and we want to be sure to get the shortest.
• Method 2: don't expand a node (or add it to the agenda) if it has already been expanded.
  – We'll adopt this one for all of our searches.

Depth-First Search
• Treat agenda as a stack (get most recently added node)
• Expansion: put children at top of stack
• Get new nodes from top of stack

Agenda trace (each entry is a node subscripted by its path):
  A
  Z_A  S_A  T_A
  O_AZ  S_A  T_A
  S_AZO  S_A  T_A
  F_AZOS  R_AZOS  S_A  T_A
  B_AZOSF  R_AZOS  S_A  T_A
  Result = B_AZOSF

Properties of DFS
[Figure: the map, highlighting the sub-optimal answer found by DFS.]
Let
• b = branching factor
• m = maximum depth
• d = goal depth
Then DFS takes O(b^m) time and O(mb) space.

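Using the `tree_search` sketch from above as depth-first search on an assumed directed fragment of the map (children pushed in reverse so the leftmost child ends up on top of the stack, as in the trace):

```python
graph = {                      # assumed fragment of the Romania map figure
    'A': ['Z', 'S', 'T'], 'Z': ['O'], 'O': ['S'],
    'S': ['F', 'R'], 'F': ['B'], 'R': ['P'], 'P': ['B'],
    'T': ['L'], 'L': ['M'], 'M': [], 'B': [],
}

# DFS: the agenda is a stack, so `choose` pops the most recently added path.
path = tree_search('A', lambda s: s == 'B',
                   lambda s: reversed(graph[s]),
                   lambda agenda: agenda.pop())
print(path)   # ['A', 'Z', 'O', 'S', 'F', 'B'] — the slide's B_AZOSF
```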

Breadth-First Search
• Treat agenda as a queue (get least recently added node)
• Expansion: put children at end of queue
• Get new nodes from the front of queue

Agenda trace:
  A
  Z_A  S_A  T_A
  S_A  T_A  O_AZ
  T_A  O_AZ  O_AS  F_AS  R_AS
  O_AZ  O_AS  F_AS  R_AS  L_AT
  O_AS  F_AS  R_AS  L_AT
  R_AS  L_AT  B_ASF
  Result = B_ASF

Properties of BFS
Guaranteed to return the shortest path (measured in number of arcs) from start to goal.
[Figure: the map, highlighting the BFS answer.]
Let
• b = branching factor
• m = maximum depth
• d = goal depth
Then BFS takes O(b^d) time and O(b^d) space.
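The same skeleton becomes breadth-first search when the agenda is treated as a FIFO queue (a `collections.deque` would be more efficient than `pop(0)`; kept simple here):

```python
# BFS: `choose` pops the least recently added path from the agenda.
path = tree_search('A', lambda s: s == 'B',
                   lambda s: graph[s],
                   lambda agenda: agenda.pop(0))
print(path)   # ['A', 'S', 'F', 'B'] — the slide's B_ASF
```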

Iterative Deepening
• DFS is efficient in space, but has no path-length guarantee.
• BFS finds a min-step path but requires exponential space.
• Iterative deepening: perform a sequence of DFS searches with increasing depth cutoff until the goal is found. (A sketch appears after the next slide.)

DFS cutoff depth | Space  | Time
1                | O(b)   | O(b)
2                | O(2b)  | O(b^2)
3                | O(3b)  | O(b^3)
4                | O(4b)  | O(b^4)
…                | …      | …
d                | O(db)  | O(b^d)
Total            | Max = O(db) | Sum = O(b^(d+1))

Uniform Cost Search
• Breadth-first and iterative deepening find the path with fewest steps (hops).
• If steps have unequal cost, this is not interesting.
• How can we find the shortest path (measured by sum of distances along the path)?
• Uniform Cost Search:
  – Nodes in the agenda keep track of total path length from start to that node.
  – Agenda kept in a priority queue ordered by path length.
  – Get the shortest path in the queue.
• Explores paths in contours of total path length; finds an optimal path.
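Returning to iterative deepening: a sketch under the same assumptions as the earlier examples (helper names are illustrative):

```python
def depth_limited_dfs(path, is_goal, successors, cutoff):
    """DFS from the end of `path` that refuses to look deeper than `cutoff` arcs."""
    state = path[-1]
    if is_goal(state):
        return path
    if cutoff == 0:
        return None
    for child in successors(state):
        if child not in path:                    # avoid cycles along this path
            found = depth_limited_dfs(path + [child], is_goal,
                                      successors, cutoff - 1)
            if found is not None:
                return found
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    # A sequence of depth-limited DFS runs: BFS-like guarantee, DFS-like space.
    for cutoff in range(max_depth + 1):
        found = depth_limited_dfs([start], is_goal, successors, cutoff)
        if found is not None:
            return found
    return None

print(iterative_deepening('A', lambda s: s == 'B', lambda s: graph[s]))
# ['A', 'S', 'F', 'B'] — a fewest-arcs path, like BFS
```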

Uniform Cost Search
Agenda trace (each entry is subscripted by the total path length from A):
  A
  Z_75  T_118  S_140
  T_118  S_140  O_146
  S_140  O_146  L_229
  O_146  L_229  R_230  F_239  O_291
  L_229  R_230  F_239  O_291
  R_230  F_239  O_291  M_299
[Figure: the Romania map with road distances, as before.]

Uniform Cost Search (continued)
• Time cost is O(b^m) [not O(b^d) as in Russell & Norvig].
• Space cost could also be O(b^m), but is probably more like O(b^d) in most cases.
• We examine a node to see if it is the goal only when we take it off the agenda, not when we put it in.
• The algorithm is optimal only when the costs are non-negative.
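A sketch of uniform-cost search using Python's heapq, with road distances assumed from the map figure:

```python
import heapq

weighted = {                  # assumed weighted fragment of the map
    'A': [('Z', 75), ('T', 118), ('S', 140)],
    'Z': [('O', 71)],  'O': [('S', 151)],
    'S': [('F', 99), ('R', 90)], 'T': [('L', 111)],
    'L': [('M', 70)],  'F': [('B', 211)],
    'R': [('P', 97)],  'P': [('B', 101)],
    'M': [], 'B': [],
}

def uniform_cost_search(start, is_goal, successors):
    agenda = [(0, [start])]                  # priority queue ordered by path cost
    expanded = set()
    while agenda:
        cost, path = heapq.heappop(agenda)   # cheapest path in the agenda
        state = path[-1]
        if is_goal(state):                   # goal test on removal, not insertion
            return cost, path
        if state not in expanded:
            expanded.add(state)
            for child, step in successors(state):
                heapq.heappush(agenda, (cost + step, path + [child]))
    return None

print(uniform_cost_search('A', lambda s: s == 'B', lambda s: weighted[s]))
# (428, ['A', 'S', 'R', 'P', 'B']) under the assumed distances
```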

Uninformed vs. Informed Search
• Depth-first, breadth-first and uniform-cost searches are uninformed.
• In informed search there is an estimate available of the cost (distance) from each state (city) to the goal.
• This estimate (heuristic) can help you head in the right direction.
• The heuristic is embodied in a function h(n), an estimate of the remaining cost from search node n to the least-cost goal.
• The graph being searched is a graph of states; the search algorithm defines a tree of search nodes. Two paths to the same state generate two different search nodes.
• The heuristic can be defined on the underlying state; the path to a state does not affect the estimate of distance to the goal.

Using Heuristic Information
[Figure: from the start there are two roads, length 1 to X and length 200 to Y; from X it is 300 to the goal, from Y it is 1.]
• Should we go to X or Y? Uniform cost says go to X.
• If h(X) ≫ h(Y), this should affect our choice.
• If g(n) is the path length of node n, we can use g(n) + h(n) to prioritize the agenda.
• This method is called A* [pronounced "A star"].
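Switching the uniform-cost sketch to order the agenda by g(n) + h(n) gives a minimal A* (again a sketch; `h` is supplied by the caller):

```python
import heapq

def a_star(start, is_goal, successors, h):
    # Identical to uniform-cost search except that the priority is g + h.
    agenda = [(h(start), 0, [start])]
    expanded = set()
    while agenda:
        _, g, path = heapq.heappop(agenda)
        state = path[-1]
        if is_goal(state):
            return g, path
        if state not in expanded:
            expanded.add(state)
            for child, step in successors(state):
                heapq.heappush(agenda, (g + step + h(child), g + step,
                                        path + [child]))
    return None

# With h = 0 everywhere, A* reduces to uniform-cost search (see the
# "Heuristics" slide below):
print(a_star('A', lambda s: s == 'B', lambda s: weighted[s], lambda s: 0))
```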


Admissibility
• What must be true about h for A* to find the optimal path?
• A* finds an optimal path if h is admissible; h is admissible when it never overestimates.
• In this example, h is not admissible:
[Figure: from the start, an arc of length 2 to X (h = 100) and an arc of length 73 to Y (h = 1); each is one more arc of length 1 from a goal (h = 0).]
  g(X) + h(X) = 102
  g(Y) + h(Y) = 74
  A* therefore expands Y first and returns the path through Y (cost 74), while the path through X costs only 3: the optimal path is not found!
• In route-finding problems, straight-line distance to the goal is an admissible heuristic.

Why use an estimate of goal distance?
[Figure: two plots of points in the Euclidean plane between a start and a goal.]
• Order in which uniform cost looks at nodes: A and B are the same distance from the start, so they will be looked at before any longer paths. No "bias" towards the goal.
• Order of examination using distance from start + estimate of distance to goal: note the "bias" toward the goal; points away from the goal look worse. (Assume states are points in the Euclidean plane.)


Heuristics
• If we set h = 0, then A* is uniform-cost search; h = 0 is an admissible heuristic (when all costs are non-negative).
• It is very difficult to find heuristics that guarantee sub-exponential worst-case cost.
• Heuristic functions can be solutions to a "relaxed" version of the original problem, e.g. straight-line distance is the solution to route finding when we relax the constraint to follow the roads.

Search Problems
• In problem-solving problems, we want a path as the answer, that is, a sequence of actions to get from one state to another.
• In search problems, all we want is the best state (one that satisfies some constraints).
• Set of states: S
• Initial state
• Operators: S → S
• Cost (Utility): S → ℝ


Example: Traveling Salesman
• In the traveling salesman problem (TSP) we want a least-cost path that visits all the cities in a graph once.
• Note that this is not a route-finding problem: since we must visit every city, only the order of the visits changes.
• A state in the search for a TSP solution is a complete tour of the cities.
• An operator is not an action in the world that moves from city to city; it is an action in the information space that moves from one potential solution (tour) to another.
• Possible TSP operator: swap two cities in a tour (see the sketch after the next slide).

Example: Square Root
• Given y = x² ∈ ℝ, find x ∈ ℝ.
• The utility of x is a measure of error, e.g. U = ½ (x² − y)².
• Operator: x → x − r ∇_x U (for a small step size r)
  – take a step down the gradient (with respect to x) of the error
  – for example, x ← x − r (x² − y) · 2x
• Assume y = 7, start with the guess x = 3, and let r = 0.01.
• The next guesses are: 2.880, 2.805, 2.756, …, 2.646.
• We can prove that there is a unique x whose error value is minimal and that applying this operator repeatedly (for some value of r) will find this minimum x (to some specified accuracy).
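A direct transcription of this operator in Python reproduces the sequence of guesses above (2.880, 2.805, 2.756, … up to rounding) and converges to √7 ≈ 2.6458:

```python
y, x, r = 7.0, 3.0, 0.01
for _ in range(50):
    x -= r * (x**2 - y) * 2 * x   # x <- x - r * dU/dx, U = 0.5 * (x**2 - y)**2
print(x)                          # ~2.6457..., i.e. the square root of 7
```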

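And for the traveling-salesman slide, the swap operator on the information space of tours might look like this (a hypothetical sketch; a tour is a list of cities):

```python
import random

def swap_two_cities(tour):
    """TSP operator: move from one complete tour to a neighboring tour
    by exchanging the positions of two randomly chosen cities."""
    i, j = random.sample(range(len(tour)), 2)
    neighbor = list(tour)
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

print(swap_two_cities(['A', 'B', 'C', 'D']))  # e.g. ['A', 'D', 'C', 'B']
```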

Multiple Minima
• Most problems of interest do not have unique global minima that can be found by gradient descent from an arbitrary starting point.
• Typically, local search methods (such as gradient descent) will find local minima and get stuck there.
• How can we escape from local minima?
  – Take some random steps!
  – Re-start from randomly chosen starting points.
[Figure: an error curve with a local minimum and a deeper global minimum.]

Simulated Annealing
• T = initial temperature
• x = initial guess
• v = Energy(x)
• Repeat while T > final temperature:
  – Repeat n times:
      x′ ← Move(x)
      v′ = Energy(x′)
      If v′ < v then accept the new x [x ← x′]
      Else accept the new x with probability exp(−(v′ − v) / kT)
  – T = 0.95 T /* for example */
• At high temperature, most moves are accepted (and the search can move between "basins").
• At low temperature, only moves that improve the energy are accepted.
A runnable sketch follows below.

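A compact Python rendering of this pseudocode (a sketch: the constant k is folded into T, and the toy one-dimensional energy and Gaussian `Move` are illustrative assumptions):

```python
import math, random

def simulated_annealing(x, energy, move,
                        T=1.0, T_final=1e-3, n=100, alpha=0.95):
    v = energy(x)
    while T > T_final:
        for _ in range(n):
            x_new = move(x)
            v_new = energy(x_new)
            # Accept improvements outright; accept uphill moves with
            # probability exp(-(v_new - v) / T), which shrinks as T cools.
            if v_new < v or random.random() < math.exp(-(v_new - v) / T):
                x, v = x_new, v_new
        T *= alpha        # cooling schedule, e.g. 5% per round as on the slide
    return x, v

# Toy usage: a 1-D energy with many local minima around the global one.
energy = lambda x: x**2 + 3 * math.sin(5 * x)
print(simulated_annealing(2.0, energy, lambda x: x + random.gauss(0, 0.5)))
```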

Search Demonstration
• There's a cool applet at UBC for playing around with search algorithms: www.cs.ubc.ca/labs/lci/CIspace

Recitation Problems
• Problems 3.17 a, b, f (Russell & Norvig).
• What would be a good heuristic function in that domain (from 3.17)?
• What would be a good heuristic function for the Towers of Hanoi problem? (Look it up on the web if you don't know about it.)
• Other practice problems that we might talk about in recitation: 4.4, 4.11 a, b.

