
Chapter 4 (Feb 2, 2007)

Belief States (Chap 3)

Graph Search (Chap 3)

**Graph Search: Analysis**

- Time and space complexity: proportional to the size of the state space (≤ O(b^d))
- Optimality: may not be optimal if the algorithm throws away a cheaper path to a node that is already closed
- Uniform-cost and breadth-first graph search are optimal when the step cost is constant

**Today's Focus: Chapter 4, Informed Search** (starting at section 4.1)

- Best-first search
  - Greedy best-first search
  - Recursive best-first search
  - A* search
- Heuristics
- Local search algorithms
  - Hill-climbing search
  - Simulated annealing search
  - Local beam search
  - Genetic algorithms

**Review: Tree Search**

A search strategy is defined by picking the order of node expansion.

**Romania with Step Costs in km**

Additional information that can inform the expansion strategy.

**Best-First Search**

Idea: use an evaluation function f(n) for each node
- f(n) is an estimate of "desirability"
- expand the most desirable unexpanded node

Implementation: order the nodes in the fringe in decreasing order of desirability (some form of priority queue)

Special cases:
- greedy best-first search
- A* search
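The ordered-fringe idea can be sketched in a few lines of Python. A minimal sketch, assuming hashable states; `successors`, `is_goal`, and `f` are illustrative parameter names, and lower f means more desirable:

```python
import heapq

def best_first_search(start, successors, is_goal, f):
    """Generic best-first search: always expand the most desirable
    unexpanded node, as ranked by the evaluation function f."""
    frontier = [(f(start), start, [start])]      # priority queue keyed on f
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)  # most desirable node first
        if is_goal(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (f(nxt), nxt, path + [nxt]))
    return None                                   # no goal reachable
```

`heapq` pops the smallest key first, which matches the slide's "most desirable first" when f estimates remaining cost. Plugging in different f functions yields the special cases: f = h gives greedy best-first search, f = g + h gives A*.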

**Greedy Best-First Search**

Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal. Example: hSLD(n) = straight-line distance from n to Bucharest. Greedy best-first search expands the node that appears to be closest to the goal.
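Concretely, greedy best-first on the Romania map picks, at each step, the unvisited neighbor with the smallest straight-line distance to Bucharest. A sketch; the road links and hSLD values below are quoted from memory of the textbook map and may differ slightly:

```python
# Fragment of the Romania map: road links (km) and straight-line
# distances to Bucharest (values quoted from memory; treat as illustrative).
roads = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Bucharest': 0}

def greedy_best_first(start, goal):
    """Repeatedly move to the unvisited neighbor with smallest h."""
    path, current, visited = [start], start, {start}
    while current != goal:
        candidates = [n for n in roads[current] if n not in visited]
        if not candidates:
            return None                           # dead end; no backtracking here
        current = min(candidates, key=h_sld.get)  # f(n) = h(n): path cost ignored
        visited.add(current)
        path.append(current)
    return path
```

On these numbers the greedy route from Arad is Arad, Sibiu, Fagaras, Bucharest (140 + 99 + 211 = 450 km), even though a cheaper route through Rimnicu Vilcea and Pitesti exists on the full map: greedy best-first is not optimal.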

**Greedy Best-First Search** (worked example on the Romania map; the step-by-step expansion figures are omitted)

**Greedy Best-First Search: Analysis**

- Complete? No; can get stuck in loops, e.g. Iasi → Neamt → Iasi → Neamt when trying to get from Iasi to Fagaras
- Time? O(b^m), but a good heuristic can give dramatic improvement
- Space? O(b^m); keeps all nodes in memory
- Optimal? No

**A\* Search**

Idea: avoid expanding paths that are already expensive.

Evaluation function f(n) = g(n) + h(n):
- g(n) = cost so far to reach n
- h(n) = estimated cost from n to the goal
- f(n) = estimated total cost of the path through n to the goal
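Plugging f(n) = g(n) + h(n) into the best-first scheme gives A*. A minimal sketch, assuming hashable states and a `neighbors(n)` callback yielding `(successor, step_cost)` pairs (the names, and the Romania data in the usage example, are illustrative):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: expand nodes in order of f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}                           # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                              # stale queue entry; skip
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

On a fragment of the Romania map (road lengths and hSLD values quoted from memory), this finds the 418 km route Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest rather than the 450 km route through Fagaras that greedy search prefers.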

**A\* Search Example** (worked example on the Romania map; the step-by-step expansion figures are omitted)

**A\* Exercise**

How will A* get from Iasi to Fagaras?

**Admissible Heuristics**

A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e. it is optimistic. Example: hSLD(n) (never overestimates the actual road distance).

Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

**Admissible Heuristics for the 8-Puzzle**

- h1(n) = number of misplaced tiles; h1(S) = 8
- h2(n) = total Manhattan distance (sum of the walking distances from each square to its desired location); h2(S) = 3+1+2+2+2+3+3+2 = 18
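Both 8-puzzle heuristics are short to compute. A sketch, representing a state as a 9-tuple read row by row with 0 for the blank; the particular goal layout chosen here is an assumption:

```python
# Goal layout is an assumption: tiles 1..8 in order, blank last.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)                      # where this tile belongs
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both are admissible: every misplaced tile needs at least one move (h1), and each move shifts one tile one square, so a tile needs at least its Manhattan distance in moves (h2).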

**Optimality of A\* (tree search)**

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

- f(G2) = g(G2) + h(G2) = g(G2) > C*, since G2 is a goal on a non-optimal path (C* is the optimal cost)
- f(n) = g(n) + h(n) ≤ C*, since h is admissible
- Hence f(n) ≤ C* < f(G2), so G2 will never be expanded

A* will not expand goals on sub-optimal paths.

**Consistent Heuristics**

A heuristic is consistent if for every node n and every successor n' of n generated by any action a,

h(n) ≤ c(n, a, n') + h(n')

If h is consistent, we have

f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)

i.e. f(n) is non-decreasing along any path.

Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
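Because consistency is a per-edge triangle inequality, it can be verified mechanically over an explicit edge list. A small illustrative helper; the `(n, n2, cost)` triple format is an assumption:

```python
def is_consistent(h, edges):
    """Check h(n) <= c(n, a, n') + h(n') on every listed edge.

    h     : dict mapping node -> heuristic value
    edges : iterable of (n, n2, cost) triples (illustrative format)
    """
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)
```

Note that this only certifies consistency over the edges supplied; a heuristic must satisfy the inequality on every edge of the state space to be consistent.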

**Optimality of A\* (graph search)**

A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes. Contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

**Properties of A\***

- Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
- Time? Exponential
- Space? Exponential; keeps all nodes in memory
- Optimal? Yes

**Properties of A\* (continued)**

A* is also optimally efficient: no other optimal algorithm is guaranteed to expand fewer nodes, because an algorithm that expanded fewer might miss the optimal solution.

- Good news: complete, optimal, and optimally efficient
- Bad news: time complexity is bad, and space complexity is worse

**Dominance**

If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and h2 is better for search. Typical search costs (average number of nodes expanded) for the 8-puzzle:

- d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
- d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

**Comparing 8-Puzzle Heuristics**

h2 dominates h1, so h2 is the better heuristic.

**Relaxed Problems**

A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. Reason: the optimal solution to the original problem is, by definition, also a solution to the relaxed problem, and so must be at least as expensive as the optimal solution to the relaxed problem.

**Relaxed Problems**

The hSLD(n) function for the Romania travel problem is derived from a relaxed problem:
- original problem: drive, following roads
- relaxed problem: fly in straight lines

**Relaxed Problems**

- 8-puzzle relaxed problem 1: any tile can be moved anywhere. This derives h1(n) = number of misplaced tiles.
- 8-puzzle relaxed problem 2: any tile can move to any adjacent square. This derives h2(n) = total Manhattan distance.

**Exercise**

Design a heuristic for the maze problem. (The accompanying ASCII maze figure, with walls X, open squares ., start S, and goal G, is omitted.)
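One reasonable answer follows the relaxed-problem recipe: drop the walls. In the relaxed maze each move still changes one coordinate by 1, so Manhattan distance to the goal is admissible. A sketch assuming (row, column) coordinates:

```python
def maze_heuristic(pos, goal):
    """Manhattan distance from pos to goal, ignoring walls.

    Admissible because removing the wall restriction is a relaxed
    problem, and each maze move changes one coordinate by exactly 1.
    """
    (r, c), (gr, gc) = pos, goal
    return abs(r - gr) + abs(c - gc)
```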

**Overcoming Space Limitations**

A* may run into space limitations. Memory-bounded heuristic search attempts to avoid this space complexity:
- IDA* = iterative deepening A*
- RBFS = recursive best-first search
- MA*, SMA* = (simplified) memory-bounded A*

**IDA\***

A* and best-first search keep all nodes in memory. IDA* (iterative deepening A*) is similar to standard iterative deepening, except that the cutoff used is the f-cost (g + h) rather than the depth. The cutoff value is set to the smallest f-cost of any node that exceeded the cutoff in the previous iteration.
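The IDA* loop can be sketched as a depth-first search restarted with a growing f-bound. A minimal sketch, assuming a `neighbors(n)` callback yielding `(successor, step_cost)` pairs (the names are illustrative):

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: iterative deepening on f = g + h instead of depth."""
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                 # report the f that overflowed
        if state == goal:
            return f, path
        smallest = float('inf')
        for nxt, cost in neighbors(state):
            if nxt in path:                # avoid cycles on the current path
                continue
            t, found = dfs(nxt, g + cost, bound, path + [nxt])
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = h(start)
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                    # no solution exists
        bound = t                          # smallest f that exceeded the cutoff
```

Only the current path and the bound are stored, so memory use is linear in the solution depth, at the price of re-expanding nodes on every iteration.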

**RBFS**

Recursive best-first search (RBFS) tries to mimic best-first search in linear space. It keeps track of the f-value of the best alternative path from any ancestor of the current node. If the current node exceeds that limit, the search unwinds back to the alternative path. It will often have to re-expand discarded subtrees.

**RBFS** (worked example; the figures, annotated with the f-value of the best alternative path, are omitted)

**IDA\* and RBFS: Too Little Memory**

IDA* remembers only the current f-cost limit, and RBFS cannot use more memory even when it is available. As a result, neither can remember repeated states, and both may explore the same state many times.

**Memory-Bounded A\***

MA* and SMA* (simplified MA*) work like A* until memory is full. SMA*: when memory is full, drop the worst leaf node and push its f-value back to its parent. Subtrees are regenerated only when all other paths have been shown to be worse than the forgotten path. Thrashing behavior can result if memory is small compared to the size of the candidate subpaths.
