
A lot of AI is about SEARCH. Search can be applied to many problems:

- solution to a puzzle
- shortest path on a map
- proof of a theorem
- sequence of moves to win a game
- sequence of transformations to solve a symbolic integration problem
- sequence of moves to achieve a desired state (monkey & bananas, SHRDLU)

The "state space" is the set of states of the problem we can get to by applying operators to a state of the problem to get a new state. The state space can be HUGE! (Combinatorial explosion):

- Theorem proving: infinite!
- Chess: 10^120 (in an average-length game)
- Checkers: 10^40
- Eight puzzle: 181,440

The right representation helps: mutilated checkerboard. A control strategy helps choose which operators to apply:

- Small # of operators: general, but bushy tree
- Large #: perhaps overly specific, but less bushy trees


AI Lecture on search

**State space search**

The search problem: How to control the search?

Given:
- an initial state
- a set of operators (actions the agent can take)
- a set of goal states (optional)
- a path cost function

Find: a sequence of operators that leads from the initial state to a goal state.

The search space is the implicit tree (or graph) defined by the initial state and the operators. The search tree (or graph) is the explicit tree generated during the search by the control strategy. NOTE: Each node of this graph is a complete description of the state of the problem.
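These four ingredients can be written down directly. A minimal Python sketch, assuming a toy route-finding map — the cities, road costs, and dictionary layout below are illustrative, not from the lecture:

```python
# Illustrative map: each city maps to {neighbor: step cost}.
ROADS = {
    "San Diego": {"Los Angeles": 120},
    "Los Angeles": {"San Diego": 120, "San Francisco": 380},
    "San Francisco": {},
}

# A search problem: initial state, operators, goal test, path cost function.
problem = {
    "initial": "San Diego",
    "operators": lambda state: list(ROADS[state]),      # applicable go(X, Y) moves
    "is_goal": lambda state: state == "San Francisco",  # goal test
    "path_cost": lambda path: sum(                      # sum of step costs
        ROADS[a][b] for a, b in zip(path, path[1:])
    ),
}
```

Any of the search strategies that follow only needs these four pieces; nothing about breakfast or the weather appears in the state.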


**State Space Search**

Measuring effectiveness:
1. Does the method find a solution at all?
2. Is it a good solution (low path cost)?
3. What is the search cost?

The total cost is the sum of the search cost and the path cost (which may not be commensurate!).

**State Space Search**

State space search is an example of a weak method. A weak method is:
1. a problem-independent framework for solving problems
2. it may have "stubs" for incorporating domain knowledge

However: weak methods usually cannot overcome the combinatorial explosion.

**Representation**

It is important when representing a problem to abstract away from the details, e.g. for a search problem such as: how to get to San Francisco by the shortest path, given a map. We don't need to represent all the aspects of the trip! E.g., the states may be In(San Diego), In(Los Angeles), the actions go(X, Y), and the goal In(San Francisco). (It doesn't matter what we had for breakfast, what the weather is like, what kind of car we're driving, how many passengers we have... most of the time!)

**Toy Problems versus Real-World Problems**

Most of the problems we will work on will be toy problems:
- Puzzle-solving (8-puzzle, 15-puzzle)
- 8-queens
- Cryptarithmetic

But in fact, these techniques are used to solve real problems:
- Route-finding in airline travel planners
- Travelling Salesperson Problem (planning drill movements for automatic circuit-board drills)
- VLSI layout (cell layout and channel routing)
- Automatic manufacturing (assembly sequencing)

**The search tree**

Starting from the start state, generating successors is called expanding the node. We attach these into the search tree. Then we have to decide which node to expand next. How we make this decision is the essence of the search strategy.

**General Tree Search Algorithm**

(OBVIOUSLY NOT IN PURE LISP!!!) (ignores bookkeeping for graphs and keeping track of the path):

    /* Initialize */
    1. OPEN = {Start_node}; CLOSED = NIL.
    2. While TRUE do
    3.   If empty(OPEN) then return(*FAIL*).
         /* Expand next node */
    4.   CURRENT = car(OPEN); OPEN = cdr(OPEN); CLOSED = cons(CURRENT, CLOSED).
    5.   If Goal(CURRENT) Then Return(PATH).
         /* apply operators to get a list of successors */
    6.   SUCCESSORS = Expand(CURRENT).
         /* Heuristics go here, in how things are inserted */
    7.   OPEN = Insert(SUCCESSORS, OPEN).
       od
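The same control loop translates almost line for line into Python. This is a sketch, not the lecture's code: the function names and the path-on-OPEN representation are illustrative, and like the pseudocode above it does pure tree search, with no repeated-state checking.

```python
from collections import deque

def tree_search(start, expand, is_goal, insert):
    """Generic tree search; the `insert` policy is the whole search strategy."""
    open_list = deque([[start]])          # OPEN holds paths; current state is last
    while open_list:                      # empty(OPEN) -> fail
        path = open_list.popleft()        # CURRENT = car(OPEN); OPEN = cdr(OPEN)
        if is_goal(path[-1]):
            return path                   # Return(PATH)
        successors = [path + [s] for s in expand(path[-1])]
        insert(successors, open_list)     # heuristics go here
    return None                           # *FAIL*

# Breadth-first puts successors on the end of OPEN, depth-first on the front:
def bfs_insert(succs, open_list): open_list.extend(succs)
def dfs_insert(succs, open_list): open_list.extendleft(reversed(succs))
```

Swapping `insert` is the only change needed to move between the strategies on the next slides.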

**Search strategies**

These will (almost) all vary depending on how things are inserted in the OPEN list. Divide them based on informedness.

Uninformed: make no use of domain knowledge (also known as blind search):
- Breadth-first search
- Depth-first search
- ...

Informed: make use of heuristics:
- Hill climbing (gradient descent)
- Best-first search: "Algorithm A" and A*
- IDA*

**Evaluating Search Strategies**

Four criteria:
1. Completeness: Are they guaranteed to find a solution?
2. Time complexity: How long do they take in search time?
3. Space complexity: How much storage is required?
4. Optimality: When there are several solutions, does it find the best one?

**Six Blind Search Algorithms**

1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening
6. Bi-directional search

**Breadth-first search**

Breadth-first search always puts new nodes on the end of the OPEN list. Is it complete? What are its time & space complexity? Is it optimal?

**Uniform-cost search**

Just like BFS, but uses the path cost of a node to order it on the OPEN list. Nodes are ordered on OPEN in terms of g(n), the cost in the graph so far. For example, in the "find-a-route" problem, BFS will return the path through the fewest cities; UCS will return the shortest path.
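A minimal sketch of uniform-cost search, keeping OPEN ordered by g(n) with a priority queue. The graph, costs, and names are illustrative, and there is no repeated-state checking, so it assumes a finite acyclic graph:

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand nodes in increasing order of g(n), the path cost so far."""
    frontier = [(0, [start])]                 # OPEN as a heap keyed on g(n)
    while frontier:
        g, path = heapq.heappop(frontier)     # cheapest partial path first
        if path[-1] == goal:
            return g, path
        for nxt, step in edges[path[-1]].items():
            heapq.heappush(frontier, (g + step, path + [nxt]))
    return None

# Illustrative map: one route with few cities but high cost,
# one with more cities but lower cost.
edges = {
    "A": {"B": 10, "C": 1},
    "B": {"D": 10},
    "C": {"E": 1},
    "E": {"D": 1},
    "D": {},
}
```

On this map, BFS would return A-B-D (fewest cities, cost 20), while UCS returns A-C-E-D (cost 3) — exactly the fewest-cities versus shortest-path contrast above.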

**Depth-first search**

Depth-first search always puts new nodes on the front of the OPEN list. Is it complete? What are its time & space complexity? Is it optimal?

**Depth-limited search**

Depth-limited DF search always puts new nodes on the front of the OPEN list, but stops after some depth limit is reached. Is it complete? What are its time & space complexity? Is it optimal?

**Iterative Deepening Search**

Simple idea: use depth-bounded DFS, but keep increasing the depth bound until the answer is found. Is it complete? What are its time & space complexity? Is it optimal?
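The idea can be sketched as a depth-limited DFS wrapped in a loop over increasing bounds. Function names and the `max_depth` cap are illustrative:

```python
def depth_limited(state, expand, is_goal, limit, path=None):
    """DFS that gives up below depth `limit`; returns a path or None."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:                        # depth bound reached: cut off
        return None
    for s in expand(state):
        found = depth_limited(s, expand, is_goal, limit - 1, path + [s])
        if found:
            return found
    return None

def iterative_deepening(start, expand, is_goal, max_depth=50):
    """Run depth-limited DFS with bounds 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, expand, is_goal, limit)
        if found:
            return found
    return None
```

Because each pass is plain DFS, the space used is only linear in the current bound, yet the first goal found is at the shallowest depth, as with BFS.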

**Iterative Deepening**

The time complexity is much less than you might expect! Recall: the number of expansions to a depth of d with branching factor b is:

1 + b + b^2 + ... + b^(d-2) + b^(d-1) + b^d

For b = 10 and d = 5 this is 111,111.

For each expansion to depth k, the root has been expanded k + 1 times, the next level k times, etc., giving:

(d+1)·1 + d·b + (d-1)·b^2 + ... + 3·b^(d-2) + 2·b^(d-1) + b^d

For b = 10 and d = 5 this is 123,456.
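Both sums are easy to check numerically:

```python
b, d = 10, 5

# A single search to depth d expands 1 + b + b^2 + ... + b^d nodes.
single = sum(b**k for k in range(d + 1))

# Iterative deepening re-expands shallow levels: the level at depth k
# is generated (d + 1 - k) times over the d + 1 passes.
iterative = sum((d + 1 - k) * b**k for k in range(d + 1))

print(single, iterative)   # 111111 123456
```

So for b = 10, d = 5 the repeated shallow passes cost only about 11% extra work over a single depth-5 search, because the deepest level dominates the total.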

**Bi-directional Search**

Idea: search forwards from the initial state and backwards from the goal! Check if they link up!

- Why is this good?
- What could be costly?
- Should the search be the same kind from both ends?
- Is it complete? What are its time & space complexity? Is it optimal?
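A meet-in-the-middle sketch using BFS from both ends. It assumes an undirected graph, so the same successor function also enumerates predecessors; all names are illustrative:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """BFS from both ends at once; stop as soon as the frontiers link up."""
    if start == goal:
        return [start]
    fwd = {start: [start]}            # state -> path from start to state
    bwd = {goal: [goal]}              # state -> path from state to goal
    q_fwd, q_bwd = deque([start]), deque([goal])
    while q_fwd and q_bwd:
        state = q_fwd.popleft()       # one expansion forwards
        for s in neighbors(state):
            if s in bwd:              # the frontiers link up
                return fwd[state] + bwd[s]
            if s not in fwd:
                fwd[s] = fwd[state] + [s]
                q_fwd.append(s)
        state = q_bwd.popleft()       # one expansion backwards
        for s in neighbors(state):
            if s in fwd:
                return fwd[s] + bwd[state]
            if s not in bwd:
                bwd[s] = [s] + bwd[state]
                q_bwd.append(s)
    return None
```

Each side only searches to about half the solution depth, which is where the b^(d/2) figures in the summary table come from; the cost is storing both frontiers so the link-up test is cheap.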

**Summary**

| Criterion | Breadth First | Uniform Cost | Depth First | Depth-limited (l) | Iter. Deepening | Bidirectional |
|-----------|---------------|--------------|-------------|-------------------|-----------------|---------------|
| Time      | b^d           | b^d          | b^m         | b^l               | b^d             | b^(d/2)       |
| Space     | b^d           | b^d          | b·m         | b·l               | b·d             | b^(d/2)       |
| Optimal?  | Yes           | Yes          | No          | No                | Yes             | Yes           |
| Complete? | Yes           | Yes          | No          | If l ≥ d          | Yes             | Yes           |

b = branching factor, d = depth of solution, m = max depth of the search tree, l = depth limit.

**Non-redundant generators**

Is the state space graph a tree or a graph with loops? A loop means you can visit the same state over and over. Non-redundant generators (3 levels of difficulty):

1. Never generate the state you just came from
2. Do not create paths with cycles (check if a node is already on the current path)
3. Check if a node has ever been generated before (tricky for find-a-route problems)

Note: there is a tradeoff between checking (which may be expensive) and not checking (which generates a bigger search tree).
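Level 2 (no cycles on the current path) is a one-line membership test when each OPEN entry carries its whole path. A breadth-first sketch with illustrative names:

```python
from collections import deque

def bfs_no_cycles(start, is_goal, expand):
    """Breadth-first search that never extends a path with a repeated state."""
    open_list = deque([[start]])              # OPEN holds whole paths
    while open_list:
        path = open_list.popleft()
        if is_goal(path[-1]):
            return path
        for s in expand(path[-1]):
            if s not in path:                 # level-2 check: cycle on this path?
                open_list.append(path + [s])
    return None
```

Without the `s not in path` test this loops forever on a graph with cycles; the level-3 alternative would keep a single CLOSED set of every state ever generated, which prunes more but costs memory.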
