Module 1 - Part 3
It is based on reflex actions, i.e., actions with a direct mapping from states. A generalized version of the above algorithm is as follows:
A simple reflex agent acts according to a rule whose condition matches the current state, as summarized by the percept.
When the current percept is combined with internal state, the agent becomes a model-based reflex agent, which can keep track of aspects of the world it cannot currently perceive.
Problem-solving agents
Example: Romania
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
Formulate goal:
  be in Bucharest
Formulate problem:
  states: various cities
  actions: drive between cities
Find solution:
  sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
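This formulation can be sketched as a small Python class. The road map below is a partial, illustrative subset of the Romania map (only the cities needed for the example), and the class/method names are assumptions for illustration:

```python
# Partial road map of Romania (subset of the map used in the example).
ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
}

class RouteProblem:
    """Problem formulation: states are cities, actions are drives."""
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):
        # Actions available in a city: drive to any neighbouring city.
        return self.roads.get(state, [])

    def result(self, state, action):
        # Driving to a city leaves you in that city.
        return action

    def goal_test(self, state):
        return state == self.goal
```

Any uninformed search strategy from the rest of this module can then be run against this one interface.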
[Figure: map of Romania, showing the source (Arad) and the destination (Bucharest)]
Problem types
Deterministic, fully observable -> single-state problem
  The agent knows exactly which state it will be in; the solution is a sequence of actions
Non-observable -> sensorless (conformant) problem
  The agent may have no idea where it is; the solution, if any, is still a sequence
Nondeterministic and/or partially observable -> contingency problem
  Percepts provide new information about the current state; the solution is a contingent plan
Unknown state space -> exploration problem
Single-state problem formulation. A problem is defined by four items:
  initial state, e.g., "at Arad"
  actions or successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad -> Zerind, Zerind>, ...}
  goal test, e.g., x = "at Bucharest"
  path cost (additive), e.g., sum of distances, number of actions executed, etc.
A solution is a sequence of actions leading from the initial state to a goal state
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
  e.g., "Arad -> Zerind" represents a complex set of possible routes, detours, rest stops, etc.
  For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
(Abstract) solution = set of real paths that are solutions in the real world
Example: water jug problem
States:
  (x, y), where x = 0, 1, 2, 3, 4 is the number of gallons in the 4-gallon bucket and y = 0, 1, 2, 3 is the number of gallons in the 3-gallon bucket
Initial state:
  (0, 0), both buckets empty
Goal state:
  one of the buckets has two gallons of water in it, represented by either (x, 2) or (2, y)
Path cost:
  1 per unit step
Actions:
  Fill a bucket: (x, y) -> (4, y); (x, y) -> (x, 3)
  Empty a bucket: (x, y) -> (0, y); (x, y) -> (x, 0)
  Pour the 3-gallon bucket into the 4-gallon bucket: (x, y) -> (x+y, 0) if x+y <= 4 and y > 0
  Pour the 4-gallon bucket into the 3-gallon bucket: (x, y) -> (0, x+y) if x+y <= 3 and x > 0
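This formulation can be solved mechanically. Here is a minimal breadth-first solver; the state encoding and move rules follow the formulation above, and the function names are illustrative:

```python
from collections import deque

def successors(state):
    """One-move neighbours of (x, y): fill, empty, or pour a bucket.
    x is the 4-gallon bucket, y is the 3-gallon bucket."""
    x, y = state
    y_to_x = min(y, 4 - x)   # how much can be poured from y into x
    x_to_y = min(x, 3 - y)   # how much can be poured from x into y
    return {
        (4, y), (x, 3),                      # fill a bucket
        (0, y), (x, 0),                      # empty a bucket
        (x + y_to_x, y - y_to_x),            # pour y into x
        (x - x_to_y, y + x_to_y),            # pour x into y
    } - {state}

def solve(start=(0, 0)):
    """Breadth-first search for a state with 2 gallons in either bucket."""
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if 2 in state:                       # goal: (2, y) or (x, 2)
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)
    return None
```

The shortest solution takes four moves: fill the 3-gallon bucket, pour it into the 4-gallon bucket, refill the 3-gallon bucket, and pour again until the 4-gallon bucket is full, leaving 2 gallons.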
Example: vacuum world
states? integer dirt and robot locations
actions? Left, Right, Suck
goal test? no dirt at any location
path cost? 1 per action
Example: the 8-puzzle
states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
Example: 8-queens problem
initial state? no queens on the board
actions? add a queen to any empty square
  -or- (incremental formulation) add a queen to the leftmost empty column such that it is not attacked by any other queen
goal test? 8 queens on the board, none attacked
path cost? 1 per move
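The incremental formulation can be sketched as a short recursive search (helper names are illustrative); it finds the well-known 92 distinct solutions:

```python
def not_attacked(queens, col, row):
    """True if a queen at (col, row) is safe from every queen already
    placed; queens[i] is the row of the queen in column i."""
    return all(r != row and abs(r - row) != col - c
               for c, r in enumerate(queens))

def place(queens=()):
    """Incremental formulation: add a queen to the leftmost empty column
    so that it is not attacked; count complete 8-queens boards."""
    if len(queens) == 8:
        return 1
    return sum(place(queens + (row,)))if False else sum(
        place(queens + (row,))
        for row in range(8) if not_attacked(queens, len(queens), row))
```

The incremental restriction ("leftmost empty column, not attacked") shrinks the state space enormously compared with "any queen on any empty square".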
Example: robotic assembly
states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled
actions? continuous motions of robot joints
goal test? complete assembly
path cost? time to execute
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
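This can be sketched as follows. The Node fields and the successor-function format, triples of (action, result state, step cost), are assumptions for illustration:

```python
class Node:
    """Search-tree node: a state plus bookkeeping fields."""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state, self.parent, self.action = state, parent, action
        self.path_cost = (parent.path_cost if parent else 0) + step_cost
        self.depth = parent.depth + 1 if parent else 0

def expand(node, successor_fn):
    """Create a child node for each (action, result_state, step_cost)
    triple returned by the problem's successor function."""
    return [Node(state, node, action, cost)
            for action, state, cost in successor_fn(node.state)]
```

Every search strategy below differs only in which of these nodes it takes off the fringe next.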
Search strategies
A search strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following dimensions:
  completeness: does it always find a solution if one exists?
  time complexity: number of nodes generated
  space complexity: maximum number of nodes in memory
  optimality: does it always find a least-cost solution?
Breadth-first search
Expand the shallowest unexpanded node
Implementation:
  fringe is a FIFO queue, i.e., new successors go at the end
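A minimal sketch of the idea, run on an assumed subset of the Romania road map from the earlier example:

```python
from collections import deque

ROADS = {  # partial, illustrative subset of the Romania map
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
}

def bfs(start, goal):
    """Fringe is a FIFO queue: new successors go at the end, so the
    shallowest unexpanded node is always expanded next."""
    fringe = deque([[start]])
    visited = {start}
    while fringe:
        path = fringe.popleft()
        if path[-1] == goal:
            return path
        for city in ROADS.get(path[-1], []):
            if city not in visited:
                visited.add(city)
                fringe.append(path + [city])
    return None
```

Breadth-first search returns the route with the fewest cities (via Fagaras), which is not necessarily the shortest in kilometres.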
Depth-first search
Expand the deepest unexpanded node
Implementation:
  fringe is a LIFO queue (a stack), i.e., successors go at the front
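A minimal sketch with the fringe as an explicit stack (the graph passed in is up to the caller; the one in the test is illustrative):

```python
def dfs(start, goal, roads):
    """Fringe is a LIFO stack: successors go on top, so the deepest
    unexpanded node is expanded next."""
    fringe = [[start]]
    while fringe:
        path = fringe.pop()          # take the most recently added path
        if path[-1] == goal:
            return path
        for city in roads.get(path[-1], []):
            if city not in path:     # avoid cycling along this path
                fringe.append(path + [city])
    return None
```

Note the cycle check: without it, depth-first search can loop forever on graphs with cycles, which is exactly the repeated-states problem discussed later.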
Depth-limited search
= depth-first search with depth limit l, i.e., nodes at depth l have no successors
Recursive implementation:
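The recursive implementation can be sketched as follows (the successor-function interface is an assumption; nodes at the depth limit simply get no successors, and the search distinguishes a "cutoff" from genuine failure):

```python
def depth_limited_search(state, goal, successors, limit):
    """Depth-first search that treats nodes at depth `limit` as having
    no successors. Returns a path, 'cutoff' if the limit was reached,
    or None if no solution exists within the limit."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None
```

Iterative deepening simply calls this with limit = 0, 1, 2, ... until a path (rather than a cutoff) is returned.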
Uniform-cost search
Expand the least-cost unexpanded node
Implementation:
  fringe = queue ordered by path cost g(n)
Equivalent to breadth-first if step costs are all equal
Complete? Yes, if every step cost >= ε for some ε > 0
Time? # of nodes with g <= cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
Space? # of nodes with g <= cost of optimal solution, O(b^⌈C*/ε⌉)
Optimal? Yes: nodes are expanded in increasing order of g(n)
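A minimal sketch with the fringe as a priority queue ordered by g(n); the distances below are an assumed subset of the Romania map, with the standard road lengths in kilometres:

```python
import heapq

DIST = {  # partial subset of the Romania map, with road distances (km)
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}

def uniform_cost(start, goal):
    """Fringe is a priority queue ordered by path cost g(n); the
    cheapest unexpanded node is always expanded next."""
    fringe = [(0, start, [start])]
    expanded = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in expanded:
            continue
        expanded.add(state)
        for nxt, cost in DIST.get(state, []):
            if nxt not in expanded:
                heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None
```

The cheapest route (418 km, via Rimnicu Vilcea and Pitesti) is not the route with the fewest cities; breadth-first search would return the Fagaras route (450 km) instead.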
Bi-directional Search
Intuition: Start searching from both the initial state and the goal state, meet in the middle.
Notes:
  Not always possible to search backwards
  How do we know when the trees meet?
  At least one search tree must be retained in memory
Complete? Yes
Optimal? Yes
Time complexity: O(b^(d/2)), where d is the depth of the solution
Space complexity: O(b^(d/2))
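A sketch of the frontier-meeting idea. The symmetric (two-way) road map is an assumed subset of the Romania example, and the function reports the number of edges on a shortest path once the two frontiers touch:

```python
ROADS = {  # symmetric (two-way) subset of the Romania road map
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"],
    "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def bidirectional_depth(start, goal):
    """Grow a BFS frontier from each end, one level at a time, and
    report the length (in edges) of a shortest path once they meet.
    Each tree only needs to reach roughly depth d/2."""
    if start == goal:
        return 0
    front, back = {start}, {goal}
    seen_f, seen_b = {start}, {goal}
    steps = 0
    while front and back:
        # expand the forward frontier by one level
        front = {n for s in front for n in ROADS.get(s, [])} - seen_f
        seen_f |= front
        steps += 1
        if front & seen_b:
            return steps
        # then the backward frontier
        back = {n for s in back for n in ROADS.get(s, [])} - seen_b
        seen_b |= back
        steps += 1
        if back & seen_f:
            return steps
    return None
```

This is why backward search needs an invertible successor function: the backward frontier is grown over the same (here symmetric) edges.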
Summary of algorithms
Repeated states
Failure to detect repeated states can turn a linear problem into an exponential one!
Graph search
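The graph-search idea, remembering every expanded state so it is never expanded again, can be sketched as:

```python
from collections import deque

def graph_search(start, goal, successors):
    """Tree search plus a closed set: states already expanded are
    skipped, so repeated states cannot blow up the search."""
    fringe = deque([[start]])
    closed = set()
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed:
                fringe.append(path + [nxt])
    return None
```

The closed set is what turns the exponential tree over a cyclic graph back into a search over its (finite) set of states.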
Contingency problem
  Occurs in partially observable environments
  Percepts provide new information about the current state
  Often interleave search and execution
Exploration problem
  The states are unknown: the agent must discover what state it is in and what its actions do
Summary
Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored
There is a variety of uninformed search strategies
Iterative deepening search uses only linear space and not much more time than other uninformed algorithms