
Define Intelligence

Intelligence is a rather hard term to define; it is often defined in terms of what we understand as intelligence in humans.

Allen Newell defines intelligence as the ability to bring all the knowledge a system has at its disposal to bear on the solution of a problem.

A more practical definition, used in the context of building artificial systems, is the ability to perform well on tasks that humans currently do better.

01/06/21 1
What are the different approaches to defining AI?

Thinking humanly
Thinking rationally
Acting humanly
Acting rationally

Suppose you are designing a machine to pass the Turing test. What
are the capabilities such a machine must have?

 Natural Language processing


 Knowledge Representation
 Automated reasoning
 Machine Learning
 Computer Vision
 Robotics

Questions from Lecture 1

 Design ten questions to pose to a man/machine that is taking the Turing test.

 List 5 tasks that you would like a computer to be able to do within the next 5 years.

 List 5 tasks that computers are unlikely to be able to do in the next 10 years.
Questions from Lecture 2

Define Agent:
An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that
environment through actuators

Intelligent Agent:
a) must sense
b) must act
c) must be autonomous
d) must be rational

Questions from Lecture 2

Rational Agent:

For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Questions from Lecture 2

Autonomous Agent:

Autonomous agents are software entities that are capable of taking independent action in dynamic, unpredictable environments. An autonomous agent can learn and adapt to a new environment.
Questions from Lecture 2

Describe the salient features of an agent.

An agent perceives its environment using sensors.
An agent takes actions in the environment using actuators.
A rational agent acts so as to reach its goal or to maximize its utility.
Reactive agents decide their actions on the basis of their current state and the current percepts.
Deliberative agents reason about their goals to decide their actions.
Questions from Lecture 2

Work out the PEAS model for the Mars rover.

What are the percepts for this agent?
Characterize the operating environment.
What are the actions the agent can take?
How can one evaluate the performance of the agent?
Questions from Lecture 2

What are the percept devices for this agent?

Panoramic and microscopic cameras
A radio receiver
Spectrometers for studying rock samples, including an alpha particle X-ray spectrometer and a miniature thermal emission spectrometer
Questions from Lecture 2

Characterize the operating environment

The environment (the Martian surface) is:
Partially observable
Non-deterministic
Sequential
Dynamic
Continuous
May be single-agent
Questions from Lecture 2

What are the actions the agent can take?

The rover Spirit has:
o Motor-driven wheels for locomotion
o A robotic arm to bring sensors close to interesting rocks
o A rock abrasion tool (RAT) capable of efficiently drilling 45 mm holes in hard volcanic rock
o A radio transmitter for communication
Questions from Lecture 2

Performance:

o Maximizing the distance or variety of terrain it traverses
o Collecting as many samples as possible
o Finding life (for which it receives 1 point if it succeeds and 0 points if it fails)
o Maximizing lifetime or minimizing power consumption
Questions from Lecture 2
What sort of agent architecture do you think is most suitable for this agent?

A model-based reflex agent for low-level navigation; for route planning, experimentation, etc., some combination of goal-based and utility-based agents would be needed.
Goal Directed Agent
A goal directed agent needs to achieve certain goals.

Many problems can be represented as a set of states and a set of rules describing how one state is transformed into another.

The agent must choose a sequence of actions to achieve the desired goal.
Solving problems by
searching
Chapter 3

Agent & Problem Solving
• Simple Reflex Agent
• Bases its actions on a direct mapping from STATEs to ACTIONs
• Goal Based Agent
• Considers future actions and the desirability of their outcomes
• Problem Solving Agent
• A goal based agent
• Uses an atomic representation of the environment
• Planning Agent
• A goal based agent
• Uses a more advanced factored/structured representation
State Based Search
Each state is an abstract representation of the agent’s
environment. It is an abstraction that denotes a configuration of
the agent.

Initial state: The description of the starting configuration of the agent.

Action/Operator: An action/operator takes the agent from one state to another state. A state can have a number of successor states.

A plan is a sequence of actions.
State Based Search
A goal is a description of a set of the desirable states of the
world. Goal states are often specified by a goal test which any
goal state must satisfy.

Path cost: each path is assigned a positive number; usually the path cost is the sum of the step costs.
State Based Search
Problem formulation means choosing a relevant set of states to consider and a feasible set of operators to move from one state to another.

Search is the process of imagining sequences of operators applied to the initial state and checking which sequence reaches a goal state.
Example: Romania

• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest.
• Formulate goal:
• be in Bucharest
• Formulate problem:
• states: various cities
• actions: drive between cities
• Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
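This formulation can be made concrete in a short Python sketch (illustrative only: `find_route` is our helper, and the partial road map with distances follows the standard AIMA Romania example):

```python
from collections import deque

# Partial Romania road map from the AIMA example (distances in km).
roads = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {},
}

def find_route(start, goal):
    """Breadth-first search over the road map; returns a list of cities."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads.get(path[-1], {}):
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(find_route("Arad", "Bucharest"))  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Breadth-first search here finds the route with the fewest driving legs, matching the solution named on the slide.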

Example: Romania

Single-state problem formulation

A problem is defined by four items:

1. initial state S0, where S0 ∈ S, e.g., "at Arad"

2. actions: operator or successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
A: S → S is the set of operators.
3. goal test, which can be
– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)
4. path cost (additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0

• A solution is a sequence of actions leading from the initial state to a goal state.
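The four items above can be bundled into a small data structure; a minimal Python sketch (the class and its names are ours, purely illustrative):

```python
class SearchProblem:
    """Container for the four items that define a search problem."""

    def __init__(self, initial_state, successors, goal_test, step_cost):
        self.initial_state = initial_state
        self.successors = successors   # state -> iterable of (action, state)
        self.goal_test = goal_test     # state -> bool
        self.step_cost = step_cost     # (state, action, next_state) -> number >= 0

    def path_cost(self, states, actions):
        """Additive path cost: the sum of step costs along the path."""
        return sum(self.step_cost(s, a, s2)
                   for s, a, s2 in zip(states, actions, states[1:]))

# Tiny example: walking right along a line of integers toward 3.
problem = SearchProblem(
    initial_state=0,
    successors=lambda s: [("right", s + 1)],
    goal_test=lambda s: s == 3,
    step_cost=lambda s, a, s2: 1,
)
print(problem.path_cost([0, 1, 2, 3], ["right"] * 3))  # 3
```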

Search Problem

Finds a sequence of actions which transfers the agent from the initial
state to a goal state.

 Representing the search problem:
• A search problem is represented using a directed graph.
• The states are represented as nodes.
• The allowed actions are represented as arcs.
Pegs and Disks

 Consider the following problem: we have 3 pegs and 3 disks.

 Operators: One may move the topmost disk on any peg to the topmost position on any other peg.
8 Queens

 Place 8 queens on a chessboard so that no two queens are in the same row, column or diagonal.
N Queens problem formulation 1

 States: Any arrangement of 0 to 8 queens on the board.

 Initial State: No queens on the board.

 Successor function: Add a queen in any square.

 Goal test: 8 Queens on the board and none are attacked.

N Queens problem formulation 2

 States: Any arrangement of 8 queens on the board.

 Initial State: All queens are at column 1.

 Successor function: Change the position of any one queen.

 Goal test: 8 Queens on the board and none are attacked.

N Queens problem formulation 3

 States: Any arrangement of k queens in the first k rows such that none
are attacked.

 Initial State: No queens on the board.

 Successor function: Add a queen to the (k+1)th row so that none are
attacked.

 Goal test: 8 Queens on the board and none are attacked.
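Formulation 3 lends itself directly to search by backtracking; a Python sketch (our illustration, not from the slides):

```python
def successors(queens, n=8):
    """Formulation 3: a state places k non-attacking queens in the first k
    rows (queens[r] = column of the queen in row r); successors add a
    queen to row k+1 so that none are attacked."""
    k = len(queens)
    for col in range(n):
        # attacked if same column, or on a diagonal: |col - c| == k - r
        if all(col != c and abs(col - c) != k - r
               for r, c in enumerate(queens)):
            yield queens + (col,)

def count_solutions(n=8):
    """Depth-first search from the empty board; goal: n queens placed."""
    stack, count = [()], 0
    while stack:
        state = stack.pop()
        if len(state) == n:
            count += 1
        else:
            stack.extend(successors(state, n))
    return count

print(count_solutions(8))  # 92 distinct solutions to the 8-queens puzzle
```

Because every intermediate state is already attack-free, this formulation searches far fewer states than formulations 1 and 2.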

Example: The 8-puzzle

• states?
• actions?
• goal test?
• path cost?

Example: The 8-puzzle

• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move

[Note: optimal solution of n-Puzzle family is NP-hard]
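The successor function for this formulation can be sketched in Python (our encoding, purely illustrative: a tuple of 9 tiles in row-major order, with 0 for the blank):

```python
def moves(state):
    """Successors of an 8-puzzle state; each action moves the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    result = []
    for action, (dr, dc) in [("up", (-1, 0)), ("down", (1, 0)),
                             ("left", (0, -1)), ("right", (0, 1))]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:          # stay on the board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]            # slide the tile into the blank
            result.append((action, tuple(s)))
    return result

# A blank in a corner has 2 legal moves; in the centre it has 4.
print(len(moves((0, 1, 2, 3, 4, 5, 6, 7, 8))))  # 2
print(len(moves((1, 2, 3, 4, 0, 5, 6, 7, 8))))  # 4
```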



Basic search algorithms

Let L be a list containing the initial state (L = the fringe)
Loop
 if L is empty return failure
 Node ← select(L)
 if Node is a goal
  then return Node (the path from the initial state to Node)
 else apply all the applicable operators to Node and
  merge the newly generated states into the fringe
End Loop
Basic search algorithm: Key issues

 Search tree may be unbounded
 because of loops
 because the state space is infinite.

 Return a path or a node?

 How is selecting a node done?

 How much is known about the quality of the intermediate states?

 Is the aim to find a minimal-cost path, or any path as soon as possible?
Find a Path

 Shortest Path
 Any Path

 Blind Search
 BFS
 DFS

(Figure: example graph with nodes A–H.)
Search Tree
 List all possible paths
 Eliminate cycles from paths
 Result: a search tree

(Figure: the example graph unfolded into a search tree rooted at A.)
Search Tree- Terminology

 Root Node
 Leaf Node
 Ancestor / Descendant
 Branching Factor
 Complete Path / Partial Path
 Expanding open nodes results in closed nodes.

(Figure: the search tree rooted at A, annotated with these terms.)
Basic search algorithms

Let L be a list containing the initial state (L = the fringe)
Loop
 if L is empty return failure
 Node ← remove-first(fringe)
 if Node is a goal
  then return the path from the initial state to Node
 else generate all successors of Node and
  merge the newly generated states into the fringe
End Loop
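The algorithm above can be written directly in Python, with the merge step left as a parameter so that one function covers several strategies (an illustrative sketch; the fringe holds whole paths for simplicity):

```python
def tree_search(initial, successors, is_goal, merge):
    """Basic search: keep a fringe of paths; repeatedly remove the first,
    test it, and merge its successors back in. The `merge` function fixes
    the strategy (append at the end -> BFS, at the front -> DFS, ...)."""
    fringe = [[initial]]
    while fringe:
        path = fringe.pop(0)               # Node <- remove-first(fringe)
        if is_goal(path[-1]):
            return path                    # path from initial state to Node
        children = [path + [s] for s in successors(path[-1])]
        fringe = merge(fringe, children)
    return None                            # L is empty: failure

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
bfs = lambda fringe, new: fringe + new     # new states at the end (FIFO)
dfs = lambda fringe, new: new + fringe     # new states at the front (LIFO)
print(tree_search("A", lambda s: graph[s], lambda s: s == "G", bfs))
# ['A', 'C', 'G']
```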

Evaluating Search Strategies
• Completeness: Is the strategy guaranteed to find a solution if one exists?
• Optimality: If a solution is found, is it guaranteed to have the minimum cost?
• Time complexity: Time taken (number of nodes expanded), worst case or average case, to find a solution.
• Space complexity: Space used by the algorithm, measured in terms of the maximum size of the fringe.
Search Strategies
• Blind Search
Depth First search
Breadth First Search
Iterative deepening search
Bidirectional Search
• Informed Search
• Constraint satisfaction Search
• Adversary Search

Search strategies

• A search strategy is defined by picking the order of node expansion
• Strategies are evaluated along the following dimensions:
 – completeness: does it always find a solution if one exists?
 – time complexity: how long does it take to find a solution?
 – space complexity: maximum number of nodes in memory
 – optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
 – b: maximum branching factor of the search tree
 – d: depth of the least-cost solution
 – m: maximum depth of the state space (may be ∞)
Breadth First search

Let L be a list containing the initial state (L = the fringe)
Loop
 if L is empty return failure
 Node ← remove-first(fringe)
 if Node is a goal
  then return the path from the initial state to Node
 else generate all successors of Node and
  merge the newly generated states into the fringe (for BFS: at the end, FIFO)
End Loop
(Figure: breadth-first expansion of the search tree rooted at A.)

FRINGE: A B C D E D G
Implementation: states vs. nodes

• A state is a (representation of) a physical configuration.
• A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth.
• The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Uninformed search strategies
• Uninformed search strategies use only the information available in the
problem definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search

Breadth-first search

• Expand the shallowest unexpanded node
• Implementation: fringe is a FIFO queue, i.e., new successors go at the end

Properties of breadth-first search
• Complete? Yes (if b is finite)

• Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))

• Space? O(b^(d+1)) (keeps every node in memory)

• Optimal? Yes (if cost = 1 per step)

• Space is the bigger problem (more than time)
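A breadth-first implementation matching the pseudocode, with parent pointers for path reconstruction (an illustrative sketch; the repeated-state check keeps the example graph small):

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """BFS: the fringe is a FIFO queue, so the shallowest unexpanded node
    is expanded first; parent pointers reconstruct the path at the end."""
    parent = {start: None}
    fringe = deque([start])
    while fringe:
        node = fringe.popleft()            # shallowest unexpanded node
        if is_goal(node):
            path = []
            while node is not None:        # walk parent pointers back
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in successors(node):
            if child not in parent:        # avoid repeated states
                parent[child] = node
                fringe.append(child)       # new successors go at the end
    return None

graph = {"A": "BC", "B": "DE", "C": "DG", "D": "", "E": "", "G": ""}
print(breadth_first_search("A", lambda s: graph[s], lambda s: s == "G"))
# ['A', 'C', 'G'] -- shallowest goal found first
```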

Uniform-cost search

• Expand the least-cost unexpanded node
• Implementation:
 – fringe = queue ordered by path cost
• Equivalent to breadth-first search if all step costs are equal
• Complete? Yes, if step cost ≥ ε
• Time? Guided by path cost rather than depth; number of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? Number of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes are expanded in increasing order of g(n)
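A uniform-cost implementation using a binary heap as the priority queue (an illustrative sketch; `graph` maps a state to a dict of successor: step-cost):

```python
import heapq

def uniform_cost_search(start, goal, graph):
    """UCS: the fringe is a priority queue ordered by path cost g(n), so
    nodes are expanded in increasing order of g."""
    fringe = [(0, start, [start])]         # (g, state, path)
    best_g = {}
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                       # already reached more cheaply
        best_g[state] = g
        for nxt, cost in graph[state].items():
            heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "G": 7},
         "C": {"G": 2}, "G": {}}
print(uniform_cost_search("A", "G", graph))  # (4, ['A', 'B', 'C', 'G'])
```

Note how the cheapest route A–B–C–G (cost 4) beats both the direct-looking A–C–G (cost 6) and A–B–G (cost 8).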

Uniform-cost search

Now try it yourself (UCS)
Depth-first search

• Expand the deepest unexpanded node
• Implementation: fringe = LIFO queue (a stack), i.e., put successors at the front

Properties of depth-first search
• Complete? No: fails in infinite-depth spaces and spaces with loops
 – Modify to avoid repeated states along the path
  → complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
 – but if solutions are dense, may be much faster than breadth-first
• Space? O(b·m), i.e., linear space!
• Optimal? No
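A depth-first implementation with the modification mentioned above, avoiding repeated states along the current path; the optional `limit` turns it into the depth-limited search of the next slide (an illustrative sketch):

```python
def depth_first_search(start, successors, is_goal, limit=None):
    """DFS with an explicit stack (LIFO): successors go at the front, so
    the deepest node is expanded first. Only the current path is stored,
    giving linear space; skipping states already on the path avoids loops."""
    stack = [[start]]
    while stack:
        path = stack.pop()                 # LIFO: deepest path first
        if is_goal(path[-1]):
            return path
        if limit is not None and len(path) - 1 >= limit:
            continue                       # cutoff: node treated as having no successors
        for child in reversed(successors(path[-1])):
            if child not in path:          # avoid repeated states along the path
                stack.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["G"], "D": [], "G": []}
print(depth_first_search("A", lambda s: graph[s], lambda s: s == "G"))
# ['A', 'C', 'G'] -- found despite the A-B-A loop in the graph
```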

Depth-limited search

= depth-first search with depth limit l, i.e., nodes at depth l are treated as having no successors (cutoff).
Recursive implementation possible.

 Complete? Yes if l ≥ d; no if l < d
 Optimal? No, even when l > d
 Time complexity: O(b^l)
 Space complexity: O(b·l)
 It is important to know a good limit in advance; the diameter of the state space is a natural choice for l.
Iterative deepening search

(Figures: iterative deepening search with depth limits l = 0, 1, 2, 3.)
Properties of iterative deepening search
• Complete? Yes
• Time?
(d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
• Space? O(b·d)
• Optimal? Yes, if step cost = 1
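Iterative deepening can be sketched as repeated depth-limited search (our illustration; shallow levels are re-expanded, but the repeated work is dominated by the last level):

```python
def depth_limited(state, successors, is_goal, limit, path=()):
    """Recursive depth-limited search; returns a path tuple or None."""
    path = path + (state,)
    if is_goal(state):
        return path
    if limit == 0:
        return None                        # cutoff: no successors at depth l
    for child in successors(state):
        result = depth_limited(child, successors, is_goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Run depth-limited search with limits l = 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, successors, is_goal, limit)
        if result is not None:
            return result
    return None

# Binary tree over integers: the children of n are 2n and 2n+1.
kids = lambda n: [2 * n, 2 * n + 1]
print(iterative_deepening(1, kids, lambda n: n == 6))  # (1, 3, 6)
```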

Summary of algorithms

(Figure: comparison table of the uninformed search strategies.)
Bidirectional Search

• Two simultaneous searches:
• forward from the initial state
• backward from the goal
• Stop when the two searches meet in the middle
• O(b^(d/2))
Bidirectional Search

• Works well only when there are unique start and goal states; performs ambiguously when there are multiple goals.
• Must be able to search backwards from the goal: requires the ability to generate predecessor states.
• Can (sometimes) lead to finding a solution more quickly.

• Time and space complexity is O(b^(d/2))
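A bidirectional sketch using two breadth-first frontiers (our illustration; it assumes edges are reversible, so one successor function serves both directions):

```python
from collections import deque

def _expand_layer(queue, tree, other, neighbours):
    """Expand one node from `queue`, recording parents in `tree`;
    return a meeting state if the other search has already seen it."""
    node = queue.popleft()
    for nxt in neighbours(node):
        if nxt not in tree:
            tree[nxt] = node
            if nxt in other:               # the two searches meet here
                return nxt
            queue.append(nxt)
    return None

def bidirectional_search(start, goal, neighbours):
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd, bwd = deque([start]), deque([goal])
    while fwd and bwd:
        meet = _expand_layer(fwd, fwd_parent, bwd_parent, neighbours)
        if meet is None:
            meet = _expand_layer(bwd, bwd_parent, fwd_parent, neighbours)
        if meet is not None:
            path, n = [], meet
            while n is not None:           # stitch start ... meet
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:           # stitch meet ... goal
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

# Line graph 1-2-3-4-5-6-7: each frontier only explores about d/2 deep.
line = lambda n: [m for m in (n - 1, n + 1) if 1 <= m <= 7]
print(bidirectional_search(1, 7, line))  # [1, 2, 3, 4, 5, 6, 7]
```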

Summary

• Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored

• Variety of uninformed search strategies

• Iterative deepening search uses only linear space and not much more time than other uninformed algorithms
