
Artificial Intelligence

Search Problem

Search is a problem-solving technique that explores successive stages in the problem-solving process.
Search Space

• We need to define a space to search in to find a problem solution.
• To successfully design and implement a search algorithm, we must be able to analyze and predict its behavior.
State Space Search

One tool for analyzing the search space is to represent it as a state space graph; graph theory then lets us analyze both the problem and its solution.


Graph Theory
A graph consists of a set of nodes and a set
of arcs or links connecting pairs of nodes.

(Figure: two islands and two river banks connected by bridges — each land mass a node, each bridge an arc.)
Graph structure

• Nodes = {a, b, c, d, e}
• Arcs = {(a,b), (a,d), (b,c),….}

(Figure: the graph drawn with nodes a, b, c, d, e and the arcs listed above.)
Tree
• A tree is a graph in which any two nodes have at most one path between them.
• A tree has a root node.

(Figure: a tree whose root has children b, c, d, with leaves e through j.)
Space representation

In the space representation of a problem, the nodes of a graph correspond to partial problem-solution states, and arcs correspond to steps in the problem-solving process.
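
As an illustration (not from the slides), here is a minimal Python sketch of such a state space stored as an adjacency list, using the node names a–e from the "Graph structure" slide above; arcs beyond the ones listed there are assumed for the example.

# A minimal sketch: the state space as an adjacency list (nodes = states, arcs = steps).
# Arcs other than (a,b), (a,d), (b,c) are assumed, since the slide elides the full list.
state_space = {
    'a': ['b', 'd'],
    'b': ['c'],
    'c': [],
    'd': ['e'],   # assumed arc
    'e': [],
}

def successors(state):
    """Return the states reachable from `state` in one problem-solving step."""
    return state_space.get(state, [])

print(successors('a'))   # ['b', 'd']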
Example

• Consider the game of Tic-Tac-Toe: each node of the state space is a board configuration, and each arc is a legal move.

(Figure: board configurations, drawn as 3×3 number grids, linked by arcs to the configurations reachable in one move.)
A simple example:
traveling on a graph

(Figure: a graph with start state A and goal state F; edge costs as used below — A–B 3, A–D 3, B–C 2, C–A 2, B–F 9, D–E 4, E–F 4.)
Search tree (partial)
  state = A, cost = 0
    state = B, cost = 3
      state = C, cost = 5
        state = A, cost = 7
      state = F, cost = 12  (goal state)
    state = D, cost = 3

Note: search-tree nodes and states are not the same thing.
Full search tree
  state = A, cost = 0
    state = B, cost = 3
      state = C, cost = 5
        state = A, cost = 7
          state = B, cost = 10 ...
          state = D, cost = 10 ...
      state = F, cost = 12  (goal state)
    state = D, cost = 3
      state = E, cost = 7
        state = F, cost = 11  (goal state)
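
To make the node/state distinction concrete, here is a small Python sketch (an illustration, not from the slides) that expands search-tree nodes over the graph above; the edge costs are read off the search tree, and the Node class and names are hypothetical.

# A sketch: search-tree nodes wrap a state plus the cost accumulated along the path to it.
graph = {
    'A': [('B', 3), ('D', 3)],
    'B': [('C', 2), ('F', 9)],
    'C': [('A', 2)],
    'D': [('E', 4)],
    'E': [('F', 4)],
    'F': [],
}

class Node:
    """A search-tree node: a state plus the accumulated path cost."""
    def __init__(self, state, cost=0, parent=None):
        self.state, self.cost, self.parent = state, cost, parent

    def expand(self):
        return [Node(s, self.cost + c, self) for s, c in graph[self.state]]

root = Node('A')                      # state = A, cost = 0
for child in root.expand():           # states B and D, both at cost 3
    for grandchild in child.expand():
        print(grandchild.state, grandchild.cost)   # C 5, F 12, E 7

Note how the state A can reappear in several tree nodes (here at cost 0 and, deeper down, at cost 7): the tree records paths, not just states.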
Problem types
• Deterministic, fully observable → single-state problem
  – Solution is a sequence of states
• Non-observable → sensorless problem
  – The problem solver may have no idea where it is; the solution is a sequence
• Nondeterministic and/or partially observable, or unknown state space → more complex problem types
Algorithm types
• There are two kinds of search algorithms:
  – Complete
    • guaranteed to find a solution or prove there is none
  – Incomplete
    • may not find a solution even when one exists
    • often more efficient (or there would be no point)
Comparing Searching Algorithms: Will it find a solution? The best one?

Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.

Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.
Comparing Searching Algorithms: Complexity
The branching factor b of a node is the number of arcs going out of the node.

Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of
• maximum path length m
• maximum branching factor b.

Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), also expressed in terms of m and b.
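
As a quick numeric sanity check (with hypothetical values of b and m), the gap between an exponential O(b^m) bound and a linear O(bm) bound:

# Hypothetical branching factor and path length, just to show how fast b**m grows.
b, m = 10, 6
print(b ** m)   # 1000000 nodes -- exponential, e.g. a worst-case BFS frontier
print(b * m)    # 60 nodes      -- linear, e.g. a worst-case DFS frontier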
Example: the 8-puzzle.
Given: a board situation for the 8-puzzle:
1 3 8
2 7
5 4 6
Problem: find a sequence of moves that transforms this board situation into the desired goal situation:
1 2 3
8 4
7 6 5
State Space representation

In the state space representation of a problem, the nodes of a graph correspond to partial problem-solution states, and arcs correspond to steps (actions) in the problem-solving process.
Key concepts in search
• Set of states that we can be in
  – Including an initial state…
  – … and goal states (equivalently, a goal test)
• For every state, a set of actions that we can take
  – Each action results in a new state
  – Given a state, the successor function produces all states that can be reached from it
Key concepts in search
• Cost function that determines the cost of each action (or path = sequence of actions)
• Solution: a path from the initial state to a goal state
  – Optimal solution: a solution with minimal cost
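
These ingredients can be captured in a small Python interface; this is a hypothetical sketch (all names here are illustrative), not a prescribed implementation.

from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class SearchProblem:
    """A sketch of a search problem: states, actions, transitions, goal test, costs."""
    initial_state: Any
    is_goal: Callable[[Any], bool]              # goal test
    actions: Callable[[Any], Iterable[Any]]     # actions available in a state
    result: Callable[[Any, Any], Any]           # state reached by taking an action
    step_cost: Callable[[Any, Any], float] = lambda state, action: 1.0

def path_cost(problem, state, action_sequence):
    """Cost of a path = sum of the step costs of its actions."""
    total = 0.0
    for action in action_sequence:
        total += problem.step_cost(state, action)
        state = problem.result(state, action)
    return total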
(Figure: part of the search tree for the travelling-salesperson example, with accumulated costs.)
  ( NewYork ), cost 0
    ( NewYork, Boston ), cost 250
      ( NewYork, Boston, Miami ), cost 250 + 1450 = 1700
    ( NewYork, Miami ), cost 1200
    ( NewYork, Dallas ), cost 1500
    ( NewYork, Frisco ), cost 2900
      ( NewYork, Frisco, Miami ), cost 2900 + 3300 = 6200

Keep track of accumulated costs in each state if you want to be sure to get the best path.
Example: Route Finding
• Initial state
  – City the journey starts in
• Operators
  – Driving from city to city
• Goal test
  – Is the current location the destination city?

(Figure: a map of cities — Liverpool, Leeds, Manchester, Nottingham, Birmingham, London.)
State space representation (salesman)
• State:
  – the list of cities that are already visited
  – Ex.: ( NewYork, Boston )
• Initial state:
  – Ex.: ( NewYork )
• Rules:
  – add one city that is not yet in the list
  – add the first city again once the list already has 5 members
• Goal criterion:
  – the first and last city are equal
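
A minimal sketch of this representation in Python (illustrative only; the concrete city list and the function names are assumptions):

# Sketch of the salesperson state space: a state is the list of cities visited so far.
CITIES = ['NewYork', 'Boston', 'Miami', 'Dallas', 'Frisco']   # names taken from the slides

def successors(state):
    """Rules: add an unvisited city, or return to the first city once all 5 are visited."""
    if len(state) == len(CITIES):
        return [state + [state[0]]]                  # close the tour
    return [state + [c] for c in CITIES if c not in state]

def is_goal(state):
    """Goal criterion: the first and last city are equal (a complete tour)."""
    return len(state) == len(CITIES) + 1 and state[0] == state[-1]

print(successors(['NewYork', 'Boston']))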
Example: The 8-puzzle
• states? locations of the tiles
• actions? move the blank left, right, up, down
• goal? the goal state (given)
• path cost? 1 per move
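
A minimal sketch of this formulation (illustration only; the tuple encoding with 0 marking the blank is an assumption):

# Sketch of the 8-puzzle: a state is a tuple of 9 entries, 0 marking the blank.
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)        # goal board from the slides (blank in the centre)

def actions(state):
    """Blank moves: left, right, up, down (when legal), encoded as index offsets."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(-1)    # left
    if col < 2: moves.append(+1)    # right
    if row > 0: moves.append(-3)    # up
    if row < 2: moves.append(+3)    # down
    return moves

def result(state, move):
    """Swap the blank with the neighbouring tile; each move has path cost 1."""
    i = state.index(0)
    j = i + move
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def is_goal(state):
    return state == GOAL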
Example: robotic assembly
• states?: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled
• actions?: continuous motions of the robot joints
• goal test?: complete assembly
• path cost?: time to execute

(Figure: the 8-puzzle goal state and a portion of its state space — boards linked to the boards reachable in one move.)
Example: Chess
• Problem: develop a program that plays chess
1. A way to represent board situations
   Ex.: as a list:
   (( king_black, 8, C),
    ( knight_black, 7, B),
    ( pawn_black, 7, G),
    ( pawn_black, 5, F),
    ( pawn_white, 2, H),
    ( king_white, 1, E))

(Figure: the corresponding chessboard, ranks 1–8 and files A–H.)
Chess search tree
• With roughly 15 legal moves per position: ~15 nodes after move 1, ~15² after move 2, ~15³ after move 3, and so on.
• Need very efficient search techniques to find good paths in such combinatorial trees.
Independence of states
Ex.: the blocks world problem.
• Initially: C is on A, and B is on the table.
• Rules: move any free block onto another block or onto the table.
• Goal: A is on B and B is on C.

AND-OR tree:
(Figure: initial configuration — block C on block A, block B on the table.)
  Goal: A on B and B on C
    AND
  Goal: A on B        Goal: B on C
Search in State Spaces
• Effects of moving a block (illustration and list-structure iconic model notation)
Avoiding Repeated States
• In increasing order of effectiveness in reducing
size of state space and with increasing
computational costs:
1. Do not return to the state you just came from.
2. Do not create paths with cycles in them.
3. Do not generate any state that was ever
created before.
• Net effect depends on frequency of “loops” in
state space.
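
A minimal sketch of strategy 3 (never regenerate a previously created state), keeping a set of all states ever generated; the function names are hypothetical.

from collections import deque

def search_no_repeats(start, successors, is_goal):
    frontier = deque([start])
    seen = {start}                       # every state ever generated
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:          # strategies 1/2 would only check the parent / current path
                seen.add(nxt)
                frontier.append(nxt)
    return None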
Forward versus backward reasoning
(Figure: initial states on one side, goal states on the other.)
• Forward reasoning (or data-driven): from initial states to goal states.
• Backward reasoning (or backward chaining / goal-driven): from goal states to initial states.
Data-Driven Search
• It is also called forward chaining.
• The problem solver begins with the given facts and a set of legal moves or rules for changing state, and applies them to arrive at the goal.
Goal-Driven Search
• Take the goal that we want to solve and see what rules or legal moves could be used to generate this goal.
• So we move backward.
Search Implementation
• In both directions of search, we must find a path from the start state to a goal.
• We use goal-driven search if
  – The goal is given in the problem
  – There are a large number of rules
  – Problem data are not given
Search Implementation
• Data-driven search is used if
  – All or most of the data are given
  – There are a large number of potential goals
  – It is difficult to form a goal
Criteria
• Sometimes the two search directions are equivalent, e.g. these 8-puzzle situations:

  1 3 8      1 2 3
  2 7        8 4
  5 4 6      7 6 5

  In this case, even the same rules apply!
• Sometimes there is no way to start from the goal states
  – because there are too many (Ex.: chess)
  – because you can't (easily) formulate the rules in both directions.
General Search Considerations
• Given the initial state, operators, and goal test
  – Can you give the agent additional information?
• Uninformed search strategies
  – Have no additional information
• Informed search strategies
  – Use problem-specific information
  – Heuristic measure (a guess at how far we are from the goal)
Classical Search Strategies
• Breadth-first search
• Depth-first search
• Bidirectional search
• Depth-bounded depth-first search
  – like depth-first, but set a limit on the depth of search in the tree
• Iterative Deepening search
  – use depth-bounded search, but iteratively increase the limit
Breadth-first search
(Figure: a search tree rooted at S, expanded level by level until goal G is reached.)
• Move downwards, level by level, until the goal is reached.
• It explores the space in a level-by-level fashion.
Breadth-first search
• BFS is complete: if a solution exists, one will be found.
• Expand the shallowest unexpanded node.
• Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
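
A minimal BFS sketch in Python, matching the description above (the fringe is a FIFO queue and new successors go at the end); the function name and the path-based bookkeeping are assumptions, not a prescribed implementation.

from collections import deque

def breadth_first_search(start, successors, is_goal):
    frontier = deque([[start]])               # FIFO queue of paths
    while frontier:
        path = frontier.popleft()             # expand the shallowest unexpanded node
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            frontier.append(path + [nxt])     # successors go at the END of the queue
    return None

On the small A–F graph from earlier, breadth_first_search('A', lambda s: [n for n, _ in graph[s]], lambda s: s == 'F') would return ['A', 'B', 'F'] — the solution with the fewest arcs, not necessarily the cheapest one.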
Analysis of BFS
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.

Is BFS complete? Yes
• If a solution exists at level l, the path to it will be explored before any path of length l + 1.
• It is impossible to fall into an infinite cycle.
• See this in AISpace by loading "Cyclic Graph Examples" or by adding a cycle to "Simple Tree".
Analysis of BFS
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

Is BFS optimal? Yes — when all arcs have the same cost, a shallowest goal is found first.
• E.g., two goal nodes: red boxes.
• Any goal at level l (e.g. red box N7) will be reached before goals at deeper levels.
Analysis of BFS
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of
  – maximum path length m
  – maximum forward branching factor b.

What is BFS's time complexity, in terms of m and b?  O(b^m)
• Like DFS, in the worst case BFS must examine every node in the tree.
• E.g., single goal node → red box.
Analysis of BFS
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of
  – maximum path length m
  – maximum forward branching factor b.

What is BFS's space complexity, in terms of m and b?  O(b^m)
• BFS must keep paths to all the nodes at level m.
Using Breadth-first Search
• When is BFS appropriate?
  – space is not a problem
  – it is necessary to find the solution with the fewest arcs
  – there are some shallow solutions, even though there may be infinite paths
• When is BFS inappropriate?
  – space is limited
  – all solutions tend to be located deep in the tree
  – the branching factor is very large
Depth-First Order
• When a state is examined, all of its children and their descendants are examined before any of its siblings.
• Not complete (it might cycle through non-goal states).
• Depth-first order goes deeper whenever this is possible.
Depth-first search = chronological backtracking
(Figure: a search tree rooted at S, explored branch by branch down to G.)
• Select a child
  – convention: left-to-right
• Repeatedly go to the next child, as long as possible.
• Return to left-over alternatives (higher up) only when needed.
Depth-first search
• Expand the deepest unexpanded node.
• Implementation: the fringe is a LIFO queue (a stack), i.e., successors go at the front.
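
For contrast with BFS, a minimal DFS sketch (the fringe is a LIFO stack and successors go at the front); the optional depth_limit parameter hints at the depth-bounded variant listed earlier. The names and the in-path cycle check are assumptions.

def depth_first_search(start, successors, is_goal, depth_limit=None):
    frontier = [[start]]                       # LIFO stack of paths
    while frontier:
        path = frontier.pop()                  # expand the deepest unexpanded node
        state = path[-1]
        if is_goal(state):
            return path
        if depth_limit is not None and len(path) - 1 >= depth_limit:
            continue                           # depth-bounded DFS: do not expand below the limit
        for nxt in reversed(list(successors(state))):
            if nxt not in path:                # avoid cycling within the current path
                frontier.append(path + [nxt])  # pushed so the leftmost child is expanded first
    return None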
Analysis of DFS
• Is DFS complete?
• Is DFS optimal?
• What is the time complexity, if the maximum path length is m and the maximum branching factor is b?
• What is the space complexity?

We will look at the answers in AISpace (but see the next few slides for a summary).
Analysis of DFS
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.

Is DFS complete? No
• If there are cycles in the graph, DFS may get "stuck" in one of them.
• See this in AISpace by loading "Cyclic Graph Examples" or by adding a cycle to "Simple Tree" (e.g., click on the "Create" tab, create a new edge from N7 to N1, go back to "Solve" and see what happens).
Analysis of DFS
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one (e.g., the shortest).

Is DFS optimal? No
• It can "stumble" onto longer solution paths before it gets to shorter ones.
• E.g., goal nodes: red boxes.
• See this in AISpace by loading "Extended Tree Graph" and setting N6 as a goal (e.g., click on the "Create" tab, right-click on N6 and select "set as a goal node").
Analysis of DFS
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of
  – maximum path length m
  – maximum forward branching factor b.

What is DFS's time complexity, in terms of m and b?  O(b^m)
• In the worst case, DFS must examine every node in the tree.
• E.g., single goal node → red box.
Analysis of DFS
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of
  – maximum path length m
  – maximum forward branching factor b.

What is DFS's space complexity, in terms of m and b?  O(bm)
• For every node on the path currently being explored, DFS keeps the paths to its unexplored siblings in the search tree.
• These are the alternative paths that DFS may still need to explore.
• The longest possible path is m, with a maximum of b − 1 alternative paths per node.
(See how this works in AISpace.)
Analysis of DFS (cont.)
DFS is appropriate when
• space is restricted
• there are many solutions, perhaps with long paths
It is a poor method when
• there are cycles in the graph
• there are sparse solutions at shallow depth
• there is heuristic knowledge indicating when one path is better than another
The example node set
  Initial state: A
  Level 1: B C D E F
  Level 2: G H I J K L M N O P
  Level 3: Q R S T U V W X Y Z
  Goal state: L

Breadth-first search of the example
• The search begins with the initial state, the node labeled A, which is expanded and removed from the queue; its (unexpanded) children B–F are added to the END of the queue.
• The search then moves to the first node in the queue: B is expanded and removed, and its newly revealed children are added to the end of the queue; the process continues with C, and so on, level by level.
• Node L is located and the search returns a solution.
Aside: Internet Search
• Typically, human search will be "incomplete".
• E.g., finding information on the internet before Google, etc.:
  – look at a few web pages,
  – if no success, then give up.
Example
• Determine whether data-driven or goal-driven, and depth-first or breadth-first, would be preferable for solving each of the following:
  – Diagnosing mechanical problems in an automobile (goal-driven, depth-first)
  – You have met a person who claims to be your distant cousin, with a common ancestor named John. You would like to verify her claim (goal-driven, breadth-first)
  – A theorem prover for plane geometry (data-driven, depth-first)
Example
• A program for examining sonar readings and interpreting them (data-driven; either depth-first or breadth-first can work)
• An expert system that will help a human classify plants by species, genus, etc. (data-driven, depth-first)
Any path, versus shortest path, versus best path:
Ex.: Traveling salesperson problem.

(Figure: five cities — NewYork, Boston, Miami, Dallas, SanFrancisco — with pairwise distances, e.g. NewYork–Boston 250, NewYork–Miami 1200, NewYork–Dallas 1500, NewYork–SanFrancisco 2900, Boston–Miami 1450, Boston–SanFrancisco 3000, Miami–Dallas 1600, Miami–SanFrancisco 3300, ...)

• Find a sequence of cities A, B, C, D, E, A such that the total distance is MINIMAL.
Bi-directional search
• IF you are able to EXPLICITLY describe the GOAL state, AND you have rules for BOTH FORWARD reasoning AND BACKWARD reasoning:
• Compute the tree both from the start node and from a goal node, until the two meet.

(Figure: two search trees, one growing from Start and one from Goal, meeting in the middle.)
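
A minimal bidirectional-search sketch (illustrative; it assumes the same neighbours function can be applied forwards and backwards, and it only returns the state where the two frontiers meet):

from collections import deque

def bidirectional_search(start, goal, neighbours):
    """Grow one frontier from the start and one from the goal; stop when they meet."""
    if start == goal:
        return start
    seen_start, seen_goal = {start}, {goal}
    q_start, q_goal = deque([start]), deque([goal])
    while q_start and q_goal:
        for _ in range(len(q_start)):            # expand one full level of the current side
            state = q_start.popleft()
            for nxt in neighbours(state):
                if nxt in seen_goal:
                    return nxt                   # the two search trees meet at this state
                if nxt not in seen_start:
                    seen_start.add(nxt)
                    q_start.append(nxt)
        # alternate: the next iteration expands the other side
        q_start, q_goal = q_goal, q_start
        seen_start, seen_goal = seen_goal, seen_start
    return None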
Example Search Problem
• A genetics professor
  – Wants to name her new baby boy
  – Using only the letters D, N & A
• Search through possible strings (states)
  – D, DN, DNNA, NA, AND, DNAN, etc.
  – 3 operators: add D, N or A onto the end of the string
  – Initial state is an empty string
• Goal test
  – Look up the state in a book of boys' names, e.g. DAN
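
A minimal sketch of this search (illustrative): states are strings over D, N, A, the initial state is the empty string, and the goal test looks the string up in a stand-in "book" that contains only the slide's example name.

from collections import deque

NAMES = {'DAN'}                         # stand-in for the book of boys' names

def name_search():
    frontier = deque([''])              # initial state: the empty string
    while frontier:
        state = frontier.popleft()      # BFS: shorter names are tried first
        if state in NAMES:              # goal test: look the string up in the book
            return state
        for letter in 'DNA':            # three operators: append D, N or A
            frontier.append(state + letter)

print(name_search())                    # 'DAN'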
G(n) = the cost of each move, given as the distance between towns.
H(n) = the straight-line distance between any town and town M.

(Figure: a road map of towns A through M, with the move cost labelled on each road.)

Straight-line distances to M:
  A 45   E 32   I 12   M 0
  B 20   F 23   J 5
  C 34   G 15   K 40
  D 25   H 10   L 20
• Consider the following search problem. Assume a state is represented as an integer, that the initial state is the number 1, and that the two successors of a state n are the states 2n and 2n+1. For example, the successors of 1 are 2 and 3, the successors of 2 are 4 and 5, the successors of 3 are 6 and 7, etc. Assume the goal state is the number 12. Consider the following heuristics for evaluating the state n, where the goal state is g:
• h1(n) = |n − g|, and h2(n) = (g − n) if n ≤ g, h2(n) = ∞ if n > g
• Show the search trees generated for each of the following strategies for the initial state 1 and the goal state 12, numbering the nodes in the order expanded:
  a) Depth-first search   b) Breadth-first search   c) Best-first search with heuristic h1   d) A* with heuristic (h1 + h2)
• If any of these strategies gets lost and never finds the goal, show the first few steps and say "FAILS".
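
A minimal sketch of the exercise's state space and heuristics (the ≤ / ∞ reading of h2 follows the corrected definition above):

import math

G = 12                        # goal state from the exercise

def successors(n):
    """Successors of a state n are 2n and 2n+1."""
    return [2 * n, 2 * n + 1]

def h1(n):
    return abs(n - G)

def h2(n):
    return (G - n) if n <= G else math.inf

print(successors(1))     # [2, 3]
print(h1(7), h2(7))      # 5 5
print(h1(13), h2(13))    # 1 inf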
