Learning objectives: at the end of the class, you should be able to explain:
•What is a problem?
– Problem types with examples
•Solving a problem
• Solution and optimal solution
• Components of a problem
•Steps in problem solving
•Searching
– Searching algorithm evaluation
– Search strategies
What is a Problem?
• It is a gap between what actually is and what is desired.
– A problem exists when an individual becomes aware of the existence of
an obstacle which makes it difficult to achieve a desired goal or objective.
• Two kinds of problems are commonly addressed when designing intelligent
agents:
– Toy problems: are problems that are useful to test and demonstrate
methodologies and can be used by researchers to compare the
performance of different algorithms.
• may require little or no ingenuity, good for games design
• e.g. 8-puzzle, n-queens, vacuum cleaner world, towers of Hanoi, river
crossing…
– Real-life problems: problems that have much greater
commercial/economic impact if solved.
• Such problems are more difficult and complex to solve, and there is
no single agreed-upon description.
• E.g. route finding, traveling salesperson, etc.
Some more Real-world Problems
• Route finding Problem
• Traveling Salesman Problem (TSP)
• VLSI Layout
• Assembly Sequencing
• Robot Navigation
Route Finding Problem
• Route Finding Problem - shortest path problem
– Defined in terms of locations and transitions along
links between them.
– Applications: automated travel advisory systems,
airline travel planning systems, military operations
planning, general routing in computer networks
Route Finding Algorithm
1. Identify initial state as origin: Initial State
2. Expand to All Possible Locations: Successors
3. Choose Location with smallest cost/fastest route
4. Test Goal Function, is it the destination? Goal Test
5. if yes, return location: Goal State
else, return to 2
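The five steps above can be sketched directly in Python. This is only an illustration; the successors(location) function (returning (next_location, cost) pairs) and the goal_test(location) predicate are assumed helper names, not part of the slides.

```python
def find_route(origin, successors, goal_test):
    """A direct rendering of the five steps above."""
    frontier = [(0, [origin])]                      # step 1: start from the origin (initial state)
    while frontier:
        cost, path = min(frontier)                  # step 3: choose the smallest-cost route so far
        frontier.remove((cost, path))
        location = path[-1]
        if goal_test(location):                     # step 4: is it the destination? (goal test)
            return path, cost                       # step 5: return the goal state and its cost
        for nxt, step_cost in successors(location): # step 2: expand to all possible locations
            if nxt not in path:                     # avoid looping back along the same route
                frontier.append((cost + step_cost, path + [nxt]))
    return None                                     # no route exists
```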
Cont’d
• Route Finding Problem: Entire States - State Space

A simplified road map of part of Romania
Cont’d
• Route Finding Problem:
• A problem can be defined formally by five
components
– Initial State: The initial state that the agent starts in. For
example, the initial state for our agent in Romania might be
described as In(Arad)

– Actions: the set of actions applicable in a given state (its successors).
From state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.
– Goal Test: Test if state is goal(Bucharest).
– Goal State: Bucharest
– Path Cost: the total numeric cost of a path, e.g. the sum of the step costs.
Cont…
• Traveling Salesperson Problem (TSP)
– Visit each city on the map exactly once and returns to
the origin city.
– Needs information about the visited cities
– The aim is to find the shortest possible route
that visits every city exactly once and returns to the
starting point.
– Applications: vehicle routing
Cont…
• VLSI Layout
– Place cells on a chip so they don’t overlap and there is
room for connecting wires to be placed between the
cells
– A VLSI Layout problem requires positioning millions
of components and connections on a chip to minimize
area, minimize circuit delays, minimize stray
capacitances, and maximize manufacturing yield.
Solving a Problem
 Problem Solving is a process of generating solutions from
observed or given data.
- Use direct or indirect (model-based) methods
 A problem is characterized by - a set of goals
- a set of objects and
- a set of operations.
 To build a system to solve a particular problem, we need to:
• Define the problem precisely
- Find the input situations as well as the final situations for an acceptable
solution to the problem
• Analyze the problem
- Find the few important features that may have an impact on the
appropriateness of various possible techniques for solving the
problem.
• Isolate and represent the task knowledge necessary to solve the problem
• Choose the best problem solving technique(s) and apply them to the
particular problem
Solving a problem…
Solution and Optimal Solution
•A solution is a sequence of actions that leads from the initial
state to a goal state.

• There may be many solutions for a problem-
solving agent. The quality of a solution is
measured by its path cost, and the solution with
minimum path cost is called the Optimal Solution.
Solving a problem…
Components of a Problem
o Initial State
- defines where the agent starts or begins its task
o Actions
- defines description of possible actions given a particular
state
o Transition Model
- describes the state that results from applying each action a in a given state.
o Goal Test
- determines whether a given state is a goal state or not
o Path Cost
- A function that assigns a numeric cost to each path. The cost
function in problem-solving agents is their performance measure.
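These five components map naturally onto a small class. A minimal sketch for the Romania route finding example; the class and attribute names are illustrative assumptions, not from the slides.

```python
class RouteFindingProblem:
    """Formal problem definition: initial state, actions, transition model, goal test, path cost."""

    def __init__(self, initial, goal, road_map):
        self.initial = initial        # e.g. 'Arad'
        self.goal = goal              # e.g. 'Bucharest'
        self.road_map = road_map      # {city: {neighbouring city: road distance}}

    def actions(self, state):
        """Actions applicable in a state: the cities reachable by a direct road."""
        return list(self.road_map[state])

    def result(self, state, action):
        """Transition model: driving to a neighbouring city puts the agent in that city."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        """The path cost of a solution is the sum of these step costs along the path."""
        return self.road_map[state][action]
```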
Solving a problem…
• Problem Formulation is the process of deciding what actions
and states to consider, given a goal.
• Define states
– States describe distinguishable stages during the problem-
solving process
– Example- What are the various states in route finding problem?
• The various places including the location of the agent
• Define operators/rules
– Identify the available operators for getting from one state to the
next
– Operators cause an action that brings transitions from one state
to another by applying on a current state
• Construct state space
– Suggest a suitable representation (such as graph, table,… or a
combination of them) to construct the state space
State Space of the Problem
• The state space defines the set of all relevant states reachable by (any)
sequence of actions from the initial state until the goal state is
reached.
• State space (also called search space/problem space) of the problem
includes the various states
– Initial State
• defines where the agent starts or begins its task
– Transition States
• other states in between initial and goal states
– Goal State
• defines the situation the agent attempts to achieve

- A solution consists of the goal state, or a path to the goal state.


• Our aim is building a goal-based intelligent agent
Problem Solving Agent
• A problem-solving agent is a goal-based agent. It
decides what to do by finding a sequence of actions that
leads to desirable states. The agent can adopt a goal and
aim at satisfying it.
• Goal Formulation is the first step in problem solving.

Example: The 8 puzzle problem
Initial state:      Goal state:
1 2 3               1 2 3
4 8 _               4 5 6
7 6 5               7 8 _


Operators: slide blank up, slide blank down, slide blank
left, slide blank right
1 2 3    1 2 3    1 2 3    1 2 3    1 2 3    1 2 3
4 8 _    4 8 5    4 8 5    4 _ 5    4 5 _    4 5 6
7 6 5    7 6 _    7 _ 6    7 8 6    7 8 6    7 8 _

Solution: sb-down, sb-left, sb-up, sb-right, sb-down

Path cost: 5 steps to reach the goal
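The four operators can be written as a small successor function. A minimal sketch, assuming a state is represented as a tuple of nine entries read row by row, with 0 standing for the blank (a representation chosen here for illustration, not slide notation).

```python
def successors(state):
    """Return (operator, new_state) pairs for an 8-puzzle state given as a 9-tuple, 0 = blank."""
    moves = {'sb-up': -3, 'sb-down': 3, 'sb-left': -1, 'sb-right': 1}
    blank = state.index(0)
    result = []
    for name, delta in moves.items():
        target = blank + delta
        if not 0 <= target < 9:                       # stay inside the board (top/bottom edges)
            continue
        if delta in (-1, 1) and target // 3 != blank // 3:
            continue                                  # left/right moves must stay in the same row
        new_state = list(state)
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        result.append((name, tuple(new_state)))
    return result

# Initial state from the slide: 1 2 3 / 4 8 _ / 7 6 5
print(successors((1, 2, 3, 4, 8, 0, 7, 6, 5)))
```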
Exercise: The 8 puzzle problem
• This is the problem of arranging the tiles so that all the tiles are in
the correct positions. You do this by moving a tile or the space up,
down, left, or right, so long as the following conditions are met:
– a) there's no other tile blocking you in the direction of the movement; and
– b) you're not trying to move outside of the boundaries/edges.

• Identify possible states & operators?


• Construct state space?

Initial state:      Goal state:
1 2 3               1 2 3
8 4 5               8 _ 4
7 6 _               7 6 5
Exercise : River Crossing Puzzles
Goat, Wolf and Cabbage Problem
• A farmer returns from the market, where he bought a goat, a
cabbage and a wolf. On the way home he must cross a river. His
boat is small and unable to transport more than one of his
purchases. He cannot leave the goat alone with the cabbage
(because the goat would eat it), nor can he leave the goat alone
with the wolf (because the goat would be eaten). How can the
farmer get everything safely on the other side?
1. Identify the set of possible states and operators

2. Construct the state space of the problem using suitable representation


Steps in Problem Solving
• Goal Formulation
– is a step that specifies exactly what the agent is trying to achieve
– This step narrows down the scope that the agent has to look at
• Problem Formulation
– is a step that puts down the actions and states that the agent has to
consider given a goal (avoiding any redundant states), like:
• the initial state
• the allowable actions etc…
• Search
– is the process of looking for the various sequence of actions that
lead to a goal state, evaluating them and choosing the optimal
sequence.
• Execute
– is the final step, in which the agent executes the chosen sequence of
actions to reach the solution/goal
• Example: Path Finding Problem
Formulate goal:
– be in Bucharest
(Romania)
[Figure: road map of Romania, route from Arad (Initial State) to Bucharest (Goal State)]
• Formulate problem:
– action: drive between
pair of connected
cities (direct road)
– state: be in a city
(20 world states)

• Find solution:
– sequence of cities
leading from start to
goal state, e.g., Arad,
Sibiu, Fagaras,
Bucharest

• Execution
– drive from Arad to
Bucharest according
to the solution
Route Finding Problems
• Basic idea:
– Simulated
exploration of state
space by
generating
successors of
already explored
states (AKA
expanding states)

Sweep out from start (breadth)


…Cont’d
• Basic idea:
– Simulated
exploration of state
space by
generating
successors of
already explored
states (AKA
expanding states)
Go East, young man! (depth)
Search Tree
• The searching process is like building a search tree that is
superimposed over the state space
– A search tree is a representation in which nodes denote paths and
branches connect them. The node with no parent is the root node;
the nodes with no children are called leaf nodes.
Example: Route finding Problem
• Partial search tree for route finding from Sidist Kilo to
Stadium.
(a) The initial state: SidistKilo
(b) After expanding SidistKilo (generating new states): AratKilo, Giorgis, ShiroMeda
(c) After choosing one option and expanding AratKilo: MeskelSquare, Piassa, Megenagna
Uninformed (Blind) Search

Contents to be Covered
• Uninformed Searching Strategies
• Breadth First Search (BFS)
• Depth First Search (DFS)
• Depth Limited Search (DLS)
• Iterative Deepening Search (IDS)
• Uniform Cost Search (UCS)
Breadth First Search (BFS)
• Expand shallowest unexpanded node,
– i.e. expand all nodes on a given level of
the search tree before moving to the
next level
• Implementation: uses Queue(FIFO)
data structure to store the list:
– Expansion: put successors at the end of
queue
– Pop nodes from the front of the queue
• Properties:
– Takes space: keeps every node in
memory
– Optimal (when all step costs are equal) and
complete: guaranteed to find a solution
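A minimal BFS sketch in Python; the successors(state) function (returning neighbouring states) and the goal_test(state) predicate are assumed helpers, not part of the slides.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Expand the shallowest unexpanded node first, using a FIFO queue of paths."""
    frontier = deque([[start]])              # queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()            # pop nodes from the front of the queue
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])   # put successors at the end of the queue
    return None
```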
…Cont’d
• Example of Breadth First Search (BFS):

• Traversed path:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Exercise
• Apply BFS to find an optimal path from Start
Node (S) to Goal Node (G).

• BFS: S->B->C->D->E->G
Depth-First Search (DFS)
• Expand one of the nodes at the deepest
level of the tree.
– Only when the search hits a non-goal dead
end does the search go back and expand
nodes at shallower levels
…Cont’d
• Implementation: treat the list as Stack(LIFO)
– Expansion: push successors at the top of stack
– Pop nodes from the top of the stack
• Properties
– Incomplete and not optimal: fails in infinite-depth
spaces, spaces with loops.
• Modify to avoid repeated states along the path
– Takes less space (Linear): Only needs to remember
up to the depth expanded
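A minimal DFS sketch, using the same assumed successors/goal_test helpers as the BFS sketch; repeated states along the current path are skipped to avoid loops.

```python
def depth_first_search(start, goal_test, successors):
    """Expand the deepest node first, using a LIFO stack of paths."""
    frontier = [[start]]                     # stack of paths
    while frontier:
        path = frontier.pop()                # pop nodes from the top of the stack
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in path:              # avoid repeated states along the current path
                frontier.append(path + [nxt])   # push successors onto the top of the stack
    return None
```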
…Cont’d
• Example of Depth First Search (DFS)

• Traversed path:
S---> A---> B---> D ---> E---> C---> G
Exercise
• Apply DFS to find an optimal path from Start
Node (S) to Goal Node(G).
Depth Limited Search (DLS)
• Depth Limited Search is similar to Depth First Search with a
predetermined limit ℓ.
• Depth-limited search can solve the drawback of the infinite path
in the Depth-first search.
• Nodes at the depth limit are treated as if they have no further
successor nodes.
• Depth-limited search can be terminated with two Conditions of
failure:
• Standard failure value: It indicates that problem does not have
any solution.
• Cutoff failure value: It defines no solution for the problem
within a given depth limit.
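A minimal recursive DLS sketch with the two failure values described above ('cutoff' and None for standard failure); successors and goal_test are assumed helpers.

```python
def depth_limited_search(state, goal_test, successors, limit):
    """DFS that treats nodes at the depth limit as if they had no successors."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'                          # cutoff failure: no solution within the limit
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None  # None = standard failure (no solution at all)
```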
…Cont’d
• Advantages:
– Memory efficient.
• Disadvantages:
– incompleteness.
– may not be optimal if problem has more than one solution.
…Cont’d
• Example of DLS

• Depth limit ℓ = 2
S--->A--->C--->D--->B--->I--->J
Exercise
• Apply DLS to find an optimal path from Start
Node (S) to Goal Node (G) with depth limit ℓ = 2
Iterative Deepening Search (IDS)
•IDS solves the issue of choosing the best depth limit by trying all
possible depth limits:
–Perform depth-first search to a bounded depth d, starting at d = 1 and
increasing it by 1 at each iteration.
Example: for the route finding problem we can take the diameter of the
state space. In our example, at most 9 steps are enough to reach any
node
•This search combines the benefits of DFS and BFS
–DFS is efficient in space, but has no path-length guarantee
–BFS finds min-step path towards the goal, but requires memory space
–IDS performs a sequence of DFS searches with increasing depth-cutoff until
goal is found
[Figure: the search trees generated with depth limits 0, 1 and 2]
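A minimal IDS sketch, reusing the depth_limited_search sketch from the previous section; max_depth is an arbitrary safety bound added only for this illustration.

```python
def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, successors, limit)
        if result != 'cutoff':
            return result        # either a solution path, or None (standard failure)
    return None
```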
…Cont’d
• Advantages:
– It combines the benefits of BFS and DFS search algorithm in
terms of fast search and memory efficiency.
• Disadvantages:
– The main drawback of IDS is that it repeats all the
work of the previous phase.
• Example:
– The following tree structure shows the iterative deepening
depth-first search.
– The IDS algorithm performs successive iterations until it
finds the goal node. The iterations performed by the algorithm
are given as:
…Cont’d
• Example of Iterative Deepening Search (IDS)
o Apply IDS to find the optimal path from
o Initial State = A to Goal State = H

[Figure: search tree with root A; level 1: B, C, D; level 2: E, F, G, H; level 3: I, J, K]

Depth Level   Iteration   Path Returned using IDS
0             1           A
1             2           A B C D
2             3           A B E C F G D H
3             4           A B E F G I H J K
Uniform Cost Search
• The goal of this technique is to find the shortest path to the
goal in terms of cost.
– It modifies BFS by always expanding the least-cost unexpanded
node
• Implementation: nodes in list keep track of total path length
from start to that node
– List kept in priority queue ordered by path cost
[Figure: example graph with start S and goal G; edge costs S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5, C→G = 5. The successive search trees show the frontier after expanding S (A: 1, B: 5, C: 15), then the goal reached via A with cost 11, and finally via B with cost 10.]
• Properties:
– This strategy finds the cheapest solution provided the cost of a path
must never decrease as we go along the path
g(successor(n)) ≥ g(n), for every node n
– Takes space since it keeps every node in memory
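A minimal UCS sketch; successors(state) is assumed to return (next_state, step_cost) pairs, and goal_test is an assumed predicate.

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """Always expand the unexpanded node with the smallest path cost g(n)."""
    frontier = [(0, start, [start])]            # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, cost
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None
```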
…Cont’d
• Example of UCS

o Cost: S--->A: 1, A--->D: 2, D--->G: 3


o S---> A---> D---> G
Exercise:
Apply UCS to find the optimal path from Start Node
(Initial State = S) to Goal Node (Goal State = G)
[Figure: search graph with start S, intermediate nodes A, B, C, D, E and goal G; edge costs from S are 1, 5, 8 (to A, B, C) and the remaining edges carry costs 3, 9, 7, 4, 5]
Exercise:
Apply Uninformed Search Strategies to find optimal
path. Initial State = S, Goal State = G, ℓ = 1
Find the Optimal Path:
1. BFS?
2. DFS?
3. IDS?
4. DLS: ℓ = 2?
5. UCS?

[Figure: the same search graph as above, with start S, intermediate nodes A, B, C, D, E and goal G; edge costs from S are 1, 5, 8 and the remaining edges carry costs 3, 9, 7, 4, 5]
Comparing Uninformed Search Strategies

Strategy                     Complete    Optimal    Time Complexity    Space Complexity
Breadth First Search         Yes         Yes        O(b^d)             O(b^d)
Depth First Search           No          No         O(b^m)             O(bm)
Uniform Cost Search          Yes         Yes        O(b^d)             O(b^d)
Depth Limited Search         If ℓ ≥ d    No         O(b^ℓ)             O(bℓ)
Iterative Deepening Search   Yes         Yes        O(b^d)             O(bd)
Bi-Directional Search        Yes         Yes        O(b^(d/2))         O(b^(d/2))

o Where:
– b is the branching factor,
– d is the depth of the shallowest solution,
– m is the maximum depth of the search tree,
– ℓ is the depth limit
Informed Search
(Heuristic Search)
• Informed Search (Heuristic)
• Best First Search
• Greedy Best First Search
• A* Search
Informed Heuristic Search
• Search efficiency would improve greatly if there is
a way to order the choices so that the most
promising are explored first.
– This requires domain knowledge of the problem (i.e.
heuristic) to undertake focused search
• Define a heuristic function, h(n) that estimates
the goodness of a node n, based on domain
specific information that is computable from the
current state description.
– h(n) is an estimate of how close we are to a goal state
– It is an estimated cost of path from state n to a goal
state.
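For example, in the 8-puzzle a common h(n) is the Manhattan distance of every tile from its goal position; in route finding it is the straight-line distance to the goal city. A small sketch of the former, assuming the 9-tuple state representation used earlier (an assumption, not slide notation):

```python
def manhattan_distance(state, goal):
    """h(n): sum of the horizontal and vertical distances of each tile from its goal position."""
    total = 0
    for tile in range(1, 9):                          # the blank (0) is not counted
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(manhattan_distance((1, 2, 3, 4, 8, 0, 7, 6, 5), goal))   # -> 5
```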
Best First Search
• It is a generic name for the class of informed methods
• The two best first approaches to find the shortest path:
– Greedy Search: minimizes estimated cost to reach a goal
– A* Search: minimizes the total path cost
• When expanding a node n in the search tree, Greedy
Search uses the estimated cost to get from the current
state to the goal state, defined as h(n).
– In route finding problem h(n) is the straight-line distance
f(n) = h(n)
• We also possess the sum of the cost to reach that node
from the start state, defined as g(n).
– In route finding problem; this is the sum of the step costs for the
search path.
• For each node in the search tree, A*-Search uses an
evaluation function, f(n):
f(n) = g(n) + h(n)
Admissibility
• Search algorithms (such as Greedy and A* Search) are
admissible when the heuristic function never
overestimates the actual cost, so that
– the algorithm always terminates in an optimal path from the
initial state to a goal node if one exists.
• Check admissibility of the estimated cost h(n): make sure
that h(n) never overestimates the actual cost h*(n)
– h*(n): the actual cost of the shortest path from n to a goal (not known in advance)
– h(n) is said to be an admissible heuristic function if for all n, h(n) ≤
h*(n)
– The closer the estimated cost is to the actual cost, the fewer extra nodes
will be expanded
– Using an admissible heuristic guarantees that the solution found
by the searching algorithm is optimal
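Admissibility can be checked directly when the true costs h*(n) are known. A tiny sketch with made-up numbers (the dictionaries are illustrative values only, not from the slides):

```python
def is_admissible(h, h_star):
    """h is admissible if it never overestimates the true cost h*(n) for any node n."""
    return all(h[n] <= h_star[n] for n in h_star)

h_star = {'S': 7, 'A': 6, 'B': 2, 'G': 0}                       # true costs to the goal (made up)
print(is_admissible({'S': 5, 'A': 4, 'B': 2, 'G': 0}, h_star))  # True: never overestimates
print(is_admissible({'S': 9, 'A': 4, 'B': 2, 'G': 0}, h_star))  # False: overestimates at S
```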
Greedy Search
• A Best First Search that uses a heuristic function h(n)
alone to guide the search
– Selects node to expand that is closest (hence it’s greedy) to a
goal node
– The algorithm doesn’t take the path cost from the initial node to the
current node into account; it just goes ahead optimistically and
never looks back.

• Implementation:
– expand 1st the node closest to the goal state, i.e. with evaluation
function f(n) = h(n)
– h(n) = 0 if node n is the goal state
– Otherwise h(n) ≥ 0; an estimated cost of the cheapest path from
the state at node n to a goal state
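A minimal greedy best-first sketch: the frontier is ordered by h(n) alone, and g(n) is ignored; successors, goal_test and h are assumed helpers.

```python
import heapq

def greedy_best_first_search(start, goal_test, successors, h):
    """Always expand the node with the smallest heuristic value, f(n) = h(n)."""
    frontier = [(h(start), start, [start])]     # priority queue ordered by h(n)
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```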
Example

[Figure: search graph with initial state S and goal state G; step costs S→A = 1, S→B = 5, S→C = 8 and costs 3, 9, 7, 4, 5 on the edges from A, B, C to D, E, G; heuristic values h(S) = 8, h(A) = 8, h(B) = 4, h(C) = 3, h(G) = 0]
Greedy Search
• Greedy Best-First Search algorithm always selects the path which
appears best at that moment.
• It is the combination of BFS and DFS algorithms.
• It uses the heuristic function to search.
• Choose the most promising node at each step.
• In the best first search algorithm, we expand the node which is
closest to the goal node and the closest cost is estimated by
heuristic function, i.e.

f(n)= h(n).

• Where h(n) = estimated cost from node n to the goal.


• The greedy best-first algorithm is implemented using a priority
queue.
…Cont’d
• Example of Greedy Search
– Consider the search problem below; we will traverse it
using greedy best-first search. At each iteration, each node is
expanded using the evaluation function f(n) = h(n), which is given in
the table below.
A* Search Algorithm
• It considers both estimated cost of getting from n to the goal
node h(n), and cost of getting from initial node to node n, g(n)
• Apply three functions over every node
– g(n): Cost of the path found so far from the initial state to n (path cost)
– h(n): Estimated cost of the shortest path from n to the goal z (heuristic value)
– f(n): Estimated total cost of the shortest path from the start a to the goal z via n
– Evaluation function f(n) = g(n) + h(n)
• Implementation: Expand the node for which the evaluation
function f(n) is lowest
– Rank nodes by f(n), the estimated cost of the path from the start node to the goal node via the
given node
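A minimal A* sketch: the frontier is ordered by f(n) = g(n) + h(n); successors(state) returning (next_state, step_cost) pairs, goal_test and h are assumed helpers, not from the slides.

```python
import heapq

def a_star_search(start, goal_test, successors, h):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]            # (f, g, state, path)
    best_g = {start: 0}                                   # cheapest g(n) found so far per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float('inf')):        # keep only the cheapest path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```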

• Example: Route Finding Problem


A* Search
• A* Search is the most commonly known form of best-first
search. It uses heuristic function h(n), and cost to reach the
node n from the start state g(n).
• It combines features of UCS and greedy best-first search,
which lets it solve the problem efficiently.
• The A* Search algorithm finds the shortest path through the
search space using the heuristic function. This search
algorithm expands a smaller search tree and provides the optimal
result faster.
• A* Algorithm is similar to UCS except that it uses g(n)+h(n)
instead of g(n).
• In the A* Search algorithm, we use the search heuristic as well as the
cost to reach the node. Hence we can combine both costs as
f(n) = g(n) + h(n), and this sum is called the fitness number.
…Cont’d

• NB:
– At each point in the search space, only the node with the lowest
value of f(n) is expanded, and the algorithm
terminates when the goal node is found.
…Cont’d
• Example of A* Search:
– Traverse the given graph using the A* algorithm. The heuristic
value of each state is given in the table below. Calculate f(n) of
each state using the formula f(n) = g(n) + h(n), where g(n) is the
cost to reach that node from the start state.
– Here we will use the OPEN and CLOSED lists.
…Cont’d
Heuristic Search Methods

Algorithm   Time Comp.   Space Comp.   Optimal                                     Complete
Greedy      O(b^m)       O(b^m)        No                                          No
A*          O(b^d)       O(b^d)        Yes (optimal if h(n) never overestimates)   Yes

b is the branching factor; d is the depth of the solution; m is the
maximum depth of the search tree; N is the number of nodes
Exercise: Search Space
• Given the search space below, find the optimal path from Initial
State S to Goal State G using Informed Search Strategies:
a) Best First Search (Greedy)
b) A* Search
Solution: Search Space
• Informed Search (Heuristic Search) Strategies
• Best First Search (Greedy): f(n) = h(n)
1. S: f(n) = 10
2. S => A: f(n) = 12 (Lowest, expand via A)
S => B: f(n) = 14 (ignore)
3. S => A => C: f(n) = 11 (ignore)
S => A => D: f(n) = 4 (Lowest, expand via D)
4. S => A => D => G: f(n) = 1 (Lowest, Reached Goal State)
S => A => D => H: f(n) = 2 (ignore)
• A* Search: f(n) = g(n) + h(n)
1. S: f(n) = 0 + 10 = 10
2. S => A: f(n) = 2 + 12 = 14 (Lowest, expand via A)
S => B: f(n) = 3 + 14 = 17
3. S => A => C: f(n) = (2 + 9) + 11 = 22
S => A => D: f(n) = (2 + 5) + 4 = 11 (Lowest, expand via D)
4. S => A => D => G: f(n) = (2 + 5 + 3) + 1 = 11 (Lowest, Reached
Goal)
Exercise: Search Space
• Given the search space below, find the optimal path from Initial
State S to Goal State G using Informed Search Strategies:
a) Best First Search (Greedy)
b) A* Search

 Solution:
 Greedy: f(n) = h(n)
 A*: f(n) = g(n) + h(n)
Solution: Search Space
• Informed Search (Heuristic Search) Strategies
• Best First Search (Greedy): f(n) = h(n)
1. S: f(n) = 13
2. S => A: f(n) = 12
S => B: f(n) = 4 (Lowest h, expand B)
3. S => B => E: f(n) = 8
S => B => F: f(n) = 2 (Lowest h, expand F)
4. S => B => F => I: f(n) = 9
S => B => F => G: f(n) = 0 (Lowest h, Reached Goal State)
Solution: Search Space
• Informed Search (Heuristic Search) Strategies
• A* Search: f(n) = g(n) + h(n)
1. S: f(n) = 0 + 13 = 13
2. S => A: f(n) = 3 + 12 = 15
S => B: f(n) = 2 + 4 = 6 (Lowest f-cost, expand via B)
3. S => B => E: f(n) = (2 + 3) + 8 = 13
S => B => F: f(n) = (2 + 1) + 2 = 5 (Lowest f-cost, expand via F)
4. S => B => F => I: f(n) = (2 + 1 + 2 + 3) + 9 = 17
S => B => F => G: f(n) = (2 + 1 + 3) + 0 = 6 (Lowest f-cost, Reached Goal)
Questions?
