General Search

 Definition 1 – Search is an algorithm that discovers or locates a path to the solution.

 Definition 2 – Search is an algorithm that takes a problem as input and returns a solution from the search space.

 The search space is the set of all possible solutions.

 A solution is a state in which all requirements are fulfilled. Such a state is called a Goal State.

 A state space [S, A, I, G] is a collection of states, arcs between them, a non-empty set of initial states, and a set of goal states.
Control Strategies
 Control strategies guide how to reach the goal state, i.e. which way to follow in order to find a solution.
 Types Of Search Control Strategies:-
1. Forward Search –
 The search proceeds from the initial state to the goal state.
 The methods are called Data Directed.
 E.g.:- Searching for a city on a map.

2. Backward Search –
 The search proceeds backward from the goal state to the initial state.
 The methods are called Goal Directed.
3. Systematic Search –
 Here no information about the domain is available. The search can only distinguish between goal and non-goal states.
 Used when search space is small.
 E.g.:- BFS, DFS are two methods that use this strategy.

4. Heuristic Search –
 Many searches depend on knowledge of the problem domain.
 They have some measure of relative merit to guide the search.
 Searches guided in this way are called heuristic searches, and the guiding measures are called heuristics.
Parameters For Search Evaluation
1. Completeness –
The algorithm is said to be complete if it is guaranteed to find a
solution.

2. Optimality/ Admissibility –
A search solution is said to be optimal if it finds the best (e.g. least-cost) solution.

3. Time Complexity –
Worst-case or average-case time required to execute the algorithm.

4. Space Complexity –
Maximum memory space required to execute the algorithm.

 Time and space complexity are measured in terms of:
 b: maximum branching factor of the search tree
 d: depth of the optimal solution
 m: maximum length of any path in the state space (may be infinite)
Types of Search Methods
 Generally, search algorithms are classified into two types:-
1. Uninformed Search -
 Also called blind, exhaustive or brute-force search.
 Uses no information about the problem to guide the search, and therefore may not be very efficient.

2. Informed Search -
 Also called heuristic or intelligent search.
 Uses information about the problem to guide the search, usually an estimate of the distance to a goal state, and is therefore more efficient.
 But such guidance may not always be available.
Uninformed Search
 They have no information about the number of steps or the path cost required to reach the goal state from the current state.

 These methods are used when there is no prior knowledge about the problem domain.

 The only available knowledge is the problem definition.

 Following are the uninformed search methods/algorithms:-


1. Breadth First Search
2. Uniform Cost Search
3. Depth First Search
4. Depth Limited Search
5. Iterative Deepening Depth First Search
6. Bidirectional Search
Breadth First Search
 A search strategy in which the highest layer of a decision tree is searched completely before proceeding to the next layer is called Breadth-First Search (BFS).

 In this strategy, no viable solution is omitted, and therefore it guarantees that an optimal solution is found (when all step costs are equal).

 This strategy is often not feasible when the search space is large.

 Uses a Queue (FIFO) data structure for its implementation.

Properties Of BFS
• Completeness : Yes. The solution is reached if it exists.
• Optimality : Yes. Finds the shortest path (fewest steps).
• Time Complexity : O(b^d)
• Space Complexity : O(b^d)
Every explored node is kept in memory.
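The strategy above can be sketched in Python. The example graph is a hypothetical one, added for illustration:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand the shallowest frontier node first.

    `neighbors` maps a state to its successor states.
    Returns the path from start to goal, or None if no path exists.
    """
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['E'], 'E': []}
print(bfs('A', 'E', graph))   # ['A', 'C', 'E']
```

Because the FIFO queue finishes every level before starting the next, the first path that reaches the goal is a shortest one (in number of steps).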
Uniform Cost Search
• It is an extension of BFS that expands the next node with the lowest path cost.

• The main purpose of this method is to reduce the total cost of path traversal.

• A Priority Queue is used for implementation.

• It is a variant of Dijkstra's algorithm.


Properties of Uniform Cost Search
• Completeness : Yes. The solution is reached if it exists.
• Optimality : Yes. Returns the least cost path.
• Time Complexity : O(b^d)
• Space Complexity : O(b^d)
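A minimal sketch of the method, assuming non-negative step costs as in Dijkstra's algorithm; the edge costs below are hypothetical:

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand the frontier node with the lowest path cost g(n).

    `edges[state]` is a list of (neighbor, step_cost) pairs.
    Returns (total_cost, path), or None if the goal is unreachable.
    """
    frontier = [(0, start, [start])]     # priority queue ordered by cost
    best = {}                            # cheapest cost found per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in best and best[state] <= cost:
            continue                     # a cheaper route was already found
        best[state] = cost
        for nxt, step in edges.get(state, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

edges = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search('A', 'C', edges))   # (2, ['A', 'B', 'C'])
```

Note how the direct edge A→C (cost 5) loses to the two-step route of cost 2: the priority queue always pops the cheapest frontier entry first.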
Depth First Search
 Explores one path to the deepest level and then backtracks until it finds a goal state.

 Implemented using a stack (LIFO).

 This strategy does not guarantee that the optimal solution has been found.

 In this strategy, the search may reach a satisfactory solution more rapidly than breadth-first search, an advantage when the search space is large.
Properties Of DFS
• Completeness : No
Fails if the depth of the tree is infinite.

• Optimality : No
It stops at the first goal state it finds, even if another goal state is shallower.

• Time Complexity : O(b^m)
Higher than BFS if there is a solution at a level smaller than the maximum depth of the tree.

• Space Complexity : O(b*m)
Much lower than BFS.

 So the space complexity of DFS is lower, but the time complexity of DFS can be higher, as compared to BFS.
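The same search with a LIFO stack instead of a FIFO queue gives DFS; the example graph is again hypothetical:

```python
def dfs(start, goal, neighbors):
    """Depth-first search using an explicit stack (LIFO).

    Returns *a* path to the goal, not necessarily the shortest one.
    """
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()               # LIFO: the deepest path comes off first
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors.get(state, []):
            if nxt not in visited:
                stack.append(path + [nxt])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': ['E'], 'E': []}
print(dfs('A', 'E', graph))   # a path ending in 'E', not necessarily shortest
```

The only structural difference from the BFS sketch is `stack.pop()` in place of a queue's `popleft()`, which is exactly why the two strategies trade time for space so differently.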
DFS
Advantages:-
1. Simple to implement.
2. Needs relatively little memory space.

Disadvantages:-
1. Cannot find a solution in all cases (Not Complete).
2. Not guaranteed to provide an optimal solution, and may take a long time to find one (Not Optimal).
BFS
Advantages:-
1. Guaranteed to find a solution if one exists (i.e. Complete).

2. Guaranteed to provide an optimal solution (i.e. Optimal).
This is guaranteed by the fact that longer paths are never explored until shorter ones have already been examined.

Disadvantages:-
1. Requires more memory space as compared to DFS.
Depth Limited Search(DLS)
 In order to overcome the infinite-depth drawback of DFS, a limit on the depth of the search can be set.

 The basic idea is to not allow expansion of the tree beyond the given depth.

 Not Complete – The solution may not be found in all cases. It is complete when the depth limit is greater than or equal to the solution's depth.

 Not Optimal.

 Time Complexity : O(b^l), where l is the depth limit.

 Space Complexity : O(b*l) [same as DFS].


Iterative Deepening Search(IDS)
 It is an enhanced version of Depth Limited Search.

 Sometimes the depth limit prevents DLS from finding a solution, since the solution may lie beyond the prescribed depth limit.

 IDS is used to address such problems.

 If the solution is not found within the given depth limit, the depth limit is incremented by 1 and the search is repeated at the new limit. If the solution is still not found, the depth limit is again incremented by 1, and the process goes on.
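The incrementing loop described above can be sketched directly; `max_depth` is an assumed safety bound, and the graph is a hypothetical example:

```python
def iterative_deepening_search(start, goal, neighbors, max_depth=50):
    """Run depth-limited DFS with limit 0, 1, 2, ... until the goal is found."""
    def dls(state, limit):
        if state == goal:
            return [state]
        if limit == 0:
            return None
        for nxt in neighbors.get(state, []):
            result = dls(nxt, limit - 1)
            if result is not None:
                return [state] + result
        return None

    # The tree is regenerated from scratch on every round; this repeated
    # work is the limitation noted in the slides.
    for limit in range(max_depth + 1):
        result = dls(start, limit)
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(iterative_deepening_search('A', 'D', graph))   # ['A', 'B', 'D']
```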
Iterative Deepening Search(IDS)
 Complete : Yes
 Optimal : Yes
 Time Complexity : O(b^d)
 Space Complexity : O(b*d)

 Limitation :- Regeneration of the tree after each depth limit. All previous search results are discarded and the search starts from scratch.

 The IDS method is preferred when the search space is large and the depth of the solution is not known.
Bidirectional Search
 The search is carried out both ways, i.e. in the forward as well as the backward direction.

 The forward search is started from the initial state and the backward search is started from the goal state.

 At some point the two searches may intersect. If they intersect, a solution exists; otherwise there is no solution.

 This is like BFS carried out from both ends.

 If a problem has depth 4 and search takes place from both directions, then in the worst case the two searches meet at depth 2.

 Time and Space Complexity : O(b^(d/2))

 Complete and Optimal.
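A sketch of the two meeting frontiers, assuming an undirected graph so the backward search can reuse the same `neighbors` map; it reports only whether a path exists:

```python
def bidirectional_search(start, goal, neighbors):
    """Grow a BFS layer from each end in turn until the frontiers meet."""
    if start == goal:
        return True
    front, back = {start}, {goal}
    seen_f, seen_b = {start}, {goal}
    while front and back:
        next_layer = set()
        for state in front:
            for nxt in neighbors.get(state, []):
                if nxt in seen_b:        # frontiers intersect: a path exists
                    return True
                if nxt not in seen_f:
                    seen_f.add(nxt)
                    next_layer.add(nxt)
        # Alternate: expand the other direction on the next round.
        front, back = back, next_layer
        seen_f, seen_b = seen_b, seen_f
    return False

graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(bidirectional_search('A', 'D', graph))             # True
print(bidirectional_search('A', 'Z', {'A': [], 'Z': []}))   # False
```

Each side only needs to reach roughly half the solution depth, which is where the O(b^(d/2)) bound comes from.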


Comparing Search Strategies

 b – branching factor  d – depth of optimal solution
 m – maximum depth  l – depth limit
Questions
1. Explain uninformed search and its methods.
2. Differentiate between breadth first search and depth first
search.
3. Discuss about time and space complexities of the
uninformed search techniques.
4. Explain bi-directional search with an example.
5. Explain the following techniques with respect to their
performance measures:-
 Depth First Search.
 Depth Limited Search.
 Iterative Deepening Search.
Informed Search/ Heuristic Search
 Developed to overcome the drawbacks of uninformed search (high time and space complexity).

 Informed search uses information about the domain, or knowledge about the problem, to move towards the goal state.

 These methods do not always find the best solution, but they aim to find a good solution in a reasonable amount of time. (Not optimal)

 These methods are generally used to solve problems that would otherwise require a long time to solve.
 Following are the Informed Search Methods:-
1. Best first search
2. Greedy methods
3. A* search
4. Iterative Deepening A*
5. Heuristics
6. AO* Search
Heuristic Function h(n)
 It is a function that guides the selection of a path while aiming to reach a goal node.

 A heuristic function h(n) provides an estimate of the cost of the path from the given node n to the closest goal node.

 Informed search algorithms use a heuristic function to guide them to the goal state in less time.
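A concrete h(n): the "number of misplaced tiles" heuristic for the 8-puzzle, which also appears later in the Best First Search example. The states below are hypothetical board configurations:

```python
def misplaced_tiles(state, goal):
    """h(n) for the 8-puzzle: count tiles not in their goal position.

    States are tuples of 9 entries; 0 marks the blank, which is not counted.
    Each misplaced tile needs at least one move, so this heuristic never
    overestimates the true cost, i.e. it is admissible.
    """
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(misplaced_tiles(state, goal))   # 2: tiles 7 and 8 are out of place
```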
Search Notations
Best First Search
 Here nodes are expanded and explored one by one.
 An evaluation function f(n) is used to decide which node has to be expanded next.
 The node with the lowest evaluation value is selected for expansion.
 Procedure:-
1. Start with the root node.
2. Select the node having the lowest value of f(n).
3. Repeat the process till you reach the goal node or till all nodes are explored.
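The procedure can be sketched with a priority queue ordered by f(n); the graph and heuristic values below are hypothetical:

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Always expand the frontier node with the lowest evaluation f(n)."""
    frontier = [(f(start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors.get(state, []):
            if nxt not in visited:
                heapq.heappush(frontier, (f(nxt), nxt, path + [nxt]))
    return None

# Hypothetical graph with heuristic estimates h toward goal 'G'
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 1, 'B': 4, 'G': 0}
print(best_first_search('S', 'G', graph, f=h.get))   # ['S', 'A', 'G']
```

With f(n) = h(n), as here, this is exactly the greedy best-first search of the next slide.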
Example of Best First Search
 The heuristic here is "number of tiles not in the correct position".
 A smaller heuristic value means the state is closer to the goal.
Properties
 Not complete – Can lead down an infinite path.

 Not optimal –
The heuristic considers the best option at that particular moment (i.e. the current best, not the future best).

 Time Complexity – O(b^m), where m is the maximum depth.

 Space Complexity – O(b^m)
Greedy Search
• Greedy best-first search tries to expand the node that is
closest to the goal, on the ground that this is likely to lead
to a solution quickly.
• Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n),
• where h(n) is the estimated cost of the cheapest path from node 'n' to a goal node.
• Minimizes the estimated cost to reach the goal. The node that is closest to the goal according to h(n) is always expanded first.
• It optimizes the search locally, but does not always find the global optimum.
• It is not complete (can go down an infinite branch of the
tree).
• It is not optimal.
A* Search
 A* is combination of :
 Uniform cost search
g(n): Exact path cost from start state to node n
 Greedy search
h(n): Heuristic path cost from node n to a goal state
 Evaluation function for A*:
 f(n) = g(n) + h(n)
where g(n) = actual cost from the start to node n
h(n) = estimated cost from node n to the goal
A* Search
 Completeness : A* is complete and guarantees a solution (given finite branching and positive step costs).
 Optimality – A* is optimal if h(n) is an admissible heuristic. An admissible heuristic h(n) never overestimates the cost to reach the goal.
 Time complexity depends on the heuristic function.
 The main issue here is space complexity, since all generated nodes are kept in memory.
 Generally this method is not used for large-scale problems.
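A sketch combining both ingredients, ordering the frontier by f(n) = g(n) + h(n); the edge costs and heuristic values are hypothetical (and the h used here happens to be admissible):

```python
import heapq

def a_star(start, goal, edges, h):
    """A*: expand by f(n) = g(n) + h(n).

    `edges[state]` lists (neighbor, step_cost); `h` maps a state to its
    heuristic estimate. With an admissible h, the returned path is optimal.
    """
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue
        best_g[state] = g
        for nxt, step in edges.get(state, []):
            g2 = g + step
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 5)],
         'B': [('G', 1)], 'G': []}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}.get
print(a_star('S', 'G', edges, h))   # (3, ['S', 'A', 'B', 'G'])
```

Compare with greedy search: g(n) penalizes the expensive direct edges, so A* finds the cheapest route rather than the one that merely looks closest.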
Problem Reduction: AO* (AND-OR Graph)
 When a problem can be divided into a set of sub
problems, where each sub problem can be solved
separately and a combination of these will be
a solution, AND-OR graphs or AND - OR trees are
used for representing the solution.
Example
 For simplicity, it is assumed that every operation (i.e. applying a rule) has unit cost.
 Fig (a) – For node C, f(C) = 3 (least among B, C, D).
 Consider each edge cost = 1.
 But choosing node B is better, because node C is ANDed with node D, so the combined cost is (3 + 1 + 4 + 1 = 9), whereas for B it is 5 + 1 = 6. So choose B.
Example of A* Search

 AO* will always find a minimum-cost solution.
 A* cannot search AND-OR graphs efficiently.
Questions
1. Distinguish between uninformed and informed
search.
2. Write a short note on Informed Search. State and
explain informed search methods.
3. Explain A* algorithm with an example.
4. Explain AO* algorithm with an example.
5. Explain heuristic function.
6. Explain Best First Search with an example. Discuss
about its performance measures.
Local Search Algorithms and
Optimization Problems
 The informed and uninformed search methods studied so far concentrate on the path through which the goal is reached.

 But if the problem does not demand the path to the solution and expects only the final configuration of the solution, then we have a different type of problem to solve.
 E.g.:- the 8-queens problem, IC design, job-shop scheduling, etc. are some problems that do not concentrate on the path.
Local Search Algorithms
 Operate using a single current state rather than multiple paths.

 They generally move to neighbors of the current state.

 No requirement of maintaining paths in memory.

 They are used for solving pure optimization problems.

 In pure optimization problems, the main aim is to find the best state according to a required objective function.

 Advantages:-
1. They use very little (constant) memory.
2. They can find reasonable solutions even in infinite state spaces.
Hill Climbing Search
 This algorithm generally moves in the direction of increasing value - that is, uphill.

 The basic idea is to always select a state that is better than the current state, i.e. it always moves to a neighbor with a better score.

 It terminates when it reaches a "peak" where no neighbor has a higher value/score.

 This algorithm only looks at the immediate neighbors of the current state.

 It is a form of greedy local search, since it only considers the immediate neighbors of the current state.
 It does not maintain a search tree; rather it stores only the current node's data, i.e. the state and its objective function value.

 Since it keeps no history, it cannot recover from failures of its strategy.

 This method works well in small settings of a specific environment.

 This strategy works well in general, but sometimes it may not be appropriate in real-life scenarios due to the shape of the entire space. Basically, the heuristic helps in deciding the direction of search.
Hill Climbing Procedure
1. Start from the initial node.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
Hill Climbing Algorithm
1. Let IS = Initial State, GS = Goal State.
Check if IS = GS. If yes, quit.
Otherwise make IS the current state (CS).
2. Continue till the solution is found:
a) Apply an operator function on CS so that a new state (NS) is produced.
b) Check if NS = GS. If yes, then quit.
Otherwise, if NS is better than CS, make CS = NS.
c) If NS is not better than CS, then continue the loop.

Note:- To find whether a state is better or not, an evaluation function f(n), based on a heuristic, is used.
 This method is a heuristic-based search method.
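The procedure above can be sketched as follows; the one-dimensional landscape with a single peak is a hypothetical example:

```python
def hill_climbing(start, neighbors, score):
    """Move to the best-scoring neighbor until no neighbor improves.

    Returns a local maximum of `score`, which may not be the global one.
    """
    current = start
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):   # peak reached: no neighbor is better
            return current
        current = best

# Hypothetical 1-D landscape: maximize f(x), which peaks at x = 3
f = lambda x: -(x - 3) ** 2 + 9
step = lambda x: [x - 1, x + 1]            # immediate neighbors only
print(hill_climbing(0, step, f))           # 3
```

On a landscape with several peaks the same loop would stop at whichever local maximum is nearest the start, which is exactly the local-maxima problem discussed next.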
Problems with Hill Climbing
1) Local Maxima – Can't see a higher peak.
 This is a state better than its local region or neighboring states, but not the global maximum.
 This occurs when a better solution exists but is not in the vicinity of (near) the current state.
Solution – Backtrack to some earlier node and try to move in some other direction.

2) Plateau – A flat area of the search space where all neighboring states have the same value. The algorithm fails to determine the best direction in which to move.
Solution – A big jump has to be taken in some direction.
Local Beam Search
 Beam search is a heuristic search algorithm.
 This algorithm expands the most promising (good) nodes from a limited set.
 There can be more than one good node selected for expansion, say k nodes.

 Algorithm:-
1. Maintain k best states instead of a single state.
2. The search begins with k randomly generated states.
3. At each iteration, all possible successors of the k current states are generated.
4. If a goal state is found, halt; else select the k best of the successors and repeat.
 Example:- Find the best student in the country.
1. Select the k states of the country with the best results.
2. Select the k cities with the best results.
3. Select the k colleges with the best results.
4. Select the k students who scored maximum marks.
5. Select the one student among them with the highest marks.

 k is the number of best nodes expanded at each level, so k is the width of the beam.
 Hill climbing is a special case of local beam search where k = 1.
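A minimal sketch of the algorithm; the one-dimensional state space and the random starting states are hypothetical:

```python
import random

def local_beam_search(k, random_state, neighbors, score, is_goal, iters=100):
    """Keep the k best states at each step instead of a single one.

    `random_state()` draws an initial state, `neighbors(s)` lists
    successors, and `score` ranks states (higher is better).
    """
    states = [random_state() for _ in range(k)]
    for _ in range(iters):
        for s in states:
            if is_goal(s):
                return s
        successors = [n for s in states for n in neighbors(s)]
        if not successors:
            break
        # Select the k best successors across *all* current states,
        # so useful states share the beam regardless of origin.
        states = sorted(successors, key=score, reverse=True)[:k]
    return max(states, key=score)

random.seed(0)
f = lambda x: -abs(x - 7)                  # best state is x = 7
result = local_beam_search(
    k=3,
    random_state=lambda: random.randint(0, 20),
    neighbors=lambda x: [x - 1, x + 1],
    score=f,
    is_goal=lambda x: x == 7)
print(result)   # 7
```

With k = 1 this degenerates to the hill-climbing loop of the previous slides.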
Adversarial Search Problems (Games)
 Searches in game playing are different.
 A game can provide either perfect or imperfect information.
 Games generally have a multi-agent environment.
 The environment can be cooperative or competitive.

Following are the adversarial search methods:-


1. Minimax Algorithm
2. Alpha Beta Pruning.
Game Terminologies
 A game is a sort of conflict in which n individuals or groups participate.
 Rules – The conditions under which the game is played.
 Strategy – The list of optimal choices for each player at every stage of the game.
 Move – The way in which the game progresses from one stage to another.
 Payoff/Outcome/Utility – A payoff is a value associated with each player's final situation. (Refers to what happens at the end of the game.)
Types Of Games
1) Based on Information –
a) Games of perfect information -
Games in which all moves of every player are known to everyone.
E.g.:- Chess, tic-tac-toe.
b) Games of imperfect information -
Here all moves are not known to everyone.

2) N-person games:-
These involve more than two players.

3) Zero-sum games:-
Games in which the payoffs of all players sum to zero.
E.g.: In chess, one person wins (payoff +1) and the other loses (payoff -1), so the sum of both players' payoffs is 0.
4) Non-zero-sum games:-
These are games whose payoffs sum to a non-zero value.
a) Negative-sum games (Competitive)
Here nobody really wins; rather, everybody loses.
E.g.:- A war or a strike.
b) Positive-sum games (Co-operative)
Here all players have one goal that they contribute to together.
E.g.:- Educational games, building blocks, etc.
Min-Max Algorithm
 It can be used for games having 2 players.
 E.g.:- Chess, tic-tac-toe, etc. (logic games).
 The algorithm is effective for games which have few logically possible state transitions from the current state.
 There are two views - the MAX view and the MIN view.
 Nodes that belong to MAX are given the maximum value of their children.
 Nodes that belong to MIN are given the minimum value of their children.
 In short, MAX tries to move to the state of maximum value and MIN tries to move to the state of minimum value.
An optimal procedure: The Min-Max method
Designed to find the optimal strategy for Max and find the best move:

1. Generate the whole game tree, down to the leaves.

2. Apply the utility (payoff) function to each leaf.

3. Back up values from the leaves through the branch nodes:
a Max node computes the Max of its child values
a Min node computes the Min of its child values

4. At the root: choose the move leading to the child of highest value.
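The back-up step can be sketched recursively; the tiny two-ply game tree and its leaf payoffs are hypothetical:

```python
def minimax(node, is_max, children, utility):
    """Back up values from the leaves: a Max node takes the max of its
    children, a Min node takes the min."""
    kids = children(node)
    if not kids:                         # leaf: apply the payoff function
        return utility(node)
    values = [minimax(k, not is_max, children, utility) for k in kids]
    return max(values) if is_max else min(values)

# Hypothetical game tree: Max moves at the root, Min replies.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaves = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
value = minimax('root', True, lambda n: tree.get(n, []), leaves.get)
print(value)   # 3: Min yields 3 on the left and 2 on the right; Max picks 3
```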


Example (1)
Consider + as MAX (player 1) and – as MIN (player 2). MAX will always select the maximum value of its children, whereas MIN will select the minimum value of its children.
Example (2)
Example (3): Evaluation of the Tic-tac-toe problem.
Properties Of Min-Max Algorithm
 Complete – Yes.
 Optimal – Yes.
 Time Complexity – O(b^m).
 Space Complexity – O(bm)

 Note :- Inefficient for games with a huge search space, since it will take a long time to compute.
Alpha Beta Pruning
 The problem with the Min-Max algorithm is that the number of game states it has to examine is exponential in the number of moves.

 Alpha-beta pruning provides the solution without looking at every node in the game tree.

 Pruning refers to the elimination of nodes found to be unnecessary during search and evaluation.
Alpha Cut-Off –
 Alpha is the value of the maximum/best (i.e. highest-value) choice found so far. If any value is worse than alpha, MAX will avoid it.
 For e.g.:- If one branch node P has value 15 and another branch node Q has value 10, and we choose the maximum value, then prune the branch at node Q, since we keep the highest value here, i.e. 15.
Beta Cut-Off –
 Beta is the value of the minimum/worst (i.e. lowest-value) choice found so far.
 For e.g.:- If one branch node P has value 20 and another branch node Q has value 25, then prune the branch at node Q, since 25 > 20 and we keep the lowest value here.
Example

Alpha is a lower bound (>=) and beta is an upper bound (<=) on the value of the node.
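The cut-offs can be sketched by threading alpha and beta through the minimax recursion; the game tree below is a hypothetical example:

```python
def alphabeta(node, is_max, children, utility,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with pruning: stop exploring a branch once it can no
    longer influence the decision (alpha >= beta)."""
    kids = children(node)
    if not kids:                         # leaf: apply the payoff function
        return utility(node)
    if is_max:
        value = float('-inf')
        for k in kids:
            value = max(value, alphabeta(k, False, children, utility,
                                         alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cut-off
                break
        return value
    value = float('inf')
    for k in kids:
        value = min(value, alphabeta(k, True, children, utility,
                                     alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                # alpha cut-off
            break
    return value

tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaves = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(alphabeta('root', True, lambda n: tree.get(n, []), leaves.get))   # 3
```

In this run the leaf R2 is never evaluated: once Min finds 2 on the right (below alpha = 3 from the left), the rest of that branch is pruned, yet the root value matches plain minimax.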


Constraint Satisfaction Problems
 Constraints are a natural medium for people to express problems in many fields.
 Many real problems in AI can be modeled as Constraint Satisfaction Problems and solved through search.
 Example of a constraint:
The sum of the three angles of a triangle is 180 degrees.

 A constraint is a logical relation among variables.

 Constraint satisfaction is the process of finding a solution to a set of constraints.
Variety/ Types Of Constraints
 Unary constraints involve a single variable.
 e.g. SA ≠ green

 Binary constraints involve pairs of variables.
 e.g. SA ≠ WA

 Higher-order constraints involve 3 or more variables.
 e.g. Professors A, B and C cannot be on a committee together.
 These can always be represented by multiple binary constraints.

 Preferences (soft constraints)
 e.g. "red is better than green" can often be represented by a cost for each variable assignment.
Constraint Satisfaction Problems

 CSPs in AI include:-
1. The 8-queens problem - The constraint is that no queen should threaten another. A queen threatens another queen if they are on the same row, column, or diagonal.
2. The map coloring problem
Map Coloring Problem
 Given a map and a number of colors, the problem is to assign colors to the areas of the map such that no adjacent areas have the same color.
 Variables: WA, NT, Q, NSW, V, SA, T
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors.
 E.g. WA ≠ NT
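A backtracking-search sketch for the map coloring CSP; for brevity only a fragment of the Australia map (WA, NT, SA, Q) is modeled:

```python
def solve_map_coloring(regions, adjacent, colors):
    """Backtracking search: assign colors so that no two adjacent
    regions share one."""
    def consistent(region, color, assignment):
        return all(assignment.get(n) != color for n in adjacent[region])

    def backtrack(assignment):
        if len(assignment) == len(regions):
            return assignment
        region = next(r for r in regions if r not in assignment)
        for color in colors:
            if consistent(region, color, assignment):
                assignment[region] = color
                result = backtrack(assignment)
                if result:
                    return result
                del assignment[region]   # undo and try the next color
        return None

    return backtrack({})

# Simplified fragment of the Australia map
regions = ['WA', 'NT', 'SA', 'Q']
adjacent = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
            'SA': ['WA', 'NT', 'Q'], 'Q': ['NT', 'SA']}
solution = solve_map_coloring(regions, adjacent, ['red', 'green', 'blue'])
print(solution)
```

Each assignment is checked against the binary constraints (adjacent regions differ) before the search goes deeper, and undone when it leads to a dead end.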
Solution
Example (2)
Summary Of Informed and
Uninformed Search
