
Vellore Institute of Technology, Bhopal
Fundamentals in AI & ML
(Department of Computer Science & Engineering)
Ankit Shrivastava
Vellore Institute of Technology (VIT), Bhopal, India

Module-2: Problem Solving Methods

Search Strategies:
A search strategy is an organised structure of key terms
used to search a database. The search strategy
combines the key concepts of your search question in
order to retrieve accurate results.
Search techniques are universal problem-solving
methods.
A search strategy is also called a "search algorithm",
which solves a search problem. Search algorithms
work to retrieve information stored within some data
structure, or calculated in the search space of a
problem domain, with either discrete or continuous
values.
Rational agents or problem-solving agents in AI
mostly use these search strategies or algorithms to
solve a specific problem and provide the best result.
Problem-solving agents are the goal-based agents and
use atomic representation.
The appropriate search algorithm often depends on the
data structure being searched, and may also include
prior knowledge about the data. Search algorithms can
be made faster or more efficient by specially
constructed database structures, such as search
trees, hash maps, and database indexes.
Types of Search Algorithms:

Search algorithms are broadly divided into uninformed (blind) search and informed (heuristic) search.
Uninformed Search/ Blind Search:
 Uninformed search is a class of general-purpose search
algorithms which operates in brute force-way. Uninformed
search algorithms do not have additional information about
state or search space other than how to traverse the tree, so
it is also called blind search.
 The uninformed search does not contain any domain
knowledge, such as the closeness or location of the goal.
 It operates in a brute-force way, as it only includes
information about how to traverse the tree and how to
identify the leaf and goal nodes.
 Uninformed search applies a strategy in which the search
tree is searched without any information about the search
space, such as how the initial state operates or how to test
for the goal, so it is called blind search. For example, it
examines each node of the tree until it reaches the goal.
Uninformed Search Algorithms:

Depth First Search (DFS):
DFS is an uninformed search technique.
It works on present knowledge only.
Depth First Search starts with the initial node of the graph,
then goes deeper and deeper until it finds a goal node or a
node having no children.
DFS then backtracks from the dead end towards the most
recent node that is not yet completely explored.
A stack data structure is used in DFS.
DFS works in a LIFO (Last In, First Out) manner.
It works in a brute-force way (blind search).
It gives a non-optimal solution.
It always explores the deepest node first.
Algorithm:

1. Push the root node onto the stack.
2. Repeat until the stack is empty:
   (a) Pop a node from the stack.
       i. If the node is the goal node, stop.
       ii. Otherwise, push all children of the node onto the stack.

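The following is a minimal Python sketch of the stack-based DFS described above (illustrative code, not from the slides; the graph, start and goal names are example inputs, with the graph given as an adjacency dictionary):

def dfs(graph, start, goal):
    stack = [start]              # LIFO frontier: step 1, push the root node
    visited = set()
    while stack:                 # step 2: repeat until the stack is empty
        node = stack.pop()       # (a) pop a node
        if node == goal:         # i. goal test
            return True
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph.get(node, []))   # ii. push all children of the node
    return False

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
print(dfs(graph, 'A', 'F'))      # True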
Time Complexity:

 Time complexity in data structures = O(V + E)
where
V = number of vertices
E = number of edges

 Time complexity in artificial intelligence = O(b^d)
where
b = branching factor
d = depth
Advantages:

1. It requires less memory.
2. It takes less time to reach the goal node if the traversal
follows the right path. For example, if we have to reach goal
node G from starting node A it takes less time, but if we have
to reach goal node H from starting node A it takes more time.

Disadvantages:

1. There is no guarantee of finding a solution.
2. It can go into an infinite loop.
H.W. DFS Example-

Breadth First Search (BFS):
The Breadth First Search (BFS) algorithm traverses a graph
in a breadthward motion and uses a queue to remember the
next vertex to start a search from when a dead end occurs in
any iteration.
It explores all the nodes at a given depth before proceeding
to the next level.
It uses a queue data structure and works in a FIFO (First In,
First Out) manner.
It gives an optimal solution.
BFS comes under the uninformed (blind) search techniques.
Uninformed means there is no domain-specific knowledge.
Algorithm:

1. Enter the starting node on the queue.
2. If the queue is empty, return failure and stop.
3. If the first element on the queue is the goal node, return
   success and stop.
4. Otherwise, remove and expand the first element from the
   queue and place its children at the end of the queue.
5. Go to step 2.

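A minimal Python sketch of the queue-based BFS above (illustrative, not the slides' code; the graph is an example adjacency dictionary):

from collections import deque

def bfs(graph, start, goal):
    queue = deque([start])                  # step 1: enter the starting node on the queue
    visited = {start}
    while queue:
        node = queue.popleft()              # FIFO: take the first element
        if node == goal:                    # step 3: goal test
            return True
        for child in graph.get(node, []):   # step 4: expand and enqueue children
            if child not in visited:
                visited.add(child)
                queue.append(child)
    return False                            # step 2: queue became empty, so fail

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(bfs(graph, 'A', 'E'))                 # True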
Time Complexity:

 Time complexity in data structures = O(V + E)
where
V = number of vertices
E = number of edges

 Time complexity in artificial intelligence = O(b^d)
where
b = branching factor
d = depth
Advantages:

1. It finds a solution if one exists.
2. It finds the minimal solution, i.e. the one with the least
number of steps.

Disadvantages:

1. It requires a lot of memory, since every level of nodes
must be stored to generate the next level.
2. It needs a lot of time if the solution is far from the root
node.
H.W. BFS Example-

Uniform Cost Search Algorithm:

It is used for weighted tree/graph traversal.
The goal is to find a path to the goal node with the lowest cumulative cost.
Node expansion is based on the path cost.
It also uses backtracking.
A priority queue is used for its implementation.

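A minimal uniform cost search sketch in Python (illustrative code; the graph maps each node to (neighbour, edge cost) pairs and is an assumed example input):

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]        # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # expand the cheapest path so far
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search(graph, 'S', 'G'))  # (5, ['S', 'B', 'G'])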
Advantage:
1. It gives an optimal solution, because at every step the path
with the least cost is chosen.

Disadvantage:
1. It does not care about the number of steps involved in the
search; it is only concerned with the path cost. Because of
this, the algorithm may get stuck in an infinite loop.
Informed Search Algorithms:

Greedy Search:
A greedy algorithm is any algorithm that follows the
problem-solving heuristic of making the locally
optimal choice at each stage.
A greedy algorithm is an approach for solving a
problem by selecting the best option available at the
moment. It doesn't worry whether the current best
result will bring the overall optimal result.
In many problems, a greedy strategy does not produce
an optimal solution, but a greedy heuristic can yield
locally optimal solutions that approximate a globally
optimal solution in a reasonable amount of time.
It gives a feasible solution.
A problem that requires either a minimum or a maximum
result is known as an optimization problem. The greedy
method is one of the strategies used for solving optimization
problems. It follows the locally optimal choice at each stage
with the intent of finding the global optimum. Let's
understand this through an example.

Characteristics of the Greedy Method:

• To construct the solution in an optimal way, this algorithm
creates two sets, where one set contains all the chosen items
and the other set contains the rejected items.
• A Greedy algorithm makes good local choices in the
hope that the solution should be either feasible or
optimal.

Applications of Greedy Algorithm:

 It is used in finding the shortest path.


 It is used to find the minimum spanning tree
using Prim's algorithm or Kruskal's algorithm.
 It is used in a job sequencing with a
deadline.
 This algorithm is also used to solve the
fractional knapsack problem.

Pseudocode of Greedy Algorithm:

Algorithm Greedy(a, n)
{
    solution := 0;
    for i = 1 to n do
    {
        x := select(a);
        if feasible(solution, x) then
            solution := union(solution, x);
    }
    return solution;
}
Best First Search:
The best first search uses the concept of a priority
queue and heuristic search. It is a search algorithm that
works on a specific rule. The aim is to reach the goal
from the initial state via the shortest path.

The Best First Search algorithm in artificial intelligence is
used for finding the shortest path from a given starting node
to a goal node in a graph. The algorithm works by expanding
the nodes of the graph in order of their heuristic (f) values
until the goal node is reached.

Algorithm:
Let OPEN be a priority queue containing the initial state.
Loop:
    If OPEN is empty, return failure.
    Node <- Remove-First(OPEN)
    If Node is a goal,
        then return the path from the initial state to Node;
    else generate all successors of Node and put the newly
        generated nodes into OPEN according to their f values.
End Loop
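A minimal greedy best-first search sketch in Python (illustrative; the graph and the heuristic table h are assumed example inputs, and nodes are expanded in order of their f = h(n) values):

import heapq

def best_first_search(graph, h, start, goal):
    open_list = [(h[start], start, [start])]        # OPEN: priority queue ordered by f
    visited = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)    # Remove-First(OPEN)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for successor in graph.get(node, []):       # generate all successors
            if successor not in visited:
                heapq.heappush(open_list, (h[successor], successor, path + [successor]))
    return None                                     # OPEN is empty: failure

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))        # ['S', 'A', 'G']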
Knapsack Problem:
 Fractional Knapsack problem is defined as, “Given a set of
items having some weight and value/profit associated
with it. The knapsack problem is to find the set of items
such that the total weight is less than or equal to a given
limit (size of knapsack) and the total value/profit earned is
as large as possible.”
 This problem can be solved with the help of using two
techniques:
• Brute-force approach: The brute-force approach tries all the
possible solutions with all the different fractions but it is a
time-consuming approach.
• Greedy approach: In the greedy approach, we calculate the
profit/weight ratio of each item and select items accordingly.
The item with the highest ratio is selected first.
Knapsack Algorithm:
{
    for i = 1 to n do
        compute p[i] / w[i];
    sort the objects in non-increasing order of p/w;
    for i = 1 to n (in sorted order) do
    {
        if (m > 0 && w[i] <= m)
        {
            m = m - w[i];
            profit = profit + p[i];
        }
        else break;
    }
    if (m > 0)
        profit = profit + p[i] * (m / w[i]);
}
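A Python sketch of the greedy fractional-knapsack method above (illustrative; the profits, weights and capacity m are example values):

def fractional_knapsack(profits, weights, m):
    # sort items in non-increasing order of profit/weight ratio
    items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if m <= 0:
            break
        if w <= m:                   # the whole item fits
            m -= w
            total += p
        else:                        # take only the fraction that still fits
            total += p * (m / w)
            m = 0
    return total

print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))   # 240.0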
Job Scheduling Problem:

Job scheduling is the problem of scheduling jobs


out of a set of N jobs on a single processor
which maximizes profit as much as possible.
Consider N jobs, each taking unit time for
execution. Each job is having some profit and
deadline associated with it.
The sequencing of jobs on a single processor with
deadline constraints is called as Job Sequencing
with Deadlines.

The greedy algorithm described below always gives an
optimal solution to the job sequencing problem-
Step-01:
• Sort all the given jobs in decreasing order of their
profit.

Step-02:
• Check the value of maximum deadline.
• Draw a Gantt chart where maximum time on Gantt
chart is the value of maximum deadline.
Step-03:
• Pick up the jobs one by one.
• Put each job on the Gantt chart as far from 0 as possible,
ensuring that the job gets completed before its deadline.
Q. Given the jobs, their deadlines and associated profits
as shown below-

Jobs J1 J2 J3 J4 J5 J6

Deadlines 5 3 3 2 4 2

Profits 200 180 190 300 120 100

Answer the following questions-


1. Write the optimal schedule that gives the maximum profit.
2. Are all the jobs completed in the optimal schedule?
3. What is the maximum earned profit?

Soln.-
Step-1: Sort all the given jobs in decreasing order of
their profit-
Jobs J4 J1 J3 J2 J5 J6

Deadlines 2 5 3 3 4 2

Profits 300 200 190 180 120 100

Step-2: Gantt Chart

Time slot:   1    2    3    4    5
Job:         J2   J4   J3   J5   J1

1. The optimal schedule is-
J2, J4, J3, J5, J1
This is the required order in which the jobs must be
completed in order to obtain the maximum profit.
2. Not all the jobs are completed in the optimal schedule,
because job J6 could not be completed within its deadline.
3. Maximum earned profit = 180 + 300 + 190 + 120 + 200
= 990 units.

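A greedy job-sequencing sketch in Python for the example above (illustrative code; jobs are (name, deadline, profit) tuples):

def job_sequencing(jobs):
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)    # Step-01: sort by profit
    max_deadline = max(d for _, d, _ in jobs)                # Step-02: size of the Gantt chart
    slots = [None] * (max_deadline + 1)                      # slot t covers the interval (t-1, t]
    profit = 0
    for name, deadline, p in jobs:                           # Step-03: place each job
        for t in range(deadline, 0, -1):                     # as late as possible before its deadline
            if slots[t] is None:
                slots[t] = name
                profit += p
                break
    return slots[1:], profit

jobs = [('J1', 5, 200), ('J2', 3, 180), ('J3', 3, 190),
        ('J4', 2, 300), ('J5', 4, 120), ('J6', 2, 100)]
print(job_sequencing(jobs))   # (['J2', 'J4', 'J3', 'J5', 'J1'], 990)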
Prim’s Algorithm:
Prim's Algorithm is a greedy algorithm that is
used to find the minimum spanning tree from a
graph. Prim's algorithm finds the subset of edges
that includes every vertex of the graph such that
the sum of the weights of the edges can be
minimized.
Prim's algorithm starts with a single node and explores all
the adjacent nodes with all the connecting edges at every
step.

Working of Prim's Algorithm:
Prim's algorithm is a greedy algorithm that starts from one
vertex and continues to add the edges with the smallest
weight until the goal is reached. The steps to implement
Prim's algorithm are given as follows -
• First, initialize the MST with a randomly chosen vertex.
• Now, find all the edges that connect the tree built in the
above step with the new vertices. From the edges found,
select the minimum-weight edge and add it to the tree.
• Repeat step 2 until the minimum spanning tree is formed.
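A minimal Prim's algorithm sketch in Python (illustrative; the graph maps each vertex to (neighbour, weight) pairs and the returned list holds the MST edges):

import heapq

def prims_mst(graph, start):
    mst_edges, visited = [], {start}
    edges = [(w, start, v) for v, w in graph[start]]   # candidate edges leaving the tree
    heapq.heapify(edges)
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)                 # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst_edges.append((u, v, w))
        for nxt, w2 in graph[v]:                       # new candidate edges from the new vertex
            if nxt not in visited:
                heapq.heappush(edges, (w2, v, nxt))
    return mst_edges

graph = {'A': [('B', 2), ('C', 3)], 'B': [('A', 2), ('C', 1)], 'C': [('A', 3), ('B', 1)]}
print(prims_mst(graph, 'A'))   # [('A', 'B', 2), ('B', 'C', 1)]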
Spanning Tree:
A spanning tree is the subgraph of an undirected connected
graph.

Minimum Spanning Tree:


Minimum spanning tree can be defined as the spanning tree
in which the sum of the weights of the edge is minimum. The
weight of the spanning tree is the sum of the weights given to
the edges of the spanning tree.

Applications of Prim’s Algorithm:
• Prim's algorithm can be used in network
designing.

• It can be used to make network cycles.

• It can also be used to lay down electrical


wiring cables.

Difference between Prim's algorithm and Kruskal's algorithm:

Prim's Algorithm | Kruskal's Algorithm
The tree that we are growing always remains connected. | The tree that we are growing usually remains disconnected (a forest).
Prim's algorithm grows a solution from a random vertex by adding the next cheapest vertex to the existing tree. | Kruskal's algorithm grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.
Prim's algorithm is faster for dense graphs. | Kruskal's algorithm is faster for sparse graphs.
Kruskal's Algorithm:
Kruskal's algorithm finds the minimum cost spanning tree
using the greedy approach. This algorithm treats the graph as
a forest and every node in it as an individual tree. A tree
connects to another if and only if it has the least cost among
all available options and does not violate the MST properties.
Kruskal's algorithm is used to find the minimum spanning
tree for a connected and undirected weighted graph. The
main target of the algorithm is to find the subset of edges by
using which we can traverse every vertex of the graph.
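A minimal Kruskal's algorithm sketch in Python using a simple union-find (illustrative; edges are (weight, u, v) tuples):

def kruskal_mst(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):                        # root of the tree containing v
        while parent[v] != v:
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):       # consider edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # the edge joins two different trees: no cycle
            parent[ru] = rv             # union the two trees
            mst.append((u, v, w))
    return mst

vertices = ['A', 'B', 'C', 'D']
edges = [(1, 'A', 'B'), (4, 'A', 'C'), (2, 'B', 'C'), (5, 'C', 'D')]
print(kruskal_mst(vertices, edges))   # [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 5)]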
A* Search Algorithm:
A* (pronounced "A-star") is a graph traversal and path
search algorithm, which is often used in many fields of
computer science due to its completeness, optimality,
and optimal efficiency.
A* is an informed search algorithm, or a best-first
search, meaning that it is formulated in terms
of weighted graphs: starting from a specific
starting node of a graph, it aims to find a path to the
given goal node having the smallest cost (least
distance travelled, shortest time, etc.). It does this by
maintaining a tree of paths originating at the start node
and extending those paths one edge at a time until its
termination criterion is satisfied.
At each iteration of its main loop, A* needs to
determine which of its paths to extend. It does so based
on the cost of the path and an estimate of the cost
required to extend the path all the way to the goal.
Specifically, A* selects the path that minimizes
f(n) = g(n) + h(n)
where n is the next node on the path, g(n) is the cost of
the path from the start node to n, and h(n) is
a heuristic function that estimates the cost of the
cheapest path from n to the goal.

Algorithm:
1. Enter the starting node in the OPEN list.
2. If the OPEN list is empty, return FAIL.
3. Select the node from the OPEN list which has the smallest
   value of f(n) = g(n) + h(n).
   If node = goal, return success.
4. Expand node n and generate all its successors; compute
   f(n) = g(n) + h(n) for each successor node.
5. If node n is already in OPEN/CLOSED, attach it to the
   back pointer.
6. Go to step 3.

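A minimal A* sketch in Python (illustrative, not the slides' code; the graph maps a node to (neighbour, cost) pairs and h is an assumed admissible heuristic table):

import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]      # entries are (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # node with the smallest f = g + h
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph.get(node, []):       # expand node n, generate successors
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None                                      # OPEN list is empty: FAIL

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 4, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (5, ['S', 'B', 'G'])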
Advantages:
 It is one of the best searching algorithms.
 It is optimal and complete.
 It can solve complex problems.

Disadvantages:
 It does not always produce the shortest path, as it relies
mostly on heuristics and approximation.
 It has some complexity issues.
 It requires a lot of memory, as it keeps all generated nodes
in memory.
How to make A* Admissible:

There are two cases for the heuristic function:

1. Admissible: the heuristic never overestimates the cost of
reaching the goal:
h(n) <= h*(n)
2. Non-admissible: the heuristic overestimates the cost of
reaching the goal:
h(n) > h*(n)
Local Search Algorithms:
Local Search Algorithms operate using a single current
node and generally move only to neighbours of that
node.
Local search methods keep only a small number of nodes in
memory. They are suitable for problems where the solution is
the goal state itself and not the path.
In addition to finding goals, local search algorithms
are useful for solving pure optimization problems, in
which the aim is to find the best state according to an
objective function.
Hill-Climbing and Simulated Annealing are examples
of local search algorithms.
Hill-Climbing Algorithm:
 Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a
peak value where no neighbour has a higher value.

Figure: A one-dimensional state-space landscape in which elevation
corresponds to the objective function.
Hill climbing is sometimes called greedy local search
because it grabs a good neighbour state without thinking
ahead about where to go next.

Limitations:
Hill climbing cannot reach the optimal/best state (global
maximum) if it enters any of the following regions:
Local Maxima –
A local maximum is a peak that is higher than each of its
neighbouring states but lower than the global maximum.

Plateaus –
A plateau is a flat area of the state-space landscape. It can be
a flat local maximum, from which no uphill exit exists, or a
shoulder, from which progress is possible.
Ridge –
A ridge is an area which is higher than the surrounding
states, but which cannot be reached in a single move.

A ridge, as shown in the figure, results in a sequence of local
maxima that is very difficult for a greedy algorithm to
navigate.

Variations of Hill Climbing-
In Steepest Ascent hill climbing all successors are
compared and the closest to the solution is chosen.
Steepest Ascent hill climbing is like best-first search,
which tries all possible extensions of the current path
instead of only one.
It gives an optimal solution but is time-consuming.
It is also known as Gradient search.

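A steepest-ascent hill-climbing sketch in Python (illustrative; the value() and neighbours() functions are assumed problem-specific choices, here maximising -(x - 3)^2 over the integers):

import random

def value(x):
    return -(x - 3) ** 2              # objective function: single peak at x = 3

def neighbours(x, step=1):
    return [x - step, x + step]

def steepest_ascent_hill_climbing(start):
    current = start
    while True:
        best = max(neighbours(current), key=value)   # compare all successors
        if value(best) <= value(current):
            return current            # no neighbour is higher: a peak (possibly only local)
        current = best

print(steepest_ascent_hill_climbing(random.randint(-10, 10)))   # 3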
Simulated Annealing:
Annealing is the process used to temper or harden
metals and glass by heating them to a high temperature
and then gradually cooling them, thus allowing the
material to reach a low energy crystalline state.
The simulated annealing algorithm is quite similar to
hill-climbing. Instead of picking the best move,
however, it picks a random move. If a move improves
the situation, it is always accepted. Otherwise the
algorithm accepts the move with some probability less
than 1.
It considers neighbouring states.
Moves to a worse state may be accepted with some probability.
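A simulated-annealing sketch in Python (illustrative; the cooling schedule, temperature and step count are arbitrary example choices, minimising f(x) = (x - 3)^2 over the integers):

import math
import random

def simulated_annealing(f, start, temperature=10.0, cooling=0.95, steps=1000):
    current = start
    for _ in range(steps):
        if temperature < 1e-6:
            break
        candidate = current + random.choice([-1, 1])       # pick a random move
        delta = f(candidate) - f(current)
        # always accept an improving move; accept a worse move with probability e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
        temperature *= cooling                              # gradually cool down
    return current

print(simulated_annealing(lambda x: (x - 3) ** 2, start=50))   # usually ends near 3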
Constraint Satisfaction Problem:
In artificial intelligence and operations
research, constraint satisfaction is the process of
finding a solution to a set of constraints that impose
conditions that the variables must satisfy.
Constraint propagation methods are also used in
conjunction with search to make a given problem
simpler to solve.
Examples of problems that can be modeled as a
constraint satisfaction problem include:
 Map Colouring Problem
 Crosswords, Sudoku and many logic puzzles

Constraint satisfaction depends on three components,
namely:
• X: It is a set of variables.
• D: It is a set of domains where the variables reside.
There is a specific domain for each variable.
• C: It is a set of constraints which are followed by the
set of variables.

CSP Problems:
Constraint satisfaction includes those problems which
contain some constraints that must be satisfied while solving
the problem. CSP includes the following problems:
• Graph Colouring: The problem where the constraint
is that no adjacent sides can have the same colour.

• Sudoku playing: the gameplay where the constraint is that
no number from 1 to 9 can be repeated in the same row or
column.

• n-queens problem: in the n-queens problem, the constraint
is that no two queens may share the same row, column, or
diagonal.
• Crossword: In crossword problem, the constraint is
that there should be the correct formation of the words,
and it should be meaningful.

• Latin square problem: the task is to fill the square grid so
that each symbol occurs in every row and every column
without repetition; the rows may be shuffled but contain the
same digits.

Latin Square Problem:
A Latin square is a square array of objects (letters A,B,C,
…) such that each object appears once and only once in
each row and each column.
Example- Suppose we choose the following
Latin Square:

It is not the Latin square design/ problem. Why?

Representation of LSD:

Drivers / Cars:   1   2   3   4
a                 A   B   C   D
b                 B   C   D   A
c                 C   D   A   B
d                 D   A   B   C

The 4 brands of petrol are indicated as A, B, C, D.

In LSD, you have 3 factors:
Rows
Columns
Treatments (letters A,B,C,…)

The number of treatments = number of rows = number of columns = n.

• The row-column treatments are represented by cells in an
n x n array.
• The treatments are assigned to row-column combinations
using a Latin-square arrangement.

Map-Coloring Problem:
Problem: We are given the task of coloring each region
either red, green or blue in such a way that no
neighboring regions have the same color.

Solution:
To formulate this as a CSP, we define the variables as
(WA, NT, Q, NSW, V, SA and T)
The domain of each variable is the set {red, green,
blue}
The constraints require neighboring regions to have
distinct colors; for example, the allowable
combinations for WA and NT are the pairs {(red,
green), (red, blue), (green, red), (green, blue), (blue,
red), (blue, green)}
The constraint can also be represented more succinctly as the
inequality WA != NT, provided the constraint satisfaction
algorithm has some way to evaluate such expressions.
There are many possible solutions. One possible
solution is shown below
{WA = red, NT = green, Q = red, NSW = green, V =
red, SA = Blue, T = red/green/blue}

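A Python sketch of this map-colouring CSP solved by simple backtracking (illustrative code; the variables, domains and neighbour constraints are taken from the formulation above):

VARIABLES = ['WA', 'NT', 'Q', 'NSW', 'V', 'SA', 'T']
DOMAIN = ['red', 'green', 'blue']
NEIGHBOURS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'T': [],
}

def consistent(var, colour, assignment):
    # constraint: neighbouring regions must get distinct colours (WA != NT, ...)
    return all(assignment.get(n) != colour for n in NEIGHBOURS[var])

def backtrack(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(VARIABLES):
        return assignment
    var = next(v for v in VARIABLES if v not in assignment)   # choose an unassigned variable
    for colour in DOMAIN:
        if consistent(var, colour, assignment):
            result = backtrack({**assignment, var: colour})
            if result:
                return result
    return None    # no legal value left: backtrack

print(backtrack())
# e.g. {'WA': 'red', 'NT': 'green', 'Q': 'red', 'NSW': 'green', 'V': 'red', 'SA': 'blue', 'T': 'red'}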
Backtracking for the map-coloring problem:

Solution with Constraint Satisfaction Problem:

Backtracking Search:
Backtracking search: A depth-first search that
chooses values for one variable at a time and
backtracks when a variable has no legal values left to
assign. Backtracking algorithm repeatedly chooses an
unassigned variable, and then tries all values in the
domain of that variable in turn, trying to find a
solution.
Examples where backtracking can be used to solve
puzzles or problems include: Puzzles such as eight
queens puzzle, crosswords, verbal arithmetic, Sudoku,
and Peg Solitaire.
When do we use backtracking?
How do we use backtracking?
Types of Constraints in Backtracking :

1. Implicit Constraint
2. Explicit Constraint

In the backtracking technique, we backtrack to the last valid
path as soon as we hit a dead end.
Backtracking reduces the search space, since we no longer
have to follow any path we know is invalid.
Backtracking works in a DFS manner with some bounding
function. In this method the desired solution is expressed as
an n-tuple (x1, x2, ..., xn), where each xi is chosen from a
finite set Si.

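A generic backtracking template in Python for this n-tuple formulation (illustrative; `domains` gives the finite set Si for each xi and `bound` is the bounding function that prunes dead ends; the 4-queens call is an assumed example):

def backtracking(domains, bound, partial=()):
    if len(partial) == len(domains):        # a complete n-tuple (x1, ..., xn)
        yield partial
        return
    for value in domains[len(partial)]:     # try each value for the next xi
        candidate = partial + (value,)
        if bound(candidate):                # follow the path only if it can still be valid
            yield from backtracking(domains, bound, candidate)
        # otherwise this is a dead end: backtrack to the last valid partial solution

# Example: 4-queens, where xi is the column of the queen placed in row i.
def no_attack(partial):
    r, c = len(partial) - 1, partial[-1]
    return all(c != c2 and abs(c - c2) != r - r2 for r2, c2 in enumerate(partial[:-1]))

domains = [range(4)] * 4
print(next(backtracking(domains, no_attack)))   # (1, 3, 0, 2)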
Constraint Propagation: Inference

A method of inference that assigns values to variables


characterizing a problem in such a way that some
conditions (called constraints) are satisfied.
It is the process of using the constraints to reduce the number
of legal values for a variable, which in turn can reduce the
legal values for another variable, and so on.

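A tiny constraint-propagation sketch in Python (illustrative; once WA is fixed to red, the single-value domain is propagated to its neighbours, which may in turn reduce other domains):

def propagate(domains, neighbours, var, value):
    domains = {v: list(d) for v, d in domains.items()}
    domains[var] = [value]
    queue = [var]
    while queue:
        v = queue.pop()
        if len(domains[v]) == 1:             # v has a single legal value left
            (val,) = domains[v]
            for n in neighbours[v]:
                if val in domains[n]:        # remove it from each neighbour's domain
                    domains[n].remove(val)
                    queue.append(n)          # that reduction may propagate further
    return domains

domains = {'WA': ['red', 'green', 'blue'], 'NT': ['red', 'green', 'blue'], 'SA': ['red', 'green', 'blue']}
neighbours = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
print(propagate(domains, neighbours, 'WA', 'red'))
# {'WA': ['red'], 'NT': ['green', 'blue'], 'SA': ['green', 'blue']}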
Example-

Game Playing:
 General game playing (GGP) is the design of artificial
intelligence programs to be able to play more than one
game successfully. For instance, a chess-playing computer
program cannot play checkers. Examples include Watson,
a Jeopardy! -playing computer; and the RoboCup
tournament, where robots are trained to compete in soccer
and many more.
 Game Playing is a search problem defined by-
a) Initial State
b) Successor function
c) Goal test
d) Path cost/ Utility/Pay off function

AI has continued to improve, with aims set on a player
being unable to tell the difference between computer
and a human player.
A game must ‘feel’ natural
a) Obeys laws of the game
b) Character aware of the environment
c) Path finding ( A* algorithm)
d) Decision making
e) Planning
• The game AI is about the illusion of human behaviour
i. Smart to a certain extent
ii. Non- repeating behaviour
iii. Emotional Influences (Irrationality, Personality)
iv. Being integrated in the environment
Game AI needs various computer science disciplines:
a) Knowledge based systems
b) Machine Learning
c) Multi-agent systems
d) Computer graphics & animation
e) Data Structures

Optimal Decisions in Game:
Optimal Solution: In adversarial search, the optimal
solution is a contingent strategy, which specifies
MAX(the player on our side)’s move in the initial
state, then MAX’s move in the states resulting from
every possible response by MIN(the opponent), then
MAX’s moves in the states resulting from every
possible response by MIN to those moves, and so on.
One move deep: If a particular game ends after one
move each by MAX and MIN, we say that this tree is
one move deep, consisting of two half-moves, each of
which is called a ply.
Explain the Minimax Theorem / Algorithm:
• It is a specialized search algorithm that returns the optimal
sequence of moves for a player in a zero-sum game.
• It is a recursive/backtracking algorithm used in decision
making and game theory for two-player games.
• It uses recursion to search through the game tree.
• The algorithm computes the minimax decision for the
current state.
• There are two players: MAX (selects the maximum value)
and MIN (selects the minimum value).
• A depth-first search is used to traverse the complete game
tree.
Minimax value: The minimax value of a node is the
utility (for MAX) of being in the corresponding state,
assuming that both players play optimally from there
to the end of the game. The minimax value of a
terminal state is just its utility.

Given a game tree, the optimal strategy can be determined from the
minimax value of each node, i.e. MINIMAX(n).
MAX prefers to move to a state of maximum value, whereas MIN prefers
a state of minimum value.

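A minimal minimax sketch in Python (illustrative; the game tree is represented as a nested list whose leaves are terminal utilities for MAX):

def minimax(node, is_max):
    if not isinstance(node, list):          # terminal state: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# A tree that is one move deep (two plies): MAX moves, then MIN replies.
game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(game_tree, is_max=True))      # 3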
Example of Min-Max Algorithm:

Alpha- Beta Pruning:
 Alpha–beta pruning is a search algorithm that seeks to
decrease the number of nodes that are evaluated by the
minimax algorithm in its search tree. It is an adversarial
search algorithm used commonly for machine playing of
two-player games.
 Alpha-beta pruning is a modified version of the minimax
algorithm. It is an optimization technique for the minimax
algorithm.
 It involves two threshold parameters, alpha and beta, for
future expansion, so it is called alpha-beta pruning. It is also
called the Alpha-Beta Algorithm.

 Alpha-beta pruning can be applied at any depth of a tree,
and sometimes it prunes not only the tree leaves but also
entire sub-trees.
 The two parameters can be defined as:
• Alpha: the best (highest-value) choice we have found so
far at any point along the path of the Maximizer. The initial
value of alpha is -∞.
• Beta: the best (lowest-value) choice we have found so far
at any point along the path of the Minimizer. The initial
value of beta is +∞.
The main condition required for alpha-beta pruning is:
α >= β
Key Points about Alpha- Beta Pruning:

 The Max player will only update the value of alpha.


 The Min player will only update the value of beta.
 While backtracking the tree, the node values will be
passed to upper nodes instead of values of alpha and
beta.
 We will only pass the alpha, beta values to the child
nodes.

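A minimal alpha-beta pruning sketch in Python (illustrative; it uses the same nested-list game tree as the minimax sketch, with alpha starting at -∞ and beta at +∞):

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):                   # terminal state
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)                # only the Max player updates alpha
            if alpha >= beta:                        # pruning condition
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)                      # only the Min player updates beta
        if alpha >= beta:
            break
    return value

game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(game_tree, is_max=True))             # 3, with the pruned branches never visited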
Working of Alpha-Beta Pruning:
Step 1: The Max player starts by moving from node A, where α = -∞ and
β = +∞, and passes these values of alpha and beta to node B, where again
α = -∞ and β = +∞; node B passes the same values to its child D.

Step 2: At node D it is Max's turn, so the value of α is determined. α is
compared with 2 and then with 3, so α at node D becomes max(2, 3) = 3,
and the node value is also 3.
Step 3: The algorithm now returns to node B, where the value of β changes
because this is Min's turn. β = +∞ is compared with the value of the
available successor node, i.e. β = min(+∞, 3) = 3, so at node B now
α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of
node B, which is node E, and the values α = -∞ and β = 3 are
passed down.

Step 4: Max takes its turn at node E, changing the value of
alpha. The current value of alpha is compared with 5, giving
α = max(-∞, 5) = 5, so at node E α = 5 and β = 3. Since
α >= β, the right successor of E is pruned, the algorithm does
not traverse it, and the value at node E is 5.

Step 5: The method now goes backwards in the tree, from
node B to node A. At node A the value of alpha is modified:
the highest available value is 3, since α = max(-∞, 3) = 3, and
β = +∞. These two values are now passed to A's right
successor, node C.

At node C, α = 3 and β = +∞ are passed on to node F.
Step 6: At node F, the value of α is compared with the left
child, which is 0, giving max(3, 0) = 3, and then with the
right child, which is 1; max(3, 1) = 3 remains the same, but
the node value of F changes to 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞;
the value of beta is modified and compared with 1, giving β = min(+∞, 1) = 1.
Now at C, α = 3 and β = 1, and again the condition α >= β is met, so the
algorithm prunes the next child of C, which is G, and does not compute the
complete sub-tree G.

Step 8: C now returns 1 to A, and max(3, 1) = 3 is the best
value for A. The completed game tree, showing computed
and uncomputed nodes, is shown below. As a result, in this
case, the optimal value for the maximizer is 3.

Time Complexity:
• Worst ordering: In some instances, the alpha-beta pruning
technique does not prune any of the tree's leaves and behaves
exactly like the minimax algorithm. Because of the alpha and
beta factors it also takes more time in this scenario; this type
of pruning is known as worst ordering. In this case, the best
move lies on the right side of the tree. For such an ordering,
the time complexity is O(b^m).
• Ideal ordering: The best case for alpha-beta pruning occurs
when a lot of pruning happens in the tree and the best moves
lie on the left side of the tree. We use DFS, so it searches the
left side first and goes deep; in the same amount of time it
can search twice as deep as the minimax method. The
complexity in the ideal ordering is O(b^(m/2)).
Stochastic Games:
A stochastic game was introduced by Lloyd Shapley in
the early 1950s. It is a dynamic game with
probabilistic transitions played by one or more players.
The game is played in a sequence of stages. At the
beginning of each stage, the game is in a certain state.
Applications- Stochastic games have applications
in economics, evolutionary biology and computer
networks. They are generalizations of repeated
games which correspond to the special case where
there is only one state.

Many games are unpredictable in nature, such as those
involving dice throw. These games are called as
Stochastic Games. The outcome of the game depends
on skills as well as luck.
In the Stochastic Games, the winner of the game is not
only decided by the skill but also by luck.
Examples are
 Gambling game
 Golf ball game
 Backgammon, etc.

Stochastic Search Algorithms:
Stochastic search algorithms are designed for problems
with inherent random noise or deterministic problems
solved by injected randomness.
Desired properties of search methods are
 High probability of finding near-optimal solutions
(effectiveness)
 Short processing time (Efficiency)
• They are usually conflicting; a compromise is offered
by stochastic techniques where certain steps are based
on random choice.

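A random-restart stochastic search sketch in Python (illustrative; the landscape list is an arbitrary example objective, and the randomly chosen restart points are the steps based on random choice):

import random

landscape = [1, 3, 2, 8, 4, 9, 2, 7, 5, 6]       # objective value at positions 0..9

def hill_climb(i):
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbours, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i                             # a peak, possibly only a local one
        i = best

def random_restart_search(restarts=10):
    starts = [random.randrange(len(landscape)) for _ in range(restarts)]
    return max((hill_climb(s) for s in starts), key=lambda i: landscape[i])

best = random_restart_search()
print(best, landscape[best])                     # with high probability: 5 9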
Why Stochastic Search?
Stochastic search is the method of choice for solving
many hard combinatorial problems.
Ability of solving hard combinatorial problems has
increased significantly.
Solution of large propositional satisfiability problems.
Solution of large travelling salesman problems.
Good results in new application areas.

Thank you
