This document is confidential and intended solely for the educational purpose of RMK
Group of Educational Institutions. If you have received this document through email
in error, please notify the system manager. This document contains proprietary
information and is intended only to the respective group / learning community as
intended. If you are not the addressee you should not disseminate, distribute or copy
through e-mail. Please notify the sender immediately by e-mail if you have received
this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking
any action in reliance on the contents of this information is strictly prohibited.
Digital Notes
20CB603 ARTIFICIAL INTELLIGENCE
Department: CSBS
Batch/Year: 2020-24/III
Created by: Mr.B.Jayaram / AP
Date: 08.02.2023
Table of Contents
S No | Contents
1 | Contents
2 | Course Objectives
5 | Course Outcomes
7 | Lecture Plan
9 | Lecture Notes
10 | Assignments
11 | Part A (Q & A)
12 | Part B Questions
16 | Assessment Schedule
PREREQUISITE
UNIT I INTRODUCTION
Problems of AI, AI technique, Tic-Tac-Toe problem. Intelligent Agents, Agents &
environment, nature of environment, structure of agents, goal based agents, utility
based agents, learning agents. Problem Solving, Problems, Problem Space & search:
Defining the problem as state space search, production system, problem
characteristics, issues in the design of search programs.
Problem solving agents, searching for solutions; uniform search strategies: breadth-first
search, depth-first search, depth-limited search, bidirectional search, comparing
uniform search strategies. Heuristic search strategies: greedy best-first search, A*
search, AO* search, memory-bounded heuristic search; local search algorithms &
optimization problems: hill climbing search, simulated annealing search, local beam
search
Local search for constraint satisfaction problems. Adversarial search, Games, optimal
decisions & strategies in games, the minimax search procedure, alpha-beta pruning,
additional refinements, iterative deepening.
TOTAL: 45 PERIODS
Course Outcomes
Course Code | Course Outcome Statement | Cognitive/Affective Level of the Course Outcome | Expected Level of Attainment
Course Outcome Statements in Cognitive Domain
CO  | PO1 | PO2 | PO3 | PO4 | PO5 | PO6 | PO7 | PO8 | PO9 | PO10 | PO11 | PO12 | PSO1 | PSO2 | PSO3
CO1 |  3  |  3  |  2  |  1  |  2  |  1  |  1  |  1  |  1  |  1   |  1   |  1   |  2   |  2   |  1
CO2 |  3  |  3  |  2  |  1  |  2  |  1  |  1  |  1  |  1  |  1   |  1   |  1   |  2   |  2   |  1
CO3 |  3  |  3  |  2  |  1  |  2  |  1  |  1  |  1  |  1  |  1   |  1   |  1   |  2   |  2   |  1
CO4 |  3  |  3  |  2  |  1  |  2  |  1  |  1  |  1  |  1  |  1   |  1   |  1   |  2   |  2   |  1
CO5 |  3  |  3  |  2  |  1  |  2  |  1  |  1  |  1  |  1  |  1   |  1   |  1   |  2   |  2   |  1
Lecture Plan
UNIT – II
S No | Topics | Periods | Proposed Date | Actual Lecture Date of Delivery | Pertaining CO | Taxonomy Level | Mode of Delivery
1 | Problem solving agents, searching for solutions | 1 | 08-02-2023 | | CO2 | K4 | Chalk and Talk
2 | Uniform search strategies: breadth-first search | 1 | 11-02-2023 | | CO2 | K4 | Chalk and Talk
6 | A* search, AO* search | 1 | 21-02-2023 | | CO2 | K4 | Chalk and Talk
7 | Memory-bounded heuristic search: local search algorithms & optimization problems | 1 | 22-02-2023 | | CO2 | K4 | Chalk and Talk
S No | Topics
1 | https://wordmint.com/public_puzzles/2387238
2.1 Problem Solving Agents
Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving.
The process of looking for a sequence of actions that reaches the goal is called
search. A search algorithm takes a problem as input and returns a solution in the
form of an action sequence. Once a solution is found, the actions it recommends
can be carried out. This is called the execution phase.
We do this by expanding the current state; that is, by applying each legal action to the
current state, thereby generating a new set of states. In this case, we add branches
from the parent node to the child nodes.
Search algorithms require a data structure to keep track of the search tree that is
being constructed.
For each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the
initial state to the node, as indicated by the parent pointers.
The appropriate data structure for this is a queue. The operations on a queue are
as follows:
EMPTY?(queue) returns true only if there are no more elements in the queue.
POP(queue) removes the first element of the queue and returns it.
INSERT(element, queue) inserts an element and returns the resulting queue.
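The node components and queue operations above can be sketched in Python. This is a minimal illustration; the class and function names are our own, not from the text:

```python
from collections import deque

class Node:
    """Search-tree node with the four components described above."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, traditionally g(n)

# The three queue operations, here on a FIFO queue:
def is_empty(queue):                # EMPTY?(queue)
    return len(queue) == 0

def pop(queue):                     # POP(queue): remove and return first element
    return queue.popleft()

def insert(element, queue):         # INSERT(element, queue)
    queue.append(element)
    return queue

frontier = deque()
insert(Node("S"), frontier)
```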
1. Breadth-first search is the most common search strategy for traversing a tree
or graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
2. The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes at the next level.
3. The breadth-first search algorithm is an example of a general-graph search
algorithm.
4. Breadth-first search is implemented using a FIFO queue data structure.
In the below tree structure, we have shown the traversal of the tree using the BFS
algorithm from the root node S to the goal node K. The BFS algorithm traverses in
layers, so it will follow the path shown by the dotted arrow, and the traversed path
will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Disadvantages:
1. It requires lots of memory, since each level of the tree must be saved into
memory to expand the next level.
2. BFS needs lots of time if the solution is far away from the root node.
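The traversal above can be sketched in Python. Since the figure is not reproduced here, the adjacency list below is an assumption reconstructed from the stated level-by-level visiting order:

```python
from collections import deque

# Adjacency list reconstructed from the traversal order above
# (the original figure is not reproduced, so this is an assumption).
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "C": ["E"], "D": ["F"], "G": ["I"], "H": ["K"],
         "E": [], "F": [], "I": [], "K": []}

def bfs(graph, start, goal):
    """Expand nodes level by level using a FIFO queue; return visit order."""
    frontier = deque([start])
    visited = [start]
    while frontier:
        node = frontier.popleft()           # POP: first in, first out
        if node == goal:
            return visited
        for child in graph[node]:
            if child not in visited:
                visited.append(child)
                frontier.append(child)
    return visited

print(bfs(graph, "S", "K"))
```

Running this reproduces the layer-by-layer order S, A, B, C, D, G, H, E, F, I, K from the text.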
Depth-first Search
In the below search tree, we have shown the flow of depth-first search, and it will
follow this order:
It will start searching from root node S and traverse A, then B, then D and E. After
traversing E, it will backtrack, as E has no other successor and the goal node has
still not been found. After backtracking, it will traverse node C and then G, where it
will terminate, as it has found the goal node.
Completeness: DFS search algorithm is complete within finite state space as it will
expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is proportional to the number of nodes
traversed by the algorithm. It is given by:
T(n) = 1 + b + b^2 + ......... + b^m = O(b^m)
where b is the branching factor and m is the maximum depth of any node, which can
be much larger than d (the shallowest solution depth).
Space Complexity: The DFS algorithm needs to store only a single path from the root
node, hence the space complexity of DFS is equivalent to the size of the fringe set,
which is O(bm).
Optimal: The DFS search algorithm is non-optimal, as it may take a large number
of steps or incur a high cost to reach the goal node.
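The DFS flow described above can be sketched in Python. The adjacency list is an assumption reconstructed from the traversal described, since the figure is not reproduced here:

```python
# Adjacency list reconstructed from the traversal described above
# (the original figure is not reproduced, so this is an assumption).
graph = {"S": ["A"], "A": ["B", "C"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}

def dfs(graph, node, goal, order=None):
    """Recursive depth-first search; records the visit order and
    backtracks automatically when a branch has no successors."""
    if order is None:
        order = []
    order.append(node)
    if node == goal:
        return order
    for child in graph[node]:
        result = dfs(graph, child, goal, order)
        if result is not None:
            return result
    return None  # backtrack: no goal found in this subtree

print(dfs(graph, "S", "G"))
```

The visit order is S, A, B, D, E (backtrack), C, G, matching the walkthrough above.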
Depth-limited search can terminate with two kinds of failure:
1. Standard failure value: it indicates that the problem does not have any solution.
2. Cutoff failure value: it indicates that there is no solution for the problem within
the given depth limit.
The bidirectional search algorithm runs two simultaneous searches, one from the
initial state, called the forward search, and the other from the goal node, called the
backward search, to find the goal node. Bidirectional search replaces one single
search graph with two small subgraphs, in which one starts the search from the
initial vertex and the other starts from the goal vertex. The search stops when these
two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
In the below search tree, the bidirectional search algorithm is applied. The algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and from goal node 16 in the backward direction. The algorithm
terminates at node 9, where the two searches meet.
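A Python sketch of the idea follows. The figure's graph is not reproduced here, so a simple chain of nodes 1 to 16 stands in for it; under that assumption the two searches meet at node 9, as in the description above:

```python
from collections import deque

# A simple chain 1-2-...-15-16 stands in for the figure's graph
# (the original figure is not reproduced, so this is an assumption).
graph = {n: [m for m in (n - 1, n + 1) if 1 <= m <= 16] for n in range(1, 17)}

def bidirectional_search(graph, start, goal):
    """Run two breadth-first searches, forward from start and backward
    from goal, expanding one level of each per round; stop when the
    two visited sets intersect."""
    fwd, bwd = {start}, {goal}
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])
    while fwd_frontier and bwd_frontier:
        for frontier, visited in ((fwd_frontier, fwd), (bwd_frontier, bwd)):
            for _ in range(len(frontier)):      # expand one full level
                node = frontier.popleft()
                for nb in graph[node]:
                    if nb not in visited:
                        visited.add(nb)
                        frontier.append(nb)
            meeting = fwd & bwd
            if meeting:
                return meeting.pop()            # node where the searches meet
    return None

print(bidirectional_search(graph, 1, 16))
```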
The informed search algorithm is more useful for large search spaces. An informed
search algorithm uses the idea of a heuristic, so it is also called heuristic search.
A heuristic h is admissible if
h(n) <= h*(n)
where h(n) is the estimated (heuristic) cost from node n to the goal, and h*(n) is the
true optimal cost. Hence the heuristic cost should be less than or equal to the actual
cost.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search algorithm. It expands
nodes based on their heuristic value h(n). It maintains two lists, an OPEN list and a
CLOSED list. In the CLOSED list, it places those nodes which have already been
expanded, and in the OPEN list, it places nodes which have not yet been expanded.
On each iteration, the node n with the lowest heuristic value is expanded, all its
successors are generated, and n is placed in the closed list. The algorithm
continues until a goal state is found.
In the informed search we will discuss two main algorithms which are given below:
The greedy best-first search algorithm always selects the path which appears best at
that moment. It is a combination of depth-first search and breadth-first search. It
uses the heuristic function to guide the search, which lets it take the advantages of
both algorithms. With the help of best-first search, at each step we can choose the
most promising node. In the greedy best-first search algorithm, we expand the node
which appears closest to the goal node, where the closeness is estimated by the
heuristic function, i.e.
f(n) = h(n)
where h(n) = estimated cost from node n to the goal. Greedy best-first search is
implemented with a priority queue.
Advantages:
1. Best-first search can switch between BFS and DFS, gaining the advantages of
both algorithms.
2. This algorithm can be more efficient than the BFS and DFS algorithms.
Example:
Consider the below search problem, which we will traverse using greedy best-first
search. At each iteration, the node with the lowest value of the evaluation function
f(n) = h(n) is expanded; the h values are given in the below table.
Time Complexity: The worst-case time complexity of greedy best-first search is
O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search
is O(b^m), where m is the maximum depth of the search space.
Complete: Greedy best-first search is incomplete, even if the given state
space is finite.
Optimal: The greedy best-first search algorithm is not optimal.
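A short Python sketch of greedy best-first search follows. The table from the text is not reproduced here, so the graph and heuristic values below are illustrative assumptions:

```python
import heapq

# Example graph and heuristic values (assumed, since the table from the
# text is not reproduced here); h estimates the cost to the goal G.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
         "E": ["H"], "F": ["I", "G"],
         "C": [], "D": [], "H": [], "I": [], "G": []}
h = {"S": 13, "A": 12, "B": 4, "C": 7, "D": 3,
     "E": 8, "F": 2, "H": 4, "I": 9, "G": 0}

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the lowest f(n) = h(n),
    using a priority queue; the path found is not necessarily optimal."""
    frontier = [(h[start], [start])]
    closed = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph[node]:
            heapq.heappush(frontier, (h[child], path + [child]))
    return None

print(greedy_best_first(graph, h, "S", "G"))
```

With these values the search greedily follows the smallest h at each step, giving the path S, B, F, G.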
A* Search Algorithm
A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) together with g(n), the cost to reach node n from the start state. It
combines features of UCS and greedy best-first search, which lets it solve the
problem efficiently. The A* search algorithm finds the shortest path through the search
space using the heuristic function. This search algorithm expands a smaller search tree
and provides an optimal result faster. The A* algorithm is similar to UCS except that it
uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the
node. Hence we can combine both costs as follows, and this sum is called the
fitness number:
f(n) = g(n) + h(n)
Disadvantages:
1. It does not always produce the shortest path, as it is mostly based on heuristics and
approximation.
2. The A* search algorithm has some complexity issues.
3. The main drawback of A* is its memory requirement: it keeps all generated nodes
in memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n) of
each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any
node from start state. Here we will use OPEN and CLOSED list.
Solution
Initialization: {(S, 5)}
Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S--
>G, 10)}
Iteration 4 gives the final result: S--->A--->C--->G, which is the optimal path, with
cost 6.
Points to remember:
1. A* algorithm returns the path which occurred first, and it does not search for
all remaining paths.
2. The efficiency of A* algorithm depends on the quality of heuristic.
3. The A* algorithm expands all nodes which satisfy the condition f(n) <= C*, where
C* is the cost of the optimal solution.
If the heuristic function is admissible, then A* tree search will always find the
least cost path.
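The worked example can be checked in code. The following sketch reconstructs the edge costs and heuristic values from the f-values listed in the iterations above (the figure itself is not reproduced, so the graph is inferred):

```python
import heapq

# Graph and heuristic values reconstructed from the iterations above.
graph = {"S": {"A": 1, "G": 10}, "A": {"B": 2, "C": 1},
         "C": {"D": 3, "G": 4}, "B": {}, "D": {}, "G": {}}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}

def a_star(graph, h, start, goal):
    """Expand the node with the lowest f(n) = g(n) + h(n); return the
    path found and its cost g."""
    frontier = [(h[start], 0, [start])]        # (f, g, path)
    closed = set()
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for child, cost in graph[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[child], g2, path + [child]))
    return None, float("inf")

print(a_star(graph, h, "S", "G"))
```

The expansions match the iterations in the text and return the optimal path S, A, C, G with cost 6.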
The AO* algorithm performs best-first search over AND-OR graphs. The AO* method
divides any given difficult problem into a smaller group of problems that are then
resolved using the AND-OR graph concept. AND-OR graphs are specialized graphs
that are used in problems that can be divided into smaller problems. The AND side
of the graph represents a set of tasks that must all be completed to achieve the main
goal, while the OR side of the graph represents alternative methods for
accomplishing the same main goal.
The figure above is an example of a simple AND-OR graph, in which buying a car is
broken down into smaller problems or tasks that can be accomplished to achieve the
main goal. One option is to steal a car, which will accomplish the main goal, or to
use your own money to purchase a car, which will also accomplish the main goal.
The AND symbol indicates the AND part of the graph, which refers to the requirement
that all subproblems joined by the AND must be resolved before the parent node or
issue can be finished.
The start state and the target state are already known in the knowledge-based
search strategy known as the AO* algorithm, and the best path is identified by
heuristics. The informed search technique considerably reduces the
algorithm’s time complexity. The AO* algorithm is far more effective in searching
AND-OR trees than the A* algorithm.
Example:
In this example, the value written below each node is its heuristic value, i.e. h(n).
Each edge length is taken as 1.
Step 1:
Using the evaluation function f(n) = g(n) + h(n):
Step 2:
Comparing f(A⇢B) and f(A⇢C+D), f(A⇢C+D) is smaller, i.e. 8 < 9, so we explore
f(A⇢C+D). The current node is C:
f(C⇢G) = g(G) + h(G) = 1 + 3 = 4
f(C⇢H+I) = g(H) + h(H) + g(I) + h(I) = 1 + 0 + 1 + 0 = 2
(H and I are both added because they are joined by an AND.)
f(C⇢H+I) is selected as the path with the lowest cost, and the heuristic of C is left
unchanged because it matches the actual cost. Paths H and I are solved because the
heuristic for those paths is 0, but path A⇢D still needs to be evaluated because it is
part of an AND:
f(D⇢J) = g(J) + h(J) = 1 + 0 = 1
so the heuristic of node D is updated to 1. Then:
f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 1 = 5
As we can see, the path f(A⇢C+D) is now solved and the tree has become a solved
tree. In simple words, the main flow of this algorithm is to first compute the
heuristic values at level 1, then at level 2, and then update the values while moving
upward toward the root node.
Local search algorithms operate using a single current node (rather than multiple
paths) and generally move only to neighbors of that node.
In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an
objective function.
Hill climbing often makes rapid progress toward a solution because it is usually
quite easy to improve a bad state.
Local maxima: a local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum. Hill-climbing algorithms
that reach the vicinity of a local maximum will be drawn upward toward the peak
but will then be stuck with nowhere else to go.
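The hill-climbing loop described above can be sketched in Python. The one-dimensional objective and the step-by-one neighborhood are illustrative assumptions, chosen so the single peak is easy to see:

```python
# A minimal hill-climbing sketch on a one-dimensional objective
# (the objective and neighborhood are illustrative assumptions).
def objective(x):
    return -(x - 3) ** 2          # single peak at x = 3

def hill_climb(start):
    """Move to the best neighbor while it improves the objective;
    stop at a peak (which in general may be only a local maximum)."""
    current = start
    while True:
        neighbors = [current - 1, current + 1]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):
            return current        # no neighbor is better: a peak
        current = best

print(hill_climb(-5))
```

On a bumpy objective the same loop would stop at whichever local maximum it reaches first, which is exactly the failure mode described above.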
Simulated annealing combines hill climbing with a random walk in a way that yields
both efficiency and completeness. In metallurgy, annealing is the process used to
temper or harden metals and glass by heating them to a high temperature and
then gradually cooling them, thus allowing the material to reach a low-energy
crystalline state.
In simulated annealing, we switch our point of view from hill climbing to gradient
descent (i.e., minimizing cost) and imagine the task of getting a ping-pong ball
into the deepest crevice in a bumpy surface. If we just let the ball roll, it will
come to rest at a local minimum. If we shake the surface, we can bounce the
ball out of the local minimum. The trick is to shake just hard enough to bounce
the ball out of local minima but not hard enough to dislodge it from the global
minimum. The simulated-annealing solution is to start by shaking hard (i.e., at a
high temperature) and then gradually reduce the intensity of the shaking (i.e.,
lower the temperature).
The innermost loop of the simulated-annealing algorithm (Figure 4.5) is quite
similar to hill climbing. Instead of picking the best move, however, it picks a
random move. If the move improves the situation, it is always accepted.
Otherwise, the algorithm accepts the move with some probability less than 1. The
probability decreases exponentially with the “badness” of the move—the amount
ΔE by which the evaluation is worsened. The probability also decreases as the
“temperature” T goes down: “bad” moves are more likely to be allowed at the
start when T is high, and they become more unlikely as T decreases. If the
schedule lowers T slowly enough, the algorithm will find a global optimum with
probability approaching 1.
Simulated annealing was first used extensively to solve VLSI layout problems in
the early 1980s. It has been applied widely to factory scheduling and other large-
scale optimization tasks.
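The inner loop described above (pick a random move; always accept improvements; accept bad moves with probability shrinking in ΔE and in T) can be sketched in Python. The cost function and the geometric cooling schedule are illustrative assumptions:

```python
import math
import random

# A minimal simulated-annealing sketch minimizing a simple cost function
# (the cost function and cooling schedule are illustrative assumptions).
def cost(x):
    return (x - 3) ** 2           # global minimum at x = 3

def simulated_annealing(start, t0=100.0, alpha=0.99, steps=2000, seed=0):
    random.seed(seed)
    current, best = start, start
    t = t0
    for _ in range(steps):
        nxt = current + random.uniform(-1, 1)     # a random move, not the best move
        delta_e = cost(nxt) - cost(current)
        # Always accept improvements; accept "bad" moves with a probability
        # that decreases with the badness delta_e and with the temperature t.
        if delta_e < 0 or random.random() < math.exp(-delta_e / t):
            current = nxt
        if cost(current) < cost(best):
            best = current
        t *= alpha                                # lower the temperature
    return best

print(round(simulated_annealing(start=-10.0), 2))
```

Early on (high T) almost any move is accepted, so the search wanders; as T decays the loop behaves more and more like greedy descent and settles near the minimum.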
Keeping just one node in memory might seem to be an extreme reaction to the
problem of memory limitations. The local beam search algorithm keeps track of
k states rather than just one. It begins with k randomly generated states. At each
step, all the successors of all k states are generated. If any one is a goal, the
algorithm halts. Otherwise, it selects the k best successors from the complete list
and repeats.
At first sight, a local beam search with k states might seem to be nothing more
than running k random restarts in parallel instead of in sequence. In fact, the two
algorithms are quite different. In a random-restart search, each search process
runs independently of the others. In a local beam search, useful information is
passed among the parallel search threads. In effect, the states that generate the
best successors say to the others, “Come over here, the grass is greener!” The
algorithm quickly abandons unfruitful searches and moves its resources to where
the most progress is being made.
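The "keep the k best successors" loop can be sketched in Python. The objective, the neighborhood, and the choice of k are illustrative assumptions, and a fixed iteration count stands in for the goal test:

```python
# A minimal local beam search sketch on a one-dimensional objective
# (the objective, neighborhood, and k are illustrative assumptions).
def objective(x):
    return -(x - 7) ** 2          # single peak at x = 7

def local_beam_search(states, k=3, iterations=50):
    """Keep the k best states; each round, generate all successors of
    all k states and select the k best from the combined pool."""
    beam = sorted(states, key=objective, reverse=True)[:k]
    for _ in range(iterations):
        successors = []
        for s in beam:
            successors += [s - 1, s + 1]      # neighbors of each beam state
        # Pool the current beam with all successors, then keep the k best:
        beam = sorted(set(beam + successors), key=objective, reverse=True)[:k]
    return beam[0]

print(local_beam_search([-20, 0, 40], k=3))
```

Because the k slots are filled from the combined pool, states near the peak crowd out descendants of the poor starting states: that is the "come over here, the grass is greener" effect described above.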
Assignments
Q. No | Question | CO | K Level
1 | Explain problem solving agents with examples. | CO2 | K1
2 | Explain how to search for solutions of a problem and measure performance. | CO2 | K1
3 | Explain any two uninformed search algorithms. | CO2 | K1
4 | | |
Part-A Questions
Define goal formulation.
Goals help organize behaviour by limiting the objectives that the agent is trying to
achieve. Goal formulation, based on the current situation and the agent's performance
measure, is the first step in problem solving.
Define problem formulation.
Problem formulation is the process of defining the scope of a problem, formulating one
or more specific questions about it, and establishing the assessment methods needed
to address the questions.
What is a search tree?
A search tree is a tree data structure used for locating specific keys from within a set.
In order for a tree to function as a search tree, the key for each node must be greater
than any keys in subtrees on the left, and less than any keys in subtrees on the right.
Define expanding.
Expanding is the process of applying each legal action to the current state in the problem
space, thereby generating a new set of states.
What is the difference between child node and parent node?
Any subnode of a given node is called a child node, and the given node, in turn, is the
child’s parent. Sibling nodes are nodes on the same hierarchical level under the same
parent node. Nodes higher than a given node in the same lineage are ancestors and
those below it are descendants.
Define frontier.
The frontier is a set of paths from a start node. The nodes at the end of the frontier are
outlined in green or blue. Initially the frontier is the set of empty paths from start
nodes.
Each database works differently so you need to adapt your search strategy for each
database. You may wish to develop a number of separate search strategies if your
research covers several different areas.
What is a loopy path?
Loopy paths are a special case of the more general concept of redundant paths, which exist
whenever there is more than one way to get from one state to another. Redundant paths
can be avoided with a data structure called the explored set (also known as the closed
list), which remembers every expanded node.
What are the types of uninformed search algorithms?
1. Breadth-first search
2. Depth-first search
3. Depth-limited search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional search
BFS has O(n) space complexity because in the worst case, the root is connected to all other
nodes and BFS would create a 2-level tree with the root at level 0 and all other nodes at level
1.
The time complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm
explores every node and edge. In a graph, the number of vertices is O(V), whereas the
number of edges is O(E). The space complexity of BFS can be expressed as O(V), where V is
the number of vertices.
The time complexity of DFS is O(V + E) where V is the number of vertices and E is the number
of edges .This is because the algorithm explores each vertex and edge exactly once. The
space complexity of DFS is O(V).
Define bidirectional search.
Bidirectional search is a graph search algorithm which finds the smallest path from the
source to the goal vertex. It runs two simultaneous searches: a forward search from the
source/initial vertex toward the goal vertex, and a backward search from the goal/target
vertex toward the source vertex.
What is a heuristic function?
A heuristic function, also simply called a heuristic, is a function that ranks alternatives in
search algorithms at each branching step based on available information to decide which
branch to follow. For example, it may approximate the exact solution.
HSLD means heuristic straight line distance between any two nodes in a problem space.
Define absolute error.
Absolute error is the difference between the measured or inferred value and the actual
value of a quantity. It refers to the magnitude of the difference between the prediction
of an observation and the true value of that observation.
Define relative error.
Relative error is defined as the ratio of the absolute error of the measurement to the
actual measurement. Using this method we can determine the magnitude of the
absolute error in terms of the actual size of the measurement.
What are memetic algorithms?
Memetic algorithms (MAs) are evolutionary algorithms that use another local search
rather than global search algorithms. MAs are evolutionary algorithms that use local
search processes to refine individuals. When we combine global and local search, it
becomes a global optimization process.
The Optimization and AI group works on the boundaries between optimization and
learning, in order to enable data to guide the design of optimization algorithms, and
to enable optimization algorithms to learn and adapt to application-specific structures
of problem instances.
Part-B Questions
Real Time Applications in Day to Day Life and to Industry
Sl. No | Real Time Application
1 | Banking
2 | Hospital
3 | Share market
4 | Educational Institutions
Content Beyond the Syllabus
Machine Learning Model
Before discussing the machine learning model, we need to understand the
following formal definition of ML given by Professor Mitchell −
“A computer program is said to learn from experience E with respect to some class of
tasks T and performance measure P, if its performance at tasks in T, as measured by
P, improves with experience E.”
The above definition focuses on three parameters, which are also the main
components of any learning algorithm, namely Task (T), Performance (P) and
Experience (E). In this context, we can simplify this definition as −
ML is a field of AI consisting of learning algorithms that −
Assessment Schedule
S.No | Name of the Assessment | Start Date | End Date | Portion
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS
TEXT BOOKS:
REFERENCE BOOKS:
2. Saroj Kaushik, “Logic & Prolog Programming”, First Edition, New Age International,
2008.
MINI PROJECT SUGGESTIONS
Thank you