
Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purposes of RMK
Group of Educational Institutions. If you have received this document through email
in error, please notify the system manager. This document contains proprietary
information and is intended only for the respective group / learning community. If
you are not the addressee, you should not disseminate, distribute or copy this
e-mail. Please notify the sender immediately by e-mail if you have received this
document by mistake and delete it from your system. If you are not the intended
recipient, you are notified that disclosing, copying, distributing or taking any action
in reliance on the contents of this information is strictly prohibited.
Digital Notes
20CB603 ARTIFICIAL INTELLIGENCE
Department: CSBS
Batch/Year: 2020-24/III
Created by: Mr.B.Jayaram / AP
Date: 08.02.2023
Table of Contents

S NO  CONTENTS

1     Contents
2     Course Objectives
3     Pre Requisites (Course Names with Code)
4     Syllabus (With Subject Code, Name, LTPC details)
5     Course Outcomes
6     CO-PO/PSO Mapping
7     Lecture Plan
8     Activity Based Learning
9     Lecture Notes
10    Assignments
11    Part A (Q & A)
12    Part B Qs
13    Supportive Online Certification Courses
14    Real time Applications in day to day life and to Industry
15    Contents Beyond the Syllabus
16    Assessment Schedule
17    Prescribed Text Books & Reference Books
18    Mini Project Suggestions


COURSE OBJECTIVES

• Understand the main approaches to artificial intelligence.


• Explore areas of application such as knowledge representation, natural
language processing and expert systems.
• Develop abilities to apply, build and modify decision models to solve real
problems.
• Design good evaluation functions and strategies for game playing
• Discuss the core concepts and algorithms of searching

PREREQUISITE

• 20MA103 - Introduction to Probability, Statistics and Calculus


• 20MA203 – Statistical Methods
• 20MA303 - Computational Statistics
• 20CB903 - Machine Learning
SYLLABUS
20CB603 Artificial Intelligence + Lab        L T P C
                                             3 0 2 4

 
UNIT I INTRODUCTION 9

Problems of AI, AI technique, Tic - Tac - Toe problem. Intelligent Agents, Agents &
environment, nature of environment, structure of agents, goal based agents, utility
based agents, learning agents. Problem Solving, Problems, Problem Space & search:
Defining the problem as state space search, production system, problem
characteristics, issues in the design of search programs.

UNIT II SEARCH TECHNIQUES 9

Problem solving agents, searching for solutions; uninformed search strategies: breadth
first search, depth first search, depth limited search, bidirectional search, comparing
uninformed search strategies. Heuristic search strategies: Greedy best-first search, A*
search, AO* search, memory bounded heuristic search: local search algorithms &
optimization problems: Hill climbing search, simulated annealing search, local beam
search

UNIT III CONSTRAINT SATISFACTION PROBLEMS 9

Local search for constraint satisfaction problems. Adversarial search, Games, optimal
decisions & strategies in games, the minimax search procedure, alpha-beta pruning,
additional refinements, iterative deepening.

UNIT IV KNOWLEDGE & REASONING 9

Knowledge representation issues, representation & mapping, approaches to


knowledge representation. Using predicate logic, representing simple fact in logic,
representing instant & ISA relationship, computable functions & predicates,
resolution, natural deduction. Representing knowledge using rules, Procedural verses
declarative knowledge, logic programming, forward verses backward reasoning,
matching, control Knowledge.
UNIT V PROBABILISTIC REASONING 9

Representing knowledge in an uncertain domain, the semantics of Bayesian


networks, Dempster-Shafer theory, Planning Overview, components of a planning
system, Goal stack planning, Hierarchical planning, other planning techniques.
Expert Systems: Representing and using domain knowledge, expert system shells,
and knowledge acquisition.

TOTAL: 45 PERIODS
Course Outcomes

Course Outcome Statements in Cognitive Domain, with the Cognitive/Affective
Level of the Course Outcome and the Expected Level of Attainment:

CO1  Demonstrate fundamental understanding of artificial intelligence
     (AI) and its problem solving techniques                    Understand (K2)  70%
CO2  Explain how Artificial Intelligence enables capabilities that
     are beyond conventional technology                         Analyse (K4)     70%
CO3  Implement and execute searching in AI                      Understand (K2)  70%
CO4  Understand how to represent the knowledge and its
     approaches                                                 Analyse (K4)     70%
CO5  Acquaint the Artificial Intelligence techniques for building
     well-engineered and efficient intelligent systems          Analyse (K4)     70%
CO-PO/PSO Mapping

Correlation Matrix of the Course Outcomes to Programme Outcomes and
Programme Specific Outcomes Including Course Enrichment Activities

(COs)  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3

CO1     3   3   2   1   2   1   1   1   1   1    1    1    2    2    1

CO2     3   3   2   1   2   1   1   1   1   1    1    1    2    2    1

CO3     3   3   2   1   2   1   1   1   1   1    1    1    2    2    1

CO4     3   3   2   1   2   1   1   1   1   1    1    1    2    2    1

CO5     3   3   2   1   2   1   1   1   1   1    1    1    2    2    1

Lecture Plan
UNIT II

S No  Topic                                                  Periods  Proposed date  Pertaining  Taxonomy  Mode of
                                                                      of delivery    CO          level     delivery

1     Problem solving agents, searching for solutions           1     08-02-2023     CO2         K4        Chalk and Talk
2     Uninformed search strategies: breadth first search        1     11-02-2023     CO2         K4        Chalk and Talk
3     Depth first search, depth limited search                  1     14-02-2023     CO2         K4        Chalk and Talk
4     Bidirectional search, comparing uninformed
      search strategies                                         1     15-02-2023     CO2         K4        Chalk and Talk
5     Heuristic search strategies: Greedy best-first search     1     18-02-2023     CO2         K4        Chalk and Talk
6     A* search, AO* search                                     1     21-02-2023     CO2         K4        Chalk and Talk
7     Memory bounded heuristic search: local search
      algorithms & optimization problems                        1     22-02-2023     CO2         K4        Chalk and Talk
8     Hill climbing search, simulated annealing search          1     25-02-2023     CO2         K4        Chalk and Talk
9     Local beam search                                         1     28-02-2023     CO2         K4        Chalk and Talk
Activity based learning
(Model building/Prototype)

S NO TOPICS

1 https://wordmint.com/public_puzzles/2387238
2.1 Problem Solving Agents

Problem solving agents decide what to do by finding sequences of actions that
lead to desirable states, in order to achieve a particular goal. Problem solving
begins with precise definitions of problems and their solutions. The problems can
then be solved using search algorithms, which are classified into two types:
uninformed search strategies and informed search strategies.

Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving.

Problem formulation is the process of deciding what actions and states to


consider, given a goal.

The process of looking for a sequence of actions that reaches the goal is called
search. A search algorithm takes a problem as input and returns a solution in the
form of an action sequence. Once a solution is found, the actions it recommends
can be carried out. This is called the execution phase.

Example for a problem solving agent is shown in below figure:


Components of the problem

The problem can be divided into five components:

1. The initial state that the agent starts in.


2. A description of the possible actions available to the agent. Given a particular
state s, ACTIONS(s) returns the set of actions that can be executed in s.
3. A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from
doing action a in state s.
4. The goal test, which determines whether a given state is a goal state.
5. A path cost function that assigns a numeric cost to each path. The problem-solving
agent chooses a cost function that reflects its own performance measure. The step
cost of taking action a in state s to reach state s′ is denoted by c(s, a, s′).
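The five components above can be sketched in code. The following is a minimal illustration for a made-up route-finding problem; the class name, graph, and costs are this note's own example, not part of the original material.

```python
# Sketch of the five problem components for a toy route-finding problem.
class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial = initial          # 1. the initial state
        self.goal = goal
        self.graph = graph              # state -> {neighbour: step cost}

    def actions(self, s):               # 2. ACTIONS(s)
        return list(self.graph.get(s, {}))

    def result(self, s, a):             # 3. transition model RESULT(s, a)
        return a                        # here an action is "move to neighbour a"

    def goal_test(self, s):             # 4. the goal test
        return s == self.goal

    def step_cost(self, s, a, s2):      # 5. c(s, a, s')
        return self.graph[s][a]

graph = {'S': {'A': 1, 'B': 2}, 'A': {'G': 5}, 'B': {'G': 1}, 'G': {}}
p = RouteProblem('S', 'G', graph)
print(p.actions('S'), p.goal_test('G'))   # ['A', 'B'] True
```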

2.2 Searching for Solutions


A solution is an action sequence, so search algorithms work by considering various
possible action sequences. The possible action sequences starting at the initial state
form a search tree with the initial state at the root; the branches are actions and the
nodes correspond to states in the state space of the problem.

We do this by expanding the current state; that is, applying each legal action to the
current state, thereby generating a new set of states. In this case, we add branches
from the parent node to child nodes:

The general TREE-SEARCH algorithm is shown informally in Figure 3.7. Search


algorithms all share this basic structure; they vary primarily according to how they
choose which state to expand next—the so-called search strategy.
Infrastructure needed for Search algorithms

Search algorithms require a data structure to keep track of the search tree that is
being constructed.

For each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the
initial state to the node, as indicated by the parent pointers.

The appropriate data structure for this is a queue. The operations on a queue are
as follows:

EMPTY?(queue) returns true only if there are no more elements in the queue.
POP(queue) removes the first element of the queue and returns it.
INSERT(element, queue) inserts an element and returns the resulting queue.
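The node structure and the queue operations above can be sketched as follows; this is an illustrative rendering using Python's deque, with names mirroring the notes (STATE, PARENT, ACTION, PATH-COST).

```python
from collections import deque

# The four-field node structure used by tree search.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost

def is_empty(queue):         # EMPTY?(queue)
    return len(queue) == 0

def pop(queue):              # POP(queue): remove and return the first element
    return queue.popleft()

def insert(element, queue):  # INSERT(element, queue)
    queue.append(element)
    return queue

frontier = deque()
insert(Node('S'), frontier)
child = Node('A', parent=frontier[0], action='go-A', path_cost=1)
insert(child, frontier)
print(pop(frontier).state)   # prints S (FIFO order)
```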

Measuring Problem Solving Performance

We can evaluate an algorithm’s performance in four ways:

• Completeness: Is the algorithm guaranteed to find a solution when there is


one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?

2.3 Uninformed Search Strategies (Blind Search)

Uninformed search strategies have no additional information about states beyond
that provided in the problem definition. All they can do is generate successors
and distinguish a goal state from a non-goal state. All search strategies are
distinguished by the order in which nodes are expanded.

Uninformed search strategies can be classified into various types as below:

1) Breadth – First Search


2) Depth First Search
3) Depth Limited Search
4) Bidirectional Search.
Breadth-first search

1. Breadth-first search is the most common search strategy for traversing a tree
or graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
2. BFS algorithm starts searching from the root node of the tree and expands all
successor node at the current level before moving to nodes of next level.
3. The breadth-first search algorithm is an example of a general-graph search
algorithm.
4. Breadth-first search is implemented using a FIFO queue data structure.

In the below tree structure, we have shown the traversing of the tree using BFS
algorithm from the root node S to goal node K. BFS search algorithm traverse in
layers, so it will follow the path which is shown by the dotted arrow, and the
traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K  

Time Complexity: The time complexity of BFS is given by the number of nodes
traversed until the shallowest goal node, where d is the depth of the shallowest
solution and b is the branching factor (the maximum number of successors of any
node):
T(b) = 1 + b + b² + b³ + ... + b^d = O(b^d)
Space Complexity: The space complexity of BFS is given by the memory size of
the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at
some finite depth, then BFS will find a solution.
Optimality: BFS is optimal if the path cost is a non-decreasing function of the
depth of the node.
Advantages:

1. BFS will provide a solution if any solution exists.
2. If there is more than one solution for a given problem, then BFS will provide
the minimal solution, i.e., the one requiring the least number of steps.

Disadvantages:

1. It requires lots of memory, since each level of the tree must be saved into
memory in order to expand the next level.
2. BFS needs lots of time if the solution is far away from the root node.
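The layer-by-layer behaviour described above can be sketched with a short BFS over an adjacency list. The graph below is an illustrative stand-in, not the figure from the notes.

```python
from collections import deque

# BFS: expand all nodes at the current depth before going deeper.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # shallowest solution found first
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E'], 'E': ['K']}
print(bfs(graph, 'S', 'K'))   # ['S', 'B', 'E', 'K']
```

Because the frontier is FIFO, the first path that reaches the goal is also a shallowest one, matching the completeness and optimality claims above.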

Depth-first Search

1. Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
2. It is called depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
3. DFS uses a stack data structure for its implementation.
4. The process of the DFS algorithm is similar to the BFS algorithm.

In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:

Root node--->Left node ----> right node.

It will start searching from root node S, and traverse A, then B, then D and E, after
traversing E, it will backtrack the tree as E has no other successor and still goal node
is not found. After backtracking it will traverse node C and then G, and here it will
terminate as it found goal node.
Completeness: DFS is complete within a finite state space, as it will expand every
node within a limited search tree.
Time Complexity: The time complexity of DFS is proportional to the number of
nodes traversed by the algorithm. It is given by:
T(b) = 1 + b + b² + ... + b^m = O(b^m)
where m is the maximum depth of any node, which can be much larger than d
(the shallowest solution depth).
Space Complexity: DFS needs to store only a single path from the root node,
hence the space complexity of DFS is equivalent to the size of the fringe set,
which is O(bm).
Optimality: DFS is non-optimal, as it may take a large number of steps or incur a
high cost to reach the goal node.
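The recursion and backtracking described above can be sketched as follows; the graph is an illustrative stand-in for the figure in the notes.

```python
# Recursive DFS: follow each path to its greatest depth, backtracking at dead ends.
def dfs(graph, node, goal, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for succ in graph.get(node, []):     # left-to-right, deepest first
        if succ not in visited:
            sub = dfs(graph, succ, goal, visited)
            if sub is not None:
                return [node] + sub      # unwinding carries the path back up
    return None                          # dead end: backtrack

graph = {'S': ['A', 'C'], 'A': ['B', 'D'], 'B': [], 'D': [], 'C': ['G']}
print(dfs(graph, 'S', 'G'))   # ['S', 'C', 'G']
```

Note how the search fully explores the A subtree (B, then D) and backtracks before it ever tries C, mirroring the worked description above.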

Depth Limited Search

A depth-limited search algorithm is similar to depth-first search with a
predetermined limit ℓ. Depth-limited search removes the drawback of the infinite
path in depth-first search: in this algorithm, a node at the depth limit is treated
as if it has no successor nodes.

Depth-limited search can be terminated with two Conditions of failure:

1. Standard failure value: indicates that the problem does not have any solution.
2. Cutoff failure value: indicates that there is no solution for the problem within
the given depth limit.

Completeness: DLS is complete if the solution is above the depth limit.
Time Complexity: The time complexity of DLS is O(b^ℓ).
Space Complexity: The space complexity of DLS is O(bℓ).
Optimality: Depth-limited search can be viewed as a special case of DFS, and it is
also not optimal, even if ℓ > d.
Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

1. Depth-limited search also has a disadvantage of incompleteness.


2. It may not be optimal if the problem has more than one solution.
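The two failure values can be made concrete with a short sketch; the graph and limits below are illustrative.

```python
# Depth-limited search distinguishing 'cutoff' from standard failure (None).
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                  # out of depth; a solution may lie deeper
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = dls(graph, succ, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None   # None = no solution at all

graph = {'S': ['A'], 'A': ['B'], 'B': ['G']}
print(dls(graph, 'S', 'G', 1))   # 'cutoff' (goal lies below the limit)
print(dls(graph, 'S', 'G', 3))   # ['S', 'A', 'B', 'G']
```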

Bidirectional Search

The bidirectional search algorithm runs two simultaneous searches: one from the
initial state, called the forward search, and the other from the goal node, called
the backward search. Bidirectional search replaces one single search graph with
two small subgraphs, one starting the search from the initial vertex and the other
starting from the goal vertex. The search stops when these two graphs intersect
each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

In the below search tree, the bidirectional search algorithm is applied. The
algorithm divides one graph/tree into two sub-graphs. It starts traversing from
node 1 in the forward direction and from goal node 16 in the backward direction.
The algorithm terminates at node 9, where the two searches meet.

Completeness: Bidirectional search is complete if we use BFS in both searches.
Time Complexity: The time complexity of bidirectional search using BFS is
O(b^(d/2)), since each search only needs to reach half the solution depth.
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
Optimality: Bidirectional search is optimal.

Advantages:

Bidirectional search is fast.


Bidirectional search requires less memory

Disadvantages:

Implementation of the bidirectional search tree is difficult.


In bidirectional search, one should know the goal state in advance.
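A minimal sketch of the idea, using BFS from both ends over an undirected (symmetric) graph; the graph is a made-up stand-in that meets at node 9 like the example above.

```python
from collections import deque

# Bidirectional BFS: two frontiers expand alternately until they intersect.
def bidirectional_search(graph, start, goal):
    if start == goal:
        return start
    front = {start: None}                 # visited by the forward search
    back = {goal: None}                   # visited by the backward search
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, seen, other in ((qf, front, back), (qb, back, front)):
            node = q.popleft()
            for succ in graph.get(node, []):
                if succ in other:         # the two searches intersect here
                    return succ
                if succ not in seen:
                    seen[succ] = node
                    q.append(succ)
    return None

graph = {1: [2], 2: [1, 9], 9: [2, 16], 16: [9]}
print(bidirectional_search(graph, 1, 16))   # 9 (the meeting node)
```

Reconstructing the full path would splice the forward parent chain to the meeting node with the backward chain from it; that bookkeeping is omitted here for brevity.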
Comparing Uninformed Search Strategies

2.4 Informed Search Algorithms (or) Heuristic Algorithms

The informed search algorithm is more useful for large search spaces. Informed
search uses the idea of a heuristic, so it is also called heuristic search.

Heuristic function: A heuristic is a function used in informed search to find the
most promising path. It takes the current state of the agent as its input and
produces an estimate of how close the agent is to the goal. The heuristic
method might not always give the best solution, but it is guaranteed to find a
good solution in reasonable time. A heuristic function estimates how close a
state is to the goal. It is represented by h(n), and it estimates the cost of
an optimal path between the pair of states. The value of the heuristic
function is always positive.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual cost of the
optimal path to the goal. Hence the heuristic cost should be less than or equal
to the actual cost; an admissible heuristic never overestimates.
Pure Heuristic Search:

Pure heuristic search is the simplest form of heuristic search algorithm. It expands
nodes based on their heuristic value h(n). It maintains two lists, OPEN and
CLOSED: the CLOSED list holds nodes which have already been expanded, and
the OPEN list holds nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded; all its
successors are generated and n is placed in the CLOSED list. The algorithm
continues until a goal state is found.

In the informed search we will discuss two main algorithms which are given below:

•Best First Search Algorithm(Greedy search)


•A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears
best at that moment. It is a combination of depth-first search and breadth-first
search, guided by a heuristic function. Best-first search allows us to take the
advantages of both algorithms: at each step, we can choose the most promising
node. In the greedy best-first search algorithm, we expand the node which is
closest to the goal node, where closeness is estimated by the heuristic function,
i.e.

f(n) = h(n)

where h(n) = estimated cost from node n to the goal. Greedy best-first search
is implemented using a priority queue.

Best first search algorithm:

Step 1: Place the starting node into the OPEN list.


Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of
h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any node is a goal
node or not. If any successor node is goal node, then return success and terminate
the search, else proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function
f(n), and then checks whether the node is already in the OPEN or CLOSED list.
If the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
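The steps above can be sketched with a priority queue (heap) ordered by h(n). The graph and heuristic values below are illustrative choices in the spirit of the S⇢B⇢F⇢G example that follows, not the exact figure; for simplicity this sketch applies the goal test when a node is removed from OPEN rather than when successors are generated.

```python
import heapq

# Greedy best-first search: always expand the OPEN node with the lowest h(n).
def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]       # (h, state, path so far)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # lowest heuristic value
        if node == goal:
            return path
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 10, 'A': 9, 'B': 4, 'E': 8, 'F': 2, 'I': 6, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```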
Advantages:

1. Best-first search can switch between BFS and DFS, thus gaining the
advantages of both algorithms.
2. This algorithm is more efficient than BFS and DFS.

Disadvantages:

1. It can behave as an unguided depth-first search in the worst-case scenario.
2. It can get stuck in a loop, as DFS can.
3. This algorithm is not optimal.

Example:

Consider the below search problem, which we will traverse using greedy
best-first search. At each iteration, the node with the lowest value of the
evaluation function f(n) = h(n), given in the below table, is expanded.

In this search example, we are using two lists which


are OPEN and CLOSED Lists. Following are the iteration for traversing the
above example.
Expand the nodes of S and put in the CLOSED list
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration2: Open [E, F, A], Closed [S, B]
                  : Open [E, A], Closed [S, B, F]
Iteration3: Open [I, G, E, A], Closed [S, B, F]
                  : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G

Time Complexity: The worst-case time complexity of greedy best-first search is
O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search
is O(b^m), where m is the maximum depth of the search space.
Completeness: Greedy best-first search is incomplete, even if the given state
space is finite.
Optimality: Greedy best-first search is not optimal.

A* Search Algorithm

A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) together with g(n), the cost to reach node n from the
start state. It combines features of uniform-cost search (UCS) and greedy
best-first search, which lets it solve problems efficiently. The A* search
algorithm finds the shortest path through the search space using the heuristic
function; it expands a smaller search tree and provides an optimal result faster.
A* is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to
reach the node. Hence we can combine both costs as follows, and this sum is
called the fitness number:

f(n) = g(n) + h(n)
Algorithm of A* search:

Step1: Place the starting node in the OPEN list.


Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure
and stops.
Step 3: Select the node from the OPEN list which has the smallest value of the
evaluation function (g + h). If node n is the goal node, then return success and
stop; otherwise, go to Step 4.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED
list. For each successor n', check whether n' is already in the OPEN or CLOSED
list; if not, compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back
pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
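A compact A* sketch using f(n) = g(n) + h(n) follows. The weighted graph and heuristic values are chosen to be consistent with the iterations in the worked example further below (f(S)=5, f(S⇢A)=4, optimal path S⇢A⇢C⇢G with cost 6); they are a reconstruction, since the original figure is not reproduced here.

```python
import heapq

# A* search: order the OPEN list by f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, {}).items():
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # keep the cheapest route
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

graph = {'S': {'A': 1, 'G': 10}, 'A': {'B': 2, 'C': 1}, 'C': {'D': 3, 'G': 4}}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'C', 'G'], 6)
```

Because h here is admissible, the first time the goal is popped off the OPEN list its g value is the optimal path cost.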

Advantages:

1. The A* search algorithm performs better than other search algorithms.


2. A* search algorithm is optimal and complete.
3. This algorithm can solve very complex problems.

Disadvantages:

1. It does not always produce the shortest path, as it is partly based on
heuristics and approximation.
2. The A* search algorithm has some complexity issues.
3. The main drawback of A* is its memory requirement: it keeps all generated
nodes in memory, so it is not practical for various large-scale problems.

Example:

In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n) of
each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any
node from start state. Here we will use OPEN and CLOSED list.
Solution

Initialization: {(S, 5)}
Iteration 1: {(S--> A, 4), (S-->G, 10)}
Iteration 2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration 3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7),
(S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal
path with cost 6.

Points to remember:

1. A* algorithm returns the path which occurred first, and it does not search for
all remaining paths.
2. The efficiency of A* algorithm depends on the quality of heuristic.
3. The A* algorithm expands all nodes which satisfy the condition f(n) ≤ C*,
where C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:


I. The branching factor is finite.
II. Every action has a fixed, positive cost.

Optimal: A* search algorithm is optimal if it follows below two conditions:

1. Admissible: the first condition required for optimality is that h(n) should be
an admissible heuristic for A* tree search. An admissible heuristic is optimistic
in nature: it never overestimates the true cost.
2. Consistency: Second required condition is consistency for only A* graph-
search.

If the heuristic function is admissible, then A* tree search will always find the
least cost path.

Time Complexity: The time complexity of A* depends on the heuristic function;
the number of nodes expanded is exponential in the depth of the solution d. So
the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of A* search algorithm is O(b^d)


AO* Search

The AO* algorithm performs best-first search on AND-OR graphs. The AO*
method divides any given difficult problem into a smaller group of problems
that are then resolved using the AND-OR graph concept. AND-OR graphs are
specialized graphs used for problems that can be divided into smaller
sub-problems. The AND side of the graph represents a set of tasks that must
all be completed to achieve the main goal, while the OR side of the graph
represents alternative methods for accomplishing the same main goal.

The figure referred to above is an example of a simple AND-OR graph, in which
the goal of buying a car is broken down into smaller problems or tasks. One
task is to steal a car, which by itself accomplishes the main goal; the other is to
use your own money to purchase a car. The AND arc is used to indicate the
AND part of the graph, meaning that all sub-problems joined by the AND must
be resolved before the parent node or problem can be finished.

In the AO* algorithm, a knowledge-based search strategy, the start state and
the target state are already known, and the best path is identified using
heuristics. The informed search technique considerably reduces the algorithm's
time complexity. The AO* algorithm is far more effective at searching AND-OR
trees than the A* algorithm.

Working of AO* algorithm:

The evaluation function in AO* looks like this:

f(n) = g(n) + h(n)
f(n) = actual cost so far + estimated cost to go

where
          f(n) = the estimated total cost of the traversal through node n,
          g(n) = the actual cost from the initial node to the current node,
          h(n) = the estimated cost from the current node to the goal state.
Difference between the A* Algorithm and AO* algorithm

1. The A* algorithm and the AO* algorithm both work on best-first search.
2. Both are informed searches and work on given heuristic values.
3. A* always gives the optimal solution, but AO* does not guarantee an optimal
solution.
4. Once AO* finds a solution, it does not explore all remaining paths, whereas
A* explores all paths.
5. When compared to the A* algorithm, the AO* algorithm uses less memory.
6. Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.

Example:

In the example below, the value given below each node is its heuristic value,
i.e. h(n). Each edge length is taken as 1.

Step 1
Using the evaluation function f(n) = g(n) + h(n), start from node A:

f(A⇢B) = g(B) + h(B) = 1 + 5 = 6
          (here g(n) = 1 is taken by default for path cost)
f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 4 = 8
          (C and D are added together because they are in AND)

So path A⇢B is chosen, as f(A⇢B) is the minimum.

Step 2

According to the answer of step 1, explore node B


The values of E & F are calculated as follows:

f(B⇢E) = g(E) + h(E)
f(B⇢E) = 1 + 7
= 8

f(B⇢F) = g(F) + h(F)
f(B⇢F) = 1 + 9
= 10
So by the above calculation, the B⇢E path is chosen as the minimum path, i.e.
f(B⇢E). Because B's heuristic value differs from its actual value, the heuristic
is updated and the minimum-cost path is selected; the minimum value in our
situation is 8. Therefore, the heuristic for A must be updated due to the change
in B's heuristic, so we need to calculate it again:

f(A⇢B) = g(B) + updated h(B)
= 1 + 8
= 9

We have updated all values in the above tree.
Step 3

By comparing f(A⇢B) and f(A⇢C+D), f(A⇢C+D) is shown to be smaller, i.e.
8 < 9. Now explore f(A⇢C+D), so the current node is C:

f(C⇢G) = g(G) + h(G) = 1 + 3 = 4
f(C⇢H+I) = g(H) + h(H) + g(I) + h(I) = 1 + 0 + 1 + 0 = 2
          (H and I are added because they are in AND)

f(C⇢H+I) is selected as the path with the lowest cost, and the heuristic is left
unchanged because it matches the actual cost. Paths H & I are solved because
the heuristic for those paths is 0, but path A⇢D still needs to be calculated
because D is part of the AND:

f(D⇢J) = g(J) + h(J) = 1 + 0 = 1

The heuristic of node D is therefore updated to 1, giving

f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 1 = 5

As we can see, path f(A⇢C+D) gets solved and this tree has now become a
solved tree. In simple words, the main flow of this algorithm is to first find the
level-1 heuristic values, then level 2, and after that update the values going
upward, towards the root node.

Memory Bound Heuristic Search

Recursive best-first search (RBFS) is a simple recursive algorithm that attempts


to mimic the operation of standard best-first search, but using only linear space.
The algorithm is shown in Figure 3.26. Its structure is similar to that of a recursive
depth-first search, but rather than continuing indefinitely down the current path, it
uses the f limit variable to keep track of the f-value of the best alternative path
available from any ancestor of the current node. If the current node exceeds this
limit, the recursion unwinds back to the alternative path. As the recursion unwinds,
RBFS replaces the f-value of each node along the path with a backed-up value—the
best f-value of its children. In this way, RBFS remembers the f-value of the best leaf
in the forgotten subtree and can therefore decide whether it is worth
re-expanding that subtree at some later time.

function RECURSIVE-BEST-FIRST-SEARCH(problem) returns a solution, or failure
   return RBFS(problem, MAKE-NODE(INITIAL-STATE[problem]), ∞)

function RBFS(problem, node, f_limit) returns a solution, or failure and a new f-cost limit
   if GOAL-TEST[problem](STATE[node]) then return node
   successors ← EXPAND(node, problem)
   if successors is empty then return failure, ∞
   for each s in successors do s.f ← max(g(s) + h(s), node.f)
   loop do
      best ← the lowest f-value node in successors
      if best.f > f_limit then return failure, best.f
      alternative ← the second-lowest f-value among successors
      result, best.f ← RBFS(problem, best, min(f_limit, alternative))
      if result ≠ failure then return result
Local Search and Optimization Problems

Local search algorithms operate using a single current node (rather than multiple
paths) and generally move only to neighbors of that node.

In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an
objective function.

To understand local search, we find it useful to consider the state-space landscape


(as in Figure 4.1). A landscape has both "location" (defined by the state) and
"elevation" (defined by the value of the heuristic cost function or objective
function). If elevation corresponds to cost, then the aim is to find the lowest
valley, a global minimum; if elevation corresponds to an objective function, then
the aim is to find the highest peak, a global maximum. (You can convert from
one to the other just by inserting a minus sign.) Local search algorithms explore
this landscape. A complete local search algorithm always finds a goal if one
exists; an optimal algorithm always finds a global minimum/maximum.
Hill Climbing Algorithm

The hill-climbing search algorithm (steepest-ascent version) is shown in Figure 4.2.


It is simply a loop that continually moves in the direction of increasing value—that
is, uphill. It terminates when it reaches a “peak” where no neighbor has a higher
value. Hill climbing is sometimes called greedy local search because it grabs a good
neighbor state without thinking ahead about where to go next.

Hill climbing often makes rapid progress toward a solution because it is usually
quite easy to improve a bad state.

Problems in Hill Climbing

Local maxima: a local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum. Hill-climbing algorithms
that reach the vicinity of a local maximum will be drawn upward toward the peak
but will then be stuck with nowhere else to go.

Ridges: a ridge is shown in Figure 4.4. Ridges result in a sequence of local


maxima that is very difficult for greedy algorithms to navigate.

Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat


local maximum, from which no uphill exit exists, or a shoulder, from which
progress is possible. (See Figure 4.1.) A hill-climbing search might get lost on the
plateau.
Stochastic hill climbing chooses at random from among the uphill moves; the
probability of selection can vary with the steepness of the uphill move. First-
choice hill climbing implements stochastic hill climbing by generating successors
randomly until one is generated that is better than the current state. This is a
good strategy when a state has many (e.g., thousands) of successors.

The hill-climbing algorithms described so far are incomplete—they often fail to


find a goal when one exists because they can get stuck on local maxima.
Random-restart hill climbing adopts the well-known adage, “If at first you don’t
succeed, try, try again.” It conducts a series of hill-climbing searches from
randomly generated initial states, until a goal is found. It is trivially complete
with probability approaching 1, because it will eventually generate a goal state as
the initial state. If each hill-climbing search has a probability p of success, then
the expected number of restarts required is 1/p.
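The ideas above can be sketched in a few lines of Python. The 1-D landscape below is an illustrative assumption: state 2 is a local maximum that traps plain hill climbing, while random restarts eventually reach the global maximum at state 7.

```python
import random

def hill_climb(values, start):
    """Steepest-ascent hill climbing on a 1-D landscape.
    values: list where values[i] is the objective value of state i."""
    current = start
    while True:
        neighbors = [n for n in (current - 1, current + 1)
                     if 0 <= n < len(values)]
        best = max(neighbors, key=lambda n: values[n])
        if values[best] <= values[current]:
            return current            # a peak: no neighbor is higher
        current = best

def random_restart_hill_climb(values, restarts=20, seed=0):
    """Restart from random initial states and keep the best peak found."""
    rng = random.Random(seed)
    return max((hill_climb(values, rng.randrange(len(values)))
                for _ in range(restarts)), key=lambda s: values[s])

# Illustrative landscape: local maximum at state 2, global maximum at state 7.
landscape = [1, 3, 5, 2, 0, 4, 8, 9, 6, 1]
```

Starting from state 1, `hill_climb` stops at the local maximum 2; with enough restarts, `random_restart_hill_climb` reaches the global maximum at state 7.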
Simulated Annealing

Simulated annealing combines hill climbing with a random walk in a way that yields both
efficiency and completeness. In metallurgy, annealing is the process used to
temper or harden metals and glass by heating them to a high temperature and
then gradually cooling them, thus allowing the material to reach a low energy
crystalline state.

In simulated annealing, we switch our point of view from hill climbing to gradient
descent (i.e., minimizing cost) and imagine the task of getting a ping-pong ball
into the deepest crevice in a bumpy surface. If we just let the ball roll, it will
come to rest at a local minimum. If we shake the surface, we can bounce the
ball out of the local minimum. The trick is to shake just hard enough to bounce
the ball out of local minima but not hard enough to dislodge it from the global
minimum. The simulated-annealing solution is to start by shaking hard (i.e., at a
high temperature) and then gradually reduce the intensity of the shaking (i.e.,
lower the temperature).
The innermost loop of the simulated-annealing algorithm (Figure 4.5) is quite
similar to hill climbing. Instead of picking the best move, however, it picks a
random move. If the move improves the situation, it is always accepted.
Otherwise, the algorithm accepts the move with some probability less than 1. The
probability decreases exponentially with the “badness” of the move—the amount
ΔE by which the evaluation is worsened. The probability also decreases as the
“temperature” T goes down: “bad” moves are more likely to be allowed at the
start when T is high, and they become more unlikely as T decreases. If the
schedule lowers T slowly enough, the algorithm will find a global optimum with
probability approaching 1.
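A minimal Python sketch of this inner loop follows. The cost function, neighbour move, and cooling schedule are illustrative assumptions, not prescribed by the algorithm.

```python
import math
import random

def simulated_annealing(cost, neighbor, start, schedule, seed=0):
    """Accept a worse move with probability e^(-ΔE/T); T falls over time."""
    rng = random.Random(seed)
    current = start
    for t in schedule:                       # t plays the role of temperature T
        nxt = neighbor(current, rng)
        delta = cost(nxt) - cost(current)    # ΔE: how much worse the move is
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = nxt
    return current

# Illustrative bumpy 1-D landscape over the integers 0..99 (to be minimized).
def cost(x):
    return (x - 70) ** 2 / 100 + 6 * math.cos(x / 3)

def neighbor(x, rng):
    return min(99, max(0, x + rng.choice([-1, 1])))

schedule = [10 * 0.99 ** k for k in range(2000)]   # geometric cooling
best = simulated_annealing(cost, neighbor, start=5, schedule=schedule)
```

Early on (high T) almost any move is accepted, so the search can escape poor regions; late on (low T) the loop behaves like greedy descent, so the final state settles into a deep valley.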

Simulated annealing was first used extensively to solve VLSI layout problems in
the early 1980s. It has been applied widely to factory scheduling and other large-
scale optimization tasks.

Local Beam Search

Keeping just one node in memory might seem to be an extreme reaction to the
problem of memory limitations. The local beam search algorithm keeps track of
k states rather than just one. It begins with k randomly generated states. At each
step, all the successors of all k states are generated. If any one is a goal, the
algorithm halts. Otherwise, it selects the k best successors from the complete list
and repeats.

At first sight, a local beam search with k states might seem to be nothing more
than running k random restarts in parallel instead of in sequence. In fact, the two
algorithms are quite different. In a random-restart search, each search process
runs independently of the others. In a local beam search, useful information is
passed among the parallel search threads. In effect, the states that generate the
best successors say to the others, “Come over here, the grass is greener!” The
algorithm quickly abandons unfruitful searches and moves its resources to where
the most progress is being made.
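A compact Python sketch of local beam search on the same style of 1-D landscape; the landscape and parameters are illustrative assumptions.

```python
import heapq

def local_beam_search(successors, value, starts, k, steps):
    """Keep the k best states among the current states and all their successors."""
    beam = list(starts)
    for _ in range(steps):
        pool = {s for state in beam for s in successors(state)} | set(beam)
        beam = heapq.nlargest(k, pool, key=value)   # only the k best survive
    return max(beam, key=value)

# Illustrative landscape: the global maximum (value 9) sits at state 7.
landscape = [1, 3, 5, 2, 0, 4, 8, 9, 6, 1]

def successors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]

best = local_beam_search(successors, lambda x: landscape[x],
                         starts=[0, 3, 4], k=3, steps=10)   # best == 7
```

Because all k states feed one shared successor pool, a state near a good peak pulls the whole beam toward it, unlike k independent random restarts.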
Assignments
Q. No.  Question                                                               CO     K Level

1.  Explain problem solving agents with examples.                              CO2    K1
2.  Explain how to search for solutions of a problem and measure performance.  CO2    K1
3.  Explain any two uninformed search algorithms.                              CO2    K1
Part-A Questions

Define goal formulation.

Goals help organize behaviour by limiting the objectives that the agent is trying to
achieve. Goal formulation, based on the current situation and the agent's performance
measure, is the first step in problem solving.

Define problem formulation.

Problem formulation is the process of defining the scope of a problem, formulating one
or more specific questions about it, and establishing the assessment methods needed
to address the questions.

Define search tree.

In problem solving, a search tree is the tree generated during a search: the initial
state is at the root, branches correspond to actions, and nodes correspond to states
in the state space of the problem.

What is meant by expanding and generating?

Expanding is a process of applying each legal action to the current state in problem
space, thereby generating a new set of states.
 
What is the difference between child node and parent node?

Any subnode of a given node is called a child node, and the given node, in turn, is the
child’s parent. Sibling nodes are nodes on the same hierarchical level under the same
parent node. Nodes higher than a given node in the same lineage are ancestors and
those below it are descendants.

Define frontier.

The frontier is a set of paths from a start node. The nodes at the end of the frontier are
outlined in green or blue. Initially the frontier is the set of empty paths from start
nodes.

Define search strategy.

A search strategy determines the order in which nodes are chosen from the frontier for
expansion. Different strategies (breadth-first, depth-first, uniform-cost, etc.) expand
nodes in different orders, and they are compared in terms of completeness, optimality,
time complexity, and space complexity.
What is a loopy path?

A loopy path is a path that visits the same state more than once. Loopy paths are a
special case of the more general concept of redundant paths, which exist whenever there
is more than one way to get from one state to another. They can be avoided with a data
structure called the explored set (also known as the closed list), which remembers every
expanded node.

List out the types of uninformed search.

Breadth-first Search.
Depth-first Search.
Depth-limited Search.
Iterative deepening depth-first search.
Uniform cost search.
Bidirectional Search.

List out some types of informed types of search.

Greedy best-first search algorithm.


A* search algorithm.
AO* algorithm.

What is the time and space complexity of BFS?

The time complexity of BFS is O(V + E), since in the worst case the algorithm explores
every vertex and edge; V is the number of vertices and E the number of edges. The space
complexity is O(V): in the worst case (e.g., a root connected to every other node,
giving a two-level tree) the queue and the visited set hold almost all vertices at once.

What is the time and space complexity of DFS?

The time complexity of DFS is O(V + E), where V is the number of vertices and E is the
number of edges, because the algorithm explores each vertex and edge exactly once. The
space complexity of DFS is O(V).
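These complexity claims are easy to see in code: each traversal below touches every vertex and edge once. The graph and names are illustrative assumptions.

```python
from collections import deque

def bfs_order(graph, start):
    """Visit nodes level by level; each vertex and edge is processed once,
    giving O(V + E) time; the queue and visited set give O(V) space."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

def dfs_order(graph, start, visited=None):
    """Depth-first: follow one branch to the end before backtracking."""
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for v in graph[start]:
        if v not in visited:
            order += dfs_order(graph, v, visited)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
bfs_order(graph, 'A')   # ['A', 'B', 'C', 'D']
dfs_order(graph, 'A')   # ['A', 'B', 'D', 'C']
```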

What is the use of predetermined depth limit in depth limited search?

A depth-limited search algorithm is similar to depth-first search with a predetermined limit.


Depth-limited search can solve the drawback of the infinite path in depth-first search. In
this algorithm, a node at the depth limit is treated as if it has no successors.
What do you mean by bidirectional search?

Bidirectional search is a graph search algorithm that finds the shortest path from a
source to a goal vertex. It runs two simultaneous searches: a forward search from the
source/initial vertex toward the goal vertex, and a backward search from the goal/target
vertex toward the source vertex.

What are the four important criteria in any search algorithm?

The four important criteria are completeness, optimality, time complexity, and space
complexity. Search algorithms work by defining the problem (initial state, goal state,
state space, path cost, etc.) and conducting search operations to establish the best
solution to the given problem.

What is heuristic function?

A heuristic function, also simply called a heuristic, is a function that ranks alternatives in
search algorithms at each branching step based on available information to decide which
branch to follow. For example, it may approximate the exact solution.

What is the use of greedy best first search?

Simple and Easy to Implement: Greedy Best-First Search is a relatively straightforward


algorithm, making it easy to implement.
Fast and Efficient: Greedy Best-First Search is a very fast algorithm, making it ideal for
applications where speed is essential.
Low Memory Requirements: Greedy Best-First Search requires only a small amount of
memory, making it suitable for applications with limited memory.
Flexible: Greedy Best-First Search can be adapted to different types of problems and can
be easily extended to more complex problems.

What do you mean by hSLD?

hSLD denotes the straight-line distance heuristic: the straight-line (Euclidean) distance
between a state and the goal in the problem space. Because a straight line is never longer
than any actual path, hSLD never overestimates the true cost.
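For states described by (x, y) coordinates, hSLD is just the Euclidean distance; the coordinates here are illustrative.

```python
import math

def h_sld(state, goal):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(state[0] - goal[0], state[1] - goal[1])

h_sld((0, 0), (3, 4))   # 5.0
```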

Define absolute error?

Absolute error is the difference between measured or inferred value and the actual value
of a quantity. It refers to the magnitude of difference between the prediction of an
observation and the true value of that observation.

Define relative error?

The relative error is defined as the ratio of the absolute error of the measurement to the
actual measurement. Using this method we can determine the magnitude of the
absolute error in terms of the actual size of the measurement.
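The two definitions translate directly into code; the sample measurement below is illustrative.

```python
def absolute_error(measured, actual):
    """Magnitude of the difference between a measured and the true value."""
    return abs(measured - actual)

def relative_error(measured, actual):
    """Absolute error expressed relative to the true value."""
    return absolute_error(measured, actual) / abs(actual)

absolute_error(9.8, 10.0)   # ≈ 0.2
relative_error(9.8, 10.0)   # ≈ 0.02
```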
What is the use of recursive best first search?

It is somewhat more efficient than IDA*, because it retains the best alternative f-value.
It is an optimal algorithm if h(n) is admissible.
Its space complexity is O(bd), linear in the depth of the deepest node.

Define MA* algorithm.

MA* (Memory-bounded A*) is a variant of A* designed to run within a fixed memory limit.
It proceeds like A* until memory is full, then drops the least promising (highest
f-value) leaf nodes, backing their f-values up to their parents so that the forgotten
subtrees can be regenerated later if they become promising again.

Define SMA* algorithm.

SMA* or Simplified Memory Bounded A* is a shortest path algorithm based on the A*


algorithm. The main advantage of SMA* is that it uses a bounded memory, while the
A* algorithm might need exponential memory. All other characteristics of SMA* are
inherited from A*.

What is local search?

Local Search in Artificial Intelligence is an optimising algorithm to find the optimal


solution more quickly. Local search algorithms are used when we care only about the
solution itself and not about the path to that solution.

What is the need for optimization?

Many problems are pure optimization problems: the aim is to find the best state according
to an objective function, and the path taken to reach that state is irrelevant.
Optimization techniques make such problems (e.g., scheduling, circuit layout, portfolio
management) solvable where systematic path-finding search would be unnecessary or
infeasible.
Part-B Questions

Explain problem solving agents with examples.


Explain how to search for solutions of a problem and measure performance.
Explain any two uninformed search algorithms.
Explain breadth first search algorithm in detail with examples.
Explain depth first search algorithm in detail with examples.
Explain depth limited search algorithm in detail with examples.
Explain bidirectional search algorithm in detail with examples.
Compare the performance of uninformed search algorithms.
Explain any two informed search algorithms in detail.
Explain greedy best first search algorithm in detail with examples.
Explain A* search algorithm in detail with example.
Explain AO* search algorithm in detail with example.
Explain memory bounded heuristic algorithms in detail with example.
Explain recursive best first search algorithm in detail with example.
Compare informed and uninformed search algorithms.
Explain any two local search and optimization algorithms with example.
Explain hill climbing algorithm in detail with example.
Explain simulated annealing search algorithm in detail with example.
Explain local beam search algorithm in detail with example.
Compare the performance of search and optimization algorithms.
Supportive Online Courses
Sl. No.  Course              Link

1        Introduction to AI  https://onlinecourses.nptel.ac.in/noc23_cs05/announcements?force=true
Real Time Applications in Day
to Day life and to Industry
Sl. No.  Real Time Application

1        Banking
2        Hospital
3        Share market
4        Educational Institutions
Content Beyond the Syllabus
Machine Learning Model

Before discussing the machine learning model, we need to understand the
following formal definition of ML given by Professor Mitchell −

“A computer program is said to learn from experience E with respect to some class of
tasks T and performance measure P, if its performance at tasks in T, as measured by
P, improves with experience E.”

The above definition focuses on three parameters, which are also the main components of
any learning algorithm: task (T), performance (P), and experience (E). In this context,
we can simplify the definition as −
ML is a field of AI consisting of learning algorithms that −

Improve their performance (P)


At executing some task (T)
Over time with experience (E)
Based on the above, the following diagram represents a Machine Learning Model −
ASSESSMENT SCHEDULE

Tentative schedule for the Assessment During 2022-


2023 odd semester

S.NO  Name of the Assessment  Start Date   End Date  Portion

1     FIAT                    25-02-2023             Unit 1 & Unit 2
2
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS

TEXT BOOKS:

1. Stuart J. Russell, Peter Norvig, “Artificial Intelligence – A Modern Approach”, Third
Edition, Pearson Education, 2016.
2. Rich & Knight, “Artificial Intelligence”, Third Edition, Tata McGraw Hill, 2009.

REFERENCE BOOKS:

1. Patterson, “Introduction to Artificial Intelligence & Expert Systems”, First Edition,


Pearson, 2015

2. Saroj Kaushik, “Logic & Prolog Programming”, First Edition, New Age International,
2008.

MINI PROJECT SUGGESTIONS

Resume Parser
Fake News Detector
Translator App
Instagram Spam Detection
Object Detection System
Animal Species Prediction
Pneumonia Detection with Python
Teachable Machine

Thank you
