
Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community. If you are not the addressee, you should not
disseminate, distribute or copy it through e-mail. Please notify the sender
immediately by e-mail if you have received this document by mistake and delete
this document from your system. If you are not the intended recipient, you are
notified that disclosing, copying, distributing or taking any action in reliance on
the contents of this information is strictly prohibited.
22IT401 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Department: Information Technology


Batch/Year: 2022-2026 / II
Created by: Dr. T. Mahalingam, Dr. S. Selvakanmani
Date: 12-01-2024
Table of Contents
S.NO. CONTENTS SLIDE NO.

1 CONTENTS 5

2 COURSE OBJECTIVES 6

3 PRE REQUISITES (COURSE NAMES WITH CODE) 7

4 SYLLABUS (WITH SUBJECT CODE, NAME, LTPC DETAILS) 8

5 COURSE OUTCOMES (6) 12

6 CO- PO/PSO MAPPING 13

7 LECTURE PLAN – UNIT 1 15

8 ACTIVITY BASED LEARNING – UNIT 1 16


9 CROSSWORD PUZZLE 17
10 VIDEO LINK-QUIZ 18
11 TEST YOURSELF 19

12 LECTURE NOTES – UNIT 1 20

13 ASSIGNMENT 1- UNIT 1 68

14 PART A Q & A (WITH K LEVEL AND CO) 70

15 PART B Qs (WITH K LEVEL AND CO) 76

16 SUPPORTIVE ONLINE CERTIFICATION COURSES 77

17 REAL TIME APPLICATIONS IN DAY TO DAY LIFE AND TO INDUSTRY 79

18 CONTENTS BEYOND THE SYLLABUS 80

19 ASSESSMENT SCHEDULE 82

20 PRESCRIBED TEXT BOOKS & REFERENCE BOOKS 83

21 MINI PROJECT SUGGESTIONS 84


2. COURSE OBJECTIVES

Understand the concept of Artificial Intelligence

Familiarize with knowledge-based AI systems and approaches

Apply probabilistic approaches to AI

Identify the role of neural networks and NLP in designing AI models

Recognize the concepts of Machine Learning and its deterministic tools


3. PRE REQUISITES

PRE-REQUISITE CHART

22IT401 - ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Prerequisites:

22MA401 - Probability and Statistics

22CS303 - Design and Analysis of Algorithms
4. 22IT401 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (L T P C: 3 0 0 3)

OBJECTIVES
• Understand the concept of Artificial Intelligence

• Familiarize with knowledge-based AI systems and approaches

• Apply probabilistic approaches to AI

• Identify the role of neural networks and NLP in designing AI models

• Recognize the concepts of Machine Learning and its deterministic tools

UNIT 1 PROBLEM SOLVING AND SEARCH STRATEGIES

Introduction: What is AI, The Foundations of Artificial Intelligence, The History of Artificial
Intelligence, The State of the Art. Intelligent Agents: Agents and Environments, Good Behaviour:
The Concept of Rationality, The Nature of Environments, and The Structure of Agents. Solving
Problems by Searching: Problem-Solving Agents, Uninformed Search Strategies, Informed
(Heuristic) Search Strategies, Heuristic Functions. Beyond Classical Search: Local Search
Algorithms and Optimization Problems, Searching with Nondeterministic Actions and Partial
Observations, Online Search Agents and Unknown Environments. Constraint Satisfaction
Problems: Definition, Constraint Propagation, Backtracking Search, Local Search, The Structure of
Problems.

List of Exercise/Experiments

1. Implementation of uninformed search algorithm (BFS and DFS).

2. Implementation of Informed Search algorithm (A* and Hill Climbing Algorithm)

UNIT 2 KNOWLEDGE REPRESENTATION AND REASONING

Logical Agents: Knowledge-Based Agents, Propositional Logic, Propositional Theorem Proving,
Effective Propositional Model Checking, Agents Based on Propositional Logic. First-Order Logic:
Syntax and Semantics, Knowledge Engineering in FOL, Inference in First-Order Logic, Unification
and Lifting, Forward Chaining, Backward Chaining. Planning: Definition, Algorithms, Planning
Graphs, Hierarchical Planning, Multi-agent Planning. Knowledge Representation: Ontological
Engineering, Categories and Objects, Events, Mental Events and Mental Objects, Reasoning
Systems for Categories, Reasoning with Default Information, The Internet Shopping World.

List of Exercise/Experiments
1. Implementation of forward and backward chaining.
2. Implementation of unification algorithms.

UNIT 3 LEARNING
Learning from Examples: Forms of Learning, Supervised Learning, Learning Decision
Trees, Evaluating and Choosing the Best Hypothesis, The Theory of Learning, Regression
and Classification with Linear Models, Artificial Neural Networks. Applications: Human
computer interaction (HCI), Knowledge management technologies, AI for customer
relationship management, Expert systems, Data mining, text mining, and Web mining,
Other current topics.

List of Exercise/Experiments

1. NumPy Operations

2. NumPy arrays

3. NumPy Indexing and Selection

4. NumPy Exercise:

(i) Write code to create a 4x3 matrix with values ranging from 2 to 13.

(ii) Write code to replace the odd numbers by -1 in the following array.

(iii) Perform the following operations on an array of mobile phones prices 6999,
7500, 11999, 27899, 14999, 9999.

a) Create a 1d-array of mobile phones prices

b) Convert this array to float type

c) Append a new mobile having price of 13999 Rs. to this array

d) Reverse this array of mobile phones prices

e) Apply GST of 18% on mobile phones prices and update this array.

f) Sort the array in descending order of price

g) What is the average mobile phone price?
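A possible solution sketch for these NumPy exercises (the prices and the 18% GST rate are taken from the list above; the rest is standard NumPy):

    import numpy as np

    # (i) 4x3 matrix with values ranging from 2 to 13
    m = np.arange(2, 14).reshape(4, 3)

    # (ii) replace the odd numbers by -1
    a = np.arange(2, 14)
    a[a % 2 == 1] = -1

    # (iii) operations on the mobile phone prices
    prices = np.array([6999, 7500, 11999, 27899, 14999, 9999])  # (a) 1d-array
    prices = prices.astype(float)                               # (b) convert to float
    prices = np.append(prices, 13999.0)                         # (c) append Rs. 13999
    prices = prices[::-1]                                       # (d) reverse the array
    prices = prices * 1.18                                      # (e) apply 18% GST
    prices = np.sort(prices)[::-1]                              # (f) sort descending
    print(prices.mean())                                        # (g) average price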

TOTAL : 45 PERIODS

UNIT 4 FUNDAMENTALS OF MACHINE LEARNING
Motivation for Machine Learning, Applications, Machine Learning, Learning associations,
Classification, Regression, The Origin of machine learning, Uses and abuses of machine
learning, Success cases, How do machines learn, Abstraction and knowledge
representation, Generalization, Factors to be considered, Assessing the success of
learning, Metrics for evaluation of classification method, Steps to apply machine learning
to data, Machine learning process, Input data and ML algorithm, Classification of machine
learning algorithms, General ML architecture, Group of algorithms, Reinforcement
learning, Supervised learning, Unsupervised learning, Semi-Supervised learning,
Algorithms, Ensemble learning, Matching data to an appropriate algorithm.

List of Exercise/Experiments

1. Build linear regression models to predict housing prices using Python, using a data set
available in Google Colab.

2. Stock Ensemble-based Neural Network for Stock Market Prediction using Historical Stock
Data and Sentiment Analysis.

UNIT 5 MACHINE LEARNING AND TYPES

Supervised Learning, Regression, Linear regression, Multiple linear regression, A multiple
regression analysis, The analysis of variance for multiple regression, Examples for
multiple regression, Overfitting, Detecting overfit models: Cross validation, Cross
validation: The ideal procedure, Parameter estimation, Logistic regression, Decision trees:
Background, Decision trees, Decision trees for credit card promotion, An algorithm for
building decision trees, Attribute selection measure: Information gain, Entropy, Decision
Tree: Weekend example, Occam's Razor, Converting a tree to rules, Unsupervised
learning, Semi Supervised learning, Clustering, K-means clustering, Automated
discovery, Reinforcement learning, Multi-Armed Bandit algorithms, Influence diagrams,
Risk modelling, Sensitivity analysis, Causal learning.

List of Exercise/Experiments

Use Cases
Case Study 1: Churn Analysis and Prediction (Survival Modelling)

Cox-proportional models

Churn Prediction

Case Study 2: Credit card Fraud Analysis

Imbalanced Data

Neural Network

Case study 3: Sentiment Analysis or Topic Mining from New York Times

Similarity measures (Cosine Similarity, Chi-Square, N Grams)

Part-of-Speech Tagging

Stemming and Chunking

Case Study 4: Sales Funnel Analysis

A/B testing

Campaign effectiveness, Web page layout effectiveness

Scoring and Ranking

Case Study 5: Recommendation Systems and Collaborative filtering

User based

Item Based

Singular value decomposition–based recommenders

Case Study 6: Customer Segmentation and Value

Segmentation Strategies

Lifetime Value

Case Study 7: Portfolio Risk Conformance

Risk Profiling

Portfolio Optimization

Case Study 8: Uber Alternative Routing

Graph Construction

Route Optimization
5. COURSE OUTCOMES

Course Outcome Statements in the Cognitive Domain

Course Code | Course Outcome Statement | Cognitive Level of the Course Outcome | Expected Level of Attainment
C211.1 | Explain the problem solving and search strategies. | Understand (K2) | 70%
C211.2 | Demonstrate the techniques for knowledge representation and reasoning. | Apply (K3) | 70%
C211.3 | Interpret various forms of learning, artificial neural networks and their applications. | Apply (K3) | 70%
C211.4 | Experiment with various machine learning algorithms. | Analyse (K4) | 70%
C211.5 | Employ AI and machine learning algorithms to solve real world problems. | Apply (K3) | 70%
6. CO-PO/PSO MAPPING

Correlation Matrix of the Course Outcomes to Programme Outcomes and Programme Specific Outcomes Including Course Enrichment Activities

[Correlation matrix mapping course outcomes C211.1 (K2), C211.2 (K3), C211.3 (K3), C211.4 (K4) and C211.5 (K3) against Programme Outcomes PO1-PO12 and Programme Specific Outcomes PSO1-PSO3, with correlation levels 1 (low) to 3 (high).]
UNIT I

PROBLEM SOLVING AND SEARCH STRATEGIES
LECTURE PLAN – UNIT I

Sl.No | Topic | No. of Periods | Proposed Lecture Period | Actual Lecture Period | Pertaining CO(s) | Taxonomy Level | Mode of Delivery
1 | Introduction: What is AI, The Foundations of Artificial Intelligence, The History of Artificial Intelligence, The State of the Art | 1 | 03.01.2024 | | CO1 | K2 | MD1
2 | Intelligent Agents: Agents and Environments, Good Behaviour | 1 | 04.01.2024 | | CO1 | K2 | MD1
3 | The Concept of Rationality, The Nature of Environments, and The Structure of Agents; Solving Problems by Searching: Problem-Solving Agents | 1 | 06.01.2024 | | CO1 | K2 | MD1
4 | Uninformed Search Strategies | 1 | 08.01.2024 | | CO1 | K3 | MD1
5 | Searching with Nondeterministic Actions and Partial Observations | 1 | 09.01.2024 | | CO1 | K3 | MD1
6 | Searching with Nondeterministic Actions and Partial Observations | 1 | 10.01.2024 | | CO1 | K2 | MD1
7 | Online Search Agents and Unknown Environments | 1 | 11.01.2024 | | CO1 | K2 | MD1
8 | Constraint Satisfaction Problems: Definition, Constraint Propagation | 1 | 23.01.2024 | | CO1 | K2 | MD1
9 | Backtracking Search, Local Search, The Structure of Problems | 1 | 24.01.2024 | | CO1 | K2 | MD1
LECTURE PLAN – UNIT I

ASSESSMENT COMPONENTS          MODE OF DELIVERY

AC 1. Unit Test                MD 1. Oral presentation
AC 2. Assignment               MD 2. Tutorial
AC 3. Course Seminar           MD 3. Seminar
AC 4. Course Quiz              MD 4. Hands On
AC 5. Case Study               MD 5. Videos
AC 6. Record Work              MD 6. Field Visit
AC 7. Lab / Mini Project
AC 8. Lab Model Exam
AC 9. Project Review

ACTIVITY BASED LEARNING – UNIT I

COMPLETE THE PUZZLES GIVEN BELOW

ACTIVITY BASED LEARNING – UNIT I

QUIZ LINKS

Unit I:

https://www.proprofs.com/quiz-school/topic/artificial-intelligence

https://www.onlineinterviewquestions.com/artificial-intelligence-mcq/

https://www.javatpoint.com/artificial-intelligence-mcq

https://quizizz.com/admin/quiz/5c58fa461df8c7001b11e0e9/artificial-intelligence

https://kubra.com/artificial-intelligence-trivia/

VIDEO QUIZ :

https://www.youtube.com/watch?v=VYx4dWsK9OQ

Test Yourself

1. The intelligence displayed by humans and other animals is termed?


A. Constance
B. Ability
C. Natural intelligence
D. Cognition

2. Any device that perceives its environment and takes actions that maximize its
chance of success at some goal is termed?
A. Input

B. Intelligent agent

C. Data

D. Processor

3. In what year was Artificial intelligence founded as an academic discipline?


A. 1990

B. 1956

C. 1912

D. 1909

4. An evolved definition of Artificial Intelligence led to a phenomenon known as the ________.


A. Formulation

B. Data processing

C. AI Effect

D. Machination

5. Which of these is a tool used in Artificial Intelligence?


A. Art

B. Design

C. Input

D. Neural networks

Lecture Notes
Introduction:

What is AI:

In today's world, technology is growing very fast, and we come into contact with new technologies day by day.

One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It is currently working in a variety of subfields, ranging from general to specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."

So, we can define AI as:

"It is a branch of computer science by which we can create intelligent machines which can behave like humans, think like humans, and make decisions."

Artificial Intelligence exists when a machine has human-based skills such as learning, reasoning, and solving problems.

With Artificial Intelligence you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the awesomeness of AI.

It is believed that AI is not a new technology; some people say that, as per Greek myth, there were mechanical men in the early days which could work and behave like humans.
Foundations of Artificial Intelligence:

Artificial Intelligence is not just a part of computer science; it is vast and requires many other factors that can contribute to it. To create AI, we should first know how intelligence is composed. Intelligence is an intangible part of our brain, a combination of reasoning, learning, problem-solving, perception, language understanding, etc.

To achieve the above factors for a machine or software, Artificial Intelligence requires the following disciplines:

Mathematics
Biology
Psychology
Sociology
Computer Science
Neuroscience
Statistics
History of Artificial Intelligence

Artificial Intelligence is not a new word and not a new technology for researchers. This technology is much older than you might imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. Following are some milestones in the history of AI, tracing the journey from the birth of AI to its present-day development.
Maturation of Artificial Intelligence (1943-1952)

Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.

Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.

Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published "Computing Machinery and Intelligence", in which he proposed a test that can check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, called the Turing test.

The birth of Artificial Intelligence (1952-1956)

Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was named the "Logic Theorist". This program proved 38 of 52 mathematics theorems, and found new and more elegant proofs for some theorems.

Year 1956: The word "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.

At that time high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and the enthusiasm for AI was very high.

The golden years - Early enthusiasm (1956-1974)

Year 1966: Researchers emphasized developing algorithms which can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.

Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)

The duration between 1974 and 1980 was the first AI winter. An AI winter refers to a time period when computer scientists dealt with a severe shortage of government funding for AI research.

During AI winters, public interest in artificial intelligence decreased.

A boom of AI (1980-1987)

Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert.

In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.

The second AI winter (1987-1993)

The duration between 1987 and 1993 was the second AI winter.

Investors and governments again stopped funding AI research due to high costs and inefficient results. Even expert systems such as XCON proved very expensive to maintain.

The emergence of intelligent agents (1993-2011)

Year 1997: In 1997, IBM's Deep Blue beat the world chess champion Garry Kasparov and became the first computer to beat a world chess champion.

Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.

Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix also started using AI.
Deep learning, big data and artificial general intelligence (2011-present)

Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show where it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.

Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.

Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test".

Year 2018: The "Project Debater" from IBM debated complex topics with two master debaters and performed extremely well.

Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over a call, and the lady on the other side did not notice that she was talking to a machine.

Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science are now booming. Nowadays companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and will come with high intelligence.
Intelligent Agents: Agents and Environments

An AI system can be defined as the study of the rational agent and its environment. Agents sense the environment through sensors and act on their environment through actuators. An AI agent can have mental properties such as knowledge, belief, intention, etc.

What is an Agent?

An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in the cycle of perceiving, thinking, and acting. An agent can be:

Human Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract, which work as actuators.

Robotic Agent: A robotic agent can have cameras, an infrared range finder, and NLP for sensors, and various motors for actuators.

Software Agent: A software agent can have keystrokes and file contents as input, act on those inputs, and display output on the screen.

Hence the world around us is full of agents such as thermostats, cellphones, and cameras, and even we ourselves are agents.

Before moving forward, we should first know about sensors, effectors, and actuators.

Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the main four rules for an AI agent:

Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observation must be used to make decisions.
Rule 3: Decisions should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:
A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

A rational agent is said to perform the right things. AI is about creating rational agents that use game theory and decision theory for various real-world scenarios.

For an AI agent, the rational action is most important because in AI reinforcement learning algorithms, for each best possible action the agent gets a positive reward, and for each wrong action the agent gets a negative reward.

The Structure of Intelligent Agents

An agent's structure can be viewed as:
Agent = Architecture + Agent Program
Architecture = the machinery that the agent executes on.
Agent Program = an implementation of the agent function.
Types of AI Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These are given below:
Simple Reflex Agent
Model-based Reflex Agent
Goal-based Agent
Utility-based Agent
Learning Agent
1. Simple Reflex Agent:
o The simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
o These agents only succeed in a fully observable environment.
o The simple reflex agent does not consider any part of the percept history during its decision and action process.
o The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. An example is a room cleaner agent that works only if there is dirt in the room (a minimal sketch follows this list).
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They do not have knowledge of the non-perceptual parts of the current state.
o The rule sets are mostly too big to generate and to store.
o They are not adaptive to changes in the environment.
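A minimal sketch of a simple reflex agent in Python, using the classic two-square vacuum world as the room cleaner example (the percept format and the action names are illustrative assumptions, not fixed by these notes):

    def simple_reflex_vacuum_agent(percept):
        # The agent sees only the current percept (location, status);
        # it keeps no percept history.
        location, status = percept
        if status == "Dirty":            # condition-action rule: dirt -> Suck
            return "Suck"
        elif location == "A":            # otherwise move to the other square
            return "Right"
        else:
            return "Left"

    print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
    print(simple_reflex_vacuum_agent(("B", "Clean")))   # Left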
2. Model-based Reflex Agent
o The model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important factors:
1. Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
2. Internal State: a representation of the current state based on the percept history.
o These agents have the model, "which is knowledge of the world", and they perform actions based on the model.
Updating the agent state requires information about:
1. How the world evolves.
2. How the agent's actions affect the world.

3. Goal-based Agents
The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
The agent needs to know its goal, which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
They choose an action so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
4. Utility-based Agents
o These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success in a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
o The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose in order to perform the best action.
o The utility function maps each state to a real number to check how efficiently each action achieves the goals.
5. Learning Agents
A learning agent in AI is the type of agent which can learn from its past experiences; it has learning capabilities.

It starts to act with basic knowledge and is then able to act and adapt automatically through learning.

A learning agent has mainly four conceptual components, which are:

1. Learning element: responsible for making improvements by learning from the environment.

2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

3. Performance element: responsible for selecting external actions.

4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze performance, and look for new ways to improve that performance.
The Nature of Environments:

Some programs operate in an entirely artificial environment confined to keyboard input, databases, computer file systems and character output on a screen. In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time. A softbot designed to scan the online preferences of the customer and show interesting items to the customer works in a real as well as an artificial environment.

The most famous artificial environment is the Turing Test environment, in which one real and one artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.

Turing Test: The success of the intelligent behavior of a system can be measured with the Turing Test. Two persons and a machine to be evaluated participate in the test. Of the two persons, one plays the role of the tester. Each of them sits in a different room. The tester is unaware of who is the machine and who is the human. He interrogates by typing questions and sending them to both intelligences, to which he receives typed responses. The test aims at fooling the tester: if the tester fails to distinguish the machine's response from the human response, then the machine is said to be intelligent.
Solving Problems by Searching

Search algorithms are one of the most important areas of Artificial Intelligence. This topic will explain all about the search algorithms in AI.

Problem-Solving Agents
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use atomic representations. In this topic, we will learn various problem-solving search algorithms.

Search Algorithm Terminologies:
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
1. Search Space: Search space represents the set of possible solutions which a system may have.
2. Start State: The state from which the agent begins the search.
3. Goal Test: A function which observes the current state and returns whether the goal state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does, represented as a transition model.
Path Cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.
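The terminology above maps directly onto a small data structure. The sketch below is illustrative (the class name and the toy graph are assumptions, not from these notes); the same toy graph reappears in the BFS sketch later:

    class SearchProblem:
        def __init__(self, start, goal, graph, costs=None):
            self.start = start          # start state
            self.goal = goal            # used by the goal test
            self.graph = graph          # actions + transition model: state -> successors
            self.costs = costs or {}    # path cost: (state, successor) -> step cost

        def goal_test(self, state):
            return state == self.goal

        def successors(self, state):
            return self.graph.get(state, [])

        def step_cost(self, s, t):
            return self.costs.get((s, t), 1)   # default step cost of 1

    # A toy state space with start state S and goal state K.
    problem = SearchProblem(
        start="S", goal="K",
        graph={"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
               "E": ["H"], "F": ["I", "K"]},
    )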
Properties of Search Algorithms:
Following are the four essential properties of search algorithms, used to compare their efficiency:
Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any random input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then it is said to be an optimal solution.
Time Complexity: A measure of the time the algorithm needs to complete its task.
Space Complexity: The maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.

Types of search algorithms

Based on the search problems, we can classify the search algorithms into uninformed (blind) search and informed (heuristic) search algorithms.
Uninformed/Blind Search:
The uninformed search does not contain any domain knowledge, such as closeness or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search searches the tree without any information about the search space, such as the initial state, operators, and tests for the goal, so it is also called blind search. It examines each node of the tree until it achieves the goal node.
It can be divided into six main types:
Breadth-first search
Uniform cost search
Depth-first search
Iterative deepening depth-first search
Depth-limited search
Bidirectional search

Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search.
A heuristic is a technique which might not always be guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
Informed search can solve much more complex problems which could not be solved otherwise.
An example problem for informed search algorithms is the travelling salesman problem.
1. Greedy Search
2. A* Search
Uninformed Search Algorithms
Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms do not have additional information about state or search space other than how to traverse the tree, so they are also called blind search.
Following are the various types of uninformed search algorithms:
1. Breadth-first search
2. Depth-first search
3. Depth-limited search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional search
1. Breadth-first Search:
• Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to the nodes of the next level.
• The breadth-first search algorithm is an example of a general graph-search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one requiring the least number of steps.
Disadvantages:
• It requires lots of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Example:
In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS search algorithm traverses in layers, so the traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K

Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed in BFS up to the shallowest node, where d = depth of the shallowest solution and b = branching factor (the number of successors at every state):
T(b) = 1 + b + b² + ... + b^d = O(b^d)
Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.
Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
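A minimal BFS sketch matching the description above (FIFO queue, level-by-level expansion); the toy graph is an illustrative assumption, not the exact tree of the figure:

    from collections import deque

    def breadth_first_search(graph, start, goal):
        frontier = deque([[start]])      # FIFO queue of partial paths
        explored = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path              # the shallowest solution is found first
            for child in graph.get(node, []):
                if child not in explored:
                    explored.add(child)
                    frontier.append(path + [child])
        return None                      # no solution exists

    graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
             "E": ["H"], "F": ["I", "K"]}
    print(breadth_first_search(graph, "S", "K"))   # ['S', 'B', 'F', 'K']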

2. Depth-first Search
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called depth-first search because it starts from the root node and follows each path to its greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
Advantages:

DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.

It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantages:

There is a possibility that many states keep recurring, and there is no guarantee of finding a solution.

The DFS algorithm goes for deep-down searching, and it may sometimes enter an infinite loop.

Example:

In the search tree below, we show the flow of depth-first search, which follows the order:

Root node ---> left node ---> right node.

It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking, it will traverse node C and then G, where it will terminate, as it has found the goal node.
Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(n) = 1 + n² + n³ + ... + n^m = O(n^m)
where m = the maximum depth of any node, which can be much larger than d (the shallowest solution depth).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(b×m).
Optimality: The DFS search algorithm is non-optimal, as it may generate a large number of steps or high cost to reach the goal node.
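A minimal iterative DFS sketch using an explicit stack (LIFO), as described above; avoiding repeats along the current path is one simple way to handle the recurring-state problem:

    def depth_first_search(graph, start, goal):
        stack = [[start]]                # LIFO stack of partial paths
        while stack:
            path = stack.pop()
            node = path[-1]
            if node == goal:
                return path
            # reversed() so the leftmost child is expanded first
            for child in reversed(graph.get(node, [])):
                if child not in path:    # avoid loops along the current path
                    stack.append(path + [child])
        return None

With the same toy graph as in the BFS sketch, depth_first_search(graph, "S", "K") follows each leftmost path to its greatest depth and then backtracks, exactly as in the worked description.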
3. Depth-Limited Search Algorithm:
A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
Depth-limited search can be terminated with two conditions of failure:
Standard failure value: indicates that the problem does not have any solution.
Cutoff failure value: defines no solution for the problem within the given depth limit.
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.

Completeness: The DLS search algorithm is complete if the solution is above the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).

Optimality: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
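A recursive depth-limited search sketch distinguishing the two failure values described above (the CUTOFF sentinel name is an assumption):

    CUTOFF = "cutoff"                    # cutoff failure value (depth limit reached)

    def depth_limited_search(graph, node, goal, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return CUTOFF                # no solution within the given depth limit
        cutoff_occurred = False
        for child in graph.get(node, []):
            result = depth_limited_search(graph, child, goal, limit - 1)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result is not None:
                return [node] + result
        # None is the standard failure value: no solution below this node at all.
        return CUTOFF if cutoff_occurred else None

Iterative deepening depth-first search simply calls this with limit = 0, 1, 2, ... until a path (rather than CUTOFF) is returned.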

Informed Search Algorithms

So far we have talked about uninformed search algorithms, which look through the search space for all possible solutions to the problem without having any additional knowledge about the search space. An informed search algorithm, in contrast, has knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently.

The informed search algorithm is more useful for large search spaces. Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.

Heuristic function: A heuristic is a function used in informed search which finds the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost and h*(n) is the actual optimal cost; hence the heuristic cost should be less than or equal to the actual cost, i.e. an admissible heuristic never overestimates.

In informed search we will discuss two main algorithms, which are given below:

1. Best-First Search Algorithm (Greedy Search)

2. A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of depth-first search and breadth-first search. It uses the heuristic function to guide the search, so best-first search allows us to take advantage of both algorithms. With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e. f(n) = h(n),

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented with a priority queue (a runnable sketch follows the steps below).

Best-first search algorithm:

Step 1: Place the starting node into the OPEN list.

Step 2: If the OPEN list is empty, stop and return failure.

Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.

Step 4: Expand the node n and generate its successors.

Step 5: Check each successor of node n and find whether any of them is a goal node. If any successor node is a goal node, return success and terminate the search; else proceed to Step 6.

Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.

Step 7: Return to Step 2.
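A runnable sketch of the steps above, using a priority queue ordered by h(n). The graph and heuristic values are illustrative assumptions, chosen so that the search reproduces the S ----> B ----> F ----> G solution path of the worked example below:

    import heapq

    def greedy_best_first_search(graph, h, start, goal):
        open_list = [(h[start], [start])]    # OPEN: priority queue keyed on h(n)
        closed = set()                       # CLOSED: already-expanded nodes
        while open_list:
            _, path = heapq.heappop(open_list)
            node = path[-1]
            if node == goal:
                return path
            if node in closed:
                continue
            closed.add(node)
            for child in graph.get(node, []):
                if child not in closed:
                    heapq.heappush(open_list, (h[child], path + [child]))
        return None                          # OPEN list empty: failure

    graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
    h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
    print(greedy_best_first_search(graph, h, "S", "G"))   # ['S', 'B', 'F', 'G']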

Advantages:

Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.

This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:

It can behave as an unguided depth-first search in the worst-case scenario.

It can get stuck in a loop, like DFS.

This algorithm is not optimal.

Example:

Consider the search problem below, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), given in the accompanying table.

In this search example, we are using two lists: the OPEN and CLOSED lists.
Following are the iterations for traversing the example.
Expand the nodes of S and put them in the CLOSED list.

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]

           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]

           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ----> B ----> F ----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: The greedy best-first search algorithm is not optimal.

2.) A* Search Algorithm:

• A* search is a combination of greedy search and uniform cost search. In this algorithm, the total estimated cost, denoted f(x), is the sum of the uniform-cost-search cost g(x) and the greedy-search cost h(x):

• f(x) = g(x) + h(x)

• Here g(x) is the backward cost, i.e. the cumulative cost from the root node to the current node, and h(x) is the forward cost, an estimate of the distance between the current node and the goal node.
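A compact A* sketch implementing f(x) = g(x) + h(x) with a priority queue. The weighted toy graph and heuristic below are illustrative assumptions (the heuristic is admissible for this graph):

    import heapq

    def a_star_search(graph, h, start, goal):
        # Frontier entries are (f, g, path), with f = g + h.
        open_list = [(h[start], 0, [start])]
        best_g = {start: 0}
        while open_list:
            f, g, path = heapq.heappop(open_list)
            node = path[-1]
            if node == goal:
                return path, g
            for child, step in graph.get(node, {}).items():
                g2 = g + step                 # backward cost: cumulative from the root
                if g2 < best_g.get(child, float("inf")):
                    best_g[child] = g2
                    heapq.heappush(open_list, (g2 + h[child], g2, path + [child]))
        return None, float("inf")

    graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}}
    h = {"S": 4, "A": 5, "B": 1, "G": 0}
    print(a_star_search(graph, h, "S", "G"))  # (['S', 'B', 'G'], 5)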
ALPHA–BETA PRUNING
• Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.

• As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.

• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.

• The two parameters can be defined as:

• Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.

• Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.

• Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes all the nodes which do not really affect the final decision and only make the algorithm slow. By pruning these nodes, it makes the algorithm fast.

• Condition for alpha-beta pruning:

• The main condition required for alpha-beta pruning is:

• α >= β

• Key points about alpha-beta pruning:

• The Max player will only update the value of alpha.
• The Min player will only update the value of beta.
• While backtracking the tree, the node values will be passed to upper nodes instead of the values of alpha and beta.
• We will only pass the alpha and beta values to the child nodes.
Pseudo-code for Alpha-beta Pruning (rendered here as runnable Python; node.is_terminal(), node.children(), and node.static_evaluation() are assumed problem-specific hooks):

    def minimax(node, depth, alpha, beta, maximizing_player):
        if depth == 0 or node.is_terminal():
            return node.static_evaluation()
        if maximizing_player:                   # for the Maximizer player
            max_eva = float("-inf")
            for child in node.children():
                eva = minimax(child, depth - 1, alpha, beta, False)
                max_eva = max(max_eva, eva)
                alpha = max(alpha, max_eva)     # Max updates only alpha
                if beta <= alpha:
                    break                       # prune the remaining children
            return max_eva
        else:                                   # for the Minimizer player
            min_eva = float("+inf")
            for child in node.children():
                eva = minimax(child, depth - 1, alpha, beta, True)
                min_eva = min(min_eva, eva)
                beta = min(beta, eva)           # Min updates only beta
                if beta <= alpha:
                    break                       # prune the remaining children
            return min_eva
Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: At the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.

Step 3: The algorithm now backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available subsequent node values, i.e. min(∞, 3) = 3; hence at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down.

Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E will be pruned, and the algorithm will not traverse it; the value at node E will be 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, which is 0 (max(3, 0) = 3), and then with the right child, which is 1 (max(3, 1) = 3); α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta will be changed: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, will be pruned, and the algorithm will not compute the entire sub-tree G.
Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3. The final game tree shows the nodes which were computed and the nodes which were never computed. Hence the optimal value for the maximizer is 3 for this example.
Beyond Classical Search:
Local Search Algorithms and Optimization Problems

Informed and uninformed search expand the nodes systematically in two ways:

• keeping different paths in memory, and

• selecting the best suitable path,

which leads to a solution state required to reach the goal node. But beyond these "classical search algorithms", we have some "local search algorithms", where the path cost does not matter and the focus is only on the solution state needed to reach the goal node.

A local search algorithm completes its task by traversing a single current node rather than multiple paths, generally following the neighbors of that node.

Although local search algorithms are not systematic, they have the following two advantages:

Local search algorithms use very little or a constant amount of memory, as they operate only on a single path.

Most often, they find a reasonable solution in large or infinite state spaces where classical or systematic algorithms do not work.

Does the local search algorithm work for a pure optimization problem?

Yes, the local search algorithm works for pure optimization problems. A pure optimization problem is one where all the nodes can give a solution, but the target is to find the best state of all according to the objective function. Unfortunately, a pure optimization formulation may fail to find high-quality solutions to reach the goal state from the current state.

Working of a Local Search Algorithm

Let's understand the working of a local search algorithm with the help of an example.

Consider the state-space landscape below, which has both:

Location: defined by the state.

Elevation: defined by the value of the objective function or the heuristic cost function.

The local search algorithm explores the landscape by finding the following two points:

Global Minimum: if the elevation corresponds to a cost, then the task is to find the lowest valley, which is known as the global minimum.

Global Maximum: if the elevation corresponds to an objective function, then the task is to find the highest peak, which is called the global maximum.

We will understand the working of these points better in hill-climbing search.

Below are some different types of local searches:

• Hill-climbing search

• Simulated annealing

• Local beam search


Hill Climbing Algorithm in AI
Hill Climbing Algorithm: Hill climbing search is a local search problem. The purpose of the hill climbing search is to climb a hill and reach the topmost peak/point of that hill. It is based on the heuristic search technique, where the person climbing the hill estimates the direction which will lead him to the highest peak.

State-space Landscape of the Hill Climbing Algorithm

To understand the concept of the hill climbing algorithm, consider the landscape below, representing the goal state/peak and the current state of the climber. The topographical regions shown in the figure can be defined as:

Global Maximum: The highest point on the hill; this is the goal state.

Local Maximum: A peak higher than its neighboring states but lower than the global maximum.

Flat local maximum: A flat area of the hill with no uphill or downhill; it is a saturated point of the hill.

Shoulder: Also a flat area, but one from which the summit is still possible.

Current state: The current position of the person.

Types of Hill Climbing Search Algorithms
There are the following types of hill-climbing search:

• Simple hill climbing

• Steepest-ascent hill climbing

• Stochastic hill climbing

• Random-restart hill climbing

Simple hill climbing search

Simple hill climbing is the simplest technique to climb a hill. The task is to reach the highest peak of the mountain. Here, the movement of the climber depends on his moves/steps: if he finds his next step better than the previous one, he continues to move; otherwise he remains in the same state. This search focuses only on his previous and next steps.

Simple hill climbing algorithm (a minimal sketch follows the steps below)

1. Create a CURRENT node, a NEIGHBOUR node, and a GOAL node.

2. If the CURRENT node = GOAL node, return GOAL and terminate the search.

3. Else, if the NEIGHBOUR node is better than the CURRENT node, set CURRENT = NEIGHBOUR and move ahead.

4. Loop until the goal is reached or no better point is found.
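A minimal sketch of simple hill climbing corresponding to these steps: it takes the first neighbour that improves the objective and stops when no neighbour is better (the one-dimensional integer landscape is an illustrative assumption):

    def simple_hill_climbing(objective, neighbours, current):
        while True:
            better = None
            for candidate in neighbours(current):
                if objective(candidate) > objective(current):
                    better = candidate        # first improving step is taken
                    break
            if better is None:
                return current                # no better neighbour: stop here
            current = better

    f = lambda x: -(x - 3) ** 2               # single peak at x = 3
    steps = lambda x: [x - 1, x + 1]          # integer neighbours
    print(simple_hill_climbing(f, steps, 0))  # 3

Note that the same loop stops at a local maximum on a multi-peaked landscape, which is exactly the limitation discussed later in this section.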


Simulated annealing: switch from hill climbing to gradient descent (i.e. minimizing cost); start by shaking hard (hard enough to bounce out of local minima), and then gradually reduce the intensity of the shaking (not hard enough to dislodge the search from the global minimum).
The algorithm picks a random move.
If the move improves the situation, it is always accepted;
otherwise it accepts the move with probability < 1, and this probability decreases exponentially as ΔE increases and as T decreases.
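A small sketch of this acceptance rule for the cost-minimization setting; exp(-ΔE/T) is the standard exponential form the note above refers to:

    import math
    import random

    def accept_move(delta_e, temperature):
        # delta_e: change in cost caused by the candidate move
        if delta_e < 0:                  # improving move: always accept
            return True
        if temperature <= 0:
            return False                 # frozen: worse moves are rejected
        # Worse moves are accepted with probability exp(-delta_e / T),
        # which shrinks as delta_e grows and as T is lowered.
        return random.random() < math.exp(-delta_e / temperature)

A cooling schedule such as T_k = T0 * alpha**k with 0 < alpha < 1 provides the gradually reduced "shaking".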
Steepest-ascent hill climbing
Steepest-ascent hill climbing is different from simple hill climbing search. Unlike simple hill climbing, it considers all the successor nodes, compares them, and chooses the node which is closest to the solution. Steepest-ascent hill climbing is similar to best-first search in that it considers all successor nodes instead of just one.

Note: Both simple and steepest-ascent hill climbing search fail when there is no closer node.

Steepest-ascent hill climbing algorithm

1. Create a CURRENT node and a GOAL node.

2. If the CURRENT node = GOAL node, return GOAL and terminate the search.

3. Loop until no better node is found to reach the solution.

4. If there is any better successor node present, expand it.

5. When the GOAL is attained, return GOAL and terminate.

Stochastic hill climbing

Stochastic hill climbing does not examine all the neighboring nodes. It selects one node at random and decides whether to move to it or to search for a better one.

Random-restart hill climbing

The random-restart algorithm is based on a try-and-try strategy. It iteratively searches the nodes and selects the best one at each step, until the goal is found. Success depends most commonly on the shape of the hill: if there are few plateaus, local maxima, and ridges, it becomes easy to reach the destination.

Limitations of the Hill Climbing Algorithm

The hill climbing algorithm is a fast and furious approach: it finds the solution state rapidly because it is quite easy to improve a bad state. But this search has the following limitations:
Local Maxima: A peak of the mountain which is higher than all its neighboring states but lower than the global maximum. It is not the goal peak, because there is another peak higher than it.

Plateau: A flat surface area where no uphill exists. It becomes difficult for the climber to decide in which direction he should move to reach the goal point; sometimes the person gets lost in the flat area.

Ridges: A challenging problem where the person commonly finds two or more local maxima of the same height. It becomes difficult for the person to navigate to the right point, and he may get stuck at that point itself.

Simulated Annealing
Simulated annealing is similar to the hill climbing algorithm. It works on the current situation, but picks a random move instead of the best move. If the move leads to an improvement of the current situation, it is always accepted as a step towards the solution state; otherwise it accepts the move with a probability less than 1. This search technique was first used in 1980 to solve VLSI layout problems. It has also been applied to factory scheduling and other large optimization tasks.
Local Beam Search
Local beam search is quite different from random-restart search. It keeps track of k states instead of just one. It selects k randomly generated states and expands them at each step. If any state is a goal state, the search stops with success; otherwise it selects the best k successors from the complete list and repeats the same process. In random-restart search each search process runs independently, but in local beam search the necessary information is shared between the parallel search processes.

Disadvantages of Local Beam Search

This search can suffer from a lack of diversity among the k states.

It is an expensive version of hill climbing search.

Searching with Nondeterministic Actions

When the environment is either partially observable or nondeterministic (or both), future percepts cannot be determined in advance, and the agent's future actions will depend on those future percepts.

Nondeterministic problems:

The transition model is defined by a RESULTS function that returns a set of possible outcome states;

the solution is not a sequence but a contingency plan (strategy).

Classical search assumes:

Determinism: each action has a unique outcome; if we choose to drive from Arad to Sibiu, the resulting state is Sibiu.

Observability: we can tell which state we are in (are we in Arad?).

Known environment: we know which states there are, what actions are possible, and what their outcomes are (we have a map).

Search with nondeterministic actions:

Each action has a set of possible outcomes (resulting states).

A solution is not a sequence of actions, but a contingency plan, or strategy: if, after pushing the lift button, the lift arrives, then take the lift; else take the stairs.

AND-OR search trees

Previous search trees: branching corresponds to the agent’s choice of


ac_x0002_tions

Call these OR-nodes

Environment’s choice of outcome for each action: AND-nodes

And-or search trees


AND-OR search: solution

A solution for an AND-OR search problem is a subtree that:

(1) has a goal node at every leaf;

(2) specifies one action at each of its OR nodes;

(3) includes every outcome branch at each of its AND nodes.

The basic algorithm finds a non-cyclic solution if one exists. Cyclic plans can be expressed by adding a while loop: if the agent is in a state where the action failed, repeat the action until it succeeds. Such a plan works provided that each outcome of a nondeterministic action eventually occurs.
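
A minimal Python sketch of AND-OR graph search (it assumes a problem object exposing initial, actions(s), results(s, a) returning a set of outcome states, and goal_test(s); these attribute names are assumptions for illustration). The returned plan has the nested form [action, {outcome_state: subplan, ...}], with [] at a goal:

def and_or_search(problem):
    def or_search(state, path):
        if problem.goal_test(state):
            return []                        # empty plan: already at a goal
        if state in path:
            return None                      # cycle on this branch: fail
        for action in problem.actions(state):          # OR node: pick one action
            plan = and_search(problem.results(state, action), [state] + path)
            if plan is not None:
                return [action, plan]
        return None

    def and_search(states, path):
        # AND node: the contingency plan must cover every possible outcome.
        subplans = {}
        for s in states:
            subplans[s] = or_search(s, path)
            if subplans[s] is None:
                return None
        return subplans

    return or_search(problem.initial, [])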
Searching with no observations

Belief state: The agent's current belief about the possible physical states it might
be in, given the sequence of actions and percepts up to that point.

Standard search algorithms can be applied directly to belief-state space to solve


sensorless problems, and belief-state AND-OR search can solve general partially
observable problems. Incremental algorithms that construct solutions state-by-
state within a belief state are often more efficient.

1. Searching with no observation

When the agent's percepts provide no information at all, we have a sensorless problem.
To solve sensorless problems, we search in the space of belief states rather than physical states. In belief-state space the problem is fully observable, and a solution is always a sequence of actions.
A belief-state problem can be defined in terms of the underlying physical problem P, which is specified by ACTIONS_P, RESULT_P, GOAL-TEST_P and STEP-COST_P:
• Belief states: the belief-state space contains every possible set of physical states. If P has N states, the sensorless problem has up to 2^N belief states (although many may be unreachable from the initial state).

• Initial state: typically the set of all states in P.
• Actions:

a. If illegal actions have no effect on the environment, take the union of the actions legal in any of the physical states in the current belief state b: ACTIONS(b) = ⋃_{s∈b} ACTIONS_P(s).

b. If an illegal action might be dangerous, it is safer to take the intersection: allow only the actions legal in every state of b.
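
A small Python sketch of these belief-state operations (actions_p and result_p stand for the physical problem's ACTIONS_P and RESULT_P; the names are assumptions for illustration):

def belief_actions(belief, actions_p):
    # Union of the actions legal in any physical state of the belief state.
    acts = set()
    for s in belief:
        acts |= set(actions_p(s))
    return acts

def belief_predict(belief, action, result_p):
    # Successor belief state for a deterministic physical problem:
    # apply the action in every state the agent might be in.
    return frozenset(result_p(s, action) for s in belief)

# Toy usage: physical states 1..3; action 'a' maps s to min(s + 1, 3).
b0 = frozenset({1, 2, 3})
b1 = belief_predict(b0, 'a', lambda s, a: min(s + 1, 3))
print(sorted(b1))   # [2, 3] -- acting without sensing has shrunk the belief state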
Online Search Agents

Online search is a necessary idea for unknown environments. An online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action.
1. Online search problem

Assume a deterministic and fully observable environment; the agent only knows:
• ACTIONS(s): returns a list of actions allowed in state s;

• c(s, a, s′): the step-cost function, which cannot be used until the agent knows that s′ is the outcome;

• GOAL-TEST(s);

• the agent cannot determine RESULT(s, a) except by actually being in s and doing a;

• the agent might have access to an admissible heuristic function h(s) that estimates the distance from the current state to a goal state.
Competitive ratio: the total path cost of the path that the agent actually travels divided by the path cost of the path the agent would follow if it knew the search space in advance (the actual shortest path). For example, if the agent travels a path of cost 15 while the true shortest path costs 10, the competitive ratio is 15/10 = 1.5. The competitive ratio should be as small as possible.
In some cases the best achievable competitive ratio is infinite, e.g. when some actions are irreversible and might lead to a dead-end state. No algorithm can avoid dead ends in all state spaces.

Safely explorable: some goal state is reachable from every reachable state, e.g. state spaces with reversible actions, such as mazes and the 8-puzzle.
Even in safely explorable environments, no bounded competitive ratio can be guaranteed if there are paths of unbounded cost.
Online search agents
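
A minimal sketch of an online depth-first search agent in Python, in the spirit of the description above (it assumes a deterministic, fully observable, safely explorable environment with reversible actions; actions_fn and goal_test are illustrative assumptions):

def make_online_dfs_agent(actions_fn, goal_test):
    result = {}          # learned transition model: (s, a) -> s'
    untried = {}         # actions not yet tried in each state
    unbacktracked = {}   # states we arrived from, newest last
    mem = {'s': None, 'a': None}

    def agent(state):
        # Called after each observed state; returns the next action or None.
        if goal_test(state):
            return None
        if state not in untried:
            untried[state] = list(actions_fn(state))
        if mem['s'] is not None:
            # The agent learns RESULT(s, a) only by being in s and doing a.
            result[(mem['s'], mem['a'])] = state
            unbacktracked.setdefault(state, []).append(mem['s'])
        if untried[state]:
            action = untried[state].pop()             # explore a fresh action
        else:
            back = unbacktracked.get(state, [])
            if not back:
                return None                           # exploration exhausted
            target = back.pop()
            # Backtrack via an action already known to lead to the target.
            action = next((b for b in actions_fn(state)
                           if result.get((state, b)) == target), None)
            if action is None:
                return None                           # cannot reverse: stuck
        mem['s'], mem['a'] = state, action
        return action

    return agent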
Constraint Satisfaction Problems

In this section, we discuss another type of problem-solving technique known as the constraint satisfaction technique. As the name suggests, constraint satisfaction means solving a problem under certain constraints or rules.

Constraint satisfaction is a technique in which a problem is solved when its values satisfy certain constraints or rules of the problem. This kind of technique leads to a deeper understanding of the problem structure as well as its complexity.
Constraint satisfaction depends on three components, namely:

• X: a set of variables.

• D: a set of domains in which the variables reside; there is a specific domain for each variable.

• C: a set of constraints that the set of variables must satisfy.
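
For instance, a small map-coloring problem can be written directly in this (X, D, C) form. The following Python snippet is a toy illustration (the region names are hypothetical):

from itertools import product

X = ['WA', 'NT', 'SA']                              # variables
D = {v: ['red', 'green', 'blue'] for v in X}        # a domain for each variable
C = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]      # binary "must differ" constraints

def satisfies(assignment):
    # A complete assignment is a solution if every constraint holds.
    return all(assignment[a] != assignment[b] for a, b in C)

solutions = [dict(zip(X, vals))
             for vals in product(*(D[v] for v in X))
             if satisfies(dict(zip(X, vals)))]
print(len(solutions))   # 6: the three colours can be permuted in 3! ways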

Constraint Propagation

In regular state-space search there is only one choice: to search for a solution. In a CSP we have two choices; either:

• we can search for a solution, or

• we can perform a special type of inference called constraint propagation.

Constraint propagation is a special type of inference that helps reduce the number of legal values for the variables. The idea behind constraint propagation is local consistency.
For local consistency, variables are treated as nodes and each binary constraint as an arc in the given problem. The following kinds of local consistency are discussed below:
• Node consistency: a single variable is node-consistent if all the values in its domain satisfy the variable's unary constraints.

• Arc consistency: a variable is arc-consistent if every value in its domain satisfies the variable's binary constraints; more precisely, Xi is arc-consistent with respect to Xj if for every value of Xi there is some value of Xj that satisfies the binary constraint on (Xi, Xj).

• Path consistency: a pair of variables is path-consistent with respect to a third variable if every consistent assignment to the pair can be extended to that third variable while satisfying all the binary constraints; it generalizes arc consistency from single variables to pairs.

• k-consistency: stronger forms of propagation are defined with the notion of k-consistency: for any set of k − 1 variables and any consistent assignment to them, a consistent value can always be assigned to any k-th variable.
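
The standard way to enforce arc consistency is the AC-3 algorithm; a minimal Python sketch follows (domains, neighbors and constraint are assumed inputs for illustration):

from collections import deque

def revise(domains, xi, xj, constraint):
    # Remove values of xi that have no supporting value in xj's domain.
    revised = False
    for vx in set(domains[xi]):
        if not any(constraint(xi, vx, xj, vy) for vy in domains[xj]):
            domains[xi].discard(vx)
            revised = True
    return revised

def ac3(domains, neighbors, constraint):
    # domains: variable -> set of values; neighbors: variable -> related variables
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False                  # a domain was wiped out: no solution
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))    # re-examine arcs into xi
    return True                               # every arc is now consistent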

CSP Problems

Constraint satisfaction includes those problems that impose constraints on their solutions. CSP examples include the following:

• Graph coloring: the constraint is that no two adjacent regions (vertices) may have the same color.

• Sudoku: the constraint is that no number from 1-9 may be repeated in the same row, column, or 3×3 box.

• n-queens: the constraint is that no two queens attack each other, i.e., no two queens share a row, column, or diagonal.

• Crossword: the constraint is that the words must be correctly formed and meaningful.
Cryptarithmetic Problem

• The most important constraint in this problem is that we cannot assign different digits to the same letter: every letter must stand for a single, unique digit.

The cryptarithmetic problem is a type of constraint satisfaction problem in which the game is about digits and their unique replacement with letters or other symbols. The digits (0-9) are substituted by letters or symbols, and the task is to find the substitution that makes the arithmetic correct.
We can perform all the usual arithmetic operations on a given cryptarithmetic problem.

The rules or constraints of a cryptarithmetic problem are as follows:

• Each letter must be replaced by a unique digit, and each digit by a unique letter.

• The result should satisfy the predefined arithmetic rules, i.e., 2 + 2 = 4, nothing else.

• Digits should be from 0-9 only.

• While performing the addition, each column can pass at most one carry to the next column.

• The problem can be solved from either side, i.e., the left-hand side (L.H.S.) or the right-hand side (R.H.S.).
Let us understand the cryptarithmetic problem and its constraints better with the help of an example:

• Given the cryptarithmetic problem S E N D + M O R E = M O N E Y.

• Starting from the left-hand side, the leading letters are S and M. Assign digits that can give a satisfactory result: let S→9 and M→1 (M must be 1, because the sum of two four-digit numbers is at most a five-digit number).

• Adding up the leading column then gives a satisfactory assignment for O as well: O→0.

• Now move to the next column, E and O, whose sum should give N. But E + O = E + 0 = E would make N equal to E, which is not possible because, by the cryptarithmetic constraints, we cannot assign the same digit to two letters. So we need to look further: when the columns to the right are solved, a carry of 1 arrives in this column, and with it the answer is satisfied (N = E + 1).

• Further, adding the next two terms N and R should give E in the result column. We have already assigned E→5, so this column too is satisfied only once a carry arrives from the column to its right.

• Finally, adding the rightmost terms D and E gives Y as the result, with a carry of 1 forwarded to the column above: D + E = 7 + 5 = 12, so Y→2.

• Keeping all the constraints in mind, the final result is:

S E N D + M O R E = M O N E Y
9 5 6 7 + 1 0 8 5 = 1 0 6 5 2
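
The same assignment can also be found mechanically. Below is a brute-force Python sketch that tries every injective digit assignment for the eight distinct letters (slow but simple; intended purely as an illustration):

from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:
            continue                          # leading letters cannot be zero
        send = 1000*a['S'] + 100*a['E'] + 10*a['N'] + a['D']
        more = 1000*a['M'] + 100*a['O'] + 10*a['R'] + a['E']
        money = (10000*a['M'] + 1000*a['O'] + 100*a['N']
                 + 10*a['E'] + a['Y'])
        if send + more == money:
            return a

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}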
ASSIGNMENT – UNIT I

Assignment Questions – Very Easy

Assignment Questions - Easy

1. Provide an overview of how backtracking search works in constraint satisfaction problems. (5 Marks, K3, CO3)

2. Explain the challenges associated with searching in environments with partial observations. (5 Marks, K2, CO3)

Assignment Questions - Medium

1. Compare and contrast informed and uninformed search strategies in problem-solving. (5 Marks, K3, CO3)

2. Provide examples of real-world scenarios where nondeterministic actions may be present and explain their impact on search algorithms. (5 Marks, K4, CO3)


Assignment Questions - Hard

Assignment Questions – Very Hard

Course Outcomes:
CO1: Able to build a model using AI and ML, and able to predict based on
various events.

*Allotment of Marks

Correctness of the Content: 10 marks
Presentation: -
Timely Submission: 5 marks
Total: 15 marks

PART A- UNIT-1
1.Define Artificial Intelligence (AI).
The study of how to make computers do things at which, at the moment, people are better. AI is commonly characterized as:
Systems that think like humans
Systems that act like humans
Systems that think rationally
Systems that act rationally
2.Define Artificial Intelligence formulated by Haugeland.
The exciting new effort to make computers think: machines with minds, in the full and literal sense.
3.Define Artificial Intelligence in terms of human performance.
The art of creating machines that perform functions that require intelligence when performed by people.
4. Define Artificial Intelligence in terms of rational acting.
A field of study that seeks to explain and emulate intelligent
behaviours in terms of computational processes-Schalkoff. The branch of
computer science that is concerned with the automation of intelligent
behaviour-Luger & Stubblefield.
5. Define Artificial Intelligence in terms of rational thinking.
The study of mental faculties through the use of computational models - Charniak & McDermott. The study of the computations that make it possible to perceive, reason and act - Winston.
6.What is meant by Turing test?
To conduct this test we need two people and one machine. One person acts as the interrogator (i.e. the questioner) and asks questions to one person and one machine. The three of them are in separate rooms. The interrogator knows them only as A and B, and has to identify which is the person and which is the machine. The goal of the machine is to make the interrogator believe that its answers are a person's answers. If the machine succeeds in fooling the interrogator, the machine acts like a human. Programming a computer to pass the Turing test is very difficult.
7.What is called materialism?
An alternative to dualism is materialism, which holds that the entire world operates according to physical law. Mental processes and consciousness are therefore part of the physical world.
8. What are the capabilities a computer should possess to pass the Turing test?
Natural Language Processing
Knowledge representation
Automated Reasoning
Machine Learning.
9. Define the Total Turing Test.
The test that includes a video signal so that the interrogator can test the perceptual abilities of the machine.
10. What are the capabilities computers needs to pass total Turing
test?
Computer Vision
Robotics
11. Define Rational Agent.
It is one that acts, so as to achieve the best outcome (or) when there
is uncertainty, the best expected outcome.
12. Define Agent.
An Agent is anything that can be viewed as perceiving (i.e.)
understanding its environment through sensors and acting upon that
environment through actuators.
13. Define an Omniscient agent.
An omniscient agent knows the actual outcome of its action and can act
accordingly; but omniscience is impossible in reality.
14. What are the factors that a rational agent should depend on at any
given time?
1. The performance measure that defines the degree of success.
2. Everything that the agent has perceived so far; this complete perceptual history is called the percept sequence.
3. What the agent knows about the environment.
4. The actions that the agent can perform.
15. Define Architecture.
The agent program runs on some sort of computing device, which is called the architecture.
16. List the various type of agent program.
Simple reflex agent program.
Agent that keep track of the world.
Goal based agent program.
Utility based agent program
17. Give the structure of agent in an environment?
Agent interacts with environment through sensors and actuators.
An Agent is anything that can be viewed as perceiving (i.e.) understanding
its environment through sensors and acting upon that environment
through actuators.
18. Define Percept Sequence.
An agent's choice of action at any given instant can depend on the entire percept sequence observed to date.
19. Define Agent Function.
It is a mathematical description which deals with the agent’s behavior
that maps the given percept sequence into an action.
20. Define Agent Program.
Agent function for an agent will be implemented by agent program.
21. How agent should act?
Agent should act as a rational agent. Rational agent is one that does
the right thing, (i.e.) right actions will cause the agent to be most
successful in the environment.
22. How to measure the performance of an agent?
The performance of an agent is measured by analyzing two aspects of its actions: how and when it acts.
23. Define performance measures.
Performance measure embodies the criterion for success of an
agent’s behavior.
24. Define Ideal Rational Agent.
For each possible percept sequence, a rational agent should select an
action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built in
knowledge the agent has.
25. Define Omniscience.
An Omniscience agent knows the actual outcome of its actions and
can act accordingly.
26. Define Information Gathering.
Doing actions in order to modify future percepts is sometimes called information gathering.
27. What is autonomy?
A rational agent should be autonomous: it should learn what it can to compensate for partial (or) incorrect prior knowledge.
28. What is important for task environment?
PEAS → P - Performance measure, E - Environment, A - Actuators, S - Sensors.
Example: an interactive English tutor.
Performance measure: maximize the student's score on tests.
Environment: a set of students and the testing agency.
Actuators: display of exercises, suggestions, and corrections.
Sensors: keyboard entry.
29. What is environment program?
It defines the relationship between agents and environments.
30. List the properties of environments.
o Fully Observable Vs Partially Observable
o Deterministic Vs Stochastic
o Episodic Vs Sequential
o Static Vs Dynamic
o Discrete Vs Continuous
o Single Agent Vs Multi agent
31. What is Environment Class (EC) and Environment Generator (EG)?
EC – It is defined as a group of environments.
EG – It selects an environment from the environment class in which the agent has to run.
32. What is the structure of intelligent Agent?
Intelligent Agent = Architecture + Agent Program
33. Define problem solving agent.
A problem solving agent is one kind of goal-based agent, where the agent should select one action from a sequence of actions which lead to desirable states.
34. List the steps involved in simple problem solving technique.
i. Goal formulation
ii. Problem formulation
iii. Search
iv. Solution
v. Execution phase
35. What are the different types of problem?
Single state problem, multiple state problems, Contingency problem, Exploration
problem
36. What are the components of a problem?
There are four main components:
i. initial state ii. successor function iii. goal test iv. path cost
Related notions include: v. operator vi. state space vii. path
37. Define State Space.
The set of all possible states reachable from the initial state by any sequence of actions is called the state space.
38. Define Path.
A path in the state space is a sequence of states connected by a sequence of actions.
39. Define Path Cost.
A function that assigns a numeric cost to each path, equal to the sum of the costs of the individual actions along the path.
40. Give example problems for Artificial Intelligence.
i. Toy problems
ii. Real world problems

42. Define search tree.
The tree constructed for the search process over the state space is called the search tree.
43. Define search node.
A node of the search tree. The root of the search tree is the node corresponding to the initial state of the problem.
44. Define fringe.
The collection of nodes that have been generated but not yet expanded, this collection is called
fringe or frontier.
45. List the performance measures of search strategies.
i. Completeness
ii. Optimality
iii. Time complexity
iv. Space complexity

46. Define branching factor (b).
The maximum number of successor nodes of any node in the search tree is called the branching factor.
48. Define uniform cost search.
Uniform cost search expands the node n with the lowest path cost instead of expanding the shallowest node.
49. Define Depth first search.
It expands the deepest node in the current fringe of the search tree.
50. Define depth limited search.
The problem of unbounded trees can be avoided by supplying a depth limit l, i.e., nodes at depth l are treated as if they have no successors. This is called depth limited search.
PART B- UNIT-1
1. Describe informed search strategies with examples.

2. Explain, with a neat diagram, the architecture of expert systems and mention its features.

3. Explain the A* algorithm with suitable examples.

4. What is search with nondeterministic actions? Explain the AO* algorithm in detail.

5. (i) Explain Alpha-Beta pruning and its algorithm in detail. (ii) State or interpret in your own words the PEAS description for a vacuum cleaner.

6. Explain the different types of agents.

7. Explain constraint satisfaction problems with an example.

8. Discuss the problems of minimax search and explain in detail how they are overcome by Alpha-Beta pruning.
SUPPORTIVE ONLINE COURSES – UNIT I

https://onlinecourses.nptel.ac.in/noc21_cs42/preview
An Introduction to Artificial Intelligence
By Prof. Mausam | IIT Delhi

https://www.coursera.org/learn/computational-thinking-problem-solving

https://www.coursera.org/learn/artificial-intelligence-education-for-teachers

https://www.coursera.org/specializations/ai-healthcare

https://www.coursera.org/learn/predictive-modeling-machine-learning

VIDEO LINKS
REAL TIME APPLICATION- UNIT I
Artificial Intelligence Applications: Marketing

• Marketing is a way to sugar-coat your products to attract more customers. We humans are pretty good at sugar-coating, but what if an algorithm or a bot existed solely for the purpose of marketing a brand or a company? It would do a pretty awesome job!

• In the early 2000s, if we searched an online store for a product without knowing its exact name, finding that product could become a nightmare. Now, when we search for an item on any e-commerce store, we get all possible results related to the item, as if the search engine were reading our minds! In a matter of seconds we get a list of all relevant items. An example of this is finding the right movies on Netflix.

Artificial Intelligence Applications: Agriculture

Here is an alarming fact: the world will need to produce 50 percent more food by 2050, because we are literally eating up everything! The only way this is possible is if we use our resources more carefully. With that said, AI can help farmers get more from the land while using resources more sustainably.

Issues such as climate change, population growth, and food security


concerns have pushed the industry into seeking more innovative
approaches to improve crop yield.

CONTENT BEYOND SYLLABUS – UNIT I

1. Constraint satisfaction problem


Constraint satisfaction problems (CSPs) are mathematical
problems defined as a set of objects whose state must satisfy a number of
constraints or limitations. CSPs represent the entities in a problem as a
homogeneous collection of finite constraints over variables, which is
solved by constraint satisfaction methods. CSPs are the subject of intense
research in both artificial intelligence and operations research, since the
regularity in their formulation provides a common basis to analyze and
solve problems of many seemingly unrelated families. CSPs often exhibit
high complexity, requiring a combination of heuristics and combinatorial
search methods to be solved in a reasonable time. The Boolean
satisfiability problem (SAT), the satisfiability modulo theories (SMT) and
answer set programming (ASP) can be roughly thought of as certain forms
of the constraint satisfaction problem.
Examples of simple problems that can be modeled as a constraint
satisfaction problem include:
1. Eight queens puzzle
2. Map coloring problem
3. Sudoku, Crosswords, Futoshiki, Kakuro (Cross Sums), Numbrix, Hidato and
many other logic puzzles

Example :Sudoku solving algorithms


A standard Sudoku contains 81 cells, in a 9×9 grid, and has 9
boxes, each box being the intersection of the first, middle, or last 3
rows, and the first, middle, or last 3 columns. Each cell may contain a
number from one to nine, and each number can only occur once in
each row, column, and box. A Sudoku starts with some cells containing
numbers (clues), and the goal is to solve the remaining cells. Proper
Sudokus have one solution. Players and investigators may use a wide
range of computer algorithms to solve Sudokus, study their properties,
and make new puzzles, including Sudokus with interesting symmetries
and other properties.

80

There are several computer algorithms that will solve most 9×9 puzzles (n = 9) in fractions of a second, but combinatorial explosion occurs as n increases, creating limits on the properties of Sudokus that can be constructed, analysed, and solved as n grows.

Backtracking

[Figure: a Sudoku being solved by backtracking. Each cell is tested for a valid number, moving "back" when there is a violation, and moving forward again until the puzzle is solved.]

[Figure: a Sudoku designed to work against the brute-force algorithm.]

Some hobbyists have developed computer programs that will solve


Sudoku puzzles using a backtracking algorithm, which is a type of brute force
search. Backtracking is a depth-first search (in contrast to a breadth-first search),
because it will completely explore one branch to a possible solution before moving to
another branch. Although it has been established that approximately 6.67 × 10^21 final grids exist, a brute-force algorithm can be a practical method to solve Sudoku puzzles.

A brute force algorithm visits the empty cells in some order, filling in digits
sequentially, or backtracking when the number is found to be not valid. Briefly, a
program would solve a puzzle by placing the digit "1" in the first cell and checking if
it is allowed to be there. If there are no violations (checking row, column, and box
constraints) then the algorithm advances to the next cell, and places a "1" in that
cell. When checking for violations, if it is discovered that the "1" is not allowed, the
value is advanced to "2". If a cell is discovered where none of the 9 digits is allowed,
then the algorithm leaves that cell blank and moves back to the previous cell. The
value in that cell is then incremented by one. This is repeated until the allowed value
in the last (81st) cell is discovered.
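
A compact Python sketch of this backtracking procedure (grid is a 9×9 list of lists with 0 marking an empty cell; the function names are illustrative):

def allowed(grid, r, c, d):
    # Check the row, column, and 3x3 box constraints for digit d at (r, c).
    if d in grid[r]:
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):        # try digits 1..9 in order
                    if allowed(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0        # dead end below: move "back"
                return False                  # no digit fits here: backtrack
    return True                               # no empty cell remains: solved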

ASSESSMENT SCHEDULE

Tentative schedule for the assessments during the 2023-2024 even semester:

S.No  Name of the Assessment   Start Date   End Date   Portion
1     UNIT TEST 1              2.2.24       9.2.24     UNIT 1
2     IAT 1                    12.2.24      17.2.24    UNIT 1 & 2
3     UNIT TEST 2              11.3.24      16.3.24    UNIT 3
4     IAT 2                    1.4.24       6.4.24     UNIT 3 & 4
5     MODEL                    20.4.24      30.4.24    ALL 5 UNITS
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS

TEXT BOOKS:

1. Introduction to Artificial Intelligence and Machine Learning (IBM ICE Publications).

2. Stuart Russell, Peter Norvig, "Artificial Intelligence: A Modern Approach", Third Edition, Pearson Education / Prentice Hall of India, 2010.

3. Elaine Rich and Kevin Knight, "Artificial Intelligence", Third Edition, Tata McGraw-Hill, 2010.

REFERENCES:

1. Patrick H. Winston, "Artificial Intelligence", Third Edition, Pearson Education, 2006.

2. Dan W. Patterson, "Introduction to Artificial Intelligence and Expert Systems", PHI, 2006.

3. Nils J. Nilsson, "Artificial Intelligence: A New Synthesis", Harcourt Asia Pvt. Ltd., 2000.

MINI PROJECT SUGGESTIONS

1. VERY EASY: Implement an informed search algorithm to find a route between any two cities.

2. EASY: Music recommendation app.

3. MEDIUM: Predict housing price.

4. HARD: Predict housing price.

5. VERY HARD: Modern chatbot.

Thank you

