
2 Marks

Q1 Compare model based and goal based agent.

Q2 Define intelligent agent. What are the characteristics of an intelligent agent?


An intelligent agent is one that takes input from the environment through its
sensors and acts upon the environment through its actuators. Its actions are always
directed towards achieving a goal.
Characteristics of intelligent agent
Ability to remain autonomous
Responsive
Goal-Oriented
Q3 Given a full 5-gallon jug and an empty 3-gallon jug, the goal is to fill the
3-gallon jug with exactly one gallon of water. No marking is given on the jugs.
Give state space representation.
Give state space representation
State (x, y) = (gallons in the 5-gallon jug, gallons in the 3-gallon jug). One solution path:
(5, 0) → (0, 0) → (0, 3) → (3, 0) → (3, 3) → (5, 1) — the 3-gallon jug now holds exactly one gallon.
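To make the state-space idea concrete, here is a minimal Python sketch (my addition, not part of the original answer) that finds such a path with breadth-first search; it assumes a tap is available for refilling the jugs, which the move sequence above also relies on:

```python
from collections import deque

def successors(state):
    """All states reachable in one move from (x, y),
    where x is the 5-gallon jug and y is the 3-gallon jug."""
    x, y = state
    results = set()
    results.add((5, y))                  # fill the 5-gallon jug
    results.add((x, 3))                  # fill the 3-gallon jug
    results.add((0, y))                  # empty the 5-gallon jug
    results.add((x, 0))                  # empty the 3-gallon jug
    pour = min(x, 3 - y)                 # pour 5-gallon into 3-gallon
    results.add((x - pour, y + pour))
    pour = min(y, 5 - x)                 # pour 3-gallon into 5-gallon
    results.add((x + pour, y - pour))
    results.discard(state)               # drop no-op moves
    return results

def bfs(start=(5, 0), goal_amount=1):
    """Shortest move sequence leaving exactly goal_amount gallons in the 3-gallon jug."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][1] == goal_amount:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs())   # e.g. [(5, 0), (0, 0), (0, 3), (3, 0), (3, 3), (5, 1)]
```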
Q4 What is a heuristic function? Give a heuristic function for a given problem (8-puzzle, 8-queens).
A heuristic function is a function that estimates the cost of getting from one place to
another (from the current state to the goal state); it is also simply called a heuristic.
For the 8-puzzle, common heuristics are the number of misplaced tiles and the total Manhattan
distance of the tiles from their goal positions. For the 8-queens problem, a common heuristic
is the number of pairs of queens attacking each other.

https://www.youtube.com/watch?v=nmWGhb9E4es
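As an added illustration (not from the original notes), a short Python sketch of the two common 8-puzzle heuristics mentioned above; the board is represented as a tuple of nine numbers with 0 for the blank, and the goal layout shown is an assumption for the example:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the blank tile

def misplaced_tiles(state, goal=GOAL):
    """h1: number of tiles that are not in their goal position (blank ignored)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL):
    """h2: sum over tiles of |row difference| + |column difference| to the goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(misplaced_tiles(start), manhattan_distance(start))
```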
Q5 What are the performance measures used to evaluate the performance of search algorithms?
Completeness: if the algorithm is able to produce a solution whenever one exists, then it
satisfies the completeness criterion.
Optimality: if the solution produced is the minimum-cost solution, the algorithm is
said to be optimal.
Time complexity: the time taken to generate the solution, usually measured as the
number of nodes generated during the search.
Space complexity: the memory required to store the generated nodes while performing
the search.
Q6 Define AI. What are the applications of AI?
Artificial Intelligence is defined as the process where a machine tries to make
decisions like a human brain. A collection of technologies known as artificial
intelligence (AI) enables computers to carry out a range of complex tasks, such as the
ability to see, hear, interpret, and translate spoken and written language, analyze data,
generate suggestions, and more.
Machine Learning and Predictive Analytics:
Applications: Predictive modeling, recommendation systems, fraud detection, and
risk assessment.
Natural Language Processing (NLP):
Applications: Chatbots, language translation, sentiment analysis, and speech
recognition.
Computer Vision:
Applications: Image and video recognition, facial recognition, object detection, and
autonomous vehicles.
Robotics:
Applications: Industrial automation, drones, surgical robots, and household robots.
Q7 Compare model based and utility based agent.

Q8 What are the components of AI.


AI techniques should be independent of the problem domain as far as possible
AI programs must have:
Knowledge base
Navigational capabilities
Inferencing
Knowledge base: contains facts and rules.
Navigational capabilities: refers to the control strategy, which determines which rule to
apply or which heuristic to use to evaluate a state.
Inferencing: searching through the knowledge base and deriving new knowledge.
Q9 Explain the concept of rationality.
Rationality depends on four main criteria: first, the performance measure, which
defines the criterion of success for the agent; second, the agent's prior knowledge
of the environment; third, the actions that the agent can perform; and fourth, the
agent's percept sequence to date.
Q10 Define Knowledge base agent
A Knowledge Base Agent is a type of intelligent agent in artificial intelligence that
makes decisions based on a knowledge base or database of information. This agent
has an internal representation of knowledge, which it uses to reason, make inferences,
and take actions to achieve its goals. The knowledge base typically contains facts,
rules, and relationships that the agent has acquired or been programmed with.
Q11 Define propositional logic with example.
Propositional logic, also known as sentential logic or propositional calculus, is a
branch of logic that deals with propositions, i.e., statements or assertions that are either
true or false. In propositional logic, these propositions are combined using logical
connectives to form more complex statements. For example, if P stands for "It is raining"
and Q for "The ground is wet", then P → Q ("If it is raining, then the ground is wet")
is a compound proposition.
Q12 Explain types of propositions with example.

Q13 Explain Logical connectives with truth tables.


Logical Negation: The negation of a statement is also a statement with a truth value
that is exactly opposite that of the original statement.
Logical Conjunction (AND): A conjunction is a type of compound statement that is
comprised of two propositions (also known as simple statements) joined by the AND
operator.
Logical Disjunction (Inclusive OR): A disjunction is a kind of compound statement
that is composed of two simple statements formed by joining the statements with the
OR operator.
Logical Implication (Conditional): An implication (also known as a conditional
statement) is a type of compound statement that is formed by joining two simple
statements with the logical implication connective or operator.
Logical Biconditional (Double Implication): A double implication (also known as
a biconditional statement) is a type of compound statement that is formed by joining
two simple statements with the biconditional operator. A biconditional statement is
really a combination of a conditional statement and its converse.
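The truth tables for these connectives can be generated mechanically; the following short Python sketch (my addition, not part of the original notes) prints the combined table for the five connectives:

```python
from itertools import product

def implies(p, q):
    # p -> q is false only when p is true and q is false
    return (not p) or q

# Header: negation, conjunction, disjunction, implication, biconditional
print("P", "Q", "¬P", "P∧Q", "P∨Q", "P→Q", "P↔Q", sep="\t")
for p, q in product([True, False], repeat=2):
    print(p, q, not p, p and q, p or q, implies(p, q), p == q, sep="\t")
```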

Q14 Differentiate between uninformed and informed search techniques


5 Marks
Q15 Explain A* algorithm with example.
A* Search is one of the best and most popular techniques used in path-finding and
graph traversal. Informally speaking, A* Search, unlike other traversal techniques,
has "brains": it is a smart algorithm, and that is what separates it from conventional
algorithms.
Consider a square grid having many obstacles and we are given a starting cell and a
target cell. We want to reach the target cell (if possible) from the starting cell as
quickly as possible. Here A* Search Algorithm comes to the rescue.
What the A* Search Algorithm does is that at each step it picks the node according to a
value 'f', which is a parameter equal to the sum of two other parameters, 'g' and 'h'.
At each step it picks the node/cell having the lowest 'f' and processes that node/cell.
We define ‘g’ and ‘h’ as simply as possible below
g = the movement cost to move from the starting point to a given square on the grid,
following the path generated to get there.
h = the estimated movement cost to move from that given square on the grid to the
final destination. This is often referred to as the heuristic, which is nothing but a kind
of smart guess. We really don’t know the actual distance until we find the path,
because all sorts of things can be in the way (walls, water, etc.).
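As an added illustration (not part of the original notes), a minimal Python sketch of A* on such a grid; it assumes 4-directional movement with unit step cost and uses the Manhattan distance as the heuristic h:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D grid: grid[r][c] == 1 marks an obstacle.
    Returns the list of cells on a cheapest path, or None."""
    def h(cell):                                   # heuristic: Manhattan distance to goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_list = [(h(start), 0, start, [start])]    # entries: (f, g, cell, path)
    best_g = {start: 0}
    while open_list:
        f, g, cell, path = heapq.heappop(open_list)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                new_g = g + 1                      # unit movement cost per step
                if new_g < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = new_g
                    heapq.heappush(open_list,
                                   (new_g + h((r, c)), new_g, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # path around the obstacles
```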
Q16 PEAS properties for any given problem. (5 marks)
For example,
• Exploring the subsurface oceans of Titan.
• Shopping for used AI books on the Internet.
• Practicing tennis against a wall.
• Performing a high jump.
• Knitting a sweater

Exploring the subsurface oceans of Titan:


Performance Measure: Gathering detailed information about the subsurface oceans,
mapping the terrain, discovering potential signs of life.
Environment: Subsurface oceans of Titan, which are likely dark, cold, and distant.
Actuators: Robotic submarines, probes, sensors.
Sensors: Cameras, sonar, temperature sensors, pressure sensors.

Shopping for used AI books on the Internet:


Performance Measure: Successfully purchasing desired AI books within a specified
budget and time.
Environment: Online marketplace or websites selling used AI books.
Actuators: Web browser, keyboard, mouse.
Sensors: Visual feedback from the website, confirmation emails.

Practicing tennis against a wall:


Performance Measure: Improving tennis skills, refining strokes, and maintaining
consistency in hitting the ball.
Environment: Tennis court with a wall.
Actuators: Tennis racket, arm and body movements.
Sensors: Visual perception of the ball's trajectory, proprioceptive feedback.

Performing a high jump:


Performance Measure: Successfully clearing the bar at a specific height.
Environment: Track and field area with a high jump pit.
Actuators: Legs for jumping, body movements.
Sensors: Visual perception of the bar, proprioceptive feedback.

Knitting a sweater:
Performance Measure: Completing a well-knit sweater according to the design.
Environment: Crafting area with knitting materials.
Actuators: Knitting needles, hands, fingers.
Sensors: Visual perception of the knitting pattern, tactile feedback.
Q17 Draw and illustrate the architecture of learning agent / model based agent /
goal based agent / simple reflex agent / utility based agent.
Simple Reflex agent:
Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history.
These agents only succeed in a fully observable environment.
The simple reflex agent does not consider any part of the percept history during its
decision and action process.
The simple reflex agent works on condition-action rules, which means it maps the
current state directly to an action. For example, a room-cleaner agent acts only if there
is dirt in the room, as sketched below.
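A tiny Python sketch (my addition) of the condition-action idea, using the room-cleaner example; the percept format and the rule set are assumptions made for the illustration:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules: map the current percept directly to an action,
    ignoring all percept history."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "MoveRight"
    return "MoveLeft"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> MoveLeft
```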

Model-based reflex agent


The Model-based agent can work in a partially observable environment, and track the
situation.
A model-based agent has two important factors:
Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
Updating the agent state requires information about:
How the world evolves
How the agent's action affects the world.
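A minimal Python sketch (my addition, not part of the original notes) of a model-based reflex agent for a two-square vacuum world; the internal state records the believed status of each square and is updated from the percept history:

```python
class ModelBasedVacuumAgent:
    """Keeps an internal state (believed status of each square) that is
    updated from percepts, and acts on that model."""
    def __init__(self):
        self.model = {"A": None, "B": None}   # internal state, updated from percepts

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status         # update the model of "how the world is now"
        if status == "Dirty":
            return "Suck"
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"                     # the model says the whole world is clean
        return "MoveRight" if location == "A" else "MoveLeft"

agent = ModelBasedVacuumAgent()
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> MoveRight
```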
Goal-based agents
Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
The agent needs to know its goal which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
They choose an action, so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is
called searching and planning, which makes the agent proactive.

Utility-based agents
These agents are similar to the goal-based agent but provide an extra component of
utility measurement which makes them different by providing a measure of success
at a given state.
Utility-based agents act based not only on goals but also on the best way to achieve the goal.
The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
The utility function maps each state to a real number to check how efficiently each
action achieves the goals
Learning Agents
A learning agent in AI is the type of agent which can learn from its past experiences,
or it has learning capabilities.
It starts with basic knowledge and is then able to act and adapt automatically
through learning.
A learning agent has mainly four conceptual components, which are:
Learning element: It is responsible for making improvements by learning from the
environment.
Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
Performance element: It is responsible for selecting external action
Problem generator: This component is responsible for suggesting actions that will lead
to new and informative experiences.
Hence, learning agents are able to learn, analyse performance, and look for new ways
to improve the performance.

Q18 Evaluate BFS, DFS, DLS, IDDFS based on performance measures such as
completeness, optimality, time and space complexity. (5 marks)
b → branching factor
d → depth of the shallowest solution
m → maximum depth of the search tree
l → depth limit

Criteria    BFS       DFS       DLS       IDDFS
Complete    Yes       No        No        Yes
Time        O(b^d)    O(b^m)    O(b^l)    O(b^d)
Space       O(b^d)    O(bm)     O(bl)     O(bd)
Optimal     Yes       No        No        Yes
Q19 Consider the graph given below. Assume that the initial state is A and the goal
state is G. Find a path from the initial state to the goal state using DFS. Also report
the final cost.

Q20 Similar problems (like 13) will be asked based on BFS and A*.
Q21 Explain various types of environment.
Single agent vs. multi-agent:
The agent in a single-agent system models itself, the environment, and their
interactions. They are independent entities with their own goals, actions, and
knowledge. In a single-agent system, no other such entities are recognized by the
agent. Thus, even if there are indeed other agents in the world, they are not modeled
as having goals, etc. They are just considered part of the environment.
A multi-agent system (M.A.S.) is a computerized system composed of multiple
interacting agents interacting within an environment. Multi-agent systems can be used
to solve problems that are difficult or impossible for an individual agent or a
monolithic system to solve. For e.g.: An agent playing Tetris by itself can be a single
agent environment, whereas we can have an agent playing checkers in a two-agent
environment.

Deterministic vs. stochastic:


Deterministic AI environments are those on which the outcome can be completely
determined by the previous state and action executed by the agent. In other words,
deterministic environments ignore uncertainty. Most real world AI environments are
not deterministic. Instead, they can be classified as stochastic. Self-driving vehicles
are a classic example of stochastic AI processes.
In a stochastic environment, the next state of the world does not depend merely on the
current state and the agent's action; there is uncertainty in the outcome. An automated
car-driving system has a stochastic environment, as the agent cannot control the
traffic conditions on the road.

Episodic vs. sequential:


In an episodic environment, the performance of an agent is dependent on a number of
discrete episodes, with no link between the performances of an agent in different
scenarios.
Episodic environments are simpler from the agent developer's perspective because the
agent can decide what action to perform based only on the current episode; it need not
reason about the interactions between this episode and future ones. Consider the example
of a pick-and-place robot agent that is used to detect defective parts on the conveyor
belt of an assembly line.
In a sequential environment, as the name suggests, previous decisions can affect future
ones: the next action of the agent depends on what actions have been taken previously
and what actions may have to be taken in the future. This can be understood with the help
of the automated car-driving example: the current decision can affect the next ones, e.g.,
if the agent initiates braking, it then has to press the clutch and shift down a gear as
the next consequent actions.
Another example is checkers, where a previous move can affect all subsequent moves.

Static vs. dynamic


Static AI environments rely on data and knowledge sources that do not change frequently
over time. If an environment remains unchanged while the agent is performing its given
task, then it is called a static environment, e.g., the vacuum-cleaner environment.
If the environment changes while an agent is performing some task, then it is called
dynamic environment. Automatic car driver example comes under dynamic
environment as the environment keeps changing all the time.

Discrete vs. continuous.


When there are distinct and clearly defined inputs and outputs or percepts and actions,
then it is called a discrete environment. For e.g. chess environment has a finite number
of distinct inputs and actions.
When a continuous input signal is received by an agent, all the percepts and actions
cannot be defined beforehand then it is called continuous environment. E.g. an
automated car driving system.

Known vs. unknown


In a known environment, the output for all probable actions is given. Obviously, in
case of unknown environment, for an agent to make decision, it has to gain knowledge
about-how the environment works.
Q22 Problem formulation for any given problem.
Q23 Explain operations performed by KBA
The inference system is used when we want to update some information (sentences) in a
knowledge-based system and to retrieve information that is already present. This
mechanism is done by the TELL and ASK operations. They include inference, i.e.,
producing new sentences from old ones. Inference must ensure that when a question is
asked of the KB, the answer follows from what has previously been told to the KB. The
agent also has a KB, which initially contains some background knowledge. Whenever the
agent program is called, it performs some actions.
Actions done by the KB agent:
It TELLS the knowledge base what it perceived from the environment and what it needs to know.
It ASKS the knowledge base what action it should perform and gets the answer.
It TELLS the knowledge base which action was selected, and then the agent executes that
action, as sketched below.
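The generic TELL/ASK loop can be sketched in Python as follows (my paraphrase of the standard pseudocode, not from the original notes; the tell/ask interface of the knowledge base is schematic):

```python
class KnowledgeBasedAgent:
    """Schematic TELL/ASK loop of a knowledge-based agent."""
    def __init__(self, kb):
        self.kb = kb            # any knowledge base exposing tell(sentence) and ask(query)
        self.t = 0              # time step counter

    def __call__(self, percept):
        # TELL the KB what the agent perceives at time t
        self.kb.tell(self.make_percept_sentence(percept, self.t))
        # ASK the KB which action should be performed now
        action = self.kb.ask(self.make_action_query(self.t))
        # TELL the KB that this action was selected, then execute it
        self.kb.tell(self.make_action_sentence(action, self.t))
        self.t += 1
        return action

    def make_percept_sentence(self, percept, t):
        return f"Percept({percept}, {t})"

    def make_action_query(self, t):
        return f"BestAction?({t})"

    def make_action_sentence(self, action, t):
        return f"Action({action}, {t})"
```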
Q24 Explain various levels of knowledge-based agent
A knowledge-based agent can be viewed at different levels, which are given below:
1. Knowledge level
The knowledge level is the first level of a knowledge-based agent; at this level, we
need to specify what the agent knows and what the agent's goals are. With these
specifications, we can fix its behavior. For example, suppose an automated taxi agent
needs to go from station A to station B and it knows the way from A to B; this belongs
to the knowledge level.
2. Logical level
At this level, we understand how the knowledge is represented and stored. At this level,
sentences are encoded into different logics, i.e., an encoding of knowledge into logical
sentences occurs. At the logical level, we can expect the automated taxi agent to reach
the destination B.
3. Implementation level
This is the physical representation of logic and knowledge. At the implementation level,
the agent performs actions as per the logical and knowledge levels. At this level, the
automated taxi agent actually implements its knowledge and logic so that it can reach
the destination.
Q25 Explain Wumpus World problem with the list of sensors and actuators
The Wumpus world is a cave with 16 rooms (4×4). Each room is connected to others
through walkways (no rooms are connected diagonally). The knowledge-based agent
starts from Room[1, 1]. The cave has some pits, a treasure, and a beast named
Wumpus. The Wumpus cannot move but eats anyone who enters its room. If the
agent enters a pit, it gets stuck there. The goal of the agent is to take the treasure and
come out of the cave. The agent is rewarded when the goal conditions are met and
penalized when it falls into a pit or is eaten by the Wumpus.
Some elements help the agent explore the cave:
- The Wumpus's adjacent rooms are stenchy.
- The agent is given one arrow, which it can use to kill the Wumpus when facing it (the Wumpus screams when it is killed).
- The rooms adjacent to a pit are filled with breeze.
- The treasure room is always glittery.
Actuators:
Devices that allow the agent to perform the following actions in the environment.
Move forward
Turn right
Turn left
Shoot
Grab
Release
Sensors:
Devices which helps the agent in sensing the following from the environment.
Breeze
Stench
Glitter
Scream (When the Wumpus is killed)
Bump (when the agent hits a wall)
Effectors: move forward, turn left, turn right, grab gold, shoot arrow.
Goal of game: the main aim of the game is that the player should grab the gold and return
to the starting room without being killed by the Wumpus.
a. 100 points if the player comes out of the cave with the gold.
b. 1 point is taken away for every action taken.
c. 10 points are taken away if the arrow is used.
d. 200 points are taken away if the player gets killed.
Q26 Compare and contrast propositional logic and first order logic.
Q27 Draw and describe the architecture of knowledge based agents.
Q28 Explain first order logic with example.
First-order logic is another way of knowledge representation in artificial intelligence.
It is an extension to propositional logic.
FOL is sufficiently expressive to represent the natural language statements in a
concise way.
First-order logic is also known as predicate logic or first-order predicate logic.
First-order logic is a powerful language that represents information about objects in a
natural way and can also express the relationships between those objects.
First-order logic (like natural language) does not only assume that the world contains
facts like propositional logic but also assumes the following things in the world:
Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
Relations: It can be a unary relation such as red, round, is adjacent, or an n-ary relation
such as the sister of, brother of, has color, comes between.
Function: Father of, best friend, third inning of, end of, ......
As a natural language, first-order logic also has two main parts:
Syntax
Semantics

Atomic sentences: Atomic sentences are the most basic sentences of first-order logic.
These sentences are formed from a predicate symbol followed by a parenthesis with
a sequence of terms.
Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).

Complex Sentences:
Complex sentences are made by combining atomic sentences using connectives.
Consider the statement "x is an integer": it consists of two parts; the first part, x, is the
subject of the statement, and the second part, "is an integer", is known as the predicate.

Quantifiers in First-order logic:


A quantifier is a language element which generates quantification, and quantification
specifies the quantity of specimens in the universe of discourse.
Universal quantifier ∀ (for all, everyone, everything), e.g., ∀x Man(x) → Mortal(x), "All men are mortal."
Existential quantifier ∃ (for some, at least one), e.g., ∃x Student(x) ∧ Intelligent(x), "Some student is intelligent."
Q29 Explain the working of DLS and DFID
Depth-Limited Search Algorithm:
A depth-limited search algorithm is similar to depth-first search with a predetermined
depth limit. Depth-limited search can solve the drawback of infinite paths in depth-first
search. In this algorithm, a node at the depth limit is treated as if it has no further
successor nodes.
Depth-limited search can be terminated with two Conditions of failure:
Standard failure value: it indicates that the problem does not have any solution.
Cutoff failure value: it indicates that there is no solution to the problem within the given depth limit.
Advantages:
Depth-limited search is Memory efficient.
Disadvantages:
Depth-limited search also has a disadvantage of incompleteness.
It may not be optimal if the problem has more than one solution.

Iterative Deepening Depth-First Search (IDDFS):


The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing the
limit until a goal is found.
This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.
This Search algorithm combines the benefits of Breadth-first search's fast search and
depth-first search's memory efficiency.
The iterative deepening algorithm is a useful uninformed search when the search space is
large and the depth of the goal node is unknown.
Advantages:
It combines the benefits of BFS and DFS search algorithm in terms of fast search
and memory efficiency.
Disadvantages:
The main drawback of IDDFS is that it repeats all the work of the previous phase.
1st Iteration: A
2nd Iteration: A, B, C
3rd Iteration: A, B, D, E, C, F, G
4th Iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
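A compact Python sketch (my addition, not part of the original notes) of depth-limited search and iterative deepening over a tree matching the iterations listed above; the tree structure and the goal node K are assumptions for the example:

```python
TREE = {                       # example tree matching the iterations listed above
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
    "D": ["H", "I"],
    "F": ["K"],
}

def depth_limited_search(node, goal, limit, path=None):
    """DFS that treats nodes at depth == limit as having no successors.
    Returns the path to the goal, or None (cutoff / failure)."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                                 # cutoff reached
    for child in TREE.get(node, []):
        result = depth_limited_search(child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening(root, goal, max_depth=10):
    """Run DLS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(root, goal, limit)
        if result is not None:
            return result
    return None

print(iterative_deepening("A", "K"))   # -> ['A', 'C', 'F', 'K'], found in the fourth iteration
```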
