
Seventh Semester B.Tech Degree (S, FE) Examination May 2023 (2019 Scheme)

Course Code: CST401


Course Name: ARTIFICIAL INTELLIGENCE
Max. Marks: 100 Duration: 3 Hours
PART A
Answer all questions, each carries 3 marks. Marks
1. What is the Turing Test? Give its significance in the field of Artificial Intelligence. (3)
Ans:
The Turing Test, proposed by Alan Turing in 1950, assesses a machine's ability to exhibit human-like
intelligence during a text-based conversation. In the test, a human judge interacts with both a human
and a machine without knowing which is which, aiming to determine if the machine's responses are
indistinguishable from those of a human. Passing the Turing Test implies a machine's capacity to
simulate human-like behavior, though it does not necessarily indicate true understanding or
consciousness.
Significance: the Turing Test gives an operational criterion for machine intelligence; it lets us judge whether a machine can behave, and appear to think, like a human without requiring a precise definition of intelligence.
2. Describe in detail the four categories under which AI is classified. (3)
Ans:
Thinking Humanly: Examines how closely AI systems mimic human thought processes and cognitive
functions in problem-solving.
Acting Humanly: Focuses on AI systems emulating human behavior in their interactions, often
involving natural language processing and communication.
Thinking Rationally: Involves the logical aspect of AI, emphasizing the use of rational decision-making
and problem-solving based on formal rules and reasoning.
Acting Rationally: Concentrates on AI systems making rational decisions and taking actions that lead to
optimal outcomes in various situations.
3. What is a Rational agent? Explain. (3)
Ans:
A rational agent is one that does the right thing, where the "right thing" is the action expected to make the agent most successful. This leaves the problem of deciding how and when to evaluate the agent's success; in practice, rationality is judged against a performance measure, given the agent's percept sequence and built-in knowledge.
4. List any three advantages of Depth First search. (3)
Ans:
Simple to implement; requires less memory than BFS (only the current path and unexplored siblings must be stored); may find a solution without examining much of the state space at all; can be faster than BFS when solutions lie deep in the tree.
5. What are the components of a Constraint Satisfaction Problem? Illustrate with an example.
(3)
Ans:
A Constraint Satisfaction Problem (CSP) consists of three main components: variables, domains, and
constraints. Variables represent the unknowns to be determined, domains define the possible values
each variable can take, and constraints specify relationships or limitations among the variables. For
example, in scheduling courses, variables could be the time slots for each course, domains would be the
available time periods, and constraints might include ensuring that no two courses with the same
students overlap in their schedules. The goal is to find a combination of variable assignments that
satisfies all constraints, providing a valid solution to the problem.
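The course-scheduling example above can be sketched as a tiny brute-force CSP solver. This is only an illustrative sketch; the course names, time slots, and the single "shared students" conflict are made-up assumptions.

```python
from itertools import product

# Hypothetical mini CSP: assign a time slot to each course so that
# courses sharing students never get the same slot.
variables = ["AI", "ML", "DB"]                      # courses (variables)
domains = {v: ["9am", "11am"] for v in variables}   # possible slots (domains)
conflicts = [("AI", "ML")]                          # AI and ML share students

def consistent(assignment):
    """True if no two conflicting courses share a time slot."""
    return all(assignment[a] != assignment[b] for a, b in conflicts)

def solve():
    """Brute-force search over all complete assignments."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if consistent(assignment):
            return assignment
    return None

solution = solve()
```

Real CSP solvers interleave this search with constraint propagation rather than enumerating every assignment, but the three components (variables, domains, constraints) are exactly those named above.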
6. Define Alpha-Beta Pruning. (3)
Ans:
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated
by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for
machine playing of two-player games.
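A minimal sketch of alpha-beta pruning on a hand-built game tree; the nested-list representation and the leaf values are illustrative assumptions, not part of any particular game.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning on a game tree given as nested
    lists; leaves are static evaluation scores."""
    if isinstance(node, (int, float)):   # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: MIN would never allow this
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                # alpha cutoff
            break
    return value

# depth-2 tree: MAX chooses among three MIN nodes
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
result = alphabeta(tree, float("-inf"), float("inf"), True)
```

The pruned branches (e.g. the 4 and 6 under the second MIN node, once 2 is seen) are exactly the nodes minimax would have evaluated needlessly.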
7. Give the definition of Propositional logic. (3)
Ans:
Propositional logic is a language built from propositional symbols and connectives. It consists of
propositional symbols, which represent simple statements, and connectives, which are logical operators
used to combine or modify these propositions. Examples of propositional logic include statements like
"It is raining" (represented by a propositional symbol) and connectives such as "and," "or," and "not,"
allowing the formulation of compound statements like "It is raining and I have an umbrella."
8. Explain the term Skolemization. (3)
Ans:
Skolemization is a transformation on first-order logic formulae which removes all existential quantifiers from a formula. It introduces Skolem constants and functions, which act as witnesses for the existentially quantified variables, producing an equisatisfiable (though not logically equivalent) formula in prenex normal form without existential quantifiers.
9. State and explain Ockham’s razor principle. (3)
Ans:
Ockham's razor states that among competing hypotheses, the one with the fewest assumptions should be preferred; that is, prefer the simplest hypothesis consistent with the data. Ockham's razor would favor a
theory that explains planetary motion through gravity over one involving complex, unnecessary
assumptions.
10. Explain about Supervised Learning. (3)
Ans:
The computer is presented with example inputs and their desired outputs, given by a "teacher", and the
goal is to learn a general rule that maps inputs to outputs. In other words, Supervised learning is a
machine learning paradigm where the algorithm is trained on a labeled dataset, which means the input
data is paired with corresponding output labels. The goal is for the algorithm to learn the mapping
between inputs and outputs, enabling it to make predictions or classifications on new, unseen data based
on the patterns learned during training.
PART B
Answer any one full question from each module, each carries 14 marks.
Module I
11a) Describe in detail about different types of Agent programs with suitable figures. (8)
Ans:
Types of Agents: A. Simple Reflex Agents B. Model Based Reflex Agents C. Goal Based Agents D.
Utility Based agents.
Simple Reflex Agents- Select actions on the basis of the current percept, ignoring the rest of the percept
history
Model Based Reflex Agents- Agents have internal state, which is used to keep track of past states of the
world. Agents have the ability to represent change in the World.
Goal Based Agents - Key difference wrt Model-Based Agents: In addition to state information, have
goal information that describes desirable situations to be achieved. Agents of this kind take future
events into consideration.
Utility Based agents- An agent’s utility function is essentially an internalization of the performance
measure. If the internal utility function and the external performance measure are in agreement, then an
agent that chooses actions to maximize its utility will be rational according to the external performance
measure.

b) Explain 6 applications of AI in detail. (6)
Ans:
Healthcare: AI is used for medical image analysis, diagnosis, and personalized treatment
recommendations, improving patient care and outcomes.
Autonomous Vehicles: AI enables self-driving cars to perceive and navigate the environment,
enhancing road safety and transportation efficiency.
Natural Language Processing: AI powers voice assistants and language translation, facilitating human-
computer communication and breaking language barriers.
Financial Fraud Detection: AI algorithms analyze patterns and anomalies in financial transactions to
identify and prevent fraudulent activities.
E-commerce Recommendations: AI-driven recommendation systems analyze user behavior to suggest
personalized products or content, enhancing the online shopping experience.
Manufacturing Automation: AI controls robotic systems, optimizing production processes, and quality
control in manufacturing environments.
OR
12a) Define PEAS in AI. For the following activities, give a PEAS description of the task
environment and characterize it in terms of the task environment properties.
a) Medical Diagnosis system
b) Bidding on an item at an auction (7)
Ans:
PEAS (Performance, Environment, Actuators, Sensors)
a) Medical Diagnosis System:
Performance Measure (P): Accurate and timely diagnosis of medical conditions.
Environment (E): Patients, medical records, diagnostic tests, healthcare professionals, medical
databases.
Actuators (A): Recommendations for further tests, treatment plans, referrals to specialists.
Sensors (S): Patient symptoms, medical history, test results.
Task Environment Properties:
Observable: Partially observable, as some symptoms may not be immediately apparent.
Multi-Agent: Involves collaboration between the system and healthcare professionals.
Deterministic/Stochastic: Stochastic, as the progression of diseases and response to treatment may vary.
Static/Dynamic: Dynamic, as the patient's condition may change over time.
Discrete/Continuous: Combination of discrete (symptoms) and continuous (test results) elements.
b) Bidding on an Item at an Auction:
Performance Measure (P): Winning items at the lowest possible cost.
Environment (E): Online auction platform, other bidders, item descriptions.
Actuators (A): Placing bids, increasing bid amounts.
Sensors (S): Current bid amounts, time remaining, information about other bidders.
Task Environment Properties:
Observable: Fully observable, as all relevant information is available.
Multi-Agent: Involves competition with other bidders.
Deterministic/Stochastic: Stochastic, as other bidders' strategies are uncertain.
Static/Dynamic: Dynamic, as the auction progresses and bids change.
Discrete/Continuous: Discrete, as bids are made in discrete amounts.
b) What are the properties of Task Environment? Explain. (7)
Ans:
Fully Observable vs Partially Observable: In a fully observable environment, the agent has access to the
complete state of the environment at any given time, while in a partially observable environment, some
aspects of the state are not directly visible to the agent.

Static vs Dynamic: A static environment remains unchanged by the agent's actions, while a dynamic
environment is subject to changes that may occur independently or as a result of the agent's actions.

Discrete vs Continuous: In a discrete environment, there is a finite set of distinct, separate states, while
a continuous environment involves an infinite or uncountable range of possible states.

Deterministic vs Stochastic: In a deterministic environment, the next state is completely determined by the current state and the agent's actions, whereas in a stochastic environment, there is an element of randomness or uncertainty in the state transitions.

Single-agent vs Multi-agent: A single-agent environment involves only one agent, while a multi-agent
environment includes multiple interacting agents, each pursuing its own goals.

Episodic vs Sequential: In an episodic environment, the agent's experience is divided into isolated
episodes with no influence from one to the next, whereas a sequential environment involves a
continuous, sequential interaction where the current action affects future outcomes.

Known vs Unknown: In a known environment, the rules governing the agent's actions and the outcomes
are fully understood, while in an unknown environment, some aspects are uncertain or not completely
understood.

Accessible vs Inaccessible: In an accessible environment, the agent has complete knowledge of the
environment's entire state, while in an inaccessible environment, certain parts of the state are hidden or
not directly observable by the agent.
Module II
13a) Discuss the heuristic function. Explain how the heuristic function helps during search
procedure. Explain with a suitable example (7)
Ans:
Heuristic search is a search algorithm that uses heuristics or rules of thumb to guide the exploration of a
solution space efficiently. A heuristic function is an evaluation function that estimates the cost or value
of reaching a goal state from a given state in a search algorithm.
In the context of the A* algorithm, a popular heuristic search algorithm, the heuristic function estimates
the cost from the current state to the goal, guiding the search towards promising paths. For example, in
a navigation system, the straight-line distance (Euclidean distance) from the current location to the
destination serves as a heuristic, providing a quick estimate of the remaining distance. The A*
algorithm uses the sum of the actual cost from the start state and the heuristic estimate to prioritize
paths, exploring those with lower estimated total costs first.
b) Evaluate a problem as a state space search with an example. (7)
Ans:
Viewing a problem as a state space search involves representing the problem-solving process as
exploring a set of states, where each state represents a possible configuration or situation, and the
transitions between states are determined by actions or operations.
In the 8 Queens problem, the state space comprises all possible arrangements of eight queens on an 8x8
chessboard, with each state representing a unique configuration. The goal is to find a state where no two
queens threaten each other. The state space search involves exploring different queen placements
through actions like moving or placing queens and using heuristics to guide the search toward a
solution efficiently.
OR
14a) Discuss any two uninformed search strategies in intelligent systems with examples. (9)
Ans:
Differentiation between Uninformed and Informed search strategies:
• Uninformed or blind search strategies use only the information available in the problem definition.
• Informed or heuristic search strategies use additional (heuristic) information.
Breadth-First Search (BFS):
BFS explores the search space level by level, starting from the initial state and moving outward to
neighboring states before progressing deeper.
Example: In the 8-puzzle problem, where you rearrange numbered tiles on a 3x3 grid, BFS
systematically examines all possible board configurations, ensuring the shortest solution path is found
first.
Advantage: Guarantees the shortest path to the solution but may consume substantial memory in larger
state spaces.
Depth-First Search (DFS):
DFS explores the search space by going as deep as possible along one branch before backtracking,
effectively traversing a path until it reaches a leaf node before exploring another.
Example: In the maze-solving problem, DFS might explore a single path through the maze until it
reaches the end or a dead-end, then backtrack to explore alternative routes.
Advantage: Memory-efficient, but may not always find the shortest path and could get stuck in deep
branches before exploring more promising paths.
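The two strategies can be sketched side by side on a toy graph; the four-node "maze" below is a hypothetical example, not a specific problem from the syllabus.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns a shortest path (fewest edges)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO queue: level by level
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def dfs(graph, start, goal, visited=None):
    """Depth-first search; returns some path, not necessarily shortest."""
    visited = visited or {start}
    if start == goal:
        return [start]
    for nxt in graph[start]:             # go deep before backtracking
        if nxt not in visited:
            visited.add(nxt)
            sub = dfs(graph, nxt, goal, visited)
            if sub:
                return [start] + sub
    return None

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

On this graph both find A-B-D, but only BFS guarantees the minimum number of edges in general.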
b) Write A* algorithm and list the various observations about algorithm. (5)
Ans:
A* Algorithm (steps):
Initialize Open List with the start node.
Initialize Closed List as empty.
Set the initial node's cost and heuristic values.
While Open List is not empty:
a. Select the node with the lowest f(n) = g(n) + h(n) from the Open List.
b. Move the selected node from Open List to Closed List.
c. If the node is the goal, the solution is found.
d. Expand the node by generating its successors and updating their costs.
e. For each successor:
i. If the successor is in Closed List, skip it.
ii. If the successor is not in Open List, add it to the Open List.
iii. If the successor is in Open List and has a lower cost, update its values.
If Open List is empty and goal not reached, no solution exists.
Observations:
Optimality: A* is admissible and consistent, guaranteeing the optimal solution if the heuristic is
admissible.
Completeness: A* is complete if the branching factor is finite and the cost of each action is greater than
some positive constant.
Time Complexity: It depends on the heuristic's quality and the structure of the state space.
Space Complexity: Can be memory-intensive due to the need to store and prioritize nodes in the Open
List.
Heuristic Function: The heuristic function (h(n)) must be admissible (never overestimates) for A* to
guarantee optimality.
Example: In a navigation problem, A* can find the shortest path between two locations on a map,
considering both the actual cost of reaching a location (g(n)) and the estimated cost to the goal (h(n)).
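The steps above can be sketched in code. The graph, step costs, and heuristic values below are hypothetical; the heuristic is admissible for this graph (it never overestimates the remaining cost).

```python
import heapq

def astar(graph, h, start, goal):
    """A* search. graph maps node -> [(neighbor, step_cost)]; h maps
    node -> admissible estimate of the cost to the goal."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # lowest f = g + h first
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph[node]:
            if nbr not in closed:
                heapq.heappush(open_list,
                               (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")

# hypothetical road map with a straight-line-distance style heuristic
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
path, cost = astar(graph, h, "S", "G")
```

Note how the direct route S-A-G (cost 6) is skipped in favour of S-A-B-G (cost 4) because f(n) = g(n) + h(n) ranks the latter's nodes lower.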
Module III
15a) What is local consistency in CSP constraint propagation? Explain different types of local
consistencies. (10)
Ans:
Local consistency-Constraint propagation may be intertwined with search, or it may be done as a
preprocessing step, before search starts. Sometimes this preprocessing can solve the whole problem, so
no search is required at all.
Types of Consistency – Node, Arc, Path, K-consistency
Node Consistency: Ensures that all values assigned to a single variable in a constraint satisfaction
problem are consistent with the variable's unary constraints.
Arc Consistency: Requires that for every pair of connected variables in a constraint satisfaction
problem, each value of the first variable is consistent with at least one value of the second variable
based on the binary constraints.
Path Consistency: Extends the concept of arc consistency to longer paths, ensuring that the consistency
is maintained for all possible paths of connected variables in a constraint network.
K-Consistency: A generalization of arc consistency, where a constraint satisfaction problem is
considered k-consistent if every subset of k variables satisfies the binary constraints of the problem.
b) Write an Arc-Consistency algorithm (AC-3). (4)
Ans:
AC-3 maintains a queue (agenda) of arcs (Xi, Xj). It repeatedly pops an arc and makes Xi arc-consistent with respect to Xj by deleting from Xi's domain every value that has no consistent partner in Xj's domain. Whenever Xi's domain is revised, all arcs (Xk, Xi) for neighbours Xk of Xi are put back on the queue. If any domain becomes empty, the CSP is unsatisfiable; if the queue empties, the whole network is arc-consistent.
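A possible implementation of AC-3 following that description; the toy constraint X < Y over small integer domains is an illustrative assumption.

```python
from collections import deque

def revise(domains, constraint, xi, xj):
    """Remove values of xi that have no supporting value in xj."""
    revised = False
    for x in set(domains[xi]):
        if not any(constraint(xi, x, xj, y) for y in domains[xj]):
            domains[xi].discard(x)
            revised = True
    return revised

def ac3(domains, constraint, neighbors):
    """AC-3. domains maps var -> set of values; constraint(xi, x, xj, y)
    is True when x for xi is compatible with y for xj."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraint, xi, xj):
            if not domains[xi]:
                return False                 # empty domain: inconsistent
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))   # re-check affected arcs
    return True

# toy CSP: X < Y
doms = {"X": {1, 2, 3}, "Y": {1, 2}}
nbrs = {"X": ["Y"], "Y": ["X"]}
ok = ac3(doms, lambda xi, x, xj, y: x < y if (xi, xj) == ("X", "Y") else y < x,
         nbrs)
```

Here propagation alone prunes the domains to X = {1}, Y = {2}, solving the problem with no search, which is exactly the preprocessing effect described above.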
OR
16a) How and when is a heuristic used in the Minimax search technique? Illustrate with an example. Also describe an algorithm for the Minimax procedure. (8)
Ans:
A heuristic (static evaluation) function is used in Minimax whenever the search cannot expand the game tree all the way to terminal states, i.e., when the tree is cut off at a depth limit: the cut-off positions are scored by the heuristic instead of by the true game outcome. The final decision made by Minimax therefore depends largely on how good the heuristic function is, so designing a reasonable heuristic function is paramount. For example, in tic-tac-toe a common heuristic is the number of rows, columns, and diagonals still open for MAX minus the number still open for MIN.
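A minimal sketch of the Minimax procedure itself, on a depth-2 tree whose leaf values stand in for heuristic evaluations (the values are illustrative):

```python
def minimax(node, maximizing):
    """Plain minimax on a game tree given as nested lists; leaves are
    heuristic evaluations of the cut-off positions."""
    if isinstance(node, (int, float)):
        return node                       # leaf: heuristic value
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 5], [2, 9]]   # MAX to move; MIN replies at depth 1
best = minimax(tree, True)
```

MIN would hold MAX to 3 on the left branch and 2 on the right, so MAX's backed-up value is 3.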

b) Solve the following cryptarithmetic problems using the constraint satisfaction search procedure.
i) EAT + THAT = APPLE
ii) SEND + MORE = MONEY (6)
Ans:
Variables- E, A, T, H, P, L.
domain for each variable: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.
Constraints- E, T, A, H, P, L should have distinct values.
E and A cannot be 0.
The sum of EAT and THAT should equal APPLE.
EAT + THAT = APPLE (E-8 A-1 T-9 H-2 P-0 L-3)
Variables- S, E, N, D, M, O, R, Y.
domain for each variable: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.
Constraints- S and M cannot be 0.
All variables must have distinct values.
The sum of SEND and MORE should equal MONEY.
SEND + MORE = MONEY (S-9 E-5 N-6 D-7 M-1 O-0 R-8 Y-2)
Apply constraint propagation techniques (like arc consistency) to reduce the domains of the variables
based on the defined constraints. Perform a systematic search for a valid assignment of values to
variables that satisfies all constraints. Once a solution is found, extract and display the values assigned
to each variable to satisfy the equation.
Module IV
17a) What is a knowledge-based agent? How does it work? Write an algorithm for Knowledge
based agent. (7)
Ans:
A knowledge-based agent is an intelligent agent that utilizes an internal knowledge base, representing
information about the world, to make informed decisions and take actions in pursuit of its goals.
A knowledge-based agent operates by acquiring, representing, and reasoning with knowledge stored in
its internal knowledge base, enabling it to perceive its environment, update its beliefs, and choose
actions that align with its objectives based on the available information.
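The TELL/ASK cycle described above can be sketched as follows. The "breeze" rule and the action names are purely hypothetical stand-ins for a real inference procedure over the knowledge base.

```python
class KBAgent:
    """Generic knowledge-based agent loop: TELL the percept, ASK for an
    action, TELL the action taken, advance time."""

    def __init__(self):
        self.kb = set()    # knowledge base: a set of proposition strings
        self.t = 0         # time step

    def tell(self, sentence):
        self.kb.add(sentence)

    def ask(self):
        # hypothetical inference rule: if a breeze was ever perceived,
        # infer danger and act cautiously
        return "retreat" if any(s.startswith("breeze") for s in self.kb) \
            else "forward"

    def agent_program(self, percept):
        self.tell(f"{percept}@{self.t}")        # TELL percept sentence
        action = self.ask()                      # ASK for the best action
        self.tell(f"did({action})@{self.t}")     # TELL action sentence
        self.t += 1
        return action

agent = KBAgent()
```

A real agent would replace the set of strings with logical sentences and the ask() rule with entailment checking, but the control loop is the same.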

b) Illustrate the use of First Order Logic to represent Knowledge. (7)

Ans:
First-order logic (FOL) is a powerful formalism for representing knowledge in artificial intelligence. It
allows for the representation of objects, relationships, and quantifiers, enabling the expression of
complex statements. For instance, to represent the knowledge that "all humans are mortal," one can use
the FOL statement ∀x (Human(x) → Mortal(x)), where ∀ denotes universal quantification, Human(x)
represents the predicate "x is human," and Mortal(x) represents "x is mortal." This expressive capability
makes FOL well-suited for capturing and reasoning about a wide range of knowledge in AI
applications.
OR
18a) Suppose my knowledge base consists of the facts S ^ T ⇒ ¬ (¬ P ^ R), ¬¬S, T And need to
prove P is entailed. Use rules of inference to do this. (5)
Ans:
1. ¬¬S ⊢ S (Double Negation Elimination on ¬¬S)
2. S, T ⊢ S ∧ T (And-Introduction)
3. S ∧ T, S ∧ T ⇒ ¬(¬P ∧ R) ⊢ ¬(¬P ∧ R) (Modus Ponens)
4. ¬(¬P ∧ R) ⊢ ¬¬P (And-Elimination)
5. ¬¬P ⊢ P (Double Negation Elimination)
Hence P is entailed.
b) Differentiate Forward Chaining and Backward Chaining with their algorithms. (9)
Ans:
Forward chaining is an inference algorithm utilized in rule-based systems to systematically derive
conclusions from a set of known facts and rules. The algorithm begins with an empty working memory
containing established facts. It then iteratively selects rules whose conditions match the current state of
the working memory, applies those rules by adding their consequents to the working memory, and
repeats this process until no further rules can be applied. The termination occurs when the working
memory reaches a stable state, and the final content of the working memory represents the derived
conclusions from the given set of rules and facts. This algorithm is commonly employed in expert
systems for automated reasoning and decision-making.
Backward chaining is an inference algorithm used in rule-based systems to determine whether a given
goal can be satisfied based on existing facts and rules. The algorithm starts with the goal and works
backward, attempting to find a set of conditions that, when satisfied, lead to the goal. It selects rules
whose conclusions match the goal, checks if their conditions are satisfied by the existing facts, and
recursively applies the algorithm until it reaches known facts or a failure point. Backward chaining is
particularly useful for goal-oriented reasoning and is commonly employed in diagnostic systems and
expert systems where the objective is to identify the causes of a particular issue.

Forward chaining tends to accumulate facts in the working memory, potentially using more memory.
Backward chaining is often more memory-efficient as it selectively explores paths backward to reach a
goal. Forward chaining is commonly applied in expert systems where the emphasis is on exploring and
utilizing existing knowledge to draw new conclusions.
Backward chaining is frequently used in diagnostic systems, troubleshooting, and situations where the
focus is on identifying the causes of a problem or achieving specific goals.
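The forward-chaining algorithm described above can be sketched for definite clauses; the rules A ∧ B ⇒ C and C ⇒ D are illustrative.

```python
def forward_chain(rules, facts):
    """Forward chaining for definite clauses. rules is a list of
    (premises, conclusion) pairs; repeatedly fire every rule whose
    premises are all in working memory until nothing new is derived."""
    facts = set(facts)               # working memory of established facts
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)    # add the consequent
                changed = True
    return facts

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
derived = forward_chain(rules, {"A", "B"})
```

Backward chaining would instead start from the goal D, match it against the rule C ⇒ D, and recursively try to establish C from A ∧ B ⇒ C, touching only rules relevant to the goal.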
Module V
19a) Give the significance of Learning from examples. Explain the various types of Learning in
problem solving. (7)
Ans:
An agent is learning if it improves its performance on future tasks after making observations about the
world.
Supervised Learning:
In supervised learning, the algorithm is trained on a labeled dataset, where each input is associated with
the correct output.
The algorithm learns to map inputs to outputs by generalizing from the provided examples.
It is commonly used for tasks like classification and regression.
Unsupervised Learning:
Unsupervised learning involves training an algorithm on an unlabeled dataset, where the model must
discover patterns or relationships within the data.
Clustering and dimensionality reduction are common applications, where the algorithm identifies
inherent structures or groupings in the data.
Reinforcement Learning:
Reinforcement learning is a type of learning where an agent learns to make decisions by interacting
with an environment.
The agent receives feedback in the form of rewards or penalties based on its actions, allowing it to learn
optimal strategies over time.
Applications include game playing, robotic control, and autonomous systems.
Semi-Supervised Learning:
Semi-supervised learning combines elements of both supervised and unsupervised learning.
The algorithm is trained on a dataset that contains both labeled and unlabeled examples.
It leverages the labeled data for supervised learning tasks while using the unlabeled data to discover
underlying structures or improve generalization.
b) How do we evaluate and choose the best hypothesis that fits future data? Explain with a suitable method. (7)
Ans:
To evaluate and choose the best hypotheses that fit future data, cross-validation is a widely used method. This
involves splitting the dataset into k subsets, training the model on k-1 folds, and validating on the remaining fold
iteratively. Performance metrics, such as accuracy or precision, are calculated for each iteration. The hypothesis
demonstrating consistent high performance across all folds is deemed more reliable, indicating its potential to
generalize well to new, unseen data. Finally, the selected hypothesis is rigorously tested on an entirely separate
test dataset to ensure robust generalization.
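The k-fold procedure above can be sketched by hand; the "always predict the majority training label" hypothesis and the accuracy scorer below are illustrative placeholders for a real model and metric.

```python
def k_fold_scores(examples, labels, train, evaluate, k=5):
    """Manual k-fold cross-validation: train on k-1 folds, score on the
    held-out fold; returns the k validation scores."""
    folds = [list(range(i, len(examples), k)) for i in range(k)]
    scores = []
    for held_out in folds:
        train_idx = [i for i in range(len(examples)) if i not in held_out]
        model = train([examples[i] for i in train_idx],
                      [labels[i] for i in train_idx])
        scores.append(evaluate(model,
                               [examples[i] for i in held_out],
                               [labels[i] for i in held_out]))
    return scores

# toy hypothesis: always predict the majority class of the training labels
def train(xs, ys):
    return max(set(ys), key=ys.count)

def evaluate(model, xs, ys):          # accuracy on the held-out fold
    return sum(1 for y in ys if y == model) / len(ys)

scores = k_fold_scores(list(range(10)), [1] * 8 + [0] * 2, train, evaluate, k=5)
```

Averaging the k scores gives the estimate used to compare hypotheses before the final check on the untouched test set.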
OR
20a) Explain learning in Decision Tree with example. (8)
Ans:
Learning in decision trees involves constructing a tree structure based on features in the training data for
classification or regression.
The Decision-Tree learning algorithm involves recursively partitioning a dataset to create a tree structure for
decision-making. It starts by selecting the best feature based on measures like Information Gain or Gini Impurity.
A node is created for each decision based on the chosen feature, and the dataset is split into subsets. The
algorithm recursively applies these steps until stopping criteria, such as a specific depth or minimum samples in a
leaf, are met. The resulting tree represents a set of rules for making decisions, and during prediction, an instance
traverses the tree to reach a leaf node, where the class label is assigned.
Consider a subset of the Iris dataset, focusing on two classes: Setosa and Versicolor. The features include sepal
length and sepal width.
Sepal Length | Sepal Width | Class
5.1          | 3.5         | Setosa
4.9          | 3.0         | Setosa
5.5          | 2.3         | Versicolor
6.0          | 2.9         | Versicolor
Algorithm Steps:
Select Best Feature:
Measure Information Gain to select the best feature for the initial split. Let's say "Sepal Length" is chosen.
Create a Node:
A decision tree node is created based on "Sepal Length."
Split Data:
Split the dataset into two subsets based on "Sepal Length":
Subset 1: Sepal Length <= 5.2 (Setosa)
Subset 2: Sepal Length > 5.2 (Versicolor)
Recursive Process:
For each subset, recursively apply the Decision-Tree learning algorithm until stopping criteria are met.
Assign Class Labels:
Assign class labels to the leaf nodes:
Subset 1 Leaf: Class = Setosa
Subset 2 Leaf: Class = Versicolor
Resulting Decision Tree:
Sepal Length <= 5.2
/ \
Setosa Versicolor
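The Information Gain behind the split at 5.2 can be verified with a short sketch (entropy and gain only, not a full tree builder):

```python
import math

def entropy(labels):
    """Shannon entropy of a non-empty list of class labels."""
    total = len(labels)
    return -sum((labels.count(c) / total) * math.log2(labels.count(c) / total)
                for c in set(labels))

def information_gain(feature, labels, threshold):
    """Gain from splitting a numeric feature at the given threshold."""
    left = [l for x, l in zip(feature, labels) if x <= threshold]
    right = [l for x, l in zip(feature, labels) if x > threshold]
    remainder = (len(left) / len(labels)) * entropy(left) \
        + (len(right) / len(labels)) * entropy(right)
    return entropy(labels) - remainder

# the four examples from the table above
sepal_length = [5.1, 4.9, 5.5, 6.0]
species = ["Setosa", "Setosa", "Versicolor", "Versicolor"]
gain = information_gain(sepal_length, species, 5.2)
```

The parent entropy is 1 bit (a 2/2 class split) and both children are pure, so this split achieves the maximum possible gain of 1.0, which is why the algorithm chooses it.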
b) What do you mean by Linear classification with logistic regression? Explain. (6)
Ans:

The logistic function, or sigmoid function, is defined as σ(z) = 1 / (1 + e^(-z)), where z is the linear combination of input features and weights. It transforms the linear output of logistic regression into probabilities between 0 and
1, facilitating binary or multiclass classification decisions based on a specified threshold.

Linear classification with logistic regression involves modeling the relationship between input features and the
probability of an instance belonging to a specific class. Despite its name, logistic regression is employed for
classification tasks, not regression. The algorithm assumes a linear relationship between input features and the
log-odds of the positive class, utilizing a sigmoid activation function to map the output into the range [0, 1].
During training, the model adjusts weights using optimization algorithms like gradient descent to maximize the
likelihood of observed class labels. The resulting decision boundary is a hyperplane that separates instances into
different classes based on calculated probabilities.
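A minimal sketch of logistic regression trained by stochastic gradient descent on a single feature; the toy data, learning rate, and epoch count are illustrative assumptions.

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)      # predicted probability of class 1
            w -= lr * (p - y) * x       # gradient of log-loss w.r.t. w
            b -= lr * (p - y)           # gradient of log-loss w.r.t. b
    return w, b

# linearly separable toy data: class 1 iff x > 2
xs = [0.0, 1.0, 3.0, 4.0]
ys = [0, 0, 1, 1]
w, b = train_logreg(xs, ys)
```

The learned decision boundary is the point where w*x + b = 0, i.e., where σ gives probability 0.5, which is the linear (here one-dimensional) hyperplane referred to above.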
