
AIML SUPER IMP Questions and Solns

VTU previous year paper analysis by YouTuber Afnan Marquee. The questions in this
document have been repeated 5+ times in previous year papers, so don't miss them by any
chance! For video explanations, check out my YouTube channel!

Module 1
Questions
1. What are the task domains of AI?
2. What is AI technique?
3. TIC TAC TOE problem
4. State Space Search Methodology
5. Production System Characteristics
6. Requirements of Good Control Strategies
7. Water Jug Problem
8. Heuristic Search Techniques (BFS/DFS)
9. A* and AO*

Solutions

1. What are the task domains of AI?


The following figures show some of the tasks that are the targets of work in AI:
2. What is AI technique?

3. TIC TAC TOE Problem


4. State Space Search
5. Production System Characteristics
6. Requirements of Good Control Strategies

7. Water Jug Problem


8. Heuristic Search Techniques (BFS/DFS)
Example of Heuristic Search Technique (Hill Climbing)
9. A* and AO*
A* Search Algorithm
● It is a searching algorithm that is used to find the shortest path between an initial and a
final point.
● It is a handy algorithm that is often used for map traversal to find the shortest path. A*
was originally developed as a graph traversal algorithm, to help build a robot that could
plan its own course. It still remains a widely popular algorithm for graph traversal.
● It expands the paths with the lowest estimated cost first, making it an optimal and
complete algorithm. An optimal algorithm finds the least-cost solution to a problem,
while a complete algorithm is guaranteed to find a solution whenever one exists.

Algorithm
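A minimal Python sketch of the A* search is given below for reference. It assumes a hypothetical neighbors(n) function that yields (successor, step cost) pairs and an admissible heuristic h(n), and it keeps the usual OPEN list (a priority queue ordered by f = g + h) and CLOSED set.

import heapq

def a_star(start, goal, neighbors, h):
    # OPEN is a priority queue of (f, g, node, path); CLOSED holds expanded nodes
    open_list = [(h(start), 0, start, [start])]
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # node with the smallest f = g + h
        if node == goal:
            return path, g                            # shortest path and its cost
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                heapq.heappush(open_list,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")                         # no path exists

# Example: 5x5 grid, 4-connected moves, Manhattan-distance heuristic
def neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

path, cost = a_star((0, 0), (4, 3), neighbors,
                    h=lambda p: abs(p[0] - 4) + abs(p[1] - 3))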
AO* Search Algorithm
The depth-first search and breadth-first search given earlier for OR trees or graphs can
easily be adapted to AND-OR graphs. The main difference lies in the way the termination
conditions are determined, since all goals following an AND node must be realized,
whereas a single goal node following an OR node will do. For this purpose, the AO*
algorithm is used.

Like the A* algorithm, AO* uses two arrays and one heuristic function.

OPEN: It contains the nodes that have been traversed but have not yet been marked
solvable or unsolvable.

CLOSE: It contains the nodes that have already been processed.

h(n): The estimated cost (distance) from the current node n to the goal node.
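
A simplified Python sketch of AO* on an acyclic AND-OR graph is shown below. The representation is a hypothetical one: graph[node] is a list of connectors, each connector a list of (child, edge cost) pairs that must all be solved (an AND arc), with a single-child connector acting as an OR arc; nodes absent from the graph are terminal. The sketch revises h(n) bottom-up and re-selects the cheapest connector after each revision.

def ao_star(node, graph, h, solution, solved=None):
    # Returns the revised cost of solving `node`; `solution` records the
    # connector chosen at every expanded node; `solved` marks solved nodes.
    if solved is None:
        solved = set()
    connectors = graph.get(node)
    if not connectors:                       # terminal node: cost is its heuristic
        solved.add(node)
        return h.get(node, 0)
    def estimate(conn):                      # estimated cost of one connector
        return sum(c + h.get(child, 0) for child, c in conn)
    while True:
        best = min(connectors, key=estimate)
        # solve every child on the chosen connector (AND semantics)
        cost = sum(c + ao_star(child, graph, h, solution, solved)
                   for child, c in best)
        h[node] = cost                       # revise this node's heuristic
        if best == min(connectors, key=estimate):
            break                            # the choice is stable after revision
    solution[node] = [child for child, _ in best]
    if all(child in solved for child, _ in best):
        solved.add(node)
    return h[node]

# Hypothetical AND-OR graph: A -> B (OR arc) or A -> {C, D} (AND arc)
graph = {'A': [[('B', 1)], [('C', 1), ('D', 1)]],
         'B': [[('E', 1), ('F', 1)]]}
h = {'B': 5, 'C': 2, 'D': 3, 'E': 6, 'F': 4}
solution = {}
ao_star('A', graph, h, solution)   # solution -> {'B': ['E', 'F'], 'A': ['C', 'D']}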
Module 2
Questions
1. Forward vs Backward Reasoning
2. Explain Logic Programming
3. Candidate Elimination
4. Approaches and Issues to knowledge representation
5. Resolution Logic
6. Unification Algorithm

Solutions
1. Forward vs Backward Reasoning
2. Explain Logic Programming

3. Candidate Elimination
4. Approaches and Issues to Knowledge Representation
5. Resolution Logic
6. Unification Algorithm
Module 3
Questions
1. Decision Tree Defn, Algo and Examples
2. Entropy, Information Gain, Overfitting
3. Issues in Decision Tree Learning
4. Logical AND, OR and XOR representation using Perceptron
5. Gradient Descent and Delta Rule
6. Backpropagation Algorithm

Solutions
1. Decision Tree Defn, Algo and Examples
2. Entropy, Information Gain and Overfitting
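For quick reference, the standard definitions used by ID3 (as given in Tom Mitchell's formulation) are:

Entropy(S) = -\sum_{i=1}^{c} p_i \log_2 p_i

Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} \, Entropy(S_v)

where p_i is the proportion of examples in S belonging to class i, and S_v is the subset of S for which attribute A has value v.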
How to avoid overfitting?
It is covered in Issues in Decision Tree Learning (next Question)

3. Issues in Decision Tree Learning

1. Avoiding Overfitting the Data

2. Incorporating Continuous-Valued Attributes

3. Alternative Measures for Selecting Attributes

4. Handling Training Examples with Missing Attribute Values

5. Handling Attributes with Differing Costs

4. Logical AND, OR and XOR representation using Perceptron

5. Gradient Descent and Delta Rule


6. Backpropagation Algorithm
Module 4
Questions
1. Bayes Theorem and Brute Force MAP Learning
2. ML and LS error
3. MDL
4. Naive Bayes Classifier
5. Bayesian Belief Network
6. K-means algorithm
7. EM algorithm

Solutions
1. Bayes Theorem
2. Maximum Likelihood and Least Squared Error
3. Minimum Description Length Principle
4. Naive Bayes Classifier
5. Bayesian Belief Networks
6. K-means and EM algorithm
Module 5
Questions
1. KNN (with numerical)
2. Radial Basis Function
3. Case Based Reasoning
4. Reinforcement Learning
5. Q-learning (Example, Algorithm, Formula etc.)

Solutions
1. KNN Algorithm
2. Radial Basis Function
3. Case Based Reasoning
4. Reinforcement Learning
Reinforcement learning addresses the question of how an autonomous agent that senses and
acts in its environment can learn to choose optimal actions to achieve its goals.
5. Q-learning (Example, Algorithm, Formula etc.)
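The core update rule is Q(s, a) ← Q(s, a) + α [ r + γ max over a' of Q(s', a') − Q(s, a) ]. A minimal tabular sketch is given below; the env object and its reset()/step()/actions() interface are hypothetical stand-ins for whatever environment the numerical example uses.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)                        # Q[(state, action)], initialised to 0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            # epsilon-greedy action selection: explore with probability epsilon
            if random.random() < epsilon:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            # one-step Q-learning update toward r + gamma * max_a' Q(s', a')
            target = r + gamma * max((Q[(s2, a2)] for a2 in env.actions(s2)),
                                     default=0.0)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q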
