
SIR M. VISVESVARAYA INSTITUTE OF TECHNOLOGY


BANGALORE 562 157
DEPARTMENT OF COMPUTER APPLICATIONS
18MCA53 - MACHINE LEARNING
QUESTION BANK
MODULE-1
Introduction
1. Define Machine Learning. Discuss with examples why machine learning is important.

2. Discuss with examples some useful applications of machine learning.

3. Explain some of the areas/disciplines that have influenced machine learning.

4. What do you mean by a well-posed learning problem? Explain the important features
required to define a learning problem well.

5. Define a learning problem for a given task. Describe the following problems with respect to Task (T),
Performance measure (P) and Experience (E):

a. Checkers Learning Problem

b. Handwriting Recognition Problem

c. Robot Driving Learning Problem

6. Describe in detail all the steps involved in designing a learning system.

7. Discuss the perspective and issues in machine learning

8. Define Concept and Concept Learning. With an example, explain how the concept learning task
determines the hypothesis for a given target concept.

9. Discuss concept learning as search with respect to the general-to-specific ordering of hypotheses.

10. Describe the Find-S algorithm. What are the properties and limitations of Find-S?

11. Illustrate the Find-S algorithm over the EnjoySport concept, given training instances.
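For reference when working Q10 and Q11, here is a minimal Python sketch of Find-S. Since the training table is not reproduced above, the four EnjoySport instances from Mitchell's textbook are assumed:

```python
# Find-S: start with the most specific hypothesis and generalize it
# minimally to cover each positive example; negatives are ignored.
# Training set assumed here: the standard EnjoySport data with
# attributes (Sky, AirTemp, Humidity, Wind, Water, Forecast).
examples = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "Yes"),
]

def find_s(examples):
    h = None  # the null (maximally specific) hypothesis
    for x, label in examples:
        if label != "Yes":
            continue  # Find-S ignores negative examples entirely
        if h is None:
            h = list(x)  # first positive example taken verbatim
        else:
            # replace any disagreeing attribute with '?' (don't care)
            h = [hi if hi == xi else "?" for hi, xi in zip(h, x)]
    return h

print(find_s(examples))  # -> ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```

Note that a negative example never changes the hypothesis, which is one of the limitations Q10 asks about: Find-S cannot detect an inconsistent (noisy) training set.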
12. Define Consistent Hypothesis and Version Space. With an example, explain the version space and the
representation of a version space.

13. Describe the List-Then-Eliminate algorithm.

14. Explain the Candidate-Elimination algorithm.

15. Trace the Candidate-Elimination algorithm on the following data.

16. Explain the biased hypothesis space, the unbiased learner, and the futility of bias-free learning.
Describe the three types of learners.

17. What is the role of a function approximation algorithm? How does a learning system estimate training
values and adjust weights while learning?
18. Describe in brief: Version Spaces and the Candidate-Elimination algorithm.

19. Define Inductive Learning Hypothesis.

20. Describe Inductive Systems and Equivalent Deductive Systems

21. Rank the following three types of learners according to their biases:

a. Rote Learner

b. Candidate Elimination Learner

c. Find S Learner.

MODULE-2
Decision Tree Learning
1. Explain the following with examples:
a. Decision Tree,
b. Decision Tree Learning
c. Decision Tree Representation.
2. What are the characteristics of the problems suited for decision tree learning?
3. Explain the concepts of entropy and information gain.
4. What is the procedure for building a decision tree using ID3 with information gain and entropy? Illustrate
with an example.
5. Consider the following set of training examples:

   Instance   Classification   A1   A2
      1             +          T    T
      2             +          T    T
      3             -          T    F
      4             +          F    F
      5             -          F    T
      6             -          F    T

   a. What is the entropy of this collection of training examples with respect to the target function
   classification?
   b. What is the information gain of A2 relative to these training examples?
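A short Python sketch that works Q5 directly from the table above (and illustrates the entropy/gain machinery that ID3 in Q4 relies on):

```python
import math

# Each row is (classification, A1, A2), copied from the table above.
data = [
    ("+", "T", "T"), ("+", "T", "T"), ("-", "T", "F"),
    ("+", "F", "F"), ("-", "F", "T"), ("-", "F", "T"),
]

def entropy(examples):
    """Entropy of a boolean-classified collection: -p log2 p - (1-p) log2 (1-p)."""
    n = len(examples)
    if n == 0:
        return 0.0
    p = sum(1 for c, *_ in examples if c == "+") / n
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(examples, attr_index):
    """Gain = entropy(S) - sum over values v of |Sv|/|S| * entropy(Sv)."""
    gain = entropy(examples)
    n = len(examples)
    for v in set(e[attr_index] for e in examples):
        subset = [e for e in examples if e[attr_index] == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

print(entropy(data))       # 1.0, since the collection is 3 positive / 3 negative
print(info_gain(data, 2))  # 0.0: both A2 partitions are again evenly split
```

The result for part (b) is instructive: splitting on A2 produces subsets {+, +, -, -} and {-, +}, each with entropy 1, so A2 provides zero information gain here.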

6. Discuss Hypothesis Space Search in Decision tree Learning.


7. Discuss Inductive Bias in Decision Tree Learning. Differentiate between two types of biases.

Why prefer Short Hypotheses?


8. What are the issues in decision tree learning? Explain briefly how they can be overcome.
Discuss the following issues in detail:
a. Avoiding overfitting the data in decision trees
b. Incorporating continuous-valued attributes
c. Handling training examples with missing attribute values
d. Handling attributes with different costs

MODULE-3
ARTIFICIAL NEURAL NETWORKS

1. What is Artificial Neural Network?


2. Explain the characteristics of problems appropriate for neural network learning.
3. Explain the concept of a Perceptron with a neat diagram.
4. Explain the single perceptron with its learning algorithm.
5. How can a single perceptron be used to represent Boolean functions such as
AND and OR?
6. Design a two-input perceptron that implements the Boolean function A ∧ ¬B.
Design a two-layer network of perceptrons that implements A XOR B.
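A sketch of one possible answer to Q5 and Q6. The weight values chosen here are just one valid assignment (many others work):

```python
# A threshold perceptron: output 1 if w0 + w1*x1 + w2*x2 > 0, else 0.
def perceptron(w0, w1, w2):
    return lambda x1, x2: 1 if w0 + w1 * x1 + w2 * x2 > 0 else 0

AND_unit   = perceptron(-1.5, 1.0, 1.0)   # fires only when both inputs are 1
OR_unit    = perceptron(-0.5, 1.0, 1.0)   # fires when at least one input is 1
A_AND_NOTB = perceptron(-0.5, 1.0, -1.0)  # implements A ∧ ¬B

# XOR is not linearly separable, so no single perceptron represents it.
# A two-layer network does, using the identity
#   A XOR B = (A ∨ B) ∧ ¬(A ∧ B)
def xor_net(a, b):
    return perceptron(-0.5, 1.0, -1.0)(OR_unit(a, b), AND_unit(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, A_AND_NOTB(a, b), xor_net(a, b))
```

Checking all four input combinations confirms the truth tables: A_AND_NOTB fires only for (1, 0), and xor_net fires for (0, 1) and (1, 0).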
7. Consider two perceptrons defined by the threshold expression w0 + w1x1 + w2x2 > 0.
Perceptron A has weight values w0 = 1, w1 = 2, w2 = 1 and perceptron B has weight values
w0 = 0, w1 = 2, w2 = 1.
True or false? Perceptron A is more general than perceptron B.
8. Write a note on (i) Perceptron Training Rule (ii) Gradient Descent and Delta Rule
9. Write Gradient Descent algorithm for training a linear unit.
10. Derive the Gradient Descent Rule
11. Write Stochastic Gradient Descent algorithm for training a linear unit.
12. Differentiate between Gradient Descent and Stochastic Gradient Descent
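The contrast asked for in Q12 is easiest to see in code. A minimal sketch of delta-rule training of a linear unit, on an assumed toy dataset (y = 2x + 1): batch gradient descent sums the gradient over all examples before each weight update, while the stochastic version updates after every example:

```python
# Toy data assumed for illustration: targets follow y = 2x + 1 exactly.
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
eta = 0.05  # learning rate

def batch_gd(w, steps):
    w0, w1 = w
    for _ in range(steps):
        # accumulate the error gradient over ALL examples, then update once
        g0 = sum((y - (w0 + w1 * x)) for x, y in data)
        g1 = sum((y - (w0 + w1 * x)) * x for x, y in data)
        w0 += eta * g0
        w1 += eta * g1
    return w0, w1

def stochastic_gd(w, steps):
    w0, w1 = w
    for _ in range(steps):
        # update after EACH example: an incremental approximation of the above
        for x, y in data:
            err = y - (w0 + w1 * x)
            w0 += eta * err
            w1 += eta * err * x
    return w0, w1

print(batch_gd((0.0, 0.0), 500))       # converges near (1.0, 2.0)
print(stochastic_gd((0.0, 0.0), 500))  # also converges near (1.0, 2.0)
```

Both recover the true weights here, but the stochastic version needs only one example per update, which is why it scales better to large training sets and can sometimes escape shallow local minima.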
13. Write Stochastic Gradient Descent version of the Back Propagation algorithm
for feedforward networks containing two layers of sigmoid units.
14. Derive the Back Propagation Rule
15. Explain the following w.r.t. the Back Propagation algorithm:
a. Convergence and Local Minima
b. Representational Power of Feedforward Networks
c. Hypothesis Space Search and Inductive Bias
d. Hidden Layer Representations
e. Generalization, Overfitting, and Stopping Criterion

MODULE-4
Bayesian Learning
1. Define (i) Prior Probability (ii) Conditional Probability (iii) Posterior Probability
2. State Bayes' theorem. What are the relevance and features of Bayes' theorem?
3. Explain the practical difficulties of applying Bayes' theorem.
4. Consider a medical diagnosis problem in which there are two alternative hypotheses:
a. that the patient has a particular form of cancer (+), and
b. that the patient does not (-).
A patient takes a lab test and the result comes back positive. The test returns a correct
positive result in only 98% of the cases in which the disease is actually present, and a
correct negative result in only 97% of the cases in which the disease is not present.
Furthermore, 0.008 of the entire population have this cancer.
Determine whether the patient has cancer or not using the MAP hypothesis.
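A worked sketch of Q4 using the numbers stated in the question (note the false-positive rate is 1 - 0.97 = 0.03):

```python
p_cancer  = 0.008   # prior P(cancer)
p_no      = 0.992   # prior P(no cancer)
p_pos_c   = 0.98    # P(+ | cancer)
p_pos_noc = 0.03    # P(+ | no cancer) = 1 - 0.97

# MAP compares the Bayes numerators; the evidence P(+) cancels out.
score_cancer = p_pos_c * p_cancer    # 0.98 * 0.008 = 0.00784
score_no     = p_pos_noc * p_no      # 0.03 * 0.992 = 0.02976

h_map = "cancer" if score_cancer > score_no else "no cancer"
print(h_map)  # -> no cancer
```

Even though the test is accurate, the disease is so rare that the MAP hypothesis is that the patient does not have cancer: 0.02976 > 0.00784.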
5. Explain Brute force Bayes Concept Learning
6. Define MAP hypothesis. Derive the relation for hMAP using Bayesian theorem.
7. What are Consistent Learners?
8. Discuss Maximum Likelihood and Least Square Error Hypothesis.
9. Describe Maximum Likelihood Hypothesis for predicting probabilities.
10. Describe the concept of MDL. Obtain the equation for hMDL
11. What is conditional Independence?
12. Explain Naïve Bayes Classifier with an Example.
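A minimal sketch of the classifier Q12 asks about, for categorical attributes; the tiny PlayTennis-style training set below is assumed purely for illustration:

```python
from collections import Counter

# Toy data assumed: predict PlayTennis from (Outlook, Wind).
train = [
    (("Sunny", "Weak"),      "No"),
    (("Sunny", "Strong"),    "No"),
    (("Overcast", "Weak"),   "Yes"),
    (("Rain", "Weak"),       "Yes"),
    (("Rain", "Strong"),     "No"),
    (("Overcast", "Strong"), "Yes"),
]

def naive_bayes(train, x):
    labels = Counter(label for _, label in train)
    best, best_score = None, -1.0
    for label, count in labels.items():
        score = count / len(train)  # prior P(label)
        rows = [feats for feats, l in train if l == label]
        for i, v in enumerate(x):
            # conditional-independence assumption: multiply the
            # per-attribute likelihoods P(attribute = v | label)
            score *= sum(1 for f in rows if f[i] == v) / len(rows)
        if score > best_score:
            best, best_score = label, score
    return best

print(naive_bayes(train, ("Sunny", "Weak")))  # -> No
```

One caveat worth raising in an answer: a zero count (here, P(Sunny | Yes) = 0) forces the whole product to zero, which is why the m-estimate smoothing discussed in the textbook is used in practice.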
13. Explain the Gradient Search to Maximize Likelihood in a neural Net.
14. What are Bayesian Belief nets? Where are they used?
15. Explain Bayesian belief network and conditional independence with example.
16. Explain the concept of EM Algorithm. Discuss on Gaussian Mixtures.
MODULE-5
Evaluating Hypotheses
1. Explain the two key difficulties that arise while estimating the Accuracy of Hypothesis.
2. Define the following terms a. Sample error b. True error c. Random Variable d. Expected
value e. Variance f. standard Deviation
3. Explain Binomial Distribution with an example.
4. Explain Normal or Gaussian distribution with an example.
5. Suppose hypothesis h commits r = 10 errors over a sample of n = 65 independently drawn
examples.
a. What are the variance and standard deviation for the estimate of the true error rate errorD(h)?
b. What is the 90% confidence interval (two-sided) for the true error rate?
c. What is the 95% one-sided interval (i.e., what is the upper bound U such that errorD(h) ≤
U with 95% confidence)?
d. What is the 90% one-sided interval?
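A worked sketch of Q5. The z-values are the usual ones from the Normal table; note that a 95% one-sided bound uses the same z as a 90% two-sided interval:

```python
import math

r, n = 10, 65
e = r / n                            # sample error, about 0.154
var = e * (1 - e) / n                # estimated variance of the error rate
sigma = math.sqrt(var)               # standard deviation, about 0.045

z90_two, z95_one, z90_one = 1.64, 1.64, 1.28  # Normal-table z-values

print("sample error:", round(e, 3), "variance:", round(var, 5),
      "std dev:", round(sigma, 3))
print("90% two-sided interval:",
      (round(e - z90_two * sigma, 3), round(e + z90_two * sigma, 3)))
print("95% one-sided upper bound:", round(e + z95_one * sigma, 3))
print("90% one-sided upper bound:", round(e + z90_one * sigma, 3))
```

This gives a sample error of about 0.154, a standard deviation of about 0.045, a 90% two-sided interval of roughly (0.080, 0.227), and one-sided upper bounds of about 0.227 (95%) and 0.211 (90%).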

6. What is instance-based learning? Explain the key features and disadvantages of these methods.
7. Explain the k-nearest neighbour algorithm for approximating a discrete-valued function,
with pseudocode.
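The pseudocode asked for in Q7 can be sketched concretely as follows; the five training points and the query are assumed toy values:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    `train` is a list of (feature_tuple, label) pairs; Euclidean distance."""
    by_dist = sorted(train, key=lambda ex: math.dist(ex[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy data assumed: two clusters labelled A and B.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.0), "B"), ((4.2, 3.9), "B"), ((3.8, 4.1), "B")]
print(knn_classify(train, (1.1, 1.0), k=3))  # -> A
```

For the continuous-valued variant in Q8, the majority vote is replaced by the mean (or a distance-weighted mean) of the k neighbours' target values.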
8. Describe the k-nearest neighbour learning algorithm for a continuous (real-valued) target function.
9. Discuss the major drawbacks of the k-nearest neighbour learning algorithm and how they can be
corrected.
10. Define the following terms with respect to k-nearest neighbour learning:
i) Regression ii) Residual iii) Kernel Function
11. Explain Locally Weighted Linear Regression.
12. Explain radial basis functions.
13. Explain the CADET system using case-based reasoning.
14. What is reinforcement learning? Explain the reinforcement learning problem with a neat
diagram.
15. Write the characteristics of the reinforcement learning problem.
16. Explain the Q function and the Q-learning algorithm, assuming deterministic rewards and
actions, with an example.
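A minimal sketch for Q16: deterministic Q-learning on an assumed toy environment, a four-state corridor where entering the rightmost goal state earns reward 100 and every other transition earns 0:

```python
import random

GAMMA = 0.9
N_STATES, GOAL = 4, 3
ACTIONS = (-1, +1)  # move left, move right

def step(s, a):
    """Deterministic environment: clamp to the corridor; reward 100 on
    first entering the goal state, 0 otherwise."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (100 if s2 == GOAL and s != GOAL else 0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)
for _ in range(200):  # training episodes with random exploration
    s = random.randrange(N_STATES - 1)  # start anywhere except the goal
    while s != GOAL:
        a = random.choice(ACTIONS)
        s2, r = step(s, a)
        # deterministic update rule: Q(s,a) <- r + gamma * max_a' Q(s',a')
        Q[(s, a)] = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        s = s2

print(Q[(2, +1)])  # -> 100.0 (immediate reward; Q at the goal stays 0)
print(Q[(1, +1)])  # about 90.0, i.e. gamma * 100
```

The learned values show the characteristic discounting pattern: each step farther from the goal multiplies the optimal Q value by gamma.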
