
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

MACHINE LEARNING QUIZ – I


SET – II

1. Machine learning is [ ]
a. An algorithm that can learn
b. A sub-discipline of computer science that deals with the design and implementation of
learning algorithms
c. An approach that abstracts from the actual strategy of an individual algorithm and can
therefore be applied to any other form of machine learning
d. None of the above
2. The Bayesian classifier is [ ]
a. A class of learning algorithm that tries to find an optimum classification of a set of
examples using probabilistic theory
b. Any mechanism employed by a learning system to constrain the search space of
hypothesis
c. An approach to the design of learning algorithms that is inspired by the fact that when
people encounter new situations, they often explain them by reference to familiar
experiences, adapting the explanations to fit the new situation
d. None of the above
3. The most widely used metrics and tools to assess a classification model are: [ ]
a. Confusion Matrix
b. Cost-sensitive accuracy
c. Area Under the ROC curve
d. All of the above
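
Note on Q3 (illustrative, not part of the quiz): a minimal sketch of computing a confusion matrix and the area under the ROC curve with scikit-learn, assuming it is installed; the labels and scores below are made up.

    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Hypothetical ground-truth labels and model outputs, for illustration only.
    y_true  = [0, 0, 1, 1, 1, 0]
    y_pred  = [0, 1, 1, 1, 0, 0]              # hard class predictions
    y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1]  # predicted probabilities for class 1

    print(confusion_matrix(y_true, y_pred))   # rows: actual class, columns: predicted class
    print(roc_auc_score(y_true, y_score))     # area under the ROC curve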
4. What is the purpose of performing cross-validation? [ ]
a. To assess the predictive performance of the models
b. To judge how the trained model performs outside the sample on test data
c. Both a and b
d. None of the above
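
Note on Q4 (illustrative, not part of the quiz): a minimal cross-validation sketch, assuming scikit-learn is available; the dataset and model are arbitrary choices.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    # 5-fold cross-validation: each fold is held out once as test data,
    # so the scores estimate performance outside the training sample.
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
    print(scores.mean(), scores.std())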
5. In the one-versus-one scheme, the number of binary classifiers we train is [ ]
a. (k + 1) / 2
b. k (k – 1) / 2
c. (k – 1) / 2
d. k (k + 1) / 2
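
Note on Q5 (illustrative, not part of the quiz): one-versus-one trains one binary classifier per unordered pair of classes, so k classes require k(k - 1)/2 classifiers (e.g. k = 4 gives 6); a quick check in Python:

    from itertools import combinations

    k = 4
    pairs = list(combinations(range(k), 2))   # one binary classifier per pair of classes
    print(len(pairs), k * (k - 1) // 2)       # both print 6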
6. In predictive clustering, the cluster boundaries (a set of straight lines) form a [ ]
a. Voronoi diagram
b. ER diagram
c. Venn diagram
d. Tree diagram
7. In which of the following cases will k-means clustering fail to give good results? 1) Data points with
outliers 2) Data points with different densities 3) Data points with nonconvex shapes. [ ]
a. 1 and 2
b. 1 and 3
c. 1, 2 and 3
d. 2 and 3
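
Note on Q7 (illustrative, not part of the quiz): a small sketch, assuming scikit-learn, of k-means on non-convex data; the straight (Voronoi) cluster boundaries it induces cannot follow the two-moons shape.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_moons
    from sklearn.metrics import adjusted_rand_score

    X, y = make_moons(n_samples=300, noise=0.05, random_state=0)  # non-convex clusters
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(adjusted_rand_score(y, labels))  # well below 1.0: the clusters are recovered poorly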
8. Which of the following are the advantages of Decision trees? [ ]
a. Possible scenarios can be added
b. Use a white-box model: if a given result is provided by the model, the explanation is easy to reproduce
c. Worst, best and expected values can be determined for different scenarios
d. All of the above
9. Which of the following is a disadvantage of decision trees? [ ]
a. Factor analysis
b. Decision trees are robust to outliers
c. Decision trees are prone to overfitting
d. None of the above
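
Note on Q9 (illustrative, not part of the quiz): a small sketch, assuming scikit-learn, of the overfitting tendency; an unconstrained tree typically fits the training data perfectly while scoring lower on held-out data.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)   # no depth limit
    print(tree.score(X_tr, y_tr), tree.score(X_te, y_te))           # train about 1.0, test lower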
10. Given a rule of the form IF X THEN Y, rule confidence is defined as the conditional probability that [ ]
a. Y is false when X is known to be false
b. Y is true when X is known to be true
c. X is true when Y is known to be true
d. X is false when Y is known to be false
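
Note on Q10 (illustrative, not part of the quiz): the confidence of IF X THEN Y is usually estimated as support(X and Y) / support(X), i.e. the conditional probability of Y given X; a toy computation over made-up transactions:

    # Hypothetical transactions, for illustration only.
    transactions = [{"x", "y"}, {"x", "y"}, {"x"}, {"y"}, {"x", "y", "z"}]

    support_x  = sum("x" in t for t in transactions)
    support_xy = sum({"x", "y"} <= t for t in transactions)
    confidence = support_xy / support_x   # P(Y is true | X is true) = 3/4
    print(confidence)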
11. The following is a supervised learning task used for descriptive modeling [ ]
a. Classification
b. Regression
c. Subgroup discovery
d. Association rule discovery
12. The following is an example of a grouping model [ ]
a. Decision tree
b. Random forest
c. Support vector machines
d. None of the above
13. Incorrectly classified negatives are called [ ]
a. True positives
b. True negatives
c. False positives
d. False negatives
14. The term ‘Error Rate’ can be defined as [ ]
a. Correctly classified instances / total number of instances
b. Incorrectly classified instances / total number of instances
c. Correctly classified instances / incorrectly classified instances
d. Incorrectly classified instances / correctly classified instances
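
Note on Q14 (illustrative, not part of the quiz): error rate is the fraction of incorrectly classified instances, i.e. 1 - accuracy; a toy computation over made-up labels:

    # Hypothetical labels, for illustration only.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

    errors = sum(t != p for t, p in zip(y_true, y_pred))
    error_rate = errors / len(y_true)   # 2 / 8 = 0.25, equal to 1 - accuracy
    print(error_rate)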
15. A frequent itemset is an [ ]
a. Itemset whose support is greater than or equal to user defined support threshold
b. Itemset whose support is less than or equal to user defined support threshold
c. Both a and b
d. None of the Above
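
Note on Q15 (illustrative, not part of the quiz): an itemset is frequent when its support (the count or fraction of transactions containing it) meets or exceeds a user-defined threshold; a toy check over made-up transactions:

    # Hypothetical transactions and threshold, for illustration only.
    transactions = [{"bread", "milk"}, {"bread", "butter"},
                    {"bread", "milk", "butter"}, {"milk"}]
    min_support = 0.5

    itemset = {"bread", "milk"}
    support = sum(itemset <= t for t in transactions) / len(transactions)  # 2/4 = 0.5
    print(support >= min_support)   # True: the itemset is frequent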
16. The following is an example of the First Order Logic [ ]
a. ∀x : BodyPart(x, PairOf(Gill)) → Fish(x)
b. fish(X):-bodyPart(X,pairOf(gills))
c. fish(X):-bodyPart(X,pairOf(Y)), Y=Gills
d. None of the Above
17. Normalized coverage plots are referred to as [ ]
a. RIC Plots
b. RAC Plots
c. ROC Plots
d. None of the Above
18. A concept is consistent if [ ]
a. it covers none of the positive examples.
b. it covers none of the negative examples.
c. Both a and b
d. None of the above
19. The second approach treats collections of rules as unordered [ ]
a. Rule set
b. Rule list
c. Subgroup
d. None of the above
20. Variance can be estimated as a [ ]
a. Incorrect Predictions in the test data set
b. Zero training error
c. Result of variations in the training set
d. None of the above
