1. What is machine learning? Explain how supervised learning differs from unsupervised learning.
2. Describe two methods for reducing dimensionality.
3. Explain the steps in developing a machine learning application.
4. What is a support vector machine (SVM)? How is the margin computed?
5. Write short notes on:
a. Logistic regression
b. Machine learning applications
c. Issues in machine learning
d. Hierarchical clustering algorithms
e. Radial basis functions
f. Soft margin SVM
g. Overfitting in machine learning
6. Define machine learning. Briefly explain the types of learning.
7. What are the issues in decision tree induction?
8. What are the requirements of clustering algorithms?
9. What are the steps in designing a machine learning problem? Explain with the checkers
problem.
10. What is the goal of the Support Vector Machine (SVM)? How is the margin computed?
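
As a worked note on the margin computation asked in questions 4 and 10 (added as an aid, using the
standard scaling convention, which is assumed rather than stated in the questions):

% For a separating hyperplane w . x + b = 0 with labels y_i in {-1, +1}, scaled so that the
% closest points satisfy y_i (w . x_i + b) = 1:
\[
  \text{distance of } x_i \text{ from the hyperplane} = \frac{y_i\,(w \cdot x_i + b)}{\lVert w \rVert},
  \qquad
  \text{margin} = \frac{2}{\lVert w \rVert}.
\]
% Maximising the margin is therefore equivalent to minimising (1/2)||w||^2 subject to
% y_i (w . x_i + b) >= 1 for all i.
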
11. What is the role of radial basis functions in separating non-linear patterns?
12. “Entropy is a thermodynamic function used to measure the disorder of a system in
Chemistry.” How would you interpret the concept of entropy in machine learning?
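
To connect question 12 to its use in ML (entropy as expected information, as used in decision tree
induction), a minimal Python sketch; the class counts and the split below are made up for illustration:

from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(S) = -sum(p_i * log2(p_i)) over the class proportions."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy reduction achieved by splitting `parent` into `children`."""
    weighted = sum(len(ch) / len(parent) * entropy(ch) for ch in children)
    return entropy(parent) - weighted

# Illustrative example: 9 positive and 5 negative examples split by some attribute.
parent = ["+"] * 9 + ["-"] * 5
left, right = ["+"] * 6 + ["-"] * 1, ["+"] * 3 + ["-"] * 4
print(entropy(parent))                           # about 0.940 bits
print(information_gain(parent, [left, right]))   # about 0.151 bits
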
13. Use the k-means clustering algorithm and Euclidean distance to cluster the following eight
examples into three clusters:
A1= (2,10), A2= (2,5), A3= (8,4), A4= (5,8), A5= (7,5), A6= (6,4), A7= (1,2), A8= (4,9). Recompute
the centroid each time a new point enters a cluster. Assume the initial cluster centres are
A1, A4, and A7.
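
A minimal Python sketch for question 13, assuming the sequential reading of "recompute the centroid
at every new point entry": each cluster is seeded with one of the given centres, the remaining points
are assigned one at a time to the nearest centroid, and the receiving cluster's centroid is updated
immediately. The expected exam answer may additionally run full batch iterations afterwards.

import numpy as np

points = {
    "A1": (2, 10), "A2": (2, 5), "A3": (8, 4), "A4": (5, 8),
    "A5": (7, 5), "A6": (6, 4), "A7": (1, 2), "A8": (4, 9),
}

# Seed each cluster with one of the stated initial centres A1, A4, A7.
clusters = {"c1": ["A1"], "c2": ["A4"], "c3": ["A7"]}
centroids = {c: np.array(points[members[0]], dtype=float) for c, members in clusters.items()}

# Assign the remaining points one at a time, recomputing the centroid of the
# receiving cluster after every insertion.
for name in ["A2", "A3", "A5", "A6", "A8"]:
    p = np.array(points[name], dtype=float)
    nearest = min(centroids, key=lambda c: np.linalg.norm(p - centroids[c]))
    clusters[nearest].append(name)
    centroids[nearest] = np.mean([points[m] for m in clusters[nearest]], axis=0)
    print(name, "->", nearest, "new centroid:", centroids[nearest])

print(clusters)
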
14. Compare and contrast Linear and Logistic Regression with respect to their mechanisms of
prediction.
15. Find the optimal hyperplane for the following points: {(1,1), (2,1), (1, -1), (2, -1), (4,0), (5,1),
(6,0)}.
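
Question 15 does not state class labels; a common reading is that the first four points form one class
and the last three the other, which makes the data separable on the first coordinate. Under that
assumption, a minimal scikit-learn sketch that recovers the maximum-margin hyperplane (expected to be
close to x1 = 3):

import numpy as np
from sklearn.svm import SVC

X = np.array([(1, 1), (2, 1), (1, -1), (2, -1), (4, 0), (5, 1), (6, 0)], dtype=float)
y = np.array([-1, -1, -1, -1, 1, 1, 1])       # assumed labels; not given in the question

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print("hyperplane: %.2f*x1 + %.2f*x2 + %.2f = 0" % (w[0], w[1], b))
print("margin width:", 2 / np.linalg.norm(w))
print("support vectors:", clf.support_vectors_)
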
16. Explain the key terminologies of Support Vector Machines.
17. Explain concepts behind Linear Regression.
18. What are the key tasks of Machine Learning?
19. Explain the steps required for selecting the right machine learning algorithm.
20. Apply the k-means algorithm to the given data for k=2. Use C1(2, 4) and C2(6, 3) as the initial
cluster centres.
Data: a (2,4), b (3,3), c (5,5), d (6,3), e (4,3), f (6,6)
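
A quick cross-check sketch for question 20 with scikit-learn, assuming point d is (6, 3) and passing
the stated initial centres explicitly (a hand-worked answer should arrive at the same final centres):

import numpy as np
from sklearn.cluster import KMeans

X = np.array([(2, 4), (3, 3), (5, 5), (6, 3), (4, 3), (6, 6)], dtype=float)
init = np.array([(2, 4), (6, 3)], dtype=float)   # C1 and C2 from the question

km = KMeans(n_clusters=2, init=init, n_init=1).fit(X)
print("labels:", km.labels_)                 # cluster index for a..f in the given order
print("final centres:", km.cluster_centers_)
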
21. Define support vector machine (SVM) and further explain the concept of maximum margin linear
separators.
22. Explain in detail Principal Component Analysis for Dimension Reduction.
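
A minimal numpy sketch of PCA for dimension reduction (centre the data, eigendecompose the covariance
matrix, project onto the leading components); the toy data below is made up:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # toy data: 100 samples, 3 features
X[:, 2] = 2 * X[:, 0] + 0.1 * rng.normal(size=100)     # make one feature nearly redundant

Xc = X - X.mean(axis=0)                  # 1. centre the data
cov = np.cov(Xc, rowvar=False)           # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # 3. eigendecomposition (ascending eigenvalues)
order = np.argsort(eigvals)[::-1]        # 4. rank components by explained variance
components = eigvecs[:, order[:2]]       # 5. keep the top two principal components
Z = Xc @ components                      # 6. project onto the reduced 2-D space

print("explained variance ratio:", eigvals[order][:2] / eigvals.sum())
print("reduced shape:", Z.shape)         # (100, 2)
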
23. Explain procedure to construct decision trees.
24. Explain how a support vector machine can be used to find the optimal hyperplane to classify linearly
separable data. Give a suitable example.
25. What is a decision tree? How will you choose the best attribute for a decision tree classifier? Give a
suitable example.
26. Explain the K-means algorithm with a suitable example. Also, explain how K-means clustering
differs from hierarchical clustering.
27. What is a kernel? How can a kernel be used with an SVM to classify non-linearly separable data?
Also, list the standard kernel functions.
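
A minimal sketch of the kernel trick for question 27: XOR-style data is not linearly separable, but an
SVM with an RBF (Gaussian) kernel implicitly maps it to a space where a linear separator exists.
Standard kernel choices are noted in the comments.

import numpy as np
from sklearn.svm import SVC

# XOR-style data: no single straight line separates the two classes.
X = np.array([(0, 0), (1, 1), (0, 1), (1, 0)], dtype=float)
y = np.array([0, 0, 1, 1])

# Standard kernels: 'linear', 'poly' (polynomial), 'rbf' (Gaussian), 'sigmoid'.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
print(clf.predict(X))   # expected to reproduce y: [0 0 1 1]

# For comparison, a linear kernel cannot separate XOR exactly.
print(SVC(kernel="linear", C=10.0).fit(X, y).predict(X))
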
28. Explain the Expectation Maximization Algorithm (EMA).
29. Explain kernel functions and kernel trick.
30. You are given a dataset on cancer detection. You have built a classification model and achieved
an accuracy of 96%. Why shouldn't you be happy with your model's performance? What can
you do about it?
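
A minimal sketch of why 96% accuracy can be misleading for question 30 when the classes are heavily
imbalanced, and which metrics to inspect instead (confusion matrix, precision, recall); the class ratio
below is made up for illustration:

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

# Imbalanced ground truth: 4% positive (cancer), 96% negative.
y_true = np.array([1] * 40 + [0] * 960)
# A useless model that always predicts "no cancer" still scores 96% accuracy.
y_pred = np.zeros_like(y_true)

print("accuracy :", accuracy_score(y_true, y_pred))                     # 0.96
print("confusion:\n", confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, zero_division=0))   # 0.0
print("recall   :", recall_score(y_true, y_pred))                       # 0.0, misses every cancer case
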
31. You find that your model is suffering from low bias and high variance. Which algorithm
should you use to tackle it? Why?
