
Course Code: CSE611    Machine Learning    L T P C: 3 0 2 4
Version No. 1.10
Objective: To introduce how learning can be mimicked through algorithms, and to learn from data in a
supervised/unsupervised manner so as to facilitate decision making.
Expected Outcome:

On completion of this course the student would be able to:

• Understand the principles, advantages, limitations and possible applications of machine learning.

• Identify and apply the appropriate machine learning techniques for classification, pattern
recognition, optimization and decision making.
Unit No. 1 Introduction 7 hours

Learning Problems, Perspectives and Issues, Generative vs. Discriminative Models, Concept Learning,
Version Spaces and Candidate Elimination, Inductive Bias, Finite and Infinite Hypothesis Spaces,
PAC Learning, Mistake Bound Model, Minimum Description Length Principle

Unit No. 2 Supervised Learning 10 hours

Decision Trees - CART, Multiple Linear Regression, Logistic regression, Perceptron, Multilayer
Perceptron, RBF Networks, Linear Discriminant Analysis, Support vector machines: Linear and Non-
Linear, Ensemble methods: Bagging, Boosting, Stacking
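As a minimal illustration of one Unit 2 topic, the sketch below trains a single perceptron on the logical AND function. The dataset, learning rate, and epoch count are illustrative assumptions for a toy exercise, not part of the syllabus:

```python
# Perceptron sketch: learn the logical AND function with the classic update rule.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Perceptron rule: w += lr * (y - y_hat) * x, applied per sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            y_hat = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - y_hat
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Threshold the learned linear function at zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]          # AND is linearly separable, so the rule converges
w, b = train_perceptron(X, y)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating line; XOR, by contrast, would never converge, which motivates the multilayer perceptron listed above.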

Unit No. 3 Unsupervised & Semi-Supervised Learning 10 hours


K-means clustering, Fuzzy K-means clustering, Self-Organizing Map, Expectation Maximization, Gaussian
Mixture Models, Factor analysis, Principal components analysis (PCA), Locally Linear Embedding (LLE),
Semi-Supervised: HMRF-KMeans, S3VMs
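To illustrate one Unit 3 topic, here is a sketch of Lloyd's algorithm for k-means with k = 2 on one-dimensional data; the data points and initial centres are illustrative assumptions:

```python
# K-means sketch (Lloyd's algorithm, k = 2, one-dimensional data).
def kmeans_1d(points, c1, c2, iters=10):
    """Alternate assignment and mean-update steps until iters runs out."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centre.
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Update step: each centre moves to the mean of its cluster.
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return c1, c2

data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]   # two well-separated groups
c1, c2 = kmeans_1d(data, 0.0, 5.0)
```

On this toy data the centres settle near 1.0 and 10.0, the means of the two groups; fuzzy k-means and Gaussian mixtures (also in this unit) soften the hard assignment step into weighted memberships.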

Unit No. 4 Probabilistic & Evolutionary Learning 9 hours

Bayesian Learning, Bayes Optimal Classifier, Naive Bayes Classifier, Bayesian Belief Network, Hidden
Markov Models, Evolutionary Learning: Genetic Algorithms, Ant colony optimization
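As a sketch of the Naive Bayes classifier from Unit 4, the code below classifies token lists with add-one (Laplace) smoothing; the toy "spam"/"ham" corpus is an illustrative assumption:

```python
from collections import Counter
from math import log

# Naive Bayes sketch with Laplace smoothing on a toy spam/ham corpus.
def train_nb(docs):
    """docs: list of (label, tokens). Returns counts needed at prediction time."""
    labels = Counter(lbl for lbl, _ in docs)   # class frequencies (priors)
    words = Counter()                          # (label, word) counts
    totals = Counter()                         # total words per label
    vocab = set()
    for lbl, toks in docs:
        for t in toks:
            words[(lbl, t)] += 1
            totals[lbl] += 1
            vocab.add(t)
    return labels, words, totals, vocab, len(docs)

def classify(model, tokens):
    """Pick the label maximising log P(label) + sum of log P(word | label)."""
    labels, words, totals, vocab, n = model
    best, best_score = None, float("-inf")
    for lbl in labels:
        score = log(labels[lbl] / n)
        for t in tokens:
            # Add-one smoothing avoids zero probabilities for unseen words.
            score += log((words[(lbl, t)] + 1) / (totals[lbl] + len(vocab)))
        if score > best_score:
            best, best_score = lbl, score
    return best

docs = [("spam", ["win", "money", "now"]),
        ("spam", ["win", "prize"]),
        ("ham", ["meeting", "tomorrow"]),
        ("ham", ["project", "meeting", "notes"])]
model = train_nb(docs)
```

Working in log space avoids numeric underflow when many word probabilities are multiplied, which is why the score accumulates `log` terms rather than raw products.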

Unit No. 5 Other Learning Paradigms 9 hours

Instance based Learning: k-nearest neighbour classification, Locally weighted Regression, Reinforcement
Learning: Q-Learning, Temporal Difference Learning, Deep Learning: Deep belief networks, Design and
Analysis of Machine Learning Experiments
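To illustrate instance-based learning from Unit 5, the sketch below implements k-nearest-neighbour classification with squared Euclidean distance and majority voting; the 2-D training points are illustrative assumptions:

```python
from collections import Counter

# k-nearest-neighbour sketch: no training phase, just distance and voting.
def knn_predict(train, query, k=3):
    """train: list of ((x, y), label); returns the majority label of the
    k training points closest to query."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(lbl for _, lbl in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
```

The absence of a training step is the defining trait of instance-based ("lazy") learning; locally weighted regression, also in this unit, replaces the majority vote with a distance-weighted fit around the query point.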
REFERENCES
1. Ethem Alpaydin, "Introduction to Machine Learning", MIT Press / Prentice Hall of India, 2005.
2. Tom Mitchell, "Machine Learning", McGraw Hill, 3rd Edition, 1997.
3. Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
4. Kevin P. Murphy, "Machine Learning: A Probabilistic Perspective", The MIT Press, 2012.
5. Olivier Chapelle, Bernhard Schölkopf, Alexander Zien (Eds.), "Semi-Supervised Learning", MIT Press, 2006.
Machine Learning Laboratory

S.No. New Indicative List of Experiments (in the areas of)


1. Decision Tree learning

2. Implement Logistic Regression

3. Implement classification using Multilayer perceptron

4. Implement classification using SVM

5. Implement Boosting and Bagging for Ensemble Learning

6. Implement K-means Clustering to Find Natural Patterns in Data

7. Implement Principal Component Analysis for Dimensionality Reduction

8. Maximum Likelihood Estimation of Gaussian Mixtures Using the Expectation-Maximization Algorithm

9. Estimate Hidden Markov Model Parameters


10. Implement Genetic algorithms

11. Implement K-nearest Neighbors
