
Course Description:

Machine Learning is concerned with computer programs that automatically improve
their performance through experience (e.g., programs that learn to recognize human
faces, recommend music and movies, and drive autonomous robots).
This course covers the theory and practical algorithms for machine learning from a
variety of perspectives. We cover topics such as Bayesian networks, decision tree
learning, support vector machines, statistical learning methods, unsupervised
learning, and reinforcement learning.
The course covers theoretical concepts such as inductive bias, the PAC learning
framework, Bayesian learning methods, margin-based learning, and Occam's Razor.
Short programming assignments include hands-on experiments with various learning
algorithms.
This course is designed to give a graduate-level student a thorough grounding in
the methodologies, technologies, mathematics, and algorithms currently needed by
people who do research in machine learning.

Prerequisites:

Students are expected to have a working knowledge of probability, linear algebra,
statistics, and algorithms, though the class has been designed to allow students
with a strong quantitative background to catch up and fully participate.

Textbooks:

Machine Learning, Tom Mitchell. (optional)
Pattern Recognition and Machine Learning, Christopher Bishop. (optional)
The Elements of Statistical Learning: Data Mining, Inference and Prediction, Trevor
Hastie, Robert Tibshirani, Jerome Friedman. (optional)
Machine Learning: A Probabilistic Perspective, Kevin P. Murphy. (optional)
