
Semester-VI

Course Code AIM703

Course Name Advanced AI

Credits 5

Pre-Requisites Fundamentals of AI, ML, and DNNs

L-T-P-C 3-1-2-5

COURSE OBJECTIVE





LECTURES WITH BREAKUP

Module-I (3 Lectures)

Introduction: understanding data (audio, text, video); how AI algorithms can be applied to these domains.
Data to features: feature engineering.
Finding features, Feature Engineering for Text Data, Feature Extraction and Learning for Visual Data, Feature-Based Time-Series Analysis, Feature Engineering for Data Streams, Feature Generation and Feature Engineering for Sequences, Feature Generation for Graphs and Networks, Automating Feature Engineering in Supervised Learning, Pattern-Based Feature Generation.
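The sketch below is illustrative only, not prescribed course material: it shows feature engineering for text data with scikit-learn's TfidfVectorizer; the corpus, max_features, and ngram_range values are made-up examples.

    # Minimal sketch: turning raw text into numeric TF-IDF features.
    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = [
        "audio video and text are different data modalities",
        "feature engineering turns raw data into model-ready features",
        "graphs and time series need their own feature representations",
    ]

    # max_features and ngram_range are arbitrary choices for the example.
    vectorizer = TfidfVectorizer(max_features=50, ngram_range=(1, 2))
    X = vectorizer.fit_transform(corpus)            # sparse matrix: documents x features

    print(X.shape)                                  # (3, number_of_features)
    print(vectorizer.get_feature_names_out()[:10])  # a few learned feature names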

Module-II (3 Lectures)
Advanced DNNs: a deeper look at CNNs and RNNs. Introduction to Reinforcement Learning: when ML meets game theory. GANs.

Markov and hidden Markov models:
Transition matrix, Stationary distribution of a Markov chain, Application: Google's PageRank algorithm for web page ranking, Hidden Markov models, Inference in HMMs, The forwards algorithm, The forwards-backwards algorithm, The Viterbi algorithm, Forwards filtering, backwards sampling, Learning for HMMs, State space models, SSMs for object tracking, Robotic SLAM, Online parameter learning using recursive least squares, SSMs for time series forecasting.
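As a brief illustration of the transition-matrix, stationary-distribution, and PageRank topics above, the following NumPy sketch finds the stationary distribution of a toy 3-state Markov chain by power iteration; the transition matrix is made up.

    # Minimal sketch: stationary distribution of a Markov chain by power iteration,
    # the same idea behind Google's PageRank.
    import numpy as np

    # Row-stochastic transition matrix P: P[i, j] = probability of moving i -> j.
    P = np.array([
        [0.1, 0.6, 0.3],
        [0.4, 0.4, 0.2],
        [0.3, 0.3, 0.4],
    ])

    pi = np.full(3, 1 / 3)            # start from the uniform distribution
    for _ in range(1000):             # iterate pi <- pi P until convergence
        new_pi = pi @ P
        if np.allclose(new_pi, pi, atol=1e-12):
            break
        pi = new_pi

    print("stationary distribution:", pi)   # satisfies pi = pi P and sums to 1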

Module-III (3 Lectures)
Undirected graphical models (Markov random fields), Conditional independence properties of UGMs, Learning, Training maxent models using gradient methods, Training partially observed maxent models, Approximate methods for computing the MLEs of MRFs, Pseudo-likelihood, Stochastic maximum likelihood, Structural SVMs, SSVMs: a probabilistic view, SSVMs: a non-probabilistic view, Cutting plane methods for fitting SSVMs.
Markov Decision Processes, Prediction and Control by Dynamic Programming.
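To illustrate control by dynamic programming, here is a minimal value-iteration sketch on a made-up 2-state, 2-action MDP; the transition probabilities, rewards, and discount factor are arbitrary examples, not course-specified values.

    # Minimal sketch: value iteration (Bellman optimality backups) on a toy MDP.
    import numpy as np

    n_states, n_actions, gamma = 2, 2, 0.9
    # P[a, s, s'] = transition probability, R[s, a] = immediate reward (made up).
    P = np.array([
        [[0.8, 0.2], [0.1, 0.9]],   # action 0
        [[0.5, 0.5], [0.6, 0.4]],   # action 1
    ])
    R = np.array([
        [1.0, 0.0],                 # rewards in state 0 for actions 0 and 1
        [0.0, 2.0],                 # rewards in state 1 for actions 0 and 1
    ])

    V = np.zeros(n_states)
    for _ in range(1000):
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        new_V = Q.max(axis=1)       # Bellman optimality backup
        if np.allclose(new_V, V, atol=1e-10):
            break
        V = new_V

    policy = Q.argmax(axis=1)       # greedy policy w.r.t. the converged values
    print("optimal values:", V, "greedy policy:", policy)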

Module-IV

Monte Carlo inference, Sampling from standard distributions, Rejection sampling, Particle filtering, Rao-Blackwellised particle filtering, Markov chain Monte Carlo (MCMC) inference, The Metropolis-Hastings algorithm, Speed and accuracy of MCMC.
Monte Carlo Methods for Model-Free Prediction and Control, Policy Gradients, Markov Games.
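A minimal Metropolis-Hastings sketch, assuming NumPy; the unnormalized target density, proposal scale, iteration count, and burn-in length are illustrative choices only.

    # Minimal sketch: Metropolis-Hastings with a symmetric Gaussian random-walk proposal.
    import numpy as np

    rng = np.random.default_rng(0)

    def unnorm_target(x):
        # Mixture of two Gaussians, known only up to a normalizing constant.
        return np.exp(-0.5 * (x - 2) ** 2) + 0.5 * np.exp(-0.5 * (x + 2) ** 2)

    samples, x = [], 0.0
    for _ in range(10_000):
        proposal = x + rng.normal(scale=1.0)                # symmetric proposal
        accept_prob = min(1.0, unnorm_target(proposal) / unnorm_target(x))
        if rng.random() < accept_prob:                      # accept/reject step
            x = proposal
        samples.append(x)

    print("mean estimate:", np.mean(samples[1000:]))        # discard burn-in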

Module-V
Deep learning, Deep generative models, Deep directed networks,
Deep Boltzmann machines, Deep belief networks, Greedy layer-wise
learning of DBNs

Applications of deep networks: Handwritten digit classification using DBNs, Data visualization and feature discovery using deep auto-encoders, Information retrieval using deep auto-encoders (semantic hashing), Learning audio features using 1-d convolutional DBNs, Learning image features using 2-d convolutional DBNs.
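A minimal sketch of feature discovery with a deep auto-encoder, assuming PyTorch; the layer sizes, random stand-in data, and training settings are placeholders rather than a reference implementation of the DBN-based applications listed above.

    # Minimal sketch: a small deep auto-encoder whose bottleneck acts as learned features.
    import torch
    from torch import nn

    autoencoder = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 32),                  # 32-d bottleneck = discovered features
        nn.ReLU(),
        nn.Linear(32, 256), nn.ReLU(),
        nn.Linear(256, 784),
    )

    x = torch.rand(64, 784)                  # stand-in for flattened digit images
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(100):                     # tiny reconstruction training loop
        optimizer.zero_grad()
        loss = loss_fn(autoencoder(x), x)    # reconstruct the input
        loss.backward()
        optimizer.step()

    features = autoencoder[:3](x)            # encoder output: shape (64, 32)
    print(features.shape)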

COURSE OUTCOMES

On completion of the course, the student should be able to:

Text books:
1. Feature Engineering for Machine Learning by Alice Zheng and Amanda Casari; O'Reilly Media, Inc. (2018)
2. Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto; The MIT Press (1998)
3. Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville; The MIT Press (2016)
Reference books:
1. Machine Learning: A Probabilistic Perspective by Kevin P. Murphy; The MIT Press (2012)
2. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig; Pearson, 4th edition (2021)
3. Probabilistic Machine Learning: An Introduction by Kevin P. Murphy; The MIT Press (2022)
List of Practicals

1. Understanding and reading all types of data
2. Data transformation
3. Extraction of features from different types of data
4. Different ways to find features
5. Application of Markov models (2)
6. Application of Monte Carlo (2)
Mini-Project.

Grading Policy

1. Mid-Sem 1: 25%
2. Mid-Sem 2: 25%
3. Mini-Project: 50% (Implementation: 25% + Presentation & Viva: 25%)
4. No end-semester exam.
Mini-project instructions:
1. Groups of 4.
2. Each group selects one research paper, gets it verified by the faculty, and starts implementing it.
3. Marks are awarded for understanding and detailed implementation, not just for getting results.
4. Timeline: paper confirmed by the end of August; a 5-minute discussion at the end of September; final presentations begin in mid-November.
