ANN Notes
------------------------------------------------------------------
All exams are MCQs (T1: 9 questions, 5 marks each; T2, T3, Compre).
ML is the study of learning algorithms.
IoT: Multi-Feature Recognition. More robust than normal neural networks due to a
sturdy security system.
ML focuses on the development of computer programs that can access data and create
a hypothesis in order to learn for themselves.
SUPERVISED LEARNING:
1. Presence of a supervisor: the training data is labeled with the correct outputs.
UNSUPERVISED LEARNING:
1. Training the machine using information that is neither classified nor labeled,
and allowing the algorithm to act on that information without guidance.
SEMI-SUPERVISED LEARNING:
1. Machines which use this are able to considerably improve learning accuracy.
2. Usually chosen when acquiring labeled data requires skilled and relevant
resources in order to train on it / learn from it.
We use linear regression to create a hypothesis from the training data set and then
evaluate it on the test data set.
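A minimal sketch of this train/evaluate split, on made-up 1-D data (the data and split sizes are my own illustration, not from the course):

```python
import numpy as np

# Hypothetical data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

# Split into training and test sets.
x_train, x_test = x[:80], x[80:]
y_train, y_test = y[:80], y[80:]

# Fit the hypothesis h(x) = w*x + b by least squares on the training set only.
X_train = np.column_stack([x_train, np.ones_like(x_train)])
w, b = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

# Evaluate mean squared error on the held-out test set.
test_mse = np.mean((w * x_test + b - y_test) ** 2)
```

The key point is that the hypothesis is fit on the training rows only; the test rows are touched once, at evaluation time.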
------------------------------------------------------------------ Lecture 2:
------------------------------------------------------------------
The alpha value can be carefully chosen using grid search or nested cross-validation.
Grid Search:
1. Fix a range of alpha. Compute the mean squared error for each alpha in the given
range and pick the alpha with the lowest.
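A sketch of that grid search, assuming alpha here means the gradient-descent learning rate (the notes leave this open; the same loop works if alpha is a regularization strength):

```python
import numpy as np

# Toy data: y = 3x plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 50)
y = 3.0 * x + rng.normal(scale=0.05, size=50)

def mse_after_gd(alpha, steps=200):
    # Run batch gradient descent on w for the model h(x) = w*x.
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)   # d/dw of the mean squared error
        w -= alpha * grad
    return np.mean((w * x - y) ** 2)

# Fix a range of alpha, evaluate each, keep the one with the lowest MSE.
alphas = [0.001, 0.01, 0.1, 0.5]
best_alpha = min(alphas, key=mse_after_gd)
```

In practice the evaluation inside the loop should use held-out data (that is what nested cross-validation adds); this sketch keeps it on the training set for brevity.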
The batch gradient descent method is suitable for a small training data set, but
not for a large one.
Bias-Variance Problem:
We create multiple models to fit the training data set. Some models fit the
training set perfectly, some fit it with moderate error, and some with maximum
error.
We do not know in advance what the test data set will be, so it is risky to choose
the perfectly fitting model: the test data may be entirely different from the
training set.
We also cannot choose the worst-fitting model, as it may not fit the test data set
either.
It is advisable to choose the moderately fitting model, since it generalizes: it
fits the data with moderate error on both sets.
A model that fits the training data set perfectly is said to be overfitting.
A model that barely fits the training data set is said to be underfitting.
Comparison of the Batch Gradient Descent, Mini-Batch Gradient Descent and
Stochastic Gradient Descent graphs.
----------See slides for the derivations of Mini-Batch Gradient Descent and
Stochastic Gradient Descent. Easy to understand.
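The three variants differ only in how much of the data each update uses. A sketch on a toy least-squares problem, assuming the usual formulation (batch sizes and data are my own illustration, not from the slides):

```python
import numpy as np

# Toy data: y = 3x plus noise, 64 samples.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 64)
y = 3.0 * x + rng.normal(scale=0.05, size=64)

def step(w, xb, yb, alpha=0.1):
    # One gradient update using only the batch (xb, yb).
    return w - alpha * np.mean(2 * (w * xb - yb) * xb)

def epoch(w, batch_size):
    # One pass over the data in consecutive batches of the given size.
    for i in range(0, len(x), batch_size):
        w = step(w, x[i:i + batch_size], y[i:i + batch_size])
    return w

w_batch = epoch(0.0, batch_size=64)   # batch GD: 1 update per epoch
w_mini = epoch(0.0, batch_size=8)     # mini-batch GD: 8 updates per epoch
w_sgd = epoch(0.0, batch_size=1)      # stochastic GD: 64 updates per epoch
```

After a single epoch the variants with more (noisier) updates have moved further toward the solution, which is the trade-off the comparison graphs show.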
Test 1:
9 MCQ questions (5 marks each, in Google Quiz).
Types of Questions:
1. All equations to be remembered.
2. Given test vectors, evaluate the cost function, weight, etc. till given
iteration.
https://towardsdatascience.com/l1-and-l2-regularization-methods-ce25e7fc831c
Refer to the above link for more information
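As a small illustration of the L2 case from that link, ridge regression adds lam * ||w||^2 to the least-squares cost and shrinks the weights (the data and lambda value are my own illustration):

```python
import numpy as np

# Toy data with true weights [2, -1].
rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=30)

def ridge(X, y, lam):
    # Closed-form minimizer of ||Xw - y||^2 + lam * ||w||^2:
    # w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = ridge(X, y, lam=0.0)    # no penalty: ordinary least squares
w_reg = ridge(X, y, lam=10.0)   # L2 penalty pulls the weights toward zero
```

L1 (lasso) has no closed form like this; it is usually solved iteratively and tends to drive some weights exactly to zero, as the linked article explains.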
----------See Slide for Equation
------------------------------------------------------------------ Lecture 4:
------------------------------------------------------------------
In Logistic Regression,
P(y) -> Prior
P(x|y) -> Likelihood -> Gaussian Distribution
P(x) -> Evidence
P(y|x) -> Posterior Distribution
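A numeric sketch of how these terms combine via Bayes' rule, P(y|x) = P(x|y)P(y) / P(x), for two classes with Gaussian likelihoods (the means, variance, and priors are made up for illustration):

```python
import math

def gaussian(x, mu, sigma):
    # Gaussian probability density at x.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

prior = {0: 0.5, 1: 0.5}                 # P(y)

def likelihood(x, y):                    # P(x|y): Gaussian, class-dependent mean
    return gaussian(x, mu=(0.0, 2.0)[y], sigma=1.0)

x = 1.5
evidence = sum(prior[y] * likelihood(x, y) for y in (0, 1))              # P(x)
posterior = {y: prior[y] * likelihood(x, y) / evidence for y in (0, 1)}  # P(y|x)
```

Note that the evidence P(x) is just the normalizer, so the posterior always sums to 1 over the classes.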
------------------------------------------------------------------ Lecture 5:
------------------------------------------------------------------
In the one-vs-all algorithm for Multiclass Logistic Regression, we have a major
problem of class imbalance.
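The imbalance arises because each binary subproblem treats one class as positive and all K-1 others as negative. A sketch with made-up balanced data:

```python
import numpy as np

# 5 classes, 100 samples each; hypothetical balanced data set.
K = 5
labels = np.repeat(np.arange(K), 100)

# Fraction of positives seen by each one-vs-all binary classifier.
pos_fraction = {k: float(np.mean(labels == k)) for k in range(K)}
# Each classifier trains on 20% positives vs 80% negatives here,
# and the imbalance worsens as K grows.
```

Even with perfectly balanced classes, every binary classifier sees only 1/K of its data as positive, which is why class imbalance is inherent to one-vs-all.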
------------------------------------------------------------------ Lecture 6:
------------------------------------------------------------------
Underfitting is:
High Bias and Low Variance
Overfitting is:
Low Bias and High Variance