Instance-based Algorithms: These algorithms learn from individual training examples. Rather than building an explicit model, they store the training data and classify new instances by their similarity to the stored examples. Examples of instance-based algorithms include k-Nearest Neighbors (k-NN), Learning Vector Quantization (LVQ), and Radial Basis Function (RBF) networks.
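The k-NN idea fits in a few lines of plain Python. A minimal sketch, assuming 2-D points; the training data, labels, and function name are invented for illustration:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (features, label) pairs; distance is Euclidean.
    """
    # Sort all training points by distance to the query, then vote
    # among the labels of the k closest ones.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: two well-separated clusters labeled "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # query near the "a" cluster
```

Note that "learning" here is just storing the data; all the work happens at prediction time.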
Bayesian Algorithms: These algorithms apply Bayes' Theorem to problems such as classification and regression. They are probabilistic models that combine prior beliefs with observed evidence to estimate the probability of an outcome. Examples of Bayesian algorithms include Naive Bayes and Bayesian Networks.
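A minimal sketch of categorical Naive Bayes with add-one (Laplace) smoothing; the toy weather rows and helper names below are made up for illustration:

```python
from collections import defaultdict
import math

def train_nb(rows):
    """Count label and per-feature-value frequencies; rows are (features, label)."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(int)   # (label, position, value) -> count
    values = defaultdict(set)        # position -> distinct values seen
    for feats, label in rows:
        label_counts[label] += 1
        for i, v in enumerate(feats):
            feat_counts[(label, i, v)] += 1
            values[i].add(v)
    return label_counts, feat_counts, values, len(rows)

def predict_nb(model, feats):
    """Return the label maximizing log P(label) + sum_i log P(feat_i | label)."""
    label_counts, feat_counts, values, n = model
    best, best_lp = None, -math.inf
    for label, c in label_counts.items():
        lp = math.log(c / n)  # log prior
        for i, v in enumerate(feats):
            # Add-one smoothing so unseen values never get probability zero.
            lp += math.log((feat_counts[(label, i, v)] + 1) / (c + len(values[i])))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical data: (outlook, windy) -> play outside?
rows = [(("sunny", "no"), "yes"), (("sunny", "yes"), "no"),
        (("rain", "yes"), "no"), (("overcast", "no"), "yes")]
model = train_nb(rows)
print(predict_nb(model, ("sunny", "no")))
```

The "naive" part is the independence assumption: each feature's likelihood is multiplied in separately, given the label.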
Ensemble Algorithms: These algorithms combine the predictions of multiple weaker models to make a final decision. They are used to improve predictive accuracy and to control overfitting. Examples of ensemble algorithms include Bagging, Boosting, and Random Forests.
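Bagging can be sketched with any base learner; here a 1-nearest-neighbor classifier is fitted on bootstrap resamples and the ensemble votes. The data, seed, and function names are invented for illustration:

```python
import math
import random
from collections import Counter

def nearest_label(sample, query):
    """Base learner: 1-NN, i.e. the label of the closest point in `sample`."""
    return min(sample, key=lambda p: math.dist(p[0], query))[1]

def bagging_predict(train, query, n_models=25, seed=0):
    """Bagging: fit each base learner on a bootstrap resample (sampling
    with replacement) of the training set, then take a majority vote."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    votes = Counter()
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        votes[nearest_label(sample, query)] += 1
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(bagging_predict(train, (1, 1)))
```

Averaging over resamples is what reduces the variance of an unstable base learner; a Random Forest adds random feature subsets on top of this scheme.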
Decomposition Algorithms: These algorithms break the data down into a smaller set of underlying components and use those components for prediction or dimensionality reduction. Examples include Principal Component Analysis (PCA) and Factor Analysis.
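For 2-D data, the first principal component can be approximated without any linear-algebra library by power iteration on the covariance matrix. A minimal sketch; the sample points are made up so that they lie roughly along the line y = x:

```python
def first_pc(data, iters=200):
    """Approximate the first principal component (dominant eigenvector of
    the covariance matrix) of 2-D `data` by power iteration."""
    n = len(data)
    # Mean-centre the data.
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centred = [(x - mx, y - my) for x, y in data]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    # Repeatedly multiply a vector by the matrix and renormalize;
    # it converges to the direction of maximum variance.
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points near y = x, so the component should be close to (0.707, 0.707).
data = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
print(first_pc(data))
```

Projecting the data onto this direction gives the one-dimensional representation that preserves the most variance.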
Regression Algorithms: These algorithms are used for predicting continuous outcomes by modeling the relationship between input variables and a target. Examples include Linear Regression and Support Vector Regression; note that Logistic Regression, despite its name, is mainly used for classification.
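For a single feature, ordinary least squares has a closed form: slope = cov(x, y) / var(x). A minimal sketch on made-up data that lies exactly on y = 2x + 1:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y over variance of x.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    # Intercept: the fitted line passes through the mean point.
    return a, my - a * mx

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 1.0
```

With noisy data the same formula returns the line minimizing the sum of squared residuals rather than an exact fit.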
Decision Tree Algorithms: These algorithms are used for both classification and regression tasks. They recursively split the data into subsets based on the values of the input variables. Examples include Classification and Regression Trees (CART), Decision Stump, and Iterative Dichotomiser 3 (ID3).
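A decision stump is the simplest case: a tree with a single split. The sketch below tries every feature and threshold, labels each side by majority vote, and keeps the split with the fewest training errors; the rows and function names are invented for illustration:

```python
from collections import Counter

def fit_stump(rows):
    """Fit a decision stump (one-split tree). `rows` is a list of
    (features, label); returns (errors, feature, threshold, left, right)."""
    best = None
    for i in range(len(rows[0][0])):          # every feature index
        for t in sorted({f[i] for f, _ in rows}):   # every observed threshold
            left = [lab for f, lab in rows if f[i] <= t]
            right = [lab for f, lab in rows if f[i] > t]
            llab = Counter(left).most_common(1)[0][0] if left else None
            rlab = Counter(right).most_common(1)[0][0] if right else llab
            errs = (sum(lab != llab for lab in left)
                    + sum(lab != rlab for lab in right))
            if best is None or errs < best[0]:
                best = (errs, i, t, llab, rlab)
    return best

def stump_predict(stump, feats):
    _, i, t, llab, rlab = stump
    return llab if feats[i] <= t else rlab

rows = [((1, 7), "no"), ((2, 8), "no"), ((6, 1), "yes"), ((7, 2), "yes")]
stump = fit_stump(rows)
print(stump_predict(stump, (6.5, 1.5)))
```

Full tree learners such as CART and ID3 apply this kind of split search recursively to each resulting subset, usually scoring splits by Gini impurity or information gain rather than raw error count.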
Clustering Algorithms: These algorithms are used for unsupervised learning tasks where the goal is to group similar instances into clusters. Examples include K-Means, Hierarchical Clustering, and DBSCAN.