
Machine Learning


Engr. Ejaz Ahmad
Common terms

 Accuracy
Percentage of correct predictions made by the model out of
the total observations
 Algorithm
A method, function, or set of instructions used to generate a
machine learning model

 Attribute
A quality describing an observation
Common terms

 Bias
Error due to overly simplistic assumptions in the learning
algorithm; it causes underfitting
 Bias metric
Average difference between the predicted and the
observed values
 Bias Terms
Allow the model to represent patterns that do not pass
through the origin
Common terms

 Categorical Variables
Variables with a discrete set of possible values
Classification_report

 Recall
The proportion of actual positive events in the data that the
model correctly identifies (the true positive rate)
 Precision
It is the positive predictive value: a measure of how many of
the positives your model claims are actually positive
Classification_report

 F1 Score
It is a measure of model performance: the weighted
harmonic mean of the precision and recall of the model
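 As a quick sketch, F1 can also be computed directly
(reusing the y_test and y_predict names from the
classification_report call below):

from sklearn.metrics import f1_score

# harmonic mean of precision and recall for the positive class
f1_score(y_test, y_predict)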

 from sklearn.metrics import classification_report


 classification_report(y_test, y_predict)
ROC_curve

 The Receiver Operating Characteristic (ROC) curve is used
for visual comparison of classification models
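 A minimal sketch, assuming y_test holds the true labels and
y_score holds the classifier's decision scores:

from sklearn.metrics import roc_curve, roc_auc_score

# false positive rate and true positive rate at each score threshold
fpr, tpr, thresholds = roc_curve(y_test, y_score)

# area under the ROC curve as a single summary number
roc_auc_score(y_test, y_score)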
Common terms

 Convergence
A state reached during model training when the loss
changes very little from one iteration to the next
 Dimension
The number of features in a data set
 Extrapolation
Making predictions outside the range of the data set
Common terms

 Feature Selection
The process of selecting the relevant features from the
data set for the model
 Hyperparameter
A higher-level model property, such as the learning
rate, the depth of a tree, or the number of hidden layers
 Regularization
Restricting the values of the weights in regression
to avoid overfitting
Common terms

 Noise
Any irrelevant information or randomness in a
dataset
 Outliers
An observation that deviates significantly from the
other observations
Bayes Theorem

 Bayes Theorem
It gives the posterior probability of an event on the
basis of prior knowledge: P(A|B) = P(B|A)P(A)/P(B)
 Generative Model
A generative model learns how the data in each category is distributed
 Discriminative model
It simply learns the distinction between the different
categories of data

 Split train and test data
 from sklearn.model_selection import train_test_split
 Accuracy Measure
 from sklearn.metrics import accuracy_score
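 A minimal usage sketch, assuming X and y are the features and
labels and model is any scikit-learn estimator:

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model.fit(X_train, y_train)
accuracy_score(y_test, model.predict(X_test))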
Data Preprocessing

Feature Scaling

Pipeline

Polynomial Features
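 These steps are typically chained together; a minimal sketch,
assuming X holds numeric features:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures

# add polynomial terms, then scale everything, in one reusable object
preprocess = Pipeline([
    ("poly", PolynomialFeatures(degree=2)),
    ("scaler", StandardScaler()),
])
X_prepared = preprocess.fit_transform(X)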



Regression

Regression

 A statistical model for establishing a relationship between a
dependent variable and a given set of independent
variables
1. Simple Linear Regression

 Predicting a response using a single feature
 from sklearn.linear_model import LinearRegression
 model = LinearRegression()
 model.fit(X, y)
 model.intercept_, model.coef_
 model.predict(X_new)
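 A runnable sketch with synthetic data (the data and names are
illustrative, not from the slides):

import numpy as np
from sklearn.linear_model import LinearRegression

X = 2 * np.random.rand(100, 1)              # a single feature
y = 4 + 3 * X[:, 0] + np.random.randn(100)  # linear target plus noise

model = LinearRegression()
model.fit(X, y)
print(model.intercept_, model.coef_)        # should be close to 4 and 3
model.predict([[1.5]])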
Gradient Descent

 It is a very effective and simple approach to fitting linear
models. The general idea of GD is to tweak the parameters
iteratively in order to minimize the cost function (see the
sketch below)
 Types
1. Batch Gradient Descent: it uses the full training batch at
each step
2. Stochastic Gradient Descent: it picks a random instance
in the training set and calculates the gradient on it
3. Mini-batch Gradient Descent: it calculates the gradient on
a small random set of instances called a mini-batch
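 A minimal batch gradient descent sketch for linear regression
(eta and n_iterations are illustrative choices):

import numpy as np

m = 100                                  # number of training instances
X = 2 * np.random.rand(m, 1)
y = 4 + 3 * X + np.random.randn(m, 1)
X_b = np.c_[np.ones((m, 1)), X]          # add a bias column of ones

eta = 0.1                                # learning rate
n_iterations = 1000
theta = np.random.randn(2, 1)            # random initialization

for iteration in range(n_iterations):
    # gradient of the MSE cost over the full training batch
    gradients = 2 / m * X_b.T.dot(X_b.dot(theta) - y)
    theta = theta - eta * gradients      # tweak parameters downhill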
Stochastic Gradient Descent

 from sklearn.linear_model import SGDRegressor
 model = SGDRegressor(max_iter=1000, tol=1e-3, eta0=0.1, penalty=None)
 model.fit(X, y)
 model.predict(X_new)
Regularized Linear Models

 For linear models, regularization is achieved by
constraining the weights of the model
 The following are regularized versions of Linear
Regression
1. Ridge Regression
2. Lasso Regression
3. Elastic Net
Ridge Regression

 from sklearn.linear_model import Ridge
 model = Ridge(alpha=1, solver="cholesky")
 model.fit(X, y)
 model.predict(X_new)
 The same task can be accomplished using SGDRegressor
 model = SGDRegressor(penalty="l2")
 Here the "l2" penalty means Ridge Regression
Lasso class

 from sklearn.linear_model import Lasso
 model = Lasso(alpha=0.1)
 model.fit(X, y)
 model.predict(X_new)
Elastic Net

 from sklearn.linear_model import ElasticNet
 model = ElasticNet(alpha=0.1, l1_ratio=0.5)
 model.fit(X, y)
 model.predict(X_new)
Logistic Regression

 It produces results in a binary format and is used to
predict the outcome of a categorical dependent
variable, so its outcome should be
discrete/categorical, such as 0 or 1, yes or no, etc.

 from sklearn.linear_model import LogisticRegression
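 A usage sketch mirroring the other estimators (X_train, y_train,
and X_test are assumed to be available):

from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)
model.predict(X_test)        # discrete class labels, e.g. 0 or 1
model.predict_proba(X_test)  # class probabilities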
Classification

Classification

 It is the process of categorizing given data into classes;
it can be performed on both structured and
unstructured data
Decision Tree

 It splits the dataset into smaller segments until the target
variables are the same or until the dataset can no
longer be split
 from sklearn.tree import DecisionTreeClassifier
 model = DecisionTreeClassifier()
 model.fit(X, y)
 model.predict(X_new)
Decision Tree

 Save the model for later use (sklearn.externals.joblib is
deprecated; import joblib directly)
 import joblib
 joblib.dump(model, "filename.joblib")
 model = joblib.load("filename.joblib")
K.Nearest Neighbor

 It is a supervised learning method for both regression and
classification. The principle is to find the predefined
number of training samples closest to the new point
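 A minimal sketch (n_neighbors=5 is simply the library default):

from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
model.predict(X_test)

 For regression, KNeighborsRegressor follows the same pattern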
Support Vector Machine

 SVM is a very powerful and versatile Machine
Learning model, capable of linear and nonlinear
classification, regression, and even outlier
detection
Linear SVM Classification

 from sklearn.pipeline import Pipeline
 from sklearn.preprocessing import StandardScaler
 from sklearn.svm import LinearSVC  # support vector classifier
 svm_clf = Pipeline([("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge"))])
 svm_clf.fit(X, y)
 We can regularize the model by decreasing the value of the C
hyperparameter
 Another option is to use the SGDClassifier class, with
SGDClassifier(loss="hinge", alpha=1/(m*C))
Nonlinear SVM Classification

 There are two methods for nonlinear classification in SVM
1. Make the data linearly separable by adding Polynomial Features
2. Use the kernel trick
Method 1: PolynomialFeatures

 from sklearn.preprocessing import PolynomialFeatures
 polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge"))
])
 polynomial_svm_clf.fit(X, y)
Method 2: Polynomial Kernel


 from sklearn.svm import SVC
 poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1,
C=5))
])
 poly_kernel_svm_clf.fit(X, y)
Method 2: Gaussian RBF Kernel


 rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
 rbf_kernel_svm_clf.fit(X, y)

 Increasing gamma makes the bell-shaped curve narrower


SVM Regression


Linear Regression

from sklearn.svm import LinearSVR

svm_reg = LinearSVR(epsilon=1.5)

svm_reg.fit(X, y)

Nonlinear Regression

from sklearn.svm import SVR

svm_poly_reg = SVR(kernel="poly", degree=2, C=100,
epsilon=0.1)

svm_poly_reg.fit(X, y)
SGD classifier

 It is a very strong classifier for handling large datasets
and is suitable for online learning
 from sklearn.linear_model import SGDClassifier
 sgd_clf = SGDClassifier(random_state=42)
 sgd_clf.fit(X_train, y_train_5)
Naïve Bayes

 It is a classification algorithm based on Bayes’s
Theorem
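 A minimal sketch using the Gaussian variant for continuous
features (X_train, y_train, X_test are illustrative names):

from sklearn.naive_bayes import GaussianNB

model = GaussianNB()
model.fit(X_train, y_train)
model.predict(X_test)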
Random Forest
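 The slide gives no code; a minimal sketch with scikit-learn's
ensemble module (n_estimators=100 is the library default):

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
model.predict(X_test)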

Artificial Neural Network
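 The slide gives no code; one option is scikit-learn's MLPClassifier
(hidden_layer_sizes and max_iter below are illustrative choices):

from sklearn.neural_network import MLPClassifier

# one hidden layer of 100 units, trained for up to 500 iterations
model = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)
model.fit(X_train, y_train)
model.predict(X_test)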

Performance Measures

Classification models
Accuracy Measure using K-Fold

 K-fold cross-validation means splitting the training set
into K folds (e.g. 2, 3, 4), then making predictions and
evaluating them on each fold using a model trained
on the remaining folds
 from sklearn.model_selection import cross_val_score
 cross_val_score(sgd_clf, X_train, y_train_5, cv=3,
scoring="accuracy")
Confusion Matrix

 It is a performance measurement technique for classification
 It has 4 components: TP, TN, FP, FN
 from sklearn.metrics import confusion_matrix
 confusion_matrix(y_train, y_train_pred)
 The confusion matrix takes two arguments, the true labels and
the predictions; we can obtain y_train_pred using the following:
 from sklearn.model_selection import cross_val_predict
 y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5,
cv=3)
Precision

 Precision
 It is the accuracy of the positive predictions
 Precision=TP/(TP+FP)
 from sklearn.metrics import precision_score
 precision_score(y_train_5, y_train_pred)
Recall

 It is the ratio of positive instances that are correctly
detected by the classifier
 Recall=TP/(TP+FN)
 from sklearn.metrics import recall_score
 recall_score(y_train_5, y_train_pred)
Confusion Matrix

 Type I Error
It is a False Positive error: claiming that
something has happened when in fact it hasn't
 Type II Error
It is a False Negative error: claiming that
something has not happened when in fact it happened


