Linear Regression
Aleksanyan Lida
Matrix
Matrix addition
Matrix subtraction
Matrix transpose
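The three matrix operations listed above can be sketched with NumPy (the deck itself names no library, so this is an illustrative assumption):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

add = A + B   # element-wise matrix addition
sub = A - B   # element-wise matrix subtraction
T = A.T       # transpose: rows become columns
```

All three are defined element-wise on same-shaped matrices, which is why NumPy exposes them as plain operators and an attribute.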
Function derivatives
Gradient
Model Representation
● Supervised learning:
  the "right" answer is
  given for each example
  in the data
● Regression problem:
  we predict a real-valued
  output (not discrete)
h is the hypothesis
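For univariate linear regression the hypothesis is h(x) = θ₀ + θ₁x; a minimal sketch (parameter names assumed, following the standard notation):

```python
def h(theta0, theta1, x):
    # univariate linear hypothesis: predicted value for input x
    return theta0 + theta1 * x
```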
So what do we have?
Minimization with two parameters
Gradient Descent Algorithm
Here, α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J.
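The update rule above can be sketched for the univariate case, with both parameters updated simultaneously each step (data, α, and iteration count below are illustrative assumptions):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=1000):
    """Fit h(x) = theta0 + theta1 * x by minimizing the mean
    squared error cost J via batch gradient descent."""
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        pred = theta0 + theta1 * x
        # partial derivatives of J with respect to each parameter
        grad0 = (pred - y).mean()
        grad1 = ((pred - y) * x).mean()
        # simultaneous update, scaled by the learning rate alpha
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1
```

Note that both gradients are computed from the same predictions before either parameter changes; updating sequentially would be a subtly different algorithm.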
Learning rate
Feature Scaling
Mean Normalization
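Mean normalization rescales each feature by subtracting its mean and dividing by its range, so all features end up in comparable intervals around zero; a minimal sketch (dividing by max − min here, though dividing by the standard deviation is an equally common variant):

```python
import numpy as np

def mean_normalize(X):
    # X: (m, n) matrix, one row per example, one column per feature.
    # Each feature is centered on its mean and scaled by its range.
    return (X - X.mean(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```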
Learning rate selection
Summary
➔ Find a hypothesis function for this training set:
Multiple features
Notations
Hypothesis
➢ Then :
➢ Now :
For convenience let’s define x0 = 1
➢ Finally :
New gradient descent
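With x0 = 1 prepended, the hypothesis becomes the dot product θᵀx and gradient descent updates all parameters at once; a vectorized sketch under the same assumptions as before (illustrative α and iteration count):

```python
import numpy as np

def gradient_descent_multi(X, y, alpha=0.1, iters=1000):
    """Multivariate linear regression via batch gradient descent.
    X: (m, n) feature matrix; a column of ones (x0 = 1) is prepended
    so theta[0] plays the role of the intercept."""
    m = X.shape[0]
    Xb = np.hstack([np.ones((m, 1)), X])
    theta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        # gradient of J: (1/m) * X^T (X theta - y), updates all
        # parameters simultaneously
        grad = Xb.T @ (Xb @ theta - y) / m
        theta -= alpha * grad
    return theta
```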
Polynomial Regression
What problem do you see here?
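One way to see the problem: powers of the same feature span wildly different ranges, which is exactly the situation feature scaling addresses. A sketch of building polynomial features (function name and degree are illustrative assumptions):

```python
import numpy as np

def poly_features(x, degree=3):
    # Stack x, x**2, ..., x**degree as columns of a feature matrix,
    # turning polynomial regression into multivariate linear regression.
    return np.column_stack([x ** d for d in range(1, degree + 1)])

x = np.array([1.0, 2.0, 10.0])
X = poly_features(x)
# The columns span roughly [1, 10], [1, 100], and [1, 1000]:
# without mean normalization, gradient descent converges poorly.
```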
Thank you