Essentials of Machine Learning
Linear Models for Regression
Application of Regression
• Predict tomorrow’s stock market price given current market
conditions and other possible side information.
• Predict the age of a viewer watching a given video on
YouTube.
• Predict the amount of prostate specific antigen (PSA) in the
body as a function of a number of different clinical
measurements.
• Predict the temperature at any location inside a building
using weather data, time, door sensors, etc.
Thushari Silva, PhD Essentials of Machine Learning
Linear Regression with one variable – Model representation
Regression - Introduction
• The simplest linear model for regression is one that involves a linear
combination of the input variables
Y(X, W) = w0 + w1 x1 + w2 x2 + ... + wD xD, where X = (x1, x2, ..., xD)
• Linear regression with one variable is also known as "univariate linear
regression".
• Univariate linear regression is used when you want to predict a single
output value from a single input value.
• The general form of univariate linear regression is: Y(X, W) = w0 + w1 x1
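The univariate model above can be sketched in a few lines of Python; the weight values used here are illustrative assumptions, not fitted parameters.

```python
# Minimal sketch of the univariate hypothesis Y(x, w) = w0 + w1 * x.
# The weights below are arbitrary example values for demonstration.

def predict(x, w0, w1):
    """Univariate linear regression hypothesis."""
    return w0 + w1 * x

print(predict(2.0, w0=1.0, w1=3.0))  # 1.0 + 3.0 * 2.0 = 7.0
```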
Regression - Introduction
The goal of regression is to predict the value of one or more continuous target
variables t given the value of a D-dimensional vector x of input variables.
• Given a training data set comprising N observations {xn}, where n = 1, ..., N,
together with corresponding target values {tn}
• We model the predictive distribution p(t|x) to express the uncertainty about the
value of t for each value of x.
• The predictive distribution enables us to predict the value of t for a given value of x
in such a way that the expected value of a suitably chosen loss function is minimized.
Linear Regression - Cost function
• The accuracy of our hypothesis function is measured using a cost function,
which sums the squared differences between the outputs of the hypothesis
and the actual target values.
Squared Error Function: E(w) = J(θ0, θ1) = (1/2) Σ_{i=1}^{N} (Yi − ti)²
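The squared-error cost can be sketched directly from its formula; the prediction and target values below are illustrative, not taken from any real data set.

```python
# Sketch of the squared-error cost E(w) = (1/2) * sum_i (Y_i - t_i)^2.
# Example values are arbitrary, chosen only to exercise the formula.

def squared_error(predictions, targets):
    """Sum of squared residuals, scaled by 1/2 as in the slide's formula."""
    return 0.5 * sum((y - t) ** 2 for y, t in zip(predictions, targets))

Y = [1.0, 2.0, 3.0]   # hypothesis outputs
t = [1.5, 2.0, 2.0]   # actual targets
print(squared_error(Y, t))  # 0.5 * (0.25 + 0.0 + 1.0) = 0.625
```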
We succeed when the cost function reaches the very bottom of the pit in its graph, i.e.
when its value is at the minimum.
Linear Regression - Gradient Descent
Gradient Descent method (an iterative, trial-and-error search):

θj := θj − α ∂J(θ0, θ1)/∂θj

Simultaneously update θj for j = 0 and j = 1.
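The update rule above can be sketched as a plain-Python batch gradient descent loop; the training data, learning rate, and iteration count are illustrative assumptions.

```python
# Sketch of batch gradient descent for univariate linear regression,
# using the simultaneous update of theta0 and theta1 from the slide.
# Data, alpha, and iteration count are arbitrary illustrative choices.

def gradient_step(theta0, theta1, xs, ts, alpha):
    # Residuals of the hypothesis y = theta0 + theta1 * x.
    errs = [theta0 + theta1 * x - t for x, t in zip(xs, ts)]
    # Partial derivatives of J(theta0, theta1) = (1/2) * sum_i errs_i^2.
    grad0 = sum(errs)
    grad1 = sum(e * x for e, x in zip(errs, xs))
    # Simultaneous update: both new values use the OLD theta values.
    return theta0 - alpha * grad0, theta1 - alpha * grad1

xs = [0.0, 1.0, 2.0, 3.0]
ts = [1.0, 3.0, 5.0, 7.0]   # generated from t = 1 + 2x
theta0, theta1 = 0.0, 0.0
for _ in range(500):
    theta0, theta1 = gradient_step(theta0, theta1, xs, ts, alpha=0.05)
print(round(theta0, 3), round(theta1, 3))  # approaches (1, 2)
```

Returning both new values from one function call is what enforces the simultaneous update: computing and assigning θ0 first, then using it to update θ1, would be the common bug the slide warns against.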
Linear Regression - Gradient Descent
• Gradient descent can converge to a local minimum, even with the
learning rate α fixed.
θ1 := θ1 − α dJ(θ1)/dθ1
• As we approach a local minimum, the gradient shrinks toward zero, so gradient
descent automatically takes smaller steps; there is no need to decrease α over time.
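This shrinking-step behaviour can be seen on a simple one-dimensional cost; J(θ) = θ² below is an illustrative stand-in, not the regression cost from these slides.

```python
# Sketch: with a FIXED alpha, the update theta := theta - alpha * dJ/dtheta
# scales with the gradient, which vanishes at the minimum, so step sizes
# shrink automatically. Illustrative cost: J(theta) = theta**2.

alpha = 0.1
theta = 8.0
steps = []
for i in range(5):
    new_theta = theta - alpha * (2 * theta)   # dJ/dtheta = 2 * theta
    steps.append(abs(new_theta - theta))      # size of this step
    theta = new_theta
    print(f"iteration {i}: theta = {theta:.4f}, step size = {steps[-1]:.4f}")
```

Each step is a fixed fraction of the current gradient, so the printed step sizes decrease monotonically even though α never changes.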