Linear Regression

Jeff Howbert, Introduction to Machine Learning, Winter 2012


[Slides 2-6: figures omitted; slides courtesy of Greg Shakhnarovich (CS195-5, Brown Univ., 2006)]


Loss function

• Suppose target labels come from a set Y
  – Binary classification: Y = { 0, 1 }
  – Regression: Y = ℝ (real numbers)
• A loss function maps decisions to costs:
  – L( y, ŷ ) defines the penalty for predicting ŷ when the true value is y.
• Standard choice for classification: 0/1 loss (same as misclassification error)

      L_{0/1}( y, ŷ ) = 0 if y = ŷ, 1 otherwise

• Standard choice for regression: squared loss

      L( y, ŷ ) = ( ŷ − y )²
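
A minimal NumPy sketch of these two loss functions (illustrative code, not part of the original slides; the function names are made up for this example):

```python
import numpy as np

def zero_one_loss(y, y_hat):
    """0/1 loss: 0 if the prediction equals the true label, 1 otherwise."""
    return np.where(y == y_hat, 0, 1)

def squared_loss(y, y_hat):
    """Squared loss: (y_hat - y)^2."""
    return (y_hat - y) ** 2

# Example usage
print(zero_one_loss(np.array([0, 1, 1]), np.array([0, 0, 1])))   # [0 1 0]
print(squared_loss(np.array([2.0, 3.5]), np.array([2.5, 3.0])))  # [0.25 0.25]
```
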
Least squares linear fit to data

• The most popular estimation method is least squares:
  – Determine the linear coefficients α, β that minimize the sum of squared loss (SSL).
  – Use standard (multivariate) differential calculus:
    ‹ differentiate SSL with respect to α, β
    ‹ set each partial derivative to zero
    ‹ solve for α, β
• One dimension:

      SSL = Σ_{j=1}^{N} ( y_j − (α + β·x_j) )²        N = number of samples

      β = cov[ x, y ] / var[ x ]        α = ȳ − β·x̄        x̄, ȳ = means of training x, y

      ŷ_t = α + β·x_t   for test sample x_t
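
A small sketch of this one-dimensional closed-form fit, assuming NumPy (the helper name fit_1d_least_squares is made up for this illustration):

```python
import numpy as np

def fit_1d_least_squares(x, y):
    """Closed-form least squares fit of y ≈ alpha + beta * x in one dimension."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # beta = cov[x, y] / var[x]
    alpha = y.mean() - beta * x.mean()                  # alpha = y_bar - beta * x_bar
    return alpha, beta

# Example: noisy samples of y = 1 + 2x
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)
alpha, beta = fit_1d_least_squares(x, y)
y_hat_test = alpha + beta * 4.0    # prediction for test sample x_t = 4.0
```
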
Least squares linear fit to data

• Multiple dimensions
  – To simplify notation and derivation, change α to β_0, and add a new feature x_0 = 1 to the feature vector x:

      ŷ = β_0·1 + Σ_{i=1}^{d} β_i·x_i = β·x

  – Calculate SSL and determine β:

      SSL = Σ_{j=1}^{N} ( y_j − Σ_{i=0}^{d} β_i·x_{ji} )²  =  ( y − Xβ )ᵀ·( y − Xβ )

          y = vector of all training responses y_j
          X = matrix of all training samples x_j

      β = ( XᵀX )⁻¹ Xᵀ y

      ŷ_t = β·x_t   for test sample x_t
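
A NumPy sketch of this normal-equation solution (not part of the original slides; np.linalg.lstsq is used in place of forming (XᵀX)⁻¹ explicitly, which is the numerically safer equivalent):

```python
import numpy as np

def fit_linear_least_squares(X, y):
    """Multivariate least squares: prepend x0 = 1 and solve for beta."""
    X_aug = np.hstack([np.ones((X.shape[0], 1)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X_aug, np.asarray(y, dtype=float), rcond=None)
    return beta

def predict(beta, X_test):
    """Predict y_hat = beta . x for each test sample."""
    X_aug = np.hstack([np.ones((X_test.shape[0], 1)), np.asarray(X_test, dtype=float)])
    return X_aug @ beta

# Example with three features
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 2.0 + X @ np.array([1.0, -0.5, 3.0]) + rng.normal(scale=0.1, size=100)
beta = fit_linear_least_squares(X, y)
y_hat = predict(beta, X[:5])
```
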
[Slides 10-11: figures omitted, illustrating least squares linear fits to data]


Extending application of linear regression

• The inputs X for linear regression can be:
  – Original quantitative inputs
  – Transformations of quantitative inputs, e.g. log, exp, square root, square, etc.
  – Polynomial transformations
    ‹ example: y = β_0 + β_1·x + β_2·x² + β_3·x³ (see the sketch below)
  – Basis expansions
  – Dummy coding of categorical inputs
  – Interactions between variables
    ‹ example: x_3 = x_1 · x_2
• This allows linear regression techniques to fit much more complicated, non-linear datasets.
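
A brief sketch of the polynomial case: the cubic model above is still linear in β, so it can be fit with the same least squares machinery by expanding x into the features [1, x, x², x³] (illustrative NumPy code, not from the original slides):

```python
import numpy as np

# Noisy samples from a cubic curve
rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 60)
y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(scale=1.0, size=x.shape)

# Columns of X_poly are 1, x, x^2, x^3, so the model is linear in beta
X_poly = np.vander(x, N=4, increasing=True)
beta, *_ = np.linalg.lstsq(X_poly, y, rcond=None)

# Predict at new points with the same polynomial expansion
x_test = np.array([0.5, 2.0])
y_hat = np.vander(x_test, N=4, increasing=True) @ beta
```
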
[Slide 13: figure omitted, example of fitting a polynomial curve with a linear model]


Prostate cancer dataset

• 97 samples, partitioned into:
  – 67 training samples
  – 30 test samples
• Eight predictors (features):
  – 6 continuous (4 log transforms)
  – 1 binary
  – 1 ordinal
• Continuous outcome variable:
  – lpsa: log( prostate specific antigen level )



[Slide 15: figure omitted, correlations of predictors in the prostate cancer dataset]


[Slide 16: figure omitted, fit of linear model to the prostate cancer dataset]


Regularization

• Complex models (lots of parameters) are often prone to overfitting.
• Overfitting can be reduced by imposing a constraint on the overall magnitude of the parameters.
• Two common types of regularization in linear regression:
  – L2 regularization (a.k.a. ridge regression). Find the β which minimizes:

      Σ_{j=1}^{N} ( y_j − Σ_{i=0}^{d} β_i·x_{ji} )²  +  λ Σ_{i=1}^{d} β_i²

    ‹ λ is the regularization parameter: a bigger λ imposes more constraint
  – L1 regularization (a.k.a. the lasso). Find the β which minimizes:

      Σ_{j=1}^{N} ( y_j − Σ_{i=0}^{d} β_i·x_{ji} )²  +  λ Σ_{i=1}^{d} | β_i |
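
Ridge regression also has a closed-form solution, which the slides do not show; a minimal NumPy sketch is below (the helper name fit_ridge is made up, the intercept β_0 is left unpenalized to match the penalty sums starting at i = 1, and in practice features are usually standardized first; the lasso has no closed form and is solved iteratively, e.g. by coordinate descent):

```python
import numpy as np

def fit_ridge(X, y, lam):
    """Closed-form ridge regression: beta = (X^T X + lam*I)^(-1) X^T y,
    with the intercept beta_0 excluded from the penalty."""
    X_aug = np.hstack([np.ones((X.shape[0], 1)), np.asarray(X, dtype=float)])
    penalty = lam * np.eye(X_aug.shape[1])
    penalty[0, 0] = 0.0                                  # do not shrink the intercept
    return np.linalg.solve(X_aug.T @ X_aug + penalty, X_aug.T @ np.asarray(y, dtype=float))

# A bigger lam shrinks the coefficients more strongly towards zero
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 1.0]) + rng.normal(scale=0.3, size=50)
for lam in (0.0, 1.0, 100.0):
    print(lam, np.round(fit_ridge(X, y, lam), 3))
```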


Example of L2 regularization

[Slide 18: figure omitted]

L2 regularization shrinks coefficients towards (but not to) zero, and towards each other.



Example of L1 regularization

[Slide 19: figure omitted]

L1 regularization shrinks coefficients to zero at different rates; different values of λ give models with different subsets of features.



[Slide 20: figure omitted, example of subset selection]


[Slide 21: figure omitted, comparison of various selection and shrinkage methods]


L1 regularization gives sparse models, L2 does not

[Slide 22: figure omitted]
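
This sparsity contrast can be reproduced with scikit-learn (illustrative code, not part of the original slides; scikit-learn's alpha parameter plays the role of the slides' λ):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Data where only three of ten features actually matter
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
true_beta = np.array([5.0, 0.0, 0.0, -3.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=0.5, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)

print("ridge:", np.round(ridge.coef_, 3))   # shrunk, but no exact zeros
print("lasso:", np.round(lasso.coef_, 3))   # irrelevant features driven exactly to zero
print("nonzero lasso coefficients:", np.count_nonzero(lasso.coef_))
```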


Other types of regression

• In addition to linear regression, there are:
  – many types of non-linear regression
    ‹ decision trees
    ‹ nearest neighbor
    ‹ neural networks
    ‹ support vector machines
  – locally linear regression
  – etc.



MATLAB interlude

matlab_demo_07.m, Part B

