07b Regression
Linear Regression
β = cov[ x, y ] / var[ x ]        α = ȳ − β · x̄        (x̄, ȳ = means of training x, y)

ŷt = α + β · xt    for test sample xt
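The two formulas above can be sketched directly in NumPy (the data values here are illustrative, not from the slides):

```python
import numpy as np

# Toy training data (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Slope and intercept from the formulas above
beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # beta = cov[x, y] / var[x]
alpha = y.mean() - beta * x.mean()                 # alpha = y_bar - beta * x_bar

# Prediction for a test sample x_t
x_t = 6.0
y_hat = alpha + beta * x_t
```

Note that `bias=True` makes `np.cov` use the population (1/N) normalization, matching `np.var`'s default, so the normalization constants cancel in the ratio.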
Jeff Howbert Introduction to Machine Learning Winter 2012 8
Least squares linear fit to data
Multiple dimensions
– To simplify notation and derivation, change α to β0, and add a new feature x0 = 1 to feature vector x:

    ŷ = β0 · 1 + Σi=1..d βi · xi = β · x
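With the constant feature x0 = 1 prepended, the intercept folds into the coefficient vector and the fit is a single least-squares solve. A minimal sketch with made-up data (the toy targets are constructed to satisfy y = 1·x1 + 2·x2 exactly):

```python
import numpy as np

# Toy multivariate data (illustrative): 5 samples, 2 features
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
y = np.array([5.0, 4.0, 11.0, 10.0, 15.0])  # exactly y = 1*x1 + 2*x2

# Prepend the constant feature x0 = 1, so the intercept becomes beta_0
X1 = np.hstack([np.ones((X.shape[0], 1)), X])

# Least-squares fit: beta = argmin || X1 @ beta - y ||^2
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Prediction is now the single dot product y_hat = beta . x
y_hat = X1 @ beta
```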
L2 regularization (ridge):

    Σj=1..N ( yj − Σi=0..d βi · xji )² + λ Σi=1..d βi²

L1 regularization (lasso):

    Σj=1..N ( yj − Σi=0..d βi · xji )² + λ Σi=1..d | βi |
L2 regularization shrinks coefficients towards (but not to) zero, and towards each other.
L1 regularization shrinks coefficients to zero at different rates; different values of λ give models with different subsets of features.
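The L2-regularized objective has a closed-form solution, β = (XᵀX + λI)⁻¹ Xᵀy; the L1 objective does not and is usually solved iteratively (e.g. coordinate descent). A minimal closed-form ridge sketch, reusing the toy data above and leaving β0 unpenalized to match the penalty sum starting at i = 1:

```python
import numpy as np

# Toy data with the constant x0 = 1 column already prepended (illustrative)
X = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [1.0, 3.0, 4.0],
              [1.0, 4.0, 3.0],
              [1.0, 5.0, 5.0]])
y = np.array([5.0, 4.0, 11.0, 10.0, 15.0])

def ridge(X, y, lam):
    """Closed-form L2-regularized fit: beta = (X'X + lam*I)^(-1) X'y.
    The intercept beta_0 is not shrunk, matching the penalty sum over i = 1..d."""
    I = np.eye(X.shape[1])
    I[0, 0] = 0.0          # do not penalize the intercept
    return np.linalg.solve(X.T @ X + lam * I, X.T @ y)

b0 = ridge(X, y, 0.0)      # lam = 0 recovers ordinary least squares
b1 = ridge(X, y, 10.0)     # larger lam shrinks the coefficients toward zero
```

Comparing `b1` to `b0` shows the shrinkage: the norm of the penalized coefficients decreases as λ grows, but none of them is driven exactly to zero, in contrast to L1.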
– neural networks
– support vector machines
– locally linear regression
– etc.
matlab_demo_07.m, Part B