The null model
$$Y_i = \beta_0 + (0)X_i + \varepsilon_i$$
implies that there is no relationship between Y and X.
[Figure: scatter plot of Response/Output (roughly 10–30) against Predictor/Input (1–11), illustrating data with no relationship between Y and X]
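Under the null model the fitted slope should come out near zero. A minimal simulation sketch (the intercept, noise level, and seed are illustrative assumptions, not values from the slides):

```python
import numpy as np

# Illustrative simulation under the null model Y_i = beta0 + (0)X_i + eps_i;
# the intercept, noise level sigma, and seed are assumptions, not slide values.
rng = np.random.default_rng(0)
beta0, sigma = 20.0, 3.0
x = np.arange(1, 12, dtype=float)   # predictor values 1..11 as in the plot
y = beta0 + 0.0 * x + rng.normal(0.0, sigma, size=x.size)

# Least squares slope: with a true slope of 0, b1 should come out near 0.
b1 = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)
print(b1)
```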
Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 4, Slide 7
Alternative Hypothesis: Least Squares Fit
[Figure: least squares fit; legend — Estimate: y = 2.09x + 8.36, MSE 4.15; True: y = 2x + 9, MSE 4.22; axes Predictor/Input (1–11) vs Response/Output (10–40)]

$b_1$, rescaled, is the test statistic.
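The slide's setup can be reproduced in outline: simulate from the true line y = 2x + 9, fit by least squares, and compare MSEs. The noise level and seed below are assumptions, so the fitted numbers will differ from the slide's 2.09 and 8.36:

```python
import numpy as np

# Sketch of the slide's setup: data from the true line y = 2x + 9 plus noise,
# then the least squares fit; the noise level and seed are assumed, so the
# fitted coefficients will differ from the slide's 2.09 and 8.36.
rng = np.random.default_rng(1)
x = np.arange(1, 12, dtype=float)
y = 2.0 * x + 9.0 + rng.normal(0.0, 2.0, size=x.size)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
mse_fit = np.sum((y - (b0 + b1 * x)) ** 2) / (x.size - 2)     # MSE of fitted line
mse_true = np.sum((y - (9.0 + 2.0 * x)) ** 2) / (x.size - 2)  # MSE of true line
print(b0, b1, mse_fit, mse_true)
```

On any given sample the fitted line never has larger SSE than the true line, since least squares minimizes SSE over all lines; that is why the estimate's MSE (4.15) can come out below the true line's (4.22) in the figure.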
$$E(b_1) = \beta_1, \qquad V(b_1) = \frac{\sigma^2}{\sum_i (X_i - \bar{X})^2}$$
we observe
$$\begin{aligned}
\sum_i (X_i - \bar{X})(Y_i - \bar{Y}) &= \sum_i (X_i - \bar{X})Y_i - \sum_i (X_i - \bar{X})\bar{Y} \\
&= \sum_i (X_i - \bar{X})Y_i - \bar{Y}\sum_i (X_i - \bar{X}) \\
&= \sum_i (X_i - \bar{X})Y_i - \bar{Y}\sum_i X_i + \bar{Y}\, n\, \frac{\sum_i X_i}{n} \\
&= \sum_i (X_i - \bar{X})Y_i
\end{aligned}$$
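The identity can be checked numerically; it relies only on the fact that the deviations from the mean sum to zero. The data below are arbitrary illustrative values:

```python
import numpy as np

# Numerical check of the identity on arbitrary data (values are illustrative):
# sum((x - xbar)(y - ybar)) equals sum((x - xbar) * y) because sum(x - xbar) = 0.
rng = np.random.default_rng(2)
x = rng.normal(size=20)
y = rng.normal(size=20)
lhs = np.sum((x - x.mean()) * (y - y.mean()))
rhs = np.sum((x - x.mean()) * y)
print(np.isclose(lhs, rhs))  # → True
```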
where
$$k_i = \frac{X_i - \bar{X}}{\sum_i (X_i - \bar{X})^2}$$
so that
$$b_1 = \sum_i k_i Y_i$$
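A quick check that the weighted form $b_1 = \sum_i k_i Y_i$ agrees with the usual ratio formula; the simulated data here are arbitrary assumptions:

```python
import numpy as np

# Check that b1 written as a linear combination sum(k_i * Y_i) matches the
# usual ratio formula; the simulated data here are arbitrary assumptions.
rng = np.random.default_rng(3)
x = rng.normal(size=15)
y = 1.5 * x + rng.normal(size=15)
k = (x - x.mean()) / np.sum((x - x.mean()) ** 2)   # the weights k_i
b1_linear = np.sum(k * y)
b1_usual = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(np.isclose(b1_linear, b1_usual))  # → True
```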
plugging in we get
$$V(b_1) = \frac{\sigma^2}{\sum_i (X_i - \bar{X})^2}, \qquad \hat{V}(b_1) = \frac{s^2}{\sum_i (X_i - \bar{X})^2}$$
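The variance formula can be verified by Monte Carlo: the empirical variance of the slope over many simulated datasets should match $\sigma^2 / \sum_i (X_i - \bar{X})^2$. The design points, noise level, true line, and replication count below are assumptions for the sketch:

```python
import numpy as np

# Monte Carlo check of V(b1) = sigma^2 / sum((X_i - Xbar)^2); the design
# points, sigma, true line, and replication count are assumptions.
rng = np.random.default_rng(4)
x = np.arange(1, 12, dtype=float)
sigma = 2.0
Sxx = np.sum((x - x.mean()) ** 2)
slopes = []
for _ in range(20000):
    y = 9.0 + 2.0 * x + rng.normal(0.0, sigma, size=x.size)
    slopes.append(np.sum((x - x.mean()) * y) / Sxx)
print(np.var(slopes), sigma ** 2 / Sxx)  # the two values should be close
```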
Digression: Gauss-Markov Theorem
• In a regression model where E(εi) = 0, the variance V(εi) = σ² < ∞, and εi and εj are uncorrelated for all i ≠ j, the least squares estimators b0 and b1 are unbiased and have minimum variance among all unbiased linear estimators.
– Remember
$$b_1 = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2}, \qquad b_0 = \bar{Y} - b_1 \bar{X}$$
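A small simulation illustrates the Gauss-Markov claim by comparing b1 with another linear unbiased estimator of the slope, the endpoint slope $(Y_n - Y_1)/(X_n - X_1)$: both average out to the true slope, but least squares varies less. All numeric choices here are assumptions for the sketch:

```python
import numpy as np

# Gauss-Markov illustration: compare least squares b1 with another linear
# unbiased slope estimator, the endpoint slope (Y_n - Y_1)/(X_n - X_1).
# Both should average to the true slope 2, but b1 should vary less.
rng = np.random.default_rng(5)
x = np.arange(1, 12, dtype=float)
Sxx = np.sum((x - x.mean()) ** 2)
ls, ep = [], []
for _ in range(20000):
    y = 9.0 + 2.0 * x + rng.normal(0.0, 2.0, size=x.size)
    ls.append(np.sum((x - x.mean()) * y) / Sxx)
    ep.append((y[-1] - y[0]) / (x[-1] - x[0]))
print(np.mean(ls), np.mean(ep))  # both near 2 (unbiased)
print(np.var(ls), np.var(ep))    # least squares variance is smaller
```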
Proof
• The theorem states that b1 has minimum variance among all unbiased linear estimators of the form
$$\hat{\beta}_1 = \sum_i c_i Y_i$$
$$\frac{SSE}{\sigma^2} = \frac{\sum_i (Y_i - \hat{Y}_i)^2}{\sigma^2} \sim \chi^2(n-2)$$
$$\hat{s}(b_1) = \sqrt{\hat{V}(b_1)}, \qquad s(b_1) = \sqrt{V(b_1)}$$
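The chi-square claim can be sanity-checked by simulation: since a $\chi^2(k)$ variable has mean $k$, the average of SSE/σ² over many replications should be near $n - 2$. All numeric choices below are assumptions:

```python
import numpy as np

# Simulation check that SSE / sigma^2 behaves like chi-square(n - 2):
# a chi-square(k) variable has mean k, so the average of SSE/sigma^2
# should be near n - 2 = 9 for the n = 11 design points used here.
rng = np.random.default_rng(6)
x = np.arange(1, 12, dtype=float)   # n = 11 design points
sigma = 2.0
vals = []
for _ in range(20000):
    y = 9.0 + 2.0 * x + rng.normal(0.0, sigma, size=x.size)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    sse = np.sum((y - (b0 + b1 * x)) ** 2)
    vals.append(sse / sigma ** 2)
print(np.mean(vals))  # should be close to n - 2 = 9
```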