[Figure: scatterplot showing, for a point (x_i, y_i), the decomposition of the total deviation y_i − ȳ into the explained part ŷ_i − ȳ and the residual y_i − ŷ_i.]
$$\sum_{i=1}^{N} (Y_i - \bar{Y})^2 = \sum_{i=1}^{N} (\hat{Y}_i - \bar{Y})^2 + \sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2,$$

that is,

$$SSTO = SSR + SSE,$$
where SSTO is the corrected sum of squares of the
observations, SSR is the sum of squares regression, and SSE
is the sum of squares error.
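The decomposition above can be checked numerically. A minimal Python sketch on simulated data (the slides use R, and the SMSA data set is not reproduced here, so the numbers below are illustrative only):

```python
# Numerical check of the ANOVA decomposition SSTO = SSR + SSE
# for a least squares fit of a simple linear regression.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 - 0.5 * x + rng.normal(scale=1.0, size=50)

# Least squares estimates of the intercept and slope
b1 = np.sum(y * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
yhat = b0 + b1 * x

ssto = np.sum((y - y.mean()) ** 2)    # corrected total sum of squares
ssr = np.sum((yhat - y.mean()) ** 2)  # sum of squares regression
sse = np.sum((y - yhat) ** 2)         # sum of squares error

print(np.isclose(ssto, ssr + sse))  # True
```

The identity holds exactly for the least squares fit, so the check succeeds up to floating point rounding.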
If the observed F statistic exceeds the critical value F(1, N − 2; α), then we reject H0 : β1 = 0.
lm1=lm(Mortality~Education,data=smsa)
anova(lm1)
Response: Mortality
Df Sum Sq Mean Sq F value Pr(>F)
Education 1 59662 59662 20.508 3.008e-05 ***
Residuals 58 168737 2909
alm1=anova(lm1)
SSTO=sum(alm1$"Sum Sq")
print(SSTO)
[1] 228398.3
In class, we will show that the t-test and F-test are equivalent
for H0 : β1 = 0. However, the t-test is somewhat more
adaptable as it can be used for one-sided alternatives. We can
also easily calculate it for different hypothesized values in H0.
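The equivalence t² = F for the slope test can be illustrated numerically. A Python sketch on simulated data using the standard formulas (not the SMSA fit):

```python
# Sketch: for H0: beta1 = 0 in simple linear regression, the squared
# t statistic equals the F statistic from the one-predictor ANOVA.
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.3 * x + rng.normal(size=n)

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum(y * (x - x.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
mse = np.sum(resid ** 2) / (n - 2)  # unbiased estimate of sigma^2
se_b1 = np.sqrt(mse / sxx)          # standard error of the slope

t_obs = b1 / se_b1
F_obs = (b1 ** 2 * sxx) / mse       # SSR / MSE, since SSR = b1^2 * Sxx

print(np.isclose(t_obs ** 2, F_obs))  # True
```

The algebra behind the check: SSR = β̂1² Sxx, so F = SSR/MSE = β̂1² Sxx / MSE, which is exactly (β̂1 / se(β̂1))².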
One-sided t-test for the SMSA example:
H0 : β1 = 0 vs. HA : β1 < 0.
$$t_{obs} = \frac{\hat{\beta}_1}{\widehat{se}(\hat{\beta}_1)} = -4.529$$

The critical value is $t_{N-2,\,\alpha} = -1.627 > -4.529 = t_{obs}$, therefore we reject H0 in favor of HA.
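The lower-tail critical value can be obtained from the t distribution. A Python sketch, assuming α = 0.05 and using the residual degrees of freedom 58 (N = 60) from the anova output above:

```python
# Sketch: one-sided (lower-tail) critical value for the slope test.
# alpha = 0.05 is an assumption; the slides do not state it explicitly.
from scipy import stats

crit = stats.t.ppf(0.05, df=58)  # lower-tail critical value, df = N - 2
t_obs = -4.529                   # observed statistic from the slides
print(t_obs < crit)              # True: reject H0 when t_obs < crit
```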
Coefficient of Determination
$$R^2 = \frac{SSR}{SSTO} = 1 - \frac{SSE}{SSTO}$$
• Often referred to as the proportion of variation explained by
the predictor
• R2 does not
  – measure the magnitude of the slope
  – measure the appropriateness of the model
R2=alm1$"Sum Sq"[1]/SSTO
print(R2)
[1] 0.261217
R2 = 0.26
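In simple linear regression, R² also equals the squared sample correlation between x and y. A Python sketch on simulated data showing that all three computations agree:

```python
# Sketch: R^2 as SSR/SSTO, as 1 - SSE/SSTO, and as the squared
# sample correlation coefficient, all for the same fitted line.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(size=30)
y = 0.5 + 2.0 * x + rng.normal(scale=0.3, size=30)

b1 = np.sum(y * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
yhat = b0 + b1 * x

ssto = np.sum((y - y.mean()) ** 2)
ssr = np.sum((yhat - y.mean()) ** 2)
sse = np.sum((y - yhat) ** 2)

r2_a = ssr / ssto
r2_b = 1 - sse / ssto
r2_c = np.corrcoef(x, y)[0, 1] ** 2  # squared sample correlation

print(np.isclose(r2_a, r2_b), np.isclose(r2_a, r2_c))  # True True
```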
Prediction
$$\hat{y}_{new} = \hat{\beta}_0 + \hat{\beta}_1 x_{new} + \hat{\epsilon},$$

with

$$E[\hat{y}_{new}] = \beta_0 + \beta_1 x_{new},$$
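A minimal Python sketch of a point prediction at a new x value, on simulated data (hypothetical numbers, not the SMSA fit):

```python
# Sketch: predict E[y_new] at a new x using the fitted line.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, size=25)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=25)

b1 = np.sum(y * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

x_new = 2.5
y_pred = b0 + b1 * x_new  # estimate of E[y_new] = beta0 + beta1 * x_new
print(y_pred)
```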
[Figure: scatterplot of Mortality (roughly 800 to 1100) versus Education for the SMSA data.]
$$p(Y_i \mid \beta_0, \beta_1, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2\sigma^2} \left(Y_i - (\beta_0 + \beta_1 x_i)\right)^2 \right\}.$$

The likelihood is then equal to

$$L(\beta_0, \beta_1, \sigma^2) = \left( \frac{1}{\sqrt{2\pi\sigma^2}} \right)^N \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i=1}^{N} \left(Y_i - (\beta_0 + \beta_1 x_i)\right)^2 \right\}.$$
$$l \propto -\frac{N}{2} \log(\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{N} \left(Y_i - (\beta_0 + \beta_1 x_i)\right)^2.$$
The MLEs for the simple linear regression model are given by
$$\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{x},$$

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{N} Y_i (x_i - \bar{x})}{\sum_{i=1}^{N} (x_i - \bar{x})^2},$$
and
$$\hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} \left(Y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2.$$
The MLEs for β0 and β1 are the same as the least squares
estimators. However, the MLE for σ² is not. Recall that the
least squares estimate of σ² is unbiased. The MLE of σ² is
biased (although it is consistent).
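The two variance estimators differ only in the divisor: the MLE divides the SSE by N while the unbiased least squares estimator divides by N − 2. A Python sketch of that relationship on simulated data:

```python
# Sketch: the MLE of sigma^2 (divide SSE by N) versus the unbiased
# least squares estimator (divide SSE by N - 2); they differ by
# the factor (N - 2) / N, so the MLE is biased downward.
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

b1 = np.sum(y * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
sse = np.sum((y - b0 - b1 * x) ** 2)

sigma2_mle = sse / n       # biased, but consistent
sigma2_ls = sse / (n - 2)  # unbiased estimator from earlier slides

print(np.isclose(sigma2_mle, sigma2_ls * (n - 2) / n))  # True
```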
Example:
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon,$$
[Figure: E(y) plotted against x1 for different values of x2.]
Interactions