
EIA2014/EIE3003

Lecture 1: Summary of regression in matrix notation

1. The classical linear regression model.

Yi = β0 + β1 X1i + β2 X2i + … + βk Xki + ui

[ Y1 ]   [ 1  X11  X21  ⋯  Xk1 ] [ β0 ]   [ u1 ]
[ Y2 ]   [ 1  X12  X22  ⋯  Xk2 ] [ β1 ]   [ u2 ]
[ ⋮  ] = [ ⋮   ⋮    ⋮        ⋮ ] [ ⋮  ] + [ ⋮  ]
[ Yn ]   [ 1  X1n  X2n  ⋯  Xkn ] [ βk ]   [ un ]

Y = Xβ + u

1.1 Assumptions of the classical model.

E(u) = 0

var(u) = E(uu') = σ²In

The n × (k+1) matrix X has full column rank (k+1), i.e. no exact multicollinearity

The matrix X is fixed in repeated sampling

2. The ordinary least squares estimator.

Y = Xβ̂ + e

The OLS estimator minimizes the sum of squared residuals SSE = e'e:

SSE = e'e = (Y − Xβ̂)'(Y − Xβ̂)

min over β̂ : e'e

Differentiating with respect to β̂ and setting the derivative to zero yields the normal equations X'Xβ̂ = X'Y, whose solution is:

β̂ = (X'X)⁻¹X'Y
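A minimal numerical sketch of the formula above, using NumPy and made-up toy data (the numbers are illustrative only):

```python
import numpy as np

# Toy data: n = 5 observations, k = 2 regressors plus an intercept.
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 3.0],
              [1.0, 6.0, 2.0]])  # first column of ones for β0
Y = np.array([3.0, 5.0, 6.0, 8.0, 9.0])

# β̂ = (X'X)⁻¹X'Y: solve the normal equations X'Xβ̂ = X'Y directly,
# which is numerically safer than forming the inverse explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)
```

Solving the normal equations gives the same β̂ as the explicit inverse formula whenever X has full column rank (assumption 1.1).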

3. OLS predictor and residuals.

Predictor: Ŷ = Xβ̂ = X(X'X)⁻¹X'Y

Residuals: e = Y − Xβ̂ = Y − Ŷ
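A short sketch of the predictor and residuals (toy illustrative numbers); note that the normal equations force the residuals to be orthogonal to every column of X, i.e. X'e = 0:

```python
import numpy as np

# Toy data (illustrative numbers only).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 3.0],
              [1.0, 6.0, 2.0]])
Y = np.array([3.0, 5.0, 6.0, 8.0, 9.0])

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)  # β̂ = (X'X)⁻¹X'Y
Y_hat = X @ beta_hat                          # predictor Ŷ = Xβ̂
e = Y - Y_hat                                 # residuals e = Y − Xβ̂

# The normal equations X'Xβ̂ = X'Y imply X'e = 0 (orthogonality):
print(X.T @ e)
```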

4. Properties of the OLS estimator.

The OLS estimator is unbiased: E(β̂) = β

The variance of the OLS estimator: var(β̂) = σ²(X'X)⁻¹

An unbiased estimator of σ²: σ̂² = e'e/(n−k−1) = SSE/(n−k−1)
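A sketch of the estimator σ̂² and the estimated covariance matrix σ̂²(X'X)⁻¹, again on made-up toy data:

```python
import numpy as np

# Toy data (illustrative numbers only).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 3.0],
              [1.0, 6.0, 2.0]])
Y = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
n, cols = X.shape
k = cols - 1  # number of regressors, excluding the intercept

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
e = Y - X @ beta_hat

sigma2_hat = (e @ e) / (n - k - 1)                   # σ̂² = e'e/(n−k−1)
var_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)   # σ̂²(X'X)⁻¹
print(sigma2_hat)
print(np.diag(var_beta_hat))  # the var(β̂i) sit on the diagonal
```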

5. The coefficient of determination, R².

SST = Y'Y − nȲ²

SSR = β̂'X'Y − nȲ²

SSE = Y'Y − β̂'X'Y

SST = SSR + SSE

R² = SSR/SST = (β̂'X'Y − nȲ²)/(Y'Y − nȲ²)
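The decomposition and R² can be verified numerically on toy data (illustrative numbers only):

```python
import numpy as np

# Toy data (illustrative numbers only).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 3.0],
              [1.0, 6.0, 2.0]])
Y = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
n = len(Y)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
Ybar = Y.mean()

SST = Y @ Y - n * Ybar**2                  # SST = Y'Y − nȲ²
SSR = beta_hat @ X.T @ Y - n * Ybar**2     # SSR = β̂'X'Y − nȲ²
SSE = Y @ Y - beta_hat @ X.T @ Y           # SSE = Y'Y − β̂'X'Y

R2 = SSR / SST
print(SST, SSR + SSE)  # the two should agree: SST = SSR + SSE
print(R2)
```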

6. Testing the overall significance of regression.

Under H0: β1 = β2 = ⋯ = βk = 0, the test statistic

F = (SSR/k)/(SSE/(n−k−1)) = [(β̂'X'Y − nȲ²)/k]/[(Y'Y − β̂'X'Y)/(n−k−1)]

follows the F distribution with (k, n−k−1) degrees of freedom.
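Computing the F statistic on toy data (illustrative numbers only); the resulting value would be compared against the F(k, n−k−1) critical value:

```python
import numpy as np

# Toy data (illustrative numbers only).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 3.0],
              [1.0, 6.0, 2.0]])
Y = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
n, cols = X.shape
k = cols - 1

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
Ybar = Y.mean()
SSR = beta_hat @ X.T @ Y - n * Ybar**2   # explained sum of squares
SSE = Y @ Y - beta_hat @ X.T @ Y         # residual sum of squares

F = (SSR / k) / (SSE / (n - k - 1))      # F = (SSR/k)/(SSE/(n−k−1))
print(F)
```

An equivalent form, F = (R²/k)/((1 − R²)/(n−k−1)), follows by dividing numerator and denominator by SST.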

7. Hypothesis testing about individual regression coefficients.

t = (β̂i − βi)/se(β̂i)

se(β̂i) = √var(β̂i)

The var(β̂i) are given by the diagonal elements of the estimated covariance matrix:

var(β̂) = σ̂²(X'X)⁻¹ = [SSE/(n−k−1)](X'X)⁻¹
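Standard errors and t statistics for the common null H0: βi = 0, sketched on toy data (illustrative numbers only):

```python
import numpy as np

# Toy data (illustrative numbers only).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 3.0],
              [1.0, 6.0, 2.0]])
Y = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
n, cols = X.shape
k = cols - 1

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
e = Y - X @ beta_hat
sigma2_hat = (e @ e) / (n - k - 1)            # σ̂² = SSE/(n−k−1)

var_beta = sigma2_hat * np.linalg.inv(X.T @ X)  # σ̂²(X'X)⁻¹
se = np.sqrt(np.diag(var_beta))               # se(β̂i) = √var(β̂i)
t = beta_hat / se                             # t statistics under H0: βi = 0
print(se)
print(t)
```

Each t would be compared against the t distribution with n−k−1 degrees of freedom.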
