
# RELAXING THE ASSUMPTIONS OF CLRM
## University in Tashkent

- Independent variables, the $X$'s, and the residuals of the regression are independent.
- Residuals of the regression are normally distributed.
- The nature of multicollinearity. Practical consequences. Detection. Remedial measures to alleviate the problem.
- Model specification. Over-determined models.

# Perfect multicollinearity

Perfect multicollinearity is the case when two or more independent variables form an exact linear relationship:

$$\lambda_1 X_1 + \lambda_2 X_2 + \lambda_3 X_3 + \cdots + \lambda_k X_k = 0,$$

where the $\lambda$'s are constants that are not all zero (for example, $X_1 = X_2 + X_3$).

# Imperfect (near) multicollinearity

Imperfect multicollinearity is the case when two or more independent variables form a less than perfect linear relationship:

$$\lambda_1 X_1 + \lambda_2 X_2 + \cdots + \lambda_k X_k + v_i = 0,$$

where $v_i$ is a stochastic error term.
# MULTIPLE REGRESSION MODEL

$$Y_i = \beta_1 + \beta_2 X_{2i} + \beta_3 X_{3i} + u_i$$

The OLS estimators minimize the residual sum of squares:

$$\min \sum \hat u_i^2 = \sum \left(Y_i - \hat\beta_1 - \hat\beta_2 X_{2i} - \hat\beta_3 X_{3i}\right)^2$$

$$\hat\beta_1 = \bar Y - \hat\beta_2 \bar X_2 - \hat\beta_3 \bar X_3$$
$$\hat\beta_2 = \frac{\left(\sum y_i x_{2i}\right)\left(\sum x_{3i}^2\right) - \left(\sum y_i x_{3i}\right)\left(\sum x_{2i} x_{3i}\right)}{\left(\sum x_{2i}^2\right)\left(\sum x_{3i}^2\right) - \left(\sum x_{2i} x_{3i}\right)^2}$$

$$\hat\beta_3 = \frac{\left(\sum y_i x_{3i}\right)\left(\sum x_{2i}^2\right) - \left(\sum y_i x_{2i}\right)\left(\sum x_{2i} x_{3i}\right)}{\left(\sum x_{2i}^2\right)\left(\sum x_{3i}^2\right) - \left(\sum x_{2i} x_{3i}\right)^2}$$

(Lowercase letters denote deviations from sample means, e.g. $y_i = Y_i - \bar Y$.)
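These closed-form slope formulas can be computed directly. A short Python sketch with fabricated data generated exactly as $Y = 1 + 2X_2 + 3X_3$ (no error term), so OLS should recover the coefficients:

```python
# OLS for Y = b1 + b2*X2 + b3*X3 + u via the deviation-form formulas above.
def ols_two_regressors(Y, X2, X3):
    n = len(Y)
    my, m2, m3 = sum(Y) / n, sum(X2) / n, sum(X3) / n
    y = [v - my for v in Y]
    x2 = [v - m2 for v in X2]
    x3 = [v - m3 for v in X3]
    Syx2 = sum(a * b for a, b in zip(y, x2))
    Syx3 = sum(a * b for a, b in zip(y, x3))
    S22 = sum(a * a for a in x2)
    S33 = sum(a * a for a in x3)
    S23 = sum(a * b for a, b in zip(x2, x3))
    den = S22 * S33 - S23 ** 2
    b2 = (Syx2 * S33 - Syx3 * S23) / den
    b3 = (Syx3 * S22 - Syx2 * S23) / den
    b1 = my - b2 * m2 - b3 * m3
    return b1, b2, b3

# Fabricated data: Y built exactly as 1 + 2*X2 + 3*X3, with no error term.
X2 = [1.0, 2.0, 3.0, 4.0, 5.0]
X3 = [2.0, 1.0, 4.0, 3.0, 6.0]
Y = [1.0 + 2.0 * a + 3.0 * b for a, b in zip(X2, X3)]
print(ols_two_regressors(Y, X2, X3))  # approximately (1.0, 2.0, 3.0)
```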

# Estimation under perfect collinearity

Suppose $X_{2i} = \lambda X_{3i}$, with $\lambda \neq 0$. Substituting into the formula for $\hat\beta_2$:

$$\hat\beta_2 = \frac{\left(\lambda \sum y_i x_{3i}\right)\left(\sum x_{3i}^2\right) - \left(\sum y_i x_{3i}\right)\left(\lambda \sum x_{3i}^2\right)}{\left(\lambda^2 \sum x_{3i}^2\right)\left(\sum x_{3i}^2\right) - \left(\lambda \sum x_{3i}^2\right)^2} = \frac{0}{0}$$

Both the numerator and the denominator collapse to zero, so $\hat\beta_2$ is indeterminate: under perfect collinearity the individual coefficients cannot be estimated.
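A quick numerical check of the indeterminacy (assumed $\lambda = 2$ and made-up data): both the numerator and the denominator of $\hat\beta_2$ evaluate to zero.

```python
# Numerator and denominator of beta2-hat under exact proportionality X2 = 2*X3.
def num_den_beta2(Y, X2, X3):
    n = len(Y)
    my, m2, m3 = sum(Y) / n, sum(X2) / n, sum(X3) / n
    y = [v - my for v in Y]
    x2 = [v - m2 for v in X2]
    x3 = [v - m3 for v in X3]
    Syx2 = sum(a * b for a, b in zip(y, x2))
    Syx3 = sum(a * b for a, b in zip(y, x3))
    S22 = sum(a * a for a in x2)
    S33 = sum(a * a for a in x3)
    S23 = sum(a * b for a, b in zip(x2, x3))
    return Syx2 * S33 - Syx3 * S23, S22 * S33 - S23 ** 2

X3 = [1.0, 2.0, 3.0, 4.0]
X2 = [2.0 * x for x in X3]   # assumed lambda = 2
Y = [5.0, 4.0, 7.0, 6.0]     # arbitrary made-up values
print(num_den_beta2(Y, X2, X3))  # (0.0, 0.0): the 0/0 indeterminate form
```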

# Variances and covariances of the estimators

$$\operatorname{var}(\hat\beta_1) = \left[\frac{1}{n} + \frac{\bar X_2^2 \sum x_{3i}^2 + \bar X_3^2 \sum x_{2i}^2 - 2 \bar X_2 \bar X_3 \sum x_{2i} x_{3i}}{\sum x_{2i}^2 \sum x_{3i}^2 - \left(\sum x_{2i} x_{3i}\right)^2}\right] \sigma^2$$

$$\operatorname{var}(\hat\beta_2) = \frac{\sigma^2}{\sum x_{2i}^2 \left(1 - r_{2,3}^2\right)} \qquad \operatorname{var}(\hat\beta_3) = \frac{\sigma^2}{\sum x_{3i}^2 \left(1 - r_{2,3}^2\right)}$$

$$\operatorname{cov}(\hat\beta_2, \hat\beta_3) = \frac{-r_{2,3}\,\sigma^2}{\left(1 - r_{2,3}^2\right) \sqrt{\sum x_{2i}^2} \sqrt{\sum x_{3i}^2}}$$

where $r_{2,3}$ is the correlation coefficient between $X_2$ and $X_3$.
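A short illustration of the blow-up, assuming $\sigma^2 = 1$ and a unit sum of squares purely for scale: $\operatorname{var}(\hat\beta_2)$ grows without bound as $r_{2,3}$ approaches 1.

```python
# Assumed sigma^2 = 1 and sum(x2i^2) = 1; only the (1 - r^2) factor varies.
def var_beta2(sigma2, sum_x2_sq, r23):
    # var(beta2-hat) = sigma^2 / (sum x2i^2 * (1 - r23^2))
    return sigma2 / (sum_x2_sq * (1.0 - r23 ** 2))

for r in (0.0, 0.5, 0.9, 0.99):
    print(r, var_beta2(1.0, 1.0, r))  # starts at 1.0, grows without bound
```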

- As the degree of collinearity approaches one, the variances of the coefficients approach infinity.
- The estimators and their variances are very sensitive to small changes in the data set.

# VARIANCE INFLATION FACTOR

$$\operatorname{var}(\hat\beta_2) = \frac{\sigma^2}{\sum x_{2i}^2}\cdot\frac{1}{1 - r_{2,3}^2} = \frac{\sigma^2}{\sum x_{2i}^2}\,\mathrm{VIF}$$

[Figure: VIF (0 to 120) plotted against the correlation between the regressors (0 to 1.2); VIF stays near 1 for low correlations and rises sharply as the correlation approaches 1.]

More generally, for the $j$-th regressor:

$$\operatorname{var}(\hat\beta_j) = \frac{\sigma^2}{\sum x_{ji}^2}\cdot\frac{1}{1 - R_j^2} = \frac{\sigma^2}{\sum x_{ji}^2}\,\mathrm{VIF}_j$$

where $R_j^2$ is the coefficient of determination from regressing $X_j$ on the remaining regressors.
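A minimal sketch of the VIF itself, with illustrative $R_j^2$ values rather than estimates from real data:

```python
# VIF_j = 1 / (1 - R_j^2); the R_j^2 inputs below are illustrative assumptions.
def vif(r2_aux):
    return 1.0 / (1.0 - r2_aux)

print(vif(0.0))   # 1.0: no collinearity
print(vif(0.9))   # ~10
print(vif(0.99))  # ~100
```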
# CONFIDENCE INTERVALS AND T-STATISTICS

$$\hat\beta_k \pm 1.96\,\operatorname{se}(\hat\beta_k), \qquad \operatorname{se}(\hat\beta_k) = \sqrt{\frac{\sigma^2}{\sum x_{ki}^2}\,\mathrm{VIF}_k}$$

$$t = \frac{\hat\beta_k - \beta_k^0}{\operatorname{se}(\hat\beta_k)} = \frac{\hat\beta_k - \beta_k^0}{\sqrt{\left(\sigma^2/\sum x_{ki}^2\right)\mathrm{VIF}_k}}$$

Because multicollinearity inflates the standard errors, the t-statistics are low and we cannot reject the null hypothesis

$$H_0: \beta_2 = \beta_3 = \cdots = \beta_k = 0$$

# Ha: Not all slope coefficients are simultaneously zero

$$F = \frac{ESS/(k-1)}{RSS/(n-k)} = \frac{n-k}{k-1}\cdot\frac{R^2}{1-R^2} = \frac{R^2/(k-1)}{(1-R^2)/(n-k)}$$

Due to the high $R^2$, the F-value will be very high and rejecting $H_0$ will be easy. Insignificant individual t-statistics together with a highly significant overall F is a classic symptom of multicollinearity.
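A worked check with assumed values ($n = 50$, $k = 4$, $R^2 = 0.9$, none from real data): even a moderate sample gives a very large F when $R^2$ is high.

```python
# Overall F statistic computed from R^2 alone; n, k, R^2 are assumptions.
def overall_f(r2, n, k):
    # F = (R^2 / (k - 1)) / ((1 - R^2) / (n - k))
    return (r2 / (k - 1)) / ((1 - r2) / (n - k))

print(overall_f(0.9, 50, 4))  # ~138: H0 is rejected at any usual level
```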

# Detection

- Eigenvalues and the condition index.
- Auxiliary regressions: regress each $X_i$ on the remaining regressors and test

$$F_i = \frac{R^2_{x_i \cdot x_2 x_3 \ldots x_k}/(k-2)}{\left(1 - R^2_{x_i \cdot x_2 x_3 \ldots x_k}\right)/(n-k+1)}$$

which follows the F distribution with $k-2$ and $n-k+1$ degrees of freedom.

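The auxiliary-regression test statistic can be sketched the same way (the $R_i^2$ values and sample size below are illustrative, not from real data):

```python
# Auxiliary-regression F test: F_i = (R_i^2/(k-2)) / ((1 - R_i^2)/(n-k+1)).
def aux_f(r2_i, n, k):
    return (r2_i / (k - 2)) / ((1 - r2_i) / (n - k + 1))

n, k = 40, 4
for r2 in (0.1, 0.5, 0.95):
    print(r2, aux_f(r2, n, k))  # F grows quickly with the auxiliary R^2
```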
## Rule of thumb: if the $R^2$ of an auxiliary regression is higher than the overall $R^2$, multicollinearity may be troublesome.

# Remedial measures

- Do nothing.
- Combine cross-sectional and time-series data.
- Transform the variables (differencing, ratio transformation).
- Obtain additional data or new observations.

# Reference

- Gujarati, D. (2003), *Basic Econometrics*, Ch. 10.