
Week 4
Confidence Interval & Hypothesis Testing
Gujarati (2003): Chapter 5
All rights reserved by Dr. Bill Wan Sing Hung, HKBU


4.2 How reliable is this OLS estimation?
(Computing Tutorial #2: Application of Phillips Curve Theory for the Case of Hong Kong)



4.3 Properties of OLS estimators – two-variable case
The OLS estimators are β̂1, β̂2 and σ̂².

1. Unbiasedness:  E(β̂1) = β1,  E(β̂2) = β2

2. Minimum variance (efficiency):
   [Se(β̂1)]² = σ²_β̂1 = (ΣXᵢ² / (n·Σxᵢ²)) · σ²
   [Se(β̂2)]² = σ²_β̂2 = σ² / Σxᵢ²
   (where xᵢ = Xᵢ − X̄ are deviations from the mean)

3. Consistency: as n gets larger, the estimator becomes more accurate.
4.4 Properties of OLS estimators (continued)

4 & 5. β̂1 and β̂2 are normally distributed:

6.  β̂1 ~ N(β1, σ²_β̂1)
    β̂2 ~ N(β2, σ²_β̂2)

    (n − 2)·σ̂² / σ²  ~  χ²(n − 2)
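As a quick numerical illustration of these formulas, here is a minimal Python sketch (assuming only numpy is available; the helper name ols_two_variable and the toy numbers are illustrative, not from the notes):

```python
import numpy as np

def ols_two_variable(X, Y):
    """OLS for Y = b1 + b2*X + u, using the slide formulas:
    Var(b1_hat) = sigma^2 * sum(X^2) / (n * sum(x^2)),
    Var(b2_hat) = sigma^2 / sum(x^2), with x = X - mean(X)
    and sigma^2 estimated by RSS / (n - 2)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = len(X)
    x = X - X.mean()
    b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()   # slope estimate
    b1 = Y.mean() - b2 * X.mean()                      # intercept estimate
    rss = ((Y - b1 - b2 * X) ** 2).sum()               # residual sum of squares
    s2 = rss / (n - 2)                                 # unbiased estimate of sigma^2
    se_b1 = np.sqrt(s2 * (X ** 2).sum() / (n * (x ** 2).sum()))
    se_b2 = np.sqrt(s2 / (x ** 2).sum())
    return b1, b2, se_b1, se_b2

# Toy usage with made-up numbers (illustrative only):
print(ols_two_variable([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1]))
```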



4.5 Hypothesis Testing and Confidence Interval
How reliable is the OLS estimation?
How “close” is β̂1 to β1?
How “close” is β̂2 to β2?

[Figure: density f(β̂2) around the true β2; the estimated β̂2 falls somewhere in the interval (β̂2 − δ, β̂2 + δ), the random interval (confidence interval).]
4.6 Hypothesis Testing and Confidence Interval

β̂2 − δ is called the lower confidence bound.
β̂2 + δ is called the upper confidence bound.
The interval between (β̂2 − δ) and (β̂2 + δ) is called the random interval (confidence interval).

Pr(β̂2 − δ < β2 < β̂2 + δ) = 1 − α

where (1 − α) is the confidence coefficient (0 < α < 1), e.g. 0.90, 0.95, 0.99;
α is also called the level of significance, e.g. 0.10, 0.05, 0.01.
4.7 Constructing Confidence Interval for βi

By assumptions:
uᵢ ~ N(0, σ²ᵤ),  E(u) = 0,  Var(u) = σ²ᵤ

β̂1 ~ N(β1, σ²_β̂1),  where σ²_β̂1 = σ²·ΣXᵢ² / (n·Σxᵢ²)
β̂2 ~ N(β2, σ²_β̂2),  where σ²_β̂2 = σ² / Σxᵢ²



4.8 Constructing Confidence Interval for βi (cont.)

[Figure: sampling distribution f(β̂2) centred at E(β̂2) = β2; the actual estimate β̂2 could fall anywhere in this distribution, including the tail regions.]
4.9 Constructing Confidence Interval for βi (cont.)

Transform into the standard normal distribution:

Z = (β̂2 − β2) / Se(β̂2)

[Figure: standard normal density f(Z) centred at 0.]
4.10 Constructing Confidence Interval for βi (cont.)

Use the normal distribution to make probabilistic statements about β2, provided the true σ² is known:

Z = (β̂2 − β2) / Se(β̂2) = (β̂2 − β2)·√(Σxᵢ²) / σ  ~  N(0, 1)

In practice, σ is unobserved.



4.11 Constructing Confidence Interval for βi (cont.)

For example:

[Figure: standard normal density with a 95% acceptance region between −1.96 and 1.96, and 2.5% in each tail.]



4.12 Constructing Confidence Interval for βi (cont.)

Pr(−1.96 < Z < 1.96) = 0.95
⇒ Pr(−1.96 < (β̂2 − β2)/Se(β̂2) < 1.96) = 0.95

95% confidence interval:
−1.96 < (β̂2 − β2)/Se(β̂2) < 1.96



4.13 Constructing Confidence Interval for βi (cont.)

⇒ β̂2 − 1.96·Se(β̂2) < β2 < β̂2 + 1.96·Se(β̂2)
⇒ β̂2 ± 1.96·Se(β̂2)

In practice, σ² is unknown, so we have to use the unbiased estimator

σ̂² = Σûᵢ² / (n − 2) = RSS / (n − 2)

Instead of the standard normal distribution, the t-distribution is used.
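A small sketch of why this matters, assuming Python with scipy is available: once σ² is estimated, the critical value comes from the t-distribution with n − 2 degrees of freedom rather than the standard normal, so the interval is slightly wider.

```python
from scipy import stats

n, alpha = 10, 0.05          # sample size and significance level (df = n - 2 = 8)

z_crit = stats.norm.ppf(1 - alpha / 2)         # 1.96: used when sigma^2 is known
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)  # about 2.306: used when sigma^2 is estimated

print(round(z_crit, 3), round(t_crit, 3))      # the t-based interval is wider
```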
4.14 Constructing Confidence Interval for βi (cont.)

t = (β̂2 − β2) / Se(β̂2)    (β2 here is the true value, or some specific value we want to compare against)

t = (estimated − true parameter) / (standard error of estimator)

t = (β̂2 − β2)·√(Σx²) / σ̂    where σ̂ = SEE

Use the t-distribution to construct a confidence interval for β2.
4.15 Constructing Confidence Interval for βi (cont.)

t = (β̂2 − β2) / Se(β̂2)    (β2 a specified value)

where Se(β̂2) = σ̂ / √(Σx²)

t = (estimator − true parameter) / (standard error of estimator)

t = (β̂2 − β2)·√(Σx²) / σ̂

4.16 Constructing Confidence Interval for βi (cont.)

Use the critical value tc to construct a confidence interval for β2 as:

Pr(−tc(α/2, n−2) ≤ t* ≤ tc(α/2, n−2)) = 1 − α

where tc(α/2, n−2) is the critical t value at the α/2 (two-tailed) level of significance, α is the level of significance, and (n − 2) is the degrees of freedom (in the 2-variable case).
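A minimal sketch of this interval as a reusable function (Python with scipy assumed; the name t_confidence_interval and the numbers in the usage line are placeholders, not from the notes):

```python
from scipy import stats

def t_confidence_interval(b_hat, se_b, n, alpha=0.05, k=2):
    """(1 - alpha) confidence interval for a coefficient.
    Degrees of freedom are n - k (k = 2 in the two-variable case)."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - k)   # critical value tc(alpha/2, n-k)
    return b_hat - t_crit * se_b, b_hat + t_crit * se_b

# Placeholder usage: a coefficient of 1.0 with standard error 0.25 from n = 20 observations
print(t_confidence_interval(1.0, 0.25, 20))
```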
4.17 Constructing Confidence Interval for βi (cont.)

Therefore:

Pr(−tc(0.05, n−2) ≤ (β̂2 − β2)/Se(β̂2) ≤ tc(0.05, n−2)) = 0.90
Pr(−tc(0.025, n−2) ≤ (β̂2 − β2)/Se(β̂2) ≤ tc(0.025, n−2)) = 0.95

Rearranging:

Pr(β̂2 − tc(0.05, n−2)·Se(β̂2) ≤ β2 ≤ β̂2 + tc(0.05, n−2)·Se(β̂2)) = 0.90
4.18 Then the 90% confidence interval for β2 is:

β̂2 ± tc(0.05, n−2)·Se(β̂2)

where β̂2 and Se(β̂2) come from the estimated result, and tc(0.05, n−2) comes from the t-table.

The 95% confidence interval for β2 becomes:

β̂2 ± tc(0.025, n−2)·Se(β̂2)



4.19 The t-statistic in computer (EViews) output
Example: Gujarati (2003), p. 123

H0: β2 = 0
H1: β2 ≠ 0

t = (0.5091 − 0) / 0.0357

[EViews output: the reported t-statistic is the coefficient divided by Se(β̂2); SEE = σ̂; RSS is the residual sum of squares.]
4.20 Example: Gujarati (2003), p. 123

Given β̂2 = 0.5091, n = 10, Se(β̂2) = 0.0357, the 95% confidence interval is:

β̂2 ± tc(α/2, n−2)·Se(β̂2)
⇒ 0.5091 ± tc(0.025, 8)·(0.0357)
⇒ 0.5091 ± 0.0823
⇒ (0.4268, 0.5914)



4.21 The 90% confidence interval is:

0.5091 ± tc(0.05, 8)·(0.0357)
⇒ 0.5091 ± 1.860·(0.0357)
⇒ 0.5091 ± 0.0664
⇒ (0.4427, 0.5755)



4.22 Test-of-Significance Approach: one-tailed t-test decision rule

Step 1: State the hypothesis:
        H0: β2 ≤ β2*   (or H0: β2 ≥ β2*)
        H1: β2 > β2*   (or H1: β2 < β2*)

Step 2: Compute the test value:
        t* = (β̂2 − β2*) / Se(β̂2)

Step 3: Check the t-table for the critical value tc(α, n−2).

Step 4: Compare tc and t*.
4.23 One-tailed t-test decision rule

Step 5: Decision rule.

Right-tail test:
  If t > tc  ⇒ reject H0
  If t < tc  ⇒ do not reject H0

Left-tail test:
  If t < −tc ⇒ reject H0
  If t > −tc ⇒ do not reject H0

[Figure: right-tail rejection region beyond tc; left-tail rejection region below −tc.]
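The five steps can be collected into a small helper (a sketch assuming Python with scipy; the function name and arguments are my own illustration):

```python
from scipy import stats

def one_tailed_t_test(b_hat, b_star, se_b, n, alpha=0.05, tail="right", k=2):
    """Right-tail: H0: beta <= b_star vs H1: beta > b_star (reject if t > tc).
    Left-tail:  H0: beta >= b_star vs H1: beta < b_star (reject if t < -tc)."""
    t_star = (b_hat - b_star) / se_b            # Step 2: computed t value
    t_crit = stats.t.ppf(1 - alpha, df=n - k)   # Step 3: critical value tc(alpha, n-k)
    if tail == "right":
        reject = t_star > t_crit                # Step 5: right-tail decision rule
    else:
        reject = t_star < -t_crit               # Step 5: left-tail decision rule
    return t_star, t_crit, reject

# Usage with the worked example further below (H0: beta2 <= 0.3 vs H1: beta2 > 0.3):
print(one_tailed_t_test(0.5091, 0.3, 0.0357, 10))
```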
4.24 Two-tailed t-test

1. State the hypothesis:
   H0: β2 = β2*
   H1: β2 ≠ β2*

2. Compute t = (β̂2 − β2*) / Se(β̂2)

3. Check the t-table for the critical t value: tc(α/2, n−2)



4.25 Two-tailed t-test (cont.)

4. Compare t and tc.

5. Decision rule: if t > tc or t < −tc, i.e. |t| > tc, then reject H0.

[Figure: acceptance region between β̂2 − tc(α/2, n−2)·Se(β̂2) and β̂2 + tc(α/2, n−2)·Se(β̂2); reject H0 in either tail region.]
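The same idea for the two-tailed case, as a sketch (Python with scipy assumed; the function name and the usage numbers are placeholders):

```python
from scipy import stats

def two_tailed_t_test(b_hat, b_star, se_b, n, alpha=0.05, k=2):
    """H0: beta = b_star vs H1: beta != b_star; reject H0 when |t| > tc(alpha/2, n-k)."""
    t_stat = (b_hat - b_star) / se_b
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - k)
    return t_stat, t_crit, abs(t_stat) > t_crit

# Placeholder usage: testing H0: beta = 0 for a coefficient of 1.0 with se 0.25, n = 20
print(two_tailed_t_test(1.0, 0.0, 0.25, 20))
```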



4.26 One-tailed t-test

We could also postulate that:
H0: β2 ≤ 0.3
H1: β2 > 0.3

1. Compute:
   t = (β̂2 − β2*) / Se(β̂2)
   t = (0.5091 − 0.3) / 0.0357 = 0.2091 / 0.0357 = 5.857
4.27 One-tailed t-test (cont.)

2. Check the t-table for tc(0.05, 8), where tc(0.05, 8) = 1.860 (α = 0.05).

3. Compare t and the critical t:
   t = 5.857 > tc(0.05, 8) = 1.860
   ∴ reject H0
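These steps can be checked numerically (a sketch assuming Python with scipy):

```python
from scipy import stats

b2_hat, b2_star, se_b2, n = 0.5091, 0.3, 0.0357, 10

t_stat = (b2_hat - b2_star) / se_b2        # (0.5091 - 0.3) / 0.0357 = 5.857
t_crit = stats.t.ppf(1 - 0.05, df=n - 2)   # about 1.860 = tc(0.05, 8)

print(round(t_stat, 3), round(t_crit, 3), t_stat > t_crit)   # 5.857 > 1.860 -> reject H0
```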



4.28 One-tailed t-test (cont.)

Decision rule for a left-tail test:
H0: β2 ≥ β2*
H1: β2 < β2*

If t < −tc(α, df)  ⇒  reject H0

[Figure: left-tail test — reject H0 when β̂2 falls below β2* − tc·Se(β̂2).]
4.29 Two-tailed t-test

Suppose we postulate that:
H0: β2 = 0.3
H1: β2 ≠ 0.3

Is the observed β̂2 compatible with this hypothesized β2?

(1) Confidence-interval approach:
The 95% confidence interval is (0.4268, 0.5914), which does not contain the hypothesized value 0.3.
So the estimated β2 is not equal to 0.3: reject H0.



4.30 (2) Test-of-significance approach:

Compare the t-value with the critical t-value:

t = (β̂2 − β2*) / Se(β̂2) = (0.5091 − 0.3) / 0.0357 = 0.2091 / 0.0357 = 5.857

tc(0.025, 8) = 2.306

t = 5.857 > tc(0.025, 8) = 2.306  ⇒  reject H0

This means the estimated β2 is not equal to 0.3.
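Both approaches can be verified with a few lines (a sketch assuming Python with scipy):

```python
from scipy import stats

b2_hat, b2_star, se_b2, n = 0.5091, 0.3, 0.0357, 10
t_crit = stats.t.ppf(0.975, df=n - 2)                    # about 2.306 = tc(0.025, 8)

# (1) Confidence-interval approach: is 0.3 inside the 95% interval?
ci = (b2_hat - t_crit * se_b2, b2_hat + t_crit * se_b2)  # about (0.4268, 0.5914)
print(ci, ci[0] <= b2_star <= ci[1])                     # False: 0.3 lies outside -> reject H0

# (2) Test-of-significance approach: compare |t| with the critical value
t_stat = (b2_hat - b2_star) / se_b2                      # about 5.857
print(abs(t_stat) > t_crit)                              # True -> reject H0
```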



4.31 “Accepting” or “Rejecting”

“Accepting” the null hypothesis:
All we are saying is that, on the basis of the sample evidence, we have no reason to reject it; we are not saying that the null hypothesis is true beyond any doubt.

Therefore, in “accepting” an H0, we should always be aware that another null hypothesis may be equally compatible with the data.

So the conclusion of a statistical test is “do not reject” rather than “accept”.