
MULTIPLE REGRESSION
Ekonometrika 1 (Dr. Ghozali Maski)
NOTATION & ASSUMPTIONS

Yi = 1 + 2 X2i + 3X3i + Ui

• Zero mean value of $u_i$
• No serial correlation
• Homoscedasticity
• Zero covariance between $u_i$ and $X_i$
• No specification bias
• No exact collinearity between the X variables

ESTIMATION

$$Y_i = \hat\beta_1 + \hat\beta_2 X_{2i} + \hat\beta_3 X_{3i} + \hat u_i$$

The OLS estimators minimize the residual sum of squares:

$$\min \sum \hat u_i^2 = \sum \left( Y_i - \hat\beta_1 - \hat\beta_2 X_{2i} - \hat\beta_3 X_{3i} \right)^2$$

In deviation form ($y_i = Y_i - \bar Y$, $x_{2i} = X_{2i} - \bar X_2$, $x_{3i} = X_{3i} - \bar X_3$), the slope estimators are

$$\hat\beta_2 = \frac{\left(\sum y_i x_{2i}\right)\left(\sum x_{3i}^2\right) - \left(\sum y_i x_{3i}\right)\left(\sum x_{2i} x_{3i}\right)}{\left(\sum x_{2i}^2\right)\left(\sum x_{3i}^2\right) - \left(\sum x_{2i} x_{3i}\right)^2}$$

$$\hat\beta_3 = \frac{\left(\sum y_i x_{3i}\right)\left(\sum x_{2i}^2\right) - \left(\sum y_i x_{2i}\right)\left(\sum x_{2i} x_{3i}\right)}{\left(\sum x_{2i}^2\right)\left(\sum x_{3i}^2\right) - \left(\sum x_{2i} x_{3i}\right)^2}$$
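The deviation-form slope formulas can be verified numerically against a direct least-squares fit; a minimal NumPy sketch, using made-up illustrative data:

```python
import numpy as np

# Hypothetical illustrative data for Y, X2, X3
Y  = np.array([10.0, 12.0, 15.0, 14.0, 18.0, 21.0])
X2 = np.array([ 2.0,  3.0,  4.0,  4.0,  6.0,  7.0])
X3 = np.array([ 1.0,  1.0,  2.0,  3.0,  3.0,  4.0])

# Deviation form: lower-case letters denote deviations from the mean
y  = Y  - Y.mean()
x2 = X2 - X2.mean()
x3 = X3 - X3.mean()

# Common denominator: sum(x2^2)*sum(x3^2) - (sum(x2*x3))^2
den = (x2 @ x2) * (x3 @ x3) - (x2 @ x3) ** 2

b2 = ((y @ x2) * (x3 @ x3) - (y @ x3) * (x2 @ x3)) / den
b3 = ((y @ x3) * (x2 @ x2) - (y @ x2) * (x2 @ x3)) / den
b1 = Y.mean() - b2 * X2.mean() - b3 * X3.mean()  # intercept from the means

# Cross-check against a direct least-squares fit on the levels
X = np.column_stack([np.ones_like(X2), X2, X3])
beta_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Both routes give the same coefficients, since the deviation-form formulas are just the solved normal equations.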
BETA COEFFICIENTS

• Occasionally you'll see reference to a "standardized coefficient" or "beta coefficient," which has a specific meaning.
• The idea is to replace y and each x variable with a standardized version, i.e. subtract the mean and divide by the standard deviation.
• The coefficient then reflects the standard-deviation change in y for a one-standard-deviation change in x.
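A sketch of this relationship with made-up data and a two-regressor model: the standardized (beta) coefficient equals the raw coefficient times sd(x)/sd(y):

```python
import numpy as np

# Hypothetical data
rng = np.random.default_rng(0)
x2 = rng.normal(size=100)
x3 = rng.normal(size=100)
y = 1.0 + 2.0 * x2 - 0.5 * x3 + rng.normal(size=100)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Raw regression in levels
X = np.column_stack([np.ones_like(x2), x2, x3])
b = ols(X, y)

# Standardize y and each x (subtract mean, divide by std) and re-run;
# after centering, no intercept is needed
z = lambda v: (v - v.mean()) / v.std()
Xs = np.column_stack([z(x2), z(x3)])
beta_std = ols(Xs, z(y))
```

The resulting `beta_std[j]` equals `b[j+1] * x_j.std() / y.std()`, so beta coefficients can be read off a raw fit without refitting.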
VARIANCE
AND STANDARD ERROR
$$\operatorname{var}(\hat\beta_1) = \left[ \frac{1}{n} + \frac{\bar X_2^2 \sum x_{3i}^2 + \bar X_3^2 \sum x_{2i}^2 - 2 \bar X_2 \bar X_3 \sum x_{2i} x_{3i}}{\sum x_{2i}^2 \sum x_{3i}^2 - \left( \sum x_{2i} x_{3i} \right)^2} \right] \sigma^2$$

$$\operatorname{var}(\hat\beta_2) = \frac{\sum x_{3i}^2}{\sum x_{2i}^2 \sum x_{3i}^2 - \left( \sum x_{2i} x_{3i} \right)^2} \, \sigma^2$$

$$\operatorname{var}(\hat\beta_3) = \frac{\sum x_{2i}^2}{\sum x_{2i}^2 \sum x_{3i}^2 - \left( \sum x_{2i} x_{3i} \right)^2} \, \sigma^2$$

$$\operatorname{cov}(\hat\beta_2, \hat\beta_3) = \frac{-r_{23} \, \sigma^2}{\left( 1 - r_{23}^2 \right) \sqrt{\sum x_{2i}^2} \sqrt{\sum x_{3i}^2}}$$

Here $r_{23}$ is the sample correlation between $X_2$ and $X_3$; the standard error of each estimator is the square root of its variance.
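These deviation-form variances agree with the diagonal of σ²(X′X)⁻¹; a numerical check with made-up data, substituting the usual unbiased estimate σ̂² for the unknown σ²:

```python
import numpy as np

# Hypothetical data
Y  = np.array([10.0, 12.0, 15.0, 14.0, 18.0, 21.0])
X2 = np.array([ 2.0,  3.0,  4.0,  4.0,  6.0,  7.0])
X3 = np.array([ 1.0,  1.0,  2.0,  3.0,  3.0,  4.0])
n, k = len(Y), 2  # k = number of slope coefficients

X = np.column_stack([np.ones(n), X2, X3])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ b
sigma2_hat = resid @ resid / (n - k - 1)  # unbiased estimate of sigma^2

# Textbook deviation-form formulas
x2, x3 = X2 - X2.mean(), X3 - X3.mean()
den = (x2 @ x2) * (x3 @ x3) - (x2 @ x3) ** 2
var_b2 = (x3 @ x3) / den * sigma2_hat
var_b3 = (x2 @ x2) / den * sigma2_hat

# Matrix form: sigma^2 * (X'X)^(-1); diagonal entries are the variances
cov = sigma2_hat * np.linalg.inv(X.T @ X)
```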
THE T TEST

Under the CLM assumptions,

$$\frac{\hat\beta_j - \beta_j}{\operatorname{se}(\hat\beta_j)} \sim t_{n-k-1}$$

Note this is a t distribution (vs. normal) because we have to estimate $\sigma^2$ by $\hat\sigma^2$.

Note the degrees of freedom: $n - k - 1$.
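A sketch of the t-ratio computation under these definitions (the data are made up; the 5% two-sided critical value 3.182 for 3 degrees of freedom comes from a standard t table):

```python
import numpy as np

# Hypothetical data (n = 6, k = 2 slopes, so df = n - k - 1 = 3)
Y  = np.array([10.0, 12.0, 15.0, 14.0, 18.0, 21.0])
X2 = np.array([ 2.0,  3.0,  4.0,  4.0,  6.0,  7.0])
X3 = np.array([ 1.0,  1.0,  2.0,  3.0,  3.0,  4.0])
n, k = len(Y), 2

X = np.column_stack([np.ones(n), X2, X3])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ b
sigma2_hat = resid @ resid / (n - k - 1)
se = np.sqrt(np.diag(sigma2_hat * np.linalg.inv(X.T @ X)))

t_stats = b / se          # t-ratio for H0: beta_j = 0
c = 3.182                 # two-sided 5% critical value, t with 3 df
reject = np.abs(t_stats) > c
```

With a statistics library one would instead look up the critical value (or p-value) for the exact degrees of freedom rather than hard-coding it.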
ONE-SIDED ALTERNATIVES

yi = b0 + b1xi1 + … + bkxik + ui

H0: bj = 0 H1: bj > 0

[Figure: t distribution with a "fail to reject" region of probability 1 − α to the left of the critical value c, and a one-tailed "reject" region of probability α to the right.]
TWO-SIDED ALTERNATIVES

yi = b0 + b1xi1 + … + bkxik + ui

H0: bj = 0    H1: bj ≠ 0

[Figure: t distribution with a "fail to reject" region of probability 1 − α between −c and c, and "reject" regions of probability α/2 in each tail.]
THE COEFFICIENT OF DETERMINATION
A MEASURE OF “GOODNESS OF FIT”

$$TSS = ESS + RSS$$

Verbally, R-squared measures the proportion (or percentage) of the total variation in Y explained by the regression model. Dividing through by TSS:

$$1 = \frac{ESS}{TSS} + \frac{RSS}{TSS}$$

$$R^2 = \frac{ESS}{TSS} = \frac{\sum \left( \hat Y_i - \bar Y \right)^2}{\sum \left( Y_i - \bar Y \right)^2}$$
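The decomposition above can be checked directly; a minimal NumPy sketch with made-up data:

```python
import numpy as np

# Hypothetical data
Y  = np.array([10.0, 12.0, 15.0, 14.0, 18.0, 21.0])
X2 = np.array([ 2.0,  3.0,  4.0,  4.0,  6.0,  7.0])
X3 = np.array([ 1.0,  1.0,  2.0,  3.0,  3.0,  4.0])

X = np.column_stack([np.ones(len(Y)), X2, X3])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ b

TSS = np.sum((Y - Y.mean()) ** 2)       # total sum of squares
ESS = np.sum((Y_hat - Y.mean()) ** 2)   # explained sum of squares
RSS = np.sum((Y - Y_hat) ** 2)          # residual sum of squares

R2 = ESS / TSS
```

With an intercept in the model, TSS = ESS + RSS holds exactly, so ESS/TSS and 1 − RSS/TSS give the same R².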
ADJUSTED R-SQUARED

• Recall that the R² will always increase as more variables are added to the model.
• The adjusted R² takes into account the number of variables in a model, and may decrease.

$$\bar R^2 = 1 - \frac{SSR/(n-k-1)}{SST/(n-1)} = 1 - \frac{\hat\sigma^2}{SST/(n-1)}$$
CONT..

• It's easy to see that the adjusted R² is just 1 − (1 − R²)(n − 1)/(n − k − 1), but most packages will give you both R² and adj-R².
• You can compare the fit of two models (with the same y) by comparing the adj-R².
• You cannot use the adj-R² to compare models with different y's (e.g. y vs. ln(y)).
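Both forms of the adjusted R² can be computed side by side; a sketch with made-up data:

```python
import numpy as np

# Hypothetical data
Y  = np.array([10.0, 12.0, 15.0, 14.0, 18.0, 21.0])
X2 = np.array([ 2.0,  3.0,  4.0,  4.0,  6.0,  7.0])
X3 = np.array([ 1.0,  1.0,  2.0,  3.0,  3.0,  4.0])
n, k = len(Y), 2  # k = number of slope coefficients

X = np.column_stack([np.ones(n), X2, X3])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ b

SSR = resid @ resid
SST = np.sum((Y - Y.mean()) ** 2)
R2 = 1 - SSR / SST

# Definition: penalize SSR and SST by their degrees of freedom
adj_R2 = 1 - (SSR / (n - k - 1)) / (SST / (n - 1))
# Equivalent algebraic form from the slide
adj_R2_alt = 1 - (1 - R2) * (n - 1) / (n - k - 1)
```

Since (n − 1)/(n − k − 1) ≥ 1, the adjusted R² never exceeds the ordinary R².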

THE F STATISTIC

Reject H0 at significance level α if F > c.

[Figure: F distribution f(F) with a "fail to reject" region of probability 1 − α below the critical value c and a "reject" region of probability α above it.]
THE END

