
Ekonometrika

TM 9
Purwanto Widodo
THE GAUSS–MARKOV THEOREM
(BLUE: Best Linear Unbiased Estimator)
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
2. It is unbiased, that is, its average or expected value, E(β̂2), is equal to the true value, β2.
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
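The unbiasedness property can be checked by simulation. Below is a minimal Python sketch (the model, parameter values, and variable names are illustrative, not from the lecture's dataset): the average of many OLS slope estimates sits very close to the true slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 2000
x = rng.uniform(0, 10, n)              # regressor held fixed across replications
slopes = []
for _ in range(reps):
    u = rng.normal(0, 2, n)            # classical error: zero mean, constant variance
    y = 2 + 3 * x + u                  # true model: intercept 2, slope 3
    # OLS slope for the simple regression: cov(x, y) / var(x)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    slopes.append(b)

# The average of the 2000 slope estimates is very close to the true value 3
print(np.mean(slopes))
```

Each individual estimate varies around 3, but the mean across replications converges to it, which is exactly what E(β̂) = β says.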
THE ASSUMPTIONS UNDERLYING THE METHOD OF LEAST SQUARES
Relaxing the Assumptions of the Classical Model
Assumption 1. The regression model is linear in the
parameters.
Yi = β0 + β1X1i + β2X2i + … + βkXki + εi

A nonlinear form of an independent variable does not violate this assumption: transform the variable (for example, take its logarithm), and the regression equation is again linear in the parameters.

The same holds for the reciprocal form: transform X into 1/X and re-estimate.
Assumption 2: Fixed versus Stochastic
Regressors
Relaxing the assumption that X is nonstochastic and replacing it by the
assumption that X is stochastic but independent of [u] does not change the
desirable properties and feasibility of least squares estimation.
Assumption 3: Zero Mean Value of u
Assumption 10: The stochastic (disturbance) term u is normally distributed.
. regress y x1 x2 x3 x4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 8.99
Model | 96.277981 4 24.0694953 Prob > F = 0.0001
Residual | 69.6107522 26 2.67733662 R-squared = 0.5804
-------------+---------------------------------- Adj R-squared = 0.5158
Total | 165.888733 30 5.52962444 Root MSE = 1.6363

------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 2.701012 .6957689 3.88 0.001 1.270839 4.131186
x2 | 3.059606 .9373141 3.26 0.003 1.132929 4.986282
x3 | -.0160601 .0081788 -1.96 0.060 -.032872 .0007517
x4 | -.0227016 .2723061 -0.08 0.934 -.5824348 .5370317
_cons | -9.854598 8.895195 -1.11 0.278 -28.13893 8.429736
------------------------------------------------------------------------------
. predict uhat, residuals

. sktest uhat

Skewness/Kurtosis tests for Normality


------ joint ------
Variable | Obs Pr(Skewness) Pr(Kurtosis) adj chi2(2) Prob>chi2
-------------+---------------------------------------------------------------
uhat | 31 0.3045 0.1908 3.02 0.2209
. swilk uhat

Shapiro-Wilk W test for normal data

Variable | Obs W V z Prob>z
-------------+------------------------------------------------------
uhat | 31 0.97204 0.911 -0.194 0.57690
. sfrancia uhat

Shapiro-Francia W' test for normal data

Variable | Obs W' V' z Prob>z
-------------+-----------------------------------------------------
uhat | 31 0.96421 1.294 0.472 0.31830
. sktest uhat, noadjust

Skewness/Kurtosis tests for Normality


------ joint ------
Variable | Obs Pr(Skewness) Pr(Kurtosis) chi2(2) Prob>chi2
-------------+---------------------------------------------------------------
uhat | 31 0.3045 0.1908 2.77 0.2509
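The same residual-normality checks can be sketched in Python with scipy's Shapiro–Wilk and joint skewness/kurtosis tests (the data below are a simulated stand-in, not the course's 31 residuals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
uhat = rng.normal(0, 1.6, 31)            # stand-in for the 31 residuals

w_stat, p_shapiro = stats.shapiro(uhat)  # analogue of . swilk uhat
k2, p_joint = stats.normaltest(uhat)     # joint skewness/kurtosis test, like sktest

# For contrast: a clearly non-normal sample is firmly rejected
skewed = rng.exponential(1.0, 500)
_, p_skewed = stats.normaltest(skewed)

print(p_shapiro, p_joint, p_skewed)
```

As in the Stata output above, a large p-value means we do not reject the null of normally distributed disturbances.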
Heteroscedasticity:
What Happens If the Error Variance Is
Nonconstant?
Assumption of OLS (Ordinary Least Squares):
Var(ui) = σ² (constant)

The Nature of Heteroscedasticity
Var(ui) = σ²  for i = 1, 2, 3, …, n  →  homoscedasticity
Var(ui) = σi² for i = 1, 2, 3, …, n  →  heteroscedasticity
When does heteroscedasticity occur?
• It is common in cross-section data, e.g. the relationship between income and expenditure across households or firms.
• It is usually absent in time-series data, except in high-frequency series.
• If Var(ui) = σi², the OLS coefficient estimators remain unbiased and consistent, but they are no longer efficient, and the usual standard errors are biased.
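The two cases can be visualized with a short simulation, a sketch under assumed parameter values (the names and the variance function Var(ui) = (0.5·xi)² are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1, 10, n)

u_homo = rng.normal(0, 2.0, n)       # Var(u_i) = 4 for every observation
u_hetero = rng.normal(0, 0.5 * x)    # Var(u_i) = (0.5 x_i)^2 grows with x

# Compare residual spread in the lower vs upper half of the x range
lo, hi = x < x.mean(), x >= x.mean()
r_homo = u_homo[lo].std() / u_homo[hi].std()        # near 1
r_hetero = u_hetero[lo].std() / u_hetero[hi].std()  # well below 1
print(r_homo, r_hetero)
```

Under homoscedasticity the spread is the same everywhere; under heteroscedasticity it rises with the regressor, which is what the cross-section examples above describe.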
Detecting heteroscedasticity
Initial model:
Yi = β0 + β1X1i + … + βkXki + ui

Park test:
Regress the log of the squared residuals on the logs of the regressors:
ln(ûi²) = α0 + α1 ln X1i + … + αk ln Xki + vi

H0: no heteroscedasticity
H1: heteroscedasticity is present
Decision rule: reject H0 if the slope coefficients of the auxiliary regression are statistically significant.
. generate uhat2 = uhat^2
. generate luhat2 = ln(uhat2)
. generate lx1 = ln(x1)
. generate lx2 = ln(x2)
. generate lx3 = ln(x3)
. generate lx4 = ln(x4)

. regress luhat2 lx1 lx2 lx3 lx4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 1.23
Model | 20.0852383 4 5.02130957 Prob > F = 0.3231
Residual | 106.274164 26 4.08746786 R-squared = 0.1590
-------------+---------------------------------- Adj R-squared = 0.0296
Total | 126.359403 30 4.21198009 Root MSE = 2.0217

------------------------------------------------------------------------------
luhat2 | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
lx1 | 2.696335 3.480041 0.77 0.445 -4.456991 9.849661
lx2 | 6.185105 9.267806 0.67 0.510 -12.86514 25.23535
lx3 | 4.329269 3.893054 1.11 0.276 -3.673018 12.33156
lx4 | -2.771412 1.439765 -1.92 0.065 -5.730892 .1880677
_cons | -39.01842 19.49114 -2.00 0.056 -79.08302 1.046189
------------------------------------------------------------------------------
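The Park regression above can be mimicked in Python with plain numpy on simulated data (all names are illustrative, and the error variance is made to grow with x on purpose so the test has something to find):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(1, 10, n)
y = 1 + 2 * x + rng.normal(0, 0.5 * x)   # heteroscedastic by construction

# First stage: OLS residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
uhat = y - X @ beta

# Park auxiliary regression: ln(uhat^2) = a0 + a1*ln(x) + v
Z = np.column_stack([np.ones(n), np.log(x)])
alpha, *_ = np.linalg.lstsq(Z, np.log(uhat**2), rcond=None)
print(alpha[1])   # positive and sizable when the variance rises with x
```

A significantly positive slope in the auxiliary regression is the Park test's signal of heteroscedasticity; in the Stata output above no slope is significant at 5%, so H0 is not rejected there.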
2. Glejser test
Regress the absolute residuals on the regressors:
|ûi| = β0 + β1X1i + … + βkXki + vi

H0: no heteroscedasticity
H1: heteroscedasticity is present
Decision rule: reject H0 if the slope coefficients are statistically significant,
or:
find R² of the auxiliary regression,
compute nR²,
reject H0 if nR² > χ²(k)
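The nR² form of the decision rule can be sketched in numpy/scipy on simulated data (illustrative names; the variance is made to grow with x so the test rejects):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(1, 10, n)
y = 1 + 2 * x + rng.normal(0, 0.5 * x)   # heteroscedastic by construction

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
absu = np.abs(y - X @ beta)

# Glejser auxiliary regression of |uhat| on the regressor
gamma, *_ = np.linalg.lstsq(X, absu, rcond=None)
resid = absu - X @ gamma
r2 = 1 - (resid**2).sum() / ((absu - absu.mean())**2).sum()

lm = n * r2                          # nR^2 ~ chi2(k) under H0
crit = stats.chi2.ppf(0.95, df=1)    # k = 1 slope regressor here
print(lm, crit)                      # lm far above the critical value: reject H0
```

With k regressors in the auxiliary regression, df = k; here k = 1 because the sketch uses a single x.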
Glejser test 1

. generate absuhat = abs(uhat)

. regress absuhat x1 x2 x3 x4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 1.01
Model | 3.64696511 4 .911741277 Prob > F = 0.4226
Residual | 23.5748975 26 .906726828 R-squared = 0.1340
-------------+---------------------------------- Adj R-squared = 0.0007
Total | 27.2218626 30 .907395422 Root MSE = .95222

------------------------------------------------------------------------------
absuhat | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | .1806422 .4049037 0.45 0.659 -.6516493 1.012934
x2 | .0376751 .5454713 0.07 0.945 -1.083557 1.158907
x3 | .0072894 .0047597 1.53 0.138 -.0024943 .017073
x4 | -.238095 .1584689 -1.50 0.145 -.5638326 .0876425
_cons | -2.543389 5.176571 -0.49 0.627 -13.18398 8.097206
------------------------------------------------------------------------------
Glejser test 2
|ûi| = β0 + β1√X1i + β2√X2i + β3√X3i + … + βk√Xki

. generate akrx1 = x1^0.5


. generate akrx2 = x2^0.5
. generate akrx3 = x3^0.5
. generate akrx4 = x4^0.5
. regress absuhat akrx1 akrx2 akrx3 akrx4


Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 1.81
Model | 5.93837538 4 1.48459385 Prob > F = 0.1565
Residual | 21.2834873 26 .818595664 R-squared = 0.2181
-------------+---------------------------------- Adj R-squared = 0.0979
Total | 27.2218626 30 .907395422 Root MSE = .90476

------------------------------------------------------------------------------
absuhat | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
akrx1 | 1.228107 1.562472 0.79 0.439 -1.9836 4.439813
akrx2 | 2.370886 3.068566 0.77 0.447 -3.936642 8.678414
akrx3 | .3163667 .1656458 1.91 0.067 -.0241231 .6568566
akrx4 | -1.620849 .7045422 -2.30 0.030 -3.069056 -.1726416
_cons | -11.03613 9.43768 -1.17 0.253 -30.43556 8.363301
------------------------------------------------------------------------------
Glejser test 3
|ûi| = β0 + β1(1/X1i) + β2(1/X2i) + β3(1/X3i) + … + βk(1/Xki)

. generate satuperx1 = 1/x1
. generate satuperx2 = 1/x2
. generate satuperx3 = 1/x3
. generate satuperx4 = 1/x4
. regress absuhat satuperx1 satuperx2 satuperx3 satuperx4


Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 4.06
Model | 10.4594259 4 2.61485648 Prob > F = 0.0110
Residual | 16.7624367 26 .644709105 R-squared = 0.3842
-------------+---------------------------------- Adj R-squared = 0.2895
Total | 27.2218626 30 .907395422 Root MSE = .80294

------------------------------------------------------------------------------
absuhat | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
satuperx1 | -5.139322 5.328043 -0.96 0.344 -16.09127 5.812626
satuperx2 | -24.37908 23.50337 -1.04 0.309 -72.69094 23.93278
satuperx3 | 162.1468 1205.311 0.13 0.894 -2315.406 2639.7
satuperx4 | 4.443855 1.210065 3.67 0.001 1.956531 6.93118
_cons | 4.898915 2.87545 1.70 0.100 -1.011657 10.80949
------------------------------------------------------------------------------
Glejser test 4
|ûi| = β0 + β1(1/√X1i) + β2(1/√X2i) + β3(1/√X3i) + … + βk(1/√Xki)
. generate satuper_akrx1 = 1/x1^0.5
. generate satuper_akrx2 = 1/x2^0.5
. generate satuper_akrx3 = 1/x3^0.5
. generate satuper_akrx4 = 1/x4^0.5
. regress absuhat satuper_akrx1 satuper_akrx2 satuper_akrx3 satuper_akrx4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 3.61
Model | 9.71687575 4 2.42921894 Prob > F = 0.0181
Residual | 17.5049869 26 .673268727 R-squared = 0.3570
-------------+---------------------------------- Adj R-squared = 0.2580
Total | 27.2218626 30 .907395422 Root MSE = .82053

-------------------------------------------------------------------------------
absuhat | Coef. Std. Err. t P>|t| [95% Conf. Interval]
--------------+----------------------------------------------------------------
satuper_akrx1 | -5.961065 5.547212 -1.07 0.292 -17.36352 5.441394
satuper_akrx2 | -24.0899 19.0802 -1.26 0.218 -63.30981 15.13001
satuper_akrx3 | -20.65066 86.1758 -0.24 0.812 -197.7876 156.4862
satuper_akrx4 | 6.184132 1.787648 3.46 0.002 2.509569 9.858696
_cons | 11.65992 6.641819 1.76 0.091 -1.99253 25.31238
-------------------------------------------------------------------------------
Breusch–Pagan–Godfrey (BPG) test (two versions in EViews)
Regress the squared residuals on the regressors:
ûi² = α0 + α1X1i + … + αkXki + vi
Find R² of the auxiliary regression
Compute nR²
Compare with χ²(k)

. regress uhat2 x1 x2 x3 x4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 1.23
Model | 67.4789604 4 16.8697401 Prob > F = 0.3219
Residual | 356.163668 26 13.6986026 R-squared = 0.1593
-------------+---------------------------------- Adj R-squared = 0.0299
Total | 423.642629 30 14.121421 Root MSE = 3.7012

------------------------------------------------------------------------------
uhat2 | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | .7521687 1.573808 0.48 0.637 -2.482839 3.987177
x2 | .2944959 2.120175 0.14 0.891 -4.063587 4.652579
x3 | .0248884 .0185003 1.35 0.190 -.0131394 .0629162
x4 | -.8902518 .615948 -1.45 0.160 -2.156351 .3758475
_cons | -11.0446 20.12065 -0.55 0.588 -52.4032 30.31399
------------------------------------------------------------------------------
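The nR² version of BPG can be sketched the same way in Python (simulated stand-in data, illustrative names; variance rises with x so the test rejects):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(1, 10, n)
y = 1 + 2 * x + rng.normal(0, 0.5 * x)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
uhat2 = (y - X @ beta)**2

# BPG auxiliary regression of uhat^2 on the regressor, then nR^2
gamma, *_ = np.linalg.lstsq(X, uhat2, rcond=None)
resid = uhat2 - X @ gamma
r2 = 1 - (resid**2).sum() / ((uhat2 - uhat2.mean())**2).sum()
lm = n * r2
pval = stats.chi2.sf(lm, df=1)       # df = k auxiliary slope regressors
print(lm, pval)                      # small p-value: reject constant variance
```

In the course's Stata output above, R² of the auxiliary regression is only 0.1593, so nR² = 31 × 0.1593 ≈ 4.94, which is below χ²(4) at 5% (9.49): H0 is not rejected there.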
White’s General Heteroscedasticity Test
Model without cross terms
1. Regress: ûi² = α0 + α1X1i + … + α4X4i + α5X1i² + … + α8X4i² + vi
2. Find R², compute nR², compare with χ² (df = number of auxiliary slope regressors)

. generate x1kuadrat = x1^2
. generate x2kuadrat = x2^2
. generate x3kuadrat = x3^2
. generate x4kuadrat = x4^2

. regress uhat2 x1 x2 x3 x4 x1kuadrat x2kuadrat x3kuadrat x4kuadrat

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(8, 22) = 1.04
Model | 116.067053 8 14.5083816 Prob > F = 0.4387
Residual | 307.575576 22 13.980708 R-squared = 0.2740
-------------+---------------------------------- Adj R-squared = 0.0100
Total | 423.642629 30 14.121421 Root MSE = 3.7391

------------------------------------------------------------------------------
uhat2 | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 40.70797 53.60411 0.76 0.456 -70.46015 151.8761
x2 | 8.554251 13.76868 0.62 0.541 -20.00024 37.10874
x3 | .0572075 .1357739 0.42 0.678 -.2243704 .3387854
x4 | -2.587839 1.972473 -1.31 0.203 -6.678497 1.502819
x1kuadrat | -4.451784 5.706431 -0.78 0.444 -16.2862 7.38263
x2kuadrat | -.3728507 .8909715 -0.42 0.680 -2.220613 1.474911
x3kuadrat | -.0000492 .0000705 -0.70 0.492 -.0001955 .000097
x4kuadrat | .1068085 .0738011 1.45 0.162 -.0462457 .2598627
_cons | -133.7902 156.4465 -0.86 0.402 -458.2404 190.6601
------------------------------------------------------------------------------
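White's no-cross-term regression can be sketched with numpy (simulated data, illustrative names; with a single regressor the auxiliary regression is just x and x²):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(1, 10, n)
y = 1 + 2 * x + rng.normal(0, 0.5 * x)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
uhat2 = (y - X @ beta)**2

# White auxiliary regression: uhat^2 on x and x^2 (no cross terms needed
# when there is one regressor)
W = np.column_stack([np.ones(n), x, x**2])
gamma, *_ = np.linalg.lstsq(W, uhat2, rcond=None)
resid = uhat2 - W @ gamma
r2 = 1 - (resid**2).sum() / ((uhat2 - uhat2.mean())**2).sum()
lm = n * r2
crit = stats.chi2.ppf(0.95, df=2)    # df = number of auxiliary slope regressors
print(lm, crit)
```

With the course's four regressors, the no-cross-term version has df = 8 and the cross-term version df = 14, matching the F(8, 22) and F(14, 16) outputs shown in this section.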
White’s General Heteroscedasticity Test
Model with cross terms
Regress: ûi² on the regressors, their squares, and all pairwise cross products XjiXli
Find R², compute nR², compare with χ² (df = number of auxiliary slope regressors)


. generate x1x2 = x1*x2
. generate x1x3 = x1*x3
. generate x1x4 = x1*x4
. generate x2x3 = x2*x3
. generate x2x4 = x2*x4
. generate x3x4 = x3*x4

. regress uhat2 x1 x2 x3 x4 x1kuadrat x2kuadrat x3kuadrat x4kuadrat x1x2 x1x3 x1x4 x2x3 x2x4 x3x4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(14, 16) = 1.87
Model | 262.832756 14 18.7737683 Prob > F = 0.1155
Residual | 160.809873 16 10.0506171 R-squared = 0.6204
-------------+---------------------------------- Adj R-squared = 0.2883
Total | 423.642629 30 14.121421 Root MSE = 3.1703

------------------------------------------------------------------------------
uhat2 | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 173.2675 149.4464 1.16 0.263 -143.5448 490.0798
x2 | 115.2211 137.7649 0.84 0.415 -176.8274 407.2697
x3 | -.783325 1.127542 -0.69 0.497 -3.173607 1.606957
x4 | 20.92119 46.82531 0.45 0.661 -78.34402 120.1864
x1kuadrat | -11.29855 10.19308 -1.11 0.284 -32.90691 10.30981
x2kuadrat | -6.403126 7.084332 -0.90 0.379 -21.42124 8.614988
x3kuadrat | .0003466 .0009106 0.38 0.708 -.0015837 .0022769
x4kuadrat | .5718936 .8437312 0.68 0.508 -1.216737 2.360524
x1x2 | -12.0206 10.5415 -1.14 0.271 -34.36758 10.32637
x1x3 | .0738459 .0810441 0.91 0.376 -.0979598 .2456516
x1x4 | -2.16023 3.089268 -0.70 0.494 -8.709184 4.388725
x2x3 | .0476595 .1294459 0.37 0.718 -.2267535 .3220726
x2x4 | -.4037976 4.41818 -0.09 0.928 -9.769921 8.962325
x3x4 | -.0303487 .0526993 -0.58 0.573 -.1420663 .0813688
_cons | -677.8232 814.1762 -0.83 0.417 -2403.8 1048.153
------------------------------------------------------------------------------
Built-in Stata

. regress y x1 x2 x3 x4

Source | SS df MS Number of obs = 31
-------------+---------------------------------- F(4, 26) = 8.99
Model | 96.277981 4 24.0694953 Prob > F = 0.0001
Residual | 69.6107522 26 2.67733662 R-squared = 0.5804
-------------+---------------------------------- Adj R-squared = 0.5158
Total | 165.888733 30 5.52962444 Root MSE = 1.6363

------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 2.701012 .6957689 3.88 0.001 1.270839 4.131186
x2 | 3.059606 .9373141 3.26 0.003 1.132929 4.986282
x3 | -.0160601 .0081788 -1.96 0.060 -.032872 .0007517
x4 | -.0227016 .2723061 -0.08 0.934 -.5824348 .5370317
_cons | -9.854598 8.895195 -1.11 0.278 -28.13893 8.429736
------------------------------------------------------------------------------

. estat hettest

Breusch-Pagan / Cook-Weisberg test for heteroskedasticity


Ho: Constant variance
Variables: fitted values of y

chi2(1) = 1.11
Prob > chi2 = 0.2916
. estat hettest, rhs

Breusch-Pagan / Cook-Weisberg test for heteroskedasticity


Ho: Constant variance
Variables: x1 x2 x3 x4

chi2(4) = 6.69
Prob > chi2 = 0.1531
. estat imtest

Cameron & Trivedi's decomposition of IM-test

---------------------------------------------------
Source | chi2 df p
---------------------+-----------------------------
Heteroskedasticity | 19.23 14 0.1562
Skewness | 6.98 4 0.1368
Kurtosis | 0.57 1 0.4506
---------------------+-----------------------------
Total | 26.79 19 0.1098
---------------------------------------------------
. estat szroeter, rhs

Szroeter's test for homoskedasticity

Ho: variance constant


Ha: variance monotonic in variable

---------------------------------------
Variable | chi2 df p
-------------+-------------------------
x1 | 2.05 1 0.1523 #
x2 | 2.73 1 0.0986 #
x3 | 2.86 1 0.0911 #
x4 | 3.12 1 0.0773 #
---------------------------------------
# unadjusted p-values
Remedies for heteroscedasticity
 Transform to logarithmic/ln form
 Transform to 1/ln form
 Robust standard errors  HCCME (heteroscedasticity-consistent covariance matrix estimator)
 regress y x1 x2 x3 x4, vce(robust)
 Feasible Generalized Least Squares (FGLS):

 generate z1 = x1 (and so on for z2, z3, z4)
 regress y x1 x2 x3 x4
 predict ehat, residuals
 generate lehat2 = ln(ehat*ehat)
 regress lehat2 z1 z2 z3 z4
 predict lehat2hat
 generate w = 1/exp(lehat2hat)
 regress y x1 x2 x3 x4 [aweight = w]
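The FGLS recipe above can be sketched in numpy on simulated data (illustrative names and a single regressor; the variance model ln(e²) on z1 = x1 follows the listed steps): estimate the variance function, form weights, and rerun weighted least squares.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(1, 10, n)
y = 1 + 2 * x + rng.normal(0, 0.5 * x)   # true slope 2, variance grows with x

X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
ehat = y - X @ beta_ols

# Variance model: ln(e^2) = d0 + d1*x (here z1 = x1, as in the recipe)
delta, *_ = np.linalg.lstsq(X, np.log(ehat**2), rcond=None)
h = np.exp(X @ delta)                 # estimated error variances
w = 1.0 / np.sqrt(h)                  # WLS: divide every row by sigma_i

beta_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(beta_fgls[1])                   # slope estimate close to the true value 2
```

Like the robust-standard-error route, FGLS leaves the coefficient estimates consistent; its advantage is regaining efficiency when the variance model is roughly right.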
