
EViews-5

(Remaining) Diagnostic Tests


Assumption: Linear functional form
  Test(s): Ramsey RESET test
  Consequences of failure: coefficient estimates are biased

Assumption: Errors have constant variance
  Test(s): 1. White test  2. ARCH test
  Consequences of failure: 1. Estimates are unbiased but inefficient  2. The standard errors of the coefficients are biased

Assumption: Errors are normally distributed
  Test(s): Jarque-Bera test
  Consequences of failure: 1. Estimates are unbiased  2. Distribution of the estimates is unknown

Assumption: Errors are serially independent
  Test(s): 1. Durbin-Watson test  2. Durbin's h-test  3. Breusch-Godfrey test
  Consequences of failure: 1. Estimates are unbiased but inefficient  2. The standard errors of the coefficients are biased
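As a quick reference, the sketch below shows one way to run these diagnostics from the command line once an equation has been estimated; it assumes the investment equation used later in these notes, the equation name eq01 is illustrative, and menu locations/command options can differ slightly across EViews versions.

' estimate the equation as a named object (specification taken from the later slides)
equation eq01.ls log(i) c log(y) rl-100*dlog(p)
' Ramsey RESET test with one fitted term (linear functional form)
eq01.reset(1)
' Breusch-Godfrey serial correlation LM test with 2 lags
eq01.auto(2)
' residual histogram with the Jarque-Bera normality statistic
eq01.hist
' the White and ARCH heteroscedasticity tests are available from the equation window
' under View/Residual Diagnostics (View/Residual Tests in older versions)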

Structural Break
• Unexpected change over time in the parameters of regression models
• Can lead to huge forecasting errors and unreliability of the model in general.
Structural Break: Stockbuilding case

[Figure: Stockbuilding in Belgium, 1970-2017, billions of LCU; a structural break occurs at about 2005]
If the Series Have a Structural Break:
• Run the regression
• View/Actual,Fitted,Residual/Residual Graph
• The model has a poor fit due to the structural break
How to Deal with a Structural Break: Time Dummy

In the command window, enter series D2005=@time=2005 to create D2005, then include it in the regression and check the actual/fitted/residual graph. Note that the model improves in terms of fit, but the positive autocorrelation is still present.
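A minimal command sketch of this step, assuming the stockbuilding series is called STOCK and using a simple illustrative specification; the dummy is created here with @year (the slide uses @time), and a step dummy from 2005 onwards is shown as a common alternative for a permanent break.

' dummy equal to 1 in 2005 only; the slide creates it with @time, @year is used here
series d2005 = (@year = 2005)
' step dummy equal to 1 from 2005 onwards, often preferred for a permanent break
series d2005s = (@year >= 2005)
' re-estimate including the dummy; STOCK and @trend are illustrative
equation eq_break.ls stock c @trend d2005
' then open View/Actual,Fitted,Residual/Residual Graph in the equation window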
The Durbin-Watson Test (DW test)
• Formal test for the presence of first-order autocorrelation
• The null hypothesis: there is no autocorrelation (ρ = 0)
• We wish to test H0: ρ = 0
• against H1: ρ ≠ 0 (or the one-sided alternatives H1: ρ > 0 or H1: ρ < 0)
• E(DW) = 2: no autocorrelation; E(DW) < 2: positive autocorrelation; E(DW) > 2: negative autocorrelation

• Note that DW is bounded between 0 and 4.


• The values given in the DW statistics tables are for a one-tailed test.
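For reference, the DW statistic itself (not shown on the slide) is computed from the OLS residuals $\hat e_t$ as

$$ DW = \frac{\sum_{t=2}^{T} (\hat e_t - \hat e_{t-1})^2}{\sum_{t=1}^{T} \hat e_t^{\,2}} \approx 2(1 - \hat\rho), $$

where $\hat\rho$ is the estimated first-order autocorrelation of the residuals; this is why DW near 2 points to no autocorrelation, DW below 2 to positive autocorrelation, and DW above 2 to negative autocorrelation.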
Output Table with DW statistics (DW<2)
ls LOG(I) C LOG(Y) RL-100*DLOG(P)

• DW < 2 (potential positive autocorrelation)
• Number of regressors, excluding the constant, gives k for the DW table
• The lower and upper critical values d_L and d_U are read from the table (one-tail test)
• Number of observations = 47
DW d-statistics Table
To test for positive autocorrelation:
• If d < d_L, the errors are positively correlated
• If d > d_U, there is no statistical evidence that the error terms are positively autocorrelated
• If d_L ≤ d ≤ d_U, the test is inconclusive
• Note that if autocorrelation is present, the coefficient estimates may still be unbiased, but they are inefficient.
• Since d < d_L (i.e. 0.43 < 1.38), there is positive autocorrelation
Dealing with the issue: ls DLOG(I) C DLOG(Y) D(RL-100*DLOG(P)) AR(1) (i.e. Cochrane-Orcutt)
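For reference, adding the AR(1) term asks EViews to estimate a first-order autoregressive error process, which is the model the Cochrane-Orcutt type correction is designed for:

$$ y_t = x_t'\beta + u_t, \qquad u_t = \rho\, u_{t-1} + \varepsilon_t, $$

so the reported AR(1) coefficient is the estimate of $\rho$.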
Output Table with DW statistics (DW>2)
ls DLOG(I) C DLOG(Y) D(RL-100*DLOG(P))

• DW > 2 (potential negative autocorrelation)
• Number of regressors, excluding the constant, gives k for the DW table
• The lower and upper critical values d_L and d_U are read from the table (one-tail test)
• Number of observations = 46
DW d-statistics Table
• To test for negative autocorrelation, the test statistic (4 - d) is compared with the same critical values:
• If (4 - d) < d_L, there is statistical evidence of negative autocorrelation
• If (4 - d) > d_U, there is no statistical evidence of negative autocorrelation
• If d_L ≤ (4 - d) ≤ d_U, the test is inconclusive
• Since (4 - d) > d_U here, there is no autocorrelation
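To make the (4 - d) comparison concrete, a purely illustrative calculation (these numbers are hypothetical, not taken from the slide's output):

$$ d = 2.30,\; d_L = 1.38,\; d_U = 1.60 \;\Rightarrow\; 4 - d = 1.70 > d_U, $$

so in that case there would be no statistical evidence of negative autocorrelation.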

Dealing with the issue: ls DLOG(I) C DLOG(Y) D(RL-100*DLOG(P)) AR(1) (i.e. Cochrane-Orcutt)
Wald Test of Restrictions
• Run the regression
• View / Coefficient Tests / Wald Coefficient Restrictions
• Enter the restriction as c(coefficient number) = restriction
• AR(1) is the first-order autoregressive term of the errors (its coefficient is the last one reported, here c(4))

The nonlinear (AR) model is estimated with EViews' default maximum likelihood approach.


Wald Test of Restrictions (Cont.)
• View / Coefficient Tests / Wald Coefficient Restrictions
• Restriction: c(4) = 1
• So the null hypothesis is c(4) - 1 = 0
• If p-value < 0.05, reject the null
• If p-value > 0.05, do not reject the null
• Here the p-values are < 0.05, so we reject: the AR(1) coefficient is not equal to 1, and therefore differencing is not an option for this model
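A minimal command-line version of this test, assuming the AR(1) equation from the earlier slides has been stored as an equation object (the name eq_ar is illustrative); c(4) is the AR(1) coefficient because the specification has a constant, two regressors and then the AR term:

' estimate the AR(1) specification as a named equation object
equation eq_ar.ls dlog(i) c dlog(y) d(rl-100*dlog(p)) ar(1)
' Wald test of the restriction that the AR(1) coefficient equals 1
eq_ar.wald c(4)=1
' a p-value below 0.05 rejects the restriction, as on the slide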
(Recall) When running a linear regression:

You should ask yourself the following questions:


1. Are the estimates consistent with my prior expectations based on economic theory?
2. Is the estimated model a good statistical representation of the data?

When assessing the statistical fit of the model you should look for:
3. The significance of the estimated coefficients (t-ratios or p-values)
4. The joint significance of the variables in the model (the F-statistic)
5. The proportion of the variance of the endogenous variable which is explained by the model (the R² statistic)

(Recall) Summary
1-Estimate the model.
Test the null hypothesis that the slope coefficient is equal to zero against the alternative that it is not equal to zero.

2-Use diagnostic tests to examine the specification of the model


Apply tests for serial correlation, heteroscedasticity, functional form and normality of the errors. What do these tell you about the specification of the model? What are the implications for the tests you carried out?

3-Estimate an alternative specification (if needed)


Try estimating the model in first differences. Assess the extent to which this ‘improves’ the diagnostic test
results.

