
Gerdus Claassen

34663495
17 March 2023
ECOH 617 Assignment 1
Professor Andrea Saayman
Question 1
1.1 a) According to Okun’s law there is an inverse relationship. It states that a 1% increase in
unemployment is associated with a 2% decrease in GNP. The equation is as follows:
lune = c − β₂ lgnp (1)
Where: lune refers to the log of the unemployment rate
c is the natural rate of unemployment
lgnp is the log of the Gross National Product of the Azanian economy.
The negative sign of β₂ captures the inverse relationship between unemployment and GNP.
b)

For each 1% increase in the Azanian economy’s GNP, the unemployment rate will increase by
0.903442%.
c)
H0: β₂ = 0
H1: β₂ ≠ 0
p-value = 0.0000 (effectively a 0% chance of making a Type I error when rejecting H0)
and the calculated |t| > 5.000,
therefore the t-statistic exceeds the critical t-value,
therefore reject H0.
β₂ is statistically significant at the 5% level of significance.
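As an illustration of how this regression and t-test could be run, the sketch below assumes the Azanian data are in a file called azania.csv with columns une and gnp (the file name and column names are hypothetical); it estimates the log-log Okun's law equation and reports the coefficient, t-statistic and p-value discussed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names; replace with the actual Azanian data set
data = pd.read_csv("azania.csv")
lune = np.log(data["une"])      # log of the unemployment rate
lgnp = np.log(data["gnp"])      # log of GNP

X = sm.add_constant(lgnp)       # adds the intercept c
results = sm.OLS(lune, X).fit()

print(results.params)           # estimated c and beta_2
print(results.tvalues["gnp"])   # calculated t-statistic for beta_2
print(results.pvalues["gnp"])   # p-value for H0: beta_2 = 0
```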

1.2 d) I would retain the intercept. The p-value is 0.000, so once again there is effectively a 0% chance of
making a Type I error when rejecting H0: β₁ = 0; therefore reject H0, and β₁ is statistically significant.
Question 2
2 Equation:

White’s test

H0: Homoskedasticity
H1: Heteroskedasticity
Prob. Chi-square = 0.0236
Reject H0 (homoskedasticity) at the 95% level of confidence; heteroskedasticity was detected.
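A minimal sketch of how White's test can be run, assuming the Question 2 equation has been estimated by OLS and the fitted results are stored in a statsmodels object named results (a placeholder name):

```python
from statsmodels.stats.diagnostic import het_white

# White's test: auxiliary regression of squared residuals on the regressors,
# their squares and cross products
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(results.resid, results.model.exog)
print(lm_pvalue)   # reported above as 0.0236: reject homoskedasticity at the 5% level
```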
The Breusch-Pagan LM test

H0: Homoskedasticity

H1: Heteroskedasticity

Prob. Chi-square = 0.0382

3.82% chance of making a Type I error when rejecting H0.

Reject H0 at the 95% level of confidence. Heteroskedasticity was detected.
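The corresponding sketch for the Breusch-Pagan test, under the same assumptions as above:

```python
from statsmodels.stats.diagnostic import het_breuschpagan

# Breusch-Pagan LM test: auxiliary regression of squared residuals on the original regressors
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(results.resid, results.model.exog)
print(lm_pvalue)   # reported above as 0.0382: reject homoskedasticity at the 5% level
```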

Glejser LM Test
H0: Homoskedasticity

H1: Heteroskedasticity

Prob. Chi-Square = 0.0414, or a 4.14% chance of making a Type I error when rejecting H0.

Reject H0 at the 95% level of confidence. Heteroskedasticity was detected.
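statsmodels has no built-in Glejser test, but it amounts to regressing the absolute residuals on the explanatory variables; a rough sketch, using the same placeholder results object:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Glejser test: regress |residuals| on the original regressors and compute the LM statistic
abs_resid = np.abs(results.resid)
aux = sm.OLS(abs_resid, results.model.exog).fit()
lm_stat = aux.nobs * aux.rsquared              # LM statistic ~ chi-square(df_model)
print(stats.chi2.sf(lm_stat, aux.df_model))    # reported above as 0.0414
```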

Engle’s ARCH test

The squared residuals are regressed on their values lagged by one time period; the results are as follows:

H0: γ1 = γ2 = … = γp = 0 (homoskedasticity)

H1: at least one γi ≠ 0 (heteroskedasticity)

Prob. Chi-square = 0.0000

Therefore, reject H0 at the 95% level of confidence. Heteroskedasticity was detected.
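A sketch of the ARCH LM test with one lag, again assuming the placeholder results object:

```python
from statsmodels.stats.diagnostic import het_arch

# Engle's ARCH test: regress squared residuals on one lag of the squared residuals
lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(results.resid, nlags=1)
print(lm_pvalue)   # reported above as 0.0000: reject homoskedasticity
```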


Harvey-Godfrey LM test

H0: Homoskedasticity

H1: Heteroskedasticity

Prob. Chi-Square = 0.0848, or an 8.48% chance of making a Type I error when rejecting H0.

Do not reject H0 at the 95% level of confidence. No heteroskedasticity was detected.
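This test can also be reproduced by hand by regressing the log of the squared residuals on the explanatory variables; a rough sketch under the same assumptions as the previous examples:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Harvey-Godfrey test: regress log(residuals^2) on the original regressors
log_resid_sq = np.log(results.resid ** 2)
aux = sm.OLS(log_resid_sq, results.model.exog).fit()
lm_stat = aux.nobs * aux.rsquared              # LM statistic ~ chi-square(df_model)
print(stats.chi2.sf(lm_stat, aux.df_model))    # reported above as 0.0848: do not reject H0
```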

3 a) It permits the simultaneous evaluation of the relationship between a dependent variable
and multiple independent variables, in cases where multiple factors may contribute to a specific
outcome.
By including multiple independent variables in a regression analysis, you are able to control for
the effects of confounding variables and examine the unique contribution of each independent
variable to the dependent variable. This can help in identifying the important factors that
influence the outcome of interest.
Multiple regression may also be used to test hypotheses and to predict future outcomes based on
the relationship between the independent and dependent variables.
b.) Int2 will have a negative sign, since higher interest rates lead to lower investment.
Pst2 will have a positive sign: political instability raises risk and reduces investment, so greater stability encourages investment.
c.)

linvₜ = 22027.86 − 5179.147 int2ₜ + 0.932386 pst2ₜ

Even if the interest rate is 0% and the political stability of a country is 0, a country will start off with
22027.86 units of real investment.

For each 1% increase in the interest rate, real investment will decrease by 5179.147 units of real
investment.

For each one-unit increase in political stability, real investment will increase by 0.932386 units of
real investment.
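A rough sketch of how this regression could be estimated, assuming the data are in a file called investment.csv with columns inv, int2 and pst2 (the file name and column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names for the investment data set
data = pd.read_csv("investment.csv")
y = data["inv"]                                # real investment
X = sm.add_constant(data[["int2", "pst2"]])    # interest rate and political stability

inv_model = sm.OLS(y, X).fit()
print(inv_model.params)   # intercept ~22027.86, int2 ~-5179.147, pst2 ~0.932386 as above
```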

Question 4

4.1) Multicollinearity refers to the phenomenon where two or more independent variables in a
regression model are highly correlated with each other. This can lead to problems in the model such
as instability of estimates, difficulty in interpreting the coefficients, and decreased predictive power.

Perfect multicollinearity occurs when there is an exact linear relationship between two or more
independent variables in a regression model. For example, if we were to include both temperature
in Celsius and temperature in Fahrenheit in a regression model, this would create perfect
multicollinearity, since Fahrenheit = 1.8 × Celsius + 32 is an exact linear function of Celsius.

Imperfect multicollinearity, on the other hand, occurs when there is a high degree of correlation
between two or more independent variables, but there is no exact linear relationship. For example,
if we were to include both height and weight as independent variables in a regression model, this
would create imperfect multicollinearity since the two variables are highly correlated but not
perfectly correlated.

In either case, multicollinearity can lead to unstable regression coefficients with inflated standard
errors, decreased predictive power, and difficulty in interpreting the results. To address
multicollinearity, one common remedy is to remove one or more of the highly correlated independent variables.
4.2a) Yes, we would expect multicollinearity in such models. The reason is that the independent
variables in the model, i.e., income at time t, income at time t-1, income at time t-2, and income at
time t-3, are highly correlated with each other.

Since income changes only gradually over time, income at time t is highly correlated with income at
times t-1, t-2 and t-3, so there is a high degree of overlap between the
independent variables. This overlap leads to a situation where the independent variables are not
independent of each other, but rather are highly correlated, which is the definition of
multicollinearity.

Multicollinearity can lead to unreliable and unstable estimates of the regression coefficients, making
it difficult to interpret the results of the model. It can also affect the statistical significance of the
coefficients and increase the standard errors of the coefficients, reducing the power of the statistical
test. Therefore, it is important to detect and address multicollinearity in the model.
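A small illustration of this point, using a simulated persistent income series (the data below are entirely synthetic and only meant to show how strongly a slowly evolving series correlates with its own lags):

```python
import numpy as np
import pandas as pd

# Simulate a trending, slowly changing income series
rng = np.random.default_rng(0)
income = 100 + np.cumsum(rng.normal(1.0, 2.0, size=200))

df = pd.DataFrame({"inc": income})
for lag in (1, 2, 3):
    df[f"inc_l{lag}"] = df["inc"].shift(lag)   # income at t-1, t-2, t-3

print(df.dropna().corr().round(3))   # correlations between the lags are all close to 1
```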

b.) Check for the correlation between the independent variables: Compute the correlation matrix of
the independent variables, and check for the magnitude of the correlations. Correlations close to 1
or -1 indicate a high degree of correlation, which suggests multicollinearity.
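A common companion to the correlation matrix is the variance inflation factor (VIF); a sketch, reusing the simulated data frame df from the previous example (VIF values above roughly 10 are usually read as a sign of serious multicollinearity):

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# VIF for each income lag; very large values confirm multicollinearity
X = sm.add_constant(df.dropna()[["inc", "inc_l1", "inc_l2", "inc_l3"]])
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))
```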

Remove one of the correlated variables: If two or more variables are highly correlated, you can
remove one of them from the model. In this case, you can remove one or more of the income
variables at previous time periods.

Use principal component analysis (PCA): PCA is a technique used to reduce the dimensionality of a
dataset by creating a set of new variables that capture the maximum amount of variation in the
original data. This can help to eliminate multicollinearity by creating new variables that are
uncorrelated.

Regularize the model: Regularization techniques such as ridge regression or Lasso regression can
help to reduce the impact of multicollinearity by adding a penalty on the size of the regression
coefficients to the estimation objective, which shrinks the coefficients towards zero (see the sketch below).
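A minimal ridge regression sketch, again reusing the synthetic income data frame df from the earlier example; the cons series is an invented placeholder dependent variable, only there to make the example runnable:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Reuse df (simulated income and its lags) from the earlier sketch
d = df.dropna().copy()
rng = np.random.default_rng(1)
d["cons"] = 0.7 * d["inc"] + rng.normal(0, 2.0, size=len(d))   # placeholder consumption series

X = d[["inc", "inc_l1", "inc_l2", "inc_l3"]]
ridge = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, d["cons"])
print(ridge.named_steps["ridge"].coef_)   # shrunken, more stable coefficient estimates
```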

4.3.1) The R² for model (2b) will be at least as large as that for model (2a), but the adjusted R²
may not necessarily be larger.

The R² measures the proportion of the total variation in the dependent variable that is explained by
the independent variables in the model. Adding an additional variable to the model will increase the
R² (or at worst leave it unchanged), even if the variable is not relevant, because the variation
explained by the model cannot decrease. However, the adjusted R² takes into account the number of independent
variables in the model, penalizing the addition of irrelevant variables. The adjusted R² will only
increase if the additional variable improves the model fit more than would be expected by chance.

In summary, adding an irrelevant variable to the model will increase the R², but it may not increase
the adjusted R². Therefore, the conclusion about whether model (2b) has a larger adjusted R² than
model (2a) depends on the specific data and the degree to which the additional variable
improves the fit of the model.
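A tiny simulation can make the contrast concrete; the data below are entirely synthetic, with x3 generated independently of y so that it is genuinely irrelevant:

```python
import numpy as np
import statsmodels.api as sm

# Compare model (2a): y on x2, with model (2b): y on x2 and an irrelevant x3
rng = np.random.default_rng(42)
n = 50
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)                  # plays no role in generating y
y = 1.0 + 2.0 * x2 + rng.normal(size=n)

m_a = sm.OLS(y, sm.add_constant(x2)).fit()                         # model (2a)
m_b = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3]))).fit()  # model (2b)

print(m_a.rsquared, m_b.rsquared)            # R^2 for (2b) is never smaller
print(m_a.rsquared_adj, m_b.rsquared_adj)    # adjusted R^2 for (2b) can be smaller
```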

4.3.2) The estimates 𝛽1 and 𝛽2 obtained from model (2b) are still unbiased, even though an
irrelevant variable 𝑋3 has been added to the model.
This is because adding an irrelevant variable to a regression model does not affect the unbiasedness
of the estimates of the other coefficients. The unbiasedness property of the estimates depends only
on whether the model satisfies certain assumptions, such as the error term being uncorrelated with
the explanatory variables.

4.3.3) Absolutely: adding an irrelevant variable inflates the variances of the estimates of the other
coefficients, which implies the estimator no longer has the lowest variance of all unbiased estimators.
