
Classical Assumptions

I. The regression model is linear in the coefficients, is correctly specified, and has an additive error term.
Violation consequences: If the regression is not linear in the coefficients, OLS cannot estimate it. If the model is not correctly specified, or the error term is not additive, the OLS estimates will be biased.

II. The error term has a zero population mean.
Violation consequences: If the error term has a mean different from zero, the OLS estimates will be biased and inconsistent.

III. All explanatory variables are uncorrelated with the error term.
Violation consequences: If an explanatory variable and the error term are correlated, the OLS estimates will be biased and inconsistent.

IV. Observations of the error term are uncorrelated with each other (no serial correlation).
Violation consequences: If the errors are serially correlated (autocorrelated), the OLS estimators are inefficient.

V. The error term has a constant variance (no heteroscedasticity).
Violation consequences: If the errors do not have the same variance across observations, the OLS estimators will be inefficient.

VI. No explanatory variable is a perfect linear function of any other explanatory variable (no perfect multicollinearity).
Violation consequences: If perfect multicollinearity exists, there is no unique solution to the normal equations derived from the least squares principle.

VII. The error term is normally distributed.
Violation consequences: No consequences for the properties of the OLS estimators, but violation of this assumption renders hypothesis testing invalid.
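The perfect-multicollinearity case (Assumption VI) can be illustrated numerically. The sketch below uses NumPy with made-up data (the variable names and sample size are illustrative, not from the text): when one regressor is an exact linear function of another, the cross-product matrix X'X loses rank, so the normal equations (X'X)b = X'y do not pin down a unique coefficient vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical data: x2 is a perfect linear function of x1,
# violating the no-perfect-multicollinearity assumption.
x1 = rng.normal(size=n)
x2 = 3.0 * x1
X = np.column_stack([np.ones(n), x1, x2])  # design matrix with intercept

# X'X from the least squares normal equations (X'X) b = X'y.
XtX = X.T @ X

# With an exact linear dependence among the columns of X,
# X'X is singular: its rank is below the number of coefficients (3),
# so no unique solution to the normal equations exists.
rank = int(np.linalg.matrix_rank(XtX))
print(rank)  # 2, less than the 3 columns of X
```

Dropping the exact multiple (e.g. adding independent noise to x2) restores full rank and a unique OLS solution, which is why perfect, rather than merely high, multicollinearity is what breaks estimation.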
