ECONOMETRICS
A TEACHING MATERIAL FOR DISTANCE
STUDENTS MAJORING IN ECONOMICS
Module II
Prepared By:
Bedru Babulo
Seid Hassen
Department of Economics
Faculty of Business and Economics
Mekelle University
August, 2005
Mekelle
Econometrics: Module-II
Bedru B. and Seid H
Econometrics
Module II
Module II of the course is a continuation of Module I. In the first module of the course, the first three chapters - the introductory chapter, the simple classical regression model, and the multiple regression model - are presented with fairly detailed treatment. In the two chapters of Module I on classical linear regression models, students are introduced to the basic logic, concepts, assumptions, estimation methods, and interpretations of the classical linear regression models and their applications in economic science.
The ordinary least squares (OLS) estimation method discussed in Module I possesses the desirable properties of estimators provided that the basic classical assumptions are satisfied. In many real-world instances, however, the classical assumptions of linear regression models may be violated. Module II therefore pays due attention to violations of these assumptions, their consequences, and the remedial measures. Specifically, autocorrelation, heteroscedasticity, and multicollinearity problems will be given much focus.
Besides the discussion of violations of the classical assumptions, three more chapters, viz. Regression on Dummy Variables, Dynamic Econometric Models, and an Introduction to Simultaneous Equation Models, are also included in Module II.
Chapter Four
4.0 Introduction
In both the simple and multiple regression models, we made important
assumptions about the distribution of Y_t and the random error term u_t. We
assumed that u_t was a random variable with mean zero and var(u_t) = σ², and that
the errors corresponding to different observations are uncorrelated: cov(u_t, u_s) = 0 for t ≠ s.
In the classical linear regression model, one of the basic assumptions is that the
probability distribution of the disturbance term remains the same over all observations
of X; i.e., the variance of each u_i is the same for all values of the explanatory
variable. Symbolically,
var(u_i) = Ε[u_i − Ε(u_i)]² = Ε(u_i²) = σ_u² , a constant value.
If σ_u² is not constant but its value depends on the value of X, the disturbance term is heteroscedastic.
In panel (a), σ_u² seems to increase with X. In panel (b), the error variance appears
greater in X's middle range, tapering off toward the extremes. Finally, in panel
(c), the variance of the error term is greater for low values of X, declining and
leveling off rapidly as X increases.
The pattern of heteroscedasticity depends on the signs and values of the
coefficients of the relationship σ_ui² = f(X_i), but the u_i's are not observable. As such,
in applied research we make convenient assumptions that the heteroscedasticity is of
one of the forms:
i. σ_ui² = K²(X_i²)
ii. σ_ui² = K²(X_i)
iii. σ_ui² = K/X_i , etc.
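Assumed forms like these are easy to simulate. The sketch below (pure Python; the constant K and the X grid are hypothetical illustration values) generates errors with σ_ui² = K²X_i², form (i) above, and confirms that their spread grows with X:

```python
import random

random.seed(42)

K = 0.5  # hypothetical proportionality constant
X = [float(x) for x in range(1, 201)]

# form (i): sigma_ui^2 = K^2 * X_i^2, so the standard deviation of u_i is K * X_i
u = [random.gauss(0.0, K * x) for x in X]

def sample_sd(vals):
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

# the spread of the simulated errors grows as X grows
sd_low = sample_sd(u[:100])    # errors for X = 1 .. 100
sd_high = sample_sd(u[100:])   # errors for X = 101 .. 200
```

With these values, the standard deviation in the high-X half comes out roughly two to three times that of the low-X half, which is exactly the fan-shaped pattern described for panel (a).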
             λ_1   0   ..........   0
              0   λ_2  ..........   0
Ε(UU') =      :    :                :     ………………………………………..3.10
              0    0   ..........  λ_n

where λ_i = Ε(U_i²). In other words, the variance-covariance matrix in the present case
is a diagonal matrix with unequal elements on the diagonal.
4.1.4 Examples of Heteroscedastic Functions
I. Consumption Function: Suppose we are to study consumption expenditure
from a given cross-section sample of family budgets:
C_i = α + βY_i + U_i
where C_i is the consumption expenditure and Y_i the income of the i-th family.
At low levels of income, average consumption is low and variation around it is limited:
consumption cannot fall too far below this level, because that might mean starvation,
and it cannot rise too far above it, because money income does not permit it. Such
constraints may not bind at higher income levels. Thus, consumption patterns are more
regular at lower income levels than at higher levels. This implies that at high incomes
the variation of the u's will be large, while at low incomes it will be small. The
assumption of constant variance of the u's therefore does not hold when estimating the
consumption function from a cross-section of family budgets.
ii. Production Function: Suppose we are required to estimate the production
function X = f(K, L) of the sugar industry from a cross-section random sample of
firms in the industry. The disturbance term in the production function stands for many
factors other than the inputs labor (L) and capital (K) considered in the production
function, such as entrepreneurship, technological differences, selling and purchasing
procedures, and differences in organization. These factors, which are not considered
explicitly in the production function, show greater variation in large firms than in
small ones. This leads to a breakdown of our assumption of homogeneous error variance.
It should be noted that the problem of heteroscedasticity is likely to be more
common in cross-sectional data than in time-series data. Cross-sectional data deal with
members of a population at a given point in time, such as individual consumers or their
families, firms, or industries. These members may be of different sizes, such as small,
medium, or large firms, or of low, medium, or high income. In time-series data, on the
other hand, the variables tend to be of similar orders of magnitude because one
generally collects data for the same entity over a period of time.
Ε(α̂) = α + βX̄ + Ε(Ū) − Ε(β̂)X̄ = α
i.e., the least squares estimators are unbiased even under the condition of
heteroscedasticity, because the proof of unbiasedness makes no use of the
homoscedasticity assumption.
σ_ui² is no longer a finite constant; rather, it tends to change with the value of X,
and hence cannot be taken outside the summation sign.
3. OLS estimators are inefficient: in other words, the OLS estimators do not
have the smallest variance in the class of unbiased estimators and, therefore, they
are not efficient in either small or large samples. Under the heteroscedastic
assumption, therefore:
var(β̂) = Σk_i² Ε(u_i²) = Σx_i²σ_ui² / (Σx_i²)²  − − − − − − − − − 3.11

Under homoscedasticity, var(β̂) = σ²/Σx_i²  − − − − − − − − − − − − − − − − − − − 3.12
These two variances are different. This implies that, under the heteroscedastic
assumption, although the OLS estimator is unbiased, it is inefficient: its variance may
be larger than necessary. To see the consequence of using (3.12) instead of (3.11),
let us assume that
σ_ui² = k_iσ²
Substituting this into (3.11) gives
var(β̂)_hetero = var(β̂)_homo · (Σk_ix_i² / Σx_i²)  − − − − − 3.13
That is to say, if x_i² and k_i are positively correlated, so that the factor
Σk_ix_i²/Σx_i² in (3.13) is greater than 1, then var(β̂) under heteroscedasticity will be
greater than its variance under homoscedasticity. As a result, the true standard error of β̂
will be underestimated, and the t-value associated with it will be overestimated, which
might lead to the conclusion that in a specific case at hand β̂ is
statistically significant (which in fact may not be true). Moreover, if we proceed
with our model under the false belief of homoscedasticity of the error variance, our
inference and prediction about the population coefficients would be incorrect.
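This consequence can be checked by simulation. In the sketch below (pure Python; the sample design, parameter values, and error structure are all hypothetical), the error standard deviation grows with X, so k_i is positively correlated with x_i²; across many replications the homoscedastic formula (3.12) understates the actual sampling variance of β̂, while β̂ itself stays unbiased:

```python
import random

random.seed(0)

X = [float(x) for x in range(1, 31)]
n = len(X)
xbar = sum(X) / n
x_dev = [x - xbar for x in X]
Sxx = sum(d * d for d in x_dev)

alpha, beta = 2.0, 1.5              # hypothetical true parameters
betas, nominal_vars = [], []

for _ in range(2000):
    # heteroscedastic errors whose sd grows with X_i
    u = [random.gauss(0.0, 0.3 * x) for x in X]
    Y = [alpha + beta * x + e for x, e in zip(X, u)]
    ybar = sum(Y) / n
    b = sum(d * (y - ybar) for d, y in zip(x_dev, Y)) / Sxx
    a = ybar - b * xbar
    resid = [y - (a + b * x) for x, y in zip(X, Y)]
    s2 = sum(e * e for e in resid) / (n - 2)
    betas.append(b)
    nominal_vars.append(s2 / Sxx)   # the homoscedastic formula (3.12)

mean_beta = sum(betas) / len(betas)                              # close to 1.5: unbiased
true_var = sum((b - mean_beta) ** 2 for b in betas) / len(betas) # actual sampling variance
avg_nominal_var = sum(nominal_vars) / len(nominal_vars)          # what formula 3.12 reports
```

Under this setup `true_var` exceeds `avg_nominal_var`, which is exactly the underestimated standard error and inflated t-ratio described above.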
against Ŷ_i or X_i. In figure (a), we see that there is no systematic pattern between the two
variables, suggesting that perhaps no heteroscedasticity is present in the data.
Figures (b) to (e), however, exhibit definite patterns. For instance, (c) suggests a
linear relationship, whereas (d) and (e) indicate a quadratic relationship between
e_i² and Ŷ_i.
or ln σ_i² = ln σ² + β ln X_i + v_i − − − − − − 3.14
where v_i is the stochastic disturbance term.
Since σ_i² is generally not known, Park suggests using e_i² as a proxy and running the
following regression:
ln e_i² = ln σ² + β ln X_i + v_i − − − − − − 3.15
We then test H_0 : β = 0 against H_1 : β ≠ 0.
If β turns out to be statistically significant, it would suggest that heteroscedasticity
is present in the data. If it turns out to be insignificant, we may accept the
assumption of homoscedasticity. The Park test is thus a two-stage test procedure:
in the first stage, we run the OLS regression disregarding the heteroscedasticity
question and obtain e_i from this regression; in the second stage, we run
the regression in equation (3.15) above.
Example: Suppose that from a sample of size n = 100 we estimate the relation
between compensation (Y) and productivity (X):
Y_i = 1992.342 + 0.2329X_i + e_i − − − − − − − 3.16
SE = (936.479) (0.0098)
t = (2.1275) (23.765)          R² = 0.4375
The results reveal that the estimated slope coefficient is significant at the 5% level of
significance on the basis of a one-tail t-test. The equation shows that as labour
productivity increases by, say, a birr, labour compensation on average increases
by about 23 cents.
The residuals obtained from regression (3.16) were regressed on ln X_i as suggested by
equation (3.15), giving the following result:
ln e_i² = 35.817 − 2.8099 ln X_i + v_i − − − − − −(3.17)
SE = (38.319) (4.216)
t = (0.934) (−0.667)          R² = 0.0595
The result reveals that the slope coefficient is statistically insignificant,
implying that there is no statistically significant relationship between the two variables.
Following the Park test, one may conclude that there is no heteroscedasticity in the
error variance. Although empirically appealing, the Park test has some problems.
Goldfeld and Quandt have argued that the error term v_i entering
ln e_i² = α + β ln X_i + v_i may not satisfy the OLS assumptions and may itself be
heteroscedastic. Nonetheless, as a strictly exploratory method, one may use the Park
test.
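The two stages of the Park test can be sketched in pure Python as follows. The data-generating process and all numbers below are hypothetical; stage one fits OLS and keeps the residuals, and stage two regresses ln e_i² on ln X_i and inspects the slope:

```python
import math, random

random.seed(1)

# hypothetical heteroscedastic data: sd(u_i) = 0.4 * X_i, so ln(sigma_i^2) rises with ln X_i
X = [float(x) for x in range(1, 61)]
Y = [5.0 + 2.0 * x + random.gauss(0.0, 0.4 * x) for x in X]

def ols(x, y):
    """Simple two-variable OLS; returns (intercept, slope)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sum((xi - xb) ** 2 for xi in x)
    return yb - b * xb, b

# stage 1: OLS on the original model, keep the residuals
a1, b1 = ols(X, Y)
e = [yi - a1 - b1 * xi for xi, yi in zip(X, Y)]

# stage 2: ln(e_i^2) = ln(sigma^2) + beta * ln(X_i) + v_i  (equation 3.15)
lnX = [math.log(x) for x in X]
lne2 = [math.log(ei ** 2) for ei in e]
a2, park_beta = ols(lnX, lne2)
```

Since the true variance here is proportional to X_i², the Park slope should come out near 2; a clearly positive slope signals heteroscedasticity.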
b. Glejser test:
The Glejser test is similar in spirit to the Park test. After obtaining the residuals e_i
from the OLS regression, Glejser suggests regressing the absolute value of e_i on the
X variable, using functional forms such as:
|e_i| = α + β X_i + v_i          |e_i| = α + β √X_i + v_i
|e_i| = α + β (1/X_i) + v_i      |e_i| = α + β (1/√X_i) + v_i
|e_i| = √(α + β X_i) + v_i       |e_i| = √(α + β X_i²) + v_i
where v_i is the error term.
Goldfeld and Quandt point out that the error term v_i has some problems: its
expected value is non-zero, it is serially correlated and, ironically, it is itself
heteroscedastic. An additional difficulty with the Glejser method is that models such as
the last two above are non-linear in the parameters and therefore cannot be estimated
with the usual OLS procedure. Glejser has found that for large samples the first four of
the preceding models generally give satisfactory results in detecting heteroscedasticity.
As a practical matter, therefore, the Glejser technique may be used for large samples,
and in small samples strictly as a qualitative device to learn something
about heteroscedasticity.
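Under the same caveats, the first Glejser form |e_i| = α + βX_i + v_i can be sketched as follows (pure Python; the data-generating numbers are hypothetical):

```python
import random

random.seed(2)

# hypothetical data whose error sd grows with X
X = [float(x) for x in range(1, 61)]
Y = [3.0 + 1.2 * x + random.gauss(0.0, 0.5 * x) for x in X]

def ols(x, y):
    """Simple two-variable OLS; returns (intercept, slope)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sum((xi - xb) ** 2 for xi in x)
    return yb - b * xb, b

a, b = ols(X, Y)
abs_e = [abs(yi - a - b * xi) for xi, yi in zip(X, Y)]

# regress |e_i| on X_i; a clearly positive slope points to heteroscedasticity
_, glejser_slope = ols(X, abs_e)
```

Here the slope on X comes out positive, consistent with the error spread widening in X.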
c. Goldfeld-Quandt test
This popular method is applicable if one assumes that the heteroscedastic
variance σ_i² is positively related to one of the explanatory variables in the
regression model. For simplicity, consider the usual two-variable model:
Y_i = α + βX_i + U_i
Suppose σ_i² is positively related to X_i, so that σ_i² is larger, the larger the value of
X_i. If that turns out to be the case, heteroscedasticity is most likely to be
present in the model. To test this explicitly, Goldfeld and Quandt suggest the
following steps:
Step 1: Order or rank the observations according to the values of X_i, beginning
with the lowest X value.
Step 2: Omit c central observations, where c is specified a priori, and divide the
remaining (n − c) observations into two groups of (n − c)/2 observations each.
Step 3: Fit separate OLS regressions to the first (n − c)/2 observations and to the last
(n − c)/2 observations, and obtain the respective residual sums of squares RSS1 and
RSS2, RSS1 representing the RSS from the regression corresponding to the smaller
X_i values (the small-variance group) and RSS2 that from the larger X_i values (the
large-variance group). Each RSS has
(n − c)/2 − K = (n − c − 2K)/2 df, where K is the number of parameters to
be estimated, including the intercept term, and df is the degrees of freedom.
Step 4: Compute λ = (RSS2/df) / (RSS1/df), which under homoscedasticity follows the
F distribution with (n − c − 2K)/2 numerator and denominator df.
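These four steps can be sketched in pure Python (the sample size, the choice of c, and the data-generating process below are all hypothetical):

```python
import random

random.seed(3)

n, c = 30, 4  # hypothetical: 30 observations, omit the 4 central ones
# error sd grows with X, so the large-X group should show the larger RSS
data = [(float(x), 10.0 + 2.0 * x + random.gauss(0.0, 0.4 * x)) for x in range(1, n + 1)]

def rss(group):
    """Fit two-variable OLS on a group and return its residual sum of squares."""
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    m = len(xs)
    xb, yb = sum(xs) / m, sum(ys) / m
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sum((x - xb) ** 2 for x in xs)
    a = yb - b * xb
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

# Step 1: order by X; Step 2: drop the c central observations
data.sort(key=lambda p: p[0])
half = (n - c) // 2
low, high = data[:half], data[-half:]

# Steps 3-4: separate regressions and lambda = (RSS2/df) / (RSS1/df)
K = 2            # parameters per regression (intercept + slope)
df = half - K    # (n - c - 2K)/2 = 11 here
lam = (rss(high) / df) / (rss(low) / df)
```

With this data-generating process λ comes out well above 1, pointing to heteroscedasticity; in practice it would be compared with the F critical value for (df, df).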
Y_i = 3.4094 + 0.6968X_i + e_i
      (8.7049)  (0.0744)          R² = 0.8887
RSS1 = 377.17 with df = 11
RSS2 = 1536.8 with df = 11
From these results we obtain:
λ = (RSS2/df)/(RSS1/df) = (1536.8/11)/(377.17/11) = 4.07
The critical F-value for 11 numerator and 11 denominator df at the 5% level is
2.82. Since the estimated F (= λ) value exceeds the critical value, we may
conclude that there is heteroscedasticity in the error variance. However, if the level
of significance is fixed at 1%, we may not reject the assumption of
homoscedasticity (why?). Note that the p-value of the observed λ is 0.014.
There are also other tests of heteroscedasticity, like Spearman's rank correlation test,
the Breusch-Pagan-Godfrey test, and White's general heteroscedasticity test. But at this
introductory level the above tests are enough.
If we apply OLS to the above model, the result will be inefficient estimates of the
parameters, since var(u_i) is not constant.
The remedial measure is to transform the model so that the transformed
model satisfies all the assumptions of the classical regression model, including
homoscedasticity. Applying OLS to the transformed variables is known as the
method of Generalized Least Squares (GLS). In short, GLS is OLS on the
transformed variables that satisfy the standard least squares assumptions. The
estimators thus obtained are known as GLS estimators, and it is these estimators
that are BLUE.
4.1.8.1 The Method of Generalized (Weighted) Least Squares
Assume that our original model is Y_i = α + βX_i + U_i, where u_i satisfies all the
assumptions except homoscedasticity; instead,
Ε(u_i²) = σ_i² = f(k_i)
If we apply OLS to this model, the estimators are no longer BLUE. To make
them BLUE we have to transform the model. Heteroscedastic structures can be treated
under two conditions: when the population variance σ_i² is known and when it is not
known. When σ_i² is known, we divide the model through by σ_i, so that the
transformed error term has constant variance:
Y_i/σ_i = α(1/σ_i) + β(X_i/σ_i) + U_i/σ_i − − − − − − − − − (3.19)
The variance of the transformed error term is constant, i.e.:
var(u_i/σ_i) = Ε(u_i/σ_i)² = (1/σ_i²) Ε(u_i²) = (1/σ_i²) σ_i² = 1, a constant.
We can now apply OLS to the transformed model; the resulting estimators are
BLUE, because all the assumptions, including homoscedasticity, are satisfied. From (3.19),
u_i/σ_i = (1/σ_i)(Y_i − α − βX_i)
Σ(u_i/σ_i)² = Σ(1/σ_i²)(Y_i − α − βX_i)² .    Let w_i = 1/σ_i²
Σw_i û_i² = Σw_i (Y_i − α̂ − β̂X_i)²
The method of GLS (WLS) minimizes this weighted residual sum of squares:
∂Σw_i û_i²/∂α̂ = −2Σw_i (Y_i − α̂ − β̂X_i) = 0
⇒ α̂ = Σw_iY_i/Σw_i − β̂ Σw_iX_i/Σw_i = Ȳ* − β̂X̄*
β̂ = (Σw_iY_iX_i − Ȳ*X̄*Σw_i) / (Σw_iX_i² − X̄*²Σw_i) = Σx*y*/Σx*²
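With known σ_i, these weighted-mean formulas can be coded directly. A sketch with hypothetical data, taking w_i = 1/σ_i²:

```python
import random

random.seed(4)

alpha, beta = 4.0, 0.8              # hypothetical true parameters
X = [float(x) for x in range(1, 41)]
sigma = [0.5 * x for x in X]        # known sigma_i (here proportional to X_i)
Y = [alpha + beta * x + random.gauss(0.0, s) for x, s in zip(X, sigma)]

w = [1.0 / s ** 2 for s in sigma]   # weights w_i = 1 / sigma_i^2
sw = sum(w)
Xstar = sum(wi * xi for wi, xi in zip(w, X)) / sw    # weighted mean of X
Ystar = sum(wi * yi for wi, yi in zip(w, Y)) / sw    # weighted mean of Y

# beta_hat = (sum wXY - Ystar*Xstar*sum w) / (sum wX^2 - Xstar^2 * sum w)
num = sum(wi * xi * yi for wi, xi, yi in zip(w, X, Y)) - Ystar * Xstar * sw
den = sum(wi * xi * xi for wi, xi in zip(w, X)) - Xstar ** 2 * sw
beta_hat = num / den
alpha_hat = Ystar - beta_hat * Xstar
```

With these weights the WLS estimates land close to the true α and β; observations with small σ_i (here, small X) get the most weight, exactly as the transformation by 1/σ_i prescribes.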
If instead σ_i² = K²X_i² (form (i) above), dividing the original model through by X_i gives
Y_i/X_i = β + α(1/X_i) + U_i/X_i ,
with Ε(U_i/X_i)² = (1/X_i²)K²X_i² = K², which proves that the new random term in the
model has a finite constant variance (= K²). We can, therefore, apply OLS to this
transformed model. Note that in this transformation the positions of the coefficients
have changed: the coefficient of the variable 1/X_i in the transformed model is the
constant intercept of the original model, while the constant term of the
transformed model is the coefficient of the explanatory variable X in the original
model. Therefore, to get back to the original model, we shall have to multiply the
estimated regression through by X_i.
Next, suppose the heteroscedasticity takes the form σ_i² = K²X_i (form (ii) above).
The transformed model is then obtained by dividing through by √X_i:
Y_i/√X_i = α(1/√X_i) + β√X_i + U_i/√X_i
Ε(U_i/√X_i)² = (1/X_i) Ε(U_i²) = (1/X_i) K²X_i = K²
⇒ constant variance; thus we can apply OLS to the transformed model.
There is no intercept term in the transformed model, so one will have to
use the 'regression through the origin' model to estimate α and β. In this case,
to get back to the original model, we shall have to multiply the
estimated regression through by √X_i.
A third assumed form is σ_i² = K²[Ε(Y_i)]². Dividing the model through by
Ε(Y_i) = α + βX_i gives:
Y_i/(α + βX_i) = α/(α + βX_i) + βX_i/(α + βX_i) + U_i/(α + βX_i) − − − − − (i)
Ε[U_i/(α + βX_i)]² = Ε(u_i)²/(α + βX_i)² = K²[Ε(Y_i)]²/[Ε(Y_i)]² = K²
The transformed model described in (i) above is, however, not operational in this
case, because the values of α and β are not known. But since we can obtain
Ŷ_i = α̂ + β̂X_i, the transformation can be made through the following two steps.
1st: we run the usual OLS regression disregarding the heteroscedasticity problem
in the data and obtain Ŷ. Using the estimated Ŷ, we transform the model as follows:
Y_i/Ŷ_i = α(1/Ŷ_i) + β(X_i/Ŷ_i) + U_i/Ŷ_i
2nd: we then estimate this regression on the transformed variables by OLS.
It should therefore be clear that in order to adopt the necessary corrective
measure (transforming the original data in such a way that the transformed
disturbance term possesses constant variance), we must have information on the form of
heteroscedasticity. Also, since the transformed data no longer possess
heteroscedasticity, it can be shown that the estimates from the transformed model are
more efficient (i.e., they possess smaller variance) than the estimates obtained from
applying OLS to the original data.
Let us assume that a test reveals that the original data possess heteroscedasticity, and
that heteroscedasticity of the form σ_i² = K²X_i² is being assumed.
Our original model is therefore:
Y_i = α + βX_i + U_i ,   Ε(U_i²) = σ_i² = K²X_i²
On transforming the original model we obtain:
Y_i/X_i = β + α(1/X_i) + U_i/X_i
In the transformed model β appears as the intercept and α as the slope on 1/X_i, so that
β̂* = mean(Y_i/X_i) − α̂ · mean(1/X_i)
and, since the transformed error term has the constant variance K²,
var(β̂*) = K² Σ(1/X_i)² / [ n Σ(1/X_i − mean(1/X_i))² ]
by analogy with the OLS intercept-variance formula var(α̂) = σ_u² ΣX_i² / (n Σx_i²).
4.2 Autocorrelation
4.2.1 The nature of Autocorrelation
In our discussion of the simple and multiple regression models, one of the
assumptions of the classicalists is that cov(u_i, u_j) = Ε(u_iu_j) = 0 for i ≠ j, which
implies that the disturbance term relating to one observation is not influenced by the
disturbance term relating to any other observation.
If this assumption is not satisfied, that is, if the value of U in any particular
period is correlated with its own preceding value(s), we say there is
autocorrelation of the random variables. Hence, autocorrelation is defined as
'correlation' between members of a series of observations ordered in time or space.
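A first-order autocorrelated series is easy to generate, and its sample autocorrelation makes the definition concrete. A sketch (pure Python; ρ = 0.7 and the sample size are hypothetical):

```python
import random

random.seed(5)

rho, n = 0.7, 5000
v = [random.gauss(0.0, 1.0) for _ in range(n)]

# U_t = rho * U_{t-1} + v_t : each disturbance carries part of its predecessor
U = [0.0] * n
for t in range(1, n):
    U[t] = rho * U[t - 1] + v[t]

# sample correlation between U_t and U_{t-1}
a, b = U[1:], U[:-1]
mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / len(a)
sd_a = (sum((x - mean_a) ** 2 for x in a) / len(a)) ** 0.5
sd_b = (sum((y - mean_b) ** 2 for y in b) / len(b)) ** 0.5
r = cov / (sd_a * sd_b)
```

The sample correlation `r` comes out close to the ρ used to generate the series, which is precisely the "correlation between members of the series" the definition refers to.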
[Figure: plots of the residuals U_i against time t, panels (a)-(e).]
Figures (a)-(d) above show systematic patterns in the U's, indicating
autocorrelation: (a) shows a cyclical pattern, (b) and (c) suggest upward and downward
linear trends, and (d) indicates a quadratic trend in the disturbance terms. Figure (e)
shows no systematic pattern, supporting the non-autocorrelation assumption of the
classical linear regression model.
The figures (f) and (g) above similarly indicate positive and negative
autocorrelation respectively, while (h) indicates no autocorrelation.
In general, if the disturbance terms follow a systematic pattern, as in (f) and (g),
there is autocorrelation or serial correlation; if there is no systematic pattern, there
is no autocorrelation.
There are several reasons why serial or autocorrelation arises. Some of these are:
a. Cyclical fluctuations
When economic recovery starts, most of these series move upward. In this upswing, the
value of a series at one point in time is greater than its previous value. Thus there
is a momentum built into them, and it continues until something happens (e.g., an
increase in interest rates or taxes) to slow them down. Therefore, in regressions
involving time-series data, successive observations are likely to be interdependent.
b. Specification bias
Let us see how specification bias causes autocorrelation.
i. Exclusion of relevant variables: Suppose the 'correct' demand model for beef is:
y_t = α + β_1 x_{1t} + β_2 x_{2t} + β_3 x_{3t} + U_t − − − − − − − − − − − − ------3.21
where y = consumption of beef, x_1 = price of beef, x_2 = consumer
income, x_3 = price of pork and t = time. Now, suppose we run the following
regression in lieu of (3.21):
y_t = α + β_1 x_{1t} + β_2 x_{2t} + V_t − − − − − − − − − − − − ------3.22
Now, if equation 3.21 is the 'correct' model or true relation, running equation
3.22 is tantamount to letting V_t = β_3 x_{3t} + U_t. To the extent that the price of
pork affects the consumption of beef, the error or disturbance term V_t will reflect
a systematic pattern, thus creating autocorrelation. A simple test of this would
be to run both equation 3.21 and equation 3.22 and see whether the autocorrelation,
if any, observed in equation 3.22 disappears when equation 3.21 is run. The
actual mechanics of detecting autocorrelation will be discussed later.
ii. Incorrect functional form: This is also one source of autocorrelation in the
error term. Suppose the 'true' or correct model in a cost-output study is as
follows:
Marginal cost_i = α_0 + β_1 output_i + β_2 output_i² + U_i − − − − − − − − − − − − 3.23
However, we incorrectly fit the following model:
Marginal cost_i = α_1 + α_2 output_i + V_i -------------------------------3.24
The marginal cost curve corresponding to the 'true' model is shown in the figure
below along with the 'incorrect' linear cost curve.
As the figure shows, between points A and B the linear marginal cost curve
will consistently overestimate the true marginal cost, whereas outside these
points it will consistently underestimate the true marginal cost. This result is
to be expected, because the disturbance term V_i is, in fact, equal to
β_2 output_i² + u_i, and hence will catch the systematic effect of the output² term on
the marginal cost. In this case, V_i will reflect autocorrelation because of the use
of an incorrect functional form.
iii. Neglecting lagged terms from the model: If the dependent variable of a
certain regression model is affected by the lagged value of itself or of the
explanatory variable, and this lagged term is not included in the model, the error term
of the incorrect model will reflect a systematic pattern which indicates
autocorrelation in the model. Suppose the correct model for consumption
expenditure is:
C_t = α + β_1 y_t + β_2 y_{t−1} + U_t -----------------------------------3.25
Ε(UU') is the n×n matrix whose (i, j)-th element is Ε(u_iu_j):

             Ε(u_1²)     Ε(u_1u_2)  .........  Ε(u_1u_n)
Ε(UU') =     Ε(u_2u_1)   Ε(u_2²)    .........  Ε(u_2u_n)
                :            :                     :
             Ε(u_nu_1)   Ε(u_nu_2)  .........  Ε(u_n²)

In the case of the assumptions of non-autocorrelation and homoscedasticity, this reduces to

             σ²  0  ........  0            1  0  ........  0
Ε(UU') =     0  σ²  ........  0    = σ²    0  1  ........  0    = σ² I_n  --------3.27
             :   :            :            :  :            :
             0   0  ........  σ²           0  0  ........  1
ρ̂ = Σ(t=2 to n) u_t u_{t−1} / Σ(t=2 to n) u²_{t−1}  --------------------------------3.31
ρ̂ = Σ(t=2 to n) u_t u_{t−1} / Σ(t=2 to n) u²_{t−1} ≈ Σ u_t u_{t−1} / √(Σu_t² Σu²_{t−1}) = r_{u_t, u_{t−1}}  (Why?)---------------------3.32

⇒ −1 ≤ ρ̂ ≤ 1, since −1 ≤ r ≤ 1 ---------------------------------------------3.33
This proves the statement “we can treat autocorrelation in the same way as
correlation in general”. From our statistics background we know that:
U_t = f(U_{t−1}) = ρU_{t−1} + v_t
U_{t−1} = f(U_{t−2}) = ρU_{t−2} + v_{t−1}
U_{t−2} = f(U_{t−3}) = ρU_{t−3} + v_{t−2}
U_{t−r} = f(U_{t−(r+1)}) = ρU_{t−(r+1)} + v_{t−r}
We make use of the above relations to perform continuous substitution in
U_t = ρU_{t−1} + v_t as follows:
U_t = ρU_{t−1} + v_t
    = ρ(ρU_{t−2} + v_{t−1}) + v_t ,  since U_{t−1} = ρU_{t−2} + v_{t−1}
    = ρ²U_{t−2} + ρv_{t−1} + v_t
    = ρ²(ρU_{t−3} + v_{t−2}) + ρv_{t−1} + v_t
U_t = ρ³U_{t−3} + ρ²v_{t−2} + ρv_{t−1} + v_t
In this way, if we continue the substitution process for r periods (assuming that r is
very large), we shall obtain:
U_t = v_t + ρv_{t−1} + ρ²v_{t−2} + ρ³v_{t−3} + − − − − − − − − -------------3.35
(ρ^r U_{t−r} → 0 as r grows, since |ρ| < 1)
U_t = Σ(r=0 to ∞) ρ^r v_{t−r} -----------------------------------------------------------3.36
Now, using this value of u t , let’s compute its mean, variance and covariance
1. To obtain mean
Ε(U_t) = Ε( Σ(r=0 to ∞) ρ^r v_{t−r} ) = Σ ρ^r Ε(v_{t−r}) = 0, since Ε(v_{t−r}) = 0 ----------3.37
In other words, we found that the mean of autocorrelated U’s turns out to be zero.
2. To obtain variance
var(U_t) = var( Σ(r=0 to ∞) ρ^r v_{t−r} ) = Σ(r=0 to ∞) ρ^{2r} var(v_{t−r}),
where var(v_{t−r}) = Ε(v_{t−r})² = σ_v²
= Σ(r=0 to ∞) ρ^{2r} σ_v² = σ_v² (1 + ρ² + ρ⁴ + ρ⁶ + ................) = σ_v² · 1/(1 − ρ²)
var(U_t) = σ_v²/(1 − ρ²) --------------------------------(3.38) ; since |ρ| < 1
Thus, the variance of the autocorrelated u_t is σ_v²/(1 − ρ²), which is a constant value.
From the above, the variance of U_t depends on the variance of v_t: if v_t is
homoscedastic, U_t is homoscedastic, and if v_t is heteroscedastic, U_t is
heteroscedastic.
3. To obtain covariance:
cov(U_t, U_{t−1}) = Ε(U_tU_{t−1}) = ρ(σ_v² + ρ²σ_v² + ......)
= ρσ_v²(1 + ρ² + ρ⁴ + ......)
= ρσ_v²/(1 − ρ²) , since |ρ| < 1 --------------------------------------------------------3.40
∴ cov(U_t, U_{t−1}) = ρσ_v²/(1 − ρ²) = ρσ_u² ……………………………………………….3.41
Similarly cov(u t , u t −2 ) = ρ 2σ u2 ………………………………………….3.42
cov(U t , U t −3 ) = ρ 3σ u2 ….........................................................................3.43
U_t ~ N( 0, σ_v²/(1 − ρ²) )  and  Ε(U_tU_{t−r}) ≠ 0 --------------------------------3.44
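The derived moments (3.38) and (3.41) can be checked numerically. A sketch in pure Python (ρ, σ_v, and the sample size are hypothetical):

```python
import random

random.seed(6)

rho, sigma_v, n = 0.6, 1.0, 200_000
U = [0.0] * n
for t in range(1, n):
    # U_t = rho * U_{t-1} + v_t with v_t ~ N(0, sigma_v^2)
    U[t] = rho * U[t - 1] + random.gauss(0.0, sigma_v)

U = U[1000:]  # drop a burn-in so the sample is effectively stationary
m = sum(U) / len(U)
var_u = sum((u - m) ** 2 for u in U) / len(U)
cov1 = sum((U[t] - m) * (U[t - 1] - m) for t in range(1, len(U))) / (len(U) - 1)

var_theory = sigma_v ** 2 / (1 - rho ** 2)  # equation 3.38
cov_theory = rho * var_theory               # equation 3.41: rho * sigma_u^2
```

The sample variance and first-order autocovariance land close to the theoretical values σ_v²/(1 − ρ²) and ρσ_u², confirming the derivation above.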
We have seen that the ordinary least squares technique is based on basic assumptions,
some of which concern the mean, variance, and covariance of the disturbance term.
Naturally, if these assumptions do not hold, the estimators derived by the OLS
procedure may not be efficient. We are now in a position to examine the effect of
autocorrelation on the OLS estimators. The following are the effects on the estimators
if the OLS method is applied in the presence of autocorrelation in the given data.
We know that β̂ = β + Σk_iu_i. The variance of β̂ in the simple regression model will
be biased downwards (i.e., underestimated) when the u's are autocorrelated. This can be
shown as follows. Since β̂ − β = Σk_iu_i,
var(β̂) = Ε(β̂ − β)² = Ε(Σk_iu_i)²
= Ε(k_1u_1 + k_2u_2 + ...... + k_nu_n)²
= Ε(Σk_i²u_i² + 2Σ_{i<j} k_ik_ju_iu_j)
= Σk_i² Ε(u_i²) + 2Σ_{i<j} k_ik_j Ε(u_iu_j)
In case the explanatory variable X of the model is random and the covariance of
its successive values is zero (Σx_ix_j = 0), the bias in var(β̂) will not be serious even
though u is autocorrelated.
The resulting large t-ratio may make β̂ appear statistically significant while in fact it
may not be.
4. The wrong testing procedure will make wrong prediction and inference about the
characteristics of the population.
There are two methods that are commonly used to detect the existence or absence
of autocorrelation in the disturbance terms. These are:
1. Graphic method
Dear distance student, you will recall from section 3.2.2 that autocorrelation can be
presented in graphs in two ways. Detection of autocorrelation using graphs will be
based on these two ways.
a. Apply OLS to the given data, whether autocorrelated or not, and obtain
the error terms. Plot e_{t−1} horizontally and e_t vertically, i.e., plot the
observations (e_1, e_2), (e_2, e_3), (e_3, e_4), ......., (e_{n−1}, e_n). If, on plotting,
it is found that most of the points fall in quadrants I and III, as shown in fig (a)
below, we say that the given data are autocorrelated and the type of
autocorrelation is positive autocorrelation. If most of the points fall in
quadrants II and IV, as shown in fig (b) below, the autocorrelation is said to
be negative. But if the points are scattered equally in all the quadrants, as
shown in fig (c) below, then we say there is no autocorrelation in the given
data.
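The quadrant check above amounts to counting the signs of the products e_t · e_{t−1}: positive products fall in quadrants I and III, negative ones in II and IV. A sketch on simulated positively autocorrelated residuals (all numbers hypothetical):

```python
import random

random.seed(7)

# positively autocorrelated "residuals": e_t = 0.8 * e_{t-1} + noise
e = [0.0]
for _ in range(400):
    e.append(0.8 * e[-1] + random.gauss(0.0, 1.0))
e = e[1:]

# positive products e_t * e_{t-1} correspond to points in quadrants I and III
products = [e[t] * e[t - 1] for t in range(1, len(e))]
share_q1_q3 = sum(1 for p in products if p > 0) / len(products)
```

A share well above one half points to positive autocorrelation, as in fig (a); well below one half would point to negative autocorrelation, as in fig (b).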
2. Formal testing method
This method is called formal because the test is based on the formal testing
procedures you have seen in your statistics course. It is based on the z-test, t-test,
F-test or χ²-test; if a test applies any of the above, it is called a formal testing
method. Different econometricians and statisticians suggest different types of
testing methods, but the most frequently and widely used by researchers are the
following.
A. Run test: Before going into the detailed analysis of this method, let us define what
a run is in this context. A run is an uninterrupted sequence of the same sign of the
error term, when the residuals are arranged in order according to the values of the
explanatory variable, like: "++++++++-------------++++++++------------++++++"
Under the null hypothesis that successive outcomes (here, residuals) are
independent, and assuming that n_1 > 10 and n_2 > 10, where n_1 is the number of
positive signs, n_2 the number of negative signs, and k the number of runs, k is
distributed (asymptotically) normally with:
Mean: Ε(k) = 2n_1n_2/(n_1 + n_2) + 1
Variance: σ_k² = 2n_1n_2(2n_1n_2 − n_1 − n_2) / [ (n_1 + n_2)²(n_1 + n_2 − 1) ]
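The test then reduces to counting runs and checking whether k falls inside the interval Ε(k) ± 1.96σ_k. A sketch on a hypothetical sign sequence with n_1, n_2 > 10:

```python
# hypothetical residual sign sequence: 13 pluses, 14 minuses, 13 pluses
signs = "+" * 13 + "-" * 14 + "+" * 13

n1 = signs.count("+")  # number of positive signs: 26
n2 = signs.count("-")  # number of negative signs: 14
# a run ends wherever the sign changes, so runs = sign changes + 1
k = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

mean_k = 2.0 * n1 * n2 / (n1 + n2) + 1
var_k = (2.0 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
lo = mean_k - 1.96 * var_k ** 0.5
hi = mean_k + 1.96 * var_k ** 0.5

# too few runs (long same-sign stretches) suggests positive autocorrelation
random_looking = lo <= k <= hi
```

Here k = 3 falls far below the interval around Ε(k) ≈ 19.2, so the independence hypothesis would be rejected at the 5% level.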
Since k = 5 clearly falls outside this interval, we can reject the hypothesis that the
observed sequence of residuals is random (i.e., independent) with 95% confidence.
B. The Durbin-Watson d test: The most celebrated test for detecting serial
correlation is the one developed by the statisticians Durbin and Watson. It is
popularly known as the Durbin-Watson d statistic, which is defined as:

d = Σ(t=2 to n) (e_t − e_{t−1})² / Σ(t=1 to n) e_t²  ------------------------------------3.47
1. The regression model includes an intercept term. If such a term is not present,
as in the case of regression through the origin, it is essential to rerun the
regression including the intercept term to obtain the RSS.
3. The disturbances U_t are generated by the first-order autoregressive scheme:
U_t = ρU_{t−1} + ε_t
4. The regression model does not include lagged value(s) of the dependent
variable Y as explanatory variables. Thus, the test is inapplicable
to models of the following type:
y_t = β_1 + β_2X_{2t} + β_3X_{3t} + ....... + β_kX_{kt} + γy_{t−1} + U_t
where y_{t−1} is the one-period lagged value of y; such models are known as
autoregressive models. If the d-test is applied mistakenly, the value of d in such
cases will often be around 2, which is the value of d in the absence of first-order
autocorrelation. Durbin developed the so-called h-statistic to test
serial correlation in such autoregressive models.
In using the Durbin-Watson test it is, therefore, important to note that it cannot be
applied if any of the above assumptions is violated.
Dear distance student, from equation 3.47 the value of d can be expanded as

d = [ Σ(t=2 to n) e_t² + Σ(t=2 to n) e²_{t−1} − 2Σ(t=2 to n) e_te_{t−1} ] / Σ(t=1 to n) e_t²  ------------------3.48
However, for large samples, Σ(t=2 to n) e_t² ≅ Σ(t=2 to n) e²_{t−1}, because the two
sums each differ from Σ(t=1 to n) e_t² by only one observation.
Hence, for large samples,
d ≈ (2Σe_t² − 2Σe_te_{t−1}) / Σe_t²
d ≈ 2( 1 − Σe_te_{t−1}/Σe_t² )
But ρ̂ = Σe_te_{t−1}/Σe²_{t−1} from equation 3.31, so that
d ≈ 2(1 − ρ̂)
if ρ̂ = 0, d ≅ 2
if ρ̂ = 1, d ≅ 0
if ρ̂ = −1, d ≅ 4
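A small sketch of the d statistic and its limiting cases (the residual vectors are hypothetical hand-checkable examples):

```python
def durbin_watson(e):
    """d = sum (e_t - e_{t-1})^2 / sum e_t^2  (equation 3.47)."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(et ** 2 for et in e)
    return num / den

# the extreme cases mirror the table above
d_pos = durbin_watson([1.0] * 20)        # perfectly persistent residuals: d = 0
d_neg = durbin_watson([1.0, -1.0] * 10)  # alternating signs: d near 4 (3.8 here)
d_mid = durbin_watson([1.0, 2.0, 3.0])   # tiny hand-checkable case: (1 + 1)/(1 + 4 + 9)
```

Identical residuals give d = 0 (ρ̂ near 1), alternating residuals push d toward 4 (ρ̂ near −1), and uncorrelated residuals would land near 2.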
However, because the exact distribution of d is not known, there exist ranges of values
within which we can either accept or reject the null hypothesis. We do not have a
unique critical value of the d-statistic; instead we have a lower bound d_L and an upper
bound d_U for the critical values of d used to accept or reject the null hypothesis.
For the two-tailed Durbin-Watson test, five regions are set for the values of d,
as depicted in the figure below.
The mechanics of the D.W. test are as follows, assuming that the assumptions
underlying the test are fulfilled.
Step 1: Obtain the computed value of d using the formula given in equation 3.47.
Step 2: For the given sample size and given number of explanatory variables, find
out the critical d_L and d_U values.
Step 3: Compare the computed value of d with d_L, d_U, (4 − d_L) and (4 − d_U).
For example, with d_L = 1.37 and d_U = 1.50:
(4 − d_L) = 4 − 1.37 = 2.63
(4 − d_U) = 4 − 1.50 = 2.50
Since d is less than d_L, we reject the null hypothesis of no autocorrelation.
Example 2. Consider the model Yt = α + βXt + Ut with the following observations on X and Y:
X 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Y 2 2 2 1 3 5 6 6 10 10 10 12 15 10 11
Solution:
1. Regress Y on X, i.e. Yt = α + βXt + Ut:
β̂ = Σxy/Σx² = 255/280 = 0.91
Ŷ = −0.29 + 0.91X
2. Compute the residuals et and the d-statistic:
d = Σ(et − et−1)²/Σet² = 60.213/41.767 = 1.442
3. For n = 15 and one explanatory variable, dU = 1.36, so that (4 − dU) = 2.64.
Since d lies between dU and (4 − dU), i.e. dU < d < 4 − dU, we accept H0. This implies the data are not autocorrelated.
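As a check, the computations of Example 2 can be reproduced in a short script. This is only a sketch; the variable names are our own choosing.

```python
# Sketch reproducing Example 2: OLS of Y on X and the d statistic.
# Data are taken from the table above.

X = list(range(1, 16))
Y = [2, 2, 2, 1, 3, 5, 6, 6, 10, 10, 10, 12, 15, 10, 11]

n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n
x = [xi - xbar for xi in X]          # deviations from the mean
y = [yi - ybar for yi in Y]

beta = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
alpha = ybar - beta * xbar           # Y-hat = -0.29 + 0.91 X

e = [Y[t] - (alpha + beta * X[t]) for t in range(n)]   # residuals
d = sum((e[t] - e[t - 1]) ** 2 for t in range(1, n)) / \
    sum(et ** 2 for et in e)
```

Running this gives β̂ = 255/280 ≈ 0.91 and d ≈ 1.442, matching the hand computation above.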
Although the D.W. test is extremely popular, the d-test has one great drawback: if the
computed value falls in the inconclusive zone, one cannot conclude whether or not
first-order autocorrelation exists.
In many situations, however, it has been found that the upper limit dU is
approximately the true significance limit. Thus, the modified D.W. test is based on
dU: in case the estimated d value lies in the inconclusive zone, one can use dU as if it were the true significance limit.
Since in the presence of serial correlation the OLS estimators are inefficient, it is
essential to seek remedial measures. The remedy, however, depends on what
knowledge one has about the nature of interdependence among the disturbances;
that is, the remedy depends on whether the coefficient of autocorrelation is
known or not known.
Yt = α + β X t + U t − − − − − − − − 3.49
Lagging (3.49) one period and multiplying through by ρ:
ρYt−1 = ρα + ρβXt−1 + ρUt−1 − − − − − − − − 3.51
Subtracting (3.51) from (3.49), and writing Vt = Ut − ρUt−1:
Yt − ρYt−1 = α(1 − ρ) + β(Xt − ρXt−1) + Vt − − − − − − − 3.53
Let: Yt* = Yt − ρYt−1
a = α − ρα = α(1 − ρ)
Xt* = Xt − ρXt−1
so that Yt* = a + βXt* + Vt − − − − − − − −(3.54)
It may be noted that in transforming equation (3.49) into (3.54), one observation
is lost because of the lagging and subtracting. We can apply OLS to
the transformed relation in (3.54) to obtain â and β̂, from which estimates of our two
parameters α and β are recovered.
α̂ = â/(1 − ρ), and it can be shown that
var(α̂) = (1/(1 − ρ))²·var(â)
var(â) = σu²·ΣXt*² / (n·Σ(Xt* − X̄*)²) ,  var(β̂) = σu² / Σ(Xt* − X̄*)²
The estimators obtained from (3.54) are efficient only if our sample size is large, so
that the loss of one observation becomes negligible.
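The generalized differencing transformation with a known ρ can be sketched as follows. The helper names (`quasi_difference`, `ols`) and the illustrative numbers are our own assumptions, not from the module; the data are constructed so the true α = 2 and β = 3 are known.

```python
# Sketch of generalized (quasi-) differencing when rho is known,
# following (3.54): regress Y*_t = Y_t - rho*Y_{t-1} on
# X*_t = X_t - rho*X_{t-1}, then recover alpha = a/(1 - rho).

def quasi_difference(series, rho):
    """Drop one observation and form the starred series."""
    return [series[t] - rho * series[t - 1] for t in range(1, len(series))]

def ols(xs, ys):
    """Simple OLS of ys on xs with intercept; returns (a, b)."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
        sum((x - xb) ** 2 for x in xs)
    return yb - b * xb, b

rho = 0.5
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Y = [2 + 3 * x for x in X]            # exact line: alpha = 2, beta = 3

Xs, Ys = quasi_difference(X, rho), quasi_difference(Y, rho)
a_hat, beta_hat = ols(Xs, Ys)         # one observation lost
alpha_hat = a_hat / (1 - rho)         # recover alpha = a/(1 - rho)
```

Because the data lie exactly on the line, the transformed regression returns the true parameters.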
When ρ is not known, the coefficient of autocorrelation must first be estimated. We now describe the methods through which this can be done.
Many times an investigator makes some reasonable guess about the value of the
autoregressive coefficient by using his knowledge or intuition about the
relationship under study. Many researchers usually assume that ρ = 1 or −1.
Under this method, the process of transformation is the same as when ρ is known.
With ρ = 1, the transformed model becomes the first-difference model:
(Yt − Yt−1) = β(Xt − Xt−1) + Vt ; where Vt = Ut − Ut−1
Note that the constant term is suppressed in the above. β̂ is obtained by merely
taking the first differences of the variables and fitting a line that passes through
the origin. Suppose that one assumes ρ = −1 instead of ρ = 1, i.e. the case of perfect
negative autocorrelation. In such a case, the transformed model becomes:
Yt + Yt−1 = 2α + β(Xt + Xt−1) + vt   or   (Yt + Yt−1)/2 = α + β(Xt + Xt−1)/2 + vt/2
This model is then called a two-period moving average regression model because
actually we are regressing one moving average, (Yt + Yt−1)/2, on another, (Xt + Xt−1)/2.
This method of first differences is quite popular in applied research for its
simplicity. But the method rests on the assumption that there is either perfect
positive or perfect negative autocorrelation in the data.
Recall that, for large samples,
d ≈ 2(1 − ρ̂)
Given the d-value, we can estimate ρ from this relation:
ρ̂ ≈ 1 − d/2
As already pointed out, ρ̂ will not be accurate if the sample size is small. The
above relationship is true only for large samples. For small samples, Theil and
Nagar have suggested the following relation:
ρ̂ = [n²(1 − d/2) + k²] / (n² − k²) ………………………………………………..3.55
We estimate ρ̂ from the above relation. With the estimated ρ̂, we transform the original observations and re-estimate the model, obtaining a second estimate of ρ.
We use this second-stage estimate of ρ to transform the original observations, and so on;
we keep proceeding until successive estimates of ρ converge. It can be
shown that the procedure is convergent. When the data are transformed only by
using the second-stage estimate of ρ, the procedure is called the two-stage Cochrane-Orcutt
method. Alternatively, one can, at each step of the iteration, apply the
Durbin-Watson d-statistic to the residuals to test for autocorrelation, continuing until the
estimates of ρ do not differ substantially from one another.
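The iterative Cochrane-Orcutt procedure described above can be sketched as follows. The helper names and demonstration data are our own; the example uses a simple deterministic alternating error pattern rather than real data, so the true slope (3) and intercept (2) are known.

```python
# Sketch of iterative Cochrane-Orcutt: estimate rho from OLS
# residuals, quasi-difference, re-estimate, and repeat until
# the estimates of rho converge.

def ols(xs, ys):
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
        sum((x - xb) ** 2 for x in xs)
    return yb - b * xb, b

def cochrane_orcutt(X, Y, tol=1e-6, max_iter=50):
    a, b = ols(X, Y)
    rho = 0.0
    for _ in range(max_iter):
        e = [y - a - b * x for x, y in zip(X, Y)]
        # rho-hat = sum(e_t e_{t-1}) / sum(e_{t-1}^2)
        new_rho = sum(e[t] * e[t - 1] for t in range(1, len(e))) / \
                  sum(e[t - 1] ** 2 for t in range(1, len(e)))
        Xs = [X[t] - new_rho * X[t - 1] for t in range(1, len(X))]
        Ys = [Y[t] - new_rho * Y[t - 1] for t in range(1, len(Y))]
        a_star, b = ols(Xs, Ys)
        a = a_star / (1 - new_rho)      # recover the intercept
        converged = abs(new_rho - rho) < tol
        rho = new_rho
        if converged:
            break
    return a, b, rho

X = [float(x) for x in range(1, 11)]
u = [0.1 if t % 2 == 0 else -0.1 for t in range(10)]  # alternating errors
Y = [2 + 3 * x + e for x, e in zip(X, u)]
a_hat, b_hat, rho_hat = cochrane_orcutt(X, Y)
```

With alternating errors, ρ̂ converges to a strongly negative value, and the transformed regression recovers the true coefficients closely.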
Method IV: Durbin’s two-stage method: Assuming the first-order autoregressive
scheme, Durbin suggests a two-stage procedure for resolving the serial correlation
problem. The steps under this method are:
Given Yt = α + βXt + Ut -----------------------------------(3.59)
with Ut = ρUt−1 + Vt
the generalized difference equation is
Yt − ρYt−1 = α(1 − ρ) + β(Xt − ρXt−1) + Ut − ρUt−1 ------(3.61)
Rearranging, with Vt = Ut − ρUt−1:
Yt = α(1 − ρ) + ρYt−1 + βXt − βρXt−1 + Vt
Yt = α* + ρYt−1 + βXt − γXt−1 + Vt
where α* = α(1 − ρ) and γ = βρ. In the first stage, this equation is estimated by OLS and the coefficient of Yt−1 provides an estimate of ρ; in the second stage, this ρ̂ is used to form the generalized differences and OLS is applied to the transformed model.
4.3 Multicollinearity
4.3.1 The nature of Multicollinearity
Originally, multicollinearity meant the existence of a “perfect”, or exact, linear
relationship among some or all explanatory variables of a regression model. For a k-
variable regression involving explanatory variables x1, x2, ......, xk, an exact linear
relationship is said to exist if the following condition is satisfied:
λ1x1 + λ2x2 + ....... + λkxk = 0 − − − − − − (1)
where λ1, λ2, ..... λk are constants such that not all of them are simultaneously zero.
Today, however, the term multicollinearity is used in a broader sense to include
the case of perfect multicollinearity, as shown by (1), as well as the case where the
x-variables are inter-correlated but not perfectly so, as follows:
λ1x1 + λ2x2 + ....... + λkxk + vi = 0 − − − − − − (2)
where vi is a stochastic error term.
Dear distance student, do you recall the formulas of β̂1 and β̂2 from our discussion
of multiple regression?
β̂1 = (Σx1y·Σx2² − Σx2y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²)
β̂2 = (Σx2y·Σx1² − Σx1y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²)
Now substitute the relation x2 = λx1 (equation 3.32), where λ is a non-zero constant, in the above β̂1 formula:
β̂1 = (Σx1y·Σ(λx1)² − Σλx1y·Σx1λx1) / (Σx1²·Σ(λx1)² − (Σx1λx1)²)
β̂1 = (λ²·Σx1y·Σx1² − λ²·Σx1y·Σx1²) / (λ²·(Σx1²)² − λ²·(Σx1²)²) = 0/0 ⇒ indeterminate.
Applying the same procedure, we obtain a similar result (an indeterminate value) for
β̂2. Likewise, from our discussion of the multiple regression model, the variance of β̂1 is
given by:
var(β̂1) = σ²Σx2² / (Σx1²·Σx2² − (Σx1x2)²)
Substituting x2 = λx1, the denominator becomes zero:
var(β̂1) = σ²λ²Σx1² / 0 = ∞ ⇒ infinite.
These are the consequences of perfect multicollinearity. One may raise the
question on consequences of less than perfect correlation. In cases of near or high
multicollinearity, one is likely to encounter the following consequences.
This proves that if we have less than perfect multicollinearity the OLS coefficients
are determinate.
The implication of the indeterminacy of the regression coefficients in the case of perfect
multicollinearity is that it is not possible to observe the separate influences of
x1 and x2. But such an extreme case is not very frequent in practical applications.
To see this, write the variance of β̂2 and multiply the numerator and the denominator by 1/Σx1²:
var(β̂2) = σ²Σx1² / (Σx1²·Σx2² − (Σx1x2)²)
= σ² / (Σx2² − (Σx1x2)²/Σx1²)
= σ² / (Σx2²(1 − r12²))
where r12² = (Σx1x2)²/(Σx1²·Σx2²) is the squared correlation coefficient between x1 and x2.
As r12 increases toward one, the variances and the covariance of the two estimators increase in
absolute value. The speed with which the variances and covariance increase can be
seen with the variance-inflating factor (VIF), which is defined as:
VIF = 1/(1 − r12²)
Using this definition we can express var(β̂1) and var(β̂2) in terms of the VIF:
var(β̂1) = (σ²/Σx1²)·VIF and var(β̂2) = (σ²/Σx2²)·VIF
which shows that the variances of β̂1 and β̂2 are directly proportional to the VIF.
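A minimal sketch of computing the VIF from sample data, under the two-regressor setup above; the function names are ours.

```python
# Sketch: VIF = 1/(1 - r12^2) for a two-regressor model,
# computed from the sample correlation of x1 and x2.

def correlation(x1, x2):
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    den = (sum((a - m1) ** 2 for a in x1) *
           sum((b - m2) ** 2 for b in x2)) ** 0.5
    return num / den

def vif(x1, x2):
    r12 = correlation(x1, x2)
    return 1.0 / (1.0 - r12 ** 2)
```

Orthogonal regressors give VIF = 1 (no inflation), while a near-collinear pair such as x2 ≈ 2x1 gives a very large VIF.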
4. Because of the large variances of the estimators, which mean large standard
errors, the confidence intervals tend to be much wider, leading to the acceptance of
the “zero null hypothesis” (i.e. the true population coefficient is zero) more readily.
5. Because of the large standard errors of the estimators, the computed t-ratios will be
very small, leading one or more of the coefficients to be statistically
insignificant when tested individually.
6. Although the t-ratios of one or more of the coefficients are very small (which
makes the coefficients statistically insignificant individually), R², the overall
measure of goodness of fit, can be very high.
Example: if y = α + β1x1 + β2x2 + .... + βkxk + vi
In cases of high collinearity, it is possible to find that one or more of the partial
slope coefficients are individually statistically insignificant on the basis of the t-test.
But the R² in such situations may be so high, say in excess of 0.9, that on
the basis of the F-test one can convincingly reject the hypothesis that
β1 = β2 = − − − = βk = 0. Indeed, this is one of the signals of multicollinearity:
insignificant t-values but a high overall R² (i.e. a significant F-value).
7. The OLS estimators and their standard errors can be sensitive to small changes
in the data.
Dear Readers! These are the major consequences of near or high multicollinearity.
If you have any comments or suggestions, you are welcome!
ii. A high rxixj is a sufficient but not a necessary condition
for the existence of multicollinearity, because multicollinearity can also exist even
if the pairwise correlation coefficients are low.
However, the combination of all these criteria should help the detection of
multicollinearity.
the X’s. Then, following the relationship between F and R² established in chapter
three under overall significance, the variable
Fi = [R²x1·x2,x3,...,xk / (k − 2)] / [(1 − R²x1·x2,x3,...,xk) / (n − k + 1)] ~ F(k−2, n−k+1)
4.3.4.2 The Farrar-Glauber test - They use three statistics for testing
multicollinearity: chi-square, F-ratio and t-ratio. This test may be
outlined in three steps.
A. Computation of χ² to test orthogonality: two variables are called orthogonal
if rxixj = 0, i.e. if there is no collinearity between them. For the three-explanatory-variable case, the determinant of the sums of squares and cross-products of the x’s is

| Σx1²   Σx1x2   Σx1x3 |
| Σx2x1  Σx2²    Σx2x3 |
| Σx3x1  Σx3x2   Σx3²  |

Standardizing each element Σxixj by √(Σxi²·Σxj²) yields the determinant of the correlation matrix:

| 1    r12   r13 |
| r12  1     r23 |
| r13  r23   1   |
distribution with ½k(k−1) df. If the computed χ² is greater than the critical value
of χ², reject H0 in favour of multicollinearity. But if it is less, then accept H0.
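A sketch of this first Farrar-Glauber step for k = 3 regressors. We assume the standard form of the statistic, χ² = −[n − 1 − (2k + 5)/6]·ln|R|, with ½k(k−1) degrees of freedom; the function names are ours.

```python
# Sketch of the Farrar-Glauber chi-square statistic for k = 3:
# chi2 = -[n - 1 - (2k + 5)/6] * ln|R|, where |R| is the
# determinant of the correlation matrix, with k(k-1)/2 df.

import math

def det3(R):
    """Determinant of a 3x3 matrix (here a correlation matrix)."""
    return (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
            - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
            + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))

def farrar_glauber_chi2(R, n, k=3):
    return -(n - 1 - (2 * k + 5) / 6) * math.log(det3(R))

# Orthogonal regressors: |R| = 1, so chi2 = 0 (no collinearity).
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Highly intercorrelated regressors: |R| near 0, chi2 large.
R_high = [[1.0, 0.9, 0.9], [0.9, 1.0, 0.9], [0.9, 0.9, 1.0]]
chi2_high = farrar_glauber_chi2(R_high, n=30)
```

With n = 30 the highly collinear case gives a χ² far above any conventional critical value at 3 df, so H0 (orthogonality) would be rejected.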
H0: rxixj·x1,x2,x3,...xk = 0  against  H1: rxixj·x1,x2,x3,...xk ≠ 0
t* = rxixj·x1,x2,...,xk·√(n − k) / √(1 − r²xixj·x1,x2,...,xk)  (How?)
In addition, using these values we can derive the condition index (CI), defined as:
CI = √(Maximum eigenvalue / Minimum eigenvalue) = √k
where k is the condition number. We can also write the variance of each coefficient in terms of the VIF:
var(β̂i) = (σ²/Σxi²)·(1/(1 − Ri²)) = (σ²/Σxi²)·VIF
As seen earlier, var(β̂i) = (σ²/Σxi²)·(VIF) depends on three factors: σ², Σxi² and the VIF. A
high VIF can be counterbalanced by a low σ² or a high Σxi². To put it differently, a
high VIF is neither necessary nor sufficient to get high variances and high
standard errors. Therefore, high multicollinearity, as measured by a high VIF, may
not necessarily cause high standard errors.
4.3.5 Remedial measures
It is more difficult to deal with models indicating the existence of multicollinearity
than to detect the problem of multicollinearity. Different remedial measures have
been suggested by econometricians, depending on the severity of the problem, the
availability of other sources of data and the importance of the variables which are
found to be multicollinear in the model.
Some suggest that a minor degree of multicollinearity can be tolerated, although one
should be a bit careful while interpreting the model under such conditions. Others
suggest removing the variables that show multicollinearity if they are not important in
the model. But, by doing so, the desired characteristics of the model may then get
affected. However, the following corrective procedures have been suggested if the
problem of multicollinearity is found to be serious.
1. Increase the size of the sample: it is suggested that multicollinearity may be
avoided or reduced if the size of the sample is increased, since the covariances of
the estimators are inversely related to the sample size. But we
should remember that this will be true only when the intercorrelation happens to exist
in the sample but not in the population of the variables. If the variables are
collinear in the population, the procedure of increasing the size of the sample will
not help to reduce multicollinearity.
2. Introduce additional equations in the model: The problem of multicollinearity
may be overcome by expressing explicitly the relationship between the multicollinear
variables. Such relation in a form of an equation may then be added to the original
model. The addition of new equation transforms our single equation (original)
model to simultaneous equation model. The reduced form method (which is
usually applied for estimating simultaneous equation models) can then be applied
to avoid multicollinearity.
3. Use extraneous information – Extraneous information is information
obtained from any source outside the sample which is being used for the
estimation. Extraneous information may be available from economic theory or
from some empirical studies already conducted in the field in which we are
interested. There are three methods through which extraneous information is utilized in
order to deal with the problem of multicollinearity.
a. Method of using prior information: Suppose that the correct specification
of the model is Y = α + β 1 X 1 + β 2 X 2 + U i , and also X 1 and X 2 are found to
be collinear. If it is possible to gather information on the exact value of
β1 or β 2 from extraneous source, we then make use of such information in
estimating the influence of the remaining variable of the model in the
following way.
Suppose β2* is known a priori; then:
Y − β 2* X 2 = α + β 1 X 1 + U
Q* = A * +αL * + βK * +U
The asterisk indicates logs of the variables. Suppose, it is observed that K and L
move together so closely that it is difficult to separate the effect of changing
quantities of labor inputs on output from the effect of variation in the use of
capital. Again, let us assume that on the basis of information from some other
source, we have a solid evidence that the present industry is characterized by
constant returns to scale. This implies that α + β = 1; we can therefore, on the basis
of this information, substitute β = (1 − α) in the transformed function. On
combining the results, the relationship becomes:
D̂t = α̂ + β̂1Pt + β2*Yt + ût
where β̂1 is derived from the time-series data and β2* is obtained from the cross-
section data. By following this pooling technique, we have skirted the
multicollinearity between income and price.
The methods described above are no sure methods to get rid of the problem of
multicollinearity. Which of these rules work in practice will depend on the nature
of the data under investigation and severity of the multicollinearity problem.
Chapter Five
Regression on Dummy Variables
Example: Yi = α + βDi + ui ------------------------------------------(5.01)
where Y = annual salary of a college professor
Di = 1 if male college professor
   = 0 otherwise
Model (5.01) may enable us to find out whether sex makes any difference in a
college professor’s salary, assuming, of course, that all other variables such as age,
degree attained, and years of experience are held constant. Assuming that the
disturbances satisfy the usual assumptions of the classical linear regression
model, we obtain from (5.01):
Mean salary of female college professor: E (Yi / Di = 0) = α -------(5.02)
Mean salary of male college professor: E (Yi / Di = 1) = α + β
that is, the intercept term α gives the mean salary of female college professors
and the slope coefficient β tells by how much the mean salary of a male college
professor differs from the mean salary of his female counterpart, α + β reflecting
the mean salary of the male college professor. A test of the null hypothesis that
there is no sex discrimination ( H 0 : β = 0) can be easily made by running
regression (5.01) in the usual manner and finding out whether on the basis of the t
test the estimated β is statistically significant.
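A small numeric sketch of model (5.01): with a single 0/1 dummy, OLS reproduces the two group means. The salary figures below are invented for illustration, and the variable names are ours.

```python
# Sketch of (5.01): regressing salary on a 0/1 dummy. OLS gives
# alpha-hat = mean salary of the D=0 group (female) and
# beta-hat = the male-female difference in means.

D = [0, 0, 0, 1, 1, 1]
Y = [18.0, 19.0, 20.0, 22.0, 23.0, 24.0]   # hypothetical salaries

n = len(D)
db, yb = sum(D) / n, sum(Y) / n
beta = sum((d - db) * (y - yb) for d, y in zip(D, Y)) / \
       sum((d - db) ** 2 for d in D)
alpha = yb - beta * db

mean_female = sum(y for d, y in zip(D, Y) if d == 0) / D.count(0)
mean_male = sum(y for d, y in zip(D, Y) if d == 1) / D.count(1)
```

Here α̂ equals the female group mean and α̂ + β̂ equals the male group mean, exactly as equation (5.02) states.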
5.2 Regression on one quantitative variable and one qualitative variable with
two classes, or categories
Consider the model: Yi = α1 + α2Di + βXi + ui ----------------------------(5.03)
Di = 1 if male
=0 otherwise
Model (5.03) contains one quantitative variable (years of teaching experience) and
one qualitative variable (sex) that has two classes (or levels, classifications, or
categories), namely, male and female. What is the meaning of this equation?
Assuming, as usual, that E (u i ) = 0, we see that
Mean salary of female college professor: E(Yi / Xi, Di = 0) = α1 + βXi ---------(5.04)
Mean salary of male college professor: E(Yi / Xi, Di = 1) = (α1 + α2) + βXi ------(5.05)
Geometrically, we have the situation shown in fig. 5.1 (for illustration, it is
assumed that α1 > 0). In words, model (5.03) postulates that the male and female
college professors’ salary functions in relation to the years of teaching experience
have the same slope (β) but different intercepts. In other words, it is assumed that
the level of the male professor’s mean salary is different from that of the female
professor’s mean salary (by α2) but the rate of change in the mean annual salary
by years of experience is the same for both sexes.
If the assumption of common slopes is valid, a test of the hypothesis that the two
regressions (5.04) and (5.05) have the same intercept (i.e., there is no sex
discrimination) can be made easily by running the regression (5.03) and noting the
statistical significance of the estimated α 2 on the basis of the traditional t test. If
the t test shows that α̂ 2 is statistically significant, we reject the null hypothesis that
the male and female college professors’ levels of mean annual salary are the same.
Before proceeding further, note the following features of the dummy variable
regression model considered previously.
5.3 Regression on one quantitative variable and one qualitative variable with
more than two classes
Suppose that, on the basis of the cross-sectional data, we want to regress the
annual expenditure on health care by an individual on the income and education of
the individual. Since the variable education is qualitative in nature, suppose we
consider three mutually exclusive levels of education: less than high school, high
school, and college. Now, unlike the previous case, we have more than two
categories of the qualitative variable education. Therefore, following the rule that
the number of dummies be one less than the number of categories of the variable,
we should introduce two dummies to take care of the three levels of education.
Assuming that the three educational groups have a common slope but different
intercepts in the regression of annual expenditure on health care on annual income,
we can use the following model:
Yi = α1 + α2D2i + α3D3i + βXi + ui --------------------------(5.06)
where D2 = 1 if high school education
         = 0 otherwise
      D3 = 1 if college education
         = 0 otherwise
Note that in the preceding assignment of the dummy variables we are arbitrarily
treating the “less than high school education” category as the base category.
Therefore, the intercept α 1 will reflect the intercept for this category. The
differential intercepts α 2 and α 3 tell by how much the intercepts of the other two
categories differ from the intercept of the base category, which can be readily
checked as follows: Assuming E (u i ) = 0 , we obtain from (5.06)
E (Yi | D2 = 0, D3 = 0, X i ) = α 1 + β X i
E (Yi | D2 = 1, D3 = 0, X i ) = (α 1 + α 2 ) + β X i
E (Yi | D2 = 0, D3 = 1, X i ) = (α 1 + α 3 ) + β X i
which are, respectively the mean health care expenditure functions for the three
levels of education, namely, less than high school, high school, and college.
Geometrically, the situation is shown in fig 5.2 (for illustrative purposes it is
assumed that α 3 > α 2 ).
D2 = 1 if female
=0 otherwise
D3 = 1 if white
=0 otherwise
Notice that each of the two qualitative variables, sex and color, has two categories
and hence needs one dummy variable for each. Note also that the omitted, or base,
category now is “black female professor.”
Assuming E (u i ) = 0 , we can obtain the following regression from (5.07)
Mean salary for black female professor:
E (Yi | D2 = 0, D3 = 0, X i ) = α 1 + β X i
Once again, it is assumed that the preceding regressions differ only in the intercept
coefficient but not in the slope coefficient β .
An OLS estimation of (5.07) will enable us to test a variety of hypotheses. Thus,
if α 3 is statistically significant, it will mean that color does affect a professor’s
salary. Similarly, if α 2 is statistically significant, it will mean that sex also affects
a professor’s salary. If both these differential intercepts are statistically
significant, it would mean sex as well as color is an important determinant of
professors’ salaries.
From the preceding discussion it follows that we can extend our model to include
more than one quantitative variable and more than two qualitative variables. The
only precaution to be taken is that the number of dummies for each qualitative
variable should be one less than the number of categories of that variable.
D2 = 1 if female
= 0 if male
D3 = 1 if college graduate
= 0 otherwise
Implicit in this model is the assumption that the differential effect of the sex
dummy D2 is constant across the two levels of education and the differential
effect of the education dummy D3 is also constant across the two sexes. That is, if,
say, the mean expenditure on clothing is higher for females than males this is so
whether they are college graduates or not. Likewise, if, say, college graduates on
the average spend more on clothing than non college graduates, this is so whether
they are female or males.
It pays commissions based on sales in such manner that up to a certain level, the
target, or threshold, level X*, there is one (stochastic) commission structure and
beyond that level another. (Note: Besides sales, other factors affect sales
commission. Assume that these other factors are represented by the stochastic
disturbance term.) More specifically, it is assumed that sales commission increases
linearly with sales until the threshold level X*, after which also it increases
linearly with sales but at a much steeper rate. Thus, we have a piece-wise linear
regression consisting of two linear pieces or segments, which are labeled I and II
in fig. 5.3, and the commission function changes its slope at the threshold value.
Given the data on commission, sales, and the value of the threshold level X*, the
technique of dummy variables can be used to estimate the (differing) slopes of the
two segments of the piecewise linear regression shown in fig. 5.3. We proceed as
follows:
Yi = α1 + β1Xi + β2(Xi − X*)Di + ui ------------------------------------(5.11)
where Di = 1 if Xi > X* and Di = 0 otherwise. Assuming E(ui) = 0, we see that
E(Yi | Di = 0, Xi, X*) = α1 + β1Xi ----------------------(5.12)
which gives the mean sales commission up to the target level X*, and
E(Yi | Di = 1, Xi, X*) = α1 − β2X* + (β1 + β2)Xi ----------------------(5.13)
which gives the mean sales commission beyond the target level X*.
Thus, β1 gives the slope of the regression line in segment I, and β1 + β2 gives the
slope of the regression line in segment II of the piecewise linear regression shown
in fig 5.3. A test of the hypothesis that there is no break in the regression at the
threshold value X* can be conducted easily by noting the statistical significance of
the estimated differential slope coefficient β̂ 2 .
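The piecewise regression (5.11) can be sketched numerically: we build the extra regressor (Xi − X*)Di and solve the normal equations. All data and helper names here are invented, chosen so that the true segment slopes (1 below X*, 3 above) are known.

```python
# Sketch of the piecewise (spline) regression (5.11):
# Y = a1 + b1*X + b2*(X - Xstar)*D + u, with D = 1 when X > Xstar.
# Fit by solving the 3x3 normal equations.

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 system."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

Xstar = 5.0
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
# slope 1 up to Xstar, slope 3 beyond it (no noise, for clarity)
Y = [2 + x if x <= Xstar else 2 + Xstar + 3 * (x - Xstar) for x in X]

Z = [(x - Xstar) if x > Xstar else 0.0 for x in X]   # (X - X*)D
cols = [[1.0] * len(X), X, Z]
A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
rhs = [sum(c * y for c, y in zip(ci, Y)) for ci in cols]
a1, b1, b2 = solve3(A, rhs)   # segment II slope is b1 + b2
```

The fit recovers β1 = 1 for segment I and β1 + β2 = 3 for segment II; a significant β̂2 would indicate a break in slope at X*.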
Summary:
1. Dummy variables taking values of 1 and 0 (or their linear transforms) are a
means of introducing qualitative regressors in regression analysis.
2. Dummy variables are a data-classifying device in that they divide a sample
into various subgroups based on qualities or attributes (sex, marital status,
race, religion, etc.) and implicitly allow one to run individual regressions
for each subgroup. If there are differences in the response of the regressand
to the variation in the quantitative variables in the various subgroups,
they will be reflected in the differences in the intercepts or slope
coefficients, or both, of the various subgroup regressions.
3. Although a versatile tool, the dummy variable technique needs to be
handled carefully. First, if the regression contains a constant term, the
number of dummy variables must be one less than the number of classifications
of each qualitative variable. Second, the coefficient attached to the dummy
variables must always be interpreted in relation to the base, or reference,
group, that is, the group that gets the value of zero. Finally, if a model has
several qualitative variables with several classes, introduction of dummy
variables can consume a large number of degrees of freedom. Therefore,
one should always weigh the number of dummy variables to be introduced
against the total number of observations available for analysis.
4. Among its various applications, this chapter considered but a few. These
included (1) comparing two (or more) regressions, (2) deseasonalizing time
series data, (3) combining time series and cross-sectional data, and (4)
piecewise linear regression models.
5. Since the dummy variables are non stochastic, they pose no special
problems in the application of OLS. However, care must be exercised in
transforming data involving dummy variables. In particular, the problems
of autocorrelation and heteroscedasticity need to be handled very carefully.
renovation, 0 otherwise.
D3 = type of theater: 1 if outdoor, 0 if indoor
Chapter Six
6.1 Introduction
While considering the standard regression model, we did not pay attention to the
timing of the effect of the explanatory variable(s) on the dependent variable. The standard
linear regression implies that a change in one of the explanatory variables causes a
change in the dependent variable during the same time period, and during that
period alone. But in economics, such a specification is scarcely found. In economic
phenomena, generally, a cause often produces its effect only after a lapse of time;
this lapse of time (between a cause and its effect) is called a lag. Therefore, realistic
formulations of economic relations often require the insertion of lagged values of
the explanatory variables or of the lagged dependent variable.
is a distributed lag model of consumption function. This means that the value of
the consumption expenditure (C t ) at any given time depends on the current and
past values of the disposable income (Yt ) . The general form of a distributed lag
model (with only lagged exogenous variables) is written as:
Yt = α + β0Xt + β1Xt−1 + β2Xt−2 + − − − − + βsXt−s + Ut
The number of lags, s, may be either finite or infinite. But generally it is assumed
to be finite. The coefficient β 0 is known as the short run, or impact, multiplier
because it gives the change in mean value of Y following a unit change in X in the
same time period t. If the change in X is maintained at the same level thereafter,
then (β0 + β1) gives the change in the (mean value of) Y in the next period, (β0 + β1 +
β2) in the following period, and so on. These partial sums are called interim, or
intermediate, multipliers. Finally, after s periods we obtain
Σ(i=0 to s) βi = β0 + β1 + β2 + − − − − − − + βs = β
which is known as the long-run distributed-lag multiplier, provided the sum β exists.
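The impact, interim, and long-run multipliers are just cumulative sums of the β's, as a small sketch with hypothetical coefficients shows:

```python
# Sketch: impact, interim, and long-run multipliers of a finite
# distributed-lag model, as cumulative sums of the beta's.
# The coefficients are hypothetical.

betas = [0.4, 0.3, 0.2, 0.1]           # beta_0 ... beta_s

interim = []                            # beta_0, beta_0+beta_1, ...
total = 0.0
for b in betas:
    total += b
    interim.append(total)

impact = betas[0]                       # short-run (impact) multiplier
long_run = interim[-1]                  # sum of all beta_i
```

With these numbers the impact multiplier is 0.4 and the long-run multiplier is 1.0, so the full effect of a sustained unit change in X is felt after s = 3 further periods.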
succeeding periods my income returns to its previous level, I may save the
entire increase, whereas someone else in my position might decide to “live
it up”.
2. Technological reasons: suppose, for instance, the price of capital relative
to labor declines, making substitution of capital for labor economically
feasible. Of course, addition of capital takes time (the gestation period).
Moreover, if the drop in price is expected to be temporary, firms may not
rush to substitute capital for labor, especially if they expect that after the
temporary drop the price of capital may increase beyond its previous
level.
3. Institutional reasons: These reasons also contribute to lags. For example,
contractual obligations may prevent firms from switching from one source
of labor or raw material to another. As another example, those who have
placed funds in long term saving accounts for fixed durations such as one
year, three years or seven years are essentially “locked” in even though
money market conditions may be such that higher yields are available.
In (6.01) the length of the lag, that is, how far back into the past we want to go,
has not been defined. Such a model is called an infinite (lag) model, whereas
models with a specified lag length are called finite (lag) distributed-lag models.
How do we estimate α and β in (6.01)? We may adopt two approaches:
I. Ad Hoc estimation of distributed-lag models
II. A priori restriction on β ' s by assuming that the β ' s follow some
systematic pattern.
will continue until the regression coefficients of the lagged variables start
becoming statistically insignificant and/or the coefficient of at least one of the
variables changes sign from positive to negative or vice versa. Consider the
following hypothetical example.
Ŷt = 8.37 + 0.17Xt
Proponents of this approach would choose the second regression as the “best” one because
in the last two equations the sign of Xt−2 was not stable, and in the last equation the
sign of Xt−3 was negative, which may be difficult to interpret economically.
Although seemingly straight forward, ad hoc estimation suffers from many
drawbacks, such as the following:
a. There is no guide as to what is the maximum lag length
b. As one estimates successive lags, there are fewer degrees of freedom
left, making statistical inference somewhat shaky
c. More importantly, in economic time series data, successive values
(lags) tend to be highly correlated; hence multicollinearity rears its
ugly head.
d. The sequential search for the lengths of lags opens the researcher to
the charge of data mining.
In view of the preceding problems, the ad hoc estimation procedure has very little
to recommend it. Some prior or theoretical considerations must be brought to bear upon
the various β’s if we are to make headway with the estimation problem.
Ut ~ N(0, σ²u)
E(uiuj) = 0 for i ≠ j
E(uixi) = 0
According to Koyck: βi = λ^i·β0
⇒ β1 = λβ0, β2 = λ²β0, and so on.
λ is known as the rate of decline, or decay, of the distributed lag, and 1 − λ is
known as the speed of adjustment. By assuming non-negative values for λ, Koyck
rules out the β's from changing sign, and by assuming λ < 1, he gives less weight
to the distant β's than to the current one. Also, the long-run multiplier is a
finite amount in the Koyck scheme:
Σ (from i = 0 to ∞) βi = β0 · 1/(1 − λ)
Substituting the values of β ’s in the original model we obtain:
Yt = α + β0Xt + (λβ0)Xt−1 + (λ²β0)Xt−2 + ... + Ut
Let α* = α(1 − λ) and Vt = Ut − λUt−1. Subtracting λYt−1 from Yt (the Koyck
transformation) gives:
Yt = α* + β0Xt + λYt−1 + Vt
Note that the new error term Vt is serially correlated:
E(VtVt−1) = E[(Ut − λUt−1)(Ut−1 − λUt−2)] = −λE(U²t−1) = −λσ²u ≠ 0
c. The lagged variable Yt−1 is also not independent of the error term Vt, i.e.
E(VtYt−1) ≠ 0. This is because Yt−1 depends directly on Ut−1, and Vt = Ut − λUt−1
also contains Ut−1.
Due to these two problems, the Koyck transformation of the distributed lag model
will give rise to biased and inconsistent estimates. In addition to these
estimation problems, the Koyck hypothesis is quite restrictive in the sense that
it assumes that the impact of past periods declines successively in a specific
way. But the following patterns are also possible:
1. β0 > β1 > β2 > β3 > ...
2. β0 = β1 = β2 = β3 = ...
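The mechanics of the transformed model can be illustrated by simulation. The parameter values below are assumed for illustration; note that, as discussed above, OLS on this form is not strictly consistent because Vt is correlated with Yt−1, though with a small error variance the distortion is minor:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta0, lam, T = 4.0, 0.8, 0.5, 5000   # assumed true values

# the infinite Koyck lag beta0*(x_t + lam*x_{t-1} + lam^2*x_{t-2} + ...)
# can be built recursively: s_t = x_t + lam * s_{t-1}
x = rng.normal(size=T)
u = 0.1 * rng.normal(size=T)
s = np.zeros(T)
for t in range(1, T):
    s[t] = x[t] + lam * s[t - 1]
y = alpha + beta0 * s + u

# OLS on the transformed form: regress y_t on a constant, x_t and y_{t-1}
X = np.column_stack([np.ones(T - 1), x[1:], y[:-1]])
a_star, b0_hat, lam_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]

long_run_multiplier = b0_hat / (1 - lam_hat)   # estimate of beta0/(1 - lam)
```

With these assumed values the long-run multiplier β0/(1 − λ) = 0.8/0.5 = 1.6, and the OLS estimates come out close to the truth only because σu was chosen small; the bias identified above does not vanish as T grows.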
What equation (ii) implies is that “economic agents will adapt their expectations
in the light of past experience and that in particular they will learn from their
mistakes.” More specifically, (ii) states that expectations are revised each period
by a fraction γ of the gap between the current value of the variable and its
previous expected value. Thus, for our model this would mean that expectations
about interest rates are revised each period by a fraction γ of the discrepancy
between the rate of interest observed in the current period and what its anticipated
value had been in the previous period. Another way of stating this would be to
write (ii) as: X t* = γX t + (1 − γ ) X t*−1 -------------------------------------------------(iii)
which shows that the expected value of the rate of interest at time t is a weighted
average of the actual value of the interest rate at time ‘t’ and its value expected in
the previous period, with weights of ‘ γ ’ and ‘1- γ ’, respectively. If γ =1,
X t* = X t , meaning that expectations are realized immediately and fully, that is, in
the same time period. If, on the other hand, γ =0, X t* = X t*−1 , meaning that
expectations are static, that is, “conditions prevailing today will be maintained in
all subsequent periods. Expected future values then become identified with
current values.” Substituting (iii) into (i), we obtain
Yt = β0 + β1[γXt + (1 − γ)X*t−1] + ut ----------------------------------------(iv)
Now, lag equation (i) by one period, multiply it by 1- γ , and subtract the product
from (iv). After simple algebraic manipulations, we obtain:
Yt = γβ0 + γβ1Xt + (1 − γ)Yt−1 + vt --------------------------------------------(v)
where vt = ut − (1 − γ)ut−1.
Let us note the difference between (i) and (v). In the former, β1 measures the
average response of Y to a unit change in X*, the equilibrium or long-run value of
X. In (v), on the other hand, γβ1 measures the average response of Y to a unit
change in the actual or observed value of X. These responses will not be the same
unless, of course, γ =1, that is, the current and long-run values of X are the same.
In practice, we first estimate (v). Once an estimate of γ is obtained from the
coefficient of lagged Y, we can easily compute β1 by simply dividing the
coefficient of X t (= γβ 1 ) by γ .
Note that like the Koyck model, the adaptive expectations model is autoregressive
and its error term is similar to the Koyck error term.
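Backing out the long-run parameters from the estimates of (v) is simple arithmetic. The numbers below are hypothetical, purely for illustration:

```python
# suppose fitting (v) gave: Y_t = 1.2 + 0.3 X_t + 0.4 Y_{t-1} + v_t  (hypothetical)
c0, c1, c2 = 1.2, 0.3, 0.4

gamma = 1 - c2        # the coefficient of Y_{t-1} is (1 - gamma)
beta1 = c1 / gamma    # c1 = gamma * beta1, so divide by gamma
beta0 = c0 / gamma    # likewise for the intercept
```

Here γ = 0.6, so the long-run response β1 = 0.5 is larger than the short-run coefficient 0.3, exactly as the discussion above leads us to expect whenever γ < 1.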
Since the desired level of capital is not directly observable, Nerlove postulates the
following hypothesis, known as the partial adjustment, or stock adjustment,
hypothesis:
Yt − Yt −1 = δ (Yt* − Yt −1 ) --------------------------------------------(2)
where δ , such that 0 < δ ≤ 1 , is known as the coefficient of adjustment and where
Yt − Yt −1 = actual change and (Yt* − Yt −1 ) =desired change.
Since Yt − Yt−1, the change in capital stock between two periods, is nothing but
investment, (2) can alternatively be written as:
I t = δ (Yt* − Yt −1 ) ----------------------------------------------------------------(3)
Equation (2) postulates that the actual change in capital stock (investment) in any
given time period t is some fraction δ of the desired change for that period. If
δ =1, it means that the actual stock of capital is equal to the desired stock; that is,
actual stock adjusts to the desired stock instantaneously (in the same period).
However, if δ =0, it means that nothing changes since actual stock at time t is the
same as that observed in the previous time period. Typically, δ is expected to lie
between these extremes since adjustment to the desired stock of capital is likely to
be incomplete because of rigidity, inertia, contractual obligations, etc. – hence the
name partial adjustment model. Note that the adjustment mechanism (2)
alternatively can be written as:
Yt = δYt * + (1 − δ )Yt −1 -------------------------------------------------(4)
showing that the observed capital stock at time t is a weighted average of the
desired capital stock at that time and the capital stock existing in the previous
time period, δ and (1- δ ) being the weights. Now substitution of (1) into (4)
gives:
Yt = δ ( β 0 + β 1 X t + u t ) + (1 − δ )Yt −1
= δβ 0 + δβ 1 X t + (1 − δ )Yt −1 + δu t ----------------------------------(5)
Once we estimate the short-run function (5) and obtain the estimate of the
adjustment coefficient δ (from the coefficient of Yt−1), we can easily derive the
long-run function by simply dividing δβ0 and δβ1 by δ and omitting the lagged Y
term, which will then give (1).
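A small simulation (with assumed parameter values) illustrates the whole cycle: generate data from the partial adjustment scheme, estimate the short-run function (5) by OLS, and recover the long-run parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, delta, T = 10.0, 2.0, 0.4, 4000   # assumed true values

x = rng.normal(size=T)
u = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y_star = beta0 + beta1 * x[t] + u[t]            # desired stock, eq. (1)
    y[t] = delta * y_star + (1 - delta) * y[t - 1]  # partial adjustment, eq. (4)

# estimate the short-run function (5): y_t on constant, x_t and y_{t-1}
Z = np.column_stack([np.ones(T - 1), x[1:], y[:-1]])
c0, c1, c2 = np.linalg.lstsq(Z, y[1:], rcond=None)[0]

delta_hat = 1 - c2            # coefficient of y_{t-1} is (1 - delta)
beta0_hat = c0 / delta_hat    # long-run intercept
beta1_hat = c1 / delta_hat    # long-run slope
```

Unlike in the Koyck and adaptive-expectations cases, OLS works here because the error of (5) is δut, which is serially uncorrelated and independent of Yt−1, matching the remark below about the simpler disturbance term.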
The partial adjustment model resembles both the Koyck and adaptive expectation
models in that it is autoregressive. But it has a much simpler disturbance term: the
original disturbance term u t multiplied by a constant δ . But bear in mind that
although similar in appearance, the adaptive expectation and partial adjustment
models are conceptually very different. The former is based on uncertainty (about
the future course of prices, interest rates, etc.), whereas the latter is due to
technical or institutional rigidities, inertia, cost of change, etc. However, both of
these models are theoretically much sounder than the Koyck model.
The important point to keep in mind is that since Koyck, adaptive expectations,
and stock adjustment models – apart from the difference in the appearance of the
error term – yield the same final estimating model, one must be extremely careful
in telling the reader which model the researcher is using and why. Thus,
researchers must specify the theoretical underpinning of their model.
Since both Yt* and X t* are not directly observable, one could use the partial
adjustment mechanism for Yt* and the adaptive expectations model for X t* to
arrive at the following estimating equation.
(b) the error term follows a moving average process. Another feature of this model
is that, while it is linear in the α's, it is nonlinear in the original parameters.
This model assumes that any pattern of the lag scheme among the β's can be
described by a polynomial. This idea is based on a theorem in mathematics known as
Weierstrass's theorem, which states that under general conditions a curve may be
approximated by a polynomial whose degree is one more than the number of turning
points in the curve. Suppose that the β's in a given distributed lag model are
expected to decrease first, then increase, and then decrease again.
We are, now in a position to obtain all β ' s by setting i equal to the value of the
subscript of the particular coefficient.
β 0 = a0
β 1 = a 0 + a1 + a 2 + a 3
β 2 = a0 + 2a1 + 4a 2 + 8a3
β 3 = a 0 + 3a1 + 9a 2 + 27 a3
β k = a0 + ka1 + k 2 a 2 + k 3 a3
Naturally, therefore, what needs to be estimated is only the four parameters of
the polynomial function: βi = a0 + a1i + a2i² + a3i³. Having obtained the values
of a0, a1, a2, and a3, we are able to estimate all the parameters of the original
model.
This is the final form (or transformed form) of Almon Lag model. We can now
apply OLS method to estimate αˆ 0 , aˆ 0 , aˆ1 , aˆ 2 , and aˆ 3 to obtain β ' s in the original
form. Note that vt remains in its original form.
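The mapping from the four estimated polynomial parameters back to the β's can be sketched as follows; the numeric values chosen (a0 = a1 = a2 = a3 = 1) are purely illustrative:

```python
import numpy as np

def almon_betas(a, max_lag):
    """Recover beta_i = a0 + a1*i + a2*i^2 + a3*i^3 for i = 0, ..., max_lag
    from the estimated polynomial parameters a = (a0, a1, a2, a3)."""
    i = np.arange(max_lag + 1)
    powers = np.column_stack([i ** p for p in range(len(a))])
    return powers @ np.asarray(a, dtype=float)

# with a0 = a1 = a2 = a3 = 1 the tabulated expressions above give
# beta_0 = 1, beta_1 = 4, beta_2 = 15, beta_3 = 40
betas = almon_betas([1.0, 1.0, 1.0, 1.0], max_lag=3)
```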
Chapter Seven
7.1 Introduction
In all the previous chapters discussed so far, we have been focusing exclusively
on the problems and estimation of single-equation regression models. In such
models, a dependent variable is expressed as a linear function of one or more
explanatory variables. The cause-and-effect relationship in such models between
the dependent and independent variables is unidirectional. That is, the
explanatory variables are the cause and the dependent variable is the effect. But
there are situations where such one-way or unidirectional causation in the
function is not meaningful. This occurs if, for instance, Y (the dependent
variable) is not only a function of the X's (explanatory variables) but all or
some of the X's are, in turn, determined by Y. There is, therefore, a two-way
flow of influence between Y and (some of) the X's, which in turn makes the
distinction between dependent and independent variables a little doubtful. Under
such circumstances, we need to consider more than one regression equation, one
for each interdependent variable, to understand the multi-flow of influence among
the variables. This is precisely what is done in simultaneous equation models.
If we apply OLS to each equation of the model separately, disregarding the other
equations of the model, the estimates so obtained are not only biased but also
inconsistent; i.e. even if the sample size increases indefinitely, the estimators
do not converge to their true values.
The bias arising from the application of such a procedure of estimation, which
treats each equation of the simultaneous-equations model as though it were a
single-equation model, is known as simultaneity bias or simultaneous-equation
bias. To avoid this bias we will use other methods of estimation, such as
Indirect Least Squares (ILS), Two-Stage Least Squares (2SLS), Three-Stage Least
Squares (3SLS), Maximum Likelihood methods, and the Method of Instrumental
Variables (IV).
Y = α0 + α1X + U
X = β0 + β1Y + β2Z + V --------------------------------------------------(10)

Solving for X gives the reduced form:

X = (β0 + α0β1)/(1 − α1β1) + [β2/(1 − α1β1)]Z + (β1U + V)/(1 − α1β1) ---------(11)

Applying OLS to the first equation of the above structural model will result in a
biased estimator because cov(X, U) = E(XU) ≠ 0. Let us now prove this.
cov(X, U) = E[{X − E(X)}{U − E(U)}]
= E[((β1U + V)/(1 − α1β1))·U]     since, from (11), X − E(X) = (β1U + V)/(1 − α1β1)
= [1/(1 − α1β1)]·E(β1U² + UV)
= [β1/(1 − α1β1)]·E(U²) = β1σ²u/(1 − α1β1) ≠ 0, since E(UV) = 0
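This result can be verified by simulation. The structural parameter values below are assumed for illustration; the sample covariance of X and U approaches β1σ²u/(1 − α1β1), and OLS on the first equation is accordingly biased:

```python
import numpy as np

rng = np.random.default_rng(3)
a0, a1, b0, b1, b2, n = 1.0, 0.5, 2.0, 0.4, 1.0, 200_000   # assumed values

z = rng.normal(size=n)
u = rng.normal(size=n)   # sigma_u = 1
v = rng.normal(size=n)

d = 1 - a1 * b1
x = (b0 + a0 * b1) / d + (b2 / d) * z + (b1 * u + v) / d   # reduced form (11)
y = a0 + a1 * x + u                                        # structural equation

cov_xu = np.cov(x, u)[0, 1]      # theory: b1*sigma_u^2/d = 0.4/0.8 = 0.5
a1_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
```

Here a1_ols comes out above the true value 0.5 (by roughly cov(X, U)/var(X)), and the gap does not shrink as n grows: the OLS estimator is inconsistent, which is the simultaneity bias discussed above.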
lagged endogenous variable. This is on the assumption that X’s symbolize the
exogenous variables and Y’s symbolize the endogenous variables. Thus, X t , X t −1
and Yt −1 are regarded as predetermined (exogenous) variables.
Q s = α 0 + α 1 P + α 2 R + U 2 − − − − − − − − − − − − − − − − − −(15)
Here P and Q are endogenous variables and Y and R are exogenous variables.
• Structural models
A structural model describes the complete structure of the relationships among the
economic variables. Structural equations of the model may be expressed in terms
of endogenous variables, exogenous variables and disturbances (random
variables). The parameters of structural model express the direct effect of each
explanatory variable on the dependent variable. Variables not appearing in any
function explicitly may have an indirect effect, which is taken into account by
the simultaneous solution of the system. For instance, a change in consumption affects
the investment indirectly and is not considered in the consumption function. The
effect of consumption on investment cannot be measured directly by any structural
parameter, but is measured indirectly by considering the system as a whole.
Y = C + Z ----------------------------------------------------(17)
for α >0 and 0<β<1
where: C=consumption expenditure
Z=non-consumption expenditure
Y=national income
C and Y are endogenous variables while Z is exogenous variable.
C = α/(1−β) + [β/(1−β)]Z + U/(1−β) ----------------------------------(18)
Equations (18) and (19) are called the reduced form of the above structural
model. We can write this more formally as:

Structural form equations          Reduced form equations
C = α + βY + U                     C = α/(1−β) + [β/(1−β)]Z + U/(1−β)
Y = C + Z                          Y = α/(1−β) + [1/(1−β)]Z + U/(1−β)
Parameters of the reduced form measure the total effect (direct and indirect) of
a change in the exogenous variables on the endogenous variable. For instance, in
the above reduced-form equation (18), β/(1−β) measures the total effect of a unit
change in the non-consumption expenditure on consumption. This total effect is
β, the direct effect, times 1/(1−β), the indirect effect.
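Numerically, with an assumed marginal propensity to consume β = 0.8 (purely illustrative):

```python
beta = 0.8                      # assumed MPC, for illustration only
direct = beta                   # direct effect of Z on C
multiplier = 1 / (1 - beta)     # indirect chain: 1 + beta + beta^2 + ...
total = beta / (1 - beta)       # reduced-form coefficient of Z in (18)

# the multiplier is the limit of the geometric series above
series = sum(beta ** k for k in range(200))
```

So a one-unit rise in non-consumption expenditure raises consumption by 0.8 directly, and by 4.0 in total once the induced income rounds are summed.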
The reduced form equations can be obtained in two ways:
1) To express the endogenous variables directly as a function of the
predetermined variables.
2) To solve the structural system of endogenous variables in terms of the
predetermined variables, the structural parameters, and the disturbance
terms.
Consider the following simple model for a closed economy.
Ct = a1Yt + U1 ---------------------------------------------------------(i)
It = b1Yt + b2Yt-1 + U2-----------------------------------------------(ii)
Yt = Ct +It + Gt-------------------------------------------------------(iii)
This model has three equations in three endogenous variables (Ct , It , and Yt ) and
two predetermined variables (Gt, andYt-1).
To obtain the reduced form of this model, we may use two methods (direct method
and solving the structural model method).
Direct Method: Express the three endogenous variables(Ct , It , and Yt ) as
functions of the two predetermined variables (Gt, andYt-1) directly using π’s as the
parameters of the reduced form model as follows.
Ct = π11Yt-1 + π12Gt + V1 ------------------------------------(iv)
It , =π21Yt-1 + π22Gt +V2 -------------------------------------(v)
Yt =π31Yt-1 + π32Gt + V3 ------------------------------------(vi)
Note: π11, π12, π21, π22, π31, and π32 are reduced-form parameters. By solving the
structural system of endogenous variables in terms of predetermined variables,
structural parameters and disturbances, the expressions for the reduced parameters
can be obtained easily. For instance, the third structural equation (iii) can be
expressed in reduced form as follows:
Yt = [b2/(1-a1-b1)]Yt-1 + [1/(1-a1-b1)]Gt + (U1 + U2)/(1-a1-b1). This equation is
obtained by simply substituting structural equations (i) and (ii) into (iii).
From this expression: π31 = b2/(1-a1-b1)
π32 = 1/(1-a1-b1)
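With assumed structural values (illustrative only), these two reduced-form coefficients are easily computed:

```python
# assumed structural parameters, purely for illustration
a1, b1, b2 = 0.6, 0.2, 0.1

d = 1 - a1 - b1          # here d = 0.2
pi31 = b2 / d            # coefficient of Y_{t-1} in the reduced form for Y_t
pi32 = 1 / d             # coefficient of G_t: the government-spending multiplier
```

With these numbers π31 = 0.5 and π32 = 5, so a one-unit increase in Gt raises Yt by 5 once all the simultaneous feedback through Ct and It is accounted for.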
Test yourself Questions:
a) Determine the reduced form equations for the structural equations (ii) and
(iii).
b) Indicate the expressions for π11, π12, π21, and π22 from (a) above.
How to estimate the reduced form parameters?
The estimates of the reduced-form coefficients (π's) may be obtained in two ways.
1) Direct estimation of the reduced-form coefficients by applying OLS.
2) Indirect estimation of the reduced form coefficients:
Steps:
i) Solve the system of endogenous variables so that each equation contains
only predetermined explanatory variables. In this way we may obtain
In the above illustration, as usual, the X’s and Y’s are exogenous and endogenous
variables, respectively. The disturbance terms satisfy the following assumptions:
Ε(U 1U 2 ) = Ε(U 1U 3 ) = Ε(U 2U 3 ) = 0
The above assumption is the most crucial assumption that defines the recursive
model. If this does not hold, the above system is no longer recursive and OLS is
also no longer valid. The first equation of the above system contains only the
exogenous variables on the right hand side. Since by assumption, the exogenous
variable is independent of U 1 , the first equation satisfies the critical assumption of
the OLS procedure. Hence OLS can be applied straightforwardly to this equation.
Consider the second equation. It contains the endogenous variable Y1 as one of the
explanatory variables along with non-stochastic X’s. OLS can be applied to this
equation only if it can be shown that Y1 and U 2 are independent of each other. This
is true because U1, which affects Y1 is by assumption uncorrelated with U 2 , i.e.
Ε(U 1U 2 ) = 0 . Y1 acts as a predetermined variable in so far as Y2 is concerned.
Hence OLS can be applied to this equation. A similar argument extends to the
third equation, because Y1 and Y2 are independent of U3. In this way, in a
recursive system OLS can be applied to each equation separately.
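A minimal simulation (hypothetical coefficients) shows why equation-by-equation OLS is valid in a recursive system when E(U1U2) = 0:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
u1, u2 = rng.normal(size=n), rng.normal(size=n)   # independent, E(u1*u2) = 0

# triangular (recursive) structure: y1 from exogenous x1; y2 from y1 and x2
y1 = 1.0 + 0.5 * x1 + u1
y2 = 2.0 + 0.7 * y1 + 0.3 * x2 + u2

# OLS on the second equation: y1 acts as a predetermined regressor,
# since u1 (which drives y1) is uncorrelated with u2
X2 = np.column_stack([np.ones(n), y1, x2])
b = np.linalg.lstsq(X2, y2, rcond=None)[0]
```

The estimates land near the assumed values (2.0, 0.7, 0.3); if u1 and u2 were correlated, y1 would no longer be independent of u2 and this regression would be biased, exactly as the text warns.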
factor X 4 = disposable income. Finally the price obtained by the producer = Y3 can
be expressed in terms of the retail price Y2 and exogenous factor X j = the cost of
In the first equation, there are only exogenous variables and are assumed to be
independent of U 1 . In the second equation, the causal relation between Y1 and
Y2 is in one direction. Also Y1 is independent of U 2 and can be treated just like
* These methods of estimation are not discussed in this module as they are beyond
the scope of this introductory course.
W = α + βP + ϒE + U --------------------------------------(i)
P = λ + µ W + V ------------------------------------------------(ii)
where W and P are the percentage rates of wage and price inflation respectively,
E is a measure of excess demand in the labor market, and U and V are
disturbances. If E is assumed to be exogenously determined, then (i) and (ii)
represent two equations determining the two endogenous variables W and P. Let's
explain the problem of identification with the help of these two equations of a
simultaneous-equation model.
Let's use equation (ii) to express W in terms of P:
W = −λ/μ + (1/μ)P − V/μ -------------------------------------------------(iii)
Now, suppose A and B are any two constants. Let's multiply equation (i) by A,
multiply equation (iii) by B, and then add the two equations. This gives

(A + B)W = Aα − Bλ/μ + (Aβ + B/μ)P + AϒE + AU − (B/μ)V, or

W = (Aα − Bλ/μ)/(A + B) + [(Aβ + B/μ)/(A + B)]P + [Aϒ/(A + B)]E
    + [AU − (B/μ)V]/(A + B) -------------------(iv)
Equation (iv) is what is known as a linear combination of (i) and (ii). The point
about equation (iv) is that it is of the same statistical form as the wage equation (i).
That is, it has the form:
W = constant + (constant)P + (constant)E + disturbance
Moreover, since A and B can take any values we like, this implies that our wage
price model generates an infinite number of equations such as (iv), which are all
statistically indistinguishable from the wage equation (i). Hence, if we apply OLS
or any other technique to data on W, P and E in an attempt to estimate the wage
equation, we cannot know whether we are actually estimating (i) rather than one of
the infinite number of possibilities given by (iv). Equation (i) is therefore said
to be unidentified.
Notice that, in contrast, price equation (ii) cannot be confused with the linear
combination (iv), because it is a relationship involving W and P only and does not,
like (iv), contain the variable E. The price equation (ii) is therefore said to be
identified, and in principle it is possible to obtain consistent estimates of its
parameters. A function (an equation) belonging to a system of simultaneous
equations is identified if it has a unique statistical form, i.e. if no other
equation in the system, nor any equation formed by algebraic manipulation of the
other equations of the system, contains the same variables as the function
(equation) in question.
Identification problems do not arise only in two-equation models. Using the
above procedure, we can check identification easily if we have two or three
equations in a given simultaneous-equation model. However, for an n-equation
simultaneous-equation model, such a procedure is very cumbersome. In general,
for any number of equations in a simultaneous-equation model, there are two
conditions that need to be checked to determine whether an equation is
identified. In the following section we will see these formal conditions for
identification.
In applying the identification rules we should either ignore the constant term, or, if
we want to retain it, we must include in the set of variables a dummy variable (say
X0) which would always take on the value 1. Either convention leads to the same
results as far as identification is concerned. In this chapter we will ignore the
constant intercept.
Order condition:
(K − M) ≥ (G − 1)
(15 − 11) < (10 − 1); that is, the order condition is not satisfied.

Order condition:
(K − M) ≥ (G − 1)
(15 − 5) > (10 − 1); that is, the order condition is satisfied.
Firstly. Write the parameters of all the equations of the model in a separate table,
noting that the parameter of a variable excluded from an equation is equal to zero.
For example let a structural model be:
y1 = 3 y 2 − 2 x1 + x 2 + u1
y 2 = y 3 + x3 + u 2
y3 = y1 − y 2 − 2 x3 + u 3
where the y’s are the endogenous variables and the x’s are the predetermined
variables. This model may be rewritten in the form
− y1 + 3 y 2 + 0 y 3 − 2 x1 + x 2 + 0 x3 + u1 = 0
0 y1 − y 2 + y 3 + 0 x1 + 0 x 2 + x3 + u 2 = 0
y1 − y 2 − y 3 + 0 x1 + 0 x 2 − 2 x3 + u 3 = 0
Ignoring the random disturbance the table of the parameters of the model is as
follows:
Variables
Equations Y1 Y2 Y3 X1 X2 X3
1st equation -1 3 0 -2 1 0
2nd equation 0 -1 1 0 0 1
3rd equation 1 -1 -1 0 0 -2
Secondly. Strike out the row of coefficients of the equation which is being
examined for identification. For example, if we want to examine the identifiability
of the second equation of the model we strike out the second row of the table of
coefficients.
Thirdly. Strike out the columns in which a non-zero coefficient of the equation
being examined appears. By deleting the relevant row and columns we are left
with the coefficients of variables not included in the particular equation, but
contained in the other equations of the model. For example, if we are examining
for identification the second equation of the system, we will strike out the second,
third and the sixth columns of the above table, thus obtaining the following tables.
Striking out the second row and the second, third and sixth columns of the table
leaves the coefficients of the variables excluded from the second equation:

               Y1   X1   X2
1st equation   -1   -2    1
3rd equation    1    0    0
Fourthly. Form the determinant(s) of order (G-1) and examine their value. If at
least one of these determinants is non-zero, the equation is identified. If all the
determinants of order (G-1) are zero, the equation is underidentified.
In the above example of the exploration of the identifiability of the second
structural equation we have three determinants of order (G-1) = 3-1 = 2. They are:

∆1 = |-1  -2| = 2 ≠ 0     ∆2 = |-2  1| = 0     ∆3 = |-1  1| = -1 ≠ 0
     | 1   0|                  | 0  0|              | 1  0|

(the symbol ∆ stands for 'determinant'). We see that we can form two non-zero
determinants of order G-1 = 3-1 = 2; hence the second equation of our system is
identified.
Fifthly. To see whether the equation is exactly identified or overidentified we use
the order condition ( K − M ) ≥ (G − 1). With this criterion, if the equality sign is
satisfied, that is if ( K − M ) = (G − 1) , the equation is exactly identified. If the
inequality sign holds, that is, if ( K − M ) < (G − 1) , the equation is overidentified.
In the case of the second equation we have:
G=3 K=6 M=3
And the counting rule ( K − M ) ≥ (G − 1) gives
(6-3)>(3-1)
Therefore the second equation of the model is overidentified.
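The five steps above can be collected into a small routine. The sketch below (not from the text) applies the rank condition and then the order condition to the coefficient table of the example model; columns are ordered as in the table (y1, y2, y3, x1, x2, x3):

```python
import numpy as np
from itertools import combinations

def identification_status(A, eq):
    """A is the G x (total variables) table of structural coefficients;
    eq is the (0-based) row of the equation being examined.  Applies the
    rank condition, then the order condition (K - M) vs (G - 1)."""
    G = A.shape[0]
    excluded = np.where(A[eq] == 0)[0]            # variables absent from eq
    sub = np.delete(A[:, excluded], eq, axis=0)   # strike row eq, keep excluded cols
    # rank condition: at least one non-zero determinant of order (G - 1)
    rank_ok = sub.shape[1] >= G - 1 and any(
        abs(np.linalg.det(sub[:, list(cols)])) > 1e-10
        for cols in combinations(range(sub.shape[1]), G - 1)
    )
    if not rank_ok:
        return "underidentified"
    K_minus_M = len(excluded)                     # number of excluded variables
    return "exactly identified" if K_minus_M == G - 1 else "overidentified"

# coefficient table of the example model (rows: equations; cols: y1 y2 y3 x1 x2 x3)
A = np.array([[-1.0,  3,  0, -2, 1,  0],
              [ 0.0, -1,  1,  0, 0,  1],
              [ 1.0, -1, -1,  0, 0, -2]])
```

Calling identification_status(A, 1) reproduces the conclusion reached above for the second equation (overidentified); the same routine can be reused for the other equations of the model.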
The identification of a function is achieved by assuming that some variables of the
model have zero coefficient in this equation, that is, we assume that some
variables do not directly affect the dependent variable in this equation. This,
however, is an assumption which can be tested with the sample data. We will
S = b0 + b1 P1 + b2 P2 + b3 C + b4 t + w
D=S
Where: D = quantity demanded
S = quantity supplied
The above model is mathematically complete in the sense that it contains three
equations in three endogenous variables, D,S and P1. The remaining variables, Y,
P2, C, t are exogenous. Suppose we want to identify the supply function. We
apply the two criteria for identification:
1. Order condition: ( K − M ) ≥ (G − 1)
In our example we have: K=7 M=5 G=3
Therefore, (K-M)=(G-1) or (7-5)=(3-1)=2
Consequently the second equation satisfies the first condition for identification.
2. Rank condition
The table of the coefficients of the structural model is as follows:

                          Variables
Equations       D    P1   P2   Y    t    S    C
1st equation   -1    a1   a2   a3   a4   0    0
2nd equation    0    b1   b2   0    b4  -1    b3
3rd equation   -1    0    0    0    0    1    0
Following the procedure explained earlier we strike out the second row and the
second, third, fifth, sixth and seventh columns. Thus we are left with the table
of the coefficients of the variables excluded from the second equation:

                D    Y
1st equation   -1   a3
3rd equation   -1    0

From this table we can form only one determinant of order (G-1) = (3-1) = 2:

∆ = |-1  a3| = (0)(-1) − (-1)(a3) = a3
    |-1   0|

which is non-zero provided a3 ≠ 0.
We strike out the first row and the three first columns of the table and thus obtain
the table of coefficients of excluded variables.
The value of the first 3x3 determinant of the parameters of the excluded
variables is

∆1 = -1·|c1  -1| + a1·|0  -1| + a2·|0   c1| = 1 + a1 − a2c1 ≠ 0
        |-1   0|      |1   0|      |1   -1|

(provided a1 − a2c1 ≠ −1)
The rank condition is satisfied since we can construct at least one non-zero
determinant of order 3=(G-1).
Applying the counting rule ( K − M ) ≥ (G − 1) we see that the inequality sign holds:
4>3; hence the investment function is overidentified.
y2 = b23 y3 + γ23 x3 + u2
y3 = b31 y1 + b32 y2 + γ33 x3 + u3
This model is complete in the sense that it contains three equations in three
endogenous variables. The model contains altogether six variables: three
endogenous (y1, y2, y3) and three exogenous (x1, x2, x3).
The reduced form of the model is obtained by expressing the endogenous variables
in terms of the predetermined variables. The reduced form in the above example is:
y1 = π 11 x1 + π 12 x 2 + π 13 x3 + v1
y 2 = π 21 x1 + π 22 x 2 + π 23 x3 + v 2
y3 = π 31 x1 + π 32 x 2 + π 33 x3 + v3
Strike out the rows corresponding to endogenous variables excluded from the
particular equation being examined for identifiability. Also strike out all the
π21  π22  π23          π31
π31  π32  π33          π32
Thirdly. Examine the order of the determinants of the π's of the excluded
exogenous variables and evaluate them. If the order of the largest non-zero
determinant is G*-1, the equation is identified. Otherwise the equation is not
identified.
Major References