ECONOMETRICS
A TEACHING MATERIAL FOR DISTANCE
STUDENTS MAJORING IN ECONOMICS
Module I
Prepared By:
Bedru Babulo
Seid Hassen
Department of Economics
Faculty of Business and Economics
Mekelle University
June, 2005
Mekelle
Econometrics, Module I 1
Prepared by: Bedru B. and Seid H. ( June, 2005)
Econometrics
Module I
Module I of the course includes the first three chapters. The first chapter introduces students
to the definition and some fundamental concepts of econometrics. Chapter two gives a
fairly detailed treatment of the simple classical linear regression model. In this
chapter students will be introduced to the basic logic, concepts, assumptions, estimation
methods, and interpretations of the simple classical linear regression model and its
applications in economic science. Chapter three, which deals with multiple regression models, is
basically an extension of the simple regression model; there, attempts will be
made to expand the linear regression model by incorporating more than one explanatory variable
(regressor) into the model. In both chapters two and three, due attention will be
given to the basics of the ordinary least squares (OLS) method of estimation and to investigating the
statistical properties of the parameter estimates, which are summarized by the Gauss-Markov
BLUE (Best Linear Unbiased Estimator) properties.
Chapter One
Introduction
guidance for economic policy making we also need to know the quantitative
relationships between the different economic variables. These quantitative
measurements are obtained from real-world data. The field of knowledge
which helps us to carry out such an evaluation of economic theories in empirical
terms is econometrics.
Distance students! Having given this background in our attempt to define
‘ECONOMETRICS’, we may now formally define what econometrics is.
WHAT IS ECONOMETRICS?
Literally interpreted, econometrics means “economic measurement”, but the scope
of econometrics is much broader, as described by leading econometricians. Various
econometricians have used different wordings to define econometrics. But if
we distill the fundamental features/concepts of all the definitions, we may obtain
the following definition.
the “metric” part of the word econometrics signifies ‘measurement’, and hence
econometrics is basically concerned with the measurement of economic relationships.
charts them, and attempts to describe the pattern of their development over time
and perhaps to detect some relationships between various economic magnitudes.
Economic statistics is mainly a descriptive aspect of economics. It does not
provide explanations of the development of the various variables and it does not
provide measurements of the coefficients of economic relationships.
Example: Economic theory postulates that the demand for a commodity depends
on its price, on the prices of other related commodities, on consumers’ income and
on tastes. This is an exact relationship which can be written mathematically as:
Q = b0 + b1P + b2P0 + b3Y + b4t
The above demand equation is exact. However, many more factors may affect
demand. In econometrics the influence of these ‘other’ factors is taken into
account by introducing a random variable into the economic relationship. In
our example, the demand function studied with the tools of econometrics would be
of the stochastic form:
Q = b0 + b1P + b2P0 + b3Y + b4t + u
where u stands for the random factors which affect the quantity demanded.
Specification of the model is the most important and the most difficult stage of any
econometric research. It is often the weakest point of most econometric
applications. In this stage there exists an enormous degree of likelihood of
meaningful and statistically and econometrically correct for the sample period
for which the model has been estimated; yet it may not be suitable for
forecasting due to various factors (reasons). Therefore, this stage involves the
investigation of the stability of the estimates and their sensitivity to changes in
the size of the sample. Consequently, we must establish whether the estimated
function performs adequately outside the sample of data, i.e. we must test the
extra-sample performance of the model.
Review questions
Chapter Two
Economic theories are mainly concerned with the relationships among various
economic variables. These relationships, when phrased in mathematical terms, can
predict the effect of one variable on another. The functional relationships of these
variables define the dependence of one variable upon the other variable(s) in a
specific form. The specific functional forms may be linear, quadratic, logarithmic,
exponential, hyperbolic, or any other form.
Assuming that the supply for a certain commodity depends on its price (other
determinants taken to be constant) and that the function is linear, the relationship
can be put as:
Q = f(P) = α + βP ……(2.1)
The above relationship between P and Q is such that for a particular value of P,
there is only one corresponding value of Q. This is, therefore, a deterministic
(non-stochastic) relationship, since for each price there is always only one
corresponding quantity supplied. This implies that all the variation in quantity
supplied is due solely to changes in price, and that there are no other factors
affecting the dependent variable.
If this were true, all the points of price-quantity pairs, if plotted on a two-
dimensional plane, would fall on a straight line. However, if we gather
observations on the quantity actually supplied in the market at various prices and
plot them on a diagram, we see that they do not fall on a straight line.
The deviation of the observations from the line may be attributed to several
factors:
a. Omission of variables from the function
b. Random behaviour of human beings
c. Imperfect specification of the mathematical form of the model
d. Errors of aggregation
e. Errors of measurement
Thus a stochastic model is a model in which the dependent variable is not only
determined by the explanatory variable(s) included in the model but also by others
which are not included in the model.
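The deterministic/stochastic distinction above can be illustrated with a short simulation (a sketch with hypothetical parameter values, not data from this module):

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 2.0, 0.5           # hypothetical true parameters
X = np.linspace(1.0, 10.0, 50)   # explanatory variable

Y_exact = alpha + beta * X                # deterministic part: one Y for each X
u = rng.normal(0.0, 1.0, size=X.size)     # random disturbance term
Y = Y_exact + u                           # stochastic model: Y = alpha + beta*X + u
```

Plotting (X, Y) would show the points scattered around the line Y_exact rather than lying exactly on it, which is the situation described above.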
2.2. Simple Linear Regression model.
The above stochastic relationship (2.2) with one explanatory variable is called
simple linear regression model.
The true relationship which connects the variables involved is split into two parts:
a part represented by a line and a part represented by the random term ‘u’.
The scatter of observations represents the true relationship between Y and X. The
line represents the exact part of the relationship and the deviation of the
observations from the line represents the random component of the relationship.
- Were it not for the errors in the model, we would observe all the points on the
line Y1′, Y2′, ..., Yn′ corresponding to X1, X2, ..., Xn. However, because of the random
errors, the observed points deviate from this line.
- The first component in the bracket is the part of Y explained by the changes
in X and the second is the part of Y not explained by X; that is to say, the
change in Y is due to the random influence of ui.
The classical econometricians made important assumptions in their analysis of regression. The most
important of these assumptions are discussed below.
Dear distance students! Check for yourselves whether the following models satisfy the
above assumption and give your answers to your tutor.
a. ln Yi² = α + β ln Xi² + Ui
b. Yi = α + βXi + Ui
This means that the value which u may assume in any one period depends on
chance; it may be positive, negative or zero. Every value has a certain probability
of being assumed by u in any particular instance.
Mathematically, E (U i ) = 0 ………………………………..….(2.3)
For all values of X, the u’s will show the same dispersion around their mean.
In Fig. 2.c this assumption is denoted by the fact that the values that u can
assume lie within the same limits, irrespective of the value of X. For X1, u
can assume any value within the range AB; for X2, u can assume any value
within the range CD, which is equal to AB, and so on.
Cov(ui, uj) = E(uiuj) = 0 ……(2.5)
Cov(Xi, Ui) = E(XiUi) − E(Xi)E(Ui)
            = E(XiUi)
            = 0
8. The explanatory variables are measured without error.
- U absorbs the influence of omitted variables and possibly errors of
measurement in the Y’s; i.e., we will assume that the regressors are error-free,
while the Y values may or may not include errors of measurement.
Dear students! We can now use the above assumptions to derive the following
basic concepts.
Proof:
Mean: E(Yi) = E(α + βXi + ui)
            = α + βXi ,   since E(ui) = 0
Variance: var(Yi) = E(Yi − E(Yi))²
                  = E(α + βXi + ui − (α + βXi))²
                  = E(ui²)
                  = σ²   (since E(ui²) = σ²)
∴ var(Yi) = σ² ……(2.8)
the distribution of y i .
∴ Yi ~ N(α + β x i , σ 2 )
Proof:
Cov(Yi, Yj) = E{[Yi − E(Yi)][Yj − E(Yj)]}
= E{[α + βXi + Ui − E(α + βXi + Ui)][α + βXj + Uj − E(α + βXj + Uj)]}
(since Yi = α + βXi + Ui and Yj = α + βXj + Uj)
α̂ and β̂ are estimated from the sample of Y and X, and ei represents the sample
counterpart of the random term ui. The method of classical least squares
(CLS) involves finding values for the estimates α̂ and β̂ which will minimize the
sum of the squared residuals (Σei²).
Σei² = Σ(Yi − α̂ − β̂Xi)² ……(2.7)
To find the values of α̂ and β̂ that minimize this sum, we have to partially
differentiate Σei² with respect to α̂ and β̂ and set the partial derivatives equal to
zero.
1. ∂Σei²/∂α̂ = −2Σ(Yi − α̂ − β̂Xi) = 0 ……(2.8)
2. ∂Σei²/∂β̂ = −2ΣXi(Yi − α̂ − β̂Xi) = 0 ……(2.11)
Note at this point that the terms in parentheses in equations (2.8) and (2.11) are the
residual, ei = Yi − α̂ − β̂Xi. Hence it is possible to rewrite (2.8) and (2.11) as:
Σei = 0 and ΣXiei = 0 ……(2.12)
Equations (2.9) and (2.13) are called the normal equations. Substituting the
value of α̂ from (2.10) into (2.13), we get:
ΣYiXi = ΣXi(Ȳ − β̂X̄) + β̂ΣXi²
      = ȲΣXi − β̂X̄ΣXi + β̂ΣXi²
ΣYiXi − ȲΣXi = β̂(ΣXi² − X̄ΣXi)
ΣXY − nX̄Ȳ = β̂(ΣXi² − nX̄²)
β̂ = (ΣXY − nX̄Ȳ) / (ΣXi² − nX̄²) ……(2.14)
Σ(X − X̄)² = ΣX² − nX̄² ……(2.16)
β̂ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²
β̂ = Σxiyi / Σxi² ……(2.17)
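The estimators just derived can be checked numerically. The sketch below uses hypothetical data and compares formula (2.17), together with α̂ = Ȳ − β̂X̄ from the normal equations, against a standard least-squares routine:

```python
import numpy as np

# Hypothetical observations, for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

x = X - X.mean()   # deviations from the mean
y = Y - Y.mean()

beta_hat = (x * y).sum() / (x ** 2).sum()    # eq. (2.17)
alpha_hat = Y.mean() - beta_hat * X.mean()   # from the normal equations
```

The same values come out of any standard least-squares routine, e.g. `np.polyfit(X, Y, 1)`.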
Subject to: αˆ = 0
The composite function then becomes
Z = ∑ (Yi − αˆ − βˆX i ) 2 − λαˆ , where λ is a Lagrange multiplier.
∂Z/∂λ = −α̂ = 0 ……(iii)
Substituting (iii) in (ii) and rearranging we obtain:
ΣXi(Yi − β̂Xi) = 0
ΣYiXi − β̂ΣXi² = 0
β̂ = ΣXiYi / ΣXi² ……(2.18)
This formula involves the actual values (observations) of the variables and not
their deviation forms, as in the case of unrestricted value of β̂ .
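A brief numerical sketch (hypothetical data) contrasting the restricted estimator (2.18), which uses raw observations, with the unrestricted estimator (2.17) in deviation form:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.2, 4.1, 5.8, 8.3, 9.9])

# Unrestricted slope, deviation form (eq. 2.17)
x, y = X - X.mean(), Y - Y.mean()
beta_unrestricted = (x * y).sum() / (x ** 2).sum()

# Restricted slope with alpha = 0, raw observations (eq. 2.18)
beta_restricted = (X * Y).sum() / (X ** 2).sum()
```

The restricted value agrees with a least-squares fit of Y on X alone (no constant column), while the unrestricted value generally differs.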
According to this theorem, under the basic assumptions of the classical linear
regression model, the least squares estimators are linear, unbiased and have
minimum variance (i.e. they are the best of all linear unbiased estimators). Sometimes the
theorem is referred to as the BLUE theorem, i.e. Best Linear Unbiased Estimator. An
estimator is called BLUE if it is:
a. Linear: a linear function of a random variable, such as the
dependent variable Y.
b. Unbiased: its average or expected value is equal to the true population
parameter.
c. Minimum variance: it has the minimum variance in the class of linear
and unbiased estimators. An unbiased estimator with the least variance
is known as an efficient estimator.
According to the Gauss-Markov theorem, the OLS estimators possess all the
BLUE properties. The detailed proofs of these properties are presented below.
Dear colleague, let us prove these properties one by one.
a. Linearity: (for β̂)
(but Σxi = Σ(X − X̄) = ΣX − nX̄ = nX̄ − nX̄ = 0)
⇒ β̂ = ΣxiYi / Σxi² .  Now, let ki = xi/Σxi²   (i = 1, 2, ..., n)
∴ β̂ = ΣkiYi ……(2.19)
∴ β̂ is linear in Y.
b. Unbiasedness:
Proposition: α̂ and β̂ are unbiased estimators of the true parameters α and β.
From your statistics course, you may recall that if θ̂ is an estimator of θ, then
E(θ̂) − θ is the amount of bias, and if θ̂ is an unbiased estimator of θ then the bias is 0.
In our case, α̂ and β̂ are estimators of the true parameters α and β. To show that they
are unbiased estimators of their respective parameters means to prove that:
E(β̂) = β and E(α̂) = α
β̂ = ΣkiYi = Σki(α + βXi + ui)
  = αΣki + βΣkiXi + Σkiui ,   but Σki = 0 and ΣkiXi = 1:
Σki = Σxi/Σxi² = Σ(X − X̄)/Σxi² = (ΣX − nX̄)/Σxi² = (nX̄ − nX̄)/Σxi² = 0
⇒ Σki = 0 ……(2.20)
ΣkiXi = ΣxiXi/Σxi² = Σ(X − X̄)Xi/Σxi²
      = (ΣX² − X̄ΣX)/(ΣX² − nX̄²) = (ΣX² − nX̄²)/(ΣX² − nX̄²) = 1
⇒ ΣkiXi = 1 ……(2.21)
Therefore, β̂ = β + Σkiui  ⇒  β̂ − β = Σkiui ……(2.22)
E(β̂) = β, since E(ui) = 0. Therefore, β̂ is an unbiased estimator of β.
Similarly, for α̂ = Σ(1/n − X̄ki)Yi = Σ(1/n − X̄ki)(α + βXi + ui):
α̂ = α + β(1/n)ΣXi + (1/n)Σui − αX̄Σki − βX̄ΣkiXi − X̄Σkiui
  = α + (1/n)Σui − X̄Σkiui ,   since Σki = 0, ΣkiXi = 1 and (1/n)ΣXi = X̄
⇒ α̂ − α = (1/n)Σui − X̄Σkiui = Σ(1/n − X̄ki)ui ……(2.23)
Taking expectations: E(α̂) = α ……(2.24)
first obtain the variances of α̂ and β̂ and then establish that each has the minimum
variance in comparison with the variances of other linear and unbiased estimators
obtained by any econometric method other than OLS.
a. Variance of β̂
var(β̂) = E(β̂ − E(β̂))² = E(β̂ − β)² ……(2.25)
var(β̂) = E(Σkiui)² = E(Σki²ui²) + E(Σkikjuiuj) ,   i ≠ j
       = σ²Σki² ,   since E(ui²) = σ² and E(uiuj) = 0
Σki = Σxi/Σxi² , and therefore Σki² = Σxi²/(Σxi²)² = 1/Σxi²
∴ var(β̂) = σ²Σki² = σ²/Σxi² ……(2.26)
b. Variance of α̂
var(α̂) = E(α̂ − E(α̂))² = E(α̂ − α)² ……(2.27)
From (2.23), α̂ − α = Σ(1/n − X̄ki)ui , so that:
var(α̂) = σ²Σ(1/n − X̄ki)²
       = σ²Σ(1/n² − (2/n)X̄ki + X̄²ki²)
       = σ²(1/n − (2/n)X̄Σki + X̄²Σki²)
       = σ²(1/n + X̄²Σki²) ,   since Σki = 0
       = σ²(1/n + X̄²/Σxi²) ,   since Σki² = Σxi²/(Σxi²)² = 1/Σxi²
Again:
1/n + X̄²/Σxi² = (Σxi² + nX̄²)/(nΣxi²) = ΣX²/(nΣxi²)
∴ var(α̂) = σ²(1/n + X̄²/Σxi²) = σ²·ΣXi²/(nΣxi²) ……(2.28)
Dear student! We have computed the variances of the OLS estimators. Now, it is time to
check whether these variances of the OLS estimators do possess the minimum variance
property compared with the variances of other estimators of the true α and β, other
than α̂ and β̂.
1. Minimum variance of β̂
Suppose β* is an alternative linear and unbiased estimator of β, and let
β* = ΣwiYi ……(2.29)
where wi ≠ ki ; but wi = ki + ci ,
β* = Σwi(α + βXi + ui) ,   since Yi = α + βXi + Ui
var(β*) = Σwi² var(Yi) = σ²Σwi²
        = σ²Σ(ki + ci)² = σ²(Σki² + Σci² + 2Σkici) = σ²(Σki² + Σci²) ,
since Σkici = Σcixi/Σxi² = 0 (by the unbiasedness conditions Σci = 0 and ΣciXi = 0).
⇒ var(β*) = var(β̂) + σ²Σci²
Given that ci is an arbitrary constant, σ²Σci² is positive, i.e. greater than zero.
Thus var(β*) > var(β̂). This proves that β̂ possesses the minimum variance property.
In a similar way we can prove that the least squares estimate of the constant
intercept (α̂) possesses minimum variance.
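The minimum variance property can also be illustrated with a small Monte Carlo sketch (hypothetical parameter values). We build an alternative linear unbiased estimator β* = ΣwiYi with wi = ki + ci, choosing ci so that Σci = 0 and ΣciXi = 0, and compare empirical variances:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, sigma = 1.0, 2.0, 1.0        # hypothetical true parameters
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = X - X.mean()

k = x / (x ** 2).sum()                    # OLS weights: beta_hat = sum(k * Y)
c = 0.05 * np.array([1.0, -2.0, 0.0, 2.0, -1.0])  # sum(c) = 0 and sum(c * X) = 0
w = k + c                                 # weights of the alternative estimator

ols, alt = [], []
for _ in range(20000):
    Y = alpha + beta * X + rng.normal(0.0, sigma, X.size)
    ols.append((k * Y).sum())
    alt.append((w * Y).sum())

var_ols, var_alt = np.var(ols), np.var(alt)
# var_alt exceeds var_ols by roughly sigma^2 * sum(c^2), as the proof predicts
```

Both estimators center on the true β, but the OLS estimator shows the smaller spread.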
2. Minimum Variance of α̂
We take a new estimator α*, which we assume to be a linear and unbiased
estimator of α. The least squares estimator α̂ is given by:
α̂ = Σ(1/n − X̄ki)Yi
By analogy with the proof of the minimum variance property of β̂, let us use
the weights wi = ki + ci. Consequently:
α* = Σ(1/n − X̄wi)Yi
α* = Σ(1/n − X̄wi)(α + βXi + ui)
   = Σ(α/n + βXi/n + ui/n − X̄wiα − βX̄Xiwi − X̄wiui)
α* = α + βX̄ + (1/n)Σui − αX̄Σwi − βX̄ΣwiXi − X̄Σwiui
For α* to be an unbiased estimator of α, we need Σwi = 0 and ΣwiXi = 1 (the term
X̄Σwiui has expectation zero since E(ui) = 0). These conditions imply that Σci = 0 and ΣciXi = 0.
Then:
var(α*) = Σ(1/n − X̄wi)² var(Yi)
        = σ²Σ(1/n − X̄wi)²
        = σ²Σ(1/n² + X̄²wi² − (2/n)X̄wi)
        = σ²(1/n + X̄²Σwi² − (2/n)X̄Σwi)
var(α*) = σ²(1/n + X̄²Σwi²) ,   since Σwi = 0
⇒ var(α*) = σ²(1/n + X̄²(Σki² + Σci²)) ,   since Σkici = 0
var(α*) = σ²(1/n + X̄²/Σxi²) + σ²X̄²Σci²
        = σ²·ΣXi²/(nΣxi²) + σ²X̄²Σci²
The first term is var(α̂); hence
var(α*) = var(α̂) + σ²X̄²Σci² > var(α̂)
Therefore, we have proved that the least squares estimators of the linear regression
model are best, linear and unbiased (BLUE) estimators.
σ̂² = Σei² / (n − 2)
To prove this we have to compute Σei² from the expressions of Y, Ŷ, y, ŷ and ei.
Proof:
Yi = α̂ + β̂Xi + ei
Ŷi = α̂ + β̂Xi
⇒ Yi = Ŷi + ei ……(2.31)
⇒ ei = Yi − Ŷi ……(2.32)
Summing (2.31) over the sample and dividing through by n (recalling that Σei = 0):
ΣYi/n = ΣŶi/n → Ȳ = Ŷ̄ ……(2.33)
Putting (2.31) and (2.33) together and subtracting:
Yi = Ŷi + ei
Ȳ = Ŷ̄
⇒ (Yi − Ȳ) = (Ŷi − Ŷ̄) + ei
⇒ yi = ŷi + ei ……(2.34)
From (2.34):
ei = yi − yˆ i ………………………………………………..(2.35)
From the true relationship Yi = α + βXi + Ui and its sample average Ȳ = α + βX̄ + Ū,
we get, by subtraction:
yi = (Yi − Ȳ) = β(Xi − X̄) + (Ui − Ū) = βxi + (Ui − Ū)
⇒ yi = βxi + (Ui − Ū) ……(2.36)
Note that we assumed earlier that E(u) = 0, i.e. in taking a very large number of
samples we expect Ū to have a mean value of zero, but in any particular single
sample Ū is not necessarily zero.
Similarly, from:
Ŷi = α̂ + β̂Xi
Ŷ̄ = α̂ + β̂X̄
we get, by subtraction:
Ŷi − Ŷ̄ = β̂(Xi − X̄)
⇒ ŷi = β̂xi ……(2.37)
ei = βxi + (ui − ū) − β̂xi
   = (ui − ū) − (β̂ − β)xi
The summation over the n sample values of the squares of the residuals yields:
Σei² = Σ[(ui − ū) − (β̂ − β)xi]²
Expanding and taking expectations term by term, first:
E[Σ(ui − ū)²] = E[Σui² − (Σui)²/n]
= ΣE(ui²) − (1/n)E(Σui)²
= nσu² − (1/n)E(u1 + u2 + … + un)² ,   since E(ui²) = σu²
= nσu² − (1/n)E(Σui² + 2Σuiuj) ,   i ≠ j
= nσu² − (1/n)ΣE(ui²) − (2/n)ΣE(uiuj)
= nσu² − σu² ,   given E(uiuj) = 0
= σu²(n − 1) ……(2.39)
For the second term, E(β̂ − β)² = var(β̂) = σu²/Σxi². Hence:
Σxi²·E(β̂ − β)² = Σxi²·σu²/Σxi²
Σxi²·E(β̂ − β)² = σu² ……(2.40)
For the cross-product term (note that Σxi(ui − ū) = Σxiui , since Σxi = 0):
−2E[(β̂ − β)Σxiui] = −2E[(Σxiui/Σxi²)·Σxiui] ,   since β̂ − β = Σkiui and ki = xi/Σxi²
= −2E[(Σxiui)²/Σxi²]
= −2E[(Σxi²ui² + 2Σxixjuiuj)/Σxi²]
= −2Σxi²E(ui²)/Σxi² ,   given E(uiuj) = 0
= −2σu² ……(2.41)
Consequently, equation (2.38) can be written in terms of (2.39), (2.40) and (2.41)
as follows:
E(Σei²) = (n − 1)σu² + σu² − 2σu² = (n − 2)σu² ……(2.42)
From which we get:
E[Σei²/(n − 2)] = E(σ̂u²) = σu² ……(2.43)
since σ̂u² = Σei²/(n − 2)
Thus, σ̂² = Σei²/(n − 2) is an unbiased estimate of the true variance of the error term (σ²).
Dear student! The conclusion that we can draw from the above proof is that we
can substitute σ̂² = Σei²/(n − 2) for (σ²) in the variance expressions of α̂ and β̂,
since E(σ̂²) = σ². Hence the formulae:
var(β̂) = σ̂²/Σxi² = Σei² / [(n − 2)Σxi²] ……(2.44)
var(α̂) = σ̂²·ΣXi²/(nΣxi²) = Σei²·ΣXi² / [n(n − 2)Σxi²] ……(2.45)
Note: Σei² can be computed as Σei² = Σyi² − β̂Σxiyi.
Dear student! Do not worry about the derivation of this expression; we will
perform the derivation of it in our subsequent subtopic.
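Formulae (2.43)–(2.45), together with the shortcut Σei² = Σyi² − β̂Σxiyi, can be sketched as follows (hypothetical data):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
Y = np.array([3.1, 4.8, 7.2, 8.9, 11.1, 12.8, 15.2, 16.9])
n = X.size

x, y = X - X.mean(), Y - Y.mean()
beta_hat = (x * y).sum() / (x ** 2).sum()
alpha_hat = Y.mean() - beta_hat * X.mean()

e = Y - alpha_hat - beta_hat * X             # residuals
sigma2_hat = (e ** 2).sum() / (n - 2)        # unbiased error variance, eq. (2.43)

var_beta = sigma2_hat / (x ** 2).sum()                          # eq. (2.44)
var_alpha = sigma2_hat * (X ** 2).sum() / (n * (x ** 2).sum())  # eq. (2.45)
se_beta, se_alpha = np.sqrt(var_beta), np.sqrt(var_alpha)
```

The residuals also satisfy the two normal equations, Σei = 0 and ΣXiei = 0, up to rounding.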
[Figure ‘d’: Actual and estimated values of the dependent variable Y.]
As can be seen from fig. (d) above, Y − Ȳ measures the variation of the
sample observations of the dependent variable around the mean. However,
the variation in Y that can be attributed to the influence of X (i.e. the regression line)
is given by the vertical distance Ŷ − Ȳ. The part of the total variation in Y about
Ȳ that is not explained by the regression line is the residual, Y − Ŷ.
Now, we may write the observed Y as the sum of the predicted value (Ŷ) and the
residual term (ei):
Yi = Ŷi + ei
(observed Yi = predicted Yi + residual)
From equation (2.34) we have, in deviation form, yi = ŷi + ei. By squaring and
summing both sides, we obtain the following expression:
Σyi² = Σ(ŷi + ei)²
     = Σ(ŷi² + ei² + 2ŷiei)
     = Σŷi² + Σei² + 2Σŷiei
But Σŷiei = β̂Σxiei = 0 , since Σei = 0 and Σxiei = 0:
⇒ Σŷiei = 0 ……(2.46)
Therefore:
Σyi² = Σŷi² + Σei² ……(2.47)
(Total variation = Explained variation + Unexplained variation)
OR,
i.e
TSS = ESS + RSS ……………………………………….(2.48)
From equation (2.37) we have ŷ = β̂x. Squaring and summing both sides gives us:
Σŷ² = β̂²Σx² ……(2.50)
Substituting β̂ = Σxy/Σx² into the ratio of explained to total variation:
R² = (Σxy/Σx²)·(Σxy/Σy²) = (Σxy)² / (Σx²·Σy²) ……(2.52)
The limits of R²: the value of R² falls between zero and one, i.e. 0 ≤ R² ≤ 1.
Interpretation of R²
Suppose R² = 0.9; this means that the regression line gives a good fit to the
observed data, since this line explains 90% of the total variation of the Y values
around their mean. The remaining 10% of the total variation in Y is unaccounted
for by the regression line and is attributed to the factors included in the disturbance
variable ui.
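The decomposition of the total variation and the resulting R² can be verified numerically (hypothetical data):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.0, 4.1, 5.9, 8.2, 9.8, 12.1])

x, y = X - X.mean(), Y - Y.mean()
beta_hat = (x * y).sum() / (x ** 2).sum()
alpha_hat = Y.mean() - beta_hat * X.mean()

Y_hat = alpha_hat + beta_hat * X
e = Y - Y_hat

TSS = (y ** 2).sum()                     # total variation
ESS = ((Y_hat - Y.mean()) ** 2).sum()    # explained variation
RSS = (e ** 2).sum()                     # unexplained variation
R2 = ESS / TSS                           # coefficient of determination
```

TSS = ESS + RSS holds up to rounding, and R² can equivalently be computed as 1 − RSS/TSS.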
Exercise:
Suppose rxy is the correlation coefficient between Y and X, given by:
rxy = Σxiyi / √(Σxi²·Σyi²)
and let r²yŷ be the square of the correlation coefficient between Y and Ŷ, given by:
r²yŷ = (Σyŷ)² / (Σy²·Σŷ²)
We have already assumed that the error term is normally distributed with mean
zero and variance σ², i.e. Ui ~ N(0, σ²). Similarly, we can show that:
1. β̂ ~ N(β, σ²/Σx²)
2. α̂ ~ N(α, σ²ΣX²/(nΣx²))
To show whether αˆ and βˆ are normally distributed or not, we need to make use of
one property of normal distribution. “........ any linear function of a normally
distributed variable is itself normally distributed.”
β̂ = ΣkiYi = k1Y1 + k2Y2 + … + knYn
β̂ ~ N(β, σ²/Σx²) ;   α̂ ~ N(α, σ²ΣX²/(nΣx²))
The OLS estimates α̂ and β̂ are obtained from a sample of observations on Y and
X. Since sampling errors are inevitable in all estimates, it is necessary to apply
tests of significance in order to measure the size of the error and determine the
degree of confidence in the validity of these estimates. This can
be done by using various tests. The most common ones are:
i) Standard error test  ii) Student’s t-test  iii) Confidence interval
All of these testing procedures lead to the same conclusion. Let us now see these
testing methods one by one.
i) Standard error test
This test helps us decide whether the estimates α̂ and β̂ are significantly different
from zero, i.e. whether the sample from which they have been estimated might
have come from a population whose true parameters are zero: α = 0 and/or β = 0.
Formally we test the null hypothesis
H0: βi = 0 against the alternative hypothesis H1: βi ≠ 0
First: compute the standard errors of the estimates:
SE(β̂) = √var(β̂)
SE(α̂) = √var(α̂)
Second: compare the standard errors with the numerical values of αˆ and βˆ .
Decision rule:
• If SE(β̂i) > ½β̂i, accept the null hypothesis and reject the alternative hypothesis; we conclude that β̂i is statistically insignificant.
• If SE(β̂i) < ½β̂i, reject the null hypothesis and accept the alternative hypothesis; we conclude that β̂i is statistically significant.
Test the significance of the slope parameter at the 5% level of significance using the
standard error test, given:
β̂ = 0.6
SE(β̂) = 0.025
½β̂ = 0.3
Since SE(β̂) = 0.025 < 0.3 = ½β̂, we reject the null hypothesis: the slope parameter is statistically significant.
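The standard error test in this example amounts to a one-line comparison:

```python
# Standard error test for the example above
beta_hat = 0.6
se_beta = 0.025

# Decision rule: significant if SE(beta_hat) < (1/2) * beta_hat
significant = se_beta < 0.5 * beta_hat
```

Here 0.025 < 0.3, so the null hypothesis β = 0 is rejected and the slope is statistically significant.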
t = (X̄ − µ)/s_X ,   with n − 1 degrees of freedom
where:
s_X = √[ Σ(X − X̄)² / (n − 1) ]
n = sample size
We can derive the t-values of the OLS estimates:
t_β̂ = (β̂ − β)/SE(β̂)
t_α̂ = (α̂ − α)/SE(α̂)
with n − k degrees of freedom,
where:
SE = the standard error
k = number of parameters in the model.
Since we have two parameters in simple linear regression with an intercept different
from zero, our degrees of freedom are n − 2. As with the standard error test, we formally
test the hypothesis H0: βi = 0 against the alternative H1: βi ≠ 0 for the slope
parameter, and H0: α = 0 against the alternative H1: α ≠ 0 for the intercept.
The computed t-value under H0 is:
t* = (β̂ − 0)/SE(β̂) = β̂/SE(β̂)
Since this is a two-tailed test, if the level of significance is 5%, divide it by two to
obtain the critical value of t from the t-table.
Step 4: Obtain the critical value of t, called tc, at α/2 and n − 2 degrees of freedom for a two-tailed test.
Step 5: Compare t* (the computed value of t) and tc (the critical value of t):
• If |t*| > tc, reject H0 and accept H1. The conclusion is that β̂ is statistically
significant.
• If |t*| < tc, accept H0 and reject H1. The conclusion is that β̂ is statistically
insignificant.
Numerical Example:
Suppose that from a sample of size n = 20 we estimate the following consumption
function:
C = 100 + 0.70Y + e
    (75.5)  (0.21)
The values in the brackets are standard errors. We want to test the null hypothesis:
H 0 : β i = 0 against the alternative H 1 : β i ≠ 0 using the t-test at 5% level of
significance.
a. The computed t-value for the test statistic is:
t* = (β̂ − 0)/SE(β̂) = β̂/SE(β̂) = 0.70/0.21 ≅ 3.3
b. The critical value of ‘t’ at α/2 = 0.025 and 18 degrees of freedom (df), i.e.
n − 2 = 20 − 2, is 2.101 from the t-table. Since t* ≅ 3.3 > 2.101, we reject H0 and
conclude that the slope parameter is statistically significant.
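The computation in this example can be reproduced as follows; the critical value 2.101 is the standard t-table entry at α/2 = 0.025 with 18 degrees of freedom:

```python
beta_hat = 0.70
se_beta = 0.21
n = 20

t_star = beta_hat / se_beta      # computed t-value, about 3.33
t_crit = 2.101                   # t-table value, alpha/2 = 0.025, df = n - 2 = 18

reject_H0 = abs(t_star) > t_crit
```

Since |t*| ≈ 3.33 exceeds 2.101, H0: β = 0 is rejected; the slope is statistically significant.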
In order to determine how close the estimate is to the true parameter, we must construct
a confidence interval for the true parameter; in other words, we must establish
limiting values around the estimate within which the true parameter is expected to
lie with a certain “degree of confidence”. In this respect we say that with a
given probability the population parameter will lie within the defined confidence
interval (confidence limits).
sample, would include the true population parameter in 95% of the cases. In the
other 5% of the cases the population parameter will fall outside the confidence
interval.
In a two-tailed test at the α level of significance, the probability of obtaining a
t-value beyond −tc or tc is α/2 in each tail, at n − 2 degrees of freedom. The probability of
obtaining any value of t equal to (β̂ − β)/SE(β̂) that lies between −tc and tc at n − 2 degrees of
freedom is therefore 1 − (α/2 + α/2), i.e. 1 − α.
But t* = (β̂ − β)/SE(β̂) ……(2.58)
Pr{ −SE(β̂)·tc < β̂ − β < SE(β̂)·tc } = 1 − α   (multiplying through by SE(β̂))
The limits within which the true β lies at the (1 − α) level of confidence are: β̂ ± SE(β̂)·tc.
To test H0: β = 0 against H1: β ≠ 0:
Decision rule: If the hypothesized value of β in the null hypothesis lies within the
confidence limits, accept H0 and reject H1; this indicates that β̂ is statistically
insignificant. If the hypothesized value lies outside the limits, reject H0 and accept H1;
this indicates that β̂ is statistically significant.
Numerical Example:
Suppose we have estimated the following regression line from a sample of 20
observations.
Y = 128.5 + 2.88 X + e
(38.2) (0.85)
βˆ ± SE ( βˆ )t c
βˆ = 2.88
SE ( βˆ ) = 0.85
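Using tc = 2.101 (the t-table value at 0.025 and 18 degrees of freedom), the 95% confidence interval for β works out as:

```python
beta_hat = 2.88
se_beta = 0.85
t_c = 2.101                       # critical t at alpha/2 = 0.025, df = 18

lower = beta_hat - t_c * se_beta  # about 1.094
upper = beta_hat + t_c * se_beta  # about 4.666
contains_zero = lower <= 0.0 <= upper
```

Since zero lies outside (1.09, 4.67), we reject H0: β = 0 at the 5% level; β̂ is statistically significant.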
The results of the regression analysis are reported in conventional formats.
It is not sufficient merely to report the estimates of the β’s. In practice we report the
regression coefficients together with their standard errors and the value of R². It
has become customary to present the estimated equations with standard errors
placed in parentheses below the estimated parameter values. Sometimes the
estimated coefficients, the corresponding standard errors, the p-values, and some
other indicators are presented in tabular form.
These results are supplemented by R² (placed to the right of the regression
equation).
Example:
Y = 128.5 + 2.88X ,   R² = 0.93
    (38.2)  (0.85)
The numbers in parentheses below the parameter estimates are the standard errors. Some
econometricians report the t-values of the estimated coefficients in place of the
standard errors.
Review Questions
1. Econometrics deals with the measurement of economic relationships which are stochastic
or random. The simplest form of economic relationship between two variables X and Y
can be represented by:
Yi = β0 + β1Xi + Ui ;  where β0 and β1 are regression parameters and Ui is the random disturbance term.
a. Estimate α and β
b. Calculate the coefficient of determination for the data and interpret its value
c. If in a 9th economy the rate of interest is R=8.1, predict the demand for money(M) in
this economy.
3. The following data refer to the price of a good ‘P’ and the quantity of the good supplied, ‘S’.
P:  2   7   5   1   4   8   2   8
S: 15  41  32   9  28  43  17  40
a. Estimate the linear regression line E(S) = α + βP
ΣXiYi = 1,296,836
ΣYi² = 539,512
i) Estimate the regression line of sale on price and interpret the results
ii) What is the part of the variation in sales which is not explained by the
regression line?
iii) Estimate the price elasticity of sales.
5. The following table includes the GNP (X) and the demand for food (Y) for a country over
a ten-year period.
year 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989
Y 6 7 8 10 8 9 10 9 11 10
X 50 52 55 59 57 58 62 65 68 70
a. Estimate the food function
b. Compute the coefficient of determination and find the explained and unexplained
variation in the food expenditure.
c. Compute the standard error of the regression coefficients and conduct test of
significance at the 5% level of significance.
6. A sample of 20 observations corresponding to the regression model Yi = α + βXi + Ui gave the following data:
ΣYi = 21.9        Σ(Yi − Ȳ)² = 86.9
ΣXi = 186.2       Σ(Xi − X̄)² = 215.4
Σ(Xi − X̄)(Yi − Ȳ) = 106.4
a. Estimate α and β
b. Calculate the variance of our estimates
c. Estimate the conditional mean of Y corresponding to a value of X fixed at X = 10.
7. Suppose that a researcher estimates a consumptions function and obtains the following
results:
C = 15 + 0.81Yd n = 19
(3.1) (18.7) R 2 = 0.99
where C=Consumption, Yd=disposable income, and numbers in the parenthesis are the ‘t-ratios’
a. Test the significance of Yd statistically using the t-ratios
b. Determine the estimated standard deviations of the parameter estimates
8. State and prove the Gauss-Markov theorem.
9. Given the model:
Yi = β 0 + β 1 X i + U i with usual OLS assumptions. Derive the expression for the error
variance.
Chapter Three
3.1 Introduction
and any other (minor) factors, other than xi that might influence Y.
In this chapter we will first discuss the assumptions of the multiple regression model, then proceed with the case of two explanatory variables, and finally generalize the multiple regression model to the case of k explanatory variables using matrix algebra.
3. Homoscedasticity: The variance of each Ui is the same for all the Xi values,
i.e. E(Ui²) = σu² (constant)
4. Normality of U: The values of each Ui are normally distributed,
i.e. Ui ~ N(0, σu²)
5. No autocorrelation or serial correlation: The values of Ui (corresponding to Xi) are independent of the values of any other Uj (corresponding to Xj),
i.e. E(UiUj) = 0 for i ≠ j
We cannot list all the assumptions exhaustively, but the above are some of the basic assumptions that enable us to proceed with our analysis.
is multiple regression with two explanatory variables. The expected value of the above model is called the population regression equation, i.e.
E(Y) = β0 + β1X1 + β2X2 , since E(Ui) = 0 . …………………................(3.4)
ei = Yi − Ŷi = Yi − β̂0 − β̂1X1i − β̂2X2i …………………………………..(3.7)
To obtain the OLS estimators we minimize Σei² with respect to β̂0, β̂1 and β̂2 and set the partial derivatives equal to zero.
∂(Σei²)/∂β̂0 = −2Σ(Yi − β̂0 − β̂1X1i − β̂2X2i) = 0 ………………………. (3.8)
∂(Σei²)/∂β̂1 = −2ΣX1i(Yi − β̂0 − β̂1X1i − β̂2X2i) = 0 ……………………. (3.9)
∂(Σei²)/∂β̂2 = −2ΣX2i(Yi − β̂0 − β̂1X1i − β̂2X2i) = 0 …………………..(3.10)
We know that:
Σ(Xi − X̄)(Yi − Ȳ) = ΣXiYi − nX̄Ȳ = Σxiyi
Σ(Xi − X̄)² = ΣXi² − nX̄² = Σxi²
Substituting the above equations in equation (3.14), the normal equations (3.12) can
be written in deviation form as follows:
Σx1y = β̂1Σx1² + β̂2Σx1x2 …………………………………………(3.16)
Σx2y = β̂1Σx1x2 + β̂2Σx2² ………………………………………..(3.17)
Σx1y = β̂1Σx1² + β̂2Σx1x2 ……………………………………….(3.18)
Σx2y = β̂1Σx1x2 + β̂2Σx2² ……………………………………….(3.19)
In matrix form:
⎡Σx1²    Σx1x2⎤ ⎡β̂1⎤   ⎡Σx1y⎤
⎣Σx1x2   Σx2² ⎦ ⎣β̂2⎦ = ⎣Σx2y⎦ ………….(3.20)
Solving this system gives:
β̂1 = (Σx1y·Σx2² − Σx2y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²) ……………………(3.21)
β̂2 = (Σx2y·Σx1² − Σx1y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²) ………………….(3.22)
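The solution of the two normal equations in deviation form can be sketched in a few lines of code. This is a minimal illustration; the deviation sums below are made-up numbers, not data from the text.

```python
# Sketch: solving the two-variable normal equations in deviation form,
# equations (3.21)-(3.22). The sums are illustrative, not from the text.
sum_x1y, sum_x2y = 400.0, 700.0
sum_x1sq, sum_x2sq, sum_x1x2 = 700.0, 900.0, 300.0

# Common denominator: sum(x1^2)*sum(x2^2) - (sum(x1*x2))^2
denom = sum_x1sq * sum_x2sq - sum_x1x2 ** 2

beta1_hat = (sum_x1y * sum_x2sq - sum_x2y * sum_x1x2) / denom
beta2_hat = (sum_x2y * sum_x1sq - sum_x1y * sum_x1x2) / denom
print(round(beta1_hat, 3), round(beta2_hat, 3))
```

The same denominator appears in both estimators, which is why a near-singular x′x matrix (high collinearity between x1 and x2) makes both estimates unstable.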
Σei² = Σeiyi = Σyi(yi − β̂1x1i − β̂2x2i)
⇒ Σyi² = β̂1Σx1iyi + β̂2Σx2iyi + Σei² ----------------- (3.26)
Total sum of squares (total variation) = Explained sum of squares (explained variation) + Residual sum of squares (unexplained variation)
∴ R² = ESS/TSS = (β̂1Σx1iyi + β̂2Σx2iyi)/Σyi² ----------------------------------(3.27)
the model, Ŷt. In this case, the model is said to “fit” the data well. If R² is low, there is little association between the values of Yt and the values predicted by the model, Ŷt, and the model does not fit the data well.
This measure does not always go up when a variable is added, because of the degrees-of-freedom term n − k in its formula. As the number of variables k increases, RSS goes down, but so does n − k; the effect on adjusted R² depends on which of the two falls faster. While solving one problem, this corrected measure of goodness of fit unfortunately introduces another: it loses its interpretation, since adjusted R² is no longer the percentage of variation explained. This modified R² is sometimes used, and misused, as a device for selecting the appropriate set of explanatory variables.
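The trade-off described above can be made concrete with a short sketch; the sample size, parameter count, and R² below are illustrative numbers of ours, not figures from the text.

```python
# Sketch: adjusted R-squared, 1 - (RSS/(n-k)) / (TSS/(n-1)),
# which is algebraically 1 - (1 - R^2)(n - 1)/(n - k).
n, k = 25, 3        # sample size and number of parameters (incl. intercept)
r2 = 0.24           # unadjusted coefficient of determination
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k)
print(round(adj_r2, 3))
```

Note that adjusted R² is always below R² when k > 1, and it can even be negative when the model explains very little.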
3.4 General Linear Regression Model and Matrix Approach
So far we have discussed the regression models containing one or two explanatory
variables. Let us now generalize the model assuming that it contains k variables.
It will be of the form:
Y = β 0 + β 1 X 1 + β 2 X 2 + ...... + β k X k + U
∂Σei²/∂β̂1 = −2Σ(Yi − β̂0 − β̂1X1i − ...... − β̂kXki)X1i = 0
……………………………………………………..
∂Σei²/∂β̂k = −2Σ(Yi − β̂0 − β̂1X1i − ...... − β̂kXki)Xki = 0
The general form of the above equations (except the first) may be written as:
∂Σei²/∂β̂j = −2Σ(Yi − β̂0 − β̂1X1i − ...... − β̂kXki)Xji = 0 ; where (j = 1, 2, ...., k)
The resulting normal equations are:
ΣYi = nβ̂0 + β̂1ΣX1i + β̂2ΣX2i + .................. + β̂kΣXki
:    :    :    :    :
ΣYiXki = β̂0ΣXki + β̂1ΣX1iXki + β̂2ΣX2iXki + .................. + β̂kΣXki²
Solving the above normal equations directly is algebraically complex, but we can solve the system easily using matrices. Hence in the next section we discuss the matrix approach to the linear regression model.
The model for the n observations is:
Y1 = β0 + β1X11 + β2X21 + β3X31 + ............. + βkXk1 + U1
Y2 = β0 + β1X12 + β2X22 + β3X32 + ............. + βkXk2 + U2
Y3 = β0 + β1X13 + β2X23 + β3X33 + ............. + βkXk3 + U3
…………………………………………………...
Yn = β0 + β1X1n + β2X2n + β3X3n + ............. + βkXkn + Un
In matrix notation:
⎡Y1⎤   ⎡1  X11  X21 .......  Xk1⎤ ⎡β0⎤   ⎡U1⎤
⎢Y2⎥   ⎢1  X12  X22 .......  Xk2⎥ ⎢β1⎥   ⎢U2⎥
⎢Y3⎥ = ⎢1  X13  X23 .......  Xk3⎥ ⎢β2⎥ + ⎢U3⎥
⎢ .⎥   ⎢.   .    .  .......   . ⎥ ⎢ .⎥   ⎢ .⎥
⎣Yn⎦   ⎣1  X1n  X2n .......  Xkn⎦ ⎣βk⎦   ⎣Un⎦
  Y   =              X              β   +   U
In short, Y = Xβ + U ……………………………………………………(3.29)
Econometrics, Module I 61
Prepared by: Bedru B. and Seid H. ( June, 2005)
β̂ = [β̂0, β̂1, ....., β̂k]′  and  e = [e1, e2, ....., en]′ (both column vectors), so that
Σei² = e1² + e2² + e3² + ......... + en² = [e1, e2, ......, en]·[e1, e2, ......, en]′ = e′e
Since ∂(X′AX)/∂X = 2AX for a symmetric matrix A, minimizing e′e = Y′Y − 2β̂′X′Y + β̂′X′Xβ̂ with respect to β̂ gives the normal equations (X′X)β̂ = X′Y, so that β̂ = (X′X)⁻¹X′Y.
Hence β̂ is the vector of required least square estimators, βˆ0 , βˆ1 , βˆ 2 ,........βˆ k .
Let C= ( X ′X ) −1 X ′
⇒ βˆ = CY …………………………………………….(3.33)
2. Unbiasedness
β̂ = (X′X)⁻¹X′Y
  = (X′X)⁻¹X′(Xβ + U)
  = β + (X′X)⁻¹X′U
E(β̂) = β + (X′X)⁻¹X′E(U)
     = β , since E(U) = 0
Thus, the least squares estimators are unbiased.
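The unbiasedness property can be checked by simulation: averaging β̂ = (X′X)⁻¹X′Y over many samples drawn under the model recovers the true β. This is an illustrative sketch assuming NumPy is available; the true coefficients and the regressor values are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 60, 3000
beta = np.array([2.0, 0.5, -1.0])          # true parameters (illustrative)

# One fixed design matrix with an intercept column, as in Y = X beta + U
X = np.column_stack([np.ones(n),
                     rng.uniform(0, 10, size=n),
                     rng.uniform(0, 5, size=n)])
C = np.linalg.inv(X.T @ X) @ X.T            # C = (X'X)^-1 X'

draws = np.empty((reps, 3))
for r in range(reps):
    u = rng.normal(0.0, 1.0, size=n)        # E(U) = 0, Var(U) = sigma^2 I
    draws[r] = C @ (X @ beta + u)           # beta_hat = C Y

print(draws.mean(axis=0))                   # close to the true beta
```

Each individual β̂ differs from β, but the sampling average converges to β, which is exactly what E(β̂) = β asserts.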
3. Minimum variance
Before showing that all the OLS estimators are best (possess the minimum variance property), it is important to derive their variances.
We know that var(β̂) = E[(β̂ − β)(β̂ − β)′], where
E[(β̂ − β)(β̂ − β)′] =
⎡ E(β̂1−β1)²             E[(β̂1−β1)(β̂2−β2)]   .......  E[(β̂1−β1)(β̂k−βk)] ⎤
⎢ E[(β̂2−β2)(β̂1−β1)]    E(β̂2−β2)²            .......  E[(β̂2−β2)(β̂k−βk)] ⎥
⎢        :                       :                            :          ⎥
⎣ E[(β̂k−βk)(β̂1−β1)]    E[(β̂k−βk)(β̂2−β2)]   ........  E(β̂k−βk)²         ⎦
The above matrix is symmetric, containing the variances along its main diagonal and the covariances of the estimators everywhere else. This matrix is, therefore, called the variance-covariance matrix of the least squares estimators of the regression coefficients. Thus,
var(β̂) = E[(β̂ − β)(β̂ − β)′] ……………………………………………(3.35)
Substituting Y = Xβ + U into β̂ = (X′X)⁻¹X′Y,
⇒ β̂ − β = (X′X)⁻¹X′U ………………………………………………(3.36)
so that var(β̂) = E[(X′X)⁻¹X′UU′X(X′X)⁻¹] = σu²(X′X)⁻¹ ……………………(3.37)

            ⎡ n        ΣX1i      .......  ΣXki    ⎤
where X′X = ⎢ ΣX1i     ΣX1i²     .......  ΣX1iXki ⎥
            ⎢  :         :                  :     ⎥
            ⎣ ΣXki     ΣX1iXki   .......  ΣXki²   ⎦

We can, therefore, obtain the variance of any estimator, say β̂1, by taking the corresponding term on the principal diagonal of (X′X)⁻¹ and then multiplying it by σu².
Here the X’s are in their absolute form. When the x’s are in deviation form, we can write the multiple regression in matrix form as:
β̂ = (x′x)⁻¹x′y
The above column vector β̂ does not include the constant term β̂0. Under such conditions the variances of the slope parameters in deviation form can be written as:
var(β̂) = σu²(x′x)⁻¹ …………………………………………………….(3.38)
(the proof is the same as for (3.37) above). In general, we can illustrate the variances of the parameters by taking the case of two explanatory variables. The multiple regression in deviation form with two explanatory variables is
y = β̂1x1 + β̂2x2 + e
var(β̂) = E[(β̂ − β)(β̂ − β)′]
In this model:
(β̂ − β) = ⎡(β̂1 − β1)⎤
          ⎣(β̂2 − β2)⎦
(β̂ − β)′ = [(β̂1 − β1)  (β̂2 − β2)]

∴ (β̂ − β)(β̂ − β)′ = ⎡(β̂1 − β1)²              (β̂1 − β1)(β̂2 − β2)⎤
                     ⎣(β̂1 − β1)(β̂2 − β2)     (β̂2 − β2)²         ⎦

and E[(β̂ − β)(β̂ − β)′] = ⎡E(β̂1 − β1)²              E[(β̂1 − β1)(β̂2 − β2)]⎤
                          ⎣E[(β̂1 − β1)(β̂2 − β2)]   E(β̂2 − β2)²           ⎦

                        = ⎡ var(β̂1)       cov(β̂1, β̂2)⎤
                          ⎣ cov(β̂1, β̂2)  var(β̂2)     ⎦

∴ σu²(x′x)⁻¹ = σu² ⎡Σx1²    Σx1x2⎤⁻¹ = [σu² / (Σx1²Σx2² − (Σx1x2)²)] ⎡ Σx2²     −Σx1x2⎤
                   ⎣Σx1x2   Σx2² ⎦                                   ⎣−Σx1x2     Σx1² ⎦

i.e., var(β̂1) = σu²Σx2² / (Σx1²Σx2² − (Σx1x2)²) ……………………………………(3.39)
and, var(β̂2) = σu²Σx1² / (Σx1²Σx2² − (Σx1x2)²) ………………. …….…….(3.40)
cov(β̂1, β̂2) = −σu²Σx1x2 / (Σx1²Σx2² − (Σx1x2)²) …………………………………….(3.41)
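Equations (3.39)-(3.41) translate directly into code. The error variance and deviation sums below are made-up illustrative values, not figures from the text.

```python
# Sketch: variances and covariance of the two slope estimators,
# equations (3.39)-(3.41). All input numbers are illustrative.
sigma2_hat = 2.5                               # estimated error variance
sum_x1sq, sum_x2sq, sum_x1x2 = 50.0, 80.0, 20.0

denom = sum_x1sq * sum_x2sq - sum_x1x2 ** 2    # sum(x1^2)sum(x2^2) - (sum x1x2)^2
var_b1 = sigma2_hat * sum_x2sq / denom
var_b2 = sigma2_hat * sum_x1sq / denom
cov_b1b2 = -sigma2_hat * sum_x1x2 / denom
print(round(var_b1, 4), round(var_b2, 4), round(cov_b1b2, 4))
```

When Σx1x2 > 0 the covariance of the two slope estimators is negative: overestimating one slope tends to go with underestimating the other.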
As we have seen in the simple regression model, σ̂² = Σei²/(n − 2). For k parameters (including the constant parameter), σ̂² = Σei²/(n − k). In the above model we have three parameters including the constant term, so
σ̂² = Σei²/(n − 3)
Σei² = Σyi² − β̂1Σx1y − β̂2Σx2y − ......... − β̂kΣxky ………………………(3.42)
Σei² = Σyi² − β̂1Σx1y − β̂2Σx2y ………………………………………...(3.43)
This is all about the variances and covariances of the parameters. Now it is time to see the minimum variance property.
Minimum variance of β̂
To show that all the β̂i's in the β̂ vector are best estimators, we have also to prove that the variances obtained in (3.37) are the smallest amongst all other possible linear unbiased estimators. We follow the same procedure as in the case of the single explanatory variable model: we first assume an alternative linear unbiased estimator and then establish that its variance is greater than that of the OLS estimator.
Assume that β* is an alternative unbiased and linear estimator of β. Suppose that
β* = [(X′X)⁻¹X′ + B]Y
where B is a matrix of known constants. Then
E(β*) = E[((X′X)⁻¹X′ + B)(Xβ + U)]
      = E[(X′X)⁻¹X′Xβ + (X′X)⁻¹X′U + BXβ + BU]
      = β + BXβ , [since E(U) = 0] ……………………………….(3.44)
Since our assumption regarding the alternative β* is that it is to be an unbiased estimator of β, E(β*) should be equal to β; in other words, BX should be a null matrix. Hence
var(β*) = E[(β* − β)(β* − β)′]
        = E[{((X′X)⁻¹X′ + B)U}{((X′X)⁻¹X′ + B)U}′]          (since BX = 0)
        = E[{(X′X)⁻¹X′U + BU}{U′X(X′X)⁻¹ + U′B′}]
        = σu²[(X′X)⁻¹ + BB′]                                 (since BX = 0)
var(β*) = σu²(X′X)⁻¹ + σu²BB′ ……………………………………….(3.45)
Or, in other words, var(β*) is greater than var(β̂) by the expression σu²BB′, and it
We know that Σei² = e′e = Y′Y − 2β̂′X′Y + β̂′X′Xβ̂ = Y′Y − β̂′X′Y , since (X′X)β̂ = X′Y, and
ΣYi² = Y′Y
We know, yi = Yi − Ȳ
∴ Σyi² = ΣYi² − (1/n)(ΣYi)²
In matrix notation
Σyi² = Y′Y − (1/n)(ΣYi)² ………………………………………………(3.48)
Equation (3.48) gives the total sum of squares (total variation) in the model.
Explained sum of squares = Σyi² − Σei²
                        = Y′Y − (1/n)(ΣYi)² − e′e
                        = β̂′X′Y − (1/n)(ΣYi)² ……………………….(3.49)
Since R² = Explained sum of squares / Total sum of squares,
∴ R² = [β̂′X′Y − (1/n)(ΣYi)²] / [Y′Y − (1/n)(ΣYi)²] = (β̂′X′Y − nȲ²)/(Y′Y − nȲ²) ……………………(3.50)
Dear students! We hope that from the discussion made so far on the multiple regression model you can make the following summary of results.
(i) Model: Y = Xβ + U
(vi) Coefficient of determination: R² = [β̂′X′Y − (1/n)(ΣYi)²] / [Y′Y − (1/n)(ΣYi)²] = (β̂′X′Y − nȲ²)/(Y′Y − nȲ²)
B. H 0 : β 2 = 0
H1 : β 2 ≠ 0
The null hypothesis (A) states that, holding X2 constant, X1 has no (linear) influence on Y. Similarly, hypothesis (B) states that, holding X1 constant, X2 has no influence on the dependent variable Yi. To test these null hypotheses we will use the following tests:
i. Standard error test: under this and the following testing methods we test only for β̂1; the test for β̂2 is done in the same way.
SE(β̂1) = √var(β̂1) = √[σ̂²Σx2i² / (Σx1i²Σx2i² − (Σx1x2)²)] ; where σ̂² = Σei²/(n − 3)
• If SE(β̂1) > ½|β̂1| , we accept the null hypothesis; that is, we conclude that the estimate β̂1 is not statistically significant.
• If SE(β̂1) < ½|β̂1| , we reject the null hypothesis; that is, we conclude that the estimate β̂1 is statistically significant.
Note: the smaller the standard errors, the stronger the evidence that the estimates are statistically reliable.
ii. The student's t-test: we compute the t-ratio for each β̂i:
t* = (β̂i − β)/SE(β̂i) ~ t(n−k) , where n is the number of observations and k is the number of parameters. Under H0: β2 = 0,
t* = β̂2/SE(β̂2)
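The t-ratio comparison above can be sketched as follows. The estimate, its standard error, and the critical value are hypothetical numbers of ours; in practice the critical value comes from a t-table with n − k degrees of freedom.

```python
# Sketch of the t-test for H0: beta_j = 0 against H1: beta_j != 0.
beta_hat = 0.33                  # hypothetical slope estimate
se_beta = 0.12                   # hypothetical standard error
t_star = beta_hat / se_beta      # computed t-ratio

t_crit = 2.447                   # assumed two-tailed 5% critical value (6 df)
reject_h0 = abs(t_star) > t_crit # reject H0 when |t*| exceeds the critical value
print(round(t_star, 2), reject_h0)
```

With these numbers |t*| ≈ 2.75 exceeds the critical value, so the coefficient would be judged statistically significant at the 5% level.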
In this section we extend this idea to joint test of the relevance of all the included
explanatory variables. Now consider the following:
Y = β 0 + β 1 X 1 + β 2 X 2 + ......... + β k X k + U i
H 0 : β 1 = β 2 = β 3 = ............ = β k = 0
Econometrics, Module I 72
Prepared by: Bedru B. and Seid H. ( June, 2005)
from the one used in testing the significance of β̂ 3 under the null hypothesis that
β 3 = 0 . But to test the joint hypothesis of the above, we shall be violating the
assumption underlying the test procedure.
The test procedure for any set of hypothesis can be based on a comparison of the
sum of squared errors from the original, the unrestricted multiple regression
model to the sum of squared errors from a regression model in which the null
hypothesis is assumed to be true. When a null hypothesis is assumed to be true,
we in effect place conditions or constraints, on the values that the parameters can
take, and the sum of squared errors increases. The idea of the test is that if these
sum of squared errors are substantially different, then the assumption that the joint
null hypothesis is true has significantly reduced the ability of the model to fit the
data, and the data do not support the null hypothesis.
If the null hypothesis is true, we expect that the data are compatible with the conditions placed on the parameters. Thus, there would be little change in the sum of squared errors when the null hypothesis is assumed to be true.
Let the Restricted Residual Sum of Square (RRSS) be the sum of squared errors
in the model obtained by assuming that the null hypothesis is true and URSS be
the sum of the squared error of the original unrestricted model i.e. unrestricted
residual sum of square (URSS). It is always true that RRSS - URSS ≥ 0.
¹ Gujarati, 3rd ed., pp.
Yi = Ŷi + ei
ei = Yi − Ŷi
This sum of squared errors is called the unrestricted residual sum of squares (URSS). This is the case when the null hypothesis is not true. If the null hypothesis is assumed to be true, i.e. when all the slope coefficients are zero:
Y = β̂0 + ei
β̂0 = ΣYi/n = Ȳ → (applying OLS) …………………………….(3.52)
e = Y − β̂0 , but β̂0 = Ȳ
e = Y − Ȳ
The sum of squared errors when the null hypothesis is assumed to be true is called the restricted residual sum of squares (RRSS), and this is equal to the total sum of squares (TSS).
The ratio: [(RRSS − URSS)/(k − 1)] / [URSS/(n − k)] ~ F(k−1, n−k) ……………………… (3.53)
(it has an F-distribution with k − 1 and n − k degrees of freedom for the numerator and denominator respectively). Since RRSS = TSS and URSS = RSS,
F = [(TSS − RSS)/(k − 1)] / [RSS/(n − k)]
F = [ESS/(k − 1)] / [RSS/(n − k)] ………………………………………………. (3.54)
If we divide the numerator and denominator by Σy² = TSS, then:
F = [(ESS/TSS)/(k − 1)] / [(RSS/TSS)/(n − k)]
F = [R²/(k − 1)] / [(1 − R²)/(n − k)] …………………………………………..(3.55)
This implies that the computed value of F can be calculated either from ESS and RSS or from R² and 1 − R². If the null hypothesis is not true, then the difference between RRSS and URSS (TSS and RSS) becomes large, implying that the constraints placed on the model by the null hypothesis have a large effect on the ability of the model to fit the data, and the value of F tends to be large. Thus, we reject the null hypothesis if the F test statistic becomes too large. This value is compared with the critical value of F, which leaves a probability of α in the upper tail of the F-distribution with k − 1 and n − k degrees of freedom.
If the computed value of F is greater than the critical value F(k−1, n−k), then the parameters of the model are jointly significant, or the dependent variable Y is linearly related to the independent variables included in the model.
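The overall F statistic computed from R², equation (3.55), takes one line of code. The R², sample size, and parameter count below are illustrative choices of ours.

```python
# Sketch: overall F statistic from R^2, equation (3.55).
# Illustrative inputs: R^2 = 0.97, n = 10 observations,
# k = 4 parameters (intercept plus three slopes).
r2, n, k = 0.97, 10, 4
f_star = (r2 / (k - 1)) / ((1 - r2) / (n - k))
print(round(f_star, 2))   # compare with the critical value F(k-1, n-k)
```

A large F* relative to the tabled F(k−1, n−k) leads to rejecting H0 that all slope coefficients are jointly zero.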
Table 2.1. Numerical example for the computation of the OLS estimators.
 n     Y    X1    X2    X3     y    x1    x2    x3     y²   x1x2   x2x3   x1x3    x1²   x2²   x3²    x1y    x2y    x3y
 1    49    35    53   200    -3    -7    -9     0      9     63      0      0     49    81     0     21     27      0
 2    40    35    53   212   -12    -7    -9    12    144     63   -108    -84     49    81   144     84    108   -144
 3    41    38    50   211   -11    -4   -12    11    121     48   -132    -44     16   144   121     44    132   -121
 4    46    40    64   212    -6    -2     2    12     36     -4     24    -24      4     4   144     12    -12    -72
 5    52    40    70   203     0    -2     8     3      0    -16     24     -6      4    64     9      0      0      0
 6    59    42    68   194     7     0     6    -6     49      0    -36      0      0    36    36      0     42    -42
 7    53    44    59   194     1     2    -3    -6      1     -6     18    -12      4     9    36      2     -3     -6
 8    61    46    73   188     9     4    11   -12     81     44   -132    -48     16   121   144     36     99   -108
 9    55    50    59   196     3     8    -3    -4      9    -24     12    -32     64     9    16     24     -9    -12
10    64    50    71   190    12     8     9   -10    144     72    -90    -80     64    81   100     96    108   -120
Sum  520   420   620  2000     0     0     0     0    594    240   -420   -330    270   630   750    319    492   -625
From the table, the means of the variables are: Ȳ = 520/10 = 52, X̄1 = 420/10 = 42, X̄2 = 620/10 = 62, and X̄3 = 2000/10 = 200.
Based on the above table and model, answer the following questions:
i. Estimate the parameters using the matrix approach
ii. Compute the variances of the parameters
iii. Compute the coefficient of determination (R²)
iv. Report the regression result.
Solution:
In matrix notation β̂ = (x′x)⁻¹x′y (when we use the data in deviation form), where β̂ = [β̂1, β̂2, β̂3]′ and x is the n×3 matrix of the regressors in deviation form, with rows (x1i, x2i, x3i); so that, from Table 2.1,

       ⎡ 270    240   -330⎤              ⎡ 319⎤
x′x =  ⎢ 240    630   -420⎥  and  x′y =  ⎢ 492⎥
       ⎣-330   -420    750⎦              ⎣-625⎦

Note: the calculations may be made easier by taking 30 as a common factor from all the elements of the matrix (x′x). This will not affect the final results.
|x′x| = 34,668,000
(ii) σ̂u² = Σei²/(n − k) = 17.11/6 = 2.851, so that
var(β̂1) = σ̂u²(0.0085)
var(β̂2) = σ̂u²(0.0027)
var(β̂3) = σ̂u²(0.0032)
(iii) R² = [β̂′X′Y − (1/n)(ΣYi)²] / [Y′Y − (1/n)(ΣYi)²] = (β̂1Σx1y + β̂2Σx2y + β̂3Σx3y)/Σyi² = 575.98/594 = 0.97
(iv) The estimated relation may be put in the following form:
Ŷ = 134.28 + 0.2063X1 + 0.3309X2 − 0.5572X3
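The matrix arithmetic of this example can be checked with a short script. This sketch solves (x′x)β̂ = x′y by Cramer's rule using the deviation sums from Table 2.1 and recovers the intercept from the means; small differences from the figures printed in the text come from intermediate rounding there.

```python
# Check of the worked example: solve (x'x) beta_hat = x'y by Cramer's rule.
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(m, col, v):
    """Return a copy of m with column `col` replaced by vector v."""
    return [[v[r] if c == col else m[r][c] for c in range(3)] for r in range(3)]

xtx = [[270, 240, -330],
       [240, 630, -420],
       [-330, -420, 750]]     # deviation sums of squares and cross-products
xty = [319, 492, -625]        # sums x1y, x2y, x3y

d = det3(xtx)
b1, b2, b3 = (det3(replace_col(xtx, j, xty)) / d for j in range(3))

# Intercept from the means: b0 = Ybar - b1*X1bar - b2*X2bar - b3*X3bar
b0 = 52 - b1 * 42 - b2 * 62 - b3 * 200
print(round(b0, 2), round(b1, 4), round(b2, 4), round(b3, 4))
```

The script reproduces the slope estimates of roughly 0.206, 0.331 and −0.557 and an intercept near 134.3, confirming the regression reported in (iv).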
Example 2. The following matrix gives the variances and covariances of three variables:

        y       x1      x2
y    ⎡ 7.59    3.12   26.99 ⎤
x1   ⎢  −     29.16   30.80 ⎥
x2   ⎣  −       −    133.00 ⎦

The first element of the first row shows Σy², and the remaining elements of the first row show Σx1y and Σx2y. The variables are in deviation form:
y = Y − Ȳ , x1 = X1 − X̄1 , and x2 = X2 − X̄2
The above matrix is based on the transformed model. Using the values in the matrix we can now estimate the parameters of the original model.
(a) β̂ = (x′x)⁻¹x′y = ⎡β̂1⎤ = ⎡−0.1421⎤
                     ⎣β̂2⎦   ⎣ 0.2358⎦
(c) R² = (β̂1Σx1y + β̂2Σx2y)/Σyi²
      = [(−0.1421)(3.12) + (0.2358)(26.99)] / 7.59
∴ R² = 0.78 ; Σei² = (1 − R²)(Σyi²) ≈ 1.6680
∴ σ̂u² = 1.6680/17 = 0.0981
The (constant) food price elasticity is negative but the income elasticity is positive. Also, the income elasticity is highly significant. About 78 percent of the variation in the consumption of food is explained by its price and the income of the consumer.
Example 3:
Consider the model: Yi = α + β1X1i + β2X2i + Ui
On the basis of the information given below, answer the following questions.
ΣX1² = 3200     ΣX1X2 = 4300    ΣX2 = 400
ΣX2² = 7300     ΣX1Y = 8400     ΣX2Y = 13500
ΣY = 800        ΣX1 = 250       n = 25
ΣYi² = 28,000
a. Estimate β1 and β2.
β̂1 = (Σx1yΣx2² − Σx2yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²)
Since the x’s and y’s in the above formula are in deviation form we have to find
the corresponding deviation forms of the above given values.
We know that:
Σx1 x 2 = ΣX 1 X 2 − nX 1 X 2
= 4300 − (25)(10)(16)
= 300
Σx1 y = ΣX 1Y − nX 1Y
= 8400 − 25(10)(32)
= 400
Σx 2 y = ΣX 2Y − nX 2Y
= 13500 − 25(16)(32)
= 700
Σx12 = ΣX 12 − nX 12
= 3200 − 25(10) 2
= 700
Σx 22 = ΣX 22 − nX 22
= 7300 − 25(16) 2
= 900
Now we can compute the parameters.
β̂1 = (Σx1yΣx2² − Σx2yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²)
(400)(900) − (700)(300)
=
(900)(700) − (300) 2
= 0.278
β̂2 = (Σx2yΣx1² − Σx1yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²)
   = [(700)(700) − (400)(300)] / [(900)(700) − (300)²]
   = 0.685
b. var(β̂1) = σ̂²Σx2² / (Σx1²Σx2² − (Σx1x2)²)
⇒ σ̂² = Σei²/(n − k), where k is the number of parameters; in our case k = 3, so
σ̂² = Σei²/(n − 3)
Σy² = ΣYi² − nȲ² = 28,000 − 25(32)² = 2,400
Σei² = Σy² − β̂1Σx1y − β̂2Σx2y
     = 2400 − 0.278(400) − 0.685(700)
     = 1809.3
σ̂² = Σei²/(n − 3) = 1809.3/(25 − 3) = 82.24
⇒ var(β̂1) = (82.24)(900)/540,000 = 0.137
var(β̂2) = σ̂²Σx1² / (Σx1²Σx2² − (Σx1x2)²) = (82.24)(700)/540,000 = 0.1067
c. t* = β̂1/SE(β̂1) = 0.278/0.370 = 0.751
The decision rule, if t* < tc, is to reject the alternative hypothesis that β1 is different from zero and to accept the null hypothesis that β1 is equal to zero; i.e. β̂1 is drawn from a population of Y and X1 in which there is no relationship between Y and X1 (i.e. β1 = 0).
d. R² can easily be computed using the following equation:
R² = ESS/TSS = 1 − RSS/TSS
We know that RSS = Σei², so
R² = 1 − 1809.3/2400 ≈ 0.24
Adjusted R² = 1 − [Σei²/(n − k)] / [Σy²/(n − 1)] = 1 − (1 − R²)(n − 1)/(n − k)
            = 1 − (1 − 0.24)(24)/22
            = 0.17
e. Let's first set the joint hypothesis as
H0: β1 = β2 = 0
H1: β1 and β2 are not both zero.
F* = [R²/(k − 1)] / [(1 − R²)/(n − k)] = (0.24/2)/(0.76/22) = 3.47
The critical value of F at the 5% level of significance with (2, 22) degrees of freedom in the numerator and denominator is Fc(2,22) = 3.44.
⇒ F* > Fc ; the decision rule is to reject H0 and accept H1. We can say that the model is significant, i.e. the dependent variable is, at least, linearly related to one of the explanatory variables.
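The computations in Example 3 can be verified end-to-end with a short script. Small differences from the text's figures (e.g. its F* = 3.47) arise because the text rounds R² to 0.24 before computing F, while the script carries full precision.

```python
# Check of Example 3: reproduce the main numbers from the deviation sums.
sum_x1y, sum_x2y = 400.0, 700.0
sum_x1sq, sum_x2sq, sum_x1x2 = 700.0, 900.0, 300.0
sum_ysq, n, k = 2400.0, 25, 3

denom = sum_x1sq * sum_x2sq - sum_x1x2 ** 2              # 540,000
b1 = (sum_x1y * sum_x2sq - sum_x2y * sum_x1x2) / denom   # slope on X1
b2 = (sum_x2y * sum_x1sq - sum_x1y * sum_x1x2) / denom   # slope on X2

rss = sum_ysq - b1 * sum_x1y - b2 * sum_x2y              # residual sum of squares
sigma2 = rss / (n - k)                                   # error variance estimate
var_b1 = sigma2 * sum_x2sq / denom
t_b1 = b1 / var_b1 ** 0.5                                # t-ratio for beta_1
r2 = 1 - rss / sum_ysq
f_star = (r2 / (k - 1)) / ((1 - r2) / (n - k))           # overall F statistic
print(round(b1, 3), round(b2, 3), round(t_b1, 2), round(r2, 2), round(f_star, 2))
```

The script confirms β̂1 ≈ 0.278, β̂2 ≈ 0.685, t* ≈ 0.75 and an F statistic comfortably above the 5% critical value of 3.44.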
• Instructions:
Read the following instructions carefully.
Make sure that your exam paper contains 4 pages
The exam has four parts. Attempt
All questions of part one
Only two questions from part two
One question from part three
And the question in part four.
Maximum weight of the exam is 40%
Part One: Attempt all of the following questions (15pts).
1. Discuss briefly the goals of econometrics.
2. A researcher is using data for a sample of 10 observations to estimate the relation between consumption expenditure and income. Preliminary analysis of the sample data produces the following:
Σxy = 700 , Σx² = 1000 , ΣX = 100 , ΣY = 200
where x = Xi − X̄ and y = Yi − Ȳ
a. Use the above information to compute OLS estimates of the intercept and slope
coefficients and interpret the result
b. Calculate the variance of the slope parameter
c. Compute the value R2 (coefficient of determination) and interpret the result
d. Compute 95% confidence interval for the slope parameter
e. Test the significance of the slope parameter at the 5% level of significance using the t-test
3. If the model Yi = α + β1X1i + β2X2i + Ui is to be estimated from a sample of 20 observations using the semi-processed data given below in deviation form:

(x′x)⁻¹ = ⎡ 0.5    -0.08⎤        x′y = ⎡100⎤
          ⎣-0.08    0.6 ⎦              ⎣250⎦

X̄1 = 10 , X̄2 = 25 and Ȳ = 30
           ⎡ 0.1    -0.12   -0.03⎤           ⎡10,000⎤
(x′x)⁻¹ =  ⎢-0.12    0.04    0.02⎥    X′Y =  ⎢20,300⎥
           ⎣-0.03    0.02    0.08⎦           ⎢10,100⎥
                                             ⎣30,200⎦
ΣX1 = 400 , ΣX2 = 200 , and ΣX3 = 600
where x = Xi − X̄i and y = Yi − Ȳ
2. In a study of 100 firms, the total cost (C) was assumed to be dependent on the rate of output (X1) and the rate of absenteeism (X2). The means were: C̄ = 6, X̄1 = 3 and X̄2 = 4. The matrix showing sums of squares and cross-products adjusted for the means is:

        c     x1     x2
c    ⎡ 100    50     40⎤
x1   ⎢  50    50    -70⎥     where xi = Xi − X̄i and c = Ci − C̄
x2   ⎣  40   -70    900⎦

Estimate the linear relationship between C and the other two variables. (10 points)