
Chapter Two

2.1 Econometrics and regression analysis

Regression analysis is the most commonly applied econometric tool. A central role of econometrics is to provide tools for building models from given data, and regression modeling is the main technique for this task. Regression models can be either linear or non-linear. We will consider only the tools of linear regression analysis, and our main interest will be fitting a linear regression model to a given set of data.

2.2 The simple linear econometric model

In statistics, a simple linear regression model uses a single explanatory variable to predict the value of another variable. The linear regression model is one of the fundamental workhorses of econometrics and is used to model a wide variety of economic relationships. The general model assumes a linear relationship between a dependent variable $y$ and one or more independent variables $X$. The simplest linear relation is called the simple linear regression model, and the sample data then fit the statistical model:

Data = fit + residual

$$y_i = (\beta_0 + \beta_1 x_i) + \varepsilon_i$$

$\beta_0$: the intercept. Mathematically, $\beta_0$ is the expected value of $y$ when $x_i = 0$ in
$$E(y) = \beta_0 + \beta_1 x_i$$

(geometrically, it is the vertical distance between the origin and the point where the regression line crosses the Y-axis).

$\beta_1$: the slope of the function; it represents the change in the dependent variable $Y$ when the explanatory variable $X$ increases by one unit.

$$\text{Slope} = \beta_1 = \frac{\Delta Y}{\Delta X}, \qquad \text{Slope} = \beta_1 = \tan\theta = \frac{\sin\theta}{\cos\theta} = \frac{\Delta Y / Z}{\Delta X / Z} = \frac{\Delta Y}{\Delta X}$$

When $x_i$ increases by one unit, $y$ is expected to change by $\beta_1$ units.

$$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i$$

$\hat{y}_i$: the estimated (predicted) value of the dependent variable for case $i$.

Note: the sample regression line passes through the sample means of $X$ and $Y$, i.e., through the point $(\bar{x}, \bar{y})$.
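As a concrete illustration of "Data = fit + residual", here is a minimal Python simulation sketch of the model $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$ (the parameter values and sample size are illustrative assumptions, not from the text):

```python
import numpy as np

# Simulate data from a simple linear model: data = fit + residual.
# b0=2, b1=0.5, n=50 are illustrative assumptions.
rng = np.random.default_rng(0)
n, b0, b1 = 50, 2.0, 0.5
x = rng.uniform(0, 10, n)        # explanatory variable
e = rng.normal(0, 1, n)          # random error: E(e)=0, constant variance
y = b0 + b1 * x + e              # dependent variable = fit + residual
print(y[:5])
```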

Some examples of simple relationships in economic theory:

1- Theory of demand and supply
2- Theory of production and cost
3- Theory of income and consumption

Assumptions of Linear Regression

First: assumptions about the random error $e_i$:

1. The expected value of the error is zero for every observation $x_i$: $E(e_i) = 0$.


2. The variance of the random error is constant (homogeneous) around its mean for all values of $x_i$ and equals $\sigma_e^2$:
$$Var(e_i) = E[e_i - E(e_i)]^2 = E(e_i)^2 = \sigma_e^2 \quad \text{for every value of } x_i$$
$$\sigma_{e_1}^2 = \sigma_{e_2}^2 = \dots = \sigma_{e_n}^2 = \sigma_e^2$$
Note: the dispersion of the observations around the regression line must be constant.

When the error variance is not constant, we have the heteroscedasticity problem:
$$Var(e_i) \neq \sigma^2, \qquad \sigma_1^2 \neq \sigma_2^2 \neq \dots \neq \sigma_n^2 \neq \sigma^2$$

3. The random errors are normally distributed with mean zero and variance $\sigma_e^2$. This allows us to carry out statistical tests, since those tests are valid only when the population data are normally distributed.

4. The values of the random error $e_i$ are independent of the values of the explanatory variable $x_i$: $Cov(e_i, x_i) = 0$ and $E(e_i x_i) = 0$ (no correlation).
5. There is no relation between the value of the random error in the current period $e_t$ and its value in the previous period $e_{t-1}$ or the next period $e_{t+1}$; that is, there is no autocorrelation problem among the errors:
$$Cov(e_i, e_j) = 0 \quad \text{for all } i \neq j, \quad i, j = 1, 2, \dots, n$$
Notes: the reasons for adding a random error term to the model are:
1. Omission of some variables or factors from the model.
2. The random nature of human behavior.
3. Mis-specification of the functional relation.
4. Collection errors: errors that may occur when collecting the data.
5. Measurement errors: errors in measuring the observations of the variables.

Second: assumptions about the dependent random variable $Y$:

1. The function is linear in the parameters, and the parameter values are constant.


2. The explanatory variable $x_i$ is a fixed (non-random) variable.
3. The variance of $y_i$ equals the variance of $e_i$:
$$Var(y_i) = Var(e_i)$$
$$y_i = \beta_0 + \beta_1 x_i + e_i, \qquad E(y_i) = \beta_0 + \beta_1 x_i$$
$$Var(y_i) = E[y_i - E(y_i)]^2 = E[\beta_0 + \beta_1 x_i + e_i - \beta_0 - \beta_1 x_i]^2 = E(e_i)^2 = \sigma^2$$
$$Var(e_i) = E[e_i - E(e_i)]^2 = E(e_i)^2 = \sigma^2$$

4. There is no relation (correlation) among the explanatory variables in the model; when this problem appears in a multiple regression model it is called multicollinearity.
5. There are no outliers, extreme points, or influential points.

2.3 Checking the assumptions graphically

To check whether these assumptions are valid, we perform residual analysis (plots and tests):

1. For constant variance (variance homogeneity):
To check this, look at the plot of the residuals versus the $x_i$ or $\hat{y}_i$ values; the dispersion of the errors should be random.

2. For linearity:
To check this, again look at the plot of the residuals versus the $x_i$ values, or look at the scatter plot.

3. For independence of errors:
To check this, look at the plot of the residuals against the estimated values $\hat{y}_i$; the pattern must be random. If these plots are not random (dependent errors), this can also be seen in the autocorrelation function (ACF) plot.

4. For normality:
To check this, look at the q-q (quantile-quantile) plot, i.e., normal probability paper: a plot of the pairs $(q_i, e_i)$ against the normal c.d.f. If the plot is approximately linear with unit slope, the errors are normally distributed.

5. For outliers and extreme points:
To check this we use the box-plot, with

Box length $= Q_3 - Q_1$

If a value lies farther than $1.5(Q_3 - Q_1)$ from the box, it is an outlier; if it lies farther than $3(Q_3 - Q_1)$, it is an extreme point.
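These diagnostics can be produced with a short script. Below is a minimal sketch (simulated data, assumed 1-D arrays) of the three main plots: residuals versus fitted values, a q-q plot, and a box-plot:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated data for illustration only.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
y = 2 + 0.5 * x + rng.normal(0, 1, 60)

b1, b0 = np.polyfit(x, y, 1)       # OLS fit of a straight line
y_hat = b0 + b1 * x
resid = y - y_hat

fig, ax = plt.subplots(1, 3, figsize=(12, 3))
ax[0].scatter(y_hat, resid)        # checks constant variance / independence
ax[0].axhline(0)
ax[0].set_title("Residuals vs fitted")
stats.probplot(resid, dist="norm", plot=ax[1])   # q-q plot for normality
ax[2].boxplot(resid)               # box-plot for outliers / extreme points
ax[2].set_title("Box-plot of residuals")
plt.show()
```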

2.4 Derivation of the simple linear model coefficients using OLS

1. Using the original values of the observations.


Let $y_i = \beta_0 + \beta_1 x_i + e_i$, with
$$e_i \sim N(0, \sigma^2), \quad Cov(e_i, e_j) = 0, \quad Cov(e_i, x_j) = 0, \quad Var(y_i) = Var(e_i) = \sigma^2$$
$$E(y_i) = \beta_0 + \beta_1 x_i$$
$$e_i = y_i - E(y_i) = y_i - \beta_0 - \beta_1 x_i$$
$$Q = \sum e_i^2 = \sum (y_i - E(y_i))^2 = \sum (y_i - \beta_0 - \beta_1 x_i)^2 \geq 0$$

$$\frac{\partial Q}{\partial \beta_0} = \frac{\partial \sum e_i^2}{\partial \beta_0} = -2\sum (y_i - \beta_0 - \beta_1 x_i), \quad \frac{\partial Q}{\partial \beta_0} = 0 \;\Rightarrow\; \sum y_i = n\hat{\beta}_0 + \hat{\beta}_1 \sum x_i \quad \dots (1)$$

$$\frac{\partial Q}{\partial \beta_1} = \frac{\partial \sum e_i^2}{\partial \beta_1} = -2\sum (y_i - \beta_0 - \beta_1 x_i)x_i, \quad \frac{\partial Q}{\partial \beta_1} = 0 \;\Rightarrow\; \sum x_i y_i = \hat{\beta}_0 \sum x_i + \hat{\beta}_1 \sum x_i^2 \quad \dots (2)$$

After taking the partial derivatives of $Q$ with respect to $\beta_0, \beta_1$, setting them equal to zero, and solving equations (1) and (2) simultaneously, we get:
$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}, \qquad \hat{\beta}_1 = \frac{\sum x_i y_i - n\bar{x}\bar{y}}{\sum x_i^2 - n\bar{x}^2}$$

2. Using the matrix model:

$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \quad \varepsilon_i = y_i - \beta_0 - \beta_1 x_i, \quad E(y_i) = \beta_0 + \beta_1 x_i, \quad Y = X\beta + \varepsilon, \quad \varepsilon = Y - X\beta$$

$$SSE = \sum \varepsilon_i^2 = \varepsilon'\varepsilon = (Y - X\beta)'(Y - X\beta)$$

$$\varepsilon'\varepsilon = (Y' - \beta'X')(Y - X\beta) = Y'Y - Y'X\beta - \beta'X'Y + \beta'X'X\beta = Y'Y - 2\beta'X'Y + \beta'X'X\beta$$

$$\frac{\partial SSE}{\partial \beta} = -2X'Y + 2X'X\hat{\beta} = 0 \;\Rightarrow\; X'X\hat{\beta} = X'Y \quad \dots (1)$$

Solving these equations by multiplying both sides by $(X'X)^{-1}$, we obtain the least squares estimates:
$$\hat{\beta} = (X'X)^{-1}X'Y$$
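A sketch of this matrix formula in Python; the design matrix carries a column of ones for the intercept, and `solve` is used instead of an explicit inverse for numerical stability (the data are arbitrary):

```python
import numpy as np

# beta_hat = (X'X)^(-1) X'Y via a linear solve.
x = np.array([5.0, 8.0, 10.0, 12.0])
y = np.array([4.0, 6.0, 7.0, 8.0])
X = np.column_stack([np.ones_like(x), x])   # n x 2 matrix [1, x_i]
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # [beta0_hat, beta1_hat]
```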

3. Using Cramer's rule: we can use determinants to find the estimated values of $\beta_0, \beta_1$.

Let $y_i = \beta_0 + \beta_1 x_i + e_i$. The normal equations are:
$$\sum y_i = n\beta_0 + \beta_1 \sum x_i \quad \dots (1)$$
$$\sum x_i y_i = \beta_0 \sum x_i + \beta_1 \sum x_i^2 \quad \dots (2)$$

In matrix form:
$$\begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix} = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}$$

$$N = \begin{vmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{vmatrix} = n\sum x_i^2 - \left(\sum x_i\right)^2$$

$$N_0 = \begin{vmatrix} \sum y_i & \sum x_i \\ \sum x_i y_i & \sum x_i^2 \end{vmatrix} = \sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i$$

$$N_1 = \begin{vmatrix} n & \sum y_i \\ \sum x_i & \sum x_i y_i \end{vmatrix} = n\sum x_i y_i - \sum x_i \sum y_i$$

$$\hat{\beta}_0 = \frac{N_0}{N} = \frac{\sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i}{n\sum x_i^2 - (\sum x_i)^2}, \qquad \hat{\beta}_1 = \frac{N_1}{N} = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - (\sum x_i)^2}$$

H.W.: prove that
$$1.\ \hat{\beta}_0 = \frac{\sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i}{n\sum x_i^2 - (\sum x_i)^2} \qquad 2.\ \hat{\beta}_1 = \frac{\sum_{i=1}^{n} x_i (y_i - \bar{y})}{\sum_{i=1}^{n} x_i (x_i - \bar{x})}$$
$$3.\ \hat{\beta}_1 = \frac{\sum_{i=1}^{n} y_i (x_i - \bar{x})}{\sum_{i=1}^{n} x_i (x_i - \bar{x})} \qquad 4.\ \hat{\beta}_1 = \frac{\sum_{i=1}^{n} y_i x_i - \frac{(\sum_{i=1}^{n} y_i)(\sum_{i=1}^{n} x_i)}{n}}{\sum_{i=1}^{n} x_i^2 - \frac{(\sum_{i=1}^{n} x_i)^2}{n}}$$

Example -1- // The motor pool wants to know if it costs more to maintain cars that are
driven more often.

Hypothesis: maintenance costs are affected by car mileage.


Null hypothesis: there is no relationship between mileage and maintenance costs

Dependent variable: Y is the cost in dollars of yearly maintenance on a motor vehicle


Independent variable: X is the yearly mileage on the same motor vehicle

Data are gathered on each car in the motor pool, regarding number of miles driven in a
given year, and maintenance costs for that year. Here is a sample of the data collected.

Car Number 1 2 3 4 5
Miles Driven (X) 80 29 53 13 45
Repair Costs (Y) 120 54 65 20 32

Find the estimated linear regression equation by using:


1. The original values of the observations.
2. The matrix model.
3. Cramer's rule.
Solution -1- // $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, $\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i$, $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$, $\hat{\beta}_1 = \dfrac{\sum x_i y_i - n\bar{x}\bar{y}}{\sum x_i^2 - n\bar{x}^2}$

Car Number | Repair Costs $y_i$ | Miles Driven $x_i$ | $x_i^2$ | $x_i y_i$
1 | 120 | 80 | 6400 | 9600
2 | 54 | 29 | 841 | 1566
3 | 65 | 53 | 2809 | 3445
4 | 20 | 13 | 169 | 260
5 | 32 | 45 | 2025 | 1440
Sum | $\sum y_i = 291$ | $\sum x_i = 220$ | $\sum x_i^2 = 12244$ | $\sum x_i y_i = 16311$

$\bar{y} = 58.2$, $\bar{x} = 44$, $\bar{x}^2 = 1936$

$$\hat{\beta}_1 = \frac{\sum x_i y_i - n\bar{x}\bar{y}}{\sum x_i^2 - n\bar{x}^2} = \frac{16311 - 5(44)(58.2)}{12244 - 5(1936)} = \frac{16311 - 12804}{12244 - 9680} = \frac{3507}{2564} = 1.3678$$

$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} = 58.2 - (1.3678)(44) = 58.2 - 60.1832 = -1.9832$$

$$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i = -1.9832 + 1.3678 x_i$$

Solution -2- // $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, $Y = X\beta + \varepsilon$, $\hat{Y} = X\hat{\beta}$, $e = Y - X\hat{\beta}$, $\hat{\beta} = (X'X)^{-1}X'Y$

$$Y = \begin{pmatrix} 120 \\ 54 \\ 65 \\ 20 \\ 32 \end{pmatrix}, \quad X = \begin{pmatrix} 1 & 80 \\ 1 & 29 \\ 1 & 53 \\ 1 & 13 \\ 1 & 45 \end{pmatrix}, \quad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}$$

$$X'X = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix} = \begin{pmatrix} 5 & 220 \\ 220 & 12244 \end{pmatrix}, \qquad X'Y = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix} = \begin{pmatrix} 291 \\ 16311 \end{pmatrix}$$

$$(X'X)^{-1} = \frac{adj(X'X)}{|X'X|}, \qquad |X'X| = (12244)(5) - (220)(220) = 61220 - 48400 = 12820$$

$$(X'X)^{-1} = \frac{1}{12820}\begin{pmatrix} 12244 & -220 \\ -220 & 5 \end{pmatrix} = \begin{pmatrix} 0.955070 & -0.017160 \\ -0.017160 & 0.000390 \end{pmatrix}$$

$$\hat{\beta} = (X'X)^{-1}X'Y = \begin{pmatrix} 0.955070 & -0.017160 \\ -0.017160 & 0.000390 \end{pmatrix}\begin{pmatrix} 291 \\ 16311 \end{pmatrix} = \begin{pmatrix} -1.9714 \\ 1.3677 \end{pmatrix} = \begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \end{pmatrix}$$

$$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i = -1.9714 + 1.3677 x_i$$

(The small differences from Solution 1 come from rounding the entries of $(X'X)^{-1}$.)

Solution -3- // $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, $\hat{Y} = X\hat{\beta}$

$$\begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix} = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}\begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}, \qquad \begin{pmatrix} 291 \\ 16311 \end{pmatrix} = \begin{pmatrix} 5 & 220 \\ 220 & 12244 \end{pmatrix}\begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}$$

$$N = \begin{vmatrix} 5 & 220 \\ 220 & 12244 \end{vmatrix} = (5)(12244) - (220)(220) = 61220 - 48400 = 12820$$

$$N_0 = \begin{vmatrix} 291 & 220 \\ 16311 & 12244 \end{vmatrix} = (291)(12244) - (220)(16311) = 3563004 - 3588420 = -25416$$

$$N_1 = \begin{vmatrix} 5 & 291 \\ 220 & 16311 \end{vmatrix} = (5)(16311) - (220)(291) = 81555 - 64020 = 17535$$

$$\hat{\beta}_0 = \frac{N_0}{N} = \frac{-25416}{12820} = -1.9825, \qquad \hat{\beta}_1 = \frac{N_1}{N} = \frac{17535}{12820} = 1.3678$$

$$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i = -1.9825 + 1.3678 x_i$$
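A quick numerical cross-check of Example 1 in Python; all three methods should agree with this up to the rounding used in the hand computations:

```python
import numpy as np

# Verify Example 1: repair costs regressed on miles driven.
x = np.array([80.0, 29, 53, 13, 45])      # miles driven
y = np.array([120.0, 54, 65, 20, 32])     # repair costs
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)   # approximately [-1.9825, 1.3678], matching Cramer's rule
```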

H.W -1- // Mean yields of soybean plants (gms per plant) obtained in response to the
indicated levels of ozone exposure over the growing season. (Data courtesy of Dr. A. S.
Heagle, USDA and North Carolina State University).

X (Ozone, ppm) | Y (Yield, gm/plt)
0.02 | 242
0.07 | 237
0.11 | 231
0.15 | 201
$\sum x_i = 0.35$ | $\sum y_i = 911$

Find the estimated linear regression equation by using:

1. The original values of the observations.
2. The matrix model.
3. Cramer's rule.
4. Then predict the yield of soybean plants when the ozone level equals 0.8.

H.W. -2- // As a profit-maximizing monopolist, you face the demand curve $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$. In the past, you have set the following prices and sold the accompanying quantities:

Y (quantities) | 3 | 3 | 7 | 6 | 10 | 15 | 16 | 13 | 9 | 15 | 9 | 15 | 12
X (prices) | 18 | 16 | 17 | 12 | 15 | 15 | 4 | 12 | 11 | 6 | 8 | 10 | 7

Required // compute the ordinary least squares coefficients in the regression model above by using:
1. The original values of the observations.
2. The matrix model.
3. Cramer's rule.

2.5 Statistical tests with the ANOVA table and the coefficient of determination (R²)

1. In the ANOVA table, we decompose the total sum of squares (of the dependent variable Y around its mean) into two parts:
• the explained (or regression) sum of squares;
• the residual sum of squares.

Total Sum of Squares = Explained (Reg. Model) SS + Unexplained (Residual) SS

$$\sum (y_i - \bar{y})^2 = \sum (\hat{y}_i - \bar{y})^2 + \sum (y_i - \hat{y}_i)^2$$
$$SST = SSR + SSE$$
$$MSR = \frac{SSR}{1}, \qquad MSE = \frac{SSE}{n-2}, \qquad F = \frac{MSR}{MSE}$$
We compare the calculated value (Cal F) with the tabulated value Tab $F(1, n-2)$.

2. Coefficient of determination (R²): the determination coefficient is the percentage of the total deviation that is explained, i.e., the ratio of the explained sum of squares to the total sum of squares:
$$R^2 = \frac{\text{Explained sum of squares}}{\text{Total sum of squares}} = \frac{SSR}{SST} = \frac{\sum(\hat{y}_i - \bar{y})^2}{\sum(y_i - \bar{y})^2}$$
$$1 = R^2 + \frac{SSE}{SST}, \qquad R^2 = 1 - \frac{SSE}{SST}, \qquad 0 \leq R^2 \leq 1$$
SST SST

The remaining proportion of the deviations, $(1 - R^2)$, is due to unknown (omitted) factors not in the regression function; these are represented by the random error term that causes disturbances in the observations.
• When $R^2 = 0$ and $\hat{\beta}_1 = 0$, the estimated regression equation $\hat{y}_i$ coincides with the mean $\bar{y}$: $\hat{y}_i = \bar{y}$.
• When $R^2 = 1$, the estimated regression equation $\hat{y}_i$ coincides with the actual values $y_i$: $\hat{y}_i = y_i$.
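A minimal sketch of this decomposition in Python, assuming `y` and `y_hat` are the observed and fitted values of a simple regression:

```python
import numpy as np

# SST = SSR + SSE decomposition, F statistic, and R^2 for simple regression.
def anova_simple(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    sst = np.sum((y - y.mean())**2)
    ssr = np.sum((y_hat - y.mean())**2)
    sse = np.sum((y - y_hat)**2)
    msr, mse = ssr / 1, sse / (n - 2)
    return {"SST": sst, "SSR": ssr, "SSE": sse,
            "F": msr / mse, "R2": ssr / sst}
```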

2.6 Standard errors

The standard error of an estimated parameter measures the degree of confidence with which the estimate can be regarded as a good approximation of the population parameter. This test helps determine whether the estimates $\hat{\beta}_0, \hat{\beta}_1$ are significantly different from zero.
The test hypotheses are:
$$H_0: \beta_i = 0 \quad \text{(null hypothesis)}$$
$$H_1: \beta_i \neq 0 \quad \text{(alternative hypothesis)}, \qquad i = 0, 1$$
We then calculate the standard deviation (standard error) of the parameters as follows:
1. Standard deviation of the intercept:

$$S(\hat{\beta}_0) = \sqrt{V(\hat{\beta}_0)} = \sqrt{\sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right)} = \sqrt{\frac{\sigma^2 \sum x_i^2}{n S_{xx}}}$$
2. Standard deviation of the slope:
$$S(\hat{\beta}_1) = \sqrt{V(\hat{\beta}_1)} = \sqrt{\frac{\sigma^2}{S_{xx}}}, \qquad \hat{\sigma}^2 = MSE = \frac{SSE}{n-2} = \frac{\sum e_i^2}{n-2}$$
$$SSE = SST - SSR = S_{yy} - \hat{\beta}_1 S_{xy} = S_{yy} - \hat{\beta}_1^2 S_{xx}$$

We then compare the standard deviation (error) of $\hat{\beta}_0$ and $\hat{\beta}_1$ with half the numerical value of each estimated parameter.
If $S(\hat{\beta}_i) < \dfrac{\hat{\beta}_i}{2}$, we reject $H_0$: the estimated parameter is different from zero, so it has a statistically significant effect.
If $S(\hat{\beta}_i) > \dfrac{\hat{\beta}_i}{2}$, we do not reject $H_0$: the estimated parameter is not different from zero, so it has no statistically significant effect.
Note // for fixed $n$ we can decrease $S(\hat{\beta}_1)$ or $S(\hat{\beta}_0)$ by expanding the range of the sampled $X$ values.
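A sketch of these standard errors in Python, assuming `x`, `y` are the data and `b0`, `b1` the OLS estimates:

```python
import numpy as np

# Standard errors of the OLS intercept and slope (section 2.6 formulas).
def std_errors(x, y, b0, b1):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = np.sum((x - x.mean())**2)
    sse = np.sum((y - b0 - b1 * x)**2)
    s2 = sse / (n - 2)                     # MSE, the estimate of sigma^2
    se_b1 = np.sqrt(s2 / sxx)
    se_b0 = np.sqrt(s2 * (1 / n + x.mean()**2 / sxx))
    return se_b0, se_b1
```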

Example // from the following data:

Y (consumption) | 4 | 6 | 7 | 8 | 11 | 15 | 18 | 22
X (income) | 5 | 8 | 10 | 12 | 14 | 17 | 20 | 25

1. Estimate the simple linear regression equation and interpret it.
2. Compute the determination coefficient and the ANOVA table, then explain the results.
3. Test the significance of the parameters using the standard error method, and explain the results.

Solution // 1. $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, $\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i$, $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$, $\hat{\beta}_1 = \dfrac{\sum x_i y_i - n\bar{x}\bar{y}}{\sum x_i^2 - n\bar{x}^2}$

$y_i$ | $x_i$ | $x_i y_i$ | $x_i^2$
4 | 5 | 20 | 25
6 | 8 | 48 | 64
7 | 10 | 70 | 100
8 | 12 | 96 | 144
11 | 14 | 154 | 196
15 | 17 | 255 | 289
18 | 20 | 360 | 400
22 | 25 | 550 | 625
$\sum y_i = 91$ | $\sum x_i = 111$ | $\sum x_i y_i = 1553$ | $\sum x_i^2 = 1843$

$\bar{y} = 11.375$, $\bar{x} = 13.875$, $\bar{x}^2 = 192.5156$

$$\hat{\beta}_1 = \frac{\sum x_i y_i - n\bar{x}\bar{y}}{\sum x_i^2 - n\bar{x}^2} = \frac{1553 - 8(13.875)(11.375)}{1843 - 8(192.5156)} = \frac{1553 - 1262.625}{1843 - 1540.1248} = \frac{290.375}{302.875} = 0.9587$$

$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} = 11.375 - (0.9587)(13.875) = -1.9270$$

$$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i = -1.927 + 0.9587 x_i$$
The SPSS output below shows the estimated parameters.

Coefficients
Model | B (Unstandardized) | Std. Error | Beta (Standardized) | t | Sig.
(Constant) | -1.927 | 0.834 | | -2.312 | 0.060
X | 0.959 | 0.055 | 0.990 | 17.452 | 0.000

$$\hat{y}_i = -1.927 + 0.959 x_i$$

When the explanatory variable $x_i$ increases by one unit, the dependent variable $y_i$ increases by 0.959 units (there is a positive relationship between Y and X).

Model Summary
R | R Square | Adjusted R Square | Std. Error of the Estimate
0.990 | 0.981 | 0.977 | 0.956

ANOVA
Source | Sum of Squares | df | Mean Square | F | Sig.
Regression | 278.391 | 1 | 278.391 | 304.579 | 0.000
Residual | 5.484 | 6 | 0.914 | |
Total | 283.875 | 7 | | |

2. $n = 8$, $\sum x_i = 111$, $\sum y_i = 91$, $\sum x_i y_i = 1553$, $\bar{x} = 13.875$, $\bar{y} = 11.375$,
$\sum y_i^2 = 1319$, $\sum x_i^2 = 1843$, intercept $\hat{\beta}_0 = -1.927$, slope $\hat{\beta}_1 = 0.959$.

$$SSR = \hat{\beta}_1 S_{xy} = (0.9587)(290.375) = 278.38 \approx 278.391 \quad (\text{also } SSR = \hat{\beta}_1^2 S_{xx})$$
$$SST = \sum y_i^2 - n\bar{y}^2 = 1319 - 8(11.375)^2 = 283.875$$
$$SSE = SST - SSR = 283.875 - 278.391 = 5.484$$
$$R^2 = \frac{SSR}{SST} = \frac{278.391}{283.875} = 0.9807 \approx 0.981$$

Hence about 98% of the changes in $y_i$ are caused by the explanatory variable $x_i$ included in the model, and the remaining proportion of the deviations (2%) is due to unknown (omitted) factors not taken into account in the regression function, represented by the random error term.

H 0 : 0 = 0 H 0 : 1 = 0
3- ,
H1 :  0  0 H 1 : 1  0

2
2 1 x2 Se
S ( ˆ0 ) = S e ( + ) = 0.834 , S ( ˆ1 ) = ) = 0.055
n Sxx Sxx

ˆ0 − 1.927 ˆ 0.959


= = −0.963 , 1 = = 0.4795
2 2 2 2

ˆ
S ( ˆ0 )  0 0.834  −0.963
2

Then we not reject null hypothesis ( H 0 :  0 = 0 ), the parameter (  0 ) is not different from
zero, then it has not statistically significant effect that means the regression line pass from
origin point, then we can say that the relation between ( X & Y ) takes the following form:
yˆ i = 0.959 xi

ˆ
S ( ˆ1 )  1 , 0.055  0.4795
2

Then we reject null hypothesis ( H 0 : 1 = 0 ), this means the slope ( 1 ) of regression line not
equal to zero, the line will be not parallel for X-axis that indicates relation between ( X & Y )
statistically significant.
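A numerical check of this example in Python against the SPSS output (values match up to rounding):

```python
import numpy as np

# Verify the consumption-income example.
x = np.array([5.0, 8, 10, 12, 14, 17, 20, 25])   # income
y = np.array([4.0, 6, 7, 8, 11, 15, 18, 22])     # consumption
n = len(x)
b1 = (np.sum(x * y) - n * x.mean() * y.mean()) / \
     (np.sum(x**2) - n * x.mean()**2)
b0 = y.mean() - b1 * x.mean()
sxx = np.sum((x - x.mean())**2)
mse = np.sum((y - b0 - b1 * x)**2) / (n - 2)
print(b0, b1)                                    # approx -1.927, 0.959
print(np.sqrt(mse * (1 / n + x.mean()**2 / sxx)),
      np.sqrt(mse / sxx))                        # approx 0.834, 0.055
```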

H.W. -1- // An article in Technometrics by S. C. Narula and J. F. Wellington ["Prediction, Linear Regression, and a Minimum Sum of Relative Errors" (Vol. 19, 1977)] presents data on the selling price and annual taxes for 24 houses.

a. Assuming that a simple linear regression model is appropriate, obtain the least squares fit relating selling price to taxes paid. What is the estimate of $\sigma^2$?
b. Find the mean selling price given that the taxes paid are $x = 7.50$.
c. Calculate the fitted value of $y$ corresponding to $x = 5.8980$ and find the corresponding residual.
d. Calculate the fitted $\hat{y}_i$ for each value of $x_i$ used to fit the model.
e. Calculate the determination coefficient and the ANOVA table, then explain the results.
f. Test the significance of the parameters using the standard error method, and explain the results.

H.W. -2- // from the following data:

Y (sales) | 40 | 50 | 50 | 70 | 65 | 65 | 80
X (advertising) | 10 | 20 | 30 | 40 | 50 | 60 | 70

1. Estimate the simple linear regression equation and interpret it.
2. Compute the determination coefficient and the ANOVA table, then explain the results.
3. Test the significance of the parameters using the standard error method, and explain the results.

2.7 Properties of the least squares estimators

The least squares estimators have the BLUE property: they are the Best Linear Unbiased Estimators of $(\beta_0, \beta_1)$, provided that $e_i$ satisfies the general assumptions:
$$e_i \sim N(0, \sigma^2)$$
$$Cov(e_i, e_j) = E(e_i e_j) = 0, \quad i \neq j$$
$$Cov(e_i, x_i) = E(e_i x_i) = 0$$
This result, together with the conditions required for it, is the Gauss-Markov least squares theorem. Its importance is that it proves that, among all linear unbiased estimators, the OLS estimators have the BLUE property. To prove the Gauss-Markov theorem we must establish the following properties:
1. Linearity property

For $\hat{\beta}_1$: let
$$k_i = \frac{x_i - \bar{x}}{\sum (x_i - \bar{x})^2}$$
$$\hat{\beta}_1 = \sum k_i y_i = k_1 y_1 + k_2 y_2 + k_3 y_3 + \dots + k_n y_n$$
so $\hat{\beta}_1$ is a linear function of the $y_i$.

For $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$:
$$\hat{\beta}_0 = \frac{\sum y_i}{n} - \bar{x}\sum k_i y_i = \sum\left(\frac{1}{n} - \bar{x}k_i\right)y_i$$
Since $k_i$, $\bar{x}$, and $n$ are constants, $\hat{\beta}_0$ depends only on the $y_i$ values, and each $y_i$ enters with power one. Let
$$d_i = \frac{1}{n} - \bar{x}k_i$$
$$\hat{\beta}_0 = \sum d_i y_i = d_1 y_1 + d_2 y_2 + d_3 y_3 + \dots + d_n y_n$$
so $\hat{\beta}_0$ is a linear function of the $y_i$.

2. Unbiasedness property

For $\hat{\beta}_1$ (using $\sum k_i = 0$ and $\sum k_i x_i = 1$):
$$\hat{\beta}_1 = \sum k_i y_i = \sum k_i(\beta_0 + \beta_1 x_i + e_i) = \beta_0\sum k_i + \beta_1\sum k_i x_i + \sum k_i e_i = \beta_0(0) + \beta_1(1) + \sum k_i e_i$$
$$= \beta_1 + \sum k_i e_i$$
$$E(\hat{\beta}_1) = \beta_1 + \sum k_i E(e_i), \qquad E(e_i) = 0$$
$$E(\hat{\beta}_1) = \beta_1$$
so $\hat{\beta}_1$ is an unbiased estimator of $\beta_1$.

For $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$:
$$\hat{\beta}_0 = \frac{\sum y_i}{n} - \bar{x}\sum k_i y_i = \sum\left(\frac{1}{n} - \bar{x}k_i\right)y_i$$
$$E(\hat{\beta}_0) = \sum\left(\frac{1}{n} - \bar{x}k_i\right)E(y_i) = \sum\left(\frac{1}{n} - \bar{x}k_i\right)(\beta_0 + \beta_1 x_i)$$
$$= \frac{n\beta_0}{n} - \bar{x}\beta_0\sum k_i + \beta_1\frac{\sum x_i}{n} - \bar{x}\beta_1\sum k_i x_i$$
$$= \beta_0 - \bar{x}\beta_0(0) + \beta_1\bar{x} - \bar{x}\beta_1(1) = \beta_0 + \beta_1\bar{x} - \beta_1\bar{x}$$
$$E(\hat{\beta}_0) = \beta_0$$
so $\hat{\beta}_0$ is an unbiased estimator of $\beta_0$.

3. Best estimator (minimum variance) property:

This property is the reason for the popularity of the OLS method: its estimators are the best (most efficient) among all unbiased linear estimators obtained by other methods, which means their variance is smallest. To illustrate, suppose we have two unbiased estimators; if one of them has a greater variance than the other, it is considered relatively less efficient.

Proof // Minimum variance (best estimator) property

1. For $\hat{\beta}_1$: let $\hat{\hat{\beta}}_1$ be another linear unbiased estimator of the parameter $\beta_1$:
$$\hat{\hat{\beta}}_1 = \sum c_i y_i$$
$$\hat{\hat{\beta}}_1 = \sum c_i(\beta_0 + \beta_1 x_i + e_i) = \beta_0\sum c_i + \beta_1\sum c_i x_i + \sum c_i e_i, \qquad \sum c_i = 0, \quad \sum c_i x_i = 1$$
$$= \beta_0(0) + \beta_1(1) + \sum c_i e_i = \beta_1 + \sum c_i e_i$$
$$E(\hat{\hat{\beta}}_1) = \beta_1 + \sum c_i E(e_i) = \beta_1$$
so $\hat{\hat{\beta}}_1$ is an unbiased estimator of $\beta_1$.
$$Var(\hat{\hat{\beta}}_1) = E[\hat{\hat{\beta}}_1 - E(\hat{\hat{\beta}}_1)]^2 = E(\hat{\hat{\beta}}_1 - \beta_1)^2 = E\left(\beta_1 + \sum c_i e_i - \beta_1\right)^2 = E\left(\sum c_i e_i\right)^2$$
$$= E\left(\sum c_i^2 e_i^2 + 2\sum_{i \neq j} c_i c_j e_i e_j\right) = \sum c_i^2 E(e_i^2) + 2\sum_{i \neq j} c_i c_j E(e_i e_j), \qquad E(e_i e_j) = 0$$
Writing $c_i = k_i + d_i$ and using $\sum k_i d_i = 0$:
$$= \sigma^2\sum c_i^2 = \sigma^2\sum(k_i + d_i)^2 = \sigma^2\left(\sum k_i^2 + 2\sum k_i d_i + \sum d_i^2\right) = \sigma^2\sum k_i^2 + \sigma^2\sum d_i^2$$
$$= \frac{\sigma^2}{S_{xx}} + \sigma^2\sum d_i^2 = Var(\hat{\beta}_1) + \text{positive quantity}$$
$$Var(\hat{\hat{\beta}}_1) \geq Var(\hat{\beta}_1)$$
so $\hat{\beta}_1$ is BLUE.
Proof// For ̂ 0
ˆ0 = y − 1 x

ˆ0 =
y 1
− x  k i yi =  ( − xk i ) yi
i

n n
Let ˆˆ0 =  ( − x ci ) y i be another estimator for  0
1
n
ˆ 1 1
E ( ˆ0 ) =  ( − x ci )E ( y i ) =  ( − x ci )(  0 + 1 xi )
n n
n
=  0 −  0 x  ci +  1
 xi −  x c x
1  i i
n n
ˆ ˆ
So that ̂ 0 be unbiased estimate E ( ˆ0 ) =  0 must be  ci = 0 , c xi i =1
So that  c = 0 must be  d = 1 because  k = 0
i i i

So that  c x = 1 must be  d x = 0 because  k x


i i i i i i =1
ˆ
 E ( ˆ0 ) =  0 + 1 x − 1 x
ˆ
E ( ˆ0 ) =  0
ˆ
̂ 0 is unbiased estimator for  0
ˆ ˆ ˆ ˆ
Var( ˆ0 ) = E ( ˆ0 − E ( ˆ0 )) 2 = E ( ˆ0 −  0 ) 2
1 1
Var ( ( − x ci ) y i ) =  ( − x ci ) 2 Var ( y i )
n n

23
=  2 (
1 ci 2 n  ci + x 2 c 2 )
n2
− 2 x
n
+ x 2 2
c i ) =  (
n2
− 2 x
n
i
1
=  2 ( + x 2  (k i + d i ) )
2

, ki di = 0
1
=  2 ( + x 2  k i2 + x 2  d i2 + 2 x 2  k i d i )
n
1
=  2 ( + x 2  k i2 + x 2  d i2 )
n
1

1
=  2 ( + x 2  k i2 ) + x 2 2  d i2 , k i2 =
n Sxx
1 x2
= 2( + ) + x 2 2  d i2
n Sxx
= Var( ˆ0 ) + Positive Quantity
ˆ
Var( ˆ0 ) = Var( ˆ0 ) + Positive Quantity
ˆ
Var( ˆ0 )  Var( ˆ0 )
̂ 0 is BLUE
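A Monte Carlo sketch of the Gauss-Markov result in Python: the OLS slope (weights $k_i$) is compared with another linear unbiased estimator built from different weights $c_i$ satisfying $\sum c_i = 0$, $\sum c_i x_i = 1$ (here, the two-endpoint slope). Both are unbiased, but OLS has the smaller variance:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1, 10, 20)
k = (x - x.mean()) / np.sum((x - x.mean())**2)      # OLS weights k_i
c = np.zeros_like(x)                                # alternative weights c_i
c[0], c[-1] = -1 / (x[-1] - x[0]), 1 / (x[-1] - x[0])  # two-point slope

ols, alt = [], []
for _ in range(20000):
    y = 1.0 + 2.0 * x + rng.normal(0, 1, x.size)    # true b0=1, b1=2
    ols.append(np.sum(k * y))
    alt.append(np.sum(c * y))
print(np.mean(ols), np.mean(alt))   # both near 2 (unbiased)
print(np.var(ols), np.var(alt))     # Var(OLS slope) is the smaller one
```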

2.8 The general linear econometric model (GLEM)

This model contains more than two variables, so we use matrix algebra in the analysis:
$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_k X_{ik} + e_i, \qquad i = 1, 2, \dots, n \ (\text{sample size}), \quad j = 1, 2, \dots, k \ (\text{number of explanatory variables})$$
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}_{n \times 1} = \begin{pmatrix} 1 & x_{11} & x_{12} & \dots & x_{1k} \\ 1 & x_{21} & x_{22} & \dots & x_{2k} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{nk} \end{pmatrix}_{n \times (k+1)} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{pmatrix}_{(k+1) \times 1} + \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}_{n \times 1}$$
$$Y = X\beta + e$$
Where:
$Y$: the vector of observed values of the dependent variable, of order $(n \times 1)$.
$X$: the matrix of explanatory variables, of order $(n \times (k+1))$.
$\beta$: the vector of unknown model coefficients, of order $((k+1) \times 1)$.
$e$: the vector of random errors, of order $(n \times 1)$.
The main assumptions on the random error $e_i$ in the general linear model are the same as those on $e_i$ in the simple linear model:
$$e \sim iid\,N(0, \sigma^2 I_n), \qquad E(e_i e_j) = 0 \ \text{for all } i \neq j, \qquad E(e_i X_j) = 0$$
$\beta_k$: measures the change in the mean value of $Y$ per unit change in $X_k$, holding the remaining explanatory variables constant.

For the case of two explanatory variables, using the OLS method:

1. Normal equations method:

If $k = 2$:
$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + e_i$$
$$e_i = Y_i - \beta_0 - \beta_1 X_{i1} - \beta_2 X_{i2}$$
$$\sum e_i^2 = \sum (Y_i - \beta_0 - \beta_1 X_{i1} - \beta_2 X_{i2})^2$$
$$\frac{\partial \sum e_i^2}{\partial \beta_0} = -2\sum (Y_i - \beta_0 - \beta_1 X_{i1} - \beta_2 X_{i2}) = 0$$
$$\frac{\partial \sum e_i^2}{\partial \beta_1} = -2\sum (Y_i - \beta_0 - \beta_1 X_{i1} - \beta_2 X_{i2})X_{i1} = 0$$
$$\frac{\partial \sum e_i^2}{\partial \beta_2} = -2\sum (Y_i - \beta_0 - \beta_1 X_{i1} - \beta_2 X_{i2})X_{i2} = 0$$

This gives the normal equations:
$$\sum Y_i = n\hat{\beta}_0 + \hat{\beta}_1\sum X_{i1} + \hat{\beta}_2\sum X_{i2} \quad \dots (1)$$
$$\sum X_{i1}Y_i = \hat{\beta}_0\sum X_{i1} + \hat{\beta}_1\sum X_{i1}^2 + \hat{\beta}_2\sum X_{i1}X_{i2} \quad \dots (2)$$
$$\sum X_{i2}Y_i = \hat{\beta}_0\sum X_{i2} + \hat{\beta}_1\sum X_{i1}X_{i2} + \hat{\beta}_2\sum X_{i2}^2 \quad \dots (3)$$

In matrix form:
$$\begin{pmatrix} \sum Y_i \\ \sum X_{i1}Y_i \\ \sum X_{i2}Y_i \end{pmatrix} = \begin{pmatrix} n & \sum X_{i1} & \sum X_{i2} \\ \sum X_{i1} & \sum X_{i1}^2 & \sum X_{i1}X_{i2} \\ \sum X_{i2} & \sum X_{i1}X_{i2} & \sum X_{i2}^2 \end{pmatrix}\begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \hat{\beta}_2 \end{pmatrix}$$

We use Cramer's method to find the coefficients $(\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2)$ as follows. Let:
$$N = \begin{vmatrix} n & \sum X_{i1} & \sum X_{i2} \\ \sum X_{i1} & \sum X_{i1}^2 & \sum X_{i1}X_{i2} \\ \sum X_{i2} & \sum X_{i1}X_{i2} & \sum X_{i2}^2 \end{vmatrix}, \qquad N_0 = \begin{vmatrix} \sum Y_i & \sum X_{i1} & \sum X_{i2} \\ \sum X_{i1}Y_i & \sum X_{i1}^2 & \sum X_{i1}X_{i2} \\ \sum X_{i2}Y_i & \sum X_{i1}X_{i2} & \sum X_{i2}^2 \end{vmatrix}$$
$$N_1 = \begin{vmatrix} n & \sum Y_i & \sum X_{i2} \\ \sum X_{i1} & \sum X_{i1}Y_i & \sum X_{i1}X_{i2} \\ \sum X_{i2} & \sum X_{i2}Y_i & \sum X_{i2}^2 \end{vmatrix}, \qquad N_2 = \begin{vmatrix} n & \sum X_{i1} & \sum Y_i \\ \sum X_{i1} & \sum X_{i1}^2 & \sum X_{i1}Y_i \\ \sum X_{i2} & \sum X_{i1}X_{i2} & \sum X_{i2}Y_i \end{vmatrix}$$
$$\hat{\beta}_0 = \frac{N_0}{N}, \qquad \hat{\beta}_1 = \frac{N_1}{N}, \qquad \hat{\beta}_2 = \frac{N_2}{N}$$
2. Matrix algebra method:
$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} + \dots + \beta_k X_{ik} + e_i, \qquad i = 1, 2, \dots, n, \quad j = 1, 2, \dots, k$$
$$Y = X\beta + e$$
We have the same assumptions about the random error here:
$$e \sim N(0, \sigma^2 I_n), \qquad E(e_i e_j) = 0 \ \text{for all } i \neq j, \qquad E(e_i X_i) = 0$$

Where:
$$E(e) = \begin{pmatrix} E(e_1) \\ E(e_2) \\ \vdots \\ E(e_n) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = 0$$

$$Var\text{-}Cov(e) = E(ee') = E\left[\begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}\begin{pmatrix} e_1 & e_2 & \dots & e_n \end{pmatrix}\right] = \begin{pmatrix} E(e_1^2) & E(e_1 e_2) & \dots & E(e_1 e_n) \\ E(e_2 e_1) & E(e_2^2) & \dots & E(e_2 e_n) \\ \vdots & \vdots & & \vdots \\ E(e_n e_1) & E(e_n e_2) & \dots & E(e_n^2) \end{pmatrix}$$

$$= \begin{pmatrix} \sigma_1^2 & 0 & \dots & 0 \\ 0 & \sigma_2^2 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & \sigma_n^2 \end{pmatrix}, \qquad \sigma_1^2 = \sigma_2^2 = \dots = \sigma_n^2 = \sigma^2$$

$$Var\text{-}Cov(e) = E(ee') = \sigma^2\begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix} = \sigma^2 I_n$$

Also, using matrices we obtain the coefficient estimates for the GLM:
$$Y = X\beta + e, \qquad e = Y - X\beta$$
$$e'e = (Y - X\beta)'(Y - X\beta) = Y'Y - \beta'X'Y - Y'X\beta + \beta'X'X\beta$$
Since $\beta'X'Y = Y'X\beta$ (both are of order $1 \times 1$):
$$e'e = Y'Y - 2\beta'X'Y + \beta'X'X\beta$$
$$\frac{\partial e'e}{\partial \beta} = -2X'Y + 2X'X\beta = 0$$
$$X'Y = X'X\hat{\beta} \;\Rightarrow\; \hat{\beta} = (X'X)^{-1}X'Y = \begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \vdots \\ \hat{\beta}_k \end{pmatrix}$$
Notes:

• If we have one explanatory variable:
$$X'X = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}, \qquad X'Y = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix}$$
• If we have two explanatory variables:
$$X'X = \begin{pmatrix} n & \sum x_{i1} & \sum x_{i2} \\ \sum x_{i1} & \sum x_{i1}^2 & \sum x_{i1}x_{i2} \\ \sum x_{i2} & \sum x_{i1}x_{i2} & \sum x_{i2}^2 \end{pmatrix}, \qquad X'Y = \begin{pmatrix} \sum y_i \\ \sum x_{i1}y_i \\ \sum x_{i2}y_i \end{pmatrix}$$

Unbiasedness property:
$$\hat{\beta} = (X'X)^{-1}X'Y, \qquad Y = X\beta + e$$
$$= (X'X)^{-1}X'(X\beta + e) = \beta + (X'X)^{-1}X'e \quad \dots (1)$$
$$E(\hat{\beta}) = \beta + (X'X)^{-1}X'E(e), \qquad E(e) = 0$$
$$E(\hat{\beta}) = \beta$$
Therefore $\hat{\beta}$ is an unbiased estimator vector of $\beta$.

The variance-covariance matrix of the estimated coefficient vector under the OLS method is found as follows:
$$Var\text{-}Cov(\hat{\beta}) = E[\hat{\beta} - E(\hat{\beta})][\hat{\beta} - E(\hat{\beta})]' = E[(\hat{\beta} - \beta)(\hat{\beta} - \beta)']$$
From (1), $\hat{\beta} = \beta + (X'X)^{-1}X'e$, so $\hat{\beta} - \beta = (X'X)^{-1}X'e$:
$$Var\text{-}Cov(\hat{\beta}) = E\left[(X'X)^{-1}X'e\,\left((X'X)^{-1}X'e\right)'\right] = E\left[(X'X)^{-1}X'ee'X(X'X)^{-1}\right]$$
$$= (X'X)^{-1}X'E(ee')X(X'X)^{-1}, \qquad E(ee') = \sigma^2 I_n$$
$$= \sigma^2(X'X)^{-1}X'X(X'X)^{-1}, \qquad X'X(X'X)^{-1} = I$$
$$= \sigma^2(X'X)^{-1}$$

The population variance estimate:
$$\hat{\sigma}^2 = \frac{\sum e_i^2}{n - k - 1} = \frac{e'e}{n - k - 1}, \qquad k = \text{number of explanatory variables}$$
$$e = Y - \hat{Y} = Y - X\hat{\beta}$$
$$\hat{\sigma}_e^2 = \frac{(Y - X\hat{\beta})'(Y - X\hat{\beta})}{n - k - 1}$$
$$\hat{\sigma}_e^2 = \frac{Y'Y - \hat{\beta}'X'Y - Y'X\hat{\beta} + \hat{\beta}'X'X\hat{\beta}}{n - k - 1}, \qquad \hat{\beta} = (X'X)^{-1}X'Y$$
$$= \frac{Y'Y - 2\hat{\beta}'X'Y + \hat{\beta}'X'X(X'X)^{-1}X'Y}{n - k - 1}, \qquad X'X(X'X)^{-1} = I$$
$$= \frac{Y'Y - 2\hat{\beta}'X'Y + \hat{\beta}'X'Y}{n - k - 1}$$
$$\hat{\sigma}_e^2 = \frac{Y'Y - \hat{\beta}'X'Y}{n - k - 1}, \qquad Y'Y = \sum Y_i^2$$
$$S_e^2 = \hat{\sigma}_e^2 = \frac{SSE}{n - k - 1} = \frac{Y'Y - \hat{\beta}'X'Y}{n - k - 1}$$

2.9 The t-test and the F-test

1. The t-test
We assume that the random error $e$ is normally distributed with mean vector $0$ and variance-covariance matrix $\sigma^2 I_n$. Under the assumptions of the general linear model, each element of $\hat{\beta}$ satisfies
$$\hat{\beta}_j \sim N(\beta_j, \sigma^2 c_{jj}), \qquad j = 0, 1, 2, \dots, k$$
where $c_{jj}$ denotes the diagonal elements of $(X'X)^{-1}$:
$$Var\text{-}Cov(\hat{\beta}) = \sigma^2(X'X)^{-1}, \qquad Var(\hat{\beta}_j) = \sigma^2 c_{jj}$$
$$\sum e_i^2 = e'e \sim \chi^2_{(n-k-1)}$$
t-test hypotheses:
$$H_0: \beta_j = 0, \qquad H_1: \beta_j \neq 0$$
$$\text{Cal } t = \frac{\hat{\beta}_j - \beta_j}{S(\hat{\beta}_j)} = \frac{\hat{\beta}_j}{S(\hat{\beta}_j)} = \frac{\hat{\beta}_j}{\sqrt{\hat{\sigma}^2 c_{jj}}}, \qquad j = 0, 1, 2, \dots, k$$
If $|\text{Cal } t| > \text{Tab } t$, we reject $H_0$ ($\beta_j \neq 0$): the explanatory variable has a significant influence on the dependent variable.
If $|\text{Cal } t| < \text{Tab } t$, we do not reject $H_0$ ($\beta_j = 0$): the explanatory variable $X_j$ has no significant influence on the dependent variable.

2. The F-test

The F-test stems from the construction of the ANOVA table, which depends on two types of deviations (the explained SSR and the unexplained SSE).

H 0 : 1 =  2 = ... =  k = 0
H 1 : 1   2  ...   k  0

H 0 : There is no any explanatory variable effect on dependent variable ( Y ).

H 1 : There is at least one explanatory variable has importance in interpretation of getting


change in dependent variable ( Y ) .

SST =  (Yi − Y ) =  Yi 2 − nY = Y Y − nY
2 2 2
SST = SSR + SSE ,

SSE = Y Y − ̂ X Y
SSR = SSR ( X 1 , X 2 ,.., X k ) = ̂ X Y
SSR( X 1 , X 2 ,.., X k / X 0 ) = SSR( X 1 , X 2 ,..., X k ) − SSR( X 0 )
2
SSR ( X 0 ) = nY

 SSR ( X 1 , X 2 ,.., X k / X 0 ) = ˆ X Y − nY
2

(ANOVA Table)
S.O.V d.f SS MS F
ˆ X Y − nY ( ˆ X Y − nY ) / k
2 2
Re g.( X 1 , X 2 ,.., X k / X 0 ) k MSR
CalF =
MSE
Error(Re sidual) n − k −1 Y Y − ̂ X Y (Y Y − ˆ X Y ) /( n − k − 1)
Total n −1 Y Y − nY
2

The determination coefficient (R²) in the general linear model is the basic indicator for evaluating the significance of the assumed relation between the dependent variable and the explanatory variables. It is called the multiple determination coefficient and is denoted by $R^2_{(Y, X_1, X_2, \dots, X_k)}$, where $X_1, X_2, \dots, X_k$ are the $k$ explanatory variables.
$$R^2 = \frac{SSR}{SST} \;\Rightarrow\; SSR = SST \cdot R^2, \qquad SSE = SST - SSR$$
$$SSE = SST - SST \cdot R^2 = SST(1 - R^2)$$

(ANOVA table in terms of R²) (uncorrected)
S.O.V | d.f | SS | MS
Reg. $(X_1, \dots, X_k)$ | $k$ | $Y'Y(R^2)$ | $SSR/k$
Error (Residual) | $n-k-1$ | $Y'Y(1-R^2)$ | $SSE/(n-k-1)$
Total | $n-1$ | |

$$\text{Cal } F = \frac{MSR}{MSE} = \frac{Y'Y(R^2)/k}{Y'Y(1-R^2)/(n-k-1)} = \frac{R^2/k}{(1-R^2)/(n-k-1)}$$
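A sketch of the t- and F-tests in Python, assuming `X` (with intercept column), `y`, `beta_hat`, and `var_cov` computed as in the earlier GLEM sketch:

```python
import numpy as np
from scipy import stats

# t statistics per coefficient and the overall F test for a fitted GLM.
def t_and_f(X, y, beta_hat, var_cov, alpha=0.05):
    n, p = X.shape                       # p = k + 1 (intercept included)
    k = p - 1
    t_stats = beta_hat / np.sqrt(np.diag(var_cov))
    t_crit = stats.t.ppf(1 - alpha / 2, n - k - 1)
    sst = np.sum((y - y.mean())**2)
    sse = np.sum((y - X @ beta_hat)**2)
    ssr = sst - sse
    F = (ssr / k) / (sse / (n - k - 1))
    F_crit = stats.f.ppf(1 - alpha, k, n - k - 1)
    return t_stats, t_crit, F, F_crit
```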

2.10 The adjusted determination coefficient ($\bar{R}^2$)

The adjusted determination coefficient is used as an alternative to the ordinary determination coefficient, whose value becomes larger whenever a new variable is added to the model, even if the added variable has no importance (an irrelevant variable). $R^2$ is a non-decreasing function of the number of explanatory variables: an additional X variable will not decrease $R^2$. Because $R^2$ will usually increase as more explanatory variables are added to the model, it does not reflect the real influence of the explanatory variables on the dependent variable and is not a good way to compare models. Therefore, we resort to the adjusted determination coefficient to remove this bias; this coefficient takes into account the degrees of freedom of both the error (SSE) and the total (SST).

$$SSR = SST - SSE$$
$$R^2 = \frac{SSR}{SST} = \frac{SST - SSE}{SST} = 1 - \frac{SSE}{SST}$$
$$\bar{R}^2 = 1 - \frac{SSE/(n-k-1)}{SST/(n-1)} = 1 - \frac{(n-1)}{(n-k-1)}\frac{SSE}{SST} = 1 - \frac{(n-1)(1-R^2)SST}{(n-k-1)SST} = 1 - \frac{(n-1)(1-R^2)}{(n-k-1)}$$
Notes:
• If $k \geq 1$, then $\bar{R}^2 \leq R^2$; thus as the number of X variables increases, $\bar{R}^2$ becomes less than $R^2$.
• $\bar{R}^2$ can be negative.

Example -1- // Klein and Goldberger attempted to fit the following regression model to the U.S. economy:
$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} + e_i$$
where $Y$ = consumption, $X_1$ = wage income, $X_2$ = nonwage, nonfarm income, and $X_3$ = farm income. Having obtained estimates of $\beta_2$ and $\beta_3$ as $\beta_2 = 0.75\beta_1$ and $\beta_3 = 0.625\beta_1$, they then reformulated their consumption function as:
$$Y_i = \beta_0 + \beta_1(X_{i1} + 0.75X_{i2} + 0.625X_{i3}) + e_i = \beta_0 + \beta_1 Z_i + e_i, \qquad Z_i = X_{i1} + 0.75X_{i2} + 0.625X_{i3}$$
Find:
1. The values of the variable Z.
2. Fit the modified model to the data in the table below, obtaining estimates of $\beta_0$ and $\beta_1$.
Sol // 1. $Y_i = \beta_0 + \beta_1 Z_i + e_i$

$Y$ | $X_{i1}$ | $X_{i2}$ | $X_{i3}$ | $Z_i$ | $Z_i Y_i$ | $Y_i^2$ | $Z_i^2$
62.8 | 43.41 | 17.1 | 3.96 | 58.71 | 3686.988 | 3943.84 | 3446.8641
65 | 46.44 | 18.65 | 5.48 | 63.85 | 4150.25 | 4225 | 4076.8225
63.9 | 44.35 | 17.09 | 4.37 | 59.9 | 3827.61 | 4083.21 | 3588.01
67.5 | 47.82 | 19.28 | 4.51 | 65.1 | 4394.25 | 4556.25 | 4238.01
71.3 | 51.02 | 23.24 | 4.88 | 71.5 | 5097.95 | 5083.69 | 5112.25
76.6 | 58.71 | 28.11 | 6.37 | 83.77 | 6416.782 | 5867.56 | 7017.4129
86.3 | 87.69 | 30.29 | 8.96 | 116 | 10010.8 | 7447.69 | 13456
95.7 | 76.73 | 28.26 | 9.76 | 104 | 9952.8 | 9158.49 | 10816
98.3 | 75.91 | 27.91 | 9.31 | 102.7 | 10095.41 | 9662.89 | 10547.29
11.3 | 77.62 | 32.3 | 9.85 | 108 | 1220.4 | 127.69 | 11664
103.2 | 78.01 | 31.39 | 7.21 | 106.1 | 10949.52 | 10650.24 | 11257.21
108.9 | 83.57 | 35.61 | 7.39 | 114.9 | 12512.61 | 11859.21 | 13202.01
108.5 | 90.59 | 37.58 | 7.98 | 123.8 | 13432.3 | 11772.25 | 15326.44
111.4 | 95.47 | 35.17 | 7.42 | 126.5 | 14092.1 | 12409.96 | 16002.25
Sum: $\sum Y_i = 1130.7$, $\sum Z_i = 1304.83$, $\sum Z_i Y_i = 109839.77$, $\sum Y_i^2 = 100847.97$, $\sum Z_i^2 = 129750.57$; $\bar{Y} = 80.76$, $\bar{Z} = 93.20$

Yi =  0 + 1Z i + e , Y = Z  + e , Yˆ = Z ˆ
  y   1130 .7   n z   14 1304 .83 
Z Y =  = , Z Z =  =
i
 
 zi yi  109839 .77   z i z 2
i  1304 .83 129750 .57 
adg( Z Z )
( Z Z ) −1 =
Z Z
14 1304.83
Z Z = = 1816507 .98 − 1702581 .3289 = 113926 .6511
1304.83 129750 .57
129750 .57 − 1304.83 
adg( Z Z ) =  
 − 1304.83 14 
129750 .57 − 1304 .83 
 
adg ( Z Z )  − 1304 .83 14  =  1.1389 − 0.0115 
( Z Z ) =
−1
=  − 0.0115 0.0001 
Z Z 113926 .6511  
 1.1389 − 0.0115  1130 .7  1287.7542 − 1263 .1574   24.5969 
ˆ = ( Z Z ) −1 Z Y =   = = 
 − 0.0115 0.0001 109839 .77   − 13.0031 + 10.9840   − 2.0191
Yˆi = ˆ0 + ˆ1Zi = 24.5969 − 2.0191Zi
ˆ2 = 0.75ˆ1 = 0.75  (−2.0191) = −1.5143 and ˆ3 = 0.625ˆ1 = 0.625  (−2.0191) = −1.2619
Yˆi = 24.5969 − 2.0191 X i1 − 1.5143 X i 2 − 1.2619 X i 3
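A cross-check in Python: it builds Z from X1, X2, X3 with the restricted weights and solves the normal equations exactly, so it reproduces the unrounded solution noted above rather than the rounded hand computation:

```python
import numpy as np

# Klein-Goldberger reformulation: fit Y on a constant and Z.
X1 = np.array([43.41, 46.44, 44.35, 47.82, 51.02, 58.71, 87.69,
               76.73, 75.91, 77.62, 78.01, 83.57, 90.59, 95.47])
X2 = np.array([17.1, 18.65, 17.09, 19.28, 23.24, 28.11, 30.29,
               28.26, 27.91, 32.3, 31.39, 35.61, 37.58, 35.17])
X3 = np.array([3.96, 5.48, 4.37, 4.51, 4.88, 6.37, 8.96,
               9.76, 9.31, 9.85, 7.21, 7.39, 7.98, 7.42])
Y = np.array([62.8, 65, 63.9, 67.5, 71.3, 76.6, 86.3,
              95.7, 98.3, 11.3, 103.2, 108.9, 108.5, 111.4])
Z = X1 + 0.75 * X2 + 0.625 * X3
A = np.column_stack([np.ones_like(Z), Z])
beta = np.linalg.solve(A.T @ A, A.T @ Y)
print(beta)   # approximately [29.7, 0.55] with the unrounded inverse
```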

Example -2- // A multiple regression of y on a constant, x₁, and x₂ produced the following results:
$$\hat{Y}_i = 4 + 0.4X_1 + 0.9X_2, \qquad R^2 = 8/60, \qquad e'e = 52, \qquad n = 29$$
$$X'X = \begin{pmatrix} 29 & 0 & 0 \\ 0 & 50 & 10 \\ 0 & 10 & 80 \end{pmatrix}$$
1. Find the ANOVA table from the information given above.
2. Find the adjusted coefficient of determination and explain the result.
Sol // 1.
(ANOVA table, general form)
S.O.V | d.f | SS | MS | F
Reg. $(X_1, X_2)$ | $k$ | $SST(R^2)$ | $SSR/k$ | Cal $F = \dfrac{MSR}{MSE} = \dfrac{R^2/k}{(1-R^2)/(n-k-1)}$
Error (Residual) | $n-k-1$ | $SST(1-R^2)$ | $SSE/(n-k-1)$ |
Total | $n-1$ | $SST$ | |

(ANOVA table, computed)
S.O.V | d.f | SS | MS | F
Reg. $(X_1, X_2)$ | 2 | 8 | 4 | Cal $F = 4/2 = 2$
Error (Residual) | 26 | 52 | 2 |
Total | 28 | 60 | |
2. $$R^2 = \frac{SSR}{SST} = \frac{8}{60} = 0.1333$$
$$\bar{R}^2 = 1 - \frac{(n-1)(1-R^2)}{(n-k-1)} = 1 - \frac{(28)(0.8667)}{26} = 0.0667$$
$\bar{R}^2 < R^2$: as the number of X variables increases, $\bar{R}^2$ becomes less than $R^2$. Because $R^2$ usually increases as more explanatory variables are added to the model, it does not reflect the real influence of the explanatory variables on the dependent variable and is not a good way to compare models; therefore we use the adjusted determination coefficient to remove this bias.

Example -3- // From the following data, using the matrix algebra method and Cramer's method:
$$n = 20, \quad \sum Y_i = 60, \quad \sum Y_i^2 = 450, \quad \sum X_i = 245, \quad \sum X_i^2 = 4460, \quad \sum X_i Y_i = 1156$$
Required:
1. Estimate the coefficient vector of the linear model using the matrix method and Cramer's method.
2. Find the Var-Cov matrix of the estimated parameters.
3. Test the significance of the parameters at $\alpha = 5\%$ and explain the results, given $t_{(0.025, 18)} = 2.10$.
Sol // by matrix algebra:
1. $$Y_i = \beta_0 + \beta_1 X_i + e, \qquad Y = X\beta + e, \qquad \hat{Y} = X\hat{\beta}$$
$$X'Y = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix} = \begin{pmatrix} 60 \\ 1156 \end{pmatrix}, \qquad X'X = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix} = \begin{pmatrix} 20 & 245 \\ 245 & 4460 \end{pmatrix}$$
$$|X'X| = \begin{vmatrix} 20 & 245 \\ 245 & 4460 \end{vmatrix} = 89200 - 60025 = 29175$$
$$adj(X'X) = \begin{pmatrix} 4460 & -245 \\ -245 & 20 \end{pmatrix}$$
$$(X'X)^{-1} = \frac{adj(X'X)}{|X'X|} = \frac{1}{29175}\begin{pmatrix} 4460 & -245 \\ -245 & 20 \end{pmatrix} = \begin{pmatrix} 0.153 & -0.0084 \\ -0.0084 & 0.00067 \end{pmatrix}$$
$$\hat{\beta} = (X'X)^{-1}X'Y = \begin{pmatrix} 0.153 & -0.0084 \\ -0.0084 & 0.00067 \end{pmatrix}\begin{pmatrix} 60 \\ 1156 \end{pmatrix} = \begin{pmatrix} 9.18 - 9.7104 \\ -0.504 + 0.77452 \end{pmatrix} = \begin{pmatrix} -0.5304 \\ 0.27052 \end{pmatrix}$$
$$\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i = -0.5304 + 0.27052 X_i$$
By Cramer's method:
$$\begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix} = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}\begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}, \qquad \begin{pmatrix} 60 \\ 1156 \end{pmatrix} = \begin{pmatrix} 20 & 245 \\ 245 & 4460 \end{pmatrix}\begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}$$
$$N = \begin{vmatrix} 20 & 245 \\ 245 & 4460 \end{vmatrix} = 89200 - 60025 = 29175$$
$$N_0 = \begin{vmatrix} 60 & 245 \\ 1156 & 4460 \end{vmatrix} = (60)(4460) - (1156)(245) = 267600 - 283220 = -15620$$
$$N_1 = \begin{vmatrix} 20 & 60 \\ 245 & 1156 \end{vmatrix} = (20)(1156) - (245)(60) = 23120 - 14700 = 8420$$
$$\hat{\beta}_0 = \frac{N_0}{N} = \frac{-15620}{29175} = -0.5354, \qquad \hat{\beta}_1 = \frac{N_1}{N} = \frac{8420}{29175} = 0.2886$$
$$\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i = -0.5354 + 0.2886 X_i$$
(The small differences from the matrix solution come from rounding the entries of $(X'X)^{-1}$; Cramer's values here are the exact ratios.)

2. $$Var\text{-}Cov(\hat{\beta}) = \sigma^2(X'X)^{-1}$$
$$S_e^2 = \hat{\sigma}_e^2 = \frac{\sum e_i^2}{n-k-1} = \frac{e'e}{n-k-1} = \frac{Y'Y - \hat{\beta}'X'Y}{n-k-1}, \qquad Y'Y = \sum Y_i^2 = 450$$
$$S_e^2 = \frac{450 - \begin{pmatrix} -0.5304 & 0.27052 \end{pmatrix}\begin{pmatrix} 60 \\ 1156 \end{pmatrix}}{20 - 1 - 1} = \frac{450 - 280.897}{18} = \frac{169.103}{18} = 9.3946$$
$$Var\text{-}Cov(\hat{\beta}) = S_e^2(X'X)^{-1} = 9.3946\begin{pmatrix} 0.153 & -0.0084 \\ -0.0084 & 0.00067 \end{pmatrix} = \begin{pmatrix} 1.437 & -0.0789 \\ -0.0789 & 0.0063 \end{pmatrix} = \begin{pmatrix} Var(\hat{\beta}_0) & Cov(\hat{\beta}_0, \hat{\beta}_1) \\ Cov(\hat{\beta}_1, \hat{\beta}_0) & Var(\hat{\beta}_1) \end{pmatrix}$$

3. t-test hypotheses for the significance of the parameters at $\alpha = 5\%$:
$$H_0: \beta_j = 0, \qquad H_1: \beta_j \neq 0, \qquad j = 0, 1$$
$$\text{Cal } t = \frac{\hat{\beta}_j - \beta_j}{S(\hat{\beta}_j)} = \frac{\hat{\beta}_j}{S(\hat{\beta}_j)} = \frac{\hat{\beta}_j}{\sqrt{\hat{\sigma}^2 c_{jj}}}$$
For $H_0: \beta_0 = 0$ vs $H_1: \beta_0 \neq 0$:
$$t(\hat{\beta}_0) = \frac{\hat{\beta}_0 - \beta_0}{\sqrt{Var(\hat{\beta}_0)}} = \frac{-0.5304 - 0}{\sqrt{1.437}} = \frac{-0.5304}{1.1987} = -0.4425$$
For $H_0: \beta_1 = 0$ vs $H_1: \beta_1 \neq 0$:
$$t(\hat{\beta}_1) = \frac{\hat{\beta}_1 - \beta_1}{\sqrt{Var(\hat{\beta}_1)}} = \frac{0.27052 - 0}{\sqrt{0.0063}} = \frac{0.27052}{0.0794} = 3.407$$
With $n - 2 = 20 - 2 = 18$ d.f., $\alpha/2 = 0.05/2 = 0.025$, and $t_{(0.025, 18)} = 2.10$:
Since $|\text{Cal } t(\hat{\beta}_0)| < \text{Tab } t$, we do not reject $H_0: \beta_0 = 0$ (we can say we accept $H_0$).
But since $|\text{Cal } t(\hat{\beta}_1)| > \text{Tab } t$, we reject $H_0: \beta_1 = 0$; $\beta_1 \neq 0$, which means the explanatory variable influences the dependent variable significantly. Equivalently, since the calculated t lies outside the region of acceptance, we accept the alternative hypothesis $H_1$.

Example H.W. // two samples of 50 observations each produce the following moment matrices (in each case, X is a constant and one variable):

Sample 1:
$$X'X = \begin{pmatrix} 50 & 300 \\ 300 & 2100 \end{pmatrix}, \qquad X'Y = \begin{pmatrix} 300 \\ 2000 \end{pmatrix}, \qquad Y'Y = 2100$$
Sample 2:
$$X'X = \begin{pmatrix} 50 & 300 \\ 300 & 2100 \end{pmatrix}, \qquad X'Y = \begin{pmatrix} 300 \\ 2200 \end{pmatrix}, \qquad Y'Y = 2800$$

1. Compute the least squares regression coefficients and the residual variances $S_e^2$ for each data set. Compute $R^2$ for each regression.
2. Compute the ANOVA table for the two regressions, then explain and compare the two results.
Example H.W. // the following model was formulated to explain the annual sales of manufacturers of household cleaning products as a function of a relative price index ($rpi$) and advertising expenditures ($adv$): $sales = \beta_0 + \beta_1 rpi + \beta_2 adv + e_i$

firm sales rpi adv


1 10 10 30
2 8 11 40
3 7 13 60
4 6 10 10
5 13 8 30
6 6 8 10
7 12 9 60
8 7 12 20
9 9 12 40
10 15 9 70
1. Estimate the parameters of the proposed model.
2. Estimate the variance-covariance matrix.
3. Calculate the ANOVA table, $R^2$, and $\bar{R}^2$, and explain the results.
