
The Method of Ordinary Least Squares:

Let us consider the two-variable PRF:

$Y_i = \beta_1 + \beta_2 X_i + \mu_i \quad \dots (1)$

However, the PRF is not directly observable. We estimate it from the SRF:

$Y_i = \hat\beta_1 + \hat\beta_2 X_i + \hat\mu_i$

$Y_i = \hat Y_i + \hat\mu_i$

where $\hat Y_i$ is the estimated value of $Y_i$. Therefore,

$\hat\mu_i = Y_i - \hat Y_i = Y_i - \hat\beta_1 - \hat\beta_2 X_i \quad \dots (2)$

Squaring both sides and summing over the sample, we get

$\sum \hat\mu_i^2 = \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i)^2 \quad \dots (3)$

Now differentiating (3) partially with respect to $\hat\beta_1$ and $\hat\beta_2$, we obtain

$\dfrac{\partial}{\partial \hat\beta_1}\left(\sum \hat\mu_i^2\right) = -2 \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i) \quad \dots (4)$

$\dfrac{\partial}{\partial \hat\beta_2}\left(\sum \hat\mu_i^2\right) = -2 \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i) X_i \quad \dots (5)$

Now, setting (4) = 0, we get

$-2 \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i) = 0$

$\Rightarrow \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i) = 0$

$\Rightarrow \sum Y_i - \sum \hat\beta_1 - \hat\beta_2 \sum X_i = 0$

$\Rightarrow \sum Y_i - n\hat\beta_1 - \hat\beta_2 \sum X_i = 0$

$\Rightarrow \hat\beta_1 = \bar Y - \hat\beta_2 \bar X \quad \dots (6)$

Setting (5)=0, we get

$-2 \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i) X_i = 0$

$\Rightarrow \sum (Y_i - \hat\beta_1 - \hat\beta_2 X_i) X_i = 0$

$\Rightarrow \sum Y_i X_i - \sum \hat\beta_1 X_i - \sum \hat\beta_2 X_i^2 = 0$

$\Rightarrow \sum Y_i X_i - \hat\beta_1 \sum X_i - \hat\beta_2 \sum X_i^2 = 0$

Substituting (6) for $\hat\beta_1$,

$\Rightarrow \sum Y_i X_i - (\bar Y - \hat\beta_2 \bar X) \sum X_i - \hat\beta_2 \sum X_i^2 = 0$

$\Rightarrow \sum Y_i X_i - \bar Y \sum X_i + \hat\beta_2 \bar X \sum X_i - \hat\beta_2 \sum X_i^2 = 0$

$\Rightarrow \sum Y_i X_i - \dfrac{\sum X_i \sum Y_i}{n} = \hat\beta_2 \left( \sum X_i^2 - \dfrac{(\sum X_i)^2}{n} \right)$

$\Rightarrow \hat\beta_2 = \dfrac{\sum Y_i X_i - \dfrac{\sum X_i \sum Y_i}{n}}{\sum X_i^2 - \dfrac{(\sum X_i)^2}{n}}$
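To see the estimators in action, here is a minimal numerical sketch in Python that applies the two formulas just derived (the solutions of the normal equations for $\hat\beta_1$ and $\hat\beta_2$); the data values are purely illustrative and not taken from the text.

```python
# Minimal sketch: computing the OLS estimates with the formulas derived above.
# The data below are illustrative values, not from the text.
n = 5
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8]

sum_X = sum(X)
sum_Y = sum(Y)
sum_XY = sum(x * y for x, y in zip(X, Y))
sum_X2 = sum(x * x for x in X)

# beta2_hat = (sum Y_i X_i - sum X_i * sum Y_i / n) / (sum X_i^2 - (sum X_i)^2 / n)
beta2_hat = (sum_XY - sum_X * sum_Y / n) / (sum_X2 - sum_X ** 2 / n)
# beta1_hat = Ybar - beta2_hat * Xbar   (equation (6))
beta1_hat = sum_Y / n - beta2_hat * sum_X / n

print(beta1_hat, beta2_hat)
```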
Numerical properties of the estimators:

1. The OLS estimators are expressed solely in terms of the observable quantities. Therefore, they
can be easily computed.
2. They are point estimators.
3. Once the OLS estimates are obtained from the sample data, the regression line can be easily
obtained. The regression line thus obtained has the following properties:
i. It passes through the sample means of Y and X
ii. The mean value of the estimated $Y_i$ ($= \hat Y_i$) is equal to the mean value of the actual $Y_i$, for

$\hat Y_i = \hat\beta_1 + \hat\beta_2 X_i = (\bar Y - \hat\beta_2 \bar X) + \hat\beta_2 X_i = \bar Y + \hat\beta_2 (X_i - \bar X)$

Summing both sides of this last equality over the sample values and dividing through by n gives

$\bar{\hat Y} = \bar Y$

iii. The mean value of the residuals $\hat\mu_i$ is equal to zero.
iv. The residuals $\hat\mu_i$ are uncorrelated with the predicted $\hat Y_i$.
v. The residuals $\hat\mu_i$ are uncorrelated with $X_i$.
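The properties i–v above can be checked numerically. The following sketch (again with illustrative data) fits the line by OLS and verifies each property, interpreting "uncorrelated" as a zero sum of cross-products, which is equivalent here because the residuals have zero mean.

```python
# Minimal sketch (illustrative data): checking the numerical properties of the
# fitted regression line listed above.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(X)

Xbar, Ybar = sum(X) / n, sum(Y) / n
b2 = sum((x - Xbar) * (y - Ybar) for x, y in zip(X, Y)) / sum((x - Xbar) ** 2 for x in X)
b1 = Ybar - b2 * Xbar

Y_hat = [b1 + b2 * x for x in X]
resid = [y - yh for y, yh in zip(Y, Y_hat)]

print(abs(b1 + b2 * Xbar - Ybar) < 1e-12)   # (i)   line passes through (Xbar, Ybar)
print(abs(sum(Y_hat) / n - Ybar) < 1e-12)   # (ii)  mean of Y_hat equals mean of Y
print(abs(sum(resid)) < 1e-12)              # (iii) residuals average to zero
print(abs(sum(u * yh for u, yh in zip(resid, Y_hat))) < 1e-9)  # (iv) residuals vs. Y_hat
print(abs(sum(u * x for u, x in zip(resid, X))) < 1e-9)        # (v)  residuals vs. X
```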
Gauss-Markov Theorem:
Given the assumptions of the classical linear regression model, the least squares estimators, in the class of linear unbiased estimators, have minimum variance; that is, they are BLUE (best linear unbiased estimators).

Proof of the Gauss-Markov Theorem:

Linearity:

Let us consider the regression model

$Y_i = \beta_1 + \beta_2 X_i + \mu_i \quad \dots (1)$

The sample regression equation corresponding to this PRF is

$Y_i = \hat\beta_1 + \hat\beta_2 X_i + \hat\mu_i$

Now the OLS coefficient estimators are given by the formulas

$\hat\beta_2 = \dfrac{\sum Y_i X_i - \dfrac{\sum X_i \sum Y_i}{n}}{\sum X_i^2 - \dfrac{(\sum X_i)^2}{n}}$

$\hat\beta_1 = \bar Y - \hat\beta_2 \bar X$

From the expression for $\hat\beta_2$, writing the variables in deviation form ($x_i = X_i - \bar X$, $y_i = Y_i - \bar Y$), we can write

$\hat\beta_2 = \dfrac{\sum y_i x_i}{\sum x_i^2}$

$\hat\beta_2 = \dfrac{\sum (Y_i - \bar Y) x_i}{\sum x_i^2}$

$\hat\beta_2 = \dfrac{\sum Y_i x_i}{\sum x_i^2} - \dfrac{\bar Y \sum x_i}{\sum x_i^2}$

$\hat\beta_2 = \dfrac{\sum Y_i x_i}{\sum x_i^2} = \sum k_i Y_i \quad \dots (3)$

where the second term vanishes because $\sum x_i = 0$, and $k_i = \dfrac{x_i}{\sum x_i^2}$.
The OLS estimator $\hat\beta_2$ is thus a linear function of the sample values $Y_i$.

Some properties of $k_i$:

i. $\sum k_i = \dfrac{\sum x_i}{\sum x_i^2} = 0$

ii. $\sum k_i^2 = \sum \left( \dfrac{x_i}{\sum x_i^2} \right)^2 = \dfrac{\sum x_i^2}{\left( \sum x_i^2 \right)^2} = \dfrac{1}{\sum x_i^2}$

iii. $\sum k_i x_i = \sum k_i (X_i - \bar X) = \sum k_i X_i - \bar X \sum k_i = \sum k_i X_i$

iv. $\sum k_i x_i = \dfrac{\sum x_i^2}{\sum x_i^2} = 1$, and hence, by (iii), $\sum k_i X_i = 1$
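A short sketch (illustrative data again) that verifies properties i–iv of the weights $k_i$ and confirms that $\sum k_i Y_i$ reproduces the usual OLS slope estimate.

```python
# Minimal sketch (illustrative data): verifying the properties of the weights k_i
# and that beta2_hat = sum(k_i * Y_i) equals the usual OLS slope.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(X)

Xbar, Ybar = sum(X) / n, sum(Y) / n
x = [xi - Xbar for xi in X]            # deviations x_i = X_i - Xbar
Sxx = sum(xi ** 2 for xi in x)
k = [xi / Sxx for xi in x]             # k_i = x_i / sum(x_i^2)

print(abs(sum(k)) < 1e-12)                                    # (i)   sum k_i = 0
print(abs(sum(ki ** 2 for ki in k) - 1 / Sxx) < 1e-12)        # (ii)  sum k_i^2 = 1 / sum x_i^2
print(abs(sum(ki * Xi for ki, Xi in zip(k, X)) - 1) < 1e-12)  # (iii)/(iv) sum k_i X_i = sum k_i x_i = 1

beta2_linear = sum(ki * yi for ki, yi in zip(k, Y))           # beta2_hat as a linear function of Y_i
beta2_usual = sum(xi * (yi - Ybar) for xi, yi in zip(x, Y)) / Sxx
print(abs(beta2_linear - beta2_usual) < 1e-12)
```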
Unbiasedness of $\hat\beta_2$:

We know that

$\hat\beta_2 = \sum k_i Y_i$

$= \sum k_i (\beta_1 + \beta_2 X_i + \mu_i)$ (from (1))

$= \beta_1 \sum k_i + \beta_2 \sum k_i X_i + \sum k_i \mu_i$

$= \beta_2 + \sum k_i \mu_i$ (since $\sum k_i = 0$ and $\sum k_i X_i = 1$)

Since the $k_i$ are nonstochastic and $E(\mu_i) = 0$ by assumption, taking expectations on both sides gives

$E(\hat\beta_2) = \beta_2 + \sum k_i E(\mu_i) = \beta_2$
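Because the identity $\hat\beta_2 = \beta_2 + \sum k_i \mu_i$ holds for every sample, it can be checked exactly on simulated data where $\beta_1$, $\beta_2$ and the disturbances are known. The sketch below uses assumed values ($\beta_1 = 2$, $\beta_2 = 0.5$, standard normal disturbances) chosen only for illustration.

```python
# Minimal sketch: on simulated data with known beta1, beta2 and disturbances,
# the identity beta2_hat = beta2 + sum(k_i * mu_i) holds exactly.
# All numbers below (betas, X values, seed) are illustrative assumptions.
import random

random.seed(0)
beta1, beta2 = 2.0, 0.5
X = [float(i) for i in range(1, 21)]
mu = [random.gauss(0.0, 1.0) for _ in X]
Y = [beta1 + beta2 * xi + ui for xi, ui in zip(X, mu)]

n = len(X)
Xbar = sum(X) / n
x = [xi - Xbar for xi in X]
Sxx = sum(xi ** 2 for xi in x)
k = [xi / Sxx for xi in x]

beta2_hat = sum(ki * yi for ki, yi in zip(k, Y))
decomposition = beta2 + sum(ki * ui for ki, ui in zip(k, mu))
print(abs(beta2_hat - decomposition) < 1e-12)
```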

Unbiasedness of $\hat\beta_1$:

$Y_i = \beta_1 + \beta_2 X_i + \mu_i$

$\Rightarrow \sum Y_i = \sum \beta_1 + \beta_2 \sum X_i + \sum \mu_i$

$\Rightarrow \sum Y_i = n\beta_1 + \beta_2 \sum X_i + \sum \mu_i$

Dividing through by n,

$\dfrac{\sum Y_i}{n} = \beta_1 + \beta_2 \dfrac{\sum X_i}{n} + \dfrac{\sum \mu_i}{n}$

$\Rightarrow \bar Y = \beta_1 + \beta_2 \bar X + \bar\mu \quad \dots (2)$
Substituting (2) into the formula for $\hat\beta_1$, we get

$\hat\beta_1 = \bar Y - \hat\beta_2 \bar X$

$= \beta_1 + \beta_2 \bar X + \bar\mu - \hat\beta_2 \bar X$

$= \beta_1 + (\beta_2 - \hat\beta_2) \bar X + \bar\mu$

Taking expectations, and using $E(\hat\beta_2) = \beta_2$ and $E(\bar\mu) = 0$, we obtain $E(\hat\beta_1) = \beta_1$.
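Unbiasedness can also be illustrated by simulation: averaging $\hat\beta_1$ and $\hat\beta_2$ over many replications of the model should come close to the true parameter values. The sketch below is a minimal Monte Carlo check under the same illustrative assumptions as before (true $\beta_1 = 2$, $\beta_2 = 0.5$).

```python
# Minimal Monte Carlo sketch: averaging the OLS estimates over many simulated
# samples (with illustrative true values beta1 = 2, beta2 = 0.5) should come
# close to the true parameters, as unbiasedness suggests.
import random

random.seed(1)
beta1, beta2 = 2.0, 0.5
X = [float(i) for i in range(1, 21)]
n = len(X)
Xbar = sum(X) / n
x = [xi - Xbar for xi in X]
Sxx = sum(xi ** 2 for xi in x)

reps = 5000
b1_sum = b2_sum = 0.0
for _ in range(reps):
    mu = [random.gauss(0.0, 1.0) for _ in X]
    Y = [beta1 + beta2 * xi + ui for xi, ui in zip(X, mu)]
    Ybar = sum(Y) / n
    b2 = sum(xi * (yi - Ybar) for xi, yi in zip(x, Y)) / Sxx
    b1 = Ybar - b2 * Xbar
    b1_sum += b1
    b2_sum += b2

print(b1_sum / reps, b2_sum / reps)   # should be close to 2.0 and 0.5
```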

Minimum variance property:
