where $y_i$ is the ith observation on the dependent variable $y$, $\beta_j$ is the coefficient on the jth
explanatory variable (regressor) $X_{ji}$, and $u_i$ is the ith observation on an unobserved error term
$u$. There are n observations and k regressors. All n observations in the model can be written
as a set of equations
$$y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + u_i, \qquad i = 1, \dots, n,$$
or, stacking the observations in matrix form,
$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} =
\begin{bmatrix} 1 & X_{11} & X_{21} & \cdots & X_{k1} \\
1 & X_{12} & X_{22} & \cdots & X_{k2} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & X_{1n} & X_{2n} & \cdots & X_{kn} \end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} +
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix},$$
or
$$y = X\beta + u.$$
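The matrix form above leads directly to the familiar least-squares solution $\hat{\beta} = (X'X)^{-1}X'y$. A minimal sketch in NumPy, using illustrative simulated data (the sample size, coefficients, and random seed are assumptions for the example, not from the text):

```python
import numpy as np

# Hypothetical data: n = 5 observations, k = 2 regressors.
rng = np.random.default_rng(0)
n, k = 5, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # column of ones carries beta_0
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=n)

# Least-squares estimate via the normal equations: (X'X) beta_hat = X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat.shape)  # (3,) — one coefficient per column of X
```

Solving the normal equations with `np.linalg.solve` avoids forming the inverse explicitly; a byproduct worth checking is that the residuals $y - X\hat{\beta}$ are orthogonal to every column of $X$.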
The assumptions underlying the method of least squares (Gujarati, 2009, pp. 61–68):
1. Linear regression model: The regression model is linear in the parameters, though it
may or may not be linear in the variables.
2. Fixed X values or X values independent of error term: Values taken by the regressor
X may be considered fixed in repeated samples.
3. Zero mean value of disturbance ui : Given the value of X i , the mean, or expected value,
of the random disturbance term ui is zero.
4. Homoscedasticity or constant variance of ui : The variance of the error, or disturbance,
term is the same regardless of the value of X.
5. No autocorrelation between the disturbances: Given any two X values, X i and X j ( i≠ j ),
the correlation between the corresponding disturbances ui and u j is zero.
6. The number of observations n must be greater than the number of parameters to be
estimated.
7. The nature of X variables: The X values in a given sample must not all be the same, and
there must be no outliers in the values of the X variable.
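A short sketch of a data-generating process that satisfies these classical assumptions may make them concrete. The sample size, coefficient values, and error distribution below are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
x = np.linspace(0.0, 10.0, n)        # fixed, non-constant regressor (assumptions 2 and 7)
u = rng.normal(0.0, 1.0, size=n)     # iid errors: zero mean (3), constant variance (4),
                                     # no autocorrelation (5)
y = 2.0 + 0.5 * x + u                # linear in the parameters (1); n = 100 > 2 parameters (6)

# Fit by least squares; np.polyfit returns (slope, intercept) for degree 1.
b1, b0 = np.polyfit(x, y, 1)
```

With all seven assumptions holding, the fitted intercept and slope should land close to the true values 2.0 and 0.5; violating any one of them (e.g., heteroscedastic errors in assumption 4) would not bias this fit but would invalidate the usual variance formulas.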
Properties of least-squares estimators: The Gauss-Markov Theorem (Gujarati, 2009, p. 71).
An estimator is said to be a best linear unbiased estimator (BLUE) if the following hold:
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the
regression model.
2. It is unbiased, that is, its average or expected value is equal to the true value.
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator
with the least variance is known as an efficient estimator.
Gauss-Markov Theorem:
Given the assumptions of the classical linear regression model, the least-squares estimators, in the
class of unbiased linear estimators, have minimum variance, that is, they are BLUE.
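The theorem can be illustrated by a small Monte Carlo experiment: compare the OLS slope with another linear unbiased estimator, such as the endpoint estimator $(y_n - y_1)/(x_n - x_1)$. Both average out to the true slope, but under the classical assumptions OLS shows the smaller sampling variance. The design (sample size, coefficients, number of replications) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)       # regressor held fixed across repeated samples (assumption 2)
beta0, beta1 = 1.0, 2.0

ols, endpoint = [], []
for _ in range(2000):
    y = beta0 + beta1 * x + rng.normal(size=x.size)
    ols.append(np.polyfit(x, y, 1)[0])                  # OLS slope estimate
    endpoint.append((y[-1] - y[0]) / (x[-1] - x[0]))    # alternative linear unbiased estimator

ols, endpoint = np.array(ols), np.array(endpoint)
print(ols.var() < endpoint.var())    # prints True: OLS slope has the smaller variance
```

Both estimators are linear in $y$ and unbiased, so the comparison isolates exactly the property the theorem asserts: among such estimators, least squares attains the minimum variance.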