
Discussion of the Gauss-Markov Theorem
Introduction to Econometrics (C. Flinn)
October 1, 2004

We start with estimation of the linear (in the parameters) model

    $y = X\beta + \varepsilon,$

where we assume that:

1. $E(\varepsilon|X) = 0$ for all $X$ (mean independence)
2. $VAR(\varepsilon|X) = E(\varepsilon\varepsilon'|X) = \sigma^2 I_n$ (homoskedasticity)

The Gauss-Markov Theorem states that $\hat{\beta} = (X'X)^{-1}X'y$ is the Best Linear Unbiased Estimator (BLUE) if $\varepsilon$ satisfies (1) and (2).

Proof: An estimator is best in a class if it has a smaller variance than other estimators in the same class. We are restricting our search for estimators to the class of linear, unbiased ones. Since the data are the $y$ (not the $X$), we are looking at estimators that are linear functions of $y$, or

    $\tilde{\beta} = m + My,$

where $\beta$ is a $k \times 1$ parameter vector, $m$ is a $k \times 1$ vector of constants, $M$ is a $k \times n$ matrix of constants, and the data vector $y$ is $n \times 1$. Second, we are restricting attention to the class of unbiased estimators; that is, we require that

    $E(\tilde{\beta}) = \beta$

for any value $\beta$ could take, i.e., for all $\beta$ in the parameter space.
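As a quick illustration (not part of the original notes), the following Python/NumPy sketch simulates data satisfying assumptions (1) and (2) and computes the OLS estimator $\hat{\beta} = (X'X)^{-1}X'y$; the choices of $n$, $k$, $\beta$, and $\sigma$ are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 3                        # sample size and number of regressors (arbitrary)
    beta = np.array([1.0, -2.0, 0.5])    # "true" parameter vector (arbitrary)
    sigma = 1.5                          # error standard deviation (homoskedastic)

    X = rng.normal(size=(n, k))            # design matrix
    eps = rng.normal(scale=sigma, size=n)  # E(eps|X) = 0, Var(eps|X) = sigma^2 I_n
    y = X @ beta + eps

    # OLS: beta_hat = (X'X)^{-1} X'y, computed with a linear solve for numerical stability
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    print(beta_hat)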

First note that if $\tilde{\beta}$ is to be unbiased, then

    $E(\tilde{\beta}|X) = m + M E(y|X)$
    $\qquad = m + M E(X\beta + \varepsilon|X)$
    $\qquad = m + MX\beta + M E(\varepsilon|X)$
    $\qquad = m + MX\beta,$

where the last line follows from the mean independence assumption. To be unbiased for any possible value of $\beta$ then requires

    $m = 0$ and $MX = I_k.$   (1)

We note that the least squares estimator satisfies this requirement for unbiasedness, since for $\hat{\beta}$, $M = (X'X)^{-1}X'$, and $MX = (X'X)^{-1}X'X = I_k$. Thus looking for linear unbiased estimators requires us to look for estimators of the form $\tilde{\beta} = My$, for an $M$ that satisfies (1). Without any loss of generality, we can redefine the $M$ matrix to be of the form

    $M = (X'X)^{-1}X' + C,$

where $C$ is some $k \times n$ matrix. Using condition (1) again, unbiasedness of our estimator requires that

    $MX = I_k$
    $[(X'X)^{-1}X' + C]X = I_k$
    $I_k + CX = I_k$
    $CX = 0,$

that is, $CX$ is a $k \times k$ matrix of 0's.
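To make the decomposition $M = (X'X)^{-1}X' + C$ with $CX = 0$ concrete, here is an illustrative sketch (not from the notes): it builds $C = D(I_n - X(X'X)^{-1}X')$ for an arbitrary $k \times n$ matrix $D$, which guarantees $CX = 0$, and checks that the resulting $M$ satisfies $MX = I_k$, so $\tilde{\beta} = My$ is linear and unbiased.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 200, 3
    X = rng.normal(size=(n, k))

    XtX_inv = np.linalg.inv(X.T @ X)
    M_ols = XtX_inv @ X.T                     # the OLS choice of M
    P = X @ XtX_inv @ X.T                     # projection onto the column space of X

    D = rng.normal(size=(k, n))               # arbitrary k x n matrix (illustrative choice)
    C = D @ (np.eye(n) - P)                   # annihilates X, so CX = 0
    M_alt = M_ols + C                         # M for an alternative linear unbiased estimator

    print(np.allclose(C @ X, 0))              # True: CX = 0
    print(np.allclose(M_alt @ X, np.eye(k)))  # True: MX = I_k, so beta_tilde = M_alt @ y is unbiased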

Now we can compute the covariance matrix of an alternative estimator $\tilde{\beta}$. We can write

    $\tilde{\beta} = My = M(X\beta + \varepsilon) = \beta + M\varepsilon.$

Thus $\tilde{\beta} - \beta = M\varepsilon$. Since it is unbiased by construction, $E(\tilde{\beta} - \beta|X) = 0$, so the covariance matrix of the estimator is

    $E((\tilde{\beta} - \beta)(\tilde{\beta} - \beta)'|X) = E(M\varepsilon(M\varepsilon)'|X)$
    $\qquad = E(M\varepsilon\varepsilon'M'|X)$
    $\qquad = M E(\varepsilon\varepsilon'|X) M'$
    $\qquad = M \sigma^2 I_n M'$
    $\qquad = \sigma^2 MM',$

which is a $k \times k$ matrix.
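As a numerical check of the formula $Cov(\tilde{\beta}|X) = \sigma^2 MM'$ (again an illustration, not part of the notes), one can hold $X$ fixed, draw $\varepsilon$ many times, and compare the Monte Carlo covariance of $\tilde{\beta} = My$ with $\sigma^2 MM'$.

    import numpy as np

    rng = np.random.default_rng(2)
    n, k, sigma = 200, 3, 1.5
    beta = np.array([1.0, -2.0, 0.5])
    X = rng.normal(size=(n, k))

    XtX_inv = np.linalg.inv(X.T @ X)
    P = X @ XtX_inv @ X.T
    C = rng.normal(size=(k, n)) @ (np.eye(n) - P)  # any C with CX = 0
    M = XtX_inv @ X.T + C

    # Monte Carlo: draw eps repeatedly with X held fixed
    reps = 20000
    eps = rng.normal(scale=sigma, size=(reps, n))  # each row is one draw of the error vector
    draws = (X @ beta + eps) @ M.T                 # each row is one draw of beta_tilde = M y

    print(np.cov(draws, rowvar=False))             # Monte Carlo covariance of beta_tilde
    print(sigma**2 * M @ M.T)                      # theoretical covariance sigma^2 M M'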

Now

    $MM' = [(X'X)^{-1}X' + C][(X'X)^{-1}X' + C]'$
    $\qquad = (X'X)^{-1}X'X(X'X)^{-1} + (X'X)^{-1}X'C' + CX(X'X)^{-1} + CC'.$

Since $CX = 0$ (and of course $X'C' = 0$ as well),

    $MM' = (X'X)^{-1} + CC'.$

Now the matrix $CC'$ is a $k \times k$ cross-products matrix, which by construction cannot be negative definite. The best estimator in a class of estimators is the one with the smallest covariance matrix, where by "small" we mean that the covariance matrix associated with any other estimator in the class (that is, linear and unbiased in the current context) minus the covariance matrix of the best estimator is a positive definite matrix. Formally, the matrix difference $MM' - COV(\text{best estimator})$ is positive definite. Since $MM'$ is minimized when we set the matrix $C$ equal to 0 (that is, when $C$ contains $k \times n$ 0's), the best estimator in the class is $\hat{\beta}$. Any other estimator in this class (in which the $C$ matrix does not contain 0's in every row and column) has a strictly larger covariance matrix. We conclude that the OLS estimator $\hat{\beta}$ is BLUE under the two conditions set forth (mean independence and homoskedasticity).
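Finally, a small numerical check (illustrative only) that $MM' - (X'X)^{-1} = CC'$ is positive semi-definite for any such $C$, which is the sense in which OLS has the smallest covariance matrix in the class.

    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 200, 3
    X = rng.normal(size=(n, k))

    XtX_inv = np.linalg.inv(X.T @ X)
    P = X @ XtX_inv @ X.T
    C = rng.normal(size=(k, n)) @ (np.eye(n) - P)  # any C with CX = 0
    M = XtX_inv @ X.T + C

    diff = M @ M.T - XtX_inv                            # equals C C' by the algebra above
    print(np.allclose(diff, C @ C.T))                   # True
    print(np.all(np.linalg.eigvalsh(diff) >= -1e-10))   # True: eigenvalues are (numerically) nonnegative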
