
2003.11.

4 Hayashi Econometrics

Proof of (1.2.19) on p. 21
Two proofs are given here. The second proof, a more straightforward brute-force calculation,
was provided by K. Fukushima of the University of Tokyo.

Proof 1.
Let $y^{(i)}$ be the $n \times 1$ vector obtained by replacing the $i$-th element of $y$ with zero. Define $X^{(i)}$
similarly. Let $v^{(i)}$ be the $n$-dimensional vector whose $i$-th element is 1 and whose other elements
are all zero. Thus
   0   
y1 x1 0
 ..   ..   .. 
 .   .   . 
   0   
 yi−1  x  0
i−1 
(i) (i) 0  (i)
    
y ≡  0 ,
  X ≡  0 ,
 v ≡
 1 .
 (1)
(n×1)  yi+1  (n×K)  x0  (n×1) 0
   i+1   
 .   .   . 
 ..   ..   .. 
yn x0n 0

Thus, for example, $y - y^{(i)} = y_i v^{(i)}$, $X - X^{(i)} = v^{(i)} x_i'$, and $X' v^{(i)} = x_i$.


Clearly, $b^{(i)}$, the OLS estimator obtained by dropping the $i$-th observation, equals the OLS
estimator from regressing $y^{(i)}$ on $X^{(i)}$. Define

\[
\underset{(n\times 1)}{e^{(i)}} \equiv y^{(i)} - X^{(i)} b^{(i)}. \tag{2}
\]

By construction, the $i$-th element of $e^{(i)}$ is zero. It should be easy to show:


\[
v^{(i)\prime} e^{(i)} = 0, \tag{a}
\]
\[
X' e^{(i)} = X^{(i)\prime} e^{(i)} = 0. \tag{b}
\]

From (2) and the definition of $e$ (that $e \equiv y - Xb$), derive
\begin{align*}
e - e^{(i)} &= (y - Xb) - (y^{(i)} - X^{(i)} b^{(i)}) \\
&= (y - y^{(i)}) - (Xb - X^{(i)} b^{(i)}) \\
&= (y - y^{(i)}) - (X - X^{(i)}) b^{(i)} - X(b - b^{(i)}) \\
&= y_i v^{(i)} - v^{(i)} x_i' b^{(i)} - X(b - b^{(i)})
\quad (\text{since } y - y^{(i)} = y_i v^{(i)},\; X - X^{(i)} = v^{(i)} x_i').
\end{align*}

Thus we have shown
\[
e - e^{(i)} = u_i v^{(i)} - X(b - b^{(i)}), \tag{3}
\]
where
\[
u_i \equiv y_i - x_i' b^{(i)}. \tag{4}
\]

Multiply both sides of (3) from the left by $M$ ($\equiv I - X(X'X)^{-1}X'$). The LHS of the resulting
equation is $Me - Me^{(i)}$. The RHS is $u_i M v^{(i)}$, since $MX = 0$. We know from Section 1.2 that
$Me = e$. Also,
\[
M e^{(i)} = e^{(i)} - X(X'X)^{-1}X' e^{(i)} = e^{(i)} \quad (\text{by (b) above}). \tag{5}
\]
Thus $Me - Me^{(i)} = e - e^{(i)}$. Therefore, pre-multiplication of (3) by $M$ yields
\[
e - e^{(i)} = u_i M v^{(i)}. \tag{6}
\]


Multiplying both sides of this last equation from the left by $v^{(i)\prime}$, we obtain
\[
v^{(i)\prime} e - v^{(i)\prime} e^{(i)} = u_i\, v^{(i)\prime} M v^{(i)}. \tag{7}
\]

The first term on the LHS is $e_i$, the $i$-th element of $e$. The second term on the LHS is zero by
(a) above. Therefore,
\[
e_i = u_i\, v^{(i)\prime} M v^{(i)}. \tag{8}
\]
The $v^{(i)\prime} M v^{(i)}$ on the RHS of this equation can be written as follows:
\begin{align}
v^{(i)\prime} M v^{(i)} &= v^{(i)\prime} v^{(i)} - v^{(i)\prime} X (X'X)^{-1} X' v^{(i)}
\quad (\text{since } M \equiv I - X(X'X)^{-1}X') \tag{9}\\
&= 1 - x_i'(X'X)^{-1} x_i \quad (\text{since } X' v^{(i)} = x_i), \tag{10}
\end{align}

which by definition is $1 - p_i$ (see (1.2.20)). Therefore, we have shown:


\[
u_i \;(\equiv y_i - x_i' b^{(i)}) = \frac{e_i}{1 - p_i}. \tag{11}
\]
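As a sanity check, the identity (11) can be verified numerically. The sketch below uses made-up random data (the dimensions and the dropped observation index are arbitrary, not from the text) and compares the leave-one-out prediction error $u_i$ with the rescaled full-sample residual $e_i/(1-p_i)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, i = 20, 3, 5                     # sample size, regressors, dropped obs. (arbitrary)
X = rng.normal(size=(n, K))
y = rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)  # full-sample OLS coefficients
e = y - X @ b                          # full-sample residuals
p_i = X[i] @ np.linalg.solve(X.T @ X, X[i])   # p_i = x_i'(X'X)^{-1} x_i

# OLS after dropping observation i, i.e. b^{(i)}
Xi, yi = np.delete(X, i, axis=0), np.delete(y, i)
b_i = np.linalg.solve(Xi.T @ Xi, Xi.T @ yi)

u_i = y[i] - X[i] @ b_i                # leave-one-out prediction error, as in (4)
assert np.isclose(u_i, e[i] / (1 - p_i))      # identity (11)
```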

Next, multiply both sides of (3) from the left by $(X'X)^{-1}X'$. Since $X'e = 0$ by construction and since
$X' e^{(i)} = 0$ by (b) above, the LHS of the resulting equation is $0$. Since $X' v^{(i)} = x_i$, the RHS
of the equation is $u_i (X'X)^{-1} x_i - (b - b^{(i)})$. Therefore,
\[
0 = u_i (X'X)^{-1} x_i - (b - b^{(i)}). \tag{12}
\]

Combining (11) and (12), we arrive at the desired conclusion (1.2.19).
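The conclusion can also be checked numerically. Combining (11) and (12) gives $b - b^{(i)} = \frac{1}{1-p_i}(X'X)^{-1} x_i e_i$, and the sketch below (again with made-up random data) confirms this against a direct leave-one-out regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, i = 30, 4, 7                     # arbitrary dimensions and dropped index
X = rng.normal(size=(n, K))
y = rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                  # full-sample OLS
e_i = y[i] - X[i] @ b                  # i-th full-sample residual
p_i = X[i] @ XtX_inv @ X[i]            # p_i = x_i'(X'X)^{-1} x_i

# Direct leave-one-out estimator b^{(i)}
b_i = np.linalg.lstsq(np.delete(X, i, axis=0), np.delete(y, i), rcond=None)[0]

# The formula just derived: b - b^{(i)} = (X'X)^{-1} x_i e_i / (1 - p_i)
assert np.allclose(b - b_i, XtX_inv @ X[i] * e_i / (1 - p_i))
```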

Proof 2.
The second proof uses the following result, well known in numerical analysis:

Sherman-Morrison Formula: Let $A$ be an $n \times n$ invertible matrix and let $c$, $d$ be
$n$-vectors such that $1 + d' A^{-1} c \neq 0$. Then $A + cd'$ is invertible, with the inverse given
by
\[
(A + cd')^{-1} = A^{-1} - \frac{A^{-1} c d' A^{-1}}{1 + d' A^{-1} c}.
\]

(This can be verified easily by noting that $A^{-1} c d' A^{-1} c d' = (d' A^{-1} c)\, A^{-1} c d'$.)
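A quick numerical check of the formula, sketched with arbitrary random inputs (any well-conditioned invertible $A$ and generic $c$, $d$ will do):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # diagonally dominated, hence invertible
c, d = rng.normal(size=n), rng.normal(size=n)

A_inv = np.linalg.inv(A)
denom = 1 + d @ A_inv @ c
assert abs(denom) > 1e-12                     # the condition 1 + d'A^{-1}c != 0

# Sherman-Morrison: inverse of the rank-one update A + cd'
sm_inv = A_inv - np.outer(A_inv @ c, d @ A_inv) / denom
assert np.allclose(sm_inv, np.linalg.inv(A + np.outer(c, d)))
```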
With this lemma, the desired result can be derived directly, as follows.
 −1
X X
b(i) =  xj x0j  xj · yj (by definition of b(i) )
j6=i j6=i
 −1  
n
X n
X
= xj x0j − xi x0i   xj · y j − xj · y j 
j=1 j=1

−1
= (X0 X − xi x0i ) (X0 y − xi · yi )

(X0 X)−1 xi x0i (X0 X)−1


 
0 −1
= (X X) + (X0 y − xi · yi )
1 − x0i (X0 X)−1 xi
(by setting A = X0 X, c = −xi , d = xi in the Sherman-Morrison Formula)

(X0 X)−1 xi x0i (X0 X)−1


 
0 −1
= (X X) + (X0 y − xi · yi ) (since pi ≡ x0i (X0 X)−1 xi )
1 − pi

(X0 X)−1 xi · (yi − x0i b)


=b − (this you can derive through a series of multiplications)
1 − pi

1
=b − (X0 X)−1 xi · ei (since ei ≡ yi − x0i b).
1 − pi
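The "series of multiplications" left implicit in the second-to-last step can itself be checked numerically. The sketch below (with made-up random data) verifies that applying the Sherman-Morrison expansion to $X'y - x_i y_i$ indeed collapses to $b - (X'X)^{-1} x_i (y_i - x_i' b)/(1 - p_i)$, and that both equal the leave-one-out estimator $b^{(i)}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, i = 25, 3, 4                     # arbitrary dimensions and dropped index
X = rng.normal(size=(n, K))
y = rng.normal(size=n)

A_inv = np.linalg.inv(X.T @ X)         # (X'X)^{-1}
b = A_inv @ X.T @ y                    # full-sample OLS
x_i, y_i = X[i], y[i]
p_i = x_i @ A_inv @ x_i

# LHS: Sherman-Morrison expansion applied to (X'y - x_i y_i)
lhs = (A_inv + np.outer(A_inv @ x_i, x_i @ A_inv) / (1 - p_i)) @ (X.T @ y - x_i * y_i)
# RHS: the collapsed expression b - (X'X)^{-1} x_i (y_i - x_i'b) / (1 - p_i)
rhs = b - A_inv @ x_i * (y_i - x_i @ b) / (1 - p_i)
assert np.allclose(lhs, rhs)

# Both equal the leave-one-out estimator b^{(i)} computed directly
b_i = np.linalg.lstsq(np.delete(X, i, axis=0), np.delete(y, i), rcond=None)[0]
assert np.allclose(lhs, b_i)
```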
