Assumptions

Let $x^* = z\Pi$, where $\Pi = [E(z'z)]^{-1}E(z'x)$ is the linear projection coefficient. But

$$E[x^{*\prime}u] = \Pi'E(z'u) = 0,$$

so $E(x^{*\prime}y) = E(x^{*\prime}x)\beta$. Hence

$$\beta = [E(x^{*\prime}x)]^{-1}E(x^{*\prime}y) = \left[E(x'z)[E(z'z)]^{-1}E(z'x)\right]^{-1}E(x'z)[E(z'z)]^{-1}E(z'y).$$
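Replacing the population moments $E(\cdot)$ in this expression with sample averages gives the 2SLS estimator. A quick numerical check (the data below are simulated and purely illustrative) confirms that this closed-form "sandwich" expression coincides with OLS of $y$ on the first-stage fitted values:

```python
# Numerical check (simulated, illustrative data): the sample analogue of the
# closed-form expression equals OLS of y on the first-stage fitted values.
import numpy as np

rng = np.random.default_rng(3)
N = 1000
Z = rng.normal(size=(N, 3))                                 # 3 instruments
X = Z @ rng.normal(size=(3, 2)) + rng.normal(size=(N, 2))   # 2 regressors
y = rng.normal(size=N)

ZZ_inv = np.linalg.inv(Z.T @ Z)
# [X'Z (Z'Z)^{-1} Z'X]^{-1} X'Z (Z'Z)^{-1} Z'y
sandwich = np.linalg.solve(X.T @ Z @ ZZ_inv @ Z.T @ X,
                           X.T @ Z @ ZZ_inv @ Z.T @ y)
X_hat = Z @ ZZ_inv @ Z.T @ X                                # fitted values z_i Pi_hat
stage2 = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)      # Stage-2 OLS
print(np.allclose(sandwich, stage2))                        # True
```

The two estimates agree to floating-point precision, which is the algebraic identity $\hat X'\hat X = X'Z(Z'Z)^{-1}Z'X$ at work.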
Consistency of $\hat\beta_{2SLS}$:

Proof:

$$\hat\beta_{2SLS} = \beta + \left[\Big(\tfrac{1}{N}\textstyle\sum x_i'z_i\Big)\Big(\tfrac{1}{N}\textstyle\sum z_i'z_i\Big)^{-1}\Big(\tfrac{1}{N}\textstyle\sum z_i'x_i\Big)\right]^{-1}\Big(\tfrac{1}{N}\textstyle\sum x_i'z_i\Big)\Big(\tfrac{1}{N}\textstyle\sum z_i'z_i\Big)^{-1}\Big(\tfrac{1}{N}\textstyle\sum z_i'u_i\Big)$$

Thus,

$$\hat\beta_{2SLS} - \beta \xrightarrow{p} \left[E(x'z)[E(z'z)]^{-1}E(z'x)\right]^{-1}E(x'z)[E(z'z)]^{-1}E(z'u) = 0,$$

by the WLLN and Slutsky's theorem, since $E(z'u) = 0$.

Hence $\hat\beta_{2SLS} \xrightarrow{p} \beta$.
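A small simulation can illustrate the consistency result. The data-generating process below is invented for illustration: one endogenous regressor (it shares the shock $e$ with the error $u$) and two valid instruments.

```python
# Illustrative simulation of 2SLS consistency; the DGP below is invented:
# x is endogenous (shares the shock e with u), z is a valid instrument pair.
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0

def beta_2sls(N):
    z = rng.normal(size=(N, 2))                  # instruments, E(z'u) = 0
    e = rng.normal(size=N)
    u = 0.8 * e + rng.normal(size=N)             # error correlated with x
    x = z @ np.array([1.0, 0.5]) + e             # x = z Pi + e, endogenous
    y = x * beta + u
    X, Z = x[:, None], z
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)  # projection of X on Z
    return np.linalg.solve(PzX.T @ X, PzX.T @ y).item()

for N in (100, 10_000, 1_000_000):
    print(N, beta_2sls(N))                       # estimates approach 2.0
```

OLS of $y$ on $x$ would be inconsistent here because $\mathrm{Cov}(x, u) = 0.8 \neq 0$; the 2SLS estimates converge to $\beta = 2$ as $N$ grows.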
Theorem : Under 2SLS1–2SLS3, $\sqrt N(\hat\beta_{2SLS} - \beta)$ is asymptotically normal with mean zero and variance–covariance matrix $\sigma^2\left[E(x'z)[E(z'z)]^{-1}E(z'x)\right]^{-1}$.
Proof : Let $x^* = z\Pi$ and $\hat x_i = z_i\hat\Pi$, where $\hat\Pi$ is the first-stage OLS estimate. Then

$$\hat\beta_{2SLS} \text{ (Stage 2 OLS)} = \Big(\textstyle\sum \hat x_i'\hat x_i\Big)^{-1}\textstyle\sum \hat x_i' y_i$$

$$\Rightarrow\ \hat\beta_{2SLS} - \beta = \Big(\tfrac{1}{N}\textstyle\sum \hat x_i'\hat x_i\Big)^{-1}\tfrac{1}{N}\textstyle\sum \hat x_i' u_i,$$

using $\sum \hat x_i' x_i = \sum \hat x_i'\hat x_i$ (the first-stage residuals are orthogonal to the fitted values).

Now,

$$\text{plim}\ \Big(\tfrac{1}{N}\textstyle\sum \hat x_i'\hat x_i\Big)^{-1} = \left[\text{plim}\ \tfrac{1}{N}\textstyle\sum \hat\Pi' z_i'z_i\hat\Pi\right]^{-1} = \left[(\text{plim}\ \hat\Pi)'\Big(\text{plim}\ \tfrac{1}{N}\textstyle\sum z_i'z_i\Big)(\text{plim}\ \hat\Pi)\right]^{-1}$$
$$= [\Pi'E(z'z)\Pi]^{-1}, \text{ by the WLLN}$$
$$= \{E[(z\Pi)'(z\Pi)]\}^{-1} = [E(x^{*\prime}x^*)]^{-1} = A^{-1}, \text{ say.}$$

Thus $\Big(\tfrac{1}{N}\sum \hat x_i'\hat x_i\Big)^{-1} - A^{-1} \xrightarrow{p} 0$, and hence $\Big(\tfrac{1}{N}\sum \hat x_i'\hat x_i\Big)^{-1} = A^{-1} + o_p(1)$.
So

$$\sqrt N(\hat\beta_{2SLS} - \beta) = \left[A^{-1} + o_p(1)\right]\Big(N^{-1/2}\textstyle\sum \hat x_i'u_i\Big).$$

Now consider $N^{-1/2}\sum \hat x_i'u_i$. The CLT implies

$$N^{-1/2}\textstyle\sum \hat x_i'u_i \xrightarrow{d} N\big(0,\ E(u^2x^{*\prime}x^*)\big), \text{ ie, } N(0, B), \text{ say.}$$

So $N^{-1/2}\sum \hat x_i'u_i = O_p(1)$, by Lemma 5.

Hence,

$$\sqrt N(\hat\beta_{2SLS} - \beta) = \left[A^{-1} + o_p(1)\right]O_p(1) = A^{-1}\Big(N^{-1/2}\textstyle\sum \hat x_i'u_i\Big) + o_p(1)O_p(1)$$
$$= A^{-1}\Big(N^{-1/2}\textstyle\sum \hat x_i'u_i\Big) + o_p(1), \text{ by Lemma 2.}$$

Thus $\sqrt N(\hat\beta_{2SLS} - \beta) - A^{-1}N^{-1/2}\sum \hat x_i'u_i \xrightarrow{p} 0$.
Hence, by the Asymptotic Equivalence Lemma, the asymptotic distribution of $\sqrt N(\hat\beta_{2SLS} - \beta)$ is the same as that of $A^{-1}N^{-1/2}\sum \hat x_i'u_i$.

But $N^{-1/2}\sum \hat x_i'u_i \xrightarrow{d} N(0, B)$ implies $A^{-1}N^{-1/2}\sum \hat x_i'u_i \xrightarrow{d} N(0, A^{-1}BA^{-1})$.

Hence $\sqrt N(\hat\beta_{2SLS} - \beta) \overset{a}{\sim} N(0, A^{-1}BA^{-1})$, and

$$\text{Avar}[\sqrt N(\hat\beta_{2SLS} - \beta)] = A^{-1}BA^{-1}.$$
Under 2SLS3 (homoskedasticity, $E(u^2z'z) = \sigma^2E(z'z)$), $B = E(u^2x^{*\prime}x^*) = \sigma^2E(x^{*\prime}x^*)$, so

$$A^{-1}BA^{-1} = [E(x^{*\prime}x^*)]^{-1}\sigma^2E(x^{*\prime}x^*)[E(x^{*\prime}x^*)]^{-1}$$
$$= \sigma^2[E(x^{*\prime}x^*)]^{-1}$$
$$= \sigma^2[\Pi'E(z'z)\Pi]^{-1}$$
$$= \sigma^2\big\{E(x'z)[E(z'z)]^{-1}[E(z'z)][E(z'z)]^{-1}E(z'x)\big\}^{-1}$$
$$= \sigma^2\big\{E(x'z)[E(z'z)]^{-1}E(z'x)\big\}^{-1}.$$
• To estimate $\text{Avar}[\sqrt N(\hat\beta_{2SLS} - \beta)]$, the matrix part may be estimated using sample averages.
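As a sketch of that estimation step (the function name and interface are mine, not from the notes), the homoskedastic Avar can be estimated by $\hat\sigma^2\hat A^{-1}$ with $\hat A = N^{-1}\sum \hat x_i'\hat x_i$ and $\hat\sigma^2 = N^{-1}\sum \hat u_i^2$:

```python
# Sketch of estimating Avar = sigma^2 A^{-1} by sample averages:
# A_hat = N^{-1} sum x_hat_i' x_hat_i,  sigma2_hat = N^{-1} sum u_hat_i^2.
import numpy as np

def avar_2sls(y, X, Z):
    N = len(y)
    Pi_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)    # first-stage coefficients
    X_hat = Z @ Pi_hat                            # x_hat_i = z_i Pi_hat
    beta_hat = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
    u_hat = y - X @ beta_hat                      # residuals use X, not X_hat
    sigma2_hat = (u_hat @ u_hat) / N
    A_hat = (X_hat.T @ X_hat) / N
    return beta_hat, sigma2_hat * np.linalg.inv(A_hat)
```

Standard errors are then $\sqrt{\widehat{\text{Avar}}_{jj}/N}$. A common pitfall: the residuals must be $y_i - x_i\hat\beta$, computed with the original regressors, not with the fitted $\hat x_i$.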
Theorem : Under 2SLS1–2SLS3, $\hat\beta_{2SLS}$ is asymptotically efficient in the class of IV estimators whose instruments are linear in $z$.

Proof : Let $\tilde\beta$ be any other IV estimator (other than 2SLS) that uses an instrument $\tilde x$ which is linear in $z$.

Under 2SLS1–2SLS3, $\text{Avar}[\sqrt N(\hat\beta_{2SLS} - \beta)] = \sigma^2[E(x^{*\prime}x^*)]^{-1}$, where $x^* = z\Pi$.
Again, $\tilde\beta = \big(N^{-1}\sum \tilde x_i'x_i\big)^{-1}\big(N^{-1}\sum \tilde x_i'y_i\big)$ (where $\tilde x$ is an instrument for $x$)

$$\Rightarrow\ \tilde\beta - \beta = \Big(N^{-1}\textstyle\sum \tilde x_i'x_i\Big)^{-1}\Big(N^{-1}\textstyle\sum \tilde x_i'u_i\Big).$$

Now, $\text{plim}\ \big(N^{-1}\sum \tilde x_i'x_i\big) = E(\tilde x'x) = C$, say.

$$\sqrt N(\tilde\beta - \beta) = C^{-1}\Big(N^{-1/2}\textstyle\sum \tilde x_i'u_i\Big) + o_p(1)$$

Also, $N^{-1/2}\sum \tilde x_i'u_i \xrightarrow{d} N(0, D)$, where $D = E(u^2\tilde x'\tilde x) = \sigma^2E(\tilde x'\tilde x)$

$$\Rightarrow\ C^{-1}N^{-1/2}\textstyle\sum \tilde x_i'u_i \xrightarrow{d} N(0, C^{-1}DC^{-1}).$$

Hence $\text{Avar}[\sqrt N(\tilde\beta - \beta)] = \sigma^2[E(\tilde x'x)]^{-1}E(\tilde x'\tilde x)[E(x'\tilde x)]^{-1}$.
To show: $E(x^{*\prime}x^*) - E(x'\tilde x)[E(\tilde x'\tilde x)]^{-1}E(\tilde x'x)$ is positive semidefinite.

Since $\tilde x$ is linear in $z$ and $x - x^*$ is uncorrelated with $z$,

$$E[\tilde x'x] = E[\tilde x'x^*].$$

So,

$$E(x^{*\prime}x^*) - E(x'\tilde x)[E(\tilde x'\tilde x)]^{-1}E(\tilde x'x) = E(x^{*\prime}x^*) - E(x^{*\prime}\tilde x)[E(\tilde x'\tilde x)]^{-1}E(\tilde x'x^*) = E(r'r),$$

where $r = x^* - \tilde x[E(\tilde x'\tilde x)]^{-1}E(\tilde x'x^*)$ is the population residual from projecting $x^*$ on $\tilde x$, and $E(r'r)$ is positive semidefinite.

Thus, $\text{Avar}[\sqrt N(\tilde\beta - \beta)] - \text{Avar}[\sqrt N(\hat\beta_{2SLS} - \beta)]$ is p.s.d.
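A Monte Carlo sketch (the DGP, sample size, and replication count are all invented for illustration) compares 2SLS with a simple IV estimator that uses only the first instrument, $\tilde x = z_1$, which is linear in $z$, so the theorem applies:

```python
# Monte Carlo sketch (invented DGP): the simple IV estimator with
# x_tilde = z_1 is linear in z, so 2SLS should be (weakly) more efficient.
import numpy as np

rng = np.random.default_rng(2)
beta, N, reps = 2.0, 500, 1000
b_2sls, b_iv = [], []
for _ in range(reps):
    z = rng.normal(size=(N, 2))
    e = rng.normal(size=N)
    u = 0.8 * e + rng.normal(size=N)
    x = z @ np.array([1.0, 0.5]) + e
    y = x * beta + u
    x_hat = z @ np.linalg.solve(z.T @ z, z.T @ x)   # 2SLS instrument
    b_2sls.append((x_hat @ y) / (x_hat @ x))
    b_iv.append((z[:, 0] @ y) / (z[:, 0] @ x))      # IV with z_1 only
print(np.var(b_2sls), np.var(b_iv))
```

Across replications the sampling variance of the $z_1$-only estimator exceeds that of 2SLS, as the p.s.d. result predicts: 2SLS combines both instruments into the single best (linear-in-$z$) instrument $\hat x$.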
Consider the model

$$y = x_1\beta_1 + x_2\beta_2 + u,$$

where $x_1$ $(1\times K_1)$ and $x_2$ $(1\times K_2)$ are partitions of the regressors with $K_1 + K_2 = K$.