
Econometrics I: TA Session 1

Giovanna Úbida

INSPER
PS01-Q1

Show that if \(y\), \(X\) and \(b\) are defined as below, then \(b'X'y = y'Xb\):

\[
y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}_{n \times 1}; \quad
X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,k} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,k} \end{bmatrix}_{n \times k}; \quad
b = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_k \end{bmatrix}_{k \times 1}
\]
PS01-Q1

First note that the dimensions of these matrix products are s.t.

\[
b'X'y: \ (1 \times k)(k \times n)(n \times 1) = (1 \times k)(k \times 1) = (1 \times 1)
\]
\[
y'Xb: \ (1 \times n)(n \times k)(k \times 1) = (1 \times n)(n \times 1) = (1 \times 1)
\]

Thus \(b'X'y\) and \(y'Xb\) are both scalars. Since \((b'X'y)' = y'Xb\) and the transpose of a scalar is the scalar itself, it follows that \(b'X'y = y'Xb\). Moreover,
\[
b'X'y + y'Xb = 2y'Xb
\]
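As a quick numerical sanity check, the scalar identity above can be verified in NumPy (a sketch; the dimensions and random data below are illustrative choices, not part of the problem set):

```python
import numpy as np

# Verify b'X'y = y'Xb on arbitrary conformable matrices.
rng = np.random.default_rng(0)
n, k = 5, 3
y = rng.normal(size=(n, 1))
X = rng.normal(size=(n, k))
b = rng.normal(size=(k, 1))

lhs = (b.T @ X.T @ y).item()   # b'X'y, a 1x1 matrix taken as a scalar
rhs = (y.T @ X @ b).item()     # y'Xb, also a scalar
print(lhs, rhs)                # the two scalars coincide
```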
PS01-Q2

Show that if \(b\) and \(X'X\) (a symmetric matrix) are defined as below, then \(\frac{\partial(b'X'Xb)}{\partial b'} = 2X'Xb\):

\[
b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}_{2 \times 1}; \quad
X'X = \begin{bmatrix} 1 & 3 \\ 3 & 4 \end{bmatrix}_{2 \times 2}
\]

Hint:
\[
\frac{\partial(b'X'Xb)}{\partial b'} =
\begin{bmatrix} \dfrac{\partial(b'X'Xb)}{\partial b_1} \\[2ex] \dfrac{\partial(b'X'Xb)}{\partial b_2} \end{bmatrix}
\]
PS01-Q2

We have that
\[
b'X'Xb = \begin{bmatrix} b_1 & b_2 \end{bmatrix}
\begin{bmatrix} 1 & 3 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
= \begin{bmatrix} b_1 + 3b_2 & 3b_1 + 4b_2 \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
= b_1^2 + 3b_1 b_2 + 3b_1 b_2 + 4b_2^2
= b_1^2 + 4b_2^2 + 6b_1 b_2
\]
PS01-Q2

Thus,
\[
\frac{\partial(b'X'Xb)}{\partial b'} =
\begin{bmatrix} \dfrac{\partial(b_1^2 + 4b_2^2 + 6b_1 b_2)}{\partial b_1} \\[2ex] \dfrac{\partial(b_1^2 + 4b_2^2 + 6b_1 b_2)}{\partial b_2} \end{bmatrix}
= \begin{bmatrix} 2b_1 + 6b_2 \\ 6b_1 + 8b_2 \end{bmatrix}
= \begin{bmatrix} 2 & 6 \\ 6 & 8 \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
= 2\begin{bmatrix} 1 & 3 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
= 2X'Xb
\]
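The analytic gradient \(2X'Xb\) can be checked numerically against a central finite-difference gradient of the quadratic form (a sketch; the evaluation point \(b\) below is an arbitrary choice):

```python
import numpy as np

# Check d(b'Ab)/db = 2Ab for the symmetric A = X'X of the exercise,
# using a central finite-difference approximation of the gradient.
A = np.array([[1.0, 3.0], [3.0, 4.0]])   # X'X from the problem
b = np.array([0.7, -1.2])                # arbitrary test point

f = lambda v: v @ A @ v                  # quadratic form b'X'Xb

eps = 1e-6
num_grad = np.array([
    (f(b + eps * e) - f(b - eps * e)) / (2 * eps)
    for e in np.eye(2)                   # perturb one coordinate at a time
])
analytic = 2 * A @ b
print(num_grad, analytic)                # the two gradients agree
```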
PS01-Q3
Strict exogeneity implications and the use of the LIE

Show that the strict exogeneity assumption \(E[\varepsilon_i \mid X] = 0\) implies that:

a) the unconditional mean of the error term is zero: \(E[\varepsilon_i] = 0\)

b) the regressors are orthogonal to the error term for all observations:
\[
E[x_j \varepsilon_i] = \begin{bmatrix} E[x_{j1}\varepsilon_i] \\ E[x_{j2}\varepsilon_i] \\ \vdots \\ E[x_{jK}\varepsilon_i] \end{bmatrix} = 0_{[K \times 1]}
\]

c) the unconditional covariance \(\mathrm{Cov}(\varepsilon_i, x_{jk}) = 0\)

d) the following expressions hold if we also assume that \((y_i, x_i)\) is i.i.d.:
\[
E[\varepsilon_i \mid X] = E[\varepsilon_i \mid x_i]
\]
\[
E[\varepsilon_i^2 \mid X] = E[\varepsilon_i^2 \mid x_i]
\]
\[
E[\varepsilon_i \varepsilon_j \mid X] = E[\varepsilon_i \mid x_i]\,E[\varepsilon_j \mid x_j] \quad (i \neq j)
\]
PS01-Q3

a) Suppose \(E[\varepsilon_i \mid X] = 0\).

Taking expectations and applying the Law of Iterated Expectations (LIE),
\[
E[\varepsilon_i] = E\big[E[\varepsilon_i \mid X]\big] = E[0] = 0
\]

Thus, we end up with \(E[\varepsilon_i] = 0\).
PS01-Q3

b) Suppose \(E[\varepsilon_i \mid X] = 0\).

Now, consider, for each regressor \(x_{jk}\),
\[
E[x_{jk}\varepsilon_i] \overset{\text{LIE}}{=} E\big[E(x_{jk}\varepsilon_i \mid X)\big]
= E\big[x_{jk}\,E(\varepsilon_i \mid X)\big]
= E[x_{jk} \cdot 0]
= 0,
\]
where the second equality holds because \(x_{jk}\) is a function of \(X\) and can be pulled out of the conditional expectation.

Hence, since \(E[x_{jk}\varepsilon_i] = 0\) for all \(k \in \{1, 2, \dots, K\}\), it follows that \(E[x_j \varepsilon_i] = 0\).
PS01-Q3

c) Suppose \(E[\varepsilon_i \mid X] = 0\).

Since \(\mathrm{Cov}(x_{jk}, \varepsilon_i) = E(x_{jk}\varepsilon_i) - E(x_{jk})E(\varepsilon_i)\), and by (a) \(E(\varepsilon_i) = 0\) while by (b) \(E(x_{jk}\varepsilon_i) = 0\),

we conclude that \(\mathrm{Cov}(x_{jk}, \varepsilon_i) = 0\).
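Parts (a)–(c) can be illustrated by simulation: when the error is drawn independently of the regressor (so that \(E[\varepsilon_i \mid X] = 0\) holds), the sample analogues of \(E[\varepsilon]\), \(E[x\varepsilon]\) and \(\mathrm{Cov}(x, \varepsilon)\) are all close to zero (a sketch; the data-generating process below is an arbitrary illustrative choice):

```python
import numpy as np

# Monte Carlo illustration of (a)-(c) under strict exogeneity.
rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(loc=2.0, size=n)      # a regressor with nonzero mean
eps = rng.normal(size=n)             # error drawn independently of x

mean_eps = eps.mean()                # sample analogue of E[eps]
mean_xeps = (x * eps).mean()         # sample analogue of E[x*eps]
cov_xeps = np.cov(x, eps)[0, 1]      # sample Cov(x, eps)
print(mean_eps, mean_xeps, cov_xeps) # all near 0
```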


PS01-Q3

d) Suppose that \((y_i, x_i)\) is i.i.d.

Then \((x_i, y_i)\) is independent of \((x_j, y_j)\) for all \(j \neq i\), and since \(\varepsilon_i\) is a function of \((x_i, y_i)\), \((\varepsilon_i, x_i)\) is independent of \((\varepsilon_j, x_j)\) for all \(j \neq i\).

Hence, \(E[\varepsilon_i \mid X] = E[\varepsilon_i \mid x_i]\) and \(E[\varepsilon_i^2 \mid X] = E[\varepsilon_i^2 \mid x_i]\).

Finally, for \(i \neq j\),
\[
E(\varepsilon_i \varepsilon_j \mid X) \overset{\text{LIE}}{=} E\big[E(\varepsilon_i \varepsilon_j \mid X, \varepsilon_j) \mid X\big]
= E\big[\varepsilon_j\,E(\varepsilon_i \mid X, \varepsilon_j) \mid X\big]
= E\big[\varepsilon_j\,E(\varepsilon_i \mid x_i) \mid X\big]
= E(\varepsilon_i \mid x_i)\,E(\varepsilon_j \mid x_j),
\]
where the third equality uses independence across observations and the last pulls out \(E(\varepsilon_i \mid x_i)\), which is a function of \(X\).
PS01-Q8

Normal equations
a) Verify that \(X'X/n = \frac{1}{n}\sum_i x_i x_i'\) and \(X'y/n = \frac{1}{n}\sum_i x_i y_i\)
b) Show that the \(K\) normal equations given by \(X'Xb = X'y\) imply
\[
\frac{1}{n}\sum_{i=1}^{n} x_i e_i = 0,
\]
that is, the sample analogue of the orthogonality condition \(E(x_i \varepsilon_i) = 0\).
PS01-Q8

a) From the TA 1 notes we have that \(\frac{X'X}{n}\) can be written as
\[
\frac{X'X}{n} = \frac{1}{n}
\begin{bmatrix} x_{1,1} & x_{2,1} & \cdots & x_{n,1} \\ x_{1,2} & x_{2,2} & \cdots & x_{n,2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1,k} & x_{2,k} & \cdots & x_{n,k} \end{bmatrix}
\begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,k} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,k} \end{bmatrix}
\]
\[
= \frac{1}{n}\begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}
\begin{bmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{bmatrix}
= \frac{x_1 x_1' + x_2 x_2' + \dots + x_n x_n'}{n}
= \frac{\sum_{i=1}^{n} x_i x_i'}{n}
\]
PS01-Q8

Moreover,
\[
\frac{\sum_{i=1}^{n} x_i y_i}{n} = \frac{x_1 y_1 + x_2 y_2 + \dots + x_n y_n}{n}
= \frac{1}{n}\begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
= \frac{X'y}{n}
\]
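Both identities in part (a) can be checked numerically (a sketch; the dimensions and random data below are illustrative):

```python
import numpy as np

# Verify X'X/n = (1/n) sum_i x_i x_i' and X'y/n = (1/n) sum_i x_i y_i.
rng = np.random.default_rng(2)
n, k = 6, 3
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

XtX_n = X.T @ X / n
sum_outer = sum(np.outer(xi, xi) for xi in X) / n   # (1/n) sum x_i x_i'

Xty_n = X.T @ y / n
sum_xy = sum(xi * yi for xi, yi in zip(X, y)) / n   # (1/n) sum x_i y_i
print(XtX_n, Xty_n)
```

Here each row of `X` plays the role of \(x_i'\), so the outer product `np.outer(xi, xi)` is the \(k \times k\) term \(x_i x_i'\).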
PS01-Q8
b) Suppose the \(K\) normal equations

\[
X'Xb = X'y \tag{1}
\]

and note that, with \(e = y - Xb\) the residual vector,
\[
\frac{1}{n}\sum_{i=1}^{n} x_i e_i = \frac{1}{n}(x_1 e_1 + x_2 e_2 + \dots + x_n e_n)
= \frac{1}{n}\begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}
\begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}
= \frac{X'e}{n}
\]
\[
= \frac{X'(y - Xb)}{n} = \frac{X'y - X'Xb}{n}
\overset{\text{by }(1)}{=} \frac{X'y - X'y}{n} = 0
\]

Hence, if \(X'Xb = X'y\), we have that \(\frac{1}{n}\sum_{i=1}^{n} x_i e_i = 0\).
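The derivation above can be reproduced numerically: solving the normal equations and computing the residuals yields a sample orthogonality vector that is zero up to floating-point error (a sketch; the data-generating process and coefficients below are arbitrary illustrative choices):

```python
import numpy as np

# Solve the normal equations X'Xb = X'y and check (1/n) sum_i x_i e_i = 0.
rng = np.random.default_rng(3)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
e = y - X @ b                           # residuals
xe_mean = X.T @ e / n                   # (1/n) sum x_i e_i
print(xe_mean)                          # numerically a zero vector
```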
