
ECONOMETRICS

Bruce E. Hansen

© 2000, 2001, 2002, 2003, 2004¹

University of Wisconsin
www.ssc.wisc.edu/~bhansen

Revised: January 2004


Comments Welcome

¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents

1 Ordinary Least Squares
1.1 Framework
1.2 Estimation
1.3 Efficiency
1.4 Model in Matrix Notation
1.5 Residual Regression
1.6 Consistency
1.7 Asymptotic Normality
1.8 Covariance Matrix Estimation
1.9 Functions of Parameters
1.10 t tests
1.11 Confidence Intervals
1.12 Wald Tests
1.13 F Tests

2 Regression Models
2.1 Regression
2.2 Bias and Variance of OLS estimator
2.3 Multicollinearity
2.4 Forecast Intervals
2.5 NonLinearity in Regressors
2.6 NonLinear Least Squares
2.7 Normal Regression Model
2.8 Least Absolute Deviations

3 Model Selection
3.1 Omitted Variables
3.2 Irrelevant Variables
3.3 Model Selection
3.4 Testing for Omitted NonLinearity
3.5 log(Y) versus Y as Dependent Variable

4 Generalized Least Squares
4.1 GLS and the Gauss-Markov Theorem
4.2 Skedastic Regression
4.3 Estimation of Skedastic Regression
4.4 Testing for Heteroskedasticity
4.5 Feasible GLS Estimation
4.6 Covariance Matrix Estimation
4.7 Commentary: FGLS versus OLS

5 Generalized Method of Moments
5.1 Overidentified Linear Model
5.2 GMM Estimator
5.3 Distribution of GMM Estimator
5.4 Estimation of the Efficient Weight Matrix
5.5 GMM: The General Case
5.6 Over-Identification Test
5.7 Hypothesis Testing: The Distance Statistic
5.8 Conditional Moment Restrictions

6 Empirical Likelihood
6.1 Non-Parametric Likelihood
6.2 Asymptotic Distribution of EL Estimator
6.3 Overidentifying Restrictions
6.4 Testing
6.5 Numerical Computation
6.5.1 Derivatives
6.5.2 Inner Loop
6.5.3 Outer Loop

7 Endogeneity
7.1 Instrumental Variables
7.2 Reduced Form
7.3 Identification
7.4 Estimation
7.5 Special Cases: IV and 2SLS
7.6 Bekker Asymptotics
7.7 Identification Failure

8 The Bootstrap
8.1 Monte Carlo Simulation
8.2 An Example
8.3 The Empirical Distribution Function
8.4 Definition of the Bootstrap
8.5 Bootstrap Estimation of Bias and Variance
8.6 Percentile Intervals
8.7 Percentile-t Equal-Tailed Interval
8.8 Symmetric Percentile-t Intervals
8.9 Asymptotic Expansions
8.10 One-Sided Tests
8.11 Symmetric Two-Sided Tests
8.12 Percentile Confidence Intervals
8.13 Bootstrap Methods for Regression Models
8.14 Bootstrap GMM Inference

9 Univariate Time Series
9.1 Stationarity and Ergodicity
9.2 Autoregressions
9.3 Stationarity of AR(1) Process
9.4 Lag Operator
9.5 Stationarity of AR(k)
9.6 Estimation
9.7 Asymptotic Distribution
9.8 Bootstrap for Autoregressions
9.9 Trend Stationarity
9.10 Testing for Omitted Serial Correlation
9.11 Model Selection
9.12 Autoregressive Unit Roots

10 Multivariate Time Series
10.1 Vector Autoregressions (VARs)
10.2 Estimation
10.3 Restricted VARs
10.4 Single Equation from a VAR
10.5 Testing for Omitted Serial Correlation
10.6 Selection of Lag Length in a VAR
10.7 Granger Causality
10.8 Cointegration
10.9 Cointegrated VARs

11 Limited Dependent Variables
11.1 Binary Choice
11.2 Count Data
11.3 Censored Data
11.4 Sample Selection

12 Panel Data
12.1 Individual-Effects Model
12.2 Fixed Effects
12.3 Dynamic Panel Regression

13 Nonparametrics
13.1 Kernel Density Estimation
13.2 Asymptotic MSE for Kernel Estimates

14 Appendix A: Mathematical Formula

15 Appendix B: Matrix Algebra
15.1 Terminology
15.2 Matrix Multiplication
15.3 Trace, Inverse, Determinant
15.4 Eigenvalues
15.5 Idempotent and Projection Matrices
15.6 Kronecker Products and the Vec Operator
15.7 Matrix Calculus

16 Appendix C: Probability
16.1 Foundations
16.2 Random Variables
16.3 Expectation
16.4 Common Distributions
16.5 Multivariate Random Variables
16.6 Conditional Distributions and Expectation
16.7 Transformations
16.8 Normal and Related Distributions
16.9 Maximum Likelihood

17 Appendix D: Asymptotic Theory
17.1 Inequalities
17.2 Convergence in Probability
17.3 Almost Sure Convergence
17.4 Convergence in Distribution
17.5 Asymptotic Transformations

18 Appendix E: Numerical Optimization
18.1 Grid Search
18.2 Gradient Methods
18.3 Derivative-Free Methods
Chapter 1

Ordinary Least Squares

1.1 Framework
An econometrician has observational data
$$\{(y_1, x_1), (y_2, x_2), \ldots, (y_i, x_i), \ldots, (y_n, x_n)\} = \{(y_i, x_i) : i = 1, \ldots, n\}$$
where each pair $(y_i, x_i) \in \mathbb{R} \times \mathbb{R}^k$ is an observation on an individual (e.g., household or firm). We call these observations the sample.
Notice that the observations are paired $(y_i, x_i)$. We call $y_i$ the dependent variable and $x_i$ the regressor vector. For convenience, the vector $x_i$ is typically presumed to include a constant. That is, one element (typically written as the first) equals 1. We can write the $k \times 1$ regressor $x_i$ as
$$x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \\ \vdots \\ x_{ki} \end{pmatrix} = \begin{pmatrix} 1 \\ x_{2i} \\ \vdots \\ x_{ki} \end{pmatrix}.$$
If the data is cross-sectional (each observation is a different individual) it is often reasonable to assume they are mutually independent. If the data is randomly gathered, it is reasonable to model each observation as a random draw from the same probability distribution. Thus, the data are independent and identically distributed, or iid. We call this a random sample. Sometimes the label iid is misconstrued. It means that the pair $(y_i, x_i)$ is independent of the pair $(y_j, x_j)$ for $i \neq j$. It is not a statement about the relationship between $y_i$ and $x_i$.
The random variables $(y_i, x_i)$ have a distribution $F$ which we call the population. This "population" is infinitely large. Sometimes this is a source of confusion, but it is merely an abstraction. This distribution is unknown, and the goal of statistical inference is to learn about features of $F$ from the sample.

It is unimportant whether the observations $y_i$ and $x_i$ come from continuous or discrete distributions. For example, many regressors in econometric practice are binary, taking on only the values 0 and 1, and are typically called dummy variables.
A linear regression model for $y_i$ given $x_i$ takes the form
$$y_i = \beta_1 + x_{2i}\beta_2 + \cdots + x_{ki}\beta_k + e_i, \qquad i = 1, \ldots, n \qquad (1.1)$$
where $\beta_1$ through $\beta_k$ are parameters and $e_i$ is the error. The parameter vector is written as
$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_k \end{pmatrix}.$$
We can then write (1.1) more compactly as
$$y_i = x_i'\beta + e_i, \qquad i = 1, \ldots, n. \qquad (1.2)$$
The model is incomplete without a description of the error $e_i$. It should have mean zero and a finite variance, and be uncorrelated with the regressors. We state the needed conditions here.

Assumption 1.1.1

1. $E(e_i) = 0$

2. $E(x_i e_i) = 0$

3. $\sigma^2 = E e_i^2 < \infty$

4. $E x_i'x_i < \infty$

5. $Q = E x_i x_i' > 0$

Assumptions 1.1.1.3 and 1.1.1.4 are made to guarantee that all variables in the model have a finite variance. This is necessary to ensure that $E(x_i e_i)$ is well defined. Indeed, by the Cauchy-Schwarz inequality,
$$E|x_i e_i| \leq \left(E|x_i|^2\right)^{1/2}\left(E|e_i|^2\right)^{1/2} < \infty$$
under these assumptions.
We can use Assumption 1.1.1 to derive a moment representation for the parameter vector $\beta$. Take equation (1.2) and pre-multiply by $x_i$:
$$x_i y_i = x_i x_i'\beta + x_i e_i.$$
Now take expectations:
$$E(x_i y_i) = E\left(x_i x_i'\right)\beta + E(x_i e_i) = E\left(x_i x_i'\right)\beta$$
where the second equality is Assumption 1.1.1.2. Since $E(x_i x_i')$ is invertible by Assumption 1.1.1.5, we can solve for $\beta$:
$$\beta = \left(E\left(x_i x_i'\right)\right)^{-1} E(x_i y_i). \qquad (1.3)$$
Thus the parameter $\beta$ is an explicit function of population second moments of $(y_i, x_i)$.
In fact, this derivation shows that if $\beta$ is defined by (1.3), then Assumption 1.1.1.2 must hold true by construction. In this sense, Assumption 1.1.1.2 is very weak. However, it is important not to misinterpret this statement. In many economic models, the parameter $\beta$ may be defined within the model, rather than by construction as in (1.3). In this case (1.3) may not hold. These structural models require alternative estimation methods, and are discussed in Chapter 5.
To emphasize this distinction, we may describe the model of this section as a linear projection model rather than a linear regression model. This is an accurate label as equation (1.3) shows that $e_i$ is explicitly defined as a projection error. However, conventional econometric practice labels (1.2) as a linear regression model, so we will adhere to this convention.

1.2 Estimation
Equation (1.3) writes the regression parameter $\beta$ as an explicit function of the population moments $E(x_i y_i)$ and $E(x_i x_i')$. Their moment estimators are the sample moments
$$\hat{E}(x_i y_i) = \frac{1}{n}\sum_{i=1}^{n} x_i y_i$$
$$\hat{E}\left(x_i x_i'\right) = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'.$$
It follows that the moment estimator of $\beta$ is (1.3) with the population moments replaced by the sample moments:
$$\hat\beta = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\frac{1}{n}\sum_{i=1}^{n} x_i y_i = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\sum_{i=1}^{n} x_i y_i. \qquad (1.4)$$

Another way to derive $\hat\beta$ is as follows. Observe that Assumption 1.1.1.2 can be written in the parametric form
$$E\left(x_i\left(y_i - x_i'\beta\right)\right) = 0. \qquad (1.5)$$
The function $E\left(x_i\left(y_i - x_i'\beta\right)\right)$ can be estimated by
$$\hat{E}\left(x_i\left(y_i - x_i'\beta\right)\right) = \frac{1}{n}\sum_{i=1}^{n} x_i\left(y_i - x_i'\beta\right)$$
and $\hat\beta$ is the value which sets this equal to zero:
$$0 = \frac{1}{n}\sum_{i=1}^{n} x_i\left(y_i - x_i'\hat\beta\right) = \frac{1}{n}\sum_{i=1}^{n} x_i y_i - \frac{1}{n}\sum_{i=1}^{n} x_i x_i'\hat\beta \qquad (1.6)$$
whose solution is (1.4).


There is another classic motivation for the estimator (1.4). Define the sum-of-squared errors (SSE) function
$$S_n(\beta) = \sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2.$$
The Ordinary Least Squares (OLS) estimator is the value of $\beta$ which minimizes $S_n(\beta)$. Observe that we can write the latter as
$$S_n(\beta) = \sum_{i=1}^{n} y_i^2 - 2\beta'\sum_{i=1}^{n} x_i y_i + \beta'\sum_{i=1}^{n} x_i x_i'\beta.$$
Vector calculus (see the Matrix Calculus appendix) gives the first-order conditions for minimization:
$$0 = \frac{\partial}{\partial\beta}S_n(\hat\beta) = -2\sum_{i=1}^{n} x_i y_i + 2\sum_{i=1}^{n} x_i x_i'\hat\beta$$
whose solution is (1.4). Following convention, we will call $\hat\beta$ the OLS estimator of $\beta$.
As a by-product of OLS estimation, we define the predicted value
$$\hat{y}_i = x_i'\hat\beta$$
and the residual
$$\hat{e}_i = y_i - \hat{y}_i = y_i - x_i'\hat\beta.$$

Note that $y_i = \hat{y}_i + \hat{e}_i$. It is important to understand the distinction between the error $e_i$ and the residual $\hat{e}_i$. The error is unobservable, while the residual is a by-product of estimation. These two variables are frequently mislabeled, which can cause confusion.
Equation (1.6) implies that
$$\frac{1}{n}\sum_{i=1}^{n} x_i\hat{e}_i = 0.$$
Thus the sample correlation between the regressors and the residual is zero. Furthermore, since $x_i$ (typically) contains a constant, one implication is that
$$\frac{1}{n}\sum_{i=1}^{n}\hat{e}_i = 0.$$
Thus the residuals have a sample mean of zero. These are algebraic results, and hold true for all linear regression estimates.
The error variance $\sigma^2$ is also a parameter of interest. A method of moments estimator for it is the sample average
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2.$$
The error variance $\sigma^2$ measures the variation in the "unexplained" part of the regression. A measure of the explained variation relative to the total variation is the coefficient of determination or R-squared,
$$R^2 = 1 - \frac{\hat\sigma^2}{\hat\sigma_y^2}$$
where
$$\hat\sigma_y^2 = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2$$
is the sample variance of $y_i$. The $R^2$ is frequently mislabeled as a measure of "fit". It is an inappropriate label, as the value of $R^2$ does not aid in the interpretation of parameter estimates or test statistics.
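For concreteness, the estimator (1.4), the residuals, $\hat\sigma^2$, and $R^2$ can be computed in a few lines. The following is a minimal illustrative sketch in Python/NumPy on simulated data; the data-generating values and variable names are arbitrary choices for the example, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # k = 3 regressors, first is a constant
beta = np.array([1.0, 0.5, -0.25])                           # illustrative true parameters
y = x @ beta + rng.normal(size=n)                            # simulated linear model

# OLS estimator (1.4): solve (sum x_i x_i') b = (sum x_i y_i)
beta_hat = np.linalg.solve(x.T @ x, x.T @ y)

e_hat = y - x @ beta_hat                             # residuals
sigma2_hat = np.mean(e_hat ** 2)                     # method of moments error variance
r2 = 1 - sigma2_hat / np.mean((y - y.mean()) ** 2)   # R-squared

print(beta_hat, sigma2_hat, r2)
print(x.T @ e_hat / n)        # (1/n) sum x_i e_hat_i is zero up to rounding error
```

The linear system is solved directly rather than forming an explicit inverse, which is the standard numerically stable way to evaluate $\left(\sum x_i x_i'\right)^{-1}\sum x_i y_i$.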

1.3 Efficiency
Is the OLS estimator efficient, in the sense of achieving the smallest possible mean-squared error among feasible estimators? The answer was affirmatively provided by Chamberlain (1987).
Suppose that the joint distribution of $(y_i, x_i)$ is discrete. That is, for finite $r$,
$$P\left(y_i = \tau_j,\ x_i = \xi_j\right) = \pi_j, \qquad j = 1, \ldots, r$$
for some constant vectors $\tau_j$, $\xi_j$, and $\pi_j$. Assume that the $\tau_j$ and $\xi_j$ are known, but the $\pi_j$ are unknown. (We know the values $y_i$ and $x_i$ can take, but we don't know the probabilities.)
In this discrete setting, the moment condition (1.5) can be rewritten as
$$\sum_{j=1}^{r}\pi_j\xi_j\left(\tau_j - \xi_j'\beta\right) = 0. \qquad (1.7)$$
By the implicit function theorem, $\beta$ is a function of $(\pi_1, \ldots, \pi_r)$.
As the data are multinomial, the maximum likelihood estimator (MLE) is
$$\hat\pi_j = \frac{1}{n}\sum_{i=1}^{n}1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)$$
for $j = 1, \ldots, r$, where $1(\cdot)$ is the indicator function. That is, $\hat\pi_j$ is the percentage of the observations which fall in each category. The MLE $\hat\beta_{mle}$ for $\beta$ is then the function of $(\hat\pi_1, \ldots, \hat\pi_r)$ which satisfies the analog of (1.7) with the $\pi_j$ replaced by the $\hat\pi_j$:
$$\sum_{j=1}^{r}\hat\pi_j\xi_j\left(\tau_j - \xi_j'\hat\beta_{mle}\right) = 0.$$
Substituting in the expressions for $\hat\pi_j$,
$$0 = \sum_{j=1}^{r}\left(\frac{1}{n}\sum_{i=1}^{n}1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)\right)\xi_j\left(\tau_j - \xi_j'\hat\beta_{mle}\right)$$
$$= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{r}1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)\xi_j\left(\tau_j - \xi_j'\hat\beta_{mle}\right)$$
$$= \frac{1}{n}\sum_{i=1}^{n}x_i\left(y_i - x_i'\hat\beta_{mle}\right).$$
But this is the same expression as (1.6), which means that $\hat\beta_{mle} = \hat\beta_{ols}$. In other words, if the data have a discrete distribution, the maximum likelihood estimator is simply the OLS estimator. Since this is a regular parametric model, the MLE is asymptotically efficient, and thus so is the OLS estimator.
Chamberlain (1987) extends this argument to the case of continuously-distributed data. He observes that the above argument holds for all multinomial distributions, and any continuous distribution can be arbitrarily well approximated by a multinomial distribution. He proves that generically the OLS estimator is asymptotically efficient for the class of regression models satisfying Assumption 1.1.1.
1.4 Model in Matrix Notation
For some purposes, including computation, it is convenient to write the model and statistics in matrix notation. We define
$$Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}, \qquad e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}.$$
Observe that $Y$ and $e$ are $n \times 1$ vectors, and $X$ is an $n \times k$ matrix.
The linear regression model (1.2) is a system of $n$ equations, one for each observation. We can stack these $n$ equations together as
$$y_1 = x_1'\beta + e_1$$
$$y_2 = x_2'\beta + e_2$$
$$\vdots$$
$$y_n = x_n'\beta + e_n$$
or equivalently
$$Y = X\beta + e.$$
Sample sums can also be written in matrix notation. For example,
$$\sum_{i=1}^{n}x_ix_i' = X'X, \qquad \sum_{i=1}^{n}x_iy_i = X'Y.$$
Thus the estimator (1.4), residual vector, and sample error variance can be written as
$$\hat\beta = \left(X'X\right)^{-1}X'Y$$
$$\hat{e} = Y - X\hat\beta$$
$$\hat\sigma^2 = n^{-1}\hat{e}'\hat{e}.$$
Define the projection matrices
$$P = X\left(X'X\right)^{-1}X'$$
$$M = I_n - P.$$
Then
$$\hat{Y} = X\hat\beta = X\left(X'X\right)^{-1}X'Y = PY$$
and
$$\hat{e} = Y - X\hat\beta = Y - PY = (I_n - P)Y = MY. \qquad (1.8)$$
Another way of writing this is
$$Y = (P + M)Y = PY + MY = \hat{Y} + \hat{e}.$$
This decomposition is orthogonal, that is,
$$\hat{Y}'\hat{e} = (PY)'(MY) = Y'PMY = 0.$$
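These matrix identities are easy to verify numerically. A minimal sketch on simulated data (for large $n$ one would not form the $n \times n$ matrices explicitly; the data-generating values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
Y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T      # projection matrix P = X (X'X)^{-1} X'
M = np.eye(n) - P                         # M = I_n - P

Y_hat = P @ Y                             # fitted values, Y_hat = X beta_hat
e_hat = M @ Y                             # residuals, e_hat = M Y
print(np.allclose(Y, Y_hat + e_hat))      # Y = P Y + M Y
print(np.isclose(Y_hat @ e_hat, 0.0))     # orthogonality: Y_hat' e_hat = 0
```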

1.5 Residual Regression


Partition
$$X = [X_1 \quad X_2]$$
and
$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.$$
Then the regression model can be rewritten as
$$Y = X_1\beta_1 + X_2\beta_2 + e. \qquad (1.9)$$
Observe that the OLS estimator of $\beta = (\beta_1', \beta_2')'$ can be obtained by regression of $Y$ on $X = [X_1 \ X_2]$. OLS estimation can be written as
$$Y = X_1\hat\beta_1 + X_2\hat\beta_2 + \hat{e}. \qquad (1.10)$$
Using the partitioned matrix inversion formula (15.1),
$$\begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}^{-1} = \begin{pmatrix} \left(X_1'M_2X_1\right)^{-1} & -\left(X_1'M_2X_1\right)^{-1}X_1'X_2\left(X_2'X_2\right)^{-1} \\ -\left(X_2'X_2\right)^{-1}X_2'X_1\left(X_1'M_2X_1\right)^{-1} & \left(X_2'M_1X_2\right)^{-1} \end{pmatrix} \qquad (1.11)$$
where
$$M_1 = I_n - X_1\left(X_1'X_1\right)^{-1}X_1'$$
$$M_2 = I_n - X_2\left(X_2'X_2\right)^{-1}X_2'.$$
Thus
$$\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix} = \begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}^{-1}\begin{pmatrix} X_1'Y \\ X_2'Y \end{pmatrix}$$
$$= \begin{pmatrix} \left(X_1'M_2X_1\right)^{-1} & -\left(X_1'M_2X_1\right)^{-1}X_1'X_2\left(X_2'X_2\right)^{-1} \\ -\left(X_2'X_2\right)^{-1}X_2'X_1\left(X_1'M_2X_1\right)^{-1} & \left(X_2'M_1X_2\right)^{-1} \end{pmatrix}\begin{pmatrix} X_1'Y \\ X_2'Y \end{pmatrix}$$
$$= \begin{pmatrix} \left(X_1'M_2X_1\right)^{-1}\left(X_1'M_2Y\right) \\ \left(X_2'M_1X_2\right)^{-1}\left(X_2'M_1Y\right) \end{pmatrix} = \begin{pmatrix} \left(\tilde{X}_1'\tilde{X}_1\right)^{-1}\tilde{X}_1'\tilde{Y}_1 \\ \left(\tilde{X}_2'\tilde{X}_2\right)^{-1}\tilde{X}_2'\tilde{Y}_2 \end{pmatrix} \qquad (1.12)$$
where
$$\tilde{X}_1 = M_2X_1, \qquad \tilde{Y}_1 = M_2Y,$$
$$\tilde{X}_2 = M_1X_2, \qquad \tilde{Y}_2 = M_1Y.$$
The variables $\tilde{X}_1$ and $\tilde{Y}_1$ are least-squares residuals from the regression of $X_1$ and $Y$, respectively, on the matrix $X_2$ only. Similarly, the variables $\tilde{X}_2$ and $\tilde{Y}_2$ are least-squares residuals from the regression of $X_2$ and $Y$ on the matrix $X_1$ only.
Formula (1.12) shows that the subvector $\hat\beta_1$ of the OLS estimator $\hat\beta$ can be calculated by the OLS regression of $\tilde{Y}_1$ on $\tilde{X}_1$, and similarly $\hat\beta_2$ can be calculated by the OLS regression of $\tilde{Y}_2$ on $\tilde{X}_2$. This technique is called residual regression.
Furthermore, recalling the definition $M = I - X\left(X'X\right)^{-1}X'$, observe that $X_2'M = 0$ and hence
$$M_2M = \left(I - X_2\left(X_2'X_2\right)^{-1}X_2'\right)M = M.$$
Then using (1.8), we find $M_2\hat{e} = M_2MY = MY = \hat{e}$. Premultiplying (1.10) by $M_2$, we obtain
$$\tilde{Y}_1 = \tilde{X}_1\hat\beta_1 + \hat{e}.$$
Since $\hat\beta_1$ is precisely the OLS coefficient from a regression of $\tilde{Y}_1$ on $\tilde{X}_1$, this shows that the residual from this regression is $\hat{e}$, the numerically same residual as from the joint regression (1.10). We have proven the following theorem.

Theorem 1.5.1 (Frisch-Waugh-Lovell). In the model (1.9), the OLS estimator of $\beta_1$ and the OLS residuals $\hat{e}$ may be equivalently computed by either the OLS regression (1.10) or via the following algorithm:

1. Regress $Y$ on $X_2$; obtain residuals $\tilde{Y}_1$;

2. Regress $X_1$ on $X_2$; obtain residuals $\tilde{X}_1$;

3. Regress $\tilde{Y}_1$ on $\tilde{X}_1$; obtain OLS estimates $\hat\beta_1$ and residuals $\hat{e}$.

In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm. Rather, the theorem's primary use is theoretical.
A common application of the FWL theorem, which you may have seen in an introductory econometrics course, is the demeaning formula for regression. Partition $X = [X_1 \ X_2]$ where $X_1 = \iota$ is a vector of ones, and $X_2$ is the matrix of observed regressors. In this case,
$$M_1 = I - \iota\left(\iota'\iota\right)^{-1}\iota'.$$
Observe that
$$\tilde{X}_2 = M_1X_2 = X_2 - \iota\left(\iota'\iota\right)^{-1}\iota'X_2 = X_2 - \bar{X}_2$$
and
$$\tilde{Y} = M_1Y = Y - \iota\left(\iota'\iota\right)^{-1}\iota'Y = Y - \bar{Y},$$
which are "demeaned". The FWL theorem says that $\hat\beta_2$ is the OLS estimate from a regression of $\tilde{Y}$ on $\tilde{X}_2$, or $y_i - \bar{y}$ on $x_{2i} - \bar{x}_2$:
$$\hat\beta_2 = \left(\sum_{i=1}^{n}\left(x_{2i} - \bar{x}_2\right)\left(x_{2i} - \bar{x}_2\right)'\right)^{-1}\left(\sum_{i=1}^{n}\left(x_{2i} - \bar{x}_2\right)\left(y_i - \bar{y}\right)\right).$$
Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
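The FWL/demeaning result is easy to verify numerically. A minimal sketch, assuming simulated data with arbitrary parameter values, comparing the slope coefficients from the full regression with those from the demeaned regression:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x2 = rng.normal(size=(n, 2))                        # observed regressors X_2
X = np.column_stack([np.ones(n), x2])               # X = [iota, X_2]
Y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(size=n)

beta_full = np.linalg.solve(X.T @ X, X.T @ Y)       # joint OLS regression

x2_demeaned = x2 - x2.mean(axis=0)                  # M_1 X_2, i.e. X_2 minus its column means
y_demeaned = Y - Y.mean()                           # M_1 Y
beta2_fwl = np.linalg.solve(x2_demeaned.T @ x2_demeaned, x2_demeaned.T @ y_demeaned)

print(np.allclose(beta_full[1:], beta2_fwl))        # slope estimates coincide
```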

1.6 Consistency
The OLS estimator $\hat\beta$ is a statistic, and thus has a statistical distribution. In general, this distribution is unknown. Asymptotic (large sample) methods approximate sampling distributions based on the limiting experiment that the sample size $n$ tends to infinity. A preliminary step in this approach is the demonstration that estimators are consistent: that they converge in probability to the true parameters as the sample size gets large.

The following decomposition is quite useful:
$$\hat\beta = \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_iy_i = \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_i\left(x_i'\beta + e_i\right) = \beta + \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_ie_i. \qquad (1.13)$$
This shows that after centering, the distribution of $\hat\beta$ is determined by the joint distribution of $(x_i, e_i)$ only.

We can now deduce the consistency of $\hat\beta$. First, Assumption 1.1.1 and the WLLN (Section 17.2) imply that
$$\frac{1}{n}\sum_{i=1}^{n}x_ix_i' \to_p E\left(x_ix_i'\right) = Q \qquad (1.14)$$
and
$$\frac{1}{n}\sum_{i=1}^{n}x_ie_i \to_p E(x_ie_i) = 0. \qquad (1.15)$$
Using (1.13), we can write
$$\hat\beta = \beta + \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_ie_i = \beta + \left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ie_i\right) = \beta + g\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i',\ \frac{1}{n}\sum_{i=1}^{n}x_ie_i\right)$$
where $g(A, b) = A^{-1}b$ is a continuous function of $A$ and $b$ at all values of the arguments such that $A^{-1}$ exists. Now by (1.14) and (1.15),
$$\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i',\ \frac{1}{n}\sum_{i=1}^{n}x_ie_i\right) \to_p (Q, 0).$$
Assumption 1.1.1.5 implies that $Q^{-1}$ exists and thus $g(\cdot, \cdot)$ is continuous at $(Q, 0)$. Hence by the continuous mapping theorem (CMT) (Section 17.5),
$$g\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i',\ \frac{1}{n}\sum_{i=1}^{n}x_ie_i\right) \to_p g(Q, 0) = Q^{-1}0 = 0$$
so
$$\hat\beta = \beta + g\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i',\ \frac{1}{n}\sum_{i=1}^{n}x_ie_i\right) \to_p \beta + 0 = \beta.$$

Theorem 1.6.1 Under Assumption 1.1.1, as $n \to \infty$, $\hat\beta \to_p \beta$.

In Section 1.2 we also defined the sample error variance $\hat\sigma^2$. We now demonstrate its consistency for $\sigma^2$. Using (1.8),
$$n\hat\sigma^2 = e'MMe = e'Me = e'e - e'Pe. \qquad (1.16)$$
An application of the WLLN yields
$$\frac{1}{n}\sum_{i=1}^{n}e_i^2 \to_p Ee_i^2 = \sigma^2$$
as $n \to \infty$, so combined with (1.14) and (1.15),
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}e_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n}e_ix_i'\right)\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ie_i\right) \to_p \sigma^2 - 0'Q^{-1}0 = \sigma^2 \qquad (1.17)$$
so $\hat\sigma^2$ is consistent for $\sigma^2$.
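A small simulation can illustrate these consistency results: as $n$ grows, $\hat\beta$ approaches $\beta$ and $\hat\sigma^2$ approaches $\sigma^2$. The sketch below uses an arbitrary illustrative design with $t$-distributed errors (so $\sigma^2 = 5/3$); the specific choices are assumptions made only for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
beta = np.array([1.0, 0.5])                         # illustrative true parameters
for n in (100, 1_000, 10_000, 100_000):
    x = np.column_stack([np.ones(n), rng.normal(size=n)])
    e = rng.standard_t(df=5, size=n)                # iid errors with variance 5/3
    y = x @ beta + e
    beta_hat = np.linalg.solve(x.T @ x, x.T @ y)
    sigma2_hat = np.mean((y - x @ beta_hat) ** 2)
    print(n, np.abs(beta_hat - beta).max(), sigma2_hat)   # both converge as n grows
```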

1.7 Asymptotic Normality


We now establish the asymptotic distribution of $\hat\beta$ after normalization. We need a strengthening of the moment conditions.

Assumption 1.7.1 In addition to Assumption 1.1.1, $Ee_i^4 < \infty$ and $E|x_i|^4 < \infty$.

Now define
$$\Omega = E\left(x_ix_i'e_i^2\right).$$
Assumption 1.7.1 guarantees that the elements of $\Omega$ are finite. To see this, by the Cauchy-Schwarz inequality and Assumption 1.7.1,
$$E\left|x_ix_i'e_i^2\right| \leq \left(E\left|x_ix_i'\right|^2\right)^{1/2}\left(Ee_i^4\right)^{1/2} = \left(E|x_i|^4\right)^{1/2}\left(Ee_i^4\right)^{1/2} < \infty. \qquad (1.18)$$
Thus $x_ie_i$ is iid with mean zero and has covariance matrix $\Omega$. By the central limit theorem (Section 17.4),
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_ie_i \to_d N(0, \Omega). \qquad (1.19)$$
Then using (1.13), (1.14), and (1.19),
$$\sqrt{n}\left(\hat\beta - \beta\right) = \left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_ie_i\right) \to_d Q^{-1}N(0, \Omega) = N\left(0,\ Q^{-1}\Omega Q^{-1}\right).$$

Theorem 1.7.1 Under Assumption 1.7.1, as $n \to \infty$,
$$\sqrt{n}\left(\hat\beta - \beta\right) \to_d N(0, V)$$
where $V = Q^{-1}\Omega Q^{-1}$.

As $V$ is the variance of the asymptotic distribution of $\sqrt{n}(\hat\beta - \beta)$, $V$ is often referred to as the asymptotic covariance matrix of $\hat\beta$. The form $V = Q^{-1}\Omega Q^{-1}$ is called a sandwich form.
It may be insightful to examine a special case where $\Omega$ and $V$ simplify:

Homoskedastic Projection Error: $\mathrm{Cov}\left(x_ix_i',\ e_i^2\right) = 0.$

This condition holds, for example, when $x_i$ and $e_i$ are independent, but it is not a necessary condition. We should not expect it to hold generically, but when it does the asymptotic variance formulas simplify. If this condition is true, then
$$\Omega = E\left(x_ix_i'\right)E\left(e_i^2\right) = Q\sigma^2 \qquad (1.20)$$
$$V = Q^{-1}\Omega Q^{-1} = Q^{-1}\sigma^2 \equiv V^0. \qquad (1.21)$$
In (1.21) we define $V^0 = Q^{-1}\sigma^2$, as this matrix is defined even if the homoskedasticity condition is false, although in that case $V^0$ does not equal $V$. We call $V^0$ the homoskedastic covariance matrix.

1.8 Covariance Matrix Estimation


The homoskedastic covariance matrix $V^0 = Q^{-1}\sigma^2$ can be estimated by
$$\hat{V}^0 = \hat{Q}^{-1}\hat\sigma^2 \qquad (1.22)$$
where
$$\hat{Q} = \frac{1}{n}\sum_{i=1}^{n}x_ix_i' = \frac{1}{n}X'X$$
is the method of moments estimator for $Q$. Since $\hat{Q} \to_p Q$ and $\hat\sigma^2 \to_p \sigma^2$ (see (1.14) and (1.17)), it is clear that $\hat{V}^0 \to_p V^0$.
To estimate $V = Q^{-1}\Omega Q^{-1}$, we need an estimate of $\Omega = E\left(x_ix_i'e_i^2\right)$. Its method of moments estimator is
$$\hat\Omega = \frac{1}{n}\sum_{i=1}^{n}x_ix_i'\hat{e}_i^2$$
where the $\hat{e}_i$ are the OLS residuals. A useful computational approach is to define $\hat{u}_i = x_i\hat{e}_i$ and the $n \times k$ matrix
$$\hat{u} = \begin{pmatrix} \hat{u}_1' \\ \hat{u}_2' \\ \vdots \\ \hat{u}_n' \end{pmatrix}.$$
Then
$$\hat\Omega = \frac{1}{n}\hat{u}'\hat{u}$$
$$\hat{V} = n\left(X'X\right)^{-1}\hat{u}'\hat{u}\left(X'X\right)^{-1}.$$
This estimator was introduced to the econometrics literature by White (1980).
The estimator $\hat{V}^0$ was the dominant covariance estimator used before 1980, and was still the standard choice in the 1980s. From my reading of the literature, the White estimator $\hat{V}$ started to come into common use in the early 1990s, and by the late 1990s was quite commonly used, especially by younger researchers. When reading and reporting applied work, it is important to pay attention to the distinction between $\hat{V}^0$ and $\hat{V}$, as it is not always clear which has been used. When $\hat{V}$ is used rather than the traditional choice $\hat{V}^0$, many authors will state that "their standard errors have been corrected for heteroskedasticity", or that they use a "heteroskedasticity-robust covariance matrix estimator", or that they use the "White formula", the "Eicker-White formula", the "Huber formula", the "Huber-White formula" or the "GMM covariance matrix". In most cases, these all mean the same thing.
We now show that $\hat\Omega \to_p \Omega$, from which it follows that $\hat{V} \to_p V$ as $n \to \infty$. Expanding the quadratic,
$$\hat{e}_i^2 = \left(y_i - x_i'\hat\beta\right)^2 = \left(e_i - x_i'\left(\hat\beta - \beta\right)\right)^2 = e_i^2 - 2\left(\hat\beta - \beta\right)'x_ie_i + \left(\hat\beta - \beta\right)'x_ix_i'\left(\hat\beta - \beta\right).$$
Hence
$$\hat\Omega = \frac{1}{n}\sum_{i=1}^{n}x_ix_i'\hat{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}x_ix_i'e_i^2 - \frac{2}{n}\sum_{i=1}^{n}x_ix_i'\left(\hat\beta - \beta\right)'x_ie_i + \frac{1}{n}\sum_{i=1}^{n}x_ix_i'\left(\hat\beta - \beta\right)'x_ix_i'\left(\hat\beta - \beta\right). \qquad (1.23)$$
We now examine each sum on the right-hand side of (1.23) in turn. First, (1.18) and the WLLN show that
$$\frac{1}{n}\sum_{i=1}^{n}x_ix_i'e_i^2 \to_p E\left(x_ix_i'e_i^2\right) = \Omega.$$
Second, by Holder's inequality (Section 17.1),
$$E\left(|x_i|^3|e_i|\right) \leq \left(E|x_i|^4\right)^{3/4}\left(Ee_i^4\right)^{1/4} < \infty,$$
so by the WLLN
$$\frac{1}{n}\sum_{i=1}^{n}|x_i|^3|e_i| \to_p E\left(|x_i|^3|e_i|\right),$$
and thus since $\hat\beta - \beta \to_p 0$,
$$\left|\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\left(\hat\beta - \beta\right)'x_ie_i\right| \leq \left|\hat\beta - \beta\right|\frac{1}{n}\sum_{i=1}^{n}|x_i|^3|e_i| \to_p 0.$$
Third, by the WLLN
$$\frac{1}{n}\sum_{i=1}^{n}|x_i|^4 \to_p E|x_i|^4,$$
so
$$\left|\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\left(\hat\beta - \beta\right)'x_ix_i'\left(\hat\beta - \beta\right)\right| \leq \left|\hat\beta - \beta\right|^2\frac{1}{n}\sum_{i=1}^{n}|x_i|^4 \to_p 0.$$
Together, these establish consistency.

Theorem 1.8.1 As $n \to \infty$, $\hat\Omega \to_p \Omega$.

The variance estimator $\hat{V}$ is an estimate of the variance of the asymptotic distribution of $\hat\beta$. A more easily interpretable measure of spread is its square root, the standard deviation. This motivates the definition of a standard error.

Definition 1.8.1 A standard error $s(\hat\beta)$ for an estimator $\hat\beta$ is an estimate of the standard deviation of the distribution of $\hat\beta$.

When $\beta$ is scalar, and $\hat{V}$ is an estimator of the variance of $\sqrt{n}(\hat\beta - \beta)$, we set $s(\hat\beta) = n^{-1/2}\sqrt{\hat{V}}$. When $\beta$ is a vector, we focus on individual elements of $\beta$ one at a time, viz., $\beta_j$ for $j = 1, \ldots, k$. Thus
$$s(\hat\beta_j) = n^{-1/2}\sqrt{\hat{V}_{jj}}.$$
Generically, standard errors are not unique, as there may be more than one estimator of the variance of the estimator. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions, just as for any other estimator.
From a computational standpoint, the standard method to calculate the standard errors is to first calculate $n^{-1}\hat{V}$, then take the diagonal elements, and then the square roots.
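The computations of this section are summarized in the following sketch, which returns both the homoskedastic standard errors (from $\hat{V}^0$) and the White heteroskedasticity-robust standard errors (from $\hat{V}$). The function name and interface are illustrative assumptions, not a prescribed implementation:

```python
import numpy as np

def ols_with_se(y, x):
    """OLS with homoskedastic (V^0-based) and White heteroskedasticity-robust (V-based) standard errors."""
    n, k = x.shape
    xx_inv = np.linalg.inv(x.T @ x)
    beta_hat = xx_inv @ (x.T @ y)
    e_hat = y - x @ beta_hat
    sigma2_hat = np.mean(e_hat ** 2)

    v0 = n * sigma2_hat * xx_inv                  # V^0_hat = Q_hat^{-1} sigma2_hat = n sigma2_hat (X'X)^{-1}
    u = x * e_hat[:, None]                        # n x k matrix with rows u_i' = x_i' e_hat_i
    v_white = n * xx_inv @ (u.T @ u) @ xx_inv     # V_hat = n (X'X)^{-1} u'u (X'X)^{-1}

    se0 = np.sqrt(np.diag(v0) / n)                # standard errors: sqrt of diagonal of n^{-1} V_hat
    se_white = np.sqrt(np.diag(v_white) / n)
    return beta_hat, se0, se_white
```

The only difference between the two sets of standard errors is whether $\Omega$ is estimated by $\hat{Q}\hat\sigma^2$ or by $n^{-1}\hat{u}'\hat{u}$.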

1.9 Functions of Parameters


Sometimes we are interested in some function of the parameter vector. Let $h : \mathbb{R}^k \to \mathbb{R}^q$, and
$$\theta = h(\beta).$$
We will assume from now on that $h(\cdot)$ is continuously differentiable at the true value of $\beta$.
The estimate of $\theta$ is
$$\hat\theta = h(\hat\beta).$$
What is an appropriate standard error for $\hat\theta$? By a first-order Taylor series approximation,
$$h(\hat\beta) \simeq h(\beta) + H_\beta'\left(\hat\beta - \beta\right)$$
where
$$H_\beta = \frac{\partial}{\partial\beta}h(\beta)' \qquad (k \times q).$$
Thus
$$\sqrt{n}\left(\hat\theta - \theta\right) = \sqrt{n}\left(h(\hat\beta) - h(\beta)\right) \simeq H_\beta'\sqrt{n}\left(\hat\beta - \beta\right) \to_d H_\beta'N(0, V) = N(0, V_\theta) \qquad (1.24)$$
where
$$V_\theta = H_\beta'VH_\beta.$$
If $\hat{V}$ is the estimated covariance matrix for $\hat\beta$, then the natural estimate for the variance of $\hat\theta$ is
$$\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta$$
where
$$\hat{H}_\beta = \frac{\partial}{\partial\beta}h(\hat\beta)'.$$
In many cases, the function $h(\cdot)$ is linear:
$$h(\beta) = R'\beta$$
for some $k \times q$ matrix $R$. In this case, $H_\beta = R$ and $\hat{H}_\beta = R$, so $\hat{V}_\theta = R'\hat{V}R$.
For example, if $R$ is a "selector matrix"
$$R = \begin{pmatrix} I \\ 0 \end{pmatrix}$$
so that if $\beta = (\beta_1', \beta_2')'$, then $\theta = R'\beta = \beta_1$ and
$$\hat{V}_\theta = \begin{pmatrix} I & 0 \end{pmatrix}\hat{V}\begin{pmatrix} I \\ 0 \end{pmatrix} = \hat{V}_{11},$$
the upper-left block of $\hat{V}$.
When $q = 1$ (so $h(\beta)$ is real-valued), the standard error for $\hat\theta$ is the square root of $n^{-1}\hat{V}_\theta$; that is, $s(\hat\theta) = n^{-1/2}\sqrt{\hat{H}_\beta'\hat{V}\hat{H}_\beta}$.
1.10 t tests
Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}$ be any parameter of interest, $\hat\theta$ its estimate and $s(\hat\theta)$ its asymptotic standard error. Consider the studentized statistic
$$t_n(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)}. \qquad (1.25)$$
It is easy to calculate that this statistic has the asymptotic distribution
$$t_n(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)} = \frac{\sqrt{n}\left(\hat\theta - \theta\right)}{\sqrt{\hat{V}_\theta}} \to_d \frac{N(0, V_\theta)}{\sqrt{V_\theta}} = N(0, 1),$$
the standard normal. This distribution is known. Since this distribution does not depend on the parameters, we say that $t_n(\theta)$ is asymptotically pivotal. In special cases (such as the normal regression model, see Section 2.7), the statistic $t_n$ has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that $t_n$ is an exactly pivotal statistic. In general, however, pivotal statistics are unavailable and so we must rely on asymptotically pivotal statistics.
A simple null and composite alternative hypothesis takes the form
$$H_0 : \theta = \theta_0$$
$$H_1 : \theta \neq \theta_0$$
where $\theta_0$ is some pre-specified value, and $\theta = h(\beta)$ is some function of the parameter vector. (For example, $\theta$ could be a single element of $\beta$.)
The standard test for $H_0$ against $H_1$ is the t-statistic (or studentized statistic)
$$t_n = t_n(\theta_0) = \frac{\hat\theta - \theta_0}{s(\hat\theta)}.$$
Under $H_0$, $t_n \to_d N(0, 1)$. Let $z_{\alpha/2}$ be the upper $\alpha/2$ quantile of the standard normal distribution. That is, if $Z \sim N(0, 1)$, then $P(Z > z_{\alpha/2}) = \alpha/2$ and $P(|Z| > z_{\alpha/2}) = \alpha$. For example, $z_{.025} = 1.96$ and $z_{.05} = 1.645$. A test of asymptotic significance $\alpha$ rejects $H_0$ if $|t_n| > z_{\alpha/2}$. Otherwise the test does not reject, or "accepts" $H_0$. This is because
$$P(\text{reject } H_0 \mid H_0) = P\left(|t_n| > z_{\alpha/2} \mid \theta = \theta_0\right) \to P\left(|Z| > z_{\alpha/2}\right) = \alpha.$$
The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing.
An alternative approach, associated with Fisher, is to report an asymptotic p-value. The asymptotic p-value for the above statistic is constructed as follows. Define the tail probability, or asymptotic p-value function
$$p(t) = P(|Z| > |t|) = 2\left(1 - \Phi(|t|)\right).$$
Then the asymptotic p-value of the statistic $t_n$ is
$$p_n = p(t_n).$$
If the p-value $p_n$ is small (close to zero) then the evidence against $H_0$ is strong. In a sense, p-values and hypothesis tests are equivalent since $p_n < \alpha$ if and only if $|t_n| > z_{\alpha/2}$; thus an equivalent statement of a Neyman-Pearson test is to reject at the $\alpha\%$ level if and only if $p_n < \alpha$. The p-value is more general, however, in that the reader is allowed to pick the level of significance $\alpha$, in contrast to Neyman-Pearson rejection/acceptance reporting, where the researcher picks the level.
Another helpful observation is that the p-value function is simply a unit-free transformation of the test statistic. That is, under $H_0$, $p_n \to_d U[0, 1]$, so the "unusualness" of the test statistic can be compared to the easy-to-understand uniform distribution, regardless of the complication of the distribution of the original test statistic. To see this fact, note that the asymptotic distribution of $|t_n|$ is $F(x) = 1 - p(x)$. Thus
$$P(1 - p_n \leq u) = P(1 - p(t_n) \leq u) = P\left(F(|t_n|) \leq u\right) = P\left(|t_n| \leq F^{-1}(u)\right) \to F\left(F^{-1}(u)\right) = u,$$
establishing that $1 - p_n \to_d U[0, 1]$, from which it follows that $p_n \to_d U[0, 1]$.
It may be helpful to note that in the GAUSS language, the function $p(t)$ may be computed by the expression p = 2*cdfnc(t).
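A minimal sketch of the t-statistic and its asymptotic p-value in Python, using SciPy's normal distribution functions in place of the GAUSS expression above; the numbers in the example call are made up for illustration:

```python
import numpy as np
from scipy.stats import norm

def t_test(theta_hat, theta0, se):
    """Asymptotic t-statistic and two-sided p-value p(t) = 2(1 - Phi(|t|))."""
    t = (theta_hat - theta0) / se
    p = 2 * (1 - norm.cdf(abs(t)))
    return t, p

t, p = t_test(theta_hat=0.48, theta0=0.0, se=0.11)   # made-up numbers for illustration
print(t, p, abs(t) > norm.ppf(0.975))                # reject at the 5% level iff |t| > 1.96
```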

1.11 Confidence Intervals


A confidence interval $C_n$ is an interval estimate of $\theta$, and is a function of the data and hence is random. It is designed to cover $\theta$ with high probability. Either $\theta \in C_n$ or $\theta \notin C_n$. The coverage probability is $P(\theta \in C_n)$.
We typically cannot calculate the exact coverage probability $P(\theta \in C_n)$. However we often can calculate $\lim_{n\to\infty}P(\theta \in C_n)$. We call this the asymptotic coverage probability. We say that $C_n$ has asymptotic $(1-\alpha)\%$ coverage for $\theta$ if $P(\theta \in C_n) \to 1 - \alpha$ as $n \to \infty$.
A good method for construction of a confidence interval is the collection of parameter values which are not rejected by an appropriate statistical test. Recall the t-statistic (1.25) and the test: reject $H_0 : \theta = \theta_0$ if $|t_n(\theta_0)| > z_{\alpha/2}$, where $z_{\alpha/2}$ again is the upper $\alpha/2$ quantile of the standard normal distribution. Our confidence interval is then
$$C_n = \left\{\theta : |t_n(\theta)| \leq z_{\alpha/2}\right\} = \left\{\theta : -z_{\alpha/2} \leq \frac{\hat\theta - \theta}{s(\hat\theta)} \leq z_{\alpha/2}\right\} = \left[\hat\theta - z_{\alpha/2}s(\hat\theta),\ \hat\theta + z_{\alpha/2}s(\hat\theta)\right]. \qquad (1.26)$$
While there is no hard-and-fast guideline for choosing the coverage probability $1 - \alpha$, the most common professional choice is 95%, or $\alpha = .05$. This corresponds to selecting the confidence interval $\left[\hat\theta \pm 1.96s(\hat\theta)\right] \approx \left[\hat\theta \pm 2s(\hat\theta)\right]$. Thus values of $\theta$ within two standard errors of the estimate $\hat\theta$ are considered "reasonable" candidates for the true value $\theta$, and values of $\theta$ outside two standard errors of the estimate $\hat\theta$ are considered unlikely or unreasonable candidates for the true value.
The interval has been constructed so that as $n \to \infty$,
$$P(\theta \in C_n) = P\left(|t_n(\theta)| \leq z_{\alpha/2}\right) \to P\left(|Z| \leq z_{\alpha/2}\right) = 1 - \alpha,$$
and $C_n$ is an asymptotic $(1-\alpha)\%$ confidence interval.

1.12 Wald Tests
Sometimes $\theta = h(\beta)$ is a $q \times 1$ vector, and it is desired to test the joint restrictions simultaneously. In this case the t-statistic approach does not work. We have the null and alternative
$$H_0 : \theta = \theta_0$$
$$H_1 : \theta \neq \theta_0.$$
The natural estimate of $\theta$ is $\hat\theta = h(\hat\beta)$, which has asymptotic covariance matrix estimate
$$\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta$$
where
$$\hat{H}_\beta = \frac{\partial}{\partial\beta}h(\hat\beta)'.$$
The Wald statistic for $H_0$ against $H_1$ is
$$W_n = n\left(\hat\theta - \theta_0\right)'\hat{V}_\theta^{-1}\left(\hat\theta - \theta_0\right) = n\left(h(\hat\beta) - \theta_0\right)'\left(\hat{H}_\beta'\hat{V}\hat{H}_\beta\right)^{-1}\left(h(\hat\beta) - \theta_0\right).$$
When $h$ is a linear function of $\beta$, $h(\beta) = R'\beta$, then the Wald statistic takes the form
$$W_n = n\left(R'\hat\beta - \theta_0\right)'\left(R'\hat{V}R\right)^{-1}\left(R'\hat\beta - \theta_0\right).$$
The delta method (1.24) showed that $\sqrt{n}(\hat\theta - \theta) \to_d Z \sim N(0, V_\theta)$, and Theorem 1.8.1 showed that $\hat{V} \to_p V$. Furthermore, $H_\beta(\beta)$ is a continuous function of $\beta$, so by the continuous mapping theorem, $H_\beta(\hat\beta) \to_p H_\beta$. Thus $\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta \to_p H_\beta'VH_\beta = V_\theta > 0$ if $H_\beta$ has full rank $q$. Hence
$$W_n = n\left(\hat\theta - \theta_0\right)'\hat{V}_\theta^{-1}\left(\hat\theta - \theta_0\right) \to_d Z'V_\theta^{-1}Z = \chi_q^2,$$
by Theorem 16.8.1. We have established:

Theorem 1.12.1 Under $H_0$ and Assumption 1.7.1, if $\mathrm{rank}(H_\beta) = q$, then $W_n \to_d \chi_q^2$, a chi-square random variable with $q$ degrees of freedom.

An asymptotic Wald test rejects $H_0$ in favor of $H_1$ if $W_n$ exceeds $\chi_q^2(\alpha)$, the upper-$\alpha$ quantile of the $\chi_q^2$ distribution. For example, $\chi_1^2(.05) = 3.84 = z_{.025}^2$. The Wald test fails to reject if $W_n$ is less than $\chi_q^2(\alpha)$. The asymptotic p-value for $W_n$ is $p_n = p(W_n)$, where $p(x) = P\left(\chi_q^2 \geq x\right)$ is the tail probability function of the $\chi_q^2$ distribution. As before, the test rejects at the $\alpha\%$ level iff $p_n < \alpha$, and $p_n$ is asymptotically $U[0, 1]$ under $H_0$. In addition, it may be helpful to note that in the GAUSS language, the function $p(t)$ may be computed by the expression p = cdfchic(t).
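A sketch of the Wald statistic and its chi-square p-value; the interface is an illustrative assumption, and $\hat{V}_\theta$ is assumed to have been computed as above:

```python
import numpy as np
from scipy.stats import chi2

def wald_test(theta_hat, theta0, v_theta, n):
    """Wald statistic W_n = n (theta_hat - theta_0)' V_theta_hat^{-1} (theta_hat - theta_0) and its p-value."""
    d = np.atleast_1d(theta_hat) - np.atleast_1d(theta0)
    w = float(n * d @ np.linalg.solve(np.atleast_2d(v_theta), d))
    q = d.size
    return w, chi2.sf(w, df=q)                       # p-value: P(chi2_q >= W_n)
```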

1.13 F Tests
Take the linear model
$$Y = X_1\beta_1 + X_2\beta_2 + e$$
where $X_1$ is $n \times k_1$, $X_2$ is $n \times k_2$, and $k = k_1 + k_2$. The null hypothesis is
$$H_0 : \beta_2 = 0.$$
In this case, $\theta = \beta_2$, and there are $q = k_2$ restrictions. Also $h(\beta) = R'\beta$ is linear with $R = \begin{pmatrix} 0 \\ I \end{pmatrix}$ a selector matrix. We know that the Wald statistic takes the form
$$W_n = n\hat\theta'\hat{V}_\theta^{-1}\hat\theta = n\hat\beta_2'\left(R'\hat{V}R\right)^{-1}\hat\beta_2.$$
What we will show in this section is that if $\hat{V}$ is replaced with $\hat{V}^0 = \hat\sigma^2\left(n^{-1}X'X\right)^{-1}$, the covariance matrix estimator valid under homoskedasticity, then the Wald statistic can be written in the form
$$W_n = n\left(\frac{\tilde\sigma^2 - \hat\sigma^2}{\hat\sigma^2}\right) \qquad (1.27)$$
where
$$\tilde\sigma^2 = \frac{1}{n}\tilde{e}'\tilde{e}, \qquad \tilde{e} = Y - X_1\tilde\beta_1, \qquad \tilde\beta_1 = \left(X_1'X_1\right)^{-1}X_1'Y$$
are from OLS of $Y$ on $X_1$, and
$$\hat\sigma^2 = \frac{1}{n}\hat{e}'\hat{e}, \qquad \hat{e} = Y - X\hat\beta, \qquad \hat\beta = \left(X'X\right)^{-1}X'Y$$
are from OLS of $Y$ on $X = (X_1, X_2)$.
The elegant feature about (1.27) is that it is directly computable from the standard output from two simple OLS regressions, as the sum of squared errors is a typical output from statistical packages. This statistic is typically reported as an "F-statistic" which is defined as
$$F = \frac{n-k}{n\,k_2}W_n = \frac{\left(\tilde\sigma^2 - \hat\sigma^2\right)/k_2}{\hat\sigma^2/(n-k)}.$$
While it should be emphasized that equality (1.27) only holds if $\hat{V}^0 = \hat\sigma^2\left(n^{-1}X'X\right)^{-1}$, still this formula often finds good use in reading applied papers.
We now derive expression (1.27). First, note that using (1.11),
$$R'\hat{V}^0R = \hat\sigma^2R'\left(n^{-1}\begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}\right)^{-1}R = \hat\sigma^2\left(n^{-1}X_2'M_1X_2\right)^{-1},$$
where $M_1 = I - X_1\left(X_1'X_1\right)^{-1}X_1'$. Thus
$$W_n = n\hat\beta_2'\left(R'\hat{V}^0R\right)^{-1}\hat\beta_2 = \frac{\hat\beta_2'\left(X_2'M_1X_2\right)\hat\beta_2}{\hat\sigma^2}.$$
To simplify this expression further, note that if we regress $Y$ on $X_1$ alone, the residual is $\tilde{e} = M_1Y$. Now consider the residual regression of $\tilde{e}$ on $\tilde{X}_2 = M_1X_2$. By the FWL theorem, $\tilde{e} = \tilde{X}_2\hat\beta_2 + \hat{e}$ and $\tilde{X}_2'\hat{e} = 0$. Thus
$$\tilde{e}'\tilde{e} = \left(\tilde{X}_2\hat\beta_2 + \hat{e}\right)'\left(\tilde{X}_2\hat\beta_2 + \hat{e}\right) = \hat\beta_2'\tilde{X}_2'\tilde{X}_2\hat\beta_2 + \hat{e}'\hat{e} = \hat\beta_2'X_2'M_1X_2\hat\beta_2 + \hat{e}'\hat{e},$$
or alternatively,
$$\hat\beta_2'X_2'M_1X_2\hat\beta_2 = \tilde{e}'\tilde{e} - \hat{e}'\hat{e}.$$
Also, since
$$\hat\sigma^2 = n^{-1}\hat{e}'\hat{e},$$
we conclude that
$$W_n = n\left(\frac{\tilde{e}'\tilde{e} - \hat{e}'\hat{e}}{\hat{e}'\hat{e}}\right) = n\left(\frac{\tilde\sigma^2 - \hat\sigma^2}{\hat\sigma^2}\right),$$
as claimed.
In many statistical packages, when an OLS regression is reported, an "F statistic" is reported. This is
$$F = \frac{\left(\tilde\sigma_y^2 - \hat\sigma^2\right)/(k-1)}{\hat\sigma^2/(n-k)}$$
where
$$\tilde\sigma_y^2 = \frac{1}{n}\left(y - \bar{y}\right)'\left(y - \bar{y}\right)$$
is the sample variance of $y_i$, equivalently the residual variance from an intercept-only model. This special F statistic is testing the hypothesis that all slope coefficients (other than the intercept) are zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is highly "significant". Certainly, there are special cases where this F statistic is useful, but these cases are atypical.
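The two-regression formula (1.27) and the associated F statistic can be computed directly from the restricted and unrestricted residuals. A sketch, assuming $X_1$ contains the intercept and that the homoskedastic covariance estimator is appropriate; the function name and interface are illustrative:

```python
import numpy as np
from scipy.stats import f as f_dist

def f_test(y, x1, x2):
    """Test H0: beta_2 = 0 via the two-regression formula (1.27), assuming homoskedasticity."""
    n = y.size
    x = np.column_stack([x1, x2])
    k, k2 = x.shape[1], x2.shape[1]
    e_tilde = y - x1 @ np.linalg.solve(x1.T @ x1, x1.T @ y)   # residuals from OLS of Y on X_1 only
    e_hat = y - x @ np.linalg.solve(x.T @ x, x.T @ y)         # residuals from OLS of Y on (X_1, X_2)
    s2_tilde = e_tilde @ e_tilde / n                          # sigma2_tilde
    s2_hat = e_hat @ e_hat / n                                # sigma2_hat
    wn = n * (s2_tilde - s2_hat) / s2_hat                     # Wald statistic in the form (1.27)
    f = ((s2_tilde - s2_hat) / k2) / (s2_hat / (n - k))       # the F statistic
    return wn, f, f_dist.sf(f, k2, n - k)                     # F p-value
```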

Chapter 2

Regression Models

2.1 Regression
In regression, we want to find the central tendency of the conditional distribution of $y_i$ given $x_i$. A standard measure of central tendency is the mean. The conditional analog is the conditional mean $m(x) = E(y_i \mid x_i = x)$. In general, $m(x)$ can take any form.
The regression error $e_i$ is defined to be the difference between $y_i$ and its conditional mean:
$$e_i = y_i - m(x_i).$$
By construction, this yields the formula
$$y_i = m(x_i) + e_i. \qquad (2.1)$$
It is worth emphasizing that no assumptions have been used to develop (2.1), other than that $(y_i, x_i)$ have a joint distribution and $E|y_i| < \infty$.

Proposition 2.1.1 Properties of the regression error $e_i$:

1. $E(e_i \mid x_i) = 0$.

2. $E(e_i) = 0$.

3. $E(h(x_i)e_i) = 0$ for any function $h(\cdot)$.

4. $E(x_ie_i) = 0$.

Proof:

1. By the definition of $e_i$ and the linearity of conditional expectations,
$$E(e_i \mid x_i) = E\left(\left(y_i - m(x_i)\right) \mid x_i\right) = E(y_i \mid x_i) - E\left(m(x_i) \mid x_i\right) = m(x_i) - m(x_i) = 0.$$

2. By the law of iterated expectations (Theorem 16.7) and the first result,
$$E(e_i) = E\left(E(e_i \mid x_i)\right) = E(0) = 0.$$

3. By a similar argument, and using the conditioning theorem (Theorem 16.9),
$$E(h(x_i)e_i) = E\left(E\left(h(x_i)e_i \mid x_i\right)\right) = E\left(h(x_i)E(e_i \mid x_i)\right) = E\left(h(x_i)\cdot 0\right) = 0.$$

4. Follows from the third result setting $h(x_i) = x_i$.

Equation (2.1) plus Proposition 2.1.1.1 are often stated jointly as the regression framework:
$$y_i = m(x_i) + e_i \qquad (2.2)$$
$$E(e_i \mid x_i) = 0.$$
It is important to understand that this is a framework, not a model, because no restrictions have been placed on the joint distribution of the data. These equations hold true by definition. A regression model imposes further restrictions on the joint distribution; most typically, restrictions on the permissible class of regression functions $m(x)$.
The most common choice is the linear regression model. It specifies that $m(x)$ is a linear function of $x$:
$$y_i = x_i'\beta + e_i$$
$$E(e_i \mid x_i) = 0.$$

Since a linear equation is a special case of the general conditional mean equation, this is a substantive restriction which may or may not be true in a specific application. The fact that Proposition 2.1.1.4 is the same as Assumption 1.1.1.2 means that the linear regression model is a special case of the least-squares projection model of Chapter 1. Another way of saying this is that the conditional mean assumption $E(e_i \mid x_i) = 0$ is stronger than the uncorrelatedness assumption $E(x_ie_i) = 0$.
It is also useful to define the conditional variance of $y_i$ given $x_i = x$:
$$\mathrm{Var}(y_i \mid x_i = x) = E\left(e_i^2 \mid x_i = x\right) = \sigma^2(x).$$
Generally, this is a function of $x$. Just as the conditional mean function may take any form, so may the conditional variance function (other than the restriction that it is non-negative). Given the random variable $x_i$, the conditional variance is $\sigma_i^2 = \sigma^2(x_i)$. In the general case where $\sigma^2(x)$ is not necessarily a constant function, so that $\sigma_i^2$ may differ across $i$, we say that the error $e_i$ is heteroskedastic.
When $\sigma^2(x)$ is a constant, so that
$$E\left(e_i^2 \mid x_i\right) = \sigma^2, \qquad (2.3)$$
we say that the error $e_i$ is homoskedastic. The model
$$y_i = x_i'\beta + e_i$$
$$E(e_i \mid x_i) = 0$$
$$E\left(e_i^2 \mid x_i\right) = \sigma^2$$
is called the homoskedastic linear regression model. In this case, by the law of iterated expectations,
$$E\left(x_ix_i'e_i^2\right) = E\left(x_ix_i'E\left(e_i^2 \mid x_i\right)\right) = Q\sigma^2,$$
which is (1.20). Thus the homoskedastic linear regression model is a special case of the homoskedastic projection error model.

2.2 Bias and Variance of OLS estimator


The conditional mean assumption allows us to calculate the small sample conditional mean and variance of the OLS estimator.
To examine the bias of $\hat\beta$, using (1.13), the conditioning theorem, and the independence of the observations,
$$E\left(\hat\beta - \beta \mid X\right) = E\left[\left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_ie_i \,\Big|\, X\right] = \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_iE(e_i \mid X) = \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sum_{i=1}^{n}x_iE(e_i \mid x_i) = 0.$$
Thus the OLS estimator $\hat\beta$ is unbiased for $\beta$.


To examine its covariance matrix, for a random vector $Y$ we define
$$\mathrm{Var}(Y) = E\left(Y - EY\right)\left(Y - EY\right)' = EYY' - (EY)(EY)'.$$
Then by independence of the observations,
$$\mathrm{Var}\left(\sum_{i=1}^{n}x_ie_i \,\Big|\, X\right) = \sum_{i=1}^{n}\mathrm{Var}(x_ie_i \mid X) = \sum_{i=1}^{n}x_ix_i'\sigma_i^2$$
and we find
$$\mathrm{Var}\left(\hat\beta \mid X\right) = \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\left(\sum_{i=1}^{n}x_ix_i'\sigma_i^2\right)\left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}.$$
In the special case of the linear homoskedastic regression model, $\sigma_i^2 = \sigma^2$ and the covariance matrix simplifies to
$$\mathrm{Var}\left(\hat\beta \mid X\right) = \left(\sum_{i=1}^{n}x_ix_i'\right)^{-1}\sigma^2.$$
Recall the method of moments estimator $\hat\sigma^2$ for $\sigma^2$. We now calculate its finite sample bias in the context of the homoskedastic linear regression model. Using (1.16) and linear algebra manipulations,
$$E\left(n\hat\sigma^2 \mid X\right) = E\left(e'Me \mid X\right) = E\left(\mathrm{tr}\left(e'Me\right) \mid X\right) = E\left(\mathrm{tr}\left(Mee'\right) \mid X\right) = \mathrm{tr}\left(E\left(Mee' \mid X\right)\right) = \mathrm{tr}\left(ME\left(ee' \mid X\right)\right) = \mathrm{tr}(M)\sigma^2 = \sigma^2(n-k),$$
the final equality by (15.4). We have found that under these assumptions,
$$E\hat\sigma^2 = \frac{(n-k)}{n}\sigma^2,$$
so $\hat\sigma^2$ is biased towards zero. Since the bias is proportional to $\sigma^2$, it is common to define the bias-corrected estimator
$$s^2 = \frac{1}{n-k}\sum_{i=1}^{n}\hat{e}_i^2$$
so that $Es^2 = \sigma^2$, i.e., $s^2$ is unbiased. It is important to remember, however, that this estimator is only unbiased in the special case of the homoskedastic linear regression model. It is not unbiased in the absence of homoskedasticity, or in the projection model.

2.3 Multicollinearity
If $\mathrm{rank}(X'X) < k$, then $\hat\beta$ is not defined. This can be called strict multicollinearity. This happens when the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that $X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $X$ includes the logs of two prices and the log of their ratio: $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.
The more relevant issue is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $X'X$ matrix is near singular, when the columns of $X$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.
One implication of near singularity of matrices is that the numerical reliability of the calculations is reduced. It is possible that the reported calculations will be in error due to floating-point calculation difficulties.
More relevantly in practice, an implication of near multicollinearity is that estimation precision of individual coefficients will be poor. We can see this most simply in a model with two regressors and no intercept:
$$y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,$$
where $e_i$ is independent of $x_{1i}$ and $x_{2i}$, $Ee_i^2 = 1$, and
$$E\begin{pmatrix} x_{1i}^2 & x_{1i}x_{2i} \\ x_{1i}x_{2i} & x_{2i}^2 \end{pmatrix} = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$
In this case the asymptotic covariance matrix $V$ is
$$\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}^{-1} = \left(1 - \rho^2\right)^{-1}\begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.$$
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by examining the asymptotic variance of either coefficient estimate, which is $\left(1 - \rho^2\right)^{-1}$. As $\rho$ approaches 1, the variance rises quickly to infinity. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.
Basically, what is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. The precision of individual estimates is reduced.
Is there a simple solution? Basically, no. Fortunately, multicollinearity does not lead to errors in inference. The asymptotic distribution is still valid. Regression estimates are asymptotically normal, and estimated standard errors are consistent for the asymptotic variance. So reported confidence intervals are not inherently misleading. They will be large, correctly indicating the inherent uncertainty about the true parameter value.
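The variance formula $\left(1 - \rho^2\right)^{-1}$ makes the effect of near multicollinearity easy to tabulate. A short illustrative sketch (the grid of $\rho$ values is arbitrary):

```python
import numpy as np

# Asymptotic variance (1 - rho^2)^{-1} of each slope estimate as the regressor correlation rho grows
for rho in (0.0, 0.5, 0.9, 0.99, 0.999):
    q = np.array([[1.0, rho], [rho, 1.0]])
    v = np.linalg.inv(q)                    # V = Q^{-1} here, since E(e_i^2) = 1
    print(rho, v[0, 0])                     # variance of either coefficient estimate
```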

2.4 Forecast Intervals


In the linear regression model,
$$m(x) = E(y_i \mid x_i = x) = x'\beta.$$
In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta) = x'\beta$ and $\theta = h(\beta)$, we see that $\hat{m}(x) = \hat\theta = x'\hat\beta$ and $H_\beta = x$, so $s(\hat\theta) = \sqrt{n^{-1}x'\hat{V}x}$. Thus an asymptotic 95% confidence interval for $m(x)$ is
$$\left[x'\hat\beta \pm 2\sqrt{n^{-1}x'\hat{V}x}\right].$$
It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.
For a given value of $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable guess is the conditional mean $m(x)$, and indeed this is the mean-square-minimizing decision rule. Thus a point forecast is $\hat{m}(x) = x'\hat\beta$, the estimated conditional mean, as discussed above. We would also like a measure of uncertainty for the forecast.
The forecast error is $\hat{e}_i = y_i - \hat{m}(x) = e_i - x'\left(\hat\beta - \beta\right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat\beta$, this has variance
$$E\hat{e}_i^2 = E\left(e_i^2 \mid x_i = x\right) + x'E\left(\hat\beta - \beta\right)\left(\hat\beta - \beta\right)'x = \sigma^2(x) + n^{-1}x'Vx.$$
Assuming $E\left(e_i^2 \mid x_i = x\right) = \sigma^2$, the natural estimate of this variance is $\hat\sigma^2 + n^{-1}x'\hat{V}x$, so a standard error for the forecast is $\sqrt{\hat\sigma^2 + n^{-1}x'\hat{V}x}$. Notice that this is different from the standard error for the conditional mean.
It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
$$\left[x'\hat\beta \pm 2\sqrt{\hat\sigma^2 + n^{-1}x'\hat{V}x}\right],$$
but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
$$\frac{e_i - x'\left(\hat\beta - \beta\right)}{\sqrt{\hat\sigma^2 + n^{-1}x'\hat{V}x}}.$$
But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $N(0, \sigma^2)$, which is generally invalid.
To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Given the difficulty, most applied forecasters focus on the simple and unjustified interval $\left[x'\hat\beta \pm 2\sqrt{\hat\sigma^2 + n^{-1}x'\hat{V}x}\right]$.
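A sketch of the point forecast, the confidence interval for the conditional mean, and the simple (unjustified) forecast interval discussed above; the function signature is an illustrative assumption:

```python
import numpy as np

def forecast(x0, beta_hat, v_hat, sigma2_hat, n, z=2.0):
    """Point forecast x0'beta_hat, CI for the conditional mean, and the simple forecast interval."""
    point = x0 @ beta_hat
    se_mean = np.sqrt(x0 @ v_hat @ x0 / n)                    # s.e. of the estimated conditional mean
    se_fcst = np.sqrt(sigma2_hat + x0 @ v_hat @ x0 / n)       # s.e. of the forecast error
    ci_mean = (point - z * se_mean, point + z * se_mean)
    ci_fcst = (point - z * se_fcst, point + z * se_fcst)      # the "simple and unjustified" interval
    return point, ci_mean, ci_fcst
```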

2.5 NonLinearity in Regressors


In the regression setting we are interested in $E(y_i \mid x_i = x) = m(x)$, which need not be a linear function of $x$, and its precise form may be unknown. A common approach is to employ a polynomial approximation. Consider the case of $x_i \in \mathbb{R}$. Then a $k$'th order polynomial model is
$$y_i = \beta_0 + \beta_1x_i + \beta_2x_i^2 + \cdots + \beta_kx_i^k + e_i.$$
Letting $\beta = (\beta_0, \beta_1, \ldots, \beta_k)$ and $z_i = (1, x_i, x_i^2, \ldots, x_i^k)$, this is the linear regression $y_i = z_i'\beta + e_i$.
Now suppose that $x \in \mathbb{R}^2$. A simple quadratic approximation is
$$y_i = \beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \beta_3x_{1i}^2 + \beta_4x_{2i}^2 + \beta_5x_{1i}x_{2i} + e_i.$$
As the dimensionality of $x$ increases, such approximations can become quite non-parsimonious! In practice, therefore, most applications do not appear to use more than quadratic terms. Some applications add cubics without interactions:
$$y_i = \beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \beta_3x_{1i}^2 + \beta_4x_{2i}^2 + \beta_5x_{1i}^3 + \beta_6x_{2i}^3 + \beta_7x_{1i}x_{2i} + e_i.$$
Non-linear approximations can also be made using alternative basis functions, such as Fourier series (sines and cosines), splines, neural nets, or wavelets.
Since these non-linear models are linear in the parameters, they can be estimated by OLS, and inference is conventional. However, the model is non-linear in the regressors, so interpretation must take this into account. For example, in the cubic model given above, the slope with respect to $x_{1i}$ is
$$\frac{\partial}{\partial x_{1i}}E(y_i \mid x_i) = \beta_1 + 2\beta_3x_{1i} + 3\beta_5x_{1i}^2 + \beta_7x_{2i},$$
which is a function of $x_{1i}$ and $x_{2i}$, making reporting of the "slope" difficult. In many applications, it will be important to report the slopes for different values of the regressors, carefully chosen to illustrate the point of interest. In other applications, an average slope may be sufficient. There are two obvious candidates: the derivative evaluated at the sample averages
$$\frac{\partial}{\partial x_{1i}}E(y_i \mid x_i)\Big|_{x_i = \bar{x}} = \beta_1 + 2\beta_3\bar{x}_1 + 3\beta_5\bar{x}_1^2 + \beta_7\bar{x}_2$$
and the average derivative
$$\frac{1}{n}\sum_{i=1}^{n}\frac{\partial}{\partial x_{1i}}E(y_i \mid x_i) = \beta_1 + 2\beta_3\bar{x}_1 + 3\beta_5\frac{1}{n}\sum_{i=1}^{n}x_{1i}^2 + \beta_7\bar{x}_2.$$
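As an illustration of how such slopes can be computed after OLS estimation of the cubic model above, here is a sketch; the column ordering of the design matrix is an assumption made for the example:

```python
import numpy as np

def cubic_design(x1, x2):
    """Columns: 1, x1, x2, x1^2, x2^2, x1^3, x2^3, x1*x2 (matching beta_0 ... beta_7 above)."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1**3, x2**3, x1 * x2])

def average_slope_x1(beta_hat, x1, x2):
    """Average derivative of E(y|x) with respect to x1 for the cubic model."""
    return (beta_hat[1] + 2 * beta_hat[3] * x1.mean()
            + 3 * beta_hat[5] * np.mean(x1 ** 2) + beta_hat[7] * x2.mean())

# After fitting: beta_hat = np.linalg.solve(z.T @ z, z.T @ y) with z = cubic_design(x1, x2)
```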

2.6 NonLinear Least Squares


We say that the regression function $m(x, \theta) = E(y_i \mid x_i = x)$ is nonlinear in the parameters if it cannot be written as $m(x, \theta) = z(x)'\theta$ for some function $z(x)$. Examples of nonlinear regression functions include
$$m(x, \theta) = \theta_1 + \theta_2\frac{x}{1 + \theta_3x}$$
$$m(x, \theta) = \theta_1 + \theta_2x^{\theta_3}$$
$$m(x, \theta) = \theta_1 + \theta_2\exp(\theta_3x)$$
$$m(x, \theta) = G(x'\theta), \quad G \text{ known}$$
$$m(x, \theta) = \theta_1 + \theta_2x_1 + \left(\theta_3 + \theta_4x_1\right)\Phi\left(\frac{x_2 - \theta_5}{\theta_6}\right)$$
$$m(x, \theta) = \theta_1 + \theta_2x + \theta_4\left(x - \theta_3\right)1\left(x > \theta_3\right)$$
$$m(x, \theta) = \left(\theta_1 + \theta_2x_1\right)1\left(x_2 < \theta_3\right) + \left(\theta_4 + \theta_5x_1\right)1\left(x_2 > \theta_3\right)$$
In the first five examples, $m(x, \theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_3$, which alters some of the analysis. When it exists, let
$$m_\theta(x, \theta) = \frac{\partial}{\partial\theta}m(x, \theta).$$
Nonlinear regression is frequently adopted because the functional form $m(x, \theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.
The least squares estimator $\hat\theta$ minimizes the sum-of-squared-errors
$$S_n(\theta) = \sum_{i=1}^{n}\left(y_i - m(x_i, \theta)\right)^2.$$
When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are $\hat{e}_i = y_i - m(x_i, \hat\theta)$.
One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta E\left(y_i - m(x_i, \theta)\right)^2$.
Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat\theta$ must be found by numerical methods. See Appendix E. When $m(x, \theta)$ is differentiable, then the FOC for minimization are
$$0 = \sum_{i=1}^{n}m_\theta(x_i, \hat\theta)\hat{e}_i. \qquad (2.4)$$

Theorem 2.6.1 If the model is identified and $m(x, \theta)$ is differentiable with respect to $\theta$,
$$\sqrt{n}\left(\hat\theta - \theta_0\right) \to_d N(0, V)$$
$$V = \left(E\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}\left(E\left(m_{\theta i}m_{\theta i}'e_i^2\right)\right)\left(E\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}$$
where $m_{\theta i} = m_\theta(x_i, \theta_0)$.
31
Sketch of Proof. First, it must be shown that ^ !p 0 : This can be done using arguments
for optimization estimators, but we won't cover that argument here. Since ^ !p 0 ; ^ is close to
0 for n large, so the minimization of Sn ( ) only needs to be examined for close to 0 : Let
yi0 = ei + m0 i 0 :
For close to the true value 0; by a rst-order Taylor series approximation,
m(xi ; ) ' m(xi ; 0) + m0 i ( 0) :

Thus
yi m(xi ; ) ' (ei + m(xi ; 0 )) m(xi ; 0) + m0 i ( 0)
0
= ei m i( 0)
= yi0 m 0
i :
Hence the sum of squared errors function is
Xn n
X
2 2
Sn ( ) = (yi m(xi ; )) ' yi0 m0 i
i=1 i=1

and the right-hand-side is the SSE function for a linear regression of yi0 on m i : Thus the NLLS
estimator ^ has the same asymptotic distribution as the (infeasible) OLS regression of yi0 on m i ;
which is that stated in the theorem.
Based on Theorem 2.6.1, an estimate of the asymptotic variance $V$ is
$$\hat V = \left(\frac{1}{n}\sum_{i=1}^n \hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n \hat m_{\theta i}\hat m_{\theta i}'\hat e_i^2\right)\left(\frac{1}{n}\sum_{i=1}^n \hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}$$
where $\hat m_{\theta i} = m_\theta(x_i,\hat\theta)$ and $\hat e_i = y_i - m(x_i,\hat\theta)$.
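To illustrate, the following Python sketch (not part of the original text) estimates the exponential specification $m(x,\theta) = \theta_1 + \theta_2\exp(\theta_3 x)$ listed above by Gauss-Newton iteration and computes the variance estimate $\hat V$. The simulated data, starting values, and tolerances are assumptions for this example only; in practice a more careful numerical method (see Appendix E) may be needed.

    import numpy as np

    def m(x, t):
        return t[0] + t[1] * np.exp(t[2] * x)

    def m_theta(x, t):
        # derivative of m with respect to theta: an n x 3 matrix
        return np.column_stack([np.ones_like(x), np.exp(t[2] * x), t[1] * x * np.exp(t[2] * x)])

    rng = np.random.default_rng(1)
    n = 500
    x = rng.uniform(0, 2, n)
    y = 1.0 + 2.0 * np.exp(-0.5 * x) + 0.2 * rng.normal(size=n)

    theta = np.array([0.5, 1.0, -0.1])               # starting value (an assumption)
    for _ in range(200):                             # Gauss-Newton: regress residuals on m_theta
        e = y - m(x, theta)
        M = m_theta(x, theta)
        step = np.linalg.lstsq(M, e, rcond=None)[0]
        # halve the step if it does not reduce the sum of squared errors
        while np.sum((y - m(x, theta + step))**2) > np.sum(e**2) and np.max(np.abs(step)) > 1e-12:
            step = step / 2.0
        theta = theta + step
        if np.max(np.abs(step)) < 1e-10:
            break

    e_hat = y - m(x, theta)
    M = m_theta(x, theta)
    Q = M.T @ M / n
    Omega = (M * e_hat[:, None]**2).T @ M / n        # (1/n) sum m_theta_i m_theta_i' e_i^2
    V = np.linalg.inv(Q) @ Omega @ np.linalg.inv(Q)  # sandwich estimate of V
    print(theta, np.sqrt(np.diag(V) / n))            # estimates and standard errors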
Identification is often tricky in nonlinear regression models. Suppose that
$$m(x_i,\theta) = \beta_1'z_i + \beta_2'x_i(\gamma).$$
The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
$$H_0 : \beta_2 = 0.$$
However, under $H_0$, the model is
$$y_i = \beta_1'z_i + e_i$$
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions.
The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.
2.7 Normal Regression Model
The normal regression model adds the additional assumption that the error $e_i$ is independent of $x_i$ and has distribution $N(0,\sigma^2)$. This is a parametric model, where likelihood methods can be used for estimation, testing, and distribution theory.
The log-likelihood function for the normal regression model is
$$L_n(\beta,\sigma^2) = \sum_{i=1}^n \log\left(\frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - x_i'\beta\right)^2\right)\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^n\left(y_i - x_i'\beta\right)^2.$$
The MLE $(\hat\beta,\hat\sigma^2)$ maximizes $L_n(\beta,\sigma^2)$. Since $L_n(\beta,\sigma^2)$ is a function of $\beta$ only through the sum of squared errors, maximizing the likelihood is identical to minimizing the sum of squared errors. Hence the MLE for $\beta$ equals the OLS estimator $\hat\beta = (X'X)^{-1}(X'Y)$.
Plugging this estimator into the log-likelihood we obtain
$$L_n(\hat\beta,\sigma^2) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^n \hat e_i^2.$$
Maximization with respect to $\sigma^2$ yields the first-order condition
$$\frac{\partial}{\partial\sigma^2}L_n(\hat\beta,\hat\sigma^2) = -\frac{n}{2\hat\sigma^2} + \frac{1}{2\left(\hat\sigma^2\right)^2}\hat e'\hat e = 0.$$
Solving for $\hat\sigma^2$ yields
$$\hat\sigma^2 = \frac{1}{n}\hat e'\hat e,$$
which is identical to the method of moments estimator. Thus the estimators are not affected by this assumption. Due to this equality, the OLS estimator $\hat\beta$ is frequently referred to as the Gaussian MLE.
Under the normality assumption, we see that the error vector $e$ is independent of $X$ and has distribution $N\left(0, I_n\sigma^2\right)$. Since linear functions of normals are also normal, this implies that conditional on $X$,
$$\begin{pmatrix} \hat\beta - \beta \\ \hat e \end{pmatrix} = \begin{pmatrix} (X'X)^{-1}X' \\ M \end{pmatrix}e \sim N\left(0, \begin{pmatrix} \sigma^2(X'X)^{-1} & 0 \\ 0 & \sigma^2 M \end{pmatrix}\right)$$
where $M = I - X(X'X)^{-1}X'$. Since uncorrelated normal variables are independent, it follows that $\hat\beta$ is independent of any function of the OLS residuals, including the estimated error variance $s^2$.
The spectral decomposition of $M$ yields
$$M = H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'$$
(see equation (15.5)) where $H'H = I_n$. Let $u = \sigma^{-1}H'e \sim N(0, H'H) = N(0, I_n)$. Then
$$\frac{(n-k)s^2}{\sigma^2} = \frac{1}{\sigma^2}\hat e'\hat e = \frac{1}{\sigma^2}e'Me = \frac{1}{\sigma^2}e'H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'e = u'\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}u \sim \chi^2_{n-k},$$
a chi-square distribution with $n-k$ degrees of freedom. Furthermore, if standard errors are calculated using the homoskedastic formula (1.22),
$$\frac{\hat\beta_j - \beta_j}{s(\hat\beta_j)} = \frac{\hat\beta_j - \beta_j}{\sqrt{s^2\left[(X'X)^{-1}\right]_{jj}}} \sim \frac{N\left(0,\sigma^2\left[(X'X)^{-1}\right]_{jj}\right)}{\sqrt{\frac{\sigma^2}{n-k}\chi^2_{n-k}\left[(X'X)^{-1}\right]_{jj}}} = \frac{N(0,1)}{\sqrt{\frac{\chi^2_{n-k}}{n-k}}} \sim t_{n-k},$$
a t distribution with $n-k$ degrees of freedom.


We summarize these findings.

Theorem 2.7.1 If $e_i$ is independent of $x_i$ and distributed $N(0,\sigma^2)$, and standard errors are calculated using the homoskedastic formula (1.22), then
$$\hat\beta - \beta \sim N\left(0,\sigma^2(X'X)^{-1}\right),$$
$$\frac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k},$$
$$\frac{\hat\beta_j - \beta_j}{s(\hat\beta_j)} \sim t_{n-k}.$$

In Chapter 1 we showed that in large samples, $\hat\beta$ and $t$ are approximately normally distributed. In contrast, Theorem 2.7.1 shows that under the strong assumption of normality, $\hat\beta$ has an exact normal distribution and $t$ has an exact t distribution. As inference (confidence intervals) is based on the t-ratio, the notable distinction is between the $N(0,1)$ and $t_{n-k}$ distributions. The critical values are quite close if $n-k \geq 30$, so as a practical matter it does not matter which distribution is used, unless the sample size is quite small.
Now let us partition $\beta = (\beta_1, \beta_2)$ and consider tests of the linear restriction
$$H_0 : \beta_2 = 0$$
$$H_1 : \beta_2 \neq 0.$$

In the context of parametric models, a good testing procedure is based on the likelihood ratio statistic, which is twice the difference in the log-likelihood function evaluated under the null and alternative hypotheses. The estimator under the alternative is the unrestricted estimator $(\hat\beta_1, \hat\beta_2, \hat\sigma^2)$ discussed above. The log-likelihood at these estimates is
$$L_n(\hat\beta_1,\hat\beta_2,\hat\sigma^2) = -\frac{n}{2}\log\left(2\pi\hat\sigma^2\right) - \frac{1}{2\hat\sigma^2}\hat e'\hat e = -\frac{n}{2}\log\hat\sigma^2 - \frac{n}{2}\log(2\pi) - \frac{n}{2}.$$
The MLE of the model under the null hypothesis is $(\tilde\beta_1, 0, \tilde\sigma^2)$ where $\tilde\beta_1$ is the OLS estimate from a regression of $y_i$ on $x_{1i}$ only, with residual variance $\tilde\sigma^2$. The log-likelihood of this model is
$$L_n(\tilde\beta_1, 0, \tilde\sigma^2) = -\frac{n}{2}\log\tilde\sigma^2 - \frac{n}{2}\log(2\pi) - \frac{n}{2}.$$
The LR statistic for $H_0$ is
$$LR = 2\left(L_n(\hat\beta_1,\hat\beta_2,\hat\sigma^2) - L_n(\tilde\beta_1, 0, \tilde\sigma^2)\right) = n\left(\log\tilde\sigma^2 - \log\hat\sigma^2\right) = n\log\frac{\tilde\sigma^2}{\hat\sigma^2}.$$
By a first-order Taylor series approximation,
$$LR = n\log\left(1 + \left(\frac{\tilde\sigma^2}{\hat\sigma^2} - 1\right)\right) \simeq n\left(\frac{\tilde\sigma^2}{\hat\sigma^2} - 1\right) = W_n,$$
the F statistic.
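For concreteness, the following Python sketch (an illustration, not from the original text) computes the LR statistic $n\log(\tilde\sigma^2/\hat\sigma^2)$ for testing $H_0:\beta_2=0$ on simulated data and compares it with a chi-square critical value. The design and sample size are arbitrary assumptions.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(2)
    n, k1, k2 = 200, 3, 2
    X1 = rng.normal(size=(n, k1))
    X2 = rng.normal(size=(n, k2))
    y = X1 @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)   # beta_2 = 0, so H0 is true

    def sigma2(Xmat):
        # residual variance (MLE) from an OLS fit on the columns of Xmat
        e = y - Xmat @ np.linalg.lstsq(Xmat, y, rcond=None)[0]
        return (e @ e) / n

    LR = n * (np.log(sigma2(X1)) - np.log(sigma2(np.column_stack([X1, X2]))))
    print(LR, LR > chi2.ppf(0.95, k2))   # reject H0 at the 5% level if True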

2.8 Least Absolute Deviations


At the beginning of this chapter, we stated that the conventional goal of regression is to estimate
the central tendency of the dependent variable yi given xi ; and that a good measure of central
tendency is the mean. It is not the only measure, however, and an alternative good measure of
central tendency is the median.

35
Let $Y$ be a continuous random variable with median $\theta_0 = \mathrm{Med}(Y)$. Define the sign function
$$\mathrm{sgn}(u) = \begin{cases} 1 & \text{if } u \geq 0 \\ -1 & \text{if } u < 0 \end{cases}$$
A few facts about the median are
$$P(Y \leq \theta_0) = P(Y > \theta_0) = .5,$$
$$E\,\mathrm{sgn}\left(Y - \theta_0\right) = 0,$$
$$\theta_0 = \min_\theta E\left|Y - \theta\right|.$$
Given a random sample $\{y_1, \ldots, y_n\}$ from this distribution, these three definitions motivate three estimators of $\theta$: the first suggests taking the 0.50 empirical quantile, the second suggests finding the solution to the moment equation $\frac{1}{n}\sum_{i=1}^n \mathrm{sgn}(y_i - \theta) = 0$, and the third suggests minimizing $\frac{1}{n}\sum_{i=1}^n |y_i - \theta|$. These distinctions are illusory, however, as these estimators are indeed identical.
Now let's consider the conditional median of $Y$ given a random variable $X$. Let $m(x) = \mathrm{Med}(Y \mid X = x)$ denote the conditional median of $Y$ given $X = x$, and let $\mathrm{Med}(Y \mid X) = m(X)$ be this function evaluated at the random variable $X$. The linear median regression model takes the form
$$y_i = x_i'\beta + e_i,$$
$$\mathrm{Med}(e_i \mid x_i) = 0.$$
In this model, the linear function $\mathrm{Med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption here is that the median function is linear in $x$.
Conditional analogs of the facts about the median are
$$P\left(y_i \leq x'\beta_0 \mid x_i = x\right) = P\left(y_i > x'\beta_0 \mid x_i = x\right) = .5,$$
$$E\left(\mathrm{sgn}(e_i) \mid x_i\right) = 0,$$
$$E\left(x_i\,\mathrm{sgn}(e_i)\right) = 0,$$
$$\beta_0 = \min_\beta E\left|y_i - x_i'\beta\right|.$$
These facts motivate the following estimator. Let
$$L_n(\beta) = \frac{1}{n}\sum_{i=1}^n\left|y_i - x_i'\beta\right|$$
be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function:
$$\hat\beta = \mathrm{argmin}_\beta\, L_n(\beta).$$
Equivalently, it is a solution to the moment condition
$$\frac{1}{n}\sum_{i=1}^n x_i\,\mathrm{sgn}\left(y_i - x_i'\hat\beta\right) = 0. \qquad (2.5)$$
Let $f(e \mid x)$ denote the conditional density of $e_i$ given $x_i = x$. The LAD estimator has the asymptotic distribution

Theorem 2.8.1 $\sqrt{n}\left(\hat\beta - \beta_0\right) \to_d N(0,V)$, where
$$V = \frac{1}{4}\left(E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}\left(Ex_ix_i'\right)\left(E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}.$$
The variance of the asymptotic distribution inversely depends on $f(0 \mid x)$, the conditional density of the error at its median. When $f(0 \mid x)$ is large, then there are many innovations near the median, and this improves estimation of the median. In the special case where the error is independent of $x_i$, then $f(0 \mid x) = f(0)$ and the asymptotic variance simplifies to
$$V = \frac{\left(Ex_ix_i'\right)^{-1}}{4f(0)^2}. \qquad (2.6)$$
This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.
Computation of standard errors for LAD estimates is typically based on equation (2.6). The main difficulty is the estimation of $f(0)$, the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 13. The proof of Theorem 2.8.1 is a bit advanced, but we provide it here for completeness.
Proof of Theorem 2.8.1. Since $\mathrm{sgn}(a) = 1 - 2\cdot 1(a \leq 0)$, (2.5) is equivalent to $\bar g_n(\hat\beta) = 0$, where $\bar g_n(\beta) = n^{-1}\sum_{i=1}^n g_i(\beta)$ and $g_i(\beta) = x_i\left(1 - 2\cdot 1\left(y_i \leq x_i'\beta\right)\right)$. Let $g(\beta) = Eg_i(\beta)$. We need three preliminary results. First, by the central limit theorem,
$$\sqrt{n}\left(\bar g_n(\beta_0) - g(\beta_0)\right) = n^{-1/2}\sum_{i=1}^n g_i(\beta_0) \to_d N\left(0, Ex_ix_i'\right)$$
since $Eg_i(\beta_0)g_i(\beta_0)' = Ex_ix_i'$. Second, using the law of iterated expectations and the chain rule of differentiation,
$$\frac{\partial}{\partial\beta'}g(\beta) = \frac{\partial}{\partial\beta'}E\left[x_i\left(1 - 2\cdot 1\left(y_i \leq x_i'\beta\right)\right)\right] = -2\frac{\partial}{\partial\beta'}E\left[x_iE\left(1\left(e_i \leq x_i'\beta - x_i'\beta_0\right) \mid x_i\right)\right]$$
$$= -2\frac{\partial}{\partial\beta'}E\left[x_i\int_{-\infty}^{x_i'\beta - x_i'\beta_0}f(e \mid x_i)\,de\right] = -2E\left[x_ix_i'f\left(x_i'\beta - x_i'\beta_0 \mid x_i\right)\right],$$
so
$$\frac{\partial}{\partial\beta'}g(\beta_0) = -2E\left[x_ix_i'f(0 \mid x_i)\right].$$
Third, by a Taylor series expansion and the fact $g(\beta_0) = 0$,
$$g(\hat\beta) \simeq \frac{\partial}{\partial\beta'}g(\beta_0)\left(\hat\beta - \beta_0\right).$$
Together,
$$\sqrt{n}\left(\hat\beta - \beta_0\right) \simeq \left(\frac{\partial}{\partial\beta'}g(\beta_0)\right)^{-1}\sqrt{n}\,g(\hat\beta)$$
$$= \left(-2E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}\sqrt{n}\left(g(\hat\beta) - \bar g_n(\hat\beta)\right)$$
$$\simeq \frac{1}{2}\left(E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}\sqrt{n}\left(\bar g_n(\beta_0) - g(\beta_0)\right)$$
$$\to_d \frac{1}{2}\left(E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}N\left(0, Ex_ix_i'\right) = N(0,V).$$
The third line follows from an asymptotic empirical process argument.
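The following Python sketch (not part of the original text) computes the LAD estimator by iteratively reweighted least squares, one simple way to approximate the minimizer of $L_n(\beta)$, and then forms standard errors from formula (2.6) using a Gaussian kernel estimate of $f(0)$. The bandwidth rule and the simulated design are assumptions for the example.

    import numpy as np

    def lad(y, X, iters=200, eps=1e-6):
        # iteratively reweighted least squares approximation to the LAD minimizer
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(iters):
            w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
            Xw = X * w[:, None]
            beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            if np.max(np.abs(beta_new - beta)) < 1e-8:
                return beta_new
            beta = beta_new
        return beta

    rng = np.random.default_rng(3)
    n = 1000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)   # heavy-tailed errors

    b = lad(y, X)
    e = y - X @ b
    h = 1.06 * e.std() * n**(-0.2)                                # rule-of-thumb bandwidth (an assumption)
    f0 = np.mean(np.exp(-0.5 * (e / h)**2) / np.sqrt(2*np.pi)) / h  # kernel estimate of f(0)
    V = np.linalg.inv(X.T @ X / n) / (4 * f0**2)                  # formula (2.6), error independent of x
    print(b, np.sqrt(np.diag(V) / n))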

38
Chapter 3

Model Selection

3.1 Omitted Variables


Let x1i and x2i be two sets of regressors. We can de ne

g1 (x1 ) = E (yi j x1i = x1 )

and
g2 (x1 ; x2 ) = E (yi j x1i = x1 ; x2i = x2 ) :
Both of these functions exist and are well de ned. Given data, either can be estimated. Thus, if
the function g1 (x1 ) is estimated by regression of yi on x1i only, there is no bias.
However, the function $g_1$ may not be of interest. Rather, the function $g_2$ may be of interest. Thus if $g_1$ is estimated when the true relationship of interest is $g_2$, then there will be estimation bias. That is, what may be of interest is the effect of $x_{1i}$ on the conditional mean of $y_i$, holding $x_{2i}$ constant, namely $\frac{\partial}{\partial x_1}g_2(x_1,x_2)$, and in general
$$\frac{\partial}{\partial x_1}g_2(x_1,x_2) \neq \frac{\partial}{\partial x_1}g_1(x_1).$$
In this sense, omission of x2i from the regression can induce bias.
Another way to see this is by focusing on the linear regression model. Suppose that

$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i,$$
$$E(e_i \mid x_{1i}, x_{2i}) = 0.$$
Then
$$E(y_i \mid x_{1i}) = E\left(x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i \mid x_{1i}\right) = x_{1i}'\beta_1 + E(x_{2i} \mid x_{1i})'\beta_2 \neq x_{1i}'\beta_1.$$
Thus a regression of $y_i$ on $x_{1i}$ does not yield the coefficient $\beta_1$, unless $E(x_{2i} \mid x_{1i}) = 0$ or $\beta_2 = 0$. Furthermore, suppose $E(x_{2i} \mid x_{1i}) = \Gamma x_{1i}$. Then
$$E(y_i \mid x_{1i}) = x_{1i}'\beta_1 + \left(\Gamma x_{1i}\right)'\beta_2 = x_{1i}'\left(\beta_1 + \Gamma'\beta_2\right).$$
So a regression of $y_i$ on $x_{1i}$ will consistently estimate $\beta_1 + \Gamma'\beta_2$. The coefficient $\beta_1$ cannot be uncovered from this regression, unless $\Gamma = 0$ or $\beta_2 = 0$, and thus the regression is "biased" if the parameter $\beta_1$ is of interest.
Notice that the omitted variable bias problem disappears if = 0 (so x1i and x2i are uncor-
related) or if 2 = 0 (so x2i does not enter the joint regression). The rst can be assessed by
examining the correlation between x1i and x2i ; but the second can only be assessed by computing
the joint regression. Therefore the standard advice is when in doubt, to always estimate the more
general model, since it is by construction free of the omitted variables problem.

3.2 Irrelevant Variables


In the model
yi = x01i 1 + x02i 2 + ei
E (ei j x1i ; x2i ) = 0;
x2i is \irrelevant" if 1 is the parameter of interest and 2 = 0: That is, the truth can be written
as
yi = x01i 1 + ei
E (ei j x1i ; x2i ) = 0:
One estimator of 1 is to regress yi on x1i alone, denoted ~ 1 : Another is to regress yi on x1i and
x2i ; yielding ( ^ 1 ; ^ 2 ): Under which conditions is ^ 1 or ~ 1 superior?
First, it is easy to see that both are unbiased and consistent for 1 : So in comparison with
the problem of omitted variables, we see that the presence (or absence) of irrelevant variables is
relatively less important.
Second, we can consider the relative efficiency of $\tilde\beta_1$ versus $\hat\beta_1$. It is harder to make comparisons in the general case, so we focus on the homoskedastic case $E\left(e_i^2 \mid x_{1i}, x_{2i}\right) = \sigma^2$. Then
$$\lim_{n\to\infty} n\,\mathrm{Var}(\tilde\beta_1 \mid X) = \left(Ex_{1i}x_{1i}'\right)^{-1}\sigma^2 = Q_{11}^{-1}\sigma^2,$$
say, and
$$\lim_{n\to\infty} n\,\mathrm{Var}(\hat\beta_1 \mid X) = \left(Ex_{1i}x_{1i}' - Ex_{1i}x_{2i}'\left(Ex_{2i}x_{2i}'\right)^{-1}Ex_{2i}x_{1i}'\right)^{-1}\sigma^2 = \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1}\sigma^2,$$
say. If $Ex_{1i}x_{2i}' = 0$ (so the variables are uncorrelated) then these two variance matrices are equal, and the two estimators have equal asymptotic efficiency.

Proposition 3.2.1 If $Ex_{1i}x_{2i}' = 0$, then $\hat\beta_1$ and $\tilde\beta_1$ are both consistent and have equal asymptotic variances.

When $Ex_{1i}x_{2i}' \neq 0$, however, then $Q_{12}Q_{22}^{-1}Q_{21} > 0$ and
$$Q_{11} = \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right) + Q_{12}Q_{22}^{-1}Q_{21} > Q_{11} - Q_{12}Q_{22}^{-1}Q_{21},$$
so
$$Q_{11}^{-1} < \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1},$$
meaning that $\tilde\beta_1$ has a lower asymptotic variance matrix than $\hat\beta_1$. The inclusion of irrelevant variables thus reduces efficiency if these variables are correlated with the relevant variables.

Proposition 3.2.2 If $Ex_{1i}x_{2i}' \neq 0$, then $\hat\beta_1$ and $\tilde\beta_1$ are both consistent, and
$$\lim_{n\to\infty} n\,\mathrm{Var}(\tilde\beta_1 \mid X) < \lim_{n\to\infty} n\,\mathrm{Var}(\hat\beta_1 \mid X).$$

3.3 Model Selection


We have discussed the costs and bene ts of inclusion/exclusion of variables. How does a researcher
go about selecting an econometric speci cation, when economic theory does not provide complete
guidance? This is the question of model selection. It is important that the model selection question
be well-posed. For example, the question: \What is the right model for y?" is not well posed,
because it does not make clear the conditioning set. In contrast, the question, \Which subset of
(x1 ; :::; xK ) enters the regression function E(yi j x1i = x1 ; :::; xKi = xK )?" is well posed.
In many cases the problem of model selection can be reduced to the comparison of two nested
models, as the larger problem can be written as a sequence of such comparisons. We thus consider
the question of the inclusion of X2 in the linear regression

$$Y = X_1\beta_1 + X_2\beta_2 + e,$$
where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$. This is equivalent to the comparison of the two models
$$\mathcal{M}_1 : Y = X_1\beta_1 + e, \quad E(e \mid X_1, X_2) = 0,$$
$$\mathcal{M}_2 : Y = X_1\beta_1 + X_2\beta_2 + e, \quad E(e \mid X_1, X_2) = 0.$$
Note that $\mathcal{M}_1 \subset \mathcal{M}_2$. To be concrete, we say that $\mathcal{M}_2$ is true if $\beta_2 \neq 0$.


To fix notation, models 1 and 2 are estimated by OLS, with residual vectors $\hat e_1$ and $\hat e_2$, estimated variances $\hat\sigma_1^2$ and $\hat\sigma_2^2$, etc., respectively. To simplify some of the statistical discussion, we will on occasion use the homoskedasticity assumption $E\left(e_i^2 \mid x_{1i}, x_{2i}\right) = \sigma^2$.
A model selection procedure is a data-dependent rule which selects one of the two models. We can write this as $\hat{\mathcal{M}}$. There are many possible desirable properties for a model selection procedure. One useful property is consistency, that it selects the true model with probability one if the sample is sufficiently large. A model selection procedure is consistent if
$$P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1,$$
$$P\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) \to 1.$$

We now discuss a number of possible model selection methods.


Selection Based on Fit
Natural measures of fit of a regression are the residual sum of squares $\hat e'\hat e$, $R^2 = 1 - (\hat e'\hat e)/\hat\sigma_y^2$, or the Gaussian log-likelihood $l = -(n/2)\log\hat\sigma^2$. It might therefore be thought attractive to base a model selection procedure on one of these measures of fit. The problem is that each of these measures is necessarily monotonic between nested models, namely $\hat e_1'\hat e_1 \geq \hat e_2'\hat e_2$, $R_1^2 \leq R_2^2$, and $l_1 \leq l_2$, so model $\mathcal{M}_2$ would always be selected, regardless of the actual data and probability structure. This is clearly an inappropriate decision rule!
Selection based on Testing
A common approach to model selection is to base the decision on a statistical test such as the Wald statistic $W_n$. The model selection rule is as follows. For some critical level $\alpha$, let $c_\alpha$ satisfy $P\left(\chi^2_{k_2} > c_\alpha\right) = \alpha$. Then select $\mathcal{M}_1$ if $W_n \leq c_\alpha$, else select $\mathcal{M}_2$.
The major problem with this approach is that the critical level $\alpha$ is indeterminate. The reasoning which helps guide the choice of $\alpha$ in hypothesis testing (controlling Type I error) is not relevant for model selection. That is, if $\alpha$ is set to be a small number, then $P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \approx 1$ but $P\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right)$ could vary dramatically, depending on the sample size, etc. Another problem is that if $\alpha$ is held fixed, then this model selection procedure is inconsistent, as $P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1 - \alpha < 1$.
Adjusted R-squared
Since $R^2$ is not a useful model selection rule, as it always "prefers" the larger model, Theil proposed an adjusted coefficient of determination
$$\bar R^2 = 1 - \frac{(\hat e'\hat e)/(n-k)}{\hat\sigma_y^2} = 1 - \frac{s^2}{\hat\sigma_y^2}.$$
At one time, it was popular to pick between models based on $\bar R^2$. This rule is to select $\mathcal{M}_1$ if $\bar R_1^2 > \bar R_2^2$, else select $\mathcal{M}_2$. Since $\bar R^2$ is a monotonically decreasing function of $s^2$, this rule is the same as selecting the model with the smaller $s^2$, or equivalently, the smaller $\log(s^2)$. It is helpful to observe that
$$\log(s^2) = \log\left(\hat\sigma^2\frac{n}{n-k}\right) = \log\hat\sigma^2 + \log\left(1 + \frac{k}{n-k}\right) \simeq \log\hat\sigma^2 + \frac{k}{n-k} \simeq \log\hat\sigma^2 + \frac{k}{n}$$
(the first approximation is $\log(1+x) \simeq x$ for small $x$). Thus selecting based on $\bar R^2$ is the same as selecting based on $\log\hat\sigma^2 + \frac{k}{n}$, which is a particular choice of penalized likelihood criterion. It turns out that model selection based on any criterion of the form
$$\log\hat\sigma^2 + c\frac{k}{n}, \quad c > 0, \qquad (3.1)$$
is inconsistent, as the rule tends to overfit. Indeed, since under $\mathcal{M}_1$,
$$LR_n = n\left(\log\hat\sigma_1^2 - \log\hat\sigma_2^2\right) \simeq W_n \to_d \chi^2_{k_2}, \qquad (3.2)$$
$$P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) = P\left(\bar R_1^2 > \bar R_2^2 \mid \mathcal{M}_1\right) \simeq P\left(n\log(s_1^2) < n\log(s_2^2) \mid \mathcal{M}_1\right)$$
$$\simeq P\left(n\log\hat\sigma_1^2 + ck_1 < n\log\hat\sigma_2^2 + c(k_1 + k_2) \mid \mathcal{M}_1\right) = P\left(LR_n < ck_2 \mid \mathcal{M}_1\right) \to P\left(\chi^2_{k_2} < ck_2\right) < 1.$$

Akaike Information Criterion

Akaike proposed an information criterion which takes the form (3.1) with $c = 2$:
$$AIC = \log\hat\sigma^2 + 2\frac{k}{n}. \qquad (3.3)$$
This imposes a larger penalty on overparameterization than does $\bar R^2$. Akaike's motivation for this criterion is that a good measure of the fit of a model density $f(Y \mid X, \mathcal{M})$ to the true density $f(Y \mid X)$ is the Kullback distance $K(\mathcal{M}) = E\left(\log f(Y \mid X) - \log f(Y \mid X, \mathcal{M})\right)$. The log-likelihood function provides a decent estimate of this distance, but it is biased, and a better, less-biased estimate can be obtained by introducing the penalty $2k$. The actual derivation is not very enlightening, and the motivation for the argument is not fully satisfactory, so we omit the details. Despite these concerns, the AIC is a popular method of model selection. The rule is to select $\mathcal{M}_1$ if $AIC_1 < AIC_2$, else select $\mathcal{M}_2$.
Since the AIC criterion (3.3) takes the form (3.1), it is an inconsistent model selection criterion, and tends to overfit.
Schwarz Criterion
While many modifications of the AIC have been proposed, the most popular appears to be one proposed by Schwarz, based on Bayesian arguments. His criterion, known as the BIC, is
$$BIC = \log\hat\sigma^2 + \log(n)\frac{k}{n}. \qquad (3.4)$$
Since $\log(n) > 2$ (if $n > 8$), the BIC places a larger penalty than the AIC on the number of estimated parameters and is more parsimonious.
In contrast to the other methods studied above, BIC model selection is consistent. Indeed, since (3.2) holds under $\mathcal{M}_1$,
$$\frac{LR_n}{\log(n)} \to_p 0,$$
so
$$P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) = P\left(BIC_1 < BIC_2 \mid \mathcal{M}_1\right) = P\left(LR_n < \log(n)k_2 \mid \mathcal{M}_1\right) = P\left(\frac{LR_n}{\log(n)} < k_2 \mid \mathcal{M}_1\right) \to P(0 < k_2) = 1.$$
Also under $\mathcal{M}_2$, one can show that
$$\frac{LR_n}{\log(n)} \to_p \infty,$$
thus
$$P\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) = P\left(\frac{LR_n}{\log(n)} > k_2 \mid \mathcal{M}_2\right) \to 1.$$

Selection Among Multiple Regressors


We have discussed model selection between two models. The methods extend readily to the issue of selection among multiple regressors. The general problem is the model
$$y_i = \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_K x_{Ki} + e_i, \quad E(e_i \mid x_i) = 0,$$
and the question is which subset of the coefficients are non-zero (equivalently, which regressors enter the regression).
There are two leading cases: ordered regressors and unordered.
In the ordered case, the models are
$$\mathcal{M}_1 : \beta_1 \neq 0,\ \beta_2 = \beta_3 = \cdots = \beta_K = 0$$
$$\mathcal{M}_2 : \beta_1 \neq 0,\ \beta_2 \neq 0,\ \beta_3 = \cdots = \beta_K = 0$$
$$\vdots$$
$$\mathcal{M}_K : \beta_1 \neq 0,\ \beta_2 \neq 0, \ldots,\ \beta_K \neq 0,$$
which are nested. The AIC selection criterion estimates the $K$ models by OLS, stores the residual variance $\hat\sigma^2$ for each model, and then selects the model with the lowest AIC (3.3). Similarly for the BIC, selecting based on (3.4).
In the unordered case, a model consists of any possible subset of the regressors $\{x_{1i}, \ldots, x_{Ki}\}$, and the AIC or BIC in principle can be implemented by estimating all possible subset models. However, there are $2^K$ such models, which can be a very large number. For example, $2^{10} = 1024$, and $2^{20} = 1{,}048{,}576$. In the latter case, a full-blown implementation of the BIC selection criterion would seem computationally prohibitive.
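As an illustration of the ordered-regressors case (not part of the original text), the following Python sketch estimates the $K$ nested models by OLS and selects by AIC (3.3) and BIC (3.4). The simulated design is an assumption for the example.

    import numpy as np

    def ic(y, X, penalty):
        # log(sigma2_hat) + penalty * k / n for an OLS fit
        n, k = X.shape
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        return np.log(e @ e / n) + penalty * k / n

    rng = np.random.default_rng(4)
    n, K = 200, 6
    X = rng.normal(size=(n, K))
    y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # only the first two regressors matter

    # ordered case: model j uses the first j regressors
    aic = [ic(y, X[:, :j], 2.0) for j in range(1, K + 1)]
    bic = [ic(y, X[:, :j], np.log(n)) for j in range(1, K + 1)]
    print(1 + int(np.argmin(aic)), 1 + int(np.argmin(bic)))  # selected model sizes

In repeated samples the BIC choice settles on the true model size, while the AIC choice retains a positive probability of selecting a larger model, illustrating the consistency discussion above.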

3.4 Testing for Omitted NonLinearity


If the goal is to estimate the conditional expectation $E(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.
One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat\beta + \hat e_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde e_i$ by OLS, and form a Wald statistic for $\gamma = 0$.
Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
$$y_i = x_i'\beta + e_i,$$
which is estimated by OLS, yielding predicted values $\hat y_i = x_i'\hat\beta$. Now let
$$z_i = \begin{pmatrix} \hat y_i^2 \\ \vdots \\ \hat y_i^m \end{pmatrix}$$
be an $(m-1)$-vector of powers of $\hat y_i$. Then run the auxiliary regression
$$y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde e_i \qquad (3.5)$$
by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. It is easy (although somewhat tedious) to show that under the null hypothesis, $W_n \to_d \chi^2_{m-1}$. Thus the null is rejected at the $\alpha\%$ level if $W_n$ exceeds the upper $\alpha\%$ tail critical value of the $\chi^2_{m-1}$ distribution.

45
To implement the test, m must be selected in advance. Typically, small values such as m = 2,
3, or 4 seem to work best.
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
$$y_i = G(x_i'\beta) + e_i$$
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (3.5) may be written as
$$y_i = x_i'\tilde\beta + \left(x_i'\hat\beta\right)^2\tilde\gamma_1 + \left(x_i'\hat\beta\right)^3\tilde\gamma_2 + \cdots + \left(x_i'\hat\beta\right)^m\tilde\gamma_{m-1} + \tilde e_i,$$
which has essentially approximated $G(\cdot)$ by an $m$'th order polynomial.
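A sketch of the RESET procedure in Python (not from the original text): the simulated single-index design and the homoskedastic form of the Wald statistic are assumptions made for this illustration.

    import numpy as np
    from scipy.stats import chi2

    def reset_test(y, X, m=3):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]          # fit the null model by OLS
        yhat = X @ beta
        Z = np.column_stack([yhat**p for p in range(2, m + 1)])
        XZ = np.column_stack([X, Z])
        coef = np.linalg.lstsq(XZ, y, rcond=None)[0]         # auxiliary regression (3.5)
        e = y - XZ @ coef
        n, q = len(y), Z.shape[1]                            # q = m - 1
        s2 = e @ e / (n - XZ.shape[1])
        V = s2 * np.linalg.inv(XZ.T @ XZ)                    # homoskedastic covariance (an assumption)
        gamma = coef[-q:]
        W = gamma @ np.linalg.solve(V[-q:, -q:], gamma)      # Wald statistic for gamma = 0
        return W, 1 - chi2.cdf(W, q)

    rng = np.random.default_rng(5)
    n = 300
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = np.exp(0.5 + 0.5 * x) + 0.5 * rng.normal(size=n)     # single-index alternative
    print(reset_test(y, X, m=3))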

3.5 log(Y ) versus Y as Dependent Variable


An econometrician can estimate Y = X ^ + e^ or log(Y ) = X ^ + e^ (or perhaps both). Which is
preferable? There is a large literature on this subject, much of it quite misleading.
The plain truth is that either regression is \okay", in the sense that both E (yi j xi ) and
E (log (yi ) j xi ) are well-de ned (so long as yi > 0): It is perfectly valid to estimate either or both
regressions. They are di erent regression functions, neither is more nor less valid than the other.
To test one speci cation versus the other, or select one speci cation over the other, requires the
imposition of additional structure, such as the assumptions that the conditional expectation is
linear in xi ; and ei N (0; 2 ):
There still may be good reasons for preferring the log(Y ) regression over the Y regression.
First, it may be the case that E (log (yi ) j xi ) is roughly linear in xi over the support of xi ; while
the regression E (yi j xi ) is non-linear, and linear models are easier to report and interpret. Second,
it may be the case that the errors in ei = log (yi ) E (log (yi ) j xi ) may be less heteroskedastic than
the errors from the linear speci cation (although the reverse may be true!). Finally, and this may be
the most important reason, if the distribution of yi is highly skewed, the conditional mean E (yi j xi )
may not be a useful measure of central tendency, and estimates will be undesirably in uenced by
extreme observations (\outliers"). In this case, the conditional mean-log E (log (yi ) j xi ) may be a
better measure of central tendency, and hence more interesting to estimate and report.

46
Chapter 4

Generalized Least Squares

4.1 GLS and the Gauss-Markov Theorem


The linear regression model

$$y_i = x_i'\beta + e_i,$$
$$E(e_i \mid x_i) = 0,$$
imposes the condition of zero conditional mean. This is stronger than the orthogonality condition $E(x_ie_i) = 0$. This stronger condition can be exploited to improve estimation efficiency. Recall the definition of the conditional variance $\sigma_i^2 = E\left(e_i^2 \mid x_i\right)$.
The Generalized Least Squares (GLS) estimator of $\beta$ is
$$\tilde\beta = \left(X'D^{-1}X\right)^{-1}X'D^{-1}Y \qquad (4.1)$$
where
$$D = \mathrm{diag}\{\sigma_1^2, \ldots, \sigma_n^2\} = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}.$$
The GLS estimator is sometimes called the Aitken estimator.
Since $Y = X\beta + e$, then
$$\tilde\beta = \beta + \left(X'D^{-1}X\right)^{-1}X'D^{-1}e.$$
Since $D$ is a function of $X$, $E\left(\tilde\beta \mid X\right) = \beta$ and
$$\mathrm{Var}(\tilde\beta \mid X) = \left(X'D^{-1}X\right)^{-1}X'D^{-1}DD^{-1}X\left(X'D^{-1}X\right)^{-1} = \left(X'D^{-1}X\right)^{-1}.$$

47
The class of unbiased linear estimators takes the form
$$\tilde\beta_L = A(X)'Y, \qquad A(X)'X = I_k,$$
where $A(X)$, $n \times k$, is a function only of $X$. This class is called linear because each estimator is a linear function of $Y$, even though it may be nonlinear in $X$. OLS is the case $A(X) = X(X'X)^{-1}$ and GLS is the case $A(X) = D^{-1}X\left(X'D^{-1}X\right)^{-1}$. Observe that
$$E\left(\tilde\beta_L \mid X\right) = A(X)'X\beta = \beta,$$
so $\tilde\beta_L$ is unbiased. Thus $\tilde\beta_L = \beta + A(X)'e$, and its variance is
$$\mathrm{Var}(\tilde\beta_L \mid X) = A(X)'DA(X).$$
The "best" estimator within this class is the one with the smallest variance.

Theorem 4.1.1 (Gauss-Markov). The best (minimum-variance) unbiased linear estimator is GLS.

Proof. Let $A^* = D^{-1}X\left(X'D^{-1}X\right)^{-1}$ and let $A$ be any other $n \times k$ function of $X$ such that $A'X = I_k$. We need to show that $A'DA \geq A^{*\prime}DA^*$.
Let $C = A - A^*$. Note that
$$C'X = A'X - A^{*\prime}X = I_k - I_k = 0$$
and
$$C'DA^* = C'DD^{-1}X\left(X'D^{-1}X\right)^{-1} = C'X\left(X'D^{-1}X\right)^{-1} = 0.$$
Then
$$A'DA = (C + A^*)'D(C + A^*) = C'DC + C'DA^* + A^{*\prime}DC + A^{*\prime}DA^* = C'DC + A^{*\prime}DA^* \geq A^{*\prime}DA^*.$$

The Gauss-Markov theorem tells us that the OLS estimator is ine cient in linear regression
models, and that within the class of linear estimators, GLS is e cient. However, the restriction
to linear estimators is unsatisfactory, as the theorem leaves open the possibility that a non-linear
estimator could have lower mean squared error than the GLS estimator.

48
Chamberlain (1987) established a more powerful and general result. He showed that in the
regression model, no regular consistent estimator can have a lower asymptotic variance than the
GLS estimator. This establishes that the GLS estimator is asymptotically e cient. The proof of
his theorem is quite deep and we cannot cover it here.
In Section 1.3 we claimed that OLS is asymptotically e cient in the class of models with
E (xi ei ) = 0: Now we have shown that OLS is ine cient if E (ei j xi ) = 0: The gain of e ciency
(through use of GLS) comes through the exploitation of the stronger conditional mean assumption,
which has the cost of reduced robustness. If E (ei j xi ) 6= 0 then the GLS estimator will be
inconsistent for the projection coe cient ; but OLS will be consistent.

4.2 Skedastic Regression


Except in the special case of homoskedastic errors (where $D = I_n\sigma^2$ and GLS $=$ OLS), the results of the previous section show that in the regression model OLS is inefficient. However, the GLS estimator is not feasible since $D$ is unknown. The next few sections explore a method for feasible implementation of an approximate GLS estimator.
Suppose that the conditional variance takes the parametric form
$$\sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha'z_i,$$
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.
Let $\eta_i = e_i^2$. Then
$$E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1$$
and we have the regression equation
$$\eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i, \qquad (4.2)$$
$$E(\xi_i \mid x_i) = 0.$$
It is helpful to think about the regression error $\xi_i$. It has conditional variance
$$\mathrm{Var}(\xi_i \mid x_i) = \mathrm{Var}\left(e_i^2 \mid x_i\right) = E\left(\left(e_i^2 - E\left(e_i^2 \mid x_i\right)\right)^2 \mid x_i\right) = E\left(e_i^4 \mid x_i\right) - \left(E\left(e_i^2 \mid x_i\right)\right)^2.$$
If $e_i$ is heteroskedastic, then $\mathrm{Var}(\xi_i \mid x_i)$ will depend on $x_i$. In contrast, when $e_i$ is independent of $x_i$ then it is a constant,
$$\mathrm{Var}(\xi_i \mid x_i) = E\left(e_i^4\right) - \sigma^4,$$
and under normality it simplifies to
$$\mathrm{Var}(\xi_i \mid x_i) = 2\sigma^4. \qquad (4.3)$$

49
4.3 Estimation of Skedastic Regression

Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
$$\hat\alpha = \left(Z'Z\right)^{-1}Z'\eta \to_p \alpha$$
and
$$\sqrt{n}\left(\hat\alpha - \alpha\right) \to_d N(0, V_\alpha)$$
where
$$V_\alpha = \left(Ez_iz_i'\right)^{-1}E\left(z_iz_i'\xi_i^2\right)\left(Ez_iz_i'\right)^{-1}. \qquad (4.4)$$
While $e_i$ is not observed, we have the OLS residual $\hat e_i = y_i - x_i'\hat\beta = e_i - x_i'(\hat\beta - \beta)$. Thus
$$\hat\eta_i - \eta_i = \hat e_i^2 - e_i^2 = -2e_ix_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'x_ix_i'\left(\hat\beta - \beta\right) = \phi_i,$$
say. Note that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i\phi_i = -\frac{2}{n}\sum_{i=1}^n z_ie_ix_i'\,\sqrt{n}\left(\hat\beta - \beta\right) + \frac{1}{n}\sum_{i=1}^n z_i\left(\hat\beta - \beta\right)'x_ix_i'\left(\hat\beta - \beta\right)\sqrt{n} \to_p 0.$$
Let
$$\tilde\alpha = \left(Z'Z\right)^{-1}Z'\hat\eta \qquad (4.5)$$
be from OLS regression of $\hat\eta_i$ on $z_i$. Then
$$\sqrt{n}\left(\tilde\alpha - \alpha\right) = \sqrt{n}\left(\hat\alpha - \alpha\right) + \left(\frac{1}{n}Z'Z\right)^{-1}\frac{1}{\sqrt{n}}Z'\phi \to_d N(0, V_\alpha). \qquad (4.6)$$
Thus the fact that $\eta_i$ is replaced with $\hat\eta_i$ is asymptotically irrelevant. We may call (4.5) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$.
We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma_i^2 = z_i'\alpha$ by $\tilde\sigma_i^2 = z_i'\tilde\alpha$. We now discuss how to use these results to test hypotheses on $\alpha$, and construct a FGLS estimator for $\beta$.

4.4 Testing for Heteroskedasticity


The hypothesis of homoskedasticity is that $E\left(e_i^2 \mid x_i\right) = \sigma^2$, or equivalently that
$$H_0 : \alpha_1 = 0$$
in the regression (4.2). We may therefore test this hypothesis by estimating (4.5) and constructing a Wald statistic.
This hypothesis does not imply that $\xi_i$ is independent of $x_i$. Typically, however, we impose the stronger hypothesis and test the hypothesis that $e_i$ is independent of $x_i$, in which case $\xi_i$ is independent of $x_i$ and the asymptotic variance (4.4) for $\tilde\alpha$ simplifies to
$$V_\alpha = \left(Ez_iz_i'\right)^{-1}E\left(\xi_i^2\right). \qquad (4.7)$$
Hence the standard test of $H_0$ is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (4.5). The asymptotic distribution (4.6) and the asymptotic variance (4.7) under independence show that this test has an asymptotic chi-square distribution.

Theorem 4.4.1 Under $H_0$, and $e_i$ independent of $x_i$, the Wald test of $H_0$ is asymptotically $\chi^2_q$.

Most tests for heteroskedasticity take this basic form. The main di erences between popular
\tests" is which transformations of xi enter zi : Motivated by the form of the asymptotic variance
of the OLS estimator ^ ; White (1980) proposed that the test for heteroskedasticity be based on
setting zi to equal all non-redundant elements of xi ; its squares, and all cross-products. Breusch-
Pagan (1979) proposed what might appear to be a distinct test, but the only di erence is that
they allowed for general choice of zi ; and used an assumption of normality to use the simpli cation
(4.3) for their test. If this simpli cation is replaced by the standard formula (under independence
of the error), the two tests coincide.
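The following Python sketch (not part of the original text) implements one common version of this test: regress the squared OLS residuals on $z_i$ and use the $nR^2$ statistic from the skedastic regression, which under independence of the error is asymptotically equivalent to the Wald/F form described above and is asymptotically $\chi^2_q$. The choice of $z_i$ and the simulated design are assumptions.

    import numpy as np
    from scipy.stats import chi2

    def het_test(y, X, Z1):
        # OLS residuals from the mean regression
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        eta = e**2
        # skedastic regression (4.5): regress squared residuals on a constant and Z1
        Z = np.column_stack([np.ones(len(y)), Z1])
        fit = Z @ np.linalg.lstsq(Z, eta, rcond=None)[0]
        R2 = 1 - np.sum((eta - fit)**2) / np.sum((eta - eta.mean())**2)
        W = len(y) * R2                         # n R^2 form, asymptotically chi^2_q under H0
        q = Z1.shape[1]
        return W, 1 - chi2.cdf(W, q)

    rng = np.random.default_rng(6)
    n = 500
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = X @ np.array([1.0, 1.0]) + np.sqrt(0.5 + 0.5 * x**2) * rng.normal(size=n)  # heteroskedastic
    print(het_test(y, X, np.column_stack([x, x**2])))   # White-type choice of z_i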

4.5 Feasible GLS Estimation


Let
$$\tilde\sigma_i^2 = \tilde\alpha'z_i.$$
Suppose that $\tilde\sigma_i^2 > 0$ for all $i$. Then set
$$\tilde D = \mathrm{diag}\{\tilde\sigma_1^2, \ldots, \tilde\sigma_n^2\}$$
and
$$\tilde\beta = \left(X'\tilde D^{-1}X\right)^{-1}X'\tilde D^{-1}Y.$$
This is the feasible GLS, or FGLS, estimator of $\beta$.
Since there is not a unique specification for the conditional variance, the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.
One typical problem with implementation of FGLS estimation is that in a linear regression specification, there is no guarantee that $\tilde\sigma_i^2 > 0$ for all $i$. If $\tilde\sigma_i^2 < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde\sigma_i^2 \approx 0$ for some $i$, then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is typically undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule might make sense:
$$\bar\sigma_i^2 = \max\left[\tilde\sigma_i^2,\ \underline\sigma^2\right]$$
for some $\underline\sigma^2 > 0$.
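A minimal Python sketch of FGLS with such a trimming rule (not part of the original text; the trimming constant, the choice of $z_{1i}$, and the simulated design are assumptions):

    import numpy as np

    def fgls(y, X, Z1, trim=1e-2):
        n = len(y)
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]      # OLS residuals
        Z = np.column_stack([np.ones(n), Z1])
        alpha = np.linalg.lstsq(Z, e**2, rcond=None)[0]       # skedastic regression (4.5)
        sig2 = np.maximum(Z @ alpha, trim * np.mean(e**2))    # trimming: bound variances away from zero
        Xw = X / sig2[:, None]                                # rows of X weighted by 1/sigma_i^2
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)            # FGLS estimator
        V0 = np.linalg.inv(Xw.T @ X)                          # variance if the skedastic model is correct
        return beta, np.sqrt(np.diag(V0))

    rng = np.random.default_rng(6)
    n = 500
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = X @ np.array([1.0, 1.0]) + np.sqrt(0.3 + 0.7 * x**2) * rng.normal(size=n)
    print(fgls(y, X, np.column_stack([x, x**2])))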
It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS, but the proof of this is tricky in our notational structure. We just state the result without proof.

Theorem 4.5.1 If the skedastic regression is correctly specified,
$$\sqrt{n}\left(\tilde\beta_{GLS} - \tilde\beta_{FGLS}\right) \to_p 0,$$
and thus
$$\sqrt{n}\left(\tilde\beta_{FGLS} - \beta\right) \to_d N(0, V),$$
where
$$V = \left(E\left(\sigma_i^{-2}x_ix_i'\right)\right)^{-1}.$$
4.6 Covariance Matrix Estimation


Examining the asymptotic distribution in Theorem 4.5.1, the natural estimator of the asymptotic variance of $\tilde\beta$ is
$$\tilde V^0 = \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2}x_ix_i'\right)^{-1} = \left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1},$$
which is consistent for $V$ as $n \to \infty$. This estimator $\tilde V^0$ is appropriate when the skedastic regression (4.2) is correctly specified.
It may be the case that $\alpha'z_i$ is only an approximation to the true conditional variance $\sigma_i^2 = E(e_i^2 \mid x_i)$. In this case we interpret $\alpha'z_i$ as a linear projection of $e_i^2$ on $z_i$, and $\tilde\beta$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 4.5.1. Instead,
$$V = \left(E\left(\left(\alpha'z_i\right)^{-1}x_ix_i'\right)\right)^{-1}E\left(\left(\alpha'z_i\right)^{-2}\sigma_i^2x_ix_i'\right)\left(E\left(\left(\alpha'z_i\right)^{-1}x_ix_i'\right)\right)^{-1}.$$
$V$ takes a sandwich form, similar to the covariance matrix of the OLS estimator. Unless $\sigma_i^2 = \alpha'z_i$, $\tilde V^0$ is inconsistent for $V$.
An appropriate solution is to use a White-type estimator in place of $\tilde V^0$. This may be written as
$$\tilde V = \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2}x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-4}\hat e_i^2x_ix_i'\right)\left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2}x_ix_i'\right)^{-1} = n\left(X'\tilde D^{-1}X\right)^{-1}\left(X'\tilde D^{-1}\hat D\tilde D^{-1}X\right)\left(X'\tilde D^{-1}X\right)^{-1}$$
where $\hat D = \mathrm{diag}\{\hat e_1^2, \ldots, \hat e_n^2\}$. This is an estimator which is robust to misspecification of the conditional variance, and was proposed by Cragg (Journal of Econometrics, 1992).

4.7 Commentary: FGLS versus OLS


In a regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively
estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on speci cation and estimation of the skedastic regression.
Since the form of the skedastic regression is unknown, and it may be estimated with considerable
error, the estimated conditional variances may contain more noise than information about the true
conditional variances. In this case, FGLS will do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming
to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, OLS is a more robust estimator of the parameter vector. It is consistent not only in
the regression model, but also under the assumptions of linear projection. The GLS and FGLS
estimators, on the other hand, require the assumption of a correct conditional mean. If the equation
of interest is a linear projection, and not a conditional mean, then the OLS and FGLS estimators
will converge in probability to di erent limits, as they will be estimating two di erent projections.
And the FGLS probability limit will depend on the particular function selected for the skedastic
regression. The point is that the e ciency gains from FGLS are built on the stronger assumption
of a correct conditional mean, and the cost is a reduction of robustness to misspeci cation.

53
Chapter 5

Generalized Method of Moments

5.1 Overidenti ed Linear Model


Consider the linear model

yi = x0i + ei
= x01i 1 + x02i 2 + ei
E (xi ei ) = 0

where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Now we can write the model as
$$y_i = x_{1i}'\beta_1 + e_i,$$
$$E(x_ie_i) = 0.$$
In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E(x_ie_i) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.
This is a special case of a more general class of moment condition models. Let $g(y, z, x, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \geq k$ such that
$$Eg(y_i, z_i, x_i, \beta_0) = 0 \qquad (5.1)$$
where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, x, \beta) = x(y - x_1'\beta)$. In econometrics, this class of models is called moment condition models. In the statistics literature, these are known as estimating equations.

54
As an important special case we will devote special attention to linear moment condition
models, which can be written as
yi = zi0 + ei
E (xi ei ) = 0:
where the dimensions of zi and xi are k 1 and ` 1 , with ` k: If k = ` the model is just
identi ed, otherwise it is overidenti ed. The variables zi may be components and functions of
xi ; but this is not required. This model falls in the class (5.1) by setting
g(y; z; x; 0) = x(y z0 ) (5.2)

5.2 GMM Estimator


Define the sample analog of (5.2),
$$\bar g_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta) = \frac{1}{n}\sum_{i=1}^n x_i\left(y_i - z_i'\beta\right) = \frac{1}{n}\left(X'Y - X'Z\beta\right). \qquad (5.3)$$
The method of moments estimator for $\beta$ is defined as the parameter value which sets $\bar g_n(\beta) = 0$, but this is generally not possible when $\ell > k$. The idea of the generalized method of moments (GMM) is to define an estimator which sets $\bar g_n(\beta)$ "close" to zero.
For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$J_n(\beta) = n\,\bar g_n(\beta)'W_n\bar g_n(\beta).$$
This is a non-negative measure of the "length" of the vector $\bar g_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n\,\bar g_n(\beta)'\bar g_n(\beta) = n\left|\bar g_n(\beta)\right|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 5.2.1 $\hat\beta_{GMM} = \mathrm{argmin}_\beta\, J_n(\beta)$.

Note that if $k = \ell$, then $\bar g_n(\hat\beta) = 0$, and the GMM estimator is the method of moments estimator.
The first order conditions for the GMM estimator are
$$0 = \frac{\partial}{\partial\beta}J_n(\hat\beta) = 2\frac{\partial}{\partial\beta}\bar g_n(\hat\beta)'W_n\bar g_n(\hat\beta) = -2\left(\frac{1}{n}Z'X\right)W_n\left(\frac{1}{n}X'\left(Y - Z\hat\beta\right)\right),$$
so
$$2\,Z'XW_nX'Z\hat\beta = 2\,Z'XW_nX'Y,$$
which establishes the following.

Proposition 5.2.1
$$\hat\beta_{GMM} = \left(Z'XW_nX'Z\right)^{-1}Z'XW_nX'Y.$$

While the estimator depends on Wn ; the dependence is only up to scale, for if Wn is replaced
by cWn for some c > 0; ^ GM M does not change.

5.3 Distribution of GMM Estimator


Assume that $W_n \to_p W > 0$. Let
$$Q = E\left(x_iz_i'\right)$$
and
$$\Omega = E\left(x_ix_i'e_i^2\right) = E\left(g_ig_i'\right),$$
where $g_i = x_ie_i$. Then
$$\left(\frac{1}{n}Z'X\right)W_n\left(\frac{1}{n}X'Z\right) \to_p Q'WQ$$
and
$$\left(\frac{1}{n}Z'X\right)W_n\left(\frac{1}{\sqrt{n}}X'e\right) \to_d Q'W\,N(0,\Omega).$$
We conclude:

Theorem 5.3.1 $\sqrt{n}\left(\hat\beta - \beta\right) \to_d N(0,V)$, where
$$V = \left(Q'WQ\right)^{-1}\left(Q'W\Omega WQ\right)\left(Q'WQ\right)^{-1}.$$

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.
The optimal weight matrix $W_0$ is one which minimizes $V$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:
$$\hat\beta = \left(Z'X\Omega^{-1}X'Z\right)^{-1}Z'X\Omega^{-1}X'Y.$$
Thus we have

Theorem 5.3.2 For the efficient GMM estimator, $\sqrt{n}\left(\hat\beta - \beta\right) \to_d N\left(0, \left(Q'\Omega^{-1}Q\right)^{-1}\right)$.

This estimator is e cient only in the sense that it is the best (asymptotically) in the class of
GMM estimators with this set of moment conditions.
W0 = 1 is not known in practice, but it can be estimated consistently. For any W ! W ;
n p 0
^
we still call the e cient GMM estimator, as it has the same asymptotic distribution.

56
We have described the estimator ^ as \e cient GMM" if the optimal (variance minimizing)
weight matrix is selected. This is a weak concept of optimality, as we are only considering alter-
native weight matrices Wn : However, it turns out that the GMM estimator is semiparametrically
e cient, as shown by Gary Chamberlain (1987).
If it is known that E (gi ( )) = 0; and this is all that is known, this is a semi-parametric
problem, as the distribution of the data is unknown. Chamberlain showed that in this context,
no semiparametric estimator (one which is consistent globally for the class of models considered)
1
can have a smaller asymptotic variance than G0 1 G : Since the GMM estimator has this
asymptotic variance, it is semiparametrically e cient.
This results shows that in the linear model, no estimator has greater asymptotic e ciency than
the e cient linear GMM estimator. No estimator can do better (in this rst-order asymptotic
sense), without imposing additional assumptions.

5.4 Estimation of the E cient Weight Matrix


Given any weight matrix $W_n > 0$, the GMM estimator $\hat\beta$ is consistent yet inefficient. For example, we can set $W_n = I_\ell$. In the linear model, a better choice is $W_n = (X'X)^{-1}$. Given any such first-step estimator, we can define the residuals $\hat e_i = y_i - z_i'\hat\beta$ and moment equations $\hat g_i = x_i\hat e_i = g(w_i,\hat\beta)$. Construct
$$\bar g_n = \bar g_n(\hat\beta) = \frac{1}{n}\sum_{i=1}^n \hat g_i,$$
$$\hat g_i^* = \hat g_i - \bar g_n,$$
and define
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i^*\hat g_i^{*\prime}\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \bar g_n\bar g_n'\right)^{-1}. \qquad (5.4)$$
Then $W_n \to_p \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.
A common alternative choice is to set
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i'\right)^{-1},$$
which uses the uncentered moment conditions. Since $Eg_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $Eg_i \neq 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (5.4) above.
Here is a simple way to compute the efficient GMM estimator. First, set $W_n = (X'X)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residuals $\hat e_i = y_i - z_i'\hat\beta$. Then set $\hat g_i = x_i\hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$\hat\beta = \left(Z'X\left(\hat g'\hat g - n\bar g_n\bar g_n'\right)^{-1}X'Z\right)^{-1}Z'X\left(\hat g'\hat g - n\bar g_n\bar g_n'\right)^{-1}X'Y.$$
In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient one is just as easy to compute.
An estimator of the asymptotic variance of $\hat\beta$ can be seen from the above formula. Set
$$\hat V = n\left(Z'X\left(\hat g'\hat g - n\bar g_n\bar g_n'\right)^{-1}X'Z\right)^{-1}.$$
Asymptotic standard errors for $\hat\beta$ are given by the square roots of the diagonal elements of $n^{-1}\hat V$.
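A compact Python sketch of the two-step efficient GMM estimator for the linear moment condition model, with the centered weight matrix (5.4), standard errors, and the criterion value $J$ used below as an overidentification statistic (not from the original text; the simulated instrumental-variables design is an assumption):

    import numpy as np

    def gmm_linear(y, Z, X):
        # two-step efficient GMM for y = Z beta + e with instruments X (ell >= k)
        n = len(y)
        def estimate(W):
            A = Z.T @ X @ W @ X.T
            return np.linalg.solve(A @ Z, A @ y)
        beta = estimate(np.linalg.inv(X.T @ X))             # first step: W_n = (X'X)^{-1}
        g = X * (y - Z @ beta)[:, None]                     # moment contributions g_i = x_i e_i
        gbar = g.mean(axis=0)
        W = np.linalg.inv(g.T @ g / n - np.outer(gbar, gbar))   # centered weight matrix (5.4)
        beta = estimate(W)                                  # second step: efficient GMM
        Vb = n * np.linalg.inv(Z.T @ X @ W @ X.T @ Z)       # variance estimate for beta-hat
        gbar = (X * (y - Z @ beta)[:, None]).mean(axis=0)
        J = n * gbar @ W @ gbar                             # J statistic, asymptotically chi^2_{ell-k}
        return beta, np.sqrt(np.diag(Vb)), J

    rng = np.random.default_rng(7)
    n = 1000
    x = rng.normal(size=(n, 3))                             # instruments
    u = rng.normal(size=n)
    z = x @ np.array([1.0, 1.0, 0.5]) + u                   # endogenous regressor
    y = 1.0 + 0.5 * z + 0.5 * u + rng.normal(size=n)
    Z = np.column_stack([np.ones(n), z])
    X = np.column_stack([np.ones(n), x])                    # ell = 4 > k = 2
    print(gmm_linear(y, Z, X))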
There is an important alternative to the two-step GMM estimator just described. Instead, we
can let the weight matrix be considered as a function of : The criterion function is then
n
! 1
0 1X
J( ) = n g n ( ) gi ( )gi ( )0 g n ( ):
n
i=1

where
gi ( ) = gi ( ) gn( )
The ^ which minimizes this function is called the continuously-updated GMM estimator,
and was introduced by L. Hansen, Heaton and Yaron (1996).
The estimator appears to have some better properties than traditional GMM, but can be
numerically tricky to obtain in some cases. This is a current area of research in econometrics.

5.5 GMM: The General Case


In its most general form, GMM applies whenever an economic or statistical model implies the $\ell \times 1$ moment condition
$$E\left(g_i(\beta)\right) = 0.$$
Often, this is all that is known. Identification requires $\ell \geq k = \dim(\beta)$. The GMM estimator minimizes
$$J(\beta) = n\,\bar g_n(\beta)'W_n\bar g_n(\beta)$$
where
$$\bar g_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta)$$
and
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \bar g_n\bar g_n'\right)^{-1},$$
with $\hat g_i = g_i(\tilde\beta)$ constructed using a preliminary consistent estimator $\tilde\beta$, perhaps obtained by first setting $W_n = I$. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix $W_n$ is updated, and then $\hat\beta$ recomputed. This estimator can be iterated if needed.

Theorem 5.5.1 Under general regularity conditions, $\sqrt{n}\left(\hat\beta - \beta\right) \to_d N\left(0, \left(G'\Omega^{-1}G\right)^{-1}\right)$, where $\Omega = E\left(g_ig_i'\right)$ and $G = E\frac{\partial}{\partial\beta'}g_i(\beta)$. The asymptotic variance may be estimated by $\left(\hat G'\hat\Omega^{-1}\hat G\right)^{-1}$ where $\hat\Omega = n^{-1}\sum_i \hat g_i^*\hat g_i^{*\prime}$ and $\hat G = n^{-1}\sum_i \frac{\partial}{\partial\beta'}g_i(\hat\beta)$.

The general theory of GMM estimation and testing was exposited by L. Hansen (1982).

5.6 Over-Identi cation Test


Overidentified models ($\ell > k$) are special in the sense that there may not be a parameter value $\beta$ such that the moment condition
$$Eg(w_i, \beta) = 0$$
holds. Thus the model (the overidentifying restrictions) is testable.
For example, take the linear model $y_i = \beta_1'x_{1i} + \beta_2'x_{2i} + e_i$ with $E(x_{1i}e_i) = 0$ and $E(x_{2i}e_i) = 0$. It is possible that $\beta_2 = 0$, so that the linear equation may be written as $y_i = \beta_1'x_{1i} + e_i$. However, it is possible that $\beta_2 \neq 0$, and in this case it would be impossible to find a value of $\beta_1$ so that both $E\left(x_{1i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ and $E\left(x_{2i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.
Note that $\bar g_n \to_p Eg_i$, and thus $\bar g_n$ can be used to assess whether or not the hypothesis that $Eg_i = 0$ is true. The criterion function at the parameter estimates is
$$J = n\,\bar g_n'W_n\bar g_n = n^2\,\bar g_n'\left(\hat g'\hat g - n\bar g_n\bar g_n'\right)^{-1}\bar g_n.$$
$J$ is a quadratic form in $\bar g_n$, and is thus a natural test statistic for $H_0 : Eg_i = 0$.

Theorem 5.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
$$J = J(\hat\beta) \to_d \chi^2_{\ell - k}.$$
59
The proof of the theorem is left as an exercise. This result was established by Sargan (1958)
for a specialized case, and by L. Hansen (1982) for the general case.
The degrees of freedom of the asymptotic distribution are the number of overidentifying re-
strictions. If the statistic J exceeds the chi-square critical value, we can reject the model. Based on
this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM
overidenti cation test is a very useful by-product of the GMM methodology, and it is advisable to
report the statistic J whenever GMM is the estimation method.
When over-identi ed models are estimated by GMM, it is customary to report the J statistic
as a general test of model adequacy.

5.7 Hypothesis Testing: The Distance Statistic


We described before how to construct estimates of the asymptotic covariance matrix of the GMM
estimates. These may be used to construct Wald tests of statistical hypotheses.
If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function.
This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the
LR is for likelihood-ratio). The idea was rst put forward by Newey and West (1987).
For a given weight matrix Wn ; the GMM criterion function is

J( ) = n g n ( )0 Wn g n ( )

For h : Rk ! Rr ; the hypothesis is

H0 : h( ) = 0:

The estimates under H1 are


^ = argmin J( )

and those under H0 are


~ = argmin J( ):
h( )=0

The two minimizing criterion functions are J( ^ ) and J( ~ ): The GMM distance statistic is the
di erence
D = J( ~ ) J( ^ ):

Proposition 5.7.1 If the same weight matrix Wn is used for both null and alternative,

1. D 0

2. D !d 2
r

3. If h is linear in ; then D equals the Wald statistic.

60
If h is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence
suggests that the D statistic appears to have quite good sampling properties, and is the preferred
test statistic.
Newey and West (1987) suggested to use the same weight matrix Wn for both null and alter-
native, as this ensures that D 0. This reasoning is not compelling, however, and some current
research suggests that this restriction is not necessary for good performance of the test.
This test shares the useful feature of LR tests in that it is a natural by-product of the compu-
tation of alternative models.

5.8 Conditional Moment Restrictions


In many contexts, the model implies more than an unconditional moment restriction of the form
Egi ( ) = 0: It implies a conditional moment restriction of the form
E (ei ( ) j xi ) = 0
where ei ( ) is some s 1 function of the observation and the parameters. In many cases, s = 1.
It turns out that this conditional moment restriction is much more powerful, and restrictive,
than the unconditional moment restriction discussed above.
Our linear model yi = zi0 + ei with instruments xi falls into this class under the stronger
assumption E (ei j xi ) = 0: Then ei ( ) = yi zi0 :
It is also helpful to realize that conventional regression models also fall into this class, except
that in this case zi = xi : For example, in linear regression, ei ( ) = yi x0i , while in a nonlinear
regression model ei ( ) = yi g(xi ; ): In a joint model of the conditional mean and variance
8
< yi x0i
ei ( ; ) = :
: 0 2 0
(yi xi ) f (xi )
Here s = 2:
Given a conditional moment restriction, an unconditional moment restriction can always be
constructed. That is for any ` 1 function (xi ; ); we can set gi ( ) = (xi ; )ei ( ) which satis es
Egi ( ) = 0 and hence de nes a GMM estimator. The obvious problem is that the class of functions
is in nite. Which should be selected?
This is equivalent to the problem of selection of the best instruments. If xi is a valid instrument
satisfying E (ei j xi ) = 0; then xi ; x2i ; x3i ; :::; etc., are all valid instruments. Which should be used?
One solution is to construct an in nite list of potent instruments, and then use the rst k
instruments. How is k to be determined? This is an area of theory still under development. A
recent study of this problem is Donald and Newey (2001).
Another approach is to construct the optimal instrument. The form was uncovered by Cham-
berlain (1987). Take the case s = 1: Let
@
Ri = E ei ( ) j xi
@

61
and
2
i = E ei ( )2 j xi :
Then the \optimal instrument" is
2
Ai = i Ri
so the optimal moment is
gi ( ) = Ai ei ( ):
Setting gi ( ) to be this choice (which is k 1; so is just-identi ed) yields the best GMM estimator
possible.
In practice, Ai is unknown, but its form does help us think about construction of optimal
instruments.
In the linear model ei ( ) = yi zi0 ; note that

Ri = E (zi j xi )

and
2
i = E e2i j xi ;
so
2
Ai = i E (zi j xi ) :
In the case of linear regression, zi = xi ; so Ai = i 2 xi : Hence e cient GMM is GLS, as we
discussed earlier in the course.
In the case of endogenous variables, note that the e cient instrument Ai involves the estimation
of the conditional mean of zi given xi : In other words, to get the best instrument for zi ; we need the
best conditional mean model for zi given xi , not just an arbitrary linear projection. The e cient
instrument is also inversely proportional to the conditional variance of ei : This is the same as the
GLS estimator; namely that improved e ciency can be obtained if the observations are weighted
inversely to the conditional variance of the errors.

62
Chapter 6

Empirical Likelihood

6.1 Non-Parametric Likelihood


An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and
has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric
analog of likelihood estimation.
The idea is to construct a multinomial distribution F (p1 ; :::; pn ) which places probability pi
at each observation. To be a valid multinomial distribution, these probabilities must satisfy the
requirements that pi 0 and
Xn
pi = 1: (6.1)
i=1
Since each observation is observed once in the sample, the log-likelihood function for this multi-
nomial distribution is
n
X
Ln (p1 ; :::; pn ) = ln(pi ): (6.2)
i=1
First let us consider a just-identi ed model. In this case the moment condition places no
additional restrictions on the multinomial distribution. The maximum likelihood estimator of the
probabilities (p1 ; :::; pn ) are those which maximize the log-likelihood subject to the constraint (6.1).
This is equivalent to maximizing
n n
!
X X
log(pi ) pi 1
i=1 i=1

where is a Lagrange multiplier. The n rst order conditions are 0 = pi 1 : Combined with
1
the constraint (6.1) we nd that the MLE is pi = n yielding the log-likelihood n log(n):
Now consider the case of an overidenti ed model with moment condition

Egi ( 0) =0

63
where g is ` 1 and is k 1 and for simplicity we write gi ( ) = g(yi ; zi ; xi ; ): The multinomial
distribution which places probability pi at each observation (yi ; xi ; zi ) will satisfy this condition if
and only if
X n
pi gi ( ) = 0 (6.3)
i=1
The empirical likelihood estimator is the value of which maximizes the multinomial log-
likelihood (6.2) subject to the restrictions (6.1) and (6.3).
The Lagrangian for this maximization problem is
n n
! n
X X X
0
Ln ( ; p1 ; :::; pn ; ; ) = ln(pi ) pi 1 n pi gi ( )
i=1 i=1 i=1
where and are Lagrange multipliers. The rst-order-conditions of Ln with respect to pi , ;
and are
1
= + n 0 gi ( )
pi
n
X
pi = 1
i=1
n
X
pi gi ( ) = 0:
i=1
Multiplying the rst equation by pi , summing over i; and using the second and third equations,
we nd = n and
1
pi = :
n 1 + 0 gi ( )
Substituting into Ln we nd
n
X
0
Rn ( ; ) = n ln (n) ln 1 + gi ( ) : (6.4)
i=1

For given ; the Lagrange multiplier ( ) minimizes Rn ( ; ) :


( ) = argmin Rn ( ; ): (6.5)

This minimization problem is the dual of the constrained maximization problem. The solution
(when it exists) is well de ned since Rn ( ; ) is a convex function of : The solution cannot be
obtained explicitly, but must be obtained numerically (see section 6.5). This yields the (pro le)
empirical log-likelihood function for .
Ln ( ) = Rn ( ; ( ))
n
X
= n ln (n) ln 1 + ( )0 gi ( )
i=1

64
The EL estimate ^ is the value which maximizes Ln ( ); or equivalently minimizes its negative
^ = argmin [ Ln ( )] (6.6)

Numerical methods are required for calculation of ^ : (see section 6.5)


As a by-product of estimation, we also obtain the Lagrange multiplier ^ = ( ^ ); probabilities
1
p^i = 0
:
n 1 + ^ gi ^

and maximized empirical likelihood


n
X
L^n = ln (^
pi ) : (6.7)
i=1

6.2 Asymptotic Distribution of EL Estimator


De ne
@
Gi ( ) = gi ( ) (6.8)
@ 0
G = EGi ( 0)
0
= E gi ( 0 ) gi ( 0 )

and
1
V = G0 1
G (6.9)
1
V = G G0 1
G G0 (6.10)
For example, in the linear model, Gi (; ) = xi zi0 ; G = E (xi zi0 ), and = E xi x0i e2i :

Theorem 6.2.1 Under regularity conditions,


p
n ^ 0 !d N (0; V )
p
n ^ !d 1
N (0; V )
p p ^
where V and V are de ned in (6.9) and (6.10), and n ^ 0 and n are asymptotically
independent.

The asymptotic variance V for ^ is the same as for e cient GMM. Thus the EL estimator is
asymptotically e cient.

65
Proof. ( ^ ; ^ ) jointly solve

@
n
X gi ^
0 = Rn ( ; ) = 0
(6.11)
@ 1 + ^ gi ^
i=1
0
@
n
X Gi ; ^
0 = Rn ( ; ) = 0
: (6.12)
@ 1 + ^ gi ^ i=1
P P P
Let Gn = n1 ni=1 Gi ( 0 ) ; g n = n1 ni=1 gi ( 0 ) and n = n1 ni=1i g ( 0 ) gi ( 0
0) :
Expanding (6.12) around = 0 and = 0 = 0 yields
0 ' G0n ^ 0 : (6.13)
Expanding (6.11) around = 0 and = 0 = 0 yields
0' gn Gn ^ 0 + n
^ (6.14)

Premultiplying by G0n n
1 and using (6.13) yields
0 ' G0n n
1
gn G0n n
1
Gn ^ 0 + G0n n
1
n
^

= G0n n
1
gn G0n n
1
Gn ^ 0

Solving for ^ and using the WLLN and CLT yields


p 1 1p
n ^ 0 ' G0n n 1 Gn G0n n ng n (6.15)
1
!d G0 1
G G0 1
N (0; )
= N (0; V )
Solving (6.14) for ^ and using (6.15) yields
p 1 p
n^ ' n
1
I Gn G0n n
1
Gn G0n n
1
ng n (6.16)
1
!d 1
I G G0 1
G G0 1
N (0; )
1
= N (0; V )
Furthermore, since
1
G0 I 1
G G0 1
G G0 = 0
p p ^
n ^ 0 and n are asymptotically uncorrelated and hence independent.
Chamberlain (1987) showed that V is the semiparametric e ciency bound for in the overi-
denti ed moment condition model. This means that no consistent estimator for this class of models
can have a lower asymptotic variance than V . Since the EL estimator achieves this bound, it is
an asymptotically e cient estimator for .

66
6.3 Overidentifying Restrictions
In a parametric likelihood context, tests are based on the di erence in the log likelihood functions.
The same statistic can be constructed for empirical likelihood. Twice the di erence between the
unrestricted empirical likelihood n log (n) and the maximized empirical likelihood for the model
(6.7) is
Xn
0
LRn = 2 ln 1 + ^ gi ^ : (6.17)
i=1

Theorem 6.3.1 If Eg(wi ; 2 :


0) = 0 then LRn !d ` k

The EL overidenti cation test is similar to the GMM overidenti cation test. They are asymp-
totically rst-order equivalent, and have the same interpretation. The overidenti cation test is a
very useful by-product of EL estimation, and it is advisable to report the statistic LRn whenever
EL is the estimation method.
Proof. First, by a Taylor expansion, (6.15), and (6.16),
n
1 X p
p g wi ; ^ ' n g n + Gn ^ 0
n
i=1
1 p
' I Gn G0n n
1
Gn G0n n
1
ng n
p
' n n^:

Second, since ln(1 + x) ' x x2 =2 for x small,


n
X 0
LRn = 2 ln 1 + ^ gi ^
i=1
n n
0X X 0
' 2^ gi ^ ^0 gi ^ gi ^ ^
i=1 i=1
0
' n^ n
^
!d N (0; V )0 1
N (0; V )
2
= ` k

where the proof of the nal equality is left as an exercise.

6.4 Testing
Let the maintained model be

E g_i(β) = 0        (6.18)

where g is ℓ × 1 and β is k × 1. By "maintained" we mean that the overidentifying restrictions contained in (6.18) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is

h(β) = 0,

where h : R^k → R^a. The restricted EL estimator and likelihood are the values which solve

β̃ = argmax_{h(β)=0} L_n(β)
L̃_n = L_n(β̃) = max_{h(β)=0} L_n(β).

Fundamentally, the restricted EL estimator β̃ is simply an EL estimator with ℓ - k + a overidentifying restrictions, so there is no fundamental change in the distribution theory for β̃ relative to β̂. To test the hypothesis h(β) while maintaining (6.18), the simple overidentifying restrictions test (6.17) is not appropriate. Instead we use the difference in log-likelihoods:

LR_n = 2(L̂_n - L̃_n).

This test statistic is a natural analog of the GMM distance statistic.

Theorem 6.4.1 Under (6.18) and H₀ : h(β) = 0, LR_n →_d χ²_a.

The proof of this result is more challenging and is omitted.

6.5 Numerical Computation


Gauss code which implements the methods discussed below can be found at

http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

6.5.1 Derivatives
The numerical calculations depend on derivatives of the dual likelihood function (6.4). Define

g_i*(β, λ) = g_i(β) / (1 + λ'g_i(β))
G_i*(β, λ) = G_i(β)'λ / (1 + λ'g_i(β)).

The first derivatives of (6.4) are

R_λ = ∂R_n(β, λ)/∂λ = Σ_{i=1}^n g_i*(β, λ)
R_β = ∂R_n(β, λ)/∂β = Σ_{i=1}^n G_i*(β, λ).

The second derivatives are

R_λλ = ∂²R_n(β, λ)/∂λ∂λ' = -Σ_{i=1}^n g_i*(β, λ) g_i*(β, λ)'
R_λβ = ∂²R_n(β, λ)/∂λ∂β' = -Σ_{i=1}^n ( g_i*(β, λ) G_i*(β, λ)' - G_i(β)/(1 + λ'g_i(β)) )
R_ββ = ∂²R_n(β, λ)/∂β∂β' = -Σ_{i=1}^n ( G_i*(β, λ) G_i*(β, λ)' - [∂²(λ'g_i(β))/∂β∂β'] / (1 + λ'g_i(β)) ).

6.5.2 Inner Loop


The so-called "inner loop" solves (6.5) for given β. The modified Newton method takes a quadratic approximation to R_n(β, λ), yielding the iteration rule

λ_{j+1} = λ_j - δ (R_λλ(β, λ_j))⁻¹ R_λ(β, λ_j),        (6.19)

where δ > 0 is a scalar steplength (to be discussed next). The starting value λ₁ can be set to the zero vector. The iteration (6.19) is continued until the gradient R_λ(β, λ_j) is smaller than some prespecified tolerance.
Efficient convergence requires a good choice of steplength δ. One method uses the following quadratic approximation. Set δ₀ = 0, δ₁ = 1/2 and δ₂ = 1. For p = 0, 1, 2, set

λ_p = λ_j - δ_p (R_λλ(β, λ_j))⁻¹ R_λ(β, λ_j)
R_p = R_n(β, λ_p).

A quadratic function can be fit exactly through these three points. The value of δ which minimizes this quadratic is

δ̂ = (R₂ + 3R₀ - 4R₁) / (4R₂ + 4R₀ - 8R₁),

yielding the steplength to be plugged into (6.19).
A complication is that λ must be constrained so that 0 ≤ p_i ≤ 1, which holds if

n (1 + λ'g_i(β)) ≥ 1        (6.20)

for all i. If (6.20) fails, the stepsize δ needs to be decreased.
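The following is a minimal Python sketch of this inner loop, assuming a user-supplied n × ℓ matrix g whose rows are the moments g_i(β); the function name and tolerance are illustrative, and simple step-halving is used in place of the quadratic steplength fit described above.

import numpy as np

def el_inner_loop(g, tol=1e-10, max_iter=100):
    # Solve (6.5) for lambda given the n x l matrix g whose rows are g_i(beta)'.
    n, l = g.shape
    lam = np.zeros(l)                          # starting value: the zero vector
    for _ in range(max_iter):
        denom = 1.0 + g @ lam                  # 1 + lambda'g_i for each i
        g_star = g / denom[:, None]            # rows are g_i*(beta, lambda)'
        R_lam = g_star.sum(axis=0)             # gradient R_lambda
        if np.max(np.abs(R_lam)) < tol:        # stop when the gradient is small
            break
        R_ll = -g_star.T @ g_star              # Hessian R_lambda,lambda
        step = np.linalg.solve(R_ll, R_lam)    # Newton direction R_ll^{-1} R_lambda
        delta = 1.0
        for _ in range(50):                    # shrink steplength until n(1 + lambda'g_i) >= 1 holds
            if np.all(n * (1.0 + g @ (lam - delta * step)) >= 1.0):
                break
            delta /= 2.0
        lam = lam - delta * step
    return lam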

6.5.3 Outer Loop
The outer loop is the minimization (6.6). This can be done by the modified Newton method described in the previous section. The gradient for (6.6) is

L_β = ∂L_n(β)/∂β = ∂R_n(β, λ)/∂β = R_β + λ_β'R_λ = R_β,

since R_λ(β, λ) = 0 at λ = λ(β), where

λ_β = ∂λ(β)/∂β' = -R_λλ⁻¹ R_λβ,

the second equality following from the implicit function theorem applied to R_λ(β, λ(β)) = 0.
The Hessian for (6.6) is

L_ββ = ∂²L_n(β)/∂β∂β'
     = ∂/∂β' ( R_β(β, λ(β)) + λ_β'R_λ(β, λ(β)) )
     = R_ββ(β, λ(β)) + R_βλ λ_β + λ_β'R_λβ + λ_β'R_λλ λ_β
     = R_ββ - R_λβ' R_λλ⁻¹ R_λβ.

It is not guaranteed that L_ββ > 0. If not, the eigenvalues of L_ββ should be adjusted so that all are positive. The Newton iteration rule is

β_{j+1} = β_j - δ L_ββ⁻¹ L_β

where δ is a scalar stepsize, and the rule is iterated until convergence.

Chapter 7

Endogeneity

We say that there is endogeneity in the linear model y_i = z_i'β + e_i if β is the parameter of interest and E(z_i e_i) ≠ 0. This cannot happen if β is defined by linear projection, so endogeneity requires a structural interpretation. The coefficient β must have meaning separately from the definition of a conditional mean or linear projection.
Example: Measurement error in the regressor. Suppose that (y_i, x_i*) are joint random variables, E(y_i | x_i*) = x_i*'β is linear, β is the parameter of interest, and x_i* is not observed. Instead we observe x_i = x_i* + u_i where u_i is a k × 1 measurement error, independent of y_i and x_i*. Then

y_i = x_i*'β + e_i
    = (x_i - u_i)'β + e_i
    = x_i'β + v_i

where
v_i = e_i - u_i'β.
The problem is that

E(x_i v_i) = E((x_i* + u_i)(e_i - u_i'β)) = -E(u_i u_i')β ≠ 0

if β ≠ 0 and E(u_i u_i') ≠ 0. It follows that if β̂ is the OLS estimator, then

β̂ →_p β* = β - (E(x_i x_i'))⁻¹ E(u_i u_i') β ≠ β.

This is called measurement error bias.


Example: Supply and Demand. The variables q_i and p_i (quantity and price) are determined jointly by the demand equation

q_i = -β₁ p_i + e₁ᵢ

and the supply equation

q_i = β₂ p_i + e₂ᵢ.

Assume that e_i = (e₁ᵢ, e₂ᵢ)' is iid, E e_i = 0, β₁ + β₂ = 1 and E e_i e_i' = I₂ (the latter for simplicity). The question is: if we regress q_i on p_i, what happens?
It is helpful to solve for q_i and p_i in terms of the errors. In matrix notation,

[ 1   β₁ ] [ q_i ]   [ e₁ᵢ ]
[ 1  -β₂ ] [ p_i ] = [ e₂ᵢ ]

so

[ q_i ]   [ 1   β₁ ]⁻¹ [ e₁ᵢ ]
[ p_i ] = [ 1  -β₂ ]   [ e₂ᵢ ]

        = [ β₂  β₁ ] [ e₁ᵢ ]
          [ 1   -1 ] [ e₂ᵢ ]

        = [ β₂ e₁ᵢ + β₁ e₂ᵢ ]
          [ e₁ᵢ - e₂ᵢ       ].

The projection of q_i on p_i yields

q_i = β* p_i + εᵢ
E(p_i εᵢ) = 0

where

β* = E(p_i q_i) / E(p_i²) = (β₂ - β₁)/2.

Hence if it is estimated by OLS, β̂ →_p β*, which does not equal either β₁ or β₂. This is called simultaneous equations bias.

7.1 Instrumental Variables


Let the equation of interest be

y_i = z_i'β + e_i        (7.1)

where z_i is k × 1, and assume that E(z_i e_i) ≠ 0 so there is endogeneity. We call (7.1) the structural equation. In matrix notation, this can be written as

Y = Zβ + e.        (7.2)

Any solution to the problem of endogeneity requires additional information which we call instruments.

Definition 7.1.1 The ℓ × 1 random vector x_i is an instrumental variable for (7.1) if E(x_i e_i) = 0.

In a typical set-up, some regressors in z_i will be uncorrelated with e_i (for example, at least the intercept). Thus we make the partition

z_i = ( z₁ᵢ )   k₁
      ( z₂ᵢ )   k₂        (7.3)

where E(z₁ᵢ e_i) = 0 yet E(z₂ᵢ e_i) ≠ 0. We call z₁ᵢ exogenous and z₂ᵢ endogenous. By the above definition, z₁ᵢ is an instrumental variable for (7.1), so should be included in x_i. So we have the partition

x_i = ( z₁ᵢ )   k₁
      ( x₂ᵢ )   ℓ₂        (7.4)

where z₁ᵢ = x₁ᵢ are the included exogenous variables, and x₂ᵢ are the excluded exogenous variables. That is, x₂ᵢ are variables which could be included in the equation for y_i (in the sense that they are uncorrelated with e_i) yet can be excluded, as they would have true zero coefficients in the equation.
The model is just-identified if ℓ = k (i.e., if ℓ₂ = k₂) and over-identified if ℓ > k (i.e., if ℓ₂ > k₂).
We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.

7.2 Reduced Form


The reduced form relationship between the variables or "regressors" z_i and the instruments x_i is found by linear projection. Let

Γ = (E(x_i x_i'))⁻¹ E(x_i z_i')

be the ℓ × k matrix of coefficients from a projection of z_i on x_i, and define

u_i = z_i - Γ'x_i

as the projection error. Then the reduced form linear relationship between z_i and x_i is

z_i = Γ'x_i + u_i.        (7.5)

In matrix notation, we can write (7.5) as

Z = XΓ + u        (7.6)

where u is n × k.
By construction,

E(x_i u_i') = 0,

so (7.5) is a projection and can be estimated by OLS:

Z = XΓ̂ + û
Γ̂ = (X'X)⁻¹ X'Z.

Substituting (7.6) into (7.2), we find

Y = (XΓ + u)β + e
  = Xλ + v,        (7.7)

where

λ = Γβ        (7.8)

and
v = uβ + e.
Observe that

E(x_i v_i) = E(x_i u_i')β + E(x_i e_i) = 0.

Thus (7.7) is a projection equation and may be estimated by OLS. This is

Y = Xλ̂ + v̂,
λ̂ = (X'X)⁻¹ X'Y.

The equation (7.7) is the reduced form for Y. (7.6) and (7.7) together are the reduced form equations for the system

Y = Xλ + v
Z = XΓ + u.

As we showed above, OLS yields the reduced-form estimates (λ̂, Γ̂).

7.3 Identification


The structural parameter β relates to (λ, Γ) through (7.8). The parameter β is identified, meaning that it can be recovered from the reduced form, if

rank(Γ) = k.        (7.9)

Assume that (7.9) holds. If ℓ = k, then β = Γ⁻¹λ. If ℓ > k, then for any W > 0, β = (Γ'WΓ)⁻¹Γ'Wλ.
If (7.9) is not satisfied, then β cannot be recovered from (λ, Γ). Note that a necessary (although not sufficient) condition for (7.9) is ℓ ≥ k.
Since X and Z have the common variables X₁, we can rewrite some of the expressions. Using (7.3) and (7.4) to make the matrix partitions X = [X₁, X₂] and Z = [X₁, Z₂], we can partition Γ as

Γ = [ Γ₁₁  Γ₁₂ ]   [ I  Γ₁₂ ]
    [ Γ₂₁  Γ₂₂ ] = [ 0  Γ₂₂ ]

(7.6) can be rewritten as

Z₁ = X₁
Z₂ = X₁Γ₁₂ + X₂Γ₂₂ + u₂.        (7.10)

β is identified if rank(Γ) = k, which is true if and only if rank(Γ₂₂) = k₂ (by the upper-diagonal structure of Γ). Thus the key to identification of the model rests on the ℓ₂ × k₂ matrix Γ₂₂ in (7.10).

7.4 Estimation
The model can be written as

y_i = z_i'β + e_i
E(x_i e_i) = 0

or

E g(w_i, β) = 0
g(w_i, β) = x_i (y_i - z_i'β).

This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in the chapters on GMM and empirical likelihood directly apply. Recall that the GMM estimator, for given weight matrix W_n, is

β̂ = (Z'X W_n X'Z)⁻¹ Z'X W_n X'Y.

7.5 Special Cases: IV and 2SLS


If the model is just-identified, so that k = ℓ, then the formula for GMM simplifies. We find that

β̂ = (Z'X W_n X'Z)⁻¹ Z'X W_n X'Y
  = (X'Z)⁻¹ W_n⁻¹ (Z'X)⁻¹ Z'X W_n X'Y
  = (X'Z)⁻¹ X'Y.

This estimator is often called the instrumental variables estimator (IV) of β, where X is used as an instrument for Z. Observe that the weight matrix W_n has disappeared. In the just-identified case, the weight matrix plays no role. This is also the MME estimator of β, and the EL estimator.
Another interpretation stems from the fact that since β = Γ⁻¹λ, we can construct the Indirect Least Squares (ILS) estimator:

β̂ = Γ̂⁻¹ λ̂
  = ((X'X)⁻¹(X'Z))⁻¹ ((X'X)⁻¹(X'Y))
  = (X'Z)⁻¹ (X'X) (X'X)⁻¹ (X'Y)
  = (X'Z)⁻¹ (X'Y),

which again is the IV estimator.

Recall that the optimal weight matrix is an estimate of the inverse of Ω = E(x_i x_i' e_i²). In the special case that E(e_i² | x_i) = σ² (homoskedasticity), then Ω = E(x_i x_i') σ² ∝ E(x_i x_i'), suggesting the weight matrix W_n = (X'X)⁻¹. Using this choice, the GMM estimator equals

β̂_2SLS = (Z'X (X'X)⁻¹ X'Z)⁻¹ Z'X (X'X)⁻¹ X'Y.

This is called the two-stage least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments.
Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.
It is useful to observe that writing

P_X = X (X'X)⁻¹ X',
Ẑ = P_X Z = X (X'X)⁻¹ X'Z,

then

β̂ = (Z'P_X Z)⁻¹ Z'P_X Y
  = (Ẑ'Ẑ)⁻¹ Ẑ'Y.

The source of the "two-stage" name is that the estimator can be computed as follows:

First, regress Z on X, viz. Γ̂ = (X'X)⁻¹(X'Z) and Ẑ = XΓ̂ = P_X Z.
Second, regress Y on Ẑ, viz. β̂ = (Ẑ'Ẑ)⁻¹ Ẑ'Y.

It is useful to scrutinize the projection Ẑ. Recall, Z = [Z₁, Z₂] and X = [Z₁, X₂]. Then

Ẑ = [Ẑ₁, Ẑ₂]
  = [P_X Z₁, P_X Z₂]
  = [Z₁, P_X Z₂]
  = [Z₁, Ẑ₂],

since Z₁ lies in the span of X. Thus in the second stage, we regress Y on Z₁ and Ẑ₂. So only the endogenous variables Z₂ are replaced by their fitted values:

Ẑ₂ = X₁Γ̂₁₂ + X₂Γ̂₂₂.
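As an illustration, here is a minimal Python sketch of the 2SLS computation just described; the function name and use of numpy are my own, and y, Z, X are assumed to be arranged as above.

import numpy as np

def two_stage_least_squares(y, Z, X):
    # 2SLS estimate of beta in y = Z beta + e using the n x l instrument matrix X (l >= k).
    XX_inv = np.linalg.inv(X.T @ X)
    Z_hat = X @ (XX_inv @ (X.T @ Z))                      # first stage: Z_hat = P_X Z
    return np.linalg.solve(Z_hat.T @ Z_hat, Z_hat.T @ y)  # second stage: regress y on Z_hat

This coincides with the GMM estimator using weight matrix W_n = (X'X)⁻¹; under heteroskedasticity, efficient GMM would instead use an estimate of Ω⁻¹ as the weight matrix.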

7.6 Bekker Asymptotics


Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is

Y = Zβ + e        (7.11)
Z = XΓ + u        (7.12)
ξ = (e, u)
E(ξ | X) = 0
E(ξ'ξ | X) = S

As before, X is n × l, so there are l instruments.
First, let's analyze the approximate bias of OLS applied to (7.11). Using (7.12),

E((1/n) Z'e) = E(z_i e_i) = Γ'E(x_i e_i) + E(u_i e_i) = S₂₁

and

E((1/n) Z'Z) = E(z_i z_i')
             = Γ'E(x_i x_i')Γ + E(u_i x_i')Γ + Γ'E(x_i u_i') + E(u_i u_i')
             = Γ'QΓ + S₂₂

where Q = E(x_i x_i'). Hence by a first-order approximation

E(β̂_OLS - β) ≈ (E((1/n) Z'Z))⁻¹ E((1/n) Z'e)
             = (Γ'QΓ + S₂₂)⁻¹ S₂₁        (7.13)

which is zero only when S₂₁ = 0 (when Z is exogenous).
We now derive a similar result for the 2SLS estimator.

β̂_2SLS = (Z'P_X Z)⁻¹ Z'P_X Y.

Let P_X = X(X'X)⁻¹X'. By the spectral decomposition of an idempotent matrix, P_X = HΛH' where Λ = diag(I_l, 0). Let q = H'ξS^(-1/2), which satisfies E(qq') = I_n, and partition q = (q₁' q₂')' where q₁ is l × 1. Hence

E((1/n) ξ'P_X ξ | X) = (1/n) S^(1/2)' E(q'Λq) S^(1/2)
                     = (1/n) S^(1/2)' E(q₁'q₁) S^(1/2)
                     = (l/n) S^(1/2)' S^(1/2)
                     = αS

where

α = l/n.

Using (7.12) and this result,

E((1/n) Z'P_X e) = E((1/n) Γ'X'e) + E((1/n) u'P_X e) = αS₂₁,

and

E((1/n) Z'P_X Z) = Γ'E(x_i x_i')Γ + Γ'E(x_i u_i') + E(u_i x_i')Γ + E((1/n) u'P_X u)
                 = Γ'QΓ + αS₂₂.

Together

E(β̂_2SLS - β) ≈ (E((1/n) Z'P_X Z))⁻¹ E((1/n) Z'P_X e)
              = (Γ'QΓ + αS₂₂)⁻¹ αS₂₁.        (7.14)

In general this is non-zero, except when S₂₁ = 0 (when Z is exogenous). It is also close to zero when α = 0. Bekker (1994) pointed out that it also has the reverse implication: when α = l/n is large, the bias in the 2SLS estimator will be large. Indeed as α → 1, the expression in (7.14) approaches that in (7.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.
Bekker (1994) showed further that under the alternative asymptotic approximation that α is fixed as n → ∞ (so that the number of instruments goes to infinity proportionately with the sample size) the expression in (7.14) is the probability limit of β̂_2SLS - β.
7.7 Identification Failure
Recall the reduced form equation

Z₂ = X₁Γ₁₂ + X₂Γ₂₂ + u₂.

The parameter β fails to be identified if Γ₂₂ has deficient rank. The consequences of identification failure for inference are quite severe.
Take the simplest case where k = l = 1 (so there is no X₁). Then the model may be written as

y_i = z_i β + e_i
z_i = x_i γ + u_i

and Γ₂₂ = γ = E(x_i z_i)/E(x_i²). We see that β is identified if and only if Γ₂₂ = γ ≠ 0, which occurs when E(z_i x_i) ≠ 0. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.
Suppose this condition fails, so E(z_i x_i) = 0. Then by the CLT

(1/√n) Σ_{i=1}^n x_i e_i →_d N₁ ~ N(0, E(x_i² e_i²))        (7.15)

(1/√n) Σ_{i=1}^n x_i z_i = (1/√n) Σ_{i=1}^n x_i u_i →_d N₂ ~ N(0, E(x_i² u_i²))        (7.16)

therefore

β̂ - β = [ (1/√n) Σ_{i=1}^n x_i e_i ] / [ (1/√n) Σ_{i=1}^n x_i z_i ] →_d N₁/N₂ ~ Cauchy,

since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when Γ₂₂ is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz.

Γ₂₂ = n^(-1/2) C,

where C is a full rank matrix. The n^(-1/2) is picked because it provides just the right balancing to allow a rich distribution theory.
To see the consequences, once again take the simple case k = l = 1. Here, the instrument x_i is weak for z_i if

γ = n^(-1/2) c.
Then (7.15) is unaffected, but (7.16) instead takes the form

(1/√n) Σ_{i=1}^n x_i z_i = (1/√n) Σ_{i=1}^n x_i² γ + (1/√n) Σ_{i=1}^n x_i u_i
                         = (1/n) Σ_{i=1}^n x_i² c + (1/√n) Σ_{i=1}^n x_i u_i
                         →_d Qc + N₂

(where Q = E(x_i²)), therefore

β̂ - β →_d N₁ / (Qc + N₂).

As in the case of complete identification failure, we find that β̂ is inconsistent for β and the asymptotic distribution of β̂ is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.
The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form

Z₂ = X₁Γ₁₂ + X₂Γ₂₂ + u₂

and identification requires rank(Γ₂₂) = k₂. If k₂ = 1, this requires Γ₂₂ ≠ 0, which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of k₂ = 1 (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for Z₂, construct the test of Γ₂₂ = 0, and at a minimum check that the test rejects H₀ : Γ₂₂ = 0.
When k₂ > 1, Γ₂₂ ≠ 0 is not sufficient for identification. It is not even sufficient that each column of Γ₂₂ is non-zero (each column corresponds to a distinct endogenous variable in Z₂). So while a minimal check is to test that each column of Γ₂₂ is non-zero, this cannot be interpreted as definitive proof that Γ₂₂ has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for Z₂, and attempt to assess the likelihood that Γ₂₂ has deficient rank.

Chapter 8

The Bootstrap

8.1 Monte Carlo Simulation


It will be helpful to define some general concepts, so our discussion will be at a higher level of generality than the linear regression framework. Recall that we let F denote the population distribution of the observations (y_i, x_i). Now, we let F denote a general CDF, and let F₀ denote the true value. Let θ be some parameter of interest and let T_n = T_n(y₁, x₁, ..., y_n, x_n, θ) be a statistic of interest, for example an estimator such as θ̂ or a test statistic such as (θ̂ - θ)/s(θ̂). The goal is to calculate the sampling distribution of T_n.
The exact CDF of T_n when the data are sampled from the distribution F is

G_n(x, F) = P(T_n ≤ x | F).

Given the structure of the problem, G_n is a function only of F, the distribution of (y_i, x_i). In general, G_n(x, F) depends on F, meaning that G_n changes as F changes.
Ideally, inference on θ would be based on G_n(x, F₀), the true sampling distribution. This is generally impossible for two reasons. First, the function G_n(x, F) is unknown. Second, F₀ is unknown.
The idea of Monte Carlo simulation is to solve the first problem through numerical simulation: for any given F, the distribution function G_n(x, F) can be calculated numerically through simulation. Since F₀ is unknown, this does not solve the problem of inference. Instead, the method is typically used to assess the adequacy of statistical methods in practical settings.
The name Monte Carlo derives from the famous Mediterranean gambling resort, where games of chance are played.
The method of Monte Carlo is quite simple to describe. The researcher chooses F₀ (the distribution of the data) and the sample size n. A "true" value of θ₀ is implied by this choice, or equivalently the value θ₀ is selected directly by the researcher.
Then the following experiment is conducted:

n independent random vectors (y_i, x_i), i = 1, ..., n, are drawn from the distribution F using the computer's random number generator.

The statistic T_n = T_n(y₁, x₁, ..., y_n, x_n, θ) is calculated on this pseudo data.

For step 1, most computer packages have built-in procedures for generating U[0,1] and N(0,1) random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)
For step 2, it is important that the statistic be evaluated at the "true" value of θ corresponding to the choice of F.
The above experiment creates one random draw from the distribution G_n(x, F). This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment B times, where B is a large number. Typically, we set B = 1000 or B = 5000; we will discuss this choice later.
Notationally, let the b'th experiment result in the draw T_nb, b = 1, ..., B. These results are stored. They constitute a random sample of size B from the distribution G_n(x, F) = P(T_nb ≤ x) = P(T_n ≤ x | F).
From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:
Suppose we are interested in the bias, mean-squared error (MSE), or variance of the distribution of θ̂ - θ. We then set T_n = θ̂ - θ, run the above experiment, and calculate the estimates

Bias^(θ̂) = (1/B) Σ_{b=1}^B T_nb
MSE^(θ̂) = (1/B) Σ_{b=1}^B (T_nb)²
Var^(θ̂) = MSE^(θ̂) - (Bias^(θ̂))².

Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set T_n = |θ̂ - θ|/s(θ̂) and calculate

P̂ = (1/B) Σ_{b=1}^B 1(T_nb ≥ 1.96),        (8.1)

the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.
Suppose we are interested in the 5% and 95% quantiles of θ̂ or (θ̂ - θ)/s(θ̂). We then set T_n to either choice, and compute the 5% and 95% sample quantiles of the sample {T_nb}. The α% sample quantile is a number q_α such that α% of the sample are less than q_α. A simple way to compute sample quantiles is to sort the sample {T_nb} from low to high. Then q_α is the N'th number in this ordered sequence, where N = (B + 1)α. It is therefore convenient to pick B so that N is an integer. For example, if we set B = 999, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.
The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on n and F. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of n and F.
As discussed above, the researcher must select the number of experiments, B. Often this is called the number of replications. Quite simply, a larger B results in more precise estimates of the features of interest of G_n, but requires more computational time. In practice, therefore, the choice of B is often guided by the computational demands of the statistical procedure. However, it should be recognized that the results of a Monte Carlo experiment are all estimates computed from a random sample of size B, and therefore it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then B will have to be increased.
In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (8.1). The random variable 1(T_nb ≥ 1.96) is iid Bernoulli, equalling 1 with probability P = E 1(T_nb ≥ 1.96). The average (8.1) is therefore an unbiased estimator of P with standard error s(P̂) = √(P(1 - P)/B). As P is unknown, this may be approximated using the estimated value s(P̂) = √(P̂(1 - P̂)/B) or using a hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set s(P̂) = √((.05)(.95)/B) ≈ .22/√B. Hence the standard errors for B = 100, 1000, and 5000 are, respectively, s(P̂) = .022, .007, and .003.
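As a concrete illustration, here is a minimal Python sketch of such a rejection-frequency experiment; the data-generating process, sample size, and number of replications are illustrative choices of my own, not part of the text.

import numpy as np

def mc_rejection_rate(n=100, B=1000, beta=1.0, seed=0):
    # Estimate the Type I error of a nominal 5% two-sided t-test of H0: beta equals its true value.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(B):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)       # data drawn from the chosen F
        bhat = (x @ y) / (x @ x)                # OLS slope (no intercept, for simplicity)
        ehat = y - bhat * x
        se = np.sqrt((ehat @ ehat) / (n - 1) / (x @ x))
        t = np.abs(bhat - beta) / se            # t-ratio evaluated at the true parameter value
        rejections += t >= 1.96
    p_hat = rejections / B
    se_p = np.sqrt(p_hat * (1 - p_hat) / B)     # standard error of the estimated rejection rate
    return p_hat, se_p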

8.2 An Example
Here we illustrate Monte Carlo simulation to investigate estimation and tests on a nonlinear
function of regression parameters.
Our model is an iid sample {y_i, x₁ᵢ, x₂ᵢ : 1 ≤ i ≤ n} from the linear Gaussian regression

y_i = β₀ + x₁ᵢβ₁ + x₂ᵢβ₂ + e_i
(x₁ᵢ, x₂ᵢ)' ~ N(0, I₂)
e_i ~ N(0, σ²).

We set σ = 3, β₀ = 0, β₁ = 1, β₂ = .5, and n = 300. In this example, the distribution of the data, F, is determined by the above choices. We don't have to be more explicit about F, as the above equations are sufficient to specify the joint distribution.
The parameter of interest is the ratio of the regression slopes

θ = β₁/β₂.

The goal is to estimate θ, construct confidence intervals for θ, and test the hypothesis that θ = 0. Note that θ₀ = 2.
We estimate the parameters of the model by OLS, and then estimate θ by

θ̂ = β̂₁/β̂₂.

The standard error for θ̂ is calculated by the "delta method":

se(θ̂) = (Ĥ'V̂Ĥ)^(1/2),    Ĥ = ( 0, 1/β̂₂, -β̂₁/β̂₂² )',

where V̂ is the White covariance matrix estimate.
The asymptotic approximation to the distribution of θ̂ is N(θ, AVar(θ̂)), where

AVar(θ̂) = n⁻¹ H'(E(x_i x_i'))⁻¹ H E(e_i²) ≈ 0.6.

Thus the asymptotic approximation is θ̂ ~ N(2, 0.6).


We first investigate the distribution of θ̂. We set B = 100,000, which is extremely high, simply because the computation time was minimal. Consequently, the error due to the simulation is minimal.
First, I nonparametrically estimated the density of θ̂ (using a kernel density estimator). This density, along with the asymptotic distribution, is displayed in the top section of Figure 1. The divergence between the exact and asymptotic densities is quite dramatic. The exact density is skewed and thick-tailed. Next I estimated¹ the mean and standard deviation of θ̂, finding E(θ̂) = 2.32 and sd(θ̂) = 1.28. Note that these are quite different from the asymptotic approximation (which are 2.0 and 0.77, respectively).

¹ Actually, these are trimmed means and standard deviations, calculated on a sample where the upper and lower 0.5% of the sample has been trimmed off. This was done to obtain a robust measure of mean and variance, due to the presence of extreme outliers.

Inference is typically based on the t-ratio t_n = (θ̂ - θ₀)/s(θ̂). The asymptotic approximation to t_n is N(0,1). Using the same simulated samples, we estimated the exact density of t_n, which is displayed along with the asymptotic distribution in the bottom section of Figure 1.
Once again we find that the divergence between the exact and asymptotic distributions is dramatic. The exact distribution is highly skewed and non-normal. The accuracy of hypothesis testing and confidence interval coverage depends on tail probabilities. We calculate the exact probabilities P(t_n > 1.645) = 0.00, P(t_n < -1.645) = .115 and P(|t_n| > 1.96) = .084, meaning that both one-tailed and two-tailed tests and confidence intervals based on asymptotic approximations have significant Type I error. (The nominal, or asymptotic, Type I error for each of these tests is .050.)
This example shows that asymptotic approximations may be quite poor, even in very simple regression models with reasonably large samples.

8.3 The Empirical Distribution Function
The observations are w_i, i = 1, ..., n, e.g. w_i = (y_i, x_i), with CDF F(w) = P(w_i ≤ w) and true value F₀. Note that F₀(w) = P(w_i ≤ w) = E 1(w_i ≤ w), where 1(·) is the indicator function, so F₀(w) can be expressed as a population moment. The MME is the corresponding sample moment:

F_n(w) = (1/n) Σ_{i=1}^n 1(w_i ≤ w).

F_n(w) is called the empirical distribution function (EDF). F_n is a nonparametric estimate of F₀. Note that while F₀ may be either discrete or continuous, F_n is by construction a (discontinuous) step function.
The EDF is a consistent estimator of the CDF. To see this, note that for any w, 1(w_i ≤ w) is an iid random variable with expectation F₀(w). Thus by the WLLN, F_n(w) →_p F₀(w). It also converges at rate √n: by the CLT,

√n (F_n(w) - F₀(w)) →_d N(0, F₀(w)(1 - F₀(w))).

To see the effect of sample size on the EDF, in Figure 2 I have plotted the EDF and true CDF for three random samples of size n = 25, 50, and 100. The random draws are from the N(0,1) distribution. For n = 25, the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the larger n. Yet even for n = 100, there is a significant divergence between the EDF and CDF around w = .6 in this sample. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.
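For scalar observations, the EDF is straightforward to compute; the following short Python sketch (my own illustration) evaluates F_n at a grid of points.

import numpy as np

def edf(sample, grid):
    # Empirical distribution function F_n evaluated at each point of grid.
    sample = np.asarray(sample)
    return np.array([np.mean(sample <= w) for w in grid])

rng = np.random.default_rng(0)
draws = rng.normal(size=100)
print(edf(draws, [0.6]))          # should be close to the true CDF value Phi(0.6), about 0.73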

8.4 Definition of the Bootstrap
Let T_n = T_n(w₁, ..., w_n, θ) be a statistic of interest with CDF G_n(x, F) = P(T_n ≤ x | F). For exact inference on θ, we desire G_n(x, F₀). This is impossible since F₀ is unknown.
In a seminal contribution, Efron (1979) observed that since F₀ can be estimated by F_n, G_n(x, F₀) can be estimated by plugging F_n into G_n(x, F) to get the estimator

G_n*(x) = G_n(x, F_n).        (8.2)

Bootstrap inference is based on G_n*(x).
Indeed, bootstrap inference can be based on any estimate F_n of F₀. (Choices other than the EDF introduced in the previous section are discussed later.) When F_n is the EDF, G_n*(x) is called the nonparametric bootstrap.
The bootstrap distribution substitutes F_n for F₀ in the formula G_n(x, F). As such, it not only pretends that the distribution of w_i is F_n rather than F₀, but it also pretends that the true value of the parameter² is the sample estimate θ̂, rather than θ₀.

² More precisely, the bootstrap pretends that the true value of θ is the value consistent with F_n, the estimate of F used to construct the bootstrap. In most cases, and the ones we consider, this value of θ is the sample estimate θ̂.
The EDF is a valid discrete probability distribution which puts probability mass 1/n at each point w_i = (y_i, x_i), i = 1, ..., n. Notationally, it will be helpful to think of a random variable w_i* with the distribution F_n. That is,

P(w_i* ≤ x) = F_n(x).

We can easily calculate the moments of w_i*:

E h(w_i*) = ∫ h(w) dF_n(w)
          = Σ_{i=1}^n h(w_i) P(w_i* = w_i)
          = (1/n) Σ_{i=1}^n h(w_i),

the empirical sample average.


Let T_n* = T_n(w₁*, ..., w_n*, θ̂) be a random variable with distribution G_n*. That is,

P(T_n* ≤ x) = G_n*(x).

We call the distributions of w_i* and T_n* the bootstrap distributions of the data and the statistic. T_n* is the correct analog of T_n when the true CDF is F_n, as the data w_i* are sampled from the CDF F_n and the parameter θ̂ is the true value under F_n.
Since the EDF F_n is a multinomial (with n support points), in principle the distribution of T_n* could be calculated by direct methods. Since there are a very large number of possible samples {w₁*, ..., w_n*}, however, such a calculation is computationally infeasible unless n is very small. The popular alternative is to use Monte Carlo simulation to approximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following points of clarification:

The sample size n used for the simulation is the same as the sample size of the data.

The random vectors w_i* are drawn randomly from the distribution function F_n.

T_n* = T_n(w₁*, ..., w_n*, θ̂) is evaluated at the sample estimate θ̂.

The generation of random vectors from F_n depends on the form of F_n. When F_n equals the EDF, this is particularly simple. Recall that F_n is a discrete probability distribution putting probability mass 1/n at each sample point w_i. Thus a random draw from F_n is just a random draw from the sample {w₁, ..., w_n}. For a bootstrap sample we need n independent random draws from F_n. This requires that we make n independent random draws from the sample {w₁, ..., w_n} with replacement. In consequence, a bootstrap sample {w₁*, ..., w_n*} will necessarily have some ties and multiple values, which is generally not a problem.
A theory for the determination of the number of bootstrap replications B has been developed by Andrews and Buchinsky (2000).
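A minimal Python sketch of nonparametric bootstrap resampling (my own illustration; "statistic" is any function of the data, such as an estimator):

import numpy as np

def bootstrap_statistics(data, statistic, B=1000, seed=0):
    # Draw B bootstrap samples (rows of data, with replacement) and return the B bootstrap statistics.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.shape[0]
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)        # n independent draws with replacement
        stats[b] = statistic(data[idx])         # T_nb* evaluated on the bootstrap sample
    return stats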

8.5 Bootstrap Estimation of Bias and Variance


The bias of θ̂ is
τ_n = E(θ̂ - θ₀).
Let T_n(θ) = θ̂ - θ. Then
τ_n = E(T_n(θ₀)).
The bootstrap counterparts are θ̂* = θ̂(w₁*, ..., w_n*) and T_n* = θ̂* - θ_n = θ̂* - θ̂. The bootstrap estimate of τ_n is
τ_n* = E(T_n*).
If this is calculated by the simulation described in the previous section, the estimate of τ_n* is

τ̂_n* = (1/B) Σ_{b=1}^B T_nb*
     = (1/B) Σ_{b=1}^B (θ̂_b* - θ̂)
     = θ̄* - θ̂,

where θ̄* = (1/B) Σ_{b=1}^B θ̂_b* is the average of the bootstrap estimates.
If θ̂ is biased, it might be desirable to construct a bias-corrected estimator (one with reduced bias). Ideally, this would be
θ̃ = θ̂ - τ_n,
but τ_n is unknown. The (estimated) bootstrap bias-corrected estimator is

θ̃* = θ̂ - τ̂_n*
   = θ̂ - (θ̄* - θ̂)
   = 2θ̂ - θ̄*.

Note, in particular, that the bias-corrected estimator is not θ̄*. Intuitively, the bootstrap makes the following experiment. Suppose that θ̂ is the truth. Then what is the average value of θ̂ calculated from such samples? The answer is θ̄*. If this is lower than θ̂, this suggests that the estimator is downward-biased, so a bias-corrected estimator of θ should be larger than θ̂, and the best guess is the difference between θ̂ and θ̄*. Similarly, if θ̄* is higher than θ̂, then the estimator is upward-biased and the bias-corrected estimator should be lower than θ̂.

Let T_n = θ̂. The variance of θ̂ is
V_n = E(T_n - E T_n)².
Let T_n* = θ̂*. It has variance
V_n* = E(T_n* - E T_n*)².
The simulation estimate is

V̂_n* = (1/B) Σ_{b=1}^B (θ̂_b* - θ̄*)².

A bootstrap standard error for θ̂ is the square root of the bootstrap estimate of variance, s(θ̂) = √V̂_n*.
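A minimal Python sketch of these bias, variance, and standard error calculations (my own illustration, self-contained for a generic scalar statistic):

import numpy as np

def bootstrap_bias_variance(data, statistic, B=1000, seed=0):
    # Bootstrap estimates of the bias, variance, standard error, and the bias-corrected estimator.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.shape[0]
    theta_hat = statistic(data)
    stats = np.array([statistic(data[rng.integers(0, n, size=n)]) for _ in range(B)])
    bias = stats.mean() - theta_hat             # tau_hat_n* = mean of theta_b* minus theta_hat
    variance = np.mean((stats - stats.mean()) ** 2)
    corrected = 2 * theta_hat - stats.mean()    # bias-corrected estimator 2*theta_hat - mean(theta*)
    return bias, variance, np.sqrt(variance), corrected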
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspect. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.

8.6 Percentile Intervals


For a distribution function G_n(x, F), let q_n(α, F) denote its quantile function. This is the function which solves

G_n(q_n(α, F), F) = α.

[When G_n(x, F) is discrete, q_n(α, F) may be non-unique, but we will ignore such complications.] Let q_n(α) = q_n(α, F₀) denote the quantile function of the true sampling distribution, and q_n*(α) = q_n(α, F_n) denote the quantile function of the bootstrap distribution. Note that this function will change depending on the underlying statistic T_n whose distribution is G_n.
Let T_n = θ̂, an estimate of a parameter of interest. In (1 - α)% of samples, θ̂ lies in the region [q_n(α/2), q_n(1 - α/2)]. This motivates a confidence interval proposed by Efron:

C₁ = [q_n*(α/2), q_n*(1 - α/2)].

This is often called the percentile confidence interval.
Computationally, the quantile q_n*(x) is estimated by q̂_n*(x), the x'th sample quantile of the simulated statistics {T_n1*, ..., T_nB*}, as discussed in the section on Monte Carlo simulation. The (1 - α)% Efron percentile interval is then [q̂_n*(α/2), q̂_n*(1 - α/2)].
The interval C₁ is a popular bootstrap confidence interval often used in empirical practice. This is because it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap, and also has the feature that it is translation invariant. That is, if we define φ = f(θ) as the parameter of interest for a monotonic function f, then the percentile method applied to this problem will produce the confidence interval [f(q_n*(α/2)), f(q_n*(1 - α/2))], which is a naturally good property.
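Computationally, then, C₁ requires only the sorted bootstrap estimates; a minimal Python sketch (my own illustration):

import numpy as np

def efron_percentile_interval(theta_boot, alpha=0.05):
    # Efron percentile interval [q_n*(alpha/2), q_n*(1 - alpha/2)] from the bootstrap estimates theta_b*.
    return (np.quantile(theta_boot, alpha / 2), np.quantile(theta_boot, 1 - alpha / 2))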
However, as we show now, C₁ is in a deep sense very poorly motivated.
It will be useful if we introduce an alternative definition of C₁. Let T_n(θ) = θ̂ - θ and let q_n(α) be the quantile function of its distribution. (These are the original quantiles, with θ subtracted.) Then C₁ can alternatively be written as

C₁ = [θ̂ + q_n*(α/2), θ̂ + q_n*(1 - α/2)].

This is a bootstrap estimate of the "ideal" confidence interval

C₁⁰ = [θ̂ + q_n(α/2), θ̂ + q_n(1 - α/2)].

The latter has coverage probability

P(θ₀ ∈ C₁⁰) = P(θ̂ + q_n(α/2) ≤ θ₀ ≤ θ̂ + q_n(1 - α/2))
            = P(-q_n(1 - α/2) ≤ θ̂ - θ₀ ≤ -q_n(α/2))
            = G_n(-q_n(α/2), F₀) - G_n(-q_n(1 - α/2), F₀)

which generally is not 1 - α! There is one important exception. If θ̂ - θ₀ has a symmetric distribution, then G_n(-x, F₀) = 1 - G_n(x, F₀), so

P(θ₀ ∈ C₁⁰) = G_n(-q_n(α/2), F₀) - G_n(-q_n(1 - α/2), F₀)
            = (1 - G_n(q_n(α/2), F₀)) - (1 - G_n(q_n(1 - α/2), F₀))
            = (1 - α/2) - (1 - (1 - α/2))
            = 1 - α

and this idealized confidence interval is accurate. Therefore, C₁⁰ and C₁ are designed for the case that θ̂ has a symmetric distribution about θ₀.
When θ̂ does not have a symmetric distribution, C₁ may perform quite poorly.
However, by the translation invariance argument presented above, it also follows that if there exists some monotonic transformation f(·) such that f(θ̂) is symmetrically distributed about f(θ₀), then the idealized percentile bootstrap method will be accurate.
Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.
The problems with the percentile method can be circumvented by an alternative method.
Let T_n(θ) = θ̂ - θ. Then

1 - α = P(q_n(α/2) ≤ T_n(θ₀) ≤ q_n(1 - α/2))
      = P(θ̂ - q_n(1 - α/2) ≤ θ₀ ≤ θ̂ - q_n(α/2)),

so an exact (1 - α)% confidence interval for θ₀ would be

C₂⁰ = [θ̂ - q_n(1 - α/2), θ̂ - q_n(α/2)].

This motivates a bootstrap analog

C₂ = [θ̂ - q_n*(1 - α/2), θ̂ - q_n*(α/2)].

Notice that generally this is very different from the Efron interval C₁! They coincide in the special case that G_n*(x) is symmetric about θ̂, but otherwise they differ.
Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics T_n* = θ̂* - θ̂, which are centered at the sample estimate θ̂. These are sorted to yield the quantile estimates q̂_n*(.025) and q̂_n*(.975). The 95% confidence interval is then [θ̂ - q̂_n*(.975), θ̂ - q̂_n*(.025)].
This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.

8.7 Percentile-t Equal-Tailed Interval


Suppose we want to test H₀ : θ = θ₀ against H₁ : θ < θ₀ at size α. We would set T_n(θ) = (θ̂ - θ)/s(θ̂) and reject H₀ in favor of H₁ if T_n(θ₀) < c, where c would be selected so that

P(T_n(θ₀) < c) = α.

Thus c = q_n(α). Since this is unknown, a bootstrap test replaces q_n(α) with the bootstrap estimate q_n*(α), and the test rejects if T_n(θ₀) < q_n*(α).
Similarly, if the alternative is H₁ : θ > θ₀, the bootstrap test rejects if T_n(θ₀) > q_n*(1 - α).
Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics T_n* = (θ̂* - θ̂)/s(θ̂*). Note, and this is important, that the bootstrap test statistic is centered at the estimate θ̂, and the standard error s(θ̂*) is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles q̂_n*(α) and/or q̂_n*(1 - α).
Let T_n(θ) = (θ̂ - θ)/s(θ̂). Then

1 - α = P(q_n(α/2) ≤ T_n(θ₀) ≤ q_n(1 - α/2))
      = P(q_n(α/2) ≤ (θ̂ - θ₀)/s(θ̂) ≤ q_n(1 - α/2))
      = P(θ̂ - s(θ̂) q_n(1 - α/2) ≤ θ₀ ≤ θ̂ - s(θ̂) q_n(α/2)),

so an exact (1 - α)% confidence interval for θ₀ would be

C₃⁰ = [θ̂ - s(θ̂) q_n(1 - α/2), θ̂ - s(θ̂) q_n(α/2)].

This motivates a bootstrap analog

C₃ = [θ̂ - s(θ̂) q_n*(1 - α/2), θ̂ - s(θ̂) q_n*(α/2)].

This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that θ₀ is below the left endpoint approximately equals the probability that θ₀ is above the right endpoint, each α/2.
Computationally, this is based on the critical values from the one-sided hypothesis tests discussed above.
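A minimal Python sketch of the equal-tailed percentile-t interval (illustrative; "estimator" is assumed to return both the estimate and its standard error, so each bootstrap t-statistic is studentized on the bootstrap sample):

import numpy as np

def percentile_t_interval(data, estimator, B=1000, alpha=0.05, seed=0):
    # Equal-tailed percentile-t interval; estimator(data) must return (theta_hat, standard error).
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.shape[0]
    theta, se = estimator(data)
    t_boot = np.empty(B)
    for b in range(B):
        sample = data[rng.integers(0, n, size=n)]
        theta_b, se_b = estimator(sample)
        t_boot[b] = (theta_b - theta) / se_b    # centered at theta_hat, studentized on the bootstrap sample
    q_lo = np.quantile(t_boot, alpha / 2)
    q_hi = np.quantile(t_boot, 1 - alpha / 2)
    return theta - se * q_hi, theta - se * q_lo # C3 = [theta - s*q*(1-a/2), theta - s*q*(a/2)]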

8.8 Symmetric Percentile-t Intervals


Suppose we want to test H₀ : θ = θ₀ against H₁ : θ ≠ θ₀ at size α. We would set T_n(θ) = (θ̂ - θ)/s(θ̂) and reject H₀ in favor of H₁ if |T_n(θ₀)| > c, where c would be selected so that

P(|T_n(θ₀)| > c) = α.

Note that

P(|T_n(θ₀)| < c) = P(-c < T_n(θ₀) < c)
                 = G_n(c) - G_n(-c)
                 ≡ Ḡ_n(c),

which is a symmetric distribution function. The ideal critical value c = q̄_n(α) solves the equation

Ḡ_n(q̄_n(α)) = 1 - α.

Equivalently, q̄_n(α) is the 1 - α quantile of the distribution of |T_n(θ₀)|.
The bootstrap estimate is q̄_n*(α), the 1 - α quantile of the distribution of |T_n*|, or the number which solves the equation

Ḡ_n*(q̄_n*(α)) = G_n*(q̄_n*(α)) - G_n*(-q̄_n*(α)) = 1 - α.

Computationally, q̄_n*(α) is estimated from a bootstrap simulation by sorting the bootstrap t-statistics |T_n*| = |θ̂* - θ̂|/s(θ̂*), and taking the upper α% quantile. The bootstrap test rejects if |T_n(θ₀)| > q̄_n*(α).
Let

C₄ = [θ̂ - s(θ̂) q̄_n*(α), θ̂ + s(θ̂) q̄_n*(α)],

where q̄_n*(α) is the bootstrap critical value for a two-sided hypothesis test. C₄ is called the symmetric percentile-t interval. It is designed to work well since

P(θ₀ ∈ C₄) = P(θ̂ - s(θ̂) q̄_n*(α) ≤ θ₀ ≤ θ̂ + s(θ̂) q̄_n*(α))
           = P(|T_n(θ₀)| < q̄_n*(α))
           ≈ P(|T_n(θ₀)| < q̄_n(α))
           = 1 - α.

If θ is a vector, then to test H₀ : θ = θ₀ against H₁ : θ ≠ θ₀ at size α, we would use a Wald statistic

W_n(θ) = n (θ̂ - θ)' V̂⁻¹ (θ̂ - θ)

or some other asymptotically chi-square statistic. Thus here T_n(θ) = W_n(θ). The ideal test rejects if W_n ≥ q_n(α), where q_n(α) is the (1 - α)% quantile of the distribution of W_n. The bootstrap test rejects if W_n ≥ q_n*(α), where q_n*(α) is the (1 - α)% quantile of the distribution of

W_n* = n (θ̂* - θ̂)' V̂*⁻¹ (θ̂* - θ̂).

Computationally, the critical value q_n*(α) is found as the quantile from simulated values of W_n*. Note in the simulation that the Wald statistic is a quadratic form in (θ̂* - θ̂), not (θ̂* - θ₀). [This is a typical mistake made by practitioners.]

8.9 Asymptotic Expansions


Let T_n be a statistic such that

T_n →_d N(0, v²).        (8.3)

If T_n = √n(θ̂ - θ₀) then v = √V, while if T_n is a t-ratio then v = 1. Equivalently, writing T_n ~ G_n(x, F), then

lim_{n→∞} G_n(x, F) = Φ(x/v),

or

G_n(x, F) = Φ(x/v) + o(1).        (8.4)

While (8.4) says that G_n converges to Φ(x/v) as n → ∞, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size n. A better asymptotic approximation may be obtained through an asymptotic expansion.
The following notation will be helpful. Let a_n be a sequence.

Definition 8.9.1 a_n = o(1) if a_n → 0 as n → ∞.

Definition 8.9.2 a_n = O(1) if |a_n| is uniformly bounded.

Definition 8.9.3 a_n = o(n^(-r)) if n^r |a_n| → 0 as n → ∞.

Basically, a_n = O(n^(-r)) if it declines to zero like n^(-r).
We say that a function g(x) is even if g(-x) = g(x), and a function h(x) is odd if h(-x) = -h(x). The derivative of an even function is odd, and vice-versa.

Theorem 8.9.1 Under regularity conditions and (8.3),

G_n(x, F) = Φ(x/v) + n^(-1/2) g₁(x, F) + n^(-1) g₂(x, F) + O(n^(-3/2))

uniformly over x, where g₁ is an even function of x, and g₂ is an odd function of x. Moreover, g₁ and g₂ are differentiable functions of x and continuous in F relative to the supremum norm on the space of distribution functions.

We can interpret Theorem 8.9.1 as follows. First, G_n(x, F) converges to the normal limit at rate n^(1/2). To a second order of approximation,

G_n(x, F) ≈ Φ(x/v) + n^(-1/2) g₁(x, F).

Since the derivative of g₁ is odd, the density function is skewed. To a third order of approximation,

G_n(x, F) ≈ Φ(x/v) + n^(-1/2) g₁(x, F) + n^(-1) g₂(x, F),

which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).

8.10 One-Sided Tests


Using the expansion of Theorem 8.9.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio T_n. An asymptotic test is based on Φ(x).
To the second order, the exact distribution is

P(T_n < x) = G_n(x, F₀) = Φ(x) + n^(-1/2) g₁(x, F₀) + O(n^(-1))

since v = 1. The difference is

Φ(x) - G_n(x, F₀) = -n^(-1/2) g₁(x, F₀) + O(n^(-1))
                  = O(n^(-1/2)),

so the order of the error is O(n^(-1/2)).
A bootstrap test is based on G_n*(x), which from Theorem 8.9.1 has the expansion

G_n*(x) = G_n(x, F_n) = Φ(x) + n^(-1/2) g₁(x, F_n) + O(n^(-1)).

Because Φ(x) appears in both expansions, the difference between the bootstrap distribution and the true distribution is

G_n*(x) - G_n(x, F₀) = n^(-1/2) (g₁(x, F_n) - g₁(x, F₀)) + O(n^(-1)).

Since F_n converges to F₀ at rate √n, and g₁ is continuous with respect to F, the difference (g₁(x, F_n) - g₁(x, F₀)) converges to 0 at rate √n. Heuristically,

g₁(x, F_n) - g₁(x, F₀) ≈ (∂/∂F) g₁(x, F₀) (F_n - F₀) = O(n^(-1/2)).

The "derivative" ∂g₁(x, F)/∂F is only heuristic, as F is a function. We conclude that

G_n*(x) - G_n(x, F₀) = O(n^(-1)),

or

P(T_n* ≤ x) = P(T_n ≤ x) + O(n^(-1)),

which is an improved rate of convergence over the asymptotic test (which converged at rate O(n^(-1/2))). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.

8.11 Symmetric Two-Sided Tests


If a random variable X has distribution function H(x) = P(X ≤ x), then the random variable |X| has distribution function

H̄(x) = H(x) - H(-x)

since

P(|X| ≤ x) = P(-x ≤ X ≤ x)
           = P(X ≤ x) - P(X ≤ -x)
           = H(x) - H(-x).

For example, if Z ~ N(0,1), then |Z| has distribution function

Φ̄(x) = Φ(x) - Φ(-x) = 2Φ(x) - 1.

Similarly, if T_n has exact distribution G_n(x, F), then |T_n| has the distribution function

Ḡ_n(x, F) = G_n(x, F) - G_n(-x, F).

A two-sided hypothesis test rejects H₀ for large values of |T_n|. Since T_n →_d Z, then |T_n| →_d |Z| ~ Φ̄. Thus asymptotic critical values are taken from the Φ̄ distribution, and exact critical values are taken from the Ḡ_n(x, F₀) distribution. From Theorem 8.9.1, we can calculate that

Ḡ_n(x, F) = G_n(x, F) - G_n(-x, F)
          = ( Φ(x) + n^(-1/2) g₁(x, F) + n^(-1) g₂(x, F) )
            - ( Φ(-x) + n^(-1/2) g₁(-x, F) + n^(-1) g₂(-x, F) ) + O(n^(-3/2))
          = Φ̄(x) + 2 n^(-1) g₂(x, F) + O(n^(-3/2)),        (8.5)

where the simplifications are because g₁ is even and g₂ is odd. Hence the difference between the asymptotic distribution and the exact distribution is

Φ̄(x) - Ḡ_n(x, F₀) = -2 n^(-1) g₂(x, F₀) + O(n^(-3/2)) = O(n^(-1)).

The order of the error is O(n^(-1)).
Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, g₁, is an even function, meaning that the errors in the two directions exactly cancel out.
Applying (8.5) to the bootstrap distribution, we find

Ḡ_n*(x) = Ḡ_n(x, F_n) = Φ̄(x) + 2 n^(-1) g₂(x, F_n) + O(n^(-3/2)).

Thus the difference between the bootstrap and exact distributions is

Ḡ_n*(x) - Ḡ_n(x, F₀) = 2 n^(-1) (g₂(x, F_n) - g₂(x, F₀)) + O(n^(-3/2))
                      = O(n^(-3/2)),

the last equality because F_n converges to F₀ at rate √n, and g₂ is continuous in F. Another way of writing this is

P(|T_n*| < x) = P(|T_n| < x) + O(n^(-3/2)),

so the error from using the bootstrap distribution (relative to the true unknown distribution) is O(n^(-3/2)). This is in contrast to the use of the asymptotic distribution, whose error is O(n^(-1)). Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.
The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.

8.12 Percentile Confidence Intervals


To evaluate the coverage rate of the percentile interval, set T_n = √n(θ̂ - θ₀). We know that T_n →_d N(0, V), which is not pivotal, as it depends on the unknown V. Theorem 8.9.1 shows that a first-order approximation is

G_n(x, F) = Φ(x/v) + O(n^(-1/2)),

where v = √V, and for the bootstrap

G_n*(x) = G_n(x, F_n) = Φ(x/v̂) + O(n^(-1/2)),

where v̂ = √V̂ and V̂ = V(F_n) is the bootstrap estimate of V. The difference is

G_n*(x) - G_n(x, F₀) = Φ(x/v̂) - Φ(x/v) + O(n^(-1/2))
                     = O(n^(-1/2)),

since Φ(x/v̂) - Φ(x/v) is of the same order as v̂ - v = O(n^(-1/2)). Hence the order of the error is O(n^(-1/2)).
The good news is that the percentile-type methods (if appropriately used) can yield √n-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.
The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.
Based on these arguments, the theoretical literature (e.g. Hall, 1992; Horowitz, 2002) tends to advocate the use of percentile-t bootstrap methods rather than percentile methods.

8.13 Bootstrap Methods for Regression Models
The bootstrap methods we have discussed have set G_n*(x) = G_n(x, F_n), where F_n is the EDF. Any other consistent estimate of F₀ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about F. We discuss some bootstrap methods appropriate for the case of a regression model where

y_i = x_i'β + e_i
E(e_i | x_i) = 0.

The nonparametric bootstrap distribution resamples the observations (y_i*, x_i*) from the EDF, which implies

y_i* = x_i*'β̂ + e_i*
E(x_i* e_i*) = 0

but generally
E(e_i* | x_i*) ≠ 0.
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true).
One approach to this problem is to impose the very strong assumption that the error e_i is independent of the regressor x_i. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.
To impose independence, it is sufficient to sample the x_i* and e_i* independently, and then create y_i* = x_i*'β̂ + e_i*. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors e_i* randomly from the OLS residuals {ê₁, ..., ê_n}. A parametric method is to generate the bootstrap errors e_i* from a parametric distribution, such as the normal e_i* ~ N(0, σ̂²).
For the regressors x_i*, a nonparametric method is to sample the x_i* randomly from the EDF or sample values {x₁, ..., x_n}. A parametric method is to sample x_i* from an estimated parametric distribution. A third approach sets x_i* = x_i. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the x_i are really "fixed" or random.
The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that x_i and e_i are independent. Typically what is desirable is to impose only the regression condition E(e_i | x_i) = 0. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for e_i* so that

E(e_i* | x_i) = 0
E(e_i*² | x_i) = ê_i²
E(e_i*³ | x_i) = ê_i³.

A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form

P( e_i* = ((1 + √5)/2) ê_i ) = (√5 - 1)/(2√5)
P( e_i* = ((1 - √5)/2) ê_i ) = (√5 + 1)/(2√5)

For each x_i, you sample e_i* using this two-point distribution.
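A minimal Python sketch of generating wild bootstrap errors with this two-point distribution (my own illustration):

import numpy as np

def wild_bootstrap_errors(residuals, seed=0):
    # Draw e_i* = eta_i * ehat_i using the two-point distribution above.
    rng = np.random.default_rng(seed)
    ehat = np.asarray(residuals)
    a = (1 + np.sqrt(5)) / 2                    # taken with probability (sqrt(5) - 1) / (2 sqrt(5))
    b = (1 - np.sqrt(5)) / 2                    # taken with the complementary probability
    p = (np.sqrt(5) - 1) / (2 * np.sqrt(5))
    eta = np.where(rng.uniform(size=ehat.shape) < p, a, b)
    return eta * ehat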

8.14 Bootstrap GMM Inference


Let w_i = (y_i, x_i, z_i) and let β̂ be the 2SLS or GMM estimator of β. Using the EDF of w_i, we can apply the bootstrap methods discussed before to compute estimates of the bias and variance of β̂, and construct confidence intervals for β, identically as in the regression model. However, caution should be applied when interpreting such results.
A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied to the J test will yield the wrong answer.
The problem is that in the sample, β̂ is the "true" value and yet ḡ_n(β̂) ≠ 0. Thus according to random variables w_i* drawn from the EDF F_n,

E g_i(w_i*, β̂) = ḡ_n(β̂) ≠ 0.

This means that the w_i* do not satisfy the same moment conditions as the population distribution.
A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample (Y*, X*, Z*), define the bootstrap GMM criterion

J*(β) = n (ḡ_n*(β) - ḡ_n(β̂))' W_n* (ḡ_n*(β) - ḡ_n(β̂))

where ḡ_n(β̂) is from the in-sample data, not from the bootstrap data.
Let β̂* minimize J*(β), and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is

β̂* = (Z*'X* W_n* X*'Z*)⁻¹ Z*'X* W_n* (X*'Y* - X'ê),

where ê = Y - Zβ̂ are the in-sample residuals. The bootstrap J statistic is J*(β̂*).
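A minimal Python sketch of the Hall-Horowitz recentering in the linear IV model (my own illustration; W is whatever weight matrix is applied to the bootstrap sample):

import numpy as np

def bootstrap_gmm_recentered(Ys, Zs, Xs, X, ehat, W):
    # Recentered bootstrap GMM estimator: starred arrays (Ys, Zs, Xs) form the bootstrap sample;
    # X and ehat are the original instruments and in-sample residuals; W is the weight matrix.
    recenter = X.T @ ehat                       # n * g_n(beta_hat), held fixed from the original sample
    A = Zs.T @ Xs @ W @ (Xs.T @ Zs)
    b = Zs.T @ Xs @ W @ (Xs.T @ Ys - recenter)
    return np.linalg.solve(A, b)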


Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations {w₁, ..., w_n} with the empirical likelihood probabilities {p̂_i} described in Chapter 6. Since Σ_{i=1}^n p̂_i g_i(β̂) = 0, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments are needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
To date, there are very few empirical applications of bootstrap GMM, as this is a very new area of research.

Chapter 9

Univariate Time Series

A time series yt is a process observed in sequence over time, t = 1; :::; T . To indicate the dependence
on time, we adopt new notation, and use the subscript t to denote the individual observation, and
T to denote the number of observations.
Because of the sequential nature of time series, we expect that Y_t and Y_{t-1} are not independent, so classical assumptions are not valid.
We can separate time series into two categories: univariate (y_t ∈ R is scalar) and multivariate (y_t ∈ R^m is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model for multivariate time series is vector autoregressions (VARs).

9.1 Stationarity and Ergodicity


Definition 9.1.1 {Y_t} is covariance (weakly) stationary if

E(Y_t) = μ

is independent of t, and

Cov(Y_t, Y_{t-k}) = γ(k)

is independent of t for all k. γ(k) is called the autocovariance function.

Definition 9.1.2 {Y_t} is strictly stationary if the joint distribution of (Y_t, ..., Y_{t-k}) is independent of t for all k.

Definition 9.1.3 ρ(k) = γ(k)/γ(0) = Corr(Y_t, Y_{t-k}) is the autocorrelation function.

Definition 9.1.4 (loose). A stationary time series is ergodic if γ(k) → 0 as k → ∞.

The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 9.1.1 If Y_t is strictly stationary and ergodic and X_t = f(Y_t, Y_{t-1}, ...) is a random variable, then X_t is strictly stationary and ergodic.

Theorem 9.1.2 (Ergodic Theorem). If X_t is strictly stationary and ergodic and E|X_t| < ∞, then as T → ∞,

(1/T) Σ_{t=1}^T X_t →_p E(X_t).

This allows us to consistently estimate parameters using time-series moments:

The sample mean:
μ̂ = (1/T) Σ_{t=1}^T Y_t.

The sample autocovariance:
γ̂(k) = (1/T) Σ_{t=1}^T (Y_t - μ̂)(Y_{t-k} - μ̂).

The sample autocorrelation:
ρ̂(k) = γ̂(k)/γ̂(0).

Theorem 9.1.3 If Y_t is strictly stationary and ergodic and E(Y_t²) < ∞, then as T → ∞,

1. μ̂ →_p E(Y_t);
2. γ̂(k) →_p γ(k);
3. ρ̂(k) →_p ρ(k).

Proof. Part (1) is a direct consequence of the Ergodic Theorem. For Part (2), note that

γ̂(k) = (1/T) Σ_{t=1}^T (Y_t - μ̂)(Y_{t-k} - μ̂)
      = (1/T) Σ_{t=1}^T Y_t Y_{t-k} - (1/T) Σ_{t=1}^T Y_t μ̂ - (1/T) Σ_{t=1}^T Y_{t-k} μ̂ + μ̂².

By Theorem 9.1.1 above, the sequence Y_t Y_{t-k} is strictly stationary and ergodic, and it has a finite mean by the assumption that E(Y_t²) < ∞. Thus an application of the Ergodic Theorem yields

(1/T) Σ_{t=1}^T Y_t Y_{t-k} →_p E(Y_t Y_{t-k}).

Thus

γ̂(k) →_p E(Y_t Y_{t-k}) - μ² - μ² + μ² = E(Y_t Y_{t-k}) - μ² = γ(k).

Part (3) follows by the continuous mapping theorem: ρ̂(k) = γ̂(k)/γ̂(0) →_p γ(k)/γ(0) = ρ(k).
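A minimal Python sketch of these sample moments (my own illustration; the sums use the T - k available products, divided by T as in the text):

import numpy as np

def sample_autocovariance(y, k):
    # gamma_hat(k) = (1/T) * sum of (Y_t - mu_hat)(Y_{t-k} - mu_hat) over the available products.
    y = np.asarray(y)
    T = len(y)
    mu = y.mean()
    return np.sum((y[k:] - mu) * (y[:T - k] - mu)) / T

def sample_autocorrelation(y, k):
    # rho_hat(k) = gamma_hat(k) / gamma_hat(0).
    return sample_autocovariance(y, k) / sample_autocovariance(y, 0)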

9.2 Autoregressions
In time series, the full sequence {..., Y₁, Y₂, ..., Y_T, ...} is jointly random. We consider the conditional expectation

E(Y_t | I_{t-1})

where I_{t-1} = {Y_{t-1}, Y_{t-2}, ...} is the past history of the series.
An autoregressive (AR) model specifies that only a finite number of past lags matter:

E(Y_t | I_{t-1}) = E(Y_t | Y_{t-1}, ..., Y_{t-k}).

A linear AR model (the most common type used in practice) specifies linearity:

E(Y_t | I_{t-1}) = α + ρ₁Y_{t-1} + ρ₂Y_{t-2} + ... + ρ_k Y_{t-k}.

Letting
e_t = Y_t - E(Y_t | I_{t-1}),
we have the autoregressive model

Y_t = α + ρ₁Y_{t-1} + ρ₂Y_{t-2} + ... + ρ_k Y_{t-k} + e_t
E(e_t | I_{t-1}) = 0.

The last property defines a special time-series process.

Definition 9.2.1 e_t is a martingale difference sequence (MDS) if E(e_t | I_{t-1}) = 0.

Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset returns should be the sum of a constant plus a MDS.
The MDS property for the regression error plays the same role in a time-series regression as does the conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more important in the time-series context, as it is difficult to derive distribution theories without this property.
A useful property of a MDS is that e_t is uncorrelated with any function of the lagged information I_{t-1}. Thus for k > 0, E(Y_{t-k} e_t) = 0.

9.3 Stationarity of AR(1) Process
A mean-zero AR(1) is
Y_t = ρY_{t-1} + e_t.
Assume that e_t is iid, E(e_t) = 0 and E(e_t²) = σ² < ∞.
By back-substitution, we find

Y_t = e_t + ρe_{t-1} + ρ²e_{t-2} + ...
    = Σ_{k=0}^∞ ρ^k e_{t-k}.

Loosely speaking, this series converges if the sequence ρ^k e_{t-k} gets small as k → ∞. This occurs when |ρ| < 1.

Theorem 9.3.1 If |ρ| < 1 then Y_t is strictly stationary and ergodic.

We can compute the moments of Y_t using the infinite sum:

E(Y_t) = Σ_{k=0}^∞ ρ^k E(e_{t-k}) = 0

Var(Y_t) = Σ_{k=0}^∞ ρ^{2k} Var(e_{t-k}) = σ²/(1 - ρ²).

If the equation for Y_t has an intercept, the above results are unchanged, except that the mean of Y_t can be computed from the relationship

E(Y_t) = α + ρ E(Y_{t-1}),

and solving for E(Y_t) = E(Y_{t-1}) we find E(Y_t) = α/(1 - ρ).

9.4 Lag Operator


An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Definition 9.4.1 The lag operator L satisfies LY_t = Y_{t-1}.

Defining L² = LL, we see that L²Y_t = LY_{t-1} = Y_{t-2}. In general, L^k Y_t = Y_{t-k}.
The AR(1) model can be written in the format

Y_t - ρY_{t-1} = e_t

or

(1 - ρL) Y_t = e_t.

The operator ρ(L) = (1 - ρL) is a polynomial in the operator L. We say that the root of the polynomial is 1/ρ, since ρ(z) = 0 when z = 1/ρ. We call ρ(L) the autoregressive polynomial of Y_t.
From Theorem 9.3.1, an AR(1) is stationary if and only if |ρ| < 1. Note that an equivalent way to say this is that an AR(1) is stationary if and only if the root of the autoregressive polynomial is larger than one (in absolute value).

9.5 Stationarity of AR(k)


The AR(k) model is
$$Y_t = \rho_1 Y_{t-1} + \rho_2 Y_{t-2} + \cdots + \rho_k Y_{t-k} + e_t.$$
Using the lag operator,
$$Y_t - \rho_1 L Y_t - \rho_2 L^2 Y_t - \cdots - \rho_k L^k Y_t = e_t,$$
or
$$\rho(L) Y_t = e_t$$
where
$$\rho(L) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_k L^k.$$

We call $\rho(L)$ the autoregressive polynomial of $Y_t$.

The Fundamental Theorem of Algebra says that any polynomial can be factored as
$$\rho(z) = \left(1 - \lambda_1^{-1} z\right)\left(1 - \lambda_2^{-1} z\right)\cdots\left(1 - \lambda_k^{-1} z\right)$$
where the $\lambda_1, \dots, \lambda_k$ are the complex roots of $\rho(z)$, which satisfy $\rho(\lambda_j) = 0$.
We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let $|\lambda|$ denote the modulus of a complex number $\lambda$.

Theorem 9.5.1 The AR(k) is strictly stationary and ergodic if and only if $|\lambda_j| > 1$ for all $j$.

One way of stating this is that "All roots lie outside the unit circle."
If one of the roots equals 1, we say that $\rho(L)$, and hence $Y_t$, "has a unit root". This is a special case of non-stationarity, and is of great interest in applied time series.

9.6 Estimation
Let
$$x_t = \left(1, Y_{t-1}, Y_{t-2}, \dots, Y_{t-k}\right)'$$
$$\beta = \left(\alpha, \rho_1, \rho_2, \dots, \rho_k\right)'.$$

Then the model can be written as
$$y_t = x_t'\beta + e_t.$$
The OLS estimator is
$$\hat{\beta} = \left(X'X\right)^{-1} X'Y.$$
To study $\hat{\beta}$, it is helpful to define the process $u_t = x_t e_t$. Note that $u_t$ is an MDS, since
$$E(u_t \mid I_{t-1}) = E(x_t e_t \mid I_{t-1}) = x_t E(e_t \mid I_{t-1}) = 0.$$
By Theorem 9.1.1, it is also strictly stationary and ergodic. Thus
$$\frac{1}{T}\sum_{t=1}^{T} x_t e_t = \frac{1}{T}\sum_{t=1}^{T} u_t \to_p E(u_t) = 0. \qquad (9.1)$$

Theorem 9.6.1 If the AR(k) process $Y_t$ is strictly stationary and ergodic and $EY_t^2 < \infty$, then $\hat{\beta} \to_p \beta$ as $T \to \infty$.

Proof. The vector $x_t$ is strictly stationary and ergodic, and by Theorem 9.1.1, so is $x_t x_t'$. Thus by the Ergodic Theorem,
$$\frac{1}{T}\sum_{t=1}^{T} x_t x_t' \to_p E\left(x_t x_t'\right) = Q.$$
Combined with (9.1) and the continuous mapping theorem, we see that
$$\hat{\beta} = \beta + \left(\frac{1}{T}\sum_{t=1}^{T} x_t x_t'\right)^{-1}\left(\frac{1}{T}\sum_{t=1}^{T} x_t e_t\right) \to_p \beta + Q^{-1} \cdot 0 = \beta. \qquad \blacksquare$$
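A minimal numpy sketch of the OLS estimator of an AR(k), building $x_t = (1, Y_{t-1}, \dots, Y_{t-k})'$ by stacking lags (the helper name ar_ols is my own):

import numpy as np

def ar_ols(y, k):
    # Regress y_t on an intercept and k lags; returns beta_hat = (alpha, rho_1, ..., rho_k)'
    y = np.asarray(y, dtype=float)
    T = len(y)
    Y = y[k:]                                                          # dependent variable
    X = np.column_stack([np.ones(T - k)] +
                        [y[k - j:T - j] for j in range(1, k + 1)])     # column j holds Y_{t-j}
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta_hat, X, Y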
9.7 Asymptotic Distribution
Theorem 9.7.1 (MDS CLT). If $u_t$ is a strictly stationary and ergodic MDS and $E(u_t u_t') = \Omega < \infty$, then as $T \to \infty$,
$$\frac{1}{\sqrt{T}}\sum_{t=1}^{T} u_t \to_d N(0, \Omega).$$

Since $x_t e_t$ is an MDS, we can apply Theorem 9.7.1 to see that
$$\frac{1}{\sqrt{T}}\sum_{t=1}^{T} x_t e_t \to_d N(0, \Omega),$$
where
$$\Omega = E(x_t x_t' e_t^2).$$

Theorem 9.7.2 If the AR(k) process $Y_t$ is strictly stationary and ergodic and $EY_t^4 < \infty$, then as $T \to \infty$,
$$\sqrt{T}\left(\hat{\beta} - \beta\right) \to_d N\left(0, Q^{-1}\Omega Q^{-1}\right).$$

This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.

9.8 Bootstrap for Autoregressions


In the non-parametric bootstrap, we constructed the bootstrap sample by randomly resampling
from the data values fyt ; xt g: This creates an iid bootstrap sample. Clearly, this cannot work in a
time-series application, as this imposes inappropriate independence.
Brie y, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap.

1. Estimate $\hat{\beta}$ and residuals $\hat{e}_t$.

2. Fix an initial condition $(Y_{-k+1}, Y_{-k+2}, \dots, Y_0)$.

3. Simulate iid draws $e_t^*$ from the empirical distribution of the residuals $\{\hat{e}_1, \dots, \hat{e}_T\}$.

4. Create the bootstrap series $Y_t^*$ by the recursive formula
$$Y_t^* = \hat{\alpha} + \hat{\rho}_1 Y_{t-1}^* + \hat{\rho}_2 Y_{t-2}^* + \cdots + \hat{\rho}_k Y_{t-k}^* + e_t^*.$$

This construction imposes homoskedasticity on the errors $e_t^*$, which may be different than the properties of the actual $e_t$. It also presumes that the AR(k) structure is the truth.
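A sketch of Method 1, reusing the hypothetical ar_ols helper from Section 9.6 and using the first k observed values as the initial condition:

import numpy as np

def ar_bootstrap_sample(y, k, alpha_hat, rho_hat, resid, rng):
    # One bootstrap series Y*: innovations e* are iid draws from the empirical
    # distribution of the residuals; the first k observations are held fixed.
    y = np.asarray(y, dtype=float)
    T = len(y)
    e_star = rng.choice(resid, size=T, replace=True)
    y_star = np.empty(T)
    y_star[:k] = y[:k]
    for t in range(k, T):
        lags = y_star[t - k:t][::-1]                   # (Y*_{t-1}, ..., Y*_{t-k})
        y_star[t] = alpha_hat + rho_hat @ lags + e_star[t]
    return y_star

Usage: b, X, Y = ar_ols(y, k); resid = Y - X @ b; then repeatedly draw y_star = ar_bootstrap_sample(y, k, b[0], b[1:], resid, rng) and re-estimate on each bootstrap series.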

Method 2: Block Resampling

1. Divide the sample into T =m blocks of length m:

2. Resample complete blocks. For each simulated sample, draw T =m blocks.

3. Paste the blocks together to create the bootstrap time-series Yt :

4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model-
misspeci cation.

5. The results may be sensitive to the block length, and the way that the data are partitioned
into blocks.

6. May not work well in small samples.

9.9 Trend Stationarity

$$Y_t = \mu_0 + \mu_1 t + S_t \qquad (9.2)$$
$$S_t = \rho_1 S_{t-1} + \rho_2 S_{t-2} + \cdots + \rho_k S_{t-k} + e_t, \qquad (9.3)$$
or
$$Y_t = \alpha_0 + \alpha_1 t + \rho_1 Y_{t-1} + \rho_2 Y_{t-2} + \cdots + \rho_k Y_{t-k} + e_t. \qquad (9.4)$$
There are two essentially equivalent ways to estimate the autoregressive parameters $(\rho_1, \dots, \rho_k)$:

You can estimate (9.4) by OLS.

You can estimate (9.2)-(9.3) sequentially by OLS. That is, first estimate (9.2), get the residual $\hat{S}_t$, and then perform regression (9.3) replacing $S_t$ with $\hat{S}_t$. This procedure is sometimes called Detrending.

The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects

There are three popular methods to deal with seasonal data.

Include dummy variables for each season. This presumes that "seasonality" does not change over the sample.

Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.

First apply a seasonal differencing operator. If $s$ is the number of seasons (typically $s = 4$ or $s = 12$),
$$\Delta_s Y_t = Y_t - Y_{t-s},$$
or the season-to-season change. The series $\Delta_s Y_t$ is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.

9.10 Testing for Omitted Serial Correlation


For simplicity, let the null hypothesis be an AR(1):
$$Y_t = \alpha + \rho Y_{t-1} + u_t. \qquad (9.5)$$
We are interested in the question if the error $u_t$ is serially correlated. We model this as an AR(1):
$$u_t = \theta u_{t-1} + e_t \qquad (9.6)$$
with $e_t$ an MDS. The hypothesis of no omitted serial correlation is
$$H_0 : \theta = 0$$
$$H_1 : \theta \neq 0.$$
We want to test $H_0$ against $H_1$.
To combine (9.5) and (9.6), we take (9.5) and lag the equation once:
$$Y_{t-1} = \alpha + \rho Y_{t-2} + u_{t-1}.$$
We then multiply this by $\theta$ and subtract from (9.5), to find
$$Y_t - \theta Y_{t-1} = \alpha - \theta\alpha + \rho Y_{t-1} - \theta\rho Y_{t-2} + u_t - \theta u_{t-1},$$
or
$$Y_t = \alpha(1 - \theta) + (\rho + \theta) Y_{t-1} - \theta\rho Y_{t-2} + e_t = \text{AR(2)}.$$
Thus under $H_0$, $Y_t$ is an AR(1), and under $H_1$ it is an AR(2). $H_0$ may be expressed as the restriction that the coefficient on $Y_{t-2}$ is zero.
An appropriate test of $H_0$ against $H_1$ is therefore a Wald test that the coefficient on $Y_{t-2}$ is zero. (A simple exclusion test.)
In general, if the null hypothesis is that $Y_t$ is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative $Y_t$ is an AR(k+m), and this is equivalent to the restriction that the coefficients on $Y_{t-k-1}, \dots, Y_{t-k-m}$ are jointly zero. An appropriate test is the Wald test of this restriction.

9.11 Model Selection
What is the appropriate choice of k in practice? This is a problem of model selection.
One approach to model selection is to choose k based on a Wald test.
Another is to minimize the AIC or BIC information criterion, e.g.
$$AIC(k) = \log \hat{\sigma}^2(k) + \frac{2k}{T},$$
where $\hat{\sigma}^2(k)$ is the estimated residual variance from an AR(k).
One ambiguity in defining the AIC criterion is that the sample available for estimation changes as k changes. (If you increase k, you need more initial conditions.) This can induce strange behavior in the AIC. The best remedy is to fix an upper value $\bar{k}$, and then reserve the first $\bar{k}$ observations as initial conditions, and then estimate the models AR(1), AR(2), ..., AR($\bar{k}$) on this (unified) sample.
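A sketch of this procedure: reserve the first $\bar{k}$ observations, estimate every AR(k) on the same remaining sample, and pick the k with the smallest AIC (here the common effective sample size is used in the penalty; this is an implementation choice, not prescribed by the text):

import numpy as np

def select_ar_order_aic(y, k_max):
    y = np.asarray(y, dtype=float)
    T_eff = len(y) - k_max                     # common estimation sample size
    aic = {}
    for k in range(1, k_max + 1):
        Y = y[k_max:]                          # same dependent observations for every k
        X = np.column_stack([np.ones(T_eff)] +
                            [y[k_max - j:len(y) - j] for j in range(1, k + 1)])
        b, *_ = np.linalg.lstsq(X, Y, rcond=None)
        sigma2 = np.mean((Y - X @ b) ** 2)     # estimated residual variance
        aic[k] = np.log(sigma2) + 2 * k / T_eff
    return min(aic, key=aic.get), aic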

9.12 Autoregressive Unit Roots


The AR(k) model is
$$\rho(L) Y_t = \mu + e_t$$
$$\rho(L) = 1 - \rho_1 L - \cdots - \rho_k L^k.$$

As we discussed before, $Y_t$ has a unit root when $\rho(1) = 0$, or
$$\rho_1 + \rho_2 + \cdots + \rho_k = 1.$$

In this case, $Y_t$ is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.
A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
$$\Delta Y_t = \mu + \alpha_0 Y_{t-1} + \alpha_1 \Delta Y_{t-1} + \cdots + \alpha_{k-1} \Delta Y_{t-(k-1)} + e_t. \qquad (9.7)$$

These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter $\alpha_0$ summarizes the information about the unit root, since $\rho(1) = -\alpha_0$. To see this, observe that the lag polynomial for the $Y_t$ computed from (9.7) is
$$(1 - L) - \alpha_0 L - \alpha_1\left(L - L^2\right) - \cdots - \alpha_{k-1}\left(L^{k-1} - L^k\right).$$

But this must equal $\rho(L)$, as the models are equivalent. Thus
$$\rho(1) = (1 - 1) - \alpha_0 - \alpha_1(1 - 1) - \cdots - \alpha_{k-1}(1 - 1) = -\alpha_0.$$

Hence, the hypothesis of a unit root in $Y_t$ can be stated as
$$H_0 : \alpha_0 = 0.$$

Note that the model is stationary if $\alpha_0 < 0$. So the natural alternative is
$$H_1 : \alpha_0 < 0.$$
Under $H_0$, the model for $Y_t$ is
$$\Delta Y_t = \mu + \alpha_1 \Delta Y_{t-1} + \cdots + \alpha_{k-1} \Delta Y_{t-(k-1)} + e_t,$$
which is an AR(k-1) in the first-difference $\Delta Y_t$. Thus if $Y_t$ has a (single) unit root, then $\Delta Y_t$ is a stationary AR process. Because of this property, we say that if $Y_t$ is non-stationary but $\Delta^d Y_t$ is stationary, then $Y_t$ is "integrated of order d", or I(d). Thus a time series with a unit root is I(1).
Since $\alpha_0$ is the parameter of a linear regression, the natural test statistic is the t-statistic for $H_0$ from OLS estimation of (9.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.
It would seem natural to assess the significance of the ADF statistic using the normal table. However, under $H_0$, $Y_t$ is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.

Theorem 9.12.1 (Dickey-Fuller Theorem). Assume $\alpha_0 = 0$. As $T \to \infty$,
$$T\hat{\alpha}_0 \to_d \left(1 - \alpha_1 - \alpha_2 - \cdots - \alpha_{k-1}\right) DF_\alpha$$
$$ADF = \frac{\hat{\alpha}_0}{s(\hat{\alpha}_0)} \to DF_t.$$

The limit distributions $DF_\alpha$ and $DF_t$ are non-normal. They are skewed to the left, and have negative means.
The first result states that $\hat{\alpha}_0$ converges to its true value (of zero) at rate $T$, rather than the conventional rate of $T^{1/2}$. This is called a "super-consistent" rate of convergence.
The second result states that the t-statistic for $\hat{\alpha}_0$ converges to a limit distribution which is non-normal, but does not depend on the parameters $\alpha$. This distribution has been extensively tabulated, and may be used for testing the hypothesis $H_0$. Note: The standard error $s(\hat{\alpha}_0)$ is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.
Since the alternative hypothesis is one-sided, the ADF test rejects $H_0$ in favor of $H_1$ when $ADF < c$, where $c$ is the critical value from the ADF table. If the test rejects $H_0$, this means that the evidence points to $Y_t$ being stationary. If the test does not reject $H_0$, a common conclusion is that the data suggest that $Y_t$ is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
$$\Delta Y_t = \mu_1 + \mu_2 t + \alpha_0 Y_{t-1} + \alpha_1 \Delta Y_{t-1} + \cdots + \alpha_{k-1} \Delta Y_{t-(k-1)} + e_t. \qquad (9.8)$$

This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, Stock Prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe these data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for $\alpha_0$ from OLS estimation of (9.8).
If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.
Most texts also include critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice, where intercepts are always included.
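A sketch of the ADF regression (9.7) with an intercept: regress $\Delta Y_t$ on $1$, $Y_{t-1}$, and $k-1$ lagged differences, and form the t-ratio for $\alpha_0$. The statistic must be compared with Dickey-Fuller critical values (for example those used by a packaged routine such as statsmodels' adfuller), not the normal table:

import numpy as np

def adf_stat(y, k):
    # Dickey-Fuller regression: dY_t = mu + a0*Y_{t-1} + a1*dY_{t-1} + ... + a_{k-1}*dY_{t-(k-1)} + e_t
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack([np.ones(T - k + 1), y[k - 1:len(y) - 1]] +
                        [dy[k - 1 - j:T - j] for j in range(1, k)])
    Y = dy[k - 1:]
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ b
    s2 = e @ e / (len(Y) - X.shape[1])                  # conventional variance estimate
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])     # standard error of a0_hat
    return b[1] / se                                    # the ADF t-ratio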

Chapter 10

Multivariate Time Series

A multivariate time series $Y_t$ is an $m \times 1$ vector process. Let $I_{t-1} = (Y_{t-1}, Y_{t-2}, \dots)$ be all lagged information at time $t$. The typical goal is to find the conditional expectation $E(Y_t \mid I_{t-1})$. Note that since $Y_t$ is a vector, this conditional expectation is also a vector.

10.1 Vector Autoregressions (VARs)


A VAR model specifies that the conditional mean is a function of only a finite number of lags:
$$E(Y_t \mid I_{t-1}) = E(Y_t \mid Y_{t-1}, \dots, Y_{t-k}).$$

A linear VAR specifies that this conditional mean is linear in the arguments:
$$E(Y_t \mid Y_{t-1}, \dots, Y_{t-k}) = A_0 + A_1 Y_{t-1} + A_2 Y_{t-2} + \cdots + A_k Y_{t-k}.$$

Observe that $A_0$ is $m \times 1$, and each of $A_1$ through $A_k$ are $m \times m$ matrices.

Defining the $m \times 1$ regression error
$$e_t = Y_t - E(Y_t \mid I_{t-1}),$$
we have the VAR model
$$Y_t = A_0 + A_1 Y_{t-1} + A_2 Y_{t-2} + \cdots + A_k Y_{t-k} + e_t$$
$$E(e_t \mid I_{t-1}) = 0.$$
Alternatively, defining the $(mk + 1) \times 1$ vector
$$x_t = \begin{pmatrix} 1 \\ Y_{t-1} \\ Y_{t-2} \\ \vdots \\ Y_{t-k} \end{pmatrix}$$
and the $m \times (mk + 1)$ matrix
$$A = \begin{pmatrix} A_0 & A_1 & A_2 & \cdots & A_k \end{pmatrix},$$
then
$$Y_t = A x_t + e_t.$$
The VAR model is a system of $m$ equations. One way to write this is to let $a_j'$ be the $j$th row of $A$. Then the VAR system can be written as the equations
$$Y_{jt} = a_j' x_t + e_{jt}.$$

Unrestricted VARs were introduced to econometrics by Sims (1980).

10.2 Estimation
Consider the moment conditions
$$E(x_t e_{jt}) = 0,$$
$j = 1, \dots, m$. These are implied by the VAR model, either as a regression, or as a linear projection.
The GMM estimator corresponding to these moment conditions is equation-by-equation OLS:
$$\hat{a}_j = (X'X)^{-1} X' Y_j.$$

An alternative way to compute this is as follows. Note that
$$\hat{a}_j' = Y_j' X (X'X)^{-1}.$$

And if we stack these to create the estimate $\hat{A}$, we find
$$\hat{A} = \begin{pmatrix} Y_1' \\ Y_2' \\ \vdots \\ Y_m' \end{pmatrix} X (X'X)^{-1} = Y' X (X'X)^{-1},$$
where
$$Y = \begin{pmatrix} Y_1 & Y_2 & \cdots & Y_m \end{pmatrix}$$
is the $T \times m$ matrix of the stacked $y_t'$.
This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
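A sketch of this computation for a VAR(k) stored as a $T \times m$ array with row $t$ equal to $Y_t'$ (the function name var_ols is my own):

import numpy as np

def var_ols(Ydata, k):
    # Returns A_hat (m x (mk+1)), the regressor matrix X, and the residuals.
    Ydata = np.asarray(Ydata, dtype=float)
    T, m = Ydata.shape
    Yt = Ydata[k:]                                                     # dependent rows, (T-k) x m
    X = np.column_stack([np.ones(T - k)] +
                        [Ydata[k - j:T - j] for j in range(1, k + 1)]) # [1, Y_{t-1}', ..., Y_{t-k}']
    A_hat = np.linalg.solve(X.T @ X, X.T @ Yt).T                       # rows of A_hat are a_j'
    resid = Yt - X @ A_hat.T
    return A_hat, X, resid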

10.3 Restricted VARs
The unrestricted VAR is a system of m equations, each with the same set of regressors. A restricted
VAR imposes restrictions on the system. For example, some regressors may be excluded from some
of the equations. Restrictions may be imposed on individual equations, or across equations. The
GMM framework gives a convenient method to impose such restrictions on estimation.

10.4 Single Equation from a VAR


Often, we are only interested in a single equation out of a VAR system. This takes the form
$$Y_{jt} = a_j' x_t + e_{jt},$$
and $x_t$ consists of lagged values of $Y_{jt}$ and the other $Y_{lt}$'s. In this case, it is convenient to re-define the variables. Let $y_t = Y_{jt}$, and let $Z_t$ be the other variables. Let $e_t = e_{jt}$ and $\beta = a_j$. Then the single equation takes the form
$$y_t = x_t'\beta + e_t, \qquad (10.1)$$
and
$$x_t = \left(1, Y_{t-1}, \dots, Y_{t-k}, Z_{t-1}', \dots, Z_{t-k}'\right)'.$$
This is just a conventional regression with time-series data.

10.5 Testing for Omitted Serial Correlation


Consider the problem of testing for omitted serial correlation in equation (10.1). Suppose that $e_t$ is an AR(1). Then
$$y_t = x_t'\beta + e_t$$
$$e_t = \theta e_{t-1} + u_t \qquad (10.2)$$
$$E(u_t \mid I_{t-1}) = 0.$$

Then the null and alternative are
$$H_0 : \theta = 0 \qquad H_1 : \theta \neq 0.$$

Take the equation $y_t = x_t'\beta + e_t$, and subtract off the equation once lagged multiplied by $\theta$, to get
$$y_t - \theta y_{t-1} = \left(x_t'\beta + e_t\right) - \theta\left(x_{t-1}'\beta + e_{t-1}\right) = x_t'\beta - \theta x_{t-1}'\beta + e_t - \theta e_{t-1},$$
or
$$y_t = \theta y_{t-1} + x_t'\beta + x_{t-1}'\gamma + u_t, \qquad (10.3)$$
which is a valid regression model.
So testing $H_0$ versus $H_1$ is equivalent to testing for the significance of adding $(y_{t-1}, x_{t-1})$ to the regression. This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted serial correlation is to test the significance of extra lagged values of the dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which once was very popular, and is still routinely reported by conventional regression packages. The DW test is appropriate only when the regression $y_t = x_t'\beta + e_t$ is not dynamic (has no lagged values on the RHS), and $e_t$ is iid normal. Otherwise it is invalid.
Another interesting fact is that (10.2) is a special case of (10.3), under the restriction $\gamma = -\beta\theta$. This restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (10.2) may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (10.2) is uncommon in recent applications.

10.6 Selection of Lag Length in a VAR

If you want a data-dependent rule to pick the lag length k in a VAR, you may either use a testing-based approach (using, for example, the Wald statistic), or an information criterion approach. The formulas for the AIC and BIC are
$$AIC(k) = \log \det \hat{\Omega}(k) + 2\frac{p}{T}$$
$$BIC(k) = \log \det \hat{\Omega}(k) + \frac{p\log(T)}{T}$$
$$\hat{\Omega}(k) = \frac{1}{T}\sum_{t=1}^{T} \hat{e}_t(k)\hat{e}_t(k)'$$
$$p = m(km + 1),$$
where $p$ is the number of parameters in the model, and $\hat{e}_t(k)$ is the OLS residual vector from the model with $k$ lags. The log determinant is the criterion from the multivariate normal likelihood.

10.7 Granger Causality


Partition the data vector into $(Y_t, Z_t)$. Define the two information sets
$$I_{1t} = (Y_t, Y_{t-1}, Y_{t-2}, \dots)$$
$$I_{2t} = (Y_t, Z_t, Y_{t-1}, Z_{t-1}, Y_{t-2}, Z_{t-2}, \dots).$$
The information set $I_{1t}$ is generated only by the history of $Y_t$, and the information set $I_{2t}$ is generated by both $Y_t$ and $Z_t$. The latter has more information.
We say that $Z_t$ does not Granger-cause $Y_t$ if
$$E(Y_t \mid I_{1,t-1}) = E(Y_t \mid I_{2,t-1}).$$
That is, conditional on information in lagged $Y_t$, lagged $Z_t$ does not help to forecast $Y_t$. If this condition does not hold, then we say that $Z_t$ Granger-causes $Y_t$.
The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If $Z_t$ is some sort of forecast of the future, such as a futures price, then $Z_t$ may help to forecast $Y_t$ even though it does not "cause" $Y_t$. This definition of causality was developed by Granger (1969) and Sims (1972).
In a linear VAR, the equation for $Y_t$ is
$$Y_t = \alpha + \rho_1 Y_{t-1} + \cdots + \rho_k Y_{t-k} + Z_{t-1}'\gamma_1 + \cdots + Z_{t-k}'\gamma_k + e_t.$$
In this equation, $Z_t$ does not Granger-cause $Y_t$ if and only if
$$H_0 : \gamma_1 = \gamma_2 = \cdots = \gamma_k = 0.$$
This may be tested using an exclusion (Wald) test.

This idea can be applied to blocks of variables. That is, $Y_t$ and/or $Z_t$ can be vectors. The hypothesis can be tested by using the appropriate multivariate Wald test.
If it is found that $Z_t$ does not Granger-cause $Y_t$, then we deduce that our time-series model of $E(Y_t \mid I_{t-1})$ does not require the use of $Z_t$. Note, however, that $Z_t$ may still be useful to explain other features of $Y_t$, such as the conditional variance.
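A sketch of the exclusion Wald test for scalar $Y_t$ and scalar $Z_t$: estimate the $Y_t$ equation with $k$ lags of both series and test that the coefficients on lagged $Z$ are jointly zero (a heteroskedasticity-robust covariance estimate is used here as an implementation choice):

import numpy as np
from scipy import stats

def granger_wald(y, z, k):
    # Regress y_t on 1, y_{t-1},...,y_{t-k}, z_{t-1},...,z_{t-k}; Wald test that the z-lags are zero.
    y = np.asarray(y, float); z = np.asarray(z, float)
    T = len(y)
    X = np.column_stack([np.ones(T - k)] +
                        [y[k - j:T - j] for j in range(1, k + 1)] +
                        [z[k - j:T - j] for j in range(1, k + 1)])
    Y = y[k:]
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ b
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = (X * e[:, None] ** 2).T @ X                 # sum over t of x_t x_t' e_t^2
    V = XtX_inv @ meat @ XtX_inv                       # robust covariance of b_hat
    sel = np.arange(1 + k, 1 + 2 * k)                  # positions of the z-lag coefficients
    W = b[sel] @ np.linalg.solve(V[np.ix_(sel, sel)], b[sel])
    return W, 1 - stats.chi2.cdf(W, k)                 # statistic and chi-squared(k) p-value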

10.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger (1987).

Definition 10.8.1 The $m \times 1$ series $Y_t$ is cointegrated if $Y_t$ is I(1) yet there exists $\beta$, $m \times r$, of rank $r$, such that $z_t = \beta' Y_t$ is I(0). The $r$ vectors in $\beta$ are called the cointegrating vectors.

If the series $Y_t$ is not cointegrated, then $r = 0$. If $r = m$, then $Y_t$ is I(0). For $0 < r < m$, $Y_t$ is I(1) and cointegrated.
In some cases, it may be believed that $\beta$ is known a priori. Often, $\beta = (1\ \,{-1})'$. For example, if $Y_t$ is a pair of interest rates, then $\beta = (1\ \,{-1})'$ specifies that the spread (the difference in returns) is stationary. If $Y_t = (\log(\text{Consumption})\ \ \log(\text{Income}))'$, then $\beta = (1\ \,{-1})'$ specifies that $\log(\text{Consumption}/\text{Income})$ is stationary.
In other cases, $\beta$ may not be known.
If $Y_t$ is cointegrated with a single cointegrating vector ($r = 1$), then it turns out that $\beta$ can be consistently estimated by an OLS regression of one component of $Y_t$ on the others. Thus write $Y_t = (Y_{1t}, Y_{2t})$ and $\beta = (\beta_1\ \beta_2)$ and normalize $\beta_1 = 1$. Then $\hat{\beta}_2 = (Y_2'Y_2)^{-1}Y_2'Y_1 \to_p \beta_2$. Furthermore this estimation is super-consistent: $T(\hat{\beta}_2 - \beta_2) \to_d \text{Limit}$, as first shown by Stock (1987). This is not, in general, a good method to estimate $\beta$, but it is useful in the construction of alternative estimators and tests.
We are often interested in testing the hypothesis of no cointegration:
$$H_0 : r = 0$$
$$H_1 : r > 0.$$
Suppose that $\beta$ is known, so $z_t = \beta' Y_t$ is known. Then under $H_0$, $z_t$ is I(1), yet under $H_1$, $z_t$ is I(0). Thus $H_0$ can be tested using a univariate ADF test on $z_t$.
When $\beta$ is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual $\hat{z}_t = \hat{\beta}' Y_t$, from OLS of $Y_{1t}$ on $Y_{2t}$. Their justification was Stock's result that $\hat{\beta}$ is super-consistent under $H_1$. Under $H_0$, however, $\hat{\beta}$ is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).
When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
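A sketch of the Engle-Granger two-step procedure for a pair of series, reusing the hypothetical adf_stat helper from Section 9.12; remember that the Phillips-Ouliaris critical values, not the ADF table, apply because $\beta$ is estimated:

import numpy as np

def engle_granger_stat(y1, y2, k):
    # Step 1: cointegrating regression Y_1t = a + b*Y_2t + z_t by OLS
    y1 = np.asarray(y1, float); y2 = np.asarray(y2, float)
    X = np.column_stack([np.ones(len(y2)), y2])
    coef, *_ = np.linalg.lstsq(X, y1, rcond=None)
    z_hat = y1 - X @ coef                     # estimated equilibrium error
    # Step 2: ADF-type t-statistic on the residual series
    return adf_stat(z_hat, k), coef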

10.9 Cointegrated VARs


We can write a VAR as
$$A(L) Y_t = e_t$$
$$A(L) = I - A_1 L - A_2 L^2 - \cdots - A_k L^k,$$
or alternatively as
$$\Delta Y_t = \Pi Y_{t-1} + D(L)\Delta Y_{t-1} + e_t$$
where
$$\Pi = -A(1) = -I + A_1 + A_2 + \cdots + A_k.$$

Theorem 10.9.1 (Granger Representation Theorem). $Y_t$ is cointegrated with $m \times r$ $\beta$ if and only if $\mathrm{rank}(\Pi) = r$ and $\Pi = \alpha\beta'$ where $\alpha$ is $m \times r$, $\mathrm{rank}(\alpha) = r$.

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as
$$\Delta Y_t = \alpha\beta' Y_{t-1} + D(L)\Delta Y_{t-1} + e_t$$
$$\Delta Y_t = \alpha z_{t-1} + D(L)\Delta Y_{t-1} + e_t.$$

If $\beta$ is known, this can be estimated by OLS of $\Delta Y_t$ on $z_{t-1}$ and the lags of $\Delta Y_t$.
If $\beta$ is unknown, then estimation is done by "reduced rank regression", which is least-squares subject to the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that $e_t$ is iid $N(0, \Omega)$.
One difficulty is that $\beta$ is not identified without normalization. When $r = 1$, we typically just normalize one element to equal unity. When $r > 1$, this does not work, and different authors have adopted different identification schemes.
In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for cointegration by testing the rank of $\Pi$. These tests are constructed as likelihood ratio (LR) tests. As they were discovered by Johansen (1988, 1991, 1995), they are typically called the "Johansen Max and Trace" tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.

Chapter 11

Limited Dependent Variables

A \limited dependent variable" Y is one which takes a \limited" set of values. The most common
cases are

Binary: $Y = \{0, 1\}$

Multinomial: $Y = \{0, 1, 2, \dots, k\}$

Integer: $Y = \{0, 1, 2, \dots\}$

Censored: $Y = \{x : x \geq 0\}$

The traditional approach to the estimation of limited dependent variable (LDV) models is
parametric maximum likelihood. A parametric model is constructed, allowing the construction of
the likelihood function. A more modern approach is semi-parametric, eliminating the dependence
on a parametric distributional assumption. We will discuss only the rst (parametric) approach,
due to time constraints. They still constitute the majority of LDV applications. If, however, you
were to write a thesis involving LDV estimation, you would be advised to consider employing a
semi-parametric estimation approach.
For the parametric approach, estimation is by MLE. A major practical issue is construction of
the likelihood function.

11.1 Binary Choice


The dependent variable $Y_i = \{0, 1\}$. This represents a Yes/No outcome. Given some regressors $x_i$, the goal is to describe $P(Y_i = 1 \mid x_i)$, as this is the full conditional distribution.
The linear probability model specifies that
$$P(Y_i = 1 \mid x_i) = x_i'\beta.$$
As $P(Y_i = 1 \mid x_i) = E(Y_i \mid x_i)$, this yields the regression $Y_i = x_i'\beta + e_i$, which can be estimated by OLS. However, the linear probability model does not impose the restriction that $0 \leq P(Y_i = 1 \mid x_i) \leq 1$. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.
The standard alternative is to use a function of the form
$$P(Y_i = 1 \mid x_i) = F\left(x_i'\beta\right)$$
where $F(\cdot)$ is a known CDF, typically assumed to be symmetric about zero, so that $F(z) = 1 - F(-z)$. The two standard choices for $F$ are

Logistic: $F(u) = (1 + e^{-u})^{-1}$.

Normal: $F(u) = \Phi(u)$.

If $F$ is logistic, we call this the logit model, and if $F$ is normal, we call this the probit model.
This model is identical to the latent variable model
$$Y_i^* = x_i'\beta + e_i$$
$$e_i \sim F(\cdot)$$
$$Y_i = \begin{cases} 1 & \text{if } Y_i^* > 0 \\ 0 & \text{otherwise} \end{cases}.$$
For then
$$P(Y_i = 1 \mid x_i) = P(Y_i^* > 0 \mid x_i) = P\left(x_i'\beta + e_i > 0 \mid x_i\right) = P\left(e_i > -x_i'\beta \mid x_i\right) = 1 - F\left(-x_i'\beta\right) = F\left(x_i'\beta\right).$$

Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if $Y$ is Bernoulli, such that $P(Y = 1) = p$ and $P(Y = 0) = 1 - p$, then we can write the density of $Y$ as
$$f(y) = p^y(1 - p)^{1-y}, \qquad y = 0, 1.$$
In the binary choice model, $Y_i$ is conditionally Bernoulli with $P(Y_i = 1 \mid x_i) = p_i = F(x_i'\beta)$. Thus the conditional density is
$$f(y_i \mid x_i) = p_i^{y_i}(1 - p_i)^{1-y_i} = F\left(x_i'\beta\right)^{y_i}\left(1 - F\left(x_i'\beta\right)\right)^{1-y_i}.$$
Hence the log-likelihood function is
$$\ell_n(\beta) = \sum_{i=1}^{n}\log f(y_i \mid x_i) = \sum_{i=1}^{n}\log\left[F\left(x_i'\beta\right)^{y_i}\left(1 - F\left(x_i'\beta\right)\right)^{1-y_i}\right]$$
$$= \sum_{i=1}^{n}\left[y_i\log F\left(x_i'\beta\right) + (1 - y_i)\log\left(1 - F\left(x_i'\beta\right)\right)\right] = \sum_{y_i=1}\log F\left(x_i'\beta\right) + \sum_{y_i=0}\log\left(1 - F\left(x_i'\beta\right)\right).$$
The MLE $\hat{\beta}$ is the value of $\beta$ which maximizes $\ell_n(\beta)$. Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
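A sketch of estimation by direct numerical maximization of the log-likelihood, using scipy (binary_mle is my own helper name); the probit variant simply replaces the logistic CDF with $\Phi$:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def binary_mle(y, X, link="logit"):
    # Maximize sum_i [ y_i log F(x_i'b) + (1 - y_i) log(1 - F(x_i'b)) ]
    F = (lambda u: 1.0 / (1.0 + np.exp(-u))) if link == "logit" else norm.cdf
    def neg_loglik(b):
        p = np.clip(F(X @ b), 1e-10, 1 - 1e-10)     # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    res = minimize(neg_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
    return res.x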

11.2 Count Data


If $Y = \{0, 1, 2, \dots\}$, a typical approach is to employ Poisson regression. This model specifies that
$$P(Y_i = k \mid x_i) = \frac{\exp(-\lambda_i)\lambda_i^k}{k!}, \qquad k = 0, 1, 2, \dots$$
$$\lambda_i = \exp(x_i'\beta).$$
The conditional density is the Poisson with parameter $\lambda_i$. The functional form for $\lambda_i$ has been picked to ensure that $\lambda_i > 0$.
The log-likelihood function is
$$\ell_n(\beta) = \sum_{i=1}^{n}\log f(y_i \mid x_i) = \sum_{i=1}^{n}\left(-\exp(x_i'\beta) + y_i x_i'\beta - \log(y_i!)\right).$$
The MLE is the value $\hat{\beta}$ which maximizes $\ell_n(\beta)$.
Since
$$E(Y_i \mid x_i) = \lambda_i = \exp(x_i'\beta)$$
is the conditional mean, this motivates the label Poisson "regression."
Also observe that the model implies that
$$\mathrm{Var}(Y_i \mid x_i) = \lambda_i = \exp(x_i'\beta),$$
so the model imposes the restriction that the conditional mean and variance of $Y_i$ are the same. This may be considered restrictive. A generalization is the negative binomial.
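A sketch of the Poisson MLE; the $\log(y_i!)$ term does not involve $\beta$ and is dropped from the objective:

import numpy as np
from scipy.optimize import minimize

def poisson_mle(y, X):
    def neg_loglik(b):
        xb = X @ b
        return -np.sum(-np.exp(xb) + y * xb)     # log(y_i!) omitted: constant in beta
    res = minimize(neg_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
    return res.x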

11.3 Censored Data
The idea of "censoring" is that some data above or below a threshold are mis-reported at the threshold. Thus the model is that there is some latent process $y_i^*$ with unbounded support, but we observe only
$$y_i = \begin{cases} y_i^* & \text{if } y_i^* \geq 0 \\ 0 & \text{if } y_i^* < 0 \end{cases}. \qquad (11.1)$$
(This is written for the case of the threshold being zero; any known value can substitute.) The observed data $y_i$ therefore come from a mixed continuous/discrete distribution.
Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher) of data at the boundary of the sample support. The censoring process may be explicit in data collection, or it may be a by-product of economic constraints.
An example of censoring in data collection is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.
The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model
$$y_i^* = x_i'\beta + e_i$$
$$e_i \sim \text{iid } N(0, \sigma^2)$$
with the observed variable $y_i$ generated by the censoring equation (11.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.
The naive approach to estimate $\beta$ is to regress $y_i$ on $x_i$. This does not work because regression estimates $E(y_i \mid x_i)$, not $E(y_i^* \mid x_i) = x_i'\beta$, and the latter is of interest. Thus OLS will be biased for the parameter of interest $\beta$.
[Note: it is still possible to estimate $E(y_i \mid x_i)$ by LS techniques. The Tobit framework postulates that this is not inherently interesting, and that the parameter of interest, $\beta$, is defined by an alternative statistical structure.]
Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is
$$P(y_i = 0 \mid x_i) = P(y_i^* < 0 \mid x_i) = P\left(x_i'\beta + e_i < 0 \mid x_i\right) = P\left(\frac{e_i}{\sigma} < -\frac{x_i'\beta}{\sigma}\,\Big|\, x_i\right) = \Phi\left(-\frac{x_i'\beta}{\sigma}\right).$$
The conditional distribution function above zero is Gaussian:
$$P(y_i = y \mid x_i) = \int_0^y \frac{1}{\sigma}\phi\left(\frac{z - x_i'\beta}{\sigma}\right)dz, \qquad y > 0.$$
Therefore, the density function can be written as
$$f(y \mid x_i) = \Phi\left(-\frac{x_i'\beta}{\sigma}\right)^{1(y=0)}\left[\frac{1}{\sigma}\phi\left(\frac{y - x_i'\beta}{\sigma}\right)\right]^{1(y>0)},$$
where $1(\cdot)$ is the indicator function.
Hence the log-likelihood is a mixture of the probit and the normal:
$$\ell_n(\beta) = \sum_{i=1}^{n}\log f(y_i \mid x_i) = \sum_{y_i=0}\log\Phi\left(-\frac{x_i'\beta}{\sigma}\right) + \sum_{y_i>0}\log\left[\frac{1}{\sigma}\phi\left(\frac{y_i - x_i'\beta}{\sigma}\right)\right].$$
The MLE is the value $\hat{\beta}$ which maximizes $\ell_n(\beta)$.
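A sketch of the Tobit MLE, maximizing over $(\beta, \log\sigma)$ so the variance stays positive (the parameterization in terms of $\log\sigma$ is an implementation choice):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_mle(y, X):
    # Censored-at-zero model: log Phi(-x'b/s) if y = 0, log[(1/s) phi((y - x'b)/s)] if y > 0
    def neg_loglik(theta):
        b, s = theta[:-1], np.exp(theta[-1])
        xb = X @ b
        ll_cens = norm.logcdf(-xb[y == 0] / s)
        ll_pos = norm.logpdf((y[y > 0] - xb[y > 0]) / s) - np.log(s)
        return -(ll_cens.sum() + ll_pos.sum())
    res = minimize(neg_loglik, np.zeros(X.shape[1] + 1), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])           # beta_hat, sigma_hat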

11.4 Sample Selection


The problem of sample selection arises when the sample is a non-random selection of potential observations. This occurs when the observed data are systematically different from the population of interest. For example, if you ask for volunteers for an experiment, and wish to extrapolate the effects of the experiment onto a general population, you should worry that the people who volunteer may be systematically different from the general population. This has great relevance for the evaluation of anti-poverty and job-training programs, where the goal is to assess the effect of "training" on the general population, not just on the volunteers.
A simple sample selection model can be written as the latent model
$$y_i = x_i'\beta + e_{1i}$$
$$T_i = 1\left(z_i'\gamma + e_{0i} > 0\right)$$
where $1(\cdot)$ is the indicator function. The dependent variable $y_i$ is observed if (and only if) $T_i = 1$. Otherwise it is unobserved.
For example, $y_i$ could be a wage, which can be observed only if a person is employed. The equation for $T_i$ is an equation specifying the probability that the person is employed.
The model is often completed by specifying that the errors are jointly normal
$$\begin{pmatrix} e_{0i} \\ e_{1i} \end{pmatrix} \sim N\left(0, \begin{pmatrix} 1 & \rho \\ \rho & \sigma^2 \end{pmatrix}\right).$$
It is presumed that we observe $\{x_i, z_i, T_i\}$ for all observations.
Under the normality assumption,
$$e_{1i} = \rho e_{0i} + v_i,$$
where $v_i$ is independent of $e_{0i} \sim N(0, 1)$. A useful fact about the standard normal distribution is that
$$E(e_{0i} \mid e_{0i} > -x) = \lambda(x) = \frac{\phi(x)}{\Phi(x)},$$
and the function $\lambda(x)$ is called the inverse Mills ratio.
The naive estimator of $\beta$ is OLS regression of $y_i$ on $x_i$ for those observations for which $y_i$ is available. The problem is that this is equivalent to conditioning on the event $\{T_i = 1\}$. However,
$$E(e_{1i} \mid T_i = 1, Z_i) = E\left(e_{1i} \mid \{e_{0i} > -z_i'\gamma\}, Z_i\right) = \rho E\left(e_{0i} \mid \{e_{0i} > -z_i'\gamma\}, Z_i\right) + E\left(v_i \mid \{e_{0i} > -z_i'\gamma\}, Z_i\right) = \rho\lambda\left(z_i'\gamma\right),$$
which is non-zero. Thus
$$e_{1i} = \rho\lambda\left(z_i'\gamma\right) + u_i,$$
where
$$E(u_i \mid T_i = 1, Z_i) = 0.$$
Hence
$$y_i = x_i'\beta + \rho\lambda\left(z_i'\gamma\right) + u_i \qquad (11.2)$$
is a valid regression equation for the observations for which $T_i = 1$.
Heckman (1979) observed that we could consistently estimate $\beta$ and $\rho$ from this equation, if $\gamma$ were known. It is unknown, but it can also be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows.

Estimate $\hat{\gamma}$ from a Probit, using regressors $z_i$. The binary dependent variable is $T_i$.

Estimate $\left(\hat{\beta}, \hat{\rho}\right)$ from OLS of $y_i$ on $x_i$ and $\lambda(z_i'\hat{\gamma})$.

The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.

The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some modern econometric research is exploring how to relax the normality assumption.
The estimator can also work quite poorly if $\lambda(z_i'\hat{\gamma})$ does not have much in-sample variation. This can happen if the Probit equation does not "explain" much about the selection choice. Another potential problem is that if $z_i = x_i$, then $\lambda(z_i'\hat{\gamma})$ can be highly collinear with $x_i$, so the second-step OLS estimator will not be able to precisely estimate $\rho$. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in $z_i$ which is not in $x_i$. If this is valid, it will ensure that $\lambda(z_i'\hat{\gamma})$ is not collinear with $x_i$, and hence improve the second-stage estimator's precision.
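A sketch of the two-step calculation, reusing the hypothetical binary_mle helper from Section 11.1 for the probit step; the reported OLS standard errors are not corrected for the first step:

import numpy as np
from scipy.stats import norm

def heckit_two_step(y, X, Z, T):
    # Step 1: probit of the selection indicator T on Z
    gamma_hat = binary_mle(T, Z, link="probit")
    lam = norm.pdf(Z @ gamma_hat) / norm.cdf(Z @ gamma_hat)   # inverse Mills ratio lambda(z'gamma_hat)
    # Step 2: OLS of y on (X, lambda) using only the selected observations
    sel = (T == 1)
    W = np.column_stack([X[sel], lam[sel]])
    coef, *_ = np.linalg.lstsq(W, y[sel], rcond=None)
    return coef[:-1], coef[-1]        # beta_hat and the coefficient on the inverse Mills ratio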

Chapter 12

Panel Data

A panel is a set of observations on individuals, collected over time. An observation is the pair
fyit ; xit g; where the i subscript denotes the individual, and the t subscript denotes time. A panel
may be balanced:
$$\{y_{it}, x_{it}\} : t = 1, \dots, T; \quad i = 1, \dots, n,$$
or unbalanced:
$$\{y_{it}, x_{it}\} : \text{for } i = 1, \dots, n, \quad t = \underline{t}_i, \dots, \overline{t}_i.$$

12.1 Individual-Effects Model


The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression
$$y_{it} = x_{it}'\beta + u_i + e_{it}.$$
The typical maintained assumptions are that the individuals i are mutually independent, that ui
and eit are independent, that eit is iid across individuals and time, and that eit is uncorrelated
with xit :
OLS of yit on xit is called pooled estimation. It is consistent if

E (xit ui ) = 0 (12.1)

If this condition fails, then OLS is inconsistent. (12.1) fails if the individual-speci c unobserved
e ect ui is correlated with the observed explanatory variables xit : This is often believed to be
plausible if ui is an omitted variable.
If (12.1) is true, however, OLS can be improved upon via a GLS technique. In either event,
OLS appears a poor estimation choice.
Condition (12.1) is called the random e ects hypothesis. It is a strong assumption, and most
applied researchers try to avoid its use.

12.2 Fixed Effects
This is the most common technique for estimation of non-dynamic linear panel regressions.
The motivation is to allow $u_i$ to be arbitrary, and have arbitrary correlation with $x_{it}$. The goal is to eliminate $u_i$ from the estimator, and thus achieve invariance.
There are several derivations of the estimator.
First, let
$$d_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{else} \end{cases},$$
and
$$d_i = \begin{pmatrix} d_{i1} \\ \vdots \\ d_{in} \end{pmatrix},$$
an $n \times 1$ dummy vector with a "1" in the $i$'th place. Let
$$u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}.$$
Then note that
$$u_i = d_i'u,$$
and
$$y_{it} = x_{it}'\beta + d_i'u + e_{it}. \qquad (12.2)$$
Observe that
$$E(e_{it} \mid x_{it}, d_i) = 0,$$
so (12.2) is a valid regression, with $d_i$ as a regressor along with $x_{it}$.
OLS on (12.2) yields estimator $(\hat{\beta}, \hat{u})$. Conventional inference applies.
Observe that:

This is generally consistent.

If $x_{it}$ contains an intercept, it will be collinear with $d_i$, so the intercept is typically omitted from $x_{it}$.

Any regressor in $x_{it}$ which is constant over time for all individuals (e.g., their gender) will be collinear with $d_i$, so will have to be omitted.

There are $n + k$ regression parameters, which is quite large as typically $n$ is very large.
Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of $\beta$ proceeds by the FWL theorem. Stacking the observations together:
$$Y = X\beta + Du + e,$$
then by the FWL theorem,
$$\hat{\beta} = \left(X'(I - P_D)X\right)^{-1}\left(X'(I - P_D)Y\right) = \left(X^{*\prime}X^*\right)^{-1}\left(X^{*\prime}Y^*\right),$$
where
$$Y^* = Y - D(D'D)^{-1}D'Y$$
$$X^* = X - D(D'D)^{-1}D'X.$$
Since the regression of $y_{it}$ on $d_i$ is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean $\overline{y}_i$, and the residual is the demeaned value
$$y_{it}^* = y_{it} - \overline{y}_i.$$
The fixed effects estimator $\hat{\beta}$ is OLS of $y_{it}^*$ on $x_{it}^*$, the dependent variable and regressors in deviation-from-mean form.
Another derivation of the estimator is to take the equation
$$y_{it} = x_{it}'\beta + u_i + e_{it},$$
and then take individual-specific means by taking the average for the $i$'th individual:
$$\frac{1}{T_i}\sum_{t=\underline{t}_i}^{\overline{t}_i} y_{it} = \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\overline{t}_i} x_{it}'\beta + u_i + \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\overline{t}_i} e_{it}$$
or
$$\overline{y}_i = \overline{x}_i'\beta + u_i + \overline{e}_i.$$
Subtracting, we find
$$y_{it}^* = x_{it}^{*\prime}\beta + e_{it}^*,$$
which is free of the individual effect $u_i$.
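A sketch of the within (fixed effects) estimator using this demeaning representation; ids is an array of individual identifiers, and X should exclude the intercept and any time-invariant regressors, as discussed above:

import numpy as np

def within_estimator(y, X, ids):
    # Subtract individual-specific means from y and each column of X, then run OLS.
    y = np.asarray(y, float); X = np.asarray(X, float)
    y_dm = y.copy(); X_dm = X.copy()
    for i in np.unique(ids):
        rows = (ids == i)
        y_dm[rows] -= y[rows].mean()
        X_dm[rows] -= X[rows].mean(axis=0)
    beta_hat, *_ = np.linalg.lstsq(X_dm, y_dm, rcond=None)
    return beta_hat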

12.3 Dynamic Panel Regression


A dynamic panel regression has a lagged dependent variable:
$$y_{it} = \alpha y_{it-1} + x_{it}'\beta + u_i + e_{it}. \qquad (12.3)$$
This is a model suitable for studying dynamic behavior of individual agents.
Unfortunately, the fixed effects estimator is inconsistent, at least if $T$ is held finite as $n \to \infty$. This is because the sample mean of $y_{it-1}$ is correlated with that of $e_{it}$.
The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (12.3) eliminates the individual-specific effect:
$$\Delta y_{it} = \alpha\Delta y_{it-1} + \Delta x_{it}'\beta + \Delta e_{it}. \qquad (12.4)$$
However, if $e_{it}$ is iid, then $\Delta e_{it}$ will be correlated with $\Delta y_{it-1}$:
$$E(\Delta y_{it-1}\Delta e_{it}) = E\left((y_{it-1} - y_{it-2})(e_{it} - e_{it-1})\right) = -E(y_{it-1}e_{it-1}) = -\sigma_e^2.$$
So OLS on (12.4) will be inconsistent.
But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as $y_{t-2}$ is uncorrelated with $\Delta e_{it}$. Thus values of $y_{it-k}$, $k \geq 2$, are valid instruments.
Hence a valid estimator of $\alpha$ and $\beta$ is to estimate (12.4) by IV using $y_{t-2}$ as an instrument for $\Delta y_{t-1}$ (which is just identified). Alternatively, GMM using $y_{t-2}$ and $y_{t-3}$ as instruments (which is overidentified, but loses a time-series observation).
A more sophisticated GMM estimator recognizes that for time-periods later in the sample, there are more instruments available, so the instrument list should be different for each equation. This is conveniently organized by the GMM principle, as this enables the moments from the different time-periods to be stacked together to create a list of all the moment conditions. A simple application of GMM yields the parameter estimates and standard errors.
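A minimal sketch of the just-identified IV estimator of (12.4) with $y_{it-2}$ as the instrument for $\Delta y_{it-1}$ (an Anderson-Hsiao-type estimator); the panel is assumed balanced and stored as an $n \times T$ array, and the extra regressors $\Delta x_{it}$ are omitted for brevity:

import numpy as np

def dynamic_panel_iv(Y):
    # Estimates alpha in dy_it = alpha * dy_{it-1} + de_it by IV with instrument y_{it-2}.
    Y = np.asarray(Y, float)
    dY = np.diff(Y, axis=1)              # first differences, shape n x (T-1)
    dep = dY[:, 2:].ravel()              # dy_{it}
    lag = dY[:, 1:-1].ravel()            # dy_{it-1}
    inst = Y[:, 1:-2].ravel()            # y_{it-2}
    return (inst @ dep) / (inst @ lag)   # just-identified IV: (Z'X)^{-1} Z'Y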

Chapter 13

Nonparametrics

13.1 Kernel Density Estimation


Let $X$ be a random variable with continuous distribution $F(x)$ and density $f(x) = \frac{d}{dx}F(x)$. The goal is to estimate $f(x)$ from a random sample $\{X_1, \dots, X_n\}$. While $F(x)$ can be estimated by the EDF $\hat{F}(x) = n^{-1}\sum_{i=1}^{n} 1(X_i \leq x)$, we cannot define $\frac{d}{dx}\hat{F}(x)$ since $\hat{F}(x)$ is a step function. The standard nonparametric method to estimate $f(x)$ is based on smoothing using a kernel.
While we are typically interested in estimating the entire function $f(x)$, we can simply focus on the problem where $x$ is a specific fixed number, and then see how the method generalizes to estimating the entire function.

Definition 1 $K(u)$ is a second-order kernel function if it is a symmetric zero-mean density function.

Three common choices for kernels include the Gaussian
$$K(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right),$$
the Epanechnikov
$$K(x) = \begin{cases} \frac{3}{4}\left(1 - x^2\right), & |x| \leq 1 \\ 0, & |x| > 1 \end{cases}$$
and the Biweight or Quartic
$$K(x) = \begin{cases} \frac{15}{16}\left(1 - x^2\right)^2, & |x| \leq 1 \\ 0, & |x| > 1. \end{cases}$$
In practice, the choice between these three rarely makes a meaningful difference in the estimates.
The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth $h > 0$. Let
$$K_h(u) = \frac{1}{h}K\left(\frac{u}{h}\right)$$
be the kernel $K$ rescaled by the bandwidth $h$. The kernel density estimator of $f(x)$ is
$$\hat{f}(x) = \frac{1}{n}\sum_{i=1}^{n} K_h(X_i - x).$$
This estimator is the average of a set of weights. If a large number of the observations $X_i$ are near $x$, then the weights are relatively large and $\hat{f}(x)$ is larger. Conversely, if only a few $X_i$ are near $x$, then the weights are small and $\hat{f}(x)$ is small. The bandwidth $h$ controls the meaning of "near".
Interestingly, $\hat{f}(x)$ is a valid density. That is, $\hat{f}(x) \geq 0$ for all $x$, and
$$\int_{-\infty}^{\infty}\hat{f}(x)dx = \int_{-\infty}^{\infty}\frac{1}{n}\sum_{i=1}^{n} K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty} K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty} K(u)du = 1,$$
where the second-to-last equality makes the change-of-variables $u = (X_i - x)/h$.
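A direct numpy sketch of $\hat{f}(x)$ with a Gaussian kernel, evaluated on a grid of points (bandwidth selection is discussed in the next section):

import numpy as np

def kde(x_grid, data, h):
    # f_hat(x) = (1/n) sum_i K_h(X_i - x), with K the standard normal density
    x_grid = np.asarray(x_grid, float); data = np.asarray(data, float)
    u = (data[None, :] - x_grid[:, None]) / h            # (grid points) x (observations)
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h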


We can also calculate the moments of the density $\hat{f}(x)$. The mean is
$$\int_{-\infty}^{\infty} x\hat{f}(x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty} xK_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}(X_i + uh)K(u)du$$
$$= \frac{1}{n}\sum_{i=1}^{n} X_i\int_{-\infty}^{\infty} K(u)du + \frac{1}{n}\sum_{i=1}^{n} h\int_{-\infty}^{\infty} uK(u)du = \frac{1}{n}\sum_{i=1}^{n} X_i,$$
the sample mean of the $X_i$, where the second-to-last equality used the change-of-variables $u = (X_i - x)/h$, which has Jacobian $h$.
The second moment of the estimated density is
$$\int_{-\infty}^{\infty} x^2\hat{f}(x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty} x^2 K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}(X_i + uh)^2 K(u)du$$
$$= \frac{1}{n}\sum_{i=1}^{n} X_i^2 + \frac{2}{n}\sum_{i=1}^{n} X_i h\int_{-\infty}^{\infty} uK(u)du + \frac{1}{n}\sum_{i=1}^{n} h^2\int_{-\infty}^{\infty} u^2 K(u)du = \frac{1}{n}\sum_{i=1}^{n} X_i^2 + h^2\sigma_K^2,$$
where
$$\sigma_K^2 = \int_{-\infty}^{\infty} x^2 K(x)dx$$
is the variance of the kernel. It follows that the variance of the density $\hat{f}(x)$ is
$$\int_{-\infty}^{\infty} x^2\hat{f}(x)dx - \left(\int_{-\infty}^{\infty} x\hat{f}(x)dx\right)^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 + h^2\sigma_K^2 - \left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)^2 = \hat{\sigma}^2 + h^2\sigma_K^2.$$
Thus the variance of the estimated density is inflated by the factor $h^2\sigma_K^2$ relative to the sample moment.

13.2 Asymptotic MSE for Kernel Estimates


For fixed $x$ and bandwidth $h$ observe that
$$EK_h(X - x) = \int_{-\infty}^{\infty} K_h(z - x)f(z)dz = \int_{-\infty}^{\infty} K_h(uh)f(x + hu)h\,du = \int_{-\infty}^{\infty} K(u)f(x + hu)du.$$
The second equality uses the change-of-variables $u = (z - x)/h$. The last expression shows that the expected value is an average of $f(z)$ locally about $x$.
This integral (typically) is not analytically solvable, so we approximate it using a second-order Taylor expansion of $f(x + hu)$ in the argument $hu$ about $hu = 0$, which is valid as $h \to 0$. Thus
$$f(x + hu) \simeq f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2$$
and therefore
$$EK_h(X - x) \simeq \int_{-\infty}^{\infty} K(u)\left[f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2\right]du$$
$$= f(x)\int_{-\infty}^{\infty} K(u)du + f'(x)h\int_{-\infty}^{\infty} K(u)u\,du + \frac{1}{2}f''(x)h^2\int_{-\infty}^{\infty} K(u)u^2du = f(x) + \frac{1}{2}f''(x)h^2\sigma_K^2.$$
The bias of $\hat{f}(x)$ is then
$$\mathrm{Bias}(x) = E\hat{f}(x) - f(x) = \frac{1}{n}\sum_{i=1}^{n} EK_h(X_i - x) - f(x) = \frac{1}{2}f''(x)h^2\sigma_K^2.$$
We see that the bias of $\hat{f}(x)$ at $x$ depends on the second derivative $f''(x)$. The sharper the derivative, the greater the bias. Intuitively, the estimator $\hat{f}(x)$ smooths data local to $X_i = x$, so is estimating a smoothed version of $f(x)$. The bias results from this smoothing, and is larger the greater the curvature in $f(x)$.
We now examine the variance of $\hat{f}(x)$. Since it is an average of iid random variables, using first-order Taylor approximations and the fact that $n^{-1}$ is of smaller order than $(nh)^{-1}$,
$$\mathrm{Var}(x) = \frac{1}{n}\mathrm{Var}\left(K_h(X_i - x)\right) = \frac{1}{n}EK_h(X_i - x)^2 - \frac{1}{n}\left(EK_h(X_i - x)\right)^2 \simeq \frac{1}{nh^2}\int_{-\infty}^{\infty} K\left(\frac{z - x}{h}\right)^2 f(z)dz - \frac{1}{n}f(x)^2$$
$$= \frac{1}{nh}\int_{-\infty}^{\infty} K(u)^2 f(x + hu)du - \frac{1}{n}f(x)^2 \simeq \frac{f(x)}{nh}\int_{-\infty}^{\infty} K(u)^2 du = \frac{f(x)R(K)}{nh},$$
where $R(K) = \int_{-\infty}^{\infty} K(x)^2 dx$ is called the roughness of $K$.
Together, the asymptotic mean-squared error (AMSE) for fixed $x$ is the sum of the approximate squared bias and approximate variance
$$AMSE_h(x) = \frac{1}{4}f''(x)^2 h^4\sigma_K^4 + \frac{f(x)R(K)}{nh}.$$
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
$$AMISE_h = \int AMSE_h(x)dx = \frac{h^4\sigma_K^4 R(f'')}{4} + \frac{R(K)}{nh}, \qquad (13.1)$$
where $R(f'') = \int\left(f''(x)\right)^2 dx$ is the roughness of $f''$. Notice that the first term (the squared bias) is increasing in $h$ and the second term (the variance) is decreasing in $nh$. Thus for the AMISE to decline with $n$, we need $h \to 0$ but $nh \to \infty$. That is, $h$ must tend to zero, but at a slower rate than $n^{-1}$.
Equation (13.1) is an asymptotic approximation to the MSE. We define the asymptotically optimal bandwidth $h_0$ as the value which minimizes this approximate MSE. That is,
$$h_0 = \underset{h}{\mathrm{argmin}}\ AMISE_h.$$
It can be found by solving the first order condition
$$\frac{d}{dh}AMISE_h = h^3\sigma_K^4 R(f'') - \frac{R(K)}{nh^2} = 0,$$
yielding
$$h_0 = \left(\frac{R(K)}{\sigma_K^4 R(f'')}\right)^{1/5} n^{-1/5}. \qquad (13.2)$$
This solution takes the form $h_0 = cn^{-1/5}$ where $c$ is a function of $K$ and $f$, but not of $n$. We thus say that the optimal bandwidth is of order $O(n^{-1/5})$. Note that this $h$ declines to zero, but at a very slow rate.
In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (13.2) depends on $R(K)$, $\sigma_K^2$, and $R(f'')$. The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.
$$\begin{array}{lcc} K & \sigma_K^2 = \int_{-\infty}^{\infty} x^2 K(x)dx & R(K) = \int_{-\infty}^{\infty} K(x)^2 dx \\ \text{Gaussian} & 1 & 1/(2\sqrt{\pi}) \\ \text{Epanechnikov} & 1/5 & 3/5 \\ \text{Biweight} & 1/7 & 5/7 \end{array}$$

An obvious difficulty is that $R(f'')$ is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (13.2) but replaces $R(f'')$ with $\hat{\sigma}^{-5}R(\phi'')$, where $\phi$ is the $N(0,1)$ distribution and $\hat{\sigma}^2$ is an estimate of $\sigma^2 = \mathrm{Var}(X)$. This choice for $h$ gives an optimal rule when $f(x)$ is normal, and gives a nearly optimal rule when $f(x)$ is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb $h$ can be quite inefficient. We can calculate that $R(\phi'') = 3/(8\sqrt{\pi})$. Together with the above table, we find the reference rules for the three kernel functions introduced earlier.

Gaussian Kernel: $h_{rule} = 1.06\,\hat{\sigma}\,n^{-1/5}$
Epanechnikov Kernel: $h_{rule} = 2.34\,\hat{\sigma}\,n^{-1/5}$
Biweight (Quartic) Kernel: $h_{rule} = 2.78\,\hat{\sigma}\,n^{-1/5}$
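For example, the Gaussian-kernel reference bandwidth can be computed and passed to the kde sketch above:

import numpy as np

def silverman_bandwidth(data):
    # h_rule = 1.06 * sigma_hat * n^(-1/5) for the Gaussian kernel
    data = np.asarray(data, float)
    return 1.06 * np.std(data, ddof=1) * len(data) ** (-0.2)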

Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate $\hat{f}(x)$. There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate $R(f'')$ in a first step, and then plug this estimate into the formula (13.2). This is more treacherous than may first appear, as the optimal $h$ for estimation of the roughness $R(f'')$ is quite different than the optimal $h$ for estimation of $f(x)$. However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of $h$ is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data have some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths, leading to dramatically undersmoothed estimates. Fortunately there are remedies, known as smoothed cross-validation, which is a close cousin of the bootstrap.

Chapter 14

Appendix A: Mathematical Formula

Factorial: For positive integer $n$, $n! = n(n-1)(n-2)\cdots 1$, and $0! = 1$.

Exponential Function:
$$e^x = \exp(x) = \sum_{n=0}^{\infty}\frac{x^n}{n!}$$

Natural Logarithm: $\ln(x)$ is the inverse of $e^x$.

Stirling's Formula: $n! \sim \sqrt{2\pi n}\,n^n e^{-n}$, as $n \to \infty$.

Gamma Function: For $\alpha > 0$, $\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1}e^{-t}dt$.
Special values: $\Gamma(1) = 1$, $\Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi}$.
Recurrence Property: $\Gamma(\alpha + 1) = \alpha\Gamma(\alpha)$.
Relation to Factorial: For $n$ integer, $\Gamma(n) = (n-1)!$.

Binomial coefficients: For nonnegative integers $n$ and $r$, $n \geq r$,
$$\binom{n}{r} = \frac{n!}{r!(n-r)!}$$

Binomial Expansion: For real $x$ and $y$ and nonnegative integer $n$,
$$(x + y)^n = \sum_{i=0}^{n}\binom{n}{i}x^{n-i}y^i$$

A number x is the limit of the sequence fxn g if for every " > 0; there is some N < 1 such
that jxn xj < " for all n N: The sequence is said to converge if it has a limit.
The limit superior (limsup) and limit inferior (liminf ) of the sequence fxn g are

lim sup xn = inf sup xk


n!1 N k N

lim inf xn = sup inf xk


n!1 N k N

A function $f : \mathbb{R} \to \mathbb{R}$ is continuous at $a$ if $\lim_{x\to a} f(x) = f(a)$. A function $f(x)$ is uniformly continuous if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that $|f(x) - f(y)| < \varepsilon$ for every $x$ and $y$ in its domain with $|x - y| < \delta$.

Taylor Series in one variable:
$$f(a + x) = \sum_{n=0}^{N} f^{(n)}(a)\frac{x^n}{n!} + \frac{x^{N+1}}{(N+1)!}f^{(N+1)}(a + \theta x), \qquad 0 \leq \theta \leq 1.$$

Second-Order Vector Taylor Expansion. For $x, a \in \mathbb{R}^k$,
$$f(a + x) = f(a) + x'\frac{\partial}{\partial a}f(a) + \frac{1}{2}x'\frac{\partial^2}{\partial a\,\partial a'}f(a + \theta x)\,x, \qquad 0 \leq \theta \leq 1.$$

Chapter 15

Appendix B: Matrix Algebra

15.1 Terminology
A scalar a is a single number.
A vector a is a k 1 list of numbers, typically arranged in a column. We write this as
0 1
a1
B a2 C
B C
a=B . C
@ . A.
ak

Equivalently, a vector a is an element of Euclidean k space, hence a 2 Rk : If k = 1 then a is a


scalar.
A matrix A is a k r rectangular array of numbers, written as
2 3
a11 a12 a1r
6 a21 a22 a2r 7
6 7
A=6 . .. .. 7 = [aij ]
4 .. . . 5
ak1 ak2 akr

By convention aij refers to the i0 th row and j 0 th column of A: If r = 1 or k = 1 then A is a vector.


If r = k = 1; then A is a scalar.
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
2 0 3
1
6 0 7
6 2 7
A= a1 a2 ar =6 . 7
4 .. 5
0
k

where 2 3
a1i
6 a2i 7
6 7
ai = 6 .. 7
4 . 5
aki
are column vectors and
0
j = aj1 aj2 ajr
are row vectors.
The transpose of a matrix, denoted B = A0 ; is obtained by ipping the matrix on its diagonal.
2 3
a11 a21 ak1
6 a12 a22 a 7
6 k2 7
B = A0 = 6 . . .. 7
4 .. .. . 5
a1r a2r akr

Thus bij = aji for all i and j: Note that if A is k r, then A0 is r k: If a is a k 1 vector, then
a0 is a 1 k row vector.
A matrix is square if k = r: A square matrix is symmetric if A = A0 ; which implies aij = aji :
A square matrix is diagonal if the only non-zero elements appear on the diagonal, so that aij = 0
if i 6= j: A square matrix is upper (lower) diagonal if all elements below (above) the diagonal
equal zero.
A partitioned matrix takes the form
2 3
A11 A12 A1r
6 A21 A22 A2r 7
6 7
A=6 . .. .. 7
4 .. . . 5
Ak1 Ak2 Akr

where the Aij denote matrices, vectors and/or scalars.

15.2 Matrix Multiplication


If a and b are both k 1; then their inner product is
k
X
a0 b = a1 b1 + a2 b2 + + ak bk = aj bj
j=1

Note that a0 b = b0 a:

If A is k r and B is r s; then we de ne their product AB by writing A as a set of row
vectors and B as a set of column vectors (each of length r). Then
2 0 3
a1
6 a0 7
6 2 7
AB = 6 . 7 b1 b2 br
4 .. 5
a0k
2 3
a01 b1 a01 b2 a01 br
6 a02 b1 a02 b2 a02 br 7
6 7
= 6 .. .. .. 7
4 . . . 5
0 0
ak b1 ak b2 a0k br

When the number of columns of A equals the number of rows of B; we say that A and B; or the
product AB; are conformable, and this is the only case where this product is de ned.
An alternative way to write the matrix product is to use matrix partitions. For example,

A11 A12 B11 B12


AB =
A21 A22 B21 B22
A11 B11 + A12 B21 A11 B12 + A12 B22
=
A21 B11 + A22 B21 A21 B12 + A22 B22

and
2 3
B1
6 B2 7
6 7
AB = A1 A2 Ar 6 .. 7
4 . 5
Br
= A1 B1 + A2 B2 + + Ar Br
Xr
= Ar Br
j=1

The Euclidean norm of an m 1 vector a is


m
!1=2
1=2
X
0
jaj = a a = a2i :
i=1

If A is a m n matrix, then its Euclidean norm is


0 11=2
m X
X n
1=2 1=2
jAj = tr A0 A = vec(A)0 vec(A) =@ a2ij A :
i=1 j=1

An important diagonal matrix is the identity matrix, which has ones on the diagonal. A
k k identity matrix is denoted as
2 3
1 0 0
6 0 1 0 7
6 7
Ik = 6 . . .. 7
4 .. .. . 5
0 0 1

Important properties are that if A is k r; then AIr = A and Ik A = A:


We say that two vectors a and b are orthogonal if a0 b = 0: The columns of a k r matrix A,
r k, are said to be orthogonal if A0 A = Ir : A square matrix A is called orthogonal if A0 A = Ik :

15.3 Trace, Inverse, Determinant


The trace of a k k square matrix A is the sum of its diagonal elements
k
X
tr (A) = aii
i=1

Some straightforward properties are

tr (cA) = c tr (A)
tr A0 = tr (A)
tr (A + B) = tr (A) + tr (B)
tr (Ik ) = k
tr (AB) = tr (BA)

The last result follows since


2 3
a01 b1 a01 b2 a01 bk
6 a02 b1 a02 b2 a02 bk 7
6 7
tr (AB) = tr 6 .. .. .. 7
4 . . . 5
a0k b1 a0k b2 a0k bk
k
X
= a0i bi
i=1
Xk
= b0i ai
i=1
= tr (BA) :

A k k matrix A has full rank, or is nonsingular, if there is no c 6= 0 such that Ac = 0:
In this case there exists a unique matrix B such that AB = BA = Ik : This matrix is called the
inverse of A and is denoted by A 1 : Some properties include

1 1
AA = A A = Ik
1 0 0 1
A = A
1 1 1
(AC) = C A
1 1 1 1 1 1
(A + C) = A A +C C
1 1 1 1 1 1
A (A + C) = A A +C A
Also, if A is an orthogonal matrix, then A 1 = A:
The following fact about inverting partitioned matrices is sometimes useful:
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} E^{-1} & -E^{-1}BD^{-1} \\ -D^{-1}CE^{-1} & F^{-1} \end{pmatrix} \qquad (15.1)$$
where
$$E^{-1} = A^{-1} + A^{-1}BF^{-1}CA^{-1} = \left(A - BD^{-1}C\right)^{-1}$$
$$F^{-1} = D^{-1} + D^{-1}CE^{-1}BD^{-1} = \left(D - CA^{-1}B\right)^{-1}.$$
Even if a matrix A does not possess an inverse, we can still de ne a generalized inverse A
as a matrix which satis es
AA A = A: (15.2)
The matrix A is not necessarily unique. The Moore-Penrose generalized inverse A satis es
(15.2) plus the following three conditions
A AA = A
AA is symmetric
A A is symmetric
For any matrix A; the Moore-Penrose generalized inverse A exists and is unique.
The determinant is de ned for square matrices.
If A is 2 2; then its determinant is det A = a11 a22 a12 a21 :
For a general k k matrix A = [aij ] ; we can de ne the determinant as follows. Let =
(j1 ; :::; jk ) denote a permutation of (1; :::; k) : There are k! such permutations. There is a unique
count of the number of inversions of the indices of such permutations (relative to the natural order
(1; :::; k)); and let " = +1 if this count is even and " = 1 if the count is odd. Then
X
det A = " a1j1 a2j2 akjk

Some properties include

det A = det A0

det ( A) = k det A

det(AB) = (det A) (det B)


1 1
det A = (det A)

A B 1C
det = (det D) det A BD if det D 6= 0
C D
det A 6= 0 if and only if A is nonsingular.
Qk
If A is triangular (upper or lower), then det A = i=1 aii

If A is orthogonal, then det A = 1

15.4 Eigenvalues
The characteristic equation of a square matrix A is

det (A Ik ) = 0:

The left side is a polynomial of degree k in so has exactly k roots, which are not necessarily
distinct and may be real or complex. They are called the latent roots or characteristic roots
or eigenvalues of A. If i is an eigenvalue of A; then A i Ik is singular so there exists a non-zero
vector hi such that
(A i Ik ) hi = 0

The vector hi is called a latent vector or characteristic vector or eigenvector of A corre-


sponding to i :
We now state some useful properties. Let i and hi , i = 1; :::; k denote the k eigenvalues and
eigenvectors of a square matrix A: Let be a diagonal matrix with the characteristic roots in the
diagonal, and let H = [h1 hk ]:
Q
det(A) = ki=1 i
P
tr(A) = ki=1 i

A is non-singular if and only if all its characteristic roots are non-zero.

If A has distinct characteristic roots, there exists a nonsingular matrix P such that A =
P 1 P and P AP 1 = .

If A is symmetric, then A = H H 0 and H 0 AH = ; and the characteristic roots are all real.

1 1 1 1
The characteristic roots of A are 1 ; 2 ; ..., k :

The decomposition A = H H 0 is called the spectral decomposition of a matrix.


We de ne the rank of a square matrix as the number of its non-zero characteristic roots.
We say that a square matrix A is positive semi-de nite if for all non-zero c; c0 Ac 0: This
is written as A 0: We say that A is positive de nite if for all non-zero c; c0 Ac > 0: This is
written as A > 0:
If A is positive de nite, then A is non-singular and A 1 exists. Furthermore, A 1 > 0:
We say that X is n k; k < n; has full rank k if there is no non-zero c such that Xc = 0: In
this case, X 0 X is symmetric and positive de nite.
If A is symmetric, then A > 0 if and only if all its characteristic roots are positive.
If A > 0 we can nd a matrix B such that A = BB 0 : We call B a matrix square root of A:
The matrix B need not be unique. One way to construct B is to use the spectral decomposition
A = H H 0 where is diagonal, and then set B = H 1=2 :

15.5 Idempotent and Projection Matrices


A square matrix A is idempotent if AA = A:
If A is also symmetric (most idempotent matrices are) then all its characteristic roots equal
either zero or one. To see this, note that we can write A = H H 0 where H is orthogonal and
contains the (real) characteristic roots. Then

A = AA = H H 0 H H 0 = H 2
H 0:

By the uniqueness of the characteristic roots, we deduce that 2 = and 2i = i for i = 1; :::; k:
Hence they must equal either 0 or 1.
It follows that if A is symmetric and idempotent, then tr(A) = rank(A):
Let X be an n k matrix; k < n: Two projection matrices are
1
P = X X 0X X0
M = In P
1
= In X X 0X X 0:

They are called projection matrices due to the property that for any matrix Z which can be written
as Z = X for some matrix ; (we say that Z lies in the range space of X) then
1
P Z = P X = X X 0X X 0X = X = Z

and
M Z = (In P)Z = Z PZ = Z Z = 0:

As an important example of this property, partition the matrix X into two matrices X1 and
X2 ; so that
X = [X1 X2 ] :
Then P X1 = X1 and M X1 = 0:
P and M are symmetric:
1 0
P0 = X X 0X X0
0 1 0
= X0 X 0X (X)0
0 1
= X X 0X X0
0 1
= X (X)0 X 0 X0
= P

and
M 0 = (In P )0 = In0 P 0 = In P = M:
The projection matrices P and M are idempotent:
1 1
PP = X X 0X X0 X X 0X X0
1 1
= X X 0X X 0X X 0X X0
1
= X X 0X X 0 = P;

and

MM = (In P ) (In P)
= In In P In In P + P P
= In P P +P
= In P = M:

Furthermore,

M +P = In P + P = In
MP = (In P )P = P PP = P P = 0:

Another useful property is that

tr P = k (15.3)
tr M = n k: (15.4)

Indeed,
1
tr P = tr X X 0 X X0
1
= tr X 0X X 0X
= tr (Ik )
= k;

and
tr M = tr (In P ) = tr (In ) tr (P ) = n k:
From this, we deduce that the ranks of $P$ and $M$ are $k$ and $n - k$, respectively. Since $M$ is
symmetric and idempotent, its spectral decomposition takes the form

In k 0
M =H H0 (15.5)
0 0

with H 0 H = In .
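A small numerical check of these projection-matrix properties (a sketch; any full-rank $X$ will do):

import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 3
X = rng.normal(size=(n, k))
P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P

print(np.allclose(P @ P, P), np.allclose(M @ M, M))    # idempotent
print(np.allclose(P @ X, X), np.allclose(M @ X, 0))    # P X = X and M X = 0
print(round(np.trace(P)), round(np.trace(M)))          # k and n - k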

15.6 Kronecker Products and the Vec Operator


Let A = [a_1 a_2 ⋯ a_n] = [a_{ij}] be m × n. The vec of A, denoted vec(A), is the mn × 1 vector obtained by stacking the columns of A:

vec(A) = ( a_1' , a_2' , ... , a_n' )'.

Let B be any matrix. The Kronecker product of A and B, denoted A ⊗ B, is the matrix whose (i, j) block is a_{ij}B:

A ⊗ B = [ a_{11}B  a_{12}B  ⋯  a_{1n}B
          a_{21}B  a_{22}B  ⋯  a_{2n}B
            ⋮        ⋮             ⋮
          a_{m1}B  a_{m2}B  ⋯  a_{mn}B ].

Some important properties are now summarized. These results hold for matrices for which all matrix multiplications are conformable.

• (A + B) ⊗ C = A ⊗ C + B ⊗ C

• (A ⊗ B)(C ⊗ D) = AC ⊗ BD

• A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C

• (A ⊗ B)' = A' ⊗ B'

• tr(A ⊗ B) = tr(A) tr(B)

• If A is m × m and B is n × n, det(A ⊗ B) = (det(A))^n (det(B))^m

• (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}

• If A > 0 and B > 0 then A ⊗ B > 0

• vec(ABC) = (C' ⊗ A) vec(B)

• tr(ABCD) = vec(D')'(C' ⊗ A) vec(B)
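The vec and Kronecker identities are straightforward to confirm numerically. The following sketch (Python/NumPy, illustrative only) checks vec(ABC) = (C' ⊗ A) vec(B), using NumPy's kron and column-major ("F") ordering to match the column-stacking definition of vec.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 2))
C = rng.normal(size=(2, 5))

vec = lambda M: M.reshape(-1, 1, order="F")   # stack the columns, as in the text

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.allclose(lhs, rhs))                  # vec(ABC) = (C' kron A) vec(B)
```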

15.7 Matrix Calculus


Let x = (x_1, ..., x_k)' be k × 1 and g(x) = g(x_1, ..., x_k) : R^k → R. The vector derivative is

∂/∂x g(x) = ( ∂g(x)/∂x_1 , ... , ∂g(x)/∂x_k )'   (a k × 1 column)

and

∂/∂x' g(x) = ( ∂g(x)/∂x_1  ⋯  ∂g(x)/∂x_k )   (a 1 × k row).

Some properties are now summarized.

• ∂/∂x (a'x) = ∂/∂x (x'a) = a

• ∂/∂x' (Ax) = A

• ∂/∂x (x'Ax) = (A + A')x

• ∂²/∂x∂x' (x'Ax) = A + A'

Let A = [a_{ij}] be m × n and g(A) : R^{mn} → R. We define

∂/∂A g(A) = [ ∂g(A)/∂a_{ij} ].

Some properties are now summarized.

• ∂/∂A (x'Ax) = xx'

• ∂/∂A ln det(A) = (A^{-1})'

• ∂/∂A tr(AB) = B'

• ∂/∂A tr(A^{-1}B) = −A^{-1}BA^{-1}
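These vector-derivative rules can be sanity-checked against numerical gradients. The sketch below (Python/NumPy, illustrative) compares the analytic gradient (A + A')x of the quadratic form x'Ax with a central finite-difference approximation.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4
A = rng.normal(size=(k, k))
x = rng.normal(size=k)

g_analytic = (A + A.T) @ x                    # d(x'Ax)/dx = (A + A')x

eps = 1e-6
g_numeric = np.array([
    ((x + eps * e) @ A @ (x + eps * e) - (x - eps * e) @ A @ (x - eps * e)) / (2 * eps)
    for e in np.eye(k)                        # perturb one coordinate at a time
])
print(np.allclose(g_analytic, g_numeric, atol=1e-4))
```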

Chapter 16

Appendix C: Probability

16.1 Foundations
The set S of all possible outcomes of an experiment is called the sample space for the experiment.
Take the simple example of tossing a coin. There are two outcomes, heads and tails, so we
can write S = fH; T g: If two coins are tossed in sequence, we can write the four outcomes as
S = fHH; HT; T H; T T g:
An event A is any collection of possible outcomes of an experiment. An event is a subset of S, including S itself and the null set ∅. Continuing the two-coin example, one event is A = {HH, HT}, the event that the first coin is heads. We say that A and B are disjoint or mutually exclusive if A ∩ B = ∅. For example, the sets {HH, HT} and {TH} are disjoint. Furthermore, if the sets A_1, A_2, ... are pairwise disjoint and ∪_{i=1}^{∞} A_i = S, then the collection A_1, A_2, ... is called a partition of S.
The following are elementary set operations:
Union: A ∪ B = {x : x ∈ A or x ∈ B}.
Intersection: A ∩ B = {x : x ∈ A and x ∈ B}.
Complement: A^c = {x : x ∉ A}.
The following are useful properties of set operations.
Commutativity: A ∪ B = B ∪ A; A ∩ B = B ∩ A.
Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C.
Distributive Laws: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C); A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
DeMorgan's Laws: (A ∪ B)^c = A^c ∩ B^c; (A ∩ B)^c = A^c ∪ B^c.
A probability function assigns probabilities (numbers between 0 and 1) to events A in S. This is straightforward when S is countable; when S is uncountable we must be somewhat more careful. A set B is called a sigma algebra (or Borel field) if ∅ ∈ B, A ∈ B implies A^c ∈ B, and A_1, A_2, ... ∈ B implies ∪_{i=1}^{∞} A_i ∈ B. A simple example is {∅, S}, which is known as the trivial sigma algebra. For any sample space S, let B be the smallest sigma algebra which contains all of the open sets in S. When S is countable, B is simply the collection of all subsets of S, including ∅ and S. When S is the real line, then B is the collection of all open and closed intervals. We call B the sigma algebra associated with S. We only define probabilities for events contained in B.
We now can give the axiomatic definition of probability. Given S and B, a probability function P satisfies P(S) = 1, P(A) ≥ 0 for all A ∈ B, and if A_1, A_2, ... ∈ B are pairwise disjoint, then P(∪_{i=1}^{∞} A_i) = ∑_{i=1}^{∞} P(A_i).
Some important properties of the probability function include the following:

• P(∅) = 0

• P(A) ≤ 1

• P(A^c) = 1 − P(A)

• P(B ∩ A^c) = P(B) − P(A ∩ B)

• P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

• If A ⊂ B then P(A) ≤ P(B)

• Bonferroni's Inequality: P(A ∩ B) ≥ P(A) + P(B) − 1

• Boole's Inequality: P(A ∪ B) ≤ P(A) + P(B)

For some elementary probability models, it is useful to have simple rules to count the number of objects in a set.
When counting the number of objects in a set, there are two important distinctions. Counting may be with replacement or without replacement. Counting may be ordered or unordered. For example, consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is without replacement if you are not allowed to select the same number twice, and is with replacement if this is allowed. Counting is ordered or not depending on whether the sequential order of the numbers is relevant to winning the lottery. Depending on these two distinctions, we have four expressions for the number of objects (possible arrangements) of size r from n objects, where C(n, r) = n!/(r!(n − r)!) denotes the binomial coefficient.

              Without Replacement     With Replacement
  Ordered     n!/(n − r)!             n^r
  Unordered   C(n, r)                 C(n + r − 1, r)

In the lottery example, if counting is unordered and without replacement, the number of potential combinations is C(49, 6) = 13,983,816.
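These counting rules are available directly in Python's standard library; the short sketch below (illustrative) reproduces the four cases of the table and the lottery count.

```python
from math import comb, perm

n, r = 49, 6
print(perm(n, r))           # ordered, without replacement: n!/(n-r)!
print(n ** r)               # ordered, with replacement: n^r
print(comb(n, r))           # unordered, without replacement: 13,983,816
print(comb(n + r - 1, r))   # unordered, with replacement
```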
If P(B) > 0 the conditional probability of the event A given the event B is

P(A | B) = P(A ∩ B) / P(B).

For any B, the conditional probability function is a valid probability function where S has been replaced by B. Rearranging the definition, we can write

P(A ∩ B) = P(A | B) P(B),

which is often quite useful. We can say that the occurrence of B has no information about the likelihood of event A when P(A | B) = P(A), in which case we find

P(A ∩ B) = P(A) P(B).     (16.1)

We say that the events A and B are statistically independent when (16.1) holds. Furthermore, we say that the collection of events A_1, ..., A_k are mutually independent when for any subset {A_i : i ∈ I},

P( ∩_{i∈I} A_i ) = ∏_{i∈I} P(A_i).

Theorem 2 (Bayes' Rule). For any set B and any partition A_1, A_2, ... of the sample space, then for each i = 1, 2, ...,

P(A_i | B) = P(B | A_i) P(A_i) / ∑_{j=1}^{∞} P(B | A_j) P(A_j).
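A quick numerical check of Bayes' Rule (a sketch with made-up probabilities, not an example from the text): with a two-event partition {A_1, A_2} and likelihoods P(B | A_i), the posterior probabilities follow directly from the formula.

```python
# Partition probabilities P(A_i) and likelihoods P(B | A_i): illustrative numbers
p_A = [0.3, 0.7]
p_B_given_A = [0.9, 0.2]

p_B = sum(pa * pb for pa, pb in zip(p_A, p_B_given_A))           # denominator
posterior = [pa * pb / p_B for pa, pb in zip(p_A, p_B_given_A)]  # Bayes' Rule
print(posterior)    # P(A_1 | B) is approximately 0.659
```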

16.2 Random Variables


A random variable X is a function from a sample space S into the real line. This induces a new sample space (the real line) and a new probability function on the real line. Typically, we denote random variables by uppercase letters such as X, and use lower case letters such as x for potential values and realized values. For a random variable X we define its cumulative distribution function (CDF) as

F(x) = P(X ≤ x).     (16.2)

Sometimes we write this as F_X(x) to denote that it is the CDF of X. A function F(x) is a CDF if and only if the following three properties hold:

1. lim_{x→−∞} F(x) = 0 and lim_{x→∞} F(x) = 1

2. F(x) is nondecreasing in x

3. F(x) is right-continuous

We say that the random variable X is discrete if F(x) is a step function. In the latter case, the range of X consists of a countable set of real numbers τ_1, ..., τ_r. The probability function for X takes the form

P(X = τ_j) = π_j,   j = 1, ..., r,     (16.3)

where 0 ≤ π_j ≤ 1 and ∑_{j=1}^{r} π_j = 1.
We say that the random variable X is continuous if F(x) is continuous in x. In this case P(X = τ) = 0 for all τ ∈ R, so the representation (16.3) is unavailable. Instead, we represent the relative probabilities by the probability density function (PDF)

f(x) = d/dx F(x),

so that

F(x) = ∫_{−∞}^{x} f(u) du

and

P(a ≤ X ≤ b) = ∫_{a}^{b} f(x) dx.

These expressions only make sense if F(x) is differentiable. While there are examples of continuous random variables which do not possess a PDF, these cases are unusual and are typically ignored.
A function f(x) is a PDF if and only if f(x) ≥ 0 for all x ∈ R and ∫_{−∞}^{∞} f(x) dx = 1.

16.3 Expectation
For any measurable real function g, we define the mean or expectation Eg(X) as follows. If X is discrete,

Eg(X) = ∑_{j=1}^{r} g(τ_j) π_j,

and if X is continuous,

Eg(X) = ∫_{−∞}^{∞} g(x) f(x) dx.

The latter is well defined and finite if

∫_{−∞}^{∞} |g(x)| f(x) dx < ∞.     (16.4)

If (16.4) does not hold, evaluate

I_1 = ∫_{g(x)>0} g(x) f(x) dx
I_2 = − ∫_{g(x)<0} g(x) f(x) dx.

If I_1 = ∞ and I_2 < ∞ then we define Eg(X) = ∞. If I_1 < ∞ and I_2 = ∞ then we define Eg(X) = −∞. If both I_1 = ∞ and I_2 = ∞ then Eg(X) is undefined.

Since E(a + bX) = a + bEX, we say that expectation is a linear operator.
For m > 0, we define the m'th moment of X as EX^m and the m'th central moment as E(X − EX)^m.
Two special moments are the mean μ = EX and variance σ² = E(X − μ)² = EX² − μ². We call σ = √σ² the standard deviation of X. We can also write σ² = Var(X). For example, this allows the convenient expression Var(a + bX) = b² Var(X).


The moment generating function (MGF) of X is

M(λ) = E exp(λX).

The MGF does not necessarily exist. However, when it does and E|X|^m < ∞, then

d^m/dλ^m M(λ) |_{λ=0} = E(X^m),

which is why it is called the moment generating function.
More generally, the characteristic function (CF) of X is

C(λ) = E exp(iλX),

where i = √(−1) is the imaginary unit. The CF always exists, and when E|X|^m < ∞,

d^m/dλ^m C(λ) |_{λ=0} = i^m E(X^m).

The L^p norm, p ≥ 1, of the random variable X is

‖X‖_p = (E|X|^p)^{1/p}.

16.4 Common Distributions


For reference, we now list some important discrete distribution functions.

Bernoulli

P(X = x) = p^x (1 − p)^{1−x},   x = 0, 1;   0 ≤ p ≤ 1
EX = p
Var(X) = p(1 − p)

Binomial

P(X = x) = C(n, x) p^x (1 − p)^{n−x},   x = 0, 1, ..., n;   0 ≤ p ≤ 1
EX = np
Var(X) = np(1 − p)

Geometric

P(X = x) = p(1 − p)^{x−1},   x = 1, 2, ...;   0 ≤ p ≤ 1
EX = 1/p
Var(X) = (1 − p)/p²

Multinomial

P(X_1 = x_1, X_2 = x_2, ..., X_m = x_m) = [n!/(x_1! x_2! ⋯ x_m!)] p_1^{x_1} p_2^{x_2} ⋯ p_m^{x_m},
    x_1 + ⋯ + x_m = n;   p_1 + ⋯ + p_m = 1
EX =
Var(X) =

Negative Binomial

P(X = x) = C(r + x − 1, x) p(1 − p)^{x−1},   x = 1, 2, ...;   0 ≤ p ≤ 1
EX =
Var(X) =

Poisson

P(X = x) = exp(−λ) λ^x / x!,   x = 0, 1, 2, ...;   λ > 0
EX = λ
Var(X) = λ
We now list some important continuous distributions.

Beta

f(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1} (1 − x)^{β−1},   0 ≤ x ≤ 1;   α > 0, β > 0
EX = α/(α + β)
Var(X) = αβ/((α + β + 1)(α + β)²)

Cauchy

f(x) = 1/(π(1 + x²)),   −∞ < x < ∞
EX = ∞
Var(X) = ∞

Exponential

f(x) = (1/θ) exp(−x/θ),   0 ≤ x < ∞;   θ > 0
EX = θ
Var(X) = θ²

Logistic

f(x) = exp(−x)/(1 + exp(−x))²,   −∞ < x < ∞
EX = 0
Var(X) = π²/3

Lognormal

f(x) = (1/(√(2π) σ x)) exp(−(ln x − μ)²/(2σ²)),   0 ≤ x < ∞;   σ > 0
EX = exp(μ + σ²/2)
Var(X) = exp(2μ + 2σ²) − exp(2μ + σ²)

Pareto

f(x) = βα^β / x^{β+1},   α ≤ x < ∞;   α > 0, β > 0
EX = βα/(β − 1),   β > 1
Var(X) = βα²/((β − 1)²(β − 2)),   β > 2

Uniform

f(x) = 1/(b − a),   a ≤ x ≤ b
EX = (a + b)/2
Var(X) = (b − a)²/12

Weibull

f(x) = (γ/λ) x^{γ−1} exp(−x^γ/λ),   0 ≤ x < ∞;   γ > 0, λ > 0
EX = λ^{1/γ} Γ(1 + 1/γ)
Var(X) = λ^{2/γ} ( Γ(1 + 2/γ) − Γ²(1 + 1/γ) )
16.5 Multivariate Random Variables
A pair of bivariate random variables (X, Y) is a function from the sample space into R². The joint CDF of (X, Y) is

F(x, y) = P(X ≤ x, Y ≤ y).

If F is continuous, the joint probability density function is

f(x, y) = ∂²F(x, y)/∂x∂y.

For a Borel measurable set A ⊂ R²,

P((X, Y) ∈ A) = ∫∫_A f(x, y) dx dy.

For any measurable function g(x, y),

Eg(X, Y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy.

The marginal distribution of X is

F_X(x) = P(X ≤ x)
       = lim_{y→∞} F(x, y)
       = ∫_{−∞}^{x} ∫_{−∞}^{∞} f(x, y) dy dx,

so the marginal density of X is

f_X(x) = d/dx F_X(x) = ∫_{−∞}^{∞} f(x, y) dy.

Similarly, the marginal density of Y is

f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx.

The random variables X and Y are defined to be independent if f(x, y) = f_X(x) f_Y(y). Furthermore, X and Y are independent if and only if there exist functions g(x) and h(y) such that f(x, y) = g(x) h(y).
If X and Y are independent, then

E(g(X) h(Y)) = ∫∫ g(x) h(y) f(y, x) dy dx
             = ∫∫ g(x) h(y) f_Y(y) f_X(x) dy dx
             = ∫ g(x) f_X(x) dx ∫ h(y) f_Y(y) dy
             = Eg(X) Eh(Y),     (16.5)
if the expectations exist. For example, if X and Y are independent then

E(XY) = EX EY.

Another implication of (16.5) is that if X and Y are independent and Z = X + Y, then

M_Z(λ) = E exp(λ(X + Y))
       = E( exp(λX) exp(λY) )
       = E exp(λX) E exp(λY)
       = M_X(λ) M_Y(λ).     (16.6)

The covariance between X and Y is

Cov(X, Y) = σ_{XY} = E( (X − EX)(Y − EY) ) = EXY − EX EY.

The correlation between X and Y is

Corr(X, Y) = ρ_{XY} = σ_{XY}/(σ_X σ_Y).

The Cauchy-Schwarz Inequality implies that |ρ_{XY}| ≤ 1. The correlation is a measure of linear dependence, free of units of measurement.
If X and Y are independent, then σ_{XY} = 0 and ρ_{XY} = 0. The reverse, however, is not true. For example, if EX = 0 and EX³ = 0, then Cov(X, X²) = 0.
A useful fact is that

Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y).

An implication is that if X and Y are independent, then

Var(X + Y) = Var(X) + Var(Y);

the variance of the sum is the sum of the variances.


A k × 1 random vector X = (X_1, ..., X_k)' is a function from S to R^k. Letting x = (x_1, ..., x_k)', it has the distribution and density functions

F(x) = P(X ≤ x)
f(x) = ∂^k F(x)/(∂x_1 ⋯ ∂x_k).

For a measurable function g : R^k → R^s, we define the expectation

Eg(X) = ∫_{R^k} g(x) f(x) dx,

where the symbol dx denotes dx_1 ⋯ dx_k. In particular, we have the k × 1 multivariate mean

μ = EX

and k × k covariance matrix

Σ = E(X − μ)(X − μ)'
  = EXX' − μμ'.

If the elements of X are mutually independent, then Σ is a diagonal matrix and

Var( ∑_{i=1}^{k} X_i ) = ∑_{i=1}^{k} Var(X_i).

16.6 Conditional Distributions and Expectation


The conditional density of Y given X = x is defined as

f_{Y|X}(y | x) = f(x, y) / f_X(x)

if f_X(x) > 0. One way to derive this expression from the definition of conditional probability is

f_{Y|X}(y | x) = ∂/∂y lim_{ε→0} P(Y ≤ y | x ≤ X ≤ x + ε)
               = ∂/∂y lim_{ε→0} P({Y ≤ y} ∩ {x ≤ X ≤ x + ε}) / P(x ≤ X ≤ x + ε)
               = ∂/∂y lim_{ε→0} [F(x + ε, y) − F(x, y)] / [F_X(x + ε) − F_X(x)]
               = ∂/∂y lim_{ε→0} [∂F(x + ε, y)/∂x] / f_X(x + ε)
               = [∂²F(x, y)/∂x∂y] / f_X(x)
               = f(x, y) / f_X(x).

The conditional mean or conditional expectation is the function

m(x) = E(Y | X = x) = ∫_{−∞}^{∞} y f_{Y|X}(y | x) dy.

The conditional mean m(x) is a function, meaning that when X equals x, then the expected value of Y is m(x).

Similarly, we define the conditional variance of Y given X = x as

σ²(x) = Var(Y | X = x)
      = E( (Y − m(x))² | X = x )
      = E(Y² | X = x) − m(x)².

Evaluated at x = X, the conditional mean m(X) and conditional variance σ²(X) are random variables, functions of X. We write this as E(Y | X) = m(X) and Var(Y | X) = σ²(X). For example, if E(Y | X = x) = α + βx, then E(Y | X) = α + βX, a transformation of X.
The following are important facts about conditional expectations.
Simple Law of Iterated Expectations:

E(E(Y | X)) = E(Y)     (16.7)

Proof:

E(E(Y | X)) = E(m(X))
            = ∫_{−∞}^{∞} m(x) f_X(x) dx
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f_{Y|X}(y | x) f_X(x) dy dx
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(y, x) dy dx
            = E(Y).

Law of Iterated Expectations:

E(E(Y | X, Z) | X) = E(Y | X)     (16.8)

Conditioning Theorem. For any function g(x),

E(g(X)Y | X) = g(X) E(Y | X)     (16.9)

Proof: Let

h(x) = E(g(X)Y | X = x)
     = ∫_{−∞}^{∞} g(x) y f_{Y|X}(y | x) dy
     = g(x) ∫_{−∞}^{∞} y f_{Y|X}(y | x) dy
     = g(x) m(x),

where m(x) = E(Y | X = x). Thus h(X) = g(X)m(X), which is the same as E(g(X)Y | X) = g(X) E(Y | X).

16.7 Transformations
Suppose that X ∈ R^k with continuous distribution function F_X(x) and density f_X(x). Let Y = g(X) where g(x) : R^k → R^k is one-to-one, differentiable, and invertible. Let h(y) denote the inverse of g(x). The Jacobian is

J(y) = det( ∂h(y)/∂y' ).

Consider the univariate case k = 1. If g(x) is an increasing function, then g(X) ≤ y if and only if X ≤ h(y), so the distribution function of Y is

F_Y(y) = P(g(X) ≤ y)
       = P(X ≤ h(y))
       = F_X(h(y)),

so the density of Y is

f_Y(y) = d/dy F_Y(y) = f_X(h(y)) (d/dy) h(y).

If g(x) is a decreasing function, then g(X) ≤ y if and only if X ≥ h(y), so

F_Y(y) = P(g(X) ≤ y)
       = 1 − P(X ≤ h(y))
       = 1 − F_X(h(y)),

and the density of Y is

f_Y(y) = −f_X(h(y)) (d/dy) h(y).

We can write these two cases jointly as

f_Y(y) = f_X(h(y)) |J(y)|.     (16.10)

This is known as the change-of-variables formula. This same formula (16.10) holds for k > 1, but its justification requires deeper results from analysis.
As one example, take the case X ~ U[0, 1] and Y = −ln(X). Here, g(x) = −ln(x) and h(y) = exp(−y), so the Jacobian is J(y) = −exp(−y). As the range of X is [0, 1], that for Y is [0, ∞). Since f_X(x) = 1 for 0 ≤ x ≤ 1, (16.10) shows that

f_Y(y) = exp(−y),   0 ≤ y < ∞,

an exponential density.
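The change-of-variables result in this example is easy to verify by simulation. The sketch below (Python/NumPy, illustrative) draws X ~ U[0, 1], forms Y = −ln X, and compares the empirical mean, variance, and CDF at 1 with the exponential(1) values.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=1_000_000)
Y = -np.log(X)                   # change of variables: Y should be exponential(1)

print(Y.mean(), Y.var())         # both approximately 1
print(np.mean(Y <= 1.0))         # approximately 1 - exp(-1) = 0.632
```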

16.8 Normal and Related Distributions
The standard normal density is

φ(x) = (1/√(2π)) exp(−x²/2),   −∞ < x < ∞.

This density has all moments finite. Since it is symmetric about zero all odd moments are zero. By iterated integration by parts, we can also show that EX² = 1 and EX⁴ = 3. It is conventional to write X ~ N(0, 1), and to denote the standard normal density function by φ(x) and its distribution function by Φ(x). The latter has no closed-form solution.
If Z is standard normal and X = μ + σZ, then using the change-of-variables formula, X has density

f(x) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)),   −∞ < x < ∞,

which is the univariate normal density. The mean and variance of the distribution are μ and σ², and it is conventional to write X ~ N(μ, σ²).
For x ∈ R^k, the multivariate normal density is

f(x) = (1/((2π)^{k/2} det(Σ)^{1/2})) exp( −(x − μ)'Σ^{-1}(x − μ)/2 ),   x ∈ R^k.

The mean and covariance matrix of the distribution are μ and Σ, and it is conventional to write X ~ N(μ, Σ).
It is useful to observe that the MGF and CF of the multivariate normal are exp(λ'μ + λ'Σλ/2) and exp(iλ'μ − λ'Σλ/2), respectively.
If X ∈ R^k is multivariate normal and the elements of X are mutually uncorrelated, then Σ = diag{σ_j²} is a diagonal matrix. In this case the density function can be written as

f(x) = (1/((2π)^{k/2} σ_1 ⋯ σ_k)) exp( −[ (x_1 − μ_1)²/σ_1² + ⋯ + (x_k − μ_k)²/σ_k² ] / 2 )
     = ∏_{j=1}^{k} (1/((2π)^{1/2} σ_j)) exp( −(x_j − μ_j)²/(2σ_j²) ),

which is the product of marginal univariate normal densities. This shows that if X is multivariate normal with uncorrelated elements, then they are mutually independent.
Another useful fact is that if X ~ N(μ, Σ) and Y = a + BX with B an invertible matrix, then by the change-of-variables formula, the density of Y is

f(y) = (1/((2π)^{k/2} det(Σ_Y)^{1/2})) exp( −(y − μ_Y)'Σ_Y^{-1}(y − μ_Y)/2 ),   y ∈ R^k,

where μ_Y = a + Bμ and Σ_Y = BΣB', and where we used the fact that det(BΣB')^{1/2} = det(Σ)^{1/2} |det(B)|. This shows that linear transformations of normals are also normal.
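This is also how multivariate normal draws are generated in practice: take Z ~ N(0, I_k) and set X = μ + BZ with BB' = Σ (for example, B from a Cholesky factorization), so that X ~ N(μ, Σ). A short sketch (Python/NumPy, with illustrative parameters):

```python
import numpy as np

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
B = np.linalg.cholesky(Sigma)            # one matrix square root: B B' = Sigma

rng = np.random.default_rng(5)
Z = rng.standard_normal((100_000, 2))    # Z ~ N(0, I_2)
X = mu + Z @ B.T                         # X = mu + B Z  ~  N(mu, Sigma)

print(X.mean(axis=0))                    # approximately mu
print(np.cov(X, rowvar=False))           # approximately Sigma
```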
Let X ~ N(0, I_r) and set Q = X'X. We show at the end of this section that Q has density

f(y) = (1/(Γ(r/2) 2^{r/2})) y^{r/2 − 1} exp(−y/2),   y ≥ 0,     (16.11)

and is known as the chi-square density with r degrees of freedom, denoted χ²_r. Its mean and variance are μ = r and σ² = 2r. A useful result is:

Theorem 16.8.1 If Z ~ N(0, A) with A > 0, q × q, then Z'A^{-1}Z ~ χ²_q.

Let Z ~ N(0, 1) and Q ~ χ²_r be independent. Set

t_r = Z / √(Q/r).

We show at the end of this section that the density of t_r is

f(x) = Γ((r + 1)/2) / ( √(πr) Γ(r/2) (1 + x²/r)^{(r+1)/2} )     (16.12)

and is known as the student's t distribution with r degrees of freedom.


Proof of (16.11). The MGF for the density (16.11) is

E exp(tQ) = ∫_0^∞ (1/(Γ(r/2) 2^{r/2})) y^{r/2 − 1} exp(ty) exp(−y/2) dy
          = (1 − 2t)^{−r/2},     (16.13)

where the second equality uses the fact that ∫_0^∞ y^{a−1} exp(−by) dy = b^{−a} Γ(a), which can be found by applying change-of-variables to the gamma function. For Z ~ N(0, 1) the distribution of Z² is

P(Z² ≤ y) = 2 P(0 ≤ Z ≤ √y)
          = 2 ∫_0^{√y} (1/√(2π)) exp(−x²/2) dx
          = ∫_0^{y} (1/(Γ(1/2) 2^{1/2})) s^{−1/2} exp(−s/2) ds,

using the change-of-variables s = x² and the fact Γ(1/2) = √π. Thus the density of Z² is (16.11) with r = 1. From (16.13), we see that the MGF of Z² is (1 − 2t)^{−1/2}. Since we can write Q = X'X = ∑_{j=1}^{r} Z_j², where the Z_j are independent N(0, 1), (16.6) can be used to show that the MGF of Q is (1 − 2t)^{−r/2}, which we showed in (16.13) is the MGF of the density (16.11).
Proof of Theorem 16.8.1. The fact that A > 0 means that we can write A = CC' where C is non-singular. Then A^{-1} = C^{-1}'C^{-1} and

C^{-1}Z ~ N(0, C^{-1}AC^{-1}') = N(0, C^{-1}CC'C^{-1}') = N(0, I_q).

Thus

Z'A^{-1}Z = Z'C^{-1}'C^{-1}Z = (C^{-1}Z)'(C^{-1}Z) ~ χ²_q.

Proof of (16.12). Using the simple law of iterated expectations, t_r has distribution function

F(x) = P( Z/√(Q/r) ≤ x )
     = P( Z ≤ x √(Q/r) )
     = E[ P( Z ≤ x √(Q/r) | Q ) ]
     = E[ Φ( x √(Q/r) ) ].

Thus its density is

f(x) = d/dx E[ Φ( x √(Q/r) ) ]
     = E[ φ( x √(Q/r) ) √(Q/r) ]
     = ∫_0^∞ (1/√(2π)) exp(−qx²/(2r)) √(q/r) (1/(Γ(r/2) 2^{r/2})) q^{r/2 − 1} exp(−q/2) dq
     = [ Γ((r + 1)/2) / ( √(πr) Γ(r/2) ) ] (1 + x²/r)^{−(r+1)/2}.

16.9 Maximum Likelihood


If the distribution of Y_i is F(y; θ) where F is a known distribution function and θ ∈ Θ is an unknown m × 1 vector, we say that the distribution is parametric and that θ is the parameter of the distribution F. The space Θ is the set of permissible values for θ. In this setting the method of maximum likelihood is the appropriate technique for estimation and inference on θ.
If the distribution F is continuous then the density of Y_i can be written as f(y; θ) and the joint density of a random sample Ỹ = (Y_1, ..., Y_n) is

f_n(Ỹ; θ) = ∏_{i=1}^{n} f(Y_i; θ).

The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of θ. The log-likelihood function is its natural log

L_n(θ) = ∑_{i=1}^{n} ln f(Y_i; θ).

If the distribution F is discrete, the likelihood and log-likelihood are constructed by setting f(y; θ) = P(Y = y; θ).
Define the Hessian

H = −E[ ∂²/∂θ∂θ' ln f(Y_i, θ_0) ]     (16.14)

and the outer product matrix

Ω = E[ (∂/∂θ ln f(Y_i, θ_0)) (∂/∂θ ln f(Y_i, θ_0))' ].     (16.15)

Two important features of the likelihood are

Theorem 16.9.1

E[ ∂/∂θ ln f(Y_i, θ) ] |_{θ=θ_0} = 0     (16.16)

H = Ω ≡ I_0     (16.17)

The matrix I_0 is called the information, and the equality (16.17) is often called the information matrix equality.

Theorem 16.9.2 Cramer-Rao Lower Bound. If θ̃ is an unbiased estimator of θ ∈ R, then Var(θ̃) ≥ (nI_0)^{-1}.

The Cramer-Rao Theorem gives a lower bound for estimation. However, the restriction to unbiased estimators means that the theorem has little direct relevance for finite sample efficiency.
The maximum likelihood estimator or MLE θ̂ is the parameter value which maximizes the likelihood (equivalently, which maximizes the log-likelihood). We can write this as

θ̂ = argmax_{θ∈Θ} L_n(θ).

In some simple cases, we can find an explicit expression for θ̂ as a function of the data, but these cases are rare. More typically, the MLE θ̂ must be found by numerical methods.
Why do we believe that the MLE θ̂ is estimating the parameter θ? Observe that when standardized, the log-likelihood is a sample average

(1/n) L_n(θ) = (1/n) ∑_{i=1}^{n} ln f(Y_i; θ) →_p E ln f(Y_i; θ) ≡ L(θ).

As the MLE θ̂ maximizes the left-hand-side, we can see that it is an estimator of the maximizer of the right-hand-side. The first-order condition for the latter problem is

0 = ∂/∂θ L(θ) = E[ ∂/∂θ ln f(Y_i; θ) ],

which holds at θ = θ_0 by (16.16). In fact, under conventional regularity conditions, θ̂ is consistent for this value, θ̂ →_p θ_0 as n → ∞.

Theorem 16.9.3 Under regularity conditions, √n(θ̂ − θ_0) →_d N(0, I_0^{-1}).

Thus in large samples, the approximate variance of the MLE is (nI_0)^{-1}, which is the Cramer-Rao lower bound. Thus in large samples the MLE has approximately the best possible variance. Therefore the MLE is called asymptotically efficient.
Typically, to estimate the asymptotic variance of the MLE we use an estimate based on the Hessian formula (16.14),

Ĥ = −(1/n) ∑_{i=1}^{n} ∂²/∂θ∂θ' ln f(Y_i, θ̂).     (16.18)

We then set Î_0^{-1} = Ĥ^{-1}. Asymptotic standard errors for θ̂ are then the square roots of the diagonal elements of n^{-1} Î_0^{-1}.
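As a concrete sketch of this recipe (Python with NumPy/SciPy; the exponential model and simulated data are made up for illustration and are not from the text), the MLE can be obtained by numerically minimizing the negative log-likelihood, with a standard error from a numerical Hessian as in (16.18).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
y = rng.exponential(scale=2.0, size=500)      # simulated data, true theta = 2

def neg_loglik(theta):
    th = theta[0]
    return -np.sum(-np.log(th) - y / th)      # -L_n(theta) for f(y; theta) = exp(-y/theta)/theta

res = minimize(neg_loglik, x0=[1.0], method="BFGS")
theta_hat = res.x[0]                          # MLE (analytically, the sample mean)

# Numerical second derivative of -L_n at theta_hat; this equals n * H_hat in (16.18)
eps = 1e-5
h = (neg_loglik([theta_hat + eps]) - 2 * neg_loglik([theta_hat])
     + neg_loglik([theta_hat - eps])) / eps**2
se = np.sqrt(1.0 / h)                         # square root of (n * I_hat_0)^{-1}
print(theta_hat, se)
```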

Sometimes a parametric density function f(y; θ) is used to approximate the true unknown density f(y), but it is not literally believed that the model f(y; θ) is necessarily the true density. In this case, we refer to L_n(θ̂) as a quasi-likelihood and its maximizer θ̂ as a quasi-MLE or QMLE.
In this case there is not a "true" value of the parameter θ. Instead we define the pseudo-true value θ_0 as the maximizer of

E ln f(Y_i; θ) = ∫ f(y) ln f(y; θ) dy,

which is the same as the minimizer of

KLIC = ∫ f(y) ln( f(y) / f(y; θ) ) dy,

the Kullback-Leibler information distance between the true density f(y) and the parametric density f(y; θ). Thus the QMLE θ_0 is the value which makes the parametric density "closest" to the true value according to this measure of distance. The QMLE is consistent for the pseudo-true value, but has a different covariance matrix than in the pure MLE case, since the information matrix equality (16.17) does not hold. A minor adjustment to Theorem 16.9.3 yields the asymptotic distribution of the QMLE:

√n(θ̂ − θ_0) →_d N(0, V),   V = H^{-1} Ω H^{-1}.

The moment estimator for V is

V̂ = Ĥ^{-1} Ω̂ Ĥ^{-1},

where Ĥ is given in (16.18) and

Ω̂ = (1/n) ∑_{i=1}^{n} (∂/∂θ ln f(Y_i, θ̂)) (∂/∂θ ln f(Y_i, θ̂))'.

Asymptotic standard errors (sometimes called QMLE standard errors) are then the square roots of the diagonal elements of n^{-1} V̂.
Proof of Theorem 16.9.1. To see (16.16),

E[ ∂/∂θ ln f(Y_i; θ) ] |_{θ=θ_0} = ∫ (∂/∂θ ln f(y; θ)) |_{θ=θ_0} f(y; θ_0) dy
                                 = ∫ (∂/∂θ f(y; θ)) (f(y; θ_0)/f(y; θ)) |_{θ=θ_0} dy
                                 = ∫ (∂/∂θ f(y; θ)) |_{θ=θ_0} dy
                                 = (∂/∂θ) 1 |_{θ=θ_0} = 0.

Similarly, we can show that

E[ (∂²/∂θ∂θ' f(Y_i; θ_0)) / f(Y_i; θ_0) ] = 0.

By direct computation,

∂²/∂θ∂θ' ln f(Y_i; θ_0) = (∂²/∂θ∂θ' f(Y_i; θ_0)) / f(Y_i; θ_0) − (∂/∂θ f(Y_i; θ_0)) (∂/∂θ f(Y_i; θ_0))' / f(Y_i; θ_0)²
                        = (∂²/∂θ∂θ' f(Y_i; θ_0)) / f(Y_i; θ_0) − (∂/∂θ ln f(Y_i; θ_0)) (∂/∂θ ln f(Y_i; θ_0))'.

Taking expectations yields (16.17).

Proof of Cramer-Rao Lower Bound. Let

S = ∂/∂θ ln f_n(Ỹ; θ_0) = ∑_{i=1}^{n} ∂/∂θ ln f(Y_i; θ_0),

which by Theorem 16.9.1 has mean zero and variance nH. Write the estimator θ̃ = θ̃(Ỹ) as a function of the data. Since θ̃ is unbiased, for any θ,

θ = E θ̃ = ∫ θ̃(ỹ) f_n(ỹ; θ) dỹ,

where ỹ = (y_1, ..., y_n). Differentiating with respect to θ and evaluating at θ_0 yields

1 = ∫ θ̃(ỹ) (∂/∂θ) f_n(ỹ; θ) dỹ = ∫ θ̃(ỹ) (∂/∂θ ln f_n(ỹ; θ)) f_n(ỹ; θ_0) dỹ = E(θ̃ S).

By the Cauchy-Schwarz inequality

1 = ( E(θ̃ S) )² ≤ Var(S) Var(θ̃),

so

Var(θ̃) ≥ 1/Var(S) = 1/(nH).

Proof of Theorem 16.9.3. Taking the first-order condition for maximization of L_n(θ), and making a first-order Taylor series expansion,

0 = ∂/∂θ L_n(θ) |_{θ=θ̂}
  = ∑_{i=1}^{n} ∂/∂θ ln f(Y_i; θ̂)
  ≃ ∑_{i=1}^{n} ∂/∂θ ln f(Y_i; θ_0) + ∑_{i=1}^{n} ∂²/∂θ∂θ' ln f(Y_i; θ_n) (θ̂ − θ_0),

where θ_n lies on a line segment joining θ̂ and θ_0. (Technically, the specific value of θ_n varies by row in this expansion.) Rewriting this equation, we find

θ̂ − θ_0 = ( −∑_{i=1}^{n} ∂²/∂θ∂θ' ln f(Y_i; θ_n) )^{-1} ( ∑_{i=1}^{n} ∂/∂θ ln f(Y_i; θ_0) ).

Since ∂/∂θ ln f(Y_i; θ_0) is mean-zero with covariance matrix Ω, an application of the CLT yields

(1/√n) ∑_{i=1}^{n} ∂/∂θ ln f(Y_i; θ_0) →_d N(0, Ω).

The analysis of the sample Hessian is somewhat more complicated due to the presence of θ_n. Let H(θ) = −E[ ∂²/∂θ∂θ' ln f(Y_i; θ) ]. If it is continuous in θ, then since θ_n →_p θ_0 we find H(θ_n) →_p H and so

−(1/n) ∑_{i=1}^{n} ∂²/∂θ∂θ' ln f(Y_i; θ_n) = (1/n) ∑_{i=1}^{n} ( −∂²/∂θ∂θ' ln f(Y_i; θ_n) − H(θ_n) ) + H(θ_n)
                                          →_p H

by an application of a uniform WLLN. Together,

√n(θ̂ − θ_0) →_d H^{-1} N(0, Ω) = N(0, H^{-1}ΩH^{-1}) = N(0, H^{-1}),

the final equality using Theorem 16.9.1.

Chapter 17

Appendix D: Asymptotic Theory

17.1 Inequalities
Triangle inequality.

|X + Y| ≤ |X| + |Y|.

C_r inequality.

|X + Y|^r ≤ |X|^r + |Y|^r,   0 < r ≤ 1
|X + Y|^r ≤ 2^{r−1} ( |X|^r + |Y|^r ),   r ≥ 1.

Proofs of the following statements are at the end of this section.
Jensen's Inequality. If g(·) : R → R is convex, then

g(E(X)) ≤ E(g(X)).

L_r Norm: ‖X‖_r = (E|X|^r)^{1/r}.

L_r inequality. If r ≤ p, then ‖X‖_r ≤ ‖X‖_p.

Holder's Inequality. If p > 1 and q > 1 and 1/p + 1/q = 1, then

E|XY| ≤ ‖X‖_p ‖Y‖_q.

Cauchy-Schwarz Inequality.

E|XY| ≤ ‖X‖_2 ‖Y‖_2.

This is Holder's inequality with p = q = 2.

Minkowski's Inequality. For p ≥ 1,

‖X + Y‖_p ≤ ‖X‖_p + ‖Y‖_p.

Markov's Inequality. For any strictly increasing function g(X) ≥ 0,

P(g(X) > α) ≤ α^{-1} E g(X).

Proof of Jensen's Inequality. Let a + bx be the tangent line to g(x) at x = EX. Since g(x) is convex, tangent lines lie below it. So for all x, g(x) ≥ a + bx, yet g(EX) = a + bEX since the curve is tangent at EX. Applying expectations, Eg(X) ≥ a + bEX = g(EX), as stated.

Proof of L_r Inequality. Let Y = |X|^r and g(x) = x^{p/r}, which is convex. By Jensen's inequality, g(EY) ≤ Eg(Y), so

(E|X|^r)^{p/r} ≤ E(|X|^r)^{p/r} = E|X|^p.

Raised to the 1/p power, this is the inequality.

Proof of Holder's Inequality. By renormalization, without loss of generality we can assume E|X|^p = 1 and E|Y|^q = 1, so that we need to show E|XY| ≤ 1. By the theorem of geometric means, for A > 0 and B > 0,

A^{1/p} B^{1/q} ≤ (1/p) A + (1/q) B,     (17.1)

so

E|XY| = E[ (|X|^p)^{1/p} (|Y|^q)^{1/q} ] ≤ E[ (1/p)|X|^p + (1/q)|Y|^q ] = 1/p + 1/q = 1,

as needed.
We now show (17.1). Define a random variable T which takes the value ln A with probability 1/p, and the value ln B with probability 1/q. The exponential function is convex, so Jensen's inequality yields

A^{1/p} B^{1/q} = exp( (1/p) ln A + (1/q) ln B )
              = exp(E(T))
              ≤ E(exp(T))
              = (1/p) A + (1/q) B,

which is (17.1) as needed.


Proof of Minkowski's Inequality. If p = 1, the triangle inequality yields |X + Y| ≤ |X| + |Y|, and then take expectations.
For p > 1, define its conjugate q = p/(p − 1) (so 1/p + 1/q = 1). By the triangle inequality and Holder's inequality,

E|X + Y|^p = E( |X + Y|^{p−1} |X + Y| )
           ≤ E( |X + Y|^{p−1} |X| ) + E( |X + Y|^{p−1} |Y| )
           ≤ ‖ |X + Y|^{p−1} ‖_q ‖X‖_p + ‖ |X + Y|^{p−1} ‖_q ‖Y‖_p
           = (E|X + Y|^p)^{1 − 1/p} ( ‖X‖_p + ‖Y‖_p ),

where the final equality holds since (p − 1)q = p. Multiplying both sides by (E|X + Y|^p)^{1/p − 1} yields the result.

Proof of Markov's Inequality. Set Y = g(X), and let 1{·} denote the indicator function. For simplicity suppose that Y has density f(y). Then

α P(Y > α) = α E 1{Y > α} = ∫ α 1{y > α} f(y) dy = ∫_{y>α} α f(y) dy ≤ ∫_{y>α} y f(y) dy ≤ ∫_{−∞}^{∞} y f(y) dy = E(Y),

the second-to-last inequality using the region of integration {y > α}. Hence P(Y > α) ≤ α^{-1} E(Y) and we are done.

17.2 Convergence in Probability


We say that Z_n converges in probability to Z as n → ∞, denoted Z_n →_p Z as n → ∞, if for all δ > 0,

lim_{n→∞} P(|Z_n − Z| > δ) = 0.

This is a probabilistic way of generalizing the mathematical definition of a limit.
A set of random vectors {X_1, ..., X_n} are independent and identically distributed or iid if they are mutually independent and are drawn from a common distribution F.

Weak Law of Large Numbers (WLLN). If X_i ∈ R^k is iid and E|X_i| < ∞, then as n → ∞

X̄_n = (1/n) ∑_{i=1}^{n} X_i →_p E(X).

Proof: Without loss of generality, we can set E(X) = 0 (by recentering X_i on its expectation). We need to show that for all δ > 0 and η > 0 there is some N < ∞ so that for all n ≥ N, P(|X̄_n| > δ) ≤ η. Fix δ and η. Set ε = δη/3. Pick C < ∞ large enough so that

E( |X| 1(|X| > C) ) ≤ ε     (17.2)

(where 1(·) is the indicator function), which is possible since E|X| < ∞. Define the random vectors

W_i = X_i 1(|X_i| ≤ C) − E( X_i 1(|X_i| ≤ C) )
Z_i = X_i 1(|X_i| > C) − E( X_i 1(|X_i| > C) ).

By the triangle inequality, Jensen's inequality and (17.2),

E|Z̄_n| ≤ E|Z_i|
       ≤ E|X_i| 1(|X_i| > C) + |E( X_i 1(|X_i| > C) )|
       ≤ 2 E|X_i| 1(|X_i| > C)
       ≤ 2ε.     (17.3)

By Jensen's inequality, the fact that the W_i are iid and mean zero, and the bound |W_i| ≤ 2C,

(E|W̄_n|)² ≤ E|W̄_n|²
          = E W_i² / n
          ≤ 4C²/n
          ≤ ε²,     (17.4)

the final inequality holding for n ≥ 4C²/ε² = 36C²/(δ²η²).
Finally, by Markov's inequality, the fact that X̄_n = W̄_n + Z̄_n, the triangle inequality, (17.3) and (17.4),

P(|X̄_n| > δ) ≤ E|X̄_n| / δ ≤ ( E|W̄_n| + E|Z̄_n| ) / δ ≤ 3ε/δ = η,

the equality by the definition of ε. We have shown that for any δ > 0 and η > 0, then for all n ≥ 36C²/(δ²η²), P(|X̄_n| > δ) ≤ η, as needed.

17.3 Almost Sure Convergence


We say that Z_n converges almost surely or with probability one to Z as n → ∞, denoted Z_n →_{a.s.} Z as n → ∞, if

P( lim_{n→∞} Z_n = Z ) = 1.

Equivalently, for every ε > 0,

P( lim_{n→∞} |Z_n − Z| < ε ) = 1.

Almost sure convergence is stronger than convergence in probability. For example, consider the sequence of discrete random variables Z_n with

P(Z_n = 1) = 1/n
P(Z_n = 0) = 1 − 1/n.

You can see that Z_n →_p 0, yet Z_n does not converge almost surely.

Strong Law of Large Numbers (SLLN). If {X_1, ..., X_n} are iid with E|X| < ∞ and EX = μ, then as n → ∞

X̄_n = (1/n) ∑_{i=1}^{n} X_i →_{a.s.} μ.

To show the SLLN, we need some intermediate results. The first is a maximal form of Markov's inequality.

Theorem 3 (Kolmogorov's Inequality) If X_1, ..., X_n are independent with EX_i = 0 and S_j = ∑_{i=1}^{j} X_i, then for any ε > 0,

P( max_{1≤i≤n} |S_i| > ε ) ≤ ε^{-2} ∑_{i=1}^{n} EX_i².     (17.5)

Proof. Since

E(S_n | X_1, ..., X_i) = ∑_{j=1}^{n} E(X_j | X_1, ..., X_i) = ∑_{j=1}^{i} X_j = S_i,

then by Jensen's inequality

S_i² = |E(S_n | X_1, ..., X_i)|² ≤ E(S_n² | X_1, ..., X_i).     (17.6)

Let I_{i−1} = 1{ S_i² > ε², max_{j<i} S_j² ≤ ε² }; these are indicators of disjoint events. Then

ε² P( max_{1≤i≤n} |S_i| > ε ) = ε² P( max_{1≤i≤n} S_i² > ε² )
    = ε² ∑_{i=1}^{n} P( I_{i−1} = 1 )
    = ε² ∑_{i=1}^{n} P( I_{i−1} S_i² > ε² )
    ≤ ∑_{i=1}^{n} E( I_{i−1} S_i² )
    ≤ ∑_{i=1}^{n} E( I_{i−1} E(S_n² | X_1, ..., X_i) )
    = ∑_{i=1}^{n} E( I_{i−1} S_n² )
    ≤ E( S_n² )
    = ∑_{i=1}^{n} EX_i².

The fourth line is Markov's inequality, the fifth is (17.6), the sixth uses the conditioning theorem and the law of iterated expectations.
The next two are summation results.

Toeplitz Lemma. Suppose x_n → x with |x_n − x| ≤ B and for a_j ≥ 0, ∑_{j=1}^{n} a_j → ∞ as n → ∞. Then as n → ∞

( ∑_{j=1}^{n} a_j x_j ) / ( ∑_{j=1}^{n} a_j ) → x.

Proof: Fix ε > 0. Find N_1 < ∞ such that |x_n − x| ≤ ε for all n ≥ N_1. Then find N_2 < ∞ such that ∑_{j=1}^{N_1} a_j B ≤ ε ∑_{j=1}^{N_2} a_j. Then for all n > N_2,

| ( ∑_{j=1}^{n} a_j x_j ) / ( ∑_{j=1}^{n} a_j ) − x | = | ∑_{j=1}^{N_1} a_j (x_j − x) / ∑_{j=1}^{n} a_j + ∑_{j=N_1+1}^{n} a_j (x_j − x) / ∑_{j=1}^{n} a_j | ≤ 2ε.

Kronecker Lemma. If b_n is increasing with b_n → ∞, and S_n = ∑_{j=1}^{n} x_j → x, then as n → ∞

(1/b_n) ∑_{j=1}^{n} b_j x_j → 0.

Proof: Set a_j = b_j − b_{j−1} ≥ 0 (with b_0 = 0). Then

∑_{j=1}^{n} b_j x_j = ∑_{j=1}^{n} b_j S_j − ∑_{j=1}^{n} b_j S_{j−1}
                    = ∑_{j=1}^{n} b_j S_j − ∑_{j=1}^{n} b_{j−1} S_{j−1} − ∑_{j=1}^{n} a_j S_{j−1}
                    = b_n S_n − ∑_{j=1}^{n} a_j S_{j−1}.

Hence as n → ∞

(1/b_n) ∑_{j=1}^{n} b_j x_j = S_n − ( ∑_{j=1}^{n} a_j S_{j−1} ) / ( ∑_{j=1}^{n} a_j ) → x − x = 0,

using the Toeplitz Lemma.


We now complete the proof of the SLLN.
Proof of SLLN: Without loss of generality,
Pn assume EX = 0: To simplify the proof, we assume
that 2 1
< 1: We rst show that Sn = i=1 i Xi converges to a nite random limit almost surely
as n ! 1. This occurs if and only if Sj Sk ! 0 as j; k ! 1: Indeed, for each " > 0; using
Kolmogorov's inequality (17.5).
1
!
[
P fjSm+k Sm j "g = lim P max jSm+k Sm j "
n!1 1 k n
k=1
m+n
X 2
1 Xi
2
lim E
" n!1 i
i=m+1
2 1
X
2
= i
"2
i=m+1
! 0
P
as m ! 1 since 1 i=1 i
2 < 1.
Pn
We now apply the Kronecker Lemma with bi = i and xi = i 1X
i. Since i=1 xi converges as
n ! 1 almost surely, it follows that
n n
1X 1 X
Xn = Xi = bi xi ! 0
n bn
i=1 i=1

almost surely.

17.4 Convergence in Distribution
Let Z_n be a random variable with distribution F_n(x) = P(Z_n ≤ x). We say that Z_n converges in distribution to Z as n → ∞, denoted Z_n →_d Z, where Z has distribution F(x) = P(Z ≤ x), if for all x at which F(x) is continuous, F_n(x) → F(x) as n → ∞.

Central Limit Theorem (CLT). If X_i ∈ R^k is iid and E|X_i|² < ∞, then as n → ∞

√n ( X̄_n − μ ) = (1/√n) ∑_{i=1}^{n} (X_i − μ) →_d N(0, V),

where μ = EX and V = E(X − μ)(X − μ)'.
Proof: Without loss of generality, it is sufficient to consider the case μ = 0 and V = I_k. For λ ∈ R^k, let C(λ) = E exp(iλ'X) denote the characteristic function of X and set c(λ) = ln C(λ). Then observe

∂/∂λ C(λ) = iE( X exp(iλ'X) )
∂²/∂λ∂λ' C(λ) = i²E( XX' exp(iλ'X) ),

so when evaluated at λ = 0,

C(0) = 1
∂/∂λ C(0) = iE(X) = 0
∂²/∂λ∂λ' C(0) = −E(XX') = −I_k.

Furthermore,

c_λ(λ) = ∂/∂λ c(λ) = C(λ)^{-1} ∂/∂λ C(λ)
c_{λλ}(λ) = ∂²/∂λ∂λ' c(λ) = C(λ)^{-1} ∂²/∂λ∂λ' C(λ) − C(λ)^{-2} ( ∂/∂λ C(λ) ) ( ∂/∂λ C(λ) )',

so when evaluated at λ = 0,

c(0) = 0
c_λ(0) = 0
c_{λλ}(0) = −I_k.

By a second-order Taylor series expansion of c(λ) about λ = 0,

c(λ) = c(0) + c_λ(0)'λ + (1/2) λ' c_{λλ}(λ*) λ = (1/2) λ' c_{λλ}(λ*) λ,     (17.7)

where λ* lies on the line segment joining 0 and λ.
We now compute C_n(λ) = E exp(iλ'√n X̄_n), the characteristic function of √n X̄_n. By the properties of the exponential function, the independence of the X_i, the definition of c(λ) and (17.7),

ln C_n(λ) = ln E exp( i (1/√n) ∑_{j=1}^{n} λ'X_j )
          = ln E ∏_{j=1}^{n} exp( i (1/√n) λ'X_j )
          = ln ∏_{j=1}^{n} E exp( i (1/√n) λ'X_j )
          = n c( λ/√n )
          = (1/2) λ' c_{λλ}(λ_n) λ,

where λ_n → 0 lies on the line segment joining 0 and λ/√n. Since c_{λλ}(λ_n) → c_{λλ}(0) = −I_k, we see that as n → ∞,

C_n(λ) → exp( −(1/2) λ'λ ),

the characteristic function of the N(0, I_k) distribution. This is sufficient to establish the theorem.
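A small Monte Carlo sketch (Python/NumPy, illustrative) makes the CLT concrete: standardized sample means of iid draws from a skewed (exponential) population are close to standard normal for moderate n.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 200, 50_000
mu, sigma = 1.0, 1.0                             # exponential(1): mean 1, variance 1

X = rng.exponential(scale=1.0, size=(reps, n))
Z = np.sqrt(n) * (X.mean(axis=1) - mu) / sigma   # sqrt(n)(Xbar_n - mu)

print(Z.mean(), Z.var())                         # approximately 0 and 1
print(np.mean(Z > 1.96))                         # approximately 0.025, the N(0,1) tail probability
```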

17.5 Asymptotic Transformations


Continuous Mapping Theorem 1 (CMT). If Z_n →_p c as n → ∞ and g(·) is continuous at c, then g(Z_n) →_p g(c) as n → ∞.
Proof: Since g is continuous at c, for all ε > 0 we can find a δ > 0 such that if |Z_n − c| < δ then |g(Z_n) − g(c)| ≤ ε. Recall that A ⊂ B implies P(A) ≤ P(B). Thus P(|g(Z_n) − g(c)| ≤ ε) ≥ P(|Z_n − c| < δ) → 1 as n → ∞ by the assumption that Z_n →_p c. Hence g(Z_n) →_p g(c) as n → ∞.

Continuous Mapping Theorem 2. If Z_n →_d Z as n → ∞ and g(·) is continuous, then g(Z_n) →_d g(Z) as n → ∞.

Delta Method: If √n(θ_n − θ_0) →_d N(0, Σ), where θ is m × 1 and Σ is m × m, and g(θ) : R^m → R^k, k ≤ m, then

√n ( g(θ_n) − g(θ_0) ) →_d N(0, g_θ Σ g_θ'),

where g_θ(θ) = ∂/∂θ' g(θ) and g_θ = g_θ(θ_0).

Proof: By a vector Taylor series expansion, for each element of g,

g_j(θ_n) = g_j(θ_0) + g_{jθ}(θ_{jn}) (θ_n − θ_0),

where θ_{jn} lies on the line segment between θ_n and θ_0 and therefore converges in probability to θ_0. It follows that a_{jn} = g_{jθ}(θ_{jn}) − g_{jθ} →_p 0. Stacking across elements of g, we find

√n ( g(θ_n) − g(θ_0) ) = (g_θ + a_n) √n(θ_n − θ_0) →_d g_θ N(0, Σ) = N(0, g_θ Σ g_θ').

Chapter 18

Appendix E: Numerical Optimization

Many econometric estimators are defined by an optimization problem of the form

θ̂ = argmin_{θ∈Θ} Q(θ)     (18.1)

where the parameter is θ ∈ Θ ⊂ R^m and the criterion function is Q(θ) : Θ → R. For example NLLS, GLS, MLE and GMM estimators take this form. In most cases, Q(θ) can be computed for given θ, but θ̂ is not available in closed form. In this case, numerical methods are required to obtain θ̂.

18.1 Grid Search


Many optimization problems are either one dimensional (m = 1) or involve one-dimensional optimization as a sub-problem (for example, a line search). Here we outline some standard approaches to one-dimensional optimization.
Grid Search. Let Θ = [a, b] be an interval. Suppose we want to find θ̂ with an error bounded by some ε > 0. Then set G = (b − a)/ε to be the number of gridpoints. Construct an equally spaced grid on the region [a, b] with G gridpoints, which is {θ(j) = a + j(b − a)/G : j = 0, ..., G}. At each point evaluate the criterion function and find the gridpoint which yields the smallest value of the criterion, which is θ(ĵ) where ĵ = argmin_{0≤j≤G} Q(θ(j)). This value θ(ĵ) is the gridpoint estimate of θ̂. If the grid is sufficiently fine to capture small oscillations in Q(θ), the approximation error is bounded by ε, that is, |θ(ĵ) − θ̂| ≤ ε. Plots of Q(θ(j)) against θ(j) can help diagnose errors in grid selection. This method is quite robust but potentially costly.
Two-Step Grid Search. The grid search method can be refined by a two-step execution. For an error bound of ε pick G so that G² = (b − a)/ε. For the first step define an equally spaced grid on the region [a, b] with G gridpoints, which is {θ(j) = a + j(b − a)/G : j = 0, ..., G}. At each point evaluate the criterion function and let ĵ = argmin_{0≤j≤G} Q(θ(j)). For the second step define an equally spaced grid on [θ(ĵ − 1), θ(ĵ + 1)] with G gridpoints, which is {θ'(k) = θ(ĵ − 1) + 2k(b − a)/G² : k = 0, ..., G}. Let k̂ = argmin_{0≤k≤G} Q(θ'(k)). The estimate of θ̂ is θ'(k̂). The advantage of the two-step method over a one-step grid search is that the number of function evaluations has been reduced from (b − a)/ε to 2√((b − a)/ε), which can be substantial. The disadvantage is that if the function Q(θ) is irregular, the first-step grid may not bracket θ̂, which thus would be missed.
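A minimal sketch of the two-step grid search (Python, with a made-up quadratic criterion for illustration): the first pass locates the best coarse gridpoint, and the second refines within its two neighbors.

```python
import numpy as np

def two_step_grid_search(Q, a, b, eps):
    """Two-step grid search for the minimizer of Q on [a, b] with error bound eps."""
    G = int(np.ceil(np.sqrt((b - a) / eps)))       # G^2 = (b - a)/eps
    grid1 = a + np.arange(G + 1) * (b - a) / G     # first-step grid
    j = int(np.argmin([Q(t) for t in grid1]))
    lo, hi = grid1[max(j - 1, 0)], grid1[min(j + 1, G)]
    grid2 = lo + np.arange(G + 1) * (hi - lo) / G  # second-step grid on [theta(j-1), theta(j+1)]
    k = int(np.argmin([Q(t) for t in grid2]))
    return grid2[k]

# Illustrative quadratic criterion minimized at theta = 1.5
Q = lambda theta: (theta - 1.5) ** 2
print(two_step_grid_search(Q, a=0.0, b=4.0, eps=1e-4))   # approximately 1.5
```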

18.2 Gradient Methods


Gradient Methods are iterative methods which produce a sequence θ_i : i = 1, 2, ... which are designed to converge to θ̂. All require the choice of a starting value θ_1, and all require the computation of the gradient of Q(θ),

g(θ) = ∂/∂θ Q(θ),

and some require the Hessian

H(θ) = ∂²/∂θ∂θ' Q(θ).

If the functions g(θ) and H(θ) are not analytically available, they can be calculated numerically. Take the j'th element of g(θ). Let δ_j be the j'th unit vector (zeros everywhere except for a one in the j'th row). Then for ε small

g_j(θ) ≃ [ Q(θ + δ_j ε) − Q(θ) ] / ε.

Similarly,

g_{jk}(θ) ≃ [ Q(θ + δ_j ε + δ_k ε) − Q(θ + δ_k ε) − Q(θ + δ_j ε) + Q(θ) ] / ε².

In many cases, numerical derivatives can work well but can be computationally costly relative to analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
Most gradient methods are a variant of Newton's method which is based on a quadratic approximation. By a Taylor's expansion for θ close to θ̂,

0 = g(θ̂) ≃ g(θ) + H(θ)(θ̂ − θ),

which implies

θ̂ = θ − H(θ)^{-1} g(θ).

This suggests the iteration rule

θ_{i+1} = θ_i − H(θ_i)^{-1} g(θ_i).

One problem with Newton's method is that it will send the iterations in the wrong direction if H(θ_i) is not positive definite. One modification to prevent this possibility is quadratic hill-climbing which sets

θ_{i+1} = θ_i − ( H(θ_i) + α_i I_m )^{-1} g(θ_i),

where α_i is set just above the smallest eigenvalue of H(θ_i) if H(θ) is not positive definite.
Another productive modification is to add a scalar steplength λ_i. In this case the iteration rule takes the form

θ_{i+1} = θ_i − D_i g_i λ_i     (18.2)

where g_i = g(θ_i) and D_i = H(θ_i)^{-1} for Newton's method and D_i = (H(θ_i) + α_i I_m)^{-1} for quadratic hill-climbing.
Allowing the steplength λ_i to be a free parameter allows for a line search, a one-dimensional optimization. To pick λ_i write the criterion function as a function of λ,

Q(λ) = Q(θ_i − D_i g_i λ),

a one-dimensional optimization problem. There are two common methods to perform a line search. A quadratic approximation evaluates the first and second derivatives of Q(λ) with respect to λ, and picks λ_i as the value minimizing this approximation. The half-step method considers the sequence λ = 1, 1/2, 1/4, 1/8, ... . Each value in the sequence is considered and the criterion Q(θ_i − D_i g_i λ) evaluated. If the criterion has improved over Q(θ_i), use this value, otherwise move to the next element in the sequence.
Newton's method does not perform well if Q(θ) is irregular, and it can be quite computationally costly if H(θ) is not analytically available. These problems have motivated alternative choices for the weight matrix D_i. These methods are called Quasi-Newton methods. Two popular methods are due to Davidson-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS).
Let

Δg_i = g_i − g_{i−1}
Δθ_i = θ_i − θ_{i−1}.

The DFP method sets

D_i = D_{i−1} + (Δθ_i Δθ_i')/(Δθ_i'Δg_i) − (D_{i−1} Δg_i Δg_i' D_{i−1})/(Δg_i' D_{i−1} Δg_i).

The BFGS method sets

D_i = D_{i−1} + (Δθ_i Δθ_i')/(Δθ_i'Δg_i) + (Δθ_i Δθ_i')/(Δθ_i'Δg_i)² · Δg_i' D_{i−1} Δg_i − (D_{i−1} Δg_i Δθ_i' + Δθ_i Δg_i' D_{i−1})/(Δθ_i'Δg_i).

For any of the gradient methods, the iterations continue until the sequence has converged in some sense. This can be defined by examining whether |θ_i − θ_{i−1}|, |Q(θ_i) − Q(θ_{i−1})| or |g(θ_i)| has become small.
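The sketch below (Python/NumPy, illustrative only) implements the basic Newton iteration with numerical derivatives and the half-step line search described above, applied to a simple two-parameter quadratic criterion; it is a toy version of the scheme, not production code.

```python
import numpy as np

def num_grad(Q, theta, eps=1e-6):
    base = Q(theta)                           # forward-difference gradient, as in the text
    return np.array([(Q(theta + eps * e) - base) / eps for e in np.eye(theta.size)])

def num_hess(Q, theta, eps=1e-4):
    g = lambda t: num_grad(Q, t, eps)
    H = np.array([(g(theta + eps * e) - g(theta)) / eps for e in np.eye(theta.size)])
    return (H + H.T) / 2                      # symmetrize the numerical Hessian

def newton(Q, theta, iters=50, tol=1e-8):
    for _ in range(iters):
        gi = num_grad(Q, theta)
        Di = np.linalg.inv(num_hess(Q, theta))
        for lam in [1.0, 0.5, 0.25, 0.125]:   # half-step line search
            trial = theta - lam * Di @ gi
            if Q(trial) < Q(theta):
                theta = trial
                break
        if np.max(np.abs(gi)) < tol:
            break
    return theta

# Illustrative criterion minimized at (1, 2)
Q = lambda t: (t[0] - 1.0) ** 2 + 2.0 * (t[1] - 2.0) ** 2 + 0.5 * (t[0] - 1.0) * (t[1] - 2.0)
print(newton(Q, np.array([5.0, -3.0])))       # approximately [1, 2]
```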

18.3 Derivative-Free Methods
All gradient methods can be quite poor in locating the global minimum when Q(θ) has several local minima. Furthermore, the methods are not well defined when Q(θ) is non-differentiable. In these cases, alternative optimization methods are required. One example is the simplex method of Nelder-Mead (1965).
A more recent innovation is the method of simulated annealing (SA). For a review see Goffe, Ferrier, and Rogers (1994). The SA method is a sophisticated random search. Like the
gradient methods, it relies on an iterative sequence. At each iteration, a random variable is drawn
and added to the current value of the parameter. If the resulting criterion is decreased, this new
value is accepted. If the criterion is increased, it may still be accepted depending on the extent of
the increase and another randomization. The latter property is needed to keep the algorithm from
selecting a local minimum. As the iterations continue, the variance of the random innovations
is shrunk. The SA algorithm stops when a large number of iterations is unable to improve the
criterion. The SA method has been found to be successful at locating global minima. The downside
is that it can take considerable computer time to execute.

Bibliography

[1] Aitken, A.C. (1935): \On least squares and linear combinations of observations," Proceedings
of the Royal Statistical Society, 55, 42-48.

[2] Akaike, H. (1973): \Information theory and an extension of the maximum likelihood prin-
ciple." In B. Petroc and F. Csake, eds., Second International Symposium on Information
Theory.

[3] Anderson, T.W. and H. Rubin (1949): \Estimation of the parameters of a single equation in
a complete system of stochastic equations," The Annals of Mathematical Statistics, 20, 46-63.

[4] Andrews, D.W.K. (1988): \Laws of large numbers for dependent non-identically distributed
random variables,' Econometric Theory, 4, 458-467.

[5] Andrews, D.W.K. (1993), \Tests for parameter instability and structural change with un-
known change point," Econometrica, 61, 821-856.

[6] Andrews, D.W.K. and M. Buchinsky: (2000): \A three-step method for choosing the number
of bootstrap replications," Econometrica, 68, 23-51.

[7] Andrews, D.W.K. and W. Ploberger (1994): \Optimal tests when a nuisance parameter is
present only under the alternative," Econometrica, 62, 1383-1414.

[8] Basmann, R. L. (1957): \A generalized classical method of linear estimation of coefficients in
a structural equation," Econometrica, 25, 77-83.

[9] Bekker, P.A. (1994): \Alternative approximations to the distributions of instrumental variable
estimators, Econometrica, 62, 657-681.

[10] Billingsley, P. (1968): Convergence of Probability Measures. New York: Wiley.

[11] Billingsley, P. (1979): Probability and Measure. New York: Wiley.

[12] Bose, A. (1988): \Edgeworth correction by bootstrap in autoregressions," Annals of Statistics,


16, 1709-1722.

[13] Breusch, T.S. and A.R. Pagan (1979): \The Lagrange multiplier test and its application to
model speci cation in econometrics," Review of Economic Studies, 47, 239-253.

[14] Brown, B.W. and W.K. Newey (2002): \GMM, efficient bootstrapping, and improved infer-
ence," Journal of Business and Economic Statistics.

[15] Carlstein, E. (1986): \The use of subseries methods for estimating the variance of a general
statistic from a stationary time series," Annals of Statistics, 14, 1171-1179.

[16] Chamberlain, G. (1987): \Asymptotic efficiency in estimation with conditional moment re-
strictions," Journal of Econometrics, 34, 305-334.

[17] Choi, I. and P.C.B. Phillips (1992): \Asymptotic and finite sample distribution theory for
IV estimators and tests in partially identified structural equations," Journal of Econometrics,
51, 113-150.

[18] Chow, G.C. (1960): \Tests of equality between sets of coefficients in two linear regressions,"
Econometrica, 28, 591-603.

[19] Davidson, J. (1994): Stochastic Limit Theory: An Introduction for Econometricians. Oxford:
Oxford University Press.

[20] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge
University Press.

[21] Dickey, D.A. and W.A. Fuller (1979): \Distribution of the estimators for autoregressive time
series with a unit root," Journal of the American Statistical Association, 74, 427-431.

[22] Donald S.G. and W.K. Newey (2001): \Choosing the number of instruments," Econometrica,
69, 1161-1191.

[23] Dufour, J.M. (1997): \Some impossibility theorems in econometrics with applications to
structural and dynamic models," Econometrica, 65, 1365-1387.

[24] Efron, B. (1979): \Bootstrap methods: Another look at the jackknife," Annals of Statistics,
7, 1-26.

[25] Efron, B. and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York: Chapman-
Hall.

[26] Eicker, F. (1963): \Asymptotic normality and consistency of the least squares estimators for
families of linear regressions," Annals of Mathematical Statistics, 34, 447-456.

[27] Engle, R.F. and C.W.J. Granger (1987): \Co-integration and error correction: Representa-
tion, estimation and testing," Econometrica, 55, 251-276.

[28] Frisch, R. and F. Waugh (1933): \Partial time regressions as compared with individual
trends," Econometrica, 1, 387-401.

[29] Gallant, A.F. and D.W. Nychka (1987): \Seminonparametric maximum likelihood estima-
tion," Econometrica, 55, 363-390.

[30] Gallant, A.R. and H. White (1988): A Unified Theory of Estimation and Inference for Non-
linear Dynamic Models. New York: Basil Blackwell.

[31] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): \Global optimization of statistical functions
with simulated annealing," Journal of Econometrics, 60, 65-99.

[32] Gauss, K.F. (1809): \Theoria motus corporum coelestium," in Werke, Vol. VII, 240-254.

[33] Granger, C.W.J. (1969): \Investigating causal relations by econometric models and cross-
spectral methods," Econometrica, 37, 424-438.

[34] Granger, C.W.J. (1981): \Some properties of time series data and their use in econometric
speci cation," Journal of Econometrics, 16, 121-130.

[35] Granger, C.W.J. and T. Teräsvirta (1993): Modelling Nonlinear Economic Relationships,
Oxford University Press, Oxford.

[36] Hall, A. R. (2000): \Covariance matrix estimation and the power of the overidentifying re-
strictions test," Econometrica, 68, 1517-1527,

[37] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.

[38] Hall, P. (1994): \Methodology and theory for the bootstrap," Handbook of Econometrics, Vol.
IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.

[39] Hall, P. and J.L. Horowitz (1996): \Bootstrap critical values for tests based on Generalized-
Method-of-Moments estimation," Econometrica, 64, 891-916.

[40] Hahn, J. (1996): \A note on bootstrapping generalized method of moments estimators,"


Econometric Theory, 12, 187-197.

[41] Hansen, B.E. (1992): \Efficient estimation and testing of cointegrating vectors in the presence
of deterministic trends," Journal of Econometrics, 53, 87-121.

[42] Hansen, B.E. (1996): \Inference when a nuisance parameter is not identified under the null
hypothesis," Econometrica, 64, 413-430.

[43] Hansen, L.P. (1982): \Large sample properties of generalized method of moments estimators,
Econometrica, 50, 1029-1054.

[44] Hansen, L.P., J. Heaton, and A. Yaron (1996): \Finite sample properties of some alternative
GMM estimators," Journal of Business and Economic Statistics, 14, 262-280.
[45] Hausman, J.A. (1978): \Specification tests in econometrics," Econometrica, 46, 1251-1271.
[46] Heckman, J. (1979): \Sample selection bias as a specification error," Econometrica, 47, 153-
161.
[47] Imbens, G.W. (1997): \One step estimators for over-identified generalized method of moments
models," Review of Economic Studies, 64, 359-383.
[48] Imbens, G.W., R.H. Spady and P. Johnson (1998): \Information theoretic approaches to
inference in moment condition models," Econometrica, 66, 333-357.
[49] Jarque, C.M. and A.K. Bera (1980): \Efficient tests for normality, homoskedasticity and serial
independence of regression residuals, Economic Letters, 6, 255-259.
[50] Johansen, S. (1988): \Statistical analysis of cointegrating vectors," Journal of Economic
Dynamics and Control, 12, 231-254.
[51] Johansen, S. (1991): \Estimation and hypothesis testing of cointegration vectors in the pres-
ence of linear trend," Econometrica, 59, 1551-1580.
[52] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Mod-
els, Oxford University Press.
[53] Johansen, S. and K. Juselius (1992): \Testing structural hypotheses in a multivariate cointe-
gration analysis of the PPP and the UIP for the UK," Journal of Econometrics, 53, 211-244.
[54] Kitamura, Y. (2001): \Asymptotic optimality and empirical likelihood for testing moment
restrictions," Econometrica, 69, 1661-1672.
[55] Kitamura, Y. and M. Stutzer (1997): \An information-theoretic alternative to generalized
method of moments," Econometrica, 65, 861-874..
[56] Kunsch, H.R. (1989): \The jackknife and the bootstrap for general stationary observations,"
Annals of Statistics, 17, 1217-1241.
[57] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): \Testing the null hypothesis
of stationarity against the alternative of a unit root: How sure are we that economic time
series have a unit root?" Journal of Econometrics, 54, 159-178.
[58] Lafontaine, F. and K.J. White (1986): \Obtaining any Wald statistic you want," Economics
Letters, 21, 35-40.
[59] Lovell, M.C. (1963): \Seasonal adjustment of economic time series," Journal of the American
Statistical Association, 58, 993-1010.

[60] MacKinnon, J.G. (1990): \Critical values for cointegration," in Engle, R.F. and C.W. Granger
(eds.) Long-Run Economic Relationships: Readings in Cointegration, Oxford, Oxford Univer-
sity Press.

[61] MacKinnon, J.G. and H. White (1985): \Some heteroskedasticity-consistent covariance matrix
estimators with improved nite sample properties," Journal of Econometrics, 29, 305-325.

[62] Magnus, J. R., and H. Neudecker (1988): Matrix Differential Calculus with Applications in
Statistics and Econometrics, New York: John Wiley and Sons.

[63] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.

[64] Nelder, J. and R. Mead (1965): \A simplex method for function minimization," Computer
Journal, 7, 308-313.

[65] Newey, W.K. and K.D. West (1987): \Hypothesis testing with efficient method of moments
estimation," International Economic Review, 28, 777-787.

[66] Owen, Art B. (1988): \Empirical likelihood ratio confidence intervals for a single functional,"
Biometrika, 75, 237-249.

[67] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.

[68] Phillips, P.C.B. (1989): \Partially identified econometric models," Econometric Theory, 5,
181-240.

[69] Phillips, P.C.B. and S. Ouliaris (1990): \Asymptotic properties of residual based tests for
cointegration," Econometrica, 58, 165-193.

[70] Politis, D.N. and J.P. Romano (1996): \The stationary bootstrap," Journal of the American
Statistical Association, 89, 1303-1313.

[71] Potscher, B.M. (1991): \Effects of model selection on inference," Econometric Theory, 7,
163-185.

[72] Qin, J. and J. Lawless (1994): \Empirical likelihood and general estimating equations," The
Annals of Statistics, 22, 300-325.

[73] Ramsey, J. B. (1969): \Tests for specification errors in classical linear least-squares regression
analysis," Journal of the Royal Statistical Society, Series B, 31, 350-371.

[74] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.

[75] Said, S.E. and D.A. Dickey (1984): \Testing for unit roots in autoregressive-moving average
models of unknown order," Biometrika, 71, 599-608.

[76] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.

[77] Sargan, J.D. (1958): \The estimation of economic relationships using instrumental variables,"
Econometrica, 26, 393-415.
[78] Sheather, S.J. and M.C. Jones (1991): \A reliable data-based bandwidth selection method
for kernel density estimation, Journal of the Royal Statistical Society, Series B, 53, 683-690.
[79] Shin, Y. (1994): \A residual-based test of the null of cointegration against the alternative of
no cointegration," Econometric Theory, 10, 91-115.
[80] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chap-
man and Hall.
[81] Sims, C.A. (1972): \Money, income and causality," American Economic Review, 62, 540-552.
[82] Sims, C.A. (1980): \Macroeconomics and reality," Econometrica, 48, 1-48.
[83] Staiger, D. and J.H. Stock (1997): \Instrumental variables regression with weak instruments,"
Econometrica, 65, 557-586.
[84] Stock, J.H. (1987): \Asymptotic properties of least squares estimators of cointegrating vec-
tors," Econometrica, 55, 1035-1056.
[85] Stock, J.H. (1991): \Confidence intervals for the largest autoregressive root in U.S. macroe-
conomic time series," Journal of Monetary Economics, 28, 435-460.
[86] Stock, J.H. and J.H. Wright (2000): \GMM with weak identification," Econometrica, 68,
1055-1096.
[87] Theil, H. (1953): \Repeated least squares applied to complete equation systems," The Hague,
Central Planning Bureau, mimeo.
[88] Tobin, J. (1958): \Estimation of relationships for limited dependent variables," Econometrica,
26, 24-36.
[89] Wald, A. (1943): \Tests of statistical hypotheses concerning several parameters when the
number of observations is large," Transactions of the American Mathematical Society, 54,
426-482.
[90] Wang, J. and E. Zivot (1998): \Inference on structural parameters in instrumental variables
regression with weak instruments," Econometrica, 66, 1389-1404.
[91] White, H. (1980): \A heteroskedasticity-consistent covariance matrix estimator and a direct
test for heteroskedasticity," Econometrica, 48, 817-838.
[92] White, H. (1984): Asymptotic Theory for Econometricians, Academic Press.
[93] Zellner, A. (1962): \An efficient method of estimating seemingly unrelated regressions, and
tests for aggregation bias," Journal of the American Statistical Association, 57, 348-368.
