
ECONOMETRICS

Bruce E. Hansen
© 2000, 2007¹

University of Wisconsin

www.ssc.wisc.edu/~bhansen

This Revision: January 18, 2007


Comments Welcome

¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents

1 Introduction 1
1.1 Economic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Observational Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Economic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Regression and Projection 3


2.1 Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Conditional Density and Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Regression Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4 Conditional Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.5 Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.6 Best Linear Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.7 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3 Least Squares Estimation 12


3.1 Random Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.5 Model in Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.6 Projection Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.7 Residual Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.8 Bias and Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.9 Gauss-Markov Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.10 Semiparametric Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.11 Multicollinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.12 Influential Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.13 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.14 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

4 Inference 27
4.1 Sampling Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3 Asymptotic Normality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4 Covariance Matrix Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.5 Alternative Covariance Matrix Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.6 Functions of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.7 t tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.8 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.9 Wald Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.10 F Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.11 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.12 Problems with Tests of NonLinear Hypotheses . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.13 Monte Carlo Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.14 Estimating a Wage Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.15 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.16 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

5 Additional Regression Topics 51


5.1 Generalized Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Testing for Heteroskedasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.3 Forecast Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.4 NonLinear Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.5 Least Absolute Deviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.6 Quantile Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.7 Testing for Omitted NonLinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.8 Omitted Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.9 Irrelevant Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.10 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.11 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

6 The Bootstrap 66
6.1 Definition of the Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.2 The Empirical Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.3 Nonparametric Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.4 Bootstrap Estimation of Bias and Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.5 Percentile Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.6 Percentile-t Equal-Tailed Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.7 Symmetric Percentile-t Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.8 Asymptotic Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.9 One-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.10 Symmetric Two-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.11 Percentile Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6.12 Bootstrap Methods for Regression Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
6.13 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

7 Generalized Method of Moments 77


7.1 Overidentified Linear Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.2 GMM Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.3 Distribution of GMM Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
7.4 Estimation of the Efficient Weight Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.5 GMM: The General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
7.6 Over-Identification Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
7.7 Hypothesis Testing: The Distance Statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.8 Conditional Moment Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
7.9 Bootstrap GMM Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

8 Empirical Likelihood 86
8.1 Non-Parametric Likelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.2 Asymptotic Distribution of EL Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
8.3 Overidentifying Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
8.4 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
8.5 Numerical Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
8.6 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

9 Endogeneity 92
9.1 Instrumental Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
9.2 Reduced Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
9.3 Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.4 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
9.5 Special Cases: IV and 2SLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
9.6 Bekker Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
9.7 Identification Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

9.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

10 Univariate Time Series 101


10.1 Stationarity and Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
10.2 Autoregressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
10.3 Stationarity of AR(1) Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10.4 Lag Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10.5 Stationarity of AR(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10.6 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
10.7 Asymptotic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
10.8 Bootstrap for Autoregressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
10.9 Trend Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
10.10 Testing for Omitted Serial Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
10.11 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
10.12 Autoregressive Unit Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
10.13 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

11 Multivariate Time Series 110


11.1 Vector Autoregressions (VARs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
11.2 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
11.3 Restricted VARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
11.4 Single Equation from a VAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
11.5 Testing for Omitted Serial Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
11.6 Selection of Lag Length in a VAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
11.7 Granger Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
11.8 Cointegration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
11.9 Cointegrated VARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

12 Limited Dependent Variables 115


12.1 Binary Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
12.2 Count Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
12.3 Censored Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
12.4 Sample Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

13 Panel Data 120


13.1 Individual-Effects Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
13.2 Fixed Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
13.3 Dynamic Panel Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

14 Nonparametrics 123
14.1 Kernel Density Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
14.2 Asymptotic MSE for Kernel Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

A Matrix Algebra 127


A.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
A.2 Matrix Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
A.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
A.4 Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
A.5 Rank and Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
A.6 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
A.7 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
A.8 Positive Definiteness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
A.9 Matrix Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
A.10 Kronecker Products and the Vec Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
A.11 Vector and Matrix Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

B Probability 135
B.1 Foundations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
B.2 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
B.3 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
B.4 Common Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
B.5 Multivariate Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
B.6 Conditional Distributions and Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
B.7 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
B.8 Normal and Related Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

C Asymptotic Theory 146


C.1 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
C.2 Weak Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
C.3 Convergence in Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
C.4 Asymptotic Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

D Maximum Likelihood 151

E Numerical Optimization 155


E.1 Grid Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
E.2 Gradient Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
E.3 Derivative-Free Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

Chapter 1

Introduction

Econometrics is the study of estimation and inference for economic models using economic data. Econometric theory concerns the study and development of tools and methods for applied econometric analysis. Applied econometrics concerns the application of these tools to economic data.

1.1 Economic Data


An econometric study requires data for analysis. The quality of the study will be largely determined
by the data available. There are three major types of economic data sets: cross-sectional, time-series, and
panel. They are distinguished by the dependence structure across observations.
Cross-sectional data sets are characterized by mutually independent observations. Surveys are a typical
source for cross-sectional data. The individuals surveyed may be persons, households, or corporations.
Time-series data is indexed by time. Typical examples include macroeconomic aggregates, prices and
interest rates. This type of data is characterized by serial dependence.
Panel data combines elements of cross-section and time-series. These data sets consist of surveys of a set of individuals, repeated over time. Each individual (person, household or corporation) is surveyed on multiple occasions.

1.2 Observational Data


A common econometric question is to quantify the impact of one set of variables on another variable.
For example, a concern in labor economics is the returns to schooling: the change in earnings induced by increasing a worker's education, holding other variables constant. Another issue of interest is the earnings gap between men and women.
Ideally, we would use experimental data to answer these questions. To measure the returns to schooling, an experiment might randomly divide children into groups, mandate different levels of education to the different groups, and then follow the children's wage path as they mature and enter the labor force. The differences between the groups could be attributed to the different levels of education. However, experiments such as this are infeasible, even immoral!
Instead, most economic data is observational. To continue the above example, what we observe (through data collection) is the level of a person's education and their wage. We can measure the joint distribution of these variables, and assess the joint dependence. But we cannot infer causality, as we are not able to manipulate one variable to see the direct effect on the other. For example, a person's level of education is (at least partially) determined by that person's choices and their achievement in education. These factors are likely to be affected by their personal abilities and attitudes towards work. The fact that a person is highly educated suggests a high level of ability. This is an alternative explanation for an observed positive correlation between educational levels and wages. High ability individuals do better in school, and therefore choose to attain higher levels of education, and their high ability is the fundamental reason for their high wages. The point is that multiple explanations are consistent with a positive correlation between schooling levels and wages. Knowledge of the joint distribution cannot distinguish between these explanations.
This discussion means that causality cannot be inferred from observational data alone. Causal inference requires identification, and this is based on strong assumptions. We will return to a discussion of some of these issues in Chapter 9.

1.3 Economic Data
Fortunately for economists, the development of the internet has provided a convenient forum for dis-
semination of economic data. Many large-scale economic datasets are available without charge from gov-
ernmental agencies. An excellent starting point is the Resources for Economists Data Links, available at
http://rfe.wustl.edu/Data/index.html.
Some other excellent data sources are listed below.
Bureau of Labor Statistics: http://www.bls.gov/
Federal Reserve Bank of St. Louis: http://research.stlouisfed.org/fred2/
Board of Governors of the Federal Reserve System: http://www.federalreserve.gov/releases/
National Bureau of Economic Research: http://www.nber.org/
US Census: http://www.census.gov/econ/www/
Current Population Survey (CPS): http://www.bls.census.gov/cps/cpsmain.htm
Survey of Income and Program Participation (SIPP): http://www.sipp.census.gov/sipp/
Panel Study of Income Dynamics (PSID): http://psidonline.isr.umich.edu/
U.S. Bureau of Economic Analysis: http://www.bea.doc.gov/
CompuStat: http://www.compustat.com/www/
International Financial Statistics (IFS): http://ifs.apdi.net/imf/

Chapter 2

Regression and Projection

2.1 Variables
The most commonly applied econometric tool is regression. This is used when the goal is to quantify the impact of one set of variables (the regressors, conditioning variables, or covariates) on another variable (the dependent variable). We let $y$ denote the dependent variable and $(x_1, x_2, \ldots, x_k)$ denote the $k$ regressors. It is convenient to write the set of regressors as a vector in $\mathbb{R}^k$:
$$
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \tag{2.1}
$$

Following mathematical convention, real numbers (elements of the real line $\mathbb{R}$) are written using lower case italics such as $y$, and vectors (elements of $\mathbb{R}^k$) by lower case bold italics such as $x$. Upper case bold italics such as $X$ will be used for matrices.
The random variables $(y, x)$ have a distribution $F$ which we call the population. This "population" is infinitely large. This abstraction can be a source of confusion as it does not correspond to a physical population in the real world. The distribution $F$ is unknown, and the goal of statistical inference is to learn about features of $F$ from the sample.
At this point in our analysis it is unimportant whether the observations $y$ and $x$ come from continuous or discrete distributions. For example, many regressors in econometric practice are binary, taking on only the values 0 and 1, and are typically called dummy variables.

2.2 Conditional Density and Mean


To study how the distribution of $y$ varies with the variables $x$ in the population, we start with $f(y \mid x)$, the conditional density of $y$ given $x$.
To illustrate, Figure 2.1 displays the density¹ of hourly wages for men and women, from the population of white non-military wage earners with a college degree and 10-15 years of potential work experience. These are conditional density functions: the density of hourly wages conditional on race, gender, education and experience. The two density curves show the effect of gender on the distribution of wages, holding the other variables constant.
While it is easy to observe that the two densities are unequal, it is useful to have numerical measures of the difference. An important summary measure is the conditional mean
$$
m(x) = E(y \mid x) = \int_{-\infty}^{\infty} y f(y \mid x)\, dy. \tag{2.2}
$$

In general, $m(x)$ can take any form, and exists so long as $E|y| < \infty$. In the example presented in Figure 2.1, the mean wage for men is \$27.22, and that for women is \$20.73. These are indicated in Figure 2.1 by the arrows drawn to the x-axis.

¹ These are nonparametric density estimates using a Gaussian kernel with the bandwidth selected by cross-validation. See Chapter 14. The data are from the 2004 Current Population Survey.

Figure 2.1: Wage Densities for White College Grads with 10-15 Years Work Experience

Take a closer look at the density functions displayed in Figure 2.1. You can see that the right tail of the density is much thicker than the left tail. These are asymmetric (skewed) densities, which is a common feature of wage distributions. When a distribution is skewed, the mean is not necessarily a good summary of the central tendency. In this context it is often convenient to transform the data by taking the (natural) logarithm. Figure 2.2 shows the density of log hourly wages for the same population, with mean log hourly wages drawn in with the arrows. The difference in the log mean wage between men and women is 0.30, which implies a 30% average wage difference for this population. This is a more robust measure of the typical wage gap between men and women than the difference in the untransformed wage means. For this reason, wage regressions typically use log wages as a dependent variable rather than the level of wages.
The comparison in Figure 2.1 is facilitated by the fact that the control variable (gender) is discrete. When the distribution of the control variable is continuous, then comparisons become more complicated. To illustrate, Figure 2.3 displays a scatter plot² of log wages against education levels. Assuming for simplicity that this is the true joint distribution, the solid line displays the conditional expectation of log wages varying with education. The conditional expectation function is close to linear; the dashed line is a linear projection approximation which will be discussed in Section 2.6. The main point to be learned from Figure 2.3 is that the conditional expectation describes the central tendency of the conditional distribution. Of particular interest to graduate students may be the observation that the difference between a B.A. and a Ph.D. degree in mean log hourly wages is 0.36, implying an average 36% difference in wage levels.

² White non-military male wage earners with 10-15 years of potential work experience.

2.3 Regression Equation


The regression error $e$ is defined to be the difference between $y$ and its conditional mean (2.2) evaluated at the observed value of $x$:
$$
e = y - m(x).
$$
By construction, this yields the formula
$$
y = m(x) + e. \tag{2.3}
$$

Theorem 2.3.1 Properties of the regression error $e$:

1. $E(e \mid x) = 0.$
2. $E(e) = 0.$

Figure 2.2: Log Wage Densities

3. $E(h(x)\, e) = 0$ for any function $h(\cdot).$
4. $E(xe) = 0.$

To show the first statement, by the definition of $e$ and the linearity of conditional expectations,
$$
\begin{aligned}
E(e \mid x) &= E\left((y - m(x)) \mid x\right) \\
&= E(y \mid x) - E(m(x) \mid x) \\
&= m(x) - m(x) \\
&= 0.
\end{aligned}
$$
The remaining parts of the Theorem are left as an exercise.

The equations
$$
y = m(x) + e
$$
$$
E(e \mid x) = 0
$$
are often stated jointly as the regression framework. It is important to understand that this is a framework, not a model, because no restrictions have been placed on the joint distribution of the data. These equations hold true by definition. A regression model imposes further restrictions on the joint distribution; most typically, restrictions on the permissible class of regression functions $m(x)$.
The conditional mean also has the property of being the best predictor of $y$, in the sense of achieving the lowest mean squared error. To see this, let $g(x)$ be an arbitrary predictor of $y$ given $x$. The expected squared error using this prediction function is
$$
\begin{aligned}
E(y - g(x))^2 &= E(e + m(x) - g(x))^2 \\
&= E e^2 + 2E\left(e\,(m(x) - g(x))\right) + E(m(x) - g(x))^2 \\
&= E e^2 + E(m(x) - g(x))^2 \\
&\geq E e^2
\end{aligned}
$$
where the second equality uses Theorem 2.3.1.3. The right-hand-side is minimized by setting $g(x) = m(x)$. Thus the mean squared error is minimized by the conditional mean.

Figure 2.3: Conditional Mean of Wages Given Education

2.4 Conditional Variance


While the conditional mean is a good measure of the location of a conditional distribution, it does
not provide information about the spread of the distribution. A common measure of the dispersion is the
conditional variance
$$
\sigma^2(x) = \mathrm{var}(y \mid x) = E\left(e^2 \mid x\right).
$$
Generally, $\sigma^2(x)$ is a non-trivial function of $x$, and can take any form, subject to the restriction that it is non-negative. The conditional standard deviation is its square root $\sigma(x) = \sqrt{\sigma^2(x)}$.
In the special case where $\sigma^2(x)$ is a constant and independent of $x$, so that
$$
E\left(e^2 \mid x\right) = \sigma^2 \tag{2.4}
$$
we say that the error $e$ is homoskedastic. In the general case where $\sigma^2(x)$ depends on $x$ we say that the error $e$ is heteroskedastic.
Some textbooks inappropriately describe heteroskedasticity as the case where "the variance of $e$ varies across observations". This concept is less helpful than defining heteroskedasticity as the dependence of the conditional variance on the observables $x$.
As an example, take the conditional wage densities displayed in Figure 2.1. The conditional standard deviation for men is 12.1 and that for women is 10.5. So while men have higher average wages, they are also somewhat more dispersed.

2.5 Linear Regression


An important special case of (2.3) is when the conditional mean function $m(x)$ is linear in $x$ (or linear in functions of $x$). Notationally, it is convenient to augment the regressor vector $x$ by listing the number "1" as an element. We call this the "constant" or "intercept". Equivalently, we assume that $x_1 = 1$, where $x_1$ is the first element of the vector $x$ defined in (2.1). Thus (2.1) has been redefined as the $k \times 1$ vector
$$
x = \begin{pmatrix} 1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \tag{2.5}
$$
When $m(x)$ is linear in $x$, we can write it as
$$
m(x) = x'\beta = \beta_1 + x_2\beta_2 + \cdots + x_k\beta_k \tag{2.6}
$$

where
$$
\beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \tag{2.7}
$$
is a $k \times 1$ parameter vector.
In this case (2.3) can be written as
$$
y = x'\beta + e \tag{2.8}
$$
$$
E(e \mid x) = 0. \tag{2.9}
$$
Equation (2.8) is called the linear regression model.

An important special case is the homoskedastic linear regression model
$$
\begin{aligned}
y &= x'\beta + e \\
E(e \mid x) &= 0 \\
E\left(e^2 \mid x\right) &= \sigma^2.
\end{aligned}
$$
2.6 Best Linear Predictor


While the conditional mean $m(x) = E(y \mid x)$ is the best predictor of $y$ among all functions of $x$, its functional form is typically unknown, and the linear assumption of the previous section is empirically unlikely to be accurate. Instead, it is more realistic to view the linear specification (2.6) as an approximation. We derive an appropriate approximation in this section.
In the linear projection model the coefficient $\beta$ is defined so that the function $x'\beta$ is the best linear predictor of $y$. As before, by "best" we mean the predictor function with lowest mean squared error. For any $\beta \in \mathbb{R}^k$ a linear predictor for $y$ is $x'\beta$ with expected squared prediction error
$$
S(\beta) = E(y - x'\beta)^2 = E y^2 - 2\beta' E(xy) + \beta' E(xx')\beta
$$
which is quadratic in $\beta$. The best linear predictor is obtained by selecting $\beta$ to minimize $S(\beta)$. The first-order condition for minimization (from Appendix A.9) is
$$
0 = \frac{\partial}{\partial \beta} S(\beta) = -2E(xy) + 2E(xx')\beta.
$$
Solving for $\beta$ we find
$$
\beta = \left(E(xx')\right)^{-1} E(xy). \tag{2.10}
$$
It is worth taking the time to understand the notation involved in this expression. $E(xx')$ is a $k \times k$ matrix and $E(xy)$ is a $k \times 1$ column vector. Therefore, alternative expressions such as $\frac{E(xy)}{E(xx')}$ or $E(xy)\left(E(xx')\right)^{-1}$ are incoherent and incorrect. Appendix A provides a comprehensive review of matrix notation and operations.
The vector (2.10) exists and is unique as long as the $k \times k$ matrix $Q = E(xx')$ is invertible. The matrix $Q$ plays an important role in least-squares theory so we will discuss some of its properties in detail. Observe that for any non-zero $\alpha \in \mathbb{R}^k$,
$$
\alpha' Q \alpha = E\left(\alpha' x x' \alpha\right) = E\left(\alpha' x\right)^2 \geq 0
$$
so $Q$ is by construction positive semi-definite. It is invertible if and only if it is positive definite, which requires that for all non-zero $\alpha$, $E(\alpha' x)^2 > 0$. Equivalently, there cannot exist a non-zero vector $\alpha$ such that $\alpha' x = 0$ identically. This occurs when redundant variables are included in $x$. In order for $\beta$ to be uniquely defined, this situation must be excluded.
Given the definition of $\beta$ in (2.10), $x'\beta$ is the best linear predictor for $y$. The error is
$$
e = y - x'\beta. \tag{2.11}
$$
Notice that the error $e$ from the linear prediction equation is equal to the error from the regression equation when (and only when) the conditional mean is linear in $x$; otherwise they are distinct.
Rewriting, we obtain a decomposition of $y$ into linear predictor and error
$$
y = x'\beta + e. \tag{2.12}
$$

This completes the derivation of the model. We call $x'\beta$ alternatively the best linear predictor of $y$ given $x$, or the linear projection of $y$ onto $x$. In general we will call equation (2.12) the linear projection model.
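To make the projection coefficient concrete, here is a short Python sketch (an illustration, not part of the original text) which approximates $\beta = \left(E(xx')\right)^{-1} E(xy)$ by replacing the population moments with averages over a large simulated sample. The design is hypothetical; it borrows the nonlinear conditional mean $m(x) = 2x - x^2/6$ with $x \sim N(2,1)$ used later in the construction of Figure 2.5.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # large sample, so sample moments approximate population moments

# Hypothetical design: m(x) = 2x - x^2/6 with x ~ N(2,1) and unit-variance noise
x2 = rng.normal(2.0, 1.0, n)
y = 2.0 * x2 - x2**2 / 6.0 + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x2])    # x = (1, x2)', so beta includes an intercept

Qxx = X.T @ X / n                        # approximates E(x x')
Qxy = X.T @ y / n                        # approximates E(x y)
beta = np.linalg.solve(Qxx, Qxy)         # beta = (E(x x'))^{-1} E(x y)
print("best linear predictor coefficients:", beta)
```

The resulting coefficients define the best linear approximation to the (nonlinear) conditional mean under this design.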
We now summarize the assumptions necessary for its derivation and list the implications in Theorem
2.6.1.

Assumption 2.6.1

1. $x$ contains an intercept;
2. $E y^2 < \infty$;
3. $E x_j^2 < \infty$ for $j = 1, \ldots, k$;
4. $Q = E(xx')$ is invertible.

Theorem 2.6.1 Under Assumption 2.6.1, (2.10) and (2.11) are well defined. Furthermore,
$$
E e^2 < \infty \tag{2.13}
$$
$$
E(xe) = 0 \tag{2.14}
$$
and
$$
E(e) = 0. \tag{2.15}
$$
A complete proof of Theorem 2.6.1 is presented in Section 2.7.
The two equations (2.12) and (2.14) summarize the linear projection model. Let's compare it with the linear regression model (2.8)-(2.9). Since from Theorem 2.3.1.4 we know that the regression error has the property $E(xe) = 0$, it follows that linear regression is a special case of the projection model. However, the converse is not true, as the projection error does not necessarily satisfy $E(e \mid x) = 0$. For example, suppose that $x \in \mathbb{R}$ with $Ex = 0$, $Ex^3 = 0$, and $e = x^2 - Ex^2$. Then $Exe = Ex^3 - Ex\,Ex^2 = 0$ yet $E(e \mid x) = x^2 - Ex^2 \neq 0$.
It is useful to note that the facts that $E(xe) = 0$ and $E(e) = 0$ mean that the variables $x$ and $e$ are uncorrelated.
The conditions listed in Assumption 2.6.1 are weak. The finite second moment Assumptions 2.6.1.2 and 2.6.1.3 are called regularity conditions. Assumption 2.6.1.4 is required to ensure that $\beta$ is uniquely defined. Assumption 2.6.1.1 is employed to guarantee that (2.15) holds.
We have shown that under mild regularity conditions, for any pair $(y, x)$ we can define a linear equation (2.12) with the properties listed in Theorem 2.6.1. No additional assumptions are required. However, it is important not to misinterpret the generality of this statement. The linear equation (2.12) is defined by the definition of the best linear predictor and the associated coefficient definition (2.10). In contrast, in many economic models the parameter $\beta$ may be defined within the model. In this case (2.10) may not hold and the implications of Theorem 2.6.1 may be false. These structural models require alternative estimation methods, and are discussed in Chapter 9.
Returning to the joint distribution displayed in Figure 2.3, the dashed line is the projection of log wages onto education. In this example the linear predictor is a close approximation to the conditional mean. In other cases the two may be quite different. Figure 2.4 displays the relationship³ between mean log hourly wages and labor market experience. The solid line is the conditional mean, and the straight dashed line is the linear projection. In this case the linear projection is a poor approximation to the conditional mean. It over-predicts wages for young and old workers, and under-predicts for the rest. Most importantly, it misses the strong downturn in expected wages for those above 35 years work experience (equivalently, for those over 53 in age).
This defect in the best linear predictor can be partially corrected through a careful selection of regressors. In the example just presented, we can augment the regressor vector $x$ to include both experience and experience². The best linear predictor of log wages given these two variables can be called a quadratic projection, since the resulting function is quadratic in experience. Other than the redefinition of the regressor vector, there are no changes in our methods or analysis. In Figure 2.4 we display as well the quadratic projection. In this example it is a much better approximation to the conditional mean than the linear projection.

³ In the population of Caucasian non-military male wage earners with 12 years of education.

Figure 2.4: Hourly Wage as a Function of Experience

Another defect of linear projection is that it is sensitive to the marginal distribution of the regressors when the conditional mean is non-linear. We illustrate the issue in Figure 2.5 for a constructed⁴ joint distribution of $y$ and $x$. The solid line is the non-linear conditional mean of $y$ given $x$. The data are divided into two groups, Group 1 and Group 2, which have different marginal distributions for the regressor $x$, and Group 1 has a lower mean value of $x$ than Group 2. The separate linear projections of $y$ on $x$ for these two groups are displayed in the Figure by the dashed lines. These two projections are distinct approximations to the conditional mean. A defect with linear projection is that it leads to the incorrect conclusion that the effect of $x$ on $y$ is different for individuals in the two Groups. This conclusion is incorrect because in fact there is no difference in the conditional mean function. The apparent difference is a by-product of a linear approximation to a non-linear mean, combined with different marginal distributions for the conditioning variables.

⁴ The $x$ in Group 1 are $N(2,1)$ and those in Group 2 are $N(4,1)$, and the conditional distribution of $y$ given $x$ is $N(m(x), 1)$ where $m(x) = 2x - x^2/6$.

2.7 Technical Proofs


Proof of Theorem 2.6.1. We first show that the moments $E(xy)$ and $E(xx')$ are finite and well defined. First, it is useful to note that Assumption 2.6.1.3 implies that
$$
E\|x\|^2 = E(x'x) = \sum_{j=1}^{k} E x_j^2 < \infty. \tag{2.16}
$$
Note that for $j = 1, \ldots, k$, by the Cauchy-Schwarz Inequality (C.3) and Assumptions 2.6.1.2 and 2.6.1.3,
$$
E|x_j y| \leq \left(E x_j^2\right)^{1/2} \left(E y^2\right)^{1/2} < \infty.
$$
Thus the elements in the vector $E(xy)$ are well defined and finite. Next, note that the $jl$'th element of $E(xx')$ is $E(x_j x_l)$. Observe that
$$
E|x_j x_l| \leq \left(E x_j^2\right)^{1/2} \left(E x_l^2\right)^{1/2} < \infty
$$
under Assumption 2.6.1.3. Thus all elements of the matrix $E(xx')$ are finite.
Equation (2.10) states that $\beta = \left(E(xx')\right)^{-1} E(xy)$, which is well defined since $\left(E(xx')\right)^{-1}$ exists under Assumption 2.6.1.4. It follows that $e = y - x'\beta$ as defined in (2.11) is also well defined.

Figure 2.5: Conditional Mean and Two Linear Projections

Note the Schwarz Inequality (A.7) implies $(x'\beta)^2 \leq \|x\|^2\|\beta\|^2$ and therefore combined with (2.16) we see that
$$
E(x'\beta)^2 \leq E\|x\|^2\|\beta\|^2 < \infty. \tag{2.17}
$$
Using Minkowski's Inequality (C.5), Assumption 2.6.1.2 and (2.17) we find
$$
\left(E e^2\right)^{1/2} = \left(E(y - x'\beta)^2\right)^{1/2} \leq \left(E y^2\right)^{1/2} + \left(E(x'\beta)^2\right)^{1/2} < \infty
$$
establishing (2.13).
An application of the Cauchy-Schwarz Inequality (C.3) shows that for any $j$,
$$
E|x_j e| \leq \left(E x_j^2\right)^{1/2} \left(E e^2\right)^{1/2} < \infty
$$
and therefore the elements in the vector $E(xe)$ are well defined and finite.
Using the definitions (2.11) and (2.10), and the matrix properties that $A^{-1}A = I$ and $Ia = a$,
$$
E(xe) = E\left(x(y - x'\beta)\right) = E(xy) - E(xx')\left(E(xx')\right)^{-1} E(xy) = 0.
$$
Finally, equation (2.15) follows from (2.14) and Assumption 2.6.1.1.

2.8 Exercises
1. Prove parts 2, 3 and 4 of Theorem 2.3.1.

2. Suppose that the random variables $y$ and $x$ only take the values 0 and 1, and have the following joint probability distribution

   |       | $x=0$ | $x=1$ |
   |-------|-------|-------|
   | $y=0$ | .1    | .2    |
   | $y=1$ | .4    | .3    |

   Find $E(y \mid x)$, $E(y^2 \mid x)$ and $\mathrm{var}(y \mid x)$ for $x = 0$ and $x = 1$.

3. Suppose that $y$ is discrete-valued, taking values only on the non-negative integers, and the conditional distribution of $y$ given $x$ is Poisson:
$$
P(y = j \mid x) = \frac{\exp(-x'\beta)\,(x'\beta)^j}{j!}, \qquad j = 0, 1, 2, \ldots
$$
   Compute $E(y \mid x)$ and $\mathrm{var}(y \mid x)$. Does this justify a linear regression model of the form $y = x'\beta + e$?
   Hint: If $P(y = j) = \frac{\exp(-\lambda)\lambda^j}{j!}$, then $Ey = \lambda$ and $\mathrm{var}(y) = \lambda$.

4. Let $x$ and $y$ have the joint density $f(x, y) = \frac{3}{2}\left(x^2 + y^2\right)$ on $0 \leq x \leq 1$, $0 \leq y \leq 1$. Compute the coefficients of the best linear predictor $y = \beta_1 + \beta_2 x + e$. Compute the conditional mean $m(x) = E(y \mid x)$. Are they different?

5. Take the bivariate linear projection model
$$
\begin{aligned}
y &= \beta_1 + \beta_2 x + e \\
E(e) &= 0 \\
E(xe) &= 0
\end{aligned}
$$
   Define $\mu_y = Ey$, $\mu_x = Ex$, $\sigma_x^2 = \mathrm{var}(x)$, $\sigma_y^2 = \mathrm{var}(y)$ and $\sigma_{xy} = \mathrm{cov}(x, y)$. Show that $\beta_2 = \sigma_{xy}/\sigma_x^2$ and $\beta_1 = \mu_y - \beta_2\mu_x$.

6. True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(e \mid x) = 0$, then $E(x^2 e) = 0$.

7. True or False. If $y = x'\beta + e$ and $E(e \mid x) = 0$, then $e$ is independent of $x$.

8. True or False. If $y = x'\beta + e$, $E(e \mid x) = 0$, and $E(e^2 \mid x) = \sigma^2$, a constant, then $e$ is independent of $x$.

9. True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(xe) = 0$, then $E(x^2 e) = 0$.

10. True or False. If $y = x'\beta + e$ and $E(xe) = 0$, then $E(e \mid x) = 0$.

11. Let $x$ be a random variable with $\mu = Ex$ and $\sigma^2 = \mathrm{var}(x)$. Define
$$
g\left(x \mid \mu, \sigma^2\right) = \begin{pmatrix} x - \mu \\ (x - \mu)^2 - \sigma^2 \end{pmatrix}.
$$
    Show that $E g(x \mid m, s) = 0$ if and only if $m = \mu$ and $s = \sigma^2$.
Chapter 3

Least Squares Estimation

3.1 Random Sample


In Chapter 2, we discussed the joint distribution of a pair of random variables $(y, x) \in \mathbb{R} \times \mathbb{R}^k$, describing the regression relationship as the conditional mean of $y$ given $x$, and the approximation given by the best linear predictor of $y$ given $x$. We now discuss estimation of these relationships from economic data.
In a typical application, an econometrician's data is a set of observed measurements on the variables $(y, x)$ for a group of individuals. These individuals may be persons, households, firms or other economic agents. We call this information the data, dataset, or sample, and denote the number of individuals in the dataset by the natural number $n$.
We will use the index $i$ to indicate the $i$'th individual in the dataset. The observation for the $i$'th individual will be written as $(y_i, x_i)$: $y_i$ is the observed value of $y$ for individual $i$ and $x_i$ is the observed value of $x$ for the same individual.
If the data is cross-sectional (each observation is a different individual) it is often reasonable to assume the observations are mutually independent. This means that the pair $(y_i, x_i)$ is independent of $(y_j, x_j)$ for $i \neq j$. (Sometimes the independent label is misconstrued. It is not a statement about the relationship between $y_i$ and $x_i$.) Furthermore, if the data is randomly gathered, it is reasonable to model each observation as a random draw from the same probability distribution. In this case we say that the data are independent and identically distributed, or iid. We call this a random sample.

Assumption 3.1.1 The observations $(y_i, x_i)$, $i = 1, \ldots, n$, are mutually independent across observations $i$ and identically distributed.
This chapter explores estimation and inference in the linear projection model for a random sample:
$$
y_i = x_i'\beta + e_i \tag{3.1}
$$
$$
E(x_i e_i) = 0 \tag{3.2}
$$
$$
\beta = \left(E(x_i x_i')\right)^{-1} E(x_i y_i). \tag{3.3}
$$
In Sections 3.8 and 3.9, we narrow the focus to the linear regression model, but for most of the chapter we retain the broader focus on the projection model.

3.2 Estimation
Equation (3.3) writes the projection coefficient $\beta$ as an explicit function of the population moments $E(x_i y_i)$ and $E(x_i x_i')$. Their moment estimators are the sample moments
$$
\hat{E}(x_i y_i) = \frac{1}{n}\sum_{i=1}^{n} x_i y_i
$$
$$
\hat{E}(x_i x_i') = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'.
$$

It follows that the moment estimator of $\beta$ replaces the population moments in (3.3) with the sample moments:
$$
\begin{aligned}
\hat{\beta} &= \hat{E}(x_i x_i')^{-1}\,\hat{E}(x_i y_i) \\
&= \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i y_i\right) \\
&= \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right). \qquad (3.4)
\end{aligned}
$$
Another way to derive $\hat{\beta}$ is as follows. Observe that (3.2) can be written in the parametric form $g(\beta) = E\left(x_i(y_i - x_i'\beta)\right) = 0$. The function $g(\beta)$ can be estimated by
$$
\hat{g}(\beta) = \frac{1}{n}\sum_{i=1}^{n} x_i(y_i - x_i'\beta).
$$
This is a set of $k$ equations which are linear in $\beta$. The estimator $\hat{\beta}$ is the value which jointly sets these equations equal to zero:
$$
\begin{aligned}
0 &= \hat{g}(\hat{\beta}) \qquad (3.5) \\
&= \frac{1}{n}\sum_{i=1}^{n} x_i\left(y_i - x_i'\hat{\beta}\right) \\
&= \frac{1}{n}\sum_{i=1}^{n} x_i y_i - \frac{1}{n}\sum_{i=1}^{n} x_i x_i'\hat{\beta}
\end{aligned}
$$
whose solution is (3.4).


To illustrate, consider the data used to generate Figure 2.3. These are white male wage earners from the March 2004 Current Population Survey, excluding military, with 10-15 years of potential work experience. This sample has 988 observations. Let $y_i$ be log wages and $x_i$ be an intercept and years of education. Then
$$
\frac{1}{n}\sum_{i=1}^{n} x_i y_i = \begin{pmatrix} 2.95 \\ 42.40 \end{pmatrix}
$$
$$
\frac{1}{n}\sum_{i=1}^{n} x_i x_i' = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}.
$$
Thus
$$
\hat{\beta} = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1}\begin{pmatrix} 2.95 \\ 42.40 \end{pmatrix} = \begin{pmatrix} 34.94 & -2.40 \\ -2.40 & 0.170 \end{pmatrix}\begin{pmatrix} 2.95 \\ 42.40 \end{pmatrix} = \begin{pmatrix} 1.313 \\ 0.128 \end{pmatrix}.
$$
We often write the estimated equation using the format
$$
\widehat{\log(Wage)} = 1.313 + 0.128\ Education.
$$
An interpretation of the estimated equation is that each year of education is associated with a 12.8% increase in mean wages.
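As a quick check, the calculation above can be reproduced numerically. The following Python sketch (illustrative only; it uses the sample moments printed above rather than the underlying CPS microdata) solves the moment equations for $\hat{\beta}$.

```python
import numpy as np

# Sample moments reported in the text (intercept and years of education)
Qxy = np.array([2.95, 42.40])            # (1/n) sum x_i y_i
Qxx = np.array([[1.0, 14.14],
                [14.14, 205.83]])        # (1/n) sum x_i x_i'

# beta_hat = (sum x_i x_i')^{-1} (sum x_i y_i); the 1/n factors cancel
beta_hat = np.linalg.solve(Qxx, Qxy)
print(beta_hat)                          # approximately [1.313, 0.128]
```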

3.3 Least Squares
Least squares is another classic motivation for the estimator (3.4). Define the sum-of-squared errors (SSE) function
$$
S_n(\beta) = \sum_{i=1}^{n}(y_i - x_i'\beta)^2 = \sum_{i=1}^{n} y_i^2 - 2\beta'\sum_{i=1}^{n} x_i y_i + \beta'\sum_{i=1}^{n} x_i x_i'\beta.
$$
This is a quadratic function of $\beta$. To visualize this function, Figure 3.1 displays an example sum-of-squared errors function $S_n(\beta)$ for the case $k = 2$.

Figure 3.1: Sum-of-Squared Errors Function

The Ordinary Least Squares (OLS) estimator is the value of $\beta$ which minimizes $S_n(\beta)$. Matrix calculus (see Appendix A.9) gives the first-order conditions for minimization:
$$
0 = \frac{\partial}{\partial \beta} S_n(\hat{\beta}) = -2\sum_{i=1}^{n} x_i y_i + 2\sum_{i=1}^{n} x_i x_i'\hat{\beta}
$$
whose solution is (3.4). Following convention we will call $\hat{\beta}$ the OLS estimator of $\beta$.
As a by-product of OLS estimation, we define the predicted value
$$
\hat{y}_i = x_i'\hat{\beta}
$$
and the residual
$$
\hat{e}_i = y_i - \hat{y}_i = y_i - x_i'\hat{\beta}.
$$
Note that $y_i = \hat{y}_i + \hat{e}_i$. It is important to understand the distinction between the error $e_i$ and the residual $\hat{e}_i$. The error is unobservable, while the residual is a by-product of estimation. These two variables are frequently mislabeled, which can cause confusion.

Equation (3.5) implies that
$$
\frac{1}{n}\sum_{i=1}^{n} x_i\hat{e}_i = 0.
$$
Since $x_i$ contains a constant, one implication is that
$$
\frac{1}{n}\sum_{i=1}^{n}\hat{e}_i = 0.
$$
Thus the residuals have a sample mean of zero and the sample correlation between the regressors and the residuals is zero. These are algebraic results, and hold true for all linear regression estimates.
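These orthogonality properties are easy to verify numerically. A minimal Python sketch with simulated, hypothetical data (the design and variable names are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
educ = rng.integers(8, 21, n).astype(float)        # hypothetical years-of-education regressor
X = np.column_stack([np.ones(n), educ])            # x_i contains a constant
y = 1.3 + 0.13 * educ + rng.normal(0.0, 0.5, n)    # hypothetical log-wage equation

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)       # OLS from the normal equations (3.5)
e_hat = y - X @ beta_hat                           # residuals

print(X.T @ e_hat / n)    # (1/n) sum x_i e_hat_i: numerically zero
print(e_hat.mean())       # sample mean of the residuals: numerically zero
```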
The error variance $\sigma^2 = E e_i^2$ is also a parameter of interest. It measures the variation in the "unexplained" part of the regression. Its method of moments estimator is the sample average of the squared residuals
$$
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2. \tag{3.6}
$$
An alternative estimator uses the formula
$$
s^2 = \frac{1}{n-k}\sum_{i=1}^{n}\hat{e}_i^2. \tag{3.7}
$$
A justification for the latter choice will be provided in Section 3.8.
A measure of the explained variation relative to the total variation is the coefficient of determination or R-squared:
$$
R^2 = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\hat{\sigma}^2}{\hat{\sigma}_y^2}
$$
where
$$
\hat{\sigma}_y^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2
$$
is the sample variance of $y_i$. The $R^2$ is frequently mislabeled as a measure of "fit". It is an inappropriate label as the value of $R^2$ does not help interpret the parameter estimates $\hat{\beta}$ or test statistics concerning $\beta$. Instead, it should be viewed as an estimator of the population parameter
$$
\rho^2 = \frac{\mathrm{var}(x_i'\beta)}{\mathrm{var}(y_i)} = 1 - \frac{\sigma^2}{\sigma_y^2}
$$
where $\sigma_y^2 = \mathrm{var}(y_i)$. An alternative estimator of $\rho^2$ proposed by Theil, called "R-bar-squared", is
$$
\bar{R}^2 = 1 - \frac{s^2}{\tilde{\sigma}_y^2}
$$
where
$$
\tilde{\sigma}_y^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2.
$$
Theil's estimator $\bar{R}^2$ is a ratio of adjusted variance estimators, and therefore is expected to be a better estimator of $\rho^2$ than the unadjusted estimator $R^2$.
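The variance estimators and the two R-squared measures can be collected in one short helper. The Python sketch below is illustrative only; the simulated data and the function name are hypothetical, and the formulas follow (3.6), (3.7), $R^2$ and $\bar{R}^2$ as defined above.

```python
import numpy as np

def variance_and_fit(y, X):
    """Return (sigma2_hat, s2, R2, R2_bar) for an OLS fit of y on X."""
    n, k = X.shape
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    e_hat = y - X @ beta_hat
    sigma2_hat = e_hat @ e_hat / n          # method of moments estimator (3.6)
    s2 = e_hat @ e_hat / (n - k)            # degrees-of-freedom adjusted (3.7)
    sigma2_y = np.var(y)                    # (1/n) sum (y_i - ybar)^2
    sigma2_y_tilde = np.var(y, ddof=1)      # (1/(n-1)) sum (y_i - ybar)^2
    R2 = 1 - sigma2_hat / sigma2_y
    R2_bar = 1 - s2 / sigma2_y_tilde        # Theil's R-bar-squared
    return sigma2_hat, s2, R2, R2_bar

# Example with simulated (hypothetical) data
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
print(variance_and_fit(y, X))
```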

3.4 Normal Regression Model


Another motivation for the least-squares estimator can be obtained from the normal regression model. This is the linear regression model with the additional assumption that the error $e_i$ is independent of $x_i$ and has the distribution $N(0, \sigma^2)$. This is a parametric model, where likelihood methods can be used for estimation, testing, and distribution theory.

The log-likelihood function for the normal regression model is
$$
\log L(\beta, \sigma^2) = \sum_{i=1}^{n}\log\left(\frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}(y_i - x_i'\beta)^2\right)\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}S_n(\beta).
$$
The MLE $(\hat{\beta}, \hat{\sigma}^2)$ maximizes $\log L(\beta, \sigma^2)$. Since $\log L(\beta, \sigma^2)$ is a function of $\beta$ only through the sum of squared errors $S_n(\beta)$, maximizing the likelihood is identical to minimizing $S_n(\beta)$. Hence the MLE for $\beta$ equals the OLS estimator.
Plugging $\hat{\beta}$ into the log-likelihood we obtain
$$
\log L\left(\hat{\beta}, \sigma^2\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\hat{e}_i^2.
$$
Maximization with respect to $\sigma^2$ yields the first-order condition
$$
\frac{\partial}{\partial \sigma^2}\log L\left(\hat{\beta}, \hat{\sigma}^2\right) = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^{n}\hat{e}_i^2 = 0.
$$
Solving for $\hat{\sigma}^2$ yields the method of moments estimator (3.6). Thus the MLE $(\hat{\beta}, \hat{\sigma}^2)$ for the normal regression model are identical to the method of moments estimators. Due to this equivalence, the OLS estimator $\hat{\beta}$ is frequently referred to as the Gaussian MLE.
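The equivalence can also be seen numerically: since the log-likelihood depends on $\beta$ only through $S_n(\beta)$, any coefficient vector other than the OLS estimate gives a lower likelihood. The following sketch (hypothetical simulated data, for illustration only) evaluates the Gaussian log-likelihood at the OLS estimates and at a perturbed coefficient vector.

```python
import numpy as np

def gaussian_loglik(beta, sigma2, y, X):
    # log L(beta, sigma2) = -(n/2) log(2 pi sigma2) - S_n(beta) / (2 sigma2)
    resid = y - X @ beta
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - resid @ resid / (2 * sigma2)

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, -0.7]) + rng.normal(size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
sigma2_mle = np.mean((y - X @ beta_ols) ** 2)            # estimator (3.6)

print(gaussian_loglik(beta_ols, sigma2_mle, y, X))        # value at the MLE
print(gaussian_loglik(beta_ols + 0.1, sigma2_mle, y, X))  # any other beta gives a lower value
```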

3.5 Model in Matrix Notation


For many purposes, including computation, it is convenient to write the model and statistics in matrix notation. The linear equation (2.12) is a system of $n$ equations, one for each observation. We can stack these $n$ equations together as
$$
\begin{aligned}
y_1 &= x_1'\beta + e_1 \\
y_2 &= x_2'\beta + e_2 \\
&\ \ \vdots \\
y_n &= x_n'\beta + e_n.
\end{aligned}
$$
Now define
$$
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad
X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}, \qquad
e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}.
$$
Observe that $y$ and $e$ are $n \times 1$ vectors, and $X$ is an $n \times k$ matrix. Then the system of $n$ equations can be compactly written in the single equation
$$
y = X\beta + e.
$$
Sample sums can also be written in matrix notation. For example,
$$
\sum_{i=1}^{n} x_i x_i' = X'X, \qquad \sum_{i=1}^{n} x_i y_i = X'y.
$$
Thus the estimator (3.4), residual vector, and sample error variance can be written as
$$
\begin{aligned}
\hat{\beta} &= \left(X'X\right)^{-1}X'y \\
\hat{e} &= y - X\hat{\beta} \\
\hat{\sigma}^2 &= n^{-1}\hat{e}'\hat{e}.
\end{aligned}
$$

A useful result is obtained by inserting $y = X\beta + e$ into the formula for $\hat{\beta}$ to obtain
$$
\begin{aligned}
\hat{\beta} &= \left(X'X\right)^{-1}X'(X\beta + e) \\
&= \left(X'X\right)^{-1}X'X\beta + \left(X'X\right)^{-1}X'e \\
&= \beta + \left(X'X\right)^{-1}X'e. \qquad (3.8)
\end{aligned}
$$

3.6 Projection Matrices


Define the matrices
$$
P = X\left(X'X\right)^{-1}X'
$$
and
$$
M = I_n - X\left(X'X\right)^{-1}X' = I_n - P
$$
where $I_n$ is the $n \times n$ identity matrix. $P$ and $M$ are called projection matrices due to the property that for any matrix $Z$ which can be written as $Z = X\Gamma$ for some matrix $\Gamma$ (we say that $Z$ lies in the range space of $X$), then
$$
PZ = PX\Gamma = X\left(X'X\right)^{-1}X'X\Gamma = X\Gamma = Z
$$
and
$$
MZ = (I_n - P)Z = Z - PZ = Z - Z = 0.
$$
As an important example of this property, partition the matrix $X$ into two matrices $X_1$ and $X_2$, so that
$$
X = [X_1\ X_2].
$$
Then $PX_1 = X_1$ and $MX_1 = 0$. It follows that $MX = 0$ and $MP = 0$, so $M$ and $P$ are orthogonal.

The matrices $P$ and $M$ are symmetric and idempotent¹. To see that $P$ is symmetric,
$$
\begin{aligned}
P' &= \left(X\left(X'X\right)^{-1}X'\right)' \\
&= \left(X'\right)'\left(\left(X'X\right)^{-1}\right)'\left(X\right)' \\
&= X\left(\left(X'X\right)'\right)^{-1}X' \\
&= X\left(X'X\right)^{-1}X' \\
&= P.
\end{aligned}
$$
To establish that it is idempotent,
$$
\begin{aligned}
PP &= X\left(X'X\right)^{-1}X'X\left(X'X\right)^{-1}X' \\
&= X\left(X'X\right)^{-1}X' \\
&= P.
\end{aligned}
$$
Similarly,
$$
M' = (I_n - P)' = I_n - P = M
$$
and
$$
MM = M(I_n - P) = M - MP = M,
$$
since $MP = 0$.

¹ A matrix $P$ is symmetric if $P' = P$. A matrix $P$ is idempotent if $PP = P$. See Appendix A.8.

Another useful property is that
$$
\mathrm{tr}\,P = k \tag{3.9}
$$
$$
\mathrm{tr}\,M = n - k \tag{3.10}
$$
(See Appendix A.4 for the definition and properties of the trace operator.) To show (3.9) and (3.10),
$$
\mathrm{tr}\,P = \mathrm{tr}\left(X\left(X'X\right)^{-1}X'\right) = \mathrm{tr}\left(\left(X'X\right)^{-1}X'X\right) = \mathrm{tr}\left(I_k\right) = k,
$$
and
$$
\mathrm{tr}\,M = \mathrm{tr}\left(I_n - P\right) = \mathrm{tr}\left(I_n\right) - \mathrm{tr}\left(P\right) = n - k.
$$
Given the definitions of $P$ and $M$, observe that
$$
\hat{y} = X\hat{\beta} = X\left(X'X\right)^{-1}X'y = Py
$$
and
$$
\hat{e} = y - X\hat{\beta} = y - Py = My. \tag{3.11}
$$
Furthermore, since $y = X\beta + e$ and $MX = 0$, then
$$
\hat{e} = M(X\beta + e) = Me. \tag{3.12}
$$
Another way of writing (3.11) is
$$
y = (P + M)y = Py + My = \hat{y} + \hat{e}.
$$
This decomposition is orthogonal, that is,
$$
\hat{y}'\hat{e} = (Py)'(My) = y'PMy = 0.
$$
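A short numerical check of these properties, using a simulated design (purely illustrative; the dimensions and seed are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)     # P = X (X'X)^{-1} X'
M = np.eye(n) - P                         # M = I_n - P

print(np.allclose(P, P.T), np.allclose(P @ P, P))    # symmetric and idempotent
print(np.trace(P), np.trace(M))                      # k and n - k
y_hat, e_hat = P @ y, M @ y                          # fitted values and residuals
print(np.allclose(y, y_hat + e_hat), np.isclose(y_hat @ e_hat, 0.0))
```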

3.7 Residual Regression


Partition
$$
X = [X_1\ X_2]
$$
and
$$
\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.
$$
Then the regression model can be rewritten as
$$
y = X_1\beta_1 + X_2\beta_2 + e. \tag{3.13}
$$
Observe that the OLS estimator of $\beta = (\beta_1', \beta_2')'$ can be obtained by regression of $y$ on $X = [X_1\ X_2]$. OLS estimation can be written as
$$
y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}. \tag{3.14}
$$
Suppose that we are primarily interested in $\beta_2$, not in $\beta_1$, so we are only interested in obtaining the OLS sub-component $\hat{\beta}_2$. In this section we derive an alternative expression for $\hat{\beta}_2$ which does not involve estimation of the full model.
Define
$$
M_1 = I_n - X_1\left(X_1'X_1\right)^{-1}X_1'.
$$
Recalling the definition $M = I_n - X\left(X'X\right)^{-1}X'$, observe that $X_1'M_1 = 0$ and thus
$$
M_1 M = M - X_1\left(X_1'X_1\right)^{-1}X_1'M = M.
$$

It follows that
$$
M_1\hat{e} = M_1 M e = M e = \hat{e}.
$$
Using this result, if we premultiply (3.14) by $M_1$ we obtain
$$
\begin{aligned}
M_1 y &= M_1 X_1\hat{\beta}_1 + M_1 X_2\hat{\beta}_2 + M_1\hat{e} \\
&= M_1 X_2\hat{\beta}_2 + \hat{e} \qquad (3.15)
\end{aligned}
$$
the second equality since $M_1 X_1 = 0$. Premultiplying by $X_2'$ and recalling that $X_2'\hat{e} = 0$, we obtain
$$
X_2'M_1 y = X_2'M_1 X_2\hat{\beta}_2 + X_2'\hat{e} = X_2'M_1 X_2\hat{\beta}_2.
$$
Solving,
$$
\hat{\beta}_2 = \left(X_2'M_1 X_2\right)^{-1}\left(X_2'M_1 y\right),
$$
an alternative expression for $\hat{\beta}_2$.
Now, define
$$
\tilde{X}_2 = M_1 X_2 \tag{3.16}
$$
$$
\tilde{y} = M_1 y, \tag{3.17}
$$
the least-squares residuals from the regression of $X_2$ and $y$, respectively, on the matrix $X_1$ only. Since the matrix $M_1$ is idempotent, $M_1 = M_1 M_1$ and thus
$$
\begin{aligned}
\hat{\beta}_2 &= \left(X_2'M_1 X_2\right)^{-1}\left(X_2'M_1 y\right) \\
&= \left(X_2'M_1 M_1 X_2\right)^{-1}\left(X_2'M_1 M_1 y\right) \\
&= \left(\tilde{X}_2'\tilde{X}_2\right)^{-1}\left(\tilde{X}_2'\tilde{y}\right).
\end{aligned}
$$
This shows that $\hat{\beta}_2$ can be calculated by the OLS regression of $\tilde{y}$ on $\tilde{X}_2$. This technique is called residual regression.
Furthermore, using the definitions (3.16) and (3.17), expression (3.15) can be equivalently written as
$$
\tilde{y} = \tilde{X}_2\hat{\beta}_2 + \hat{e}.
$$
Since $\hat{\beta}_2$ is precisely the OLS coefficient from a regression of $\tilde{y}$ on $\tilde{X}_2$, this shows that the residual vector from this regression is $\hat{e}$, numerically the same residual as from the joint regression (3.14). We have proven the following theorem.

Theorem 3.7.1 (Frisch-Waugh-Lovell). In the model (3.13), the OLS estimator of $\beta_2$ and the OLS residuals $\hat{e}$ may be equivalently computed by either the OLS regression (3.14) or via the following algorithm:

1. Regress $y$ on $X_1$, obtain residuals $\tilde{y}$;
2. Regress $X_2$ on $X_1$, obtain residuals $\tilde{X}_2$;
3. Regress $\tilde{y}$ on $\tilde{X}_2$, obtain OLS estimates $\hat{\beta}_2$ and residuals $\hat{e}$.

In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm. Rather, the primary use is theoretical.
A common application of the FWL theorem, which you may have seen in an introductory econometrics course, is the demeaning formula for regression. Partition $X = [X_1\ X_2]$ where $X_1 = \iota$ is a vector of ones, and $X_2$ is the matrix of observed regressors. In this case,
$$
M_1 = I - \iota\left(\iota'\iota\right)^{-1}\iota'.
$$
Observe that
$$
\tilde{X}_2 = M_1 X_2 = X_2 - \iota\left(\iota'\iota\right)^{-1}\iota'X_2 = X_2 - \bar{X}_2
$$
and
$$
\tilde{y} = M_1 y = y - \iota\left(\iota'\iota\right)^{-1}\iota'y = y - \bar{y},
$$
which are "demeaned". The FWL theorem says that $\hat{\beta}_2$ is the OLS estimate from a regression of $y_i - \bar{y}$ on $x_{2i} - \bar{x}_2$:
$$
\hat{\beta}_2 = \left(\sum_{i=1}^{n}(x_{2i} - \bar{x}_2)(x_{2i} - \bar{x}_2)'\right)^{-1}\left(\sum_{i=1}^{n}(x_{2i} - \bar{x}_2)(y_i - \bar{y})\right).
$$
Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
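The FWL equivalence is easy to verify numerically. The following Python sketch (hypothetical simulated data) compares the coefficient on the regressor of interest from the full regression with the coefficient from the residual regression of the theorem; the two are numerically identical.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x1 = rng.normal(size=n)                    # regressor to be partialled out
x2 = 0.5 * x1 + rng.normal(size=n)         # regressor of interest, correlated with x1
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), x1])     # intercept and x1
X = np.column_stack([X1, x2])

# Full regression: the last coefficient is beta2_hat
beta_full = np.linalg.solve(X.T @ X, X.T @ y)

# Residual regression (FWL): residualize y and x2 on X1, then regress
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
y_tilde, x2_tilde = M1 @ y, M1 @ x2
beta2_fwl = (x2_tilde @ y_tilde) / (x2_tilde @ x2_tilde)

print(beta_full[-1], beta2_fwl)            # numerically identical
```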

3.8 Bias and Variance


In this and the following section we consider the special case of the linear regression model (2.8)-(2.9). In this section we derive the small sample conditional mean and variance of the OLS estimator.
By the independence of the observations and (2.9), observe that
$$
E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0. \tag{3.18}
$$
Using (3.8), the properties of conditional expectations, and (3.18), we can calculate
$$
\begin{aligned}
E\left(\hat{\beta} - \beta \mid X\right) &= E\left(\left(X'X\right)^{-1}X'e \mid X\right) \\
&= \left(X'X\right)^{-1}X'E(e \mid X) \\
&= 0.
\end{aligned}
$$
We have shown that
$$
E\left(\hat{\beta} \mid X\right) = \beta \tag{3.19}
$$
which implies
$$
E\left(\hat{\beta}\right) = \beta
$$
and thus the OLS estimator $\hat{\beta}$ is unbiased for $\beta$.

Next, for any random vector $Z$ define the covariance matrix
$$
\mathrm{var}(Z) = E\left(Z - EZ\right)\left(Z - EZ\right)' = EZZ' - \left(EZ\right)\left(EZ\right)'.
$$
Then given (3.19) we see that
$$
\mathrm{var}\left(\hat{\beta} \mid X\right) = E\left(\left(\hat{\beta} - \beta\right)\left(\hat{\beta} - \beta\right)' \mid X\right) = \left(X'X\right)^{-1}X'DX\left(X'X\right)^{-1}
$$
where
$$
D = E\left(ee' \mid X\right).
$$
The $i$'th diagonal element of $D$ is
$$
E\left(e_i^2 \mid X\right) = E\left(e_i^2 \mid x_i\right) = \sigma_i^2
$$
while the $ij$'th off-diagonal element of $D$ is
$$
E\left(e_i e_j \mid X\right) = E\left(e_i \mid x_i\right)E\left(e_j \mid x_j\right) = 0.
$$

Thus $D$ is a diagonal matrix with $i$'th diagonal element $\sigma_i^2$:
$$
D = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}. \tag{3.20}
$$
In the special case of the linear homoskedastic regression model, $\sigma_i^2 = \sigma^2$ and we have the simplifications $D = I_n\sigma^2$, $X'DX = X'X\sigma^2$, and
$$
\mathrm{var}\left(\hat{\beta} \mid X\right) = \left(X'X\right)^{-1}\sigma^2.
$$
We now calculate the finite sample bias of the method of moments estimator $\hat{\sigma}^2$ for $\sigma^2$, under the additional assumption of conditional homoskedasticity $E\left(e_i^2 \mid x_i\right) = \sigma^2$. From (3.12), the properties of projection matrices and the trace operator, observe that
$$
\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\mathrm{tr}\left(e'Me\right) = \frac{1}{n}\mathrm{tr}\left(Mee'\right).
$$
Then
$$
\begin{aligned}
E\left(\hat{\sigma}^2 \mid X\right) &= \frac{1}{n}\mathrm{tr}\left(E\left(Mee' \mid X\right)\right) \\
&= \frac{1}{n}\mathrm{tr}\left(ME\left(ee' \mid X\right)\right) \\
&= \frac{1}{n}\mathrm{tr}\left(M\right)\sigma^2 \\
&= \sigma^2\,\frac{n-k}{n},
\end{aligned}
$$
the final equality by (3.10). Thus $\hat{\sigma}^2$ is biased towards zero. As an alternative, the estimator $s^2$ defined in (3.7) is unbiased for $\sigma^2$ by this calculation. This is the justification for the common preference of $s^2$ over $\hat{\sigma}^2$ in empirical practice. It is important to remember, however, that this estimator is only unbiased in the special case of the homoskedastic linear regression model. It is not unbiased in the absence of homoskedasticity, or in the projection model.
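A small Monte Carlo sketch (a hypothetical homoskedastic normal design, not from the text) illustrates the bias calculation: averaged across replications, $\hat{\sigma}^2$ is close to $\sigma^2(n-k)/n$ while $s^2$ is close to $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, sigma2, reps = 30, 4, 2.0, 20_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])   # fixed design
beta = np.zeros(k)

sig2_hat, s2 = np.empty(reps), np.empty(reps)
for r in range(reps):
    e = rng.normal(scale=np.sqrt(sigma2), size=n)
    y = X @ beta + e
    e_hat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    sig2_hat[r] = e_hat @ e_hat / n          # biased estimator (3.6)
    s2[r] = e_hat @ e_hat / (n - k)          # unbiased estimator (3.7)

print(sig2_hat.mean(), sigma2 * (n - k) / n)   # both approximately 1.733
print(s2.mean(), sigma2)                       # both approximately 2.0
```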

3.9 Gauss-Markov Theorem


In this section we restrict attention to the homoskedastic linear regression model, which is (2.8)-(2.9) plus $E\left(e_i^2 \mid x_i\right) = \sigma^2$. Now consider the class of estimators of $\beta$ which are linear functions of the vector $y$, and thus can be written as
$$
\tilde{\beta} = A'y
$$
where $A$ is an $n \times k$ function of $X$. The least-squares estimator is the special case obtained by setting $A = X\left(X'X\right)^{-1}$. What is the best choice of $A$? The Gauss-Markov theorem, which we now present, says that the least-squares estimator is the best choice, as it yields the smallest variance among all unbiased linear estimators.
By a calculation similar to those of the previous section,
$$
E\left(\tilde{\beta} \mid X\right) = A'X\beta,
$$
so $\tilde{\beta}$ is unbiased if (and only if) $A'X = I_k$. In this case, we can write
$$
\tilde{\beta} = A'y = A'(X\beta + e) = \beta + A'e.
$$
Thus since $\mathrm{var}(e \mid X) = I_n\sigma^2$ under homoskedasticity,
$$
\mathrm{var}\left(\tilde{\beta} \mid X\right) = A'\,\mathrm{var}(e \mid X)\,A = A'A\sigma^2.
$$
The "best" linear estimator is obtained by finding the matrix $A$ for which this variance is the smallest in the positive definite sense. The following result, known as the Gauss-Markov theorem, is a famous statement of the solution.

21
Theorem 3.9.1 Gauss-Markov. In the homoskedastic linear regression model, the best (minimum-variance)
unbiased linear estimator is OLS.

The Gauss-Markov theorem is an e ciency justi cation for the least-squares estimator, but it is quite
limited in scope. Not only has the class of models been restricted to homoskedastic linear regressions, the
class of potential estimators has been restricted to linear unbiased estimators. This latter restriction is
particularly unsatisfactory, as the theorem leaves open the possibility that a non-linear or biased estimator
could have lower mean squared error than the least-squares estimator.

3.10 Semiparametric Efficiency

In the previous section we presented the Gauss-Markov theorem as a limited efficiency justification for the least-squares estimator. A broader justification is provided in Chamberlain (1987), who established that in the projection model the OLS estimator has the smallest asymptotic mean-squared error among feasible estimators. This property is called semiparametric efficiency, and is a strong justification for the least-squares estimator. We discuss the intuition behind his result in this section.

Suppose that the joint distribution of $(y_i, x_i)$ is discrete. That is, for finite $r$,
\[
P \left( y_i = \tau_j, \; x_i = \xi_j \right) = \pi_j, \qquad j = 1, \ldots, r
\]
for some constants $\tau_j$, $\xi_j$, and $\pi_j$. Assume that the $\tau_j$ and $\xi_j$ are known, but the $\pi_j$ are unknown. (We know the values $y_i$ and $x_i$ can take, but we don't know the probabilities.)

In this discrete setting, the definition (3.3) can be rewritten as
\[
\beta = \left( \sum_{j=1}^{r} \pi_j \xi_j \xi_j' \right)^{-1} \left( \sum_{j=1}^{r} \pi_j \xi_j \tau_j \right). \tag{3.21}
\]
Thus $\beta$ is a function of $\left( \pi_1, \ldots, \pi_r \right)$.

As the data are multinomial, the maximum likelihood estimator (MLE) is
\[
\hat{\pi}_j = \frac{1}{n} \sum_{i=1}^{n} 1 \left( y_i = \tau_j \right) 1 \left( x_i = \xi_j \right)
\]
for $j = 1, \ldots, r$, where $1(\cdot)$ is the indicator function. That is, $\hat{\pi}_j$ is the percentage of the observations which fall in each category. The MLE $\hat{\beta}_{mle}$ for $\beta$ is then the analog of (3.21) with the parameters $\pi_j$ replaced by the estimates $\hat{\pi}_j$:
\[
\hat{\beta}_{mle} = \left( \sum_{j=1}^{r} \hat{\pi}_j \xi_j \xi_j' \right)^{-1} \left( \sum_{j=1}^{r} \hat{\pi}_j \xi_j \tau_j \right).
\]
Substituting in the expressions for $\hat{\pi}_j$,
\begin{align*}
\sum_{j=1}^{r} \hat{\pi}_j \xi_j \xi_j' &= \sum_{j=1}^{r} \frac{1}{n} \sum_{i=1}^{n} 1 \left( y_i = \tau_j \right) 1 \left( x_i = \xi_j \right) \xi_j \xi_j' \\
&= \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{r} 1 \left( y_i = \tau_j \right) 1 \left( x_i = \xi_j \right) x_i x_i' \\
&= \frac{1}{n} \sum_{i=1}^{n} x_i x_i'
\end{align*}
and
\begin{align*}
\sum_{j=1}^{r} \hat{\pi}_j \xi_j \tau_j &= \sum_{j=1}^{r} \frac{1}{n} \sum_{i=1}^{n} 1 \left( y_i = \tau_j \right) 1 \left( x_i = \xi_j \right) \xi_j \tau_j \\
&= \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{r} 1 \left( y_i = \tau_j \right) 1 \left( x_i = \xi_j \right) x_i y_i \\
&= \frac{1}{n} \sum_{i=1}^{n} x_i y_i.
\end{align*}
Thus
\[
\hat{\beta}_{mle} = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} x_i y_i \right) = \hat{\beta}_{ols}.
\]
In other words, if the data have a discrete distribution, the maximum likelihood estimator is identical to the OLS estimator. Since this is a regular parametric model the MLE is asymptotically efficient (see Appendix D), and thus so is the OLS estimator.

Chamberlain (1987) extends this argument to the case of continuously-distributed data: He observes that the above argument holds for all multinomial distributions, and any continuous distribution can be arbitrarily well approximated by a multinomial distribution. He proves that generically the OLS estimator (3.4) is an asymptotically efficient estimator for the parameter $\beta$ defined in (2.10) for the class of models satisfying Assumption 2.6.1.

3.11 Multicollinearity
If $\operatorname{rank}(X'X) < k$, then $\hat{\beta}$ is not defined (see Appendix A.5 for the definition of the rank of a matrix). This is called strict multicollinearity. This happens when the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that $X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $X$ includes both the logs of two prices and the log of the relative prices, $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.

The more relevant issue is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $X'X$ matrix is near singular, when the columns of $X$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.

One implication of near singularity of matrices is that the numerical reliability of the calculations is reduced. In extreme cases it is possible that the reported calculations will be in error due to floating-point calculation difficulties.

A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors
\[
y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,
\]
and
\[
\frac{1}{n} X'X = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.
\]
In this case
\[
\operatorname{var} \left( \hat{\beta} \mid X \right) = \frac{\sigma^2}{n} \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}^{-1} = \frac{\sigma^2}{n \left( 1 - \rho^2 \right)} \begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.
\]
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the asymptotic variance of a coefficient estimate, $\sigma^2 \left( 1 - \rho^2 \right)^{-1}$, approaches infinity as $\rho$ approaches 1. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.

What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of individual estimates is reduced.
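The variance inflation can be seen directly in a short simulation sketch (Python/numpy; the design below is illustrative): as the correlation between the two regressors rises toward one, the sampling spread of each individual slope estimate grows.

    import numpy as np

    rng = np.random.default_rng(2)
    n, reps = 100, 5000
    for rho in (0.0, 0.9, 0.99):
        L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
        slopes = []
        for _ in range(reps):
            X = rng.normal(size=(n, 2)) @ L.T       # regressors with correlation rho
            y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)
            slopes.append(np.linalg.solve(X.T @ X, X.T @ y)[0])
        # theoretical conditional variance is sigma^2 / (n (1 - rho^2)) = 1/(n(1 - rho^2)) here
        print(rho, np.var(slopes), 1.0 / (n * (1.0 - rho**2)))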

3.12 Influential Observations

The $i$'th observation is influential on the least-squares estimate if the deletion of the observation from the sample results in a meaningful change in $\hat{\beta}$. To investigate the possibility of influential observations, define the leave-one-out least-squares estimator of $\beta$, that is, the OLS estimator based on the sample excluding the $i$'th observation. This equals
\[
\hat{\beta}_{(-i)} = \left( X_{(-i)}'X_{(-i)} \right)^{-1} X_{(-i)}'y_{(-i)} \tag{3.22}
\]
where $X_{(-i)}$ and $y_{(-i)}$ are the data matrices omitting the $i$'th row. A convenient alternative expression (derived in Section 3.13) is
\[
\hat{\beta}_{(-i)} = \hat{\beta} - \left( 1 - h_i \right)^{-1} \left( X'X \right)^{-1} x_i \hat{e}_i \tag{3.23}
\]
where
\[
h_i = x_i' \left( X'X \right)^{-1} x_i
\]
is the $i$'th diagonal element of the projection matrix $X \left( X'X \right)^{-1} X'$.

We can also define the leave-one-out residual
\[
\hat{e}_{i,-i} = y_i - x_i'\hat{\beta}_{(-i)} = \left( 1 - h_i \right)^{-1} \hat{e}_i. \tag{3.24}
\]
A simple comparison yields that
\[
\hat{e}_{i,-i} - \hat{e}_i = \left( 1 - h_i \right)^{-1} h_i \hat{e}_i. \tag{3.25}
\]
As we can see, the change in the coefficient estimate by deletion of the $i$'th observation depends critically on the magnitude of $h_i$. The $h_i$ take values in $[0,1]$ and sum to $k$. If the $i$'th observation has a large value of $h_i$, then this observation is a leverage point and has the potential to be an influential observation. Investigations into the presence of influential observations can plot the values of (3.25), which is considerably more informative than plots of the uncorrected residuals $\hat{e}_i$.
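The following sketch (Python/numpy, with made-up data; not code from the text) computes the leverage values $h_i$ and checks formula (3.24) for the leave-one-out residual against brute-force deletion of an observation:

    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 50, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
    y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    ehat = y - X @ beta
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)     # leverage values h_i
    loo_formula = ehat / (1.0 - h)                   # formula (3.24)

    # brute-force leave-one-out for observation 0
    mask = np.arange(n) != 0
    b0 = np.linalg.solve(X[mask].T @ X[mask], X[mask].T @ y[mask])
    print(loo_formula[0], y[0] - X[0] @ b0)          # the two agree
    print(h.sum(), k)                                # leverages sum to k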

3.13 Technical Proofs


Proof of Theorem 3.9.1. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The variance of the least-squares estimator is $\left( X'X \right)^{-1} \sigma^2$ and that of $A'y$ is $A'A\sigma^2$. It is sufficient to show that the difference $A'A - \left( X'X \right)^{-1}$ is positive semi-definite. Set $C = A - X \left( X'X \right)^{-1}$. Note that $X'C = 0$. Then we calculate that
\begin{align*}
A'A - \left( X'X \right)^{-1} &= \left( C + X \left( X'X \right)^{-1} \right)' \left( C + X \left( X'X \right)^{-1} \right) - \left( X'X \right)^{-1} \\
&= C'C + C'X \left( X'X \right)^{-1} + \left( X'X \right)^{-1} X'C + \left( X'X \right)^{-1} X'X \left( X'X \right)^{-1} - \left( X'X \right)^{-1} \\
&= C'C.
\end{align*}
The matrix $C'C$ is positive semi-definite (see Appendix A.7) as required.

Proof of equation (3.23). Equation (A.2) in Appendix A.5 states that for nonsingular $A$ and vector $b$
\[
\left( A - bb' \right)^{-1} = A^{-1} + \left( 1 - b'A^{-1}b \right)^{-1} A^{-1}bb'A^{-1}.
\]
This implies
\[
\left( X'X - x_ix_i' \right)^{-1} = \left( X'X \right)^{-1} + \left( 1 - h_i \right)^{-1} \left( X'X \right)^{-1} x_ix_i' \left( X'X \right)^{-1}
\]
and thus
\begin{align*}
\hat{\beta}_{(-i)} &= \left( X'X - x_ix_i' \right)^{-1} \left( X'y - x_iy_i \right) \\
&= \left( X'X \right)^{-1} X'y - \left( X'X \right)^{-1} x_iy_i + \left( 1 - h_i \right)^{-1} \left( X'X \right)^{-1} x_ix_i' \left( X'X \right)^{-1} \left( X'y - x_iy_i \right) \\
&= \hat{\beta} - \left( X'X \right)^{-1} x_iy_i + \left( 1 - h_i \right)^{-1} \left( X'X \right)^{-1} x_i \left( x_i'\hat{\beta} - h_iy_i \right) \\
&= \hat{\beta} - \left( 1 - h_i \right)^{-1} \left( X'X \right)^{-1} x_i \left( \left( 1 - h_i \right) y_i - x_i'\hat{\beta} + h_iy_i \right) \\
&= \hat{\beta} - \left( 1 - h_i \right)^{-1} \left( X'X \right)^{-1} x_i \hat{e}_i,
\end{align*}
the third equality making the substitutions $\hat{\beta} = \left( X'X \right)^{-1} X'y$ and $h_i = x_i' \left( X'X \right)^{-1} x_i$, and the remainder collecting terms.
3.14 Exercises
1. Let $y$ be a random variable with $\mu = Ey$ and $\sigma^2 = \operatorname{var}(y)$. Define
\[
g \left( y, \mu, \sigma^2 \right) = \begin{pmatrix} y - \mu \\ \left( y - \mu \right)^2 - \sigma^2 \end{pmatrix}.
\]
Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $\bar{g}_n(\hat{\mu}, \hat{\sigma}^2) = 0$ where $\bar{g}_n(m, s) = n^{-1} \sum_{i=1}^{n} g \left( y_i, m, s \right)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.

2. Consider the OLS regression of the $n \times 1$ vector $y$ on the $n \times k$ matrix $X$. Consider an alternative set of regressors $Z = XC$, where $C$ is a $k \times k$ non-singular matrix. Thus, each column of $Z$ is a mixture of some of the columns of $X$. Compare the OLS estimates and residuals from the regression of $y$ on $X$ to the OLS estimates from the regression of $y$ on $Z$.

3. Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X = [X_1 \; X_2]$. Find $X_2'\hat{e}$.

4. Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X$. Find the OLS coefficient estimate from a regression of $\hat{e}$ on $X$.

5. Let $\hat{y} = X(X'X)^{-1}X'y$. Find the OLS coefficient estimate from a regression of $\hat{y}$ on $X$.

6. Prove that $R^2$ is the square of the simple correlation between $y$ and $\hat{y}$.

7. Explain the difference between $\frac{1}{n} \sum_{i=1}^{n} x_i x_i'$ and $E \left( x_i x_i' \right)$.

8. Let $\hat{\beta}_n = \left( X_n'X_n \right)^{-1} X_n'y_n$ denote the OLS estimate when $y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(y_{n+1}, x_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is
\[
\hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + x_{n+1}' \left( X_n'X_n \right)^{-1} x_{n+1}} \left( X_n'X_n \right)^{-1} x_{n+1} \left( y_{n+1} - x_{n+1}'\hat{\beta}_n \right).
\]

9. True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $E(e_i \mid x_i) = 0$, and $\hat{e}_i$ is the OLS residual from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^{n} x_i^2 \hat{e}_i = 0$.

10. A dummy variable takes on only the values 0 and 1. It is used for categorical data, such as an individual's gender. Let $d_1$ and $d_2$ be vectors of 1's and 0's, with the $i$'th element of $d_1$ equaling 1 and that of $d_2$ equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are $n_1$ men and $n_2$ women in the sample. Consider the three regressions
\begin{align}
y &= \mu + d_1\alpha_1 + d_2\alpha_2 + e \tag{3.26} \\
y &= d_1\alpha_1 + d_2\alpha_2 + e \tag{3.27} \\
y &= \mu + d_1\phi + e \tag{3.28}
\end{align}

(a) Can all three regressions (3.26), (3.27), and (3.28) be estimated by OLS? Explain if not.

(b) Compare regressions (3.27) and (3.28). Is one more general than the other? Explain the relationship between the parameters in (3.27) and (3.28).

(c) Compute $\iota'd_1$ and $\iota'd_2$, where $\iota$ is an $n \times 1$ vector of ones.

(d) Letting $\alpha = (\alpha_1 \; \alpha_2)'$, write equation (3.27) as $y = X\alpha + e$. Consider the assumption $E(x_ie_i) = 0$. Is there any content to this assumption in this setting?

11. Let $d_1$ and $d_2$ be defined as in the previous exercise.

(a) In the OLS regression
\[
y = d_1\hat{\alpha}_1 + d_2\hat{\alpha}_2 + \hat{u},
\]
show that $\hat{\alpha}_1$ is the sample mean of the dependent variable among the men of the sample ($\bar{y}_1$), and that $\hat{\alpha}_2$ is the sample mean among the women ($\bar{y}_2$).

(b) Describe in words the transformations
\begin{align*}
\tilde{y} &= y - d_1\bar{y}_1 - d_2\bar{y}_2 \\
\tilde{X} &= X - d_1\bar{X}_1 - d_2\bar{X}_2.
\end{align*}

(c) Compare $\tilde{\beta}$ from the OLS regression
\[
\tilde{y} = \tilde{X}\tilde{\beta} + \tilde{e}
\]
with $\hat{\beta}$ from the OLS regression
\[
y = d_1\hat{\alpha}_1 + d_2\hat{\alpha}_2 + X\hat{\beta} + \hat{e}.
\]

12. The data file cps85.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.S. Census Bureau. The file contains observations on nine variables, listed in the file cps85.pdf.

V1 = education (in years)
V2 = region of residence (coded 1 if South, 0 otherwise)
V3 = (coded 1 if nonwhite and non-Hispanic, 0 otherwise)
V4 = (coded 1 if Hispanic, 0 otherwise)
V5 = gender (coded 1 if female, 0 otherwise)
V6 = marital status (coded 1 if married, 0 otherwise)
V7 = potential labor market experience (in years)
V8 = union status (coded 1 if in union job, 0 otherwise)
V9 = hourly wage (in dollars)

Estimate a regression of wage $y_i$ on education $x_{1i}$, experience $x_{2i}$, and experience-squared $x_{3i} = x_{2i}^2$ (and a constant). Report the OLS estimates.

Let $\hat{e}_i$ be the OLS residual and $\hat{y}_i$ the predicted value from the regression. Numerically calculate the following:

(a) $\sum_{i=1}^{n} \hat{e}_i$
(b) $\sum_{i=1}^{n} x_{1i}\hat{e}_i$
(c) $\sum_{i=1}^{n} x_{2i}\hat{e}_i$
(d) $\sum_{i=1}^{n} x_{1i}^2\hat{e}_i$
(e) $\sum_{i=1}^{n} x_{2i}^2\hat{e}_i$
(f) $\sum_{i=1}^{n} \hat{y}_i\hat{e}_i$
(g) $\sum_{i=1}^{n} \hat{e}_i^2$
(h) $R^2$

Are these calculations consistent with the theoretical properties of OLS? Explain.

13. Using the data from the previous problem, re-estimate the slope on education using the residual regression approach. Regress $y_i$ on $(1, x_{2i}, x_{2i}^2)$, regress $x_{1i}$ on $(1, x_{2i}, x_{2i}^2)$, and regress the residuals on the residuals. Report the estimate from this regression. Does it equal the value from the first OLS regression? Explain.

In the second-stage residual regression (the regression of the residuals on the residuals), calculate the equation $R^2$ and sum of squared errors. Do they equal the values from the initial OLS regression? Explain.
Chapter 4

Inference

4.1 Sampling Distribution


The least-squares estimator is a random vector, since it is a function of the random data, and therefore has a sampling distribution. In general, its distribution is a complicated function of the joint distribution of $(y_i, x_i)$ and the sample size $n$.

Figure 4.1: Sampling Density of $\hat{\beta}_2$

To illustrate the possibilities in one example, let $y_i$ and $x_i$ be drawn from the joint density
\[
f(x, y) = \frac{1}{2\pi xy} \exp \left( -\frac{1}{2} \left( \log y - \log x \right)^2 \right) \exp \left( -\frac{1}{2} \left( \log x \right)^2 \right)
\]
and let $\hat{\beta}_2$ be the slope coefficient estimate computed on observations from this joint density. Using simulation methods, the density function of $\hat{\beta}_2$ was computed and plotted in Figure 4.1 for sample sizes of $n = 25$, $n = 100$ and $n = 800$. The vertical line marks the true value of the projection coefficient.

From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. To characterize the sampling distribution more fully, we will use the methods of asymptotic approximation. A review of the most important tools in asymptotic theory is contained in Appendix C.
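A reader who wants to run an exercise in this spirit can simulate from this joint density directly, since it corresponds to $\log x_i \sim \mathrm{N}(0,1)$ and $\log y_i = \log x_i + \mathrm{N}(0,1)$. The sketch below (Python/numpy; the regression includes an intercept and $\hat{\beta}_2$ denotes the slope, which is an assumption about the design behind the figure) summarizes the sampling spread of the slope estimate at several sample sizes:

    import numpy as np

    rng = np.random.default_rng(4)
    reps = 5000
    for n in (25, 100, 800):
        slopes = []
        for _ in range(reps):
            x = np.exp(rng.normal(size=n))           # log x ~ N(0,1)
            y = x * np.exp(rng.normal(size=n))       # log y = log x + N(0,1)
            X = np.column_stack([np.ones(n), x])
            slopes.append(np.linalg.solve(X.T @ X, X.T @ y)[1])
        slopes = np.array(slopes)
        print(n, slopes.mean(), np.percentile(slopes, [5, 50, 95]))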

4.2 Consistency
As discussed in Section 4.1, the OLS estimator $\hat{\beta}$ has a statistical distribution which is unknown. Asymptotic (large sample) methods approximate sampling distributions based on the limiting experiment that the sample size $n$ tends to infinity. A preliminary step in this approach is the demonstration that estimators converge in probability to the true parameters as the sample size gets large. This is illustrated in Figure 4.1 by the fact that the sampling densities become more concentrated as $n$ gets larger.

This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the weak law of large numbers (WLLN, Theorem C.2.1) shows that sample moments converge in probability to population moments. And third, the continuous mapping theorem (Theorem C.4.1) states that continuous functions preserve convergence in probability. We now explain each step.

First, the OLS estimator
\[
\hat{\beta} = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} x_i y_i \right)
\]
is a continuous function of the sample moments $\frac{1}{n} \sum_{i=1}^{n} x_i x_i'$ and $\frac{1}{n} \sum_{i=1}^{n} x_i y_i$.

Second, the WLLN states that for any iid random variable $u_i$ such that $E|u_i| < \infty$,
\[
\frac{1}{n} \sum_{i=1}^{n} u_i \stackrel{p}{\longrightarrow} E \left( u_i \right)
\]
as $n \to \infty$. We want to apply the WLLN to the elements of the sample moments $\frac{1}{n} \sum_{i=1}^{n} x_i x_i'$ and $\frac{1}{n} \sum_{i=1}^{n} x_i y_i$. This is okay since these are sample averages of the random variables $x_{ji}x_{li}$ and $x_{ji}y_i$. The WLLN says that for any $j$ and $l$,
\[
\frac{1}{n} \sum_{i=1}^{n} x_{ji}x_{li} \stackrel{p}{\longrightarrow} E \left( x_{ji}x_{li} \right)
\]
and
\[
\frac{1}{n} \sum_{i=1}^{n} x_{ji}y_i \stackrel{p}{\longrightarrow} E \left( x_{ji}y_i \right).
\]
Since this holds for all elements in the matrix $\frac{1}{n} \sum_{i=1}^{n} x_i x_i'$ and the vector $\frac{1}{n} \sum_{i=1}^{n} x_i y_i$, it follows that
\[
\frac{1}{n} \sum_{i=1}^{n} x_i x_i' \stackrel{p}{\longrightarrow} E \left( x_i x_i' \right) = Q \tag{4.1}
\]
and
\[
\frac{1}{n} \sum_{i=1}^{n} x_i y_i \stackrel{p}{\longrightarrow} E \left( x_i y_i \right). \tag{4.2}
\]
The continuous mapping theorem then implies that
\[
\hat{\beta} = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} x_i y_i \right) \stackrel{p}{\longrightarrow} \left( E \left( x_i x_i' \right) \right)^{-1} \left( E \left( x_i y_i \right) \right) = \beta. \tag{4.3}
\]
We have shown that $\hat{\beta} \stackrel{p}{\longrightarrow} \beta$. In words, the OLS estimator converges in probability to the true parameter vector as the sample size $n$ gets large.

For a slightly different demonstration of this result, recall that (3.8) implies that
\[
\hat{\beta} - \beta = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} x_i e_i \right). \tag{4.4}
\]
The WLLN and (2.14) imply
\[
\frac{1}{n} \sum_{i=1}^{n} x_i e_i \stackrel{p}{\longrightarrow} E \left( x_i e_i \right) = 0. \tag{4.5}
\]
Therefore
\[
\hat{\beta} - \beta = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} x_i e_i \right) \stackrel{p}{\longrightarrow} Q^{-1} 0 = 0,
\]
which is the same as $\hat{\beta} \stackrel{p}{\longrightarrow} \beta$.

Theorem 4.2.1 Under Assumptions 2.6.1 and 3.1.1, $\hat{\beta} \stackrel{p}{\longrightarrow} \beta$ as $n \to \infty$.

For further details on the proof see Section 4.15.

Theorem 4.2.1 states that the OLS estimator $\hat{\beta}$ converges in probability to $\beta$ as $n$ diverges to positive infinity. When an estimator converges in probability to the true value as the sample size diverges, we say that the estimator is consistent. This is a good property for an estimator to possess. It means that for any given joint distribution of $(y_i, x_i)$, there is a sample size $n$ sufficiently large such that the estimator $\hat{\beta}$ will be arbitrarily close to the true value with high probability. Consistency is also an important preliminary step in establishing other important asymptotic approximations.

We can similarly show that the estimators $\hat{\sigma}^2$ and $s^2$ are consistent for $\sigma^2$.

Theorem 4.2.2 Under Assumptions 2.6.1 and 3.1.1, $\hat{\sigma}^2 \stackrel{p}{\longrightarrow} \sigma^2$ and $s^2 \stackrel{p}{\longrightarrow} \sigma^2$ as $n \to \infty$.

The proof is given in Section 4.15.

One implication of this theorem is that multiple estimators can be consistent for the same population parameter. While $\hat{\sigma}^2$ and $s^2$ are unequal in any given application, they are close in value when $n$ is very large.

4.3 Asymptotic Normality


We started this chapter discussing the need for an approximation to the distribution of the OLS estimator $\hat{\beta}$. In the previous section we showed that $\hat{\beta}$ converges in probability to $\beta$. This is a useful first step, but in itself does not provide a useful approximation to the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution of the estimator.

The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. The steps are as follows.

Take equation (4.4) and multiply it by $\sqrt{n}$. This yields the expression
\[
\sqrt{n} \left( \hat{\beta} - \beta \right) = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i e_i \right). \tag{4.6}
\]
This shows that the normalized and centered estimator $\sqrt{n} \left( \hat{\beta} - \beta \right)$ is a function of the sample average $\frac{1}{n} \sum_{i=1}^{n} x_i x_i'$ and the normalized sample average $\frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i e_i$. Furthermore, the latter has mean zero so the central limit theorem (CLT) applies. Recall, the CLT (Theorem C.3.1) states that if $u_i \in \mathbb{R}^k$ is iid, $Eu_i = 0$ and $Eu_{ji}^2 < \infty$ for $j = 1, \ldots, k$, then as $n \to \infty$
\[
\frac{1}{\sqrt{n}} \sum_{i=1}^{n} u_i \stackrel{d}{\longrightarrow} \mathrm{N} \left( 0, E \left( u_iu_i' \right) \right).
\]
For our application, $u_i = x_ie_i$, and $E \left( x_ie_i \right) = 0$. We calculate that $E \left( u_iu_i' \right) = E \left( x_ix_i'e_i^2 \right)$ and define this matrix by $\Omega$. By the CLT we conclude
\[
\frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_ie_i \stackrel{d}{\longrightarrow} \mathrm{N} \left( 0, \Omega \right) \tag{4.7}
\]
where
\[
\Omega = E \left( x_ix_i'e_i^2 \right).
\]
Then using (4.6), (4.1), and (4.7),
\[
\sqrt{n} \left( \hat{\beta} - \beta \right) \stackrel{d}{\longrightarrow} Q^{-1} \mathrm{N} \left( 0, \Omega \right) = \mathrm{N} \left( 0, Q^{-1}\Omega Q^{-1} \right),
\]
where the final equality follows from a property of the normal distribution.

A formal statement of this result requires the following strengthening of the moment conditions.

Assumption 4.3.1 In addition to Assumption 2.6.1, $Ee_i^4 < \infty$ and for $j = 1, \ldots, k$, $Ex_{ji}^4 < \infty$.

Theorem 4.3.1 Under Assumptions 3.1.1 and 4.3.1, as $n \to \infty$
\[
\sqrt{n} \left( \hat{\beta} - \beta \right) \stackrel{d}{\longrightarrow} \mathrm{N} \left( 0, V \right)
\]
where
\[
V = Q^{-1}\Omega Q^{-1}.
\]
Further details of the proof are given in Section 4.15.

As $V$ is the variance of the asymptotic distribution of $\sqrt{n} \left( \hat{\beta} - \beta \right)$, $V$ is often referred to as the asymptotic covariance matrix of $\hat{\beta}$. The expression $V = Q^{-1}\Omega Q^{-1}$ is called a sandwich form.

Theorem 4.3.1 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds true for all joint distributions of $(y_i, x_i)$ which satisfy the conditions of Assumption 4.3.1. However, for any fixed $n$ the sampling distribution of $\hat{\beta}$ can be arbitrarily far from the normal distribution. In Figure 4.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes.

There is a special case where $\Omega$ and $V$ simplify. We say that $e_i$ is a Homoskedastic Projection Error when
\[
\operatorname{cov}(x_ix_i', e_i^2) = 0. \tag{4.8}
\]
Condition (4.8) holds, for example, when $x_i$ and $e_i$ are independent, but this is not a necessary condition. Under (4.8) the asymptotic variance formulas simplify as
\begin{align}
\Omega &= E \left( x_ix_i' \right) E \left( e_i^2 \right) = Q\sigma^2 \tag{4.9} \\
V &= Q^{-1}\Omega Q^{-1} = Q^{-1}\sigma^2 \equiv V^0. \tag{4.10}
\end{align}
In (4.10) we define $V^0 = Q^{-1}\sigma^2$ whether (4.8) is true or false. When (4.8) is true then $V = V^0$, otherwise $V \neq V^0$. We call $V^0$ the homoskedastic covariance matrix.

The asymptotic distribution of Theorem 4.3.1 is commonly used to approximate the finite sample distribution of $\sqrt{n} \left( \hat{\beta} - \beta \right)$. The approximation may be poor when $n$ is small. How large should $n$ be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let $y_i = \beta_0 + \beta_1 x_i + e_i$ where $x_i$ is $\mathrm{N}(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e) = \frac{\alpha}{2} |e|^{-\alpha-1}$, $|e| \geq 1$. If $\alpha > 2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha - 2)$. As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\frac{\alpha-2}{\alpha}} \left( \hat{\beta}_1 - \beta_1 \right)$ has the $\mathrm{N}(0,1)$ asymptotic distribution. In Figure 4.2 we display the finite sample densities of the normalized estimator $\sqrt{n\frac{\alpha-2}{\alpha}} \left( \hat{\beta}_1 - \beta_1 \right)$, setting $n = 100$ and varying the parameter $\alpha$. For $\alpha = 3.0$ the density is very close to the $\mathrm{N}(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero.

Figure 4.2: Density of Normalized OLS estimator

Another example is shown in Figure 4.3. Here the model is $y_i = \beta_1 + e_i$ where
\[
e_i = \frac{u_i^k - E \left( u_i^k \right)}{\left( E \left( u_i^{2k} \right) - \left( E \left( u_i^k \right) \right)^2 \right)^{1/2}}
\]
and $u_i \sim \mathrm{N}(0,1)$. We show the sampling distribution of $\sqrt{n} \left( \hat{\beta}_1 - \beta_1 \right)$ setting $n = 100$, for $k = 1$, 4, 6 and 8. As $k$ increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 4.2 and 4.3 is that the $\mathrm{N}(0,1)$ asymptotic approximation is never guaranteed to be accurate.

Figure 4.3: Sampling distribution
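Both experiments are easy to replicate. For the Double Pareto design of Figure 4.2, if $U$ is uniform on $(0,1)$ then $U^{-1/\alpha}$ has the distribution of $|e_i|$, and a random sign completes the draw. The sketch below (Python/numpy; the summary statistics printed are my own choice, not the figure itself) simulates the normalized slope estimator at $n = 100$ for several values of $\alpha$:

    import numpy as np

    rng = np.random.default_rng(5)
    n, reps = 100, 10000
    beta0, beta1 = 0.0, 1.0
    for alpha in (3.0, 2.5, 2.1):
        scale = np.sqrt(n * (alpha - 2.0) / alpha)    # normalization sqrt(n(alpha-2)/alpha)
        stats = []
        for _ in range(reps):
            x = rng.normal(size=n)
            e = rng.uniform(size=n) ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], size=n)
            y = beta0 + beta1 * x + e
            X = np.column_stack([np.ones(n), x])
            b = np.linalg.solve(X.T @ X, X.T @ y)
            stats.append(scale * (b[1] - beta1))
        stats = np.array(stats)
        # under the N(0,1) approximation these should be near 0, 1, and 0.05
        print(alpha, stats.mean(), stats.std(), np.mean(np.abs(stats) > 1.96))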

4.4 Covariance Matrix Estimation


Let
\[
\hat{Q} = \frac{1}{n} \sum_{i=1}^{n} x_ix_i'
\]
be the method of moments estimator for $Q$. The homoskedastic covariance matrix $V^0 = Q^{-1}\sigma^2$ is typically estimated by
\[
\hat{V}^0 = \hat{Q}^{-1}s^2. \tag{4.11}
\]
Since $\hat{Q} \stackrel{p}{\longrightarrow} Q$ and $s^2 \stackrel{p}{\longrightarrow} \sigma^2$ (see (4.1) and Theorem 4.2.1) it is clear that $\hat{V}^0 \stackrel{p}{\longrightarrow} V^0$. The estimator $\hat{\sigma}^2$ may also be substituted for $s^2$ in (4.11) without changing this result.

To estimate $V = Q^{-1}\Omega Q^{-1}$, we need an estimate of $\Omega = E \left( x_ix_i'e_i^2 \right)$. The MME estimator is
\[
\hat{\Omega} = \frac{1}{n} \sum_{i=1}^{n} x_ix_i'\hat{e}_i^2 \tag{4.12}
\]
where $\hat{e}_i$ are the OLS residuals. The estimator of $V$ is then
\[
\hat{V} = \hat{Q}^{-1}\hat{\Omega}\hat{Q}^{-1}.
\]
This estimator was introduced to the econometrics literature by White (1980).

Theorem 4.4.1 Under Assumptions 3.1.1 and 4.3.1, as $n \to \infty$, $\hat{\Omega} \stackrel{p}{\longrightarrow} \Omega$ and $\hat{V} \stackrel{p}{\longrightarrow} V$.

The estimator $\hat{V}^0$ was the dominant covariance estimator used before 1980, and was still the standard choice for much empirical work done in the early 1980s. The methods switched during the late 1980s and early 1990s, so that by the late 1990s the White estimate $\hat{V}$ emerged as the standard covariance matrix estimator. When reading and reporting applied work, it is important to pay attention to the distinction between $\hat{V}^0$ and $\hat{V}$, as it is not always clear which has been computed. When $\hat{V}$ is used rather than the traditional choice $\hat{V}^0$, many authors will state that their "standard errors have been corrected for heteroskedasticity", or that they use a "heteroskedasticity-robust covariance matrix estimator", or that they use the "White formula", the "Eicker-White formula", the "Huber formula", the "Huber-White formula" or the "GMM covariance matrix". In most cases, these all mean the same thing.

The variance estimator $\hat{V}$ is an estimate of the variance of the asymptotic distribution of $\hat{\beta}$. A more easily interpretable measure of spread is its square root, the standard deviation. This motivates the definition of a standard error.

Definition 4.4.1 A standard error $s(\hat{\beta})$ for an estimator $\hat{\beta}$ is an estimate of the standard deviation of the distribution of $\hat{\beta}$.

When $\beta$ is scalar, and $\hat{V}$ is an estimator of the variance of $\sqrt{n} \left( \hat{\beta} - \beta \right)$, we set $s(\hat{\beta}) = n^{-1/2}\sqrt{\hat{V}}$. When $\beta$ is a vector, we focus on individual elements of $\beta$ one-at-a-time, viz., $\beta_j$, $j = 0, 1, \ldots, k$. Thus
\[
s(\hat{\beta}_j) = n^{-1/2}\sqrt{\hat{V}_{jj}}.
\]
Generically, standard errors are not unique, as there may be more than one estimator of the variance of the estimator. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions, just as any other estimator.

From a computational standpoint, the standard method to calculate the standard errors is to first calculate $n^{-1}\hat{V}$, then take the diagonal elements, and then the square roots.
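This recipe is a few lines of code. The sketch below (Python/numpy; the simulated data loosely mimic the education/log-wage scale of the example that follows, but the numbers are illustrative, not the data from the text) computes $\hat{Q}$, $\hat{\Omega}$, both covariance estimators, and the corresponding standard errors:

    import numpy as np

    rng = np.random.default_rng(11)
    n = 988
    x = rng.normal(loc=14.0, scale=3.0, size=n)      # a single regressor (plus intercept)
    y = 1.3 + 0.13 * x + rng.normal(scale=0.45, size=n)
    X = np.column_stack([np.ones(n), x])

    Q_inv = np.linalg.inv(X.T @ X / n)
    beta = Q_inv @ (X.T @ y / n)
    ehat = y - X @ beta
    s2 = ehat @ ehat / (n - X.shape[1])

    V0 = Q_inv * s2                                   # homoskedastic estimator (4.11)
    Omega = (X * ehat[:, None] ** 2).T @ X / n        # (4.12)
    V_white = Q_inv @ Omega @ Q_inv                   # White estimator

    print(beta)
    print(np.sqrt(np.diag(V0) / n))                   # homoskedastic standard errors
    print(np.sqrt(np.diag(V_white) / n))              # White standard errors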
To illustrate, we return to the log wage regression of Section 3.2. We calculate that $s^2 = 0.20$ and
\[
\hat{\Omega} = \begin{pmatrix} 0.199 & 2.80 \\ 2.80 & 40.6 \end{pmatrix}.
\]
Therefore the two covariance matrix estimates are
\[
\hat{V}^0 = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1} (0.20) = \begin{pmatrix} 6.98 & -0.480 \\ -0.480 & 0.039 \end{pmatrix}
\]
and
\[
\hat{V} = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1} \begin{pmatrix} 0.199 & 2.80 \\ 2.80 & 40.6 \end{pmatrix} \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1} = \begin{pmatrix} 7.20 & -0.493 \\ -0.493 & 0.035 \end{pmatrix}.
\]
In this case the two estimates are quite similar. The (White) standard error for $\hat{\beta}_0$ is $\sqrt{7.20/988} = 0.085$ and that for $\hat{\beta}_1$ is $\sqrt{0.035/988} = 0.006$. We can write the estimated equation with standard errors using the format
\[
\widehat{\log(Wage)} = \underset{(0.085)}{1.313} + \underset{(0.006)}{0.128} \; Education.
\]

4.5 Alternative Covariance Matrix Estimators


MacKinnon and White (1985) suggested a small-sample corrected version of $\hat{V}$ based on the jackknife principle. Recall from Section 3.12 the definition of $\hat{\beta}_{(-i)}$ as the least-squares estimator with the $i$'th observation deleted. From equation (3.13) of Efron (1982), the jackknife estimator of the variance matrix for $\hat{\beta}$ is
\[
\hat{V}^* = (n-1) \sum_{i=1}^{n} \left( \hat{\beta}_{(-i)} - \bar{\beta} \right) \left( \hat{\beta}_{(-i)} - \bar{\beta} \right)' \tag{4.13}
\]
where
\[
\bar{\beta} = \frac{1}{n} \sum_{i=1}^{n} \hat{\beta}_{(-i)}.
\]
Using formula (3.23), you can show that
\[
\hat{V}^* = \frac{n-1}{n} \hat{Q}^{-1}\hat{\Omega}^*\hat{Q}^{-1} \tag{4.14}
\]
where
\[
\hat{\Omega}^* = \frac{1}{n} \sum_{i=1}^{n} \left( 1 - h_i \right)^{-2} x_ix_i'\hat{e}_i^2 - \left( \frac{1}{n} \sum_{i=1}^{n} \left( 1 - h_i \right)^{-1} x_i\hat{e}_i \right) \left( \frac{1}{n} \sum_{i=1}^{n} \left( 1 - h_i \right)^{-1} x_i\hat{e}_i \right)'
\]
and $h_i = x_i' \left( X'X \right)^{-1} x_i$. MacKinnon and White (1985) present numerical (simulation) evidence that $\hat{V}^*$ works better than $\hat{V}$ as an estimator of $V$. They also suggest that the scaling factor $(n-1)/n$ in (4.14) can be omitted.

Andrews (1991) suggested a similar estimator based on cross-validation, which is defined by replacing the OLS residual $\hat{e}_i$ in (4.12) with the leave-one-out estimator $\hat{e}_{i,-i} = \left( 1 - h_i \right)^{-1} \hat{e}_i$ presented in (3.24). Using this substitution, Andrews' proposed estimator is
\[
\hat{V}^{**} = \hat{Q}^{-1}\hat{\Omega}^{**}\hat{Q}^{-1}
\]
where
\[
\hat{\Omega}^{**} = \frac{1}{n} \sum_{i=1}^{n} \left( 1 - h_i \right)^{-2} x_ix_i'\hat{e}_i^2.
\]
It is similar to the MacKinnon-White estimator $\hat{V}^*$, but omits the mean correction. Andrews (1991) argues that simulation evidence indicates that $\hat{V}^{**}$ is an improvement on $\hat{V}^*$.
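For readers who want to see these corrections in computation, here is a compact sketch (Python/numpy; the heteroskedastic design is illustrative, and the $(n-1)/n$ scaling factor is omitted as the authors suggest) comparing standard errors from the plain White estimator, the mean-corrected jackknife-type estimator, and the leave-one-out version:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 200
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = 1.0 + 0.5 * x + rng.normal(size=n) * (0.5 + np.abs(x))   # heteroskedastic errors

    XtX_inv = np.linalg.inv(X.T @ X)
    Q_inv = XtX_inv * n
    beta = XtX_inv @ X.T @ y
    ehat = y - X @ beta
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)                  # leverage values h_i

    Omega = (X * ehat[:, None] ** 2).T @ X / n                   # White, formula (4.12)
    u = X * (ehat / (1.0 - h))[:, None]                          # rows (1-h_i)^{-1} x_i e_i
    Omega_jack = u.T @ u / n - np.outer(u.mean(axis=0), u.mean(axis=0))   # with mean correction
    Omega_loo = u.T @ u / n                                      # leave-one-out, no mean correction

    for name, Om in [("White", Omega), ("jackknife-type", Omega_jack), ("leave-one-out", Omega_loo)]:
        V = Q_inv @ Om @ Q_inv
        print(name, np.sqrt(np.diag(V) / n))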

4.6 Functions of Parameters


Sometimes we are interested in some lower-dimensional function of the parameter vector $\beta = (\beta_1, \ldots, \beta_k)$. For example, we may be interested in a single coefficient $\beta_j$ or a ratio $\beta_j/\beta_l$. In these cases we can write the parameter of interest as a function of $\beta$. Let $h : \mathbb{R}^k \to \mathbb{R}^q$ denote this function and let
\[
\theta = h(\beta)
\]
denote the parameter of interest. The estimate of $\theta$ is
\[
\hat{\theta} = h(\hat{\beta}).
\]
What is an appropriate standard error for $\hat{\theta}$? Assume that $h(\beta)$ is differentiable at the true value of $\beta$. By a first-order Taylor series approximation:
\[
h(\hat{\beta}) \simeq h(\beta) + H_\beta' \left( \hat{\beta} - \beta \right)
\]
where
\[
H_\beta = \frac{\partial}{\partial\beta} h(\beta) \qquad (k \times q).
\]
Thus
\begin{align}
\sqrt{n} \left( \hat{\theta} - \theta \right) &= \sqrt{n} \left( h(\hat{\beta}) - h(\beta) \right) \nonumber \\
&\simeq H_\beta' \sqrt{n} \left( \hat{\beta} - \beta \right) \nonumber \\
&\stackrel{d}{\longrightarrow} H_\beta' \, \mathrm{N} \left( 0, V \right) \nonumber \\
&= \mathrm{N} \left( 0, V_\theta \right) \tag{4.15}
\end{align}
where
\[
V_\theta = H_\beta' V H_\beta.
\]
If $\hat{V}$ is the estimated covariance matrix for $\hat{\beta}$, then the natural estimate for the variance of $\hat{\theta}$ is
\[
\hat{V}_\theta = \hat{H}_\beta' \hat{V} \hat{H}_\beta
\]
where
\[
\hat{H}_\beta = \frac{\partial}{\partial\beta} h(\hat{\beta}).
\]
In many cases, the function $h(\beta)$ is linear:
\[
h(\beta) = R'\beta
\]
for some $k \times q$ matrix $R$. In this case, $H_\beta = R$ and $\hat{H}_\beta = R$, so $\hat{V}_\theta = R'\hat{V}R$.

For example, if $R$ is a "selector matrix"
\[
R = \begin{pmatrix} I \\ 0 \end{pmatrix}
\]
so that if $\beta = (\beta_1, \beta_2)$, then $\theta = R'\beta = \beta_1$ and
\[
\hat{V}_\theta = \begin{pmatrix} I & 0 \end{pmatrix} \hat{V} \begin{pmatrix} I \\ 0 \end{pmatrix} = \hat{V}_{11},
\]
the upper-left block of $\hat{V}$.

When $q = 1$ (so $h(\beta)$ is real-valued), the standard error for $\hat{\theta}$ is the square root of $n^{-1}\hat{V}_\theta$, that is,
\[
s(\hat{\theta}) = n^{-1/2} \sqrt{\hat{H}_\beta' \hat{V} \hat{H}_\beta}.
\]
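As a concrete illustration of this delta-method calculation, the sketch below (Python/numpy; the ratio $\theta = \beta_1/\beta_2$ and the simulated design are my own example) computes the standard error of a ratio of two slope coefficients using the gradient $\hat{H}_\beta$ evaluated at the estimates:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ np.array([0.5, 1.0, 2.0]) + rng.normal(size=n)

    Q_inv = np.linalg.inv(X.T @ X / n)
    beta = Q_inv @ (X.T @ y / n)
    ehat = y - X @ beta
    Omega = (X * ehat[:, None] ** 2).T @ X / n
    V = Q_inv @ Omega @ Q_inv                       # covariance of sqrt(n)(beta_hat - beta)

    theta = beta[1] / beta[2]                       # h(beta) = beta_1 / beta_2
    H = np.array([0.0, 1.0 / beta[2], -beta[1] / beta[2] ** 2])   # gradient of h at beta_hat
    V_theta = H @ V @ H
    se_theta = np.sqrt(V_theta / n)
    print(theta, se_theta)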

4.7 t tests
Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}$ be any parameter of interest, $\hat{\theta}$ its estimate and $s(\hat{\theta})$ its asymptotic standard error. Consider the studentized statistic
\[
t_n(\theta) = \frac{\hat{\theta} - \theta}{s(\hat{\theta})}. \tag{4.16}
\]

Theorem 4.7.1 $t_n(\theta) \stackrel{d}{\longrightarrow} \mathrm{N}(0,1)$

Thus the asymptotic distribution of the t-ratio $t_n(\theta)$ is the standard normal. Since the standard normal distribution does not depend on the parameters, we say that $t_n(\theta)$ is asymptotically pivotal. In special cases (such as the normal regression model, see Section 3.4), the statistic $t_n$ has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that $t_n$ is an exactly pivotal statistic. In general, however, pivotal statistics are unavailable and so we must rely on asymptotically pivotal statistics.

A simple null and composite hypothesis takes the form
\begin{align*}
H_0 &: \theta = \theta_0 \\
H_1 &: \theta \neq \theta_0
\end{align*}
where $\theta_0$ is some pre-specified value, and $\theta = h(\beta)$ is some function of the parameter vector. (For example, $\theta$ could be a single element of $\beta$).

The standard test for $H_0$ against $H_1$ is the t-statistic (or studentized statistic)
\[
t_n = t_n(\theta_0) = \frac{\hat{\theta} - \theta_0}{s(\hat{\theta})}.
\]
Under $H_0$, $t_n \stackrel{d}{\longrightarrow} \mathrm{N}(0,1)$. Let $z_{\alpha/2}$ be the upper $\alpha/2$ quantile of the standard normal distribution. That is, if $Z \sim \mathrm{N}(0,1)$, then $P(Z > z_{\alpha/2}) = \alpha/2$ and $P(|Z| > z_{\alpha/2}) = \alpha$. For example, $z_{0.025} = 1.96$ and $z_{0.05} = 1.645$. A test of asymptotic significance $\alpha$ rejects $H_0$ if $|t_n| > z_{\alpha/2}$. Otherwise the test does not reject, or "accepts" $H_0$. This is because
\[
P \left( \text{reject } H_0 \mid H_0 \text{ true} \right) = P \left( |t_n| > z_{\alpha/2} \mid \theta = \theta_0 \right) \to P \left( |Z| > z_{\alpha/2} \right) = \alpha.
\]
The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing.

An alternative approach, associated with Fisher, is to report an asymptotic p-value. The asymptotic p-value for the above statistic is constructed as follows. Define the tail probability, or asymptotic p-value function
\[
p(t) = P \left( |Z| > |t| \right) = 2 \left( 1 - \Phi(|t|) \right).
\]
Then the asymptotic p-value of the statistic $t_n$ is
\[
p_n = p(t_n).
\]
If the p-value $p_n$ is small (close to zero) then the evidence against $H_0$ is strong. In a sense, p-values and hypothesis tests are equivalent since $p_n < \alpha$ if and only if $|t_n| > z_{\alpha/2}$. Thus an equivalent statement of a Neyman-Pearson test is to reject at the $\alpha\%$ level if and only if $p_n < \alpha$. The p-value is more general, however, in that the reader is allowed to pick the level of significance $\alpha$, in contrast to Neyman-Pearson rejection/acceptance reporting where the researcher picks the level.

Another helpful observation is that the p-value function has simply made a unit-free transformation of the test statistic. That is, under $H_0$, $p_n \stackrel{d}{\longrightarrow} \mathrm{U}[0,1]$, so the "unusualness" of the test statistic can be compared to the easy-to-understand uniform distribution, regardless of the complication of the distribution of the original test statistic. To see this fact, note that the asymptotic distribution of $|t_n|$ is $F(x) = 1 - p(x)$. Thus
\begin{align*}
P \left( 1 - p_n \leq u \right) &= P \left( 1 - p(t_n) \leq u \right) \\
&= P \left( F(|t_n|) \leq u \right) \\
&= P \left( |t_n| \leq F^{-1}(u) \right) \\
&\to F \left( F^{-1}(u) \right) = u,
\end{align*}
establishing that $1 - p_n \stackrel{d}{\longrightarrow} \mathrm{U}[0,1]$, from which it follows that $p_n \stackrel{d}{\longrightarrow} \mathrm{U}[0,1]$.

4.8 Confidence Intervals

A confidence interval $C_n$ is an interval estimate of $\theta \in \mathbb{R}$, and is a function of the data and hence is random. It is designed to cover $\theta$ with high probability. Either $\theta \in C_n$ or $\theta \notin C_n$. The coverage probability is $P(\theta \in C_n)$.

We typically cannot calculate the exact coverage probability $P(\theta \in C_n)$. However we often can calculate the asymptotic coverage probability $\lim_{n \to \infty} P(\theta \in C_n)$. We say that $C_n$ has asymptotic $(1-\alpha)\%$ coverage for $\theta$ if $P(\theta \in C_n) \to 1 - \alpha$ as $n \to \infty$.

A good method for construction of a confidence interval is the collection of parameter values which are not rejected by a statistical test. The t-test of the previous section rejects $H_0 : \theta = \theta_0$ if $|t_n(\theta_0)| > z_{\alpha/2}$ where $t_n(\theta)$ is the t-statistic (4.16) and $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution. A confidence interval is then constructed as the values of $\theta$ for which this test does not reject:
\begin{align}
C_n &= \left\{ \theta : |t_n(\theta)| \leq z_{\alpha/2} \right\} \nonumber \\
&= \left\{ \theta : -z_{\alpha/2} \leq \frac{\hat{\theta} - \theta}{s(\hat{\theta})} \leq z_{\alpha/2} \right\} \nonumber \\
&= \left[ \hat{\theta} - z_{\alpha/2}s(\hat{\theta}), \; \hat{\theta} + z_{\alpha/2}s(\hat{\theta}) \right]. \tag{4.17}
\end{align}
While there is no hard-and-fast guideline for choosing the coverage probability $1 - \alpha$, the most common professional choice is 95%, or $\alpha = 0.05$. This corresponds to selecting the confidence interval $\left[ \hat{\theta} \pm 1.96\,s(\hat{\theta}) \right] \approx \left[ \hat{\theta} \pm 2\,s(\hat{\theta}) \right]$. Thus values of $\theta$ within two standard errors of the estimated $\hat{\theta}$ are considered "reasonable" candidates for the true value $\theta$, and values of $\theta$ outside two standard errors of the estimated $\hat{\theta}$ are considered unlikely or unreasonable candidates for the true value.

The interval has been constructed so that as $n \to \infty$,
\[
P \left( \theta \in C_n \right) = P \left( |t_n(\theta)| \leq z_{\alpha/2} \right) \to P \left( |Z| \leq z_{\alpha/2} \right) = 1 - \alpha,
\]
and $C_n$ is an asymptotic $(1-\alpha)\%$ confidence interval.

4.9 Wald Tests


Sometimes $\theta = h(\beta)$ is a $q \times 1$ vector, and it is desired to test the joint restrictions simultaneously. In this case the t-statistic approach does not work. We have the null and alternative
\begin{align*}
H_0 &: \theta = \theta_0 \\
H_1 &: \theta \neq \theta_0.
\end{align*}
The natural estimate of $\theta$ is $\hat{\theta} = h(\hat{\beta})$ and has asymptotic covariance matrix estimate
\[
\hat{V}_\theta = \hat{H}_\beta' \hat{V} \hat{H}_\beta
\]
where
\[
\hat{H}_\beta = \frac{\partial}{\partial\beta} h(\hat{\beta}).
\]
The Wald statistic for $H_0$ against $H_1$ is
\begin{align}
W_n &= n \left( \hat{\theta} - \theta_0 \right)' \hat{V}_\theta^{-1} \left( \hat{\theta} - \theta_0 \right) \nonumber \\
&= n \left( h(\hat{\beta}) - \theta_0 \right)' \left( \hat{H}_\beta' \hat{V} \hat{H}_\beta \right)^{-1} \left( h(\hat{\beta}) - \theta_0 \right). \tag{4.18}
\end{align}
When $h$ is a linear function of $\beta$, $h(\beta) = R'\beta$, then the Wald statistic takes the form
\[
W_n = n \left( R'\hat{\beta} - \theta_0 \right)' \left( R'\hat{V}R \right)^{-1} \left( R'\hat{\beta} - \theta_0 \right).
\]
The delta method (4.15) showed that $\sqrt{n} \left( \hat{\theta} - \theta \right) \stackrel{d}{\longrightarrow} Z \sim \mathrm{N} \left( 0, V_\theta \right)$, and Theorem 4.4.1 showed that $\hat{V} \stackrel{p}{\longrightarrow} V$. Furthermore, $H_\beta(\beta)$ is a continuous function of $\beta$, so by the continuous mapping theorem, $H_\beta(\hat{\beta}) \stackrel{p}{\longrightarrow} H_\beta$. Thus $\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta \stackrel{p}{\longrightarrow} H_\beta'VH_\beta = V_\theta > 0$ if $H_\beta$ has full rank $q$. Hence
\[
W_n = n \left( \hat{\theta} - \theta_0 \right)' \hat{V}_\theta^{-1} \left( \hat{\theta} - \theta_0 \right) \stackrel{d}{\longrightarrow} Z'V_\theta^{-1}Z = \chi_q^2,
\]
by Theorem B.8.2. We have established:

Theorem 4.9.1 Under $H_0$ and Assumption 4.3.1, if $\operatorname{rank}(H_\beta) = q$, then $W_n \stackrel{d}{\longrightarrow} \chi_q^2$, a chi-square random variable with $q$ degrees of freedom.

An asymptotic Wald test rejects $H_0$ in favor of $H_1$ if $W_n$ exceeds $\chi_q^2(\alpha)$, the upper-$\alpha$ quantile of the $\chi_q^2$ distribution. For example, $\chi_1^2(0.05) = 3.84 = z_{0.025}^2$. The Wald test fails to reject if $W_n$ is less than $\chi_q^2(\alpha)$. The asymptotic p-value for $W_n$ is $p_n = p(W_n)$, where $p(x) = P \left( \chi_q^2 \geq x \right)$ is the tail probability function of the $\chi_q^2$ distribution. As before, the test rejects at the $\alpha\%$ level if $p_n < \alpha$, and $p_n$ is asymptotically $\mathrm{U}[0,1]$ under $H_0$.
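The following sketch (Python/numpy; the restriction tested and the simulated design are illustrative) computes the Wald statistic (4.18) for a joint hypothesis that two slope coefficients are zero, using a selector matrix $R$ and the White covariance estimate, and compares it with the 5% critical value of the $\chi_2^2$ distribution (5.99):

    import numpy as np

    rng = np.random.default_rng(8)
    n = 400
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
    y = X @ np.array([1.0, 0.2, 0.0, 0.0]) + rng.normal(size=n)

    Q_inv = np.linalg.inv(X.T @ X / n)
    beta = Q_inv @ (X.T @ y / n)
    ehat = y - X @ beta
    V = Q_inv @ ((X * ehat[:, None] ** 2).T @ X / n) @ Q_inv   # White covariance

    # H0: beta_2 = beta_3 = 0, i.e. R'beta = 0 with R a selector matrix
    R = np.zeros((4, 2))
    R[2, 0] = R[3, 1] = 1.0
    theta_hat = R.T @ beta
    V_theta = R.T @ V @ R
    Wn = n * theta_hat @ np.linalg.solve(V_theta, theta_hat)
    print(Wn, Wn > 5.99)        # reject at the asymptotic 5% level if Wn exceeds 5.99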

4.10 F Tests
Take the linear model
\[
y = X_1\beta_1 + X_2\beta_2 + e
\]
where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$ and $k = k_1 + k_2$. The null hypothesis is
\[
H_0 : \beta_2 = 0.
\]
In this case, $\theta = \beta_2$, and there are $q = k_2$ restrictions. Also $h(\beta) = R'\beta$ is linear with $R = \begin{pmatrix} 0 \\ I \end{pmatrix}$ a selector matrix. We know that the Wald statistic takes the form
\begin{align*}
W_n &= n\hat{\theta}'\hat{V}_\theta^{-1}\hat{\theta} \\
&= n\hat{\beta}_2' \left( R'\hat{V}R \right)^{-1} \hat{\beta}_2.
\end{align*}
What we will show in this section is that if $\hat{V}$ is replaced with $\hat{V}^0 = \hat{\sigma}^2 \left( \frac{1}{n}X'X \right)^{-1}$, the covariance matrix estimator valid under homoskedasticity, then the Wald statistic can be written in the form
\[
W_n = n \left( \frac{\tilde{\sigma}^2 - \hat{\sigma}^2}{\hat{\sigma}^2} \right) \tag{4.19}
\]
where
\[
\tilde{\sigma}^2 = \frac{1}{n}\tilde{e}'\tilde{e}, \qquad \tilde{e} = y - X_1\tilde{\beta}_1, \qquad \tilde{\beta}_1 = \left( X_1'X_1 \right)^{-1} X_1'y
\]
are from OLS of $y$ on $X_1$, and
\[
\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e}, \qquad \hat{e} = y - X\hat{\beta}, \qquad \hat{\beta} = \left( X'X \right)^{-1} X'y
\]
are from OLS of $y$ on $X = (X_1, X_2)$.

The elegant feature about (4.19) is that it is directly computable from the standard output from two simple OLS regressions, as the sum of squared errors is a typical output from statistical packages. This statistic is typically reported as an "F-statistic" which is defined as
\[
F = \frac{n-k}{nk_2} W_n = \frac{\left( \tilde{\sigma}^2 - \hat{\sigma}^2 \right)/k_2}{\hat{\sigma}^2/(n-k)}.
\]
While it should be emphasized that equality (4.19) only holds if $\hat{V}^0 = \hat{\sigma}^2 \left( \frac{1}{n}X'X \right)^{-1}$, still this formula often finds good use in reading applied papers. Because of this connection we call (4.19) the F form of the Wald statistic.

We now derive expression (4.19). First, note that by partitioned matrix inversion (A.3)
\[
R' \left( X'X \right)^{-1} R = R' \begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}^{-1} R = \left( X_2'M_1X_2 \right)^{-1}
\]
where $M_1 = I - X_1 \left( X_1'X_1 \right)^{-1} X_1'$. Thus
\[
R'\hat{V}^0R = \hat{\sigma}^2 R' \left( \frac{1}{n}X'X \right)^{-1} R = \hat{\sigma}^2 \left( \frac{1}{n}X_2'M_1X_2 \right)^{-1}
\]
and
\[
W_n = n\hat{\beta}_2' \left( R'\hat{V}^0R \right)^{-1} \hat{\beta}_2 = \frac{\hat{\beta}_2' \left( X_2'M_1X_2 \right) \hat{\beta}_2}{\hat{\sigma}^2}.
\]
To simplify this expression further, note that if we regress $y$ on $X_1$ alone, the residual is $\tilde{e} = M_1y$. Now consider the residual regression of $\tilde{e}$ on $\tilde{X}_2 = M_1X_2$. By the FWL theorem, $\tilde{e} = \tilde{X}_2\hat{\beta}_2 + \hat{e}$ and $\tilde{X}_2'\hat{e} = 0$. Thus
\begin{align*}
\tilde{e}'\tilde{e} &= \left( \tilde{X}_2\hat{\beta}_2 + \hat{e} \right)' \left( \tilde{X}_2\hat{\beta}_2 + \hat{e} \right) \\
&= \hat{\beta}_2'\tilde{X}_2'\tilde{X}_2\hat{\beta}_2 + \hat{e}'\hat{e} \\
&= \hat{\beta}_2'X_2'M_1X_2\hat{\beta}_2 + \hat{e}'\hat{e},
\end{align*}
or alternatively,
\[
\hat{\beta}_2'X_2'M_1X_2\hat{\beta}_2 = \tilde{e}'\tilde{e} - \hat{e}'\hat{e}.
\]
Also, since
\[
\hat{\sigma}^2 = n^{-1}\hat{e}'\hat{e}
\]
we conclude that
\[
W_n = n\frac{\tilde{e}'\tilde{e} - \hat{e}'\hat{e}}{\hat{e}'\hat{e}} = n\frac{\tilde{\sigma}^2 - \hat{\sigma}^2}{\hat{\sigma}^2},
\]
as claimed.

In many statistical packages, when an OLS regression is reported, an "F-statistic" is reported. This is
\[
F = \frac{\left( \tilde{\sigma}_y^2 - \hat{\sigma}^2 \right)/(k-1)}{\hat{\sigma}^2/(n-k)}
\]
where
\[
\tilde{\sigma}_y^2 = \frac{1}{n} \left( y - \bar{y} \right)' \left( y - \bar{y} \right)
\]
is the sample variance of $y_i$, equivalently the residual variance from an intercept-only model. This special F statistic is testing the hypothesis that all slope coefficients (other than the intercept) are zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is highly "significant". While there are special cases where this F statistic is useful, these cases are atypical.

4.11 Normal Regression Model


As an alternative to asymptotic distribution theory, there is an exact distribution theory available for the normal linear regression model, introduced in Section 3.4. The modelling assumption that the error $e_i$ is independent of $x_i$ and $\mathrm{N}(0, \sigma^2)$ can be used to calculate a set of exact distribution results.

In particular, under the normality assumption the error vector $e$ is independent of $X$ and has distribution $\mathrm{N}(0, I_n\sigma^2)$. Since linear functions of normals are also normal, this implies that conditional on $X$
\[
\begin{pmatrix} \hat{\beta} - \beta \\ \hat{e} \end{pmatrix} = \begin{pmatrix} \left( X'X \right)^{-1} X' \\ M \end{pmatrix} e \sim \mathrm{N} \left( 0, \begin{pmatrix} \left( X'X \right)^{-1}\sigma^2 & 0 \\ 0 & M\sigma^2 \end{pmatrix} \right)
\]
where $M = I - X \left( X'X \right)^{-1} X'$. Since uncorrelated normal variables are independent, it follows that $\hat{\beta}$ is independent of any function of the OLS residuals, including the estimated error variance $s^2$.

The spectral decomposition of $M$ yields
\[
M = H \begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix} H'
\]
(see equation (A.5)) where $H'H = I_n$. Let $u = \sigma^{-1}H'e \sim \mathrm{N} \left( 0, H'H \right) \sim \mathrm{N} \left( 0, I_n \right)$. Then
\begin{align*}
\frac{(n-k)s^2}{\sigma^2} &= \frac{1}{\sigma^2}\hat{e}'\hat{e} \\
&= \frac{1}{\sigma^2}e'Me \\
&= \frac{1}{\sigma^2}e'H \begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix} H'e \\
&= u' \begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix} u \\
&\sim \chi^2_{n-k},
\end{align*}
a chi-square distribution with $n - k$ degrees of freedom. Furthermore, if standard errors are calculated using the homoskedastic formula (4.11),
\[
\frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} = \frac{\hat{\beta}_j - \beta_j}{s\sqrt{\left[ \left( X'X \right)^{-1} \right]_{jj}}} \sim \frac{\mathrm{N} \left( 0, \sigma^2 \left[ \left( X'X \right)^{-1} \right]_{jj} \right)}{\sqrt{\frac{\sigma^2\chi^2_{n-k}}{n-k} \left[ \left( X'X \right)^{-1} \right]_{jj}}} = \frac{\mathrm{N}(0,1)}{\sqrt{\frac{\chi^2_{n-k}}{n-k}}} \sim t_{n-k},
\]
a t distribution with $n - k$ degrees of freedom.

We summarize these findings

Theorem 4.11.1 If $e_i$ is independent of $x_i$ and distributed $\mathrm{N}(0, \sigma^2)$, and standard errors are calculated using the homoskedastic formula (4.11) then
\[
\hat{\beta} - \beta \sim \mathrm{N} \left( 0, \sigma^2 \left( X'X \right)^{-1} \right),
\]
\[
\frac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k},
\]
\[
\frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} \sim t_{n-k}.
\]

In Theorem 4.3.1 and Theorem 4.7.1 we showed that in large samples, $\hat{\beta}$ and $t$ are approximately normally distributed. In contrast, Theorem 4.11.1 shows that under the strong assumption of normality, $\hat{\beta}$ has an exact normal distribution and $t$ has an exact t distribution. As inference (confidence intervals) is based on the t-ratio, the notable distinction is between the $\mathrm{N}(0,1)$ and $t_{n-k}$ distributions. The critical values are quite close if $n - k \geq 30$, so as a practical matter it does not matter which distribution is used. (Unless the sample size is unreasonably small.)

Now let us partition $\beta = (\beta_1, \beta_2)$ and consider tests of the linear restriction
\begin{align*}
H_0 &: \beta_2 = 0 \\
H_1 &: \beta_2 \neq 0.
\end{align*}
In the context of parametric models, a good testing procedure is based on the likelihood ratio statistic, which is twice the difference in the log-likelihood function evaluated under the null and alternative hypotheses. The estimator under the alternative is the unrestricted estimator $(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2)$ discussed above. The Gaussian log-likelihood at these estimates is
\begin{align*}
\log L(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2) &= -\frac{n}{2}\log \left( 2\pi\hat{\sigma}^2 \right) - \frac{1}{2\hat{\sigma}^2}\hat{e}'\hat{e} \\
&= -\frac{n}{2}\log \left( \hat{\sigma}^2 \right) - \frac{n}{2}\log \left( 2\pi \right) - \frac{n}{2}.
\end{align*}
The MLE of the model under the null hypothesis is $(\tilde{\beta}_1, 0, \tilde{\sigma}^2)$ where $\tilde{\beta}_1$ is the OLS estimate from a regression of $y_i$ on $x_{1i}$ only, with residual variance $\tilde{\sigma}^2$. The log-likelihood of this model is
\[
\log L(\tilde{\beta}_1, 0, \tilde{\sigma}^2) = -\frac{n}{2}\log \left( \tilde{\sigma}^2 \right) - \frac{n}{2}\log \left( 2\pi \right) - \frac{n}{2}.
\]
The LR statistic for $H_0$ is
\begin{align*}
LR &= 2 \left( \log L(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2) - \log L(\tilde{\beta}_1, 0, \tilde{\sigma}^2) \right) \\
&= n \left( \log\tilde{\sigma}^2 - \log\hat{\sigma}^2 \right) \\
&= n\log \left( \frac{\tilde{\sigma}^2}{\hat{\sigma}^2} \right).
\end{align*}
By a first-order Taylor series approximation
\[
LR = n\log \left( 1 + \frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1 \right) \simeq n \left( \frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1 \right) = W_n,
\]
the F form of the Wald statistic.

4.12 Problems with Tests of NonLinear Hypotheses


While the t and Wald tests work well when the hypothesis is a linear restriction on $\beta$, they can work quite poorly when the restrictions are nonlinear. This can be seen by a simple example introduced by Lafontaine and White (1986). Take the model
\begin{align*}
y_i &= \beta + e_i \\
e_i &\sim \mathrm{N}(0, \sigma^2)
\end{align*}
and consider the hypothesis
\[
H_0 : \beta = 1.
\]
Let $\hat{\beta}$ and $\hat{\sigma}^2$ be the sample mean and variance of $y_i$. Then the standard Wald test for $H_0$ is
\[
W_n = n\frac{\left( \hat{\beta} - 1 \right)^2}{\hat{\sigma}^2}.
\]
Now notice that $H_0$ is equivalent to the hypothesis
\[
H_0(s) : \beta^s = 1
\]
for any positive integer $s$. Letting $h(\beta) = \beta^s$, and noting $H_\beta = s\beta^{s-1}$, we find that the standard Wald test for $H_0(s)$ is
\[
W_n(s) = n\frac{\left( \hat{\beta}^s - 1 \right)^2}{\hat{\sigma}^2s^2\hat{\beta}^{2s-2}}.
\]
While the hypothesis $\beta^s = 1$ is unaffected by the choice of $s$, the statistic $W_n(s)$ varies with $s$. This is an unfortunate feature of the Wald statistic.

To demonstrate this effect, we have plotted in Figure 4.4 the Wald statistic $W_n(s)$ as a function of $s$, setting $n/\sigma^2 = 10$. The increasing solid line is for the case $\hat{\beta} = 0.8$. The decreasing dashed line is for the case $\hat{\beta} = 1.7$. It is easy to see that in each case there are values of $s$ for which the test statistic is significant relative to asymptotic critical values, while there are other values of $s$ for which the test statistic is insignificant. This is distressing since the choice of $s$ seems arbitrary and irrelevant to the actual hypothesis.

Figure 4.4: Wald Statistic as a function of s

Our first-order asymptotic theory is not useful to help pick $s$, as $W_n(s) \stackrel{d}{\longrightarrow} \chi_1^2$ under $H_0$ for any $s$. This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create an artificial dataset to apply the statistical tools of interest. This produces random draws from the sampling distribution of interest. Through repetition, features of this distribution can be calculated.

In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84, the probability of a false rejection, $P \left( W_n(s) > 3.84 \mid \beta = 1 \right)$. Given the simplicity of the model, this probability depends only on $s$, $n$, and $\sigma^2$. In Table 4.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of $s$ is varied from 1 to 10, $n$ is varied among 20, 100 and 500, and $\sigma$ is varied among 1 and 3. Table 4.1 reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of $s$, and thus corresponds to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of $n$ and $\sigma$. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics $W_n(s)$ which are larger than 3.84. The null hypothesis $\beta^s = 1$ is true, so these probabilities are Type I error.

To interpret the table, remember that the ideal Type I error probability is 5% (.05) with deviations indicating distortion. Typically, Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5%, and rarely fall outside of the 3%-8% range. For this particular example, the only test which meets this criterion is the conventional $W_n = W_n(1)$ test. Any other choice of $s$ leads to a test with unacceptable Type I error probabilities.

In Table 4.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size $n$ increases. There is, however, no magic choice of $n$ for which all tests perform uniformly well. Test performance deteriorates as $s$ increases, which is not surprising given the dependence of $W_n(s)$ on $s$ as shown in Figure 4.4.

Table 4.1
Type I error Probability of Asymptotic 5% $W_n(s)$ Test

              $\sigma = 1$                  $\sigma = 3$
  s     n=20   n=100   n=500       n=20   n=100   n=500
  1      .06     .05     .05        .07     .05     .05
  2      .08     .06     .05        .15     .08     .06
  3      .10     .06     .05        .21     .12     .07
  4      .13     .07     .06        .25     .15     .08
  5      .15     .08     .06        .28     .18     .10
  6      .17     .09     .06        .30     .20     .11
  7      .19     .10     .06        .31     .22     .13
  8      .20     .12     .07        .33     .24     .14
  9      .22     .13     .07        .34     .25     .15
 10      .23     .14     .08        .35     .26     .16

Note: Rejection frequencies from 50,000 simulated random samples

In this example it is not surprising that the choice $s = 1$ yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.
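A reader can reproduce the flavor of Table 4.1 with a few lines of code. The sketch below (Python/numpy; the replication count is reduced from 50,000 to keep the run fast) simulates the rejection rate of $W_n(s)$ at the asymptotic 5% critical value for a few values of $s$:

    import numpy as np

    rng = np.random.default_rng(9)
    n, sigma, reps = 20, 3.0, 10000
    beta0 = 1.0                                     # the null beta = 1 is true
    for s in (1, 2, 4, 8):
        rejections = 0
        for _ in range(reps):
            y = beta0 + sigma * rng.normal(size=n)
            b, v = y.mean(), y.var()                # sample mean and variance
            Wn_s = n * (b ** s - 1.0) ** 2 / (v * s ** 2 * b ** (2 * s - 2))
            rejections += Wn_s > 3.84
        print(s, rejections / reps)                 # Type I error; compare with Table 4.1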
This point can be illustrated through another example. Take the model
\begin{align}
y_i &= \beta_0 + x_{1i}\beta_1 + x_{2i}\beta_2 + e_i \tag{4.20} \\
E \left( x_ie_i \right) &= 0 \nonumber
\end{align}
and the hypothesis
\[
H_0 : \frac{\beta_1}{\beta_2} = r
\]
where $r$ is a known constant. Equivalently, define $\theta = \beta_1/\beta_2$, so the hypothesis can be stated as $H_0 : \theta = r$.

Let $\hat{\beta} = (\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2)$ be the least-squares estimates of (4.20), let $\hat{V}$ be an estimate of the asymptotic variance matrix for $\hat{\beta}$ and set $\hat{\theta} = \hat{\beta}_1/\hat{\beta}_2$. Define
\[
\hat{H}_1 = \begin{pmatrix} 0 \\ \dfrac{1}{\hat{\beta}_2} \\ -\dfrac{\hat{\beta}_1}{\hat{\beta}_2^2} \end{pmatrix}
\]
so that the standard error for $\hat{\theta}$ is $s(\hat{\theta}) = \left( n^{-1}\hat{H}_1'\hat{V}\hat{H}_1 \right)^{1/2}$. In this case a t-statistic for $H_0$ is
\[
t_{1n} = \frac{\dfrac{\hat{\beta}_1}{\hat{\beta}_2} - r}{s(\hat{\theta})}.
\]
An alternative statistic can be constructed through reformulating the null hypothesis as
\[
H_0 : \beta_1 - r\beta_2 = 0.
\]
A t-statistic based on this formulation of the hypothesis is
\[
t_{2n} = \frac{\hat{\beta}_1 - r\hat{\beta}_2}{\left( n^{-1}H_2'\hat{V}H_2 \right)^{1/2}}
\]
where
\[
H_2 = \begin{pmatrix} 0 \\ 1 \\ -r \end{pmatrix}.
\]
To compare $t_{1n}$ and $t_{2n}$ we perform another simple Monte Carlo simulation. We let $x_{1i}$ and $x_{2i}$ be mutually independent $\mathrm{N}(0,1)$ variables, $e_i$ be an independent $\mathrm{N}(0, \sigma^2)$ draw with $\sigma = 3$, and normalize $\beta_0 = 0$ and $\beta_1 = 1$. This leaves $\beta_2$ as a free parameter, along with sample size $n$. We vary $\beta_2$ among .1, .25, .50, .75, and 1.0 and $n$ among 100 and 500.

Table 4.2
Type I error Probability of Asymptotic 5% t-tests

                           n = 100                                 n = 500
              P(tn < -1.645)   P(tn > 1.645)        P(tn < -1.645)   P(tn > 1.645)
  beta2        t1n     t2n      t1n     t2n          t1n     t2n      t1n     t2n
  .10          .47     .06      .00     .06          .28     .05      .00     .05
  .25          .26     .06      .00     .06          .15     .05      .00     .05
  .50          .15     .06      .00     .06          .10     .05      .00     .05
  .75          .12     .06      .00     .06          .09     .05      .00     .05
 1.00          .10     .06      .00     .06          .07     .05      .02     .05

The one-sided Type I error probabilities $P(t_n < -1.645)$ and $P(t_n > 1.645)$ are calculated from 50,000 simulated samples. The results are presented in Table 4.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the $t_{1n}$ statistic diverge greatly from this value, especially for small values of $\beta_2$. The left tail probabilities $P(t_{1n} < -1.645)$ greatly exceed 5%, while the right tail probabilities $P(t_{1n} > 1.645)$ are close to zero in most cases. In contrast, the rejection rates for the linear $t_{2n}$ statistic are invariant to the value of $\beta_2$, and are close to the ideal 5% rate for both sample sizes. The implication of Table 4.2 is that the two t-ratios have dramatically different sampling behavior.

The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis. In all cases, if the hypothesis can be expressed as a linear restriction on the model parameters, this formulation should be used. If no linear formulation is feasible, then the "most linear" formulation should be selected, and alternatives to asymptotic critical values should be considered. It is also prudent to consider alternative tests to the Wald statistic, such as the GMM distance statistic which will be presented in Section 7.7.

4.13 Monte Carlo Simulation


In the previous section we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses. In this section we describe the method in more detail.

Recall, our data consist of observations $(y_i, x_i)$ which are random draws from a population distribution $F$. Let $\theta$ be a parameter and let $T_n = T_n \left( (y_1, x_1), \ldots, (y_n, x_n), \theta \right)$ be a statistic of interest, for example an estimator $\hat{\theta}$ or a t-statistic $(\hat{\theta} - \theta)/s(\hat{\theta})$. The exact distribution of $T_n$ is
\[
G_n(u, F) = P \left( T_n \leq u \mid F \right).
\]
While the asymptotic distribution of $T_n$ might be known, the exact (finite sample) distribution $G_n$ is generally unknown.

Monte Carlo simulation uses numerical simulation to compute $G_n(u, F)$ for selected choices of $F$. This is useful to investigate the performance of the statistic $T_n$ in reasonable situations and sample sizes. The basic idea is that for any given $F$, the distribution function $G_n(u, F)$ can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort, where games of chance are played.

The method of Monte Carlo is quite simple to describe. The researcher chooses $F$ (the distribution of the data) and the sample size $n$. A "true" value of $\theta$ is implied by this choice, or equivalently the value $\theta$ is selected directly by the researcher, which implies restrictions on $F$.

Then the following experiment is conducted:

1. $n$ independent random pairs $(y_i, x_i)$, $i = 1, \ldots, n$, are drawn from the distribution $F$ using the computer's random number generator.
2. The statistic $T_n = T_n \left( (y_1, x_1), \ldots, (y_n, x_n), \theta \right)$ is calculated on this pseudo data.

For step 1, most computer packages have built-in procedures for generating $\mathrm{U}[0,1]$ and $\mathrm{N}(0,1)$ random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)

For step 2, it is important that the statistic be evaluated at the "true" value of $\theta$ corresponding to the choice of $F$.

The above experiment creates one random draw from the distribution $G_n(u, F)$. This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment $B$ times, where $B$ is a large number. Typically, we set $B = 1000$ or $B = 5000$. We will discuss this choice later.

Notationally, let the $b$'th experiment result in the draw $T_{nb}$, $b = 1, \ldots, B$. These results are stored. They constitute a random sample of size $B$ from the distribution of $G_n(u, F) = P \left( T_{nb} \leq u \right) = P \left( T_n \leq u \mid F \right)$.

From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:

Suppose we are interested in the bias, mean-squared error (MSE), or variance of the distribution of $\hat{\theta} - \theta$. We then set $T_n = \hat{\theta} - \theta$, run the above experiment, and calculate
\begin{align*}
\widehat{\mathrm{Bias}}(\hat{\theta}) &= \frac{1}{B} \sum_{b=1}^{B} T_{nb} = \frac{1}{B} \sum_{b=1}^{B} \left( \hat{\theta}_b - \theta \right) \\
\widehat{\mathrm{MSE}}(\hat{\theta}) &= \frac{1}{B} \sum_{b=1}^{B} T_{nb}^2 \\
\widehat{\mathrm{var}}(\hat{\theta}) &= \widehat{\mathrm{MSE}}(\hat{\theta}) - \left( \widehat{\mathrm{Bias}}(\hat{\theta}) \right)^2.
\end{align*}
Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set $T_n = (\hat{\theta} - \theta)/s(\hat{\theta})$ and calculate
\[
\hat{P} = \frac{1}{B} \sum_{b=1}^{B} 1 \left( |T_{nb}| \geq 1.96 \right), \tag{4.21}
\]
the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.

Suppose we are interested in the 5% and 95% quantiles of $T_n = \hat{\theta}$. We then compute the 5% and 95% sample quantiles of the sample $\{T_{nb}\}$. The $\alpha\%$ sample quantile is a number $q_\alpha$ such that $\alpha\%$ of the sample are less than $q_\alpha$. A simple way to compute sample quantiles is to sort the sample $\{T_{nb}\}$ from low to high. Then $q_\alpha$ is the $N$'th number in this ordered sequence, where $N = (B+1)\alpha$. It is therefore convenient to pick $B$ so that $N$ is an integer. For example, if we set $B = 999$, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.

The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on $n$ and $F$. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of $n$ and $F$.

As discussed above, the researcher must select the number of experiments, $B$. Often this is called the number of replications. Quite simply, a larger $B$ results in more precise estimates of the features of interest of $G_n$, but requires more computational time. In practice, therefore, the choice of $B$ is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size $B$, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then $B$ will have to be increased.

In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (4.21). The random variable $1 \left( |T_{nb}| \geq 1.96 \right)$ is iid Bernoulli, equalling 1 with probability $p = E \left( 1 \left( |T_{nb}| \geq 1.96 \right) \right)$. The average (4.21) is therefore an unbiased estimator of $p$ with standard error $s(\hat{p}) = \sqrt{p(1-p)/B}$. As $p$ is unknown, this may be approximated by replacing $p$ with $\hat{p}$ or with an hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set $s(\hat{p}) = \sqrt{(0.05)(0.95)/B} \simeq 0.22/\sqrt{B}$. Hence, standard errors for $B = 100$, 1000, and 5000, are, respectively, $s(\hat{p}) = 0.022$, 0.007, and 0.003.
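The whole procedure fits into a short loop. The sketch below (Python/numpy; the data-generating process, the statistic, and $B$ are illustrative choices) estimates the Type I error of the asymptotic two-sided t-test for a slope coefficient, together with the simulation standard error $\sqrt{\hat{p}(1-\hat{p})/B}$ just described:

    import numpy as np

    rng = np.random.default_rng(10)
    n, B = 50, 5000
    beta = np.array([1.0, 0.5])                      # "true" parameter implied by the choice of F

    def t_statistic(y, X, theta):
        # t-ratio for the slope, using White standard errors
        Q_inv = np.linalg.inv(X.T @ X / n)
        b = Q_inv @ (X.T @ y / n)
        ehat = y - X @ b
        V = Q_inv @ ((X * ehat[:, None] ** 2).T @ X / n) @ Q_inv
        return (b[1] - theta) / np.sqrt(V[1, 1] / n)

    rejections = 0
    for _ in range(B):
        x = rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])
        y = X @ beta + rng.standard_t(df=5, size=n)  # one draw of pseudo data from F
        rejections += abs(t_statistic(y, X, beta[1])) >= 1.96

    p_hat = rejections / B
    print(p_hat, np.sqrt(p_hat * (1 - p_hat) / B))   # estimated Type I error and its standard error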

4.14 Estimating a Wage Equation


We again return to our wage equation. We now expand the sample to include all non-military wage
earners, and estimate a multivariate regression. Again our dependent variable is the natural log of wages,
and our regressors include years of education, potential work experience, experience squared, and dummy
variable indicators for the following: married, female, union member, immigrant, and hispanic. We separately
estimate equations for white and non-whites.
For the dependent variable we use the natural log of wages so that coe cients may be interpreted as semi-
elasticities. We use the sample of wage earners from the March 2004 Current Population Survey, excluding
military. For regressors we include years of education, potential work experience, experience squared, and
dummy variable indicators for the following: married, female, union member, immigrant, hispanic, and non-
white. Furthermore, we included a dummy variable for state of residence (including the District of Columbia,
this adds 50 regressors). The available sample is 18,808 so the parameter estimates are quite precise and
reported in Table 4.1, excluding the coe cients on the state dummy variables.

Table 4.1
OLS Estimates of Linear Equation for Log(Wage)

^ s( ^ )
Intercept 1.027 .032
Education .101 .002
Experience .033 .001
Experience2 :00057 .00002
Married .102 .008
Female :232 .007
Union Member .097 .010
Immigrant :121 .013
Hispanic :102 .014
Non-White :070 .010
^ .4877
Sample Size 18,808
R2 .34

One question is whether or not the state dummy variables are relevant. Computing the Wald statistic
(4.18) that the state coe cients are jointly zero, we nd Wn = 550: Alternatively, re-estimating the model
with the 50 state dummies excluded, the restricted standard deviation estimate is ~ = :4945: The F form of
the Wald statistic (4.19) is

^2 :48772
Wn = n 1 = 18; 808 1 = 515:
~2 :49452
Notice that the two statistics are close, but not equal. Using either statistic the hypothesis is easily rejected,
as the 1% critical value for the 250 distribution is 76.
Another interesting question which can be addressed from these estimates is the maximal impact of
experience on mean wages. Ignoring the other coe cients, we can write this e ect as
2
log(W age) = 2 Experience + 3 Experience +

44
Our question is: At which level of experience do workers achieve the highest wage? In this quadratic model,
if 2 > 0 and 3 < 0 the solution is
2
= :
2 3
From Table 4.1 we nd the point estimate
^
^= 2
= 28:69:
2^3
Using the Delta Method, we can calculate a standard error of s(^) = :40; implying a 95% con dence interval
of [27:9; 29:5]:
However, this is a poor choice, as the coverage probability of this confidence interval is one minus the Type I error of the hypothesis test based on the t-test. In the previous section we discovered that such t-tests had very poor Type I error rates. Instead, we found better Type I error rates by reformulating the hypothesis as a linear restriction. These t-statistics take the form
\[
t_n(\theta) = \frac{\hat\beta_2 + 2\theta\hat\beta_3}{\left(h_\theta'\hat{V}h_\theta\right)^{1/2}}
\]
where
\[
h_\theta = \begin{pmatrix} 1 \\ 2\theta \end{pmatrix}
\]
and $\hat{V}$ is the covariance matrix for $(\hat\beta_2,\ \hat\beta_3)$.
In the present context we are interested in forming a confidence interval, not testing a hypothesis, so we have to go one step further. Our desired confidence interval will be the set of parameter values which are not rejected by the hypothesis test. This is the set of $\theta$ such that $|t_n(\theta)| \le 1.96$. Since $t_n(\theta)$ is a non-linear function of $\theta$, there is not a simple expression for this set, but it can be found numerically quite easily. This set is $[27.0,\ 29.5]$. Notice that the upper end of the confidence interval is the same as that from the delta method, but the lower end is substantially lower.
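As an illustration of the two constructions just described, the following Python sketch computes the Delta Method standard error for $\hat\theta = -\hat\beta_2/(2\hat\beta_3)$ and the test-inversion interval $\{\theta : |t_n(\theta)| \le 1.96\}$ by a grid search. The inputs (the two coefficient estimates and their $2\times 2$ covariance block) must be supplied by the user; the grid range is an arbitrary illustrative choice, and nothing here reproduces the CPS numbers.

```python
# Sketch of the two confidence intervals for theta = -b2/(2*b3): the Delta
# Method interval and the interval obtained by inverting the t-test.
import numpy as np

def delta_method_ci(b2, b3, V, level=1.96):
    theta = -b2 / (2 * b3)
    grad = np.array([-1 / (2 * b3), b2 / (2 * b3 ** 2)])   # d(theta)/d(b2, b3)
    se = np.sqrt(grad @ V @ grad)
    return theta, se, (theta - level * se, theta + level * se)

def inversion_ci(b2, b3, V, level=1.96, grid=np.linspace(0.0, 60.0, 12001)):
    # Collect all theta with |t_n(theta)| <= level, where
    # t_n(theta) = (b2 + 2*theta*b3) / sqrt(h' V h), h = (1, 2*theta)'.
    keep = []
    for th in grid:
        h = np.array([1.0, 2.0 * th])
        t = (b2 + 2 * th * b3) / np.sqrt(h @ V @ h)
        if abs(t) <= level:
            keep.append(th)
    return (min(keep), max(keep)) if keep else None
```

Both functions take the same inputs, so the two intervals can be compared directly for any fitted quadratic wage profile.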

4.15 Technical Proofs


Proof of Theorem 4.2.1. In order to apply the WLLN to the sample moments $\frac{1}{n}\sum_{i=1}^{n} x_i x_i'$, $\frac{1}{n}\sum_{i=1}^{n} x_i y_i$, or $\frac{1}{n}\sum_{i=1}^{n} x_i e_i$ we need to verify that the random variables $x_{ji}x_{li}$, $x_{ji}y_i$ and $x_{ji}e_i$ are iid with finite absolute first moments. These moment conditions are covered by Assumption 2.6.1 and Theorem 2.6.1. Assumption 3.1.1 states that the observations $(y_i, x_i)$ are mutually independent and identically distributed. It follows that any function $z_i = h(y_i, x_i)$ is also iid. This includes $e_i$, $x_{ji}x_{li}$, $x_{ji}y_i$ and $x_{ji}e_i$. We have verified that the random variables $x_{ji}x_{li}$, $x_{ji}y_i$ and $x_{ji}e_i$ are iid with finite absolute first moments, and therefore the conditions for the WLLN hold. Equations (4.1), (4.2) and (4.5) are therefore valid.
The final step of the proof is the application of the continuous mapping theorem to obtain (4.3). To fully understand this application we now walk through it in more detail. Using (4.4) we can write
\[
\hat\beta = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i y_i\right)
= g\left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i',\ \frac{1}{n}\sum_{i=1}^{n} x_i y_i\right)
\]
where $g(A, b) = A^{-1}b$ is a function of $A$ and $b$. This function is a continuous function of $A$ and $b$ at all values of the arguments such that $A^{-1}$ exists. Assumption 2.6.1.4 implies that $Q^{-1}$ exists and thus $g(A, b)$ is continuous at $A = Q$. Hence by the continuous mapping theorem (Theorem C.4.1),
\[
\hat\beta = g\left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i',\ \frac{1}{n}\sum_{i=1}^{n} x_i y_i\right)
\overset{p}{\longrightarrow} g\left(Q,\ E(x_i y_i)\right)
= \left(E\left(x_i x_i'\right)\right)^{-1}E(x_i y_i) = \beta.
\]
Proof of Theorem 4.2.2. Note that
\[
\hat{e}_i = y_i - x_i'\hat\beta = e_i + x_i'\beta - x_i'\hat\beta = e_i - x_i'\left(\hat\beta - \beta\right).
\]
Thus
\[
\hat{e}_i^2 = e_i^2 - 2e_i x_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'x_i x_i'\left(\hat\beta - \beta\right) \qquad (4.22)
\]
and
\[
\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2
= \frac{1}{n}\sum_{i=1}^{n}e_i^2 - 2\left(\frac{1}{n}\sum_{i=1}^{n}e_i x_i'\right)\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)\left(\hat\beta - \beta\right)
\overset{p}{\longrightarrow} \sigma^2,
\]
the last line using the WLLN, (4.1), (4.5) and Theorem 4.2.1. Thus $\hat\sigma^2$ is consistent for $\sigma^2$.
Finally, since $n/(n-k) \to 1$ as $n \to \infty$, it follows that
\[
s^2 = \frac{n}{n-k}\hat\sigma^2 \overset{p}{\longrightarrow} \sigma^2.
\]

Proof of Theorem 4.3.1. The discussion preceding the statement of the theorem, together with the proof of Theorem 4.2.1, covered all points of the derivation except for the demonstration that the elements of the vector $x_i e_i$ have finite second moments, which we now show. For any $j = 1, \ldots, k$, by the Cauchy-Schwarz Inequality (C.3) and Assumption 4.3.1,
\[
E\left|x_{ji}e_i\right|^2 = E\left(x_{ji}^2 e_i^2\right) \le \left(Ex_{ji}^4\right)^{1/2}\left(Ee_i^4\right)^{1/2} < \infty. \qquad (4.23)
\]

Proof of Theorem 4.4.1. We now show $\hat\Omega \overset{p}{\longrightarrow} \Omega$, from which it follows that $\hat{V} \overset{p}{\longrightarrow} V$ as $n \to \infty$.
Using (4.22),
\[
\hat\Omega = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'\hat{e}_i^2
= \frac{1}{n}\sum_{i=1}^{n}x_i x_i'e_i^2
- \frac{2}{n}\sum_{i=1}^{n}x_i x_i'\left(\hat\beta - \beta\right)'x_i e_i
+ \frac{1}{n}\sum_{i=1}^{n}x_i x_i'\left(\left(\hat\beta - \beta\right)'x_i\right)^2. \qquad (4.24)
\]
We now examine each $k \times k$ sum on the right-hand-side of (4.24) in turn.
Take the first term on the right-hand-side of (4.24). The $jl$'th element of $x_i x_i'e_i^2$ is $x_{ji}x_{li}e_i^2$. Using the Cauchy-Schwarz Inequality (C.3) twice and Assumption 4.3.1,
\[
E\left|x_{ji}x_{li}e_i^2\right| \le \left(Ex_{ji}^2 x_{li}^2\right)^{1/2}\left(Ee_i^4\right)^{1/2}
\le \left(Ex_{ji}^4\right)^{1/4}\left(Ex_{li}^4\right)^{1/4}\left(Ee_i^4\right)^{1/2} < \infty.
\]
Since this expectation is finite, we can apply the WLLN (Theorem C.2.1) to find that
\[
\frac{1}{n}\sum_{i=1}^{n}x_i x_i'e_i^2 \overset{p}{\longrightarrow} E\left(x_i x_i'e_i^2\right) = \Omega.
\]
Now take the second term on the right-hand-side of (4.24). Applying the Triangle Inequality (A.9) to the matrix Euclidean norm, the Matrix Schwarz Inequality (A.8), equation (A.6) and the Schwarz Inequality (A.7),
\[
\left\|\frac{2}{n}\sum_{i=1}^{n}x_i x_i'\left(\hat\beta - \beta\right)'x_i e_i\right\|
\le \frac{2}{n}\sum_{i=1}^{n}\left\|x_i x_i'\left(\hat\beta - \beta\right)'x_i e_i\right\|
\le \frac{2}{n}\sum_{i=1}^{n}\left\|x_i x_i'\right\|\left|\left(\hat\beta - \beta\right)'x_i\right|\left|e_i\right|
\le \left(\frac{2}{n}\sum_{i=1}^{n}\left\|x_i\right\|^3\left|e_i\right|\right)\left\|\hat\beta - \beta\right\|. \qquad (4.25)
\]
Using Holder's inequality (C.4) and Assumption 4.3.1,
\[
E\left(\left\|x_i\right\|^3\left|e_i\right|\right) \le \left(E\left\|x_i\right\|^4\right)^{3/4}\left(Ee_i^4\right)^{1/4} < \infty.
\]
By the WLLN,
\[
\frac{1}{n}\sum_{i=1}^{n}\left\|x_i\right\|^3\left|e_i\right| \overset{p}{\longrightarrow} E\left(\left\|x_i\right\|^3\left|e_i\right|\right) < \infty.
\]
Since $\hat\beta - \beta \overset{p}{\longrightarrow} 0$ it follows that (4.25) converges in probability to zero. This shows that the second term on the right-hand-side of (4.24) converges in probability to zero.
We now take the third term in (4.24). Again by the Triangle Inequality, the Matrix Schwarz Inequality, (A.6) and the Schwarz Inequality,
\[
\left\|\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\left(\left(\hat\beta - \beta\right)'x_i\right)^2\right\|
\le \frac{1}{n}\sum_{i=1}^{n}\left\|x_i x_i'\right\|\left(\left(\hat\beta - \beta\right)'x_i\right)^2
\le \left(\frac{1}{n}\sum_{i=1}^{n}\left\|x_i\right\|^4\right)\left\|\hat\beta - \beta\right\|^2
\overset{p}{\longrightarrow} 0,
\]
the final convergence since $\hat\beta - \beta \overset{p}{\longrightarrow} 0$ and $\frac{1}{n}\sum_{i=1}^{n}\left\|x_i\right\|^4 \overset{p}{\longrightarrow} E\left\|x_i\right\|^4 < \infty$ under Assumption 4.3.1. This shows that the third term on the right-hand-side of (4.24) converges in probability to zero.
Considering the three terms on the right-hand-side of (4.24), we have shown that the first term converges in probability to $\Omega$, and the second and third converge in probability to zero. We conclude that $\hat\Omega \overset{p}{\longrightarrow} \Omega$ as claimed.

Proof of Theorem 4.7.1. By (4.15),
\[
t_n(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)}
= \frac{\sqrt{n}\left(\hat\theta - \theta\right)}{\sqrt{\hat{V}}}
\overset{d}{\longrightarrow} \frac{N(0, V)}{\sqrt{V}}
= N(0, 1).
\]
4.16 Exercises
For exercises 1-4, the following definition is used. In the model $y = X\beta + e$, the least-squares estimate of $\beta$ subject to the restriction $h(\beta) = 0$ is
\[
\tilde\beta = \underset{h(\beta) = 0}{\operatorname{argmin}}\ S_n(\beta), \qquad
S_n(\beta) = \left(y - X\beta\right)'\left(y - X\beta\right).
\]
That is, $\tilde\beta$ minimizes the sum of squared errors $S_n(\beta)$ over all $\beta$ such that the restriction holds.

1. In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show that the least-squares estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_2 = 0$ is the OLS regression of $y$ on $X_1$.

2. In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show that the least-squares estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = c$ (where $c$ is some given vector), is simply the OLS regression of $y - X_1 c$ on $X_2$.

3. In the model $y = X_1\beta_1 + X_2\beta_2 + e$, with $X_1$ and $X_2$ each $n \times k$, find the least-squares estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = -\beta_2$.

4. Take the model $y = X\beta + e$ with the restriction $R'\beta = r$ where $R$ is a known $k \times s$ matrix, $r$ is a known $s \times 1$ vector, $0 < s < k$, and $\operatorname{rank}(R) = s$. Explain why $\tilde\beta$ solves the minimization of the Lagrangian
\[
L(\beta, \lambda) = \frac{1}{2}S_n(\beta) + \lambda'\left(R'\beta - r\right)
\]
where $\lambda$ is $s \times 1$.

(a) Show that the solution is
\[
\tilde\beta = \hat\beta - \left(X'X\right)^{-1}R\left[R'\left(X'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta - r\right)
\]
\[
\hat\lambda = \left[R'\left(X'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta - r\right)
\]
where
\[
\hat\beta = \left(X'X\right)^{-1}X'y
\]
is the unconstrained OLS estimator.

(b) Verify that $R'\tilde\beta = r$.

(c) Show that if $R'\beta = r$ is true, then
\[
\tilde\beta - \beta = \left(I_k - \left(X'X\right)^{-1}R\left[R'\left(X'X\right)^{-1}R\right]^{-1}R'\right)\left(X'X\right)^{-1}X'e.
\]

(d) Under the standard assumptions plus $R'\beta = r$, find the asymptotic distribution of $\sqrt{n}\left(\tilde\beta - \beta\right)$ as $n \to \infty$.

(e) Find an appropriate formula to calculate standard errors for the elements of $\tilde\beta$.

5. You have two independent samples $(y_1, X_1)$ and $(y_2, X_2)$ which satisfy $y_1 = X_1\beta_1 + e_1$ and $y_2 = X_2\beta_2 + e_2$, where $E(x_{1i}e_{1i}) = 0$ and $E(x_{2i}e_{2i}) = 0$, and both $X_1$ and $X_2$ have $k$ columns. Let $\hat\beta_1$ and $\hat\beta_2$ be the OLS estimates of $\beta_1$ and $\beta_2$. For simplicity, you may assume that both samples have the same number of observations $n$.

(a) Find the asymptotic distribution of $\sqrt{n}\left(\left(\hat\beta_2 - \hat\beta_1\right) - \left(\beta_2 - \beta_1\right)\right)$ as $n \to \infty$.

(b) Find an appropriate test statistic for $H_0: \beta_2 = \beta_1$.

(c) Find the asymptotic distribution of this statistic under $H_0$.
6. The model is
\[
y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0, \qquad \Omega = E\left(x_i x_i'e_i^2\right).
\]
(a) Find the method of moments estimators $(\hat\beta, \hat\Omega)$ for $(\beta, \Omega)$.

(b) In this model, are $(\hat\beta, \hat\Omega)$ efficient estimators of $(\beta, \Omega)$?

(c) If so, in what sense are they efficient?

7. Take the model $y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$ with $E(x_i e_i) = 0$. Suppose that $\beta_1$ is estimated by regressing $y_i$ on $x_{1i}$ only. Find the probability limit of this estimator. In general, is it consistent for $\beta_1$? If not, under what conditions is this estimator consistent for $\beta_1$?

8. Verify that equation (4.13) equals (4.14) as claimed in Section 4.5.

9. Prove that if an additional regressor $X_{k+1}$ is added to $X$, Theil's adjusted $\bar{R}^2$ increases if and only if $|t_{k+1}| > 1$, where $t_{k+1} = \hat\beta_{k+1}/s(\hat\beta_{k+1})$ is the t-ratio for $\hat\beta_{k+1}$ and
\[
s(\hat\beta_{k+1}) = \left(s^2\left[\left(X'X\right)^{-1}\right]_{k+1,k+1}\right)^{1/2}
\]
is the homoskedasticity-formula standard error.


10. Let $y$ be $n \times 1$, $X$ be $n \times k$ (rank $k$), $y = X\beta + e$ with $E(x_i e_i) = 0$. Define the ridge regression estimator
\[
\hat\beta = \left(\sum_{i=1}^{n}x_i x_i' + \lambda I_k\right)^{-1}\left(\sum_{i=1}^{n}x_i y_i\right)
\]
where $\lambda > 0$ is a fixed constant. Find the probability limit of $\hat\beta$ as $n \to \infty$. Is $\hat\beta$ consistent for $\beta$?

11. Of the variables $(y_i^*, y_i, x_i)$ only the pair $(y_i, x_i)$ are observed. In this case, we say that $y_i^*$ is a latent variable. Suppose
\[
y_i^* = x_i'\beta + e_i, \qquad E(x_i e_i) = 0, \qquad y_i = y_i^* + u_i
\]
where $u_i$ is a measurement error satisfying $E(x_i u_i) = 0$ and $E(y_i^* u_i) = 0$. Let $\hat\beta$ denote the OLS coefficient from the regression of $y_i$ on $x_i$.

(a) Is $\beta$ the coefficient from the linear projection of $y_i$ on $x_i$?

(b) Is $\hat\beta$ consistent for $\beta$ as $n \to \infty$?

(c) Find the asymptotic distribution of $\sqrt{n}\left(\hat\beta - \beta\right)$ as $n \to \infty$.

12. The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are

$I_i$   Investment to Capital Ratio (multiplied by 100).
$Q_i$   Total Market Value to Asset Ratio (Tobin's Q).
$C_i$   Cash Flow to Asset Ratio.
$D_i$   Long Term Debt to Asset Ratio.

The flow variables are annual sums for 1987. The stock variables are beginning of year.

(a) Estimate a linear regression of $I_i$ on the other variables. Calculate appropriate standard errors.

(b) Calculate asymptotic confidence intervals for the coefficients.

(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by $Q_i$. Thus the coefficient on $Q_i$ should be positive and the others should be zero. Test the joint hypothesis that the coefficients on $C_i$ and $D_i$ are zero. Test the hypothesis that the coefficient on $Q_i$ is zero. Are the results consistent with the predictions of the theory?

(d) Now try a non-linear (quadratic) specification. Regress $I_i$ on $Q_i$, $C_i$, $D_i$, $Q_i^2$, $C_i^2$, $D_i^2$, $Q_iC_i$, $Q_iD_i$, $C_iD_i$. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.

13. In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, Section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi.) The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlove was interested in estimating a cost function: $TC = f(Q, PL, PF, PK)$.

(a) First estimate an unrestricted Cobb-Douglas specification
\[
\log TC_i = \beta_1 + \beta_2\log Q_i + \beta_3\log PL_i + \beta_4\log PK_i + \beta_5\log PF_i + e_i. \qquad (4.26)
\]
Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.

(b) Using a Wald statistic, test the hypothesis $H_0: \beta_3 + \beta_4 + \beta_5 = 1$.

(c) Estimate (4.26) by least-squares imposing this restriction by substitution. Report your parameter estimates and standard errors.

(d) Estimate (4.26) subject to $\beta_3 + \beta_4 + \beta_5 = 1$ using the restricted least-squares estimator from problem 4. Do you obtain the same estimates as in part (c)?

Chapter 5

Additional Regression Topics

5.1 Generalized Least Squares


In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model
\[
y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0,
\]
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator
\[
\tilde\beta = \left(X'D^{-1}X\right)^{-1}X'D^{-1}y \qquad (5.1)
\]
where $D = \operatorname{diag}\{\sigma_1^2, \ldots, \sigma_n^2\}$ and $\sigma_i^2 = \sigma^2(x_i) = E\left(e_i^2 \mid x_i\right)$. The GLS estimator is sometimes called the Aitken estimator. The GLS estimator (5.1) is infeasible since the matrix $D$ is unknown. A feasible GLS (FGLS) estimator replaces the unknown $D$ with an estimate $\hat{D} = \operatorname{diag}\{\hat\sigma_1^2, \ldots, \hat\sigma_n^2\}$. We now discuss this estimation problem.
Suppose that we model the conditional variance using the parametric form
\[
\sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha'z_i,
\]
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.
Let $\eta_i = e_i^2$. Then
\[
E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1
\]
and we have the regression equation
\[
\eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i, \qquad E(\xi_i \mid x_i) = 0. \qquad (5.2)
\]
This regression error $\xi_i$ is generally heteroskedastic and has the conditional variance
\[
\operatorname{var}(\xi_i \mid x_i) = \operatorname{var}\left(e_i^2 \mid x_i\right)
= E\left(\left(e_i^2 - E\left(e_i^2 \mid x_i\right)\right)^2 \mid x_i\right)
= E\left(e_i^4 \mid x_i\right) - \left(E\left(e_i^2 \mid x_i\right)\right)^2.
\]
Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
\[
\hat\alpha = \left(Z'Z\right)^{-1}Z'\eta \overset{p}{\longrightarrow} \alpha
\]
and
\[
\sqrt{n}\left(\hat\alpha - \alpha\right) \overset{d}{\longrightarrow} N(0, V_\alpha)
\]
where
\[
V_\alpha = \left(E\left(z_i z_i'\right)\right)^{-1}E\left(z_i z_i'\xi_i^2\right)\left(E\left(z_i z_i'\right)\right)^{-1}. \qquad (5.3)
\]
While $e_i$ is not observed, we have the OLS residual $\hat{e}_i = y_i - x_i'\hat\beta = e_i - x_i'(\hat\beta - \beta)$. Thus
\[
\phi_i \equiv \hat\eta_i - \eta_i = \hat{e}_i^2 - e_i^2
= -2e_i x_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'x_i x_i'\left(\hat\beta - \beta\right).
\]
And then
\[
\frac{1}{\sqrt{n}}\sum_{i=1}^{n}z_i\phi_i
= \frac{-2}{n}\sum_{i=1}^{n}z_i e_i x_i'\sqrt{n}\left(\hat\beta - \beta\right)
+ \frac{1}{n}\sum_{i=1}^{n}z_i\left(\hat\beta - \beta\right)'x_i x_i'\left(\hat\beta - \beta\right)\sqrt{n}
\overset{p}{\longrightarrow} 0.
\]
Let
\[
\tilde\alpha = \left(Z'Z\right)^{-1}Z'\hat\eta \qquad (5.4)
\]
be from OLS regression of $\hat\eta_i$ on $z_i$. Then
\[
\sqrt{n}\left(\tilde\alpha - \alpha\right) = \sqrt{n}\left(\hat\alpha - \alpha\right) + \left(n^{-1}Z'Z\right)^{-1}n^{-1/2}Z'\phi
\overset{d}{\longrightarrow} N(0, V_\alpha). \qquad (5.5)
\]
Thus the fact that $\eta_i$ is replaced with $\hat\eta_i$ is asymptotically irrelevant. We may call (5.4) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$. We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma_i^2 = z_i'\alpha$ by
\[
\tilde\sigma_i^2 = \tilde\alpha'z_i. \qquad (5.6)
\]
Suppose that $\tilde\sigma_i^2 > 0$ for all $i$. Then set
\[
\tilde{D} = \operatorname{diag}\{\tilde\sigma_1^2, \ldots, \tilde\sigma_n^2\}
\]
and
\[
\tilde\beta = \left(X'\tilde{D}^{-1}X\right)^{-1}X'\tilde{D}^{-1}y.
\]
This is the feasible GLS, or FGLS, estimator of $\beta$. Since there is not a unique specification for the conditional variance the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.
One typical problem with implementation of FGLS estimation is that in a linear regression specification, there is no guarantee that $\tilde\sigma_i^2 > 0$ for all $i$. If $\tilde\sigma_i^2 < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde\sigma_i^2 \approx 0$ for some $i$, then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is typically undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule might make sense:
\[
\bar\sigma_i^2 = \max\left[\tilde\sigma_i^2,\ \underline\sigma^2\right]
\]
for some $\underline\sigma^2 > 0$.
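A minimal Python sketch of the FGLS steps just described (OLS, skedastic regression, trimming, weighted least squares) follows. The choice of skedastic regressors $z_i$ (levels and squares of the columns of $X$) and the trimming floor are illustrative assumptions, not prescriptions from the text.

```python
# Sketch of feasible GLS with a simple skedastic regression and trimming.
import numpy as np

def fgls(y, X, var_floor=1e-4):
    n, k = X.shape
    # Step 1: OLS and squared residuals
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    ehat2 = (y - X @ beta_ols) ** 2
    # Step 2: skedastic regression of ehat^2 on z_i = (1, x_i, x_i^2);
    # lstsq tolerates collinear columns (e.g., an intercept already in X)
    Z = np.column_stack([np.ones(n), X, X ** 2])
    alpha = np.linalg.lstsq(Z, ehat2, rcond=None)[0]
    sig2 = Z @ alpha
    # Step 3: trim the fitted variances away from zero, as suggested above
    sig2 = np.maximum(sig2, var_floor)
    # Step 4: weighted least squares with weights 1/sig2
    W = 1.0 / sig2
    XtWX = X.T @ (X * W[:, None])
    XtWy = X.T @ (W * y)
    return np.linalg.solve(XtWX, XtWy)
```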


It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS, but the proof of this can be tricky. We just state the result without proof.

Theorem 5.1.1 If the skedastic regression is correctly specified,
\[
\sqrt{n}\left(\tilde\beta_{GLS} - \tilde\beta_{FGLS}\right) \overset{p}{\longrightarrow} 0,
\]
and thus
\[
\sqrt{n}\left(\tilde\beta_{FGLS} - \beta\right) \overset{d}{\longrightarrow} N(0, V),
\]
where
\[
V = \left(E\left(\sigma_i^{-2}x_i x_i'\right)\right)^{-1}.
\]

Examining the asymptotic distribution of Theorem 5.1.1, the natural estimator of the asymptotic variance of $\tilde\beta$ is
\[
\tilde{V}^0 = \left(\frac{1}{n}\sum_{i=1}^{n}\tilde\sigma_i^{-2}x_i x_i'\right)^{-1} = \left(\frac{1}{n}X'\tilde{D}^{-1}X\right)^{-1},
\]
which is consistent for $V$ as $n \to \infty$. This estimator $\tilde{V}^0$ is appropriate when the skedastic regression (5.2) is correctly specified.
It may be the case that $\alpha'z_i$ is only an approximation to the true conditional variance $\sigma_i^2 = E(e_i^2 \mid x_i)$. In this case we interpret $\alpha'z_i$ as a linear projection of $e_i^2$ on $z_i$. $\tilde\beta$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 5.1.1. Instead,
\[
V = \left(E\left(\left(\alpha'z_i\right)^{-1}x_i x_i'\right)\right)^{-1}
E\left(\left(\alpha'z_i\right)^{-2}\sigma_i^2 x_i x_i'\right)
\left(E\left(\left(\alpha'z_i\right)^{-1}x_i x_i'\right)\right)^{-1}.
\]
$V$ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless $\sigma_i^2 = \alpha'z_i$, $\tilde{V}^0$ is inconsistent for $V$.
An appropriate solution is to use a White-type estimator in place of $\tilde{V}^0$. This may be written as
\[
\tilde{V} = \left(\frac{1}{n}\sum_{i=1}^{n}\tilde\sigma_i^{-2}x_i x_i'\right)^{-1}
\left(\frac{1}{n}\sum_{i=1}^{n}\tilde\sigma_i^{-4}\hat{e}_i^2 x_i x_i'\right)
\left(\frac{1}{n}\sum_{i=1}^{n}\tilde\sigma_i^{-2}x_i x_i'\right)^{-1}
= n\left(X'\tilde{D}^{-1}X\right)^{-1}\left(X'\tilde{D}^{-1}\hat{D}\tilde{D}^{-1}X\right)\left(X'\tilde{D}^{-1}X\right)^{-1}
\]
where $\hat{D} = \operatorname{diag}\{\hat{e}_1^2, \ldots, \hat{e}_n^2\}$. This is an estimator which is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively
estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on speci cation and estimation of the skedastic regression. Since the
form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated
conditional variances may contain more noise than information about the true conditional variances. In this
case, FGLS will do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming to solve.
This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, OLS is a more robust estimator of the parameter vector. It is consistent not only in the regression
model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other
hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection,
and not a conditional mean, then the OLS and FGLS estimators will converge in probability to di erent
limits, as they will be estimating two di erent projections. And the FGLS probability limit will depend on
the particular function selected for the skedastic regression. The point is that the e ciency gains from FGLS
are built on the stronger assumption of a correct conditional mean, and the cost is a reduction of robustness
to misspeci cation.

5.2 Testing for Heteroskedasticity


The hypothesis of homoskedasticity is that $E\left(e_i^2 \mid x_i\right) = \sigma^2$, or equivalently that
\[
H_0: \alpha_1 = 0
\]
in the regression (5.2). We may therefore test this hypothesis by the estimation (5.4) and constructing a Wald statistic.
This hypothesis does not imply that $\xi_i$ is independent of $x_i$. Typically, however, we impose the stronger hypothesis and test the hypothesis that $e_i$ is independent of $x_i$, in which case $\xi_i$ is independent of $x_i$ and the asymptotic variance (5.3) for $\tilde\alpha$ simplifies to
\[
V_\alpha = \left(E\left(z_i z_i'\right)\right)^{-1}E\left(\xi_i^2\right). \qquad (5.7)
\]
Hence the standard test of $H_0$ is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (5.4). The asymptotic distribution (5.5) and the asymptotic variance (5.7) under independence show that this test has an asymptotic chi-square distribution.

Theorem 5.2.1 Under $H_0$ and $e_i$ independent of $x_i$, the Wald test of $H_0$ is asymptotically $\chi^2_q$.

Most tests for heteroskedasticity take this basic form. The main differences between popular "tests" are which transformations of $x_i$ enter $z_i$. Motivated by the form of the asymptotic variance of the OLS estimator $\hat\beta$, White (1980) proposed that the test for heteroskedasticity be based on setting $z_i$ to equal all non-redundant elements of $x_i$, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of $z_i$, and replaced $E\left(\xi_i^2\right)$ with $2\sigma^4$ which holds when $e_i$ is $N\left(0, \sigma^2\right)$. If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.
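The following Python sketch implements a test of this form: it regresses the squared OLS residuals on a chosen $z_i$ (here the levels and squares of the non-constant regressors, in the spirit of White's suggestion) and uses the common $nR^2$ form of the statistic, which is asymptotically equivalent to the Wald form under the stated independence assumption. The specific choice of $z_i$ is an illustrative assumption.

```python
# Sketch of a heteroskedasticity test based on the skedastic regression.
import numpy as np
from scipy import stats

def het_test(y, X):
    n = X.shape[0]
    e2 = (y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2
    Xnc = X[:, X.std(axis=0) > 0]             # drop the constant column, if any
    Z = np.column_stack([np.ones(n), Xnc, Xnc ** 2])
    q = Z.shape[1] - 1                        # number of exclusion restrictions
    alpha = np.linalg.lstsq(Z, e2, rcond=None)[0]
    resid = e2 - Z @ alpha
    r2 = 1 - resid.var() / e2.var()           # R^2 of the skedastic regression
    stat = n * r2                             # asymptotically chi^2_q under H0
    pval = 1 - stats.chi2.cdf(stat, q)
    return stat, q, pval
```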

5.3 Forecast Intervals


In the linear regression model the conditional mean of $y_i$ given $x_i = x$ is
\[
m(x) = E(y_i \mid x_i = x) = x'\beta.
\]
In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta) = x'\beta$ and $\theta = h(\beta)$, we see that $\hat{m}(x) = \hat\theta = x'\hat\beta$ and $H_\beta = x$, so $s(\hat\theta) = \sqrt{n^{-1}x'\hat{V}x}$. Thus an asymptotic 95% confidence interval for $m(x)$ is
\[
\left[x'\hat\beta \pm 2\sqrt{n^{-1}x'\hat{V}x}\right].
\]
It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.
For a given value of $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable rule is the conditional mean $m(x)$ as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\hat{m}(x) = x'\hat\beta$. We would also like a measure of uncertainty for the forecast.
The forecast error is $\hat{e}_i = y_i - \hat{m}(x) = e_i - x'\left(\hat\beta - \beta\right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat\beta$, this has variance
\[
E\hat{e}_i^2 = E\left(e_i^2 \mid x_i = x\right) + x'E\left(\hat\beta - \beta\right)\left(\hat\beta - \beta\right)'x
= \sigma^2(x) + n^{-1}x'Vx.
\]
Assuming $E\left(e_i^2 \mid x_i\right) = \sigma^2$, the natural estimate of this variance is $\hat\sigma^2 + n^{-1}x'\hat{V}x$, so a standard error for the forecast is $\hat{s}(x) = \sqrt{\hat\sigma^2 + n^{-1}x'\hat{V}x}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\tilde\sigma^2(x) = \tilde\alpha'z$ from (5.6), then the forecast standard error is $\hat{s}(x) = \sqrt{\tilde\sigma^2(x) + n^{-1}x'\hat{V}x}$.
It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
\[
\left[x'\hat\beta \pm 2\hat{s}(x)\right],
\]
but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
\[
\frac{e_i - x'\left(\hat\beta - \beta\right)}{\hat{s}(x)}.
\]
But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $N(0, \sigma^2)$, which is generally invalid.
To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Given the difficulty, many applied forecasters focus on the simple approximate interval $\left[x'\hat\beta \pm 2\hat{s}(x)\right]$.
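A short Python sketch of the approximate forecast interval $x'\hat\beta \pm 2\hat{s}(x)$ with $\hat{s}(x)^2 = \hat\sigma^2 + n^{-1}x'\hat{V}x$ follows; it simply assembles quantities the user is assumed to have already estimated.

```python
# Sketch of the simple approximate forecast interval discussed above.
import numpy as np

def forecast_interval(x, beta_hat, V_hat, sigma2_hat, n):
    point = x @ beta_hat
    s2 = sigma2_hat + (x @ V_hat @ x) / n     # forecast variance estimate
    half = 2 * np.sqrt(s2)
    return point - half, point + half
```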

5.4 NonLinear Least Squares
In some cases we might use a parametric regression function $m(x, \theta) = E(y_i \mid x_i = x)$ which is a non-linear function of the parameters $\theta$. We describe this setting as non-linear regression. Examples of nonlinear regression functions include
\[
m(x, \theta) = \theta_1 + \theta_2\frac{x}{1 + \theta_3 x}
\]
\[
m(x, \theta) = \theta_1 + \theta_2 x^{\theta_3}
\]
\[
m(x, \theta) = \theta_1 + \theta_2\exp(\theta_3 x)
\]
\[
m(x, \theta) = G(x'\theta), \quad G \text{ known}
\]
\[
m(x, \theta) = \theta_1'x_1 + \theta_2'x_1\left(\frac{x_2 - \theta_3}{\theta_4}\right)
\]
\[
m(x, \theta) = \theta_1 + \theta_2 x + \theta_3\left(x - \theta_4\right)1\left(x > \theta_4\right)
\]
\[
m(x, \theta) = \theta_1'x_1\,1\left(x_2 < \theta_3\right) + \theta_2'x_1\,1\left(x_2 > \theta_3\right)
\]
In the first five examples, $m(x, \theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_4$ and $\theta_3$, which alters some of the analysis. When it exists, let
\[
m_\theta(x, \theta) = \frac{\partial}{\partial\theta}m(x, \theta).
\]
Nonlinear regression is frequently adopted because the functional form $m(x, \theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.
The least squares estimator $\hat\theta$ minimizes the sum-of-squared-errors
\[
S_n(\theta) = \sum_{i=1}^{n}\left(y_i - m(x_i, \theta)\right)^2.
\]
When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are $\hat{e}_i = y_i - m(x_i, \hat\theta)$.
One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta E\left(y_i - m(x_i, \theta)\right)^2$.
Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat\theta$ must be found by numerical methods. See Appendix E. When $m(x, \theta)$ is differentiable, then the FOC for minimization are
\[
0 = \sum_{i=1}^{n}m_\theta\left(x_i, \hat\theta\right)\hat{e}_i. \qquad (5.8)
\]

Theorem 5.4.1 If the model is identified and $m(x, \theta)$ is differentiable with respect to $\theta$,
\[
\sqrt{n}\left(\hat\theta - \theta_0\right) \overset{d}{\longrightarrow} N(0, V)
\]
\[
V = \left(E\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}E\left(m_{\theta i}m_{\theta i}'e_i^2\right)\left(E\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}
\]
where $m_{\theta i} = m_\theta(x_i, \theta_0)$.

Based on Theorem 5.4.1, an estimate of the asymptotic variance $V$ is
\[
\hat{V} = \left(\frac{1}{n}\sum_{i=1}^{n}\hat{m}_{\theta i}\hat{m}_{\theta i}'\right)^{-1}
\left(\frac{1}{n}\sum_{i=1}^{n}\hat{m}_{\theta i}\hat{m}_{\theta i}'\hat{e}_i^2\right)
\left(\frac{1}{n}\sum_{i=1}^{n}\hat{m}_{\theta i}\hat{m}_{\theta i}'\right)^{-1}
\]
where $\hat{m}_{\theta i} = m_\theta(x_i, \hat\theta)$ and $\hat{e}_i = y_i - m(x_i, \hat\theta)$.
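The following Python sketch estimates one of the illustrative models listed earlier, $m(x, \theta) = \theta_1 + \theta_2\exp(\theta_3 x)$, by Gauss-Newton iteration, and computes the variance estimate of Theorem 5.4.1. Gauss-Newton is one of several possible numerical methods (see Appendix E); the starting values, step control, and convergence rule here are deliberately simple.

```python
# Sketch of NLLS by Gauss-Newton for m(x, theta) = t1 + t2*exp(t3*x).
import numpy as np

def m(x, th):
    return th[0] + th[1] * np.exp(th[2] * x)

def m_grad(x, th):
    # columns: dm/dtheta1, dm/dtheta2, dm/dtheta3
    return np.column_stack([np.ones_like(x), np.exp(th[2] * x),
                            th[1] * x * np.exp(th[2] * x)])

def nlls(y, x, th0, iters=50):
    th = np.asarray(th0, dtype=float)
    for _ in range(iters):
        r = y - m(x, th)                                  # residuals
        G = m_grad(x, th)                                 # n x 3 gradient matrix
        step = np.linalg.lstsq(G, r, rcond=None)[0]       # Gauss-Newton step
        th = th + step
        if np.max(np.abs(step)) < 1e-10:
            break
    # heteroskedasticity-robust variance estimate from Theorem 5.4.1
    r, G = y - m(x, th), m_grad(x, th)
    n = len(y)
    Q = G.T @ G / n
    Omega = (G * (r ** 2)[:, None]).T @ G / n
    V = np.linalg.inv(Q) @ Omega @ np.linalg.inv(Q)
    return th, V / n                                      # V/n approximates var(theta_hat)
```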
Identification is often tricky in nonlinear regression models. Suppose that
\[
m(x_i, \theta) = \beta_1'z_i + \beta_2'x_i(\gamma)
\]
where $x_i(\gamma)$ is a function of $x_i$ and the unknown parameter $\gamma$. Examples include $x_i(\gamma) = x_i^\gamma$, $x_i(\gamma) = \exp(\gamma x_i)$, and $x_i(\gamma) = x_i\,1\left(g(x_i) > \gamma\right)$. The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
\[
H_0: \beta_2 = 0.
\]
However, under $H_0$, the model is
\[
y_i = \beta_1'z_i + e_i
\]
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions.
The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.

5.5 Least Absolute Deviations


We stated that a conventional goal in econometrics is estimation of the impact of variation in $x_i$ on the central tendency of $y_i$. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.
To recall the definition and properties of the median, let $y$ be a continuous random variable. The median $\theta_0 = \operatorname{med}(y)$ is the value such that $P(y \le \theta_0) = P(y \ge \theta_0) = .5$. Two useful facts about the median are that
\[
\theta_0 = \underset{\theta}{\operatorname{argmin}}\ E\left|y - \theta\right| \qquad (5.9)
\]
and
\[
E\operatorname{sgn}\left(y - \theta_0\right) = 0
\]
where
\[
\operatorname{sgn}(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0 \end{cases}
\]
is the sign function.
These facts and definitions motivate three estimators of $\theta$. The first definition is the 50th empirical quantile. The second is the value which minimizes $\frac{1}{n}\sum_{i=1}^{n}\left|y_i - \theta\right|$, and the third definition is the solution to the moment equation $\frac{1}{n}\sum_{i=1}^{n}\operatorname{sgn}\left(y_i - \theta\right) = 0$. These distinctions are illusory, however, as these estimators are indeed identical.
Now let's consider the conditional median of $y$ given a random vector $x$. Let $m(x) = \operatorname{med}(y \mid x)$ denote the conditional median of $y$ given $x$. The linear median regression model takes the form
\[
y_i = x_i'\beta + e_i, \qquad \operatorname{med}(e_i \mid x_i) = 0.
\]
In this model, the linear function $\operatorname{med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption is that the median function is linear in $x$.
Conditional analogs of the facts about the median are
\[
P\left(y_i \le x'\beta_0 \mid x_i = x\right) = P\left(y_i > x'\beta_0 \mid x_i = x\right) = .5
\]
\[
E\left(\operatorname{sgn}(e_i) \mid x_i\right) = 0
\]
\[
E\left(x_i\operatorname{sgn}(e_i)\right) = 0
\]
\[
\beta_0 = \underset{\beta}{\operatorname{argmin}}\ E\left|y_i - x_i'\beta\right|.
\]
These facts motivate the following estimator. Let
\[
LAD_n(\beta) = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - x_i'\beta\right|
\]

be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function:
\[
\hat\beta = \underset{\beta}{\operatorname{argmin}}\ LAD_n(\beta).
\]
Equivalently, it is a solution to the moment condition
\[
\frac{1}{n}\sum_{i=1}^{n}x_i\operatorname{sgn}\left(y_i - x_i'\hat\beta\right) = 0. \qquad (5.10)
\]
The LAD estimator has the asymptotic distribution

Theorem 5.5.1 $\sqrt{n}\left(\hat\beta - \beta_0\right) \overset{d}{\longrightarrow} N(0, V)$, where
\[
V = \frac{1}{4}\left(E\left(x_i x_i'f(0 \mid x_i)\right)\right)^{-1}\left(Ex_i x_i'\right)\left(E\left(x_i x_i'f(0 \mid x_i)\right)\right)^{-1}
\]
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.
The variance of the asymptotic distribution inversely depends on $f(0 \mid x)$, the conditional density of the error at its median. When $f(0 \mid x)$ is large, then there are many innovations near to the median, and this improves estimation of the median. In the special case where the error is independent of $x_i$, then $f(0 \mid x) = f(0)$ and the asymptotic variance simplifies to
\[
V = \frac{\left(Ex_i x_i'\right)^{-1}}{4f(0)^2}. \qquad (5.11)
\]
This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.
Computation of standard errors for LAD estimates typically is based on equation (5.11). The main difficulty is the estimation of $f(0)$, the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 14. While a complete proof of Theorem 5.5.1 is advanced, we provide a sketch in Section 5.11 for completeness.

5.6 Quantile Regression


The method of quantile regression has become quite popular in recent econometric practice. For $\tau \in [0, 1]$ the $\tau$'th quantile $Q_\tau$ of a random variable with distribution function $F(u)$ is defined as
\[
Q_\tau = \inf\{u : F(u) \ge \tau\}.
\]
When $F(u)$ is continuous and strictly monotonic, then $F(Q_\tau) = \tau$, so you can think of the quantile as the inverse of the distribution function. The quantile $Q_\tau$ is the value such that $\tau$ (percent) of the mass of the distribution is less than $Q_\tau$. The median is the special case $\tau = .5$.
The following alternative representation is useful. If the random variable $U$ has $\tau$'th quantile $Q_\tau$, then
\[
Q_\tau = \underset{\theta}{\operatorname{argmin}}\ E\rho_\tau(U - \theta), \qquad (5.12)
\]
where $\rho_\tau(q)$ is the piecewise linear function
\[
\rho_\tau(q) = \begin{cases} -q(1-\tau) & q < 0 \\ q\tau & q \ge 0 \end{cases}
= q\left(\tau - 1(q < 0)\right). \qquad (5.13)
\]
This generalizes representation (5.9) for the median to all quantiles.
For the random variables $(y_i, x_i)$ with conditional distribution function $F(y \mid x)$ the conditional quantile function $q_\tau(x)$ is
\[
Q_\tau(x) = \inf\{y : F(y \mid x) \ge \tau\}.
\]
Again, when $F(y \mid x)$ is continuous and strictly monotonic in $y$, then $F(Q_\tau(x) \mid x) = \tau$. For fixed $\tau$, the quantile regression function $q_\tau(x)$ describes how the $\tau$'th quantile of the conditional distribution varies with the regressors.

As functions of $x$, the quantile regression functions can take any shape. However for computational convenience it is typical to assume that they are (approximately) linear in $x$ (after suitable transformations). This linear specification assumes that $Q_\tau(x) = \beta_\tau'x$ where the coefficients $\beta_\tau$ vary across the quantiles $\tau$. We then have the linear quantile regression model
\[
y_i = x_i'\beta_\tau + e_i
\]
where $e_i$ is the error defined to be the difference between $y_i$ and its $\tau$'th conditional quantile $x_i'\beta_\tau$. By construction, the $\tau$'th conditional quantile of $e_i$ is zero; otherwise its properties are unspecified without further restrictions.
Given the representation (5.12), the quantile regression estimator $\hat\beta_\tau$ for $\beta_\tau$ solves the minimization problem
\[
\hat\beta_\tau = \underset{\beta}{\operatorname{argmin}}\ S_n^\tau(\beta)
\]
where
\[
S_n^\tau(\beta) = \frac{1}{n}\sum_{i=1}^{n}\rho_\tau\left(y_i - x_i'\beta\right)
\]
and $\rho_\tau(q)$ is defined in (5.13).
Since the quantile regression criterion function $S_n^\tau(\beta)$ does not have an algebraic solution, numerical methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have been developed for this problem, and are widely available.
An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 5.5.1.

Theorem 5.6.1 $\sqrt{n}\left(\hat\beta_\tau - \beta_\tau\right) \overset{d}{\longrightarrow} N(0, V_\tau)$, where
\[
V_\tau = \tau(1-\tau)\left(E\left(x_i x_i'f(0 \mid x_i)\right)\right)^{-1}\left(Ex_i x_i'\right)\left(E\left(x_i x_i'f(0 \mid x_i)\right)\right)^{-1}
\]
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error $e_i$ is independent of $x_i$, then $f(0 \mid x_i) = f(0)$, the unconditional density of $e_i$ at 0, and we have the simplification
\[
V_\tau = \frac{\tau(1-\tau)}{f(0)^2}\left(E\left(x_i x_i'\right)\right)^{-1}.
\]
A recent monograph on the details of quantile regression is Koenker (2005).
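The following Python sketch computes the quantile regression estimator by casting the check-function minimization as a linear program, one of the standard computational approaches mentioned above (setting $\tau = 0.5$ gives the LAD estimator of Section 5.5). It uses scipy's general-purpose linear programming routine; dedicated quantile regression software will typically be faster.

```python
# Sketch: quantile regression via linear programming.
# minimize sum(tau*u_plus + (1-tau)*u_minus) subject to X beta + u_plus - u_minus = y.
import numpy as np
from scipy.optimize import linprog

def quantile_regression(y, X, tau=0.5):
    n, k = X.shape
    # decision variables: (beta, u_plus, u_minus)
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]                         # the coefficient estimate beta_tau
```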

5.7 Testing for Omitted NonLinearity


If the goal is to estimate the conditional expectation $E(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.
One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat\beta + \hat{e}_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde{e}_i$ by OLS, and form a Wald statistic for $\gamma = 0$.
Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
\[
y_i = x_i'\beta + e_i
\]
which is estimated by OLS, yielding predicted values $\hat{y}_i = x_i'\hat\beta$. Now let
\[
z_i = \begin{pmatrix} \hat{y}_i^2 \\ \vdots \\ \hat{y}_i^m \end{pmatrix}
\]
be an $(m-1)$-vector of powers of $\hat{y}_i$. Then run the auxiliary regression
\[
y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde{e}_i \qquad (5.14)
\]

by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. It is easy (although somewhat tedious) to show that under the null hypothesis, $W_n \overset{d}{\longrightarrow} \chi^2_{m-1}$. Thus the null is rejected at the $\alpha$% level if $W_n$ exceeds the upper $\alpha$% tail critical value of the $\chi^2_{m-1}$ distribution.
To implement the test, $m$ must be selected in advance. Typically, small values such as $m = 2$, 3, or 4 seem to work best.
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
\[
y_i = G(x_i'\beta) + e_i
\]
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (5.14) may be written as
\[
y_i = x_i'\tilde\beta + \left(x_i'\hat\beta\right)^2\tilde\gamma_1 + \left(x_i'\hat\beta\right)^3\tilde\gamma_2 + \cdots + \left(x_i'\hat\beta\right)^m\tilde\gamma_{m-1} + \tilde{e}_i
\]
which has essentially approximated $G(\cdot)$ by an $m$'th order polynomial.
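A Python sketch of the RESET test follows: it fits the null model, adds powers of the fitted values, and forms a Wald statistic for their joint exclusion. For simplicity the homoskedastic form of the Wald statistic is used; a heteroskedasticity-robust version would replace the covariance estimate.

```python
# Sketch of the RESET test with m powers of the fitted values.
import numpy as np
from scipy import stats

def reset_test(y, X, m=3):
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    yhat = X @ beta
    Z = np.column_stack([yhat ** p for p in range(2, m + 1)])   # (m-1) powers
    XZ = np.hstack([X, Z])
    coef = np.linalg.lstsq(XZ, y, rcond=None)[0]
    resid = y - XZ @ coef
    s2 = resid @ resid / (n - XZ.shape[1])                      # homoskedastic variance
    XtXinv = np.linalg.inv(XZ.T @ XZ)
    gamma = coef[-(m - 1):]                                     # coefficients on the powers
    Vg = s2 * XtXinv[-(m - 1):, -(m - 1):]
    W = gamma @ np.linalg.solve(Vg, gamma)                      # Wald statistic for gamma = 0
    pval = 1 - stats.chi2.cdf(W, m - 1)
    return W, pval
```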

5.8 Omitted Variables


Let the regressors be partitioned as
\[
x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \end{pmatrix}.
\]
Suppose we are interested in the coefficient on $x_{1i}$ alone in the regression of $y_i$ on the full set $x_i$. We can write the model as
\[
y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0 \qquad (5.15)
\]
where the parameter of interest is $\beta_1$.


Now suppose that instead of estimating equation (5.15) by least-squares, we regress $y_i$ on $x_{1i}$ only. This is estimation of the equation
\[
y_i = x_{1i}'\gamma_1 + u_i, \qquad E(x_{1i}u_i) = 0. \qquad (5.16)
\]
Notice that we have written the coefficient on $x_{1i}$ as $\gamma_1$ rather than $\beta_1$ and the error as $u_i$ rather than $e_i$. This is because the model being estimated is different than (5.15). Goldberger (1991) calls (5.15) the long regression and (5.16) the short regression to emphasize the distinction.
Typically, $\gamma_1 \ne \beta_1$, except in special cases. To see this, we calculate
\[
\gamma_1 = \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1}E\left(x_{1i}y_i\right)
= \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1}E\left(x_{1i}\left(x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i\right)\right)
= \beta_1 + \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1}E\left(x_{1i}x_{2i}'\right)\beta_2
= \beta_1 + \Gamma\beta_2
\]
where
\[
\Gamma = \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1}E\left(x_{1i}x_{2i}'\right)
\]
is the coefficient from a regression of $x_{2i}$ on $x_{1i}$.
Observe that $\gamma_1 \ne \beta_1$ unless $\Gamma = 0$ or $\beta_2 = 0$. Thus the short and long regressions have the same coefficient on $x_{1i}$ only under one of two conditions. First, the regression of $x_{2i}$ on $x_{1i}$ yields a set of zero coefficients (they are uncorrelated), or second, the coefficient on $x_{2i}$ in (5.15) is zero. In general, least-squares estimation of (5.16) is an estimate of $\gamma_1 = \beta_1 + \Gamma\beta_2$ rather than $\beta_1$. The difference $\Gamma\beta_2$ is known as omitted variable bias. It is the consequence of omission of a relevant correlated variable.
To avoid omitted variables bias the standard advice is to include potentially relevant variables in the
estimated model. By construction, the general model will be free of the omitted variables problem. Typically
there are limits, as many desired variables are not available in a given dataset. In this case, the possibility
of omitted variables bias should be acknowledged and discussed in the course of an empirical investigation.
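The following small simulation sketch (in Python, with arbitrary illustrative parameter values) confirms the formula $\gamma_1 = \beta_1 + \Gamma\beta_2$: the short regression recovers $\beta_1 + \Gamma\beta_2$ rather than $\beta_1$ when the omitted regressor is correlated with the included one.

```python
# Simulation sketch of omitted variable bias: short regression of y on x1 only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta1, beta2, corr = 1.0, 2.0, 0.6          # illustrative values

x1 = rng.standard_normal(n)
x2 = corr * x1 + np.sqrt(1 - corr ** 2) * rng.standard_normal(n)
y = beta1 * x1 + beta2 * x2 + rng.standard_normal(n)

short = (x1 @ y) / (x1 @ x1)                # regression of y on x1 only
Gamma = (x1 @ x2) / (x1 @ x1)               # regression of x2 on x1
print(short, beta1 + Gamma * beta2)         # both are close to 1 + 0.6*2 = 2.2
```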

5.9 Irrelevant Variables
In the model
\[
y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0,
\]
$x_{2i}$ is "irrelevant" if $\beta_1$ is the parameter of interest and $\beta_2 = 0$. One estimator of $\beta_1$ is to regress $y_i$ on $x_{1i}$ alone, $\tilde\beta_1 = \left(X_1'X_1\right)^{-1}X_1'y$. Another is to regress $y_i$ on $x_{1i}$ and $x_{2i}$ jointly, yielding $(\hat\beta_1, \hat\beta_2)$. Under which conditions is $\hat\beta_1$ or $\tilde\beta_1$ superior?
It is easy to see that both estimators are consistent for $\beta_1$. However, they will (typically) have different asymptotic variances.
The comparison between the two estimators is straightforward when the error is conditionally homoskedastic $E\left(e_i^2 \mid x_i\right) = \sigma^2$. In this case
\[
\lim_{n\to\infty} n\operatorname{var}(\tilde\beta_1) = \left(Ex_{1i}x_{1i}'\right)^{-1}\sigma^2 = Q_{11}^{-1}\sigma^2,
\]
say, and
\[
\lim_{n\to\infty} n\operatorname{var}(\hat\beta_1) = \left(Ex_{1i}x_{1i}' - Ex_{1i}x_{2i}'\left(Ex_{2i}x_{2i}'\right)^{-1}Ex_{2i}x_{1i}'\right)^{-1}\sigma^2
= \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1}\sigma^2,
\]
say. If $Q_{12} = 0$ (so the variables are orthogonal) then these two variance matrices are equal, and the two estimators have equal asymptotic efficiency. Otherwise, since $Q_{12}Q_{22}^{-1}Q_{21} > 0$, then $Q_{11} > Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}$, and consequently
\[
Q_{11}^{-1}\sigma^2 < \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1}\sigma^2.
\]
This means that $\tilde\beta_1$ has a lower asymptotic variance matrix than $\hat\beta_1$. We conclude that the inclusion of irrelevant variables reduces estimation efficiency if these variables are correlated with the relevant variables.
For example, take the model $y_i = \beta_0 + \beta_1 x_i + e_i$ and suppose that $\beta_0 = 0$. Let $\hat\beta_1$ be the estimate of $\beta_1$ from the unconstrained model, and $\tilde\beta_1$ be the estimate under the constraint $\beta_0 = 0$ (the least-squares estimate with the intercept omitted). Let $Ex_i = \mu$, and $\operatorname{var}(x_i) = \sigma_x^2$. Then under (4.8),
\[
\lim_{n\to\infty} n\operatorname{var}(\tilde\beta_1) = \frac{\sigma^2}{\sigma_x^2 + \mu^2}
\]
while
\[
\lim_{n\to\infty} n\operatorname{var}(\hat\beta_1) = \frac{\sigma^2}{\sigma_x^2}.
\]
When $\mu \ne 0$, we see that $\tilde\beta_1$ has a lower asymptotic variance.


However, this result can be reversed when the error is conditionally heteroskedastic. In the absence of
the homoskedasticity assumption, there is no clear ranking of the e ciency of the restricted estimator ~ 1
versus the unrestricted estimator.

5.10 Model Selection


In earlier sections we discussed the costs and bene ts of inclusion/exclusion of variables. How does a
researcher go about selecting an econometric speci cation, when economic theory does not provide complete
guidance? This is the question of model selection. It is important that the model selection question be
well-posed. For example, the question: \What is the right model for y?" is not well-posed, because it does
not make clear the conditioning set. In contrast, the question, \Which subset of (x1 ; :::; xK ) enters the
regression function E (yi j x1i = x1 ; :::; xKi = xK )?" is well posed.
In many cases the problem of model selection can be reduced to the comparison of two nested models,
as the larger problem can be written as a sequence of such comparisons. We thus consider the question of
the inclusion of X 2 in the linear regression

y = X1 1 + X2 2 + e;

where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$. This is equivalent to the comparison of the two models
\[
\mathcal{M}_1:\ y = X_1\beta_1 + e, \qquad E(e \mid X_1, X_2) = 0
\]
\[
\mathcal{M}_2:\ y = X_1\beta_1 + X_2\beta_2 + e, \qquad E(e \mid X_1, X_2) = 0.
\]
Note that $\mathcal{M}_1 \subset \mathcal{M}_2$. To be concrete, we say that $\mathcal{M}_2$ is true if $\beta_2 \ne 0$.
To fix notation, models 1 and 2 are estimated by OLS, with residual vectors $\hat{e}_1$ and $\hat{e}_2$, estimated variances $\hat\sigma_1^2$ and $\hat\sigma_2^2$, etc., respectively. To simplify some of the statistical discussion, we will on occasion use the homoskedasticity assumption $E\left(e_i^2 \mid x_{1i}, x_{2i}\right) = \sigma^2$.
A model selection procedure is a data-dependent rule which selects one of the two models. We can write this as $\widehat{\mathcal{M}}$. There are many possible desirable properties for a model selection procedure. One useful property is consistency, that it selects the true model with probability one if the sample is sufficiently large. A model selection procedure is consistent if
\[
P\left(\widehat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1
\]
\[
P\left(\widehat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) \to 1.
\]
However, this rule only makes sense when the true model is finite dimensional. If the truth is infinite dimensional, it is more appropriate to view model selection as determining the best finite sample approximation.
A common approach to model selection is to base the decision on a statistical test such as the Wald $W_n$. The model selection rule is as follows. For some critical level $\alpha$, let $c_\alpha$ satisfy $P\left(\chi^2_{k_2} > c_\alpha\right) = \alpha$. Then select $\mathcal{M}_1$ if $W_n \le c_\alpha$, else select $\mathcal{M}_2$.
A major problem with this approach is that the critical level $\alpha$ is indeterminate. The reasoning which helps guide the choice of $\alpha$ in hypothesis testing (controlling Type I error) is not relevant for model selection. That is, if $\alpha$ is set to be a small number, then $P\left(\widehat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \approx 1$ but $P\left(\widehat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right)$ could vary dramatically, depending on the sample size, etc. Another problem is that if $\alpha$ is held fixed, then this model selection procedure is inconsistent, as $P\left(\widehat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1 - \alpha < 1$.
Another common approach to model selection is to use a selection criterion. One popular choice is the Akaike Information Criterion (AIC). The AIC under normality for model $m$ is
\[
AIC_m = \log\hat\sigma_m^2 + 2\frac{k_m}{n}, \qquad (5.17)
\]
where $\hat\sigma_m^2$ is the variance estimate for model $m$, and $k_m$ is the number of coefficients in the model. The AIC can be derived as an estimate of the Kullback-Leibler information distance $K(\mathcal{M}) = E\left(\log f(y \mid X) - \log f(y \mid X, \mathcal{M})\right)$ between the true density and the model density. The rule is to select $\mathcal{M}_1$ if $AIC_1 < AIC_2$, else select $\mathcal{M}_2$. AIC selection is inconsistent, as the rule tends to overfit. Indeed, since under $\mathcal{M}_1$,
\[
LR_n = n\left(\log\hat\sigma_1^2 - \log\hat\sigma_2^2\right) \simeq W_n \overset{d}{\longrightarrow} \chi^2_{k_2}, \qquad (5.18)
\]
then
\[
P\left(\widehat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) = P\left(AIC_1 < AIC_2 \mid \mathcal{M}_1\right)
= P\left(\log\hat\sigma_1^2 + 2\frac{k_1}{n} < \log\hat\sigma_2^2 + 2\frac{k_1 + k_2}{n} \mid \mathcal{M}_1\right)
= P\left(LR_n < 2k_2 \mid \mathcal{M}_1\right)
\to P\left(\chi^2_{k_2} < 2k_2\right) < 1.
\]
While many criteria similar to the AIC have been proposed, the most popular is one proposed by Schwarz based on Bayesian arguments. His criterion, known as the BIC, is
\[
BIC_m = \log\hat\sigma_m^2 + \log(n)\frac{k_m}{n}. \qquad (5.19)
\]
Since $\log(n) > 2$ (if $n > 8$), the BIC places a larger penalty than the AIC on the number of estimated parameters and is more parsimonious.
In contrast to the AIC, BIC model selection is consistent. Indeed, since (5.18) holds under $\mathcal{M}_1$,
\[
\frac{LR_n}{\log(n)} \overset{p}{\longrightarrow} 0,
\]
so
\[
P\left(\widehat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) = P\left(BIC_1 < BIC_2 \mid \mathcal{M}_1\right)
= P\left(LR_n < \log(n)k_2 \mid \mathcal{M}_1\right)
= P\left(\frac{LR_n}{\log(n)} < k_2 \mid \mathcal{M}_1\right)
\to P\left(0 < k_2\right) = 1.
\]

Also under $\mathcal{M}_2$, one can show that
\[
\frac{LR_n}{\log(n)} \overset{p}{\longrightarrow} \infty,
\]
thus
\[
P\left(\widehat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) = P\left(\frac{LR_n}{\log(n)} > k_2 \mid \mathcal{M}_2\right) \to 1.
\]

We have discussed model selection between two models. The methods extend readily to the issue of selection among multiple regressors. The general problem is the model
\[
y_i = \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_K x_{Ki} + e_i, \qquad E(e_i \mid x_i) = 0
\]
and the question is which subset of the coefficients are non-zero (equivalently, which regressors enter the regression).
There are two leading cases: ordered regressors and unordered.
In the ordered case, the models are
\[
\mathcal{M}_1:\ \beta_1 \ne 0,\ \beta_2 = \beta_3 = \cdots = \beta_K = 0
\]
\[
\mathcal{M}_2:\ \beta_1 \ne 0,\ \beta_2 \ne 0,\ \beta_3 = \cdots = \beta_K = 0
\]
\[
\vdots
\]
\[
\mathcal{M}_K:\ \beta_1 \ne 0,\ \beta_2 \ne 0,\ \ldots,\ \beta_K \ne 0,
\]

which are nested. The AIC selection criteria estimates the K models by OLS, stores the residual variance
^ 2 for each model, and then selects the model with the lowest AIC (5.17). Similarly for the BIC, selecting
based on (5.19).
In the unordered case, a model consists of any possible subset of the regressors fx1i ; :::; xKi g; and the
AIC or BIC in principle can be implemented by estimating all possible subset models. However, there are 2K
such models, which can be a very large number. For example, 210 = 1024; and 220 = 1; 048; 576: In the latter
case, a full-blown implementation of the BIC selection criterion would seem computationally prohibitive.
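A Python sketch of AIC/BIC selection in the ordered (nested) case follows, implementing (5.17) and (5.19) directly: model $m$ uses the first $m$ columns of $X$ and the model with the smallest criterion value is selected.

```python
# Sketch of AIC/BIC model selection over nested models ordered by column.
import numpy as np

def select_ordered(y, X, criterion="bic"):
    n, K = X.shape
    best, best_val = None, np.inf
    for m in range(1, K + 1):
        Xm = X[:, :m]
        resid = y - Xm @ np.linalg.lstsq(Xm, y, rcond=None)[0]
        sig2 = resid @ resid / n                     # variance estimate for model m
        penalty = (np.log(n) if criterion == "bic" else 2.0) * m / n
        val = np.log(sig2) + penalty                 # (5.17) or (5.19)
        if val < best_val:
            best, best_val = m, val
    return best, best_val
```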

5.11 Technical Proofs


Proof of Theorem 5.4.1 (Sketch). First, it must be shown that $\hat\theta \overset{p}{\longrightarrow} \theta_0$. This can be done using arguments for optimization estimators, but we won't cover that argument here. Since $\hat\theta \overset{p}{\longrightarrow} \theta_0$, $\hat\theta$ is close to $\theta_0$ for $n$ large, so the minimization of $S_n(\theta)$ only needs to be examined for $\theta$ close to $\theta_0$. Let
\[
y_i^0 = e_i + m_{\theta i}'\theta_0.
\]
For $\theta$ close to the true value $\theta_0$, by a first-order Taylor series approximation,
\[
m(x_i, \theta) \simeq m(x_i, \theta_0) + m_{\theta i}'\left(\theta - \theta_0\right).
\]
Thus
\[
y_i - m(x_i, \theta) \simeq \left(e_i + m(x_i, \theta_0)\right) - \left(m(x_i, \theta_0) + m_{\theta i}'\left(\theta - \theta_0\right)\right)
= e_i - m_{\theta i}'\left(\theta - \theta_0\right)
= y_i^0 - m_{\theta i}'\theta.
\]
Hence the sum of squared errors function is
\[
S_n(\theta) = \sum_{i=1}^{n}\left(y_i - m(x_i, \theta)\right)^2 \simeq \sum_{i=1}^{n}\left(y_i^0 - m_{\theta i}'\theta\right)^2
\]
and the right-hand-side is the SSE function for a linear regression of $y_i^0$ on $m_{\theta i}$. Thus the NLLS estimator $\hat\theta$ has the same asymptotic distribution as the (infeasible) OLS regression of $y_i^0$ on $m_{\theta i}$, which is that stated in the theorem.
Proof of Theorem 5.5.1: The first step is to show that $\hat\beta \overset{p}{\longrightarrow} \beta_0$. We sketch the main argument. For any fixed $\beta$, by the WLLN, $LAD_n(\beta) \overset{p}{\longrightarrow} E\left|y_i - x_i'\beta\right|$. Furthermore, it can be shown that this convergence is uniform in $\beta$. It follows that $\hat\beta$, the minimizer of $LAD_n(\beta)$, converges in probability to $\beta_0$, the minimizer of $E\left|y_i - x_i'\beta\right|$.
Since $\operatorname{sgn}(a) = 1 - 2\cdot 1(a \le 0)$, (5.10) is equivalent to $g_n(\hat\beta) = 0$, where $g_n(\beta) = n^{-1}\sum_{i=1}^{n}g_i(\beta)$ and $g_i(\beta) = x_i\left(1 - 2\cdot 1\left(y_i \le x_i'\beta\right)\right)$. Let $g(\beta) = Eg_i(\beta)$. We need three preliminary results. First, by the central limit theorem (Theorem C.3.1),
\[
\sqrt{n}\left(g_n(\beta_0) - g(\beta_0)\right) = n^{-1/2}\sum_{i=1}^{n}g_i(\beta_0) \overset{d}{\longrightarrow} N\left(0, Ex_i x_i'\right)
\]
since $Eg_i(\beta_0)g_i(\beta_0)' = Ex_i x_i'$. Second, using the law of iterated expectations and the chain rule of differentiation,
\[
\frac{\partial}{\partial\beta'}g(\beta)
= \frac{\partial}{\partial\beta'}E\left[x_i\left(1 - 2\cdot 1\left(y_i \le x_i'\beta\right)\right)\right]
= -2\frac{\partial}{\partial\beta'}E\left[x_iE\left(1\left(e_i \le x_i'\beta - x_i'\beta_0\right) \mid x_i\right)\right]
= -2\frac{\partial}{\partial\beta'}E\left[x_i\int_{-\infty}^{x_i'\beta - x_i'\beta_0}f(e \mid x_i)\,de\right]
= -2E\left[x_i x_i'f\left(x_i'\beta - x_i'\beta_0 \mid x_i\right)\right],
\]
so
\[
\frac{\partial}{\partial\beta'}g(\beta_0) = -2E\left[x_i x_i'f(0 \mid x_i)\right].
\]
Third, by a Taylor series expansion and the fact $g(\beta_0) = 0$,
\[
g(\hat\beta) \simeq \frac{\partial}{\partial\beta'}g(\beta_0)\left(\hat\beta - \beta_0\right).
\]
Together,
\[
\sqrt{n}\left(\hat\beta - \beta_0\right)
\simeq \left(\frac{\partial}{\partial\beta'}g(\beta_0)\right)^{-1}\sqrt{n}\,g(\hat\beta)
= \left(-2E\left[x_i x_i'f(0 \mid x_i)\right]\right)^{-1}\sqrt{n}\left(g(\hat\beta) - g_n(\hat\beta)\right)
\simeq \frac{1}{2}\left(E\left[x_i x_i'f(0 \mid x_i)\right]\right)^{-1}\sqrt{n}\left(g_n(\beta_0) - g(\beta_0)\right)
\overset{d}{\longrightarrow} \frac{1}{2}\left(E\left[x_i x_i'f(0 \mid x_i)\right]\right)^{-1}N\left(0, Ex_i x_i'\right)
= N(0, V).
\]
The third line follows from an asymptotic empirical process argument and the fact that $\hat\beta \overset{p}{\longrightarrow} \beta_0$.

5.12 Exercises
1. For any predictor g(xi ) for yi ; the mean absolute error (MAE) is

E jyi g(xi )j :

Show that the function g(x) which minimizes the MAE is the conditional median m (x) = med(yi j xi ):
2. Define
\[
g(u) = \tau - 1(u < 0)
\]
where $1(\cdot)$ is the indicator function (takes the value 1 if the argument is true, else equals zero). Let $\theta$ satisfy $Eg(y_i - \theta) = 0$. Is $\theta$ a quantile of the distribution of $y_i$?
3. Verify equation (5.12).

4. In the homoskedastic regression model y = X + e with E(ei j xi ) = 0 and E(e2i j xi ) = 2 ; suppose ^


is the OLS estimate of with covariance matrix V^ ; based on a sample of size n: Let ^ 2 be the estimate
of 2 : You wish to forecast an out-of-sample value of yn+1 given that xn+1 = x: Thus the available
information is the sample (y; X); the estimates ( ^ ; V^ ; ^ 2 ), the residuals e
^; and the out-of-sample value
of the regressors, xn+1 :

(a) Find a point forecast of yn+1 :


(b) Find an estimate of the variance of this forecast.

5. In a linear model
\[
y = X\beta + e, \qquad E(e \mid X) = 0, \qquad \operatorname{var}(e \mid X) = \sigma^2\Omega
\]
with $\Omega$ known, the GLS estimator is
\[
\tilde\beta = \left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}y,
\]
the residual vector is $\hat{e} = y - X\tilde\beta$, and an estimate of $\sigma^2$ is
\[
s^2 = \frac{1}{n-k}\hat{e}'\Omega^{-1}\hat{e}.
\]
(a) Why is this a reasonable estimator for $\sigma^2$?

(b) Prove that $\hat{e} = M_1 e$, where $M_1 = I - X\left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}$.

(c) Prove that $M_1'\Omega^{-1}M_1 = \Omega^{-1} - \Omega^{-1}X\left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}$.

6. Let $(y_i, x_i)$ be a random sample with $E(y \mid X) = X\beta$. Consider the Weighted Least Squares (WLS) estimator of $\beta$
\[
\tilde\beta = \left(X'WX\right)^{-1}X'Wy
\]
where $W = \operatorname{diag}(w_1, \ldots, w_n)$ and $w_i = x_{ji}^{-2}$, where $x_{ji}$ is one of the $x_i$.

(a) In which contexts would ~ be a good estimator?


(b) Using your intuition, in which situations would you expect that ~ would perform better than
OLS?

7. Suppose that yi = g(xi ; ) + ei with E (ei j xi ) = 0; ^ is the NLLS estimator, and V^ is the estimate of
var ^ : You are interested in the conditional mean function E (yi j xi = x) = g(x) at some x: Find
an asymptotic 95% con dence interval for g(x):
8. The model is
\[
y_i = x_i\beta + e_i, \qquad E(e_i \mid x_i) = 0
\]
where $x_i \in \mathbb{R}$. Consider the two estimators
\[
\hat\beta = \frac{\sum_{i=1}^{n}x_i y_i}{\sum_{i=1}^{n}x_i^2}, \qquad
\tilde\beta = \frac{1}{n}\sum_{i=1}^{n}\frac{y_i}{x_i}.
\]
(a) Under the stated assumptions, are both estimators consistent for $\beta$?

(b) Are there conditions under which either estimator is efficient?

9. In Chapter 4, Exercise 13, you estimated a cost function on a cross-section of electric companies. The equation you estimated was
\[
\log TC_i = \beta_1 + \beta_2\log Q_i + \beta_3\log PL_i + \beta_4\log PK_i + \beta_5\log PF_i + e_i. \qquad (5.20)
\]
(a) Following Nerlove, add the variable $(\log Q_i)^2$ to the regression. Do so. Assess the merits of this new specification using (i) a hypothesis test; (ii) AIC criterion; (iii) BIC criterion. Do you agree with this modification?

(b) Now try a non-linear specification. Consider model (5.20) plus the extra term $\beta_6 z_i$, where
\[
z_i = \log Q_i\left(1 + \exp\left(-(\log Q_i - \beta_7)\right)\right)^{-1}.
\]
In addition, impose the restriction $\beta_3 + \beta_4 + \beta_5 = 1$. This model is called a smooth threshold model. For values of $\log Q_i$ much below $\beta_7$, the variable $\log Q_i$ has a regression slope of $\beta_2$. For values much above $\beta_7$, the regression slope is $\beta_2 + \beta_6$, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter $\beta_7$.
The model works best when $\beta_7$ is selected so that several values (in this example, at least 10 to 15) of $\log Q_i$ are both below and above $\beta_7$. Examine the data and pick an appropriate range for $\beta_7$.

(c) Estimate the model by non-linear least squares. I recommend the concentration method: Pick 10 (or more if you like) values of $\beta_7$ in this range. For each value of $\beta_7$, calculate $z_i$ and estimate the model by OLS. Record the sum of squared errors, and find the value of $\beta_7$ for which the sum of squared errors is minimized.

(d) Calculate standard errors for all the parameters $(\beta_1, \ldots, \beta_7)$.

10. The data le cps78.dat contains 550 observations on 20 variables taken from the May 1978 current
population survey. Variables are listed in the le cps78.pdf. The goal of the exercise is to estimate a
model for the log of earnings (variable LNWAGE) as a function of the conditioning variables.

(a) Start by an OLS regression of LNWAGE on the other variables. Report coe cient estimates and
standard errors.
(b) Consider augmenting the model by squares and/or cross-products of the conditioning variables.
Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may
re-estimate the model without these variables, if desired.
(d) Test whether the error variance is di erent for men and women. Interpret.
(e) Test whether the error variance is di erent for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general het-
eroskedasticity and report the results.
(g) Using this model for the conditional variance, re-estimate the model from part (c) using FGLS.
Report the results.
(h) Do the OLS and FGLS estimates di er greatly? Note any interesting di erences.
(i) Compare the estimated standard errors. Note any interesting di erences.

Chapter 6

The Bootstrap

6.1 De nition of the Bootstrap


Let F denote a distribution function for the population of observations (yi ; xi ) : Let

Tn = Tn ((y1 ; x1 ) ; :::; (yn ; xn ) ; F )

be a statistic of interest, for example an estimator $\hat\theta$ or a t-statistic $(\hat\theta - \theta)/s(\hat\theta)$. Note that we write $T_n$ as possibly a function of $F$. For example, the t-statistic is a function of the parameter $\theta$ which itself is a function of $F$.
The exact CDF of $T_n$ when the data are sampled from the distribution $F$ is
\[
G_n(u, F) = P(T_n \le u \mid F).
\]
In general, $G_n(u, F)$ depends on $F$, meaning that $G$ changes as $F$ changes.


Ideally, inference would be based on Gn (u; F ). This is generally impossible since F is unknown.
Asymptotic inference is based on approximating Gn (u; F ) with G(u; F ) = limn!1 Gn (u; F ): When
G(u; F ) = G(u) does not depend on F; we say that Tn is asymptotically pivotal and use the distribution
function G(u) for inferential purposes.
In a seminal contribution, Efron (1979) proposed the bootstrap, which makes a di erent approximation.
The unknown F is replaced by a consistent estimate Fn (one choice is discussed in the next section). Plugged
into Gn (u; F ) we obtain
Gn (u) = Gn (u; Fn ): (6.1)
We call Gn the bootstrap distribution. Bootstrap inference is based on Gn (u):
Let $(y_i^*, x_i^*)$ denote random variables with the distribution $F_n$. A random sample from this distribution is called the bootstrap data. The statistic $T_n^* = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n\right)$ constructed on this sample is a random variable with distribution $G_n^*$. That is, $P(T_n^* \le u) = G_n^*(u)$. We call $T_n^*$ the bootstrap statistic. The distribution of $T_n^*$ is identical to that of $T_n$ when the true CDF is $F_n$ rather than $F$.
The bootstrap distribution is itself random, as it depends on the sample through the estimator $F_n$.
In the next sections we describe computation of the bootstrap distribution.

6.2 The Empirical Distribution Function


Recall that $F(y, x) = P(y_i \le y, x_i \le x) = E\left(1(y_i \le y)1(x_i \le x)\right)$, where $1(\cdot)$ is the indicator function. This is a population moment. The method of moments estimator is the corresponding sample moment:
\[
F_n(y, x) = \frac{1}{n}\sum_{i=1}^{n}1(y_i \le y)1(x_i \le x). \qquad (6.2)
\]
$F_n(y, x)$ is called the empirical distribution function (EDF). $F_n$ is a nonparametric estimate of $F$. Note that while $F$ may be either discrete or continuous, $F_n$ is by construction a step function.
The EDF is a consistent estimator of the CDF. To see this, note that for any $(y, x)$, $1(y_i \le y)1(x_i \le x)$ is an iid random variable with expectation $F(y, x)$. Thus by the WLLN (Theorem C.2.1), $F_n(y, x) \overset{p}{\longrightarrow} F(y, x)$.
Furthermore, by the CLT (Theorem C.3.1),
\[
\sqrt{n}\left(F_n(y, x) - F(y, x)\right) \overset{d}{\longrightarrow} N\left(0, F(y, x)\left(1 - F(y, x)\right)\right).
\]

To see the e ect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for
three random samples of size n = 25; 50, 100, and 500. The random draws are from the N (0; 1) distribution.
For n = 25; the EDF is only a crude approximation to the CDF, but the approximation appears to improve
for the large n. In general, as the sample size gets larger, the EDF step function gets uniformly close to the
true CDF.

Figure 6.1: Empirical Distribution Functions

The EDF is a valid discrete probability distribution which puts probability mass $1/n$ at each pair $(y_i, x_i)$, $i = 1, \ldots, n$. Notationally, it is helpful to think of a random pair $(y_i^*, x_i^*)$ with the distribution $F_n$. That is,
\[
P(y_i^* \le y, x_i^* \le x) = F_n(y, x).
\]
We can easily calculate the moments of functions of $(y_i^*, x_i^*)$:
\[
Eh(y_i^*, x_i^*) = \int h(y, x)\,dF_n(y, x)
= \sum_{i=1}^{n}h(y_i, x_i)\,P\left(y_i^* = y_i, x_i^* = x_i\right)
= \frac{1}{n}\sum_{i=1}^{n}h(y_i, x_i),
\]
the empirical sample average.

6.3 Nonparametric Bootstrap


The nonparametric bootstrap is obtained when the bootstrap distribution (6.1) is de ned using the
EDF (6.2) as the estimate Fn of F:
Since the EDF Fn is a multinomial (with n support points), in principle the distribution Gn could be
calculated by direct methods. However, as there are 2n possible samples f(y1 ; x1 ) ; :::; (yn ; xn )g; such a
calculation is computationally infeasible. The popular alternative is to use simulation to approximate the
distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following
points of clari cation:

The sample size n used for the simulation is the same as the sample size.
The random vectors (yi ; xi ) are drawn randomly from the empirical distribution. This is equivalent
to sampling a pair (yi ; xi ) randomly from the sample.

The bootstrap statistic Tn = Tn ((y1 ; x1 ) ; :::; (yn ; xn ) ; Fn ) is calculated for each bootstrap sample. This
is repeated B times. B is known as the number of bootstrap replications. A theory for the determination
of the number of bootstrap replications B has been developed by Andrews and Buchinsky (2000). It is
desirable for B to be large, so long as the computational costs are reasonable. B = 1000 typically su ces.
When the statistic $T_n$ is a function of $F$, it is typically through dependence on a parameter. For example, the t-ratio $(\hat\theta - \theta)/s(\hat\theta)$ depends on $\theta$. As the bootstrap statistic replaces $F$ with $F_n$, it similarly replaces $\theta$ with $\theta_n$, the value of $\theta$ implied by $F_n$. Typically $\theta_n = \hat\theta$, the parameter estimate. (When in doubt use $\hat\theta$.)
Sampling from the EDF is particularly easy. Since Fn is a discrete probability distribution putting proba-
bility mass 1=n at each sample point, sampling from the EDF is equivalent to random sampling a pair (yi ; xi )
from the observed data with replacement. In consequence, a bootstrap sample f(y1 ; x1 ) ; :::; (yn ; xn )g will
necessarily have some ties and multiple values, which is generally not a problem.
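The following Python sketch shows the nonparametric bootstrap loop just described: pairs are drawn with replacement from the sample and the statistic is recomputed on each bootstrap sample. The OLS coefficient vector is used as the statistic purely for illustration.

```python
# Sketch of the nonparametric (pairs) bootstrap loop.
import numpy as np

def bootstrap_statistics(y, X, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    stats_boot = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, size=n)           # sample rows with replacement
        yb, Xb = y[idx], X[idx]
        stats_boot[b] = np.linalg.lstsq(Xb, yb, rcond=None)[0]
    return stats_boot                              # B bootstrap draws of the statistic
```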

6.4 Bootstrap Estimation of Bias and Variance


The bias of $\hat\theta$ is $\tau_n = E(\hat\theta - \theta_0)$. Let $T_n(\theta) = \hat\theta - \theta$. Then $\tau_n = E(T_n(\theta_0))$. The bootstrap counterparts are $\hat\theta^* = \hat\theta\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\right)$ and $T_n^* = \hat\theta^* - \theta_n = \hat\theta^* - \hat\theta$. The bootstrap estimate of $\tau_n$ is
\[
\tau_n^* = E(T_n^*).
\]
If this is calculated by the simulation described in the previous section, the estimate of $\tau_n^*$ is
\[
\hat\tau_n^* = \frac{1}{B}\sum_{b=1}^{B}T_{nb}^*
= \frac{1}{B}\sum_{b=1}^{B}\hat\theta_b^* - \hat\theta.
\]

If $\hat\theta$ is biased, it might be desirable to construct a bias-corrected estimator (one with reduced bias). Ideally, this would be
\[
\tilde\theta = \hat\theta - \tau_n,
\]
but $\tau_n$ is unknown. The (estimated) bootstrap bias-corrected estimator is
\[
\tilde\theta^* = \hat\theta - \hat\tau_n^*
= \hat\theta - \left(\overline{\theta^*} - \hat\theta\right)
= 2\hat\theta - \overline{\theta^*},
\]
where $\overline{\theta^*} = B^{-1}\sum_{b=1}^{B}\hat\theta_b^*$ is the average of the bootstrap draws.
Note, in particular, that the bias-corrected estimator is not $\overline{\theta^*}$. Intuitively, the bootstrap makes the following experiment. Suppose that $\hat\theta$ is the truth. Then what is the average value of $\hat\theta$ calculated from such samples? The answer is $\overline{\theta^*}$. If this is lower than $\hat\theta$, this suggests that the estimator is downward-biased, so a bias-corrected estimator of $\theta$ should be larger than $\hat\theta$, and the best guess is the difference between $\hat\theta$ and $\overline{\theta^*}$. Similarly if $\overline{\theta^*}$ is higher than $\hat\theta$, then the estimator is upward-biased and the bias-corrected estimator should be lower than $\hat\theta$.
Let $T_n = \hat\theta$. The variance of $\hat\theta$ is
\[
V_n = E\left(T_n - ET_n\right)^2.
\]
Let $T_n^* = \hat\theta^*$. It has variance
\[
V_n^* = E\left(T_n^* - ET_n^*\right)^2.
\]
The simulation estimate is
\[
\hat{V}_n^* = \frac{1}{B}\sum_{b=1}^{B}\left(\hat\theta_b^* - \overline{\theta^*}\right)^2.
\]

A bootstrap standard error for $\hat\theta$ is the square root of the bootstrap estimate of variance, $s^*(\hat\theta) = \sqrt{\hat{V}_n^*}$.
While this standard error may be calculated and reported, it is not clear if it is useful. The primary
use of asymptotic standard errors is to construct asymptotic con dence intervals, which are based on the
asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such
asymptotic approximations might be poor, in which case the normal approximation is suspected. It appears
superior to calculate bootstrap con dence intervals, and we turn to this next.
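A Python sketch of the bias, variance, and standard error calculations of this section follows, taking as inputs the bootstrap draws $\hat\theta_b^*$ and the sample estimate $\hat\theta$ for a scalar parameter.

```python
# Sketch of bootstrap bias, bias-corrected estimate, and standard error.
import numpy as np

def bootstrap_bias_variance(theta_boot, theta_hat):
    theta_boot = np.asarray(theta_boot)
    bias = theta_boot.mean() - theta_hat               # estimate of tau_n
    theta_bc = 2 * theta_hat - theta_boot.mean()       # bias-corrected estimator
    var = np.mean((theta_boot - theta_boot.mean()) ** 2)
    return bias, theta_bc, np.sqrt(var)                # bias, corrected estimate, s.e.
```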

6.5 Percentile Intervals


For a distribution function $G_n(u, F)$, let $q_n(\alpha, F)$ denote its quantile function. This is the function which solves
\[
G_n\left(q_n(\alpha, F), F\right) = \alpha.
\]
[When $G_n(u, F)$ is discrete, $q_n(\alpha, F)$ may be non-unique, but we will ignore such complications.] Let $q_n(\alpha) = q_n(\alpha, F_0)$ denote the quantile function of the true sampling distribution, and $q_n^*(\alpha) = q_n(\alpha, F_n)$ denote the quantile function of the bootstrap distribution. Note that this function will change depending on the underlying statistic $T_n$ whose distribution is $G_n$.
Let $T_n = \hat\theta$, an estimate of a parameter of interest. In $(1-\alpha)$% of samples, $\hat\theta$ lies in the region $[q_n(\alpha/2),\ q_n(1-\alpha/2)]$. This motivates a confidence interval proposed by Efron:
\[
C_1 = [q_n^*(\alpha/2),\ q_n^*(1-\alpha/2)].
\]
This is often called the percentile confidence interval.
Computationally, the quantile $q_n^*(\alpha)$ is estimated by $\hat{q}_n^*(\alpha)$, the $\alpha$'th sample quantile of the simulated statistics $\{T_{n1}^*, \ldots, T_{nB}^*\}$, as discussed in the section on Monte Carlo simulation. The $(1-\alpha)$% Efron percentile interval is then $[\hat{q}_n^*(\alpha/2),\ \hat{q}_n^*(1-\alpha/2)]$.
The interval C1 is a popular bootstrap con dence interval often used in empirical practice. This is because
it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap,
and also has the feature that it is translation invariant. That is, if we de ne = f ( ) as the parameter
of interest for a monotonic function f; then percentile method applied to this problem will produce the
con dence interval [f (qn ( =2)); f (qn (1 =2))]; which is a naturally good property.
However, as we show now, C1 is in a deep sense very poorly motivated.
It will be useful if we introduce an alternative definition of $C_1$. Let $T_n(\theta) = \hat\theta - \theta$ and let $q_n(\alpha)$ be the quantile function of its distribution. (These are the original quantiles, with $\theta$ subtracted.) Then $C_1$ can alternatively be written as
$$C_1 = [\hat\theta + q_n^*(\alpha/2),\ \hat\theta + q_n^*(1-\alpha/2)].$$
This is a bootstrap estimate of the ``ideal'' confidence interval
$$C_1^0 = [\hat\theta + q_n(\alpha/2),\ \hat\theta + q_n(1-\alpha/2)].$$
The latter has coverage probability
\begin{align*}
P\left(\theta_0 \in C_1^0\right) &= P\left(\hat\theta + q_n(\alpha/2) \le \theta_0 \le \hat\theta + q_n(1-\alpha/2)\right) \\
&= P\left(-q_n(1-\alpha/2) \le \hat\theta - \theta_0 \le -q_n(\alpha/2)\right) \\
&= G_n(-q_n(\alpha/2), F_0) - G_n(-q_n(1-\alpha/2), F_0),
\end{align*}
which generally is not $1-\alpha$! There is one important exception. If $\hat\theta - \theta_0$ has a symmetric distribution, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
\begin{align*}
P\left(\theta_0 \in C_1^0\right) &= G_n(-q_n(\alpha/2), F_0) - G_n(-q_n(1-\alpha/2), F_0) \\
&= \left(1 - G_n(q_n(\alpha/2), F_0)\right) - \left(1 - G_n(q_n(1-\alpha/2), F_0)\right) \\
&= \left(1 - \frac{\alpha}{2}\right) - \frac{\alpha}{2} \\
&= 1 - \alpha
\end{align*}
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat\theta$ has a symmetric distribution about $\theta_0$.

When $\hat\theta$ does not have a symmetric distribution, $C_1$ may perform quite poorly.
However, by the translation invariance argument presented above, it also follows that if there exists some monotonic transformation $f(\cdot)$ such that $f(\hat\theta)$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.
Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.
The problems with the percentile method can be circumvented by an alternative method.
Let $T_n(\theta) = \hat\theta - \theta$. Then
\begin{align*}
1-\alpha &= P\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1-\alpha/2)\right) \\
&= P\left(\hat\theta - q_n(1-\alpha/2) \le \theta_0 \le \hat\theta - q_n(\alpha/2)\right),
\end{align*}
so an exact $(1-\alpha)\%$ confidence interval for $\theta_0$ would be
$$C_2^0 = [\hat\theta - q_n(1-\alpha/2),\ \hat\theta - q_n(\alpha/2)].$$
This motivates a bootstrap analog
$$C_2 = [\hat\theta - q_n^*(1-\alpha/2),\ \hat\theta - q_n^*(\alpha/2)].$$
Notice that generally this is very different from the Efron interval $C_1$! They coincide in the special case that $G_n^*(u)$ is symmetric about $\hat\theta$, but otherwise they differ.
Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics $T_n^* = \hat\theta^* - \hat\theta$, which are centered at the sample estimate $\hat\theta$. These are sorted to yield the quantile estimates $\hat q_n^*(0.025)$ and $\hat q_n^*(0.975)$. The 95% confidence interval is then $[\hat\theta - \hat q_n^*(0.975),\ \hat\theta - \hat q_n^*(0.025)]$.
This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.
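A minimal sketch of the two intervals (Python/numpy, not part of the text), assuming a vector `theta_stars` of bootstrap estimates $\hat\theta_b^*$ and the sample estimate `theta_hat` have already been computed as above:

import numpy as np

def percentile_intervals(theta_hat, theta_stars, alpha=0.05):
    """Efron percentile interval C1 and the alternative percentile interval C2."""
    q_lo = np.quantile(theta_stars, alpha / 2)       # q_n*(alpha/2) of theta*
    q_hi = np.quantile(theta_stars, 1 - alpha / 2)   # q_n*(1 - alpha/2) of theta*
    C1 = (q_lo, q_hi)                                # Efron percentile interval
    # C2 uses quantiles of T_n* = theta* - theta_hat, reflected around theta_hat
    C2 = (theta_hat - (q_hi - theta_hat), theta_hat - (q_lo - theta_hat))
    return C1, C2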

6.6 Percentile-t Equal-Tailed Interval


Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta < \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $T_n(\theta_0) < c$, where $c$ would be selected so that
$$P(T_n(\theta_0) < c) = \alpha.$$
Thus $c = q_n(\alpha)$. Since this is unknown, a bootstrap test replaces $q_n(\alpha)$ with the bootstrap estimate $q_n^*(\alpha)$, and the test rejects if $T_n(\theta_0) < q_n^*(\alpha)$.
Similarly, if the alternative is $H_1: \theta > \theta_0$, the bootstrap test rejects if $T_n(\theta_0) > q_n^*(1-\alpha)$.
Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics $T_n^* = (\hat\theta^* - \hat\theta)/s(\hat\theta^*)$. Note, and this is important, that the bootstrap test statistic is centered at the estimate $\hat\theta$, and the standard error $s(\hat\theta^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat q_n^*(\alpha)$ and/or $\hat q_n^*(1-\alpha)$.
Let $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$. Then taking the intersection of two one-sided intervals,
\begin{align*}
1-\alpha &= P\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1-\alpha/2)\right) \\
&= P\left(q_n(\alpha/2) \le (\hat\theta - \theta_0)/s(\hat\theta) \le q_n(1-\alpha/2)\right) \\
&= P\left(\hat\theta - s(\hat\theta)q_n(1-\alpha/2) \le \theta_0 \le \hat\theta - s(\hat\theta)q_n(\alpha/2)\right),
\end{align*}
so an exact $(1-\alpha)\%$ confidence interval for $\theta_0$ would be
$$C_3^0 = [\hat\theta - s(\hat\theta)q_n(1-\alpha/2),\ \hat\theta - s(\hat\theta)q_n(\alpha/2)].$$
This motivates a bootstrap analog
$$C_3 = [\hat\theta - s(\hat\theta)q_n^*(1-\alpha/2),\ \hat\theta - s(\hat\theta)q_n^*(\alpha/2)].$$
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that $\theta_0$ is below the left endpoint approximately equals the probability that $\theta_0$ is above the right endpoint, each $\alpha/2$.
Computationally, this interval is based on the critical values from the one-sided hypothesis tests discussed above.
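A sketch of the equal-tailed percentile-t interval $C_3$ (Python/numpy; the functions `estimate` and `std_error` are hypothetical user-supplied routines returning $\hat\theta$ and $s(\hat\theta)$ on any sample, not objects from the text):

import numpy as np

def percentile_t_interval(y, x, estimate, std_error, B=1000, alpha=0.05, seed=0):
    """Equal-tailed percentile-t interval C3 based on bootstrap t-statistics."""
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    theta_hat, s_hat = estimate(y, x), std_error(y, x)
    t_stars = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        yb, xb = y[idx], x[idx]
        # centered at theta_hat; standard error computed on the bootstrap sample
        t_stars[b] = (estimate(yb, xb) - theta_hat) / std_error(yb, xb)
    q_lo = np.quantile(t_stars, alpha / 2)
    q_hi = np.quantile(t_stars, 1 - alpha / 2)
    return (theta_hat - s_hat * q_hi, theta_hat - s_hat * q_lo)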

6.7 Symmetric Percentile-t Intervals
Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $|T_n(\theta_0)| > c$, where $c$ would be selected so that
$$P(|T_n(\theta_0)| > c) = \alpha.$$
Note that
\begin{align*}
P(|T_n(\theta_0)| < c) &= P(-c < T_n(\theta_0) < c) \\
&= G_n(c) - G_n(-c) \\
&\equiv \overline{G}_n(c),
\end{align*}
which is a symmetric distribution function. The ideal critical value $c = q_n(\alpha)$ solves the equation
$$\overline{G}_n(q_n(\alpha)) = 1 - \alpha.$$
Equivalently, $q_n(\alpha)$ is the $1-\alpha$ quantile of the distribution of $|T_n(\theta_0)|$.
The bootstrap estimate is $q_n^*(\alpha)$, the $1-\alpha$ quantile of the distribution of $|T_n^*|$, or the number which solves the equation
$$\overline{G}_n^*(q_n^*(\alpha)) = G_n^*(q_n^*(\alpha)) - G_n^*(-q_n^*(\alpha)) = 1 - \alpha.$$
Computationally, $q_n^*(\alpha)$ is estimated from a bootstrap simulation by sorting the bootstrap t-statistics $|T_n^*| = |\hat\theta^* - \hat\theta|/s(\hat\theta^*)$, and taking the upper $\alpha\%$ quantile. The bootstrap test rejects if $|T_n(\theta_0)| > q_n^*(\alpha)$.
Let
$$C_4 = [\hat\theta - s(\hat\theta)q_n^*(\alpha),\ \hat\theta + s(\hat\theta)q_n^*(\alpha)],$$
where $q_n^*(\alpha)$ is the bootstrap critical value for a two-sided hypothesis test. $C_4$ is called the symmetric percentile-t interval. It is designed to work well since
\begin{align*}
P(\theta_0 \in C_4) &= P\left(\hat\theta - s(\hat\theta)q_n^*(\alpha) \le \theta_0 \le \hat\theta + s(\hat\theta)q_n^*(\alpha)\right) \\
&= P(|T_n(\theta_0)| < q_n^*(\alpha)) \\
&\simeq P(|T_n(\theta_0)| < q_n(\alpha)) \\
&= 1 - \alpha.
\end{align*}
If $\theta$ is a vector, then to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ at size $\alpha$, we would use a Wald statistic
$$W_n(\theta) = n\left(\hat\theta - \theta\right)' \hat V_\theta^{-1} \left(\hat\theta - \theta\right)$$
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \ge q_n(\alpha)$, where $q_n(\alpha)$ is the $(1-\alpha)\%$ quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \ge q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $(1-\alpha)\%$ quantile of the distribution of
$$W_n^* = n\left(\hat\theta^* - \hat\theta\right)' \hat V_\theta^{*-1} \left(\hat\theta^* - \hat\theta\right).$$
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\hat\theta^* - \hat\theta$, not $\hat\theta^* - \theta_0$. [This is a typical mistake made by practitioners.]

6.8 Asymptotic Expansions


Let $T_n \in \mathbb{R}$ be a statistic such that
$$T_n \xrightarrow{d} N(0, \sigma^2). \tag{6.3}$$
In some cases, such as when $T_n$ is a t-ratio, $\sigma^2 = 1$. In other cases $\sigma^2$ is unknown. Equivalently, writing $T_n \sim G_n(u, F)$, then
$$\lim_{n\to\infty} G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right),$$
or
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + o(1). \tag{6.4}$$
While (6.4) says that $G_n$ converges to $\Phi(u/\sigma)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.
The following notation will be helpful. Let $a_n$ be a sequence.

Definition 6.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.

Definition 6.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.

Definition 6.8.3 $a_n = o(n^{-r})$ if $n^r |a_n| \to 0$ as $n \to \infty$.

Basically, $a_n = O(n^{-r})$ if it declines to zero like $n^{-r}$.
We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.

Theorem 6.8.1 Under regularity conditions and (6.3),
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) + O(n^{-3/2})$$
uniformly over $u$, where $g_1$ is an even function of $u$, and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.

We can interpret Theorem 6.8.1 as follows. First, $G_n(u, F)$ converges to the normal limit at rate $n^{1/2}$. To a second order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2} g_1(u, F).$$
Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2} g_1(u, F) + n^{-1} g_2(u, F),$$
which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).

6.9 One-Sided Tests


Using the expansion of Theorem 6.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio $T_n$. An asymptotic test is based on $\Phi(u)$.
To the second order, the exact distribution is
$$P(T_n < u) = G_n(u, F_0) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1})$$
since $\sigma = 1$. The difference is
\begin{align*}
\Phi(u) - G_n(u, F_0) &= -\frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1}) \\
&= O(n^{-1/2}),
\end{align*}
so the order of the error is $O(n^{-1/2})$.
A bootstrap test is based on $G_n^*(u)$, which from Theorem 6.8.1 has the expansion
$$G_n^*(u) = G_n(u, F_n) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_n) + O(n^{-1}).$$
Because $\Phi(u)$ appears in both expansions, the difference between the bootstrap distribution and the true distribution is
$$G_n^*(u) - G_n(u, F_0) = \frac{1}{n^{1/2}}\left(g_1(u, F_n) - g_1(u, F_0)\right) + O(n^{-1}).$$
Since $F_n$ converges to $F$ at rate $\sqrt{n}$, and $g_1$ is continuous with respect to $F$, the difference $(g_1(u, F_n) - g_1(u, F_0))$ converges to 0 at rate $\sqrt{n}$. Heuristically,
\begin{align*}
g_1(u, F_n) - g_1(u, F_0) &\approx \frac{\partial}{\partial F} g_1(u, F_0)(F_n - F_0) \\
&= O(n^{-1/2}).
\end{align*}
The ``derivative'' $\frac{\partial}{\partial F} g_1(u, F)$ is only heuristic, as $F$ is a function. We conclude that
$$G_n^*(u) - G_n(u, F_0) = O(n^{-1}),$$
or
$$P(T_n^* \le u) = P(T_n \le u) + O(n^{-1}),$$
which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.

6.10 Symmetric Two-Sided Tests


If a random variable $y$ has distribution function $H(u) = P(y \le u)$, then the random variable $|y|$ has distribution function
$$\overline{H}(u) = H(u) - H(-u)$$
since
\begin{align*}
P(|y| \le u) &= P(-u \le y \le u) \\
&= P(y \le u) - P(y \le -u) \\
&= H(u) - H(-u).
\end{align*}
For example, if $Z \sim N(0,1)$, then $|Z|$ has distribution function
$$\overline{\Phi}(u) = \Phi(u) - \Phi(-u) = 2\Phi(u) - 1.$$
Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function
$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F).$$
A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \xrightarrow{d} Z$, then $|T_n| \xrightarrow{d} |Z| \sim \overline{\Phi}$. Thus asymptotic critical values are taken from the $\overline{\Phi}$ distribution, and exact critical values are taken from the $\overline{G}_n(u, F_0)$ distribution. From Theorem 6.8.1, we can calculate that
\begin{align*}
\overline{G}_n(u, F) &= G_n(u, F) - G_n(-u, F) \\
&= \left(\Phi(u) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F)\right) - \left(\Phi(-u) + \frac{1}{n^{1/2}} g_1(-u, F) + \frac{1}{n} g_2(-u, F)\right) + O(n^{-3/2}) \\
&= \overline{\Phi}(u) + \frac{2}{n} g_2(u, F) + O(n^{-3/2}), \tag{6.5}
\end{align*}
where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is
$$\overline{\Phi}(u) - \overline{G}_n(u, F_0) = -\frac{2}{n} g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}).$$
The order of the error is $O(n^{-1})$.
Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.
Applying (6.5) to the bootstrap distribution, we find
$$\overline{G}_n^*(u) = \overline{G}_n(u, F_n) = \overline{\Phi}(u) + \frac{2}{n} g_2(u, F_n) + O(n^{-3/2}).$$
Thus the difference between the bootstrap and exact distributions is
\begin{align*}
\overline{G}_n^*(u) - \overline{G}_n(u, F_0) &= \frac{2}{n}\left(g_2(u, F_n) - g_2(u, F_0)\right) + O(n^{-3/2}) \\
&= O(n^{-3/2}),
\end{align*}
the last equality because $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_2$ is continuous in $F$. Another way of writing this is
$$P(|T_n^*| < u) = P(|T_n| < u) + O(n^{-3/2}),$$
so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.
A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.
The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.

6.11 Percentile Confidence Intervals


To evaluate the coverage rate of the percentile interval, set $T_n = \sqrt{n}\left(\hat\theta - \theta_0\right)$. We know that $T_n \xrightarrow{d} N(0, V)$, which is not pivotal, as it depends on the unknown $V$. Theorem 6.8.1 shows that a first-order approximation is
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}),$$
where $\sigma = \sqrt{V}$, and for the bootstrap
$$G_n^*(u) = G_n(u, F_n) = \Phi\left(\frac{u}{\hat\sigma}\right) + O(n^{-1/2}),$$
where $\hat\sigma = V(F_n)$ is the bootstrap estimate of $\sigma$. The difference is
$$G_n^*(u) - G_n(u, F_0) = \Phi\left(\frac{u}{\hat\sigma}\right) - \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}) = O(n^{-1/2}),$$
since $\hat\sigma - \sigma = O(n^{-1/2})$. Hence the order of the error is $O(n^{-1/2})$.
The good news is that the percentile-type methods (if appropriately used) can yield $\sqrt{n}$-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.
The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.
Based on these arguments, the theoretical literature (e.g. Hall, 1992, Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.

6.12 Bootstrap Methods for Regression Models
The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss some bootstrap methods appropriate for the case of a regression model where
\begin{align*}
y_i &= x_i'\beta + e_i \\
E(e_i \mid x_i) &= 0.
\end{align*}
The non-parametric bootstrap distribution resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies
\begin{align*}
y_i^* &= x_i^{*\prime}\hat\beta + e_i^* \\
E(x_i^* e_i^*) &= 0,
\end{align*}
but generally
$$E(e_i^* \mid x_i^*) \neq 0.$$
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true).
One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.
To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat\beta + e_i^*$. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat e_1, \ldots, \hat e_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat\sigma^2)$.
For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, \ldots, x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really ``fixed'' or random.
The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.
One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that
\begin{align*}
E(e_i^* \mid x_i) &= 0 \\
E(e_i^{*2} \mid x_i) &= \hat e_i^2 \\
E(e_i^{*3} \mid x_i) &= \hat e_i^3.
\end{align*}
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
\begin{align*}
P\left(e_i^* = \left(\frac{1+\sqrt{5}}{2}\right)\hat e_i\right) &= \frac{\sqrt{5}-1}{2\sqrt{5}} \\
P\left(e_i^* = \left(\frac{1-\sqrt{5}}{2}\right)\hat e_i\right) &= \frac{\sqrt{5}+1}{2\sqrt{5}}.
\end{align*}
For each $x_i$, you sample $e_i^*$ using this two-point distribution.
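A sketch of one wild bootstrap draw using this two-point distribution (Python/numpy; the regressor matrix `x`, the OLS coefficients `beta_hat`, and the OLS residuals `e_hat` are assumed to come from a prior regression and are not defined in the text):

import numpy as np

def wild_bootstrap_draw(x, beta_hat, e_hat, rng):
    """One wild bootstrap sample: regressors held fixed, errors drawn from the
    two-point distribution above, scaled by the OLS residuals."""
    n = e_hat.shape[0]
    a = (1 + np.sqrt(5)) / 2      # taken with probability (sqrt(5)-1)/(2*sqrt(5))
    b = (1 - np.sqrt(5)) / 2      # taken with probability (sqrt(5)+1)/(2*sqrt(5))
    p_a = (np.sqrt(5) - 1) / (2 * np.sqrt(5))
    w = np.where(rng.random(n) < p_a, a, b)
    e_star = w * e_hat            # E(e*|x)=0, E(e*^2|x)=e_hat^2, E(e*^3|x)=e_hat^3
    y_star = x @ beta_hat + e_star
    return y_star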

6.13 Exercises
1. Let $F_n(x)$ denote the EDF of a random sample. Show that
$$\sqrt{n}\left(F_n(x) - F_0(x)\right) \xrightarrow{d} N\left(0, F_0(x)(1 - F_0(x))\right).$$

2. Take a random sample $\{y_1, \ldots, y_n\}$ with $\mu = Ey_i$ and $\sigma^2 = \mathrm{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \overline{y}_n$. Find the population moments $ET_n$ and $\mathrm{var}(T_n)$. Let $\{y_1^*, \ldots, y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \overline{y}_n^*$ be its sample mean. Find the bootstrap moments $ET_n^*$ and $\mathrm{var}(T_n^*)$.
3. Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat\beta$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat e = y - X\hat\beta$ the OLS residuals.

   (a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat e_i) : i = 1, \ldots, n\}$. That is, draw a random integer $i'$ from $[1, 2, \ldots, n]$, and set $x^* = x_{i'}$ and $e^* = \hat e_{i'}$. Set $y^* = x^{*\prime}\hat\beta + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.
   (b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat\beta^*$ and any other statistic of interest.

   Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.
4. Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate $\hat\theta^*$ on these samples and then calculate
$$T_n^* = (\hat\theta^* - \hat\theta)/s(\hat\theta),$$
where $s(\hat\theta)$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval
$$C = \left[\hat\theta - s(\hat\theta)q_n^*(.95),\ \hat\theta - s(\hat\theta)q_n^*(.05)\right].$$
Show that $C$ exactly equals the Alternative percentile interval (not the percentile-t interval).

5. You want to test $H_0: \theta = 0$ against $H_1: \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat\theta/s(\hat\theta) > c$ where $c$ is picked so that Type I error is $\alpha$. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat\theta^*$ on these samples and then calculate
$$T_n^* = \hat\theta^*/s(\hat\theta^*).$$
Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat\theta/s(\hat\theta) > q_n^*(.95)$. What is wrong with this procedure?

6. Suppose that in an application, $\hat\theta = 1.2$ and $s(\hat\theta) = .2$. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat\theta^*$ is calculated on each sample. The $\hat\theta^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat\theta^*$ are .75 and 1.3, respectively.

   (a) Report the 95% Efron Percentile interval for $\theta$.
   (b) Report the 95% Alternative Percentile interval for $\theta$.
   (c) With the given information, can you report the 95% Percentile-t interval for $\theta$?

7. The data file hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.

Chapter 7

Generalized Method of Moments

7.1 Overidentified Linear Model


Consider the linear model
\begin{align*}
y_i &= x_i'\beta + e_i \\
&= x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i \\
E(x_i e_i) &= 0,
\end{align*}
where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Now we can write the model as
\begin{align*}
y_i &= x_{1i}'\beta_1 + e_i \\
E(x_i e_i) &= 0.
\end{align*}
In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E(x_i e_i) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.
This is a special case of a more general class of moment condition models. Let $g(y, z, x, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \ge k$ such that
$$Eg(y_i, z_i, x_i, \beta_0) = 0 \tag{7.1}$$
where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, x, \beta) = x(y - x_1'\beta)$. In econometrics, this class of models are called moment condition models. In the statistics literature, these are known as estimating equations.
As an important special case we will devote special attention to linear moment condition models, which can be written as
\begin{align*}
y_i &= z_i'\beta + e_i \\
E(x_i e_i) &= 0,
\end{align*}
where the dimensions of $z_i$ and $x_i$ are $k \times 1$ and $\ell \times 1$, with $\ell \ge k$. If $k = \ell$ the model is just identified, otherwise it is overidentified. The variables $z_i$ may be components and functions of $x_i$, but this is not required. This model falls in the class (7.1) by setting
$$g(y, z, x, \beta_0) = x(y - z'\beta_0). \tag{7.2}$$

7.2 GMM Estimator


Define the sample analog of (7.2)
$$\overline{g}_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta) = \frac{1}{n}\sum_{i=1}^n x_i\left(y_i - z_i'\beta\right) = \frac{1}{n}\left(X'y - X'Z\beta\right). \tag{7.3}$$
The method of moments estimator for $\beta$ is defined as the parameter value which sets $\overline{g}_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $\overline{g}_n(\beta)$ ``close'' to zero.
For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$J_n(\beta) = n\,\overline{g}_n(\beta)'W_n\overline{g}_n(\beta).$$
This is a non-negative measure of the ``length'' of the vector $\overline{g}_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n\,\overline{g}_n(\beta)'\overline{g}_n(\beta) = n\left\|\overline{g}_n(\beta)\right\|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 1 $\hat\beta_{GMM} = \mathrm{argmin}_\beta\, J_n(\beta)$.

Note that if $k = \ell$, then $\overline{g}_n(\hat\beta) = 0$, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are
\begin{align*}
0 &= \frac{\partial}{\partial\beta}J_n(\hat\beta) \\
&= 2\frac{\partial}{\partial\beta}\overline{g}_n(\hat\beta)'W_n\overline{g}_n(\hat\beta) \\
&= -2\left(\frac{1}{n}Z'X\right)W_n\left(\frac{1}{n}X'\left(y - Z\hat\beta\right)\right),
\end{align*}
so
$$2\left(Z'X\right)W_n\left(X'Z\right)\hat\beta = 2\left(Z'X\right)W_n\left(X'y\right),$$
which establishes the following.

Proposition 7.2.1
$$\hat\beta_{GMM} = \left(Z'XW_nX'Z\right)^{-1}Z'XW_nX'y.$$

While the estimator depends on $W_n$, the dependence is only up to scale, for if $W_n$ is replaced by $cW_n$ for some $c > 0$, $\hat\beta_{GMM}$ does not change.
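The closed form in Proposition 7.2.1 is straightforward to compute. A minimal sketch (Python/numpy; the arrays `y`, `Z`, `X` and the weight matrix `W` are assumed inputs, not objects defined in the text):

import numpy as np

def gmm_linear(y, Z, X, W):
    """GMM estimator (Z'X W X'Z)^{-1} Z'X W X'y for a given weight matrix W."""
    ZX = Z.T @ X                      # k x l
    A = ZX @ W @ ZX.T                 # Z'X W X'Z
    b = ZX @ W @ (X.T @ y)            # Z'X W X'y
    return np.linalg.solve(A, b)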

7.3 Distribution of GMM Estimator


Assume that $W_n \xrightarrow{p} W > 0$. Let
$$Q = E(x_i z_i')$$
and
$$\Omega = E\left(x_i x_i' e_i^2\right) = E\left(g_i g_i'\right),$$
where $g_i = x_i e_i$. Then
$$\left(\frac{1}{n}Z'X\right)W_n\left(\frac{1}{n}X'Z\right) \xrightarrow{p} Q'WQ$$
and
$$\left(\frac{1}{n}Z'X\right)W_n\left(\frac{1}{\sqrt{n}}X'e\right) \xrightarrow{d} Q'W\,N(0, \Omega).$$
We conclude:

Theorem 7.3.1 $\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N(0, V)$, where
$$V = \left(Q'WQ\right)^{-1}\left(Q'W\Omega WQ\right)\left(Q'WQ\right)^{-1}.$$

In general, GMM estimators are asymptotically normal with ``sandwich form'' asymptotic variances.
The optimal weight matrix $W_0$ is one which minimizes $V$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:
$$\hat\beta = \left(Z'X\Omega^{-1}X'Z\right)^{-1}Z'X\Omega^{-1}X'y.$$
Thus we have
Theorem 7.3.2 For the efficient GMM estimator, $\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \left(Q'\Omega^{-1}Q\right)^{-1}\right)$.

$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat\beta$ the efficient GMM estimator, as it has the same asymptotic distribution.
By ``efficient'', we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices $W_n$. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).
If it is known that $E(g_i(\beta)) = 0$, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than $\left(G'\Omega^{-1}G\right)^{-1}$ where $G = E\frac{\partial}{\partial\beta'}g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.
This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.

7.4 Estimation of the Efficient Weight Matrix


Given any weight matrix $W_n > 0$, the GMM estimator $\hat\beta$ is consistent yet inefficient. For example, we can set $W_n = I_\ell$. In the linear model, a better choice is $W_n = \left(X'X\right)^{-1}$. Given any such first-step estimator, we can define the residuals $\hat e_i = y_i - z_i'\hat\beta$ and moment equations $\hat g_i = x_i\hat e_i = g(y_i, z_i, x_i, \hat\beta)$. Construct
$$\overline{g}_n = \overline{g}_n(\hat\beta) = \frac{1}{n}\sum_{i=1}^n \hat g_i, \qquad \hat g_i^* = \hat g_i - \overline{g}_n,$$
and define
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i^*\hat g_i^{*\prime}\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \overline{g}_n\overline{g}_n'\right)^{-1}. \tag{7.4}$$
Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.
A common alternative choice is to set
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i'\right)^{-1},$$
which uses the uncentered moment conditions. Since $Eg_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $Eg_i \neq 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (7.4) above.
Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = (X'X)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residuals $\hat e_i = y_i - z_i'\hat\beta$. Then set $\hat g_i = x_i\hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$\hat\beta = \left(Z'X\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}X'Z\right)^{-1}Z'X\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}X'y.$$
In most cases, when we say ``GMM'', we actually mean ``efficient GMM''. There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.
An estimator of the asymptotic variance of $\hat\beta$ can be seen from the above formula. Set
$$\hat V = n\left(Z'X\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}X'Z\right)^{-1}.$$
Asymptotic standard errors are given by the square roots of the diagonal elements of $\hat V$.
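A sketch of this two-step recipe (Python/numpy; it is an illustration under the formulas above, not the text's own code, and the data arrays `y`, `Z`, `X` are assumed inputs), returning both the efficient estimate and the variance estimate $\hat V$:

import numpy as np

def efficient_gmm_linear(y, Z, X):
    """Two-step efficient GMM for E[x_i(y_i - z_i'beta)] = 0.
    First step uses W = (X'X)^{-1}; second step uses the centered weight matrix (7.4)."""
    n = y.shape[0]
    ZX = Z.T @ X

    def gmm(W):
        A = ZX @ W @ ZX.T
        b = ZX @ W @ (X.T @ y)
        return np.linalg.solve(A, b)

    beta1 = gmm(np.linalg.inv(X.T @ X))              # first-step estimate
    e_hat = y - Z @ beta1                            # first-step residuals
    g = X * e_hat[:, None]                           # rows are g_i' = (x_i e_i)'
    g_bar = g.mean(axis=0)
    S = g.T @ g - n * np.outer(g_bar, g_bar)         # = g'g - n g_bar g_bar'
    beta_hat = gmm(np.linalg.inv(S))                 # efficient GMM (scale of W irrelevant)
    V_hat = n * np.linalg.inv(ZX @ np.linalg.inv(S) @ ZX.T)
    return beta_hat, V_hat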

There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of $\beta$. The criterion function is then
$$J(\beta) = n\,\overline{g}_n(\beta)'\left(\frac{1}{n}\sum_{i=1}^n g_i^*(\beta)g_i^*(\beta)'\right)^{-1}\overline{g}_n(\beta),$$
where
$$g_i^*(\beta) = g_i(\beta) - \overline{g}_n(\beta).$$
The $\hat\beta$ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).
The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.

7.5 GMM: The General Case


In its most general form, GMM applies whenever an economic or statistical model implies the $\ell \times 1$ moment condition
$$E(g_i(\beta)) = 0.$$
Often, this is all that is known. Identification requires $\ell \ge k = \dim(\beta)$. The GMM estimator minimizes
$$J(\beta) = n\,\overline{g}_n(\beta)'W_n\overline{g}_n(\beta),$$
where
$$\overline{g}_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta)$$
and
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \overline{g}_n\overline{g}_n'\right)^{-1},$$
with $\hat g_i = g_i(\tilde\beta)$ constructed using a preliminary consistent estimator $\tilde\beta$, perhaps obtained by first setting $W_n = I$. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix $W_n$ is updated, and then $\hat\beta$ recomputed. This estimator can be iterated if needed.

Theorem 7.5.1 Under general regularity conditions, $\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \left(G'\Omega^{-1}G\right)^{-1}\right)$, where $\Omega = E\left(g_ig_i'\right)$ and $G = E\frac{\partial}{\partial\beta'}g_i(\beta)$. The variance of $\hat\beta$ may be estimated by $\left(\hat G'\hat\Omega^{-1}\hat G\right)^{-1}$ where $\hat\Omega = n^{-1}\sum_i \hat g_i^*\hat g_i^{*\prime}$ and $\hat G = n^{-1}\sum_i \frac{\partial}{\partial\beta'}g_i(\hat\beta)$.

The general theory of GMM estimation and testing was exposited by L. Hansen (1982).

7.6 Over-Identification Test


Overidentified models ($\ell > k$) are special in the sense that there may not be a parameter value $\beta$ such that the moment condition
$$Eg(y_i, z_i, x_i, \beta) = 0$$
holds. Thus the model (specifically, the overidentifying restrictions) is testable.
For example, take the linear model $y_i = \beta_1'x_{1i} + \beta_2'x_{2i} + e_i$ with $E(x_{1i}e_i) = 0$ and $E(x_{2i}e_i) = 0$. It is possible that $\beta_2 = 0$, so that the linear equation may be written as $y_i = \beta_1'x_{1i} + e_i$. However, it is possible that $\beta_2 \neq 0$, and in this case it would be impossible to find a value of $\beta_1$ so that both $E(x_{1i}(y_i - x_{1i}'\beta_1)) = 0$ and $E(x_{2i}(y_i - x_{1i}'\beta_1)) = 0$ hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.

Note that $\overline{g}_n \xrightarrow{p} Eg_i$, and thus $\overline{g}_n$ can be used to assess whether or not the hypothesis that $Eg_i = 0$ is true or not. The criterion function at the parameter estimates is
\begin{align*}
J &= n\,\overline{g}_n'W_n\overline{g}_n \\
&= n^2\,\overline{g}_n'\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}\overline{g}_n.
\end{align*}
$J$ is a quadratic form in $\overline{g}_n$, and is thus a natural test statistic for $H_0: Eg_i = 0$.

Theorem 7.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
$$J = J(\hat\beta) \xrightarrow{d} \chi^2_{\ell-k}.$$

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.
The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic $J$ exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic $J$ whenever GMM is the estimation method.
When over-identified models are estimated by GMM, it is customary to report the $J$ statistic as a general test of model adequacy.
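A minimal sketch of the overidentification statistic (Python/numpy), assuming `beta_hat` is an efficient GMM estimate computed with the centered weight matrix as in Section 7.4:

import numpy as np

def j_statistic(y, Z, X, beta_hat):
    """Sargan-Hansen J statistic J = n * g_bar' W_n g_bar with the centered
    weight matrix; compare with chi-square(l - k) critical values."""
    n, l = X.shape
    k = Z.shape[1]
    e_hat = y - Z @ beta_hat
    g = X * e_hat[:, None]
    g_bar = g.mean(axis=0)
    S = g.T @ g / n - np.outer(g_bar, g_bar)   # centered estimate of Omega
    J = n * g_bar @ np.linalg.solve(S, g_bar)
    return J, l - k                            # statistic and degrees of freedom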

7.7 Hypothesis Testing: The Distance Statistic


We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses.
If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).
For a given weight matrix $W_n$, the GMM criterion function is
$$J(\beta) = n\,\overline{g}_n(\beta)'W_n\overline{g}_n(\beta).$$
For $h: \mathbb{R}^k \to \mathbb{R}^r$, the hypothesis is
$$H_0: h(\beta) = 0.$$
The estimates under $H_1$ are
$$\hat\beta = \mathrm{argmin}_\beta\, J(\beta)$$
and those under $H_0$ are
$$\tilde\beta = \mathrm{argmin}_{h(\beta)=0}\, J(\beta).$$
The two minimizing criterion functions are $J(\hat\beta)$ and $J(\tilde\beta)$. The GMM distance statistic is the difference
$$D = J(\tilde\beta) - J(\hat\beta).$$

Proposition 7.7.1 If the same weight matrix $W_n$ is used for both null and alternative,
1. $D \ge 0$
2. $D \xrightarrow{d} \chi^2_r$
3. If $h$ is linear in $\beta$, then $D$ equals the Wald statistic.

If $h$ is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the $D$ statistic appears to have quite good sampling properties, and is the preferred test statistic.
Newey and West (1987) suggested to use the same weight matrix $W_n$ for both null and alternative, as this ensures that $D \ge 0$. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.
This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.

7.8 Conditional Moment Restrictions
In many contexts, the model implies more than an unconditional moment restriction of the form $Eg_i(\beta) = 0$. It implies a conditional moment restriction of the form
$$E(e_i(\beta) \mid x_i) = 0,$$
where $e_i(\beta)$ is some $s \times 1$ function of the observation and the parameters. In many cases, $s = 1$.
It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.
Our linear model $y_i = z_i'\beta + e_i$ with instruments $x_i$ falls into this class under the stronger assumption $E(e_i \mid x_i) = 0$. Then $e_i(\beta) = y_i - z_i'\beta$.
It is also helpful to realize that conventional regression models also fall into this class, except that in this case $z_i = x_i$. For example, in linear regression, $e_i(\beta) = y_i - x_i'\beta$, while in a nonlinear regression model $e_i(\beta) = y_i - g(x_i, \beta)$. In a joint model of the conditional mean and variance
$$e_i(\beta, \gamma) = \begin{pmatrix} y_i - x_i'\beta \\ \left(y_i - x_i'\beta\right)^2 - f(x_i)'\gamma \end{pmatrix}.$$
Here $s = 2$.
Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any $\ell \times 1$ function $\phi(x_i, \beta)$, we can set $g_i(\beta) = \phi(x_i, \beta)e_i(\beta)$, which satisfies $Eg_i(\beta) = 0$ and hence defines a GMM estimator. The obvious problem is that the class of functions $\phi$ is infinite. Which should be selected?
This is equivalent to the problem of selection of the best instruments. If $x_i \in \mathbb{R}$ is a valid instrument satisfying $E(e_i \mid x_i) = 0$, then $x_i$, $x_i^2$, $x_i^3$, ..., etc., are all valid instruments. Which should be used?
One solution is to construct an infinite list of potent instruments, and then use the first $k$ instruments. How is $k$ to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).
Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case $s = 1$. Let
$$R_i = E\left(\frac{\partial}{\partial\beta}e_i(\beta) \mid x_i\right)$$
and
$$\sigma_i^2 = E\left(e_i(\beta)^2 \mid x_i\right).$$
Then the ``optimal instrument'' is
$$A_i = -\sigma_i^{-2}R_i,$$
so the optimal moment is
$$g_i(\beta) = A_ie_i(\beta).$$
Setting $g_i(\beta)$ to be this choice (which is $k \times 1$, so is just-identified) yields the best GMM estimator possible.
In practice, $A_i$ is unknown, but its form does help us think about construction of optimal instruments.
In the linear model $e_i(\beta) = y_i - z_i'\beta$, note that
$$R_i = -E(z_i \mid x_i)$$
and
$$\sigma_i^2 = E\left(e_i^2 \mid x_i\right),$$
so
$$A_i = \sigma_i^{-2}E(z_i \mid x_i).$$
In the case of linear regression, $z_i = x_i$, so $A_i = \sigma_i^{-2}x_i$. Hence efficient GMM is GLS, as we discussed earlier in the course.
In the case of endogenous variables, note that the efficient instrument $A_i$ involves the estimation of the conditional mean of $z_i$ given $x_i$. In other words, to get the best instrument for $z_i$, we need the best conditional mean model for $z_i$ given $x_i$, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of $e_i$. This is the same as the GLS estimator; namely, that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.

7.9 Bootstrap GMM Inference
Let $\hat\beta$ be the 2SLS or GMM estimator of $\beta$. Using the EDF of $(y_i, x_i, z_i)$, we can apply the bootstrap methods discussed in Chapter 6 to compute estimates of the bias and variance of $\hat\beta$, and construct confidence intervals for $\beta$, identically as in the regression model. However, caution should be applied when interpreting such results.
A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied $J$ test will yield the wrong answer.
The problem is that in the sample, $\hat\beta$ is the ``true'' value and yet $\overline{g}_n(\hat\beta) \neq 0$. Thus according to random variables $(y_i^*, x_i^*, z_i^*)$ drawn from the EDF $F_n$,
$$E\left(g_i^*(\hat\beta)\right) = \overline{g}_n(\hat\beta) \neq 0.$$
This means that $(y_i^*, x_i^*, z_i^*)$ do not satisfy the same moment conditions as the population distribution.
A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample $(y^*, X^*, Z^*)$, define the bootstrap GMM criterion
$$J^*(\beta) = n\left(\overline{g}_n^*(\beta) - \overline{g}_n(\hat\beta)\right)'W_n\left(\overline{g}_n^*(\beta) - \overline{g}_n(\hat\beta)\right),$$
where $\overline{g}_n(\hat\beta)$ is from the in-sample data, not from the bootstrap data.
Let $\hat\beta^*$ minimize $J^*(\beta)$, and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is
$$\hat\beta^* = \left(Z^{*\prime}X^*W_nX^{*\prime}Z^*\right)^{-1}Z^{*\prime}X^*W_n\left(X^{*\prime}y^* - X'\hat e\right),$$
where $\hat e = y - Z\hat\beta$ are the in-sample residuals. The bootstrap $J$ statistic is $J^*(\hat\beta^*)$.
Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities $\hat p_i$ described in Chapter 8. Since $\sum_{i=1}^n \hat p_ig_i(\hat\beta) = 0$, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments is needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.

7.10 Exercises
1. Take the model
\begin{align*}
y_i &= x_i'\beta + e_i \\
E(x_ie_i) &= 0 \\
e_i^2 &= z_i'\gamma + \eta_i \\
E(z_i\eta_i) &= 0.
\end{align*}
Find the method of moments estimators $(\hat\beta, \hat\gamma)$ for $(\beta, \gamma)$.

2. Take the single equation
\begin{align*}
y &= Z\beta + e \\
E(e \mid X) &= 0.
\end{align*}
Assume $E\left(e_i^2 \mid x_i\right) = \sigma^2$. Show that if $\hat\beta$ is estimated by GMM with weight matrix $W_n = \left(X'X\right)^{-1}$, then
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \sigma^2\left(Q'M^{-1}Q\right)^{-1}\right)$$
where $Q = E(x_iz_i')$ and $M = E(x_ix_i')$.

3. Take the model $y_i = z_i'\beta + e_i$ with $E(x_ie_i) = 0$. Let $\hat e_i = y_i - z_i'\hat\beta$ where $\hat\beta$ is consistent for $\beta$ (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\hat e_i^2\right)^{-1}.$$
Show that $W_n \xrightarrow{p} \Omega^{-1}$ where $\Omega = E\left(x_ix_i'e_i^2\right)$.

4. In the linear model estimated by GMM with general weight matrix $W$, the asymptotic variance of $\hat\beta_{GMM}$ is
$$V = \left(Q'WQ\right)^{-1}Q'W\Omega WQ\left(Q'WQ\right)^{-1}.$$

   (a) Let $V_0$ be this matrix when $W = \Omega^{-1}$. Show that $V_0 = \left(Q'\Omega^{-1}Q\right)^{-1}$.
   (b) We want to show that for any $W$, $V - V_0$ is positive semi-definite (for then $V_0$ is the smallest possible covariance matrix and $W = \Omega^{-1}$ is the efficient weight matrix). To do this, start by finding matrices $A$ and $B$ such that $V = A'\Omega A$ and $V_0 = B'\Omega B$.
   (c) Show that $B'\Omega A = B'\Omega B$ and therefore that $B'\Omega(A - B) = 0$.
   (d) Use the expressions $V = A'\Omega A$, $A = B + (A - B)$, and $B'\Omega(A - B) = 0$ to show that $V \ge V_0$.

5. The equation of interest is
\begin{align*}
y_i &= g(x_i, \beta) + e_i \\
E(z_ie_i) &= 0.
\end{align*}
The observed data is $(y_i, x_i, z_i)$; $z_i$ is $\ell \times 1$ and $\beta$ is $k \times 1$, $\ell \ge k$. Show how to construct an efficient GMM estimator for $\beta$.
6. In the linear model y = X + e with E(xi ei ) = 0; the Generalized Method of Moments (GMM)
criterion function for is de ned as
1 0 1
Jn ( ) = (y X ) X ^ n X 0 (y X ) (7.5)
n
Pn
where e^i are the OLS residuals and ^ n = n1 i=1 xi x0i e^2i : The GMM estimator of ; subject to the
restriction h( ) = 0; is de ned as
~ = argmin Jn ( ):
h( )=0

The GMM test statistic (the distance statistic) of the hypothesis h( ) = 0 is


D = Jn ( ~ ) = min Jn ( ): (7.6)
h( )=0

(a) Show that you can rewrite Jn ( ) in (7.5) as
0 1
Jn ( ) = ^ V^ n ^

where !
n
X
1 1
V^ n = X 0 X xi x0i e^2i X 0X :
i=1

(b) Now focus on linear restrictions: h( ) = R0 r: Thus


~ = argmin Jn ( )
R0 r=0

and hence R0 ~ = r: De ne the Lagrangian L( ; ) = 12 Jn ( ) + 0


R0 r where is s 1:
Show that the minimizer is
1
~ = ^ V^ n R R0n V^ R R0 ^ r
1
^ = R0n V^ R R0 ^ r :

p d
(c) Show that if R0 = r then n ~ ! N (0; V R ) where

1
VR=V V R R0 V R R0 V :

(d) Show that in this setting, the distance statistic D in (7.6) equals the Wald statistic.

7. Take the linear model

yi = z 0i + ei
E (xi ei ) = 0:

and consider the GMM estimator ^ of : Let


1
Jn = ng n ( ^ )0 ^ gn ( ^ )
d 2
denote the test of overidentifying restrictions. Show that Jn ! ` k as n ! 1 by demonstrating
each of the following:
1
(a) Since > 0; we can write = CC 0 and = C0 1
C 1

0 1
(b) Jn = n C 0 g n ( ^ ) C0 ^ C C 0 gn ( ^ )

(c) C 0 g n ( ^ ) = D n C 0 g n ( 0) where
1
1 0 1 0 ^ 1 1 0 1 0 ^ 1
Dn = I` C0 XZ ZX XZ ZX C0 1
n n n n
1 0
gn ( 0) = X e:
n
p 1
(d) D n ! I ` R R0 R R0 where R = C 0 E (xi z 0i )
d
(e) n1=2 C 0 g n ( 0) !Z N (0; I ` )
d 0 0 1
(f) Jn ! Z I` R RR R0 Z
1
(g) Z 0 I ` R R0 R R0 Z 2
` k:
1
Hint: I ` R R0 R R0 is a projection matrix..

Chapter 8

Empirical Likelihood

8.1 Non-Parametric Likelihood


An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric analog of likelihood estimation.
The idea is to construct a multinomial distribution $F(p_1, \ldots, p_n)$ which places probability $p_i$ at each observation. To be a valid multinomial distribution, these probabilities must satisfy the requirements that $p_i \ge 0$ and
$$\sum_{i=1}^n p_i = 1. \tag{8.1}$$
Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is
$$\log L(p_1, \ldots, p_n) = \sum_{i=1}^n \log(p_i). \tag{8.2}$$
First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities $(p_1, \ldots, p_n)$ are those which maximize the log-likelihood subject to the constraint (8.1). This is equivalent to maximizing
$$\sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right),$$
where $\mu$ is a Lagrange multiplier. The $n$ first order conditions are $0 = p_i^{-1} - \mu$. Combined with the constraint (8.1) we find that the MLE is $p_i = n^{-1}$, yielding the log-likelihood $-n\log(n)$.
Now consider the case of an overidentified model with moment condition
$$Eg_i(\beta_0) = 0,$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$, and for simplicity we write $g_i(\beta) = g(y_i, z_i, x_i, \beta)$. The multinomial distribution which places probability $p_i$ at each observation $(y_i, x_i, z_i)$ will satisfy this condition if and only if
$$\sum_{i=1}^n p_ig_i(\beta) = 0. \tag{8.3}$$
The empirical likelihood estimator is the value of $\beta$ which maximizes the multinomial log-likelihood (8.2) subject to the restrictions (8.1) and (8.3).
The Lagrangian for this maximization problem is
$$\mathcal{L}(\beta, p_1, \ldots, p_n, \lambda, \mu) = \sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right) - n\lambda'\sum_{i=1}^n p_ig_i(\beta),$$

where $\lambda$ and $\mu$ are Lagrange multipliers. The first-order conditions of $\mathcal{L}$ with respect to $p_i$, $\mu$, and $\lambda$ are
\begin{align*}
\frac{1}{p_i} &= \mu + n\lambda'g_i(\beta) \\
\sum_{i=1}^n p_i &= 1 \\
\sum_{i=1}^n p_ig_i(\beta) &= 0.
\end{align*}
Multiplying the first equation by $p_i$, summing over $i$, and using the second and third equations, we find $\mu = n$ and
$$p_i = \frac{1}{n\left(1 + \lambda'g_i(\beta)\right)}.$$
Substituting into $\mathcal{L}$ we find
$$R(\beta, \lambda) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda'g_i(\beta)\right). \tag{8.4}$$
For given $\beta$, the Lagrange multiplier $\lambda(\beta)$ minimizes $R(\beta, \lambda)$:
$$\lambda(\beta) = \mathrm{argmin}_\lambda\, R(\beta, \lambda). \tag{8.5}$$
This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since $R(\beta, \lambda)$ is a convex function of $\lambda$. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 8.5). This yields the (profile) empirical log-likelihood function for $\beta$:
\begin{align*}
R(\beta) &= R(\beta, \lambda(\beta)) \\
&= -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda(\beta)'g_i(\beta)\right).
\end{align*}
The EL estimate $\hat\beta$ is the value which maximizes $R(\beta)$, or equivalently minimizes its negative
$$\hat\beta = \mathrm{argmin}_\beta\,[-R(\beta)]. \tag{8.6}$$
Numerical methods are required for calculation of $\hat\beta$ (see Section 8.5).
As a by-product of estimation, we also obtain the Lagrange multiplier $\hat\lambda = \lambda(\hat\beta)$, probabilities
$$\hat p_i = \frac{1}{n\left(1 + \hat\lambda'g_i(\hat\beta)\right)},$$
and maximized empirical likelihood
$$R(\hat\beta) = \sum_{i=1}^n \log(\hat p_i). \tag{8.7}$$

8.2 Asymptotic Distribution of EL Estimator


Define
\begin{align*}
G_i(\beta) &= \frac{\partial}{\partial\beta'}g_i(\beta) \tag{8.8} \\
G &= EG_i(\beta_0) \\
\Omega &= E\left(g_i(\beta_0)g_i(\beta_0)'\right)
\end{align*}
and
\begin{align*}
V &= \left(G'\Omega^{-1}G\right)^{-1} \tag{8.9} \\
V_\lambda &= \Omega - G\left(G'\Omega^{-1}G\right)^{-1}G'. \tag{8.10}
\end{align*}
For example, in the linear model, $G_i(\beta) = -x_iz_i'$, $G = -E(x_iz_i')$, and $\Omega = E\left(x_ix_i'e_i^2\right)$.

Theorem 8.2.1 Under regularity conditions,
\begin{align*}
\sqrt{n}\left(\hat\beta - \beta_0\right) &\xrightarrow{d} N(0, V) \\
\sqrt{n}\,\hat\lambda &\xrightarrow{d} \Omega^{-1}N(0, V_\lambda),
\end{align*}
where $V$ and $V_\lambda$ are defined in (8.9) and (8.10), and $\sqrt{n}\left(\hat\beta - \beta_0\right)$ and $\sqrt{n}\,\hat\lambda$ are asymptotically independent.

The proof is given in Section 8.6.
The theorem shows that the asymptotic variance $V$ for $\hat\beta$ is the same as for efficient GMM. Thus the EL estimator is asymptotically efficient.
Chamberlain (1987) showed that $V$ is the semiparametric efficiency bound for $\beta$ in the overidentified moment condition model. This means that no consistent estimator for this class of models can have a lower asymptotic variance than $V$. Since the EL estimator achieves this bound, it is an asymptotically efficient estimator for $\beta$.

8.3 Overidentifying Restrictions


In a parametric likelihood context, tests are based on the difference in the log likelihood functions. The same statistic can be constructed for empirical likelihood. Twice the difference between the unrestricted empirical log-likelihood $-n\log(n)$ and the maximized empirical log-likelihood for the model (8.7) is
$$LR_n = \sum_{i=1}^n 2\log\left(1 + \hat\lambda'g_i(\hat\beta)\right). \tag{8.11}$$

Theorem 8.3.1 If $Eg_i(\beta_0) = 0$ then $LR_n \xrightarrow{d} \chi^2_{\ell-k}$.

The proof is given in Section 8.6.
The EL overidentification test is similar to the GMM overidentification test. They are asymptotically first-order equivalent, and have the same interpretation. The overidentification test is a very useful by-product of EL estimation, and it is advisable to report the statistic $LR_n$ whenever EL is the estimation method.

8.4 Testing
Let the maintained model be
$$Eg_i(\beta) = 0, \tag{8.12}$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$. By ``maintained'' we mean that the overidentifying restrictions contained in (8.12) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is
$$h(\beta) = 0,$$
where $h: \mathbb{R}^k \to \mathbb{R}^a$. The restricted EL estimator and likelihood are the values which solve
\begin{align*}
\tilde\beta &= \mathrm{argmax}_{h(\beta)=0}\, R(\beta) \\
R(\tilde\beta) &= \max_{h(\beta)=0}\, R(\beta).
\end{align*}
Fundamentally, the restricted EL estimator $\tilde\beta$ is simply an EL estimator with $\ell - k + a$ overidentifying restrictions, so there is no fundamental change in the distribution theory for $\tilde\beta$ relative to $\hat\beta$. To test the hypothesis $h(\beta) = 0$ while maintaining (8.12), the simple overidentifying restrictions test (8.11) is not appropriate. Instead we use the difference in log-likelihoods:
$$LR_n = 2\left(R(\hat\beta) - R(\tilde\beta)\right).$$
This test statistic is a natural analog of the GMM distance statistic.

Theorem 8.4.1 Under (8.12) and $H_0: h(\beta) = 0$, $LR_n \xrightarrow{d} \chi^2_a$.

The proof of this result is more challenging and is omitted.

8.5 Numerical Computation
Gauss code which implements the methods discussed below can be found at

http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives
The numerical calculations depend on derivatives of the dual likelihood function (8.4). De ne

gi ( )
gi ( ; ) =
1 + 0 gi ( )
0
Gi ( )
Gi ( ; ) =
1 + 0 gi ( )

The rst derivatives of (8.4) are


n
X
@
R = R( ; ) = gi ( ; )
@ i=1
Xn
@
R = R( ; ) = Gi ( ; ) :
@ i=1

The second derivatives are


n
X
@2 0
R = 0R( ; ) = gi ( ; ) gi ( ; )
@ @ i=1
2 n
X
@ 0 Gi ( )
R = 0R( ; )= g i ( ; ) Gi ( ; )
@ @ i=1
1 + 0 gi ( )
n @2 0 !
@2 X 0 @ @ 0 gi ( )
R = 0R( ; )= Gi ( ; ) Gi ( ; ) 0
@ @ i=1
1+ gi ( )

Inner Loop
The so-called ``inner loop'' solves (8.5) for given $\beta$. The modified Newton method takes a quadratic approximation to $R_n(\beta, \lambda)$, yielding the iteration rule
$$\lambda_{j+1} = \lambda_j - \delta\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1}R_\lambda(\beta, \lambda_j), \tag{8.13}$$
where $\delta > 0$ is a scalar steplength (to be discussed next). The starting value $\lambda_1$ can be set to the zero vector. The iteration (8.13) is continued until the gradient $R_\lambda(\beta, \lambda_j)$ is smaller than some prespecified tolerance.
Efficient convergence requires a good choice of steplength $\delta$. One method uses the following quadratic approximation. Set $\delta_0 = 0$, $\delta_1 = \frac{1}{2}$ and $\delta_2 = 1$. For $p = 0, 1, 2$, set
\begin{align*}
\lambda_p &= \lambda_j - \delta_p\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1}R_\lambda(\beta, \lambda_j) \\
R_p &= R(\beta, \lambda_p).
\end{align*}
A quadratic function can be fit exactly through these three points. The value of $\delta$ which minimizes this quadratic is
$$\hat\delta = \frac{R_2 + 3R_0 - 4R_1}{4R_2 + 4R_0 - 8R_1},$$
yielding the steplength to be plugged into (8.13).
A complication is that $\lambda$ must be constrained so that $0 \le p_i \le 1$, which holds if
$$n\left(1 + \lambda'g_i(\beta)\right) \ge 1 \tag{8.14}$$
for all $i$. If (8.14) fails, the stepsize $\delta$ needs to be decreased.
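A minimal sketch of this inner loop (Python/numpy; not the Gauss code referenced above). It assumes the user supplies the $n \times \ell$ matrix of moments $g_i(\beta)$; the derivatives used are those of (8.4), $R_\lambda = -\sum_i g_i/(1+\lambda'g_i)$ and $R_{\lambda\lambda} = \sum_i g_ig_i'/(1+\lambda'g_i)^2$, and simple step-halving replaces the quadratic steplength rule:

import numpy as np

def el_inner_loop(g, tol=1e-10, max_iter=100):
    """Solve (8.5): minimize R(beta, lambda) over lambda for fixed beta,
    where g is the n x l matrix with rows g_i(beta)."""
    n, l = g.shape
    lam = np.zeros(l)                                # starting value lambda_1 = 0
    for _ in range(max_iter):
        d = 1.0 + g @ lam                            # 1 + lambda'g_i
        R_lam = -(g / d[:, None]).sum(axis=0)        # gradient in lambda
        if np.max(np.abs(R_lam)) < tol:
            break
        R_ll = (g / d[:, None]).T @ (g / d[:, None])  # Hessian in lambda
        step = np.linalg.solve(R_ll, R_lam)
        delta = 1.0
        # keep n(1 + lambda'g_i) >= 1 so the implied probabilities stay in [0, 1]
        while np.any(n * (1.0 + g @ (lam - delta * step)) < 1.0) and delta > 1e-8:
            delta *= 0.5
        lam = lam - delta * step
    return lam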


Outer Loop

The outer loop is the minimization (8.6). This can be done by the modi ed Newton method described
in the previous section. The gradient for (8.6) is
@ @ 0
R = R( ) = R( ; ) = R + R =R
@ @
since R ( ; ) = 0 at = ( ); where
@ 1
= ( )= R R ;
@ 0
the second equality following from the implicit function theorem applied to R ( ; ( )) = 0:
The Hessian for (8.6) is
@
R = R( )
@ @ 0
@ 0
= R ( ; ( )) + R ( ; ( ))
@ 0
0 0
= R ( ; ( )) + R0 + R + R
0 1
= R R R R :

It is not guaranteed that R > 0: If not, the eigenvalues of R should be adjusted so that all are positive.
The Newton iteration rule is
j+1 = j R 1R
where is a scalar stepsize, and the rule is iterated until convergence.

8.6 Technical Proofs


Proof of Theorem 8.2.1. ( ^ ; ^ ) jointly solve

@
n
X gi ^
0 = R( ; ) = 0 (8.15)
@ i=1 1 + ^ gi ^
0
@
n
X Gi ^
0 = R( ; ) = 0 : (8.16)
@ i=1 1 + ^ gi ^
Pn Pn Pn 0
Let Gn = n1 i=1 Gi ( 0 ) ; g n = n1 i=1 g i ( 0 ) and n = n1 i=1 g i ( 0 ) gi ( 0) :
Expanding (8.16) around = 0 and = 0 = 0 yields

0 ' G0n ^ 0 : (8.17)

Expanding (8.15) around = 0 and = 0 = 0 yields

0' gn Gn ^ 0 + n
^ (8.18)

Premultiplying by G0n n
1
and using (8.17) yields

0 ' G0n n
1
gn G0n n
1
Gn ^ 0 + G0n n
1
n
^

= G0n n
1
gn G0n n
1
Gn ^ 0

Solving for ^ and using the WLLN and CLT yields


p 1 1p
n ^ 0 ' G0n n
1
Gn G0n n ng n (8.19)
d 1
! G0 1
G G0 1
N (0; )
= N (0; V )

90
Solving (8.18) for ^ and using (8.19) yields
p 1 p
n^ ' n
1
I Gn G0n n
1
Gn G0n n
1
ng n (8.20)
d 1 1
! I G G0 1
G G0 1
N (0; )
1
= N (0; V )

Furthermore, since
1
G0 I 1
G G0 1
G G0 = 0
p p ^
n ^ 0 and n are asymptotically uncorrelated and hence independent.

Proof of Theorem 8.3.1. First, by a Taylor expansion, (8.19), and (8.20),


n
1 X p
p g ^ ' n g n + Gn ^
n i=1 i 0

1 p
' I Gn G0n n
1
Gn G0n n
1
ng n
p
' n n^:

Second, since log(1 + u) ' u u2 =2 for u small,


n
X 0
LRn = 2 log 1 + ^ g i ^
i=1
n
X n
X 0
0
' 2^ gi ^ ^0 gi ^ gi ^ ^
i=1 i=1
0
' n^ n
^
d 0 1
! N (0; V ) N (0; V )
2
= ` k

where the proof of the nal equality is left as an exercise.

Chapter 9

Endogeneity

We say that there is endogeneity in the linear model $y_i = z_i'\beta + e_i$ if $\beta$ is the parameter of interest and $E(z_ie_i) \neq 0$. This cannot happen if $\beta$ is defined by linear projection, so requires a structural interpretation. The coefficient $\beta$ must have meaning separately from the definition of a conditional mean or linear projection.

Example: Measurement error in the regressor. Suppose that $(y_i, x_i^*)$ are joint random variables, $E(y_i \mid x_i^*) = x_i^{*\prime}\beta$ is linear, $\beta$ is the parameter of interest, and $x_i^*$ is not observed. Instead we observe $x_i = x_i^* + u_i$ where $u_i$ is a $k \times 1$ measurement error, independent of $y_i$ and $x_i^*$. Then
\begin{align*}
y_i &= x_i^{*\prime}\beta + e_i \\
&= (x_i - u_i)'\beta + e_i \\
&= x_i'\beta + v_i,
\end{align*}
where
$$v_i = e_i - u_i'\beta.$$
The problem is that
$$E(x_iv_i) = E\left[(x_i^* + u_i)\left(e_i - u_i'\beta\right)\right] = -E(u_iu_i')\beta \neq 0$$
if $\beta \neq 0$ and $E(u_iu_i') \neq 0$. It follows that if $\hat\beta$ is the OLS estimator, then
$$\hat\beta \xrightarrow{p} \beta^* = \beta - \left(E(x_ix_i')\right)^{-1}E(u_iu_i')\beta \neq \beta.$$
This is called measurement error bias.


Example: Supply and Demand. The variables $q_i$ and $p_i$ (quantity and price) are determined jointly by the demand equation
$$q_i = -\beta_1p_i + e_{1i}$$
and the supply equation
$$q_i = \beta_2p_i + e_{2i}.$$
Assume that $e_i = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix}$ is iid, $Ee_i = 0$, $\beta_1 + \beta_2 = 1$ and $Ee_ie_i' = I_2$ (the latter for simplicity). The question is, if we regress $q_i$ on $p_i$, what happens?
It is helpful to solve for $q_i$ and $p_i$ in terms of the errors. In matrix notation,
$$\begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix},$$
so
\begin{align*}
\begin{pmatrix} q_i \\ p_i \end{pmatrix} &= \begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}^{-1}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} \\
&= \begin{pmatrix} \beta_2 & \beta_1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} \\
&= \begin{pmatrix} \beta_2e_{1i} + \beta_1e_{2i} \\ e_{1i} - e_{2i} \end{pmatrix}.
\end{align*}
The projection of $q_i$ on $p_i$ yields
\begin{align*}
q_i &= \beta^*p_i + \varepsilon_i \\
E(p_i\varepsilon_i) &= 0,
\end{align*}
where
$$\beta^* = \frac{E(p_iq_i)}{E(p_i^2)} = \frac{\beta_2 - \beta_1}{2}.$$
Hence if it is estimated by OLS, $\hat\beta \xrightarrow{p} \beta^*$, which does not equal either $\beta_1$ or $\beta_2$. This is called simultaneous equations bias.

9.1 Instrumental Variables


Let the equation of interest be
$$y_i = z_i'\beta + e_i \tag{9.1}$$
where $z_i$ is $k \times 1$, and assume that $E(z_ie_i) \neq 0$ so there is endogeneity. We call (9.1) the structural equation. In matrix notation, this can be written as
$$y = Z\beta + e. \tag{9.2}$$
Any solution to the problem of endogeneity requires additional information which we call instruments.

Definition 9.1.1 The $\ell \times 1$ random vector $x_i$ is an instrumental variable for (9.1) if $E(x_ie_i) = 0$.

In a typical set-up, some regressors in $z_i$ will be uncorrelated with $e_i$ (for example, at least the intercept). Thus we make the partition
$$z_i = \begin{pmatrix} z_{1i} \\ z_{2i} \end{pmatrix}\ \begin{matrix} k_1 \\ k_2 \end{matrix} \tag{9.3}$$
where $E(z_{1i}e_i) = 0$ yet $E(z_{2i}e_i) \neq 0$. We call $z_{1i}$ exogenous and $z_{2i}$ endogenous. By the above definition, $z_{1i}$ is an instrumental variable for (9.1), so should be included in $x_i$. So we have the partition
$$x_i = \begin{pmatrix} z_{1i} \\ x_{2i} \end{pmatrix}\ \begin{matrix} k_1 \\ \ell_2 \end{matrix} \tag{9.4}$$
where $z_{1i} = x_{1i}$ are the included exogenous variables, and $x_{2i}$ are the excluded exogenous variables. That is, $x_{2i}$ are variables which could be included in the equation for $y_i$ (in the sense that they are uncorrelated with $e_i$) yet can be excluded, as they would have true zero coefficients in the equation.
The model is just-identified if $\ell = k$ (i.e., if $\ell_2 = k_2$) and over-identified if $\ell > k$ (i.e., if $\ell_2 > k_2$).
We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.

9.2 Reduced Form


The reduced form relationship between the variables or ``regressors'' $z_i$ and the instruments $x_i$ is found by linear projection. Let
$$\Gamma = E(x_ix_i')^{-1}E(x_iz_i')$$
be the $\ell \times k$ matrix of coefficients from a projection of $z_i$ on $x_i$, and define
$$u_i = z_i - \Gamma'x_i$$
as the projection error. Then the reduced form linear relationship between $z_i$ and $x_i$ is
$$z_i = \Gamma'x_i + u_i. \tag{9.5}$$
In matrix notation, we can write (9.5) as
$$Z = X\Gamma + U, \tag{9.6}$$
where $U$ is $n \times k$.
By construction,
$$E(x_iu_i') = 0,$$
so (9.5) is a projection and can be estimated by OLS:
\begin{align*}
Z &= X\hat\Gamma + \hat u \\
\hat\Gamma &= \left(X'X\right)^{-1}X'Z.
\end{align*}
Substituting (9.6) into (9.2), we find
\begin{align*}
y &= (X\Gamma + U)\beta + e \\
&= X\lambda + v, \tag{9.7}
\end{align*}
where
$$\lambda = \Gamma\beta \tag{9.8}$$
and
$$v = U\beta + e.$$
Observe that
$$E(x_iv_i) = E(x_iu_i')\beta + E(x_ie_i) = 0.$$
Thus (9.7) is a projection equation and may be estimated by OLS. This is
\begin{align*}
y &= X\hat\lambda + \hat v \\
\hat\lambda &= \left(X'X\right)^{-1}X'y.
\end{align*}
The equation (9.7) is the reduced form for $y$. (9.6) and (9.7) together are the reduced form equations for the system
\begin{align*}
y &= X\lambda + v \\
Z &= X\Gamma + U.
\end{align*}
As we showed above, OLS yields the reduced-form estimates $(\hat\lambda, \hat\Gamma)$.

9.3 Identification


The structural parameter $\beta$ relates to $(\lambda, \Gamma)$ through (9.8). The parameter $\beta$ is identified, meaning that it can be recovered from the reduced form, if
$$\mathrm{rank}(\Gamma) = k. \tag{9.9}$$
Assume that (9.9) holds. If $\ell = k$, then $\beta = \Gamma^{-1}\lambda$. If $\ell > k$, then for any $W > 0$, $\beta = \left(\Gamma'W\Gamma\right)^{-1}\Gamma'W\lambda$. If (9.9) is not satisfied, then $\beta$ cannot be recovered from $(\lambda, \Gamma)$. Note that a necessary (although not sufficient) condition for (9.9) is $\ell \ge k$.
Since $X$ and $Z$ have the common variables $X_1$, we can rewrite some of the expressions. Using (9.3) and (9.4) to make the matrix partitions $X = [X_1, X_2]$ and $Z = [X_1, Z_2]$, we can partition $\Gamma$ as
$$\Gamma = \begin{pmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{pmatrix} = \begin{pmatrix} I & \Gamma_{12} \\ 0 & \Gamma_{22} \end{pmatrix}.$$
(9.6) can be rewritten as
\begin{align*}
Z_1 &= X_1 \\
Z_2 &= X_1\Gamma_{12} + X_2\Gamma_{22} + U_2. \tag{9.10}
\end{align*}
$\beta$ is identified if $\mathrm{rank}(\Gamma) = k$, which is true if and only if $\mathrm{rank}(\Gamma_{22}) = k_2$ (by the upper-diagonal structure of $\Gamma$). Thus the key to identification of the model rests on the $\ell_2 \times k_2$ matrix $\Gamma_{22}$ in (9.10).

9.4 Estimation
The model can be written as
\begin{align*}
y_i &= z_i'\beta + e_i \\
E(x_ie_i) &= 0,
\end{align*}
or
\begin{align*}
Eg_i(\beta) &= 0 \\
g_i(\beta) &= x_i\left(y_i - z_i'\beta\right).
\end{align*}
This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in Chapters 7 and 8 directly apply. Recall that the GMM estimator, for given weight matrix $W_n$, is
$$\hat\beta = \left(Z'XW_nX'Z\right)^{-1}Z'XW_nX'y.$$

9.5 Special Cases: IV and 2SLS


If the model is just-identi ed, so that k = `; then the formula for GMM simpli es. We nd that
1
^ = Z 0 XW n X 0 Z Z 0 XW n X 0 y
1 1
= X 0Z W n 1 Z 0X Z 0 XW n X 0 y
1
= X 0Z X 0y
This estimator is often called the instrumental variables estimator (IV) of ; where X is used as an
instrument for Z: Observe that the weight matrix W n has disappeared. In the just-identi ed case, the weight
matrix places no role. This is also the MME estimator of ; and the EL estimator. Another interpretation
1
stems from the fact that since = ; we can construct the Indirect Least Squares (ILS) estimator:
1
^ = ^ ^
1 1 1
= X 0X X 0Z X 0X X 0y
1 1
= X 0Z X 0X X 0X X 0y
1
= X 0Z X 0y :
which again is the IV estimator.

Recall that the optimal weight matrix is an estimate of the inverse of = E xi x0i e2i : In the special case
that E e2i j xi = 2 (homoskedasticity), then = E (xi x0i ) 2 / E (xi x0i ) suggesting the weight matrix
1
W n = X 0X : Using this choice, the GMM estimator equals
1 1 1
^ 2SLS = Z 0 X X 0 X X 0Z Z 0X X 0X X 0y

This is called the two-stage-least squares (2SLS) estimator. It was originally proposed by Theil (1953)
and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the ho-
moskedasticity assumption, the 2SLS estimator is e cient GMM, but otherwise it is ine cient.
It is useful to observe that writing
1
P = X X 0X X0
^
Z = PZ = X^
then
1
^ = Z 0P Z Z 0P y
1
= ^0 Z
Z ^ ^ 0 y:
Z

The source of the "two-stage" name is that the estimator can be computed as follows.

First, regress Z on X, viz. Γ̂ = (X'X)^{-1} X'Z and Ẑ = XΓ̂ = PZ.

Second, regress y on Ẑ, viz. β̂ = (Ẑ'Ẑ)^{-1} Ẑ'y.

It is useful to scrutinize the projection Ẑ. Recall, Z = [Z_1, Z_2] and X = [Z_1, X_2]. Then
Ẑ = [Ẑ_1, Ẑ_2]
  = [PZ_1, PZ_2]
  = [Z_1, PZ_2]
  = [Z_1, Ẑ_2],
since Z_1 lies in the span of X. Thus in the second stage, we regress y on Z_1 and Ẑ_2. So only the endogenous variables Z_2 are replaced by their fitted values:
Ẑ_2 = X_1 Γ̂_12 + X_2 Γ̂_22.
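As a hedged illustration of the algebra above (a sketch, not the text's own code), the following numpy function computes the 2SLS estimator both as (Z'PZ)^{-1}Z'Py and as OLS of y on the fitted values Ẑ = PZ; the two agree because P is symmetric and idempotent. The array names are assumptions.

import numpy as np

def two_stage_least_squares(y, Z, X):
    # Projection matrix P = X (X'X)^{-1} X'
    P = X @ np.linalg.solve(X.T @ X, X.T)
    beta_gmm = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
    Zhat = P @ Z                                             # first stage fitted values
    beta_2sls = np.linalg.solve(Zhat.T @ Zhat, Zhat.T @ y)   # second stage OLS
    assert np.allclose(beta_gmm, beta_2sls)
    return beta_2sls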

9.6 Bekker Asymptotics


Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is
y = Zβ + e                                                    (9.11)
Z = XΓ + U                                                    (9.12)
ξ = (e, U)
E(ξ | X) = 0
E(ξ'ξ | X) = S.
As before, X is n × ℓ, so there are ℓ instruments.
First, let's analyze the approximate bias of OLS applied to (9.11). Using (9.12),
E((1/n) Z'e) = E(z_i e_i) = Γ'E(x_i e_i) + E(u_i e_i) = s_21
and
E((1/n) Z'Z) = E(z_i z_i')
             = Γ'E(x_i x_i')Γ + E(u_i x_i')Γ + Γ'E(x_i u_i') + E(u_i u_i')
             = Γ'QΓ + S_22,
where Q = E(x_i x_i'). Hence by a first-order approximation
E(β̂_OLS − β) ≈ (E((1/n) Z'Z))^{-1} E((1/n) Z'e)
             = (Γ'QΓ + S_22)^{-1} s_21                        (9.13)
which is zero only when s_21 = 0 (when Z is exogenous).


We now derive a similar result for the 2SLS estimator:
β̂_2SLS = (Z'PZ)^{-1} (Z'Py).
Let P = X(X'X)^{-1}X'. By the spectral decomposition of an idempotent matrix, P = HΛH' where Λ = diag(I_ℓ, 0). Let Q = H'ξ S^{-1/2}, which satisfies EQ'Q = I_n, and partition Q = (q_1'  Q_2')' where q_1 is ℓ × 1. Hence
E((1/n) ξ'Pξ | X) = S^{1/2}' E((1/n) Q'ΛQ | X) S^{1/2}
                  = S^{1/2}' E((1/n) q_1'q_1) S^{1/2}
                  = (ℓ/n) S^{1/2}' S^{1/2}
                  = αS,
where
α = ℓ/n.
Using (9.12) and this result,
E((1/n) Z'Pe) = E((1/n) Γ'X'e) + E((1/n) U'Pe) = αs_21,
and
E((1/n) Z'PZ) = Γ'E(x_i x_i')Γ + Γ'E(x_i u_i') + E(u_i x_i')Γ + E((1/n) U'PU)
              = Γ'QΓ + αS_22.
Together
E(β̂_2SLS − β) ≈ (E((1/n) Z'PZ))^{-1} E((1/n) Z'Pe)
              = (Γ'QΓ + αS_22)^{-1} αs_21.                    (9.14)

In general this is non-zero, except when s_21 = 0 (when Z is exogenous). It is also close to zero when α = 0.
Bekker (1994) pointed out that it also has the reverse implication: when α = ℓ/n is large, the bias in the 2SLS estimator will be large. Indeed as α → 1, the expression in (9.14) approaches that in (9.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.
Bekker (1994) showed further that under the alternative asymptotic approximation that α is fixed as n → ∞ (so that the number of instruments goes to infinity proportionately with sample size) then the expression in (9.14) is the probability limit of β̂_2SLS − β.

9.7 Identification Failure


Recall the reduced form equation
Z_2 = X_1 Γ_12 + X_2 Γ_22 + U_2.
The parameter β fails to be identified if Γ_22 has deficient rank. The consequences of identification failure for inference are quite severe.
Take the simplest case where k = ℓ = 1 (so there is no X_1). Then the model may be written as
y_i = z_i β + e_i
z_i = x_i γ + u_i
and Γ_22 = γ = E(x_i z_i)/E(x_i²). We see that β is identified if and only if γ ≠ 0, which occurs when E(z_i x_i) ≠ 0. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.
Suppose this condition fails, so E(z_i x_i) = 0. Then by the CLT

(1/√n) Σ_{i=1}^n x_i e_i →_d N_1 ~ N(0, E(x_i² e_i²))                              (9.15)

(1/√n) Σ_{i=1}^n x_i z_i = (1/√n) Σ_{i=1}^n x_i u_i →_d N_2 ~ N(0, E(x_i² u_i²))   (9.16)

therefore
β̂ − β = [(1/√n) Σ_{i=1}^n x_i e_i] / [(1/√n) Σ_{i=1}^n x_i z_i] →_d N_1 / N_2 ~ Cauchy,
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when Γ_22 is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz.
Γ_22 = n^{-1/2} C,
where C is a full rank matrix. The n^{-1/2} is picked because it provides just the right balancing to allow a rich distribution theory.
To see the consequences, once again take the simple case k = ℓ = 1. Here, the instrument x_i is weak for z_i if
γ = n^{-1/2} c.
Then (9.15) is unaffected, but (9.16) instead takes the form
(1/√n) Σ_{i=1}^n x_i z_i = (1/√n) Σ_{i=1}^n x_i² γ + (1/√n) Σ_{i=1}^n x_i u_i
                         = (1/n) Σ_{i=1}^n x_i² c + (1/√n) Σ_{i=1}^n x_i u_i
                         →_d Qc + N_2,
therefore
β̂ − β →_d N_1 / (Qc + N_2).
As in the case of complete identification failure, we find that β̂ is inconsistent for β and the asymptotic distribution of β̂ is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.
The distribution theory for this model was developed by Staiger and Stock (1997) and extended to
nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang
and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form
Z_2 = X_1 Γ_12 + X_2 Γ_22 + U_2
and identification requires rank(Γ_22) = k_2. If k_2 = 1, this requires Γ_22 ≠ 0, which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of k_2 = 1 (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for Z_2, construct the test of Γ_22 = 0, and at a minimum check that the test rejects H_0: Γ_22 = 0.
When k_2 > 1, Γ_22 ≠ 0 is not sufficient for identification. It is not even sufficient that each column of Γ_22 is non-zero (each column corresponds to a distinct endogenous variable in Z_2). So while a minimal check is to test that each column of Γ_22 is non-zero, this cannot be interpreted as definitive proof that Γ_22 has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for Z_2, and attempt to assess the likelihood that Γ_22 has deficient rank.
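The recommendation above can be illustrated with a short sketch (an assumption-laden illustration, not a prescribed procedure). Assuming arrays z2 (the single endogenous regressor), X1 (included exogenous variables) and X2 (excluded instruments), the hypothetical function below computes a heteroskedasticity-robust Wald statistic for H_0: Γ_22 = 0 in the reduced form.

import numpy as np

def first_stage_wald(z2, X1, X2):
    X = np.column_stack([X1, X2])
    k1 = X1.shape[1]
    coef = np.linalg.solve(X.T @ X, X.T @ z2)
    u = z2 - X @ coef                            # reduced-form residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ (X.T * u**2) @ X @ XtX_inv     # White covariance matrix estimate
    g = coef[k1:]                                # coefficients on the excluded instruments
    return g @ np.linalg.solve(V[k1:, k1:], g)   # asymptotically chi-square(l2) under H0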

9.8 Exercises
1. Consider the single equation model
y_i = z_i β + e_i,
where y_i and z_i are both real-valued (1 × 1). Let β̂ denote the IV estimator of β using as an instrument a dummy variable d_i (which takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.
2. In the linear model

yi = z 0i + ei
E (ei j z i ) = 0

suppose 2i = E e2i j z i is known. Show that the GLS estimator of can be written as an IV estimator
using some instrument xi : (Find an expression for xi :)
3. Take the linear model
y = Zβ + e.
Let the OLS estimator for β be β̂ and the OLS residual be ê = y − Zβ̂.
Let the IV estimator for β using some instrument X be β̃ and the IV residual be ẽ = y − Zβ̃. If X is indeed endogenous, will IV "fit" better than OLS, in the sense that ẽ'ẽ < ê'ê, at least in large samples?
4. The reduced form between the regressors z i and instruments xi takes the form
0
zi = x i + ui

or
Z =X +U
where z i is k 1; xi is l 1; Z is n k; X is n l; U is n k; and is l k: The parameter is
de ned by the population moment condition

E (xi u0i ) = 0
1
Show that the method of moments estimator for is ^ = X 0 X X 0Z :
5. In the structural model

y = Z +e
Z = X +U

with l k; l k; we claim that is identi ed (can be recovered from the reduced form) if rank( ) = k:
Explain why this is true. That is, show that if rank( ) < k then cannot be identi ed.
6. Take the linear model

yi = xi + ei
E (ei j xi ) = 0:

where xi and are 1 1:

(a) Show that E (xi ei ) = 0 and E x2i ei = 0: Is z i = (xi x2i )0 a valid instrumental variable for
estimation of ?
(b) De ne the 2SLS estimator of ; using z i as an instrument for xi : How does this di er from OLS?
(c) Find the e cient GMM estimator of based on the moment condition

E (z i (yi xi )) = 0:

Does this di er from 2SLS and/or OLS?

7. Suppose that price and quantity are determined by the intersection of the linear demand and supply
curves

Demand : Q = a0 + a1 P + a2 Y + e1
Supply : Q = b0 + b1 P + b2 W + e2

where income (Y ) and wage (W ) are determined outside the market. In this model, are the parameters
identi ed?
8. The data le card.dat is taken from David Card \Using Geographic Variation in College Proximity
to Estimate the Return to Schooling" in Aspects of Labour Market Behavior (1995). There are 2215
observations with 29 variables, listed in card.pdf. We want to estimate a wage equation
log(Wage) = β_0 + β_1 Educ + β_2 Exper + β_3 Exper² + β_4 South + β_5 Black + e

where Educ = Education (Years), Exper = Experience (Years), and South and Black are regional and racial dummy variables.

(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the
model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near
a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments:
near2 (a dummy indicating that the observation lives near a 2-year college), f atheduc (the edu-
cation, in years, of the father) and motheduc (the education, in years, of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step estimator to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the J statistic for overidentification.
(f) Discuss your findings.

Chapter 10

Univariate Time Series

A time series yt is a process observed in sequence over time, t = 1; :::; T . To indicate the dependence on
time, we adopt new notation, and use the subscript t to denote the individual observation, and T to denote
the number of observations.
Because of the sequential nature of time series, we expect that yt and yt 1 are not independent, so
classical assumptions are not valid.
We can separate time series into two categories: univariate (yt 2 R is scalar); and multivariate (yt 2 Rm
is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model
for multivariate time series is vector autoregressions (VARs).

10.1 Stationarity and Ergodicity


Definition 10.1.1 {y_t} is covariance (weakly) stationary if
E(y_t) = μ
is independent of t, and
cov(y_t, y_{t-k}) = γ(k)
is independent of t for all k. γ(k) is called the autocovariance function.

ρ(k) = γ(k)/γ(0) = corr(y_t, y_{t-k})

is the autocorrelation function.

De nition 10.1.2 fyt g is strictly stationary if the joint distribution of (yt ; :::; yt k) is independent of t
for all k:

Definition 10.1.3 A stationary time series is ergodic if γ(k) → 0 as k → ∞.


The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 10.1.1 If yt is strictly stationary and ergodic and xt = f (yt ; yt 1 ; :::) is a random variable, then
xt is strictly stationary and ergodic.

Theorem 10.1.2 (Ergodic Theorem). If yt is strictly stationary and ergodic and E jyt j < 1; then as
T ! 1;
T
1X p
yt ! E(yt ):
T t=1

This allows us to consistently estimate parameters using time-series moments:

The sample mean:
μ̂ = (1/T) Σ_{t=1}^T y_t.

The sample autocovariance:
γ̂(k) = (1/T) Σ_{t=1}^T (y_t − μ̂)(y_{t-k} − μ̂).

The sample autocorrelation:
ρ̂(k) = γ̂(k)/γ̂(0).

Theorem 10.1.3 If y_t is strictly stationary and ergodic and Ey_t² < ∞, then as T → ∞,
1. μ̂ →_p E(y_t);
2. γ̂(k) →_p γ(k);
3. ρ̂(k) →_p ρ(k).

A proof is given in Section 10.13.
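As a small illustration (an assumption-laden sketch rather than part of the text), these sample moments can be computed as follows for a one-dimensional array y of length T; terms with unavailable lags are simply dropped from the sum.

import numpy as np

def sample_autocovariance(y, k):
    T = len(y)
    mu_hat = y.mean()
    # (1/T) times the sum of (y_t - mu_hat)(y_{t-k} - mu_hat) over the T - k available terms
    return np.sum((y[k:] - mu_hat) * (y[:T - k] - mu_hat)) / T

def sample_autocorrelation(y, k):
    return sample_autocovariance(y, k) / sample_autocovariance(y, 0)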

10.2 Autoregressions
In time-series, the series f:::; y1 ; y2 ; :::; yT ; :::g are jointly random. We consider the conditional expectation

E (yt j Ft 1)

where Ft 1 = fyt 1 ; yt 2 ; :::g is the past history of the series.


An autoregressive (AR) model specifies that only a finite number of past lags matter:

E(y_t | F_{t-1}) = E(y_t | y_{t-1}, ..., y_{t-k}).

A linear AR model (the most common type used in practice) specifies linearity:

E(y_t | F_{t-1}) = α + ρ_1 y_{t-1} + ρ_2 y_{t-2} + ⋯ + ρ_k y_{t-k}.

Letting
e_t = y_t − E(y_t | F_{t-1}),
we have the autoregressive model

y_t = α + ρ_1 y_{t-1} + ρ_2 y_{t-2} + ⋯ + ρ_k y_{t-k} + e_t
E(e_t | F_{t-1}) = 0.

The last property de nes a special time-series process.

De nition 10.2.1 et is a martingale di erence sequence (MDS) if E (et j Ft 1) = 0:


Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of
optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in
consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset
returns should be the sum of a constant plus a MDS.
The MDS property for the regression error plays the same role in a time-series regression as does the
conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more
important in the time-series context, as it is di cult to derive distribution theories without this property.
A useful property of a MDS is that et is uncorrelated with any function of the lagged information Ft 1 :
Thus for k > 0; E (yt k et ) = 0:

10.3 Stationarity of AR(1) Process
A mean-zero AR(1) is
y_t = ρ y_{t-1} + e_t.
Assume that e_t is iid, E(e_t) = 0 and Ee_t² = σ² < ∞.
By back-substitution, we find
y_t = e_t + ρ e_{t-1} + ρ² e_{t-2} + ⋯
    = Σ_{k=0}^∞ ρ^k e_{t-k}.
Loosely speaking, this series converges if the sequence ρ^k e_{t-k} gets small as k → ∞. This occurs when |ρ| < 1.

Theorem 10.3.1 If |ρ| < 1 then y_t is strictly stationary and ergodic.

We can compute the moments of y_t using the infinite sum:
Ey_t = Σ_{k=0}^∞ ρ^k E(e_{t-k}) = 0
var(y_t) = Σ_{k=0}^∞ ρ^{2k} var(e_{t-k}) = σ²/(1 − ρ²).
If the equation for y_t has an intercept, the above results are unchanged, except that the mean of y_t can be computed from the relationship
Ey_t = α + ρ Ey_{t-1},
and solving for Ey_t = Ey_{t-1} we find Ey_t = α/(1 − ρ).
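The variance formula can be checked by simulation. The following sketch (illustrative names and values only, not part of the text) simulates a long AR(1) with |ρ| < 1, drawing the initial condition from the stationary distribution, and compares the sample variance with σ²/(1 − ρ²).

import numpy as np

rng = np.random.default_rng(0)
rho, sigma, T = 0.7, 1.0, 100_000
e = rng.normal(0.0, sigma, T)
y = np.empty(T)
y[0] = e[0] / np.sqrt(1 - rho**2)        # initial draw from the stationary distribution
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t]
print(y.var(), sigma**2 / (1 - rho**2))  # both approximately 1.96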

10.4 Lag Operator


An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Definition 10.4.1 The lag operator L satisfies L y_t = y_{t-1}.

Defining L² = LL, we see that L² y_t = L y_{t-1} = y_{t-2}. In general, L^k y_t = y_{t-k}.

The AR(1) model can be written in the format
y_t − ρ y_{t-1} = e_t
or
(1 − ρL) y_t = e_t.
The operator ρ(L) = (1 − ρL) is a polynomial in the operator L. We say that the root of the polynomial is 1/ρ, since ρ(z) = 0 when z = 1/ρ. We call ρ(L) the autoregressive polynomial of y_t.
From Theorem 10.3.1, an AR(1) is stationary iff |ρ| < 1. Note that an equivalent way to say this is that an AR(1) is stationary iff the root of the autoregressive polynomial is larger than one (in absolute value).

10.5 Stationarity of AR(k)


The AR(k) model is
y_t = ρ_1 y_{t-1} + ρ_2 y_{t-2} + ⋯ + ρ_k y_{t-k} + e_t.
Using the lag operator,
y_t − ρ_1 L y_t − ρ_2 L² y_t − ⋯ − ρ_k L^k y_t = e_t,
or
ρ(L) y_t = e_t
where
ρ(L) = 1 − ρ_1 L − ρ_2 L² − ⋯ − ρ_k L^k.

We call ρ(L) the autoregressive polynomial of y_t.
The Fundamental Theorem of Algebra says that any polynomial can be factored as
ρ(z) = (1 − λ_1^{-1} z)(1 − λ_2^{-1} z) ⋯ (1 − λ_k^{-1} z)
where the λ_1, ..., λ_k are the complex roots of ρ(z), which satisfy ρ(λ_j) = 0.
We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let |λ| denote the modulus of a complex number λ.

Theorem 10.5.1 The AR(k) is strictly stationary and ergodic if and only if |λ_j| > 1 for all j.
One way of stating this is that \All roots lie outside the unit circle."
If one of the roots equals 1, we say that (L); and hence yt ; \has a unit root". This is a special case of
non-stationarity, and is of great interest in applied time series.

10.6 Estimation
Let
x_t = (1, y_{t-1}, y_{t-2}, ..., y_{t-k})'
β = (α, ρ_1, ρ_2, ..., ρ_k)'.
Then the model can be written as
y_t = x_t'β + e_t.
The OLS estimator is
β̂ = (X'X)^{-1} X'y.
To study ^ ; it is helpful to de ne the process ut = xt et : Note that ut is a MDS, since

E (ut j Ft 1) = E (xt et j Ft 1) = xt E (et j Ft 1) = 0:

By Theorem 10.1.1, it is also strictly stationary and ergodic. Thus


T T
1X 1X p
xt et = ut ! E (ut ) = 0: (10.1)
T t=1 T t=1

The vector xt is strictly stationary and ergodic, and by Theorem 10.1.1, so is xt x0t : Thus by the Ergodic
Theorem,
T
1X p
xt x0t ! E (xt x0t ) = Q:
T t=1

Combined with (10.1) and the continuous mapping theorem, we see that
β̂ = β + ((1/T) Σ_{t=1}^T x_t x_t')^{-1} ((1/T) Σ_{t=1}^T x_t e_t) →_p β + Q^{-1} · 0 = β.
We have shown the following:

Theorem 10.6.1 If the AR(k) process y_t is strictly stationary and ergodic and Ey_t² < ∞, then β̂ →_p β as T → ∞.
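A minimal sketch of the OLS estimator of an AR(k) with intercept follows; the function name and array layout are assumptions for illustration, not part of the text.

import numpy as np

def ar_ols(y, k):
    T = len(y)
    # regressors x_t = (1, y_{t-1}, ..., y_{t-k})' for t = k, ..., T-1
    X = np.column_stack([np.ones(T - k)] + [y[k - j:T - j] for j in range(1, k + 1)])
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y[k:])   # (alpha, rho_1, ..., rho_k)
    return beta_hat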

10.7 Asymptotic Distribution
Theorem 10.7.1 (MDS CLT). If u_t is a strictly stationary and ergodic MDS and E(u_t u_t') = Ω < ∞, then as T → ∞,
(1/√T) Σ_{t=1}^T u_t →_d N(0, Ω).

Since x_t e_t is a MDS, we can apply Theorem 10.7.1 to see that

(1/√T) Σ_{t=1}^T x_t e_t →_d N(0, Ω),

where
Ω = E(x_t x_t' e_t²).

Theorem 10.7.2 If the AR(k) process y_t is strictly stationary and ergodic and Ey_t⁴ < ∞, then as T → ∞,
√T (β̂ − β) →_d N(0, Q^{-1} Ω Q^{-1}).

This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication
is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just
as in the cross-section case.

10.8 Bootstrap for Autoregressions


In the non-parametric bootstrap, we constructed the bootstrap sample by randomly resampling from
the data values fyt ; xt g: This creates an iid bootstrap sample. Clearly, this cannot work in a time-series
application, as this imposes inappropriate independence.
Brie y, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap.

1. Estimate β̂ and residuals ê_t.

2. Fix an initial condition (y_{-k+1}, y_{-k+2}, ..., y_0).

3. Simulate iid draws e*_i from the empirical distribution of the residuals {ê_1, ..., ê_T}.

4. Create the bootstrap series y*_t by the recursive formula
   y*_t = α̂ + ρ̂_1 y*_{t-1} + ρ̂_2 y*_{t-2} + ⋯ + ρ̂_k y*_{t-k} + e*_t.

This construction imposes homoskedasticity on the errors e*_i, which may be different than the properties of the actual e_i. It also presumes that the AR(k) structure is the truth.
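A minimal sketch of Method 1 is given below, reusing the hypothetical ar_ols helper sketched after Section 10.6; the names and the number of replications are illustrative assumptions.

import numpy as np

def ar_model_bootstrap(y, k, B=999, seed=0):
    rng = np.random.default_rng(seed)
    T = len(y)
    beta_hat = ar_ols(y, k)                                 # step 1: coefficient estimates
    X = np.column_stack([np.ones(T - k)] + [y[k - j:T - j] for j in range(1, k + 1)])
    e_hat = y[k:] - X @ beta_hat                            # step 1: residuals
    draws = np.empty((B, k + 1))
    for b in range(B):
        e_star = rng.choice(e_hat, size=T, replace=True)    # step 3: iid residual draws
        y_star = np.concatenate([y[:k], np.empty(T - k)])   # step 2: fix initial conditions
        for t in range(k, T):                               # step 4: recursive construction
            lags = y_star[t - k:t][::-1]                    # (y*_{t-1}, ..., y*_{t-k})
            y_star[t] = beta_hat[0] + beta_hat[1:] @ lags + e_star[t]
        draws[b] = ar_ols(y_star, k)
    return draws           # bootstrap draws of (alpha, rho_1, ..., rho_k)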

Method 2: Block Resampling

1. Divide the sample into T =m blocks of length m:


2. Resample complete blocks. For each simulated sample, draw T =m blocks.

3. Paste the blocks together to create the bootstrap time-series yt :


4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model-misspeci cation.
5. The results may be sensitive to the block length, and the way that the data are partitioned into blocks.
6. May not work well in small samples.

10.9 Trend Stationarity

y_t = μ_0 + μ_1 t + S_t                                       (10.2)
S_t = ρ_1 S_{t-1} + ρ_2 S_{t-2} + ⋯ + ρ_k S_{t-k} + e_t,      (10.3)
or
y_t = α_0 + α_1 t + ρ_1 y_{t-1} + ρ_2 y_{t-2} + ⋯ + ρ_k y_{t-k} + e_t.    (10.4)
There are two essentially equivalent ways to estimate the autoregressive parameters (ρ_1, ..., ρ_k):

You can estimate (10.4) by OLS.


You can estimate (10.2)-(10.3) sequentially by OLS. That is, rst estimate (10.2), get the residual S^t ;
and then perform regression (10.3) replacing St with S^t : This procedure is sometimes called Detrending.

The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects

There are three popular methods to deal with seasonal data.

Include dummy variables for each season. This presumes that \seasonality" does not change over the
sample.
Use \seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted
average of the data for that season in neighboring years. Thus the seasonally adjusted data is a
\ ltered" series. This is a exible approach which can extract a wide range of seasonal factors. The
seasonal adjustment, however, also alters the time-series correlations of the data.
First apply a seasonal di erencing operator. If s is the number of seasons (typically s = 4 or s = 12);

s yt = yt yt s;

or the season-to-season change. The series s yt is clearly free of seasonality. But the long-run trend
is also eliminated, and perhaps this was of relevance.

10.10 Testing for Omitted Serial Correlation


For simplicity, let the null hypothesis be an AR(1):

y_t = α + ρ y_{t-1} + u_t.                                    (10.5)

We are interested in the question of whether the error u_t is serially correlated. We model this as an AR(1):

u_t = θ u_{t-1} + e_t                                         (10.6)

with e_t a MDS. The hypothesis of no omitted serial correlation is

H_0: θ = 0
H_1: θ ≠ 0.

We want to test H_0 against H_1.

To combine (10.5) and (10.6), we take (10.5) and lag the equation once:

y_{t-1} = α + ρ y_{t-2} + u_{t-1}.

We then multiply this by θ and subtract from (10.5), to find

y_t − θ y_{t-1} = α − θα + ρ y_{t-1} − θρ y_{t-2} + u_t − θ u_{t-1},

or

y_t = α(1 − θ) + (ρ + θ) y_{t-1} − θρ y_{t-2} + e_t = AR(2).

Thus under H_0, y_t is an AR(1), and under H_1 it is an AR(2). H_0 may be expressed as the restriction that the coefficient on y_{t-2} is zero.
An appropriate test of H0 against H1 is therefore a Wald test that the coe cient on yt 2 is zero. (A
simple exclusion test).
In general, if the null hypothesis is that yt is an AR(k), and the alternative is that the error is an
AR(m), this is the same as saying that under the alternative yt is an AR(k+m), and this is equivalent to
the restriction that the coe cients on yt k 1 ; :::; yt k m are jointly zero. An appropriate test is the Wald
test of this restriction.

10.11 Model Selection


What is the appropriate choice of k in practice? This is a problem of model selection.
One approach to model selection is to choose k based on Wald tests.
Another is to minimize the AIC or BIC information criterion, e.g.
AIC(k) = log σ̂²(k) + 2k/T,
where σ̂²(k) is the estimated residual variance from an AR(k).
One ambiguity in de ning the AIC criterion is that the sample available for estimation changes as k
changes. (If you increase k; you need more initial conditions.) This can induce strange behavior in the AIC.
The best remedy is to x a upper value k; and then reserve the rst k as initial conditions, and then estimate
the models AR(1), AR(2), ..., AR(k) on this (uni ed) sample.

10.12 Autoregressive Unit Roots


The AR(k) model is

ρ(L) y_t = μ + e_t
ρ(L) = 1 − ρ_1 L − ⋯ − ρ_k L^k.

As we discussed before, y_t has a unit root when ρ(1) = 0, or

ρ_1 + ρ_2 + ⋯ + ρ_k = 1.

In this case, y_t is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.
A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:

Δy_t = μ + α_0 y_{t-1} + α_1 Δy_{t-1} + ⋯ + α_{k-1} Δy_{t-(k-1)} + e_t.    (10.7)

These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter α_0 summarizes the information about the unit root, since ρ(1) = −α_0. To see this, observe that the lag polynomial for the y_t computed from (10.7) is

(1 − L) − α_0 L − α_1 (L − L²) − ⋯ − α_{k-1} (L^{k-1} − L^k).

But this must equal ρ(L), as the models are equivalent. Thus

ρ(1) = (1 − 1) − α_0 − α_1 (1 − 1) − ⋯ − α_{k-1} (1 − 1) = −α_0.

Hence, the hypothesis of a unit root in y_t can be stated as

H_0: α_0 = 0.

Note that the model is stationary if α_0 < 0. So the natural alternative is

H_1: α_0 < 0.

Under H_0, the model for y_t is

Δy_t = μ + α_1 Δy_{t-1} + ⋯ + α_{k-1} Δy_{t-(k-1)} + e_t,

which is an AR(k−1) in the first-difference Δy_t. Thus if y_t has a (single) unit root, then Δy_t is a stationary AR process. Because of this property, we say that if y_t is non-stationary but Δ^d y_t is stationary, then y_t is "integrated of order d", or I(d). Thus a time series with a unit root is I(1).
Since 0 is the parameter of a linear regression, the natural test statistic is the t-statistic for H0 from
OLS estimation of (10.7). Indeed, this is the most popular unit root test, and is called the Augmented
Dickey-Fuller (ADF) test for a unit root.
It would seem natural to assess the signi cance of the ADF statistic using the normal table. However,
under H0 ; yt is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic
framework has been developed to deal with non-stationary data. We do not have the time to develop this
theory in detail, but simply assert the main results.

Theorem 10.12.1 (Dickey-Fuller Theorem). Assume α_0 = 0. As T → ∞,

T α̂_0 →_d (1 − α_1 − α_2 − ⋯ − α_{k-1}) DF_α

ADF = α̂_0 / s(α̂_0) → DF_t.

The limit distributions DF_α and DF_t are non-normal. They are skewed to the left, and have negative means.
The first result states that α̂_0 converges to its true value (of zero) at rate T, rather than the conventional rate of T^{1/2}. This is called a "super-consistent" rate of convergence.
The second result states that the t-statistic for α̂_0 converges to a limit distribution which is non-normal, but does not depend on the parameters α. This distribution has been extensively tabulated, and may be used for testing the hypothesis H_0. Note: The standard error s(α̂_0) is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.
Since the alternative hypothesis is one-sided, the ADF test rejects H0 in favor of H1 when ADF < c;
where c is the critical value from the ADF table. If the test rejects H0 ; this means that the evidence points
to yt being stationary. If the test does not reject H0 ; a common conclusion is that the data suggests that yt
is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insu cient
evidence to conclude whether the data are stationary or not.
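As an illustration (a sketch under stated assumptions, not a packaged routine), the ADF statistic with an intercept can be computed as the conventional t-ratio for α_0 from OLS estimation of (10.7); the resulting statistic must then be compared with the Dickey-Fuller table, not the normal table. The names below are assumptions.

import numpy as np

def adf_statistic(y, k):
    dy = np.diff(y)                                   # Delta y_t
    T = len(dy)
    X = np.column_stack(
        [np.ones(T - k + 1), y[k - 1:-1]] +           # intercept and y_{t-1}
        [dy[k - 1 - j:T - j] for j in range(1, k)])   # Delta y_{t-1}, ..., Delta y_{t-(k-1)}
    yy = dy[k - 1:]
    coef = np.linalg.solve(X.T @ X, X.T @ yy)
    e = yy - X @ coef
    s2 = (e @ e) / (len(yy) - X.shape[1])             # conventional (homoskedastic) variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se                               # ADF = alpha0-hat / s(alpha0-hat)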
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
Δy_t = μ_1 + μ_2 t + α_0 y_{t-1} + α_1 Δy_{t-1} + ⋯ + α_{k-1} Δy_{t-(k-1)} + e_t.    (10.8)
This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, Stock Prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe this data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for α_0 from OLS estimation of (10.8).
If a time trend is included, the test procedure is the same, but di erent critical values are required. The
ADF test has a di erent distribution when the time trend has been included, and a di erent table should
be consulted.
Most texts include as well the critical values for the extreme polar case where the intercept has been
omitted from the model. These are included for completeness (from a pedagogical perspective) but have no
relevance for empirical practice where intercepts are always included

10.13 Technical Proofs


Proof of Theorem 10.1.3. Part (1) is a direct consequence of the Ergodic Theorem. For Part (2), note that
γ̂(k) = (1/T) Σ_{t=1}^T (y_t − μ̂)(y_{t-k} − μ̂)
      = (1/T) Σ_{t=1}^T y_t y_{t-k} − (1/T) Σ_{t=1}^T y_t μ̂ − (1/T) Σ_{t=1}^T y_{t-k} μ̂ + μ̂².
By Theorem 10.1.1 above, the sequence y_t y_{t-k} is strictly stationary and ergodic, and it has a finite mean by the assumption that Ey_t² < ∞. Thus an application of the Ergodic Theorem yields
(1/T) Σ_{t=1}^T y_t y_{t-k} →_p E(y_t y_{t-k}).
Thus
γ̂(k) →_p E(y_t y_{t-k}) − μ² − μ² + μ² = E(y_t y_{t-k}) − μ² = γ(k).
Part (3) follows by the continuous mapping theorem: ρ̂(k) = γ̂(k)/γ̂(0) →_p γ(k)/γ(0) = ρ(k).

Chapter 11

Multivariate Time Series

A multivariate time series y t is a vector process m 1. Let Ft 1 = (y t 1 ; y t 2 ; :::) be all lagged


information at time t: The typical goal is to nd the conditional expectation E (y t j Ft 1 ) : Note that since
y t is a vector, this conditional expectation is also a vector.

11.1 Vector Autoregressions (VARs)


A VAR model speci es that the conditional mean is a function of only a nite number of lags:

E (y t j Ft 1) = E yt j yt 1 ; :::; y t k :

A linear VAR speci es that this conditional mean is linear in the arguments:

E yt j yt 1 ; :::; y t k = a0 + A1 y t 1 + A2 y t 2 + Ak y t k:

Observe that a0 is m 1,and each of A1 through Ak are m m matrices.


De ning the m 1 regression error

et = y t E (y t j Ft 1) ;

we have the VAR model

y t = a0 + A1 y t 1 + A2 y t 2 + Ak y t k + et
E (et j Ft 1 ) = 0:

Alternatively, defining the (mk + 1) × 1 vector
x_t = (1, y_{t-1}', y_{t-2}', ..., y_{t-k}')'
and the m × (mk + 1) matrix
A = ( a_0  A_1  A_2  ⋯  A_k ),
then
y_t = A x_t + e_t.
The VAR model is a system of m equations. One way to write this is to let a0j be the jth row of A.
Then the VAR system can be written as the equations

Yjt = a0j xt + ejt :

Unrestricted VARs were introduced to econometrics by Sims (1980).

11.2 Estimation
Consider the moment conditions
E (xt ejt ) = 0;
j = 1; :::; m: These are implied by the VAR model, either as a regression, or as a linear projection.
The GMM estimator corresponding to these moment conditions is equation-by-equation OLS:

â_j = (X'X)^{-1} X'y_j.

An alternative way to compute this is as follows. Note that

â_j' = y_j' X (X'X)^{-1}.

And if we stack these to create the estimate Â, we find

Â = ( y_1'; y_2'; ...; y_m' ) X (X'X)^{-1}
  = Y' X (X'X)^{-1},

where
Y = ( y_1  y_2  ⋯  y_m )
is the T × m matrix of the stacked y_t'.
This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was
originally derived by Zellner (1962)
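A minimal sketch of this calculation: with Y the T × m data matrix whose rows are y_t', the coefficient matrix Â for a VAR(k) with intercept can be computed in one line of linear algebra. The names below are assumptions for illustration.

import numpy as np

def var_ols(Y, k):
    T, m = Y.shape
    # x_t = (1, y_{t-1}', ..., y_{t-k}')' stacked as rows for t = k, ..., T-1
    X = np.column_stack([np.ones(T - k)] + [Y[k - j:T - j] for j in range(1, k + 1)])
    A_hat = np.linalg.solve(X.T @ X, X.T @ Y[k:]).T    # m x (mk + 1), j-th row is a_j'
    return A_hat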

11.3 Restricted VARs


The unrestricted VAR is a system of m equations, each with the same set of regressors. A restricted
VAR imposes restrictions on the system. For example, some regressors may be excluded from some of the
equations. Restrictions may be imposed on individual equations, or across equations. The GMM framework
gives a convenient method to impose such restrictions on estimation.

11.4 Single Equation from a VAR


Often, we are only interested in a single equation out of a VAR system. This takes the form

yjt = a0j xt + et ;
0
and xt consists of lagged values of yjt and the other ylt s: In this case, it is convenient to re-de ne the
variables. Let yt = yjt ; and z t be the other variables. Let et = ejt and = aj : Then the single equation
takes the form
yt = x0t + et ; (11.1)
and h i
0
xt = 1 yt 1 yt k z 0t 1 z 0t k :
This is just a conventional regression with time series data.

11.5 Testing for Omitted Serial Correlation


Consider the problem of testing for omitted serial correlation in equation (11.1). Suppose that e_t is an AR(1). Then

y_t = x_t'β + e_t
e_t = θ e_{t-1} + u_t                                         (11.2)
E(u_t | F_{t-1}) = 0.

Then the null and alternative are
H_0: θ = 0     H_1: θ ≠ 0.
Take the equation y_t = x_t'β + e_t, and subtract off the equation once lagged multiplied by θ, to get
y_t − θ y_{t-1} = (x_t'β + e_t) − θ (x_{t-1}'β + e_{t-1})
                = x_t'β − θ x_{t-1}'β + e_t − θ e_{t-1},
or
y_t = θ y_{t-1} + x_t'β + x_{t-1}'γ + u_t,                    (11.3)
which is a valid regression model.
So testing H0 versus H1 is equivalent to testing for the signi cance of adding (yt 1 ; xt 1 ) to the regression.
This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted
serial correlation is to test the signi cance of extra lagged values of the dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which once was very
popular, and is still routinely reported by conventional regression packages. The DW test is appropriate
only when regression yt = x0t + et is not dynamic (has no lagged values on the RHS), and et is iid N(0; 2 ):
Otherwise it is invalid.
Another interesting fact is that (11.2) is a special case of (11.3), under the restriction γ = −θβ. This
restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (11.2)
may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the
common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (11.2)
is uncommon in recent applications.

11.6 Selection of Lag Length in an VAR


If you want a data-dependent rule to pick the lag length k in a VAR, you may either use a testing-based
approach (using, for example, the Wald statistic), or an information criterion approach. The formula for the
AIC and BIC are
AIC(k) = log det Ω̂(k) + 2p/T
BIC(k) = log det Ω̂(k) + p log(T)/T
Ω̂(k) = (1/T) Σ_{t=1}^T ê_t(k) ê_t(k)'
p = m(km + 1),

where p is the number of parameters in the model, and e^t (k) is the OLS residual vector from the model with
k lags. The log determinant is the criterion from the multivariate normal likelihood.

11.7 Granger Causality


Partition the data vector into (y t ; z t ): De ne the two information sets

F1t = yt ; yt 1 ; y t 2 ; :::
F2t = yt ; zt ; yt 1 ; z t 1 ; y t 2 ; z t 2 ; ; :::

The information set F1t is generated only by the history of y t ; and the information set F2t is generated by
both y t and z t : The latter has more information.
We say that z t does not Granger-cause y t if

E (y t j F1;t 1) = E (y t j F2;t 1) :

That is, conditional on information in lagged y t ; lagged z t does not help to forecast y t : If this condition
does not hold, then we say that z t Granger-causes y t :
The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If z_t is some sort of forecast of the future, such as a futures price, then z_t may help to forecast y_t even though it does not "cause" y_t. This definition of causality was developed by Granger (1969) and Sims (1972).
In a linear VAR, the equation for y_t is

y_t = α + ρ_1 y_{t-1} + ⋯ + ρ_k y_{t-k} + z_{t-1}'γ_1 + ⋯ + z_{t-k}'γ_k + e_t.

In this equation, z_t does not Granger-cause y_t if and only if

H_0: γ_1 = γ_2 = ⋯ = γ_k = 0.

This may be tested using an exclusion (Wald) test.


This idea can be applied to blocks of variables. That is, y t and/or z t can be vectors. The hypothesis
can be tested by using the appropriate multivariate Wald test.
If it is found that z t does not Granger-cause y t ; then we deduce that our time-series model of E (y t j Ft 1 )
does not require the use of z t : Note, however, that z t may still be useful to explain other features of y t ; such
as the conditional variance.

11.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger
(1987).

Definition 11.8.1 The m × 1 series y_t is cointegrated if y_t is I(1) yet there exists β, m × r, of rank r, such that z_t = β'y_t is I(0). The r vectors in β are called the cointegrating vectors.

If the series y_t is not cointegrated, then r = 0. If r = m, then y_t is I(0). For 0 < r < m, y_t is I(1) and cointegrated.
In some cases, it may be believed that β is known a priori. Often, β = (1  −1)'. For example, if y_t is a pair of interest rates, then β = (1  −1)' specifies that the spread (the difference in returns) is stationary. If y_t = (log(Consumption)  log(Income))', then β = (1  −1)' specifies that log(Consumption/Income) is stationary.
In other cases, β may not be known.
If y_t is cointegrated with a single cointegrating vector (r = 1), then it turns out that β can be consistently estimated by an OLS regression of one component of y_t on the others. Thus y_t = (Y_{1t}, Y_{2t}) and β = (β_1  β_2), and normalize β_1 = 1. Then β̂_2 = (y_2'y_2)^{-1} y_2'y_1 →_p β_2. Furthermore this estimation is super-consistent: T(β̂_2 − β_2) →_d Limit, as first shown by Stock (1987). This is not, in general, a good method to estimate β, but it is useful in the construction of alternative estimators and tests.
We are often interested in testing the hypothesis of no cointegration:

H0 : r=0
H1 : r > 0:

Suppose that β is known, so z_t = β'y_t is known. Then under H_0 z_t is I(1), yet under H_1 z_t is I(0). Thus H_0 can be tested using a univariate ADF test on z_t.
When β is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual ẑ_t = β̂'y_t, from OLS of y_{1t} on y_{2t}. Their justification was Stock's result that β̂ is super-consistent under H_1. Under H_0, however, β̂ is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).
When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating
regression. Whether or not the time trend is included, the asymptotic distribution of the test is a ected by
the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).

11.9 Cointegrated VARs


We can write a VAR as

A(L) y_t = e_t
A(L) = I − A_1 L − A_2 L² − ⋯ − A_k L^k

or alternatively as
Δy_t = Π y_{t-1} + D(L) Δy_{t-1} + e_t
where

Π = −A(1)
  = −I + A_1 + A_2 + ⋯ + A_k.

Theorem 11.9.1 (Granger Representation Theorem). y_t is cointegrated with m × r β if and only if rank(Π) = r and Π = αβ' where α is m × r, rank(α) = r.

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as

Δy_t = αβ'y_{t-1} + D(L) Δy_{t-1} + e_t
Δy_t = αz_{t-1} + D(L) Δy_{t-1} + e_t.

If β is known, this can be estimated by OLS of Δy_t on z_{t-1} and the lags of Δy_t.


If is unknown, then estimation is done by \reduced rank regression", which is least-squares subject to
the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that
et is iid N(0; ):
One di culty is that is not identi ed without normalization. When r = 1; we typically just normalize
one element to equal unity. When r > 1; this does not work, and di erent authors have adopted di erent
identi cation schemes.
In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for
cointegration by testing the rank of : These tests are constructed as likelihood ratio (LR) tests. As they
were discovered by Johansen (1988, 1991, 1995), they are typically called the \Johansen Max and Trace"
tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.

Chapter 12

Limited Dependent Variables

A \limited dependent variable" y is one which takes a \limited" set of values. The most common cases
are

Binary: y 2 f0; 1g
Multinomial: y 2 f0; 1; 2; :::; kg
Integer: y 2 f0; 1; 2; :::g
Censored: y 2 R+

The traditional approach to the estimation of limited dependent variable (LDV) models is parametric
maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function.
A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional
assumption. We will discuss only the rst (parametric) approach, due to time constraints. They still
constitute the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation,
you would be advised to consider employing a semi-parametric estimation approach.
For the parametric approach, estimation is by MLE. A major practical issue is construction of the
likelihood function.

12.1 Binary Choice


The dependent variable yi 2 f0; 1g: This represents a Yes/No outcome. Given some regressors xi ; the
goal is to describe P (yi = 1 j xi ) ; as this is the full conditional distribution.
The linear probability model specifies that

P(y_i = 1 | x_i) = x_i'β.

As P(y_i = 1 | x_i) = E(y_i | x_i), this yields the regression y_i = x_i'β + e_i, which can be estimated by OLS. However, the linear probability model does not impose the restriction that 0 ≤ P(y_i | x_i) ≤ 1. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.
The standard alternative is to use a function of the form

P(y_i = 1 | x_i) = F(x_i'β)

where F(·) is a known CDF, typically assumed to be symmetric about zero, so that F(u) = 1 − F(−u). The two standard choices for F are

Logistic: F(u) = (1 + e^{-u})^{-1}.
Normal: F(u) = Φ(u).

If F is logistic, we call this the logit model, and if F is normal, we call this the probit model.

This model is identical to the latent variable model

y*_i = x_i'β + e_i
e_i ~ F(·)
y_i = 1 if y*_i > 0, and y_i = 0 otherwise.

For then

P(y_i = 1 | x_i) = P(y*_i > 0 | x_i)
                 = P(x_i'β + e_i > 0 | x_i)
                 = P(e_i > −x_i'β | x_i)
                 = 1 − F(−x_i'β)
                 = F(x_i'β).

Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if y is Bernoulli, such that P(y = 1) = p and P(y = 0) = 1 − p, then we can write the density of y as
f(y) = p^y (1 − p)^{1−y},     y = 0, 1.
In the binary choice model, y_i is conditionally Bernoulli with P(y_i = 1 | x_i) = p_i = F(x_i'β). Thus the conditional density is

f(y_i | x_i) = p_i^{y_i} (1 − p_i)^{1−y_i}
             = F(x_i'β)^{y_i} (1 − F(x_i'β))^{1−y_i}.

Hence the log-likelihood function is

log L(β) = Σ_{i=1}^n log f(y_i | x_i)
         = Σ_{i=1}^n log[ F(x_i'β)^{y_i} (1 − F(x_i'β))^{1−y_i} ]
         = Σ_{i=1}^n [ y_i log F(x_i'β) + (1 − y_i) log(1 − F(x_i'β)) ]
         = Σ_{y_i=1} log F(x_i'β) + Σ_{y_i=0} log(1 − F(x_i'β)).

The MLE β̂ is the value of β which maximizes log L(β). Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
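For the logit case, the MLE can be computed by Newton-Raphson, since the gradient and Hessian of log L(β) have simple closed forms. The following is a minimal sketch with all names assumed, not a recommended production routine.

import numpy as np

def logit_loglike(beta, y, X):
    p = 1.0 / (1.0 + np.exp(-X @ beta))            # F(x_i'beta) for the logistic CDF
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def logit_mle(y, X, iterations=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iterations):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        score = X.T @ (y - p)                      # gradient of the log-likelihood
        hessian = -(X.T * (p * (1 - p))) @ X       # Hessian of the log-likelihood
        beta = beta - np.linalg.solve(hessian, score)
    return beta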

12.2 Count Data


If y ∈ {0, 1, 2, ...}, a typical approach is to employ Poisson regression. This model specifies that

P(y_i = k | x_i) = exp(−λ_i) λ_i^k / k!,     k = 0, 1, 2, ...
λ_i = exp(x_i'β).

The conditional density is the Poisson with parameter λ_i. The functional form for λ_i has been picked to ensure that λ_i > 0.
The log-likelihood function is

log L(β) = Σ_{i=1}^n log f(y_i | x_i) = Σ_{i=1}^n ( −exp(x_i'β) + y_i x_i'β − log(y_i!) ).

The MLE is the value β̂ which maximizes log L(β).
Since
E(y_i | x_i) = λ_i = exp(x_i'β)
is the conditional mean, this motivates the label Poisson "regression."
Also observe that the model implies that

var (yi j xi ) = i = exp(x0i );

so the model imposes the restriction that the conditional mean and variance of yi are the same. This may
be considered restrictive. A generalization is the negative binomial.

12.3 Censored Data


The idea of \censoring" is that some data above or below a threshold are mis-reported at the threshold.
Thus the model is that there is some latent process yi with unbounded support, but we observe only

yi if yi 0
yi = : (12.1)
0 if yi < 0

(This is written for the case of the threshold being zero, any known value can substitute.) The observed
data yi therefore come from a mixed continuous/discrete distribution.
Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher)
of data at the boundary of the sample support. The censoring process may be explicit in data collection, or
it may be a by-product of economic constraints.
An example of a data collection censoring is top-coding of income. In surveys, incomes above a threshold
are typically reported at the threshold.
The rst censored regression model was developed by Tobin (1958) to explain consumption of durable
goods. Tobin observed that for many households, the consumption level (purchases) in a particular period
was zero. He proposed the latent variable model

yi = x0i + ei
2
ei iid N(0; )

with the observed variable yi generated by the censoring equation (12.1). This model (now called the Tobit)
speci es that the latent (or ideal) value of consumption may be negative (the household would prefer to sell
than buy). All that is reported is that the household purchased zero units of the good.
The naive approach to estimate β is to regress y_i on x_i. This does not work because regression estimates E(y_i | x_i), not E(y*_i | x_i) = x_i'β, and the latter is of interest. Thus OLS will be biased for the parameter of interest β.
[Note: it is still possible to estimate E(y_i | x_i) by LS techniques. The Tobit framework postulates that this is not inherently interesting, and that the parameter of interest β is defined by an alternative statistical structure.]
Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is

P(y_i = 0 | x_i) = P(y*_i < 0 | x_i)
                 = P(x_i'β + e_i < 0 | x_i)
                 = P(e_i/σ < −x_i'β/σ | x_i)
                 = Φ(−x_i'β/σ).

The conditional distribution function above zero is Gaussian:

P(y_i = y | x_i) = ∫_0^y σ^{-1} φ((z − x_i'β)/σ) dz,     y > 0.

Therefore, the density function can be written as

f(y | x_i) = Φ(−x_i'β/σ)^{1(y=0)} [ σ^{-1} φ((y − x_i'β)/σ) ]^{1(y>0)},

where 1(·) is the indicator function.
Hence the log-likelihood is a mixture of the probit and the normal:

log L(β) = Σ_{i=1}^n log f(y_i | x_i)
         = Σ_{y_i=0} log Φ(−x_i'β/σ) + Σ_{y_i>0} log[ σ^{-1} φ((y_i − x_i'β)/σ) ].

The MLE is the value β̂ which maximizes log L(β).

12.4 Sample Selection


The problem of sample selection arises when the sample is a non-random selection of potential obser-
vations. This occurs when the observed data is systematically di erent from the population of interest.
For example, if you ask for volunteers for an experiment, and they wish to extrapolate the e ects of the
experiment on a general population, you should worry that the people who volunteer may be systematically
di erent from the general population. This has great relevance for the evaluation of anti-poverty and job-
training programs, where the goal is to assess the e ect of \training" on the general population, not just on
the volunteers.
A simple sample selection model can be written as the latent model

y_i = x_i'β + e_{1i}
T_i = 1(z_i'γ + e_{0i} > 0)

where 1(·) is the indicator function. The dependent variable y_i is observed if (and only if) T_i = 1. Else it is unobserved.
For example, y_i could be a wage, which can be observed only if a person is employed. The equation for T_i is an equation specifying the probability that the person is employed.
The model is often completed by specifying that the errors are jointly normal
(e_{0i}, e_{1i})' ~ N(0, ( 1  ρ ; ρ  σ² )).
It is presumed that we observe {x_i, z_i, T_i} for all observations.
Under the normality assumption,
e_{1i} = ρ e_{0i} + v_i,
where v_i is independent of e_{0i} ~ N(0, 1). A useful fact about the standard normal distribution is that
E(e_{0i} | e_{0i} > −x) = λ(x) = φ(x)/Φ(x),
and the function λ(x) is called the inverse Mills ratio.
The naive estimator of β is OLS regression of y_i on x_i for those observations for which y_i is available. The problem is that this is equivalent to conditioning on the event {T_i = 1}. However,
E(e_{1i} | T_i = 1, z_i) = E(e_{1i} | {e_{0i} > −z_i'γ}, z_i)
                         = ρ E(e_{0i} | {e_{0i} > −z_i'γ}, z_i) + E(v_i | {e_{0i} > −z_i'γ}, z_i)
                         = ρ λ(z_i'γ),
which is non-zero. Thus
e_{1i} = ρ λ(z_i'γ) + u_i,
where
E(u_i | T_i = 1, z_i) = 0.
Hence
y_i = x_i'β + ρ λ(z_i'γ) + u_i                                (12.2)
is a valid regression equation for the observations for which T_i = 1.
Heckman (1979) observed that we could consistently estimate β and ρ from this equation, if γ were known. It is unknown, but also can be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows.

Estimate γ̂ from a Probit, using regressors z_i. The binary dependent variable is T_i.

Estimate (β̂, ρ̂) from OLS of y_i on x_i and λ(z_i'γ̂).

The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.

The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator
is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some
modern econometric research is exploring how to relax the normality assumption.
The estimator can also work quite poorly if λ(z_i'γ̂) does not have much in-sample variation. This can happen if the Probit equation does not "explain" much about the selection choice. Another potential problem is that if z_i = x_i, then λ(z_i'γ̂) can be highly collinear with x_i, so the second step OLS estimator will not be able to precisely estimate β. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in z_i which is not in x_i. If this is valid, it will ensure that λ(z_i'γ̂) is not collinear with x_i, and hence improve the second stage estimator's precision.

Chapter 13

Panel Data

A panel is a set of observations on individuals, collected over time. An observation is the pair fyit ; xit g;
where the i subscript denotes the individual, and the t subscript denotes time. A panel may be balanced :

fyit ; xit g : t = 1; :::; T ; i = 1; :::; n;

or unbalanced :
fyit ; xit g : For i = 1; :::; n; t = ti ; :::; ti :

13.1 Individual-Effects Model


The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression
y_{it} = x_{it}'β + u_i + e_{it}.
The typical maintained assumptions are that the individuals i are mutually independent, that ui and eit are
independent, that eit is iid across individuals and time, and that eit is uncorrelated with xit :
OLS of yit on xit is called pooled estimation. It is consistent if

E (xit ui ) = 0 (13.1)

If this condition fails, then OLS is inconsistent. (13.1) fails if the individual-speci c unobserved e ect ui
is correlated with the observed explanatory variables xit : This is often believed to be plausible if ui is an
omitted variable.
If (13.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears
a poor estimation choice.
Condition (13.1) is called the random e ects hypothesis. It is a strong assumption, and most applied
researchers try to avoid its use.

13.2 Fixed Effects


This is the most common technique for estimation of non-dynamic linear panel regressions.
The motivation is to allow u_i to be arbitrary, and have arbitrary correlation with x_i. The goal is to eliminate u_i from the estimator, and thus achieve invariance.
There are several derivations of the estimator.
First, let
d_{ij} = 1 if i = j, and d_{ij} = 0 else,
and
d_i = (d_{i1}, ..., d_{in})',
an n × 1 dummy vector with a "1" in the i'th place. Let
u = (u_1, ..., u_n)'.
Then note that
u_i = d_i'u,
and
y_{it} = x_{it}'β + d_i'u + e_{it}.                           (13.2)
Observe that
E(e_{it} | x_{it}, d_i) = 0,
so (13.2) is a valid regression, with d_i as a regressor along with x_i.
OLS on (13.2) yields the estimator (β̂, û). Conventional inference applies.
Observe that
This is generally consistent.
If xit contains an intercept, it will be collinear with di ; so the intercept is typically omitted from xit :
Any regressor in xit which is constant over time for all individuals (e.g., their gender) will be collinear
with di ; so will have to be omitted.
There are n + k regression parameters, which is quite large as typically n is very large.
Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of β proceeds by the FWL theorem. Stacking the observations together:
y = Xβ + Du + e,
then by the FWL theorem,
β̂ = (X'(I − P_D)X)^{-1} (X'(I − P_D)y)
  = (X*'X*)^{-1} (X*'y*),
where
y* = y − D(D'D)^{-1}D'y
X* = X − D(D'D)^{-1}D'X.
Since the regression of y_{it} on d_i is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean ȳ_i, and the residual is the demeaned value
y*_{it} = y_{it} − ȳ_i.
The fixed effects estimator β̂ is OLS of y*_{it} on x*_{it}, the dependent variable and regressors in deviation-from-mean form.
Another derivation of the estimator is to take the equation
y_{it} = x_{it}'β + u_i + e_{it},
and then take individual-specific means by taking the average for the i'th individual:
(1/T_i) Σ_{t=t_i}^{t̄_i} y_{it} = (1/T_i) Σ_{t=t_i}^{t̄_i} x_{it}'β + u_i + (1/T_i) Σ_{t=t_i}^{t̄_i} e_{it}
or
ȳ_i = x̄_i'β + u_i + ē_i.
Subtracting, we find
y*_{it} = x*_{it}'β + e*_{it},
which is free of the individual-effect u_i.
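A minimal sketch of the within (fixed effects) estimator follows; it assumes stacked arrays y, X and an integer array ids giving the individual for each row, which are illustrative names rather than the text's own notation.

import numpy as np

def fixed_effects(y, X, ids):
    y_star = y.astype(float).copy()
    X_star = X.astype(float).copy()
    for i in np.unique(ids):
        rows = ids == i
        y_star[rows] -= y[rows].mean()             # y_it - ybar_i
        X_star[rows] -= X[rows].mean(axis=0)       # x_it - xbar_i
    return np.linalg.solve(X_star.T @ X_star, X_star.T @ y_star)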

13.3 Dynamic Panel Regression
A dynamic panel regression has a lagged dependent variable

y_{it} = α y_{it-1} + x_{it}'β + u_i + e_{it}.                (13.3)

This is a model suitable for studying dynamic behavior of individual agents.
Unfortunately, the fixed effects estimator is inconsistent, at least if T is held finite as n → ∞. This is because the sample mean of y_{it-1} is correlated with that of e_{it}.
The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (13.3) eliminates the individual-specific effect:

Δy_{it} = α Δy_{it-1} + Δx_{it}'β + Δe_{it}.                  (13.4)

However, if e_{it} is iid, then it will be correlated with Δy_{it-1}:

E(Δy_{it-1} Δe_{it}) = E((y_{it-1} − y_{it-2})(e_{it} − e_{it-1})) = −E(y_{it-1} e_{it-1}) = −σ_e².

So OLS on (13.4) will be inconsistent.
But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as y_{t-2} is uncorrelated with Δe_{it}. Thus values of y_{it-k}, k ≥ 2, are valid instruments.
Hence a valid estimator of α and β is to estimate (13.4) by IV using y_{t-2} as an instrument for Δy_{t-1} (which is just identified). Alternatively, GMM using y_{t-2} and y_{t-3} as instruments (which is overidentified, but loses a time-series observation).
A more sophisticated GMM estimator recognizes that for time-periods later in the sample, there are
more instruments available, so the instrument list should be di erent for each equation. This is conveniently
organized by the GMM principle, as this enables the moments from the di erent time-periods to be stacked
together to create a list of all the moment conditions. A simple application of GMM yields the parameter
estimates and standard errors.

Chapter 14

Nonparametrics

14.1 Kernel Density Estimation


Let X be a random variable with continuous distribution F(x) and density f(x) = (d/dx) F(x). The goal is to estimate f(x) from a random sample {X_1, ..., X_n}. While F(x) can be estimated by the EDF F̂(x) = n^{-1} Σ_{i=1}^n 1(X_i ≤ x), we cannot define (d/dx) F̂(x) since F̂(x) is a step function. The standard nonparametric method to estimate f(x) is based on smoothing using a kernel.
While we are typically interested in estimating the entire function f (x); we can simply focus on the
problem where x is a speci c xed number, and then see how the method generalizes to estimating the entire
function.

Definition 2 K(u) is a second-order kernel function if it is a symmetric zero-mean density function.

Three common choices for kernels include the Gaussian
K(u) = (1/√(2π)) exp(−u²/2),
the Epanechnikov
K(u) = (3/4)(1 − u²) for |u| ≤ 1, and 0 for |u| > 1,
and the Biweight or Quartic
K(u) = (15/16)(1 − u²)² for |u| ≤ 1, and 0 for |u| > 1.
In practice, the choice between these three rarely makes a meaningful difference in the estimates.
The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth h > 0. Let
K_h(u) = (1/h) K(u/h)
be the kernel K rescaled by the bandwidth h. The kernel density estimator of f(x) is
f̂(x) = (1/n) Σ_{i=1}^n K_h(X_i − x).

This estimator is the average of a set of weights. If a large number of the observations Xi are near x; then
the weights are relatively large and f^(x) is larger. Conversely, if only a few Xi are near x; then the weights
are small and f^(x) is small. The bandwidth h controls the meaning of \near".
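A minimal sketch of the estimator with a Gaussian kernel is given below, evaluating f̂(x) on a grid of points; the names are illustrative assumptions, not part of the text.

import numpy as np

def kernel_density(x_grid, data, h):
    u = (data[None, :] - x_grid[:, None]) / h       # (X_i - x)/h for every grid point x
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)    # Gaussian kernel
    return K.mean(axis=1) / h                       # (1/n) sum_i K_h(X_i - x)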
Interestingly, f^(x) is a valid density. That is, f^(x) 0 for all x; and
Z Z n n Z n Z
1 1
1X 1X 1
1X 1
f^(x)dx = Kh (Xi x) dx = Kh (Xi x) dx = K (u) du = 1
1 1 n i=1 n i=1 1 n i=1 1

where the second-to-last equality makes the change-of-variables u = (Xi x)=h:

We can also calculate the moments of the density f^(x): The mean is
Z 1 n Z
1X 1
xf^(x)dx = xKh (Xi x) dx
1 n i=1 1
n Z
1X 1
= (Xi + uh) K (u) du
n i=1 1
n Z 1 n Z
1X 1X 1
= Xi K (u) du + h uK (u) du
n i=1 1 n i=1 1
n
1X
= Xi
n i=1
the sample mean of the Xi ; where the second-to-last equality used the change-of-variables u = (Xi x)=h
which has Jacobian h:
The second moment of the estimated density is
Z 1 n Z
2^ 1X 1 2
x f (x)dx = x Kh (Xi x) dx
1 n i=1 1
n Z
1X 1 2
= (Xi + uh) K (u) du
n i=1 1
n n Z 1 n Z
1X 2 2X 1X 2 1 2
= Xi + Xi h K(u)du + h u K (u) du
n i=1 n i=1 1 n i=1 1
n
1X 2
= X + h2 2
n i=1 i K

where Z 1
2
K = u2 K (u) du
1

is the variance of the kernel. It follows that the variance of the density f^(x) is
Z 1 Z 1 n n
!2
1X 2 1X
2
2^ ^ 2 2
x f (x)dx xf (x)dx = X +h K Xi
1 1 n i=1 i n i=1
= ^ 2 + h2 2
K

Thus the variance of the estimated density is in ated by the factor h2 2


K relative to the sample moment.

14.2 Asymptotic MSE for Kernel Estimates


For xed x and bandwidth h observe that
Z 1 Z 1 Z 1
EKh (X x) = Kh (z x) f (z)dz = Kh (uh) f (x + hu)hdu = K (u) f (x + hu)du
1 1 1

The second equality uses the change-of variables u = (z x)=h: The last expression shows that the expected
value is an average of f (z) locally about x:
This integral (typically) is not analytically solvable, so we approximate it using a second order Taylor
expansion of f (x + hu) in the argument hu about hu = 0; which is valid as h ! 0: Thus
1
f (x + hu) ' f (x) + f 0 (x)hu + f 00 (x)h2 u2
2
and therefore
Z 1
1
EKh (X x) ' K (u) f (x) + f 0 (x)hu + f 00 (x)h2 u2 du
1 2
Z 1 Z 1 Z 1
0 1 00 2
= f (x) K (u) du + f (x)h K (u) udu + f (x)h K (u) u2 du
1 1 2 1
1
= f (x) + f 00 (x)h2 2K :
2

The bias of f̂(x) is then
Bias(x) = E f̂(x) − f(x) = (1/n) Σ_{i=1}^n E K_h(X_i − x) − f(x) = (1/2) f''(x) h² σ²_K.

We see that the bias of f^(x) at x depends on the second derivative f 00 (x): The sharper the derivative, the
greater the bias. Intuitively, the estimator f^(x) smooths data local to Xi = x; so is estimating a smoothed
version of f (x): The bias results from this smoothing, and is larger the greater the curvature in f (x):
We now examine the variance of $\hat{f}(x)$. Since it is an average of iid random variables, using first-order Taylor approximations and the fact that $n^{-1}$ is of smaller order than $(nh)^{-1}$,
$$\begin{aligned}
\text{var}(x) &= \frac{1}{n} \text{var}\left(K_h(X_i - x)\right) \\
&= \frac{1}{n} E K_h(X_i - x)^2 - \frac{1}{n}\left(E K_h(X_i - x)\right)^2 \\
&\simeq \frac{1}{n h^2} \int_{-\infty}^{\infty} K\left(\frac{z - x}{h}\right)^2 f(z)\,dz - \frac{1}{n} f(x)^2 \\
&\simeq \frac{1}{nh} \int_{-\infty}^{\infty} K(u)^2 f(x + hu)\,du \\
&\simeq \frac{f(x)}{nh} \int_{-\infty}^{\infty} K(u)^2\,du \\
&= \frac{f(x) R(K)}{nh},
\end{aligned}$$
where $R(K) = \int_{-\infty}^{\infty} K(u)^2\,du$ is called the roughness of $K$.
Together, the asymptotic mean-squared error (AMSE) for fixed $x$ is the sum of the approximate squared bias and approximate variance
$$AMSE_h(x) = \frac{1}{4} f''(x)^2 h^4 \sigma_K^4 + \frac{f(x) R(K)}{nh}.$$
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
$$AMISE_h = \int AMSE_h(x)\,dx = \frac{h^4 \sigma_K^4 R(f'')}{4} + \frac{R(K)}{nh}, \qquad (14.1)$$
where $R(f'') = \int \left(f''(x)\right)^2 dx$ is the roughness of $f''$. Notice that the first term (the squared bias) is increasing in $h$ and the second term (the variance) is decreasing in $nh$. Thus for the AMISE to decline with $n$, we need $h \to 0$ but $nh \to \infty$. That is, $h$ must tend to zero, but at a slower rate than $n^{-1}$.
Equation (14.1) is an asymptotic approximation to the MISE. We define the asymptotically optimal bandwidth $h_0$ as the value which minimizes this approximate MISE. That is,
$$h_0 = \operatorname*{argmin}_{h} AMISE_h.$$
It can be found by solving the first order condition
$$\frac{d}{dh} AMISE_h = h^3 \sigma_K^4 R(f'') - \frac{R(K)}{nh^2} = 0,$$
yielding
$$h_0 = \left(\frac{R(K)}{\sigma_K^4 R(f'')}\right)^{1/5} n^{-1/5}. \qquad (14.2)$$
This solution takes the form $h_0 = c\, n^{-1/5}$ where $c$ is a function of $K$ and $f$, but not of $n$. We thus say that the optimal bandwidth is of order $O(n^{-1/5})$. Note that this $h$ declines to zero, but at a very slow rate.
In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (14.2) depends on $R(K)$, $\sigma_K^2$, and $R(f'')$. The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.

$$\begin{array}{lcc}
\text{Kernel} & \sigma_K^2 = \int_{-\infty}^{\infty} u^2 K(u)\,du & R(K) = \int_{-\infty}^{\infty} K(u)^2\,du \\
\text{Gaussian} & 1 & 1/(2\sqrt{\pi}) \\
\text{Epanechnikov} & 1/5 & 3/5 \\
\text{Biweight} & 1/7 & 5/7
\end{array}$$

An obvious difficulty is that $R(f'')$ is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (14.2) but replaces $R(f'')$ with $\hat{\sigma}^{-5} R(\phi'')$, where $\phi$ is the $N(0,1)$ density and $\hat{\sigma}^2$ is an estimate of $\sigma^2 = \text{var}(X)$. This choice for $h$ gives an optimal rule when $f(x)$ is normal, and gives a nearly optimal rule when $f(x)$ is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb $h$ can be quite inefficient. We can calculate that $R(\phi'') = 3/(8\sqrt{\pi})$. Together with the above table, we find the reference rules for the three kernel functions introduced earlier.

Gaussian Kernel: $h_{rule} = 1.06\,\hat{\sigma}\, n^{-1/5}$
Epanechnikov Kernel: $h_{rule} = 2.34\,\hat{\sigma}\, n^{-1/5}$
Biweight (Quartic) Kernel: $h_{rule} = 2.78\,\hat{\sigma}\, n^{-1/5}$
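As a quick illustration, here is a minimal Python sketch of these reference bandwidths, $h_{rule} = c_K\,\hat{\sigma}\,n^{-1/5}$; the function and variable names are illustrative.

```python
import numpy as np

# Rule-of-thumb constants c_K for the three kernels listed above
RULE_CONSTANTS = {"gaussian": 1.06, "epanechnikov": 2.34, "biweight": 2.78}

def rule_of_thumb_bandwidth(data, kernel="gaussian"):
    """h_rule = c_K * sigma_hat * n^(-1/5), Silverman's reference bandwidth."""
    n = len(data)
    sigma_hat = np.std(data, ddof=1)     # estimate of the standard deviation of X
    return RULE_CONSTANTS[kernel] * sigma_hat * n ** (-1 / 5)

rng = np.random.default_rng(0)
X = rng.normal(size=500)
print(rule_of_thumb_bandwidth(X))                    # Gaussian kernel
print(rule_of_thumb_bandwidth(X, "epanechnikov"))
```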

Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate $\hat{f}(x)$. There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate $R(f'')$ in a first step, and then plug this estimate into the formula (14.2). This is more treacherous than may first appear, as the optimal $h$ for estimation of the roughness $R(f'')$ is quite different than the optimal $h$ for estimation of $f(x)$. However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of $h$ is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data has some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths leading to dramatically undersmoothed estimates. Fortunately there are remedies, the best known being smoothed cross-validation, which is a close cousin of the bootstrap.

Appendix A

Matrix Algebra

A.1 Notation
A scalar a is a single number.
A vector $a$ is a $k \times 1$ list of numbers, typically arranged in a column. We write this as
$$a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix}$$
Equivalently, a vector $a$ is an element of Euclidean $k$-space, written as $a \in \mathbb{R}^k$. If $k = 1$ then $a$ is a scalar.
A matrix $A$ is a $k \times r$ rectangular array of numbers, written as
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1r} \\ a_{21} & a_{22} & \cdots & a_{2r} \\ \vdots & \vdots & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kr} \end{bmatrix}$$
By convention $a_{ij}$ refers to the element in the $i$'th row and $j$'th column of $A$. If $r = 1$ then $A$ is a column vector. If $k = 1$ then $A$ is a row vector. If $r = k = 1$, then $A$ is a scalar. Matrices are generally represented by capital letters such as $A$ or sometimes by the symbol $(a_{ij})$.
A standard convention (which we will follow in this text whenever possible) is to denote scalars by lower-case italics ($a$), vectors by lower-case bold italics ($a$), and matrices by upper-case bold italics ($A$).
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_r \end{bmatrix} = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{bmatrix}$$
where
$$a_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ki} \end{bmatrix}$$
are column vectors and
$$\alpha_j = \begin{bmatrix} a_{j1} & a_{j2} & \cdots & a_{jr} \end{bmatrix}$$
are row vectors.
The transpose of a matrix, denoted $A'$, is obtained by flipping the matrix on its diagonal. Thus
$$A' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{k1} \\ a_{12} & a_{22} & \cdots & a_{k2} \\ \vdots & \vdots & & \vdots \\ a_{1r} & a_{2r} & \cdots & a_{kr} \end{bmatrix}$$

Alternatively, letting $B = A'$, then $b_{ij} = a_{ji}$. Note that if $A$ is $k \times r$, then $A'$ is $r \times k$. If $a$ is a $k \times 1$ vector, then $a'$ is a $1 \times k$ row vector.
A matrix is square if $k = r$. A square matrix is symmetric if $A = A'$, which requires $a_{ij} = a_{ji}$. A square matrix is diagonal if the only non-zero elements appear on the diagonal, so that $a_{ij} = 0$ if $i \ne j$. A square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.
An important diagonal matrix is the identity matrix, which has ones on the diagonal. The $k \times k$ identity matrix is denoted as
$$I_k = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
A partitioned matrix takes the form
$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1r} \\ A_{21} & A_{22} & \cdots & A_{2r} \\ \vdots & \vdots & & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kr} \end{bmatrix}$$
where the $A_{ij}$ denote matrices, vectors and/or scalars.

A.2 Matrix Addition


If the matrices $A = (a_{ij})$ and $B = (b_{ij})$ are of the same order, we define the sum
$$A + B = (a_{ij} + b_{ij}).$$
It follows that matrix addition follows the commutative and associative laws:
$$A + B = B + A$$
$$A + (B + C) = (A + B) + C.$$

A.3 Matrix Multiplication


If $A$ is $k \times r$ and $c$ is real, we define their product as
$$Ac = cA = (a_{ij} c).$$
If $a$ and $b$ are both $k \times 1$, then their inner product is
$$a'b = a_1 b_1 + a_2 b_2 + \cdots + a_k b_k = \sum_{j=1}^{k} a_j b_j.$$
Note that $a'b = b'a$. We say that two vectors $a$ and $b$ are orthogonal if $a'b = 0$.


If A is k r and B is r s; so that the number of columns of A equals the number of rows of B; we
say that A and B are conformable. In this event the matrix product AB is de ned, writing A as a set of
row vectors and B as a set of column vectors (each of length r). Then
2 0 3
a1
6 a02 7
6 7
AB = 6 . 7 b1 b2 bs
4 .. 5
a0k
2 3
a01 b1 a01 b2 a01 bs
6 a02 b1 a02 b2 a02 bs 7
6 7
= 6 .. .. .. 7:
4 . . . 5
a0k b1 a0k b2 a0k bs

Matrix multiplication is not commutative: in general $AB \ne BA$. However, it is associative and distributive:
$$A(BC) = (AB)C$$
$$A(B + C) = AB + AC$$

An alternative way to write the matrix product is to use matrix partitions. For example,

A11 A12 B 11 B 12
AB =
A21 A22 B 21 B 22

A11 B 11 + A12 B 21 A11 B 12 + A12 B 22


= :
A21 B 11 + A22 B 21 A21 B 12 + A22 B 22

As another example,
2 3
B1
6 B2 7
6 7
AB = A1 A2 Ar 6 .. 7
4 . 5
Br
= A1 B 1 + A2 B 2 + + Ar B r
Xr
= Aj B j
j=1

An important property of the identity matrix is that if $A$ is $k \times r$, then $A I_r = A$ and $I_k A = A$.
The $k \times r$ matrix $A$, $r \le k$, is called orthogonal if $A'A = I_r$.

A.4 Trace
The trace of a k k square matrix A is the sum of its diagonal elements
k
X
tr (A) = aii :
i=1

Some straightforward properties for square matrices A and B and real c are

tr (cA) = c tr (A)
tr A0 = tr (A)
tr (A + B) = tr (A) + tr (B)
tr (I k ) = k:

Also, for k r A and r k B we have


tr (AB) = tr (BA) :
Indeed,
2 3
a01 b1 a01 b2 a01 bk
6 a02 b1 a02 b2 a02 bk 7
6 7
tr (AB) = tr 6 .. .. .. 7
4 . . . 5
a0k b1 a0k b2 a0k bk
k
X
= a0i bi
i=1
k
X
= b0i ai
i=1
= tr (BA) :

A.5 Rank and Inverse
The rank of the $k \times r$ matrix ($r \le k$)
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_r \end{bmatrix}$$
is the number of linearly independent columns $a_j$, and is written as $\text{rank}(A)$. We say that $A$ has full rank if $\text{rank}(A) = r$.
A square $k \times k$ matrix $A$ is said to be nonsingular if it has full rank, i.e. $\text{rank}(A) = k$. This means that there is no $k \times 1$ vector $c \ne 0$ such that $Ac = 0$.
If a square $k \times k$ matrix $A$ is nonsingular then there exists a unique $k \times k$ matrix $A^{-1}$, called the inverse of $A$, which satisfies
$$A A^{-1} = A^{-1} A = I_k.$$
For non-singular $A$ and $C$, some important properties include
$$A A^{-1} = A^{-1} A = I_k$$
$$\left(A^{-1}\right)' = \left(A'\right)^{-1}$$
$$(AC)^{-1} = C^{-1} A^{-1}$$
$$(A + C)^{-1} = A^{-1}\left(A^{-1} + C^{-1}\right)^{-1} C^{-1}$$
$$A^{-1} - (A + C)^{-1} = A^{-1}\left(A^{-1} + C^{-1}\right)^{-1} A^{-1}$$
Also, if $A$ is an orthogonal matrix, then $A^{-1} = A'$.
Another useful result for non-singular $A$ is
$$(A + BCD)^{-1} = A^{-1} - A^{-1} BC \left(C + CDA^{-1}BC\right)^{-1} CDA^{-1}. \qquad \text{(A.1)}$$
In particular, for $C = -1$, $B = b$ and $D = b'$ for vector $b$ we find
$$\left(A - bb'\right)^{-1} = A^{-1} + \left(1 - b'A^{-1}b\right)^{-1} A^{-1} bb' A^{-1}. \qquad \text{(A.2)}$$

The following fact about inverting partitioned matrices is quite useful. If $A - BD^{-1}C$ and $D - CA^{-1}B$ are non-singular, then
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - BD^{-1}C\right)^{-1} & -\left(A - BD^{-1}C\right)^{-1} BD^{-1} \\ -\left(D - CA^{-1}B\right)^{-1} CA^{-1} & \left(D - CA^{-1}B\right)^{-1} \end{bmatrix}. \qquad \text{(A.3)}$$

Even if a matrix $A$ does not possess an inverse, we can still define the generalized inverse $A^{-}$ as a matrix which satisfies
$$A A^{-} A = A. \qquad \text{(A.4)}$$
The matrix $A^{-}$ is not necessarily unique. The Moore-Penrose generalized inverse $A^{-}$ satisfies (A.4) plus the following three conditions:
$$A^{-} A A^{-} = A^{-}$$
$$A A^{-} \text{ is symmetric}$$
$$A^{-} A \text{ is symmetric}$$
For any matrix $A$, the Moore-Penrose generalized inverse $A^{-}$ exists and is unique.

A.6 Determinant
The determinant is a measure of the volume of a square matrix.
While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let $A = (a_{ij})$ be a general $k \times k$ matrix. Let $\pi = (j_1, \ldots, j_k)$ denote a permutation of $(1, \ldots, k)$. There are $k!$ such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order $(1, \ldots, k)$), and let $\varepsilon_\pi = +1$ if this count is even and $\varepsilon_\pi = -1$ if the count is odd. Then the determinant of $A$ is defined as
$$\det A = \sum_{\pi} \varepsilon_\pi\, a_{1 j_1} a_{2 j_2} \cdots a_{k j_k}.$$
For example, if $A$ is $2 \times 2$, then the two permutations of $(1,2)$ are $(1,2)$ and $(2,1)$, for which $\varepsilon_{(1,2)} = 1$ and $\varepsilon_{(2,1)} = -1$. Thus
$$\det A = \varepsilon_{(1,2)}\, a_{11} a_{22} + \varepsilon_{(2,1)}\, a_{21} a_{12} = a_{11} a_{22} - a_{12} a_{21}.$$

Some properties include
$$\det(A) = \det(A')$$
$$\det(cA) = c^k \det A$$
$$\det(AB) = (\det A)(\det B)$$
$$\det\left(A^{-1}\right) = (\det A)^{-1}$$
$$\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = (\det D)\, \det\left(A - BD^{-1}C\right) \quad \text{if } \det D \ne 0$$
$\det A \ne 0$ if and only if $A$ is nonsingular.
If $A$ is triangular (upper or lower), then $\det A = \prod_{i=1}^{k} a_{ii}$.
If $A$ is orthogonal, then $\det A = \pm 1$.

A.7 Eigenvalues
The characteristic equation of a square matrix $A$ is
$$\det\left(A - \lambda I_k\right) = 0.$$
The left side is a polynomial of degree $k$ in $\lambda$ so it has exactly $k$ roots, which are not necessarily distinct and may be real or complex. They are called the latent roots or characteristic roots or eigenvalues of $A$.
If $\lambda_i$ is an eigenvalue of $A$, then $A - \lambda_i I_k$ is singular so there exists a non-zero vector $h_i$ such that
$$\left(A - \lambda_i I_k\right) h_i = 0.$$
The vector $h_i$ is called a latent vector or characteristic vector or eigenvector of $A$ corresponding to $\lambda_i$.
We now state some useful properties. Let $\lambda_i$ and $h_i$, $i = 1, \ldots, k$, denote the $k$ eigenvalues and eigenvectors of a square matrix $A$. Let $\Lambda$ be a diagonal matrix with the characteristic roots in the diagonal, and let $H = [h_1 \cdots h_k]$.
$\det(A) = \prod_{i=1}^{k} \lambda_i$
$\text{tr}(A) = \sum_{i=1}^{k} \lambda_i$
$A$ is non-singular if and only if all its characteristic roots are non-zero.
If $A$ has distinct characteristic roots, there exists a nonsingular matrix $P$ such that $A = P^{-1} \Lambda P$ and $P A P^{-1} = \Lambda$.
If $A$ is symmetric, then $A = H \Lambda H'$ and $H'AH = \Lambda$, and the characteristic roots are all real. $A = H \Lambda H'$ is called the spectral decomposition of a matrix.
The characteristic roots of $A^{-1}$ are $\lambda_1^{-1}, \lambda_2^{-1}, \ldots, \lambda_k^{-1}$.

A.8 Positive Definiteness
We say that a symmetric square matrix $A$ is positive semi-definite if for all $c \ne 0$, $c'Ac \ge 0$. This is written as $A \ge 0$. We say that $A$ is positive definite if for all $c \ne 0$, $c'Ac > 0$. This is written as $A > 0$.
Some properties include:
If $A = G'G$ for some matrix $G$, then $A$ is positive semi-definite. (For any $c \ne 0$, $c'Ac = \alpha'\alpha \ge 0$ where $\alpha = Gc$.) If $G$ has full rank, then $A$ is positive definite.
If $A$ is positive definite, then $A$ is non-singular and $A^{-1}$ exists. Furthermore, $A^{-1} > 0$.
$A > 0$ if and only if it is symmetric and all its characteristic roots are positive.
If $A > 0$ we can find a matrix $B$ such that $A = BB'$. We call $B$ a matrix square root of $A$. The matrix $B$ need not be unique. One way to construct $B$ is to use the spectral decomposition $A = H \Lambda H'$ where $\Lambda$ is diagonal, and then set $B = H \Lambda^{1/2}$.
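A minimal numerical sketch of this spectral-decomposition construction of a matrix square root (the function name and example matrix are illustrative):

```python
import numpy as np

def matrix_sqrt(A):
    """Return B with A = B B', built from the spectral decomposition A = H Lambda H'."""
    lam, H = np.linalg.eigh(A)           # eigenvalues and orthonormal eigenvectors (A symmetric)
    return H @ np.diag(np.sqrt(lam))     # B = H Lambda^{1/2}

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # a positive definite matrix
B = matrix_sqrt(A)
print(np.allclose(B @ B.T, A))           # True: B B' recovers A
```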

A square matrix $A$ is idempotent if $AA = A$. If $A$ is idempotent and symmetric then all its characteristic roots equal either zero or one, and it is thus positive semi-definite. To see this, note that we can write $A = H \Lambda H'$ where $H$ is orthogonal and $\Lambda$ contains the (real) characteristic roots. Then
$$A = AA = H \Lambda H' H \Lambda H' = H \Lambda^2 H'.$$
By the uniqueness of the characteristic roots, we deduce that $\Lambda^2 = \Lambda$ and $\lambda_i^2 = \lambda_i$ for $i = 1, \ldots, k$. Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent $A$ takes the form
$$A = H \begin{bmatrix} I_{n-k} & 0 \\ 0 & 0 \end{bmatrix} H' \qquad \text{(A.5)}$$
with $H'H = I_n$. Additionally, $\text{tr}(A) = \text{rank}(A)$.

A.9 Matrix Calculus
Let $x = (x_1, \ldots, x_k)$ be $k \times 1$ and $g(x) = g(x_1, \ldots, x_k) : \mathbb{R}^k \to \mathbb{R}$. The vector derivative is
$$\frac{\partial}{\partial x} g(x) = \begin{pmatrix} \frac{\partial}{\partial x_1} g(x) \\ \vdots \\ \frac{\partial}{\partial x_k} g(x) \end{pmatrix}$$
and
$$\frac{\partial}{\partial x'} g(x) = \begin{pmatrix} \frac{\partial}{\partial x_1} g(x) & \cdots & \frac{\partial}{\partial x_k} g(x) \end{pmatrix}.$$
Some properties are now summarized.
$\frac{\partial}{\partial x}(a'x) = \frac{\partial}{\partial x}(x'a) = a$
$\frac{\partial}{\partial x'}(Ax) = A$
$\frac{\partial}{\partial x}(x'Ax) = (A + A')x$
$\frac{\partial^2}{\partial x \partial x'}(x'Ax) = A + A'$

A.10 Kronecker Products and the Vec Operator
Let $A = [a_1\ a_2\ \cdots\ a_n]$ be $m \times n$. The vec of $A$, denoted by $\text{vec}(A)$, is the $mn \times 1$ vector
$$\text{vec}(A) = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
Let $A = (a_{ij})$ be an $m \times n$ matrix and let $B$ be any matrix. The Kronecker product of $A$ and $B$, denoted $A \otimes B$, is the matrix
$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix}.$$
Some important properties are now summarized. These results hold for matrices for which all matrix multiplications are conformable.
$(A + B) \otimes C = A \otimes C + B \otimes C$
$(A \otimes B)(C \otimes D) = AC \otimes BD$
$A \otimes (B \otimes C) = (A \otimes B) \otimes C$
$(A \otimes B)' = A' \otimes B'$
$\text{tr}(A \otimes B) = \text{tr}(A)\,\text{tr}(B)$
If $A$ is $m \times m$ and $B$ is $n \times n$, $\det(A \otimes B) = (\det(A))^n (\det(B))^m$
$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$
If $A > 0$ and $B > 0$ then $A \otimes B > 0$
$\text{vec}(ABC) = (C' \otimes A)\,\text{vec}(B)$
$\text{tr}(ABCD) = \text{vec}(D')'(C' \otimes A)\,\text{vec}(B)$
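As a quick numerical check of two of these identities (the mixed-product rule and the vec identity), here is a minimal sketch; the matrices are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 3)), rng.normal(size=(4, 5))
C, D = rng.normal(size=(3, 2)), rng.normal(size=(5, 4))

# (A kron B)(C kron D) = (AC) kron (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
print(np.allclose(lhs, np.kron(A @ C, B @ D)))        # True

# vec(ABC) = (C' kron A) vec(B), with vec stacking columns
Bmat = rng.normal(size=(3, 3))
Cmat = rng.normal(size=(3, 2))
vec = lambda M: M.reshape(-1, order="F")              # column-stacking vec
print(np.allclose(vec(A @ Bmat @ Cmat), np.kron(Cmat.T, A) @ vec(Bmat)))  # True
```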

A.11 Vector and Matrix Norms
The Euclidean norm of an $m \times 1$ vector $a$ is
$$\|a\| = (a'a)^{1/2} = \left(\sum_{i=1}^{m} a_i^2\right)^{1/2}.$$
The Euclidean norm of an $m \times n$ matrix $A$ is
$$\|A\| = \|\text{vec}(A)\| = \left(\text{tr}(A'A)\right)^{1/2} = \left(\sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}^2\right)^{1/2}.$$
A useful calculation is for any $m \times 1$ vectors $a$ and $b$,
$$\|ab'\| = \|a\| \|b\|$$
and in particular
$$\|aa'\| = \|a\|^2. \qquad \text{(A.6)}$$
Some useful inequalities are now given:
Schwarz Inequality: For any $m \times 1$ vectors $a$ and $b$,
$$|a'b| \le \|a\| \|b\|. \qquad \text{(A.7)}$$

Schwarz Matrix Inequality: For any $m \times n$ matrices $A$ and $B$,
$$\|A'B\| \le \|A\| \|B\|. \qquad \text{(A.8)}$$
Triangle Inequality: For any $m \times n$ matrices $A$ and $B$,
$$\|A + B\| \le \|A\| + \|B\|. \qquad \text{(A.9)}$$

Proof of Schwarz Inequality: First, suppose that $\|b\| = 0$. Then $b = 0$ and both $|a'b| = 0$ and $\|a\|\|b\| = 0$, so the inequality is true. Second, suppose that $\|b\| > 0$ and define $c = a - b\left(b'b\right)^{-1} b'a$. Since $c$ is a vector, $c'c \ge 0$. Thus
$$0 \le c'c = a'a - (a'b)^2 / (b'b).$$
Rearranging, this implies that
$$(a'b)^2 \le (a'a)(b'b).$$
Taking the square root of each side yields the result.

Proof of Schwarz Matrix Inequality: Partition $A = [a_1, \ldots, a_n]$ and $B = [b_1, \ldots, b_n]$. Then by partitioned matrix multiplication, the definition of the matrix Euclidean norm and the Schwarz inequality
$$\begin{aligned}
\|A'B\| &= \left\| \begin{bmatrix} a_1'b_1 & a_1'b_2 & \cdots \\ a_2'b_1 & a_2'b_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \right\| \\
&\le \left\| \begin{bmatrix} \|a_1\|\|b_1\| & \|a_1\|\|b_2\| & \cdots \\ \|a_2\|\|b_1\| & \|a_2\|\|b_2\| & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \right\| \\
&= \left(\sum_{i=1}^{n}\sum_{j=1}^{n} \|a_i\|^2 \|b_j\|^2\right)^{1/2} \\
&= \left(\sum_{i=1}^{n} \|a_i\|^2\right)^{1/2} \left(\sum_{i=1}^{n} \|b_i\|^2\right)^{1/2} \\
&= \left(\sum_{i=1}^{n}\sum_{j=1}^{m} a_{ji}^2\right)^{1/2} \left(\sum_{i=1}^{n}\sum_{j=1}^{m} b_{ji}^2\right)^{1/2} \\
&= \|A\| \|B\|
\end{aligned}$$

Proof of Triangle Inequality: Let $a = \text{vec}(A)$ and $b = \text{vec}(B)$. Then by the definition of the matrix norm and the Schwarz Inequality
$$\begin{aligned}
\|A + B\|^2 &= \|a + b\|^2 \\
&= a'a + 2a'b + b'b \\
&\le a'a + 2|a'b| + b'b \\
&\le \|a\|^2 + 2\|a\|\|b\| + \|b\|^2 \\
&= (\|a\| + \|b\|)^2 \\
&= (\|A\| + \|B\|)^2
\end{aligned}$$

Appendix B

Probability

B.1 Foundations
The set S of all possible outcomes of an experiment is called the sample space for the experiment. Take
the simple example of tossing a coin. There are two outcomes, heads and tails, so we can write S = fH; T g:
If two coins are tossed in sequence, we can write the four outcomes as S = fHH; HT; T H; T T g:
An event A is any collection of possible outcomes of an experiment. An event is a subset of S; including
S itself and the null set ;: Continuing the two coin example, one event is A = fHH; HT g; the event that the
rst coin is heads. We say that A and B are disjoint or mutually exclusive if A \ B = ;: For example,
the sets fHH; HT g and fT Hg are disjoint. Furthermore, if the sets A1 ; A2 ; ::: are pairwise disjoint and
[1i=1 Ai = S; then the collection A1 ; A2 ; ::: is called a partition of S:
The following are elementary set operations:
Union: $A \cup B = \{x : x \in A \text{ or } x \in B\}$.
Intersection: $A \cap B = \{x : x \in A \text{ and } x \in B\}$.
Complement: $A^c = \{x : x \notin A\}$.
The following are useful properties of set operations.
Commutativity: $A \cup B = B \cup A$; $A \cap B = B \cap A$.
Associativity: $A \cup (B \cup C) = (A \cup B) \cup C$; $A \cap (B \cap C) = (A \cap B) \cap C$.
Distributive Laws: $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$; $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$.
DeMorgan's Laws: $(A \cup B)^c = A^c \cap B^c$; $(A \cap B)^c = A^c \cup B^c$.
A probability function assigns probabilities (numbers between 0 and 1) to events A in S: This is
straightforward when S is countable; when S is uncountable we must be somewhat more careful:A set B
is called a sigma algebra (or Borel eld) if ; 2 B , A 2 B implies Ac 2 B, and A1 ; A2 ; ::: 2 B implies
[1i=1 Ai 2 B. A simple example is f;; Sg which is known as the trivial sigma algebra. For any sample space
S; let B be the smallest sigma algebra which contains all of the open sets in S: When S is countable, B is
simply the collection of all subsets of S; including ; and S: When S is the real line, then B is the collection
of all open and closed intervals. We call B the sigma algebra associated with S: We only de ne probabilities
for events contained in B.
We now can give the axiomatic de nition of probability. Given S and B, a probability function P satis es
P1
P(S) = 1; P(A) 0 for all A 2 B, and if A1 ; A2 ; ::: 2 B are pairwise disjoint, then P ([1i=1 Ai ) = i=1 P(Ai ):
Some important properties of the probability function include the following

P (;) = 0
P(A) 1
P (Ac ) = 1 P(A)
P (B \ Ac ) = P(B) P(A \ B)
P (A [ B) = P(A) + P(B) P(A \ B)
If A B then P(A) P(B)
Bonferroni's Inequality: P(A \ B) P(A) + P(B) 1
Boole's Inequality: P (A [ B) P(A) + P(B)

135
For some elementary probability models, it is useful to have simple rules to count the number of objects in a set. These counting rules are facilitated by using the binomial coefficients, which are defined for nonnegative integers $n$ and $r$, $n \ge r$, as
$$\binom{n}{r} = \frac{n!}{r!\,(n-r)!}.$$
When counting the number of objects in a set, there are two important distinctions. Counting may be
with replacement or without replacement. Counting may be ordered or unordered. For example,
consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is without replacement
if you are not allowed to select the same number twice, and is with replacement if this is allowed. Counting
is ordered or not depending on whether the sequential order of the numbers is relevant to winning the
lottery. Depending on these two distinctions, we have four expressions for the number of objects (possible
arrangements) of size r from n objects.

Without With
Replacement Replacement
n!
Ordered (n r)! nr
n n+r 1
Unordered r r

In the lottery example, if counting is unordered and without replacement, the number of potential
combinations is 49
6 = 13; 983; 816.
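A quick check of these counts in Python (the lottery numbers are those of the example above):

```python
from math import comb, perm

n, r = 49, 6
print(comb(n, r))          # unordered, without replacement: 13,983,816
print(perm(n, r))          # ordered, without replacement: n!/(n-r)!
print(n ** r)              # ordered, with replacement
print(comb(n + r - 1, r))  # unordered, with replacement
```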
If P(B) > 0 the conditional probability of the event A given the event B is

P (A \ B)
P (A j B) = :
P(B)
For any B; the conditional probability function is a valid probability function where S has been replaced by
B: Rearranging the de nition, we can write

P(A \ B) = P (A j B) P(B)

which is often quite useful. We can say that the occurrence of B has no information about the likelihood of
event A when P (A j B) = P(A); in which case we nd

P(A \ B) = P (A) P(B) (B.1)

We say that the events A and B are statistically independent when (B.1) holds. Furthermore, we say
that the collection of events A1 ; :::; Ak are mutually independent when for any subset fAi : i 2 Ig;
!
\ Y
P Ai = P (Ai ) :
i2I i2I

Theorem 3 (Bayes' Rule). For any set B and any partition A1 ; A2 ; ::: of the sample space, then for each
i = 1; 2; :::
P (B j Ai ) P(Ai )
P (Ai j B) = P1
j=1 P (B j Aj ) P(Aj )

B.2 Random Variables


A random variable X is a function from a sample space S into the real line. This induces a new sample
space { the real line { and a new probability function on the real line. Typically, we denote random variables
by uppercase letters such as X; and use lower case letters such as x for potential values and realized values.
(This is in contrast to the notation adopted for most of the textbook.) For a random variable X we de ne
its cumulative distribution function (CDF) as

F (x) = P (X x) : (B.2)

Sometimes we write this as FX (x) to denote that it is the CDF of X: A function F (x) is a CDF if and only
if the following three properties hold:

1. limx! 1 F (x) = 0 and limx!1 F (x) = 1

2. F (x) is nondecreasing in x
3. F (x) is right-continuous

We say that the random variable X is discrete if F (x) is a step function. In the latter case, the range
of X consists of a countable set of real numbers 1 ; :::; r : The probability function for X takes the form

P (X = j) = j; j = 1; :::; r (B.3)
Pr
where 0 j 1 and j=1 j = 1.
We say that the random variable X is continuous if F (x) is continuous in x: In this case P(X = ) = 0
for all 2 R so the representation (B.3) is unavailable. Instead, we represent the relative probabilities by
the probability density function (PDF)

d
f (x) = F (x)
dx
so that Z x
F (x) = f (u)du
1

and Z b
P (a X b) = f (u)du:
a

These expressions only make sense if $F(x)$ is differentiable. While there are examples of continuous random variables which do not possess a PDF, these cases are unusual and are typically ignored.
A function $f(x)$ is a PDF if and only if $f(x) \ge 0$ for all $x \in \mathbb{R}$ and $\int_{-\infty}^{\infty} f(x)\,dx = 1$.

B.3 Expectation
For any measurable real function g; we de ne the mean or expectation Eg(X) as follows. If X is
discrete,
Xr
Eg(X) = g( j ) j ;
j=1

and if X is continuous Z 1
Eg(X) = g(x)f (x)dx:
1

The latter is well de ned and nite if


Z 1
jg(x)j f (x)dx < 1: (B.4)
1

If (B.4) does not hold, evaluate


Z
I1 = g(x)f (x)dx
g(x)>0
Z
I2 = g(x)f (x)dx
g(x)<0

If I1 = 1 and I2 < 1 then we de ne Eg(X) = 1: If I1 < 1 and I2 = 1 then we de ne Eg(X) = 1: If


both I1 = 1 and I2 = 1 then Eg(X) is unde ned.
Since $E(a + bX) = a + bEX$, we say that expectation is a linear operator.
For $m > 0$, we define the $m$'th moment of $X$ as $EX^m$ and the $m$'th central moment as $E(X - EX)^m$.
Two special moments are the mean $\mu = EX$ and variance $\sigma^2 = E(X - \mu)^2 = EX^2 - \mu^2$. We call $\sigma = \sqrt{\sigma^2}$ the standard deviation of $X$. We can also write $\sigma^2 = \text{var}(X)$. For example, this allows the convenient expression $\text{var}(a + bX) = b^2 \text{var}(X)$.
The moment generating function (MGF) of X is

M ( ) = E exp ( X) :

m
The MGF does not necessarily exist. However, when it does and E jXj < 1 then
dm
M( ) = E (X m )
d m =0

which is why it is called the moment generating function.


More generally, the characteristic function (CF) of X is

C( ) = E exp (i X) :
p m
where i = 1 is the imaginary unit. The CF always exists, and when E jXj <1
dm
C( ) = im E (X m ) :
d m =0
p
The L norm, p 1; of the random variable X is
p 1=p
kXkp = (E jXj ) :

B.4 Common Distributions


For reference, we now list some important discrete distribution function.
Bernoulli

P (X = x) = px (1 p)1 x
; x = 0; 1; 0 p 1
EX = p
var(X) = p(1 p)

Binomial
n x n x
P (X = x) = p (1 p) ; x = 0; 1; :::; n; 0 p 1
x
EX = np
var(X) = np(1 p)

Geometric

P (X = x) = p(1 p)x 1
; x = 1; 2; :::; 0 p 1
1
EX =
p
1 p
var(X) =
p2
Multinomial
n!
P (X1 = x1 ; X2 = x2 ; :::; Xm = xm ) = px1 px2 pxmm ;
x1 !x2 ! xm ! 1 2
x1 + + xm = n;
p1 + + pm = 1
EXi = pi
var(Xi ) = npi (1 pi )
cov (Xi ; Xj ) = npi pj

Negative Binomial
(r + x) r
P (X = x) = p (1 p)x 1
; x = 0; 1; 2; :::; 0 p 1
x! (r)
r (1 p)
EX =
p
r (1 p)
var(X) =
p2

Poisson
x
exp ( )
P (X = x) = ; x = 0; 1; 2; :::; >0
x!
EX =
var(X) =
We now list some important continuous distributions.
Beta
( + ) 1 1
f (x) = x (1 x) ; 0 x 1; > 0; >0
( ) ( )
=
+
var(X) = 2
( + + 1) ( + )
Cauchy
1
f (x) = ; 1<x<1
(1 + x2 )
EX = 1
var(X) = 1
Exponential
1 x
f (x) = exp ; 0 x < 1; >0
EX =
2
var(X) =
Logistic
exp ( x)
f (x) = 2; 1 < x < 1;
(1 + exp ( x))
EX = 0
2
var(X) =
3
Lognormal
!
2
1 (log x )
f (x) = p exp 2
; 0 x < 1; >0
2 x 2
2
EX = exp + =2
2 2
var(X) = exp 2 + 2 exp 2 +
Pareto

f (x) = +1
; x < 1; > 0; >0
x
EX = ; >1
1
2
var(X) = 2 ; >2
( 1) ( 2)
Uniform
1
f (x) = ; a x b
b a
a+b
EX =
2
2
(b a)
var(X) =
12

Weibull
1 x
f (x) = x exp ; 0 x < 1; > 0; >0

1= 1
EX = 1+

2= 2 2 1
var(X) = 1+ 1+

B.5 Multivariate Random Variables


A pair of bivariate random variables (X; Y ) is a function from the sample space into R2 : The joint CDF
of (X; Y ) is
F (x; y) = P (X x; Y y) :
If F is continuous, the joint probability density function is
@2
f (x; y) = F (x; y):
@x@y
For a Borel measurable set $A \subset \mathbb{R}^2$,
$$P\left((X, Y) \in A\right) = \int\!\!\int_A f(x, y)\,dx\,dy.$$

For any measurable function g(x; y);


Z 1 Z 1
Eg(X; Y ) = g(x; y)f (x; y)dxdy:
1 1

The marginal distribution of X is

FX (x) = P(X x)
= lim F (x; y)
y!1
Z x Z 1
= f (x; y)dydx
1 1

so the marginal density of X is


Z 1
d
fX (x) = FX (x) = f (x; y)dy:
dx 1

Similarly, the marginal density of Y is


Z 1
fY (y) = f (x; y)dx:
1

The random variables X and Y are de ned to be independent if f (x; y) = fX (x)fY (y): Furthermore,
X and Y are independent if and only if there exist functions g(x) and h(y) such that f (x; y) = g(x)h(y):
If X and Y are independent, then
Z Z
E (g(X)h(Y )) = g(x)h(y)f (y; x)dydx
Z Z
= g(x)h(y)fY (y)fX (x)dydx
Z Z
= g(x)fX (x)dx h(y)fY (y)dy

= Eg (X) Eh (Y ) : (B.5)

if the expectations exist. For example, if X and Y are independent then

E(XY ) = EXEY:

Another implication of (B.5) is that if X and Y are independent and Z = X + Y; then

MZ ( ) = E exp ( (X + Y ))
= E (exp ( X) exp ( Y ))
= E exp 0 X E exp 0 Y
= MX ( )MY ( ): (B.6)

The covariance between X and Y is

cov(X; Y ) = XY = E ((X EX) (Y EY )) = EXY EXEY:

The correlation between X and Y is


XY
corr (X; Y ) = XY = :
x Y

The Cauchy-Schwarz Inequality implies that j XY j 1:The correlation is a measure of linear dependence,
free of units of measurement.
If X and Y are independent, then XY = 0 and XY = 0: The reverse, however, is not true. For example,
if EX = 0 and EX 3 = 0, then cov(X; X 2 ) = 0:
A useful fact is that
var (X + Y ) = var(X) + var(Y ) + 2 cov(X; Y ):
An implication is that if X and Y are independent, then

var (X + Y ) = var(X) + var(Y );

the variance of the sum is the sum of the variances.


A k 1 random vector X = (X1 ; :::; Xk )0 is a function from S to Rk : Let x = (x1 ; :::; xk )0 denote a vector
in Rk : (In this Appendix, we use bold to denote vectors. Bold capitals X are random vectors and bold lower
case x are nonrandom vectors. Again, this is in distinction to the notation used in the bulk of the text) The
vector X has the distribution and density functions

F (x) = P(X x)
@k
f (x) = F (x):
@x1 @xk

For a measurable function g : Rk ! Rs ; we de ne the expectation


Z
Eg(X) = g(x)f (x)dx
Rk

where the symbol dx denotes dx1 dxk : In particular, we have the k 1 multivariate mean

= EX

and k k covariance matrix


0
= E (X ) (X )
= EXX 0 0

If the elements of X are mutually independent, then is a diagonal matrix and


k
! k
X X
var Xi = var (X i )
i=1 i=1

B.6 Conditional Distributions and Expectation


The conditional density of Y given X = x is de ned as
f (x; y)
fY jX (y j x) =
fX (x)

if fX (x) > 0: One way to derive this expression from the de nition of conditional probability is

@
fY jX (y j x) = lim P (Y y j x X x + ")
@y "!0
@ P (fY yg \ fx X x + "g)
= lim
@y "!0 P(x X x + ")
@ F (x + "; y) F (x; y)
= lim
@y "!0 FX (x + ") FX (x)
@
@ @x F (x + "; y)
= lim
@y "!0 fX (x + ")
@2
@x@y F (x; y)
=
fX (x)
f (x; y)
= :
fX (x)

The conditional mean or conditional expectation is the function


Z 1
m(x) = E (Y j X = x) = yfY jX (y j x) dy:
1

The conditional mean m(x) is a function, meaning that when X equals x; then the expected value of Y is
m(x):
Similarly, we de ne the conditional variance of Y given X = x as
2
(x) = var (Y j X = x)
2
= E (Y m(x)) j X = x
= E Y2 jX =x m(x)2 :
2
Evaluated at x = X; the conditional mean m(X) and conditional variance (x) are random vari-
2
ables, functions of X: We write this as E(Y j X) = m(X) and var (Y j X) = (X): For example, if
E (Y j X = x) = + 0 x; then E (Y j X) = + 0 X; a transformation of X:
The following are important facts about conditional expectations.
Simple Law of Iterated Expectations:

E (E (Y j X)) = E (Y ) (B.7)

Proof :

E (E (Y j X)) = E (m(X))
Z 1
= m(x)fX (x)dx
1
Z 1Z 1
= yfY jX (y j x) fX (x)dydx
1 1
Z 1Z 1
= yf (y; x) dydx
1 1
= E(Y ):

Law of Iterated Expectations:

E (E (Y j X; Z) j X) = E (Y j X) (B.8)

Conditioning Theorem. For any function g(x);

E (g(X)Y j X) = g (X) E (Y j X) (B.9)

Proof : Let

h(x) = E (g(X)Y j X = x)
Z 1
= g(x)yfY jX (y j x) dy
1
Z 1
= g(x) yfY jX (y j x) dy
1
= g(x)m(x)

where m(x) = E (Y j X = x) : Thus h(X) = g(X)m(X), which is the same as E (g(X)Y j X) = g (X) E (Y j X) :

B.7 Transformations
Suppose that X 2 Rk with continuous distribution function FX (x) and density fX (x): Let Y = g(X)
where g(x) : Rk ! Rk is one-to-one, di erentiable, and invertible. Let h(y) denote the inverse of g(x). The
Jacobian is
@
J(y) = det h(y) :
@y 0
Consider the univariate case k = 1: If g(x) is an increasing function, then g(X) Y if and only if
X h(Y ); so the distribution function of Y is

FY (y) = P (g(X) y)
= P (X h(Y ))
= FX (h(Y ))

so the density of Y is
d d
fY (y) = FY (y) = fX (h(Y )) h(y):
dy dy
If g(x) is a decreasing function, then g(X) Y if and only if X h(Y ); so

FY (y) = P (g(X) y)
= 1 P (X h(Y ))
= 1 FX (h(Y ))

and the density of Y is


d
fY (y) = fX (h(Y )) h(y):
dy
We can write these two cases jointly as
$$f_Y(y) = f_X(h(y))\, |J(y)|. \qquad \text{(B.10)}$$
This is known as the change-of-variables formula. This same formula (B.10) holds for $k > 1$, but its justification requires deeper results from analysis.
As one example, take the case $X \sim U[0,1]$ and $Y = -\log(X)$. Here, $g(x) = -\log(x)$ and $h(y) = \exp(-y)$, so the Jacobian is $J(y) = -\exp(-y)$. As the range of $X$ is $[0,1]$, that for $Y$ is $[0, \infty)$. Since $f_X(x) = 1$ for $0 \le x \le 1$, (B.10) shows that
$$f_Y(y) = \exp(-y), \qquad 0 \le y < \infty,$$
an exponential density.
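A minimal simulation check of this example (illustrative code, not part of the text): draw $X \sim U[0,1]$, set $Y = -\log X$, and compare with the exponential density.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=100_000)
Y = -np.log(X)                      # transformed variable

# The exponential density exp(-y) has mean 1 and variance 1
print(Y.mean(), Y.var())            # both should be close to 1

# Compare P(Y <= 1) with the exponential CDF 1 - exp(-1)
print((Y <= 1.0).mean(), 1 - np.exp(-1.0))
```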

B.8 Normal and Related Distributions


The standard normal density is
$$\phi(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right), \qquad -\infty < x < \infty.$$
This density has all moments finite. Since it is symmetric about zero all odd moments are zero. By iterated integration by parts, we can also show that $EX^2 = 1$ and $EX^4 = 3$. It is conventional to write $X \sim N(0,1)$, and to denote the standard normal density function by $\phi(x)$ and its distribution function by $\Phi(x)$. The latter has no closed-form solution.
If $Z$ is standard normal and $X = \mu + \sigma Z$, then using the change-of-variables formula, $X$ has density
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty,$$
which is the univariate normal density. The mean and variance of the distribution are $\mu$ and $\sigma^2$, and it is conventional to write $X \sim N(\mu, \sigma^2)$.
For $x \in \mathbb{R}^k$, the multivariate normal density is
$$f(x) = \frac{1}{(2\pi)^{k/2} \det(\Sigma)^{1/2}} \exp\left(-\frac{(x - \mu)' \Sigma^{-1} (x - \mu)}{2}\right), \qquad x \in \mathbb{R}^k.$$
The mean and covariance matrix of the distribution are $\mu$ and $\Sigma$, and it is conventional to write $X \sim N(\mu, \Sigma)$.
It is useful to observe that the MGF and CF of the multivariate normal are $\exp\left(\lambda'\mu + \lambda'\Sigma\lambda/2\right)$ and $\exp\left(i\lambda'\mu - \lambda'\Sigma\lambda/2\right)$, respectively.
If X 2 Rk is multivariate normal and the elements of X are mutually uncorrelated, then = diagf 2j g
is a diagonal matrix. In this case the density function can be written as
!!
2 2 2 2
1 (x1 1) = 1 + + (xk k) = k
f (x) = k=2
exp
(2 ) 1 k
2
k 2 !
Y 1 xj j
= exp
j=1 (2 )
1=2
j
2 2j

which is the product of marginal univariate normal densities. This shows that if X is multivariate normal
with uncorrelated elements, then they are mutually independent.
Another useful fact is that if X N ( ; ) and Y = a + BX with B an invertible matrix, then by the
change-of-variables formula, the density of Y is
0 1
1 (y Y ) (y Y )
f (y) = k=2 1=2
exp Y
; y 2 Rk :
(2 ) det ( Y)
2

1=2 1=2
where Y = a + B and Y = B B 0 ; where we used the fact that det B B 0 = det ( ) det (B) :
This shows that linear transformations of normals are also normal.

Theorem B.8.1 Let $X \sim N(0, I_r)$. Then $Q = X'X$ has the density
$$f(q) = \frac{1}{\Gamma\left(\frac{r}{2}\right) 2^{r/2}}\, q^{r/2 - 1} \exp(-q/2), \qquad q \ge 0, \qquad \text{(B.11)}$$
and is known as the chi-square density with $r$ degrees of freedom, denoted $\chi^2_r$. Its mean and variance are $\mu = r$ and $\sigma^2 = 2r$.

Theorem B.8.2 If $Z \sim N(0, A)$ with $A > 0$, $q \times q$, then $Z'A^{-1}Z \sim \chi^2_q$.

Theorem B.8.3 Let $Z \sim N(0,1)$ and $Q \sim \chi^2_r$ be independent. Set
$$t_r = \frac{Z}{\sqrt{Q/r}}.$$
The density of $t_r$ is
$$f(x) = \frac{\Gamma\left(\frac{r+1}{2}\right)}{\sqrt{r\pi}\,\Gamma\left(\frac{r}{2}\right)} \left(1 + \frac{x^2}{r}\right)^{-\frac{r+1}{2}} \qquad \text{(B.12)}$$
and is known as the student's t distribution with $r$ degrees of freedom.

Proof of Theorem B.8.1. The MGF for the density (B.11) is

Z 1
1
E exp (tQ) = r y r=2 1
exp (ty) exp ( y=2) dy
0 2 2r=2
r=2
= (1 2t) (B.13)
R1
where the second equality uses the fact that 0 y a 1 exp ( by) dy = b a (a); which can be found by applying
change-of-variables to the gamma function. For Z N(0; 1) the distribution of Z 2 is
p
P Z 2 y = 2P (0 Z y)
Z py
1 x2
= 2 p exp dx
0 2 2
Z y
1 s
= 1 1=2
s 1=2 exp ds
0 2 2 2
1 p
using the change{of-variables s = x2 and the fact 2 = : Thus the density of Z 2 is (B.11) with r = 1:
1=2 Pr
2
From (B.13), we see that the MGF of Z is (1 2t) . Since we can write Q = X 0 X = j=1 Zj2 where
r=2
the Zj are independent N (0; 1), (B.6) can be used to show that the MGF of Q is (1 2t) ; which we
showed in (B.13) is the MGF of the density (B.11).

Proof of Theorem B.8.2. The fact that A > 0 means that we can write A = CC 0 where C is non-singular.
Then A 1 = C 10 C 1 and
1 1 10 1
C Z N 0; C AC = N 0; C CC 0 C 10
= N (0; I q ) :

Thus
0
Z 0A 1
Z = Z 0C 10
C 1
Z= C 1
Z C 1
Z 2
q:

Proof of Theorem B.8.3. Using the simple law of iterated expectations, tr has distribution function
!
Z
F (x) = P p x
Q=r
( r )
Q
= E Z x
r
" r !#
Q
= E P Z x jQ
r
r !
Q
= E x
r

Thus its density is


r !
d Q
f (x) = E x
dx r
r !r !
Q Q
= E x
r r
Z r !
1
1 qx2 q 1 r=2 1
= p exp r q exp ( q=2) dq
0 2 2r r 2 2r=2
r+1
x2 ( r+1
2 )
2
= p r 1+ :
r 2
r

Appendix C

Asymptotic Theory

C.1 Inequalities
The following are an important set of inequalities which are used in asymptotic distribution theory.

Jensen's Inequality. If g( ) : R ! R is convex, then for any random variable x for which E jxj < 1 and
E jg (x)j < 1;
g(E(x)) E (g (x)) : (C.1)
Expectation Inequality. For any random variable x for which E jxj < 1;

jE(x)j E jxj : (C.2)

Cauchy-Schwarz Inequality. For any random m n matrices X and Y;


1=2 1=2
2 2
E X 0Y E kXk E kY k : (C.3)

1 1
Holder's Inequality. If p > 1 and q > 1 and p + q = 1; then for any random m n matrices X and Y;

p 1=p q 1=q
E X 0Y (E kXk ) (E kY k ) : (C.4)

Minkowski's Inequality. For any random m n matrices X and Y;


p 1=p p 1=p p 1=p
(E kX + Y k ) (E kXk ) + (E kY k ) (C.5)

Markov's Inequality. For any random vector x and non-negative function g(x) 0;

1
P(g(x) > ) Eg(x): (C.6)

Proof of Jensen's Inequality. Let a + bu be the tangent line to g(u) at u = Ex: Since g(u) is convex,
tangent lines lie below it. So for all u; g(u) a + bu yet g(Ex) = a + bEx since the curve is tangent at Ex:
Applying expectations, Eg(x) a + bEx = g(Ex); as stated.

Proof of Expecation Inequality. Follows from an application of Jensen's Inequality, noting that the
function g(u) = juj is convex.
1 1
Proof of Holder's Inequality. Since p + q = 1 an application of Jensen's Inequality shows that for any
real a and b
1 1 1 1
exp a+ b exp (a) + exp (b) :
p q p q
Setting u = exp (a) and v = exp (b) this implies
u v
u1=p v 1=q +
p q

and this inequality holds for any u > 0 and v > 0:
p p q q
Set u = kXk =E kXk and v = kY k =E kY k : Note that Eu = Ev = 1: By the matrix Schwarz
0
Inequality (A.8), X Y kXk kY k. Thus
E X 0Y E (kXk kY k)
p 1=p q 1=q p 1=p q 1=q
(E kXk ) (E kY k ) (E kXk ) (E kY k )
= E u1=p v 1=q
u v
E +
p q
1 1
= +
p q
= 1;
which is (C.4).

Proof of Minkowski's Inequality. Note that by rewriting, using the triangle inequality (A.9), and then
Holder's Inequality to the two expectations
p p 1
E kX + Y k = E kX + Y k kX + Y k
p 1 p 1
E kXk kX + Y k + E kY k kX + Y k
1=q 1=q
p 1=p q(p 1) p 1=p q(p 1)
(E kXk ) E kX + Y k + (E kY k ) E kX + Y k
p 1=p p 1=p p (p 1)=p
= (E kXk ) + (E kY k ) E (kX + Y k )

where the second equality picks q to satisfy 1=p + 1=q = 1; and the nal equality uses this fact to make the
p (p 1)=p
substitution q = p=(p 1) and then collects terms. Dividing both sides by E (kX + Y k ) ; we obtain
(C.5).

Proof of Markov's Inequality. Let f denote the density function of x: Then


Z
P (g(x) ) = f (u)du
fug(u) g
Z
g(u)
f (u)du
fug(u) g
Z 1
1
g(u)f (u)du
1
1
= E (g(x))
the rst inequality using the region of integration fg(u) > g:

C.2 Weak Law of Large Numbers


Let zn 2 R be a random variable. We say that zn converges in probability to z as n ! 1; denoted
p
zn ! z as n ! 1; if for all > 0;
lim P (jzn zj > ) = 0:
n!1
This is a probabilistic way of generalizing the mathematical de nition of a limit. The limit z may be a
constant or may be random.
p
If Z n 2 Rk r is a matrix, we say that Z n ! Z as n ! 1 if each element of Z n converges in probability
to the corresponding element of Z:
The WLLN shows that sample averages converge in probability to the population average.

Theorem C.2.1 Weak Law of Large Numbers (WLLN). If xi 2 R is iid and E jxi j < 1; then as
n!1
n
1X p
xn = xi ! E(xi ):
n i=1

Proof: Without loss of generality, we can set E(xi ) = 0 by recentering xi on its expectation.
We need to show that for all > 0 and > 0 there is some N < 1 so that for all n N; P (jxn j > ) :
Fix and : Set " = =3: Pick C < 1 large enough so that

E (jxi j 1 (jxi j > C)) " (C.7)

(where 1 ( ) is the indicator function) which is possible since E jxi j < 1: De ne the random vectors

wi = xi 1 (jxi j C) E (xi 1 (jxi j C))


zi = xi 1 (jxi j > C) E (xi 1 (jxi j > C)) :

By the Triangle Inequality (A.9), the Expectation Inequality (C.2), and (C.7),
n
1X
E jz n j = E zi
n i=1
n
1X
E jzi j
n i=1
= E jzi j
E jxi j 1 (jxi j > C) + jE (xi 1 (jxi j > C))j
2E jxi j 1 (jxi j > C)
2": (C.8)

By Jensen's Inequality (C.1), the fact that the wi are iid and mean zero, and the bound jwi j 2C;
2
(E jwn j) Ew2n
Ewi2
=
n
4C 2
n
2
" (C.9)

the nal inequality holding for n 4C 2 ="2 = 36C 2 = 2 2 .


Finally, by Markov's Inequality (C.6), the fact that xn = wn + z n ; the triangle inequality, (C.8) and
(C.9),
E jxn j E jwn j + E jz n j 3"
P (jxn j > ) = ;
2 2
the equality by the de nition of ": We have shown that for any > 0 and > 0 then for all n 36C 2 = ;
P (jxn j > ) ; as needed.

C.3 Convergence in Distribution


Let z n be a random vector with distribution Fn (u) = P (z n u) : We say that z n converges in
d
distribution to z as n ! 1, denoted z n ! z; where z has distribution F (u) = P (Z u) ; if for all u at
which F (u) is continuous, Fn (u) ! F (u) as n ! 1:

Theorem C.3.1 Central Limit Theorem (CLT). If $x_i \in \mathbb{R}^k$ is iid and $E x_{ji}^2 < \infty$ for $j = 1, \ldots, k$, then as $n \to \infty$
$$\sqrt{n}\left(\overline{x}_n - \mu\right) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (x_i - \mu) \xrightarrow{d} N(0, \Omega),$$
where $\mu = E x_i$ and $\Omega = E(x_i - \mu)(x_i - \mu)'$.
Proof: The moment bound Ex2ji < 1 is su cient to guarantee that the elements of and V are well
de ned and nite. Without loss of generality, it is su cient to consider the case = 0 and = I k :

For 2 Rk ; let C ( ) = E exp i 0 xi denote the characteristic function of xi and set c ( ) = log C( ).
Then observe
@
C( ) = iE xi exp i 0 xi
@
@2
0 C( ) = i2 E xi x0i exp i 0 xi
@ @
so when evaluated at =0

C(0) = 1
@
C(0) = iE (xi ) = 0
@
@2
0 C(0) = E (xi x0i ) = Ik:
@ @
Furthermore,
@ @
c ( ) = c( ) = C( ) 1 C( )
@ @
@2 1 @2 2 @ @
c ( ) = 0 c( ) = C( ) C( ) C( ) C( ) C( )
@ @ @ @ 0 @ @ 0
so when evaluated at =0

c(0) = 0
c (0) = 0
c (0) = Ik:

By a second-order Taylor series expansion of c( ) about = 0;


1 0 1 0
c( ) = c(0) + c (0)0 + c ( ) = c ( ) (C.10)
2 2
where lies on the line segment joining p
0 and : p
We now compute Cn ( ) = E exp i 0 nxn the characteristic function of nxn : By the properties of
the exponential function, the independence of the xi ; the de nition of c( ) and (C.10)
n
!
1 X 0
log Cn ( ) = log E exp i p xi
n i=1
n
Y 1 0
= log E exp i p xi
i=1
n
n
Y 1 0
= log E exp i p xi
i=1
n

= nc p
n
1 0
= c ( n)
2
p
where n ! 0 lies on the line segment joining 0 and = n: Since c ( n) !c (0) = I k ; we see that as
n ! 1;
1 0
Cn ( ) ! exp
2
the characteristic function of the N (0; I k ) distribution. This is su cient to establish the theorem.
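A minimal Monte Carlo sketch of the CLT (illustrative, scalar case): the standardized sample mean of a skewed distribution is approximately standard normal for moderate $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 20_000
# Exponential(1) draws: mean 1, variance 1, clearly non-normal
x = rng.exponential(size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1) - 1.0)   # sqrt(n) (x_bar - mu), variance 1

# Moments and a tail probability should be close to those of N(0,1)
print(z.mean(), z.var())                  # approx 0 and 1
print((z > 1.645).mean())                 # approx 0.05
```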

C.4 Asymptotic Transformations


p
Theorem C.4.1 Continuous Mapping Theorem 1 (CMT). If z n ! c as n ! 1 and g ( ) is contin-
p
uous at c; then g(z n ) ! g(c) as n ! 1.

Proof: Since g is continuous at c; for all " > 0 we can nd a > 0 such that if kz n ck < then
jg (z n ) g (c)j ": Recall that A B implies P(A) P(B): Thus P (jg (z n ) g (c)j ") P (kz n ck < ) !
p p
1 as n ! 1 by the assumption that z n ! c: Hence g(z n ) ! g(c) as n ! 1.

d
Theorem C.4.2 Continuous Mapping Theorem 2. If z n ! z as n ! 1 and g ( ) is continuous; then
d
g(z n ) ! g(z) as n ! 1.

Theorem C.4.3 Delta Method: If $\sqrt{n}\left(\theta_n - \theta_0\right) \xrightarrow{d} N(0, \Sigma)$, where $\theta$ is $m \times 1$ and $\Sigma$ is $m \times m$, and $g(\theta) : \mathbb{R}^m \to \mathbb{R}^k$, $k \le m$, then
$$\sqrt{n}\left(g(\theta_n) - g(\theta_0)\right) \xrightarrow{d} N\left(0, g_\theta \Sigma g_\theta'\right)$$
where $g_\theta(\theta) = \frac{\partial}{\partial \theta'} g(\theta)$ and $g_\theta = g_\theta(\theta_0)$.

Proof : By a vector Taylor series expansion, for each element of g;

gj ( n) = gj ( 0) + gj ( jn ) ( n 0)

where nj lies on the line segment between n and 0 and therefore converges in probability to 0: It follows
p
that ajn = gj ( jn ) gj ! 0: Stacking across elements of g; we nd
p p d
n (g ( n) g( 0 )) = (g + an ) n ( n 0) ! g N (0; ) = N (0; g g0 ) :

Appendix D

Maximum Likelihood

If the distribution of $y_i$ is $F(y, \theta)$ where $F$ is a known distribution function and $\theta \in \Theta$ is an unknown $m \times 1$ vector, we say that the distribution is parametric and that $\theta$ is the parameter of the distribution $F$. The space $\Theta$ is the set of permissible values for $\theta$. In this setting the method of maximum likelihood is the appropriate technique for estimation and inference on $\theta$.
If the distribution $F$ is continuous then the density of $y_i$ can be written as $f(y, \theta)$ and the joint density of a random sample $(y_1, \ldots, y_n)$ is
$$f_n(y_1, \ldots, y_n; \theta) = \prod_{i=1}^{n} f(y_i, \theta).$$
The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of $\theta$. The log-likelihood function is its natural log
$$\log L(\theta) = \sum_{i=1}^{n} \log f(y_i, \theta).$$
If the distribution $F$ is discrete, the likelihood and log-likelihood are constructed by setting $f(y, \theta) = P(y_i = y; \theta)$.
Define the Hessian
$$H = -E \frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_0) \qquad \text{(D.1)}$$
and the outer product matrix
$$\Omega = E\left(\frac{\partial}{\partial \theta} \log f(y_i, \theta_0)\right)\left(\frac{\partial}{\partial \theta} \log f(y_i, \theta_0)\right)'. \qquad \text{(D.2)}$$
Two important features of the likelihood are

Theorem D.0.4
$$E \left.\frac{\partial}{\partial \theta} \log f(y_i, \theta)\right|_{\theta = \theta_0} = 0 \qquad \text{(D.3)}$$
$$H = \Omega \equiv I_0 \qquad \text{(D.4)}$$
The matrix $I_0$ is called the information, and the equality (D.4) is often called the information matrix equality.

Theorem D.0.5 Cramer-Rao Lower Bound. If $\tilde{\theta}$ is an unbiased estimator of $\theta \in \mathbb{R}$, then $\text{var}(\tilde{\theta}) \ge (n I_0)^{-1}$.
The Cramer-Rao Theorem gives a lower bound for estimation. However, the restriction to unbiased estimators means that the theorem has little direct relevance for finite sample efficiency.
The maximum likelihood estimator or MLE $\hat{\theta}$ is the parameter value which maximizes the likelihood (equivalently, which maximizes the log-likelihood). We can write this as
$$\hat{\theta} = \operatorname*{argmax}_{\theta \in \Theta} \log L(\theta).$$

In some simple cases, we can nd an explicit expression for ^ as a function of the data, but these cases are
rare. More typically, the MLE ^ must be found by numerical methods.
Why do we believe that the MLE ^ is estimating the parameter ? Observe that when standardized, the
log-likelihood is a sample average
n
1 1X p
log L( ) = log f (y i ; ) ! E log f (y i ; ) :
n n i=1

As the MLE ^ maximizes the left-hand-side, we can see that it is an estimator of the maximizer of the
right-hand-side. The rst-order condition for the latter problem is
@
0= E log f (y i ; )
@

which holds at = 0 by (D.3). In fact, under conventional regularity conditions, ^ is consistent for this
p
value, ^ ! 0 as n ! 1:
Theorem D.0.6 Under regularity conditions, $\sqrt{n}\left(\hat{\theta} - \theta_0\right) \xrightarrow{d} N\left(0, I_0^{-1}\right)$.
Thus in large samples, the approximate variance of the MLE is $(n I_0)^{-1}$, which is the Cramer-Rao lower bound. Thus in large samples the MLE has approximately the best possible variance. Therefore the MLE is called asymptotically efficient.
Typically, to estimate the asymptotic variance of the MLE we use an estimate based on the Hessian formula (D.1)
$$\hat{H} = -\frac{1}{n} \sum_{i=1}^{n} \frac{\partial^2}{\partial \theta \partial \theta'} \log f\left(y_i, \hat{\theta}\right). \qquad \text{(D.5)}$$
We then set $\hat{I}_0^{-1} = \hat{H}^{-1}$. Asymptotic standard errors for $\hat{\theta}$ are then the square roots of the diagonal elements of $n^{-1} \hat{I}_0^{-1}$.
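A minimal sketch, under illustrative assumptions, of computing MLE standard errors from the Hessian formula (D.5): maximize the log-likelihood, then invert the (negative) numerical Hessian of the average log-likelihood. The exponential model, names, and step size are not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=500)       # data from f(y; lam) = (1/lam) exp(-y/lam)

def avg_loglik(lam):
    return np.mean(-np.log(lam) - y / lam)

# MLE has a closed form here (the sample mean); in general use a numerical optimizer.
lam_hat = y.mean()

# Numerical second derivative of the average log-likelihood at lam_hat
eps = 1e-4
H_hat = (avg_loglik(lam_hat + eps) - 2 * avg_loglik(lam_hat) + avg_loglik(lam_hat - eps)) / eps**2
I0_hat = -H_hat                                # estimate of the information I_0
se = np.sqrt(1.0 / (len(y) * I0_hat))          # sqrt of (n * I0_hat)^(-1)
print(lam_hat, se)                             # se should be close to lam_hat / sqrt(n)
```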
Sometimes a parametric density function f (y; ) is used to approximate the true unknown density f (y);
but it is not literally believed that the model f (y; ) is necessarily the true density. In this case, we refer to
log L( ) as a quasi-likelihood and the its maximizer ^ as a quasi-mle or QMLE.
In this case there is not a \true" value of the parameter : Instead we de ne the pseudo-true value 0
as the maximizer of Z
E log f (y i ; ) = f (y) log f (y; ) dy

which is the same as the minimizer of


Z
f (y)
KLIC = f (y) log dy
f (y; )

the Kullback-Leibler information distance between the true density f (y) and the parametric density f (y; ):
Thus the QMLE 0 is the value which makes the parametric density \closest" to the true value according to
this measure of distance. The QMLE is consistent for the pseudo-true value, but has a di erent covariance
matrix than in the pure MLE case, since the information matrix equality (D.4) does not hold. A minor
adjustment to Theorem (D.0.6) yields the asymptotic distribution of the QMLE:
$$\sqrt{n}\left(\hat{\theta} - \theta_0\right) \xrightarrow{d} N(0, V), \qquad V = H^{-1} \Omega H^{-1}.$$
The moment estimator for $V$ is
$$\hat{V} = \hat{H}^{-1} \hat{\Omega} \hat{H}^{-1}$$
where $\hat{H}$ is given in (D.5) and
$$\hat{\Omega} = \frac{1}{n} \sum_{i=1}^{n} \left(\frac{\partial}{\partial \theta} \log f\left(y_i, \hat{\theta}\right)\right)\left(\frac{\partial}{\partial \theta} \log f\left(y_i, \hat{\theta}\right)\right)'.$$
Asymptotic standard errors (sometimes called qmle standard errors) are then the square roots of the diagonal elements of $n^{-1} \hat{V}$.

Proof of Theorem D.0.4. To see (D.3);
Z
@ @
E log f (y i ; ) = log f (y; ) f (y; 0 ) dy
@ = 0
@ = 0
Z
@ f (y; 0 )
= f (y; ) dy
@ f (y; ) = 0
Z
@
= f (y; ) dy
@ = 0
@
= 1 = 0:
@ = 0

Similarly, we can show that !


@2
@ @ f (y i ; 0 )
0
E = 0:
f (y i ; 0 )
By direction computation,
@2 @ @ 0
@2 @ @ f (y i ; 0 )
0 @ f (y i ; 0) @ f (y i ; 0)
0 log f (y i ; 0) = 2
@ @ f (y i ; 0 ) f (y i ; 0)
@2
@ @ f (y i ; 0 )
0 @ @ 0
= log f (y i ; 0) log f (y i ; 0) :
f (y i ; 0 ) @ @
Taking expectations yields (D.4).

Proof of Theorem D.0.5. Let Y = (y 1 ; :::; y n ) be the sample, and set


Xn
@ @
S= log fn (Y ; 0) = log f (y i ; 0)
@ i=1
@

which by Theorem (D.0.4) has mean zero and variance nH: Write the estimator ~ = ~ (Y ) as a function of
the data. Since ~ is unbiased for any ;
Z
= E~ = ~ (Y ) f (Y ; ) dY :

Di erentiating with respect to and evaluating at 0 yields


Z Z
~ @ @
1= (Y ) f (Y ; ) dY = ~ (Y ) log f (Y ; ) f (Y ; 0 ) dY = E ~S :
@ @
By the Cauchy-Schwarz inequality
2
1 = E ~S var (S) var ~

so
1 1
var ~ = :
var (S) nH

Proof of Theorem D.0.6 Taking the rst-order condition for maximization of log L( ), and making a
rst-order Taylor series expansion,
@
0 = log L( )
@ =^
Xn
@
= log f y i ; ^
i=1
@
Xn n
X
@ @2 ^
' log f (y i ; 0) + 0 log f (y i ; n) 0 ;
i=1
@ i=1
@ @

where n lies on a line segment joining ^ and 0: (Technically, the speci c value of n varies by row in this
expansion.) Rewriting this equation, we nd

n
! 1 n
!
X @2 X @
^ 0 = log f (y i ; n) log f (y i ; 0) :
0
i=1
@ @ i=1
@

@
Since @ log f (y i ; 0) is mean-zero with covariance matrix ; an application of the CLT yields
n
1 X @ d
p log f (y i ; 0) ! N (0; ) :
n i=1 @

The analysis of the sample Hessian is somewhat more complicated due to the presence of n : Let H( ) =
@2 p p
@ @ 0 log f (y i ; ) : If it is continuous in ; then since n ! 0 we nd H( n ) ! H and so

n n
1 X @2 1X @2
0 log f (y i ; n) = 0 log f (y i ; n) H( n) + H( n)
n i=1 @ @ n i=1 @ @
p
!H

by an application of a uniform WLLN. Together,


p d
n ^ 0 !H 1
N (0; ) = N 0; H 1
H 1
= N 0; H 1
;

the nal equality using Theorem D.0.4 .

Appendix E

Numerical Optimization

Many econometric estimators are defined by an optimization problem of the form
$$\hat{\theta} = \operatorname*{argmin}_{\theta \in \Theta} Q(\theta) \qquad \text{(E.1)}$$
where the parameter is $\theta \in \Theta \subset \mathbb{R}^m$ and the criterion function is $Q(\theta) : \Theta \to \mathbb{R}$. For example NLLS, GLS, MLE and GMM estimators take this form. In most cases, $Q(\theta)$ can be computed for given $\theta$, but $\hat{\theta}$ is not available in closed form. In this case, numerical methods are required to obtain $\hat{\theta}$.

E.1 Grid Search


Many optimization problems are either one dimensional (m = 1) or involve one-dimensional optimization
as a sub-problem (for example, a line search). In this context grid search may be employed.
Grid Search. Let $\Theta = [a, b]$ be an interval. Pick some $\varepsilon > 0$ and set $G = (b - a)/\varepsilon$ to be the number of gridpoints. Construct an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b-a)/G : j = 0, \ldots, G\}$. At each point evaluate the criterion function and find the gridpoint which yields the smallest value of the criterion, which is $\theta(\hat{\jmath})$ where $\hat{\jmath} = \operatorname*{argmin}_{0 \le j \le G} Q(\theta(j))$. This value $\theta(\hat{\jmath})$ is the gridpoint estimate of $\hat{\theta}$. If the grid is sufficiently fine to capture small oscillations in $Q(\theta)$, the approximation error is bounded by $\varepsilon$; that is, $|\theta(\hat{\jmath}) - \hat{\theta}| \le \varepsilon$. Plots of $Q(\theta(j))$ against $\theta(j)$ can help diagnose errors in grid selection. This method is quite robust but potentially costly.
Two-Step Grid Search. The gridsearch method can be refined by a two-step execution. For an error bound of $\varepsilon$ pick $G$ so that $G^2 = (b-a)/\varepsilon$. For the first step define an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b-a)/G : j = 0, \ldots, G\}$. At each point evaluate the criterion function and let $\hat{\jmath} = \operatorname*{argmin}_{0 \le j \le G} Q(\theta(j))$. For the second step define an equally spaced grid on $[\theta(\hat{\jmath} - 1), \theta(\hat{\jmath} + 1)]$ with $G$ gridpoints, which is $\{\theta'(k) = \theta(\hat{\jmath} - 1) + 2k(b-a)/G^2 : k = 0, \ldots, G\}$. Let $\hat{k} = \operatorname*{argmin}_{0 \le k \le G} Q(\theta'(k))$. The estimate of $\hat{\theta}$ is $\theta'(\hat{k})$. The advantage of the two-step method over a one-step grid search is that the number of function evaluations has been reduced from $(b-a)/\varepsilon$ to $2\sqrt{(b-a)/\varepsilon}$, which can be substantial. The disadvantage is that if the function $Q(\theta)$ is irregular, the first-step grid may not bracket $\hat{\theta}$, which thus would be missed.
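A minimal sketch of the one-step grid search in Python (the criterion function here is an arbitrary illustration):

```python
import numpy as np

def grid_search(Q, a, b, eps):
    """One-step grid search for the argmin of Q on [a, b] with grid spacing roughly eps."""
    G = int(np.ceil((b - a) / eps))
    grid = a + np.arange(G + 1) * (b - a) / G     # theta(j), j = 0, ..., G
    values = np.array([Q(t) for t in grid])
    return grid[values.argmin()]                  # gridpoint estimate of theta_hat

# Illustration: minimize a simple one-dimensional criterion
Q = lambda theta: (theta - 1.3) ** 2 + 0.1 * np.cos(5 * theta)
print(grid_search(Q, a=-2.0, b=4.0, eps=0.001))
```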

E.2 Gradient Methods


Gradient Methods are iterative methods which produce a sequence i : i = 1; 2; ::: which are designed to
converge to ^: All require the choice of a starting value 1 ; and all require the computation of the gradient
of Q( )
@
g( ) = Q( )
@
and some require the Hessian
@2
H( ) = Q( ):
@ @ 0

If the functions $g(\theta)$ and $H(\theta)$ are not analytically available, they can be calculated numerically. Take the $j$'th element of $g(\theta)$. Let $\delta_j$ be the $j$'th unit vector (zeros everywhere except for a one in the $j$'th row). Then for $\varepsilon$ small
$$g_j(\theta) \simeq \frac{Q(\theta + \delta_j \varepsilon) - Q(\theta)}{\varepsilon}.$$
Similarly, the $jk$'th element of the Hessian can be approximated by
$$h_{jk}(\theta) \simeq \frac{Q(\theta + \delta_j \varepsilon + \delta_k \varepsilon) - Q(\theta + \delta_k \varepsilon) - Q(\theta + \delta_j \varepsilon) + Q(\theta)}{\varepsilon^2}.$$
In many cases, numerical derivatives can work well but can be computationally costly relative to analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
Most gradient methods are a variant of Newton's method, which is based on a quadratic approximation. By a Taylor's expansion for $\theta$ close to $\hat{\theta}$,
$$0 = g(\hat{\theta}) \simeq g(\theta) + H(\theta)\left(\hat{\theta} - \theta\right)$$
which implies
$$\hat{\theta} = \theta - H(\theta)^{-1} g(\theta).$$
This suggests the iteration rule
$$\theta_{i+1} = \theta_i - H(\theta_i)^{-1} g(\theta_i).$$
One problem with Newton's method is that it will send the iterations in the wrong direction if $H(\theta_i)$ is not positive definite. One modification to prevent this possibility is quadratic hill-climbing, which sets
$$\theta_{i+1} = \theta_i - \left(H(\theta_i) + \alpha_i I_m\right)^{-1} g(\theta_i)$$
where $\alpha_i$ is set just above the smallest eigenvalue of $H(\theta_i)$ if $H(\theta_i)$ is not positive definite.
Another productive modification is to add a scalar steplength $\lambda_i$. In this case the iteration rule takes the form
$$\theta_{i+1} = \theta_i - D_i g_i \lambda_i \qquad \text{(E.2)}$$
where $g_i = g(\theta_i)$ and $D_i = H(\theta_i)^{-1}$ for Newton's method and $D_i = \left(H(\theta_i) + \alpha_i I_m\right)^{-1}$ for quadratic hill-climbing.
Allowing the steplength to be a free parameter allows for a line search, a one-dimensional optimization. To pick $\lambda_i$, write the criterion function as a function of $\lambda$,
$$Q(\lambda) = Q\left(\theta_i - D_i g_i \lambda\right),$$
a one-dimensional optimization problem. There are two common methods to perform a line search. A quadratic approximation evaluates the first and second derivatives of $Q(\lambda)$ with respect to $\lambda$, and picks $\lambda_i$ as the value minimizing this approximation. The half-step method considers the sequence $\lambda = 1, 1/2, 1/4, 1/8, \ldots$. Each value in the sequence is considered and the criterion $Q(\theta_i - D_i g_i \lambda)$ evaluated. If the criterion has improved over $Q(\theta_i)$, use this value, otherwise move to the next element in the sequence.
Newton's method does not perform well if $Q(\theta)$ is irregular, and it can be quite computationally costly if $H(\theta)$ is not analytically available. These problems have motivated alternative choices for the weight matrix $D_i$. These methods are called Quasi-Newton methods. Two popular methods are due to Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS).
Let
$$\Delta g_i = g_i - g_{i-1}$$
$$\Delta \theta_i = \theta_i - \theta_{i-1}.$$
The DFP method sets
$$D_i = D_{i-1} + \frac{\Delta\theta_i \Delta\theta_i'}{\Delta\theta_i' \Delta g_i} - \frac{D_{i-1} \Delta g_i \Delta g_i' D_{i-1}}{\Delta g_i' D_{i-1} \Delta g_i}.$$
The BFGS method sets
$$D_i = D_{i-1} + \frac{\Delta\theta_i \Delta\theta_i'}{\Delta\theta_i' \Delta g_i} + \frac{\Delta\theta_i \Delta\theta_i'}{\left(\Delta\theta_i' \Delta g_i\right)^2}\, \Delta g_i' D_{i-1} \Delta g_i - \frac{D_{i-1} \Delta g_i \Delta\theta_i'}{\Delta\theta_i' \Delta g_i} - \frac{\Delta\theta_i \Delta g_i' D_{i-1}}{\Delta\theta_i' \Delta g_i}.$$
For any of the gradient methods, the iterations continue until the sequence has converged in some sense. This can be defined by examining whether $|\theta_i - \theta_{i-1}|$, $|Q(\theta_i) - Q(\theta_{i-1})|$ or $|g(\theta_i)|$ has become small.
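A minimal sketch of the Newton iteration with numerical gradient and Hessian, applied to a simple two-parameter criterion (illustrative code; in practice one would add a line search or a quasi-Newton update as described above):

```python
import numpy as np

def num_grad(Q, theta, eps=1e-5):
    g = np.zeros_like(theta)
    for j in range(theta.size):
        d = np.zeros_like(theta); d[j] = eps
        g[j] = (Q(theta + d) - Q(theta)) / eps          # g_j(theta)
    return g

def num_hess(Q, theta, eps=1e-4):
    m = theta.size
    H = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            dj = np.zeros(m); dj[j] = eps
            dk = np.zeros(m); dk[k] = eps
            H[j, k] = (Q(theta + dj + dk) - Q(theta + dk)
                       - Q(theta + dj) + Q(theta)) / eps**2
    return H

def newton(Q, theta, steps=20):
    for _ in range(steps):
        theta = theta - np.linalg.solve(num_hess(Q, theta), num_grad(Q, theta))
    return theta

Q = lambda t: (t[0] - 1.0) ** 2 + 2.0 * (t[1] + 0.5) ** 2 + t[0] * t[1]
print(newton(Q, np.array([5.0, 5.0])))   # converges to the minimizer of Q
```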

E.3 Derivative-Free Methods
All gradient methods can be quite poor in locating the global minimum when $Q(\theta)$ has several local minima. Furthermore, the methods are not well defined when $Q(\theta)$ is non-differentiable. In these cases, alternative optimization methods are required. One example is the simplex method of Nelder-Mead (1965).
A more recent innovation is the method of simulated annealing (SA). For a review see Goffe, Ferrier, and Rogers (1994). The SA method is a sophisticated random search. Like the gradient methods, it relies
on an iterative sequence. At each iteration, a random variable is drawn and added to the current value of
the parameter. If the resulting criterion is decreased, this new value is accepted. If the criterion is increased,
it may still be accepted depending on the extent of the increase and another randomization. The latter
property is needed to keep the algorithm from selecting a local minimum. As the iterations continue, the
variance of the random innovations is shrunk. The SA algorithm stops when a large number of iterations is
unable to improve the criterion. The SA method has been found to be successful at locating global minima.
The downside is that it can take considerable computer time to execute.
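A minimal sketch of the simulated annealing idea described above (a bare-bones illustration, not the refined algorithm of Goffe, Ferrier, and Rogers; all names and tuning constants are assumptions):

```python
import numpy as np

def simulated_annealing(Q, theta, scale=1.0, temp=1.0, iters=20_000, seed=0):
    rng = np.random.default_rng(seed)
    best = theta.copy()
    for _ in range(iters):
        candidate = theta + scale * rng.normal(size=theta.shape)   # random innovation
        change = Q(candidate) - Q(theta)
        # accept if the criterion decreases, or with a probability that shrinks
        # with the size of the increase and the current temperature
        if change < 0 or rng.uniform() < np.exp(-change / temp):
            theta = candidate
        if Q(theta) < Q(best):
            best = theta.copy()
        scale *= 0.9995    # shrink the variance of the innovations
        temp *= 0.9995     # cool the acceptance probability
    return best

# Illustration: a criterion with many local minima
Q = lambda t: np.sum(t ** 2) + 2.0 * np.sum(np.sin(3.0 * t) ** 2)
print(simulated_annealing(Q, np.array([4.0, -3.0])))   # should end near the global minimum at 0
```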

Bibliography

[1] Aitken, A.C. (1935): \On least squares and linear combinations of observations," Proceedings of the
Royal Statistical Society, 55, 42-48.
[2] Akaike, H. (1973): \Information theory and an extension of the maximum likelihood principle." In B.
Petroc and F. Csake, eds., Second International Symposium on Information Theory.
[3] Anderson, T.W. and H. Rubin (1949): \Estimation of the parameters of a single equation in a complete
system of stochastic equations," The Annals of Mathematical Statistics, 20, 46-63.
[4] Andrews, D.W.K. (1988): \Laws of large numbers for dependent non-identically distributed random
variables,' Econometric Theory, 4, 458-467.
[5] Andrews, D.W.K. (1991), \Asymptotic normality of series estimators for nonparameric and semipara-
metric regression models," Econometrica, 59, 307-345.
[6] Andrews, D.W.K. (1993), \Tests for parameter instability and structural change with unknown change
point," Econometrica, 61, 821-856.
[7] Andrews, D.W.K. and M. Buchinsky: (2000): \A three-step method for choosing the number of boot-
strap replications," Econometrica, 68, 23-51.
[8] Andrews, D.W.K. and W. Ploberger (1994): \Optimal tests when a nuisance parameter is present only
under the alternative," Econometrica, 62, 1383-1414.
[9] Basmann, R. L. (1957): \A generalized classical method of linear estimation of coe cients in a structural
equation," Econometrica, 25, 77-83.
[10] Bekker, P.A. (1994): \Alternative approximations to the distributions of instrumental variable estima-
tors, Econometrica, 62, 657-681.
[11] Billingsley, P. (1968): Convergence of Probability Measures. New York: Wiley.
[12] Billingsley, P. (1979): Probability and Measure. New York: Wiley.
[13] Bose, A. (1988): \Edgeworth correction by bootstrap in autoregressions," Annals of Statistics, 16,
1709-1722.
[14] Breusch, T.S. and A.R. Pagan (1979): \The Lagrange multiplier test and its application to model
speci cation in econometrics," Review of Economic Studies, 47, 239-253.
[15] Brown, B.W. and W.K. Newey (2002): \GMM, e cient bootstrapping, and improved inference ,"
Journal of Business and Economic Statistics.
[16] Carlstein, E. (1986): \The use of subseries methods for estimating the variance of a general statistic
from a stationary time series," Annals of Statistics, 14, 1171-1179.
[17] Chamberlain, G. (1987): \Asymptotic e ciency in estimation with conditional moment restrictions,"
Journal of Econometrics, 34, 305-334.
[18] Choi, I. and P.C.B. Phillips (1992): \Asymptotic and nite sample distribution theory for IV estimators
and tests in partially identi ed structural equations," Journal of Econometrics, 51, 113-150.
[19] Chow, G.C. (1960): \Tests of equality between sets of coe cients in two linear regressions," Economet-
rica, 28, 591-603.

[20] Cragg, John (1992): "Quasi-Aitken estimation for heteroskedasticity of unknown form," Journal of Econometrics, 54, 179-201.
[21] Davidson, J. (1994): Stochastic Limit Theory: An Introduction for Econometricians. Oxford: Oxford University Press.
[22] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge University Press.
[23] Dickey, D.A. and W.A. Fuller (1979): "Distribution of the estimators for autoregressive time series with a unit root," Journal of the American Statistical Association, 74, 427-431.
[24] Donald, S.G. and W.K. Newey (2001): "Choosing the number of instruments," Econometrica, 69, 1161-1191.
[25] Dufour, J.M. (1997): "Some impossibility theorems in econometrics with applications to structural and dynamic models," Econometrica, 65, 1365-1387.
[26] Efron, Bradley (1979): "Bootstrap methods: Another look at the jackknife," Annals of Statistics, 7, 1-26.
[27] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society for Industrial and Applied Mathematics.
[28] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap. New York: Chapman-Hall.
[29] Eicker, F. (1963): "Asymptotic normality and consistency of the least squares estimators for families of linear regressions," Annals of Mathematical Statistics, 34, 447-456.
[30] Engle, R.F. and C.W.J. Granger (1987): "Co-integration and error correction: Representation, estimation and testing," Econometrica, 55, 251-276.
[31] Frisch, R. and F. Waugh (1933): "Partial time regressions as compared with individual trends," Econometrica, 1, 387-401.
[32] Gallant, A.R. and D.W. Nychka (1987): "Seminonparametric maximum likelihood estimation," Econometrica, 55, 363-390.
[33] Gallant, A.R. and H. White (1988): A Unified Theory of Estimation and Inference for Nonlinear Dynamic Models. New York: Basil Blackwell.
[34] Goldberger, Arthur S. (1991): A Course in Econometrics. Cambridge: Harvard University Press.
[35] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): "Global optimization of statistical functions with simulated annealing," Journal of Econometrics, 60, 65-99.
[36] Gauss, K.F. (1809): "Theoria motus corporum coelestium," in Werke, Vol. VII, 240-254.
[37] Granger, C.W.J. (1969): "Investigating causal relations by econometric models and cross-spectral methods," Econometrica, 37, 424-438.
[38] Granger, C.W.J. (1981): "Some properties of time series data and their use in econometric specification," Journal of Econometrics, 16, 121-130.
[39] Granger, C.W.J. and T. Teräsvirta (1993): Modelling Nonlinear Economic Relationships. Oxford: Oxford University Press.
[40] Hall, A.R. (2000): "Covariance matrix estimation and the power of the overidentifying restrictions test," Econometrica, 68, 1517-1527.
[41] Hall, P. (1992): The Bootstrap and Edgeworth Expansion. New York: Springer-Verlag.
[42] Hall, P. (1994): "Methodology and theory for the bootstrap," Handbook of Econometrics, Vol. IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.
[43] Hall, P. and J.L. Horowitz (1996): "Bootstrap critical values for tests based on Generalized-Method-of-Moments estimation," Econometrica, 64, 891-916.

[44] Hahn, J. (1996): "A note on bootstrapping generalized method of moments estimators," Econometric Theory, 12, 187-197.
[45] Hansen, B.E. (1992): "Efficient estimation and testing of cointegrating vectors in the presence of deterministic trends," Journal of Econometrics, 53, 87-121.
[46] Hansen, B.E. (1996): "Inference when a nuisance parameter is not identified under the null hypothesis," Econometrica, 64, 413-430.
[47] Hansen, L.P. (1982): "Large sample properties of generalized method of moments estimators," Econometrica, 50, 1029-1054.
[48] Hansen, L.P., J. Heaton, and A. Yaron (1996): "Finite sample properties of some alternative GMM estimators," Journal of Business and Economic Statistics, 14, 262-280.
[49] Hausman, J.A. (1978): "Specification tests in econometrics," Econometrica, 46, 1251-1271.
[50] Heckman, J. (1979): "Sample selection bias as a specification error," Econometrica, 47, 153-161.
[51] Horowitz, Joel (2001): "The Bootstrap," Handbook of Econometrics, Vol. 5, J.J. Heckman and E.E. Leamer, eds., Elsevier Science, 3159-3228.
[52] Imbens, G.W. (1997): "One step estimators for over-identified generalized method of moments models," Review of Economic Studies, 64, 359-383.
[53] Imbens, G.W., R.H. Spady and P. Johnson (1998): "Information theoretic approaches to inference in moment condition models," Econometrica, 66, 333-357.
[54] Jarque, C.M. and A.K. Bera (1980): "Efficient tests for normality, homoskedasticity and serial independence of regression residuals," Economics Letters, 6, 255-259.
[55] Johansen, S. (1988): "Statistical analysis of cointegrating vectors," Journal of Economic Dynamics and Control, 12, 231-254.
[56] Johansen, S. (1991): "Estimation and hypothesis testing of cointegration vectors in the presence of linear trend," Econometrica, 59, 1551-1580.
[57] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models. Oxford University Press.
[58] Johansen, S. and K. Juselius (1992): "Testing structural hypotheses in a multivariate cointegration analysis of the PPP and the UIP for the UK," Journal of Econometrics, 53, 211-244.
[59] Kitamura, Y. (2001): "Asymptotic optimality and empirical likelihood for testing moment restrictions," Econometrica, 69, 1661-1672.
[60] Kitamura, Y. and M. Stutzer (1997): "An information-theoretic alternative to generalized method of moments," Econometrica, 65, 861-874.
[61] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.
[62] Künsch, H.R. (1989): "The jackknife and the bootstrap for general stationary observations," Annals of Statistics, 17, 1217-1241.
[63] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): "Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root?" Journal of Econometrics, 54, 159-178.
[64] Lafontaine, F. and K.J. White (1986): "Obtaining any Wald statistic you want," Economics Letters, 21, 35-40.
[65] Lovell, M.C. (1963): "Seasonal adjustment of economic time series," Journal of the American Statistical Association, 58, 993-1010.
[66] MacKinnon, J.G. (1990): "Critical values for cointegration," in Engle, R.F. and C.W. Granger (eds.), Long-Run Economic Relationships: Readings in Cointegration. Oxford: Oxford University Press.

[67] MacKinnon, J.G. and H. White (1985): "Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties," Journal of Econometrics, 29, 305-325.
[68] Magnus, J.R. and H. Neudecker (1988): Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley and Sons.
[69] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.
[70] Nelder, J. and R. Mead (1965): "A simplex method for function minimization," Computer Journal, 7, 308-313.
[71] Newey, W.K. and K.D. West (1987): "Hypothesis testing with efficient method of moments estimation," International Economic Review, 28, 777-787.
[72] Owen, Art B. (1988): "Empirical likelihood ratio confidence intervals for a single functional," Biometrika, 75, 237-249.
[73] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.
[74] Phillips, P.C.B. (1989): "Partially identified econometric models," Econometric Theory, 5, 181-240.
[75] Phillips, P.C.B. and S. Ouliaris (1990): "Asymptotic properties of residual based tests for cointegration," Econometrica, 58, 165-193.
[76] Politis, D.N. and J.P. Romano (1994): "The stationary bootstrap," Journal of the American Statistical Association, 89, 1303-1313.
[77] Pötscher, B.M. (1991): "Effects of model selection on inference," Econometric Theory, 7, 163-185.
[78] Qin, J. and J. Lawless (1994): "Empirical likelihood and general estimating equations," The Annals of Statistics, 22, 300-325.
[79] Ramsey, J.B. (1969): "Tests for specification errors in classical linear least-squares regression analysis," Journal of the Royal Statistical Society, Series B, 31, 350-371.
[80] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.
[81] Said, S.E. and D.A. Dickey (1984): "Testing for unit roots in autoregressive-moving average models of unknown order," Biometrika, 71, 599-608.
[82] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. New York: Springer.
[83] Sargan, J.D. (1958): "The estimation of economic relationships using instrumental variables," Econometrica, 26, 393-415.
[84] Sheather, S.J. and M.C. Jones (1991): "A reliable data-based bandwidth selection method for kernel density estimation," Journal of the Royal Statistical Society, Series B, 53, 683-690.
[85] Shin, Y. (1994): "A residual-based test of the null of cointegration against the alternative of no cointegration," Econometric Theory, 10, 91-115.
[86] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chapman and Hall.
[87] Sims, C.A. (1972): "Money, income and causality," American Economic Review, 62, 540-552.
[88] Sims, C.A. (1980): "Macroeconomics and reality," Econometrica, 48, 1-48.
[89] Staiger, D. and J.H. Stock (1997): "Instrumental variables regression with weak instruments," Econometrica, 65, 557-586.
[90] Stock, J.H. (1987): "Asymptotic properties of least squares estimators of cointegrating vectors," Econometrica, 55, 1035-1056.
[91] Stock, J.H. (1991): "Confidence intervals for the largest autoregressive root in U.S. macroeconomic time series," Journal of Monetary Economics, 28, 435-460.

[92] Stock, J.H. and J.H. Wright (2000): "GMM with weak identification," Econometrica, 68, 1055-1096.
[93] Theil, H. (1953): "Repeated least squares applied to complete equation systems," The Hague, Central Planning Bureau, mimeo.
[94] Tobin, J. (1958): "Estimation of relationships for limited dependent variables," Econometrica, 26, 24-36.
[95] Wald, A. (1943): "Tests of statistical hypotheses concerning several parameters when the number of observations is large," Transactions of the American Mathematical Society, 54, 426-482.
[96] Wang, J. and E. Zivot (1998): "Inference on structural parameters in instrumental variables regression with weak instruments," Econometrica, 66, 1389-1404.
[97] White, H. (1980): "A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity," Econometrica, 48, 817-838.
[98] White, H. (1984): Asymptotic Theory for Econometricians. Academic Press.
[99] Zellner, A. (1962): "An efficient method of estimating seemingly unrelated regressions, and tests for aggregation bias," Journal of the American Statistical Association, 57, 348-368.
