ECONOMETRICS

Bruce E. Hansen
© 2000, 2010¹
University of Wisconsin
www.ssc.wisc.edu/~bhansen
This Revision: January 10, 2010
Comments Welcome
¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents
Preface

1 Introduction
1.1 What is Econometrics?
1.2 The Probability Approach to Econometrics
1.3 Econometric Terms and Notation
1.4 Observational Data
1.5 Standard Data Structures
1.6 Sources for Economic Data
1.7 Econometric Software
1.8 Reading the Manuscript

2 Regression and Projection
2.1 Introduction
2.2 Notation
2.3 Conditional Mean
2.4 Regression Error
2.5 Best Predictor
2.6 Conditional Variance
2.7 Homoskedasticity and Heteroskedasticity
2.8 Linear Regression
2.9 Best Linear Predictor
2.10 Regression Coefficients
2.11 Best Linear Approximation
2.12 Normal Regression
2.13 Regression to the Mean
2.14 Reverse Regression
2.15 Limitations of the Best Linear Predictor
2.16 Identification of the Conditional Mean
Exercises

3 The Algebra of Least Squares
3.1 Introduction
3.2 Least Squares Estimator
3.3 Solving for Least Squares
3.4 Least Squares Residuals
3.5 Model in Matrix Notation
3.6 Projection Matrices
3.7 Residual Regression
3.8 Prediction Errors
3.9 Influential Observations
3.10 Measures of Fit
3.11 Normal Regression Model
Exercises

4 Least Squares Regression
4.1 Introduction
4.2 Sampling Distribution
4.3 Mean of Least-Squares Estimator
4.4 Variance of Least Squares Estimator
4.5 Gauss-Markov Theorem
4.6 Residuals
4.7 Estimation of Error Variance
4.8 Covariance Matrix Estimation Under Homoskedasticity
4.9 Covariance Matrix Estimation Under Heteroskedasticity
4.10 Standard Errors
4.11 Multicollinearity
4.12 Omitted Variable Bias
4.13 Normal Regression Model
Exercises

5 Asymptotic Theory
5.1 Introduction
5.2 Weak Law of Large Numbers
5.3 Consistency of Least-Squares Estimation
5.4 Asymptotic Normality
5.5 Consistency of Sample Variance Estimators
5.6 Consistent Covariance Matrix Estimation
5.7 Functions of Parameters
5.8 t statistic
5.9 Confidence Intervals
5.10 Semiparametric Efficiency
5.11 Semiparametric Efficiency in the Projection Model
5.12 Semiparametric Efficiency in the Homoskedastic Regression Model
Exercises

6 Testing
6.1 t tests
6.2 t-ratios
6.3 Wald Tests
6.4 F Tests
6.5 Normal Regression Model
6.6 Problems with Tests of Nonlinear Hypotheses
6.7 Monte Carlo Simulation
6.8 Estimating a Wage Equation
Exercises

7 Additional Regression Topics
7.1 Generalized Least Squares
7.2 Testing for Heteroskedasticity
7.3 Forecast Intervals
7.4 Nonlinear Least Squares
7.5 Least Absolute Deviations
7.6 Quantile Regression
7.7 Testing for Omitted Nonlinearity
7.8 Irrelevant Variables
7.9 Model Selection
Exercises

8 The Bootstrap
8.1 Definition of the Bootstrap
8.2 The Empirical Distribution Function
8.3 Nonparametric Bootstrap
8.4 Bootstrap Estimation of Bias and Variance
8.5 Percentile Intervals
8.6 Percentile-t Equal-Tailed Interval
8.7 Symmetric Percentile-t Intervals
8.8 Asymptotic Expansions
8.9 One-Sided Tests
8.10 Symmetric Two-Sided Tests
8.11 Percentile Confidence Intervals
8.12 Bootstrap Methods for Regression Models
Exercises

9 Generalized Method of Moments
9.1 Overidentified Linear Model
9.2 GMM Estimator
9.3 Distribution of GMM Estimator
9.4 Estimation of the Efficient Weight Matrix
9.5 GMM: The General Case
9.6 Over-Identification Test
9.7 Hypothesis Testing: The Distance Statistic
9.8 Conditional Moment Restrictions
9.9 Bootstrap GMM Inference
Exercises

10 Empirical Likelihood
10.1 Non-Parametric Likelihood
10.2 Asymptotic Distribution of EL Estimator
10.3 Overidentifying Restrictions
10.4 Testing
10.5 Numerical Computation

11 Endogeneity
11.1 Instrumental Variables
11.2 Reduced Form
11.3 Identification
11.4 Estimation
11.5 Special Cases: IV and 2SLS
11.6 Bekker Asymptotics
11.7 Identification Failure
Exercises

12 Univariate Time Series
12.1 Stationarity and Ergodicity
12.2 Autoregressions
12.3 Stationarity of AR(1) Process
12.4 Lag Operator
12.5 Stationarity of AR(k)
12.6 Estimation
12.7 Asymptotic Distribution
12.8 Bootstrap for Autoregressions
12.9 Trend Stationarity
12.10 Testing for Omitted Serial Correlation
12.11 Model Selection
12.12 Autoregressive Unit Roots

13 Multivariate Time Series
13.1 Vector Autoregressions (VARs)
13.2 Estimation
13.3 Restricted VARs
13.4 Single Equation from a VAR
13.5 Testing for Omitted Serial Correlation
13.6 Selection of Lag Length in a VAR
13.7 Granger Causality
13.8 Cointegration
13.9 Cointegrated VARs

14 Limited Dependent Variables
14.1 Binary Choice
14.2 Count Data
14.3 Censored Data
14.4 Sample Selection

15 Panel Data
15.1 Individual-Effects Model
15.2 Fixed Effects
15.3 Dynamic Panel Regression

16 Nonparametrics
16.1 Kernel Density Estimation
16.2 Asymptotic MSE for Kernel Estimates

A Matrix Algebra
A.1 Notation
A.2 Matrix Addition
A.3 Matrix Multiplication
A.4 Trace
A.5 Rank and Inverse
A.6 Determinant
A.7 Eigenvalues
A.8 Positive Definiteness
A.9 Matrix Calculus
A.10 Kronecker Products and the Vec Operator
A.11 Vector and Matrix Norms

B Probability
B.1 Foundations
B.2 Random Variables
B.3 Expectation
B.4 Gamma Function
B.5 Common Distributions
B.6 Multivariate Random Variables
B.7 Conditional Distributions and Expectation
B.8 Transformations
B.9 Normal and Related Distributions

C Asymptotic Theory
C.1 Inequalities
C.2 Convergence in Distribution
C.3 Asymptotic Transformations

D Maximum Likelihood

E Numerical Optimization
E.1 Grid Search
E.2 Gradient Methods
E.3 Derivative-Free Methods
Preface
This book is intended to serve as the textbook for a first-year graduate course in econometrics.
It can be used as a stand-alone text, or be used as a supplement to another text.
Students are assumed to have an understanding of multivariate calculus, probability theory,
linear algebra, and mathematical statistics. A prior course in undergraduate econometrics would
be helpful, but not required.
For reference, some of the basic tools of matrix algebra, probability, and statistics are reviewed
in the Appendix.
For students wishing to deepen their knowledge of matrix algebra in relation to their study of
econometrics, I recommend Matrix Algebra by Abadir and Magnus (2005).
An excellent introduction to probability and statistics is Statistical Inference by Casella and
Berger (2002). For those wanting a deeper foundation in probability, I recommend Ash (1972)
or Billingsley (1995). For more advanced statistical theory, I recommend Lehmann and Casella
(1998), van der Vaart (1998), Shao (2003), and Lehmann and Romano (2005).
For further study in econometrics beyond this text, I recommend Davidson (1994) for asymp-
totic theory, Hamilton (1994) for time-series methods, Wooldridge (2002) for panel data and discrete
response models, and Li and Racine (2007) for nonparametrics and semiparametric econometrics.
Beyond these texts, the Handbook of Econometrics series provides advanced summaries of contem-
porary econometric methods and theory.
As this is a manuscript in progress, some parts are quite incomplete, in particular the later
sections of the manuscript. Hopefully one day these sections will be fleshed out and completed in
more detail.
Chapter 1
Introduction
1.1 What is Econometrics?
The term “econometrics” is believed to have been crafted by Ragnar Frisch (1895-1973) of
Norway, one of the three principal founders of the Econometric Society, first editor of the journal Econometrica, and co-winner of the first Nobel Memorial Prize in Economic Sciences in 1969. It is therefore fitting that we turn to Frisch's own words in the introduction to the first issue of
Econometrica for an explanation of the discipline.
A word of explanation regarding the term econometrics may be in order. Its definition is implied in the statement of the scope of the [Econometric] Society, in Section I of the Constitution, which reads: "The Econometric Society is an international society for the advancement of economic theory in its relation to statistics and mathematics.... Its main object shall be to promote studies that aim at a unification of the theoretical-quantitative and the empirical-quantitative approach to economic problems...."

But there are several aspects of the quantitative approach to economics, and no single one of these aspects, taken by itself, should be confounded with econometrics. Thus, econometrics is by no means the same as economic statistics. Nor is it identical with what we call general economic theory, although a considerable portion of this theory has a definitely quantitative character. Nor should econometrics be taken as synonymous with the application of mathematics to economics. Experience has shown that each of these three view-points, that of statistics, economic theory, and mathematics, is a necessary, but not by itself a sufficient, condition for a real understanding of the quantitative relations in modern economic life. It is the unification of all three that is powerful. And it is this unification that constitutes econometrics.
Ragnar Frisch, Econometrica, (1933), 1, pp. 1-2.
This definition remains valid today, although some terms have evolved somewhat in their usage. Today, we would say that econometrics is the unified study of economic models, mathematical statistics, and economic data.
Within the field of econometrics there are sub-divisions and specializations. Econometric theory
concerns the development of tools and methods, and the study of the properties of econometric
methods. Applied econometrics is a term describing the development of quantitative economic
models and the application of econometric methods to these models using economic data.
1.2 The Probability Approach to Econometrics
The unifying methodology of modern econometrics was articulated by Trygve Haavelmo (1911-
1999) of Norway, winner of the 1989 Nobel Memorial Prize in Economic Sciences, in his seminal
paper “The probability approach in econometrics”, Econometrica (1944). Haavelmo argued that
quantitative economic models must necessarily be probability models (by which today we would
mean stochastic). Deterministic models are blatantly inconsistent with observed economic quantities, and it is incoherent to apply deterministic models to non-deterministic data. Economic
models should be explicitly designed to incorporate randomness; stochastic errors should not be
simply added to deterministic models to make them random. Once we acknowledge that an eco-
nomic model is a probability model, it follows naturally that the best way to quantify, estimate,
and conduct inferences about the economy is through the powerful theory of mathematical statis-
tics. The appropriate method for a quantitative economic analysis follows from the probabilistic
construction of the economic model.
Haavelmo’s probability approach was quickly embraced by the economics profession. Today no
quantitative work in economics shuns its fundamental vision.
While all economists embrace the probability approach, there has been some evolution in its
implementation.
The structural approach is the closest to Haavelmo’s original idea. A probabilistic economic
model is specified, and the quantitative analysis performed under the assumption that the economic model is correctly specified. Researchers often describe this as "taking their model seriously." The
structural approach typically leads to likelihood-based analysis, including maximum likelihood and
Bayesian estimation.
A criticism of the structural approach is that it is misleading to treat an economic model
as correctly specified. Rather, it is more accurate to view a model as a useful abstraction or
approximation. In this case, how should we interpret structural econometric analysis? The quasi-
structural approach to inference views a structural economic model as an approximation rather
than the truth. This theory has led to the concepts of the pseudo-true value (the parameter value
defined by the estimation problem), the quasi-likelihood function, quasi-MLE, and quasi-likelihood
inference.
Closely related is the semiparametric approach. A probabilistic economic model is partially
specified but some features are left unspecified. This approach typically leads to estimation methods
such as least-squares and the Generalized Method of Moments. The semiparametric approach
dominates contemporary econometrics, and is the main focus of this textbook.
Another branch of quantitative structural economics is the calibration approach. Similar
to the quasi-structural approach, the calibration approach interprets structural models as approx-
imations and hence inherently false. The difference is that the calibrationist literature rejects
mathematical statistics as inappropriate for approximate models, and instead selects parameters
by matching model and data moments using non-statistical ad hoc¹ methods.
¹ Ad hoc means "for this purpose" – a method designed for a specific problem – and not based on a generalizable principle.
1.3 Econometric Terms and Notation
In a typical application, an econometrician has a set of repeated measurements on a set of vari-
ables. For example, in a labor application the variables could include weekly earnings, educational
attainment, age, and other descriptive characteristics. We call this information the data, dataset,
or sample.
We use the term observations to refer to the distinct repeated measurements on the variables.
An individual observation often corresponds to a specific economic unit, such as a person, household, corporation, firm, organization, country, state, city or other geographical region. An individual
observation could also be a measurement at a point in time, such as quarterly GDP or a daily
interest rate.
Economists typically denote variables by the italicized roman characters $y$, $x$, and/or $z$. The convention in econometrics is to use the character $y$ to denote the variable to be explained, while the characters $x$ and $z$ are used to denote the conditioning (explaining) variables.
Following mathematical convention, real numbers (elements of the real line $\mathbb{R}$) are written using lower case italics such as $y$, and vectors (elements of $\mathbb{R}^k$) by lower case bold italics such as $\boldsymbol{x}$, e.g.
$$
\boldsymbol{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.
$$
Upper case bold italics such as $\boldsymbol{A}$ will be used for matrices.
We typically denote the number of observations by the natural number $n$, and subscript the variables by the index $i$ to denote the individual observation, e.g. $y_i$, $\boldsymbol{x}_i$ and $\boldsymbol{z}_i$. In some contexts we use indices other than $i$, such as in time-series applications where the index $t$ is common, and in panel studies we typically use the double index $it$ to refer to individual $i$ at a time period $t$.
The $i$'th observation is the set $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$.
It is proper mathematical practice to use upper case $X$ for random variables and lower case $x$ for realizations or specific values. This practice is not commonly followed in econometrics because instead we use upper case to denote matrices. Thus the notation $y_i$ will in some places refer to a random variable, and in other places a specific realization. Hopefully there will be no confusion as the use should be evident from the context.
As we mentioned before, ideally each observation consists of a set of measurements on the
list of variables. In practice it is common to find that some variables are not measured for some
observations, and in these cases we describe these variables or observations as unobserved or
missing.
We typically use Greek letters such as $\beta$, $\theta$ and $\sigma^2$ to denote unknown parameters of an econometric model, and will use boldface, e.g. $\boldsymbol{\beta}$ or $\boldsymbol{\theta}$, when these are vector-valued. Estimates are typically denoted by putting a hat "^", tilde "~" or bar "-" over the corresponding letter, e.g. $\hat{\beta}$ and $\tilde{\beta}$ are estimates of $\beta$.
The covariance matrix of an econometric estimator will typically be written using the capital boldface $\boldsymbol{V}$, often with a subscript to denote the estimator, e.g. $\boldsymbol{V}_{\hat{\boldsymbol{\beta}}} = \mathrm{var}\left(\sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right)\right)$ as the covariance matrix for $\sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right)$. Hopefully without causing confusion, we will use the notation $\boldsymbol{V}_{\boldsymbol{\beta}}$ to denote the asymptotic covariance matrix of $\sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right)$ (the variance of the asymptotic distribution). Estimates will be denoted by appending hats or tildes, e.g. $\hat{\boldsymbol{V}}_{\boldsymbol{\beta}}$ is an estimate of $\boldsymbol{V}_{\boldsymbol{\beta}}$.
1.4 Observational Data
A common econometric question is to quantify the impact of one set of variables on another
variable. For example, a concern in labor economics is the returns to schooling – the change in
earnings induced by increasing a worker’s education, holding other variables constant. Another
issue of interest is the earnings gap between men and women.
Ideally, we would use experimental data to answer these questions. To measure the returns to
schooling, an experiment might randomly divide children into groups, mandate different levels of education to the different groups, and then follow the children's wage path after they mature and enter the labor force. The differences between the groups would be direct measurements of the effects of different levels of education. However, experiments such as this would be widely condemned
as immoral! Consequently, we see few non-laboratory experimental data sets in economics.
Instead, most economic data is observational. To continue the above example, through data
collection we can record the level of a person’s education and their wage. With such data we
can measure the joint distribution of these variables, and assess the joint dependence. But from
observational data it is difficult to infer causality, as we are not able to manipulate one variable to see the direct effect on the other. For example, a person's level of education is (at least partially) determined by that person's choices. These choices are likely to be affected by their personal abilities and attitudes towards work. The fact that a person is highly educated suggests a high level of ability, which suggests a high relative wage. This is an alternative explanation for an observed positive correlation between educational levels and wages. High ability individuals do better in school, and therefore choose to attain higher levels of education, and their high ability is the fundamental reason for their high wages. The point is that multiple explanations are consistent with a positive correlation between schooling levels and wages. Knowledge of the joint distribution alone may not be able to distinguish between these explanations.
Most economic data sets are observational, not experimental. This means that
all variables must be treated as random and possibly jointly determined.
This discussion means that it is difficult to infer causality from observational data alone. Causal inference requires identification, and this is based on strong assumptions. We will return to a
discussion of some of these issues in Chapter 11.
1.5 Standard Data Structures
There are three major types of economic data sets: cross-sectional, time-series, and panel. They
are distinguished by the dependence structure across observations.
Cross-sectional data sets have one observation per individual. Surveys are a typical source
for cross-sectional data. In typical applications, the individuals surveyed are persons, households,
firms or other economic agents. In many contemporary econometric cross-section studies the sample size $n$ is quite large. It is conventional to assume that cross-sectional observations are mutually
independent. Most of this text is devoted to the study of cross-section data.
Time-series data are indexed by time. Typical examples include macroeconomic aggregates,
prices and interest rates. This type of data is characterized by serial dependence so the random
sampling assumption is inappropriate. Most aggregate economic data is only available at a low
frequency (annual, quarterly or perhaps monthly) so the sample size can be much smaller than in
typical cross-section studies. The exception is financial data, where data are available at a high frequency (weekly, daily, hourly, or tick-by-tick) so sample sizes can be quite large.
Panel data combines elements of cross-section and time-series. These data sets consist of a set
of individuals (typically persons, households, or corporations) surveyed repeatedly over time. The
common modeling assumption is that the individuals are mutually independent of one another,
but a given individual's observations are mutually dependent. This is a modified random sampling
environment.
Data Structures
• Cross-section
• Time-series
• Panel
Some contemporary econometric applications combine elements of cross-section, time-series,
and panel data modeling. These include models of spatial correlation and clustering.
As we mentioned above, most of this text will be devoted to cross-sectional data under the assumption of mutually independent observations. By mutual independence we mean that the $i$'th observation $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ is independent of the $j$'th observation $(y_j, \boldsymbol{x}_j, \boldsymbol{z}_j)$ for $i \neq j$. (Sometimes the label "independent" is misconstrued. It is a statement about the relationship between observations $i$ and $j$, not a statement about the relationship between $y_i$ and $\boldsymbol{x}_i$ and/or $\boldsymbol{z}_i$.)
Furthermore, if the data is randomly gathered, it is reasonable to model each observation as
a random draw from the same probability distribution. In this case we say that the data are
independent and identically distributed or iid. We call this a random sample. For most of
this text we will assume that our observations come from a random sample.
Definition 1.5.1 The observations $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ are a random sample if they are mutually independent and identically distributed (iid) across $i = 1, \ldots, n$.
In the random sampling framework, we think of an individual observation $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ as a realization from a joint probability distribution $F(y, \boldsymbol{x}, \boldsymbol{z})$, which we call the population. This "population" is infinitely large. This abstraction can be a source of confusion as it does not correspond to a physical population in the real world. The distribution $F$ is unknown, and the goal of statistical inference is to learn about features of $F$ from the sample. The assumption of random sampling provides the mathematical foundation for treating economic statistics with the tools of mathematical statistics.
The random sampling framework was a major intellectual breakthrough of the late 19th century, allowing the application of mathematical statistics to the social sciences. Before this conceptual development, methods from mathematical statistics had not been applied to economic data, as they were viewed as inappropriate. The random sampling framework enabled economic samples to be viewed as homogeneous and random, a necessary precondition for the application of statistical methods.
1.6 Sources for Economic Data
Fortunately for economists, the internet provides a convenient forum for the dissemination of economic data. Many large-scale economic datasets are available without charge from governmental agencies. An excellent starting point is the Resources for Economists Data Links, available at rfe.org. From this site you can find almost every publicly available economic data set. Some specific data sources of interest include
• Bureau of Labor Statistics
• US Census
• Current Population Survey
• Survey of Income and Program Participation
• Panel Study of Income Dynamics
• Federal Reserve System (Board of Governors and regional banks)
• National Bureau of Economic Research
• U.S. Bureau of Economic Analysis
• CompuStat
• International Financial Statistics
Another good source of data is from authors of published empirical studies. Most journals
in economics require authors of published papers to make their datasets generally available. For
example, in its instructions for submission, Econometrica states:
Econometrica has the policy that all empirical, experimental and simulation results must
be replicable. Therefore, authors of accepted papers must submit data sets, programs,
and information on empirical analysis, experiments and simulations that are needed for
replication and some limited sensitivity analysis.
The American Economic Review states:
All data used in analysis must be made available to any researcher for purposes of
replication.
The Journal of Political Economy states:
It is the policy of the Journal of Political Economy to publish papers only if the data
used in the analysis are clearly and precisely documented and are readily available to
any researcher for purposes of replication.
If you are interested in using the data from a published paper, first check the journal's website, as many journals archive data and replication programs online. Second, check the website(s) of the paper's author(s). Most academic economists maintain webpages, and some make available replication files complete with data and programs. If these investigations fail, email the author(s),
politely requesting the data. You may need to be persistent.
As a matter of professional etiquette, all authors absolutely have the obligation to make their
data and programs available. Unfortunately, many fail to do so, and typically for poor reasons.
The irony of the situation is that it is typically in the best interests of a scholar to make as much of
their work (including all data and programs) freely available, as this only increases the likelihood
of their work being cited and having an impact.
Keep this in mind as you start your own empirical project. Remember that as part of your end
product, you will need (and want) to provide all data and programs to the community of scholars.
The greatest form of flattery is to learn that another scholar has read your paper, wants to extend
your work, or wants to use your empirical methods. In addition, public openness provides a healthy
incentive for transparency and integrity in empirical analysis.
1.7 Econometric Software
Economists use a variety of econometric, statistical, and programming software.
STATA (www.stata.com) is a powerful statistical program with a broad set of pre-programmed
econometric and statistical tools. It is quite popular among economists, and is continuously being
updated with new methods. It is an excellent package for most econometric analysis, but is limited
when you want to use new or less-common econometric methods which have not yet been programmed.
GAUSS (www.aptech.com), MATLAB (www.mathworks.com), and Ox (www.oxmetrics.net)
are high-level matrix programming languages with a wide variety of built-in statistical functions.
Many econometric methods have been programmed in these languages and are available on the web.
The advantage of these packages is that you are in complete control of your analysis, and it is
easier to program new methods than in STATA. Some disadvantages are that you have to do
much of the programming yourself, programming complicated procedures takes significant time, and programming errors are hard to prevent and difficult to detect and eliminate.
R (www.r-project.org) is an integrated suite of statistical and graphical software that is flexible,
open source, and best of all, free!
For highly-intensive computational tasks, some economists write their programs in a standard
programming language such as Fortran or C. This can lead to major gains in computational speed,
at the cost of increased time in programming and debugging.
As these different packages have distinct advantages, many empirical economists end up using
more than one package. As a student of econometrics, you will learn at least one of these packages,
and probably more than one.
1.8 Reading the Manuscript
Chapters 2 through 7 deal with the core linear regression and projection models. Chapter 8
introduces the bootstrap. Chapters 9 through 11 deal with the Generalized Method of Moments,
empirical likelihood and endogeneity. Chapters 12 and 13 cover time series, and Chapters 14, 15
and 16 cover limited dependent variables, panel data, and nonparametrics. Reviews of matrix
algebra, probability theory, asymptotic theory, maximum likelihood, and numerical optimization
can be found in the appendix.
Chapter 2
Regression and Projection
2.1 Introduction
The most commonly applied econometric tool is least-squares estimation, also known as regres-
sion. As we will see, least-squares is a tool to estimate an approximate conditional mean of one
variable (the dependent variable) given another set of variables (the regressors, conditioning
variables, or covariates).
In this chapter we abstract from estimation, and focus on the probabilistic foundation of the
regression model and its projection approximation.
2.2 Notation
We let $y$ denote the dependent variable and let $(x_1, x_2, \ldots, x_k)$ denote the $k$ regressors. Throughout this section we maintain the assumption that the variables are stochastic.

Assumption 2.2.1 $(y, x_1, x_2, \ldots, x_k)$ is a random vector with a joint probability distribution such that

1. $\mathbb{E}y^2 < \infty$

2. $\mathbb{E}x_j^2 < \infty$ for $j = 1, \ldots, k$.
The finite second moment conditions imposed in Assumption 2.2.1.1 and 2.2.1.2 imply that the variables have finite means and variances.
It is convenient to write the set of regressors as a vector in $\mathbb{R}^k$:
$$
\boldsymbol{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \qquad (2.1)
$$
For most of our analysis it is unimportant whether the regressors $\boldsymbol{x}$ come from continuous
or discrete distributions. As an example of a discrete variable, many regressors in econometric
applications are binary, taking on only the values 0 and 1, and are called dummy variables.
For some purposes, the same is true about the dependent variable – it could be continuous
or discrete. But when the dependent variable is discrete we typically use specific models and
techniques built for this purpose (see Chapter 14).
2.3 Conditional Mean
To study how the distribution of $y$ varies with the variables $\boldsymbol{x}$ in the population, we start with $f(y \mid \boldsymbol{x})$, the conditional density of $y$ given $\boldsymbol{x}$.
Figure 2.1: Wage Densities for White College Grads with 10-15 Years Work Experience
To illustrate, Figure 2.1 displays the density¹ of hourly wages for men and women, from the population of white non-military wage earners in the U.S. with a college degree and 10-15 years of potential work experience. These are conditional density functions – the density of hourly wages conditional on race, gender, education and experience. The two density curves show the effect of gender on the distribution of wages, holding the other variables constant.
While it is easy to observe that the two densities are unequal, it is useful to have numerical measures of the difference. An important summary measure is the conditional mean²
$$
m(\boldsymbol{x}) = \mathbb{E}(y \mid \boldsymbol{x}) = \int_{-\infty}^{\infty} y f(y \mid \boldsymbol{x}) \, dy. \qquad (2.2)
$$
The function $m(\boldsymbol{x})$ varies with the vector $\boldsymbol{x}$ and is thus a function from $\mathbb{R}^k$ to $\mathbb{R}$. The conditional mean $m(\boldsymbol{x})$ is sometimes called the regression function. In general, $m(\boldsymbol{x})$ can have arbitrary shape, although in some cases an economic model may dictate a specific shape restriction (such as monotonicity) or a specific functional form (such as linearity). The regression function $m(\boldsymbol{x})$ is defined for values of $\boldsymbol{x}$ in the support³ of $\boldsymbol{x}$. Thus when $\boldsymbol{x}$ has a discrete distribution then $m(\boldsymbol{x})$ is defined for those values of $\boldsymbol{x}$ with positive probability. When $\boldsymbol{x}$ has a continuous distribution with density $f_{\boldsymbol{x}}(\boldsymbol{x})$ then $m(\boldsymbol{x})$ is defined for those values of $\boldsymbol{x}$ for which $f_{\boldsymbol{x}}(\boldsymbol{x}) > 0$.
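The conditional mean can be illustrated with a small computation. The sketch below is not from the text; it simulates hypothetical data (the data-generating process and variable names are assumptions chosen for illustration) and estimates $\mathbb{E}(y \mid x)$ by averaging $y$ within each value of a discrete regressor, which is exactly the cell-by-cell interpretation described above.

```python
# A minimal sketch (hypothetical simulated data, not the CPS sample of Figure 2.1):
# for a discrete regressor, estimate m(x) = E(y | x) by the sample mean of y within each cell of x.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
education = rng.integers(10, 21, size=n)                           # discrete regressor x (assumed)
log_wage = 1.3 + 0.10 * education + rng.normal(scale=0.5, size=n)  # assumed conditional mean 1.3 + 0.10*x

for v in range(10, 21):
    cell = log_wage[education == v]
    print(v, round(cell.mean(), 3))   # each cell mean is close to 1.3 + 0.10*v
```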
In the example presented in Figure 2.1, the mean wage for men is $27.22, and that for women
is $20.73. These are indicated in Figure 2.1 by the arrows drawn to the x-axis. These values
are the conditional means of U.S. wages in 2004 (conditional on gender, and conditional for white
non-military wage earners with a college degree and 10-15 years of work experience).
Take a closer look at the density functions displayed in Figure 2.1. You can see that the right
tail of the density is much thicker than the left tail. These are asymmetric (skewed) densities,
¹ These are nonparametric density estimates using a normal kernel with the bandwidth selected by cross-validation. See Chapter 16. The data are from the 2004 Current Population Survey.
² The conditional mean exists if $\mathbb{E}|y| < \infty$. For a rigorous definition see Section 2.16.
³ The support of a random vector $\boldsymbol{x}$ is the closed set of points for which its distribution $F(\boldsymbol{x})$ is increasing in all elements of $\boldsymbol{x}$.
which is a common feature of many economic variables. When a distribution is skewed, the mean
is not necessarily a good summary of the central tendency. In this context it is often convenient to
transform the data by taking the (natural) logarithm⁴. Figure 2.2 shows the density of log hourly wages for the same population, with mean log hourly wages (3.21 and 2.91, respectively) drawn in with the arrows. The difference between the mean log wage of men and women is 0.30, which implies a 30% average wage difference for this population. The difference in the mean log wage is a more robust measure of the typical wage gap than the difference in the untransformed wage means.
For this reason, wage regressions typically use log wages as a dependent variable rather than the
level of wages.
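As a small arithmetic check (added here for illustration, using only the figures reported in the text and in footnote 4 below), the mean log wages correspond to the conditional geometric means, since the geometric mean is $\exp(\mathbb{E}(\log y \mid \boldsymbol{x}))$; the 30% figure uses the standard approximation that a log difference is roughly a proportional difference.

```python
# Illustrative arithmetic using the figures quoted in the text (no new data).
import math

mean_log_men, mean_log_women = 3.21, 2.91
print(math.exp(mean_log_men))     # about 24.8, matching the $24.78 conditional geometric mean (footnote 4)
print(math.exp(mean_log_women))   # about 18.4, matching the $18.36 conditional geometric mean

gap = mean_log_men - mean_log_women   # 0.30 log points
print(math.exp(gap) - 1)              # about 0.35; the text's "30%" uses the small-gap approximation
```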
Figure 2.2: Log Wage Densities for White College Grads with 10-15 Years Work Experience
The comparisons in Figures 2.1 and 2.2 are facilitated by the fact that the control variable
(gender) is binary. When the distribution of the control variable takes on multiple values or
is continuous, then comparisons become more complicated. To illustrate, Figure 2.3 displays a
scatter plot⁵ of log wages against education levels. Assuming for simplicity that this is the true
joint distribution, the solid line displays the conditional expectation of log wages varying with
education. The conditional expectation function is close to linear; the dashed line is a linear
projection approximation which will be discussed in Section 2.9. The main point to be learned
from Figure 2.3 is that the conditional expectation is a useful summary of the central tendency of
the conditional distribution when the control variable takes multiple values. Of particular interest
to graduate students may be the observation that the difference between a B.A. and a Ph.D. degree in mean log hourly wages is 0.36, implying an average 36% difference in wage levels.
As another example, Figure 2.4 displays the conditional mean⁶ of log hourly wages as a function
of labor market experience. The solid line is the conditional mean. We see that the conditional
mean is strongly non-linear and non-monotonic. The main lesson to be learned at this point from
Figure 2.4 is that conditional expectations can be quite non-linear.
⁴ Mathematically, this is equivalent to measuring the central tendency by the conditional geometric mean $\exp(\mathbb{E}(\log y \mid \boldsymbol{x}))$. For example, the conditional geometric means for the densities in Figure 2.1 are $24.78 and $18.36, respectively.
⁵ White non-military male wage earners with 10-15 years of potential work experience.
⁶ In the population of white non-military male wage earners with 12 years of education.
Figure 2.3: Scatter Plot and Conditional Mean of Log Wages Given Education
2.4 Regression Error
The regression error $e$ is defined as the difference between $y$ and its conditional mean (2.2) evaluated at the random vector $\boldsymbol{x}$:
$$
e = y - m(\boldsymbol{x}).
$$
By construction, this yields the formula
$$
y = m(\boldsymbol{x}) + e. \qquad (2.3)
$$
It is useful to understand that the regression error is derived from the joint distribution of $(y, \boldsymbol{x})$, and so its properties are derived from this construction. We now discuss some of these properties.
Theorem 2.4.1 Properties of the regression error $e$.
Under Assumption 2.2.1,

1. $\mathbb{E}(e \mid \boldsymbol{x}) = 0$.

2. $\mathbb{E}(e) = 0$.

3. $\mathbb{E}(h(\boldsymbol{x})e) = 0$ for any function $h(\cdot)$ such that $\mathbb{E}h(\boldsymbol{x})^2 < \infty$.

4. $\mathbb{E}(\boldsymbol{x}e) = \boldsymbol{0}$.
Proof of Theorem 2.4.1.1: By the definition of $e$ and the linearity of conditional expectations,
$$
\begin{aligned}
\mathbb{E}(e \mid \boldsymbol{x}) &= \mathbb{E}((y - m(\boldsymbol{x})) \mid \boldsymbol{x}) \\
&= \mathbb{E}(y \mid \boldsymbol{x}) - \mathbb{E}(m(\boldsymbol{x}) \mid \boldsymbol{x}) \\
&= m(\boldsymbol{x}) - m(\boldsymbol{x}) = 0.
\end{aligned}
$$
Proofs of the remaining parts of Theorem 2.4.1 are left to Exercise 2.1.
Figure 2.4: Log Hourly Wage as a Function of Experience
The equations
$$
y = m(\boldsymbol{x}) + e
$$
$$
\mathbb{E}(e \mid \boldsymbol{x}) = 0
$$
are often stated jointly as the regression framework. It is important to understand that this is a framework, not a model, because no restrictions have been placed on the joint distribution of the data. These equations hold true by definition. A regression model imposes further restrictions on the permissible class of regression functions $m(\boldsymbol{x})$.
The condition $\mathbb{E}(e \mid \boldsymbol{x}) = 0$ is the key implication of the conditional mean model. This equation is sometimes called a conditional mean restriction, since the conditional mean is restricted to equal zero. The property is also sometimes called mean independence, for the conditional mean of $e$ is 0 and thus independent of $\boldsymbol{x}$. It is quite important to understand, however, that it does not imply that the distribution of $e$ is independent of $\boldsymbol{x}$. Sometimes the assumption "$e$ is independent of $\boldsymbol{x}$" is added as a convenient simplification, but it is not a generic feature of regression. Typically and generally, $e$ and $\boldsymbol{x}$ are jointly dependent, even though the conditional mean of $e$ is zero.
As a simple example, suppose that $y = xu$ where $x$ and $u$ are independent and $\mathbb{E}u = 1$. Then $\mathbb{E}(y \mid x) = x$, so the regression equation is $y = x + e$ where $e = x(u - 1)$. Yet $e$ is not independent of $x$, even though $\mathbb{E}(e \mid x) = 0$.
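To make this example concrete, the following sketch (an illustration; the uniform and exponential distributions below are assumptions chosen so that $x$ and $u$ are independent with $\mathbb{E}u = 1$) simulates $y = xu$ and checks numerically that the error $e = x(u - 1)$ has conditional mean near zero over every range of $x$, while its spread clearly varies with $x$.

```python
# Minimal simulation of the example y = x*u with x and u independent and E(u) = 1:
# the regression error e = x*(u - 1) has conditional mean zero but is not independent of x.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.uniform(1.0, 5.0, size=n)       # assumed distribution for x
u = rng.exponential(scale=1.0, size=n)  # E(u) = 1
y = x * u
e = y - x                               # since E(y | x) = x, the regression error is e = x*(u - 1)

for lo, hi in [(1, 2), (2, 3), (3, 4), (4, 5)]:
    cell = e[(x >= lo) & (x < hi)]
    # the conditional mean is near 0 in every cell, but the conditional spread grows with x
    print((lo, hi), round(cell.mean(), 3), round(cell.std(), 3))
```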
2.5 Best Predictor
Given a realized value of $\boldsymbol{x}$, we can view $m(\boldsymbol{x})$ as a predictor or forecast of $y$. The prediction error is $e = y - m(\boldsymbol{x})$, which is random. A non-stochastic measure of the magnitude of the prediction error is the expectation of the squared error, or mean squared error
$$
\mathbb{E}(y - m(\boldsymbol{x}))^2 = \mathbb{E}e^2 = \sigma^2. \qquad (2.4)
$$
The parameter $\sigma^2$ is also known as the variance of the regression error.
It turns out that the conditional mean is a good predictor of $y$ in the sense that it has the lowest mean squared error among all predictors. This holds regardless of the joint distribution of $(y, \boldsymbol{x})$. We state this formally in the following result.
Theorem 2.5.1 Conditional Mean as Best Predictor
Let $m(\boldsymbol{x}) = \mathbb{E}(y \mid \boldsymbol{x})$ be the conditional mean and let $g(\boldsymbol{x})$ be any other predictor of $y$ given $\boldsymbol{x}$. Under Assumption 2.2.1,
$$
\mathbb{E}(y - g(\boldsymbol{x}))^2 \geq \mathbb{E}(y - m(\boldsymbol{x}))^2.
$$

Proof of Theorem 2.5.1: Since $y = m(\boldsymbol{x}) + e$, the mean squared error using $g(\boldsymbol{x})$ is
$$
\begin{aligned}
\mathbb{E}(y - g(\boldsymbol{x}))^2 &= \mathbb{E}(e + m(\boldsymbol{x}) - g(\boldsymbol{x}))^2 \\
&= \mathbb{E}e^2 + 2\mathbb{E}\left(e\left(m(\boldsymbol{x}) - g(\boldsymbol{x})\right)\right) + \mathbb{E}(m(\boldsymbol{x}) - g(\boldsymbol{x}))^2 \\
&= \mathbb{E}e^2 + \mathbb{E}(m(\boldsymbol{x}) - g(\boldsymbol{x}))^2 \\
&\geq \mathbb{E}e^2 \\
&= \mathbb{E}(y - m(\boldsymbol{x}))^2
\end{aligned}
$$
where the third equality uses Theorem 2.4.1.3. The right-hand side after the third equality is minimized by setting $g(\boldsymbol{x}) = m(\boldsymbol{x})$, yielding the final inequality.
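The inequality in Theorem 2.5.1 can also be seen numerically. The sketch below is an illustration under an assumed data-generating process ($y = \sin(x)$ plus noise, so that $m(x) = \sin(x)$); it compares the mean squared prediction error of the conditional mean with that of an arbitrary alternative predictor $g(x) = x$.

```python
# Sketch: compare mean squared prediction errors of the conditional mean m(x)
# against an arbitrary alternative predictor g(x), under an assumed model for (y, x).
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(size=n)
y = np.sin(x) + 0.5 * rng.normal(size=n)   # so m(x) = E(y | x) = sin(x)

m = np.sin(x)          # the conditional mean predictor
g = x                  # some other predictor of y given x

print(np.mean((y - m) ** 2))   # close to 0.25, the variance of the regression error
print(np.mean((y - g) ** 2))   # strictly larger, as Theorem 2.5.1 requires
```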
2.6 Conditional Variance
While the conditional mean is a good measure of the location of a conditional distribution,
it does not provide information about the spread of the distribution. A common measure of the
dispersion is the conditional variance.
Definition 2.6.1 The conditional variance of $y$ given $\boldsymbol{x}$ is
$$
\begin{aligned}
\sigma^2(\boldsymbol{x}) &= \mathrm{var}(y \mid \boldsymbol{x}) \\
&= \mathbb{E}\left(y^2 \mid \boldsymbol{x}\right) - \left(\mathbb{E}(y \mid \boldsymbol{x})\right)^2 \\
&= \mathbb{E}\left(\left(y - \mathbb{E}(y \mid \boldsymbol{x})\right)^2 \mid \boldsymbol{x}\right) \\
&= \mathbb{E}\left(e^2 \mid \boldsymbol{x}\right).
\end{aligned}
$$

Generally, $\sigma^2(\boldsymbol{x})$ is a non-trivial function of $\boldsymbol{x}$ and can take any form subject to the restriction that it is non-negative. The conditional standard deviation is its square root $\sigma(\boldsymbol{x}) = \sqrt{\sigma^2(\boldsymbol{x})}$. One way to think about $\sigma^2(\boldsymbol{x})$ is that it is the conditional mean of $e^2$ given $\boldsymbol{x}$.
As an example of how the conditional variance depends on observables, compare the conditional
wage densities for men and women displayed in Figure 2.1. The difference between the densities is not just a location shift, but is also a difference in spread. Specifically, we can see that the density
for men’s wages is somewhat more spread out than that for women, while the density for women’s
wages is somewhat more peaked. Indeed, the conditional standard deviation for men’s wages is
12.1 and that for women is 10.5. So while men have higher average wages, they are also somewhat
more dispersed.
Many econometric studies focus on the conditional mean $m(\boldsymbol{x})$ and either ignore the conditional variance $\sigma^2(\boldsymbol{x})$, treat it as a constant $\sigma^2(\boldsymbol{x}) = \sigma^2$, or treat it as a nuisance parameter (a parameter not of primary interest). This may be unfortunate as dispersion is relevant to many economic topics, including income and wealth distribution, economic inequality, and price dispersion.
The perverse consequences of a narrow-minded focus on the mean have been parodied in a classic joke:
An economist was standing with one foot in a bucket of boiling water
and the other foot in a bucket of ice. When asked how he felt, he
replied, “On average I feel just fine.”
Clearly, the economist in question ignored variance!
2.7 Homoskedasticity and Heteroskedasticity
An important special case obtains when the conditional variance of the regression error $\sigma^2(\boldsymbol{x})$ is a constant and independent of $\boldsymbol{x}$. This is called homoskedasticity.

Definition 2.7.1 The error is homoskedastic if $\mathbb{E}\left(e^2 \mid \boldsymbol{x}\right) = \sigma^2$ does not depend on $\boldsymbol{x}$.
In the general case where $\sigma^2(\boldsymbol{x})$ depends on $\boldsymbol{x}$ we say that the error $e$ is heteroskedastic.

Definition 2.7.2 The error is heteroskedastic if $\mathbb{E}\left(e^2 \mid \boldsymbol{x}\right) = \sigma^2(\boldsymbol{x})$ depends on $\boldsymbol{x}$.
Even when the error is heteroskedastic we still define the unconditional variance $\sigma^2$ of the error $e$ as in (2.4). It may be helpful to notice that by using iterated expectations the unconditional variance can be written as the expected conditional error variance
$$
\sigma^2 = \mathbb{E}\left(e^2\right) = \mathbb{E}\left(\mathbb{E}\left(e^2 \mid \boldsymbol{x}\right)\right) = \mathbb{E}\left(\sigma^2(\boldsymbol{x})\right).
$$
Thus $\sigma^2$ is well-defined whether or not the error is homoskedastic or heteroskedastic.
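The identity $\sigma^2 = \mathbb{E}(\sigma^2(\boldsymbol{x}))$ can be checked by simulation. In the sketch below (an illustration; the design $e = |x|u$ with $x$ and $u$ standard normal is an assumption), the conditional variance is $\sigma^2(x) = x^2$, so the unconditional error variance should equal $\mathbb{E}(x^2) = 1$.

```python
# Sketch: verify that the unconditional error variance equals the mean of the
# conditional variance, sigma^2 = E(sigma^2(x)), for an assumed heteroskedastic error.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.normal(size=n)
u = rng.normal(size=n)
e = np.abs(x) * u          # E(e | x) = 0 and var(e | x) = x^2: a heteroskedastic error

print(np.mean(e ** 2))     # unconditional variance sigma^2 = E(e^2)
print(np.mean(x ** 2))     # E(sigma^2(x)) = E(x^2); the two agree up to simulation noise
```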
Some older or introductory textbooks describe heteroskedasticity as the case where “the variance of $e$ varies across observations”. This is a poor and confusing definition. It is more constructive to understand that heteroskedasticity is the case where the conditional variance $\sigma^2(\boldsymbol{x})$ depends on the variables $\boldsymbol{x}$. (Once again, recall Figure 2.1 and how the variance of wages varies between men and women.)
Older textbooks also tend to describe homoskedasticity as a component of a correct regression
specification, and describe heteroskedasticity as an exception or deviance. This description has influenced many generations of economists, but it is unfortunately backwards. The correct view
is that heteroskedasticity is generic and “standard”, while homoskedasticity is unusual and excep-
tional. The default in empirical work should be to assume that the errors are heteroskedastic, not
the converse.
In apparent contradiction to the above statement, we will still frequently impose the homoskedasticity assumption when making theoretical investigations into the properties of regression techniques. The reason is that in many cases homoskedasticity greatly simplifies the theoretical calculations, and it is therefore quite advantageous for teaching and learning. It should always be remembered, however, that homoskedasticity is never imposed because it is believed to be a correct feature of an empirical regression, but rather because of its simplicity.
2.8 Linear Regression
An important special case of (2.3) is when the conditional mean function $m(\boldsymbol{x})$ is linear in $\boldsymbol{x}$ (or linear in functions of $\boldsymbol{x}$). In this case we can write the mean equation as
$$
m(\boldsymbol{x}) = \beta_0 + x_1\beta_1 + x_2\beta_2 + \cdots + x_k\beta_k.
$$
Notationally it is convenient to write this as a simple function of the vector $\boldsymbol{x}$. An easy way to do so is to augment the regressor vector $\boldsymbol{x}$ by listing the number “1” as an element. We call this the “constant” and the corresponding coefficient is called the “intercept”. Equivalently, assuming that the first element⁷ of the vector $\boldsymbol{x}$ is the intercept, then $x_1 = 1$. Thus (2.1) has been redefined as the $k \times 1$ vector
$$
\boldsymbol{x} = \begin{pmatrix} 1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \qquad (2.5)
$$
With this redefinition, the mean equation is
$$
m(\boldsymbol{x}) = x_1\beta_1 + x_2\beta_2 + \cdots + x_k\beta_k = \boldsymbol{x}'\boldsymbol{\beta} \qquad (2.6)
$$
where
$$
\boldsymbol{\beta} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \qquad (2.7)
$$
is a $k \times 1$ coefficient vector. This is called the linear regression model.
Linear Regression
$$
y = \boldsymbol{x}'\boldsymbol{\beta} + e
$$
$$
\mathbb{E}(e \mid \boldsymbol{x}) = 0
$$

If in addition the error is homoskedastic, we call this the homoskedastic linear regression model.

Homoskedastic Linear Regression
$$
y = \boldsymbol{x}'\boldsymbol{\beta} + e
$$
$$
\mathbb{E}(e \mid \boldsymbol{x}) = 0
$$
$$
\mathbb{E}\left(e^2 \mid \boldsymbol{x}\right) = \sigma^2
$$
⁷ The order doesn’t matter. It could be any element.
2.9 Best Linear Predictor
While the conditional mean $m(\boldsymbol{x}) = \mathbb{E}(y \mid \boldsymbol{x})$ is the best predictor of $y$ among all functions of $\boldsymbol{x}$, its functional form is typically unknown. In particular, the linear equation of the previous section is empirically unlikely to be accurate. In practice it is more realistic to view the linear specification (2.6) as an approximation. In this section we derive a specific approximation with a simple interpretation.
Theorem 2.5.1 showed that the conditional mean :(i) is the best predictor in the sense that
it has the lowest mean squared error among all predictors. By extension, we can de…ne a linear
approximation to the conditional mean function as the linear function with the lowest mean squared
error among all linear predictors.
To be precise, a linear predictor for n given i is i
t
d for some d ÷ R
I
. The mean squared error
of this predictor is
o(d) = E

n ÷i
t
d

2
.
The best linear predictor of n given i is de…ned by …nding the vector d which minimizes o(d).
De…nition 2.9.1 The Best Linear Predictor of n given i is i
t
d, where d
minimizes the mean squared error
o(d) = E

n ÷i
t
d

2
.
The minimizer
d = argmin
d÷R
k
o(d) (2.8)
is called the Linear Projection Coe¢cient.
The quadratic structure of o(d) means that we can solve explicitly for d. The mean squared
prediction error can be written out as a quadratic function of d :
o(d) = En
2
÷2d
t
E(in) +d
t
E

ii
0

d
The …rst-order condition for minimization (from Appendix A.9) is
0 =
0
0d
o(d) = ÷2E(in) + 2E

ii
t

d. (2.9)
This has a unique solution under the following condition.
Assumption 2.9.1 O = E(ii
t
) is invertible.
The matrix Ois sometimes called the design matrix, as in experimental settings the researcher
is able to control O by manipulating the distribution of the regressors i.
Rewriting (2.9) as
2E(in) = 2E

ii
t

d
dividing by 2, and then inverting the / / matrix E(ii
t
) . we obtain the solution for d.
16
Theorem 2.9.1 Linear Projection Coe¢cient
Under Assumptions 2.2.1 and 2.9.1, the linear projection coe¢cient equals
d =

E

ii
t

÷1
E(in) . (2.10)
It is worth taking the time to understand the notation involved in the expression (2.10). E(ii
t
)
is a / / matrix and E(in) is a / 1 column vector. Therefore, alternative expressions such as
E(æ&)
E(ææ
0
)
or E(in) (E(ii
t
))
÷1
are incoherent and incorrect.
Given the de…nition of d in (2.10), i
t
d is the best linear predictor for n. The projection error
is
c = n ÷i
t
d. (2.11)
The error c from the linear prediction equation is equal to the error from the regression equation
when (and only when) the conditional mean is linear in i. otherwise they are distinct.
Rewriting, we obtain a decomposition of n into linear predictor and error
n = i
t
d +c. (2.12)
This completes the derivation of the model. We call i
t
d the best linear predictor of n given i, or
the linear projection of n onto i. In general we call equation (2.12) the linear projection model.
The following are important properties of the model.
Theorem 2.9.2 Properties of Linear Projection Model
Under Assumptions 2.2.1 and 2.9.1, then (2.11) and (2.12) exist and are unique,
o
2
= E

c
2

< ·. (2.13)
and
E(ic) = 0. (2.14)
A complete proof of Theorem 2.9.1 is presented below.
We have shown that under mild regularity conditions, for any pair (n. i) we can de…ne a linear
equation (2.12) with the properties listed in Theorem 2.9.1. No additional assumptions are required.
Thus the linear model (2.12) exists quite generally. However, it is important not to misinterpret
the generality of this statement. The linear equation (2.12) is de…ned as the best linear predictor.
In contrast, in many economic models the parameter d may be de…ned within the model. In this
case (2.10) may not hold and the implications of Theorem 2.9.1 may be false. These structural
models require alternative estimation methods, and are discussed in Chapter 11.
Linear Projection Model
n = i
t
d +c.
E(ic) = 0
d =

E

ii
t

÷1
E(in)
17
Equation (2.14) is a set of / equations, one for each regressor. In other words, (2.14) is equivalent
to
E(i
;
c) = 0 (2.15)
for , = 1. .... /. As in (2.5), the regressor vector i typically contains a constant, e.g. r
1
= 1. In
this case (2.15) for , = 1 is the same as
E(c) = 0. (2.16)
Thus the projection error has a mean of zero when the regression contains a constant. (When i
does not have a constant, this is not guarenteed. As it is desireable for c to have a zero mean, this
is a good reason to always include a constant in any regression.)
It is also useful to observe that since cov(i
;
. c) = E(i
;
c) ÷ E(i
;
) E(c) . then (2.15)-(2.16)
together imply that the variables i
;
and c are uncorrelated.
Invertibility and Identi…cation
The vector (2.10) exists and is unique as long as the / / matrix O = E(ii
t
) is
invertible. Observe that for any non-zero o ÷ R
I
.
o
t
Oo = E

o
t
ii
t
o

= E

o
t
i

2
_ 0
so O by construction is positive semi-de…nite. It is invertible if and only if it is positive
de…nite, which requires that for all non-zero o. E(o
t
i)
2
0. Equivalently, there cannot
exist a non-zero vector o such that o
t
i = 0 identically. This occurs when redundant
variables are included in i. In order for d to be uniquely de…ned, this situation must be
excluded.
Theorem 2.9.1 shows that the linear projection coe¢cient d is identi…ed (uniquely
determined) under Assumptions 2.2.1 and 2.9.1. The key is invertibility of O. Otherwise,
there is no unique solution to the equation
E

ii
t

d = E(in) . (2.17)
When O is not invertible there are multiple solutions to (2.17), all of which yield an
equivalent best linear predictor i
t
d. In this case the coe¢cient d is not identi…ed as it
does not have a unique value. Even so, the best linear predictor i
t
d still identi…ed. One
solution is to set
d =

E

ii
t

÷
E(in)
where A
÷
denotes the generalized inverse of A (see Appendix A.5).
Proof of Theorem 2.9.1
We …rst show that the moments E(iu) and E(ii
t
) are …nite and well de…ned. First, it is useful
to note that Assumption 2.2.1 implies that
E|i|
2
= E

i
t
i

=
I
¸
;=1
Er
2
;
< ·. (2.18)
Note that for , = 1. .... /. by the Cauchy-Schwarz Inequality (C.3) and Assumption 2.2.1
E[r
;
n[ _

Er
2
;

1/2

En
2

1/2
< ·.
18
Thus the elements in the vector E(iu) are well de…ned and …nite. Next, note that the ,|’th element
of E(ii
t
) is E(r
;
r
|
) . Observe that
E[r
;
r
|
[ _

Ei
2
;

1/2

Ei
2
|

1/2
< ·.
Thus all elements of the matrix E(ii
t
) are …nite.
Equation (2.10) states that d = (E(ii
t
))
÷1
E(in) which is well de…ned since (E(ii
t
))
÷1
exists
under Assumption 2.9.1. It follows that c = n ÷i
t
d as de…ned in (2.11) is also well de…ned.
Note the Schwarz Inequality (A.6) implies (i
t
d)
2
_ |i|
2
|d|
2
and therefore combined with
(2.18) we see that
E

i
t
d

2
_ E|i|
2
|d|
2
< ·. (2.19)
Using Minkowski’s Inequality (C.5), Assumption 2.2.1, and (2.19) we …nd

E

c
2

1/2
=

E

n ÷i
t
d

2

1/2
_

En
2

1/2
+

E

i
t
d

2

1/2
< ·
establishing (2.13).
An application of the Cauchy-Schwarz Inequality (C.3) shows that for any ,
E[r
;
c[ _

Ei
2
;

1/2

Ec
2

1/2
< ·
and therefore the elements in the vector E(ic) are well de…ned and …nite.
Using the de…nitions (2.11) and (2.10), and the matrix properties that AA
÷1
= 1 and 1u = u.
E(ic) = E

i

n ÷i
t
d

= E(in) ÷E

ii
t

E

ii
t

÷1
E(in) = 0
completing the proof.
2.10 Regression Coe¢cients
Sometimes it is useful to separate the intercept from the other regressors, and write the regres-
sion equation in the format
n = c +i
t
d +c (2.20)
where c is the intercept and i does not contain a constant.
Taking expectations of this equation, we …nd
En = Ec +Ei
t
d +Ec
or
j
&
= c +j
t
a
d
where j
&
= En and j
a
= Ei. since E(c) = 0 from (2.16). Rearranging, we …nd
c = j
&
÷j
t
a
d.
Subtracting this equation from (2.20) we …nd
n ÷j
&
= (i ÷j
a
)
t
d +c. (2.21)
19
a linear equation between the centered variables n ÷ j
&
and i ÷ j
a
. (They are centered at their
means, or equivalently are mean-zero random variables.) Because i ÷ j
a
is uncorrelated with c.
(2.21) is also a linear projection, thus by the formula for the linear projection model,
d =

E

(i ÷j
a
) (i ÷j
a
)
t

÷1
E

(i ÷j
a
)

n ÷j
&

= cov (i. i)
÷1
cov (i. n)
a function only of the covariances
8
of i and n.
Theorem 2.10.1 In the linear projection model
n = c +i
t
d +c.
then
c = j
&
÷j
t
a
d (2.22)
and
d = cov (i. i)
÷1
cov (i. n) . (2.23)
2.11 Best Linear Approximation
There are alternative ways we could construct a linear approximation i
t
d to the conditional
mean :(i). In this section we show that one natural approach turns out to yield the same answer
as the best linear predictor.
We start by de…ning the mean-square approximation error of i
t
d to :(i) as the expected
squared di¤erence between i
t
d and the conditional mean :(i)
d(d) = E

:(i) ÷i
t
d

2
. (2.24)
The function d(d) is a measure of the deviation of i
t
d from :(i). If the two functions are identical
then d(d) = 0. otherwise d(d) 0. We can also view the mean-square di¤erence d(d) as a density-
weighted average of the function (:(i) ÷i
t
d)
2
.
We can then de…ne the best linear approximation to the conditional :(i) as the function i
t
d
obtained by selecting d to minimize d(d) :
d = argmin
d÷R
k
d(d). (2.25)
Similar to the best linear predictor we are measuring accuracy by expected squared error. The
di¤erence is that the best linear predictor (2.8) selects d to minimize the expected squared predic-
tion error, while the best linear approximation (2.25) selects d to minimize the expected squared
approximation error.
Despite the di¤erent de…nitions, it turns out that the best linear predictor and the best linear
approximation are identical. By the same steps as in (2.9) plus an application of conditional
expectations we can …nd that
d =

E

ii
t

÷1
E(i:(i)) (2.26)
=

E

ii
t

÷1
E(in) (2.27)
(see Exercise 2.14). Thus (2.25) equals (2.8). We conclude that the de…nition (2.25) can be viewed
as an alternative motivation for the linear projection coe¢cient.
8
The covariance matrix between vectors x and z is cov (x; z) = E

(x Ex) (z Ez)
0

: We call cov (x; x) the
covariance matrix of x:
20
2.12 Normal Regression
Suppose the variables (n. i) are jointly normally distributed. Consider the best linear predictor
of n on i
n = i
t
d +c.
d =

E

ii
t

÷1
E(in) .
Since the error c is a linear transformation of the normal vector (n. i). it follows that (c. i) is
jointly normal, and since they are jointly normal and uncorrelated (since E(ic) = 0) they are also
independent (see Appendix B.9). Independence implies that
E(c [ i) = E(c) = 0
and
E

c
2
[ i

= E

c
2

= o
2
which are properties of a homoskedastic linear conditional regression.
We have shown that when (n. i) are jointly normally, they satisfy a normal linear regression
n = i
t
d +c
where
c ~ ·(0. o
2
)
is independent of i.
This is an alternative (and traditional) motivation for the linear regression model. This moti-
vation has limited merit in econometric applications since economic data is typically non-normal.
2.13 Regression to the Mean
The term regression originated in an in‡uential paper by Francis Galton published in 1886,
where he examined the joint distribution of the stature (height) of parents and children. E¤ectively,
he was estimating the conditional mean of children’s height given their parents height. Galton
discovered that this conditional mean was approximately linear with a slope of 2/3. This implies
that on average a child’s height is more mediocre than his or her parent’s height. Galton called
this phenomenon regression to the mean, and the label regression has stuck to this day to
describe most conditional relationships.
One of Galton’s fundamental insights was to recognize that if the marginal distributions of n
and r are the same (e.g. the heights of children and parents in a stable environment) then the
regression slope in a linear projection is always less than one.
To be more precise, take the simple regression
n = c +r +c (2.28)
where n equals the height of the child and r equals the height of the parent. Assume that n and r
have the same mean, so that j
&
= j
a
= j. Then from (2.22)
c = (1 ÷) j
so we can write the conditional mean of (2.28) as
E(n [ i) = (1 ÷) j +r.
This shows that the expected height of the child is a weighted average of the population average
height j and the parents height r. with the weight equal to the regression slope . When the height
21
distribution is stable across generations, so that var(n) = var(r). then this slope is the simple
correlation of n and r. Using (2.23)
=
cov (i. n)
var(r)
= corr(r. n).
By the properties of correlation (e.g. equation (B.7) in the Appendix), ÷1 _ corr(r. n) _ 1. with
corr(r. n) = 1 only in the degenerate case n = r. Thus if we exclude degeneracy, is strictly less
than 1.
This means that on average a child’s height is more mediocre (closer to the population average)
than the parent’s.
Sir Francis Galton
Sir Francis Galton (1822-1911) of England was one of the leading …gures in late 19th century
statistics. In addition to inventing the concept of regression, he is credited with introducing
the concepts of correlation, the standard deviation, and the bivariate normal distribution.
His work on heredity made a signi…cant intellectual advance by examing the joint distribu-
tions of observables, allowing the application of the tools of mathematical statistics to the
social sciences.
A common error – known as the regression fallacy – is to infer from < 1 that the population
is converging
9
. This is a fallacy because we have shown that under the assumption of constant
(e.g. stable, non-converging) means and variances, the slope coe¢cient must be less than one. It
cannot be anything else. A slope less than one does not imply that the variance of n is less than
than the variance of r.
Another way of seeing this is to examine the conditions for convergence in the context of equation
(2.28). Since r and c are uncorrelated, it follows that
var(n) =
2
var(r) + var(c).
Then var(n) < var(r) if and only if

2
< 1 ÷
var(c)
var(r)
which is not implied by the simple condition [[ < 1.
The regression fallacy arises in related empirical situations. Suppose you sort families into groups
by the heights of the parents, and then plot the average heights of each subsequent generation over
time. If the population is stable, the regression property implies that the plots lines will converge
– children’s height will be more average than their parents. The regression fallacy is to incorrectly
conclude that the population is converging. The message is that such plots are misleading for
inferences about convergence.
The regression fallacy is subtle. It is easy for intelligent economists to succumb to its temptation.
A famous example is The Triumph of Mediocrity in Business by Horace Secrist, published in 1933.
In this book, Secrist carefully and with great detail documented that in a sample of department
stores over 1920-1930, when he divided the stores into groups based on 1920-1921 pro…ts, and
plotted the average pro…ts of these groups for the subsequent 10 years, he found clear and persuasive
evidence for convergence “toward mediocrity”. Of course, there was no discovery – regression to
the mean is a necessary feature of stable distributions.
9
A population is converging if its variance is declining towards zero.
22
2.14 Reverse Regression
Galton noticed another interesting feature of the bivariate distribution. There is nothing special
about a regression of n on r. We can also regress r on n. (In his heredity example this is the best
linear predictor of the height of parents given the height of their children.) This regression takes
the form
r = c
+
+n
+
+c
+
. (2.29)
This is sometimes called the reverse regression. In this equation, the coe¢cients c
+
.
+
and
error c
+
are de…ned by linear projection. In a stable population we …nd that

+
= corr(r. n) =
c
+
= (1 ÷) j = c
which are exactly the same as in the regression of n on r! The intercept and slope have exactly the
same values in the forward and reverse regression!
While this algebraic discovery is quite simple, it is counter-intuitive. Instead, a common yet
mistaken guess for the form of the reverse regression is to take the regression (2.28), divide through
by and rewrite to …nd the equation
r = ÷
c

+n
1

÷
1

c (2.30)
suggesting that the regression of r on n should have a slope coe¢cient of 1´ instead of . and
intercept of -c´ rather than c. What went wrong? Equation (2.30) is perfectly valid, because it
is a simple manipulation of the valid equation (2.28). The trouble is that (2.30) is not a regression
equation. Inverting a regression does not yield a regression. Instead, (2.29) is a valid regression,
not (2.30).
In any event, Galton’s …nding was that when the variables are standardized, the slope in both
regressions (n on r. and r and n) equals the correlation, and both equations exhibit regression to
the mean. It is not a causal relation, but a natural feature of all joint distributions.
2.15 Limitations of the Best Linear Predictor
Let’s compare the linear projection and linear regression models.
From Theorem 2.4.1.4 we know that the regression error has the property E(ic) = 0. Thus a
linear regression is a linear projection. However, the converse is not true as the projection error
does not necessarily satisfy E(c [ i) = 0.
To see this in a simple example, suppose we take a normally distributed random variable
r ~ ·(0. 1) and set n = r
2
. Note that n is a deterministic function of r! Now consider the linear
projection of n on r and an intercept. The intercept and slope may be calculated as

c

=

1 E(r)
E(r) E

r
2

÷1

E(n)
E(rn)

=

1 E(r)
E(r) E

r
2

÷1

E

r
2

E

r
3

=

1
0

Thus the linear projection equation takes the form
n = c +r +c
23
where c = 1, = 0 and c = r
2
÷1. Observe that E(c) = E

r
2

÷1 = 0 and E(rc) = E

r
3

÷E(c) =
0. yet E(c [ r) = r
2
÷1 = 0. In this simple example c is a deterministic function of r. yet c and r
are uncorrelated! The point is that a projection error need not be a regression error.
Return for a moment to the joint distributions displayed in Figures 2.3 and 2.4. In these …gures,
the solid lines are the conditional means and the straight dashed lines are the linear projections.
In Figure 2.3 (the conditional mean of log hourly wages as a function of education) the conditional
mean and linear projection are quite close to one another. In this example the linear predictor is a
close approximation to the conditional mean. However, in Figure 2.4 (the conditional mean of log
hourly wages as a function of labor market experience) the conditional mean is quite nonlinear, so
the linear projection is a poor approximation. It over-predicts wages for young and old workers,
and under-predicts for the rest. Most importantly, it misses the strong downturn in expected wages
for those above 35 years work experience (equivalently, for those over 53 in age).
This defect in the best linear predictor can be partially corrected through a careful selection of
regressors. In the example of Figure 2.4, we can augment the regressor vector i to include both
crjcric:cc and crjcric:cc
2
. The best linear predictor of log wages given these two variables can
be called a quadratic projection, since the resulting function is quadratic in crjcric:cc. Other than
the rede…nition of the regressor vector, there are no changes in our methods or analysis. In Figure
2.4 we display as well the quadratic projection. In this example it is a much better approximation
to the conditional mean than the linear projection.
Figure 2.5: Conditional Mean and Two Linear Projections
Another defect of linear projection is that it is sensitive to the marginal distribution of the
regressors when the conditional mean is non-linear. We illustrate the issue in Figure 2.5 for a
constructed
10
joint distribution of n and r. The solid line is the non-linear conditional mean of
n given r. The data are divided in two – Group 1 and Group 2 – which have di¤erent marginal
distributions for the regressor r. and Group 1 has a lower mean value of r than Group 2. The
separate linear projections of n on r for these two groups are displayed in the Figure by the dashed
lines. These two projections are distinct approximations to the conditional mean. A defect with
linear projection is that it leads to the incorrect conclusion that the e¤ect of r on n is di¤erent for
individuals in the two Groups. This conclusion is incorrect because in fact there is no di¤erence in
the conditional mean function. The apparant di¤erence is a by-product of a linear approximation
10
The x in Group 1 are N(2; 1) and those in Group 2 are N(4; 1); and the conditional distriubtion of y given x is
N(m(x); 1) where m(x) = 2x x
2
=6:
24
to a non-linear mean, combined with di¤erent marginal distributions for the conditioning variables.
2.16 Identi…cation of the Conditional Mean
When a parameter is uniquely determined by the distribution of the observable variables, we
say that the parameter is identi…ed. Typically, identi…cation only holds under a set of restrictions,
and an identi…cation theorem carefully describes a set of such conditions which are su¢cient for
identi…cation. Identi…cation is a necessary pre-condition for estimation.
For example, consider the unconditional mean j = En. It is well de…ned and unique for all
distributions for which E[n[ < ·. Thus the mean j is identi…ed from the distribution of n under
the restriction E[n[ < ·. Unless E[n[ < ·. it is meaningless to attempt to estimate En.
As another example, consider the ratio of means 0 = j
1
´j
2
where j
1
= En
1
and j
2
= En
2
. It
is well de…ned when j
1
and j
2
are both …nite and j
2
= 0. but if j
2
= 0 then 0 is unde…ned. Thus
0 is identi…ed from the distribution of (n
1
. n
2
) under the restrictions E[n
1
[ < ·. E[n
2
[ < ·. and
En
2
= 0. Unless these conditions hold, it is meaningless to estimate 0.
Now consider the conditional mean :(i) = E(n [ i). Under which conditions is :(i) de…ned
and unique? The answer is provided in the following deep result from probability theory, which
establishes the existence of the conditional mean.
Theorem 2.16.1 Existence of the Conditional Mean
If E[n[ < · then there exists a function :(i) such that for all measurable sets A
E(1 (i ÷ A) n) = E(1 (i ÷ A) :(i)) . (2.31)
The function :(i) is almost everywhere unique, in the sense that if /(i) satis…es
(2.31), then there is a set o
+
such that P(o
+
) = 1 and :(i) = /(i) for i ÷ o
+
.
The function :(i) is called the conditional mean and is written :(i) = E(n [ i) .
See, for example, Ash (1972), Theorem 6.3.3.
The function :(i) de…ned by (2.31) specializes to (2.2) when (n. i) have a joint density.
Theorem 2.16.1 shows that the conditional mean function :(i) exists and is almost everywhere
unique, and is thus is identi…ed.
Theorem 2.16.2 Identi…cation of the Conditional Mean
If E[n[ < ·. the conditional mean :(i) = E(n [ i) is identi…ed for i ÷ o
+
where
P(o
+
) = 1.
25
Exercises
Exercise 2.1 Prove parts 2, 3 and 4 of Theorem 2.4.1.
Exercise 2.2 Suppose that the random variables n and r only take the values 0 and 1, and have
the following joint probability distribution
r = 0 r = 1
n = 0 .1 .2
n = 1 .4 .3
Find E(n [ r) . E

n
2
[ r

and var (n [ r) for r = 0 and r = 1.
Exercise 2.3 Show that o
2
(i) is the best predictor of c
2
given i:
(a) Write down the mean-squared error of a predictor /(i) for c
2
.
(b) What does it mean to be predicting c
2
?
(c) Show that o
2
(i) minimizes the mean-squared error and is thus the best predictor.
Exercise 2.4 Use n = :(i) +c to show that
var (n) = var (:(i)) +o
2
Exercise 2.5 Suppose that n is discrete-valued, taking values only on the non-negative integers,
and the conditional distribution of n given i is Poisson:
P(n = , [ i) =
exp(÷i
t
d) (i
t
d)
;
,!
. , = 0. 1. 2. ...
Compute E(n [ i) and var (n [ i) . Does this justify a linear regression model of the form n =
i
t
d +c?
Hint: If P(n = ,) =
exp(÷A)A
j
;!
. then En = ` and var(n) = `.
Exercise 2.6 Let r and n have the joint density 1 (r. n) =
3
2

r
2
+n
2

on 0 _ r _ 1. 0 _ n _ 1.
Compute the coe¢cients of the best linear predictor n = c+r+c. Compute the conditional mean
:(r) = E(n [ r) . Are the best linear predictor and conditional mean di¤erent?
Exercise 2.7 True or False. If n = r +c. r ÷ R. and E(c [ r) = 0. then E

r
2
c

= 0.
Exercise 2.8 True or False. If n = r +c. r ÷ R. and E(rc) = 0. then E

r
2
c

= 0.
Exercise 2.9 True or False. If n = i
t
d +c and E(c [ i) = 0. then c is independent of i.
Exercise 2.10 True or False. If n = i
t
d +c and E(ic) = 0. then E(c [ i) = 0.
Exercise 2.11 True or False. If n = i
t
d + c, E(c [ i) = 0. and E

c
2
[ i

= o
2
. a constant, then
c is independent of i.
Exercise 2.12 Let r be a random variable with j = Er and o
2
= var(r). De…ne
o

r [ j. o
2

=

r ÷j
(r ÷j)
2
÷o
2

.
Show that Eo (r [ :. :) = 0 if and only if : = j and : = o
2
.
26
Exercise 2.13 Suppose that
i =

¸
1
r
2
r
3
¸

and r
3
= c
1
+c
2
r
2
is a linear function of r
2
.
(a) Show that O = E(ii
t
) is not invertible.
(b) Use a linear transformation of i to …nd an expression for the best linear predictor of n given
i. (Be explicit, do not just use the generalized inverse formula.)
Exercise 2.14 Show (2.26)-(2.27), namely that for
d(d) = E

:(i) ÷i
t
d

2
then
d = argmin
d÷R
k
d(d)
=

E

ii
t

÷1
E(i:(i))
=

E

ii
t

÷1
E(in) .
Hint: To show E(i:(i)) = E(in) use the law of iterated expectations.
27
Chapter 3
The Algebra of Least Squares
3.1 Introduction
In this chapter we introduce the popular least-squares estimator. Most of the discussion will be
algebraic, with questions of distribution and inference defered to later chapters.
3.2 Least Squares Estimator
In Section 2.9 we derived and discussed the best linear predictor of n given i for a pair of random
variables (n. i) ÷ RR
I
. and called this the linear projection model. Applied to observations from
a random sample with observations (n
j
. i
j
: i = 1. .... :) this model takes the form
n
j
= i
t
j
d +c
j
(3.1)
where d is de…ned as
d = argmin
d÷R
k
o(d). (3.2)
o(d) = E

n
j
÷i
t
j
d

2
. (3.3)
and
d =

E

ii
t

÷1
E(in) . (3.4)
When a parameter is de…ned as the minimizer of a function as in (3.2), a standard approach
to estimation is to construct an empirical analog of the function, and de…ne the estimator of the
parameter as the minimizer of the empirical function.
The empirical analog of the expected squared error (3.3) is the sample average squared error
o
a
(d) =
1
:
a
¸
j=1

n
j
÷i
t
j
d

2
(3.5)
=
1
:
oo1
a
(d)
where
oo1
a
(d) =
a
¸
j=1

n
j
÷i
t
j
d

2
is called the sum-of-squared-errors function.
An estimator for d is the minimizer of (3.5):
`
d = argmin
d÷R
k
o
a
(d).
28
Figure 3.1: Sum-of-Squared Errors Function
Alternatively, as o
a
(d) is a scale multiple of oo1
a
(d). we may equivalently de…ne
`
d as the mini-
mizer of oo1
a
(d). Hence
`
d is commonly called the least-squares estimator of d.
To visualize the quadratic function o
a
(d), Figure 3.1 displays an example sum-of-squared er-
rors function oo1
a
(d) for the case / = 2. The least-squares estimator
`
d is the the pair (
^

1
.
^

2
)
minimizing this function.
3.3 Solving for Least Squares
To solve for
`
d, expand the SSE function to …nd
oo1
a
(d) =
a
¸
j=1
n
2
j
÷2d
t
a
¸
j=1
i
j
n
j
+d
t
a
¸
j=1
i
j
i
t
j
d
which is quadratic in the vector argument d . The …rst-order-condition for minimization of oo1
a
(d)
is
0 =
0
0d
o
a
(
`
d) = ÷2
a
¸
j=1
i
j
n
j
+ 2
a
¸
j=1
i
j
i
t
j
`
d. (3.6)
By inverting the / / matrix
¸
a
j=1
i
j
i
t
j
we …nd an explicit formula for the least-squares estimator
`
d =

a
¸
j=1
i
j
i
t
j

÷1

a
¸
j=1
i
j
n
j

. (3.7)
This is the natural estimator of the best linear prediction coe¢cient d de…ned in (3.2), and can
also be called the linear projection estimator.
29
Early Use of Matrices
The earliest known treatment of the use of matrix methods
to solve simultaneous systems is found in Chapter 8 of the
Chinese text The Nine Chapters on the Mathematical Art,
written by several generations of scholars from the 10th to
2nd century BCE.
Alternatively, equation (3.4) writes the projection coe¢cient d as an explicit function of the
population moments E(i
j
n
j
) and E(i
j
i
t
j
) . Their moment estimators are the sample moments
´
E(i
j
n
j
) =
1
:
a
¸
j=1
i
j
n
j
´
E

i
j
i
t
j

=
1
:
a
¸
j=1
i
j
i
t
j
.
The moment estimator of d replaces the population moments in (3.4) with the sample moments:
`
d =

´
E

i
j
i
t
j

÷1
´
E(i
j
n
j
)
=

1
:
a
¸
j=1
i
j
i
t
j

÷1

1
:
a
¸
j=1
i
j
n
j

=

a
¸
j=1
i
j
i
t
j

÷1

a
¸
j=1
i
j
n
j

which is identical with (3.7).
Least Squares Estimation
De…nition 3.3.1 The least-squares estimator
`
d is
`
d = argmin
d÷R
k
o
a
(d)
where
o
a
(d) =
1
:
a
¸
j=1

n
j
÷i
t
j
d

2
and has the solution
`
d =

a
¸
j=1
i
j
i
t
j

÷1

a
¸
j=1
i
j
n
j

.
To illustrate least-squares estimation in practice, consider the data used to generate Figure 2.3.
These are white male wage earners from the March 2004 Current Population Survey, excluding
30
military, with 10-15 years of potential work experience. This sample has 988 observations. Let n
j
be log wages and i
j
be an intercept and years of education. Then
1
:
a
¸
j=1
i
j
n
j
=

2.951
42.405

and
1
:
a
¸
j=1
i
j
i
t
j
=

1 14.136
14.136 205.826

.
Thus
`
d =

1 14.136
14.136 205.826

÷1

2.951
42.405

=

1. 33
0.115

. (3.8)
We often write the estimated equation using the format
\
log(\aoc) = 1.33 + 0.115 cdncatio:. (3.9)
An interpretation of the estimated equation is that each year of education is associated with an
11% increase in mean wages.
Equation (3.9) is called a bivariate regression as there are only two variables. A multivariate
regression has two or more regressors, and allows a more detailed investigation. Let’s redo the
example, but now including all levels of experience. This expanded sample includes 6578 observa-
tions. Including as regressors years of experience and its square (experience
2
´100) (we divide by
100 to simplify reporting), we obtain the estimates
\
log(\aoc) = 0.959 + 0.100 cdncatio: + 0.053 crjcric:cc ÷0.095 crjcric:cc
2
´100. (3.10)
These estimates suggest a 10% increase in mean wages per year of education.
Adrien-Marie Legendre
The method of least-squares was …rst published in 1805 by the French mathematician
Adrien-Marie Legendre (1752-1833). Legendre proposed least-squares as a solution to the
algebraic problem of solving a system of equations when the number of equations exceeded
the number of unknowns. This was a vexing and common problem in astronomical mea-
surement. As viewed by Legendre, (3.1) is a set of : equations with / unknowns. As the
equations cannot be solved exactly, Legendre’s goal was to select d to make the set of
errors as small as possible. He proposed the sum of squared error criterion, and derived
the algebraic solution presented above. As he noted, the …rst-order conditions (3.6) is a
system of / equations with / unknowns, which can be solved by “ordinary” methods. Hence
the method became known as Ordinary Least Squares and to this day we still use the
abbreviation OLS to refer to Legendre’s estimation method.
31
3.4 Least Squares Residuals
As a by-product of estimation, we de…ne the …tted or predicted value
^ n
j
= i
t
j
`
d
and the residual
^ c
j
= n
j
÷ ^ n
j
= n
j
÷i
t
j
`
d. (3.11)
Note that n
j
= ^ n
j
+ ^ c
j
. We make a distinction between the error c
j
and the residual ^ c
j
. The
error c
j
is unobservable while the residual ^ c
j
is a by-product of estimation. These two variables are
frequently mislabeled, which can cause confusion.
Equation (3.6) implies that
1
:
a
¸
j=1
i
j
^ c
j
= 0. (3.12)
To see this by a direct calculation, using (3.11) and (3.7),
1
:
a
¸
j=1
i
j
^ c
j
=
1
:
a
¸
j=1
i
j

n
j
÷i
t
j
`
d

=
1
:
a
¸
j=1
i
j
n
j
÷
1
:
a
¸
j=1
i
j
i
t
j
`
d
=
1
:
a
¸
j=1
i
j
n
j
÷
1
:
a
¸
j=1
i
j
i
t
j

a
¸
j=1
i
j
i
t
j

÷1

a
¸
j=1
i
j
n
j

=
1
:
a
¸
j=1
i
j
n
j
÷
a
¸
j=1
i
j
n
j
= 0.
When i
j
contains a constant, an implication of (3.12) is
1
:
a
¸
j=1
^ c
j
= 0.
Thus the residuals have a sample mean of zero and the sample correlation between the regressors
and the residual is zero. These are algebraic results, and hold true for all linear regression estimates.
Given the residuals, we can construct an estimator for o
2
as de…ned in (2.13):
^ o
2
=
1
:
a
¸
j=1
^ c
2
j
. (3.13)
3.5 Model in Matrix Notation
For many purposes, including computation, it is convenient to write the model and statistics in
matrix notation. The linear equation (2.12) is a system of : equations, one for each observation.
We can stack these : equations together as
n
1
= i
t
1
d +c
1
n
2
= i
t
2
d +c
2
.
.
.
n
a
= i
t
a
d +c
a
.
32
Now de…ne
u =

¸
¸
¸
¸
n
1
n
2
.
.
.
n
a
¸

. A =

¸
¸
¸
¸
i
t
1
i
t
2
.
.
.
i
t
a
¸

. c =

¸
¸
¸
¸
c
1
c
2
.
.
.
c
a
¸

.
Observe that u and c are :1 vectors, and A is an :/ matrix. Then the system of : equations
can be compactly written in the single equation
u = Ad +c.
Sample sums can also be written in matrix notation. For example
a
¸
j=1
i
j
i
t
j
= A
t
A
a
¸
j=1
i
j
n
j
= A
t
u.
Therefore
`
d =

A
t
A

÷1

A
t
u

. (3.14)
Using matrix notation we have simple expressions for most estimators. This is particularly conve-
nient for computer programming, as most languages allow matrix notation and manipulation.
Important Matrix Expressions
u = Ad +c
`
d =

A
t
A

÷1

A
t
u

` c = u ÷A
`
d
^ o
2
= :
÷1
` c
t
` c.
3.6 Projection Matrices
De…ne the matrices
1 = A

A
t
A

÷1
A
t
and
A = 1
a
÷A

A
t
A

÷1
A
t
= 1
a
÷1
where 1
a
is the : : identity matrix. 1 and A are called projection matrices due to the
property that for any matrix Z which can be written as Z = AI for some matrix I (we say that
Z lies in the range space of A). then
1Z = 1AI = A

A
t
A

÷1
A
t
AI = AI = Z
and
AZ = (1
a
÷1) Z = Z ÷1Z = Z ÷Z = 0.
33
As an important example of this property, partition the matrix A into two matrices A
1
and
A
2
so that
A = [A
1
A
2
] .
Then 1A
1
= A
1
and AA
1
= 0. It follows that AA = 0 and A1 = 0. so A and 1 are
orthogonal.
The matrices 1 and A are symmetric and idempotent
1
. To see that 1 is symmetric,
1
t
=

A

A
t
A

÷1
A
t

t
=

A
t

t

A
t
A

÷1

t
(A)
t
= A

A
t
A

t

÷1
A
t
= A

(A)
t

A
t

t

÷1
A
t
= 1.
To establish that it is idempotent,
11 =

A

A
t
A

÷1
A
t

A

A
t
A

÷1
A
t

= A

A
t
A

÷1
A
t
A

A
t
A

÷1
A
t
= A

A
t
A

÷1
A
t
= 1.
Similarly,
A
t
= (1
a
÷1)
t
= 1
a
÷1 = A
and
AA = A (1
a
÷1)
= A ÷A1
= A,
since A1 = 0.
Another useful property is that
tr 1 = / (3.15)
tr A = : ÷/ (3.16)
(See Appendix A.4 for de…nition and properties of the trace operator.) To show (3.15) and (3.16),
tr 1 = tr

A

A
t
A

÷1
A
t

= tr

A
t
A

÷1
A
t
A

= tr (1
I
)
= /.
and
tr A = tr (1
a
÷1) = tr (1
a
) ÷tr (1) = : ÷/.
1
A matrix P is symmetric if P
0
= P: A matrix P is idempotent if PP = P: See Appendix A.8.
34
Given the de…nitions of 1 and A. observe that
` u = A
`
d = A

A
t
A

÷1
A
t
u = 1u
and
` c = u ÷A
`
d = u ÷1u = Au. (3.17)
Furthermore, since u = Ad +c and AA = 0. then
` c = A (Ad +c) = Ac. (3.18)
Another way of writing (3.17) is
u = (1 +A) u = 1u +Au = ` u + ` c.
This decomposition is orthogonal, that is
` u
t
` c = (1u)
t
(Au) = u
t
1Au = 0.
The projection matrix 1 is also known as the hat matrix due to the equation ` u = 1u. The
i’th diagonal element of 1 = A(A
t
A)
÷1
A
t
is
/
jj
= i
t
j

A
t
A

÷1
i
j
(3.19)
which is called the leverage of the i’th observation. The /
jj
take values in [0. 1] and sum to /
a
¸
j=1
/
jj
= / (3.20)
(See Exercise 3.6).
3.7 Residual Regression
Partition
A = [A
1
A
2
]
and
d =

d
1
d
2

.
Then the regression model can be rewritten as
u = A
1
d
1
+A
2
d
2
+c. (3.21)
Observe that the OLS estimator of d = (d
t
1
. d
t
2
)
t
can be obtained by regression of u on A = [A
1
A
2
]. OLS estimation can be written as
u = A
1
`
d
1
+A
2
`
d
2
+ ` c (3.22)
Suppose that we are primarily interested in d
2
. not in d
1
. and we want to obtain the OLS sub-
component
`
d
2
. In this section we derive an alternative expression for
`
d
2
which does not involve
estimation of the full model.
De…ne
A
1
= 1
a
÷A
1

A
t
1
A
1

÷1
A
t
1
.
Recalling the de…nition A = 1
a
÷A(A
t
A)
÷1
A
t
. observe that A
t
1
A
1
= 0 and thus
A
1
A = A ÷A
1

A
t
1
A
1

÷1
A
t
1
A = A.
35
It follows that
A
1
` c = A
1
Au = Au = ` c.
Using this result, if we premultiply (3.22) by A
1
we obtain
A
1
u = A
1
A
1
`
d
1
+A
1
A
2
`
d
2
+A
1
` c
= A
1
A
2
`
d
2
+ ` c (3.23)
the second equality since A
1
A
1
= 0. Premultiplying by A
t
2
and recalling that A
t
2
` c = 0. we
obtain
A
t
2
A
1
u = A
t
2
A
1
A
2
`
d
2
+A
t
2
` c = A
t
2
A
1
A
2
`
d
2
.
Solving,
`
d
2
=

A
t
2
A
1
A
2

÷1

A
t
2
A
1
u

an alternative expression for
`
d
2
.
Now, de…ne
¯
A
2
= A
1
A
2
(3.24)
¯ u = A
1
u. (3.25)
the least-squares residuals from the regression of A
2
and u. respectively, on the matrix A
1
only.
Since the matrix A
1
is idempotent, A
1
= A
1
A
1
and thus
`
d
2
=

A
t
2
A
1
A
2

÷1

A
t
2
A
1
u

=

A
t
2
A
1
A
1
A
2

÷1

A
t
2
A
1
A
1
u

=

¯
A
t
2
¯
A
2

÷1

¯
A
t
2
¯ u

.
This shows that
`
d
2
can be calculated by the OLS regression of ¯ u on
¯
A
2
. This technique is called
residual regression.
Furthermore, using the de…nitions (3.24) and (3.25), expression (3.23) can be equivalently writ-
ten as
¯ u =
¯
A
2
`
d
2
+ ` c.
Since
`
d
2
is precisely the OLS coe¢cient from a regression of ¯ u on
¯
A
2
. this shows that the residual
vector from this regression is ` c, numerically the same residual vector as from the joint regression
(3.22). We have proven the following theorem.
Theorem 3.7.1 Frisch-Waugh-Lovell
In the model (3.21), the OLS estimator of d
2
and the OLS residuals ` c
may be equivalently computed by either the OLS regression (3.22) or via
the following algorithm:
1. Regress u on A
1
. obtain residuals ¯ u;
2. Regress A
2
on A
1
. obtain residuals
¯
A
2
;
3. Regress ¯ u on
¯
A
2
. obtain OLS estimates
`
d
2
and residuals ` c.
In some contexts, the FWL theorem can be used to speed computation, but in most cases
there is little computational advantage to using the two-step algorithm. Rather, the primary use
is theoretical.
36
A common application of the FWL theorem, which you may have seen in an introductory
econometrics course, is the demeaning formula for regression. Partition A = [A
1
A
2
] where
A
1
= i is a vector of ones, and A
2
is the vector of observed regressors. In this case,
A
1
= 1 ÷i

i
t
i

÷1
i
t
.
Observe that
¯
A
2
= A
1
A
2
= A
2
÷i

i
t
i

÷1
i
t
A
2
= A
2
÷A
2
and
¯ u = A
1
u
= u ÷i

i
t
i

÷1
i
t
u
= u ÷u.
which are “demeaned”. The FWL theorem says that
`
d
2
is the OLS estimate from a regression of
n
j
÷u on i
2j
÷i
2
:
`
d
2
=

a
¸
j=1
(i
2j
÷i
2
) (i
2j
÷i
2
)
t

÷1

a
¸
j=1
(i
2j
÷i
2
) (n
j
÷u)

.
Thus the OLS estimator for the slope coe¢cients is a regression with demeaned data.
Ragnar Frisch
Ragnar Frisch (1895-1973) was co-winner with Jan Tinbergen of the …rst Nobel Memorial
Prize in Economic Sciences in 1969 for their work in developing and applying dynamic mod-
els for the analysis of economic problems. Frisch made a number of foundational contribu-
tions to modern economics beyond the Frisch-Waugh-Lovell Theorem, including formalizing
consumer theory, production theory, and business cycle theory.
3.8 Prediction Errors
The least-squares residual ^ c
j
are not true prediction errors, as they are constructed based on
the full sample including n
j
. A proper prediction for n
j
should be based on estimates constructed
only using the other observations. We can do this by de…ning the leave-one-out OLS estimator
of d as that obtained from the sample excluding the i’th observation:
`
d
(÷j)
=

¸
1
: ÷1
¸
;,=j
i
;
i
t
;
¸

÷1

¸
1
: ÷1
¸
;,=j
i
;
n
;
¸

=

A
t
(÷j)
A
(÷j)

÷1
A
(÷j)
u
(÷j)
(3.26)
where A
(÷j)
and u
(÷j)
are the data matrices omitting the i’th row. The leave-one-out predicted
value for n
j
is
~ n
j
= i
t
j
`
d
(÷j)
.
37
and the leave-one-out residual or prediction error is
~ c
j
= n
j
÷ ~ n
j
.
A convenient alternative expression for
`
d
(÷j)
(derived below) is
`
d
(÷j)
=
`
d ÷(1 ÷/
jj
)
÷1

A
t
A

÷1
i
j
^ c
j
(3.27)
where /
jj
are the leverage values as de…ned in (3.19).
Using (3.27) we can simplify the expression for the prediction error:
~ c
j
= n
j
÷i
t
j
`
d
(÷j)
= n
j
÷i
t
j
`
d + (1 ÷/
jj
)
÷1
i
t
j

A
t
A

÷1
i
j
^ c
j
= ^ c
j
+ (1 ÷/
jj
)
÷1
/
jj
^ c
j
= (1 ÷/
jj
)
÷1
^ c
j
. (3.28)
A convenient feature of this expression is that it shows that computation of ~ c
j
is based on a simple
linear operation, and does not really require : separate estimations.
One use of the prediction errors is to estimate the out-of-sample mean squared error
~ o
2
=
1
:
a
¸
j=1
~ c
2
j
=
1
:
a
¸
j=1
(1 ÷/
jj
)
÷2
^ c
2
j
.
This is also known as the mean squared prediction error. Its square root ~ o =

~ o
2
is the
prediction standard error.
Proof of Equation (3.27). The Sherman–Morrison formula (A.2) from Appendix A.5 states that
for nonsingular A and vector I

A÷II
t

÷1
= A
÷1
+

1 ÷I
t
A
÷1
I

÷1
A
÷1
II
t
A
÷1
.
This implies

A
t
A ÷i
j
i
t
j

÷1
=

A
t
A

÷1
+ (1 ÷/
j
)
÷1

A
t
A

÷1
i
j
i
t
j

A
t
A

÷1
and thus
`
d
(÷j)
=

A
t
A ÷i
j
i
t
j

÷1

A
t
u ÷i
j
n
j

=

A
t
A

÷1
A
t
u ÷

A
t
A

÷1
i
j
n
j
+(1 ÷/
j
)
÷1

A
t
A

÷1
i
j
i
t
j

A
t
A

÷1

A
t
u ÷i
j
n
j

=
`
d ÷

A
t
A

÷1
i
j
n
j
+ (1 ÷/
j
)
÷1

A
t
A

÷1
i
j

i
t
j
`
d ÷/
j
n
j

=
`
d ÷(1 ÷/
j
)
÷1

A
t
A

÷1
i
j

(1 ÷/
j
) n
j
÷i
t
j
^
+/
j
n
j

=
`
d ÷(1 ÷/
j
)
÷1

A
t
A

÷1
i
j
^ c
j
the third equality making the substitutions
`
d = (A
t
A)
÷1
A
t
u and /
j
= i
t
j
(A
t
A)
÷1
i
j
. and the
remainder collecting terms.
38
3.9 In‡uential Observations
Another use of the leave-one-out estimator is to investigate the impact of in‡uential obser-
vations, sometimes called outliers. We say that observation i is in‡uential if its omission from
the sample induces a substantial change in a parameter of interest. From (3.27)-(3.28) we know
that
`
d ÷
`
d
(÷j)
= (1 ÷/
jj
)
÷1

A
t
A

÷1
i
j
^ c
j
=

A
t
A

÷1
i
j
~ c
j
.
By direct calculation of this quantity for each observation i. we can directly discover if a speci…c
observation i is in‡uential for a coe¢cient estimate of interest.
For a more general assessment, we can focus on the predicted values. The di¤erence between
the full-sample and leave-one-out predicted values is
^ n
j
÷ ~ n
j
= i
t
j
`
d ÷i
t
j
`
d
(÷j)
= i
t
j

A
t
A

÷1
i
j
~ c
j
= /
jj
~ c
j
which is a simple function of the leverage values /
jj
and prediction errors ~ c
j
. Observation i is
in‡uential for the predicted value if [/
jj
~ c
j
[ is large, which requires that both /
jj
and [~ c
j
[ are large.
One way to think about this is that a large leverage value /
jj
gives the potential for observation
i to be in‡uential. A large /
jj
means that observation i is unusual in the sense that the regressor i
j
is far from its sample mean. We call this observation with large /
jj
a leverage point. A leverage
point is not necessarily in‡uential as this also requires that the prediction error ~ c
j
is large.
To determine if any individual observations are in‡uential in this sense, a useful summary
statistic is
In‡uence = max
1<j<a
[^ n
j
÷ ~ n
j
[
~ o
= max
1<j<a
/
jj
[~ c
j
[
~ o
which scales the maximum change in predicted values by the prediction standard error. If In‡uence
is large, it may be useful to examine the corresponding observation or observations. (As this is an
informal comparison there is no magic threshold, so judgement must be employed.)
If an observation is determined to be in‡uential, what should be done? Certainly, the recorded
values for the observations should be examined. It is quite possible that there is a data error, and
this is a common cause of in‡uential observations. If there is an error, you should scrutinize all
observations more carefully, as it would seem unlikely that data error would be con…ned to a single
observation. If it is determined that an observation is incorrectly recorded, then the observation is
typically deleted from the sample. When this is done it is proper empirical practice to document
such choices. (It is useful to keep the source data in its original form, a revised data …le after
cleaning, and a record describing the revision process. This is especially useful when revising
empirical work at a later date.)
It is also possible that an observation is correctly measured, but unusual and in‡uential. In
this case it is unclear how to proceed. Some researchers will try to alter the speci…cation to
properly model the in‡uential observation. Other researchers will delete the observation from the
sample. The motivation for this choice is to prevent the results from being skewed or determined
by individual observations, but this practice is viewed skeptically by many researchers, who believe
it reduces the integrity of reported empirical results.
3.10 Measures of Fit
When a least-squares regression is reported in applied economics, it is common to see a reported
summary measure of …t, measuring how well the regressors explain the observed variation in the
dependent variable.
39
Some common summary measures are based on scaled or transformed estimates of the mean-
squared error o
2
. These include the sum of squared errors
¸
a
j=1
^ c
2
j
. the mean squared error
of sample variance :
÷1
¸
a
j=1
^ c
2
j
= ^ o
2
. and the root mean squared error

:
÷1
¸
a
j=1
^ c
2
j
(sometimes
called the standard error of the regression), and the mean prediction error ~ o
2
=
1
a
¸
a
j=1
~ c
2
j
.
A related and commonly reported statistic is the coe¢cient of determination or R-squared:
1
2
=
¸
a
j=1
(^ n
j
÷n)
2
¸
a
j=1
(n
j
÷n)
2
= 1 ÷
^ o
2
^ o
2
&
where
^ o
2
&
=
1
:
a
¸
j=1
(n
j
÷n)
2
is the sample variance of n
j
. 1
2
can be viewed as an estimator of the population parameter
j
2
=
var (i
t
j
d)
var(n
j
)
= 1 ÷
o
2
o
2
&
where o
2
&
= var(n
j
). A high j
2
or 1
2
means that forecasts of n using i
t
d or i
t
´
d will be quite
accurate relative to the unconditional mean. In this sense 1
2
can be a useful summary measure for
an out-of-sample forecast or policy experiment.
An alternative estimator of j
2
proposed by Theil called R-bar-squared or adjusted 1
2
is
1
2
= 1 ÷
(: ÷1)
¸
a
j=1
^ c
2
j
(: ÷/)
¸
a
j=1
(n
j
÷n)
2
.
Theil’s estimator 1
2
is better estimator of j
2
than the unadjusted estimator 1
2
because it can be
expressed as a ratio of bias-corrected variance estimates.
Unfortunately, the frequent reporting of 1
2
and 1
2
seems to have led to exaggerated beliefs
regarding their usefulness. One mistaken belief is that 1
2
is a measure of “…t”. This belief is
incorrect, as an incorrectly speci…ed model can still have a reasonably high 1
2
. For example,
suppose the truth is that r
j
~ ·(0. 1) and n
j
= r
j
+r
2
j
. If we regress n
j
on r
j
(incorrectly omitting
r
2
j
). the best linear predictor is n
j
= 1+r
j
+c
j
where c
j
= r
2
j
÷1. This is a misspeci…ed regression,
as the true relationship is deterministic! You can also calculate that the population j
2
= ´(2 +)
which can be arbitrarily close to 1 if is large. For example, if = 8. then 1
2
· j
2
= .8. or if
= 18 then 1
2
· j
2
= .9. This example shows that a regression with a high 1
2
can actually have
poor …t.
Another mistaken belief is that a high 1
2
is important in order to justify interpretation of the
regression coe¢cients. This is mistaken as there is no known association between the level of 1
2
and the “correctness” of a regression, the accuracy of the coe¢cient estimates, or the validity of
statistical inferences based on the estimated regression. In contrast, even if the 1
2
is quite small,
accurate estimates of regression coe¢cients is quite possible when sample sizes are large.
The bottom line is that while 1
2
and 1
2
have appropriate uses, their usefulness should not be
exaggerated.
Henri Theil
Henri Theil (1924-2000) of Holland invented 1
2
and two-stage least squares, both of which
are routinely seen in applied econometrics. He also wrote an early and in‡uential advanced
textbook on econometrics (Theil, 1971).
40
3.11 Normal Regression Model
The normal regression model is the linear regression model under the restriction that the error
c
j
is independent of i
j
and has the distribution N

0. o
2

. We can write this as
c
j
[ i
j
~ N

0. o
2

.
This assumption implies
n
j
[ i
j
~ N

i
t
j
d. o
2

.
Normal regression is a parametric model, where likelihood methods can be used for estimation,
testing, and distribution theory.
The log-likelihood function for the normal regression model is
log 1(d. o
2
) =
a
¸
j=1
log

1
(2¬o
2
)
1/2
exp

÷
1
2o
2

n
j
÷i
t
j
d

2

= ÷
:
2
log

2¬o
2

÷
1
2o
2
oo1
a
(d).
The maximum likelihood estimator (MLE) (
`
d. ^ o
2
) maximize log 1(d. o
2
). Since the latter is a
function of d only through the sum of squared errors oo1
a
(d). maximizing the likelihood is
identical to minimizing oo1
a
(d). Hence
`
d
n|c
=
`
d
c|c
.
the MLE for d equals the OLS estimator. Due to this equivalence, the least squares estimator
`
d is
also known as the MLE.
We can also …nd the MLE for o
2
. Plugging
`
d into the log-likelihood we obtain
log 1

`
d. o
2

= ÷
:
2
log

2¬o
2

÷
1
2o
2
a
¸
j=1
^ c
2
j
.
Maximization with respect to o
2
yields the …rst-order condition
0
0o
2
log 1

`
d. ^ o
2

= ÷
:
2^ o
2
+
1
2

^ o
2

2
a
¸
j=1
^ c
2
j
= 0.
Solving for ^ o
2
yields the MLE for o
2
^ o
2
=
1
:
a
¸
j=1
^ c
2
j
which is the same as the moment estimator (3.13).
It may seem surprising that the MLE
`
d is numerically equal to the OLS estimator, despite
emerging from quite di¤erent motivations. It is not completely accidental. The least-squares
estimator minimizes a particular sample loss function – the sum of squared error criterion – and
most loss functions are equivalent to the likelihood of a speci…c parametric distribution, in this case
the normal regression model. In this sense it is not surprising that the least-squares estimator can
be motivated as either the minimizer of a sample loss function or as the maximizer of a likelihood
function.
41
Carl Friedrich Gauss
The mathematician Carl Friedrich Gauss (1777-1855) proposed the normal regression model,
and derived the least squares estimator as the maximum likelihood estimator for this model.
He claimed to have discovered the method in 1795 at the age of eighteen, but did not publish
the result until 1809. Interest in Gauss’s approach was reinforced by Laplace’s simultaneous
discovery of the central limit theorem, which provided a justi…cation for viewing random
disturbances as approximately normal.
42
Exercises
Exercise 3.1 Let n be a random variable with j = En and o
2
= var(n). De…ne
o

n. j. o
2

=

n ÷j
(n ÷j)
2
÷o
2

.
Let (^ j. ^ o
2
) be the values such that o
a
(^ j. ^ o
2
) = 0 where o
a
(:. :) = :
÷1
¸
a
j=1
o (n
j
. :. :) . Show that
^ j and ^ o
2
are the sample mean and variance.
Exercise 3.2 Consider the OLS regression of the :1 vector u on the :/ matrix A. Consider
an alternative set of regressors Z = AC. where C is a / / non-singular matrix. Thus, each
column of Z is a mixture of some of the columns of A. Compare the OLS estimates and residuals
from the regression of u on A to the OLS estimates from the regression of u on Z.
Exercise 3.3 Let ` c be the OLS residual from a regression of u on A = [A
1
A
2
]. Find A
t
2
` c.
Exercise 3.4 Let ` c be the OLS residual from a regression of u on A. Find the OLS coe¢cient
from a regression of ` c on A.
Exercise 3.5 Let ` u = A(A
t
A)
÷1
A
t
u. Find the OLS coe¢cient from a regression of ` u on A.
Exercise 3.6 Show ()3.20), that /
jj
in (3.19) sum to /. (Hint: Use (3.15).)
Exercise 3.7 A dummy variable takes on only the values 0 and 1. It is used for categorical data,
such as an individual’s gender. Let u
1
and u
2
be vectors of 1’s and 0’s, with the i
t
th element of u
1
equaling 1 and that of u
2
equaling 0 if the person is a man, and the reverse if the person is a woman.
Suppose that there are :
1
men and :
2
women in the sample. Consider the three regressions
u = j +u
1
c
1
+u
2
c
2
+c (3.29)
u = u
1
c
1
+u
2
c
2
+c (3.30)
u = j +u
1
c +c (3.31)
Can all three regressions (3.29), (3.30), and (3.31) be estimated by OLS? Explain if not.
(a) Compare regressions (3.30) and (3.31). Is one more general than the other? Explain the
relationship between the parameters in (3.30) and (3.31).
(b) Compute i
t
u
1
and i
t
u
2
. where i is an : 1 is a vector of ones.
(c) Letting o = (c
1
c
2
)
t
. write equation (3.30) as u = Ao + c. Consider the assumption
E(i
j
c
j
) = 0. Is there any content to this assumption in this setting?
Exercise 3.8 Let u
1
and u
2
be de…ned as in the previous exercise.
(a) In the OLS regression
u = u
1
^
1
+u
2
^
2
+ ` u.
show that ^
1
is sample mean of the dependent variable among the men of the sample (n
1
),
and that ^
2
is the sample mean among the women (n
2
).
(b) Describe in words the transformations
u
+
= u ÷u
1
n
1
÷u
2
n
2
A
+
= A ÷u
1
A
1
÷u
2
A
2
.
43
(c) Compare
¯
d from the OLS regresion
u
+
= A
+
¯
d + ¯ c
with
`
d from the OLS regression
u = u
1
^ c
1
+u
2
^ c
2
+A
`
d + ` c.
Exercise 3.9 Let
`
d
a
= (A
t
a
A
a
)
÷1
A
t
a
u
a
denote the OLS estimate when u
a
is : 1 and A
a
is
: /. A new observation (n
a+1
. i
a+1
) becomes available. Prove that the OLS estimate computed
using this additional observation is
`
d
a+1
=
`
d
a
+
1
1 +i
t
a+1
(A
t
a
A
a
)
÷1
i
a+1

A
t
a
A
a

÷1
i
a+1

n
a+1
÷i
t
a+1
`
d
a

.
Exercise 3.10 Prove that 1
2
is the square of the simple correlation between u and ` u.
Exercise 3.11 The data …le cps85.dat contains a random sample of 528 individuals from the
1985 Current Population Survey by the U.S. Census Bureau. The …le contains observations on nine
variables, listed in the …le cps85.pdf.
V1 = education (in years)
V2 = region of residence (coded 1 if South, 0 otherwise)
V3 = (coded 1 if nonwhite and non-Hispanic, 0 otherwise)
V4 = (coded 1 if Hispanic, 0 otherwise)
V5 = gender (coded 1 if female, 0 otherwise)
V6 = marital status (coded 1 if married, 0 otherwise)
V7 = potential labor market experience (in years)
V8 = union status (coded 1 if in union job, 0 otherwise)
V9 = hourly wage (in dollars)
Estimate a regression of wage n
j
on education r
1j
, experience r
2j
, and experienced-squared r
3j
= r
2
2j
(and a constant). Report the OLS estimates.
Let ^ c
j
be the OLS residual and ^ n
j
the predicted value from the regression. Numerically calculate
the following:
(a)
¸
a
j=1
^ c
j
(b)
¸
a
j=1
r
1j
^ c
j
(c)
¸
a
j=1
r
2j
^ c
j
(d)
¸
a
j=1
r
2
1j
^ c
j
(e)
¸
a
j=1
r
2
2j
^ c
j
(f)
¸
a
j=1
^ n
j
^ c
j
(g)
¸
a
j=1
^ c
2
j
(h) 1
2
Are these calculations consistent with the theoretical properties of OLS? Explain.
44
Exercise 3.12 Using the data from the previous problem, restimate the slope on education using
the residual regression approach. Regress n
j
on (1. r
2j
. r
2
2j
), regress r
1j
on (1. r
2j
. r
2
2j
), and regress
the residuals on the residuals. Report the estimate from this regression. Does it equal the value
from the …rst OLS regression? Explain.
In the second-stage residual regression, (the regression of the residuals on the residuals), cal-
culate the equation 1
2
and sum of squared errors. Do they equal the values from the initial OLS
regression? Explain.
45
Chapter 4
Least Squares Regression
4.1 Introduction
In this chapter we investigate some …nite-sample properties of least-squares applied to a random
sample in the the linear regression model. Throughout this chapter we maintain the following.
Assumption 4.1.1 Linear Regression Model
The observations (n
j
. i
j
) come from a random sample and satisfy the linear
regression equation
n
j
= i
t
j
d +c
j
(4.1)
E(c
j
[ i
j
) = 0. (4.2)
The variables have …nite second moments
En
2
j
< ·
and
Er
2
;j
< ·
for , = 1. .... /. and an invertible design matrix
O = E

i
j
i
t
j

0.
We will consider both the general case of heteroskedastic regression, where the conditional
variance
E

c
2
j
[ i
j

= o
2
(i
j
) = o
2
j
is unrestricted, and the specialized case of homoskedastic regression, where the conditional variance
is constant. In the latter case we add the following assumption.
Assumption 4.1.2 Homoskedastic Linear Regression Model
In addition to Assumption 4.1.1,
E

c
2
j
[ i
j

= o
2
(i
j
) = o
2
(4.3)
is independent of r
j
.
46
Figure 4.1: Sampling Density of
^

4.2 Sampling Distribution
The least-squares estimator is random, since it is a function of random data, and therefore has a
sampling distribution. In general, its distribution is a complicated function of the joint distribution
of $(y_i, x_i)$ and the sample size $n$.
To illustrate the possibilities in one example, let $y_i$ and $x_i$ be drawn from the joint density
$$f(x, y) = \frac{1}{2\pi x y}\exp\left(-\frac{1}{2}\left(\log y - \log x\right)^2\right)\exp\left(-\frac{1}{2}\left(\log x\right)^2\right)$$
and let $\hat\beta$ be the slope coefficient estimate from a bivariate regression on observations from this joint density. Using simulation methods, the density function of $\hat\beta$ was computed and plotted in Figure 4.1 for sample sizes of $n = 25$, $n = 100$ and $n = 800$. The vertical line marks the true value of the projection coefficient.
From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. To learn about the true value of $\beta$ from the sample estimate $\hat\beta$, we need to have a way to characterize the sampling distribution of $\hat\beta$. We start in the next sections by deriving the mean and variance of $\hat\beta$.
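A minimal simulation sketch in the spirit of Figure 4.1 is below. Drawing $\log x \sim N(0,1)$ and $\log y \mid x \sim N(\log x, 1)$ is one way to generate observations from the joint density above; the replication count and summary statistics are choices made here, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope(n):
    x = np.exp(rng.standard_normal(n))
    y = x * np.exp(rng.standard_normal(n))      # log y = log x + N(0,1)
    xd = x - x.mean()
    return (xd @ (y - y.mean())) / (xd @ xd)    # OLS slope (with intercept)

for n in (25, 100, 800):
    b = np.array([slope(n) for _ in range(5000)])
    skew = ((b - b.mean())**3).mean() / b.std()**3
    print(f"n={n:4d}  mean={b.mean():6.3f}  sd={b.std():6.3f}  skew={skew:6.2f}")
```

The dispersion falls as $n$ grows while the skewness reveals the non-normal shape seen in the figure.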
4.3 Mean of Least-Squares Estimator
In this section we show that the OLS estimator is unbiased in the linear regression model.
Under (4.1)-(4.2) note that
$$E(y \mid X) = \begin{pmatrix} \vdots \\ E(y_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(y_i \mid x_i) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ x_i'\beta \\ \vdots \end{pmatrix} = X\beta. \qquad (4.4)$$
Similarly
$$E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0. \qquad (4.5)$$
By (3.14), conditioning on $X$, the linearity of expectations, (4.4), and the properties of the matrix inverse,
$$E\left(\hat\beta \mid X\right) = E\left(\left(X'X\right)^{-1}X'y \mid X\right) = \left(X'X\right)^{-1}X'E(y \mid X) = \left(X'X\right)^{-1}X'X\beta = \beta.$$
Applying the law of iterated expectations to $E\left(\hat\beta \mid X\right) = \beta$, we find that
$$E\left(\hat\beta\right) = E\left(E\left(\hat\beta \mid X\right)\right) = \beta.$$
Another way to calculate the same result is as follows. Insert $y = X\beta + e$ into the formula (3.14) for $\hat\beta$ to obtain
$$\hat\beta = \left(X'X\right)^{-1}\left(X'\left(X\beta + e\right)\right) = \left(X'X\right)^{-1}X'X\beta + \left(X'X\right)^{-1}\left(X'e\right) = \beta + \left(X'X\right)^{-1}X'e. \qquad (4.6)$$
This is a useful linear decomposition of the estimator $\hat\beta$ into the true parameter $\beta$ and the stochastic component $\left(X'X\right)^{-1}X'e$.
Using (4.6), conditioning on $X$, and (4.5),
$$E\left(\hat\beta - \beta \mid X\right) = E\left(\left(X'X\right)^{-1}X'e \mid X\right) = \left(X'X\right)^{-1}X'E(e \mid X) = 0.$$
Using either derivation, we have shown the following theorem.
Theorem 4.3.1 Mean of Least-Squares Estimator
In the linear regression model (Assumption 4.1.1)
$$E\left(\hat\beta \mid X\right) = \beta \qquad (4.7)$$
and
$$E\left(\hat\beta\right) = \beta. \qquad (4.8)$$
Equation (4.8) says that the estimator is unbiased, meaning that the distribution of $\hat\beta$ is centered at $\beta$. Equation (4.7) says that the estimator is conditionally unbiased, which is a stronger result. It says that $\hat\beta$ is unbiased for any realization of the regressor matrix $X$.
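A small Monte Carlo sketch of Theorem 4.3.1 is below: holding the regressor matrix $X$ fixed, the average of the OLS estimates across simulated error draws is close to the true coefficient, illustrating conditional unbiasedness. The design (sample size, coefficients, error scale) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
beta = np.array([1.0, 2.0, -0.5])
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])  # fixed X

draws = np.empty((2000, k))
for r in range(2000):
    # heteroskedastic errors are allowed: only E(e | X) = 0 is needed
    e = rng.standard_normal(n) * (0.5 + np.abs(X[:, 1]))
    y = X @ beta + e
    draws[r] = np.linalg.solve(X.T @ X, X.T @ y)

print("true beta:       ", beta)
print("average estimate:", draws.mean(axis=0))   # close to beta
```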
4.4 Variance of Least Squares Estimator
In this section we calculate the conditional variance of the OLS estimator.
For any $r \times 1$ random vector $Z$ define the $r \times r$ covariance matrix
$$\mathrm{var}(Z) = E(Z - EZ)(Z - EZ)' = EZZ' - (EZ)(EZ)'$$
and for any pair $(Z, X)$ define the conditional covariance matrix
$$\mathrm{var}(Z \mid X) = E\left((Z - E(Z \mid X))(Z - E(Z \mid X))' \mid X\right).$$
The conditional covariance matrix of the $n \times 1$ regression error $e$ is the $n \times n$ matrix
$$D = E\left(ee' \mid X\right).$$
The $i$'th diagonal element of $D$ is
$$E\left(e_i^2 \mid X\right) = E\left(e_i^2 \mid x_i\right) = \sigma_i^2$$
while the $ij$'th off-diagonal element of $D$ is
$$E\left(e_i e_j \mid X\right) = E(e_i \mid x_i)E(e_j \mid x_j) = 0,$$
where the first equality uses independence of the observations (Assumption 1.5.1) and the second is (4.2). Thus $D$ is a diagonal matrix with $i$'th diagonal element $\sigma_i^2$:
$$D = \mathrm{diag}\left(\sigma_1^2, ..., \sigma_n^2\right) = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}. \qquad (4.9)$$
In the special case of the linear homoskedastic regression model (4.3), then
$$E\left(e_i^2 \mid x_i\right) = \sigma_i^2 = \sigma^2$$
and we have the simplification
$$D = I_n\sigma^2.$$
In general, however, $D$ need not necessarily take this simplified form.
For any $n \times r$ matrix $A = A(X)$,
$$\mathrm{var}\left(A'y \mid X\right) = \mathrm{var}\left(A'e \mid X\right) = A'DA. \qquad (4.10)$$
In particular, we can write $\hat\beta = A'y$ where $A = X\left(X'X\right)^{-1}$, and thus
$$\mathrm{var}\left(\hat\beta \mid X\right) = A'DA = \left(X'X\right)^{-1}X'DX\left(X'X\right)^{-1}.$$
It is useful to note that
$$X'DX = \sum_{i=1}^{n} x_i x_i'\sigma_i^2,$$
a weighted version of $X'X$.
Rather than working with the variance of the unscaled estimator $\hat\beta$, it will be useful to work with the conditional variance of the scaled estimator $\sqrt{n}\left(\hat\beta - \beta\right)$:
$$V_{\hat\beta} = \mathrm{var}\left(\sqrt{n}\left(\hat\beta - \beta\right) \mid X\right) = n\,\mathrm{var}\left(\hat\beta \mid X\right) = n\left(X'X\right)^{-1}\left(X'DX\right)\left(X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'DX\right)\left(\tfrac{1}{n}X'X\right)^{-1}.$$
This rescaling might seem rather odd, but it will help provide continuity between the finite-sample treatment of this chapter and the asymptotic treatment of later chapters. As we will see in the next chapter, $\mathrm{var}\left(\hat\beta \mid X\right)$ vanishes as $n$ tends to infinity, yet $V_{\hat\beta}$ converges to a constant matrix.
In the special case of the linear homoskedastic regression model, $D = I_n\sigma^2$, so $X'DX = X'X\sigma^2$, and the variance matrix simplifies to
$$V_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2.$$
Theorem 4.4.1 Variance of Least-Squares Estimator
In the linear regression model (Assumption 4.1.1),
$$V_{\hat\beta} = \mathrm{var}\left(\sqrt{n}\left(\hat\beta - \beta\right) \mid X\right) = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'DX\right)\left(\tfrac{1}{n}X'X\right)^{-1}$$
where $D$ is defined in (4.9).
In the homoskedastic linear regression model (Assumption 4.1.2), the covariance matrix simplifies to
$$V_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2.$$
4.5 Gauss-Markov Theorem
Now consider the class of estimators of $\beta$ which are linear functions of the vector $y$, and thus can be written as
$$\tilde\beta = A'y$$
where $A$ is an $n \times k$ function of $X$. The least-squares estimator is the special case obtained by setting $A = X\left(X'X\right)^{-1}$. What is the best choice of $A$? The Gauss-Markov theorem, which we now present, says that the least-squares estimator is the best choice when the errors are homoskedastic, as the least-squares estimator has the smallest variance among all unbiased linear estimators.
To see this, since $E(y \mid X) = X\beta$, then for any linear estimator $\tilde\beta = A'y$ we have
$$E\left(\tilde\beta \mid X\right) = A'E(y \mid X) = A'X\beta,$$
so $\tilde\beta$ is unbiased if (and only if) $A'X = I_k$. Furthermore, we saw in (4.10) that
$$\mathrm{var}\left(\tilde\beta \mid X\right) = \mathrm{var}\left(A'y \mid X\right) = A'DA = A'A\sigma^2,$$
the last equality using the homoskedasticity assumption $D = I_n\sigma^2$. The "best" unbiased linear estimator is obtained by finding the matrix $A$ such that $A'A$ is minimized in the positive definite sense.
Theorem 4.5.1 Gauss-Markov
1. In the homoskedastic linear regression model (Assumption 4.1.2), the best (minimum-variance) unbiased linear estimator is the least-squares estimator
$$\hat\beta = \left(X'X\right)^{-1}X'y.$$
2. In the linear regression model (Assumption 4.1.1), the best unbiased linear estimator is
$$\tilde\beta = \left(X'D^{-1}X\right)^{-1}X'D^{-1}y. \qquad (4.11)$$
The first part of the Gauss-Markov theorem is a limited efficiency justification for the least-squares estimator. The justification is limited because the class of models is restricted to homoskedastic linear regression and the class of potential estimators is restricted to linear unbiased estimators. This latter restriction is particularly unsatisfactory as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator.
The second part of the theorem shows that in the (heteroskedastic) linear regression model, the least-squares estimator is inefficient. Within the class of linear unbiased estimators the best estimator is (4.11) and is called the Generalized Least Squares (GLS) estimator. This estimator is infeasible as the matrix $D$ is unknown. This result does not suggest a practical alternative to least-squares. We return to the issue of feasible implementation of GLS in Section 7.1.
Proof of Theorem 4.5.1.1. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The variance of the least-squares estimator is $\left(X'X\right)^{-1}\sigma^2$ and that of $A'y$ is $A'A\sigma^2$. It is sufficient to show that the difference $A'A - \left(X'X\right)^{-1}$ is positive semi-definite. Set $C = A - X\left(X'X\right)^{-1}$. Note that $X'C = 0$. Then we calculate that
$$A'A - \left(X'X\right)^{-1} = \left(C + X\left(X'X\right)^{-1}\right)'\left(C + X\left(X'X\right)^{-1}\right) - \left(X'X\right)^{-1} = C'C + C'X\left(X'X\right)^{-1} + \left(X'X\right)^{-1}X'C + \left(X'X\right)^{-1}X'X\left(X'X\right)^{-1} - \left(X'X\right)^{-1} = C'C.$$
The matrix $C'C$ is positive semi-definite (see Appendix A.7) as required.
The proof of Theorem 4.5.1.2 is left for Exercise 4.3.
4.6 Residuals
What are some properties of the residuals $\hat{e}_i = y_i - x_i'\hat\beta$ and prediction errors $\tilde{e}_i = y_i - x_i'\hat\beta_{(-i)}$, at least in the context of the linear regression model?
Recall from (3.17) and (3.18) that we can write the residuals in vector notation as
$$\hat{e} = y - X\hat\beta = My = Me$$
where $M = I_n - X\left(X'X\right)^{-1}X'$ is the matrix which projects on the space orthogonal to the columns of $X$. Using the properties of conditional expectation
$$E\left(\hat{e} \mid X\right) = E(Me \mid X) = ME(e \mid X) = 0$$
and
$$\mathrm{var}\left(\hat{e} \mid X\right) = \mathrm{var}(Me \mid X) = M\,\mathrm{var}(e \mid X)\,M = MDM \qquad (4.12)$$
where $D$ is defined in (4.9).
We can simplify this expression under the assumption of conditional homoskedasticity
$$E\left(e_i^2 \mid x_i\right) = \sigma^2.$$
In this case (4.12) simplifies to
$$\mathrm{var}\left(\hat{e} \mid X\right) = M\sigma^2.$$
In particular, for a single observation $i$, we obtain
$$\mathrm{var}\left(\hat{e}_i \mid X\right) = E\left(\hat{e}_i^2 \mid X\right) = (1 - h_{ii})\sigma^2 \qquad (4.13)$$
since the diagonal elements of $M$ are $1 - h_{ii}$ as defined in (3.19). Thus the residuals are heteroskedastic even if the errors are homoskedastic.
Similarly, we can write the prediction errors $\tilde{e}_i = (1 - h_{ii})^{-1}\hat{e}_i$ in vector notation. Set
$$M^* = \mathrm{diag}\left\{(1 - h_{11})^{-1}, ..., (1 - h_{nn})^{-1}\right\}.$$
Then we can write the prediction errors as
$$\tilde{e} = M^*My = M^*Me.$$
We can calculate that
$$E\left(\tilde{e} \mid X\right) = M^*ME(e \mid X) = 0$$
and
$$\mathrm{var}\left(\tilde{e} \mid X\right) = M^*M\,\mathrm{var}(e \mid X)\,MM^* = M^*MDMM^*$$
which simplifies under homoskedasticity to
$$\mathrm{var}\left(\tilde{e} \mid X\right) = M^*MMM^*\sigma^2 = M^*MM^*\sigma^2.$$
The variance of the $i$'th prediction error is then
$$\mathrm{var}\left(\tilde{e}_i \mid X\right) = E\left(\tilde{e}_i^2 \mid X\right) = (1 - h_{ii})^{-1}(1 - h_{ii})(1 - h_{ii})^{-1}\sigma^2 = (1 - h_{ii})^{-1}\sigma^2.$$
A residual with proper variance can be obtained by rescaling. The studentized residuals are
$$\bar{e}_i = (1 - h_{ii})^{-1/2}\hat{e}_i, \qquad (4.14)$$
and in vector notation
$$\bar{e} = (\bar{e}_1, ..., \bar{e}_n)' = M^{*1/2}Me.$$
From our above calculations, under homoskedasticity,
$$\mathrm{var}\left(\bar{e} \mid X\right) = M^{*1/2}MM^{*1/2}\sigma^2$$
and
$$\mathrm{var}\left(\bar{e}_i \mid X\right) = E\left(\bar{e}_i^2 \mid X\right) = \sigma^2 \qquad (4.15)$$
and thus these rescaled residuals have the same bias and variance as the original errors when the latter are homoskedastic.
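A minimal sketch computing the three residual variants of this section from the leverage values $h_{ii}$ (the diagonal of the projection matrix) is below; the simulated design is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)       # leverage values h_ii
ehat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)  # least-squares residuals
etilde = ehat / (1 - h)                           # prediction errors
ebar = ehat / np.sqrt(1 - h)                      # studentized residuals

print("sum of leverages (= k):", h.sum())
print("sample variances:", ehat.var(), etilde.var(), ebar.var())
```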
4.7 Estimation of Error Variance
The error variance $\sigma^2 = Ee_i^2$ can be a parameter of interest, even in a heteroskedastic regression or a projection model. $\sigma^2$ measures the variation in the "unexplained" part of the regression. Its method of moments estimator (MME) is the sample average of the squared residuals:
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2$$
and equals the MLE in the normal regression model (3.13).
In the linear regression model we can calculate the mean of $\hat\sigma^2$. From (3.18), the properties of projection matrices and the trace operator, observe that
$$\hat\sigma^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\mathrm{tr}\left(e'Me\right) = \frac{1}{n}\mathrm{tr}\left(Mee'\right).$$
Then
$$E\left(\hat\sigma^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(E\left(Mee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(ME\left(ee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(MD\right). \qquad (4.16)$$
Adding the assumption of conditional homoskedasticity $E\left(e_i^2 \mid x_i\right) = \sigma^2$, so that $D = I_n\sigma^2$, then (4.16) simplifies to
$$E\left(\hat\sigma^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(M\sigma^2\right) = \sigma^2\left(\frac{n - k}{n}\right),$$
the final equality by (3.16). This calculation shows that $\hat\sigma^2$ is biased towards zero. The order of the bias depends on $k/n$, the ratio of the number of estimated coefficients to the sample size.
Another way to see this is to use (4.13). Note that
$$E\left(\hat\sigma^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n}E\left(\hat{e}_i^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})\sigma^2 = \left(\frac{n - k}{n}\right)\sigma^2$$
using (3.20).
Since the bias takes a scale form, a classic method to obtain an unbiased estimator is by rescaling the estimator. Define
$$s^2 = \frac{1}{n - k}\sum_{i=1}^{n}\hat{e}_i^2. \qquad (4.17)$$
By the above calculation,
$$E\left(s^2 \mid X\right) = \sigma^2 \qquad (4.18)$$
so
$$E\left(s^2\right) = \sigma^2$$
and the estimator $s^2$ is unbiased for $\sigma^2$. Consequently, $s^2$ is known as the "bias-corrected estimator" for $\sigma^2$ and in empirical practice $s^2$ is the most widely used estimator for $\sigma^2$.
Interestingly, this is not the only method to construct an unbiased estimator for $\sigma^2$. An alternative unbiased estimator can be constructed using the studentized residuals $\bar{e}_i$ from (4.14), yielding the estimator
$$\bar\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\bar{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-1}\hat{e}_i^2.$$
You can show (see Exercise 4.6) that
$$E\left(\bar\sigma^2 \mid X\right) = \sigma^2 \qquad (4.19)$$
and thus $\bar\sigma^2$ is unbiased for $\sigma^2$ (in the homoskedastic linear regression model).
When the sample size is large and the number of regressors small, the estimators $\hat\sigma^2$, $s^2$ and $\bar\sigma^2$ are likely to be close. For example, in the regression (3.10), $\hat\sigma$, $s$, and $\bar\sigma$ all equal 0.490. The estimators are more likely to differ when $n$ is small and $k$ is large.
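A minimal sketch computing the three error-variance estimators for one simulated homoskedastic regression is below; the design is assumed for illustration, with true $\sigma^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 30, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_normal(n)   # sigma^2 = 1

ehat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)

sig2_hat = (ehat**2).mean()                   # method of moments / MLE
s2 = (ehat**2).sum() / (n - k)                # bias-corrected estimator
sig2_bar = ((ehat**2) / (1 - h)).mean()       # based on studentized residuals

print(sig2_hat, s2, sig2_bar)   # all estimate sigma^2; sig2_hat is the smallest
```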
4.8 Covariance Matrix Estimation Under Homoskedasticity
For inference, we need an estimate of the covariance matrix $V_{\hat\beta}$ of the least-squares estimator. In this section we consider estimation of $V_{\hat\beta}$ in the homoskedastic regression model (4.1)-(4.2)-(4.3).
Under homoskedasticity, the covariance matrix takes the relatively simple form
$$V_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2,$$
which is known up to the unknown scale $\sigma^2$. In the previous section we discussed three estimators of $\sigma^2$. The most commonly used choice is $s^2$, leading to the classic covariance matrix estimator
$$\hat{V}^0_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}s^2. \qquad (4.20)$$
Since $s^2$ is conditionally unbiased for $\sigma^2$, it is simple to calculate that $\hat{V}^0_{\hat\beta}$ is conditionally unbiased for $V_{\hat\beta}$ under the assumption of homoskedasticity:
$$E\left(\hat{V}^0_{\hat\beta} \mid X\right) = \left(\tfrac{1}{n}X'X\right)^{-1}E\left(s^2 \mid X\right) = \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2 = V_{\hat\beta}.$$
This estimator was the dominant covariance matrix estimator in applied econometrics in previous generations, and is still the default in most regression packages.
If the estimator (4.20) is used, but the regression error is heteroskedastic, it is possible for $\hat{V}^0_{\hat\beta}$ to be quite biased for the correct covariance matrix
$$V_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'DX\right)\left(\tfrac{1}{n}X'X\right)^{-1}.$$
For example, suppose $k = 1$ and $\sigma_i^2 = x_i^2$ (extreme heteroskedasticity). The ratio of the true variance of the least-squares estimator to the expectation of the variance estimator is
$$\frac{V_{\hat\beta}}{E\left(\hat{V}^0_{\hat\beta} \mid X\right)} = \frac{\frac{1}{n}\sum_{i=1}^{n}x_i^4}{\sigma^2\,\frac{1}{n}\sum_{i=1}^{n}x_i^2} \approx \frac{Ex_i^4}{\sigma^2\,Ex_i^2} = \frac{Ex_i^4}{\left(Ex_i^2\right)^2}.$$
(Notice that we use the fact that $\sigma_i^2 = x_i^2$ implies $\sigma^2 = E\sigma_i^2 = Ex_i^2$.) This is the kurtosis of the regressor $x_i$. As the kurtosis can be any number greater than one, we conclude that the bias of $\hat{V}^0_{\hat\beta}$ can be arbitrarily large. While this is an extreme and constructed example, the point is that the classic covariance matrix estimator (4.20) may be quite biased when the homoskedasticity assumption fails.
4.9 Covariance Matrix Estimation Under Heteroskedasticity
In the previous section we showed that the classic covariance matrix estimator can be highly biased if homoskedasticity fails. In this section we show how to construct covariance matrix estimators which do not require homoskedasticity.
Recall that the general form for the covariance matrix is
$$V_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'DX\right)\left(\tfrac{1}{n}X'X\right)^{-1}.$$
This depends on the unknown matrix $D$ which we can write as
$$D = \mathrm{diag}\left(\sigma_1^2, ..., \sigma_n^2\right) = E\left(ee' \mid X\right) = E\left(\mathrm{diag}\left(e_1^2, ..., e_n^2\right) \mid X\right).$$
Thus $D$ is the conditional mean of $\mathrm{diag}\left(e_1^2, ..., e_n^2\right)$, so the latter is an unbiased estimator for $D$. Therefore, if the squared errors $e_i^2$ were observable, we could construct the unbiased estimator
$$V^{ideal}_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'\mathrm{diag}\left(e_1^2, ..., e_n^2\right)X\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'e_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1}.$$
Indeed,
$$E\left(V^{ideal}_{\hat\beta} \mid X\right) = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'E\left(e_i^2 \mid X\right)\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'\sigma_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'DX\right)\left(\tfrac{1}{n}X'X\right)^{-1} = V_{\hat\beta},$$
verifying that $V^{ideal}_{\hat\beta}$ is unbiased for $V_{\hat\beta}$.
Since the errors $e_i^2$ are unobserved, $V^{ideal}_{\hat\beta}$ is not a feasible estimator. To construct a feasible estimator we can replace the errors with the least-squares residuals $\hat{e}_i$, the prediction errors $\tilde{e}_i$ or the unbiased residuals $\bar{e}_i$, e.g.
$$\hat{D} = \mathrm{diag}\left(\hat{e}_1^2, ..., \hat{e}_n^2\right), \quad \tilde{D} = \mathrm{diag}\left(\tilde{e}_1^2, ..., \tilde{e}_n^2\right), \quad \bar{D} = \mathrm{diag}\left(\bar{e}_1^2, ..., \bar{e}_n^2\right).$$
Substituting these matrices into the formula for $V_{\hat\beta}$ we obtain the estimators
$$\hat{V}_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'\hat{D}X\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'\hat{e}_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1},$$
$$\tilde{V}_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'\tilde{D}X\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'\tilde{e}_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-2}x_i x_i'\hat{e}_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1},$$
and
$$\bar{V}_{\hat\beta} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}X'\bar{D}X\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'\bar{e}_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-1}x_i x_i'\hat{e}_i^2\right)\left(\tfrac{1}{n}X'X\right)^{-1}.$$
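A minimal sketch writing the three robust estimators directly from the formulas above is given here; the simulated heteroskedastic design is an assumption made so the sketch runs on its own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + np.abs(x) * rng.standard_normal(n)  # heteroskedastic

Qhat_inv = np.linalg.inv(X.T @ X / n)
ehat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)

def sandwich(weights):
    # (1/n X'X)^{-1} (1/n sum w_i e_i^2 x_i x_i') (1/n X'X)^{-1}
    middle = (X * (weights * ehat**2)[:, None]).T @ X / n
    return Qhat_inv @ middle @ Qhat_inv

V_hat   = sandwich(np.ones(n))          # uses e-hat squared
V_tilde = sandwich((1 - h) ** -2)       # uses prediction errors
V_bar   = sandwich((1 - h) ** -1)       # uses studentized residuals

print(np.diag(V_hat), np.diag(V_tilde), np.diag(V_bar))
```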
The estimators $\hat{V}_{\hat\beta}$, $\tilde{V}_{\hat\beta}$, and $\bar{V}_{\hat\beta}$ are often called robust, heteroskedasticity-consistent, or heteroskedasticity-robust covariance matrix estimators. The estimator $\hat{V}_{\hat\beta}$ was first developed by Eicker (1963), and introduced to econometrics by White (1980), and is sometimes called the Eicker-White or White covariance matrix estimator.¹ The estimator $\tilde{V}_{\hat\beta}$ was introduced by Andrews (1991) based on the principle of leave-one-out cross-validation, and the estimator $\bar{V}_{\hat\beta}$ was introduced by Horn, Horn and Duncan (1975) as a reduced-bias covariance matrix estimator.
¹ Often, this estimator is rescaled by multiplying by the ad hoc bias adjustment $\frac{n}{n-k}$ in analogy to the bias-corrected error variance estimator.
In general, the bias of the estimators $\hat{V}_{\hat\beta}$, $\tilde{V}_{\hat\beta}$ and $\bar{V}_{\hat\beta}$ is quite complicated, but they greatly simplify under the assumption of homoskedasticity (4.3). For example, using (4.13),
$$E\left(\hat{V}_{\hat\beta} \mid X\right) = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'E\left(\hat{e}_i^2 \mid X\right)\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'(1 - h_{ii})\sigma^2\right)\left(\tfrac{1}{n}X'X\right)^{-1} = \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2 - \left(\tfrac{1}{n}X'X\right)^{-1}\left(\tfrac{1}{n}\sum_{i=1}^{n}x_i x_i'h_{ii}\right)\left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2 < \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2 = V_{\hat\beta}.$$
The inequality $A < B$ when applied to matrices means that the matrix $B - A$ is positive definite, which holds here since $\sum_{i=1}^{n}x_i x_i'h_{ii}$ is positive definite. This calculation shows that $\hat{V}_{\hat\beta}$ is biased downwards.
Similarly, (again under homoskedasticity) we can calculate that $\tilde{V}_{\hat\beta}$ is biased upwards, specifically
$$E\left(\tilde{V}_{\hat\beta} \mid X\right) \geq \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2 \qquad (4.21)$$
while the estimator $\bar{V}_{\hat\beta}$ is unbiased
$$E\left(\bar{V}_{\hat\beta} \mid X\right) = \left(\tfrac{1}{n}X'X\right)^{-1}\sigma^2. \qquad (4.22)$$
(See Exercise 4.7.)
It might seem rather odd to compare the bias of heteroskedasticity-robust estimators under the assumption of homoskedasticity, but it does give us a baseline for comparison.
We have introduced four covariance matrix estimators, $\hat{V}^0_{\hat\beta}$, $\hat{V}_{\hat\beta}$, $\tilde{V}_{\hat\beta}$, and $\bar{V}_{\hat\beta}$. Which should you use? The classic estimator $\hat{V}^0_{\hat\beta}$ is typically a poor choice, as it is only valid under the unlikely homoskedasticity restriction. For this reason it is not typically used in contemporary econometric research. Of the three robust estimators, $\hat{V}_{\hat\beta}$ is the most commonly used, as it is the most straightforward and familiar. However, $\tilde{V}_{\hat\beta}$, and in particular $\bar{V}_{\hat\beta}$, are preferred based on their improved bias. Unfortunately, standard regression packages set the classic estimator $\hat{V}^0_{\hat\beta}$ as the default. As $\tilde{V}_{\hat\beta}$ and $\bar{V}_{\hat\beta}$ are simple to implement, this should not be a barrier. For example, in STATA, $\bar{V}_{\hat\beta}$ is implemented by selecting "Robust" standard errors and selecting the bias correction option "$1/(1-h)$", or using the vce(hc2) option.
4.10 Standard Errors
A variance estimator such as $\hat{V}_{\hat\beta}$ is an estimate of the variance of the distribution of $\hat\beta$. A more easily interpretable measure of spread is its square root – the standard deviation. This is so important when discussing the distribution of parameter estimates, we have a special name for estimates of their standard deviation.
Definition 4.10.1 A standard error $s(\hat\beta)$ for a real-valued estimator $\hat\beta$ is an estimate of the standard deviation of the distribution of $\hat\beta$.
When $\beta$ is a vector with estimate $\hat\beta$ and covariance matrix estimate $n^{-1}\hat{V}_{\hat\beta}$, standard errors for individual elements are the square roots of the diagonal elements of $n^{-1}\hat{V}_{\hat\beta}$. That is,
$$s(\hat\beta_j) = \sqrt{n^{-1}\hat{V}_{\hat\beta_j}} = n^{-1/2}\sqrt{\left[\hat{V}_{\hat\beta}\right]_{jj}}.$$
As we discussed in the previous section, there are multiple possible covariance matrix estimators,
so standard errors are not unique. It is therefore important to understand what formula and method are used by an author when studying their work. It is also important to understand that a particular
standard error may be relevant under one set of model assumptions, but not under another set of
assumptions.
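A minimal sketch turning a covariance matrix estimate into standard errors, $s(\hat\beta_j) = \sqrt{[n^{-1}\bar{V}_{\hat\beta}]_{jj}}$, is given below. The simulated design is an assumption; in practice a package routine (for example, statsmodels with a robust cov_type option, if available) reports comparable quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + np.abs(x) * rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
ehat = y - X @ beta_hat
h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)

Qhat_inv = np.linalg.inv(X.T @ X / n)
V_bar = Qhat_inv @ ((X * (ehat**2 / (1 - h))[:, None]).T @ X / n) @ Qhat_inv
se = np.sqrt(np.diag(V_bar / n))       # standard errors

for j, (b, s) in enumerate(zip(beta_hat, se)):
    print(f"beta_{j}: estimate {b:7.4f}   std. error {s:7.4f}")
```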
4.11 Multicollinearity
If $\mathrm{rank}(X'X) < k$, then $\hat\beta$ is not defined². This is called strict multicollinearity. This happens when the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that $X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $X$ includes both the logs of two prices and the log of the relative prices, $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.
The more relevant situation is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $X'X$ matrix is near singular, when the columns of $X$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.
One implication of near singularity of matrices is that the numerical reliability of the calculations
is reduced. In extreme cases it is possible that the reported calculations will be in error.
A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors
$$y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,$$
and
$$\frac{1}{n}X'X = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$
In this case
$$\mathrm{var}\left(\hat\beta \mid X\right) = \frac{\sigma^2}{n}\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}^{-1} = \frac{\sigma^2}{n(1 - \rho^2)}\begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.$$
² See Appendix A.5 for the definition of the rank of a matrix.
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the variance of a coefficient estimate $\sigma^2\left[n\left(1 - \rho^2\right)\right]^{-1}$ approaches infinity as $\rho$ approaches 1. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.
What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of individual estimates is reduced. The imprecision, however, will be reflected by large standard errors, so there is no distortion in inference.
Some earlier textbooks overemphasized a concern about multicollinearity. A very amusing parody of these texts appeared in Chapter 23.3 of Goldberger's A Course in Econometrics (1991), which is reprinted below. To understand his basic point, you should notice how the estimation variance $\sigma^2\left[n\left(1 - \rho^2\right)\right]^{-1}$ depends equally and symmetrically on the correlation $\rho$ and the sample size $n$.
Arthur S. Goldberger
Art Goldberger (1930-2009) was one of the most distinguished members of the Department of Economics at the University of Wisconsin. His PhD thesis developed an early macroeconometric forecasting model (known as the Klein-Goldberger model) but most of his career focused on microeconometric issues. He was the leading pioneer of what has been called the Wisconsin Tradition of empirical work – a combination of formal econometric theory with a careful critical analysis of empirical work. Goldberger wrote a series of highly regarded and influential graduate econometric textbooks, including Econometric Theory (1964), Topics in Regression Analysis (1968), and A Course in Econometrics (1991).
Micronumerosity
Arthur S. Goldberger
A Course in Econometrics (1991), Chapter 23.3
Econometrics texts devote many pages to the problem of multicollinearity in multiple regression, but they say little about the closely analogous problem of small sample size in estimating a univariate mean. Perhaps that imbalance is attributable to the lack of an exotic polysyllabic name for "small sample size." If so, we can remove that impediment by introducing the term micronumerosity.
Suppose an econometrician set out to write a chapter about small sample size in sampling from a univariate population. Judging from what is now written about multicollinearity, the chapter might look like this:
1. Micronumerosity
The extreme case, "exact micronumerosity," arises when $n = 0$, in which case the sample estimate of $\mu$ is not unique. (Technically, there is a violation of the rank condition $n > 0$: the matrix $0$ is singular.) The extreme case is easy enough to recognize. "Near micronumerosity" is more subtle, and yet very serious. It arises when the rank condition $n > 0$ is barely satisfied. Near micronumerosity is very prevalent in empirical economics.
2. Consequences of micronumerosity
The consequences of micronumerosity are serious. Precision of estimation is reduced. There are two aspects of this reduction: estimates of $\mu$ may have large errors, and not only that, but $V_{\bar{y}}$ will be large.
Investigators will sometimes be led to accept the hypothesis $\mu = 0$ because $\bar{y}/\hat\sigma_{\bar{y}}$ is small, even though the true situation may be not that $\mu = 0$ but simply that the sample data have not enabled us to pick $\mu$ up.
The estimate of $\mu$ will be very sensitive to sample data, and the addition of a few more observations can sometimes produce drastic shifts in the sample mean.
The true $\mu$ may be sufficiently large for the null hypothesis $\mu = 0$ to be rejected, even though $V_{\bar{y}} = \sigma^2/n$ is large because of micronumerosity. But if the true $\mu$ is small (although nonzero) the hypothesis $\mu = 0$ may mistakenly be accepted.
3. Testing for micronumerosity
Tests for the presence of micronumerosity require the judicious use of various fingers. Some researchers prefer a single finger, others use their toes, still others let their thumbs rule.
A generally reliable guide may be obtained by counting the number of observations. Most of the time in econometric analysis, when $n$ is close to zero, it is also far from infinity.
Several test procedures develop critical values $n^*$, such that micronumerosity is a problem only if $n$ is smaller than $n^*$. But those procedures are questionable.
4. Remedies for micronumerosity
If micronumerosity proves serious in the sense that the estimate of $\mu$ has an unsatisfactorily low degree of precision, we are in the statistical position of not being able to make bricks without straw. The remedy lies essentially in the acquisition, if possible, of larger samples from the same population.
But more data are no remedy for micronumerosity if the additional data are simply "more of the same." So obtaining lots of small samples from the same population will not help.
4.12 Omitted Variable Bias
Let the regressors be partitioned as
$$x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \end{pmatrix}.$$
We can write the regression of $y_i$ on $x_i$ as
$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i \qquad (4.23)$$
$$E(x_i e_i) = 0.$$
Now suppose that instead of estimating equation (4.23) by least-squares, we regress $y_i$ on $x_{1i}$ only. Perhaps this is done because the variables $x_{2i}$ are not in the data set, or in order to reduce the number of estimated parameters. Effectively, we are estimating the equation
$$y_i = x_{1i}'\gamma_1 + u_i \qquad (4.24)$$
$$E(x_{1i}u_i) = 0.$$
Notice that we have written the coefficient on $x_{1i}$ as $\gamma_1$ rather than $\beta_1$ and the error as $u_i$ rather than $e_i$. This is because the model being estimated is different than (4.23). Goldberger (1991) introduced the labels (4.23) the long regression and (4.24) the short regression to emphasize the distinction.
, except in special cases. To see this, we calculate
~
1
=

E

i
1j
i
t
1j

÷1
E(i
1j
n
j
)
=

E

i
1j
i
t
1j

÷1
E

i
1j

i
t
1j
d
1
+i
t
2j
d
2
+c
j

= d
1
+

E

i
1j
i
t
1j

÷1
E

i
1j
i
t
2j

d
2
= d
1
+Id
2
where
I =

E

i
1j
i
t
1j

÷1
E

i
1j
i
t
2j

is the coe¢cient from a regression of i
2j
on i
1j
.
Observe that ~
1
= d
1
unless I = 0 or d
2
= 0. Thus the short and long regressions have the
same coe¢cient on i
1j
only under one of two conditions. First, the regression of i
2j
on i
1j
yields
a set of zero coe¢cients (they are uncorrelated), or second, the coe¢cient on i
2j
in (4.23) is zero.
In general, least-squares estimation of (4.24) is an estimate of ~
1
= d
1
+Id
2
rather than d
1
. The
di¤erence Id
2
is known as omitted variable bias. It is the consequence of omission of a relevant
correlated variable.
To avoid omitted variables bias the standard advice is to include potentially relevant variables
in the estimated model. By construction, the general model will be free of the omitted variables
problem. Typically there are limits, as many desired variables are not available in a given dataset.
In this case, the possibility of omitted variables bias should be acknowledged and discussed in the
course of an empirical investigation.
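A minimal simulation sketch of the omitted variable bias formula is below: the short regression of $y$ on $x_1$ alone recovers $\gamma_1 = \beta_1 + \Gamma\beta_2$, not $\beta_1$. The numerical values of $\beta_1$, $\beta_2$ and $\Gamma$ are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + rng.standard_normal(n)            # regression of x2 on x1 has slope 0.8
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(n)  # beta1 = 1.0, beta2 = 0.5

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

X_long  = np.column_stack([np.ones(n), x1, x2])
X_short = np.column_stack([np.ones(n), x1])

print("long regression,  coef on x1:", ols(X_long, y)[1])    # about 1.0
print("short regression, coef on x1:", ols(X_short, y)[1])   # about 1.0 + 0.8*0.5 = 1.4
```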
4.13 Normal Regression Model
In the special case of the normal linear regression model introduced in Section 3.11, we can derive exact sampling distributions for the least-squares estimator, residuals, and variance estimator.
In particular, under the normality assumption $e_i \mid x_i \sim N\left(0, \sigma^2\right)$ then we have the multivariate implication
$$e \mid X \sim N\left(0, I_n\sigma^2\right).$$
That is, the error vector $e$ is independent of $X$ and is normally distributed. Since linear functions of normals are also normal, this implies that conditional on $X$
$$\begin{pmatrix} \hat\beta - \beta \\ \hat{e} \end{pmatrix} = \begin{pmatrix} \left(X'X\right)^{-1}X' \\ M \end{pmatrix}e \sim N\left(0, \begin{pmatrix} \sigma^2\left(X'X\right)^{-1} & 0 \\ 0 & \sigma^2 M \end{pmatrix}\right)$$
where $M = I_n - X\left(X'X\right)^{-1}X'$. Since uncorrelated normal variables are independent, it follows that $\hat\beta$ is independent of any function of the OLS residuals including the estimated error variance $s^2$ or $\hat\sigma^2$ or prediction errors $\tilde{e}$.
The spectral decomposition of $M$ yields
$$M = H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'$$
(see equation (A.4)) where $H'H = I_n$. Let $u = \sigma^{-1}H'e \sim N\left(0, H'H\right) \sim N\left(0, I_n\right)$. Then
$$\frac{n\hat\sigma^2}{\sigma^2} = \frac{(n-k)s^2}{\sigma^2} = \frac{1}{\sigma^2}\hat{e}'\hat{e} = \frac{1}{\sigma^2}e'Me = \frac{1}{\sigma^2}e'H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'e = u'\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}u \sim \chi^2_{n-k},$$
a chi-square distribution with $n - k$ degrees of freedom.
Furthermore, if standard errors are calculated using the homoskedastic formula (4.20),
$$\frac{\hat\beta_j - \beta_j}{s(\hat\beta_j)} = \frac{\hat\beta_j - \beta_j}{s\sqrt{\left[\left(X'X\right)^{-1}\right]_{jj}}} \sim \frac{N\left(0, \sigma^2\left[\left(X'X\right)^{-1}\right]_{jj}\right)}{\sqrt{\frac{\sigma^2}{n-k}\chi^2_{n-k}\left[\left(X'X\right)^{-1}\right]_{jj}}} = \frac{N(0,1)}{\sqrt{\frac{\chi^2_{n-k}}{n-k}}} \sim t_{n-k},$$
a $t$ distribution with $n - k$ degrees of freedom.
Theorem 4.13.1 Normal Regression
In the linear regression model (Assumption 4.1.1) if $e_i$ is independent of $x_i$ and distributed $N\left(0, \sigma^2\right)$ then
• $\hat\beta - \beta \sim N\left(0, \sigma^2\left(X'X\right)^{-1}\right)$
• $\frac{n\hat\sigma^2}{\sigma^2} = \frac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k}$
• $\frac{\hat\beta_j - \beta_j}{s(\hat\beta_j)} \sim t_{n-k}$
These are the exact finite-sample distributions of the least-squares estimator and variance estimators, and are the basis for traditional inference in linear regression.
While elegant, the difficulty in applying Theorem 4.13.1 is that the normality assumption is too restrictive to be empirically plausible, and therefore inference based on Theorem 4.13.1 has no guarantee of accuracy. We develop a more broadly-applicable inference theory based on large sample (asymptotic) approximations in the following chapter.
Exercises
Exercise 4.1 Explain the difference between $\frac{1}{n}\sum_{i=1}^{n}x_i x_i'$ and $E\left(x_i x_i'\right)$.
Exercise 4.2 True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $E(e_i \mid x_i) = 0$, and $\hat{e}_i$ is the OLS residual from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^{n}x_i^2\hat{e}_i = 0$.
Exercise 4.3 Prove Theorem 4.5.1.2.
Exercise 4.4 In a linear model
$$y = X\beta + e, \quad E(e \mid X) = 0, \quad \mathrm{var}(e \mid X) = \sigma^2\Sigma$$
with $\Sigma$ known, the GLS estimator is
$$\tilde\beta = \left(X'\Sigma^{-1}X\right)^{-1}\left(X'\Sigma^{-1}y\right),$$
the residual vector is $\hat{e} = y - X\tilde\beta$, and an estimate of $\sigma^2$ is
$$s^2 = \frac{1}{n-k}\hat{e}'\Sigma^{-1}\hat{e}.$$
(a) Why is this a reasonable estimator for $\sigma^2$?
(b) Prove that $\hat{e} = M_1 e$, where $M_1 = I - X\left(X'\Sigma^{-1}X\right)^{-1}X'\Sigma^{-1}$.
(c) Prove that $M_1'\Sigma^{-1}M_1 = \Sigma^{-1} - \Sigma^{-1}X\left(X'\Sigma^{-1}X\right)^{-1}X'\Sigma^{-1}$.
Exercise 4.5 Let $(y_i, x_i)$ be a random sample with $E(y \mid X) = X\beta$. Consider the Weighted Least Squares (WLS) estimator of $\beta$
$$\tilde\beta = \left(X'WX\right)^{-1}\left(X'Wy\right)$$
where $W = \mathrm{diag}\left(w_1, ..., w_n\right)$ and $w_i = x_{ji}^{-2}$, where $x_{ji}$ is one of the $x_i$.
(a) In which contexts would $\tilde\beta$ be a good estimator?
(b) Using your intuition, in which situations would you expect that $\tilde\beta$ would perform better than OLS?
Exercise 4.6 Show (4.19) in the homoskedastic regression model.
Exercise 4.7 Show (4.21) and (4.22) in the homoskedastic regression model.
Chapter 5
Asymptotic Theory
5.1 Introduction
As discussed in Section 4.2, the OLS estimator $\hat\beta$ has an unknown statistical distribution. Inference (confidence intervals and hypothesis testing) requires useful approximations to the sampling distribution. The most widely used and versatile method is asymptotic theory, which approximates sampling distributions by taking the limit of the finite sample distribution as the sample size $n$ tends to infinity. The primary tools of asymptotic theory are the weak law of large numbers (WLLN), central limit theorem (CLT), and continuous mapping theorem (CMT). With these tools we can approximate the sampling distributions of most econometric estimators.
It turns out that most of this theory applies equally to the projection model and the linear conditional mean model, and therefore the results in this chapter will be stated for the broader projection model unless otherwise stated. Throughout this chapter we maintain the following.
Assumption 5.1.1 Linear Projection Model
The observations $(y_i, x_i)$ come from a random sample with finite second moments
$$Ey_i^2 < \infty \quad \text{and} \quad Ex_{ji}^2 < \infty$$
for $j = 1, ..., k$, and an invertible design matrix
$$Q = E\left(x_i x_i'\right) > 0.$$
From Theorems 2.9.1 and 2.9.2, under Assumption 5.1.1 the variables satisfy the linear projection equation
$$y_i = x_i'\beta + e_i$$
$$E(x_i e_i) = 0$$
$$\beta = \left(E\left(xx'\right)\right)^{-1}E(xy).$$
A review of the most important tools in asymptotic theory is contained in Appendix C.
5.2 Weak Law of Large Numbers
At the beginning of Chapter 4, we showed in Figure 4.1 how the sampling density of the least-squares estimator varies with the sample size $n$. It is possible to see in the figure that the sampling density concentrates about the true parameter value as the sample size increases. This is the property of estimator consistency – convergence in probability to the true parameter value. In this section we review the core theory explaining this phenomenon.
At its heart, estimator consistency is the effect of sample size on the variance of the sample mean. To review, suppose $y_i$ is an iid random variable with finite mean $Ey_i = \mu$ and variance $E(y_i - \mu)^2 = \sigma^2$, and consider the sample mean $\hat\mu = \frac{1}{n}\sum_{i=1}^{n}y_i$. The mean and variance of $\hat\mu$ are
$$E\hat\mu = E\frac{1}{n}\sum_{i=1}^{n}y_i = \frac{1}{n}\sum_{i=1}^{n}Ey_i = \mu$$
and
$$\mathrm{var}(\hat\mu) = E(\hat\mu - \mu)^2 = E\left(\frac{1}{n}\sum_{i=1}^{n}(y_i - \mu)\right)^2 = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}E(y_i - \mu)(y_j - \mu) = \frac{1}{n^2}\sum_{i=1}^{n}\sigma^2 = \frac{\sigma^2}{n},$$
where the second-to-last equality is because $E(y_i - \mu)(y_j - \mu) = \sigma^2$ for $i = j$ yet $E(y_i - \mu)(y_j - \mu) = 0$ for $i \neq j$ due to independence.
We see that $\mathrm{var}(\hat\mu) = \sigma^2/n$ which is decreasing in $n$ (as long as $\sigma^2 < \infty$). It follows that $\mathrm{var}(\hat\mu) = \sigma^2/n \to 0$ as $n \to \infty$. This means that the distribution of $\hat\mu$ is increasingly concentrated about its mean $\mu$ as $n$ increases.
To be more precise, for any $\delta > 0$, an application of Chebyshev's inequality yields
$$P(|\hat\mu - \mu| > \delta) \leq \frac{\mathrm{var}(\hat\mu)}{\delta^2} = \frac{\sigma^2/n}{\delta^2} \to 0$$
as $n \to \infty$. This says that the probability that $\hat\mu$ differs from $\mu$ by more than $\delta$ declines to zero as $n \to \infty$. Equivalently, the distribution of $\hat\mu$ becomes concentrated within the region $[\mu - \delta, \mu + \delta]$ as $n$ diverges. As this holds for any $\delta$ (even an extremely small value) it is reasonable to say that the distribution of $\hat\mu$ concentrates about $\mu$ as $n$ increases.
We have described three distinct but intertwined concepts: convergence in probability (concentration of a sampling distribution), consistency (convergence in probability of an estimator to the parameter value), and the weak law of large numbers (convergence in probability of the sample mean). We now state these concepts formally.
Definition 5.2.1 We say that a random variable $z_n \in \mathbb{R}$ converges in probability to $z$ as $n \to \infty$, denoted $z_n \xrightarrow{p} z$, if for all $\delta > 0$,
$$\lim_{n \to \infty}P(|z_n - z| > \delta) = 0.$$
Definition 5.2.2 An estimator $\hat\theta$ of a parameter $\theta$ is consistent if $\hat\theta \xrightarrow{p} \theta$ as $n \to \infty$.
Consistency is a good property for an estimator to possess. It means that for any given data distribution, there is a sample size $n$ sufficiently large such that the estimator $\hat\theta$ will be arbitrarily close to the true value $\theta$ with high probability.
Above, we showed that the sample mean $\hat\mu$ converges in probability to the population mean $\mu$ as $n \to \infty$, and is thus consistent for $\mu$. This result is known as the weak law of large numbers.
Theorem 5.2.1 Weak Law of Large Numbers (WLLN)
If $y_i \in \mathbb{R}$ is iid and $E|y_i| < \infty$, then
$$\bar{y}_n = \frac{1}{n}\sum_{i=1}^{n}y_i \xrightarrow{p} E(y_i)$$
as $n \to \infty$.
Theorem 5.2.2 WLLN for Random Matrices
If $Y_i \in \mathbb{R}^{k \times r}$ is iid and $E|y_{jli}| < \infty$ for $1 \leq j \leq k$ and $1 \leq l \leq r$ then
$$\bar{Y}_n = \frac{1}{n}\sum_{i=1}^{n}Y_i \xrightarrow{p} E(Y_i)$$
as $n \to \infty$.
In our derivation, we proved the WLLN under the assumption that $y_i$ has a finite variance. Theorem 5.2.1 states that the WLLN holds under the weaker assumption of a finite mean. We provide a proof of this more general result for the technically-inclined reader.
Proof of Theorem 5.2.1: Without loss of generality, we can assume $E(y_i) = 0$ by recentering $y_i$ on its expectation.
We need to show that for all $\delta > 0$ and $\eta > 0$ there is some $N < \infty$ so that for all $n \geq N$, $P(|\bar{y}_n| > \delta) \leq \eta$. Fix $\delta$ and $\eta$. Set $\varepsilon = \delta\eta/3$. Pick $C < \infty$ large enough so that
$$E\left(|y_i|\,1(|y_i| > C)\right) \leq \varepsilon \qquad (5.1)$$
(where $1(\cdot)$ is the indicator function) which is possible since $E|y_i| < \infty$. Define the random variables
$$w_i = y_i\,1(|y_i| \leq C) - E\left(y_i\,1(|y_i| \leq C)\right)$$
$$z_i = y_i\,1(|y_i| > C) - E\left(y_i\,1(|y_i| > C)\right).$$
By the Triangle Inequality (A.8), the Expectation Inequality (C.2), and (5.1),
$$E|\bar{z}_n| = E\left|\frac{1}{n}\sum_{i=1}^{n}z_i\right| \leq \frac{1}{n}\sum_{i=1}^{n}E|z_i| = E|z_i| \leq E|y_i|\,1(|y_i| > C) + \left|E\left(y_i\,1(|y_i| > C)\right)\right| \leq 2E|y_i|\,1(|y_i| > C) \leq 2\varepsilon. \qquad (5.2)$$
By Jensen's Inequality (C.1), the fact that the $w_i$ are iid and mean zero, and the bound $|w_i| \leq 2C$,
$$\left(E|\bar{w}_n|\right)^2 \leq E\bar{w}_n^2 = \frac{Ew_i^2}{n} \leq \frac{4C^2}{n} \leq \varepsilon^2 \qquad (5.3)$$
the final inequality holding for $n \geq 4C^2/\varepsilon^2 = 36C^2/(\delta^2\eta^2)$.
Finally, by Markov's Inequality (C.6), the fact that $\bar{y}_n = \bar{w}_n + \bar{z}_n$, the triangle inequality, (5.2) and (5.3),
$$P(|\bar{y}_n| > \delta) \leq \frac{E|\bar{y}_n|}{\delta} \leq \frac{E|\bar{w}_n| + E|\bar{z}_n|}{\delta} \leq \frac{3\varepsilon}{\delta} = \eta,$$
the equality by the definition of $\varepsilon$. We have shown that for any $\delta > 0$ and $\eta > 0$ then for all $n \geq 36C^2/(\delta^2\eta^2)$, $P(|\bar{y}_n| > \delta) \leq \eta$, as needed.
Proof of Theorem 5.2.2: A random vector or matrix converges in probability to its limit if (and only if) all elements in the vector or matrix converge in probability. Since each element of $Y_i$ has a finite mean by assumption, Theorem 5.2.1 applies to each element and therefore each element converges in probability, as needed.
Jacob Bernoulli
Jacob Bernoulli (1654-1705) of Switzerland was one of many famous mathematicians in the Bernoulli family. One of Jacob Bernoulli's important contributions was the first proof of
the weak law of large numbers, published in his posthumous masterpiece Ars Conjectandi.
5.3 Consistency of Least-Squares Estimation
In this section we use the WLLN and continuous mapping theorem (CMT, Theorem C.3.1) to show that the least-squares estimator $\hat\beta$ is consistent for the projection coefficient $\beta$.
This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the weak law of large numbers (WLLN, Theorem 5.2.1) shows that sample moments converge in probability to population moments. And third, the continuous mapping theorem (CMT, Theorem C.3.1) states that continuous functions preserve convergence in probability. We now explain each step in brief and then in greater detail.
First, observe that the OLS estimator
$$\hat\beta = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i y_i\right)$$
is a function of the sample moments $\frac{1}{n}\sum_{i=1}^{n}x_i x_i'$ and $\frac{1}{n}\sum_{i=1}^{n}x_i y_i$.
Second, by an application of the WLLN these sample moments converge in probability to the population moments. Specifically, as $n \to \infty$,
$$\frac{1}{n}\sum_{i=1}^{n}x_i x_i' \xrightarrow{p} E\left(x_i x_i'\right) = Q \qquad (5.4)$$
and
$$\frac{1}{n}\sum_{i=1}^{n}x_i y_i \xrightarrow{p} E(x_i y_i). \qquad (5.5)$$
Third, the CMT allows us to combine these equations to show that $\hat\beta$ converges in probability to $\beta$. Specifically, as $n \to \infty$,
$$\hat\beta = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i y_i\right) \xrightarrow{p} \left(E\left(x_i x_i'\right)\right)^{-1}\left(E(x_i y_i)\right) = \beta. \qquad (5.6)$$
We have shown that $\hat\beta \xrightarrow{p} \beta$ as $n \to \infty$. In words, the OLS estimator converges in probability to the projection coefficient vector $\beta$ as the sample size $n$ gets large.
For a slightly different demonstration of this result, recall that (4.6) implies that
$$\hat\beta - \beta = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i e_i\right). \qquad (5.7)$$
The WLLN and (2.14) imply
$$\frac{1}{n}\sum_{i=1}^{n}x_i e_i \xrightarrow{p} E(x_i e_i) = 0. \qquad (5.8)$$
Therefore
$$\hat\beta - \beta = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i e_i\right) \xrightarrow{p} Q^{-1}0 = 0$$
which is the same as $\hat\beta \xrightarrow{p} \beta$.
Theorem 5.3.1 Consistency of Least-Squares
Under Assumption 5.1.1, $\hat\beta \xrightarrow{p} \beta$ as $n \to \infty$.
Theorem 5.3.1 states that the OLS estimator $\hat\beta$ converges in probability to $\beta$ as $n$ diverges to positive infinity, and thus $\hat\beta$ is consistent for $\beta$.
We now explain the application of the WLLN in (5.4) and (5.5) and the CMT in (5.6) in greater detail.
The weak law of large numbers (Theorem 5.2.1, Section 5.2) says that when random variables are iid and have finite mean, then sample averages converge in probability to their population mean. Thus to apply the WLLN to (5.4) and (5.5) it is sufficient to verify that the elements of the random matrices $x_i x_i'$ and $x_i y_i$ are iid and have finite mean. First, these random variables are iid because the observations $(y_i, x_i)$ are mutually independent and identically distributed (Assumption 1.5.1), and so are any functions of the observations, including $x_i x_i'$ and $x_i y_i$. Second, Assumption 5.1.1 is sufficient to ensure that for $1 \leq j \leq k$ and $1 \leq l \leq k$, $E|x_{ji}x_{li}| < \infty$ and $E|x_{ji}y_i| < \infty$. Indeed, by an application of the Cauchy-Schwarz inequality and Assumption 5.1.1
$$E|x_{ji}x_{li}| \leq \left(Ex_{ji}^2\,Ex_{li}^2\right)^{1/2} < \infty$$
and
$$E|x_{ji}y_i| \leq \left(Ex_{ji}^2\,Ey_i^2\right)^{1/2} < \infty.$$
We have verified the conditions for the WLLN, and thus (5.4) and (5.5).
The final step of the proof is the application of the continuous mapping theorem to obtain (5.6). To fully understand its application we walk through it in detail. We can write
$$\hat\beta = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i y_i\right) = g\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i',\ \frac{1}{n}\sum_{i=1}^{n}x_i y_i\right)$$
where $g(A, b) = A^{-1}b$ is a function of $A$ and $b$. The function $g(A, b)$ is a continuous function of $A$ and $b$ at all values of the arguments such that $A^{-1}$ exists. Assumption 5.1.1 implies that $Q^{-1}$ exists and thus $g(A, b)$ is continuous at $A = Q$. Hence by the continuous mapping theorem (Theorem C.3.1), as $n \to \infty$,
$$\hat\beta = g\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i',\ \frac{1}{n}\sum_{i=1}^{n}x_i y_i\right) \xrightarrow{p} g\left(Q, E(x_i y_i)\right) = E\left(x_i x_i'\right)^{-1}E(x_i y_i) = \beta.$$
This completes the proof of Theorem 5.3.1.
5.4 Asymptotic Normality
We started this chapter discussing the need for an approximation to the distribution of the OLS estimator $\hat\beta$. In Section 5.3 we showed that $\hat\beta$ converges in probability to $\beta$. Consistency is a useful first step, but in itself does not provide a useful approximation to the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution.
The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. The steps are as follows.
Take equation (5.7) and multiply it by $\sqrt{n}$. This yields the expression
$$\sqrt{n}\left(\hat\beta - \beta\right) = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_i e_i\right). \qquad (5.9)$$
This shows that the normalized and centered estimator $\sqrt{n}\left(\hat\beta - \beta\right)$ is a function of the sample average $\frac{1}{n}\sum_{i=1}^{n}x_i x_i'$ and the normalized sample average $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_i e_i$. Furthermore, the latter has mean zero so the central limit theorem (CLT) applies.
Central Limit Theorem (Theorem C.2.1)
If $u_i \in \mathbb{R}^k$ is iid, $Eu_i = 0$ and $Eu_{ji}^2 < \infty$ for $j = 1, ..., k$, then as $n \to \infty$
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i \xrightarrow{d} N\left(0, E\left(u_i u_i'\right)\right).$$
For our application, $u_i = x_i e_i$ which is iid (since the observations are iid) and mean zero (since $E(x_i e_i) = 0$). We calculate that $E\left(u_i u_i'\right) = E\left(x_i x_i'e_i^2\right)$. By the CLT we conclude
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_i e_i \xrightarrow{d} N(0, \Omega) \qquad (5.10)$$
as $n \to \infty$, where
$$\Omega = E\left(x_i x_i'e_i^2\right). \qquad (5.11)$$
Putting these steps together, using (5.4), (5.9), and (5.10),
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} Q^{-1}N(0, \Omega) = N\left(0, Q^{-1}\Omega Q^{-1}\right)$$
as $n \to \infty$, where the final equality follows from the property that linear combinations of normal vectors are also normal (Theorem B.9.1).
Formally, (5.10) requires that the elements of $u_i = x_i e_i$ have finite variances. Indeed, if this is not true then (5.11) is not well defined and (5.10) does not make sense. A sufficient condition can be found as follows. For any $j = 1, ..., k$, by the Cauchy-Schwarz Inequality (C.3), note that
$$E|x_{ji}e_i|^2 = E\left(x_{ji}^2 e_i^2\right) \leq \left(Ex_{ji}^4\right)^{1/2}\left(Ee_i^4\right)^{1/2} \qquad (5.12)$$
which is finite if $x_{ji}$ and $e_i$ have finite fourth moments. As $e_i$ is a linear combination of $y_i$ and $x_i$, it is sufficient that the observables have finite fourth moments.
Assumption 5.4.1 In addition to Assumption 5.1.1, $Ey_i^4 < \infty$ and for $j = 1, ..., k$, $Ex_{ji}^4 < \infty$.
We have derived the asymptotic normal approximation to the distribution of the least-squares
estimator.
Theorem 5.4.1 Asymptotic Normality of Least-Squares Estimator
Under Assumption 5.4.1, as $n \to \infty$
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N(0, V_\beta)$$
where
$$V_\beta = Q^{-1}\Omega Q^{-1}.$$
As $V_\beta$ is the variance of the asymptotic distribution of $\sqrt{n}\left(\hat\beta - \beta\right)$, $V_\beta$ is often referred to as the asymptotic covariance matrix of $\hat\beta$. The expression $V_\beta = Q^{-1}\Omega Q^{-1}$ is called a sandwich form.
Theorem 5.4.1 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds true for all joint distributions of $(y_i, x_i)$ which satisfy the conditions of Assumption 5.4.1. However, for any fixed $n$ the sampling distribution of $\hat\beta$ can be arbitrarily far from the normal distribution. In Figure 4.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes.
There is a special case where $\Omega$ and $V_\beta$ simplify. We say that $e_i$ is a Homoskedastic Projection Error when
$$\mathrm{cov}\left(x_i x_i', e_i^2\right) = 0. \qquad (5.13)$$
Condition (5.13) holds in the homoskedastic linear regression model, but is somewhat broader. Under (5.13) the asymptotic variance formulas simplify as
$$\Omega = E\left(x_i x_i'\right)E\left(e_i^2\right) = Q\sigma^2 \qquad (5.14)$$
$$V_\beta = Q^{-1}\Omega Q^{-1} = Q^{-1}\sigma^2 \equiv V^0_\beta. \qquad (5.15)$$
In (5.15) we define $V^0_\beta = Q^{-1}\sigma^2$ whether (5.13) is true or false. When (5.13) is true then $V_\beta = V^0_\beta$, otherwise $V_\beta \neq V^0_\beta$. We call $V^0_\beta$ the homoskedastic covariance matrix.
The asymptotic distribution of Theorem 5.4.1 is commonly used to approximate the finite sample distribution of $\sqrt{n}\left(\hat\beta - \beta\right)$. The approximation may be poor when $n$ is small. How large should $n$ be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let $y_i = \beta_0 + \beta_1 x_i + e_i$ where $x_i$ is $N(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e) = \frac{\alpha}{2}|e|^{-\alpha-1}$, $|e| \geq 1$. If $\alpha > 2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha - 2)$. As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat\beta_1 - \beta_1\right)$ has the $N(0,1)$ asymptotic distribution for any $\alpha > 2$. In Figure 5.1 we display the finite sample densities of the normalized estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat\beta_1 - \beta_1\right)$, setting $n = 100$ and varying the parameter $\alpha$. For $\alpha = 3.0$ the density is very close to the $N(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero.
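A minimal simulation sketch of this Double Pareto example is below. The values of the intercept and slope and the number of replications are assumptions made for illustration; the inverse-CDF draw implements the stated error density.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 5000

def normalized_slopes(alpha):
    out = np.empty(reps)
    for r in range(reps):
        x = rng.standard_normal(n)
        # |e| ~ Pareto(alpha) on [1, inf) via inverse CDF, with a random sign
        e = rng.uniform(size=n) ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], n)
        y = 1.0 + 1.0 * x + e                     # beta0 = beta1 = 1
        X = np.column_stack([np.ones(n), x])
        b1 = np.linalg.solve(X.T @ X, X.T @ y)[1]
        out[r] = np.sqrt(n * (alpha - 2) / alpha) * (b1 - 1.0)
    return out

for alpha in (3.0, 2.5, 2.1):
    z = normalized_slopes(alpha)
    print(f"alpha={alpha}:  sd={z.std():5.2f}  P(|Z|<=1.96)={np.mean(np.abs(z) <= 1.96):.3f}")
```

As alpha falls toward 2 the finite-sample distribution drifts away from the N(0,1) benchmark even though the asymptotic approximation holds for every alpha greater than 2.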
Vilfredo Pareto
Vilfredo Pareto (1848-1923) of Italy was a major economic theorist, introducing the economic concept of Pareto efficiency. His major econometric contribution was the Pareto (or power law) distribution which is commonly used to model the empirical distribution of wealth.
Another example is shown in Figure 5.2. Here the model is $y_i = \beta + e_i$ where
$$e_i = \frac{u_i^k - E\left(u_i^k\right)}{\left(E\left(u_i^{2k}\right) - \left(E\left(u_i^k\right)\right)^2\right)^{1/2}} \qquad (5.16)$$
and $u_i \sim N(0,1)$. We show the sampling distribution of $\sqrt{n}\left(\hat\beta - \beta\right)$ setting $n = 100$, for $k = 1$, 4, 6 and 8. As $k$ increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 5.1 and 5.2 is that the $N(0,1)$ asymptotic approximation is never guaranteed to be accurate.
[Figure 5.1: Density of Normalized OLS estimator with Double Pareto Error]
5.5 Consistency of Sample Variance Estimators
Using the methods of Section 5.3 we can show that the estimators $\hat\sigma^2$ and $s^2$ are consistent for $\sigma^2$.
Theorem 5.5.1 Under Assumption 5.1.1, $\hat\sigma^2 \xrightarrow{p} \sigma^2$ and $s^2 \xrightarrow{p} \sigma^2$ as $n \to \infty$.
One implication of this theorem is that multiple estimators can be consistent for the same population parameter. While $\hat\sigma^2$ and $s^2$ are unequal in any given application, they are close in value when $n$ is very large.
[Figure 5.2: Density of Normalized OLS estimator with error process (5.16)]
Proof of Theorem 5.5.1. Note that
$$\hat{e}_i = y_i - x_i'\hat\beta = e_i + x_i'\beta - x_i'\hat\beta = e_i - x_i'\left(\hat\beta - \beta\right).$$
Thus
$$\hat{e}_i^2 = e_i^2 - 2e_i x_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'x_i x_i'\left(\hat\beta - \beta\right) \qquad (5.17)$$
and
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}e_i^2 - 2\left(\frac{1}{n}\sum_{i=1}^{n}e_i x_i'\right)\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)\left(\hat\beta - \beta\right) \xrightarrow{p} \sigma^2$$
as $n \to \infty$, the last line using the WLLN, (5.4), (5.8) and Theorem 5.3.1. Thus $\hat\sigma^2$ is consistent for $\sigma^2$.
Finally, since $n/(n-k) \to 1$ as $n \to \infty$, it follows that as $n \to \infty$,
$$s^2 = \left(\frac{n}{n-k}\right)\hat\sigma^2 \xrightarrow{p} \sigma^2.$$
5.6 Consistent Covariance Matrix Estimation
In Sections 4.8 and 4.9 we introduced estimators of the finite-sample covariance matrix of the least-squares estimator in the regression model. In this section we show that these estimators, when normalized, are consistent for the asymptotic covariance matrix.
First, consider $\hat{V}^0_{\hat\beta}$, the covariance matrix estimate constructed under the assumption of homoskedasticity. Writing
$$\hat{Q} = \frac{1}{n}\sum_{i=1}^{n}x_i x_i' = \frac{1}{n}X'X$$
as the moment estimator of $Q$, we can write the covariance matrix estimator as
$$\hat{V}^0_{\hat\beta} = \left(\frac{1}{n}X'X\right)^{-1}s^2 = \hat{Q}^{-1}s^2.$$
Since $\hat{Q} \xrightarrow{p} Q$ and $s^2 \xrightarrow{p} \sigma^2$ (see (5.4) and Theorem 5.5.1), and by the invertibility of $Q$ (Assumption 5.1.1), it follows that
$$\hat{V}^0_{\hat\beta} = \hat{Q}^{-1}s^2 \xrightarrow{p} Q^{-1}\sigma^2 = V^0_\beta$$
so that $\hat{V}^0_{\hat\beta}$ is consistent for $V^0_\beta$, the homoskedastic covariance matrix.
Theorem 5.6.1 Under Assumption 5.1.1, $\hat{V}^0_{\hat\beta} \xrightarrow{p} V^0_\beta$ as $n \to \infty$.
Now consider $\hat{V}_{\hat\beta}$, the White covariance matrix estimator. Writing
$$\hat\Omega = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'\hat{e}_i^2 \qquad (5.18)$$
as the moment estimator for $\Omega = E\left(x_i x_i'e_i^2\right)$, then
$$\hat{V}_{\hat\beta} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1} = \hat{Q}^{-1}\hat\Omega\hat{Q}^{-1}.$$
With some work, we can show that $\hat\Omega$ is consistent for $\Omega$. Combined with the consistency of $\hat{Q}$ for $Q$ and the invertibility of $Q$ we find that $\hat{V}_{\hat\beta}$ converges in probability to $Q^{-1}\Omega Q^{-1} = V_\beta$.
Theorem 5.6.2 Under Assumption 5.4.1, $\hat\Omega \xrightarrow{p} \Omega$ and $\hat{V}_{\hat\beta} \xrightarrow{p} V_\beta$ as $n \to \infty$.
To illustrate, we return to the log wage regression (3.9) of Section 3.3. We calculate that $s^2 = 0.20$ and
$$\hat\Omega = \begin{pmatrix} 0.199 & 2.80 \\ 2.80 & 40.6 \end{pmatrix}.$$
Therefore the two covariance matrix estimates are
$$\hat{V}^0_{\hat\beta} = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1}0.20 = \begin{pmatrix} 6.98 & -0.480 \\ -0.480 & 0.039 \end{pmatrix}$$
and
$$\hat{V}_{\hat\beta} = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1}\begin{pmatrix} 0.199 & 2.80 \\ 2.80 & 40.6 \end{pmatrix}\begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1} = \begin{pmatrix} 7.20 & -0.493 \\ -0.493 & 0.035 \end{pmatrix}.$$
In this case the two estimates are quite similar. The (White) standard error for $\hat\beta_0$ is $\sqrt{7.2/988} = 0.085$ and that for $\hat\beta_1$ is $\sqrt{0.035/988} = 0.006$. We can write the estimated equation with standard errors using the format
$$\widehat{\log(Wage)} = \underset{(.08)}{1.33} + \underset{(.006)}{0.115}\,Education.$$
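A minimal sketch of these two normalized estimates, $\hat{Q}^{-1}s^2$ and $\hat{Q}^{-1}\hat\Omega\hat{Q}^{-1}$, is below. The data are simulated stand-ins (the CPS numbers reported above are not reproduced); the design is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 988
educ = rng.integers(8, 18, n).astype(float)
y = 1.3 + 0.115 * educ + 0.45 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), educ])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
ehat = y - X @ beta_hat

Q_hat = X.T @ X / n
Omega_hat = (X * (ehat**2)[:, None]).T @ X / n
s2 = (ehat**2).sum() / (n - X.shape[1])

V0 = np.linalg.inv(Q_hat) * s2                                   # homoskedastic
V  = np.linalg.inv(Q_hat) @ Omega_hat @ np.linalg.inv(Q_hat)     # White

print("homoskedastic s.e.:", np.sqrt(np.diag(V0 / n)))
print("White s.e.:        ", np.sqrt(np.diag(V / n)))
```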
Proof of Theorem 5.6.2. We first show $\hat\Omega \xrightarrow{p} \Omega$. Using (5.17),
$$\hat\Omega = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'\hat{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'e_i^2 - \frac{2}{n}\sum_{i=1}^{n}x_i x_i'\left(\hat\beta - \beta\right)'x_i e_i + \frac{1}{n}\sum_{i=1}^{n}x_i x_i'\left(\left(\hat\beta - \beta\right)'x_i\right)^2. \qquad (5.19)$$
We now examine each $k \times k$ sum on the right-hand-side of (5.19) in turn.
Take the first term on the right-hand-side of (5.19). The $jl$'th element of $x_i x_i'e_i^2$ is $x_{ji}x_{li}e_i^2$. Using the Cauchy-Schwarz Inequality (C.3) twice and Assumption 5.4.1,
$$E\left|x_{ji}x_{li}e_i^2\right| \leq \left(Ex_{ji}^2 x_{li}^2\right)^{1/2}\left(Ee_i^4\right)^{1/2} \leq \left(Ex_{ji}^4\right)^{1/4}\left(Ex_{li}^4\right)^{1/4}\left(Ee_i^4\right)^{1/2}.$$
Since this expectation is finite, we can apply the WLLN (Theorem 5.2.1) to find that
$$\frac{1}{n}\sum_{i=1}^{n}x_i x_i'e_i^2 \xrightarrow{p} E\left(x_i x_i'e_i^2\right) = \Omega.$$
Now take the second term on the right-hand-side of (5.19). Applying the Triangle Inequality (A.8) to the matrix Euclidean norm, the Matrix Schwarz Inequality (A.7), equation (A.5) and the Schwarz Inequality (A.6),
$$\left\|\frac{2}{n}\sum_{i=1}^{n}x_i x_i'\left(\hat\beta - \beta\right)'x_i e_i\right\| \leq \frac{2}{n}\sum_{i=1}^{n}\left\|x_i x_i'\left(\hat\beta - \beta\right)'x_i e_i\right\| \leq \frac{2}{n}\sum_{i=1}^{n}\left\|x_i x_i'\right\|\left|\left(\hat\beta - \beta\right)'x_i\right||e_i| \leq \left(\frac{2}{n}\sum_{i=1}^{n}\|x_i\|^3|e_i|\right)\left\|\hat\beta - \beta\right\|. \qquad (5.20)$$
Using Hölder's inequality (C.4) and Assumption 5.4.1,
$$E\left(\|x_i\|^3|e_i|\right) \leq \left(E\|x_i\|^4\right)^{3/4}\left(Ee_i^4\right)^{1/4} < \infty.$$
By the WLLN
$$\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^3|e_i| \xrightarrow{p} E\left(\|x_i\|^3|e_i|\right) < \infty.$$
Since $\hat\beta - \beta \xrightarrow{p} 0$ it follows that (5.20) converges in probability to zero. This shows that the second term on the right-hand-side of (5.19) converges in probability to zero.
We now take the third term in (5.19). Again by the Triangle Inequality, the Matrix Schwarz Inequality, (A.5) and the Schwarz Inequality,
$$\left\|\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\left(\left(\hat\beta - \beta\right)'x_i\right)^2\right\| \leq \frac{1}{n}\sum_{i=1}^{n}\left\|x_i x_i'\right\|\left(\left(\hat\beta - \beta\right)'x_i\right)^2 \leq \left(\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^4\right)\left\|\hat\beta - \beta\right\|^2 \xrightarrow{p} 0,$$
the final convergence since $\hat\beta - \beta \xrightarrow{p} 0$ and $\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^4 \xrightarrow{p} E\|x_i\|^4 < \infty$ under Assumption 5.4.1. This shows that the third term on the right-hand-side of (5.19) converges in probability to zero.
Considering the three terms on the right-hand-side of (5.19), we have shown that the first term converges in probability to $\Omega$, and the second and third converge in probability to zero. We conclude that $\hat\Omega \xrightarrow{p} \Omega$ as claimed.
Finally, combined with (5.4) and the invertibility of $Q$,
$$\hat{V}_{\hat\beta} = \hat{Q}^{-1}\hat\Omega\hat{Q}^{-1} \xrightarrow{p} Q^{-1}\Omega Q^{-1} = V_\beta,$$
from which it follows that $\hat{V}_{\hat\beta} \xrightarrow{p} V_\beta$ as $n \to \infty$.
5.7 Functions of Parameters
Sometimes we are interested in some lower-dimensional function of the parameter vector $\beta = (\beta_1, ..., \beta_k)$. For example, we may be interested in a single coefficient $\beta_j$ or a ratio $\beta_j/\beta_l$. In these cases we can write the parameter of interest as a function of $\beta$. Let $h: \mathbb{R}^k \to \mathbb{R}^q$ denote this function and let
$$\theta = h(\beta)$$
denote the parameter of interest. The estimate of $\theta$ is
$$\hat\theta = h(\hat\beta).$$
What is the asymptotic distribution of $\hat\theta$? Assume that $h(\beta)$ is differentiable at the true value of $\beta$. By a first-order Taylor series approximation:
$$h(\hat\beta) \simeq h(\beta) + H_\beta'\left(\hat\beta - \beta\right),$$
where
$$H_\beta = \frac{\partial}{\partial\beta}h(\beta), \quad k \times q.$$
Thus
$$\sqrt{n}\left(\hat\theta - \theta\right) = \sqrt{n}\left(h(\hat\beta) - h(\beta)\right) \simeq H_\beta'\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} H_\beta'N(0, V_\beta) = N(0, V_\theta), \qquad (5.21)$$
where
$$V_\theta = H_\beta'V_\beta H_\beta. \qquad (5.22)$$
The asymptotic approximation (5.21) is often called the delta method because it approximates the distribution of $\hat\theta$ by a first-order expansion. It shows that (at least approximately), nonlinear functions of asymptotically normal estimators are themselves asymptotically normally distributed. It is a very powerful result, as most parameters of interest can be written in this form.
In many cases, the function $h(\beta)$ is linear:
$$h(\beta) = H'\beta$$
for some $k \times q$ matrix $H$. In this case, $H_\beta = H$.
In particular, if $H$ is a "selector matrix"
$$H = \begin{pmatrix} I \\ 0 \end{pmatrix} \qquad (5.23)$$
so that if $\beta = (\beta_1, \beta_2)$, then $\theta = H'\beta = \beta_1$ and
$$V_\theta = \begin{pmatrix} I & 0 \end{pmatrix}V_\beta\begin{pmatrix} I \\ 0 \end{pmatrix} = V_{11},$$
the upper-left block of $V_\beta$. In other words, (5.21)-(5.22) in this case is
$$\sqrt{n}\left(\hat\beta_1 - \beta_1\right) \xrightarrow{d} N(0, V_{11})$$
where $V_{11} = [V]_{11}$.
How do we estimate the covariance matrix for $\hat\theta$? From (5.22) we see we need an estimate of $H_\beta$ and $V_\beta$. We already have an estimate of the latter, $\hat{V}_{\hat\beta}$. To estimate $H_\beta$ we use
$$\hat{H}_\beta = \frac{\partial}{\partial\beta}h(\hat\beta).$$
Putting the parts together we obtain
$$\hat{V}_{\hat\theta} = \hat{H}_\beta'\hat{V}_{\hat\beta}\hat{H}_\beta$$
as the covariance matrix estimator for $\hat\theta$. As the primary justification for $\hat{V}_{\hat\theta}$ is the asymptotic approximation (5.21), $\hat{V}_{\hat\theta}$ is often called an asymptotic covariance matrix estimator.
When $h(\beta)$ is linear,
$$h(\beta) = H'\beta,$$
then $H_\beta = H$ and
$$\hat{V}_{\hat\theta} = H'\hat{V}_{\hat\beta}H.$$
When $H$ takes the form of a selector matrix as in (5.23) then
$$\hat{V}_{\hat\theta} = \hat{V}_{11} = \left[\hat{V}\right]_{11},$$
the upper-left block of the covariance matrix estimate $\hat{V}$.
When $q = 1$ (so $h(\beta)$ is real-valued), the standard error for $\hat\theta$ is the square root of $n^{-1}\hat{V}_{\hat\theta}$, that is,
$$s(\hat\theta) = n^{-1/2}\sqrt{\hat{V}_{\hat\theta}} = n^{-1/2}\sqrt{\hat{H}_\beta'\hat{V}_{\hat\beta}\hat{H}_\beta}.$$
Theorem 5.7.1 Asymptotic Distribution of Functions of Parameters
Under Assumption 5.4.1,
$$\sqrt{n}\left(\hat\theta - \theta\right) \xrightarrow{d} N(0, V_\theta)$$
where
$$V_\theta = H_\beta'V_\beta H_\beta$$
and
$$\hat{V}_{\hat\theta} \xrightarrow{p} V_\theta$$
as $n \to \infty$.
Proof. We showed (5.21) above, so we need only show consistency of the covariance matrix estimator. First, from Theorem 5.6.2, $\hat{V}_{\hat\beta} \xrightarrow{p} V_\beta$. Second, since $\hat\beta \xrightarrow{p} \beta$ and $h(\beta)$ is continuously differentiable, by the continuous mapping theorem,
$$\hat{H}_\beta = \frac{\partial}{\partial\beta}h(\hat\beta) \xrightarrow{p} \frac{\partial}{\partial\beta}h(\beta) = H_\beta.$$
Putting these together,
$$\hat{V}_{\hat\theta} = \hat{H}_\beta'\hat{V}_{\hat\beta}\hat{H}_\beta \xrightarrow{p} H_\beta'V_\beta H_\beta = V_\theta,$$
completing the proof.
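A minimal sketch of the delta method for the ratio $\theta = \beta_1/\beta_2$ is below: the gradient $(1/\beta_2,\ -\beta_1/\beta_2^2)'$ evaluated at the estimates gives $\hat{V}_{\hat\theta} = \hat{H}_\beta'\hat{V}_{\hat\beta}\hat{H}_\beta$ and the standard error $n^{-1/2}\sqrt{\hat{V}_{\hat\theta}}$. The simulated design and the choice of ratio are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.standard_normal(n), 1 + rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
ehat = y - X @ beta_hat
Q_inv = np.linalg.inv(X.T @ X / n)
V_beta = Q_inv @ ((X * (ehat**2)[:, None]).T @ X / n) @ Q_inv    # robust V-hat

theta_hat = beta_hat[0] / beta_hat[1]
H = np.array([1.0 / beta_hat[1], -beta_hat[0] / beta_hat[1]**2]) # gradient of h
V_theta = H @ V_beta @ H
se_theta = np.sqrt(V_theta / n)

print("theta-hat:", theta_hat, " delta-method s.e.:", se_theta)
```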
5.8 t statistic
Let $\theta = h(\beta): \mathbb{R}^k \to \mathbb{R}$ be any parameter of interest (for example, $\theta$ could be a single element of $\beta$), $\hat\theta$ its estimate and $s(\hat\theta)$ its asymptotic standard error. Consider the statistic
$$t_n(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)} \qquad (5.24)$$
which different writers alternatively call a t-statistic, a z-statistic or a studentized statistic. We won't be making such distinctions and will typically refer to $t_n(\theta)$ as a t-statistic. We also often suppress the parameter dependence, writing it as $t_n$. The t-statistic is a simple function of the estimate, its standard error, and the parameter.
Theorem 5.8.1 t
a
(0)
o
÷÷N(0. 1)
Thus the asymptotic distribution of the t-ratio t
a
(0) is the standard normal. Since this dis-
tribution does not depend on the parameters, we say that t
a
(0) is asymptotically pivotal. In
special cases (such as the normal regression model, see Section 3.11), the statistic t
a
has an exact
t distribution, and is therefore exactly free of unknowns. In this case, we say that t
a
is exactly
pivotal. In general, however, pivotal statistics are unavailable and we must rely on asymptotically
pivotal statistics.
William Gosset
William S. Gosset (1876-1937) of England is most famous for his derivation of the student’s
t distribution, published in the paper “The probable error of a mean” in 1908. At the time,
Gosset worked at Guiness brewery, which prohibited its employees from publishing in order
to prevent the possible loss of trade secrets. To circumvent this barrier, Gosset published
under the pseudonym “Student”. Consequently, this famous distribution is known as the
student’s t rather than Gosset’s t!
79
Proof of Theorem 5.8.1. By Theorem 5.7.1,
$$\sqrt{n}\left(\hat{\theta} - \theta\right) \overset{d}{\to} \mathrm{N}(0, V_\theta)$$
and
$$\hat{V}_\theta \overset{p}{\to} V_\theta.$$
Thus
$$t_n(\theta) = \frac{\hat{\theta} - \theta}{s(\hat{\theta})} = \frac{\sqrt{n}\left(\hat{\theta} - \theta\right)}{\sqrt{\hat{V}_\theta}} \overset{d}{\to} \frac{\mathrm{N}(0, V_\theta)}{\sqrt{V_\theta}} = \mathrm{N}(0, 1).$$
The last equality is by the property that linear scales of normal distributions are normal.
5.9 Confidence Intervals
A confidence interval $C_n$ is an interval estimate of $\theta \in \mathbb{R}$. It is a function of the data and hence is random. It is designed to cover $\theta$ with high probability. Either $\theta \in C_n$ or $\theta \notin C_n$. The coverage probability is $P(\theta \in C_n)$. The convention is to design confidence intervals to have coverage probability approximately equal to a pre-specified target, typically 90% or 95%, or more generally written as $(1 - \alpha)$ for some $\alpha \in (0, 1)$. In this case, by reporting a $(1 - \alpha)$ confidence interval $C_n$, we are stating that with $(1 - \alpha)$ probability (in repeated samples) the true $\theta$ lies in $C_n$.
There is not a unique method to construct confidence intervals. For example, a simple (yet silly) interval is
$$C_n = \begin{cases} \mathbb{R} & \text{with probability } 1 - \alpha \\ \{\hat{\theta}\} & \text{with probability } \alpha. \end{cases}$$
By construction, if $\hat{\theta}$ has a continuous distribution, $P(\theta \in C_n) = 1 - \alpha$, so this confidence interval has perfect coverage, but $C_n$ is uninformative about $\theta$. This is not a useful confidence interval.
When we have an asymptotically normal parameter estimate $\hat{\theta}$ with standard error $s(\hat{\theta})$, it turns out that a generally reasonable confidence interval for $\theta$ takes the form
$$C_n = \left[\hat{\theta} - c\, s(\hat{\theta}),\; \hat{\theta} + c\, s(\hat{\theta})\right] \qquad (5.25)$$
where $c > 0$ is a pre-specified constant. This confidence interval is symmetric about the point estimate $\hat{\theta}$, and its length is proportional to the standard error $s(\hat{\theta})$.
Equivalently, $C_n$ is the set of parameter values for $\theta$ such that the t-statistic $t_n(\theta)$ is smaller (in absolute value) than $c$, that is
$$C_n = \left\{\theta : |t_n(\theta)| \le c\right\} = \left\{\theta : -c \le \frac{\hat{\theta} - \theta}{s(\hat{\theta})} \le c\right\}.$$
The coverage probability of this confidence interval is
$$P(\theta \in C_n) = P\left(|t_n(\theta)| \le c\right)$$
which is generally unknown, but we can approximate the coverage probability by taking the asymptotic limit as $n \to \infty$. Since $t_n(\theta)$ is asymptotically standard normal (Theorem 5.8.1), it follows that as $n \to \infty$,
$$P(\theta \in C_n) \to P(|Z| \le c) = \Phi(c) - \Phi(-c)$$
where $Z \sim \mathrm{N}(0, 1)$ and $\Phi(u) = P(Z \le u)$ is the standard normal distribution function. Thus the asymptotic coverage probability is a function only of $c$.
The convention is to design the confidence interval to have a pre-specified coverage probability $1 - \alpha$, typically 90% or 95%. This means selecting the constant $c$ so that
$$\Phi(c) - \Phi(-c) = 1 - \alpha.$$
Effectively, this makes $c$ a function of $\alpha$, and it can be backed out of a normal distribution table. For example, $\alpha = 0.05$ (a 95% interval) implies $c = 1.96$ and $\alpha = 0.1$ (a 90% interval) implies $c = 1.645$. Rounding 1.96 to 2, this yields the most commonly used confidence interval in applied econometric practice:
$$C_n = \left[\hat{\theta} - 2 s(\hat{\theta}),\; \hat{\theta} + 2 s(\hat{\theta})\right].$$
This is a useful rule of thumb. This asymptotic 95% confidence interval $C_n$ is simple to compute and can be roughly calculated from tables of coefficient estimates and standard errors. (Technically, it is a 95.4% interval, due to the substitution of 2.0 for 1.96, but this distinction is meaningless.)
Confidence intervals are a simple yet effective tool to assess estimation uncertainty. When reading a set of empirical results, look at the estimated coefficient estimates and the standard errors. For a parameter of interest, compute the confidence interval $C_n$ and consider the meaning of the spread of the suggested values. If the range of values in the confidence interval is too wide to learn about $\theta$, then do not jump to a conclusion about $\theta$ based on the point estimate alone.
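As a simple illustration, the sketch below (hypothetical estimate and standard error) constructs the asymptotic interval (5.25), backing the constant $c$ out of the normal distribution with scipy rather than from a table.

    from scipy.stats import norm

    def asymptotic_ci(theta_hat, se, alpha=0.05):
        """Asymptotic (1 - alpha) interval [theta_hat - c*se, theta_hat + c*se], eq. (5.25)."""
        c = norm.ppf(1 - alpha / 2)       # c = 1.96 for alpha = .05, 1.645 for alpha = .10
        return theta_hat - c * se, theta_hat + c * se

    print(asymptotic_ci(theta_hat=1.37, se=0.24))               # 95% interval
    print(asymptotic_ci(theta_hat=1.37, se=0.24, alpha=0.10))   # 90% interval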
5.10 Semiparametric Efficiency
In Section 4.5 we presented the Gauss-Markov theorem as a limited efficiency justification for the least-squares estimator. A broader justification is provided in Chamberlain (1987), who established that in the projection model the OLS estimator has the smallest asymptotic mean-squared error among feasible estimators. This property is called semiparametric efficiency, and is a strong justification for the least-squares estimator. We discuss the intuition behind his result in this section.
Suppose that the joint distribution of $(y_i, x_i)$ is discrete. That is, for finite $r$,
$$P\left(y_i = \tau_j,\; x_i = \xi_j\right) = \pi_j, \qquad j = 1, \ldots, r \qquad (5.26)$$
for some constants $\pi_j$, $\tau_j$, and $\xi_j$. Assume that the $\tau_j$ and $\xi_j$ are known, but the $\pi_j$ are unknown. (We know the values $y_i$ and $x_i$ can take, but we don't know the probabilities.)
In this discrete setting, the definition of the linear projection coefficient (2.10) can be rewritten as
$$\beta = \left(\sum_{j=1}^r \pi_j \xi_j \xi_j'\right)^{-1} \left(\sum_{j=1}^r \pi_j \xi_j \tau_j\right). \qquad (5.27)$$
Thus $\beta$ is a function of $(\pi_1, \ldots, \pi_r)$.
As the data are multinomial, the maximum likelihood estimator (MLE) is
$$\hat{\pi}_j = \frac{1}{n} \sum_{i=1}^n 1\left(y_i = \tau_j\right) 1\left(x_i = \xi_j\right)$$
for $j = 1, \ldots, r$, where $1(\cdot)$ is the indicator function. That is, $\hat{\pi}_j$ is the percentage of the observations which fall in each category. The MLE $\hat{\beta}_{\mathrm{mle}}$ for $\beta$ is then the analog of (5.27) with the parameters $\pi_j$ replaced by the estimates $\hat{\pi}_j$:
$$\hat{\beta}_{\mathrm{mle}} = \left(\sum_{j=1}^r \hat{\pi}_j \xi_j \xi_j'\right)^{-1} \left(\sum_{j=1}^r \hat{\pi}_j \xi_j \tau_j\right).$$
Substituting in the expressions for $\hat{\pi}_j$,
$$\sum_{j=1}^r \hat{\pi}_j \xi_j \xi_j' = \sum_{j=1}^r \frac{1}{n} \sum_{i=1}^n 1\left(y_i = \tau_j\right) 1\left(x_i = \xi_j\right) \xi_j \xi_j' = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^r 1\left(y_i = \tau_j\right) 1\left(x_i = \xi_j\right) x_i x_i' = \frac{1}{n} \sum_{i=1}^n x_i x_i'$$
and
$$\sum_{j=1}^r \hat{\pi}_j \xi_j \tau_j = \sum_{j=1}^r \frac{1}{n} \sum_{i=1}^n 1\left(y_i = \tau_j\right) 1\left(x_i = \xi_j\right) \xi_j \tau_j = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^r 1\left(y_i = \tau_j\right) 1\left(x_i = \xi_j\right) x_i y_i = \frac{1}{n} \sum_{i=1}^n x_i y_i.$$
Thus
$$\hat{\beta}_{\mathrm{mle}} = \left(\frac{1}{n} \sum_{i=1}^n x_i x_i'\right)^{-1} \left(\frac{1}{n} \sum_{i=1}^n x_i y_i\right) = \hat{\beta}_{\mathrm{ols}}.$$
In other words, if the data have a discrete distribution, the maximum likelihood estimator is identical to the OLS estimator.
Since this is a regular parametric model the MLE is asymptotically efficient (see Appendix D). It follows that the OLS estimator is asymptotically efficient.
The hard part of the argument (which was rigorously developed in Chamberlain's paper, but we do not present it here) is the extension to the case of continuously-distributed data. The intuition is that all continuous distributions can be arbitrarily well approximated by some multinomial distribution, and for any multinomial distribution the moment estimator is asymptotically efficient. Formalizing this intuition using a rigorous mathematical argument, Chamberlain proved that the OLS estimator is asymptotically semiparametrically efficient for the projection coefficient $\beta$ for the class of models satisfying Assumption 5.1.1.
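The following sketch gives a small numerical check of the discrete-data argument above on simulated data with a hypothetical finite support for $(y_i, x_i)$ (cells indexed by the pair of support points rather than a single index $j$): the plug-in version of (5.27) evaluated at the estimated cell probabilities coincides with the OLS estimator.

    import numpy as np

    rng = np.random.default_rng(1)
    support_x = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # possible x values (with intercept)
    support_y = np.array([0.0, 1.0, 2.0])                        # possible y values
    n = 1000
    ix = rng.integers(0, 3, size=n)                              # x cell of each observation
    iy = rng.integers(0, 3, size=n)                              # y cell of each observation
    X, y = support_x[ix], support_y[iy]

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                 # OLS estimator

    # Plug-in version of (5.27) using the estimated cell probabilities
    A, b = np.zeros((2, 2)), np.zeros(2)
    for j in range(3):
        for m in range(3):
            pi_hat = np.mean((ix == j) & (iy == m))              # estimated cell probability
            A += pi_hat * np.outer(support_x[j], support_x[j])
            b += pi_hat * support_x[j] * support_y[m]
    beta_mle = np.linalg.solve(A, b)

    print(np.allclose(beta_ols, beta_mle))                       # True: the two estimators coincide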
5.11 Semiparametric Efficiency in the Projection Model
In this section we continue the investigation of semiparametric efficiency as raised in Section 5.10. There we presented the intuition behind Chamberlain's demonstration of the asymptotic efficiency of the least-squares estimator. In this section we provide an alternative demonstration based on the rich but technically challenging theory of semiparametric efficiency bounds. An excellent accessible review has been provided by Newey (1990).
Our treatment covers what is known as the smooth function model, which includes the projection model as a special case. Let $z \in \mathbb{R}^m$ be a random vector with finite mean $\mu = E z$ and finite variance matrix $\Sigma = \mathrm{var}(z)$, and let $z_1, \ldots, z_n$ be an iid sample from this distribution. The parameter of interest is $\beta = g(\mu)$ where $g(\cdot)$ is a continuously differentiable function. The standard moment estimator for $\mu$ is the sample mean $\hat{\mu} = n^{-1} \sum_{i=1}^n z_i$ and that for $\beta$ is $\hat{\beta} = g(\hat{\mu})$. This setting includes the least-squares estimator for the projection model $y = x'\beta + e$ by letting $z$ be the vector with elements $x_j y$ and $x_j x_l$ for all $j \le k$ and $l \le k$.
The sample mean has the asymptotic distribution $\sqrt{n}(\hat{\mu} - \mu) \overset{d}{\to} \mathrm{N}(0, \Sigma)$. Applying the Delta Method (Theorem C.3.3), we see that the moment estimator $\hat{\beta}$ has the asymptotic distribution $\sqrt{n}(\hat{\beta} - \beta) \overset{d}{\to} \mathrm{N}(0, V)$ where $V = \frac{\partial}{\partial \mu'} g(\mu)\, \Sigma\, \frac{\partial}{\partial \mu'} g(\mu)'$. We want to know if $\hat{\beta}$ is the best feasible estimator. Is there another estimator with a smaller asymptotic variance? While it seems intuitively unlikely that another estimator could have a smaller asymptotic variance than $\hat{\beta}$, how do we know that this is not the case?
To show that the answer is not immediately obvious, it might be helpful to review a setting where the sample mean is inefficient. Suppose that $z \in \mathbb{R}$ has the density $f(z \mid \mu) = 2^{-1/2} \exp\left(-|z - \mu|\sqrt{2}\right)$. Since $\mathrm{var}(z) = 1$ we see that the sample mean satisfies $\sqrt{n}(\hat{\mu} - \mu) \overset{d}{\to} \mathrm{N}(0, 1)$. In this model the maximum likelihood estimator (MLE) $\tilde{\mu}$ for $\mu$ is different than the sample mean (and happens to be the sample median). Recall from the theory of maximum likelihood that the MLE satisfies $\sqrt{n}(\tilde{\mu} - \mu) \overset{d}{\to} \mathrm{N}(0, J_0)$ where $J_0 = \left(E S_\mu^2\right)^{-1}$ and $S_\mu = \frac{\partial}{\partial \mu} \log f(z \mid \mu) = \sqrt{2}\,\mathrm{sgn}(z - \mu)$ is the score. We can calculate that $E S_\mu^2 = 2$ and thus conclude that $\sqrt{n}(\tilde{\mu} - \mu) \overset{d}{\to} \mathrm{N}(0, 1/2)$. The asymptotic variance of the MLE is one-half that of the sample mean. In this setting the sample mean is inefficient.
But the question at hand is whether or not the sample mean is efficient when the form of the distribution is unknown. We call this setting semiparametric as the parameter of interest (the mean) is finite dimensional while the remaining features of the distribution are unspecified. In the semiparametric context an estimator is called semiparametrically efficient if it has the smallest asymptotic variance among all semiparametric estimators.
The mathematical trick is to reduce the semiparametric model to a set of parametric "submodels". The classic Cramer-Rao variance bound can be found for each parametric submodel. The variance bound for the semiparametric model (the union of the submodels) is then defined as the supremum of the individual variance bounds.
Formally, suppose that the true density of $z$ is the unknown function $f(z)$ with mean $\mu = E z = \int z f(z)\, dz$ and the parameter of interest is $\beta = g(\mu)$. A parametric submodel $\eta$ for $f(z)$ is a density $f_\eta(z \mid \theta)$ which is a smooth function of a parameter $\theta$, and there is some $\theta_0$ such that $f_\eta(z \mid \theta_0) = f(z)$. The index $\eta$ indicates the submodels. The equality $f_\eta(z \mid \theta_0) = f(z)$ means that the submodel class passes through the true density, so the submodel is a true model. The class of submodels $\eta$ and parameter $\theta_0$ depend on the true density $f$. In the submodel $f_\eta(z \mid \theta)$, the mean is $\mu_\eta(\theta) = \int z f_\eta(z \mid \theta)\, dz$, and the parameter of interest is $\beta_\eta(\theta) = g\left(\mu_\eta(\theta)\right)$, which varies with the parameter $\theta$. Let $\eta \in \aleph$ denote the class of all submodels for $f$.
Since each submodel $\eta$ is parametric we can calculate its Cramer-Rao bound for estimation of $\beta$. Specifically, given the density $f_\eta(z \mid \theta)$ we can construct the MLE $\hat{\theta}_\eta$ for $\theta$, the MLE $\hat{\mu}_\eta = \int z f_\eta(z \mid \hat{\theta}_\eta)\, dz$ for $\mu$, and the MLE $\hat{\beta}_\eta = g(\hat{\mu}_\eta)$ for $\beta$. The MLE satisfies
$$\sqrt{n}\left(\hat{\beta}_\eta - \beta_\eta(\theta)\right) \overset{d}{\to} \mathrm{N}(0, V_\eta)$$
where $V_\eta$ is the smallest possible covariance matrix among regular estimators. By the Cramer-Rao theorem no estimator (and in particular no semiparametric estimator) has an asymptotic variance smaller than $V_\eta$. This comparison is true for all submodels $\eta$, so the asymptotic variance of any semiparametric estimator cannot be smaller than the Cramer-Rao bound for any parametric submodel. The semiparametric asymptotic variance bound (which is sometimes called the semiparametric efficiency bound) is the supremum of the Cramer-Rao bounds from all conceivable submodels:
$$V = \sup_{\eta \in \aleph} V_\eta.$$
It is a lower bound for the asymptotic variance of any semiparametric estimator. If the asymptotic variance of a specific semiparametric estimator equals the bound $V$ we say that the estimator is semiparametrically efficient.
For many statistical problems it is quite challenging to calculate the semiparametric variance bound. However, the solution is straightforward in the smooth function model. As the semiparametric variance bound cannot be smaller than the Cramer-Rao bound for any submodel, and cannot be larger than the asymptotic variance of any feasible semiparametric estimator, it follows that if the asymptotic variance of a feasible semiparametric estimator equals the Cramer-Rao bound for at least one submodel, then this is the semiparametric asymptotic variance bound, and the aforementioned feasible semiparametric estimator must be semiparametrically efficient. In these cases, it is sufficient to construct a parametric submodel for which the Cramer-Rao bound (equivalently, the asymptotic variance of the MLE) equals that of a known semiparametric estimator.
Formally, for any submodel $\eta$ with Cramer-Rao variance $V_\eta$ and any semiparametric estimator $\hat{\beta}$ with asymptotic variance $V_\beta$, it is necessary that
$$V_\eta \le V \le V_\beta.$$
The first inequality holds by the definition of $V$, and the second holds since no semiparametric estimator can be more efficient than the MLE in any parametric submodel. Thus if we find a submodel $\eta$ and semiparametric estimator $\hat{\beta}$ such that $V_\eta = V_\beta$, then it must be the case that $V = V_\beta$ and $\hat{\beta}$ is semiparametrically efficient.
We now show this for the moment estimator $\hat{\beta} = g(\hat{\mu})$ discussed above. As $\hat{\beta}$ has asymptotic variance $V_\beta$, our goal is to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V_\beta$. The solution involves creating a tilted version of the true density. Consider the parametric submodel
$$f(z \mid \theta) = f(z)\left(1 + \theta' \Sigma^{-1}(z - \mu)\right) \qquad (5.28)$$
where $f(z)$ is the true density and $\mu = E z$. Note that
$$\int f(z \mid \theta)\, dz = \int f(z)\, dz + \theta' \Sigma^{-1} \int f(z)(z - \mu)\, dz = 1$$
and for all $\theta$ close to zero $f(z \mid \theta) \ge 0$. Thus $f(z \mid \theta)$ is a valid density function. It is a parametric submodel since $f(z \mid \theta_0) = f(z)$ when $\theta_0 = 0$. This parametric submodel has the mean
$$\mu(\theta) = \int z f(z \mid \theta)\, dz = \int z f(z)\, dz + \int f(z)\, z (z - \mu)' \Sigma^{-1} \theta\, dz = \mu + \theta$$
and parameter of interest $\beta(\theta) = g(\mu + \theta)$, both of which are smooth functions of $\theta$.
Since
$$\frac{\partial}{\partial \theta} \log f(z \mid \theta) = \frac{\partial}{\partial \theta} \log\left(1 + \theta' \Sigma^{-1}(z - \mu)\right) = \frac{\Sigma^{-1}(z - \mu)}{1 + \theta' \Sigma^{-1}(z - \mu)},$$
it follows that the score function for $\theta$ is
$$s = \frac{\partial}{\partial \theta} \log f(z \mid \theta_0) = \Sigma^{-1}(z - \mu). \qquad (5.29)$$
By classic theory the asymptotic variance of the MLE $\hat{\theta}$ for $\theta$ is the Cramer-Rao bound $\left(E(s s')\right)^{-1} = \left(\Sigma^{-1} E\left((z - \mu)(z - \mu)'\right) \Sigma^{-1}\right)^{-1} = \Sigma$. The MLE for $\beta$ is $\beta(\hat{\theta}) = g(\mu + \hat{\theta})$, which by the delta method has asymptotic variance $V_\beta = \frac{\partial}{\partial \mu'} g(\mu)\, \Sigma\, \frac{\partial}{\partial \mu'} g(\mu)'$, which is identical to the asymptotic variance of the moment estimator $\hat{\beta}$. This shows that moment estimators are semiparametrically efficient, and this includes the OLS estimator in the projection model. We have established the following theorem.
Theorem 5.11.1 Under Assumption 5.1.1, the semiparametric variance bound for estimation of $\beta$ is $V_\beta = Q^{-1} \Omega Q^{-1}$, and the OLS estimator is semiparametrically efficient.
5.12 Semiparametric Efficiency in the Homoskedastic Regression Model
In Section 4.5 we presented the Gauss-Markov theorem, which stated that in the homoskedastic regression model, in the class of linear unbiased estimators the one with the smallest variance is least-squares. As we noted in that section, the restriction to linear unbiased estimators is unsatisfactory as it leaves open the possibility that an alternative (non-linear) estimator could have a smaller asymptotic variance. In Sections 5.10 and 5.11 we showed that the OLS estimator is efficient in the projection model, but this does not address the question of whether or not OLS is efficient in the homoskedastic regression model. In this section we return to the question of efficient estimation in this model using the theory of semiparametric variance bounds as presented in the previous section.
Recall that in the homoskedastic regression model the asymptotic variance of the OLS estimator $\hat{\beta}$ for $\beta$ is $V_\beta^0 = Q^{-1} \sigma^2$. Therefore, as described in the previous section, it is sufficient to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V_\beta^0$. This would establish that $V_\beta^0$ is the semiparametric variance bound and the OLS estimator $\hat{\beta}$ is semiparametrically efficient for $\beta$.
Let the joint density of $y$ and $x$ be written as $f(y, x) = f_1(y \mid x)\, f_2(x)$, the product of the conditional density of $y$ given $x$ and the marginal density of $x$. Now consider the parametric submodel
$$f(y, x \mid \theta) = f_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right) f_2(x). \qquad (5.30)$$
You can check that in this submodel the marginal density of $x$ is $f_2(x)$, and the conditional density of $y$ given $x$ is $f_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)$. To see that the latter is a valid conditional density, observe that the regression assumption implies that $\int y f_1(y \mid x)\, dy = x'\beta$ and therefore
$$\int f_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right) dy = \int f_1(y \mid x)\, dy + \int f_1(y \mid x)\left(y - x'\beta\right) dy \,\left(x'\theta\right)/\sigma^2 = 1.$$
In this parametric submodel the conditional mean of $y$ given $x$ is
$$E_\theta(y \mid x) = \int y f_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right) dy$$
$$= \int y f_1(y \mid x)\, dy + \int y f_1(y \mid x)\left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\, dy$$
$$= \int y f_1(y \mid x)\, dy + \int \left(y - x'\beta\right)^2 f_1(y \mid x)\left(x'\theta\right)/\sigma^2\, dy + \int \left(y - x'\beta\right) f_1(y \mid x)\, dy \,\left(x'\beta\right)\left(x'\theta\right)/\sigma^2$$
$$= x'(\beta + \theta),$$
using the homoskedasticity assumption that $\int \left(y - x'\beta\right)^2 f_1(y \mid x)\, dy = \sigma^2$. This means that in this parametric submodel, the conditional mean is linear in $x$ and the regression coefficient is $\beta(\theta) = \beta + \theta$.
We now calculate the score for estimation of $\theta$. Since
$$\frac{\partial}{\partial \theta} \log f(y, x \mid \theta) = \frac{\partial}{\partial \theta} \log\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right) = \frac{x\left(y - x'\beta\right)/\sigma^2}{1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2},$$
the score is
$$s = \frac{\partial}{\partial \theta} \log f(y, x \mid \theta_0) = x e/\sigma^2.$$
The Cramer-Rao bound for estimation of $\theta$ (and therefore $\beta(\theta)$ as well) is
$$\left(E\left(s s'\right)\right)^{-1} = \left(\sigma^{-4} E\left((x e)(x e)'\right)\right)^{-1} = \sigma^2 Q^{-1} = V_\beta^0.$$
We have shown that there is a parametric submodel (5.30) whose Cramer-Rao bound for estimation of $\beta$ is identical to the asymptotic variance of the least-squares estimator, which therefore is the semiparametric variance bound.
Theorem 5.12.1 In the homoskedastic regression model, the semiparametric variance bound for estimation of $\beta$ is $V^0 = \sigma^2 Q^{-1}$ and the OLS estimator is semiparametrically efficient.
This result is similar to the Gauss-Markov theorem, in that it asserts the efficiency of the least-squares estimator in the context of the homoskedastic regression model. The difference is that the Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear estimators, while Theorem 5.12.1 states that OLS has the smallest asymptotic variance among regular estimators. This is a much more powerful statement.
Exercises
Exercise 5.1 You have two independent samples $(y_1, X_1)$ and $(y_2, X_2)$ which satisfy $y_1 = X_1 \beta_1 + e_1$ and $y_2 = X_2 \beta_2 + e_2$, where $E(x_{1i} e_{1i}) = 0$ and $E(x_{2i} e_{2i}) = 0$, and both $X_1$ and $X_2$ have $k$ columns. Let $\hat{\beta}_1$ and $\hat{\beta}_2$ be the OLS estimates of $\beta_1$ and $\beta_2$. For simplicity, you may assume that both samples have the same number of observations $n$.
(a) Find the asymptotic distribution of $\sqrt{n}\left(\left(\hat{\beta}_2 - \hat{\beta}_1\right) - \left(\beta_2 - \beta_1\right)\right)$ as $n \to \infty$.
(b) Find an appropriate test statistic for $H_0 : \beta_2 = \beta_1$.
(c) Find the asymptotic distribution of this statistic under $H_0$.
Exercise 5.2 The model is
$$y_i = x_i'\beta + e_i$$
$$E(x_i e_i) = 0$$
$$\Omega = E\left(x_i x_i' e_i^2\right).$$
Find the method of moments estimators $\left(\hat{\beta}, \hat{\Omega}\right)$ for $(\beta, \Omega)$.
(a) In this model, are $\left(\hat{\beta}, \hat{\Omega}\right)$ efficient estimators of $(\beta, \Omega)$?
(b) If so, in what sense are they efficient?
Exercise 5.3 Take the model $y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$ with $E(x_i e_i) = 0$. Suppose that $\beta_1$ is estimated by regressing $y_i$ on $x_{1i}$ only. Find the probability limit of this estimator. In general, is it consistent for $\beta_1$? If not, under what conditions is this estimator consistent for $\beta_1$?
Exercise 5.4 Let $y$ be $n \times 1$, $X$ be $n \times k$ (rank $k$), $y = X\beta + e$ with $E(x_i e_i) = 0$. Define the ridge regression estimator
$$\hat{\beta} = \left(\sum_{i=1}^n x_i x_i' + \lambda I_k\right)^{-1} \left(\sum_{i=1}^n x_i y_i\right)$$
where $\lambda > 0$ is a fixed constant. Find the probability limit of $\hat{\beta}$ as $n \to \infty$. Is $\hat{\beta}$ consistent for $\beta$?
Exercise 5.5 Of the variables $(y_i^*, y_i, x_i)$ only the pair $(y_i, x_i)$ are observed. In this case, we say that $y_i^*$ is a latent variable. Suppose
$$y_i^* = x_i'\beta + e_i$$
$$E(x_i e_i) = 0$$
$$y_i = y_i^* + u_i$$
where $u_i$ is a measurement error satisfying
$$E(x_i u_i) = 0$$
$$E(y_i^* u_i) = 0.$$
Let $\hat{\beta}$ denote the OLS coefficient from the regression of $y_i$ on $x_i$.
(a) Is $\beta$ the coefficient from the linear projection of $y_i$ on $x_i$?
(b) Is $\hat{\beta}$ consistent for $\beta$ as $n \to \infty$?
(c) Find the asymptotic distribution of $\sqrt{n}\left(\hat{\beta} - \beta\right)$ as $n \to \infty$.
Exercise 5.6 The model is
$$y_i = x_i \beta + e_i$$
$$E(e_i \mid x_i) = 0$$
where $x_i \in \mathbb{R}$. Consider the two estimators
$$\hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}, \qquad \tilde{\beta} = \frac{1}{n} \sum_{i=1}^n \frac{y_i}{x_i}.$$
(a) Under the stated assumptions, are both estimators consistent for $\beta$?
(b) Are there conditions under which either estimator is efficient?
Chapter 6
Testing
6.1 t tests
The t-test is routinely used to test hypotheses on $\theta$. A simple null and composite hypothesis takes the form
$$H_0 : \theta = \theta_0$$
$$H_1 : \theta \ne \theta_0$$
where $\theta_0$ is some pre-specified value. A t-test rejects $H_0$ in favor of $H_1$ when $|t_n(\theta_0)|$ is large. By "large" we mean that the observed value of the t-statistic would be unlikely if $H_0$ were true.
Formally, we first pick an asymptotic significance level $\alpha$. We then find $z_{\alpha/2}$, the upper $\alpha/2$ quantile of the standard normal distribution, which has the property that if $Z \sim \mathrm{N}(0, 1)$ then
$$P\left(|Z| > z_{\alpha/2}\right) = \alpha.$$
For example, $z_{.025} = 1.96$ and $z_{.05} = 1.645$. A test of asymptotic significance $\alpha$ rejects $H_0$ if $|t_n| > z_{\alpha/2}$. Otherwise the test does not reject, or "accepts" $H_0$.
The asymptotic significance level is $\alpha$ because Theorem 5.8.1 implies that
$$P(\text{reject } H_0 \mid H_0 \text{ true}) = P\left(|t_n| > z_{\alpha/2} \mid \theta = \theta_0\right) \to P\left(|Z| > z_{\alpha/2}\right) = \alpha.$$
The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing.
While there is no objective scientific basis for choice of significance level $\alpha$, the common practice is to set $\alpha = .05$ or 5%. This implies a critical value of $z_{.025} = 1.96 \approx 2$. When $|t_n| > 2$ it is common to say that the t-statistic is statistically significant, and if $|t_n| < 2$ it is common to say that the t-statistic is statistically insignificant. It is helpful to remember that this is simply a way of saying "Using a t-test, the hypothesis that $\theta = \theta_0$ can [cannot] be rejected at the asymptotic 5% level."
A related statistic is the asymptotic p-value, which can be interpreted as a measure of the evidence against the null hypothesis. The asymptotic p-value of the statistic $t_n$ is
$$p_n = p(t_n)$$
where $p(t)$ is the tail probability function
$$p(t) = P(|Z| > |t|) = 2\left(1 - \Phi(|t|)\right).$$
If the p-value $p_n$ is small (close to zero) then the evidence against $H_0$ is strong.
An equivalent statement of a Neyman-Pearson test is to reject at the $\alpha\%$ level if and only if $p_n < \alpha$. Significance tests can be deduced directly from the p-value since for any $\alpha$, $p_n < \alpha$ if and only if $|t_n| > z_{\alpha/2}$. The p-value is more general, however, in that the reader is allowed to pick the level of significance $\alpha$, in contrast to Neyman-Pearson rejection/acceptance reporting where the researcher picks the significance level.
Another helpful observation is that the p-value function is a unit-free transformation of the t statistic. That is, under $H_0$, $p_n \overset{d}{\to} \mathrm{U}[0, 1]$, so the "unusualness" of the test statistic can be compared to the easy-to-understand uniform distribution, regardless of the complication of the distribution of the original test statistic. To see this fact, note that the asymptotic distribution of $|t_n|$ is $F(x) = 1 - p(x)$. Thus
$$P(1 - p_n \le u) = P(1 - p(t_n) \le u) = P(F(t_n) \le u) = P\left(|t_n| \le F^{-1}(u)\right) \to F\left(F^{-1}(u)\right) = u,$$
establishing that $1 - p_n \overset{d}{\to} \mathrm{U}[0, 1]$, from which it follows that $p_n \overset{d}{\to} \mathrm{U}[0, 1]$.
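For example, the following sketch (hypothetical estimate, null value, and standard error; not from the manuscript) computes the asymptotic p-value and the corresponding 5%-level decision.

    from scipy.stats import norm

    def t_pvalue(t_stat):
        """Asymptotic two-sided p-value p(t) = P(|Z| > |t|) = 2(1 - Phi(|t|))."""
        return 2 * (1 - norm.cdf(abs(t_stat)))

    theta_hat, theta0, se = 0.65, 0.50, 0.08   # made-up estimate, null value, standard error
    t = (theta_hat - theta0) / se              # t-statistic (5.24) evaluated at theta0
    print(t_pvalue(t), t_pvalue(t) < 0.05)     # p-value and the asymptotic 5%-level decision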
6.2 t-ratios
Some applied papers (especially older ones) report "t-ratios" for each estimated coefficient. For a coefficient $\theta$ these are
$$t_n = t_n(0) = \frac{\hat{\theta}}{s(\hat{\theta})},$$
the ratio of the coefficient estimate to its standard error, and equal the t-statistic for the test of the hypothesis $H_0 : \theta = 0$. Such papers often discuss the "significance" of certain variables or coefficients, or describe "which regressors have a significant effect on $y$" by noting which t-ratios exceed 2 in absolute value.
This is very poor econometric practice, and should be studiously avoided. It is a recipe for banishment of your work to lower-tier economics journals.
Fundamentally, the common t-ratio is a test for the hypothesis that a coefficient equals zero. This should be reported and discussed when this is an economically interesting hypothesis. But if this is not the case, it is distracting.
Instead, when a coefficient $\theta$ is of interest, it is constructive to focus on the point estimate, its standard error, and its confidence interval. The point estimate gives our "best guess" for the value. The standard error is a measure of precision. The confidence interval gives us the range of values consistent with the data. If the standard error is large then the point estimate is not a good summary about $\theta$. The endpoints of the confidence interval describe the bounds on the likely possibilities. If the confidence interval embraces too broad a set of values for $\theta$, then the dataset is not sufficiently informative to render inferences about $\theta$. On the other hand if the confidence interval is tight, then the data have produced an accurate estimate, and the focus should be on the value and interpretation of this estimate. In contrast, the widely-seen statement "the t-ratio is highly significant" has little interpretive value.
The above discussion requires that the researcher knows what the coefficient $\theta$ means (in terms of the economic problem) and can interpret values and magnitudes, not just signs. This is critical for good applied econometric practice.
6.3 Wald Tests
Sometimes $\theta = h(\beta)$ is a $q \times 1$ vector, and it is desired to test the joint restrictions simultaneously. In this case the t-statistic approach does not work. We have the null and alternative
$$H_0 : \theta = \theta_0$$
$$H_1 : \theta \ne \theta_0.$$
The natural estimate of $\theta$ is $\hat{\theta} = h(\hat{\beta})$ and has asymptotic covariance matrix estimate
$$\hat{V}_\theta = \hat{H}_\beta' \hat{V}_\beta \hat{H}_\beta$$
where
$$\hat{H}_\beta = \frac{\partial}{\partial \beta} h(\hat{\beta}).$$
The Wald statistic for $H_0$ against $H_1$ is
$$W_n = n\left(\hat{\theta} - \theta_0\right)' \hat{V}_\theta^{-1} \left(\hat{\theta} - \theta_0\right) = n\left(h(\hat{\beta}) - \theta_0\right)' \left(\hat{H}_\beta' \hat{V}_\beta \hat{H}_\beta\right)^{-1} \left(h(\hat{\beta}) - \theta_0\right). \qquad (6.1)$$
When $h$ is a linear function of $\beta$, $h(\beta) = H'\beta$, then the Wald statistic takes the form
$$W_n = n\left(H'\hat{\beta} - \theta_0\right)' \left(H' \hat{V}_\beta H\right)^{-1} \left(H'\hat{\beta} - \theta_0\right).$$
The delta method (5.21) showed that $\sqrt{n}\left(\hat{\theta} - \theta\right) \overset{d}{\to} Z \sim \mathrm{N}(0, V_\theta)$, and Theorem 5.6.2 showed that $\hat{V}_\beta \overset{p}{\to} V_\beta$. Furthermore, $H_\beta(\beta)$ is a continuous function of $\beta$, so by the continuous mapping theorem, $H_\beta(\hat{\beta}) \overset{p}{\to} H_\beta$. Thus $\hat{V}_\theta = \hat{H}_\beta' \hat{V}_\beta \hat{H}_\beta \overset{p}{\to} H_\beta' V_\beta H_\beta = V_\theta > 0$ if $H_\beta$ has full rank $q$. Hence
$$W_n = n\left(\hat{\theta} - \theta_0\right)' \hat{V}_\theta^{-1} \left(\hat{\theta} - \theta_0\right) \overset{d}{\to} Z' V_\theta^{-1} Z = \chi_q^2,$$
by Theorem B.9.3. We have established:
Theorem 6.3.1 Under $H_0$ and Assumption 5.4.1, if $\mathrm{rank}(H_\beta) = q$, then $W_n \overset{d}{\to} \chi_q^2$, a chi-square random variable with $q$ degrees of freedom.
An asymptotic Wald test rejects $H_0$ in favor of $H_1$ if $W_n$ exceeds $\chi_q^2(\alpha)$, the upper-$\alpha$ quantile of the $\chi_q^2$ distribution. For example, $\chi_1^2(.05) = 3.84 = z_{.025}^2$. The Wald test fails to reject if $W_n$ is less than $\chi_q^2(\alpha)$. As with t-tests, it is conventional to describe a Wald test as "significant" if $W_n$ exceeds the 5% critical value.
Notice that the asymptotic distribution in Theorem 6.3.1 depends solely on $q$ – the number of restrictions being tested. It does not depend on $k$ – the number of parameters estimated.
The asymptotic p-value for $W_n$ is $p_n = p(W_n)$, where $p(x) = P\left(\chi_q^2 \ge x\right)$ is the tail probability function of the $\chi_q^2$ distribution. The Wald test rejects at the $\alpha\%$ level if and only if $p_n < \alpha$, and $p_n$ is asymptotically $\mathrm{U}[0, 1]$ under $H_0$. In applied work it is good practice to report the p-value of a Wald statistic, as it helps readers interpret the magnitude of the statistic.
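As an illustration, the sketch below (hypothetical numbers throughout) computes the Wald statistic for a linear hypothesis $H'\beta = \theta_0$ and its asymptotic chi-square p-value using scipy.

    import numpy as np
    from scipy.stats import chi2

    def wald_linear(beta_hat, V_beta, H, theta0, n):
        """Wald statistic (6.1) for the linear hypothesis H' beta = theta0.

        V_beta estimates the asymptotic variance of sqrt(n)*(beta_hat - beta)."""
        diff = H.T @ beta_hat - theta0
        V_theta = H.T @ V_beta @ H
        W = n * diff @ np.linalg.solve(V_theta, diff)
        return W, chi2.sf(W, df=theta0.shape[0])     # statistic and asymptotic p-value

    # Made-up example: test that the second and third coefficients are jointly zero
    beta_hat = np.array([1.00, 0.15, -0.08])
    V_beta = np.diag([0.50, 0.40, 0.30])
    H = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # selector matrix, q = 2
    print(wald_linear(beta_hat, V_beta, H, theta0=np.zeros(2), n=200))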
6.4 F Tests
Take the linear model
$$y = X_1 \beta_1 + X_2 \beta_2 + e$$
where $X_1$ is $n \times k_1$, $X_2$ is $n \times k_2$, $k = k_1 + k_2$, and the null hypothesis is
$$H_0 : \beta_2 = 0.$$
In this case, $\theta = \beta_2$, and there are $q = k_2$ restrictions. Also $h(\beta) = H'\beta$ is linear with $H = \begin{pmatrix} 0 \\ I \end{pmatrix}$ a selector matrix. We know that the Wald statistic takes the form
$$W_n = n \hat{\theta}' \hat{V}_\theta^{-1} \hat{\theta} = n \hat{\beta}_2' \left(H' \hat{V}_\beta H\right)^{-1} \hat{\beta}_2.$$
Now suppose that the covariance matrix is computed under the assumption of homoskedasticity, so that $\hat{V}_\beta$ is replaced with $\hat{V}_\beta^0 = s^2 \left(n^{-1} X'X\right)^{-1}$. We define the "homoskedastic" Wald statistic
$$W_n^0 = n \hat{\theta}' \left(\hat{V}_\theta^0\right)^{-1} \hat{\theta} = n \hat{\beta}_2' \left(H' \hat{V}_\beta^0 H\right)^{-1} \hat{\beta}_2.$$
What we show in this section is that this Wald statistic can be written very simply using the formula
$$W_n^0 = (n - k)\left(\frac{\tilde{e}'\tilde{e} - \hat{e}'\hat{e}}{\hat{e}'\hat{e}}\right) \qquad (6.2)$$
where
$$\tilde{e} = y - X_1 \tilde{\beta}_1, \qquad \tilde{\beta}_1 = \left(X_1'X_1\right)^{-1} X_1'y$$
are from OLS of $y$ on $X_1$, and
$$\hat{e} = y - X\hat{\beta}, \qquad \hat{\beta} = \left(X'X\right)^{-1} X'y$$
are from OLS of $y$ on $X = (X_1, X_2)$.
The elegant feature about (6.2) is that it is directly computable from the standard output from two simple OLS regressions, as the sum of squared errors is a typical output from statistical packages. This statistic is typically reported as an "F-statistic" which is defined as
$$F_n = \frac{W_n^0}{k_2} = \frac{\left(\tilde{e}'\tilde{e} - \hat{e}'\hat{e}\right)/k_2}{\hat{e}'\hat{e}/(n - k)}.$$
While it should be emphasized that equality (6.2) only holds if $\hat{V}_\beta^0 = s^2 \left(n^{-1} X'X\right)^{-1}$, still this formula often finds good use in reading applied papers. Because of this connection we call (6.2) the F form of the Wald statistic. (We can also call $W_n^0$ a homoskedastic form of the Wald statistic.)
We now derive expression (6.2). First, note that by partitioned matrix inversion (A.3)
$$H' \left(X'X\right)^{-1} H = H' \begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}^{-1} H = \left(X_2' M_1 X_2\right)^{-1}$$
where $M_1 = I - X_1\left(X_1'X_1\right)^{-1} X_1'$. Thus
$$\left(H' \hat{V}_\beta^0 H\right)^{-1} = s^{-2} n^{-1} \left(H' \left(X'X\right)^{-1} H\right)^{-1} = s^{-2} n^{-1} \left(X_2' M_1 X_2\right)$$
and
$$W_n^0 = n \hat{\beta}_2' \left(H' \hat{V}_\beta^0 H\right)^{-1} \hat{\beta}_2 = \frac{\hat{\beta}_2' \left(X_2' M_1 X_2\right) \hat{\beta}_2}{s^2}.$$
To simplify this expression further, note that if we regress $y$ on $X_1$ alone, the residual is $\tilde{e} = M_1 y$. Now consider the residual regression of $\tilde{e}$ on $\tilde{X}_2 = M_1 X_2$. By the FWL theorem, $\tilde{e} = \tilde{X}_2 \hat{\beta}_2 + \hat{e}$ and $\tilde{X}_2'\hat{e} = 0$. Thus
$$\tilde{e}'\tilde{e} = \left(\tilde{X}_2 \hat{\beta}_2 + \hat{e}\right)'\left(\tilde{X}_2 \hat{\beta}_2 + \hat{e}\right) = \hat{\beta}_2' \tilde{X}_2'\tilde{X}_2 \hat{\beta}_2 + \hat{e}'\hat{e} = \hat{\beta}_2' X_2' M_1 X_2 \hat{\beta}_2 + \hat{e}'\hat{e},$$
or alternatively,
$$\hat{\beta}_2' X_2' M_1 X_2 \hat{\beta}_2 = \tilde{e}'\tilde{e} - \hat{e}'\hat{e}.$$
Also, since
$$s^2 = (n - k)^{-1} \hat{e}'\hat{e}$$
we conclude that
$$W_n^0 = (n - k)\left(\frac{\tilde{e}'\tilde{e} - \hat{e}'\hat{e}}{\hat{e}'\hat{e}}\right)$$
as claimed.
In many statistical packages, when an OLS regression is estimated, an "F-statistic" is reported. This is $F_n$ when $X_1$ is a vector of ones, so $H_0$ is an intercept-only model. This special F statistic is testing the hypothesis that all slope coefficients (all coefficients other than the intercept) are zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is nearly always highly significant. While there are special cases where this F statistic is useful, these cases are atypical. As a general rule, there is no reason to report this F statistic.
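The following sketch (hypothetical simulated data) computes (6.2) directly from the residuals of the restricted and unrestricted least-squares fits, along with the associated F-statistic.

    import numpy as np

    def homoskedastic_wald(y, X1, X2):
        """W_n^0 from (6.2) for H0: beta_2 = 0 in y = X1 b1 + X2 b2 + e, plus the F form."""
        X = np.hstack([X1, X2])
        n, k = X.shape
        e_tilde = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]   # restricted residuals
        e_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]       # unrestricted residuals
        ssr_r, ssr_u = e_tilde @ e_tilde, e_hat @ e_hat
        W0 = (n - k) * (ssr_r - ssr_u) / ssr_u
        return W0, W0 / X2.shape[1]                                # (W_n^0, F_n)

    rng = np.random.default_rng(0)
    n = 200
    X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
    X2 = rng.normal(size=(n, 2))
    y = X1 @ np.array([1.0, 0.5]) + rng.normal(size=n)             # H0 (beta_2 = 0) holds here
    print(homoskedastic_wald(y, X1, X2))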
6.5 Normal Regression Model
Now let us partition $\beta = (\beta_1, \beta_2)$ and consider tests of the linear restriction
$$H_0 : \beta_2 = 0$$
$$H_1 : \beta_2 \ne 0$$
in the normal regression model. In parametric models, a good test statistic is the likelihood ratio, which is twice the difference in the log-likelihood function evaluated under the null and alternative hypotheses. The estimator under the alternative is the unrestricted estimator $(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2)$ discussed above. The Gaussian log-likelihood at these estimates is
$$\log L(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2) = -\frac{n}{2} \log\left(2\pi\hat{\sigma}^2\right) - \frac{1}{2\hat{\sigma}^2} \hat{e}'\hat{e} = -\frac{n}{2} \log\left(\hat{\sigma}^2\right) - \frac{n}{2} \log(2\pi) - \frac{n}{2}.$$
The MLE under the null hypothesis is the restricted estimates $(\tilde{\beta}_1, 0, \tilde{\sigma}^2)$ where $\tilde{\beta}_1$ is the OLS estimate from a regression of $y_i$ on $x_{1i}$ only, with residual variance $\tilde{\sigma}^2$. The log-likelihood of this model is
$$\log L(\tilde{\beta}_1, 0, \tilde{\sigma}^2) = -\frac{n}{2} \log\left(\tilde{\sigma}^2\right) - \frac{n}{2} \log(2\pi) - \frac{n}{2}.$$
The LR statistic for $H_0$ against $H_1$ is
$$LR_n = 2\left(\log L(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2) - \log L(\tilde{\beta}_1, 0, \tilde{\sigma}^2)\right) = n\left(\log\left(\tilde{\sigma}^2\right) - \log\left(\hat{\sigma}^2\right)\right) = n \log\left(\frac{\tilde{\sigma}^2}{\hat{\sigma}^2}\right).$$
By a first-order Taylor series approximation
$$LR_n = n \log\left(1 + \frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1\right) \approx n\left(\frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1\right) = W_n^0,$$
the homoskedastic Wald statistic. This shows that the two statistics ($LR_n$ and $W_n^0$) can be numerically close. It also shows that the homoskedastic Wald statistic for linear hypotheses can also be interpreted as an appropriate likelihood ratio statistic under normality.
6.6 Problems with Tests of Nonlinear Hypotheses
While the t and Wald tests work well when the hypothesis is a linear restriction on $\beta$, they can work quite poorly when the restrictions are nonlinear. This can be seen by a simple example introduced by Lafontaine and White (1986). Take the model
$$y_i = \beta + e_i$$
$$e_i \sim \mathrm{N}(0, \sigma^2)$$
and consider the hypothesis
$$H_0 : \beta = 1.$$
Let $\hat{\beta}$ and $\hat{\sigma}^2$ be the sample mean and variance of $y_i$. The standard Wald test for $H_0$ is
$$W_n = n \frac{\left(\hat{\beta} - 1\right)^2}{\hat{\sigma}^2}.$$
Now notice that $H_0$ is equivalent to the hypothesis
$$H_0(r) : \beta^r = 1$$
for any positive integer $r$. Letting $h(\beta) = \beta^r$, and noting $H_\beta = r\beta^{r-1}$, we find that the standard Wald test for $H_0(r)$ is
$$W_n(r) = n \frac{\left(\hat{\beta}^r - 1\right)^2}{\hat{\sigma}^2 r^2 \hat{\beta}^{2r-2}}.$$
While the hypothesis $\beta^r = 1$ is unaffected by the choice of $r$, the statistic $W_n(r)$ varies with $r$. This is an unfortunate feature of the Wald statistic.
To demonstrate this effect, we have plotted in Figure 6.1 the Wald statistic $W_n(r)$ as a function of $r$, setting $n/\hat{\sigma}^2 = 10$. The increasing solid line is for the case $\hat{\beta} = 0.8$. The decreasing dashed line is for the case $\hat{\beta} = 1.6$. It is easy to see that in each case there are values of $r$ for which the test statistic is significant relative to asymptotic critical values, while there are other values of $r$ for which the test statistic is insignificant. This is distressing since the choice of $r$ is arbitrary and irrelevant to the actual hypothesis.
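The sketch below reproduces this calculation for a few values of $r$ (with $n/\hat{\sigma}^2 = 10$, as in the figure); the specific numbers are illustrative, not taken from the original plot.

    def wald_power(beta_hat, sigma2, n, r):
        """Wald statistic W_n(r) for H0: beta**r = 1 in the Lafontaine-White example."""
        return n * (beta_hat**r - 1) ** 2 / (sigma2 * r**2 * beta_hat ** (2 * r - 2))

    # With n / sigma2 = 10, the same hypothesis yields very different statistics across r
    for r in (1, 2, 5, 10):
        print(r, round(wald_power(0.8, 1.0, 10, r), 2), round(wald_power(1.6, 1.0, 10, r), 2))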
Our first-order asymptotic theory is not useful to help pick $r$, as $W_n(r) \overset{d}{\to} \chi_1^2$ under $H_0$ for any $r$. This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create artificial datasets, to which we apply the statistical tools of interest. This produces random draws from the statistic's sampling distribution. Through repetition, features of this distribution can be calculated.
[Figure 6.1: Wald Statistic $W_n(r)$ as a function of $r$]
In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84 – the probability of a false rejection, $P\left(W_n(r) > 3.84 \mid \beta = 1\right)$. Given the simplicity of the model, this probability depends only on $r$, $n$, and $\sigma^2$. In Table 4.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of $r$ is varied from 1 to 10, $n$ is varied among 20, 100 and 500, and $\sigma$ is varied among 1 and 3. Table 4.1 reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of $r$ – and thus corresponds to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of $n$ and $\sigma$. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics $W_n(r)$ which are larger than 3.84. The null hypothesis $\beta^r = 1$ is true, so these probabilities are Type I error.
To interpret the table, remember that the ideal Type I error probability is 5% (.05) with deviations indicating distortion. Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range. For this particular example the only test which meets this criterion is the conventional $W_n = W_n(1)$ test. Any other choice of $r$ leads to a test with unacceptable Type I error probabilities.
In Table 4.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size $n$ increases. There is, however, no magic choice of $n$ for which all tests perform uniformly well. Test performance deteriorates as $r$ increases, which is not surprising given the dependence of $W_n(r)$ on $r$ as shown in Figure 6.1.

Table 4.1
Type I Error Probability of Asymptotic 5% $W_n(r)$ Test

               sigma = 1                   sigma = 3
  r      n=20   n=100   n=500       n=20   n=100   n=500
  1       .06     .05     .05        .07     .05     .05
  2       .08     .06     .05        .15     .08     .06
  3       .10     .06     .05        .21     .12     .07
  4       .13     .07     .06        .25     .15     .08
  5       .15     .08     .06        .28     .18     .10
  6       .17     .09     .06        .30     .20     .11
  7       .19     .10     .06        .31     .22     .13
  8       .20     .12     .07        .33     .24     .14
  9       .22     .13     .07        .34     .25     .15
 10       .23     .14     .08        .35     .26     .16

Note: Rejection frequencies from 50,000 simulated random samples

In this example it is not surprising that the choice $r = 1$ yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.
This point can be illustrated through another example which is similar to one developed in Gregory and Veall (1985). Take the model
$$y_i = \beta_0 + x_{1i} \beta_1 + x_{2i} \beta_2 + e_i \qquad (6.3)$$
$$E(x_i e_i) = 0$$
and the hypothesis
$$H_0 : \frac{\beta_1}{\beta_2} = r$$
where $r$ is a known constant. Equivalently, define $\theta = \beta_1/\beta_2$, so the hypothesis can be stated as $H_0 : \theta = r$.
Let $\hat{\beta} = (\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2)$ be the least-squares estimates of (6.3), let $\hat{V}_\beta$ be an estimate of the asymptotic covariance matrix for $\hat{\beta}$ and set $\hat{\theta} = \hat{\beta}_1/\hat{\beta}_2$. Define
$$\hat{H}_1 = \begin{pmatrix} 0 \\ \dfrac{1}{\hat{\beta}_2} \\ -\dfrac{\hat{\beta}_1}{\hat{\beta}_2^2} \end{pmatrix}$$
so that the standard error for $\hat{\theta}$ is $s(\hat{\theta}) = \left(n^{-1} \hat{H}_1' \hat{V} \hat{H}_1\right)^{1/2}$. In this case a t-statistic for $H_0$ is
$$t_{1n} = \frac{\dfrac{\hat{\beta}_1}{\hat{\beta}_2} - r}{s(\hat{\theta})}.$$
An alternative statistic can be constructed through reformulating the null hypothesis as
$$H_0 : \beta_1 - r\beta_2 = 0.$$
A t-statistic based on this formulation of the hypothesis is
$$t_{2n} = \frac{\hat{\beta}_1 - r\hat{\beta}_2}{\left(n^{-1} H_2' \hat{V}_\beta H_2\right)^{1/2}},$$
where
$$H_2 = \begin{pmatrix} 0 \\ 1 \\ -r \end{pmatrix}.$$
To compare $t_{1n}$ and $t_{2n}$ we perform another simple Monte Carlo simulation. We let $x_{1i}$ and $x_{2i}$ be mutually independent $\mathrm{N}(0, 1)$ variables, $e_i$ be an independent $\mathrm{N}(0, \sigma^2)$ draw with $\sigma = 3$, and normalize $\beta_0 = 0$ and $\beta_1 = 1$. This leaves $\beta_2$ as a free parameter, along with sample size $n$. We vary $\beta_2$ among .1, .25, .50, .75, and 1.0 and $n$ among 100 and 500.
Table 4.2
Type I Error Probability of Asymptotic 5% t-tests

                           n = 100                              n = 500
             P(t_n < -1.645)   P(t_n > 1.645)     P(t_n < -1.645)   P(t_n > 1.645)
  beta_2      t_1n    t_2n      t_1n    t_2n       t_1n    t_2n      t_1n    t_2n
  .10          .47     .06       .00     .06        .28     .05       .00     .05
  .25          .26     .06       .00     .06        .15     .05       .00     .05
  .50          .15     .06       .00     .06        .10     .05       .00     .05
  .75          .12     .06       .00     .06        .09     .05       .00     .05
  1.00         .10     .06       .00     .06        .07     .05       .02     .05
The one-sided Type I error probabilities $P(t_n < -1.645)$ and $P(t_n > 1.645)$ are calculated from 50,000 simulated samples. The results are presented in Table 4.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the $t_{1n}$ statistic diverge greatly from this value, especially for small values of $\beta_2$. The left tail probabilities $P(t_{1n} < -1.645)$ greatly exceed 5%, while the right tail probabilities $P(t_{1n} > 1.645)$ are close to zero in most cases. In contrast, the rejection rates for the linear $t_{2n}$ statistic are invariant to the value of $\beta_2$, and are close to the ideal 5% rate for both sample sizes. The implication of Table 4.2 is that the two t-ratios have dramatically different sampling behavior.
The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis. In all cases, if the hypothesis can be expressed as a linear restriction on the model parameters, this formulation should be used. If no linear formulation is feasible, then the "most linear" formulation should be selected (as suggested by the theory of Park and Phillips (1988)), and alternatives to asymptotic critical values should be considered. It is also prudent to consider alternative tests to the Wald statistic, such as the GMM distance statistic which will be presented in Section 9.7 (as advocated by Hansen (2006)).
6.7 Monte Carlo Simulation
In the previous section we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses. In this section we describe the method in more detail.
Recall, our data consist of observations $(y_i, x_i)$ which are random draws from a population distribution $F$. Let $\theta$ be a parameter and let $T_n = T_n\left((y_1, x_1), \ldots, (y_n, x_n), \theta\right)$ be a statistic of interest, for example an estimator $\hat{\theta}$ or a t-statistic $(\hat{\theta} - \theta)/s(\hat{\theta})$. The exact distribution of $T_n$ is
$$G_n(u, F) = P(T_n \le u \mid F).$$
While the asymptotic distribution of $T_n$ might be known, the exact (finite sample) distribution $G_n$ is generally unknown.
Monte Carlo simulation uses numerical simulation to compute $G_n(u, F)$ for selected choices of $F$. This is useful to investigate the performance of the statistic $T_n$ in reasonable situations and sample sizes. The basic idea is that for any given $F$, the distribution function $G_n(u, F)$ can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort where games of chance are played.
The method of Monte Carlo is quite simple to describe. The researcher chooses $F$ (the distribution of the data) and the sample size $n$. A "true" value of $\theta$ is implied by this choice, or equivalently the value $\theta$ is selected directly by the researcher which implies restrictions on $F$.
Then the following experiment is conducted:
• $n$ independent random pairs $(y_i^*, x_i^*)$, $i = 1, \ldots, n$, are drawn from the distribution $F$ using the computer's random number generator.
• The statistic $T_n = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), \theta\right)$ is calculated on this pseudo data.
For step 1, most computer packages have built-in procedures for generating $\mathrm{U}[0, 1]$ and $\mathrm{N}(0, 1)$ random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)
For step 2, it is important that the statistic be evaluated at the "true" value of $\theta$ corresponding to the choice of $F$.
The above experiment creates one random draw from the distribution $G_n(u, F)$. This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment $B$ times, where $B$ is a large number. Typically, we set $B = 1000$ or $B = 5000$. We will discuss this choice later.
Notationally, let the $b$'th experiment result in the draw $T_{nb}$, $b = 1, \ldots, B$. These results are stored. They constitute a random sample of size $B$ from the distribution of $G_n(u, F) = P(T_{nb} \le u) = P(T_n \le u \mid F)$.
From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:
Suppose we are interested in the bias, mean-squared error (MSE), or variance of the distribution of $\hat{\theta} - \theta$. We then set $T_n = \hat{\theta} - \theta$, run the above experiment, and calculate
$$\widehat{\mathrm{Bias}}(\hat{\theta}) = \frac{1}{B} \sum_{b=1}^B T_{nb} = \frac{1}{B} \sum_{b=1}^B \left(\hat{\theta}_b - \theta\right)$$
$$\widehat{\mathrm{MSE}}(\hat{\theta}) = \frac{1}{B} \sum_{b=1}^B \left(T_{nb}\right)^2 = \frac{1}{B} \sum_{b=1}^B \left(\hat{\theta}_b - \theta\right)^2$$
$$\widehat{\mathrm{var}}(\hat{\theta}) = \widehat{\mathrm{MSE}}(\hat{\theta}) - \left(\widehat{\mathrm{Bias}}(\hat{\theta})\right)^2$$
Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set $T_n = \left(\hat{\theta} - \theta\right)/s(\hat{\theta})$ and calculate
$$\hat{P} = \frac{1}{B} \sum_{b=1}^B 1\left(T_{nb} \ge 1.96\right), \qquad (6.4)$$
the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.
Suppose we are interested in the 5% and 95% quantile of $T_n = \hat{\theta}$. We then compute the 5% and 95% sample quantiles of the sample $\{T_{nb}\}$. The $\alpha\%$ sample quantile is a number $q_\alpha$ such that $\alpha\%$ of the sample are less than $q_\alpha$. A simple way to compute sample quantiles is to sort the sample $\{T_{nb}\}$ from low to high. Then $q_\alpha$ is the $N$'th number in this ordered sequence, where $N = (B + 1)\alpha$. It is therefore convenient to pick $B$ so that $N$ is an integer. For example, if we set $B = 999$, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.
The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on $n$ and $F$. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of $n$ and $F$.
As discussed above, the researcher must select the number of experiments, $B$. Often this is called the number of replications. Quite simply, a larger $B$ results in more precise estimates of the features of interest of $G_n$, but requires more computational time. In practice, therefore, the choice of $B$ is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size $B$, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then $B$ will have to be increased.
In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (6.4). The random variable $1(T_{nb} \ge 1.96)$ is iid Bernoulli, equalling 1 with probability $p = E\, 1(T_{nb} \ge 1.96)$. The average (6.4) is therefore an unbiased estimator of $p$ with standard error $s(\hat{p}) = \sqrt{p(1 - p)/B}$. As $p$ is unknown, this may be approximated by replacing $p$ with $\hat{p}$ or with an hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set $s(\hat{p}) = \sqrt{(.05)(.95)/B} \approx .22/\sqrt{B}$. Hence, standard errors for $B = 100$, 1000, and 5000, are, respectively, $s(\hat{p}) = .022$, .007, and .003.
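To make the recipe concrete, the following minimal sketch (a hypothetical design, not one of the experiments reported in this chapter) estimates the Type I error of the asymptotic 5% two-sided t-test for a population mean, together with the standard error of that rejection frequency.

    import numpy as np

    rng = np.random.default_rng(42)
    n, B = 50, 5000                                   # sample size and number of replications
    rejections = 0
    for _ in range(B):
        y = rng.normal(size=n)                        # one simulated sample under H0: mu = 0
        t = np.sqrt(n) * y.mean() / y.std(ddof=1)     # t-statistic for the mean
        rejections += abs(t) > 1.96                   # asymptotic 5% two-sided test
    p_hat = rejections / B                            # estimated Type I error, as in (6.4)
    se_p = np.sqrt(p_hat * (1 - p_hat) / B)           # standard error of the rejection frequency
    print(p_hat, se_p)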
6.8 Estimating a Wage Equation
We again return to our wage equation. We use the sample of wage earners from the March 2004 Current Population Survey, excluding military. For the dependent variable we use the natural log of wages so that coefficients may be interpreted as semi-elasticities. For regressors we include years of education, potential work experience, experience squared, and dummy variable indicators for the following: married, female, union member, immigrant, hispanic, and non-white. Furthermore, we included a dummy variable for state of residence (including the District of Columbia, this adds 50 regressors). The available sample is 18,808 so the parameter estimates are quite precise and reported in Table 4.1, excluding the coefficients on the state dummy variables.
Table 4.1 displays the parameter estimates in a standard format. The Table clearly states the estimation method (OLS), the dependent variable (log(Wage)), and the regressors are clearly labeled. Parameter estimates are both reported for the coefficients of interest (the coefficients on the state dummy variables are omitted) and standard errors are reported for all reported coefficient estimates. In addition to the coefficient estimates, the table also reports the estimated error standard deviation and the sample size. These are useful summary measures of fit which aid readers.

Table 4.1
OLS Estimates of Linear Equation for Log(Wage)

                     estimate    std. error
  Intercept             1.027          .032
  Education              .101          .002
  Experience             .033          .001
  Experience^2        -.00057        .00002
  Married                .102          .008
  Female                -.232          .007
  Union Member           .097          .010
  Immigrant             -.121          .013
  Hispanic              -.102          .014
  Non-White             -.070          .010
  sigma-hat             .4877
  Sample Size          18,808

Note: Equation also includes state dummy variables.
As a general rule, it is best to always report standard errors along with parameter estimates (as done in Table 4.1). This allows readers to assess the precision of the parameter estimates, and form confidence intervals and t-tests on individual coefficients if desired. For example, if you are interested in the difference in mean wages between men and women, you can read from the table that the estimated coefficient on the Female dummy variable is $-0.232$, implying a mean wage difference of 23%. To assess the precision, you can see that the standard error for this coefficient estimate is 0.007. This implies a 95% asymptotic confidence interval for the coefficient estimate of $[-.246, -.218]$. This means that we have estimated the difference in mean wages between men and women to lie between 22% and 25%. I interpret this as a precise estimate because there is not an important difference between the lower and upper bound.
Instead of reporting standard errors, some empirical researchers report t-ratios for each parameter estimate. "t-ratios" are t-statistics which test the hypothesis that the coefficient equals zero. An example is reported in Table 4.2. In this example, all the t-ratios are highly significant, ranging in magnitude from 9.3 to 50. What we learn from these statistics is that these coefficients are non-zero, but not much more. In a sample of this size this finding is rather uninteresting; consequently the reporting of t-ratios is a waste of space. Again consider the male-female wage difference. Table 4.2 reports that the t-ratio is 33, enabling us to reject the hypothesis that the coefficient is zero. But how precise is the reported estimate of a wage gap of 23%? It is hard to assess from a quick reading of Table 4.2. Standard errors are much more useful, for they enable quick and easy assessment of the degree of estimation uncertainty.

Table 4.2
OLS Estimates of Linear Equation for Log(Wage)
Improper Reporting: t-ratios replacing standard errors

                     estimate       t
  Intercept             1.027      32
  Education              .101      50
  Experience             .033      33
  Experience^2        -.00057      28
  Married                .102    12.8
  Female                -.232      33
  Union Member           .097     9.7
  Immigrant             -.121     9.3
  Hispanic              -.102     7.3
  Non-White             -.070       7
Returning to the estimated wage equation, one might question whether or not the state dummy variables are relevant. Computing the Wald statistic (6.1) that the state coefficients are jointly zero, we find $W_n = 550$. Alternatively, re-estimating the model with the 50 state dummies excluded, the restricted standard deviation estimate is $\tilde{\sigma} = .4945$. The F form of the Wald statistic (6.2) is
$$W_n = n\left(\frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1\right) = 18{,}808 \left(\frac{.4945^2}{.4877^2} - 1\right) = 528.$$
Notice that the two statistics are close, but not equal. Using either statistic the hypothesis is easily rejected, as the 1% critical value for the $\chi_{50}^2$ distribution is 76.
Another interesting question which can be addressed from these estimates is the maximal impact of experience on mean wages. Ignoring the other coefficients, we can write this effect as
$$\log(Wage) = \beta_2\, Experience + \beta_3\, Experience^2 + \cdots$$
Our question is: At which level of experience $\theta$ do workers achieve the highest wage? In this quadratic model, if $\beta_2 > 0$ and $\beta_3 < 0$ the solution is
$$\theta = -\frac{\beta_2}{2\beta_3}.$$
From Table 4.1 we find the point estimate
$$\hat{\theta} = -\frac{\hat{\beta}_2}{2\hat{\beta}_3} = 28.69.$$
Using the Delta Method, we can calculate a standard error of $s(\hat{\theta}) = .40$, implying a 95% confidence interval of $[27.9, 29.5]$.
However, this is a poor choice, as the coverage probability of this confidence interval is one minus the Type I error of the hypothesis test based on the t-test. In Section 6.6 we discovered that such t-tests have very poor Type I error rates. Instead, we found better Type I error rates by reformulating the hypothesis as a linear restriction. These t-statistics take the form
$$t_n(\theta) = \frac{\hat{\beta}_2 + 2\hat{\beta}_3 \theta}{\left(h_\theta' \hat{V} h_\theta\right)^{1/2}}$$
where
$$h_\theta = \begin{pmatrix} 1 \\ 2\theta \end{pmatrix}$$
and $\hat{V}$ is the covariance matrix for $(\hat{\beta}_2, \hat{\beta}_3)$.
In the present context we are interested in forming a confidence interval, not testing a hypothesis, so we have to go one step further. Our desired confidence interval will be the set of parameter values $\theta$ which are not rejected by the hypothesis test. This is the set of $\theta$ such that $|t_n(\theta)| \le 1.96$. Since $t_n(\theta)$ is a non-linear function of $\theta$, there is not a simple expression for this set, but it can be found numerically quite easily. This set is $[27.0, 29.5]$. Notice that the upper end of the confidence interval is the same as that from the delta method, but the lower end is substantially lower.
101
Exercises
For exercises 1-4, the following de…nition is used. In the model u = Ad +c. the least-squares
estimate of d subject to the restriction l(d) = 0 is
¯
d = argmin
h(d)=0
o
a
(d)
o
a
(d) = (u ÷Ad)
t
(u ÷Ad) .
That is,
¯
d minimizes the sum of squared errors o
a
(d) over all d such that the restriction holds.
Exercise 6.1 In the model u = A
1
d
1
+ A
2
d
2
+ c. show that the least-squares estimate of d =
(d
1
. d
2
) subject to the constraint that d
2
= 0 is the OLS regression of u on A
1
.
Exercise 6.2 In the model u = A
1
d
1
+ A
2
d
2
+ c. show that the least-squares estimate of d =
(d
1
. d
2
). subject to the constraint that d
1
= c (where c is some given vector) is simply the OLS
regression of u ÷A
1
c on A
2
.
Exercise 6.3 In the model u = A
1
d
1
+ A
2
d
2
+ c. with A
1
and A
2
each : /. …nd the least-
squares estimate of d = (d
1
. d
2
). subject to the constraint that d
1
= ÷d
2
.
Exercise 6.4 Take the model u = Ad + c with the restriction H
t
d = r where H is a known
/ : matrix, r is a known : 1 vector, 0 < : < /. and rank(H) = :. Explain why
¯
d solves the
minimization of the Lagrangian
L(d. X) =
1
2
o
a
(d) +X
t

H
t
d ÷r

where X is : 1.
(a) Show that the solution is
¯
d =
`
d ÷

A
t
A

÷1
H

H
t

A
t
A

÷1
H

÷1

H
t
`
d ÷r

`
X =

H
t

A
t
A

÷1
H

÷1

H
t
`
d ÷r

where
`
d =

A
t
A

÷1
A
t
u
is the unconstrained OLS estimator.
(b) Verify that H
t
¯
d = r.
(c) Show that if H
t
d = r is true, then
¯
d ÷d =

1
I
÷

A
t
A

÷1
H

H
t

A
t
A

÷1
H

÷1
H
t

A
t
A

÷1
A
t
c.
(d) Under the standard assumptions plus H
t
d = r. …nd the asymptotic distribution of

:

¯
d ÷d

as : ÷·.
(e) Find an appropriate formula to calculate standard errors for the elements of
¯
d.
102
Exercise 6.5 Prove that if an additional regressor $X_{k+1}$ is added to $X$, Theil's adjusted $R^2$ increases if and only if $|t_{k+1}| > 1$, where $t_{k+1} = \hat{\beta}_{k+1}/s(\hat{\beta}_{k+1})$ is the t-ratio for $\hat{\beta}_{k+1}$ and
$$s(\hat{\beta}_{k+1}) = \left(s^2 \left[\left(X'X\right)^{-1}\right]_{k+1, k+1}\right)^{1/2}$$
is the homoskedasticity-formula standard error.
Exercise 6.6 The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are
• $I_i$ Investment to Capital Ratio (multiplied by 100).
• $Q_i$ Total Market Value to Asset Ratio (Tobin's Q).
• $C_i$ Cash Flow to Asset Ratio.
• $D_i$ Long Term Debt to Asset Ratio.
The flow variables are annual sums for 1987. The stock variables are beginning of year.
(a) Estimate a linear regression of $I_i$ on the other variables. Calculate appropriate standard errors.
(b) Calculate asymptotic confidence intervals for the coefficients.
(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by $Q_i$. Thus the coefficient on $Q_i$ should be positive and the others should be zero. Test the joint hypothesis that the coefficients on $C_i$ and $D_i$ are zero. Test the hypothesis that the coefficient on $Q_i$ is zero. Are the results consistent with the predictions of the theory?
(d) Now try a non-linear (quadratic) specification. Regress $I_i$ on $Q_i$, $C_i$, $D_i$, $Q_i^2$, $C_i^2$, $D_i^2$, $Q_i C_i$, $Q_i D_i$, $C_i D_i$. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.
Exercise 6.7 In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi). The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlove was interested in estimating a cost function: $TC = f(Q, PL, PK, PF)$.
(a) First estimate an unrestricted Cobb-Douglass specification
$$\log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i. \qquad (6.5)$$
Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.
(b) Using a Wald statistic, test the hypothesis $H_0 : \beta_3 + \beta_4 + \beta_5 = 1$.
(c) Estimate (6.5) by least-squares imposing this restriction by substitution. Report your parameter estimates and standard errors.
(d) Estimate (6.5) subject to $\beta_3 + \beta_4 + \beta_5 = 1$ using the restricted least-squares estimator from problem 4. Do you obtain the same estimates as in part (c)?
Chapter 7
Additional Regression Topics
7.1 Generalized Least Squares
In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model
\[ y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0, \]
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator (4.11) introduced in Section 4.5.1. The GLS estimator is sometimes called the Aitken estimator. The GLS estimator is infeasible since the matrix $D$ is unknown. A feasible GLS (FGLS) estimator replaces the unknown $D$ with an estimate $\hat{D} = \operatorname{diag}\{\hat{\sigma}_1^2, \ldots, \hat{\sigma}_n^2\}$. We now discuss this estimation problem.
Suppose that we model the conditional variance using the parametric form
\[ \sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha' z_i, \]
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.
Let $\eta_i = e_i^2$. Then
\[ E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1 \]
and we have the regression equation
\[ \eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i \qquad (7.1) \]
\[ E(\xi_i \mid x_i) = 0. \]
This regression error $\xi_i$ is generally heteroskedastic and has the conditional variance
\[ \operatorname{var}(\xi_i \mid x_i) = \operatorname{var}\left( e_i^2 \mid x_i \right) = E\left( \left( e_i^2 - E\left( e_i^2 \mid x_i \right) \right)^2 \mid x_i \right) = E\left( e_i^4 \mid x_i \right) - \left( E\left( e_i^2 \mid x_i \right) \right)^2. \]
Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
\[ \hat{\alpha} = (Z'Z)^{-1} Z'\eta \xrightarrow{p} \alpha \]
and
\[ \sqrt{n}\left( \hat{\alpha} - \alpha \right) \xrightarrow{d} N(0, V_\alpha) \]
where
\[ V_\alpha = \left( E\left( z_i z_i' \right) \right)^{-1} E\left( z_i z_i' \xi_i^2 \right) \left( E\left( z_i z_i' \right) \right)^{-1}. \qquad (7.2) \]
While $e_i$ is not observed, we have the OLS residual $\hat{e}_i = y_i - x_i'\hat{\beta} = e_i - x_i'(\hat{\beta} - \beta)$. Thus
\[ \phi_i \equiv \hat{\eta}_i - \eta_i = \hat{e}_i^2 - e_i^2 = -2 e_i x_i'\left( \hat{\beta} - \beta \right) + (\hat{\beta} - \beta)' x_i x_i' (\hat{\beta} - \beta). \]
And then
\[ \frac{1}{\sqrt{n}} \sum_{i=1}^n z_i \phi_i = \frac{-2}{n} \sum_{i=1}^n z_i e_i x_i' \sqrt{n}\left( \hat{\beta} - \beta \right) + \frac{1}{n} \sum_{i=1}^n z_i (\hat{\beta} - \beta)' x_i x_i' (\hat{\beta} - \beta) \sqrt{n} \xrightarrow{p} 0. \]
Let
\[ \tilde{\alpha} = (Z'Z)^{-1} Z'\hat{\eta} \qquad (7.3) \]
be from OLS regression of $\hat{\eta}_i$ on $z_i$. Then
\[ \sqrt{n}\left( \tilde{\alpha} - \alpha \right) = \sqrt{n}\left( \hat{\alpha} - \alpha \right) + \left( n^{-1} Z'Z \right)^{-1} n^{-1/2} Z'\phi \xrightarrow{d} N(0, V_\alpha). \qquad (7.4) \]
Thus the fact that $\eta_i$ is replaced with $\hat{\eta}_i$ is asymptotically irrelevant. We call (7.3) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$. We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma_i^2 = z_i'\alpha$ by
\[ \tilde{\sigma}_i^2 = \tilde{\alpha}' z_i. \qquad (7.5) \]
Suppose that $\tilde{\sigma}_i^2 > 0$ for all $i$. Then set
\[ \tilde{D} = \operatorname{diag}\{\tilde{\sigma}_1^2, \ldots, \tilde{\sigma}_n^2\} \]
and
\[ \tilde{\beta} = \left( X'\tilde{D}^{-1}X \right)^{-1} X'\tilde{D}^{-1} y. \]
This is the feasible GLS, or FGLS, estimator of $\beta$. Since there is not a unique specification for the conditional variance the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.
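To make the procedure concrete, the following is a minimal sketch in Python (numpy) of the two-step FGLS estimator, assuming arrays y and X are available, that the first column of X is an intercept, and that z_i is taken to be the levels and squares of the regressors. These choices, and the trimming constant below, are illustrative assumptions rather than a unique recipe.

import numpy as np

def fgls(y, X, c_trim=0.25):
    # First step: OLS coefficients and residuals
    n, k = X.shape
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    ehat = y - X @ beta_ols
    # Skedastic regression (7.3): regress squared residuals on z_i
    # (here z_i = the regressors and their squares; X[:, 0] is assumed to be the intercept)
    Z = np.column_stack([X, X[:, 1:] ** 2])
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ ehat**2)
    sig2 = Z @ alpha                                   # fitted conditional variances (7.5)
    # Trimming rule: bound the fitted variances away from zero
    sig2 = np.maximum(sig2, c_trim * np.mean(ehat**2))
    # FGLS: weighted least squares with weights 1 / sigma_tilde^2
    w = 1.0 / sig2
    beta_fgls = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta_fgls, sig2

Setting c_trim = 0.25 corresponds to the choice $c = 1/4$ discussed next; it is worth re-estimating with several values of the trimming parameter.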
One typical problem with implementation of FGLS estimation is that in the linear specification (7.1), there is no guarantee that $\tilde{\sigma}_i^2 > 0$ for all $i$. If $\tilde{\sigma}_i^2 < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde{\sigma}_i^2 \approx 0$ for some $i$ then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule takes the form
\[ \bar{\sigma}_i^2 = \max\left[ \tilde{\sigma}_i^2, \ c\hat{\sigma}^2 \right] \]
for some $c > 0$. For example, setting $c = 1/4$ means that the conditional variance function is constrained to exceed one-fourth of the unconditional variance. As there is no clear method to select $c$, this introduces a degree of arbitrariness. In this context it is useful to re-estimate the model with several choices for the trimming parameter. If the estimates turn out to be sensitive to its choice, the estimation method should probably be reconsidered.
It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS. As the proof is tricky, we just state the result without proof.
Theorem 7.1.1 If the skedastic regression is correctly specified,
\[ \sqrt{n}\left( \tilde{\beta}_{GLS} - \tilde{\beta}_{FGLS} \right) \xrightarrow{p} 0, \]
and thus
\[ \sqrt{n}\left( \tilde{\beta}_{FGLS} - \beta \right) \xrightarrow{d} N(0, V_\beta), \]
where
\[ V_\beta = \left( E\left( \sigma_i^{-2} x_i x_i' \right) \right)^{-1}. \]
Examining the asymptotic distribution of Theorem 7.1.1, the natural estimator of the asymptotic variance of $\tilde{\beta}$ is
\[ \tilde{V}_\beta^0 = \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-2} x_i x_i' \right)^{-1} = \left( \frac{1}{n} X'\tilde{D}^{-1}X \right)^{-1}, \]
which is consistent for $V_\beta$ as $n \to \infty$. This estimator $\tilde{V}_\beta^0$ is appropriate when the skedastic regression (7.1) is correctly specified.
It may be the case that $\alpha' z_i$ is only an approximation to the true conditional variance $\sigma_i^2 = E(e_i^2 \mid x_i)$. In this case we interpret $\alpha' z_i$ as a linear projection of $e_i^2$ on $z_i$. $\tilde{\beta}$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 7.1.1. Instead,
\[ V_\beta = \left( E\left( \left( \alpha' z_i \right)^{-1} x_i x_i' \right) \right)^{-1} \left( E\left( \left( \alpha' z_i \right)^{-2} \sigma_i^2 x_i x_i' \right) \right) \left( E\left( \left( \alpha' z_i \right)^{-1} x_i x_i' \right) \right)^{-1}. \]
$V_\beta$ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless $\sigma_i^2 = \alpha' z_i$, $\tilde{V}_\beta^0$ is inconsistent for $V_\beta$.
An appropriate solution is to use a White-type estimator in place of $\tilde{V}_\beta^0$. This may be written as
\[ \tilde{V}_\beta = \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-2} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-4} \hat{e}_i^2 x_i x_i' \right) \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-2} x_i x_i' \right)^{-1} \]
\[ = \left( \frac{1}{n} X'\tilde{D}^{-1}X \right)^{-1} \left( \frac{1}{n} X'\tilde{D}^{-1}\hat{D}\tilde{D}^{-1}X \right) \left( \frac{1}{n} X'\tilde{D}^{-1}X \right)^{-1} \]
where $\hat{D} = \operatorname{diag}\{\hat{e}_1^2, \ldots, \hat{e}_n^2\}$. This estimator is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on specification and estimation of the skedastic regression. Since the form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated conditional variances may contain more noise than information about the true conditional variances. In this case, FGLS can do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, and probably most importantly, OLS is a robust estimator of the parameter vector. It is consistent not only in the regression model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection and not a conditional mean, then the OLS and FGLS estimators will converge in probability to different limits as they will be estimating two different projections. The FGLS probability limit will depend on the particular function selected for the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger assumption of a correct conditional mean, and the cost is a loss of robustness to misspecification.
7.2 Testing for Heteroskedasticity
The hypothesis of homoskedasticity is that $E\left( e_i^2 \mid x_i \right) = \sigma^2$, or equivalently that
\[ H_0 : \alpha_1 = 0 \]
in the regression (7.1). We may therefore test this hypothesis by the estimation (7.3) and constructing a Wald statistic. In the classic literature it is typical to impose the stronger assumption that $e_i$ is independent of $x_i$, in which case $\xi_i$ is independent of $x_i$ and the asymptotic variance (7.2) for $\tilde{\alpha}$ simplifies to
\[ V_\alpha = \left( E\left( z_i z_i' \right) \right)^{-1} E\left( \xi_i^2 \right). \qquad (7.6) \]
Hence the standard test of $H_0$ is a classic $F$ (or Wald) test for exclusion of all regressors from the skedastic regression (7.3). The asymptotic distribution (7.4) and the asymptotic variance (7.6) under independence show that this test has an asymptotic chi-square distribution.
Theorem 7.2.1 Under $H_0$ and $e_i$ independent of $x_i$, the Wald test of $H_0$ is asymptotically $\chi^2_q$.
Most tests for heteroskedasticity take this basic form. The main differences between popular tests are which transformations of $x_i$ enter $z_i$. Motivated by the form of the asymptotic variance of the OLS estimator $\hat{\beta}$, White (1980) proposed that the test for heteroskedasticity be based on setting $z_i$ to equal all non-redundant elements of $x_i$, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of $z_i$, and replaced $E\left( \xi_i^2 \right)$ with $2\sigma^4$ which holds when $e_i$ is $N\left( 0, \sigma^2 \right)$. If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.
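As an illustration, here is a minimal Python sketch of the Wald test from the skedastic regression, using the simplified variance (7.6); ehat denotes the OLS residuals and Z the chosen matrix of functions z_i (with a constant in its first column). The particular construction of Z is left to the user since, as noted above, that is the main difference between the popular tests.

import numpy as np
from scipy import stats

def heteroskedasticity_wald(ehat, Z):
    # Wald test of H0: alpha_1 = 0 in the skedastic regression (7.1),
    # using the simplification (7.6) valid when the error is independent of the regressors
    n, q1 = Z.shape
    eta = ehat**2
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ eta)       # skedastic regression (7.3)
    xi = eta - Z @ alpha                              # skedastic regression residuals
    V = np.linalg.inv(Z.T @ Z / n) * np.mean(xi**2)   # estimate of (7.6)
    a1, V11 = alpha[1:], V[1:, 1:]                    # exclude the constant
    W = n * a1 @ np.linalg.solve(V11, a1)             # Wald statistic
    return W, stats.chi2.sf(W, df=q1 - 1)             # chi-square p-value (Theorem 7.2.1)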
It is important not to misuse tests for heteroskedasticity. They should not be used to determine whether to estimate a regression equation by OLS or FGLS, nor to determine whether classic or White standard errors should be reported. Hypothesis tests are not designed for these purposes. Rather, tests for heteroskedasticity should be used to answer the scientific question of whether or not the conditional variance is a function of the regressors. If this question is not of economic interest, then there is no value in conducting a test for heteroskedasticity.
7.3 Forecast Intervals
In the linear regression model the conditional mean of $y_i$ given $x_i = x$ is
\[ m(x) = E(y_i \mid x_i = x) = x'\beta. \]
In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta) = x'\beta$ and $\theta = h(\beta)$, we see that $\hat{m}(x) = \hat{\theta} = x'\hat{\beta}$ and $H_\beta = x$, so $s(\hat{\theta}) = \sqrt{n^{-1} x'\hat{V}_\beta x}$. Thus an asymptotic 95% confidence interval for $m(x)$ is
\[ \left[ x'\hat{\beta} \pm 2\sqrt{n^{-1} x'\hat{V}_\beta x} \right]. \]
It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.
For a given value of $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable rule is the conditional mean $m(x)$ as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\hat{m}(x) = x'\hat{\beta}$. We would also like a measure of uncertainty for the forecast.
The forecast error is $\hat{e}_i = y_i - \hat{m}(x) = e_i - x'\left( \hat{\beta} - \beta \right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat{\beta}$, this has variance
\[ E\hat{e}_i^2 = E\left( e_i^2 \mid x_i = x \right) + x' E\left( \hat{\beta} - \beta \right)\left( \hat{\beta} - \beta \right)' x = \sigma^2(x) + n^{-1} x'V_\beta x. \]
Assuming $E\left( e_i^2 \mid x_i \right) = \sigma^2$, the natural estimate of this variance is $\hat{\sigma}^2 + n^{-1} x'\hat{V}_\beta x$, so a standard error for the forecast is $\hat{s}(x) = \sqrt{\hat{\sigma}^2 + n^{-1} x'\hat{V}_\beta x}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\tilde{\sigma}^2(x) = \tilde{\alpha}' z$ from (7.5), then the forecast standard error is $\hat{s}(x) = \sqrt{\tilde{\sigma}^2(x) + n^{-1} x'\hat{V}_\beta x}$.
It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
\[ \left[ x'\hat{\beta} \pm 2\hat{s}(x) \right], \]
but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
\[ \frac{e_i - x'\left( \hat{\beta} - \beta \right)}{\hat{s}(x)}. \]
But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $N(0, \sigma^2)$, which is generally invalid.
To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Perhaps due to this difficulty, many applied forecasters use the simple approximate interval $\left[ x'\hat{\beta} \pm 2\hat{s}(x) \right]$ despite the lack of a convincing justification.
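For completeness, a small Python sketch of the point forecast and this simple approximate interval, under the homoskedastic variance estimate; beta_hat, V_hat (the estimate of the asymptotic variance of $\sqrt{n}(\hat{\beta} - \beta)$), sigma2_hat, and the evaluation point x are assumed to come from a prior OLS fit.

import numpy as np

def point_forecast_interval(x, beta_hat, V_hat, sigma2_hat, n):
    point = x @ beta_hat                                   # estimated conditional mean at x
    s_forecast = np.sqrt(sigma2_hat + x @ V_hat @ x / n)   # forecast standard error
    # simple approximate 95% forecast interval; see the caveats discussed above
    return point, (point - 2 * s_forecast, point + 2 * s_forecast)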
7.4 NonLinear Least Squares
In some cases we might use a parametric regression function $m(x, \theta) = E(y_i \mid x_i = x)$ which is a non-linear function of the parameters $\theta$. We describe this setting as non-linear regression. Examples of nonlinear regression functions include
\[ m(x, \theta) = \theta_1 + \theta_2 \frac{x}{1 + \theta_3 x} \]
\[ m(x, \theta) = \theta_1 + \theta_2 x^{\theta_3} \]
\[ m(x, \theta) = \theta_1 + \theta_2 \exp(\theta_3 x) \]
\[ m(x, \theta) = G(x'\theta), \quad G \text{ known} \]
\[ m(x, \theta) = \theta_1' x_1 + \left( \theta_2' x_1 \right) \Phi\left( \frac{x_2 - \theta_3}{\theta_4} \right) \]
\[ m(x, \theta) = \theta_1 + \theta_2 x + \theta_3 (x - \theta_4) 1(x > \theta_4) \]
\[ m(x, \theta) = \left( \theta_1' x_1 \right) 1(x_2 < \theta_3) + \left( \theta_2' x_1 \right) 1(x_2 > \theta_3) \]
In the first five examples, $m(x, \theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_4$ and $\theta_3$, which alters some of the analysis. When it exists, let
\[ m_\theta(x, \theta) = \frac{\partial}{\partial \theta} m(x, \theta). \]
Nonlinear regression is sometimes adopted because the functional form $m(x, \theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.
The least squares estimator $\hat{\theta}$ minimizes the normalized sum-of-squared-errors
\[ S_n(\theta) = \frac{1}{n} \sum_{i=1}^n \left( y_i - m(x_i, \theta) \right)^2. \]
When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are $\hat{e}_i = y_i - m(x_i, \hat{\theta})$.
One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta E\left( y_i - m(x_i, \theta) \right)^2$.
Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat{\theta}$ must be found by numerical methods. See Appendix E. When $m(x, \theta)$ is differentiable, then the FOC for minimization are
\[ 0 = \sum_{i=1}^n m_\theta\left( x_i, \hat{\theta} \right) \hat{e}_i. \qquad (7.7) \]
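To illustrate, a minimal Python sketch of NLLS for the exponential specification $m(x, \theta) = \theta_1 + \theta_2 \exp(\theta_3 x)$, using a generic numerical least-squares routine from scipy; the choice of specification, the starting values, and the use of the numerical Jacobian as $m_\theta$ are illustrative assumptions of this sketch.

import numpy as np
from scipy.optimize import least_squares

def m(theta, x):
    # illustrative nonlinear regression function
    return theta[0] + theta[1] * np.exp(theta[2] * x)

def nlls(y, x, theta_start):
    # minimize S_n(theta) numerically
    res = least_squares(lambda th: y - m(th, x), theta_start)
    theta_hat = res.x
    ehat = y - m(theta_hat, x)
    n = len(y)
    M = -res.jac                                      # rows are m_theta(x_i, theta_hat)'
    Q = M.T @ M / n
    Omega = (M * (ehat**2)[:, None]).T @ M / n
    V = np.linalg.inv(Q) @ Omega @ np.linalg.inv(Q)   # sandwich variance, as in the theorem below
    se = np.sqrt(np.diag(V) / n)                      # standard errors for theta_hat
    return theta_hat, se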
Theorem 7.4.1 Asymptotic Distribution of NLLS Estimator
If the model is identified and $m(x, \theta)$ is differentiable with respect to $\theta$,
\[ \sqrt{n}\left( \hat{\theta} - \theta \right) \xrightarrow{d} N(0, V_\theta) \]
\[ V_\theta = \left( E\left( m_{\theta i} m_{\theta i}' \right) \right)^{-1} \left( E\left( m_{\theta i} m_{\theta i}' e_i^2 \right) \right) \left( E\left( m_{\theta i} m_{\theta i}' \right) \right)^{-1} \]
where $m_{\theta i} = m_\theta(x_i, \theta_0)$.
Based on Theorem 7.4.1, an estimate of the asymptotic variance $V_\theta$ is
\[ \hat{V}_\theta = \left( \frac{1}{n} \sum_{i=1}^n \hat{m}_{\theta i} \hat{m}_{\theta i}' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^n \hat{m}_{\theta i} \hat{m}_{\theta i}' \hat{e}_i^2 \right) \left( \frac{1}{n} \sum_{i=1}^n \hat{m}_{\theta i} \hat{m}_{\theta i}' \right)^{-1} \]
where $\hat{m}_{\theta i} = m_\theta(x_i, \hat{\theta})$ and $\hat{e}_i = y_i - m(x_i, \hat{\theta})$.
Identification is often tricky in nonlinear regression models. Suppose that
\[ m(x_i, \theta) = \beta_1' z_i + \beta_2' x_i(\gamma) \]
where $x_i(\gamma)$ is a function of $x_i$ and the unknown parameter $\gamma$. Examples include $x_i(\gamma) = x_i^\gamma$, $x_i(\gamma) = \exp(\gamma x_i)$, and $x_i(\gamma) = x_i \, 1\left( g(x_i) > \gamma \right)$. The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
\[ H_0 : \beta_2 = 0. \]
However, under $H_0$, the model is
\[ y_i = \beta_1' z_i + e_i \]
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions.
The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.
Proof of Theorem 7.4.1 (Sketch). NLLS estimation falls in the class of optimization estimators. For this theory, it is useful to denote the true value of the parameter $\theta$ as $\theta_0$.
The first step is to show that $\hat{\theta} \xrightarrow{p} \theta_0$. Proving that nonlinear estimators are consistent is more challenging than for linear estimators. We sketch the main argument. The idea is that $\hat{\theta}$ minimizes the sample criterion function $S_n(\theta)$, which (for any $\theta$) converges in probability to the mean-squared error function $E\left( y_i - m(x_i, \theta) \right)^2$. Thus it seems reasonable that the minimizer $\hat{\theta}$ will converge in probability to $\theta_0$, the minimizer of $E\left( y_i - m(x_i, \theta) \right)^2$. It turns out that to show this rigorously, we need to show that $S_n(\theta)$ converges uniformly to its expectation $E\left( y_i - m(x_i, \theta) \right)^2$, which means that the maximum discrepancy must converge in probability to zero, to exclude the possibility that $S_n(\theta)$ is excessively wiggly in $\theta$. Proving uniform convergence is technically challenging, but it can be shown to hold broadly for relevant nonlinear regression models, especially if the regression function $m(x_i, \theta)$ is differentiable in $\theta$. For a complete treatment of the theory of optimization estimators see Newey and McFadden (1994).
Since $\hat{\theta} \xrightarrow{p} \theta_0$, $\hat{\theta}$ is close to $\theta_0$ for $n$ large, so the minimization of $S_n(\theta)$ only needs to be examined for $\theta$ close to $\theta_0$. Let
\[ y_i^0 = e_i + m_{\theta i}'\theta_0. \]
For $\theta$ close to the true value $\theta_0$, by a first-order Taylor series approximation,
\[ m(x_i, \theta) \simeq m(x_i, \theta_0) + m_{\theta i}'\left( \theta - \theta_0 \right). \]
Thus
\[ y_i - m(x_i, \theta) \simeq \left( e_i + m(x_i, \theta_0) \right) - \left( m(x_i, \theta_0) + m_{\theta i}'\left( \theta - \theta_0 \right) \right) = e_i - m_{\theta i}'\left( \theta - \theta_0 \right) = y_i^0 - m_{\theta i}'\theta. \]
Hence the sum of squared errors function is
\[ S_n(\theta) = \sum_{i=1}^n \left( y_i - m(x_i, \theta) \right)^2 \simeq \sum_{i=1}^n \left( y_i^0 - m_{\theta i}'\theta \right)^2 \]
and the right-hand-side is the SSE function for a linear regression of $y_i^0$ on $m_{\theta i}$. Thus the NLLS estimator $\hat{\theta}$ has the same asymptotic distribution as the (infeasible) OLS regression of $y_i^0$ on $m_{\theta i}$, which is that stated in the theorem.
7.5 Least Absolute Deviations
We stated that a conventional goal in econometrics is estimation of the impact of variation in $x_i$ on the central tendency of $y_i$. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.
To recall the definition and properties of the median, let $y$ be a continuous random variable. The median $\theta_0 = \operatorname{med}(y)$ is the value such that $P(y \leq \theta_0) = P(y \geq \theta_0) = .5$. Two useful facts about the median are that
\[ \theta_0 = \operatorname{argmin}_\theta E|y - \theta| \qquad (7.8) \]
and
\[ E \operatorname{sgn}(y - \theta_0) = 0 \]
where
\[ \operatorname{sgn}(u) = \begin{cases} 1 & \text{if } u \geq 0 \\ -1 & \text{if } u < 0 \end{cases} \]
is the sign function.
These facts and definitions motivate three estimators of $\theta$. The first definition is the 50th empirical quantile. The second is the value which minimizes $\frac{1}{n}\sum_{i=1}^n |y_i - \theta|$, and the third definition is the solution to the moment equation $\frac{1}{n}\sum_{i=1}^n \operatorname{sgn}(y_i - \theta) = 0$. These distinctions are illusory, however, as these estimators are indeed identical.
Now let's consider the conditional median of $y$ given a random vector $x$. Let $m(x) = \operatorname{med}(y \mid x)$ denote the conditional median of $y$ given $x$. The linear median regression model takes the form
\[ y_i = x_i'\beta + e_i, \qquad \operatorname{med}(e_i \mid x_i) = 0. \]
In this model, the linear function $\operatorname{med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption is that the median function is linear in $x$.
Conditional analogs of the facts about the median are
• $P(y_i \leq x'\beta_0 \mid x_i = x) = P(y_i > x'\beta_0 \mid x_i = x) = .5$
• $E(\operatorname{sgn}(e_i) \mid x_i) = 0$
• $E(x_i \operatorname{sgn}(e_i)) = 0$
• $\beta_0 = \operatorname{argmin}_\beta E\left| y_i - x_i'\beta \right|$
These facts motivate the following estimator. Let
\[ LAD_n(\beta) = \frac{1}{n} \sum_{i=1}^n \left| y_i - x_i'\beta \right| \]
be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function
\[ \hat{\beta} = \operatorname{argmin}_\beta LAD_n(\beta). \]
Equivalently, it is a solution to the moment condition
\[ \frac{1}{n} \sum_{i=1}^n x_i \operatorname{sgn}\left( y_i - x_i'\hat{\beta} \right) = 0. \qquad (7.9) \]
The LAD estimator has an asymptotic normal distribution.
Theorem 7.5.1 Asymptotic Distribution of LAD Estimator
When the conditional median is linear in $x$,
\[ \sqrt{n}\left( \hat{\beta} - \beta \right) \xrightarrow{d} N(0, V) \]
where
\[ V = \frac{1}{4} \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \left( E x_i x_i' \right) \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \]
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.
The variance of the asymptotic distribution inversely depends on $f(0 \mid x)$, the conditional density of the error at its median. When $f(0 \mid x)$ is large, then there are many innovations near to the median, and this improves estimation of the median. In the special case where the error is independent of $x_i$, then $f(0 \mid x) = f(0)$ and the asymptotic variance simplifies to
\[ V = \frac{\left( E x_i x_i' \right)^{-1}}{4 f(0)^2}. \qquad (7.10) \]
This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.
Computation of standard errors for LAD estimates typically is based on equation (7.10). The main difficulty is the estimation of $f(0)$, the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 16. While a complete proof of Theorem 7.5.1 is advanced, we provide a sketch here for completeness.
Proof of Theorem 7.5.1: Similar to NLLS, LAD is an optimization estimator. Let $\beta_0$ denote the true value of $\beta$.
The first step is to show that $\hat{\beta} \xrightarrow{p} \beta_0$. The general nature of the proof is similar to that for the NLLS estimator, and is sketched here. For any fixed $\beta$, by the WLLN, $LAD_n(\beta) \xrightarrow{p} E\left| y_i - x_i'\beta \right|$. Furthermore, it can be shown that this convergence is uniform in $\beta$. (Proving uniform convergence is more challenging than for the NLLS criterion since the LAD criterion is not differentiable in $\beta$.) It follows that $\hat{\beta}$, the minimizer of $LAD_n(\beta)$, converges in probability to $\beta_0$, the minimizer of $E\left| y_i - x_i'\beta \right|$.
Since $\operatorname{sgn}(a) = 1 - 2 \cdot 1(a \leq 0)$, (7.9) is equivalent to $g_n(\hat{\beta}) = 0$, where $g_n(\beta) = n^{-1} \sum_{i=1}^n g_i(\beta)$ and $g_i(\beta) = x_i \left( 1 - 2 \cdot 1\left( y_i \leq x_i'\beta \right) \right)$. Let $g(\beta) = E g_i(\beta)$. We need three preliminary results. First, by the central limit theorem (Theorem C.2.1)
\[ \sqrt{n}\left( g_n(\beta_0) - g(\beta_0) \right) = -n^{-1/2} \sum_{i=1}^n g_i(\beta_0) \xrightarrow{d} N\left( 0, E x_i x_i' \right) \]
since $E g_i(\beta_0) g_i(\beta_0)' = E x_i x_i'$. Second, using the law of iterated expectations and the chain rule of differentiation,
\[ \frac{\partial}{\partial \beta'} g(\beta) = \frac{\partial}{\partial \beta'} E x_i \left( 1 - 2 \cdot 1\left( y_i \leq x_i'\beta \right) \right) \]
\[ = -2 \frac{\partial}{\partial \beta'} E\left( x_i E\left( 1\left( e_i \leq x_i'\beta - x_i'\beta_0 \right) \mid x_i \right) \right) \]
\[ = -2 \frac{\partial}{\partial \beta'} E\left( x_i \int_{-\infty}^{x_i'\beta - x_i'\beta_0} f(e \mid x_i) \, de \right) \]
\[ = -2 E\left( x_i x_i' f\left( x_i'\beta - x_i'\beta_0 \mid x_i \right) \right) \]
so
\[ \frac{\partial}{\partial \beta'} g(\beta_0) = -2 E\left( x_i x_i' f(0 \mid x_i) \right). \]
Third, by a Taylor series expansion and the fact $g(\beta_0) = 0$,
\[ g(\hat{\beta}) \simeq \frac{\partial}{\partial \beta'} g(\beta_0) \left( \hat{\beta} - \beta_0 \right). \]
Together
\[ \sqrt{n}\left( \hat{\beta} - \beta_0 \right) \simeq \left( \frac{\partial}{\partial \beta'} g(\beta_0) \right)^{-1} \sqrt{n}\, g(\hat{\beta}) \]
\[ = \left( -2 E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \sqrt{n}\left( g(\hat{\beta}) - g_n(\hat{\beta}) \right) \]
\[ \simeq \frac{1}{2} \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \sqrt{n}\left( g_n(\beta_0) - g(\beta_0) \right) \]
\[ \xrightarrow{d} \frac{1}{2} \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} N\left( 0, E x_i x_i' \right) = N(0, V). \]
The third line follows from an asymptotic empirical process argument and the fact that $\hat{\beta} \xrightarrow{p} \beta_0$.
7.6 Quantile Regression
Quantile regression has become quite popular in recent econometric practice. For $\tau \in [0, 1]$ the $\tau$'th quantile $Q_\tau$ of a random variable with distribution function $F(u)$ is defined as
\[ Q_\tau = \inf\{u : F(u) \geq \tau\}. \]
When $F(u)$ is continuous and strictly monotonic, then $F(Q_\tau) = \tau$, so you can think of the quantile as the inverse of the distribution function. The quantile $Q_\tau$ is the value such that $\tau$ (percent) of the mass of the distribution is less than $Q_\tau$. The median is the special case $\tau = .5$.
The following alternative representation is useful. If the random variable $U$ has $\tau$'th quantile $Q_\tau$, then
\[ Q_\tau = \operatorname{argmin}_\theta E \rho_\tau(U - \theta), \qquad (7.11) \]
where $\rho_\tau(q)$ is the piecewise linear function
\[ \rho_\tau(q) = \begin{cases} -q(1 - \tau) & q < 0 \\ q\tau & q \geq 0 \end{cases} = q\left( \tau - 1(q < 0) \right). \qquad (7.12) \]
This generalizes representation (7.8) for the median to all quantiles.
For the random variables $(y_i, x_i)$ with conditional distribution function $F(y \mid x)$ the conditional quantile function $q_\tau(x)$ is
\[ Q_\tau(x) = \inf\{y : F(y \mid x) \geq \tau\}. \]
Again, when $F(y \mid x)$ is continuous and strictly monotonic in $y$, then $F\left( Q_\tau(x) \mid x \right) = \tau$. For fixed $\tau$, the quantile regression function $q_\tau(x)$ describes how the $\tau$'th quantile of the conditional distribution varies with the regressors.
As functions of $x$, the quantile regression functions can take any shape. However for computational convenience it is typical to assume that they are (approximately) linear in $x$ (after suitable transformations). This linear specification assumes that $Q_\tau(x) = \beta_\tau' x$ where the coefficients $\beta_\tau$ vary across the quantiles $\tau$. We then have the linear quantile regression model
\[ y_i = x_i'\beta_\tau + e_i \]
where $e_i$ is the error defined to be the difference between $y_i$ and its $\tau$'th conditional quantile $x_i'\beta_\tau$. By construction, the $\tau$'th conditional quantile of $e_i$ is zero; otherwise its properties are unspecified without further restrictions.
Given the representation (7.11), the quantile regression estimator $\hat{\beta}_\tau$ for $\beta_\tau$ solves the minimization problem
\[ \hat{\beta}_\tau = \operatorname{argmin}_\beta S_n^\tau(\beta) \]
where
\[ S_n^\tau(\beta) = \frac{1}{n} \sum_{i=1}^n \rho_\tau\left( y_i - x_i'\beta \right) \]
and $\rho_\tau(q)$ is defined in (7.12).
Since the quantile regression criterion function $S_n^\tau(\beta)$ does not have an algebraic solution, numerical methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have been developed for this problem, and are widely available.
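To illustrate one such linear programming formulation: write $y_i - x_i'\beta = u_i^+ - u_i^-$ with $u_i^+, u_i^- \geq 0$, so that minimizing the criterion is equivalent to minimizing $\tau \sum u_i^+ + (1 - \tau) \sum u_i^-$. A minimal Python sketch using scipy's generic LP solver follows; dedicated quantile regression routines in econometric packages will typically be faster, so this is only a sketch of the idea.

import numpy as np
from scipy.optimize import linprog

def quantile_regression(y, X, tau):
    # decision variables: (beta_plus, beta_minus, u_plus, u_minus), all nonnegative,
    # with beta = beta_plus - beta_minus (so beta is unrestricted in sign)
    n, k = X.shape
    c = np.concatenate([np.zeros(2 * k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])   # X beta + u_plus - u_minus = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:k] - res.x[k:2 * k]

Setting tau = 0.5 gives the LAD estimator of the previous section.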
An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 7.5.1.
Theorem 7.6.1 Asymptotic Distribution of the Quantile Regression Estimator
When the $\tau$'th conditional quantile is linear in $x$,
\[ \sqrt{n}\left( \hat{\beta}_\tau - \beta_\tau \right) \xrightarrow{d} N(0, V_\tau), \]
where
\[ V_\tau = \tau(1 - \tau) \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \left( E x_i x_i' \right) \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \]
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.
In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error $e_i$ is independent of $x_i$, then $f(0 \mid x_i) = f(0)$, the unconditional density of $e_i$ at 0, and we have the simplification
\[ V_\tau = \frac{\tau(1 - \tau)}{f(0)^2} \left( E\left( x_i x_i' \right) \right)^{-1}. \]
A recent monograph on the details of quantile regression is Koenker (2005).
7.7 Testing for Omitted NonLinearity
If the goal is to estimate the conditional expectation $E(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.
One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat{\beta} + \hat{e}_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde{\beta} + z_i'\tilde{\gamma} + \tilde{e}_i$ by OLS, and form a Wald statistic for $\gamma = 0$.
Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
\[ y_i = x_i'\beta + e_i \]
which is estimated by OLS, yielding predicted values $\hat{y}_i = x_i'\hat{\beta}$. Now let
\[ z_i = \begin{pmatrix} \hat{y}_i^2 \\ \vdots \\ \hat{y}_i^m \end{pmatrix} \]
be an $(m-1)$-vector of powers of $\hat{y}_i$. Then run the auxiliary regression
\[ y_i = x_i'\tilde{\beta} + z_i'\tilde{\gamma} + \tilde{e}_i \qquad (7.13) \]
by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. It is easy (although somewhat tedious) to show that under the null hypothesis, $W_n \xrightarrow{d} \chi^2_{m-1}$. Thus the null is rejected at the $\alpha\%$ level if $W_n$ exceeds the upper $\alpha\%$ tail critical value of the $\chi^2_{m-1}$ distribution.
To implement the test, $m$ must be selected in advance. Typically, small values such as $m = 2$, 3, or 4 seem to work best.
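A minimal Python sketch of the RESET test with a Wald statistic for $\gamma = 0$; the default $m = 3$ and the use of a heteroskedasticity-robust covariance (rather than a homoskedastic one) are choices of this sketch, not requirements of the test.

import numpy as np
from scipy import stats

def reset_test(y, X, m=3):
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    yhat = X @ beta
    Z = np.column_stack([yhat**p for p in range(2, m + 1)])   # powers yhat^2, ..., yhat^m
    XZ = np.hstack([X, Z])
    coef = np.linalg.solve(XZ.T @ XZ, XZ.T @ y)               # auxiliary regression (7.13)
    e = y - XZ @ coef
    Qinv = np.linalg.inv(XZ.T @ XZ / n)
    Omega = (XZ * (e**2)[:, None]).T @ XZ / n
    V = Qinv @ Omega @ Qinv                                   # robust covariance of sqrt(n)(coef - .)
    gamma, Vg = coef[k:], V[k:, k:]
    W = n * gamma @ np.linalg.solve(Vg, gamma)                # Wald statistic for gamma = 0
    return W, stats.chi2.sf(W, df=m - 1)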
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
\[ y_i = G(x_i'\beta) + e_i \]
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (7.13) may be written as
\[ y_i = x_i'\tilde{\beta} + \left( x_i'\hat{\beta} \right)^2 \tilde{\gamma}_1 + \left( x_i'\hat{\beta} \right)^3 \tilde{\gamma}_2 + \cdots + \left( x_i'\hat{\beta} \right)^m \tilde{\gamma}_{m-1} + \tilde{e}_i \]
which has essentially approximated $G(\cdot)$ by an $m$'th order polynomial.
7.8 Irrelevant Variables
In the model
\[ y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0, \]
$x_{2i}$ is "irrelevant" if $\beta_1$ is the parameter of interest and $\beta_2 = 0$. One estimator of $\beta_1$ is to regress $y_i$ on $x_{1i}$ alone, $\tilde{\beta}_1 = (X_1'X_1)^{-1}(X_1'y)$. Another is to regress $y_i$ on $x_{1i}$ and $x_{2i}$ jointly, yielding $(\hat{\beta}_1, \hat{\beta}_2)$. Under which conditions is $\hat{\beta}_1$ or $\tilde{\beta}_1$ superior?
It is easy to see that both estimators are consistent for $\beta_1$. However, they will (typically) have different asymptotic variances.
The comparison between the two estimators is straightforward when the error is conditionally homoskedastic $E\left( e_i^2 \mid x_i \right) = \sigma^2$. In this case
\[ \lim_{n \to \infty} n \operatorname{var}(\tilde{\beta}_1) = \left( E x_{1i} x_{1i}' \right)^{-1} \sigma^2 = Q_{11}^{-1} \sigma^2, \]
say, and
\[ \lim_{n \to \infty} n \operatorname{var}(\hat{\beta}_1) = \left( E x_{1i} x_{1i}' - E x_{1i} x_{2i}' \left( E x_{2i} x_{2i}' \right)^{-1} E x_{2i} x_{1i}' \right)^{-1} \sigma^2 = \left( Q_{11} - Q_{12} Q_{22}^{-1} Q_{21} \right)^{-1} \sigma^2, \]
say. If $Q_{12} = 0$ (so the variables are orthogonal) then these two variance matrices are equal, and the two estimators have equal asymptotic efficiency. Otherwise, since $Q_{12} Q_{22}^{-1} Q_{21} > 0$, then $Q_{11} > Q_{11} - Q_{12} Q_{22}^{-1} Q_{21}$, and consequently
\[ Q_{11}^{-1} \sigma^2 < \left( Q_{11} - Q_{12} Q_{22}^{-1} Q_{21} \right)^{-1} \sigma^2. \]
This means that $\tilde{\beta}_1$ has a lower asymptotic variance matrix than $\hat{\beta}_1$. We conclude that the inclusion of irrelevant variables reduces estimation efficiency if these variables are correlated with the relevant variables.
For example, take the model $y_i = \beta_0 + \beta_1 x_i + e_i$ and suppose that $\beta_0 = 0$. Let $\hat{\beta}_1$ be the estimate of $\beta_1$ from the unconstrained model, and $\tilde{\beta}_1$ be the estimate under the constraint $\beta_0 = 0$ (the least-squares estimate with the intercept omitted). Let $E x_i = \mu$, and $E(x_i - \mu)^2 = \sigma_x^2$. Then under (5.13),
\[ \lim_{n \to \infty} n \operatorname{var}(\tilde{\beta}_1) = \frac{\sigma^2}{\sigma_x^2 + \mu^2} \]
while
\[ \lim_{n \to \infty} n \operatorname{var}(\hat{\beta}_1) = \frac{\sigma^2}{\sigma_x^2}. \]
When $\mu \neq 0$, we see that $\tilde{\beta}_1$ has a lower asymptotic variance.
However, this result can be reversed when the error is conditionally heteroskedastic. In the absence of the homoskedasticity assumption, there is no clear ranking of the efficiency of the restricted estimator $\tilde{\beta}_1$ versus the unrestricted estimator.
7.9 Model Selection
In earlier sections we discussed the costs and benefits of inclusion/exclusion of variables. How does a researcher go about selecting an econometric specification, when economic theory does not provide complete guidance? This is the question of model selection. It is important that the model selection question be well-posed. For example, the question: "What is the right model for $y$?" is not well-posed, because it does not make clear the conditioning set. In contrast, the question, "Which subset of $(x_1, \ldots, x_K)$ enters the regression function $E(y_i \mid x_{1i} = x_1, \ldots, x_{Ki} = x_K)$?" is well posed.
In many cases the problem of model selection can be reduced to the comparison of two nested models, as the larger problem can be written as a sequence of such comparisons. We thus consider the question of the inclusion of $X_2$ in the linear regression
\[ y = X_1\beta_1 + X_2\beta_2 + e, \]
where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$. This is equivalent to the comparison of the two models
\[ \mathcal{M}_1 : \quad y = X_1\beta_1 + e, \qquad E(e \mid X_1, X_2) = 0 \]
\[ \mathcal{M}_2 : \quad y = X_1\beta_1 + X_2\beta_2 + e, \qquad E(e \mid X_1, X_2) = 0. \]
Note that $\mathcal{M}_1 \subset \mathcal{M}_2$. To be concrete, we say that $\mathcal{M}_2$ is true if $\beta_2 \neq 0$.
To fix notation, models 1 and 2 are estimated by OLS, with residual vectors $\hat{e}_1$ and $\hat{e}_2$, estimated variances $\hat{\sigma}_1^2$ and $\hat{\sigma}_2^2$, etc., respectively. To simplify some of the statistical discussion, we will on occasion use the homoskedasticity assumption $E\left( e_i^2 \mid x_{1i}, x_{2i} \right) = \sigma^2$.
A model selection procedure is a data-dependent rule which selects one of the two models. We can write this as $\hat{\mathcal{M}}$. There are many possible desirable properties for a model selection procedure. One useful property is consistency, that it selects the true model with probability one if the sample is sufficiently large. A model selection procedure is consistent if
\[ P\left( \hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1 \right) \to 1 \]
\[ P\left( \hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2 \right) \to 1. \]
However, this rule only makes sense when the true model is finite dimensional. If the truth is infinite dimensional, it is more appropriate to view model selection as determining the best finite sample approximation.
A common approach to model selection is to base the decision on a statistical test such as the Wald $W_n$. The model selection rule is as follows. For some critical level $\alpha$, let $c_\alpha$ satisfy $P\left( \chi^2_{k_2} > c_\alpha \right) = \alpha$. Then select $\mathcal{M}_1$ if $W_n \leq c_\alpha$, else select $\mathcal{M}_2$.
A major problem with this approach is that the critical level $\alpha$ is indeterminate. The reasoning which helps guide the choice of $\alpha$ in hypothesis testing (controlling Type I error) is not relevant for model selection. That is, if $\alpha$ is set to be a small number, then $P\left( \hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1 \right) \approx 1 - \alpha$ but $P\left( \hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2 \right)$ could vary dramatically, depending on the sample size, etc. Another problem is that if $\alpha$ is held fixed, then this model selection procedure is inconsistent, as $P\left( \hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1 \right) \to 1 - \alpha < 1$.
Another common approach to model selection is to use a selection criterion. One popular choice is the Akaike Information Criterion (AIC). The AIC under normality for model $m$ is
\[ AIC_m = \log\left( \hat{\sigma}_m^2 \right) + 2 \frac{k_m}{n}, \qquad (7.14) \]
where $\hat{\sigma}_m^2$ is the variance estimate for model $m$, and $k_m$ is the number of coefficients in the model. The AIC can be derived as an estimate of the Kullback-Leibler information distance $K(\mathcal{M}) = E\left( \log f(y \mid X) - \log f(y \mid X, \mathcal{M}) \right)$ between the true density and the model density. The rule is to select $\mathcal{M}_1$ if $AIC_1 < AIC_2$, else select $\mathcal{M}_2$. AIC selection is inconsistent, as the rule tends to overfit. Indeed, since under $\mathcal{M}_1$,
\[ LR_n = n\left( \log \hat{\sigma}_1^2 - \log \hat{\sigma}_2^2 \right) \simeq W_n \xrightarrow{d} \chi^2_{k_2}, \qquad (7.15) \]
then
\[ P\left( \hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1 \right) = P\left( AIC_1 < AIC_2 \mid \mathcal{M}_1 \right) \]
\[ = P\left( \log(\hat{\sigma}_1^2) + 2\frac{k_1}{n} < \log(\hat{\sigma}_2^2) + 2\frac{k_1 + k_2}{n} \mid \mathcal{M}_1 \right) \]
\[ = P\left( LR_n < 2 k_2 \mid \mathcal{M}_1 \right) \]
\[ \to P\left( \chi^2_{k_2} < 2 k_2 \right) < 1. \]
While many criteria similar to the AIC have been proposed, the most popular is one proposed by Schwarz based on Bayesian arguments. His criterion, known as the BIC, is
\[ BIC_m = \log\left( \hat{\sigma}_m^2 \right) + \log(n) \frac{k_m}{n}. \qquad (7.16) \]
Since $\log(n) > 2$ (if $n > 8$), the BIC places a larger penalty than the AIC on the number of estimated parameters and is more parsimonious.
In contrast to the AIC, BIC model selection is consistent. Indeed, since (7.15) holds under $\mathcal{M}_1$,
\[ \frac{LR_n}{\log(n)} \xrightarrow{p} 0, \]
so
\[ P\left( \hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1 \right) = P\left( BIC_1 < BIC_2 \mid \mathcal{M}_1 \right) = P\left( LR_n < \log(n) k_2 \mid \mathcal{M}_1 \right) = P\left( \frac{LR_n}{\log(n)} < k_2 \mid \mathcal{M}_1 \right) \to P(0 < k_2) = 1. \]
Also under $\mathcal{M}_2$, one can show that
\[ \frac{LR_n}{\log(n)} \xrightarrow{p} \infty, \]
thus
\[ P\left( \hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2 \right) = P\left( \frac{LR_n}{\log(n)} > k_2 \mid \mathcal{M}_2 \right) \to 1. \]
We have discussed model selection between two models. The methods extend readily to the issue of selection among multiple regressors. The general problem is the model
\[ y_i = \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_K x_{Ki} + e_i, \qquad E(e_i \mid x_i) = 0 \]
and the question is which subset of the coefficients are non-zero (equivalently, which regressors enter the regression).
There are two leading cases: ordered regressors and unordered.
In the ordered case, the models are
\[ \mathcal{M}_1 : \quad \beta_1 \neq 0, \ \beta_2 = \beta_3 = \cdots = \beta_K = 0 \]
\[ \mathcal{M}_2 : \quad \beta_1 \neq 0, \ \beta_2 \neq 0, \ \beta_3 = \cdots = \beta_K = 0 \]
\[ \vdots \]
\[ \mathcal{M}_K : \quad \beta_1 \neq 0, \ \beta_2 \neq 0, \ \ldots, \ \beta_K \neq 0, \]
which are nested. The AIC selection criterion estimates the $K$ models by OLS, stores the residual variance $\hat{\sigma}^2$ for each model, and then selects the model with the lowest AIC (7.14). Similarly for the BIC, selecting based on (7.16).
In the unordered case, a model consists of any possible subset of the regressors $\{x_{1i}, \ldots, x_{Ki}\}$, and the AIC or BIC in principle can be implemented by estimating all possible subset models. However, there are $2^K$ such models, which can be a very large number. For example, $2^{10} = 1024$, and $2^{20} = 1{,}048{,}576$. In the latter case, a full-blown implementation of the BIC selection criterion would seem computationally prohibitive.
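For the ordered case, a minimal Python sketch of selection by AIC (7.14) and BIC (7.16); the columns of X are assumed to be arranged in the ordering of the nested models above.

import numpy as np

def ordered_ic_selection(y, X):
    n, K = X.shape
    aic = np.empty(K)
    bic = np.empty(K)
    for m in range(1, K + 1):
        Xm = X[:, :m]                                       # model M_m uses the first m regressors
        beta = np.linalg.lstsq(Xm, y, rcond=None)[0]
        sig2 = np.mean((y - Xm @ beta) ** 2)                # residual variance estimate
        aic[m - 1] = np.log(sig2) + 2 * m / n               # (7.14)
        bic[m - 1] = np.log(sig2) + np.log(n) * m / n       # (7.16)
    return int(np.argmin(aic)) + 1, int(np.argmin(bic)) + 1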
Exercises
Exercise 7.1 The data file cps78.dat contains 550 observations on 20 variables taken from the May 1978 current population survey. Variables are listed in the file cps78.pdf. The goal of the exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the conditioning variables.
(a) Start by an OLS regression of LNWAGE on the other variables. Report coefficient estimates and standard errors.
(b) Consider augmenting the model by squares and/or cross-products of the conditioning variables. Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may re-estimate the model without these variables, if desired.
(d) Test whether the error variance is different for men and women. Interpret.
(e) Test whether the error variance is different for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general heteroskedasticity and report the results.
(g) Using this model for the conditional variance, re-estimate the model from part (c) using FGLS. Report the results.
(h) Do the OLS and FGLS estimates differ greatly? Note any interesting differences.
(i) Compare the estimated standard errors. Note any interesting differences.
Exercise 7.2 In the homoskedastic regression model $y = X\beta + e$ with $E(e_i \mid x_i) = 0$ and $E(e_i^2 \mid x_i) = \sigma^2$, suppose $\hat{\beta}$ is the OLS estimate of $\beta$ with covariance matrix $\hat{V}$, based on a sample of size $n$. Let $\hat{\sigma}^2$ be the estimate of $\sigma^2$. You wish to forecast an out-of-sample value of $y_{n+1}$ given that $x_{n+1} = x$. Thus the available information is the sample $(y, X)$, the estimates $(\hat{\beta}, \hat{V}, \hat{\sigma}^2)$, the residuals $\hat{e}$, and the out-of-sample value of the regressors, $x_{n+1}$.
(a) Find a point forecast of $y_{n+1}$.
(b) Find an estimate of the variance of this forecast.
Exercise 7.3 Suppose that $y_i = g(x_i, \theta) + e_i$ with $E(e_i \mid x_i) = 0$, $\hat{\theta}$ is the NLLS estimator, and $\hat{V}$ is the estimate of $\operatorname{var}(\hat{\theta})$. You are interested in the conditional mean function $E(y_i \mid x_i = x) = g(x)$ at some $x$. Find an asymptotic 95% confidence interval for $g(x)$.
Exercise 7.4 For any predictor $g(x_i)$ for $y_i$, the mean absolute error (MAE) is
\[ E\left| y_i - g(x_i) \right|. \]
Show that the function $g(x)$ which minimizes the MAE is the conditional median $m(x) = \operatorname{med}(y_i \mid x_i)$.
Exercise 7.5 Define
\[ g(u) = \tau - 1(u < 0) \]
where $1(\cdot)$ is the indicator function (takes the value 1 if the argument is true, else equals zero). Let $\theta$ satisfy $E g(y_i - \theta) = 0$. Is $\theta$ a quantile of the distribution of $y_i$?
Exercise 7.6 Verify equation (7.11).
Exercise 7.7 In Exercise 6.7, you estimated a cost function on a cross-section of electric companies. The equation you estimated was
\[ \log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i. \qquad (7.17) \]
(a) Following Nerlove, add the variable $(\log Q_i)^2$ to the regression. Do so. Assess the merits of this new specification using (i) a hypothesis test; (ii) AIC criterion; (iii) BIC criterion. Do you agree with this modification?
(b) Now try a non-linear specification. Consider model (7.17) plus the extra term $\beta_6 z_i$, where
\[ z_i = \log Q_i \left( 1 + \exp\left( -(\log Q_i - \beta_7) \right) \right)^{-1}. \]
In addition, impose the restriction $\beta_3 + \beta_4 + \beta_5 = 1$. This model is called a smooth threshold model. For values of $\log Q_i$ much below $\beta_7$, the variable $\log Q_i$ has a regression slope of $\beta_2$. For values much above $\beta_7$, the regression slope is $\beta_2 + \beta_6$, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter $\beta_7$.
The model works best when $\beta_7$ is selected so that several values (in this example, at least 10 to 15) of $\log Q_i$ are both below and above $\beta_7$. Examine the data and pick an appropriate range for $\beta_7$.
(c) Estimate the model by non-linear least squares. I recommend the concentration method: pick 10 (or more if you like) values of $\beta_7$ in this range. For each value of $\beta_7$, calculate $z_i$ and estimate the model by OLS. Record the sum of squared errors, and find the value of $\beta_7$ for which the sum of squared errors is minimized.
(d) Calculate standard errors for all the parameters $(\beta_1, \ldots, \beta_7)$.
Chapter 8
The Bootstrap
8.1 Definition of the Bootstrap
Let $F$ denote a distribution function for the population of observations $(y_i, x_i)$. Let
\[ T_n = T_n\left( (y_1, x_1), \ldots, (y_n, x_n), F \right) \]
be a statistic of interest, for example an estimator $\hat{\theta}$ or a t-statistic $\left( \hat{\theta} - \theta \right)/s(\hat{\theta})$. Note that we write $T_n$ as possibly a function of $F$. For example, the t-statistic is a function of the parameter $\theta$ which itself is a function of $F$.
The exact CDF of $T_n$ when the data are sampled from the distribution $F$ is
\[ G_n(u, F) = P(T_n \leq u \mid F). \]
In general, $G_n(u, F)$ depends on $F$, meaning that $G$ changes as $F$ changes.
Ideally, inference would be based on $G_n(u, F)$. This is generally impossible since $F$ is unknown.
Asymptotic inference is based on approximating $G_n(u, F)$ with $G(u, F) = \lim_{n \to \infty} G_n(u, F)$. When $G(u, F) = G(u)$ does not depend on $F$, we say that $T_n$ is asymptotically pivotal and use the distribution function $G(u)$ for inferential purposes.
In a seminal contribution, Efron (1979) proposed the bootstrap, which makes a different approximation. The unknown $F$ is replaced by a consistent estimate $F_n$ (one choice is discussed in the next section). Plugged into $G_n(u, F)$ we obtain
\[ G_n^*(u) = G_n(u, F_n). \qquad (8.1) \]
We call $G_n^*$ the bootstrap distribution. Bootstrap inference is based on $G_n^*(u)$.
Let $(y_i^*, x_i^*)$ denote random variables with the distribution $F_n$. A random sample from this distribution is called the bootstrap data. The statistic $T_n^* = T_n\left( (y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n \right)$ constructed on this sample is a random variable with distribution $G_n^*$. That is, $P(T_n^* \leq u) = G_n^*(u)$. We call $T_n^*$ the bootstrap statistic. The distribution of $T_n^*$ is identical to that of $T_n$ when the true CDF is $F_n$ rather than $F$.
The bootstrap distribution is itself random, as it depends on the sample through the estimator $F_n$.
In the next sections we describe computation of the bootstrap distribution.
8.2 The Empirical Distribution Function
Recall that $F(y, x) = P(y_i \leq y, x_i \leq x) = E\left( 1(y_i \leq y) 1(x_i \leq x) \right)$, where $1(\cdot)$ is the indicator function. This is a population moment. The method of moments estimator is the corresponding sample moment:
\[ F_n(y, x) = \frac{1}{n} \sum_{i=1}^n 1(y_i \leq y) 1(x_i \leq x). \qquad (8.2) \]
$F_n(y, x)$ is called the empirical distribution function (EDF). $F_n$ is a nonparametric estimate of $F$. Note that while $F$ may be either discrete or continuous, $F_n$ is by construction a step function.
The EDF is a consistent estimator of the CDF. To see this, note that for any $(y, x)$, $1(y_i \leq y) 1(x_i \leq x)$ is an iid random variable with expectation $F(y, x)$. Thus by the WLLN (Theorem 5.2.1), $F_n(y, x) \xrightarrow{p} F(y, x)$. Furthermore, by the CLT (Theorem C.2.1),
\[ \sqrt{n}\left( F_n(y, x) - F(y, x) \right) \xrightarrow{d} N\left( 0, F(y, x)\left( 1 - F(y, x) \right) \right). \]
To see the effect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for random samples of size $n = 25$, 50, 100, and 500. The random draws are from the $N(0, 1)$ distribution. For $n = 25$, the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the larger $n$. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.
Figure 8.1: Empirical Distribution Functions
The EDF is a valid discrete probability distribution which puts probability mass $1/n$ at each pair $(y_i, x_i)$, $i = 1, \ldots, n$. Notationally, it is helpful to think of a random pair $(y_i^*, x_i^*)$ with the distribution $F_n$. That is,
\[ P(y_i^* \leq y, x_i^* \leq x) = F_n(y, x). \]
We can easily calculate the moments of functions of $(y_i^*, x_i^*)$:
\[ E h(y_i^*, x_i^*) = \int h(y, x) \, dF_n(y, x) = \sum_{i=1}^n h(y_i, x_i) P(y_i^* = y_i, x_i^* = x_i) = \frac{1}{n} \sum_{i=1}^n h(y_i, x_i), \]
the empirical sample average.
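A short Python sketch of the EDF and of bootstrap moments which, as the display above shows, are simply sample averages; the function h is user-supplied.

import numpy as np

def edf(sample, y):
    # empirical distribution function of a scalar sample evaluated at y
    return np.mean(np.asarray(sample) <= y)

def bootstrap_moment(h, y_data, x_data):
    # E* h(y*, x*) under the EDF equals the sample average of h(y_i, x_i)
    return np.mean([h(yi, xi) for yi, xi in zip(y_data, x_data)])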
8.3 Nonparametric Bootstrap
The nonparametric bootstrap is obtained when the bootstrap distribution (8.1) is defined using the EDF (8.2) as the estimate $F_n$ of $F$.
Since the EDF $F_n$ is a multinomial (with $n$ support points), in principle the distribution $G_n^*$ could be calculated by direct methods. However, as there are $2^n$ possible samples $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$, such a calculation is computationally infeasible. The popular alternative is to use simulation to approximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following points of clarification:
• The sample size $n$ used for the simulation is the same as the sample size.
• The random vectors $(y_i^*, x_i^*)$ are drawn randomly from the empirical distribution. This is equivalent to sampling a pair $(y_i, x_i)$ randomly from the sample.
The bootstrap statistic $T_n^* = T_n\left( (y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n \right)$ is calculated for each bootstrap sample. This is repeated $B$ times. $B$ is known as the number of bootstrap replications. A theory for the determination of the number of bootstrap replications $B$ has been developed by Andrews and Buchinsky (2000). It is desirable for $B$ to be large, so long as the computational costs are reasonable. $B = 1000$ typically suffices.
When the statistic $T_n$ is a function of $F$, it is typically through dependence on a parameter. For example, the t-ratio $\left( \hat{\theta} - \theta \right)/s(\hat{\theta})$ depends on $\theta$. As the bootstrap statistic replaces $F$ with $F_n$, it similarly replaces $\theta$ with $\theta_n$, the value of $\theta$ implied by $F_n$. Typically $\theta_n = \hat{\theta}$, the parameter estimate. (When in doubt use $\hat{\theta}$.)
Sampling from the EDF is particularly easy. Since $F_n$ is a discrete probability distribution putting probability mass $1/n$ at each sample point, sampling from the EDF is equivalent to random sampling a pair $(y_i, x_i)$ from the observed data with replacement. In consequence, a bootstrap sample $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$ will necessarily have some ties and multiple values, which is generally not a problem.
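A minimal Python sketch of the simulation just described: draw n pairs with replacement, recompute the statistic, and repeat B times. The function stat(y, X) that computes the statistic of interest is an assumption of the sketch, and B = 1000 follows the suggestion above.

import numpy as np

def bootstrap_statistics(stat, y, X, B=1000, seed=0):
    # returns B draws of the bootstrap statistic T*_n
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)     # sample pairs (y_i, x_i) with replacement
        draws[b] = stat(y[idx], X[idx])
    return draws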
8.4 Bootstrap Estimation of Bias and Variance
The bias of $\hat{\theta}$ is $\tau_n = E(\hat{\theta} - \theta_0)$. Let $T_n(\theta) = \hat{\theta} - \theta$. Then $\tau_n = E(T_n(\theta_0))$. The bootstrap counterparts are $\hat{\theta}^* = \hat{\theta}\left( (y_1^*, x_1^*), \ldots, (y_n^*, x_n^*) \right)$ and $T_n^* = \hat{\theta}^* - \theta_n = \hat{\theta}^* - \hat{\theta}$. The bootstrap estimate of $\tau_n$ is
\[ \tau_n^* = E(T_n^*). \]
If this is calculated by the simulation described in the previous section, the estimate of $\tau_n^*$ is
\[ \hat{\tau}_n^* = \frac{1}{B} \sum_{b=1}^B T_{nb}^* = \frac{1}{B} \sum_{b=1}^B \hat{\theta}_b^* - \hat{\theta} = \bar{\theta}^* - \hat{\theta}, \]
where $\bar{\theta}^* = B^{-1}\sum_{b=1}^B \hat{\theta}_b^*$ is the average of the bootstrap estimates.
If $\hat{\theta}$ is biased, it might be desirable to construct a bias-corrected estimator (one with reduced bias). Ideally, this would be
\[ \tilde{\theta} = \hat{\theta} - \tau_n, \]
but $\tau_n$ is unknown. The (estimated) bootstrap bias-corrected estimator is
\[ \tilde{\theta}^* = \hat{\theta} - \hat{\tau}_n^* = \hat{\theta} - (\bar{\theta}^* - \hat{\theta}) = 2\hat{\theta} - \bar{\theta}^*. \]
Note, in particular, that the bias-corrected estimator is not $\bar{\theta}^*$. Intuitively, the bootstrap makes the following experiment. Suppose that $\hat{\theta}$ is the truth. Then what is the average value of $\hat{\theta}$ calculated from such samples? The answer is $\bar{\theta}^*$. If this is lower than $\hat{\theta}$, this suggests that the estimator is downward-biased, so a bias-corrected estimator of $\theta$ should be larger than $\hat{\theta}$, and the best guess is the difference between $\hat{\theta}$ and $\bar{\theta}^*$. Similarly if $\bar{\theta}^*$ is higher than $\hat{\theta}$, then the estimator is upward-biased and the bias-corrected estimator should be lower than $\hat{\theta}$.
Let $T_n = \hat{\theta}$. The variance of $\hat{\theta}$ is
\[ V_n = E(T_n - E T_n)^2. \]
Let $T_n^* = \hat{\theta}^*$. It has variance
\[ V_n^* = E(T_n^* - E T_n^*)^2. \]
The simulation estimate is
\[ \hat{V}_n^* = \frac{1}{B} \sum_{b=1}^B \left( \hat{\theta}_b^* - \bar{\theta}^* \right)^2. \]
A bootstrap standard error for $\hat{\theta}$ is the square root of the bootstrap estimate of variance,
\[ s^*(\hat{\theta}) = \sqrt{\hat{V}_n^*}. \]
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspected. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.
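Given bootstrap draws of the estimator (for example from the resampling loop sketched in the previous section), the bias and variance estimates and the bias-corrected estimator of this section are one-line computations; a minimal Python sketch:

import numpy as np

def bootstrap_bias_variance(theta_hat, theta_star):
    # theta_star: array of B bootstrap estimates theta*_b
    theta_bar = np.mean(theta_star)
    bias = theta_bar - theta_hat                       # estimate of tau_n
    variance = np.mean((theta_star - theta_bar) ** 2)  # simulation estimate of the variance
    theta_bc = 2 * theta_hat - theta_bar               # bias-corrected estimator
    return bias, np.sqrt(variance), theta_bc           # bias, bootstrap s.e., corrected estimate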
8.5 Percentile Intervals
For a distribution function $G_n(u, F)$, let $q_n(\alpha, F)$ denote its quantile function. This is the function which solves
\[ G_n\left( q_n(\alpha, F), F \right) = \alpha. \]
[When $G_n(u, F)$ is discrete, $q_n(\alpha, F)$ may be non-unique, but we will ignore such complications.] Let $q_n(\alpha)$ denote the quantile function of the true sampling distribution, and $q_n^*(\alpha) = q_n(\alpha, F_n)$ denote the quantile function of the bootstrap distribution. Note that this function will change depending on the underlying statistic $T_n$ whose distribution is $G_n$.
Let $T_n = \hat{\theta}$, an estimate of a parameter of interest. In $(1 - \alpha)\%$ of samples, $\hat{\theta}$ lies in the region $[q_n(\alpha/2), \ q_n(1 - \alpha/2)]$. This motivates a confidence interval proposed by Efron:
\[ C_1 = [q_n^*(\alpha/2), \ q_n^*(1 - \alpha/2)]. \]
This is often called the percentile confidence interval.
Computationally, the quantile $q_n^*(\alpha)$ is estimated by $\hat{q}_n^*(\alpha)$, the $\alpha$'th sample quantile of the simulated statistics $\{T_{n1}^*, \ldots, T_{nB}^*\}$, as discussed in the section on Monte Carlo simulation. The $(1 - \alpha)\%$ Efron percentile interval is then $[\hat{q}_n^*(\alpha/2), \ \hat{q}_n^*(1 - \alpha/2)]$.
The interval $C_1$ is a popular bootstrap confidence interval often used in empirical practice. This is because it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap, and also has the feature that it is translation invariant. That is, if we define $\phi = f(\theta)$ as the parameter of interest for a monotonically increasing function $f$, then the percentile method applied to this problem will produce the confidence interval $[f(q_n^*(\alpha/2)), \ f(q_n^*(1 - \alpha/2))]$, which is a naturally good property.
However, as we show now, $C_1$ is in a deep sense very poorly motivated.
1
. Let T
a
(0) =
^
0 ÷ 0 and let c
a
(c)
be the quantile function of its distribution. (These are the original quantiles, with 0 subtracted.)
Then C
1
can alternatively be written as
C
1
= [
^
0 +c
+
a
(c´2).
^
0 +c
+
a
(1 ÷c´2)].
This is a bootstrap estimate of the “ideal” con…dence interval
C
0
1
= [
^
0 +c
a
(c´2).
^
0 +c
a
(1 ÷c´2)].
The latter has coverage probability
P

0
0
÷ C
0
1

= P

^
0 +c
a
(c´2) _ 0
0
_
^
0 +c
a
(1 ÷c´2)

= P

÷c
a
(1 ÷c´2) _
^
0 ÷0
0
_ ÷c
a
(c´2)

= G
a
(÷c
a
(c´2). 1
0
) ÷G
a
(÷c
a
(1 ÷c´2). 1
0
)
which generally is not $1 - \alpha$! There is one important exception. If $\hat{\theta} - \theta_0$ has a symmetric distribution, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
\[ P\left( \theta_0 \in C_1^0 \right) = G_n\left( -q_n(\alpha/2), F_0 \right) - G_n\left( -q_n(1 - \alpha/2), F_0 \right) \]
\[ = \left( 1 - G_n\left( q_n(\alpha/2), F_0 \right) \right) - \left( 1 - G_n\left( q_n(1 - \alpha/2), F_0 \right) \right) \]
\[ = \left( 1 - \frac{\alpha}{2} \right) - \left( 1 - \left( 1 - \frac{\alpha}{2} \right) \right) = 1 - \alpha \]
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat{\theta}$ has a symmetric distribution about $\theta_0$.
When $\hat{\theta}$ does not have a symmetric distribution, $C_1$ may perform quite poorly.
However, by the translation invariance argument presented above, it also follows that if there exists some monotonically increasing transformation $f(\cdot)$ such that $f(\hat{\theta})$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.
Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.
The problems with the percentile method can be circumvented, at least in principle, by an alternative method.
Let $T_n(\theta) = \hat{\theta} - \theta$. Then
\[ 1 - \alpha = P\left( q_n(\alpha/2) \leq T_n(\theta_0) \leq q_n(1 - \alpha/2) \right) = P\left( \hat{\theta} - q_n(1 - \alpha/2) \leq \theta_0 \leq \hat{\theta} - q_n(\alpha/2) \right), \]
so an exact $(1 - \alpha)\%$ confidence interval for $\theta_0$ would be
\[ C_2^0 = [\hat{\theta} - q_n(1 - \alpha/2), \ \hat{\theta} - q_n(\alpha/2)]. \]
This motivates a bootstrap analog
\[ C_2 = [\hat{\theta} - q_n^*(1 - \alpha/2), \ \hat{\theta} - q_n^*(\alpha/2)]. \]
Notice that generally this is very different from the Efron interval $C_1$! They coincide in the special case that $G_n^*(u)$ is symmetric about $\hat{\theta}$, but otherwise they differ.
Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics $T_n^* = \hat{\theta}^* - \hat{\theta}$, which are centered at the sample estimate $\hat{\theta}$. These are sorted to yield the quantile estimates $\hat{q}_n^*(.025)$ and $\hat{q}_n^*(.975)$. The 95% confidence interval is then $[\hat{\theta} - \hat{q}_n^*(.975), \ \hat{\theta} - \hat{q}_n^*(.025)]$.
This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.
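A Python sketch of the Efron percentile interval C1 and the alternative interval C2 from the same bootstrap draws; sample quantiles are computed with np.quantile, which is one of several possible quantile conventions.

import numpy as np

def percentile_intervals(theta_hat, theta_star, alpha=0.05):
    # Efron percentile interval C1: quantiles of the bootstrap estimates
    C1 = tuple(np.quantile(theta_star, [alpha / 2, 1 - alpha / 2]))
    # alternative interval C2: based on quantiles of T*_n = theta*_b - theta_hat
    q_lo, q_hi = np.quantile(theta_star - theta_hat, [alpha / 2, 1 - alpha / 2])
    C2 = (theta_hat - q_hi, theta_hat - q_lo)
    return C1, C2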
8.6 Percentile-t Equal-Tailed Interval
Suppose we want to test $H_0 : \theta = \theta_0$ against $H_1 : \theta < \theta_0$ at size $\alpha$. We would set $T_n(\theta) = \left( \hat{\theta} - \theta \right)/s(\hat{\theta})$ and reject $H_0$ in favor of $H_1$ if $T_n(\theta_0) < c$, where $c$ would be selected so that
\[ P\left( T_n(\theta_0) < c \right) = \alpha. \]
Thus $c = q_n(\alpha)$. Since this is unknown, a bootstrap test replaces $q_n(\alpha)$ with the bootstrap estimate $q_n^*(\alpha)$, and the test rejects if $T_n(\theta_0) < q_n^*(\alpha)$.
Similarly, if the alternative is $H_1 : \theta > \theta_0$, the bootstrap test rejects if $T_n(\theta_0) > q_n^*(1 - \alpha)$.
Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics $T_n^* = \left( \hat{\theta}^* - \hat{\theta} \right)/s(\hat{\theta}^*)$. Note, and this is important, that the bootstrap test statistic is centered at the estimate $\hat{\theta}$, and the standard error $s(\hat{\theta}^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat{q}_n^*(\alpha)$ and/or $\hat{q}_n^*(1 - \alpha)$.
Let $T_n(\theta) = \left( \hat{\theta} - \theta \right)/s(\hat{\theta})$. Then taking the intersection of two one-sided intervals,
\[ 1 - \alpha = P\left( q_n(\alpha/2) \leq T_n(\theta_0) \leq q_n(1 - \alpha/2) \right) \]
\[ = P\left( q_n(\alpha/2) \leq \left( \hat{\theta} - \theta_0 \right)/s(\hat{\theta}) \leq q_n(1 - \alpha/2) \right) \]
\[ = P\left( \hat{\theta} - s(\hat{\theta}) q_n(1 - \alpha/2) \leq \theta_0 \leq \hat{\theta} - s(\hat{\theta}) q_n(\alpha/2) \right), \]
so an exact $(1 - \alpha)\%$ confidence interval for $\theta_0$ would be
\[ C_3^0 = [\hat{\theta} - s(\hat{\theta}) q_n(1 - \alpha/2), \ \hat{\theta} - s(\hat{\theta}) q_n(\alpha/2)]. \]
This motivates a bootstrap analog
\[ C_3 = [\hat{\theta} - s(\hat{\theta}) q_n^*(1 - \alpha/2), \ \hat{\theta} - s(\hat{\theta}) q_n^*(\alpha/2)]. \]
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that $\theta_0$ is below the left endpoint approximately equals the probability that $\theta_0$ is above the right endpoint, each $\alpha/2$.
Computationally, this is based on the critical values from the one-sided hypothesis tests, discussed above.
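A Python sketch of the equal-tailed percentile-t interval C3. It requires, for each bootstrap sample, both the estimate and its standard error computed on that sample; the function stat_se(y, X), which returns this pair, is an assumption of the sketch.

import numpy as np

def percentile_t_interval(y, X, stat_se, B=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    theta_hat, s_hat = stat_se(y, X)
    t_star = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        th_b, s_b = stat_se(y[idx], X[idx])
        t_star[b] = (th_b - theta_hat) / s_b   # centered at theta_hat, s.e. from the bootstrap sample
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return theta_hat - s_hat * q_hi, theta_hat - s_hat * q_lo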
8.7 Symmetric Percentile-t Intervals
Suppose we want to test $H_0 : \theta = \theta_0$ against $H_1 : \theta \neq \theta_0$ at size $\alpha$. We would set $T_n(\theta) = \left( \hat{\theta} - \theta \right)/s(\hat{\theta})$ and reject $H_0$ in favor of $H_1$ if $\left| T_n(\theta_0) \right| > c$, where $c$ would be selected so that
\[ P\left( \left| T_n(\theta_0) \right| > c \right) = \alpha. \]
Note that
\[ P\left( \left| T_n(\theta_0) \right| < c \right) = P\left( -c < T_n(\theta_0) < c \right) = G_n(c) - G_n(-c) \equiv \overline{G}_n(c), \]
which is a symmetric distribution function. The ideal critical value $c = q_n(\alpha)$ solves the equation
\[ \overline{G}_n\left( q_n(\alpha) \right) = 1 - \alpha. \]
Equivalently, $q_n(\alpha)$ is the $1 - \alpha$ quantile of the distribution of $\left| T_n(\theta_0) \right|$.
The bootstrap estimate is $q_n^*(\alpha)$, the $1 - \alpha$ quantile of the distribution of $\left| T_n^* \right|$, or the number which solves the equation
\[ \overline{G}_n^*\left( q_n^*(\alpha) \right) = G_n^*\left( q_n^*(\alpha) \right) - G_n^*\left( -q_n^*(\alpha) \right) = 1 - \alpha. \]
Computationally, $q_n^*(\alpha)$ is estimated from a bootstrap simulation by sorting the bootstrap t-statistics $\left| T_n^* \right| = \left| \hat{\theta}^* - \hat{\theta} \right| / s(\hat{\theta}^*)$, and taking the upper $\alpha\%$ quantile. The bootstrap test rejects if $\left| T_n(\theta_0) \right| > q_n^*(\alpha)$.
Let
\[ C_4 = [\hat{\theta} - s(\hat{\theta}) q_n^*(\alpha), \ \hat{\theta} + s(\hat{\theta}) q_n^*(\alpha)], \]
where $q_n^*(\alpha)$ is the bootstrap critical value for a two-sided hypothesis test. $C_4$ is called the symmetric percentile-t interval. It is designed to work well since
\[ P\left( \theta_0 \in C_4 \right) = P\left( \hat{\theta} - s(\hat{\theta}) q_n^*(\alpha) \leq \theta_0 \leq \hat{\theta} + s(\hat{\theta}) q_n^*(\alpha) \right) = P\left( \left| T_n(\theta_0) \right| < q_n^*(\alpha) \right) \simeq P\left( \left| T_n(\theta_0) \right| < q_n(\alpha) \right) = 1 - \alpha. \]
If $\theta$ is a vector, then to test $H_0 : \theta = \theta_0$ against $H_1 : \theta \neq \theta_0$ at size $\alpha$, we would use a Wald statistic
\[ W_n(\theta) = n\left( \hat{\theta} - \theta \right)' \hat{V}_\theta^{-1} \left( \hat{\theta} - \theta \right) \]
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \geq q_n(\alpha)$, where $q_n(\alpha)$ is the $(1 - \alpha)\%$ quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \geq q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $(1 - \alpha)\%$ quantile of the distribution of
\[ W_n^* = n\left( \hat{\theta}^* - \hat{\theta} \right)' \hat{V}_\theta^{*-1} \left( \hat{\theta}^* - \hat{\theta} \right). \]
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\left( \hat{\theta}^* - \hat{\theta} \right)$, not $\left( \hat{\theta}^* - \theta_0 \right)$. [This is a typical mistake made by practitioners.]
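A short Python sketch of the symmetric percentile-t interval C4, reusing bootstrap t-statistics of the form computed in the sketch of the previous section:

import numpy as np

def symmetric_percentile_t(theta_hat, s_hat, t_star, alpha=0.05):
    # t_star: bootstrap t-statistics (theta*_b - theta_hat) / s(theta*_b)
    q = np.quantile(np.abs(t_star), 1 - alpha)      # 1 - alpha quantile of |T*_n|
    return theta_hat - s_hat * q, theta_hat + s_hat * q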
8.8 Asymptotic Expansions
Let $T_n \in \mathbb{R}$ be a statistic such that
\[ T_n \xrightarrow{d} N(0, \sigma^2). \qquad (8.3) \]
In some cases, such as when $T_n$ is a t-ratio, then $\sigma^2 = 1$. In other cases $\sigma^2$ is unknown. Equivalently, writing $T_n \sim G_n(u, F)$, then
\[ \lim_{n \to \infty} G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right), \]
or
\[ G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right) + o(1). \qquad (8.4) \]
While (8.4) says that $G_n$ converges to $\Phi\left( \frac{u}{\sigma} \right)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.
The following notation will be helpful. Let $a_n$ be a sequence.
Definition 8.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.
Definition 8.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.
Definition 8.8.3 $a_n = o(n^{-r})$ if $n^r |a_n| \to 0$ as $n \to \infty$.
Basically, $a_n = O(n^{-r})$ if it declines to zero like $n^{-r}$.
We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.
Theorem 8.8.1 Under regularity conditions and (8.3),
\[ G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) + O(n^{-3/2}) \]
uniformly over $u$, where $g_1$ is an even function of $u$, and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.
The expansion in Theorem 8.8.1 is often called an Edgeworth expansion.
We can interpret Theorem 8.8.1 as follows. First, G
a
(n. 1) converges to the normal limit at
rate :
1/2
. To a second order of approximation,
G
a
(n. 1) -

n
o

+:
÷1/2
o
1
(n. 1).
Since the derivative of o
1
is odd, the density function is skewed. To a third order of approximation,
G
a
(n. 1) -

n
o

+:
÷1/2
o
1
(n. 1) +:
÷1
o
2
(n. 1)
which adds a symmetric non-normal component to the approximate density (for example, adding
leptokurtosis).
[Side Note: When T
a
=

:

A
a
÷j

´o. a standardized sample mean, then
o
1
(n) = ÷
1
6
i
3

n
2
÷1

c(n)
o
2
(n) = ÷

1
24
i
4

n
3
÷3n

+
1
72
i
2
3

n
5
÷10n
3
+ 15n

c(n)
128
where c(n) is the standard normal pdf, and
i
3
= E(A ÷j)
3
´o
3
i
4
= E(A ÷j)
4
´o
4
÷3
the standardized skewness and excess kurtosis of the distribution of A. Note that when i
3
= 0
and i
4
= 0. then o
1
= 0 and o
2
= 0. so the second-order Edgeworth expansion corresponds to the
normal distribution.]
Francis Edgeworth
Francis Ysidro Edgeworth (1845-1926) of Ireland, founding editor of the Economic Journal,
was a profound economic and statistical theorist, developing the theories of indi¤erence
curves and asymptotic expansions. He also could be viewed as the …rst econometrician due
to his early use of mathematical statistics in the study of economic data.
8.9 One-Sided Tests
Using the expansion of Theorem 8.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio T_n. An asymptotic test is based on Φ(u).
To the second order, the exact distribution is
P(T_n < u) = G_n(u, F_0) = Φ(u) + (1/n^{1/2}) g_1(u, F_0) + O(n^{−1})
since σ = 1. The difference is
Φ(u) − G_n(u, F_0) = (1/n^{1/2}) g_1(u, F_0) + O(n^{−1}) = O(n^{−1/2}),
so the order of the error is O(n^{−1/2}).
A bootstrap test is based on G*_n(u), which from Theorem 8.8.1 has the expansion
G*_n(u) = G_n(u, F_n) = Φ(u) + (1/n^{1/2}) g_1(u, F_n) + O(n^{−1}).
Because Φ(u) appears in both expansions, the difference between the bootstrap distribution and the true distribution is
G*_n(u) − G_n(u, F_0) = (1/n^{1/2}) ( g_1(u, F_n) − g_1(u, F_0) ) + O(n^{−1}).
Since F_n converges to F at rate √n, and g_1 is continuous with respect to F, the difference ( g_1(u, F_n) − g_1(u, F_0) ) converges to 0 at rate √n. Heuristically,
g_1(u, F_n) − g_1(u, F_0) ≈ (∂/∂F) g_1(u, F_0) ( F_n − F_0 ) = O(n^{−1/2}).
The "derivative" (∂/∂F) g_1(u, F) is only heuristic, as F is a function. We conclude that
G*_n(u) − G_n(u, F_0) = O(n^{−1}),
or
P(T*_n ≤ u) = P(T_n ≤ u) + O(n^{−1}),
which is an improved rate of convergence over the asymptotic test (which converged at rate O(n^{−1/2})). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement – the Type I error of the test converges at a faster rate than an analogous asymptotic test.
8.10 Symmetric Two-Sided Tests
If a random variable y has distribution function H(u) = P(y ≤ u), then the random variable |y| has distribution function
\bar{H}(u) = H(u) − H(−u)
since
P(|y| ≤ u) = P(−u ≤ y ≤ u) = P(y ≤ u) − P(y ≤ −u) = H(u) − H(−u).
For example, if Z ~ N(0, 1), then |Z| has distribution function
\bar{Φ}(u) = Φ(u) − Φ(−u) = 2Φ(u) − 1.
Similarly, if T_n has exact distribution G_n(u, F), then |T_n| has the distribution function
\bar{G}_n(u, F) = G_n(u, F) − G_n(−u, F).
A two-sided hypothesis test rejects H_0 for large values of |T_n|. Since T_n →_d Z, then |T_n| →_d |Z| ~ \bar{Φ}. Thus asymptotic critical values are taken from the \bar{Φ} distribution, and exact critical values are taken from the \bar{G}_n(u, F_0) distribution. From Theorem 8.8.1, we can calculate that
\bar{G}_n(u, F) = G_n(u, F) − G_n(−u, F)
= ( Φ(u) + (1/n^{1/2}) g_1(u, F) + (1/n) g_2(u, F) ) − ( Φ(−u) + (1/n^{1/2}) g_1(−u, F) + (1/n) g_2(−u, F) ) + O(n^{−3/2})
= \bar{Φ}(u) + (2/n) g_2(u, F) + O(n^{−3/2}),  (8.5)
where the simplifications are because g_1 is even and g_2 is odd. Hence the difference between the asymptotic distribution and the exact distribution is
\bar{Φ}(u) − \bar{G}_n(u, F_0) = (2/n) g_2(u, F_0) + O(n^{−3/2}) = O(n^{−1}).
The order of the error is O(n^{−1}).
Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, g_1, is an even function, meaning that the errors in the two directions exactly cancel out.
Applying (8.5) to the bootstrap distribution, we find
\bar{G}*_n(u) = \bar{G}_n(u, F_n) = \bar{Φ}(u) + (2/n) g_2(u, F_n) + O(n^{−3/2}).
Thus the difference between the bootstrap and exact distributions is
\bar{G}*_n(u) − \bar{G}_n(u, F_0) = (2/n) ( g_2(u, F_n) − g_2(u, F_0) ) + O(n^{−3/2})
= O(n^{−3/2}),
the last equality because F_n converges to F_0 at rate √n, and g_2 is continuous in F. Another way of writing this is
P(|T*_n| < u) = P(|T_n| < u) + O(n^{−3/2})
so the error from using the bootstrap distribution (relative to the true unknown distribution) is O(n^{−3/2}). This is in contrast to the use of the asymptotic distribution, whose error is O(n^{−1}). Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.
A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than the one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.
The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.
8.11 Percentile Confidence Intervals
To evaluate the coverage rate of the percentile interval, set T_n = √n ( \hat{θ} − θ_0 ). We know that T_n →_d N(0, V), which is not pivotal, as it depends on the unknown V. Theorem 8.8.1 shows that a first-order approximation is
G_n(u, F) = Φ(u/σ) + O(n^{−1/2}),
where σ = √V, and for the bootstrap
G*_n(u) = G_n(u, F_n) = Φ(u/\hat{σ}) + O(n^{−1/2}),
where \hat{σ} = V(F_n) is the bootstrap estimate of σ. The difference is
G*_n(u) − G_n(u, F_0) = Φ(u/\hat{σ}) − Φ(u/σ) + O(n^{−1/2})
= −φ(u/σ) (u/σ²) ( \hat{σ} − σ ) + O(n^{−1/2})
= O(n^{−1/2}).
Hence the order of the error is O(n^{−1/2}).
The good news is that the percentile-type methods (if appropriately used) can yield √n-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.
The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.
Based on these arguments, the theoretical literature (e.g. Hall, 1992; Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.
8.12 Bootstrap Methods for Regression Models
The bootstrap methods we have discussed have set G*_n(u) = G_n(u, F_n), where F_n is the EDF. Any other consistent estimate of F may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about F. We discuss bootstrap methods appropriate for the linear regression model
y_i = x_i'β + e_i,   E(e_i | x_i) = 0.
The non-parametric bootstrap resamples the observations (y*_i, x*_i) from the EDF, which implies
y*_i = x*_i'\hat{β} + e*_i,   E(x*_i e*_i) = 0,
but generally
E(e*_i | x*_i) ≠ 0.
The bootstrap distribution therefore does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true).
One approach to this problem is to impose the very strong assumption that the error ε_i is independent of the regressor x_i. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.
To impose independence, it is sufficient to sample the x*_i and e*_i independently, and then create y*_i = x*_i'\hat{β} + e*_i. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors e*_i randomly from the OLS residuals {\hat{e}_1, ..., \hat{e}_n}. A parametric method is to generate the bootstrap errors e*_i from a parametric distribution, such as the normal e*_i ~ N(0, \hat{σ}²).
For the regressors x*_i, a nonparametric method is to sample the x*_i randomly from the EDF or sample values {x_1, ..., x_n}. A parametric method is to sample x*_i from an estimated parametric distribution. A third approach sets x*_i = x_i. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the x_i are really "fixed" or random.
The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that x_i and e_i are independent. Typically what is desirable is to impose only the regression condition E(e_i | x_i) = 0. Unfortunately this is a harder problem.
One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for e*_i so that
E(e*_i | x_i) = 0
E(e*_i² | x_i) = \hat{e}_i²
E(e*_i³ | x_i) = \hat{e}_i³.
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
P( e*_i = ((1 + √5)/2) \hat{e}_i ) = (√5 − 1)/(2√5)
P( e*_i = ((1 − √5)/2) \hat{e}_i ) = (√5 + 1)/(2√5).
For each x_i, you sample e*_i using this two-point distribution.
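To make the construction concrete, here is a minimal Python sketch of one wild bootstrap draw for the linear regression model, holding the regressors fixed and using the two-point distribution above. The function name and the use of numpy are my own illustrative choices.

import numpy as np

def wild_bootstrap_sample(X, beta_hat, e_hat, rng):
    # One wild bootstrap sample y* = X beta_hat + v * e_hat, where v_i takes the
    # value (1+sqrt(5))/2 with probability (sqrt(5)-1)/(2 sqrt(5)) and (1-sqrt(5))/2
    # otherwise, so E(e*_i|x_i)=0, E(e*_i^2|x_i)=e_hat_i^2, E(e*_i^3|x_i)=e_hat_i^3.
    s5 = np.sqrt(5.0)
    p_hi = (s5 - 1.0) / (2.0 * s5)
    v = np.where(rng.random(len(e_hat)) < p_hi, (1.0 + s5) / 2.0, (1.0 - s5) / 2.0)
    return X @ beta_hat + v * e_hat

For example, y_star = wild_bootstrap_sample(X, beta_hat, e_hat, np.random.default_rng(0)) produces one bootstrap sample from which \hat{β}* and its t-statistic can be recomputed.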
Exercises
Exercise 8.1 Let F_n(x) denote the EDF of a random sample. Show that
√n ( F_n(x) − F_0(x) ) →_d N( 0, F_0(x)(1 − F_0(x)) ).

Exercise 8.2 Take a random sample {y_1, ..., y_n} with μ = E y_i and σ² = var(y_i). Let the statistic of interest be the sample mean T_n = \bar{y}_n. Find the population moments E T_n and var(T_n). Let {y*_1, ..., y*_n} be a random sample from the empirical distribution function and let T*_n = \bar{y}*_n be its sample mean. Find the bootstrap moments E T*_n and var(T*_n).

Exercise 8.3 Consider the following bootstrap procedure for a regression of y_i on x_i. Let \hat{β} denote the OLS estimator from the regression of y on X, and \hat{e} = y − X\hat{β} the OLS residuals.
(a) Draw a random vector (x*, e*) from the pair {(x_i, \hat{e}_i) : i = 1, ..., n}. That is, draw a random integer i' from [1, 2, ..., n], and set x* = x_{i'} and e* = \hat{e}_{i'}. Set y* = x*'\hat{β} + e*. Draw (with replacement) n such vectors, creating a random bootstrap data set (y*, X*).
(b) Regress y* on X*, yielding OLS estimates \hat{β}* and any other statistic of interest.
Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.

Exercise 8.4 Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate \hat{θ}* on these samples and then calculate
T*_n = ( \hat{θ}* − \hat{θ} ) / s(\hat{θ}),
where s(\hat{θ}) is the standard error in the original data. Let q*_n(.05) and q*_n(.95) denote the 5% and 95% quantiles of T*_n, and define the bootstrap confidence interval
C = [ \hat{θ} − s(\hat{θ}) q*_n(.95),  \hat{θ} − s(\hat{θ}) q*_n(.05) ].
Show that C exactly equals the Alternative percentile interval (not the percentile-t interval).

Exercise 8.5 You want to test H_0: θ = 0 against H_1: θ > 0. The test for H_0 is to reject if T_n = \hat{θ}/s(\hat{θ}) > c where c is picked so that Type I error is α. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates \hat{θ}* on these samples and then calculate
T*_n = \hat{θ}* / s(\hat{θ}*).
Let q*_n(.95) denote the 95% quantile of T*_n. You replace c with q*_n(.95), and thus reject H_0 if T_n = \hat{θ}/s(\hat{θ}) > q*_n(.95). What is wrong with this procedure?

Exercise 8.6 Suppose that in an application, \hat{θ} = 1.2 and s(\hat{θ}) = .2. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and \hat{θ}* is calculated on each sample. The \hat{θ}* are sorted, and the 2.5% and 97.5% quantiles of the \hat{θ}* are .75 and 1.3, respectively.
(a) Report the 95% Efron Percentile interval for θ.
(b) Report the 95% Alternative Percentile interval for θ.
(c) With the given information, can you report the 95% Percentile-t interval for θ?

Exercise 8.7 The datafile hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.
Chapter 9
Generalized Method of Moments
9.1 Overidentified Linear Model
Consider the linear model
y_i = x_i'β + e_i = x_{1i}'β_1 + x_{2i}'β_2 + e_i,   E(x_i e_i) = 0
where x_{1i} is k × 1 and x_{2i} is r × 1 with l = k + r. We know that without further restrictions, an asymptotically efficient estimator of β is the OLS estimator. Now suppose that we are given the information that β_2 = 0. Now we can write the model as
y_i = x_{1i}'β_1 + e_i,   E(x_i e_i) = 0.
In this case, how should β_1 be estimated? One method is OLS regression of y_i on x_{1i} alone. This method, however, is not necessarily efficient, as there are l restrictions in E(x_i e_i) = 0, while β_1 is of dimension k < l. This situation is called overidentified. There are l − k = r more moment restrictions than free parameters. We call r the number of overidentifying restrictions.
This is a special case of a more general class of moment condition models. Let g(y, x, z, β) be an l × 1 function of a k × 1 parameter β with l ≥ k such that
E g(y_i, x_i, z_i, β_0) = 0  (9.1)
where β_0 is the true value of β. In our previous example, g(y, z, β) = z(y − x_1'β). In econometrics, this class of models are called moment condition models. In the statistics literature, these are known as estimating equations.
As an important special case we will devote special attention to linear moment condition models, which can be written as
y_i = x_i'β + e_i,   E(z_i e_i) = 0,
where the dimensions of x_i and z_i are k × 1 and l × 1, with l ≥ k. If l = k the model is just identified, otherwise it is overidentified. The variables x_i may be components and functions of z_i, but this is not required. This model falls in the class (9.1) by setting
g(y, x, z, β_0) = z(y − x'β).  (9.2)
9.2 GMM Estimator
Define the sample analog of (9.2),
\bar{g}_n(β) = (1/n) Σ_{i=1}^n g_i(β) = (1/n) Σ_{i=1}^n z_i ( y_i − x_i'β ) = (1/n) ( Z'y − Z'Xβ ).  (9.3)
The method of moments estimator for β is defined as the parameter value which sets \bar{g}_n(β) = 0. This is generally not possible when l > k, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets \bar{g}_n(β) "close" to zero.
For some l × l weight matrix W_n > 0, let
J_n(β) = n \bar{g}_n(β)' W_n \bar{g}_n(β).
This is a non-negative measure of the "length" of the vector \bar{g}_n(β). For example, if W_n = I, then J_n(β) = n \bar{g}_n(β)'\bar{g}_n(β) = n ||\bar{g}_n(β)||², the square of the Euclidean length. The GMM estimator minimizes J_n(β).

Definition 9.2.1  \hat{β}_{GMM} = argmin_β J_n(β).

Note that if k = l, then \bar{g}_n(\hat{β}) = 0, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are
0 = (∂/∂β) J_n(\hat{β})
= 2 ( (∂/∂β) \bar{g}_n(\hat{β}) )' W_n \bar{g}_n(\hat{β})
= −2 ( (1/n) X'Z ) W_n ( (1/n) Z'( y − X\hat{β} ) )
so
2 (X'Z) W_n (Z'X) \hat{β} = 2 (X'Z) W_n (Z'y),
which establishes the following.

Proposition 9.2.1  \hat{β}_{GMM} = ( (X'Z) W_n (Z'X) )^{−1} (X'Z) W_n (Z'y).

While the estimator depends on W_n, the dependence is only up to scale, for if W_n is replaced by cW_n for some c > 0, \hat{β}_{GMM} does not change.
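The closed form in Proposition 9.2.1 maps directly into code. The following Python sketch (the function name and the numpy dependency are mine) computes \hat{β}_{GMM} for given data matrices y, X, Z and weight matrix W.

import numpy as np

def gmm_linear(y, X, Z, W):
    # Linear GMM: beta = (X'Z W Z'X)^{-1} X'Z W Z'y, as in Proposition 9.2.1.
    XZ = X.T @ Z
    A = XZ @ W @ XZ.T          # X'Z W Z'X
    b = XZ @ W @ (Z.T @ y)     # X'Z W Z'y
    return np.linalg.solve(A, b)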
9.3 Distribution of GMM Estimator
Assume that W_n →_p W > 0. Let
Q = E( z_i x_i' )
and
Ω = E( z_i z_i' e_i² ) = E( g_i g_i' ),
where g_i = z_i e_i. Then
( (1/n) X'Z ) W_n ( (1/n) Z'X ) →_p Q'WQ
and
( (1/n) X'Z ) W_n ( (1/√n) Z'e ) →_d Q'W N(0, Ω).
We conclude:

Theorem 9.3.1 Asymptotic Distribution of GMM Estimator
√n ( \hat{β} − β ) →_d N(0, V_β),
where
V_β = ( Q'WQ )^{−1} ( Q'WΩWQ ) ( Q'WQ )^{−1}.

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.
The optimal weight matrix W_0 is one which minimizes V_β. This turns out to be W_0 = Ω^{−1}. The proof is left as an exercise. This yields the efficient GMM estimator:
\hat{β} = ( X'ZΩ^{−1}Z'X )^{−1} X'ZΩ^{−1}Z'y.
Thus we have

Theorem 9.3.2 Asymptotic Distribution of Efficient GMM Estimator
√n ( \hat{β} − β ) →_d N( 0, ( Q'Ω^{−1}Q )^{−1} ).

W_0 = Ω^{−1} is not known in practice, but it can be estimated consistently. For any W_n →_p W_0, we still call \hat{β} the efficient GMM estimator, as it has the same asymptotic distribution.
By "efficient", we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices W_n. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).
If it is known that E( g_i(β) ) = 0, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than ( G'Ω^{−1}G )^{−1} where G = E (∂/∂β') g_i(β). Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.
This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.
9.4 Estimation of the Efficient Weight Matrix
Given any weight matrix W_n > 0, the GMM estimator \hat{β} is consistent yet inefficient. For example, we can set W_n = I_l. In the linear model, a better choice is W_n = (Z'Z)^{−1}. Given any such first-step estimator, we can define the residuals \hat{e}_i = y_i − x_i'\hat{β} and moment equations \hat{g}_i = z_i \hat{e}_i = g(y_i, x_i, z_i, \hat{β}). Construct
\bar{g}_n = \bar{g}_n(\hat{β}) = (1/n) Σ_{i=1}^n \hat{g}_i,
\hat{g}*_i = \hat{g}_i − \bar{g}_n,
and define
W_n = ( (1/n) Σ_{i=1}^n \hat{g}*_i \hat{g}*_i' )^{−1} = ( (1/n) Σ_{i=1}^n \hat{g}_i \hat{g}_i' − \bar{g}_n \bar{g}_n' )^{−1}.  (9.4)
Then W_n →_p Ω^{−1} = W_0, and GMM using W_n as the weight matrix is asymptotically efficient.
A common alternative choice is to set
W_n = ( (1/n) Σ_{i=1}^n \hat{g}_i \hat{g}_i' )^{−1},
which uses the uncentered moment conditions. Since E g_i = 0, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. E g_i ≠ 0, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (9.4) above.
Here is a simple way to compute the efficient GMM estimator for the linear model. First, set W_n = (Z'Z)^{−1}, estimate \hat{β} using this weight matrix, and construct the residual \hat{e}_i = y_i − x_i'\hat{β}. Then set \hat{g}_i = z_i \hat{e}_i, and let \hat{g} be the associated n × l matrix. Then the efficient GMM estimator is
\hat{β} = ( X'Z ( \hat{g}'\hat{g} − n \bar{g}_n \bar{g}_n' )^{−1} Z'X )^{−1} X'Z ( \hat{g}'\hat{g} − n \bar{g}_n \bar{g}_n' )^{−1} Z'y.
In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.
An estimator of the asymptotic variance of \hat{β} can be seen from the above formula. Set
\hat{V} = n ( X'Z ( \hat{g}'\hat{g} − n \bar{g}_n \bar{g}_n' )^{−1} Z'X )^{−1}.
Asymptotic standard errors are given by the square roots of the diagonal elements of \hat{V}.
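The two-step recipe just described is easy to code. The following Python sketch (the function name and the return convention are my own illustrative choices) uses W_n = (Z'Z)^{−1} in the first step, the centered weight matrix (9.4) in the second, and reports \hat{β} together with the standard errors implied by \hat{V}.

import numpy as np

def two_step_gmm(y, X, Z):
    # Two-step efficient GMM for y = X b + e with E(z_i e_i) = 0: a sketch.
    n = len(y)
    XZ = X.T @ Z
    # Step 1: weight matrix (Z'Z)^{-1}, i.e. 2SLS
    W1 = np.linalg.inv(Z.T @ Z)
    b1 = np.linalg.solve(XZ @ W1 @ XZ.T, XZ @ W1 @ (Z.T @ y))
    # Step 2: centered weight matrix (9.4) from first-step residuals
    ghat = Z * (y - X @ b1)[:, None]          # rows are z_i * e_hat_i
    gbar = ghat.mean(axis=0)
    W2 = np.linalg.inv(ghat.T @ ghat / n - np.outer(gbar, gbar))
    b2 = np.linalg.solve(XZ @ W2 @ XZ.T, XZ @ W2 @ (Z.T @ y))
    # V equals V_hat from the text; var(beta_hat) is approximately V/n
    V = n**2 * np.linalg.inv(XZ @ W2 @ XZ.T)
    se = np.sqrt(np.diag(V) / n)
    return b2, se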
There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of β. The criterion function is then
J(β) = n \bar{g}_n(β)' ( (1/n) Σ_{i=1}^n g*_i(β) g*_i(β)' )^{−1} \bar{g}_n(β),
where
g*_i(β) = g_i(β) − \bar{g}_n(β).
The \hat{β} which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).
The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.
9.5 GMM: The General Case
In its most general form, GMM applies whenever an economic or statistical model implies the l × 1 moment condition
E( g_i(β) ) = 0.
Often, this is all that is known. Identification requires l ≥ k = dim(β). The GMM estimator minimizes
J(β) = n \bar{g}_n(β)' W_n \bar{g}_n(β)
where
\bar{g}_n(β) = (1/n) Σ_{i=1}^n g_i(β)
and
W_n = ( (1/n) Σ_{i=1}^n \hat{g}_i \hat{g}_i' − \bar{g}_n \bar{g}_n' )^{−1},
with \hat{g}_i = g_i(\tilde{β}) constructed using a preliminary consistent estimator \tilde{β}, perhaps obtained by first setting W_n = I. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix W_n is updated, and then \hat{β} recomputed. This estimator can be iterated if needed.

Theorem 9.5.1 Distribution of Nonlinear GMM Estimator
Under general regularity conditions,
√n ( \hat{β} − β ) →_d N( 0, ( G'Ω^{−1}G )^{−1} ),
where
Ω = E( g_i g_i' )
and
G = E (∂/∂β') g_i(β).

The variance of \hat{β} may be estimated by
\hat{V}_β = ( \hat{G}' \hat{Ω}^{−1} \hat{G} )^{−1}
where
\hat{Ω} = n^{−1} Σ_i \hat{g}*_i \hat{g}*_i'
and
\hat{G} = n^{−1} Σ_i (∂/∂β') g_i(\hat{β}).
The general theory of GMM estimation and testing was exposited by L. Hansen (1982).
9.6 Over-Identification Test
Overidentified models (l > k) are special in the sense that there may not be a parameter value β such that the moment condition
E g(y_i, x_i, z_i, β) = 0
holds. Thus the model – the overidentifying restrictions – are testable.
For example, take the linear model y_i = β_1'x_{1i} + β_2'x_{2i} + e_i with E(x_{1i} e_i) = 0 and E(x_{2i} e_i) = 0. It is possible that β_2 = 0, so that the linear equation may be written as y_i = β_1'x_{1i} + e_i. However, it is possible that β_2 ≠ 0, and in this case it would be impossible to find a value of β_1 so that both E( x_{1i}( y_i − x_{1i}'β_1 ) ) = 0 and E( x_{2i}( y_i − x_{1i}'β_1 ) ) = 0 hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.
Note that \bar{g}_n →_p E g_i, and thus \bar{g}_n can be used to assess whether or not the hypothesis that E g_i = 0 is true or not. The criterion function at the parameter estimates is
J = n \bar{g}_n' W_n \bar{g}_n = n² \bar{g}_n' ( \hat{g}'\hat{g} − n \bar{g}_n \bar{g}_n' )^{−1} \bar{g}_n.
J is a quadratic form in \bar{g}_n, and is thus a natural test statistic for H_0: E g_i = 0.

Theorem 9.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
J = J(\hat{β}) →_d χ²_{l−k}.

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.
The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic J exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic J whenever GMM is the estimation method.
When over-identified models are estimated by GMM, it is customary to report the J statistic as a general test of model adequacy.
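A sketch of how the J statistic can be computed in practice, continuing the Python conventions of the earlier sketches (all names are mine): evaluate \bar{g}_n at the efficient GMM estimate, form the quadratic form with the efficient weight matrix, and compare the result with the χ²_{l−k} critical value.

import numpy as np

def j_statistic(y, X, Z, beta_hat, W):
    # J = n * gbar' W gbar at the GMM estimate, where W is the efficient
    # (centered) weight matrix used in estimation. Degrees of freedom: l - k.
    n, k = X.shape
    l = Z.shape[1]
    gbar = Z.T @ (y - X @ beta_hat) / n
    J = n * gbar @ W @ gbar
    return J, l - k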
9.7 Hypothesis Testing: The Distance Statistic
We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses.
If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).
For a given weight matrix W_n, the GMM criterion function is
J(β) = n \bar{g}_n(β)' W_n \bar{g}_n(β).
For h : R^k → R^r, the hypothesis is
H_0: h(β) = 0.
The estimates under H_1 are
\hat{β} = argmin_β J(β)
and those under H_0 are
\tilde{β} = argmin_{h(β)=0} J(β).
The two minimizing criterion functions are J(\hat{β}) and J(\tilde{β}). The GMM distance statistic is the difference
D = J(\tilde{β}) − J(\hat{β}).

Proposition 9.7.1 If the same weight matrix W_n is used for both null and alternative,
1. D ≥ 0
2. D →_d χ²_r
3. If h is linear in β, then D equals the Wald statistic.

If h is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the D statistic appears to have quite good sampling properties, and is the preferred test statistic.
Newey and West (1987) suggested to use the same weight matrix W_n for both null and alternative, as this ensures that D ≥ 0. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.
This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.
9.8 Conditional Moment Restrictions
In many contexts, the model implies more than an unconditional moment restriction of the form E g_i(β) = 0. It implies a conditional moment restriction of the form
E( e_i(β) | z_i ) = 0
where e_i(β) is some s × 1 function of the observation and the parameters. In many cases, s = 1.
It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.
Our linear model y_i = x_i'β + e_i with instruments z_i falls into this class under the stronger assumption E(e_i | z_i) = 0. Then e_i(β) = y_i − x_i'β.
It is also helpful to realize that conventional regression models also fall into this class, except that in this case x_i = z_i. For example, in linear regression, e_i(β) = y_i − x_i'β, while in a nonlinear regression model e_i(β) = y_i − g(x_i, β). In a joint model of the conditional mean and variance,
e_i(β, γ) = ( y_i − x_i'β,  (y_i − x_i'β)² − f(x_i)'γ )'.
Here s = 2.
Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any l × 1 function φ(z_i, β), we can set g_i(β) = φ(z_i, β) e_i(β), which satisfies E g_i(β) = 0 and hence defines a GMM estimator. The obvious problem is that the class of functions φ is infinite. Which should be selected?
This is equivalent to the problem of selection of the best instruments. If z_i ∈ R is a valid instrument satisfying E(e_i | z_i) = 0, then z_i, z_i², z_i³, ..., etc., are all valid instruments. Which should be used?
One solution is to construct an infinite list of potent instruments, and then use the first k instruments. How is k to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).
Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case s = 1. Let
R_i = E( (∂/∂β) e_i(β) | z_i )
and
σ_i² = E( e_i(β)² | z_i ).
Then the "optimal instrument" is
A_i = −σ_i^{−2} R_i
so the optimal moment is
g_i(β) = A_i e_i(β).
Setting g_i(β) to be this choice (which is k × 1, so is just-identified) yields the best GMM estimator possible.
In practice, A_i is unknown, but its form does help us think about construction of optimal instruments.
In the linear model e_i(β) = y_i − x_i'β, note that
R_i = −E( x_i | z_i )
and
σ_i² = E( e_i² | z_i ),
so
A_i = σ_i^{−2} E( x_i | z_i ).
In the case of linear regression, x_i = z_i, so A_i = σ_i^{−2} z_i. Hence efficient GMM is GLS, as we discussed earlier in the course.
In the case of endogenous variables, note that the efficient instrument A_i involves the estimation of the conditional mean of x_i given z_i. In other words, to get the best instrument for x_i, we need the best conditional mean model for x_i given z_i, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of e_i. This is the same as the GLS estimator; namely that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.
9.9 Bootstrap GMM Inference
Let \hat{β} be the 2SLS or GMM estimator of β. Using the EDF of (y_i, z_i, x_i), we can apply the bootstrap methods discussed in Chapter 8 to compute estimates of the bias and variance of \hat{β}, and construct confidence intervals for β, identically as in the regression model. However, caution should be applied when interpreting such results.
A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied J test will yield the wrong answer.
The problem is that in the sample, \hat{β} is the "true" value and yet \bar{g}_n(\hat{β}) ≠ 0. Thus according to random variables (y*_i, z*_i, x*_i) drawn from the EDF F_n,
E( g_i(\hat{β}) ) = \bar{g}_n(\hat{β}) ≠ 0.
This means that (y*_i, z*_i, x*_i) do not satisfy the same moment conditions as the population distribution.
A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample (y*, Z*, X*), define the bootstrap GMM criterion
J*(β) = n ( \bar{g}*_n(β) − \bar{g}_n(\hat{β}) )' W*_n ( \bar{g}*_n(β) − \bar{g}_n(\hat{β}) )
where \bar{g}_n(\hat{β}) is from the in-sample data, not from the bootstrap data.
Let \hat{β}* minimize J*(β), and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is
\hat{β}* = ( X*'Z* W*_n Z*'X* )^{−1} ( X*'Z* W*_n ) ( Z*'y* − Z'\hat{e} ),
where \hat{e} = y − X\hat{β} are the in-sample residuals. The bootstrap J statistic is J*(\hat{β}*).
Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities \hat{p}_i described in Chapter 10. Since Σ_{i=1}^n \hat{p}_i g_i(\hat{β}) = 0, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments is needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
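The essential ingredient in the Hall-Horowitz correction is the recentering of the bootstrap moments by the in-sample value \bar{g}_n(\hat{β}). The following Python sketch (names mine; the weight matrix is supplied by the user) evaluates the recentered criterion for the linear model.

import numpy as np

def recentered_boot_criterion(beta, y_star, X_star, Z_star, W_star, g_bar_insample):
    # Hall-Horowitz bootstrap GMM criterion
    # J*(b) = n (g*_n(b) - gbar_n(beta_hat))' W* (g*_n(b) - gbar_n(beta_hat)),
    # where g_bar_insample = (1/n) Z'(y - X beta_hat) from the ORIGINAL sample.
    n = len(y_star)
    g_star = Z_star.T @ (y_star - X_star @ beta) / n
    d = g_star - g_bar_insample
    return n * d @ W_star @ d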
Exercises
Exercise 9.1 Take the model
y_i = x_i'β + e_i,   E(x_i e_i) = 0
e_i² = z_i'γ + η_i,   E(z_i η_i) = 0.
Find the method of moments estimators ( \hat{β}, \hat{γ} ) for (β, γ).

Exercise 9.2 Take the single equation
y = Xβ + e,   E(e | Z) = 0.
Assume E( e_i² | z_i ) = σ². Show that if \hat{β} is estimated by GMM with weight matrix W_n = (Z'Z)^{−1}, then
√n ( \hat{β} − β ) →_d N( 0, σ² ( Q'M^{−1}Q )^{−1} )
where Q = E(z_i x_i') and M = E(z_i z_i').

Exercise 9.3 Take the model y_i = x_i'β + e_i with E(z_i e_i) = 0. Let \hat{e}_i = y_i − x_i'\hat{β} where \hat{β} is consistent for β (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
W_n = ( (1/n) Σ_{i=1}^n z_i z_i' \hat{e}_i² )^{−1}.
Show that W_n →_p Ω^{−1} where Ω = E( z_i z_i' e_i² ).

Exercise 9.4 In the linear model estimated by GMM with general weight matrix W, the asymptotic variance of \hat{β}_{GMM} is
V = ( Q'WQ )^{−1} Q'WΩWQ ( Q'WQ )^{−1}.
(a) Let V_0 be this matrix when W = Ω^{−1}. Show that V_0 = ( Q'Ω^{−1}Q )^{−1}.
(b) We want to show that for any W, V − V_0 is positive semi-definite (for then V_0 is the smaller possible covariance matrix and W = Ω^{−1} is the efficient weight matrix). To do this, start by finding matrices A and B such that V = A'ΩA and V_0 = B'ΩB.
(c) Show that B'ΩA = B'ΩB and therefore that B'Ω(A − B) = 0.
(d) Use the expressions V = A'ΩA, A = B + (A − B), and B'Ω(A − B) = 0 to show that V ≥ V_0.

Exercise 9.5 The equation of interest is
y_i = g(x_i, β) + e_i,   E(z_i e_i) = 0.
The observed data is (y_i, z_i, x_i). z_i is l × 1 and β is k × 1, l ≥ k. Show how to construct an efficient GMM estimator for β.
Exercise 9.6 In the linear model y = Xβ + e with E(x_i e_i) = 0, a Generalized Method of Moments (GMM) criterion function for β is defined as
J_n(β) = (1/n) ( y − Xβ )' X \hat{Ω}_n^{−1} X' ( y − Xβ )  (9.5)
where \hat{Ω}_n = (1/n) Σ_{i=1}^n x_i x_i' \hat{e}_i², \hat{e}_i = y_i − x_i'\hat{β} are the OLS residuals, and \hat{β} = (X'X)^{−1}X'y is LS. The GMM estimator of β, subject to the restriction h(β) = 0, is defined as
\tilde{β} = argmin_{h(β)=0} J_n(β).
The GMM test statistic (the distance statistic) of the hypothesis h(β) = 0 is
D = J_n(\tilde{β}) = min_{h(β)=0} J_n(β).  (9.6)
(a) Show that you can rewrite J_n(β) in (9.5) as
J_n(β) = ( β − \hat{β} )' \hat{V}_n^{−1} ( β − \hat{β} )
where
\hat{V}_n = (X'X)^{−1} ( Σ_{i=1}^n x_i x_i' \hat{e}_i² ) (X'X)^{−1}.
(b) Now focus on linear restrictions: h(β) = R'β − r. Thus
\tilde{β} = argmin_{R'β−r=0} J_n(β)
and hence R'\tilde{β} = r. Define the Lagrangian L(β, λ) = (1/2) J_n(β) + λ'( R'β − r ) where λ is s × 1. Show that the minimizer is
\tilde{β} = \hat{β} − \hat{V}_n R ( R'\hat{V}_n R )^{−1} ( R'\hat{β} − r )
\hat{λ} = ( R'\hat{V}_n R )^{−1} ( R'\hat{β} − r ).
(c) Show that if R'β = r then √n ( \tilde{β} − β ) →_d N(0, V_R) where
V_R = V − V R ( R'VR )^{−1} R'V.
(d) Show that in this setting, the distance statistic D in (9.6) equals the Wald statistic.

Exercise 9.7 Take the linear model
y_i = x_i'β + e_i,   E(z_i e_i) = 0,
and consider the GMM estimator \hat{β} of β. Let
J_n = n \bar{g}_n(\hat{β})' \hat{Ω}^{−1} \bar{g}_n(\hat{β})
denote the test of overidentifying restrictions. Show that J_n →_d χ²_{l−k} as n → ∞ by demonstrating each of the following:
(a) Since Ω > 0, we can write Ω^{−1} = CC' and Ω = C'^{−1}C^{−1}.
(b) J_n = n ( C'\bar{g}_n(\hat{β}) )' ( C'\hat{Ω}C )^{−1} C'\bar{g}_n(\hat{β}).
(c) C'\bar{g}_n(\hat{β}) = D_n C'\bar{g}_n(β_0) where
D_n = I_l − C' ( (1/n) Z'X ) ( ( (1/n) X'Z ) \hat{Ω}^{−1} ( (1/n) Z'X ) )^{−1} ( (1/n) X'Z ) \hat{Ω}^{−1} C'^{−1}
and \bar{g}_n(β_0) = (1/n) Z'e.
(d) D_n →_p I_l − R ( R'R )^{−1} R' where R = C'E(z_i x_i').
(e) n^{1/2} C'\bar{g}_n(β_0) →_d u ~ N(0, I_l).
(f) J_n →_d u' ( I_l − R ( R'R )^{−1} R' ) u.
(g) u' ( I_l − R ( R'R )^{−1} R' ) u ~ χ²_{l−k}.
Hint: I_l − R ( R'R )^{−1} R' is a projection matrix.
Chapter 10
Empirical Likelihood
10.1 Non-Parametric Likelihood
An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric analog of likelihood estimation.
The idea is to construct a multinomial distribution F(p_1, ..., p_n) which places probability p_i at each observation. To be a valid multinomial distribution, these probabilities must satisfy the requirements that p_i ≥ 0 and
Σ_{i=1}^n p_i = 1.  (10.1)
Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is
log L(p_1, ..., p_n) = Σ_{i=1}^n log(p_i).  (10.2)
First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities (p_1, ..., p_n) are those which maximize the log-likelihood subject to the constraint (10.1). This is equivalent to maximizing
Σ_{i=1}^n log(p_i) − μ ( Σ_{i=1}^n p_i − 1 )
where μ is a Lagrange multiplier. The n first order conditions are 0 = p_i^{−1} − μ. Combined with the constraint (10.1) we find that the MLE is p_i = n^{−1}, yielding the log-likelihood −n log(n).
Now consider the case of an overidentified model with moment condition
E g_i(β_0) = 0
where g is l × 1 and β is k × 1 and for simplicity we write g_i(β) = g(y_i, z_i, x_i, β). The multinomial distribution which places probability p_i at each observation (y_i, x_i, z_i) will satisfy this condition if and only if
Σ_{i=1}^n p_i g_i(β) = 0.  (10.3)
The empirical likelihood estimator is the value of β which maximizes the multinomial log-likelihood (10.2) subject to the restrictions (10.1) and (10.3).
The Lagrangian for this maximization problem is
L(β, p_1, ..., p_n, λ, μ) = Σ_{i=1}^n log(p_i) − μ ( Σ_{i=1}^n p_i − 1 ) − n λ' Σ_{i=1}^n p_i g_i(β)
where λ and μ are Lagrange multipliers. The first-order-conditions of L with respect to p_i, μ, and λ are
1/p_i = μ + n λ' g_i(β)
Σ_{i=1}^n p_i = 1
Σ_{i=1}^n p_i g_i(β) = 0.
Multiplying the first equation by p_i, summing over i, and using the second and third equations, we find μ = n and
p_i = 1 / ( n ( 1 + λ' g_i(β) ) ).
Substituting into L we find
R(β, λ) = −n log(n) − Σ_{i=1}^n log( 1 + λ' g_i(β) ).  (10.4)
For given β, the Lagrange multiplier λ(β) minimizes R(β, λ):
λ(β) = argmin_λ R(β, λ).  (10.5)
This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since R(β, λ) is a convex function of λ. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 6.5). This yields the (profile) empirical log-likelihood function for β,
R(β) = R(β, λ(β)) = −n log(n) − Σ_{i=1}^n log( 1 + λ(β)' g_i(β) ).
The EL estimate \hat{β} is the value which maximizes R(β), or equivalently minimizes its negative,
\hat{β} = argmin_β [ −R(β) ].  (10.6)
Numerical methods are required for calculation of \hat{β} (see Section 10.5).
As a by-product of estimation, we also obtain the Lagrange multiplier \hat{λ} = λ(\hat{β}), probabilities
\hat{p}_i = 1 / ( n ( 1 + \hat{λ}' g_i(\hat{β}) ) ),
and maximized empirical likelihood
R(\hat{β}) = Σ_{i=1}^n log(\hat{p}_i).  (10.7)
10.2 Asymptotic Distribution of EL Estimator
Define
G_i(β) = (∂/∂β') g_i(β)  (10.8)
G = E G_i(β_0)
Ω = E( g_i(β_0) g_i(β_0)' )
and
V_β = ( G'Ω^{−1}G )^{−1}  (10.9)
V_λ = Ω − G ( G'Ω^{−1}G )^{−1} G'.  (10.10)
For example, in the linear model, G_i(β) = −z_i x_i', G = −E(z_i x_i'), and Ω = E( z_i z_i' e_i² ).

Theorem 10.2.1 Under regularity conditions,
√n ( \hat{β} − β_0 ) →_d N(0, V_β)
√n \hat{λ} →_d Ω^{−1} N(0, V_λ)
where V_β and V_λ are defined in (10.9) and (10.10), and √n( \hat{β} − β_0 ) and √n\hat{λ} are asymptotically independent.

The theorem shows that the asymptotic variance V_β for \hat{β} is the same as for efficient GMM. Thus the EL estimator is asymptotically efficient.
Chamberlain (1987) showed that V_β is the semiparametric efficiency bound for β in the overidentified moment condition model. This means that no consistent estimator for this class of models can have a lower asymptotic variance than V_β. Since the EL estimator achieves this bound, it is an asymptotically efficient estimator for β.

Proof of Theorem 10.2.1. ( \hat{β}, \hat{λ} ) jointly solve
0 = (∂/∂λ) R(β, λ) = − Σ_{i=1}^n g_i(\hat{β}) / ( 1 + \hat{λ}' g_i(\hat{β}) )  (10.11)
0 = (∂/∂β) R(β, λ) = − Σ_{i=1}^n G_i(\hat{β})' λ / ( 1 + \hat{λ}' g_i(\hat{β}) ).  (10.12)
Let G_n = (1/n) Σ_{i=1}^n G_i(β_0), \bar{g}_n = (1/n) Σ_{i=1}^n g_i(β_0) and Ω_n = (1/n) Σ_{i=1}^n g_i(β_0) g_i(β_0)'.
Expanding (10.12) around β = β_0 and λ = λ_0 = 0 yields
0 ≈ G_n' ( \hat{λ} − λ_0 ).  (10.13)
Expanding (10.11) around β = β_0 and λ = λ_0 = 0 yields
0 ≈ −\bar{g}_n − G_n ( \hat{β} − β_0 ) + Ω_n \hat{λ}.  (10.14)
Premultiplying by G_n'Ω_n^{−1} and using (10.13) yields
0 ≈ −G_n'Ω_n^{−1}\bar{g}_n − G_n'Ω_n^{−1}G_n ( \hat{β} − β_0 ) + G_n'Ω_n^{−1}Ω_n \hat{λ}
= −G_n'Ω_n^{−1}\bar{g}_n − G_n'Ω_n^{−1}G_n ( \hat{β} − β_0 ).
Solving for \hat{β} and using the WLLN and CLT yields
√n ( \hat{β} − β_0 ) ≈ −( G_n'Ω_n^{−1}G_n )^{−1} G_n'Ω_n^{−1} √n \bar{g}_n  (10.15)
→_d ( G'Ω^{−1}G )^{−1} G'Ω^{−1} N(0, Ω)
= N(0, V_β).
Solving (10.14) for \hat{λ} and using (10.15) yields
√n \hat{λ} ≈ Ω_n^{−1} ( I − G_n ( G_n'Ω_n^{−1}G_n )^{−1} G_n'Ω_n^{−1} ) √n \bar{g}_n  (10.16)
→_d Ω^{−1} ( I − G ( G'Ω^{−1}G )^{−1} G'Ω^{−1} ) N(0, Ω)
= Ω^{−1} N(0, V_λ).
Furthermore, since
G' ( I − Ω^{−1}G ( G'Ω^{−1}G )^{−1} G' ) = 0,
√n( \hat{β} − β_0 ) and √n\hat{λ} are asymptotically uncorrelated and hence independent.
10.3 Overidentifying Restrictions
In a parametric likelihood context, tests are based on the difference in the log likelihood functions. The same statistic can be constructed for empirical likelihood. Twice the difference between the unrestricted empirical log-likelihood −n log(n) and the maximized empirical log-likelihood for the model (10.7) is
LR_n = Σ_{i=1}^n 2 log( 1 + \hat{λ}' g_i(\hat{β}) ).  (10.17)

Theorem 10.3.1 If E g_i(β_0) = 0 then LR_n →_d χ²_{l−k}.

The EL overidentification test is similar to the GMM overidentification test. They are asymptotically first-order equivalent, and have the same interpretation. The overidentification test is a very useful by-product of EL estimation, and it is advisable to report the statistic LR_n whenever EL is the estimation method.

Proof of Theorem 10.3.1. First, by a Taylor expansion, (10.15), and (10.16),
(1/√n) Σ_{i=1}^n g_i(\hat{β}) ≈ √n ( \bar{g}_n + G_n ( \hat{β} − β_0 ) )
≈ ( I − G_n ( G_n'Ω_n^{−1}G_n )^{−1} G_n'Ω_n^{−1} ) √n \bar{g}_n
≈ Ω_n √n \hat{λ}.
Second, since log(1 + u) ≈ u − u²/2 for u small,
LR_n = Σ_{i=1}^n 2 log( 1 + \hat{λ}' g_i(\hat{β}) )
≈ 2 \hat{λ}' Σ_{i=1}^n g_i(\hat{β}) − \hat{λ}' Σ_{i=1}^n g_i(\hat{β}) g_i(\hat{β})' \hat{λ}
≈ n \hat{λ}' Ω_n \hat{λ}
→_d N(0, V_λ)' Ω^{−1} N(0, V_λ)
= χ²_{l−k},
where the proof of the final equality is left as an exercise.
10.4 Testing
Let the maintained model be
E g_i(β) = 0  (10.18)
where g is l × 1 and β is k × 1. By "maintained" we mean that the overidentifying restrictions contained in (10.18) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is
h(β) = 0,
where h : R^k → R^a. The restricted EL estimator and likelihood are the values which solve
\tilde{β} = argmax_{h(β)=0} R(β)
R(\tilde{β}) = max_{h(β)=0} R(β).
Fundamentally, the restricted EL estimator \tilde{β} is simply an EL estimator with l − k + a overidentifying restrictions, so there is no fundamental change in the distribution theory for \tilde{β} relative to \hat{β}. To test the hypothesis h(β) while maintaining (10.18), the simple overidentifying restrictions test (10.17) is not appropriate. Instead we use the difference in log-likelihoods:
LR_n = 2 ( R(\hat{β}) − R(\tilde{β}) ).
This test statistic is a natural analog of the GMM distance statistic.

Theorem 10.4.1 Under (10.18) and H_0: h(β) = 0, LR_n →_d χ²_a.

The proof of this result is more challenging and is omitted.
10.5 Numerical Computation
Gauss code which implements the methods discussed below can be found at
http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives
The numerical calculations depend on derivatives of the dual likelihood function (10.4). Define
g*_i(β, λ) = g_i(β) / ( 1 + λ' g_i(β) )
G*_i(β, λ) = G_i(β)' λ / ( 1 + λ' g_i(β) ).
The first derivatives of (10.4) are
R_λ = (∂/∂λ) R(β, λ) = − Σ_{i=1}^n g*_i(β, λ)
R_β = (∂/∂β) R(β, λ) = − Σ_{i=1}^n G*_i(β, λ).
The second derivatives are
R_{λλ} = (∂²/∂λ∂λ') R(β, λ) = Σ_{i=1}^n g*_i(β, λ) g*_i(β, λ)'
R_{λβ} = (∂²/∂λ∂β') R(β, λ) = Σ_{i=1}^n ( g*_i(β, λ) G*_i(β, λ)' − G_i(β) / ( 1 + λ' g_i(β) ) )
R_{ββ} = (∂²/∂β∂β') R(β, λ) = Σ_{i=1}^n ( G*_i(β, λ) G*_i(β, λ)' − (∂²/∂β∂β') ( g_i(β)' λ ) / ( 1 + λ' g_i(β) ) ).

Inner Loop
The so-called "inner loop" solves (10.5) for given β. The modified Newton method takes a quadratic approximation to R(β, λ), yielding the iteration rule
λ_{j+1} = λ_j − δ ( R_{λλ}(β, λ_j) )^{−1} R_λ(β, λ_j),  (10.19)
where δ > 0 is a scalar steplength (to be discussed next). The starting value λ_1 can be set to the zero vector. The iteration (10.19) is continued until the gradient R_λ(β, λ_j) is smaller than some prespecified tolerance.
Efficient convergence requires a good choice of steplength δ. One method uses the following quadratic approximation. Set δ_0 = 0, δ_1 = 1/2 and δ_2 = 1. For p = 0, 1, 2, set
λ_p = λ_j − δ_p ( R_{λλ}(β, λ_j) )^{−1} R_λ(β, λ_j)
R_p = R(β, λ_p).
A quadratic function can be fit exactly through these three points. The value of δ which minimizes this quadratic is
\hat{δ} = ( R_2 + 3R_0 − 4R_1 ) / ( 4R_2 + 4R_0 − 8R_1 ),
yielding the steplength to be plugged into (10.19).
A complication is that λ must be constrained so that 0 ≤ p_i ≤ 1, which holds if
n ( 1 + λ' g_i(β) ) ≥ 1  (10.20)
for all i. If (10.20) fails, the stepsize δ needs to be decreased.
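The following Python sketch of the inner loop takes the n × l matrix of moments g_i(β) for a fixed β and iterates (10.19). For simplicity it replaces the quadratic steplength search with step halving while still enforcing (10.20); all names and tolerances are illustrative, not part of the text.

import numpy as np

def inner_loop(G, tol=1e-10, max_iter=100):
    # Solve (10.5): minimize R(beta, lam) over lam for fixed beta.
    # G has rows g_i(beta); returns lam(beta).
    n, l = G.shape
    lam = np.zeros(l)
    for _ in range(max_iter):
        w = 1.0 + G @ lam                       # 1 + lam'g_i
        grad = -(G / w[:, None]).sum(axis=0)    # R_lambda
        if np.max(np.abs(grad)) < tol:
            break
        gs = G / w[:, None]                     # rows g*_i(beta, lam)
        step = np.linalg.solve(gs.T @ gs, grad) # R_{lambda,lambda}^{-1} R_lambda
        delta = 1.0
        while delta > 1e-8 and not np.all(n * (1.0 + G @ (lam - delta * step)) >= 1.0):
            delta /= 2.0                        # shrink the step if (10.20) fails
        lam = lam - delta * step
    return lam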
Outer Loop
The outer loop is the minimization (10.6). This can be done by the modified Newton method described in the previous section. The gradient for (10.6) is
(∂/∂β) R(β) = (∂/∂β) R(β, λ(β)) = R_β + λ_β' R_λ = R_β,
since R_λ(β, λ) = 0 at λ = λ(β), where
λ_β = (∂/∂β') λ(β) = −R_{λλ}^{−1} R_{λβ},
the second equality following from the implicit function theorem applied to R_λ(β, λ(β)) = 0.
The Hessian for (10.6) is
H_{ββ} = −(∂²/∂β∂β') R(β)
= −(∂/∂β') ( R_β(β, λ(β)) + λ_β' R_λ(β, λ(β)) )
= −( R_{ββ}(β, λ(β)) + R_{λβ}' λ_β + λ_β' R_{λβ} + λ_β' R_{λλ} λ_β )
= R_{λβ}' R_{λλ}^{−1} R_{λβ} − R_{ββ}.
It is not guaranteed that H_{ββ} > 0. If not, the eigenvalues of H_{ββ} should be adjusted so that all are positive. The Newton iteration rule is
β_{j+1} = β_j − δ H_{ββ}^{−1} R_β
where δ is a scalar stepsize, and the rule is iterated until convergence.
Chapter 11
Endogeneity
We say that there is endogeneity in the linear model y_i = x_i'β + e_i if β is the parameter of interest and E(x_i e_i) ≠ 0. This cannot happen if β is defined by linear projection, so requires a structural interpretation. The coefficient β must have meaning separately from the definition of a conditional mean or linear projection.
Example: Measurement error in the regressor. Suppose that (y_i, x*_i) are joint random variables, E(y_i | x*_i) = x*_i'β is linear, β is the parameter of interest, and x*_i is not observed. Instead we observe x_i = x*_i + u_i where u_i is a k × 1 measurement error, independent of y_i and x*_i. Then
y_i = x*_i'β + e_i = ( x_i − u_i )'β + e_i = x_i'β + v_i
where
v_i = e_i − u_i'β.
The problem is that
E( x_i v_i ) = E( ( x*_i + u_i )( e_i − u_i'β ) ) = −E( u_i u_i' ) β ≠ 0
if β ≠ 0 and E( u_i u_i' ) ≠ 0. It follows that if \hat{β} is the OLS estimator, then
\hat{β} →_p β* = β − ( E( x_i x_i' ) )^{−1} E( u_i u_i' ) β ≠ β.
This is called measurement error bias.
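A short simulation (Python; all numbers chosen purely for illustration) makes the attenuation concrete: with a scalar regressor and measurement error variance equal to the signal variance, the OLS slope converges to roughly half the true β, exactly as the formula for β* predicts.

import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 1.0
x_star = rng.normal(size=n)        # true (unobserved) regressor
e = rng.normal(size=n)             # regression error
u = rng.normal(size=n)             # measurement error with var(u) = var(x*)
y = beta * x_star + e
x = x_star + u                     # observed regressor
beta_ols = (x @ y) / (x @ x)       # OLS slope (no intercept needed here)
print(beta_ols)                    # approximately beta/2: attenuation toward zero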
Example: Supply and Demand. The variables q_i and p_i (quantity and price) are determined jointly by the demand equation
q_i = −β_1 p_i + e_{1i}
and the supply equation
q_i = β_2 p_i + e_{2i}.
Assume that e_i = ( e_{1i}, e_{2i} )' is iid, E e_i = 0, β_1 + β_2 = 1 and E e_i e_i' = I_2 (the latter for simplicity). The question is, if we regress q_i on p_i, what happens?
It is helpful to solve for q_i and p_i in terms of the errors. In matrix notation,
[ 1  β_1 ; 1  −β_2 ] ( q_i, p_i )' = ( e_{1i}, e_{2i} )'
so
( q_i, p_i )' = [ 1  β_1 ; 1  −β_2 ]^{−1} ( e_{1i}, e_{2i} )'
= [ β_2  β_1 ; 1  −1 ] ( e_{1i}, e_{2i} )'
= ( β_2 e_{1i} + β_1 e_{2i},  e_{1i} − e_{2i} )'.
The projection of q_i on p_i yields
q_i = β* p_i + ε_i,   E( p_i ε_i ) = 0
where
β* = E( p_i q_i ) / E( p_i² ) = ( β_2 − β_1 ) / 2.
Hence if it is estimated by OLS, \hat{β} →_p β*, which does not equal either β_1 or β_2. This is called simultaneous equations bias.
11.1 Instrumental Variables
Let the equation of interest be
y_i = x_i'β + e_i  (11.1)
where x_i is k × 1, and assume that E(x_i e_i) ≠ 0 so there is endogeneity. We call (11.1) the structural equation. In matrix notation, this can be written as
y = Xβ + e.  (11.2)
Any solution to the problem of endogeneity requires additional information which we call instruments.

Definition 11.1.1 The l × 1 random vector z_i is an instrumental variable for (11.1) if E(z_i e_i) = 0.

In a typical set-up, some regressors in x_i will be uncorrelated with e_i (for example, at least the intercept). Thus we make the partition
x_i = ( x_{1i}, x_{2i} )'   with dimensions k_1 and k_2,  (11.3)
where E(x_{1i} e_i) = 0 yet E(x_{2i} e_i) ≠ 0. We call x_{1i} exogenous and x_{2i} endogenous. By the above definition, x_{1i} is an instrumental variable for (11.1), so should be included in z_i. So we have the partition
z_i = ( x_{1i}, z_{2i} )'   with dimensions k_1 and l_2,  (11.4)
where x_{1i} = z_{1i} are the included exogenous variables, and z_{2i} are the excluded exogenous variables. That is, z_{2i} are variables which could be included in the equation for y_i (in the sense that they are uncorrelated with e_i) yet can be excluded, as they would have true zero coefficients in the equation.
The model is just-identified if l = k (i.e., if l_2 = k_2) and over-identified if l > k (i.e., if l_2 > k_2).
We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.
11.2 Reduced Form
The reduced form relationship between the variables or "regressors" x_i and the instruments z_i is found by linear projection. Let
Γ = ( E( z_i z_i' ) )^{−1} E( z_i x_i' )
be the l × k matrix of coefficients from a projection of x_i on z_i, and define
u_i = x_i − Γ'z_i
as the projection error. Then the reduced form linear relationship between x_i and z_i is
x_i = Γ'z_i + u_i.  (11.5)
In matrix notation, we can write (11.5) as
X = ZΓ + U  (11.6)
where U is n × k.
By construction,
E( z_i u_i' ) = 0,
so (11.5) is a projection and can be estimated by OLS:
X = Z\hat{Γ} + \hat{U},   \hat{Γ} = ( Z'Z )^{−1} ( Z'X ).
Substituting (11.6) into (11.2), we find
y = ( ZΓ + U )β + e = Zλ + v,  (11.7)
where
λ = Γβ  (11.8)
and
v = Uβ + e.
Observe that
E( z_i v_i ) = E( z_i u_i' )β + E( z_i e_i ) = 0.
Thus (11.7) is a projection equation and may be estimated by OLS. This is
y = Z\hat{λ} + \hat{v},   \hat{λ} = ( Z'Z )^{−1} ( Z'y ).
The equation (11.7) is the reduced form for y. (11.6) and (11.7) together are the reduced form equations for the system
y = Zλ + v
X = ZΓ + U.
As we showed above, OLS yields the reduced-form estimates ( \hat{λ}, \hat{Γ} ).
11.3 Identification
The structural parameter β relates to (λ, Γ) through (11.8). The parameter β is identified, meaning that it can be recovered from the reduced form, if
rank(Γ) = k.  (11.9)
Assume that (11.9) holds. If l = k, then β = Γ^{−1}λ. If l > k, then for any W > 0, β = ( Γ'WΓ )^{−1}Γ'Wλ.
If (11.9) is not satisfied, then β cannot be recovered from (λ, Γ). Note that a necessary (although not sufficient) condition for (11.9) is l ≥ k.
Since Z and X have the common variables X_1, we can rewrite some of the expressions. Using (11.3) and (11.4) to make the matrix partitions Z = [Z_1, Z_2] and X = [Z_1, X_2], we can partition Γ as
Γ = [ Γ_{11}  Γ_{12} ; Γ_{21}  Γ_{22} ] = [ I  Γ_{12} ; 0  Γ_{22} ].
(11.6) can be rewritten as
X_1 = Z_1
X_2 = Z_1 Γ_{12} + Z_2 Γ_{22} + U_2.  (11.10)
β is identified if rank(Γ) = k, which is true if and only if rank(Γ_{22}) = k_2 (by the upper-diagonal structure of Γ). Thus the key to identification of the model rests on the l_2 × k_2 matrix Γ_{22} in (11.10).
11.4 Estimation
The model can be written as
y_i = x_i'β + e_i,   E(z_i e_i) = 0
or
E g_i(β) = 0,   g_i(β) = z_i ( y_i − x_i'β ).
This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in Chapters 9 and 10 directly apply. Recall that the GMM estimator, for given weight matrix W_n, is
\hat{β} = ( X'Z W_n Z'X )^{−1} X'Z W_n Z'y.
11.5 Special Cases: IV and 2SLS
If the model is just-identified, so that l = k, then the formula for GMM simplifies. We find that
\hat{β} = ( X'Z W_n Z'X )^{−1} X'Z W_n Z'y
= ( Z'X )^{−1} W_n^{−1} ( X'Z )^{−1} X'Z W_n Z'y
= ( Z'X )^{−1} Z'y.
This estimator is often called the instrumental variables estimator (IV) of β, where Z is used as an instrument for X. Observe that the weight matrix W_n has disappeared. In the just-identified case, the weight matrix plays no role. This is also the MME estimator of β, and the EL estimator.
Another interpretation stems from the fact that since β = Γ^{−1}λ, we can construct the Indirect Least Squares (ILS) estimator:
\hat{β} = \hat{Γ}^{−1} \hat{λ}
= ( ( Z'Z )^{−1} ( Z'X ) )^{−1} ( ( Z'Z )^{−1} ( Z'y ) )
= ( Z'X )^{−1} ( Z'Z ) ( Z'Z )^{−1} ( Z'y )
= ( Z'X )^{−1} ( Z'y ),
which again is the IV estimator.
Recall that the optimal weight matrix is an estimate of the inverse of Ω = E( z_i z_i' e_i² ). In the special case that E( e_i² | z_i ) = σ² (homoskedasticity), then Ω = E( z_i z_i' ) σ² ∝ E( z_i z_i' ), suggesting the weight matrix W_n = ( Z'Z )^{−1}. Using this choice, the GMM estimator equals
\hat{β}_{2SLS} = ( X'Z ( Z'Z )^{−1} Z'X )^{−1} X'Z ( Z'Z )^{−1} Z'y.
This is called the two-stage-least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.
It is useful to observe that writing
P = Z ( Z'Z )^{−1} Z',   \hat{X} = PX = Z\hat{Γ},
then
\hat{β} = ( X'PX )^{−1} X'Py = ( \hat{X}'\hat{X} )^{−1} \hat{X}'y.
The source of the "two-stage" name is since it can be computed as follows:
• First regress X on Z, viz., \hat{Γ} = ( Z'Z )^{−1} ( Z'X ) and \hat{X} = Z\hat{Γ} = PX.
• Second, regress y on \hat{X}, viz., \hat{β} = ( \hat{X}'\hat{X} )^{−1} \hat{X}'y.
It is useful to scrutinize the projection \hat{X}. Recall, X = [X_1, X_2] and Z = [X_1, Z_2]. Then
\hat{X} = [ \hat{X}_1, \hat{X}_2 ] = [ PX_1, PX_2 ] = [ X_1, PX_2 ] = [ X_1, \hat{X}_2 ],
since X_1 lies in the span of Z. Thus in the second stage, we regress y on X_1 and \hat{X}_2. So only the endogenous variables X_2 are replaced by their fitted values:
\hat{X}_2 = Z_1\hat{Γ}_{12} + Z_2\hat{Γ}_{22}.
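As a computational check, the two-stage description maps directly into code. The Python sketch below (the function name is mine) forms \hat{X} = PX via a first-stage least squares fit and then regresses y on \hat{X}; it is algebraically identical to the closed-form \hat{β}_{2SLS} above.

import numpy as np

def two_sls(y, X, Z):
    # First stage: Gamma_hat = (Z'Z)^{-1} Z'X, fitted values X_hat = Z Gamma_hat = P X.
    Gamma_hat, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ Gamma_hat
    # Second stage: regress y on X_hat.
    beta_hat, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta_hat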
11.6 Bekker Asymptotics
Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is
y = Xβ + e  (11.11)
X = ZΓ + U  (11.12)
ξ = (e, U)
E(ξ | Z) = 0
E(ξ'ξ | Z) = S.
As before, Z is n × l so there are l instruments.
First, let's analyze the approximate bias of OLS applied to (11.11). Using (11.12),
E( (1/n) X'e ) = E( x_i e_i ) = Γ'E( z_i e_i ) + E( u_i e_i ) = s_{21}
and
E( (1/n) X'X ) = E( x_i x_i' ) = Γ'E( z_i z_i' )Γ + E( u_i z_i' )Γ + Γ'E( z_i u_i' ) + E( u_i u_i' ) = Γ'QΓ + S_{22}
where Q = E( z_i z_i' ). Hence by a first-order approximation
E( \hat{β}_{OLS} − β ) ≈ ( E( (1/n) X'X ) )^{−1} E( (1/n) X'e ) = ( Γ'QΓ + S_{22} )^{−1} s_{21}  (11.13)
which is zero only when s_{21} = 0 (when X is exogenous).
We now derive a similar result for the 2SLS estimator.
\hat{β}_{2SLS} = ( X'PX )^{−1} ( X'Py ).
Let P = Z ( Z'Z )^{−1} Z'. By the spectral decomposition of an idempotent matrix, P = HΛH' where Λ = diag( I_l, 0 ). Let \bar{Q} = H'ξS^{−1/2}, which satisfies E( \bar{Q}'\bar{Q} ) = I_n, and partition \bar{Q} = ( q_1', \bar{Q}_2' )' where q_1 is l × 1. Hence
E( (1/n) ξ'Pξ | Z ) = (1/n) S^{1/2}' E( \bar{Q}'Λ\bar{Q} | Z ) S^{1/2} = (1/n) S^{1/2}' E( q_1'q_1 ) S^{1/2} = (l/n) S^{1/2}'S^{1/2} = αS
where
α = l/n.
Using (11.12) and this result,
(1/n) E( X'Pe ) = (1/n) E( Γ'Z'e ) + (1/n) E( U'Pe ) = α s_{21},
and
(1/n) E( X'PX ) = Γ'E( z_i z_i' )Γ + Γ'E( z_i u_i' ) + E( u_i z_i' )Γ + (1/n) E( U'PU ) = Γ'QΓ + αS_{22}.
Together
E( \hat{β}_{2SLS} − β ) ≈ ( E( (1/n) X'PX ) )^{−1} E( (1/n) X'Pe ) = α ( Γ'QΓ + αS_{22} )^{−1} s_{21}.  (11.14)
In general this is non-zero, except when s_{21} = 0 (when X is exogenous). It is also close to zero when α is small. Bekker (1994) pointed out that it also has the reverse implication – that when α = l/n is large, the bias in the 2SLS estimator will be large. Indeed as α → 1, the expression in (11.14) approaches that in (11.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.
Bekker (1994) showed further that under the alternative asymptotic approximation that α is fixed as n → ∞ (so that the number of instruments goes to infinity proportionately with sample size) then the expression in (11.14) is the probability limit of \hat{β}_{2SLS} − β.
11.7 Identification Failure
Recall the reduced form equation
X_2 = Z_1Γ_{12} + Z_2Γ_{22} + U_2.
The parameter β fails to be identified if Γ_{22} has deficient rank. The consequences of identification failure for inference are quite severe.
Take the simplest case where k = l = 1 (so there is no Z_1). Then the model may be written as
y_i = βx_i + e_i
x_i = γz_i + u_i
and Γ_{22} = γ = E( z_i x_i ) / E( z_i² ). We see that β is identified if and only if γ ≠ 0, which occurs when E( x_i z_i ) ≠ 0. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.
Suppose this condition fails, so E( x_i z_i ) = 0. Then by the CLT
(1/√n) Σ_{i=1}^n z_i e_i →_d N_1 ~ N( 0, E( z_i² e_i² ) )  (11.15)
(1/√n) Σ_{i=1}^n z_i x_i = (1/√n) Σ_{i=1}^n z_i u_i →_d N_2 ~ N( 0, E( z_i² u_i² ) )  (11.16)
therefore
\hat{β} − β = ( (1/√n) Σ_{i=1}^n z_i e_i ) / ( (1/√n) Σ_{i=1}^n z_i x_i ) →_d N_1/N_2 ~ Cauchy,
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when Γ_{22} is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz
Γ_{22} = n^{−1/2} C,
where C is a full rank matrix. The n^{−1/2} is picked because it provides just the right balancing to allow a rich distribution theory.
To see the consequences, once again take the simple case k = l = 1. Here, the instrument z_i is weak for x_i if
γ = n^{−1/2} c.
Then (11.15) is unaffected, but (11.16) instead takes the form
(1/√n) Σ_{i=1}^n z_i x_i = (1/√n) Σ_{i=1}^n z_i² γ + (1/√n) Σ_{i=1}^n z_i u_i = (1/n) Σ_{i=1}^n z_i² c + (1/√n) Σ_{i=1}^n z_i u_i →_d Qc + N_2,
therefore
\hat{β} − β →_d N_1 / ( Qc + N_2 ).
As in the case of complete identification failure, we find that \hat{β} is inconsistent for β and the asymptotic distribution of \hat{β} is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.
The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form
X_2 = Z_1Γ_{12} + Z_2Γ_{22} + U_2
and identification requires rank(Γ_{22}) = k_2. If k_2 = 1, this requires Γ_{22} ≠ 0, which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of k_2 = 1 (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for X_2, construct the test of Γ_{22} = 0, and at a minimum check that the test rejects H_0: Γ_{22} = 0.
When k_2 > 1, Γ_{22} ≠ 0 is not sufficient for identification. It is not even sufficient that each column of Γ_{22} is non-zero (each column corresponds to a distinct endogenous variable in X_2). So while a minimal check is to test that each column of Γ_{22} is non-zero, this cannot be interpreted as definitive proof that Γ_{22} has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for X_2, and attempt to assess the likelihood that Γ_{22} has deficient rank.
160
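As a practical illustration of this recommendation, the reduced form can be estimated by OLS and the exclusion of the instruments tested with a Wald statistic. The sketch below is not part of the manuscript; it assumes a single endogenous regressor, simulated data, and a heteroskedasticity-robust covariance matrix, all of which are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, l2 = 500, 3
Z2 = rng.normal(size=(n, l2))                               # excluded instruments
x2 = Z2 @ np.array([0.2, 0.1, 0.0]) + rng.normal(size=n)    # endogenous regressor

# Reduced form: regress x2 on an intercept and Z2, then test Gamma_22 = 0
Z = np.column_stack([np.ones(n), Z2])
gamma_hat = np.linalg.solve(Z.T @ Z, Z.T @ x2)
u_hat = x2 - Z @ gamma_hat

# Heteroskedasticity-robust covariance matrix for gamma_hat
ZtZ_inv = np.linalg.inv(Z.T @ Z)
V = ZtZ_inv @ (Z.T * u_hat**2) @ Z @ ZtZ_inv

R = np.eye(l2 + 1)[1:]                                      # selects coefficients on Z2
W = (R @ gamma_hat) @ np.linalg.solve(R @ V @ R.T, R @ gamma_hat)
p_value = 1 - stats.chi2.cdf(W, df=l2)
print(f"Wald statistic {W:.2f}, p-value {p_value:.4f}")
```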
Exercises

1. Consider the single equation model
$$y_i = x_i \beta + e_i,$$
where $y_i$ and $x_i$ are both real-valued ($1 \times 1$). Let $\hat{\beta}$ denote the IV estimator of $\beta$ using as an instrument a dummy variable $d_i$ (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.

2. In the linear model
$$y_i = x_i'\beta + e_i$$
$$E(e_i \mid x_i) = 0$$
suppose $\sigma_i^2 = E(e_i^2 \mid x_i)$ is known. Show that the GLS estimator of $\beta$ can be written as an IV estimator using some instrument $z_i$. (Find an expression for $z_i$.)

3. Take the linear model
$$y = X\beta + e.$$
Let the OLS estimator for $\beta$ be $\hat{\beta}$ and the OLS residual be $\hat{e} = y - X\hat{\beta}$.
Let the IV estimator for $\beta$ using some instrument $Z$ be $\tilde{\beta}$ and the IV residual be $\tilde{e} = y - X\tilde{\beta}$.
If $X$ is indeed endogenous, will IV "fit" better than OLS, in the sense that $\tilde{e}'\tilde{e} < \hat{e}'\hat{e}$, at least in large samples?

4. The reduced form between the regressors $x_i$ and instruments $z_i$ takes the form
$$x_i = \Gamma' z_i + u_i$$
or
$$X = Z\Gamma + U$$
where $x_i$ is $k \times 1$, $z_i$ is $l \times 1$, $X$ is $n \times k$, $Z$ is $n \times l$, $U$ is $n \times k$, and $\Gamma$ is $l \times k$. The parameter $\Gamma$ is defined by the population moment condition
$$E\left(z_i u_i'\right) = 0.$$
Show that the method of moments estimator for $\Gamma$ is $\hat{\Gamma} = (Z'Z)^{-1}(Z'X)$.

5. In the structural model
$$y = X\beta + e$$
$$X = Z\Gamma + U$$
with $\Gamma$ $l \times k$, $l \geq k$, we claim that $\beta$ is identified (can be recovered from the reduced form) if $\mathrm{rank}(\Gamma) = k$. Explain why this is true. That is, show that if $\mathrm{rank}(\Gamma) < k$ then $\beta$ cannot be identified.

6. Take the linear model
$$y_i = x_i \beta + e_i$$
$$E(e_i \mid x_i) = 0,$$
where $x_i$ and $\beta$ are $1 \times 1$.
(a) Show that $E(x_i e_i) = 0$ and $E\left(x_i^2 e_i\right) = 0$. Is $z_i = (x_i \ \ x_i^2)'$ a valid instrumental variable for estimation of $\beta$?
(b) Define the 2SLS estimator of $\beta$, using $z_i$ as an instrument for $x_i$. How does this differ from OLS?
(c) Find the efficient GMM estimator of $\beta$ based on the moment condition
$$E\left(z_i\left(y_i - x_i\beta\right)\right) = 0.$$
Does this differ from 2SLS and/or OLS?

7. Suppose that price and quantity are determined by the intersection of the linear demand and supply curves
$$\text{Demand}: \quad Q = a_0 + a_1 P + a_2 Y + e_1$$
$$\text{Supply}: \quad Q = b_0 + b_1 P + b_2 W + e_2$$
where income ($Y$) and wage ($W$) are determined outside the market. In this model, are the parameters identified?

8. The data file card.dat is taken from David Card, "Using Geographic Variation in College Proximity to Estimate the Return to Schooling" in Aspects of Labour Market Behavior (1995). There are 2215 observations with 29 variables, listed in card.pdf. We want to estimate a wage equation
$$\log(Wage) = \beta_0 + \beta_1 Educ + \beta_2 Exper + \beta_3 Exper^2 + \beta_4 South + \beta_5 Black + e$$
where $Educ$ = Education (Years), $Exper$ = Experience (Years), and $South$ and $Black$ are regional and racial dummy variables.
(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments: near2 (a dummy indicating that the observation lives near a 2-year college), fatheduc (the education, in years, of the father) and motheduc (the education, in years, of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the J statistic for overidentification.
(f) Discuss your findings.
Chapter 12

Univariate Time Series

A time series $y_t$ is a process observed in sequence over time, $t = 1, ..., T$. To indicate the dependence on time, we adopt new notation, and use the subscript $t$ to denote the individual observation, and $T$ to denote the number of observations.

Because of the sequential nature of time series, we expect that $y_t$ and $y_{t-1}$ are not independent, so classical assumptions are not valid.

We can separate time series into two categories: univariate ($y_t \in \mathbb{R}$ is scalar) and multivariate ($y_t \in \mathbb{R}^m$ is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model for multivariate time series is vector autoregressions (VARs).
12.1 Stationarity and Ergodicity

Definition 12.1.1 $\{y_t\}$ is covariance (weakly) stationary if
$$E(y_t) = \mu$$
is independent of $t$, and
$$\mathrm{cov}\left(y_t, y_{t-k}\right) = \gamma(k)$$
is independent of $t$ for all $k$. $\gamma(k)$ is called the autocovariance function.
$$\rho(k) = \gamma(k)/\gamma(0) = \mathrm{corr}\left(y_t, y_{t-k}\right)$$
is the autocorrelation function.

Definition 12.1.2 $\{y_t\}$ is strictly stationary if the joint distribution of $(y_t, ..., y_{t-k})$ is independent of $t$ for all $k$.

Definition 12.1.3 A stationary time series is ergodic if $\gamma(k) \to 0$ as $k \to \infty$.

The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 12.1.1 If $y_t$ is strictly stationary and ergodic and $x_t = f(y_t, y_{t-1}, ...)$ is a random variable, then $x_t$ is strictly stationary and ergodic.

Theorem 12.1.2 (Ergodic Theorem). If $y_t$ is strictly stationary and ergodic and $E|y_t| < \infty$, then as $T \to \infty$,
$$\frac{1}{T}\sum_{t=1}^T y_t \xrightarrow{p} E(y_t).$$

This allows us to consistently estimate parameters using time-series moments:
The sample mean:
$$\hat{\mu} = \frac{1}{T}\sum_{t=1}^T y_t.$$
The sample autocovariance:
$$\hat{\gamma}(k) = \frac{1}{T}\sum_{t=1}^T \left(y_t - \hat{\mu}\right)\left(y_{t-k} - \hat{\mu}\right).$$
The sample autocorrelation:
$$\hat{\rho}(k) = \frac{\hat{\gamma}(k)}{\hat{\gamma}(0)}.$$

Theorem 12.1.3 If $y_t$ is strictly stationary and ergodic and $Ey_t^2 < \infty$, then as $T \to \infty$,
1. $\hat{\mu} \xrightarrow{p} E(y_t)$;
2. $\hat{\gamma}(k) \xrightarrow{p} \gamma(k)$;
3. $\hat{\rho}(k) \xrightarrow{p} \rho(k)$.

Proof of Theorem 12.1.3. Part (1) is a direct consequence of the Ergodic Theorem. For Part (2), note that
$$\hat{\gamma}(k) = \frac{1}{T}\sum_{t=1}^T \left(y_t - \hat{\mu}\right)\left(y_{t-k} - \hat{\mu}\right) = \frac{1}{T}\sum_{t=1}^T y_t y_{t-k} - \frac{1}{T}\sum_{t=1}^T y_t \hat{\mu} - \frac{1}{T}\sum_{t=1}^T y_{t-k}\hat{\mu} + \hat{\mu}^2.$$
By Theorem 12.1.1 above, the sequence $y_t y_{t-k}$ is strictly stationary and ergodic, and it has a finite mean by the assumption that $Ey_t^2 < \infty$. Thus an application of the Ergodic Theorem yields
$$\frac{1}{T}\sum_{t=1}^T y_t y_{t-k} \xrightarrow{p} E\left(y_t y_{t-k}\right).$$
Thus
$$\hat{\gamma}(k) \xrightarrow{p} E\left(y_t y_{t-k}\right) - \mu^2 - \mu^2 + \mu^2 = E\left(y_t y_{t-k}\right) - \mu^2 = \gamma(k).$$
Part (3) follows by the continuous mapping theorem: $\hat{\rho}(k) = \hat{\gamma}(k)/\hat{\gamma}(0) \xrightarrow{p} \gamma(k)/\gamma(0) = \rho(k)$.
12.2 Autoregressions

In time-series, the series $\{..., y_1, y_2, ..., y_T, ...\}$ are jointly random. We consider the conditional expectation
$$E\left(y_t \mid \mathcal{F}_{t-1}\right)$$
where $\mathcal{F}_{t-1} = \{y_{t-1}, y_{t-2}, ...\}$ is the past history of the series.

An autoregressive (AR) model specifies that only a finite number of past lags matter:
$$E\left(y_t \mid \mathcal{F}_{t-1}\right) = E\left(y_t \mid y_{t-1}, ..., y_{t-k}\right).$$
A linear AR model (the most common type used in practice) specifies linearity:
$$E\left(y_t \mid \mathcal{F}_{t-1}\right) = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k}.$$
Letting
$$e_t = y_t - E\left(y_t \mid \mathcal{F}_{t-1}\right),$$
then we have the autoregressive model
$$y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t$$
$$E\left(e_t \mid \mathcal{F}_{t-1}\right) = 0.$$
The last property defines a special time-series process.

Definition 12.2.1 $e_t$ is a martingale difference sequence (MDS) if $E(e_t \mid \mathcal{F}_{t-1}) = 0$.

Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset returns should be the sum of a constant plus a MDS.

The MDS property for the regression error plays the same role in a time-series regression as does the conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more important in the time-series context, as it is difficult to derive distribution theories without this property.

A useful property of a MDS is that $e_t$ is uncorrelated with any function of the lagged information $\mathcal{F}_{t-1}$. Thus for $k > 0$, $E(y_{t-k} e_t) = 0$.
12.3 Stationarity of AR(1) Process

A mean-zero AR(1) is
$$y_t = \rho y_{t-1} + e_t.$$
Assume that $e_t$ is iid, $E(e_t) = 0$ and $Ee_t^2 = \sigma^2 < \infty$.

By back-substitution, we find
$$y_t = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \cdots = \sum_{k=0}^{\infty} \rho^k e_{t-k}.$$
Loosely speaking, this series converges if the sequence $\rho^k e_{t-k}$ gets small as $k \to \infty$. This occurs when $|\rho| < 1$.

Theorem 12.3.1 If $|\rho| < 1$ then $y_t$ is strictly stationary and ergodic.

We can compute the moments of $y_t$ using the infinite sum:
$$Ey_t = \sum_{k=0}^{\infty} \rho^k E(e_{t-k}) = 0$$
$$\mathrm{var}(y_t) = \sum_{k=0}^{\infty} \rho^{2k}\,\mathrm{var}(e_{t-k}) = \frac{\sigma^2}{1 - \rho^2}.$$
If the equation for $y_t$ has an intercept, the above results are unchanged, except that the mean of $y_t$ can be computed from the relationship
$$Ey_t = \alpha + \rho Ey_{t-1},$$
and solving for $Ey_t = Ey_{t-1}$ we find $Ey_t = \alpha/(1 - \rho)$.
12.4 Lag Operator

An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Definition 12.4.1 The lag operator L satisfies $Ly_t = y_{t-1}$.

Defining $L^2 = LL$, we see that $L^2 y_t = Ly_{t-1} = y_{t-2}$. In general, $L^k y_t = y_{t-k}$.

The AR(1) model can be written in the format
$$y_t - \rho y_{t-1} = e_t$$
or
$$(1 - \rho L)\, y_t = e_t.$$
The operator $\rho(L) = (1 - \rho L)$ is a polynomial in the operator L. We say that the root of the polynomial is $1/\rho$, since $\rho(z) = 0$ when $z = 1/\rho$. We call $\rho(L)$ the autoregressive polynomial of $y_t$.

From Theorem 12.3.1, an AR(1) is stationary iff $|\rho| < 1$. Note that an equivalent way to say this is that an AR(1) is stationary iff the root of the autoregressive polynomial is larger than one (in absolute value).
12.5 Stationarity of AR(k)

The AR(k) model is
$$y_t = \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t.$$
Using the lag operator,
$$y_t - \rho_1 L y_t - \rho_2 L^2 y_t - \cdots - \rho_k L^k y_t = e_t,$$
or
$$\rho(L)\, y_t = e_t$$
where
$$\rho(L) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_k L^k.$$
We call $\rho(L)$ the autoregressive polynomial of $y_t$.

The Fundamental Theorem of Algebra says that any polynomial can be factored as
$$\rho(z) = \left(1 - \lambda_1^{-1} z\right)\left(1 - \lambda_2^{-1} z\right)\cdots\left(1 - \lambda_k^{-1} z\right)$$
where the $\lambda_1, ..., \lambda_k$ are the complex roots of $\rho(z)$, which satisfy $\rho(\lambda_j) = 0$.

We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let $|\lambda|$ denote the modulus of a complex number $\lambda$.

Theorem 12.5.1 The AR(k) is strictly stationary and ergodic if and only if $|\lambda_j| > 1$ for all $j$.

One way of stating this is that "All roots lie outside the unit circle."

If one of the roots equals 1, we say that $\rho(L)$, and hence $y_t$, "has a unit root". This is a special case of non-stationarity, and is of great interest in applied time series.
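The stationarity condition is easy to check numerically by computing the roots of the autoregressive polynomial. A minimal sketch (not from the manuscript), using NumPy's polynomial root finder:

```python
import numpy as np

def ar_is_stationary(rho):
    """Check whether all roots of 1 - rho_1 z - ... - rho_k z^k lie outside the unit circle."""
    # numpy.roots expects coefficients ordered from the highest power down to the constant
    coefs = np.r_[-np.asarray(rho, dtype=float)[::-1], 1.0]
    roots = np.roots(coefs)
    return bool(np.all(np.abs(roots) > 1.0)), roots

print(ar_is_stationary([0.5]))          # AR(1) with rho = 0.5: stationary, root = 2
print(ar_is_stationary([1.2, -0.2]))    # rho_1 + rho_2 = 1, so it has a unit root
```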
12.6 Estimation

Let
$$x_t = \left(1 \ \ y_{t-1} \ \ y_{t-2} \ \cdots \ y_{t-k}\right)'$$
$$\beta = \left(\alpha \ \ \rho_1 \ \ \rho_2 \ \cdots \ \rho_k\right)'.$$
Then the model can be written as
$$y_t = x_t'\beta + e_t.$$
The OLS estimator is
$$\hat{\beta} = \left(X'X\right)^{-1} X'y.$$
To study $\hat{\beta}$, it is helpful to define the process $u_t = x_t e_t$. Note that $u_t$ is a MDS, since
$$E\left(u_t \mid \mathcal{F}_{t-1}\right) = E\left(x_t e_t \mid \mathcal{F}_{t-1}\right) = x_t E\left(e_t \mid \mathcal{F}_{t-1}\right) = 0.$$
By Theorem 12.1.1, it is also strictly stationary and ergodic. Thus
$$\frac{1}{T}\sum_{t=1}^T x_t e_t = \frac{1}{T}\sum_{t=1}^T u_t \xrightarrow{p} E(u_t) = 0. \qquad (12.1)$$
The vector $x_t$ is strictly stationary and ergodic, and by Theorem 12.1.1, so is $x_t x_t'$. Thus by the Ergodic Theorem,
$$\frac{1}{T}\sum_{t=1}^T x_t x_t' \xrightarrow{p} E\left(x_t x_t'\right) = Q.$$
Combined with (12.1) and the continuous mapping theorem, we see that
$$\hat{\beta} = \beta + \left(\frac{1}{T}\sum_{t=1}^T x_t x_t'\right)^{-1}\left(\frac{1}{T}\sum_{t=1}^T x_t e_t\right) \xrightarrow{p} \beta + Q^{-1}\,0 = \beta.$$
We have shown the following:

Theorem 12.6.1 If the AR(k) process $y_t$ is strictly stationary and ergodic and $Ey_t^2 < \infty$, then $\hat{\beta} \xrightarrow{p} \beta$ as $T \to \infty$.
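A minimal sketch of this estimator (not from the manuscript): build the regressor matrix from an intercept and k lags, then apply the OLS formula. The simulated AR(2) used here is an illustrative assumption.

```python
import numpy as np

def ar_ols(y, k):
    """OLS estimate of (alpha, rho_1, ..., rho_k) in an AR(k) with intercept."""
    T = len(y)
    X = np.column_stack([np.ones(T - k)] + [y[k - j:T - j] for j in range(1, k + 1)])
    return np.linalg.solve(X.T @ X, X.T @ y[k:])

# Simulate y_t = 0.2 + 0.6 y_{t-1} - 0.2 y_{t-2} + e_t and re-estimate the coefficients
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 0.2 + 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()
print(ar_ols(y, 2))   # approximately (0.2, 0.6, -0.2)
```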
12.7 Asymptotic Distribution

Theorem 12.7.1 MDS CLT. If $u_t$ is a strictly stationary and ergodic MDS and $E(u_t u_t') = \Omega < \infty$, then as $T \to \infty$,
$$\frac{1}{\sqrt{T}}\sum_{t=1}^T u_t \xrightarrow{d} N(0, \Omega).$$

Since $x_t e_t$ is a MDS, we can apply Theorem 12.7.1 to see that
$$\frac{1}{\sqrt{T}}\sum_{t=1}^T x_t e_t \xrightarrow{d} N(0, \Omega),$$
where
$$\Omega = E\left(x_t x_t' e_t^2\right).$$

Theorem 12.7.2 If the AR(k) process $y_t$ is strictly stationary and ergodic and $Ey_t^4 < \infty$, then as $T \to \infty$,
$$\sqrt{T}\left(\hat{\beta} - \beta\right) \xrightarrow{d} N\left(0, \ Q^{-1}\Omega Q^{-1}\right).$$

This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.
12.8 Bootstrap for Autoregressions

In the non-parametric bootstrap, we constructed the bootstrap sample by randomly resampling from the data values $\{y_t, x_t\}$. This creates an iid bootstrap sample. Clearly, this cannot work in a time-series application, as this imposes inappropriate independence.

Briefly, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap.

1. Estimate $\hat{\beta}$ and residuals $\hat{e}_t$.
2. Fix an initial condition $(y_{-k+1}, y_{-k+2}, ..., y_0)$.
3. Simulate iid draws $e_t^*$ from the empirical distribution of the residuals $\{\hat{e}_1, ..., \hat{e}_T\}$.
4. Create the bootstrap series $y_t^*$ by the recursive formula
$$y_t^* = \hat{\alpha} + \hat{\rho}_1 y_{t-1}^* + \hat{\rho}_2 y_{t-2}^* + \cdots + \hat{\rho}_k y_{t-k}^* + e_t^*.$$

This construction imposes homoskedasticity on the errors $e_t^*$, which may be different than the properties of the actual $e_t$. It also presumes that the AR(k) structure is the truth.
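A minimal sketch of the model-based bootstrap for the AR(1) case (not from the manuscript; the AR(1) specialization and the choice of the observed first value as initial condition are illustrative assumptions):

```python
import numpy as np

def ar1_bootstrap(y, B=999, seed=0):
    """Model-based bootstrap draws of the AR(1) coefficient."""
    rng = np.random.default_rng(seed)
    T = len(y)
    X = np.column_stack([np.ones(T - 1), y[:-1]])
    alpha_hat, rho_hat = np.linalg.solve(X.T @ X, X.T @ y[1:])
    e_hat = y[1:] - X @ np.array([alpha_hat, rho_hat])

    rho_star = np.empty(B)
    for b in range(B):
        e_star = rng.choice(e_hat, size=T - 1, replace=True)
        y_star = np.empty(T)
        y_star[0] = y[0]                                  # fix the initial condition
        for t in range(1, T):
            y_star[t] = alpha_hat + rho_hat * y_star[t - 1] + e_star[t - 1]
        Xb = np.column_stack([np.ones(T - 1), y_star[:-1]])
        rho_star[b] = np.linalg.solve(Xb.T @ Xb, Xb.T @ y_star[1:])[1]
    return rho_hat, rho_star
```

The array rho_star can then be used to form percentile or bootstrap-t intervals for the autoregressive coefficient.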
Method 2: Block Resampling

1. Divide the sample into $T/m$ blocks of length $m$.
2. Resample complete blocks. For each simulated sample, draw $T/m$ blocks.
3. Paste the blocks together to create the bootstrap time-series $y_t^*$.
4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model misspecification.
5. The results may be sensitive to the block length, and the way that the data are partitioned into blocks.
6. May not work well in small samples.
12.9 Trend Stationarity

$$y_t = \mu_0 + \mu_1 t + S_t \qquad (12.2)$$
$$S_t = \rho_1 S_{t-1} + \rho_2 S_{t-2} + \cdots + \rho_k S_{t-k} + e_t, \qquad (12.3)$$
or
$$y_t = \alpha_0 + \alpha_1 t + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t. \qquad (12.4)$$
There are two essentially equivalent ways to estimate the autoregressive parameters $(\rho_1, ..., \rho_k)$.

• You can estimate (12.4) by OLS.

• You can estimate (12.2)-(12.3) sequentially by OLS. That is, first estimate (12.2), get the residual $\hat{S}_t$, and then perform regression (12.3) replacing $S_t$ with $\hat{S}_t$. This procedure is sometimes called Detrending.

The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.
Seasonal Effects

There are three popular methods to deal with seasonal data.

• Include dummy variables for each season. This presumes that "seasonality" does not change over the sample.

• Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.

• First apply a seasonal differencing operator. If $s$ is the number of seasons (typically $s = 4$ or $s = 12$),
$$\Delta_s y_t = y_t - y_{t-s},$$
or the season-to-season change. The series $\Delta_s y_t$ is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.
12.10 Testing for Omitted Serial Correlation

For simplicity, let the null hypothesis be an AR(1):
$$y_t = \alpha + \rho y_{t-1} + u_t. \qquad (12.5)$$
We are interested in the question if the error $u_t$ is serially correlated. We model this as an AR(1):
$$u_t = \theta u_{t-1} + e_t \qquad (12.6)$$
with $e_t$ a MDS. The hypothesis of no omitted serial correlation is
$$H_0 : \theta = 0$$
$$H_1 : \theta \neq 0.$$
We want to test $H_0$ against $H_1$.

To combine (12.5) and (12.6), we take (12.5) and lag the equation once:
$$y_{t-1} = \alpha + \rho y_{t-2} + u_{t-1}.$$
We then multiply this by $\theta$ and subtract from (12.5), to find
$$y_t - \theta y_{t-1} = \alpha - \theta\alpha + \rho y_{t-1} - \theta\rho y_{t-2} + u_t - \theta u_{t-1},$$
or
$$y_t = \alpha(1 - \theta) + (\rho + \theta)y_{t-1} - \theta\rho y_{t-2} + e_t,$$
which is an AR(2). Thus under $H_0$, $y_t$ is an AR(1), and under $H_1$ it is an AR(2). $H_0$ may be expressed as the restriction that the coefficient on $y_{t-2}$ is zero.

An appropriate test of $H_0$ against $H_1$ is therefore a Wald test that the coefficient on $y_{t-2}$ is zero. (A simple exclusion test.)

In general, if the null hypothesis is that $y_t$ is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative $y_t$ is an AR(k+m), and this is equivalent to the restriction that the coefficients on $y_{t-k-1}, ..., y_{t-k-m}$ are jointly zero. An appropriate test is the Wald test of this restriction.
12.11 Model Selection

What is the appropriate choice of $k$ in practice? This is a problem of model selection.

One approach to model selection is to choose $k$ based on Wald tests. Another is to minimize the AIC or BIC information criterion, e.g.
$$AIC(k) = \log \hat{\sigma}^2(k) + \frac{2k}{T},$$
where $\hat{\sigma}^2(k)$ is the estimated residual variance from an AR(k).

One ambiguity in defining the AIC criterion is that the sample available for estimation changes as $k$ changes. (If you increase $k$, you need more initial conditions.) This can induce strange behavior in the AIC. The best remedy is to fix an upper value $\bar{k}$, and then reserve the first $\bar{k}$ observations as initial conditions, and then estimate the models AR(1), AR(2), ..., AR($\bar{k}$) on this (unified) sample.
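A minimal sketch of this selection rule (not from the manuscript): estimate AR(1) through AR(k_max) on a common sample that reserves the first k_max observations as initial conditions, and choose the lag length minimizing the AIC.

```python
import numpy as np

def select_ar_lag_aic(y, k_max):
    """Choose the AR lag length by AIC, estimating all models on a common sample."""
    T_eff = len(y) - k_max                      # common effective sample size
    aic = {}
    for k in range(1, k_max + 1):
        X = np.column_stack(
            [np.ones(T_eff)] + [y[k_max - j:len(y) - j] for j in range(1, k + 1)]
        )
        yk = y[k_max:]
        beta = np.linalg.solve(X.T @ X, X.T @ yk)
        sigma2 = np.mean((yk - X @ beta) ** 2)
        aic[k] = np.log(sigma2) + 2 * k / T_eff
    return min(aic, key=aic.get), aic
```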
12.12 Autoregressive Unit Roots

The AR(k) model is
$$\rho(L)\, y_t = \mu + e_t$$
$$\rho(L) = 1 - \rho_1 L - \cdots - \rho_k L^k.$$
As we discussed before, $y_t$ has a unit root when $\rho(1) = 0$, or
$$\rho_1 + \rho_2 + \cdots + \rho_k = 1.$$
In this case, $y_t$ is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.

A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
$$\Delta y_t = \mu + \alpha_0 y_{t-1} + \alpha_1 \Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \qquad (12.7)$$
These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter $\alpha_0$ summarizes the information about the unit root, since $\rho(1) = -\alpha_0$. To see this, observe that the lag polynomial for the $y_t$ computed from (12.7) is
$$(1 - L) - \alpha_0 L - \alpha_1\left(L - L^2\right) - \cdots - \alpha_{k-1}\left(L^{k-1} - L^k\right).$$
But this must equal $\rho(L)$, as the models are equivalent. Thus
$$\rho(1) = (1 - 1) - \alpha_0 - (1 - 1) - \cdots - (1 - 1) = -\alpha_0.$$
Hence, the hypothesis of a unit root in $y_t$ can be stated as
$$H_0 : \alpha_0 = 0.$$
Note that the model is stationary if $\alpha_0 < 0$. So the natural alternative is
$$H_1 : \alpha_0 < 0.$$
Under $H_0$, the model for $y_t$ is
$$\Delta y_t = \mu + \alpha_1 \Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t,$$
which is an AR(k-1) in the first-difference $\Delta y_t$. Thus if $y_t$ has a (single) unit root, then $\Delta y_t$ is a stationary AR process. Because of this property, we say that if $y_t$ is non-stationary but $\Delta^d y_t$ is stationary, then $y_t$ is "integrated of order $d$", or I($d$). Thus a time series with a unit root is I(1).

Since $\alpha_0$ is the parameter of a linear regression, the natural test statistic is the t-statistic for $H_0$ from OLS estimation of (12.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.

It would seem natural to assess the significance of the ADF statistic using the normal table. However, under $H_0$, $y_t$ is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.

Theorem 12.12.1 Dickey-Fuller Theorem.
Assume $\alpha_0 = 0$. As $T \to \infty$,
$$T\hat{\alpha}_0 \xrightarrow{d} \left(1 - \alpha_1 - \alpha_2 - \cdots - \alpha_{k-1}\right) DF_\alpha$$
$$ADF = \frac{\hat{\alpha}_0}{s(\hat{\alpha}_0)} \xrightarrow{d} DF_t.$$

The limit distributions $DF_\alpha$ and $DF_t$ are non-normal. They are skewed to the left, and have negative means.

The first result states that $\hat{\alpha}_0$ converges to its true value (of zero) at rate $T$, rather than the conventional rate of $T^{1/2}$. This is called a "super-consistent" rate of convergence.

The second result states that the t-statistic for $\hat{\alpha}_0$ converges to a limit distribution which is non-normal, but does not depend on the parameters $\alpha$. This distribution has been extensively tabulated, and may be used for testing the hypothesis $H_0$. Note: The standard error $s(\hat{\alpha}_0)$ is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.

Since the alternative hypothesis is one-sided, the ADF test rejects $H_0$ in favor of $H_1$ when $ADF < c$, where $c$ is the critical value from the ADF table. If the test rejects $H_0$, this means that the evidence points to $y_t$ being stationary. If the test does not reject $H_0$, a common conclusion is that the data suggests that $y_t$ is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.
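A minimal sketch of computing the ADF statistic from regression (12.7) (not from the manuscript). The statistic must be compared with Dickey-Fuller critical values, not the normal table; those critical values are not reproduced here.

```python
import numpy as np

def adf_statistic(y, k=1):
    """ADF t-statistic: t-ratio on y_{t-1} in regression (12.7) with k-1 lagged differences."""
    dy = np.diff(y)
    T = len(dy)
    yreg = dy[k - 1:]                                   # Delta y_t on the usable sample
    cols = [np.ones(T - k + 1), y[k - 1:-1]]            # intercept and y_{t-1}
    for j in range(1, k):                               # lagged differences Delta y_{t-j}
        cols.append(dy[k - 1 - j:T - j])
    X = np.column_stack(cols)
    beta = np.linalg.solve(X.T @ X, X.T @ yreg)
    resid = yreg - X @ beta
    s2 = resid @ resid / (len(yreg) - X.shape[1])       # conventional variance estimate
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se                                 # compare with DF critical values
```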
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
$$\Delta y_t = \mu_1 + \mu_2 t + \alpha_0 y_{t-1} + \alpha_1 \Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \qquad (12.8)$$
This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, Stock Prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe this data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for $\alpha_0$ from OLS estimation of (12.8).

If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.

Most texts include as well the critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice where intercepts are always included.
Chapter 13

Multivariate Time Series

A multivariate time series $y_t$ is a vector process $m \times 1$. Let $\mathcal{F}_{t-1} = (y_{t-1}, y_{t-2}, ...)$ be all lagged information at time $t$. The typical goal is to find the conditional expectation $E(y_t \mid \mathcal{F}_{t-1})$. Note that since $y_t$ is a vector, this conditional expectation is also a vector.
13.1 Vector Autoregressions (VARs)

A VAR model specifies that the conditional mean is a function of only a finite number of lags:
$$E\left(y_t \mid \mathcal{F}_{t-1}\right) = E\left(y_t \mid y_{t-1}, ..., y_{t-k}\right).$$
A linear VAR specifies that this conditional mean is linear in the arguments:
$$E\left(y_t \mid y_{t-1}, ..., y_{t-k}\right) = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k}.$$
Observe that $a_0$ is $m \times 1$, and each of $A_1$ through $A_k$ are $m \times m$ matrices.

Defining the $m \times 1$ regression error
$$e_t = y_t - E\left(y_t \mid \mathcal{F}_{t-1}\right),$$
we have the VAR model
$$y_t = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k} + e_t$$
$$E\left(e_t \mid \mathcal{F}_{t-1}\right) = 0.$$
Alternatively, defining the $mk + 1$ vector
$$x_t = \begin{pmatrix} 1 \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-k} \end{pmatrix}$$
and the $m \times (mk + 1)$ matrix
$$A = \left(a_0 \ \ A_1 \ \ A_2 \ \cdots \ A_k\right),$$
then
$$y_t = A x_t + e_t.$$
The VAR model is a system of $m$ equations. One way to write this is to let $a_j'$ be the $j$th row of $A$. Then the VAR system can be written as the equations
$$y_{jt} = a_j' x_t + e_{jt}.$$
Unrestricted VARs were introduced to econometrics by Sims (1980).
13.2 Estimation

Consider the moment conditions
$$E\left(x_t e_{jt}\right) = 0,$$
$j = 1, ..., m$. These are implied by the VAR model, either as a regression, or as a linear projection.

The GMM estimator corresponding to these moment conditions is equation-by-equation OLS
$$\hat{a}_j = (X'X)^{-1} X'y_j.$$
An alternative way to compute this is as follows. Note that
$$\hat{a}_j' = y_j' X(X'X)^{-1}.$$
And if we stack these to create the estimate $\hat{A}$, we find
$$\hat{A} = \begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_m' \end{pmatrix} X(X'X)^{-1} = Y'X(X'X)^{-1},$$
where
$$Y = \left(y_1 \ \ y_2 \ \cdots \ y_m\right)$$
is the $T \times m$ matrix of the stacked $y_t'$.

This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
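A minimal sketch of this equation-by-equation OLS computation (not from the manuscript); Y is the $T \times m$ data matrix, and the function returns $\hat{A}$ with columns ordered as (intercept, first lag block, ..., kth lag block), an illustrative layout choice.

```python
import numpy as np

def var_ols(Y, k):
    """Equation-by-equation OLS of a VAR(k): returns A_hat of shape (m, m*k + 1)."""
    T, m = Y.shape
    # regressor x_t = (1, y_{t-1}', ..., y_{t-k}')'
    X = np.column_stack(
        [np.ones(T - k)] + [Y[k - j:T - j, :] for j in range(1, k + 1)]
    )
    Yk = Y[k:, :]
    A_hat = (Yk.T @ X) @ np.linalg.inv(X.T @ X)   # = Y'X (X'X)^{-1}
    resid = Yk - X @ A_hat.T
    return A_hat, resid
```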
13.3 Restricted VARs

The unrestricted VAR is a system of $m$ equations, each with the same set of regressors. A restricted VAR imposes restrictions on the system. For example, some regressors may be excluded from some of the equations. Restrictions may be imposed on individual equations, or across equations. The GMM framework gives a convenient method to impose such restrictions on estimation.
13.4 Single Equation from a VAR

Often, we are only interested in a single equation out of a VAR system. This takes the form
$$y_{jt} = a_j' x_t + e_{jt},$$
and $x_t$ consists of lagged values of $y_{jt}$ and the other $y_{lt}$'s. In this case, it is convenient to re-define the variables. Let $y_t = y_{jt}$, and $z_t$ be the other variables. Let $e_t = e_{jt}$ and $\beta = a_j$. Then the single equation takes the form
$$y_t = x_t'\beta + e_t, \qquad (13.1)$$
and
$$x_t = \left(1, \ y_{t-1}, \ ..., \ y_{t-k}, \ z_{t-1}', \ ..., \ z_{t-k}'\right)'.$$
This is just a conventional regression with time series data.
13.5 Testing for Omitted Serial Correlation

Consider the problem of testing for omitted serial correlation in equation (13.1). Suppose that $e_t$ is an AR(1). Then
$$y_t = x_t'\beta + e_t$$
$$e_t = \theta e_{t-1} + u_t \qquad (13.2)$$
$$E\left(u_t \mid \mathcal{F}_{t-1}\right) = 0.$$
Then the null and alternative are
$$H_0 : \theta = 0 \qquad H_1 : \theta \neq 0.$$
Take the equation $y_t = x_t'\beta + e_t$, and subtract off the equation once lagged multiplied by $\theta$, to get
$$y_t - \theta y_{t-1} = \left(x_t'\beta + e_t\right) - \theta\left(x_{t-1}'\beta + e_{t-1}\right) = x_t'\beta - \theta x_{t-1}'\beta + e_t - \theta e_{t-1},$$
or
$$y_t = \theta y_{t-1} + x_t'\beta + x_{t-1}'\gamma + u_t, \qquad (13.3)$$
which is a valid regression model.

So testing $H_0$ versus $H_1$ is equivalent to testing for the significance of adding $(y_{t-1}, x_{t-1})$ to the regression. This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted serial correlation is to test the significance of extra lagged values of the dependent variable and regressors.

You may have heard of the Durbin-Watson test for omitted serial correlation, which once was very popular, and is still routinely reported by conventional regression packages. The DW test is appropriate only when the regression $y_t = x_t'\beta + e_t$ is not dynamic (has no lagged values on the RHS), and $e_t$ is iid N(0, $\sigma^2$). Otherwise it is invalid.

Another interesting fact is that (13.2) is a special case of (13.3), under the restriction $\gamma = -\beta\theta$. This restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (13.2) may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (13.2) is uncommon in recent applications.
13.6 Selection of Lag Length in a VAR

If you want a data-dependent rule to pick the lag length $k$ in a VAR, you may either use a testing-based approach (using, for example, the Wald statistic), or an information criterion approach. The formulas for the AIC and BIC are
$$AIC(k) = \log\det\left(\hat{\Omega}(k)\right) + 2\frac{p}{T}$$
$$BIC(k) = \log\det\left(\hat{\Omega}(k)\right) + \frac{p\log(T)}{T}$$
$$\hat{\Omega}(k) = \frac{1}{T}\sum_{t=1}^T \hat{e}_t(k)\hat{e}_t(k)'$$
$$p = m(km + 1)$$
where $p$ is the number of parameters in the model, and $\hat{e}_t(k)$ is the OLS residual vector from the model with $k$ lags. The log determinant is the criterion from the multivariate normal likelihood.
13.7 Granger Causality

Partition the data vector into $(y_t, z_t)$. Define the two information sets
$$\mathcal{F}_{1t} = \left(y_t, \ y_{t-1}, \ y_{t-2}, \ ...\right)$$
$$\mathcal{F}_{2t} = \left(y_t, \ z_t, \ y_{t-1}, \ z_{t-1}, \ y_{t-2}, \ z_{t-2}, \ ...\right).$$
The information set $\mathcal{F}_{1t}$ is generated only by the history of $y_t$, and the information set $\mathcal{F}_{2t}$ is generated by both $y_t$ and $z_t$. The latter has more information.

We say that $z_t$ does not Granger-cause $y_t$ if
$$E\left(y_t \mid \mathcal{F}_{1,t-1}\right) = E\left(y_t \mid \mathcal{F}_{2,t-1}\right).$$
That is, conditional on information in lagged $y_t$, lagged $z_t$ does not help to forecast $y_t$. If this condition does not hold, then we say that $z_t$ Granger-causes $y_t$.

The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If $z_t$ is some sort of forecast of the future, such as a futures price, then $z_t$ may help to forecast $y_t$ even though it does not "cause" $y_t$. This definition of causality was developed by Granger (1969) and Sims (1972).

In a linear VAR, the equation for $y_t$ is
$$y_t = \alpha + \rho_1 y_{t-1} + \cdots + \rho_k y_{t-k} + z_{t-1}'\gamma_1 + \cdots + z_{t-k}'\gamma_k + e_t.$$
In this equation, $z_t$ does not Granger-cause $y_t$ if and only if
$$H_0 : \gamma_1 = \gamma_2 = \cdots = \gamma_k = 0.$$
This may be tested using an exclusion (Wald) test.

This idea can be applied to blocks of variables. That is, $y_t$ and/or $z_t$ can be vectors. The hypothesis can be tested by using the appropriate multivariate Wald test.
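A minimal sketch of this exclusion test in the scalar case (not from the manuscript): regress $y_t$ on its own lags and the lags of $z_t$, and form the Wald statistic for excluding the lags of $z_t$. The heteroskedasticity-robust covariance matrix used here is an illustrative choice.

```python
import numpy as np
from scipy import stats

def granger_wald(y, z, k):
    """Wald test that lags 1..k of z do not help predict y."""
    T = len(y)
    X = np.column_stack(
        [np.ones(T - k)]
        + [y[k - j:T - j] for j in range(1, k + 1)]
        + [z[k - j:T - j] for j in range(1, k + 1)]
    )
    yk = y[k:]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ yk
    e = yk - X @ beta
    V = XtX_inv @ (X.T * e**2) @ X @ XtX_inv     # robust covariance matrix
    sel = slice(1 + k, 1 + 2 * k)                # coefficients on the lags of z
    b = beta[sel]
    W = b @ np.linalg.solve(V[sel, sel], b)
    return W, 1 - stats.chi2.cdf(W, df=k)
```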
If it is found that $z_t$ does not Granger-cause $y_t$, then we deduce that our time-series model of $E(y_t \mid \mathcal{F}_{t-1})$ does not require the use of $z_t$. Note, however, that $z_t$ may still be useful to explain other features of $y_t$, such as the conditional variance.

Clive W. J. Granger

Clive Granger (1934-2009) of England was one of the leading figures in time-series econometrics, and co-winner in 2003 of the Nobel Memorial Prize in Economic Sciences (along with Robert Engle). In addition to formalizing the definition of causality known as Granger causality, he invented the concept of cointegration, introduced spectral methods into econometrics, and formalized methods for the combination of forecasts.
13.8 Cointegration

The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger (1987).

Definition 13.8.1 The $m \times 1$ series $y_t$ is cointegrated if $y_t$ is I(1) yet there exists $\beta$, $m \times r$, of rank $r$, such that $z_t = \beta'y_t$ is I(0). The $r$ vectors in $\beta$ are called the cointegrating vectors.

If the series $y_t$ is not cointegrated, then $r = 0$. If $r = m$, then $y_t$ is I(0). For $0 < r < m$, $y_t$ is I(1) and cointegrated.

In some cases, it may be believed that $\beta$ is known a priori. Often, $\beta = (1 \ {-1})'$. For example, if $y_t$ is a pair of interest rates, then $\beta = (1 \ {-1})'$ specifies that the spread (the difference in returns) is stationary. If $y_t = (\log(Consumption) \ \ \log(Income))'$, then $\beta = (1 \ {-1})'$ specifies that $\log(Consumption/Income)$ is stationary.

In other cases, $\beta$ may not be known.

If $y_t$ is cointegrated with a single cointegrating vector ($r = 1$), then it turns out that $\beta$ can be consistently estimated by an OLS regression of one component of $y_t$ on the others. Thus $y_t = (y_{1t}, y_{2t})$ and $\beta = (\beta_1 \ \beta_2)$ and normalize $\beta_1 = 1$. Then $\hat{\beta}_2 = (y_2'y_2)^{-1}y_2'y_1 \xrightarrow{p} \beta_2$. Furthermore this estimation is super-consistent: $T(\hat{\beta}_2 - \beta_2) \xrightarrow{d} Limit$, as first shown by Stock (1987). This is not, in general, a good method to estimate $\beta$, but it is useful in the construction of alternative estimators and tests.

We are often interested in testing the hypothesis of no cointegration:
$$H_0 : r = 0$$
$$H_1 : r > 0.$$
Suppose that $\beta$ is known, so $z_t = \beta'y_t$ is known. Then under $H_0$, $z_t$ is I(1), yet under $H_1$, $z_t$ is I(0). Thus $H_0$ can be tested using a univariate ADF test on $z_t$.

When $\beta$ is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual $\hat{z}_t = \hat{\beta}'y_t$, from OLS of $y_{1t}$ on $y_{2t}$. Their justification was Stock's result that $\hat{\beta}$ is super-consistent under $H_1$. Under $H_0$, however, $\hat{\beta}$ is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).

When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
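A minimal sketch of the Engle-Granger residual-based procedure (not from the manuscript): estimate the cointegrating regression by OLS, then compute a Dickey-Fuller t-statistic on the residuals. Leaving out the lag augmentation is an illustrative simplification, and the statistic must be compared with the Engle-Granger/Phillips-Ouliaris critical values, which are not reproduced here.

```python
import numpy as np

def engle_granger_stat(y1, y2):
    """Residual-based cointegration statistic: DF t-ratio on OLS residuals."""
    T = len(y1)
    X = np.column_stack([np.ones(T), y2])
    b = np.linalg.solve(X.T @ X, X.T @ y1)
    z = y1 - X @ b                          # estimated cointegrating residual

    # Dickey-Fuller regression: Delta z_t = alpha_0 z_{t-1} + error (no intercept)
    dz, zlag = np.diff(z), z[:-1]
    a0 = (zlag @ dz) / (zlag @ zlag)
    resid = dz - a0 * zlag
    s2 = resid @ resid / (len(dz) - 1)
    se = np.sqrt(s2 / (zlag @ zlag))
    return a0 / se
```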
13.9 Cointegrated VARs

We can write a VAR as
$$A(L)\, y_t = e_t$$
$$A(L) = I - A_1 L - A_2 L^2 - \cdots - A_k L^k$$
or alternatively as
$$\Delta y_t = \Pi y_{t-1} + D(L)\Delta y_{t-1} + e_t$$
where
$$\Pi = -A(1) = -I + A_1 + A_2 + \cdots + A_k.$$

Theorem 13.9.1 Granger Representation Theorem
$y_t$ is cointegrated with $m \times r$ $\beta$ if and only if $\mathrm{rank}(\Pi) = r$ and $\Pi = \alpha\beta'$ where $\alpha$ is $m \times r$, $\mathrm{rank}(\alpha) = r$.

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as
$$\Delta y_t = \alpha\beta'y_{t-1} + D(L)\Delta y_{t-1} + e_t$$
$$\Delta y_t = \alpha z_{t-1} + D(L)\Delta y_{t-1} + e_t.$$
If $\beta$ is known, this can be estimated by OLS of $\Delta y_t$ on $z_{t-1}$ and the lags of $\Delta y_t$.

If $\beta$ is unknown, then estimation is done by "reduced rank regression", which is least-squares subject to the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that $e_t$ is iid N(0, $\Omega$).

One difficulty is that $\beta$ is not identified without normalization. When $r = 1$, we typically just normalize one element to equal unity. When $r > 1$, this does not work, and different authors have adopted different identification schemes.

In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for cointegration by testing the rank of $\Pi$. These tests are constructed as likelihood ratio (LR) tests. As they were discovered by Johansen (1988, 1991, 1995), they are typically called the "Johansen Max and Trace" tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.
Chapter 14

Limited Dependent Variables

A "limited dependent variable" $y$ is one which takes a "limited" set of values. The most common cases are

• Binary: $y \in \{0, 1\}$
• Multinomial: $y \in \{0, 1, 2, ..., k\}$
• Integer: $y \in \{0, 1, 2, ...\}$
• Censored: $y \in \mathbb{R}^+$

The traditional approach to the estimation of limited dependent variable (LDV) models is parametric maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function. A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional assumption. We will discuss only the first (parametric) approach, due to time constraints. They still constitute the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation, you would be advised to consider employing a semi-parametric estimation approach.

For the parametric approach, estimation is by MLE. A major practical issue is construction of the likelihood function.
14.1 Binary Choice

The dependent variable $y_i \in \{0, 1\}$. This represents a Yes/No outcome. Given some regressors $x_i$, the goal is to describe $P(y_i = 1 \mid x_i)$, as this is the full conditional distribution.

The linear probability model specifies that
$$P(y_i = 1 \mid x_i) = x_i'\beta.$$
As $P(y_i = 1 \mid x_i) = E(y_i \mid x_i)$, this yields the regression $y_i = x_i'\beta + e_i$, which can be estimated by OLS. However, the linear probability model does not impose the restriction that $0 \leq P(y_i \mid x_i) \leq 1$. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.

The standard alternative is to use a function of the form
$$P(y_i = 1 \mid x_i) = F\left(x_i'\beta\right)$$
where $F(\cdot)$ is a known CDF, typically assumed to be symmetric about zero, so that $F(u) = 1 - F(-u)$. The two standard choices for $F$ are

• Logistic: $F(u) = (1 + e^{-u})^{-1}$.
• Normal: $F(u) = \Phi(u)$.

If $F$ is logistic, we call this the logit model, and if $F$ is normal, we call this the probit model.

This model is identical to the latent variable model
$$y_i^* = x_i'\beta + e_i$$
$$e_i \sim F(\cdot)$$
$$y_i = \begin{cases} 1 & \text{if } y_i^* > 0 \\ 0 & \text{otherwise} \end{cases}.$$
For then
$$P(y_i = 1 \mid x_i) = P\left(y_i^* > 0 \mid x_i\right) = P\left(x_i'\beta + e_i > 0 \mid x_i\right) = P\left(e_i > -x_i'\beta \mid x_i\right) = 1 - F\left(-x_i'\beta\right) = F\left(x_i'\beta\right).$$

Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if $y$ is Bernoulli, such that $P(y = 1) = p$ and $P(y = 0) = 1 - p$, then we can write the density of $y$ as
$$f(y) = p^y(1 - p)^{1-y}, \qquad y = 0, 1.$$
In the binary choice model, $y_i$ is conditionally Bernoulli with $P(y_i = 1 \mid x_i) = p_i = F(x_i'\beta)$. Thus the conditional density is
$$f(y_i \mid x_i) = p_i^{y_i}(1 - p_i)^{1-y_i} = F\left(x_i'\beta\right)^{y_i}\left(1 - F\left(x_i'\beta\right)\right)^{1-y_i}.$$
Hence the log-likelihood function is
$$\log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{i=1}^n \log\left(F\left(x_i'\beta\right)^{y_i}\left(1 - F\left(x_i'\beta\right)\right)^{1-y_i}\right)$$
$$= \sum_{i=1}^n \left[y_i \log F\left(x_i'\beta\right) + (1 - y_i)\log\left(1 - F\left(x_i'\beta\right)\right)\right] = \sum_{y_i=1}\log F\left(x_i'\beta\right) + \sum_{y_i=0}\log\left(1 - F\left(x_i'\beta\right)\right).$$
The MLE $\hat{\beta}$ is the value of $\beta$ which maximizes $\log L(\beta)$. Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
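A minimal sketch of logit estimation by maximizing this log-likelihood numerically (not from the manuscript); scipy's BFGS optimizer and the simulated data are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_logit(beta, y, X):
    """Negative Bernoulli log-likelihood with F(u) = 1/(1 + exp(-u))."""
    u = X @ beta
    # log F(u) = -log(1 + exp(-u)),  log(1 - F(u)) = -log(1 + exp(u))
    return np.sum(y * np.log1p(np.exp(-u)) + (1 - y) * np.log1p(np.exp(u)))

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0 = np.array([0.5, -1.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta0))).astype(float)

res = minimize(neg_loglik_logit, x0=np.zeros(2), args=(y, X), method="BFGS")
print(res.x)    # approximately (0.5, -1.0)
```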
14.2 Count Data

If $y \in \{0, 1, 2, ...\}$, a typical approach is to employ Poisson regression. This model specifies that
$$P(y_i = k \mid x_i) = \frac{\exp(-\lambda_i)\lambda_i^k}{k!}, \qquad k = 0, 1, 2, ...$$
$$\lambda_i = \exp(x_i'\beta).$$
The conditional density is the Poisson with parameter $\lambda_i$. The functional form for $\lambda_i$ has been picked to ensure that $\lambda_i > 0$.

The log-likelihood function is
$$\log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{i=1}^n \left(-\exp(x_i'\beta) + y_i x_i'\beta - \log(y_i!)\right).$$
The MLE is the value $\hat{\beta}$ which maximizes $\log L(\beta)$.

Since
$$E(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta)$$
is the conditional mean, this motivates the label Poisson "regression."

Also observe that the model implies that
$$\mathrm{var}(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta),$$
so the model imposes the restriction that the conditional mean and variance of $y_i$ are the same. This may be considered restrictive. A generalization is the negative binomial.
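A minimal sketch of Poisson regression by direct maximization of this log-likelihood (not from the manuscript; the constant $\log(y_i!)$ term is dropped since it does not involve $\beta$, and the simulated data are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_poisson(beta, y, X):
    """Negative Poisson log-likelihood, omitting the log(y!) constant."""
    xb = X @ beta
    return np.sum(np.exp(xb) - y * xb)

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(X @ np.array([1.0, 0.5])))

res = minimize(neg_loglik_poisson, x0=np.zeros(2), args=(y, X), method="BFGS")
print(res.x)    # approximately (1.0, 0.5)
```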
14.3 Censored Data

The idea of "censoring" is that some data above or below a threshold are mis-reported at the threshold. Thus the model is that there is some latent process $y_i^*$ with unbounded support, but we observe only
$$y_i = \begin{cases} y_i^* & \text{if } y_i^* \geq 0 \\ 0 & \text{if } y_i^* < 0 \end{cases}. \qquad (14.1)$$
(This is written for the case of the threshold being zero; any known value can substitute.) The observed data $y_i$ therefore come from a mixed continuous/discrete distribution.

Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher) of data at the boundary of the sample support. The censoring process may be explicit in data collection, or it may be a by-product of economic constraints.

An example of a data collection censoring is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.

The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model
$$y_i^* = x_i'\beta + e_i$$
$$e_i \sim \text{iid } N(0, \sigma^2)$$
with the observed variable $y_i$ generated by the censoring equation (14.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.

The naive approach to estimate $\beta$ is to regress $y_i$ on $x_i$. This does not work because regression estimates $E(y_i \mid x_i)$, not $E(y_i^* \mid x_i) = x_i'\beta$, and the latter is of interest. Thus OLS will be biased for the parameter of interest $\beta$.

[Note: it is still possible to estimate $E(y_i \mid x_i)$ by LS techniques. The Tobit framework postulates that this is not inherently interesting, that the parameter $\beta$ is defined by an alternative statistical structure.]

Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is
$$P(y_i = 0 \mid x_i) = P\left(y_i^* < 0 \mid x_i\right) = P\left(x_i'\beta + e_i < 0 \mid x_i\right) = P\left(\frac{e_i}{\sigma} < -\frac{x_i'\beta}{\sigma} \mid x_i\right) = \Phi\left(-\frac{x_i'\beta}{\sigma}\right).$$
The conditional distribution function above zero is Gaussian:
$$P(y_i = y \mid x_i) = \int_0^y \sigma^{-1}\phi\left(\frac{z - x_i'\beta}{\sigma}\right)dz, \qquad y > 0.$$
Therefore, the density function can be written as
$$f(y \mid x_i) = \Phi\left(-\frac{x_i'\beta}{\sigma}\right)^{1(y=0)}\left[\sigma^{-1}\phi\left(\frac{y - x_i'\beta}{\sigma}\right)\right]^{1(y>0)},$$
where $1(\cdot)$ is the indicator function.

Hence the log-likelihood is a mixture of the probit and the normal:
$$\log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{y_i=0}\log\Phi\left(-\frac{x_i'\beta}{\sigma}\right) + \sum_{y_i>0}\log\left[\sigma^{-1}\phi\left(\frac{y_i - x_i'\beta}{\sigma}\right)\right].$$
The MLE is the value $\hat{\beta}$ which maximizes $\log L(\beta)$.
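A minimal sketch of the Tobit log-likelihood (not from the manuscript); writing it as a function of (beta, log sigma) so that an unconstrained optimizer such as scipy's BFGS can be applied is an illustrative parameterization.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_loglik_tobit(theta, y, X):
    """Negative Tobit log-likelihood; theta = (beta, log(sigma))."""
    beta, sigma = theta[:-1], np.exp(theta[-1])
    xb = X @ beta
    censored = (y == 0)
    ll = np.sum(stats.norm.logcdf(-xb[censored] / sigma))
    ll += np.sum(stats.norm.logpdf((y[~censored] - xb[~censored]) / sigma) - np.log(sigma))
    return -ll

# Usage sketch: res = minimize(neg_loglik_tobit, x0, args=(y, X), method="BFGS")
```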
14.4 Sample Selection

The problem of sample selection arises when the sample is a non-random selection of potential observations. This occurs when the observed data is systematically different from the population of interest. For example, if you ask for volunteers for an experiment, and they wish to extrapolate the effects of the experiment on a general population, you should worry that the people who volunteer may be systematically different from the general population. This has great relevance for the evaluation of anti-poverty and job-training programs, where the goal is to assess the effect of "training" on the general population, not just on the volunteers.

A simple sample selection model can be written as the latent model
$$y_i = x_i'\beta + e_{1i}$$
$$T_i = 1\left(z_i'\gamma + e_{0i} > 0\right)$$
where $1(\cdot)$ is the indicator function. The dependent variable $y_i$ is observed if (and only if) $T_i = 1$. Else it is unobserved.

For example, $y_i$ could be a wage, which can be observed only if a person is employed. The equation for $T_i$ is an equation specifying the probability that the person is employed.

The model is often completed by specifying that the errors are jointly normal
$$\begin{pmatrix} e_{0i} \\ e_{1i} \end{pmatrix} \sim N\left(0, \ \begin{pmatrix} 1 & \rho \\ \rho & \sigma^2 \end{pmatrix}\right).$$
It is presumed that we observe $\{x_i, z_i, T_i\}$ for all observations.

Under the normality assumption,
$$e_{1i} = \rho e_{0i} + v_i,$$
where $v_i$ is independent of $e_{0i} \sim N(0, 1)$. A useful fact about the standard normal distribution is that
$$E(e_{0i} \mid e_{0i} > -x) = \lambda(x) = \frac{\phi(x)}{\Phi(x)},$$
and the function $\lambda(x)$ is called the inverse Mills ratio.

The naive estimator of $\beta$ is OLS regression of $y_i$ on $x_i$ for those observations for which $y_i$ is available. The problem is that this is equivalent to conditioning on the event $\{T_i = 1\}$. However,
$$E(e_{1i} \mid T_i = 1, z_i) = E\left(e_{1i} \mid \{e_{0i} > -z_i'\gamma\}, z_i\right) = \rho E\left(e_{0i} \mid \{e_{0i} > -z_i'\gamma\}, z_i\right) + E\left(v_i \mid \{e_{0i} > -z_i'\gamma\}, z_i\right) = \rho\lambda\left(z_i'\gamma\right),$$
which is non-zero. Thus
$$e_{1i} = \rho\lambda\left(z_i'\gamma\right) + u_i,$$
where
$$E(u_i \mid T_i = 1, z_i) = 0.$$
Hence
$$y_i = x_i'\beta + \rho\lambda\left(z_i'\gamma\right) + u_i \qquad (14.2)$$
is a valid regression equation for the observations for which $T_i = 1$.

Heckman (1979) observed that we could consistently estimate $\beta$ and $\rho$ from this equation, if $\gamma$ were known. It is unknown, but also can be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows.

• Estimate $\hat{\gamma}$ from a Probit, using regressors $z_i$. The binary dependent variable is $T_i$.
• Estimate $(\hat{\beta}, \hat{\rho})$ from OLS of $y_i$ on $x_i$ and $\lambda(z_i'\hat{\gamma})$.
• The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.
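A minimal sketch of the two-step Heckit estimator (not from the manuscript); the probit first step uses a generic numerical optimizer, the entries of y for unselected observations are simply ignored, and the second-step standard errors are not corrected, matching the caveat in the last bullet.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def heckit_two_step(y, X, Z, T):
    """Two-step Heckman estimator: probit of T on Z, then OLS of y on (X, lambda)."""
    # Step 1: probit for the selection equation
    def neg_ll(g):
        zg = Z @ g
        return -np.sum(T * stats.norm.logcdf(zg) + (1 - T) * stats.norm.logcdf(-zg))
    gamma_hat = minimize(neg_ll, x0=np.zeros(Z.shape[1]), method="BFGS").x

    # Step 2: OLS of y on X and the inverse Mills ratio, selected sample only
    zg = Z[T == 1] @ gamma_hat
    mills = stats.norm.pdf(zg) / stats.norm.cdf(zg)
    W = np.column_stack([X[T == 1], mills])
    coef = np.linalg.solve(W.T @ W, W.T @ y[T == 1])
    return coef[:-1], coef[-1], gamma_hat      # (beta_hat, rho_hat, gamma_hat)
```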
The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some modern econometric research is exploring how to relax the normality assumption.

The estimator can also work quite poorly if $\lambda(z_i'\hat{\gamma})$ does not have much in-sample variation. This can happen if the Probit equation does not "explain" much about the selection choice. Another potential problem is that if $z_i = x_i$, then $\lambda(z_i'\hat{\gamma})$ can be highly collinear with $x_i$, so the second step OLS estimator will not be able to precisely estimate $\beta$. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in $z_i$ which is not in $x_i$. If this is valid, it will ensure that $\lambda(z_i'\hat{\gamma})$ is not collinear with $x_i$, and hence improve the second stage estimator's precision.
Chapter 15

Panel Data

A panel is a set of observations on individuals, collected over time. An observation is the pair $\{y_{it}, x_{it}\}$, where the $i$ subscript denotes the individual, and the $t$ subscript denotes time. A panel may be balanced:
$$\{y_{it}, x_{it}\} : \ t = 1, ..., T; \quad i = 1, ..., n,$$
or unbalanced:
$$\{y_{it}, x_{it}\} : \ \text{for } i = 1, ..., n, \quad t = t_i, ..., \bar{t}_i.$$
15.1 Individual-Effects Model

The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression
$$y_{it} = x_{it}'\beta + u_i + e_{it}.$$
The typical maintained assumptions are that the individuals $i$ are mutually independent, that $u_i$ and $e_{it}$ are independent, that $e_{it}$ is iid across individuals and time, and that $e_{it}$ is uncorrelated with $x_{it}$.

OLS of $y_{it}$ on $x_{it}$ is called pooled estimation. It is consistent if
$$E(x_{it} u_i) = 0. \qquad (15.1)$$
If this condition fails, then OLS is inconsistent. (15.1) fails if the individual-specific unobserved effect $u_i$ is correlated with the observed explanatory variables $x_{it}$. This is often believed to be plausible if $u_i$ is an omitted variable.

If (15.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears a poor estimation choice.

Condition (15.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.
15.2 Fixed Effects

This is the most common technique for estimation of non-dynamic linear panel regressions.

The motivation is to allow $u_i$ to be arbitrary, and arbitrarily correlated with $x_{it}$. The goal is to eliminate $u_i$ from the estimator, and thus achieve invariance.

There are several derivations of the estimator.

First, let
$$d_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{else} \end{cases},$$
and
$$d_i = \begin{pmatrix} d_{i1} \\ \vdots \\ d_{in} \end{pmatrix},$$
an $n \times 1$ dummy vector with a "1" in the $i$th place. Let
$$u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}.$$
Then note that
$$u_i = d_i'u,$$
and
$$y_{it} = x_{it}'\beta + d_i'u + e_{it}. \qquad (15.2)$$
Observe that
$$E(e_{it} \mid x_{it}, d_i) = 0,$$
so (15.2) is a valid regression, with $d_i$ as a regressor along with $x_{it}$.

OLS on (15.2) yields the estimator $(\hat{\beta}, \hat{u})$. Conventional inference applies.

Observe that

• This is generally consistent.
• If $x_{it}$ contains an intercept, it will be collinear with $d_i$, so the intercept is typically omitted from $x_{it}$.
• Any regressor in $x_{it}$ which is constant over time for all individuals (e.g., their gender) will be collinear with $d_i$, so will have to be omitted.
• There are $n + k$ regression parameters, which is quite large as typically $n$ is very large.

Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of $\beta$ proceeds by the FWL theorem. Stacking the observations together:
$$y = X\beta + Du + e,$$
then by the FWL theorem,
$$\hat{\beta} = \left(X'(I - P_D)X\right)^{-1}\left(X'(I - P_D)y\right) = \left(X^{*\prime}X^*\right)^{-1}\left(X^{*\prime}y^*\right),$$
where
$$y^* = y - D(D'D)^{-1}D'y$$
$$X^* = X - D(D'D)^{-1}D'X.$$
Since the regression of $y_{it}$ on $d_i$ is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean $\bar{y}_i$, and the residual is the demeaned value
$$y_{it}^* = y_{it} - \bar{y}_i.$$
The fixed effects estimator $\hat{\beta}$ is OLS of $y_{it}^*$ on $x_{it}^*$, the dependent variable and regressors in deviation-from-mean form.

Another derivation of the estimator is to take the equation
$$y_{it} = x_{it}'\beta + u_i + e_{it},$$
and then take individual-specific means by taking the average for the $i$th individual:
$$\frac{1}{T_i}\sum_{t=t_i}^{\bar{t}_i} y_{it} = \frac{1}{T_i}\sum_{t=t_i}^{\bar{t}_i} x_{it}'\beta + u_i + \frac{1}{T_i}\sum_{t=t_i}^{\bar{t}_i} e_{it}$$
or
$$\bar{y}_i = \bar{x}_i'\beta + u_i + \bar{e}_i.$$
Subtracting, we find
$$y_{it}^* = x_{it}^{*\prime}\beta + e_{it}^*,$$
which is free of the individual effect $u_i$.
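A minimal sketch of the within (demeaning) implementation of the fixed effects estimator (not from the manuscript), for a panel stored as NumPy arrays with an integer individual identifier:

```python
import numpy as np

def fixed_effects(y, X, ids):
    """Within estimator: demean y and X by individual, then run OLS."""
    y_star = y.astype(float).copy()
    X_star = X.astype(float).copy()
    for i in np.unique(ids):
        sel = ids == i
        y_star[sel] -= y[sel].mean()
        X_star[sel] -= X[sel].mean(axis=0)
    return np.linalg.solve(X_star.T @ X_star, X_star.T @ y_star)
```

Time-invariant regressors are eliminated by the demeaning, consistent with the remarks above.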
15.3 Dynamic Panel Regression

A dynamic panel regression has a lagged dependent variable
$$y_{it} = \alpha y_{it-1} + x_{it}'\beta + u_i + e_{it}. \qquad (15.3)$$
This is a model suitable for studying dynamic behavior of individual agents.

Unfortunately, the fixed effects estimator is inconsistent, at least if $T$ is held finite as $n \to \infty$. This is because the sample mean of $y_{it-1}$ is correlated with that of $e_{it}$.

The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (15.3) eliminates the individual-specific effect:
$$\Delta y_{it} = \alpha\Delta y_{it-1} + \Delta x_{it}'\beta + \Delta e_{it}. \qquad (15.4)$$
However, if $e_{it}$ is iid, then it will be correlated with $\Delta y_{it-1}$:
$$E\left(\Delta y_{it-1}\Delta e_{it}\right) = E\left((y_{it-1} - y_{it-2})(e_{it} - e_{it-1})\right) = -E(y_{it-1}e_{it-1}) = -\sigma_e^2.$$
So OLS on (15.4) will be inconsistent.

But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as $y_{t-2}$ is uncorrelated with $\Delta e_{it}$. Thus values of $y_{it-k}$, $k \geq 2$, are valid instruments.

Hence a valid estimator of $\alpha$ and $\beta$ is to estimate (15.4) by IV using $y_{t-2}$ as an instrument for $\Delta y_{t-1}$ (which is just identified). Alternatively, GMM using $y_{t-2}$ and $y_{t-3}$ as instruments (which is overidentified, but loses a time-series observation).

A more sophisticated GMM estimator recognizes that for time-periods later in the sample, there are more instruments available, so the instrument list should be different for each equation. This is conveniently organized by the GMM principle, as this enables the moments from the different time-periods to be stacked together to create a list of all the moment conditions. A simple application of GMM yields the parameter estimates and standard errors.
Chapter 16

Nonparametrics

16.1 Kernel Density Estimation

Let $X$ be a random variable with continuous distribution $F(x)$ and density $f(x) = \frac{d}{dx}F(x)$. The goal is to estimate $f(x)$ from a random sample $(X_1, ..., X_n)$. While $F(x)$ can be estimated by the EDF $\hat{F}(x) = n^{-1}\sum_{i=1}^n 1(X_i \leq x)$, we cannot define $\frac{d}{dx}\hat{F}(x)$ since $\hat{F}(x)$ is a step function. The standard nonparametric method to estimate $f(x)$ is based on smoothing using a kernel.

While we are typically interested in estimating the entire function $f(x)$, we can simply focus on the problem where $x$ is a specific fixed number, and then see how the method generalizes to estimating the entire function.

Definition 16.1.1 $K(u)$ is a second-order kernel function if it is a symmetric zero-mean density function.

Three common choices for kernels include the Normal
$$K(u) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right)$$
the Epanechnikov
$$K(u) = \begin{cases} \frac{3}{4}\left(1 - u^2\right), & |u| \leq 1 \\ 0, & |u| > 1 \end{cases}$$
and the Biweight or Quartic
$$K(u) = \begin{cases} \frac{15}{16}\left(1 - u^2\right)^2, & |u| \leq 1 \\ 0, & |u| > 1 \end{cases}$$
In practice, the choice between these three rarely makes a meaningful difference in the estimates.

The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth $h > 0$. Let
$$K_h(u) = \frac{1}{h}K\left(\frac{u}{h}\right)$$
be the kernel $K$ rescaled by the bandwidth $h$. The kernel density estimator of $f(x)$ is
$$\hat{f}(x) = \frac{1}{n}\sum_{i=1}^n K_h\left(X_i - x\right).$$
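A minimal sketch of this estimator with the Gaussian kernel (not from the manuscript). The default bandwidth used here is the normal reference rule $h = 1.06\hat{\sigma}n^{-1/5}$ discussed later in this chapter; treating it as a default is an illustrative choice.

```python
import numpy as np

def kde_gaussian(x_grid, X, h=None):
    """Kernel density estimate f_hat(x) = (1/n) sum_i K_h(X_i - x) with a Gaussian kernel."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    if h is None:
        h = 1.06 * X.std() * n ** (-1 / 5)     # normal reference (rule-of-thumb) bandwidth
    u = (X[None, :] - np.asarray(x_grid)[:, None]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h

rng = np.random.default_rng(0)
X = rng.normal(size=500)
grid = np.linspace(-3, 3, 7)
print(np.round(kde_gaussian(grid, X), 3))
```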
This estimator is the average of a set of weights. If a large number of the observations $X_i$ are near $x$, then the weights are relatively large and $\hat{f}(x)$ is larger. Conversely, if only a few $X_i$ are near $x$, then the weights are small and $\hat{f}(x)$ is small. The bandwidth $h$ controls the meaning of "near".

Interestingly, $\hat{f}(x)$ is a valid density. That is, $\hat{f}(x) \geq 0$ for all $x$, and
$$\int_{-\infty}^{\infty}\hat{f}(x)dx = \int_{-\infty}^{\infty}\frac{1}{n}\sum_{i=1}^n K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}K(u)du = 1$$
where the second-to-last equality makes the change-of-variables $u = (X_i - x)/h$.

We can also calculate the moments of the density $\hat{f}(x)$. The mean is
$$\int_{-\infty}^{\infty}x\hat{f}(x)dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}xK_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}(X_i + uh)K(u)du$$
$$= \frac{1}{n}\sum_{i=1}^n X_i\int_{-\infty}^{\infty}K(u)du + \frac{1}{n}\sum_{i=1}^n h\int_{-\infty}^{\infty}uK(u)du = \frac{1}{n}\sum_{i=1}^n X_i,$$
the sample mean of the $X_i$, where the second-to-last equality used the change-of-variables $u = (X_i - x)/h$ which has Jacobian $h$.

The second moment of the estimated density is
$$\int_{-\infty}^{\infty}x^2\hat{f}(x)dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}x^2 K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}(X_i + uh)^2 K(u)du$$
$$= \frac{1}{n}\sum_{i=1}^n X_i^2 + \frac{2}{n}\sum_{i=1}^n X_i h\int_{-\infty}^{\infty}uK(u)du + \frac{1}{n}\sum_{i=1}^n h^2\int_{-\infty}^{\infty}u^2 K(u)du = \frac{1}{n}\sum_{i=1}^n X_i^2 + h^2\sigma_K^2$$
where
$$\sigma_K^2 = \int_{-\infty}^{\infty}u^2 K(u)du$$
is the variance of the kernel. It follows that the variance of the density $\hat{f}(x)$ is
$$\int_{-\infty}^{\infty}x^2\hat{f}(x)dx - \left(\int_{-\infty}^{\infty}x\hat{f}(x)dx\right)^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 + h^2\sigma_K^2 - \left(\frac{1}{n}\sum_{i=1}^n X_i\right)^2 = \hat{\sigma}^2 + h^2\sigma_K^2.$$
Thus the variance of the estimated density is inflated by the factor $h^2\sigma_K^2$ relative to the sample moment.
16.2 Asymptotic MSE for Kernel Estimates

For fixed $x$ and bandwidth $h$ observe that
$$EK_h(X - x) = \int_{-\infty}^{\infty}K_h(z - x)f(z)dz = \int_{-\infty}^{\infty}K_h(uh)f(x + hu)h\,du = \int_{-\infty}^{\infty}K(u)f(x + hu)du.$$
The second equality uses the change-of-variables $u = (z - x)/h$. The last expression shows that the expected value is an average of $f(z)$ locally about $x$.

This integral (typically) is not analytically solvable, so we approximate it using a second order Taylor expansion of $f(x + hu)$ in the argument $hu$ about $hu = 0$, which is valid as $h \to 0$. Thus
$$f(x + hu) \simeq f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2$$
and therefore
$$EK_h(X - x) \simeq \int_{-\infty}^{\infty}K(u)\left(f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2\right)du$$
$$= f(x)\int_{-\infty}^{\infty}K(u)du + f'(x)h\int_{-\infty}^{\infty}K(u)u\,du + \frac{1}{2}f''(x)h^2\int_{-\infty}^{\infty}K(u)u^2du = f(x) + \frac{1}{2}f''(x)h^2\sigma_K^2.$$
The bias of $\hat{f}(x)$ is then
$$Bias(x) = E\hat{f}(x) - f(x) = \frac{1}{n}\sum_{i=1}^n EK_h(X_i - x) - f(x) = \frac{1}{2}f''(x)h^2\sigma_K^2.$$
We see that the bias of $\hat{f}(x)$ at $x$ depends on the second derivative $f''(x)$. The sharper the derivative, the greater the bias. Intuitively, the estimator $\hat{f}(x)$ smooths data local to $X_i = x$, so is estimating a smoothed version of $f(x)$. The bias results from this smoothing, and is larger the greater the curvature in $f(x)$.

We now examine the variance of $\hat{f}(x)$. Since it is an average of iid random variables, using first-order Taylor approximations and the fact that $n^{-1}$ is of smaller order than $(nh)^{-1}$,
$$\mathrm{var}(x) = \frac{1}{n}\mathrm{var}\left(K_h(X_i - x)\right) = \frac{1}{n}EK_h(X_i - x)^2 - \frac{1}{n}\left(EK_h(X_i - x)\right)^2$$
$$\simeq \frac{1}{nh^2}\int_{-\infty}^{\infty}K\left(\frac{z - x}{h}\right)^2 f(z)dz - \frac{1}{n}f(x)^2 = \frac{1}{nh}\int_{-\infty}^{\infty}K(u)^2 f(x + hu)du \simeq \frac{f(x)}{nh}\int_{-\infty}^{\infty}K(u)^2du = \frac{f(x)R(K)}{nh},$$
where $R(K) = \int_{-\infty}^{\infty}K(u)^2du$ is called the roughness of $K$.

Together, the asymptotic mean-squared error (AMSE) for fixed $x$ is the sum of the approximate squared bias and approximate variance
$$AMSE_h(x) = \frac{1}{4}f''(x)^2h^4\sigma_K^4 + \frac{f(x)R(K)}{nh}.$$
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
$$AMISE_h = \int AMSE_h(x)dx = \frac{h^4\sigma_K^4 R(f'')}{4} + \frac{R(K)}{nh}, \qquad (16.1)$$
where $R(f'') = \int\left(f''(x)\right)^2dx$ is the roughness of $f''$. Notice that the first term (the squared bias) is increasing in $h$ and the second term (the variance) is decreasing in $nh$. Thus for the AMISE to decline with $n$, we need $h \to 0$ but $nh \to \infty$. That is, $h$ must tend to zero, but at a slower rate than $n^{-1}$.

Equation (16.1) is an asymptotic approximation to the MSE. We define the asymptotically optimal bandwidth $h_0$ as the value which minimizes this approximate MSE. That is,
$$h_0 = \mathrm{argmin}_h\ AMISE_h.$$
It can be found by solving the first order condition
$$\frac{d}{dh}AMISE_h = h^3\sigma_K^4 R(f'') - \frac{R(K)}{nh^2} = 0$$
yielding
$$h_0 = \left(\frac{R(K)}{\sigma_K^4 R(f'')}\right)^{1/5} n^{-1/5}. \qquad (16.2)$$
This solution takes the form $h_0 = cn^{-1/5}$ where $c$ is a function of $K$ and $f$, but not of $n$. We thus say that the optimal bandwidth is of order $O(n^{-1/5})$. Note that this $h$ declines to zero, but at a very slow rate.

In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (16.2) depends on $R(K)$, $\sigma_K^2$, and $R(f'')$. The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.

$$\begin{array}{lcc} K & \sigma_K^2 = \int_{-\infty}^{\infty}u^2K(u)du & R(K) = \int_{-\infty}^{\infty}K(u)^2du \\ \text{Gaussian} & 1 & 1/(2\sqrt{\pi}) \\ \text{Epanechnikov} & 1/5 & 3/5 \\ \text{Biweight} & 1/7 & 5/7 \end{array}$$

An obvious difficulty is that $R(f'')$ is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (16.2) but replaces $R(f'')$ with $\hat{\sigma}^{-5}R(\phi'')$, where $\phi$ is the N(0, 1) density and $\hat{\sigma}^2$ is an estimate of $\sigma^2 = \mathrm{var}(X)$. This choice for $h$ gives an optimal rule when $f(x)$ is normal, and gives a nearly optimal rule when $f(x)$ is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb $h$ can be quite inefficient. We can calculate that $R(\phi'') = 3/(8\sqrt{\pi})$. Together with the above table, we find the reference rules for the three kernel functions introduced earlier.

Gaussian Kernel: $h_{rule} = 1.06\,\hat{\sigma}n^{-1/5}$
Epanechnikov Kernel: $h_{rule} = 2.34\,\hat{\sigma}n^{-1/5}$
Biweight (Quartic) Kernel: $h_{rule} = 2.78\,\hat{\sigma}n^{-1/5}$

Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate $\hat{f}(x)$. There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate $R(f'')$ in a first step, and then plug this estimate into the formula (16.2). This is more treacherous than may first appear, as the optimal $h$ for estimation of the roughness $R(f'')$ is quite different than the optimal $h$ for estimation of $f(x)$. However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of $h$ is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data has some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths leading to dramatically undersmoothed estimates. Fortunately there are remedies, known as smoothed cross-validation, which is a close cousin of the bootstrap.
Appendix A
Matrix Algebra
A.1 Notation
A scalar a is a single number.
A vector u is a / 1 list of numbers, typically arranged in a column. We write this as
u =

¸
¸
¸
¸
a
1
a
2
.
.
.
a
I
¸

Equivalently, a vector u is an element of Euclidean / space, written as u ÷ R
I
. If / = 1 then u is
a scalar.
A matrix A is a / r rectangular array of numbers, written as
A =

a
11
a
12
a
1v
a
21
a
22
a
2v
.
.
.
.
.
.
.
.
.
a
I1
a
I2
a
Iv
¸
¸
¸
¸
¸
By convention a
j;
refers to the element in the i
t
t/ row and ,
t
t/ column of A. If r = 1 then A is a
column vector. If / = 1 then A is a row vector. If r = / = 1. then A is a scalar.
A standard convention (which we will follow in this text whenever possible) is to denote scalars
by lower-case italics (a). vectors by lower-case bold italics (u). and matrices by upper-case bold
italics (A). Sometimes a matrix A is denoted by the symbol (a
j;
).
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
A =

u
1
u
2
u
v

=

o
1
o
2
.
.
.
o
I
¸
¸
¸
¸
¸
where
u
j
=

a
1j
a
2j
.
.
.
a
Ij
¸
¸
¸
¸
¸
are column vectors and
o
;
=

a
;1
a
;2
a
;v

192
are row vectors.
The transpose of a matrix, denoted A
t
. is obtained by ‡ipping the matrix on its diagonal.
Thus
A
t
=

a
11
a
21
a
I1
a
12
a
22
a
I2
.
.
.
.
.
.
.
.
.
a
1v
a
2v
a
Iv
¸
¸
¸
¸
¸
Alternatively, letting H = A
t
. then /
j;
= a
;j
. Note that if A is / r, then A
t
is r /. If u is a
/ 1 vector, then u
t
is a 1 / row vector. An alternative notation for the transpose of A is A
¯
.
A matrix is square if / = r. A square matrix is symmetric if A = A
t
. which requires a
j;
= a
;j
.
A square matrix is diagonal if the o¤-diagonal elements are all zero, so that a
j;
= 0 if i = ,. A
square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.
An important diagonal matrix is the identity matrix, which has ones on the diagonal. The
/ / identity matrix is denoted as
1
I
=

1 0 0
0 1 0
.
.
.
.
.
.
.
.
.
0 0 1
¸
¸
¸
¸
¸
.
A partitioned matrix takes the form
A =

A
11
A
12
A
1v
A
21
A
22
A
2v
.
.
.
.
.
.
.
.
.
A
I1
A
I2
A
Iv
¸
¸
¸
¸
¸
where the ¹
j;
denote matrices, vectors and/or scalars.
A.2 Matrix Addition
If the matrices A = (a
j;
) and H = (/
j;
) are of the same order, we de…ne the sum
A+H = (a
j;
+/
j;
) .
Matrix addition follows the communtative and associative laws:
A+H = H +A
A+ (H +C) = (A+H) +C.
A.3 Matrix Multiplication
If A is / r and c is real, we de…ne their product as
Ac = cA = (a
j;
c) .
If u and I are both / 1. then their inner product is
u
t
I = a
1
/
1
+a
2
/
2
+ +a
I
/
I
=
I
¸
;=1
a
;
/
;
.
Note that u
t
I = I
t
u. We say that two vectors u and I are orthogonal if u
t
I = 0.
193
If A is / r and H is r :. so that the number of columns of A equals the number of rows
of H. we say that A and H are conformable. In this event the matrix product AH is de…ned.
Writing A as a set of row vectors and H as a set of column vectors (each of length r). then the
matrix product is de…ned as
AH =

u
t
1
u
t
2
.
.
.
u
t
I
¸
¸
¸
¸
¸

I
1
I
2
I
c

=

u
t
1
I
1
u
t
1
I
2
u
t
1
I
c
u
t
2
I
1
u
t
2
I
2
u
t
2
I
c
.
.
.
.
.
.
.
.
.
u
t
I
I
1
u
t
I
I
2
u
t
I
I
c
¸
¸
¸
¸
¸
.
Matrix multiplication is not communicative: in general AH 6= HA. However, it is associative
and distributive:
A(HC) = (AH) C
A(H +C) = AH +AC
An alternative way to write the matrix product is to use matrix partitions. For example,
AH =
¸
A
11
A
12
A
21
A
22
¸
H
11
H
12
H
21
H
22

=
¸
A
11
H
11
+A
12
H
21
A
11
H
12
+A
12
H
22
A
21
H
11
+A
22
H
21
A
21
H
12
+A
22
H
22

.
As another example,
AH =

A
1
A
2
A
v

H
1
H
2
.
.
.
H
v
¸
¸
¸
¸
¸
= A
1
H
1
+A
2
H
2
+ +A
v
H
v
=
v
¸
;=1
A
;
H
;
An important property of the identity matrix is that if A is /r. then A1
v
= A and 1
I
A = A.
The / r matrix A, r _ /, is called orthogonal if A
t
A = 1
v
.
A.4 Trace
The trace of a / / square matrix A is the sum of its diagonal elements
tr (A) =
I
¸
j=1
a
jj
.
Some straightforward properties for square matrices A and H and real c are
tr (cA) = c tr (A)
tr

A
t

= tr (A)
tr (A+H) = tr (A) + tr (H)
tr (1
I
) = /.
194
Also, for / r A and r / H we have
tr (AH) = tr (HA) .
Indeed,
tr (AH) = tr

u
t
1
I
1
u
t
1
I
2
u
t
1
I
I
u
t
2
I
1
u
t
2
I
2
u
t
2
I
I
.
.
.
.
.
.
.
.
.
u
t
I
I
1
u
t
I
I
2
u
t
I
I
I
¸
¸
¸
¸
¸
=
I
¸
j=1
u
t
j
I
j
=
I
¸
j=1
I
t
j
u
j
= tr (HA) .
A.5 Rank and Inverse
The rank of the / r matrix (r _ /)
A =

u
1
u
2
u
v

is the number of linearly independent columns u
;
. and is written as rank (A) . We say that A has
full rank if rank (A) = r.
A square / / matrix A is said to be nonsingular if it is has full rank, e.g. rank (A) = /.
This means that there is no / 1 c = 0 such that Ac = 0.
If a square / / matrix A is nonsingular then there exists a unique matrix / / matrix A
÷1
called the inverse of A which satis…es
AA
÷1
= A
÷1
A = 1
I
.
For non-singular A and C. some important properties include
AA
÷1
= A
÷1
A = 1
I

A
÷1

t
=

A
t

÷1
(AC)
÷1
= C
÷1
A
÷1
(A+C)
÷1
= A
÷1

A
÷1
+C
÷1

÷1
C
÷1
A
÷1
÷(A+C)
÷1
= A
÷1

A
÷1
+C
÷1

A
÷1
Also, if A is an orthogonal matrix, then A
÷1
= A.
Another useful result for non-singular A is known as the Woodbury matrix identity
(A+HCL)
÷1
= A
÷1
÷A
÷1
HC

C +CLA
÷1
HC

÷1
CLA
÷1
. (A.1)
In particular, for C = ÷1. H = I and L = I
t
for vector I we …nd what is known as the Sherman–
Morrison formula

A÷II
t

÷1
= A
÷1
+

1 ÷I
t
A
÷1
I

÷1
A
÷1
II
t
A
÷1
. (A.2)
195
The following fact about inverting partitioned matrices is quite useful. If A ÷ HL
÷1
C and
L÷CA
÷1
H are non-singular, then
¸
A H
C L

÷1
=
¸

A÷HL
÷1
C

÷1
÷

A÷HL
÷1
C

÷1
HL
÷1
÷

L÷CA
÷1
H

÷1
CA
÷1

L÷CA
÷1
H

÷1
¸
. (A.3)
Even if a matrix A does not possess an inverse, we can still de…ne the Moore-Penrose gen-
eralized inverse A
÷
as the matrix which satis…es
AA
÷
A = A
A
÷
AA
÷
= A
÷
AA
÷
is symmetric
A
÷
A is symmetric
For any matrix A. the Moore-Penrose generalized inverse A
÷
exists and is unique.
For example, if
A =
¸
A
11
0
0 0

then
A
÷
=
¸
A
÷
11
0
0 0

.
A.6 Determinant
The determinant is a measure of the volume of a square matrix.
While the determinant is widely used, its precise de…nition is rarely needed. However, we present
the de…nition here for completeness. Let A = (a
j;
) be a general / / matrix . Let ¬ = (,
1
. .... ,
I
)
denote a permutation of (1. .... /) . There are /! such permutations. There is a unique count of the
number of inversions of the indices of such permutations (relative to the natural order (1. .... /) .
and let -
¬
= +1 if this count is even and -
¬
= ÷1 if the count is odd. Then the determinant of A
is de…ned as
det A =
¸
¬
-
¬
a
1;
1
a
2;
2
a
I;
k
.
For example, if A is 2 2. then the two permutations of (1. 2) are (1. 2) and (2. 1) . for which
-
(1.2)
= 1 and -
(2.1)
= ÷1. Thus
det A = -
(1.2)
a
11
a
22
+-
(2.1)
a
21
a
12
= a
11
a
22
÷a
12
a
21
.
Some properties include
« det (A) = det (A
t
)
« det (cA) = c
I
det A
« det (AH) = (det A) (det H)
« det

A
÷1

= (det A)
÷1
« det
¸
A H
C L

= (det L) det

A÷HL
÷1
C

if det L = 0
« det A = 0 if and only if A is nonsingular.
« If A is triangular (upper or lower), then det A =
¸
I
j=1
a
jj
« If A is orthogonal, then det A = ±1
196
A.7 Eigenvalues
The characteristic equation of a square matrix A is
det (A÷`1
I
) = 0.
The left side is a polynomial of degree / in ` so it has exactly / roots, which are not necessarily
distinct and may be real or complex. They are called the latent roots or characteristic roots or
eigenvalues of A. If `
j
is an eigenvalue of A. then A÷`
j
1
I
is singular so there exists a non-zero
vector l
j
such that
(A÷`
j
1
I
) l
j
= 0.
The vector l
j
is called a latent vector or characteristic vector or eigenvector of A corre-
sponding to `
j
.
We now state some useful properties. Let `
j
and l
j
, i = 1. .... / denote the / eigenvalues and
eigenvectors of a square matrix A. Let A be a diagonal matrix with the characteristic roots in the
diagonal, and let H = [l
1
l
I
].
« det(A) =
¸
I
j=1
`
j
« tr(A) =
¸
I
j=1
`
j
« A is non-singular if and only if all its characteristic roots are non-zero.
« If A has distinct characteristic roots, there exists a nonsingular matrix 1 such that A =
1
÷1
A1 and 1A1
÷1
= A.
« If A is symmetric, then A = HAH
t
and H
t
AH = A. and the characteristic roots are all
real. A = HAH
t
is called the spectral decomposition of a matrix.
« The characteristic roots of A
÷1
are `
÷1
1
. `
÷1
2
. ..., `
÷1
I
.
« The matrix H has the orthonormal properties H
t
H = 1 and HH
t
= 1.
« H
÷1
= H
t
and (H
t
)
÷1
= H
A.8 Positive De…niteness
We say that a / / symmetric square matrix A is positive semi-de…nite if for all c = 0.
c
t
Ac _ 0. This is written as A _ 0. We say that A is positive de…nite if for all c = 0. c
t
Ac 0.
This is written as A 0.
Some properties include:
« If A = C
t
C for some matrix C, then A is positive semi-de…nite. (For any c = 0. c
t
Ac =
o
t
o _ 0 where o = Cc.) If C has full rank, then A is positive de…nite.
« If A is positive de…nite, then A is non-singular and A
÷1
exists. Furthermore, A
÷1
0.
« A 0 if and only if it is symmetric and all its characteristic roots are positive.
« By the spectral decomposition, A = HAH
t
where H
t
H = 1 and A is diagonal with non-
negative diagonal elements. All diagonal elements of A are strictly positive if (and only if)
A 0.
« If ¹ 0 then ¹
÷1
= HA
÷1
H
t
.
197
« If ¹ _ 0 and rank (A) = r < / then ¹
÷
= HA
÷
H
t
where ¹
÷
is the Moore-Penrose
generalized inverse, and A
÷
= diag

`
÷1
1
. `
÷1
2
. .... `
÷1
I
. 0. .... 0

« If A 0 we can …nd a matrix H such that A = HH
t
. We call H a matrix square root
of A. The matrix H need not be unique. One way to construct H is to use the spectral
decomposition A = HAH
t
where A is diagonal, and then set H = HA
1/2
.
A square matrix A is idempotent if AA = A. If A is idempotent and symmetric then all its
characteristic roots equal either zero or one and is thus positive semi-de…nite. To see this, note
that we can write A = HAH
t
where H is orthogonal and A contains the r (real) characteristic
roots. Then
A = AA = HAH
t
HAH
t
= HA
2
H
t
.
By the uniqueness of the characteristic roots, we deduce that A
2
= A and `
2
j
= `
j
for i = 1. .... r.
Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent A
takes the form
A = H
¸
1
I÷v
0
0 0

H
t
(A.4)
with H
t
H = 1
I
. Additionally, tr(A) = rank(A).
A.9 Matrix Calculus
Let i = (r
1
. .... r
I
) be / 1 and o(i) = o(r
1
. .... r
I
) : R
I
÷R. The vector derivative is
0
0i
o (i) =

¸
¸
0
0a
1
o (i)
.
.
.
0
0a
k
o (i)
¸

and
0
0i
t
o (i) =

0
0a
1
o (i)
0
0a
k
o (i)

.
Some properties are now summarized.
«
0

(u
t
i) =
0

(i
t
u) = u
«
0

0
(Ai) = A
«
0

(i
t
Ai) = (A+A
t
) i
«
0
2
0æ0æ
0
(i
t
Ai) = A+A
t
A.10 Kronecker Products and the Vec Operator
Let A = [u
1
u
2
u
a
] be ::. The vec of A. denoted by vec (A) . is the :: 1 vector
vec (A) =

¸
¸
¸
¸
u
1
u
2
.
.
.
u
a
¸

.
198
Let A = (a
j;
) be an : : matrix and let H be any matrix. The Kronecker product of A
and H. denoted A.H. is the matrix
A.H =

a
11
H a
12
H a
1a
H
a
21
H a
22
H a
2a
H
.
.
.
.
.
.
.
.
.
a
n1
H a
n2
H a
na
H
¸
¸
¸
¸
¸
.
Some important properties are now summarized. These results hold for matrices for which all
matrix multiplications are conformable.
« (A+H) .C = A.C +H .C
« (A.H) (C .L) = AC .HL
« A.(H .C) = (A.H) .C
« (A.H)
t
= A
t
.H
t
« tr (A.H) = tr (A) tr (H)
« If A is :: and H is : :. det(A.H) = (det (A))
a
(det (H))
n
« (A.H)
÷1
= A
÷1
.H
÷1
« If A 0 and H 0 then A.H 0
« vec (AHC) = (C
t
.A) vec (H)
« tr (AHCL) = vec (L
t
)
t
(C
t
.A) vec (H)
A.11 Vector and Matrix Norms
The Euclidean norm of an :1 vector u is
|u| =

u
t
u

1/2
=

n
¸
j=1
a
2
j

1/2
.
The Euclidean norm of an :: matrix A is
|A| = |vec (A)|
= tr

A
t
A

1/2
=

¸
n
¸
j=1
a
¸
;=1
a
2
j;
¸

1/2
.
A useful calculation is for any :1 vectors u and I,

uI
t

= |u| |I|
and in particular

uu
t

= |u|
2
(A.5)
199
Some useful inequalities are now given:
Schwarz Inequality: For any :1 vectors u and I,

u
t
I

_ |u| |I| . (A.6)
Schwarz Matrix Inequality: For any :: matrices A and H,

A
t
H

_ |A| |H| . (A.7)
Triangle Inequality: For any :: matrices A and H,
|A+H| _ |A| +|H| . (A.8)
Proof of Schwarz Inequality: First, suppose that |I| = 0. Then I = 0 and both [u
t
I[ = 0 and
|u| |I| = 0 so the inequality is true. Second, suppose that |I| 0 and de…ne c = u÷I

I
t
I

÷1
I
t
u.
Since c is a vector, c
t
c _ 0. Thus
0 _ c
t
c = u
t
u ÷

u
t
I

2
´

I
t
I

.
Rearranging, this implies that

u
t
I

2
_

u
t
u

I
t
I

.
Taking the square root of each side yields the result.
Proof of Schwarz Matrix Inequality: Partition A = [u
1
. .... u
a
] and H = [I
1
. .... I
a
]. Then
by partitioned matrix multiplication, the de…nition of the matrix Euclidean norm and the Schwarz
inequality

A
t
H

=

u
t
1
I
1
u
t
1
I
2

u
t
2
I
1
u
t
2
I
2

.
.
.
.
.
.
.
.
.

_

|u
1
| |I
1
| |u
1
| |I
2
|
|u
2
| |I
1
| |u
2
| |I
2
|
.
.
.
.
.
.
.
.
.

=

¸
a
¸
j=1
a
¸
;=1
|u
j
|
2
|I
;
|
2
¸

1/2
=

a
¸
j=1
|u
j
|
2

1/2

a
¸
j=1
|I
j
|
2

1/2
=

¸
a
¸
j=1
n
¸
;=1
u
2
;j
¸

1/2

¸
a
¸
j=1
n
¸
;=1
|I
;j
|
2
¸

1/2
= |A| |H|
Proof of Triangle Inequality: Let u = vec (A) and I = vec (H) . Then by the de…nition of the
matrix norm and the Schwarz Inequality
|A+H|
2
= |u +I|
2
= u
t
u + 2u
t
I +I
t
I
_ u
t
u + 2

u
t
I

+I
t
I
_ |u|
2
+ 2 |u| |I| +|I|
2
= (|u| +|I|)
2
= (|A| +|H|)
2
200
Appendix B
Probability
B.1 Foundations
The set o of all possible outcomes of an experiment is called the sample space for the exper-
iment. Take the simple example of tossing a coin. There are two outcomes, heads and tails, so
we can write o = ¦H. T¦. If two coins are tossed in sequence, we can write the four outcomes as
o = ¦HH. HT. TH. TT¦.
An event ¹ is any collection of possible outcomes of an experiment. An event is a subset of o.
including o itself and the null set O. Continuing the two coin example, one event is ¹ = ¦HH. HT¦.
the event that the …rst coin is heads. We say that ¹ and 1 are disjoint or mutually exclusive
if ¹ ¨ 1 = O. For example, the sets ¦HH. HT¦ and ¦TH¦ are disjoint. Furthermore, if the sets
¹
1
. ¹
2
. ... are pairwise disjoint and '
o
j=1
¹
j
= o. then the collection ¹
1
. ¹
2
. ... is called a partition
of o.
The following are elementary set operations:
Union: ¹' 1 = ¦r : r ÷ ¹ or r ÷ 1¦.
Intersection: ¹¨ 1 = ¦r : r ÷ ¹ and r ÷ 1¦.
Complement: ¹
c
= ¦r : r ´ ÷ ¹¦.
The following are useful properties of set operations.
Communtatitivity: ¹' 1 = 1 ' ¹; ¹¨ 1 = 1 ¨ ¹.
Associativity: ¹' (1 ' C) = (¹' 1) ' C; ¹¨ (1 ¨ C) = (¹¨ 1) ¨ C.
Distributive Laws: ¹¨(1 ' C) = (¹¨ 1) '(¹¨ C) ; ¹'(1 ¨ C) = (¹' 1) ¨(¹' C) .
DeMorgan’s Laws: (¹' 1)
c
= ¹
c
¨ 1
c
; (¹¨ 1)
c
= ¹
c
' 1
c
.
A probability function assigns probabilities (numbers between 0 and 1) to events ¹ in o.
This is straightforward when o is countable; when o is uncountable we must be somewhat more
careful. A set E is called a sigma algebra (or Borel …eld) if O ÷ E , ¹ ÷ E implies ¹
c
÷ E, and
¹
1
. ¹
2
. ... ÷ E implies '
o
j=1
¹
j
÷ E. A simple example is ¦O. o¦ which is known as the trivial sigma
algebra. For any sample space o. let E be the smallest sigma algebra which contains all of the open
sets in o. When o is countable, E is simply the collection of all subsets of o. including O and o.
When o is the real line, then E is the collection of all open and closed intervals. We call E the
sigma algebra associated with o. We only de…ne probabilities for events contained in E.
We now can give the axiomatic de…nition of probability. Given o and E, a probability function
P satis…es P(o) = 1. P(¹) _ 0 for all ¹ ÷ E, and if ¹
1
. ¹
2
. ... ÷ E are pairwise disjoint, then
P('
o
j=1
¹
j
) =
¸
o
j=1
P(¹
j
).
Some important properties of the probability function include the following
« P(O) = 0
« P(¹) _ 1
« P(¹
c
) = 1 ÷P(¹)
201
« P(1 ¨ ¹
c
) = P(1) ÷P(¹¨ 1)
« P(¹' 1) = P(¹) +P(1) ÷P(¹¨ 1)
« If ¹ · 1 then P(¹) _ P(1)
« Bonferroni’s Inequality: P(¹¨ 1) _ P(¹) +P(1) ÷1
« Boole’s Inequality: P(¹' 1) _ P(¹) +P(1)
For some elementary probability models, it is useful to have simple rules to count the number
of objects in a set. These counting rules are facilitated by using the binomial coe¢cients which are
de…ned for nonnegative integers : and r. : _ r. as

:
r

=
:!
r! (: ÷r)!
.
When counting the number of objects in a set, there are two important distinctions. Counting
may be with replacement or without replacement. Counting may be ordered or unordered.
For example, consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is
without replacement if you are not allowed to select the same number twice, and is with replacement
if this is allowed. Counting is ordered or not depending on whether the sequential order of the
numbers is relevant to winning the lottery. Depending on these two distinctions, we have four
expressions for the number of objects (possible arrangements) of size r from : objects.
Without With
Replacement Replacement
Ordered
a!
(a÷v)!
:
v
Unordered

a
v

a+v÷1
v

In the lottery example, if counting is unordered and without replacement, the number of po-
tential combinations is

49
6

= 13. 983. 816.
If P(1) 0 the conditional probability of the event ¹ given the event 1 is
P(¹ [ 1) =
P(¹¨ 1)
P(1)
.
For any 1. the conditional probability function is a valid probability function where o has been
replaced by 1. Rearranging the de…nition, we can write
P(¹¨ 1) = P(¹ [ 1) P(1)
which is often quite useful. We can say that the occurrence of 1 has no information about the
likelihood of event ¹ when P(¹ [ 1) = P(¹). in which case we …nd
P(¹¨ 1) = P(¹) P(1) (B.1)
We say that the events ¹ and 1 are statistically independent when (B.1) holds. Furthermore,
we say that the collection of events ¹
1
. .... ¹
I
are mutually independent when for any subset
¦¹
j
: i ÷ 1¦.
P

¸
j÷1
¹
j

=
¸
j÷1
P(¹
j
) .
Theorem 1 (Bayes’ Rule). For any set 1 and any partition ¹
1
. ¹
2
. ... of the sample space, then
for each i = 1. 2. ...
P(¹
j
[ 1) =
P(1 [ ¹
j
) P(¹
j
)
¸
o
;=1
P(1 [ ¹
;
) P(¹
;
)
202
B.2 Random Variables
A random variable A is a function from a sample space o into the real line. This induces a
new sample space – the real line – and a new probability function on the real line. Typically, we
denote random variables by uppercase letters such as A. and use lower case letters such as r for
potential values and realized values. (This is in contrast to the notation adopted for most of the
textbook.) For a random variable A we de…ne its cumulative distribution function (CDF) as
1(r) = P(A _ r) . (B.2)
Sometimes we write this as 1
A
(r) to denote that it is the CDF of A. A function 1(r) is a CDF if
and only if the following three properties hold:
1. lim
a÷÷o
1(r) = 0 and lim
a÷o
1(r) = 1
2. 1(r) is nondecreasing in r
3. 1(r) is right-continuous
We say that the random variable A is discrete if 1(r) is a step function. In the latter case,
the range of A consists of a countable set of real numbers t
1
. .... t
v
. The probability function for
A takes the form
P(A = t
;
) = ¬
;
. , = 1. .... r (B.3)
where 0 _ ¬
;
_ 1 and
¸
v
;=1
¬
;
= 1.
We say that the random variable A is continuous if 1(r) is continuous in r. In this case P(A =
t) = 0 for all t ÷ 1 so the representation (B.3) is unavailable. Instead, we represent the relative
probabilities by the probability density function (PDF)
1(r) =
d
dr
1(r)
so that
1(r) =

a
÷o
1(n)dn
and
P(a _ A _ /) =

b
o
1(n)dn.
These expressions only make sense if 1(r) is di¤erentiable. While there are examples of continuous
random variables which do not possess a PDF, these cases are unusual and are typically ignored.
A function 1(r) is a PDF if and only if 1(r) _ 0 for all r ÷ 1 and

o
÷o
1(r)dr.
B.3 Expectation
For any measurable real function o. we de…ne the mean or expectation Eo(A) as follows. If
A is discrete,
Eo(A) =
v
¸
;=1
o(t
;

;
.
and if A is continuous
Eo(A) =

o
÷o
o(r)1(r)dr.
The latter is well de…ned and …nite if

o
÷o
[o(r)[ 1(r)dr < ·. (B.4)
203
If (B.4) does not hold, evaluate
1
1
=

j(a)0
o(r)1(r)dr
1
2
= ÷

j(a)<0
o(r)1(r)dr
If 1
1
= · and 1
2
< · then we de…ne Eo(A) = ·. If 1
1
< · and 1
2
= · then we de…ne
Eo(A) = ÷·. If both 1
1
= · and 1
2
= · then Eo(A) is unde…ned.
Since E(a +/A) = a +/EA. we say that expectation is a linear operator.
For : 0. we de…ne the :
t
t/ moment of A as EA
n
and the :
t
t/ central moment as
E(A ÷EA)
n
.
Two special moments are the mean j = EA and variance o
2
= E(A ÷j)
2
= EA
2
÷j
2
. We
call o =

o
2
the standard deviation of A. We can also write o
2
= var(A). For example, this
allows the convenient expression var(a +/A) = /
2
var(A).
The moment generating function (MGF) of A is
'(`) = Eexp(`A) .
The MGF does not necessarily exist. However, when it does and E[A[
n
< · then
d
n
d`
n
'(`)

A=0
= E(A
n
)
which is why it is called the moment generating function.
More generally, the characteristic function (CF) of A is
C(`) = Eexp(i`A)
where i =

÷1 is the imaginary unit. The CF always exists, and when E[A[
n
< ·
d
n
d`
n
C(`)

A=0
= i
n
E(A
n
) .
The 1
j
norm, j _ 1. of the random variable A is
|A|
j
= (E[A[
j
)
1/j
.
B.4 Gamma Function
The gamma function is de…ned for c 0 as
(c) =

o
0
r
c÷1
exp(÷r) .
It satis…es the property
(1 +c) = (c)c
so for positive integers :.
(:) = (: ÷1)!
Special values include
(1) = 1
and

1
2

= ¬
1/2
.
Sterling’s formula is an expansion for the its logarithm
log (c) =
1
2
log(2¬) +

c ÷
1
2

log c ÷. +
1
12c
÷
1
360c
3
+
1
1260c
5
+
204
B.5 Common Distributions
For reference, we now list some important discrete distribution function.
Bernoulli
P(A = r) = j
a
(1 ÷j)
1÷a
. r = 0. 1; 0 _ j _ 1
EA = j
var(A) = j(1 ÷j)
Binomial
P(A = r) =

:
r

j
a
(1 ÷j)
a÷a
. r = 0. 1. .... :; 0 _ j _ 1
EA = :j
var(A) = :j(1 ÷j)
Geometric
P(A = r) = j(1 ÷j)
a÷1
. r = 1. 2. ...; 0 _ j _ 1
EA =
1
j
var(A) =
1 ÷j
j
2
Multinomial
P(A
1
= r
1
. A
2
= r
2
. .... A
n
= r
n
) =
:!
r
1
!r
2
! r
n
!
j
a
1
1
j
a
2
2
j
am
n
.
r
1
+ +r
n
= :;
j
1
+ +j
n
= 1
EA
j
= j
j
var(A
j
) = :j
j
(1 ÷j
j
)
cov (A
j
. A
;
) = ÷:j
j
j
;
Negative Binomial
P(A = r) =
(r +r)
r!(r)
j
v
(1 ÷j)
a÷1
. r = 0. 1. 2. ...; 0 _ j _ 1
EA =
r (1 ÷j)
j
var(A) =
r (1 ÷j)
j
2
Poisson
P(A = r) =
exp(÷`) `
a
r!
. r = 0. 1. 2. .... ` 0
EA = `
var(A) = `
We now list some important continuous distributions.
205
Beta
1(r) =
(c +)
(c)()
r
c÷1
(1 ÷r)
o÷1
. 0 _ r _ 1; c 0. 0
j =
c
c +
var(A) =
c
(c + + 1) (c +)
2
Cauchy
1(r) =
1
¬ (1 +r
2
)
. ÷·< r < ·
EA = ·
var(A) = ·
Exponential
1(r) =
1
0
exp

r
0

. 0 _ r < ·; 0 0
EA = 0
var(A) = 0
2
Logistic
1(r) =
exp(÷r)
(1 + exp(÷r))
2
. ÷·< r < ·;
EA = 0
var(A) =
¬
2
3
Lognormal
1(r) =
1

2¬or
exp

÷
(log r ÷j)
2
2o
2

. 0 _ r < ·; o 0
EA = exp

j +o
2
´2

var(A) = exp

2j + 2o
2

÷exp

2j +o
2

Pareto
1(r) =
c
o
r
o+1
. c _ r < ·. c 0. 0
EA =
c
÷1
. 1
var(A) =
c
2
( ÷1)
2
( ÷2)
. 2
Uniform
1(r) =
1
/ ÷a
. a _ r _ /
EA =
a +/
2
var(A) =
(/ ÷a)
2
12
206
Weibull
1(r) =

r
~÷1
exp

÷
r
~

. 0 _ r < ·; 0. 0
EA =
1/~

1 +
1

var(A) =
2/~

1 +
2

÷
2

1 +
1

Gamma
1(r) =
1
(c)0
c
r
c÷1
exp

÷
r
0

. 0 _ r < ·; c 0. 0 0
EA = c0
var(A) = c0
2
Chi-Square
1(r) =
1
(r´2)2
v/2
r
v/2÷1
exp

÷
r
2

. 0 _ r < ·; r 0
EA = r
var(A) = 2r
Normal
1(r) =
1

2¬o
exp

÷
(r ÷j)
2
2o
2

. ÷·< r < ·; ÷·< j < ·. o
2
0
EA = j
var(A) = o
2
Student t
1(r) =

v+1
2

v
2

1 +
r
2
r

÷(
r+1
2
)
. ÷·< r < ·; r 0
EA = 0 if r 1
var(A) =
r
r ÷2
if r 2
B.6 Multivariate Random Variables
A pair of bivariate random variables (A. 1 ) is a function from the sample space into R
2
. The
joint CDF of (A. 1 ) is
1(r. n) = P(A _ r. 1 _ n) .
If 1 is continuous, the joint probability density function is
1(r. n) =
0
2
0r0n
1(r. n).
For a Borel measurable set ¹ ÷ 1
2
.
P((A < 1 ) ÷ ¹) =

¹
1(r. n)drdn
207
For any measurable function o(r. n).
Eo(A. 1 ) =

o
÷o

o
÷o
o(r. n)1(r. n)drdn.
The marginal distribution of A is
1
A
(r) = P(A _ r)
= lim
&÷o
1(r. n)
=

a
÷o

o
÷o
1(r. n)dndr
so the marginal density of A is
1
A
(r) =
d
dr
1
A
(r) =

o
÷o
1(r. n)dn.
Similarly, the marginal density of 1 is
1
Y
(n) =

o
÷o
1(r. n)dr.
The random variables A and 1 are de…ned to be independent if 1(r. n) = 1
A
(r)1
Y
(n).
Furthermore, A and 1 are independent if and only if there exist functions o(r) and /(n) such that
1(r. n) = o(r)/(n).
If A and 1 are independent, then
E(o(A)/(1 )) =

o(r)/(n)1(n. r)dndr
=

o(r)/(n)1
Y
(n)1
A
(r)dndr
=

o(r)1
A
(r)dr

/(n)1
Y
(n)dn
= Eo (A) E/(1 ) . (B.5)
if the expectations exist. For example, if A and 1 are independent then
E(A1 ) = EAE1.
Another implication of (B.5) is that if A and 1 are independent and 7 = A +1. then
'
Z
(`) = Eexp(`(A +1 ))
= E(exp(`A) exp(`1 ))
= Eexp

`
t
A

Eexp

`
t
1

= '
A
(`)'
Y
(`). (B.6)
The covariance between A and 1 is
cov(A. 1 ) = o
AY
= E((A ÷EA) (1 ÷E1 )) = EA1 ÷EAE1.
The correlation between A and 1 is
corr (A. 1 ) = j
AY
=
o
AY
o
a
o
Y
.
208
The Cauchy-Schwarz Inequality implies that
[j
AY
[ _ 1. (B.7)
The correlation is a measure of linear dependence, free of units of measurement.
If A and 1 are independent, then o
AY
= 0 and j
AY
= 0. The reverse, however, is not true.
For example, if EA = 0 and EA
3
= 0, then cov(A. A
2
) = 0.
A useful fact is that
var (A +1 ) = var(A) + var(1 ) + 2 cov(A. 1 ).
An implication is that if A and 1 are independent, then
var (A +1 ) = var(A) + var(1 ).
the variance of the sum is the sum of the variances.
A /1 random vector A = (A
1
. .... A
I
)
t
is a function from o to R
I
. Let i = (r
1
. .... r
I
)
t
denote
a vector in R
I
. (In this Appendix, we use bold to denote vectors. Bold capitals A are random
vectors and bold lower case i are nonrandom vectors. Again, this is in distinction to the notation
used in the bulk of the text) The vector A has the distribution and density functions
1(i) = P(A _ i)
1(i) =
0
I
0r
1
0r
I
1(i).
For a measurable function g : R
I
÷R
c
. we de…ne the expectation
Eg(A) =

R
k
o(i)1(i)di
where the symbol di denotes dr
1
dr
I
. In particular, we have the / 1 multivariate mean
µ = EA
and / / covariance matrix
X = E

(A ÷µ) (A ÷µ)
t

= EAA
t
÷µµ
t
If the elements of A are mutually independent, then X is a diagonal matrix and
var

I
¸
j=1
A
j

=
I
¸
j=1
var (A
j
)
B.7 Conditional Distributions and Expectation
The conditional density of 1 given A = i is de…ned as
1
Y [×
(n [ i) =
1(i. n)
1
×
(i)
209
if 1
×
(i) 0. One way to derive this expression from the de…nition of conditional probability is
1
Y [A
(n [ i) =
0
0n
lim
.÷0
P(1 _ n [ i _ A _ i +-)
=
0
0n
lim
.÷0
P(¦1 _ n¦ ¨ ¦i _ A _ i +-¦)
P(i _ A _ i +-)
=
0
0n
lim
.÷0
1(i +-. n) ÷1(i. n)
1
×
(i +-) ÷1
A
(i)
=
0
0n
lim
.÷0
0
0a
1(i +-. n)
1
×
(i +-)
=
0
2
0a0&
1(i. n)
1
×
(i)
=
1(i. n)
1
×
(i)
.
The conditional mean or conditional expectation is the function
:(i) = E(1 [ A = i) =

o
÷o
n1
Y [×
(n [ i) dn.
The conditional mean :(i) is a function, meaning that when A equals i. then the expected value
of 1 is :(i).
Similarly, we de…ne the conditional variance of 1 given A = i as
o
2
(i) = var (1 [ A = i)
= E

(1 ÷:(i))
2
[ A = i

= E

1
2
[ A = i

÷:(i)
2
.
Evaluated at i = A. the conditional mean :(A) and conditional variance o
2
(A) are random
variables, functions of A. We write this as E(1 [ A) = :(A) and var (1 [ A) = o
2
(A). For
example, if E(1 [ A = i) = c +
t
i. then E(1 [ A) = c +
t
A. a transformation of A.
The following are important facts about conditional expectations.
Simple Law of Iterated Expectations:
E(E(1 [ A)) = E(1 ) (B.8)
Proof :
E(E(1 [ A)) = E(:(A))
=

o
÷o
:(i)1
×
(i)di
=

o
÷o

o
÷o
n1
Y [×
(n [ i) 1
×
(i)dndi
=

o
÷o

o
÷o
n1 (n. i) dndi
= E(1 ).
Law of Iterated Expectations:
E(E(1 [ A. Z) [ A) = E(1 [ A) (B.9)
210
Conditioning Theorem. For any function o(i).
E(o(A)1 [ A) = o (A) E(1 [ A) (B.10)
Proof : Let
/(i) = E(o(A)1 [ A = i)
=

o
÷o
o(i)n1
Y [×
(n [ i) dn
= o(i)

o
÷o
n1
Y [×
(n [ i) dn
= o(i):(i)
where :(i) = E(1 [ A = i) . Thus /(A) = o(A):(A), which is the same as E(o(A)1 [ A) =
o (A) E(1 [ A) .
B.8 Transformations
Suppose that A ÷ R
I
with continuous distribution function 1
×
(i) and density 1
×
(i). Let
A = g(A) where g(i) : R
I
÷R
I
is one-to-one, di¤erentiable, and invertible. Let l(u) denote the
inverse of g(i). The Jacobian is
J(u) = det

0
0u
t
l(u)

.
Consider the univariate case / = 1. If o(r) is an increasing function, then o(A) _ 1 if and only
if A _ /(1 ). so the distribution function of 1 is
1
Y
(n) = P(o(A) _ n)
= P(A _ /(1 ))
= 1
A
(/(1 )) .
Taking the derivative, the density of 1 is
1
Y
(n) =
d
dn
1
Y
(n) = 1
A
(/(1 ))
d
dn
/(n).
If o(r) is a decreasing function, then o(A) _ 1 if and only if A _ /(1 ). so
1
Y
(n) = P(o(A) _ n)
= 1 ÷P(A _ /(1 ))
= 1 ÷1
A
(/(1 ))
and the density of 1 is
1
Y
(n) = ÷1
A
(/(1 ))
d
dn
/(n).
We can write these two cases jointly as
1
Y
(n) = 1
A
(/(1 )) [J(n)[ . (B.11)
This is known as the change-of-variables formula. This same formula (B.11) holds for / 1. but
its justi…cation requires deeper results from analysis.
As one example, take the case A ~ l[0. 1] and 1 = ÷log(A). Here, o(r) = ÷log(r) and
/(n) = exp(÷n) so the Jacobian is J(n) = ÷exp(n). As the range of A is [0. 1]. that for 1 is [0,·).
Since 1
A
(r) = 1 for 0 _ r _ 1 (B.11) shows that
1
Y
(n) = exp(÷n). 0 _ n _ ·.
an exponential density.
211
B.9 Normal and Related Distributions
The standard normal density is
c(r) =
1


exp

÷
r
2
2

. ÷·< r < ·.
It is conventional to write A ~ N(0. 1) . and to denote the standard normal density function by
c(r) and its distribution function by (r). The latter has no closed-form solution. The normal
density has all moments …nite. Since it is symmetric about zero all odd moments are zero. By
iterated integration by parts, we can also show that EA
2
= 1 and EA
4
= 3. In fact, for any positive
integer :, EA
2n
= (2:÷1)!! = (2:÷1) (2:÷3) 1. Thus EA
4
= 3. EA
6
= 15. EA
8
= 105.
and EA
10
= 945.
If 7 is standard normal and A = j + o7. then using the change-of-variables formula, A has
density
1(r) =
1

2¬o
exp

÷
(r ÷j)
2
2o
2

. ÷·< r < ·.
which is the univariate normal density. The mean and variance of the distribution are j and
o
2
. and it is conventional to write A ~ N

j. o
2

.
For i ÷ R
I
. the multivariate normal density is
1(i) =
1
(2¬)
I/2
det (X)
1/2
exp

÷
(i ÷µ)
t
X
÷1
(i ÷µ)
2

. i ÷ R
I
.
The mean and covariance matrix of the distribution are µ and X. and it is conventional to write
A ~ N(µ. X).
The MGF and CF of the multivariate normal are exp

X
t
µ +X
t
XX´2

and exp

iX
t
µ ÷X
t
XX´2

.
respectively.
If A ÷ R
I
is multivariate normal and the elements of A are mutually uncorrelated, then
X = diag¦o
2
;
¦ is a diagonal matrix. In this case the density function can be written as
1(i) =
1
(2¬)
I/2
o
1
o
I
exp

÷

(r
1
÷j
1
)
2
´o
2
1
+ + (r
I
÷j
I
)
2
´o
2
I
2

=
I
¸
;=1
1
(2¬)
1/2
o
;
exp

÷

r
;
÷j
;

2
2o
2
;

which is the product of marginal univariate normal densities. This shows that if A is multivariate
normal with uncorrelated elements, then they are mutually independent.
Theorem B.9.1 If A ~ N(µ. X) and A = u + HA with H an invertible matrix, then A ~
N(u +Hµ. HXH
t
) .
Theorem B.9.2 Let A ~ N(0. 1
v
) . Then Q = A
t
A is distributed chi-square with r degrees of
freedom, written .
2
v
.
Theorem B.9.3 If Z ~ N(0. A) with A 0. c c. then Z
t
A
÷1
Z ~ .
2
o
.
Theorem B.9.4 Let 7 ~ N(0. 1) and Q ~ .
2
v
be independent. Then T
v
= 7´

Q´r is distributed
as student’s t with r degrees of freedom.
212
Proof of Theorem B.9.1. By the change-of-variables formula, the density of A = u +HA is
1(u) =
1
(2¬)
I/2
det (X
Y
)
1/2
exp

÷
(u ÷µ
Y
)
t
X
÷1
Y
(u ÷µ
Y
)
2

. u ÷ R
I
.
where µ
Y
= u+Hµ and X
Y
= HXH
t
. where we used the fact that det (HXH
t
)
1/2
= det (X)
1/2
det (H) .

Proof of Theorem B.9.2. First, suppose a random variable Q is distributed chi-square with r
degrees of freedom. It has the MGF
Eexp(tQ) =

o
0
1

v
2

2
v/2
r
v/2÷1
exp(tr) exp(÷r´2) dn = (1 ÷2t)
÷v/2
where the second equality uses the fact that

o
0
n
o÷1
exp(÷/n) dn = /
÷o
(a). which can be found
by applying change-of-variables to the gamma function. Our goal is to calculate the MGF of
Q = A
t
A and show that it equals (1 ÷2t)
÷v/2
. which will establish that Q ~ .
2
v
.
Note that we can write Q = A
t
A =
¸
v
;=1
7
2
;
where the 7
;
are independent N(0. 1) . The
distribution of each of the 7
2
;
is
P

7
2
;
_ n

= 2P(0 _ 7
;
_

n)
= 2

&
0
1


exp

÷
r
2
2

dr
=

&
0
1

1
2

2
1/2
:
÷1/2
exp

÷
:
2

d:
using the change–of-variables : = r
2
and the fact

1
2

=

¬. Thus the density of 7
2
;
is
1
1
(r) =
1

1
2

2
1/2
r
÷1/2
exp

÷
r
2

which is the .
2
1
and by our above calculation has the MGF of Eexp

t7
2
;

= (1 ÷2t)
÷1/2
.
Since the 7
2
;
are mutually independent, (B.6) implies that the MGF of Q =
¸
v
;=1
7
2
;
is

(1 ÷2t)
÷1/2

v
= (1 ÷2t)
÷v/2
. which is the MGF of the .
2
v
density as desired.
Proof of Theorem B.9.3. The fact that A 0 means that we can write A = CC
t
where C is
non-singular. Then A
÷1
= C
÷1t
C
÷1
and
C
÷1
Z ~ N

0. C
÷1
AC
÷1t

= N

0. C
÷1
CC
t
C
÷1t

= N(0. 1
o
) .
Thus
Z
t
A
÷1
Z = Z
t
C
÷1t
C
÷1
Z =

C
÷1
Z

t

C
÷1
Z

~ .
2
o
.

Proof of Theorem B.9.4. Using the simple law of iterated expectations, T
v
has distribution
213
function
1 (r) = P

7

Q´r
_ r

= E

7 _ r

Q
r
¸
= E
¸
P

7 _ r

Q
r
[ Q
¸
= E

r

Q
r

Thus its density is
1 (r) = E
d
dr

r

Q
r

= E

c

r

Q
r

Q
r

=

o
0

1


exp

÷
cr
2
2r

c
r

1

v
2

2
v/2
c
v/2÷1
exp(÷c´2)

dc
=

v+1
2

v
2

1 +
r
2
r

÷(
r+1
2
)
which is that of the student t with r degrees of freedom.
214
Appendix C
Asymptotic Theory
C.1 Inequalities
The following inequalities are frequently used in asymptotic distribution theory.
Jensen’s Inequality. If o() : R ÷ R is convex, then for any random variable r for which
E[r[ < · and E[o (r)[ < ·.
o(E(r)) _ E(o (r)) . (C.1)
Expectation Inequality. For any random variable r for which E[r[ < ·.
[E(r)[ _ E[r[ . (C.2)
Cauchy-Schwarz Inequality. For any random :: matrices A and A,
E

A
t
A

_

E|A|
2

1/2

E|A |
2

1/2
. (C.3)
Holder’s Inequality. If j 1 and c 1 and
1
j
+
1
o
= 1. then for any random :: matrices A
and A,
E

A
t
A

_ (E|A|
j
)
1/j
(E|A |
o
)
1/o
. (C.4)
Minkowski’s Inequality. For any random :: matrices A and A,
(E|A +A |
j
)
1/j
_ (E|A|
j
)
1/j
+ (E|A |
j
)
1/j
(C.5)
Markov’s Inequality. For any random vector i and non-negative function o(i) _ 0.
P(o(i) c) _ c
÷1
Eo(i). (C.6)
Proof of Jensen’s Inequality. Let a + /n be the tangent line to o(n) at n = Er. Since o(n) is
convex, tangent lines lie below it. So for all n. o(n) _ a + /n yet o(Er) = a + /Er since the curve
is tangent at Er. Applying expectations, Eo(r) _ a +/Er = o(Er). as stated.
Proof of Expecation Inequality. Follows from an application of Jensen’s Inequality, noting that
the function o(n) = [n[ is convex.
Proof of Holder’s Inequality. Since
1
j
+
1
o
= 1 an application of Jensen’s Inequality shows that
for any real a and /
exp
¸
1
j
a +
1
c
/

_
1
j
exp(a) +
1
c
exp(/) .
215
Setting n = exp(a) and · = exp(/) this implies
n
1/j
·
1/o
_
n
j
+
·
c
and this inequality holds for any n 0 and · 0.
Set n = |A|
j
´E|A|
j
and · = |A |
o
´E|A |
o
. Note that En = E· = 1. By the matrix Schwarz
Inequality (A.7), |A
t
A | _ |A| |A |. Thus
E|A
t
A |
(E|A|
j
)
1/j
(E|A |
o
)
1/o
_
E(|A| |A |)
(E|A|
j
)
1/j
(E|A |
o
)
1/o
= E

n
1/j
·
1/o

_ E

n
j
+
·
c

=
1
j
+
1
c
= 1.
which is (C.4).
Proof of Minkowski’s Inequality. Note that by rewriting, using the triangle inequality (A.8),
and then Holder’s Inequality to the two expectations
E|A +A |
j
= E

|A +A | |A +A |
j÷1

_ E

|A| |A +A |
j÷1

+E

|A | |A +A |
j÷1

_ (E|A|
j
)
1/j
E

|A +A |
o(j÷1)

1/o
+ (E|A |
j
)
1/j
E

|A +A |
o(j÷1)

1/o
=

(E|A|
j
)
1/j
+ (E|A |
j
)
1/j

E(|A +A |
j
)
(j÷1)/j
where the second equality picks c to satisfy 1´j+1´c = 1. and the …nal equality uses this fact to make
the substitution c = j´(j÷1) and then collects terms. Dividing both sides by E(|A +A |
j
)
(j÷1)/j
.
we obtain (C.5).
Proof of Markov’s Inequality. Let 1 denote the density function of i. Then
P(o(i) _ c) =

|uj(u)`c¦
1(u)du
_

|uj(u)`c¦
o(u)
c
1(u)du
_ c
÷1

o
÷o
o(u)1(u)du
= c
÷1
E(o(i))
the …rst inequality using the region of integration ¦o(u) c¦.
C.2 Convergence in Distribution
Let z
a
be a random vector with distribution 1
a
(u) = P(z
a
_ u) . We say that z
a
converges
in distribution to z as : ÷·, denoted z
a
o
÷÷z. where z has distribution 1(u) = P(z _ u) . if
for all u at which 1(u) is continuous, 1
a
(u) ÷1(u) as : ÷·.
216
Theorem C.2.1 Central Limit Theorem (CLT). If i
j
÷ R
I
is iid and Er
2
;j
< · for , =
1. .... /. then as : ÷·

:(i
a
÷µ) =
1

:
a
¸
j=1
(i
j
÷µ)
o
÷÷N(0. D) .
where µ = Ei
j
and D = E(i
j
÷µ) (i
j
÷µ)
t
.
Proof: The moment bound Ei
2
;j
< · is su¢cient to guarantee that the elements of µ and \ are
well de…ned and …nite. Without loss of generality, it is su¢cient to consider the case µ = 0 and
D = 1
I
.
For X ÷ R
I
. let C (X) = Eexp

iX
t
i
j

denote the characteristic function of i
j
and set c (X) =
log C(X). Then observe
0
0X
C(X) = iE

i
j
exp

iX
t
i
j

0
2
0X0X
t
C(X) = i
2
E

i
j
i
t
j
exp

iX
t
i
j

so when evaluated at X = 0
C(0) = 1
0
0X
C(0) = iE(i
j
) = 0
0
2
0X0X
t
C(0) = ÷E

i
j
i
t
j

= ÷1
I
.
Furthermore,
c
X
(X) =
0
0X
c(X) = C(X)
÷1
0
0X
C(X)
c
XX
(X) =
0
2
0X0X
t
c(X) = C(X)
÷1
0
2
0X0X
t
C(X) ÷C(X)
÷2
0
0X
C (X)
0
0X
t
C(X)
so when evaluated at X = 0
c(0) = 0
c
A
(0) = 0
c
AA
(0) = ÷1
I
.
By a second-order Taylor series expansion of c(X) about X = 0.
c(X) = c(0) +c
A
(0)
t
X +
1
2
X
t
c
XX
(X
+
)X =
1
2
X
t
c
XX
(X
+
)X (C.7)
where X
+
lies on the line segment joining 0 and X.
We now compute C
a
(X) = 1 exp

iX
t

:i
a

the characteristic function of

:i
a
. By the prop-
erties of the exponential function, the independence of the i
j
. the de…nition of c(X) and (C.7)
log C
a
(X) = log Eexp

i
1

:
a
¸
j=1
X
t
i
j

= log E
a
¸
j=1
exp

i
1

:
X
t
i
j

= log
a
¸
j=1
Eexp

i
1

:
X
t
i
j

= :c

X

:

=
1
2
X
t
c
XX
(X
a
)X
217
where X
a
÷ 0 lies on the line segment joining 0 and X´

:. Since c
XX
(X
a
) ÷ c
XX
(0) = ÷1
I
. we
see that as : ÷·.
C
a
(X) ÷exp

÷
1
2
X
t
X

the characteristic function of the N(0. 1
I
) distribution. This is su¢cient to establish the theorem.

C.3 Asymptotic Transformations
Theorem C.3.1 Continuous Mapping Theorem 1 (CMT). If z
a
j
÷÷ c as : ÷ · and o ()
is continuous at c. then o(z
a
)
j
÷÷o(c) as : ÷·.
Proof: Since o is continuous at c. for all - 0 we can …nd a c 0 such that if |z
a
÷c| < c
then [o (z
a
) ÷o (c)[ _ -. Recall that ¹ · 1 implies P(¹) _ P(1). Thus P([o (z
a
) ÷o (c)[ _ -) _
P(|z
a
÷c| < c) ÷1 as : ÷·by the assumption that z
a
j
÷÷c. Hence o(z
a
)
j
÷÷o(c) as : ÷·.
Theorem C.3.2 Continuous Mapping Theorem 2. If z
a
o
÷÷ z as : ÷ · and o () is
continuous. then o(z
a
)
o
÷÷o(z) as : ÷·.
Theorem C.3.3 Delta Method: If

:(0
a
÷0
0
)
o
÷÷N(0. X) . where 0 is :1 and X is ::.
and o(0) : R
n
÷R
I
. / _ :. then

:(o (0
a
) ÷o(0
0
))
o
÷÷N

0. o
0
Xo
t
0

where o
0
(0) =
0
00
0
o(0) and o
0
= o
0
(0
0
).
Proof : By a vector Taylor series expansion, for each element of o.
o
;
(0
a
) = o
;
(0
0
) +o
;0
(0
+
;a
) (0
a
÷0
0
)
where 0
+
a;
lies on the line segment between 0
a
and 0
0
and therefore converges in probability to 0
0
.
It follows that a
;a
= o
;0
(0
+
;a
) ÷o
;0
j
÷÷0. Stacking across elements of o. we …nd

:(o (0
a
) ÷o(0
0
)) = (o
0
+a
a
)

:(0
a
÷0
0
)
o
÷÷o
0 N(0. X) = N

0. o
0
Xo
t
0

.
218
Appendix D
Maximum Likelihood
If the distribution of u
j
is 1(u. 0) where 1 is a known distribution function and 0 ÷ is an
unknown :1 vector, we say that the distribution is parametric and that 0 is the parameter
of the distribution 1. The space is the set of permissible value for 0. In this setting the method
of maximum likelihood is the appropriate technique for estimation and inference on 0.
If the distribution 1 is continuous then the density of u
j
can be written as 1(u. 0) and the joint
density of a random sample (u
1
. .... u
a
) is
1
a
(u
1
. .... u
a
. 0) =
a
¸
j=1
1 (u
j
. 0) .
The likelihood of the sample is this joint density evaluated at the observed sample values, viewed
as a function of 0. The log-likelihood function is its natural log
log 1(0) =
a
¸
j=1
log 1 (u
j
. 0) .
If the distribution 1 is discrete, the likelihood and log-likelihood are constructed by setting 1 (u. 0) =
P(u
j
= u. 0) .
De…ne the Hessian
H= ÷E
0
2
0000
t
log 1 (u
j
. 0
0
) (D.1)
and the outer product matrix
D = E

0
00
log 1 (u
j
. 0
0
)
0
00
log 1 (u
j
. 0
0
)
t

. (D.2)
Two important features of the likelihood are
Theorem D.0.4
0
00
Elog 1 (u
j
. 0)

0=0
0
= 0 (D.3)
H= D = J
0
(D.4)
The matrix J
0
is called the information, and the equality (D.4) is often called the information
matrix equality.
Theorem D.0.5 Cramer-Rao Lower Bound. If
~
0 is an unbiased estimator of 0 ÷ R. then
var(
~
0) _ (:J
0
)
÷1
.
219
The Cramer-Rao Theorem gives a lower bound for estimation. However, the restriction to
unbiased estimators means that the theorem has little direct relevance for …nite sample e¢ciency.
The maximum likelihood estimator or MLE
`
0 is the parameter value which maximizes the
likelihood (equivalently, which maximizes the log-likelihood). We can write this as
`
0 = argmax

log 1(0).
In some simple cases, we can …nd an explicit expression for
`
0 as a function of the data, but these
cases are rare. More typically, the MLE
`
0 must be found by numerical methods.
Why do we believe that the MLE
`
0 is estimating the parameter 0? Observe that when stan-
dardized, the log-likelihood is a sample average
1
:
log 1(0) =
1
:
a
¸
j=1
log 1 (u
j
. 0)
j
÷÷Elog 1 (u
j
. 0) .
As the MLE
`
0 maximizes the left-hand-side, we can see that it is an estimator of the maximizer of
the right-hand-side. The …rst-order condition for the latter problem is
0 =
0
00
Elog 1 (u
j
. 0)
which holds at 0 = 0
0
by (D.3). In fact, under conventional regularity conditions,
`
0 is consistent
for this value,
`
0
j
÷÷0
0
as : ÷·.
Theorem D.0.6 Under regularity conditions,

:

`
0 ÷0
0

o
÷÷N

0. J
÷1
0

.
Thus in large samples, the approximate variance of the MLE is (:1
0
)
÷1
which is the Cramer-
Rao lower bound. Thus in large samples the MLE has approximately the best possible variance.
Therefore the MLE is called asymptotically e¢cient.
Typically, to estimate the asymptotic variance of the MLE we use an estimate based on the
Hessian formula (D.1)
´
H= ÷
1
:
a
¸
j=1
0
2
0000
t
log 1

u
j
.
`
0

(D.5)
We then set
´
J
÷1
0
=
´
H
÷1
. Asymptotic standard errors for
`
0 are then the square roots of the diagonal
elements of :
÷1
´
J
÷1
0
.
Sometimes a parametric density function 1(u. 0) is used to approximate the true unknown
density 1(u). but it is not literally believed that the model 1(u. 0) is necessarily the true density.
In this case, we refer to log 1(0) as a quasi-likelihood and the its maximizer
`
0 as a quasi-mle
or QMLE.
In this case there is not a “true” value of the parameter 0. Instead we de…ne the pseudo-true
value 0
0
as the maximizer of
Elog 1 (u
j
. 0) =

1 (u) log 1 (u. 0) du
which is the same as the minimizer of
111C =

1 (u) log

1(u)
1 (u. 0)

du
the Kullback-Leibler information distance between the true density 1(u) and the parametric density
1(u. 0). Thus the QMLE 0
0
is the value which makes the parametric density “closest” to the true
value according to this measure of distance. The QMLE is consistent for the pseudo-true value, but
220
has a di¤erent covariance matrix than in the pure MLE case, since the information matrix equality
(D.4) does not hold. A minor adjustment to Theorem (D.0.6) yields the asymptotic distribution of
the QMLE:

:

`
0 ÷0
0

o
÷÷N(0. \ ) . \ = H
÷1
DH
÷1
The moment estimator for \ is
`
\ =
´
H
÷1
`
D
´
H
÷1
where
´
H is given in (D.5) and
`
D =
1
:
a
¸
j=1
0
00
log 1

u
j
.
`
0

0
00
log 1

u
j
.
`
0

t
.
Asymptotic standard errors (sometimes called qmle standard errors) are then the square roots of
the diagonal elements of :
÷1
`
\ .
Proof of Theorem D.0.4. To see (D.3).
0
00
Elog 1 (u
j
. 0)

0=0
0
=
0
00

log 1 (u. 0) 1 (u. 0
0
) du

0=0
0
=

0
00
1 (u. 0)
1 (u. 0
0
)
1 (u. 0)
du

0=0
0
=
0
00

1 (u. 0) du

0=0
0
=
0
00
1

0=0
0
= 0.
Similarly, we can show that
E

0
2
0000
0
1 (u
j
. 0
0
)
1 (u
j
. 0
0
)

= 0.
By direction computation,
0
2
0000
t
log 1 (u
j
. 0
0
) =
0
2
0000
0
1 (u
j
. 0
0
)
1 (u
j
. 0
0
)
÷
0
00
1 (u
j
. 0
0
)
0
00
1 (u
j
. 0
0
)
t
1 (u
j
. 0
0
)
2
=
0
2
0000
0
1 (u
j
. 0
0
)
1 (u
j
. 0
0
)
÷
0
00
log 1 (u
j
. 0
0
)
0
00
log 1 (u
j
. 0
0
)
t
.
Taking expectations yields (D.4).
Proof of Theorem D.0.5. Let A = (u
1
. .... u
a
) be the sample, and set
o =
0
00
log 1
a
(A . 0
0
) =
a
¸
j=1
0
00
log 1 (u
j
. 0
0
)
which by Theorem (D.0.4) has mean zero and variance :H. Write the estimator
~
0 =
~
0 (A ) as a
function of the data. Since
~
0 is unbiased for any 0.
0 = E
~
0 =

~
0 (A ) 1 (A . 0) dA .
Di¤erentiating with respect to 0 and evaluating at 0
0
yields
1 =

~
0 (A )
0
00
1 (A . 0) dA =

~
0 (A )
0
00
log 1 (A . 0) 1 (A . 0
0
) dA = E

~
0o

.
221
By the Cauchy-Schwarz inequality
1 =

E

~
0o

2
_ var (o) var

~
0

so
var

~
0

_
1
var (o)
=
1
:H
.

Proof of Theorem D.0.6 Taking the …rst-order condition for maximization of log 1(0), and
making a …rst-order Taylor series expansion,
0 =
0
00
log 1(0)

0=
^
0
=
a
¸
j=1
0
00
log 1

u
j
.
`
0

·
a
¸
j=1
0
00
log 1 (u
j
. 0
0
) +
a
¸
j=1
0
2
0000
t
log 1 (u
j
. 0
a
)

`
0 ÷0
0

.
where 0
a
lies on a line segment joining
`
0 and 0
0
. (Technically, the speci…c value of 0
a
varies by
row in this expansion.) Rewriting this equation, we …nd

`
0 ÷0
0

=

÷
a
¸
j=1
0
2
0000
t
log 1 (u
j
. 0
a
)

÷1

a
¸
j=1
0
00
log 1 (u
j
. 0
0
)

.
Since
0
00
log 1 (u
j
. 0
0
) is mean-zero with covariance matrix D. an application of the CLT yields
1

:
a
¸
j=1
0
00
log 1 (u
j
. 0
0
)
o
÷÷N(0. D) .
The analysis of the sample Hessian is somewhat more complicated due to the presence of 0
a
. Let
H(0) = ÷
0
2
0000
0
log 1 (u
j
. 0) . If it is continuous in 0. then since 0
a
j
÷÷ 0
0
we …nd H(0
a
)
j
÷÷ H
and so
÷
1
:
a
¸
j=1
0
2
0000
t
log 1 (u
j
. 0
a
) =
1
:
a
¸
j=1

÷
0
2
0000
t
log 1 (u
j
. 0
a
) ÷H(0
a
)

+H(0
a
)
j
÷÷H
by an application of a uniform WLLN. Together,

:

`
0 ÷0
0

o
÷÷H
÷1
N(0. D) = N

0. H
÷1
DH
÷1

= N

0. H
÷1

.
the …nal equality using Theorem D.0.4 .
222
Appendix E
Numerical Optimization
Many econometric estimators are de…ned by an optimization problem of the form
`
0 = argmin

Q(0) (E.1)
where the parameter is 0 ÷ O · R
n
and the criterion function is Q(0) : O ÷ R. For example
NLLS, GLS, MLE and GMM estimators take this form. In most cases, Q(0) can be computed
for given 0. but
`
0 is not available in closed form. In this case, numerical methods are required to
obtain
`
0.
E.1 Grid Search
Many optimization problems are either one dimensional (: = 1) or involve one-dimensional
optimization as a sub-problem (for example, a line search). In this context grid search may be
employed.
Grid Search. Let = [a. /] be an interval. Pick some - 0 and set G = (/ ÷ a)´- to be
the number of gridpoints. Construct an equally spaced grid on the region [a. /] with G gridpoints,
which is {0(,) = a + ,(/ ÷ a)´G : , = 0. .... G¦. At each point evaluate the criterion function
and …nd the gridpoint which yields the smallest value of the criterion, which is 0(^ ,) where ^ , =
argmin
0<;<G
Q(0(,)). This value 0 (^ ,) is the gridpoint estimate of
`
0. If the grid is su¢ciently …ne to
capture small oscillations in Q(0). the approximation error is bounded by -. that is,

0(^ ,) ÷
`
0

_ -.
Plots of Q(0(,)) against 0(,) can help diagnose errors in grid selection. This method is quite robust
but potentially costly.
Two-Step Grid Search. The gridsearch method can be re…ned by a two-step execution. For
an error bound of - pick G so that G
2
= (/ ÷ a)´- For the …rst step de…ne an equally spaced
grid on the region [a. /] with G gridpoints, which is {0(,) = a + ,(/ ÷ a)´G : , = 0. .... G¦.
At each point evaluate the criterion function and let ^ , = argmin
0<;<G
Q(0(,)). For the second
step de…ne an equally spaced grid on [0(^ , ÷1). 0(^ , + 1)] with G gridpoints, which is {0
t
(/) =
0(^ , ÷ 1) + 2/(/ ÷ a)´G
2
: / = 0. .... G¦. Let
^
/ = argmin
0<I<G
Q(0
t
(/)). The estimate of
`
0 is
0

^
/

. The advantage of the two-step method over a one-step grid search is that the number of
function evaluations has been reduced from (/÷a)´- to 2

(/ ÷a)´- which can be substantial. The
disadvantage is that if the function Q(0) is irregular, the …rst-step grid may not bracket
`
0 which
thus would be missed.
E.2 Gradient Methods
Gradient Methods are iterative methods which produce a sequence 0
j
: i = 1. 2. ... which
are designed to converge to
`
0. All require the choice of a starting value 0
1
. and all require the
223
computation of the gradient of Q(0)
g(0) =
0
00
Q(0)
and some require the Hessian
H(0) =
0
2
0000
t
Q(0).
If the functions g(0) and H(0) are not analytically available, they can be calculated numerically.
Take the ,
t
t/ element of g(0). Let c
;
be the ,
t
t/ unit vector (zeros everywhere except for a one in
the ,
t
t/ row). Then for - small
o
;
(0) ·
Q(0 +c
;
-) ÷Q(0)
-
.
Similarly,
o
;I
(0) ·
Q(0 +c
;
- +c
I
-) ÷Q(0 +c
I
-) ÷Q(0 +c
;
-) +Q(0)
-
2
In many cases, numerical derivatives can work well but can be computationally costly relative to
analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
Most gradient methods are a variant of Newton’s method which is based on a quadratic
approximation. By a Taylor’s expansion for 0 close to
`
0
0 = g(
`
0) · g(0) +H(0)

`
0 ÷0

which implies
`
0 = 0 ÷H(0)
÷1
g(0).
This suggests the iteration rule
`
0
j+1
= 0
j
÷H(0
j
)
÷1
g(0
j
).
where
One problem with Newton’s method is that it will send the iterations in the wrong direction if
H(0
j
) is not positive de…nite. One modi…cation to prevent this possibility is quadratic hill-climbing
which sets
`
0
j+1
= 0
j
÷(H(0
j
) +c
j
1
n
)
÷1
g(0
j
).
where c
j
is set just above the smallest eigenvalue of H(0
j
) if H(0) is not positive de…nite.
Another productive modi…cation is to add a scalar steplength `
j
. In this case the iteration
rule takes the form
0
j+1
= 0
j
÷L
j
g
j
`
j
(E.2)
where g
j
= g(0
j
) and L
j
= H(0
j
)
÷1
for Newton’s method and 1
j
= (H(0
j
) +c
j
1
n
)
÷1
for
quadratic hill-climbing.
Allowing the steplength to be a free parameter allows for a line search, a one-dimensional
optimization. To pick `
j
write the criterion function as a function of `
Q(`) = Q(0
j
+L
j
g
j
`)
a one-dimensional optimization problem. There are two common methods to perform a line search.
A quadratic approximation evaluates the …rst and second derivatives of Q(`) with respect to
`. and picks `
j
as the value minimizing this approximation. The half-step method considers the
sequence ` = 1. 1/2, 1/4, 1/8, ... . Each value in the sequence is considered and the criterion
Q(0
j
+L
j
g
j
`) evaluated. If the criterion has improved over Q(0
j
), use this value, otherwise move
to the next element in the sequence.
224
Newton’s method does not perform well if Q(0) is irregular, and it can be quite computationally
costly if H(0) is not analytically available. These problems have motivated alternative choices for
the weight matrix 1
j
. These methods are called Quasi-Newton methods. Two popular methods
are do to Davidson-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS).
Let
g
j
= g
j
÷g
j÷1
0
j
= 0
j
÷0
j÷1
and . The DFP method sets
L
j
= L
j÷1
+
0
j
0
t
j
0
t
j
g
j
+
L
j÷1
g
j
g
t
j
L
j÷1
g
t
j
L
j÷1
g
j
.
The BFGS methods sets
L
j
= L
j÷1
+
0
j
0
t
j
0
t
j
g
j
÷
0
j
0
t
j

0
t
j
g
j

2
o
t
j
L
j÷1
g
j
+
0
j
g
t
j
L
j÷1
0
t
j
g
j
+
L
j÷1
g
j
0
t
j
0
t
j
g
j
.
For any of the gradient methods, the iterations continue until the sequence has converged in
some sense. This can be de…ned by examining whether [0
j
÷0
j÷1
[ . [Q(0
j
) ÷Q(0
j÷1
)[ or [o(0
j
)[
has become small.
E.3 Derivative-Free Methods
All gradient methods can be quite poor in locating the global minimum when Q(0) has several
local minima. Furthermore, the methods are not well de…ned when Q(0) is non-di¤erentiable. In
these cases, alternative optimization methods are required. One example is the simplex method
of Nelder-Mead (1965).
A more recent innovation is the method of simulated annealing (SA). For a review see Go¤e,
Ferrier, and Rodgers (1994). The SA method is a sophisticated random search. Like the gradient
methods, it relies on an iterative sequence. At each iteration, a random variable is drawn and
added to the current value of the parameter. If the resulting criterion is decreased, this new value
is accepted. If the criterion is increased, it may still be accepted depending on the extent of the
increase and another randomization. The latter property is needed to keep the algorithm from
selecting a local minimum. As the iterations continue, the variance of the random innovations is
shrunk. The SA algorithm stops when a large number of iterations is unable to improve the criterion.
The SA method has been found to be successful at locating global minima. The downside is that
it can take considerable computer time to execute.
225
Bibliography
[1] Abadir, Karim M. and Jan R. Magnus (2005): Matrix Algebra, Cambridge University Press.
[2] Aitken, A.C. (1935): “On least squares and linear combinations of observations,” Proceedings
of the Royal Statistical Society, 55, 42-48.
[3] Akaike, H. (1973): “Information theory and an extension of the maximum likelihood prin-
ciple.” In B. Petroc and F. Csake, eds., Second International Symposium on Information
Theory.
[4] Anderson, T.W. and H. Rubin (1949): “Estimation of the parameters of a single equation in
a complete system of stochastic equations,” The Annals of Mathematical Statistics, 20, 46-63.
[5] Andrews, Donald W. K. (1988): “Laws of large numbers for dependent non-identically dis-
tributed random variables,’ Econometric Theory, 4, 458-467.
[6] Andrews, Donald W. K. (1991), “Asymptotic normality of series estimators for nonparameric
and semiparametric regression models,” Econometrica, 59, 307-345.
[7] Andrews, Donald W. K. (1993), “Tests for parameter instability and structural change with
unknown change point,” Econometrica, 61, 821-8516.
[8] Andrews, Donald W. K. and Moshe Buchinsky: (2000): “A three-step method for choosing
the number of bootstrap replications,” Econometrica, 68, 23-51.
[9] Andrews, Donald W. K. and Werner Ploberger (1994): “Optimal tests when a nuisance
parameter is present only under the alternative,” Econometrica, 62, 1383-1414.
[10] Ash, Robert B. (1972): Real Analysis and Probability, Academic Press.
[11] Basmann, R. L. (1957): “A generalized classical method of linear estimation of coe¢cients
in a structural equation,” Econometrica, 25, 77-83.
[12] Bekker, P.A. (1994): “Alternative approximations to the distributions of instrumental vari-
able estimators,” Econometrica, 62, 657-681.
[13] Billingsley, Patrick (1968): Convergence of Probability Measures. New York: Wiley.
[14] Billingsley, Patrick (1995): Probability and Measure, 3rd Edition, New York: Wiley.
[15] Bose, A. (1988): “Edgeworth correction by bootstrap in autoregressions,” Annals of Statistics,
16, 1709-1722.
[16] Breusch, T.S. and A.R. Pagan (1979): “The Lagrange multiplier test and its application to
model specification in econometrics,” Review of Economic Studies, 47, 239-253.
[17] Brown, B. W. and Whitney K. Newey (2002): “GMM, efficient bootstrapping, and improved
inference,” Journal of Business and Economic Statistics.
[18] Carlstein, E. (1986): “The use of subseries methods for estimating the variance of a general
statistic from a stationary time series,” Annals of Statistics, 14, 1171-1179.
[19] Casella, George and Roger L. Berger (2002): Statistical Inference, 2nd Edition, Duxbury
Press.
[20] Chamberlain, G. (1987): “Asymptotic efficiency in estimation with conditional moment re-
strictions,” Journal of Econometrics, 34, 305-334.
[21] Choi, I. and P.C.B. Phillips (1992): “Asymptotic and finite sample distribution theory for IV
estimators and tests in partially identified structural equations,” Journal of Econometrics,
51, 113-150.
[22] Chow, G.C. (1960): “Tests of equality between sets of coefficients in two linear regressions,”
Econometrica, 28, 591-603.
[23] Cragg, John (1992): “Quasi-Aitken Estimation for Heteroskedasticity of Unknown Form,”
Journal of Econometrics, 54, 179-201.
[24] Davidson, James (1994): Stochastic Limit Theory: An Introduction for Econometricians.
Oxford: Oxford University Press.
[25] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge
University Press.
[26] Dickey, D.A. and W.A. Fuller (1979): “Distribution of the estimators for autoregressive time
series with a unit root,” Journal of the American Statistical Association, 74, 427-431.
[27] Donald, Stephen G. and Whitney K. Newey (2001): “Choosing the number of instruments,”
Econometrica, 69, 1161-1191.
[28] Dufour, J.M. (1997): “Some impossibility theorems in econometrics with applications to
structural and dynamic models,” Econometrica, 65, 1365-1387.
[29] Efron, Bradley (1979): “Bootstrap methods: Another look at the jackknife,” Annals of Sta-
tistics, 7, 1-26.
[30] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society
for Industrial and Applied Mathematics.
[31] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York:
Chapman-Hall.
[32] Eicker, F. (1963): “Asymptotic normality and consistency of the least squares estimators for
families of linear regressions,” Annals of Mathematical Statistics, 34, 447-456.
[33] Engle, Robert F. and Clive W. J. Granger (1987): “Co-integration and error correction:
Representation, estimation and testing,” Econometrica, 55, 251-276.
[34] Frisch, Ragnar (1933): “Editorial,” Econometrica, 1, 1-4.
[35] Frisch, R. and F. Waugh (1933): “Partial time regressions as compared with individual
trends,” Econometrica, 1, 387-401.
[36] Gallant, A. Ronald and D.W. Nychka (1987): “Seminonparametric maximum likelihood es-
timation,” Econometrica, 55, 363-390.
[37] Gallant, A. Ronald and Halbert White (1988): A Unified Theory of Estimation and Inference
for Nonlinear Dynamic Models. New York: Basil Blackwell.
[38] Galton, Francis (1886): “Regression Towards Mediocrity in Hereditary Stature,” The Journal
of the Anthropological Institute of Great Britain and Ireland, 15, 246-263.
[39] Goldberger, Arthur S. (1991): A Course in Econometrics. Cambridge: Harvard University
Press.
[40] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): “Global optimization of statistical functions
with simulated annealing,” Journal of Econometrics, 60, 65-99.
[41] Gauss, K.F. (1809): “Theoria motus corporum coelestium,” in Werke, Vol. VII, 240-254.
[42] Granger, Clive W. J. (1969): “Investigating causal relations by econometric models and
cross-spectral methods,” Econometrica, 37, 424-438.
[43] Granger, Clive W. J. (1981): “Some properties of time series data and their use in econometric
speci…cation,” Journal of Econometrics, 16, 121-130.
[44] Granger, Clive W. J. and Timo Teräsvirta (1993): Modelling Nonlinear Economic Relation-
ships, Oxford University Press, Oxford.
[45] Gregory, A. and M. Veall (1985): “On formulating Wald tests of nonlinear restrictions,”
Econometrica, 53, 1465-1468.
[46] Haavelmo, T. (1944): “The probability approach in econometrics,” Econometrica, supple-
ment, 12.
[47] Hall, A. R. (2000): “Covariance matrix estimation and the power of the overidentifying
restrictions test,” Econometrica, 68, 1517-1527.
[48] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.
[49] Hall, P. (1994): “Methodology and theory for the bootstrap,” Handbook of Econometrics,
Vol. IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.
[50] Hall, P. and J.L. Horowitz (1996): “Bootstrap critical values for tests based on Generalized-
Method-of-Moments estimation,” Econometrica, 64, 891-916.
[51] Hahn, J. (1996): “A note on bootstrapping generalized method of moments estimators,”
Econometric Theory, 12, 187-197.
[52] Hamilton, James D. (1994): Time Series Analysis, Princeton: Princeton University Press.
[53] Hansen, Bruce E. (1992): “Efficient estimation and testing of cointegrating vectors in the
presence of deterministic trends,” Journal of Econometrics, 53, 87-121.
[54] Hansen, Bruce E. (1996): “Inference when a nuisance parameter is not identified under the
null hypothesis,” Econometrica, 64, 413-430.
[55] Hansen, Bruce E. (2006): “Edgeworth expansions for the Wald and GMM statistics for non-
linear restrictions,” Econometric Theory and Practice: Frontiers of Analysis and Applied
Research, edited by Dean Corbae, Steven N. Durlauf and Bruce E. Hansen. Cambridge Uni-
versity Press.
[56] Hansen, Lars Peter (1982): “Large sample properties of generalized method of moments
estimators,” Econometrica, 50, 1029-1054.
[57] Hansen, Lars Peter, John Heaton, and A. Yaron (1996): “Finite sample properties of some
alternative GMM estimators,” Journal of Business and Economic Statistics, 14, 262-280.
[58] Hausman, J.A. (1978): “Specification tests in econometrics,” Econometrica, 46, 1251-1271.
[59] Heckman, J. (1979): “Sample selection bias as a specification error,” Econometrica, 47, 153-
161.
[60] Horowitz, Joel (2001): “The Bootstrap,” Handbook of Econometrics, Vol. 5, J.J. Heckman
and E.E. Leamer, eds., Elsevier Science, 3159-3228.
[61] Imbens, G.W. (1997): “One step estimators for over-identified generalized method of moments
models,” Review of Economic Studies, 64, 359-383.
[62] Imbens, G.W., R.H. Spady and P. Johnson (1998): “Information theoretic approaches to
inference in moment condition models,” Econometrica, 66, 333-357.
[63] Jarque, C.M. and A.K. Bera (1980): “Efficient tests for normality, homoskedasticity and
serial independence of regression residuals,” Economics Letters, 6, 255-259.
[64] Johansen, S. (1988): “Statistical analysis of cointegrating vectors,” Journal of Economic
Dynamics and Control, 12, 231-254.
[65] Johansen, S. (1991): “Estimation and hypothesis testing of cointegration vectors in the pres-
ence of linear trend,” Econometrica, 59, 1551-1580.
[66] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Mod-
els, Oxford University Press.
[67] Johansen, S. and K. Juselius (1992): “Testing structural hypotheses in a multivariate cointe-
gration analysis of the PPP and the UIP for the UK,” Journal of Econometrics, 53, 211-244.
[68] Kitamura, Y. (2001): “Asymptotic optimality and empirical likelihood for testing moment
restrictions,” Econometrica, 69, 1661-1672.
[69] Kitamura, Y. and M. Stutzer (1997): “An information-theoretic alternative to generalized
method of moments,” Econometrica, 65, 861-874.
[70] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.
[71] Kunsch, H.R. (1989): “The jackknife and the bootstrap for general stationary observations,”
Annals of Statistics, 17, 1217-1241.
[72] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): “Testing the null hypoth-
esis of stationarity against the alternative of a unit root: How sure are we that economic time
series have a unit root?” Journal of Econometrics, 54, 159-178.
[73] Lafontaine, F. and K.J. White (1986): “Obtaining any Wald statistic you want,” Economics
Letters, 21, 35-40.
[74] Lehmann, E.L. and George Casella (1998): Theory of Point Estimation, 2nd Edition,
Springer.
[75] Lehmann, E.L. and Joseph P. Romano (2005): Testing Statistical Hypotheses, 3rd Edition,
Springer.
[76] Li, Qi and Jeffrey Racine (2007): Nonparametric Econometrics: Theory and Practice, Princeton University Press.
[77] Lovell, M.C. (1963): “Seasonal adjustment of economic time series,” Journal of the American
Statistical Association, 58, 993-1010.
[78] MacKinnon, James G. (1990): “Critical values for cointegration,” in Engle, R.F. and C.W.
Granger (eds.) Long-Run Economic Relationships: Readings in Cointegration, Oxford, Oxford
University Press.
[79] MacKinnon, James G. and Halbert White (1985): “Some heteroskedasticity-consistent covari-
ance matrix estimators with improved finite sample properties,” Journal of Econometrics, 29,
305-325.
[80] Magnus, J. R., and H. Neudecker (1988): Matrix Differential Calculus with Applications in
Statistics and Econometrics, New York: John Wiley and Sons.
[81] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.
[82] Nelder, J. and R. Mead (1965): “A simplex method for function minimization,” Computer
Journal, 7, 308-313.
[83] Newey, Whitney K. (1990): “Semiparametric efficiency bounds,” Journal of Applied Econo-
metrics, 5, 99-135.
[84] Newey, Whitney K. and Daniel L. McFadden (1994): “Large Sample Estimation and Hy-
pothesis Testing,” in Robert Engle and Daniel McFadden, (eds.) Handbook of Econometrics,
vol. IV, 2111-2245, North Holland: Amsterdam.
[85] Newey, Whitney K. and Kenneth D. West (1987): “Hypothesis testing with efficient method
of moments estimation,” International Economic Review, 28, 777-787.
[86] Owen, Art B. (1988): “Empirical likelihood ratio confidence intervals for a single functional,”
Biometrika, 75, 237-249.
[87] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.
[88] Park, Joon Y. and Peter C. B. Phillips (1988): “On the formulation of Wald tests of nonlinear
restrictions,” Econometrica, 56, 1065-1083.
[89] Phillips, Peter C.B. (1989): “Partially identified econometric models,” Econometric Theory,
5, 181-240.
[90] Phillips, Peter C.B. and Sam Ouliaris (1990): “Asymptotic properties of residual based tests
for cointegration,” Econometrica, 58, 165-193.
[91] Politis, D.N. and J.P. Romano (1996): “The stationary bootstrap,” Journal of the American
Statistical Association, 89, 1303-1313.
[92] Pötscher, B.M. (1991): “Effects of model selection on inference,” Econometric Theory, 7,
163-185.
[93] Qin, J. and J. Lawless (1994): “Empirical likelihood and general estimating equations,” The
Annals of Statistics, 22, 300-325.
[94] Ramsey, J. B. (1969): “Tests for specification errors in classical linear least-squares regression
analysis,” Journal of the Royal Statistical Society, Series B, 31, 350-371.
[95] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.
[96] Said, S.E. and D.A. Dickey (1984): “Testing for unit roots in autoregressive-moving average
models of unknown order,” Biometrika, 71, 599-608.
[97] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.
[98] Sargan, J.D. (1958): “The estimation of economic relationships using instrumental variables,”
Econometrica, 26, 393-415.
[99] Shao, Jun (2003): Mathematical Statistics, 2nd edition, Springer.
[100] Sheather, S.J. and M.C. Jones (1991): “A reliable data-based bandwidth selection method
for kernel density estimation,” Journal of the Royal Statistical Society, Series B, 53, 683-690.
[101] Shin, Y. (1994): “A residual-based test of the null of cointegration against the alternative of
no cointegration,” Econometric Theory, 10, 91-115.
[102] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chap-
man and Hall.
[103] Sims, C.A. (1972): “Money, income and causality,” American Economic Review, 62, 540-552.
[104] Sims, C.A. (1980): “Macroeconomics and reality,” Econometrica, 48, 1-48.
[105] Staiger, D. and James H. Stock (1997): “Instrumental variables regression with weak instru-
ments,” Econometrica, 65, 557-586.
[106] Stock, James H. (1987): “Asymptotic properties of least squares estimators of cointegrating
vectors,” Econometrica, 55, 1035-1056.
[107] Stock, James H. (1991): “Confidence intervals for the largest autoregressive root in U.S.
macroeconomic time series,” Journal of Monetary Economics, 28, 435-460.
[108] Stock, James H. and Jonathan H. Wright (2000): “GMM with weak identification,” Econo-
metrica, 68, 1055-1096.
[109] Theil, H. (1953): “Repeated least squares applied to complete equation systems,” The Hague,
Central Planning Bureau, mimeo.
[110] Theil, H. (1971): Principles of Econometrics, New York: Wiley.
[111] Tobin, James (1958): “Estimation of relationships for limited dependent variables,” Econo-
metrica, 26, 24-36.
[112] van der Vaart, A.W. (1998): Asymptotic Statistics, Cambridge University Press.
[113] Wald, A. (1943): “Tests of statistical hypotheses concerning several parameters when the
number of observations is large,” Transactions of the American Mathematical Society, 54,
426-482.
[114] Wang, J. and E. Zivot (1998): “Inference on structural parameters in instrumental variables
regression with weak instruments,” Econometrica, 66, 1389-1404.
[115] White, Halbert (1980): “A heteroskedasticity-consistent covariance matrix estimator and a
direct test for heteroskedasticity,” Econometrica, 48, 817-838.
[116] White, Halbert (1984): Asymptotic Theory for Econometricians, Academic Press.
[117] Wooldridge, Jeffrey M. (2002): Econometric Analysis of Cross Section and Panel Data, MIT
Press.
[118] Zellner, Arnold (1962): “An efficient method of estimating seemingly unrelated regressions,
and tests for aggregation bias,” Journal of the American Statistical Association, 57, 348-368.

22.1 by the arrows drawn to the x-axis. gender.1.S.S. The data are from the 2004 Current Population Survey. You can see that the right tail of the density is much thicker than the left tail. These values are the conditional means of U. we start with f (y j x) . Figure 2. with a college degree and 10-15 years of potential work experience. When x has a continuous distribution with density fx (x) then m (x) is de…ned for those values of x for which fx (x) > 0: In the example presented in Figure 2.16.1. m (x) can have arbitrary shape. These are conditional density functions – the density of hourly wages conditional on race. Take a closer look at the density functions displayed in Figure 2. holding the other variables constant. 3 The support of a random vector x is the closed set of points for which its distribution F (x) is increasing in all elements of x: 1 9 . The two density curves show the e¤ect of gender on the distribution of wages. the mean wage for men is $27. While it is easy to observe that the two densities are unequal. wages in 2004 (conditional on gender.2. from the population of white non-military wage earners in the U. These are indicated in Figure 2.73. See Chapter 16. and conditional for white non-military wages earners with a college degree and 10-15 years of work experience). An important summary measure is the conditional mean2 Z 1 m (x) = E (y j x) = yf (y j x) dy: (2. 2 The conditional mean exists if E jyj < 1: For a rigorous de…nition see Section 2. These are nonparametric density estimates using a normal kernel with the bandwidth selected by cross-validation. the conditional density of y given x: Figure 2.2) 1 The function m (x) varies with the vector x and is thus a function from Rk to R: The conditional mean m (x) is sometimes called the regression function. education and experience.3 Conditional Mean To study how the distribution of y varies with the variables x in the population. it is useful to have numerical measures of the di¤erence.1 displays the density1 of hourly wages for men and women.1: Wage Densities for White College Grads with 10-15 Years Work Experience To illustrate. These are asymmetric (skewed) densities. although in some cases an economic model may dictate a speci…c shape restriction (such as monotonicity) or a speci…c functional form (such as linearity). The regression function m (x) is de…ned for values of x in the support3 of x: Thus when x has a discrete distribution then m (x) is de…ned for those values of x with positive probability. In general. and that for women is $20.

respectively) drawn in with the arrows.36.78 and $18. 5 White non-military male wage earners with 10-15 years of potential work experience. The solid line is the conditional mean.4 is that conditional expectations can be quite non-linear.1 and 2. Figure 2. this is equivalent to measuring the central tendency by the conditional geometric mean exp (E (log y j x)). Mathematically.9. the dashed line is a linear projection approximation which will be discussed in Section 2. the conditional geometric means for the densities in Figure 2. Figure 2.21 and 2. When a distribution is skewed. When the distribution of the control variable takes on multiple values or is continuous.A.1 are $24. The main point to be learned from Figure 2. For example.3 is that the conditional expectation is a useful summary of the central tendency of the conditional distribution when the control variable takes multiple values. 4 10 . the solid line displays the conditional expectation of log wages varying with education.D. the mean is not necessarily a good summary of the central tendency.91.2: Log Wage Densities for White College Grads with 10-15 Years Work Experience The comparisons in Figures 2. We see that the conditional mean is strongly non-linear and non-monotonic. respectively. In this context it is often convenient to transform the data by taking the (natural) logarithm4 .30. wage regressions typically use log wages as a dependent variable rather than the level of wages. and a Ph. As another example. Assuming for simplicity that this is the true joint distribution.2 shows the density of log hourly wages for the same population. Figure 2.3 displays a scatter plot5 of log wages against education levels.2 are facilitated by the fact that the control variable (gender) is binary.which is a common feature of many economic variables. implying an average 36% di¤erence in wage levels. The di¤erence between the mean log wage of men and women is 0. For this reason. The conditional expectation function is close to linear. then comparisons become more complicated. degree in mean log hourly wages is 0. which implies a 30% average wage di¤erence for this population.36. Figure 2. The main lesson to be learned at this point from Figure 2. 6 In the population of white non-military male wage earners with 12 years of education. Of particular interest to graduate students may be the observation that di¤erence between a B. To illustrate.4 displays the conditional mean6 of log hourly wages as a function of labor market experience. The di¤erence in the mean log wage is a more robust measure of the typical wage gap than the di¤erence in the untransformed wage means. with mean log hourly wages (3.
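The practice of reading a difference in mean log wages as an approximate percentage difference in wage levels can be checked directly. The following sketch uses simulated lognormal wages (not the CPS sample; the means and spreads are assumptions chosen only to mimic a 0.30 log-point gap) and compares the log-point gap with the exact percentage gap between the conditional geometric means exp(E(log y | group)).

import numpy as np

rng = np.random.default_rng(0)

# Simulated (not CPS) log wages for two groups with a 0.30 gap in mean logs
log_w_men = rng.normal(3.2, 0.55, size=100_000)
log_w_women = rng.normal(2.9, 0.50, size=100_000)

gap_log = log_w_men.mean() - log_w_women.mean()            # difference in mean log wages
geo_men, geo_women = np.exp(log_w_men.mean()), np.exp(log_w_women.mean())
gap_pct = geo_men / geo_women - 1                          # exact gap in geometric means

print(f"log-point gap:       {gap_log:.3f}")               # about 0.30
print(f"exact percent gap:   {gap_pct:.3f}")               # exp(0.30) - 1, about 0.35
print(f"approximation error: {gap_pct - gap_log:.3f}")

The log-point gap is the convenient summary used in the text; the exact percentage gap is somewhat larger, which is why the approximation is best for modest differences.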

1.1.4 Regression Error The regression error e is de…ned as the di¤erence between y and its conditional mean (2. E(e) = 0: 3. E (h(x)e) = 0 for any function h ( ) such that Eh(x)2 < 1 4. this yields the formula y = m(x) + e: (2. Theorem 2. E (e j x) = E ((y m(x)) j x) = E (y j x) = m(x) m(x) = 0: E (m(x) j x) Proofs of the remaining parts of Theorem 2.1 are left to Exercise 2.1: By the de…nition of e and the linearity of conditional expectations. 11 .1.3: Scatter Plot and Conditional Mean of Log Wages Given Education 2.1 Properties of the regression error e: Under Assumption 2.3) It is useful to understand that the regression error is derived from the joint distribution of (y.Figure 2. E(xe) = 0: Proof of Theorem 2.4. E (e j x) = 0: 2.2.4. and so its properties are derived from this construction. x).1.2) evaluated at the random vector x: e = y m(x): By construction.1.4. We now discuss some of these properties.

suppose that y = xu where x and u are independent and Eu = 1: Then E (y j x) = x so the regression equation is y = x + e where e = x(u 1): Yet e is not independent of x. Typically and generally. or mean squared error E (y m (x))2 = Ee2 2 : (2. A non-stochastic measure of the magnitude of the prediction error is the expectation of the squared error. which is random. As a simple example. It is quite important to understand. The prediction error is e = y m(x). that it does not imply that the distribution of e is independent of x: Sometimes the assumption “e is independent of x” is added as a convenient simpli…cation.4: Log Hourly Wage as a Function of Experience The equations y = m(x) + e E (e j x) = 0: are often stated jointly as the regression framework. These equations hold true by de…nition. x): We state this formally in the following result.Figure 2. because no restrictions have been placed on the joint distribution of the data. for the conditional mean of e is 0 and thus independent of x. we can view m(x) as a predictor or forecast of y. This holds regardless of the joint distribution of (y. since the conditional mean is restricted to equal zero. even though E (e j x) = 0: 2. It turns out that the the conditional mean is a good predictor of y in the sense that it has the lowest mean squared error among all predictors.5 Best Predictor Given a realized value of x. however. The property is also sometimes called mean independence. 12 . This equation is sometimes called a conditional mean restriction. but it is not generic feature of regression. even though the conditional mean of e is zero. not a model. A regression model imposes further restrictions on the permissible class of regression functions m (x) : The condition E (e j x) = 0 is the key implication of the conditional mean model. e and x are jointly dependent.4) The parameter 2 is also known as the variance of the regression error. It is important to understand that this is a framework.
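The y = xu example is easy to verify by simulation. In the sketch below the distributions of x and u are assumptions chosen for illustration (u is exponential so that Eu = 1). The error e = x(u - 1) has mean zero, is uncorrelated with x, and has conditional mean near zero within bins of x, yet its conditional spread grows with x, so e is not independent of x.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x = rng.normal(2.0, 1.0, n)          # assumed distribution for x (illustrative)
u = rng.exponential(1.0, n)          # Eu = 1, independent of x
y = x * u
e = y - x                            # regression error, since E(y|x) = x

print("E(e)  ~", e.mean())           # near 0
print("E(xe) ~", (x * e).mean())     # near 0

# Conditional moments of e within bins of x: the mean stays near 0, the spread does not
for lo, hi in [(0.5, 1.5), (2.5, 3.5)]:
    sel = (x > lo) & (x < hi)
    print(f"x in ({lo},{hi}): E(e|x) ~ {e[sel].mean():.3f}, sd(e|x) ~ {e[sel].std():.3f}")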

The conditional standard deviation is its square root (x) = 2 (x) is that it is the conditional mean of e2 given x. it does not provide information about the spread of the distribution. So while men have higher average wages. treat it as a constant 2 (x) = 2 .1. compare the conditional wage densities for men and women displayed in Figure 2.1.Theorem 2. Speci…cally.6 Conditional Variance While the conditional mean is a good measure of the location of a conditional distribution. yielding the …nal inequality. De…nition 2. we can see that the density for men’ wages is somewhat more spread out than that for women. The right-hand-side after the third equality is minimized by setting g (x) = m (x).1: Since y = m(x) + e. One way to think about As an example of how the conditional variance depends on observables. 2. but is also a di¤erence in spread. The perverse consequences of a narrow-minded focus on the mean has been parodied in a classic joke: 13 . including income and wealth distribution.1 The conditional variance of y given x is 2 (x) = var (y j x) = E y2 j x = E (y = E e2 j x E (y j x))2 j x (E (y j x))2 Generally.5. while the density for women’ s s wages is somewhat more peaked.5. E (y g (x))2 E (y m (x))2 : Proof of Theorem 2. 2 (x) is a non-trivial function of x and can take any form subject to the restriction p 2 (x): that it is non-negative. or treat it as a nuisance parameter (a parameter not of primary interest). This may be unfortunate as dispersion is relevant to many economic topics.6.2.4. The di¤erence between the densities is not just a location shift. the mean squared error using g (x) is E (y g (x))2 = E (e + m (x) = Ee2 + E (m (x) Ee2 = E (y g (x))2 g (x))) + E (m (x) g (x))2 g (x))2 = Ee2 + 2E (e (m (x) m (x))2 where the third equality uses Theorem 2.1 Conditional Mean as Best Predictor Let m (x) = E (y j x) be the conditional mean and let g (x) be any other predictor of y given x: Under Assumption 2. A common measure of the dispersion is the conditional variance. and price dispersion. Indeed. they are also somewhat more dispersed.5. economic inequality.1.1 and that for women is 10.3. the conditional standard deviation for men’ wages is s 12. Many econometric studies focus on the conditional mean m(x) and either ignore the conditional variance 2 (x).
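Theorem 2.5.1 can also be illustrated numerically: any predictor g(x) other than the conditional mean has a larger mean squared error. A minimal sketch, under an assumed design with m(x) = x squared and a heteroskedastic error:

import numpy as np

rng = np.random.default_rng(2)
n = 500_000

x = rng.uniform(-2, 2, n)
m = x**2                                        # true conditional mean (assumed design)
y = m + rng.normal(0, 0.5 + 0.5 * np.abs(x))    # conditional variance depends on x

def mse(pred):
    return np.mean((y - pred)**2)

print("MSE of m(x) = x^2   :", mse(m))                       # smallest
print("MSE of g(x) = 1 + x :", mse(1 + x))                   # larger
print("MSE of g(x) = E(y)  :", mse(np.full(n, y.mean())))    # larger still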

2 (x) Even when the error is heteroskedastic we still de…ne the unconditional variance 2 of the error e as in (2. recall Figure 2. It should always be remembered. but rather because of its simplicity. It is more constructive . but it is unfortunately backwards.7.An economist was standing with one foot in a bucket of boiling water and the other foot in a bucket of ice. De…nition 2. the economist in question ignored variance! 2. 2 In the general case where 2 (x) depends on x we say that the error e is heteroskedastic.2 The error is heteroskedastic if E e2 j x = depends on x. When asked how he felt. and describe heteroskedasticity as an exception or deviance.) Older textbooks also tend to describe homoskedasticity as a component of a correct regression speci…cation. In apparent contraction to the above statement. 14 . The correct view is that heteroskedasticity is generic and “standard” while homoskedasticity is unusual and excep.1 and how the variance of wages varies between men and women. The default in empirical work should be to assume that the errors are heteroskedastic. This description has in‡ uenced many generations of economists. tional.4). “On average I feel just …ne. to understand that heteroskedasticity is the case where the conditional variance 2 (x) depends on the variables x: (Once again. It may be helpful to notice that by using iterated expectations the unconditional variance can be written as the expected conditional error variance 2 = E e2 = E E e2 j x =E 2 (x) : Thus 2 is well-de…ned whether or not the error is homoskedastic or heteroskedastic. however. and it is therefore quite advantageous for teaching and learning. not the converse. Some older or introductory textbooks describe heteroskedasticity as the case where “the variance of e varies across observations” This is a poor and confusing de…nition. he replied.” Clearly. we will still frequently impose the homoskedasticity assumption when making theoretical investigations into the properties of regression techniques. that homoskedasticity is never imposed because it is believed to be a correct feature of an empirical regression.7. De…nition 2. The reason is that in many cases homoskedasticity greatly simpli…es the theoretical calculations.1 The error is homoskedastic if E e2 j x = does not depend on x.7 Homoskedasticity and Heteroskedasticity 2 (x) An important special case obtains when the conditional variance of the regression error is a constant and independent of x. This is called homoskedasticity.
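The identity relating the unconditional variance to the expected conditional variance by iterated expectations is easy to confirm by simulation. The design below (uniform x and conditional standard deviation 0.5 + x) is an assumption for illustration only.

import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x = rng.uniform(0, 2, n)
sigma_x = 0.5 + x                        # conditional standard deviation sigma(x)
e = rng.normal(0, sigma_x)               # heteroskedastic error with E(e|x) = 0

uncond_var = e.var()                     # sigma^2 = E(e^2)
expected_cond_var = np.mean(sigma_x**2)  # E[sigma^2(x)]

print(uncond_var, expected_cond_var)     # both approximately 2.58 for this design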


we can de…ne a linear approximation to the conditional mean function as the linear function with the lowest mean squared error among all linear predictors.9) as 2E (xy) = 2E xx0 dividing by 2.2. The mean squared error of this predictor is 2 S( ) = E y x0 : The best linear predictor of y given x is de…ned by …nding the vector which minimizes S( ): De…nition 2. In this section we derive a speci…c approximation with a simple interpretation. Theorem 2. and then inverting the k k matrix E (xx0 ) .9) is 0= @ S( ) = @ 2E (xy) + 2E xx0 : (2. its functional form is typically unknown. Assumption 2. the linear equation of the previous section is empirically unlikely to be accurate.1 Q = E (xx0 ) is invertible. The quadratic structure of S( ) means that we can solve explicitly for prediction error can be written out as a quadratic function of : S( ) = Ey 2 2 0 E (xy) + 0 : The mean squared E xx0 The …rst-order condition for minimization (from Appendix A.8) is called the Linear Projection Coe¢ cient. In practice it is more realistic to view the linear speci…cation (2.1 showed that the conditional mean m (x) is the best predictor in the sense that it has the lowest mean squared error among all predictors.5. To be precise. In particular. a linear predictor for y given x is x0 for some 2 Rk .9. 16 . The matrix Q is sometimes called the design matrix.6) as an approximation.9) This has a unique solution under the following condition. as in experimental settings the researcher is able to control Q by manipulating the distribution of the regressors x: Rewriting (2.9 Best Linear Predictor While the conditional mean m(x) = E (y j x) is the best predictor of y among all functions of x. we obtain the solution for . By extension.9.1 The Best Linear Predictor of y given x is x0 . where minimizes the mean squared error S( ) = E y The minimizer = argmin S( ) 2Rk x0 2 : (2.

In this case (2. We call x0 the best linear predictor of y given x. Thus the linear model (2. then (2. the linear projection coe¢ cient equals = E xx0 1 E (xy) : (2. and are discussed in Chapter 11.2.1 is presented below. x0 is the best linear predictor for y: The projection error is e=y x0 : (2. E(xx0 ) or E (xy) (E (xx )) Given the de…nition of in (2.12) the linear projection model. E (xe) = 0: (2. it is important not to misinterpret the generality of this statement. or the linear projection of y onto x: In general we call equation (2.2. Rewriting.9. for any pair (y. in many economic models the parameter may be de…ned within the model.1. we obtain a decomposition of y into linear predictor and error y = x0 + e: (2. 2 = E e2 < 1. The linear equation (2.10).9. We have shown that under mild regularity conditions.2 Properties of Linear Projection Model Under Assumptions 2. However.1 may be false.10) It is worth taking the time to understand the notation involved in the expression (2.9.11) The error e from the linear prediction equation is equal to the error from the regression equation when (and only when) the conditional mean is linear in x.12) exists quite generally.Theorem 2. Therefore. E (xx0 ) is a k k matrix and E (xy) is a k 1 column vector. The following are important properties of the model.10). Theorem 2. In contrast.9.1.1.14) A complete proof of Theorem 2. These structural models require alternative estimation methods.12) is de…ned as the best linear predictor.1 and 2.12) This completes the derivation of the model. Linear Projection Model y = x0 + e: E (xe) = 0 = E xx0 1 E (xy) 17 .9.12) exist and are unique.1 Linear Projection Coe¢ cient Under Assumptions 2.13) and (2.9. alternative expressions such as E(xy) 1 0 are incoherent and incorrect. No additional assumptions are required.9.12) with the properties listed in Theorem 2.1 and 2. x) we can de…ne a linear equation (2. otherwise they are distinct.10) may not hold and the implications of Theorem 2.11) and (2.

1. which requires that for all non-zero . x1 = 1.14) is a set of k equations. In other words. (When x does not have a constant. (2. the best linear predictor x0 still identi…ed.16) together imply that the variables xj and e are uncorrelated. As it is desireable for e to have a zero mean. by the Cauchy-Schwarz Inequality (C.17) When Q is not invertible there are multiple solutions to (2. this is not guarenteed. Invertibility and Identi…cation The vector (2. Even so.5). there cannot exist a non-zero vector such that 0 x = 0 identically.10) exists and is unique as long as the k invertible.17).15)-(2.Equation (2.) It is also useful to observe that since cov(xj . E ( 0 x)2 > 0: Equivalently.1 We …rst show that the moments E (xy) and E (xx0 ) are …nite and well de…ned. Proof of Theorem 2.1 E jxj yj Ex2 j 1=2 Ey 2 1=2 < 1: 18 . this situation must be excluded. :::. Observe that for any non-zero 2 Rk .16) Thus the projection error has a mean of zero when the regression contains a constant. this is a good reason to always include a constant in any regression.5).2.14) is equivalent to E (xj e) = 0 (2. :::.9. In this case the coe¢ cient is not identi…ed as it does not have a unique value. it is useful to note that Assumption 2.3) and Assumption 2. Theorem 2. k. then (2. This occurs when redundant variables are included in x: In order for to be uniquely de…ned. e) = E (xj e) E (xj ) E (e) .2. One solution is to set = E xx0 E (xy) where A denotes the generalized inverse of A (see Appendix A. Otherwise.18) Note that for j = 1.g.9. one for each regressor. the regressor vector x typically contains a constant.2. It is invertible if and only if it is positive de…nite. 0 k matrix Q = E (xx0 ) is 0 Q =E 0 xx0 =E 0 x 2 so Q by construction is positive semi-de…nite. k: As in (2.15) for j = 1. there is no unique solution to the equation E xx0 = E (xy) : (2. all of which yield an equivalent best linear predictor x0 . The key is invertibility of Q.9.1 and 2. First.1 shows that the linear projection coe¢ cient is identi…ed (uniquely determined) under Assumptions 2.15) for j = 1 is the same as E (e) = 0: (2.1 implies that E kxk2 = E x0 x = k X j=1 Ex2 < 1: j (2. e. In this case (2.

5).1. Assumption 2.19) we …nd s E e2 1=2 2 1=2 2 1=2 = E y Ey 2 x0 1=2 + E x0 < 1 establishing (2. Taking expectations of this equation.11) and (2.2. since E (e) = 0 from (2. (2.16). Next.21) .10) states that = (E (xx0 )) 1 E (xy) which is well de…ned since (E (xx0 )) 1 exists under Assumption 2.20) we …nd y y = (x 19 0 x) + e. we …nd = y 0 x : Subtracting this equation from (2. x0 E xx0 E xx0 1 = I and Ia = a.13).6) implies (x0 )2 kxk2 k k2 and therefore combined with (2.3) shows that for any j E jxj ej Ex2 j 1=2 Ee2 1=2 <1 1 and therefore the elements in the vector E (xe) are well de…ned and …nite. Rearranging. note that the jl’ element th of E (xx0 ) is E (xj xl ) : Observe that E jxj xl j Ex2 j 1=2 Ex2 l 1=2 < 1: Thus all elements of the matrix E (xx0 ) are …nite. and (2.9.19) Using Minkowski’ Inequality (C. An application of the Cauchy-Schwarz Inequality (C.10).Thus the elements in the vector E (xy) are well de…ned and …nite. and write the regression equation in the format y = + x0 + e (2. E (xy) = 0 2.1. and the matrix properties that AA E (xe) = E x y = E (xy) completing the proof. Using the de…nitions (2.18) we see that 2 E x0 E kxk2 k k2 < 1: (2.20) where is the intercept and x does not contain a constant. It follows that e = y x0 as de…ned in (2. Equation (2.11) is also well de…ned. Note the Schwarz Inequality (A. we …nd Ey = E + Ex0 + Ee or y = + 0 x where y = Ey and x = Ex.10 Regression Coe¢ cients Sometimes it is useful to separate the intercept from the other regressors.

= E (x x ) (x 1 x) 0 1 E (x x) y y = cov (x.25) Similar to the best linear predictor we are measuring accuracy by expected squared error. z) = E (x covariance matrix of x: 8 Ex) (z Ez)0 : We call cov (x. y) a function only of the covariances8 of x and y: Theorem 2. (They are centered at their means.25) selects to minimize the expected squared approximation error. (2. it turns out that the best linear predictor and the best linear approximation are identical. otherwise d( ) > 0: We can also view the mean-square di¤erence d( ) as a densityweighted average of the function (m(x) x0 )2 : We can then de…ne the best linear approximation to the conditional m(x) as the function x0 obtained by selecting to minimize d( ) : = argmin d( ): 2Rk (2. x) the 20 . The covariance matrix between vectors x and z is cov (x.14).8) selects to minimize the expected squared prediction error. (2. We start by de…ning the mean-square approximation error of x0 to m(x) as the expected squared di¤erence between x0 and the conditional mean m(x) d( ) = E m(x) x0 2 : (2. x) cov (x. Thus (2.11 Best Linear Approximation There are alternative ways we could construct a linear approximation x0 to the conditional mean m(x): In this section we show that one natural approach turns out to yield the same answer as the best linear predictor. thus by the formula for the linear projection model. We conclude that the de…nition (2.21) is also a linear projection. The di¤erence is that the best linear predictor (2.a linear equation between the centered variables y y and x x .24) The function d( ) is a measure of the deviation of x0 from m(x): If the two functions are identical then d( ) = 0.26) (2. y) : 2.23) cov (x.22) (2. x) 1 y 0 x + x0 + e.1 In the linear projection model y= then = and = cov (x. Despite the di¤erent de…nitions.25) can be viewed as an alternative motivation for the linear projection coe¢ cient.) Because x x is uncorrelated with e.25) equals (2. By the same steps as in (2. while the best linear approximation (2.8).10. or equivalently are mean-zero random variables.9) plus an application of conditional expectations we can …nd that = = E xx0 E xx0 1 1 E (xm(x)) E (xy) (2.27) (see Exercise 2.

and since they are jointly normal and uncorrelated (since E (xe) = 0) they are also independent (see Appendix B. This motivation has limited merit in econometric applications since economic data is typically non-normal.g. Consider the best linear predictor of y on x y = x0 + e: = E xx0 1 E (xy) : Since the error e is a linear transformation of the normal vector (y. This implies that on average a child’ height is more mediocre than his or her parent’ height. x) are jointly normally distributed.13 Regression to the Mean The term regression originated in an in‡ uential paper by Francis Galton published in 1886. 2 ) is independent of x.12 Normal Regression Suppose the variables (y. the heights of children and parents in a stable environment) then the regression slope in a linear projection is always less than one. Galton s discovered that this conditional mean was approximately linear with a slope of 2/3. x) is jointly normal. he was estimating the conditional mean of children’ height given their parents height. 2. This is an alternative (and traditional) motivation for the linear regression model. Galton called s s this phenomenon regression to the mean. Assume that y and x have the same mean.22) = (1 so we can write the conditional mean of (2. they satisfy a normal linear regression y = x0 + e where e N (0. To be more precise.9). so that y = x = : Then from (2. where he examined the joint distribution of the stature (height) of parents and children. and the label regression has stuck to this day to describe most conditional relationships.2. Independence implies that E (e j x) = E (e) = 0 and E e2 j x = E e2 = 2 which are properties of a homoskedastic linear conditional regression. with the weight equal to the regression slope : When the height 21 . it follows that (e. One of Galton’ fundamental insights was to recognize that if the marginal distributions of y s and x are the same (e. take the simple regression y= +x +e (2.28) where y equals the height of the child and x equals the height of the parent.28) as E (y j x) = (1 ) +x : ) This shows that the expected height of the child is a weighted average of the population average height and the parents height x. We have shown that when (y. x). E¤ectively. x) are jointly normally.

28). non-converging) means and variances. It is easy for intelligent economists to succumb to its temptation. The regression fallacy is subtle.distribution is stable across generations. A common error –known as the regression fallacy –is to infer from < 1 that the population is converging9 . equation (B. the standard deviation.7) in the Appendix). then this slope is the simple correlation of y and x: Using (2. 22 . the regression property implies that the plots lines will converge –children’ height will be more average than their parents. y): var(x) By the properties of correlation (e. In addition to inventing the concept of regression. and the bivariate normal distribution. and plotted the average pro…ts of these groups for the subsequent 10 years. is strictly less than 1. published in 1933.g. It cannot be anything else. If the population is stable. when he divided the stores into groups based on 1920-1921 pro…ts. stable. A slope less than one does not imply that the variance of y is less than than the variance of x: Another way of seeing this is to examine the conditions for convergence in the context of equation (2. there was no discovery – regression to .g. The message is that such plots are misleading for inferences about convergence. allowing the application of the tools of mathematical statistics to the social sciences. A famous example is The Triumph of Mediocrity in Business by Horace Secrist. he is credited with introducing the concepts of correlation. so that var(y) = var(x). The regression fallacy is to incorrectly s conclude that the population is converging. y) = 1 only in the degenerate case y = x: Thus if we exclude degeneracy. y) 1. with corr(x. Suppose you sort families into groups by the heights of the parents. he found clear and persuasive evidence for convergence “toward mediocrity” Of course. the slope coe¢ cient must be less than one. This is a fallacy because we have shown that under the assumption of constant (e. 1 corr(x. In this book. and then plot the average heights of each subsequent generation over time. it follows that var(y) = Then var(y) < var(x) if and only if 2 2 var(x) + var(e): var(e) var(x) <1 which is not implied by the simple condition j j < 1: The regression fallacy arises in related empirical situations. His work on heredity made a signi…cant intellectual advance by examing the joint distributions of observables. the mean is a necessary feature of stable distributions. Sir Francis Galton Sir Francis Galton (1822-1911) of England was one of the leading …gures in late 19th century statistics.23) = cov (x. y) = corr(x. Secrist carefully and with great detail documented that in a sample of department stores over 1920-1930. This means that on average a child’ height is more mediocre (closer to the population average) s than the parent’ s. Since x and e are uncorrelated. 9 A population is converging if its variance is declining towards zero.

28). divide through by and rewrite to …nd the equation x= +y 1 1 e (2. because it is a simple manipulation of the valid equation (2. and intercept of . Instead. suppose we take a normally distributed random variable x N (0.30) is not a regression equation. Galton’ …nding was that when the variables are standardized. However.30).) This regression takes the form x= +y +e : (2. and which are exactly the same as in the regression of y on x! The intercept and slope have exactly the same values in the forward and reverse regression! While this algebraic discovery is quite simple. a common yet mistaken guess for the form of the reverse regression is to take the regression (2. the converse is not true as the projection error does not necessarily satisfy E (e j x) = 0: To see this in a simple example.30) suggesting that the regression of x on y should have a slope coe¢ cient of 1= instead of .2. it is counter-intuitive. The intercept and slope may be calculated as = = = 1 E (x) E (x) E x2 1 E (x) E (x) E x2 1 0 1 E (y) E (xy) E x2 E x3 1 Thus the linear projection equation takes the form y= +x +e 23 . In any event.4 we know that the regression error has the property E (xe) = 0: Thus a linear regression is a linear projection.29) This is sometimes called the reverse regression.= rather than : What went wrong? Equation (2.30) is perfectly valid. There is nothing special about a regression of y on x: We can also regress x on y: (In his heredity example this is the best linear predictor of the height of parents given the height of their children. In this equation. It is not a causal relation. 2. not (2. The trouble is that (2. Inverting a regression does not yield a regression. (2. and both equations exhibit regression to the mean.14 Reverse Regression Galton noticed another interesting feature of the bivariate distribution. Instead. the slope in both s regressions (y on x.4.28). s From Theorem 2.15 Limitations of the Best Linear Predictor Let’ compare the linear projection and linear regression models. 1) and set y = x2 : Note that y is a deterministic function of x! Now consider the linear projection of y on x and an intercept. In a stable population we …nd that = corr(x.29) is a valid regression.1. and x and y) equals the correlation. but a natural feature of all joint distributions. the coe¢ cients error e are de…ned by linear projection. y) = = (1 ) = .

since the resulting function is quadratic in experience: Other than the rede…nition of the regressor vector. there are no changes in our methods or analysis. Return for a moment to the joint distributions displayed in Figures 2.4. The separate linear projections of y on x for these two groups are displayed in the Figure by the dashed lines. Figure 2. and Group 1 has a lower mean value of x than Group 2. yet E (e j x) = x2 1 6= 0: In this simple example e is a deterministic function of x. We illustrate the issue in Figure 2. In the example of Figure 2. in Figure 2.5 for a constructed10 joint distribution of y and x. In these …gures. Most importantly. 1). it misses the strong downturn in expected wages for those above 35 years work experience (equivalently.3 (the conditional mean of log hourly wages as a function of education) the conditional mean and linear projection are quite close to one another. so the linear projection is a poor approximation. yet e and x are uncorrelated! The point is that a projection error need not be a regression error.where = 1. These two projections are distinct approximations to the conditional mean. In Figure 2. It over-predicts wages for young and old workers. This defect in the best linear predictor can be partially corrected through a careful selection of regressors. for those over 53 in age). 1) where m(x) = 2x x2 =6: 10 24 . In this example it is a much better approximation to the conditional mean than the linear projection. and the conditional distriubtion of y given x is N(m(x). = 0 and e = x2 1: Observe that E (e) = E x2 1 = 0 and E (xe) = E x3 E (e) = 0.4 (the conditional mean of log hourly wages as a function of labor market experience) the conditional mean is quite nonlinear.4 we display as well the quadratic projection. A defect with linear projection is that it leads to the incorrect conclusion that the e¤ect of x on y is di¤erent for individuals in the two Groups.4.3 and 2. However. This conclusion is incorrect because in fact there is no di¤erence in the conditional mean function. and under-predicts for the rest. The solid line is the non-linear conditional mean of y given x: The data are divided in two – Group 1 and Group 2 – which have di¤erent marginal distributions for the regressor x. 1) and those in Group 2 are N(4. In this example the linear predictor is a close approximation to the conditional mean. The apparant di¤erence is a by-product of a linear approximation The x in Group 1 are N(2. the solid lines are the conditional means and the straight dashed lines are the linear projections.5: Conditional Mean and Two Linear Projections Another defect of linear projection is that it is sensitive to the marginal distribution of the regressors when the conditional mean is non-linear. we can augment the regressor vector x to include both experience and experience2 : The best linear predictor of log wages given these two variables can be called a quadratic projection. In Figure 2.

The function m(x) de…ned by (2.31) specializes to (2. Theorem 6. Identi…cation is a necessary pre-condition for estimation. we say that the parameter is identi…ed. consider the ratio of means = 1 = 2 where 1 = Ey1 and 2 = Ey2 . 2.16. the conditional mean m(x) = E (y j x) is identi…ed for x 2 S where P(S ) = 1: 25 .16 Identi…cation of the Conditional Mean When a parameter is uniquely determined by the distribution of the observable variables. in the sense that if h(x) satis…es (2. for example.16.31) The function m(x) is almost everywhere unique. x) have a joint density. and is thus is identi…ed. and an identi…cation theorem carefully describes a set of such conditions which are su¢ cient for identi…cation. Ash (1972). but if 2 = 0 then is unde…ned. and Ey2 6= 0: Unless these conditions hold. it is meaningless to estimate : Now consider the conditional mean m(x) = E (y j x). y2 ) under the restrictions E jy1 j < 1.31). Theorem 2. It is well de…ned when 1 and 2 are both …nite and 2 6= 0.3. For example. Theorem 2. it is meaningless to attempt to estimate Ey: As another example. consider the unconditional mean = Ey. Thus is identi…ed from the distribution of (y1 . Under which conditions is m(x) de…ned and unique? The answer is provided in the following deep result from probability theory. E jy2 j < 1.1 Existence of the Conditional Mean If E jyj < 1 then there exists a function m(x) such that for all measurable sets X E (1 (x 2 X ) y) = E (1 (x 2 X ) m(x)) : (2.3. Theorem 2. which establishes the existence of the conditional mean.2 Identi…cation of the Conditional Mean If E jyj < 1.16. combined with di¤erent marginal distributions for the conditioning variables. identi…cation only holds under a set of restrictions.2) when (y.1 shows that the conditional mean function m(x) exists and is almost everywhere unique. then there is a set S such that P(S ) = 1 and m(x) = h(x) for x 2 S : The function m(x) is called the conditional mean and is written m(x) = E (y j x) : See. Typically. It is well de…ned and unique for all distributions for which E jyj < 1: Thus the mean is identi…ed from the distribution of y under the restriction E jyj < 1: Unless E jyj < 1.to a non-linear mean.

::: Compute E (y j x) and var (y j x) : Does this justify a linear regression model of the form y = x0 + e? j Hint: If P (y = j) = exp( j! ) .4 Use y = m(x) + e to show that var (y) = var (m(x)) + 2 Exercise 2.5 Suppose that y is discrete-valued. If y = x0 + e and E (e j x) = 0. 0 y 1: 2 Compute the coe¢ cients of the best linear predictor y = + x + e: Compute the conditional mean m(x) = E (y j x) : Are the best linear predictor and conditional mean di¤erent? Exercise 2.2 Suppose that the random variables y and x only take the values 0 and 1. If y = x + e. taking values only on the non-negative integers. and have the following joint probability distribution y=0 y=1 x=0 . 2 2.1 Prove parts 2. s) = 0 if and only if m = and s = 26 .6 Let x and y have the joint density f (x. 2.12 Let x be a random variable with g xj .7 True or False. Exercise 2.3 Show that 2 (x) is the best predictor of e2 given x: (a) Write down the mean-squared error of a predictor h(x) for e2 : (b) What does it mean to be predicting e2 ? (c) Show that 2 (x) minimizes the mean-squared error and is thus the best predictor. and E (xe) = 0. 3 and 4 of Theorem 2. then E (e j x) = 0: Exercise 2. then E x2 e = 0: Exercise 2.1 .2 .8 True or False. x 2 R. If y = x0 + e. and the conditional distribution of y given x is Poisson: exp ( x0 ) (x0 )j . Exercise 2.4 x=1 .1.4. then E x2 e = 0: Exercise 2.11 True or False. y) = 3 x2 + y 2 on 0 x 1. then = Ex and x (x )2 2: 2 2 = var(x): De…ne : = Show that Eg (x j m. E (e j x) = 0. a constant.9 True or False. If y = x + e. E y 2 j x and var (y j x) for x = 0 and x = 1: Exercise 2. then e is independent of x: Exercise 2.10 True or False. then Ey = and var(y) = : Exercise 2. and E e2 j x = e is independent of x: Exercise 2. P (y = j j x) = j! j = 0. 1. If y = x0 + e and E(xe) = 0. x 2 R.3 Find E (y j x) .Exercises Exercise 2. and E (e j x) = 0.

(Be explicit. namely that for d( ) = E m(x) then = argmin d( ) 2Rk x0 2 = = E xx0 E xx0 1 1 E (xm(x)) E (xy) : Hint: To show E (xm(x)) = E (xy) use the law of iterated expectations.27).26)-(2.) Exercise 2.Exercise 2. (b) Use a linear transformation of x to …nd an expression for the best linear predictor of y given x. do not just use the generalized inverse formula. 27 .14 Show (2.13 Suppose that and x3 = 1 + 2 x2 is a linear function of x2 : 1 1 x = @ x2 A x3 0 (a) Show that Q = E (xx0 ) is not invertible.

5): ^ = argmin Sn ( ): 2Rk 28 . and de…ne the estimator of the parameter as the minimizer of the empirical function. :::. 2Rk (3.3) (3.4) E (xy) : When a parameter is de…ned as the minimizer of a function as in (3. Most of the discussion will be algebraic.1 Introduction In this chapter we introduce the popular least-squares estimator. x) 2 R Rk .2 Least Squares Estimator In Section 2. Applied to observations from a random sample with observations (yi . n) this model takes the form yi = x0 + ei i where is de…ned as = argmin S( ).Chapter 3 The Algebra of Least Squares 3.1) (3. a standard approach to estimation is to construct an empirical analog of the function. and called this the linear projection model. The empirical analog of the expected squared error (3. with questions of distribution and inference defered to later chapters.2) 2 S( ) = E yi and = E xx0 x0 i 1 . (3.3) is the sample average squared error Sn ( ) = = where SSEn ( ) = 1X yi n i=1 n x0 i 2 (3.5) 1 SSEn ( ) n n X i=1 yi x0 i 2 is called the sum-of-squared-errors function. xi : i = 1. 3.2). An estimator for is the minimizer of (3.9 we derived and discussed the best linear predictor of y given x for a pair of random variables (y.

3 Solving for Least Squares n X i=1 n X i=1 n X i=1 To solve for ^ . The …rst-order-condition for minimization of SSEn ( ) is n n X X @ 0= Sn ( ^ ) = 2 xi yi + 2 xi x0 ^ : (3.1: Sum-of-Squared Errors Function Alternatively.1 displays an example sum-of-squared errors function SSEn ( ) for the case k = 2: The least-squares estimator ^ is the the pair ( ^ 1 . as Sn ( ) is a scale multiple of SSEn ( ). and can This is the natural estimator of the best linear prediction coe¢ cient also be called the linear projection estimator.Figure 3. Figure 3. expand the SSE function to …nd SSEn ( ) = 2 yi 2 0 xi yi + 0 xi x0 i which is quadratic in the vector argument . 29 .6) i @ i=1 i=1 P By inverting the k k matrix n xi x0 we …nd an explicit formula for the least-squares estimator i i=1 ^= n X i=1 xi x0 i ! 1 n X i=1 xi yi ! : (3.7) de…ned in (3. ^ 2 ) minimizing this function. we may equivalently de…ne ^ as the minimizer of SSEn ( ): Hence ^ is commonly called the least-squares estimator of . To visualize the quadratic function Sn ( ).2). 3.
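A minimal sketch of formula (3.7), accumulating the sums of x_i x_i' and x_i y_i and then solving the resulting k by k linear system. The data are simulated and the coefficients are assumptions, not the CPS example used in the illustration that follows.

import numpy as np

rng = np.random.default_rng(7)
n, beta_true = 1_000, np.array([1.0, 0.5, -0.2])

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept plus two regressors
y = X @ beta_true + rng.normal(size=n)

# Accumulate the two sums appearing in (3.7)
Sxx = np.zeros((3, 3))
Sxy = np.zeros(3)
for xi, yi in zip(X, y):
    Sxx += np.outer(xi, xi)
    Sxy += xi * yi

beta_hat = np.linalg.solve(Sxx, Sxy)        # numerically identical to (X'X)^(-1) X'y
print(beta_hat)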

4) with the sample moments: ^ = = b E xi x0 i b E (xi yi ) ! 1 ! n n 1X 1X 0 xi xi x i yi n n i=1 i=1 ! 1 n ! n X X 0 xi xi xi yi i=1 i=1 1 b E xi x0 i = n 1X xi x0 : i n = which is identical with (3. equation (3. excluding 30 .3. written by several generations of scholars from the 10th to 2nd century BCE. consider the data used to generate Figure 2.4) writes the projection coe¢ cient as an explicit function of the population moments E (xi yi ) and E (xi x0 ) : Their moment estimators are the sample moments i b E (xi yi ) = 1X xi yi n i=1 i=1 n The moment estimator of replaces the population moments in (3.7).Early Use of Matrices The earliest known treatment of the use of matrix methods to solve simultaneous systems is found in Chapter 8 of the Chinese text The Nine Chapters on the Mathematical Art. Least Squares Estimation De…nition 3.1 The least-squares estimator ^ is ^ = argmin Sn ( ) 2Rk where and has the solution ^= 1X yi Sn ( ) = n i=1 n X i=1 n x0 i 2 xi x0 i ! 1 n X i=1 xi yi ! : To illustrate least-squares estimation in practice. Alternatively.3. These are white male wage earners from the March 2004 Current Population Survey.

Legendre’ goal was to select s to make the set of errors as small as possible. A multivariate regression has two or more regressors. He proposed the sum of squared error criterion. but now including all levels of experience. This was a vexing and common problem in astronomical measurement. and derived the algebraic solution presented above. Let yi be log wages and xi be an intercept and years of education. This sample has 988 observations.9) An interpretation of the estimated equation is that each year of education is associated with an 11% increase in mean wages. As he noted. Adrien-Marie Legendre The method of least-squares was …rst published in 1805 by the French mathematician Adrien-Marie Legendre (1752-1833). Legendre proposed least-squares as a solution to the algebraic problem of solving a system of equations when the number of equations exceeded the number of unknowns.10) These estimates suggest a 10% increase in mean wages per year of education. s 31 .military. Let’ redo the s example. we obtain the estimates \ log(W age) = 0:959 + 0:100 education + 0:053 experience 0:095 experience2 =100: (3. As viewed by Legendre. Equation (3. Then 1X x i yi = n i=1 n 2:951 42:405 and Thus 1X xi x0 = i n i=1 n 1 14:136 14:136 205:826 : ^ = = 1 14:136 14:136 205:826 1: 33 0:115 : 1 2:951 42:405 (3. with 10-15 years of potential work experience.1) is a set of n equations with k unknowns. (3. As the equations cannot be solved exactly. This expanded sample includes 6578 observations.6) is a system of k equations with k unknowns.9) is called a bivariate regression as there are only two variables. which can be solved by “ordinary”methods. Hence the method became known as Ordinary Least Squares and to this day we still use the abbreviation OLS to refer to Legendre’ estimation method. the …rst-order conditions (3. and allows a more detailed investigation. Including as regressors years of experience and its square (experience 2 =100) (we divide by 100 to simplify reporting).8) We often write the estimated equation using the format \ log(W age) = 1:33 + 0:115 education: (3.
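A back-of-the-envelope check of the quadratic experience profile in (3.10): predicted log wages rise with experience until the derivative 0.053 - 2(0.095/100) x experience reaches zero. The short calculation below uses only the reported coefficients.

# Coefficients as reported in equation (3.10); experience^2 enters divided by 100
b0, b_ed, b_ex, b_ex2 = 0.959, 0.100, 0.053, -0.095

def predicted_log_wage(education, experience):
    return b0 + b_ed * education + b_ex * experience + b_ex2 * experience**2 / 100

# Experience level at which predicted log wage peaks
peak = -b_ex / (2 * b_ex2 / 100)
print(f"peak experience: {peak:.1f} years")      # roughly 28 years
print(f"log wage at 16 yrs educ, 10 yrs exp: {predicted_log_wage(16, 10):.3f}")
print(f"log wage at 16 yrs educ, 28 yrs exp: {predicted_log_wage(16, 28):.3f}")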

Given the residuals. . using (3.4 Least Squares Residuals yi = x 0 ^ ^ i As a by-product of estimation. including computation.13) 3.3.13): ^2 = 1X 2 ei : ^ n i=1 n (3. which can cause confusion.11) and (3. it is convenient to write the model and statistics in matrix notation.5 Model in Matrix Notation For many purposes. one for each observation. Equation (3. we can construct an estimator for 2 as de…ned in (2. we de…ne the …tted or predicted value and the residual ei = yi ^ yi = yi ^ x0 ^ : i (3. These are algebraic results.11) Note that yi = yi + ei : We make a distinction between the error ei and the residual ei : The ^ ^ ^ error ei is unobservable while the residual ei is a by-product of estimation. 1X xi ei = ^ n i=1 n 1X xi yi n i=1 n 1X xi yi n i=1 n x0 ^ i 1X xi x0 ^ i n i=1 n n = = = 1 n = 0: n 1X xi yi n i=1 n X i=1 xi yi n X i=1 1X xi x0 i n i=1 n X i=1 xi x0 i ! 1 n X i=1 xi yi ! xi yi When xi contains a constant.12) is a system of n equations. These two variables are ^ frequently mislabeled.6) implies that n 1X xi ei = 0: ^ (3. We can stack these n equations together as y1 = x0 + e1 1 y2 = x0 + e2 2 . an implication of (3. yn = x0 + en : n 32 . The linear equation (2.12) n i=1 To see this by a direct calculation.7).12) is 1X ei = 0: ^ n i=1 n Thus the residuals have a sample mean of zero and the sample correlation between the regressors and the residual is zero. and hold true for all linear regression estimates. .

Now define
$$ y = (y_1, y_2, \ldots, y_n)', \qquad e = (e_1, e_2, \ldots, e_n)', $$
and let $X$ denote the matrix whose $i$'th row is $x_i'$. Observe that $y$ and $e$ are $n \times 1$ vectors, and $X$ is an $n \times k$ matrix. Then the system of $n$ equations can be compactly written in the single equation
$y = X\beta + e.$ Sample sums can also be written in matrix notation. For example
$$ \sum_{i=1}^{n} x_i x_i' = X'X \qquad \text{and} \qquad \sum_{i=1}^{n} x_i y_i = X'y. $$
Therefore
$$ \hat\beta = \left(X'X\right)^{-1} X'y. \tag{3.14} $$

Using matrix notation we have simple expressions for most estimators. This is particularly convenient for computer programming, as most languages allow matrix notation and manipulation.

Important Matrix Expressions y = X +e ^ = X 0X e = y X^ ^ ^2 = n
1 0 1

X 0y

e e: ^^

3.6

Projection Matrices
Define the matrices
$$ P = X\left(X'X\right)^{-1} X' $$
and
$$ M = I_n - X\left(X'X\right)^{-1} X' = I_n - P $$

where I n is the n n identity matrix. P and M are called projection matrices due to the property that for any matrix Z which can be written as Z = X for some matrix (we say that Z lies in the range space of X); then PZ = PX and M Z = (I n P)Z = Z 33 PZ = Z Z = 0: = X X 0X
1

X 0X

=X

=Z

As an important example of this property, partition the matrix X into two matrices X 1 and X 2 so that X = [X 1 X 2 ] : Then P X 1 = X 1 and M X 1 = 0: It follows that M X = 0 and M P = 0; so M and P are orthogonal. The matrices P and M are symmetric and idempotent1 . To see that P is symmetric, P0 = = X X 0X X0
0 1

X0

0

X 0X
0 0 1

1 0

(X)0

= X

X 0X

X0
1

= X (X)0 X 0 = P: To establish that it is idempotent, X X 0X
1 1 1

X0

PP

=

X0

X X 0X
1

1

X0

= X X 0X = X X 0X = P: Similarly, M 0 = (I n and MM

X 0X X 0X X0

X0

P )0 = I n

P =M

= M (I n = M = M;

P)

MP

since M P = 0: Another useful property is that tr P tr M = k = n k (3.15) (3.16)

(See Appendix A.4 for de…nition and properties of the trace operator.) To show (3.15) and (3.16), tr P = tr X X 0 X = tr = k; and tr M = tr (I n
1

1

X0

X 0X

1

X 0X

= tr (I k )

P ) = tr (I n )

tr (P ) = n

k:

A matrix P is symmetric if P 0 = P : A matrix P is idempotent if P P = P: See Appendix A.8.

34

Given the de…nitions of P and M ; observe that y = X ^ = X X 0X ^ and e=y ^ X^ = y P y = M y: (3.17) Furthermore, since y = X + e and M X = 0; then e = M (X + e) = M e: ^ Another way of writing (3.17) is y = (P + M ) y = P y + M y = y + e: ^ ^ This decomposition is orthogonal, that is y 0 e = (P y)0 (M y) = y 0 P M y = 0: ^^ The projection matrix P is also known as the hat matrix due to the equation y = P y. The ^ 1 i’ diagonal element of P = X (X 0 X) X 0 is th hii = x0 X 0 X i
1 1

X 0y = P y

(3.18)

xi

(3.19)

which is called the leverage of the i’ observation. The hii take values in [0; 1] and sum to k th
n X i=1

hii = k

(3.20)

(See Exercise 3.6).

3.7

Residual Regression
X = [X 1 X 2]
1 2

Partition and = Then the regression model can be rewritten as y = X1 Observe that the OLS estimator of = ( X 2 ]: OLS estimation can be written as
0 1; 1

:

+ X2

2

+ e:

(3.21)

0 0 2)

can be obtained by regression of y on X = [X 1 (3.22)

y = X 1 ^1 + X 2 ^2 + e ^

Suppose that we are primarily interested in 2 ; not in 1 ; and we want to obtain the OLS subcomponent ^ 2 : In this section we derive an alternative expression for ^ 2 which does not involve estimation of the full model. De…ne 1 M 1 = In X1 X0 X1 X0 : 1 1 Recalling the de…nition M = I n X (X 0 X)
1

X 0 ; observe that X 0 M 1 = 0 and thus 1
1

M 1M = M

X1 X0 X1 1 35

X0 M = M : 1

It follows that M 1 e = M 1 M y = M y = e: ^ ^ Using this result, if we premultiply (3.22) by M 1 we obtain ^ M 1y = M 1X 1 ^ 1 + M 1X 2 ^ 2 + M 1e ^2 + e ^ = M 1X 2

(3.23)

the second equality since M 1 X 1 = 0. Premultiplying by X 0 and recalling that X 0 e = 0; we 2 2^ obtain 0 X 0 M 1 y = X2 M 1 X 2 ^ 2 + X 0 e = X 0 M 1 X 2 ^ 2 : 2^ 2 2 Solving, ^2 = X 0 M 1X 2 2 an alternative expression for ^ 2 : Now, de…ne ~ X 2 = M 1X 2 y = M 1 y; ~ (3.24) (3.25)
1

X 0 M 1y 2

the least-squares residuals from the regression of X 2 and y; respectively, on the matrix X 1 only. Since the matrix M 1 is idempotent, M 1 = M 1 M 1 and thus ^2 = = = X 0 M 1X 2 2 ~0 ~ X 2X 2
1 1

X 0 M 1y 2
1

X 0 M 1M 1X 2 2

X 0 M 1M 1y 2

~0 ~ X 2y :

~ This shows that ^ 2 can be calculated by the OLS regression of y on X 2 : This technique is called ~ residual regression. Furthermore, using the de…nitions (3.24) and (3.25), expression (3.23) can be equivalently written as ~ ^ y = X 2 ^ 2 + e: ~ ~ Since ^ 2 is precisely the OLS coe¢ cient from a regression of y on X 2 ; this shows that the residual ~ vector from this regression is e, numerically the same residual vector as from the joint regression ^ (3.22). We have proven the following theorem.

Theorem 3.7.1 Frisch-Waugh-Lovell In the model (3.21), the OLS estimator of 2 and the OLS residuals e ^ may be equivalently computed by either the OLS regression (3.22) or via the following algorithm: 1. Regress y on X 1 ; obtain residuals y ; ~ ~ 2. Regress X 2 on X 1 ; obtain residuals X 2 ; ~ 3. Regress y on X 2 ; obtain OLS estimates ^ 2 and residuals e: ~ ^

In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm. Rather, the primary use is theoretical. 36

as they are constructed based on ^ the full sample including yi . production theory. yi y on x2i x2 : ! 1 n ! n X X ^2 = (x2i x2 ) (x2i x2 )0 (x2i x2 ) (yi y) : i=1 i=1 Thus the OLS estimator for the slope coe¢ cients is a regression with demeaned data.26) where X ( i) and y ( value for yi is i) are the data matrices omitting the i’ row. is the demeaning formula for regression. and business cycle theory. 0 1 0 0 1 0 0 1 0 : X2 X2 y which are “demeaned” The FWL theorem says that ^ 2 is the OLS estimate from a regression of . Frisch made a number of foundational contributions to modern economics beyond the Frisch-Waugh-Lovell Theorem. . M1 = I Observe that ~ X 2 = M 1X 2 = X2 = X2 and y = M 1y ~ = y = y y. which you may have seen in an introductory econometrics course. including formalizing consumer theory.8 Prediction Errors The least-squares residual ei are not true prediction errors. In this case. We can do this by de…ning the leave-one-out OLS estimator of as that obtained from the sample excluding the i’ observation: th 1 10 1 0 X X 1 ^ ( i) = @ 1 xj x0 A @ xj yj A j n 1 n 1 j6=i j6=i = X0 ( 1 i) X ( i) X( i) y ( i) (3. Ragnar Frisch Ragnar Frisch (1895-1973) was co-winner with Jan Tinbergen of the …rst Nobel Memorial Prize in Economic Sciences in 1969 for their work in developing and applying dynamic models for the analysis of economic problems. Partition X = [X 1 X 2 ] where X 1 = is a vector of ones. The leave-one-out predicted th yi = x0 ^ ( ~ i 37 i) . 3.A common application of the FWL theorem. and X 2 is the vector of observed regressors. A proper prediction for yi should be based on estimates constructed only using the other observations.

19).27).5 states that for nonsingular A and vector b A This implies X 0X and thus ^( i) bb0 1 =A 1 + 1 b0 A 1 b 1 A 1 bb0 A 1 : xi x0 i 1 = X 0X 1 + (1 hi ) 1 X 0X 1 xi x0 X 0 X i 1 = = X 0X X 0X + (1 xi x0 i 1 1 X 0y X 0X 1 xi yi 1 X 0y 1 1 1 1 xi yi 1 hi ) X 0X (1 (1 X 0X xi x0 X 0 X i hi ) 1 1 1 X 0y 1 xi yi hi yi = ^ = ^ = ^ xi yi + (1 X 0X X 0X X 0X hi ) yi xi x0 ^ i x0 ^ + hi yi i hi ) hi ) xi (1 xi ei ^ 1 the third equality making the substitutions ^ = (X 0 X) remainder collecting terms. X 0 y and hi = x0 (X 0 X) i 1 xi .28) = ei + (1 ^ hii ) ei : ^ A convenient feature of this expression is that it shows that computation of ei is based on a simple ~ linear operation. Its square root ~ = prediction standard error. and the 38 . p ~ 2 is the Proof of Equation (3.27) where hii are the leverage values as de…ned in (3.and the leave-one-out residual or prediction error is ei = yi ~ A convenient alternative expression for ^ ( ^( i) i) yi : ~ (derived below) is 1 =^ (1 hii ) X 0X 1 xi ei ^ (3. One use of the prediction errors is to estimate the out-of-sample mean squared error ~2 = = 1X 2 ei ~ n i=1 i=1 n n 1X (1 n hii ) 2 2 ei : ^ This is also known as the mean squared prediction error.2) from Appendix A. The Sherman– Morrison formula (A.27) we can simplify the expression for the prediction error: e i = yi ~ = yi = (1 x0 ^ ( i i) x0 ^ + (1 i hii ) 1 1 hii ) hii ei ^ 1 x0 X 0 X i 1 xi ei ^ (3. and does not really require n separate estimations. Using (3.

A large hii means that observation i is unusual in the sense that the regressor xi is far from its sample mean. sometimes called outliers. 39 .27)-(3. Other researchers will delete the observation from the sample. it may be useful to examine the corresponding observation or observations. If there is an error. ~ e One way to think about this is that a large leverage value hii gives the potential for observation i to be in‡ uential. we can focus on the predicted values.10 Measures of Fit When a least-squares regression is reported in applied economics. x0 ^ ( i 1 i) = x0 X 0 X i xi ei ~ 3. who believe it reduces the integrity of reported empirical results. measuring how well the regressors explain the observed variation in the dependent variable. as it would seem unlikely that data error would be con…ned to a single observation. a useful summary statistic is hii j~i j e j^i yi j y ~ = max In‡uence = max 1 i n 1 i n ~ ~ which scales the maximum change in predicted values by the prediction standard error. which requires that both hii and j~i j are large. We say that observation i is in‡ uential if its omission from the sample induces a substantial change in a parameter of interest. so judgement must be employed. we can directly discover if a speci…c observation i is in‡ uential for a coe¢ cient estimate of interest. but unusual and in‡ uential.) If an observation is determined to be in‡ uential. you should scrutinize all observations more carefully. and this is a common cause of in‡ uential observations.) It is also possible that an observation is correctly measured. In this case it is unclear how to proceed. If In‡uence is large. This is especially useful when revising empirical work at a later date. and a record describing the revision process. The motivation for this choice is to prevent the results from being skewed or determined by individual observations.9 In‡ uential Observations Another use of the leave-one-out estimator is to investigate the impact of in‡ uential observations. (As this is an informal comparison there is no magic threshold. For a more general assessment. (It is useful to keep the source data in its original form. We call this observation with large hii a leverage point. the recorded values for the observations should be examined. The di¤erence between the full-sample and leave-one-out predicted values is yi ^ yi = x0 ^ ~ i = hii ei ~ which is a simple function of the leverage values hii and prediction errors ei : Observation i is ~ in‡ uential for the predicted value if jhii ei j is large. Some researchers will try to alter the speci…cation to properly model the in‡ uential observation. When this is done it is proper empirical practice to document such choices.28) we know that ^ ^( i) = (1 = hii ) 1 1 X 0X 1 xi ei ^ X 0X xi ei : ~ By direct calculation of this quantity for each observation i. It is quite possible that there is a data error. ~ To determine if any individual observations are in‡ uential in this sense. From (3. but this practice is viewed skeptically by many researchers.3. what should be done? Certainly. A leverage point is not necessarily in‡ uential as this also requires that the prediction error ei is large. a revised data …le after cleaning. then the observation is typically deleted from the sample. it is common to see a reported summary measure of …t. If it is determined that an observation is incorrectly recorded.
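The leverage values, leave-one-out prediction errors, and the Influence measure described above can be computed in a few lines. The sketch below uses simulated data with one deliberately extreme design point; it is an illustration, not a prescription for how such observations should be handled.

import numpy as np

rng = np.random.default_rng(9)
n = 100

x = rng.normal(0, 1, n)
x[0] = 8.0                                    # one deliberately extreme regressor value
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.5]) + rng.normal(0, 1, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat
h = np.sum(X * (X @ np.linalg.inv(X.T @ X)), axis=1)   # leverage h_ii = x_i'(X'X)^(-1)x_i

e_tilde = e_hat / (1 - h)                     # leave-one-out prediction errors, as in (3.28)
sigma_tilde = np.sqrt(np.mean(e_tilde**2))    # prediction standard error

influence = np.max(np.abs(h * e_tilde)) / sigma_tilde
print("max leverage:", h.max(), " sum of leverages:", h.sum())  # sum equals k = 2
print("Influence   :", influence)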

2 . He also wrote an early and in‡ uential advanced textbook on econometrics (Theil. This example shows that a regression with a high R2 can actually have poor …t. In this sense R2 can be a useful summary measure for an out-of-sample forecast or policy experiment. In contrast.Some common summary measures are based on scaled or transformed estimates of the meanP squared error 2 : These include the sum of squared errors n e2 . the accuracy of the coe¢ cient estimates. and the mean prediction error ~ 2 = n n e2 . their usefulness should not be exaggerated. and the root mean squared error n 1 n e2 (sometimes ^ i=1 ^i 1 P called the standard error of the regression). or the validity of statistical inferences based on the estimated regression. 1971). Another mistaken belief is that a high R2 is important in order to justify interpretation of the regression coe¢ cients. even if the R2 is quite small. both of which are routinely seen in applied econometrics. For example. where 2 = var(yi ): A high 2 or R2 means that forecasts of y using x0 or x0 b will be quite y accurate relative to the unconditional mean. 1) and yi = xi + x2 : If we regress yi on xi (incorrectly omitting i x2 ). For example. the best linear predictor is yi = 1 + xi + ei where ei = x2 1: This is a misspeci…ed regression. then R2 ' 2 = :8. accurate estimates of regression coe¢ cients is quite possible when sample sizes are large. i i as the true relationship is deterministic! You can also calculate that the population 2 = =(2 + ) which can be arbitrarily close to 1 if is large. 2 The bottom line is that while R2 and R have appropriate uses. 2 Unfortunately. or if = 18 then R2 ' 2 = :9. the frequent reporting of R2 and R seems to have led to exaggerated beliefs regarding their usefulness. 2 40 . This is mistaken as there is no known association between the level of R2 and the “correctness” of a regression. incorrect. as an incorrectly speci…ed model can still have a reasonably high R suppose the truth is that xi N (0. One mistaken belief is that R2 is a measure of “…t” This belief is . the mean squared error i=1 ^iq P Pn 2 of sample variance n 1 i=1 ei = ^ 2 . An alternative estimator of 2 proposed by Theil called R-bar-squared or adjusted R2 is P (n 1) n e2 ^ 2 R =1 Pn i=1 i 2 : (n k) i=1 (yi y) 2 Henri Theil Henri Theil (1924-2000) of Holland invented R and two-stage least squares. if = 8. i=1 ~i A related and commonly reported statistic is the coe¢ cient of determination or R-squared: Pn y)2 y ^2 2 i=1 (^i R = Pn =1 ^2 y)2 y i=1 (yi where ^2 = y is the sample variance of yi : R2 1X (yi n i=1 n y)2 can be viewed as an estimator of the population parameter 2 = var (x0 ) i =1 var(yi ) 2 2 y Theil’ estimator R is better estimator of 2 than the unadjusted estimator R2 because it can be s expressed as a ratio of bias-corrected variance estimates.

despite emerging from quite di¤erent motivations. We can also …nd the MLE for 2 : Plugging ^ into the log-likelihood we obtain log L ^ . It may seem surprising that the MLE ^ is numerically equal to the OLS estimator. Maximization with respect to 2 2 = n log 2 2 2 2 n 1 X 2 i=1 e2 : ^i yields the …rst-order condition n 1 2 + 2^ 2 ^2 n n X i=1 @ log L ^ .13). The least-squares estimator minimizes a particular sample loss function – the sum of squared error criterion – and most loss functions are equivalent to the likelihood of a speci…c parametric distribution. ^ 2 ) maximize log L( . maximizing the likelihood is identical to minimizing SSEn ( ).11 Normal Regression Model The normal regression model is the linear regression model under the restriction that the error ei is independent of xi and has the distribution N 0. in this case the normal regression model. i 2 N 0. Hence ^ mle = ^ ols . It is not completely accidental. ) = log 2 2 (2 2 )1=2 i=1 = n log 2 2 2 1 2 2 SSEn ( ): The maximum likelihood estimator (MLE) ( ^ . 2 ): Since the latter is a function of only through the sum of squared errors SSEn ( ). the least squares estimator ^ is also known as the MLE. 41 . the MLE for equals the OLS estimator: Due to this equivalence. testing. 2 : : Normal regression is a parametric model. In this sense it is not surprising that the least-squares estimator can be motivated as either the minimizer of a sample loss function or as the maximizer of a likelihood function. and distribution theory.3. ^ 2 = @ 2 Solving for ^ 2 yields the MLE for 2 2 e2 = 0: ^i 1X 2 ei ^ ^ = n 2 i=1 which is the same as the moment estimator (3. The log-likelihood function for the normal regression model is ! n X 1 1 2 0 2 exp yi xi log L( . 2 : We can write this as ei j xi This assumption implies yi j xi N x0 . where likelihood methods can be used for estimation.

which provided a justi…cation for viewing random disturbances as approximately normal.Carl Friedrich Gauss The mathematician Carl Friedrich Gauss (1777-1855) proposed the normal regression model. and derived the least squares estimator as the maximum likelihood estimator for this model. He claimed to have discovered the method in 1795 at the age of eighteen. but did not publish the result until 1809. 42 . Interest in Gauss’ approach was reinforced by Laplace’ simultaneous s s discovery of the central limit theorem.

3 Let e be the OLS residual from a regression of y on X = [X 1 X 2 ]. (a) Compare regressions (3. ^ show that ^ 1 is sample mean of the dependent variable among the men of the sample (y 1 ). Suppose that there are n1 men and n2 women in the sample. ^ 2 ) be the values such that g n (^ . 2 = Ey and y (y )2 2 = var(y): De…ne : 1 = 2 Let (^ .31) y = d1 + d2 +e + d1 + e Can all three regressions (3.6 Show ()3. ^ 2 ) = 0 where g n (m.4 Let e be the OLS residual from a regression of y on X: Find the OLS coe¢ cient ^ from a regression of e on X: ^ Exercise 3. Consider an alternative set of regressors Z = XC. Let d1 and d2 be vectors of 1’ and 0’ with the i0 th element of d1 s s s. each column of Z is a mixture of some of the columns of X: Compare the OLS estimates and residuals from the regression of y on X to the OLS estimates from the regression of y on Z: Exercise 3.30) and (3. that hii in (3.8 Let d1 and d2 be de…ned as in the previous exercise.30) (3.30) and (3.15). (c) Letting = ( 1 2 )0 .19) sum to k: (Hint: Use (3. and (3.29) (3. (a) In the OLS regression y = d1 ^ 1 + d2 ^ 2 + u. .2 Consider the OLS regression of the n 1 vector y on the n k matrix X.) Exercise 3. Is there any content to this assumption in this setting? Exercise 3.Exercises Exercise 3. Find X 0 e: ^ 2^ Exercise 3. s) : Show that Exercise 3.7 A dummy variable takes on only the values 0 and 1.1 Let y be a random variable with g y.5 Let y = X(X 0 X) ^ 1 X 0 y: Find the OLS coe¢ cient from a regression of y on X: ^ Exercise 3.31). equaling 1 and that of d2 equaling 0 if the person is a man. 2 where is an n 1 is a vector of ones. (b) Describe in words the transformations y X = y = X 43 d1 y 1 d1 X 1 d2 y 2 d2 X 2 : .31). (3. Thus. It is used for categorical data. Pn i=1 g (yi . and that ^ 2 is the sample mean among the women (y 2 ). write equation (3. m.30).30) as y = X + e: Consider the assumption E(xi ei ) = 0. where C is a k k non-singular matrix. Consider the three regressions y = y = + d1 1 1 + d2 2 2 +e (3. s) = n ^ and ^ 2 are the sample mean and variance.20). Is one more general than the other? Explain the relationship between the parameters in (3.29). (b) Compute 0d 1 and 0d .31) be estimated by OLS? Explain if not. and the reverse if the person is a woman. such as an individual’ gender.

S. Prove that the OLS estimate computed using this additional observation is ^ n+1 = ^ n + 1 1+ 1 x0 (X 0 X n ) xn+1 n n+1 X0 Xn n 1 xn+1 yn+1 x0 ^ n : n+1 Exercise 3. Numerically calculate ^ ^ the following: (a) (b) (c) (d) (e) (f) (g) Pn Pn ^ i=1 ei ^ i=1 x1i ei ^ i=1 x2i ei 2 ^ i=1 x1i ei 2 ^ i=1 x2i ei Pn Pn Pn Pn Pn ^^ i=1 yi ei ^2 i=1 ei (h) R2 Are these calculations consistent with the theoretical properties of OLS? Explain. 0 otherwise) (coded 1 if Hispanic.10 Prove that R2 is the square of the simple correlation between y and y : ^ Exercise 3. 0 otherwise) hourly wage (in dollars) Estimate a regression of wage yi on education x1i . 0 otherwise) (coded 1 if nonwhite and non-Hispanic. Let ei be the OLS residual and yi the predicted value from the regression. V1 V2 V3 V4 V5 V6 V7 V8 V9 = = = = = = = = = education (in years) region of residence (coded 1 if South.(c) Compare ~ from the OLS regresion y =X ~+e ~ with ^ from the OLS regression y = d1 ^ 1 + d2 ^ 2 + X ^ + e: ^ 1 Exercise 3. xn+1 ) becomes available.11 The data …le cps85. A new observation (yn+1 . 0 otherwise) gender (coded 1 if female. listed in the …le cps85. 0 otherwise) potential labor market experience (in years) union status (coded 1 if in union job. experience x2i . 44 . The …le contains observations on nine variables.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.9 Let ^ n = (X 0 X n ) X 0 y n denote the OLS estimate when y n is n 1 and X n is n n n k.pdf. 0 otherwise) marital status (coded 1 if married. Census Bureau. and experienced-squared x3i = x2 2i (and a constant). Report the OLS estimates.

calculate the equation R2 and sum of squared errors. regress x1i on (1. x2 ). Do they equal the values from the initial OLS regression? Explain. 45 .12 Using the data from the previous problem.Exercise 3. x2i . x2i . Does it equal the value from the …rst OLS regression? Explain. x2 ). In the second-stage residual regression. Report the estimate from this regression. restimate the slope on education using the residual regression approach. (the regression of the residuals on the residuals). Regress yi on (1. and regress 2i 2i the residuals on the residuals.

E e2 j xi = i is independent of xi : 2 (xi ) = 2 (4. and an invertible design matrix Q = E xi x0 > 0: i We will consider both the general case of heteroskedastic regression.2 Homoskedastic Linear Regression Model In addition to Assumption 4.1. Throughout this chapter we maintain the following.3) 46 . where the conditional variance E e2 j xi = 2 (xi ) = 2 i i is unrestricted. Assumption 4.1.2) and Ex2 < 1 ji for j = 1. xi ) come from a random sample and satisfy the linear regression equation yi = x0 + ei i E (ei j xi ) = 0: The variables have …nite second moments 2 Eyi < 1 (4.1 Introduction In this chapter we investigate some …nite-sample properties of least-squares applied to a random sample in the the linear regression model. Assumption 4.1.Chapter 4 Least Squares Regression 4. where the conditional variance is constant. In the latter case we add the following assumption. :::.1.1) (4. and the specialized case of homoskedastic regression. k.1 Linear Regression Model The observations (yi .

1)-(4. and therefore has a sampling distribution. C E (y j X) = B E (yi j X) C = B E (yi j xi ) C = B x0 C = X : (4.1 for sample sizes of n = 25. . . From the …gure we can see that the density functions are dispersed and highly non-normal. we need to have a way to characterize the sampling distribution of ^ .4) @ A @ A @ i A . . the density function of ^ was computed and plotted in Figure 4.3 Mean of Least-Squares Estimator In this section we show that the OLS estimator is unbiased in the linear regression model. . Under (4.2) note that 0 1 0 1 0 1 . . We start in the next sections by deriving the mean and variance of ^: 4. To learn about the true value of from the sample estimate ^ . . In general. y) = 1 exp 2 xy 1 (log y 2 log x)2 exp 1 (log x)2 2 and let ^ be the slope coe¢ cient estimate from a bivariate regression on observations from this joint density. since it is a function of random data. Using simulation methods. .2 Sampling Distribution The least-squares estimator is random. . . As the sample size increases the density becomes more concentrated about the population coe¢ cient. its distribution is a complicated function of the joint distribution of (yi . B C B C B . .Figure 4. 47 . .1: Sampling Density of ^ 4. let yi and xi be drawn from the joint density f (x. . . . . xi ) and the sample size n: To illustrate the possibilities in one example. n = 100 and n = 800: The vertical line marks the true value of the projection coe¢ cient.

1) E ^jX = and E( ^ ) = : (4. . . . It says that ^ is unbiased for any realization of the regressor matrix X. we …nd that E ^ =E E ^jX = : + e into the formula Another way to calculate the same result is as follows. the linearity of expectations. 48 . .5).5) X 0y j X X 0X X 0X : X 0 E (y j X) X 0X Applying the law of iterated expectations to E ^ j X = . conditioning on X. . 0 . Theorem 4.7) says that the estimator is conditionally unbiased.6).14) for ^ to obtain ^ = = = X 0X X 0X 1 1 X 0 (X + e) X 0X + X 0X 1 1 X 0e (4.7) Equation (4. 1 0 . . conditioning on X.6) and the + X 0X X 0 e: This is a useful linear decomponsition of the estimator ^ into the true parameter 1 stochastic component (X 0 X) X 0 e: Using (4.8) (4. Insert y = X (3. 1 (4. E ^ jX = E = = 0: X 0X 1 1 X 0e j X X 0X X 0 E (e j X) Using either derivation. which is a stronger result.1 Mean of Least-Squares Estimator In the linear regression model (Assumption 4. . and the properties of the matrix inverse. and (4.8) says that the estimator is unbiased. .1. . Equation (4.Similarly By (3.14).4). meaning that the distribution of ^ is centered at . (4. we have shown the following theorem. E ^jX = E = = = X 0X 1 1 1 C C B B E (e j X) = B E (ei j X) C = B E (ei j xi ) C = 0: A A @ @ .3.

3). . For any r 1 random vector Z de…ne the r r covariance matrix var(Z) = E (Z = EZZ 0 EZ) (Z EZ)0 (EZ) (EZ)0 and for any pair (Z. however. . D need not necessarily take this simpli…ed form. :::. .2).10) and thus X 0X n X i=1 1 X 0 DX X 0 X 1 : xi x0 i 2 i.4 Variance of Least Squares Estimator In this section we calculate the conditional variance of the OLS estimator. 2 = B . . C: 1 n . @ . 49 . var(A0 y j X) = var(A0 e j X) = A0 DA: In particular. .4. 0 0 2 n 2 i In the special case of the linear homoskedastic regression model (4. then E e2 j xi = i and we have the simpli…cation D = In 2 2 i = 2 : In general. For any matrix n r matrix A = A(X). (4.9) .1) and the second is (4..5. X) de…ne the conditional covariance matrix var(Z j X) = E (Z E (Z j X)) (Z E (Z j X))0 j X : n matrix The conditional covariance matrix of the n 1 regression error e is the n D = E ee0 j X : The i’ diagonal element of D is th E e2 j X = E e2 j xi = i i while the ij 0 th o¤-diagonal element of D is E (ei ej j X) = E (ei j xi ) E (ej j xj ) = 0: where the …rst equality uses independence of the observations (Assumption 1. A . Thus D is a diagonal matrix with i’ diagonal element 2 : th i 0 2 1 0 0 1 2 B 0 0 C 2 B C D = diag 2 . . 0 0 1 (4. we can write ^ = A0 y where A = X (X 0 X) var ^ j X = A0 DA = It is useful to note that X DX = a weighted version of X X.

the covariance matrix simpli…es to V^= 1 0 XX n 1 2 : 4. so X 0 DX = : Theorem 4. To see this.2). As we will see in the next chapter. it will be useful to work p with the conditional variance of the scaled estimator n ^ V^ = var p n ^ jX X 0 DX 1 = n var ^ j X = n X 0X = 1 0 XX n 1 X 0X 1 1 0 X DX n 1 0 XX n 1 : This rescaling might seem rather odd. then for any linear estimator ~ = A0 y we have E ~ j X = A0 E (y j X) = A0 X . since E (y j X) = X .5 Gauss-Markov Theorem which are linear functions of the vector y. var ^ j X vanishes as n tends to in…nity. D = I n 0 X X 2 . V^ = var = p n ^ 1 jX 1 0 X DX n 1 0 XX n 1 1 0 XX n where D is de…ned in (4. In the homoskedastic linear regression model (Assumption 4.4. but it will help provide continuity between the …nite-sample treatment of this chapter and the asymptotic treatment of later chapters. yet V ^ converges to a constant matrix. and the variance matrix simpli…es to V^= 1 0 XX n 1 2 2. says that the least-squares estimator is the best choice when the errors are homoskedastic. and thus ~ = A0 y Now consider the class of estimators of can be written as where A is an n k function of X. as the least-squares estimator has the smallest variance among all unbiased linear estimators. which we now present. In the special case of the linear homoskedastic regression model.1.1).1.Rather than working with the variance of the unscaled estimator ^ . The least-squares estimator is the special case obtained by setting A = X(X 0 X) 1 : What is the best choice of A? The Gauss-Markov theorem.9). 50 .1 Variance of Least-Squares Estimator In the linear regression model (Assumption 4.

Proof of Theorem 4. We return to the issue of feasible implementation of GLS in Section 7. The justi…cation is limited because the class of models is restricted to homoskedastic linear regression and the class of potential estimators is restricted to linear unbiased estimators. This estimator is infeasible as the matrix D is unknown. Let A be any n k function of X such that A0 X = I k : The variance 1 2 of the least-squares estimator is (X 0 X) and that of A0 y is A0 A 2 : It is su¢ cient to show 1 1 0 0 that the di¤erence A A (X X) is positive semi-de…nite. 51 .3.7) as required.2).1. Within the class of linear unbiased estimators the best estimator is (4.10) that var ~ j X = var A0 y j X = A0 DA = A0 A 2 : the last equality using the homoskedasticity assumption D = I n 2 . the best unbiased linear estimator is ~ = X 0D 1 X 1 X 0D 1 y (4. In the linear regression model (Assumption 4.so ~ is unbiased if (and only if) A0 X = I k : Furthermore.11) The …rst part of the Gauss-Markov theorem is a limited e¢ ciency justi…cation for the leastsquares estimator.1).11) and is called the Generalized Least Squares (GLS) estimator. The proof of Theorem 4. the best (minimum-variance) unbiased linear estimator is the leastsquares estimator ^ = X 0X 1 X 0y 2.1.1.5.2 is left for Exercise 4.1. the least-squares estimator is ine¢ cient.1. The second part of the theorem shows that in the (heteroskedastic) linear regression model.5.5. This latter restriction is particularly unsatisfactory as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator. Set C = A X (X 0 X) : Note that X 0 C = 0: Then we calculate that A0 A X 0X 1 = C + X X 0X 1 1 0 C + X X 0X 1 1 X 0X 1 = C 0C + C 0X X 0X + X 0X = C 0C + X 0X 1 1 X 0C 1 X 0X X 0X X 0X The matrix C 0 C is positive semi-de…nite (see Appendix A. Theorem 4. we saw in (4. This result does not suggest a practical alternative to least-squares. In the homoskedastic linear regression model (Assumption 4. The “best” unbiased linear estimator is obtained by …nding the matrix A such that A0 A is minimized in the positive de…nite sense.1.1 Gauss-Markov 1.

13) since the diagonal elements of M are 1 hii as de…ned in (3.18) that we can write the residuals in vector notation as e=y ^ 1 X^ = My = Me where M = I n X (X 0 X) X 0 is the matrix which projects on the the space orthogonal to the columns of X: Using the properties of conditional expectation E (^ j X) = E (M e j X) = M E (e j X) = 0 e and var (^ j X) = var (M e j X) = M var (e j X) M = M DM e where D is de…ned in (4.6 Residuals x0 ^ ( i i) .12) : : (4. We can simplify this expression under the assumption of conditional homoskedasticity E e2 j xi = i In this case (4. Thus the residuals are heteroskedastic even if the errors are homoskedastic. ::. Similarly. we can write the prediction errors ei = (1 hii ) 1 ei in vector notation. What are some properties of the residuals ei = yi x0 ^ and prediction errors ei = yi ^ ~ i at least in the context of the linear regression model? Recall from (3.17) and (3.12) simplies to var (^ j X) = M e In particular. (1 hnn ) 1 g 2 : 1 1 (1 2 hii ) (1 : hii ) 1 2 52 . for a single observation i. Set ~ ^ M = diagf(1 Then we can write the prediction errors as e = M My ~ = M M e: We can calculate that E (~ j X) = M M E (e j X) = 0 e and var (~ j X) = M M var (e j X) M M = M M DM M e which simpli…es under homoskedasticity to var (~ j X) = M M M M e = M MM The variance of the i’ prediction error is then th var (~i j X) = E e2 j X e ~i = (1 hii ) = (1 hii ) 2 h11 ) 1 .9).4.19). we obtain var (^i j X) = E e2 j X = (1 e ^i hii ) 2 2 2 (4.

the properties of projection matrices and the trace operator.16) so that D = I n 2. observe that ^2 = Then E ^2 j X = = = 1 tr E M ee0 j X n 1 tr M E ee0 j X n 1 tr (M D) : n 2. This calculation shows that ^ 2 is biased towards zero.13). n then E ^2 j X = = the …nal equality by (3. :::. Adding the assumption of conditional homoskedasticity E e2 j xi = i (4.A residual with proper variance can be obtained by rescaling. 1 1 1 1 1 0 e e = e0 M M e = e0 M e = tr e0 M e = tr M ee0 : ^^ n n n n n (4. ^ M e: (4. 53 . The studentized residuals are ei = (1 and in vector notation e = (e1 .15) and thus these rescaled residuals have the same bias and variance as the original errors when the latter are homoskedastic. The order of the bias depends on k=n.16). var (e j X) = M and var (ei j X) = E e2 j X = i 2 1=2 1=2 hi ) 1=2 ei .7 Estimation of Error Variance The error variance 2 = Ee2 can be a parameter of interest.18). en )0 = M From our above calculations. 2 measures the variation in the “unexplained” part of the regression. even in a heteroskedastic regression i or a projection model. In the linear regression model we can calculate the mean of ^ 2 : From (3. under homoskedasticity.14) MM 1=2 2 (4.16) simpli…es to 1 tr M 2 n k 2 n . 4. the ratio of the number of estimated coe¢ cients to the sample size. Its method of moments estimator (MME) is the sample average of the squared residuals: ^2 = 1X 2 ei ^ n i=1 n and equals the MLE in the normal regression model (3.

For example.14). and all equal 0. Under homoskedasticity. this is not the only method to construct an unbiased estimator for 2 . Note that E ^2 j X = = = 1X E e2 j X ^i n 1X (1 n i=1 i=1 n n hii ) 2 2 n k n By the above calculation. s2 and 2 are likely to be close. a classic method to obtain an unbiased estimator is by rescaling the estimator. Since the bias takes a scale form. s2 is known as the “bias-corrected estimator” for 2 and in empirical practice s2 is the most widely used estimator for 2 : Interestingly.1)-(4.20). The estimators are more likely to di¤er when n is small and k is large.18) and the estimator s2 is unbiased for 2 : Consequently. in the regression (3.10). yielding the estimator n n 1X 1X 2 2 ei = (1 hii ) 1 e2 : ^i = n n i=1 i=1 You can show (see Exercise 4. s. leading to the classic covariance matrix estimator b0 V^= 1 0 XX n 1 s2 : (4. the covariance matrix takes the relatively simple form V^= 1 0 XX n 1 2 : which is known up to the unknown scale 2 . An alternative unbiased estimator can be using the studentized residuals ei from (4.3).Another way to see this is to use (4. 4.2)-(4.6) that E 2 jX = 2 (4.19) and thus 2 is unbiased for 2 (in the homoskedastic linear regression model). In this section we consider estimation of V ^ in the homoskedastic regression model (4. In the previous section we discussed three estimators of 2 : The most commonly used choice is s2 .20) 54 . we need an estimate of the covariance matrix V ^ of the least-squares estimator. De…ne n 1 X 2 s2 = ei : ^ (4.8 Covariance Matrix Estimation Under Homoskedasticity For inference.13).490. When the sample sizes are large and the number of regressors small. the estimators ^ 2 . so using (3. ^ .17) n k i=1 E s2 j X = E s2 = 2 2 (4.

e2 . :::. b0 If the estimator (4. Recall that the general form for the covariance matrix is V^= 1 0 XX n 1 1 0 X DX n 1 0 XX n 1 : This depends on the unknown matrix D which we can write as D = diag 2 1 . the point is that the classic covariance matrix estimator (4. so the latter is an unbiased estimator for D: n 1 Therefore.b0 Since s2 is conditionally unbiased for 2 . e2 X 1 n n ! n 1X 1 0 xi x0 e2 XX i i n n i=1 1 0 XX n 1 1 : 55 . if the squared errors e2 were observable.9 Covariance Matrix Estimation Under Heteroskedasticity In the previous section we showed that that the classic covariance matrix estimator can be highly biased if homoskedasticity fails. it is simple to calculate that V ^ is conditionally unbiased for V ^ under the assumption of homoskedasticity: b0 E V^ jX = 1 0 XX n 1 E s2 j X 1 2 1 0 XX = n = V ^: This estimator was the dominant covariance matrix estimator in applied econometrics in previous generations. we could construct the unbiased estimator i V ideal = ^ = 1 0 XX n 1 0 XX n 1 1 1 0 X diag e2 . 2 n = E ee0 j X = E diag e2 . 4. :::. e2 j X : 1 n Thus D is the conditional mean of diag e2 . suppose k = 1 and 2 = x2 (extreme heteroskedasticity). but the regression error is heteroskedastic. The ratio of the true i i variance of the least-squares estimator to the expectation of the variance estimator is V^ b0 E V^ jX 1 Pn x4 4 4 n i=1 i ' Exi = Exi : = 2 2 Ex2 1 Pn 2 Ex2 2 i i i=1 xi n (Notice that we use the fact that 2 = x2 implies 2 = E 2 = Ex2 :) This is the kurtosis of i i i i the regressor xi : As the kurtosis can be any number greater than one. :::. In this section we show how to contruct covariance matrix estimators which do not require homoskedasticity.20) may be quite biased when the homoskedasticity assumption fails. and is still the default in most regression packages.20) is used. :::. it is possible for V ^ 1 1 1 0 1 0 1 0 XX X DX XX : to be quite biased for the correct covariance matrix V ^ = n n n For example. we conclude that the bias b0 of V ^ can be arbitrarily large. While this is an extreme and constructed example.

and V^ = = = 1 0 XX n 1 0 XX n 1 0 XX n 1 1 1 0 1 0 X DX XX n n ! n 1X 1 0 xi x0 e2 XX i i n n i=1 ! n 1X 1 0 2 (1 hii ) xi xi ei ^ n i=1 1 1 1 1 0 XX n 1 : b e The estimators V ^ . E V ideal ^ jX = = = 1 0 XX n 1 0 XX n 1 0 XX n 1 1 1 1 0 X DX n ! n 1 0 1X 0 2 xi xi E ei j X XX n n i=1 ! n 1 X 1 1 0 0 2 XX xi xi i n n i=1 1 1 0 XX n 1 = V^ verifying that V ideal is unbiased for V ^ ^ Since the errors e2 are unobserved. e. The estimator V ^ was …rst developed by Eicker (1963). b D = diag e2 . the prediction errors ei or ^ ~ the unbiased residuals ei . :::. :::. V ^ . or heteroskedasticityb robust covariance matrix estimators. V ideal is not a feasible estimator. e2 .Indeed. e2 : 1 n Substituting these matrices into the formula for V ^ we obtain the estimators b V^ = = 1 0 XX n 1 0 XX n 1 1 1 1 0b 1 0 X DX XX n n ! n 1X 1 0 xi x0 e2 XX i ^i n n i=1 1 1 . and introduced to econometrics by White (1980). and V ^ are often called robust.g. e V^ = = = 1 0 XX n 1 0 XX n 1 0 XX n 1 1 1 1 0e 1 0 X DX XX n n ! n 1X 1 0 XX xi x0 e2 i ~i n n i=1 ! n 1X (1 hii ) 2 xi x0 e2 i ^i n i=1 1 1 0 XX n 1 . and is sometimes called the Eicker-White or White 56 . :::. To construct a feasible ^ i estimator we can replace the errors with the least-squares residuals ei . heteroskedasticity-consistent. e2 . ^1 ^n e D = diag e2 . ~1 ~n D = diag e2 .

is quite complicated. as it is only valid under the unlikely homoskedasticity restriction. (See Exercise 4. speci…cally 1 1 0 2 e XX (4. For example.13). For example. and the estimator V ^ was introduced by Horn.7 It might seem rather odd to compare the bias of heteroskedasticity-robust estimators under the assumption of homoskedasticity. V ^ . This calculation shows that V ^ is biased i i=1 downwards. the bias of the estimators V ^ . as it is the most e straightforward and familiar. and in particular V ^ . A more easily interpretable measure of spread is its square root – the standard deviation.21) E V^ jX > n while the estimator V ^ is unbiased E V^ jX = 1 0 XX n 1 2 : (4. standard regression packages set the classic estimator V ^ as the e default. . V ^ .e covariance matrix estimator1 . e Similarly. V ^ and V ^ . V ^ is implemented by selecting “Robust”standard errors and selecting the bias correction option “1=(1 h)” or using the vce(hc2) option. but they greatly simplify under the assumption of homoskedasticity (4. The estimator V ^ was introduced by Andrews (1991) based on the principle of leave-one-out cross-validation. ! n 1 1 1 0 1 0 1X 0 2 b^ jX E V = XX xi xi E ei j X ^ XX n n n i=1 ! n 1 1 1 0 1 0 1X XX xi x0 (1 hii ) 2 XX = i n n n i=1 ! n 1 1 1 1 0 1 0 1X 1 0 2 2 XX XX xi x0 hii XX = i n n n n i=1 1 0 XX n = V ^: < 1 2 The inequality A < B when applied to matrices means that the matrix B A is positive de…nite. this should not be a barrier. and V ^ : Which should b0 you use? The classic estimator V ^ is typically a poor choice. P b which holds here since n xi x0 hii is positive de…nite. n k in analogy to the bias- 57 . As V ^ and V ^ are simple to implement. Unfortunately.3). V ^ is the most commonly used. (again under homoskedasticity) we can calculate that V ^ is biased upwards. but it does give us a baseline for comparison. in STATA. V ^ . Of the three robust estimators. For this reason it is not typically used in contemporary econometb ric research. using (4.22) b0 improved bias. b0 b e We have introduced four covariance matrix estimators.10 Standard Errors b A variance estimator such as V ^ is an estimate of the variance of the distribution of ^ . are perferred based on their 4. this estimator is rescaled by multiplying by the ad hoc bias adjustment n corrected error variance estimator. Horn and Duncan (1975) as a reduced-bias covariance matrix estimator. V ^ . This is 1 Often. However. b e In general.

b V^ i jj : 1V b 1V b ^.5 for the de…ntion of the rank of a matrix. This is called strict multicollinearity. this is rarely a problem for applied econometric practice. standard errors ^: That is. when the columns of X are close to linearly dependent. because we have not said what it means for a matrix to be “near singular” This is one di¢ culty with the de…nition and interpretation of . We can see this most simply in a homoskedastic linear regression model with two regressors yi = x1i 1 + x2i 2 + ei . log(p2 ) and log(p1 =p2 ): When this happens. this arises when sets of regressors are included which are identically related. One implication of near singularity of matrices is that the numerical reliability of the calculations is reduced. It is also important to understand that a particular standard error may be relevant under one set of model assumptions. For example. so standard errors are not unique. This de…nition is not precise. but not under another set of assumptions. if X includes both the logs of two prices and the log of the relative prices. the applied researcher quickly discovers the error as the statistical software will be unable to construct (X 0 X) 1 : Since the error is discovered quickly. i. 58 . and 1 0 XX= n var ^ j X = 2 1 1 1 2 In this case 2 1 1 n = 1 2) n (1 1 : See Appendix A. The more relevant situation is near multicollinearity. It is therefore important to understand what formula and method is used by an author when studying their work.e. De…nition 4. then ^ is not de…ned2 . This happens when the columns of X are linearly dependent. we have a special name for estimates of their standard deviation. In extreme cases it is possible that the reported calculations will be in error. 4.1 A standard error s( ^ ) for an realvalued estimator ^ is an estimate of the standard deviation of the distribution of ^ : When is a vector with estimate ^ and covariance matrix estimate n r rh for individual elements are the square roots of the diagonal elements of n s( ^ j ) = n 1V b ^ j =n 1=2 As we discussed in the previous section. multicollinearity.so important when discussing the distribution of parameter estimates. This is the situation when the X 0 X matrix is near singular. log(p1 ). there is some 6= 0 such that X = 0: Most commonly. A more relevant implication of near multicollinearity is that individual coe¢ cient estimates will be imprecise.11 Multicollinearity If rank(X 0 X) < k. which is often called “multicollinearity” for brevity. there are multiple possible covariance matrix estimators.10..

3 of Goldberger’ A Course in Econometrics (1991). Goldberger Art Goldberger (1930-2009) was one of the most distinguished members of the Department of Economics at the University of Wisconsin. Goldberger wrote a series of highly regarded and in‡ uential graduate econometric textbooks. His PhD thesis developed an early macroeconometric forecasting model (known as the Klein-Goldberger model) but most of his career focused on microeconometric issues. Topics in Regression Analysis (1968). What is happening is that when the regressors are highly dependent. so there is no distortion in inference. The imprecision. it is statistically di¢ cult to disentangle the impact of 1 from that of 2 : As a consequence. 59 . the precision of individual estimates are reduced.The correlation indexes collinearity. He was the leading pioneer of what has been called the Wisconsin Tradition of empirical work – a combination of formal econometric theory with a careful critical analysis of empirical work. Thus the more “collinear” are the regressors. We can see the e¤ect of collinearity on precision by observing that the variance of a coe¢ cient esti1 2 mate 2 n 1 approaches in…nity as approaches 1. will be re‡ ected by large standard errors. Arthur S. s which is reprinted below. and A Course in Econometrics (1991). Some earlier textbooks overemphasized a concern about multicollinearity. the worse the precision of the individual coe¢ cient estimates. since as approaches 1 the matrix becomes singular. including including Econometric Theory (1964). A very amusing parody of these texts appeared in Chapter 23. you should notice how the estimation 1 2 variance 2 n 1 depends equally and symmetrically on the the correlation and the sample size n. To understand his basic point. however.

there is a violation of the rank condition n > 0 : the matrix 0 is singular. Suppose an econometrician set out to write a chapter about small sample size in sampling from a univariate population. 60 . Some researchers prefer a single …nger. A generally reliable guide may be obtained by counting the number of observations. in which case the sample estimate of is not unique. still others let their thumbs rule. and not only that.” If so. Remedies for micronumerosity If micronumerosity proves serious in the sense that the estimate of has an unsatisfactorily low degree of precision. It arises when the rank condition n > 0 is barely satis…ed. even though Vy = 2 =n is large because of micronumerosity. Most of the time in econometric analysis. Perhaps that imbalance is attributable to the lack of an exotic polysyllabic name for “small sample size. “Near micronumerosity” is more subtle. it is also far from in…nity. when n is close to zero. The remedy lies essentially in the acquisition. Near micronumerosity is very prevalent in empirical economics. but they say little about the closely analogous problem of small sample size in estimation a univariate mean. even though the true situation may be not that = 0 but simply that the sample data have not enabled us to pick up. There are two aspects of this reduction: estimates of may have large errors. and the addition of a few more observations can sometimes produce drastic shifts in the sample mean. The true may be su¢ ciently large for the null hypothesis = 0 to be rejected. But more data are no remedy for micronumerosity if the additional data are simply “more of the same. we are in the statistical position of not being able to make bricks without straw. others use their toes. But if the true is small (although nonzero) the hypothesis = 0 may mistakenly be accepted. Micronumerosity The extreme case. 2. Judging from what is now written about multicollinearity. Chapter 23. if possible.3 Econometrics texts devote many pages to the problem of multicollinearity in multiple regression. The estimate of will be very sensitive to sample data.) The extreme case is easy enough to recognize. Investigators will sometimes be led to accept the hypothesis = 0 because y=^ y is small. Consequences of micronumerosity The consequences of micronumerosity are serious. we can remove that impediment by introducing the term micronumerosity.” arises when n = 0. Testing for micronumerosity Tests for the presence of micronumerosity require the judicious use of various …ngers. of larger samples from the same population.Micronumerosity Arthur S. Goldberger A Course in Econometrics (1991). 3. “exact micronumerosity.” So obtaining lots of small samples from the same population will not help. the chapter might look like this: 1. but Vy will be large. (Technically. Several test procedures develop critical values n . and yet very serious. such that micronumerosity is a problem only if n is smaller than n : But those procedures are questionable. 4. Precision of estimation is reduced.

we calculate 1 1 1 x1i x2i : + x0 2i 2 + ei (4. I n 2 : 61 . In particular. the regression of x2i on x1i yields a set of zero coe¢ cients (they are uncorrelated). we can derive exact sampling distributions for the least-squares estimator. It is the consequence of omission of a relevant 2 correlated variable. under the normality assumption ei j xi N 0.23) is zero.24) = = = = E x1i x0 1i E 1 1 1 E (x1i yi ) E x1i x0 1i 1 x1i x0 1i + E x1i x0 1i 2 E 1+ x1i x0 2i x0 2i 2 2 + ei 1+ where = E x1i x0 1i 1 E x1i x0 2i is the coe¢ cient from a regression of x2i on x1i : Observe that 1 6= 1 unless = 0 or 2 = 0: Thus the short and long regressions have the same coe¢ cient on x1i only under one of two conditions. in order to reduce the number of estimated parameters. 1 6= 1 . 4. In this case. In general. 2 then we have the multivariate implication e j X N 0. Goldberger (1991) introduced the labels (4. and variance estimator. By construction.13 Normal Regression Model In the special case of the normal linear regression model introduced in Section 3.24) is an estimate of 1 = 1 + 2 rather than 1 : The di¤erence is known as omitted variable bias. To avoid omitted variables bias the standard advice is to include potentially relevant variables in the estimated model. or second.4. E¤ectively.23) by least-squares. First.23) + ui (4.23). Typically there are limits.11.24) the short regression to emphasize the distinction. except in special cases.23) the long regression and (4.12 Omitted Variable Bias Let the regressors be partitioned as xi = We can write the regression of yi on xi as yi = x0 1i E (xi ei ) = 0: Now suppose that instead of estimating equation (4. residuals. we are estimating the equation yi = x0 1i E (x1i ui ) = 0 Notice that we have written the coe¢ cient on x1i as 1 rather than 1 and the error as ui rather than ei : This is because the model being estimated is di¤erent than (4. the coe¢ cient on x2i in (4. Perhaps this is done because the variables x2i are not in the data set. the general model will be free of the omitted variables problem. To see this. the possibility of omitted variables bias should be acknowledged and discussed in the course of an empirical investigation. as many desired variables are not available in a given dataset. we regress yi on x1i only. Typically. least-squares estimation of (4.

2 (X 0 X) 1 0 2M 0 where M = I n X (X 0 X) X 0 : Since uncorrelated normal variables are independent.20) ^ ^ N 0. a chi-square distribution with n k degrees of freedom. Since linear functions of normals are also normal. q 2 2 = rh s( ^ j ) s (X 0 X) a t distribution with n j j j j 1 i jj n k 2 n i 1 (X 0 X) jj rh i 1 (X 0 X) k h jj N (0. 62 . 1) = q 2 n k tn k n k k degrees of freedom. = j 2 (X 0 X) 1 2 n k (n k)s2 2 ^ j s( ^ j ) tn k These are the exact …nite-sample distributions of the least-squares estimator and variance estimators. and are the basis for traditional inference in linear regression. Furthermore. if standard errors are calculated using the homoskedastic formula (4. H 0 H) N (0.4)) where H 0 H = I n : Let u = n^ 2 2 1H 0e = = = = (n 1 2 k) s2 2 e0 e ^^ e0 M e e0 H In 0 k 1 2 1 2 In 0 0 0 k 0 0 u H 0e = u0 2 n k. it follows that ^ is independent of any function of the OLS residuals including the estimated error variance s2 or ^ 2 or prediction errors e: ~ The spectral decomposition of M yields M =H In 0 k 0 0 H0 N (0. 2 then ^ n^ 2 2 N 0. this implies that conditional on X ^ e ^ = 1 (X 0 X) X 0 M 1 e N 0.1.That is.1) if ei is independent of xi and distributed N 0. the error vector e is independent of X and is normally distributed. I n ) : Then (see equation (A.1 Normal Regression In the linear regression model (Assumption 4.13. Theorem 4.

1 has no guarantee of accuracy. and therefore inference based on Theorem 4.While elegant.13. the di¢ culty in applying Theorem 4.1 is that the normality assumption is too restrictive to be empirical plausible.13. We develop a more broadly-applicable inference theory based on large sample (asymptotic) approximations in the following chapter. 63 .

Exercise 4. E(ei j xi ) = 0. :::. with known.7 Show (4. in which situations would you expect that ~ would perform better than OLS? Exercise 4. var (e j X) = 2 X 1 X0 2 1 y : X ~ .6 Show (4. the GLS estimator is ~ = X0 the residual vector is e = y ^ 1 Pn 0 i=1 xi xi and E (xi x0 ) : i E(e j X) = 0. wn ) and wi = xji2 .5.22) in the homoskedastic regression model.1. xi ) be a random sample with E(y j X) = X : Consider the Weighted Least Squares (WLS) estimator of ~ = X 0W X 1 X 0W y where W = diag (w1 .5 Let (yi . then n x2 ei = 0: ^ i=1 i Exercise 4. where M 1 = I ^ (c) Prove that M 0 1 1 X X0 X X0 1 1 X 1 1 X0 1 1 : M1 = 1 1 X X0 : Exercise 4. If yi =Pi + ei .19) in the homoskedastic regression model.2 True or False. 64 .4 In a linear model y = X + e.21) and (4.1 Explain the di¤erence between 1 n Exercise 4. and ei is the OLS residual x ^ from the regression of yi on xi . where xji is one of the xi : (a) In which contexts would ~ be a good estimator? (b) Using your intuition.3 Prove Theorem 4. and an estimate of s2 = 1 n 2? is k e0 ^ 1 e: ^ (a) Why is this a reasonable estimator for (b) Prove that e = M 1 e. xi 2 R. Exercise 4.Exercises Exercise 4.2.

1.2. Inference (con…dence intervals and hypothesis testing) requires useful approximations to the sampling distribution.1 Linear Projection Model The observations (yi .1 the variables satisfy the linear projection equation yi = x0 + ei i E (xi ei ) = 0 = E xx0 1 E (xy) : A review of the most important tools in asymptotic theory is contained in Appendix C. Throughout this chapter we maintain the following.9.1 and 2. and therefore the results in this Chapter will be stated for the broader projection model unless otherwise stated.1. under Assumtpion 5. and an invertible design matrix Q = E xi x0 > 0: i From Theorems 2. xi ) come from a random sample with …nite second moments 2 Eyi < 1 and Ex2 < 1 ji for j = 1. The primary tools of asymptotic theory are the weak law of large numbers (WLLN). With these tools we can approximate the sampling distributions of most econometric estimators. which approximates sampling distributions by taking the limit of the …nite sample distribution as the sample size n tends to in…nity.2 Weak Law of Large Numbers At the beginning of Chapter 4. Assumption 5. k.1 Introduction As discussed in Section 4. It turns out that most of this theory equally applies to the projection model and the linear conditional mean model. 5. we showed in Figure 4. the OLS estimator ^ is has an unknown statistical distribution.9.Chapter 5 Asymptotic Theory 5.1 how the sampling density of the leastsquare estimator varies with the sample size n: It is possible to see in the …gure that the sampling 65 .2. central limit theorem (CLT). and continuous mapping theorem (CMT). The most widely used and versatile method is asymptotic theory. :::.

2. As this holds for any (even an extremely small value) it is reasonable to say that the distribution of ^ concentrates about as n increases. if for all > 0. We now state these concepts formally. denoted zn ! z. suppose ui is an iid random variable with …nite mean Eui = and variance 1 P E (ui )2 = 2 . We have described three distinct but intertwined concepts: convergence in probability (concentration of a sampling distribution). At its heart. n!1 lim P (jzn zj > ) = 0: De…nition 5. estimator consistency is the e¤ect of sample size on the variance of the sample mean. It means that for any given data distribution. De…nition 5.2. It follows that var(^ ) = 2 =n ! 0 as n ! 1: This means that the distribution of ^ is increasingly concentrated about its mean as n increases. To be more precise. and is thus consistent for : This result is known as the weak law of large numbers. and the weak law of large numbers (convergence in probability of the sample mean). To review. We see that var(^ ) = 2 =n which is decreasing in n (as long as 2 < 1).1 We say that a random variable zn 2 R converges in p probability to z as n ! 1.2 An estimator ^ of a parameter as n ! 1 p is consistent if ^ ! Consistency is a good property for an estimator to possess. consistency (convergence in probability of an estimator to the parameter value). Above. the distribution of ^ becomes concentrated within the region [ . This is the property of estimator consistency –convergence in probability to the true parameter value. In this section we review the core theory explaining this phenomenon. + ] as n diverges. 66 . and consider the sample mean ^ = n n ui : The mean and variance of ^ are i=1 1X 1X E^ = E ui = Eui = n n i=1 i=1 n n and var(^ ) = E (^ )2 = E 1X (ui n i=1 n ) !2 = n n 1 XX E (ui n2 i=1 j=1 ) (uj )= n 1 X n2 i=1 2 2 = n where the second-to-last inequality is because E (ui ) (uj ) = 2 for i = j yet E (ui ) (uj )= 0 for i 6= j due to independence. we showed that the sample mean ^ converges in probability to the population mean as n ! 1. an application of Chebyshev’ inequality yields s P (j^ j> ) var(^ ) 2 = 2 =n 2 !0 as n ! 1: This says that the probability that ^ di¤ers from by more than declines to zero as n ! 1: Equivalently.density concentrates about the true parameter value as the sample size increases. for any > 0. there is a sample size n su¢ ciently large such that the estimator ^ will be arbitrarily close to the true value with high probability.

2. E jz n j = E 1X zi n i=1 n = E jzi j n 1X E jzi j n i=1 2": 2E jui j 1 (jui j > C) E jui j 1 (jui j > C) + jE (ui 1 (jui j > C))j (5. Theorem 5.8).1 Weak Law of Large Numbers (WLLN) If ui 2 R is iid and E jui j < 1.2) 67 . then 1X p un = ui ! E(ui ) n i=1 n as n ! 1 .2. we can assume E(ui ) = 0 by recentering ui on its expectation.2 WLLN for Random Matrices If U i 2 Rk r is iid and E jujli j < 1 for 1 j k and 1 Un = as n ! 1: 1X p U i ! E(U i ) n i=1 n l r then In our derivation. We need to show that for all > 0 and > 0 there is some N < 1 so that for all n N. Theorem 5.1) (where 1 ( ) is the indicator function) which is possible since E jui j < 1: De…ne the random variables wi = ui 1 (jui j C) zi = ui 1 (jui j > C) E (ui 1 (jui j > C)) : E (ui 1 (jui j C)) By the Triangle Inequality (A.2. the Expectation Inequality (C. Proof of Theorem 5.2). and (5.1 states that the WLLN holds under the weaker assumption of a …nite mean.1: Without loss of generality. we proved the WLLN under the assumption that ui has a …nite variance.2. We provide a proof of this more general result for the technically-inclinded readers.Theorem 5.1). P (jun j > ) : Fix and : Set " = =3: Pick C < 1 large enough so that E (jui j 1 (jui j > C)) " (5.

By Jensen’ Inequality (C. We now explain each step in brief and then in greater detail.2.2) s and (5. > 0 and > 0 then for all Proof of Theorem 5. Jacob Bernoulli Jacob Bernoulli (1654 -1705) of Switzerland was one of many famous mathematicians in the Bernoulli family. by Markov’ Inequality (C. the triangle inequality.2: A random vector or matrix converges in probability to its limit if (and only if) all elements in the vector or matrix converge in probability. Pn Pn 1X p xi x0 ! E xi x0 = Q i i n i=1 n (5. 5. the equality by the de…nition of ": We have shown that for any n 36C 2 = 2 2 .3. as n ! 1.1) to show that the least-squares estimator ^ is consistent for the projection coe¢ cient : This derivation is based on three key components.6). published in his posthumous masterpiece Ars Conjectandi.3.4) 68 .3). Speci…cally. Finally. the fact that un = wn + z n . Second. and the bound jwi j s (E jwn j)2 Ew2 n 2 Ewi = n 4C 2 n 2 " 2C.2. as needed.3 Consistency of Least-Squares Estimation In this section we use the WLLN and continuous mapping theorem (CMT. the fact that the wi are iid and mean zero.1) shows that sample moments converge in probability to population moments. (5. One of Jacob Bernoulli’ important contributions was the …rst proof of s the weak law of large numbers. the OLS estimator can be written as a continuous function of a set of sample moments. Theorem 5. observe that the OLS estimator ! 1 ! n n 1X 1X ^= xi x0 xi yi i n n i=1 i=1 1 1 is a function of the sample moments n i=1 xi x0 and n i=1 xi yi : i Second. E jun j E jwn j + E jz n j 3" P (jun j > ) = .3) the …nal inequality holding for n 4C 2 ="2 = 36C 2 = 2 2 .1) states that continuous functions preserve convergence in probability.1). First. the continuous mapping theorem (CMT. P (jun j > ) . Theorem C. by an application of the WLLN these sample moments converge in probability to the population moments.1 applies to each element and therefore converges in probability. Theorem 5. Since each element of U i has a …nite mean by assumption. Theorem C. (5. as needed.2. the weak law of large numbers (WLLN. First. And third.

3.1. First.1. xi ) are mutually independent and identically distributed (Assumption 1.3. the CMT to allows us to combine these equations to show that ^ converges in probability to : Speci…cally. and so are any functions of the observations.2. including xi x0 and xi yi : Second. ^= p 1X p xi yi ! E (xi yi ) : n i=1 n (5.1.6) p We have shown that ^ ! .2) says that when when random variables are iid and have …nite mean. !Q 1X xi x0 i n i=1 1 n ! 1 0 1X xi ei n i=1 n ! Theorem 5. the OLS estimator converges in probability to the projection coe¢ cient vector as the sample size n gets large. For a slightly di¤erent demonstration of this result. and thus ^ is consistent for .1 Consistency of Least-Squares p Under Assumption 5.8) = p =0 p which is the same as ^ ! . as n ! 1.and Third.6) implies that ^ The WLLN and (2. The weak law of large numbers (Theorem 5. Assumption i 69 .4) and (5.5) it is su¢ cient to verify that the elements of the random matrices xi x0 and xi yi are iid and have …nite mean.1 states that the OLS estimator ^ converges in probability to as n diverges to positive in…nity. Thus to apply the WLLN to (5. recall that (4. We now explain the application of the WLLN in (5. these random variables are iid i because the observations (yi .14) imply = 1X xi x0 i n i=1 n n ! 1 1X xi ei n i=1 n ! : (5.5) = : ! E 1X xi x0 i n i=1 n ! 1 xi x0 i 1 (E (xi yi )) 1X xi yi n i=1 n ! (5.4) and (5.5.1).5) and the CMT in (5. as n ! 1: In words.7) Therefore ^ 1X p xi ei ! E (xi ei ) = 0: n i=1 (5. ^ ! as n ! 1: Theorem 5. then sample averages converge in probability to their population mean.6) in greater detail. Section 5.

3 we showed that ^ converges in probability to . ! g (Q. E (xi yi )) 1 E (xi yi ) 5. The …nal step of the proof is the application of the continuous mapping theorem to obtain (5. p Take equation (5.1.1 is su¢ cient for 1 j k and 1 l k.1.5). The derivation starts by writing the estimator as a function of sample moments. The steps are as follows. xi yi i n n i=1 i=1 p = E xi x0 i = : This completes the proof of Theorem 5.1 E jxji xli j and E jxji yi j 2 Ex2 Eyi ji Ex2 Ex2 ji li 1=2 <1 1=2 < 1: We have veri…ed the conditions for the WLLN.9) p This shows that the normalized and centered estimator n ^ is a function of the sample Pn Pn 1 1 average n i=1 xi x0 and the normalized sample average pn i=1 xi ei : Furthermore.1 implies that Q 1 exists and thus g (A.7) and multiply it by n: This yields the expression p n ^ = 1X xi x0 i n i=1 n ! 1 1 X p xi ei n i=1 n ! : (5.3. Consistency is a useful …rst step. and thus (5. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. the latter has i mean zero so the central limit theorem (CLT) applies. by an application of the Cauchy-Schwarz inequality and Assumption 5. ! n n 1X 1X ^=g xi x0 . xi yi n n i=1 n ! 1 n ! where g (A.4) and (5. as n ! 1. Assumption 5.3.1. b) is a continuous function of A and b at all values of the arguments such that A 1 exists. We can write 1X xi x0 i n i=1 ^ = = g 1X x i yi n i=1 i=1 ! n n X 1X 0 1 xi xi .6). b) = A 1 b is a function of A and b: The function g (A.5.4 Asymptotic Normality We started this Chapter discussing the need for an approximation to the distribution of the OLS estimator ^ : In Section 5. but in itself does not provide a useful approximation to the distribution of the estimator.1. To fully understand its application we walk through it in detail. E jxji xli j < 1 and E jxji yi j < 1: Indeed. b) is continuous at A = Q: Hence by the continuous mapping theorem (Theorem C.1). In this Section we derive an approximation typically called the asymptotic distribution. 70 .

Indeed. V ) Q 1 : 71 .11) Putting these steps together. Eui = 0 and Eu2 < 1 for j = 1. Exji We have derived the asymptotic normal approximation to the distribution of the least-squares estimator. Q Q 1 as n ! 1. :::.9. using (5. k.1. :::.10) as n ! 1.9). :::. ui = xi ei which is iid (since the observations are iid) and mean zero (since E (xi ei ) = 0): We calculate that E (ui u0 ) = E xi x0 e2 . as n ! 1 p where V =Q 1 n ^ d ! N (0.1. it is su¢ cient that the observables have …nite fourth moments.4.12) which is …nite if xji and ei have …nite fourth moments. Theorem 5.1 In addition to Assumption 5. then as n ! 1 ji 1 X d p ui ! N 0.4). As ei is a linear combination of yi and xi . ) n i=1 n (5. A su¢ cient condition can be found as follows. Formally. k.1 Asymptotic Normality of Least-Squares Estimator Under Assumption 5. For any j = 1.1. where = E xi x0 e2 : i i p (5.11) is not well de…ned and (5.3).10) requires that the elements of ui = xi ei have …nite variances. n ^ d !Q 1 N (0.1) If ui 2 Rk is iid. if this is not true then (5. (5. Eyi < 1 and for 4 < 1: j = 1.4. where the …nal equality follows from the property that linear combinations of normal vectors are also normal (Theorem B. and (5. ) 1 = N 0.4.1). 4 Assumption 5.10) does not make sense. E ui u0 i n i=1 n : For our application.Central Limit Theorem (Theorem C.2. By the CLT we conclude i i i 1 X d p xi ei ! N (0. k. (5. by the Cauchy-Schwarz Inequality (C.10). note that E jxji ei j2 = E x2 e2 ji i Ex4 ji 1=2 Ee4 i 1=2 (5.

16) (5.13) is true or false. concentrating most of the probability mass around zero. As diminishes the density changes signi…cantly. Here the model is yi = ei = uk i E u2k i E uk i E uk i 2 1=2 + ei where (5. In Figure 5. In this context the normalized least-squares slope estimator n 2 ^ has the 2 2 N(0. 1) .p . Let yi = 0 + 1 xi + ei where xi is N (0.13) the asymptotic variance formulas simplify as = E xi x0 E e2 = Q i i V = Q 1 2 (5.1 states that the sampling distribution of the least-squares estimator. xi ) which satisfy the conditions of Assumption 5. This holds true for all joint distributions of (yi .4. there is no simple answer to this reasonable question. Vilfredo Pareto Vilfredo Pareto (1848-1923) of Italy was a major economic theorist.15) we de…ne V 0 = Q 1 2 whether (5. setting n = 100 and varying the parameter For = 3:0 the density is very close to the N(0. e2 ) = 0: (5. Another example is shown in Figure 5. We illustrate this problem using a simulation. but is somewhat broader. 2 . V is often referred to as is the variance of the asymptotic distribution of n ^ the asymptotic covariance matrix of ^ : The expression V = Q 1 Q 1 is called a sandwich form.13) is true then V = V 0 .13) i i As V Condition (5. However. We say that ei is a Homoskedastic Projection Error when cov(xi x0 .1.14) V 0 Q 1 =Q 1 2 (5. There is a special case where and V simplify. is approximately normal when the sample size n is su¢ ciently large. and 1 ei is independent of xi with the Double Pareto density f (e) = 2 jej .1 is commonly used to approximate the …nite p sample distribution of n ^ : The approximation may be poor when n is small. His major econometric contribution was the Pareto (or power law) distribution which is commonly used to model the empirical distribution of wealth. introducing the economic concept of Pareto e¢ ciency. otherwise V 6= V 0 : We call V 0 the homoskedastic covariance matrix.2.13) holds in the homoskedastic linear regression model. Theorem 5. In Figure 4.1 we display the …nite sample densities q of the normalized estimator n 2 ^ 2 . however. The asymptotic distribution of Theorem 5. 1) density.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes. jej 1: If > 2 the error ei has zero mean and variance =( 2): As approaches 2. its variance diverges q to in…nity. When (5.4.16) 72 . for any …xed n the sampling distribution of ^ can be arbitrarily far from the normal distribution. The trouble is that no matter how large is the sample size.4. after rescaling. 1) asymptotic distibution for any > 2. the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions.15) In (5. Under (5. How large should n be in order for the approximation to be useful? Unfortunately.

1): We show the sampling distribution of n ^ setting n = 100. 4.2 is that the N(0.1 Under Assumption 5. they are close in value when n is very large. ^ 2 n ! 1: p ! 2 and s2 p ! 2 as One implication of this theorem is that multiple estimators can be consistent for the sample population parameter. 6 and 8.5. Note that ei = yi ^ = ei Thus e2 = e2 ^i i 2ei x0 ^ i + ^ x0 ^ i x0 ^ i : 0 = ei + x0 i x0 ^ i xi x0 ^ i (5.5.1 and 5.1. 1) asymptotic approximation is never guaranteed to be accurate.3 we can show that the estimators ^ 2 and s2 are consistent for Theorem 5.1: Density of Normalized OLS estimator with Double Pareto Error p and ui N(0.17) 73 . The lesson from Figures 5. the sampling distribution becomes highly skewed and non-normal. 5. for k = 1.1.5 2: Consistency of Sample Variance Estimators Using the methods of Section 5.Figure 5.1. As k increases. Proof of Theorem 5. While ^ 2 and s2 are unequal in any given application.

consider V ^ . are consistent for the asymptotic covariance matrix. s2 = n n k ^2 ! p 2 : 5.8) and Theorem 5. since n=(n k) ! 1 as n ! 1.6 Consistent Covariance Matrix Estimation In Sections 4. when normalized.4).3. the last line using the WLLN. it follows that as n ! 1.2: Density of Normalized OLS estimator with error process (5. the covariance matrix estimate constructed under the assumption of homoskedasticity. (5.16) and 1X 2 ei ^ ^ = n 2 n 1 = n p ! i=1 n X i=1 2 e2 i 2 1X ei x0 i n i=1 n ! ^ + ^ 0 1X xi x0 i n i=1 n ! ^ as n ! 1. In this section we show that these estimators.1. (5.9 we introduced estimators of the …nite-sample covariance matrix of the least-squares estimator in the regression model. b0 First. we can write the covariance matrix estimator as b0 V^= 1 0 XX n 74 1 ^ s2 = Q 1 2 s : .8 and 4. Writing n X 1 ^= 1 Q xi x0 = X 0 X i n n i=1 as the moment estimator of Q. Thus ^ 2 is consistent for 2: Finally.Figure 5.

9) of Section 3. b0 p Theorem 5.1).2 Under Assumption 5.5. the homoskedastic covariance matrix. V ^ ! V 0 as n ! 1: b0 Now consider V ^ . the White covariance matrix estimator.6.^ Since Q ! Q and s2 ! 5.1. ^ n ! 1: p ! b and V ^ p !V as To illustrate. it follows that p p 2 (see (5.1.1 Under Assumption 5.1. Writing X ^ = 1 xi x0 e2 i ^i n i=1 n (5. and the invertibility of Q (Assumption b0 ^ V^ =Q 1 2 s p !Q 1 2 =V0 b0 so that V ^ is consistent for V 0 .1. we return to the log wage regression (3.3. then i i 1 0 XX n 1 1 b V^ = ^ = Q ^Q ^ 1 1X xi x0 e2 i ^i n i=1 n ! 1 0 XX n 1 : ^ With some work. we can show that ^ is consisent for : Combined with the consistency of Q for 1 1 b Q and the invertibility of Q we …nd that V ^ converges in probability to Q Q =V : Theorem 5. We calculate that s2 = 0:20 and ^ = 0:199 2:80 : 2:80 40:6 Therefore the two covariance matrix estimates are b0 V^= = 1 14:14 14:14 205:83 1 14:14 14:14 205:83 1 0:20 = 6:98 0:480 0:480 :039 and ^ V 1 :199 2:80 2:80 40:6 1 14:14 14:14 205:83 1 = 7:20 0:493 0:493 0:035 : p In this case the two estimates are quite similar.6.4) and Theorem 5. The (White) standard errors for ^ 0 are 7:2=988 = p :085 and that for ^ 1 is :035=988 = :006: We can write the estimated equation with standard errors using the format \ log(W age) = 1:33 + 0:115 Education: (:08) (:006) 75 .4.18) as the moment estimator for = E xi x0 e2 .1).

We …rst show ^ ^ = = 1X xi x0 e2 i ^i n 1 n i=1 n X i=1 n p ! : Using (5. (A. th i i i Using the Cauchy-Schwarz Inequality (C.1.3) twice and Assumption 5.6.17) xi x0 e2 i i 2X xi x0 ^ i n i=1 n 0 xi ei + 1X xi x0 i n i=1 n ^ 0 2 xi : (5.4) and Assumption 5. Take the …rst term on the right-hand-side of (5.4. We now take the third term in (5.5) and the Schwarz Inequality (A.Proof of Theorem 5.19).20) kxi k jei j ^ Using Holder’ inequality (C.5) and the Schwarz Inequality 1X xi x0 i n i=1 n p 1X p kxi k3 jei j ! E kxi k3 jei j < 1: n i=1 ^ 0 2 xi 1X xi x0 i n i=1 i=1 n ^ 0 2 xi p !0 n 1X kxi k4 ^ n 76 . This shows that the second term on the right-hand-side of (5.19) converges in probability to zero.7).19) in turn.19).20) converges in probability to zero. we can apply the WLLN (Theorem 5. E xji xli e2 i Ex2 x2 ji li Ex4 ji 1=2 Ee4 i 1=2 1=4 Ex4 li 1=4 Ee4 i 1=2 : Since this expectation is …nite.2.1. the Matrix Schwarz Inequality.2. Applying the Triangle Inequality (A. the Matrix Schwarz Inequality (A.1) to …nd that 1X p xi x0 e2 ! E xi x0 e2 = i i i i n i=1 n : Now take the second term on the right-hand-side of (5. s E kxi k3 jei j By the WLLN n E kxi k4 3=4 E e4 i 1=4 < 1: Since ^ ! 0 it follows that (5.19).6) 2X xi x0 ^ i n i=1 n 0 xi ei 2X xi x0 ^ i n 2 n i=1 n X n 0 xi ei 0 xi x0 i 3 ^ ! 2 n i=1 n X i=1 xi jei j : (5. Again by the Triangle Inequality.8) to the matrix Euclidean norm.4. The jl’ element of xi x0 e2 is xji xli e2 .19) We now examine each k k sum on the right-hand-side of (5. equation (A.

Considering the three terms on the right-hand-side of (5. combined with (5.21) = H0 V H : (5. Finally. k ): For example.19).1.22) The asymptotic approximation (5. It is a very powerful result. and the second and third converge in probability to zero. we may be interested in a single coe¢ cient j or a ratio j = l : In these cases we can write the parameter of interest as a function of : Let h : Rk ! Rq denote this function and let = h( ) denote the parameter of interest. 77 . H = R. the function h( ) is linear: h( ) = R0 for some k q matrix R: In this case. ^ V ^ from which it follows that V ^ =Q p 1 ^Q ^ 1 p !Q 1 Q 1 =V . as most parameters of interest can be written in this form. We p conclude that ^ ! as claimed. :::. The estimate of is ^ = h( ^ ): What is the asymptotic distribution of ^? Assume that h( ) is di¤erentiable at the true value of : By a …rst-order Taylor series approximation: h( ^ ) ' h( ) + H 0 where H = Thus p n ^ n h( ^ ) p ' H0 n ^ = d ^ : @ h( ) @ p k q: h( ) = N (0. This shows that the third term on the right-hand-side of (5.21) is often called the delta method because it approximates the distribution of ^ by a …rst-order expansion. V ) (5.7 Functions of Parameters Sometimes we are interested in some lower-dimensional function of the parameter vector = ( 1 . !V as n ! 1: 5.p p 1 P the …nal convergence since ^ ! 0 and n n kxi k4 ! E kxi k4 < 1 under Assumption i=1 5. nonlinear functions of asymptotically normal estimators are themselves asymptotically normally distributed.4.4) and the invertibilility of Q. V ) : where V ! H 0 N (0. It shows that (at least approximately). In many cases.19) converges in probability to zero. we have shown that the …rst term converges in probability to .

if R is a “selector matrix” R= so that if =( 1.22) we see we need an estimate of ^ H and V .23) then h i b b b V ^ = V 11 = V . V ^ is often called an asymptotic covariance matrix estimator. I 0 (5. 2 ).21). (5. p d n ^ ! N (0. q q ^) = n 1=2 V ^ = n 1=2 H 0 V ^ H b c b c s( Theorem 5. V ^ : To estimate H we use @ c H = h( ^ ): @ b c0 b c V ^ = H V ^H n ^1 d 1 ! N (0. I 0 the upper-left block of V : In other words. that is.1 Asymptotic Distribution of Functions of Parameters Under Assumption 5.4.In particular.1.22) in this case is p where V 11 = [V ]11 How do we estimate the covariance matrix for ^? From (5. the standard error for ^ is the square root of V ^ .7. When h( ) is linear h( ) = R0 then H = R and When R takes the form of a selector matrix as in (5.21)-(5. 11 b b V ^ = R0 V ^ R: : b the upper-left block of the covariance matrix estimate V : b When q = 1 (so h( ) is real-valued). V ) where V and as n ! 1: = H0 V H p b V^ !V 78 . We already have an estimate of the latter.23) then = R0 V = = 1 and V I 0 = V 11 . V 11 ) Putting the parts together we obtain b as the covariance matrix estimator for ^: As the primary justi…cation for V ^ is the asymptotic b approximation (5.

1) d Thus the asymptotic distribution of the t-ratio tn ( ) is the standard normal.8. Since this distribution does not depend on the parameters. and the parameter. a z-statistic or a studentized statistic. by the continuous mapping theorem. Gosset published under the pseudonym “Student” Consequently. We showed (5. we say that tn is exactly pivotal. the statistic tn has an exact t distribution. and is therefore exactly free of unknowns. @ @ p c H = h( ^ ) ! h( ) = H : @ @ b c0 b c V ^ = H nV ^ H p ! H0 V H = V . however. this famous distribution is known as the . since ^ ! ^ and h( ) is continuously di¤erentiable. We won’ be making such distinctions and will typically refer to tn ( ) as a t-statistic. its standard error. Putting these together completing the proof . At the time.2. Gosset (1876-1937) of England is most famous for his derivation of the student’ s t distribution. see Section 3. could be a single element of ). William Gosset William S.8 t statistic Let = h( ) : Rk ! R be any parameter of interest (for example. we say that tn ( ) is asymptotically pivotal.Proof.21). Gosset worked at Guiness brewery. ^ its estimate and s(^) its asymptotic standard error. Consider the statistic ^ tn ( ) = s(^) (5. In this case. In special cases (such as the normal regression model. pivotal statistics are unavailable and we must rely on asymptotically pivotal statistics.11). which prohibited its employees from publishing in order to prevent the possible loss of trade secrets. we need only to show consistency of the covariance matrix estimator. First. student’ t rather than Gosset’ t! s s 79 . published in the paper “The probable error of a mean”in 1908. In general.24) which di¤erent writers alternatively call a t-statistic. p b V^ !V : p Second. We also t often suppress the parameter dependence. from Theorem 5. Theorem 5.1 tn ( ) ! N (0. To circumvent this barrier. writing it as tn : The t-statistic is a simple function of the estimate. 5.6.

V ) p V = N (0. a simple (yet silly) interval is R with probability 1 Cn = ^ with probability By construction. Either 2 Cn or 2 Cn : The = coverage probability is P( 2 Cn ).Proof of Theorem 5. It is designed to cover with high probability.9 Con…dence Intervals A con…dence interval Cn is an interval estimate of 2 R: It is a function of the data and hence is random. By Theorem 5. by reporting a (1 )% con…dence interval Cn .1. For example. V ) Thus p b V^ !V ^ tn ( ) = s(^) p ^ n q = b V^ d N (0. p n ^ and d ! N (0. This con…dence interval is symmetric about the point estimate ^. that is ( ) ^ Cn = f : jtn ( )j cg = : c c : s(^) The coverage probability of this con…dence interval is P ( 2 Cn ) = P (jtn ( )j 80 c) . we are stating that with (1 )% probability (in repeated samples) the true lies in Cn : There is not a unique method to construct con…dence intervals. and its length is proportional to the standard error s(^): Equivalently. if ^ has a continuous distribution.7. but Cn is uninformative about : This is not a useful con…dence interval. it turns out that a generally reasonable con…dence interval for takes the form h i Cn = ^ c s(^). or more generally written as (1 )% for some 2 (0. typically 90% or 95%. ^ + c s(^) (5. When we have an asymptotically normal parameter estimate ^ with standard error s(^).8. P( 2 Cn ) = 1 .25) where c > 0 is a pre-speci…ed constant. 1) ! The last equality is by the property that linear scales of normal distributions are normal. so this con…dence interval has perfect coverage. Cn is the set of parameter values for such that the t-statistic tn ( ) is smaller (in absolute value) than c. The convention is to design con…dence intervals to have coverage probability approximately equal to a pre-speci…ed target.1. 1): In this case. 5.

(Technically. who established that in the projection model the OLS estimator has the smallest asymptotic mean-squared error among feasible estimators.8. j . this yields the most commonly implied con…dence interval in applied econometric practice h i Cn = ^ 2s(^). and can be backed out of a normal distribution table. This means selecting the constant c so that (c) ( c) = 1 : This is a useful rule-of thumb.5 we presented the Gauss-Markov theorem as a limited e¢ ciency justi…cation for the least-squares estimator. A broader justi…cation is provided in Chamberlain (1987). This property is called semiparametric e¢ ciency. but we can approximate the coverage probability by taking the asymptotic limit as n ! 1: Since tn ( ) is asymptotically standard normal (Theorem 5.0 for 1. E¤ectively.26) for some constants pj . the maximum likelihood estimator (MLE) is 1X pj = ^ 1 (yi = n i=1 n j) 1 xi = j 81 . but the pj are unknown. it is a 95. the de…nition of linear the projection coe¢ cient (2. When reading a set of empirical results.) Con…dence intervals are a simple yet e¤ective tool to assess estimation uncertainty. This asymptotic 95% con…dence interval Cn is simple to compute and can be roughly calculated from tables of coe¢ cient estimates and standard errors. j = 1.96. = 0:05 (a 95% interval) implies c = 1:96 and = 0:1 (a 90% interval) implies c = 1:645: Rounding 1. and j : Assume that the j and j are known.96 to 2. typically 90% or 95%.10 Semiparametric E¢ ciency In Section 4. 1) and (u) = P (Z u) is the standard normal distribution function. :::.4% interval.27) j j=1 j=1 Thus is a function of ( 1 . but this distinction is meaningless.10) can be rewritten as 0 1 10 1 r r X X =@ pj j 0 A @ pj j j A (5. Suppose that the joint distribution of (yi . this makes c a function of .) t In this discrete setting. due to the substitution of 2.which is generally unknown. then do not jump to a conclusion about based on the point estimate alone. look at the estimated coe¢ cient estimates and the standard errors. That is.1). compute the con…dence interval Cn and consider the meaning of the spread of the suggested values. r (5. Thus the asymptotic coverage probability is a function only of c: The convention is to design the con…dence interval to have a pre-speci…ed coverage probability 1 . (We know the values yi and xi can take. We discuss the intuition behind his result in this section. it follows that as n ! 1 that P ( 2 Cn ) ! P (jZj c) = (c) ( c) where Z N (0. If the rage of values in the con…dence interval are too wide to learn about . r ) : As the data are multinomial. For example. xi ) is discrete. for …nite r. but we don’ know the probabilities. ^ + 2s(^) : 5. and is a strong justi…cation for the least-squares estimator. xi = j = pj . P yi = j. For a parameter of interest. :::.

Since this is a regular parametric model the MLE is asymptotically e¢ cient (see Appendix D).27) with the parameters pj replaced by the estimates pj : ^ ^ mle = @ 0 r X j=1 0A j j pj ^ 1 10 Substituting in the expressions for pj . pj is the percentage of the observations ^ which fall in each category. the maximum likelihood estimator is identical to the OLS estimator. In this section we provide an alternative demonstration based on the rich but technically challenging theory of semiparametric e¢ ciency bounds. It follows that the OLS estimator is asymptotically e¢ cient. The MLE ^ mle for is then the analog of (5. An excellent accessible review has been provided by Newey (1990). where 1 ( ) is the indicator function. :::. if the data have a discrete distribution. and for any multinomial distribution the moment estimator is asymptotically e¢ cient. r.10. 1X xi x0 i n i=1 ! 1 1X xi yi n i=1 n ! = ^ ols : 5.1. The hard part of the argument (which was rigorously developed in Chamberlain’ paper. Chamberlain proved that the OLS estimator is asymptotically semiparametrically e¢ cient for the projection coe¢ cient for the class of models satisfying Assumption 5.11 Semiparametric E¢ ciency in the Projection Model In this section we continue the investigation of semiparametric e¢ ciency as raised in Section 5. but we s do not present it here) is the extension to the case of continuously-distributed data:The intuition is that all continuous distributions can be arbitrarily well approximated by some multinomial distribution. ^ r X j=1 @ r X j=1 pj ^ A: j j 1 pj ^ 0 j j = n r X1X 1 (yi = n j=1 j) 1 xi = xi = j 0 j j = 1 n = and r X j=1 n 1X xi x0 i n i=1 i=1 n X X r i=1 j=1 1 (yi = j) 1 j xi x0 i pj ^ j j = r n X1X 1 (yi = n j=1 j) 1 xi = xi = j j j = 1 n = Thus ^ mle = n 1X xi yi : n i=1 n i=1 n X X r i=1 j=1 1 (yi = j) 1 j xi yi In other words. There we presented the intuition behind Chamberlain’ demonstration of the asymptotic s e¢ ciency of the least-squares estimator. That is.for j = 1. Formalizing this intuition using a rigorous mathematical argument. 82 .1.

and let z 1 . 1).3). we see that the moment estimator ^ has the asymptotic distribution p ^ d ! N (0. so the asymptotic variance of any semiparametric estimator cannot be smaller than the Cramer-Rao bound for any parametric submodel. By the Cramer-Rao theorem no estimator (and in particular no semiparametric estimator) has an asymptotic variance smaller than V . We can calculate that ES 2 = 2 and thus conclude that n (~ ) ! N (0. the R mean is ( ) = zf (z j ) dz. and the parameter of interest is ( ) = g ( ) which varies with the parameter . given the density f (z j ) we can construct the MLE ^ for . The equality f (z j 0 ) = f (z) means that the submodel class passes through the true density. 1=2) : The asymptotic variance of the MLE is one-half that of the sample mean. In this setting the sample mean is ine¢ cient. The semiparametric asymptotic variance bound (which is sometimes called 83 . Is there another estimator with a smaller asymptotic variance? While it seems intuitively unlikely that another estimator could have a smaller asymptotic variance than ^ . :::. Speci…cally. V ) where V is the smallest possible covariance matrix among regular estimators. The class of submodels and parameter 0 depend on the true density f: In the submodel f (z j ) . I0 ) where I0 = ES 2 and S = @@ log f (z j ) = p p d 2 sgn (z ) is the score. This comparison is true for all submodels . In this model the maximum likelihood estimator (MLE) ~ for is di¤erent than the sample mean (and happens to be the sample median). and the MLE ^ = g(^ ) for : The MLE satis…es p n ^ ( ) d Our treatment covers what is known as the smooth function model. the MLE R ^ = zf z j ^ dz for . z n be an iid sample from this distribution. suppose that the true density of z is the unknown function f (z) with mean = R Ez = zf (z)dz and the parameter of interest is = g ( ) : A parametric submodel for f (z) is a density f (z j ) which is a smooth function of a parameter . and there is some 0 such that f (z j 0 ) = f (z): The index indicates the submodels. how do we know that this is not the case? To show that the answer is not immediately obvious. which includes the projection model as a special case. Suppose that z 2 R has the density f (z j ) = p p d 2 1=2 exp jz j 2 : Since var (z) = 1 we see that the sample mean satis…es n (^ ) ! N (0. The standard moment P estimator for is the sample mean ^ = n 1 n z i and that for is ^ = g (^ ) : This setting i=1 0 + e by letting z be the vector includes the least-squares estimator for the projection model y = x with elements xj e and xj xl for all j k and l k: p d ) ! N (0.3. The mathematical trick is to reduce the semiparametric model to a set of parametric “submodels” The classic Cramer-Rao variance bound can be found for each parametric submodel. Let 2 @ be the class of all submodels for f: Since each submodel is parametric we can calculate its Cramer-Rao bound for estimation of . ) : Applying the Delta The sample mean has the asymptotic distribution n (^ Method (Theorem C. The . Recall from the theory of maximum likelhood p d 1 that the MLE satis…es n (~ ) ! N (0. Let z 2 Rm be a random vector with …nite mean = Ez and …nite variance matrix = var (z) . We call this setting semiparametric as the parameter of interest (the mean) is …nite dimensional while the remaining features of the distribution are unspeci…ed. it might be helpful to review a setting where the sample mean is ine¢ cient.feasible estimator. V ) where V = @ 0 g ( ) @ g ( )0 : We want to know if ^ is the best n @ @ ! N (0. 
In the semiparametric context an estimator is called semiparametrically e¢ cient if it has the smallest asymptotic variance among all semiparametric estimators. But the question at hand is whether or not the sample mean is e¢ cient when the form of the distribution is unknown. so the submodel is a true model. The parameter of interest is = g ( ) where g ( ) is a continuously di¤erentiable function. variance bound for the semiparametric model (the union of the submodels) is then de…ned as the supremum of the individual variance bounds. Formally.

for any submodel with Cramer-Rao variance V and any semiparametric estimator ^ with asymptotic variance V . it is su¢ cient to construct a parametric submodel for which the Cramer-Rao bound (equivalently. our goal is to …nd a parametric submodel whose Cramer-Rao bound for estimation of is V : The solution involves creating a tilted version of the true density. In these cases. then it is necessary that V V V : The …rst inequality holds by the de…nition of V . Consider the parametric submodel f (z j ) = f (z) 1 + 0 1 (z ) (5. Thus if we …nd a submodel and semiparametric estimator ^ such that V = V . then this is the semiparametric asymptotic variance bound. It is a parametric submodel since f (z j 0 ) = f (z) when 0 = 0: This parametric submodel has the mean Z ( ) = zf (z j ) dz Z Z 1 = zf (z)dz + f (z)z (z )0 dz = + and parameter of interest ( ) = g ( + ) both which are smooth functions of : Since 1 @ @ (z ) log f (z j ) = log 1 + 0 1 (z ) = 0 1 @ @ 1+ (z ) it follows that the score function for s= is 0) @ log f (z j @ = 1 (z ): (5. and the second holds since no semiparametric estimator can be more e¢ cient than the MLE in any parametric submodel. the asymptotic variance of the MLE) equals that of a known semiparametric estimator. However the solution is straightforward in the smooth function model. As the semiparametric variance bound cannot be smaller than the Cramer-Rao bound for any submodel. If the asymptotic variance of a speci…c semiparametric estimator equals the bound V we say that the estimator is semiparametrically e¢ cient.28) where f (z) is the true density and = Ez: Note that Z Z Z 0 1 f (z) (z f (z j ) dz = f (z)dz + ) dz = 1 and for all close to zero f (z j ) 0: Thus f (z j ) is a valid density function. and cannot be larger than the asymptotic variance of any feasible semiparametric estimator. Formally. V = sup V : 2@ It is a lower bound for the asymptotic variance of any semiparametric estimator. We now show this for the moment estimator ^ = g (^ ) discussed above. As ^ has asymptotic variance V . then it must be the case that V = V and ^ is semiparametrically e¢ cient.29) 84 . For many statistical problems it is quite challenging to calculate the semiparametric variance bound.the semiparametric e¢ ciency bound) is the supremum of the Cramer-Rao bounds from all conceivable submodels. and the aforementioned feasible semiparametric estimator must be semiparametrically e¢ cient. it follows that if the asymptotic variance of a feasible semiparametric estimator equals the Cramer-Rao bound for at least one submodel.

and the marginal density of x. it is su¢ cient to …nd a parametric submodel whose Cramer-Rao bound for estimation of is V 0 : This would establish that V 0 is the semiparametric variance bound and the OLS estimator ^ is semiparametrically e¢ cient for : Let the joint density of y and x be written as f (y. As we noted in that section. the restriction to linear unbiased estimators is unsatisfactory as it leaves open the possibility that an alternative (non-linear) estimator could have a smaller asymptotic variance. as described in the previous section. Recall that in the homoskedastic regression model the asymptotic variance of the OLS estimator ^ for is V 0 = Q 1 2 : Therefore. In Sections 5. and the conditional density of y given x is f1 (y j x) 1 + (y x0 ) (x0 ) = 2 : To see that the latter is a valid conditional R density.By classic theory the asymptotic variance of the MLE ^ for is the Cramer-Rao bound (E (ss0 )) 1 = 1 1 1 E (z ) (z )0 = .30) You can check that in this submodel. which is identical to the asymptotic Theorem 5. in the class of linear unbiased estimators the one with the smallest variance is least-squares.11. 5.1. the marginal density of x is f2 (x) . the product of the conditional density of y given x. which stated that in the homoskedastic regression model. but this does not address the question of whether or not OLS is e¢ cient in the homoskedastic regression model.11 we showed that the OLS estimator is ef…cient in the projection model. @ @ 0g( ) @ @ g ( )0 . In this section we return to the question of e¢ cient estimation in this model using the theory of semiparametric variance bounds as presented in the previous section. observe that the regression assumption implies that yf1 (y j x) dy = x0 and therefore Z Z Z 0 0 2 f1 (y j x) 1 + y x x = dy = f1 (y j x) dy + f1 (y j x) y x0 dy x0 = 2 = 1: 85 . The MLE for is (^) = g + ^ which by the delta method has asymptotic variance V = variance of the moment estimator ^ : This shows that moment estimators are semiparametrically e¢ cient. x) = f1 (y j x) f2 (x) .1 Under Assumption 5. x j ) = f1 (y j x) 1 + y x0 x0 = 2 f2 (x) : (5.5 we presented the Gauss-Markov theorem. and the OLS estimator is semiparametrically e¢ cient. We have established the following theorem.10 and 5. and this includes the OLS estimator in the projection model.1. Now consider the parametric submodel f (y.12 Semiparametric E¢ ciency in the Homoskedastic Regression Model In Section 4. the semiparametric variance bound for estimation of is V = Q 1 Q 1 .

1 states that OLS has the smallest asymptotic variance among regular estimators.1 In the homoskedastic regression model. while Theorem 5. the conditional mean is linear in x and the regression coe¢ cient is ( )= + : We now calculate the score for estimation of : Since @ @ log f (y. x j 0 ) = xe= 2 : @ The Cramer-Rao bound for estimation of (and therefore ( ) as well) is s= E ss0 1 = 4 E (xe) (xe)0 1 = 2 Q 1 = V 0: We have shown that there is a parametric submodel (5. The di¤erence is that the Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear estimators. This result is similar to the Gauss-Markov theorem. This is a much more powerful statement. in that it asserts the e¢ ciency of the leastsquares estimator in the context of the homoskedastic regression model. 86 . Theorem 5.12.30) whose Cramer-Rao bound for estimation of is identical to the asymptotic variance of the least-squares estimator.12. which therefore is the semiparametric variance bound. the semiparametric variance bound for estimation of is V 0 = 2 Q 1 and the OLS estimator is semiparametrically e¢ cient. x j ) = log 1 + y @ @ the score is x0 x0 = 2 = x (y x0 ) = 2 1 + (y x0 ) (x0 ) = 2 @ log f (y. = = 2 dy 2 dy R using the homoskedasticity assumption that (y x0 )2 f1 (y j x) dy = 2 : This means that in this parametric submodel.In this parametric submodel the conditional mean of y given x is Z E (y j x) = yf1 (y j x) 1 + y x0 x0 = 2 dy Z Z = yf1 (y j x) dy + yf1 (y j x) y x0 x0 Z Z 2 y x0 = yf1 (y j x) dy + f1 (y j x) x0 Z + y x0 f1 (y j x) dy x0 x0 = 2 = x0 ( + ) .

For simplicity. Find the probability limit of this estimator. Find the probability limit of ^ as n ! 1: Is ^ consistent for n X i=1 xi yi ! ? Exercise 5. xi ) are observed. is it consistent for 1 ? If not. X 2 ) which satisfy y 1 = X 1 1 + e1 and y 2 = X 2 2 + e2 . Let ^ 1 and ^ 2 be the OLS estimates of 1 and 2 . xi ) only the pair (yi .5 Of the variables (yi .1 You have two independent samples (y 1 . we say that yi is a latent variable.3 Take the model yi = x0 1 +x0 2 +ei with Exi ei = 0: Suppose that 1 is estimated 1i 2i by regressing yi on x1i only.2 The model is yi = x 0 + e i i E (xi ei ) = 0 = E xi x0 e2 : i i Find the method of moments estimators ^ . ^ for ( . where E (x1i e1i ) = 0 and E (x2i e2i ) = 0. In this case. ^ (a) In this model. X be n ^= where k (rank k): y = X + e with E(xi ei ) = 0: De…ne the ridge xi x0 + I k i ! 1 n X i=1 > 0 is a …xed constant. ) : e¢ cient estimators of ( .Exercises Exercise 5. are ^ . in what sense are they e¢ cient? Exercise 5. )? (b) If so. In general.4 Let y be n regression estimator 1. Suppose yi = x0 + ei i E (xi ei ) = 0 yi = yi + ui where ui is a measurement error satisfying E (xi ui ) = 0 E (yi ui ) = 0 Let ^ denote the OLS coe¢ cient from the regression of yi on xi : (a) Is the coe¢ cient from the linear projection of yi on xi ? as n ! 1? 87 (b) Is ^ consistent for . you may assume that both samples have the same number of observations n: (a) Find the asymptotic distribution of p n ^2 2 ^1 = 1: ( 2 1) as n ! 1: (b) Find an appropriate test statistic for H0 : (c) Find the asymptotic distribution of this statistic under H0 : Exercise 5. under what conditions is this estimator consistent for 1 ? Exercise 5. X 1 ) and (y 2 . yi . and both X 1 and X 2 have k columns.

are both estimators consistent for ? (b) Are there conditions under which either estimator is e¢ cient? 88 .(c) Find the asymptotic distribution of Exercise 5.6 The model is p n ^ as n ! 1: yi = xi + ei E (ei j xi ) = 0 where xi 2 R: Consider the two estimators ^ = ~ = Pn i=1 x y Pn i 2 i i=1 xi n 1 X yi : n xi i=1 (a) Under the stated assumptions.

We then …nd z =2 . It is helpful to remember that this is simply a way of saying “Using a t-test. and if jtn j < 2 it is common to say that the t-statistic is statistically insigni…cant. Formally.8. the upper =2 quantile of the standard normal distribution which has the property that if Z N(0. When jtn j > 2 it is common to say that the t-statistic is statistically signi…cant. The asymptotic p-value of the statistic tn is pn = p(tn ) where p(t) is the tail probability function p(t) = P (jZj > jtj) = 2 (1 (jtj)) : If the p-value pn is small (close to zero) then the evidence against H0 is strong. While there is no objective scienti…c basis for choice of signi…cance level . This implies a critical value of z:025 = 1:96 2. By “large” we mean that the observed value of the t-statistic would be unlikely if H0 were true.Chapter 6 Testing 6. we …rst pick an asymptotic signi…cance level . z:025 = 1:96 and z:05 = 1:645: A test of asymptotic signi…cance jtn j > z =2 : Otherwise the test does not reject. the common practice is to set = :05 or 5%. A t-test rejects H0 in favor of H1 when jtn ( 0 )j is large.1 t tests The t-test is routinely used to test hypotheses on .” A related statistic is the asymptotic p-value. which can be interpreted as a measure of the evidence against the null hypothesis. A simple null and composite hypothesis takes the form H0 : H1 : = 6= 0 0 where 0 is some pre-speci…ed value. 89 . the hypothesis that = 0 can [cannot] be rejected at the asymptotic 5% level. 1) then P jZj > z =2 = : rejects H0 if For example.1 implies that P (reject H0 j H0 true) = P jtn j > z =2 =2 ! P jZj > z j = = : 0 The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing. or “accepts” H0 : The asymptotic signi…cance level is because Theorem 5.

But if this is not the case. 1]. s(^) the ratio of the coe¢ cient estimate to its standard error.An equivalent statement of a Neyman-Pearson test is to reject at the % level if and only if pn < : Signi…cance tests can be deduced directly from the p-value since for any . The above discussion requires that the researcher knows what the coe¢ cient means (in terms of the economic problem) and can interpret values and magnitudes. it is constructive to focus on the point estimate. Fundamentally. and the focus should be on the value and interpretation of this estimate. in that the reader is allowed to pick the level of signi…cance . This is critical for good applied econometric practice. however. This is very poor econometric practice. The standard error is a measure of precision. pn < if and only if jtn j > z =2 : The p-value is more general. the common t-ratio is a test for the hypothesis that a coe¢ cient equals zero. 1]. when a coe¢ cient is of interest. not just signs. That is. It is a receipe for banishment of your work to lower tier economics journals. Another helpful observation is that the p-value function is a unit-free transformation of the d t statistic. it is distracting. If the standard error is large then the point estimate is not a good summary about : The endpoints of the con…dence interval describe the bounds on the likely possibilities. 1]: 6. pn ! U[0.2 t-ratios Some applied papers (especially older ones) report “t-ratios”for each estimated coe¢ cient. This should be reported and discussed when this is an interesting economic hypothesis of interest. from which it follows that pn ! U[0. its standard error. under H0 . the widely-seen statement “the t-ratio is highly signi…cant” has little interpretive value. and its con…dence interval. The point estimate gives our “best guess” for the value. To see this fact. then the data have produced an accurate estimate. Instead. note that the asymptotic distribution of jtn j is F (x) = 1 p(x): Thus P (1 pn u) = = = P (1 P jtn j p(tn ) u) F 1 1 u) (u) P (F (tn ) ! F F establishing that 1 d (u) = u. then the dataset is not su¢ ciently informative to render inferences about : On the other hand if the con…dence interval is tight. in contrast to Neyman-Pearson rejection/acceptance reporting where the researcher picks the signi…cance level. In contrast. so the “unusualness” of the test statistic can be compared to the easy-to-understand uniform distribution. If the con…dence interval embraces too broad a set of values for . d pn ! U[0. regardless of the complication of the distribution of the original test statistic. and should be studiously avoided. or describe “which regressors have a signi…cant e¤ect on y” by noting which t-ratios exceed 2 in absolute value. For a coe¢ cient these are ^ tn = tn (0) = . and equal the t-statistic for the test of the hypothesis H0 : = 0: Such papers often discuss the “signi…cance” of certain variables or coe¢ cients. 90 . The con…dence interval gives us the range of values consistent with the data.

1. 1] under H0 : In applied work it is good practice to report the p-value of a Wald statistic. so by the continuous p p ^ ^0 ^ ^ mapping theorem. The asymptotic p-value for Wn is pn = p(Wn ). H ( ^ ) ! H : Thus V = H V H ! H 0 V H = V > 0 if H has full rank q: Hence 0 d ^ 1 ^ Wn = n ^ ! Z 0V 1Z = 2. For example.6. In this case the t-statistic approach does not work.9. it is conventional to describe a Wald test as “signi…cant” if Wn q exceeds the 5% critical value. 2 (:05) = 3:84 = z:025 : The Wald test fails to reject if Wn is q 1 less than 2 ( ): As with t-tests.1 depends solely on q –the number of restrictions being tested.21) showed that p ^ R0 V R 1 R0 ^ 0 : p n ^ d ^ showed that V ! V : Furthermore. 0 V 0 q by Theorem B. then Wn ! d 2.3.4. 91 .6.1 Under H0 and Assumption 5. We have the null and alternative H0 : H1 : The natural estimate of = 6= 0 0: is ^ = h( ^ ) and has asymptotic covariance matrix estimate ^ ^0 ^ ^ V =H V H where @ ^ H = h( ^ ): @ The Wald statistic for H0 against H1 is Wn = n ^ = n h( ^ ) When h is a linear function of 0 0 ^ V 0 0 1 ^ 0 1 ^0 ^ ^ H V H h( ^ ) 0 : (6. An asymptotic Wald test rejects H0 in favor of H1 if Wn exceeds 2 ( ). and Theorem 5.1) . as it helps readers intrepret the magnitude of the statistic. H ( ) is a continuous function of .3: We have established: ! Z N (0. It does not depend on k –the number of parameters estimated. h( ) = R0 . q a chi-square random variable with q degrees of freedom.3.quantile q 2 of the 2 distribution. the upper. where p(x) = P 2 x is the tail probability q function of the 2 distribution. and q pn is asymptotically U[0. The Wald test rejects at the % level if and only if pn < . and it is desired to test the joint restrictions simultaneously. if rank(H ) = q. V ) .2 Theorem 6. Notice that the asymptotic distribution in Theorem 6.3 Wald Tests Sometimes = h( ) is a q 1 vector. then the Wald statistic takes the form 0 0 Wn = n R 0 ^ The delta method (5.

= 2. First.2) e0 e ^^ where e=y ~ are from OLS of y on X 1 . k = k1 + k2 .2) only holds if V 0 = s2 n 1 X 0 X .6. We know that the Wald statistic takes the form ^ Wn = n ^ V 0 0 1 ^ 1 ^ = n ^ 2 R0 V R ^ 2: Now suppose that covariance matrix is computed under the assumption of homoskedasticity.) We now derive expression (6.3) R0 X 0 X where M 1 = I 1 R = R0 1X 0 : 1 2 X0 X1 X0 X2 1 1 X0 X1 X0 X2 2 2 1 R = X 0 M 1X 2 2 1 X 1 (X 0 X 1 ) 1 ^ R0 V 0 R 1 Thus 1 =s n R0 X 0 X 92 1 1 R =s 2 n 1 X 0 M 1X 2 2 .2). and there are q = k2 restrictions. ~1 = X 0 X 1 1 1 X0 y 1 X 0y are from OLS of y on X = (X 1 . as the sum of squared errors is a typical output from statistical packages. (We can also call Wn a homoskedastic form of the Wald statistic. and the null hypothesis is H0 : 2 = 0: is linear with R = 0 I In this case. Because of this connection we call (6. This statistic is typically reported as an “F-statistic” which is de…ned as Fn = 0 e0 e e0 e =k2 ~~ ^^ Wn = : k2 e0 e=(n k) ^^ 1 ^ While it should be emphasized that equality (6.4 F Tests y = X1 1 Take the linear model + X2 2 +e where X 1 is n k1 . ^ = X 0X 1 X 1 ~ 1.2) is that it is directly computable from the standard output from two simple OLS regressions. X 2 ): The elegant feature about (6. note that by partitioned matrix inversion (A. Also h( ) = R0 a selector matrix. X 2 is n k2 . so 1 ^ ^ that V is replaced with V 0 = s2 n 1 X 0 X : We de…ne the “homoskedastic” Wald statistic 0 Wn = n ^ 0 0 ^0 V 1 ^ 1 ^ = n ^ 2 R0 V 0 R ^ 2: What we show in this section is that this Wald statistic can be written very simply using the formula e0 e e0 e ~~ ^^ 0 Wn = (n k) (6.2) the 0 F form of the Wald statistic. and e=y ^ X ^. still this formula often …nds good use in reading applied papers.

This is rarely an issue today. The Gaussian log-likelihood at these estimates is log L( ^ 1 . ~ 2 ) where ~ 1 is the OLS estimate from a regression of yi on x1i only. While there are special cases where this F statistic is useful. Also. there is no reason to report this F statistic. since ^ 0 X 0 M 1 X 2 ^ 2 = e0 e ~~ 2 2 s2 = (n k) 1 e0 e: ^^ e0 e ^^ we conclude that 0 Wn = (n k) e0 e e0 e ~~ ^^ e0 e ^^ as claimed. ^^ = 2 2 or alternatively. an “F-statistic”is reported. As a general rule. The estimator under the alternative is the unrestricted estimator ( ^ 1 . these cases are atypical. with residual variance ~ 2 : The log-likelihood of this model is n n n log L( ~ 1 . 0. so H0 is an intercept-only model. ^ 2 ) = = n 1 0 log 2 ^ 2 ee ^^ 2 2^ 2 n n log ^ 2 log (2 ) 2 2 n : 2 The MLE under the null hypothesis is the restricted estimates ( ~ 1 . the residual is ~ e = M 1 y: Now consider the residual regression of e on X 2 = M 1 X 2 : By the FWL theorem. In many statistical packages. as sample sizes are typically su¢ ciently large that this F statistic is nearly always highly signi…cant. 0. ^ 2 . ^ 2 ) discussed above. which is twice the di¤erence in the log-likelihood function evaluated under the null and alternative hypotheses. This was a popular statistic in the early days of econometric reporting.and 0 ^ Wn = n ^ 2 R 0 V 0 R 0 1 ^2 : = ^ 0 (X 0 M 1 X 2 ) ^ 2 2 2 s2 To simplify this expression further. This is Fn when X 1 is a vector is ones. 6.5 Normal Regression Model =( 1. ~ 2 ) = log ~ 2 log (2 ) : 2 2 2 93 . note that if we regress y on X 1 alone. ^ 2 . when sample sizes were very small and researchers wanted to know if there was “any explanatory power” to their regression. when an OLS regression is estimated. 2) Now let us partition and consider tests of the linear restriction H0 : H1 : 2 2 =0 6= 0 in the normal regression model. ~ ~ 0 ~ ^ ~ ^ e = X 2 ^ 2 + e and X 2 e = 0: Thus ~ e0 e = ~~ ~ ^ X 2 ^2 + e 0 ~ ^ X 2 ^2 + e 0 ~0 ~ ^^ = ^ 2 X 2 X 2 ^ 2 + e0 e ^ 0 X 0 M 1 X 2 ^ 2 + e0 e. This special F statistic is testing the hypothesis that all slope coe¢ cients (all coe¢ cients other than the intercept) are zero. a good test statistic is the likelihood ratio. In parametric models.

2 ) : Now notice that H0 is equivalent to the hypothesis H0 (r) : for any positive integer r: Letting h( ) = Wald test for H0 (r) is r r =1 r 1 . It also shows that the homoskedastic Wald statistic for linear hypotheses can also be interpreted as an appropriate likelihood ratio statistic under normality. 6. we …nd that the standard Wn (r) = n ^ 2 r2 ^ 2r 2 : While the hypothesis r = 1 is una¤ected by the choice of r. d Our …rst-order asymptotic theory is not useful to help pick r. 0. they can work quite poorly when the restrictions are nonlinear. as Wn (r) ! 2 under H0 for any 1 r: This is a context where Monte Carlo simulation can be quite useful as a tool to study and 94 . setting n= 2 = 10: The increasing solid line is for the case ^ = 0:8: The decreasing dashed line is for the case ^ = 1:6: It is easy to see that in each case there are values of r for which the test statistic is signi…cant relative to asymptotic critical values. This can be seen by a simple example introduced by Lafontaine and White (1986). and noting H = r ^r 1 2 . the statistic Wn (r) varies with r: This is an unfortunate feature of the Wald statistic. ^ 2 ) = n log ~ 2 = n log ~2 ^2 : log ^ 2 log L( ~ 1 . while there are other values of r for which the test statistic is insigni…cant. This is distressing since the choice of r is arbitrary and irrelevant to the actual hypothesis. Take the model yi = ei and consider the hypothesis H0 : = 1: Let ^ and ^ 2 be the sample mean and variance of yi : The standard Wald test for H0 is ^ Wn = n 1 ^2 2 + ei N(0.6 Problems with Tests of NonLinear Hypotheses While the t and Wald tests work well when the hypothesis is a linear restriction on . To demonstrate this e¤ect. ~ 2 ) By a …rst-order Taylor series approximation LRn = n log 1 + ~2 ^2 1 'n ~2 ^2 1 0 = Wn : 0 the homoskedastic Wald statistic. This shows that the two statistics (LRn and Wn ) can be numerically close. ^ 2 . we have plotted in Figure 6.1 the Wald statistic Wn (r) as a function of r.The LR statistic for H0 against H1 is LRn = 2 log L( ^ 1 .

looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range.1 you can also see the impact of variation in sample size.1 we report the results of a Monte Carlo simulation where we vary these three parameters. Type I error rates between 3% and 8% are considered reasonable. 100 and 500. however. In each case.000 simulated Wald statistics Wn (r) which are larger than 3.1 reports the simulation estimate of the Type I error probability from 50. no magic choice of n for which all tests perform uniformly well. Through repetition.1: Wald Statistic as a function of s compare the exact distributions of statistical procedures in …nite samples. Test performance deteriorates as r increases. P (Wn (r) > 3:84 j = 1) : Given the simplicity of the model. This produces random draws from the statistic’ sampling distribution. the Type I error probability improves towards 5% as the sample size n increases. Any other choice of r leads to a test with unacceptable Type I error probabilities.1. and is varied among 1 and 3. To interpret the table. remember that the ideal Type I error probability is 5% (. For this particular example the only test which meets this criterion is the conventional Wn = Wn (1) test. These probabilities are calculated as the percentage of the 50. Table 4. so these probabilities are Type I error. When comparing statistical procedures. In Table 4. The value of r is varied from 1 to 10. n. we compare the rates row by row. Rates above 20% are unacceptable.05) with deviations indicating distortion.1 Type I error Probability of Asymptotic 5% Wn (r) Test 95 . Table 4.Figure 6. features of s this distribution can be calculated. which is not surprising given the dependence of Wn (r) on r as shown in Figure 6. and 2 : In Table 2. this probability depends only on r. one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3. Each row of the table corresponds to a di¤erent value of r –and thus corresponds to a particular choice of test statistic. to which we apply the statistical tools of interest.84. There is. The method uses random simulation to create arti…cial datasets. The null hypothesis r = 1 is true.84 – the probability of a false rejection. n is varied among 20. Error rates above 10% are considered excessive. In the present context of the Wald statistic.000 random samples. The second through seventh columns contain the Type I error probabilities for di¤erent combinations of n and .

06 . Take the model yi = E (xi ei ) = 0 and the hypothesis H0 : 1 2 0 + x1i 1 + x2i 2 + ei (6.14 .07 .15 .06 . While this is clear in this particular example.22 .07 .15 . De…ne 0 1 0 B C B C B 1 C B C B C ^ H 1 = B ^2 C B C B C B ^ C @ 1 A ^2 2 so that the standard error for ^ is s(^) = n 1H 0 V H ^1^ ^1 ^ 1 ^ 1=2 : In this case a t-statistic for H0 is r : t1n = 2 s(^) An alternative statistic can be constructed through reformulating the null hypothesis as H0 : 1 r 2 = 0: A t-statistic based on this formulation of the hypothesis is ^ t2n = n 1 r ^2 1=2 : 1H 0 V H ^ 2 ^ 2 96 .07 .13 .13 .05 . ^ 2 ) be the least-squares estimates of (6.24 .15 .06 .31 .21 .35 .05 2 . so the hypothesis can be stated as H0 : = r: ^ Let ^ = ( ^ 0 .08 .09 .15 10 . de…ne = 1 = 2 .06 .06 3 .14 9 .17 .19 .3).13 8 . This point can be illustrated through another example which is similar to one developed in Gregory and Veall (1985).30 .34 .26 .08 5 .16 Note: Rejection frequencies from 50. in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at …rst.22 .05 . let V ^ be an estimate of the asymptotic covariance matrix for ^ and set ^ = ^ 1 = ^ 2 .07 4 . Other choices are arbitrary and would not be used in practice.10 .10 6 .06 .05 .20 .08 .12 . Equivalently.05 .06 .25 .25 .12 .28 .3) =r where r is a known constant.23 .=1 =3 r n = 20 n = 100 n = 500 n = 20 n = 100 n = 500 1 .000 simulated random samples In this example it is not surprising that the choice r = 1 yields the best test statistic.08 . ^ 1 .08 .33 .07 .10 .06 .11 7 .20 .18 .05 .

05 . and normalize 0 = 0 and 1 = 1: This leaves 2 as a free parameter. and alternatives to asymptotic critical values should be considered. In contrast. F ) can be calculated 97 . In all cases.25. 2 ) draw with = 3.2.7 Monte Carlo Simulation In the previous section we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses.50.00 .00 .05. .06 . (yn .10 .2 is that the two t-ratios have dramatically di¤erent sampling behavior. :::. If no linear formulation is feasible. .05 .00 . and 1.00 . while the right tail probabilities P (t1n > 1:645) are close to zero in most cases. the rejection rates for the t1n statistic diverge greatly from this value. xn ) .06 .00 .00 .05 .75 1.09 .75. In this section we describe the method in more detail.06 . for example an estimator ^ or a t-statistic (^ )=s(^): The exact distribution of Tn is Gn (u.06 .10 . F ) for selected choices of F: This is useful to investigate the performance of the statistic Tn in reasonable situations and sample sizes. ei be an independent N(0.7 (as advocated by Hansen (2006)).2 Type I error Probability of Asymptotic 5% t-tests n = 100 n = 500 P (tn < 1:645) P (tn > 1:645) P (tn < 1:645) P (tn > 1:645) t1n t2n t1n t2n t1n t2n t1n t2n . the rejection rates for the linear t2n statistic are invariant to the value of 2 .05 .25 . the exact (…nite sample) distribution Gn is generally unknown.15 . such as the GMM distance statistic which will be presented in Section 9.06 .05 . and are close to the ideal 5% rate for both sample sizes.47 . It is also prudent to consider alternative tests to the Wald statistic. Ideally.28 .10 . this formulation should be used. the entries in the table should be 0.where To compare t1n and t2n we perform another simple Monte Carlo simulation. F ) = P (Tn u j F): While the asymptotic distribution of Tn might be known. Monte Carlo simulation uses numerical simulation to compute Gn (u. The results are presented in Table 4.26 . xi ) which are random draws from a population distribution F: Let be a parameter and let Tn = Tn ((y1 . The implication of Table 4. our data consist of observations (yi .07 . 6. 1) variables. x1 ) .06 . then the “most linear”formulation should be selected (as suggested by the theory of Park and Phillips (1988)). ) be a statistic of interest.05 .05 . especially for small values of 2 : The left tail probabilities P (t1n < 1:645) greatly exceed 5%. the distribution function Gn (u.06 .05 .02 . if the hypothesis can be expressed as a linear restriction on the model parameters.06 . The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis. Recall.00 .00 .000 simulated samples. . However. along with sample size n: We vary 2 among :1. We let x1i and x2i be mutually independent N(0.12 .50 .06 .05 .00 The one-sided Type I error probabilities P (tn < 1:645) and P (tn > 1:645) are calculated from 50.05 1 0 H2 = @ 1 A : r 0 2 .0 and n among 100 and 500: Table 4. The basic idea is that for any given F.15 .00 .06 .

F ): This is one observation from an unknown distribution. it is important that the statistic be evaluated at the “true”value of corresponding to the choice of F: The above experiment creates one random draw from the distribution Gn (u. =s(^) and calculate We would then set Tn = ^ B 1 X ^ 1 (Tnb P= B b=1 1:96) . run the above experiment. s The statistic Tn = Tn ((y1 . The researcher chooses F (the distribution of the data) and the sample size n. Then q is the N ’ number in this ordered sequence. th th The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally. B: These results are stored. from one observation very little can be said. i = 1.) For step 2. we set B = 1000 or B = 5000: We will discuss this choice later. :::. xn ) . 1] and N(0. So the researcher repeats the experiment B times. and calculate \ Bias(^) = \ M SE(^) = B B 1 X 1 X^ Tnb = b B B 1 B b=1 B X b=1 b=1 B 1 X ^ (Tnb ) = b B 2 b=1 2 \ \ var(^) = M SE(^) \ Bias(^) 2 Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. 1) random numbers. we can estimate any feature of interest using (typically) a method of moments estimator. :::. ) is calculated on this pseudo data. where N = (B + 1) : It th is therefore convenient to pick B so that N is an integer. The name Monte Carlo derives from the famous Mediterranean gambling resort where games of chance are played. For example. (For example. A “true” value of is implied by this choice. (yn . are drawn from the distribution F using the computer’ random number generator. Suppose we are interested in the 5% and 95% quantile of Tn = ^: We then compute the 5% and 95% sample quantiles of the sample fTnb g: The % sample quantile is a number q such that % of the sample are less than q : A simple way to compute sample quantiles is to sort the sample fTnb g from low to high. a chi-square can be generated by sums of squares of normals. For example: Suppose we are interested in the bias. Then the following experiment is conducted n independent random pairs (yi . For step 1. mean-squared error (MSE). then the 5% sample quantile is 50’ sorted value and the 95% sample quantile is the 950’ sorted value. or variance of the distribution ^ of : We then set Tn = ^ . F ) = P (Tnb u) = P (Tn u j F ) : From a random sample. (6. where B is a large number. :::. let the b0 th experiment result in the draw Tnb .4) the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value. xi ) . Notationally. They constitute a random sample of size B from the distribution of Gn (u. n. b = 1. Typically. Clearly. x1 ) . if we set B = 999. The method of Monte Carlo is quite simple to describe. or equivalently the value is selected directly by the researcher which implies restrictions on F .numerically through simulation. most computer packages have built-in procedures for generating U[0. the performance will depend on n and 98 . and from these most random variables can be constructed.

union member. In addition to the coe¢ cient estimates. and the regressors are clearly labeled. then B will have to be increased. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size B.008 . it is straightforward to calculate standard errors for any quantity of interest. excluding military. potential work experience. immigrant. For regressors we include years of education. Table 4. such as the percentage estimate reported in (6.8 Estimating a Wage Equation We again return to our wage equation.033 :00057 . then we can set s (^) = (:05) (:95) =B ' :22= B: Hence. equalling 1 with probability p = E1 (Tnb 1:96) : The average (6.032 . 1000.808 s( ^ ) .027 . for a selection of choices of n and F: As discussed above. therefore. Forp ^ example. The Table clearly states the estimation method (OLS). are.00002 . In practice. The available sample is 18.001 . p 6.1 OLS Estimates of Linear Equation for Log(Wage) ^ Intercept Education Experience Experience2 Married Female Union Member Immigrant Hispanic Non-White ^ Sample Size 99 1.010 . this adds 50 regressors). B: Often this is called the number of replications. and the sample size: These are useful summary measures of …t which aid readers. but requires more computational time. female. excluding the coe¢ cients on the state dummy variables. standard errors p for B = 100. and poorly for others. It is therefore useful to conduct a variety of experiments. and non-white.002 . we included a dummy variable for state of residence (including the District of Columbia.010 .101 . We use the sample of wage earners from the March 2004 Current Population Survey. the dependent variable (log(Wage)). this may be p approximated by replacing p with p or with an hypothesized value.F: In many cases. Table 4. As p is unknown. and .013 .097 :121 :102 :070 . respectively. the choice of B is often guided by the computational demands of the statistical procedure. Furthermore. and 5000. For the dependent variable we use the natural log of wages so that coe¢ cients may be interpreted as semi-elasticities.1. and dummy variable indicators for the following: married. it is simple to make inferences about rejection probabilities from statistical tests. Parameter estimates are both reported for the coe¢ cients of interest (the coe¢ cients on the state dummy variables are omitted) and standard errors are reported for all reported coe¢ cient estimates.014 .102 :232 .1 displays the parameter estimates in a standard format.4).4) is therefore an p unbiased estimator of p with standard error s (^) = p (1 p) =B.003. the researcher must select the number of experiments. In particular. s (^) = :022.4877 18.007 . experience squared. a larger B results in more precise estimates of the features of interest of Gn . If the standard error is too large to make a reliable inference. hispanic. :007. an estimator or test may perform wonderfully for some values. The random variable 1 (Tnb 1:96) is iid Bernoulli. Quite simply. the table also reports the estimated error standard deviation. if we are assessing p an asymptotic 5% test.808 so the parameter estimates are quite precise and reported in Table 4.

2) is Wn = n ~2 ^2 1 = 18. 50 100 . enabling us to reject the hypothesis that the coe¢ cient is zero. As a general rule. and form con…dence intervals and t-tests on individual coe¢ cients if desired. all the t-ratios are highly signi…cant. we …nd Wn = 550: Alternatively. re-estimating the model with the 50 state dummies excluded. some empirical researchers report t-ratios for each parameter estimate. Again consider the male-female wage di¤erence.3 to 50.007. Instead of reporting standard errors. if you are interested in the di¤erence in mean wages between men and women. Table 4. as the 1% critical value for the 2 distribution is 76.027 .2 reports that the t-ratio is 33.7 9:3 7:3 7 Returning to the estimated wage equation.2 OLS Estimates of Linear Equation for Log(Wage) Improper Reporting: t-ratios replacing standard errors ^ Intercept Education Experience Experience2 Married Female Union Member Immigrant Hispanic Non-White 1. In this example. I interpret this as a precise estimate because there is not an important di¤erence between the lower and upper bound. consequently the reporting of t-ratios is a waste of space. one might question whether or not the state dummy variables are relevant. 808 :49452 :48772 1 = 528: Notice that the two statistics are close. but not much more. This allows readers to assess the precision of the parameter estimates.2. For example.102 :232 . ranging in magnitude from 9.033 :00057 . “t-ratios” are t-statistics which test the hypothesis that the coe¢ cient equals zero. you can see that the standard error for this coe¢ cient estimate is 0. But how precise is the reported estimate of a wage gap of 23%? It is hard to assess from a quick reading of Table 4.1) that the state coe¢ cients are jointly zero. Using either statistic the hypothesis is easily rejected. implying a mean wage di¤erence of 23%.097 :121 :102 :070 t 32 50 33 28 12. This implies a 95% asymptotic con…dence interval for the coe¢ cient estimate of [ :246. for they enable for quick and easy assessment of the degree of estimation uncertainty.8 33 9.2 Standard errors are much more useful. To assess the precision. Table 4. Computing the Wald statistic (6. This means that we have estimated the di¤erence in mean wages between men and women to lie between 22% and 25%.1). you can read from the table that the estimated coe¢ cient on the Female dummy variable is 0:232.101 . :218].Note: Equation also includes state dummy variables. In a sample of this size this …nding is rather uninteresting. What we learn from these statistics is that these coe¢ cients are non-zero. it is best to always report standard errors along with parameter estimates (as done in Table 4. An example is reported in Table 4. but not equal. the restricted standard deviation estimate is ~ = :4945: The F form of the Wald statistic (6.

Another interesting question which can be addressed from these estimates is the maximal impact of experience on mean wages. Ignoring the other coefficients, we can write this effect as
$$\log(Wage) = \beta_2\, Experience + \beta_3\, Experience^2 + \cdots$$
Our question is: At which level of experience $\theta$ do workers achieve the highest wage? In this quadratic model, if $\beta_2 > 0$ and $\beta_3 < 0$, the solution is
$$\theta = -\frac{\beta_2}{2\beta_3}.$$
From Table 4.1 we find the point estimate
$$\hat\theta = -\frac{\hat\beta_2}{2\hat\beta_3} = 28.69.$$
Using the Delta Method, we can calculate a standard error of $s(\hat\theta) = .40$, implying a 95% confidence interval of $[27.9,\ 29.5]$.

However, this is a poor choice, as the coverage probability of this confidence interval is one minus the Type I error of the hypothesis test based on the t-test. In Section 6.6 we discovered that such t-tests have very poor Type I error rates. Instead, we found better Type I error rates by reformulating the hypothesis as a linear restriction. These t-statistics take the form
$$t_n(\theta) = \frac{\hat\beta_2 + 2\theta\hat\beta_3}{\left(h_\theta'\hat V h_\theta\right)^{1/2}}$$
where $h_\theta = (1,\ 2\theta)'$ and $\hat V$ is the covariance matrix for $(\hat\beta_2,\ \hat\beta_3)$.

In the present context we are interested in forming a confidence interval, not testing a hypothesis, so we have to go one step further. Our desired confidence interval will be the set of parameter values $\theta$ which are not rejected by the hypothesis test. This is the set of $\theta$ such that $|t_n(\theta)| \le 1.96$. Since $t_n(\theta)$ is a non-linear function of $\theta$, there is not a simple expression for this set, but it can be found numerically quite easily. This set is $[27.0,\ 29.5]$. Notice that the upper end of the confidence interval is the same as that from the delta method, but the lower end is substantially lower.
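The two constructions just described are easy to implement numerically. The following sketch uses the point estimates from Table 4.1, but the covariance matrix V below is a hypothetical placeholder (the text does not report it), so the output will not reproduce the intervals $[27.9, 29.5]$ and $[27.0, 29.5]$ exactly; the code only illustrates the mechanics of the delta method and of test inversion.

import numpy as np

b2, b3 = 0.033, -0.00057
V = np.array([[1.0e-6, -1.5e-8],          # hypothetical var/cov of (b2, b3)
              [-1.5e-8, 4.0e-10]])

theta_hat = -b2 / (2 * b3)

# Delta method: gradient of theta(b2, b3) = -b2 / (2 b3).
grad = np.array([-1 / (2 * b3), b2 / (2 * b3**2)])
se_delta = np.sqrt(grad @ V @ grad)
ci_delta = (theta_hat - 1.96 * se_delta, theta_hat + 1.96 * se_delta)

# Test inversion: collect all theta with |t_n(theta)| <= 1.96, where
# t_n(theta) = (b2 + 2 theta b3) / sqrt(h' V h) and h = (1, 2 theta)'.
grid = np.linspace(20, 40, 4001)
keep = []
for theta in grid:
    h = np.array([1.0, 2 * theta])
    t = (b2 + 2 * theta * b3) / np.sqrt(h @ V @ h)
    if abs(t) <= 1.96:
        keep.append(theta)
ci_inverted = (min(keep), max(keep))

print(theta_hat, ci_delta, ci_inverted)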

Exercises

For exercises 1-4, the following definition is used. In the model $y = X\beta + e$, the least-squares estimate of $\beta$ subject to the restriction $h(\beta) = 0$ is
$$\tilde\beta = \operatorname*{argmin}_{h(\beta)=0} S_n(\beta), \qquad S_n(\beta) = (y - X\beta)'(y - X\beta).$$
That is, $\tilde\beta$ minimizes the sum of squared errors $S_n(\beta)$ over all $\beta$ such that the restriction holds.

Exercise 6.1 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show that the least-squares estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_2 = 0$ is the OLS regression of $y$ on $X_1$.

Exercise 6.2 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show that the least-squares estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = c$ (where $c$ is some given vector), is simply the OLS regression of $y - X_1 c$ on $X_2$.

Exercise 6.3 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, with $X_1$ and $X_2$ each $n \times k$, find the least-squares estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_1 = -\beta_2$.

Exercise 6.4 Take the model $y = X\beta + e$ with the restriction $R'\beta = r$ where $R$ is a known $k \times s$ matrix, $r$ is a known $s \times 1$ vector, $0 < s < k$, and $\mathrm{rank}(R) = s$. Explain why $\tilde\beta$ solves the minimization of the Lagrangian
$$L(\beta, \lambda) = \frac{1}{2} S_n(\beta) + \lambda'\left(R'\beta - r\right)$$
where $\lambda$ is $s \times 1$.

(a) Show that the solution is
$$\tilde\beta = \hat\beta - \left(X'X\right)^{-1} R\left[R'\left(X'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta - r\right)$$
$$\hat\lambda = \left[R'\left(X'X\right)^{-1}R\right]^{-1}\left(R'\hat\beta - r\right)$$
where $\hat\beta = \left(X'X\right)^{-1}X'y$ is the unconstrained OLS estimator.

(b) Verify that $R'\tilde\beta = r$.

(c) Show that if $R'\beta = r$ is true, then
$$\tilde\beta - \beta = \left[I_k - \left(X'X\right)^{-1}R\left[R'\left(X'X\right)^{-1}R\right]^{-1}R'\right]\left(X'X\right)^{-1}X'e.$$

(d) Under the standard assumptions plus $R'\beta = r$, find the asymptotic distribution of $\sqrt n\left(\tilde\beta - \beta\right)$ as $n \to \infty$.

(e) Find an appropriate formula to calculate standard errors for the elements of $\tilde\beta$.
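For readers who want to experiment with the constrained estimator defined in Exercise 6.4, the following sketch implements the formula from part (a) on simulated data and checks part (b) numerically; the data-generating process and the particular restriction are arbitrary and chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, 0.5, 0.25, 0.25])
y = X @ beta_true + rng.normal(size=n)

# Restriction R'beta = r; here a single restriction beta_3 + beta_4 = 0.5.
R = np.array([[0.0], [0.0], [1.0], [1.0]])
r = np.array([0.5])

XX_inv = np.linalg.inv(X.T @ X)
beta_ols = XX_inv @ (X.T @ y)                                  # unconstrained OLS

A = np.linalg.inv(R.T @ XX_inv @ R)
beta_cls = beta_ols - XX_inv @ R @ A @ (R.T @ beta_ols - r)    # Exercise 6.4(a)

print(beta_cls)
print(R.T @ beta_cls)    # equals r (up to rounding), as claimed in part (b)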

Exercise 6.5 Prove that if an additional regressor $X_{k+1}$ is added to $X$, Theil's adjusted $\bar R^2$ increases if and only if $|t_{k+1}| > 1$, where $t_{k+1} = \hat\beta_{k+1}/s(\hat\beta_{k+1})$ is the t-ratio for $\hat\beta_{k+1}$ and
$$s(\hat\beta_{k+1}) = \left(s^2\left[\left(X'X\right)^{-1}\right]_{k+1,k+1}\right)^{1/2}$$
is the homoskedasticity-formula standard error.

Exercise 6.6 The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are
  $I_i$  Investment to Capital Ratio (multiplied by 100)
  $Q_i$  Total Market Value to Asset Ratio (Tobin's Q)
  $C_i$  Cash Flow to Asset Ratio
  $D_i$  Long Term Debt to Asset Ratio
The flow variables are annual sums for 1987. The stock variables are beginning of year.

(a) Estimate a linear regression of $I_i$ on the other variables. Calculate appropriate standard errors.
(b) Calculate asymptotic confidence intervals for the coefficients.
(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by $Q_i$. Thus the coefficient on $Q_i$ should be positive and the others should be zero. Test the joint hypothesis that the coefficients on $C_i$ and $D_i$ are zero. Test the hypothesis that the coefficient on $Q_i$ is zero. Are the results consistent with the predictions of the theory?
(d) Now try a non-linear (quadratic) specification. Regress $I_i$ on $Q_i$, $C_i$, $D_i$, $Q_i^2$, $C_i^2$, $D_i^2$, $Q_iC_i$, $Q_iD_i$, $C_iD_i$. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.

Exercise 6.7 In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi.) The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlov was interested in estimating a cost function: $TC = f(Q, PL, PK, PF)$.

(a) First estimate an unrestricted Cobb-Douglass specification
$$\log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i. \tag{6.5}$$
Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.
(b) Using a Wald statistic, test the hypothesis $H_0: \beta_3 + \beta_4 + \beta_5 = 1$.
(c) Estimate (6.5) by least-squares imposing this restriction by substitution. Report your parameter estimates and standard errors.
(d) Estimate (6.5) subject to $\beta_3 + \beta_4 + \beta_5 = 1$ using the restricted least-squares estimator from problem 4. Do you obtain the same estimates as in part (c)?

Chapter 7

Additional Regression Topics

7.1 Generalized Least Squares

In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model
$$y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0,$$
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator (4.11) introduced in Section 4.5. The GLS estimator is sometimes called the Aitken estimator. This estimator is infeasible since the matrix $D$ is unknown. A feasible GLS (FGLS) estimator replaces the unknown $D$ with an estimate $\hat D = \mathrm{diag}\{\hat\sigma_1^2, \ldots, \hat\sigma_n^2\}$. We now discuss this estimation problem.

Suppose that we model the conditional variance using the parametric form
$$\sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha' z_i,$$
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.

Let $\eta_i = e_i^2$. Then
$$E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1$$
and we have the regression equation
$$\eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i, \qquad E(\xi_i \mid x_i) = 0. \tag{7.1}$$
This regression error $\xi_i$ is generally heteroskedastic and has the conditional variance
$$\mathrm{var}(\xi_i \mid x_i) = \mathrm{var}\left(e_i^2 \mid x_i\right) = E\left(\left(e_i^2 - E\left(e_i^2 \mid x_i\right)\right)^2 \mid x_i\right) = E\left(e_i^4 \mid x_i\right) - \left(E\left(e_i^2 \mid x_i\right)\right)^2.$$
Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
$$\hat\alpha = \left(Z'Z\right)^{-1}Z'\eta \overset{p}{\longrightarrow} \alpha \tag{7.2}$$

and
$$\sqrt n\left(\hat\alpha - \alpha\right) \overset{d}{\longrightarrow} N\left(0, V_\alpha\right)$$
where
$$V_\alpha = \left(E z_i z_i'\right)^{-1} E\left(z_i z_i' \xi_i^2\right)\left(E z_i z_i'\right)^{-1}.$$

While $e_i$ is not observed, we have the OLS residual $\hat e_i = y_i - x_i'\hat\beta = e_i - x_i'(\hat\beta - \beta)$. Thus
$$\hat\eta_i = \hat e_i^2 = e_i^2 - 2 e_i x_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)' x_i x_i'\left(\hat\beta - \beta\right).$$
And then
$$\frac{1}{\sqrt n}\sum_{i=1}^n z_i\left(\hat\eta_i - \eta_i\right) = \frac{-2}{n}\sum_{i=1}^n z_i e_i x_i'\,\sqrt n\left(\hat\beta - \beta\right) + \frac{1}{n}\sum_{i=1}^n z_i\left(\hat\beta - \beta\right)'x_i x_i'\left(\hat\beta - \beta\right)\sqrt n \;\overset{p}{\longrightarrow}\; 0.$$
Let
$$\tilde\alpha = \left(Z'Z\right)^{-1}Z'\hat\eta \tag{7.3}$$
be from OLS regression of $\hat\eta_i$ on $z_i$. Then
$$\sqrt n\left(\tilde\alpha - \alpha\right) = \sqrt n\left(\hat\alpha - \alpha\right) + \left(n^{-1}Z'Z\right)^{-1} n^{-1/2} Z'\left(\hat\eta - \eta\right) \overset{d}{\longrightarrow} N\left(0, V_\alpha\right). \tag{7.4}$$
Thus the fact that $\eta_i$ is replaced with $\hat\eta_i$ is asymptotically irrelevant. We call (7.3) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$. We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma_i^2 = \alpha' z_i$ by
$$\tilde\sigma_i^2 = \tilde\alpha' z_i. \tag{7.5}$$
Suppose that $\tilde\sigma_i^2 > 0$ for all $i$. Then set
$$\tilde D = \mathrm{diag}\{\tilde\sigma_1^2, \ldots, \tilde\sigma_n^2\}$$
and
$$\tilde\beta = \left(X'\tilde D^{-1}X\right)^{-1}X'\tilde D^{-1}y.$$
This is the feasible GLS, or FGLS, estimator of $\beta$. Since there is not a unique specification for the conditional variance, the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.

One typical problem with implementation of FGLS estimation is that in the linear specification (7.1), there is no guarantee that $\tilde\sigma_i^2 > 0$ for all $i$. If $\tilde\sigma_i^2 < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde\sigma_i^2 \approx 0$ for some $i$, then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule takes the form
$$\bar\sigma_i^2 = \max\left[\tilde\sigma_i^2,\ c\,\hat\sigma^2\right]$$
for some $c > 0$. For example, setting $c = 1/4$ means that the conditional variance function is constrained to exceed one-fourth of the unconditional variance. As there is no clear method to select $c$, this introduces a degree of arbitrariness. In this context it is useful to re-estimate the model with several choices for the trimming parameter. If the estimates turn out to be sensitive to its choice, the estimation method should probably be reconsidered.
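A minimal sketch of the feasible GLS procedure just described, on simulated heteroskedastic data. The skedastic regressors are here chosen as $(1, x_i^2)$, one of the simple specifications suggested above, and the fitted variances are bounded below at one-fourth of the unconditional variance as a crude stand-in for the trimming rule; all of these choices are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
sigma2 = 0.5 + 0.5 * x**2                      # true conditional variance
y = X @ np.array([1.0, 2.0]) + np.sqrt(sigma2) * rng.normal(size=n)

# Step 1: OLS and squared residuals.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
ehat2 = (y - X @ beta_ols) ** 2

# Step 2: skedastic regression of ehat^2 on z = (1, x^2).
Z = np.column_stack([np.ones(n), x**2])
alpha = np.linalg.lstsq(Z, ehat2, rcond=None)[0]
sig2_tilde = Z @ alpha
sig2_tilde = np.maximum(sig2_tilde, 0.25 * ehat2.mean())   # crude trimming rule

# Step 3: weighted (FGLS) regression using weights 1 / sigma_tilde^2.
w = 1.0 / sig2_tilde
XtWX = X.T @ (X * w[:, None])
beta_fgls = np.linalg.solve(XtWX, X.T @ (w * y))

print(beta_ols, beta_fgls)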

Theorem 7.1.1 If the skedastic regression is correctly specified,
$$\sqrt n\left(\tilde\beta_{GLS} - \tilde\beta_{FGLS}\right) \overset{p}{\longrightarrow} 0,$$
and thus
$$\sqrt n\left(\tilde\beta_{FGLS} - \beta\right) \overset{d}{\longrightarrow} N(0, V),$$
where
$$V = \left(E\left(\sigma_i^{-2} x_i x_i'\right)\right)^{-1}.$$

In other words, if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS. As the proof is tricky, we just state the result without proof.

Examining the asymptotic distribution of Theorem 7.1.1, the natural estimator of the asymptotic variance of $\tilde\beta$ is
$$\tilde V^0 = \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2} x_i x_i'\right)^{-1} = \left(\frac{1}{n} X'\tilde D^{-1}X\right)^{-1},$$
which is consistent for $V$ as $n \to \infty$. This estimator $\tilde V^0$ is appropriate when the skedastic regression (7.1) is correctly specified.

It may be the case that $\alpha' z_i$ is only an approximation to the true conditional variance $\sigma_i^2 = E(e_i^2 \mid x_i)$. In this case we interpret $\alpha' z_i$ as a linear projection of $e_i^2$ on $z_i$. $\tilde\beta$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 7.1.1. Instead,
$$V = \left(E\left(\left(\alpha' z_i\right)^{-1} x_i x_i'\right)\right)^{-1} E\left(\left(\alpha' z_i\right)^{-2}\sigma_i^2\, x_i x_i'\right)\left(E\left(\left(\alpha' z_i\right)^{-1} x_i x_i'\right)\right)^{-1}.$$
$V$ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless $\sigma_i^2 = \alpha' z_i$, $\tilde V^0$ is inconsistent for $V$.

An appropriate solution is to use a White-type estimator in place of $\tilde V^0$. This may be written as
$$\tilde V = \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-4}\hat e_i^2\, x_i x_i'\right)\left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2} x_i x_i'\right)^{-1}
= \left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1}\left(\frac{1}{n}X'\tilde D^{-1}\hat D\tilde D^{-1}X\right)\left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1}$$
where $\hat D = \mathrm{diag}\{\hat e_1^2, \ldots, \hat e_n^2\}$. This estimator is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).

In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively estimate regression models by FGLS? This is a good question. There are three reasons.

First, FGLS estimation depends on specification and estimation of the skedastic regression. Since the form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated conditional variances may contain more noise than information about the true conditional variances. In this case, FGLS can do worse than OLS in practice.

Second, individual estimated conditional variances may be negative, and this requires trimming to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.

Third, and probably most importantly, OLS is a robust estimator of the parameter vector. It is consistent not only in the regression model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection and not a conditional mean, then the OLS

3 Forecast Intervals m(x) = E (yi j xi = x) = x0 : In the linear regression model the conditional mean of yi given xi = x is In some cases.4) and the asymptotic variance (7. The point is that the e¢ ciency gains from FGLS are built on the stronger assumption of a correct conditional mean. nor to determine whether classic or White standard errors should be reported. 7. then there is no value in conducting a test for heteorskedasticity. the Wald test of H0 is asymptotically 2: q Most tests for heteroskedasticity take this basic form. the width of the con…dence set is dependent on x: 107 . and replaced E 2 with 2 4 which holds when ei is N 0.2.3). White (1980) proposed that the test for heteroskedasticity be based on setting z i to equal all non-redundant elements of xi . and all cross-products.2 Testing for Heteroskedasticity 2. Hypothesis tests are not designed for these purposes. We may therefore test this hypothesis by the estimation (7.3) and constructing a Wald statistic. its squares. Rather. In the classic literature it is typical to impose the stronger assumption that ei is independent of xi . The hypothesis of homoskedasticity is that E e2 j xi = i H0 : 1 or equivalently that =0 in the regression (7. The main di¤erences between popular tests are which transformations of xi enter z i : Motivated by the form of the asymptotic variance of the OLS estimator ^ . Theorem 7.6) i i Hence the standard test of H0 is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (7. we want to estimate m(x) at a particular point x: Notice that this is a (linear) function of : Letting h( ) = x0 and = h( ).1 Under H0 and ei independent of xi . but the only di¤erence is that they allowed for general choice of z i . tests for heteroskedasticity should be used to answer the scienti…c question of whether or not the conditional variance is a function of the regressors. we see that m(x) = ^ = x0 ^ and H = x. 2 : If i this simpli…cation is replaced by the standard formula (under independence of the error). the two tests coincide. If this question is not of economic interest.6) under independence show that this test has an asymptotic chi-square distribution. The FGLS probability limit will depend on the particular function selected for the skedastic regression.2) for ~ simpli…es to 1 V = E ziz0 E 2 : (7. The asymptotic distribution (7.1). BreuschPagan (1979) proposed what might appear to be a distinct test. It is important not to misuse tests for heteroskedasticity. It should not be used to determine whether to estimate a regression equation by OLS or FGLS. so ^ q ^ s(^) = n 1 x0 V x: Thus an asymptotic 95% con…dence interval for m(x) is x0 ^ 2 q n 1 x0 V ^ x : It is interesting to observe that if this is viewed as a function of x.and FGLS estimators will converge in probability to di¤erent limits as they will be estimating two di¤erent projections. in which case i is independent of xi and the asymptotic variance (7. and the cost is a loss of robustness to misspeci…cation. 7.

For a given value of $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable rule is the conditional mean $m(x)$, as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\hat m(x) = x'\hat\beta$. We would also like a measure of uncertainty for the forecast.

The forecast error is $\hat e_i = y_i - \hat m(x) = e_i - x'\left(\hat\beta - \beta\right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat\beta$, this has variance
$$E\hat e_i^2 = E\left(e_i^2 \mid x_i = x\right) + x'E\left(\hat\beta - \beta\right)\left(\hat\beta - \beta\right)'x = \sigma^2(x) + n^{-1}x'Vx.$$
Assuming $E\left(e_i^2 \mid x_i\right) = \sigma^2$, the natural estimate of this variance is $\hat\sigma^2 + n^{-1}x'\hat V x$, so a standard error for the forecast is $\hat s(x) = \sqrt{\hat\sigma^2 + n^{-1}x'\hat V x}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\tilde\sigma^2(x) = \tilde\alpha'z$ from (7.5), then the forecast standard error is $\hat s(x) = \sqrt{\tilde\sigma^2(x) + n^{-1}x'\hat V x}$.

It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
$$\left[\,x'\hat\beta \pm 2\hat s(x)\,\right],$$

but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
$$\frac{\hat e_i}{\hat s(x)} = \frac{e_i - x'\left(\hat\beta - \beta\right)}{\hat s(x)}.$$

But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $N(0, \sigma^2)$, which is generally invalid. To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Perhaps due to this difficulty, many applied forecasters use the simple approximate interval $\left[\,x'\hat\beta \pm 2\hat s(x)\,\right]$ despite the lack of a convincing justification.
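As a small illustration of the forecast standard error $\hat s(x)$ and of how much wider it is than the standard error of the conditional mean, here is a sketch on simulated data (the homoskedastic case, using the approximate interval $x'\hat\beta \pm 2\hat s(x)$ whose caveats are discussed above); the data-generating process is arbitrary.

import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
sigma2 = e @ e / n

# Robust estimate of V, the asymptotic variance of sqrt(n)(beta-hat - beta).
Qinv = np.linalg.inv(X.T @ X / n)
Omega = (X * (e**2)[:, None]).T @ X / n
V = Qinv @ Omega @ Qinv

xq = np.array([1.0, 1.5])                        # point at which we forecast
se_mean = np.sqrt(xq @ V @ xq / n)               # std. error of the conditional mean
se_forecast = np.sqrt(sigma2 + xq @ V @ xq / n)  # std. error of the forecast

point = xq @ beta
print(point, se_mean, se_forecast)
print("approximate forecast interval:", (point - 2 * se_forecast, point + 2 * se_forecast))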

7.4 NonLinear Least Squares

In some cases we might use a parametric regression function $m(x, \theta) = E(y_i \mid x_i = x)$ which is a non-linear function of the parameters $\theta$. We describe this setting as non-linear regression. Examples of nonlinear regression functions include
$$m(x,\theta) = \theta_1 + \theta_2\,\frac{x}{1 + \theta_3 x}$$
$$m(x,\theta) = \theta_1 + \theta_2\, x^{\theta_3}$$
$$m(x,\theta) = \theta_1 + \theta_2 \exp(\theta_3 x)$$
$$m(x,\theta) = G(x'\theta), \quad G \text{ known}$$
$$m(x,\theta) = \theta_1' x_1 + \left(\theta_2' x_1\right)\Phi\!\left(\frac{x_2 - \theta_3}{\theta_4}\right)$$
$$m(x,\theta) = \theta_1 + \theta_2 x + \theta_3\left(x - \theta_4\right)1\left(x > \theta_4\right)$$
$$m(x,\theta) = \left(\theta_1' x_1\right)1\left(x_2 < \theta_3\right) + \left(\theta_2' x_1\right)1\left(x_2 > \theta_3\right)$$
In the first five examples, $m(x,\theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_4$ and $\theta_3$, which alters some of the analysis. When it exists, let
$$m_\theta(x,\theta) = \frac{\partial}{\partial\theta}\, m(x,\theta).$$

Nonlinear regression is sometimes adopted because the functional form $m(x,\theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.

The least squares estimator $\hat\theta$ minimizes the normalized sum-of-squared-errors
$$S_n(\theta) = \frac{1}{n}\sum_{i=1}^n\left(y_i - m(x_i,\theta)\right)^2.$$
When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are $\hat e_i = y_i - m(x_i,\hat\theta)$.

One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta E\left(y_i - m(x_i,\theta)\right)^2$.

Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat\theta$ must be found by numerical methods. See Appendix E. When $m(x,\theta)$ is differentiable, then the FOC for minimization are
$$0 = \sum_{i=1}^n m_\theta\!\left(x_i,\hat\theta\right)\hat e_i. \tag{7.7}$$

Theorem 7.4.1 Asymptotic Distribution of NLLS Estimator
If the model is identified and $m(x,\theta)$ is differentiable with respect to $\theta$,
$$\sqrt n\left(\hat\theta - \theta_0\right) \overset{d}{\longrightarrow} N(0, V)$$
$$V = \left(E\, m_{\theta i}m_{\theta i}'\right)^{-1}\left(E\, m_{\theta i}m_{\theta i}'e_i^2\right)\left(E\, m_{\theta i}m_{\theta i}'\right)^{-1}$$
where $m_{\theta i} = m_\theta(x_i, \theta_0)$.

Based on Theorem 7.4.1, an estimate of the asymptotic variance $V$ is
$$\hat V = \left(\frac{1}{n}\sum_{i=1}^n \hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n \hat m_{\theta i}\hat m_{\theta i}'\hat e_i^2\right)\left(\frac{1}{n}\sum_{i=1}^n \hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}$$
where $\hat m_{\theta i} = m_\theta(x_i,\hat\theta)$ and $\hat e_i = y_i - m(x_i,\hat\theta)$.

Identification is often tricky in nonlinear regression models. Suppose that
$$m(x_i,\theta) = \beta_1' z_i + \beta_2' x_i(\gamma)$$

where $x_i(\gamma)$ is a function of $x_i$ and the unknown parameter $\gamma$. Examples include $x_i(\gamma) = x_i^\gamma$, $x_i(\gamma) = \exp(\gamma x_i)$, and $x_i(\gamma) = x_i\,1\left(g(x_i) > \gamma\right)$. The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
$$H_0: \beta_2 = 0.$$
However, under $H_0$, the model is
$$y_i = \beta_1' z_i + e_i$$
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions. The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.
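A sketch of NLLS for the exponential specification $m(x,\theta) = \theta_1 + \theta_2\exp(\theta_3 x)$ listed earlier, using a plain Gauss-Newton iteration (one of several numerical methods that could be used; see Appendix E) together with the sandwich variance estimator of Theorem 7.4.1. The data are simulated and the starting values are chosen by hand, so this is purely illustrative.

import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(0, 2, size=n)
theta_true = np.array([1.0, 2.0, -1.0])
m = lambda x, t: t[0] + t[1] * np.exp(t[2] * x)
y = m(x, theta_true) + 0.2 * rng.normal(size=n)

def m_grad(x, t):
    # derivative of m(x, theta) with respect to (theta1, theta2, theta3)
    return np.column_stack([np.ones_like(x),
                            np.exp(t[2] * x),
                            t[1] * x * np.exp(t[2] * x)])

theta = np.array([0.5, 1.0, -0.5])        # hand-picked starting values
for _ in range(50):                       # Gauss-Newton iterations
    e = y - m(x, theta)
    G = m_grad(x, theta)
    step = np.linalg.lstsq(G, e, rcond=None)[0]
    theta = theta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# Sandwich variance estimate from Theorem 7.4.1.
e = y - m(x, theta)
G = m_grad(x, theta)
Q = G.T @ G / n
Omega = (G * (e**2)[:, None]).T @ G / n
V = np.linalg.inv(Q) @ Omega @ np.linalg.inv(Q)
se = np.sqrt(np.diag(V) / n)

print(theta, se)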

Proof of Theorem 7.4.1 (Sketch). NLLS estimation falls in the class of optimization estimators. For this theory, it is useful to denote the true value of the parameter $\theta$ as $\theta_0$.

The first step is to show that $\hat\theta \overset{p}{\longrightarrow} \theta_0$. Proving that nonlinear estimators are consistent is more challenging than for linear estimators. We sketch the main argument. The idea is that $\hat\theta$ minimizes the sample criterion function $S_n(\theta)$, which (for any $\theta$) converges in probability to the mean-squared error function $E\left(y_i - m(x_i,\theta)\right)^2$. Thus it seems reasonable that the minimizer $\hat\theta$ will converge in probability to $\theta_0$, the minimizer of $E\left(y_i - m(x_i,\theta)\right)^2$. It turns out that to show this rigorously, we need to show that $S_n(\theta)$ converges uniformly to its expectation $E\left(y_i - m(x_i,\theta)\right)^2$, which means that the maximum discrepancy must converge in probability to zero, to exclude the possibility that $S_n(\theta)$ is excessively wiggly in $\theta$. Proving uniform convergence is technically challenging, but it can be shown to hold broadly for relevant nonlinear regression models, especially if the regression function $m(x_i,\theta)$ is differentiable in $\theta$. For a complete treatment of the theory of optimization estimators see Newey and McFadden (1994).

Since $\hat\theta \overset{p}{\longrightarrow} \theta_0$, $\hat\theta$ is close to $\theta_0$ for $n$ large, so the minimization of $S_n(\theta)$ only needs to be examined for $\theta$ close to $\theta_0$. Let
$$y_i^0 = e_i + m_{\theta i}'\theta_0.$$
For $\theta$ close to the true value $\theta_0$, by a first-order Taylor series approximation,
$$m(x_i,\theta) \simeq m(x_i,\theta_0) + m_{\theta i}'\left(\theta - \theta_0\right).$$
Thus
$$y_i - m(x_i,\theta) \simeq \left(e_i + m(x_i,\theta_0)\right) - \left(m(x_i,\theta_0) + m_{\theta i}'\left(\theta - \theta_0\right)\right) = e_i - m_{\theta i}'\left(\theta - \theta_0\right) = y_i^0 - m_{\theta i}'\theta.$$
Hence the sum of squared errors function is
$$S_n(\theta) = \sum_{i=1}^n\left(y_i - m(x_i,\theta)\right)^2 \simeq \sum_{i=1}^n\left(y_i^0 - m_{\theta i}'\theta\right)^2$$
and the right-hand-side is the SSE function for a linear regression of $y_i^0$ on $m_{\theta i}$. Thus the NLLS estimator $\hat\theta$ has the same asymptotic distribution as the (infeasible) OLS regression of $y_i^0$ on $m_{\theta i}$, which is that stated in the theorem.

7.5 Least Absolute Deviations

We stated that a conventional goal in econometrics is estimation of the impact of variation in $x_i$ on the central tendency of $y_i$. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.

To recall the definition and properties of the median, let $y$ be a continuous random variable. The median $\theta_0 = \mathrm{med}(y)$ is the value such that $P(y \le \theta_0) = P(y \ge \theta_0) = .5$. Two useful facts about the median are that
$$\theta_0 = \operatorname*{argmin}_{\theta} E\,|y - \theta| \tag{7.8}$$

and
$$E\,\mathrm{sgn}\left(y - \theta_0\right) = 0,$$
where
$$\mathrm{sgn}(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0 \end{cases}$$
is the sign function.

These facts and definitions motivate three estimators of $\theta$. The first definition is the $50^{th}$ empirical quantile. The second is the value which minimizes $\frac{1}{n}\sum_{i=1}^n |y_i - \theta|$, and the third definition is the solution to the moment equation $\frac{1}{n}\sum_{i=1}^n \mathrm{sgn}\left(y_i - \theta\right) = 0$. These distinctions are illusory, however, as these estimators are indeed identical.

Now let's consider the conditional median of $y$ given a random vector $x$. Let $m(x) = \mathrm{med}(y \mid x)$ denote the conditional median of $y$ given $x$. The linear median regression model takes the form
$$y_i = x_i'\beta + e_i, \qquad \mathrm{med}(e_i \mid x_i) = 0.$$
In this model, the linear function $\mathrm{med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption is that the median function is linear in $x$.

Conditional analogs of the facts about the median are
$$P\left(y_i \le x'\beta \mid x_i = x\right) = P\left(y_i > x'\beta \mid x_i = x\right) = .5$$
$$E\left(\mathrm{sgn}(e_i) \mid x_i\right) = 0$$
$$E\left(x_i\,\mathrm{sgn}(e_i)\right) = 0$$
$$\beta = \operatorname*{argmin}_{\beta} E\left|y_i - x_i'\beta\right|.$$
These facts motivate the following estimator. Let
$$LAD_n(\beta) = \frac{1}{n}\sum_{i=1}^n\left|y_i - x_i'\beta\right|$$
be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function:
$$\hat\beta = \operatorname*{argmin}_{\beta} LAD_n(\beta).$$
Equivalently, it is a solution to the moment condition
$$\frac{1}{n}\sum_{i=1}^n x_i\,\mathrm{sgn}\left(y_i - x_i'\hat\beta\right) = 0. \tag{7.9}$$

The LAD estimator has an asymptotic normal distribution.

Theorem 7.5.1 Asymptotic Distribution of LAD Estimator
When the conditional median is linear in $x$,
$$\sqrt n\left(\hat\beta - \beta\right) \overset{d}{\longrightarrow} N(0, V)$$
where
$$V = \frac{1}{4}\left(E\left(x_i x_i' f(0 \mid x_i)\right)\right)^{-1}\left(E\,x_i x_i'\right)\left(E\left(x_i x_i' f(0 \mid x_i)\right)\right)^{-1}$$
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.
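The LAD criterion has no closed form, but it can be minimized exactly as a linear program. The following sketch (using scipy's generic linear-programming solver rather than any specialized quantile-regression routine) writes $y_i - x_i'\beta = u_i^+ - u_i^-$ with $u_i^+, u_i^- \ge 0$ and minimizes $\sum_i (u_i^+ + u_i^-)$; the data are simulated for illustration only.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, k = 300, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors

# Variables: [beta (k, unrestricted), u_plus (n >= 0), u_minus (n >= 0)].
# Minimize sum(u_plus + u_minus) subject to X beta + u_plus - u_minus = y.
c = np.concatenate([np.zeros(k), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * k + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
beta_lad = res.x[:k]

print(beta_lad)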

we provide a sketch here for completeness.9) is equivalent to g n ( ^ ) = 0. the height of the error density at its median. the minimizer of E jyi x0 j. by a Taylor series expansion and the fact g( ) = 0 g( ^ ) ' @ g( ) ^ @ 0 : 112 . In the special case where the error is independent of xi .10) This simpli…cation is similar to the simpli…cation of the asymptotic covariance of the OLS estimator under homoskedasticity. and is sketched here. For any …xed . Proof of Theorem 7. 0 0) 0) g( 0 )) = n 1=2 n X i=1 gi( 0) d ! N 0.1: Similar to NLLS.1 is advanced. the conditional density of the error at its median.5. and this improves estimation of the median. then there are many innovations near to the median. Let 0 denote the true value of 0 : p The …rst step is to show that ^ ! 0 : The general nature of the proof is similar to that for the p NLLS estimator. the minimizer of LADn ( ). LADn ( ) ! E jyi x0 j : i Furthermore.2. Computation of standard error for LAD estimates typically is based on equation (7. While a complete proof of Theorem 7.10). LAD is an optimization estimator. i P Since sgn (a) = 1 2 1 (a 0) . When f (0 j x) is large. where g n ( ) = n 1 n g i ( ) i=1 and g i ( ) = xi (1 2 1 (yi x0 )) : Let g( ) = Eg i ( ). i by the central limit theorem (Theorem C. This can be done with kernel estimation techniques. Exi x0 i = Exi x0 : Second using the law of iterated expectations and the chain rule of i @ g( ) = @ 0 = = = @ Exi 1 2 1 yi x0 i @ 0 @ 2 0 E xi E 1 ei x0 x0 0 j xi i i @ " Z 0 # xi x0 0 i @ 2 0 E xi f (e j xi ) de @ 1 2E xi x0 f x0 i i x0 i 0 j xi so @ g( ) = @ 0 2E xi x0 f (0 j xi ) : i Third. by the WLLN. The main di¢ culty is the estimation of f (0). converges in probability to 0 .5. (7. it can be shown that this convergence is uniform in : (Proving uniform convergence is more challenging than for the NLLS criterion since the LAD criterion is not di¤erentiable in .The variance of the asymptotic distribution inversely depends on f (0 j x) . First.) It follows that ^ . We need three preliminary results.1) p n (g n ( since Eg i ( 0 )g i ( di¤erentiation. See Chapter 16. then f (0 j x) = f (0) and the asymptotic variance simpli…es V = (Exi x0 ) i 4f (0)2 1 (7.

For ’ quantile Q of a random variable with distribution function F (u) is de…ned as th Q = inf fu : F (u) g When F (u) is continuous and strictly monotonic. the quantile regression functions can take any shape. then F (Q (x) j x) = : For …xed . As functions of x. then ): (7. the quantile regression function q (x) describes how the ’ quantile of the conditional th distribution varies with the regressors. V ) : 2E xi x0 f (0 j xi ) i n g( ^ ) 0) gn( ^ ) g( 0 )) n (g n ( 0 N 0. xi ) with conditional distribution function F (y j x) the conditional quantile function q (x) is Q (x) = inf fy : F (y j x) g: Again. when F (y j x) is continuous and strictly monotonic in y. then F (Q ) = . The quantile Q is the value such that (percent) of the mass of the distribution is less than Q : The median is the special case = :5: The following alternative representation is useful.Together p n ^ 0 ' = ' @ g( @ 0 1 0) p ng( ^ ) 1p 1p 1 1 E xi x0 f (0 j xi ) i 2 d 1 ! E xi x0 f (0 j xi ) i 2 = N (0. 7. otherwise its properties are unspeci…ed th without further restrictions. For the random variables (yi . This linear speci…cation assumes that Q (x) = 0 x where the coe¢ cients vary across the quantiles : We then have the linear quantile regression model yi = x0 i + ei where ei is the error de…ned to be the di¤erence between yi and its ’ conditional quantile x0 : th i By construction. 113 . However for computational convenience it is typical to assume that they are (approximately) linear in x (after suitable transformations).6 Quantile Regression 2 [0.8) for the median to all quantiles. 1] the Quantile regression has become quite popular in recent econometric practice. If the random variable U has ’ quantile th Q . so you can think of the quantile as the inverse of the distribution function. Exi xi p The third line follows from an asymptotic empirical process argument and the fact that ^ ! 0.12) This generalizes representation (7.11) Q = argmin E (U where (q) is the piecewise linear function (q) = q (1 ) q<0 q q 0 = q( 1 (q < 0)) : (7. the ’ conditional quantile of ei is zero.

the quantile regression estimator ^ for mization problem ^ = argmin Sn ( ) where Sn ( ) = 1X n i=1 n solves the mini- yi x0 i and (q) is de…ned in (7. and we have the simpli…cation V = (1 ) E xi x0 i 2 f (0) 1 : A recent monograph on the details of quantile regression is Koenker (2005).1.11). When the error ei is independent of xi . and are widely available. The null model is yi = x0 + ei i 114 . and test their signi…cance using a Wald test. it is useful to have a general test of the adequacy of the speci…cation.6. and form a Wald statistic ~ i i for = 0: Another popular approach is the RESET test proposed by Ramsey (1969).12).Given the representation (7. Theorem 7. Thus. V ) . Furthermore. if the model yi = x0 ^ + ei has been ^ i …t by OLS. One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression. Fortunately. numerical methods are necessary for its minimization. the unconditional density of ei at 0. Since the quanitle regression criterion function Sn ( ) does not have an algebraic solution.5. fast linear programming methods have been developed for this problem.7 Testing for Omitted NonLinearity If the goal is to estimate the conditional expectation E (yi j xi ) .1 Asymptotic Distribution of the Quantile Regression Estimator When the ’ conditional quantile is linear in x th p where V = (1 ) E xi x0 f (0 j xi ) i 1 n ^ d ! N (0. 7. since it has discontinuous derivatives. then f (0 j xi ) = f (0) . the asymptotic variance depends on the conditional density of the quantile regression error. Exi x0 i E xi x0 f (0 j xi ) i 1 and f (e j x) is the conditional density of ei given xi = x: In general. conventional Newton-type optimization methods are inappropriate. let z i = h(xi ) denote functions of xi which are not linear functions of xi (perhaps squares of non-binary regressors) and then …t yi = x0 ~ +z 0 ~ + ei by OLS. An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 7.

then Q11 > Q11 Q12 Q221 Q21 . is rejected at the % level if Wn show that under the null hypothesis. Otherwise. th 7. To see why this is the case. say. yi ^m yi = x0 ^ : Now let ^ i 1 C A 1)-vector of powers of yi : Then run the auxiliary regression ^ yi = x0 ~ + z 0 ~ + ei ~ i i (7. + x0 2i In the model 1 2 + ei x2i is “irrelevant” if 1 is the parameter of interest and 2 = 0: One estimator of 1 is to regress 1 yi on x1i alone. It is particularly powerful at detecting single-index models of the form yi = G(x0 ) + ei i where G( ) is a smooth “link”function.be an (m which is estimated by OLS. small values such as m = 2. ~ 1 = (X 0 X 1 ) (X 0 y) : Another is to regress yi on x1i and x2i jointly.13) by OLS. or 4 seem to work best.8 Irrelevant Variables yi = x0 1i E (xi ei ) = 0. m must be selected in advance. they will (typically) have di¤erent asymptotic variances. note that (7.13) may be written as 2 3 m yi = x0 ~ + x0 ^ ~ 1 + x0 ^ ~ 2 + x0 ^ ~ m 1 + ei ~ i i i i which has essentially approximated G( ) by a m’ order polynomial. and n!1 lim n var( ^ 1 ) = Ex1i x0 1i Ex1i x0 Ex2i x0 2i 2i 1 Ex2i x0 1i 1 2 = Q11 Q12 Q221 Q21 1 2 . Typically. and consequently Q111 2 < Q11 Q12 Q221 Q21 115 1 2 : . since Q12 Q221 Q21 > 0. zi = @ . yielding 1 1 ( ^ 1 . ^ 2 ): Under which conditions is ^ 1 or ~ 1 superior? It is easy to see that both estimators are consistent for 1 : However. The comparison between the two estimators is straightforward when the error is conditionally homoskedastic E e2 j xi = 2 : In this case i n!1 lim n var( ~ 1 ) = Ex1i x0 1i 1 2 = Q111 2 . say. . The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. yielding predicted values 0 2 yi ^ B . 3. and the two estimators have equal asymptotic e¢ ciency. and form the Wald statistic Wn for d = 0: It is easy (although somewhat tedious) to 2 m 1 : Thus the null 2 m 1 distribution. Wn ! exceeds the upper % tail critical value of the To implement the test. If Q12 = 0 (so the variables are orthogonal) then these two variance matrices equal.

:::. “Which subset of (x1 . + X2 2 M2 : + e. 7. this rule only makes sense when the true model is …nite dimensional. 116 .).13). xK ) enters the regression function E (yi j x1i = x1 . we see that ~ 1 has a lower asymptotic variance. In the absence of the homoskedasticity assumption. In many cases the problem of model selection can be reduced to the comparison of two nested models. We thus consider the question of the inclusion of X 2 in the linear regression y = X1 where X 1 is n k1 and X 2 is n M1 : 1 + X2 2 + e. There are many possible desirable properties for a model selection procedure. the question. with residual vectors e1 and e2 . and ~ 1 be the estimate under the constraint 0 = 0: (The least-squares estimate with the intercept omitted. For example. it is more appropriate to view model selection as determining the best …nite sample approximation. we will on 1 2 occasion use the homoskedasticity assumption E e2 j x1i . X 2 ) = 0 Note that M1 M2 : To be concrete. A model selection procedure is consistent if c P M = M1 j M1 c P M = M2 j M2 ! 1 ! 1 However. Let Exi = . as the larger problem can be written as a sequence of such comparisons. E (e j X 1 . there is no clear ranking of the e¢ ciency of the restricted estimator ~ 1 versus the unrestricted estimator. the question: “What is the right model for y?” is not well-posed. estimated ^ ^ variances ^ 2 and ^ 2 . respectively.. x2i = 2 : i A model selection procedure is a data-dependent rule which selects one of the two models. xKi = xK )?”is well posed. take the model yi = 0 + 1 xi + ei and suppose that 0 = 0: Let ^ 1 be the estimate of 1 from the unconstrained model. One useful property is consistency. For example. that it selects the true model with probability one if the sample is su¢ ciently large. k2 : This is equivalent to the comparison of the two models 1 1 y = X1 y = X1 + e. If the truth is in…nite dimensional.9 Model Selection In earlier sections we discussed the costs and bene…ts of inclusion/exclusion of variables. However. It is important that the model selection question be well-posed. To simplify some of the statistical discussion. when economic theory does not provide complete guidance? This is the question of model selection. X 2 ) = 0: E (e j X 1 . and E (xi )2 = 2 : Then x under (5. In contrast. because it does not make clear the conditioning set. etc. we say that M2 is true if 2 6= 0: To …x notation. models 1 and 2 are estimated by OLS.This means that ~ 1 has a lower asymptotic variance matrix than ^ 1 : We conclude that the inclusion of irrelevant variable reduces estimation e¢ ciency if these variables are correlated with the relevant variables. We c can write this as M. n!1 lim n var( ~ 1 ) = lim n var( ^ 1 ) = 2 2 x + 2 2 x 2 while n!1 : When 6= 0. this result can be reversed when the error is conditionally heteroskedastic. :::. How does a researcher go about selecting an econometric speci…cation.

then this model selection procedure is inconsistent. In contrast to the AIC. log(n) so c P M = M1 j M1 = = P (BIC1 < BIC2 j M1 ) P (LRn < log(n)k2 j M1 ) LRn = P < k2 j M1 log(n) ! P (0 < k2 ) = 1: 117 . Another problem is (7. if is set to be a small number. known as the BIC. LRn p ! 0. m The AIC can be derived as an estimate of the KullbackLeibler information distance K(M) = E (log f (y j X) log f (y j X. Indeed. One popular choice is the Akaike Information Criterion (AIC). For some critical level . else select M2 . BIC model selection is consistent. since under M1 . The reasoning which helps guide the choice of in hypothesis testing (controlling Type I error) is not relevant for c model selection. etc.14) where ^ 2 is the variance estimate for model m. LRn = n log ^ 2 1 then c P M = M1 j M1 = = = P (AIC1 < AIC2 j M1 ) P log(^ 2 ) + 2 1 2 k2 log ^ 2 ' Wn ! 2 d 2 k2 . The rule is to select M1 if AIC1 < AIC2 . The AIC under normality for model m is AICm = log ^ 2 + 2 m km : n c P M = M2 j M2 could vary dramatically. is BICm = log ^ 2 + log(n) m km : n (7.A common approach to model selection is to base the decision on a statistical test such as the Wald Wn : The model selection rule is as follows. since (7. Indeed. as the rule tends to over…t.15) ! P k1 k1 + k2 < log(^ 2 ) + 2 j M1 2 n n P (LRn < 2k2 j M1 ) < 2k2 < 1: While many criterions similar to the AIC have been proposed. k A major problem with this approach is that the critical level is indeterminate. (7. the BIC places a larger penalty than the AIC on the number of estimated parameters and is more parsimonious. depending on the sample size. the most popular is one proposed by Schwarz based on Bayesian arguments. let c satisfy P 22 > c = : Then select M1 if Wn c . That is. His criterion. M)) between the true density and the model density. else select M2 : AIC selection is inconsistent.16) Since log(n) > 2 (if n > 8). as P M = M1 j M1 ! 1 < 1: Another common approach to model selection is to use a selection criterion. then P M = M1 j M1 1 but c that if is held …xed.15) holds under M1 . and km is the number of coe¢ cients in the model.
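A small sketch of the AIC (7.14) and BIC formulas above, applied to the comparison of two nested models on simulated data in which the second regressor is truly irrelevant. Which model is selected will vary with the draw, but the BIC's heavier penalty makes it choose the smaller model more often, consistent with the consistency discussion.

import numpy as np

def aic_bic(y, X):
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    sig2 = np.mean((y - X @ b) ** 2)
    aic = np.log(sig2) + 2 * k / n
    bic = np.log(sig2) + np.log(n) * k / n
    return aic, bic

rng = np.random.default_rng(6)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x1 + rng.normal(size=n)        # x2 is irrelevant

X1 = np.column_stack([np.ones(n), x1])          # model M1
X2 = np.column_stack([np.ones(n), x1, x2])      # model M2

aic1, bic1 = aic_bic(y, X1)
aic2, bic2 = aic_bic(y, X2)
print("AIC selects:", "M1" if aic1 < aic2 else "M2")
print("BIC selects:", "M1" if bic1 < bic2 else "M2")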

The AIC selection criteria estimates the K models by OLS. 048.Also under M2 .16). 576: In the latter case. In the unordered case. xKi g. 6= 0. :::. and the AIC or BIC in principle can be implemented by estimating all possible subset models. selecting based on (7. which regressors enter the regression). stores the residual variance ^ 2 for each model. . The methods extend readily to the issue of selection among multiple regressors. K 6= 0: which are nested. Similarly for the BIC. a model consists of any possible subset of the regressors fx1i . The general problem is the model yi = 1 x1i + 2 x2i + + K xKi + ei . 6= 0. . E (ei j xi ) = 0 and the question is which subset of the coe¢ cients are non-zero (equivalently. one can show that thus LRn p ! 1.14). and then selects the model with the lowest AIC (7. 210 = 1024. there are 2K such models. There are two leading cases: ordered regressors and unordered. 2 2 = 3 = 3 = = K =0 K 6= 0. which can be a very large number. 118 . log(n) c P M = M2 j M2 = P LRn > k2 j M2 log(n) ! 1: We have discussed model selection between two models. 1 1 1 M2 : MK : 6= 0. and 220 = 1. : : : . a full-blown implementation of the BIC selection criterion would seem computationally prohibitive. However. In the ordered case. the models are M1 : . = =0 2 6= 0. For example.

Let satisfy Eg(yi ) = 0: Is a quantile of the distribution of yi ? 119 . ^ is the NLLS estimator. Report the results. the mean absolute error (MAE) is E jyi g(xi )j : Show that the function g(x) which minimizes the MAE is the conditional median m (x) = med(yi j xi ): Exercise 7. and V is the estimate of var ^ : You are interested in the conditional mean function E (yi j xi = x) = g(x) at some x: Find an asymptotic 95% con…dence interval for g(x): Exercise 7.Exercises Exercise 7. X). Estimate your selected model and report the results. re-estimate the model from part (c) using FGLS. )+ei with E (ei j xi ) = 0. Variables are listed in the …le cps78.5 De…ne g(u) = 1 (u < 0) where 1 ( ) is the indicator function (takes the value 1 if the argument is true. Interpret.3 Suppose that yi = g(xi . (f) Construct a model for the conditional variance. (i) Compare the estimated standard errors. Report coe¢ cient estimates and standard errors.dat contains 550 observations on 20 variables taken from the May 1978 current population survey.2 In the homoskedastic regression model y = X + e with E(ei j xi ) = 0 and E(e2 j i ^ xi ) = 2 . suppose ^ is the OLS estimate of with covariance matrix V . (e) Test whether the error variance is di¤erent for whites and nonwhites. (h) Do the OLS and FGLS estimates di¤er greatly? Note any interesting di¤erences. Estimate such a model. the estimates ( ^ . (d) Test whether the error variance is di¤erent for men and women. (a) Start by an OLS regression of LNWAGE on the other variables. xn+1 : ^ (a) Find a point forecast of yn+1 : (b) Find an estimate of the variance of this forecast. test for general heteroskedasticity and report the results. Note any interesting di¤erences. Exercise 7. else equals zero). (b) Consider augmenting the model by squares and/or cross-products of the conditioning variables. if desired. V . and the out-of-sample value of the regressors. ^ Exercise 7.4 For any predictor g(xi ) for yi . the residuals e.1 The data …le cps78. Interpret. (c) Are there any variables which seem to be unimportant as a determinant of wages? You may re-estimate the model without these variables. The goal of the exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the conditioning variables. (g) Using this model for the conditional variance.pdf. based on a sample of size n: Let ^ 2 be the estimate of 2 : You wish to forecast an out-of-sample value of yn+1 given ^ that xn+1 = x: Thus the available information is the sample (y. ^ 2 ).

at least 10 to 15) of log Qi are both below and above 7 : Examine the data and pick an appropriate range for 7 : (c) Estimate the model by non-linear least squares.17) (a) Following Nerlove. 120 . impose the restriction 3 + 4 + 5 = 1: This model is called a smooth threshold model. :::. Consider model (7. Do you agree with this modi…cation? (b) Now try a non-linear speci…cation. where : In addition.11). calculate z i and estimate the model by OLS.7.Exercise 7. For each value of 7 . (iii) BIC criterion. you estimated a cost function on a cross-section of electric companies. (d) Calculate standard errors for all the parameters ( 1 . For values of log Qi much below 7 . (ii) AIC criterion. Record the sum of squared errors. the variable log Qi has a regression slope of 2 : For values much above 7 . The equation you estimated was log T Ci = 1 + 2 log Qi + 3 log P Li + 4 log P Ki + 5 log P Fi + ei : (7. and …nd the value of 7 for which the sum of squared errors is minimized. I recommend the concentration method: Pick 10 (or more or you like) values of 7 in this range. add the variable (log Qi )2 to the regression. the regression slope is 2 + 6 .6 Verify equation (7.17) plus the extra term z i = log Qi (1 + exp ( (log Qi 7 ))) 1 6zi. The model is non-linear because of the parameter 7 : The model works best when 7 is selected so that several values (in this example. Exercise 7. 7 ).7 In Exercise 6. Do so. and the model imposes a smooth transition between these regimes. Assess the merits of this new speci…cation using (i) a hypothesis test.

Bootstrap inference is based on Gn (u): Let (yi . F ) with G(u. xn ) . Gn (u. F ) = limn!1 Gn (u. The unknown F is replaced by a consistent estimate Fn (one choice is discussed in the next section). x1 ) . F ) be a statistic of interest. x) = P (yi y. The method of moments estimator is the corresponding 121 .2 The Empirical Distribution Function Recall that F (y. This is generally impossible since F is unknown. The statistic Tn = Tn ((y1 . :::. Asymptotic inference is based on approximating Gn (u. Fn ): (8. where 1( ) is the indicator function. xi ) : Let Tn = Tn ((y1 . In a seminal contribution.Chapter 8 The Bootstrap 8. F ) we obtain Gn (u) = Gn (u. F ). P(Tn u) = Gn (u): We call Tn the bootstrap statistic: The distribution of Tn is identical to that of Tn when the true CDF of Fn rather than F: The bootstrap distribution is itself random. F ) = P(Tn u j F) In general. For example. This is a population moment. Plugged into Gn (u. :::. xi x) = E (1 (yi y) 1 (xi x)) . F ) = G(u) does not depend on F.1 De…nition of the Bootstrap Let F denote a distribution function for the population of observations (yi . Efron (1979) proposed the bootstrap. inference would be based on Gn (u. the t-statistic is a function of the parameter which itself is a function of F: The exact CDF of Tn when the data are sampled from the distribution F is Gn (u. (yn . we say that Tn is asymptotically pivotal and use the distribution function G(u) for inferential purposes. xi ) denote random variables with the distribution Fn : A random sample from this distribution is called the bootstrap data. meaning that G changes as F changes. F ) depends on F.1) We call Gn the bootstrap distribution. Ideally. (yn . for example an estimator ^ or a t-statistic ^ =s(^): Note that we write Tn as possibly a function of F . which makes a di¤erent approximation. as it depends on the sample through the estimator Fn : In the next sections we describe computation of the bootstrap distribution. x1 ) . 8. Fn ) constructed on this sample is a random variable with distribution Gn : That is. F ): When G(u. xn ) .

x) (1 d 1X 1 (yi n i=1 n y) 1 (xi x) : (8. it is helpful to think of a random pair (yi . by the CLT (Theorem C. n i=1 . x): We can easily calculate the moments of functions of (yi . x))) : To see the e¤ect of sample size on the EDF. in the Figure below. Fn (y.2. The random draws are from the N (0. x) : Furthermore. and 500. In general. x) = = the empirical sample average. the EDF is only a crude approximation to the CDF. as the sample size gets larger. i = 1. The EDF is a consistent estimator of the CDF. 50. xi ) . xi = xi ) n 1X h (yi . x): Thus by the WLLN (Theorem 5. xi ) = h(y. x)) ! N (0.1: Empirical Distribution Functions The EDF is a valid discrete probability distribution which puts probability mass 1=n at each pair (yi . p n (Fn (y. xi ) : Z Eh (yi . n: Notationally.1).2) x) F (y.sample moment: Fn (y. To see this. 1) distribution. I have plotted the EDF and true CDF for three random samples of size n = 25. xi ) P (yi = yi . 122 n X i=1 h (yi . note that for any (y. x) is called the empirical distribution function (EDF). x).1). 1 (yi y) 1 (xi p is an iid random variable with expectation F (y. xi ) with the distribution Fn : That is. Fn is by construction a step function. 100. x) ! F (y. F (y. Fn is a nonparametric estimate of F: Note that while F may be either discrete or continuous. xi x) = Fn (y. :::. x) = Fn (y. x) F (y. x)dFn (y. but the approximation appears to improve for the large n. the EDF step function gets uniformly close to the true CDF. For n = 25. P(yi y. Figure 8. xi ).2.

3 Nonparametric Bootstrap The nonparametric bootstrap is obtained when the bootstrap distribution (8. The bootstrap statistic Tn = Tn ((y1 . the value of implied by Fn : Typically n = ^. it might be desirable to construct a biased-corrected estimator (one with reduced bias). The popular alternative is to use simulation to approximate the distribution. xn )g. xn )) and Tn = ^ n = of n is n = E(Tn ): If this is calculated by the simulation described in the previous section. the parameter estimate. the t-ratio ^ =s(^) depends on : As the bootstrap statistic replaces F with Fn . The algorithm is identical to our discussion of Monte Carlo simulation. as there are 2n possible samples f(y1 . :::. In consequence. with the following points of clari…cation: The sample size n used for the simulation is the same as the sample size. (yn .4 Bootstrap Estimation of Bias and Variance ^ The bias of ^ is n = E(^ : Then n = E(Tn ( 0 )): The bootstrap 0 ): Let Tn ( ) = ^ ^: The bootstrap estimate counterparts are ^ = ^((y1 . 123 . B is known as the number of bootstrap replications. B = 1000 typically su¢ ces. x1 ) . which is generally not a problem. 8. :::. x1 ) . This is equivalent to sampling a pair (yi . However. x1 ) . xi ) randomly from the sample. sampling from the EDF is equivalent to random sampling a pair (yi . xn )g will necessarily have some ties and multiple values. it similarly replaces with n . A theory for the determination of the number of bootstrap replications B has been developed by Andrews and Buchinsky (2000). the estimate of ^n = = B 1 X Tnb B n is 1 B = ^ b=1 B X b=1 ^ b ^ ^: If ^ is biased. xi ) from the observed data with replacement. This is repeated B times. it is typically through dependence on a parameter. such a calculation is computationally infeasible.2) as the estimate Fn of F: Since the EDF Fn is a multinomial (with n support points). When the statistic Tn is a function of F. xi ) are drawn randomly from the empirical distribution. (yn . xn ) . :::. The random vectors (yi . (yn . Since Fn is a discrete probability distribution putting probability mass 1=n at each sample point.8. in principle the distribution Gn could be calculated by direct methods. so long as the computational costs are reasonable. a bootstrap sample f(y1 . (When in doubt use ^:) Sampling from the EDF is particularly easy. this would be ~=^ n. It is desirable for B to be large.1) is de…ned using the EDF (8. :::. Ideally. For example. Fn ) is calculated for each bootstrap sample. (yn . x1 ) .
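A sketch of the nonparametric bootstrap just described, applied to estimating the bias and standard error of a sample statistic (here a ratio of means on simulated data). Sampling pairs with replacement from the observed data is exactly sampling from the EDF $F_n$; the statistic and data-generating process are chosen only for illustration.

import numpy as np

rng = np.random.default_rng(7)
n = 100
y = np.exp(rng.normal(size=n))
x = np.exp(rng.normal(size=n))
theta_hat = y.mean() / x.mean()          # statistic of interest

B = 1000                                  # number of bootstrap replications
theta_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)     # draw n observations with replacement
    theta_star[b] = y[idx].mean() / x[idx].mean()

bias_boot = theta_star.mean() - theta_hat
se_boot = theta_star.std(ddof=1)
theta_bc = theta_hat - bias_boot          # bias-corrected: 2*theta_hat - mean(theta*)

print(theta_hat, bias_boot, se_boot, theta_bc)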

q ^ s (^) = Vn : While this standard error may be calculated and reported. it is not clear if it is useful. the use of the bootstrap presumes that such asymptotic approximations might be poor. However. Fn ) denote the quantile function of the bootstrap distribution. the bootstrap makes the following experiment. let qn ( . and the best guess is the di¤erence between ^ and ^ : Similarly if ^ is higher than ^. then the estimator is upward-biased and the biased-corrected estimator should be lower than ^. qn ( . and qn ( ) = qn ( .5 Percentile Intervals For a distribution function Gn (u. qn (1 =2)]: This motivates a con…dence interval proposed by Efron: C1 = [qn ( =2). :::. that the biased-corrected estimator is not ^ : Intuitively.] Let qn ( ) denote the quantile function of the true sampling distribution. Suppose that ^ is the truth. The primary use of asymptotic standard errors is to construct asymptotic con…dence intervals. This is the function which solves Gn (qn ( . ^ lies in the region [qn ( =2). F ) denote its quantile function. F ). It appears superior to calculate bootstrap con…dence intervals. in particular. Note that this function will change depending on the underlying statistic Tn whose distribution is Gn : Let Tn = ^. F ) = : [When Gn (u. F ) may be non-unique. so a biased-corrected estimator of should be larger than ^. which are based on the asymptotic normal approximation to the t-ratio. TnB g. Computationally. the ’ sample quantile of the ^ th simulated statistics fTn1 . Let Tn = ^: The variance of ^ is Vn = E(Tn Let Tn = ^ : It has variance Vn = E(Tn The simulation estimate is B X ^ ^ = 1 Vn b B b=1 ETn )2 : ETn )2 : ^ 2 : A bootstrap standard error for ^ is the square root of the bootstrap estimate of variance. and we turn to this next. The (1 )% Efron percentile interval is then [^n ( =2). as discussed in the section on Monte Carlo simulation. this suggests that the estimator is downward-biased. the quantile qn ( ) is estimated by qn ( ). F ) is discrete. Then what is the average value of ^ calculated from such samples? The answer is ^ : If this is lower than ^. but we will ignore such complications. qn (1 =2)]: This is often called the percentile con…dence interval.but n is unknown. qn (1 q ^ =2)]: 124 . in which case the normal approximation is suspected. an estimate of a parameter of interest. In (1 )% of samples. F ). 8. The (estimated) bootstrap biased-corrected estimator is ~ = ^ = ^ = 2^ ^n (^ ^ : ^) Note.

then the idealized percentile bootstrap method will be accurate. However. f (qn (1 =2))]. ^ + qn (1 =2)]: The latter has coverage probability P 0 0 2 C1 = P ^ + qn ( =2) = P qn (1 =2) 0 ^ + qn (1 0 =2) ^ qn ( =2) =2). F0 ) = (1 = = 1 1 1 1 Gn ( qn (1 (1 2 =2). many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric. as we show now. F0 ) =2). F0 ) Gn ( qn (1 which generally is not 1 ! There is one important exception. so an exact (1 )% con…dence interval for 0 C2 = [^ qn (1 qn ( =2)]: This motivates a bootstrap analog C2 = [^ qn (1 =2).) Then C1 can alternatively be written as C1 = [^ + qn ( =2). = Gn ( qn ( =2). F0 )) 2 Gn (qn (1 0 and this idealized con…dence interval is accurate. The problems with the percentile method can be circumvented. F0 ) 0 = Gn ( qn ( =2). ^ 0 qn (1 =2)) ^ qn ( =2). F0 ) = 1 Gn (u. and also has the feature that it is translation invariant. C1 and C1 are designed for the case that ^ has a symmetric distribution about 0 : When ^ does not have a symmetric distribution. Let Tn ( ) = ^ and let qn ( ) be the quantile function of its distribution. it also follows that if there exists some monotonically increasing transformation f ( ) such that f (^) is symmetrically distributed about f ( 0 ). F0 )) Gn (qn ( =2). Based on these arguments. C1 may perform quite poorly. at least in principle. F0 ). ^ + q (1 n =2)]: This is a bootstrap estimate of the “ideal” con…dence interval 0 C1 = [^ + qn ( =2). then percentile method applied to this problem will produce the con…dence interval [f (qn ( =2)).The interval C1 is a popular bootstrap con…dence interval often used in empirical practice. Therefore. Let Tn ( ) = ^ . was popularized by Efron early in the history of the bootstrap. It will be useful if we introduce an alternative de…nition C1 . This is because it is easy to compute. C1 is in a deep sense very poorly motivated. which is a naturally good property. by the translation invariance argument presented above. so P 0 0 2 C1 has a symmetric distribution. If ^ then Gn ( u. if we de…ne = f ( ) as the parameter of interest for a monotonically increasing function f. However. (These are the original quantiles. That is. 125 ^ qn ( =2)]: . with subtracted. by an alternative method. Then 1 = P (qn ( =2) = P^ qn (1 0 Tn ( 0 ) =2) would be =2). simple to motivate.

and the test rejects if Tn ( 0 ) < qn ( ): ): Similarly. that the bootstrap test the bootstrap t-statistics Tn = ^ statistic is centered at the estimate ^.6 Percentile-t Equal-Tailed Interval Suppose we want to test H0 : = 0 against H1 : < 0 at size : We would set Tn ( ) = ^ =s(^) and reject H0 in favor of H1 if Tn ( 0 ) < c.Notice that generally this is very di¤erent from the Efron interval C1 ! They coincide in the special case that Gn (u) is symmetric about ^. this interval can be estimated from a bootstrap simulation by sorting the ^ . discussed above. 8. 1 = P (qn ( =2) = P qn ( =2) = P ^ so an exact (1 Tn ( 0 ) ^ 0 ): qn (1 =s(^) 0 =2)) qn (1 ^ =2) s(^)qn (1 0 =2) would be =2). but otherwise they di¤er. where c would be selected so that P (jTn ( 0 )j > c) = : 126 . 8. These t-statistics are sorted to …nd the estimated quantiles qn ( ) and/or qn (1 ^ Let Tn ( ) = ^ =s(^). each =2: Computationally. and the standard error s(^ ) is calculated on the bootstrap ^ sample. this is based on the critical values from the one-sided hypothesis tests.7 ^ Symmetric Percentile-t Intervals Suppose we want to test H0 : = 0 against H1 : 6= 0 at size : We would set Tn ( ) = =s(^) and reject H0 in favor of H1 if jTn ( 0 )j > c. these critical values can be estimated from a bootstrap simulation by sorting ^ =s(^ ): Note. ^ s(^)qn ( =2) . the bootstrap test rejects if Tn ( 0 ) > qn (1 Computationally. where c would be selected so that P (Tn ( 0 ) < c) = : Thus c = qn ( ): Since this is unknown. a bootstrap test replaces qn ( ) with the bootstrap estimate qn ( ). It is equal-tailed or central since the probability that 0 is below the left endpoint approximately equals the probability that 0 is above the right endpoint. Computationally. if the alternative is H1 : > 0 . but is not widely used in practice. and this is important. ^ s(^)qn ( =2)]: This is often called a percentile-t con…dence interval. ^ ^ n n This con…dence interval is discussed in most theoretical treatments of the bootstrap. which are centered at the sample estimate ^: These are sorted bootstrap statistics Tn = ^ to yield the quantile estimates qn (:025) and qn (:975): The 95% con…dence interval is then [^ ^ ^ ^ q (:025)]: q (:975). Then taking the intersection of two one-sided intervals. )% con…dence interval for 0 C3 = [^ s(^)qn (1 s(^)qn ( =2)]: This motivates a bootstrap analog C3 = [^ s(^)qn (1 =2).
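A sketch of the symmetric percentile-t interval for a sample mean on simulated data: each bootstrap sample yields a t-ratio centered at the sample estimate, the $1-\alpha$ quantile of the absolute bootstrap t-ratios supplies the critical value, and the interval is $\hat\theta \pm s(\hat\theta)\,q^*$. The statistic and the data-generating process are chosen only for illustration.

import numpy as np

rng = np.random.default_rng(8)
n = 80
y = np.exp(rng.normal(size=n))           # skewed data

theta_hat = y.mean()
s_hat = y.std(ddof=1) / np.sqrt(n)       # standard error of the sample mean

B = 999
abs_t = np.empty(B)
for b in range(B):
    yb = y[rng.integers(0, n, size=n)]               # bootstrap sample
    se_b = yb.std(ddof=1) / np.sqrt(n)
    abs_t[b] = abs(yb.mean() - theta_hat) / se_b     # t-ratio centered at theta_hat

q = np.quantile(abs_t, 0.95)             # bootstrap critical value
ci = (theta_hat - s_hat * q, theta_hat + s_hat * q)
print(theta_hat, ci)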

not ^ Note in the simulation that the Wald statistic is a quadratic form in ^ 0 : [This is a typical mistake made by practitioners. In some cases. which is a symmetric distribution function. where qn ( ) is the bootstrap critical value for a two-sided hypothesis test. or the number which solves the equation Gn (qn ( )) = Gn (qn ( )) Gn ( qn ( )) = 1 : Computationally. F ) then u lim Gn (u. Equivalently. such as when Tn is a t-ratio. where qn ( ) is the (1 )% quantile of the distribution of Wn = n ^ ^ 0 ^ V 1 ^ ^ : Computationally. where qn ( ) is the (1 )% quantile of the distribution of Wn : The bootstrap test rejects if Wn qn ( ). then to test H0 : = 0 against H1 : 6= statistic 0 ^ 1 ^ Wn ( ) = n ^ V 0 at size . Thus here Tn ( ) = Wn ( ): The ideal test rejects if Wn qn ( ). n!1 127 . The ideal critical value c = qn ( ) solves the equation Gn (qn ( )) = 1 : Gn ( c) Equivalently.] 8. C4 is called the symmetric percentile-t interval.3) is unknown. the critical value qn ( ) is found as the quantile from simulated values of Wn : ^ . and taking the upper % quantile. F ) = .8 Asymptotic Expansions d 2 Let Tn 2 R be a statistic such that Tn ! N(0. qn ( ) is estimated from a bootstrap simulation by sorting the bootstrap t^ =s(^ ). then 2 = 1: In other cases writing Tn Gn (u. qn ( ) is the 1 quantile of the distribution of jTn ( 0 )j : The bootstrap estimate is qn ( ). the 1 quantile of the distribution of jTn j .Note that P (jTn ( 0 )j < c) = P ( c < Tn ( 0 ) < c) = Gn (c) Gn (c). The bootstrap test rejects if statistics jTn j = ^ jTn ( 0 )j > qn ( ): Let C4 = [^ s(^)qn ( ). we would use a Wald or some other asymptotically chi-square statistic. ): 2 (8. ^ + s(^)qn ( )]. It is designed to work well since P( 0 2 C4 ) = P ^ s(^)qn ( ) 0 ^ + s(^)q ( ) n = P (jTn ( 0 )j < qn ( )) ' P (jTn ( 0 )j < qn ( )) = 1 : If is a vector.

Equivalently, writing $T_n \sim G_n(u, F)$, then for each $u$ and $F$, $\lim_{n \to \infty} G_n(u, F) = \Phi\left(u/\sigma\right)$, or
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + o(1). \qquad (8.4)$$
While (8.4) says that $G_n$ converges to $\Phi(u/\sigma)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.

The following notation will be helpful. Let $a_n$ be a sequence.

Definition 8.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.

Definition 8.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.

Definition 8.8.3 $a_n = o(n^{-r})$ if $n^r |a_n| \to 0$ as $n \to \infty$.

Basically, $a_n = O(n^{-r})$ if it declines to zero like $n^{-r}$. We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.

Theorem 8.8.1 Under regularity conditions and (8.3),
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + \frac{1}{n^{1/2}}\, g_1(u, F) + \frac{1}{n}\, g_2(u, F) + O(n^{-3/2})$$
uniformly over $u$, where $g_1$ is an even function of $u$, and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.

The expansion in Theorem 8.8.1 is often called an Edgeworth expansion.

We can interpret Theorem 8.8.1 as follows. First, $G_n(u, F)$ converges to the normal limit at rate $n^{1/2}$. To a second order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2}\, g_1(u, F).$$
Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2}\, g_1(u, F) + n^{-1}\, g_2(u, F),$$
which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).

[Side Note: When $T_n = \sqrt{n}\left(\overline{X}_n - \mu\right)/\sigma$, a standardized sample mean, then
$$g_1(u) = -\frac{1}{6}\,\kappa_3\left(u^2 - 1\right)\phi(u)$$
$$g_2(u) = -\left(\frac{1}{24}\,\kappa_4\left(u^3 - 3u\right) + \frac{1}{72}\,\kappa_3^2\left(u^5 - 10u^3 + 15u\right)\right)\phi(u),$$
where $\phi(u)$ is the standard normal pdf, and
$$\kappa_3 = \frac{E(X - \mu)^3}{\sigma^3}, \qquad \kappa_4 = \frac{E(X - \mu)^4}{\sigma^4} - 3$$
are the standardized skewness and excess kurtosis of the distribution of $X$. Note that when $\kappa_3 = 0$ and $\kappa_4 = 0$, then $g_1 = 0$ and $g_2 = 0$, so the second-order Edgeworth expansion corresponds to the normal distribution.]

Francis Edgeworth

Francis Ysidro Edgeworth (1845-1926) of Ireland, founding editor of the Economic Journal, was a profound economic and statistical theorist, developing the theories of indifference curves and asymptotic expansions. He also could be viewed as the first econometrician due to his early use of mathematical statistics in the study of economic data.

8.9 One-Sided Tests

Using the expansion of Theorem 8.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio $T_n$. An asymptotic test is based on $\Phi(u)$.

To the second order, the exact distribution is
$$P(T_n < u) = G_n(u, F_0) = \Phi(u) + \frac{1}{n^{1/2}}\, g_1(u, F_0) + O(n^{-1})$$
since $\sigma = 1$. The difference is
$$\Phi(u) - G_n(u, F_0) = \frac{1}{n^{1/2}}\, g_1(u, F_0) + O(n^{-1}) = O(n^{-1/2}),$$
so the order of the error is $O(n^{-1/2})$.

A bootstrap test is based on $G_n^*(u)$, which from Theorem 8.8.1 has the expansion
$$G_n^*(u) = G_n(u, F_n) = \Phi(u) + \frac{1}{n^{1/2}}\, g_1(u, F_n) + O(n^{-1}).$$
Because $\Phi(u)$ appears in both expansions, the difference between the bootstrap distribution and the true distribution is
$$G_n^*(u) - G_n(u, F_0) = \frac{1}{n^{1/2}}\left(g_1(u, F_n) - g_1(u, F_0)\right) + O(n^{-1}).$$
Since $F_n$ converges to $F$ at rate $\sqrt{n}$, and $g_1$ is continuous with respect to $F$, the difference $\left(g_1(u, F_n) - g_1(u, F_0)\right)$ converges to 0 at rate $\sqrt{n}$. Heuristically,
$$g_1(u, F_n) - g_1(u, F_0) \approx \frac{\partial}{\partial F}\, g_1(u, F_0)\left(F_n - F_0\right) = O(n^{-1/2}).$$
The "derivative" $\frac{\partial}{\partial F} g_1(u, F)$ is only heuristic, as $F$ is a function.

We conclude that
$$G_n^*(u) - G_n(u, F_0) = O(n^{-1}),$$
or
$$P(T_n \le u) = P(T_n^* \le u) + O(n^{-1}),$$
which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.

8.10 Symmetric Two-Sided Tests

If a random variable $y$ has distribution function $H(u) = P(y \le u)$, then the random variable $|y|$ has distribution function
$$\overline{H}(u) = H(u) - H(-u)$$
since
$$P(|y| \le u) = P(-u \le y \le u) = P(y \le u) - P(y \le -u) = H(u) - H(-u).$$
For example, if $Z \sim N(0,1)$, then $|Z|$ has distribution function
$$\overline{\Phi}(u) = \Phi(u) - \Phi(-u) = 2\Phi(u) - 1.$$
Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function
$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F).$$
A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \xrightarrow{d} Z$, then $|T_n| \xrightarrow{d} |Z| \sim \overline{\Phi}$. Thus asymptotic critical values are taken from the $\overline{\Phi}$ distribution, and exact critical values are taken from the $\overline{G}_n(u, F_0)$ distribution. From Theorem 8.8.1, we can calculate that
$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F)
= \left(\Phi(u) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F)\right) - \left(\Phi(-u) + \frac{1}{n^{1/2}} g_1(-u, F) + \frac{1}{n} g_2(-u, F)\right) + O(n^{-3/2})
= \overline{\Phi}(u) + \frac{2}{n}\, g_2(u, F) + O(n^{-3/2}), \qquad (8.5)$$
where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is
$$\overline{\Phi}(u) - \overline{G}_n(u, F_0) = \frac{2}{n}\, g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}).$$
The order of the error is $O(n^{-1})$.

Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.

Applying (8.5) to the bootstrap distribution, we find
$$\overline{G}_n^*(u) = \overline{G}_n(u, F_n) = \overline{\Phi}(u) + \frac{2}{n}\, g_2(u, F_n) + O(n^{-3/2}).$$

Thus the difference between the bootstrap and exact distributions is
$$\overline{G}_n^*(u) - \overline{G}_n(u, F_0) = \frac{2}{n}\left(g_2(u, F_n) - g_2(u, F_0)\right) + O(n^{-3/2}) = O(n^{-3/2}),$$
the last equality because $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_2$ is continuous in $F$. Another way of writing this is
$$P\left(|T_n| < u\right) = P\left(|T_n^*| < u\right) + O(n^{-3/2}),$$
so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests have better rates of convergence than the one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests. A reader might get confused between the two simultaneous effects. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.

8.11 Percentile Confidence Intervals

To evaluate the coverage rate of the percentile interval, set $T_n = \sqrt{n}\left(\hat\theta - \theta_0\right)$. We know that $T_n \xrightarrow{d} N(0, V)$, which is not pivotal, as it depends on the unknown $V$. Theorem 8.8.1 shows that a first-order approximation is
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}),$$
where $\sigma = \sqrt{V}$, and for the bootstrap
$$G_n^*(u) = G_n(u, F_n) = \Phi\left(\frac{u}{\hat\sigma}\right) + O(n^{-1/2}),$$
where $\hat\sigma = V(F_n)^{1/2}$ is the bootstrap estimate of $\sigma$. The difference is
$$G_n^*(u) - G_n(u, F_0) = \Phi\left(\frac{u}{\hat\sigma}\right) - \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}) = O(n^{-1/2}),$$
since $\hat\sigma$ converges to $\sigma$ at rate $\sqrt{n}$. Hence the order of the error is $O(n^{-1/2})$.

The good news is that the percentile-type methods (if appropriately used) can yield $\sqrt{n}$-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.

The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.

Based on these arguments, the theoretical literature (e.g. Hall, 1992; Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.
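As a sketch of the point just made, that percentile methods require no standard errors, the following illustrative Python fragment computes the Efron percentile interval for a sample mean; only the sorted bootstrap estimates $\hat\theta^*$ are used. The simulated data and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=100)    # illustrative data
n = y.size

B = 999
theta_star = np.empty(B)
for b in range(B):
    yb = rng.choice(y, size=n, replace=True)
    theta_star[b] = yb.mean()               # note: no standard error is computed

alpha = 0.05
# Efron percentile interval: the alpha/2 and 1-alpha/2 quantiles of theta*.
C1 = (np.quantile(theta_star, alpha / 2), np.quantile(theta_star, 1 - alpha / 2))
print(C1)
```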

8.12 Bootstrap Methods for Regression Models

The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss bootstrap methods appropriate for the linear regression model
$$y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0.$$
The non-parametric bootstrap resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies
$$y_i^* = x_i^{*\prime}\hat\beta + e_i^*, \qquad E(x_i^* e_i^*) = 0,$$
but generally
$$E(e_i^* \mid x_i^*) \ne 0.$$
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true).

One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.

To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat\beta + e_i^*$. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat e_1, \ldots, \hat e_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat\sigma^2)$.

For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, \ldots, x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really "fixed" or random.

The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that
$$E(e_i^* \mid x_i) = 0, \qquad E(e_i^{*2} \mid x_i) = \hat e_i^2, \qquad E(e_i^{*3} \mid x_i) = \hat e_i^3.$$
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
$$P\left(e_i^* = \left(\frac{1+\sqrt{5}}{2}\right)\hat e_i\right) = \frac{\sqrt{5}-1}{2\sqrt{5}}, \qquad
P\left(e_i^* = \left(\frac{1-\sqrt{5}}{2}\right)\hat e_i\right) = \frac{\sqrt{5}+1}{2\sqrt{5}}.$$
For each $x_i$, you sample $e_i^*$ using this two-point distribution.
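Below is a minimal sketch of a single wild bootstrap replication for the linear regression model, using the two-point distribution displayed above. The simulated design is invented for illustration; only the weight distribution and the rule $y_i^* = x_i'\hat\beta + e_i^*$ with fixed regressors come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative regression data with heteroskedastic errors.
n = 200
x = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.normal(size=n) * (1 + np.abs(x[:, 1]))
y = x @ np.array([1.0, 2.0]) + e

# OLS fit and residuals.
beta_hat, *_ = np.linalg.lstsq(x, y, rcond=None)
e_hat = y - x @ beta_hat

# Two-point wild bootstrap weights:
#   e* = ((1+sqrt(5))/2) e_hat  with prob (sqrt(5)-1)/(2 sqrt(5)),
#   e* = ((1-sqrt(5))/2) e_hat  with prob (sqrt(5)+1)/(2 sqrt(5)).
s5 = np.sqrt(5.0)
p_up = (s5 - 1) / (2 * s5)
draw = rng.uniform(size=n) < p_up
w = np.where(draw, (1 + s5) / 2, (1 - s5) / 2)
e_star = w * e_hat

# The regressors are held fixed; only the errors are redrawn.
y_star = x @ beta_hat + e_star
beta_star, *_ = np.linalg.lstsq(x, y_star, rcond=None)
print(beta_star)
```

Repeating the last block B times and collecting the resulting estimates (or t-statistics) yields the bootstrap distribution used for inference.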

Exercises

Exercise 8.1 Let $F_n(x)$ denote the EDF of a random sample. Show that
$$\sqrt{n}\left(F_n(x) - F_0(x)\right) \xrightarrow{d} N\left(0, F_0(x)\left(1 - F_0(x)\right)\right).$$

Exercise 8.2 Take a random sample $\{y_1, \ldots, y_n\}$ with $\mu = E y_i$ and $\sigma^2 = \operatorname{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \overline{y}_n$. Find the population moments $E T_n$ and $\operatorname{var}(T_n)$. Let $\{y_1^*, \ldots, y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \overline{y}_n^*$ be its sample mean. Find the bootstrap moments $E T_n^*$ and $\operatorname{var}(T_n^*)$.

Exercise 8.3 Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat\beta$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat e = y - X\hat\beta$ the OLS residuals.

(a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat e_i) : i = 1, \ldots, n\}$. That is, draw a random integer $i'$ from $[1, 2, \ldots, n]$, and set $x^* = x_{i'}$ and $e^* = \hat e_{i'}$. Set $y^* = x^{*\prime}\hat\beta + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.

(b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat\beta^*$ and any other statistic of interest.

Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.

Exercise 8.4 Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate $\hat\theta^*$ on these samples and then calculate
$$T_n^* = \frac{\hat\theta^* - \hat\theta}{s(\hat\theta)},$$
where $s(\hat\theta)$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval
$$C = \left[\hat\theta - s(\hat\theta)q_n^*(.95),\ \hat\theta - s(\hat\theta)q_n^*(.05)\right].$$
Show that $C$ exactly equals the Alternative percentile interval (not the percentile-t interval).

Exercise 8.5 You want to test $H_0: \theta = 0$ against $H_1: \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat\theta/s(\hat\theta) > c$, where $c$ is picked so that Type I error is $\alpha$. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat\theta^*$ on these samples and then calculate
$$T_n^* = \hat\theta^*/s(\hat\theta^*).$$
Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat\theta/s(\hat\theta) > q_n^*(.95)$. What is wrong with this procedure?

Exercise 8.6 Suppose that in an application, $\hat\theta = 1.2$ and $s(\hat\theta) = .2$. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat\theta^*$ is calculated on each sample. The $\hat\theta^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat\theta^*$ are .75 and 1.3, respectively.

(a) Report the 95% Efron Percentile interval for $\theta$.

(b) Report the 95% Alternative Percentile interval for $\theta$.

(c) With the given information, can you report the 95% Percentile-t interval for $\theta$?

Exercise 8.7 The datafile hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.

Chapter 9

Generalized Method of Moments

9.1 Overidentified Linear Model

Consider the linear model
$$y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0,$$
where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Now we can write the model as
$$y_i = x_{1i}'\beta_1 + e_i, \qquad E(x_i e_i) = 0.$$
In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E(x_i e_i) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.

This is a special case of a more general class of moment condition models. Let $g(y, z, x, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \ge k$ such that
$$E g(y_i, z_i, x_i, \beta_0) = 0, \qquad (9.1)$$
where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, x, \beta_1) = x\left(y - x_1'\beta_1\right)$. In econometrics, this class of models are called moment condition models. In the statistics literature, these are known as estimating equations.

As an important special case we will devote special attention to linear moment condition models, which can be written as
$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0,$$
where the dimensions of $x_i$ and $z_i$ are $k \times 1$ and $\ell \times 1$, with $\ell \ge k$. If $k = \ell$ the model is just identified, otherwise it is overidentified. The variables $x_i$ may be components and functions of $z_i$, but this is not required. This model falls in the class (9.1) by setting
$$g(y, z, x, \beta_0) = z\left(y - x'\beta_0\right). \qquad (9.2)$$

9.2 GMM Estimator

Define the sample analog of (9.2),
$$g_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta) = \frac{1}{n}\sum_{i=1}^n z_i\left(y_i - x_i'\beta\right) = \frac{1}{n}\left(Z'y - Z'X\beta\right). \qquad (9.3)$$
The method of moments estimator for $\beta$ is defined as the parameter value which sets $g_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $g_n(\beta)$ "close" to zero.

For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$J_n(\beta) = n \cdot g_n(\beta)'\, W_n\, g_n(\beta).$$
This is a non-negative measure of the "length" of the vector $g_n(\beta)$. For example, if $W_n = I$, then
$$J_n(\beta) = n \cdot g_n(\beta)'\, g_n(\beta) = n \cdot \left\|g_n(\beta)\right\|^2,$$
the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 9.2.1 $\hat\beta_{GMM} = \operatorname{argmin}_{\beta} J_n(\beta)$.

Note that if $k = \ell$, then $g_n(\hat\beta) = 0$, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are
$$0 = \frac{\partial}{\partial\beta} J_n(\hat\beta) = 2\,\frac{\partial}{\partial\beta} g_n(\hat\beta)'\, W_n\, g_n(\hat\beta) = -2\,\frac{1}{n} X'Z\, W_n\, \frac{1}{n} Z'\left(y - X\hat\beta\right),$$
so
$$2\, X'Z\, W_n\, Z'X \hat\beta = 2\, X'Z\, W_n\, Z'y,$$
which establishes the following.

Proposition 9.2.1 $\hat\beta_{GMM} = \left(X'Z\, W_n\, Z'X\right)^{-1} X'Z\, W_n\, Z'y$.

While the estimator depends on $W_n$, the dependence is only up to scale, for if $W_n$ is replaced by $c\,W_n$ for some $c > 0$, $\hat\beta_{GMM}$ does not change.

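As a numerical illustration of Proposition 9.2.1, the following sketch computes the linear GMM estimator for two choices of weight matrix on simulated overidentified data. The design, sample size, and function names are hypothetical; only the closed-form formula is from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overidentified IV design: k = 2 regressors, l = 3 instruments.
n = 500
z = rng.normal(size=(n, 3))                       # instruments
u = rng.normal(size=n)
x = np.column_stack([z[:, 0] + 0.5 * u + rng.normal(size=n),
                     z[:, 1] + rng.normal(size=n)])
y = x @ np.array([1.0, -0.5]) + u                 # x1 is endogenous (correlated with the error)

def gmm(y, x, z, w):
    """beta_hat = (X'Z W Z'X)^(-1) X'Z W Z'y, as in Proposition 9.2.1."""
    xz = x.T @ z
    return np.linalg.solve(xz @ w @ xz.T, xz @ w @ (z.T @ y))

w1 = np.eye(z.shape[1])                # W = identity
w2 = np.linalg.inv(z.T @ z / n)        # W = (Z'Z/n)^(-1), a common first-step choice
print(gmm(y, x, z, w1))
print(gmm(y, x, z, w2))
```

Because the estimator is invariant to rescaling of the weight matrix, using $Z'Z$ or $Z'Z/n$ above gives identical estimates.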
9.3 Distribution of GMM Estimator

Assume that $W_n \xrightarrow{p} W > 0$. Let
$$Q = E\left(z_i x_i'\right)$$
and
$$\Omega = E\left(z_i z_i' e_i^2\right) = E\left(g_i g_i'\right),$$
where $g_i = z_i e_i$. Then
$$\left(\frac{1}{n} X'Z\right) W_n \left(\frac{1}{n} Z'X\right) \xrightarrow{p} Q'WQ$$
and
$$\left(\frac{1}{n} X'Z\right) W_n \left(\frac{1}{\sqrt{n}} Z'e\right) \xrightarrow{d} Q'W \cdot N(0, \Omega).$$
We conclude:

Theorem 9.3.1 Asymptotic Distribution of GMM Estimator
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, V_\beta\right),$$
where
$$V_\beta = \left(Q'WQ\right)^{-1}\left(Q'W\Omega WQ\right)\left(Q'WQ\right)^{-1}.$$

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.

The optimal weight matrix $W_0$ is one which minimizes $V_\beta$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:
$$\hat\beta = \left(X'Z\,\Omega^{-1} Z'X\right)^{-1} X'Z\,\Omega^{-1} Z'y.$$
Thus we have

Theorem 9.3.2 Asymptotic Distribution of Efficient GMM Estimator
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \left(Q'\Omega^{-1}Q\right)^{-1}\right).$$

$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat\beta$ the efficient GMM estimator, as it has the same asymptotic distribution.

By "efficient" we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices $W_n$. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).

If it is known that $E\left(g_i(\beta)\right) = 0$, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than $\left(G'\Omega^{-1}G\right)^{-1}$, where $G = E\,\frac{\partial}{\partial\beta'} g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.

This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.

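The sandwich variance in Theorem 9.3.1 can be estimated by replacing $Q$ and $\Omega$ with sample moments. A minimal sketch under an invented simulated design follows; all names and the design are illustrative, and only the sandwich formula itself comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative IV data (k = 2, l = 3) and a first-step GMM estimate.
n = 500
z = rng.normal(size=(n, 3))
u = rng.normal(size=n)
x = np.column_stack([z[:, 0] + 0.5 * u + rng.normal(size=n),
                     z[:, 1] + rng.normal(size=n)])
y = x @ np.array([1.0, -0.5]) + u

w = np.linalg.inv(z.T @ z / n)
xz = x.T @ z
beta_hat = np.linalg.solve(xz @ w @ xz.T, xz @ w @ (z.T @ y))

# Sandwich variance of sqrt(n)(beta_hat - beta):
#   (Q'WQ)^(-1) Q'W Omega W Q (Q'WQ)^(-1), with Q and Omega replaced by sample analogs.
e_hat = y - x @ beta_hat
q_hat = z.T @ x / n                      # sample analog of Q = E[z x']
ze = z * e_hat[:, None]
omega_hat = ze.T @ ze / n                # sample analog of Omega = E[z z' e^2]
bread = np.linalg.inv(q_hat.T @ w @ q_hat)
v_hat = bread @ (q_hat.T @ w @ omega_hat @ w @ q_hat) @ bread
se = np.sqrt(np.diag(v_hat / n))         # standard errors for beta_hat itself
print(beta_hat, se)
```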
9.4 Estimation of the Efficient Weight Matrix

Given any weight matrix $W_n > 0$, the GMM estimator $\hat\beta$ is consistent yet inefficient. For example, we can set $W_n = I_\ell$. In the linear model, a better choice is $W_n = (Z'Z)^{-1}$. Given any such first-step estimator, we can define the residuals $\hat e_i = y_i - x_i'\hat\beta$ and moment equations $\hat g_i = z_i \hat e_i = g(y_i, z_i, x_i, \hat\beta)$. Construct
$$\overline{g}_n = g_n(\hat\beta) = \frac{1}{n}\sum_{i=1}^n \hat g_i,$$
define
$$\hat g_i^* = \hat g_i - \overline{g}_n,$$
and set
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i^* \hat g_i^{*\prime}\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i \hat g_i' - \overline{g}_n\overline{g}_n'\right)^{-1}. \qquad (9.4)$$
Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.

A common alternative choice is to set
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i \hat g_i'\right)^{-1},$$
which uses the uncentered moment conditions. Since $E g_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $E g_i \ne 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (9.4) above.

Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = (Z'Z)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residuals $\hat e_i = y_i - x_i'\hat\beta$. Then set $\hat g_i = z_i \hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$\hat\beta = \left(X'Z\left(\hat g'\hat g - n\,\overline{g}_n\overline{g}_n'\right)^{-1} Z'X\right)^{-1} X'Z\left(\hat g'\hat g - n\,\overline{g}_n\overline{g}_n'\right)^{-1} Z'y.$$
In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.

An estimator of the asymptotic variance of $\hat\beta$ can be seen from the above formula. Set
$$\hat V = n\left(X'Z\left(\hat g'\hat g - n\,\overline{g}_n\overline{g}_n'\right)^{-1} Z'X\right)^{-1}.$$
Asymptotic standard errors are given by the square roots of the diagonal elements of $\hat V$.

There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of $\beta$. The criterion function is then
$$J(\beta) = n \cdot g_n(\beta)'\left(\frac{1}{n}\sum_{i=1}^n g_i^*(\beta)\, g_i^*(\beta)'\right)^{-1} g_n(\beta),$$
where
$$g_i^*(\beta) = g_i(\beta) - g_n(\beta).$$
The $\hat\beta$ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).

The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.

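Here is a sketch of the two-step recipe just described, using the centered weight matrix: a first step with $W_n = (Z'Z)^{-1}$, followed by an efficient step. The simulated data and the helper function are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overidentified IV data (k = 2, l = 3).
n = 500
z = rng.normal(size=(n, 3))
u = rng.normal(size=n)
x = np.column_stack([z[:, 0] + 0.5 * u + rng.normal(size=n),
                     z[:, 1] + rng.normal(size=n)])
y = x @ np.array([1.0, -0.5]) + u

def gmm(y, x, z, w):
    """Linear GMM: (X'Z W Z'X)^(-1) X'Z W Z'y."""
    xz = x.T @ z
    return np.linalg.solve(xz @ w @ xz.T, xz @ w @ (z.T @ y))

# Step 1: first-step estimator with W = (Z'Z)^(-1).
beta1 = gmm(y, x, z, np.linalg.inv(z.T @ z))

# Step 2: centered efficient weight matrix from first-step moments g_i = z_i e_i.
e1 = y - x @ beta1
g = z * e1[:, None]                       # n x l matrix of moment contributions
g_bar = g.mean(axis=0)
w_eff = np.linalg.inv(g.T @ g / n - np.outer(g_bar, g_bar))

beta2 = gmm(y, x, z, w_eff)               # two-step efficient GMM estimate
print(beta1, beta2)
```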
9. and then ^ recomputed. Theorem 9. p where = E gig0 i and G=E n ^ d ! N 0. G0 1 1 G 1 . perhaps obtained by …rst ^ setting W n = I: Since the GMM estimator depends upon the …rst-stage estimator.5 ` GMM: The General Case In its most general form. this is all that is known. This estimator can be iterated if needed.1 Distribution of Nonlinear GMM Estimator Under general regularity conditions. often the weight matrix W n is updated. 1X gig0 ^ ^i n i=1 gng0 n ! 1 . Hansen (1982). @ g ( ): @ 0 i The variance of ^ may be estimated by b V ^0 = G^ X i 1 ^ G 1 where ^ =n and ^ G=n 1 1 gi gi 0 ^ ^ X @ g ( ^ ): @ 0 i i The general theory of GMM estimation and testing was exposited by L. Identi…cation requires l k = dim( ): The GMM estimator minimizes J( ) = n g n ( )0 W n g n ( ) where gn( ) = and n 1X gi( ) n i=1 n Wn = with g i = g i ( ~ ) constructed using a preliminary consistent estimator ~ . GMM applies whenever an economic or statistical model implies the 1 moment condition E (g i ( )) = 0: Often.5.6 Over-Identi…cation Test Overidenti…ed models (` > k) are special in the sense that there may not be a parameter value such that the moment condition 138 . 9.

9.6 Over-Identification Test

Overidentified models ($\ell > k$) are special in the sense that there may not be a parameter value $\beta$ such that the moment condition
$$E g(y_i, z_i, x_i, \beta) = 0$$
holds. Thus the model (the overidentifying restrictions) is testable.

For example, take the linear model $y_i = \beta_1'x_{1i} + \beta_2'x_{2i} + e_i$ with $E(x_{1i}e_i) = 0$ and $E(x_{2i}e_i) = 0$. It is possible that $\beta_2 = 0$, so that the linear equation may be written as $y_i = \beta_1'x_{1i} + e_i$. However, it is possible that $\beta_2 \ne 0$, and in this case it would be impossible to find a value of $\beta_1$ so that both $E\left(x_{1i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ and $E\left(x_{2i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.

Note that $\overline{g}_n \xrightarrow{p} E g_i$, and thus $\overline{g}_n$ can be used to assess whether or not the hypothesis that $E g_i = 0$ is true or not. The criterion function at the parameter estimates is
$$J = n\,\overline{g}_n'\, W_n\, \overline{g}_n = n^2\,\overline{g}_n'\left(\hat g'\hat g - n\,\overline{g}_n\overline{g}_n'\right)^{-1}\overline{g}_n.$$
$J$ is a quadratic form in $\overline{g}_n$, and is thus a natural test statistic for $H_0: E g_i = 0$.

Theorem 9.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
$$J = J(\hat\beta) \xrightarrow{d} \chi^2_{\ell - k}.$$

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.

The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic $J$ exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic $J$ whenever GMM is the estimation method. When over-identified models are estimated by GMM, it is customary to report the $J$ statistic as a general test of model adequacy.

9.7 Hypothesis Testing: The Distance Statistic

We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses. If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).

For a given weight matrix $W_n$, the GMM criterion function is
$$J(\beta) = n \cdot g_n(\beta)'\, W_n\, g_n(\beta).$$
For $h : \mathbb{R}^k \to \mathbb{R}^r$, the hypothesis is
$$H_0 : h(\beta) = 0.$$

The estimates under $H_1$ are
$$\hat\beta = \operatorname*{argmin}_{\beta} J(\beta)$$
and those under $H_0$ are
$$\tilde\beta = \operatorname*{argmin}_{h(\beta) = 0} J(\beta).$$
The two minimizing criterion functions are $J(\hat\beta)$ and $J(\tilde\beta)$. The GMM distance statistic is the difference
$$D = J(\tilde\beta) - J(\hat\beta).$$

Proposition 9.7.1 If the same weight matrix $W_n$ is used for both null and alternative,

1. $D \ge 0$,

2. $D \xrightarrow{d} \chi^2_r$,

3. if $h$ is linear in $\beta$, then $D$ equals the Wald statistic.

If $h$ is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the $D$ statistic appears to have quite good sampling properties, and is the preferred test statistic.

Newey and West (1987) suggested to use the same weight matrix $W_n$ for both null and alternative, as this ensures that $D \ge 0$. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.

This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.

9.8 Conditional Moment Restrictions

In many contexts, the model implies more than an unconditional moment restriction of the form $E g_i(\beta) = 0$. It implies a conditional moment restriction of the form
$$E\left(e_i(\beta) \mid z_i\right) = 0,$$
where $e_i(\beta)$ is some $s \times 1$ function of the observation and the parameters. In many cases, $s = 1$.

It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.

Our linear model $y_i = x_i'\beta + e_i$ with instruments $z_i$ falls into this class under the stronger assumption $E(e_i \mid z_i) = 0$. Then $e_i(\beta) = y_i - x_i'\beta$.

It is also helpful to realize that conventional regression models also fall into this class, except that in this case $x_i = z_i$. For example, in linear regression, $e_i(\beta) = y_i - x_i'\beta$, while in a nonlinear regression model $e_i(\beta) = y_i - g(x_i, \beta)$. In a joint model of the conditional mean and variance,
$$e_i(\beta, \gamma) = \begin{pmatrix} y_i - x_i'\beta \\ \left(y_i - x_i'\beta\right)^2 - f(x_i)'\gamma \end{pmatrix}.$$
Here $s = 2$.

Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any $\ell \times 1$ function $\phi(x_i, \beta)$, we can set $g_i(\beta) = \phi(x_i, \beta)\, e_i(\beta)$, which satisfies $E g_i(\beta) = 0$ and hence defines a GMM estimator. The obvious problem is that the class of functions $\phi$ is infinite. Which should be selected?

This is equivalent to the problem of selection of the best instruments. If $x_i \in \mathbb{R}$ is a valid instrument satisfying $E(e_i \mid x_i) = 0$, then $x_i$, $x_i^2$, $x_i^3$, etc., are all valid instruments. Which should be used?

One solution is to construct an infinite list of potent instruments, and then use the first $k$ instruments. How is $k$ to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).

Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case $s = 1$. Let
$$R_i = E\left(\frac{\partial}{\partial\beta}\, e_i(\beta) \mid z_i\right)$$
and
$$\sigma_i^2 = E\left(e_i(\beta)^2 \mid z_i\right).$$
Then the "optimal instrument" is
$$A_i = -\sigma_i^{-2} R_i,$$
so the optimal moment is
$$g_i(\beta) = A_i\, e_i(\beta).$$
Setting $g_i(\beta)$ to be this choice (which is $k \times 1$, so is just-identified) yields the best GMM estimator possible.

In practice, $A_i$ is unknown, but its form does help us think about construction of optimal instruments.

In the linear model $e_i(\beta) = y_i - x_i'\beta$, note that
$$R_i = -E(x_i \mid z_i)$$
and
$$\sigma_i^2 = E\left(e_i^2 \mid z_i\right),$$
so
$$A_i = \sigma_i^{-2}\, E(x_i \mid z_i).$$
In the case of linear regression, $x_i = z_i$, so $A_i = \sigma_i^{-2} z_i$. Hence efficient GMM is GLS, as we discussed earlier in the course.

In the case of endogenous variables, note that the efficient instrument $A_i$ involves the estimation of the conditional mean of $x_i$ given $z_i$. In other words, to get the best instrument for $x_i$, we need the best conditional mean model for $x_i$ given $z_i$, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of $e_i$. This is the same as the GLS estimator; namely, that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.

9.9 Bootstrap GMM Inference

Let $\hat\beta$ be the 2SLS or GMM estimator of $\beta$. Using the EDF of $(y_i, z_i, x_i)$, we can apply the bootstrap methods discussed in Chapter 8 to compute estimates of the bias and variance of $\hat\beta$, and construct confidence intervals for $\beta$, identically as in the regression model. However, caution should be applied when interpreting such results.

A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied to the $J$ test will yield the wrong answer.

The problem is that in the sample, $\hat\beta$ is the "true" value and yet $g_n(\hat\beta) \ne 0$. Thus, according to random variables $(y_i^*, z_i^*, x_i^*)$ drawn from the EDF $F_n$,
$$E\left(g_i^*(\hat\beta)\right) = g_n(\hat\beta) \ne 0.$$
This means that $(y_i^*, z_i^*, x_i^*)$ do not satisfy the same moment conditions as the population distribution.

A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample $(y^*, Z^*, X^*)$, define the bootstrap GMM criterion
$$J^*(\beta) = n \cdot \left(g_n^*(\beta) - g_n(\hat\beta)\right)'\, W_n^*\, \left(g_n^*(\beta) - g_n(\hat\beta)\right),$$
where $g_n(\hat\beta)$ is from the in-sample data, not from the bootstrap data.

Let $\hat\beta^*$ minimize $J^*(\beta)$, and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is
$$\hat\beta^* = \left(X^{*\prime}Z^*\, W_n^*\, Z^{*\prime}X^*\right)^{-1} X^{*\prime}Z^*\, W_n^*\left(Z^{*\prime}y^* - Z'\hat e\right),$$
where $\hat e = y - X\hat\beta$ are the in-sample residuals. The bootstrap $J$ statistic is $J^*(\hat\beta^*)$.

Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities $\hat p_i$ described in Chapter 10. Since $\sum_{i=1}^n \hat p_i\, g_i(\hat\beta) = 0$, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments is needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
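A minimal sketch of one Hall-Horowitz recentered bootstrap draw in the linear model follows. The simulated design and the helper function are hypothetical; the point being illustrated is the recentering of the bootstrap moment by $g_n(\hat\beta)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overidentified IV data and a first-step GMM fit.
n = 500
z = rng.normal(size=(n, 3))
u = rng.normal(size=n)
x = np.column_stack([z[:, 0] + 0.5 * u + rng.normal(size=n),
                     z[:, 1] + rng.normal(size=n)])
y = x @ np.array([1.0, -0.5]) + u

def gmm(y, x, z, w, offset=0.0):
    """Minimize n (g_n(b) - offset)' W (g_n(b) - offset) in the linear model."""
    xz = x.T @ z
    return np.linalg.solve(xz @ w @ xz.T, xz @ w @ (z.T @ y - n * offset))

w = np.linalg.inv(z.T @ z / n)
beta_hat = gmm(y, x, z, w)
e_hat = y - x @ beta_hat
g_bar = (z * e_hat[:, None]).mean(axis=0)   # g_n(beta_hat); nonzero when l > k

# One nonparametric bootstrap draw of the observations (y, x, z).
idx = rng.integers(0, n, size=n)
yb, xb, zb = y[idx], x[idx], z[idx]
wb = np.linalg.inv(zb.T @ zb / n)           # bootstrap first-step weight matrix

# Recentered bootstrap estimator: the moment is g_n*(b) - g_n(beta_hat), so
# b* = (X*'Z* W* Z*'X*)^(-1) X*'Z* W* (Z*'y* - n g_bar).
beta_star = gmm(yb, xb, zb, wb, offset=g_bar)
print(beta_hat, beta_star)
```

Repeating the bootstrap block B times and collecting the recentered estimates (or the recentered $J^*$ statistics) gives the bootstrap distributions used for inference.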

Exercises

Exercise 9.1 Take the model
$$y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0,$$
$$e_i^2 = z_i'\gamma + \eta_i, \qquad E(z_i \eta_i) = 0.$$
Find the method of moments estimators $(\hat\beta, \hat\gamma)$ for $(\beta, \gamma)$.

Exercise 9.2 Take the single equation
$$y = X\beta + e, \qquad E(e \mid Z) = 0.$$
Assume $E\left(e_i^2 \mid z_i\right) = \sigma^2$. Show that if $\hat\beta$ is estimated by GMM with weight matrix $W_n = (Z'Z)^{-1}$, then
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \sigma^2\left(Q'M^{-1}Q\right)^{-1}\right),$$
where $Q = E(z_i x_i')$ and $M = E(z_i z_i')$.

Exercise 9.3 Take the model $y_i = x_i'\beta + e_i$ with $E(z_i e_i) = 0$. Let $\hat e_i = y_i - x_i'\hat\beta$ where $\hat\beta$ is consistent for $\beta$ (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n z_i z_i' \hat e_i^2\right)^{-1}.$$
Show that $W_n \xrightarrow{p} \Omega^{-1}$ where $\Omega = E\left(z_i z_i' e_i^2\right)$.

Exercise 9.4 In the linear model estimated by GMM with general weight matrix $W$, the asymptotic variance of $\hat\beta_{GMM}$ is
$$V = \left(Q'WQ\right)^{-1} Q'W\Omega WQ\left(Q'WQ\right)^{-1}.$$

(a) Let $V_0$ be this matrix when $W = \Omega^{-1}$. Show that $V_0 = \left(Q'\Omega^{-1}Q\right)^{-1}$.

(b) We want to show that for any $W$, $V - V_0$ is positive semi-definite (for then $V_0$ is the smallest possible covariance matrix and $W = \Omega^{-1}$ is the efficient weight matrix). To do this, start by finding matrices $A$ and $B$ such that $V = A'\Omega A$ and $V_0 = B'\Omega B$.

(c) Show that $B'\Omega A = B'\Omega B$ and therefore that $B'\Omega(A - B) = 0$.

(d) Use the expressions $V = A'\Omega A$, $A = B + (A - B)$, and $B'\Omega(A - B) = 0$ to show that $V \ge V_0$.

Exercise 9.5 The equation of interest is
$$y_i = g(x_i, \beta) + e_i, \qquad E(z_i e_i) = 0.$$
The observed data is $(y_i, z_i, x_i)$; $z_i$ is $\ell \times 1$ and $\beta$ is $k \times 1$, with $\ell \ge k$. Show how to construct an efficient GMM estimator for $\beta$.

Exercise 9.6 In the linear model $y = X\beta + e$ with $E(x_i e_i) = 0$, a Generalized Method of Moments (GMM) criterion function for $\beta$ is defined as
$$J_n(\beta) = \frac{1}{n}\left(y - X\beta\right)'X\,\hat\Omega_n^{-1}\,X'\left(y - X\beta\right), \qquad (9.5)$$
where $\hat\Omega_n = \frac{1}{n}\sum_{i=1}^n x_i x_i' \hat e_i^2$, $\hat e_i = y_i - x_i'\hat\beta$ are the OLS residuals, and $\hat\beta = (X'X)^{-1}X'y$ is LS. The GMM estimator of $\beta$, subject to the restriction $h(\beta) = 0$, is defined as
$$\tilde\beta = \operatorname*{argmin}_{h(\beta) = 0} J_n(\beta).$$
The GMM test statistic (the distance statistic) of the hypothesis $h(\beta) = 0$ is
$$D = J_n(\tilde\beta) = \min_{h(\beta) = 0} J_n(\beta). \qquad (9.6)$$

(a) Show that you can rewrite $J_n(\beta)$ in (9.5) as
$$J_n(\beta) = \left(\beta - \hat\beta\right)'\hat V_n^{-1}\left(\beta - \hat\beta\right),$$
where
$$\hat V_n = \left(X'X\right)^{-1}\left(\sum_{i=1}^n x_i x_i' \hat e_i^2\right)\left(X'X\right)^{-1}.$$

(b) Now focus on linear restrictions: $h(\beta) = R'\beta - r$. Thus
$$\tilde\beta = \operatorname*{argmin}_{R'\beta - r = 0} J_n(\beta)$$
and hence $R'\tilde\beta = r$.

(c) Show that if $R'\beta = r$, then
$$\sqrt{n}\left(\tilde\beta - \beta\right) \xrightarrow{d} N\left(0, V_R\right),$$
where
$$V_R = V - VR\left(R'VR\right)^{-1}R'V.$$

(d) Show that in this setting, the distance statistic $D$ in (9.6) equals the Wald statistic.

Exercise 9.7 Take the linear model
$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0,$$
and consider the GMM estimator $\hat\beta$ of $\beta$. Let
$$J_n = n\, g_n(\hat\beta)'\,\hat\Omega^{-1}\, g_n(\hat\beta)$$
denote the test of overidentifying restrictions. Show that $J_n \xrightarrow{d} \chi^2_{\ell - k}$ as $n \to \infty$ by demonstrating each of the following: