Bruce E. Hansen
© 2000, 2011¹
University of Wisconsin
www.ssc.wisc.edu/~bhansen
This Revision: January 13, 2011
Comments Welcome

¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
1 Introduction 1
1.1 What is Econometrics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The Probability Approach to Econometrics . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Econometric Terms and Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Observational Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Standard Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Sources for Economic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Econometric Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.8 Reading the Manuscript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Moment Estimation 8
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Population and Sample Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Sample Mean is Unbiased . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5 Convergence in Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Weak Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.7 Vector-Valued Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.8 Convergence in Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.9 Functions of Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.10 Delta Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.11 Stochastic Order Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.12 Uniform Stochastic Bounds* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.13 Semiparametric Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.14 Expectation* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.15 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3 Conditional Expectation and Projection 28
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 The Distribution of Wages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Conditional Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Conditional Expectation Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.5 Continuous Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.6 Law of Iterated Expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.7 Monotonicity of Conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.8 CEF Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.9 Best Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.10 Conditional Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.11 Homoskedasticity and Heteroskedasticity . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.12 Regression Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.13 Linear CEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.14 Linear CEF with Nonlinear Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.15 Linear CEF with Dummy Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.16 Best Linear Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.17 Linear Predictor Error Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.18 Regression Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.19 Regression Sub-Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.20 Coefficient Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.21 Omitted Variable Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.22 Best Linear Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.23 Normal Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.24 Regression to the Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.25 Reverse Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.26 Limitations of the Best Linear Predictor . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.27 Random Coefficient Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.28 Causal Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.29 Existence and Uniqueness of the Conditional Expectation* . . . . . . . . . . . . . . 65
3.30 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4 The Algebra of Least Squares 71
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.2 Least Squares Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3 Solving for Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4 Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.5 Least Squares Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.6 Model in Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.7 Projection Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.8 Orthogonal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.9 Regression Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.10 Residual Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.11 Prediction Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.12 Inﬂuential Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.13 Measures of Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.14 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5 Least Squares Regression 91
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.2 Mean of Least-Squares Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.3 Variance of Least Squares Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.4 Gauss-Markov Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.5 Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.6 Estimation of Error Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.7 Covariance Matrix Estimation Under Homoskedasticity . . . . . . . . . . . . . . . . 98
5.8 Covariance Matrix Estimation Under Heteroskedasticity . . . . . . . . . . . . . . . . 99
5.9 Standard Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.10 Multicollinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.11 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6 Asymptotic Theory for Least Squares 109
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2 Consistency of Least-Squares Estimation . . . . . . . . . . . . . . . . . . . . . . . . 109
6.3 Consistency of Sample Variance Estimators . . . . . . . . . . . . . . . . . . . . . . . 112
6.4 Asymptotic Normality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.5 Joint Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.6 Uniformly Consistent Residuals* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.7 Asymptotic Leverage* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.8 Consistent Covariance Matrix Estimation . . . . . . . . . . . . . . . . . . . . . . . . 120
6.9 Functions of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.10 Asymptotic Standard Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.11 t statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.12 Conﬁdence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.13 Regression Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
6.14 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.15 Conﬁdence Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.16 Semiparametric Efficiency in the Projection Model . . . . . . . . . . . . . . . . . . 128
6.17 Semiparametric Efficiency in the Homoskedastic Regression Model* . . . . . . . . . 130
6.18 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7 Restricted Estimation 136
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
7.2 Constrained Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.3 Exclusion Restriction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.4 Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.5 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.6 Asymptotic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.7 Efficient Minimum Distance Estimator . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.8 Exclusion Restriction Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.9 Variance and Standard Error Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.10 Nonlinear Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.11 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8 Testing 147
8.1 t tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.2 t-ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.3 Wald Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.4 Minimum Distance Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.5 F Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.6 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.7 Problems with Tests of Non-Linear Hypotheses . . . . . . . . . . . . . . . . . . . . 152
8.8 Monte Carlo Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
8.9 Estimating a Wage Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9 Additional Regression Topics 163
9.1 Generalized Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Testing for Heteroskedasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
9.3 Forecast Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
9.4 Non-Linear Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
9.5 Least Absolute Deviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.6 Quantile Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
9.7 Testing for Omitted Non-Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.8 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10 The Bootstrap 179
10.1 Deﬁnition of the Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.2 The Empirical Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.3 Nonparametric Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.4 Bootstrap Estimation of Bias and Variance . . . . . . . . . . . . . . . . . . . . . . . 181
10.5 Percentile Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.6 Percentile-t Equal-Tailed Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.7 Symmetric Percentile-t Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.8 Asymptotic Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
10.9 One-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10.10 Symmetric Two-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
10.11 Percentile Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.12 Bootstrap Methods for Regression Models . . . . . . . . . . . . . . . . . . . . . . . 190
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
11 Generalized Method of Moments 193
11.1 Overidentiﬁed Linear Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
11.2 GMM Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
11.3 Distribution of GMM Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
11.4 Estimation of the Efficient Weight Matrix . . . . . . . . . . . . . . . . . . . . . . . 196
11.5 GMM: The General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
11.6 Over-Identification Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
11.7 Hypothesis Testing: The Distance Statistic . . . . . . . . . . . . . . . . . . . . . . . 198
11.8 Conditional Moment Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
11.9 Bootstrap GMM Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
12 Empirical Likelihood 204
12.1 Non-Parametric Likelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
12.2 Asymptotic Distribution of EL Estimator . . . . . . . . . . . . . . . . . . . . . . . . 206
12.3 Overidentifying Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.4 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
12.5 Numerical Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
13 Endogeneity 211
13.1 Instrumental Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
13.2 Reduced Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
13.3 Identiﬁcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
13.4 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
13.5 Special Cases: IV and 2SLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
13.6 Bekker Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
13.7 Identiﬁcation Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
14 Univariate Time Series 221
14.1 Stationarity and Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
14.2 Autoregressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
14.3 Stationarity of AR(1) Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
14.4 Lag Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
14.5 Stationarity of AR(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
14.6 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
14.7 Asymptotic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
14.8 Bootstrap for Autoregressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
14.9 Trend Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
14.10 Testing for Omitted Serial Correlation . . . . . . . . . . . . . . . . . . . . . . . . . 228
14.11 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
14.12 Autoregressive Unit Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
15 Multivariate Time Series 231
15.1 Vector Autoregressions (VARs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
15.2 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
15.3 Restricted VARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
15.4 Single Equation from a VAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
15.5 Testing for Omitted Serial Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . 233
15.6 Selection of Lag Length in a VAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
15.7 Granger Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
15.8 Cointegration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
15.9 Cointegrated VARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
16 Limited Dependent Variables 237
16.1 Binary Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
16.2 Count Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
16.3 Censored Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
16.4 Sample Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
17 Panel Data 242
17.1 Individual-Effects Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
17.2 Fixed Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
17.3 Dynamic Panel Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
18 Nonparametrics 245
18.1 Kernel Density Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
18.2 Asymptotic MSE for Kernel Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . 247
A Matrix Algebra 250
A.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
A.2 Matrix Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
A.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
A.4 Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
A.5 Rank and Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
A.6 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
A.7 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
A.8 Positive Deﬁniteness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
A.9 Matrix Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
A.10 Kronecker Products and the Vec Operator . . . . . . . . . . . . . . . . . . . . . . . . 257
A.11 Vector and Matrix Norms and Inequalities . . . . . . . . . . . . . . . . . . . . . . . . 257
B Probability 260
B.1 Foundations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
B.2 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
B.3 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
B.4 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
B.5 Common Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
B.6 Multivariate Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
B.7 Conditional Distributions and Expectation . . . . . . . . . . . . . . . . . . . . . . . . 268
B.8 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
B.9 Normal and Related Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
B.10 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
B.11 Maximum Likelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
C Numerical Optimization 282
C.1 Grid Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
C.2 Gradient Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
C.3 Derivative-Free Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Preface
This book is intended to serve as the textbook for a first-year graduate course in econometrics.
It can be used as a standalone text, or be used as a supplement to another text.
Students are assumed to have an understanding of multivariate calculus, probability theory,
linear algebra, and mathematical statistics. A prior course in undergraduate econometrics would
be helpful, but not required.
For reference, some of the basic tools of matrix algebra, probability, and statistics are reviewed
in the Appendix.
For students wishing to deepen their knowledge of matrix algebra in relation to their study of
econometrics, I recommend Matrix Algebra by Abadir and Magnus (2005).
An excellent introduction to probability and statistics is Statistical Inference by Casella and
Berger (2002). For those wanting a deeper foundation in probability, I recommend Ash (1972)
or Billingsley (1995). For more advanced statistical theory, I recommend Lehmann and Casella
(1998), van der Vaart (1998), Shao (2003), and Lehmann and Romano (2005).
For further study in econometrics beyond this text, I recommend Davidson (1994) for asymptotic theory, Hamilton (1994) for time-series methods, Wooldridge (2002) for panel data and discrete
response models, and Li and Racine (2007) for nonparametrics and semiparametric econometrics.
Beyond these texts, the Handbook of Econometrics series provides advanced summaries of contemporary econometric methods and theory.
As this is a manuscript in progress, some parts are quite incomplete, in particular the later
sections of the manuscript. Hopefully one day these sections will be ﬂeshed out and completed in
more detail.
I would like to thank Ying-Ying Lee for providing research assistance in preparing some of the
empirical examples presented in the text.
Chapter 1
Introduction
1.1 What is Econometrics?
The term “econometrics” is believed to have been crafted by Ragnar Frisch (1895-1973) of Norway, one of the three principal founders of the Econometric Society, first editor of the journal Econometrica, and co-winner of the first Nobel Memorial Prize in Economic Sciences in 1969. It
is therefore ﬁtting that we turn to Frisch’s own words in the introduction to the ﬁrst issue of
Econometrica for an explanation of the discipline.
A word of explanation regarding the term econometrics may be in order. Its definition is implied in the statement of the scope of the [Econometric] Society, in Section I of the Constitution, which reads: “The Econometric Society is an international society for the advancement of economic theory in its relation to statistics and mathematics.... Its main object shall be to promote studies that aim at a unification of the theoretical-quantitative and the empirical-quantitative approach to economic problems....”
But there are several aspects of the quantitative approach to economics, and no single
one of these aspects, taken by itself, should be confounded with econometrics. Thus,
econometrics is by no means the same as economic statistics. Nor is it identical with
what we call general economic theory, although a considerable portion of this theory has
a definitely quantitative character. Nor should econometrics be taken as synonymous
with the application of mathematics to economics. Experience has shown that each
of these three viewpoints, that of statistics, economic theory, and mathematics, is
a necessary, but not by itself a sufficient, condition for a real understanding of the
quantitative relations in modern economic life. It is the uniﬁcation of all three that is
powerful. And it is this uniﬁcation that constitutes econometrics.
Ragnar Frisch, Econometrica, (1933), 1, pp. 1-2.
This deﬁnition remains valid today, although some terms have evolved somewhat in their usage.
Today, we would say that econometrics is the uniﬁed study of economic models, mathematical
statistics, and economic data.
Within the ﬁeld of econometrics there are subdivisions and specializations. Econometric theory
concerns the development of tools and methods, and the study of the properties of econometric
methods. Applied econometrics is a term describing the development of quantitative economic
models and the application of econometric methods to these models using economic data.
1.2 The Probability Approach to Econometrics
The unifying methodology of modern econometrics was articulated by Trygve Haavelmo (1911-1999) of Norway, winner of the 1989 Nobel Memorial Prize in Economic Sciences, in his seminal
paper “The probability approach in econometrics”, Econometrica (1944). Haavelmo argued that
quantitative economic models must necessarily be probability models (by which today we would
mean stochastic). Deterministic models are blatantly inconsistent with observed economic quantities, and it is incoherent to apply deterministic models to non-deterministic data. Economic
models should be explicitly designed to incorporate randomness; stochastic errors should not be
simply added to deterministic models to make them random. Once we acknowledge that an economic model is a probability model, it follows naturally that the best way to quantify, estimate, and conduct inferences about the economy is through the powerful theory of mathematical statistics. The appropriate method for a quantitative economic analysis follows from the probabilistic construction of the economic model.
Haavelmo’s probability approach was quickly embraced by the economics profession. Today no
quantitative work in economics shuns its fundamental vision.
While all economists embrace the probability approach, there has been some evolution in its
implementation.
The structural approach is the closest to Haavelmo’s original idea. A probabilistic economic
model is speciﬁed, and the quantitative analysis performed under the assumption that the economic
model is correctly speciﬁed. Researchers often describe this as “taking their model seriously.” The
structural approach typically leads to likelihood-based analysis, including maximum likelihood and
Bayesian estimation.
A criticism of the structural approach is that it is misleading to treat an economic model
as correctly speciﬁed. Rather, it is more accurate to view a model as a useful abstraction or
approximation. In this case, how should we interpret structural econometric analysis? The quasi-structural approach to inference views a structural economic model as an approximation rather than the truth. This theory has led to the concepts of the pseudo-true value (the parameter value defined by the estimation problem), the quasi-likelihood function, quasi-MLE, and quasi-likelihood inference.
Closely related is the semiparametric approach. A probabilistic economic model is partially
speciﬁed but some features are left unspeciﬁed. This approach typically leads to estimation methods
such as least-squares and the Generalized Method of Moments. The semiparametric approach
dominates contemporary econometrics, and is the main focus of this textbook.
Another branch of quantitative structural economics is the calibration approach. Similar to the quasi-structural approach, the calibration approach interprets structural models as approximations and hence inherently false. The difference is that the calibrationist literature rejects mathematical statistics as inappropriate for approximate models, and instead selects parameters by matching model and data moments using non-statistical ad hoc¹ methods.
1.3 Econometric Terms and Notation
In a typical application, an econometrician has a set of repeated measurements on a set of variables. For example, in a labor application the variables could include weekly earnings, educational
attainment, age, and other descriptive characteristics. We call this information the data, dataset,
or sample.
We use the term observations to refer to the distinct repeated measurements on the variables.
An individual observation often corresponds to a speciﬁc economic unit, such as a person, household,
corporation, ﬁrm, organization, country, state, city or other geographical region. An individual
observation could also be a measurement at a point in time, such as quarterly GDP or a daily
interest rate.
Economists typically denote variables by the italicized roman characters y, x, and/or z. The
convention in econometrics is to use the character y to denote the variable to be explained, while
¹ Ad hoc means “for this purpose” — a method designed for a specific problem — and not based on a generalizable principle.
the characters x and z are used to denote the conditioning (explaining) variables.
Following mathematical convention, real numbers (elements of the real line $\mathbb{R}$) are written using lower case italics such as $y$, and vectors (elements of $\mathbb{R}^k$) by lower case bold italics such as $\boldsymbol{x}$, e.g.
$$
\boldsymbol{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.
$$
Upper case bold italics such as X are used for matrices.
We typically denote the number of observations by the natural number $n$, and subscript the variables by the index $i$ to denote the individual observation, e.g. $y_i$, $\boldsymbol{x}_i$ and $\boldsymbol{z}_i$. In some contexts we use indices other than $i$, such as in time-series applications where the index $t$ is common, and in panel studies we typically use the double index $it$ to refer to individual $i$ at a time period $t$. The $i$'th observation is the set $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$.
It is proper mathematical practice to use upper case $X$ for random variables and lower case $x$ for realizations or specific values. This practice is not commonly followed in econometrics because instead we use upper case to denote matrices. Thus the notation $y_i$ will in some places refer to a random variable, and in other places a specific realization. Hopefully there will be no confusion as the use should be evident from the context.
As we mentioned before, ideally each observation consists of a set of measurements on the
list of variables. In practice it is common to ﬁnd that some variables are not measured for some
observations, and in these cases we describe these variables or observations as unobserved or
missing.
We typically use Greek letters such as $\beta$, $\theta$ and $\sigma^2$ to denote unknown parameters of an econometric model, and will use boldface, e.g. $\boldsymbol{\beta}$ or $\boldsymbol{\theta}$, when these are vector-valued. Estimates are typically denoted by putting a hat "^", tilde "~" or bar " ¯ " over the corresponding letter, e.g. $\hat{\beta}$ and $\tilde{\beta}$ are estimates of $\beta$.

The covariance matrix of an econometric estimator will typically be written using the capital boldface $\boldsymbol{V}$, often with a subscript to denote the estimator, e.g. $\boldsymbol{V}_{\hat{\boldsymbol{\beta}}} = \mathrm{var}\left(\sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right)\right)$ as the covariance matrix for $\sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right)$. Hopefully without causing confusion, we will use the notation $\boldsymbol{V}_{\boldsymbol{\beta}} = \mathrm{avar}(\hat{\boldsymbol{\beta}})$ to denote the asymptotic covariance matrix of $\sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right)$ (the variance of the asymptotic distribution). Estimates will be denoted by appending hats or tildes, e.g. $\hat{\boldsymbol{V}}_{\boldsymbol{\beta}}$ is an estimate of $\boldsymbol{V}_{\boldsymbol{\beta}}$.
1.4 Observational Data
A common econometric question is to quantify the impact of one set of variables on another
variable. For example, a concern in labor economics is the returns to schooling — the change in
earnings induced by increasing a worker’s education, holding other variables constant. Another
issue of interest is the earnings gap between men and women.
Ideally, we would use experimental data to answer these questions. To measure the returns to schooling, an experiment might randomly divide children into groups, mandate different levels of education to the different groups, and then follow the children's wage path after they mature and enter the labor force. The differences between the groups would be direct measurements of the effects of different levels of education. However, experiments such as this would be widely condemned as immoral! Consequently, we see few non-laboratory experimental data sets in economics.
Instead, most economic data is observational. To continue the above example, through data collection we can record the level of a person's education and their wage. With such data we can measure the joint distribution of these variables, and assess the joint dependence. But from observational data it is difficult to infer causality, as we are not able to manipulate one variable to see the direct effect on the other. For example, a person's level of education is (at least partially) determined by that person's choices. These factors are likely to be affected by their personal abilities and attitudes towards work. The fact that a person is highly educated suggests a high level of ability, which suggests a high relative wage. This is an alternative explanation for an observed positive correlation between educational levels and wages. High ability individuals do better in school, and therefore choose to attain higher levels of education, and their high ability is the fundamental reason for their high wages. The point is that multiple explanations are consistent with a positive correlation between schooling levels and wages. Knowledge of the joint distribution alone may not be able to distinguish between these explanations.
Most economic data sets are observational, not experimental. This means that
all variables must be treated as random and possibly jointly determined.
This discussion means that it is difficult to infer causality from observational data alone. Causal
inference requires identiﬁcation, and this is based on strong assumptions. We will return to a
discussion of some of these issues in Chapter 13.
1.5 Standard Data Structures
There are three major types of economic data sets: cross-sectional, time-series, and panel. They are distinguished by the dependence structure across observations.
Cross-sectional data sets have one observation per individual. Surveys are a typical source for cross-sectional data. In typical applications, the individuals surveyed are persons, households, firms or other economic agents. In many contemporary econometric cross-section studies the sample size n is quite large. It is conventional to assume that cross-sectional observations are mutually independent. Most of this text is devoted to the study of cross-section data.
Time-series data are indexed by time. Typical examples include macroeconomic aggregates, prices and interest rates. This type of data is characterized by serial dependence so the random sampling assumption is inappropriate. Most aggregate economic data is only available at a low frequency (annual, quarterly or perhaps monthly) so the sample size is typically much smaller than in cross-section studies. The exception is financial data where data are available at a high frequency (weekly, daily, hourly, or tick-by-tick) so sample sizes can be quite large.
Panel data combines elements of cross-section and time-series. These data sets consist of a set
of individuals (typically persons, households, or corporations) surveyed repeatedly over time. The
common modeling assumption is that the individuals are mutually independent of one another,
but a given individual’s observations are mutually dependent. This is a modiﬁed random sampling
environment.
Data Structures
• Cross-section
• Time-series
• Panel
Some contemporary econometric applications combine elements of cross-section, time-series, and panel data modeling. These include models of spatial correlation and clustering.
As we mentioned above, most of this text will be devoted to cross-sectional data under the
assumption of mutually independent observations. By mutual independence we mean that the $i$'th observation $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ is independent of the $j$'th observation $(y_j, \boldsymbol{x}_j, \boldsymbol{z}_j)$ for $i \neq j$. (Sometimes the label "independent" is misconstrued. It is a statement about the relationship between observations $i$ and $j$, not a statement about the relationship between $y_i$ and $\boldsymbol{x}_i$ and/or $\boldsymbol{z}_i$.)
Furthermore, if the data is randomly gathered, it is reasonable to model each observation as
a random draw from the same probability distribution. In this case we say that the data are
independent and identically distributed or iid. We call this a random sample. For most of
this text we will assume that our observations come from a random sample.
Definition 1.5.1 The observations $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ are a random sample if they are mutually independent and identically distributed (iid) across $i = 1, \ldots, n$.
In the random sampling framework, we think of an individual observation $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ as a realization from a joint probability distribution $F(y, \boldsymbol{x}, \boldsymbol{z})$ which we call the population. This "population" is infinitely large. This abstraction can be a source of confusion as it does not correspond to a physical population in the real world. The distribution $F$ is unknown, and the goal of statistical inference is to learn about features of $F$ from the sample. The assumption of random sampling provides the mathematical foundation for treating economic statistics with the tools of mathematical statistics.
The random sampling framework was a major intellectual breakthrough of the late 19th century, allowing the application of mathematical statistics to the social sciences. Before this conceptual development, methods from mathematical statistics had not been applied to economic data as they were viewed as inappropriate. The random sampling framework enabled economic samples to be viewed as homogeneous and random, a necessary precondition for the application of statistical methods.
1.6 Sources for Economic Data
Fortunately for economists, the internet provides a convenient forum for dissemination of economic data. Many large-scale economic datasets are available without charge from governmental agencies. An excellent starting point is the Resources for Economists Data Links, available at rfe.org. From this site you can find almost every publicly available economic data set. Some specific data sources of interest include
• Bureau of Labor Statistics
• US Census
• Current Population Survey
• Survey of Income and Program Participation
• Panel Study of Income Dynamics
• Federal Reserve System (Board of Governors and regional banks)
• National Bureau of Economic Research
• U.S. Bureau of Economic Analysis
• CompuStat
• International Financial Statistics
Another good source of data is from authors of published empirical studies. Most journals
in economics require authors of published papers to make their datasets generally available. For
example, in its instructions for submission, Econometrica states:
Econometrica has the policy that all empirical, experimental and simulation results must
be replicable. Therefore, authors of accepted papers must submit data sets, programs,
and information on empirical analysis, experiments and simulations that are needed for
replication and some limited sensitivity analysis.
The American Economic Review states:
All data used in analysis must be made available to any researcher for purposes of
replication.
The Journal of Political Economy states:
It is the policy of the Journal of Political Economy to publish papers only if the data
used in the analysis are clearly and precisely documented and are readily available to
any researcher for purposes of replication.
If you are interested in using the data from a published paper, ﬁrst check the journal’s website,
as many journals archive data and replication programs online. Second, check the website(s) of
the paper’s author(s). Most academic economists maintain webpages, and some make available
replication ﬁles complete with data and programs. If these investigations fail, email the author(s),
politely requesting the data. You may need to be persistent.
As a matter of professional etiquette, all authors absolutely have the obligation to make their
data and programs available. Unfortunately, many fail to do so, and typically for poor reasons.
The irony of the situation is that it is typically in the best interests of a scholar to make as much of
their work (including all data and programs) freely available, as this only increases the likelihood
of their work being cited and having an impact.
Keep this in mind as you start your own empirical project. Remember that as part of your end
product, you will need (and want) to provide all data and programs to the community of scholars.
The greatest form of ﬂattery is to learn that another scholar has read your paper, wants to extend
your work, or wants to use your empirical methods. In addition, public openness provides a healthy
incentive for transparency and integrity in empirical analysis.
1.7 Econometric Software
Economists use a variety of econometric, statistical, and programming software.
STATA (www.stata.com) is a powerful statistical program with a broad set of preprogrammed
econometric and statistical tools. It is quite popular among economists, and is continuously being
updated with new methods. It is an excellent package for most econometric analysis, but is limited
when you want to use new or less-common econometric methods which have not yet been programmed.
GAUSS (www.aptech.com), MATLAB (www.mathworks.com), and Ox (www.oxmetrics.net)
are high-level matrix programming languages with a wide variety of built-in statistical functions.
Many econometric methods have been programmed in these languages and are available on the web.
The advantage of these packages is that you are in complete control of your analysis, and it is
easier to program new methods than in STATA. Some disadvantages are that you have to do
much of the programming yourself, programming complicated procedures takes signiﬁcant time,
and programming errors are hard to prevent and difficult to detect and eliminate.
R (www.r-project.org) is an integrated suite of statistical and graphical software that is flexible,
open source, and best of all, free!
For highly-intensive computational tasks, some economists write their programs in a standard
programming language such as Fortran or C. This can lead to major gains in computational speed,
at the cost of increased time in programming and debugging.
As these different packages have distinct advantages, many empirical economists end up using
more than one package. As a student of econometrics, you will learn at least one of these packages,
and probably more than one.
1.8 Reading the Manuscript
Chapter 2 is a review of moment estimation and asymptotic distribution theory. This material
should be familiar from an earlier course in statistics, but I have included this at the beginning because of its central importance in econometric distribution theory. Chapters 3 through 9 deal with
the core linear regression and projection models. Chapter 10 introduces the bootstrap. Chapters
11 through 13 deal with the Generalized Method of Moments, empirical likelihood and endogeneity.
Chapters 14 and 15 cover time series, and Chapters 16, 17 and 18 cover limited dependent variables, panel data, and nonparametrics. Reviews of matrix algebra, probability theory, maximum
likelihood, and numerical optimization can be found in the appendix.
Technical sections which may not be of interest to all readers are marked with an asterisk (*).
Chapter 2
Moment Estimation
2.1 Introduction
Most econometric estimators can be written as functions of sample moments. To understand
econometric estimation we need a thorough understanding of moment estimation. This chapter
provides a concise summary. It will be useful for most students to review this material, even if most
is familiar.
2.2 Population and Sample Mean
A random variable $y$ with density $f$ has the expectation or mean¹
$$
\mu = E(y) = \int_{-\infty}^{\infty} u f(u)\, du.
$$
This is the average value of y in the population.
We would like to estimate $\mu$ from a random sample. Recall that a random sample $\{y_1, \ldots, y_n\}$ consists of $n$ observations of independent and identically distributed draws from the distribution of $y$.

Assumption 2.2.1 The observations $\{y_1, \ldots, y_n\}$ are a random sample.
As $\mu$ is the average value of $y$ in the population, it seems reasonable to estimate $\mu$ from the average value of $y$ in the sample. This is the sample mean, written as
$$
\bar{y} = \frac{1}{n}\left(y_1 + \cdots + y_n\right) = \frac{1}{n}\sum_{i=1}^{n} y_i.
$$
It is important to understand the distinction between $\mu$ and $\bar{y}$. The population mean $\mu$ is a non-random feature of the population while the sample mean $\bar{y}$ is a random feature of a random sample. $\mu$ is fixed, while $\bar{y}$ varies with the sample. We use the term "mean" to refer to both, but they are really quite distinct. Here, as is common in econometrics, we put a bar " ¯ " over $y$ to indicate that the quantity is a sample mean. This convention is useful as it helps readers recognize a sample mean. It is also common to see the notation $\bar{y}_n$, where the subscript "$n$" indicates that the sample mean depends on the sample size $n$.
¹ For a rigorous treatment of expectation see Section 2.14.
Moment estimation uses sample moments as estimates of population moments. In the case of the mean, the moment estimate of the population mean $\mu = E(y)$ is the sample mean $\hat{\mu} = \bar{y}$. Here, as is common in econometrics, we put a hat "^" over the parameter $\mu$ to indicate that $\hat{\mu}$ is a sample estimate of $\mu$. This is a helpful convention, as just by seeing the symbol $\hat{\mu}$ we understand that it is a sample estimate of a population parameter $\mu$.
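To make this concrete, here is a minimal sketch (not part of the original text; the log-normal population, parameter values, and seed are arbitrary choices) that draws a random sample and computes the sample mean $\hat{\mu} = \bar{y}$ as the moment estimate of the population mean $\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: log-normal "wages" whose mean is known in closed form,
# E(y) = exp(m + s^2 / 2) for log-mean m and log-standard-deviation s.
m, s = 3.0, 0.5
mu = np.exp(m + s**2 / 2)            # population mean E(y)

# Random sample of n observations and its sample mean (the moment estimate of mu)
n = 1000
y = rng.lognormal(mean=m, sigma=s, size=n)
mu_hat = y.mean()

print(f"population mean mu = {mu:.3f}, sample mean mu_hat = {mu_hat:.3f}")
```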
2.3 Sample Mean is Unbiased
Since the sample mean is a linear function of the observations, it is simple to calculate its expectation.
$$
E(\bar{y}) = E\left(\frac{1}{n}\sum_{i=1}^{n} y_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(y_i) = \mu.
$$
This shows that the expected value of the sample mean equals the population mean. An estimator
with this property is called unbiased.
Definition 2.3.1 An estimator $\hat{\theta}$ for $\theta$ is unbiased if $E(\hat{\theta}) = \theta$.
Theorem 2.3.1 If $E|y| < \infty$ then $E(\bar{y}) = \mu$ and $\hat{\mu} = \bar{y}$ is unbiased for the population mean $\mu$.
You may notice that we slipped in the additional condition "If $E|y| < \infty$". This assumption ensures that $\mu$ is finite and the mean of $y$ is well defined.
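Unbiasedness is a statement about the sampling distribution of $\bar{y}$, so it can be illustrated by simulation: the average of the sample mean across many independent samples should be close to $\mu$. The sketch below is an illustration only; the exponential population, sample size, and replication count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 2.0                       # population mean of an Exponential(scale=2) distribution
n, reps = 50, 100_000

# Draw `reps` independent samples of size n and compute each sample mean
ybar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)

# Averaging ybar across replications approximates E(ybar), which should equal mu
print(f"mu = {mu}, average of the sample means = {ybar.mean():.4f}")
```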
2.4 Variance
The variance of the random variable $y$ is defined as
$$
\sigma^2 = \mathrm{var}(y) = E\left(y - E(y)\right)^2 = E(y^2) - \left(E(y)\right)^2.
$$
Notice that the variance is the function of two moments, $E(y^2)$ and $E(y)$.
We can calculate the variance of the sample mean $\hat{\mu}$. It is convenient to define the centered observations $u_i = y_i - \mu$ which have mean zero and variance $\sigma^2$. Then
$$
\hat{\mu} - \mu = \frac{1}{n}\sum_{i=1}^{n} u_i
$$
and
$$
\mathrm{var}(\hat{\mu}) = E(\hat{\mu} - \mu)^2
= E\left(\left(\frac{1}{n}\sum_{i=1}^{n} u_i\right)\left(\frac{1}{n}\sum_{j=1}^{n} u_j\right)\right)
= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} E\left(u_i u_j\right)
= \frac{1}{n^2}\sum_{i=1}^{n} \sigma^2
= \frac{1}{n}\sigma^2
$$
where the second-to-last equality holds because $E\left(u_i u_j\right) = \sigma^2$ for $i = j$ while $E\left(u_i u_j\right) = 0$ for $i \neq j$ due to independence.
Theorem 2.4.1 If $\sigma^2 < \infty$ then $\mathrm{var}(\hat{\mu}) = \frac{1}{n}\sigma^2$.
This result links the variance of the estimator $\hat{\mu}$ with the variance of the individual observation $y_i$ and with the sample size $n$. In particular, $\mathrm{var}(\hat{\mu})$ is proportional to $\sigma^2$, and inversely proportional to $n$ and thus decreases as $n$ increases.
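Theorem 2.4.1 can be checked the same way: across many simulated samples, the variance of $\hat{\mu}$ should be close to $\sigma^2/n$ and shrink as $n$ grows. A minimal sketch with an arbitrary normal population:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0                   # population variance of a N(0, 4) distribution
reps = 10_000

for n in (10, 100, 1000):
    # Sample means from `reps` independent samples of size n
    ybar = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n)).mean(axis=1)
    print(f"n={n:5d}: simulated var(mu_hat) = {ybar.var():.5f}, sigma^2/n = {sigma2 / n:.5f}")
```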
2.5 Convergence in Probability
In Theorem 2.4.1 we showed that the variance of $\hat{\mu}$ decreases with the sample size $n$. This implies that the sampling distribution of $\hat{\mu}$ concentrates as the sample size increases. We now give a formal definition.
Definition 2.5.1 A random variable $z_n \in \mathbb{R}$ converges in probability to $z$ as $n \to \infty$, denoted $z_n \overset{p}{\to} z$, if for all $\delta > 0$,
$$
\lim_{n \to \infty} \Pr\left(|z_n - z| \leq \delta\right) = 1. \tag{2.1}
$$
The definition looks quite abstract, but it formalizes the concept of a distribution concentrating about a point. The event $\{|z_n - z| \leq \delta\}$ is the event that $z_n$ is within $\delta$ of the point $z$. $\Pr\left(|z_n - z| \leq \delta\right)$ is the probability of this event — that $z_n$ is within $\delta$ of the point $z$. The statement (2.1) is that this probability approaches 1 as the sample size $n$ increases. The definition of convergence in probability requires that this holds for any $\delta$. So even for very small intervals about $z$, the distribution of $z_n$ concentrates within this interval for large $n$.
When $z_n \overset{p}{\to} z$ we call $z$ the probability limit (or plim) of $z_n$.
Two comments about the notation are worth mentioning. First, it is conventional to write the convergence symbol as $\overset{p}{\to}$ where the "$p$" above the arrow indicates that the convergence is "in probability". You should try and adhere to this notation, and not simply write $z_n \to z$. Second, it is also important to include the phrase "as $n \to \infty$" to be specific about how the limit is obtained.
Students often confuse convergence in probability with convergence in expectation:
$$
E(z_n) \to E(z) \tag{2.2}
$$
but these are distinct concepts. Neither (2.1) nor (2.2) implies the other.
To see the distinction it might be helpful to think through a stylized example. Consider a discrete random variable $z_n$ which takes the value $a_n \neq 0$ with probability $n^{-1}$ and the value $0$ with probability $1 - n^{-1}$, or
$$
\Pr\left(z_n = a_n\right) = \frac{1}{n} \tag{2.3}
$$
$$
\Pr\left(z_n = 0\right) = \frac{n-1}{n}.
$$
In this example the probability distribution of $z_n$ concentrates at zero as $n$ increases. You can check that $z_n \overset{p}{\to} 0$ as $n \to \infty$, regardless of the sequence $a_n$.
In this example we can also calculate that the expectation of $z_n$ is
$$
E(z_n) = \frac{a_n}{n}.
$$
Despite the fact that $z_n$ converges in probability to zero, its expectation will not decrease to zero unless $a_n / n \to 0$. If $a_n$ diverges to infinity at a rate equal to $n$ (or faster) then $E(z_n)$ will not converge to zero. For example, if $a_n = n$, then $E(z_n) = 1$ for all $n$, even though $z_n \overset{p}{\to} 0$. This example might seem a bit artificial, but the point is that the concepts of convergence in probability and convergence in expectation are distinct, so it is important not to confuse one with the other.
Another common source of confusion with the notation surrounding probability limits is that the expression to the right of the arrow "$\overset{p}{\to}$" must be free of dependence on the sample size $n$. Thus expressions of the form "$z_n \overset{p}{\to} c_n$" are notationally meaningless and must not be used.
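The stylized example (2.3) with $a_n = n$ can be simulated to see both points at once: the probability that $z_n$ is away from zero shrinks to zero, while $E(z_n)$ stays at 1 for every $n$. This sketch is illustrative only; the replication count and the tolerance $\delta$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
reps, delta = 1_000_000, 0.01

for n in (10, 100, 1000, 10_000):
    # z_n equals a_n = n with probability 1/n and equals 0 otherwise
    z = np.where(rng.random(reps) < 1.0 / n, float(n), 0.0)
    prob_far = np.mean(np.abs(z) > delta)
    print(f"n={n:6d}: Pr(|z_n| > {delta}) ~= {prob_far:.4f}, E(z_n) ~= {z.mean():.3f}")
```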
2.6 Weak Law of Large Numbers
As we mentioned in the two previous sections, the variance of the sample mean decreases to
zero as the sample size increases. We now show that this implies that the sample mean converges
in probability to the population mean.
When $y$ has a finite variance there is a fairly straightforward proof by applying Chebyshev's inequality (B.26). The latter states that for any random variable $z_n$ and constant $\delta > 0$
$$
\Pr\left(|z_n - E(z_n)| > \delta\right) \leq \frac{\mathrm{var}(z_n)}{\delta^2}.
$$
Set $z_n = \hat{\mu}$, for which $E(z_n) = \mu$ and $\mathrm{var}(z_n) = \frac{1}{n}\sigma^2$ (by Theorems 2.3.1 and 2.4.1). Then
$$
\Pr\left(|\hat{\mu} - \mu| > \delta\right) \leq \frac{\sigma^2}{n\delta^2}.
$$
For fixed $\sigma^2$ and $\delta$, the bound on the right-hand-side shrinks to zero as $n \to \infty$. Thus the probability that $\hat{\mu}$ is within $\delta$ of $\mu$ approaches 1 as $n$ gets large, so $\hat{\mu}$ converges in probability to $\mu$.
We have shown that the sample mean $\hat{\mu}$ converges in probability to the population mean $\mu$. This result is called the weak law of large numbers. Our derivation assumed that $y$ has a finite variance, but this is not necessary. It is only necessary for $y$ to have a finite mean.
Theorem 2.6.1 Weak Law of Large Numbers (WLLN)
If $E|y| < \infty$ then as $n \to \infty$,
$$
\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \overset{p}{\to} E(y_i).
$$
The proof of Theorem 2.6.1 is presented in Section 2.15.
The WLLN shows that the estimator $\hat{\mu} = \bar{y}$ converges in probability to the true population mean $\mu$. An estimator which converges in probability to the population value is called consistent.
Definition 2.6.1 An estimator $\hat{\theta}$ of a parameter $\theta$ is consistent if $\hat{\theta} \overset{p}{\to} \theta$ as $n \to \infty$.
Consistency is a good property for an estimator to possess. It means that for any given data distribution, there is a sample size $n$ sufficiently large such that the estimator $\hat{\theta}$ will be arbitrarily close to the true value $\theta$ with high probability. Unfortunately it does not mean that $\hat{\theta}$ will actually be close to $\theta$ in a given finite sample, but it is a minimal property for an estimator to be considered a "good" estimator.
Theorem 2.6.2 Under Assumption 2.2.1 and $E|y| < \infty$, $\hat{\mu} = \bar{y}$ is consistent for the population mean $\mu$.
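Consistency can be visualized within a single simulated sample by tracking the running sample mean $\bar{y}_n$ as observations accumulate: by the WLLN it settles near $\mu$ once $n$ is large. A short sketch under arbitrary choices (a chi-square population with mean 3):

```python
import numpy as np

rng = np.random.default_rng(4)
mu = 3.0                          # mean of a chi-square distribution with 3 degrees of freedom
y = rng.chisquare(df=3, size=100_000)

# Running sample mean: ybar_n = (1/n) * sum of the first n observations
running_mean = np.cumsum(y) / np.arange(1, y.size + 1)

for n in (10, 100, 1000, 100_000):
    print(f"n={n:6d}: ybar_n = {running_mean[n - 1]:.4f}   (mu = {mu})")
```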
Almost Sure Convergence and the Strong Law*
Convergence in probability is sometimes called weak convergence. A related concept is almost sure convergence, also known as strong convergence. (In probability theory the term "almost sure" means "with probability equal to one". An event which is random but occurs with probability equal to one is said to be almost sure.)
Definition 2.6.2 A random variable $z_n \in \mathbb{R}$ converges almost surely to $z$ as $n \to \infty$, denoted $z_n \overset{a.s.}{\to} z$, if for every $\delta > 0$
$$
\Pr\left(\lim_{n \to \infty} |z_n - z| \leq \delta\right) = 1. \tag{2.4}
$$
The convergence (2.4) is stronger than (2.1) because it computes the probability of a limit rather than the limit of a probability. Almost sure convergence is stronger than convergence in probability in the sense that $z_n \overset{a.s.}{\to} z$ implies $z_n \overset{p}{\to} z$.
In the example (2.3) of Section 2.5, the sequence $z_n$ converges in probability to zero for any sequence $a_n$, but this is not sufficient for $z_n$ to converge almost surely. In order for $z_n$ to converge to zero almost surely, it is necessary that $a_n \to 0$.
In the random sampling context the sample mean can be shown to converge almost
surely to the population mean. This is called the strong law of large numbers.
Theorem 2.6.3 Strong Law of Large Numbers (SLLN)
If $E|y| < \infty$, then as $n \to \infty$,
$$
\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \overset{a.s.}{\to} E(y_i).
$$
The proof of the SLLN is technically quite advanced so is not presented here. For a
proof see Billingsley (1995, Section 22) or Ash (1972, Theorem 7.2.5).
The WLLN is sufficient for most purposes in econometrics, so we will not use the
SLLN in this text.
2.7 Vector-Valued Moments
Our preceding discussion focused on the case where $y$ is real-valued (a scalar), but nothing important changes if we generalize to the case where $\boldsymbol{y} \in \mathbb{R}^m$ is a vector. To fix notation, the elements of $\boldsymbol{y}$ are
$$
\boldsymbol{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}.
$$
The population mean of $\boldsymbol{y}$ is just the vector of marginal means
$$
\boldsymbol{\mu} = E(\boldsymbol{y}) = \begin{pmatrix} E(y_1) \\ E(y_2) \\ \vdots \\ E(y_m) \end{pmatrix}.
$$
When working with random vectors $\boldsymbol{y}$ it is convenient to measure their magnitude with the Euclidean norm
$$
\|\boldsymbol{y}\| = \left(y_1^2 + \cdots + y_m^2\right)^{1/2}.
$$
This is the classic Euclidean length of the vector $\boldsymbol{y}$. Notice that $\|\boldsymbol{y}\|^2 = \boldsymbol{y}'\boldsymbol{y}$.
It turns out that it is equivalent to describe finiteness of moments in terms of the Euclidean
norm of a vector or all individual components.

Theorem 2.7.1 For y ∈ ℝ^m, E‖y‖ < ∞ if and only if E|y_j| < ∞ for j = 1, ..., m.

Theorem 2.7.1 implies that the components of µ are finite if and only if E‖y‖ < ∞.
The m × m variance matrix of y is
$$V=\mathrm{var}\,(y)=E\left((y-\mu)(y-\mu)'\right).$$
V is often called a variance-covariance matrix. You can show that the elements of V are finite if
E‖y‖² < ∞.
A random sample {y₁, ..., y_n} consists of n observations of independent and identically distributed draws
from the distribution of y. (Each draw is an m-vector.) The vector sample mean
$$\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i}=\begin{pmatrix}\bar{y}_{1}\\ \bar{y}_{2}\\ \vdots\\ \bar{y}_{m}\end{pmatrix}$$
is the vector of means of the individual variables.
Convergence in probability of a vector is defined as convergence in probability of all elements
in the vector. Thus $\bar{y}\overset{p}{\longrightarrow}\mu$ if and only if $\bar{y}_{j}\overset{p}{\longrightarrow}\mu_{j}$ for j = 1, ..., m. Since the latter holds if
E|y_j| < ∞ for j = 1, ..., m, or equivalently E‖y‖ < ∞, we can state this formally as follows.

Theorem 2.7.2 Weak Law of Large Numbers (WLLN) for random vectors
If E‖y‖ < ∞ then as n → ∞,
$$\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i}\ \overset{p}{\longrightarrow}\ E(y_{i}).$$
2.8 Convergence in Distribution
The WLLN is a useful ﬁrst step, but does not give an approximation to the distribution of an
estimator. A largesample or asymptotic approximation can be obtained using the concept of
convergence in distribution.
Definition 2.8.1 Let $z_n$ be a random vector with distribution $F_n(u) = \Pr(z_n \le u)$. We
say that $z_n$ converges in distribution to z as n → ∞, denoted $z_n \overset{d}{\longrightarrow} z$, if for all u at
which $F(u) = \Pr(z \le u)$ is continuous, $F_n(u) \to F(u)$ as n → ∞.

When $z_n \overset{d}{\longrightarrow} z$, it is common to refer to z as the asymptotic distribution or limit distribution of $z_n$.
When the limit distribution z is degenerate (that is, Pr(z = c) = 1 for some c) we can write
the convergence as $z_n \overset{d}{\longrightarrow} c$, which is equivalent to convergence in probability, $z_n \overset{p}{\longrightarrow} c$.
The typical path to establishing convergence in distribution is through the central limit theorem
(CLT), which states that a standardized sample average converges in distribution to a normal
random vector.
Theorem 2.8.1 Central Limit Theorem (CLT). If E‖y‖² < ∞ then as n → ∞
$$\sqrt{n}\left(\bar{y}_{n}-\mu\right)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(y_{i}-\mu\right)\ \overset{d}{\longrightarrow}\ \mathrm{N}\left(0,V\right)$$
where µ = Ey and $V=E\left((y-\mu)(y-\mu)'\right)$.
The standardized sum $z_n=\sqrt{n}\left(\bar{y}_{n}-\mu\right)$ has mean zero and variance V. What the CLT adds is
that the variable $z_n$ is also approximately normally distributed, and that the normal approximation
improves as n increases.
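A hedged illustration (Python/NumPy sketch under an assumed skewed data-generating process, not part of the manuscript): draw many samples, standardize their means, and compare with the normal benchmark.

```python
# Sketch of the CLT: standardized sample means of a skewed variable
# are approximately normal with variance V.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 50_000
mu, var = 1.0, 1.0  # mean and variance of an Exponential(1) draw

draws = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (draws.mean(axis=1) - mu)  # sqrt(n)(ybar - mu)

print("simulated variance of z:", z.var())          # close to var = 1
print("simulated P(z <= 1)    :", np.mean(z <= 1))  # compare with Phi(1) ~ 0.841
```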
The CLT is one of the most powerful and mysterious results in statistical theory. It shows that
the simple process of averaging induces normality. The ﬁrst version of the CLT (for the number
of heads resulting from many tosses of a fair coin) was established by the French mathematician
Abraham de Moivre in 1733. This was extended to cover an approximation to the binomial dis
tribution in 1812 by PierreSimon Laplace, and the general statement is credited to the Russian
mathematician Aleksandr Lyapunov in 1901.
2.9 Functions of Moments
We now expand our investigation and consider estimation of parameters which can be written
as a continuous function of µ. That is, the parameter of interest is the vector of functions
$$\beta=g\left(\mu\right) \qquad (2.5)$$
where $g:\mathbb{R}^{m}\to\mathbb{R}^{k}$. As one example, the geometric mean of wages w is
$$\gamma=\exp\left(E\left(\log\left(w\right)\right)\right) \qquad (2.6)$$
which is (2.5) with
$$g(u)=\exp(u)$$
and µ = E(log(w)). As another example, the skewness of the wage distribution is
$$sk=\frac{E\left(w-Ew\right)^{3}}{\left(E\left(w-Ew\right)^{2}\right)^{3/2}}=g\left(Ew,Ew^{2},Ew^{3}\right)$$
where w = wage and
$$g\left(\mu_{1},\mu_{2},\mu_{3}\right)=\frac{\mu_{3}-3\mu_{2}\mu_{1}+2\mu_{1}^{3}}{\left(\mu_{2}-\mu_{1}^{2}\right)^{3/2}}. \qquad (2.7)$$
In this case we can set
$$y=\begin{pmatrix}w\\ w^{2}\\ w^{3}\end{pmatrix}$$
so that
$$\mu=\begin{pmatrix}Ew\\ Ew^{2}\\ Ew^{3}\end{pmatrix}. \qquad (2.8)$$
The parameter β = g(µ) is not a population moment, so it does not have a direct moment
estimator. Instead, it is common to use a plug-in estimate formed by replacing the unknown µ
with its point estimate $\hat{\mu}$ so that
$$\hat{\beta}=g\left(\hat{\mu}\right). \qquad (2.9)$$
Again, the hat “^” indicates that $\hat{\beta}$ is a sample estimate of β.
For example, the plug-in estimate of the geometric mean of the wage distribution from (2.6)
is
$$\hat{\gamma}=\exp(\hat{\mu})$$
with
$$\hat{\mu}=\frac{1}{n}\sum_{i=1}^{n}\log\left(wage_{i}\right).$$
The plug-in estimate of the skewness of the wage distribution is
$$\widehat{sk}=\frac{\frac{1}{n}\sum_{i=1}^{n}\left(w_{i}-\bar{w}\right)^{3}}{\left(\frac{1}{n}\sum_{i=1}^{n}\left(w_{i}-\bar{w}\right)^{2}\right)^{3/2}}=\frac{\hat{\mu}_{3}-3\hat{\mu}_{2}\hat{\mu}_{1}+2\hat{\mu}_{1}^{3}}{\left(\hat{\mu}_{2}-\hat{\mu}_{1}^{2}\right)^{3/2}}$$
where
$$\hat{\mu}_{j}=\frac{1}{n}\sum_{i=1}^{n}w_{i}^{j}.$$
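A minimal sketch of the plug-in principle (Python/NumPy; the lognormal "wage" sample is an assumption made purely for illustration): compute the sample moments $\hat{\mu}_1$, $\hat{\mu}_2$, $\hat{\mu}_3$ and evaluate g at them.

```python
# Plug-in estimate of skewness: evaluate g from (2.7) at the sample moments.
import numpy as np

rng = np.random.default_rng(2)
w = rng.lognormal(mean=3.0, sigma=0.6, size=5_000)  # stand-in for wage data

mu1 = np.mean(w)      # hat{mu}_1 = (1/n) sum w_i
mu2 = np.mean(w**2)   # hat{mu}_2 = (1/n) sum w_i^2
mu3 = np.mean(w**3)   # hat{mu}_3 = (1/n) sum w_i^3

sk_hat = (mu3 - 3 * mu2 * mu1 + 2 * mu1**3) / (mu2 - mu1**2) ** 1.5

# equivalent centered-moment form of the same estimator
sk_hat_centered = np.mean((w - mu1)**3) / np.mean((w - mu1)**2) ** 1.5

print(sk_hat, sk_hat_centered)  # the two expressions agree
```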
A useful property is that continuous functions are limitpreserving.
Theorem 2.9.1 Continuous Mapping Theorem (CMT). If $z_n \overset{p}{\longrightarrow} c$
as n → ∞ and g(·) is continuous at c, then $g(z_n) \overset{p}{\longrightarrow} g(c)$ as n → ∞.
The proof of Theorem 2.9.1 is given in Section 2.15.
For example, if $z_n \overset{p}{\longrightarrow} c$ as n → ∞ then
$$z_{n}+a\ \overset{p}{\longrightarrow}\ c+a$$
$$az_{n}\ \overset{p}{\longrightarrow}\ ac$$
$$z_{n}^{2}\ \overset{p}{\longrightarrow}\ c^{2}$$
as the functions g(u) = u + a, g(u) = au, and g(u) = u² are continuous. Also
$$\frac{a}{z_{n}}\ \overset{p}{\longrightarrow}\ \frac{a}{c}$$
if c ≠ 0. The condition c ≠ 0 is important as the function g(u) = a/u is not continuous at u = 0.
We need the following result in order for $\hat{\beta}$ to be consistent for β.

Theorem 2.9.2 If E‖y‖ < ∞ and g(u) is continuous at u = µ then
$$\hat{\beta}=g\left(\hat{\mu}\right)\ \overset{p}{\longrightarrow}\ g\left(\mu\right)=\beta$$
as n → ∞, and thus $\hat{\beta}$ is consistent for β.

To apply Theorem 2.9.2 it is necessary to check if the function g is continuous at µ. In our
first example g(u) = exp(u) is continuous everywhere. It therefore follows from Theorem 2.7.2 and
Theorem 2.9.2 that if E|log(wage)| < ∞ then as n → ∞
$$\hat{\gamma}\ \overset{p}{\longrightarrow}\ \gamma.$$
In our second example g defined in (2.7) is continuous for all µ such that var(w) = µ₂ − µ₁² > 0,
which holds unless w has a degenerate distribution. Thus if E|w|³ < ∞ and var(w) > 0 then as
n → ∞
$$\widehat{sk}\ \overset{p}{\longrightarrow}\ sk.$$
2.10 Delta Method
In this section we introduce two tools — an extended version of the CMT and the Delta Method
— which allow us to calculate the asymptotic distribution of the parameter estimate $\hat{\beta}$.
We first present an extended version of the continuous mapping theorem which allows convergence in distribution.
Theorem 2.10.1 Continuous Mapping Theorem
If $z_n \overset{d}{\longrightarrow} z$ as n → ∞ and $g:\mathbb{R}^{m}\to\mathbb{R}^{k}$ has the set of discontinuity points
$D_g$ such that $\Pr\left(z\in D_{g}\right)=0$, then $g(z_n) \overset{d}{\longrightarrow} g(z)$ as n → ∞.
For a proof of Theorem 2.10.1 see Theorem 2.3 of van der Vaart (1998). It was first proved by
Mann and Wald (1943) and is therefore sometimes referred to as the Mann-Wald Theorem.
Theorem 2.10.1 allows the function g to be discontinuous only if the probability of being at a
discontinuity point is zero. For example, the function g(u) = u⁻¹ is discontinuous at u = 0, but if
$z_n \overset{d}{\longrightarrow} z \sim \mathrm{N}(0,1)$ then Pr(z = 0) = 0 so $z_n^{-1} \overset{d}{\longrightarrow} z^{-1}$.
A special case of the Continuous Mapping Theorem is known as Slutsky’s Theorem.
Theorem 2.10.2 Slutsky’s Theorem
If $z_n \overset{d}{\longrightarrow} z$ and $c_n \overset{p}{\longrightarrow} c$ as n → ∞ then
1. $z_n + c_n \overset{d}{\longrightarrow} z + c$
2. $z_n c_n \overset{d}{\longrightarrow} zc$
3. $\dfrac{z_n}{c_n} \overset{d}{\longrightarrow} \dfrac{z}{c}$ if c ≠ 0
Even though Slutsky’s Theorem is a special case of the CMT, it is a useful statement as it
focuses on the most common applications — addition, multiplication and division.
Despite the fact that the plug-in estimator $\hat{\beta}$ is a function of $\hat{\mu}$ for which we have an asymptotic
distribution, Theorem 2.10.1 does not directly give us an asymptotic distribution for $\hat{\beta}$. This is
because $\hat{\beta}=g\left(\hat{\mu}\right)$ is written as a function of $\hat{\mu}$, not of the standardized sequence $\sqrt{n}\left(\hat{\mu}-\mu\right)$.
We need an intermediate step — a first order Taylor series expansion. This step is so critical to
statistical theory that it has its own name — The Delta Method.
Theorem 2.10.3 Delta Method:
If $\sqrt{n}\left(\theta_{n}-\theta_{0}\right)\overset{d}{\longrightarrow}\xi$, where θ is m × 1, and $g(\theta):\mathbb{R}^{m}\to\mathbb{R}^{k}$, k ≤ m, is
continuously differentiable in a neighborhood of θ then as n → ∞
$$\sqrt{n}\left(g\left(\theta_{n}\right)-g(\theta_{0})\right)\ \overset{d}{\longrightarrow}\ G'\xi \qquad (2.10)$$
where $G(\theta)=\frac{\partial}{\partial\theta}g(\theta)'$ and $G=G(\theta_{0})$. In particular, if
$$\sqrt{n}\left(\theta_{n}-\theta_{0}\right)\ \overset{d}{\longrightarrow}\ \mathrm{N}\left(0,V\right)$$
where V is m × m, then as n → ∞
$$\sqrt{n}\left(g\left(\theta_{n}\right)-g(\theta_{0})\right)\ \overset{d}{\longrightarrow}\ \mathrm{N}\left(0,G'VG\right). \qquad (2.11)$$
The Delta Method allows us to complete our derivation of the asymptotic distribution of the
estimator $\hat{\beta}$ of β. Relative to consistency, it requires the stronger smoothness condition that g(θ)
is continuously differentiable.
Now by combining Theorems 2.8.1 and 2.10.3 we can find the asymptotic distribution of the
plug-in estimator $\hat{\beta}$.

Theorem 2.10.4 If E‖y‖² < ∞ and $G(u)=\frac{\partial}{\partial u}g\left(u\right)'$ is continuous in
a neighborhood of u = µ then as n → ∞
$$\sqrt{n}\left(\hat{\beta}-\beta\right)\ \overset{d}{\longrightarrow}\ \mathrm{N}\left(0,G'VG\right)$$
where G = G(µ).
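The following sketch (Python/NumPy with simulated data; an illustration, not the author's code) applies Theorem 2.10.4 to the geometric-mean example: with γ = g(µ) = exp(µ) and µ the mean of log wages, G = exp(µ) and the asymptotic variance is G²V.

```python
# Delta-method standard error for the geometric mean gamma = exp(mu),
# where mu = E(log w) and g(u) = exp(u), so G(u) = exp(u).
import numpy as np

rng = np.random.default_rng(3)
logw = np.log(rng.lognormal(mean=3.0, sigma=0.7, size=2_000))

n = logw.size
mu_hat = logw.mean()        # hat{mu}
V_hat = logw.var(ddof=1)    # estimate of V = var(log w)

gamma_hat = np.exp(mu_hat)  # plug-in estimate g(hat{mu})
G_hat = np.exp(mu_hat)      # derivative of g evaluated at hat{mu}

# Theorem 2.10.4: sqrt(n)(gamma_hat - gamma) is approximately N(0, G'VG)
se_gamma = np.sqrt(G_hat**2 * V_hat / n)
print(f"gamma_hat = {gamma_hat:.3f}, delta-method s.e. = {se_gamma:.3f}")
```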
2.11 Stochastic Order Symbols
It is convenient to have simple symbols for random variables and vectors which converge in
probability to zero or are stochastically bounded. The notation $z_n = o_p(1)$ (pronounced “small
oh-P-one”) means that $z_n \overset{p}{\longrightarrow} 0$ as n → ∞. We also say that
$$z_{n}=o_{p}(a_{n})$$
if $a_n$ is a sequence such that $a_n^{-1} z_n = o_p(1)$. For example, for any consistent estimator $\hat{\beta}$ for β we
then can write
$$\hat{\beta}=\beta+o_{p}(1).$$
Similarly, the notation $z_n = O_p(1)$ (pronounced “big oh-P-one”) means that $z_n$ is bounded in
probability. Precisely, for any δ > 0 there is a constant $M_\delta < \infty$ such that
$$\lim_{n\to\infty}\Pr\left(\left|z_{n}\right|>M_{\delta}\right)\le\delta.$$
We say that
$$z_{n}=O_{p}(a_{n})$$
if $a_n$ is a sequence such that $a_n^{-1} z_n = O_p(1)$.
$O_p(1)$ is weaker than $o_p(1)$ in the sense that $z_n = o_p(1)$ implies $z_n = O_p(1)$ but not the reverse.
However, if $z_n = O_p(a_n)$ then $z_n = o_p(b_n)$ for any $b_n$ such that $a_n/b_n \to 0$.
If a random vector converges in distribution $z_n \overset{d}{\longrightarrow} z$ (for example, if z ∼ N(0, V)) then
$z_n = O_p(1)$. It follows that for estimators $\hat{\beta}$ which satisfy the convergence of Theorem 2.10.4 then
we can write
$$\hat{\beta}=\beta+O_{p}(n^{-1/2}).$$
There are many simple rules for manipulating $o_p(1)$ and $O_p(1)$ sequences which can be deduced
from the continuous mapping theorem or Slutsky’s Theorem. For example,
$$o_{p}(1)+o_{p}(1)=o_{p}(1)$$
$$o_{p}(1)+O_{p}(1)=O_{p}(1)$$
$$O_{p}(1)+O_{p}(1)=O_{p}(1)$$
$$o_{p}(1)o_{p}(1)=o_{p}(1)$$
$$o_{p}(1)O_{p}(1)=o_{p}(1)$$
$$O_{p}(1)O_{p}(1)=O_{p}(1)$$
2.12 Uniform Stochastic Bounds*
For some applications it can be useful to obtain the stochastic order of the random variable
$$\max_{1\le i\le n}\left|y_{i}\right|.$$
This is the magnitude of the largest observation in the sample {y₁, ..., y_n}. If the support of the
distribution of $y_i$ is unbounded, then as the sample size n increases, the largest observation will
also tend to increase. It turns out that there is a simple characterization.

Theorem 2.12.1 If E|y|^r < ∞ then as n → ∞
$$n^{-1/r}\max_{1\le i\le n}\left|y_{i}\right|\ \overset{p}{\longrightarrow}\ 0.$$
Equivalently,
$$\max_{1\le i\le n}\left|y_{i}\right|=o_{p}(n^{1/r}). \qquad (2.12)$$
Theorem 2.12.1 says that the largest observation will diverge at a rate slower than $n^{1/r}$. As r
increases this rate decreases. Thus the higher the moment, the slower the rate of divergence of the
largest observation.
To simplify the notation, we write (2.12) as
$$y_{i}=o_{p}(n^{1/r})$$
uniformly in 1 ≤ i ≤ n. It is important to understand that when the $O_p$ or $o_p$ symbols are applied to
subscript i random variables we typically mean uniform convergence in the sense of (2.12).
Theorem 2.12.1 applies to random vectors. If E‖y‖^r < ∞ then
$$\max_{1\le i\le n}\left\|y_{i}\right\|=o_{p}(n^{1/r}).$$
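A small simulation can illustrate Theorem 2.12.1 (Python/NumPy sketch under an assumed Pareto design, not from the manuscript): when E|y|^r < ∞, the scaled maximum $n^{-1/r}\max_i |y_i|$ drifts toward zero as n grows, though slowly.

```python
# Sketch of Theorem 2.12.1: for a Pareto variable with tail index a = 3,
# E|y|^r < infinity for r < 3, so n^(-1/r) max|y_i| -> 0 in probability for r = 2.
import numpy as np

rng = np.random.default_rng(4)
a, r = 3.0, 2.0

for n in [10**2, 10**3, 10**4, 10**5, 10**6]:
    y = (1.0 - rng.random(n)) ** (-1.0 / a)  # Pareto(a) draws on (1, infinity)
    scaled_max = n ** (-1.0 / r) * np.abs(y).max()
    print(f"n = {n:>7d}   n^(-1/r) * max|y_i| = {scaled_max:.4f}")
```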
We now prove Theorem 2.12.1. Take any ε > 0. The event $\left\{\max_{1\le i\le n}|y_{i}|>\varepsilon n^{1/r}\right\}$ means that at
least one of the $|y_i|$ exceeds $\varepsilon n^{1/r}$, which is the same as the event $\bigcup_{i=1}^{n}\left\{|y_{i}|>\varepsilon n^{1/r}\right\}$ or equivalently
$\bigcup_{i=1}^{n}\left\{|y_{i}|^{r}>\varepsilon^{r}n\right\}$. Since the probability of the union of events is smaller than the sum of the
probabilities,
$$\Pr\left(n^{-1/r}\max_{1\le i\le n}|y_{i}|>\varepsilon\right)=\Pr\left(\bigcup_{i=1}^{n}\left\{|y_{i}|^{r}>\varepsilon^{r}n\right\}\right)$$
$$\le\sum_{i=1}^{n}\Pr\left(|y_{i}|^{r}>n\varepsilon^{r}\right)$$
$$\le\frac{1}{n\varepsilon^{r}}\sum_{i=1}^{n}E\left(|y_{i}|^{r}1\left(|y_{i}|^{r}>n\varepsilon^{r}\right)\right)$$
$$=\frac{1}{\varepsilon^{r}}E\left(|y_{i}|^{r}1\left(|y_{i}|^{r}>n\varepsilon^{r}\right)\right)$$
where the second inequality is the strong form of Markov’s inequality (Theorem B.25) and the final
equality is since the $y_i$ are iid. Since E|y|^r < ∞ this final expectation converges to zero as n → ∞.
This is because
$$E|y_{i}|^{r}=\int|y|^{r}dF(y)<\infty$$
implies
$$E\left(|y_{i}|^{r}1\left(|y_{i}|^{r}>c\right)\right)=\int_{|y|^{r}>c}|y|^{r}dF(y)\to0$$
as c → ∞. We have established that $n^{-1/r}\max_{1\le i\le n}|y_{i}|\overset{p}{\longrightarrow}0$, as required.
2.13 Semiparametric Efficiency

In this section we argue that the sample mean $\hat{\mu}$ and plug-in estimator $\hat{\beta}=g\left(\hat{\mu}\right)$ are efficient
estimators of the parameters µ and β. Our demonstration is based on the rich but technically
challenging theory of semiparametric efficiency bounds. An excellent accessible review has been
provided by Newey (1990). We will also appeal to the asymptotic theory of maximum likelihood
estimation (see Section B.11).
We start by examining the sample mean $\hat{\mu}$, for the asymptotic efficiency of $\hat{\beta}$ will follow from
that of $\hat{\mu}$.
Recall, we know that if E‖y‖² < ∞ then the sample mean has the asymptotic distribution
$\sqrt{n}\left(\hat{\mu}-\mu\right)\overset{d}{\longrightarrow}\mathrm{N}\left(0,V\right)$. We want to know if $\hat{\mu}$ is the best feasible estimator, or if there is another
estimator with a smaller asymptotic variance. While it seems intuitively unlikely that another
estimator could have a smaller asymptotic variance, how do we know that this is not the case?
When we ask if $\hat{\mu}$ is the best estimator, we need to be clear about the class of models — the class
of permissible distributions. For estimation of the mean µ of the distribution of y the broadest
conceivable class is $\mathcal{L}_{1}=\{F:E|y|<\infty\}$. This class is too broad for our current purposes, as
$\hat{\mu}$ is not asymptotically N(0, V) for all $F\in\mathcal{L}_{1}$. A more realistic choice is $\mathcal{L}_{2}=\left\{F:Ey^{2}<\infty\right\}$
— the class of finite-variance distributions. When we seek an efficient estimator of the mean µ in
the class of models $\mathcal{L}_{2}$ what we are seeking is the best estimator, given that all we know is that
$F\in\mathcal{L}_{2}$.
To show that the answer is not immediately obvious, it might be helpful to review a setting
where the sample mean is inefficient. Suppose that y ∈ ℝ has the double exponential density
$f\left(y\mid\mu\right)=2^{-1/2}\exp\left(-\sqrt{2}\left|y-\mu\right|\right)$. Since var(y) = 1 we see that the sample mean satisfies
$\sqrt{n}\left(\hat{\mu}-\mu\right)\overset{d}{\longrightarrow}\mathrm{N}(0,1)$. In this model the maximum likelihood estimator (MLE) $\tilde{\mu}$ for
µ is the sample median. Recall from the theory of maximum likelihood that the MLE satisfies
$\sqrt{n}\left(\tilde{\mu}-\mu\right)\overset{d}{\longrightarrow}\mathrm{N}\left(0,\left(ES^{2}\right)^{-1}\right)$ where $S=\frac{\partial}{\partial\mu}\log f\left(y\mid\mu\right)=\sqrt{2}\,\mathrm{sgn}\left(y-\mu\right)$ is the score. We can
calculate that ES² = 2 and thus conclude that $\sqrt{n}\left(\tilde{\mu}-\mu\right)\overset{d}{\longrightarrow}\mathrm{N}\left(0,1/2\right)$. The asymptotic variance
of the MLE is one-half that of the sample mean. Thus when the true density is known to be double
exponential the sample mean is inefficient.
But the estimator which achieves this improved efficiency — the sample median — is not generically
consistent for the population mean. It is inconsistent if the density is asymmetric or skewed.
So the improvement comes at a great cost. Another way of looking at this is that the sample
median is efficient in the class of densities $\left\{f\left(y\mid\mu\right)=2^{-1/2}\exp\left(-\sqrt{2}\left|y-\mu\right|\right)\right\}$ but unless it is
known that this is the correct distribution class this knowledge is not very useful.
The relevant question is whether or not the sample mean is efficient when the form of the
distribution is unknown. We call this setting semiparametric as the parameter of interest (the
mean) is finite dimensional while the remaining features of the distribution are unspecified. In the
semiparametric context an estimator is called semiparametrically efficient if it has the smallest
asymptotic variance among all semiparametric estimators.
The mathematical trick is to reduce the semiparametric model to a set of parametric “submodels”.
The Cramér-Rao variance bound can be found for each parametric submodel. The variance
bound for the semiparametric model (the union of the submodels) is then defined as the supremum
of the individual variance bounds.
Formally, suppose that the true density of y is the unknown function f(y) with mean $\mu=Ey=\int yf(y)dy$.
A parametric submodel η for f(y) is a density $f_{\eta}\left(y\mid\theta\right)$ which is a smooth function of
a parameter θ, and there is a true value $\theta_0$ such that $f_{\eta}\left(y\mid\theta_{0}\right)=f(y)$. The index η indicates the
submodels. The equality $f_{\eta}\left(y\mid\theta_{0}\right)=f(y)$ means that the submodel class passes through the true
density, so the submodel is a true model. The class of submodels η and parameter $\theta_0$ depend on
the true density f. In the submodel $f_{\eta}\left(y\mid\theta\right)$, the mean is $\mu_{\eta}(\theta)=\int yf_{\eta}\left(y\mid\theta\right)dy$ which varies
with the parameter θ. Let η ∈ ℵ be the class of all submodels for f.
Since each submodel η is parametric we can calculate the efficiency bound for estimation of µ
within this submodel. Specifically, given the density $f_{\eta}\left(y\mid\theta\right)$ its likelihood score is
$$S_{\eta}=\frac{\partial}{\partial\theta}\log f_{\eta}\left(y\mid\theta_{0}\right),$$
so the Cramér-Rao lower bound for estimation of θ is $\left(E\left(S_{\eta}S_{\eta}'\right)\right)^{-1}$. Defining $M_{\eta}=\frac{\partial}{\partial\theta}\mu_{\eta}\left(\theta_{0}\right)'$,
by Theorem B.11.5 the Cramér-Rao lower bound for estimation of µ within the submodel η is
$$V_{\eta}=M_{\eta}'\left(E\left(S_{\eta}S_{\eta}'\right)\right)^{-1}M_{\eta}.$$
As $V_\eta$ is the efficiency bound for the submodel class $f_{\eta}\left(y\mid\theta\right)$, no estimator can have an
asymptotic variance smaller than $V_\eta$ for any density $f_{\eta}\left(y\mid\theta\right)$ in the submodel class, including the
true density f. This is true for all submodels η. Thus the asymptotic variance of any semiparametric
estimator cannot be smaller than $V_\eta$ for any conceivable submodel. Taking the supremum of the
Cramér-Rao lower bounds from all conceivable submodels we define²
$$V=\sup_{\eta\in\aleph}V_{\eta}.$$
The asymptotic variance of any semiparametric estimator cannot be smaller than V, since it cannot
be smaller than any individual $V_\eta$. We call V the semiparametric asymptotic variance bound
or semiparametric efficiency bound for estimation of µ, as it is a lower bound on the asymptotic
variance for any semiparametric estimator. If the asymptotic variance of a specific semiparametric
estimator equals the bound V we say that the estimator is semiparametrically efficient.
For many statistical problems it is quite challenging to calculate the semiparametric variance
bound. However, in some cases there is a simple method to find the solution. Suppose that
we can find a submodel $\eta_0$ whose Cramér-Rao lower bound satisfies $V_{\eta_{0}}=V_{\mu}$ where $V_\mu$ is
the asymptotic variance of a known semiparametric estimator. In this case, we can deduce that
$V=V_{\eta_{0}}=V_{\mu}$. Otherwise there would exist another submodel $\eta_1$ whose Cramér-Rao lower bound
satisfies $V_{\eta_{0}}<V_{\eta_{1}}$ but this would imply $V_{\mu}<V_{\eta_{1}}$ which contradicts the Cramér-Rao Theorem.
We now find this submodel for the sample mean $\hat{\mu}$. Our goal is to find a parametric submodel
whose Cramér-Rao bound for µ is V. This can be done by creating a tilted version of the true
density. Consider the parametric submodel
$$f_{\eta}\left(y\mid\theta\right)=f(y)\left(1+\theta'V^{-1}\left(y-\mu\right)\right) \qquad (2.13)$$
where f(y) is the true density and µ = Ey. Note that
$$\int f_{\eta}\left(y\mid\theta\right)dy=\int f(y)dy+\theta'V^{-1}\int f(y)\left(y-\mu\right)dy=1$$
and for all θ close to zero $f_{\eta}\left(y\mid\theta\right)\ge0$. Thus $f_{\eta}\left(y\mid\theta\right)$ is a valid density function. It is a parametric
submodel since $f_{\eta}\left(y\mid\theta_{0}\right)=f(y)$ when $\theta_0 = 0$. This parametric submodel has the mean
$$\mu(\theta)=\int yf_{\eta}\left(y\mid\theta\right)dy=\int yf(y)dy+\int f(y)y\left(y-\mu\right)'V^{-1}\theta\,dy=\mu+\theta$$
which is a smooth function of θ.
Since
$$\frac{\partial}{\partial\theta}\log f_{\eta}\left(y\mid\theta\right)=\frac{\partial}{\partial\theta}\log\left(1+\theta'V^{-1}\left(y-\mu\right)\right)=\frac{V^{-1}\left(y-\mu\right)}{1+\theta'V^{-1}\left(y-\mu\right)}$$
it follows that the score function for θ is
$$S_{\eta}=\frac{\partial}{\partial\theta}\log f_{\eta}\left(y\mid\theta_{0}\right)=V^{-1}\left(y-\mu\right). \qquad (2.14)$$
By Theorem B.11.3 the Cramér-Rao lower bound for θ is
$$\left(E\left(S_{\eta}S_{\eta}'\right)\right)^{-1}=\left(V^{-1}E\left(\left(y-\mu\right)\left(y-\mu\right)'\right)V^{-1}\right)^{-1}=V. \qquad (2.15)$$
² It is not obvious that this supremum exists, as V is a matrix so there is not a unique ordering of matrices.
However, in many cases (including the ones we study) the supremum exists and is unique.
The Cramér-Rao lower bound for µ(θ) = µ + θ is also V, and this equals the asymptotic variance
of the moment estimator $\hat{\mu}$. This was what we set out to show.
In summary, we have shown that in the submodel (2.13) the Cramér-Rao lower bound for
estimation of µ is V which equals the asymptotic variance of the sample mean. This establishes
the following result.
Proposition 2.13.1 In the class of distributions $F\in\mathcal{L}_{2}$, the semiparametric
variance bound for estimation of µ is V = var(y_i), and the sample
mean $\hat{\mu}$ is a semiparametrically efficient estimator of the population mean µ.

We call this result a proposition rather than a theorem as we have not attended to the regularity
conditions.
It is a simple matter to extend this result to the plug-in estimator $\hat{\beta}=g\left(\hat{\mu}\right)$. We know from
Theorem 2.10.4 that if E‖y‖² < ∞ and g(u) is continuously differentiable at u = µ then the plug-in
estimator has the asymptotic distribution $\sqrt{n}\left(\hat{\beta}-\beta\right)\overset{d}{\longrightarrow}\mathrm{N}\left(0,G'VG\right)$. We therefore consider the
class of distributions
$$\mathcal{L}_{2}(g)=\left\{F:E\left\|y\right\|^{2}<\infty,\ g\left(u\right)\text{ is continuously differentiable at }u=Ey\right\}.$$
For example, if $\beta=\mu_{1}/\mu_{2}$ where $\mu_{1}=Ey_{1}$ and $\mu_{2}=Ey_{2}$ then $\mathcal{L}_{2}(g)=\left\{F:Ey_{1}^{2}<\infty,\ Ey_{2}^{2}<\infty,\ \text{and }Ey_{2}\neq0\right\}$.
For any submodel η the Cramér-Rao lower bound for estimation of β = g(µ) is $G'V_{\eta}G$ by
Theorem B.11.5. For the submodel (2.13) this bound is $G'VG$ which equals the asymptotic variance
of $\hat{\beta}$ from Theorem 2.10.4. Thus $\hat{\beta}$ is semiparametrically efficient.

Proposition 2.13.2 In the class of distributions $F\in\mathcal{L}_{2}(g)$ the semiparametric
variance bound for estimation of β = g(µ) is $G'VG$, and the plug-in
estimator $\hat{\beta}=g\left(\hat{\mu}\right)$ is a semiparametrically efficient estimator of β.
The result in Proposition 2.13.2 is quite general. Smooth functions of sample moments are
ecient estimators for their population counterparts. This is a very powerful result, as most
econometric estimators can be written (or approximated) as smooth functions of sample means.
2.14 Expectation*
For any random variable y we define the mean or expectation Ey as follows. If y is discrete,
$$Ey=\sum_{j=1}^{\infty}\tau_{j}\Pr\left(y=\tau_{j}\right),$$
and if y is continuous with density f
$$Ey=\int_{-\infty}^{\infty}yf(y)dy.$$
We can unify these definitions by writing the expectation as the Lebesgue integral with respect to
the distribution function F
$$Ey=\int_{-\infty}^{\infty}ydF(y).$$
The mean is well defined and finite if
$$E\left|y\right|=\int_{-\infty}^{\infty}\left|y\right|dF(y)<\infty.$$
If this does not hold, we evaluate
$$I_{1}=\int_{0}^{\infty}ydF(y)$$
$$I_{2}=-\int_{-\infty}^{0}ydF(y).$$
If $I_1 = \infty$ and $I_2 < \infty$ then we define Ey = ∞. If $I_1 < \infty$ and $I_2 = \infty$ then we define Ey = −∞. If
both $I_1 = \infty$ and $I_2 = \infty$ then Ey is undefined.
If µ = Ey is well defined we say that µ is identified, meaning that the parameter is uniquely
determined by the distribution of the observed variables. The demonstration that the parameters
of an econometric model are identified is an important precondition for estimation. Typically,
identification holds under a set of restrictions, and an identification theorem carefully describes a
set of such conditions which are sufficient for identification. In the case of the mean µ, a sufficient
condition for identification is E|y| < ∞.
The mean of y is finite if E|y| < ∞. More generally, y has a finite r’th moment if E|y|^r < ∞.
It is common in econometric theory to assume that the variables, or certain transformations of
the variables, have ﬁnite moments of a certain order. How should we interpret this assumption?
How restrictive is it?
One way to visualize the importance is to consider the class of Pareto densities given by
$$f(y)=ay^{-a-1},\qquad y>1.$$
The parameter a of the Pareto distribution indexes the rate of decay of the tail of the density.
Larger a means that the tail declines to zero more quickly. See the figure below where we show the
Pareto density for a = 1 and a = 2. The parameter a also determines which moments are finite.
We can calculate that
$$E\left|y\right|^{r}=\begin{cases}a\int_{1}^{\infty}y^{r-a-1}dy=\dfrac{a}{a-r} & \text{if }r<a\\ \infty & \text{if }r\ge a.\end{cases}$$
Thus to allow for stricter finite moments (larger r) we need to restrict the class of permissible
densities (require larger a).
[Figure: Pareto Densities, a = 1 and a = 2]
Thus, broadly speaking, the restriction that y has a finite r’th moment means that the tail of y’s
density declines to zero faster than $y^{-r-1}$. The faster decline of the tail means that the probability
of observing an extreme value of y is a more rare event.
It is helpful to know that the existence of finite moments is monotonic. Liapunov’s Inequality
(B.23) implies that if $E\left|y\right|^{p}<\infty$ for some p > 0, then $E\left|y\right|^{r}<\infty$ for all 0 ≤ r ≤ p. For example,
Ey² < ∞ implies E|y| < ∞ and thus both the mean and variance of y are finite.
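To connect the Pareto calculation above to simulation, the following sketch (Python/NumPy; illustrative only, not from the manuscript) compares the analytical moment a/(a − r) with a sample average, and shows the sample average of y^r misbehaving when r ≥ a.

```python
# Pareto moments: E|y|^r = a/(a - r) for r < a, infinite for r >= a.
import numpy as np

rng = np.random.default_rng(5)
a = 2.0
y = (1.0 - rng.random(1_000_000)) ** (-1.0 / a)  # Pareto(a) draws, y > 1

print("r = 1:", np.mean(y), "vs a/(a-1) =", a / (a - 1))  # finite, agrees
print("r = 2:", np.mean(y**2), "(E y^2 is infinite: the sample average is unstable)")
```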
2.15 Technical Proofs*
In this section we provide proofs of some of the more technical points in the chapter. These
proofs may only be of interest to the more mathematically inclined reader.
Proof of Theorem 2.6.1: Without loss of generality, we can assume E(y_i) = 0 by recentering y_i on its expectation.
We need to show that for all ε > 0 and η > 0 there is some N < ∞ so that for all n ≥ N,
$\Pr\left(\left|\bar{y}\right|>\varepsilon\right)\le\eta$. Fix ε and η. Set δ = εη/3. Pick C < ∞ large enough so that
$$E\left(\left|y_{i}\right|1\left(\left|y_{i}\right|>C\right)\right)\le\delta \qquad (2.16)$$
(where 1(·) is the indicator function) which is possible since E|y_i| < ∞. Define the random variables
$$w_{i}=y_{i}1\left(\left|y_{i}\right|\le C\right)-E\left(y_{i}1\left(\left|y_{i}\right|\le C\right)\right)$$
$$z_{i}=y_{i}1\left(\left|y_{i}\right|>C\right)-E\left(y_{i}1\left(\left|y_{i}\right|>C\right)\right)$$
so that
$$\bar{y}=\bar{w}+\bar{z}$$
and
$$E\left|\bar{y}\right|\le E\left|\bar{w}\right|+E\left|\bar{z}\right|. \qquad (2.17)$$
We now show that the sum of the expectations on the right-hand side can be bounded below 3δ.
First, by the Triangle Inequality (A.9) and the Expectation Inequality (B.18),
$$E\left|z_{i}\right|=E\left|y_{i}1\left(\left|y_{i}\right|>C\right)-E\left(y_{i}1\left(\left|y_{i}\right|>C\right)\right)\right|$$
$$\le E\left|y_{i}1\left(\left|y_{i}\right|>C\right)\right|+\left|E\left(y_{i}1\left(\left|y_{i}\right|>C\right)\right)\right|$$
$$\le2E\left|y_{i}1\left(\left|y_{i}\right|>C\right)\right|\le2\delta, \qquad (2.18)$$
and thus by the Triangle Inequality (A.9) and (2.18)
$$E\left|\bar{z}\right|=E\left|\frac{1}{n}\sum_{i=1}^{n}z_{i}\right|\le\frac{1}{n}\sum_{i=1}^{n}E\left|z_{i}\right|\le2\delta. \qquad (2.19)$$
Second, by a similar argument
$$\left|w_{i}\right|=\left|y_{i}1\left(\left|y_{i}\right|\le C\right)-E\left(y_{i}1\left(\left|y_{i}\right|\le C\right)\right)\right|$$
$$\le\left|y_{i}1\left(\left|y_{i}\right|\le C\right)\right|+\left|E\left(y_{i}1\left(\left|y_{i}\right|\le C\right)\right)\right|$$
$$\le2\left|y_{i}1\left(\left|y_{i}\right|\le C\right)\right|\le2C \qquad (2.20)$$
where the final inequality holds since $\left|y_{i}1\left(\left|y_{i}\right|\le C\right)\right|\le C$. Then by Jensen’s Inequality (B.15), the fact that the $w_i$ are
iid and mean zero, and (2.20),
$$\left(E\left|\bar{w}\right|\right)^{2}\le E\left|\bar{w}\right|^{2}=\frac{Ew_{i}^{2}}{n}\le\frac{4C^{2}}{n}\le\delta^{2}, \qquad (2.21)$$
the final inequality holding for $n\ge4C^{2}/\delta^{2}=36C^{2}/\varepsilon^{2}\eta^{2}$. Equations (2.17), (2.19) and (2.21)
together show that
$$E\left|\bar{y}\right|\le3\delta \qquad (2.22)$$
as desired.
Finally, by Markov’s Inequality (B.24) and (2.22),
$$\Pr\left(\left|\bar{y}\right|>\varepsilon\right)\le\frac{E\left|\bar{y}\right|}{\varepsilon}\le\frac{3\delta}{\varepsilon}=\eta,$$
the final equality by the definition of δ. We have shown that for any ε > 0 and η > 0 then for all
$n\ge36C^{2}/\varepsilon^{2}\eta^{2}$, $\Pr\left(\left|\bar{y}\right|>\varepsilon\right)\le\eta$, as needed.
Proof of Theorem 2.7.1: By Loève’s $c_r$ Inequality (B.14)
$$\left\|y\right\|=\left(\sum_{j=1}^{m}y_{j}^{2}\right)^{1/2}\le\sum_{j=1}^{m}\left|y_{j}\right|.$$
Thus if $E\left|y_{j}\right|<\infty$ for j = 1, ..., m, then
$$E\left\|y\right\|\le\sum_{j=1}^{m}E\left|y_{j}\right|<\infty.$$
For the reverse inequality, the Euclidean norm of a vector is larger than the length of any individual
component, so for any j, $\left|y_{j}\right|\le\left\|y\right\|$. Thus, if $E\left\|y\right\|<\infty$, then $E\left|y_{j}\right|<\infty$ for j = 1, ..., m.
Proof of Theorem 2.8.1: The moment bound $E\left(y_{i}'y_{i}\right)<\infty$ is sufficient to guarantee that the
elements of µ and V are well defined and finite. Without loss of generality, it is sufficient to
consider the case µ = 0.
Our proof method is to calculate the characteristic function of $\sqrt{n}\bar{y}_{n}$ and show that it converges
pointwise to the characteristic function of N(0, V). By Lévy’s Continuity Theorem (see Van der
Vaart (2008) Theorem 2.13) this is sufficient to establish that $\sqrt{n}\bar{y}_{n}$ converges in distribution to
N(0, V).
For $\lambda\in\mathbb{R}^{m}$, let $C\left(\lambda\right)=E\exp\left(\mathrm{i}\lambda'y_{i}\right)$ denote the characteristic function of $y_{i}$ and set $c\left(\lambda\right)=\log C\left(\lambda\right)$.
Since $y_{i}$ has two finite moments the first and second derivatives of $C\left(\lambda\right)$ are continuous
in λ. They are
$$\frac{\partial}{\partial\lambda}C\left(\lambda\right)=\mathrm{i}\,E\left(y_{i}\exp\left(\mathrm{i}\lambda'y_{i}\right)\right)$$
$$\frac{\partial^{2}}{\partial\lambda\partial\lambda'}C\left(\lambda\right)=\mathrm{i}^{2}\,E\left(y_{i}y_{i}'\exp\left(\mathrm{i}\lambda'y_{i}\right)\right).$$
When evaluated at λ = 0
$$C\left(0\right)=1$$
$$\frac{\partial}{\partial\lambda}C\left(0\right)=\mathrm{i}\,E\left(y_{i}\right)=0$$
$$\frac{\partial^{2}}{\partial\lambda\partial\lambda'}C\left(0\right)=-E\left(y_{i}y_{i}'\right)=-V.$$
Furthermore,
$$c_{\lambda}\left(\lambda\right)=\frac{\partial}{\partial\lambda}c\left(\lambda\right)=C\left(\lambda\right)^{-1}\frac{\partial}{\partial\lambda}C\left(\lambda\right)$$
$$c_{\lambda\lambda}\left(\lambda\right)=\frac{\partial^{2}}{\partial\lambda\partial\lambda'}c\left(\lambda\right)=C\left(\lambda\right)^{-1}\frac{\partial^{2}}{\partial\lambda\partial\lambda'}C\left(\lambda\right)-C\left(\lambda\right)^{-2}\frac{\partial}{\partial\lambda}C\left(\lambda\right)\frac{\partial}{\partial\lambda'}C\left(\lambda\right)$$
so when evaluated at λ = 0
$$c\left(0\right)=0$$
$$c_{\lambda}\left(0\right)=0$$
$$c_{\lambda\lambda}\left(0\right)=-V.$$
By a second-order Taylor series expansion of $c\left(\lambda\right)$ about λ = 0,
$$c\left(\lambda\right)=c\left(0\right)+c_{\lambda}\left(0\right)'\lambda+\frac{1}{2}\lambda'c_{\lambda\lambda}\left(\lambda^{*}\right)\lambda=\frac{1}{2}\lambda'c_{\lambda\lambda}\left(\lambda^{*}\right)\lambda \qquad (2.23)$$
where $\lambda^{*}$ lies on the line segment joining 0 and λ.
We now compute $C_{n}\left(\lambda\right)=E\exp\left(\mathrm{i}\lambda'\sqrt{n}\bar{y}_{n}\right)$, the characteristic function of $\sqrt{n}\bar{y}_{n}$. By the properties
of the exponential function, the independence of the $y_{i}$, the definition of $c\left(\lambda\right)$ and (2.23)
$$\log C_{n}\left(\lambda\right)=\log E\exp\left(\mathrm{i}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\lambda'y_{i}\right)$$
$$=\log E\prod_{i=1}^{n}\exp\left(\mathrm{i}\frac{1}{\sqrt{n}}\lambda'y_{i}\right)$$
$$=\log\prod_{i=1}^{n}E\exp\left(\mathrm{i}\frac{1}{\sqrt{n}}\lambda'y_{i}\right)$$
$$=\sum_{i=1}^{n}\log E\exp\left(\mathrm{i}\frac{1}{\sqrt{n}}\lambda'y_{i}\right)$$
$$=nc\left(\frac{\lambda}{\sqrt{n}}\right)=\frac{1}{2}\lambda'c_{\lambda\lambda}\left(\lambda_{n}\right)\lambda$$
where $\lambda_{n}$ lies on the line segment joining 0 and $\lambda/\sqrt{n}$. Since $\lambda_{n}\to0$ and $c_{\lambda\lambda}\left(\lambda\right)$ is continuous,
$c_{\lambda\lambda}\left(\lambda_{n}\right)\to c_{\lambda\lambda}\left(0\right)=-V$. We thus find that as n → ∞,
$$\log C_{n}\left(\lambda\right)\to-\frac{1}{2}\lambda'V\lambda$$
and
$$C_{n}\left(\lambda\right)\to\exp\left(-\frac{1}{2}\lambda'V\lambda\right)$$
which is the characteristic function of the N(0, V) distribution. This completes the proof.
Proof of Theorem 2.9.1: Since g is continuous at c, for all ε > 0 we can find a δ > 0 such
that if $\left\|z_{n}-c\right\|<\delta$ then $\left\|g\left(z_{n}\right)-g\left(c\right)\right\|\le\varepsilon$. Recall that A ⊂ B implies Pr(A) ≤ Pr(B). Thus
$\Pr\left(\left\|g\left(z_{n}\right)-g\left(c\right)\right\|\le\varepsilon\right)\ge\Pr\left(\left\|z_{n}-c\right\|<\delta\right)\to1$ as n → ∞ by the assumption that $z_{n}\overset{p}{\longrightarrow}c$.
Hence $g(z_{n})\overset{p}{\longrightarrow}g(c)$ as n → ∞.
Proof of Theorem 2.10.3: By a vector Taylor series expansion, for each element of g,
$$g_{j}\left(\theta_{n}\right)=g_{j}\left(\theta\right)+g_{j\theta}\left(\theta_{jn}^{*}\right)\left(\theta_{n}-\theta\right)$$
where $\theta_{jn}^{*}$ lies on the line segment between $\theta_{n}$ and θ and therefore converges in probability to θ.
It follows that $a_{jn}=g_{j\theta}\left(\theta_{jn}^{*}\right)-g_{j\theta}\left(\theta\right)\overset{p}{\longrightarrow}0$. Stacking across elements of g, we find
$$\sqrt{n}\left(g\left(\theta_{n}\right)-g(\theta)\right)=\left(G+a_{n}\right)'\sqrt{n}\left(\theta_{n}-\theta\right)\ \overset{d}{\longrightarrow}\ G'\xi. \qquad (2.24)$$
The convergence is by Theorem 2.10.1, as $G+a_{n}\overset{d}{\longrightarrow}G$, $\sqrt{n}\left(\theta_{n}-\theta\right)\overset{d}{\longrightarrow}\xi$, and their product is
continuous. This establishes (2.10).
When ξ ∼ N(0, V), the right-hand side of (2.24) equals
$$G'\xi=G'\mathrm{N}\left(0,V\right)=\mathrm{N}\left(0,G'VG\right)$$
establishing (2.11).

Chapter 3
Conditional Expectation and
Projection
Chapter 3
Conditional Expectation and
Projection
3.1 Introduction
The most commonly applied econometric tool is least-squares estimation, also known as regression.
As we will see, least-squares is a tool to estimate an approximate conditional mean of one
variable (the dependent variable) given another set of variables (the regressors, conditioning
variables, or covariates).
In this chapter we abstract from estimation, and focus on the probabilistic foundation of the
conditional expectation model and its projection approximation.
3.2 The Distribution of Wages
Suppose that we are interested in wage rates in the United States. Since wage rates vary across
workers, we cannot describe wage rates by a single number. Instead, we can describe wages using a
probability distribution. Formally, we view the wage of an individual worker as a random variable
wage with the probability distribution
$$F(u)=\Pr(wage\le u).$$
When we say that a person’s wage is random we mean that we do not know their wage before it is
measured, and we treat observed wage rates as realizations from the distribution F. Treating un
observed wages as random variables and observed wages as realizations is a powerful mathematical
abstraction which allows us to use the tools of mathematical probability.
A useful thought experiment is to imagine dialing a telephone number selected at random, and
then asking the person who responds to tell us their wage rate. (Assume for simplicity that all
workers have equal access to telephones, and that the person who answers your call will respond
honestly.) In this thought experiment, the wage of the person you have called is a single draw from
the distribution F of wages in the population. By making many such phone calls we can learn the
distribution F of the entire population.
When a distribution function F is differentiable we define the probability density function
$$f(u)=\frac{d}{du}F(u).$$
The density contains the same information as the distribution function, but the density is typically
easier to visually interpret.
[Figure 3.1: Wage Distribution and Density. All full-time U.S. workers]
In Figure 3.1 we display estimates
1
of the probability distribution function (on the left) and
density function (on the right) of U.S. wage rates in 2009. We see that the density is peaked around
$15, and most of the probability mass appears to lie between $10 and $40. These are ranges for
typical wage rates in the U.S. population.
Important measures of central tendency are the median and the mean. The median m of a
continuous² distribution F is the unique solution to
$$F(m)=\frac{1}{2}.$$
The median U.S. wage ($19.23) is indicated in the left panel of Figure 3.1 by the arrow. The median
is a robust³ measure of central tendency, but it is tricky to use for many calculations as it is not
a linear operator. For this reason the median is not the dominant measure of central tendency in
econometrics.
As discussed in Sections 2.2 and 2.14, the expectation or mean of a random variable y with
density f is
$$\mu=E(y)=\int_{-\infty}^{\infty}uf(u)du.$$
The mean U.S. wage ($23.90) is indicated in the right panel of Figure 3.1 by the arrow. Here
we have used the common and convenient convention of using the single character y to denote a
random variable, rather than the more cumbersome label wage.
The mean is a convenient measure of central tendency because it is a linear operator and arises
naturally in many economic models. A disadvantage of the mean is that it is not robust⁴, especially
in the presence of substantial skewness or thick tails, which are both features of the wage distribution
¹ The distribution and density are estimated nonparametrically from the sample of 50,742 full-time non-military
wage-earners reported in the March 2009 Current Population Survey. The wage rate is constructed as individual
wage and salary earnings divided by hours worked.
² If F is not continuous the definition is m = inf{u : F(u) ≥ ½}.
³ The median is not sensitive to perturbations in the tails of the distribution.
⁴ The mean is sensitive to perturbations in the tails of the distribution.
as can be seen easily in the right panel of Figure 3.1. Another way of viewing this is that 64% of
workers earn less than the mean wage of $23.90, suggesting that it is incorrect to describe $23.90
as a “typical” wage rate.
[Figure 3.2: Log Wage Density]
In this context it is useful to transform the data by taking the natural logarithm
5
. Figure 3.2
shows the density of log hourly wages log(wage) for the same population, with its mean 2.95 drawn
in with the arrow. The density of log wages is much less skewed and fattailed than the density of
the level of wages, so its mean
E(log(wage)) = 2.95
is a much better (more robust) measure
6
of central tendency of the distribution. For this reason,
wage regressions typically use log wages as a dependent variable rather than the level of wages.
3.3 Conditional Expectation
We saw in Figure 3.2 the density of log wages. Is this wage distribution the same for all workers,
or does the wage distribution vary across subpopulations? To answer this question, we can compare
wage distributions for different groups — for example, men and women. The plot on the left in Figure
3.3 displays the densities of log wages for U.S. men and women with their means (3.05 and 2.81)
indicated by the arrows. We can see that the two wage densities take similar shapes but the density
for men is somewhat shifted to the right with a higher mean.
The values 3.05 and 2.81 are the mean log wages in the subpopulations of men and women
workers. They are called the conditional means (or conditional expectations) of log wages
given gender. We can write their speciﬁc values as
E(log(wage)  gender = man) = 3.05 (3.1)
E(log(wage)  gender = woman) = 2.81. (3.2)
We call these means conditional as they are conditioning on a ﬁxed value of the variable gender.
While you might not think of gender as a random variable, it is random from the viewpoint of
5
Throughout the text, we will use log(y) to denote the natural logarithm of y.
6
More precisely, the geometric mean exp (E(log w)) = $19.11 is a robust measure of central tendency.
[Figure 3.3: Left: Log Wage Density for Women and Men. Right: Log Wage Density by Gender and Race]
econometric analysis. If you randomly select an individual, the gender of the individual is unknown
and thus random. (In the population of U.S. workers, the probability that a worker is a woman
happens to be 43%.) In observational data, it is most appropriate to view all measurements as
random variables, and the means of subpopulations are then conditional means.
As the two densities in Figure 3.3 appear similar, a hasty inference might be that there is not
a meaningful difference between the wage distributions of men and women. Before jumping to this
conclusion let us examine the differences in the distributions of Figure 3.3 more carefully. As we
mentioned above, the primary difference between the two densities appears to be their means. This
difference equals
$$E\left(\log(wage)\mid gender=man\right)-E\left(\log(wage)\mid gender=woman\right)=3.05-2.81=0.24 \qquad (3.3)$$
A difference in expected log wages of 0.24 implies an average 24% difference between the wages
of men and women, which is quite substantial. (For an explanation of logarithmic and percentage
differences see the box on Log Differences below.)
Consider further splitting the men and women subpopulations by race, dividing the population
into whites, blacks, and other races. We display the log wage density functions of four of these
groups on the right in Figure 3.3. Again we see that the primary difference between the four density
functions is their central tendency.
Focusing on the means of these distributions, Table 3.1 reports the mean log wage for each of
the six subpopulations.
men women
white 3.07 2.82
black 2.86 2.73
other 3.03 2.86
Table 3.1: Mean Log Wages by Sex and Race
The entries in Table 3.1 are the conditional means of log(wage) given gender and race. For
example
$$E\left(\log(wage)\mid gender=man,\ race=white\right)=3.07$$
and
$$E\left(\log(wage)\mid gender=woman,\ race=black\right)=2.73$$
One beneﬁt of focusing on conditional means is that they reduce complicated distributions
to a single summary measure, and thereby facilitate comparisons across groups. Because of this
simplifying property, conditional means are the primary interest of regression analysis and are a
major focus in econometrics.
Table 3.1 allows us to easily calculate average wage differences between groups. For example,
we can see that the wage gap between men and women continues after disaggregation by race, as
the average gap between white men and white women is 25%, and that between black men and
black women is 13%. We also can see that there is a race gap, as the average wages of blacks are
substantially less than the other race categories. In particular, the average wage gap between white
men and black men is 21%, and that between white women and black women is 9%.
Log Differences

A useful approximation for the natural logarithm for small x is
$$\log\left(1+x\right)\approx x. \qquad (3.4)$$
This can be derived from the infinite series expansion of log(1 + x):
$$\log\left(1+x\right)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\cdots=x+O(x^{2}).$$
The symbol O(x²) means that the remainder is bounded by Ax² as x → 0 for some A < ∞.
A plot of log(1 + x) and the linear approximation x is shown in the following figure. We
can see that log(1 + x) and the linear approximation x are very close for |x| ≤ 0.1, and
reasonably close for |x| ≤ 0.2, but the difference increases with |x|.
[Figure: log(1 + x) and the linear approximation x]
Now, if y* is c% greater than y, then
$$y^{*}=(1+c/100)y.$$
Taking natural logarithms,
$$\log y^{*}=\log y+\log(1+c/100)$$
or
$$\log y^{*}-\log y=\log(1+c/100)\approx\frac{c}{100}$$
where the approximation is (3.4). This shows that 100 multiplied by the difference in
logarithms is approximately the percentage difference between y and y*, and this approximation
is quite good for c ≤ 10%.
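A quick numerical check of (3.4) (Python sketch, not part of the text): compare 100·(log y* − log y) with the exact percentage difference c for a few values of c.

```python
# The log-difference approximation: 100*(log y* - log y) ~ c when y* is c% above y.
import math

y = 20.0
for c in [1, 5, 10, 20, 50]:
    y_star = (1 + c / 100) * y
    log_diff = 100 * (math.log(y_star) - math.log(y))
    print(f"c = {c:>2d}%   100*(log y* - log y) = {log_diff:.2f}")
```

The printed values stay close to c for small c and drift away as c grows, which is the pattern described in the box.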
3.4 Conditional Expectation Function
An important determinant of wage levels is education. In many empirical studies economists
measure educational attainment by the number of years of schooling, and we will write this variable
as education
7
.
7
Here, education is deﬁned as years of schooling beyond kindergarten. A high school graduate has education=12,
a college graduate has education=16, a Master’s degree has education=18, and a professional degree (medical, law or
The conditional mean of log wages given gender, race, and education is a single number for each
category. For example
$$E\left(\log(wage)\mid gender=man,\ race=white,\ education=12\right)=2.84$$
We display in Figure 3.4 the conditional means of log(wage) for white men and white women as a
function of education. The plot is quite revealing. We see that the conditional mean is increasing in
years of education, but at a different rate for schooling levels above and below nine years. Another
striking feature of Figure 3.4 is that the gap between men and women is roughly constant for all
education levels. As the variables are measured in logs this implies a constant average percentage
gap between men and women regardless of educational attainment.
[Figure 3.4: Mean Log Wage as a Function of Years of Education]
In many cases it is convenient to simplify the notation by writing variables using single characters,
typically y, x and/or z. It is conventional in econometrics to denote the dependent variable
(e.g. log(wage)) by the letter y, a conditioning variable (such as gender) by the letter x, and
multiple conditioning variables (such as race, education and gender) by the subscripted letters
x₁, x₂, ..., x_k.
Conditional expectations can be written with the generic notation
$$E\left(y\mid x_{1},x_{2},...,x_{k}\right)=m\left(x_{1},x_{2},...,x_{k}\right).$$
We call this the conditional expectation function (CEF). The CEF is a function of (x₁, x₂, ..., x_k)
as it varies with the variables. For example, the conditional expectation of y = log(wage) given
(x₁, x₂) = (gender, race) is given by the six entries of Table 3.1. The CEF is a function of (gender,
race) as it varies across the entries.
For greater compactness, we will typically write the conditioning variables as a vector in ℝ^k:
$$x=\begin{pmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{k}\end{pmatrix}. \qquad (3.5)$$
PhD) has education=20.
Here we follow the convention of using lower case bold italics x to denote a vector. Given this
notation, the CEF can be compactly written as
$$E\left(y\mid x\right)=m\left(x\right).$$
Sometimes, it is useful to notationally distinguish E(y | x) as the CEF evaluated at the random
vector x from E(y | x = x₀) as the CEF evaluated at the fixed value x₀. (And it is mathematically
correct to do so.) The first expression E(y | x) is a random variable and the second expression
E(y | x = x₀) is a function. We will not always enforce this distinction as it can become notationally
burdensome. Hopefully, the use of E(y | x) should be apparent from the context.
3.5 Continuous Variables
In the previous sections, we implicitly assumed that the conditioning variables are discrete.
However, many conditioning variables are continuous. In this section, we take up this case and
assume that the variables (y, x) are continuously distributed with a joint density function f(y, x).
As an example, take y = log(wage) and x = experience, the number of years of labor market
experience. The contours of their joint density are plotted on the left side of Figure 3.5 for the
population of white men with 12 years of education.
[Figure 3.5: Left: Joint density of log(wage) and experience and conditional mean of log(wage) given experience for white men with education=12. Right: Conditional densities of log(wage) for white men with education=12.]
Given the joint density f(y, x) the variable x has the marginal density
$$f_{x}(x)=\int_{\mathbb{R}}f(y,x)dy.$$
For any x such that $f_{x}(x)>0$ the conditional density of y given x is defined as
$$f_{y\mid x}\left(y\mid x\right)=\frac{f(y,x)}{f_{x}(x)}. \qquad (3.6)$$
The conditional density is a slice of the joint density f(y, x) holding x ﬁxed. We can visualize this
by slicing the joint density function at a speciﬁc value of x parallel with the yaxis. For example,
take the density contours on the left side of Figure 3.5 and slice through the contour plot at a
speciﬁc value of experience. This gives us the conditional density of log(wage) for white men with
12 years of education and this level of experience. We do this for four levels of experience (5, 10,
25, and 40 years), and plot these densities on the right side of Figure 3.5. We can see that the
distribution of wages shifts to the right and becomes more diffuse as experience increases from 5 to
10 years, and from 10 to 25 years, but there is little change from 25 to 40 years experience.
The CEF of y given x is the mean of the conditional density (3.6)
$$m\left(x\right)=E\left(y\mid x\right)=\int_{\mathbb{R}}yf_{y\mid x}\left(y\mid x\right)dy. \qquad (3.7)$$
Intuitively, m(x) is the mean of y for the idealized subpopulation where the conditioning variables
are fixed at x. This is idealized since x is continuously distributed so this subpopulation is infinitely
small.
In Figure 3.5 the CEF of log(wage) given experience is plotted as the solid line. We can see
that the CEF is a smooth but nonlinear function. The CEF is initially increasing in experience,
ﬂattens out around experience = 30, and then decreases for high levels of experience.
3.6 Law of Iterated Expectations
An extremely useful tool from probability theory is the law of iterated expectations. An
important special case is known as the Simple Law.
Theorem 3.6.1 Simple Law of Iterated Expectations
If E|y| < ∞ then for any random vector x,
$$E\left(E\left(y\mid x\right)\right)=E(y)$$
The simple law states that the expectation of the conditional expectation is the unconditional
expectation. In other words, the average of the conditional averages is the unconditional average.
When x is discrete
$$E\left(E\left(y\mid x\right)\right)=\sum_{j=1}^{\infty}E\left(y\mid x_{j}\right)\Pr\left(x=x_{j}\right)$$
and when x is continuous
$$E\left(E\left(y\mid x\right)\right)=\int_{\mathbb{R}^{k}}E\left(y\mid x\right)f_{x}(x)dx.$$
Going back to our investigation of average log wages for men and women, the simple law states
that
$$E\left(\log(wage)\mid gender=man\right)\Pr\left(gender=man\right)+E\left(\log(wage)\mid gender=woman\right)\Pr\left(gender=woman\right)=E\left(\log(wage)\right).$$
Or numerically,
$$3.05\times0.57+2.81\times0.43=2.95.$$
The general law of iterated expectations allows two sets of conditioning variables.
Theorem 3.6.2 Law of Iterated Expectations
If E|y| < ∞ then for any random vectors x₁ and x₂,
$$E\left(E\left(y\mid x_{1},x_{2}\right)\mid x_{1}\right)=E\left(y\mid x_{1}\right)$$
Notice the way the law is applied. The inner expectation conditions on x₁ and x₂, while
the outer expectation conditions only on x₁. The iterated expectation yields the simple answer
E(y | x₁), the expectation conditional on x₁ alone. Sometimes we phrase this as: “The smaller
information set wins.”
As an example
$$E\left(\log(wage)\mid gender=man,\ race=white\right)\Pr\left(race=white\mid gender=man\right)$$
$$+\ E\left(\log(wage)\mid gender=man,\ race=black\right)\Pr\left(race=black\mid gender=man\right)$$
$$+\ E\left(\log(wage)\mid gender=man,\ race=other\right)\Pr\left(race=other\mid gender=man\right)$$
$$=E\left(\log(wage)\mid gender=man\right)$$
or numerically
$$3.07\times0.84+2.86\times0.08+3.03\times0.08=3.05$$
A property of conditional expectations is that when you condition on a random vector x you
can effectively treat it as if it is constant. For example, E(x | x) = x and E(g(x) | x) = g(x) for
any function g(·). The general property is known as the conditioning theorem.

Theorem 3.6.3 Conditioning Theorem
If
$$E\left|g\left(x\right)y\right|<\infty \qquad (3.8)$$
then
$$E\left(g\left(x\right)y\mid x\right)=g\left(x\right)E\left(y\mid x\right) \qquad (3.9)$$
and
$$E\left(g\left(x\right)y\right)=E\left(g\left(x\right)E\left(y\mid x\right)\right) \qquad (3.10)$$
The proofs of Theorems 3.6.1, 3.6.2 and 3.6.3 are given in Section 3.30.
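As an illustrative sketch (Python/NumPy; the binary-group design and its numbers are assumptions chosen to echo the men/women example, not the text's data), one can check the Simple Law numerically: average the group conditional means with their group probabilities and compare with the overall mean.

```python
# Numerical check of the Simple Law of Iterated Expectations: E(E(y|x)) = E(y).
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
x = rng.random(n) < 0.57                          # group indicator, Pr(x=1) = 0.57
y = np.where(x, 3.05, 2.81) + 0.5 * rng.standard_normal(n)

cond_mean_1 = y[x].mean()                         # E(y | x = 1)
cond_mean_0 = y[~x].mean()                        # E(y | x = 0)
iterated = cond_mean_1 * x.mean() + cond_mean_0 * (1 - x.mean())

print("E(E(y|x)) =", round(iterated, 4), "   E(y) =", round(y.mean(), 4))
```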
3.7 Monotonicity of Conditioning
What is the effect of increasing the amount of information when constructing a conditional
expectation? That is, how do we compare E(y | x₁) versus E(y | x₁, x₂)? We have seen that
by increasing the conditioning set, the conditional expectation reveals greater detail about the
distribution of y. Is there something more that can be said?
It turns out that there is a simple relationship induced by conditioning. We can think of
the conditional mean E(y | x₁) as the “explained portion” of y. The remainder y − E(y | x₁) is
the “unexplained portion”. The simple relationship we now derive shows that the variance of
this unexplained portion decreases when we condition on more variables. This relationship is
monotonic in the sense that increasing the amount of information always decreases the variance of
the unexplained portion.
Theorem 3.7.1 If Ey² < ∞ then
$$\mathrm{var}\left(y\right)\ge\mathrm{var}\left(y-E\left(y\mid x_{1}\right)\right)\ge\mathrm{var}\left(y-E\left(y\mid x_{1},x_{2}\right)\right)$$

Theorem 3.7.1 says that the variance of the difference between y and its conditional mean
(weakly) decreases whenever an additional variable is added to the conditioning information.
The proof of Theorem 3.7.1 is given in Section 3.30.
3.8 CEF Error
The CEF error e is defined as the difference between y and the CEF evaluated at the random
vector x:
$$e=y-m(x).$$
By construction, this yields the formula
$$y=m(x)+e. \qquad (3.11)$$
In (3.11) it is useful to understand that the error e is derived from the joint distribution of
(y, x), and so its properties are derived from this construction.
A key property of the CEF error is that it has a conditional mean of zero. To see this, by the
linearity of expectations, the definition m(x) = E(y | x) and the Conditioning Theorem
$$E\left(e\mid x\right)=E\left(\left(y-m(x)\right)\mid x\right)=E\left(y\mid x\right)-E\left(m(x)\mid x\right)=m(x)-m(x)=0.$$
This fact can be combined with the law of iterated expectations to show that the unconditional
mean is also zero:
$$E(e)=E\left(E\left(e\mid x\right)\right)=E(0)=0.$$
We state this and some other results formally.
Theorem 3.8.1 Properties of the CEF error
If E|y| < ∞ then
1. E(e | x) = 0.
2. E(e) = 0.
3. If E|y|^r < ∞ for r ≥ 1 then E|e|^r < ∞.
4. For any function h(x) such that E|h(x) e| < ∞ then E(h(x) e) = 0.
The proof of the third result is deferred to Section 3.30.
The fourth result, whose proof is left to Exercise 3.3, says that e is uncorrelated with any
function of the regressors.
The equations
$$y=m(x)+e$$
$$E\left(e\mid x\right)=0$$
together imply that m(x) is the CEF of y given x. It is important to understand that this is not
a restriction. These equations hold true by definition.
The condition E(e | x) = 0 is implied by the definition of e as the difference between y and the
CEF m(x). The equation E(e | x) = 0 is sometimes called a conditional mean restriction, since
the conditional mean of the error e is restricted to equal zero. The property is also sometimes called
mean independence, for the conditional mean of e is 0 and thus independent of x. However,
it does not imply that the distribution of e is independent of x. Sometimes the assumption “e is
independent of x” is added as a convenient simplification, but it is not a generic feature of the conditional
mean. Typically and generally, e and x are jointly dependent, even though the conditional
mean of e is zero.
As an example, the contours of the joint density of e and experience are plotted in Figure 3.6
for the same population as Figure 3.5. The error e has a conditional mean of zero for all values of
experience, but the shape of the conditional distribution varies with the level of experience.
[Figure 3.6: Joint density of CEF error e and experience for white men with education=12.]
As a simple example of a case where x and e are mean independent yet dependent, let y = xu
where x and u are independent and Eu = 1. Then
$$E\left(y\mid x\right)=E\left(xu\mid x\right)=x\,E\left(u\mid x\right)=x$$
so the CEF equation is
$$y=x+e$$
where
$$e=x(u-1).$$
Note that even though e is not independent of x,
$$E\left(e\mid x\right)=E\left(x(u-1)\mid x\right)=x\,E\left((u-1)\mid x\right)=0$$
and is thus mean independent.
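A short simulation of the y = xu example (Python/NumPy sketch under assumed distributions for x and u, not from the text): the conditional mean of e is essentially zero at every x, yet the spread of e clearly depends on x.

```python
# y = x*u with x and u independent, E(u) = 1, so e = x(u - 1):
# E(e | x) = 0 but var(e | x) = x^2 var(u) depends on x.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.uniform(0.5, 3.0, size=n)
u = rng.exponential(scale=1.0, size=n)   # E(u) = 1
e = x * (u - 1)

# crude conditional moments within bins of x
bins = np.digitize(x, [1.0, 1.5, 2.0, 2.5])
for b in range(5):
    eb = e[bins == b]
    print(f"x-bin {b}: mean(e) = {eb.mean():+.4f}   var(e) = {eb.var():.3f}")
```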
An important measure of the dispersion about the CEF function is the unconditional variance
of the CEF error e. We write this as
$$\sigma^{2}=\mathrm{var}\left(e\right)=E\left(e-Ee\right)^{2}=E\left(e^{2}\right).$$
Theorem 3.8.1.3 implies the following simple but useful result.
Theorem 3.8.2 If Ey² < ∞ then σ² < ∞.
3.9 Best Predictor
Suppose that given a realized value of x, we want to create a prediction or forecast of y. We can
write any predictor as a function g(x) of x. The prediction error is the realized difference y − g(x).
A non-stochastic measure of the magnitude of the prediction error is the expectation of its square
$$E\left(y-g\left(x\right)\right)^{2}. \qquad (3.12)$$
We can define the best predictor as the function g(x) which minimizes (3.12). What function
is the best predictor? It turns out that the answer is the CEF m(x). This holds regardless of the
joint distribution of (y, x).
To see this, note that the mean squared error of a predictor g(x) is
$$E\left(y-g\left(x\right)\right)^{2}=E\left(e+m\left(x\right)-g\left(x\right)\right)^{2}$$
$$=Ee^{2}+2E\left(e\left(m\left(x\right)-g\left(x\right)\right)\right)+E\left(m\left(x\right)-g\left(x\right)\right)^{2}$$
$$=Ee^{2}+E\left(m\left(x\right)-g\left(x\right)\right)^{2}$$
$$\ge Ee^{2}=E\left(y-m\left(x\right)\right)^{2}$$
where the first equality makes the substitution y = m(x) + e and the third equality uses Theorem
3.8.1.4. The right-hand side after the third equality is minimized by setting g(x) = m(x), yielding
the final inequality. The minimum is finite under the assumption Ey² < ∞ as shown by Theorem
3.8.2.
We state this formally in the following result.
Theorem 3.9.1 Conditional Mean as Best Predictor
If Ey² < ∞, then for any predictor g(x),
$$E\left(y-g\left(x\right)\right)^{2}\ge E\left(y-m\left(x\right)\right)^{2}$$
where m(x) = E(y | x).
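As an illustrative sketch (Python/NumPy with an assumed design, not from the manuscript), compare the mean squared prediction error of the true CEF with that of a few alternative predictors; the CEF attains the smallest value, as Theorem 3.9.1 predicts.

```python
# The CEF m(x) = E(y | x) minimizes mean squared prediction error.
# Assumed design: y = m(x) + e with m(x) = 1 + 2x and heteroskedastic noise.
import numpy as np

rng = np.random.default_rng(8)
n = 500_000
x = rng.uniform(0, 2, size=n)
y = 1 + 2 * x + (0.5 + x) * rng.standard_normal(n)   # CEF is m(x) = 1 + 2x

predictors = {
    "CEF  m(x) = 1 + 2x": 1 + 2 * x,
    "too flat g(x) = 2 + x": 2 + x,
    "unconditional mean": np.full(n, y.mean()),
}
for name, g in predictors.items():
    print(f"{name:25s}  MSE = {np.mean((y - g) ** 2):.4f}")
```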
3.10 Conditional Variance
While the conditional mean is a good measure of the location of a conditional distribution,
it does not provide information about the spread of the distribution. A common measure of the
dispersion is the conditional variance.
Definition 3.10.1 If Ey² < ∞, the conditional variance of y given x is
$$\sigma^{2}(x)=\mathrm{var}\left(y\mid x\right)=E\left(\left(y-E\left(y\mid x\right)\right)^{2}\mid x\right)=E\left(e^{2}\mid x\right)$$
Generally, σ²(x) is a non-trivial function of x and can take any form subject to the restriction
that it is non-negative. The conditional standard deviation is its square root $\sigma(x)=\sqrt{\sigma^{2}(x)}$.
One way to think about σ²(x) is that it is the conditional mean of e² given x.
As an example of how the conditional variance depends on observables, compare the conditional
log wage densities for men and women displayed in Figure 3.3. The difference between the densities
is not purely a location shift, but is also a difference in spread. Specifically, we can see that the
density for men’s log wages is somewhat more spread out than that for women, while the density
for women’s wages is somewhat more peaked. Indeed, the conditional standard deviation for men’s
wages is 3.05 and that for women is 2.81. So while men have higher average wages, they are also
somewhat more dispersed.
The unconditional error variance and the conditional variance are related by the law of iterated
expectations
$$\sigma^{2}=E\left(e^{2}\right)=E\left(E\left(e^{2}\mid x\right)\right)=E\left(\sigma^{2}(x)\right).$$
That is, the unconditional error variance is the average conditional variance.
Given the conditional variance, we can define a rescaled error
$$\varepsilon=\frac{e}{\sigma(x)}. \qquad (3.13)$$
We can calculate that since σ(x) is a function of x
$$E\left(\varepsilon\mid x\right)=E\left(\frac{e}{\sigma(x)}\mid x\right)=\frac{1}{\sigma(x)}E\left(e\mid x\right)=0$$
and
$$\mathrm{var}\left(\varepsilon\mid x\right)=E\left(\varepsilon^{2}\mid x\right)=E\left(\frac{e^{2}}{\sigma^{2}(x)}\mid x\right)=\frac{1}{\sigma^{2}(x)}E\left(e^{2}\mid x\right)=\frac{\sigma^{2}(x)}{\sigma^{2}(x)}=1.$$
Thus ε has a conditional mean of zero, and a conditional variance of 1.
Notice that (3.13) can be rewritten as
$$e=\sigma(x)\varepsilon,$$
and substituting this for e in the CEF equation (3.11), we find that
$$y=m(x)+\sigma(x)\varepsilon. \qquad (3.14)$$
This is an alternative (meanvariance) representation of the CEF equation.
Many econometric studies focus on the conditional mean m(x) and either ignore the conditional
variance σ²(x), treat it as a constant σ²(x) = σ², or treat it as a nuisance parameter (a
parameter not of primary interest). This is appropriate when the primary variation in the conditional
distribution is in the mean, but can be short-sighted in other cases. Dispersion is relevant
to many economic topics, including income and wealth distribution, economic inequality, and price
dispersion. Conditional dispersion (variance) can be a fruitful subject for investigation.
The perverse consequences of a narrow-minded focus on the mean have been parodied in a classic
joke:
An economist was standing with one foot in a bucket of boiling water
and the other foot in a bucket of ice. When asked how he felt, he
replied, “On average I feel just ﬁne.”
Clearly, the economist in question ignored variance!
3.11 Homoskedasticity and Heteroskedasticity
An important special case obtains when the conditional variance σ²(x) is a constant and independent
of x. This is called homoskedasticity.

Definition 3.11.1 The error is homoskedastic if $E\left(e^{2}\mid x\right)=\sigma^{2}$ does not depend on x.
In the general case where σ²(x) depends on x we say that the error e is heteroskedastic.

Definition 3.11.2 The error is heteroskedastic if $E\left(e^{2}\mid x\right)=\sigma^{2}(x)$ depends on x.
It is helpful to understand that the concepts homoskedasticity and heteroskedasticity concern
the conditional variance, not the unconditional variance. By deﬁnition, the unconditional variance
is a constant and independent of the regressors x. So when we talk about the variance as a function
of the regressors, we are talking about the conditional variance. Recall Figure 3.3 and how the
variance of wages varies between men and women.
Some older or introductory textbooks describe heteroskedasticity as the case where "the variance
of $e$ varies across observations". This is a poor and confusing definition. It is more constructive
to understand that heteroskedasticity means that the conditional variance $\sigma^2(x)$ depends on
observables.

Older textbooks also tend to describe homoskedasticity as a component of a correct regression
specification, and describe heteroskedasticity as an exception or deviance. This description has
influenced many generations of economists, but it is unfortunately backwards. The correct view
is that heteroskedasticity is generic and "standard", while homoskedasticity is unusual and
exceptional. The default in empirical work should be to assume that the errors are heteroskedastic, not
the converse.

In apparent contradiction to the above statement, we will still frequently impose the
homoskedasticity assumption when making theoretical investigations into the properties of estimation
and inference methods. The reason is that in many cases homoskedasticity greatly simplifies the
theoretical calculations, and it is therefore quite advantageous for teaching and learning. It should
always be remembered, however, that homoskedasticity is never imposed because it is believed to
be a correct feature of an empirical model, but rather because of its simplicity.
3.12 Regression Derivative
One way to interpret the CEF $m(x) = E(y \mid x)$ is in terms of how marginal changes in the
regressors $x$ imply changes in the conditional mean of the response variable $y$. It is typical to
consider marginal changes in single regressors, holding the remainder fixed. When a regressor $x_1$
is continuously distributed, we define the marginal effect of a change in $x_1$, holding the variables
$x_2, \ldots, x_k$ fixed, as the partial derivative of the CEF
\[
\frac{\partial}{\partial x_1} m(x_1, \ldots, x_k).
\]
When $x_1$ is discrete we define the marginal effect as a discrete difference. For example, if $x_1$ is
binary, then the marginal effect of $x_1$ on the CEF is
\[
m(1, x_2, \ldots, x_k) - m(0, x_2, \ldots, x_k).
\]
We can unify the continuous and discrete cases with the notation
\[
\nabla_1 m(x) =
\begin{cases}
\dfrac{\partial}{\partial x_1} m(x_1, \ldots, x_k), & \text{if } x_1 \text{ is continuous} \\[1ex]
m(1, x_2, \ldots, x_k) - m(0, x_2, \ldots, x_k), & \text{if } x_1 \text{ is binary.}
\end{cases}
\]
Collecting the $k$ effects into one $k \times 1$ vector, we define the regression derivative of $x$ on $y$:
\[
\nabla m(x) = \begin{bmatrix} \nabla_1 m(x) \\ \nabla_2 m(x) \\ \vdots \\ \nabla_k m(x) \end{bmatrix}
\]
When all elements of $x$ are continuous, then we have the simplification $\nabla m(x) = \dfrac{\partial}{\partial x} m(x)$, the
vector of partial derivatives.

There are two important points to remember concerning our definition of the regression derivative.

First, the effect of each variable is calculated holding the other variables constant. This is the
ceteris paribus concept commonly used in economics. But in the case of a regression derivative,
the conditional mean does not literally hold all else constant. It only holds constant the variables
included in the conditional mean.

Second, the regression derivative is the change in the conditional expectation of $y$, not the
change in the actual value of $y$ for an individual. It is tempting to think of the regression derivative
as the change in the actual value of $y$, but this is not a correct interpretation. The regression
derivative $\nabla m(x)$ is the actual effect on the response variable $y$ only if the error $e$ is unaffected by
the change in the regressor $x$. We return to a discussion of causal effects in Section 3.28.
3.13 Linear CEF
An important special case is when the CEF $m(x) = E(y \mid x)$ is linear in $x$. In this case we can
write the mean equation as
\[
m(x) = x_1 \beta_1 + x_2 \beta_2 + \cdots + x_k \beta_k + \alpha.
\]
Notationally it is convenient to write this as a simple function of the vector $x$. An easy way to do
so is to augment the regressor vector $x$ by listing the number "1" as an element. We call this the
"constant" and the corresponding coefficient is called the "intercept". Equivalently, assuming that
the final element(^8) of the vector $x$ is the intercept, then $x_k = 1$. Thus (3.5) has been redefined as
the $k \times 1$ vector
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{k-1} \\ 1 \end{bmatrix}. \tag{3.15}
\]
(^8) The order doesn't matter. It could be any element.

With this redefinition, the CEF is
\[
m(x) = x_1 \beta_1 + x_2 \beta_2 + \cdots + x_k \beta_k = x'\beta \tag{3.16}
\]
where
\[
\beta = \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} \tag{3.17}
\]
is a $k \times 1$ coefficient vector. This is the linear CEF model. It is also often called the linear
regression model, or the regression of $y$ on $x$.

In the linear CEF model, the regression derivative is simply the coefficient vector. That is,
\[
\nabla m(x) = \beta.
\]
This is one of the appealing features of the linear CEF model. The coefficients have simple and
natural interpretations as the marginal effects of changing one variable, holding the others constant.

Linear CEF Model
\[
y = x'\beta + e
\]
\[
E(e \mid x) = 0
\]

If in addition the error is homoskedastic, we call this the homoskedastic linear CEF model.

Homoskedastic Linear CEF Model
\[
y = x'\beta + e
\]
\[
E(e \mid x) = 0
\]
\[
E\left( e^2 \mid x \right) = \sigma^2
\]
3.14 Linear CEF with Nonlinear Effects

The linear CEF model of the previous section is less restrictive than it might appear, as we can
include as regressors nonlinear transformations of the original variables. In this sense, the linear
CEF framework is flexible and can capture many nonlinear effects.

For example, suppose we have two scalar variables $x_1$ and $x_2$. The CEF could take the quadratic
form
\[
m(x_1, x_2) = x_1 \beta_1 + x_2 \beta_2 + x_1^2 \beta_3 + x_2^2 \beta_4 + x_1 x_2 \beta_5 + \beta_6. \tag{3.18}
\]
This equation is quadratic in the regressors $(x_1, x_2)$ yet linear in the coefficients $(\beta_1, \ldots, \beta_6)$. We
will descriptively call (3.18) a quadratic CEF, and yet (3.18) is also a linear CEF in the sense
of being linear in the coefficients.

To simplify the expression, we define the transformations $x_3 = x_1^2$, $x_4 = x_2^2$, $x_5 = x_1 x_2$, and
$x_6 = 1$, and redefine the regressor vector as $x = (x_1, \ldots, x_6)$. With this redefinition,
\[
m(x_1, x_2) = x'\beta
\]
which is linear in $\beta$. For most econometric purposes (estimation and inference on $\beta$) the linearity
in $\beta$ is all that is important.

An exception is in the analysis of regression derivatives. In nonlinear equations such as (3.18),
the regression derivative should be defined with respect to the original variables, not with respect
to the transformed variables. Thus
\[
\frac{\partial}{\partial x_1} m(x_1, x_2) = \beta_1 + 2 x_1 \beta_3 + x_2 \beta_5
\]
\[
\frac{\partial}{\partial x_2} m(x_1, x_2) = \beta_2 + 2 x_2 \beta_4 + x_1 \beta_5
\]
We see that in the model (3.18), the regression derivatives are not a simple coefficient, but are
functions of several coefficients plus the levels of $(x_1, x_2)$. Consequently it is difficult to interpret
the coefficients individually. It is more useful to interpret them as a group.

We typically call $\beta_5$ the interaction effect. Notice that it appears in both regression derivative
equations, and has a symmetric interpretation in each. If $\beta_5 > 0$ then the regression derivative of
$x_1$ on $y$ is increasing in the level of $x_2$ (and the regression derivative of $x_2$ on $y$ is increasing in the
level of $x_1$), while if $\beta_5 < 0$ the reverse is true.
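To make the distinction between coefficients and regression derivatives concrete, the following sketch evaluates the derivative of the quadratic CEF (3.18) at a point, using hypothetical coefficient values, and checks the analytic expression against a numerical derivative:

    import numpy as np

    b1, b2, b3, b4, b5, b6 = 1.0, -0.5, 0.3, 0.2, 0.8, 2.0   # assumed (hypothetical) coefficients

    def m(x1, x2):
        # the quadratic CEF (3.18)
        return x1*b1 + x2*b2 + x1**2*b3 + x2**2*b4 + x1*x2*b5 + b6

    x1, x2, h = 1.5, 2.0, 1e-6
    d1_analytic = b1 + 2*x1*b3 + x2*b5
    d1_numeric = (m(x1 + h, x2) - m(x1 - h, x2)) / (2*h)
    print(d1_analytic, d1_numeric)   # agree: the derivative depends on (x1, x2), not on one coefficient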
3.15 Linear CEF with Dummy Variables
When all regressors are discrete, it turns out the CEF can be written as a linear function of
regressors.
Consider the simplest case of a single binary regressor. A variable is binary if it only takes two
distinct values. (For example, the variable gender.) It is common to call such regressors dummy
variables, and sometimes they are called indicator variables. When there is only a single dummy
regressor the conditional mean can only take two distinct values. For example,
\[
E(y \mid \text{gender}) =
\begin{cases}
\mu_0 & \text{if gender = man} \\
\mu_1 & \text{if gender = woman}
\end{cases}
\]
To facilitate a mathematical treatment, we typically record dummy variables with the values $\{0, 1\}$.
For example
\[
x_1 =
\begin{cases}
0 & \text{if gender = man} \\
1 & \text{if gender = woman}
\end{cases}
\]
Given this notation we can write the conditional mean as a linear function of the dummy variable
$x_1$, that is
\[
E(y \mid x_1) = \alpha + \beta x_1
\]
where $\alpha = \mu_0$ and $\beta = \mu_1 - \mu_0$. In this simple regression equation the intercept $\alpha$ is equal to the
conditional mean of $y$ for the $x_1 = 0$ subpopulation (men) and the slope $\beta$ is equal to the difference
in the conditional means of the two subpopulations.

Now suppose we have two dummy variables $x_1$ and $x_2$. For example, $x_2 = 1$ if the person is
married, else $x_2 = 0$. The conditional mean given $x_1$ and $x_2$ takes at most four possible values:
\[
E(y \mid x_1, x_2) =
\begin{cases}
\mu_{00} & \text{if } x_1 = 0 \text{ and } x_2 = 0 \quad \text{(unmarried men)} \\
\mu_{01} & \text{if } x_1 = 0 \text{ and } x_2 = 1 \quad \text{(married men)} \\
\mu_{10} & \text{if } x_1 = 1 \text{ and } x_2 = 0 \quad \text{(unmarried women)} \\
\mu_{11} & \text{if } x_1 = 1 \text{ and } x_2 = 1 \quad \text{(married women)}
\end{cases}
\]
In this case we can write the conditional mean as a linear function of $x_1$, $x_2$ and their product
$x_1 x_2$:
\[
E(y \mid x_1, x_2) = \alpha + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2
\]
where $\alpha = \mu_{00}$, $\beta_1 = \mu_{10} - \mu_{00}$, $\beta_2 = \mu_{01} - \mu_{00}$, and $\beta_3 = \mu_{11} - \mu_{10} - \mu_{01} + \mu_{00}$.

We can view the coefficient $\beta_1$ as the effect of gender on expected log wages for unmarried
wage earners, the coefficient $\beta_2$ as the effect of marriage on expected log wages for male wage
earners, and the coefficient $\beta_3$ as the difference between the effects of marriage on expected log
wages among women and among men. Alternatively, it can also be interpreted as the difference
between the effects of gender on expected log wages among married and non-married wage earners.
Both interpretations are equally valid. We often describe $\beta_3$ as measuring the interaction between
the two dummy variables, or the interaction effect, and describe $\beta_3 = 0$ as the case when the
interaction effect is zero.

In this setting we can see that the CEF is linear in the three variables $(x_1, x_2, x_1 x_2)$. Thus to
put the model in the framework of Section 3.13, we would define the regressor $x_3 = x_1 x_2$ and the
regressor vector as
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{bmatrix}.
\]
So even though we started with only 2 dummy variables, the number of regressors (including the
intercept) is 4.

If there are 3 dummy variables $x_1, x_2, x_3$, then $E(y \mid x_1, x_2, x_3)$ takes at most $2^3 = 8$ distinct
values and can be written as the linear function
\[
E(y \mid x_1, x_2, x_3) = \alpha + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_1 x_2 + \beta_5 x_1 x_3 + \beta_6 x_2 x_3 + \beta_7 x_1 x_2 x_3
\]
which has eight regressors including the intercept.

In general, if there are $p$ dummy variables $x_1, \ldots, x_p$ then the CEF $E(y \mid x_1, x_2, \ldots, x_p)$ takes
at most $2^p$ distinct values, and can be written as a linear function of the $2^p$ regressors including
$x_1, x_2, \ldots, x_p$ and all cross-products. This might be excessive in practice if $p$ is modestly large. In
the next section we will discuss projection approximations which yield more parsimonious
parameterizations.
We started this section by saying that the conditional mean is linear whenever all regressors are
discrete, meaning that they take a ﬁnite number of possible values. How can we see this? Take a
categorical variable, such as race. For example, we earlier divided race into three categories. We
can record categorical variables using numbers to indicate each category, for example
\[
x_3 =
\begin{cases}
1 & \text{if white} \\
2 & \text{if black} \\
3 & \text{if other}
\end{cases}
\]
When doing so, the values of $x_3$ have no meaning in terms of magnitude, they simply indicate the
relevant category.

When the regressor is categorical the conditional mean of $y$ given $x_3$ takes a distinct value for
each possibility:
\[
E(y \mid x_3) =
\begin{cases}
\mu_1 & \text{if } x_3 = 1 \\
\mu_2 & \text{if } x_3 = 2 \\
\mu_3 & \text{if } x_3 = 3
\end{cases}
\]
This is not a linear function of $x_3$ itself, but it can be made so by constructing dummy variables
for two of the three categories. For example
\[
x_4 =
\begin{cases}
1 & \text{if black} \\
0 & \text{if not black}
\end{cases}
\]
\[
x_5 =
\begin{cases}
1 & \text{if other} \\
0 & \text{if not other}
\end{cases}
\]
In this case, the categorical variable $x_3$ is equivalent to the pair of dummy variables $(x_4, x_5)$. The
explicit relationship is
\[
x_3 =
\begin{cases}
1 & \text{if } x_4 = 0 \text{ and } x_5 = 0 \\
2 & \text{if } x_4 = 1 \text{ and } x_5 = 0 \\
3 & \text{if } x_4 = 0 \text{ and } x_5 = 1
\end{cases}
\]
Given these transformations, we can write the conditional mean of $y$ as a linear function of $x_4$ and
$x_5$
\[
E(y \mid x_3) = E(y \mid x_4, x_5) = \alpha + \beta_1 x_4 + \beta_2 x_5
\]
We can write the CEF as either $E(y \mid x_3)$ or $E(y \mid x_4, x_5)$ (they are equivalent), but it is only linear
as a function of $x_4$ and $x_5$.

This setting is similar to the case of two dummy variables, with the difference that we have not
included the interaction term $x_4 x_5$. This is because the event $\{x_4 = 1 \text{ and } x_5 = 1\}$ is empty by
construction, so $x_4 x_5 = 0$ by definition.
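As a small practical sketch (hypothetical data, not from the text), the categorical variable above can be converted into the pair of dummy variables $x_4$ and $x_5$, with "white" as the omitted base category:

    import numpy as np

    race = np.array(["white", "black", "other", "black", "white"])
    x4 = (race == "black").astype(float)    # dummy for black
    x5 = (race == "other").astype(float)    # dummy for other
    X = np.column_stack([np.ones(len(race)), x4, x5])   # regressors for alpha + b1*x4 + b2*x5
    print(X)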
3.16 Best Linear Predictor
While the conditional mean $m(x) = E(y \mid x)$ is the best predictor of $y$ among all functions
of $x$, its functional form is typically unknown. In particular, the linear CEF model is empirically
unlikely to be accurate unless $x$ is discrete and low-dimensional so all interactions are included.
Consequently in most cases it is more realistic to view the linear specification (3.16) as an
approximation. In this section we derive a specific approximation with a simple interpretation.

Theorem 3.9.1 showed that the conditional mean $m(x)$ is the best predictor in the sense that
it has the lowest mean squared error among all predictors. By extension, we can define an
approximation to the CEF by the linear function with the lowest mean squared error among all linear
predictors.

For this derivation we require the following regularity condition.

Assumption 3.16.1

1. $E y^2 < \infty$.

2. $E \|x\|^2 < \infty$.

3. $Q_{xx} = E(xx')$ is positive definite.

The first two parts of Assumption 3.16.1 imply that the variables $y$ and $x$ have finite means,
variances, and covariances. The third part of the assumption is more technical, and its role will
become apparent shortly. It is equivalent to imposing that the columns of $Q_{xx} = E(xx')$ are
linearly independent, or equivalently that the matrix $Q_{xx}$ is invertible.

A linear predictor for $y$ is a function of the form $x'\beta$ for some $\beta \in \mathbb{R}^k$. The mean squared
prediction error is
\[
S(\beta) = E\left( y - x'\beta \right)^2.
\]
The best linear predictor of $y$ given $x$, written $P(y \mid x)$, is found by selecting the vector $\beta$ to
minimize $S(\beta)$.

Definition 3.16.1 The Best Linear Predictor of $y$ given $x$ is
\[
P(y \mid x) = x'\beta
\]
where $\beta$ minimizes the mean squared prediction error
\[
S(\beta) = E\left( y - x'\beta \right)^2.
\]
The minimizer
\[
\beta = \operatorname*{argmin}_{\beta \in \mathbb{R}^k} S(\beta) \tag{3.19}
\]
is called the Linear Projection Coefficient.

We now calculate an explicit expression for its value.

The mean squared prediction error can be written out as a quadratic function of $\beta$:
\[
S(\beta) = E y^2 - 2\beta' E(xy) + \beta' E\left( xx' \right) \beta.
\]
The quadratic structure of $S(\beta)$ means that we can solve explicitly for the minimizer. The first-order
condition for minimization (from Appendix A.9) is
\[
0 = \frac{\partial}{\partial \beta} S(\beta) = -2 E(xy) + 2 E\left( xx' \right) \beta. \tag{3.20}
\]
Rewriting (3.20) as
\[
2 E(xy) = 2 E\left( xx' \right) \beta
\]
and dividing by 2, this equation takes the form
\[
Q_{xy} = Q_{xx} \beta \tag{3.21}
\]
where $Q_{xy} = E(xy)$ is $k \times 1$ and $Q_{xx} = E(xx')$ is $k \times k$. The solution is found by inverting the
matrix $Q_{xx}$, and is written
\[
\beta = Q_{xx}^{-1} Q_{xy}
\]
or
\[
\beta = \left( E\left( xx' \right) \right)^{-1} E(xy). \tag{3.22}
\]
It is worth taking the time to understand the notation involved in the expression (3.22). $Q_{xx}$ is a
$k \times k$ matrix and $Q_{xy}$ is a $k \times 1$ column vector. Therefore, alternative expressions such as
$\frac{E(xy)}{E(xx')}$ or $E(xy) \left( E(xx') \right)^{-1}$ are incoherent and incorrect. We also can now see the role of Assumption
3.16.1.3. It is necessary in order for the solution (3.22) to exist. Otherwise, there would be multiple
solutions to the equation (3.21).

We now have an explicit expression for the best linear predictor:
\[
P(y \mid x) = x' \left( E\left( xx' \right) \right)^{-1} E(xy).
\]
This expression is also referred to as the linear projection of $y$ on $x$.

The projection error is
\[
e = y - x'\beta. \tag{3.23}
\]
This equals the error from the regression equation when (and only when) the conditional mean is
linear in $x$; otherwise they are distinct.

Rewriting, we obtain a decomposition of $y$ into linear predictor and error
\[
y = x'\beta + e. \tag{3.24}
\]
In general we call equation (3.24) or $x'\beta$ the best linear predictor of $y$ given $x$, or the linear
projection of $y$ on $x$. Equation (3.24) is also often called the regression of $y$ on $x$ but this can
sometimes be confusing as economists use the term regression in many contexts. (Recall that we
said in Section 3.13 that the linear CEF model is also called the linear regression model.)

An important property of the projection error $e$ is
\[
E(xe) = 0. \tag{3.25}
\]
To see this, using the definitions (3.23) and (3.22) and the matrix properties $AA^{-1} = I$ and
$Ia = a$,
\[
E(xe) = E\left( x \left( y - x'\beta \right) \right) = E(xy) - E\left( xx' \right) \left( E\left( xx' \right) \right)^{-1} E(xy) = 0 \tag{3.26}
\]
as claimed.

Equation (3.25) is a set of $k$ equations, one for each regressor. In other words, (3.25) is equivalent
to
\[
E(x_j e) = 0 \tag{3.27}
\]
for $j = 1, \ldots, k$. As in (3.15), the regressor vector $x$ typically contains a constant, e.g. $x_k = 1$. In
this case (3.27) for $j = k$ is the same as
\[
E(e) = 0. \tag{3.28}
\]
Thus the projection error has a mean of zero when the regressor vector contains a constant. (When
$x$ does not have a constant, (3.28) is not guaranteed. As it is desirable for $e$ to have a zero mean,
this is a good reason to always include a constant in any model.)

It is also useful to observe that since $\mathrm{cov}(x_j, e) = E(x_j e) - E(x_j) E(e)$, then (3.27)-(3.28)
together imply that the variables $x_j$ and $e$ are uncorrelated.

This completes the derivation of the model. We summarize some of the most important
properties.

Theorem 3.16.1 Properties of Linear Projection Model
Under Assumption 3.16.1,

1. The moments $E(xx')$ and $E(xy)$ exist with finite elements.

2. The Linear Projection Coefficient (3.19) exists, is unique, and equals
\[
\beta = \left( E\left( xx' \right) \right)^{-1} E(xy).
\]

3. The best linear predictor of $y$ given $x$ is
\[
P(y \mid x) = x' \left( E\left( xx' \right) \right)^{-1} E(xy).
\]

4. The projection error $e = y - x'\beta$ exists and satisfies
\[
E\left( e^2 \right) < \infty
\]
and
\[
E(xe) = 0.
\]

5. If $x$ contains a constant, then
\[
E(e) = 0.
\]

6. If $E|y|^r < \infty$ and $E\|x\|^r < \infty$ for $r \ge 1$ then $E|e|^r < \infty$.
A complete proof of Theorem 3.16.1 is given in Section 3.30.
It is useful to reﬂect on the generality of Theorem 3.16.1. The only restriction is Assumption
3.16.1. Thus for any random variables (y, x) with ﬁnite variances we can deﬁne a linear equation
(3.24) with the properties listed in Theorem 3.16.1. Stronger assumptions (such as the linear CEF
model) are not necessary. In this sense the linear model (3.24) exists quite generally. However,
it is important not to misinterpret the generality of this statement. The linear equation (3.24) is
deﬁned as the best linear predictor. It is not necessarily a conditional mean, nor a parameter of a
structural or causal economic model.
Linear Projection Model
\[
y = x'\beta + e
\]
\[
E(xe) = 0
\]
\[
\beta = \left( E\left( xx' \right) \right)^{-1} E(xy)
\]
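The population projection coefficient has a natural sample analog. The following sketch (simulated data with assumed coefficient values) computes it from sample moments and checks the orthogonality property $E(xe) = 0$:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    beta_true = np.array([0.5, -1.0, 2.0])
    x = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])  # last column is the constant
    y = x @ beta_true + rng.normal(size=n)

    Qxx = x.T @ x / n                      # sample analog of E(xx')
    Qxy = x.T @ y / n                      # sample analog of E(xy)
    beta_hat = np.linalg.solve(Qxx, Qxy)   # sample linear projection coefficient
    e = y - x @ beta_hat                   # projection error
    print(beta_hat)                        # close to beta_true
    print(x.T @ e / n)                     # sample analog of E(xe): numerically zero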
We illustrate projection using three log wage equations introduced in earlier sections.
For our ﬁrst example, we consider a model with the two dummy variables for gender and race
similar to Table 3.1. As we learned in Section 3.15, the entries in this table can be equivalently
expressed by a linear CEF. For simplicity, let’s consider the CEF of ln(wage) as a function of Black
and Female.
\[
E(\log(\text{wage}) \mid \text{Black}, \text{Female}) = -0.20\,\text{Black} - 0.24\,\text{Female} + 0.10\,\text{Black} \times \text{Female} + 3.06.
\]
This is a CEF as the variables are dummies and all interactions are included.

Now consider a simpler model omitting the interaction effect. This is the linear projection on
the variables Black and Female:
\[
P(\log(\text{wage}) \mid \text{Black}, \text{Female}) = -0.15\,\text{Black} - 0.23\,\text{Female} + 3.06.
\]
What is the difference? The projection model suggests a 15% average wage gap for blacks, while
the full CEF shows that the race gap is differentiated by gender: it is 20% for black men (relative
to non-black men) and 10% for black women (relative to non-black women).
For our second example we consider the CEF of log wages as a function of years of education
for white men which was illustrated in Figure 3.4 and is repeated in Figure 3.7. Superimposed on
the ﬁgure are two projections. The ﬁrst (given by the dashed line) is the linear projection of log
wages on years of education
\[
P(\log(\text{wage}) \mid \text{Education}) = 1.5 + 0.11\,\text{Education}.
\]
This simple equation indicates an average 11% increase in wages for every year of education. An
inspection of the figure shows that this approximation works well for Education $\ge 9$, but
under-predicts for individuals with lower levels of education. To correct this imbalance we use a linear
spline equation which allows different rates of return above and below 9 years of education:
\[
P\left( \log(\text{wage}) \mid \text{Education}, (\text{Education} - 9) \times \mathbf{1}(\text{Education} > 9) \right)
= 2.3 + 0.02\,\text{Education} + 0.10\,(\text{Education} - 9) \times \mathbf{1}(\text{Education} > 9)
\]
This equation is displayed in Figure 3.7 using the solid line, and appears to ﬁt much better. It
indicates a 2% increase in mean wages for every year of education below 9, and a 12% increase in
mean wages for every year of education above 9. It is still an approximation to the conditional
mean but it appears to be fairly reasonable.
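The spline equation above is just a linear projection onto Education and the constructed regressor $(\text{Education} - 9) \times \mathbf{1}(\text{Education} > 9)$. A minimal sketch of how that regressor is built (hypothetical education values; the reported coefficients come from the text, not from this code):

    import numpy as np

    educ = np.array([6, 8, 9, 12, 16, 20], dtype=float)      # hypothetical education levels
    spline = (educ - 9) * (educ > 9)                          # (Education - 9) x 1(Education > 9)
    X = np.column_stack([np.ones_like(educ), educ, spline])   # regressors for the spline projection
    fitted = 2.3 + 0.02 * educ + 0.10 * spline                # fitted values using the text's coefficients
    print(X)
    print(fitted)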
[Figure 3.7: Projections of log(wage) onto Education. Horizontal axis: Years of Education (4 to 20); vertical axis: Log Dollars per Hour (2.0 to 4.0).]
For our third example we take the CEF of log wages as a function of years of experience for
white men with 12 years of education, which was illustrated in Figure 3.5 and is repeated as the
solid line in Figure 3.8. Superimposed on the ﬁgure are two projections. The ﬁrst (given by the
dotdashed line) is the linear projection on experience
\[
P(\log(\text{wage}) \mid \text{Experience}) = 2.5 + 0.011\,\text{Experience}
\]
and the second (given by the dashed line) is the linear projection on experience and its square
\[
P(\log(\text{wage}) \mid \text{Experience}) = 2.3 + 0.046\,\text{Experience} - 0.0007\,\text{Experience}^2.
\]
It is fairly clear from an examination of Figure 3.8 that the first linear projection is a poor
approximation. It over-predicts wages for young and old workers, and under-predicts for the rest. Most
importantly, it misses the strong downturn in expected wages for older wage-earners. The second
projection fits much better. We can call this equation a quadratic projection since the function
is quadratic in experience.

[Figure 3.8: Linear and Quadratic Projections of log(wage) onto Experience. Horizontal axis: Labor Market Experience (years, 0 to 50); vertical axis: Log Dollars per Hour (2.0 to 4.0). Lines shown: Conditional Mean, Linear Projection, Quadratic Projection.]
Invertibility and Identification

The linear projection coefficient $\beta = \left( E(xx') \right)^{-1} E(xy)$ exists and is unique as long
as the $k \times k$ matrix $Q_{xx} = E(xx')$ is invertible. The matrix $Q_{xx}$ is sometimes called
the design matrix, as in experimental settings the researcher is able to control $Q_{xx}$ by
manipulating the distribution of the regressors $x$.

Observe that for any non-zero $\alpha \in \mathbb{R}^k$,
\[
\alpha' Q_{xx} \alpha = E\left( \alpha' x x' \alpha \right) = E\left( \alpha' x \right)^2 \ge 0
\]
so $Q_{xx}$ by construction is positive semi-definite. The assumption that it is positive definite
means that this is a strict inequality, $E\left( \alpha' x \right)^2 > 0$. Equivalently, there cannot exist a non-zero
vector $\alpha$ such that $\alpha' x = 0$ identically. This occurs when redundant variables are
included in $x$. Positive semi-definite matrices are invertible if and only if they are positive
definite. When $Q_{xx}$ is invertible then $\beta = \left( E(xx') \right)^{-1} E(xy)$ exists and is uniquely
defined. In other words, in order for $\beta$ to be uniquely defined, we must exclude the
degenerate situation of redundant variables.

Theorem 3.16.1 shows that the linear projection coefficient $\beta$ is identified (uniquely
determined) under Assumption 3.16.1. The key is invertibility of $Q_{xx}$. Otherwise, there
is no unique solution to the equation
\[
Q_{xx} \beta = Q_{xy}. \tag{3.29}
\]
When $Q_{xx}$ is not invertible there are multiple solutions to (3.29), all of which yield an
equivalent best linear predictor $x'\beta$. In this case the coefficient $\beta$ is not identified as it
does not have a unique value. Even so, the best linear predictor $x'\beta$ is still identified. One
solution is to set
\[
\beta = \left( E\left( xx' \right) \right)^{-} E(xy)
\]
where $A^{-}$ denotes the generalized inverse of $A$ (see Appendix A.5).
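A small numerical sketch (hypothetical data) of the degenerate case: adding a redundant regressor makes the sample analog of $Q_{xx}$ singular, so $\beta$ is not unique, yet the best linear predictor obtained from a generalized inverse is still well defined:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    x1 = rng.normal(size=n)
    x2 = 2.0 * x1                                   # redundant: an exact linear function of x1
    X = np.column_stack([np.ones(n), x1, x2])
    y = 1.0 + x1 + rng.normal(size=n)

    Qxx = X.T @ X / n
    print(np.linalg.matrix_rank(Qxx))               # rank 2 < 3, so Qxx is singular
    beta_g = np.linalg.pinv(Qxx) @ (X.T @ y / n)    # one solution, via the Moore-Penrose generalized inverse
    print(beta_g)                                   # beta is not identified, but X @ beta_g is the same
                                                    # for every solution of Qxx beta = Qxy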
3.17 Linear Predictor Error Variance
As in the CEF model, we define the error variance as
\[
\sigma^2 = E\left( e^2 \right).
\]
Setting $Q_{yy} = E\left( y^2 \right)$ and $Q_{yx} = E(yx')$ we can write $\sigma^2$ as
\[
\sigma^2 = E\left( y - x'\beta \right)^2
= E y^2 - 2 E\left( yx' \right)\beta + \beta' E\left( xx' \right)\beta
= Q_{yy} - 2 Q_{yx} Q_{xx}^{-1} Q_{xy} + Q_{yx} Q_{xx}^{-1} Q_{xx} Q_{xx}^{-1} Q_{xy}
= Q_{yy} - Q_{yx} Q_{xx}^{-1} Q_{xy}
\overset{\text{def}}{=} Q_{yy \cdot x} \tag{3.30}
\]
One useful feature of this formula is that it shows that $Q_{yy \cdot x} = Q_{yy} - Q_{yx} Q_{xx}^{-1} Q_{xy}$ equals the
variance of the error from the linear projection of $y$ on $x$.

3.18 Regression Coefficients
Sometimes it is useful to separate the intercept from the other regressors, and write the linear
projection equation in the format
\[
y = \alpha + x'\beta + e \tag{3.31}
\]
where $\alpha$ is the intercept and $x$ does not contain a constant.

Taking expectations of this equation, we find
\[
E y = E\alpha + E\left( x'\beta \right) + E e
\]
or
\[
\mu_y = \alpha + \mu_x' \beta
\]
where $\mu_y = E y$ and $\mu_x = E x$, since $E(e) = 0$ from (3.28). Rearranging, we find
\[
\alpha = \mu_y - \mu_x' \beta.
\]
Subtracting this equation from (3.31) we find
\[
y - \mu_y = (x - \mu_x)' \beta + e, \tag{3.32}
\]
a linear equation between the centered variables $y - \mu_y$ and $x - \mu_x$. (They are centered at their
means, so are mean-zero random variables.) Because $x - \mu_x$ is uncorrelated with $e$, (3.32) is also
a linear projection, thus by the formula for the linear projection model,
\[
\beta = \left( E\left( (x - \mu_x)(x - \mu_x)' \right) \right)^{-1} E\left( (x - \mu_x)\left( y - \mu_y \right) \right) = \mathrm{var}(x)^{-1} \mathrm{cov}(x, y),
\]
a function only of the covariances(^9) of $x$ and $y$.

(^9) The covariance matrix between vectors $x$ and $z$ is $\mathrm{cov}(x, z) = E\left( (x - E x)(z - E z)' \right)$. The (co)variance
matrix of the vector $x$ is $\mathrm{var}(x) = \mathrm{cov}(x, x) = E\left( (x - E x)(x - E x)' \right)$.

Theorem 3.18.1 In the linear projection model
\[
y = \alpha + x'\beta + e,
\]
then
\[
\alpha = \mu_y - \mu_x' \beta \tag{3.33}
\]
and
\[
\beta = \mathrm{var}(x)^{-1} \mathrm{cov}(x, y). \tag{3.34}
\]
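A numerical sketch (simulated data with assumed parameter values) confirming Theorem 3.18.1: the slope coefficients can be recovered from $\mathrm{var}(x)$ and $\mathrm{cov}(x, y)$, and the intercept from the means:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    x = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.3], [0.3, 2.0]], size=n)
    y = 0.5 + x @ np.array([1.5, -0.7]) + rng.normal(size=n)

    beta = np.linalg.solve(np.cov(x, rowvar=False), np.cov(x, y, rowvar=False)[:2, 2])
    alpha = y.mean() - x.mean(axis=0) @ beta
    print(alpha, beta)    # approximately 0.5 and (1.5, -0.7), as in (3.33)-(3.34)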
3.19 Regression Sub-Vectors

Let the regressors be partitioned as
\[
x_i = \begin{bmatrix} x_{1i} \\ x_{2i} \end{bmatrix}. \tag{3.35}
\]
We can write the projection of $y_i$ on $x_i$ as
\[
y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i \tag{3.36}
\]
\[
E(x_i e_i) = 0.
\]
In this section we derive formulas for the sub-vectors $\beta_1$ and $\beta_2$.

Partition $Q_{xx}$ conformably with $x_i$
\[
Q_{xx} = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}
= \begin{bmatrix} E\left( x_{1i} x_{1i}' \right) & E\left( x_{1i} x_{2i}' \right) \\ E\left( x_{2i} x_{1i}' \right) & E\left( x_{2i} x_{2i}' \right) \end{bmatrix}
\]
and similarly $Q_{xy}$
\[
Q_{xy} = \begin{bmatrix} Q_{1y} \\ Q_{2y} \end{bmatrix}
= \begin{bmatrix} E(x_{1i} y_i) \\ E(x_{2i} y_i) \end{bmatrix}.
\]
By the partitioned matrix inversion formula (A.4)
\[
Q_{xx}^{-1} = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}^{-1}
\overset{\text{def}}{=} \begin{bmatrix} Q^{11} & Q^{12} \\ Q^{21} & Q^{22} \end{bmatrix}
= \begin{bmatrix} Q_{11 \cdot 2}^{-1} & -Q_{11 \cdot 2}^{-1} Q_{12} Q_{22}^{-1} \\ -Q_{22 \cdot 1}^{-1} Q_{21} Q_{11}^{-1} & Q_{22 \cdot 1}^{-1} \end{bmatrix} \tag{3.37}
\]
where $Q_{11 \cdot 2} \overset{\text{def}}{=} Q_{11} - Q_{12} Q_{22}^{-1} Q_{21}$ and $Q_{22 \cdot 1} \overset{\text{def}}{=} Q_{22} - Q_{21} Q_{11}^{-1} Q_{12}$. Thus
\[
\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix}
= \begin{bmatrix} Q_{11 \cdot 2}^{-1} & -Q_{11 \cdot 2}^{-1} Q_{12} Q_{22}^{-1} \\ -Q_{22 \cdot 1}^{-1} Q_{21} Q_{11}^{-1} & Q_{22 \cdot 1}^{-1} \end{bmatrix}
\begin{bmatrix} Q_{1y} \\ Q_{2y} \end{bmatrix}
= \begin{bmatrix} Q_{11 \cdot 2}^{-1} \left( Q_{1y} - Q_{12} Q_{22}^{-1} Q_{2y} \right) \\ Q_{22 \cdot 1}^{-1} \left( Q_{2y} - Q_{21} Q_{11}^{-1} Q_{1y} \right) \end{bmatrix}
= \begin{bmatrix} Q_{11 \cdot 2}^{-1} Q_{1y \cdot 2} \\ Q_{22 \cdot 1}^{-1} Q_{2y \cdot 1} \end{bmatrix}
\]
We have shown that
\[
\beta_1 = Q_{11 \cdot 2}^{-1} Q_{1y \cdot 2}
\]
\[
\beta_2 = Q_{22 \cdot 1}^{-1} Q_{2y \cdot 1}
\]
3.20 Coefficient Decomposition

In the previous section we derived formulas for the coefficient sub-vectors $\beta_1$ and $\beta_2$. We now
use these formulas to give a useful interpretation of the coefficients as obtained from an iterated
projection.

Take equation (3.36) for the case $\dim(x_{1i}) = 1$ so that $\beta_1 \in \mathbb{R}$:
\[
y_i = x_{1i} \beta_1 + x_{2i}'\beta_2 + e_i. \tag{3.38}
\]
Now consider the projection of $x_{1i}$ on $x_{2i}$:
\[
x_{1i} = x_{2i}'\gamma_2 + u_{1i}
\]
\[
E(x_{2i} u_{1i}) = 0.
\]
From (3.22) and (3.30), $\gamma_2 = Q_{22}^{-1} Q_{21}$ and $E u_{1i}^2 = Q_{11 \cdot 2}$. We can also calculate that
\[
E(u_{1i} y_i) = E\left( \left( x_{1i} - \gamma_2' x_{2i} \right) y_i \right) = E(x_{1i} y_i) - \gamma_2' E(x_{2i} y_i) = Q_{1y} - Q_{12} Q_{22}^{-1} Q_{2y} = Q_{1y \cdot 2}.
\]
We have found that
\[
\beta_1 = Q_{11 \cdot 2}^{-1} Q_{1y \cdot 2} = \frac{E(u_{1i} y_i)}{E u_{1i}^2},
\]
the coefficient from the simple regression of $y_i$ on $u_{1i}$.

What this means is that in the multivariate projection equation (3.38), the coefficient $\beta_1$ equals
the projection coefficient from a regression of $y_i$ on $u_{1i}$, the error from a projection of $x_{1i}$ on the
other regressors $x_{2i}$. The error $u_{1i}$ can be thought of as the component of $x_{1i}$ which is not linearly
explained by the other regressors. Thus the coefficient $\beta_1$ equals the linear effect of $x_{1i}$ on $y_i$,
after stripping out the effects of the other variables.

There was nothing special in the choice of the variable $x_{1i}$. So this derivation applies symmetrically
to all coefficients in a linear projection. Each coefficient equals the simple regression of $y_i$
on the error from a projection of that regressor on all the other regressors. Each coefficient equals
the linear effect of that variable on $y_i$, after linearly controlling for all the other regressors.
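This decomposition is easy to verify numerically. The sketch below (simulated data, hypothetical coefficients) regresses $x_1$ on the other regressors, then regresses $y$ on the resulting residual $u_1$; the simple-regression coefficient matches the coefficient on $x_1$ from the full projection:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 50_000
    x2 = np.column_stack([np.ones(n), rng.normal(size=n)])    # the other regressors (incl. constant)
    x1 = x2 @ np.array([0.5, 1.0]) + rng.normal(size=n)        # x1 is correlated with x2
    y = 2.0 * x1 + x2 @ np.array([1.0, -1.0]) + rng.normal(size=n)

    X = np.column_stack([x1, x2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                # full ("long") projection
    gamma2 = np.linalg.lstsq(x2, x1, rcond=None)[0]            # projection of x1 on x2
    u1 = x1 - x2 @ gamma2                                      # part of x1 not linearly explained by x2
    beta1_simple = (u1 @ y) / (u1 @ u1)                        # simple regression of y on u1
    print(beta[0], beta1_simple)                               # both approximately 2.0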
3.21 Omitted Variable Bias
Again, let the regressors be partitioned as in (3.35). Consider the projection of $y_i$ on $x_{1i}$ only.
Perhaps this is done because the variables $x_{2i}$ are not observed. This is the equation
\[
y_i = x_{1i}'\gamma_1 + u_i \tag{3.39}
\]
\[
E(x_{1i} u_i) = 0
\]
Notice that we have written the coefficient on $x_{1i}$ as $\gamma_1$ rather than $\beta_1$ and the error as $u_i$ rather
than $e_i$. This is because (3.39) is different than (3.36). Goldberger (1991) introduced the catchy
labels long regression for (3.36) and short regression for (3.39) to emphasize the distinction.

Typically, $\gamma_1 \ne \beta_1$, except in special cases. To see this, we calculate
\[
\gamma_1 = \left( E\left( x_{1i} x_{1i}' \right) \right)^{-1} E(x_{1i} y_i)
= \left( E\left( x_{1i} x_{1i}' \right) \right)^{-1} E\left( x_{1i} \left( x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i \right) \right)
= \beta_1 + \left( E\left( x_{1i} x_{1i}' \right) \right)^{-1} E\left( x_{1i} x_{2i}' \right) \beta_2
= \beta_1 + \Gamma \beta_2
\]
where
\[
\Gamma = \left( E\left( x_{1i} x_{1i}' \right) \right)^{-1} E\left( x_{1i} x_{2i}' \right)
\]
is the coefficient matrix from a projection of $x_{2i}$ on $x_{1i}$.

Observe that $\gamma_1 = \beta_1 + \Gamma\beta_2 \ne \beta_1$ unless $\Gamma = 0$ or $\beta_2 = 0$. Thus the short and long regressions
have different coefficients on $x_{1i}$. They are the same only under one of two conditions. First, if the
projection of $x_{2i}$ on $x_{1i}$ yields a set of zero coefficients (they are uncorrelated), or second, if the
coefficient on $x_{2i}$ in (3.36) is zero. In general, the coefficient in (3.39) is $\gamma_1$ rather than $\beta_1$. The
difference $\Gamma\beta_2$ between $\gamma_1$ and $\beta_1$ is known as omitted variable bias. It is the consequence of
omission of a relevant correlated variable.
To avoid omitted variables bias the standard advice is to include all potentially relevant variables
in estimated models. By construction, the general model will be free of the omitted variables
problem. Typically it is not feasible to completely follow this advice as many desired variables are
not observed. In this case, the possibility of omitted variables bias should be acknowledged and
discussed in the course of an empirical investigation.
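The relationship $\gamma_1 = \beta_1 + \Gamma\beta_2$ can be checked directly by simulation. In the sketch below (hypothetical parameter values), the short-regression coefficient differs from $\beta_1$ by exactly the auxiliary projection coefficient times $\beta_2$:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + rng.normal(size=n)              # x2 is correlated with x1
    beta1, beta2 = 1.0, 2.0
    y = x1 * beta1 + x2 * beta2 + rng.normal(size=n)

    gamma1_short = (x1 @ y) / (x1 @ x1)             # short regression of y on x1 only
    Gamma = (x1 @ x2) / (x1 @ x1)                   # projection of x2 on x1
    print(gamma1_short, beta1 + Gamma * beta2)      # both approximately 1.0 + 0.8 * 2.0 = 2.6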
3.22 Best Linear Approximation
There are alternative ways we could construct a linear approximation $x'\beta$ to the conditional
mean $m(x)$. In this section we show that one natural approach turns out to yield the same answer
as the best linear predictor.

We start by defining the mean-square approximation error of $x'\beta$ to $m(x)$ as the expected
squared difference between $x'\beta$ and the conditional mean $m(x)$
\[
d(\beta) = E\left( m(x) - x'\beta \right)^2. \tag{3.40}
\]
The function $d(\beta)$ is a measure of the deviation of $x'\beta$ from $m(x)$. If the two functions are identical
then $d(\beta) = 0$, otherwise $d(\beta) > 0$. We can also view the mean-square difference $d(\beta)$ as a
density-weighted average of the function $\left( m(x) - x'\beta \right)^2$, since
\[
d(\beta) = \int_{\mathbb{R}^k} \left( m(x) - x'\beta \right)^2 f_x(x) \, dx
\]
where $f_x(x)$ is the marginal density of $x$.

We can then define the best linear approximation to the conditional mean $m(x)$ as the function $x'\beta$
obtained by selecting $\beta$ to minimize $d(\beta)$:
\[
\beta = \operatorname*{argmin}_{\beta \in \mathbb{R}^k} d(\beta). \tag{3.41}
\]
Similar to the best linear predictor we are measuring accuracy by expected squared error. The
difference is that the best linear predictor (3.19) selects $\beta$ to minimize the expected squared
prediction error, while the best linear approximation (3.41) selects $\beta$ to minimize the expected squared
approximation error.

Despite the different definitions, it turns out that the best linear predictor and the best linear
approximation are identical. By the same steps as in (3.16) plus an application of conditional
expectations we can find that
\[
\beta = \left( E\left( xx' \right) \right)^{-1} E\left( x\, m(x) \right) \tag{3.42}
\]
\[
= \left( E\left( xx' \right) \right)^{-1} E\left( xy \right) \tag{3.43}
\]
(see Exercise 3.18). Thus (3.41) equals (3.19). We conclude that the definition (3.41) can be viewed
as an alternative motivation for the linear projection coefficient.
3.23 Normal Regression
Suppose the variables $(y, x)$ are jointly normally distributed. Consider the best linear predictor
of $y$ given $x$
\[
y = x'\beta + e
\]
\[
\beta = \left( E\left( xx' \right) \right)^{-1} E(xy).
\]
Since the error $e$ is a linear transformation of the normal vector $(y, x)$, it follows that $(e, x)$ is
jointly normal, and since they are jointly normal and uncorrelated (since $E(xe) = 0$) they are also
independent (see Appendix B.9). Independence implies that
\[
E(e \mid x) = E(e) = 0
\]
and
\[
E\left( e^2 \mid x \right) = E\left( e^2 \right) = \sigma^2
\]
which are properties of a homoskedastic linear CEF.

We have shown that when $(y, x)$ are jointly normal, they satisfy a normal linear CEF
\[
y = x'\beta + e
\]
where
\[
e \sim \mathrm{N}(0, \sigma^2)
\]
is independent of $x$.

This is an alternative (and traditional) motivation for the linear CEF model. This motivation
has limited merit in econometric applications since economic data is typically non-normal.
3.24 Regression to the Mean
The term regression originated in an influential paper by Francis Galton published in 1886,
where he examined the joint distribution of the stature (height) of parents and children. Effectively,
he was estimating the conditional mean of children's height given their parents' height. Galton
discovered that this conditional mean was approximately linear with a slope of 2/3. This implies
that on average a child's height is more mediocre than his or her parent's height. Galton called
this phenomenon regression to the mean, and the label regression has stuck to this day to
describe most conditional relationships.

One of Galton's fundamental insights was to recognize that if the marginal distributions of $y$
and $x$ are the same (e.g. the heights of children and parents in a stable environment) then the
regression slope in a linear projection is always less than one.

To be more precise, take the simple linear projection
\[
y = \alpha + x\beta + e \tag{3.44}
\]
where $y$ equals the height of the child and $x$ equals the height of the parent. Assume that $y$ and $x$
have the same mean, so that $\mu_y = \mu_x = \mu$. Then from (3.33)
\[
\alpha = (1 - \beta)\mu
\]
so we can write the linear projection (3.44) as
\[
P(y \mid x) = (1 - \beta)\mu + x\beta.
\]
This shows that the projected height of the child is a weighted average of the population average
height $\mu$ and the parent's height $x$, with the weight equal to the regression slope $\beta$. When the
height distribution is stable across generations, so that $\mathrm{var}(y) = \mathrm{var}(x)$, then this slope is the
simple correlation of $y$ and $x$. Using (3.34)
\[
\beta = \frac{\mathrm{cov}(x, y)}{\mathrm{var}(x)} = \mathrm{corr}(x, y).
\]
By the properties of correlation (e.g. equation (B.7) in the Appendix), $-1 \le \mathrm{corr}(x, y) \le 1$, with
$\mathrm{corr}(x, y) = 1$ only in the degenerate case $y = x$. Thus if we exclude degeneracy, $\beta$ is strictly less
than 1.

This means that on average a child's height is more mediocre (closer to the population average)
than the parent's.

Sir Francis Galton

Sir Francis Galton (1822-1911) of England was one of the leading figures in late 19th century
statistics. In addition to inventing the concept of regression, he is credited with introducing
the concepts of correlation, the standard deviation, and the bivariate normal distribution.
His work on heredity made a significant intellectual advance by examining the joint distributions
of observables, allowing the application of the tools of mathematical statistics to the
social sciences.
A common error — known as the regression fallacy — is to infer from $\beta < 1$ that the population
is converging, meaning that its variance is declining towards zero. This is a fallacy because we have
shown that under the assumption of constant (e.g. stable, non-converging) means and variances,
the slope coefficient must be less than one. It cannot be anything else. A slope less than one does
not imply that the variance of $y$ is less than the variance of $x$.

Another way of seeing this is to examine the conditions for convergence in the context of equation
(3.44). Since $x$ and $e$ are uncorrelated, it follows that
\[
\mathrm{var}(y) = \beta^2 \mathrm{var}(x) + \mathrm{var}(e).
\]
Then $\mathrm{var}(y) < \mathrm{var}(x)$ if and only if
\[
\beta^2 < 1 - \frac{\mathrm{var}(e)}{\mathrm{var}(x)}
\]
which is not implied by the simple condition $|\beta| < 1$.

The regression fallacy arises in related empirical situations. Suppose you sort families into groups
by the heights of the parents, and then plot the average heights of each subsequent generation over
time. If the population is stable, the regression property implies that the plotted lines will converge
— children's height will be more average than their parents'. The regression fallacy is to incorrectly
conclude that the population is converging. The message is that such plots are misleading for
inferences about convergence.

The regression fallacy is subtle. It is easy for intelligent economists to succumb to its temptation.
A famous example is The Triumph of Mediocrity in Business by Horace Secrist, published in 1933.
In this book, Secrist carefully and with great detail documented that in a sample of department
stores over 1920-1930, when he divided the stores into groups based on 1920-1921 profits, and
plotted the average profits of these groups for the subsequent 10 years, he found clear and persuasive
evidence for convergence "toward mediocrity". Of course, there was no discovery — regression to
the mean is a necessary feature of stable distributions.
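The following sketch simulates a stable two-generation height distribution (hypothetical parameter values) and confirms that the projection slope equals the correlation and is strictly less than one, even though the variance is not shrinking across generations:

    import numpy as np

    rng = np.random.default_rng(6)
    n, mu, rho = 500_000, 68.0, 0.5
    parent = mu + 3.0 * rng.standard_normal(n)
    child = mu + rho * (parent - mu) + 3.0 * np.sqrt(1 - rho**2) * rng.standard_normal(n)

    C = np.cov(parent, child)
    print(C[0, 1] / C[0, 0], np.corrcoef(parent, child)[0, 1])   # both approximately rho = 0.5 < 1
    print(C[0, 0], C[1, 1])                                      # variances are equal: no "convergence"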
3.25 Reverse Regression
Galton noticed another interesting feature of the bivariate distribution. There is nothing special
about a regression of $y$ on $x$. We can also regress $x$ on $y$. (In his heredity example this is the best
linear predictor of the height of parents given the height of their children.) This regression takes
the form
\[
x = \alpha^* + y\beta^* + e^*. \tag{3.45}
\]
This is sometimes called the reverse regression. In this equation, the coefficients $\alpha^*$, $\beta^*$ and
error $e^*$ are defined by linear projection. In a stable population we find that
\[
\beta^* = \mathrm{corr}(x, y) = \beta
\]
\[
\alpha^* = (1 - \beta)\mu = \alpha
\]
which are exactly the same as in the projection of $y$ on $x$! The intercept and slope have exactly the
same values in the forward and reverse projections!

While this algebraic discovery is quite simple, it is counter-intuitive. Instead, a common yet
mistaken guess for the form of the reverse regression is to take the equation (3.44), divide through
by $\beta$ and rewrite to find the equation
\[
x = -\frac{\alpha}{\beta} + y\frac{1}{\beta} - \frac{1}{\beta}e \tag{3.46}
\]
suggesting that the projection of $x$ on $y$ should have a slope coefficient of $1/\beta$ instead of $\beta$, and
intercept of $-\alpha/\beta$ rather than $\alpha$. What went wrong? Equation (3.46) is perfectly valid, because it
is a simple manipulation of the valid equation (3.44). The trouble is that (3.46) is neither a CEF nor
a linear projection. Inverting a projection (or CEF) does not yield a projection (or CEF). Instead,
(3.45) is a valid projection, not (3.46).

In any event, Galton's finding was that when the variables are standardized, the slope in both
projections ($y$ on $x$, and $x$ on $y$) equals the correlation, and both equations exhibit regression to
the mean. It is not a causal relation, but a natural feature of all joint distributions.
3.26 Limitations of the Best Linear Predictor
Let's compare the linear projection and linear CEF models.

From Theorem 3.8.1.4 we know that the CEF error has the property $E(xe) = 0$. Thus a linear
CEF is a linear projection. However, the converse is not true as the projection error does not
necessarily satisfy $E(e \mid x) = 0$.

To see this in a simple example, suppose we take a normally distributed random variable
$x \sim \mathrm{N}(0, 1)$ and set $y = x^2$. Note that $y$ is a deterministic function of $x$! Now consider the linear
projection of $y$ on $x$ and an intercept. The intercept and slope may be calculated as
\[
\begin{bmatrix} \alpha \\ \beta \end{bmatrix}
= \begin{bmatrix} 1 & E(x) \\ E(x) & E\left( x^2 \right) \end{bmatrix}^{-1}
\begin{bmatrix} E(y) \\ E(xy) \end{bmatrix}
= \begin{bmatrix} 1 & E(x) \\ E(x) & E\left( x^2 \right) \end{bmatrix}^{-1}
\begin{bmatrix} E\left( x^2 \right) \\ E\left( x^3 \right) \end{bmatrix}
= \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\]
Thus the linear projection equation takes the form
\[
y = \alpha + x\beta + e
\]
where $\alpha = 1$, $\beta = 0$ and $e = x^2 - 1$. Observe that $E(e) = E\left( x^2 \right) - 1 = 0$ and $E(xe) = E\left( x^3 \right) - E(x) = 0$,
yet $E(e \mid x) = x^2 - 1 \ne 0$. In this simple example $e$ is a deterministic function of $x$, yet $e$ and $x$
are uncorrelated! The point is that a projection error need not be a CEF error.
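This example is easy to reproduce numerically. The sketch below draws $x \sim \mathrm{N}(0,1)$, sets $y = x^2$, and computes the linear projection intercept and slope; the slope is approximately zero and the error $x^2 - 1$ is uncorrelated with $x$ even though it is an exact function of $x$:

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.standard_normal(1_000_000)
    y = x ** 2                                  # y is a deterministic function of x

    X = np.column_stack([np.ones_like(x), x])
    alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - (alpha + beta * x)
    print(alpha, beta)        # approximately 1 and 0
    print(np.mean(x * e))     # approximately 0: e and x are uncorrelated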
Another defect of linear projection is that it is sensitive to the marginal distribution of the
regressors when the conditional mean is nonlinear. We illustrate the issue in Figure 3.9 for a
constructed(^10) joint distribution of $y$ and $x$. The solid line is the nonlinear CEF of $y$ given $x$.
The data are divided in two — Group 1 and Group 2 — which have different marginal distributions
for the regressor $x$, and Group 1 has a lower mean value of $x$ than Group 2. The separate linear
projections of $y$ on $x$ for these two groups are displayed in the figure by the dashed lines. These
two projections are distinct approximations to the CEF. A defect with linear projection is that it
leads to the incorrect conclusion that the effect of $x$ on $y$ is different for individuals in the two
groups. This conclusion is incorrect because in fact there is no difference in the conditional mean
function. The apparent difference is a by-product of a linear approximation to a nonlinear mean,
combined with different marginal distributions for the conditioning variables.

(^10) The $x$ in Group 1 are N(2,1) and those in Group 2 are N(4,1), and the conditional distribution of $y$ given $x$ is
N($m(x)$, 1) where $m(x) = 2x - x^2/6$.

[Figure 3.9: Conditional Mean and Two Linear Projections]

3.27 Random Coefficient Model

A model which is notationally similar to but conceptually distinct from the linear CEF model
is the linear random coefficient model. It takes the form
\[
y = x'\eta
\]
where the individual-specific coefficient $\eta$ is random and independent of $x$. For example, if $x$ is
years of schooling and $y$ is log wages, then $\eta$ is the individual-specific return to schooling. If
a person obtains an extra year of schooling, $\eta$ is the actual change in their wage. The random
coefficient model allows the returns to schooling to vary in the population. Some individuals might
have a high return to education (a high $\eta$) and others a low return, possibly 0, or even negative.

In the linear CEF model the regressor coefficient equals the regression derivative — the change
in the conditional mean due to a change in the regressors, $\beta = \nabla m(x)$. This is not the effect on a
given individual, it is the effect on the population average. In contrast, in the random coefficient
model, the random vector $\eta = \nabla\left( x'\eta \right)$ is the true causal effect — the change in the response variable
$y$ itself due to a change in the regressors.

It is interesting, however, to discover that the linear random coefficient model implies a linear
CEF. To see this, let $\beta$ and $\Sigma$ denote the mean and covariance matrix of $\eta$:
\[
\beta = E(\eta)
\]
\[
\Sigma = \mathrm{var}(\eta)
\]
and then decompose the random coefficient as
\[
\eta = \beta + u
\]
where $u$ is distributed independently of $x$ with mean zero and covariance matrix $\Sigma$. Then we can
write
\[
E(y \mid x) = x' E(\eta \mid x) = x' E(\eta) = x'\beta
\]
so the CEF is linear in $x$, and the coefficients $\beta$ equal the mean of the random coefficient $\eta$.

We can thus write the equation as a linear CEF
\[
y = x'\beta + e \tag{3.47}
\]
where $e = x'u$ and $u = \eta - \beta$. The error is conditionally mean zero:
\[
E(e \mid x) = 0.
\]
Furthermore
\[
\mathrm{var}(e \mid x) = x' \mathrm{var}(\eta)\, x = x' \Sigma x
\]
so the error is conditionally heteroskedastic with its variance a quadratic function of $x$.

Theorem 3.27.1 In the linear random coefficient model $y = x'\eta$ with $\eta$ independent
of $x$, $E\|x\|^2 < \infty$, and $E\|\eta\|^2 < \infty$, then
\[
E(y \mid x) = x'\beta
\]
\[
\mathrm{var}(y \mid x) = x'\Sigma x
\]
where $\beta = E(\eta)$ and $\Sigma = \mathrm{var}(\eta)$.
3.28 Causal Effects

So far we have avoided the concept of causality, yet often the underlying goal of an econometric
analysis is to uncover a causal relationship between variables. It is often of great interest to
understand the causes and effects of decisions, actions, and policies. For example, we may be
interested in the effect of class sizes on test scores, police expenditures on crime rates, climate
change on economic activity, years of schooling on wages, institutional structure on growth, the
effectiveness of rewards on behavior, the consequences of medical procedures for health outcomes,
or any variety of possible causal relationships. In each case, the goal is to understand what is the
actual effect on the outcome $y$ due to a change in the input $x$. We are not just interested in the
conditional mean or linear projection; we would like to know the actual change. The causal effect
is typically specific to an individual, and also cannot be directly observed.

For example, the causal effect of schooling on wages is the actual difference a person would
receive in wages if we could change their level of education. The causal effect of a medical treatment is
the actual difference in an individual's health outcome, comparing treatment versus non-treatment.
In both cases the effects are individual and unobservable. For example, suppose that Jennifer would
have earned $10 an hour as a high-school graduate and $20 an hour as a college graduate while George
would have earned $8 as a high-school graduate and $12 as a college graduate. In this example the
causal effect of schooling is $10 an hour for Jennifer and $4 an hour for George. Furthermore, the
causal effect is unobserved as we only observe the wage corresponding to the actual outcome.

A variable $x_1$ can be said to have a causal effect on the response variable $y$ if the latter changes
when all other inputs are held constant. To make this precise we need a mathematical formulation.
We can write a full model for the response variable $y$ as
\[
y = h(x_1, x_2, u) \tag{3.48}
\]
where $x_1$ and $x_2$ are the observed variables, $u$ is an $\ell \times 1$ unobserved random factor, and $h$ is a
functional relationship. This framework includes as a special case the random coefficient model
(3.27) studied earlier. We define the causal effect of $x_1$ within this model as the change in $y$ due to
a change in $x_1$ holding the other variables $x_2$ and $u$ constant.

Definition 3.28.1 In the model (3.48) the causal effect of $x_1$ on $y$ is
\[
C(x_1, x_2, u) = \nabla_1 h(x_1, x_2, u), \tag{3.49}
\]
the change in $y$ due to a change in $x_1$, holding $x_2$ and $u$ constant.

To understand this concept, imagine taking a single individual. As far as our structural model is
concerned, this person is described by their observables $x_1$ and $x_2$, and their unobservables $u$. In a
wage regression the unobservables would include characteristics such as the person's abilities, skills,
work ethic, interpersonal connections, and preferences. The causal effect of $x_1$ (say, education) is
the change in the wage as $x_1$ changes, holding constant all other observables and unobservables.

It may be helpful to understand that (3.49) is a definition, and does not necessarily describe
causality in a fundamental or experimental sense. Perhaps it would be more appropriate to label
(3.49) as a structural effect (the effect within the structural model).

Sometimes it is useful to write this relationship as a potential outcome function
\[
y(x_1) = h(x_1, x_2, u)
\]
where the notation implies that $y(x_1)$ is holding $x_2$ and $u$ constant.

A popular example arises in the analysis of treatment effects with a binary regressor $x_1$. Let $x_1 = 1$
indicate treatment (e.g. a medical procedure) and $x_1 = 0$ indicate non-treatment. In this case
$y(x_1)$ can be written
\[
y(0) = h(0, x_2, u)
\]
\[
y(1) = h(1, x_2, u)
\]
In the literature on treatment effects, it is common to refer to $y(0)$ and $y(1)$ as the latent outcomes
associated with non-treatment and treatment, respectively. That is, for a given individual, $y(0)$ is
the health outcome if there is no treatment, and $y(1)$ is the health outcome if there is treatment.

The causal effect of treatment for the individual is the change in their health outcome due to
treatment — the change in $y$ as we hold both $x_2$ and $u$ constant:
\[
C(x_2, u) = y(1) - y(0).
\]
This is random (a function of $x_2$ and $u$) as both potential outcomes $y(0)$ and $y(1)$ are different
across individuals.

In a sample, we cannot observe both outcomes from the same individual; we only observe the
realized value
\[
y = \begin{cases} y(0) & \text{if } x_1 = 0 \\ y(1) & \text{if } x_1 = 1 \end{cases}
\]
As the causal effect varies across individuals and is not observable, it cannot be measured on
the individual level. We therefore focus on aggregate causal effects, in particular what is known as
the average causal effect.

Definition 3.28.2 In the model (3.48) the average causal effect of $x_1$ on $y$ conditional
on $x_2$ is
\[
ACE(x_1, x_2) = E\left( C(x_1, x_2, u) \mid x_1, x_2 \right) = \int_{\mathbb{R}^{\ell}} \nabla_1 h(x_1, x_2, u)\, f(u \mid x_1, x_2) \, du \tag{3.50}
\]
where $f(u \mid x_1, x_2)$ is the conditional density of $u$ given $x_1, x_2$.

We can think of the average causal effect $ACE(x_1, x_2)$ as the average effect in the general
population.

What is the relationship between the average causal effect $ACE(x_1, x_2)$ and the regression
derivative $\nabla_1 m(x_1, x_2)$? Equation (3.48) implies that the CEF is
\[
m(x_1, x_2) = E\left( h(x_1, x_2, u) \mid x_1, x_2 \right) = \int_{\mathbb{R}^{\ell}} h(x_1, x_2, u)\, f(u \mid x_1, x_2) \, du,
\]
the average causal equation, averaged over the conditional distribution of the unobserved component
$u$.

Applying the marginal effect operator, the regression derivative is
\[
\nabla_1 m(x_1, x_2) = \int_{\mathbb{R}^{\ell}} \nabla_1 h(x_1, x_2, u)\, f(u \mid x_1, x_2) \, du
+ \int_{\mathbb{R}^{\ell}} h(x_1, x_2, u)\, \nabla_1 f(u \mid x_1, x_2) \, du
\]
\[
= ACE(x_1, x_2) + \int_{\mathbb{R}^{\ell}} h(x_1, x_2, u)\, \nabla_1 f(u \mid x_1, x_2) \, du. \tag{3.51}
\]
In general, the average causal effect is not the regression derivative. However, they are equal when
the second component in (3.51) is zero. This occurs when $\nabla_1 f(u \mid x_1, x_2) = 0$, that is, when
the conditional density of $u$ given $(x_1, x_2)$ does not depend on $x_1$. The condition is sufficiently
important that it has a special name in the treatment effects literature.

Definition 3.28.3 Conditional Independence Assumption (CIA). Conditional
on $x_2$, the random variables $x_1$ and $u$ are statistically independent.

The CIA implies $f(u \mid x_1, x_2) = f(u \mid x_2)$ does not depend on $x_1$, and thus $\nabla_1 f(u \mid x_1, x_2) = 0$.
Thus the CIA implies that $\nabla_1 m(x_1, x_2) = ACE(x_1, x_2)$: the regression derivative equals the average
causal effect.

Theorem 3.28.1 In the structural model (3.48), the Conditional Independence Assumption
implies
\[
\nabla_1 m(x_1, x_2) = ACE(x_1, x_2),
\]
the regression derivative equals the average causal effect for $x_1$ on $y$ conditional on $x_2$.

This is a fascinating result. It shows that whenever the unobservable is independent of the
treatment variable (after conditioning on appropriate regressors) the regression derivative equals the
average causal effect. In this case, the CEF has causal economic meaning, giving strong justification
to estimation of the CEF. Our derivation also shows the critical role of the CIA. If CIA fails, then
the equality of the regression derivative and ACE fails.

This theorem is quite general. It applies equally to the treatment-effects model where $x_1$ is
binary or to more general settings where $x_1$ is continuous.

It is also helpful to understand that the CIA is weaker than full independence of $u$ from the
regressors $(x_1, x_2)$. The CIA was introduced precisely as a minimal sufficient condition to obtain
the desired result. Full independence implies the CIA and implies that each regression derivative
equals that variable's average causal effect, but full independence is not necessary in order to be
able to causally interpret a subset of the regressors.
3.29 Existence and Uniqueness of the Conditional Expectation*
In Sections 3.3 and 3.5 we defined the conditional mean when the conditioning variables $x$
are discrete and when the variables $(y, x)$ have a joint density. We have explored these cases
because these are the situations where the conditional mean is easiest to describe and understand.
However, the conditional mean can be defined without appealing to the properties of either discrete
or continuous random variables. The conditional mean exists quite generally.

To justify this claim we now present a deep result from probability theory. What it says is that
the conditional mean exists for all joint distributions $(y, x)$. The only requirement is that $y$ has a
finite mean.

Theorem 3.29.1 Existence of the Conditional Mean
If $E|y| < \infty$ then there exists a function $m(x)$ such that for all measurable sets $\mathcal{X}$
\[
E\left( \mathbf{1}(x \in \mathcal{X})\, y \right) = E\left( \mathbf{1}(x \in \mathcal{X})\, m(x) \right). \tag{3.52}
\]
The function $m(x)$ is almost everywhere unique, in the sense that if $h(x)$ satisfies
(3.52), then there is a set $S$ such that $\Pr(S) = 1$ and $m(x) = h(x)$ for $x \in S$. The
function $m(x)$ is called the conditional mean and is written $m(x) = E(y \mid x)$.

See, for example, Ash (1972), Theorem 6.3.3.

The conditional mean $m(x)$ defined by (3.52) specializes to (3.7) when $(y, x)$ have a joint density.
The usefulness of definition (3.52) is that Theorem 3.29.1 shows that the conditional mean $m(x)$
exists for all finite-mean distributions. This definition allows $y$ to be discrete or continuous, for $x$ to
be scalar or vector-valued, and for the components of $x$ to be discrete or continuously distributed.

Theorem 3.29.1 also demonstrates that $E|y| < \infty$ is a sufficient condition for identification of the
conditional mean. (Recall, a parameter is identified if it is uniquely determined by the distribution
of the observed variables.)

Theorem 3.29.2 Identification of the Conditional Mean
If $E|y| < \infty$, the conditional mean $m(x) = E(y \mid x)$ is identified almost everywhere.
3.30 Technical Proofs*
Proof of Theorem 3.6.1:
For convenience, assume that the variables have a joint density $f(y, x)$. Since $E(y \mid x)$ is a
function of the random vector $x$ only, to calculate its expectation we integrate with respect to the
density $f_x(x)$ of $x$, that is
\[
E\left( E(y \mid x) \right) = \int_{\mathbb{R}^k} E(y \mid x) f_x(x) \, dx.
\]
Substituting in (3.7) and noting that $f_{y \mid x}(y \mid x) f_x(x) = f(y, x)$, we find that the above expression
equals
\[
\int_{\mathbb{R}^k} \left( \int_{\mathbb{R}} y f_{y \mid x}(y \mid x) \, dy \right) f_x(x) \, dx = \int_{\mathbb{R}^k} \int_{\mathbb{R}} y f(y, x) \, dy \, dx = E(y),
\]
the unconditional mean of $y$.

Proof of Theorem 3.6.2:
Again assume that the variables have a joint density. It is useful to observe that
\[
f(y \mid x_1, x_2) f(x_2 \mid x_1) = \frac{f(y, x_1, x_2)}{f(x_1, x_2)} \frac{f(x_1, x_2)}{f(x_1)} = f(y, x_2 \mid x_1), \tag{3.53}
\]
the density of $(y, x_2)$ given $x_1$. Here, we have abused notation and used a single symbol $f$ to denote
the various unconditional and conditional densities to reduce notational clutter.

Note that
\[
E(y \mid x_1, x_2) = \int_{\mathbb{R}} y f(y \mid x_1, x_2) \, dy. \tag{3.54}
\]
Integrating (3.54) with respect to the conditional density of $x_2$ given $x_1$, and applying (3.53) we
find that
\[
E\left( E(y \mid x_1, x_2) \mid x_1 \right)
= \int_{\mathbb{R}^{k_2}} E(y \mid x_1, x_2) f(x_2 \mid x_1) \, dx_2
= \int_{\mathbb{R}^{k_2}} \left( \int_{\mathbb{R}} y f(y \mid x_1, x_2) \, dy \right) f(x_2 \mid x_1) \, dx_2
\]
\[
= \int_{\mathbb{R}^{k_2}} \int_{\mathbb{R}} y f(y \mid x_1, x_2) f(x_2 \mid x_1) \, dy \, dx_2
= \int_{\mathbb{R}^{k_2}} \int_{\mathbb{R}} y f(y, x_2 \mid x_1) \, dy \, dx_2
= E(y \mid x_1)
\]
as stated.

Proof of Theorem 3.6.3:
\[
E\left( g(x) y \mid x \right) = \int_{\mathbb{R}} g(x) y f_{y \mid x}(y \mid x) \, dy = g(x) \int_{\mathbb{R}} y f_{y \mid x}(y \mid x) \, dy = g(x) E(y \mid x)
\]
This is (3.9). The assumption that $E|g(x) y| < \infty$ is required for the first equality to be well
defined. Equation (3.10) follows by applying the Simple Law of Iterated Expectations to (3.9).

Proof of Theorem 3.7.1: The assumption that $E y^2 < \infty$ implies that all the conditional
expectations below exist.

Set $z = E(y \mid x_1, x_2)$. By the conditional Jensen's inequality (B.16),
\[
\left( E(z \mid x_1) \right)^2 \le E\left( z^2 \mid x_1 \right).
\]
Taking unconditional expectations, this implies
\[
E\left( \left( E(y \mid x_1) \right)^2 \right) \le E\left( \left( E(y \mid x_1, x_2) \right)^2 \right).
\]
Similarly,
\[
(E y)^2 \le E\left( \left( E(y \mid x_1) \right)^2 \right) \le E\left( \left( E(y \mid x_1, x_2) \right)^2 \right). \tag{3.55}
\]
The variables $y$, $E(y \mid x_1)$ and $E(y \mid x_1, x_2)$ all have the same mean $E y$, so the inequality (3.55)
implies that the variances are ranked monotonically:
\[
0 \le \mathrm{var}\left( E(y \mid x_1) \right) \le \mathrm{var}\left( E(y \mid x_1, x_2) \right). \tag{3.56}
\]
Next, for $\mu = E y$ observe that
\[
E\left( \left( y - E(y \mid x) \right) \left( E(y \mid x) - \mu \right) \right) = 0
\]
so the decomposition
\[
y - \mu = \left( y - E(y \mid x) \right) + \left( E(y \mid x) - \mu \right)
\]
satisfies
\[
\mathrm{var}(y) = \mathrm{var}\left( y - E(y \mid x) \right) + \mathrm{var}\left( E(y \mid x) \right). \tag{3.57}
\]
The monotonicity of the variances of the conditional mean (3.56) applied to the variance decomposition
(3.57) implies the reverse monotonicity of the variances of the differences, completing the
proof.

Proof of Theorem 3.8.1. Applying Minkowski's Inequality (B.22) to $e = y - m(x)$,
\[
\left( E|e|^r \right)^{1/r} = \left( E|y - m(x)|^r \right)^{1/r} \le \left( E|y|^r \right)^{1/r} + \left( E|m(x)|^r \right)^{1/r} < \infty,
\]
where the two parts on the right-hand side are finite since $E|y|^r < \infty$ by assumption and $E|m(x)|^r < \infty$
by the Conditional Expectation Inequality (B.17). The fact that $\left( E|e|^r \right)^{1/r} < \infty$ implies $E|e|^r < \infty$.

Proof of Theorem 3.16.1. For part 1, by the Expectation Inequality (B.18), (A.6) and Assumption
3.16.1,
\[
\left\| E\left( xx' \right) \right\| \le E\left\| xx' \right\| = E\|x\|^2 < \infty.
\]
Similarly, using the Expectation Inequality (B.18), the Cauchy-Schwarz Inequality (B.20) and
Assumption 3.16.1,
\[
\left\| E(xy) \right\| \le E\|xy\| \le \left( E\|x\|^2 \right)^{1/2} \left( E y^2 \right)^{1/2} < \infty.
\]
Thus the moments $E(xy)$ and $E(xx')$ are finite and well defined.

For part 2, the coefficient $\beta = \left( E(xx') \right)^{-1} E(xy)$ is well defined since $\left( E(xx') \right)^{-1}$ exists under
Assumption 3.16.1.

Part 3 follows from Definition 3.16.1 and part 2.

For part 4, first note that
\[
E\left( e^2 \right) = E\left( y - x'\beta \right)^2 = E y^2 - 2 E\left( yx' \right)\beta + \beta' E\left( xx' \right)\beta
= E y^2 - E\left( yx' \right) \left( E\left( xx' \right) \right)^{-1} E(xy) \le E y^2 < \infty.
\]
The inequality holds because $E\left( yx' \right) \left( E\left( xx' \right) \right)^{-1} E(xy)$ is a quadratic form and therefore
necessarily non-negative. Second, by the Expectation Inequality (B.18), the Cauchy-Schwarz Inequality
(B.20) and Assumption 3.16.1,
\[
\left\| E(xe) \right\| \le E\|xe\| \le \left( E\|x\|^2 \right)^{1/2} \left( E e^2 \right)^{1/2} < \infty.
\]
It follows that the expectation $E(xe)$ is finite, and is zero by the calculation (3.26).

For part 6, applying Minkowski's Inequality (B.22) to $e = y - x'\beta$,
\[
\left( E|e|^r \right)^{1/r} = \left( E\left| y - x'\beta \right|^r \right)^{1/r}
\le \left( E|y|^r \right)^{1/r} + \left( E\left| x'\beta \right|^r \right)^{1/r}
\le \left( E|y|^r \right)^{1/r} + \left( E\|x\|^r \right)^{1/r} \|\beta\| < \infty,
\]
the final inequality by assumption.
CHAPTER 3. CONDITIONAL EXPECTATION AND PROJECTION 69
Exercises

Exercise 3.1 Find $E(E(E(y \mid x_1, x_2, x_3) \mid x_1, x_2) \mid x_1)$.

Exercise 3.2 If $E(y \mid x) = a + bx$, find $E(yx)$ as a function of moments of $x$.

Exercise 3.3 Prove Theorem 3.8.1.4 using the law of iterated expectations.

Exercise 3.4 Suppose that the random variables $y$ and $x$ only take the values 0 and 1, and have the following joint probability distribution

            x = 0    x = 1
   y = 0     .1       .2
   y = 1     .4       .3

Find $E(y \mid x)$, $E(y^2 \mid x)$ and $\mathrm{var}(y \mid x)$ for $x = 0$ and $x = 1$.

Exercise 3.5 Show that $\sigma^2(x)$ is the best predictor of $e^2$ given $x$:
(a) Write down the mean-squared error of a predictor $h(x)$ for $e^2$.
(b) What does it mean to be predicting $e^2$?
(c) Show that $\sigma^2(x)$ minimizes the mean-squared error and is thus the best predictor.

Exercise 3.6 Use $y = m(x) + e$ to show that
$$\mathrm{var}(y) = \mathrm{var}(m(x)) + \sigma^2.$$

Exercise 3.7 Show that the conditional variance can be written as
$$\sigma^2(x) = E(y^2 \mid x) - (E(y \mid x))^2.$$

Exercise 3.8 Suppose that $y$ is discrete-valued, taking values only on the non-negative integers, and the conditional distribution of $y$ given $x$ is Poisson:
$$\Pr(y = j \mid x) = \frac{\exp(-x'\beta)\,(x'\beta)^j}{j!}, \qquad j = 0, 1, 2, \ldots$$
Compute $E(y \mid x)$ and $\mathrm{var}(y \mid x)$. Does this justify a linear regression model of the form $y = x'\beta + e$?
Hint: If $\Pr(y = j) = \frac{\exp(-\lambda)\lambda^j}{j!}$, then $Ey = \lambda$ and $\mathrm{var}(y) = \lambda$.

Exercise 3.9 Suppose you have two regressors: $x_1$ is binary (takes values 0 and 1) and $x_2$ is categorical with 3 categories (A, B, C). Write $E(y \mid x_1, x_2)$ as a linear regression.

Exercise 3.10 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(e \mid x) = 0$, then $E(x^2 e) = 0$.

Exercise 3.11 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(xe) = 0$, then $E(x^2 e) = 0$.

Exercise 3.12 True or False. If $y = x'\beta + e$ and $E(e \mid x) = 0$, then $e$ is independent of $x$.

Exercise 3.13 True or False. If $y = x'\beta + e$ and $E(xe) = 0$, then $E(e \mid x) = 0$.

Exercise 3.14 True or False. If $y = x'\beta + e$, $E(e \mid x) = 0$, and $E(e^2 \mid x) = \sigma^2$, a constant, then $e$ is independent of $x$.

Exercise 3.15 Let $x$ and $y$ have the joint density $f(x, y) = \frac{3}{2}(x^2 + y^2)$ on $0 \le x \le 1$, $0 \le y \le 1$. Compute the coefficients of the best linear predictor $y = \alpha + \beta x + e$. Compute the conditional mean $m(x) = E(y \mid x)$. Are the best linear predictor and conditional mean different?

Exercise 3.16 Let $x$ be a random variable with $\mu = Ex$ and $\sigma^2 = \mathrm{var}(x)$. Define
$$g\left(x \mid \mu, \sigma^2\right) = \begin{pmatrix} x - \mu \\ (x - \mu)^2 - \sigma^2 \end{pmatrix}.$$
Show that $Eg(x \mid m, s) = 0$ if and only if $m = \mu$ and $s = \sigma^2$.

Exercise 3.17 Suppose that
$$x = \begin{pmatrix} 1 \\ x_2 \\ x_3 \end{pmatrix}$$
and $x_3 = \alpha_1 + \alpha_2 x_2$ is a linear function of $x_2$.
(a) Show that $Q_{xx} = E(xx')$ is not invertible.
(b) Use a linear transformation of $x$ to find an expression for the best linear predictor of $y$ given $x$. (Be explicit, do not just use the generalized inverse formula.)

Exercise 3.18 Show (3.42)-(3.43), namely that for
$$d(\beta) = E\left(m(x) - x'\beta\right)^2$$
then
$$\beta = \mathop{\mathrm{argmin}}_{\beta \in \mathbb{R}^k} d(\beta) = \left(E(xx')\right)^{-1}E(x\, m(x)) = \left(E(xx')\right)^{-1}E(xy).$$
Hint: To show $E(x\, m(x)) = E(xy)$ use the law of iterated expectations.
Chapter 4
The Algebra of Least Squares
4.1 Introduction
In this chapter we introduce the popular least-squares estimator. Most of the discussion will be algebraic, with questions of distribution and inference deferred to later chapters.
4.2 Least Squares Estimator
In Section 3.16 we derived and discussed the best linear predictor of $y$ given $x$ for a pair of random variables $(y, x) \in \mathbb{R} \times \mathbb{R}^k$, and called this the linear projection model. Applied to observations from a random sample with observations $(y_i, x_i : i = 1, \ldots, n)$ this model takes the form
$$y_i = x_i'\beta + e_i \qquad (4.1)$$
where $\beta$ is defined as
$$\beta = \mathop{\mathrm{argmin}}_{\beta \in \mathbb{R}^k} S(\beta), \qquad (4.2)$$
the minimizer of the expected squared error
$$S(\beta) = E\left(y_i - x_i'\beta\right)^2, \qquad (4.3)$$
and has the explicit solution
$$\beta = \left(E(x_i x_i')\right)^{-1}E(x_i y_i). \qquad (4.4)$$
When a parameter is defined as the minimizer of a function as in (4.2), a standard approach to estimation is to construct an empirical analog of the function, and define the estimator of the parameter as the minimizer of the empirical function.
The empirical analog of the expected squared error (4.3) is the sample average squared error
$$S_n(\beta) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2 = \frac{1}{n}\mathrm{SSE}_n(\beta) \qquad (4.5)$$
where
$$\mathrm{SSE}_n(\beta) = \sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2$$
is called the sum-of-squared-errors function.
An estimator for $\beta$ is the minimizer of (4.5):
$$\hat{\beta} = \mathop{\mathrm{argmin}}_{\beta \in \mathbb{R}^k} S_n(\beta).$$
[Figure 4.1: Sum-of-Squared Errors Function]
Alternatively, as $S_n(\beta)$ is a scale multiple of $\mathrm{SSE}_n(\beta)$, we may equivalently define $\hat{\beta}$ as the minimizer of $\mathrm{SSE}_n(\beta)$. Hence $\hat{\beta}$ is commonly called the least-squares (LS) (or ordinary least squares (OLS)) estimator of $\beta$. As discussed in Chapter 2, the hat "^" on $\hat{\beta}$ signifies that it is an estimator of the parameter $\beta$. If we want to be explicit about the estimation method, we can write $\hat{\beta}_{\mathrm{ols}}$ to signify that it is the OLS estimator.
To visualize the quadratic function $S_n(\beta)$, Figure 4.1 displays an example sum-of-squared-errors function $\mathrm{SSE}_n(\beta)$ for the case $k = 2$. The least-squares estimator $\hat{\beta}$ is the pair $(\hat{\beta}_1, \hat{\beta}_2)$ minimizing this function.
4.3 Solving for Least Squares
To solve for $\hat{\beta}$, expand the SSE function to find
$$\mathrm{SSE}_n(\beta) = \sum_{i=1}^{n}y_i^2 - 2\beta'\sum_{i=1}^{n}x_i y_i + \beta'\sum_{i=1}^{n}x_i x_i'\,\beta$$
which is quadratic in the vector argument $\beta$. The first-order condition for minimization of $\mathrm{SSE}_n(\beta)$ is
$$0 = \frac{\partial}{\partial\beta}\mathrm{SSE}_n(\hat{\beta}) = -2\sum_{i=1}^{n}x_i y_i + 2\sum_{i=1}^{n}x_i x_i'\,\hat{\beta}. \qquad (4.6)$$
By inverting the $k \times k$ matrix $\sum_{i=1}^{n}x_i x_i'$ we find an explicit formula for the least-squares estimator
$$\hat{\beta} = \left(\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n}x_i y_i\right). \qquad (4.7)$$
This is the natural estimator of the best linear projection coefficient $\beta$ defined in (4.2), and can also be called the linear projection estimator.
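To make the algebra concrete, the following is a minimal sketch (not part of the original text) computing the least-squares estimator (4.7) from simulated data: it accumulates the moment sums and solves the normal equations (4.6). All variable names and data-generating values are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3

# Simulated regressors (first column is an intercept) and outcomes.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 0.5, -0.25])   # hypothetical coefficients
y = X @ beta_true + rng.normal(size=n)

# Accumulate the sums appearing in (4.7): sum_i x_i x_i' and sum_i x_i y_i.
Sxx = sum(np.outer(x, x) for x in X)        # k x k
Sxy = sum(x * yi for x, yi in zip(X, y))    # k-vector

# Solve the normal equations (4.6) instead of forming an explicit inverse.
beta_hat = np.linalg.solve(Sxx, Sxy)
print(beta_hat)
```

In practice the sums are computed with matrix products (as in Section 4.6 below); the loop form above simply mirrors the summation notation of (4.7).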
Alternatively, equation (4.4) writes the projection coefficient $\beta$ as an explicit function of the population moments $Q_{xy}$ and $Q_{xx}$. Their moment estimators are the sample moments
$$\hat{Q}_{xy} = \frac{1}{n}\sum_{i=1}^{n}x_i y_i, \qquad \hat{Q}_{xx} = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'.$$
The moment estimator of $\beta$ replaces the population moments in (4.4) with the sample moments:
$$\hat{\beta} = \hat{Q}_{xx}^{-1}\hat{Q}_{xy} = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i y_i\right) = \left(\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n}x_i y_i\right)$$
which is identical with (4.7).
Least Squares Estimation

Definition 4.3.1 The least-squares estimator $\hat{\beta}$ is
$$\hat{\beta} = \mathop{\mathrm{argmin}}_{\beta \in \mathbb{R}^k} S_n(\beta)$$
where
$$S_n(\beta) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2$$
and has the solution
$$\hat{\beta} = \left(\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n}x_i y_i\right).$$
Adrien-Marie Legendre
The method of least-squares was first published in 1805 by the French mathematician Adrien-Marie Legendre (1752-1833). Legendre proposed least-squares as a solution to the algebraic problem of solving a system of equations when the number of equations exceeded the number of unknowns. This was a vexing and common problem in astronomical measurement. As viewed by Legendre, (4.1) is a set of n equations with k unknowns. As the equations cannot be solved exactly, Legendre's goal was to select $\beta$ to make the set of errors as small as possible. He proposed the sum of squared error criterion, and derived the algebraic solution presented above. As he noted, the first-order conditions (4.6) are a system of k equations with k unknowns, which can be solved by "ordinary" methods. Hence the method became known as Ordinary Least Squares and to this day we still use the abbreviation OLS to refer to Legendre's estimation method.
4.4 Illustration

We illustrate the least-squares estimator in practice with the data set used to generate the estimates from Chapter 3. This is the March 2009 Current Population Survey, which has extensive information on the U.S. population. This data set is described in more detail in Section ? For this illustration, we use the subsample of non-white married non-military female wage earners with 12 years potential work experience. This subsample has 61 observations. Let $y_i$ be log wages and $x_i$ be an intercept and years of education. Then
$$\frac{1}{n}\sum_{i=1}^{n}x_i y_i = \begin{pmatrix} 3.025 \\ 47.447 \end{pmatrix}$$
and
$$\frac{1}{n}\sum_{i=1}^{n}x_i x_i' = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}.$$
Thus
$$\hat{\beta} = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1}\begin{pmatrix} 3.025 \\ 47.447 \end{pmatrix} = \begin{pmatrix} 0.626 \\ 0.156 \end{pmatrix}. \qquad (4.8)$$
We often write the estimated equation using the format
$$\widehat{\log(\text{Wage})} = 0.626 + 0.156\ \text{education}. \qquad (4.9)$$
An interpretation of the estimated equation is that each year of education is associated with a 16% increase in mean wages.
Equation (4.9) is called a bivariate regression as there are only two variables. A multivariate regression has two or more regressors, and allows a more detailed investigation. Let's redo the example, but now including all levels of experience. This expanded sample includes 2454 observations. Including as regressors years of experience and its square (experience$^2$/100) (we divide by 100 to simplify reporting), we obtain the estimates
$$\widehat{\log(\text{Wage})} = 1.06 + 0.116\ \text{education} + 0.010\ \text{experience} - 0.014\ \text{experience}^2/100. \qquad (4.10)$$
These estimates suggest a 12% increase in mean wages per year of education, holding experience constant.
4.5 Least Squares Residuals
As a by-product of estimation, we define the fitted or predicted value
$$\hat{y}_i = x_i'\hat{\beta}$$
and the residual
$$\hat{e}_i = y_i - \hat{y}_i = y_i - x_i'\hat{\beta}. \qquad (4.11)$$
Note that $y_i = \hat{y}_i + \hat{e}_i$ and
$$y_i = x_i'\hat{\beta} + \hat{e}_i. \qquad (4.12)$$
We make a distinction between the error $e_i$ and the residual $\hat{e}_i$. The error $e_i$ is unobservable while the residual $\hat{e}_i$ is a by-product of estimation. These two variables are frequently mislabeled, which can cause confusion.
Equation (4.6) implies that
$$\sum_{i=1}^{n}x_i\hat{e}_i = 0. \qquad (4.13)$$
To see this by a direct calculation, using (4.11) and (4.7),
$$\begin{aligned}
\sum_{i=1}^{n}x_i\hat{e}_i &= \sum_{i=1}^{n}x_i\left(y_i - x_i'\hat{\beta}\right) \\
&= \sum_{i=1}^{n}x_i y_i - \sum_{i=1}^{n}x_i x_i'\hat{\beta} \\
&= \sum_{i=1}^{n}x_i y_i - \sum_{i=1}^{n}x_i x_i'\left(\sum_{i=1}^{n}x_i x_i'\right)^{-1}\sum_{i=1}^{n}x_i y_i \\
&= \sum_{i=1}^{n}x_i y_i - \sum_{i=1}^{n}x_i y_i = 0.
\end{aligned}$$
When $x_i$ contains a constant, an implication of (4.13) is
$$\frac{1}{n}\sum_{i=1}^{n}\hat{e}_i = 0.$$
Thus the residuals have a sample mean of zero and the sample correlation between the regressors and the residual is zero. These are algebraic results, and hold true for all linear regression estimates.
Given the residuals, we can construct an estimator for $\sigma^2 = Ee_i^2$:
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2. \qquad (4.14)$$
4.6 Model in Matrix Notation
For many purposes, including computation, it is convenient to write the model and statistics in matrix notation. The linear equation (3.24) is a system of n equations, one for each observation. We can stack these n equations together as
$$\begin{aligned}
y_1 &= x_1'\beta + e_1 \\
y_2 &= x_2'\beta + e_2 \\
&\;\;\vdots \\
y_n &= x_n'\beta + e_n.
\end{aligned}$$
Now define
$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}, \qquad e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}.$$
Observe that $y$ and $e$ are $n \times 1$ vectors, and $X$ is an $n \times k$ matrix. Then the system of n equations can be compactly written in the single equation
$$y = X\beta + e. \qquad (4.15)$$
Sample sums can also be written in matrix notation. For example
$$\sum_{i=1}^{n}x_i x_i' = X'X, \qquad \sum_{i=1}^{n}x_i y_i = X'y.$$
Therefore
$$\hat{\beta} = \left(X'X\right)^{-1}X'y. \qquad (4.16)$$
The matrix version of (4.12) and estimated version of (4.15) is
$$y = X\hat{\beta} + \hat{e},$$
or equivalently the residual vector is
$$\hat{e} = y - X\hat{\beta}.$$
Using the residual vector, we can write (4.13) as
$$X'\hat{e} = 0 \qquad (4.18)$$
and the error variance estimator (4.14) as
$$\hat{\sigma}^2 = n^{-1}\hat{e}'\hat{e}. \qquad (4.19)$$
Using matrix notation we have simple expressions for most estimators. This is particularly convenient for computer programming, as most languages allow matrix notation and manipulation.

Important Matrix Expressions
$$\begin{aligned}
y &= X\beta + e \\
\hat{\beta} &= \left(X'X\right)^{-1}X'y \\
\hat{e} &= y - X\hat{\beta} \\
X'\hat{e} &= 0 \\
\hat{\sigma}^2 &= n^{-1}\hat{e}'\hat{e}.
\end{aligned}$$
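The boxed matrix expressions translate directly into linear-algebra code. A minimal sketch (simulated data; names and values are hypothetical, not from the text's CPS example):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.25]) + rng.normal(size=n)

# beta-hat = (X'X)^{-1} X'y, computed via a linear solve
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

e_hat = y - X @ beta_hat            # residual vector, e-hat = y - X beta-hat
orthogonality = X.T @ e_hat         # should be numerically zero, as in (4.18)
sigma2_hat = (e_hat @ e_hat) / n    # error variance estimator (4.19)

print(beta_hat, np.max(np.abs(orthogonality)), sigma2_hat)
```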
Early Use of Matrices
The earliest known treatment of the use of matrix methods
to solve simultaneous systems is found in Chapter 8 of the
Chinese text The Nine Chapters on the Mathematical Art,
written by several generations of scholars from the 10th to
2nd century BCE.
4.7 Projection Matrix

Define the matrix
$$P = X\left(X'X\right)^{-1}X'.$$
Observe that
$$PX = X\left(X'X\right)^{-1}X'X = X.$$
This is a property of a projection matrix. More generally, for any matrix $Z$ which can be written as $Z = X\Gamma$ for some matrix $\Gamma$ (we say that $Z$ lies in the range space of $X$), then
$$PZ = PX\Gamma = X\left(X'X\right)^{-1}X'X\Gamma = X\Gamma = Z.$$
As an important example, if we partition the matrix $X$ into two matrices $X_1$ and $X_2$ so that
$$X = [X_1\ X_2],$$
then $PX_1 = X_1$.
The matrix $P$ is symmetric and idempotent¹. To see that it is symmetric,
$$P' = \left(X\left(X'X\right)^{-1}X'\right)' = \left(X'\right)'\left(\left(X'X\right)^{-1}\right)'\left(X\right)' = X\left(\left(X'X\right)'\right)^{-1}X' = X\left(\left(X\right)'\left(X'\right)'\right)^{-1}X' = P.$$
To establish that it is idempotent, the fact that $PX = X$ implies that
$$PP = PX\left(X'X\right)^{-1}X' = X\left(X'X\right)^{-1}X' = P.$$
The matrix $P$ has the property that it creates the fitted values in a least-squares regression:
$$Py = X\left(X'X\right)^{-1}X'y = X\hat{\beta} = \hat{y}.$$
Because of this property, $P$ is also known as the "hat matrix".
Another useful property is that the trace of $P$ equals the number of columns of $X$:
$$\mathrm{tr}\, P = k. \qquad (4.20)$$
Indeed,
$$\mathrm{tr}\, P = \mathrm{tr}\left(X\left(X'X\right)^{-1}X'\right) = \mathrm{tr}\left(\left(X'X\right)^{-1}X'X\right) = \mathrm{tr}\left(I_k\right) = k.$$

¹A matrix $P$ is symmetric if $P' = P$. A matrix $P$ is idempotent if $PP = P$. See Appendix A.8.

(See Appendix A.4 for definition and properties of the trace operator.)
The i'th diagonal element of $P = X(X'X)^{-1}X'$ is
$$h_{ii} = x_i'\left(X'X\right)^{-1}x_i \qquad (4.21)$$
which is called the leverage of the i'th observation. The $h_{ii}$ take values in $[0, 1]$ and sum to $k$:
$$\sum_{i=1}^{n}h_{ii} = k \qquad (4.22)$$
(See Exercise 4.8).
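The leverage values (4.21) can be computed directly, observation by observation, without forming the full $n \times n$ matrix $P$. A minimal sketch under the same simulated-data assumptions as the earlier examples:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

# h_ii = x_i' (X'X)^{-1} x_i, computed row by row without building P.
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)

print(h.min(), h.max())   # each leverage lies in [0, 1]
print(h.sum())            # sums to k, as in (4.22)
```

Avoiding the explicit $n \times n$ matrix matters once $n$ is large, since $P$ has $n^2$ entries while the leverages require only the $k \times k$ inverse.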
4.8 Orthogonal Projection
Define
$$M = I_n - P = I_n - X\left(X'X\right)^{-1}X'$$
where $I_n$ is the $n \times n$ identity matrix. Note that
$$MX = (I_n - P)X = X - PX = X - X = 0.$$
Thus $M$ and $X$ are orthogonal. We call $M$ an orthogonal projection matrix or an annihilator matrix due to the property that for any matrix $Z$ in the range space of $X$,
$$MZ = Z - PZ = 0.$$
For example, $MX_1 = 0$ for any subcomponent $X_1$ of $X$, and $MP = 0$.
The orthogonal projection matrix $M$ has many similar properties with $P$, including that $M$ is symmetric ($M' = M$) and idempotent ($MM = M$). Similarly to (4.20) we can calculate
$$\mathrm{tr}\, M = n - k. \qquad (4.23)$$
While $P$ creates fitted values, $M$ creates least-squares residuals:
$$My = y - Py = y - X\hat{\beta} = \hat{e}. \qquad (4.24)$$
Another way of writing (4.24) is
$$y = Py + My = \hat{y} + \hat{e}.$$
This decomposition is orthogonal, that is
$$\hat{y}'\hat{e} = (Py)'(My) = y'PMy = 0.$$
We can also use (4.24) to write an alternative expression for the residual vector. Substituting $y = X\beta + e$ into $\hat{e} = My$ and using $MX = 0$ we find
$$\hat{e} = M(X\beta + e) = Me, \qquad (4.25)$$
which is free of dependence on the regression coefficient $\beta$.
Another useful application of (4.24) is to the error variance estimator (4.19):
$$\hat{\sigma}^2 = n^{-1}\hat{e}'\hat{e} = n^{-1}y'MMy = n^{-1}y'My,$$
the final equality since $MM = M$. Similarly using (4.25) we find
$$\hat{\sigma}^2 = n^{-1}e'Me.$$
4.9 Regression Components
Partition
$$X = [X_1\ X_2]$$
and
$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.$$
Then the regression model can be rewritten as
$$y = X_1\beta_1 + X_2\beta_2 + e. \qquad (4.26)$$
The OLS estimator of $\beta = (\beta_1', \beta_2')'$ is obtained by regression of $y$ on $X = [X_1\ X_2]$ and can be written as
$$y = X\hat{\beta} + \hat{e} = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}. \qquad (4.27)$$
We are interested in algebraic expressions for $\hat{\beta}_1$ and $\hat{\beta}_2$.
The algebra for the estimator is identical as that for the population coefficients as presented in Section 3.19.
Partition $\hat{Q}_{xx}$ and $\hat{Q}_{xy}$ as
$$\hat{Q}_{xx} = \begin{pmatrix} \hat{Q}_{11} & \hat{Q}_{12} \\ \hat{Q}_{21} & \hat{Q}_{22} \end{pmatrix} = \begin{pmatrix} \frac{1}{n}X_1'X_1 & \frac{1}{n}X_1'X_2 \\ \frac{1}{n}X_2'X_1 & \frac{1}{n}X_2'X_2 \end{pmatrix}$$
and similarly $\hat{Q}_{xy}$:
$$\hat{Q}_{xy} = \begin{pmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{pmatrix} = \begin{pmatrix} \frac{1}{n}X_1'y \\ \frac{1}{n}X_2'y \end{pmatrix}.$$
By the partitioned matrix inversion formula (A.4)
$$\hat{Q}_{xx}^{-1} = \begin{pmatrix} \hat{Q}_{11} & \hat{Q}_{12} \\ \hat{Q}_{21} & \hat{Q}_{22} \end{pmatrix}^{-1} \overset{\mathrm{def}}{=} \begin{pmatrix} \hat{Q}^{11} & \hat{Q}^{12} \\ \hat{Q}^{21} & \hat{Q}^{22} \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot 2}^{-1} & -\hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot 1}^{-1} \end{pmatrix} \qquad (4.28)$$
where $\hat{Q}_{11\cdot 2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21}$ and $\hat{Q}_{22\cdot 1} = \hat{Q}_{22} - \hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{12}$.
Thus
$$\hat{\beta} = \begin{pmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot 2}^{-1} & -\hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot 1}^{-1} \end{pmatrix}\begin{pmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{1y\cdot 2} \\ \hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{2y\cdot 1} \end{pmatrix}.$$
Now
$$\hat{Q}_{11\cdot 2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21} = \frac{1}{n}X_1'X_1 - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'X_1 = \frac{1}{n}X_1'M_2X_1$$
where
$$M_2 = I_n - X_2\left(X_2'X_2\right)^{-1}X_2'$$
is the orthogonal projection matrix for $X_2$. Similarly
$$\hat{Q}_{22\cdot 1} = \frac{1}{n}X_2'M_1X_2$$
where
$$M_1 = I_n - X_1\left(X_1'X_1\right)^{-1}X_1'$$
is the orthogonal projection matrix for $X_1$. Also
$$\hat{Q}_{1y\cdot 2} = \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{2y} = \frac{1}{n}X_1'y - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'y = \frac{1}{n}X_1'M_2y$$
and
$$\hat{Q}_{2y\cdot 1} = \frac{1}{n}X_2'M_1y.$$
Therefore
$$\hat{\beta}_1 = \left(X_1'M_2X_1\right)^{-1}\left(X_1'M_2y\right) \qquad (4.29)$$
and
$$\hat{\beta}_2 = \left(X_2'M_1X_2\right)^{-1}\left(X_2'M_1y\right). \qquad (4.30)$$
These are algebraic expressions for the sub-coefficient estimates from (4.27).
4.10 Residual Regression
As first recognized by Ragnar Frisch, expressions (4.29) and (4.30) can be used to show that the least-squares estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ can be found by a two-step regression procedure.
Take (4.30). Since $M_1$ is idempotent, $M_1 = M_1M_1$ and thus
$$\begin{aligned}
\hat{\beta}_2 &= \left(X_2'M_1X_2\right)^{-1}\left(X_2'M_1y\right) \\
&= \left(X_2'M_1M_1X_2\right)^{-1}\left(X_2'M_1M_1y\right) \\
&= \left(\tilde{X}_2'\tilde{X}_2\right)^{-1}\left(\tilde{X}_2'\tilde{e}_1\right)
\end{aligned}$$
where
$$\tilde{X}_2 = M_1X_2$$
and
$$\tilde{e}_1 = M_1y.$$
Thus the coefficient estimate $\hat{\beta}_2$ is algebraically equal to the least-squares regression of $\tilde{e}_1$ on $\tilde{X}_2$. Notice that these two are $y$ and $X_2$, respectively, premultiplied by $M_1$. But we know that multiplication by $M_1$ is equivalent to creating least-squares residuals. Therefore $\tilde{e}_1$ is simply the least-squares residual from a regression of $y$ on $X_1$, and the columns of $\tilde{X}_2$ are the least-squares residuals from the regressions of the columns of $X_2$ on $X_1$.
We have proven the following theorem.
Theorem 4.10.1 Frisch-Waugh-Lovell
In the model (4.26), the OLS estimator of $\beta_2$ and the OLS residuals $\hat{e}$ may be equivalently computed by either the OLS regression (4.27) or via the following algorithm:
1. Regress $y$ on $X_1$, obtain residuals $\tilde{e}_1$;
2. Regress $X_2$ on $X_1$, obtain residuals $\tilde{X}_2$;
3. Regress $\tilde{e}_1$ on $\tilde{X}_2$, obtain OLS estimates $\hat{\beta}_2$ and residuals $\hat{e}$.
In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm. Rather, the primary use is theoretical.
A common application of the FWL theorem, which you may have seen in an introductory econometrics course, is the demeaning formula for regression. Partition $X = [X_1\ X_2]$ where $X_1$ is the vector of observed regressors and $X_2 = \iota$ is a vector of ones. In this case,
$$M_2 = I - \iota\left(\iota'\iota\right)^{-1}\iota'.$$
Observe that
$$\tilde{X}_1 = M_2X_1 = X_1 - \iota\left(\iota'\iota\right)^{-1}\iota'X_1 = X_1 - \bar{X}_1$$
and
$$\tilde{y} = M_2y = y - \iota\left(\iota'\iota\right)^{-1}\iota'y = y - \bar{y},$$
which are "demeaned". The FWL theorem says that $\hat{\beta}_1$ is the OLS estimate from a regression of $y_i - \bar{y}$ on $x_{1i} - \bar{x}_1$:
$$\hat{\beta}_1 = \left(\sum_{i=1}^{n}\left(x_{1i} - \bar{x}_1\right)\left(x_{1i} - \bar{x}_1\right)'\right)^{-1}\left(\sum_{i=1}^{n}\left(x_{1i} - \bar{x}_1\right)\left(y_i - \bar{y}\right)\right).$$
Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
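The FWL theorem is easy to verify numerically: the coefficient on $X_2$ from the full regression equals the coefficient from regressing the $y$-residuals on the $X_2$-residuals. A minimal sketch with simulated data (all names and values are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # includes an intercept
X2 = rng.normal(size=(n, 1))
y = X1 @ np.array([1.0, 0.5]) + 2.0 * X2[:, 0] + rng.normal(size=n)

def ols(X, y):
    # Least-squares coefficients via the normal equations.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Full regression of y on [X1 X2]; the last coefficient is beta2-hat.
beta_full = ols(np.column_stack([X1, X2]), y)

# FWL two-step: residualize y and X2 on X1, then regress residuals on residuals.
e1 = y - X1 @ ols(X1, y)
X2_tilde = X2 - X1 @ ols(X1, X2)
beta2_fwl = ols(X2_tilde, e1)

print(beta_full[-1], beta2_fwl[0])   # identical up to rounding error
```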
Ragnar Frisch
Ragnar Frisch (1895-1973) was co-winner with Jan Tinbergen of the first Nobel Memorial Prize in Economic Sciences in 1969 for their work in developing and applying dynamic models for the analysis of economic problems. Frisch made a number of foundational contributions to modern economics beyond the Frisch-Waugh-Lovell Theorem, including formalizing consumer theory, production theory, and business cycle theory.
4.11 Prediction Errors

The least-squares residuals $\hat{e}_i$ are not true prediction errors, as they are constructed based on the full sample including $y_i$. A proper prediction for $y_i$ should be based on estimates constructed using only the other observations. We can do this by defining the leave-one-out OLS estimator of $\beta$ as that obtained from the sample of $n - 1$ observations excluding the i'th observation:
$$\hat{\beta}_{(-i)} = \left(\frac{1}{n-1}\sum_{j\neq i}x_jx_j'\right)^{-1}\left(\frac{1}{n-1}\sum_{j\neq i}x_jy_j\right) = \left(X_{(-i)}'X_{(-i)}\right)^{-1}X_{(-i)}'y_{(-i)}. \qquad (4.31)$$
Here, $X_{(-i)}$ and $y_{(-i)}$ are the data matrices omitting the i'th row. The leave-one-out predicted value for $y_i$ is
$$\tilde{y}_i = x_i'\hat{\beta}_{(-i)},$$
and the leave-one-out residual or prediction error is
$$\tilde{e}_i = y_i - \tilde{y}_i.$$
A convenient alternative expression for $\hat{\beta}_{(-i)}$ (derived below) is
$$\hat{\beta}_{(-i)} = \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i \qquad (4.32)$$
where $h_{ii}$ are the leverage values as defined in (4.21).
Using (4.32) we can simplify the expression for the prediction error:
$$\begin{aligned}
\tilde{e}_i &= y_i - x_i'\hat{\beta}_{(-i)} \\
&= y_i - x_i'\hat{\beta} + (1 - h_{ii})^{-1}x_i'\left(X'X\right)^{-1}x_i\hat{e}_i \\
&= \hat{e}_i + (1 - h_{ii})^{-1}h_{ii}\hat{e}_i \\
&= (1 - h_{ii})^{-1}\hat{e}_i. \qquad (4.33)
\end{aligned}$$
A convenient feature of this expression is that it shows that computation of $\tilde{e}_i$ is based on a simple linear operation, and does not really require n separate estimations.
One use of the prediction errors is to estimate the out-of-sample mean squared error
$$\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\tilde{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-2}\hat{e}_i^2.$$
This is also known as the mean squared prediction error. Its square root $\tilde{\sigma} = \sqrt{\tilde{\sigma}^2}$ is the prediction standard error.
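Equation (4.33) means that all n leave-one-out residuals come from a single regression, by rescaling the ordinary residuals with the leverage values. A minimal sketch comparing (4.33) to a brute-force leave-one-out loop (simulated data; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 60, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e_hat = y - X @ beta_hat
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverage values (4.21)

# Prediction errors via (4.33): e_tilde_i = (1 - h_ii)^{-1} e_hat_i
e_tilde = e_hat / (1.0 - h)

# Brute-force check: re-estimate the regression without observation i
e_loo = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.solve(X[keep].T @ X[keep], X[keep].T @ y[keep])
    e_loo[i] = y[i] - X[i] @ b_i

print(np.max(np.abs(e_tilde - e_loo)))   # numerically zero
sigma2_tilde = np.mean(e_tilde**2)       # mean squared prediction error
```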
Proof of Equation (4.32). The Sherman-Morrison formula (A.3) from Appendix A.5 states that for nonsingular $A$ and vector $b$
$$\left(A - bb'\right)^{-1} = A^{-1} + \left(1 - b'A^{-1}b\right)^{-1}A^{-1}bb'A^{-1}.$$
This implies
$$\left(X'X - x_ix_i'\right)^{-1} = \left(X'X\right)^{-1} + (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1}$$
and thus
$$\begin{aligned}
\hat{\beta}_{(-i)} &= \left(X'X - x_ix_i'\right)^{-1}\left(X'y - x_iy_i\right) \\
&= \left(X'X\right)^{-1}X'y - \left(X'X\right)^{-1}x_iy_i + (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1}\left(X'y - x_iy_i\right) \\
&= \hat{\beta} - \left(X'X\right)^{-1}x_iy_i + (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\left(x_i'\hat{\beta} - h_{ii}y_i\right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\left((1 - h_{ii})y_i - x_i'\hat{\beta} + h_{ii}y_i\right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i,
\end{aligned}$$
the third equality making the substitutions $\hat{\beta} = (X'X)^{-1}X'y$ and $h_{ii} = x_i'(X'X)^{-1}x_i$, and the remainder collecting terms.
4.12 Inﬂuential Observations
Another use of the leave-one-out estimator is to investigate the impact of influential observations, sometimes called outliers. We say that observation i is influential if its omission from the sample induces a substantial change in a parameter of interest. From (4.32)-(4.33) we know that
$$\hat{\beta} - \hat{\beta}_{(-i)} = (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i = \left(X'X\right)^{-1}x_i\tilde{e}_i. \qquad (4.34)$$
By direct calculation of this quantity for each observation i, we can directly discover if a specific observation i is influential for a coefficient estimate of interest.
For a general assessment, we can focus on the predicted values. The difference between the full-sample and leave-one-out predicted values is
$$\hat{y}_i - \tilde{y}_i = x_i'\hat{\beta} - x_i'\hat{\beta}_{(-i)} = x_i'\left(X'X\right)^{-1}x_i\tilde{e}_i = h_{ii}\tilde{e}_i,$$
which is a simple function of the leverage values $h_{ii}$ and prediction errors $\tilde{e}_i$. Observation i is influential for the predicted value if $|h_{ii}\tilde{e}_i|$ is large, which requires that both $h_{ii}$ and $|\tilde{e}_i|$ are large.
One way to think about this is that a large leverage value $h_{ii}$ gives the potential for observation i to be influential. A large $h_{ii}$ means that observation i is unusual in the sense that the regressor $x_i$ is far from its sample mean. We call an observation with large $h_{ii}$ a leverage point. A leverage point is not necessarily influential as the latter also requires that the prediction error $\tilde{e}_i$ is large.
To determine if any individual observations are influential in this sense, a large number of diagnostic statistics have been proposed (some names include DFITS, Cook's Distance, and Welsch Distance) but as they are not based on statistical theory it is unclear if they are useful for practical work. Probably the most relevant measure is the change in the coefficient estimates given in (4.34). The ratio of these changes to the coefficient's standard error is called its DFBETA, and is a post-estimation diagnostic available in STATA. While there is no magic threshold, the concern is whether or not an individual observation meaningfully changes an estimated coefficient of interest.
For illustration, consider Figure 4.2 which shows a scatter plot of random variables $(y_i, x_i)$. The 25 observations shown with the open circles are generated by $x_i \sim U[1, 10]$ and $y_i \sim N(x_i, 4)$. The 26'th observation shown with the filled circle is $x_{26} = 9$, $y_{26} = 0$. (Imagine that $y_{26} = 0$ was incorrectly recorded due to a mistaken key entry.) The figure shows both the least-squares fitted line from the full sample and that obtained after deletion of the 26'th observation from the sample. In this example we can see how the 26'th observation (the "outlier") greatly tilts the least-squares fitted line towards the 26'th observation. In fact, the slope coefficient decreases from 0.97 (which is close to the true value of 1.00) to 0.56, which is substantially reduced. Neither $y_{26}$ nor $x_{26}$ are unusual values relative to their marginal distributions, so this outlier would not have been detected from examination of the marginal distributions of the data. The change in the slope coefficient of -0.41 is meaningful and should raise concern to an applied economist.
[Figure 4.2: Impact of an influential observation on the least-squares estimator. The plot shows the full-sample OLS fitted line and the leave-one-out OLS fitted line.]
If an observation is determined to be inﬂuential, what should be done? As a common cause
of inﬂuential observations is data entry error, the inﬂuential observations should be examined for
evidence that the observation was misrecorded. Perhaps the observation falls outside of permitted
ranges, or some observables are inconsistent (for example, a person is listed as having a job but
receives earnings of $0). If it is determined that an observation is incorrectly recorded, then the
observation is typically deleted from the sample. This process is often called “cleaning the data”.
The decisions made in this process involve a fair amount of individual judgement. When this is
done it is proper empirical practice to document such choices. (It is useful to keep the source data
in its original form, a revised data ﬁle after cleaning, and a record describing the revision process.
This is especially useful when revising empirical work at a later date.)
It is also possible that an observation is correctly measured, but unusual and inﬂuential. In
this case it is unclear how to proceed. Some researchers will try to alter the speciﬁcation to
properly model the inﬂuential observation. Other researchers will delete the observation from the
sample. The motivation for this choice is to prevent the results from being skewed or determined
by individual observations, but this practice is viewed skeptically by many researchers who believe
it reduces the integrity of reported empirical results.
4.13 Measures of Fit
When a least-squares regression is reported in applied economics, it is common to see a summary measure of fit, measuring how well the regressors explain the observed variation in the dependent variable.
Some common summary measures are based on scaled or transformed estimates of the mean squared error $\sigma^2$. These include the sum of squared errors $\sum_{i=1}^{n}\hat{e}_i^2$, the sample variance $n^{-1}\sum_{i=1}^{n}\hat{e}_i^2 = \hat{\sigma}^2$, the root mean squared error $\sqrt{n^{-1}\sum_{i=1}^{n}\hat{e}_i^2}$ (sometimes called the standard error of the regression), and the mean squared prediction error $\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\tilde{e}_i^2$.
A related and commonly reported statistic is the coefficient of determination or R-squared:
$$R^2 = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\hat{\sigma}^2}{\hat{\sigma}_y^2}$$
where
$$\hat{\sigma}_y^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2$$
is the sample variance of $y_i$. $R^2$ can be viewed as an estimator of the population parameter
$$\rho^2 = \frac{\mathrm{var}(x_i'\beta)}{\mathrm{var}(y_i)} = 1 - \frac{\sigma^2}{\sigma_y^2}$$
where $\sigma_y^2 = \mathrm{var}(y_i)$. A high $\rho^2$ means that forecasts of $y$ using $x'\beta$ will be quite accurate relative to the unconditional mean. In this sense $R^2$ can be a useful summary measure for an out-of-sample forecast or policy experiment.
An alternative estimator of $\rho^2$ proposed by Theil called R-bar-squared or adjusted $R^2$ is
$$\bar{R}^2 = 1 - \frac{(n - 1)\sum_{i=1}^{n}\hat{e}_i^2}{(n - k)\sum_{i=1}^{n}(y_i - \bar{y})^2}.$$
Theil's estimator $\bar{R}^2$ is a better estimator of $\rho^2$ than the unadjusted estimator $R^2$ because it is a ratio of bias-corrected variance estimates.
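Both measures follow directly from the residuals of a fitted regression. A minimal sketch (simulated data; names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.25]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat

ss_res = np.sum(e_hat**2)                 # sum of squared errors
ss_tot = np.sum((y - y.mean())**2)        # total sum of squares

r2 = 1.0 - ss_res / ss_tot                                # coefficient of determination
r2_bar = 1.0 - (n - 1) * ss_res / ((n - k) * ss_tot)      # Theil's adjusted R-squared
print(r2, r2_bar)
```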
Unfortunately, the frequent reporting of $R^2$ and $\bar{R}^2$ seems to have led to exaggerated beliefs regarding their usefulness. One mistaken belief is that $R^2$ is a measure of "fit". This belief is incorrect, as an incorrectly specified model can still have a reasonably high $R^2$. For example, suppose the truth is that $x_i \sim N(0, 1)$ and $y_i = \beta x_i + x_i^2$. If we regress $y_i$ on $x_i$ (incorrectly omitting $x_i^2$), the best linear predictor is $y_i = 1 + \beta x_i + e_i$ where $e_i = x_i^2 - 1$. This is a misspecified regression, as the true relationship is deterministic! You can also calculate that the population $\rho^2 = \beta^2/(2 + \beta^2)$, which can be arbitrarily close to 1 if $\beta$ is large. For example, if $\beta^2 = 8$, then $R^2 \approx \rho^2 = .8$, or if $\beta^2 = 18$ then $R^2 \approx \rho^2 = .9$. This example shows that a regression with a high $R^2$ can actually have poor fit.
Another mistaken belief is that a high $R^2$ is important in order to justify interpretation of the regression coefficients. This is mistaken as there is no direct association between the level of $R^2$ and the "correctness" of a regression, the accuracy of the coefficient estimates, or the validity of statistical inferences based on the estimated regression. In contrast, even if the $R^2$ is quite small, accurate estimates of regression coefficients are quite possible when sample sizes are large.
The bottom line is that while $R^2$ and $\bar{R}^2$ have appropriate uses, their usefulness should not be exaggerated.
Henri Theil
Henri Theil (1924-2000) of Holland invented $\bar{R}^2$ and two-stage least squares, both of which are routinely seen in applied econometrics. He also wrote an early and influential advanced textbook on econometrics (Theil, 1971).
4.14 Normal Regression Model

The normal regression model is the linear regression model under the restriction that the error $e_i$ is independent of $x_i$ and has the distribution $N(0, \sigma^2)$. We can write this as
$$e_i \mid x_i \sim N(0, \sigma^2).$$
This assumption implies
$$y_i \mid x_i \sim N(x_i'\beta, \sigma^2).$$
Normal regression is a parametric model, where likelihood methods can be used for estimation, testing, and distribution theory.
The log-likelihood function for the normal regression model is
$$\log L(\beta, \sigma^2) = \sum_{i=1}^{n}\log\left(\frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - x_i'\beta\right)^2\right)\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\mathrm{SSE}_n(\beta).$$
The maximum likelihood estimator (MLE) $(\hat{\beta}_{\mathrm{mle}}, \hat{\sigma}^2_{\mathrm{mle}})$ maximizes $\log L(\beta, \sigma^2)$. Since the latter is a function of $\beta$ only through the sum of squared errors $\mathrm{SSE}_n(\beta)$, maximizing the likelihood is identical to minimizing $\mathrm{SSE}_n(\beta)$. Hence
$$\hat{\beta}_{\mathrm{mle}} = \hat{\beta}_{\mathrm{ols}},$$
the MLE for $\beta$ equals the OLS estimator. Due to this equivalence, the least squares estimator $\hat{\beta}_{\mathrm{ols}}$ is often called the MLE.
We can also find the MLE for $\sigma^2$. Plugging $\hat{\beta}$ into the log-likelihood we obtain
$$\log L\left(\hat{\beta}, \sigma^2\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\hat{e}_i^2.$$
Maximization with respect to $\sigma^2$ yields the first-order condition
$$\frac{\partial}{\partial\sigma^2}\log L\left(\hat{\beta}, \hat{\sigma}^2\right) = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\left(\hat{\sigma}^2\right)^2}\sum_{i=1}^{n}\hat{e}_i^2 = 0.$$
Solving for $\hat{\sigma}^2$ yields the MLE for $\sigma^2$:
$$\hat{\sigma}^2_{\mathrm{mle}} = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2,$$
which is the same as the moment estimator (4.14).
It may seem surprising that the MLE $\hat{\beta}_{\mathrm{mle}}$ is numerically equal to the OLS estimator, despite emerging from quite different motivations. It is not completely accidental. The least-squares estimator minimizes a particular sample loss function (the sum of squared error criterion), and most loss functions are equivalent to the likelihood of a specific parametric distribution, in this case the normal regression model. In this sense it is not surprising that the least-squares estimator can be motivated as either the minimizer of a sample loss function or as the maximizer of a likelihood function.
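The equivalence is easy to see numerically by evaluating the normal log-likelihood at the OLS estimates and confirming that perturbing the coefficients only lowers it. A minimal sketch under the same simulated-data assumptions as the earlier examples:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

def log_lik(beta, sigma2, X, y):
    # Normal regression log-likelihood: -(n/2) log(2 pi sigma^2) - SSE_n(beta)/(2 sigma^2)
    sse = np.sum((y - X @ beta) ** 2)
    return -0.5 * len(y) * np.log(2 * np.pi * sigma2) - sse / (2 * sigma2)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
sigma2_mle = np.mean((y - X @ beta_ols) ** 2)

print(log_lik(beta_ols, sigma2_mle, X, y))
# Any perturbation of beta lowers the likelihood, since it raises SSE_n(beta).
print(log_lik(beta_ols + 0.1, sigma2_mle, X, y))
```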
Carl Friedrich Gauss
The mathematician Carl Friedrich Gauss (1777-1855) proposed the normal regression model,
and derived the least squares estimator as the maximum likelihood estimator for this model.
He claimed to have discovered the method in 1795 at the age of eighteen, but did not publish
the result until 1809. Interest in Gauss’s approach was reinforced by Laplace’s simultaneous
discovery of the central limit theorem, which provided a justiﬁcation for viewing random
disturbances as approximately normal.
Exercises

Exercise 4.1 Let $y$ be a random variable with $\mu = Ey$ and $\sigma^2 = \mathrm{var}(y)$. Define
$$g\left(y, \mu, \sigma^2\right) = \begin{pmatrix} y - \mu \\ (y - \mu)^2 - \sigma^2 \end{pmatrix}.$$
Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $g_n(\hat{\mu}, \hat{\sigma}^2) = 0$ where $g_n(m, s) = n^{-1}\sum_{i=1}^{n}g(y_i, m, s)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.

Exercise 4.2 Consider the OLS regression of the $n \times 1$ vector $y$ on the $n \times k$ matrix $X$. Consider an alternative set of regressors $Z = XC$, where $C$ is a $k \times k$ non-singular matrix. Thus, each column of $Z$ is a mixture of some of the columns of $X$. Compare the OLS estimates and residuals from the regression of $y$ on $X$ to the OLS estimates from the regression of $y$ on $Z$.

Exercise 4.3 Using matrix algebra, show $X'\hat{e} = 0$.

Exercise 4.4 Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X = [X_1\ X_2]$. Find $X_2'\hat{e}$.

Exercise 4.5 Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X$. Find the OLS coefficient from a regression of $\hat{e}$ on $X$.

Exercise 4.6 Let $\hat{y} = X(X'X)^{-1}X'y$. Find the OLS coefficient from a regression of $\hat{y}$ on $X$.

Exercise 4.7 Show that if $X = [X_1\ X_2]$ then $PX_1 = X_1$.

Exercise 4.8 Show (4.22), that the $h_{ii}$ in (4.21) sum to $k$. (Hint: Use (4.20).)

Exercise 4.9 Show that $M$ is idempotent: $MM = M$.

Exercise 4.10 Show that $\mathrm{tr}\, M = n - k$.

Exercise 4.11 Show that if $X = [X_1\ X_2]$ and $X_1'X_2 = 0$ then $P = P_1 + P_2$.

Exercise 4.12 A dummy variable takes on only the values 0 and 1. It is used for categorical data, such as an individual's gender. Let $d_1$ and $d_2$ be vectors of 1's and 0's, with the i'th element of $d_1$ equaling 1 and that of $d_2$ equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are $n_1$ men and $n_2$ women in the sample. Consider fitting the following three equations by OLS
$$y = \mu + d_1\alpha_1 + d_2\alpha_2 + e \qquad (4.35)$$
$$y = d_1\alpha_1 + d_2\alpha_2 + e \qquad (4.36)$$
$$y = \mu + d_1\phi + e \qquad (4.37)$$
Can all three equations (4.35), (4.36), and (4.37) be estimated by OLS? Explain if not.
(a) Compare regressions (4.36) and (4.37). Is one more general than the other? Explain the relationship between the parameters in (4.36) and (4.37).
(b) Compute $\iota'd_1$ and $\iota'd_2$, where $\iota$ is an $n \times 1$ vector of ones.
(c) Letting $\alpha = (\alpha_1\ \alpha_2)'$, write equation (4.36) as $y = X\alpha + e$. Consider the assumption $E(x_ie_i) = 0$. Is there any content to this assumption in this setting?

Exercise 4.13 Let $d_1$ and $d_2$ be defined as in the previous exercise.
(a) In the OLS regression
$$y = d_1\hat{\gamma}_1 + d_2\hat{\gamma}_2 + \hat{u},$$
show that $\hat{\gamma}_1$ is the sample mean of the dependent variable among the men of the sample ($\bar{y}_1$), and that $\hat{\gamma}_2$ is the sample mean among the women ($\bar{y}_2$).
(b) Let $X$ ($n \times k$) be an additional matrix of regressors. Describe in words the transformations
$$y^* = y - d_1\bar{y}_1 - d_2\bar{y}_2$$
$$X^* = X - d_1\bar{X}_1 - d_2\bar{X}_2.$$
(c) Compare $\tilde{\beta}$ from the OLS regression
$$y^* = X^*\tilde{\beta} + \tilde{e}$$
with $\hat{\beta}$ from the OLS regression
$$y = d_1\hat{\alpha}_1 + d_2\hat{\alpha}_2 + X\hat{\beta} + \hat{e}.$$
Exercise 4.14 Let $\hat{\beta}_n = (X_n'X_n)^{-1}X_n'y_n$ denote the OLS estimate when $y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(y_{n+1}, x_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is
$$\hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + x_{n+1}'\left(X_n'X_n\right)^{-1}x_{n+1}}\left(X_n'X_n\right)^{-1}x_{n+1}\left(y_{n+1} - x_{n+1}'\hat{\beta}_n\right).$$

Exercise 4.15 Prove that $R^2$ is the square of the sample correlation between $y$ and $\hat{y}$.

Exercise 4.16 Show that $\tilde{\sigma}^2 \ge \hat{\sigma}^2$. Is equality possible?

Exercise 4.17 For which observations will $\hat{\beta}_{(-i)} = \hat{\beta}$?

Exercise 4.18 The data file cps85.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.S. Census Bureau. The file contains observations on nine variables, listed in the file cps85.pdf.

V1 = education (in years)
V2 = region of residence (coded 1 if South, 0 otherwise)
V3 = (coded 1 if non-white and non-Hispanic, 0 otherwise)
V4 = (coded 1 if Hispanic, 0 otherwise)
V5 = gender (coded 1 if female, 0 otherwise)
V6 = marital status (coded 1 if married, 0 otherwise)
V7 = potential labor market experience (in years)
V8 = union status (coded 1 if in union job, 0 otherwise)
V9 = hourly wage (in dollars)

Estimate a regression of wage $y_i$ on education $x_{1i}$, experience $x_{2i}$, and experience-squared $x_{3i} = x_{2i}^2$ (and a constant). Report the OLS estimates.
Let $\hat{e}_i$ be the OLS residual and $\hat{y}_i$ the predicted value from the regression. Numerically calculate the following:
(a) $\sum_{i=1}^{n}\hat{e}_i$
(b) $\sum_{i=1}^{n}x_{1i}\hat{e}_i$
(c) $\sum_{i=1}^{n}x_{2i}\hat{e}_i$
(d) $\sum_{i=1}^{n}x_{1i}^2\hat{e}_i$
(e) $\sum_{i=1}^{n}x_{2i}^2\hat{e}_i$
(f) $\sum_{i=1}^{n}\hat{y}_i\hat{e}_i$
(g) $\sum_{i=1}^{n}\hat{e}_i^2$
(h) $R^2$
Are these calculations consistent with the theoretical properties of OLS? Explain.

Exercise 4.19 Using the data from the previous problem, re-estimate the slope on education using the residual regression approach. Regress $y_i$ on $(1, x_{2i}, x_{2i}^2)$, regress $x_{1i}$ on $(1, x_{2i}, x_{2i}^2)$, and regress the residuals on the residuals. Report the estimate from this regression. Does it equal the value from the first OLS regression? Explain.
In the second-stage residual regression (the regression of the residuals on the residuals), calculate the equation $R^2$ and sum of squared errors. Do they equal the values from the initial OLS regression? Explain.
Chapter 5
Least Squares Regression
5.1 Introduction
In this chapter we investigate some finite-sample properties of least-squares applied to a random sample in the linear regression model. Throughout this chapter we maintain the following.
Assumption 5.1.1 Linear Regression Model
The observations $(y_i, x_i)$ come from a random sample and satisfy the linear regression equation
$$y_i = x_i'\beta + e_i \qquad (5.1)$$
$$E(e_i \mid x_i) = 0. \qquad (5.2)$$
The variables have finite second moments
$$Ey_i^2 < \infty, \qquad E\|x_i\|^2 < \infty,$$
and an invertible design matrix
$$Q_{xx} = E\left(x_ix_i'\right) > 0.$$
We will consider both the general case of heteroskedastic regression, where the conditional variance
$$E\left(e_i^2 \mid x_i\right) = \sigma^2(x_i) = \sigma_i^2$$
is unrestricted, and the specialized case of homoskedastic regression, where the conditional variance is constant. In the latter case we add the following assumption.
Assumption 5.1.2 Homoskedastic Linear Regression Model
In addition to Assumption 5.1.1,
$$E\left(e_i^2 \mid x_i\right) = \sigma^2(x_i) = \sigma^2 \qquad (5.3)$$
is independent of $x_i$.
5.2 Mean of Least-Squares Estimator

In this section we show that the OLS estimator is unbiased in the linear regression model.
Under (5.1)-(5.2) note that
$$E(y \mid X) = \begin{pmatrix} \vdots \\ E(y_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(y_i \mid x_i) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ x_i'\beta \\ \vdots \end{pmatrix} = X\beta. \qquad (5.4)$$
Similarly
$$E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0. \qquad (5.5)$$
By (4.16), conditioning on $X$, the linearity of expectations, (5.4), and the properties of the matrix inverse,
$$E\left(\hat{\beta} \mid X\right) = E\left(\left(X'X\right)^{-1}X'y \mid X\right) = \left(X'X\right)^{-1}X'E(y \mid X) = \left(X'X\right)^{-1}X'X\beta = \beta.$$
Applying the law of iterated expectations to $E(\hat{\beta} \mid X) = \beta$, we find that
$$E\left(\hat{\beta}\right) = E\left(E\left(\hat{\beta} \mid X\right)\right) = \beta.$$
Another way to calculate the same result is as follows. Insert $y = X\beta + e$ into the formula (4.16) for $\hat{\beta}$ to obtain
$$\hat{\beta} = \left(X'X\right)^{-1}X'\left(X\beta + e\right) = \left(X'X\right)^{-1}X'X\beta + \left(X'X\right)^{-1}X'e = \beta + \left(X'X\right)^{-1}X'e. \qquad (5.6)$$
This is a useful linear decomposition of the estimator $\hat{\beta}$ into the true parameter $\beta$ and the stochastic component $(X'X)^{-1}X'e$.
Using (5.6), conditioning on $X$, and (5.5),
$$E\left(\hat{\beta} - \beta \mid X\right) = E\left(\left(X'X\right)^{-1}X'e \mid X\right) = \left(X'X\right)^{-1}X'E(e \mid X) = 0.$$
Using either derivation, we have shown the following theorem.

Theorem 5.2.1 Mean of Least-Squares Estimator
In the linear regression model (Assumption 5.1.1)
$$E\left(\hat{\beta} \mid X\right) = \beta \qquad (5.7)$$
and
$$E\left(\hat{\beta}\right) = \beta. \qquad (5.8)$$

Equation (5.8) says that the estimator is unbiased, meaning that the distribution of $\hat{\beta}$ is centered at $\beta$. Equation (5.7) says that the estimator is conditionally unbiased, which is a stronger result. It says that $\hat{\beta}$ is unbiased for any realization of the regressor matrix $X$.
5.3 Variance of Least Squares Estimator
In this section we calculate the conditional variance of the OLS estimator.
For any $r \times 1$ random vector $Z$ define the $r \times r$ covariance matrix
$$\mathrm{var}(Z) = E(Z - EZ)(Z - EZ)' = EZZ' - (EZ)(EZ)'$$
and for any pair $(Z, X)$ define the conditional covariance matrix
$$\mathrm{var}(Z \mid X) = E\left((Z - E(Z \mid X))(Z - E(Z \mid X))' \mid X\right).$$
The conditional covariance matrix of the $n \times 1$ regression error $e$ is the $n \times n$ matrix
$$D = E\left(ee' \mid X\right).$$
The i'th diagonal element of $D$ is
$$E\left(e_i^2 \mid X\right) = E\left(e_i^2 \mid x_i\right) = \sigma_i^2$$
while the ij'th off-diagonal element of $D$ is
$$E\left(e_ie_j \mid X\right) = E(e_i \mid x_i)E(e_j \mid x_j) = 0,$$
where the first equality uses independence of the observations (Assumption 1.5.1) and the second is (5.2). Thus $D$ is a diagonal matrix with i'th diagonal element $\sigma_i^2$:
$$D = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}. \qquad (5.9)$$
In the special case of the linear homoskedastic regression model (5.3), then
$$E\left(e_i^2 \mid x_i\right) = \sigma_i^2 = \sigma^2$$
and we have the simplification
$$D = I_n\sigma^2.$$
In general, however, $D$ need not necessarily take this simplified form.
For any $n \times r$ matrix $A = A(X)$,
$$\mathrm{var}\left(A'y \mid X\right) = \mathrm{var}\left(A'e \mid X\right) = A'DA. \qquad (5.10)$$
In particular, we can write $\hat{\beta} = A'y$ where $A = X(X'X)^{-1}$ and thus
$$\mathrm{var}\left(\hat{\beta} \mid X\right) = A'DA = \left(X'X\right)^{-1}X'DX\left(X'X\right)^{-1}.$$
It is useful to note that
$$X'DX = \sum_{i=1}^{n}x_ix_i'\sigma_i^2,$$
a weighted version of $X'X$.
Rather than working with the variance of the unscaled estimator $\hat{\beta}$, it will be useful to work with the conditional variance of the scaled estimator $\sqrt{n}\left(\hat{\beta} - \beta\right)$:
$$V_{\beta} \overset{\mathrm{def}}{=} \mathrm{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right) \mid X\right) = n\,\mathrm{var}\left(\hat{\beta} \mid X\right) = n\left(X'X\right)^{-1}X'DX\left(X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}.$$
This rescaling might seem rather odd, but it will help provide continuity between the finite-sample treatment of this chapter and the asymptotic treatment of later chapters. As we will see in the next chapter, $\mathrm{var}(\hat{\beta} \mid X)$ vanishes as n tends to infinity, yet $V_{\beta}$ converges to a constant matrix.
In the special case of the linear homoskedastic regression model, $D = I_n\sigma^2$, so $X'DX = X'X\sigma^2$, and the variance matrix simplifies to
$$V_{\beta} = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2.$$

Theorem 5.3.1 Variance of Least-Squares Estimator
In the linear regression model (Assumption 5.1.1)
$$V_{\beta} = \mathrm{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right) \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1} \qquad (5.11)$$
where $D$ is defined in (5.9).
In the homoskedastic linear regression model (Assumption 5.1.2)
$$V_{\beta} = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2.$$
5.4 Gauss-Markov Theorem

Now consider the class of estimators of $\beta$ which are linear functions of the vector $y$, and thus can be written as
$$\tilde{\beta} = A'y$$
where $A$ is an $n \times k$ function of $X$. The least-squares estimator is the special case obtained by setting $A = X(X'X)^{-1}$. What is the best choice of $A$? The Gauss-Markov theorem, which we now present, says that the least-squares estimator is the best choice among linear unbiased estimators when the errors are homoskedastic, in the sense that the least-squares estimator has the smallest variance among all unbiased linear estimators.
To see this, since $E(y \mid X) = X\beta$, then for any linear estimator $\tilde{\beta} = A'y$ we have
$$E\left(\tilde{\beta} \mid X\right) = A'E(y \mid X) = A'X\beta,$$
so $\tilde{\beta}$ is unbiased if (and only if) $A'X = I_k$. Furthermore, we saw in (5.10) that
$$\mathrm{var}\left(\tilde{\beta} \mid X\right) = \mathrm{var}\left(A'y \mid X\right) = A'DA = A'A\sigma^2,$$
the last equality using the homoskedasticity assumption $D = I_n\sigma^2$. The "best" unbiased linear estimator is obtained by finding the matrix $A$ such that $A'A$ is minimized in the positive definite sense.

Theorem 5.4.1 Gauss-Markov
1. In the homoskedastic linear regression model (Assumption 5.1.2), the best (minimum-variance) unbiased linear estimator is the least-squares estimator
$$\hat{\beta} = \left(X'X\right)^{-1}X'y.$$
2. In the linear regression model (Assumption 5.1.1), the best unbiased linear estimator is
$$\tilde{\beta} = \left(X'D^{-1}X\right)^{-1}X'D^{-1}y. \qquad (5.12)$$

The first part of the Gauss-Markov theorem is a limited efficiency justification for the least-squares estimator. The justification is limited because the class of models is restricted to homoskedastic linear regression and the class of potential estimators is restricted to linear unbiased estimators. This latter restriction is particularly unsatisfactory as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator.
The second part of the theorem shows that in the (heteroskedastic) linear regression model, the least-squares estimator is inefficient. Within the class of linear unbiased estimators the best estimator is (5.12) and is called the Generalized Least Squares (GLS) estimator. This estimator is infeasible as the matrix $D$ is unknown. This result does not suggest a practical alternative to least-squares. We return to the issue of feasible implementation of GLS in Section 9.1.
We give a proof of the first part of the theorem below, and leave the proof of the second part for Exercise 5.3.

Proof of Theorem 5.4.1.1. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The variance of the least-squares estimator is $(X'X)^{-1}\sigma^2$ and that of $A'y$ is $A'A\sigma^2$. It is sufficient to show that the difference $A'A - (X'X)^{-1}$ is positive semi-definite. Set $C = A - X(X'X)^{-1}$. Note that $X'C = 0$. Then we calculate that
$$\begin{aligned}
A'A - \left(X'X\right)^{-1} &= \left(C + X\left(X'X\right)^{-1}\right)'\left(C + X\left(X'X\right)^{-1}\right) - \left(X'X\right)^{-1} \\
&= C'C + C'X\left(X'X\right)^{-1} + \left(X'X\right)^{-1}X'C + \left(X'X\right)^{-1}X'X\left(X'X\right)^{-1} - \left(X'X\right)^{-1} \\
&= C'C.
\end{aligned}$$
The matrix $C'C$ is positive semi-definite (see Appendix A.7) as required.
5.5 Residuals

What are some properties of the residuals $\hat{e}_i = y_i - x_i'\hat{\beta}$ and prediction errors $\tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)}$, at least in the context of the linear regression model?
Recall from (4.25) that we can write the residuals in vector notation as
$$\hat{e} = Me$$
where $M = I_n - X(X'X)^{-1}X'$ is the orthogonal projection matrix. Using the properties of conditional expectation
$$E(\hat{e} \mid X) = E(Me \mid X) = ME(e \mid X) = 0$$
and
$$\mathrm{var}(\hat{e} \mid X) = \mathrm{var}(Me \mid X) = M\,\mathrm{var}(e \mid X)\,M = MDM \qquad (5.13)$$
where $D$ is defined in (5.9).
We can simplify this expression under the assumption of conditional homoskedasticity
$$E\left(e_i^2 \mid x_i\right) = \sigma^2.$$
In this case (5.13) simplifies to
$$\mathrm{var}(\hat{e} \mid X) = M\sigma^2.$$
In particular, for a single observation i, we obtain
$$\mathrm{var}(\hat{e}_i \mid X) = E\left(\hat{e}_i^2 \mid X\right) = (1 - h_{ii})\sigma^2 \qquad (5.14)$$
since the diagonal elements of $M$ are $1 - h_{ii}$ as defined in (4.21). Thus the residuals $\hat{e}_i$ are heteroskedastic even if the errors $e_i$ are homoskedastic.
Similarly, we can write the prediction errors $\tilde{e}_i = (1 - h_{ii})^{-1}\hat{e}_i$ in vector notation. Set
$$M^* = \mathrm{diag}\left\{(1 - h_{11})^{-1}, \ldots, (1 - h_{nn})^{-1}\right\}.$$
Then we can write the prediction errors as
$$\tilde{e} = M^*My = M^*Me.$$
We can calculate that
$$E(\tilde{e} \mid X) = M^*ME(e \mid X) = 0$$
and
$$\mathrm{var}(\tilde{e} \mid X) = M^*M\,\mathrm{var}(e \mid X)\,MM^* = M^*MDMM^*$$
which simplifies under homoskedasticity to
$$\mathrm{var}(\tilde{e} \mid X) = M^*MMM^*\sigma^2 = M^*MM^*\sigma^2.$$
The variance of the i'th prediction error is then
$$\mathrm{var}(\tilde{e}_i \mid X) = E\left(\tilde{e}_i^2 \mid X\right) = (1 - h_{ii})^{-1}(1 - h_{ii})(1 - h_{ii})^{-1}\sigma^2 = (1 - h_{ii})^{-1}\sigma^2.$$
A residual with constant conditional variance can be obtained by rescaling. The standardized residuals are
$$\bar{e}_i = (1 - h_{ii})^{-1/2}\hat{e}_i, \qquad (5.15)$$
and in vector notation
$$\bar{e} = (\bar{e}_1, \ldots, \bar{e}_n)' = M^{*1/2}Me.$$
From our above calculations, under homoskedasticity,
$$\mathrm{var}(\bar{e} \mid X) = M^{*1/2}MM^{*1/2}\sigma^2$$
and
$$\mathrm{var}(\bar{e}_i \mid X) = E\left(\bar{e}_i^2 \mid X\right) = \sigma^2 \qquad (5.16)$$
and thus these standardized residuals have the same bias and variance as the original errors when the latter are homoskedastic.
5.6 Estimation of Error Variance
The error variance $\sigma^2 = Ee_i^2$ can be a parameter of interest, even in a heteroskedastic regression or a projection model. $\sigma^2$ measures the variation in the "unexplained" part of the regression. Its method of moments estimator (MME) is the sample average of the squared residuals:
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2$$
and equals the MLE in the normal regression model (4.14).
In the linear regression model we can calculate the mean of $\hat{\sigma}^2$. From (4.25), the properties of projection matrices and the trace operator, observe that
$$\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\mathrm{tr}\left(e'Me\right) = \frac{1}{n}\mathrm{tr}\left(Mee'\right).$$
Then
$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(E\left(Mee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(ME\left(ee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}(MD). \qquad (5.17)$$
Adding the assumption of conditional homoskedasticity $E(e_i^2 \mid x_i) = \sigma^2$, so that $D = I_n\sigma^2$, then (5.17) simplifies to
$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(M\sigma^2\right) = \sigma^2\frac{n - k}{n},$$
the final equality by (4.23). This calculation shows that $\hat{\sigma}^2$ is biased towards zero. The order of the bias depends on $k/n$, the ratio of the number of estimated coefficients to the sample size.
Another way to see this is to use (5.14). Note that
$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n}E\left(\hat{e}_i^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})\sigma^2 = \frac{n - k}{n}\sigma^2$$
using (4.22).
Since the bias takes a scale form, a classic method to obtain an unbiased estimator is by rescaling the estimator. Define
$$s^2 = \frac{1}{n - k}\sum_{i=1}^{n}\hat{e}_i^2. \qquad (5.18)$$
By the above calculation,
$$E\left(s^2 \mid X\right) = \sigma^2 \qquad (5.19)$$
so
$$E\left(s^2\right) = \sigma^2$$
and the estimator $s^2$ is unbiased for $\sigma^2$. Consequently, $s^2$ is known as the "bias-corrected estimator" for $\sigma^2$ and in empirical practice $s^2$ is the most widely used estimator for $\sigma^2$.
Interestingly, this is not the only method to construct an unbiased estimator for $\sigma^2$. An estimator constructed with the standardized residuals $\bar{e}_i$ from (5.15) is
$$\bar{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\bar{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-1}\hat{e}_i^2.$$
You can show (see Exercise 5.6) that
$$E\left(\bar{\sigma}^2 \mid X\right) = \sigma^2 \qquad (5.20)$$
and thus $\bar{\sigma}^2$ is unbiased for $\sigma^2$ (in the homoskedastic linear regression model).
When the sample sizes are large and the number of regressors small, the estimators $\hat{\sigma}^2$, $s^2$ and $\bar{\sigma}^2$ are likely to be close.
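A minimal sketch comparing the three error-variance estimators from a single fit, using the leverage values for the standardized-residual version (simulated data; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 100, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.5, 0.25]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
e_hat = y - X @ (XtX_inv @ X.T @ y)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverage values

sigma2_hat = np.mean(e_hat**2)                # MME / MLE, biased towards zero
s2 = np.sum(e_hat**2) / (n - k)               # bias-corrected estimator (5.18)
sigma2_bar = np.mean(e_hat**2 / (1.0 - h))    # standardized-residual estimator

print(sigma2_hat, s2, sigma2_bar)
```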
5.7 Covariance Matrix Estimation Under Homoskedasticity
For inference, we need an estimate of the covariance matrix $V_{\beta}$ of the least-squares estimator. In this section we consider the homoskedastic regression model (Assumption 5.1.2).
Under homoskedasticity, the covariance matrix takes the relatively simple form
$$V_{\beta} = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2$$
which is known up to the unknown scale $\sigma^2$. In the previous section we discussed three estimators of $\sigma^2$. The most commonly used choice is $s^2$, leading to the classic covariance matrix estimator
$$\hat{V}_{\beta}^{0} = \left(\frac{1}{n}X'X\right)^{-1}s^2. \qquad (5.21)$$
Since $s^2$ is conditionally unbiased for $\sigma^2$, it is simple to calculate that $\hat{V}_{\beta}^{0}$ is conditionally unbiased for $V_{\beta}$ under the assumption of homoskedasticity:
$$E\left(\hat{V}_{\beta}^{0} \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}E\left(s^2 \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2 = V_{\beta}.$$
This estimator was the dominant covariance matrix estimator in applied econometrics in previous generations, and is still the default in most regression packages.
If the estimator (5.21) is used, but the regression error is heteroskedastic, it is possible for $\hat{V}_{\beta}^{0}$ to be quite biased for the correct covariance matrix
$$V_{\beta} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}.$$
For example, suppose $k = 1$ and $\sigma_i^2 = x_i^2$. The ratio of the true variance of the least-squares estimator to the expectation of the variance estimator is
$$\frac{V_{\beta}}{E\left(\hat{V}_{\beta}^{0} \mid X\right)} = \frac{\frac{1}{n}\sum_{i=1}^{n}x_i^4}{\sigma^2\frac{1}{n}\sum_{i=1}^{n}x_i^2} \approx \frac{Ex_i^4}{\sigma^2Ex_i^2} = \frac{Ex_i^4}{\left(Ex_i^2\right)^2}.$$
(Notice that we use the fact that $\sigma_i^2 = x_i^2$ implies $\sigma^2 = E\sigma_i^2 = Ex_i^2$.) This is the standardized fourth moment (or kurtosis) of the regressor $x_i$. The ratio can be any number greater than one, for example it is 3 if $x_i \sim N(0, \sigma^2)$. We conclude that the bias of $\hat{V}_{\beta}^{0}$ can be arbitrarily large. While this is an extreme and constructed example, the point is that the classic covariance matrix estimator (5.21) may be quite biased when the homoskedasticity assumption fails.
5.8 Covariance Matrix Estimation Under Heteroskedasticity
In the previous section we showed that the classic covariance matrix estimator can be highly biased if homoskedasticity fails. In this section we show how to construct covariance matrix estimators which do not require homoskedasticity.
Recall that the general form for the covariance matrix is
$$V_{\beta} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}.$$
This depends on the unknown matrix $D$ which we can write as
$$D = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = E\left(ee' \mid X\right) = E\left(\mathrm{diag}\left(e_1^2, \ldots, e_n^2\right) \mid X\right).$$
Thus $D$ is the conditional mean of $\mathrm{diag}(e_1^2, \ldots, e_n^2)$, so the latter is an unbiased estimator for $D$. Therefore, if the squared errors $e_i^2$ were observable, we could construct the unbiased estimator
$$\hat{V}_{\beta}^{\mathrm{ideal}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\mathrm{diag}\left(e_1^2, \ldots, e_n^2\right)X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'e_i^2\right)\left(\frac{1}{n}X'X\right)^{-1}.$$
Indeed,
$$\begin{aligned}
E\left(\hat{V}_{\beta}^{\mathrm{ideal}} \mid X\right) &= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'E\left(e_i^2 \mid X\right)\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\sigma_i^2\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1} = V_{\beta},
\end{aligned}$$
verifying that $\hat{V}_{\beta}^{\mathrm{ideal}}$ is unbiased for $V_{\beta}$.
Since the errors $e_i^2$ are unobserved, $\hat{V}_{\beta}^{\mathrm{ideal}}$ is not a feasible estimator. To construct a feasible estimator we can replace the errors with the least-squares residuals $\hat{e}_i$, the prediction errors $\tilde{e}_i$ or the standardized residuals $\bar{e}_i$, e.g.
$$\hat{D} = \mathrm{diag}\left(\hat{e}_1^2, \ldots, \hat{e}_n^2\right), \qquad \tilde{D} = \mathrm{diag}\left(\tilde{e}_1^2, \ldots, \tilde{e}_n^2\right), \qquad \bar{D} = \mathrm{diag}\left(\bar{e}_1^2, \ldots, \bar{e}_n^2\right).$$
Substituting these matrices into the formula for $V_{\beta}$ we obtain the estimators
$$\hat{V}_{\beta} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\hat{D}X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1},$$
$$\begin{aligned}
\tilde{V}_{\beta} &= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\tilde{D}X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\tilde{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-2}x_ix_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1},
\end{aligned}$$
and
$$\begin{aligned}
\bar{V}_{\beta} &= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\bar{D}X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_ix_i'\bar{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-1}x_ix_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1}.
\end{aligned}$$
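The three feasible estimators differ only in how the squared residuals are weighted by the leverage values. A minimal sketch with simulated heteroskedastic data (names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.normal(size=n) * np.abs(X[:, 1])             # heteroskedastic errors
y = X @ np.array([1.0, 0.5]) + e

Qxx_inv = np.linalg.inv(X.T @ X / n)
e_hat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)

def sandwich(weights):
    # (1/n X'X)^{-1} (1/n sum_i w_i x_i x_i' e_hat_i^2) (1/n X'X)^{-1}
    omega = (X * (weights * e_hat**2)[:, None]).T @ X / n
    return Qxx_inv @ omega @ Qxx_inv

V_hat = sandwich(np.ones(n))             # residual-based (Eicker-White) estimator
V_tilde = sandwich((1.0 - h) ** -2)      # prediction-error version
V_bar = sandwich((1.0 - h) ** -1)        # standardized-residual version

print(np.diag(V_hat), np.diag(V_tilde), np.diag(V_bar))
```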
The estimators
´
V
,
¯
V
, and V
are often called robust, heteroskedasticityconsistent, or
heteroskedasticityrobust covariance matrix estimators. The estimator
´
V
was ﬁrst developed
by Eicker (1963), and introduced to econometrics by White (1980), and is sometimes called the
CHAPTER 5. LEAST SQUARES REGRESSION 101
EickerWhite or White covariance matrix estimator
1
. The estimator
¯
V
was introduced by
Andrews (1991) based on the principle of leaveoneout crossvalidation, and the estimator V
was
introduced by Horn, Horn and Duncan (1975) as a reducedbias covariance matrix estimator.
Since (1 ÷h
ii
)
÷2
> (1 ÷h
ii
)
÷1
> 1 it is straightforward to show that
´
V
< V
<
¯
V
. (5.22)
(See Exercise 5.7.) The inequality A < B when applied to matrices means that the matrix B÷A
is positive deﬁnite
In general, the bias of the estimators
´
V
,
¯
V
and V
, is quite complicated, but they greatly
simplify under the assumption of homoskedasticity (5.3). For example, using (5.14),
E
´
V
 X
=
1
n
X
t
X
÷1
1
n
n
¸
i=1
x
i
x
t
i
E
ˆ e
2
i
 X
1
n
X
t
X
÷1
=
1
n
X
t
X
÷1
1
n
n
¸
i=1
x
i
x
t
i
(1 ÷h
ii
) o
2
1
n
X
t
X
÷1
=
1
n
X
t
X
÷1
o
2
÷
1
n
X
t
X
÷1
1
n
n
¸
i=1
x
i
x
t
i
h
ii
1
n
X
t
X
÷1
o
2
_
1
n
X
t
X
÷1
o
2
= V
.
This calculation shows that
´
V
is biased towards zero.
Similarly, (again under homoskedasticity) we can calculate that
¯
V
is biased away from zero,
speciﬁcally
E
¯
V
 X
_
1
n
X
t
X
÷1
o
2
(5.23)
while the estimator V
is unbiased
E
V
 X
=
1
n
X
t
X
÷1
o
2
. (5.24)
(See Exercise 5.8.)
It might seem rather odd to compare the bias of heteroskedasticityrobust estimators under the
assumption of homoskedasticity, but it does give us a baseline for comparison.
We have introduced four covariance matrix estimators, $\widehat V^0$, $\widehat V$, $\widetilde V$, and $\overline V$. Which should you use? The classic estimator $\widehat V^0$ is typically a poor choice, as it is only valid under the unlikely homoskedasticity restriction. For this reason it is not typically used in contemporary econometric research. Of the three robust estimators, $\widehat V$ is the most commonly used, as it is the most straightforward and familiar. However, $\widetilde V$ and (in particular) $\overline V$ are preferred based on their improved bias. Unfortunately, standard regression packages set the classic estimator $\widehat V^0$ as the default. As $\widetilde V$ and $\overline V$ are simple to implement, this should not be a barrier. For example, in STATA, $\overline V$ is implemented by selecting "Robust" standard errors and selecting the bias correction option "$1/(1-h)$", or using the vce(hc2) option.
$^1$Often, this estimator is rescaled by multiplying by the ad hoc bias adjustment $\frac{n}{n-k}$, in analogy to the bias-corrected error variance estimator.
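The three robust estimators are simple to compute directly from the regressor matrix and the least-squares residuals. The following Python sketch is one way to construct $\widehat V$, $\widetilde V$ and $\overline V$; it is an illustration added here, not part of the original text, and the variable names (X for the $n\times k$ regressor matrix, y for the outcome) are assumptions.

```python
import numpy as np

def robust_covariances(X, y):
    """Sketch: White (V_hat), Andrews (V_tilde), and Horn-Horn-Duncan (V_bar)
    covariance matrix estimators, scaled as in the text (use (1/n)*V for the
    variance estimate of beta-hat itself)."""
    n, k = X.shape
    XX_inv = np.linalg.inv(X.T @ X / n)              # (X'X/n)^{-1}
    beta = np.linalg.solve(X.T @ X, X.T @ y)         # OLS coefficients
    e = y - X @ beta                                 # least-squares residuals
    h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)  # leverage h_ii

    def sandwich(weights):
        # middle matrix (1/n) * sum_i w_i * x_i x_i' * e_i^2
        omega = (X * (weights * e**2)[:, None]).T @ X / n
        return XX_inv @ omega @ XX_inv

    V_hat = sandwich(np.ones(n))            # squared residuals
    V_tilde = sandwich((1 - h) ** -2)       # prediction errors
    V_bar = sandwich((1 - h) ** -1)         # standardized residuals
    return V_hat, V_tilde, V_bar
```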
5.9 Standard Errors
A variance estimator such as $\widehat V$ is an estimate of the variance of the distribution of $\widehat\beta$. A more easily interpretable measure of spread is its square root, the standard deviation. This is so important when discussing the distribution of parameter estimates that we have a special name for estimates of their standard deviation.

Definition 5.9.1 A standard error $s(\widehat\theta)$ for a real-valued estimator $\widehat\theta$ is an estimate of the standard deviation of the distribution of $\widehat\theta$.
When $\beta$ is a vector with estimate $\widehat\beta$ and covariance matrix estimate $n^{-1}\widehat V$, standard errors for individual elements are the square roots of the diagonal elements of $n^{-1}\widehat V$. That is,
$$s(\hat\beta_j) = \sqrt{n^{-1}\big[\widehat V\big]_{jj}} = n^{-1/2}\sqrt{\big[\widehat V\big]_{jj}}.$$
As we discussed in the previous section, there are multiple possible covariance matrix estimators, so standard errors are not unique. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions.
To illustrate the computation of the covariance matrix estimate and standard errors, we return to the log wage regression (4.9) of Section 4.4. We calculate that $s^2 = 0.215$ and
$$\widehat\Omega = \begin{pmatrix} 0.208 & 3.200\\ 3.200 & 49.961\end{pmatrix}.$$
Therefore the homoskedastic and White covariance matrix estimates are
$$\widehat V^0 = \begin{pmatrix} 1 & 15.426\\ 15.426 & 243\end{pmatrix}^{-1}0.215 = \begin{pmatrix} 10.387 & -0.659\\ -0.659 & 0.043\end{pmatrix}$$
and
$$\widehat V = \begin{pmatrix} 1 & 15.426\\ 15.426 & 243\end{pmatrix}^{-1}\begin{pmatrix} 0.208 & 3.200\\ 3.200 & 49.961\end{pmatrix}\begin{pmatrix} 1 & 15.426\\ 15.426 & 243\end{pmatrix}^{-1} = \begin{pmatrix} 7.092 & -0.445\\ -0.445 & 0.029\end{pmatrix}.$$
The standard errors are the square roots of the diagonal elements of these matrices, divided by $n = 61$. For example, the White standard error for $\hat\beta_0$ is $\sqrt{7.092/61} = 0.341$ and that for $\hat\beta_1$ is $\sqrt{0.029/61} = 0.022$. A conventional format to write the estimated equation with standard errors is
$$\widehat{\log(Wage)} = \underset{(0.341)}{0.626} + \underset{(0.022)}{0.156}\,Education.$$
Alternatively our standard errors could be calculated using $\widetilde V$ or $\overline V$. We report the four possible standard errors in the following table:

              $\sqrt{n^{-1}\widehat V^0}$   $\sqrt{n^{-1}\widehat V}$   $\sqrt{n^{-1}\widetilde V}$   $\sqrt{n^{-1}\overline V}$
Intercept          0.412              0.341              0.361              0.351
Education          0.026              0.022              0.023              0.022

The homoskedastic standard errors are noticeably different from the others, but the three robust standard errors are quite close to one another.
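As a check on the arithmetic, the numbers in this example can be reproduced from the reported moment matrices. The following Python sketch uses only quantities stated in the text ($n = 61$, $s^2 = 0.215$, $\frac{1}{n}X'X$ and $\widehat\Omega$); it is illustrative and not part of the original manuscript.

```python
import numpy as np

n = 61
s2 = 0.215
Qxx = np.array([[1.0, 15.426],       # (1/n) X'X from the text
                [15.426, 243.0]])
Omega = np.array([[0.208, 3.200],    # (1/n) sum x_i x_i' e_i^2 from the text
                  [3.200, 49.961]])

Qxx_inv = np.linalg.inv(Qxx)
V0 = Qxx_inv * s2                    # homoskedastic estimate
V_white = Qxx_inv @ Omega @ Qxx_inv  # White estimate

# standard errors: square roots of the diagonal of n^{-1} V
print(np.sqrt(np.diag(V0) / n))       # approximately [0.412, 0.026]
print(np.sqrt(np.diag(V_white) / n))  # approximately [0.341, 0.022]
```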
5.10 Multicollinearity
If $\operatorname{rank}(X'X) < k$, then $\widehat\beta$ is not defined.$^2$ This is called strict multicollinearity. This happens when the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that $X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $X$ includes both the logs of two prices and the log of the relative prices, $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.

The more relevant situation is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $X'X$ matrix is near singular, when the columns of $X$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.
One implication of near singularity of matrices is that the numerical reliability of the calculations
is reduced. In extreme cases it is possible that the reported calculations will be in error.
A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors
$$y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,$$
and
$$\frac{1}{n}X'X = \begin{pmatrix} 1 & \rho\\ \rho & 1\end{pmatrix}.$$
In this case
$$\operatorname{var}\left(\widehat\beta\mid X\right) = \frac{\sigma^2}{n}\begin{pmatrix} 1 & \rho\\ \rho & 1\end{pmatrix}^{-1} = \frac{\sigma^2}{n\left(1-\rho^2\right)}\begin{pmatrix} 1 & -\rho\\ -\rho & 1\end{pmatrix}.$$
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the variance of a coefficient estimate, $\sigma^2\left[n\left(1-\rho^2\right)\right]^{-1}$, approaches infinity as $\rho$ approaches 1. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.

What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of individual estimates is reduced. The imprecision, however, will be reflected by large standard errors, so there is no distortion in inference.
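To see the symmetry between $\rho$ and $n$ numerically, the variance factor $\sigma^2\left[n(1-\rho^2)\right]^{-1}$ can simply be tabulated. A brief Python sketch, added here for illustration only:

```python
import numpy as np

sigma2 = 1.0
for n in (100, 400):
    for rho in (0.0, 0.5, 0.9, 0.99):
        var = sigma2 / (n * (1 - rho**2))   # variance of one slope estimate
        print(f"n={n:4d}  rho={rho:4.2f}  var={var:.5f}")
# Quadrupling n has the same effect on the variance as changing rho so that
# (1 - rho^2) becomes four times larger.
```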
Some earlier textbooks overemphasized a concern about multicollinearity. A very amusing parody of these texts appeared in Chapter 23.3 of Goldberger's A Course in Econometrics (1991), which is reprinted below. To understand his basic point, you should notice how the estimation variance $\sigma^2\left[n\left(1-\rho^2\right)\right]^{-1}$ depends equally and symmetrically on the correlation $\rho$ and the sample size $n$.
$^2$See Appendix A.5 for the definition of the rank of a matrix.
Arthur S. Goldberger
Art Goldberger (1930-2009) was one of the most distinguished members of the Department of Economics at the University of Wisconsin. His PhD thesis developed an early macroeconometric forecasting model (known as the Klein-Goldberger model) but most of his career focused on microeconometric issues. He was the leading pioneer of what has been called the Wisconsin Tradition of empirical work, a combination of formal econometric theory with a careful critical analysis of empirical work. Goldberger wrote a series of highly regarded and influential graduate econometric textbooks, including Econometric Theory (1964), Topics in Regression Analysis (1968), and A Course in Econometrics (1991).
Micronumerosity
Arthur S. Goldberger
A Course in Econometrics (1991), Chapter 23.3
Econometrics texts devote many pages to the problem of multicollinearity in multiple regression, but they say little about the closely analogous problem of small sample size in estimating a univariate mean. Perhaps that imbalance is attributable to the lack of an exotic polysyllabic name for "small sample size." If so, we can remove that impediment by introducing the term micronumerosity.
Suppose an econometrician set out to write a chapter about small sample size in sampling
from a univariate population. Judging from what is now written about multicollinearity, the
chapter might look like this:
1. Micronumerosity
The extreme case, “exact micronumerosity,” arises when n = 0, in which case the sample
estimate of µ is not unique. (Technically, there is a violation of the rank condition n > 0 : the
matrix 0 is singular.) The extreme case is easy enough to recognize. “Near micronumerosity”
is more subtle, and yet very serious. It arises when the rank condition n > 0 is barely
satisﬁed. Near micronumerosity is very prevalent in empirical economics.
2. Consequences of micronumerosity
The consequences of micronumerosity are serious. Precision of estimation is reduced. There are two aspects of this reduction: estimates of $\mu$ may have large errors, and not only that, but $V(\bar y)$ will be large.
Investigators will sometimes be led to accept the hypothesis $\mu = 0$ because $\bar y/\hat\sigma_{\bar y}$ is small, even though the true situation may be not that $\mu = 0$ but simply that the sample data have not enabled us to pick $\mu$ up.
The estimate of $\mu$ will be very sensitive to sample data, and the addition of a few more observations can sometimes produce drastic shifts in the sample mean.
The true $\mu$ may be sufficiently large for the null hypothesis $\mu = 0$ to be rejected, even though $V(\bar y) = \sigma^2/n$ is large because of micronumerosity. But if the true $\mu$ is small (although nonzero) the hypothesis $\mu = 0$ may mistakenly be accepted.
3. Testing for micronumerosity
Tests for the presence of micronumerosity require the judicious use of various ﬁngers. Some
researchers prefer a single ﬁnger, others use their toes, still others let their thumbs rule.
A generally reliable guide may be obtained by counting the number of observations. Most of the time in econometric analysis, when $n$ is close to zero, it is also far from infinity.
Several test procedures develop critical values $n^*$, such that micronumerosity is a problem only if $n$ is smaller than $n^*$. But those procedures are questionable.
4. Remedies for micronumerosity
If micronumerosity proves serious in the sense that the estimate of µ has an unsatisfactorily
low degree of precision, we are in the statistical position of not being able to make bricks
without straw. The remedy lies essentially in the acquisition, if possible, of larger samples
from the same population.
But more data are no remedy for micronumerosity if the additional data are simply “more
of the same.” So obtaining lots of small samples from the same population will not help.
5.11 Normal Regression Model
In the special case of the normal linear regression model introduced in Section 4.14, we can derive exact sampling distributions for the least-squares estimator, residuals, and variance estimator. In particular, under the normality assumption $e_i\mid x_i \sim \mathrm{N}\left(0,\sigma^2\right)$ we have the multivariate implication
$$e\mid X \sim \mathrm{N}\left(0, I_n\sigma^2\right).$$
That is, the error vector $e$ is independent of $X$ and is normally distributed. Since linear functions of normals are also normal, this implies that conditional on $X$
$$\begin{pmatrix}\widehat\beta - \beta\\ \hat e\end{pmatrix} = \begin{pmatrix}(X'X)^{-1}X'\\ M\end{pmatrix}e \sim \mathrm{N}\left(0, \begin{pmatrix}\sigma^2(X'X)^{-1} & 0\\ 0 & \sigma^2 M\end{pmatrix}\right)$$
where $M = I_n - X(X'X)^{-1}X'$. Since uncorrelated normal variables are independent, it follows that $\widehat\beta$ is independent of any function of the OLS residuals, including the estimated error variance $s^2$ or $\hat\sigma^2$ or prediction errors $\tilde e$.
The spectral decomposition (see equation (A.5)) of $M$ yields
$$M = H\begin{pmatrix}I_{n-k} & 0\\ 0 & 0\end{pmatrix}H'$$
where $H'H = I_n$. Let $u = \sigma^{-1}H'e \sim \mathrm{N}\left(0, H'H\right) = \mathrm{N}\left(0, I_n\right)$. Then
$$
\frac{n\hat\sigma^2}{\sigma^2} = \frac{(n-k)s^2}{\sigma^2}
= \frac{1}{\sigma^2}\hat e'\hat e
= \frac{1}{\sigma^2}e'Me
= \frac{1}{\sigma^2}e'H\begin{pmatrix}I_{n-k} & 0\\ 0 & 0\end{pmatrix}H'e
= u'\begin{pmatrix}I_{n-k} & 0\\ 0 & 0\end{pmatrix}u
\sim \chi^2_{n-k},
$$
a chi-square distribution with $n-k$ degrees of freedom.
Furthermore, if standard errors are calculated using the homoskedastic formula (5.21),
$$
\frac{\hat\beta_j - \beta_j}{s(\hat\beta_j)}
= \frac{\hat\beta_j - \beta_j}{s\sqrt{\left[(X'X)^{-1}\right]_{jj}}}
\sim \frac{\mathrm{N}\left(0, \sigma^2\left[(X'X)^{-1}\right]_{jj}\right)}{\sqrt{\dfrac{\sigma^2}{n-k}\chi^2_{n-k}\left[(X'X)^{-1}\right]_{jj}}}
= \frac{\mathrm{N}(0,1)}{\sqrt{\dfrac{\chi^2_{n-k}}{n-k}}}
\sim t_{n-k},
$$
a t distribution with $n-k$ degrees of freedom.
Theorem 5.11.1 Normal Regression
In the linear regression model (Assumption 5.1.1), if $e_i$ is independent of $x_i$ and distributed $\mathrm{N}\left(0,\sigma^2\right)$ then
• $\widehat\beta - \beta \sim \mathrm{N}\left(0, \sigma^2(X'X)^{-1}\right)$
• $\dfrac{n\hat\sigma^2}{\sigma^2} = \dfrac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k}$
• $\dfrac{\hat\beta_j - \beta_j}{s(\hat\beta_j)} \sim t_{n-k}$

These are the exact finite-sample distributions of the least-squares estimator and variance estimators, and are the basis for traditional inference in linear regression.
While elegant, the difficulty in applying Theorem 5.11.1 is that the normality assumption is too restrictive to be empirically plausible, and therefore inference based on Theorem 5.11.1 has no guarantee of accuracy. We develop a more broadly-applicable inference theory based on large sample (asymptotic) approximations in the following chapter.
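A small simulation can illustrate Theorem 5.11.1: with normal errors the studentized coefficient is exactly t-distributed regardless of the sample size. The Python sketch below is an illustration under assumed simulation settings, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, reps = 20, 2, 5000
beta = np.array([1.0, 0.5])
tstats = np.empty(reps)
for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    e = rng.normal(size=n)                     # homoskedastic normal errors
    y = X @ beta + e
    b = np.linalg.solve(X.T @ X, X.T @ y)
    ehat = y - X @ b
    s2 = ehat @ ehat / (n - k)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    tstats[r] = (b[1] - beta[1]) / se

# Rejection frequency at the exact t_{n-k} critical value (2.101 for 18 d.f.)
print(np.mean(np.abs(tstats) > 2.101))         # close to 0.05
```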
William Gosset
William S. Gosset (1876-1937) of England is most famous for his derivation of the student's t distribution, published in the paper "The probable error of a mean" in 1908. At the time, Gosset worked at Guinness Brewery, which prohibited its employees from publishing in order to prevent the possible loss of trade secrets. To circumvent this barrier, Gosset published under the pseudonym "Student". Consequently, this famous distribution is known as the student's t rather than Gosset's t!
Exercises
Exercise 5.1 Explain the difference between $\frac{1}{n}\sum_{i=1}^{n}x_i x_i'$ and $E\left(x_i x_i'\right)$.
Exercise 5.2 True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $E\left(e_i\mid x_i\right) = 0$, and $\hat e_i$ is the OLS residual from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^{n}x_i^2\hat e_i = 0$.
Exercise 5.3 Prove Theorem 5.4.1.2.
Exercise 5.4 In a linear model
$$y = X\beta + e,\qquad E(e\mid X) = 0,\qquad \operatorname{var}(e\mid X) = \sigma^2\Sigma$$
with $\Sigma$ a known function of $X$, the GLS estimator is
$$\widetilde\beta = \left(X'\Sigma^{-1}X\right)^{-1}\left(X'\Sigma^{-1}y\right),$$
the residual vector is $\hat e = y - X\widetilde\beta$, and an estimate of $\sigma^2$ is
$$s^2 = \frac{1}{n-k}\hat e'\Sigma^{-1}\hat e.$$
(a) Find $E\left(\widetilde\beta\mid X\right)$.
(b) Find $\operatorname{var}\left(\widetilde\beta\mid X\right)$.
(c) Prove that $\hat e = M_1 e$, where $M_1 = I - X\left(X'\Sigma^{-1}X\right)^{-1}X'\Sigma^{-1}$.
(d) Prove that $M_1'\Sigma^{-1}M_1 = \Sigma^{-1} - \Sigma^{-1}X\left(X'\Sigma^{-1}X\right)^{-1}X'\Sigma^{-1}$.
(e) Find $E\left(s^2\mid X\right)$.
(f) Is $s^2$ a reasonable estimator for $\sigma^2$?
Exercise 5.5 Let $(y_i, x_i)$ be a random sample with $E(y\mid X) = X\beta$. Consider the Weighted Least Squares (WLS) estimator of $\beta$
$$\widetilde\beta = \left(X'WX\right)^{-1}\left(X'Wy\right)$$
where $W = \operatorname{diag}\left(w_1,\dots,w_n\right)$ and $w_i = x_{ji}^{-2}$, where $x_{ji}$ is one of the $x_i$.
(a) In which contexts would $\widetilde\beta$ be a good estimator?
(b) Using your intuition, in which situations would you expect that $\widetilde\beta$ would perform better than OLS?
Exercise 5.6 Show (5.20) in the homoskedastic regression model.
Exercise 5.7 Prove (5.22).
Exercise 5.8 Show (5.23) and (5.24) in the homoskedastic regression model.
Chapter 6
Asymptotic Theory for Least Squares
6.1 Introduction
In the previous chapter we derived the mean and variance of the least-squares estimator in the context of the linear regression model, but this is not a complete description of the sampling distribution, nor sufficient for inference (confidence intervals and hypothesis testing) on the unknown parameters. Furthermore, the theory does not apply in the context of the linear projection model, which is more relevant for empirical applications.
To illustrate the situation with an example, let $y_i$ and $x_i$ be drawn from the joint density
$$f(x, y) = \frac{1}{2\pi x y}\exp\left(-\frac{1}{2}\left(\log y - \log x\right)^2\right)\exp\left(-\frac{1}{2}\left(\log x\right)^2\right)$$
and let $\hat\beta$ be the slope coefficient estimate from a least-squares regression of $y_i$ on $x_i$ and a constant. Using simulation methods, the density function of $\hat\beta$ was computed and plotted in Figure 6.1 for sample sizes of $n = 25$, $n = 100$ and $n = 800$. The vertical line marks the true projection coefficient.

From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. Is there a simple way to characterize the sampling distribution of $\hat\beta$?
In principle the sampling distribution of $\hat\beta$ is a function of the joint distribution of $(y_i, x_i)$ and the sample size $n$, but in practice this function is extremely complicated so it is not feasible to analytically calculate the exact distribution of $\hat\beta$ except in very special cases. Therefore we typically rely on approximation methods.

The most widely used and versatile method is asymptotic theory, which approximates sampling distributions by taking the limit of the finite sample distribution as the sample size $n$ tends to infinity. The primary tools of asymptotic theory are the weak law of large numbers (WLLN), central limit theorem (CLT), and continuous mapping theorem (CMT), which were reviewed in Chapter 2. With these tools we can approximate the sampling distributions of most econometric estimators.
It turns out that the asymptotic theory of least-squares estimation applies equally to the projection model and the linear CEF model, and therefore the results in this chapter will be stated for the broader projection model (Assumption 1.5.1 and Assumption 3.16.1).
6.2 Consistency of Least-Squares Estimation

In this section we use the weak law of large numbers (WLLN, Theorem 2.6.1 and Theorem 2.7.2) and continuous mapping theorem (CMT, Theorem 2.9.1) to show that the least-squares estimator $\widehat\beta$ is consistent for the projection coefficient $\beta$.

This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the WLLN shows that sample moments converge in probability to population moments. And third, the CMT states that continuous functions preserve convergence in probability. We now explain each step in brief and then in greater detail.

[Figure 6.1: Sampling Density of $\hat\beta$]
First, observe that the OLS estimator
$$\widehat\beta = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i y_i\right) = \widehat Q_{xx}^{-1}\widehat Q_{xy}$$
is a function of the sample moments $\widehat Q_{xx} = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'$ and $\widehat Q_{xy} = \frac{1}{n}\sum_{i=1}^{n}x_i y_i$.
Second, by an application of the WLLN these sample moments converge in probability to the population moments. Specifically, the fact that $(y_i, x_i)$ are mutually independent and identically distributed (Assumption 1.5.1) implies that any function of $(y_i, x_i)$ is iid, including $x_i x_i'$ and $x_i y_i$. These variables also have finite expectations by Theorem 3.16.1.1. Under these conditions, the WLLN (Theorem 2.7.2) implies that as $n\to\infty$,
$$\widehat Q_{xx} = \frac{1}{n}\sum_{i=1}^{n}x_i x_i' \xrightarrow{p} E\left(x_i x_i'\right) = Q_{xx} \tag{6.1}$$
and
$$\widehat Q_{xy} = \frac{1}{n}\sum_{i=1}^{n}x_i y_i \xrightarrow{p} E\left(x_i y_i\right) = Q_{xy}. \tag{6.2}$$
Third, the CMT (Theorem 2.9.1) allows us to combine these equations to show that $\widehat\beta$ converges in probability to $\beta$. Specifically, as $n\to\infty$,
$$\widehat\beta = \widehat Q_{xx}^{-1}\widehat Q_{xy} \xrightarrow{p} Q_{xx}^{-1}Q_{xy} = \beta. \tag{6.3}$$
We have shown that $\widehat\beta \xrightarrow{p} \beta$ as $n\to\infty$. In words, the OLS estimator converges in probability to the projection coefficient vector $\beta$ as the sample size $n$ gets large.
To fully understand the application of the CMT we walk through it in detail. We can write
$$\widehat\beta = g\left(\widehat Q_{xx}, \widehat Q_{xy}\right)$$
where $g(A, b) = A^{-1}b$ is a function of $A$ and $b$. The function $g(A, b)$ is a continuous function of $A$ and $b$ at all values of the arguments such that $A^{-1}$ exists. Assumption 3.16.1 implies that $Q_{xx}^{-1}$ exists and thus $g(A, b)$ is continuous at $A = Q_{xx}$. This justifies the application of the CMT in (6.3).
For a slightly different demonstration of (6.3), recall that (5.6) implies that
$$\widehat\beta - \beta = \widehat Q_{xx}^{-1}\widehat Q_{xe} \tag{6.4}$$
where
$$\widehat Q_{xe} = \frac{1}{n}\sum_{i=1}^{n}x_i e_i.$$
The WLLN and (3.25) imply
$$\widehat Q_{xe} \xrightarrow{p} E\left(x_i e_i\right) = 0. \tag{6.5}$$
Therefore
$$\widehat\beta - \beta = \widehat Q_{xx}^{-1}\widehat Q_{xe} \xrightarrow{p} Q_{xx}^{-1}\cdot 0 = 0$$
which is the same as $\widehat\beta \xrightarrow{p} \beta$.
Theorem 6.2.1 Consistency of Least-Squares
Under Assumptions 1.5.1 and 3.16.1, $\widehat Q_{xx}\xrightarrow{p}Q_{xx}$, $\widehat Q_{xy}\xrightarrow{p}Q_{xy}$, $\widehat Q_{xe}\xrightarrow{p}0$, and $\widehat\beta\xrightarrow{p}\beta$ as $n\to\infty$.

Theorem 6.2.1 states that the OLS estimator $\widehat\beta$ converges in probability to $\beta$ as $n$ increases, and thus $\widehat\beta$ is consistent for $\beta$. In the stochastic order notation, Theorem 6.2.1 can be equivalently written as
$$\widehat\beta = \beta + o_p(1). \tag{6.6}$$
To illustrate the effect of sample size on the least-squares estimator, consider the least-squares regression
$$\ln(Wage_i) = \beta_0 + \beta_1 Education_i + \beta_2 Experience_i + \beta_3 Experience_i^2 + e_i.$$
We use the sample of 30,833 white men from the March 2009 CPS. Randomly sorting the observations, and sequentially estimating the model by least-squares, starting with the first 40 observations, and continuing until the full sample is used, the sequence of estimates is displayed in Figure 6.2. You can see how the least-squares estimate changes with the sample size, but as the number of observations increases it settles down to the full-sample estimate $\hat\beta_1 = 0.114$.
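The same settling-down behaviour can be reproduced in a simple simulation: as the number of observations used in the regression grows, the least-squares estimate approaches the population projection coefficient. A minimal Python sketch, using an assumed data-generating process rather than the CPS data:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.array([0.5, 0.1])
N = 20000
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ beta + rng.normal(size=N)

for n in (40, 400, 4000, 20000):
    b = np.linalg.solve(X[:n].T @ X[:n], X[:n].T @ y[:n])
    print(n, b[1])      # the slope estimate approaches 0.1 as n grows
```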
[Figure 6.2: The least-squares estimator $\hat\beta_1$ as a function of sample size $n$]
6.3 Consistency of Sample Variance Estimators
Using the methods of Section 6.2 we can show that the estimators $\hat\sigma^2$ and $s^2$ are consistent for $\sigma^2$. (The proof is given in Section 6.18.)

Theorem 6.3.1 Under Assumption 1.5.1 and Assumption 3.16.1, $\hat\sigma^2\xrightarrow{p}\sigma^2$ and $s^2\xrightarrow{p}\sigma^2$ as $n\to\infty$.

One implication of this theorem is that multiple estimators can be consistent for the same population parameter. While $\hat\sigma^2$ and $s^2$ are unequal in any given application, they are close in value when $n$ is very large.
6.4 Asymptotic Normality
We started this chapter discussing the need for an approximation to the distribution of the OLS estimator $\widehat\beta$. In Section 6.2 we showed that $\widehat\beta$ converges in probability to $\beta$. Consistency is a useful first step, but in itself does not provide a useful approximation to the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution.

The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. The steps are as follows.

Take equation (6.4) and multiply it by $\sqrt{n}$. This yields the expression
$$\sqrt{n}\left(\widehat\beta - \beta\right) = \left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_i e_i\right). \tag{6.7}$$
This shows that the normalized and centered estimator $\sqrt{n}\left(\widehat\beta - \beta\right)$ is a function of the sample average $\frac{1}{n}\sum_{i=1}^{n}x_i x_i'$ and the normalized sample average $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_i e_i$. Furthermore, the latter has mean zero so the central limit theorem (CLT, Theorem 2.8.1) applies.
The product $x_i e_i$ is iid (since the observations are iid) and mean zero (since $E\left(x_i e_i\right) = 0$). Define the $k\times k$ covariance matrix
$$\Omega = E\left(x_i x_i' e_i^2\right). \tag{6.8}$$
We require the elements of $\Omega$ to be finite, or equivalently that $E\left\|x_i e_i\right\|^2 < \infty$. Using $\left\|x_i e_i\right\|^2 = \left\|x_i\right\|^2 e_i^2$ and the Cauchy-Schwarz Inequality (B.20),
$$E\left\|x_i e_i\right\|^2 = E\left(\left\|x_i\right\|^2 e_i^2\right) \le \left(E\left\|x_i\right\|^4\right)^{1/2}\left(E e_i^4\right)^{1/2} \tag{6.9}$$
which is finite if $x_i$ and $e_i$ have finite fourth moments. As $e_i$ is a linear combination of $y_i$ and $x_i$, it is sufficient that the observables have finite fourth moments (Theorem 3.16.1.6).

Assumption 6.4.1 In addition to Assumption 3.16.1, $Ey_i^4 < \infty$ and $E\left\|x_i\right\|^4 < \infty$.
Under Assumption 6.4.1 the CLT (Theorem 2.8.1) can be applied.

Theorem 6.4.1 Under Assumption 1.5.1 and Assumption 6.4.1, as $n\to\infty$,
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}x_i e_i \xrightarrow{d} \mathrm{N}(0, \Omega) \tag{6.10}$$
where $\Omega = E\left(x_i x_i' e_i^2\right)$.

Putting together (6.1), (6.7), and (6.10),
$$\sqrt{n}\left(\widehat\beta - \beta\right) \xrightarrow{d} Q_{xx}^{-1}\,\mathrm{N}(0, \Omega) = \mathrm{N}\left(0, Q_{xx}^{-1}\Omega Q_{xx}^{-1}\right)$$
as $n\to\infty$, where the final equality follows from the property that linear combinations of normal vectors are also normal (Theorem B.9.1).

We have derived the asymptotic normal approximation to the distribution of the least-squares estimator.
Theorem 6.4.2 Asymptotic Normality of Least-Squares Estimator
Under Assumption 1.5.1 and Assumption 6.4.1, as $n\to\infty$,
$$\sqrt{n}\left(\widehat\beta - \beta\right) \xrightarrow{d} \mathrm{N}\left(0, V_{\beta}\right)$$
where
$$V_{\beta} = Q_{xx}^{-1}\Omega Q_{xx}^{-1}, \tag{6.11}$$
$Q_{xx} = E\left(x_i x_i'\right)$, and $\Omega = E\left(x_i x_i' e_i^2\right)$.
In the stochastic order notation, Theorem 6.4.2 implies that
$$\widehat\beta = \beta + O_p(n^{-1/2}) \tag{6.12}$$
and
$$\widehat\beta - \beta = O_p(n^{-1/2})$$
which is stronger than (6.6).

The matrix $V_{\beta} = \operatorname{avar}(\widehat\beta)$ is the variance of the asymptotic distribution of $\sqrt{n}\left(\widehat\beta - \beta\right)$. Consequently, $V_{\beta}$ is often referred to as the asymptotic covariance matrix of $\widehat\beta$. The expression $V_{\beta} = Q_{xx}^{-1}\Omega Q_{xx}^{-1}$ is called a sandwich form. It might be worth noticing that there is a difference between the variance of the asymptotic distribution given in (6.11) and the finite-sample conditional variance in the CEF model as given in (5.11):
$$V = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}.$$
While $V_{\beta}$ and $V$ are different, the two are close if $n$ is large. Indeed, as $n\to\infty$,
$$V \xrightarrow{p} V_{\beta}.$$
There is a special case where $\Omega$ and $V_{\beta}$ simplify. We say that $e_i$ is a Homoskedastic Projection Error when
$$\operatorname{cov}\left(x_i x_i', e_i^2\right) = 0. \tag{6.13}$$
Condition (6.13) holds in the homoskedastic linear regression model, but is somewhat broader. Under (6.13) the asymptotic variance formulas simplify as
$$\Omega = E\left(x_i x_i'\right)E\left(e_i^2\right) = Q_{xx}\sigma^2 \tag{6.14}$$
$$V_{\beta} = Q_{xx}^{-1}\Omega Q_{xx}^{-1} = Q_{xx}^{-1}\sigma^2 \equiv V^0_{\beta}. \tag{6.15}$$
In (6.15) we define $V^0_{\beta} = Q_{xx}^{-1}\sigma^2$ whether (6.13) is true or false. When (6.13) is true then $V_{\beta} = V^0_{\beta}$, otherwise $V_{\beta} \neq V^0_{\beta}$. We call $V^0_{\beta}$ the homoskedastic asymptotic covariance matrix.
Theorem 6.4.2 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds true for all joint distributions of $(y_i, x_i)$ which satisfy the conditions of Assumption 6.4.1, and is therefore broadly applicable. Consequently, asymptotic normality is routinely used to approximate the finite sample distribution of $\sqrt{n}\left(\widehat\beta - \beta\right)$.
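The quality of the normal approximation can be examined by simulation: draw repeated samples, compute $\sqrt{n}\left(\hat\beta_1 - \beta_1\right)$, and compare its spread with the asymptotic variance $V_{\beta}$. The Python sketch below is an illustration under an assumed heteroskedastic design, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 2000
beta = np.array([1.0, 2.0])
draws = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    e = rng.normal(size=n) * np.sqrt(0.5 + x**2)   # heteroskedastic errors
    y = X @ beta + e
    b = np.linalg.solve(X.T @ X, X.T @ y)
    draws[r] = np.sqrt(n) * (b[1] - beta[1])

# For this design Q_xx = I and the (2,2) element of Omega is
# E[x^2 (0.5 + x^2)] = 0.5 + 3 = 3.5, so the (2,2) element of V_beta is 3.5.
print(draws.mean(), draws.var())    # mean near 0, variance near 3.5
```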
A difficulty is that for any fixed $n$ the sampling distribution of $\widehat\beta$ can be arbitrarily far from the normal distribution. In Figure 6.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes. The normal approximation improves as $n$ increases, but how large should $n$ be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let $y_i = \beta_1 x_i + \beta_2 + e_i$ where $x_i$ is $\mathrm{N}(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e) = \frac{\alpha}{2}|e|^{-\alpha-1}$, $|e| \ge 1$. If $\alpha > 2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha-2)$. As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat\beta_1 - \beta_1\right)$ has the $\mathrm{N}(0,1)$ asymptotic distribution for any $\alpha > 2$.

In Figure 6.3 we display the finite sample densities of the normalized estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat\beta_1 - \beta_1\right)$, setting $n = 100$ and varying the parameter $\alpha$. For $\alpha = 3.0$ the density is very close to the $\mathrm{N}(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero.

[Figure 6.3: Density of Normalized OLS estimator with Double Pareto Error]
Another example is shown in Figure 6.4. Here the model is $y_i = \beta + e_i$ where
$$e_i = \frac{u_i^k - E\left(u_i^k\right)}{\left(E\left(u_i^{2k}\right) - \left(E\left(u_i^k\right)\right)^2\right)^{1/2}} \tag{6.16}$$
and $u_i \sim \mathrm{N}(0,1)$. We show the sampling distribution of $\sqrt{n}\left(\hat\beta - \beta\right)$ setting $n = 100$, for $k = 1$, 4, 6 and 8. As $k$ increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 6.3 and 6.4 is that the $\mathrm{N}(0,1)$ asymptotic approximation is never guaranteed to be accurate.
6.5 Joint Distribution
Theorem 6.4.2 gives the joint asymptotic distribution of the coefficient estimates. We can use the result to study the covariance between the coefficient estimates. For example, suppose $k = 2$ and write the estimates as $(\hat\beta_1, \hat\beta_2)$. For simplicity suppose that the regressors are mean zero. Then we can write
$$Q_{xx} = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{pmatrix}$$
where $\sigma_1^2$ and $\sigma_2^2$ are the variances of $x_{1i}$ and $x_{2i}$, and $\rho$ is their correlation. If the error is homoskedastic, then the asymptotic variance matrix for $(\hat\beta_1, \hat\beta_2)$ is $V^0_{\beta} = Q_{xx}^{-1}\sigma^2$. By the formula for inversion of a $2\times 2$ matrix,
$$Q_{xx}^{-1} = \frac{1}{\sigma_1^2\sigma_2^2\left(1-\rho^2\right)}\begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2\\ -\rho\sigma_1\sigma_2 & \sigma_1^2\end{pmatrix}.$$
Thus if $x_{1i}$ and $x_{2i}$ are positively correlated ($\rho > 0$) then $\hat\beta_1$ and $\hat\beta_2$ are negatively correlated (and vice-versa).
[Figure 6.4: Density of Normalized OLS estimator with error process (6.16)]
For illustration, Figure 6.5 displays the probability contours of the joint asymptotic distribution of $\hat\beta_1 - \beta_1$ and $\hat\beta_2 - \beta_2$ when $\sigma_1^2 = \sigma_2^2 = \sigma^2 = 1$ and $\rho = 0.5$. The coefficient estimates are negatively correlated since the regressors are positively correlated. This means that if $\hat\beta_1$ is unusually negative, it is likely that $\hat\beta_2$ is unusually positive, or conversely. It is also unlikely that we will observe both $\hat\beta_1$ and $\hat\beta_2$ unusually large and of the same sign.

This finding that the correlation of the regressors is of opposite sign to the correlation of the coefficient estimates is sensitive to the assumption of homoskedasticity. If the errors are heteroskedastic then this relationship is not guaranteed.
This can be seen through a simple constructed example. Suppose that $x_{1i}$ and $x_{2i}$ only take the values $\{-1, +1\}$, symmetrically, with $\Pr\left(x_{1i} = x_{2i} = 1\right) = \Pr\left(x_{1i} = x_{2i} = -1\right) = 3/8$, and $\Pr\left(x_{1i} = 1, x_{2i} = -1\right) = \Pr\left(x_{1i} = -1, x_{2i} = 1\right) = 1/8$. You can check that the regressors are mean zero, unit variance and correlation 0.5, which is identical with the setting displayed in Figure 6.5 when the error is homoskedastic.

Now suppose that the error is heteroskedastic. Specifically, suppose that $E\left(e_i^2\mid x_{1i} = x_{2i}\right) = \frac{5}{4}$ and $E\left(e_i^2\mid x_{1i} \neq x_{2i}\right) = \frac{1}{4}$. You can check that $E\left(e_i^2\right) = 1$, $E\left(x_{1i}^2 e_i^2\right) = E\left(x_{2i}^2 e_i^2\right) = 1$ and $E\left(x_{1i}x_{2i}e_i^2\right) = \frac{7}{8}$. Therefore
$$
V_{\beta} = Q_{xx}^{-1}\Omega Q_{xx}^{-1}
= \frac{16}{9}\begin{pmatrix} 1 & -\frac{1}{2}\\ -\frac{1}{2} & 1\end{pmatrix}\begin{pmatrix} 1 & \frac{7}{8}\\ \frac{7}{8} & 1\end{pmatrix}\begin{pmatrix} 1 & -\frac{1}{2}\\ -\frac{1}{2} & 1\end{pmatrix}
= \frac{2}{3}\begin{pmatrix} 1 & \frac{1}{4}\\ \frac{1}{4} & 1\end{pmatrix}.
$$
Thus the coefficient estimates $\hat\beta_1$ and $\hat\beta_2$ are positively correlated (their correlation is $1/4$.) The joint probability contours of their asymptotic distribution are displayed in Figure 6.6. We can see how the two estimates are positively associated.
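The matrix algebra in this example is easy to verify numerically. The sketch below, added here for illustration, builds $Q_{xx}$ and $\Omega$ directly from the stated joint probabilities and conditional variances and evaluates $Q_{xx}^{-1}\Omega Q_{xx}^{-1}$.

```python
import numpy as np

# support points (x1, x2) with their probabilities and conditional error variances
points = [(1, 1, 3/8, 5/4), (-1, -1, 3/8, 5/4),
          (1, -1, 1/8, 1/4), (-1, 1, 1/8, 1/4)]

Q = np.zeros((2, 2))
Omega = np.zeros((2, 2))
for x1, x2, p, v in points:
    x = np.array([x1, x2])
    Q += p * np.outer(x, x)           # E[x x']
    Omega += p * v * np.outer(x, x)   # E[x x' e^2]

Qinv = np.linalg.inv(Q)
V = Qinv @ Omega @ Qinv
print(Q)      # [[1, 0.5], [0.5, 1]]
print(Omega)  # [[1, 0.875], [0.875, 1]]
print(V)      # [[2/3, 1/6], [1/6, 2/3]]; correlation 1/4
```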
[Figure 6.5: Contours of Joint Distribution of $(\hat\beta_1, \hat\beta_2)$, homoskedastic case]
What we found through this example is that in the presence of heteroskedasticity there is no
simple relationship between the correlation of the regressors and the correlation of the parameter
estimates.
We can extend the above analysis to study the covariance between coefficient sub-vectors. For example, partitioning $x_i' = \left(x_{1i}', x_{2i}'\right)$ and $\beta' = \left(\beta_1', \beta_2'\right)$, we can write the general model as
$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$$
and the coefficient estimates as $\widehat\beta' = \left(\widehat\beta_1', \widehat\beta_2'\right)$. Make the partitions
$$Q_{xx} = \begin{pmatrix} Q_{11} & Q_{12}\\ Q_{21} & Q_{22}\end{pmatrix}, \qquad \Omega = \begin{pmatrix} \Omega_{11} & \Omega_{12}\\ \Omega_{21} & \Omega_{22}\end{pmatrix}. \tag{6.17}$$
From (3.37)
$$Q_{xx}^{-1} = \begin{pmatrix} Q_{11\cdot2}^{-1} & -Q_{11\cdot2}^{-1}Q_{12}Q_{22}^{-1}\\ -Q_{22\cdot1}^{-1}Q_{21}Q_{11}^{-1} & Q_{22\cdot1}^{-1}\end{pmatrix}$$
where $Q_{11\cdot2} = Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}$ and $Q_{22\cdot1} = Q_{22} - Q_{21}Q_{11}^{-1}Q_{12}$. Thus when the error is homoskedastic,
$$\operatorname{cov}\left(\widehat\beta_1, \widehat\beta_2\right) = -\sigma^2 Q_{11\cdot2}^{-1}Q_{12}Q_{22}^{-1},$$
which is a matrix generalization of the two-regressor case.
In the general case, you can show that (Exercise 6.5)
$$V_{\beta} = \begin{pmatrix} V_{11} & V_{12}\\ V_{21} & V_{22}\end{pmatrix} \tag{6.18}$$
where
$$V_{11} = Q_{11\cdot2}^{-1}\left(\Omega_{11} - Q_{12}Q_{22}^{-1}\Omega_{21} - \Omega_{12}Q_{22}^{-1}Q_{21} + Q_{12}Q_{22}^{-1}\Omega_{22}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot2}^{-1} \tag{6.19}$$
$$V_{21} = Q_{22\cdot1}^{-1}\left(\Omega_{21} - Q_{21}Q_{11}^{-1}\Omega_{11} - \Omega_{22}Q_{22}^{-1}Q_{21} + Q_{21}Q_{11}^{-1}\Omega_{12}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot2}^{-1} \tag{6.20}$$
$$V_{22} = Q_{22\cdot1}^{-1}\left(\Omega_{22} - Q_{21}Q_{11}^{-1}\Omega_{12} - \Omega_{21}Q_{11}^{-1}Q_{12} + Q_{21}Q_{11}^{-1}\Omega_{11}Q_{11}^{-1}Q_{12}\right)Q_{22\cdot1}^{-1} \tag{6.21}$$
Unfortunately, these expressions are not easily interpretable.
[Figure 6.6: Contours of Joint Distribution of $\hat\beta_1$ and $\hat\beta_2$, heteroskedastic case]
6.6 Uniformly Consistent Residuals*
We have described the least-squares residuals $\hat e_i$ as estimates of the errors $e_i$. Are $\hat e_i$ consistent for $e_i$? Notice that we can write the residual as
$$
\hat e_i = y_i - x_i'\widehat\beta
= e_i + x_i'\beta - x_i'\widehat\beta
= e_i - x_i'\left(\widehat\beta - \beta\right). \tag{6.22}
$$
Since $\widehat\beta - \beta \xrightarrow{p} 0$ it seems reasonable to guess that $\hat e_i$ will be close to $e_i$ if $n$ is large.

We can bound the difference in (6.22) using the Schwarz inequality (A.7) to find
$$\left|\hat e_i - e_i\right| = \left|x_i'\left(\widehat\beta - \beta\right)\right| \le \left\|x_i\right\|\left\|\widehat\beta - \beta\right\|. \tag{6.23}$$
To bound (6.23) we can use $\left\|\widehat\beta - \beta\right\| = O_p(n^{-1/2})$ from Theorem 6.4.2, but we also need to bound the random variable $\left\|x_i\right\|$.
The key is Theorem 2.12.1 which shows that $E\left\|x_i\right\|^4 < \infty$ implies $\left\|x_i\right\| = o_p\left(n^{1/4}\right)$ uniformly in $i$, or
$$n^{-1/4}\max_{1\le i\le n}\left\|x_i\right\| \xrightarrow{p} 0.$$
Applied to (6.23) we obtain
$$
\max_{1\le i\le n}\left|\hat e_i - e_i\right| \le \max_{1\le i\le n}\left\|x_i\right\|\left\|\widehat\beta - \beta\right\| = o_p\left(n^{1/4}\right)O_p(n^{-1/2}) = o_p(n^{-1/4}).
$$
We have shown the following.
Theorem 6.6.1 Under Assumptions 1.5.1 and 6.4.1, uniformly in $1\le i\le n$,
$$\hat e_i = e_i + o_p(n^{-1/4}). \tag{6.24}$$
What about the squared residuals $\hat e_i^2$? Squaring the two sides of (6.24) we obtain
$$
\hat e_i^2 = \left(e_i + o_p(n^{-1/4})\right)^2
= e_i^2 + 2e_i\,o_p(n^{-1/4}) + o_p(n^{-1/2})
= e_i^2 + o_p(1) \tag{6.25}
$$
uniformly in $1\le i\le n$, since $e_i = o_p\left(n^{1/4}\right)$ when $E\left|e_i\right|^4 < \infty$ by Theorem 2.12.1.
Theorem 6.6.2 Under Assumptions 1.5.1 and 6.4.1, uniformly in $1\le i\le n$,
$$\hat e_i^2 = e_i^2 + o_p(1).$$
6.7 Asymptotic Leverage*
Recall the definition of leverage from (4.21),
$$h_{ii} = x_i'\left(X'X\right)^{-1}x_i.$$
These are the diagonal elements of the projection matrix $P$ and appear in the formula for leave-one-out prediction errors and several covariance matrix estimators. We can show that under iid sampling the leverage values are uniformly asymptotically small.

Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of a symmetric square matrix $A$, and note that $\lambda_{\max}(A^{-1}) = \left(\lambda_{\min}(A)\right)^{-1}$.
Since $\frac{1}{n}X'X \xrightarrow{p} Q_{xx} > 0$ then by the CMT, $\lambda_{\min}\left(\frac{1}{n}X'X\right) \xrightarrow{p} \lambda_{\min}(Q_{xx}) > 0$. (The latter is positive since $Q_{xx}$ is positive definite and thus all its eigenvalues are positive.) Then by the Trace Inequality (A.10),
$$
\begin{aligned}
h_{ii} &= x_i'\left(X'X\right)^{-1}x_i
= \operatorname{tr}\left(\left(\frac{1}{n}X'X\right)^{-1}\frac{1}{n}x_i x_i'\right)\\
&\le \lambda_{\max}\left(\left(\frac{1}{n}X'X\right)^{-1}\right)\operatorname{tr}\left(\frac{1}{n}x_i x_i'\right)
= \left(\lambda_{\min}\left(\frac{1}{n}X'X\right)\right)^{-1}\frac{1}{n}\left\|x_i\right\|^2\\
&\le \left(\lambda_{\min}(Q_{xx}) + o_p(1)\right)^{-1}\frac{1}{n}\max_{1\le i\le n}\left\|x_i\right\|^2. \tag{6.26}
\end{aligned}
$$
Theorem 2.12.1 shows that $E\left\|x_i\right\|^2 < \infty$ implies
$$n^{-1/2}\max_{1\le i\le n}\left\|x_i\right\| \xrightarrow{p} 0$$
and thus
$$n^{-1}\max_{1\le i\le n}\left\|x_i\right\|^2 \xrightarrow{p} 0.$$
It follows that (6.26) is $o_p(1)$, uniformly in $i$.
Theorem 6.7.1 Under Assumption 1.5.1 and $E\left\|x_i\right\|^2 < \infty$, uniformly in $1\le i\le n$, $h_{ii} = o_p(1)$.
Theorem 6.7.1 implies that under random sampling with finite variances and large samples, no individual observation should have a large leverage value. Consequently individual observations should not be influential, unless one of these conditions is violated.
6.8 Consistent Covariance Matrix Estimation
In Sections 5.7 and 5.8 we introduced estimators of the finite-sample covariance matrix of the least-squares estimator in the regression model. In this section we show that these estimators are consistent for the asymptotic covariance matrix.

First, consider the covariance matrix estimate constructed under the assumption of homoskedasticity:
$$\widehat V^0 = \left(\frac{1}{n}X'X\right)^{-1}s^2 = \widehat Q_{xx}^{-1}s^2.$$
Since $\widehat Q_{xx}\xrightarrow{p}Q_{xx}$ (Theorem 6.2.1), $s^2\xrightarrow{p}\sigma^2$ (Theorem 6.3.1), and $Q_{xx}$ is invertible (Assumption 3.16.1), it follows that
$$\widehat V^0 = \widehat Q_{xx}^{-1}s^2 \xrightarrow{p} Q_{xx}^{-1}\sigma^2 = V^0_{\beta}$$
so that $\widehat V^0$ is consistent for $V^0_{\beta}$, the homoskedastic asymptotic covariance matrix.
Theorem 6.8.1 Under Assumption 1.5.1 and Assumption 3.16.1, $\widehat V^0 \xrightarrow{p} V^0_{\beta}$ as $n\to\infty$.
Now consider the heteroskedasticity-robust covariance matrix estimators $\widehat V$, $\widetilde V$, and $\overline V$. Writing
$$\widehat\Omega = \frac{1}{n}\sum_{i=1}^{n}x_i x_i'\,\hat e_i^2, \tag{6.27}$$
$$\widetilde\Omega = \frac{1}{n}\sum_{i=1}^{n}(1-h_{ii})^{-2}x_i x_i'\,\hat e_i^2$$
and
$$\overline\Omega = \frac{1}{n}\sum_{i=1}^{n}(1-h_{ii})^{-1}x_i x_i'\,\hat e_i^2$$
as moment estimators for $\Omega = E\left(x_i x_i' e_i^2\right)$, then the covariance matrix estimators are
$$\widehat V = \widehat Q_{xx}^{-1}\widehat\Omega\,\widehat Q_{xx}^{-1},\qquad
\widetilde V = \widehat Q_{xx}^{-1}\widetilde\Omega\,\widehat Q_{xx}^{-1},\qquad
\overline V = \widehat Q_{xx}^{-1}\overline\Omega\,\widehat Q_{xx}^{-1}.$$
We can show that $\widehat\Omega$, $\widetilde\Omega$, and $\overline\Omega$ are consistent for $\Omega$. Combined with the consistency of $\widehat Q_{xx}$ for $Q_{xx}$ and the invertibility of $Q_{xx}$ we find that $\widehat V$, $\widetilde V$, and $\overline V$ converge in probability to $Q_{xx}^{-1}\Omega Q_{xx}^{-1} = V_{\beta}$. The complete proof is given in Section 6.18.
Theorem 6.8.2 Under Assumption 1.5.1 and Assumption 6.4.1, as $n\to\infty$, $\widehat\Omega\xrightarrow{p}\Omega$, $\widetilde\Omega\xrightarrow{p}\Omega$, $\overline\Omega\xrightarrow{p}\Omega$, $\widehat V\xrightarrow{p}V_{\beta}$, $\widetilde V\xrightarrow{p}V_{\beta}$, and $\overline V\xrightarrow{p}V_{\beta}$.
6.9 Functions of Parameters
Sometimes we are interested in a lower-dimensional function of the parameter vector $\beta = (\beta_1, \dots, \beta_k)$. For example, we may be interested in a single coefficient $\beta_j$ or a ratio $\beta_j/\beta_l$. In these cases we can write the parameter of interest as a function of $\beta$. Let $h : \mathbb{R}^k \to \mathbb{R}^q$ denote this function and let
$$\theta = h(\beta)$$
denote the parameter of interest. The estimate of $\theta$ is
$$\widehat\theta = h(\widehat\beta).$$
By the continuous mapping theorem (Theorem 2.9.1) and the fact $\widehat\beta\xrightarrow{p}\beta$ we can deduce that $\widehat\theta$ is consistent for $\theta$.

Theorem 6.9.1 Under Assumption 1.5.1 and Assumption 3.16.1, if $h(\beta)$ is continuous at the true value of $\beta$, then as $n\to\infty$, $\widehat\theta\xrightarrow{p}\theta$.
Furthermore, by the Delta Method (Theorem 2.10.3) we know that $\widehat\theta$ is asymptotically normal.

Theorem 6.9.2 Asymptotic Distribution of Functions of Parameters
Under Assumption 1.5.1 and Assumption 6.4.1, if $h(\beta)$ is continuously differentiable at the true value of $\beta$, then as $n\to\infty$,
$$\sqrt{n}\left(\widehat\theta - \theta\right) \xrightarrow{d} \mathrm{N}\left(0, V_{\theta}\right) \tag{6.28}$$
where
$$V_{\theta} = H_{\beta}'V_{\beta}H_{\beta} \tag{6.29}$$
and
$$H_{\beta} = \frac{\partial}{\partial\beta}h(\beta)'.$$
In many cases, the function $h(\beta)$ is linear:
$$h(\beta) = R'\beta$$
for some $k\times q$ matrix $R$. In this case, $H_{\beta} = R$.

In particular, if $R$ is a "selector matrix"
$$R = \begin{pmatrix} I\\ 0\end{pmatrix} \tag{6.30}$$
so that $\theta = R'\beta = \beta_1$ for $\beta = \left(\beta_1', \beta_2'\right)'$, then
$$V_{\theta} = \begin{pmatrix} I & 0\end{pmatrix}V_{\beta}\begin{pmatrix} I\\ 0\end{pmatrix} = V_{11},$$
where $V_{11}$ is given in (6.19). Under homoskedasticity the covariance matrix (6.19) simplifies to
$$V^0_{11} = Q_{11\cdot2}^{-1}\sigma^2.$$
We have shown that for the case (6.30) of a subset of coefficients, (6.28) is
$$\sqrt{n}\left(\widehat\beta_1 - \beta_1\right) \xrightarrow{d} \mathrm{N}\left(0, V_{11}\right)$$
with $V_{11}$ given in (6.19).
6.10 Asymptotic Standard Errors
How do we estimate the covariance matrix $V_{\theta}$ for $\widehat\theta$? From (6.29) we see we need estimates of $H_{\beta}$ and $V_{\beta}$. We already have an estimate of the latter, $\widehat V$ (or $\widetilde V$ or $\overline V$). To estimate $H_{\beta}$ we use
$$\widehat H_{\beta} = \frac{\partial}{\partial\beta}h(\widehat\beta)'.$$
Putting the parts together we obtain
$$\widehat V_{\theta} = \widehat H_{\beta}'\widehat V\widehat H_{\beta}$$
as the covariance matrix estimator for $\widehat\theta$. As the primary justification for $\widehat V_{\theta}$ is the asymptotic approximation (6.28), $\widehat V_{\theta}$ is often called an asymptotic covariance matrix estimator.

In particular, when $h(\beta)$ is linear, $h(\beta) = R'\beta$, then
$$\widehat V_{\theta} = R'\widehat V R.$$
When $R$ takes the form of a selector matrix as in (6.30) then
$$\widehat V_{\theta} = \widehat V_{11},$$
the upper-left block of the covariance matrix estimate $\widehat V$.

When $q = 1$ (so $h(\beta)$ is real-valued), the standard error for $\hat\theta$ is the square root of $n^{-1}\widehat V_{\theta}$, that is,
$$s(\hat\theta) = n^{-1/2}\sqrt{\widehat V_{\theta}} = n^{-1/2}\sqrt{\widehat H_{\beta}'\widehat V\widehat H_{\beta}}.$$
This is known as an asymptotic standard error for $\hat\theta$.
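For a concrete case, consider the ratio $\theta = \beta_j/\beta_l$. Then $H_{\beta}$ has entries $1/\beta_l$ and $-\beta_j/\beta_l^2$ in positions $j$ and $l$, and the asymptotic standard error follows from the formula above. The Python sketch below illustrates this delta-method computation; the inputs beta_hat and V_hat are assumed to be available from an earlier estimation step.

```python
import numpy as np

def ratio_standard_error(beta_hat, V_hat, n, j, l):
    """Delta-method standard error for theta = beta_j / beta_l.

    beta_hat : estimated coefficient vector
    V_hat    : estimate of the asymptotic covariance matrix V_beta
    """
    k = beta_hat.size
    H = np.zeros(k)
    H[j] = 1.0 / beta_hat[l]
    H[l] = -beta_hat[j] / beta_hat[l] ** 2
    V_theta = H @ V_hat @ H            # H' V H
    return np.sqrt(V_theta / n)        # s(theta_hat) = sqrt(n^{-1} V_theta)
```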
The estimator $\widehat V_{\theta}$ is consistent for $V_{\theta}$ under the conditions of Theorem 6.9.2 since $\widehat V\xrightarrow{p}V_{\beta}$ by Theorem 6.8.2, and
$$\widehat H_{\beta} = \frac{\partial}{\partial\beta}h(\widehat\beta)' \xrightarrow{p} \frac{\partial}{\partial\beta}h(\beta)' = H_{\beta}$$
since $\widehat\beta\xrightarrow{p}\beta$ and the function $\frac{\partial}{\partial\beta}h(\beta)'$ is continuous.
Theorem 6.10.1 Under Assumption 1.5.1 and Assumption 6.4.1, if $h(\beta)$ is continuously differentiable at the true value of $\beta$, then as $n\to\infty$, $\widehat V_{\theta}\xrightarrow{p}V_{\theta}$.
6.11 t statistic
Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}$ be any parameter of interest (for example, $\theta$ could be a single element of $\beta$), $\widehat\theta$ its estimate and $s(\widehat\theta)$ its asymptotic standard error. Consider the statistic
$$t_n(\theta) = \frac{\widehat\theta - \theta}{s(\widehat\theta)}. \tag{6.31}$$
Different writers have called (6.31) a t-statistic, a t-ratio, a z-statistic or a studentized statistic. We won't be making such distinctions and will refer to $t_n(\theta)$ as a t-statistic or a t-ratio. We also often suppress the parameter dependence, writing it as $t_n$. The t-statistic is a simple function of the estimate, its standard error, and the parameter.
Theorem 6.11.1 $t_n(\theta)\xrightarrow{d}\mathrm{N}(0,1)$

Thus the asymptotic distribution of the t-ratio $t_n(\theta)$ is the standard normal. Since this distribution does not depend on the parameters, we say that $t_n(\theta)$ is asymptotically pivotal. In special cases (such as the normal regression model, see Section 4.14), the statistic $t_n$ has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that $t_n$ is exactly pivotal. In general, however, pivotal statistics are unavailable and we must rely on asymptotically pivotal statistics.
6.12 Confidence Intervals

A confidence interval $C_n$ is an interval estimate of $\theta \in \mathbb{R}$. It is a function of the data and hence is random. It is designed to cover $\theta$ with high probability. Either $\theta \in C_n$ or $\theta \notin C_n$. Its coverage probability is $\Pr(\theta \in C_n)$. The convention is to design confidence intervals to have coverage probability approximately equal to a pre-specified target, typically 90% or 95%, or more generally written as $(1-\alpha)\%$ for some $\alpha \in (0,1)$. By reporting a $(1-\alpha)\%$ confidence interval $C_n$, we are stating that the true $\theta$ lies in $C_n$ with $(1-\alpha)\%$ probability across repeated samples.

There is not a unique method to construct confidence intervals. For example, a simple (yet silly) interval is
$$C_n = \begin{cases} \mathbb{R} & \text{with probability } 1-\alpha\\ \{\widehat\theta\} & \text{with probability } \alpha.\end{cases}$$
By construction, if $\widehat\theta$ has a continuous distribution, $\Pr(\theta\in C_n) = 1-\alpha$, so this confidence interval has perfect coverage, but $C_n$ is uninformative about $\theta$. This is not a useful confidence interval.
When we have an asymptotically normal parameter estimate $\widehat\theta$ with standard error $s(\widehat\theta)$, it turns out that a generally reasonable confidence interval for $\theta$ takes the form
$$C_n = \left[\widehat\theta - c\cdot s(\widehat\theta),\ \widehat\theta + c\cdot s(\widehat\theta)\right] \tag{6.32}$$
where $c > 0$ is a pre-specified constant. This confidence interval is symmetric about the point estimate $\widehat\theta$, and its length is proportional to the standard error $s(\widehat\theta)$.
Equivalently, $C_n$ is the set of parameter values for $\theta$ such that the t-statistic $t_n(\theta)$ is smaller (in absolute value) than $c$, that is
$$C_n = \left\{\theta : \left|t_n(\theta)\right| \le c\right\} = \left\{\theta : -c \le \frac{\widehat\theta - \theta}{s(\widehat\theta)} \le c\right\}.$$
The coverage probability of this confidence interval is
$$\Pr\left(\theta\in C_n\right) = \Pr\left(\left|t_n(\theta)\right| \le c\right)$$
which is generally unknown, but we can approximate the coverage probability by taking the asymptotic limit as $n\to\infty$. Since $t_n(\theta)$ is asymptotically standard normal (Theorem 6.11.1), it follows that as $n\to\infty$,
$$\Pr\left(\theta\in C_n\right) \to \Pr\left(\left|Z\right| \le c\right) = \Phi(c) - \Phi(-c)$$
where $Z\sim\mathrm{N}(0,1)$ and $\Phi(u) = \Pr(Z\le u)$ is the standard normal distribution function. We call this the asymptotic coverage probability, and it is a function only of $c$.
As we mentioned before, the convention is to design the confidence interval to have a pre-specified asymptotic coverage probability $1-\alpha$, typically 90% or 95%. This means selecting the constant $c$ so that
$$\Phi(c) - \Phi(-c) = 1-\alpha.$$
Effectively, this makes $c$ a function of $\alpha$, and it can be backed out of a normal distribution table. For example, $\alpha = 0.05$ (a 95% interval) implies $c = 1.96$ and $\alpha = 0.1$ (a 90% interval) implies $c = 1.645$. Rounding 1.96 to 2, we obtain the most commonly used confidence interval in applied econometric practice
$$C_n = \left[\widehat\theta - 2s(\widehat\theta),\ \widehat\theta + 2s(\widehat\theta)\right].$$
This is a useful rule-of-thumb. This asymptotic 95% confidence interval $C_n$ is simple to compute and can be roughly calculated from tables of coefficient estimates and standard errors. (Technically, it is an asymptotic 95.4% interval, due to the substitution of 2.0 for 1.96, but this distinction is meaningless.)

Confidence intervals are a simple yet effective tool to assess estimation uncertainty. When reading a set of empirical results, look at the estimated coefficient estimates and the standard errors. For a parameter of interest, compute the confidence interval $C_n$ and consider the meaning of the spread of the suggested values. If the range of values in the confidence interval is too wide to learn about $\theta$, then do not jump to a conclusion about $\theta$ based on the point estimate alone.
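In code the normal-approximation interval is a one-liner once the estimate and standard error are in hand. A short Python sketch added for illustration (scipy is used only to back out the critical value $c$ from $\alpha$):

```python
from scipy.stats import norm

def confidence_interval(theta_hat, se, alpha=0.05):
    """Asymptotic (1 - alpha) interval [theta_hat - c*se, theta_hat + c*se]."""
    c = norm.ppf(1 - alpha / 2)     # 1.96 for alpha = 0.05, 1.645 for alpha = 0.10
    return theta_hat - c * se, theta_hat + c * se

# Example using the education slope and its White standard error from Section 5.9
print(confidence_interval(0.156, 0.022))
```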
6.13 Regression Intervals
In the linear regression model the conditional mean of $y_i$ given $x_i = x$ is
$$m(x) = E\left(y_i\mid x_i = x\right) = x'\beta.$$
In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta) = x'\beta$ and $\theta = h(\beta)$, we see that $\widehat m(x) = \widehat\theta = x'\widehat\beta$ and $H_{\beta} = x$, so $s(\widehat\theta) = \sqrt{n^{-1}x'\widehat V x}$. Thus an asymptotic 95% confidence interval for $m(x)$ is
$$\left[x'\widehat\beta \pm 2\sqrt{n^{-1}x'\widehat V x}\right].$$
[Figure 6.7: Wage on Education Regression Intervals]
It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.

To illustrate, we return to the log wage regression (4.9) of Section 4.4. The estimated regression equation is
$$\widehat{\log(Wage)} = x'\widehat\beta = 0.626 + 0.156x$$
where $x = Education$. The White covariance matrix estimate is
$$\widehat V = \begin{pmatrix} 7.092 & -0.445\\ -0.445 & 0.029\end{pmatrix}$$
and the sample size is $n = 61$. Thus the 95% confidence interval for the regression takes the form
$$0.626 + 0.156x \pm 2\sqrt{\frac{1}{61}\left(7.092 - 0.89x + 0.029x^2\right)}.$$
The estimated regression and 95% intervals are shown in Figure 6.7. Notice that the confidence bands take a hyperbolic shape. This means that the regression line is less precisely estimated for very large and very small values of education.
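The hyperbolic bands in Figure 6.7 can be traced out by evaluating the interval formula over a grid of education levels. A Python sketch using only the numbers reported above, included here for illustration:

```python
import numpy as np

n = 61
beta_hat = np.array([0.626, 0.156])
V_white = np.array([[7.092, -0.445],
                    [-0.445, 0.029]])

educ = np.arange(0, 21)
Xg = np.column_stack([np.ones_like(educ, dtype=float), educ])
fit = Xg @ beta_hat
se = np.sqrt(np.einsum('ij,jk,ik->i', Xg, V_white, Xg) / n)  # sqrt(n^{-1} x'Vx)
lower, upper = fit - 2 * se, fit + 2 * se
for e, lo, hi in zip(educ[::5], lower[::5], upper[::5]):
    print(e, round(lo, 3), round(hi, 3))   # bands widen at low and high education
```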
Plots of the estimated regression line and confidence intervals are especially useful when the regression includes nonlinear terms. To illustrate, consider the log wage regression (4.10) which includes experience and its square,
$$\widehat{\log(Wage)} = 1.06 + 0.116\,education + 0.010\,experience - 0.014\,experience^2/100 \tag{6.33}$$
and has $n = 2454$ observations. We are interested in plotting the regression estimate and regression intervals as a function of experience. Since the regression also includes education, in order to plot the estimates in a simple graph we need to fix education at a specific value. We select education $= 12$. This only affects the level of the estimated regression, since education enters without an interaction. Define the points of evaluation
$$x = \begin{pmatrix} 1\\ 12\\ x\\ x^2/100\end{pmatrix}$$
[Figure 6.8: Wage on Experience Regression Intervals]
where $x =$ experience. The covariance matrix estimate is
$$\widehat V = \begin{pmatrix}
22.92 & -1.0601 & -0.56687 & 0.86626\\
-1.0601 & 0.06454 & 0.0080737 & -0.0066749\\
-0.56687 & 0.0080737 & 0.040736 & -0.075583\\
0.86626 & -0.0066749 & -0.075583 & 0.14994
\end{pmatrix}.$$
Thus the regression interval for education $= 12$, as a function of $x =$ experience, is
$$
1.06 + 0.116\cdot 12 + 0.010\,x - 0.014\,x^2/100
$$
$$
\pm\, 2\sqrt{\frac{1}{2454}\begin{pmatrix}1 & 12 & x & x^2/100\end{pmatrix}
\begin{pmatrix}
22.92 & -1.0601 & -0.56687 & 0.86626\\
-1.0601 & 0.06454 & 0.0080737 & -0.0066749\\
-0.56687 & 0.0080737 & 0.040736 & -0.075583\\
0.86626 & -0.0066749 & -0.075583 & 0.14994
\end{pmatrix}
\begin{pmatrix}1\\ 12\\ x\\ x^2/100\end{pmatrix}}
$$
$$
= 2.452 + 0.010\,x - 0.00014\,x^2 \pm \frac{2}{100}\sqrt{27.592 - 3.8304\,x + 0.23007\,x^2 - 0.00616\,x^3 + 0.0000611\,x^4}.
$$
The estimated regression and 95% intervals are shown in Figure 6.8. The regression interval widens greatly for small and large values of experience, indicating considerable uncertainty about the effect of experience on mean wages for this population. The confidence bands take a more complicated shape than in Figure 6.7 due to the nonlinear specification.
6.14 Quadratic Forms
Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}^q$ be any parameter vector of interest, $\widehat\theta$ its estimate and $\widehat V_{\theta}$ its covariance matrix estimator. Consider the quadratic form
$$W_n(\theta) = n\left(\widehat\theta - \theta\right)'\widehat V_{\theta}^{-1}\left(\widehat\theta - \theta\right). \tag{6.34}$$
When $q = 1$ then $W_n(\theta) = t_n(\theta)^2$ is the square of the t-ratio. When $q > 1$, $W_n(\theta)$ is typically called a Wald statistic. We are interested in its sampling distribution.
CHAPTER 6. ASYMPTOTIC THEORY FOR LEAST SQUARES 127
The asymptotic distribution of W
n
(0) is simple to derive given Theorem 6.9.2 and Theorem
6.10.1, which show that
n
´
0 ÷0
d
÷÷Z ~ N(0, V
)
and
´
V
p
÷÷V
.
It follows that
W
n
(0) =
n
´
0 ÷0
t
´
V
÷1
n
´
0 ÷0
d
÷÷Z
t
V
÷1
Z (6.35)
a quadratic in the normal random vector Z. Here we can appeal to a useful result from probability
theory. (See Theorem B.9.3 in the Appendix.)
Theorem 6.14.1 If Z ~ N(0, A) with A > 0, q q, then Z
t
A
÷1
Z ~ .
2
q
, a chisquare
random variable with q degrees of freedom.
The asymptotic distribution in (6.35) takes exactly this form. It follows that $W_n(\theta)$ converges in distribution to a chi-square random variable.

Theorem 6.14.2 Under Assumption 1.5.1 and Assumption 6.4.1, if $h(\beta)$ is continuously differentiable at the true value of $\beta$, then as $n\to\infty$, $W_n(\theta)\xrightarrow{d}\chi^2_q$.
6.15 Confidence Regions

A confidence region $C_n$ is a generalization of a confidence interval to the case $\theta\in\mathbb{R}^q$ with $q > 1$. A confidence region $C_n$ is a set in $\mathbb{R}^q$ intended to cover the true parameter value with a pre-selected probability $1-\alpha$. Thus an ideal confidence region has the coverage probability $\Pr(\theta\in C_n) = 1-\alpha$. In practice it is typically not possible to construct a region with exact coverage, but we can calculate its asymptotic coverage.

When the parameter estimate satisfies the conditions of Theorem 6.14.2, a good choice for a confidence region is the ellipse
$$C_n = \left\{\theta : W_n(\theta) \le c_{1-\alpha}\right\}$$
with $c_{1-\alpha}$ the $1-\alpha$'th quantile of the $\chi^2_q$ distribution. (Thus $F_q(c_{1-\alpha}) = 1-\alpha$.) These quantiles can be found from a critical value table for the $\chi^2_q$ distribution.
Theorem 6.14.2 implies
$$\Pr\left(\theta\in C_n\right) \to \Pr\left(\chi^2_q \le c_{1-\alpha}\right) = 1-\alpha$$
which shows that $C_n$ has asymptotic coverage $(1-\alpha)\%$.
To illustrate the construction of a confidence region, consider the estimated regression (6.33) of the model
$$\widehat{\log(Wage)} = \alpha + \beta_1\,education + \beta_2\,experience + \beta_3\,experience^2/100.$$
Suppose that the two parameters of interest are the percentage return to education $\theta_1 = 100\beta_1$ and the percentage return to experience for individuals with 10 years experience $\theta_2 = 100\beta_2 + 20\beta_3$. (We need to condition on the level of experience since the regression is quadratic in experience.) These two parameters are a linear transformation of the regression parameters with point estimates
$$\widehat\theta = \begin{pmatrix} 0 & 100 & 0 & 0\\ 0 & 0 & 100 & 20\end{pmatrix}\widehat\beta = \begin{pmatrix} 11.6\\ 0.72\end{pmatrix},$$
and have the covariance matrix estimate
$$\widehat V_{\theta} = \begin{pmatrix} 0 & 100 & 0 & 0\\ 0 & 0 & 100 & 20\end{pmatrix}\widehat V\begin{pmatrix} 0 & 0\\ 100 & 0\\ 0 & 100\\ 0 & 20\end{pmatrix} = \begin{pmatrix} 645.4 & 67.387\\ 67.387 & 165\end{pmatrix}$$
with inverse
$$\widehat V_{\theta}^{-1} = \begin{pmatrix} 0.0016184 & -0.00066098\\ -0.00066098 & 0.0063306\end{pmatrix}.$$
Thus the Wald statistic is
$$
\begin{aligned}
W_n(\theta) &= n\left(\widehat\theta - \theta\right)'\widehat V_{\theta}^{-1}\left(\widehat\theta - \theta\right)\\
&= 2454\begin{pmatrix} 11.6 - \theta_1\\ 0.72 - \theta_2\end{pmatrix}'\begin{pmatrix} 0.0016184 & -0.00066098\\ -0.00066098 & 0.0063306\end{pmatrix}\begin{pmatrix} 11.6 - \theta_1\\ 0.72 - \theta_2\end{pmatrix}\\
&= 3.97\left(11.6 - \theta_1\right)^2 - 3.2441\left(11.6 - \theta_1\right)\left(0.72 - \theta_2\right) + 15.535\left(0.72 - \theta_2\right)^2.
\end{aligned}
$$
The 90% quantile of the $\chi^2_2$ distribution is 4.605 (we use the $\chi^2_2$ distribution as the dimension of $\theta$ is two), so an asymptotic 90% confidence region for the two parameters is the interior of the ellipse
$$3.97\left(11.6 - \theta_1\right)^2 - 3.2441\left(11.6 - \theta_1\right)\left(0.72 - \theta_2\right) + 15.535\left(0.72 - \theta_2\right)^2 = 4.605$$
which is displayed in Figure 6.9. Since the estimated correlation of the two coefficient estimates is small (about 0.2) the ellipse is close to circular.
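The region can be evaluated directly from the reported point estimates and covariance matrix: a point $(\theta_1, \theta_2)$ lies inside the 90% region exactly when its Wald statistic is below the $\chi^2_2$ critical value 4.605. A Python sketch with the numbers from this example, added purely for illustration:

```python
import numpy as np

n = 2454
theta_hat = np.array([11.6, 0.72])
V_theta = np.array([[645.4, 67.387],
                    [67.387, 165.0]])
V_inv = np.linalg.inv(V_theta)

def wald(theta1, theta2):
    d = theta_hat - np.array([theta1, theta2])
    return n * d @ V_inv @ d

print(wald(11.6, 0.72))            # 0.0: the point estimate is at the centre
print(wald(10.0, 0.5))             # Wald statistic at a candidate point
print(wald(10.0, 0.5) <= 4.605)    # is (10.0, 0.5) inside the 90% region?
```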
6.16 Semiparametric Efficiency in the Projection Model

In Section 5.4 we presented the Gauss-Markov theorem, which stated that in the homoskedastic CEF model, in the class of linear unbiased estimators the one with the smallest variance is least-squares. As we noted in that section, the restriction to linear unbiased estimators is unsatisfactory as it leaves open the possibility that an alternative (non-linear) estimator could have a smaller asymptotic variance. In addition, the restriction to the homoskedastic CEF model is also unsatisfactory as the projection model is more relevant for empirical application. The question remains: what is the most efficient estimator of the projection coefficient $\beta$ (or functions $\theta = h(\beta)$) in the projection model?

It turns out that it is straightforward to show that the projection model falls in the estimator class considered in Proposition 2.13.2. It follows that the least-squares estimator is semiparametrically efficient in the sense that it has the smallest asymptotic variance in the class of semiparametric estimators of $\beta$. This is a more powerful and interesting result than the Gauss-Markov theorem.
[Figure 6.9: Confidence Region for Return to Experience and Return to Education]
To see this, it is worth rephrasing Proposition 2.13.2 with amended notation. Suppose that a parameter of interest is $\theta = g(\mu)$ where $\mu = Ez_i$, for which the moment estimators are $\widehat\mu = \frac{1}{n}\sum_{i=1}^{n}z_i$ and $\widehat\theta = g(\widehat\mu)$. Let $\mathcal{L}_2(g) = \left\{F : E\left\|z\right\|^2 < \infty,\ g(u)\text{ is continuously differentiable at }u = Ez\right\}$ be the set of distributions for which $\widehat\theta$ satisfies the central limit theorem.
Proposition 6.16.1 In the class of distributions $F\in\mathcal{L}_2(g)$, $\widehat\theta$ is semiparametrically efficient for $\theta$ in the sense that its asymptotic variance equals the semiparametric efficiency bound.

Proposition 6.16.1 says that under the minimal conditions in which $\widehat\theta$ is asymptotically normal, no semiparametric estimator can have a smaller asymptotic variance than $\widehat\theta$.
To show that an estimator is semiparametrically efficient it is sufficient to show that it falls in the class covered by this Proposition. To show that the projection model falls in this class, we write $\beta = Q_{xx}^{-1}Q_{xy} = g(\mu)$ where $\mu = Ez_i$ and $z_i = \left(x_i x_i', x_i y_i\right)$. The class $\mathcal{L}_2(g)$ equals the class of distributions
$$\mathcal{L}_4(\beta) = \left\{F : Ey^4 < \infty,\ E\left\|x\right\|^4 < \infty,\ E\left(x_i x_i'\right) > 0\right\}.$$
Proposition 6.16.2 In the class of distributions $F\in\mathcal{L}_4(\beta)$, the least-squares estimator $\widehat\beta$ is semiparametrically efficient for $\beta$.

The least-squares estimator is an asymptotically efficient estimator of the projection coefficient because the latter is a smooth function of sample moments and the model implies no further restrictions. However, if the class of permissible distributions is restricted to a strict subset of $\mathcal{L}_4(\beta)$ then least-squares can be inefficient. For example, the linear CEF model with heteroskedastic errors is a strict subset of $\mathcal{L}_4(\beta)$, and the GLS estimator has a smaller asymptotic variance than OLS. In this case, the knowledge that the true conditional mean is linear allows for more efficient estimation of the unknown parameter.
From Proposition 6.16.1 we can also deduce that plug-in estimators $\widehat\theta = h(\widehat\beta)$ are semiparametrically efficient estimators of $\theta = h(\beta)$ when $h$ is continuously differentiable. We can also deduce that other parameter estimators are semiparametrically efficient, such as $\hat\sigma^2$ for $\sigma^2$. To see this, note that we can write
$$
\sigma^2 = E\left(y_i - x_i'\beta\right)^2
= Ey_i^2 - 2E\left(y_i x_i'\right)\beta + \beta'E\left(x_i x_i'\right)\beta
= Q_{yy} - Q_{yx}Q_{xx}^{-1}Q_{xy}
$$
which is a smooth function of the moments $Q_{yy}$, $Q_{yx}$ and $Q_{xx}$. Similarly the estimator $\hat\sigma^2$ equals
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\hat e_i^2 = \widehat Q_{yy} - \widehat Q_{yx}\widehat Q_{xx}^{-1}\widehat Q_{xy}.$$
Since the variables $y_i^2$, $y_i x_i'$ and $x_i x_i'$ all have finite variances when $F\in\mathcal{L}_4(\beta)$, the conditions of Proposition 6.16.1 are satisfied. We conclude:

Proposition 6.16.3 In the class of distributions $F\in\mathcal{L}_4(\beta)$, $\hat\sigma^2$ is semiparametrically efficient for $\sigma^2$.
6.17 Semiparametric Efficiency in the Homoskedastic Regression Model*
In Section 6.16 we showed that the OLS estimator is semiparametrically efficient in the projection model. What if we restrict attention to the classical homoskedastic regression model? Is OLS still efficient in this class? In this section we derive the asymptotic semiparametric efficiency bound for this model, and show that it is the same as that obtained by the OLS estimator. Therefore it turns out that least-squares is efficient in this class as well.
Recall that in the homoskedastic regression model the asymptotic variance of the OLS estimator $\widehat\beta$ for $\beta$ is $V^0_{\beta} = Q_{xx}^{-1}\sigma^2$. Therefore, as described in Section 2.13, it is sufficient to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V^0_{\beta}$. This would establish that $V^0_{\beta}$ is the semiparametric variance bound and the OLS estimator $\widehat\beta$ is semiparametrically efficient for $\beta$.
Let the joint density of $y$ and $x$ be written as $f(y, x) = f_1(y\mid x)f_2(x)$, the product of the conditional density of $y$ given $x$ and the marginal density of $x$. Now consider the parametric submodel
$$f(y, x\mid\theta) = f_1(y\mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)f_2(x). \tag{6.36}$$
You can check that in this submodel the marginal density of $x$ is $f_2(x)$ and the conditional density of $y$ given $x$ is $f_1(y\mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)$. To see that the latter is a valid conditional density, observe that the regression assumption implies that $\int y f_1(y\mid x)\,dy = x'\beta$ and therefore
$$
\int f_1(y\mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)dy
= \int f_1(y\mid x)\,dy + \int f_1(y\mid x)\left(y - x'\beta\right)dy\,\left(x'\theta\right)/\sigma^2
= 1.
$$
In this parametric submodel the conditional mean of $y$ given $x$ is
$$
\begin{aligned}
E_{\theta}(y\mid x) &= \int y f_1(y\mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)dy\\
&= \int y f_1(y\mid x)\,dy + \int y f_1(y\mid x)\left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\,dy\\
&= \int y f_1(y\mid x)\,dy + \int\left(y - x'\beta\right)^2 f_1(y\mid x)\left(x'\theta\right)/\sigma^2\,dy
+ \int\left(y - x'\beta\right)f_1(y\mid x)\,dy\,\left(x'\beta\right)\left(x'\theta\right)/\sigma^2\\
&= x'\left(\beta + \theta\right),
\end{aligned}
$$
using the homoskedasticity assumption $\int\left(y - x'\beta\right)^2 f_1(y\mid x)\,dy = \sigma^2$. This means that in this parametric submodel, the conditional mean is linear in $x$ and the regression coefficient is $\beta(\theta) = \beta + \theta$.
We now calculate the score for estimation of $\theta$. Since
$$\frac{\partial}{\partial\theta}\log f(y, x\mid\theta) = \frac{\partial}{\partial\theta}\log\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right) = \frac{x\left(y - x'\beta\right)/\sigma^2}{1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2},$$
the score is
$$s = \frac{\partial}{\partial\theta}\log f(y, x\mid\theta_0) = xe/\sigma^2.$$
The Cramer-Rao bound for estimation of $\theta$ (and therefore $\beta(\theta)$ as well) is
$$\left(E\left(ss'\right)\right)^{-1} = \sigma^{-4}\left(E\left((xe)(xe)'\right)\right)^{-1} = \sigma^2 Q_{xx}^{-1} = V^0_{\beta}.$$
We have shown that there is a parametric submodel (6.36) whose Cramer-Rao bound for estimation of $\beta$ is identical to the asymptotic variance of the least-squares estimator, which therefore is the semiparametric variance bound.
Theorem 6.17.1 In the homoskedastic regression model, the semiparametric variance bound for estimation of $\beta$ is $V^0_{\beta} = \sigma^2 Q_{xx}^{-1}$ and the OLS estimator is semiparametrically efficient.
This result is similar to the Gauss-Markov theorem, in that it asserts the efficiency of the least-squares estimator in the context of the homoskedastic regression model. The difference is that the Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear estimators, while Theorem 6.17.1 states that OLS has the smallest asymptotic variance among all regular estimators. This is a much more powerful statement.
6.18 Technical Proofs*
Proof of Theorem 6.3.1. Note that
$$
\hat e_i = y_i - x_i'\widehat\beta = e_i + x_i'\beta - x_i'\widehat\beta = e_i - x_i'\left(\widehat\beta - \beta\right).
$$
Thus
$$\hat e_i^2 = e_i^2 - 2e_i x_i'\left(\widehat\beta - \beta\right) + \left(\widehat\beta - \beta\right)'x_i x_i'\left(\widehat\beta - \beta\right) \tag{6.37}$$
and
$$
\begin{aligned}
\hat\sigma^2 &= \frac{1}{n}\sum_{i=1}^{n}\hat e_i^2\\
&= \frac{1}{n}\sum_{i=1}^{n}e_i^2 - 2\left(\frac{1}{n}\sum_{i=1}^{n}e_i x_i'\right)\left(\widehat\beta - \beta\right) + \left(\widehat\beta - \beta\right)'\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)\left(\widehat\beta - \beta\right)\\
&\xrightarrow{p} \sigma^2
\end{aligned}
$$
as $n\to\infty$, the last line using the WLLN and Theorem 6.2.1. Thus $\hat\sigma^2$ is consistent for $\sigma^2$.
Finally, since $n/(n-k)\to 1$ as $n\to\infty$, it follows that as $n\to\infty$,
$$s^2 = \frac{n}{n-k}\hat\sigma^2 \xrightarrow{p} \sigma^2.$$
Proof of Theorem 6.8.2. We ﬁrst show
´
p
÷÷. Note that
´
=
1
n
n
¸
i=1
x
i
x
t
i
ˆ e
2
i
=
1
n
n
¸
i=1
x
i
x
t
i
e
2
i
+
1
n
n
¸
i=1
x
i
x
t
i
ˆ e
2
i
÷e
2
i
. (6.38)
We now examine each k k sum on the righthandside of (6.38) in turn.
Take the ﬁrst term on the righthandside of (6.38). Since
x
i
x
t
i
e
2
i
= x
i

2
e
2
i
, then by the
CauchySchwarz Inequality (B.20) and Assumption 6.4.1,
E
x
i
x
t
i
e
2
i
= E
x
i

2
e
2
i
_
E
x
i

4
E
e
4
i
1/2
< ·.
Since this expectation is ﬁnite, we can apply the WLLN (Theorem 2.7.2) to ﬁnd that
1
n
n
¸
i=1
x
i
x
t
i
e
2
i
p
÷÷E
x
i
x
t
i
e
2
i
= .
Now take the second term on the righthandside of (6.38). By the Triangle Inequality (A.9),
the fact that Ex
i

2
< · and Theorem 6.6.2,
1
n
n
¸
i=1
x
i
x
t
i
ˆ e
2
i
÷e
2
i
_
1
n
n
¸
i=1
x
i
x
t
i
ˆ e
2
i
÷e
2
i
=
1
n
n
¸
i=1
x
i

2
ˆ e
2
i
÷e
2
i
_
1
n
n
¸
i=1
x
i

2
max
1<i<n
ˆ e
2
i
÷e
2
i
= O
p
(1)o
p
(1)
= o
p
(1)
Together, we have established that
´
p
÷÷ as claimed.
Combined with (6.1) and the invertibilility of Q
xx
,
´
V
=
´
Q
÷1
xx
´
´
Q
÷1
xx
p
÷÷Q
÷1
xx
Q
÷1
xx
= V
,
CHAPTER 6. ASYMPTOTIC THEORY FOR LEAST SQUARES 133
from which it follows that
´
V
p
÷÷V
as n ÷·.
We also need to show that
¯
p
÷÷ and
p
÷÷ , from which it will follow that
¯
V
p
÷÷V
and V
p
÷÷V
. Since
´
_ _
¯
it is sucient to show that
¯
p
÷÷. Notice that
¯
÷
´
=
1
n
n
¸
i=1
x
i
x
t
i
(1 ÷h
ii
)
÷2
÷1
ˆ e
2
i
=
1
n
n
¸
i=1
x
i
x
t
i
(1 ÷h
ii
)
÷2
÷1
e
2
i
+
1
n
n
¸
i=1
x
i
x
t
i
(1 ÷h
ii
)
÷2
÷1
ˆ e
2
i
÷e
2
i
.
Note that Theorem 6.7.1 states max
1<i<n
h
ii
= o
p
(1), and thus by the CMT
max
1<i<n
(1 ÷h
ii
)
÷2
÷1
= o
p
(1).
Thus
¯
÷
´
_
1
n
n
¸
i=1
x
i
x
t
i
(1 ÷h
ii
)
÷2
÷1
e
2
i
+
1
n
n
¸
i=1
x
i
x
t
i
(1 ÷h
ii
)
÷2
÷1
ˆ e
2
i
÷e
2
i
_
2
n
n
¸
i=1
x
i
x
t
i
e
2
i
max
1<i<n
(1 ÷h
ii
)
÷2
÷1
+
1
n
n
¸
i=1
x
i
x
t
i
max
1<i<n
(1 ÷h
ii
)
÷2
÷1
max
1<i<n
ˆ e
2
i
÷e
2
i
= O
p
(1)o
p
(1) +O
p
(1)o
p
(1)o
p
(1)
= o
p
(1)
Since
´
p
÷÷ it follows that
¯
p
÷÷ and
p
÷÷.
Proof of Theorem 6.11.1. By Theorem 6.9.2,
n
´
0 ÷0
d
÷÷N(0, V
0
) and
´
V
ˆ
0
p
÷÷V
0
. Thus
t
n
(0) =
´
0 ÷0
s(
´
0)
=
n
´
0 ÷0
´
V
0
d
÷÷
N(0, V
0
)
V
0
= N(0, 1)
The last equality is by the property that linear scales of normal distributions are normal.
CHAPTER 6. ASYMPTOTIC THEORY FOR LEAST SQUARES 134
Exercises
Exercise 6.1 Take the model y_i = x_{1i}'β₁ + x_{2i}'β₂ + e_i with E(x_i e_i) = 0. Suppose that β₁ is estimated by regressing y_i on x_{1i} only. Find the probability limit of this estimator. In general, is it consistent for β₁? If not, under what conditions is this estimator consistent for β₁?

Exercise 6.2 Let y be n × 1, X be n × k (rank k). y = Xβ + e with E(x_i e_i) = 0. Define the ridge regression estimator

    β̂ = (Σ_{i=1}^n x_i x_i' + λ I_k)^{-1} (Σ_{i=1}^n x_i y_i)    (6.39)

where λ > 0 is a fixed constant. Find the probability limit of β̂ as n → ∞. Is β̂ consistent for β?

Exercise 6.3 For the ridge regression estimator (6.39), set λ = cn where c > 0 is fixed as n → ∞. Find the probability limit of β̂ as n → ∞.

Exercise 6.4 Verify some of the calculations reported in Section 6.5. Specifically, suppose that x_{1i} and x_{2i} only take the values {−1, +1}, symmetrically, with

    Pr(x_{1i} = x_{2i} = 1) = Pr(x_{1i} = x_{2i} = −1) = 3/8
    Pr(x_{1i} = 1, x_{2i} = −1) = Pr(x_{1i} = −1, x_{2i} = 1) = 1/8
    E(e_i² | x_{1i} = x_{2i}) = 5/4
    E(e_i² | x_{1i} ≠ x_{2i}) = 1/4.

Verify the following:

1. E x_{1i} = 0
2. E x_{1i}² = 1
3. E x_{1i} x_{2i} = 1/2
4. E e_i² = 1
5. E x_{1i}² e_i² = 1
6. E x_{1i} x_{2i} e_i² = 7/8.

Exercise 6.5 Show (6.18)-(6.21).

Exercise 6.6 The model is

    y_i = x_i'β + e_i
    E(x_i e_i) = 0
    Ω = E(x_i x_i' e_i²).

Find the method of moments estimators (β̂, Ω̂) for (β, Ω).

(a) In this model, are (β̂, Ω̂) efficient estimators of (β, Ω)?

(b) If so, in what sense are they efficient?

Exercise 6.7 Of the variables (y*_i, y_i, x_i) only the pair (y_i, x_i) are observed. In this case, we say that y*_i is a latent variable. Suppose

    y*_i = x_i'β + e_i
    E(x_i e_i) = 0
    y_i = y*_i + u_i

where u_i is a measurement error satisfying

    E(x_i u_i) = 0
    E(y*_i u_i) = 0.

Let β̂ denote the OLS coefficient from the regression of y_i on x_i.

(a) Is β the coefficient from the linear projection of y_i on x_i?

(b) Is β̂ consistent for β as n → ∞?

(c) Find the asymptotic distribution of √n(β̂ − β) as n → ∞.

Exercise 6.8 Find the asymptotic distribution of √n(σ̂² − σ²) as n → ∞.

Exercise 6.9 The model is

    y_i = x_i β + e_i
    E(e_i | x_i) = 0

where x_i ∈ R. Consider the two estimators

    β̂ = (Σ_{i=1}^n x_i y_i)/(Σ_{i=1}^n x_i²)
    β̄ = (1/n) Σ_{i=1}^n y_i/x_i.

(a) Under the stated assumptions, are both estimators consistent for β?

(b) Are there conditions under which either estimator is efficient?
Chapter 7
Restricted Estimation
7.1 Introduction
In the linear projection model

    y_i = x_i'β + e_i
    E(x_i e_i) = 0

a common task is to impose a constraint on the coefficient vector β. For example, partitioning x_i' = (x_{1i}', x_{2i}') and β' = (β₁', β₂'), a typical constraint is an exclusion restriction of the form β₂ = 0. In this case the constrained model is

    y_i = x_{1i}'β₁ + e_i
    E(x_i e_i) = 0.

At first glance this appears the same as the linear projection model, but there is one important difference: the error e_i is uncorrelated with the entire regressor vector x_i' = (x_{1i}', x_{2i}'), not just the included regressors x_{1i}.

In general, a set of q linear constraints on β takes the form

    R'β = c    (7.1)

where R is k × q, rank(R) = q < k, and c is q × 1. The assumption that R is full rank means that the constraints are linearly independent (there are no redundant or contradictory constraints). The constraint β₂ = 0 discussed above is a special case of the constraint (7.1) with

    R = ( 0 )
        ( I ),    (7.2)

a selector matrix, and c = 0.

Another common restriction is that a set of coefficients sum to a known constant, e.g. β₁ + β₂ = 1. This constraint arises in a constant-return-to-scale production function. Other common restrictions include the equality of coefficients β₁ = β₂, and equal and offsetting coefficients β₁ = −β₂.

A typical reason to impose a constraint is that we believe (or have information) that the constraint is true. By imposing the constraint we hope to improve estimation efficiency. The goal is to obtain consistent estimates with reduced variance relative to the unconstrained estimator.

The questions then arise: How should we estimate the coefficient vector β imposing the linear restriction (7.1)? If we impose such constraints, what is the sampling distribution of the resulting estimator? How should we calculate standard errors? These are the questions explored in this chapter.
7.2 Constrained Least Squares
An intuitively appealing method to estimate a constrained linear projection is to minimize the least-squares criterion subject to the constraint R'β = c. This estimator is

    β̃ = argmin_{R'β=c} SSE_n(β)    (7.3)

where

    SSE_n(β) = Σ_{i=1}^n (y_i − x_i'β)² = y'y − 2y'Xβ + β'X'Xβ.

The estimator β̃ minimizes the sum of squared errors over all β such that the restriction (7.1) holds. We call β̃ the constrained least-squares (CLS) estimator. We follow the convention of using a tilde "~" rather than a hat "^" to indicate that β̃ is a restricted estimator in contrast to the unrestricted least-squares estimator β̂, and write it as β̃_cls when we want to be clear that the estimation method is CLS.

One method to find the solution to (7.3) uses the technique of Lagrange multipliers. The problem (7.3) is equivalent to the minimization of the Lagrangian

    L(β, λ) = (1/2) SSE_n(β) + λ'(R'β − c)    (7.4)

over (β, λ), where λ is a q × 1 vector of Lagrange multipliers. The first-order conditions for minimization of (7.4) are

    (∂/∂β) L(β̃, λ̃) = −X'y + X'Xβ̃ + Rλ̃ = 0    (7.5)

and

    (∂/∂λ) L(β̃, λ̃) = R'β̃ − c = 0.    (7.6)

Premultiplying (7.5) by R'(X'X)^{-1} we obtain

    −R'β̂ + R'β̃ + R'(X'X)^{-1}Rλ̃ = 0    (7.7)

where β̂ = (X'X)^{-1}X'y is the unrestricted least-squares estimator. Imposing R'β̃ − c = 0 from (7.6) and solving for λ̃ we find

    λ̃ = (R'(X'X)^{-1}R)^{-1}(R'β̂ − c).

Substituting this expression into (7.5) and solving for β̃ we find the solution to the constrained minimization problem (7.3):

    β̃ = β̂ − (X'X)^{-1}R (R'(X'X)^{-1}R)^{-1}(R'β̂ − c).    (7.8)

This is a general formula for the CLS estimator. It also can be written as

    β̃ = β̂ − Q̂_xx^{-1}R (R'Q̂_xx^{-1}R)^{-1}(R'β̂ − c).
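
Formula (7.8) is straightforward to evaluate once the unrestricted OLS quantities are available. The following is a minimal numerical sketch (not part of the manuscript): the simulated design, variable names, and the use of numpy are illustrative assumptions only.

```python
import numpy as np

# Illustration of the CLS formula (7.8) on an assumed simulated design
rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, 0.5, 0.0])          # third coefficient is truly zero
y = X @ beta_true + rng.normal(size=n)

XX_inv = np.linalg.inv(X.T @ X)
beta_ols = XX_inv @ (X.T @ y)                   # unrestricted least-squares

# Constraint R'beta = c with R a selector picking the third coefficient, c = 0
R = np.array([[0.0], [0.0], [1.0]])
c = np.array([0.0])

# Constrained least-squares, equation (7.8)
A = XX_inv @ R @ np.linalg.inv(R.T @ XX_inv @ R)
beta_cls = beta_ols - A @ (R.T @ beta_ols - c)

print(beta_ols, beta_cls)   # beta_cls has an exact zero in the constrained slot
```

As the next section shows analytically, this β̃ coincides with OLS of y on the first two columns of X alone, which is a useful numerical check on any implementation.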
7.3 Exclusion Restriction

While (7.8) is a general formula for the CLS estimator, in most cases the estimator can be found by applying least-squares to a reparameterized equation. To illustrate, let us return to the first example presented at the beginning of the chapter — a simple exclusion restriction. Recall the unconstrained model is

    y_i = x_{1i}'β₁ + x_{2i}'β₂ + e_i,    (7.9)

the exclusion restriction is β₂ = 0, and the constrained equation is

    y_i = x_{1i}'β₁ + e_i.    (7.10)

In this setting the CLS estimator is OLS of y_i on x_{1i}. (See Exercise 7.1.) We can write this as

    β̃₁ = (Σ_{i=1}^n x_{1i}x_{1i}')^{-1}(Σ_{i=1}^n x_{1i}y_i).

The CLS estimator of the entire vector β' = (β₁', β₂') is

    β̃ = ( β̃₁ )
         ( 0  ).    (7.11)

It is not immediately obvious, but (7.8) and (7.11) are algebraically (and numerically) equivalent. To see this, the first component of (7.8) with (7.2) is

    β̃₁ = [ I  0 ](β̂ − Q̂_xx^{-1}[ 0  I ]'([ 0  I ]Q̂_xx^{-1}[ 0  I ]')^{-1}[ 0  I ]β̂).

Using (4.28) this equals

    β̃₁ = β̂₁ + Q̂_{11·2}^{-1}Q̂_{12}Q̂_{22}^{-1}Q̂_{22·1}β̂₂
        = Q̂_{11·2}^{-1}(Q̂_{1y} − Q̂_{12}Q̂_{22}^{-1}Q̂_{2y}) + Q̂_{11·2}^{-1}Q̂_{12}Q̂_{22}^{-1}Q̂_{22·1}Q̂_{22·1}^{-1}(Q̂_{2y} − Q̂_{21}Q̂_{11}^{-1}Q̂_{1y})
        = Q̂_{11·2}^{-1}(Q̂_{1y} − Q̂_{12}Q̂_{22}^{-1}Q̂_{21}Q̂_{11}^{-1}Q̂_{1y})
        = Q̂_{11·2}^{-1}(Q̂_{11} − Q̂_{12}Q̂_{22}^{-1}Q̂_{21})Q̂_{11}^{-1}Q̂_{1y}
        = Q̂_{11}^{-1}Q̂_{1y},

which is (7.11) as originally claimed.
7.4 Minimum Distance
The CLS estimator is a special case of a more general class of constrained estimators. To see this, rewrite the least-squares criterion as follows. Let β̂ be the unconstrained least-squares estimator, and write the unconstrained least-squares fitted equation as y_i = x_i'β̂ + ê_i. Substitute this equation into SSE_n(β) to obtain

    SSE_n(β) = Σ_{i=1}^n (y_i − x_i'β)²
             = Σ_{i=1}^n (x_i'β̂ + ê_i − x_i'β)²
             = Σ_{i=1}^n ê_i² + (β̂ − β)'(Σ_{i=1}^n x_i x_i')(β̂ − β)
             = nσ̂² + n(β̂ − β)'Q̂_xx(β̂ − β),    (7.12)

where the third equality uses the fact that Σ_{i=1}^n x_i ê_i = 0. Since the first term on the last line does not depend on β it follows that the CLS estimator minimizes the quadratic on the right side of (7.12). This is a (squared) weighted Euclidean distance between β̂ and β. It is a special case of the general weighted distance

    J_n(β, W_n) = n(β̂ − β)'W_n^{-1}(β̂ − β)

for W_n > 0 a k × k positive definite weight matrix. In summary, we have found that the CLS estimator can be written as

    β̃ = argmin_{R'β=c} J_n(β, Q̂_xx^{-1}).

More generally, a minimum distance estimator for β is

    β̃_md(W_n) = argmin_{R'β=c} J_n(β, W_n)    (7.13)

where W_n > 0. We have written the estimator as β̃_md(W_n) as it depends upon the weight matrix W_n.

An obvious question is which weight matrix W_n is appropriate. We will address this question after we derive the asymptotic distribution for a general weight matrix.
7.5 Computation

A general method to solve the algebraic problem (7.13) is by the method of Lagrange multipliers. The Lagrangian is

    L(β, λ) = (1/2) J_n(β, W_n) + λ'(R'β − c)

which is minimized over (β, λ). The solution is

    β̃_md(W_n) = β̂ − W_n R (R'W_n R)^{-1}(R'β̂ − c).    (7.14)

(See Exercise 7.5.)

If we set W_n = Q̂_xx^{-1} then (7.14) specializes to the CLS estimator:

    β̃_md(Q̂_xx^{-1}) = β̃_cls.

In this sense the minimum distance estimator generalizes constrained least-squares.
7.6 Asymptotic Distribution

We first show that the class of minimum distance estimators are consistent for the population parameters when the constraints are valid.

Assumption 7.6.1 R'β = c where R is k × q with rank(R) = q.

Theorem 7.6.1 Consistency
Under Assumption 1.5.1, Assumption 3.16.1, Assumption 7.6.1, and W_n →p W > 0, β̃_md(W_n) →p β as n → ∞.

Theorem 7.6.1 shows that consistency holds for any weight matrix, so the result includes the CLS estimator.

Similarly, the constrained estimators are asymptotically normally distributed.

Theorem 7.6.2 Asymptotic Normality
Under Assumption 1.5.1, Assumption 6.4.1, Assumption 7.6.1, and W_n →p W > 0,

    √n(β̃_md(W_n) − β) →d N(0, V_β(W))    (7.15)

as n → ∞, where

    V_β(W) = V_β − WR(R'WR)^{-1}R'V_β − V_βR(R'WR)^{-1}R'W
             + WR(R'WR)^{-1}R'V_βR(R'WR)^{-1}R'W    (7.16)

and

    V_β = Q_xx^{-1} Ω Q_xx^{-1}.

Theorem 7.6.2 shows that the minimum distance estimator is asymptotically normal for all positive definite weight matrices. The asymptotic variance depends on W. The theorem includes the CLS estimator as a special case by setting W = Q_xx^{-1}.

Theorem 7.6.3 Asymptotic Distribution of CLS Estimator
Under Assumption 1.5.1, Assumption 6.4.1, and Assumption 7.6.1, as n → ∞

    √n(β̃_cls − β) →d N(0, V_cls)

where

    V_cls = V_β − Q_xx^{-1}R(R'Q_xx^{-1}R)^{-1}R'V_β − V_βR(R'Q_xx^{-1}R)^{-1}R'Q_xx^{-1}
            + Q_xx^{-1}R(R'Q_xx^{-1}R)^{-1}R'V_βR(R'Q_xx^{-1}R)^{-1}R'Q_xx^{-1}.
7.7 Efficient Minimum Distance Estimator

Theorem 7.6.2 shows that the minimum distance estimators, which include CLS as a special case, are asymptotically normal with an asymptotic covariance matrix which depends on the weight matrix W. The asymptotically optimal weight matrix is the one which minimizes the asymptotic variance V_β(W). This turns out to be W = V_β, as shown in Theorem 7.7.1 below. Since V_β is unknown this weight matrix cannot be used for a feasible estimator, but we can replace V_β with a consistent estimate V̂_β and the asymptotic distribution (and efficiency) are unchanged. We call the minimum distance estimator setting W_n = V̂_β the efficient minimum distance estimator. It takes the form

    β̃ = β̂ − V̂_βR(R'V̂_βR)^{-1}(R'β̂ − c).    (7.17)

This estimator has the smallest asymptotic variance in the class of minimum distance estimators. The asymptotic distribution of (7.17) can be deduced from Theorem 7.6.2.

Theorem 7.7.1 Efficient Minimum Distance Estimator
Under Assumption 1.5.1, Assumption 6.4.1, and Assumption 7.6.1, for β̃ defined in (7.17),

    √n(β̃ − β) →d N(0, V*_β)

as n → ∞, where

    V*_β = V_β − V_βR(R'V_βR)^{-1}R'V_β.    (7.18)

Since

    V*_β ≤ V_β    (7.19)

the estimator (7.17) has lower asymptotic variance than the unrestricted estimator. Furthermore, for any W,

    V*_β ≤ V_β(W)    (7.20)

so (7.17) is asymptotically efficient in the class of minimum distance estimators.

Theorem 7.7.1 shows that the minimum distance estimator with the smallest asymptotic variance is (7.17). One implication is that the constrained least squares estimator is generally inefficient. The interesting exception is the case of conditional homoskedasticity, in which case the optimal weight matrix W = V_β = σ²Q_xx^{-1} is proportional to the CLS weight matrix Q_xx^{-1}, so in this case CLS is an efficient minimum distance estimator. Otherwise when the error is conditionally heteroskedastic, there are asymptotic efficiency gains by using minimum distance rather than least squares.

The fact that CLS is generally inefficient is counter-intuitive and requires some reflection to understand. Standard intuition suggests to apply the same estimation method (least squares) to the unconstrained and constrained models, and this is the most common empirical practice. But our statistical analysis has shown that this is not the efficient estimation method. Instead, the efficient minimum distance estimator has a smaller asymptotic variance. Why? The reason is that the least-squares estimator does not make use of the regressor x_{2i}. It ignores the information E(x_{2i}e_i) = 0. This information is relevant when the error is heteroskedastic and the excluded regressors are correlated with the included regressors.

Inequality (7.19) shows that the efficient minimum distance estimator β̃ has a smaller asymptotic variance than the unrestricted least squares estimator β̂. This means that estimation is more efficient by imposing correct restrictions when we use the minimum distance method.
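A feasible version of (7.17) needs only the unrestricted estimate β̂ and a consistent estimate V̂_β. The sketch below is an illustration, not the manuscript's code; the function names and the inputs beta_hat, V_hat, R and c are placeholders to be supplied by the user, and the variance formula implemented is the plug-in estimate stated later as (7.27) in Section 7.9.

```python
import numpy as np

def emd_estimator(beta_hat, V_hat, R, c):
    """Efficient minimum distance estimator, equation (7.17):
    beta_tilde = beta_hat - V_hat R (R' V_hat R)^{-1} (R' beta_hat - c)."""
    adj = V_hat @ R @ np.linalg.solve(R.T @ V_hat @ R, R.T @ beta_hat - c)
    return beta_hat - adj

def emd_variance(V_hat, R):
    """Plug-in asymptotic variance estimate (7.27):
    V_hat - V_hat R (R' V_hat R)^{-1} R' V_hat."""
    return V_hat - V_hat @ R @ np.linalg.solve(R.T @ V_hat @ R, R.T @ V_hat)
```

Replacing V_hat by Q̂_xx^{-1} in the first function reproduces the CLS estimator, which is one quick way to see the sense in which CLS is a special (and generally not efficient) member of this class.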
7.8 Exclusion Restriction Revisited

We return to the example of estimation with a simple exclusion restriction. The model is

    y_i = x_{1i}'β₁ + x_{2i}'β₂ + e_i

with the exclusion restriction β₂ = 0. We have introduced three estimators of β₁. The first is unconstrained least-squares applied to (7.9), which can be written as

    β̂₁ = Q̂_{11·2}^{-1}Q̂_{1y·2}.

From Theorem 6.28 and equation (6.19) its asymptotic variance is

    avar(β̂₁) = Q_{11·2}^{-1}(Ω_{11} − Q_{12}Q_{22}^{-1}Ω_{21} − Ω_{12}Q_{22}^{-1}Q_{21} + Q_{12}Q_{22}^{-1}Ω_{22}Q_{22}^{-1}Q_{21})Q_{11·2}^{-1}.

The second estimator of β₁ is the CLS estimator, which can be written as

    β̃_{1,cls} = Q̂_{11}^{-1}Q̂_{1y}.

Its asymptotic variance can be deduced from Theorem 7.6.3, but it is simpler to apply the CLT directly to show that

    avar(β̃_{1,cls}) = Q_{11}^{-1}Ω_{11}Q_{11}^{-1}.    (7.21)

The third estimator of β₁ is the efficient minimum distance estimator. Applying (7.17), it equals

    β̃_{1,md} = β̂₁ − V̂_{12}V̂_{22}^{-1}β̂₂    (7.22)

where we have partitioned

    V̂_β = ( V̂_{11}  V̂_{12} )
           ( V̂_{21}  V̂_{22} ).

From Theorem 7.7.1 its asymptotic variance is

    avar(β̃_{1,md}) = V_{11} − V_{12}V_{22}^{-1}V_{21}.    (7.23)

In general, the three estimators are different, and they have different asymptotic variances.

It is quite instructive to compare the asymptotic variances of the CLS and unconstrained least-squares estimators to assess whether or not the constrained estimator is necessarily more efficient than the unconstrained estimator.

First, consider the case of conditional homoskedasticity. In this case the two covariance matrices simplify to

    avar(β̂₁) = σ²Q_{11·2}^{-1}

and

    avar(β̃_{1,cls}) = σ²Q_{11}^{-1}.

If Q_{12} = 0 (so x_{1i} and x_{2i} are orthogonal) then these two variance matrices are equal and the two estimators have equal asymptotic efficiency. Otherwise, since Q_{12}Q_{22}^{-1}Q_{21} ≥ 0, then Q_{11} ≥ Q_{11} − Q_{12}Q_{22}^{-1}Q_{21}, and consequently

    Q_{11}^{-1}σ² ≤ (Q_{11} − Q_{12}Q_{22}^{-1}Q_{21})^{-1}σ².

This means that under conditional homoskedasticity, β̃_{1,cls} has a lower asymptotic variance matrix than β̂₁. Therefore in this context, constrained least-squares is more efficient than unconstrained least-squares. This is consistent with our intuition that imposing a correct restriction (excluding an irrelevant regressor) improves estimation efficiency.

However, in the general case of conditional heteroskedasticity this ranking is not guaranteed. In fact, what is really amazing is that the variance ranking can be reversed: the CLS estimator can have a larger asymptotic variance than the unconstrained least squares estimator.

To see this let's use the simple heteroskedastic example from Section 6.5. In that example, Q_{11} = Q_{22} = 1, Q_{12} = 1/2, Ω_{11} = Ω_{22} = 1, and Ω_{12} = 7/8. We can calculate that Q_{11·2} = 3/4 and

    avar(β̂₁) = 2/3    (7.24)
    avar(β̃_{1,cls}) = 1    (7.25)
    avar(β̃_{1,md}) = 5/8.    (7.26)

Thus the restricted least-squares estimator β̃₁ has a larger variance than the unrestricted least-squares estimator β̂₁! The minimum distance estimator has the smallest variance of the three, as expected.
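
The three numbers in (7.24)-(7.26) can be checked mechanically from the matrices of the Section 6.5 example. The following is a small numerical sketch (an illustration using numpy, not part of the manuscript), evaluating the formulas given above.

```python
import numpy as np

Q = np.array([[1.0, 0.5], [0.5, 1.0]])      # Q_xx from the Section 6.5 example
Om = np.array([[1.0, 7/8], [7/8, 1.0]])     # Omega

Q11, Q12, Q22 = Q[0, 0], Q[0, 1], Q[1, 1]
O11, O12, O22 = Om[0, 0], Om[0, 1], Om[1, 1]
Q11_2 = Q11 - Q12 * Q12 / Q22               # Q_{11.2} = 3/4

# Unconstrained least-squares, (7.24)
avar_ols = (O11 - 2 * Q12 / Q22 * O12 + Q12**2 / Q22**2 * O22) / Q11_2**2
# Constrained least-squares, (7.25)
avar_cls = O11 / Q11**2
# Efficient minimum distance, (7.26)
V = np.linalg.inv(Q) @ Om @ np.linalg.inv(Q)
avar_md = V[0, 0] - V[0, 1] ** 2 / V[1, 1]

print(avar_ols, avar_cls, avar_md)          # 0.666..., 1.0, 0.625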
What we have found is that when the estimation method is least-squares, deleting the irrelevant variable x_{2i} can actually decrease the precision of estimation of β₁, or equivalently, adding the irrelevant variable x_{2i} can actually improve the precision of the estimation.

To repeat this unexpected finding, we have shown in a very simple example that it is possible for least-squares applied to the short regression (7.10) to be less efficient for estimation of β₁ than least-squares applied to the long regression (7.9), even though the constraint β₂ = 0 is valid! This result is strongly counter-intuitive. It seems to contradict our initial motivation for pursuing constrained estimation — to improve estimation efficiency.

It turns out that a more refined answer is appropriate. Constrained estimation is desirable, but not constrained least-squares estimation. While least-squares is asymptotically efficient for estimation of the unconstrained projection model, it is not an efficient estimator of the constrained projection model.
7.9 Variance and Standard Error Estimation
The asymptotic covariance matrix (7.18) may be estimated by replacing V_β with a consistent estimate such as V̂_β. This variance estimator is then

    V̂*_β = V̂_β − V̂_βR(R'V̂_βR)^{-1}R'V̂_β.    (7.27)

We can calculate standard errors for any linear combination h'β̃ so long as h does not lie in the range space of R. A standard error for h'β̃ is

    s(h'β̃) = (n^{-1}h'V̂*_βh)^{1/2}.
7.10 Nonlinear Constraints

In some cases it is desirable to impose nonlinear constraints on the parameter vector β. They can be written as

    r(β) = 0    (7.28)

where r : R^k → R^q. This includes the linear constraints (7.1) as a special case. An example of (7.28) which cannot be written as (7.1) is β₁β₂ = 1, or r(β) = β₁β₂ − 1.

The minimum distance estimator of β subject to (7.28) solves the minimization problem

    β̃ = argmin_{r(β)=0} J_n(β)    (7.29)

where

    J_n(β) = n(β̂ − β)'V̂_β^{-1}(β̂ − β).

The solution minimizes the Lagrangian

    L(β, λ) = (1/2) J_n(β) + λ'r(β)    (7.30)

over (β, λ).

Computationally, there is no explicit expression for the solution β̃ so it must be found numerically. Computational methods are based on the method of quadratic programming and are not reviewed here.
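
In practice the problem (7.29) can also be handed to a generic constrained optimizer rather than a dedicated quadratic-programming routine. The sketch below uses scipy's SLSQP solver as one such substitute (an assumption of this illustration, not the method recommended in the text), with the example constraint β₁β₂ = 1; the inputs beta_hat, V_hat and n are placeholder values.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder unrestricted estimate and variance estimate (illustrative values only)
beta_hat = np.array([0.8, 1.5])
V_hat = np.array([[0.04, 0.01], [0.01, 0.09]])
V_inv = np.linalg.inv(V_hat)
n = 500

def J(b):
    d = beta_hat - b
    return n * d @ V_inv @ d                 # criterion J_n(beta) in (7.29)

constraint = {"type": "eq", "fun": lambda b: b[0] * b[1] - 1.0}   # r(beta) = b1*b2 - 1

res = minimize(J, x0=beta_hat, method="SLSQP", constraints=[constraint])
beta_tilde = res.x
print(beta_tilde, beta_tilde[0] * beta_tilde[1])   # product is (numerically) 1
```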
Assumption 7.10.1 r(β) = 0 with rank(R) = q, where R = (∂/∂β) r(β)'.

The asymptotic distribution is a simple generalization of the case of a linear constraint, but the proof is more delicate.

Theorem 7.10.1 Under Assumption 1.5.1, Assumption 6.4.1, and Assumption 7.10.1, for β̃ defined in (7.29),

    √n(β̃ − β) →d N(0, V*_β)

as n → ∞, where

    V*_β = V_β − V_βR(R'V_βR)^{-1}R'V_β.

The asymptotic variance matrix can be estimated by

    V̂*_β = V̂_β − V̂_βR̂(R̂'V̂_βR̂)^{-1}R̂'V̂_β

where

    R̂ = (∂/∂β) r(β̃)'.

Standard errors for the elements of β̃ are the square roots of the diagonal elements of n^{-1}V̂*_β.
7.11 Technical Proofs*
Proof of Theorem 7.7.1, Equation (7.20). Let R_⊥ be a full rank k × (k − q) matrix satisfying R_⊥'V_βR = 0 and then set C = [R, R_⊥], which is full rank and invertible. Then we can calculate that

    C'V*_βC = ( R'V*_βR      R'V*_βR_⊥   )  =  ( 0    0            )
              ( R_⊥'V*_βR    R_⊥'V*_βR_⊥ )     ( 0    R_⊥'V_βR_⊥   )

and

    C'V_β(W)C = ( R'V_β(W)R      R'V_β(W)R_⊥   )
                ( R_⊥'V_β(W)R    R_⊥'V_β(W)R_⊥ )
              = ( 0    0                                                          )
                ( 0    R_⊥'V_βR_⊥ + R_⊥'WR(R'WR)^{-1}R'V_βR(R'WR)^{-1}R'WR_⊥ ).

Thus

    C'(V_β(W) − V*_β)C = C'V_β(W)C − C'V*_βC
                       = ( 0    0                                               )
                         ( 0    R_⊥'WR(R'WR)^{-1}R'V_βR(R'WR)^{-1}R'WR_⊥ )  ≥ 0.

Since C is invertible it follows that V_β(W) − V*_β ≥ 0, which is (7.20).  ∎

Proof of Theorem 7.10.1. For simplicity, we assume that the constrained estimator is consistent, β̃ →p β. This can be shown with more effort, but requires a deeper treatment than appropriate for this textbook.

For each element r_j(β) of the q-vector r(β), by the mean value theorem there exists a β*_j on the line segment joining β̃ and β such that

    r_j(β̃) = r_j(β) + ((∂/∂β) r_j(β*_j))'(β̃ − β).    (7.31)

Let R*_n be the k × q matrix

    R*_n = [ (∂/∂β) r_1(β*_1)   (∂/∂β) r_2(β*_2)   ···   (∂/∂β) r_q(β*_q) ].

Since β̃ →p β it follows that β*_j →p β, and by the CMT, R*_n →p R. Stacking the (7.31), we obtain

    r(β̃) = r(β) + R*_n'(β̃ − β).

Since r(β̃) = 0 by construction and r(β) = 0 by Assumption 7.10.1, this implies

    0 = R*_n'(β̃ − β).    (7.32)

The first-order condition for (7.30) is

    V̂_β^{-1}(β̂ − β̃) = R̃λ̃

where R̃ = (∂/∂β) r(β̃)'. Premultiplying by R*_n'V̂_β, inverting, and using (7.32), we find

    λ̃ = (R*_n'V̂_βR̃)^{-1}R*_n'(β̂ − β̃) = (R*_n'V̂_βR̃)^{-1}R*_n'(β̂ − β).

Thus

    β̃ − β = (I − V̂_βR̃(R*_n'V̂_βR̃)^{-1}R*_n')(β̂ − β).

From Theorem 6.4.2 and Theorem 6.8.2 we find

    √n(β̃ − β) = (I − V̂_βR̃(R*_n'V̂_βR̃)^{-1}R*_n')√n(β̂ − β)
               →d (I − V_βR(R'V_βR)^{-1}R')N(0, V_β)
               = N(0, V*_β).  ∎
Exercises

Exercise 7.1 In the model y = X₁β₁ + X₂β₂ + e, show directly from definition (7.3) that the CLS estimate of β = (β₁, β₂), subject to the constraint that β₂ = 0, is the OLS regression of y on X₁.

Exercise 7.2 In the model y = X₁β₁ + X₂β₂ + e, show directly from definition (7.3) that the CLS estimate of β = (β₁, β₂), subject to the constraint that β₁ = c (where c is some given vector), is the OLS regression of y − X₁c on X₂.

Exercise 7.3 In the model y = X₁β₁ + X₂β₂ + e, with X₁ and X₂ each n × k, find the CLS estimate of β = (β₁, β₂), subject to the constraint that β₁ = −β₂.

Exercise 7.4 Verify that for β̃ defined in (7.8), R'β̃ = c.

Exercise 7.5 Verify (7.14).

Exercise 7.6 Verify that the minimum distance estimator β̃ with W_n = Q̂_xx^{-1} equals the CLS estimator.

Exercise 7.7 Prove Theorem 7.6.1.

Exercise 7.8 Prove Theorem 7.6.2.

Exercise 7.9 Prove Theorem 7.6.3. (Hint: Use that CLS is a special case of Theorem 7.6.2.)

Exercise 7.10 Verify that (7.18) is V_β(W) with W = V_β.

Exercise 7.11 Prove (7.19). Hint: Use (7.18).

Exercise 7.12 Verify (7.21), (7.22), and (7.23).

Exercise 7.13 Verify (7.24), (7.25), and (7.26).
Chapter 8
Testing
8.1 t tests
The t-test is routinely used to test hypotheses on θ. A simple null and composite hypothesis takes the form

    H₀: θ = θ₀
    H₁: θ ≠ θ₀

where θ₀ is some pre-specified value. A t-test rejects H₀ in favor of H₁ when |t_n(θ₀)| is large. By "large" we mean that the observed value of the t-statistic would be unlikely if H₀ were true.

Formally, we first pick an asymptotic significance level α. We then find z_{α/2}, the upper α/2 quantile of the standard normal distribution, which has the property that if Z ~ N(0, 1) then

    Pr(|Z| > z_{α/2}) = α.

For example, z_{.025} = 1.96 and z_{.05} = 1.645. A test of asymptotic significance α rejects H₀ if |t_n| > z_{α/2}. Otherwise the test does not reject, or "accepts" H₀.

The asymptotic significance level is α because Theorem 6.11.1 implies that

    Pr(reject H₀ | H₀ true) = Pr(|t_n| > z_{α/2} | θ = θ₀) → Pr(|Z| > z_{α/2}) = α.

The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing.

While there is no objective scientific basis for choice of significance level α, the common practice is to set α = .05 or 5%. This implies a critical value of z_{.025} = 1.96 ≈ 2. When |t_n| > 2 it is common to say that the t-statistic is statistically significant, and if |t_n| < 2 it is common to say that the t-statistic is statistically insignificant. It is helpful to remember that this is simply a way of saying "Using a t-test, the hypothesis that θ = θ₀ can [cannot] be rejected at the asymptotic 5% level."

A related statistic is the asymptotic p-value, which can be interpreted as a measure of the evidence against the null hypothesis. The asymptotic p-value of the statistic t_n is

    p_n = p(t_n)

where p(t) is the tail probability function

    p(t) = Pr(|Z| > |t|) = 2(1 − Φ(|t|)).

If the p-value p_n is small (close to zero) then the evidence against H₀ is strong.

An equivalent statement of a Neyman-Pearson test is to reject at the α% level if and only if p_n < α. Significance tests can be deduced directly from the p-value since for any α, p_n < α if and only if |t_n| > z_{α/2}. The p-value is more general, however, in that the reader is allowed to pick the level of significance α, in contrast to Neyman-Pearson rejection/acceptance reporting where the researcher picks the significance level. (However, the Neyman-Pearson approach requires the reader to select the significance level before observing the p-value.)

Another helpful observation is that the p-value function is a unit-free transformation of the t-statistic. That is, under H₀, p_n →d U[0, 1], so the "unusualness" of the test statistic can be compared to the easy-to-understand uniform distribution, regardless of the complication of the distribution of the original test statistic. To see this fact, note that the asymptotic distribution of |t_n| is F(x) = 1 − p(x). Thus

    Pr(1 − p_n ≤ u) = Pr(1 − p(t_n) ≤ u) = Pr(F(|t_n|) ≤ u) = Pr(|t_n| ≤ F^{-1}(u)) → F(F^{-1}(u)) = u,

establishing that 1 − p_n →d U[0, 1], from which it follows that p_n →d U[0, 1].
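
The tail probability function p(t) = 2(1 − Φ(|t|)) is a one-line computation in software. The following is a hedged sketch (the use of scipy's normal distribution is an assumption of the illustration, not a package referenced by the text):

```python
from scipy.stats import norm

def t_pvalue(t):
    """Asymptotic two-sided p-value of a t statistic: p(t) = 2(1 - Phi(|t|))."""
    return 2.0 * (1.0 - norm.cdf(abs(t)))

print(t_pvalue(1.96))   # approximately 0.05
print(t_pvalue(1.00))   # approximately 0.317
```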
8.2 t-ratios

Some applied papers (especially older ones) report "t-ratios" for each estimated coefficient. For a coefficient θ these are

    t_n = t_n(0) = θ̂/s(θ̂),

the ratio of the coefficient estimate to its standard error, and equal the t-statistic for the test of the hypothesis H₀: θ = 0. Such papers often discuss the "significance" of certain variables or coefficients, or describe "which regressors have a significant effect on y" by noting which t-ratios exceed 2 in absolute value.

This is very poor econometric practice, and should be studiously avoided. It is a recipe for banishment of your work to lower-tier economics journals.

Fundamentally, the common t-ratio is a test of the hypothesis that a coefficient equals zero. This should be reported and discussed when this is an interesting economic hypothesis. But if this is not the case, it is distracting.

Instead, when a coefficient θ is of interest, it is constructive to focus on the point estimate, its standard error, and its confidence interval. The point estimate gives our "best guess" for the value. The standard error is a measure of precision. The confidence interval gives us the range of values consistent with the data. If the standard error is large then the point estimate is not a good summary about θ. The endpoints of the confidence interval describe the bounds on the likely possibilities. If the confidence interval embraces too broad a set of values for θ, then the dataset is not sufficiently informative to render inferences about θ. On the other hand if the confidence interval is tight, then the data have produced an accurate estimate, and the focus should be on the value and interpretation of this estimate. In contrast, the widely-seen statement "the t-ratio is highly significant" has little interpretive value.

The above discussion requires that the researcher knows what the coefficient θ means (in terms of the economic problem) and can interpret values and magnitudes, not just signs. This is critical for good applied econometric practice.
8.3 Wald Tests

Sometimes θ = h(β) is a q × 1 vector, and it is desired to test the joint restrictions simultaneously. We have the null and alternative

    H₀: θ = θ₀
    H₁: θ ≠ θ₀.

A commonly used test of H₀ against H₁ is the Wald statistic (6.34) evaluated at the null hypothesis

    W_n = n(θ̂ − θ₀)'V̂_θ^{-1}(θ̂ − θ₀).    (8.1)

Typically, we have θ̂ = h(β̂) with asymptotic covariance matrix estimate

    V̂_θ = Ĥ_β'V̂_βĤ_β

where

    Ĥ_β = (∂/∂β) h(β̂)'.

Then

    W_n = n(h(β̂) − θ₀)'(Ĥ_β'V̂_βĤ_β)^{-1}(h(β̂) − θ₀).

When h is a linear function of β, h(β) = R'β, then the Wald statistic simplifies to

    W_n = n(R'β̂ − θ₀)'(R'V̂_βR)^{-1}(R'β̂ − θ₀).

As shown in Theorem 6.14.2, when θ = θ₀ then W_n →d χ²_q, a chi-square random variable with q degrees of freedom.

Theorem 8.3.1 Under Assumption 1.5.1, Assumption 6.4.1, rank(H_β) = q, and H₀, then W_n →d χ²_q.

An asymptotic Wald test rejects H₀ in favor of H₁ if W_n exceeds χ²_q(α), the upper-α quantile of the χ²_q distribution. For example, χ²₁(.05) = 3.84 = z²_{.025}. The Wald test fails to reject if W_n is less than χ²_q(α). As with t-tests, it is conventional to describe a Wald test as "significant" if W_n exceeds the 5% critical value.

Notice that the asymptotic distribution in Theorem 8.3.1 depends solely on q — the number of restrictions being tested. It does not depend on k — the number of parameters estimated.

The asymptotic p-value for W_n is p_n = p(W_n), where p(x) = Pr(χ²_q ≥ x) is the tail probability function of the χ²_q distribution. The Wald test rejects at the α% level if and only if p_n < α, and p_n is asymptotically U[0, 1] under H₀. In applied work it is good practice to report the p-value of a Wald statistic, as it helps readers interpret the magnitude of the statistic.
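
For a linear hypothesis R'β = θ₀ the statistic and its chi-square p-value require only β̂, V̂_β and R. The sketch below is an illustration, not code from the text; the inputs and the example numbers are placeholders.

```python
import numpy as np
from scipy.stats import chi2

def wald_test(beta_hat, V_hat, R, theta0, n):
    """Wald statistic (8.1) for H0: R'beta = theta0, with its chi-square(q) p-value."""
    diff = R.T @ beta_hat - theta0
    W = n * diff @ np.linalg.solve(R.T @ V_hat @ R, diff)
    q = R.shape[1]
    return W, 1.0 - chi2.cdf(W, df=q)

# Illustrative call: test that the last two of three coefficients are zero
beta_hat = np.array([1.2, 0.05, -0.03])
V_hat = np.diag([0.5, 0.2, 0.3])
R = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
W, pval = wald_test(beta_hat, V_hat, R, np.zeros(2), n=400)
print(W, pval)
```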
8.4 Minimum Distance Tests

The Wald test (8.1) measures the distance between the unrestricted estimate θ̂ and the null hypothesis θ₀. A minimum distance test measures the distance between β̂ and the restricted estimate β̃ of the previous chapter. Recall that under the restriction

    h(β) = θ₀

the efficient minimum distance estimate solves the minimization problem

    β̃ = argmin_{h(β)=θ₀} J_n(β)

where

    J_n(β) = n(β̂ − β)'V̂_β^{-1}(β̂ − β).

The minimum distance test statistic of H₀ against H₁ is

    J_n = J_n(β̃) = min_{h(β)=θ₀} J_n(β),

or more simply

    J_n = n(β̂ − β̃)'V̂_β^{-1}(β̂ − β̃).

An asymptotic test rejects H₀ in favor of H₁ if J_n exceeds χ²_q(α), the upper-α quantile of the χ²_q distribution. Otherwise the test does not reject H₀.

When h(β) is linear it turns out that J_n = W_n, so the Wald and minimum distance tests are equal. When h(β) is nonlinear then the two tests are different.

The chi-square critical value is justified by the following theorem.

Theorem 8.4.1 Under Assumption 1.5.1, Assumption 6.4.1, rank(H_β) = q, and H₀, then J_n →d χ²_q.
8.5 F Tests

Take the linear model

    y = X₁β₁ + X₂β₂ + e

where X₁ is n × k₁, X₂ is n × k₂, k = k₁ + k₂, and the null hypothesis is

    H₀: β₂ = 0.

In this case, θ = β₂, and there are q = k₂ restrictions. Also h(β) = R'β is linear with R = (0  I)', a selector matrix. We know that the Wald statistic takes the form

    W_n = nθ̂'V̂_θ^{-1}θ̂ = nβ̂₂'(R'V̂_βR)^{-1}β̂₂.

Now suppose that the covariance matrix is computed under the assumption of homoskedasticity, so that V̂_β is replaced with V̂⁰_β = s²(n^{-1}X'X)^{-1}. We define the "homoskedastic" Wald statistic

    W⁰_n = nθ̂'(V̂⁰_θ)^{-1}θ̂ = nβ̂₂'(R'V̂⁰_βR)^{-1}β̂₂.

What we show in this section is that this Wald statistic can be written very simply using the formula

    W⁰_n = (n − k)(ẽ'ẽ − ê'ê)/(ê'ê)    (8.2)

where

    ẽ = y − X₁β̃₁,    β̃₁ = (X₁'X₁)^{-1}X₁'y

are from OLS of y on X₁, and

    ê = y − Xβ̂,    β̂ = (X'X)^{-1}X'y

are from OLS of y on X = (X₁, X₂).

The elegant feature about (8.2) is that it is directly computable from the standard output from two simple OLS regressions, as the sum of squared errors is a typical output from statistical packages. This statistic is typically reported as an "F-statistic" which is defined as

    F_n = W⁰_n/k₂ = ((ẽ'ẽ − ê'ê)/k₂)/(ê'ê/(n − k)).

While it should be emphasized that equality (8.2) only holds if V̂⁰_β = s²(n^{-1}X'X)^{-1}, still this formula often finds good use in reading applied papers. Because of this connection we call (8.2) the F form of the Wald statistic. (We can also call W⁰_n a homoskedastic form of the Wald statistic.)

We now derive expression (8.2). First, note that by partitioned matrix inversion (A.4)

    R'(X'X)^{-1}R = R' ( X₁'X₁  X₁'X₂ )^{-1} R = (X₂'M₁X₂)^{-1}
                       ( X₂'X₁  X₂'X₂ )

where M₁ = I − X₁(X₁'X₁)^{-1}X₁'. Thus

    (R'V̂⁰_βR)^{-1} = s^{-2}n^{-1}(R'(X'X)^{-1}R)^{-1} = s^{-2}n^{-1}X₂'M₁X₂

and

    W⁰_n = nβ̂₂'(R'V̂⁰_βR)^{-1}β̂₂ = β̂₂'(X₂'M₁X₂)β̂₂/s².

To simplify this expression further, note that if we regress y on X₁ alone, the residual is ẽ = M₁y. Now consider the residual regression of ẽ on X̃₂ = M₁X₂. By the FWL theorem, ẽ = X̃₂β̂₂ + ê and X̃₂'ê = 0. Thus

    ẽ'ẽ = (X̃₂β̂₂ + ê)'(X̃₂β̂₂ + ê) = β̂₂'X̃₂'X̃₂β̂₂ + ê'ê = β̂₂'X₂'M₁X₂β̂₂ + ê'ê,

or alternatively,

    β̂₂'X₂'M₁X₂β̂₂ = ẽ'ẽ − ê'ê.

Also, since

    s² = (n − k)^{-1}ê'ê

we conclude that

    W⁰_n = (n − k)(ẽ'ẽ − ê'ê)/(ê'ê)

as claimed.
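
Since (8.2) involves nothing beyond the two sums of squared residuals, it is easy to reproduce. A minimal sketch, assuming the residual vectors from the short and long regressions are available (the function name and arguments are placeholders of this illustration):

```python
import numpy as np

def homoskedastic_wald(e_tilde, e_hat, k, k2):
    """F form of the Wald statistic, equation (8.2), plus the matching F statistic.
    e_tilde: residuals from the short regression (y on X1)
    e_hat:   residuals from the long regression  (y on X = (X1, X2))
    k:       total number of regressors in the long regression
    k2:      number of excluded regressors under H0."""
    n = e_hat.shape[0]
    ssr_tilde = float(e_tilde @ e_tilde)
    ssr_hat = float(e_hat @ e_hat)
    W0 = (n - k) * (ssr_tilde - ssr_hat) / ssr_hat   # equation (8.2)
    F = W0 / k2                                      # the commonly reported "F-statistic"
    return W0, F
```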
In many statistical packages, when an OLS regression is estimated, an "F-statistic" is reported. This is F_n when X₁ is a vector of ones, so H₀ is an intercept-only model. This special F statistic is testing the hypothesis that all slope coefficients (all coefficients other than the intercept) are zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is nearly always highly significant. While there are special cases where this F statistic is useful, these cases are atypical. As a general rule, there is no reason to report this F statistic.
8.6 Normal Regression Model

Now let us partition β = (β₁, β₂) and consider tests of the linear restriction

    H₀: β₂ = 0
    H₁: β₂ ≠ 0

in the normal regression model. In parametric models, a good test statistic is the likelihood ratio, which is twice the difference in the log-likelihood function evaluated under the null and alternative hypotheses. The estimator under the alternative is the unrestricted estimator (β̂₁, β̂₂, σ̂²) discussed above. The Gaussian log-likelihood at these estimates is

    log L(β̂₁, β̂₂, σ̂²) = −(n/2) log(2πσ̂²) − (1/(2σ̂²)) ê'ê
                        = −(n/2) log(σ̂²) − (n/2) log(2π) − n/2.

The MLE under the null hypothesis is the restricted estimates (β̃₁, 0, σ̃²) where β̃₁ is the OLS estimate from a regression of y_i on x_{1i} only, with residual variance σ̃². The log-likelihood of this model is

    log L(β̃₁, 0, σ̃²) = −(n/2) log(σ̃²) − (n/2) log(2π) − n/2.

The LR statistic for H₀ against H₁ is

    LR_n = 2(log L(β̂₁, β̂₂, σ̂²) − log L(β̃₁, 0, σ̃²)) = n(log(σ̃²) − log(σ̂²)) = n log(σ̃²/σ̂²).

By a first-order Taylor series approximation

    LR_n = n log(1 + (σ̃²/σ̂² − 1)) ≈ n(σ̃²/σ̂² − 1) ≈ W⁰_n,

the homoskedastic Wald statistic. This shows that the two statistics (LR_n and W⁰_n) can be numerically close. It also shows that the homoskedastic Wald statistic for linear hypotheses can also be interpreted as an appropriate likelihood ratio statistic under normality.
8.7 Problems with Tests of Non-Linear Hypotheses

While the t and Wald tests work well when the hypothesis is a linear restriction on β, they can work quite poorly when the restrictions are nonlinear. This can be seen by a simple example introduced by Lafontaine and White (1986). Take the model

    y_i = β + e_i
    e_i ~ N(0, σ²)

and consider the hypothesis

    H₀: β = 1.

Let β̂ and σ̂² be the sample mean and variance of y_i. The standard Wald test for H₀ is

    W_n = n(β̂ − 1)²/σ̂².

Now notice that H₀ is equivalent to the hypothesis

    H₀(r): β^r = 1

for any positive integer r. Letting h(β) = β^r, and noting H_β = rβ^{r−1}, we find that the standard Wald test for H₀(r) is

    W_n(r) = n(β̂^r − 1)²/(σ̂² r² β̂^{2r−2}).

While the hypothesis β^r = 1 is unaffected by the choice of r, the statistic W_n(r) varies with r. This is an unfortunate feature of the Wald statistic.
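
The dependence of W_n(r) on r is easy to reproduce numerically. The sketch below evaluates the formula above at the two values of β̂ used in Figure 8.1 with n/σ² = 10; it is an illustration only, not the code behind the figure.

```python
import numpy as np

def wald_r(beta_hat, n_over_sigma2, r):
    """Wald statistic for H0(r): beta^r = 1, from the formula in the text."""
    return n_over_sigma2 * (beta_hat**r - 1.0) ** 2 / (r**2 * beta_hat ** (2 * r - 2))

for b in (0.8, 1.6):
    stats = [round(wald_r(b, 10.0, r), 2) for r in range(1, 11)]
    print(b, stats)   # the statistic rises with r for b = 0.8 and falls for b = 1.6
```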
To demonstrate this effect, we have plotted in Figure 8.1 the Wald statistic W_n(r) as a function of r, setting n/σ² = 10. The increasing solid line is for the case β̂ = 0.8. The decreasing dashed line is for the case β̂ = 1.6. It is easy to see that in each case there are values of r for which the test statistic is significant relative to asymptotic critical values, while there are other values of r for which the test statistic is insignificant. This is distressing since the choice of r is arbitrary and irrelevant to the actual hypothesis.

[Figure 8.1: Wald statistic W_n(r) as a function of r]

Our first-order asymptotic theory is not useful to help pick r, as W_n(r) →d χ²₁ under H₀ for any r. This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create artificial datasets, to which we apply the statistical tools of interest. This produces random draws from the statistic's sampling distribution. Through repetition, features of this distribution can be calculated.
In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84 — the probability of a false rejection, Pr(W_n(r) > 3.84 | β = 1). Given the simplicity of the model, this probability depends only on r, n, and σ². In Table 4.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of r is varied from 1 to 10, n is varied among 20, 100 and 500, and σ is varied among 1 and 3. Table 4.1 reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of r — and thus corresponds to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of n and σ. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics W_n(r) which are larger than 3.84. The null hypothesis β^r = 1 is true, so these probabilities are Type I error.

To interpret the table, remember that the ideal Type I error probability is 5% (.05) with deviations indicating distortion. Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range. For this particular example the only test which meets this criterion is the conventional W_n = W_n(1) test. Any other choice of r leads to a test with unacceptable Type I error probabilities.

In Table 4.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size n increases. There is, however, no magic choice of n for which all tests perform uniformly well. Test performance deteriorates as r increases, which is not surprising given the dependence of W_n(r) on r as shown in Figure 8.1.

Table 4.1
Type I Error Probability of Asymptotic 5% W_n(r) Test

            σ = 1                      σ = 3
 r     n = 20  n = 100  n = 500   n = 20  n = 100  n = 500
 1      .06     .05      .05       .07     .05      .05
 2      .08     .06      .05       .15     .08      .06
 3      .10     .06      .05       .21     .12      .07
 4      .13     .07      .06       .25     .15      .08
 5      .15     .08      .06       .28     .18      .10
 6      .17     .09      .06       .30     .20      .11
 7      .19     .10      .06       .31     .22      .13
 8      .20     .12      .07       .33     .24      .14
 9      .22     .13      .07       .34     .25      .15
 10     .23     .14      .08       .35     .26      .16

Note: Rejection frequencies from 50,000 simulated random samples.
In this example it is not surprising that the choice r = 1 yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.

This point can be illustrated through another example which is similar to one developed in Gregory and Veall (1985). Take the model

    y_i = β₀ + x_{1i}β₁ + x_{2i}β₂ + e_i    (8.3)
    E(x_i e_i) = 0

and the hypothesis

    H₀: β₁/β₂ = r
where r is a known constant. Equivalently, define θ = β₁/β₂, so the hypothesis can be stated as

    H₀: θ = r.

Let β̂ = (β̂₀, β̂₁, β̂₂) be the least-squares estimates of (8.3), let V̂_β be an estimate of the asymptotic covariance matrix for β̂ and set θ̂ = β̂₁/β̂₂. Define

    Ĥ₁ = ( 0,  1/β̂₂,  −β̂₁/β̂₂² )'

so that the standard error for θ̂ is s(θ̂) = (n^{-1}Ĥ₁'V̂_βĤ₁)^{1/2}. In this case a t-statistic for H₀ is

    t_{1n} = (β̂₁/β̂₂ − r)/s(θ̂).

An alternative statistic can be constructed through reformulating the null hypothesis as

    H₀: β₁ − rβ₂ = 0.

A t-statistic based on this formulation of the hypothesis is

    t_{2n} = (β̂₁ − rβ̂₂)/(n^{-1}H₂'V̂_βH₂)^{1/2}

where

    H₂ = ( 0,  1,  −r )'.

To compare t_{1n} and t_{2n} we perform another simple Monte Carlo simulation. We let x_{1i} and x_{2i} be mutually independent N(0, 1) variables, e_i be an independent N(0, σ²) draw with σ = 3, and normalize β₀ = 0 and β₁ = 1. This leaves β₂ as a free parameter, along with sample size n. We vary β₂ among .1, .25, .50, .75, and 1.0 and n among 100 and 500.
Table 4.2
Type I Error Probability of Asymptotic 5% t-tests

                      n = 100                                  n = 500
          Pr(t_n < −1.645)  Pr(t_n > 1.645)      Pr(t_n < −1.645)  Pr(t_n > 1.645)
  β₂       t_{1n}  t_{2n}    t_{1n}  t_{2n}       t_{1n}  t_{2n}    t_{1n}  t_{2n}
  .10       .47     .06       .00     .06          .28     .05       .00     .05
  .25       .26     .06       .00     .06          .15     .05       .00     .05
  .50       .15     .06       .00     .06          .10     .05       .00     .05
  .75       .12     .06       .00     .06          .09     .05       .00     .05
 1.00       .10     .06       .00     .06          .07     .05       .02     .05
The one-sided Type I error probabilities Pr(t_n < −1.645) and Pr(t_n > 1.645) are calculated from 50,000 simulated samples. The results are presented in Table 4.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the t_{1n} statistic diverge greatly from this value, especially for small values of β₂. The left tail probabilities Pr(t_{1n} < −1.645) greatly exceed 5%, while the right tail probabilities Pr(t_{1n} > 1.645) are close to zero in most cases. In contrast, the rejection rates for the linear t_{2n} statistic are invariant to the value of β₂, and are close to the ideal 5% rate for both sample sizes. The implication of Table 4.2 is that the two t-ratios have dramatically different sampling behavior.

The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis. The simple solution is to use the minimum distance statistic J_n, which equals W_n with r = 1 in the first example, and t_{2n} in the second example. The minimum distance statistic is invariant to the algebraic formulation of the null hypothesis, so is immune to this problem. Whenever possible, the Wald statistic should not be used to test nonlinear hypotheses.
8.8 Monte Carlo Simulation
In the previous section we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses. In this section we describe the method in more detail.

Recall, our data consist of observations (y_i, x_i) which are random draws from a population distribution F. Let θ be a parameter and let T_n = T_n((y_1, x_1), ..., (y_n, x_n), θ) be a statistic of interest, for example an estimator θ̂ or a t-statistic (θ̂ − θ)/s(θ̂). The exact distribution of T_n is

    G_n(u, F) = Pr(T_n ≤ u | F).

While the asymptotic distribution of T_n might be known, the exact (finite sample) distribution G_n is generally unknown.

Monte Carlo simulation uses numerical simulation to compute G_n(u, F) for selected choices of F. This is useful to investigate the performance of the statistic T_n in reasonable situations and sample sizes. The basic idea is that for any given F, the distribution function G_n(u, F) can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort where games of chance are played.

The method of Monte Carlo is quite simple to describe. The researcher chooses F (the distribution of the data) and the sample size n. A "true" value of θ is implied by this choice, or equivalently the value θ is selected directly by the researcher which implies restrictions on F. Then the following experiment is conducted:

• n independent random pairs (y*_i, x*_i), i = 1, ..., n, are drawn from the distribution F using the computer's random number generator.
• The statistic T_n = T_n((y*_1, x*_1), ..., (y*_n, x*_n), θ) is calculated on this pseudo data.

For step 1, most computer packages have built-in procedures for generating U[0, 1] and N(0, 1) random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)

For step 2, it is important that the statistic be evaluated at the "true" value of θ corresponding to the choice of F.

The above experiment creates one random draw from the distribution G_n(u, F). This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment B times, where B is a large number. Typically, we set B = 1000 or B = 5000. We will discuss this choice later.

Notationally, let the b'th experiment result in the draw T_{nb}, b = 1, ..., B. These results are stored. They constitute a random sample of size B from the distribution G_n(u, F) = Pr(T_{nb} ≤ u) = Pr(T_n ≤ u | F).

From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:

Suppose we are interested in the bias, mean-squared error (MSE), or variance of the distribution of θ̂ − θ. We then set T_n = θ̂ − θ, run the above experiment, and calculate

    Bias(θ̂) = (1/B) Σ_{b=1}^B T_{nb} = (1/B) Σ_{b=1}^B (θ̂_b − θ)
    MSE(θ̂)  = (1/B) Σ_{b=1}^B (T_{nb})² = (1/B) Σ_{b=1}^B (θ̂_b − θ)²
    var(θ̂)  = MSE(θ̂) − (Bias(θ̂))².

Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set T_n = |θ̂ − θ|/s(θ̂) and calculate

    P̂ = (1/B) Σ_{b=1}^B 1(T_{nb} ≥ 1.96),    (8.4)

the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.

Suppose we are interested in the 5% and 95% quantile of T_n = θ̂. We then compute the 5% and 95% sample quantiles of the sample {T_{nb}}. The α% sample quantile is a number q_α such that α% of the sample are less than q_α. A simple way to compute sample quantiles is to sort the sample {T_{nb}} from low to high. Then q_α is the N'th number in this ordered sequence, where N = (B + 1)α. It is therefore convenient to pick B so that N is an integer. For example, if we set B = 999, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.

The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on n and F. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of n and F.

As discussed above, the researcher must select the number of experiments, B. Often this is called the number of replications. Quite simply, a larger B results in more precise estimates of the features of interest of G_n, but requires more computational time. In practice, therefore, the choice of B is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size B, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then B will have to be increased.

In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (8.4). The random variable 1(T_{nb} ≥ 1.96) is iid Bernoulli, equalling 1 with probability p = E[1(T_{nb} ≥ 1.96)]. The average (8.4) is therefore an unbiased estimator of p with standard error s(p̂) = (p(1 − p)/B)^{1/2}. As p is unknown, this may be approximated by replacing p with p̂ or with an hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set s(p̂) = ((.05)(.95)/B)^{1/2} ≈ .22/√B. Hence, standard errors for B = 100, 1000, and 5000, are, respectively, s(p̂) = .022, .007, and .003.
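
As a concrete illustration of the algorithm just described, the following sketch estimates the rejection frequency of a nominal 5% two-sided t-test for the mean, using a design point similar to those in Table 4.1 (r = 1, n = 20, σ = 3). The seed, the divisor used for the sample variance, and the other implementation details are assumptions of this illustration, so the output will not exactly reproduce the table.

```python
import numpy as np

rng = np.random.default_rng(1)
B, n, mu, sigma = 5000, 20, 1.0, 3.0    # replications, sample size, true mean, error s.d.

rejections = 0
for _ in range(B):
    y = mu + sigma * rng.normal(size=n)         # draw one artificial sample from F
    mu_hat = y.mean()
    s_hat = y.std(ddof=0)                       # sample standard deviation (divisor n assumed)
    t = np.sqrt(n) * (mu_hat - mu) / s_hat      # t statistic evaluated at the true value
    rejections += abs(t) >= 1.96

p_hat = rejections / B                          # estimate (8.4) of the rejection probability
se = np.sqrt(p_hat * (1 - p_hat) / B)           # simulation standard error
print(p_hat, se)
```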
8.9 Estimating a Wage Equation
We again return to our wage equation. We use the sample of wage earners from the March 2004 Current Population Survey, excluding military. For the dependent variable we use the natural log of wages so that coefficients may be interpreted as semi-elasticities. For regressors we include years of education, potential work experience, experience squared, and dummy variable indicators for the following: married, female, union member, immigrant, hispanic, and non-white. Furthermore, we included a dummy variable for state of residence (including the District of Columbia, this adds 50 regressors). The available sample is 18,808 so the parameter estimates are quite precise and reported in Table 4.1, excluding the coefficients on the state dummy variables.

Table 4.1 displays the parameter estimates in a standard format. The Table clearly states the estimation method (OLS), the dependent variable (log(Wage)), and the regressors are clearly labeled. Parameter estimates are reported for the coefficients of interest (the coefficients on the state dummy variables are omitted) and standard errors are reported for all reported coefficient estimates. In addition to the coefficient estimates, the table also reports the estimated error standard deviation and the sample size. These are useful summary measures of fit which aid readers.

Table 4.1
OLS Estimates of Linear Equation for Log(Wage)

                     β̂         s(β̂)
  Intercept        1.027       .032
  Education         .101       .002
  Experience        .033       .001
  Experience²     −.00057     .00002
  Married           .102       .008
  Female           −.232       .007
  Union Member      .097       .010
  Immigrant        −.121       .013
  Hispanic         −.102       .014
  Non-White        −.070       .010
  σ̂                .4877
  Sample Size     18,808

Note: Equation also includes state dummy variables.

As a general rule, it is best to always report standard errors along with parameter estimates (as done in Table 4.1). This allows readers to assess the precision of the parameter estimates, and form confidence intervals and t-tests on individual coefficients if desired. For example, if you are interested in the difference in mean wages between men and women, you can read from the table that the estimated coefficient on the Female dummy variable is −0.232, implying a mean wage difference of 23%. To assess the precision, you can see that the standard error for this coefficient estimate is 0.007. This implies a 95% asymptotic confidence interval for the coefficient estimate of [−.246, −.218]. This means that we have estimated the difference in mean wages between men and women to lie between 22% and 25%. I interpret this as a precise estimate because there is not an important difference between the lower and upper bound.

Instead of reporting standard errors, some empirical researchers report t-ratios for each parameter estimate. "t-ratios" are t-statistics which test the hypothesis that the coefficient equals zero. An example is reported in Table 4.2. In this example, all the t-ratios are highly significant, ranging in magnitude from 9.3 to 50. What we learn from these statistics is that these coefficients are non-zero, but not much more. In a sample of this size this finding is rather uninteresting; consequently the reporting of t-ratios is a waste of space. Again consider the male-female wage difference. Table 4.2 reports that the t-ratio is 33, enabling us to reject the hypothesis that the coefficient is zero. But how precise is the reported estimate of a wage gap of 23%? It is hard to assess from a quick reading of Table 4.2. Standard errors are much more useful, for they enable quick and easy assessment of the degree of estimation uncertainty.

Table 4.2
OLS Estimates of Linear Equation for Log(Wage)
Improper Reporting: t-ratios replacing standard errors

                     β̂          t
  Intercept        1.027        32
  Education         .101        50
  Experience        .033        33
  Experience²     −.00057       28
  Married           .102       12.8
  Female           −.232        33
  Union Member      .097        9.7
  Immigrant        −.121        9.3
  Hispanic         −.102        7.3
  Non-White        −.070         7

Returning to the estimated wage equation, one might question whether or not the state dummy variables are relevant. Computing the Wald statistic (8.1) that the state coefficients are jointly zero, we find W_n = 550. Alternatively, re-estimating the model with the 50 state dummies excluded, the restricted standard deviation estimate is σ̃ = .4945. The F form of the Wald statistic (8.2) is

    W_n = n(σ̃²/σ̂² − 1) = 18,808 × (.4945²/.4877² − 1) = 528.

Notice that the two statistics are close, but not equal. Using either statistic the hypothesis is easily rejected, as the 1% critical value for the χ²₅₀ distribution is 76.

Another interesting question which can be addressed from these estimates is the maximal impact of experience on mean wages. Ignoring the other coefficients, we can write this effect as

    log(Wage) = β₂ Experience + β₃ Experience² + ···

Our question is: At which level of experience θ do workers achieve the highest wage? In this quadratic model, if β₂ > 0 and β₃ < 0 the solution is

    θ = −β₂/(2β₃).

From Table 4.1 we find the point estimate

    θ̂ = −β̂₂/(2β̂₃) = 28.69.
Using the Delta Method, we can calculate a standard error of s(
ˆ
0) = .40, implying a 95% conﬁdence
interval of [27.9, 29.5].
However, this is a poor choice, as the coverage probability of this conﬁdence interval is one
minus the Type I error of the hypothesis test based on the ttest. In Section 8.7 we discovered
that such ttests have very poor Type I error rates. Instead, we found better Type I error rates by
reformulating the hypothesis as a linear restriction. These tstatistics take the form
\[
t_n(\theta) = \frac{\hat\beta_2 + 2\hat\beta_3\theta}{\left(h_\theta'\hat{V}h_\theta\right)^{1/2}}
\]
where
\[
h_\theta = \begin{pmatrix} 1 \\ 2\theta \end{pmatrix}
\]
and $\hat{V}$ is the covariance matrix for $(\hat\beta_2,\ \hat\beta_3)$.
In the present context we are interested in forming a confidence interval, not testing a hypothesis, so we have to go one step further. Our desired confidence interval will be the set of parameter values $\theta$ which are not rejected by the hypothesis test. This is the set of $\theta$ such that $|t_n(\theta)| \le 1.96$. Since $t_n(\theta)$ is a non-linear function of $\theta$, there is not a simple expression for this set, but it can be found numerically quite easily. This set is $[27.0, 29.5]$. Notice that the upper end of the confidence interval is the same as that from the delta method, but the lower end is substantially lower.
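To make the numerical inversion concrete, the following Python sketch scans a grid of candidate values $\theta$ and keeps those with $|t_n(\theta)| \le 1.96$. The coefficient values and the covariance matrix below are hypothetical placeholders standing in for the Table 4.1 estimates, so the printed interval is illustrative only.
\begin{verbatim}
import numpy as np

# Hypothetical stand-ins for the Table 4.1 estimates and the estimated
# covariance matrix of (beta2_hat, beta3_hat).
beta2_hat, beta3_hat = 0.033, -0.00057
V_hat = np.array([[1.0e-6, -1.5e-8],
                  [-1.5e-8, 4.0e-10]])

def t_stat(theta):
    """t-statistic for the linear restriction beta2 + 2*beta3*theta = 0."""
    h = np.array([1.0, 2.0 * theta])
    return (beta2_hat + 2.0 * beta3_hat * theta) / np.sqrt(h @ V_hat @ h)

# Invert the test: keep every theta whose |t| does not exceed 1.96.
grid = np.linspace(20.0, 40.0, 4001)
accepted = grid[np.abs([t_stat(th) for th in grid]) <= 1.96]
print("95% interval by test inversion:", accepted.min(), accepted.max())
\end{verbatim}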
Exercises

Exercise 8.1 Prove that if an additional regressor $X_{k+1}$ is added to $X$, Theil's adjusted $\bar{R}^2$ increases if and only if $|t_{k+1}| > 1$, where $t_{k+1} = \hat\beta_{k+1}/s(\hat\beta_{k+1})$ is the t-ratio for $\hat\beta_{k+1}$ and
\[
s(\hat\beta_{k+1}) = \left(s^2\left[(X'X)^{-1}\right]_{k+1,k+1}\right)^{1/2}
\]
is the homoskedasticity-formula standard error.

Exercise 8.2 You have two independent samples $(y_1, X_1)$ and $(y_2, X_2)$ which satisfy $y_1 = X_1\beta_1 + e_1$ and $y_2 = X_2\beta_2 + e_2$, where $E(x_{1i}e_{1i}) = 0$ and $E(x_{2i}e_{2i}) = 0$, and both $X_1$ and $X_2$ have $k$ columns. Let $\hat\beta_1$ and $\hat\beta_2$ be the OLS estimates of $\beta_1$ and $\beta_2$. For simplicity, you may assume that both samples have the same number of observations $n$.

(a) Find the asymptotic distribution of $\sqrt{n}\left(\left(\hat\beta_2 - \hat\beta_1\right) - (\beta_2 - \beta_1)\right)$ as $n \to \infty$.

(b) Find an appropriate test statistic for $H_0: \beta_2 = \beta_1$.

(c) Find the asymptotic distribution of this statistic under $H_0$.
Exercise 8.3 The data set invest.dat contains data on 565 U.S. ﬁrms extracted from Compustat
for the year 1987. The variables, in order, are
• $I_i$: Investment to Capital Ratio (multiplied by 100).

• $Q_i$: Total Market Value to Asset Ratio (Tobin's Q).

• $C_i$: Cash Flow to Asset Ratio.

• $D_i$: Long Term Debt to Asset Ratio.

The flow variables are annual sums for 1987. The stock variables are beginning of year.

(a) Estimate a linear regression of $I_i$ on the other variables. Calculate appropriate standard errors.

(b) Calculate asymptotic confidence intervals for the coefficients.

(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by $Q_i$. Thus the coefficient on $Q_i$ should be positive and the others should be zero. Test the joint hypothesis that the coefficients on $C_i$ and $D_i$ are zero. Test the hypothesis that the coefficient on $Q_i$ is zero. Are the results consistent with the predictions of the theory?

(d) Now try a non-linear (quadratic) specification. Regress $I_i$ on $Q_i$, $C_i$, $D_i$, $Q_i^2$, $C_i^2$, $D_i^2$, $Q_iC_i$, $Q_iD_i$, $C_iD_i$. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.
Exercise 8.4 In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, Section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi.) The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlove was interested in estimating a cost function: $TC = f(Q, PL, PF, PK)$.
(a) First estimate an unrestricted Cobb-Douglas specification
\[
\log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i. \qquad (8.5)
\]
Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.

(b) What is the economic meaning of the restriction $H_0: \beta_3 + \beta_4 + \beta_5 = 1$?

(c) Estimate (8.5) by constrained least-squares imposing $\beta_3 + \beta_4 + \beta_5 = 1$. Report your parameter estimates and standard errors.

(d) Estimate (8.5) by efficient minimum distance imposing $\beta_3 + \beta_4 + \beta_5 = 1$. Report your parameter estimates and standard errors.

(e) Test $H_0: \beta_3 + \beta_4 + \beta_5 = 1$ using a Wald statistic.

(f) Test $H_0: \beta_3 + \beta_4 + \beta_5 = 1$ using a minimum distance statistic.
Chapter 9
Additional Regression Topics
9.1 Generalized Least Squares
In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model
\[
y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0,
\]
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator (5.12) introduced in Section 5.4.1. The GLS estimator is sometimes called the Aitken estimator. The GLS estimator (9.1) is infeasible since the matrix $D$ is unknown. A feasible GLS (FGLS) estimator replaces the unknown $D$ with an estimate $\hat{D} = \mathrm{diag}\{\hat\sigma_1^2, ..., \hat\sigma_n^2\}$. We now discuss this estimation problem.

Suppose that we model the conditional variance using the parametric form
\[
\sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha' z_i,
\]
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.

Let $\eta_i = e_i^2$. Then
\[
E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1
\]
and we have the regression equation
\[
\eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i, \qquad E(\xi_i \mid x_i) = 0. \qquad (9.1)
\]
This regression error $\xi_i$ is generally heteroskedastic and has the conditional variance
\[
\mathrm{var}(\xi_i \mid x_i) = \mathrm{var}\left(e_i^2 \mid x_i\right) = E\left(\left(e_i^2 - E(e_i^2 \mid x_i)\right)^2 \mid x_i\right) = E\left(e_i^4 \mid x_i\right) - \left(E\left(e_i^2 \mid x_i\right)\right)^2.
\]
Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
\[
\hat\alpha = \left(Z'Z\right)^{-1}Z'\eta \overset{p}{\longrightarrow} \alpha
\]
and
\[
\sqrt{n}\left(\hat\alpha - \alpha\right) \overset{d}{\longrightarrow} N(0, V_\alpha)
\]
where
\[
V_\alpha = \left(E\left(z_iz_i'\right)\right)^{-1} E\left(z_iz_i'\xi_i^2\right)\left(E\left(z_iz_i'\right)\right)^{-1}. \qquad (9.2)
\]
While $e_i$ is not observed, we have the OLS residual $\hat{e}_i = y_i - x_i'\hat\beta = e_i - x_i'(\hat\beta - \beta)$. Thus
\[
\phi_i \equiv \hat\eta_i - \eta_i = \hat{e}_i^2 - e_i^2 = -2 e_i x_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)' x_i x_i' \left(\hat\beta - \beta\right).
\]
And then
\[
\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i \phi_i = \frac{-2}{n}\sum_{i=1}^n z_i e_i x_i' \sqrt{n}\left(\hat\beta - \beta\right) + \frac{1}{n}\sum_{i=1}^n z_i \left(\hat\beta - \beta\right)' x_i x_i' \left(\hat\beta - \beta\right)\sqrt{n} \overset{p}{\longrightarrow} 0.
\]
Let
\[
\tilde\alpha = \left(Z'Z\right)^{-1}Z'\hat\eta \qquad (9.3)
\]
be from OLS regression of $\hat\eta_i$ on $z_i$. Then
\[
\sqrt{n}\left(\tilde\alpha - \alpha\right) = \sqrt{n}\left(\hat\alpha - \alpha\right) + \left(n^{-1}Z'Z\right)^{-1} n^{-1/2} Z'\phi \overset{d}{\longrightarrow} N(0, V_\alpha). \qquad (9.4)
\]
Thus the fact that $\eta_i$ is replaced with $\hat\eta_i$ is asymptotically irrelevant. We call (9.3) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$. We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma_i^2 = \alpha' z_i$ by
\[
\tilde\sigma_i^2 = \tilde\alpha' z_i. \qquad (9.5)
\]
Suppose that $\tilde\sigma_i^2 > 0$ for all $i$. Then set
\[
\tilde{D} = \mathrm{diag}\{\tilde\sigma_1^2, ..., \tilde\sigma_n^2\}
\]
and
\[
\tilde\beta = \left(X'\tilde{D}^{-1}X\right)^{-1} X'\tilde{D}^{-1}y.
\]
This is the feasible GLS, or FGLS, estimator of $\beta$. Since there is not a unique specification for the conditional variance the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.

One typical problem with implementation of FGLS estimation is that in the linear specification (9.1), there is no guarantee that $\tilde\sigma_i^2 > 0$ for all $i$. If $\tilde\sigma_i^2 < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde\sigma_i^2 \approx 0$ for some $i$ then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule takes the form
\[
\bar\sigma_i^2 = \max\left[\tilde\sigma_i^2,\ c\hat\sigma^2\right]
\]
for some $c > 0$. For example, setting $c = 1/4$ means that the conditional variance function is constrained to exceed one-fourth of the unconditional variance. As there is no clear method to select $c$, this introduces a degree of arbitrariness. In this context it is useful to re-estimate the model with several choices for the trimming parameter. If the estimates turn out to be sensitive to its choice, the estimation method should probably be reconsidered.
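A minimal NumPy sketch of the procedure just described (OLS, skedastic regression, trimming rule, then weighted least squares) is given below. The function name and the convention that $Z$ contains an intercept are choices made for the illustration, not part of the theory.
\begin{verbatim}
import numpy as np

def fgls(y, X, Z, c=0.25):
    """Feasible GLS sketch: OLS, skedastic regression of squared
    residuals on Z, trimming of the fitted variances, then weighted
    least squares.  Z should contain an intercept column."""
    n = len(y)
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    e_hat = y - X @ beta_ols
    sigma2_hat = e_hat @ e_hat / n

    # skedastic regression (9.3): regress e_hat^2 on z_i
    alpha_tilde = np.linalg.solve(Z.T @ Z, Z.T @ e_hat**2)
    sig2_tilde = Z @ alpha_tilde                       # sigma_i^2 tilde (9.5)
    sig2_bar = np.maximum(sig2_tilde, c * sigma2_hat)  # trimming rule

    # weighted least squares with weights 1/sigma_i^2
    W = 1.0 / sig2_bar
    beta_fgls = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y * W))
    return beta_fgls, beta_ols
\end{verbatim}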
It is possible to show that if the skedastic regression is correctly speciﬁed, then FGLS is asymp
totically equivalent to GLS. As the proof is tricky, we just state the result without proof.
Theorem 9.1.1 If the skedastic regression is correctly specified,
\[
\sqrt{n}\left(\tilde\beta_{GLS} - \tilde\beta_{FGLS}\right) \overset{p}{\longrightarrow} 0,
\]
and thus
\[
\sqrt{n}\left(\tilde\beta_{FGLS} - \beta\right) \overset{d}{\longrightarrow} N(0, V_\beta),
\]
where
\[
V_\beta = \left(E\left(\sigma_i^{-2} x_i x_i'\right)\right)^{-1}.
\]

Examining the asymptotic distribution of Theorem 9.1.1, the natural estimator of the asymptotic variance of $\tilde\beta$ is
\[
\tilde{V}_\beta^0 = \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2} x_i x_i'\right)^{-1} = \left(\frac{1}{n} X'\tilde{D}^{-1}X\right)^{-1},
\]
which is consistent for $V_\beta$ as $n \to \infty$. This estimator $\tilde{V}_\beta^0$ is appropriate when the skedastic regression (9.1) is correctly specified.

It may be the case that $\alpha' z_i$ is only an approximation to the true conditional variance $\sigma_i^2 = E(e_i^2 \mid x_i)$. In this case we interpret $\alpha' z_i$ as a linear projection of $e_i^2$ on $z_i$. $\tilde\beta$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 9.1.1. Instead,
\[
V_\beta = \left(E\left(\left(\alpha'z_i\right)^{-1} x_i x_i'\right)\right)^{-1} E\left(\left(\alpha'z_i\right)^{-2} \sigma_i^2 x_i x_i'\right) \left(E\left(\left(\alpha'z_i\right)^{-1} x_i x_i'\right)\right)^{-1}.
\]
$V_\beta$ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless $\sigma_i^2 = \alpha'z_i$, $\tilde{V}_\beta^0$ is inconsistent for $V_\beta$.

An appropriate solution is to use a White-type estimator in place of $\tilde{V}_\beta^0$. This may be written as
\[
\tilde{V}_\beta = \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2} x_i x_i'\right)^{-1} \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-4} \hat{e}_i^2 x_i x_i'\right) \left(\frac{1}{n}\sum_{i=1}^n \tilde\sigma_i^{-2} x_i x_i'\right)^{-1}
= \left(\frac{1}{n} X'\tilde{D}^{-1}X\right)^{-1} \left(\frac{1}{n} X'\tilde{D}^{-1}\hat{D}\tilde{D}^{-1}X\right) \left(\frac{1}{n} X'\tilde{D}^{-1}X\right)^{-1}
\]
where $\hat{D} = \mathrm{diag}\{\hat{e}_1^2, ..., \hat{e}_n^2\}$. This estimator is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not
exclusively estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on speciﬁcation and estimation of the skedastic regression.
Since the form of the skedastic regression is unknown, and it may be estimated with considerable
error, the estimated conditional variances may contain more noise than information about the true
conditional variances. In this case, FGLS can do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming
to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, and probably most importantly, OLS is a robust estimator of the parameter vector. It
is consistent not only in the regression model, but also under the assumptions of linear projection.
The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional
mean. If the equation of interest is a linear projection and not a conditional mean, then the OLS
and FGLS estimators will converge in probability to different limits as they will be estimating two different projections. The FGLS probability limit will depend on the particular function selected for the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger assumption of a correct conditional mean, and the cost is a loss of robustness to misspecification.
9.2 Testing for Heteroskedasticity
The hypothesis of homoskedasticity is that $E(e_i^2 \mid x_i) = \sigma^2$, or equivalently that
\[
H_0: \alpha_1 = 0
\]
in the regression (9.1). We may therefore test this hypothesis by the estimation (9.3) and constructing a Wald statistic. In the classic literature it is typical to impose the stronger assumption that $e_i$ is independent of $x_i$, in which case $\xi_i$ is independent of $x_i$ and the asymptotic variance (9.2) for $\tilde\alpha$ simplifies to
\[
V_\alpha = \left(E\left(z_iz_i'\right)\right)^{-1} E\left(\xi_i^2\right). \qquad (9.6)
\]
Hence the standard test of $H_0$ is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (9.3). The asymptotic distribution (9.4) and the asymptotic variance (9.6) under independence show that this test has an asymptotic chi-square distribution.

Theorem 9.2.1 Under $H_0$ and $e_i$ independent of $x_i$, the Wald test of $H_0$ is asymptotically $\chi^2_q$.

Most tests for heteroskedasticity take this basic form. The main differences between popular tests are which transformations of $x_i$ enter $z_i$. Motivated by the form of the asymptotic variance of the OLS estimator $\hat\beta$, White (1980) proposed that the test for heteroskedasticity be based on setting $z_i$ to equal all non-redundant elements of $x_i$, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of $z_i$, and replaced $E\left(\xi_i^2\right)$ with $2\sigma^4$ which holds when $e_i$ is $N(0, \sigma^2)$. If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.

It is important not to misuse tests for heteroskedasticity. They should not be used to determine whether to estimate a regression equation by OLS or FGLS, nor to determine whether classic or White standard errors should be reported. Hypothesis tests are not designed for these purposes. Rather, tests for heteroskedasticity should be used to answer the scientific question of whether or not the conditional variance is a function of the regressors. If this question is not of economic interest, then there is no value in conducting a test for heteroskedasticity.
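The following sketch implements the exclusion test described above, using the classic covariance formula corresponding to (9.6). The helper name het_test and the convention that Z1 holds the candidate variance regressors without an intercept are illustrative choices.
\begin{verbatim}
import numpy as np
from scipy import stats

def het_test(y, X, Z1):
    """Test H0: alpha_1 = 0 in the skedastic regression (9.1).
    Z1 holds the q candidate variance regressors (no intercept)."""
    n = len(y)
    e2 = (y - X @ np.linalg.solve(X.T @ X, X.T @ y))**2   # squared OLS residuals
    Z = np.column_stack([np.ones(n), Z1])                 # add intercept
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ e2)
    xi = e2 - Z @ alpha                                   # skedastic residuals
    V = (xi @ xi / n) * np.linalg.inv(Z.T @ Z)            # classic covariance
    a1 = alpha[1:]
    W = a1 @ np.linalg.solve(V[1:, 1:], a1)               # Wald statistic
    q = Z1.shape[1]
    return W, 1 - stats.chi2.cdf(W, q)                    # statistic and p-value
\end{verbatim}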
9.3 Forecast Intervals
For a given value of $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable rule is the conditional mean $m(x)$ as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\hat{m}(x) = x'\hat\beta$. We would also like a measure of uncertainty for the forecast.

The forecast error is $\hat{e}_i = y_i - \hat{m}(x) = e_i - x'\left(\hat\beta - \beta\right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat\beta$, this has variance
\[
E\hat{e}_i^2 = E\left(e_i^2 \mid x_i = x\right) + x' E\left(\left(\hat\beta - \beta\right)\left(\hat\beta - \beta\right)'\right) x = \sigma^2(x) + n^{-1} x' V_\beta x.
\]
Assuming $E\left(e_i^2 \mid x_i\right) = \sigma^2$, the natural estimate of this variance is $\hat\sigma^2 + n^{-1}x'\hat{V}_\beta x$, so a standard error for the forecast is $\hat{s}(x) = \sqrt{\hat\sigma^2 + n^{-1}x'\hat{V}_\beta x}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\tilde\sigma^2(x) = \tilde\alpha'z$ from (9.5), then the forecast standard error is $\hat{s}(x) = \sqrt{\tilde\sigma^2(x) + n^{-1}x'\hat{V}_\beta x}$.

It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
\[
\left[x'\hat\beta \pm 2\hat{s}(x)\right],
\]
but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
\[
\frac{e_i - x'\left(\hat\beta - \beta\right)}{\hat{s}(x)}.
\]
But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $N(0, \sigma^2)$, which is generally invalid.

To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Perhaps due to this difficulty, many applied forecasters use the simple approximate interval $\left[x'\hat\beta \pm 2\hat{s}(x)\right]$ despite the lack of a convincing justification.
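As a small illustration, the approximate interval $x'\hat\beta \pm 2\hat{s}(x)$ can be computed as follows; the function assumes a constant conditional variance estimate and takes the estimated asymptotic covariance matrix $\hat{V}_\beta$ of $\sqrt{n}(\hat\beta - \beta)$ as an input.
\begin{verbatim}
import numpy as np

def forecast_interval(x, beta_hat, V_beta_hat, sigma2_hat, n):
    """Approximate forecast interval x'beta_hat +/- 2*s_hat(x), assuming a
    constant conditional variance estimate sigma2_hat."""
    point = x @ beta_hat
    s_hat = np.sqrt(sigma2_hat + x @ V_beta_hat @ x / n)
    return point - 2 * s_hat, point + 2 * s_hat
\end{verbatim}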
9.4 Non-Linear Least Squares

In some cases we might use a parametric regression function $m(x, \theta) = E(y_i \mid x_i = x)$ which is a non-linear function of the parameters $\theta$. We describe this setting as non-linear regression. Examples of non-linear regression functions include
\[
\begin{aligned}
m(x, \theta) &= \theta_1 + \theta_2 \frac{x}{1 + \theta_3 x} \\
m(x, \theta) &= \theta_1 + \theta_2 x^{\theta_3} \\
m(x, \theta) &= \theta_1 + \theta_2 \exp(\theta_3 x) \\
m(x, \theta) &= G(x'\theta), \quad G \text{ known} \\
m(x, \theta) &= \theta_1' x_1 + \left(\theta_2' x_1\right)\Phi\left(\frac{x_2 - \theta_3}{\theta_4}\right) \\
m(x, \theta) &= \theta_1 + \theta_2 x + \theta_3 (x - \theta_4)\, 1(x > \theta_4) \\
m(x, \theta) &= \left(\theta_1' x_1\right) 1(x_2 < \theta_3) + \left(\theta_2' x_1\right) 1(x_2 > \theta_3)
\end{aligned}
\]
In the first five examples, $m(x, \theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_4$ and $\theta_3$, which alters some of the analysis. When it exists, let
\[
m_\theta(x, \theta) = \frac{\partial}{\partial\theta} m(x, \theta).
\]
Non-linear regression is sometimes adopted because the functional form $m(x, \theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.

The least squares estimator $\hat\theta$ minimizes the normalized sum-of-squared-errors
\[
S_n(\theta) = \frac{1}{n}\sum_{i=1}^n \left(y_i - m(x_i, \theta)\right)^2.
\]
When the regression function is non-linear, we call this the non-linear least squares (NLLS) estimator. The NLLS residuals are $\hat{e}_i = y_i - m(x_i, \hat\theta)$.

One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta E\left(y_i - m(x_i, \theta)\right)^2$.

Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat\theta$ must be found by numerical methods. See Appendix E. When $m(x, \theta)$ is differentiable, then the FOC for minimization are
\[
0 = \sum_{i=1}^n m_\theta\left(x_i, \hat\theta\right)\hat{e}_i. \qquad (9.7)
\]
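To illustrate, the sketch below fits the exponential example $m(x,\theta) = \theta_1 + \theta_2\exp(\theta_3 x)$ by NLLS on simulated data and forms a sandwich variance estimate of the type given in the theorem below, using the numerical Jacobian returned by the optimizer. The data-generating values and starting values are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, x, y):
    # y_i - m(x_i, theta) for m(x, theta) = theta1 + theta2*exp(theta3*x)
    return y - (theta[0] + theta[1] * np.exp(theta[2] * x))

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 200)
y = 1.0 + 0.5 * np.exp(-1.5 * x) + 0.1 * rng.standard_normal(200)

fit = least_squares(residuals, x0=np.array([0.5, 1.0, -1.0]), args=(x, y))
theta_hat = fit.x            # NLLS estimate
e_hat = fit.fun              # NLLS residuals

# Sandwich variance estimate; m_theta is minus the Jacobian of the residuals,
# here approximated by finite differences inside least_squares.
n = len(y)
M = -fit.jac
A = np.linalg.inv(M.T @ M / n)
B = (M * e_hat[:, None]**2).T @ M / n
V_theta = A @ B @ A
\end{verbatim}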
Theorem 9.4.1 Asymptotic Distribution of NLLS Estimator
If the model is identified and $m(x, \theta)$ is differentiable with respect to $\theta$,
\[
\sqrt{n}\left(\hat\theta - \theta\right) \overset{d}{\longrightarrow} N(0, V_\theta)
\]
\[
V_\theta = \left(E\left(m_{\theta i} m_{\theta i}'\right)\right)^{-1} E\left(m_{\theta i} m_{\theta i}' e_i^2\right) \left(E\left(m_{\theta i} m_{\theta i}'\right)\right)^{-1}
\]
where $m_{\theta i} = m_\theta(x_i, \theta_0)$.

Based on Theorem 9.4.1, an estimate of the asymptotic variance $V_\theta$ is
\[
\hat{V}_\theta = \left(\frac{1}{n}\sum_{i=1}^n \hat{m}_{\theta i}\hat{m}_{\theta i}'\right)^{-1} \left(\frac{1}{n}\sum_{i=1}^n \hat{m}_{\theta i}\hat{m}_{\theta i}'\hat{e}_i^2\right) \left(\frac{1}{n}\sum_{i=1}^n \hat{m}_{\theta i}\hat{m}_{\theta i}'\right)^{-1}
\]
where $\hat{m}_{\theta i} = m_\theta(x_i, \hat\theta)$ and $\hat{e}_i = y_i - m(x_i, \hat\theta)$.

Identification is often tricky in non-linear regression models. Suppose that
\[
m(x_i, \theta) = \beta_1' z_i + \beta_2' x_i(\gamma)
\]
where $x_i(\gamma)$ is a function of $x_i$ and the unknown parameter $\gamma$. Examples include $x_i(\gamma) = x_i^\gamma$, $x_i(\gamma) = \exp(x_i\gamma)$, and $x_i(\gamma) = x_i\, 1(g(x_i) > \gamma)$. The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
\[
H_0: \beta_2 = 0.
\]
However, under $H_0$, the model is
\[
y_i = \beta_1' z_i + e_i
\]
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions.

The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.
Proof of Theorem 9.4.1 (Sketch). NLLS estimation falls in the class of optimization estimators. For this theory, it is useful to denote the true value of the parameter $\theta$ as $\theta_0$.

The first step is to show that $\hat\theta \overset{p}{\longrightarrow} \theta_0$. Proving that non-linear estimators are consistent is more challenging than for linear estimators. We sketch the main argument. The idea is that $\hat\theta$ minimizes the sample criterion function $S_n(\theta)$, which (for any $\theta$) converges in probability to the mean-squared error function $E\left(y_i - m(x_i, \theta)\right)^2$. Thus it seems reasonable that the minimizer $\hat\theta$ will converge in probability to $\theta_0$, the minimizer of $E\left(y_i - m(x_i, \theta)\right)^2$. It turns out that to show this rigorously, we need to show that $S_n(\theta)$ converges uniformly to its expectation $E\left(y_i - m(x_i, \theta)\right)^2$, which means that the maximum discrepancy must converge in probability to zero, to exclude the possibility that $S_n(\theta)$ is excessively wiggly in $\theta$. Proving uniform convergence is technically challenging, but it can be shown to hold broadly for relevant non-linear regression models, especially if the regression function $m(x_i, \theta)$ is differentiable in $\theta$. For a complete treatment of the theory of optimization estimators see Newey and McFadden (1994).
Since $\hat\theta \overset{p}{\longrightarrow} \theta_0$, $\hat\theta$ is close to $\theta_0$ for $n$ large, so the minimization of $S_n(\theta)$ only needs to be examined for $\theta$ close to $\theta_0$. Let
\[
y_i^0 = e_i + m_{\theta i}'\theta_0.
\]
For $\theta$ close to the true value $\theta_0$, by a first-order Taylor series approximation,
\[
m(x_i, \theta) \simeq m(x_i, \theta_0) + m_{\theta i}'\left(\theta - \theta_0\right).
\]
Thus
\[
y_i - m(x_i, \theta) \simeq \left(e_i + m(x_i, \theta_0)\right) - \left(m(x_i, \theta_0) + m_{\theta i}'\left(\theta - \theta_0\right)\right) = e_i - m_{\theta i}'\left(\theta - \theta_0\right) = y_i^0 - m_{\theta i}'\theta.
\]
Hence the sum of squared errors function is
\[
S_n(\theta) = \sum_{i=1}^n \left(y_i - m(x_i, \theta)\right)^2 \simeq \sum_{i=1}^n \left(y_i^0 - m_{\theta i}'\theta\right)^2
\]
and the right-hand-side is the SSE function for a linear regression of $y_i^0$ on $m_{\theta i}$. Thus the NLLS estimator $\hat\theta$ has the same asymptotic distribution as the (infeasible) OLS regression of $y_i^0$ on $m_{\theta i}$, which is that stated in the theorem.
9.5 Least Absolute Deviations
We stated that a conventional goal in econometrics is estimation of the impact of variation in $x_i$ on the central tendency of $y_i$. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.

To recall the definition and properties of the median, let $y$ be a continuous random variable. The median $\theta = \mathrm{med}(y)$ is the value such that $\Pr(y \le \theta) = \Pr(y \ge \theta) = .5$. Two useful facts about the median are that
\[
\theta = \underset{\theta}{\mathrm{argmin}}\ E|y - \theta| \qquad (9.8)
\]
and
\[
E\,\mathrm{sgn}(y - \theta) = 0
\]
where
\[
\mathrm{sgn}(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0 \end{cases}
\]
is the sign function.

These facts and definitions motivate three estimators of $\theta$. The first definition is the 50th empirical quantile. The second is the value which minimizes $\frac{1}{n}\sum_{i=1}^n |y_i - \theta|$, and the third definition is the solution to the moment equation $\frac{1}{n}\sum_{i=1}^n \mathrm{sgn}(y_i - \theta) = 0$. These distinctions are illusory, however, as these estimators are indeed identical.
Now let's consider the conditional median of $y$ given a random vector $x$. Let $m(x) = \mathrm{med}(y \mid x)$ denote the conditional median of $y$ given $x$. The linear median regression model takes the form
\[
y_i = x_i'\beta + e_i, \qquad \mathrm{med}(e_i \mid x_i) = 0.
\]
In this model, the linear function $\mathrm{med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption is that the median function is linear in $x$.

Conditional analogs of the facts about the median are

• $\Pr(y_i \le x'\beta \mid x_i = x) = \Pr(y_i > x'\beta \mid x_i = x) = .5$

• $E(\mathrm{sgn}(e_i) \mid x_i) = 0$

• $E(x_i\,\mathrm{sgn}(e_i)) = 0$

• $\beta = \underset{\beta}{\mathrm{argmin}}\ E\left|y_i - x_i'\beta\right|$

These facts motivate the following estimator. Let
\[
LAD_n(\beta) = \frac{1}{n}\sum_{i=1}^n \left|y_i - x_i'\beta\right|
\]
be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function
\[
\hat\beta = \underset{\beta}{\mathrm{argmin}}\ LAD_n(\beta).
\]
Equivalently, it is a solution to the moment condition
\[
\frac{1}{n}\sum_{i=1}^n x_i\,\mathrm{sgn}\left(y_i - x_i'\hat\beta\right) = 0. \qquad (9.9)
\]
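A naive way to compute the LAD estimator is to minimize $LAD_n(\beta)$ directly; the sketch below does so with a derivative-free optimizer starting from OLS, which is adequate for small problems. Dedicated linear-programming algorithms are the standard tool in practice.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def lad(y, X):
    """Least absolute deviations sketch: minimize LAD_n(beta) directly.
    (Production code would use a linear-programming formulation.)"""
    beta0 = np.linalg.solve(X.T @ X, X.T @ y)       # OLS starting values
    crit = lambda b: np.abs(y - X @ b).mean()       # LAD_n(beta)
    out = minimize(crit, beta0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    return out.x
\end{verbatim}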
The LAD estimator has an asymptotic normal distribution.
Theorem 9.5.1 Asymptotic Distribution of LAD Estimator
When the conditional median is linear in $x$,
\[
\sqrt{n}\left(\hat\beta - \beta\right) \overset{d}{\longrightarrow} N(0, V)
\]
where
\[
V = \frac{1}{4}\left(E\left(x_ix_i'f(0 \mid x_i)\right)\right)^{-1}\left(E x_ix_i'\right)\left(E\left(x_ix_i'f(0 \mid x_i)\right)\right)^{-1}
\]
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

The variance of the asymptotic distribution inversely depends on $f(0 \mid x)$, the conditional density of the error at its median. When $f(0 \mid x)$ is large, then there are many innovations near to the median, and this improves estimation of the median. In the special case where the error is independent of $x_i$, then $f(0 \mid x) = f(0)$ and the asymptotic variance simplifies to
\[
V = \frac{\left(E x_ix_i'\right)^{-1}}{4 f(0)^2}. \qquad (9.10)
\]
This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.

Computation of standard errors for LAD estimates typically is based on equation (9.10). The main difficulty is the estimation of $f(0)$, the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 18. While a complete proof of Theorem 9.5.1 is advanced, we provide a sketch here for completeness.
Proof of Theorem 9.5.1: Similar to NLLS, LAD is an optimization estimator. Let $\beta_0$ denote the true value of $\beta$.

The first step is to show that $\hat\beta \overset{p}{\longrightarrow} \beta_0$. The general nature of the proof is similar to that for the NLLS estimator, and is sketched here. For any fixed $\beta$, by the WLLN, $LAD_n(\beta) \overset{p}{\longrightarrow} E\left|y_i - x_i'\beta\right|$. Furthermore, it can be shown that this convergence is uniform in $\beta$. (Proving uniform convergence is more challenging than for the NLLS criterion since the LAD criterion is not differentiable in $\beta$.) It follows that $\hat\beta$, the minimizer of $LAD_n(\beta)$, converges in probability to $\beta_0$, the minimizer of $E\left|y_i - x_i'\beta\right|$.

Since $\mathrm{sgn}(a) = 1 - 2\cdot 1(a \le 0)$, (9.9) is equivalent to $g_n(\hat\beta) = 0$, where $g_n(\beta) = n^{-1}\sum_{i=1}^n g_i(\beta)$ and $g_i(\beta) = x_i\left(1 - 2\cdot 1\left(y_i \le x_i'\beta\right)\right)$. Let $g(\beta) = E g_i(\beta)$. We need three preliminary results. First, by the central limit theorem (Theorem 2.8.1),
\[
\sqrt{n}\left(g_n(\beta_0) - g(\beta_0)\right) = n^{-1/2}\sum_{i=1}^n g_i(\beta_0) \overset{d}{\longrightarrow} N\left(0, E x_ix_i'\right)
\]
since $E\left(g_i(\beta_0)g_i(\beta_0)'\right) = E x_ix_i'$. Second, using the law of iterated expectations and the chain rule of differentiation,
\[
\begin{aligned}
\frac{\partial}{\partial\beta'} g(\beta) &= \frac{\partial}{\partial\beta'} E\left[x_i\left(1 - 2\cdot 1\left(y_i \le x_i'\beta\right)\right)\right] \\
&= -2\frac{\partial}{\partial\beta'} E\left[x_i\, E\left(1\left(e_i \le x_i'\beta - x_i'\beta_0\right) \mid x_i\right)\right] \\
&= -2\frac{\partial}{\partial\beta'} E\left[x_i \int_{-\infty}^{x_i'\beta - x_i'\beta_0} f(e \mid x_i)\, de\right] \\
&= -2 E\left[x_ix_i' f\left(x_i'\beta - x_i'\beta_0 \mid x_i\right)\right]
\end{aligned}
\]
so
\[
\frac{\partial}{\partial\beta'} g(\beta_0) = -2 E\left[x_ix_i' f(0 \mid x_i)\right].
\]
Third, by a Taylor series expansion and the fact $g(\beta_0) = 0$,
\[
g(\hat\beta) \simeq \frac{\partial}{\partial\beta'} g(\beta_0)\left(\hat\beta - \beta_0\right).
\]
Together,
\[
\begin{aligned}
\sqrt{n}\left(\hat\beta - \beta_0\right) &\simeq \left(\frac{\partial}{\partial\beta'} g(\beta_0)\right)^{-1}\sqrt{n}\, g(\hat\beta) \\
&= \left(-2E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}\sqrt{n}\left(g(\hat\beta) - g_n(\hat\beta)\right) \\
&\simeq \frac{1}{2}\left(E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1}\sqrt{n}\left(g_n(\beta_0) - g(\beta_0)\right) \\
&\overset{d}{\longrightarrow} \frac{1}{2}\left(E\left[x_ix_i'f(0 \mid x_i)\right]\right)^{-1} N\left(0, E x_ix_i'\right) \\
&= N(0, V).
\end{aligned}
\]
The third line follows from an asymptotic empirical process argument and the fact that $\hat\beta \overset{p}{\longrightarrow} \beta_0$.
9.6 Quantile Regression
Quantile regression has become quite popular in recent econometric practice. For $\tau \in [0, 1]$ the $\tau$'th quantile $Q_\tau$ of a random variable with distribution function $F(u)$ is defined as
\[
Q_\tau = \inf\{u : F(u) \ge \tau\}.
\]
When $F(u)$ is continuous and strictly monotonic, then $F(Q_\tau) = \tau$, so you can think of the quantile as the inverse of the distribution function. The quantile $Q_\tau$ is the value such that $\tau$ (percent) of the mass of the distribution is less than $Q_\tau$. The median is the special case $\tau = .5$.

The following alternative representation is useful. If the random variable $U$ has $\tau$'th quantile $Q_\tau$, then
\[
Q_\tau = \underset{\theta}{\mathrm{argmin}}\ E\,\rho_\tau(U - \theta) \qquad (9.11)
\]
where $\rho_\tau(q)$ is the piecewise linear function
\[
\rho_\tau(q) = \begin{cases} -q(1 - \tau) & q < 0 \\ q\tau & q \ge 0 \end{cases} \;=\; q\left(\tau - 1(q < 0)\right). \qquad (9.12)
\]
This generalizes representation (9.8) for the median to all quantiles.

For the random variables $(y_i, x_i)$ with conditional distribution function $F(y \mid x)$ the conditional quantile function $Q_\tau(x)$ is
\[
Q_\tau(x) = \inf\{y : F(y \mid x) \ge \tau\}.
\]
Again, when $F(y \mid x)$ is continuous and strictly monotonic in $y$, then $F(Q_\tau(x) \mid x) = \tau$. For fixed $\tau$, the quantile regression function $Q_\tau(x)$ describes how the $\tau$'th quantile of the conditional distribution varies with the regressors.

As functions of $x$, the quantile regression functions can take any shape. However for computational convenience it is typical to assume that they are (approximately) linear in $x$ (after suitable transformations). This linear specification assumes that $Q_\tau(x) = \beta_\tau' x$ where the coefficients $\beta_\tau$ vary across the quantiles $\tau$. We then have the linear quantile regression model
\[
y_i = x_i'\beta_\tau + e_i
\]
where $e_i$ is the error defined to be the difference between $y_i$ and its $\tau$'th conditional quantile $x_i'\beta_\tau$. By construction, the $\tau$'th conditional quantile of $e_i$ is zero; otherwise its properties are unspecified without further restrictions.

Given the representation (9.11), the quantile regression estimator $\hat\beta_\tau$ for $\beta_\tau$ solves the minimization problem
\[
\hat\beta_\tau = \underset{\beta}{\mathrm{argmin}}\ S_n^\tau(\beta)
\]
where
\[
S_n^\tau(\beta) = \frac{1}{n}\sum_{i=1}^n \rho_\tau\left(y_i - x_i'\beta\right)
\]
and $\rho_\tau(q)$ is defined in (9.12).

Since the quantile regression criterion function $S_n^\tau(\beta)$ does not have an algebraic solution, numerical methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have been developed for this problem, and are widely available.
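For concreteness, a direct (if slow) way to compute $\hat\beta_\tau$ is to minimize $S_n^\tau(\beta)$ numerically, as in the sketch below; it is only illustrative, since the criterion is non-smooth and the linear-programming methods just mentioned are preferred in practice.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def quantile_reg(y, X, tau):
    """Quantile regression sketch: minimize the check-function criterion
    S_n^tau(beta) with a derivative-free optimizer (illustrative only)."""
    rho = lambda q: q * (tau - (q < 0))                  # check function (9.12)
    crit = lambda b: rho(y - X @ b).mean()               # S_n^tau(beta)
    beta0 = np.linalg.solve(X.T @ X, X.T @ y)            # OLS starting values
    return minimize(crit, beta0, method="Nelder-Mead",
                    options={"maxiter": 50000}).x
\end{verbatim}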
An asymptotic distribution theory for the quantile regression estimator can be derived using
similar arguments as those for the LAD estimator in Theorem 9.5.1.
Theorem 9.6.1 Asymptotic Distribution of the Quantile Regression Estimator
When the $\tau$'th conditional quantile is linear in $x$,
\[
\sqrt{n}\left(\hat\beta_\tau - \beta_\tau\right) \overset{d}{\longrightarrow} N(0, V_\tau),
\]
where
\[
V_\tau = \tau(1 - \tau)\left(E\left(x_ix_i'f(0 \mid x_i)\right)\right)^{-1}\left(E x_ix_i'\right)\left(E\left(x_ix_i'f(0 \mid x_i)\right)\right)^{-1}
\]
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error $e_i$ is independent of $x_i$, then $f(0 \mid x_i) = f(0)$, the unconditional density of $e_i$ at 0, and we have the simplification
\[
V_\tau = \frac{\tau(1 - \tau)}{f(0)^2}\left(E x_ix_i'\right)^{-1}.
\]
A recent monograph on the details of quantile regression is Koenker (2005).
9.7 Testing for Omitted NonLinearity
If the goal is to estimate the conditional expectation $E(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.

One simple test for neglected non-linearity is to add non-linear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat\beta + \hat{e}_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde{e}_i$ by OLS, and form a Wald statistic for $\gamma = 0$.

Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
\[
y_i = x_i'\beta + e_i
\]
which is estimated by OLS, yielding predicted values $\hat{y}_i = x_i'\hat\beta$. Now let
\[
z_i = \begin{pmatrix} \hat{y}_i^2 \\ \vdots \\ \hat{y}_i^m \end{pmatrix}
\]
be an $(m-1)$-vector of powers of $\hat{y}_i$. Then run the auxiliary regression
\[
y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde{e}_i \qquad (9.13)
\]
by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. It is easy (although somewhat tedious) to show that under the null hypothesis, $W_n \overset{d}{\longrightarrow} \chi^2_{m-1}$. Thus the null is rejected at the $\alpha$% level if $W_n$ exceeds the upper $\alpha$% tail critical value of the $\chi^2_{m-1}$ distribution.
To implement the test, m must be selected in advance. Typically, small values such as m = 2,
3, or 4 seem to work best.
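The following sketch carries out the RESET test for a chosen $m$, using the homoskedastic covariance formula for the Wald statistic; the function name and inputs are illustrative.
\begin{verbatim}
import numpy as np
from scipy import stats

def reset_test(y, X, m=3):
    """Ramsey RESET sketch: add powers 2..m of the fitted values and form a
    Wald statistic for their joint exclusion (homoskedastic covariance)."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    yhat = X @ beta
    Zaug = np.column_stack([X] + [yhat**p for p in range(2, m + 1)])
    coef = np.linalg.solve(Zaug.T @ Zaug, Zaug.T @ y)
    resid = y - Zaug @ coef
    s2 = resid @ resid / (n - Zaug.shape[1])
    V = s2 * np.linalg.inv(Zaug.T @ Zaug)
    g = coef[k:]                                     # gamma_tilde
    W = g @ np.linalg.solve(V[k:, k:], g)            # Wald statistic
    return W, 1 - stats.chi2.cdf(W, m - 1)
\end{verbatim}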
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
\[
y_i = G(x_i'\beta) + e_i
\]
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (9.13) may be written as
\[
y_i = x_i'\tilde\beta + \left(x_i'\hat\beta\right)^2\tilde\gamma_1 + \left(x_i'\hat\beta\right)^3\tilde\gamma_2 + \cdots + \left(x_i'\hat\beta\right)^m\tilde\gamma_{m-1} + \tilde{e}_i
\]
which has essentially approximated $G(\cdot)$ by an $m$'th order polynomial.
9.8 Model Selection
In earlier sections we discussed the costs and benefits of inclusion/exclusion of variables. How does a researcher go about selecting an econometric specification, when economic theory does not provide complete guidance? This is the question of model selection. It is important that the model selection question be well-posed. For example, the question: "What is the right model for $y$?" is not well-posed, because it does not make clear the conditioning set. In contrast, the question, "Which subset of $(x_1, ..., x_K)$ enters the regression function $E(y_i \mid x_{1i} = x_1, ..., x_{Ki} = x_K)$?" is well-posed.

In many cases the problem of model selection can be reduced to the comparison of two nested models, as the larger problem can be written as a sequence of such comparisons. We thus consider the question of the inclusion of $X_2$ in the linear regression
\[
y = X_1\beta_1 + X_2\beta_2 + e,
\]
where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$. This is equivalent to the comparison of the two models
\[
\begin{aligned}
\mathcal{M}_1 &: \quad y = X_1\beta_1 + e, \qquad E(e \mid X_1, X_2) = 0 \\
\mathcal{M}_2 &: \quad y = X_1\beta_1 + X_2\beta_2 + e, \qquad E(e \mid X_1, X_2) = 0.
\end{aligned}
\]
Note that $\mathcal{M}_1 \subset \mathcal{M}_2$. To be concrete, we say that $\mathcal{M}_2$ is true if $\beta_2 \neq 0$.

To fix notation, models 1 and 2 are estimated by OLS, with residual vectors $\hat{e}_1$ and $\hat{e}_2$, estimated variances $\hat\sigma_1^2$ and $\hat\sigma_2^2$, etc., respectively. To simplify some of the statistical discussion, we will on occasion use the homoskedasticity assumption $E\left(e_i^2 \mid x_{1i}, x_{2i}\right) = \sigma^2$.

A model selection procedure is a data-dependent rule which selects one of the two models. We can write this as $\hat{\mathcal{M}}$. There are many possible desirable properties for a model selection procedure. One useful property is consistency, that it selects the true model with probability one if the sample is sufficiently large. A model selection procedure is consistent if
\[
\begin{aligned}
\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) &\to 1 \\
\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) &\to 1.
\end{aligned}
\]
However, this rule only makes sense when the true model is finite dimensional. If the truth is infinite dimensional, it is more appropriate to view model selection as determining the best finite sample approximation.

A common approach to model selection is to base the decision on a statistical test such as the Wald $W_n$. The model selection rule is as follows. For some critical level $\alpha$, let $c_\alpha$ satisfy $\Pr\left(\chi^2_{k_2} > c_\alpha\right) = \alpha$. Then select $\mathcal{M}_1$ if $W_n \le c_\alpha$, else select $\mathcal{M}_2$.

A major problem with this approach is that the critical level $\alpha$ is indeterminate. The reasoning which helps guide the choice of $\alpha$ in hypothesis testing (controlling Type I error) is not relevant for model selection. That is, if $\alpha$ is set to be a small number, then $\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \approx 1 - \alpha$ but $\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right)$ could vary dramatically, depending on the sample size, etc. Another problem is that if $\alpha$ is held fixed, then this model selection procedure is inconsistent, as $\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1 - \alpha < 1$.
Another common approach to model selection is to use a selection criterion. One popular choice is the Akaike Information Criterion (AIC). The AIC under normality for model $m$ is
\[
AIC_m = \log\left(\hat\sigma_m^2\right) + 2\frac{k_m}{n}. \qquad (9.14)
\]
where $\hat\sigma_m^2$ is the variance estimate for model $m$, and $k_m$ is the number of coefficients in the model. The AIC can be derived as an estimate of the Kullback-Leibler information distance $K(\mathcal{M}) = E\left(\log f(y \mid X) - \log f(y \mid X, \mathcal{M})\right)$ between the true density and the model density. The expectation is taken with respect to the true density. The rule is to select $\mathcal{M}_1$ if $AIC_1 < AIC_2$, else select $\mathcal{M}_2$. AIC selection is inconsistent, as the rule tends to overfit. Indeed, since under $\mathcal{M}_1$,
\[
LR_n = n\left(\log\hat\sigma_1^2 - \log\hat\sigma_2^2\right) \simeq W_n \overset{d}{\longrightarrow} \chi^2_{k_2}, \qquad (9.15)
\]
then
\[
\begin{aligned}
\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) &= \Pr\left(AIC_1 < AIC_2 \mid \mathcal{M}_1\right) \\
&= \Pr\left(\log(\hat\sigma_1^2) + 2\frac{k_1}{n} < \log(\hat\sigma_2^2) + 2\frac{k_1 + k_2}{n} \;\Big|\; \mathcal{M}_1\right) \\
&= \Pr\left(LR_n < 2k_2 \mid \mathcal{M}_1\right) \\
&\to \Pr\left(\chi^2_{k_2} < 2k_2\right) < 1.
\end{aligned}
\]
While many criteria similar to the AIC have been proposed, the most popular is one proposed by Schwarz based on Bayesian arguments. His criterion, known as the BIC, is
\[
BIC_m = \log\left(\hat\sigma_m^2\right) + \log(n)\frac{k_m}{n}. \qquad (9.16)
\]
Since $\log(n) > 2$ (if $n > 8$), the BIC places a larger penalty than the AIC on the number of estimated parameters and is more parsimonious.

In contrast to the AIC, BIC model selection is consistent. Indeed, since (9.15) holds under $\mathcal{M}_1$,
\[
\frac{LR_n}{\log(n)} \overset{p}{\longrightarrow} 0,
\]
so
\[
\begin{aligned}
\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) &= \Pr\left(BIC_1 < BIC_2 \mid \mathcal{M}_1\right) \\
&= \Pr\left(LR_n < \log(n)k_2 \mid \mathcal{M}_1\right) \\
&= \Pr\left(\frac{LR_n}{\log(n)} < k_2 \;\Big|\; \mathcal{M}_1\right) \\
&\to \Pr\left(0 < k_2\right) = 1.
\end{aligned}
\]
Also under $\mathcal{M}_2$, one can show that
\[
\frac{LR_n}{\log(n)} \overset{p}{\longrightarrow} \infty,
\]
thus
\[
\Pr\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) = \Pr\left(\frac{LR_n}{\log(n)} > k_2 \;\Big|\; \mathcal{M}_2\right) \to 1.
\]
We have discussed model selection between two models. The methods extend readily to the issue of selection among multiple regressors. The general problem is the model
\[
y_i = \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_K x_{Ki} + e_i, \qquad E(e_i \mid x_i) = 0
\]
and the question is which subset of the coefficients are non-zero (equivalently, which regressors enter the regression).

There are two leading cases: ordered regressors and unordered.

In the ordered case, the models are
\[
\begin{aligned}
\mathcal{M}_1 &: \quad \beta_1 \neq 0, \ \beta_2 = \beta_3 = \cdots = \beta_K = 0 \\
\mathcal{M}_2 &: \quad \beta_1 \neq 0, \ \beta_2 \neq 0, \ \beta_3 = \cdots = \beta_K = 0 \\
&\ \ \vdots \\
\mathcal{M}_K &: \quad \beta_1 \neq 0, \ \beta_2 \neq 0, \ \ldots, \ \beta_K \neq 0,
\end{aligned}
\]
which are nested. The AIC selection criterion estimates the $K$ models by OLS, stores the residual variance $\hat\sigma^2$ for each model, and then selects the model with the lowest AIC (9.14). Similarly for the BIC, selecting based on (9.16).

In the unordered case, a model consists of any possible subset of the regressors $\{x_{1i}, ..., x_{Ki}\}$, and the AIC or BIC in principle can be implemented by estimating all possible subset models. However, there are $2^K$ such models, which can be a very large number. For example, $2^{10} = 1024$, and $2^{20} = 1{,}048{,}576$. In the latter case, a full-blown implementation of the BIC selection criterion would seem computationally prohibitive.
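For the ordered-regressors case, AIC or BIC selection is straightforward to code. The sketch below fits the $K$ nested models and returns the selected number of regressors; the function name and the convention that the columns of X are already ordered are assumptions of the illustration.
\begin{verbatim}
import numpy as np

def select_ordered(y, X, criterion="BIC"):
    """Fit the nested models M_1,...,M_K using the first k columns of X and
    return the k minimizing AIC (9.14) or BIC (9.16)."""
    n, K = X.shape
    scores = []
    for k in range(1, K + 1):
        Xk = X[:, :k]
        e = y - Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T @ y)
        sig2 = e @ e / n
        penalty = (2 if criterion == "AIC" else np.log(n)) * k / n
        scores.append(np.log(sig2) + penalty)
    return int(np.argmin(scores)) + 1
\end{verbatim}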
Exercises
Exercise 9.1 The data ﬁle cps78.dat contains 550 observations on 20 variables taken from the
May 1978 current population survey. Variables are listed in the ﬁle cps78.pdf. The goal of the
exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the
conditioning variables.
(a) Start by an OLS regression of LNWAGE on the other variables. Report coecient estimates
and standard errors.
(b) Consider augmenting the model by squares and/or crossproducts of the conditioning vari
ables. Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may
reestimate the model without these variables, if desired.
(d) Test whether the error variance is dierent for men and women. Interpret.
(e) Test whether the error variance is dierent for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general
heteroskedasticity and report the results.
(g) Using this model for the conditional variance, reestimate the model from part (c) using
FGLS. Report the results.
(h) Do the OLS and FGLS estimates dier greatly? Note any interesting dierences.
(i) Compare the estimated standard errors. Note any interesting dierences.
Exercise 9.2 In the homoskedastic regression model $y = X\beta + e$ with $E(e_i \mid x_i) = 0$ and $E(e_i^2 \mid x_i) = \sigma^2$, suppose $\hat\beta$ is the OLS estimate of $\beta$ with covariance matrix $\hat{V}$, based on a sample of size $n$. Let $\hat\sigma^2$ be the estimate of $\sigma^2$. You wish to forecast an out-of-sample value of $y_{n+1}$ given that $x_{n+1} = x$. Thus the available information is the sample $(y, X)$, the estimates $(\hat\beta, \hat{V}, \hat\sigma^2)$, the residuals $\hat{e}$, and the out-of-sample value of the regressors, $x_{n+1}$.

(a) Find a point forecast of $y_{n+1}$.

(b) Find an estimate of the variance of this forecast.

Exercise 9.3 Suppose that $y_i = g(x_i, \theta) + e_i$ with $E(e_i \mid x_i) = 0$, $\hat\theta$ is the NLLS estimator, and $\hat{V}$ is the estimate of $\mathrm{var}(\hat\theta)$. You are interested in the conditional mean function $E(y_i \mid x_i = x) = g(x)$ at some $x$. Find an asymptotic 95% confidence interval for $g(x)$.

Exercise 9.4 For any predictor $g(x_i)$ for $y_i$, the mean absolute error (MAE) is
\[
E\left|y_i - g(x_i)\right|.
\]
Show that the function $g(x)$ which minimizes the MAE is the conditional median $m(x) = \mathrm{med}(y_i \mid x_i)$.

Exercise 9.5 Define
\[
g(u) = \tau - 1(u < 0)
\]
where $1(\cdot)$ is the indicator function (takes the value 1 if the argument is true, else equals zero). Let $\theta$ satisfy $E g(y_i - \theta) = 0$. Is $\theta$ a quantile of the distribution of $y_i$?
Exercise 9.6 Verify equation (9.11).
Exercise 9.7 In Exercise 8.4, you estimated a cost function on a cross-section of electric companies. The equation you estimated was
\[
\log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i. \qquad (9.17)
\]
(a) Following Nerlove, add the variable $(\log Q_i)^2$ to the regression. Do so. Assess the merits of this new specification using (i) a hypothesis test; (ii) AIC criterion; (iii) BIC criterion. Do you agree with this modification?

(b) Now try a non-linear specification. Consider model (9.17) plus the extra term $\beta_6 z_i$, where
\[
z_i = \log Q_i \left(1 + \exp\left(-\left(\log Q_i - \beta_7\right)\right)\right)^{-1}.
\]
In addition, impose the restriction $\beta_3 + \beta_4 + \beta_5 = 1$. This model is called a smooth threshold model. For values of $\log Q_i$ much below $\beta_7$, the variable $\log Q_i$ has a regression slope of $\beta_2$. For values much above $\beta_7$, the regression slope is $\beta_2 + \beta_6$, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter $\beta_7$.

The model works best when $\beta_7$ is selected so that several values (in this example, at least 10 to 15) of $\log Q_i$ are both below and above $\beta_7$. Examine the data and pick an appropriate range for $\beta_7$.

(c) Estimate the model by non-linear least squares. I recommend the concentration method: Pick 10 (or more if you like) values of $\beta_7$ in this range. For each value of $\beta_7$, calculate $z_i$ and estimate the model by OLS. Record the sum of squared errors, and find the value of $\beta_7$ for which the sum of squared errors is minimized.

(d) Calculate standard errors for all the parameters $(\beta_1, ..., \beta_7)$.
Chapter 10
The Bootstrap
10.1 Deﬁnition of the Bootstrap
Let $F$ denote a distribution function for the population of observations $(y_i, x_i)$. Let
\[
T_n = T_n\left((y_1, x_1), ..., (y_n, x_n), F\right)
\]
be a statistic of interest, for example an estimator $\hat\theta$ or a t-statistic $(\hat\theta - \theta)/s(\hat\theta)$. Note that we write $T_n$ as possibly a function of $F$. For example, the t-statistic is a function of the parameter $\theta$ which itself is a function of $F$.

The exact CDF of $T_n$ when the data are sampled from the distribution $F$ is
\[
G_n(u, F) = \Pr(T_n \le u \mid F).
\]
In general, $G_n(u, F)$ depends on $F$, meaning that $G$ changes as $F$ changes.

Ideally, inference would be based on $G_n(u, F)$. This is generally impossible since $F$ is unknown. Asymptotic inference is based on approximating $G_n(u, F)$ with $G(u, F) = \lim_{n\to\infty} G_n(u, F)$. When $G(u, F) = G(u)$ does not depend on $F$, we say that $T_n$ is asymptotically pivotal and use the distribution function $G(u)$ for inferential purposes.

In a seminal contribution, Efron (1979) proposed the bootstrap, which makes a different approximation. The unknown $F$ is replaced by a consistent estimate $F_n$ (one choice is discussed in the next section). Plugged into $G_n(u, F)$ we obtain
\[
G_n^*(u) = G_n(u, F_n). \qquad (10.1)
\]
We call $G_n^*$ the bootstrap distribution. Bootstrap inference is based on $G_n^*(u)$.

Let $(y_i^*, x_i^*)$ denote random variables with the distribution $F_n$. A random sample from this distribution is called the bootstrap data. The statistic $T_n^* = T_n\left((y_1^*, x_1^*), ..., (y_n^*, x_n^*), F_n\right)$ constructed on this sample is a random variable with distribution $G_n^*$. That is, $\Pr(T_n^* \le u) = G_n^*(u)$. We call $T_n^*$ the bootstrap statistic. The distribution of $T_n^*$ is identical to that of $T_n$ when the true CDF is $F_n$ rather than $F$.

The bootstrap distribution is itself random, as it depends on the sample through the estimator $F_n$.
In the next sections we describe computation of the bootstrap distribution.
10.2 The Empirical Distribution Function
Recall that $F(y, x) = \Pr(y_i \le y, x_i \le x) = E\left(1(y_i \le y)\, 1(x_i \le x)\right)$, where $1(\cdot)$ is the indicator function. This is a population moment. The method of moments estimator is the corresponding sample moment:
\[
F_n(y, x) = \frac{1}{n}\sum_{i=1}^n 1(y_i \le y)\, 1(x_i \le x). \qquad (10.2)
\]
$F_n(y, x)$ is called the empirical distribution function (EDF). $F_n$ is a nonparametric estimate of $F$. Note that while $F$ may be either discrete or continuous, $F_n$ is by construction a step function.

The EDF is a consistent estimator of the CDF. To see this, note that for any $(y, x)$, $1(y_i \le y)1(x_i \le x)$ is an iid random variable with expectation $F(y, x)$. Thus by the WLLN (Theorem 2.6.1), $F_n(y, x) \overset{p}{\longrightarrow} F(y, x)$. Furthermore, by the CLT (Theorem 2.8.1),
\[
\sqrt{n}\left(F_n(y, x) - F(y, x)\right) \overset{d}{\longrightarrow} N\left(0, F(y, x)\left(1 - F(y, x)\right)\right).
\]
To see the effect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for three random samples of size $n = 25$, 50, 100, and 500. The random draws are from the $N(0, 1)$ distribution. For $n = 25$, the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the large $n$. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.

Figure 10.1: Empirical Distribution Functions

The EDF is a valid discrete probability distribution which puts probability mass $1/n$ at each pair $(y_i, x_i)$, $i = 1, ..., n$. Notationally, it is helpful to think of a random pair $(y_i^*, x_i^*)$ with the distribution $F_n$. That is,
\[
\Pr(y_i^* \le y, x_i^* \le x) = F_n(y, x).
\]
We can easily calculate the moments of functions of $(y_i^*, x_i^*)$:
\[
E\,h(y_i^*, x_i^*) = \int h(y, x)\, dF_n(y, x) = \sum_{i=1}^n h(y_i, x_i)\Pr\left(y_i^* = y_i, x_i^* = x_i\right) = \frac{1}{n}\sum_{i=1}^n h(y_i, x_i),
\]
the empirical sample average.
10.3 Nonparametric Bootstrap

The nonparametric bootstrap is obtained when the bootstrap distribution (10.1) is defined using the EDF (10.2) as the estimate $F_n$ of $F$.

Since the EDF $F_n$ is a multinomial (with $n$ support points), in principle the distribution $G_n^*$ could be calculated by direct methods. However, as there are $\binom{2n-1}{n}$ possible samples $\{(y_1^*, x_1^*), ..., (y_n^*, x_n^*)\}$, such a calculation is computationally infeasible. The popular alternative is to use simulation to approximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following points of clarification:

• The sample size $n$ used for the simulation is the same as the sample size.

• The random vectors $(y_i^*, x_i^*)$ are drawn randomly from the empirical distribution. This is equivalent to sampling a pair $(y_i, x_i)$ randomly from the sample.

The bootstrap statistic $T_n^* = T_n\left((y_1^*, x_1^*), ..., (y_n^*, x_n^*), F_n\right)$ is calculated for each bootstrap sample. This is repeated $B$ times. $B$ is known as the number of bootstrap replications. A theory for the determination of the number of bootstrap replications $B$ has been developed by Andrews and Buchinsky (2000). It is desirable for $B$ to be large, so long as the computational costs are reasonable. $B = 1000$ typically suffices.

When the statistic $T_n$ is a function of $F$, it is typically through dependence on a parameter. For example, the t-ratio $(\hat\theta - \theta)/s(\hat\theta)$ depends on $\theta$. As the bootstrap statistic replaces $F$ with $F_n$, it similarly replaces $\theta$ with $\theta_n$, the value of $\theta$ implied by $F_n$. Typically $\theta_n = \hat\theta$, the parameter estimate. (When in doubt use $\hat\theta$.)

Sampling from the EDF is particularly easy. Since $F_n$ is a discrete probability distribution putting probability mass $1/n$ at each sample point, sampling from the EDF is equivalent to random sampling a pair $(y_i, x_i)$ from the observed data with replacement. In consequence, a bootstrap sample $\{(y_1^*, x_1^*), ..., (y_n^*, x_n^*)\}$ will necessarily have some ties and multiple values, which is generally not a problem.
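In code, nonparametric bootstrap sampling amounts to drawing row indices with replacement. The sketch below returns the $B$ simulated values of any statistic supplied by the user; the function name and arguments are illustrative.
\begin{verbatim}
import numpy as np

def bootstrap_statistics(y, X, stat, B=1000, seed=0):
    """Nonparametric bootstrap sketch: draw B samples of size n with
    replacement from the data pairs and evaluate stat(y, X) on each."""
    rng = np.random.default_rng(seed)
    n = len(y)
    out = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # sample pairs (y_i, x_i) with replacement
        out.append(stat(y[idx], X[idx]))
    return np.array(out)
\end{verbatim}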
10.4 Bootstrap Estimation of Bias and Variance

The bias of $\hat\theta$ is $\tau_n = E(\hat\theta - \theta_0)$. Let $T_n(\theta) = \hat\theta - \theta$. Then $\tau_n = E(T_n(\theta_0))$. The bootstrap counterparts are $\hat\theta^* = \hat\theta\left((y_1^*, x_1^*), ..., (y_n^*, x_n^*)\right)$ and $T_n^* = \hat\theta^* - \theta_n = \hat\theta^* - \hat\theta$. The bootstrap estimate of $\tau_n$ is
\[
\tau_n^* = E(T_n^*).
\]
If this is calculated by the simulation described in the previous section, the estimate of $\tau_n^*$ is
\[
\hat\tau_n^* = \frac{1}{B}\sum_{b=1}^B T_{nb}^* = \frac{1}{B}\sum_{b=1}^B \left(\hat\theta_b^* - \hat\theta\right) = \bar\theta^* - \hat\theta,
\]
where $\bar\theta^* = B^{-1}\sum_{b=1}^B \hat\theta_b^*$. If $\hat\theta$ is biased, it might be desirable to construct a bias-corrected estimator (one with reduced bias). Ideally, this would be
\[
\tilde\theta = \hat\theta - \tau_n,
\]
but $\tau_n$ is unknown. The (estimated) bootstrap bias-corrected estimator is
\[
\tilde\theta^* = \hat\theta - \hat\tau_n^* = \hat\theta - \left(\bar\theta^* - \hat\theta\right) = 2\hat\theta - \bar\theta^*.
\]
Note, in particular, that the bias-corrected estimator is not $\bar\theta^*$. Intuitively, the bootstrap makes the following experiment. Suppose that $\hat\theta$ is the truth. Then what is the average value of $\hat\theta$ calculated from such samples? The answer is $\bar\theta^*$. If this is lower than $\hat\theta$, this suggests that the estimator is downward-biased, so a bias-corrected estimator of $\theta$ should be larger than $\hat\theta$, and the best guess is the difference between $\hat\theta$ and $\bar\theta^*$. Similarly if $\bar\theta^*$ is higher than $\hat\theta$, then the estimator is upward-biased and the bias-corrected estimator should be lower than $\hat\theta$.

Let $T_n = \hat\theta$. The variance of $\hat\theta$ is
\[
V_n = E(T_n - ET_n)^2.
\]
Let $T_n^* = \hat\theta^*$. It has variance
\[
V_n^* = E(T_n^* - ET_n^*)^2.
\]
The simulation estimate is
\[
\hat{V}_n^* = \frac{1}{B}\sum_{b=1}^B \left(\hat\theta_b^* - \bar\theta^*\right)^2.
\]
A bootstrap standard error for $\hat\theta$ is the square root of the bootstrap estimate of variance,
\[
s^*(\hat\theta) = \sqrt{\hat{V}_n^*}.
\]
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspect. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.
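The bias and standard error calculations above translate directly into code. The sketch below returns the estimated bias, the bootstrap standard error, and the bias-corrected estimate $2\hat\theta - \bar\theta^*$ for a scalar estimator; the function name and arguments are illustrative.
\begin{verbatim}
import numpy as np

def bootstrap_bias_se(y, X, estimator, B=1000, seed=0):
    """Bootstrap bias, standard error, and bias-corrected estimate for a
    scalar estimator, following Section 10.4."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta_hat = estimator(y, X)
    theta_star = np.array([estimator(y[idx], X[idx])
                           for idx in rng.integers(0, n, size=(B, n))])
    bias = theta_star.mean() - theta_hat          # estimated tau*_n
    se = theta_star.std(ddof=1)                   # bootstrap standard error
    return bias, se, 2 * theta_hat - theta_star.mean()
\end{verbatim}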
10.5 Percentile Intervals

For a distribution function $G_n(u, F)$, let $q_n(\alpha, F)$ denote its quantile function. This is the function which solves
\[
G_n\left(q_n(\alpha, F), F\right) = \alpha.
\]
[When $G_n(u, F)$ is discrete, $q_n(\alpha, F)$ may be non-unique, but we will ignore such complications.] Let $q_n(\alpha)$ denote the quantile function of the true sampling distribution, and $q_n^*(\alpha) = q_n(\alpha, F_n)$ denote the quantile function of the bootstrap distribution. Note that this function will change depending on the underlying statistic $T_n$ whose distribution is $G_n$.

Let $T_n = \hat\theta$, an estimate of a parameter of interest. In $(1-\alpha)$% of samples, $\hat\theta$ lies in the region $[q_n(\alpha/2), q_n(1 - \alpha/2)]$. This motivates a confidence interval proposed by Efron:
\[
C_1 = \left[q_n^*(\alpha/2),\ q_n^*(1 - \alpha/2)\right].
\]
This is often called the percentile confidence interval.

Computationally, the quantile $q_n^*(\alpha)$ is estimated by $\hat{q}_n^*(\alpha)$, the $\alpha$'th sample quantile of the simulated statistics $\{T_{n1}^*, ..., T_{nB}^*\}$, as discussed in the section on Monte Carlo simulation. The $(1-\alpha)$% Efron percentile interval is then $\left[\hat{q}_n^*(\alpha/2),\ \hat{q}_n^*(1 - \alpha/2)\right]$.

The interval $C_1$ is a popular bootstrap confidence interval often used in empirical practice. This is because it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap, and also has the feature that it is translation invariant. That is, if we define $\phi = f(\theta)$ as the parameter of interest for a monotonically increasing function $f$, then the percentile method applied to this problem will produce the confidence interval $\left[f(q_n^*(\alpha/2)),\ f(q_n^*(1 - \alpha/2))\right]$, which is a naturally good property.

However, as we show now, $C_1$ is in a deep sense very poorly motivated.

It will be useful if we introduce an alternative definition of $C_1$. Let $T_n(\theta) = \hat\theta - \theta$ and let $q_n(\alpha)$ be the quantile function of its distribution. (These are the original quantiles, with $\theta$ subtracted.) Then $C_1$ can alternatively be written as
\[
C_1 = \left[\hat\theta + q_n^*(\alpha/2),\ \hat\theta + q_n^*(1 - \alpha/2)\right].
\]
This is a bootstrap estimate of the "ideal" confidence interval
\[
C_1^0 = \left[\hat\theta + q_n(\alpha/2),\ \hat\theta + q_n(1 - \alpha/2)\right].
\]
The latter has coverage probability
\[
\begin{aligned}
\Pr\left(\theta_0 \in C_1^0\right) &= \Pr\left(\hat\theta + q_n(\alpha/2) \le \theta_0 \le \hat\theta + q_n(1 - \alpha/2)\right) \\
&= \Pr\left(-q_n(1 - \alpha/2) \le \hat\theta - \theta_0 \le -q_n(\alpha/2)\right) \\
&= G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1 - \alpha/2), F_0\right)
\end{aligned}
\]
which generally is not $1 - \alpha$! There is one important exception. If $\hat\theta - \theta_0$ has a symmetric distribution, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
\[
\begin{aligned}
\Pr\left(\theta_0 \in C_1^0\right) &= G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1 - \alpha/2), F_0\right) \\
&= \left(1 - G_n\left(q_n(\alpha/2), F_0\right)\right) - \left(1 - G_n\left(q_n(1 - \alpha/2), F_0\right)\right) \\
&= \left(1 - \frac{\alpha}{2}\right) - \left(1 - \left(1 - \frac{\alpha}{2}\right)\right) \\
&= 1 - \alpha
\end{aligned}
\]
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat\theta$ has a symmetric distribution about $\theta_0$.

When $\hat\theta$ does not have a symmetric distribution, $C_1$ may perform quite poorly.

However, by the translation invariance argument presented above, it also follows that if there exists some monotonically increasing transformation $f(\cdot)$ such that $f(\hat\theta)$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.

Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.

The problems with the percentile method can be circumvented, at least in principle, by an alternative method.

Let $T_n(\theta) = \hat\theta - \theta$. Then
\[
\begin{aligned}
1 - \alpha &= \Pr\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1 - \alpha/2)\right) \\
&= \Pr\left(\hat\theta - q_n(1 - \alpha/2) \le \theta_0 \le \hat\theta - q_n(\alpha/2)\right),
\end{aligned}
\]
so an exact $(1 - \alpha)$% confidence interval for $\theta_0$ would be
\[
C_2^0 = \left[\hat\theta - q_n(1 - \alpha/2),\ \hat\theta - q_n(\alpha/2)\right].
\]
This motivates a bootstrap analog
\[
C_2 = \left[\hat\theta - q_n^*(1 - \alpha/2),\ \hat\theta - q_n^*(\alpha/2)\right].
\]
Notice that generally this is very different from the Efron interval $C_1$! They coincide in the special case that $G_n^*(u)$ is symmetric about $\hat\theta$, but otherwise they differ.

Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics $T_n^* = \hat\theta^* - \hat\theta$, which are centered at the sample estimate $\hat\theta$. These are sorted to yield the quantile estimates $\hat{q}_n^*(.025)$ and $\hat{q}_n^*(.975)$. The 95% confidence interval is then $\left[\hat\theta - \hat{q}_n^*(.975),\ \hat\theta - \hat{q}_n^*(.025)\right]$.

This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.
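Given the simulated estimates $\hat\theta_b^*$, both intervals are easy to compute. The sketch below returns the Efron percentile interval $C_1$ and the alternative interval $C_2$; the function name is illustrative.
\begin{verbatim}
import numpy as np

def bootstrap_intervals(theta_hat, theta_star, alpha=0.05):
    """Compute the Efron percentile interval C1 and the alternative
    interval C2 from a vector of bootstrap estimates theta_star."""
    lo, hi = np.quantile(theta_star, [alpha / 2, 1 - alpha / 2])
    C1 = (lo, hi)                                  # percentile interval
    # C2 uses quantiles of theta_star - theta_hat, reflected around theta_hat
    q_lo, q_hi = lo - theta_hat, hi - theta_hat
    C2 = (theta_hat - q_hi, theta_hat - q_lo)
    return C1, C2
\end{verbatim}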
10.6 Percentile-t Equal-Tailed Interval

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta < \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $T_n(\theta_0) < c$, where $c$ would be selected so that
\[
\Pr\left(T_n(\theta_0) < c\right) = \alpha.
\]
Thus $c = q_n(\alpha)$. Since this is unknown, a bootstrap test replaces $q_n(\alpha)$ with the bootstrap estimate $q_n^*(\alpha)$, and the test rejects if $T_n(\theta_0) < q_n^*(\alpha)$.

Similarly, if the alternative is $H_1: \theta > \theta_0$, the bootstrap test rejects if $T_n(\theta_0) > q_n^*(1 - \alpha)$.

Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics $T_n^* = \left(\hat\theta^* - \hat\theta\right)/s(\hat\theta^*)$. Note, and this is important, that the bootstrap test statistic is centered at the estimate $\hat\theta$, and the standard error $s(\hat\theta^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat{q}_n^*(\alpha)$ and/or $\hat{q}_n^*(1 - \alpha)$.

Let $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$. Then taking the intersection of two one-sided intervals,
\[
\begin{aligned}
1 - \alpha &= \Pr\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1 - \alpha/2)\right) \\
&= \Pr\left(q_n(\alpha/2) \le \left(\hat\theta - \theta_0\right)/s(\hat\theta) \le q_n(1 - \alpha/2)\right) \\
&= \Pr\left(\hat\theta - s(\hat\theta)q_n(1 - \alpha/2) \le \theta_0 \le \hat\theta - s(\hat\theta)q_n(\alpha/2)\right),
\end{aligned}
\]
so an exact $(1 - \alpha)$% confidence interval for $\theta_0$ would be
\[
C_3^0 = \left[\hat\theta - s(\hat\theta)q_n(1 - \alpha/2),\ \hat\theta - s(\hat\theta)q_n(\alpha/2)\right].
\]
This motivates a bootstrap analog
\[
C_3 = \left[\hat\theta - s(\hat\theta)q_n^*(1 - \alpha/2),\ \hat\theta - s(\hat\theta)q_n^*(\alpha/2)\right].
\]
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that $\theta_0$ is below the left endpoint approximately equals the probability that $\theta_0$ is above the right endpoint, each $\alpha/2$.

Computationally, this is based on the critical values from the one-sided hypothesis tests, discussed above.
10.7 Symmetric Percentile-t Intervals

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $|T_n(\theta_0)| > c$, where $c$ would be selected so that
\[
\Pr\left(|T_n(\theta_0)| > c\right) = \alpha.
\]
Note that
\[
\Pr\left(|T_n(\theta_0)| < c\right) = \Pr\left(-c < T_n(\theta_0) < c\right) = G_n(c) - G_n(-c) \equiv \overline{G}_n(c),
\]
which is a symmetric distribution function. The ideal critical value $c = q_n(\alpha)$ solves the equation
\[
\overline{G}_n\left(q_n(\alpha)\right) = 1 - \alpha.
\]
Equivalently, $q_n(\alpha)$ is the $1 - \alpha$ quantile of the distribution of $|T_n(\theta_0)|$.

The bootstrap estimate is $q_n^*(\alpha)$, the $1 - \alpha$ quantile of the distribution of $|T_n^*|$, or the number which solves the equation
\[
\overline{G}_n^*\left(q_n^*(\alpha)\right) = G_n^*\left(q_n^*(\alpha)\right) - G_n^*\left(-q_n^*(\alpha)\right) = 1 - \alpha.
\]
Computationally, $q_n^*(\alpha)$ is estimated from a bootstrap simulation by sorting the bootstrap t-statistics $|T_n^*| = \left|\hat\theta^* - \hat\theta\right|/s(\hat\theta^*)$, and taking the upper $\alpha$% quantile. The bootstrap test rejects if $|T_n(\theta_0)| > q_n^*(\alpha)$.

Let
\[
C_4 = \left[\hat\theta - s(\hat\theta)q_n^*(\alpha),\ \hat\theta + s(\hat\theta)q_n^*(\alpha)\right],
\]
where $q_n^*(\alpha)$ is the bootstrap critical value for a two-sided hypothesis test. $C_4$ is called the symmetric percentile-t interval. It is designed to work well since
\[
\begin{aligned}
\Pr\left(\theta_0 \in C_4\right) &= \Pr\left(\hat\theta - s(\hat\theta)q_n^*(\alpha) \le \theta_0 \le \hat\theta + s(\hat\theta)q_n^*(\alpha)\right) \\
&= \Pr\left(|T_n(\theta_0)| < q_n^*(\alpha)\right) \\
&\simeq \Pr\left(|T_n(\theta_0)| < q_n(\alpha)\right) \\
&= 1 - \alpha.
\end{aligned}
\]
If $\theta$ is a vector, then to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ at size $\alpha$, we would use a Wald statistic
\[
W_n(\theta) = n\left(\hat\theta - \theta\right)'\hat{V}^{-1}\left(\hat\theta - \theta\right)
\]
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \ge q_n(\alpha)$, where $q_n(\alpha)$ is the $(1 - \alpha)$% quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \ge q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $(1 - \alpha)$% quantile of the distribution of
\[
W_n^* = n\left(\hat\theta^* - \hat\theta\right)'\hat{V}^{*-1}\left(\hat\theta^* - \hat\theta\right).
\]
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\left(\hat\theta^* - \hat\theta\right)$, not $\left(\hat\theta^* - \theta_0\right)$. [This is a typical mistake made by practitioners.]
10.8 Asymptotic Expansions

Let $T_n \in \mathbb{R}$ be a statistic such that

$$T_n \xrightarrow{d} N(0, \sigma^2). \qquad (10.3)$$

In some cases, such as when $T_n$ is a t-ratio, then $\sigma^2 = 1$. In other cases $\sigma^2$ is unknown. Equivalently, writing $T_n \sim G_n(u, F)$ then for each $u$ and $F$

$$\lim_{n \to \infty} G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right),$$

or

$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + o(1). \qquad (10.4)$$

While (10.4) says that $G_n$ converges to $\Phi\left(\frac{u}{\sigma}\right)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.

The following notation will be helpful. Let $a_n$ be a sequence.

Definition 10.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.

Definition 10.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.

Definition 10.8.3 $a_n = o(n^{-r})$ if $n^r |a_n| \to 0$ as $n \to \infty$.

Basically, $a_n = O(n^{-r})$ if it declines to zero like $n^{-r}$.

We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.

Theorem 10.8.1 Under regularity conditions and (10.3),

$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) + O(n^{-3/2})$$

uniformly over $u$, where $g_1$ is an even function of $u$, and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.

The expansion in Theorem 10.8.1 is often called an Edgeworth expansion.

We can interpret Theorem 10.8.1 as follows. First, $G_n(u, F)$ converges to the normal limit at rate $n^{1/2}$. To a second order of approximation,

$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2} g_1(u, F).$$

Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,

$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2} g_1(u, F) + n^{-1} g_2(u, F)$$

which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).

[Side Note: When $T_n = \sqrt{n}\left(\overline{X}_n - \mu\right)/\sigma$, a standardized sample mean, then

$$g_1(u) = -\frac{1}{6} \kappa_3 \left(u^2 - 1\right) \phi(u)$$
$$g_2(u) = -\left(\frac{1}{24} \kappa_4 \left(u^3 - 3u\right) + \frac{1}{72} \kappa_3^2 \left(u^5 - 10u^3 + 15u\right)\right) \phi(u)$$

where $\phi(u)$ is the standard normal pdf, and

$$\kappa_3 = E(X - \mu)^3/\sigma^3$$
$$\kappa_4 = E(X - \mu)^4/\sigma^4 - 3,$$

the standardized skewness and excess kurtosis of the distribution of $X$. Note that when $\kappa_3 = 0$ and $\kappa_4 = 0$, then $g_1 = 0$ and $g_2 = 0$, so the second-order Edgeworth expansion corresponds to the normal distribution.]
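A minimal numerical sketch of this side note follows: it evaluates the two-term Edgeworth approximation to the distribution of the standardized sample mean using the formulas for $g_1$ and $g_2$ above. The skewness and excess kurtosis plugged in (those of an exponential variable) are illustrative values, not part of the text.

```python
import numpy as np
from scipy.stats import norm

def edgeworth_cdf(u, n, k3, k4):
    """Two-term Edgeworth approximation to Pr(sqrt(n)(Xbar - mu)/sigma <= u)."""
    phi = norm.pdf(u)
    g1 = -(1.0 / 6.0) * k3 * (u**2 - 1.0) * phi
    g2 = -((1.0 / 24.0) * k4 * (u**3 - 3.0 * u)
           + (1.0 / 72.0) * k3**2 * (u**5 - 10.0 * u**3 + 15.0 * u)) * phi
    return norm.cdf(u) + g1 / np.sqrt(n) + g2 / n

# illustrative values: skewness 2 and excess kurtosis 6 of an exponential(1) variable
print(edgeworth_cdf(1.645, n=50, k3=2.0, k4=6.0))
```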
Francis Edgeworth

Francis Ysidro Edgeworth (1845-1926) of Ireland, founding editor of the Economic Journal, was a profound economic and statistical theorist, developing the theories of indifference curves and asymptotic expansions. He also could be viewed as the first econometrician due to his early use of mathematical statistics in the study of economic data.
10.9 One-Sided Tests

Using the expansion of Theorem 10.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio $T_n$. An asymptotic test is based on $\Phi(u)$.

To the second order, the exact distribution is

$$\Pr\left(T_n < u\right) = G_n(u, F_0) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1})$$

since $\sigma = 1$. The difference is

$$\Phi(u) - G_n(u, F_0) = -\frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1}) = O(n^{-1/2}),$$

so the order of the error is $O(n^{-1/2})$.

A bootstrap test is based on $G_n^*(u)$, which from Theorem 10.8.1 has the expansion

$$G_n^*(u) = G_n(u, F_n) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_n) + O(n^{-1}).$$

Because $\Phi(u)$ appears in both expansions, the difference between the bootstrap distribution and the true distribution is

$$G_n^*(u) - G_n(u, F_0) = \frac{1}{n^{1/2}} \left(g_1(u, F_n) - g_1(u, F_0)\right) + O(n^{-1}).$$

Since $F_n$ converges to $F$ at rate $\sqrt{n}$, and $g_1$ is continuous with respect to $F$, the difference $\left(g_1(u, F_n) - g_1(u, F_0)\right)$ converges to 0 at rate $\sqrt{n}$. Heuristically,

$$g_1(u, F_n) - g_1(u, F_0) \approx \frac{\partial}{\partial F} g_1(u, F_0)\left(F_n - F_0\right) = O(n^{-1/2}).$$

The "derivative" $\frac{\partial}{\partial F} g_1(u, F)$ is only heuristic, as $F$ is a function. We conclude that

$$G_n^*(u) - G_n(u, F_0) = O(n^{-1}),$$

or

$$\Pr\left(T_n^* \le u\right) = \Pr\left(T_n \le u\right) + O(n^{-1}),$$

which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement — the Type I error of the test converges at a faster rate than an analogous asymptotic test.
10.10 Symmetric Two-Sided Tests

If a random variable $y$ has distribution function $H(u) = \Pr(y \le u)$, then the random variable $|y|$ has distribution function

$$\overline{H}(u) = H(u) - H(-u)$$

since

$$\Pr\left(|y| \le u\right) = \Pr\left(-u \le y \le u\right) = \Pr\left(y \le u\right) - \Pr\left(y \le -u\right) = H(u) - H(-u).$$

For example, if $Z \sim N(0, 1)$, then $|Z|$ has distribution function

$$\overline{\Phi}(u) = \Phi(u) - \Phi(-u) = 2\Phi(u) - 1.$$

Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function

$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F).$$

A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \xrightarrow{d} Z$, then $|T_n| \xrightarrow{d} |Z| \sim \overline{\Phi}$. Thus asymptotic critical values are taken from the $\overline{\Phi}$ distribution, and exact critical values are taken from the $\overline{G}_n(u, F_0)$ distribution. From Theorem 10.8.1, we can calculate that

$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F)$$
$$= \left(\Phi(u) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F)\right) - \left(\Phi(-u) + \frac{1}{n^{1/2}} g_1(-u, F) + \frac{1}{n} g_2(-u, F)\right) + O(n^{-3/2})$$
$$= \overline{\Phi}(u) + \frac{2}{n} g_2(u, F) + O(n^{-3/2}), \qquad (10.5)$$

where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is

$$\overline{\Phi}(u) - \overline{G}_n(u, F_0) = -\frac{2}{n} g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}).$$

The order of the error is $O(n^{-1})$.

Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.

Applying (10.5) to the bootstrap distribution, we find

$$\overline{G}_n^*(u) = \overline{G}_n(u, F_n) = \overline{\Phi}(u) + \frac{2}{n} g_2(u, F_n) + O(n^{-3/2}).$$

Thus the difference between the bootstrap and exact distributions is

$$\overline{G}_n^*(u) - \overline{G}_n(u, F_0) = \frac{2}{n} \left(g_2(u, F_n) - g_2(u, F_0)\right) + O(n^{-3/2}) = O(n^{-3/2}),$$

the last equality because $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_2$ is continuous in $F$. Another way of writing this is

$$\Pr\left(|T_n^*| < u\right) = \Pr\left(|T_n| < u\right) + O(n^{-3/2})$$

so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.

The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.
10.11 Percentile Confidence Intervals

To evaluate the coverage rate of the percentile interval, set $T_n = \sqrt{n}\left(\hat{\theta} - \theta_0\right)$. We know that $T_n \xrightarrow{d} N(0, V)$, which is not pivotal, as it depends on the unknown $V$. Theorem 10.8.1 shows that a first-order approximation is

$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}),$$

where $\sigma = \sqrt{V}$, and for the bootstrap

$$G_n^*(u) = G_n(u, F_n) = \Phi\left(\frac{u}{\hat{\sigma}}\right) + O(n^{-1/2}),$$

where $\hat{\sigma} = V(F_n)$ is the bootstrap estimate of $\sigma$. The difference is

$$G_n^*(u) - G_n(u, F_0) = \Phi\left(\frac{u}{\hat{\sigma}}\right) - \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}) = -\phi\left(\frac{u}{\sigma}\right)\frac{u}{\sigma^2}\left(\hat{\sigma} - \sigma\right) + O(n^{-1/2}) = O(n^{-1/2}).$$

Hence the order of the error is $O(n^{-1/2})$.

The good news is that the percentile-type methods (if appropriately used) can yield $\sqrt{n}$-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.

The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.

Based on these arguments, the theoretical literature (e.g. Hall, 1992, Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.
10.12 Bootstrap Methods for Regression Models

The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss bootstrap methods appropriate for the linear regression model

$$y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0.$$

The nonparametric bootstrap resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies

$$y_i^* = x_i^{*\prime}\hat{\beta} + e_i^*, \qquad E(x_i^* e_i^*) = 0,$$

but generally

$$E(e_i^* \mid x_i^*) \neq 0.$$

The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true.)

One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.

To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat{\beta} + e_i^*$. There are different ways to impose independence. A nonparametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat{e}_1, ..., \hat{e}_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat{\sigma}^2)$.

For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, ..., x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really "fixed" or random.

The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that

$$E(e_i^* \mid x_i) = 0$$
$$E(e_i^{*2} \mid x_i) = \hat{e}_i^2$$
$$E(e_i^{*3} \mid x_i) = \hat{e}_i^3.$$

A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form

$$\Pr\left(e_i^* = \left(\frac{1 + \sqrt{5}}{2}\right)\hat{e}_i\right) = \frac{\sqrt{5} - 1}{2\sqrt{5}}$$
$$\Pr\left(e_i^* = \left(\frac{1 - \sqrt{5}}{2}\right)\hat{e}_i\right) = \frac{\sqrt{5} + 1}{2\sqrt{5}}$$

For each $x_i$, you sample $e_i^*$ using this two-point distribution.
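A draw from this two-point distribution is easy to generate. The following sketch maps a vector of OLS residuals into wild bootstrap errors; the function name and use of numpy's random generator are illustrative.

```python
import numpy as np

def wild_bootstrap_errors(e_hat, rng):
    """Draw wild bootstrap errors from the two-point distribution above (sketch)."""
    s5 = np.sqrt(5.0)
    a, b = (1.0 + s5) / 2.0, (1.0 - s5) / 2.0       # the two support points (times e_hat_i)
    p_a = (s5 - 1.0) / (2.0 * s5)                   # Pr(e*_i = a * e_hat_i)
    draw = rng.uniform(size=len(e_hat)) < p_a
    return np.where(draw, a * e_hat, b * e_hat)

rng = np.random.default_rng(0)
e_star = wild_bootstrap_errors(np.array([0.5, -1.2, 0.3]), rng)
```

One can verify directly that these weights give the draws mean zero, second moment $\hat{e}_i^2$, and third moment $\hat{e}_i^3$, conditional on the data.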
Exercises
Exercise 10.1 Let $F_n(x)$ denote the EDF of a random sample. Show that

$$\sqrt{n}\left(F_n(x) - F_0(x)\right) \xrightarrow{d} N\left(0, F_0(x)\left(1 - F_0(x)\right)\right).$$

Exercise 10.2 Take a random sample $\{y_1, ..., y_n\}$ with $\mu = E y_i$ and $\sigma^2 = \operatorname{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \overline{y}_n$. Find the population moments $E T_n$ and $\operatorname{var}(T_n)$. Let $\{y_1^*, ..., y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \overline{y}_n^*$ be its sample mean. Find the bootstrap moments $E T_n^*$ and $\operatorname{var}(T_n^*)$.

Exercise 10.3 Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat{\beta}$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat{e} = y - X\hat{\beta}$ the OLS residuals.

(a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat{e}_i) : i = 1, ..., n\}$. That is, draw a random integer $i'$ from $[1, 2, ..., n]$, and set $x^* = x_{i'}$ and $e^* = \hat{e}_{i'}$. Set $y^* = x^{*\prime}\hat{\beta} + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.

(b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat{\beta}^*$ and any other statistic of interest.

Show that this bootstrap procedure is (numerically) identical to the nonparametric bootstrap.

Exercise 10.4 Consider the following bootstrap procedure. Using the nonparametric bootstrap, generate bootstrap samples, calculate the estimate $\hat{\theta}^*$ on these samples and then calculate

$$T_n^* = (\hat{\theta}^* - \hat{\theta})/s(\hat{\theta}),$$

where $s(\hat{\theta})$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval

$$C = \left[\hat{\theta} - s(\hat{\theta})q_n^*(.95),\ \hat{\theta} - s(\hat{\theta})q_n^*(.05)\right].$$

Show that $C$ exactly equals the Alternative percentile interval (not the percentile-t interval).

Exercise 10.5 You want to test $H_0 : \theta = 0$ against $H_1 : \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat{\theta}/s(\hat{\theta}) > c$ where $c$ is picked so that Type I error is $\alpha$. You do this as follows. Using the nonparametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat{\theta}^*$ on these samples and then calculate

$$T_n^* = \hat{\theta}^*/s(\hat{\theta}^*).$$

Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat{\theta}/s(\hat{\theta}) > q_n^*(.95)$. What is wrong with this procedure?

Exercise 10.6 Suppose that in an application, $\hat{\theta} = 1.2$ and $s(\hat{\theta}) = .2$. Using the nonparametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat{\theta}^*$ is calculated on each sample. The $\hat{\theta}^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat{\theta}^*$ are .75 and 1.3, respectively.

(a) Report the 95% Efron Percentile interval for $\theta$.

(b) Report the 95% Alternative Percentile interval for $\theta$.

(c) With the given information, can you report the 95% Percentile-t interval for $\theta$?

Exercise 10.7 The datafile hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.
Chapter 11
Generalized Method of Moments
11.1 Overidentiﬁed Linear Model
Consider the linear model

$$y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0$$

where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Now we can write the model as

$$y_i = x_{1i}'\beta_1 + e_i, \qquad E(x_i e_i) = 0.$$

In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E(x_i e_i) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.

This is a special case of a more general class of moment condition models. Let $g(y, x, z, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \ge k$ such that

$$E g(y_i, x_i, z_i, \beta_0) = 0 \qquad (11.1)$$

where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, z, \beta) = z\left(y - x_1'\beta\right)$. In econometrics, this class of models are called moment condition models. In the statistics literature, these are known as estimating equations.

As an important special case we will devote special attention to linear moment condition models, which can be written as

$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0,$$

where the dimensions of $x_i$ and $z_i$ are $k \times 1$ and $\ell \times 1$, with $\ell \ge k$. If $k = \ell$ the model is just identified, otherwise it is overidentified. The variables $x_i$ may be components and functions of $z_i$, but this is not required. This model falls in the class (11.1) by setting

$$g(y, x, z, \beta_0) = z\left(y - x'\beta\right). \qquad (11.2)$$
11.2 GMM Estimator

Define the sample analog of (11.2)

$$\overline{g}_n(\beta) = \frac{1}{n} \sum_{i=1}^n g_i(\beta) = \frac{1}{n} \sum_{i=1}^n z_i\left(y_i - x_i'\beta\right) = \frac{1}{n}\left(Z'y - Z'X\beta\right). \qquad (11.3)$$

The method of moments estimator for $\beta$ is defined as the parameter value which sets $\overline{g}_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $\overline{g}_n(\beta)$ "close" to zero.

For some $\ell \times \ell$ weight matrix $W_n > 0$, let

$$J_n(\beta) = n \cdot \overline{g}_n(\beta)' W_n \overline{g}_n(\beta).$$

This is a non-negative measure of the "length" of the vector $\overline{g}_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n \cdot \overline{g}_n(\beta)'\overline{g}_n(\beta) = n \cdot \|\overline{g}_n(\beta)\|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 11.2.1 $\hat{\beta}_{GMM} = \operatorname{argmin}_{\beta} J_n(\beta)$.

Note that if $k = \ell$, then $\overline{g}_n(\hat{\beta}) = 0$, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are

$$0 = \frac{\partial}{\partial \beta} J_n(\hat{\beta}) = 2 \frac{\partial}{\partial \beta} \overline{g}_n(\hat{\beta})' W_n \overline{g}_n(\hat{\beta}) = -2\left(\frac{1}{n} X'Z\right) W_n \left(\frac{1}{n} Z'\left(y - X\hat{\beta}\right)\right)$$

so

$$2\left(X'Z\right) W_n \left(Z'X\right)\hat{\beta} = 2\left(X'Z\right) W_n \left(Z'y\right),$$

which establishes the following.

Proposition 11.2.1

$$\hat{\beta}_{GMM} = \left(X'Z W_n Z'X\right)^{-1} X'Z W_n Z'y.$$

While the estimator depends on $W_n$, the dependence is only up to scale, for if $W_n$ is replaced by $cW_n$ for some $c > 0$, $\hat{\beta}_{GMM}$ does not change.
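A minimal numerical sketch of the formula in Proposition 11.2.1 follows. It treats y, X, Z, and the weight matrix W as generic numpy arrays; the function name is illustrative.

```python
import numpy as np

def gmm_linear(y, X, Z, W):
    """Linear GMM: beta = (X'Z W Z'X)^{-1} X'Z W Z'y (Proposition 11.2.1)."""
    XZ = X.T @ Z
    A = XZ @ W @ XZ.T          # X'Z W Z'X
    b = XZ @ W @ (Z.T @ y)     # X'Z W Z'y
    return np.linalg.solve(A, b)
```

Rescaling W by any positive constant leaves the output unchanged, matching the remark above.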
11.3 Distribution of GMM Estimator

Assume that $W_n \xrightarrow{p} W > 0$. Let

$$Q = E\left(z_i x_i'\right)$$

and

$$\Omega = E\left(z_i z_i' e_i^2\right) = E\left(g_i g_i'\right),$$

where $g_i = z_i e_i$. Then

$$\left(\frac{1}{n} X'Z\right) W_n \left(\frac{1}{n} Z'X\right) \xrightarrow{p} Q'WQ$$

and

$$\left(\frac{1}{n} X'Z\right) W_n \left(\frac{1}{\sqrt{n}} Z'e\right) \xrightarrow{d} Q'W\, N(0, \Omega).$$

We conclude:

Theorem 11.3.1 Asymptotic Distribution of GMM Estimator

$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} N(0, V_{\beta}), \qquad \text{where} \qquad V_{\beta} = \left(Q'WQ\right)^{-1}\left(Q'W\Omega WQ\right)\left(Q'WQ\right)^{-1}.$$

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.

The optimal weight matrix $W_0$ is one which minimizes $V_{\beta}$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:

$$\hat{\beta} = \left(X'Z\Omega^{-1}Z'X\right)^{-1} X'Z\Omega^{-1}Z'y.$$

Thus we have

Theorem 11.3.2 Asymptotic Distribution of Efficient GMM Estimator

$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} N\left(0, \left(Q'\Omega^{-1}Q\right)^{-1}\right).$$

$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat{\beta}$ the efficient GMM estimator, as it has the same asymptotic distribution.

By "efficient", we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices $W_n$. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).

If it is known that $E(g_i(\beta)) = 0$, and this is all that is known, this is a semiparametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than $\left(G'\Omega^{-1}G\right)^{-1}$ where $G = E \frac{\partial}{\partial \beta'} g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.

This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.
11.4 Estimation of the Efficient Weight Matrix

Given any weight matrix $W_n > 0$, the GMM estimator $\hat{\beta}$ is consistent yet inefficient. For example, we can set $W_n = I_{\ell}$. In the linear model, a better choice is $W_n = (Z'Z)^{-1}$. Given any such first-step estimator, we can define the residuals $\hat{e}_i = y_i - x_i'\hat{\beta}$ and moment equations $\hat{g}_i = z_i\hat{e}_i = g(y_i, x_i, z_i, \hat{\beta})$. Construct

$$\overline{g}_n = \overline{g}_n(\hat{\beta}) = \frac{1}{n} \sum_{i=1}^n \hat{g}_i, \qquad \hat{g}_i^* = \hat{g}_i - \overline{g}_n,$$

and define

$$W_n = \left(\frac{1}{n} \sum_{i=1}^n \hat{g}_i^* \hat{g}_i^{*\prime}\right)^{-1} = \left(\frac{1}{n} \sum_{i=1}^n \hat{g}_i \hat{g}_i' - \overline{g}_n\overline{g}_n'\right)^{-1}. \qquad (11.4)$$

Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.

A common alternative choice is to set

$$W_n = \left(\frac{1}{n} \sum_{i=1}^n \hat{g}_i \hat{g}_i'\right)^{-1}$$

which uses the uncentered moment conditions. Since $E g_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $E g_i \neq 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (11.4) above.

Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = (Z'Z)^{-1}$, estimate $\hat{\beta}$ using this weight matrix, and construct the residual $\hat{e}_i = y_i - x_i'\hat{\beta}$. Then set $\hat{g}_i = z_i\hat{e}_i$, and let $\hat{g}$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is

$$\hat{\beta} = \left(X'Z\left(\hat{g}'\hat{g} - n\overline{g}_n\overline{g}_n'\right)^{-1}Z'X\right)^{-1} X'Z\left(\hat{g}'\hat{g} - n\overline{g}_n\overline{g}_n'\right)^{-1}Z'y.$$

In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.

An estimator of the asymptotic variance of $\hat{\beta}$ can be seen from the above formula. Set

$$\hat{V} = n\left(X'Z\left(\hat{g}'\hat{g} - n\overline{g}_n\overline{g}_n'\right)^{-1}Z'X\right)^{-1}.$$

Asymptotic standard errors are given by the square roots of the diagonal elements of $\hat{V}$.
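A sketch of this two-step recipe, for the linear model with instrument matrix Z, follows. It uses (Z'Z)^{-1} in the first step and the centered weight matrix (11.4) in the second; the names and implementation details are illustrative.

```python
import numpy as np

def two_step_gmm(y, X, Z):
    """Two-step efficient GMM for the linear model (sketch of the recipe in the text)."""
    n = len(y)
    XZ = X.T @ Z
    W1 = np.linalg.inv(Z.T @ Z)                                 # first-step weight matrix
    b1 = np.linalg.solve(XZ @ W1 @ XZ.T, XZ @ W1 @ Z.T @ y)     # first-step (2SLS) estimate
    e = y - X @ b1
    g = Z * e[:, None]                                          # rows are ghat_i' = (z_i ehat_i)'
    gbar = g.mean(axis=0)
    S = g.T @ g - n * np.outer(gbar, gbar)                      # ghat'ghat - n gbar gbar'
    W2 = np.linalg.inv(S)                                       # efficient weight matrix (up to scale)
    b2 = np.linalg.solve(XZ @ W2 @ XZ.T, XZ @ W2 @ Z.T @ y)     # efficient GMM estimate
    V = n * np.linalg.inv(XZ @ W2 @ XZ.T)                       # V-hat from the formula above
    se = np.sqrt(np.diag(V) / n)                                # standard errors for b2
    return b2, se
```

Rescaling the second-step weight matrix (for example by n) leaves the estimate unchanged, as noted in Section 11.2.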
There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of $\beta$. The criterion function is then

$$J(\beta) = n \cdot \overline{g}_n(\beta)' \left(\frac{1}{n} \sum_{i=1}^n g_i^*(\beta) g_i^*(\beta)'\right)^{-1} \overline{g}_n(\beta),$$

where

$$g_i^*(\beta) = g_i(\beta) - \overline{g}_n(\beta).$$

The $\hat{\beta}$ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).

The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.
11.5 GMM: The General Case

In its most general form, GMM applies whenever an economic or statistical model implies the $\ell \times 1$ moment condition

$$E(g_i(\beta)) = 0.$$

Often, this is all that is known. Identification requires $\ell \ge k = \dim(\beta)$. The GMM estimator minimizes

$$J(\beta) = n \cdot \overline{g}_n(\beta)' W_n \overline{g}_n(\beta)$$

where

$$\overline{g}_n(\beta) = \frac{1}{n} \sum_{i=1}^n g_i(\beta)$$

and

$$W_n = \left(\frac{1}{n} \sum_{i=1}^n \hat{g}_i \hat{g}_i' - \overline{g}_n\overline{g}_n'\right)^{-1},$$

with $\hat{g}_i = g_i(\tilde{\beta})$ constructed using a preliminary consistent estimator $\tilde{\beta}$, perhaps obtained by first setting $W_n = I$. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix $W_n$ is updated, and then $\hat{\beta}$ recomputed. This estimator can be iterated if needed.

Theorem 11.5.1 Distribution of Nonlinear GMM Estimator

Under general regularity conditions,

$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} N\left(0, \left(G'\Omega^{-1}G\right)^{-1}\right),$$

where

$$\Omega = E\left(g_i g_i'\right) \qquad \text{and} \qquad G = E \frac{\partial}{\partial \beta'} g_i(\beta).$$

The variance of $\hat{\beta}$ may be estimated by

$$\hat{V}_{\beta} = \left(\hat{G}'\hat{\Omega}^{-1}\hat{G}\right)^{-1}$$

where

$$\hat{\Omega} = n^{-1} \sum_i \hat{g}_i^* \hat{g}_i^{*\prime} \qquad \text{and} \qquad \hat{G} = n^{-1} \sum_i \frac{\partial}{\partial \beta'} g_i(\hat{\beta}).$$

The general theory of GMM estimation and testing was exposited by L. Hansen (1982).
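For the general (possibly nonlinear) case, the criterion can be minimized numerically. The sketch below assumes the user supplies a function returning the n x l matrix whose rows are g_i(beta); the optimizer choice (a derivative-free Nelder-Mead search from scipy) and the two-step structure are illustrative assumptions rather than prescriptions from the text.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_general(g, beta0, W=None):
    """Minimize J(beta) = n * gbar(beta)' W gbar(beta) for a user-supplied moment matrix g(beta).

    g(beta) must return an n x l array whose rows are g_i(beta). A sketch, not a
    general-purpose implementation."""
    n, l = g(np.asarray(beta0)).shape
    if W is None:
        W = np.eye(l)

    def J(beta):
        gbar = g(beta).mean(axis=0)
        return n * gbar @ W @ gbar

    res = minimize(J, np.asarray(beta0, dtype=float), method="Nelder-Mead")
    return res.x

def two_step(g, beta0):
    """First step with W = I, second step with the centered weight matrix."""
    b1 = gmm_general(g, beta0)
    gi = g(b1)
    gstar = gi - gi.mean(axis=0)
    W2 = np.linalg.inv(gstar.T @ gstar / len(gi))
    return gmm_general(g, b1, W2)
```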
11.6 Over-Identification Test

Overidentified models ($\ell > k$) are special in the sense that there may not be a parameter value $\beta$ such that the moment condition

$$E g(y_i, x_i, z_i, \beta) = 0$$

holds. Thus the model — the overidentifying restrictions — are testable.

For example, take the linear model $y_i = \beta_1'x_{1i} + \beta_2'x_{2i} + e_i$ with $E(x_{1i}e_i) = 0$ and $E(x_{2i}e_i) = 0$. It is possible that $\beta_2 = 0$, so that the linear equation may be written as $y_i = \beta_1'x_{1i} + e_i$. However, it is possible that $\beta_2 \neq 0$, and in this case it would be impossible to find a value of $\beta_1$ so that both $E\left(x_{1i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ and $E\left(x_{2i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.

Note that $\overline{g}_n \xrightarrow{p} E g_i$, and thus $\overline{g}_n$ can be used to assess whether or not the hypothesis that $E g_i = 0$ is true or not. The criterion function at the parameter estimates is

$$J_n = n\, \overline{g}_n' W_n \overline{g}_n = n^2\, \overline{g}_n' \left(\hat{g}'\hat{g} - n\overline{g}_n\overline{g}_n'\right)^{-1} \overline{g}_n.$$

It is a quadratic form in $\overline{g}_n$, and is thus a natural test statistic for $H_0 : E g_i = 0$.

Theorem 11.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,

$$J_n = J_n(\hat{\beta}) \xrightarrow{d} \chi^2_{\ell - k}.$$

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.

The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic $J$ exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic $J$ whenever GMM is the estimation method.

When over-identified models are estimated by GMM, it is customary to report the $J$ statistic as a general test of model adequacy.
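The J statistic is simple to compute once the efficient GMM estimate is in hand. The sketch below does so for the linear model, comparing J_n to the chi-square distribution with l - k degrees of freedom; the function name and inputs are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def j_test(y, X, Z, beta_hat):
    """Sargan-Hansen overidentification statistic for the linear model (sketch)."""
    n, k = X.shape
    l = Z.shape[1]
    e = y - X @ beta_hat                          # residuals at the efficient GMM estimate
    g = Z * e[:, None]
    gbar = g.mean(axis=0)
    S = g.T @ g / n - np.outer(gbar, gbar)        # centered estimate of Omega
    J = n * gbar @ np.linalg.inv(S) @ gbar        # J_n = n gbar' Omega^{-1} gbar
    pval = chi2.sf(J, df=l - k)                   # compare to chi-square(l - k)
    return J, pval
```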
11.7 Hypothesis Testing: The Distance Statistic

We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses.

If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).

For a given weight matrix $W_n$, the GMM criterion function is

$$J_n(\beta) = n \cdot \overline{g}_n(\beta)' W_n \overline{g}_n(\beta).$$

For $h : \mathbb{R}^k \to \mathbb{R}^r$, the hypothesis is

$$H_0 : h(\beta) = 0.$$

The estimates under $H_1$ are

$$\hat{\beta} = \operatorname{argmin}_{\beta} J_n(\beta)$$

and those under $H_0$ are

$$\tilde{\beta} = \operatorname{argmin}_{h(\beta)=0} J_n(\beta).$$

The two minimizing criterion functions are $J_n(\hat{\beta})$ and $J_n(\tilde{\beta})$. The GMM distance statistic is the difference

$$D_n = J_n(\tilde{\beta}) - J_n(\hat{\beta}).$$

Proposition 11.7.1 If the same weight matrix $W_n$ is used for both null and alternative,

1. $D_n \ge 0$

2. $D_n \xrightarrow{d} \chi^2_r$

3. If $h$ is linear in $\beta$, then $D_n$ equals the Wald statistic.

If $h$ is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the $D_n$ statistic appears to have quite good sampling properties, and is the preferred test statistic.

Newey and West (1987) suggested to use the same weight matrix $W_n$ for both null and alternative, as this ensures that $D_n \ge 0$. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.

This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.
11.8 Conditional Moment Restrictions

In many contexts, the model implies more than an unconditional moment restriction of the form $E g_i(\beta) = 0$. It implies a conditional moment restriction of the form

$$E(e_i(\beta) \mid z_i) = 0$$

where $e_i(\beta)$ is some $s \times 1$ function of the observation and the parameters. In many cases, $s = 1$.

It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.

Our linear model $y_i = x_i'\beta + e_i$ with instruments $z_i$ falls into this class under the stronger assumption $E(e_i \mid z_i) = 0$. Then $e_i(\beta) = y_i - x_i'\beta$.

It is also helpful to realize that conventional regression models also fall into this class, except that in this case $x_i = z_i$. For example, in linear regression, $e_i(\beta) = y_i - x_i'\beta$, while in a nonlinear regression model $e_i(\beta) = y_i - g(x_i, \beta)$. In a joint model of the conditional mean and variance,

$$e_i(\beta, \gamma) = \begin{pmatrix} y_i - x_i'\beta \\ \left(y_i - x_i'\beta\right)^2 - f(x_i)'\gamma \end{pmatrix}.$$

Here $s = 2$.

Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any $\ell \times 1$ function $\phi(x_i, \beta)$, we can set $g_i(\beta) = \phi(x_i, \beta) e_i(\beta)$ which satisfies $E g_i(\beta) = 0$ and hence defines a GMM estimator. The obvious problem is that the class of functions $\phi$ is infinite. Which should be selected?

This is equivalent to the problem of selection of the best instruments. If $x_i \in \mathbb{R}$ is a valid instrument satisfying $E(e_i \mid x_i) = 0$, then $x_i$, $x_i^2$, $x_i^3$, ..., etc., are all valid instruments. Which should be used?

One solution is to construct an infinite list of potent instruments, and then use the first $k$ instruments. How is $k$ to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).

Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case $s = 1$. Let

$$R_i = E\left(\frac{\partial}{\partial \beta} e_i(\beta) \mid z_i\right)$$

and

$$\sigma_i^2 = E\left(e_i(\beta)^2 \mid z_i\right).$$

Then the "optimal instrument" is

$$A_i = -\sigma_i^{-2} R_i$$

so the optimal moment is

$$g_i(\beta) = A_i e_i(\beta).$$

Setting $g_i(\beta)$ to be this choice (which is $k \times 1$, so is just-identified) yields the best GMM estimator possible.

In practice, $A_i$ is unknown, but its form does help us think about construction of optimal instruments.

In the linear model $e_i(\beta) = y_i - x_i'\beta$, note that

$$R_i = -E(x_i \mid z_i)$$

and

$$\sigma_i^2 = E\left(e_i^2 \mid z_i\right),$$

so

$$A_i = \sigma_i^{-2} E(x_i \mid z_i).$$

In the case of linear regression, $x_i = z_i$, so $A_i = \sigma_i^{-2} z_i$. Hence efficient GMM is GLS, as we discussed earlier in the course.

In the case of endogenous variables, note that the efficient instrument $A_i$ involves the estimation of the conditional mean of $x_i$ given $z_i$. In other words, to get the best instrument for $x_i$, we need the best conditional mean model for $x_i$ given $z_i$, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of $e_i$. This is the same as the GLS estimator; namely that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.
11.9 Bootstrap GMM Inference

Let $\hat{\beta}$ be the 2SLS or GMM estimator of $\beta$. Using the EDF of $(y_i, z_i, x_i)$, we can apply the bootstrap methods discussed in Chapter 10 to compute estimates of the bias and variance of $\hat{\beta}$, and construct confidence intervals for $\beta$, identically as in the regression model. However, caution should be applied when interpreting such results.

A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied $J$ test will yield the wrong answer.

The problem is that in the sample, $\hat{\beta}$ is the "true" value and yet $\overline{g}_n(\hat{\beta}) \neq 0$. Thus according to random variables $(y_i^*, z_i^*, x_i^*)$ drawn from the EDF $F_n$,

$$E\left(g_i\left(\hat{\beta}\right)\right) = \overline{g}_n(\hat{\beta}) \neq 0.$$

This means that $(y_i^*, z_i^*, x_i^*)$ do not satisfy the same moment conditions as the population distribution.

A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample $(y^*, Z^*, X^*)$, define the bootstrap GMM criterion

$$J_n^*(\beta) = n \cdot \left(\overline{g}_n^*(\beta) - \overline{g}_n(\hat{\beta})\right)' W_n^* \left(\overline{g}_n^*(\beta) - \overline{g}_n(\hat{\beta})\right)$$

where $\overline{g}_n(\hat{\beta})$ is from the in-sample data, not from the bootstrap data.

Let $\hat{\beta}^*$ minimize $J_n^*(\beta)$, and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is

$$\hat{\beta}_n^* = \left(X^{*\prime}Z^* W_n^* Z^{*\prime}X^*\right)^{-1} X^{*\prime}Z^* W_n^* \left(Z^{*\prime}y^* - Z'\hat{e}\right),$$

where $\hat{e} = y - X\hat{\beta}$ are the in-sample residuals. The bootstrap $J$ statistic is $J_n^*(\hat{\beta}^*)$.

Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities $\hat{p}_i$ described in Chapter 12. Since $\sum_{i=1}^n \hat{p}_i g_i(\hat{\beta}) = 0$, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments is needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
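A sketch of the Hall-Horowitz recentering for the linear model follows, using the (Z*'Z*)^{-1} weight matrix on each bootstrap sample. The function name, the number of replications, and the choice of weight matrix are illustrative assumptions.

```python
import numpy as np

def hall_horowitz_boot(y, X, Z, beta_hat, B=499, seed=0):
    """Recentered (Hall-Horowitz) bootstrap for the linear GMM/2SLS estimator (sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    recenter = Z.T @ (y - X @ beta_hat)              # Z'ehat from the original sample
    draws = []
    for _ in range(B):
        idx = rng.integers(n, size=n)                # resample observations with replacement
        yb, Xb, Zb = y[idx], X[idx], Z[idx]
        Wb = np.linalg.inv(Zb.T @ Zb)
        XZ = Xb.T @ Zb
        rhs = XZ @ Wb @ (Zb.T @ yb - recenter)       # recentered moment: Z*'y* - Z'ehat
        draws.append(np.linalg.solve(XZ @ Wb @ XZ.T, rhs))
    return np.array(draws)
```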
Exercises
Exercise 11.1 Take the model

$$y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0$$
$$e_i^2 = z_i'\gamma + \eta_i, \qquad E(z_i \eta_i) = 0.$$

Find the method of moments estimators $(\hat{\beta}, \hat{\gamma})$ for $(\beta, \gamma)$.

Exercise 11.2 Take the single equation

$$y = X\beta + e, \qquad E(e \mid Z) = 0.$$

Assume $E\left(e_i^2 \mid z_i\right) = \sigma^2$. Show that if $\hat{\beta}$ is estimated by GMM with weight matrix $W_n = (Z'Z)^{-1}$, then

$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} N\left(0, \sigma^2\left(Q'M^{-1}Q\right)^{-1}\right)$$

where $Q = E(z_i x_i')$ and $M = E(z_i z_i')$.

Exercise 11.3 Take the model $y_i = x_i'\beta + e_i$ with $E(z_i e_i) = 0$. Let $\hat{e}_i = y_i - x_i'\hat{\beta}$ where $\hat{\beta}$ is consistent for $\beta$ (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix

$$W_n = \left(\frac{1}{n} \sum_{i=1}^n z_i z_i' \hat{e}_i^2\right)^{-1}.$$

Show that $W_n \xrightarrow{p} \Omega^{-1}$ where $\Omega = E\left(z_i z_i' e_i^2\right)$.

Exercise 11.4 In the linear model estimated by GMM with general weight matrix $W$, the asymptotic variance of $\hat{\beta}_{GMM}$ is

$$V = \left(Q'WQ\right)^{-1} Q'W\Omega WQ\left(Q'WQ\right)^{-1}.$$

(a) Let $V_0$ be this matrix when $W = \Omega^{-1}$. Show that $V_0 = \left(Q'\Omega^{-1}Q\right)^{-1}$.

(b) We want to show that for any $W$, $V - V_0$ is positive semi-definite (for then $V_0$ is the smaller possible covariance matrix and $W = \Omega^{-1}$ is the efficient weight matrix). To do this, start by finding matrices $A$ and $B$ such that $V = A'\Omega A$ and $V_0 = B'\Omega B$.

(c) Show that $B'\Omega A = B'\Omega B$ and therefore that $B'\Omega(A - B) = 0$.

(d) Use the expressions $V = A'\Omega A$, $A = B + (A - B)$, and $B'\Omega(A - B) = 0$ to show that $V \ge V_0$.

Exercise 11.5 The equation of interest is

$$y_i = g(x_i, \beta) + e_i, \qquad E(z_i e_i) = 0.$$

The observed data is $(y_i, z_i, x_i)$. $z_i$ is $\ell \times 1$ and $\beta$ is $k \times 1$, $\ell \ge k$. Show how to construct an efficient GMM estimator for $\beta$.

Exercise 11.6 In the linear model $y = X\beta + e$ with $E(x_i e_i) = 0$, a Generalized Method of Moments (GMM) criterion function for $\beta$ is defined as

$$J_n(\beta) = \frac{1}{n}\left(y - X\beta\right)'X\hat{\Omega}^{-1}X'\left(y - X\beta\right) \qquad (11.5)$$

where $\hat{\Omega} = \frac{1}{n}\sum_{i=1}^n x_i x_i' \hat{e}_i^2$, $\hat{e}_i = y_i - x_i'\hat{\beta}$ are the OLS residuals, and $\hat{\beta} = (X'X)^{-1}X'y$ is LS. The GMM estimator of $\beta$, subject to the restriction $h(\beta) = 0$, is defined as

$$\tilde{\beta} = \operatorname{argmin}_{h(\beta)=0} J_n(\beta).$$

The GMM test statistic (the distance statistic) of the hypothesis $h(\beta) = 0$ is

$$D = J_n(\tilde{\beta}) = \min_{h(\beta)=0} J_n(\beta). \qquad (11.6)$$

(a) Show that you can rewrite $J_n(\beta)$ in (11.5) as

$$J_n(\beta) = n\left(\beta - \hat{\beta}\right)'\hat{V}^{-1}\left(\beta - \hat{\beta}\right),$$

thus $\tilde{\beta}$ is the same as the minimum distance estimator.

(b) Show that in this setting, the distance statistic $D$ in (11.6) equals the Wald statistic.

Exercise 11.7 Take the linear model

$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0,$$

and consider the GMM estimator $\hat{\beta}$ of $\beta$. Let

$$J_n = n\,\overline{g}_n(\hat{\beta})'\hat{\Omega}^{-1}\overline{g}_n(\hat{\beta})$$

denote the test of overidentifying restrictions. Show that $J_n \xrightarrow{d} \chi^2_{\ell-k}$ as $n \to \infty$ by demonstrating each of the following:

(a) Since $\Omega > 0$, we can write $\Omega^{-1} = CC'$ and $\Omega = C'^{-1}C^{-1}$.

(b) $J_n = n\left(C'\overline{g}_n(\hat{\beta})\right)'\left(C'\hat{\Omega}C\right)^{-1}\left(C'\overline{g}_n(\hat{\beta})\right)$.

(c) $C'\overline{g}_n(\hat{\beta}) = D_n C'\overline{g}_n(\beta_0)$ where

$$D_n = I_{\ell} - C'\left(\frac{1}{n}Z'X\right)\left(\left(\frac{1}{n}X'Z\right)\hat{\Omega}^{-1}\left(\frac{1}{n}Z'X\right)\right)^{-1}\left(\frac{1}{n}X'Z\right)\hat{\Omega}^{-1}C'^{-1}$$

and $\overline{g}_n(\beta_0) = \frac{1}{n}Z'e$.

(d) $D_n \xrightarrow{p} I_{\ell} - R(R'R)^{-1}R'$ where $R = C'E(z_i x_i')$.

(e) $n^{1/2}C'\overline{g}_n(\beta_0) \xrightarrow{d} u \sim N(0, I_{\ell})$.

(f) $J_n \xrightarrow{d} u'\left(I_{\ell} - R(R'R)^{-1}R'\right)u$.

(g) $u'\left(I_{\ell} - R(R'R)^{-1}R'\right)u \sim \chi^2_{\ell-k}$.

Hint: $I_{\ell} - R(R'R)^{-1}R'$ is a projection matrix.
Chapter 12
Empirical Likelihood
12.1 Non-Parametric Likelihood

An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric analog of likelihood estimation.

The idea is to construct a multinomial distribution $F(p_1, ..., p_n)$ which places probability $p_i$ at each observation. To be a valid multinomial distribution, these probabilities must satisfy the requirements that $p_i \ge 0$ and

$$\sum_{i=1}^n p_i = 1. \qquad (12.1)$$

Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is

$$\log L(p_1, ..., p_n) = \sum_{i=1}^n \log(p_i). \qquad (12.2)$$

First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities $(p_1, ..., p_n)$ are those which maximize the log-likelihood subject to the constraint (12.1). This is equivalent to maximizing

$$\sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right)$$

where $\mu$ is a Lagrange multiplier. The $n$ first order conditions are $0 = p_i^{-1} - \mu$. Combined with the constraint (12.1) we find that the MLE is $p_i = n^{-1}$ yielding the log-likelihood $-n\log(n)$.

Now consider the case of an over-identified model with moment condition

$$E g_i(\beta_0) = 0$$

where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$ and for simplicity we write $g_i(\beta) = g(y_i, z_i, x_i, \beta)$. The multinomial distribution which places probability $p_i$ at each observation $(y_i, x_i, z_i)$ will satisfy this condition if and only if

$$\sum_{i=1}^n p_i g_i(\beta) = 0. \qquad (12.3)$$

The empirical likelihood estimator is the value of $\beta$ which maximizes the multinomial log-likelihood (12.2) subject to the restrictions (12.1) and (12.3).

The Lagrangian for this maximization problem is

$$\mathcal{L}(\beta, p_1, ..., p_n, \lambda, \mu) = \sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right) - n\lambda'\sum_{i=1}^n p_i g_i(\beta)$$

where $\lambda$ and $\mu$ are Lagrange multipliers. The first-order conditions of $\mathcal{L}$ with respect to $p_i$, $\mu$, and $\lambda$ are

$$\frac{1}{p_i} = \mu + n\lambda'g_i(\beta)$$
$$\sum_{i=1}^n p_i = 1$$
$$\sum_{i=1}^n p_i g_i(\beta) = 0.$$

Multiplying the first equation by $p_i$, summing over $i$, and using the second and third equations, we find $\mu = n$ and

$$p_i = \frac{1}{n\left(1 + \lambda'g_i(\beta)\right)}.$$

Substituting into $\mathcal{L}$ we find

$$R(\beta, \lambda) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda'g_i(\beta)\right). \qquad (12.4)$$

For given $\beta$, the Lagrange multiplier $\lambda(\beta)$ minimizes $R(\beta, \lambda)$:

$$\lambda(\beta) = \operatorname{argmin}_{\lambda} R(\beta, \lambda). \qquad (12.5)$$

This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since $R(\beta, \lambda)$ is a convex function of $\lambda$. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 12.5). This yields the (profile) empirical log-likelihood function for $\beta$,

$$R(\beta) = R(\beta, \lambda(\beta)) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda(\beta)'g_i(\beta)\right).$$

The EL estimate $\hat{\beta}$ is the value which maximizes $R(\beta)$, or equivalently minimizes its negative

$$\hat{\beta} = \operatorname{argmin}_{\beta} \left[-R(\beta)\right]. \qquad (12.6)$$

Numerical methods are required for calculation of $\hat{\beta}$ (see Section 12.5).

As a by-product of estimation, we also obtain the Lagrange multiplier $\hat{\lambda} = \lambda(\hat{\beta})$, probabilities

$$\hat{p}_i = \frac{1}{n\left(1 + \hat{\lambda}'g_i\left(\hat{\beta}\right)\right)},$$

and maximized empirical likelihood

$$R(\hat{\beta}) = \sum_{i=1}^n \log(\hat{p}_i). \qquad (12.7)$$
12.2 Asymptotic Distribution of EL Estimator

Define

$$G_i(\beta) = \frac{\partial}{\partial \beta'} g_i(\beta) \qquad (12.8)$$
$$G = E\, G_i(\beta_0)$$
$$\Omega = E\left(g_i(\beta_0)\, g_i(\beta_0)'\right)$$

and

$$V = \left(G'\Omega^{-1}G\right)^{-1} \qquad (12.9)$$
$$V_{\lambda} = \Omega - G\left(G'\Omega^{-1}G\right)^{-1}G'. \qquad (12.10)$$

For example, in the linear model, $G_i(\beta) = -z_i x_i'$, $G = -E(z_i x_i')$, and $\Omega = E\left(z_i z_i' e_i^2\right)$.

Theorem 12.2.1 Under regularity conditions,

$$\sqrt{n}\left(\hat{\beta} - \beta_0\right) \xrightarrow{d} N(0, V)$$
$$\sqrt{n}\,\hat{\lambda} \xrightarrow{d} \Omega^{-1} N(0, V_{\lambda})$$

where $V$ and $V_{\lambda}$ are defined in (12.9) and (12.10), and $\sqrt{n}\left(\hat{\beta} - \beta_0\right)$ and $\sqrt{n}\,\hat{\lambda}$ are asymptotically independent.

The theorem shows that the asymptotic variance $V$ for $\hat{\beta}$ is the same as for efficient GMM. Thus the EL estimator is asymptotically efficient.

Chamberlain (1987) showed that $V$ is the semiparametric efficiency bound for $\beta$ in the overidentified moment condition model. This means that no consistent estimator for this class of models can have a lower asymptotic variance than $V$. Since the EL estimator achieves this bound, it is an asymptotically efficient estimator for $\beta$.

Proof of Theorem 12.2.1. $(\hat{\beta}, \hat{\lambda})$ jointly solve

$$0 = \frac{\partial}{\partial \lambda} R(\beta, \lambda) = -\sum_{i=1}^n \frac{g_i\left(\hat{\beta}\right)}{1 + \hat{\lambda}'g_i\left(\hat{\beta}\right)} \qquad (12.11)$$
$$0 = \frac{\partial}{\partial \beta} R(\beta, \lambda) = -\sum_{i=1}^n \frac{G_i\left(\hat{\beta}\right)'\hat{\lambda}}{1 + \hat{\lambda}'g_i\left(\hat{\beta}\right)}. \qquad (12.12)$$

Let $G_n = \frac{1}{n}\sum_{i=1}^n G_i(\beta_0)$, $\overline{g}_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)$ and $\Omega_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)\, g_i(\beta_0)'$.

Expanding (12.12) around $\beta = \beta_0$ and $\lambda = \lambda_0 = 0$ yields

$$0 \approx G_n'\left(\hat{\lambda} - \lambda_0\right). \qquad (12.13)$$

Expanding (12.11) around $\beta = \beta_0$ and $\lambda = \lambda_0 = 0$ yields

$$0 \approx -\overline{g}_n - G_n\left(\hat{\beta} - \beta_0\right) + \Omega_n\hat{\lambda}. \qquad (12.14)$$

Premultiplying by $G_n'\Omega_n^{-1}$ and using (12.13) yields

$$0 \approx -G_n'\Omega_n^{-1}\overline{g}_n - G_n'\Omega_n^{-1}G_n\left(\hat{\beta} - \beta_0\right) + G_n'\Omega_n^{-1}\Omega_n\hat{\lambda}$$
$$= -G_n'\Omega_n^{-1}\overline{g}_n - G_n'\Omega_n^{-1}G_n\left(\hat{\beta} - \beta_0\right).$$

Solving for $\hat{\beta}$ and using the WLLN and CLT yields

$$\sqrt{n}\left(\hat{\beta} - \beta_0\right) \approx -\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\sqrt{n}\,\overline{g}_n \qquad (12.15)$$
$$\xrightarrow{d} \left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}N(0, \Omega) = N(0, V).$$

Solving (12.14) for $\hat{\lambda}$ and using (12.15) yields

$$\sqrt{n}\,\hat{\lambda} \approx \Omega_n^{-1}\left(I - G_n\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\right)\sqrt{n}\,\overline{g}_n \qquad (12.16)$$
$$\xrightarrow{d} \Omega^{-1}\left(I - G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\right)N(0, \Omega) = \Omega^{-1}N(0, V_{\lambda}).$$

Furthermore, since

$$G'\left(I - \Omega^{-1}G\left(G'\Omega^{-1}G\right)^{-1}G'\right) = 0,$$

$\sqrt{n}\left(\hat{\beta} - \beta_0\right)$ and $\sqrt{n}\,\hat{\lambda}$ are asymptotically uncorrelated and hence independent.
12.3 Overidentifying Restrictions

In a parametric likelihood context, tests are based on the difference in the log likelihood functions. The same statistic can be constructed for empirical likelihood. Twice the difference between the unrestricted empirical log-likelihood $-n\log(n)$ and the maximized empirical log-likelihood for the model (12.7) is

$$LR_n = \sum_{i=1}^n 2\log\left(1 + \hat{\lambda}'g_i\left(\hat{\beta}\right)\right). \qquad (12.17)$$

Theorem 12.3.1 If $E g_i(\beta_0) = 0$ then $LR_n \xrightarrow{d} \chi^2_{\ell - k}$.

The EL overidentification test is similar to the GMM overidentification test. They are asymptotically first-order equivalent, and have the same interpretation. The overidentification test is a very useful by-product of EL estimation, and it is advisable to report the statistic $LR_n$ whenever EL is the estimation method.

Proof of Theorem 12.3.1. First, by a Taylor expansion, (12.15), and (12.16),

$$\frac{1}{\sqrt{n}} \sum_{i=1}^n g_i\left(\hat{\beta}\right) \approx \sqrt{n}\left(\overline{g}_n + G_n\left(\hat{\beta} - \beta_0\right)\right)$$
$$\approx \left(I - G_n\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\right)\sqrt{n}\,\overline{g}_n$$
$$\approx \Omega_n\sqrt{n}\,\hat{\lambda}.$$

Second, since $\log(1 + u) \approx u - u^2/2$ for $u$ small,

$$LR_n = \sum_{i=1}^n 2\log\left(1 + \hat{\lambda}'g_i\left(\hat{\beta}\right)\right) \approx 2\hat{\lambda}'\sum_{i=1}^n g_i\left(\hat{\beta}\right) - \hat{\lambda}'\sum_{i=1}^n g_i\left(\hat{\beta}\right)g_i\left(\hat{\beta}\right)'\hat{\lambda}$$
$$\approx n\,\hat{\lambda}'\Omega_n\hat{\lambda}$$
$$\xrightarrow{d} N(0, V_{\lambda})'\Omega^{-1}N(0, V_{\lambda}) = \chi^2_{\ell - k}$$

where the proof of the final equality is left as an exercise.
12.4 Testing

Let the maintained model be

$$E g_i(\beta) = 0 \qquad (12.18)$$

where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$. By "maintained" we mean that the overidentifying restrictions contained in (12.18) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is

$$h(\beta) = 0,$$

where $h : \mathbb{R}^k \to \mathbb{R}^a$. The restricted EL estimator and likelihood are the values which solve

$$\tilde{\beta} = \operatorname{argmax}_{h(\beta)=0} R(\beta)$$
$$R(\tilde{\beta}) = \max_{h(\beta)=0} R(\beta).$$

Fundamentally, the restricted EL estimator $\tilde{\beta}$ is simply an EL estimator with $\ell - k + a$ overidentifying restrictions, so there is no fundamental change in the distribution theory for $\tilde{\beta}$ relative to $\hat{\beta}$. To test the hypothesis $h(\beta)$ while maintaining (12.18), the simple overidentifying restrictions test (12.17) is not appropriate. Instead we use the difference in log-likelihoods:

$$LR_n = 2\left(R(\hat{\beta}) - R(\tilde{\beta})\right).$$

This test statistic is a natural analog of the GMM distance statistic.

Theorem 12.4.1 Under (12.18) and $H_0 : h(\beta) = 0$, $LR_n \xrightarrow{d} \chi^2_a$.

The proof of this result is more challenging and is omitted.
12.5 Numerical Computation

Gauss code which implements the methods discussed below can be found at

http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives

The numerical calculations depend on derivatives of the dual likelihood function (12.4). Define

$$g_i^*(\beta, \lambda) = \frac{g_i(\beta)}{1 + \lambda'g_i(\beta)}, \qquad G_i^*(\beta, \lambda) = \frac{G_i(\beta)'\lambda}{1 + \lambda'g_i(\beta)}.$$

The first derivatives of (12.4) are

$$R_{\lambda} = \frac{\partial}{\partial \lambda} R(\beta, \lambda) = -\sum_{i=1}^n g_i^*(\beta, \lambda)$$
$$R_{\beta} = \frac{\partial}{\partial \beta} R(\beta, \lambda) = -\sum_{i=1}^n G_i^*(\beta, \lambda).$$

The second derivatives are

$$R_{\lambda\lambda} = \frac{\partial^2}{\partial \lambda\, \partial \lambda'} R(\beta, \lambda) = \sum_{i=1}^n g_i^*(\beta, \lambda)\, g_i^*(\beta, \lambda)'$$
$$R_{\lambda\beta} = \frac{\partial^2}{\partial \lambda\, \partial \beta'} R(\beta, \lambda) = \sum_{i=1}^n \left(g_i^*(\beta, \lambda)\, G_i^*(\beta, \lambda)' - \frac{G_i(\beta)}{1 + \lambda'g_i(\beta)}\right)$$
$$R_{\beta\beta} = \frac{\partial^2}{\partial \beta\, \partial \beta'} R(\beta, \lambda) = \sum_{i=1}^n \left(G_i^*(\beta, \lambda)\, G_i^*(\beta, \lambda)' - \frac{\frac{\partial^2}{\partial \beta\, \partial \beta'}\left(g_i(\beta)'\lambda\right)}{1 + \lambda'g_i(\beta)}\right).$$

Inner Loop

The so-called "inner loop" solves (12.5) for given $\beta$. The modified Newton method takes a quadratic approximation to $R_n(\beta, \lambda)$ yielding the iteration rule

$$\lambda_{j+1} = \lambda_j - \delta\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1} R_{\lambda}(\beta, \lambda_j), \qquad (12.19)$$

where $\delta > 0$ is a scalar steplength (to be discussed next). The starting value $\lambda_1$ can be set to the zero vector. The iteration (12.19) is continued until the gradient $R_{\lambda}(\beta, \lambda_j)$ is smaller than some prespecified tolerance.

Efficient convergence requires a good choice of steplength $\delta$. One method uses the following quadratic approximation. Set $\delta_0 = 0$, $\delta_1 = \frac{1}{2}$ and $\delta_2 = 1$. For $p = 0, 1, 2$, set

$$\lambda_p = \lambda_j - \delta_p\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1} R_{\lambda}(\beta, \lambda_j)$$
$$R_p = R(\beta, \lambda_p).$$

A quadratic function can be fit exactly through these three points. The value of $\delta$ which minimizes this quadratic is

$$\hat{\delta} = \frac{R_2 + 3R_0 - 4R_1}{4R_2 + 4R_0 - 8R_1},$$

yielding the steplength to be plugged into (12.19).

A complication is that $\lambda$ must be constrained so that $0 \le p_i \le 1$ which holds if

$$n\left(1 + \lambda'g_i(\beta)\right) \ge 1 \qquad (12.20)$$

for all $i$. If (12.20) fails, the stepsize $\delta$ needs to be decreased.
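The inner loop can be coded in a few lines. The sketch below implements the modified Newton iteration (12.19), but replaces the quadratic step-length search with simple step halving whenever (12.20) fails; it takes as input the n x l matrix whose rows are g_i(beta) at the current beta. Names and tolerances are illustrative.

```python
import numpy as np

def el_inner_loop(g, lam=None, tol=1e-10, max_iter=100):
    """Solve for lambda(beta) given the n x l moment matrix g (rows g_i(beta)).  A sketch."""
    n, l = g.shape
    lam = np.zeros(l) if lam is None else lam
    for _ in range(max_iter):
        d = 1.0 + g @ lam                        # d_i = 1 + lambda'g_i
        gstar = g / d[:, None]                   # g*_i = g_i / (1 + lambda'g_i)
        R_lam = -gstar.sum(axis=0)               # gradient R_lambda
        if np.max(np.abs(R_lam)) < tol:
            break
        R_ll = gstar.T @ gstar                   # Hessian R_lambda,lambda
        step = np.linalg.solve(R_ll, R_lam)
        delta = 1.0
        while np.any(n * (1.0 + g @ (lam - delta * step)) < 1.0):
            delta /= 2.0                         # shrink the step if (12.20) fails
        lam = lam - delta * step
    return lam
```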
Outer Loop

The outer loop is the minimization (12.6). This can be done by the modified Newton method described in the previous section. The gradient for (12.6) is

$$R_{\beta} = \frac{\partial}{\partial \beta} R(\beta) = \frac{\partial}{\partial \beta} R(\beta, \lambda) = R_{\beta} + \lambda_{\beta}'R_{\lambda} = R_{\beta},$$

since $R_{\lambda}(\beta, \lambda) = 0$ at $\lambda = \lambda(\beta)$, where

$$\lambda_{\beta} = \frac{\partial}{\partial \beta'}\lambda(\beta) = -R_{\lambda\lambda}^{-1}R_{\lambda\beta},$$

the second equality following from the implicit function theorem applied to $R_{\lambda}(\beta, \lambda(\beta)) = 0$.

The Hessian for (12.6) is

$$R_{\beta\beta} = -\frac{\partial^2}{\partial \beta\, \partial \beta'} R(\beta)$$
$$= -\frac{\partial}{\partial \beta'}\left(R_{\beta}(\beta, \lambda(\beta)) + \lambda_{\beta}'R_{\lambda}(\beta, \lambda(\beta))\right)$$
$$= -\left(R_{\beta\beta}(\beta, \lambda(\beta)) + R_{\lambda\beta}'\lambda_{\beta} + \lambda_{\beta}'R_{\lambda\beta} + \lambda_{\beta}'R_{\lambda\lambda}\lambda_{\beta}\right)$$
$$= R_{\lambda\beta}'R_{\lambda\lambda}^{-1}R_{\lambda\beta} - R_{\beta\beta}.$$

It is not guaranteed that $R_{\beta\beta} > 0$. If not, the eigenvalues of $R_{\beta\beta}$ should be adjusted so that all are positive. The Newton iteration rule is

$$\beta_{j+1} = \beta_j - \delta R_{\beta\beta}^{-1}R_{\beta}$$

where $\delta$ is a scalar stepsize, and the rule is iterated until convergence.
Chapter 13
Endogeneity
We say that there is endogeneity in the linear model $y_i = x_i'\beta + e_i$ if $\beta$ is the parameter of interest and $E(x_i e_i) \neq 0$. This cannot happen if $\beta$ is defined by linear projection, so requires a structural interpretation. The coefficient $\beta$ must have meaning separately from the definition of a conditional mean or linear projection.

Example: Measurement error in the regressor. Suppose that $(y_i, x_i^*)$ are joint random variables, $E(y_i \mid x_i^*) = x_i^{*\prime}\beta$ is linear, $\beta$ is the parameter of interest, and $x_i^*$ is not observed. Instead we observe $x_i = x_i^* + u_i$ where $u_i$ is a $k \times 1$ measurement error, independent of $y_i$ and $x_i^*$. Then

$$y_i = x_i^{*\prime}\beta + e_i = (x_i - u_i)'\beta + e_i = x_i'\beta + v_i$$

where

$$v_i = e_i - u_i'\beta.$$

The problem is that

$$E(x_i v_i) = E\left((x_i^* + u_i)\left(e_i - u_i'\beta\right)\right) = -E\left(u_i u_i'\right)\beta \neq 0$$

if $\beta \neq 0$ and $E(u_i u_i') \neq 0$. It follows that if $\hat{\beta}$ is the OLS estimator, then

$$\hat{\beta} \xrightarrow{p} \beta^* = \beta - \left(E\left(x_i x_i'\right)\right)^{-1}E\left(u_i u_i'\right)\beta \neq \beta.$$

This is called measurement error bias.

Example: Supply and Demand. The variables $q_i$ and $p_i$ (quantity and price) are determined jointly by the demand equation

$$q_i = -\beta_1 p_i + e_{1i}$$

and the supply equation

$$q_i = \beta_2 p_i + e_{2i}.$$

Assume that $e_i = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix}$ is iid, $E e_i = 0$, $\beta_1 + \beta_2 = 1$ and $E e_i e_i' = I_2$ (the latter for simplicity). The question is, if we regress $q_i$ on $p_i$, what happens?

It is helpful to solve for $q_i$ and $p_i$ in terms of the errors. In matrix notation,

$$\begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix}$$

so

$$\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}^{-1}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{pmatrix} \beta_2 & \beta_1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{pmatrix} \beta_2 e_{1i} + \beta_1 e_{2i} \\ e_{1i} - e_{2i} \end{pmatrix}.$$

The projection of $q_i$ on $p_i$ yields

$$q_i = \beta^* p_i + \varepsilon_i, \qquad E(p_i\varepsilon_i) = 0$$

where

$$\beta^* = \frac{E(p_i q_i)}{E\left(p_i^2\right)} = \frac{\beta_2 - \beta_1}{2}.$$

Hence if it is estimated by OLS, $\hat{\beta} \xrightarrow{p} \beta^*$, which does not equal either $\beta_1$ or $\beta_2$. This is called simultaneous equations bias.
13.1 Instrumental Variables

Let the equation of interest be

$$y_i = x_i'\beta + e_i \qquad (13.1)$$

where $x_i$ is $k \times 1$, and assume that $E(x_i e_i) \neq 0$ so there is endogeneity. We call (13.1) the structural equation. In matrix notation, this can be written as

$$y = X\beta + e. \qquad (13.2)$$

Any solution to the problem of endogeneity requires additional information which we call instruments.

Definition 13.1.1 The $\ell \times 1$ random vector $z_i$ is an instrumental variable for (13.1) if $E(z_i e_i) = 0$.

In a typical set-up, some regressors in $x_i$ will be uncorrelated with $e_i$ (for example, at least the intercept). Thus we make the partition

$$x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \end{pmatrix} \begin{matrix} k_1 \\ k_2 \end{matrix} \qquad (13.3)$$

where $E(x_{1i}e_i) = 0$ yet $E(x_{2i}e_i) \neq 0$. We call $x_{1i}$ exogenous and $x_{2i}$ endogenous. By the above definition, $x_{1i}$ is an instrumental variable for (13.1), so should be included in $z_i$. So we have the partition

$$z_i = \begin{pmatrix} x_{1i} \\ z_{2i} \end{pmatrix} \begin{matrix} k_1 \\ \ell_2 \end{matrix} \qquad (13.4)$$

where $x_{1i} = z_{1i}$ are the included exogenous variables, and $z_{2i}$ are the excluded exogenous variables. That is, $z_{2i}$ are variables which could be included in the equation for $y_i$ (in the sense that they are uncorrelated with $e_i$) yet can be excluded, as they would have true zero coefficients in the equation.

The model is just-identified if $\ell = k$ (i.e., if $\ell_2 = k_2$) and over-identified if $\ell > k$ (i.e., if $\ell_2 > k_2$).

We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.
13.2 Reduced Form

The reduced form relationship between the variables or "regressors" $x_i$ and the instruments $z_i$ is found by linear projection. Let

$$\Gamma = E\left(z_i z_i'\right)^{-1}E\left(z_i x_i'\right)$$

be the $\ell \times k$ matrix of coefficients from a projection of $x_i$ on $z_i$, and define

$$u_i = x_i - \Gamma'z_i$$

as the projection error. Then the reduced form linear relationship between $x_i$ and $z_i$ is

$$x_i = \Gamma'z_i + u_i. \qquad (13.5)$$

In matrix notation, we can write (13.5) as

$$X = Z\Gamma + U \qquad (13.6)$$

where $U$ is $n \times k$.

By construction,

$$E(z_i u_i') = 0,$$

so (13.5) is a projection and can be estimated by OLS:

$$X = Z\hat{\Gamma} + \hat{U}, \qquad \hat{\Gamma} = \left(Z'Z\right)^{-1}Z'X.$$

Substituting (13.6) into (13.2), we find

$$y = \left(Z\Gamma + U\right)\beta + e = Z\lambda + v, \qquad (13.7)$$

where

$$\lambda = \Gamma\beta \qquad (13.8)$$

and

$$v = U\beta + e.$$

Observe that

$$E(z_i v_i) = E\left(z_i u_i'\right)\beta + E(z_i e_i) = 0.$$

Thus (13.7) is a projection equation and may be estimated by OLS. This is

$$y = Z\hat{\lambda} + \hat{v}, \qquad \hat{\lambda} = \left(Z'Z\right)^{-1}Z'y.$$

The equation (13.7) is the reduced form for $y$. (13.6) and (13.7) together are the reduced form equations for the system

$$y = Z\lambda + v$$
$$X = Z\Gamma + U.$$

As we showed above, OLS yields the reduced-form estimates $\left(\hat{\lambda}, \hat{\Gamma}\right)$.
13.3 Identification

The structural parameter $\beta$ relates to $(\lambda, \Gamma)$ through (13.8). The parameter $\beta$ is identified, meaning that it can be recovered from the reduced form, if

$$\operatorname{rank}(\Gamma) = k. \qquad (13.9)$$

Assume that (13.9) holds. If $\ell = k$, then $\beta = \Gamma^{-1}\lambda$. If $\ell > k$, then for any $W > 0$, $\beta = \left(\Gamma'W\Gamma\right)^{-1}\Gamma'W\lambda$.

If (13.9) is not satisfied, then $\beta$ cannot be recovered from $(\lambda, \Gamma)$. Note that a necessary (although not sufficient) condition for (13.9) is $\ell \ge k$.

Since $Z$ and $X$ have the common variables $X_1$, we can rewrite some of the expressions. Using (13.3) and (13.4) to make the matrix partitions $Z = [Z_1, Z_2]$ and $X = [Z_1, X_2]$, we can partition $\Gamma$ as

$$\Gamma = \begin{pmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{pmatrix} = \begin{pmatrix} I & \Gamma_{12} \\ 0 & \Gamma_{22} \end{pmatrix}.$$

(13.6) can be rewritten as

$$X_1 = Z_1$$
$$X_2 = Z_1\Gamma_{12} + Z_2\Gamma_{22} + U_2. \qquad (13.10)$$

$\beta$ is identified if $\operatorname{rank}(\Gamma) = k$, which is true if and only if $\operatorname{rank}(\Gamma_{22}) = k_2$ (by the upper-diagonal structure of $\Gamma$). Thus the key to identification of the model rests on the $\ell_2 \times k_2$ matrix $\Gamma_{22}$ in (13.10).
13.4 Estimation

The model can be written as

$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0$$

or

$$E g_i(\beta) = 0, \qquad g_i(\beta) = z_i\left(y_i - x_i'\beta\right).$$

This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in Chapters 11 and 12 directly apply. Recall that the GMM estimator, for given weight matrix $W_n$, is

$$\hat{\beta} = \left(X'ZW_nZ'X\right)^{-1}X'ZW_nZ'y.$$
13.5 Special Cases: IV and 2SLS

If the model is just-identified, so that $k = \ell$, then the formula for GMM simplifies. We find that

$$\hat{\beta} = \left(X'ZW_nZ'X\right)^{-1}X'ZW_nZ'y$$
$$= \left(Z'X\right)^{-1}W_n^{-1}\left(X'Z\right)^{-1}X'ZW_nZ'y$$
$$= \left(Z'X\right)^{-1}Z'y.$$

This estimator is often called the instrumental variables estimator (IV) of $\beta$, where $Z$ is used as an instrument for $X$. Observe that the weight matrix $W_n$ has disappeared. In the just-identified case, the weight matrix plays no role. This is also the MME estimator of $\beta$, and the EL estimator.

Another interpretation stems from the fact that since $\beta = \Gamma^{-1}\lambda$, we can construct the Indirect Least Squares (ILS) estimator:

$$\hat{\beta} = \hat{\Gamma}^{-1}\hat{\lambda}$$
$$= \left(\left(Z'Z\right)^{-1}\left(Z'X\right)\right)^{-1}\left(\left(Z'Z\right)^{-1}\left(Z'y\right)\right)$$
$$= \left(Z'X\right)^{-1}\left(Z'Z\right)\left(Z'Z\right)^{-1}\left(Z'y\right)$$
$$= \left(Z'X\right)^{-1}\left(Z'y\right),$$

which again is the IV estimator.

Recall that the optimal weight matrix is an estimate of the inverse of $\Omega = E\left(z_i z_i' e_i^2\right)$. In the special case that $E\left(e_i^2 \mid z_i\right) = \sigma^2$ (homoskedasticity), then $\Omega = E(z_i z_i')\sigma^2 \propto E(z_i z_i')$, suggesting the weight matrix $W_n = (Z'Z)^{-1}$. Using this choice, the GMM estimator equals

$$\hat{\beta}_{2SLS} = \left(X'Z\left(Z'Z\right)^{-1}Z'X\right)^{-1}X'Z\left(Z'Z\right)^{-1}Z'y.$$

This is called the two-stage-least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.

It is useful to observe that writing

$$P = Z\left(Z'Z\right)^{-1}Z', \qquad \hat{X} = PX = Z\hat{\Gamma},$$

then the 2SLS estimator is

$$\hat{\beta} = \left(X'PX\right)^{-1}X'Py = \left(\hat{X}'\hat{X}\right)^{-1}\hat{X}'y.$$

The source of the "two-stage" name is since it can be computed as follows:

• First regress $X$ on $Z$, vis., $\hat{\Gamma} = (Z'Z)^{-1}(Z'X)$ and $\hat{X} = Z\hat{\Gamma} = PX$.

• Second, regress $y$ on $\hat{X}$, vis., $\hat{\beta} = \left(\hat{X}'\hat{X}\right)^{-1}\hat{X}'y$.

It is useful to scrutinize the projection $\hat{X}$. Recall, $X = [X_1, X_2]$ and $Z = [X_1, Z_2]$. Then

$$\hat{X} = \left[\hat{X}_1, \hat{X}_2\right] = [PX_1, PX_2] = [X_1, PX_2] = \left[X_1, \hat{X}_2\right],$$

since $X_1$ lies in the span of $Z$. Thus in the second stage, we regress $y$ on $X_1$ and $\hat{X}_2$. So only the endogenous variables $X_2$ are replaced by their fitted values:

$$\hat{X}_2 = Z_1\hat{\Gamma}_{12} + Z_2\hat{\Gamma}_{22}.$$
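The two-stage computation described above takes only a few lines. The sketch below assumes generic numpy arrays y, X, Z with l >= k; the function name is illustrative.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: first stage Xhat = Z(Z'Z)^{-1}Z'X, then regress y on Xhat."""
    Gamma_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)     # first-stage coefficients
    X_hat = Z @ Gamma_hat                             # fitted values PX
    beta_hat = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
    return beta_hat
```

Because exogenous regressors lie in the span of Z, they are returned unchanged by the first stage, so only the endogenous columns of X are effectively replaced by fitted values.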
13.6 Bekker Asymptotics

Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is

$$y = X\beta + e \qquad (13.11)$$
$$X = Z\Gamma + U \qquad (13.12)$$
$$\xi = (e, U)$$
$$E(\xi \mid Z) = 0$$
$$E\left(\xi'\xi \mid Z\right) = S.$$

As before, $Z$ is $n \times \ell$ so there are $\ell$ instruments.

First, let's analyze the approximate bias of OLS applied to (13.11). Using (13.12),

$$E\left(\frac{1}{n}X'e\right) = E(x_i e_i) = \Gamma'E(z_i e_i) + E(u_i e_i) = s_{21}$$

and

$$E\left(\frac{1}{n}X'X\right) = E\left(x_i x_i'\right) = \Gamma'E\left(z_i z_i'\right)\Gamma + E\left(u_i z_i'\right)\Gamma + \Gamma'E\left(z_i u_i'\right) + E\left(u_i u_i'\right) = \Gamma'Q\Gamma + S_{22}$$

where $Q = E(z_i z_i')$. Hence by a first-order approximation

$$E\left(\hat{\beta}_{OLS} - \beta\right) \approx \left(E\left(\frac{1}{n}X'X\right)\right)^{-1}E\left(\frac{1}{n}X'e\right) = \left(\Gamma'Q\Gamma + S_{22}\right)^{-1}s_{21} \qquad (13.13)$$

which is zero only when $s_{21} = 0$ (when $X$ is exogenous).

We now derive a similar result for the 2SLS estimator.

$$\hat{\beta}_{2SLS} = \left(X'PX\right)^{-1}X'Py.$$

Let $P = Z(Z'Z)^{-1}Z'$. By the spectral decomposition of an idempotent matrix, $P = H\Lambda H'$ where $\Lambda = \operatorname{diag}(I_{\ell}, 0)$. Let $q = H'\xi S^{-1/2}$ which satisfies $E(q'q) = I_n$ and partition $q = (q_1'\ q_2')'$ where $q_1$ is $\ell \times 1$. Hence

$$E\left(\frac{1}{n}\xi'P\xi \mid Z\right) = \frac{1}{n}S^{1/2\prime}E\left(q_1'q_1\right)S^{1/2} = \frac{\ell}{n}S^{1/2\prime}S^{1/2} = \alpha S$$

where

$$\alpha = \frac{\ell}{n}.$$

Using (13.12) and this result,

$$\frac{1}{n}E\left(X'Pe\right) = \frac{1}{n}E\left(\Gamma'Z'e\right) + \frac{1}{n}E\left(U'Pe\right) = \alpha s_{21},$$

and

$$\frac{1}{n}E\left(X'PX\right) = \Gamma'E\left(z_i z_i'\right)\Gamma + \Gamma'E\left(z_i u_i'\right) + E\left(u_i z_i'\right)\Gamma + \frac{1}{n}E\left(U'PU\right) = \Gamma'Q\Gamma + \alpha S_{22}.$$

Together

$$E\left(\hat{\beta}_{2SLS} - \beta\right) \approx \left(E\left(\frac{1}{n}X'PX\right)\right)^{-1}E\left(\frac{1}{n}X'Pe\right) = \alpha\left(\Gamma'Q\Gamma + \alpha S_{22}\right)^{-1}s_{21}. \qquad (13.14)$$

In general this is non-zero, except when $s_{21} = 0$ (when $X$ is exogenous). It is also close to zero when $\alpha = 0$. Bekker (1994) pointed out that it also has the reverse implication — that when $\alpha = \ell/n$ is large, the bias in the 2SLS estimator will be large. Indeed as $\alpha \to 1$, the expression in (13.14) approaches that in (13.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.

Bekker (1994) showed further that under the alternative asymptotic approximation that $\alpha$ is fixed as $n \to \infty$ (so that the number of instruments goes to infinity proportionately with sample size) then the expression in (13.14) is the probability limit of $\hat{\beta}_{2SLS} - \beta$.
13.7 Identiﬁcation Failure
Recall the reduced form equation
\[
X_2 = Z_1\Gamma_{12} + Z_2\Gamma_{22} + U_2.
\]
The parameter \(\beta\) fails to be identified if \(\Gamma_{22}\) has deficient rank. The consequences of identification failure for inference are quite severe.
Take the simplest case where k = l = 1 (so there is no \(Z_1\)). Then the model may be written as
\[
y_i = x_i\beta + e_i
\]
\[
x_i = z_i\gamma + u_i
\]
and \(\Gamma_{22} = \gamma = \mathrm{E}(z_i x_i)/\mathrm{E}(z_i^2)\). We see that \(\beta\) is identified if and only if \(\gamma \neq 0\), which occurs when \(\mathrm{E}(x_i z_i) \neq 0\). Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.
Suppose this condition fails, so \(\mathrm{E}(x_i z_i) = 0\). Then by the CLT,
\[
\frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i e_i \xrightarrow{d} N_1 \sim N\left(0, \mathrm{E}\left(z_i^2 e_i^2\right)\right) \qquad (13.15)
\]
\[
\frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i x_i = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i u_i \xrightarrow{d} N_2 \sim N\left(0, \mathrm{E}\left(z_i^2 u_i^2\right)\right) \qquad (13.16)
\]
therefore
\[
\hat{\beta} - \beta = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i e_i}{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i x_i} \xrightarrow{d} \frac{N_1}{N_2} \sim \text{Cauchy},
\]
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when \(\Gamma_{22}\) is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz.
\[
\Gamma_{22} = n^{-1/2}C,
\]
where C is a full rank matrix. The \(n^{-1/2}\) is picked because it provides just the right balancing to allow a rich distribution theory.
To see the consequences, once again take the simple case k = l = 1. Here, the instrument \(z_i\) is weak for \(x_i\) if
\[
\gamma = n^{-1/2}c.
\]
Then (13.15) is unaffected, but (13.16) instead takes the form
\[
\frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i x_i = \frac{1}{n}\sum_{i=1}^{n} z_i^2 c + \frac{1}{\sqrt{n}}\sum_{i=1}^{n} z_i u_i \xrightarrow{d} Qc + N_2,
\]
therefore
\[
\hat{\beta} - \beta \xrightarrow{d} \frac{N_1}{Qc + N_2}.
\]
As in the case of complete identification failure, we find that \(\hat{\beta}\) is inconsistent for \(\beta\) and the asymptotic distribution of \(\hat{\beta}\) is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.
The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form
\[
X_2 = Z_1\Gamma_{12} + Z_2\Gamma_{22} + U_2
\]
and identification requires \(\mathrm{rank}(\Gamma_{22}) = k_2\). If \(k_2 = 1\), this requires \(\Gamma_{22} \neq 0\), which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of \(k_2 = 1\) (one right-hand-side endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for \(X_2\), construct the test of \(\Gamma_{22} = 0\), and at a minimum check that the test rejects \(H_0 : \Gamma_{22} = 0\).
When \(k_2 > 1\), \(\Gamma_{22} \neq 0\) is not sufficient for identification. It is not even sufficient that each column of \(\Gamma_{22}\) is non-zero (each column corresponds to a distinct endogenous variable in \(X_2\)). So while a minimal check is to test that each column of \(\Gamma_{22}\) is non-zero, this cannot be interpreted as definitive proof that \(\Gamma_{22}\) has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for \(X_2\), and attempt to assess the likelihood that \(\Gamma_{22}\) has deficient rank.
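In the one-endogenous-regressor case, the recommended first-stage check can be carried out with a simple F test on the reduced form. The sketch below is a minimal, homoskedastic version of that test; the arrays X2, Z1, Z2 and the function name are hypothetical.

```python
# Sketch: regress X2 on (Z1, Z2) and test that the coefficients on the excluded
# instruments Z2 are jointly zero (the Gamma_22 = 0 hypothesis, k2 = 1 case).
import numpy as np

def first_stage_F(X2, Z1, Z2):
    n = X2.shape[0]
    Z = np.column_stack([Z1, Z2])
    g = np.linalg.lstsq(Z, X2, rcond=None)[0]          # unrestricted reduced form
    ssr_u = np.sum((X2 - Z @ g) ** 2)
    g0 = np.linalg.lstsq(Z1, X2, rcond=None)[0]        # restricted: Gamma_22 = 0
    ssr_r = np.sum((X2 - Z1 @ g0) ** 2)
    q = Z2.shape[1]
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - Z.shape[1]))
```

A heteroskedasticity-robust Wald version would replace the homoskedastic F denominator with a robust covariance matrix estimate.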
Exercises

1. Consider the single equation model
\[
y_i = x_i\beta + e_i,
\]
where \(y_i\) and \(x_i\) are both real-valued (1 × 1). Let \(\hat{\beta}\) denote the IV estimator of \(\beta\) using as an instrument a dummy variable \(d_i\) (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.

2. In the linear model
\[
y_i = x_i'\beta + e_i, \qquad \mathrm{E}(e_i \mid x_i) = 0,
\]
suppose \(\sigma_i^2 = \mathrm{E}(e_i^2 \mid x_i)\) is known. Show that the GLS estimator of \(\beta\) can be written as an IV estimator using some instrument \(z_i\). (Find an expression for \(z_i\).)

3. Take the linear model
\[
y = X\beta + e.
\]
Let the OLS estimator for \(\beta\) be \(\hat{\beta}\) and the OLS residual be \(\hat{e} = y - X\hat{\beta}\). Let the IV estimator for \(\beta\) using some instrument Z be \(\tilde{\beta}\) and the IV residual be \(\tilde{e} = y - X\tilde{\beta}\). If X is indeed endogenous, will IV "fit" better than OLS, in the sense that \(\tilde{e}'\tilde{e} < \hat{e}'\hat{e}\), at least in large samples?

4. The reduced form between the regressors \(x_i\) and instruments \(z_i\) takes the form
\[
x_i = \Gamma'z_i + u_i
\]
or
\[
X = Z\Gamma + U
\]
where \(x_i\) is k × 1, \(z_i\) is l × 1, X is n × k, Z is n × l, U is n × k, and \(\Gamma\) is l × k. The parameter \(\Gamma\) is defined by the population moment condition
\[
\mathrm{E}\left(z_i u_i'\right) = 0.
\]
Show that the method of moments estimator for \(\Gamma\) is \(\hat{\Gamma} = (Z'Z)^{-1}(Z'X)\).

5. In the structural model
\[
y = X\beta + e, \qquad X = Z\Gamma + U,
\]
with \(\Gamma\) l × k, l ≥ k, we claim that \(\beta\) is identified (can be recovered from the reduced form) if \(\mathrm{rank}(\Gamma) = k\). Explain why this is true. That is, show that if \(\mathrm{rank}(\Gamma) < k\) then \(\beta\) cannot be identified.

6. Take the linear model
\[
y_i = x_i\beta + e_i, \qquad \mathrm{E}(e_i \mid x_i) = 0,
\]
where \(x_i\) and \(\beta\) are 1 × 1.
(a) Show that \(\mathrm{E}(x_i e_i) = 0\) and \(\mathrm{E}(x_i^2 e_i) = 0\). Is \(z_i = (x_i \; x_i^2)'\) a valid instrumental variable for estimation of \(\beta\)?
(b) Define the 2SLS estimator of \(\beta\), using \(z_i\) as an instrument for \(x_i\). How does this differ from OLS?
(c) Find the efficient GMM estimator of \(\beta\) based on the moment condition
\[
\mathrm{E}\left(z_i\left(y_i - x_i\beta\right)\right) = 0.
\]
Does this differ from 2SLS and/or OLS?

7. Suppose that price and quantity are determined by the intersection of the linear demand and supply curves
\[
\text{Demand}: \quad Q = a_0 + a_1 P + a_2 Y + e_1
\]
\[
\text{Supply}: \quad Q = b_0 + b_1 P + b_2 W + e_2
\]
where income (Y) and wage (W) are determined outside the market. In this model, are the parameters identified?

8. The data file card.dat is taken from David Card "Using Geographic Variation in College Proximity to Estimate the Return to Schooling" in Aspects of Labour Market Behavior (1995). There are 2215 observations with 29 variables, listed in card.pdf. We want to estimate a wage equation
\[
\log(Wage) = \beta_0 + \beta_1 Educ + \beta_2 Exper + \beta_3 Exper^2 + \beta_4 South + \beta_5 Black + e
\]
where Educ = Education (Years), Exper = Experience (Years), and South and Black are regional and racial dummy variables.
(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments: near2 (a dummy indicating that the observation lives near a 2-year college), fatheduc (the education, in years, of the father) and motheduc (the education, in years, of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the J statistic for overidentification.
(f) Discuss your findings.
Chapter 14
Univariate Time Series
A time series \(y_t\) is a process observed in sequence over time, t = 1, ..., T. To indicate the dependence on time, we adopt new notation, and use the subscript t to denote the individual observation, and T to denote the number of observations.
Because of the sequential nature of time series, we expect that \(y_t\) and \(y_{t-1}\) are not independent, so classical assumptions are not valid.
We can separate time series into two categories: univariate (\(y_t \in \mathbb{R}\) is scalar) and multivariate (\(y_t \in \mathbb{R}^m\) is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model for multivariate time series is vector autoregressions (VARs).
14.1 Stationarity and Ergodicity
Definition 14.1.1 \(\{y_t\}\) is covariance (weakly) stationary if
\[
\mathrm{E}(y_t) = \mu
\]
is independent of t, and
\[
\mathrm{cov}(y_t, y_{t-k}) = \gamma(k)
\]
is independent of t for all k. \(\gamma(k)\) is called the autocovariance function.
\[
\rho(k) = \gamma(k)/\gamma(0) = \mathrm{corr}(y_t, y_{t-k})
\]
is the autocorrelation function.

Definition 14.1.2 \(\{y_t\}\) is strictly stationary if the joint distribution of \((y_t, \ldots, y_{t-k})\) is independent of t for all k.

Definition 14.1.3 A stationary time series is ergodic if \(\gamma(k) \to 0\) as \(k \to \infty\).
The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 14.1.1 If \(y_t\) is strictly stationary and ergodic and \(x_t = f(y_t, y_{t-1}, \ldots)\) is a random variable, then \(x_t\) is strictly stationary and ergodic.

Theorem 14.1.2 (Ergodic Theorem). If \(y_t\) is strictly stationary and ergodic and \(\mathrm{E}|y_t| < \infty\), then as \(T \to \infty\),
\[
\frac{1}{T}\sum_{t=1}^{T} y_t \xrightarrow{p} \mathrm{E}(y_t).
\]
This allows us to consistently estimate parameters using time-series moments. The sample mean:
\[
\hat{\mu} = \frac{1}{T}\sum_{t=1}^{T} y_t.
\]
The sample autocovariance:
\[
\hat{\gamma}(k) = \frac{1}{T}\sum_{t=1}^{T}(y_t - \hat{\mu})(y_{t-k} - \hat{\mu}).
\]
The sample autocorrelation:
\[
\hat{\rho}(k) = \frac{\hat{\gamma}(k)}{\hat{\gamma}(0)}.
\]

Theorem 14.1.3 If \(y_t\) is strictly stationary and ergodic and \(\mathrm{E}(y_t^2) < \infty\), then as \(T \to \infty\),
1. \(\hat{\mu} \xrightarrow{p} \mathrm{E}(y_t)\);
2. \(\hat{\gamma}(k) \xrightarrow{p} \gamma(k)\);
3. \(\hat{\rho}(k) \xrightarrow{p} \rho(k)\).
Proof of Theorem 14.1.3. Part (1) is a direct consequence of the Ergodic Theorem. For Part (2), note that
\[
\hat{\gamma}(k) = \frac{1}{T}\sum_{t=1}^{T}(y_t - \hat{\mu})(y_{t-k} - \hat{\mu}) = \frac{1}{T}\sum_{t=1}^{T} y_t y_{t-k} - \frac{1}{T}\sum_{t=1}^{T} y_t\hat{\mu} - \frac{1}{T}\sum_{t=1}^{T} y_{t-k}\hat{\mu} + \hat{\mu}^2.
\]
By Theorem 14.1.1 above, the sequence \(y_t y_{t-k}\) is strictly stationary and ergodic, and it has a finite mean by the assumption that \(\mathrm{E}(y_t^2) < \infty\). Thus an application of the Ergodic Theorem yields
\[
\frac{1}{T}\sum_{t=1}^{T} y_t y_{t-k} \xrightarrow{p} \mathrm{E}(y_t y_{t-k}).
\]
Thus
\[
\hat{\gamma}(k) \xrightarrow{p} \mathrm{E}(y_t y_{t-k}) - \mu^2 - \mu^2 + \mu^2 = \mathrm{E}(y_t y_{t-k}) - \mu^2 = \gamma(k).
\]
Part (3) follows by the continuous mapping theorem: \(\hat{\rho}(k) = \hat{\gamma}(k)/\hat{\gamma}(0) \xrightarrow{p} \gamma(k)/\gamma(0) = \rho(k)\).
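The estimators of Theorem 14.1.3 are simple to compute. The following minimal numpy sketch (with a hypothetical series y and hypothetical function names) evaluates the sample mean, autocovariance, and autocorrelation.

```python
# Sketch: sample autocovariance and autocorrelation for a 1-d array y.
import numpy as np

def sample_autocovariance(y, k):
    T = len(y)
    mu_hat = y.mean()
    # (1/T) * sum over t > k of (y_t - mu)(y_{t-k} - mu)
    return np.sum((y[k:] - mu_hat) * (y[:T - k] - mu_hat)) / T

def sample_autocorrelation(y, k):
    return sample_autocovariance(y, k) / sample_autocovariance(y, 0)
```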
14.2 Autoregressions
In time series, the observations \(\{\ldots, y_1, y_2, \ldots, y_T, \ldots\}\) are jointly random. We consider the conditional expectation
\[
\mathrm{E}(y_t \mid \mathcal{F}_{t-1})
\]
where \(\mathcal{F}_{t-1} = \{y_{t-1}, y_{t-2}, \ldots\}\) is the past history of the series.
An autoregressive (AR) model specifies that only a finite number of past lags matter:
\[
\mathrm{E}(y_t \mid \mathcal{F}_{t-1}) = \mathrm{E}(y_t \mid y_{t-1}, \ldots, y_{t-k}).
\]
A linear AR model (the most common type used in practice) specifies linearity:
\[
\mathrm{E}(y_t \mid \mathcal{F}_{t-1}) = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k}.
\]
Letting
\[
e_t = y_t - \mathrm{E}(y_t \mid \mathcal{F}_{t-1}),
\]
then we have the autoregressive model
\[
y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t
\]
\[
\mathrm{E}(e_t \mid \mathcal{F}_{t-1}) = 0.
\]
The last property defines a special time-series process.

Definition 14.2.1 \(e_t\) is a martingale difference sequence (MDS) if \(\mathrm{E}(e_t \mid \mathcal{F}_{t-1}) = 0\).

Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset returns should be the sum of a constant plus a MDS.
The MDS property for the regression error plays the same role in a time-series regression as does the conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more important in the time-series context, as it is difficult to derive distribution theories without this property.
A useful property of a MDS is that \(e_t\) is uncorrelated with any function of the lagged information \(\mathcal{F}_{t-1}\). Thus for k > 0, \(\mathrm{E}(y_{t-k}e_t) = 0\).
14.3 Stationarity of AR(1) Process
A mean-zero AR(1) is
\[
y_t = \rho y_{t-1} + e_t.
\]
Assume that \(e_t\) is iid, \(\mathrm{E}(e_t) = 0\) and \(\mathrm{E}(e_t^2) = \sigma^2 < \infty\).
By back-substitution, we find
\[
y_t = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \cdots = \sum_{k=0}^{\infty}\rho^k e_{t-k}.
\]
Loosely speaking, this series converges if the sequence \(\rho^k e_{t-k}\) gets small as \(k \to \infty\). This occurs when \(|\rho| < 1\).

Theorem 14.3.1 If and only if \(|\rho| < 1\) then \(y_t\) is strictly stationary and ergodic.

We can compute the moments of \(y_t\) using the infinite sum:
\[
\mathrm{E}(y_t) = \sum_{k=0}^{\infty}\rho^k\mathrm{E}(e_{t-k}) = 0
\]
\[
\mathrm{var}(y_t) = \sum_{k=0}^{\infty}\rho^{2k}\mathrm{var}(e_{t-k}) = \frac{\sigma^2}{1-\rho^2}.
\]
If the equation for \(y_t\) has an intercept, the above results are unchanged, except that the mean of \(y_t\) can be computed from the relationship
\[
\mathrm{E}(y_t) = \alpha + \rho\mathrm{E}(y_{t-1}),
\]
and solving for \(\mathrm{E}(y_t) = \mathrm{E}(y_{t-1})\) we find \(\mathrm{E}(y_t) = \alpha/(1-\rho)\).
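As an informal check of these moment formulas, the following sketch simulates a long AR(1) path and compares the sample variance with \(\sigma^2/(1-\rho^2)\); the parameter values are hypothetical.

```python
# Sketch: simulate a stationary AR(1) and compare sample and population variance.
import numpy as np

rng = np.random.default_rng(0)
T, rho, sigma = 100_000, 0.8, 1.0
e = rng.normal(0.0, sigma, T)
y = np.empty(T)
y[0] = e[0] / np.sqrt(1 - rho**2)        # start from the stationary distribution
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t]

print(y.var(), sigma**2 / (1 - rho**2))  # both should be close to 2.78
```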
14.4 Lag Operator
An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Definition 14.4.1 The lag operator L satisfies \(Ly_t = y_{t-1}\).

Defining \(L^2 = LL\), we see that \(L^2 y_t = Ly_{t-1} = y_{t-2}\). In general, \(L^k y_t = y_{t-k}\).
The AR(1) model can be written in the format
\[
y_t - \rho y_{t-1} = e_t
\]
or
\[
(1 - \rho L)y_t = e_t.
\]
The operator \(\rho(L) = (1 - \rho L)\) is a polynomial in the operator L. We say that the root of the polynomial is \(1/\rho\), since \(\rho(z) = 0\) when \(z = 1/\rho\). We call \(\rho(L)\) the autoregressive polynomial of \(y_t\).
From Theorem 14.3.1, an AR(1) is stationary iff \(|\rho| < 1\). Note that an equivalent way to say this is that an AR(1) is stationary iff the root of the autoregressive polynomial is larger than one (in absolute value).
14.5 Stationarity of AR(k)
The AR(k) model is
\[
y_t = \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t.
\]
Using the lag operator,
\[
y_t - \rho_1 L y_t - \rho_2 L^2 y_t - \cdots - \rho_k L^k y_t = e_t,
\]
or
\[
\rho(L)y_t = e_t
\]
where
\[
\rho(L) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_k L^k.
\]
We call \(\rho(L)\) the autoregressive polynomial of \(y_t\).
The Fundamental Theorem of Algebra says that any polynomial can be factored as
\[
\rho(z) = \left(1 - \lambda_1^{-1}z\right)\left(1 - \lambda_2^{-1}z\right)\cdots\left(1 - \lambda_k^{-1}z\right)
\]
where the \(\lambda_1, \ldots, \lambda_k\) are the complex roots of \(\rho(z)\), which satisfy \(\rho(\lambda_j) = 0\).
We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let \(|\lambda|\) denote the modulus of a complex number \(\lambda\).

Theorem 14.5.1 The AR(k) is strictly stationary and ergodic if and only if \(|\lambda_j| > 1\) for all j.

One way of stating this is that "All roots lie outside the unit circle."
If one of the roots equals 1, we say that \(\rho(L)\), and hence \(y_t\), "has a unit root". This is a special case of non-stationarity, and is of great interest in applied time series.
14.6 Estimation
Let
\[
x_t = \left(1 \;\; y_{t-1} \;\; y_{t-2} \;\; \cdots \;\; y_{t-k}\right)'
\]
\[
\beta = \left(\alpha \;\; \rho_1 \;\; \rho_2 \;\; \cdots \;\; \rho_k\right)'.
\]
Then the model can be written as
\[
y_t = x_t'\beta + e_t.
\]
The OLS estimator is
\[
\hat{\beta} = \left(X'X\right)^{-1}X'y.
\]
To study \(\hat{\beta}\), it is helpful to define the process \(u_t = x_t e_t\). Note that \(u_t\) is a MDS, since
\[
\mathrm{E}(u_t \mid \mathcal{F}_{t-1}) = \mathrm{E}(x_t e_t \mid \mathcal{F}_{t-1}) = x_t\mathrm{E}(e_t \mid \mathcal{F}_{t-1}) = 0.
\]
By Theorem 14.1.1, it is also strictly stationary and ergodic. Thus
\[
\frac{1}{T}\sum_{t=1}^{T} x_t e_t = \frac{1}{T}\sum_{t=1}^{T} u_t \xrightarrow{p} \mathrm{E}(u_t) = 0. \qquad (14.1)
\]
The vector \(x_t\) is strictly stationary and ergodic, and by Theorem 14.1.1, so is \(x_t x_t'\). Thus by the Ergodic Theorem,
\[
\frac{1}{T}\sum_{t=1}^{T} x_t x_t' \xrightarrow{p} \mathrm{E}\left(x_t x_t'\right) = Q.
\]
Combined with (14.1) and the continuous mapping theorem, we see that
\[
\hat{\beta} = \beta + \left(\frac{1}{T}\sum_{t=1}^{T} x_t x_t'\right)^{-1}\left(\frac{1}{T}\sum_{t=1}^{T} x_t e_t\right) \xrightarrow{p} \beta + Q^{-1}0 = \beta.
\]
We have shown the following:

Theorem 14.6.1 If the AR(k) process \(y_t\) is strictly stationary and ergodic and \(\mathrm{E}(y_t^2) < \infty\), then \(\hat{\beta} \xrightarrow{p} \beta\) as \(T \to \infty\).
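The OLS estimator of Theorem 14.6.1 only requires building the lagged regressor matrix. A minimal sketch follows; the series y, the lag order k, and the function name ar_ols are hypothetical.

```python
# Sketch: OLS estimation of an AR(k) with intercept.
# Returns beta_hat = (alpha, rho_1, ..., rho_k) and the residuals.
import numpy as np

def ar_ols(y, k):
    T = len(y)
    X = np.column_stack([np.ones(T - k)] +
                        [y[k - j:T - j] for j in range(1, k + 1)])   # columns: 1, y_{t-1}, ..., y_{t-k}
    yy = y[k:]
    beta_hat = np.linalg.solve(X.T @ X, X.T @ yy)
    e_hat = yy - X @ beta_hat
    return beta_hat, e_hat
```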
14.7 Asymptotic Distribution
Theorem 14.7.1 MDS CLT. If \(u_t\) is a strictly stationary and ergodic MDS and \(\mathrm{E}(u_t u_t') = \Omega < \infty\), then as \(T \to \infty\),
\[
\frac{1}{\sqrt{T}}\sum_{t=1}^{T} u_t \xrightarrow{d} N(0, \Omega).
\]

Since \(x_t e_t\) is a MDS, we can apply Theorem 14.7.1 to see that
\[
\frac{1}{\sqrt{T}}\sum_{t=1}^{T} x_t e_t \xrightarrow{d} N(0, \Omega),
\]
where \(\Omega = \mathrm{E}(x_t x_t' e_t^2)\).

Theorem 14.7.2 If the AR(k) process \(y_t\) is strictly stationary and ergodic and \(\mathrm{E}(y_t^4) < \infty\), then as \(T \to \infty\),
\[
\sqrt{T}\left(\hat{\beta} - \beta\right) \xrightarrow{d} N\left(0, Q^{-1}\Omega Q^{-1}\right).
\]

This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.
14.8 Bootstrap for Autoregressions
In the non-parametric bootstrap, we constructed the bootstrap sample by randomly resampling from the data values \(\{y_t, x_t\}\). This creates an iid bootstrap sample. Clearly, this cannot work in a time-series application, as this imposes inappropriate independence.
Briefly, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap.
1. Estimate \(\hat{\beta}\) and residuals \(\hat{e}_t\).
2. Fix an initial condition \((y_{-k+1}, y_{-k+2}, \ldots, y_0)\).
3. Simulate iid draws \(e_t^*\) from the empirical distribution of the residuals \(\{\hat{e}_1, \ldots, \hat{e}_T\}\).
4. Create the bootstrap series \(y_t^*\) by the recursive formula
\[
y_t^* = \hat{\alpha} + \hat{\rho}_1 y_{t-1}^* + \hat{\rho}_2 y_{t-2}^* + \cdots + \hat{\rho}_k y_{t-k}^* + e_t^*.
\]
This construction imposes homoskedasticity on the errors \(e_t^*\), which may be different than the properties of the actual \(e_t\). It also presumes that the AR(k) structure is the truth.
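The following is a minimal sketch of Method 1 for an AR(k); it reuses the hypothetical ar_ols helper from Section 14.6, keeps the first k observations as initial conditions, and returns the bootstrap coefficient estimates.

```python
# Sketch: model-based (parametric) bootstrap for an AR(k).
import numpy as np

def ar_parametric_bootstrap(y, k, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    beta_hat, e_hat = ar_ols(y, k)                 # step 1: estimates and residuals
    T = len(y)
    boot_coefs = np.empty((n_boot, k + 1))
    for b in range(n_boot):
        e_star = rng.choice(e_hat, size=T, replace=True)   # step 3: iid residual draws
        y_star = y.astype(float).copy()            # step 2: first k values as initial conditions
        for t in range(k, T):
            lags = y_star[t - k:t][::-1]           # (y*_{t-1}, ..., y*_{t-k})
            y_star[t] = beta_hat[0] + beta_hat[1:] @ lags + e_star[t]
        boot_coefs[b] = ar_ols(y_star, k)[0]       # step 4: re-estimate on the bootstrap series
    return boot_coefs
```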
Method 2: Block Resampling
1. Divide the sample into T/m blocks of length m.
2. Resample complete blocks. For each simulated sample, draw T/m blocks.
3. Paste the blocks together to create the bootstrap time series \(y_t^*\).
4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model
misspeciﬁcation.
5. The results may be sensitive to the block length, and the way that the data are partitioned
into blocks.
6. May not work well in small samples.
14.9 Trend Stationarity
\[
y_t = \mu_0 + \mu_1 t + S_t \qquad (14.2)
\]
\[
S_t = \rho_1 S_{t-1} + \rho_2 S_{t-2} + \cdots + \rho_k S_{t-k} + e_t, \qquad (14.3)
\]
or
\[
y_t = \alpha_0 + \alpha_1 t + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t. \qquad (14.4)
\]
There are two essentially equivalent ways to estimate the autoregressive parameters \((\rho_1, \ldots, \rho_k)\).
• You can estimate (14.4) by OLS.
• You can estimate (14.2)-(14.3) sequentially by OLS. That is, first estimate (14.2), get the residual \(\hat{S}_t\), and then perform regression (14.3) replacing \(S_t\) with \(\hat{S}_t\). This procedure is sometimes called Detrending.
The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects
There are three popular methods to deal with seasonal data.
• Include dummy variables for each season. This presumes that "seasonality" does not change over the sample.
• Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.
• First apply a seasonal differencing operator. If s is the number of seasons (typically s = 4 or s = 12),
\[
\Delta_s y_t = y_t - y_{t-s},
\]
or the season-to-season change. The series \(\Delta_s y_t\) is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.
14.10 Testing for Omitted Serial Correlation
For simplicity, let the null hypothesis be an AR(1):
\[
y_t = \alpha + \rho y_{t-1} + u_t. \qquad (14.5)
\]
We are interested in the question of whether the error \(u_t\) is serially correlated. We model this as an AR(1):
\[
u_t = \theta u_{t-1} + e_t \qquad (14.6)
\]
with \(e_t\) a MDS. The hypothesis of no omitted serial correlation is
\[
H_0: \theta = 0
\]
\[
H_1: \theta \neq 0.
\]
We want to test \(H_0\) against \(H_1\).
To combine (14.5) and (14.6), we take (14.5) and lag the equation once:
\[
y_{t-1} = \alpha + \rho y_{t-2} + u_{t-1}.
\]
We then multiply this by \(\theta\) and subtract from (14.5), to find
\[
y_t - \theta y_{t-1} = \alpha - \theta\alpha + \rho y_{t-1} - \theta\rho y_{t-2} + u_t - \theta u_{t-1},
\]
or
\[
y_t = \alpha(1 - \theta) + (\rho + \theta)y_{t-1} - \theta\rho y_{t-2} + e_t = \mathrm{AR}(2).
\]
Thus under \(H_0\), \(y_t\) is an AR(1), and under \(H_1\) it is an AR(2). \(H_0\) may be expressed as the restriction that the coefficient on \(y_{t-2}\) is zero.
An appropriate test of \(H_0\) against \(H_1\) is therefore a Wald test that the coefficient on \(y_{t-2}\) is zero. (A simple exclusion test.)
In general, if the null hypothesis is that \(y_t\) is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative \(y_t\) is an AR(k+m), and this is equivalent to the restriction that the coefficients on \(y_{t-k-1}, \ldots, y_{t-k-m}\) are jointly zero. An appropriate test is the Wald test of this restriction.
14.11 Model Selection
What is the appropriate choice of k in practice? This is a problem of model selection.
One approach to model selection is to choose k based on Wald tests.
Another is to minimize the AIC or BIC information criterion, e.g.
\[
AIC(k) = \log\hat{\sigma}^2(k) + \frac{2k}{T},
\]
where \(\hat{\sigma}^2(k)\) is the estimated residual variance from an AR(k).
One ambiguity in defining the AIC criterion is that the sample available for estimation changes as k changes. (If you increase k, you need more initial conditions.) This can induce strange behavior in the AIC. The best remedy is to fix an upper value \(\bar{k}\), reserve the first \(\bar{k}\) observations as initial conditions, and then estimate the models AR(1), AR(2), ..., AR(\(\bar{k}\)) on this (unified) sample.
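A minimal sketch of this procedure follows, comparing AIC across candidate orders on a common estimation sample; the series y, the bound kbar, and the function name are hypothetical.

```python
# Sketch: select the AR order by AIC, reserving the first kbar observations
# as initial conditions so every candidate model uses the same sample.
import numpy as np

def select_ar_order_aic(y, kbar):
    T_eff = len(y) - kbar
    aic = []
    for k in range(1, kbar + 1):
        X = np.column_stack([np.ones(T_eff)] +
                            [y[kbar - j:len(y) - j] for j in range(1, k + 1)])
        yy = y[kbar:]
        beta = np.linalg.solve(X.T @ X, X.T @ yy)
        sigma2 = np.mean((yy - X @ beta) ** 2)
        aic.append(np.log(sigma2) + 2 * k / T_eff)
    return int(np.argmin(aic)) + 1       # the k with smallest AIC
```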
14.12 Autoregressive Unit Roots
The AR(k) model is
\[
\rho(L)y_t = \mu + e_t
\]
\[
\rho(L) = 1 - \rho_1 L - \cdots - \rho_k L^k.
\]
As we discussed before, \(y_t\) has a unit root when \(\rho(1) = 0\), or
\[
\rho_1 + \rho_2 + \cdots + \rho_k = 1.
\]
In this case, \(y_t\) is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.
A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
\[
\Delta y_t = \mu + \alpha_0 y_{t-1} + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \qquad (14.7)
\]
These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter \(\alpha_0\) summarizes the information about the unit root, since \(\rho(1) = -\alpha_0\). To see this, observe that the lag polynomial for the \(y_t\) computed from (14.7) is
\[
(1 - L) - \alpha_0 L - \alpha_1(L - L^2) - \cdots - \alpha_{k-1}(L^{k-1} - L^k).
\]
But this must equal \(\rho(L)\), as the models are equivalent. Thus
\[
\rho(1) = (1 - 1) - \alpha_0 - (1 - 1) - \cdots - (1 - 1) = -\alpha_0.
\]
Hence, the hypothesis of a unit root in \(y_t\) can be stated as
\[
H_0: \alpha_0 = 0.
\]
Note that the model is stationary if \(\alpha_0 < 0\). So the natural alternative is
\[
H_1: \alpha_0 < 0.
\]
Under \(H_0\), the model for \(y_t\) is
\[
\Delta y_t = \mu + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t,
\]
which is an AR(k-1) in the first-difference \(\Delta y_t\). Thus if \(y_t\) has a (single) unit root, then \(\Delta y_t\) is a stationary AR process. Because of this property, we say that if \(y_t\) is non-stationary but \(\Delta^d y_t\) is stationary, then \(y_t\) is "integrated of order d", or I(d). Thus a time series with a unit root is I(1).
Since \(\alpha_0\) is the parameter of a linear regression, the natural test statistic is the t-statistic for \(H_0\) from OLS estimation of (14.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.
It would seem natural to assess the significance of the ADF statistic using the normal table. However, under \(H_0\), \(y_t\) is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.

Theorem 14.12.1 Dickey-Fuller Theorem.
Assume \(\alpha_0 = 0\). As \(T \to \infty\),
\[
T\hat{\alpha}_0 \xrightarrow{d} (1 - \alpha_1 - \alpha_2 - \cdots - \alpha_{k-1})DF_{\alpha}
\]
\[
ADF = \frac{\hat{\alpha}_0}{s(\hat{\alpha}_0)} \xrightarrow{d} DF_t.
\]

The limit distributions \(DF_{\alpha}\) and \(DF_t\) are non-normal. They are skewed to the left, and have negative means.
The first result states that \(\hat{\alpha}_0\) converges to its true value (of zero) at rate T, rather than the conventional rate of \(T^{1/2}\). This is called a "super-consistent" rate of convergence.
The second result states that the t-statistic for \(\hat{\alpha}_0\) converges to a limit distribution which is non-normal, but does not depend on the parameters \(\alpha\). This distribution has been extensively tabulated, and may be used for testing the hypothesis \(H_0\). Note: the standard error \(s(\hat{\alpha}_0)\) is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.
Since the alternative hypothesis is one-sided, the ADF test rejects \(H_0\) in favor of \(H_1\) when ADF < c, where c is the critical value from the ADF table. If the test rejects \(H_0\), this means that the evidence points to \(y_t\) being stationary. If the test does not reject \(H_0\), a common conclusion is that the data suggest that \(y_t\) is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.
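A minimal sketch of the ADF regression (14.7) with an intercept follows; the helper name and the input series are hypothetical, and the resulting t-ratio must be compared with Dickey-Fuller critical values (roughly -2.86 at the 5% level for this case), not with the normal table.

```python
# Sketch: t-ratio on y_{t-1} from the ADF regression
#   dy_t = mu + alpha_0 y_{t-1} + alpha_1 dy_{t-1} + ... + alpha_{k-1} dy_{t-(k-1)} + e_t
import numpy as np

def adf_statistic(y, k):
    dy = np.diff(y)
    T = len(dy)
    cols = [np.ones(T - k + 1), y[k - 1:len(y) - 1]]          # intercept, y_{t-1}
    cols += [dy[k - 1 - j:T - j] for j in range(1, k)]        # lagged differences
    X = np.column_stack(cols)
    yy = dy[k - 1:]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ yy)
    resid = yy - X @ beta
    s2 = resid @ resid / (len(yy) - X.shape[1])
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return beta[1] / se[1]                                    # the ADF t-ratio
```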
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
\[
\Delta y_t = \mu_1 + \mu_2 t + \alpha_0 y_{t-1} + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \qquad (14.8)
\]
This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, stock prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe these data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for \(\alpha_0\) from OLS estimation of (14.8).
If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.
Most texts include as well the critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice where intercepts are always included.
Chapter 15
Multivariate Time Series
A multivariate time series \(y_t\) is a vector process m × 1. Let \(\mathcal{F}_{t-1} = (y_{t-1}, y_{t-2}, \ldots)\) be all lagged information at time t. The typical goal is to find the conditional expectation \(\mathrm{E}(y_t \mid \mathcal{F}_{t-1})\). Note that since \(y_t\) is a vector, this conditional expectation is also a vector.
15.1 Vector Autoregressions (VARs)
A VAR model specifies that the conditional mean is a function of only a finite number of lags:
\[
\mathrm{E}(y_t \mid \mathcal{F}_{t-1}) = \mathrm{E}(y_t \mid y_{t-1}, \ldots, y_{t-k}).
\]
A linear VAR specifies that this conditional mean is linear in the arguments:
\[
\mathrm{E}(y_t \mid y_{t-1}, \ldots, y_{t-k}) = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k}.
\]
Observe that \(a_0\) is m × 1, and each of \(A_1\) through \(A_k\) are m × m matrices.
Defining the m × 1 regression error
\[
e_t = y_t - \mathrm{E}(y_t \mid \mathcal{F}_{t-1}),
\]
we have the VAR model
\[
y_t = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k} + e_t
\]
\[
\mathrm{E}(e_t \mid \mathcal{F}_{t-1}) = 0.
\]
Alternatively, defining the mk + 1 vector
\[
x_t = \left(1, \; y_{t-1}', \; y_{t-2}', \; \ldots, \; y_{t-k}'\right)'
\]
and the m × (mk + 1) matrix
\[
A = \left(a_0 \;\; A_1 \;\; A_2 \;\; \cdots \;\; A_k\right),
\]
then
\[
y_t = Ax_t + e_t.
\]
The VAR model is a system of m equations. One way to write this is to let \(a_j'\) be the jth row of A. Then the VAR system can be written as the equations
\[
y_{jt} = a_j'x_t + e_{jt}.
\]
Unrestricted VARs were introduced to econometrics by Sims (1980).
15.2 Estimation
Consider the moment conditions
\[
\mathrm{E}(x_t e_{jt}) = 0,
\]
j = 1, ..., m. These are implied by the VAR model, either as a regression, or as a linear projection. The GMM estimator corresponding to these moment conditions is equation-by-equation OLS:
\[
\hat{a}_j = (X'X)^{-1}X'y_j.
\]
An alternative way to compute this is as follows. Note that
\[
\hat{a}_j' = y_j'X(X'X)^{-1}.
\]
And if we stack these to create the estimate \(\hat{A}\), we find
\[
\hat{A} = \begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_m' \end{pmatrix} X(X'X)^{-1} = Y'X(X'X)^{-1},
\]
where
\[
Y = \left(y_1 \;\; y_2 \;\; \cdots \;\; y_m\right)
\]
is the T × m matrix of the stacked \(y_t'\).
This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
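A minimal sketch of this equation-by-equation OLS computation follows; Y is a hypothetical T × m data matrix, k the lag order, and the function name is ours.

```python
# Sketch: VAR(k) estimation by equation-by-equation OLS, A_hat = Y'X(X'X)^{-1}.
import numpy as np

def var_ols(Y, k):
    T, m = Y.shape
    rows = []
    for t in range(k, T):
        # x_t = (1, y_{t-1}', y_{t-2}', ..., y_{t-k}')'
        rows.append(np.concatenate(([1.0], Y[t - k:t][::-1].ravel())))
    X = np.array(rows)
    Yk = Y[k:]
    A_hat = np.linalg.solve(X.T @ X, X.T @ Yk).T    # m x (mk + 1)
    return A_hat
```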
15.3 Restricted VARs
The unrestricted VAR is a system of m equations, each with the same set of regressors. A
restricted VAR imposes restrictions on the system. For example, some regressors may be excluded
from some of the equations. Restrictions may be imposed on individual equations, or across equations. The GMM framework gives a convenient method to impose such restrictions on estimation.
15.4 Single Equation from a VAR
Often, we are only interested in a single equation out of a VAR system. This takes the form
\[
y_{jt} = a_j'x_t + e_t,
\]
and \(x_t\) consists of lagged values of \(y_{jt}\) and the other \(y_{lt}\)'s. In this case, it is convenient to redefine the variables. Let \(y_t = y_{jt}\), and let \(z_t\) be the other variables. Let \(e_t = e_{jt}\) and \(\beta = a_j\). Then the single equation takes the form
\[
y_t = x_t'\beta + e_t, \qquad (15.1)
\]
and
\[
x_t = \left(1, \; y_{t-1}, \; \ldots, \; y_{t-k}, \; z_{t-1}', \; \ldots, \; z_{t-k}'\right)'.
\]
This is just a conventional regression with time series data.
15.5 Testing for Omitted Serial Correlation
Consider the problem of testing for omitted serial correlation in equation (15.1). Suppose that \(e_t\) is an AR(1). Then
\[
y_t = x_t'\beta + e_t
\]
\[
e_t = \theta e_{t-1} + u_t \qquad (15.2)
\]
\[
\mathrm{E}(u_t \mid \mathcal{F}_{t-1}) = 0.
\]
Then the null and alternative are
\[
H_0: \theta = 0 \qquad H_1: \theta \neq 0.
\]
Take the equation \(y_t = x_t'\beta + e_t\), and subtract off the equation once lagged multiplied by \(\theta\), to get
\[
y_t - \theta y_{t-1} = \left(x_t'\beta + e_t\right) - \theta\left(x_{t-1}'\beta + e_{t-1}\right) = x_t'\beta - \theta x_{t-1}'\beta + e_t - \theta e_{t-1},
\]
or
\[
y_t = \theta y_{t-1} + x_t'\beta + x_{t-1}'\gamma + u_t, \qquad (15.3)
\]
which is a valid regression model.
So testing \(H_0\) versus \(H_1\) is equivalent to testing for the significance of adding \((y_{t-1}, x_{t-1})\) to the regression. This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted serial correlation is to test the significance of extra lagged values of the dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which once was very popular, and is still routinely reported by conventional regression packages. The DW test is appropriate only when the regression \(y_t = x_t'\beta + e_t\) is not dynamic (has no lagged values on the RHS), and \(e_t\) is iid N(0, \(\sigma^2\)). Otherwise it is invalid.
Another interesting fact is that (15.2) is a special case of (15.3), under the restriction \(\gamma = -\beta\theta\). This restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (15.2) may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (15.2) is uncommon in recent applications.
15.6 Selection of Lag Length in a VAR
If you want a data-dependent rule to pick the lag length k in a VAR, you may either use a testing-based approach (using, for example, the Wald statistic), or an information criterion approach. The formulas for the AIC and BIC are
\[
AIC(k) = \log\det\hat{\Omega}(k) + \frac{2p}{T}
\]
\[
BIC(k) = \log\det\hat{\Omega}(k) + \frac{p\log(T)}{T}
\]
\[
\hat{\Omega}(k) = \frac{1}{T}\sum_{t=1}^{T}\hat{e}_t(k)\hat{e}_t(k)'
\]
\[
p = m(km + 1),
\]
where p is the number of parameters in the model, and \(\hat{e}_t(k)\) is the OLS residual vector from the model with k lags. The log determinant is the criterion from the multivariate normal likelihood.
15.7 Granger Causality
Partition the data vector into \((y_t, z_t)\). Define the two information sets
\[
\mathcal{F}_{1t} = \left(y_t, y_{t-1}, y_{t-2}, \ldots\right)
\]
\[
\mathcal{F}_{2t} = \left(y_t, z_t, y_{t-1}, z_{t-1}, y_{t-2}, z_{t-2}, \ldots\right).
\]
The information set \(\mathcal{F}_{1t}\) is generated only by the history of \(y_t\), and the information set \(\mathcal{F}_{2t}\) is generated by both \(y_t\) and \(z_t\). The latter has more information.
We say that \(z_t\) does not Granger-cause \(y_t\) if
\[
\mathrm{E}\left(y_t \mid \mathcal{F}_{1,t-1}\right) = \mathrm{E}\left(y_t \mid \mathcal{F}_{2,t-1}\right).
\]
That is, conditional on information in lagged \(y_t\), lagged \(z_t\) does not help to forecast \(y_t\). If this condition does not hold, then we say that \(z_t\) Granger-causes \(y_t\).
The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If \(z_t\) is some sort of forecast of the future, such as a futures price, then \(z_t\) may help to forecast \(y_t\) even though it does not "cause" \(y_t\). This definition of causality was developed by Granger (1969) and Sims (1972).
In a linear VAR, the equation for \(y_t\) is
\[
y_t = \alpha + \rho_1 y_{t-1} + \cdots + \rho_k y_{t-k} + z_{t-1}'\gamma_1 + \cdots + z_{t-k}'\gamma_k + e_t.
\]
In this equation, \(z_t\) does not Granger-cause \(y_t\) if and only if
\[
H_0: \gamma_1 = \gamma_2 = \cdots = \gamma_k = 0.
\]
This may be tested using an exclusion (Wald) test.
This idea can be applied to blocks of variables. That is, \(y_t\) and/or \(z_t\) can be vectors. The hypothesis can be tested by using the appropriate multivariate Wald test.
If it is found that \(z_t\) does not Granger-cause \(y_t\), then we deduce that our time-series model of \(\mathrm{E}(y_t \mid \mathcal{F}_{t-1})\) does not require the use of \(z_t\). Note, however, that \(z_t\) may still be useful to explain other features of \(y_t\), such as the conditional variance.

Clive W. J. Granger

Clive Granger (1934-2009) of England was one of the leading figures in time-series econometrics, and co-winner in 2003 of the Nobel Memorial Prize in Economic Sciences (along with Robert Engle). In addition to formalizing the definition of causality known as Granger causality, he invented the concept of cointegration, introduced spectral methods into econometrics, and formalized methods for the combination of forecasts.
15.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger (1987).

Definition 15.8.1 The m × 1 series \(y_t\) is cointegrated if \(y_t\) is I(1) yet there exists \(\beta\), m × r, of rank r, such that \(z_t = \beta'y_t\) is I(0). The r vectors in \(\beta\) are called the cointegrating vectors.

If the series \(y_t\) is not cointegrated, then r = 0. If r = m, then \(y_t\) is I(0). For 0 < r < m, \(y_t\) is I(1) and cointegrated.
In some cases, it may be believed that \(\beta\) is known a priori. Often, \(\beta = (1 \; -1)'\). For example, if \(y_t\) is a pair of interest rates, then \(\beta = (1 \; -1)'\) specifies that the spread (the difference in returns) is stationary. If \(y_t = (\log(Consumption), \log(Income))'\), then \(\beta = (1 \; -1)'\) specifies that \(\log(Consumption/Income)\) is stationary.
In other cases, \(\beta\) may not be known.
If \(y_t\) is cointegrated with a single cointegrating vector (r = 1), then it turns out that \(\beta\) can be consistently estimated by an OLS regression of one component of \(y_t\) on the others. Thus write \(y_t = (y_{1t}, y_{2t})\) and \(\beta = (\beta_1, \beta_2)\), and normalize \(\beta_1 = 1\). Then \(\hat{\beta}_2 = (y_2'y_2)^{-1}y_2'y_1 \xrightarrow{p} \beta_2\). Furthermore this estimate is super-consistent: \(T(\hat{\beta}_2 - \beta_2) \xrightarrow{d} \text{Limit}\), as first shown by Stock (1987). This is not, in general, a good method to estimate \(\beta\), but it is useful in the construction of alternative estimators and tests.
We are often interested in testing the hypothesis of no cointegration:
\[
H_0: r = 0
\]
\[
H_1: r > 0.
\]
Suppose that \(\beta\) is known, so \(z_t = \beta'y_t\) is known. Then under \(H_0\), \(z_t\) is I(1), yet under \(H_1\), \(z_t\) is I(0). Thus \(H_0\) can be tested using a univariate ADF test on \(z_t\).
When \(\beta\) is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual \(\hat{z}_t = \hat{\beta}'y_t\), from OLS of \(y_{1t}\) on \(y_{2t}\). Their justification was Stock's result that \(\hat{\beta}\) is super-consistent under \(H_1\). Under \(H_0\), however, \(\hat{\beta}\) is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).
When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
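A minimal sketch of the Engle-Granger residual-based test follows; it reuses the hypothetical adf_statistic helper from Section 14.12, and the resulting statistic must be compared with the Phillips-Ouliaris critical values rather than the ADF table, since β is estimated.

```python
# Sketch: Engle-Granger test of no cointegration between y1 and y2.
import numpy as np

def engle_granger_stat(y1, y2, k):
    X = np.column_stack([np.ones(len(y1)), y2])
    b = np.linalg.lstsq(X, y1, rcond=None)[0]
    z_hat = y1 - X @ b                 # estimated cointegrating residual
    return adf_statistic(z_hat, k)     # ADF regression on the residual
```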
15.9 Cointegrated VARs
We can write a VAR as
\[
A(L)y_t = e_t
\]
\[
A(L) = I - A_1 L - A_2 L^2 - \cdots - A_k L^k
\]
or alternatively as
\[
\Delta y_t = \Pi y_{t-1} + D(L)\Delta y_{t-1} + e_t
\]
where
\[
\Pi = -A(1) = -I + A_1 + A_2 + \cdots + A_k.
\]

Theorem 15.9.1 Granger Representation Theorem
\(y_t\) is cointegrated with m × r \(\beta\) if and only if \(\mathrm{rank}(\Pi) = r\) and \(\Pi = \alpha\beta'\) where \(\alpha\) is m × r, \(\mathrm{rank}(\alpha) = r\).

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as
\[
\Delta y_t = \alpha\beta'y_{t-1} + D(L)\Delta y_{t-1} + e_t
\]
\[
\Delta y_t = \alpha z_{t-1} + D(L)\Delta y_{t-1} + e_t.
\]
If \(\beta\) is known, this can be estimated by OLS of \(\Delta y_t\) on \(z_{t-1}\) and the lags of \(\Delta y_t\).
If \(\beta\) is unknown, then estimation is done by "reduced rank regression", which is least-squares subject to the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that \(e_t\) is iid N(0, \(\Omega\)).
One difficulty is that \(\beta\) is not identified without normalization. When r = 1, we typically just normalize one element to equal unity. When r > 1, this does not work, and different authors have adopted different identification schemes.
In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for cointegration by testing the rank of \(\Pi\). These tests are constructed as likelihood ratio (LR) tests. As they were discovered by Johansen (1988, 1991, 1995), they are typically called the "Johansen Max and Trace" tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.
Chapter 16
Limited Dependent Variables
A "limited dependent variable" y is one which takes a "limited" set of values. The most common cases are
• Binary: \(y \in \{0, 1\}\)
• Multinomial: \(y \in \{0, 1, 2, \ldots, k\}\)
• Integer: \(y \in \{0, 1, 2, \ldots\}\)
• Censored: \(y \in \mathbb{R}_+\)
The traditional approach to the estimation of limited dependent variable (LDV) models is parametric maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function. A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional assumption. We will discuss only the first (parametric) approach, due to time constraints. They still constitute the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation, you would be advised to consider employing a semi-parametric estimation approach.
For the parametric approach, estimation is by MLE. A major practical issue is construction of the likelihood function.
16.1 Binary Choice
The dependent variable \(y_i \in \{0, 1\}\). This represents a Yes/No outcome. Given some regressors \(x_i\), the goal is to describe \(\Pr(y_i = 1 \mid x_i)\), as this is the full conditional distribution.
The linear probability model specifies that
\[
\Pr(y_i = 1 \mid x_i) = x_i'\beta.
\]
As \(\Pr(y_i = 1 \mid x_i) = \mathrm{E}(y_i \mid x_i)\), this yields the regression \(y_i = x_i'\beta + e_i\), which can be estimated by OLS. However, the linear probability model does not impose the restriction that \(0 \leq \Pr(y_i = 1 \mid x_i) \leq 1\). Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.
The standard alternative is to use a function of the form
\[
\Pr(y_i = 1 \mid x_i) = F\left(x_i'\beta\right)
\]
where F(·) is a known CDF, typically assumed to be symmetric about zero, so that F(u) = 1 − F(−u). The two standard choices for F are
• Logistic: \(F(u) = (1 + e^{-u})^{-1}\).
• Normal: \(F(u) = \Phi(u)\).
If F is logistic, we call this the logit model, and if F is normal, we call this the probit model.
This model is identical to the latent variable model
\[
y_i^* = x_i'\beta + e_i
\]
\[
e_i \sim F(\cdot)
\]
\[
y_i = \begin{cases} 1 & \text{if } y_i^* > 0 \\ 0 & \text{otherwise.} \end{cases}
\]
For then
\[
\Pr(y_i = 1 \mid x_i) = \Pr(y_i^* > 0 \mid x_i) = \Pr\left(x_i'\beta + e_i > 0 \mid x_i\right) = \Pr\left(e_i > -x_i'\beta \mid x_i\right) = 1 - F\left(-x_i'\beta\right) = F\left(x_i'\beta\right).
\]
Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if y is Bernoulli, such that \(\Pr(y = 1) = p\) and \(\Pr(y = 0) = 1 - p\), then we can write the density of y as
\[
f(y) = p^y(1 - p)^{1-y}, \qquad y = 0, 1.
\]
In the binary choice model, \(y_i\) is conditionally Bernoulli with \(\Pr(y_i = 1 \mid x_i) = p_i = F(x_i'\beta)\). Thus the conditional density is
\[
f(y_i \mid x_i) = p_i^{y_i}(1 - p_i)^{1-y_i} = F\left(x_i'\beta\right)^{y_i}\left(1 - F\left(x_i'\beta\right)\right)^{1-y_i}.
\]
Hence the log-likelihood function is
\[
\log L(\beta) = \sum_{i=1}^{n}\log f(y_i \mid x_i) = \sum_{i=1}^{n}\left[y_i\log F\left(x_i'\beta\right) + (1 - y_i)\log\left(1 - F\left(x_i'\beta\right)\right)\right] = \sum_{y_i=1}\log F\left(x_i'\beta\right) + \sum_{y_i=0}\log\left(1 - F\left(x_i'\beta\right)\right).
\]
The MLE \(\hat{\beta}\) is the value of \(\beta\) which maximizes \(\log L(\beta)\). Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
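The following is a minimal sketch of the logit MLE by direct maximization of log L(β); a probit version would replace the logistic CDF with the normal CDF. The arrays y and X are hypothetical, and scipy's general-purpose BFGS optimizer is used rather than a dedicated routine.

```python
# Sketch: logit estimation by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def logit_mle(y, X):
    def neg_loglik(beta):
        u = X @ beta
        # log F(u) = -log(1 + exp(-u));  log(1 - F(u)) = -log(1 + exp(u))
        return np.sum(y * np.logaddexp(0.0, -u) + (1 - y) * np.logaddexp(0.0, u))
    beta0 = np.zeros(X.shape[1])
    res = minimize(neg_loglik, beta0, method="BFGS")
    return res.x
```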
16.2 Count Data
If \(y \in \{0, 1, 2, \ldots\}\), a typical approach is to employ Poisson regression. This model specifies that
\[
\Pr(y_i = k \mid x_i) = \frac{\exp(-\lambda_i)\lambda_i^k}{k!}, \qquad k = 0, 1, 2, \ldots
\]
\[
\lambda_i = \exp(x_i'\beta).
\]
The conditional density is the Poisson with parameter \(\lambda_i\). The functional form for \(\lambda_i\) has been picked to ensure that \(\lambda_i > 0\).
The log-likelihood function is
\[
\log L(\beta) = \sum_{i=1}^{n}\log f(y_i \mid x_i) = \sum_{i=1}^{n}\left(-\exp(x_i'\beta) + y_i x_i'\beta - \log(y_i!)\right).
\]
The MLE is the value \(\hat{\beta}\) which maximizes \(\log L(\beta)\).
Since
\[
\mathrm{E}(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta)
\]
is the conditional mean, this motivates the label Poisson "regression."
Also observe that the model implies that
\[
\mathrm{var}(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta),
\]
so the model imposes the restriction that the conditional mean and variance of \(y_i\) are the same. This may be considered restrictive. A generalization is the negative binomial.
16.3 Censored Data
The idea of "censoring" is that some data above or below a threshold are misreported at the threshold. Thus the model is that there is some latent process \(y_i^*\) with unbounded support, but we observe only
\[
y_i = \begin{cases} y_i^* & \text{if } y_i^* \geq 0 \\ 0 & \text{if } y_i^* < 0. \end{cases} \qquad (16.1)
\]
(This is written for the case of the threshold being zero; any known value can substitute.) The observed data \(y_i\) therefore come from a mixed continuous/discrete distribution.
Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher) of data at the boundary of the sample support. The censoring process may be explicit in data collection, or it may be a by-product of economic constraints.
An example of data collection censoring is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.
The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model
\[
y_i^* = x_i'\beta + e_i
\]
\[
e_i \sim \text{iid } N(0, \sigma^2)
\]
with the observed variable \(y_i\) generated by the censoring equation (16.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.
The naive approach to estimate \(\beta\) is to regress \(y_i\) on \(x_i\). This does not work because regression estimates \(\mathrm{E}(y_i \mid x_i)\), not \(\mathrm{E}(y_i^* \mid x_i) = x_i'\beta\), and the latter is of interest. Thus OLS will be biased for the parameter of interest \(\beta\).
[Note: it is still possible to estimate \(\mathrm{E}(y_i \mid x_i)\) by LS techniques. The Tobit framework postulates that this is not inherently interesting, and that the parameter \(\beta\) is defined by an alternative statistical structure.]
Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is
\[
\Pr(y_i = 0 \mid x_i) = \Pr(y_i^* < 0 \mid x_i) = \Pr\left(x_i'\beta + e_i < 0 \mid x_i\right) = \Pr\left(\frac{e_i}{\sigma} < -\frac{x_i'\beta}{\sigma} \mid x_i\right) = \Phi\left(-\frac{x_i'\beta}{\sigma}\right).
\]
The conditional distribution function above zero is Gaussian:
\[
\Pr(y_i = y \mid x_i) = \int_0^y \sigma^{-1}\phi\left(\frac{z - x_i'\beta}{\sigma}\right)dz, \qquad y > 0.
\]
Therefore, the density function can be written as
\[
f(y \mid x_i) = \Phi\left(-\frac{x_i'\beta}{\sigma}\right)^{1(y=0)}\left[\sigma^{-1}\phi\left(\frac{y - x_i'\beta}{\sigma}\right)\right]^{1(y>0)},
\]
where 1(·) is the indicator function.
Hence the log-likelihood is a mixture of the probit and the normal:
\[
\log L(\beta) = \sum_{i=1}^{n}\log f(y_i \mid x_i) = \sum_{y_i=0}\log\Phi\left(-\frac{x_i'\beta}{\sigma}\right) + \sum_{y_i>0}\log\left[\sigma^{-1}\phi\left(\frac{y_i - x_i'\beta}{\sigma}\right)\right].
\]
The MLE is the value \(\hat{\beta}\) which maximizes \(\log L(\beta)\).
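A minimal sketch of this Tobit MLE follows; the parameter vector stacks β with log σ so the variance stays positive, the arrays y and X are hypothetical (with censoring at zero), and scipy's norm and BFGS are used.

```python
# Sketch: Tobit (censored regression) estimation by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_mle(y, X):
    cens = (y == 0)
    def neg_loglik(theta):
        beta, sigma = theta[:-1], np.exp(theta[-1])
        xb = X @ beta
        ll_cens = norm.logcdf(-xb[cens] / sigma)                              # censored observations
        ll_pos = norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)  # uncensored observations
        return -(ll_cens.sum() + ll_pos.sum())
    theta0 = np.zeros(X.shape[1] + 1)
    res = minimize(neg_loglik, theta0, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])          # (beta_hat, sigma_hat)
```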
16.4 Sample Selection
The problem of sample selection arises when the sample is a non-random selection of potential observations. This occurs when the observed data is systematically different from the population of interest. For example, if you ask for volunteers for an experiment, and wish to extrapolate the effects of the experiment to a general population, you should worry that the people who volunteer may be systematically different from the general population. This has great relevance for the evaluation of anti-poverty and job-training programs, where the goal is to assess the effect of "training" on the general population, not just on the volunteers.
A simple sample selection model can be written as the latent model
\[
y_i = x_i'\beta + e_{1i}
\]
\[
T_i = 1\left(z_i'\gamma + e_{0i} > 0\right)
\]
where 1(·) is the indicator function. The dependent variable \(y_i\) is observed if (and only if) \(T_i = 1\). Else it is unobserved.
For example, \(y_i\) could be a wage, which can be observed only if a person is employed. The equation for \(T_i\) is an equation specifying the probability that the person is employed.
The model is often completed by specifying that the errors are jointly normal
\[
\begin{pmatrix} e_{0i} \\ e_{1i} \end{pmatrix} \sim N\left(0, \begin{pmatrix} 1 & \rho \\ \rho & \sigma^2 \end{pmatrix}\right).
\]
It is presumed that we observe \(\{x_i, z_i, T_i\}\) for all observations.
Under the normality assumption,
\[
e_{1i} = \rho e_{0i} + v_i,
\]
where \(v_i\) is independent of \(e_{0i} \sim N(0, 1)\). A useful fact about the standard normal distribution is that
\[
\mathrm{E}\left(e_{0i} \mid e_{0i} > -x\right) = \lambda(x) = \frac{\phi(x)}{\Phi(x)},
\]
and the function \(\lambda(x)\) is called the inverse Mills ratio.
The naive estimator of \(\beta\) is OLS regression of \(y_i\) on \(x_i\) for those observations for which \(y_i\) is available. The problem is that this is equivalent to conditioning on the event \(\{T_i = 1\}\). However,
\[
\mathrm{E}\left(e_{1i} \mid T_i = 1, z_i\right) = \mathrm{E}\left(e_{1i} \mid \{e_{0i} > -z_i'\gamma\}, z_i\right) = \rho\mathrm{E}\left(e_{0i} \mid \{e_{0i} > -z_i'\gamma\}, z_i\right) + \mathrm{E}\left(v_i \mid \{e_{0i} > -z_i'\gamma\}, z_i\right) = \rho\lambda\left(z_i'\gamma\right),
\]
which is non-zero. Thus
\[
e_{1i} = \rho\lambda\left(z_i'\gamma\right) + u_i,
\]
where
\[
\mathrm{E}\left(u_i \mid T_i = 1, z_i\right) = 0.
\]
Hence
\[
y_i = x_i'\beta + \rho\lambda\left(z_i'\gamma\right) + u_i \qquad (16.2)
\]
is a valid regression equation for the observations for which \(T_i = 1\).
Heckman (1979) observed that we could consistently estimate \(\beta\) and \(\rho\) from this equation, if \(\gamma\) were known. It is unknown, but also can be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows.
• Estimate \(\hat{\gamma}\) from a Probit, using regressors \(z_i\). The binary dependent variable is \(T_i\).
• Estimate \(\left(\hat{\beta}, \hat{\rho}\right)\) from OLS of \(y_i\) on \(x_i\) and \(\lambda(z_i'\hat{\gamma})\).
• The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.
The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some modern econometric research is exploring how to relax the normality assumption.
The estimator can also work quite poorly if \(\lambda(z_i'\hat{\gamma})\) does not have much in-sample variation. This can happen if the Probit equation does not "explain" much about the selection choice. Another potential problem is that if \(z_i = x_i\), then \(\lambda(z_i'\hat{\gamma})\) can be highly collinear with \(x_i\), so the second step OLS estimator will not be able to precisely estimate \(\beta\). Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in \(z_i\) which is not in \(x_i\). If this is valid, it will ensure that \(\lambda(z_i'\hat{\gamma})\) is not collinear with \(x_i\), and hence improve the second stage estimator's precision.
Chapter 17
Panel Data
A panel is a set of observations on individuals, collected over time. An observation is the pair \(\{y_{it}, x_{it}\}\), where the i subscript denotes the individual, and the t subscript denotes time. A panel may be balanced:
\[
\{y_{it}, x_{it}\}: \quad t = 1, \ldots, T; \quad i = 1, \ldots, n,
\]
or unbalanced:
\[
\{y_{it}, x_{it}\}: \quad \text{for } i = 1, \ldots, n, \quad t = \underline{t}_i, \ldots, \bar{t}_i.
\]
17.1 IndividualEects Model
The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression
\[
y_{it} = x_{it}'\beta + u_i + e_{it}.
\]
The typical maintained assumptions are that the individuals i are mutually independent, that \(u_i\) and \(e_{it}\) are independent, that \(e_{it}\) is iid across individuals and time, and that \(e_{it}\) is uncorrelated with \(x_{it}\).
OLS of \(y_{it}\) on \(x_{it}\) is called pooled estimation. It is consistent if
\[
\mathrm{E}(x_{it}u_i) = 0. \qquad (17.1)
\]
If this condition fails, then OLS is inconsistent. (17.1) fails if the individual-specific unobserved effect \(u_i\) is correlated with the observed explanatory variables \(x_{it}\). This is often believed to be plausible if \(u_i\) is an omitted variable.
If (17.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears a poor estimation choice.
Condition (17.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.
17.2 Fixed Eects
This is the most common technique for estimation of non-dynamic linear panel regressions.
The motivation is to allow \(u_i\) to be arbitrary, and have arbitrary correlation with \(x_{it}\). The goal is to eliminate \(u_i\) from the estimator, and thus achieve invariance.
There are several derivations of the estimator.
First, let
\[
d_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{else,} \end{cases}
\]
and
\[
d_i = \begin{pmatrix} d_{i1} \\ \vdots \\ d_{in} \end{pmatrix},
\]
an n × 1 dummy vector with a "1" in the ith place. Let
\[
u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}.
\]
Then note that
\[
u_i = d_i'u,
\]
and
\[
y_{it} = x_{it}'\beta + d_i'u + e_{it}. \qquad (17.2)
\]
Observe that
\[
\mathrm{E}(e_{it} \mid x_{it}, d_i) = 0,
\]
so (17.2) is a valid regression, with \(d_i\) as a regressor along with \(x_{it}\).
OLS on (17.2) yields the estimator \((\hat{\beta}, \hat{u})\). Conventional inference applies.
Observe that
• This is generally consistent.
• If \(x_{it}\) contains an intercept, it will be collinear with \(d_i\), so the intercept is typically omitted from \(x_{it}\).
• Any regressor in \(x_{it}\) which is constant over time for all individuals (e.g., their gender) will be collinear with \(d_i\), so will have to be omitted.
• There are n + k regression parameters, which is quite large as typically n is very large.
Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of \(\beta\) proceeds by the FWL theorem. Stacking the observations together:
\[
y = X\beta + Du + e,
\]
then by the FWL theorem,
\[
\hat{\beta} = \left(X'(I - P_D)X\right)^{-1}\left(X'(I - P_D)y\right) = \left(X^{*\prime}X^*\right)^{-1}\left(X^{*\prime}y^*\right),
\]
where
\[
y^* = y - D(D'D)^{-1}D'y
\]
\[
X^* = X - D(D'D)^{-1}D'X.
\]
Since the regression of \(y_{it}\) on \(d_i\) is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean \(\bar{y}_i\), and the residual is the demeaned value
\[
y_{it}^* = y_{it} - \bar{y}_i.
\]
The fixed effects estimator \(\hat{\beta}\) is OLS of \(y_{it}^*\) on \(x_{it}^*\), the dependent variable and regressors in deviation-from-mean form.
Another derivation of the estimator is to take the equation
\[
y_{it} = x_{it}'\beta + u_i + e_{it},
\]
and then take individual-specific means by taking the average for the ith individual:
\[
\frac{1}{T_i}\sum_{t=\underline{t}_i}^{\bar{t}_i} y_{it} = \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\bar{t}_i} x_{it}'\beta + u_i + \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\bar{t}_i} e_{it}
\]
or
\[
\bar{y}_i = \bar{x}_i'\beta + u_i + \bar{e}_i.
\]
Subtracting, we find
\[
y_{it}^* = x_{it}^{*\prime}\beta + e_{it}^*,
\]
which is free of the individual effect \(u_i\).
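A minimal sketch of the within (demeaning) computation follows; the arrays y, X and the individual labels ids are hypothetical, and no degrees-of-freedom correction or standard errors are included.

```python
# Sketch: fixed effects (within) estimator by subtracting individual means.
import numpy as np

def fixed_effects(y, X, ids):
    y_star = y.astype(float).copy()
    X_star = X.astype(float).copy()
    for i in np.unique(ids):
        m = (ids == i)
        y_star[m] -= y[m].mean()             # demean within individual i
        X_star[m] -= X[m].mean(axis=0)
    beta_hat = np.linalg.solve(X_star.T @ X_star, X_star.T @ y_star)
    return beta_hat
```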
17.3 Dynamic Panel Regression
A dynamic panel regression has a lagged dependent variable
\[
y_{it} = \alpha y_{i,t-1} + x_{it}'\beta + u_i + e_{it}. \qquad (17.3)
\]
This is a model suitable for studying dynamic behavior of individual agents.
Unfortunately, the fixed effects estimator is inconsistent, at least if T is held finite as \(n \to \infty\). This is because the sample mean of \(y_{i,t-1}\) is correlated with that of \(e_{it}\).
The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (17.3) eliminates the individual-specific effect:
\[
\Delta y_{it} = \alpha\Delta y_{i,t-1} + \Delta x_{it}'\beta + \Delta e_{it}. \qquad (17.4)
\]
However, if \(e_{it}\) is iid, then \(\Delta e_{it}\) will be correlated with \(\Delta y_{i,t-1}\):
\[
\mathrm{E}\left(\Delta y_{i,t-1}\Delta e_{it}\right) = \mathrm{E}\left(\left(y_{i,t-1} - y_{i,t-2}\right)\left(e_{it} - e_{i,t-1}\right)\right) = -\mathrm{E}\left(y_{i,t-1}e_{i,t-1}\right) = -\sigma_e^2.
\]
So OLS on (17.4) will be inconsistent.
But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as \(y_{i,t-2}\) is uncorrelated with \(\Delta e_{it}\). Thus values of \(y_{i,t-k}\), k ≥ 2, are valid instruments.
Hence a valid estimator of \(\alpha\) and \(\beta\) is to estimate (17.4) by IV using \(y_{i,t-2}\) as an instrument for \(\Delta y_{i,t-1}\) (which is just identified). Alternatively, GMM using \(y_{i,t-2}\) and \(y_{i,t-3}\) as instruments (which is overidentified, but loses a time-series observation).
A more sophisticated GMM estimator recognizes that for time periods later in the sample, there are more instruments available, so the instrument list should be different for each equation. This is conveniently organized by the GMM principle, as this enables the moments from the different time periods to be stacked together to create a list of all the moment conditions. A simple application of GMM yields the parameter estimates and standard errors.
Chapter 18
Nonparametrics
18.1 Kernel Density Estimation
Let X be a random variable with continuous distribution F(x) and density \(f(x) = \frac{d}{dx}F(x)\). The goal is to estimate f(x) from a random sample \((X_1, \ldots, X_n)\). While F(x) can be estimated by the EDF \(\hat{F}(x) = n^{-1}\sum_{i=1}^{n}1(X_i \leq x)\), we cannot define \(\frac{d}{dx}\hat{F}(x)\) since \(\hat{F}(x)\) is a step function. The standard nonparametric method to estimate f(x) is based on smoothing using a kernel.
While we are typically interested in estimating the entire function f(x), we can simply focus on the problem where x is a specific fixed number, and then see how the method generalizes to estimating the entire function.

Definition 18.1.1 K(u) is a second-order kernel function if it is a symmetric zero-mean density function.

Three common choices for kernels include the Normal
\[
K(u) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right),
\]
the Epanechnikov
\[
K(u) = \begin{cases} \frac{3}{4}\left(1 - u^2\right) & |u| \leq 1 \\ 0 & |u| > 1, \end{cases}
\]
and the Biweight or Quartic
\[
K(u) = \begin{cases} \frac{15}{16}\left(1 - u^2\right)^2 & |u| \leq 1 \\ 0 & |u| > 1. \end{cases}
\]
In practice, the choice between these three rarely makes a meaningful difference in the estimates.
The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth h > 0. Let
\[
K_h(u) = \frac{1}{h}K\left(\frac{u}{h}\right)
\]
be the kernel K rescaled by the bandwidth h. The kernel density estimator of f(x) is
\[
\hat{f}(x) = \frac{1}{n}\sum_{i=1}^{n}K_h(X_i - x).
\]
This estimator is the average of a set of weights. If a large number of the observations \(X_i\) are near x, then the weights are relatively large and \(\hat{f}(x)\) is larger. Conversely, if only a few \(X_i\) are near x, then the weights are small and \(\hat{f}(x)\) is small. The bandwidth h controls the meaning of "near".
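A minimal sketch of this estimator with the Normal kernel follows, evaluating \(\hat{f}\) on a grid of points; the data array x_data, the bandwidth h, and the function name are hypothetical.

```python
# Sketch: kernel density estimator with a Gaussian kernel.
import numpy as np

def kde(x_grid, x_data, h):
    n = len(x_data)
    u = (x_data[None, :] - x_grid[:, None]) / h           # (X_i - x)/h for every grid point
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)        # Normal kernel
    return K.sum(axis=1) / (n * h)                        # (1/n) sum of K_h(X_i - x)
```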
Interestingly, \(\hat{f}(x)\) is a valid density. That is, \(\hat{f}(x) \geq 0\) for all x, and
\[
\int_{-\infty}^{\infty}\hat{f}(x)dx = \int_{-\infty}^{\infty}\frac{1}{n}\sum_{i=1}^{n}K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}K(u)du = 1
\]
where the second-to-last equality makes the change-of-variables u = (X_i − x)/h.
We can also calculate the moments of the density \(\hat{f}(x)\). The mean is
\[
\int_{-\infty}^{\infty}x\hat{f}(x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}xK_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}(X_i + uh)K(u)du = \frac{1}{n}\sum_{i=1}^{n}X_i\int_{-\infty}^{\infty}K(u)du + \frac{1}{n}\sum_{i=1}^{n}h\int_{-\infty}^{\infty}uK(u)du = \frac{1}{n}\sum_{i=1}^{n}X_i,
\]
the sample mean of the \(X_i\), where the second-to-last equality used the change-of-variables u = (X_i − x)/h which has Jacobian h.
The second moment of the estimated density is
\[
\int_{-\infty}^{\infty}x^2\hat{f}(x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}x^2K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{\infty}(X_i + uh)^2K(u)du = \frac{1}{n}\sum_{i=1}^{n}X_i^2 + \frac{2}{n}\sum_{i=1}^{n}X_i h\int_{-\infty}^{\infty}uK(u)du + \frac{1}{n}\sum_{i=1}^{n}h^2\int_{-\infty}^{\infty}u^2K(u)du = \frac{1}{n}\sum_{i=1}^{n}X_i^2 + h^2\sigma_K^2,
\]
where
\[
\sigma_K^2 = \int_{-\infty}^{\infty}u^2K(u)du
\]
is the variance of the kernel. It follows that the variance of the density \(\hat{f}(x)\) is
\[
\int_{-\infty}^{\infty}x^2\hat{f}(x)dx - \left(\int_{-\infty}^{\infty}x\hat{f}(x)dx\right)^2 = \frac{1}{n}\sum_{i=1}^{n}X_i^2 + h^2\sigma_K^2 - \left(\frac{1}{n}\sum_{i=1}^{n}X_i\right)^2 = \hat{\sigma}^2 + h^2\sigma_K^2.
\]
Thus the variance of the estimated density is inflated by the factor \(h^2\sigma_K^2\) relative to the sample moment.
18.2 Asymptotic MSE for Kernel Estimates
For fixed x and bandwidth h observe that
\[
\mathrm{E}K_h(X - x) = \int_{-\infty}^{\infty}K_h(z - x)f(z)dz = \int_{-\infty}^{\infty}K_h(uh)f(x + hu)h\,du = \int_{-\infty}^{\infty}K(u)f(x + hu)du.
\]
The second equality uses the change-of-variables u = (z − x)/h. The last expression shows that the expected value is an average of f(z) locally about x.
This integral (typically) is not analytically solvable, so we approximate it using a second-order Taylor expansion of f(x + hu) in the argument hu about hu = 0, which is valid as h → 0. Thus
\[
f(x + hu) \approx f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2
\]
and therefore
\[
\mathrm{E}K_h(X - x) \approx \int_{-\infty}^{\infty}K(u)\left(f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2\right)du = f(x)\int_{-\infty}^{\infty}K(u)du + f'(x)h\int_{-\infty}^{\infty}K(u)u\,du + \frac{1}{2}f''(x)h^2\int_{-\infty}^{\infty}K(u)u^2du = f(x) + \frac{1}{2}f''(x)h^2\sigma_K^2.
\]
The bias of \(\hat{f}(x)\) is then
\[
\mathrm{Bias}(x) = \mathrm{E}\hat{f}(x) - f(x) = \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}K_h(X_i - x) - f(x) = \frac{1}{2}f''(x)h^2\sigma_K^2.
\]
We see that the bias of \(\hat{f}(x)\) at x depends on the second derivative f''(x). The sharper the derivative, the greater the bias. Intuitively, the estimator \(\hat{f}(x)\) smooths data local to \(X_i = x\), so is estimating a smoothed version of f(x). The bias results from this smoothing, and is larger the greater the curvature in f(x).
We now examine the variance of \(\hat{f}(x)\). Since it is an average of iid random variables, using first-order Taylor approximations and the fact that \(n^{-1}\) is of smaller order than \((nh)^{-1}\),
\[
\mathrm{var}(x) = \frac{1}{n}\mathrm{var}\left(K_h(X_i - x)\right) = \frac{1}{n}\mathrm{E}K_h(X_i - x)^2 - \frac{1}{n}\left(\mathrm{E}K_h(X_i - x)\right)^2 \approx \frac{1}{nh^2}\int_{-\infty}^{\infty}K\left(\frac{z - x}{h}\right)^2f(z)dz - \frac{1}{n}f(x)^2 = \frac{1}{nh}\int_{-\infty}^{\infty}K(u)^2f(x + hu)du \approx \frac{f(x)}{nh}\int_{-\infty}^{\infty}K(u)^2du = \frac{f(x)R(K)}{nh},
\]
where \(R(K) = \int_{-\infty}^{\infty}K(u)^2du\) is called the roughness of K.
Together, the asymptotic mean-squared error (AMSE) for fixed x is the sum of the approximate squared bias and approximate variance
\[
\mathrm{AMSE}_h(x) = \frac{1}{4}f''(x)^2h^4\sigma_K^4 + \frac{f(x)R(K)}{nh}.
\]
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
$$\mathrm{AMISE}_h = \int \mathrm{AMSE}_h(x)\,dx = \frac{h^4\sigma_K^4 R(f'')}{4} + \frac{R(K)}{nh} \qquad (18.1)$$
where $R(f'') = \int (f''(x))^2\,dx$ is the roughness of $f''$. Notice that the first term (the squared bias) is increasing in $h$ and the second term (the variance) is decreasing in $nh$. Thus for the AMISE to decline with $n$, we need $h \to 0$ but $nh \to \infty$. That is, $h$ must tend to zero, but at a slower rate than $n^{-1}$.
Equation (18.1) is an asymptotic approximation to the MSE. We define the asymptotically optimal bandwidth $h_0$ as the value which minimizes this approximate MSE. That is,
$$h_0 = \operatorname*{argmin}_h \mathrm{AMISE}_h.$$
It can be found by solving the first order condition
$$\frac{d}{dh}\mathrm{AMISE}_h = h^3\sigma_K^4 R(f'') - \frac{R(K)}{nh^2} = 0$$
yielding
$$h_0 = \left(\frac{R(K)}{\sigma_K^4 R(f'')}\right)^{1/5} n^{-1/5}. \qquad (18.2)$$
This solution takes the form $h_0 = cn^{-1/5}$ where $c$ is a function of $K$ and $f$, but not of $n$. We thus say that the optimal bandwidth is of order $O(n^{-1/5})$. Note that this $h$ declines to zero, but at a very slow rate.
In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (18.2) depends on $R(K)$, $\sigma_K^2$, and $R(f'')$. The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.

  Kernel $K$        $\sigma_K^2 = \int_{-\infty}^{\infty} u^2K(u)\,du$     $R(K) = \int_{-\infty}^{\infty} K(u)^2\,du$
  Gaussian                      1                                          $1/(2\sqrt{\pi})$
  Epanechnikov                  1/5                                        3/5
  Biweight                      1/7                                        5/7
An obvious difficulty is that $R(f'')$ is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (18.2) but replaces $R(f'')$ with $\hat\sigma^{-5}R(\phi'')$, where $\phi$ is the N$(0,1)$ distribution and $\hat\sigma^2$ is an estimate of $\sigma^2 = \mathrm{var}(X)$. This choice for $h$ gives an optimal rule when $f(x)$ is normal, and gives a nearly optimal rule when $f(x)$ is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb $h$ can be quite inefficient. We can calculate that $R(\phi'') = 3/(8\sqrt{\pi})$. Together with the above table, we find the reference rules for the three kernel functions introduced earlier.

Gaussian Kernel: $h_{rule} = 1.06\,\hat\sigma n^{-1/5}$
Epanechnikov Kernel: $h_{rule} = 2.34\,\hat\sigma n^{-1/5}$
Biweight (Quartic) Kernel: $h_{rule} = 2.78\,\hat\sigma n^{-1/5}$
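A minimal sketch (not from the text) of the Gaussian-kernel reference rule and the resulting density estimate is given below, assuming simulated data; the function names and data are illustrative only.

```python
import numpy as np

def silverman_bandwidth(x):
    """Gaussian-kernel reference bandwidth 1.06 * sigma_hat * n^(-1/5)."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def kde_gaussian(x, grid, h):
    """Kernel density estimate with Gaussian kernel and bandwidth h."""
    u = (x[None, :] - grid[:, None]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=500)
h = silverman_bandwidth(x)
grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 400)
fhat = kde_gaussian(x, grid, h)
print(f"n = {len(x)}, rule-of-thumb h = {h:.3f}, peak of fhat = {fhat.max():.3f}")
```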
Unless you delve more deeply into kernel estimation methods the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate $\hat f(x)$. There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate $R(f'')$ in a first step, and then plug this estimate into the formula (18.2). This is more treacherous than may first appear, as the optimal $h$ for estimation of the roughness $R(f'')$ is quite different than the optimal $h$ for estimation of $f(x)$. However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of $h$ is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data has some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths leading to dramatically undersmoothed estimates. Fortunately there are remedies; one, known as smoothed cross-validation, is a close cousin of the bootstrap.
Appendix A
Matrix Algebra
A.1 Notation
A scalar a is a single number.
A vector $a$ is a $k \times 1$ list of numbers, typically arranged in a column. We write this as
$$a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix}.$$
Equivalently, a vector $a$ is an element of Euclidean $k$ space, written as $a \in \mathbb{R}^k$. If $k = 1$ then $a$ is a scalar.
A matrix $A$ is a $k \times r$ rectangular array of numbers, written as
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1r} \\ a_{21} & a_{22} & \cdots & a_{2r} \\ \vdots & \vdots & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kr} \end{bmatrix}$$
By convention $a_{ij}$ refers to the element in the $i$'th row and $j$'th column of $A$. If $r = 1$ then $A$ is a column vector. If $k = 1$ then $A$ is a row vector. If $r = k = 1$, then $A$ is a scalar.
A standard convention (which we will follow in this text whenever possible) is to denote scalars by lower-case italics ($a$), vectors by lower-case bold italics ($a$), and matrices by upper-case bold italics ($A$). Sometimes a matrix $A$ is denoted by the symbol $(a_{ij})$.
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_r \end{bmatrix} = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{bmatrix}$$
where
$$a_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ki} \end{bmatrix}$$
are column vectors and
$$\alpha_j = \begin{bmatrix} a_{j1} & a_{j2} & \cdots & a_{jr} \end{bmatrix}$$
are row vectors.
The transpose of a matrix, denoted $A'$, is obtained by flipping the matrix on its diagonal. Thus
$$A' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{k1} \\ a_{12} & a_{22} & \cdots & a_{k2} \\ \vdots & \vdots & & \vdots \\ a_{1r} & a_{2r} & \cdots & a_{kr} \end{bmatrix}$$
Alternatively, letting $B = A'$, then $b_{ij} = a_{ji}$. Note that if $A$ is $k \times r$, then $A'$ is $r \times k$. If $a$ is a $k \times 1$ vector, then $a'$ is a $1 \times k$ row vector. An alternative notation for the transpose of $A$ is $A^{\top}$.
A matrix is square if $k = r$. A square matrix is symmetric if $A = A'$, which requires $a_{ij} = a_{ji}$. A square matrix is diagonal if the off-diagonal elements are all zero, so that $a_{ij} = 0$ if $i \ne j$. A square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.
An important diagonal matrix is the identity matrix, which has ones on the diagonal. The $k \times k$ identity matrix is denoted as
$$I_k = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
A partitioned matrix takes the form
$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1r} \\ A_{21} & A_{22} & \cdots & A_{2r} \\ \vdots & \vdots & & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kr} \end{bmatrix}$$
where the $A_{ij}$ denote matrices, vectors and/or scalars.
A.2 Matrix Addition
If the matrices $A = (a_{ij})$ and $B = (b_{ij})$ are of the same order, we define the sum
$$A + B = (a_{ij} + b_{ij}).$$
Matrix addition follows the commutative and associative laws:
$$A + B = B + A$$
$$A + (B + C) = (A + B) + C.$$
A.3 Matrix Multiplication
If $A$ is $k \times r$ and $c$ is real, we define their product as
$$Ac = cA = (a_{ij}c).$$
If $a$ and $b$ are both $k \times 1$, then their inner product is
$$a'b = a_1b_1 + a_2b_2 + \cdots + a_kb_k = \sum_{j=1}^{k} a_jb_j.$$
Note that $a'b = b'a$. We say that two vectors $a$ and $b$ are orthogonal if $a'b = 0$.
If $A$ is $k \times r$ and $B$ is $r \times s$, so that the number of columns of $A$ equals the number of rows of $B$, we say that $A$ and $B$ are conformable. In this event the matrix product $AB$ is defined. Writing $A$ as a set of row vectors and $B$ as a set of column vectors (each of length $r$), then the matrix product is defined as
$$AB = \begin{bmatrix} a_1' \\ a_2' \\ \vdots \\ a_k' \end{bmatrix}\begin{bmatrix} b_1 & b_2 & \cdots & b_s \end{bmatrix} = \begin{bmatrix} a_1'b_1 & a_1'b_2 & \cdots & a_1'b_s \\ a_2'b_1 & a_2'b_2 & \cdots & a_2'b_s \\ \vdots & \vdots & & \vdots \\ a_k'b_1 & a_k'b_2 & \cdots & a_k'b_s \end{bmatrix}.$$
Matrix multiplication is not commutative: in general $AB \ne BA$. However, it is associative and distributive:
$$A(BC) = (AB)C$$
$$A(B + C) = AB + AC$$
An alternative way to write the matrix product is to use matrix partitions. For example,
$$AB = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}.$$
As another example,
$$AB = \begin{bmatrix} A_1 & A_2 & \cdots & A_r \end{bmatrix}\begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_r \end{bmatrix} = A_1B_1 + A_2B_2 + \cdots + A_rB_r = \sum_{j=1}^{r} A_jB_j$$
An important property of the identity matrix is that if $A$ is $k \times r$, then $AI_r = A$ and $I_kA = A$.
The $k \times r$ matrix $A$, $r \le k$, is called orthogonal if $A'A = I_r$.
A.4 Trace
The trace of a $k \times k$ square matrix $A$ is the sum of its diagonal elements
$$\mathrm{tr}(A) = \sum_{i=1}^{k} a_{ii}.$$
Some straightforward properties for square matrices $A$ and $B$ and real $c$ are
$$\mathrm{tr}(cA) = c\,\mathrm{tr}(A)$$
$$\mathrm{tr}(A') = \mathrm{tr}(A)$$
$$\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$$
$$\mathrm{tr}(I_k) = k.$$
Also, for $k \times r$ $A$ and $r \times k$ $B$ we have
$$\mathrm{tr}(AB) = \mathrm{tr}(BA). \qquad (A.1)$$
Indeed,
$$\mathrm{tr}(AB) = \mathrm{tr}\begin{bmatrix} a_1'b_1 & a_1'b_2 & \cdots & a_1'b_k \\ a_2'b_1 & a_2'b_2 & \cdots & a_2'b_k \\ \vdots & \vdots & & \vdots \\ a_k'b_1 & a_k'b_2 & \cdots & a_k'b_k \end{bmatrix} = \sum_{i=1}^{k} a_i'b_i = \sum_{i=1}^{k} b_i'a_i = \mathrm{tr}(BA).$$
A.5 Rank and Inverse
The rank of the $k \times r$ matrix ($r \le k$)
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_r \end{bmatrix}$$
is the number of linearly independent columns $a_j$, and is written as $\mathrm{rank}(A)$. We say that $A$ has full rank if $\mathrm{rank}(A) = r$.
A square $k \times k$ matrix $A$ is said to be nonsingular if it has full rank, i.e. $\mathrm{rank}(A) = k$. This means that there is no $k \times 1$ $c \ne 0$ such that $Ac = 0$.
If a square $k \times k$ matrix $A$ is nonsingular then there exists a unique $k \times k$ matrix $A^{-1}$ called the inverse of $A$ which satisfies
$$AA^{-1} = A^{-1}A = I_k.$$
For nonsingular $A$ and $C$, some important properties include
$$AA^{-1} = A^{-1}A = I_k$$
$$\left(A^{-1}\right)' = \left(A'\right)^{-1}$$
$$(AC)^{-1} = C^{-1}A^{-1}$$
$$(A + C)^{-1} = A^{-1}\left(A^{-1} + C^{-1}\right)^{-1}C^{-1}$$
$$A^{-1} - (A + C)^{-1} = A^{-1}\left(A^{-1} + C^{-1}\right)^{-1}A^{-1}$$
Also, if $A$ is an orthogonal matrix, then $A^{-1} = A'$.
Another useful result for nonsingular $A$ is known as the Woodbury matrix identity
$$(A + BCD)^{-1} = A^{-1} - A^{-1}BC\left(C + CDA^{-1}BC\right)^{-1}CDA^{-1}. \qquad (A.2)$$
In particular, for $C = -1$, $B = b$ and $D = b'$ for vector $b$ we find what is known as the Sherman-Morrison formula
$$\left(A - bb'\right)^{-1} = A^{-1} + \left(1 - b'A^{-1}b\right)^{-1}A^{-1}bb'A^{-1}. \qquad (A.3)$$
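The Sherman-Morrison formula (A.3) is easy to verify numerically. A small illustrative check (not from the text, with an arbitrary well-conditioned matrix) follows.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4
A = rng.normal(size=(k, k))
A = A @ A.T + 10 * np.eye(k)            # a nonsingular, well-conditioned matrix
b = rng.normal(size=(k, 1))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A - b @ b.T)
rhs = Ainv + (Ainv @ b @ b.T @ Ainv) / (1.0 - (b.T @ Ainv @ b).item())
print(np.max(np.abs(lhs - rhs)))        # should be ~0 up to round-off
```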
The following fact about inverting partitioned matrices is quite useful.
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A^{11} & A^{12} \\ A^{21} & A^{22} \end{bmatrix} = \begin{bmatrix} A_{11\cdot 2}^{-1} & -A_{11\cdot 2}^{-1}A_{12}A_{22}^{-1} \\ -A_{22\cdot 1}^{-1}A_{21}A_{11}^{-1} & A_{22\cdot 1}^{-1} \end{bmatrix} \qquad (A.4)$$
where $A_{11\cdot 2} = A_{11} - A_{12}A_{22}^{-1}A_{21}$ and $A_{22\cdot 1} = A_{22} - A_{21}A_{11}^{-1}A_{12}$. There are alternative algebraic representations for the components. For example, using the Woodbury matrix identity you can show the following alternative expressions
$$A^{11} = A_{11}^{-1} + A_{11}^{-1}A_{12}A_{22\cdot 1}^{-1}A_{21}A_{11}^{-1}$$
$$A^{22} = A_{22}^{-1} + A_{22}^{-1}A_{21}A_{11\cdot 2}^{-1}A_{12}A_{22}^{-1}$$
$$A^{12} = -A_{11}^{-1}A_{12}A_{22\cdot 1}^{-1}$$
$$A^{21} = -A_{22}^{-1}A_{21}A_{11\cdot 2}^{-1}$$
Even if a matrix $A$ does not possess an inverse, we can still define the Moore-Penrose generalized inverse $A^{-}$ as the matrix which satisfies
$$AA^{-}A = A$$
$$A^{-}AA^{-} = A^{-}$$
$$AA^{-} \text{ is symmetric}$$
$$A^{-}A \text{ is symmetric}$$
For any matrix $A$, the Moore-Penrose generalized inverse $A^{-}$ exists and is unique.
For example, if
$$A = \begin{bmatrix} A_{11} & 0 \\ 0 & 0 \end{bmatrix}$$
then
$$A^{-} = \begin{bmatrix} A_{11}^{-} & 0 \\ 0 & 0 \end{bmatrix}.$$
A.6 Determinant
The determinant is a measure of the volume of a square matrix.
While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let $A = (a_{ij})$ be a general $k \times k$ matrix. Let $\pi = (j_1, ..., j_k)$ denote a permutation of $(1, ..., k)$. There are $k!$ such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order $(1, ..., k)$), and let $\varepsilon_\pi = +1$ if this count is even and $\varepsilon_\pi = -1$ if the count is odd. Then the determinant of $A$ is defined as
$$\det A = \sum_{\pi} \varepsilon_\pi\, a_{1j_1}a_{2j_2}\cdots a_{kj_k}.$$
For example, if $A$ is $2 \times 2$, then the two permutations of $(1, 2)$ are $(1, 2)$ and $(2, 1)$, for which $\varepsilon_{(1,2)} = 1$ and $\varepsilon_{(2,1)} = -1$. Thus
$$\det A = \varepsilon_{(1,2)}a_{11}a_{22} + \varepsilon_{(2,1)}a_{21}a_{12} = a_{11}a_{22} - a_{12}a_{21}.$$
Some properties include
• $\det(A) = \det(A')$
• $\det(cA) = c^k\det A$
• $\det(AB) = (\det A)(\det B)$
• $\det\left(A^{-1}\right) = (\det A)^{-1}$
• $\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = (\det D)\det\left(A - BD^{-1}C\right)$ if $\det D \ne 0$
• $\det A \ne 0$ if and only if $A$ is nonsingular.
• If $A$ is triangular (upper or lower), then $\det A = \prod_{i=1}^{k} a_{ii}$
• If $A$ is orthogonal, then $\det A = \pm 1$
A.7 Eigenvalues
The characteristic equation of a square matrix $A$ is
$$\det(A - \lambda I_k) = 0.$$
The left side is a polynomial of degree $k$ in $\lambda$ so it has exactly $k$ roots, which are not necessarily distinct and may be real or complex. They are called the latent roots or characteristic roots or eigenvalues of $A$. If $\lambda_i$ is an eigenvalue of $A$, then $A - \lambda_i I_k$ is singular so there exists a non-zero vector $h_i$ such that
$$(A - \lambda_i I_k)h_i = 0.$$
The vector $h_i$ is called a latent vector or characteristic vector or eigenvector of $A$ corresponding to $\lambda_i$.
We now state some useful properties. Let $\lambda_i$ and $h_i$, $i = 1, ..., k$ denote the $k$ eigenvalues and eigenvectors of a square matrix $A$. Let $\Lambda$ be a diagonal matrix with the characteristic roots in the diagonal, and let $H = [h_1 \cdots h_k]$.
• $\det(A) = \prod_{i=1}^{k}\lambda_i$
• $\mathrm{tr}(A) = \sum_{i=1}^{k}\lambda_i$
• $A$ is nonsingular if and only if all its characteristic roots are non-zero.
• If $A$ has distinct characteristic roots, there exists a nonsingular matrix $P$ such that $A = P^{-1}\Lambda P$ and $PAP^{-1} = \Lambda$.
• If $A$ is symmetric, then $A = H\Lambda H'$ and $H'AH = \Lambda$, and the characteristic roots are all real. $A = H\Lambda H'$ is called the spectral decomposition of a matrix.
• The characteristic roots of $A^{-1}$ are $\lambda_1^{-1}, \lambda_2^{-1}, ..., \lambda_k^{-1}$.
• The matrix $H$ has the orthonormal properties $H'H = I$ and $HH' = I$.
• $H^{-1} = H'$ and $(H')^{-1} = H$
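The spectral decomposition and the determinant/trace properties above can be checked numerically. The following is an illustrative sketch (not from the text), using an arbitrary symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(5, 5))
A = G + G.T                                   # a symmetric matrix
lam, H = np.linalg.eigh(A)                    # eigenvalues and orthonormal eigenvectors

print(np.max(np.abs(H @ np.diag(lam) @ H.T - A)))    # ~0: A = H Lambda H'
print(np.max(np.abs(H.T @ H - np.eye(5))))            # ~0: H'H = I
print(np.isclose(np.linalg.det(A), np.prod(lam)),     # det(A) = product of eigenvalues
      np.isclose(np.trace(A), lam.sum()))             # tr(A) = sum of eigenvalues
```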
A.8 Positive Deﬁniteness
We say that a $k \times k$ symmetric square matrix $A$ is positive semi-definite if for all $c \ne 0$, $c'Ac \ge 0$. This is written as $A \ge 0$. We say that $A$ is positive definite if for all $c \ne 0$, $c'Ac > 0$. This is written as $A > 0$.
Some properties include:
• If $A = G'G$ for some matrix $G$, then $A$ is positive semi-definite. (For any $c \ne 0$, $c'Ac = \alpha'\alpha \ge 0$ where $\alpha = Gc$.) If $G$ has full rank, then $A$ is positive definite.
• If $A$ is positive definite, then $A$ is non-singular and $A^{-1}$ exists. Furthermore, $A^{-1} > 0$.
• $A > 0$ if and only if it is symmetric and all its characteristic roots are positive.
• By the spectral decomposition, $A = H\Lambda H'$ where $H'H = I$ and $\Lambda$ is diagonal with non-negative diagonal elements. All diagonal elements of $\Lambda$ are strictly positive if (and only if) $A > 0$.
• If $A > 0$ then $A^{-1} = H\Lambda^{-1}H'$.
• If $A \ge 0$ and $\mathrm{rank}(A) = r < k$ then $A^{-} = H\Lambda^{-}H'$ where $A^{-}$ is the Moore-Penrose generalized inverse, and $\Lambda^{-} = \mathrm{diag}\left(\lambda_1^{-1}, \lambda_2^{-1}, ..., \lambda_r^{-1}, 0, ..., 0\right)$
• If $A > 0$ we can find a matrix $B$ such that $A = BB'$. We call $B$ a matrix square root of $A$. The matrix $B$ need not be unique. One way to construct $B$ is to use the spectral decomposition $A = H\Lambda H'$ where $\Lambda$ is diagonal, and then set $B = H\Lambda^{1/2}$.
A square matrix $A$ is idempotent if $AA = A$. If $A$ is idempotent and symmetric then all its characteristic roots equal either zero or one and it is thus positive semi-definite. To see this, note that we can write $A = H\Lambda H'$ where $H$ is orthogonal and $\Lambda$ contains the $r$ (real) characteristic roots. Then
$$A = AA = H\Lambda H'H\Lambda H' = H\Lambda^2H'.$$
By the uniqueness of the characteristic roots, we deduce that $\Lambda^2 = \Lambda$ and $\lambda_i^2 = \lambda_i$ for $i = 1, ..., r$. Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent $A$ takes the form
$$A = H\begin{bmatrix} I_{k-r} & 0 \\ 0 & 0 \end{bmatrix}H' \qquad (A.5)$$
with $H'H = I_k$. Additionally, $\mathrm{tr}(A) = \mathrm{rank}(A)$.
A.9 Matrix Calculus
Let $x = (x_1, ..., x_k)$ be $k \times 1$ and $g(x) = g(x_1, ..., x_k) : \mathbb{R}^k \to \mathbb{R}$. The vector derivative is
$$\frac{\partial}{\partial x}g(x) = \begin{pmatrix} \frac{\partial}{\partial x_1}g(x) \\ \vdots \\ \frac{\partial}{\partial x_k}g(x) \end{pmatrix}$$
and
$$\frac{\partial}{\partial x'}g(x) = \begin{pmatrix} \frac{\partial}{\partial x_1}g(x) & \cdots & \frac{\partial}{\partial x_k}g(x) \end{pmatrix}.$$
Some properties are now summarized.
• $\frac{\partial}{\partial x}(a'x) = \frac{\partial}{\partial x}(x'a) = a$
• $\frac{\partial}{\partial x'}(Ax) = A$
• $\frac{\partial}{\partial x}(x'Ax) = (A + A')x$
• $\frac{\partial^2}{\partial x\,\partial x'}(x'Ax) = A + A'$
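A quick numerical check (illustrative, not from the text) of the quadratic-form rule $\frac{\partial}{\partial x}(x'Ax) = (A + A')x$, using central finite differences:

```python
import numpy as np

rng = np.random.default_rng(4)
k = 3
A = rng.normal(size=(k, k))
x = rng.normal(size=k)
eps = 1e-6

def quad(v):
    return v @ A @ v

numeric = np.array([(quad(x + eps * e) - quad(x - eps * e)) / (2 * eps)
                    for e in np.eye(k)])
analytic = (A + A.T) @ x
print(np.max(np.abs(numeric - analytic)))   # ~0 up to finite-difference error
```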
A.10 Kronecker Products and the Vec Operator
Let $A = [a_1\ a_2\ \cdots\ a_n]$ be $m \times n$. The vec of $A$, denoted by $\mathrm{vec}(A)$, is the $mn \times 1$ vector
$$\mathrm{vec}(A) = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
Let $A = (a_{ij})$ be an $m \times n$ matrix and let $B$ be any matrix. The Kronecker product of $A$ and $B$, denoted $A \otimes B$, is the matrix
$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix}.$$
Some important properties are now summarized. These results hold for matrices for which all matrix multiplications are conformable.
• $(A + B)\otimes C = A\otimes C + B\otimes C$
• $(A\otimes B)(C\otimes D) = AC\otimes BD$
• $A\otimes(B\otimes C) = (A\otimes B)\otimes C$
• $(A\otimes B)' = A'\otimes B'$
• $\mathrm{tr}(A\otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B)$
• If $A$ is $m \times m$ and $B$ is $n \times n$, $\det(A\otimes B) = (\det(A))^n(\det(B))^m$
• $(A\otimes B)^{-1} = A^{-1}\otimes B^{-1}$
• If $A > 0$ and $B > 0$ then $A\otimes B > 0$
• $\mathrm{vec}(ABC) = (C'\otimes A)\,\mathrm{vec}(B)$
• $\mathrm{tr}(ABCD) = \mathrm{vec}(D')'(C'\otimes A)\,\mathrm{vec}(B)$
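Two of these identities are checked numerically in the illustrative sketch below (not from the text); the matrix shapes are arbitrary but conformable.

```python
import numpy as np

rng = np.random.default_rng(5)
A, B, C = rng.normal(size=(2, 3)), rng.normal(size=(3, 4)), rng.normal(size=(4, 2))

vec = lambda M: M.reshape(-1, order="F")     # stack columns, as in the definition above
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.max(np.abs(lhs - rhs)))             # ~0: vec(ABC) = (C' kron A) vec(B)

S, T = rng.normal(size=(3, 3)), rng.normal(size=(4, 4))
print(np.isclose(np.trace(np.kron(S, T)), np.trace(S) * np.trace(T)))   # tr(S kron T) = tr(S) tr(T)
```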
A.11 Vector and Matrix Norms and Inequalities
The Euclidean norm of an $m \times 1$ vector $a$ is
$$\|a\| = \left(a'a\right)^{1/2} = \left(\sum_{i=1}^{m} a_i^2\right)^{1/2}.$$
The Euclidean norm of an $m \times n$ matrix $A$ is
$$\|A\| = \|\mathrm{vec}(A)\| = \left(\mathrm{tr}\left(A'A\right)\right)^{1/2} = \left(\sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}^2\right)^{1/2}.$$
A useful calculation is for any $m \times 1$ vectors $a$ and $b$, using (A.1),
$$\left\|ab'\right\| = \left(\mathrm{tr}\left(ba'ab'\right)\right)^{1/2} = \left(b'b\,a'a\right)^{1/2} = \|a\|\,\|b\|$$
and in particular
$$\left\|aa'\right\| = \|a\|^2 \qquad (A.6)$$
Some useful inequalities are now given:
Schwarz Inequality: For any $m \times 1$ vectors $a$ and $b$,
$$\left|a'b\right| \le \|a\|\,\|b\|. \qquad (A.7)$$
Schwarz Matrix Inequality: For any $m \times n$ matrices $A$ and $B$,
$$\left\|A'B\right\| \le \|A\|\,\|B\|. \qquad (A.8)$$
Triangle Inequality: For any $m \times n$ matrices $A$ and $B$,
$$\|A + B\| \le \|A\| + \|B\|. \qquad (A.9)$$
Trace Inequality. For any $m \times m$ matrices $A$ and $B$ such that $A$ is symmetric and $B \ge 0$,
$$\mathrm{tr}(AB) \le \lambda_{\max}(A)\,\mathrm{tr}(B) \qquad (A.10)$$
where $\lambda_{\max}(A)$ is the largest eigenvalue of $A$.
Proof of Schwarz Inequality: First, suppose that $\|b\| = 0$. Then $b = 0$ and both $|a'b| = 0$ and $\|a\|\,\|b\| = 0$, so the inequality is true. Second, suppose that $\|b\| > 0$ and define $c = a - b\left(b'b\right)^{-1}b'a$. Since $c$ is a vector, $c'c \ge 0$. Thus
$$0 \le c'c = a'a - \left(a'b\right)^2/\left(b'b\right).$$
Rearranging, this implies that
$$\left(a'b\right)^2 \le \left(a'a\right)\left(b'b\right).$$
Taking the square root of each side yields the result.
Proof of Schwarz Matrix Inequality: Partition $A = [a_1, ..., a_n]$ and $B = [b_1, ..., b_n]$. Then by partitioned matrix multiplication, the definition of the matrix Euclidean norm and the Schwarz inequality
$$\left\|A'B\right\| = \left\|\begin{bmatrix} a_1'b_1 & a_1'b_2 & \cdots \\ a_2'b_1 & a_2'b_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}\right\| \le \left\|\begin{bmatrix} \|a_1\|\,\|b_1\| & \|a_1\|\,\|b_2\| & \cdots \\ \|a_2\|\,\|b_1\| & \|a_2\|\,\|b_2\| & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}\right\| = \left(\sum_{i=1}^{n}\sum_{j=1}^{n}\|a_i\|^2\|b_j\|^2\right)^{1/2} = \left(\sum_{i=1}^{n}\|a_i\|^2\right)^{1/2}\left(\sum_{i=1}^{n}\|b_i\|^2\right)^{1/2} = \left(\sum_{i=1}^{n}\sum_{j=1}^{m}a_{ji}^2\right)^{1/2}\left(\sum_{i=1}^{n}\sum_{j=1}^{m}b_{ji}^2\right)^{1/2} = \|A\|\,\|B\|$$
Proof of Triangle Inequality: Let $a = \mathrm{vec}(A)$ and $b = \mathrm{vec}(B)$. Then by the definition of the matrix norm and the Schwarz Inequality
$$\|A + B\|^2 = \|a + b\|^2 = a'a + 2a'b + b'b \le a'a + 2\left|a'b\right| + b'b \le \|a\|^2 + 2\|a\|\,\|b\| + \|b\|^2 = (\|a\| + \|b\|)^2 = (\|A\| + \|B\|)^2$$
Proof of Trace Inequality. By the spectral decomposition for symmetric matrices, $A = H\Lambda H'$ where $\Lambda$ has the eigenvalues $\lambda_j$ of $A$ on the diagonal and $H$ is orthonormal. Define $C = H'BH$ which has non-negative diagonal elements $C_{jj}$ since $B$ is positive semi-definite. Then
$$\mathrm{tr}(AB) = \mathrm{tr}(\Lambda C) = \sum_{j=1}^{m}\lambda_jC_{jj} \le \max_j\lambda_j\sum_{j=1}^{m}C_{jj} = \lambda_{\max}(A)\,\mathrm{tr}(C)$$
where the inequality uses the fact that $C_{jj} \ge 0$. But note that
$$\mathrm{tr}(C) = \mathrm{tr}\left(H'BH\right) = \mathrm{tr}\left(HH'B\right) = \mathrm{tr}(B)$$
since $H$ is orthonormal. Thus $\mathrm{tr}(AB) \le \lambda_{\max}(A)\,\mathrm{tr}(B)$ as stated.
Appendix B
Probability
B.1 Foundations
The set $S$ of all possible outcomes of an experiment is called the sample space for the experiment. Take the simple example of tossing a coin. There are two outcomes, heads and tails, so we can write $S = \{H, T\}$. If two coins are tossed in sequence, we can write the four outcomes as $S = \{HH, HT, TH, TT\}$.
An event $A$ is any collection of possible outcomes of an experiment. An event is a subset of $S$, including $S$ itself and the null set $\emptyset$. Continuing the two coin example, one event is $A = \{HH, HT\}$, the event that the first coin is heads. We say that $A$ and $B$ are disjoint or mutually exclusive if $A \cap B = \emptyset$. For example, the sets $\{HH, HT\}$ and $\{TH\}$ are disjoint. Furthermore, if the sets $A_1, A_2, ...$ are pairwise disjoint and $\cup_{i=1}^{\infty}A_i = S$, then the collection $A_1, A_2, ...$ is called a partition of $S$.
The following are elementary set operations:
Union: $A \cup B = \{x : x \in A \text{ or } x \in B\}$.
Intersection: $A \cap B = \{x : x \in A \text{ and } x \in B\}$.
Complement: $A^c = \{x : x \notin A\}$.
The following are useful properties of set operations.
Commutativity: $A \cup B = B \cup A$; $A \cap B = B \cap A$.
Associativity: $A \cup (B \cup C) = (A \cup B) \cup C$; $A \cap (B \cap C) = (A \cap B) \cap C$.
Distributive Laws: $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$; $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$.
DeMorgan's Laws: $(A \cup B)^c = A^c \cap B^c$; $(A \cap B)^c = A^c \cup B^c$.
A probability function assigns probabilities (numbers between 0 and 1) to events $A$ in $S$. This is straightforward when $S$ is countable; when $S$ is uncountable we must be somewhat more careful. A set $\mathcal{B}$ is called a sigma algebra (or Borel field) if $\emptyset \in \mathcal{B}$, $A \in \mathcal{B}$ implies $A^c \in \mathcal{B}$, and $A_1, A_2, ... \in \mathcal{B}$ implies $\cup_{i=1}^{\infty}A_i \in \mathcal{B}$. A simple example is $\{\emptyset, S\}$ which is known as the trivial sigma algebra. For any sample space $S$, let $\mathcal{B}$ be the smallest sigma algebra which contains all of the open sets in $S$. When $S$ is countable, $\mathcal{B}$ is simply the collection of all subsets of $S$, including $\emptyset$ and $S$. When $S$ is the real line, then $\mathcal{B}$ is the collection of all open and closed intervals. We call $\mathcal{B}$ the sigma algebra associated with $S$. We only define probabilities for events contained in $\mathcal{B}$.
We now can give the axiomatic definition of probability. Given $S$ and $\mathcal{B}$, a probability function $\Pr$ satisfies $\Pr(S) = 1$, $\Pr(A) \ge 0$ for all $A \in \mathcal{B}$, and if $A_1, A_2, ... \in \mathcal{B}$ are pairwise disjoint, then $\Pr\left(\cup_{i=1}^{\infty}A_i\right) = \sum_{i=1}^{\infty}\Pr(A_i)$.
Some important properties of the probability function include the following
• $\Pr(\emptyset) = 0$
• $\Pr(A) \le 1$
• $\Pr(A^c) = 1 - \Pr(A)$
• $\Pr(B \cap A^c) = \Pr(B) - \Pr(A \cap B)$
• $\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$
• If $A \subset B$ then $\Pr(A) \le \Pr(B)$
• Bonferroni's Inequality: $\Pr(A \cap B) \ge \Pr(A) + \Pr(B) - 1$
• Boole's Inequality: $\Pr(A \cup B) \le \Pr(A) + \Pr(B)$
For some elementary probability models, it is useful to have simple rules to count the number of objects in a set. These counting rules are facilitated by using the binomial coefficients which are defined for nonnegative integers $n$ and $r$, $n \ge r$, as
$$\binom{n}{r} = \frac{n!}{r!(n-r)!}.$$
When counting the number of objects in a set, there are two important distinctions. Counting may be with replacement or without replacement. Counting may be ordered or unordered. For example, consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is without replacement if you are not allowed to select the same number twice, and is with replacement if this is allowed. Counting is ordered or not depending on whether the sequential order of the numbers is relevant to winning the lottery. Depending on these two distinctions, we have four expressions for the number of objects (possible arrangements) of size $r$ from $n$ objects.

                 Without Replacement        With Replacement
  Ordered        $n!/(n-r)!$                $n^r$
  Unordered      $\binom{n}{r}$             $\binom{n+r-1}{r}$

In the lottery example, if counting is unordered and without replacement, the number of potential combinations is
$$\binom{49}{6} = 13{,}983{,}816.$$
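The four counting formulas are simple to evaluate directly. A small illustration (not from the text) for the lottery example with $n = 49$ and $r = 6$:

```python
from math import comb, perm

n, r = 49, 6
print("ordered, without replacement:  ", perm(n, r))          # n!/(n-r)!
print("ordered, with replacement:     ", n**r)
print("unordered, without replacement:", comb(n, r))           # 13,983,816
print("unordered, with replacement:   ", comb(n + r - 1, r))
```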
If $\Pr(B) > 0$ the conditional probability of the event $A$ given the event $B$ is
$$\Pr(A \mid B) = \frac{\Pr(A \cap B)}{\Pr(B)}.$$
For any $B$, the conditional probability function is a valid probability function where $S$ has been replaced by $B$. Rearranging the definition, we can write
$$\Pr(A \cap B) = \Pr(A \mid B)\Pr(B)$$
which is often quite useful. We can say that the occurrence of $B$ has no information about the likelihood of event $A$ when $\Pr(A \mid B) = \Pr(A)$, in which case we find
$$\Pr(A \cap B) = \Pr(A)\Pr(B) \qquad (B.1)$$
We say that the events $A$ and $B$ are statistically independent when (B.1) holds. Furthermore, we say that the collection of events $A_1, ..., A_k$ are mutually independent when for any subset $\{A_i : i \in I\}$,
$$\Pr\left(\bigcap_{i \in I}A_i\right) = \prod_{i \in I}\Pr(A_i).$$
Theorem 1 (Bayes' Rule). For any set $B$ and any partition $A_1, A_2, ...$ of the sample space, then for each $i = 1, 2, ...$
$$\Pr(A_i \mid B) = \frac{\Pr(B \mid A_i)\Pr(A_i)}{\sum_{j=1}^{\infty}\Pr(B \mid A_j)\Pr(A_j)}$$
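Bayes' Rule can be illustrated by simulation. The sketch below (not from the text) uses a two-element partition with arbitrary probabilities and compares the Bayes posterior to the corresponding empirical frequency.

```python
import numpy as np

p_A = np.array([0.3, 0.7])          # Pr(A1), Pr(A2): an arbitrary partition
p_B_given_A = np.array([0.9, 0.2])  # Pr(B | A1), Pr(B | A2)

# Bayes' Rule: Pr(A1 | B) = Pr(B|A1)Pr(A1) / sum_j Pr(B|Aj)Pr(Aj)
posterior = p_B_given_A * p_A / np.sum(p_B_given_A * p_A)

# Monte Carlo confirmation
rng = np.random.default_rng(6)
A = rng.choice([0, 1], size=1_000_000, p=p_A)
B = rng.random(A.size) < p_B_given_A[A]
print(posterior[0], np.mean(A[B] == 0))   # the two numbers should be close
```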
B.2 Random Variables
A random variable $X$ is a function from a sample space $S$ into the real line. This induces a new sample space (the real line) and a new probability function on the real line. Typically, we denote random variables by uppercase letters such as $X$, and use lower case letters such as $x$ for potential values and realized values. (This is in contrast to the notation adopted for most of the textbook.) For a random variable $X$ we define its cumulative distribution function (CDF) as
$$F(x) = \Pr(X \le x). \qquad (B.2)$$
Sometimes we write this as $F_X(x)$ to denote that it is the CDF of $X$. A function $F(x)$ is a CDF if and only if the following three properties hold:
1. $\lim_{x \to -\infty}F(x) = 0$ and $\lim_{x \to \infty}F(x) = 1$
2. $F(x)$ is nondecreasing in $x$
3. $F(x)$ is right-continuous
We say that the random variable $X$ is discrete if $F(x)$ is a step function. In the latter case, the range of $X$ consists of a countable set of real numbers $\tau_1, ..., \tau_r$. The probability function for $X$ takes the form
$$\Pr(X = \tau_j) = \pi_j, \qquad j = 1, ..., r \qquad (B.3)$$
where $0 \le \pi_j \le 1$ and $\sum_{j=1}^{r}\pi_j = 1$.
We say that the random variable $X$ is continuous if $F(x)$ is continuous in $x$. In this case $\Pr(X = \tau) = 0$ for all $\tau \in \mathbb{R}$ so the representation (B.3) is unavailable. Instead, we represent the relative probabilities by the probability density function (PDF)
$$f(x) = \frac{d}{dx}F(x)$$
so that
$$F(x) = \int_{-\infty}^{x}f(u)\,du$$
and
$$\Pr(a \le X \le b) = \int_{a}^{b}f(u)\,du.$$
These expressions only make sense if $F(x)$ is differentiable. While there are examples of continuous random variables which do not possess a PDF, these cases are unusual and are typically ignored.
A function $f(x)$ is a PDF if and only if $f(x) \ge 0$ for all $x \in \mathbb{R}$ and $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
B.3 Expectation
For any measurable real function $g$, we define the mean or expectation $Eg(X)$ as follows. If $X$ is discrete,
$$Eg(X) = \sum_{j=1}^{r}g(\tau_j)\pi_j,$$
and if $X$ is continuous
$$Eg(X) = \int_{-\infty}^{\infty}g(x)f(x)\,dx.$$
The latter is well defined and finite if
$$\int_{-\infty}^{\infty}\left|g(x)\right|f(x)\,dx < \infty. \qquad (B.4)$$
If (B.4) does not hold, evaluate
$$I_1 = \int_{g(x) > 0}g(x)f(x)\,dx, \qquad I_2 = -\int_{g(x) < 0}g(x)f(x)\,dx.$$
If $I_1 = \infty$ and $I_2 < \infty$ then we define $Eg(X) = \infty$. If $I_1 < \infty$ and $I_2 = \infty$ then we define $Eg(X) = -\infty$. If both $I_1 = \infty$ and $I_2 = \infty$ then $Eg(X)$ is undefined.
Since $E(a + bX) = a + bEX$, we say that expectation is a linear operator.
For $m > 0$, we define the $m$'th moment of $X$ as $EX^m$ and the $m$'th central moment as $E(X - EX)^m$.
Two special moments are the mean $\mu = EX$ and variance $\sigma^2 = E(X - \mu)^2 = EX^2 - \mu^2$. We call $\sigma = \sqrt{\sigma^2}$ the standard deviation of $X$. We can also write $\sigma^2 = \mathrm{var}(X)$. For example, this allows the convenient expression $\mathrm{var}(a + bX) = b^2\mathrm{var}(X)$.
The moment generating function (MGF) of $X$ is
$$M(\lambda) = E\exp(\lambda X).$$
The MGF does not necessarily exist. However, when it does and $E|X|^m < \infty$ then
$$\left.\frac{d^m}{d\lambda^m}M(\lambda)\right|_{\lambda = 0} = E(X^m)$$
which is why it is called the moment generating function.
More generally, the characteristic function (CF) of $X$ is
$$C(\lambda) = E\exp(i\lambda X)$$
where $i = \sqrt{-1}$ is the imaginary unit. The CF always exists, and when $E|X|^m < \infty$
$$\left.\frac{d^m}{d\lambda^m}C(\lambda)\right|_{\lambda = 0} = i^mE(X^m).$$
The $L^p$ norm, $p \ge 1$, of the random variable $X$ is
$$\|X\|_p = \left(E|X|^p\right)^{1/p}.$$
B.4 Gamma Function
The gamma function is defined for $\alpha > 0$ as
$$\Gamma(\alpha) = \int_{0}^{\infty}x^{\alpha-1}\exp(-x)\,dx.$$
It satisfies the property
$$\Gamma(1 + \alpha) = \alpha\Gamma(\alpha)$$
so for positive integers $n$,
$$\Gamma(n) = (n - 1)!$$
Special values include
$$\Gamma(1) = 1$$
and
$$\Gamma\left(\tfrac{1}{2}\right) = \pi^{1/2}.$$
Stirling's formula is an expansion for its logarithm
$$\log\Gamma(\alpha) = \frac{1}{2}\log(2\pi) + \left(\alpha - \frac{1}{2}\right)\log\alpha - \alpha + \frac{1}{12\alpha} - \frac{1}{360\alpha^3} + \frac{1}{1260\alpha^5} + \cdots$$
B.5 Common Distributions
For reference, we now list some important discrete distribution functions.

Bernoulli
$$\Pr(X = x) = p^x(1 - p)^{1-x}, \qquad x = 0, 1; \quad 0 \le p \le 1$$
$$EX = p, \qquad \mathrm{var}(X) = p(1 - p)$$

Binomial
$$\Pr(X = x) = \binom{n}{x}p^x(1 - p)^{n-x}, \qquad x = 0, 1, ..., n; \quad 0 \le p \le 1$$
$$EX = np, \qquad \mathrm{var}(X) = np(1 - p)$$

Geometric
$$\Pr(X = x) = p(1 - p)^{x-1}, \qquad x = 1, 2, ...; \quad 0 \le p \le 1$$
$$EX = \frac{1}{p}, \qquad \mathrm{var}(X) = \frac{1 - p}{p^2}$$

Multinomial
$$\Pr(X_1 = x_1, X_2 = x_2, ..., X_m = x_m) = \frac{n!}{x_1!x_2!\cdots x_m!}p_1^{x_1}p_2^{x_2}\cdots p_m^{x_m},$$
$$x_1 + \cdots + x_m = n; \qquad p_1 + \cdots + p_m = 1$$
$$EX_i = np_i, \qquad \mathrm{var}(X_i) = np_i(1 - p_i), \qquad \mathrm{cov}(X_i, X_j) = -np_ip_j$$

Negative Binomial
$$\Pr(X = x) = \frac{\Gamma(r + x)}{x!\,\Gamma(r)}p^r(1 - p)^x, \qquad x = 0, 1, 2, ...; \quad 0 \le p \le 1$$
$$EX = \frac{r(1 - p)}{p}, \qquad \mathrm{var}(X) = \frac{r(1 - p)}{p^2}$$

Poisson
$$\Pr(X = x) = \frac{\exp(-\lambda)\lambda^x}{x!}, \qquad x = 0, 1, 2, ...; \quad \lambda > 0$$
$$EX = \lambda, \qquad \mathrm{var}(X) = \lambda$$
We now list some important continuous distributions.
Beta
$$f(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1 - x)^{\beta-1}, \qquad 0 \le x \le 1; \quad \alpha > 0, \beta > 0$$
$$\mu = \frac{\alpha}{\alpha + \beta}, \qquad \mathrm{var}(X) = \frac{\alpha\beta}{(\alpha + \beta + 1)(\alpha + \beta)^2}$$

Cauchy
$$f(x) = \frac{1}{\pi(1 + x^2)}, \qquad -\infty < x < \infty$$
$$EX = \infty, \qquad \mathrm{var}(X) = \infty$$

Exponential
$$f(x) = \frac{1}{\theta}\exp\left(-\frac{x}{\theta}\right), \qquad 0 \le x < \infty; \quad \theta > 0$$
$$EX = \theta, \qquad \mathrm{var}(X) = \theta^2$$

Logistic
$$f(x) = \frac{\exp(-x)}{(1 + \exp(-x))^2}, \qquad -\infty < x < \infty$$
$$EX = 0, \qquad \mathrm{var}(X) = \frac{\pi^2}{3}$$

Lognormal
$$f(x) = \frac{1}{\sqrt{2\pi}\sigma x}\exp\left(-\frac{(\log x - \mu)^2}{2\sigma^2}\right), \qquad 0 \le x < \infty; \quad \sigma > 0$$
$$EX = \exp\left(\mu + \sigma^2/2\right), \qquad \mathrm{var}(X) = \exp\left(2\mu + 2\sigma^2\right) - \exp\left(2\mu + \sigma^2\right)$$

Pareto
$$f(x) = \frac{\beta\alpha^\beta}{x^{\beta+1}}, \qquad \alpha \le x < \infty; \quad \alpha > 0, \beta > 0$$
$$EX = \frac{\beta\alpha}{\beta - 1}, \quad \beta > 1, \qquad \mathrm{var}(X) = \frac{\beta\alpha^2}{(\beta - 1)^2(\beta - 2)}, \quad \beta > 2$$

Uniform
$$f(x) = \frac{1}{b - a}, \qquad a \le x \le b$$
$$EX = \frac{a + b}{2}, \qquad \mathrm{var}(X) = \frac{(b - a)^2}{12}$$

Weibull
$$f(x) = \frac{\gamma}{\theta}x^{\gamma-1}\exp\left(-\frac{x^\gamma}{\theta}\right), \qquad 0 \le x < \infty; \quad \gamma > 0, \theta > 0$$
$$EX = \theta^{1/\gamma}\Gamma\left(1 + \frac{1}{\gamma}\right), \qquad \mathrm{var}(X) = \theta^{2/\gamma}\left(\Gamma\left(1 + \frac{2}{\gamma}\right) - \Gamma^2\left(1 + \frac{1}{\gamma}\right)\right)$$

Gamma
$$f(x) = \frac{1}{\Gamma(\alpha)\theta^\alpha}x^{\alpha-1}\exp\left(-\frac{x}{\theta}\right), \qquad 0 \le x < \infty; \quad \alpha > 0, \theta > 0$$
$$EX = \alpha\theta, \qquad \mathrm{var}(X) = \alpha\theta^2$$

Chi-Square
$$f(x) = \frac{1}{\Gamma(r/2)2^{r/2}}x^{r/2-1}\exp\left(-\frac{x}{2}\right), \qquad 0 \le x < \infty; \quad r > 0$$
$$EX = r, \qquad \mathrm{var}(X) = 2r$$

Normal
$$f(x) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty; \quad -\infty < \mu < \infty, \sigma^2 > 0$$
$$EX = \mu, \qquad \mathrm{var}(X) = \sigma^2$$

Student t
$$f(x) = \frac{\Gamma\left(\frac{r+1}{2}\right)}{\sqrt{r\pi}\,\Gamma\left(\frac{r}{2}\right)}\left(1 + \frac{x^2}{r}\right)^{-\left(\frac{r+1}{2}\right)}, \qquad -\infty < x < \infty; \quad r > 0$$
$$EX = 0 \text{ if } r > 1, \qquad \mathrm{var}(X) = \frac{r}{r - 2} \text{ if } r > 2$$
B.6 Multivariate Random Variables
A pair of bivariate random variables $(X, Y)$ is a function from the sample space into $\mathbb{R}^2$. The joint CDF of $(X, Y)$ is
$$F(x, y) = \Pr(X \le x, Y \le y).$$
If $F$ is continuous, the joint probability density function is
$$f(x, y) = \frac{\partial^2}{\partial x\,\partial y}F(x, y).$$
For a Borel measurable set $A \subset \mathbb{R}^2$,
$$\Pr((X, Y) \in A) = \int\!\!\int_A f(x, y)\,dx\,dy$$
For any measurable function $g(x, y)$,
$$Eg(X, Y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}g(x, y)f(x, y)\,dx\,dy.$$
The marginal distribution of $X$ is
$$F_X(x) = \Pr(X \le x) = \lim_{y \to \infty}F(x, y) = \int_{-\infty}^{x}\int_{-\infty}^{\infty}f(x, y)\,dy\,dx$$
so the marginal density of $X$ is
$$f_X(x) = \frac{d}{dx}F_X(x) = \int_{-\infty}^{\infty}f(x, y)\,dy.$$
Similarly, the marginal density of $Y$ is
$$f_Y(y) = \int_{-\infty}^{\infty}f(x, y)\,dx.$$
The random variables $X$ and $Y$ are defined to be independent if $f(x, y) = f_X(x)f_Y(y)$. Furthermore, $X$ and $Y$ are independent if and only if there exist functions $g(x)$ and $h(y)$ such that $f(x, y) = g(x)h(y)$.
If $X$ and $Y$ are independent, then
$$E(g(X)h(Y)) = \int\!\!\int g(x)h(y)f(y, x)\,dy\,dx = \int\!\!\int g(x)h(y)f_Y(y)f_X(x)\,dy\,dx = \int g(x)f_X(x)\,dx\int h(y)f_Y(y)\,dy = Eg(X)\,Eh(Y) \qquad (B.5)$$
if the expectations exist. For example, if $X$ and $Y$ are independent then
$$E(XY) = EX\,EY.$$
Another implication of (B.5) is that if $X$ and $Y$ are independent and $Z = X + Y$, then
$$M_Z(\lambda) = E\exp(\lambda(X + Y)) = E(\exp(\lambda X)\exp(\lambda Y)) = E\exp(\lambda X)\,E\exp(\lambda Y) = M_X(\lambda)M_Y(\lambda). \qquad (B.6)$$
The covariance between $X$ and $Y$ is
$$\mathrm{cov}(X, Y) = \sigma_{XY} = E((X - EX)(Y - EY)) = EXY - EX\,EY.$$
The correlation between $X$ and $Y$ is
$$\mathrm{corr}(X, Y) = \rho_{XY} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y}.$$
The Cauchy-Schwarz Inequality implies that
$$\left|\rho_{XY}\right| \le 1. \qquad (B.7)$$
The correlation is a measure of linear dependence, free of units of measurement.
If $X$ and $Y$ are independent, then $\sigma_{XY} = 0$ and $\rho_{XY} = 0$. The reverse, however, is not true. For example, if $EX = 0$ and $EX^3 = 0$, then $\mathrm{cov}(X, X^2) = 0$.
A useful fact is that
$$\mathrm{var}(X + Y) = \mathrm{var}(X) + \mathrm{var}(Y) + 2\,\mathrm{cov}(X, Y).$$
An implication is that if $X$ and $Y$ are independent, then
$$\mathrm{var}(X + Y) = \mathrm{var}(X) + \mathrm{var}(Y),$$
the variance of the sum is the sum of the variances.
A $k \times 1$ random vector $X = (X_1, ..., X_k)'$ is a function from $S$ to $\mathbb{R}^k$. Let $x = (x_1, ..., x_k)'$ denote a vector in $\mathbb{R}^k$. (In this Appendix, we use bold to denote vectors. Bold capitals $X$ are random vectors and bold lower case $x$ are nonrandom vectors. Again, this is in distinction to the notation used in the bulk of the text.) The vector $X$ has the distribution and density functions
$$F(x) = \Pr(X \le x)$$
$$f(x) = \frac{\partial^k}{\partial x_1\cdots\partial x_k}F(x).$$
For a measurable function $g : \mathbb{R}^k \to \mathbb{R}^s$, we define the expectation
$$Eg(X) = \int_{\mathbb{R}^k}g(x)f(x)\,dx$$
where the symbol $dx$ denotes $dx_1\cdots dx_k$. In particular, we have the $k \times 1$ multivariate mean
$$\mu = EX$$
and $k \times k$ covariance matrix
$$\Sigma = E\left((X - \mu)(X - \mu)'\right) = EXX' - \mu\mu'$$
If the elements of $X$ are mutually independent, then $\Sigma$ is a diagonal matrix and
$$\mathrm{var}\left(\sum_{i=1}^{k}X_i\right) = \sum_{i=1}^{k}\mathrm{var}(X_i)$$
B.7 Conditional Distributions and Expectation
The conditional density of $Y$ given $X = x$ is defined as
$$f_{Y|X}(y \mid x) = \frac{f(x, y)}{f_X(x)}$$
if $f_X(x) > 0$. One way to derive this expression from the definition of conditional probability is
$$f_{Y|X}(y \mid x) = \frac{\partial}{\partial y}\lim_{\varepsilon \to 0}\Pr(Y \le y \mid x \le X \le x + \varepsilon) = \frac{\partial}{\partial y}\lim_{\varepsilon \to 0}\frac{\Pr(\{Y \le y\} \cap \{x \le X \le x + \varepsilon\})}{\Pr(x \le X \le x + \varepsilon)} = \frac{\partial}{\partial y}\lim_{\varepsilon \to 0}\frac{F(x + \varepsilon, y) - F(x, y)}{F_X(x + \varepsilon) - F_X(x)} = \frac{\partial}{\partial y}\lim_{\varepsilon \to 0}\frac{\frac{\partial}{\partial x}F(x + \varepsilon, y)}{f_X(x + \varepsilon)} = \frac{\frac{\partial^2}{\partial x\,\partial y}F(x, y)}{f_X(x)} = \frac{f(x, y)}{f_X(x)}.$$
The conditional mean or conditional expectation is the function
$$m(x) = E(Y \mid X = x) = \int_{-\infty}^{\infty}yf_{Y|X}(y \mid x)\,dy.$$
The conditional mean $m(x)$ is a function, meaning that when $X$ equals $x$, then the expected value of $Y$ is $m(x)$.
Similarly, we define the conditional variance of $Y$ given $X = x$ as
$$\sigma^2(x) = \mathrm{var}(Y \mid X = x) = E\left((Y - m(x))^2 \mid X = x\right) = E\left(Y^2 \mid X = x\right) - m(x)^2.$$
Evaluated at $x = X$, the conditional mean $m(X)$ and conditional variance $\sigma^2(X)$ are random variables, functions of $X$. We write this as $E(Y \mid X) = m(X)$ and $\mathrm{var}(Y \mid X) = \sigma^2(X)$. For example, if $E(Y \mid X = x) = \alpha + \beta'x$, then $E(Y \mid X) = \alpha + \beta'X$, a transformation of $X$.
The following are important facts about conditional expectations.
Simple Law of Iterated Expectations:
$$E(E(Y \mid X)) = E(Y) \qquad (B.8)$$
Proof:
$$E(E(Y \mid X)) = E(m(X)) = \int_{-\infty}^{\infty}m(x)f_X(x)\,dx = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}yf_{Y|X}(y \mid x)f_X(x)\,dy\,dx = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}yf(y, x)\,dy\,dx = E(Y).$$
Law of Iterated Expectations:
$$E(E(Y \mid X, Z) \mid X) = E(Y \mid X) \qquad (B.9)$$
Conditioning Theorem. For any function $g(x)$,
$$E(g(X)Y \mid X) = g(X)E(Y \mid X) \qquad (B.10)$$
Proof: Let
$$h(x) = E(g(X)Y \mid X = x) = \int_{-\infty}^{\infty}g(x)yf_{Y|X}(y \mid x)\,dy = g(x)\int_{-\infty}^{\infty}yf_{Y|X}(y \mid x)\,dy = g(x)m(x)$$
where $m(x) = E(Y \mid X = x)$. Thus $h(X) = g(X)m(X)$, which is the same as $E(g(X)Y \mid X) = g(X)E(Y \mid X)$.
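The simple law of iterated expectations is easy to see in simulation. The sketch below (not from the text) uses a model where the conditional mean is known, $Y = X^2 + e$ with $e$ independent of $X$, so $m(X) = X^2$.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
X = rng.normal(size=n)
Y = X**2 + rng.normal(size=n)

# E(m(X)) and E(Y): the two sample averages should be close
print(np.mean(X**2), np.mean(Y))
```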
B.8 Transformations
Suppose that $X \in \mathbb{R}^k$ with continuous distribution function $F_X(x)$ and density $f_X(x)$. Let $Y = g(X)$ where $g(x) : \mathbb{R}^k \to \mathbb{R}^k$ is one-to-one, differentiable, and invertible. Let $h(y)$ denote the inverse of $g(x)$. The Jacobian is
$$J(y) = \det\left(\frac{\partial}{\partial y'}h(y)\right).$$
Consider the univariate case $k = 1$. If $g(x)$ is an increasing function, then $g(X) \le y$ if and only if $X \le h(y)$, so the distribution function of $Y$ is
$$F_Y(y) = \Pr(g(X) \le y) = \Pr(X \le h(y)) = F_X(h(y)).$$
Taking the derivative, the density of $Y$ is
$$f_Y(y) = \frac{d}{dy}F_Y(y) = f_X(h(y))\frac{d}{dy}h(y).$$
If $g(x)$ is a decreasing function, then $g(X) \le y$ if and only if $X \ge h(y)$, so
$$F_Y(y) = \Pr(g(X) \le y) = 1 - F_X(h(y))$$
and the density of $Y$ is
$$f_Y(y) = -f_X(h(y))\frac{d}{dy}h(y).$$
We can write these two cases jointly as
$$f_Y(y) = f_X(h(y))\left|J(y)\right|. \qquad (B.11)$$
This is known as the change-of-variables formula. This same formula (B.11) holds for $k > 1$, but its justification requires deeper results from analysis.
As one example, take the case $X \sim U[0, 1]$ and $Y = -\log(X)$. Here, $g(x) = -\log(x)$ and $h(y) = \exp(-y)$ so the Jacobian is $J(y) = -\exp(-y)$. As the range of $X$ is $[0, 1]$, that for $Y$ is $[0, \infty)$. Since $f_X(x) = 1$ for $0 \le x \le 1$, (B.11) shows that
$$f_Y(y) = \exp(-y), \qquad 0 \le y < \infty,$$
an exponential density.
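A quick simulation check (illustrative, not from the text) of this example: if $X \sim U[0,1]$ then $Y = -\log(X)$ should behave like a unit exponential, whose mean and variance are both 1.

```python
import numpy as np

rng = np.random.default_rng(8)
Y = -np.log(1.0 - rng.random(1_000_000))   # uniform draws mapped by g(x) = -log(x)
print(Y.mean(), Y.var())                   # both should be close to 1
```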
B.9 Normal and Related Distributions
The standard normal density is
$$\phi(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right), \qquad -\infty < x < \infty.$$
It is conventional to write $X \sim \mathrm{N}(0, 1)$, and to denote the standard normal density function by $\phi(x)$ and its distribution function by $\Phi(x)$. The latter has no closed-form solution. The normal density has all moments finite. Since it is symmetric about zero all odd moments are zero. By iterated integration by parts, we can also show that $EX^2 = 1$ and $EX^4 = 3$. In fact, for any positive integer $m$, $EX^{2m} = (2m - 1)!! = (2m - 1)\cdot(2m - 3)\cdots 1$. Thus $EX^4 = 3$, $EX^6 = 15$, $EX^8 = 105$, and $EX^{10} = 945$.
If $Z$ is standard normal and $X = \mu + \sigma Z$, then using the change-of-variables formula, $X$ has density
$$f(x) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty,$$
which is the univariate normal density. The mean and variance of the distribution are $\mu$ and $\sigma^2$, and it is conventional to write $X \sim \mathrm{N}\left(\mu, \sigma^2\right)$.
For $x \in \mathbb{R}^k$, the multivariate normal density is
$$f(x) = \frac{1}{(2\pi)^{k/2}\det(\Sigma)^{1/2}}\exp\left(-\frac{(x - \mu)'\Sigma^{-1}(x - \mu)}{2}\right), \qquad x \in \mathbb{R}^k.$$
The mean and covariance matrix of the distribution are $\mu$ and $\Sigma$, and it is conventional to write $X \sim \mathrm{N}(\mu, \Sigma)$.
The MGF and CF of the multivariate normal are $\exp\left(\lambda'\mu + \lambda'\Sigma\lambda/2\right)$ and $\exp\left(i\lambda'\mu - \lambda'\Sigma\lambda/2\right)$, respectively.
If $X \in \mathbb{R}^k$ is multivariate normal and the elements of $X$ are mutually uncorrelated, then $\Sigma = \mathrm{diag}\{\sigma_j^2\}$ is a diagonal matrix. In this case the density function can be written as
$$f(x) = \frac{1}{(2\pi)^{k/2}\sigma_1\cdots\sigma_k}\exp\left(-\frac{(x_1 - \mu_1)^2/\sigma_1^2 + \cdots + (x_k - \mu_k)^2/\sigma_k^2}{2}\right) = \prod_{j=1}^{k}\frac{1}{(2\pi)^{1/2}\sigma_j}\exp\left(-\frac{(x_j - \mu_j)^2}{2\sigma_j^2}\right)$$
which is the product of marginal univariate normal densities. This shows that if $X$ is multivariate normal with uncorrelated elements, then they are mutually independent.
Theorem B.9.1 If $X \sim \mathrm{N}(\mu, \Sigma)$ and $Y = a + BX$ with $B$ an invertible matrix, then $Y \sim \mathrm{N}\left(a + B\mu, B\Sigma B'\right)$.
Theorem B.9.2 Let $X \sim \mathrm{N}(0, I_r)$. Then $Q = X'X$ is distributed chi-square with $r$ degrees of freedom, written $\chi^2_r$.
Theorem B.9.3 If $Z \sim \mathrm{N}(0, A)$ with $A > 0$, $q \times q$, then $Z'A^{-1}Z \sim \chi^2_q$.
Theorem B.9.4 Let $Z \sim \mathrm{N}(0, 1)$ and $Q \sim \chi^2_r$ be independent. Then $T_r = Z/\sqrt{Q/r}$ is distributed as student's t with $r$ degrees of freedom.
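A simulation sketch (illustrative, not from the text) consistent with Theorem B.9.2: if $X \sim \mathrm{N}(0, I_r)$ then $Q = X'X$ is $\chi^2_r$, so its mean and variance should be close to $r$ and $2r$.

```python
import numpy as np

rng = np.random.default_rng(9)
r, reps = 5, 500_000
Q = (rng.normal(size=(reps, r)) ** 2).sum(axis=1)   # sum of r squared standard normals
print(Q.mean(), Q.var())                            # should be close to r = 5 and 2r = 10
```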
Proof of Theorem B.9.1. By the change-of-variables formula, the density of $Y = a + BX$ is
$$f(y) = \frac{1}{(2\pi)^{k/2}\det(\Sigma_Y)^{1/2}}\exp\left(-\frac{(y - \mu_Y)'\Sigma_Y^{-1}(y - \mu_Y)}{2}\right), \qquad y \in \mathbb{R}^k,$$
where $\mu_Y = a + B\mu$ and $\Sigma_Y = B\Sigma B'$, where we used the fact that $\det\left(B\Sigma B'\right)^{1/2} = \det(\Sigma)^{1/2}\det(B)$.
Proof of Theorem B.9.2. First, suppose a random variable $Q$ is distributed chi-square with $r$ degrees of freedom. It has the MGF
$$E\exp(tQ) = \int_{0}^{\infty}\frac{1}{\Gamma\left(\frac{r}{2}\right)2^{r/2}}x^{r/2-1}\exp(tx)\exp(-x/2)\,dx = (1 - 2t)^{-r/2}$$
where the second equality uses the fact that $\int_{0}^{\infty}y^{a-1}\exp(-by)\,dy = b^{-a}\Gamma(a)$, which can be found by applying change-of-variables to the gamma function. Our goal is to calculate the MGF of $Q = X'X$ and show that it equals $(1 - 2t)^{-r/2}$, which will establish that $Q \sim \chi^2_r$.
Note that we can write $Q = X'X = \sum_{j=1}^{r}Z_j^2$ where the $Z_j$ are independent $\mathrm{N}(0, 1)$. The distribution of each of the $Z_j^2$ is
$$\Pr\left(Z_j^2 \le y\right) = 2\Pr\left(0 \le Z_j \le \sqrt{y}\right) = 2\int_{0}^{\sqrt{y}}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)dx = \int_{0}^{y}\frac{1}{\Gamma\left(\frac{1}{2}\right)2^{1/2}}s^{-1/2}\exp\left(-\frac{s}{2}\right)ds$$
using the change-of-variables $s = x^2$ and the fact $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$. Thus the density of $Z_j^2$ is
$$f_1(x) = \frac{1}{\Gamma\left(\frac{1}{2}\right)2^{1/2}}x^{-1/2}\exp\left(-\frac{x}{2}\right)$$
which is the $\chi^2_1$ density, and by our above calculation has the MGF $E\exp\left(tZ_j^2\right) = (1 - 2t)^{-1/2}$.
Since the $Z_j^2$ are mutually independent, (B.6) implies that the MGF of $Q = \sum_{j=1}^{r}Z_j^2$ is $\left((1 - 2t)^{-1/2}\right)^r = (1 - 2t)^{-r/2}$, which is the MGF of the $\chi^2_r$ density as desired.
Proof of Theorem B.9.3. The fact that $A > 0$ means that we can write $A = CC'$ where $C$ is non-singular. Then $A^{-1} = C^{-1\prime}C^{-1}$ and
$$C^{-1}Z \sim \mathrm{N}\left(0, C^{-1}AC^{-1\prime}\right) = \mathrm{N}\left(0, C^{-1}CC'C^{-1\prime}\right) = \mathrm{N}(0, I_q).$$
Thus
$$Z'A^{-1}Z = Z'C^{-1\prime}C^{-1}Z = \left(C^{-1}Z\right)'\left(C^{-1}Z\right) \sim \chi^2_q.$$
Proof of Theorem B.9.4. Using the simple law of iterated expectations, $T_r$ has distribution function
$$F(x) = \Pr\left(\frac{Z}{\sqrt{Q/r}} \le x\right) = E\left(1\left(Z \le x\sqrt{\frac{Q}{r}}\right)\right) = E\left(\Pr\left(Z \le x\sqrt{\frac{Q}{r}}\;\Big|\;Q\right)\right) = E\,\Phi\left(x\sqrt{\frac{Q}{r}}\right)$$
Thus its density is
$$f(x) = E\left(\frac{d}{dx}\Phi\left(x\sqrt{\frac{Q}{r}}\right)\right) = E\left(\phi\left(x\sqrt{\frac{Q}{r}}\right)\sqrt{\frac{Q}{r}}\right) = \int_{0}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{qx^2}{2r}\right)\sqrt{\frac{q}{r}}\;\frac{1}{\Gamma\left(\frac{r}{2}\right)2^{r/2}}q^{r/2-1}\exp(-q/2)\,dq = \frac{\Gamma\left(\frac{r+1}{2}\right)}{\sqrt{r\pi}\,\Gamma\left(\frac{r}{2}\right)}\left(1 + \frac{x^2}{r}\right)^{-\left(\frac{r+1}{2}\right)}$$
which is that of the student t with $r$ degrees of freedom.
B.10 Inequalities
Jensen's Inequality (finite form). If $g(\cdot) : \mathbb{R} \to \mathbb{R}$ is convex, then for any non-negative weights $a_j$ such that $\sum_{j=1}^{m}a_j = 1$, and any real numbers $x_j$,
$$g\left(\sum_{j=1}^{m}a_jx_j\right) \le \sum_{j=1}^{m}a_jg(x_j). \qquad (B.12)$$
In particular, setting $a_j = 1/m$, then
$$g\left(\frac{1}{m}\sum_{j=1}^{m}x_j\right) \le \frac{1}{m}\sum_{j=1}^{m}g(x_j). \qquad (B.13)$$
Loève's $c_r$ Inequality. For $r > 0$,
$$\left|\sum_{j=1}^{m}a_j\right|^r \le c_r\sum_{j=1}^{m}\left|a_j\right|^r \qquad (B.14)$$
where $c_r = 1$ when $r \le 1$ and $c_r = m^{r-1}$ when $r \ge 1$.
Jensen's Inequality (probabilistic form). If $g(\cdot) : \mathbb{R}^m \to \mathbb{R}$ is convex, then for any random vector $x$ for which $E\|x\| < \infty$ and $E|g(x)| < \infty$,
$$g(E(x)) \le E(g(x)). \qquad (B.15)$$
Conditional Jensen's Inequality. If $g(\cdot) : \mathbb{R}^m \to \mathbb{R}$ is convex, then for any random vectors $(y, x)$ for which $E\|y\| < \infty$ and $E|g(y)| < \infty$,
$$g(E(y \mid x)) \le E(g(y) \mid x). \qquad (B.16)$$
Conditional Expectation Inequality. For any $r \ge 1$ such that $E|y|^r < \infty$, then
$$E\left|E(y \mid x)\right|^r \le E|y|^r < \infty. \qquad (B.17)$$
Expectation Inequality. For any random matrix $Y$ for which $E\|Y\| < \infty$,
$$\|E(Y)\| \le E\|Y\|. \qquad (B.18)$$
Hölder's Inequality. If $p > 1$ and $q > 1$ and $\frac{1}{p} + \frac{1}{q} = 1$, then for any random $m \times n$ matrices $X$ and $Y$,
$$E\left\|X'Y\right\| \le \left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}. \qquad (B.19)$$
Cauchy-Schwarz Inequality. For any random $m \times n$ matrices $X$ and $Y$,
$$E\left\|X'Y\right\| \le \left(E\|X\|^2\right)^{1/2}\left(E\|Y\|^2\right)^{1/2}. \qquad (B.20)$$
Matrix Cauchy-Schwarz Inequality. Tripathi (1999). For any random $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^{\ell}$,
$$Eyx'\left(Exx'\right)^{-}Exy' \le Eyy' \qquad (B.21)$$
Minkowski's Inequality. For any random $m \times n$ matrices $X$ and $Y$,
$$\left(E\|X + Y\|^p\right)^{1/p} \le \left(E\|X\|^p\right)^{1/p} + \left(E\|Y\|^p\right)^{1/p} \qquad (B.22)$$
Liapunov's Inequality. For any random $m \times n$ matrix $X$ and $1 \le r \le p$,
$$\left(E\|X\|^r\right)^{1/r} \le \left(E\|X\|^p\right)^{1/p} \qquad (B.23)$$
Markov's Inequality (standard form). For any random vector $x$ and non-negative function $g(x) \ge 0$,
$$\Pr(g(x) > \alpha) \le \alpha^{-1}Eg(x). \qquad (B.24)$$
Markov's Inequality (strong form). For any random vector $x$ and non-negative function $g(x) \ge 0$,
$$\Pr(g(x) > \alpha) \le \alpha^{-1}E\left(g(x)\,1(g(x) > \alpha)\right). \qquad (B.25)$$
Chebyshev's Inequality. For any random variable $x$,
$$\Pr(|x - Ex| > \alpha) \le \frac{\mathrm{var}(x)}{\alpha^2}. \qquad (B.26)$$
Proof of Jensen's Inequality (B.12). By the definition of convexity, for any $\lambda \in [0, 1]$
$$g(\lambda x_1 + (1 - \lambda)x_2) \le \lambda g(x_1) + (1 - \lambda)g(x_2). \qquad (B.27)$$
This implies
$$g\left(\sum_{j=1}^{m}a_jx_j\right) = g\left(a_1x_1 + (1 - a_1)\sum_{j=2}^{m}\frac{a_j}{1 - a_1}x_j\right) \le a_1g(x_1) + (1 - a_1)g\left(\sum_{j=2}^{m}b_jx_j\right),$$
where $b_j = a_j/(1 - a_1)$ and $\sum_{j=2}^{m}b_j = 1$. By another application of (B.27) this is bounded by
$$a_1g(x_1) + (1 - a_1)\left(b_2g(x_2) + (1 - b_2)g\left(\sum_{j=3}^{m}c_jx_j\right)\right) = a_1g(x_1) + a_2g(x_2) + (1 - a_1)(1 - b_2)g\left(\sum_{j=3}^{m}c_jx_j\right)$$
where $c_j = b_j/(1 - b_2)$. By repeated application of (B.27) we obtain (B.12).
Proof of Loève's $c_r$ Inequality. For $r \ge 1$ this is simply a rewriting of the finite form Jensen's inequality (B.13) with $g(u) = u^r$. For $r < 1$, define $b_j = |a_j|/\sum_{j=1}^{m}|a_j|$. The facts that $0 \le b_j \le 1$ and $r < 1$ imply $b_j \le b_j^r$ and thus
$$1 = \sum_{j=1}^{m}b_j \le \sum_{j=1}^{m}b_j^r$$
which implies
$$\left(\sum_{j=1}^{m}|a_j|\right)^r \le \sum_{j=1}^{m}|a_j|^r.$$
The proof is completed by observing that
$$\left|\sum_{j=1}^{m}a_j\right|^r \le \left(\sum_{j=1}^{m}|a_j|\right)^r.$$
Proof of Jensen's Inequality (B.15). Since $g(u)$ is convex, at any point $u$ there is a nonempty set of subderivatives (linear surfaces touching $g(u)$ at $u$ but lying below $g(u)$ for all $u$). Let $a + b'u$ be a subderivative of $g(u)$ at $u = Ex$. Then for all $u$, $g(u) \ge a + b'u$ yet $g(Ex) = a + b'Ex$. Applying expectations, $Eg(x) \ge a + b'Ex = g(Ex)$, as stated.
Proof of Conditional Jensen's Inequality. The same as the proof of (B.15), but using conditional expectations. The conditional expectations exist since $E\|y\| < \infty$ and $E|g(y)| < \infty$.
Proof of Conditional Expectation Inequality. As the function $|u|^r$ is convex for $r \ge 1$, the Conditional Jensen's inequality implies
$$\left|E(y \mid x)\right|^r \le E\left(|y|^r \mid x\right).$$
Taking unconditional expectations and the law of iterated expectations, we obtain
$$E\left|E(y \mid x)\right|^r \le E\,E\left(|y|^r \mid x\right) = E|y|^r < \infty$$
as required.
Proof of Expectation Inequality. By the Triangle inequality, for $\lambda \in [0, 1]$,
$$\|\lambda U_1 + (1 - \lambda)U_2\| \le \lambda\|U_1\| + (1 - \lambda)\|U_2\|$$
which shows that the matrix norm $g(U) = \|U\|$ is convex. Applying Jensen's Inequality (B.15) we find (B.18).
Proof of Hölder's Inequality. Since $\frac{1}{p} + \frac{1}{q} = 1$ an application of Jensen's Inequality (B.12) shows that for any real $a$ and $b$
$$\exp\left(\frac{1}{p}a + \frac{1}{q}b\right) \le \frac{1}{p}\exp(a) + \frac{1}{q}\exp(b).$$
Setting $u = \exp(a)$ and $v = \exp(b)$ this implies
$$u^{1/p}v^{1/q} \le \frac{u}{p} + \frac{v}{q}$$
and this inequality holds for any $u > 0$ and $v > 0$.
Set $u = \|X\|^p/E\|X\|^p$ and $v = \|Y\|^q/E\|Y\|^q$. Note that $Eu = Ev = 1$. By the matrix Schwarz Inequality (A.8), $\|X'Y\| \le \|X\|\,\|Y\|$. Thus
$$\frac{E\|X'Y\|}{\left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}} \le \frac{E\left(\|X\|\,\|Y\|\right)}{\left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}} = E\left(u^{1/p}v^{1/q}\right) \le E\left(\frac{u}{p} + \frac{v}{q}\right) = \frac{1}{p} + \frac{1}{q} = 1,$$
which is (B.19).
Proof of Cauchy-Schwarz Inequality. Special case of Hölder's with $p = q = 2$.
Proof of Matrix Cauchy-Schwarz Inequality. Define $e = y - \left(Eyx'\right)\left(Exx'\right)^{-}x$. Note that $Eee' \ge 0$ is positive semi-definite. We can calculate that
$$Eee' = Eyy' - \left(Eyx'\right)\left(Exx'\right)^{-}Exy'.$$
Since the left-hand-side is positive semi-definite, so is the right-hand-side, which means $Eyy' \ge \left(Eyx'\right)\left(Exx'\right)^{-}Exy'$ as stated.
Proof of Liapunov's Inequality. The function $g(u) = u^{p/r}$ is convex for $u > 0$ since $p \ge r$. Set $u = \|X\|^r$. By Jensen's inequality, $g(Eu) \le Eg(u)$ or
$$\left(E\|X\|^r\right)^{p/r} \le E\left(\|X\|^r\right)^{p/r} = E\|X\|^p.$$
Raising both sides to the power $1/p$ yields $\left(E\|X\|^r\right)^{1/r} \le \left(E\|X\|^p\right)^{1/p}$ as claimed.
Proof of Minkowski's Inequality. Note that by rewriting, using the triangle inequality (A.9), and then Hölder's Inequality to the two expectations
$$E\|X + Y\|^p = E\left(\|X + Y\|\,\|X + Y\|^{p-1}\right) \le E\left(\|X\|\,\|X + Y\|^{p-1}\right) + E\left(\|Y\|\,\|X + Y\|^{p-1}\right) \le \left(E\|X\|^p\right)^{1/p}\left(E\|X + Y\|^{q(p-1)}\right)^{1/q} + \left(E\|Y\|^p\right)^{1/p}\left(E\|X + Y\|^{q(p-1)}\right)^{1/q} = \left(\left(E\|X\|^p\right)^{1/p} + \left(E\|Y\|^p\right)^{1/p}\right)E\left(\|X + Y\|^p\right)^{(p-1)/p}$$
where the second equality picks $q$ to satisfy $1/p + 1/q = 1$, and the final equality uses this fact to make the substitution $q = p/(p - 1)$ and then collects terms. Dividing both sides by $E\left(\|X + Y\|^p\right)^{(p-1)/p}$, we obtain (B.22).
Proof of Markov's Inequality. Let $F$ denote the distribution function of $x$. Then
$$\Pr(g(x) > \alpha) = \int_{\{g(u) > \alpha\}}dF(u) \le \int_{\{g(u) > \alpha\}}\frac{g(u)}{\alpha}\,dF(u) = \alpha^{-1}\int 1(g(u) > \alpha)g(u)\,dF(u) = \alpha^{-1}E\left(g(x)\,1(g(x) > \alpha)\right)$$
the inequality using the region of integration $\{g(u) > \alpha\}$. This establishes the strong form (B.25). Since $1(g(x) > \alpha) \le 1$, the final expression is less than $\alpha^{-1}E(g(x))$, establishing the standard form (B.24).
Proof of Chebyshev's Inequality. Define $y = (x - Ex)^2$ and note that $Ey = \mathrm{var}(x)$. The events $\{|x - Ex| > \alpha\}$ and $\{y > \alpha^2\}$ are equal, so by an application of Markov's inequality we find
$$\Pr(|x - Ex| > \alpha) = \Pr(y > \alpha^2) \le \alpha^{-2}E(y) = \alpha^{-2}\mathrm{var}(x)$$
as stated.
B.11 Maximum Likelihood
In this section we provide a brief review of the asymptotic theory of maximum likelihood estimation.
When the density of $y_i$ is $f(y \mid \theta)$ where $F$ is a known distribution function and $\theta \in \Theta$ is an unknown $m \times 1$ vector, we say that the distribution is parametric and that $\theta$ is the parameter of the distribution $F$. The space $\Theta$ is the set of permissible values for $\theta$. In this setting the method of maximum likelihood is an appropriate technique for estimation and inference on $\theta$. We let $\theta$ denote a generic value of the parameter and let $\theta_0$ denote its true value.
The joint density of a random sample $(y_1, ..., y_n)$ is
$$f_n(y_1, ..., y_n \mid \theta) = \prod_{i=1}^{n}f(y_i \mid \theta).$$
The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of $\theta$. The log-likelihood function is its natural logarithm
$$\log L(\theta) = \sum_{i=1}^{n}\log f(y_i \mid \theta).$$
The likelihood score is the derivative of the log-likelihood, evaluated at the true parameter value:
$$S_i = \frac{\partial}{\partial\theta}\log f(y_i \mid \theta_0).$$
We also define the Hessian
$$H = -E\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i \mid \theta_0) \qquad (B.28)$$
and the outer product matrix
$$\Omega = E\left(S_iS_i'\right). \qquad (B.29)$$
We now present three important features of the likelihood.
Theorem B.11.1
$$\left.\frac{\partial}{\partial\theta}E\log f(y \mid \theta)\right|_{\theta = \theta_0} = 0 \qquad (B.30)$$
$$ES_i = 0 \qquad (B.31)$$
and
$$H = \Omega \equiv I \qquad (B.32)$$
The matrix $I$ is called the information, and the equality (B.32) is called the information matrix equality.
The maximum likelihood estimator (MLE) $\hat\theta$ is the parameter value which maximizes the likelihood (equivalently, which maximizes the log-likelihood). We can write this as
$$\hat\theta = \operatorname*{argmax}_{\theta \in \Theta}\log L(\theta). \qquad (B.33)$$
In some simple cases, we can find an explicit expression for $\hat\theta$ as a function of the data, but these cases are rare. More typically, the MLE $\hat\theta$ must be found by numerical methods.
To understand why the MLE $\hat\theta$ is a natural estimator for the parameter $\theta$ observe that the standardized log-likelihood is a sample average and an estimator of $E\log f(y_i \mid \theta)$:
$$\frac{1}{n}\log L(\theta) = \frac{1}{n}\sum_{i=1}^{n}\log f(y_i \mid \theta) \overset{p}{\longrightarrow} E\log f(y_i \mid \theta).$$
As the MLE $\hat\theta$ maximizes the left-hand-side, we can see that it is an estimator of the maximizer of the right-hand-side. The first-order condition for the latter problem is
$$0 = \frac{\partial}{\partial\theta}E\log f(y_i \mid \theta)$$
which holds at $\theta = \theta_0$ by (B.30). This suggests that $\hat\theta$ is an estimator of $\theta_0$. In fact, under conventional regularity conditions, $\hat\theta$ is consistent, $\hat\theta \overset{p}{\longrightarrow} \theta_0$ as $n \to \infty$. Furthermore, we can derive its asymptotic distribution.
Theorem B.11.2 Under regularity conditions,
$$\sqrt{n}\left(\hat\theta - \theta_0\right) \overset{d}{\longrightarrow} \mathrm{N}\left(0, I^{-1}\right).$$
We omit the regularity conditions for Theorem B.11.2, but the result holds quite broadly for models which are smooth functions of the parameters. Theorem B.11.2 gives the general form for the asymptotic distribution of the MLE. A famous result shows that the asymptotic variance is the smallest possible.
Theorem B.11.3 Cramer-Rao Lower Bound. If $\tilde\theta$ is an unbiased regular estimator of $\theta$, then $\mathrm{var}(\tilde\theta) \ge (nI)^{-1}$.
The Cramer-Rao Theorem shows that the finite sample variance of an unbiased estimator is bounded below by $(nI)^{-1}$. This means that the asymptotic variance of the standardized estimator $\sqrt{n}\left(\tilde\theta - \theta_0\right)$ is bounded below by $I^{-1}$. In other words, the best possible asymptotic variance among all (regular) estimators is $I^{-1}$. An estimator is called asymptotically efficient if its asymptotic variance equals this lower bound. Theorem B.11.2 shows that the MLE has this asymptotic variance, and is thus asymptotically efficient.
Theorem B.11.4 The MLE is asymptotically efficient in the sense that its asymptotic variance equals the Cramer-Rao Lower Bound.
Theorem B.11.4 gives a strong endorsement for the MLE in parametric models.
Finally, consider functions of parameters. If $\psi = g(\theta)$ then the MLE of $\psi$ is $\hat\psi = g(\hat\theta)$. This is because maximization (e.g. (B.33)) is unaffected by parameterization and transformation. Applying the Delta Method to Theorem B.11.2 we conclude that
$$\sqrt{n}\left(\hat\psi - \psi\right) \simeq G'\sqrt{n}\left(\hat\theta - \theta\right) \overset{d}{\longrightarrow} \mathrm{N}\left(0, G'I^{-1}G\right) \qquad (B.34)$$
where $G = \frac{\partial}{\partial\theta}g(\theta_0)$. By Theorem B.11.4, $\hat\psi$ is an asymptotically efficient estimator for $\psi$. The asymptotic variance $G'I^{-1}G$ is the Cramer-Rao lower bound for estimation of $\psi$.
Theorem B.11.5 The Cramer-Rao lower bound for $\psi = g(\theta)$ is $G'I^{-1}G$, and the MLE $\hat\psi = g(\hat\theta)$ is asymptotically efficient.
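Before turning to the proofs, the asymptotic distribution of Theorem B.11.2 can be illustrated by simulation. The sketch below (not from the text) uses an exponential density $f(y \mid \theta) = \theta^{-1}\exp(-y/\theta)$, for which the MLE is the sample mean and the information is $I = 1/\theta^2$, so $\sqrt{n}(\hat\theta - \theta_0)$ should be approximately N$(0, \theta_0^2)$.

```python
import numpy as np

rng = np.random.default_rng(11)
theta0, n, reps = 2.0, 200, 20_000
draws = rng.exponential(scale=theta0, size=(reps, n))
theta_hat = draws.mean(axis=1)                  # MLE for each simulated sample
z = np.sqrt(n) * (theta_hat - theta0)
print(z.mean(), z.var(), theta0**2)             # variance should be close to theta0^2 = 4
```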
Proof of Theorem B.11.1. To see (B.30),
$$\left.\frac{\partial}{\partial\theta}E\log f(y \mid \theta)\right|_{\theta = \theta_0} = \left.\frac{\partial}{\partial\theta}\int\log f(y \mid \theta)\,f(y \mid \theta_0)\,dy\right|_{\theta = \theta_0} = \left.\int\frac{\partial}{\partial\theta}f(y \mid \theta)\frac{f(y \mid \theta_0)}{f(y \mid \theta)}\,dy\right|_{\theta = \theta_0} = \left.\frac{\partial}{\partial\theta}\int f(y \mid \theta)\,dy\right|_{\theta = \theta_0} = \frac{\partial}{\partial\theta}1 = 0.$$
Equation (B.31) follows by exchanging integration and differentiation
$$E\frac{\partial}{\partial\theta}\log f(y \mid \theta_0) = \frac{\partial}{\partial\theta}E\log f(y \mid \theta_0) = 0.$$
Similarly, we can show that
$$E\left(\frac{\frac{\partial^2}{\partial\theta\,\partial\theta'}f(y \mid \theta_0)}{f(y \mid \theta_0)}\right) = 0.$$
By direct computation,
$$\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y \mid \theta_0) = \frac{\frac{\partial^2}{\partial\theta\,\partial\theta'}f(y \mid \theta_0)}{f(y \mid \theta_0)} - \frac{\frac{\partial}{\partial\theta}f(y \mid \theta_0)\frac{\partial}{\partial\theta}f(y \mid \theta_0)'}{f(y \mid \theta_0)^2} = \frac{\frac{\partial^2}{\partial\theta\,\partial\theta'}f(y \mid \theta_0)}{f(y \mid \theta_0)} - \frac{\partial}{\partial\theta}\log f(y \mid \theta_0)\frac{\partial}{\partial\theta}\log f(y \mid \theta_0)'.$$
Taking expectations yields (B.32).
Proof of Theorem B.11.2 Taking the first-order condition for maximization of $\log L(\theta)$, and making a first-order Taylor series expansion,
$$0 = \left.\frac{\partial}{\partial\theta}\log L(\theta)\right|_{\theta = \hat\theta} = \sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f\left(y_i \mid \hat\theta\right) = \sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f(y_i \mid \theta_0) + \sum_{i=1}^{n}\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i \mid \theta_n)\left(\hat\theta - \theta_0\right),$$
where $\theta_n$ lies on a line segment joining $\hat\theta$ and $\theta_0$. (Technically, the specific value of $\theta_n$ varies by row in this expansion.) Rewriting this equation, we find
$$\left(\hat\theta - \theta_0\right) = \left(-\sum_{i=1}^{n}\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i \mid \theta_n)\right)^{-1}\sum_{i=1}^{n}S_i$$
where $S_i$ are the likelihood scores. Since the score $S_i$ is mean-zero (B.31) with covariance matrix $\Omega$ (equation B.29) an application of the CLT yields
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}S_i \overset{d}{\longrightarrow} \mathrm{N}(0, \Omega).$$
The analysis of the sample Hessian is somewhat more complicated due to the presence of $\theta_n$. Let $H(\theta) = -E\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i, \theta)$. If it is continuous in $\theta$, then since $\theta_n \overset{p}{\longrightarrow} \theta_0$ it follows that $H(\theta_n) \overset{p}{\longrightarrow} H$ and so
$$-\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i, \theta_n) = \frac{1}{n}\sum_{i=1}^{n}\left(-\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i, \theta_n) - H(\theta_n)\right) + H(\theta_n) \overset{p}{\longrightarrow} H$$
by an application of a uniform WLLN. (By uniform, we mean that the WLLN holds uniformly over the parameter value. This requires the second derivative to be a smooth function of the parameter.) Together,
$$\sqrt{n}\left(\hat\theta - \theta_0\right) \overset{d}{\longrightarrow} H^{-1}\mathrm{N}(0, \Omega) = \mathrm{N}\left(0, H^{-1}\Omega H^{-1}\right) = \mathrm{N}\left(0, I^{-1}\right),$$
the final equality using Theorem B.11.1.
Proof of Theorem B.11.3. Let $Y = (y_1, ..., y_n)$ be the sample, and set
$$S = \frac{\partial}{\partial\theta}\log f_n(Y, \theta_0) = \sum_{i=1}^{n}S_i$$
which by Theorem B.11.1 has mean zero and variance $nI$. Write the estimator $\tilde\theta = \tilde\theta(Y)$ as a function of the data. Since $\tilde\theta$ is unbiased for any $\theta$,
$$\theta = E\tilde\theta = \int\tilde\theta(Y)f(Y, \theta)\,dY.$$
Differentiating with respect to $\theta$ and evaluating at $\theta_0$ yields
$$I_m = \int\tilde\theta(Y)\frac{\partial}{\partial\theta'}f(Y, \theta)\,dY = \int\tilde\theta(Y)\frac{\partial}{\partial\theta'}\log f(Y, \theta)\,f(Y, \theta_0)\,dY = E\left(\tilde\theta S'\right) = E\left(\left(\tilde\theta - \theta_0\right)S'\right)$$
the final equality since $E(S) = 0$.
By the matrix Cauchy-Schwarz inequality (B.21), $E\left(\left(\tilde\theta - \theta_0\right)S'\right) = I_m$, and $\mathrm{var}(S) = E(SS') = nI$,
$$\mathrm{var}\left(\tilde\theta\right) = E\left(\left(\tilde\theta - \theta_0\right)\left(\tilde\theta - \theta_0\right)'\right) \ge E\left(\left(\tilde\theta - \theta_0\right)S'\right)\left(E\left(SS'\right)\right)^{-}E\left(S\left(\tilde\theta - \theta_0\right)'\right) = \left(E\left(SS'\right)\right)^{-} = (nI)^{-}$$
as stated.
Appendix C
Numerical Optimization
Many econometric estimators are defined by an optimization problem of the form
$$\hat\theta = \operatorname*{argmin}_{\theta \in \Theta}Q(\theta) \qquad (C.1)$$
where the parameter is $\theta \in \Theta \subset \mathbb{R}^m$ and the criterion function is $Q(\theta) : \Theta \to \mathbb{R}$. For example NLLS, GLS, MLE and GMM estimators take this form. In most cases, $Q(\theta)$ can be computed for given $\theta$, but $\hat\theta$ is not available in closed form. In this case, numerical methods are required to obtain $\hat\theta$.
C.1 Grid Search
Many optimization problems are either one dimensional ($m = 1$) or involve one-dimensional optimization as a sub-problem (for example, a line search). In this context grid search may be employed.
Grid Search. Let $\Theta = [a, b]$ be an interval. Pick some $\varepsilon > 0$ and set $G = (b - a)/\varepsilon$ to be the number of gridpoints. Construct an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b - a)/G : j = 0, ..., G\}$. At each point evaluate the criterion function and find the gridpoint which yields the smallest value of the criterion, which is $\theta(\hat\jmath)$ where $\hat\jmath = \operatorname*{argmin}_{0 \le j \le G}Q(\theta(j))$. This value $\theta(\hat\jmath)$ is the gridpoint estimate of $\hat\theta$. If the grid is sufficiently fine to capture small oscillations in $Q(\theta)$, the approximation error is bounded by $\varepsilon$, that is, $\left|\theta(\hat\jmath) - \hat\theta\right| \le \varepsilon$. Plots of $Q(\theta(j))$ against $\theta(j)$ can help diagnose errors in grid selection. This method is quite robust but potentially costly.
Two-Step Grid Search. The grid-search method can be refined by a two-step execution; a code sketch follows below. For an error bound of $\varepsilon$ pick $G$ so that $G^2 = (b - a)/\varepsilon$. For the first step define an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b - a)/G : j = 0, ..., G\}$. At each point evaluate the criterion function and let $\hat\jmath = \operatorname*{argmin}_{0 \le j \le G}Q(\theta(j))$. For the second step define an equally spaced grid on $[\theta(\hat\jmath - 1), \theta(\hat\jmath + 1)]$ with $G$ gridpoints, which is $\{\theta'(k) = \theta(\hat\jmath - 1) + 2k(b - a)/G^2 : k = 0, ..., G\}$. Let $\hat k = \operatorname*{argmin}_{0 \le k \le G}Q(\theta'(k))$. The estimate of $\hat\theta$ is $\theta'(\hat k)$. The advantage of the two-step method over a one-step grid search is that the number of function evaluations has been reduced from $(b - a)/\varepsilon$ to $2\sqrt{(b - a)/\varepsilon}$, which can be substantial. The disadvantage is that if the function $Q(\theta)$ is irregular, the first-step grid may not bracket $\hat\theta$, which thus would be missed.
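A minimal sketch (not from the text) of the two-step grid search just described, applied to an arbitrary one-dimensional criterion on $[a, b]$:

```python
import numpy as np

def two_step_grid_search(Q, a, b, eps=1e-4):
    G = int(np.ceil(np.sqrt((b - a) / eps)))        # G^2 = (b - a) / eps
    # First step: coarse grid on [a, b]
    grid1 = a + np.arange(G + 1) * (b - a) / G
    j_hat = np.argmin([Q(t) for t in grid1])
    # Second step: fine grid on [theta(j-1), theta(j+1)]
    lo = grid1[max(j_hat - 1, 0)]
    hi = grid1[min(j_hat + 1, G)]
    grid2 = lo + np.arange(G + 1) * (hi - lo) / G
    k_hat = np.argmin([Q(t) for t in grid2])
    return grid2[k_hat]

Q = lambda theta: (theta - 0.7) ** 2 + 0.1 * np.cos(20 * theta)   # an example criterion
print(two_step_grid_search(Q, 0.0, 2.0))
```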
C.2 Gradient Methods
Gradient Methods are iterative methods which produce a sequence $\theta_i : i = 1, 2, ...$ which are designed to converge to $\hat\theta$. All require the choice of a starting value $\theta_1$, and all require the computation of the gradient of $Q(\theta)$
$$g(\theta) = \frac{\partial}{\partial\theta}Q(\theta)$$
and some require the Hessian
$$H(\theta) = \frac{\partial^2}{\partial\theta\,\partial\theta'}Q(\theta).$$
If the functions $g(\theta)$ and $H(\theta)$ are not analytically available, they can be calculated numerically. Take the $j$'th element of $g(\theta)$. Let $\delta_j$ be the $j$'th unit vector (zeros everywhere except for a one in the $j$'th row). Then for $\varepsilon$ small
$$g_j(\theta) \simeq \frac{Q(\theta + \delta_j\varepsilon) - Q(\theta)}{\varepsilon}.$$
Similarly,
$$H_{jk}(\theta) \simeq \frac{Q(\theta + \delta_j\varepsilon + \delta_k\varepsilon) - Q(\theta + \delta_k\varepsilon) - Q(\theta + \delta_j\varepsilon) + Q(\theta)}{\varepsilon^2}$$
In many cases, numerical derivatives can work well but can be computationally costly relative to analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
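A hedged sketch (not from the text) of the finite-difference formulas above for a criterion $Q$; the example criterion and step sizes are illustrative only.

```python
import numpy as np

def numerical_gradient(Q, theta, eps=1e-6):
    """Forward-difference approximation to the gradient g(theta)."""
    theta = np.asarray(theta, dtype=float)
    g = np.zeros_like(theta)
    for j in range(theta.size):
        d = np.zeros_like(theta); d[j] = eps
        g[j] = (Q(theta + d) - Q(theta)) / eps
    return g

def numerical_hessian(Q, theta, eps=1e-4):
    """Finite-difference approximation to the Hessian H(theta)."""
    theta = np.asarray(theta, dtype=float)
    m = theta.size
    H = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            dj = np.zeros(m); dj[j] = eps
            dk = np.zeros(m); dk[k] = eps
            H[j, k] = (Q(theta + dj + dk) - Q(theta + dk)
                       - Q(theta + dj) + Q(theta)) / eps**2
    return H

Q = lambda t: (t[0] - 1) ** 2 + 2 * (t[1] + 0.5) ** 2 + t[0] * t[1]
print(numerical_gradient(Q, [0.0, 0.0]))   # approximately [-2, 2]
print(numerical_hessian(Q, [0.0, 0.0]))    # approximately [[2, 1], [1, 4]]
```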
Most gradient methods are a variant of Newton's method which is based on a quadratic approximation. By a Taylor's expansion for $\theta$ close to $\hat\theta$
$$0 = g(\hat\theta) \simeq g(\theta) + H(\theta)\left(\hat\theta - \theta\right)$$
which implies
$$\hat\theta = \theta - H(\theta)^{-1}g(\theta).$$
This suggests the iteration rule
$$\hat\theta_{i+1} = \theta_i - H(\theta_i)^{-1}g(\theta_i).$$
One problem with Newton's method is that it will send the iterations in the wrong direction if $H(\theta_i)$ is not positive definite. One modification to prevent this possibility is quadratic hill-climbing which sets
$$\hat\theta_{i+1} = \theta_i - (H(\theta_i) + \alpha_iI_m)^{-1}g(\theta_i),$$
where $\alpha_i$ is set just above the smallest eigenvalue of $H(\theta_i)$ if $H(\theta)$ is not positive definite.
Another productive modification is to add a scalar steplength $\lambda_i$. In this case the iteration rule takes the form
$$\theta_{i+1} = \theta_i - D_ig_i\lambda_i \qquad (C.2)$$
where $g_i = g(\theta_i)$ and $D_i = H(\theta_i)^{-1}$ for Newton's method and $D_i = (H(\theta_i) + \alpha_iI_m)^{-1}$ for quadratic hill-climbing.
Allowing the steplength to be a free parameter allows for a line search, a one-dimensional optimization. To pick $\lambda_i$ write the criterion function as a function of $\lambda$
$$Q(\lambda) = Q(\theta_i + D_ig_i\lambda)$$
a one-dimensional optimization problem. There are two common methods to perform a line search. A quadratic approximation evaluates the first and second derivatives of $Q(\lambda)$ with respect to $\lambda$, and picks $\lambda_i$ as the value minimizing this approximation. The half-step method considers the sequence $\lambda = 1, 1/2, 1/4, 1/8, ...$. Each value in the sequence is considered and the criterion $Q(\theta_i + D_ig_i\lambda)$ evaluated. If the criterion has improved over $Q(\theta_i)$, use this value, otherwise move to the next element in the sequence.
APPENDIX C. NUMERICAL OPTIMIZATION 284
Newton’s method does not perform well if $Q(\theta)$ is irregular, and it can be quite computationally costly if $H(\theta)$ is not analytically available. These problems have motivated alternative choices for the weight matrix $D_i$. These methods are called Quasi-Newton methods. Two popular methods are due to Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Let
\[
\Delta g_i = g_i - g_{i-1},
\qquad
\Delta \theta_i = \theta_i - \theta_{i-1}.
\]
The DFP method sets
\[
D_i = D_{i-1}
+ \frac{\Delta \theta_i \, \Delta \theta_i'}{\Delta \theta_i' \, \Delta g_i}
+ \frac{D_{i-1} \, \Delta g_i \, \Delta g_i' \, D_{i-1}}{\Delta g_i' \, D_{i-1} \, \Delta g_i}.
\]
The BFGS method sets
\[
D_i = D_{i-1}
+ \frac{\Delta \theta_i \, \Delta \theta_i'}{\Delta \theta_i' \, \Delta g_i}
- \frac{\Delta \theta_i \, \Delta \theta_i'}{\left(\Delta \theta_i' \, \Delta g_i\right)^2} \, \Delta g_i' \, D_{i-1} \, \Delta g_i
+ \frac{\Delta \theta_i \, \Delta g_i' \, D_{i-1}}{\Delta \theta_i' \, \Delta g_i}
+ \frac{D_{i-1} \, \Delta g_i \, \Delta \theta_i'}{\Delta \theta_i' \, \Delta g_i}.
\]
For any of the gradient methods, the iterations continue until the sequence has converged in some sense. This can be defined by examining whether $\left\| \theta_i - \theta_{i-1} \right\|$, $\left| Q(\theta_i) - Q(\theta_{i-1}) \right|$ or $\left\| g(\theta_i) \right\|$ has become small.
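In practice these updates are rarely coded by hand; most numerical libraries provide quasi-Newton optimizers with built-in convergence checks. As a hedged illustration, the sketch below minimizes a user-supplied criterion with SciPy's BFGS implementation; the criterion shown is a hypothetical placeholder, and the gradient is approximated numerically when not supplied.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth criterion of a two-dimensional parameter.
def Q(theta):
    return (theta[0] - 1.0) ** 2 + 2.0 * (theta[1] + 0.5) ** 2 + np.exp(-theta[0] * theta[1])

theta_start = np.zeros(2)                      # starting value theta_1
result = minimize(Q, theta_start, method="BFGS",
                  options={"gtol": 1e-8})      # stop when the gradient is small
theta_hat = result.x                           # the approximate minimizer
```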
C.3 Derivative-Free Methods
All gradient methods can be quite poor in locating the global minimum when $Q(\theta)$ has several local minima. Furthermore, the methods are not well defined when $Q(\theta)$ is non-differentiable. In these cases, alternative optimization methods are required. One example is the simplex method of Nelder-Mead (1965).
A more recent innovation is the method of simulated annealing (SA). For a review see Goffe, Ferrier, and Rogers (1994). The SA method is a sophisticated random search. Like the gradient methods, it relies on an iterative sequence. At each iteration, a random variable is drawn and added to the current value of the parameter. If the resulting criterion is decreased, this new value is accepted. If the criterion is increased, it may still be accepted depending on the extent of the increase and another randomization. The latter property is needed to keep the algorithm from becoming trapped at a local minimum. As the iterations continue, the variance of the random innovations is shrunk. The SA algorithm stops when a large number of iterations is unable to improve the criterion. The SA method has been found to be successful at locating global minima. The downside is that it can take considerable computer time to execute.
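The following Python sketch gives a stripped-down version of the simulated-annealing idea described above. The acceptance rule, cooling schedule, and stopping rule are simple illustrative choices rather than those of any particular published implementation.

```python
import numpy as np

def simulated_annealing(Q, theta_start, scale=1.0, cooling=0.95,
                        iters_per_temp=100, max_stall=20, seed=0):
    """Random-search minimizer: accept uphill moves with a probability that shrinks over time."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta_start, dtype=float)
    best_theta, best_Q = theta.copy(), Q(theta)
    current_Q = best_Q
    temp, stall = 1.0, 0
    while stall < max_stall:
        improved = False
        for _ in range(iters_per_temp):
            candidate = theta + rng.normal(scale=scale * temp, size=theta.shape)
            cand_Q = Q(candidate)
            # Accept if better, or with some probability if worse (Metropolis-style rule).
            if cand_Q < current_Q or rng.random() < np.exp(-(cand_Q - current_Q) / temp):
                theta, current_Q = candidate, cand_Q
                if cand_Q < best_Q:
                    best_theta, best_Q = candidate.copy(), cand_Q
                    improved = True
        temp *= cooling          # shrink the innovation variance as iterations continue
        stall = 0 if improved else stall + 1
    return best_theta
```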
Bibliography
[1] Abadir, Karim M. and Jan R. Magnus (2005): Matrix Algebra, Cambridge University Press.
[2] Aitken, A.C. (1935): “On least squares and linear combinations of observations,” Proceedings
of the Royal Statistical Society, 55, 4248.
[3] Akaike, H. (1973): “Information theory and an extension of the maximum likelihood principle.” In B. Petrov and F. Csaki, eds., Second International Symposium on Information Theory.
[4] Anderson, T.W. and H. Rubin (1949): “Estimation of the parameters of a single equation in
a complete system of stochastic equations,” The Annals of Mathematical Statistics, 20, 4663.
[5] Andrews, Donald W. K. (1988): “Laws of large numbers for dependent nonidentically dis
tributed random variables,’ Econometric Theory, 4, 458467.
[6] Andrews, Donald W. K. (1991), “Asymptotic normality of series estimators for nonparameric
and semiparametric regression models,” Econometrica, 59, 307345.
[7] Andrews, Donald W. K. (1993): “Tests for parameter instability and structural change with unknown change point,” Econometrica, 61, 821-856.
[8] Andrews, Donald W. K. and Moshe Buchinsky: (2000): “A threestep method for choosing
the number of bootstrap replications,” Econometrica, 68, 2351.
[9] Andrews, Donald W. K. and Werner Ploberger (1994): “Optimal tests when a nuisance
parameter is present only under the alternative,” Econometrica, 62, 13831414.
[10] Ash, Robert B. (1972): Real Analysis and Probability, Academic Press.
[11] Basmann, R. L. (1957): “A generalized classical method of linear estimation of coefficients in a structural equation,” Econometrica, 25, 77-83.
[12] Bekker, P.A. (1994): “Alternative approximations to the distributions of instrumental variable estimators,” Econometrica, 62, 657-681.
[13] Billingsley, Patrick (1968): Convergence of Probability Measures. New York: Wiley.
[14] Billingsley, Patrick (1995): Probability and Measure, 3rd Edition, New York: Wiley.
[15] Bose, A. (1988): “Edgeworth correction by bootstrap in autoregressions,” Annals of Statistics,
16, 17091722.
[16] Breusch, T.S. and A.R. Pagan (1979): “The Lagrange multiplier test and its application to
model speciﬁcation in econometrics,” Review of Economic Studies, 47, 239253.
[17] Brown, B. W. and Whitney K. Newey (2002): “GMM, efficient bootstrapping, and improved inference,” Journal of Business and Economic Statistics.
[18] Carlstein, E. (1986): “The use of subseries methods for estimating the variance of a general
statistic from a stationary time series,” Annals of Statistics, 14, 11711179.
[19] Casella, George and Roger L. Berger (2002): Statistical Inference, 2nd Edition, Duxbury
Press.
[20] Chamberlain, Gary (1987): “Asymptotic efficiency in estimation with conditional moment restrictions,” Journal of Econometrics, 34, 305-334.
[21] Choi, In and Peter C.B. Phillips (1992): “Asymptotic and ﬁnite sample distribution theory for
IV estimators and tests in partially identiﬁed structural equations,” Journal of Econometrics,
51, 113150.
[22] Chow, G.C. (1960): “Tests of equality between sets of coefficients in two linear regressions,” Econometrica, 28, 591-603.
[23] Cragg, John (1992): “Quasi-Aitken estimation for heteroskedasticity of unknown form,” Journal of Econometrics, 54, 179-201.
[24] Davidson, James (1994): Stochastic Limit Theory: An Introduction for Econometricians.
Oxford: Oxford University Press.
[25] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge
University Press.
[26] Dickey, D.A. and W.A. Fuller (1979): “Distribution of the estimators for autoregressive time
series with a unit root,” Journal of the American Statistical Association, 74, 427431.
[27] Donald, Stephen G. and Whitney K. Newey (2001): “Choosing the number of instruments,” Econometrica, 69, 1161-1191.
[28] Dufour, J.M. (1997): “Some impossibility theorems in econometrics with applications to
structural and dynamic models,” Econometrica, 65, 13651387.
[29] Efron, Bradley (1979): “Bootstrap methods: Another look at the jackknife,” Annals of Sta
tistics, 7, 126.
[30] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society
for Industrial and Applied Mathematics.
[31] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York:
ChapmanHall.
[32] Eicker, F. (1963): “Asymptotic normality and consistency of the least squares estimators for
families of linear regressions,” Annals of Mathematical Statistics, 34, 447456.
[33] Engle, Robert F. and Clive W. J. Granger (1987): “Cointegration and error correction:
Representation, estimation and testing,” Econometrica, 55, 251276.
[34] Frisch, Ragnar (1933): “Editorial,” Econometrica, 1, 14.
[35] Frisch, Ragnar and F. Waugh (1933): “Partial time regressions as compared with individual
trends,” Econometrica, 1, 387401.
[36] Gallant, A. Ronald and D.W. Nychka (1987): “Seminonparametric maximum likelihood es
timation,” Econometrica, 55, 363390.
[37] Gallant, A. Ronald and Halbert White (1988): A Uniﬁed Theory of Estimation and Inference
for Nonlinear Dynamic Models. New York: Basil Blackwell.
[38] Galton, Francis (1886): “Regression Towards Mediocrity in Hereditary Stature,” The Journal
of the Anthropological Institute of Great Britain and Ireland, 15, 246263.
[39] Goldberger, Arthur S. (1991): A Course in Econometrics. Cambridge: Harvard University
Press.
[40] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): “Global optimization of statistical functions with simulated annealing,” Journal of Econometrics, 60, 65-99.
[41] Gauss, K.F. (1809): “Theoria motus corporum coelestium,” in Werke, Vol. VII, 240254.
[42] Granger, Clive W. J. (1969): “Investigating causal relations by econometric models and
crossspectral methods,” Econometrica, 37, 424438.
[43] Granger, Clive W. J. (1981): “Some properties of time series data and their use in econometric
speciﬁcation,” Journal of Econometrics, 16, 121130.
[44] Granger, Clive W. J. and Timo Teräsvirta (1993): Modelling Nonlinear Economic Relation
ships, Oxford University Press, Oxford.
[45] Gregory, A. and M. Veall (1985): “On formulating Wald tests of nonlinear restrictions,”
Econometrica, 53, 14651468,
[46] Haavelmo, T. (1944): “The probability approach in econometrics,” Econometrica, supple
ment, 12.
[47] Hall, A. R. (2000): “Covariance matrix estimation and the power of the overidentifying
restrictions test,” Econometrica, 68, 15171527,
[48] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: SpringerVerlag.
[49] Hall, P. (1994): “Methodology and theory for the bootstrap,” Handbook of Econometrics,
Vol. IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.
[50] Hall, P. and J.L. Horowitz (1996): “Bootstrap critical values for tests based on Generalized
MethodofMoments estimation,” Econometrica, 64, 891916.
[51] Hahn, J. (1996): “A note on bootstrapping generalized method of moments estimators,”
Econometric Theory, 12, 187197.
[52] Hamilton, James D. (1994) Time Series Analysis.
[53] Hansen, Bruce E. (1992): “Efficient estimation and testing of cointegrating vectors in the presence of deterministic trends,” Journal of Econometrics, 53, 87-121.
[54] Hansen, Bruce E. (1996): “Inference when a nuisance parameter is not identiﬁed under the
null hypothesis,” Econometrica, 64, 413430.
[55] Hansen, Bruce E. (2006): “Edgeworth expansions for the Wald and GMM statistics for non
linear restrictions,” Econometric Theory and Practice: Frontiers of Analysis and Applied
Research, edited by Dean Corbae, Steven N. Durlauf and Bruce E. Hansen. Cambridge Uni
versity Press.
[56] Hansen, Lars Peter (1982): “Large sample properties of generalized method of moments estimators,” Econometrica, 50, 1029-1054.
[57] Hansen, Lars Peter, John Heaton, and A. Yaron (1996): “Finite sample properties of some
alternative GMM estimators,” Journal of Business and Economic Statistics, 14, 262280.
[58] Hausman, J.A. (1978): “Speciﬁcation tests in econometrics,” Econometrica, 46, 12511271.
[59] Heckman, J. (1979): “Sample selection bias as a speciﬁcation error,” Econometrica, 47, 153
161.
[60] Horowitz, Joel (2001): “The Bootstrap,” Handbook of Econometrics, Vol. 5, J.J. Heckman
and E.E. Leamer, eds., Elsevier Science, 31593228.
[61] Imbens, G.W. (1997): “One step estimators for overidentiﬁed generalized method of moments
models,” Review of Economic Studies, 64, 359383.
[62] Imbens, G.W., R.H. Spady and P. Johnson (1998): “Information theoretic approaches to
inference in moment condition models,” Econometrica, 66, 333357.
[63] Jarque, C.M. and A.K. Bera (1980): “Efficient tests for normality, homoskedasticity and serial independence of regression residuals,” Economics Letters, 6, 255-259.
[64] Johansen, S. (1988): “Statistical analysis of cointegrating vectors,” Journal of Economic
Dynamics and Control, 12, 231254.
[65] Johansen, S. (1991): “Estimation and hypothesis testing of cointegration vectors in the pres
ence of linear trend,” Econometrica, 59, 15511580.
[66] Johansen, S. (1995): LikelihoodBased Inference in Cointegrated Vector AutoRegressive Mod
els, Oxford University Press.
[67] Johansen, S. and K. Juselius (1992): “Testing structural hypotheses in a multivariate cointe
gration analysis of the PPP and the UIP for the UK,” Journal of Econometrics, 53, 211244.
[68] Kitamura, Y. (2001): “Asymptotic optimality and empirical likelihood for testing moment
restrictions,” Econometrica, 69, 16611672.
[69] Kitamura, Y. and M. Stutzer (1997): “An information-theoretic alternative to generalized method of moments,” Econometrica, 65, 861-874.
[70] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.
[71] Kunsch, H.R. (1989): “The jackknife and the bootstrap for general stationary observations,”
Annals of Statistics, 17, 12171241.
[72] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): “Testing the null hypoth
esis of stationarity against the alternative of a unit root: How sure are we that economic time
series have a unit root?” Journal of Econometrics, 54, 159178.
[73] Lafontaine, F. and K.J. White (1986): “Obtaining any Wald statistic you want,” Economics
Letters, 21, 3540.
[74] Lehmann, E.L. and George Casella (1998): Theory of Point Estimation, 2nd Edition,
Springer.
[75] Lehmann, E.L. and Joseph P. Romano (2005): Testing Statistical Hypotheses, 3rd Edition,
Springer.
[76] Li, Qi and Jeffrey Racine (2007): Nonparametric Econometrics, Princeton University Press.
[77] Lovell, M.C. (1963): “Seasonal adjustment of economic time series,” Journal of the American
Statistical Association, 58, 9931010.
[78] MacKinnon, James G. (1990): “Critical values for cointegration,” in Engle, R.F. and C.W.
Granger (eds.) LongRun Economic Relationships: Readings in Cointegration, Oxford, Oxford
University Press.
[79] MacKinnon, James G. and Halbert White (1985): “Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties,” Journal of Econometrics, 29, 305-325.
[80] Magnus, J. R. and H. Neudecker (1988): Matrix Differential Calculus with Applications in Statistics and Econometrics, New York: John Wiley and Sons.
[81] Mann, H.B. and A. Wald (1943). “On stochastic limit and order relationships,” The Annals
of Mathematical Statistics 14, 217—226.
[82] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.
[83] Nelder, J. and R. Mead (1965): “A simplex method for function minimization,” Computer Journal, 7, 308-313.
[84] Newey, Whitney K. (1990): “Semiparametric efficiency bounds,” Journal of Applied Econometrics, 5, 99-135.
[85] Newey, Whitney K. and Daniel L. McFadden (1994): “Large Sample Estimation and Hy
pothesis Testing,” in Robert Engle and Daniel McFadden, (eds.) Handbook of Econometrics,
vol. IV, 21112245, North Holland: Amsterdam.
[86] Newey, Whitney K. and Kenneth D. West (1987): “Hypothesis testing with efficient method of moments estimation,” International Economic Review, 28, 777-787.
[87] Owen, Art B. (1988): “Empirical likelihood ratio conﬁdence intervals for a single functional,”
Biometrika, 75, 237249.
[88] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.
[89] Park, Joon Y. and Peter C. B. Phillips (1988): “On the formulation of Wald tests of nonlinear
restrictions,” Econometrica, 56, 10651083,
[90] Phillips, Peter C.B. (1989): “Partially identiﬁed econometric models,” Econometric Theory,
5, 181240.
[91] Phillips, Peter C.B. and Sam Ouliaris (1990): “Asymptotic properties of residual based tests
for cointegration,” Econometrica, 58, 165193.
[92] Politis, D.N. and J.P. Romano (1996): “The stationary bootstrap,” Journal of the American
Statistical Association, 89, 13031313.
[93] Potscher, B.M. (1991): “Eects of model selection on inference,” Econometric Theory, 7,
163185.
[94] Qin, J. and J. Lawless (1994): “Empirical likelihood and general estimating equations,” The
Annals of Statistics, 22, 300325.
[95] Ramsey, J. B. (1969): “Tests for speciﬁcation errors in classical linear leastsquares regression
analysis,” Journal of the Royal Statistical Society, Series B, 31, 350371.
[96] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGrawHill.
[97] Said, S.E. and D.A. Dickey (1984): “Testing for unit roots in autoregressivemoving average
models of unknown order,” Biometrika, 71, 599608.
[98] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.
[99] Sargan, J.D. (1958): “The estimation of economic relationships using instrumental variables,”
Econometrica, 26, 393415.
[100] Shao, Jun (2003): Mathematical Statistics, 2nd edition, Springer.
[101] Sheather, S.J. and M.C. Jones (1991): “A reliable data-based bandwidth selection method for kernel density estimation,” Journal of the Royal Statistical Society, Series B, 53, 683-690.
[102] Shin, Y. (1994): “A residualbased test of the null of cointegration against the alternative of
no cointegration,” Econometric Theory, 10, 91115.
[103] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chap
man and Hall.
[104] Sims, C.A. (1972): “Money, income and causality,” American Economic Review, 62, 540552.
[105] Sims, C.A. (1980): “Macroeconomics and reality,” Econometrica, 48, 148.
[106] Staiger, D. and James H. Stock (1997): “Instrumental variables regression with weak instru
ments,” Econometrica, 65, 557586.
[107] Stock, James H. (1987): “Asymptotic properties of least squares estimators of cointegrating
vectors,” Econometrica, 55, 10351056.
[108] Stock, James H. (1991): “Conﬁdence intervals for the largest autoregressive root in U.S.
macroeconomic time series,” Journal of Monetary Economics, 28, 435460.
[109] Stock, James H. and Jonathan H. Wright (2000): “GMM with weak identiﬁcation,” Econo
metrica, 68, 10551096.
[110] Theil, H. (1953): “Repeated least squares applied to complete equation systems,” The Hague,
Central Planning Bureau, mimeo.
[111] Theil, H. (1971): Principles of Econometrics, New York: Wiley.
[112] Tobin, James (1958): “Estimation of relationships for limited dependent variables,” Econo
metrica, 26, 2436.
[113] Tripathi, Gautam (1999): “A matrix extension of the CauchySchwarz inequality,” Economics
Letters, 63, 13.
[114] van der Vaart, A.W. (1998): Asymptotic Statistics, Cambridge University Press.
[115] Wald, A. (1943): “Tests of statistical hypotheses concerning several parameters when the
number of observations is large,” Transactions of the American Mathematical Society, 54,
426482.
[116] Wang, J. and E. Zivot (1998): “Inference on structural parameters in instrumental variables
regression with weak instruments,” Econometrica, 66, 13891404.
[117] White, Halbert (1980): “A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity,” Econometrica, 48, 817-838.
[118] White, Halbert (1984): Asymptotic Theory for Econometricians, Academic Press.
[119] Wooldridge, Jeffrey M. (2002): Econometric Analysis of Cross Section and Panel Data, MIT Press.
[120] Zellner, Arnold (1962): “An efficient method of estimating seemingly unrelated regressions, and tests for aggregation bias,” Journal of the American Statistical Association, 57, 348-368.
For those wanting a deeper foundation in probability. some parts are quite incomplete. the Handbook of Econometrics series provides advanced summaries of contemporary econometric methods and theory. I recommend Ash (1972) or Billingsley (1995). and Lehmann and Romano (2005). Beyond these texts. probability. vii . Hamilton (1994) for timeseries methods. in particular the later sections of the manuscript.Preface This book is intended to serve as the textbook for a ﬁrstyear graduate course in econometrics. I recommend Davidson (1994) for asymptotic theory. or be used as a supplement to another text. but not required. I recommend Lehmann and Casella (1998). I would like to thank YingYing Lee for providing research assistance in preparing some of the empirical examples presented in the text. I recommend Matrix Algebra by Abadir and Magnus (2005). It can be used as a standalone text. Wooldridge (2002) for panel data and discrete response models. For further study in econometrics beyond this text. As this is a manuscript in progress. and mathematical statistics. linear algebra. Hopefully one day these sections will be ﬂeshed out and completed in more detail. Students are assumed to have an understanding of multivariate calculus. and statistics are reviewed in the Appendix. An excellent introduction to probability and statistics is Statistical Inference by Casella and Berger (2002). probability theory. For more advanced statistical theory. and Li and Racine (2007) for nonparametrics and semiparametric econometrics. some of the basic tools of matrix algebra. van der Vaart (1998). For reference. For students wishing to deepen their knowledge of matrix algebra in relation to their study of econometrics. Shao (2003). A prior course in undergraduate econometrics would be helpful.
It is therefore ﬁtting that we turn to Frisch’s own words in the introduction to the ﬁrst issue of Econometrica for an explanation of the discipline. and the study of the properties of econometric methods. is a necessary.1 What is Econometrics? The term “econometrics” is believed to have been crafted by Ragnar Frisch (18951973) of Norway. 1.. in his seminal 1 . And it is this uniﬁcation that constitutes econometrics. should be confounded with econometrics. econometrics is by no means the same as economic statistics. economic theory. and economic data..” But there are several aspects of the quantitative approach to economics. Its main object shall be to promote studies that aim at a uniﬁcation of the theoreticalquantitative and the empiricalquantitative approach to economic problems. Econometric theory concerns the development of tools and methods. ﬁrst editor of the journal Econometrica. A word of explanation regarding the term econometrics may be in order. (1933). and cowinner of the ﬁrst Nobel Memorial Prize in Economic Sciences in 1969. Applied econometrics is a term describing the development of quantitative economic models and the application of econometric methods to these models using economic data. 1. and no single one of these aspects. Within the ﬁeld of econometrics there are subdivisions and specializations. Nor should econometrics be taken as synonomous with the application of mathematics to economics. but not by itself a sucient. we would say that econometrics is the uniﬁed study of economic models. Ragnar Frisch. winner of the 1989 Nobel Memorial Prize in Economic Sciences. that of statistics. This deﬁnition remains valid today.. Its deﬁnition is implied in the statement of the scope of the [Econometric] Society. in Section I of the Constitution. Nor is it identical with what we call general economic theory.... Thus. Econometrica. 12. although some terms have evolved somewhat in their usage. although a considerable portion of this theory has a deﬁninitely quantitative character. pp. It is the uniﬁcation of all three that is powerful. mathematical statistics.2 The Probability Approach to Econometrics The unifying methodology of modern econometrics was articulated by Trygve Haavelmo (19111999) of Norway. taken by itself. which reads: “The Econometric Society is an international society for the advancement of economic theory in its relation to statistics and mathematics. Today. condition for a real understanding of the quantitative relations in modern economic life. Experience has shown that each of these three viewpoints. and mathematics.Chapter 1 Introduction 1. one of the three principle founders of the Econometric Society.
An individual observation could also be a measurement at a point in time. and conduct inferences about the economy is through the powerful theory of mathematical statistics. an econometrician has a set of repeated measurements on a set of variables. Deterministic models are blatently inconsistent with observed economic quantities. An individual observation often corresponds to a speciﬁc economic unit. Econometrica (1944). The semiparametric approach dominates contemporary econometrics. and the quantitative analysis performed under the assumption that the economic model is correctly speciﬁed. Economists typically denote variables by the italicized roman characters y. including maximum likelihood and Bayesian estimation. such as quarterly GDP or a daily interest rate. A probabilistic economic model is speciﬁed. Rather. Similar to the quasistructural approach. Today no quantitative work in economics shuns its fundamental vision. We call this information the data. 1. quasiMLE. corporation. household. .3 Econometric Terms and Notation In a typical application. or sample. city or other geographical region. and other descriptive characteristics. and is the main focus of this textbook. organization. age. Another branch of quantitative structural economics is the calibration approach. ﬁrm. and it is incohorent to apply deterministic models to nondeterministic data. state. A probabilistic economic model is partially speciﬁed but some features are left unspeciﬁed. educational attainment. stochastic errors should not be simply added to deterministic models to make them random. country. We use the term observations to refer to the distinct repeated measurements on the variables. and quasilikelihood inference. The appropriate method for a quantitative economic analysis follows from the probabilistic construction of the economic model. Haavelmo argued that quantitative economic models must necessarily be probability models (by which today we would mean stochastic). While all economists embrace the probability approach. Haavelmo’s probability approach was quickly embraced by the economics profession. and instead selects parameters by matching model and data moments using nonstatistical ad hoc 1 methods. The convention in econometrics is to use the character y to denote the variable to be explained. Closely related is the semiparametric approach. estimate. The structural approach is the closest to Haavelmo’s original idea. The dierence is that the calibrationist literature rejects mathematical statistics as inappropriate for approximate models. how should we interpret structural econometric analysis? The quasistructural approach to inference views a structural economic model as an approximation rather than the truth. the quasilikelihood function. there has been some evolution in its implementation. x. This theory has led to the concepts of the pseudotrue value (the parameter value deﬁned by the estimation problem).” The structural approach typically leads to likelihoodbased analysis. it is more accurate to view a model as a useful abstraction or approximation. such as a person. Economic models should be explicitly designed to incorporate randomness. INTRODUCTION 2 paper “The probability approach in econometrics”. A criticism of the structural approach is that it is misleading to treat an economic model as correctly speciﬁed. and/or z.CHAPTER 1. the calibration approach interprets structural models as approximations and hence inherently false. 
This approach typically leads to estimation methods such as leastsquares and the Generalized Method of Moments. Once we acknowledge that an economic model is a probability model. in a labor application the variables could include weekly earnings. it follows naturally that the best way to quantify. In this case. For example. Researchers often describe this as “taking their model seriously. dataset. while 1 Ad hoc means “for this purpose” — a method designed for a speciﬁc problem — and not based on a generalizable principle.
xi and z i . an experiment might randomly divide children into groups. Estimates will be denoted by appending hats or tildes. e. and in other places a speciﬁc realization. . In some contexts we use indices other than i. Ideally. Estimates are ˆ typically denoted by putting a hat “^”.g. z i ). and in these cases we describe these variables or observations as unobserved or missing. . holding other variables constant. 1.g. In practice it is common to ﬁnd that some variables are not measured for some observations. and 2 to denote unknown parameters of an econometric model. xi . e. However. real numbers (elements of the real line R) are written using lower case italics such as y.g. such as in timeseries applications where the index t is common. mandate dierent levels of education to the dierent groups. . e. xk Upper case bold italics such as X are used for matrices. yi . we see few nonlaboratory experimental data sets in economics. experiments such as this would be widely condemned as immoral! Consequently. often with a subscript to denote the estimator. The dierences between the groups would be direct measurements of the effects of dierent levels of education.g. ˜ are estimates of . For example. and vectors (elements of Rk ) by lower case bold italics such as x. Thus the notation yi will in some places refer to a random variable. and then follow the children’s wage path after they mature and enter the labor force. This practice is not commonly followed in econometrics because instead we use upper case to denote matrices. We typically use Greek letters such as . e. and will use boldface. or .g. V = var n as the covariance matrix for n . As we mentioned before.g. and in panel studies we typically use the double index it to refer to individual i at a time period t. It is proper mathematical practice to use upper case X for random variables and lower case x for realizations or speciﬁc values. Hopefully without causing confusion. Hopefully there will be no confusion as the use should be evident from the context. We typically denote the number of observations by the natural number n. Another issue of interest is the earnings gap between men and women. we will use the notation V = avar() to denote the asymptotic covariance matrix of n (the variance of the asymptotic distribution).CHAPTER 1. e. Following mathematical convention. tilde “~” or bar “” over the corresponding letter. e. when these are vectorvalued.4 Observational Data A common econometric question is to quantify the impact of one set of variables on another variable. The i’th observation is the set (yi . we would use experimental data to answer these questions. To measure the returns to schooling. ideally each observation consists of a set of measurements on the list of variables. and subscript the variables by the index i to denote the individual observation. INTRODUCTION 3 the characters x and z are used to denote the conditioning (explaining) variables. x1 x2 x = . . and The covariance matrix of an econometric estimator will typically be written using the capital boldface V . V is an estimate of V . a concern in labor economics is the returns to schooling — the change in earnings induced by increasing a worker’s education.
households. This means that all variables must be treated as random and possibly jointly determined. or tickbytick) so sample sizes can be quite large. We will return to a discussion of some of these issues in Chapter 13. which suggests a high relative wage. households. not experimental. In many contemporary econometric crosssection studies the sample size n is quite large. In typical applications. Surveys are a typical source for crosssectional data. It is conventional to assume that crosssectional observations are mutually independent. This type of data is characterized by serial dependence so the random sampling assumption is inappropriate. For example. Most aggregate economic data is only available at a low frequency (annual. Knowledge of the joint distibution alone may not be able to distinguish between these explanations. timeseries. and assess the joint dependence. Most of this text is devoted to the study of crosssection data. or corporations) surveyed repeatedly over time. This discussion means that it is dicult to infer causality from observational data alone. but a given individual’s observations are mutually dependent. Causal inference requires identiﬁcation. The fact that a person is highly educated suggests a high level of ability. Most economic data sets are observational. High ability individuals do better in school. These data sets consist of a set of individuals (typically persons. through data collection we can record the level of a person’s education and their wage. 1.CHAPTER 1. and this is based on strong assumptions. These factors are likely to be aected by their personal abilities and attitudes towards work. hourly. But from observational data it is dicult to infer causality. They are distinguished by the dependence structure across observations. and therefore choose to attain higher levels of education. and panel. This is an alternative explanation for an observed positive correlation between educational levels and wages. Crosssectional data sets have one observation per individual. Timeseries data are indexed by time. The exception is ﬁnancial data where data are available at a high frequency (weekly. as we are not able to manipulate one variable to see the direct eect on the other. and their high ability is the fundamental reason for their high wages. Typical examples include macroeconomic aggregates.5 Standard Data Structures There are three major types of economic data sets: crosssectional. To continue the above example. Data Structures • Crosssection • Timeseries • Panel . The common modeling assumption is that the individuals are mutually independent of one another. most economic data is observational. With such data we can measure the joint distribution of these variables. The point is that multiple explanations are consistent with a positive correlation between schooling levels and education. ﬁrms or other economic agents. prices and interest rates. the individuals surveyed are persons. INTRODUCTION 4 Instead. daily. quarterly or perhaps monthly) so the sample size is typically much smaller than in crosssection studies. This is a modiﬁed random sampling environment. a person’s level of education is (at least partially) determined by that person’s choices. Panel data combines elements of crosssection and timeseries.
It is a statement about the relationship between observations i and j. z) which can call the population. In the random sampling framework. xj . Deﬁnition 1. and the goal of statistical inference is to learn about features of F from the sample.1 The observations (yi . An excellent starting point is the Resources for Economists Data Links. This abstraction can be a source of confusion as it does not correspond to a physical population in the real world. timeseries.. The assumption of random sampling provides the mathematical foundation for treating economic statistics with the tools of mathematical statistics. These include models of spatial correlation and clustering. Before this conceptual development. most of this text will be devoted to crosssectional data under the assumption of mutually independent observations.) Furthermore. x. INTRODUCTION 5 Some contemporary econometric applications combine elements of crosssection. (Sometimes the label “independent” is misconstrued.6 Sources for Economic Data Fortunately for economists. z i ) as a realization from a joint probability distribution F (y. if the data is randomly gathered. xi . z i ) are a random sample if they are mutually independent and identically distributed (iid) across i = 1. For most of this text we will assume that our observations come from a random sample. z i ) is independent of the j’th observation (yj . The distribution F is unknown.org. z j ) for i = j. In this case we say that the data are independent and identically distributed or iid. This “population” is inﬁnitely large. and panel data modeling. methods from mathematical statistics had not been applied to economic data as they were viewed as inappropriate. The random sampling framework enabled economic samples to be viewed as homogenous and random. The random sampling framework was a major intellectural breakthrough of the late 19th century.CHAPTER 1. Many largescale economic datasets are available without charge from governmental agencies. we think of an individual observation (yi . By mutual independence we mean that the i’th observation (yi . As we mentioned above.. not a statement about the relationship between yi and xi and/or z i . We call this a random sample. available at rfe.. From this site you can ﬁnd almost every publically available economic data set. the internet provides a convenient forum for dissemination of economic data. it is reasonable to model each observation as a random draw from the same probability distribution. xi . n.5. Some speciﬁc data sources of interest include • Bureau of Labor Statistics • US Census • Current Population Survey • Survey of Income and Program Participation • Panel Study of Income Dynamics • Federal Reserve System (Board of Governors and regional banks) • National Bureau of Economic Research . allowing the application of mathematical statistics to the social sciences. a necessary precondition for the application of statistical methods. xi . 1. .
• U.S. Bureau of Economic Analysis
• CompuStat
• International Financial Statistics

Another good source of data is from authors of published empirical studies. Most journals in economics require authors of published papers to make their datasets generally available. For example, in its instructions for submission, the American Economic Review states:

All data used in analysis must be made available to any researcher for purposes of replication.

The Journal of Political Economy states:

It is the policy of the Journal of Political Economy to publish papers only if the data used in the analysis are clearly and precisely documented and are readily available to any researcher for purposes of replication.

Econometrica states:

Econometrica has the policy that all empirical, experimental and simulation results must be replicable. Therefore, authors of accepted papers must submit data sets, programs, and information on empirical analysis, experiments and simulations that are needed for replication and some limited sensitivity analysis.

If you are interested in using the data from a published paper, first check the journal's website, as many journals archive data and replication programs online. Second, check the website(s) of the paper's author(s). Most academic economists maintain webpages, and some make available replication files complete with data and programs. If these investigations fail, email the author(s), politely requesting the data. You may need to be persistent.

As a matter of professional etiquette, all authors absolutely have the obligation to make their data and programs available. Unfortunately, many fail to do so, and typically for poor reasons. The irony of the situation is that it is typically in the best interests of a scholar to make as much of their work (including all data and programs) freely available, as this only increases the likelihood of their work being cited and having an impact. Keep this in mind as you start your own empirical project. Remember that as part of your end product, you will need (and want) to provide all data and programs to the community of scholars. The greatest form of flattery is to learn that another scholar has read your paper, wants to extend your work, or wants to use your empirical methods. In addition, public openness provides a healthy incentive for transparency and integrity in empirical analysis.

1.7 Econometric Software

Economists use a variety of econometric, statistical, and programming software.

STATA (www.stata.com) is a powerful statistical program with a broad set of pre-programmed econometric and statistical tools. It is quite popular among economists, is continuously being updated with new methods, and is an excellent package for most econometric analysis, but it is limited when you want to use new or less-common econometric methods which have not yet been programmed.

GAUSS (www.aptech.com), MATLAB (www.mathworks.com), and Ox (www.oxmetrics.net) are high-level matrix programming languages with a wide variety of built-in statistical functions. Many econometric methods have been programmed in these languages and are available on the web. The advantage of these packages is that you are in complete control of your analysis, and it is
easier to program new methods than in STATA. Some disadvantages are that you have to do much of the programming yourself, programming complicated procedures takes significant time, and programming errors are hard to prevent and difficult to detect and eliminate.

R (www.r-project.org) is an integrated suite of statistical and graphical software that is flexible, open source, and best of all, free!

For highly-intensive computational tasks, some economists write their programs in a standard programming language such as Fortran or C. This can lead to major gains in computational speed, at the cost of increased time in programming and debugging.

As these different packages have distinct advantages, many empirical economists end up using more than one package. As a student of econometrics, you will learn at least one of these packages, and probably more than one.

1.8 Reading the Manuscript

Chapter 2 is a review of moment estimation and asymptotic distribution theory. This material should be familiar from an earlier course in statistics, but I have included it at the beginning because of its central importance in econometric distribution theory. Chapters 3 through 9 deal with the core linear regression and projection models. Chapter 10 introduces the bootstrap. Chapters 11 through 13 deal with the Generalized Method of Moments, empirical likelihood and endogeneity. Chapters 14 and 15 cover time series, and Chapters 16, 17 and 18 cover limited dependent variables, panel data, and nonparametrics. Reviews of matrix algebra, probability theory, maximum likelihood, and numerical optimization can be found in the appendix.

Technical sections which may not be of interest to all readers are marked with an asterisk (*).
Chapter 2

Moment Estimation

2.1 Introduction

Most econometric estimators can be written as functions of sample moments. To understand econometric estimation we need a thorough understanding of moment estimation. This chapter provides a concise summary. It will be useful for most students to review this material, even if most of it is familiar.

2.2 Population and Sample Mean

A random variable y with density f has the expectation or mean(1)

μ = E(y) = ∫ u f(u) du.

This is the average value of y in the population. We would like to estimate μ from a random sample. Recall that a random sample {y_1, ..., y_n} consists of n observations of independent and identically distributed draws from the distribution of y.

Assumption 2.2.1 The observations {y_1, ..., y_n} are a random sample.

As μ is the average value of y in the population, it seems reasonable to estimate μ from the average value of y in the sample. This is the sample mean, written as

ȳ = (1/n)(y_1 + ··· + y_n) = (1/n) Σ_{i=1}^{n} y_i.

It is important to understand the distinction between μ and ȳ. The population mean μ is a non-random feature of the population, while the sample mean ȳ is a random feature of a random sample: μ is fixed, while ȳ varies with the sample. We use the term "mean" to refer to both, but they are really quite distinct. Here, as is common in econometrics, we put a bar over y to indicate that the quantity is a sample mean. This convention is useful as it helps readers recognize a sample mean. It is also common to see the notation ȳ_n, where the subscript "n" indicates that the sample mean depends on the sample size n.

(1) For a rigorous treatment of expectation see Section 2.14.
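To make the distinction between μ and ȳ concrete, here is a minimal numerical sketch. The text does not attach its examples to any particular software; this and the later sketches use Python with numpy purely for illustration, and the distribution, sample sizes and seeds below are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# An artificial "population": y is standard lognormal, so the population mean
# mu = exp(1/2) is a fixed, non-random number.
mu = np.exp(0.5)

# One random sample of n iid draws, and its sample mean (a random quantity).
n = 1_000
y = rng.lognormal(mean=0.0, sigma=1.0, size=n)
ybar = y.mean()

print(mu, ybar)   # ybar estimates mu, but varies from sample to sample
```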
Moment estimation uses sample moments as estimates of population moments. In the case of the mean, the moment estimate of the population mean μ = E(y) is the sample mean μ̂ = ȳ. Here, as is common in econometrics, we put a hat "^" over the parameter μ to indicate that μ̂ is a sample estimate of μ. This is a helpful convention: just by seeing the symbol μ̂ we understand that it is a sample estimate of the population parameter μ.

2.3 Sample Mean is Unbiased

Since the sample mean is a linear function of the observations, it is simple to calculate its expectation:

E(ȳ) = E((1/n) Σ_{i=1}^{n} y_i) = (1/n) Σ_{i=1}^{n} E(y_i) = μ.

This shows that the expected value of the sample mean equals the population mean. An estimator with this property is called unbiased.

Definition 2.3.1 An estimator θ̂ for θ is unbiased if E(θ̂) = θ.

Theorem 2.3.1 If E|y| < ∞ then E(ȳ) = μ and μ̂ = ȳ is unbiased for the population mean μ.

You may notice that we slipped in the additional condition "If E|y| < ∞". This assumption ensures that μ is finite and the mean of y is well defined.

2.4 Variance

The variance of the random variable y is defined as

σ² = var(y) = E(y − E(y))² = E(y²) − (E(y))².

Notice that the variance is a function of two moments, E(y²) and E(y).

We can calculate the variance of the sample mean μ̂. It is convenient to define the centered observations u_i = y_i − μ, which have mean zero and variance σ². Then

μ̂ − μ = (1/n) Σ_{i=1}^{n} u_i

and

var(μ̂) = E(μ̂ − μ)² = E((1/n) Σ_i u_i)((1/n) Σ_j u_j) = (1/n²) Σ_i Σ_j E(u_i u_j) = (1/n²) Σ_i σ² = σ²/n,

where the second-to-last equality holds because E(u_i u_j) = σ² for i = j yet E(u_i u_j) = 0 for i ≠ j due to independence.

Theorem 2.4.1 If σ² < ∞ then var(μ̂) = σ²/n.
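Both results are easy to check by simulation. In the following sketch (the same illustrative Python/numpy setup as above; the normal design and the number of replications are arbitrary) the average of many sample means is close to μ, and their variance is close to σ²/n.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 100_000
mu, sigma2 = 5.0, 4.0                  # y ~ N(5, 4)

# Draw many independent samples of size n and record each sample mean.
ybars = rng.normal(loc=mu, scale=np.sqrt(sigma2), size=(reps, n)).mean(axis=1)

print(ybars.mean(), mu)                # unbiasedness: the average of ybar is about mu
print(ybars.var(), sigma2 / n)         # variance: about sigma^2 / n = 0.08
```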
This result links the variance of the estimator μ̂ with the variance of the individual observation y_i and with the sample size n. In particular, var(μ̂) is proportional to σ², and inversely proportional to n, and thus decreases as n increases.

2.5 Convergence in Probability

In Theorem 2.4.1 we showed that the variance of μ̂ decreases with the sample size n. This implies that the sampling distribution of μ̂ concentrates as the sample size increases. We now give a formal definition.

Definition 2.5.1 A random variable z_n ∈ R converges in probability to z as n → ∞, denoted z_n →p z, if for all δ > 0,

lim_{n→∞} Pr(|z_n − z| ≤ δ) = 1.   (2.1)

The definition looks quite abstract, but it formalizes the concept of a distribution concentrating about a point. The event {|z_n − z| ≤ δ} is the event that z_n is within δ of the point z, and Pr(|z_n − z| ≤ δ) is the probability of this event. The statement (2.1) is that this probability approaches 1 as the sample size n increases. The definition of convergence in probability requires that this holds for any δ, so even for very small intervals about z, the distribution of z_n concentrates within this interval for large n.

Two comments about the notation are worth mentioning. First, it is conventional to write the convergence symbol as →p, where the "p" above the arrow indicates that the convergence is "in probability". You should try and adhere to this notation, and not simply write z_n → z. Second, it is also important to include the phrase "as n → ∞" to be specific about how the limit is obtained.

When z_n →p z we call z the probability limit (or plim) of z_n.

Students often confuse convergence in probability with convergence in expectation:

E(z_n) → E(z),   (2.2)

but these are distinct concepts. Neither (2.1) nor (2.2) implies the other. To see the distinction it might be helpful to think through a stylized example. Consider a discrete random variable z_n which takes the value 0 with probability 1 − n⁻¹ and the value a_n ≠ 0 with probability n⁻¹, or

Pr(z_n = a_n) = n⁻¹,   Pr(z_n = 0) = 1 − n⁻¹.   (2.3)

In this example the probability distribution of z_n concentrates at zero as n increases, regardless of the sequence a_n. You can check that z_n →p 0 as n → ∞. In this example we can also calculate that the expectation of z_n is

E(z_n) = a_n / n.
Despite the fact that z_n converges in probability to zero, its expectation will not decrease to zero unless a_n/n → 0. If a_n diverges to infinity at a rate equal to n (or faster) then E(z_n) will not converge to zero. For example, if a_n = n, then E(z_n) = 1 for all n, even though z_n →p 0. This example might seem a bit artificial, but the point is that the concepts of convergence in probability and convergence in expectation are distinct, so it is important not to confuse one with the other.

Another common source of confusion with the notation surrounding probability limits is that the expression to the right of the arrow "→p" must be free of dependence on the sample size n. Thus expressions of the form "z_n →p c_n" are notationally meaningless and must not be used.

2.6 Weak Law of Large Numbers

As we mentioned in the two previous sections, the variance of the sample mean decreases to zero as the sample size increases. We now show that this implies that the sample mean converges in probability to the population mean.

When y has a finite variance there is a fairly straightforward proof by applying Chebyshev's inequality (B.26). The latter states that for any random variable z_n and constant δ > 0,

Pr(|z_n − E(z_n)| > δ) ≤ var(z_n)/δ².

Set z_n = μ̂, for which E(z_n) = μ and var(z_n) = σ²/n (by Theorems 2.3.1 and 2.4.1). Then

Pr(|μ̂ − μ| > δ) ≤ σ²/(nδ²).

For fixed σ² and δ, the bound on the right-hand side shrinks to zero as n → ∞. Thus the probability that μ̂ is within δ of μ approaches 1 as n gets large, so μ̂ converges in probability to μ.

We have shown that the sample mean μ̂ converges in probability to the population mean μ. This result is called the weak law of large numbers. Our derivation assumed that y has a finite variance, but this is not necessary: it is only necessary for y to have a finite mean.

Theorem 2.6.1 Weak Law of Large Numbers (WLLN)
If E|y| < ∞ then as n → ∞,
ȳ = (1/n) Σ_{i=1}^{n} y_i →p E(y_i).

The proof of Theorem 2.6.1 is presented in Section 2.15.

The WLLN shows that the estimator μ̂ = ȳ converges in probability to the true population mean μ. An estimator which converges in probability to the population value is called consistent.

Definition 2.6.1 An estimator θ̂ of a parameter θ is consistent if θ̂ →p θ as n → ∞.

Consistency is a good property for an estimator to possess. It means that for any given data distribution, there is a sample size n sufficiently large such that the estimator θ̂ will be arbitrarily close to the true value θ with high probability. Unfortunately it does not mean that θ̂ will actually be close to θ in a given finite sample, but it is a minimal property for an estimator to be considered a "good" estimator.
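The content of the WLLN can be seen directly by simulation. In the sketch below (illustrative Python/numpy as before; the exponential design and the tolerance δ = 0.1 are arbitrary) the fraction of samples in which |ȳ − μ| exceeds δ shrinks toward zero as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)
delta, reps = 0.1, 1_000
mu = 1.0                                   # mean of an Exponential(1) draw

for n in (10, 100, 1_000, 10_000):
    ybars = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    # Empirical Pr(|ybar - mu| > delta): falls toward 0 as n increases.
    print(n, np.mean(np.abs(ybars - mu) > delta))
```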
CHAPTER 2.2. Theorem 7.3 Strong Law of Large Numbers (SLLN) If E y < . but this is not sucient for zn to converge almost surely. (2.1 and E y < . µ = y is consis tent for the population mean µ.s. To ﬁx notation. MOMENT ESTIMATION Theorem 2.6.s. A related concept is almost sure convergence.6.7 VectorValued Moments Our preceding discussion focused on the case where y is realvalued (a scalar). The WLLN is sucient for most purposes in econometrics. the sequence zn converges in probability to zero for any sequence an . For a proof see Billingsley (1995.) Deﬁnition 2. Section 22) or Ash (1972. Almost sure convergence is stronger than p a. This is called the strong law of large numbers.2 Under Assumption 2.2. In the random sampling context the sample mean can be shown to converge almost surely to the population mean. also known as strong convergence.4) n The convergence (2.3) of Section 2.4) is stronger than (2. In order for zn to converge to zero almost surely. 12 Almost Sure Convergence and the Strong Law* Convergence in probability is sometimes called weak convergence. 2. yi E(yi ). to z as n . n i=1 n The proof of the SLLN is technically quite advanced so is not presented here. An event which is random but occurs with probability equal to one is said to be almost sure. Theorem 2.6.5).2 A random variable zn R converges almost surely a. but nothing important changes if we generalize to the case where y Rm is a vector. convergence in probability in the sense that zn z implies zn z.5. the .1) because it computes the probability of a limit rather than the limit of a probability. if for every > 0 Pr lim zn z = 1.s. denoted zn z. In the example (2. y= 1 a. it is necessary that an 0. (In probability theory the term “almost sure” means “with probability equal to one”. so we will not use the SLLN in this text. then as n .
. ym is the vector of means of the individual variables. (Each draw is an mvector. . When working with random vectors y it is convenient to measure their magnitude with the Euclidean norm 2 2 1/2 y = y1 + · · · + ym . MOMENT ESTIMATION elements of y are 13 ym The population mean of y is just the vector of marginal means E (y1 ) E (y2 ) µ = E(y) = . Theorem 2. This is the classic Euclidean length of the vector y. V is often called a variancecovariance matrix.7...2 Weak Law of Large Numbers (WLLN) for random vectors If E y < then as n . It turns out that it is equivalent to describe ﬁniteness of moments in terms of the Euclidean norm of a vector or all individual components. n i=1 n . . .. or equivalently E y < . Since the latter holds if E yj  < for j = 1.1 For y Rm . . You can show that the elements of V are ﬁnite if E y2 < . m. Thus y µ if and only if y j µj for j = 1. Notice that y2 = y y. A random sample {y 1 . Theorem 2. y= 1 p y i E(y i ).. E (ym ) y= y1 y2 ..7. n . . E y < if and only if E yj  < for j = 1.7.CHAPTER 2. . i=1 Theorem 2.) The vector sample mean y1 n y 1 2 y= yi = ...1 implies that the components of µ are ﬁnite if and only if E y < . .. we can state this formally as follows. m.... The m m variance matrix of y is V = var (y) = E (y µ) (y µ) . .. y n } consists of n observations of independent and identically draws from the distribution of y. . . m. Convergence in probability of a vector is deﬁned as convergence in probability of all elements p p in the vector.
if for all u at which F (u) = Pr (z u) is continuous.CHAPTER 2. The typical path to establishing convergence in distribution is through the central limit theorem (CLT). MOMENT ESTIMATION 14 2.8. 2.8. This was extended to cover an approximation to the binomial distribution in 1812 by PierreSimon Laplace. but does not give an approximation to the distribution of an estimator. d p d Theorem 2. the parameter of interest is the vector of functions = g (µ) where g : Rm Rk . What the CLT adds is that the variable z n is also approximately normally distributed. d When z n z. Fn (u) F (u) as n .5) . Pr (z = c) = 1 for some c) we can write the convergence as z n c. It shows that the simple process of averaging induces normality.1 Let z n be a random vector with distribution Fn (u) = Pr (z n u) . That is.1 Central Limit Theorem (CLT). denoted z n z. A largesample or asymptotic approximation can be obtained using the concept of convergence in distribution.9 Functions of Moments We now expand our investigation and consider estimation of parameters which can be written as a continuous function of µ.6) (2. When the limit distribution z is degenerate (that is. which is equivalent to convergence in probability. We say that z n converges in distribution to z as n . the geometric mean of wages w is = exp (E (log (w))) (2.8 Convergence in Distribution The WLLN is a useful ﬁrst step. z n c. The CLT is one of the most powerful and mysterious results in statistical theory. The standardized sum z n = n (y n µ) has mean zero and variance V . and the general statement is credited to the Russian mathematician Aleksandr Lyapunov in 1901. Deﬁnition 2. If E y2 < then as n 1 d n (y n µ) = (y i µ) N (0. it is common to refer to z as the asymptotic distribution or limit distribution of z n . As one example. and that the normal approximation improves as n increases. which states that a standardized sample average converges in distribution to a normal random vector. V ) n i=1 n where µ = Ey and V = E (y µ) (y µ) . The ﬁrst version of the CLT (for the number of heads resulting from many tosses of a fair coin) was established by the French mathematician Abraham de Moivre in 1733.
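Before continuing with functions of moments, the force of Theorem 2.8.1 is worth seeing numerically. In the following sketch (illustrative Python/numpy; the exponential design, n, and the number of replications are arbitrary) the standardized sample mean of a clearly non-normal variable behaves approximately like a normal draw.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 50_000

# y is Exponential(1): mu = 1, V = var(y) = 1, and the density is far from normal.
y = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (y.mean(axis=1) - 1.0)   # sqrt(n) * (ybar - mu)

print(z.mean(), z.var())                   # roughly 0 and 1
print(np.mean(z <= 1.645))                 # roughly the N(0, 1) value 0.95
```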
Ew3 w y = w2 w3 (2. 2 3/2 µ2 µ1 15 where w = wage and g (µ1 . Ew2 . Theorem 2. For example.9) Again. As another example. Instead. then g(z n ) g(c) as n .9. If z n c p as n and g (·) is continuous at c.CHAPTER 2. µ2 .8) The plugin estimate of the skewness of the wage distribution is 3 1 n = n i=1 (wi w) sk 3/2 2 1 n i=1 (wi w) n = µ3 32 µ1 + 23 µ µ1 3/2 µ2 µ2 1 1 j µj = wi .1 Continuous Mapping Theorem (CMT). the plugin estimate of the geometric mean of the wage distribution from (2. Ew3 µ3 3µ2 µ1 + 2µ3 1 . n i=1 n where A useful property is that continuous functions are limitpreserving.6) is = exp() µ with µ= 1 log (wagei ) . p . the hat “^” indicates that is a sample estimate of . n i=1 n Ew µ = Ew2 . MOMENT ESTIMATION which is (2. so it does not have a direct moment estimator. it is common to use a plugin estimate formed by replacing the unknown µ with its point estimate µ so that = g ( ) . µ3 ) = In this case we can set (2. the skewness of the wage distribution is sk = E (w Ew)3 3/2 E (w Ew)2 = g Ew.5) with g(u) = exp (u) and µ = E (log (w)) .7) so that The parameter = g (µ) is not a population moment. µ (2.
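The plug-in idea is mechanical to implement. The sketch below (illustrative Python/numpy; the lognormal "wage" data are simulated for the example and are not the CPS data used elsewhere in the text) computes plug-in estimates of the geometric mean and of the skewness.

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.lognormal(mean=2.5, sigma=0.6, size=5_000)   # simulated "wages"

# Plug-in estimate of the geometric mean gamma = exp(E[log w]).
mu_hat = np.log(w).mean()
gamma_hat = np.exp(mu_hat)

# Plug-in estimate of skewness: population moments replaced by sample moments.
d = w - w.mean()
sk_hat = np.mean(d**3) / np.mean(d**2) ** 1.5

print(gamma_hat, np.exp(2.5))   # the true geometric mean is exp(2.5) in this design
print(sk_hat)                   # positive: the simulated wages are right-skewed
```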
MOMENT ESTIMATION The proof of Theorem 2. p For example. 1 which holds unless w has a degenerate distribution. g (u) = au. To apply Theorem 2. For a proof of Theorem 2. It was ﬁrst proved by Mann and Wald (1943) and is therefore sometimes referred to as the MannWald Theorem Theorem 2.10.CHAPTER 2.10. The condition c = 0 is important as the function g(u) = a/u is not continuous at u = 0.2 and Theorem 2. . d d . the function g(u) = u1 is discontinuous at u = 0.3 of van der Vaart (1998). Theorem 2.2 If E y < and g (u) is continuous at u = µ then p = g ( ) g (µ) = µ as n . For example.1 see Theorem 2.9.7) is continuous for all µ such that var(w) = µ2 µ2 > 0.2 it is necessary to check if the function g is continuous at µ.9.15.7.1 allows the function g to be discontinuous only if the probability at being at a discontinuity point is zero.1 Continuous Mapping Theorem d If z n z as n and g : Rm Rk has the set of discontinuity points d Dg such that Pr (z Dg ) = 0. Thus if E w3 < and var(w) > 0 then as n p sk sk. In our ﬁrst example g(u) = exp (u) is continuous everywhere. We need the following assumption in order for to be consistent for . Also a p a zn c 2 zn c2 p p p 16 if c = 0. 1) then Pr (z = 0) = 0 so zn z 1 . Theorem 2.1 is given in Section 2.2 that if E log (wage) < then as n In our second example g deﬁned in (2. It therefore follows from Theorem 2.10 Delta Method In this section we introduce two tools — an extended version of the CMT and the Delta Method — which allow us to calculate the asymptotic distribution of the parameter estimate . and g (u) = u2 are continuous. then g(z n ) g(z) as n . and thus is consistent for . p 2. We ﬁrst present an extended version of the continuous mapping theorem which allows convergence in distribution.9. A special case of the Continuous Mapping Theorem is known as Slutsky’s Theorem.10. if zn c as n then zn + a c + a azn ac as the functions g (u) = u + a. but if 1 zn z N (0.9.
4 If E y2 < and G (u) = g (u) is continuous in u a neighborhood of u = µ then as n d n N 0. is continuously dierentiable in a neighborhood of then as n d a n (g ( n ) g( 0 )) G where G() = g() (2. because µ µ We need an intermediate step — a ﬁrst order Taylor series expansion. Now by combining Theorems 2.10. This is = g ( ) is written as a function of µ. where is m 1. and g() : Rm Rk . Theorem 2. then as n d n (g ( n ) g( 0 )) N 0. V ) d where V is m m.10) and G = G( 0 ). Despite the fact that the plugin estimator is a function of µ for which we have an asymptotic distribution.1 does not directly give us an asymptotic distribution for .10.1 and 2. it requires the stronger smoothness condition that g() is continuously dierentiable. In particular.10. multiplication and division.3 we can ﬁnd the asymptotic distribution of the plugin estimator . Theorem 2. zn cn zc 3. This step is so critical to statistical theory that it has its own name — The Delta Method. d d 17 zn d z if c = 0 cn c Even though Slutsky’s Theorem is a special case of the CMT.3 Delta Method: d If n ( n 0 ) . if n ( n 0 ) N (0.11) The Delta Method allows us to complete our derivation of the asymptotic distribution of the estimator of . G V G where G = G (µ) . . it is a useful statement as it focuses on the most common applications — addition. not of the standardized sequence n ( µ) . (2. Relative to consistency. MOMENT ESTIMATION Theorem 2.CHAPTER 2. zn + cn z + c 2.8. k m. G V G . Theorem 2.2 Slutsky’s Theorem p d If zn z and cn c as n then 1.10.10.
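As a scalar illustration of the Delta Method (illustrative Python/numpy; the lognormal design is arbitrary), take θ = g(μ) = exp(μ) with μ = E(log x). Then G = g'(μ) = exp(μ), and an asymptotic standard error for θ̂ is |G| √(V/n) with V = var(log x); the sketch computes the plug-in version of this formula.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000
x = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # so log x ~ N(0, 0.25)

logx = np.log(x)
mu_hat = logx.mean()
theta_hat = np.exp(mu_hat)             # plug-in estimate of theta = exp(mu)

# Delta-method standard error: G_hat * sqrt(V_hat / n) with G_hat = exp(mu_hat).
V_hat = logx.var()
se = np.exp(mu_hat) * np.sqrt(V_hat / n)

print(theta_hat, se)
```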
The notation zn = op (1) (pronounced “small p ohPone”) means that zn 0 as n .. the largest observation will also tend to increase. We also say that zn = op (an ) if an is a sequence such that a1 zn = op (1). Precisely. If the support of the distribution of yi is unbounded. .11 Stochastic Order Symbols It is convenient to have simple symbols for random variables and vectors which converge in probability to zero or are stochastically bounded. op (1) + op (1) = op (1) op (1) + Op (1) = Op (1) Op (1) + Op (1) = Op (1) op (1)op (1) = op (1) op (1)Op (1) = op (1) Op (1)Op (1) = Op (1) 2. if z N (0. MOMENT ESTIMATION 18 2.1 If E yr < then as n n1/r max yi  0 1in p . yn }. Op (1) is weaker than op (1) in the sense that zn = op (1) implies zn = Op (1) but not the reverse. Theorem 2. For example. There are many simple rules for manipulating op (1) and Op (1) sequences which can be deduced from the continuous mapping theorem or Slutsky’s Theorem. the notation zn = Op (1) (pronounced “big onPone”) means that zn is bounded in probability. if zn = Op (an ) then zn = op (bn ) for any bn such that an /bn 0. However. then as the sample size n increases. for any consistent estimator for we n then can write = + op (1) n Similarly. It follows that for estimators which satisfy the convergence of Theorem 2.. This is the magnitude of the largest observation in the sample {y1 .4 then we can write = + Op (n1/2 ). For example. for any > 0 there is a constant M < such that lim Pr (zn  > M ) .10. V )) then zn = Op (1).CHAPTER 2.. It turns out that there is a simple characterization. zn = Op (an ) if an is a sequence such that = Op (1). d We say that a1 zn n If a random vector converges in distribution zn z (for example.12.12 Uniform Stochastic Bounds* For some applications it can be useful to obtain the stochastic order of the random variable 1in max yi  .
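Theorem 2.12.1 can also be seen numerically. In the sketch below (illustrative Python/numpy; exponential draws have all moments finite, so any r works, and r = 2 is an arbitrary choice) the largest observation grows roughly like log n, much more slowly than n^(1/r), so the normalized maximum drifts toward zero.

```python
import numpy as np

rng = np.random.default_rng(6)
r = 2.0

for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    y = rng.exponential(scale=1.0, size=n)
    # max|y_i| grows like log n, so max|y_i| / n^(1/r) shrinks (slowly) with n.
    print(n, y.max() / n ** (1.0 / r))
```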
We have established that n1/r max1in yi  0. To simplify the notation. The event max1in yi  > n1/r means that at 1/r or equivalently least one of the yi  exceeds n1/r .1 applies to random vectors.12. If E yr < then 1in max y i = op (n1/r ). for the asymptotic eciency of will follow from that of µ. .CHAPTER 2. Thus the higher the moment.12) as yi = op (n1/r ) uniformly in 1 i n. MOMENT ESTIMATION Equivalently. as required. We will also appeal to the asymptotic theory of maximum likelihood estimation (see Section B.12).12. (2. This is because r E yi  = yr dF (y) < implies E (yi  1 (yi  > c)) = r r y >c r yr dF (y) 0 p as c . 2. It is important to understand when the Op or op symbols are applied to subscript i random variables we typically mean uniform convergence in the sense of (2. we write (2. We start by examining the sample mean µ.25) and the ﬁnal equality is since the yi are iid. An excellent accessible review has been provided by Newey (1990). Theorem 2.1 says that the largest observation will diverge at a rate slower than n1/r . the slower the rate of divergence of the largest observation.1. Our demonstration is based on the rich but technically challenging theory of semiparametric eciency bounds. We now prove Theorem 2. As r increases this rate decreases.12.12) Theorem 2. Since E yr < this ﬁnal expectation converges to zero as n . n Pr n1/r max yi  > = Pr {yi r > r n} 1in i=1 = n i=1 Pr (yi r > n r ) n 1 E (yi r 1 (yi r > n r )) n r i=1 1 E (yi r 1 (yi r > n r )) r where the second inequality is the strong form of Markov’s inequality (Theorem B.13 Semiparametric Eciency In this section we argue that the sample mean µ and plugin estimator = g ( ) are ecient µ estimators of the parameters µ and .11). Since the probability of the union of events is smaller than the sum of the probabilities. Take any . 1in 19 max yi  = op (n1/r ). which is the same as the event n i=1 yi  > n n r r i=1 {yi  > n} .
The relevant question is whether or not the sample mean is ecient when the form of the distribution is unknown. S = Recall. the mean is µ () = yf (y  ) dy which varies with the parameter . MOMENT ESTIMATION 20 n ( µ) N (0. 1/2) . as µ is not asymptotically N (0. This class is too broad n for our current purposes. ES 2 µ where S = µ log f (y  µ) = 2 sgn (y µ) is the score. so the submodel is a true model. Suppose that y R has the double exponential den sity f (y  µ) = 21/2 exp y µ 2 . A parametric submodel for f (y) is a density f (y  ) which is a smooth function of a parameter . In the submodel f (y  ) . The mathematical trick is to reduce the semiparametric model to a set of parametric “submodels”. Formally. Recall from the theory of maximum likelhood that the MLE satisﬁes 1 d n (˜ µ) N 0. we need to be clear about the class of models — the class of permissible distributions. given the density f (y  ) its likelihood score is log f (y  0 ) . It is inconsistent if the density is asymmetric or skewed. Another way of looking at this is that the sample median is ecient in the class of densities f (y  µ) = 21/2 exp y µ 2 but unless it is known that this is the correct distribution class this knowledge is not very useful. V ) . Thus when the true density is known to be double exponential the sample mean is inecient. For estimation of the mean µ of the distribution of y the broadest conceivable class is L1 = {F : E y < } . We call this setting semiparametric as the parameter of interest (the mean) is ﬁnite dimensional while the remaining features of the distribution are unspeciﬁed. The asymptotic variance µ of the MLE is onehalf that of the sample mean. When we seek an ecient estimator of the mean µ in the class of models L2 what we are seeking is the best estimator. it might be helpful to review a setting where the sample mean is inecient. by Theorem B. 1 so the CramerRao lower bound for estimation of is ES S . and there is a true value 0 such that f (y  0 ) = f (y). Speciﬁcally.CHAPTER 2. In the semiparametric context an estimator is called semiparametrically ecient if it has the smallest asymptotic variance among all semiparametric estimators. A more realistic choice is L2 = F : E y2 < — the class of ﬁnitevariance distributions. The equality f (y  0 ) = f (y) means that the submodel class passes through the true density. The index indicates the submodels. given that all we know is that F L2 .11. In this model the maximum likelihood estimator (MLE) µ for µ ˜ µ is the sample median. V ) for all F L1 . suppose that the true density of y is the unknown function f (y) with mean µ = Ey = yf (y)dy.5 the CramerRao lower bound for estimation of µ within the submodel is 1 V = M ES S M . We want to know if µ is the best feasible estimator. or if there is another µ estimator with a smaller asymptotic variance. Deﬁning M = µ ( 0 ) . Let be the class of all submodels for f. While it seems intuitively unlikely that another estimator could have a smaller asymptotic variance. We can d calculate that ES 2 = 2 and thus conclude that n (˜ µ) N (0. we know that if E y2 < then the sample mean has the asymptotic distribution d . To show that the answer is not immediately obvious. But the estimator which achieves this improved eciency — the sample median — is not generically consistent for the population mean. how do we know that this is not the case? When we ask if µ is the best estimator. The CramerRao variance bound can be found for each parametric submodel. 
Since each submodel is parametric we can calculate the eciency bound for estimation of µ within this submodel. The class of submodels and parameter 0 depend on the true density f. Since var (y) = 1 we see that the sample mean sat d isﬁes n (ˆ µ) N (0. 1). So the improvement comes at a great cost. The variance bound for the semiparametric model (the union of the submodels) is then deﬁned as the supremum of the individual variance bounds.
However.15) It is not obvious that this supremum exists. We now ﬁnd this submodel for the sample mean µ. including the true density f . It is a parametric submodel since f (y  0 ) = f (y) when 0 = 0. This is true for all submodels . 2 . Consider the parametric submodel f (y  ) = f (y) 1 + V 1 (y µ) (2. Otherwise there would exist another submodel 1 whose CramerRao lower bound satisﬁes V 0 < V 1 but this would imply V µ < V 1 which contradicts the CramerRao Theorem.11. (2. as it is a lower bound on the asymptotic variance for any semiparametric estimator.14) By Theorem B. since it cannot be smaller than any individual V . The asymptotic variance of any semiparametric estimator cannot be smaller than V . This can be done by creating a tilted version of the true density. (2. Since V 1 (y µ) log f (y  ) = log 1 + V 1 (y µ) = 1 + V 1 (y µ) it follows that the score function for is S = log f (y  0 ) = V 1 (y µ) .13) where f (y) is the true density and µ = Ey. MOMENT ESTIMATION 21 As V is the eciency bound for the submodel class f (y  ) . We call V the semiparametric asymptotic variance bound or semiparametric eciency bound for estimation of µ. Thus f (y  ) is a valid density function. we can deduce that V = V 0 = V µ . If the asymptotic variance of a speciﬁc semiparametric estimator equals the bound V we say that the estimator is semiparametrically ecient.CHAPTER 2. in many cases (including the ones we study) the supremum exists and is unique. Note that 1 f (y  ) dy = f (y)dy + V f (y) (y µ) dy = 1 and for all close to zero f (y  ) 0. Taking the supremum of the CramerRao bounds lower from all conceivable submodels we deﬁne2 V = sup V . This parametric submodel has the mean µ() = yf (y  ) dy = yf (y)dy + f (y)y (y µ) V 1 dy = µ+ which is a smooth function of .3 the CramerRao lower bound for is 1 1 1 E S S = V E (y µ) (y µ) V 1 =V. Suppose that we can ﬁnd a submodel 0 whose CramerRao lower bound satisﬁes V 0 = V µ where V µ is the asymptotic variance of a known semiparametric estimator. However. Thus the asymptotic variance of any semiparametric estimator cannot be smaller than V for any conceivable submodel. For many statistical problems it is quite challenging to calculate the semiparametric variance bound. as V is a matrix so there is not a unique ordering of matrices. no estimator can have an asymptotic variance smaller than V for any density f (y  ) in the submodel class. in some cases there is a simple method to ﬁnd the solution. Our goal is to ﬁnd a parametric submodel whose CramerRao bound for µ is V . In this case.
14 Expectation* j=1 For any random variable y we deﬁne the mean or expectation Ey as follows.5. and Ey2 = 0 .13. and this equals the asymptotic variance of the moment estimator µ. G V G) .2 In the class of distributions F L2 (g) the semiparametric variance bound for estimation of = g (µ) is G V G. For any submodel the CramerRao lower bound for estimation of = g (µ) is G V G by Theorem B. yf (y)dy.4 that if E y < and g (u) is continuously dierentiable at u = µ then the plugin d estimator has the asymptotic distribution n N (0.4.10. g (u) is continuously dierentiable at u = Ey . 2 2 For example. In summary. the semiparametric variance bound for estimation of µ is V = var(yi ). This establishes the following result. Proposition 2. . This was what we set out to show. We know from µ 2 Theorem 2.13. It is a simple matter to extend this result to the plugin estimator = g ( ). µ The result in Proposition 2.13) the CramerRao lower bound for estimation of µ is V which equals the asymptotic variance of the sample mean. we have shown that in the submodel (2. Ey = and if y is continuous with density f Ey = j Pr (y = j ) .2 is quite general. We call this result a proposition rather than a theorem as we have not attended to the regularity conditions. This is a very powerful result.10. Proposition 2. We can unify these deﬁnitions by writing the expectation as the Lebesgue integral with respect to the distribution function F Ey = ydF (y). and the plugin estimator = g ( ) is a semiparametrically ecient estimator of .1 In the class of distributions F L2 . Ey2 < . MOMENT ESTIMATION 22 The CramerRao lower bound for µ() = µ + is also V .13. Thus is semiparametrically ecient.11. Smooth functions of sample moments are ecient estimators for their population counterparts. as most econometric estimators can be written (or approximated) as smooth functions of sample means. if = µ1 /µ2 where µ1 = Ey1 and µ2 = Ey2 then L2 (g) = F : Ey1 < .CHAPTER 2. and the sample mean µ is a semiparametrically ecient estimator of the population mean µ. 2.13) this bound is G V G which equals the asymptotic variance of from Theorem 2. We therefore consider the class of distributions L2 (g) = F : E y2 < . If y is discrete. For the submodel (2.
The mean is well defined and finite if

E|y| = ∫ |y| dF(y) < ∞.

If this does not hold, we evaluate

I₁ = ∫₀^∞ y dF(y),   I₂ = −∫_{−∞}^0 y dF(y).

If I₁ = ∞ and I₂ < ∞ then we define E(y) = ∞. If I₁ < ∞ and I₂ = ∞ then we define E(y) = −∞. If both I₁ = ∞ and I₂ = ∞ then E(y) is undefined.

If μ = E(y) is well defined we say that μ is identified, meaning that the parameter is uniquely determined by the distribution of the observed variables. The demonstration that the parameters of an econometric model are identified is an important precondition for estimation. Typically, identification holds under a set of restrictions, and an identification theorem carefully describes a set of such conditions which are sufficient for identification.

The mean of y is finite if E|y| < ∞. More generally, y has a finite r'th moment if E|y|^r < ∞. In the case of the mean μ, a sufficient condition for identification is E|y| < ∞.

It is common in econometric theory to assume that the variables, or certain transformations of the variables, have finite moments of a certain order. How should we interpret this assumption? How restrictive is it?

One way to visualize the importance is to consider the class of Pareto densities given by

f(y) = a y^(−a−1),   y > 1.

The parameter a of the Pareto distribution indexes the rate of decay of the tail of the density. Larger a means that the tail declines to zero more quickly. See the figure below, where we show the Pareto density for a = 1 and a = 2.

[Figure: Pareto densities f(y) for a = 1 and a = 2.]

The parameter a also determines which moments are finite. We can calculate that

E|y|^r = ∫₁^∞ y^r a y^(−a−1) dy = a/(a − r) if r < a, and ∞ if r ≥ a.

Thus to allow for stricter finite moments (larger r) we need to restrict the class of permissible densities (require larger a).
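The Pareto calculation is easy to check numerically (illustrative Python/numpy; numpy's pareto generator returns the classical Pareto on (1, ∞) after adding 1). With a = 2 the mean a/(a − 1) = 2 is finite but the variance is not, so the sample mean is noisy; with a = 1 the mean does not exist and the sample mean never settles down.

```python
import numpy as np

rng = np.random.default_rng(7)

a, r = 2.0, 1.0
print("E[y^r] for a = 2, r = 1:", a / (a - r))     # = 2, finite since r < a

y2 = 1.0 + rng.pareto(2.0, size=200_000)           # Pareto density with a = 2
print("sample mean, a = 2:", y2.mean())            # near 2, but noisy

y1 = 1.0 + rng.pareto(1.0, size=200_000)           # Pareto density with a = 1
print("sample mean, a = 1:", y1.mean())            # unstable across reruns
```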
and (2. the restriction that y has a ﬁnite r’th moment means that the tail of y’s density declines to zero faster than y r1 . Then by Jensen’s Inequality (B. For example.17) We now show that sum of the expectations on the righthandside can be bounded below 3. It is helpful to know that the existence of ﬁnite moments is monotonic. 2.1: Without loss of generality. we can assume E(yi ) = 0 by recentering yi on its expectation. (2. Proof of Theorem 2. Liapunov’s Inequality (B. then E yr < for all 0 r p. (E w)2 E w2 = 2 Ewi 4C 2 = 2 n n (2. broadly speaking. Deﬁne the random variables E yi 1 (yi  > C) + E (yi 1 (yi  > C)) (2. by the Triangle Inequality (A. Ey 2 < implies E y < and thus both the mean and variance of y are ﬁnite.16).6. n n i=1 i=1 (2. so that (2. by a similar argument wi  = yi 1 (yi  C) E (yi 1 (yi  C)) 2 yi 1 (yi  C) 2C yi 1 (yi  C) + E (yi 1 (yi  C)) (2.21) . Set = /3.18). E zi  = E yi 1 (yi  > C) E (yi 1 (yi  > C)) 2E yi 1 (yi  > C) 2. MOMENT ESTIMATION 24 Thus.15).20).9) and the Expectation Inequality (B.20) where the ﬁnal inequality is (2.23) implies that if E yp < for some p > 0.CHAPTER 2.16) (where 1 (·) is the indicator function) which is possible since E yi  < . We need to show that for all > 0 and > 0 there is some N < so that for all n N. First. Fix and .19) Second. Pick C < large enough so that E (yi  1 (yi  > C)) wi = yi 1 (yi  C) E (yi 1 (yi  C)) zi = yi 1 (yi  > C) E (yi 1 (yi  > C)) y =w+z and E y E w + E z .18) n n 1 1 E z = E zi E zi  2. the fact that the wi are iid and mean zero. Pr (y > ) . The faster decline of the tail means that the probability of observing an extreme value of y is a more rare event.15 Technical Proofs* In this section we provide proofs of some of the more technical points in the chapter. These proofs may only be of interest to more mathematically inclined.9) and (2.18) and thus by the Triangle Inequality (A.
. For the reverse inequality. if E y < . yj  y .14) 1/2 m m 2 y = yj yj  .19) and (2. For Rm ... C() = i E y i y i exp i y i C(0) = 1 C(0) = iE (y i ) = 0 2 C(0) = E y i y i = V . They are C() = iE y i exp i y i 2 2 .13) this is sucient to established that ny n converges in distribution to N (0. the Euclidean norm of a vector is larger than the length of any individual component. it is sucient to consider the case µ = 0. Our proof method is to calculate the characteristic function of ny n and show that it converges pointwise to the characteristic function of N (0. Since y i has two ﬁnite moments the ﬁrst and second derivatives of C() are continuous in . Proof of Theorem 2. By Lévy’s Continuity Theorem (see Van der Vaart (2008) Theorem 2. (2. c() = C()1 C() 2 2 1 C() C()2 C () C() c() = C() When evaluated at = 0 Furthermore. . Equations (2.21) together show that E y 32 (2.1: By Loève’s cr Inequality (B.22)..8. Pr (y > ) . Pr (y > ) E y 3 = .7. as needed. V ) . V ) . j=1 j=1 Thus if E yj  < for j = 1. by Markov’s Inequality (B. Without loss of generality. Finally.17).CHAPTER 2. m. Proof of Theorem 2. Thus. let C () = E exp i y i denote the characteristic function of y i and set c () = log C().. MOMENT ESTIMATION 25 the ﬁnal inequality holding for n 4C 2 /2 = 36C 2 / 2 2 . then E yj  < for j = 1. so for any j. then E y m j=1 E yj  < .1: The moment bound Ey y i < is sucient to guarantee that the i elements of µ and V are well deﬁned and ﬁnite.. . the ﬁnal equality by the deﬁnition of .22) as desired. c () = c () = . We have shown that for any > 0 and > 0 then for all n 36C 2 / 2 2 .24) and (2. m.
24) . This completes the proof. nj It follows that ajn = gj ( ) gj 0. 1 log Cn () V 2 and 1 Cn () exp V 2 which is the characteristic function of the N (0. 1 1 c() = c(0) + c (0) + c ( ) = c ( ) 2 2 26 (2. Proof of Theorem 2. V ) distribution. By the properties of the exponential function.3: By a vector Taylor series expansion. We now compute Cn () = E exp i ny n . Proof of Theorem 2. the characteristic function of ny n . we ﬁnd jn n (g ( n ) g()) = (G + an ) n ( n ) G . for each element of g. Stacking across elements of g.23) where lies on the line segment joining 0 and . By a secondorder Taylor series expansion of c() about = 0. gj ( n ) = gj () + gj ( ) ( n ) jn where lies on the line segment between n and and therefore converges in probability to . c (n ) c (0) = V. Thus p Pr (g (z n ) g (c) ) Pr (z n c < ) 1 as n by the assumption that z n c.9.1: Since g is continuous at c. We thus ﬁnd that as n . p Hence g(z n ) g(c) as n . MOMENT ESTIMATION so when evaluated at = 0 c(0) = 0 c (0) = 0 c (0) = V .23) n 1 log Cn () = log E exp i yi n i=1 n 1 = log E exp i y i n i=1 n 1 = log E exp i y i n i=1 n 1 = log E exp i y i n i=1 = nc n 1 = c (n ) 2 where n lies on the line segment joining 0 and / n. for all > 0 we can ﬁnd a > 0 such that if z n c < then g (z n ) g (c) . Since n 0 and c () is continuous. the independence of the y i . the deﬁnition of c() and (2.10. Recall that A B implies Pr(A) Pr(B).CHAPTER 2. d p (2.
V ) = N 0.10.CHAPTER 2. This establishes (2. and their product is continuous.24) equals G = G N (0. G V G establishing (2.10) When N (0. as G + an G. n ( n ) . MOMENT ESTIMATION 27 d d The convergence is by Theorem 2. the righthandside of (2. . V ) .1.11).
and that the person who answers your call will respond honestly. (Assume for simplicity that all workers have equal access to telephones. du The density contains the same information as the distribution function. we view the wage of an individual worker as a random variable wage with the probability distribution F (u) = Pr(wage u). Since wage rates vary across workers. we cannot describe wage rates by a single number.) In this thought experiment. and then asking the person who responds to tell us their wage rate. or covariates). leastsquares is a tool to estimate an approximate conditional mean of one variable (the dependent variable) given another set of variables (the regressors.2 The Distribution of Wages Suppose that we are interested in wage rates in the United States. As we will see. By making many such phone calls we can learn the distribution F of the entire population. we can describe wages using a probability distribution. A useful thought experiment is to imagine dialing a telephone number selected at random. When a distribution function F is dierentiable we deﬁne the probability density function f (u) = d F (u). conditioning variables. 28 . the wage of the person you have called is a single draw from the distribution F of wages in the population.1 Introduction The most commonly applied econometric tool is leastsquares estimation. Formally.Chapter 3 Conditional Expectation and Projection 3. 3. but the density is typically easier to visually interpret. When we say that a person’s wage is random we mean that we do not know their wage before it is measured. and we treat observed wage rates as realizations from the distribution F. also known as regression. Treating unobserved wages as random variables and observed wages as realizations is a powerful mathematical abstraction which allows us to use the tools of mathematical probability. Instead. and focus on the probabilistic foundation of the conditional expectation model and its projection approximation. In this chapter we abstract from estimation.
In Figure 3.1 we display estimates(1) of the probability distribution function (on the left) and density function (on the right) of U.S. wage rates in 2009. We see that the density is peaked around $15, and most of the probability mass appears to lie between $10 and $40. These are ranges for typical wage rates in the U.S. population.

[Figure 3.1: Wage Distribution and Density. All full-time U.S. workers. Left panel: wage distribution function; right panel: wage density. Horizontal axes in dollars per hour.]

Important measures of central tendency are the median and the mean. The median m of a continuous(2) distribution F is the unique solution to

F(m) = 1/2.

The median U.S. wage ($19.23) is indicated in the left panel of Figure 3.1 by the arrow. The median is a robust(3) measure of central tendency, but it is tricky to use for many calculations as it is not a linear operator. For this reason the median is not the dominant measure of central tendency in econometrics.

As discussed in Sections 2.2 and 2.14, the expectation or mean of a random variable y with density f is

μ = E(y) = ∫ u f(u) du.

Here we have used the common and convenient convention of using the single character y to denote a random variable, rather than the more cumbersome label wage. The mean U.S. wage ($23.90) is indicated in the right panel of Figure 3.1 by the arrow.

The mean is a convenient measure of central tendency because it is a linear operator and arises naturally in many economic models. A disadvantage of the mean is that it is not robust,(4) especially in the presence of substantial skewness or thick tails, which are both features of the wage distribution.

(1) The distribution and density are estimated nonparametrically from the sample of 50,742 full-time non-military wage-earners reported in the March 2009 Current Population Survey. The wage rate is constructed as individual wage and salary earnings divided by hours worked.
(2) If F is not continuous the definition is m = inf{u : F(u) ≥ 1/2}.
(3) The median is not sensitive to perturbations in the tails of the distribution.
(4) The mean is sensitive to perturbations in the tails of the distribution.
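The contrast between the mean and the more robust measures is easy to reproduce with simulated data (illustrative Python/numpy; the lognormal design below only loosely mimics the skewed, thick-tailed shape of Figure 3.1 and is not the CPS sample).

```python
import numpy as np

rng = np.random.default_rng(8)
wage = rng.lognormal(mean=2.95, sigma=0.6, size=100_000)   # skewed "wages"

print("mean:            ", wage.mean())                    # pulled up by the right tail
print("median:          ", np.median(wage))
print("geometric mean:  ", np.exp(np.log(wage).mean()))
print("share below mean:", np.mean(wage < wage.mean()))    # well above one half
```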
Is this wage distribution the same for all workers.2) We call these means conditional as they are conditioning on a ﬁxed value of the variable gender.1) (3. 3. For this reason. While you might not think of gender as a random variable. the geometric mean exp (E (log w)) = $19.2 the density of log wages.1. CONDITIONAL EXPECTATION AND PROJECTION 30 as can be seen easily in the right panel of Figure 3.81) indicated by the arrows. . Log Wage Density 1 2 3 4 5 6 Log Dollars per Hour Figure 3.90 as a “typical” wage rate.CHAPTER 3. with its mean 2. men and women. The plot on the left in Figure 3. (3.05 E (log(wage)  gender = woman) = 2. We can see that the two wage densities take similar shapes but the density for men is somewhat shifted to the right with a higher mean. wage regressions typically use log wages as a dependent variable rather than the level of wages. The density of log wages is much less skewed and fattailed than the density of the level of wages. More precisely. We can write their speciﬁc values as E (log(wage)  gender = man) = 3.95 is a much better (more robust) measure6 of central tendency of the distribution.05 and 2. we will use log(y) to denote the natural logarithm of y. or does the wage distribution vary across subpopulations? To answer this question.2 shows the density of log hourly wages log(wage) for the same population.95 drawn in with the arrow.S.2: Log Wage Density In this context it is useful to transform the data by taking the natural logarithm5 . They are called the conditional means (or conditional expectations) of log wages given gender. men and women with their means (3.3 Conditional Expectation We saw in Figure 3. it is random from the viewpoint of 5 6 Throughout the text.3 displays the densities of log wages for U.90.81 are the mean log wages in the subpopulations of men and women workers. suggesting that it is incorrect to describe $23. Another way of viewing this is that 64% of workers earn less that the mean wage of $23. we can compare wage distributions for dierent groups — for example.11 is a robust measure of central tendency. so its mean E (log(wage)) = 2.05 and 2.81. The values 3. Figure 3.
This dierence equals E (log(wage)  gender = man) E (log(wage)  gender = woman) = 3. the gender of the individual is unknown and thus random.3.) Consider further splitting the men and women subpopulations by race. a hasty inference might be that there is not a meaningful dierence between the wage distributions of men and women.3 more carefully. We display the log wage density functions of four of these groups on the right in Figure 3.S.03 women 2. Table 3.1 reports the mean log wage for each of the six subpopulations.CHAPTER 3.86 Table 3.24 implies an average 24% dierence between the wages of men and women. CONDITIONAL EXPECTATION AND PROJECTION 31 white men white women black men black women Log Wage Density Women Men 0 1 2 3 Log Dollars per Hour 4 5 6 Log Wage Density 1 2 3 Log Dollars per Hour 4 5 Figure 3. dividing the population into whites. which is quite substantial. Again we see that the primary dierence between the four density functions is their central tendency. and other races.07 2.73 2.05 2. the probability that a worker is a woman happens to be 43%.86 3. and the means of subpopulations are then conditional means. it is most appropriate to view all measurements as random variables. white black other men 3.24 (3.1 are the conditional means of log(wage) given gender and race. (In the population of U.82 2. For example E (log(wage)  gender = man. Right: Log Wage Density by Gender and Race econometric analysis.1: Mean Log Wages by Sex and Race The entries in Table 3. As we mentioned above.07 .81 = 0. (For an explanation of logarithmic and percentage dierences see the box on Log Dierences below.) In observational data. blacks.3 appear similar. Focusing on the means of these distributions.3) A dierence in expected log wages of 0.3: Left: Log Wage Density for Women and Men. As the two densities in Figure 3. workers. If you randomly select an individual. race = white) = 3. Before jumping to this conclusion let us examine the dierences in the distributions of Figure 3. the primary dierence between the two densities appears to be their means.
Table 3. CONDITIONAL EXPECTATION AND PROJECTION and E (log(wage)  gender = woman. and thereby facilitate comparisons across groups. and that between black men and black women is 13%. race = black) = 2. as the average gap between white men and white women is 25%. In particular. Because of this simplifying property. we can see that the wage gap between men and women continues after disaggregation by race. . For example. the average wage gap between white men and black men is 21%. We also can see that there is a race gap. and that between white women and black women is 9%.73 32 One beneﬁt of focusing on conditional means is that they reduce complicated distributions to a single summary measure.CHAPTER 3. conditional means are the primary interest of regression analysis and are a major focus in econometrics.1 allows us to easily calculate average wage dierences between groups. as the average wages of blacks are substantially less than the other race categories.
Log Differences

A useful approximation for the natural logarithm for small x is

log(1 + x) ≈ x.   (3.4)

This can be derived from the infinite series expansion of log(1 + x):

log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ··· = x + O(x²).

The symbol O(x²) means that the remainder is bounded by Ax² as x → 0 for some A < ∞. A plot of log(1 + x) and the linear approximation x is shown in the following figure. We can see that log(1 + x) and the linear approximation x are very close for |x| ≤ 0.1, and reasonably close for |x| ≤ 0.2, but the difference increases with |x|.

[Figure: log(1 + x) and the linear approximation x.]

Now, if y* is c% greater than y, then

y* = (1 + c/100) y.

Taking natural logarithms,

log y* = log y + log(1 + c/100),

or

log y* − log y = log(1 + c/100) ≈ c/100,

where the approximation is (3.4). This shows that 100 multiplied by the difference in logarithms is approximately the percentage difference between y and y*, and this approximation is quite good for |c| ≤ 10%.

3.4 Conditional Expectation Function

An important determinant of wage levels is education. In many empirical studies economists measure educational attainment by the number of years of schooling, and we will write this variable as education.(7)

(7) Here, education is defined as years of schooling beyond kindergarten. A high school graduate has education=12, a college graduate has education=16, a Master's degree has education=18, and a professional degree (medical, law or PhD) has education=20.
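As a quick numerical check of the approximation in the Log Differences box (illustrative Python/numpy; the values of x are arbitrary): note also that exp(0.24) − 1 ≈ 0.27, so the 0.24 gap in mean log wages discussed above corresponds to roughly a 27% level difference, with "24%" being the log-point approximation used in the text.

```python
import numpy as np

# log(1 + x) versus the linear approximation x.
for x in (0.01, 0.05, 0.10, 0.25):
    print(x, np.log(1.0 + x), np.log(1.0 + x) - x)   # the error grows with x

# The 0.24 log-wage gap between men and women in exact percentage terms:
print(np.exp(0.24) - 1.0)
```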
. race) as it varies across the entries. x2 ) = (gender .. a conditioning variable (such as gender ) by the letter x. we will typically write the conditioning variables as a vector in Rk : x1 x2 x = . and education is a single number for each category. The CEF is a function of (gender . xk . xk PhD) has education=20. We see that the conditional mean is increasing in years of education. It is conventional in econometrics to denote the dependent variable (e. For example E (log(wage)  gender = man.0 ● white men white women 18 20 4 6 8 10 12 14 16 Years of Education Figure 3..g.. . . education and gender ) by the subscripted letters x1 . . race. education = 12) = 2. We call this the conditional expectation function (CEF). xk ) = m(x1 . . race) is given by the six entries of Table 3.. and multiple conditioning variables (such as race. log(wage)) by the letter y. . x and/or z.. race = white. 4. Conditional expectations can be written with the generic notation E (y  x1 .... x2 .. For example. xk ).4 is that the gap between men and women is roughly constant for all education levels. xk ) as it varies with the variables.CHAPTER 3.. x2 ..84 We display in Figure 3.5) . x2 .0 ● ● ● 2. . The plot is quite revealing.. x2 . CONDITIONAL EXPECTATION AND PROJECTION 34 The conditional mean of log wages given gender.4: Mean Log Wage as a Function of Years of Education In many cases it is convenient to simplify the notation by writing variables using single characters. (3. Another striking feature of Figure 3.0 ● 3. For greater compactness. typically y.4 the conditional means of log(wage) for white men and white women as a function of education.1. but at a dierent rate for schooling levels above and below nine years.5 ● ● ● ● ● ● 2. As the variables are measured in logs this implies a constant average percentage gap between men and women regardless of educational attainment. the conditional expectation of y = log(wage) given (x1 .5 ● ● Log Dollars per Hour 3. The CEF is a function of (x1 .
the use of E (y  x) should be apparent from the context. CONDITIONAL EXPECTATION AND PROJECTION 35 Here we follow the convention of using lower case bold italics x to denote a vector.0 Exp=5 Exp=10 Exp=25 Exp=40 3. take y = log(wage) and x = experience. the CEF can be compactly written as E (y  x) = m (x) .5: Left: Joint density of log(wage) and experience and conditional mean of log(wage) given experience for white men with education=12.6) The conditional density is a slice of the joint density f (y.0 2.5 3. However.5 for the population of white men with 12 years of education. x) are continuously distributed with a joint density function f (y.5 2. Right: Conditional densities of log(wage) for white men with education=12. We will not always enforce this distinction as it can become notationally burdensome. R For any x such that fx (x) > 0 the conditional density of y given x is deﬁned as fyx (y  x) = f (y. 3. x) holding x ﬁxed.0 0 10 20 30 40 50 Log Wage Conditional Density Log Dollars per Hour 2. We can visualize this by slicing the joint density function at a speciﬁc value of x parallel with the yaxis.5 3. Hopefully. For example. we take up this case and assume that the variables (y. . Sometimes. (And it is mathematically correct to do so.0 4. the number of years of labor market experience.) The ﬁrst expression E (y  x) is a random variable and the second expression E (y  x = x0 ) is a function. x). As an example. x) . The contours of their joint density are plotted on the left side of Figure 3.0 1.5 Continuous Variables In the previous sections. x)dy. it is useful to notationally distinguish E (y  x) as the CEF evaluated at the random vector x from E (y  x = x0 ) as the CEF evaluated at the ﬁxed value x0 .5 2.0 3.5 Labor Market Experience (Years) Log Dollars per Hour Figure 3.0 1. Given this notation. 4. In this section. x) the variable x has the marginal density fx (x) = f (y. we implicitly assumed that the conditioning variables are discrete. many conditioning variables are continuous.5 4. fx (x) (3. Given the joint density f (y.CHAPTER 3.
79 0. In Figure 3. 3. 25. This is idealized since x is continuously distributed so this subpopulation is inﬁnitely small.05 0. 10. 3.5. the simple law states that E (log(wage)  gender = man) Pr (gender = man) = E (log(wage)) . and from 10 to 25 years.5 the CEF of log(wage) given experience is plotted as the solid line.7) R Intuitively. The CEF of y given x is the mean of the conditional density (3. +E (log(wage)  gender = woman) Pr (gender = woman) . We do this for four levels of experience (5. CONDITIONAL EXPECTATION AND PROJECTION 36 take the density contours on the left side of Figure 3.1 Simple Law of Iterated Expectations If E y < then for any random vector x.6) m (x) = E (y  x) = yfyx (y  x) dy. We can see that the CEF is a smooth but nonlinear function. Theorem 3.43 = 2. When x is discrete E (E (y  x)) = E (y  x j ) Pr (x = x j ) j=1 and when x is continuous E (E (y  x)) = Going back to our investigation of average log wages for men and women. Or numerically. but there is little change from 25 to 40 years experience. ﬂattens out around experience = 30.5 and slice through the contour plot at a speciﬁc value of experience. The general law of iterated expectations allows two sets of conditioning variables. This gives us the conditional density of log(wage) for white men with 12 years of education and this level of experience.92. The CEF is initially increasing in experience. We can see that the distribution of wages shifts to the right and becomes more diuse as experience increases from 5 to 10 years. and then decreases for high levels of experience. Rk E (y  x ) fx (x)dx.6 Law of Iterated Expectations An extremely useful tool from probability theory is the law of iterated expectations.6. E (E (y  x)) = E (y) The simple law states that the expectation of the conditional expectation is the unconditional expectation. and 40 years). In other words. and plot these densities on the right side of Figure 3.57 + 2. (3. the average of the conditional averages is the unconditional average. An important special case is the known as the Simple Law.CHAPTER 3. m (x) is the mean of y for the idealized subpopulation where the conditioning variables are ﬁxed at x.
The general law of iterated expectations allows two sets of conditioning variables.

Theorem 3.6.2 Law of Iterated Expectations
If E|y| < ∞ then for any random vectors x1 and x2,

    E(E(y | x1, x2) | x1) = E(y | x1).    (3.8)

Notice the way the law is applied. The inner expectation conditions on x1 and x2, while the outer expectation conditions only on x1. The iterated expectation yields the simple answer E(y | x1), the expectation conditional on x1 alone. Sometimes we phrase this as: "The smaller information set wins."

As an example,

    E(log(wage) | gender = man, race = white) Pr(race = white | gender = man)
      + E(log(wage) | gender = man, race = black) Pr(race = black | gender = man)
      + E(log(wage) | gender = man, race = other) Pr(race = other | gender = man)
      = E(log(wage) | gender = man),

or numerically,

    3.07 × 0.84 + 2.86 × 0.08 + 3.05 × 0.08 = 3.05.

A property of conditional expectations is that when you condition on a random vector x you can effectively treat it as if it is constant. For example, E(x | x) = x and E(g(x) | x) = g(x) for any function g(·). The general property is known as the conditioning theorem.

Theorem 3.6.3 Conditioning Theorem
If E|g(x) y| < ∞ then

    E(g(x) y | x) = g(x) E(y | x)    (3.9)

and

    E(g(x) y) = E(g(x) E(y | x)).    (3.10)

The proofs of Theorems 3.6.2 and 3.6.3 are given in Section 3.30.

3.7 Monotonicity of Conditioning

What is the effect of increasing the amount of information when constructing a conditional expectation? That is, how do we compare E(y | x1) versus E(y | x1, x2)? We have seen that by increasing the conditioning set, the conditional expectation reveals greater detail about the distribution of y. Is there something more that can be said? It turns out that there is a simple relationship induced by conditioning.

We can think of the conditional mean E(y | x1) as the "explained portion" of y. The remainder y − E(y | x1) is the "unexplained portion". The simple relationship we now derive shows that the variance of this unexplained portion decreases when we condition on more variables. This relationship is monotonic in the sense that increasing the amount of information always decreases the variance of the unexplained portion.
Theorem 3.7.1 If Ey² < ∞ then

    var(y) ≥ var(y − E(y | x1)) ≥ var(y − E(y | x1, x2)).

Theorem 3.7.1 says that the variance of the difference between y and its conditional mean (weakly) decreases whenever an additional variable is added to the conditioning information. The proof of Theorem 3.7.1 is given in Section 3.30.

3.8 CEF Error

The CEF error e is defined as the difference between y and the CEF evaluated at the random vector x:

    e = y − m(x).

By construction, this yields the formula

    y = m(x) + e.    (3.11)

In (3.11) it is useful to understand that the error e is derived from the joint distribution of (y, x), and so its properties are derived from this construction.

A key property of the CEF error is that it has a conditional mean of zero. To see this, by the linearity of expectations, the definition m(x) = E(y | x) and the Conditioning Theorem,

    E(e | x) = E((y − m(x)) | x)
             = E(y | x) − E(m(x) | x)
             = m(x) − m(x)
             = 0.

This fact can be combined with the law of iterated expectations to show that the unconditional mean is also zero:

    E(e) = E(E(e | x)) = E(0) = 0.

We state this and some other results formally.

Theorem 3.8.1 Properties of the CEF error
If E|y| < ∞ then
1. E(e | x) = 0.
2. E(e) = 0.
3. If E|y|^r < ∞ for r ≥ 1 then E|e|^r < ∞.
4. For any function h(x) such that E|h(x) e| < ∞, E(h(x) e) = 0.

The proof of the third result is deferred to Section 3.30. The fourth result, whose proof is left as an exercise, says that e is uncorrelated with any function of the regressors.

The equations

    y = m(x) + e
    E(e | x) = 0
together imply that m(x) is the CEF of y given x. It is important to understand that this is not a restriction. These equations hold true by definition.

The condition E(e | x) = 0 is implied by the definition of e as the difference between y and the CEF m(x). The equation E(e | x) = 0 is sometimes called a conditional mean restriction, since the conditional mean of the error e is restricted to equal zero. The property is also sometimes called mean independence, for the conditional mean of e is 0 and thus independent of x. However, it does not imply that the distribution of e is independent of x. Sometimes the assumption "e is independent of x" is added as a convenient simplification, but it is not a generic feature of the conditional mean. Typically and generally, e and x are jointly dependent, even though the conditional mean of e is zero.

As an example, the contours of the joint density of e and experience are plotted in Figure 3.6 for the same population as Figure 3.5. The error e has a conditional mean of zero for all values of experience, but the shape of the conditional distribution varies with the level of experience.

[Figure 3.6: Joint density of the CEF error e and labor market experience (years) for white men with education = 12.]

As a simple example of a case where x and e are mean independent yet dependent, let y = xu where x and u are independent and Eu = 1. Then

    E(y | x) = E(xu | x) = x E(u | x) = x,

so the CEF equation is

    y = x + e

where

    e = x(u − 1).

Note that even though e is not independent of x,

    E(e | x) = E(x(u − 1) | x) = x E((u − 1) | x) = 0,

and e is thus mean independent of x.
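The distinction between mean independence and full independence is easy to see in a simulation of this example. The Python sketch below is a minimal illustration under made-up assumptions: x is taken to be uniform and u exponential, which satisfies the requirement that x and u be independent with Eu = 1, but any such pair of distributions would do.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500_000

    # x and u independent with E(u) = 1 (distributions chosen for illustration)
    x = rng.uniform(0.5, 3.0, size=n)
    u = rng.exponential(1.0, size=n)   # Eu = 1

    y = x * u            # so the CEF is m(x) = E(y|x) = x
    e = y - x            # CEF error e = x(u - 1)

    # Within narrow bins of x the mean of e is (approximately) zero at every
    # level, but the spread of e clearly grows with x: mean independence does
    # not imply that the distribution of e is independent of x.
    bins = np.linspace(0.5, 3.0, 6)
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (x >= lo) & (x < hi)
        print(f"x in [{lo:.1f},{hi:.1f}): mean(e) = {e[sel].mean():+.3f}, sd(e) = {e[sel].std():.3f}")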
An important measure of the dispersion about the CEF function is the unconditional variance of the CEF error e. We write this as

    σ² = var(e) = E(e − Ee)² = E(e²).

Theorem 3.8.1.3 implies the following simple but useful result.

Theorem 3.8.2 If Ey² < ∞ then σ² < ∞.

3.9 Best Predictor

Suppose that given a realized value of x, we want to create a prediction or forecast of y. We can write any predictor as a function g(x) of x. The prediction error is the realized difference y − g(x). A non-stochastic measure of the magnitude of the prediction error is the expectation of its square

    E(y − g(x))².    (3.12)

We can define the best predictor as the function g(x) which minimizes (3.12). What function is the best predictor? It turns out that the answer is the CEF m(x). This holds regardless of the joint distribution of (y, x).

To see this, note that the mean squared error of a predictor g(x) is

    E(y − g(x))² = E(e + m(x) − g(x))²
                 = E(e²) + 2 E(e (m(x) − g(x))) + E(m(x) − g(x))²
                 = E(e²) + E(m(x) − g(x))²
                 ≥ E(e²)
                 = E(y − m(x))²

where the first equality makes the substitution y = m(x) + e and the third equality uses Theorem 3.8.1.4. The right-hand side after the third equality is minimized by setting g(x) = m(x), yielding the final inequality. The minimum is finite under the assumption Ey² < ∞ as shown by Theorem 3.8.2.

We state this formally in the following result.

Theorem 3.9.1 Conditional Mean as Best Predictor
If Ey² < ∞, then for any predictor g(x),

    E(y − g(x))² ≥ E(y − m(x))²

where m(x) = E(y | x).

3.10 Conditional Variance

While the conditional mean is a good measure of the location of a conditional distribution, it does not provide information about the spread of the distribution. A common measure of the dispersion is the conditional variance.

Definition 3.10.1 If Ey² < ∞, the conditional variance of y given x is

    σ²(x) = var(y | x) = E((y − E(y | x))² | x) = E(e² | x).
Generally, σ²(x) is a non-trivial function of x and can take any form subject to the restriction that it is non-negative. The conditional standard deviation is its square root σ(x) = √(σ²(x)). One way to think about σ²(x) is that it is the conditional mean of e² given x.

As an example of how the conditional variance depends on observables, compare the conditional log wage densities for men and women displayed in Figure 3.3. The difference between the densities is not purely a location shift, but is also a difference in spread. Specifically, we can see that the density for men's log wages is somewhat more spread out than that for women, while the density for women's wages is somewhat more peaked. Indeed, the conditional standard deviation for men's wages is 3.05 and that for women is 2.81. So while men have higher average wages, they are also somewhat more dispersed.

The unconditional error variance and the conditional variance are related by the law of iterated expectations

    σ² = E(e²) = E(E(e² | x)) = E(σ²(x)).

That is, the unconditional error variance is the average conditional variance.

Given the conditional variance, we can define a rescaled error

    ε = e / σ(x).    (3.13)
We can calculate that since σ(x) is a function of x,

    E(ε | x) = E(e / σ(x) | x) = (1 / σ(x)) E(e | x) = 0

and

    var(ε | x) = E(ε² | x) = E(e² / σ²(x) | x) = (1 / σ²(x)) E(e² | x) = σ²(x) / σ²(x) = 1.

Thus ε has a conditional mean of zero, and a conditional variance of 1. Notice that (3.13) can be rewritten as

    e = σ(x) ε.

Substituting this for e in the CEF equation (3.11), we find that

    y = m(x) + σ(x) ε.    (3.14)
This is an alternative (mean-variance) representation of the CEF equation.

Many econometric studies focus on the conditional mean m(x) and either ignore the conditional variance σ²(x), treat it as a constant σ²(x) = σ², or treat it as a nuisance parameter (a parameter not of primary interest). This is appropriate when the primary variation in the conditional distribution is in the mean, but can be short-sighted in other cases. Dispersion is relevant to many economic topics, including income and wealth distribution, economic inequality, and price dispersion. Conditional dispersion (variance) can be a fruitful subject for investigation.

The perverse consequences of a narrow-minded focus on the mean have been parodied in a classic joke:
An economist was standing with one foot in a bucket of boiling water and the other foot in a bucket of ice. When asked how he felt, he replied, "On average I feel just fine."
Clearly, the economist in question ignored variance!
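The mean-variance representation (3.14) also suggests a concrete way to think about σ²(x): condition on x and look at the spread of y. The following Python sketch is purely illustrative; the functions m(x) and σ(x) below are invented for the example. Data are generated as y = m(x) + σ(x)ε with E(ε | x) = 0 and var(ε | x) = 1, and the conditional mean and conditional standard deviation are then recovered by averaging within narrow bins of x.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 400_000

    # Hypothetical CEF and conditional standard deviation (chosen for illustration)
    def m(x):
        return 1.5 + 0.4 * x - 0.02 * x**2      # conditional mean m(x)

    def sigma(x):
        return 0.3 + 0.05 * x                   # conditional standard deviation

    x = rng.uniform(0, 10, size=n)
    eps = rng.normal(0, 1, size=n)              # E(eps|x) = 0, var(eps|x) = 1
    y = m(x) + sigma(x) * eps                   # mean-variance representation (3.14)

    # Recover m(x) and sigma(x) by conditioning on narrow bins of x
    for lo in [0.0, 2.0, 4.0, 6.0, 8.0]:
        sel = (x >= lo) & (x < lo + 0.5)
        xm = x[sel].mean()
        print(f"x near {xm:4.2f}: mean(y) = {y[sel].mean():.2f} vs m = {m(xm):.2f}; "
              f"sd(y) = {y[sel].std():.2f} vs sigma = {sigma(xm):.2f}")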
3.11 Homoskedasticity and Heteroskedasticity
An important special case obtains when the conditional variance σ²(x) is a constant and independent of x. This is called homoskedasticity.

Definition 3.11.1 The error is homoskedastic if E(e² | x) = σ² does not depend on x.
In the general case where σ²(x) depends on x we say that the error e is heteroskedastic.

Definition 3.11.2 The error is heteroskedastic if E(e² | x) = σ²(x) depends on x.
It is helpful to understand that the concepts homoskedasticity and heteroskedasticity concern the conditional variance, not the unconditional variance. By definition, the unconditional variance σ² is a constant and independent of the regressors x. So when we talk about the variance as a function of the regressors, we are talking about the conditional variance σ²(x). Recall Figure 3.3 and how the variance of wages varies between men and women.

Some older or introductory textbooks describe heteroskedasticity as the case where "the variance of e varies across observations". This is a poor and confusing definition. It is more constructive to understand that heteroskedasticity means that the conditional variance σ²(x) depends on observables.

Older textbooks also tend to describe homoskedasticity as a component of a correct regression specification, and describe heteroskedasticity as an exception or deviance. This description has influenced many generations of economists, but it is unfortunately backwards. The correct view is that heteroskedasticity is generic and "standard", while homoskedasticity is unusual and exceptional. The default in empirical work should be to assume that the errors are heteroskedastic, not the converse.

In apparent contradiction to the above statement, we will still frequently impose the homoskedasticity assumption when making theoretical investigations into the properties of estimation and inference methods. The reason is that in many cases homoskedasticity greatly simplifies the theoretical calculations, and it is therefore quite advantageous for teaching and learning. It should always be remembered, however, that homoskedasticity is never imposed because it is believed to be a correct feature of an empirical model, but rather because of its simplicity.
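To make the distinction concrete, here is a small Python sketch, illustrative only, in which the two data-generating processes are invented for the example: one error is homoskedastic and one is heteroskedastic. In both cases the unconditional variance is a single number; only in the heteroskedastic case does the conditional variance E(e² | x) vary with x.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 300_000
    x = rng.uniform(0, 4, size=n)

    # Homoskedastic error: E(e^2 | x) = 0.25 for every x
    e_homo = rng.normal(0, 0.5, size=n)

    # Heteroskedastic error: E(e^2 | x) = (0.2 + 0.3 x)^2 depends on x
    e_het = (0.2 + 0.3 * x) * rng.normal(0, 1, size=n)

    for lo in [0, 1, 2, 3]:
        sel = (x >= lo) & (x < lo + 1)
        print(f"x in [{lo},{lo+1}): var(e_homo|x) = {e_homo[sel].var():.3f}, "
              f"var(e_het|x) = {e_het[sel].var():.3f}")

    # Each unconditional variance is one number: the average of the conditional
    # variances, by the law of iterated expectations.
    print("unconditional variances:", round(e_homo.var(), 3), round(e_het.var(), 3))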
3.12 Regression Derivative
One way to interpret the CEF m(x) = E(y | x) is in terms of how marginal changes in the regressors x imply changes in the conditional mean of the response variable y. It is typical to consider marginal changes in a single regressor, holding the remainder fixed. When a regressor x1 is continuously distributed, we define the marginal effect of a change in x1, holding the variables x2, ..., xk fixed, as the partial derivative of the CEF

    ∂m(x1, ..., xk) / ∂x1.
When x1 is discrete we define the marginal effect as a discrete difference. For example, if x1 is binary, then the marginal effect of x1 on the CEF is

    m(1, x2, ..., xk) − m(0, x2, ..., xk).

We can unify the continuous and discrete cases with the notation

    ∇1 m(x) = { ∂m(x1, ..., xk) / ∂x1,                    if x1 is continuous
              { m(1, x2, ..., xk) − m(0, x2, ..., xk),    if x1 is binary.
Collecting the k effects into one k × 1 vector, we define the regression derivative of x on y:

    ∇m(x) = (∇1 m(x), ∇2 m(x), ..., ∇k m(x))′.
When all elements of x are continuous, then we have the simplification ∇m(x) = ∂m(x)/∂x, the vector of partial derivatives.

There are two important points to remember concerning our definition of the regression derivative. First, the effect of each variable is calculated holding the other variables constant. This is the ceteris paribus concept commonly used in economics. But in the case of a regression derivative, the conditional mean does not literally hold all else constant. It only holds constant the variables included in the conditional mean.

Second, the regression derivative is the change in the conditional expectation of y, not the change in the actual value of y for an individual. It is tempting to think of the regression derivative as the change in the actual value of y, but this is not a correct interpretation. The regression derivative ∇m(x) is the actual effect on the response variable y only if the error e is unaffected by the change in the regressor x. We return to a discussion of causal effects in Section 3.28.
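The definition is straightforward to evaluate numerically. The Python sketch below is a minimal illustration with a made-up CEF: the function m and the evaluation point are hypothetical. It computes the regression derivative for a CEF with one continuous regressor and one binary regressor, using a central finite difference for the continuous element and the discrete difference for the binary element.

    import numpy as np

    # A hypothetical CEF with x1 continuous and x2 binary (taking values 0 or 1)
    def m(x1, x2):
        return 1.0 + 0.8 * x1 - 0.05 * x1**2 + 0.3 * x2 + 0.1 * x1 * x2

    def regression_derivative(x1, x2, h=1e-5):
        # Continuous regressor: partial derivative, approximated by a central difference
        d1 = (m(x1 + h, x2) - m(x1 - h, x2)) / (2 * h)
        # Binary regressor: discrete difference m(x1, 1) - m(x1, 0)
        d2 = m(x1, 1) - m(x1, 0)
        return np.array([d1, d2])

    # At (x1, x2) = (2, 1): dm/dx1 = 0.8 - 0.1*2 + 0.1*1 = 0.7 and m(2,1) - m(2,0) = 0.5
    print(regression_derivative(2.0, 1.0))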
3.13 Linear CEF
An important special case is when the CEF m(x) = E(y | x) is linear in x. In this case we can write the mean equation as

    m(x) = x1 β1 + x2 β2 + · · · + xk βk + βk+1.

Notationally it is convenient to write this as a simple function of the vector x. An easy way to do so is to augment the regressor vector x by listing the number "1" as an element. We call this the "constant" and the corresponding coefficient is called the "intercept". Equivalently, assuming that the final element of the vector x is the intercept, then xk = 1. (The order doesn't matter; it could be any element.) Thus (3.5) has been redefined as the k × 1 vector

    x = (x1, x2, ..., x_{k−1}, 1)′.    (3.15)
With this redefinition, the CEF is

    m(x) = x1 β1 + x2 β2 + · · · + xk βk = x′β    (3.16)

where

    β = (β1, ..., βk)′    (3.17)
is a k × 1 coefficient vector. This is the linear CEF model. It is also often called the linear regression model, or the regression of y on x. In the linear CEF model, the regression derivative is simply the coefficient vector. That is, ∇m(x) = β. This is one of the appealing features of the linear CEF model. The coefficients have simple and natural interpretations as the marginal effects of changing one variable, holding the others constant.
Linear CEF Model

    y = x′β + e
    E(e | x) = 0
If in addition the error is homoskedastic, we call this the homoskedastic linear CEF model.
Homoskedastic Linear CEF Model

    y = x′β + e
    E(e | x) = 0
    E(e² | x) = σ²
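The claim that the regression derivative in a linear CEF is the coefficient vector can be checked directly. Below is a brief Python sketch, illustrative only, in which the coefficient values are arbitrary; numerical marginal effects of m(x) = x′β coincide with β at every evaluation point.

    import numpy as np

    # Hypothetical coefficients; the final element of x is the intercept (x3 = 1)
    beta = np.array([0.10, -0.40, 1.20])

    def m(x):
        return x @ beta                    # linear CEF: m(x) = x'beta

    def numerical_gradient(x, h=1e-6):
        grad = np.zeros(len(x))
        for j in range(len(x)):            # central difference in each coordinate
            xp, xm = x.copy(), x.copy()
            xp[j] += h
            xm[j] -= h
            grad[j] = (m(xp) - m(xm)) / (2 * h)
        return grad

    for point in [np.array([1.0, 2.0, 1.0]), np.array([-3.0, 0.5, 1.0])]:
        print(numerical_gradient(point))   # equals beta regardless of the point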
3.14 Linear CEF with Nonlinear Effects
The linear CEF model of the previous section is less restrictive than it might appear, as we can include as regressors nonlinear transformations of the original variables. In this sense, the linear CEF framework is flexible and can capture many nonlinear effects.

For example, suppose we have two scalar variables x1 and x2. The CEF could take the quadratic form

    m(x1, x2) = x1 β1 + x2 β2 + x1² β3 + x2² β4 + x1 x2 β5 + β6.    (3.18)

This equation is quadratic in the regressors (x1, x2) yet linear in the coefficients (β1, ..., β6). We will descriptively call (3.18) a quadratic CEF, and yet (3.18) is also a linear CEF in the sense of being linear in the coefficients. The key is to understand that (3.18) is quadratic in the variables (x1, x2) yet linear in the coefficients (β1, ..., β6).
To simplify the expression, we define the transformations x3 = x1², x4 = x2², x5 = x1 x2, and x6 = 1, and redefine the regressor vector as x = (x1, ..., x6)′. With this redefinition,

    m(x1, x2) = x′β

which is linear in β. For most econometric purposes (estimation and inference on β) the linearity in β is all that is important.

An exception is in the analysis of regression derivatives. In nonlinear equations such as (3.18), the regression derivative should be defined with respect to the original variables, not with respect to the transformed variables. Thus

    ∂m(x1, x2) / ∂x1 = β1 + 2 x1 β3 + x2 β5
    ∂m(x1, x2) / ∂x2 = β2 + 2 x2 β4 + x1 β5.

We see that in the model (3.18), the regression derivatives are not a simple coefficient, but are functions of several coefficients plus the levels of (x1, x2). Consequently it is difficult to interpret the coefficients individually. It is more useful to interpret them as a group.

We typically call β5 the interaction effect. Notice that it appears in both regression derivative equations, and has a symmetric interpretation in each. If β5 > 0 then the regression derivative of x1 on y is increasing in the level of x2 (and the regression derivative of x2 on y is increasing in the level of x1), while if β5 < 0 the reverse is true.
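The role of the interaction coefficient is easy to see numerically. The following Python sketch is illustrative only, with arbitrary coefficient values: it evaluates the two regression derivatives of the quadratic CEF (3.18) and shows how the marginal effect of x1 shifts with the level of x2 through β5.

    # Hypothetical coefficients (b1, ..., b6) for the quadratic CEF (3.18)
    b1, b2, b3, b4, b5, b6 = 0.5, -0.2, 0.03, 0.01, 0.15, 1.0

    def m(x1, x2):
        return x1*b1 + x2*b2 + x1**2*b3 + x2**2*b4 + x1*x2*b5 + b6

    def d_dx1(x1, x2):
        return b1 + 2*x1*b3 + x2*b5       # regression derivative with respect to x1

    def d_dx2(x1, x2):
        return b2 + 2*x2*b4 + x1*b5       # regression derivative with respect to x2

    # The marginal effect of x1 depends on the level of x2 through the interaction b5
    for x2 in [0.0, 1.0, 2.0]:
        print(f"x2 = {x2}: dm/dx1 at x1 = 1 is {d_dx1(1.0, x2):.3f}")

    # Sanity check of the derivative formula against a finite difference
    h = 1e-6
    print(abs((m(1 + h, 2) - m(1 - h, 2)) / (2 * h) - d_dx1(1, 2)) < 1e-6)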
3.15 Linear CEF with Dummy Variables
When all regressors are discrete, it turns out the CEF can be written as a linear function of regressors.

Consider the simplest case of a single binary regressor. A variable is binary if it only takes two distinct values. (For example, the variable gender.) It is common to call such regressors dummy variables, and sometimes they are called indicator variables.

When there is only a single dummy regressor the conditional mean can only take two distinct values. For example,

    E(y | gender) = { µ0 if gender = man
                    { µ1 if gender = woman.

To facilitate a mathematical treatment, we typically record dummy variables with the values {0, 1}. For example

    x1 = { 0 if gender = man
         { 1 if gender = woman.

Given this notation we can write the conditional mean as a linear function of the dummy variable x1, that is

    E(y | x1) = α + β x1
where α = µ0 and β = µ1 − µ0. In this simple regression equation the intercept α is equal to the conditional mean of y for the x1 = 0 subpopulation (men) and the slope β is equal to the difference in the conditional means of the two subpopulations.

Now suppose we have two dummy variables x1 and x2. For example, x2 = 1 if the person is married, else x2 = 0. The conditional mean given x1 and x2 takes at most four possible values:

    E(y | x1, x2) = { µ00 if x1 = 0 and x2 = 0   (unmarried men)
                    { µ01 if x1 = 0 and x2 = 1   (married men)
                    { µ10 if x1 = 1 and x2 = 0   (unmarried women)
                    { µ11 if x1 = 1 and x2 = 1   (married women)
In this case we can write the conditional mean as a linear function of x1, x2 and their product x1 x2:

    E(y | x1, x2) = α + β1 x1 + β2 x2 + β3 x1 x2

where α = µ00, β1 = µ10 − µ00, β2 = µ01 − µ00, and β3 = µ11 − µ10 − µ01 + µ00.

We can view the coefficient β1 as the effect of gender on expected log wages for unmarried wage earners, the coefficient β2 as the effect of marriage on expected log wages for men wage earners, and the coefficient β3 as the difference between the effects of marriage on expected log wages among women and among men. Alternatively, it can also be interpreted as the difference between the effects of gender on expected log wages among married and non-married wage earners. Both interpretations are equally valid. We often describe β3 as measuring the interaction between the two dummy variables, or the interaction effect, and describe β3 = 0 as the case when the interaction effect is zero.

In this setting we can see that the CEF is linear in the three variables (x1, x2, x1 x2). Thus to put the model in the framework of Section 3.13, we would define the regressor x3 = x1 x2 and the regressor vector as

    x = (x1, x2, x3, 1)′.

So even though we started with only 2 dummy variables, the number of regressors (including the intercept) is 4.

If there are 3 dummy variables x1, x2, x3, then E(y | x1, x2, x3) takes at most 2³ = 8 distinct values and can be written as the linear function

    E(y | x1, x2, x3) = α + β1 x1 + β2 x2 + β3 x3 + β4 x1 x2 + β5 x1 x3 + β6 x2 x3 + β7 x1 x2 x3

which has eight regressors including the intercept.

In general, if there are p dummy variables x1, ..., xp then the CEF E(y | x1, x2, ..., xp) takes at most 2^p distinct values, and can be written as a linear function of the 2^p regressors including x1, x2, ..., xp and all cross-products. This might be excessive in practice if p is modestly large. In the next section we will discuss projection approximations which yield more parsimonious parameterizations.

We started this section by saying that the conditional mean is linear whenever all regressors are discrete, meaning that they take a finite number of possible values. How can we see this? Take a categorical variable, such as race. For example, we earlier divided race into three categories. We can record categorical variables using numbers to indicate each category, for example

    x3 = { 1 if white
         { 2 if black
         { 3 if other.

When doing so, the values of x3 have no meaning in terms of magnitude; they simply indicate the relevant category.

When the regressor is categorical the conditional mean of y given x3 takes a distinct value for each possibility:

    E(y | x3) = { µ1 if x3 = 1
                { µ2 if x3 = 2
                { µ3 if x3 = 3.

This is not a linear function of x3 itself, but it can be made so by constructing dummy variables for two of the three categories. For example

    x4 = { 1 if black
         { 0 if not black

    x5 = { 1 if other
         { 0 if not other.

In this case, the categorical variable x3 is equivalent to the pair of dummy variables (x4, x5). The explicit relationship is

    x3 = { 1 if x4 = 0 and x5 = 0
         { 2 if x4 = 1 and x5 = 0
         { 3 if x4 = 0 and x5 = 1.

Given these transformations, we can write the conditional mean of y as a linear function of x4 and x5:

    E(y | x3) = E(y | x4, x5) = α + β1 x4 + β2 x5.

We can write the CEF as either E(y | x3) or E(y | x4, x5) (they are equivalent), but it is only linear as a function of x4 and x5.

This setting is similar to the case of two dummy variables, with the difference that we have not included the interaction term x4 x5. This is because the event {x4 = 1 and x5 = 1} is empty by construction, so x4 x5 = 0 by definition.
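The saturated dummy-variable representation can be verified with a few lines of arithmetic. The Python sketch below is illustrative only; the cell means µ are invented numbers. It builds the coefficients (α, β1, β2, β3) defined above from the cell means and confirms that the linear function in (x1, x2, x1x2) reproduces all four conditional means exactly.

    # Hypothetical cell means mu[(x1, x2)] for the four subpopulations
    mu = {(0, 0): 3.00,   # unmarried men
          (0, 1): 3.20,   # married men
          (1, 0): 2.70,   # unmarried women
          (1, 1): 2.80}   # married women

    # Coefficients of the saturated linear representation
    alpha = mu[(0, 0)]
    b1 = mu[(1, 0)] - mu[(0, 0)]
    b2 = mu[(0, 1)] - mu[(0, 0)]
    b3 = mu[(1, 1)] - mu[(1, 0)] - mu[(0, 1)] + mu[(0, 0)]   # interaction effect

    # The linear function in (x1, x2, x1*x2) reproduces every conditional mean
    for (x1, x2), cell_mean in mu.items():
        fitted = alpha + b1 * x1 + b2 * x2 + b3 * x1 * x2
        print((x1, x2), cell_mean, round(fitted, 10))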
3.16 Best Linear Predictor

While the conditional mean m(x) = E(y | x) is the best predictor of y among all functions of x, its functional form is typically unknown. In particular, the linear CEF model is empirically unlikely to be accurate unless x is discrete and low-dimensional so all interactions are included. Consequently in most cases it is more realistic to view the linear specification (3.16) as an approximation. In this section we derive a specific approximation with a simple interpretation.

Theorem 3.9.1 showed that the conditional mean m(x) is the best predictor in the sense that it has the lowest mean squared error among all predictors. By extension, we can define an approximation to the CEF by the linear function with the lowest mean squared error among all linear predictors.

A linear predictor for y is a function of the form x′β for some β ∈ R^k. The mean squared prediction error is

    S(β) = E(y − x′β)².

The best linear predictor of y given x, written P(y | x), is found by selecting the vector β to minimize S(β). For this derivation we require the following regularity condition.

Assumption 3.16.1
1. Ey² < ∞.
2. E‖x‖² < ∞.
3. Qxx = E(xx′) is positive definite.

The first two parts of Assumption 3.16.1 imply that the variables y and x have finite means, variances, and covariances. The third part of the assumption is more technical, and its role will become apparent shortly. It is equivalent to imposing that the columns of Qxx = E(xx′) are linearly independent, or equivalently that the matrix Qxx is invertible.
Definition 3.16.1 The Best Linear Predictor of y given x is
    P(y | x) = x′β
where β minimizes the mean squared prediction error
    S(β) = E(y − x′β)².
The minimizer
    β = argmin_{β ∈ R^k} S(β)    (3.19)
is called the Linear Projection Coefficient.

We now calculate an explicit expression for its value. The mean squared prediction error can be written out as a quadratic function of β:

    S(β) = Ey² − 2 β′E(xy) + β′E(xx′)β.

The quadratic structure of S(β) means that we can solve explicitly for the minimizer. The first-order condition for minimization (from Appendix A.9) is

    0 = ∂S(β)/∂β = −2E(xy) + 2E(xx′)β.    (3.20)

Rewriting (3.20) as

    2E(xy) = 2E(xx′)β

and dividing by 2, this equation takes the form

    Qxy = Qxx β    (3.21)

where Qxy = E(xy) is k × 1 and Qxx = E(xx′) is k × k. The solution is found by inverting the matrix Qxx, and is written

    β = Qxx⁻¹ Qxy

or

    β = (E(xx′))⁻¹ E(xy).    (3.22)

It is worth taking the time to understand the notation involved in the expression (3.22). Qxx is a k × k matrix and Qxy is a k × 1 column vector. Therefore, alternative expressions such as E(xy)/E(xx′) or E(xy)(E(xx′))⁻¹ are incoherent and incorrect. We also can now see the role of Assumption 3.16.1.3. It is necessary in order for the solution (3.22) to exist. Otherwise, there would be multiple solutions to the equation (3.21).

We now have an explicit expression for the best linear predictor:

    P(y | x) = x′(E(xx′))⁻¹ E(xy).

This expression is also referred to as the linear projection of y on x.

The projection error is

    e = y − x′β.    (3.23)

This equals the error from the regression equation when (and only when) the conditional mean is linear in x; otherwise they are distinct.
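The formula β = (E(xx′))⁻¹ E(xy) translates directly into a computation on sample moments. The following Python sketch is illustrative only: the data-generating process is invented, and sample moments stand in for the population moments. The CEF is deliberately nonlinear, so x′β is only the best linear approximation to m(x); by the first-order condition (3.20), the sample analogue of E(xe) is zero.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    # Simulated data with a nonlinear CEF: m(x1) = exp(0.5 * x1)
    x1 = rng.uniform(0, 3, size=n)
    y = np.exp(0.5 * x1) + rng.normal(0, 0.5, size=n)

    # Regressor vector x = (x1, 1)', with the constant as the final element
    X = np.column_stack([x1, np.ones(n)])

    # Sample analogues of Qxx = E(xx') and Qxy = E(xy)
    Qxx = X.T @ X / n
    Qxy = X.T @ y / n

    # Linear projection coefficient: beta = Qxx^{-1} Qxy, as in (3.22)
    beta = np.linalg.solve(Qxx, Qxy)
    print("projection coefficient:", beta)

    # Projection error e = y - x'beta; its sample cross-moment with x is zero
    # (up to rounding), reflecting the first-order condition (3.20)
    e = y - X @ beta
    print("sample mean of x*e:", (X * e[:, None]).mean(axis=0))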
Rewriting, we obtain a decomposition

    y = x′β + e    (3.24)

of y into x′β, the best linear predictor of y given x, and the projection error e. This completes the derivation of the model.

The first-order condition (3.21) implies that the projection error satisfies E(xe) = 0. Since x contains k elements, this is a system of k equations, one for each regressor,

    E(xj e) = 0 for j = 1, ..., k.    (3.27)

Since the final element of x is the intercept, xk = 1, (3.27) for j = k is the same as E(e) = 0. As it is desirable for e t