ECONOMETRICS

Bruce E. Hansen
© 2000, 2013¹
University of Wisconsin
www.ssc.wisc.edu/~bhansen
This Revision: January 18, 2013
Comments Welcome
¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
1 Introduction 1
1.1 What is Econometrics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The Probability Approach to Econometrics . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Econometric Terms and Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Observational Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Standard Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Sources for Economic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Econometric Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.8 Reading the Manuscript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Conditional Expectation and Projection 8
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 The Distribution of Wages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Conditional Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Log Differences* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Conditional Expectation Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Continuous Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.7 Law of Iterated Expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.8 CEF Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.9 Regression Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.10 Best Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.11 Conditional Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.12 Homoskedasticity and Heteroskedasticity . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.13 Regression Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.14 Linear CEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.15 Linear CEF with Nonlinear Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.16 Linear CEF with Dummy Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.17 Best Linear Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.18 Linear Predictor Error Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.19 Regression Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.20 Regression Sub-Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.21 Coefficient Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.22 Omitted Variable Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.23 Best Linear Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.24 Normal Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.25 Regression to the Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.26 Reverse Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.27 Limitations of the Best Linear Predictor . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.28 Random Coefficient Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.29 Causal Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.30 Expectation: Mathematical Details* . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.31 Existence and Uniqueness of the Conditional Expectation* . . . . . . . . . . . . . . 46
2.32 Identification* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.33 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3 The Algebra of Least Squares 53
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.2 Random Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3 Least Squares Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Solving for Least Squares with One Regressor . . . . . . . . . . . . . . . . . . . . . . 55
3.5 Solving for Least Squares with Multiple Regressors . . . . . . . . . . . . . . . . . . . 55
3.6 Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.7 Least Squares Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.8 Model in Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.9 Projection Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.10 Orthogonal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.11 Estimation of Error Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.12 Analysis of Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.13 Regression Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.14 Residual Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.15 Prediction Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.16 Influential Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.17 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.18 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4 Least Squares Regression 75
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.2 Sample Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.3 Linear Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4 Mean of Least-Squares Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.5 Variance of Least Squares Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.6 Gauss-Markov Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.7 Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.8 Estimation of Error Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.9 Mean-Square Forecast Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.10 Covariance Matrix Estimation Under Homoskedasticity . . . . . . . . . . . . . . . . 85
4.11 Covariance Matrix Estimation Under Heteroskedasticity . . . . . . . . . . . . . . . . 86
4.12 Standard Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.13 Measures of Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.14 Empirical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.15 Multicollinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.16 Normal Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5 An Introduction to Large Sample Asymptotics 98
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.2 Asymptotic Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3 Convergence in Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.4 Weak Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.5 Almost Sure Convergence and the Strong Law* . . . . . . . . . . . . . . . . . . . . . 102
5.6 Vector-Valued Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.7 Convergence in Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.8 Higher Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.9 Functions of Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.10 Delta Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.11 Stochastic Order Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.12 Uniform Stochastic Bounds* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.13 Semiparametric Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.14 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6 Asymptotic Theory for Least Squares 120
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.2 Consistency of Least-Squares Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.3 Asymptotic Normality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.4 Joint Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.5 Consistency of Error Variance Estimators . . . . . . . . . . . . . . . . . . . . . . . . 128
6.6 Homoskedastic Covariance Matrix Estimation . . . . . . . . . . . . . . . . . . . . . . 129
6.7 Heteroskedastic Covariance Matrix Estimation . . . . . . . . . . . . . . . . . . . . . 130
6.8 Alternative Covariance Matrix Estimators* . . . . . . . . . . . . . . . . . . . . . . . 131
6.9 Functions of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.10 Asymptotic Standard Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.11 t statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.12 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.13 Regression Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.14 Forecast Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.15 Wald Statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.16 Confidence Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.17 Semiparametric Efficiency in the Projection Model . . . . . . . . . . . . . . . . . . 142
6.18 Semiparametric Efficiency in the Homoskedastic Regression Model* . . . . . . . . . 143
6.19 Uniformly Consistent Residuals* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.20 Asymptotic Leverage* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7 Restricted Estimation 149
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7.2 Constrained Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.3 Exclusion Restriction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.4 Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.5 Asymptotic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
7.6 Efficient Minimum Distance Estimator . . . . . . . . . . . . . . . . . . . . . . . . . 154
7.7 Exclusion Restriction Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7.8 Variance and Standard Error Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 156
7.9 Misspecification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
7.10 Nonlinear Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.11 Inequality Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.12 Constrained MLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
7.13 Technical Proofs* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
8 Hypothesis Testing 163
8.1 Hypotheses and Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.2 t tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.3 t-ratios and the Abuse of Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8.4 Wald Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.5 Minimum Distance Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.6 F Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.7 Likelihood Ratio Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.8 Problems with Tests of NonLinear Hypotheses . . . . . . . . . . . . . . . . . . . . . 173
8.9 Monte Carlo Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.10 Confidence Intervals by Test Inversion . . . . . . . . . . . . . . . . . . . . . . . . . 179
8.11 Asymptotic Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9 Regression Extensions 184
9.1 Generalized Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.2 Testing for Heteroskedasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
9.3 NonLinear Least Squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
9.4 Testing for Omitted NonLinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
10 The Bootstrap 192
10.1 Definition of the Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.2 The Empirical Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.3 Nonparametric Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.4 Bootstrap Estimation of Bias and Variance . . . . . . . . . . . . . . . . . . . . . . . 194
10.5 Percentile Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
10.6 Percentile-t Equal-Tailed Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10.7 Symmetric Percentile-t Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10.8 Asymptotic Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
10.9 One-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
10.10 Symmetric Two-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
10.11 Percentile Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
10.12 Bootstrap Methods for Regression Models . . . . . . . . . . . . . . . . . . . . . . . 203
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
11 NonParametric Regression 206
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
11.2 Binned Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
11.3 Kernel Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
11.4 Local Linear Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
11.5 Nonparametric Residuals and Regression Fit . . . . . . . . . . . . . . . . . . . . . . 210
11.6 Cross-Validation Bandwidth Selection . . . . . . . . . . . . . . . . . . . . . . . . . . 212
11.7 Asymptotic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
11.8 Conditional Variance Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
11.9 Standard Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
11.10 Multiple Regressors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
12 Series Estimation 222
12.1 Approximation by Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
12.2 Splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
12.3 Partially Linear Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
12.4 Additively Separable Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
12.5 Uniform Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
12.6 Runge’s Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
12.7 Approximating Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
12.8 Residuals and Regression Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
12.9 Cross-Validation Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
12.10 Convergence in Mean-Square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
12.11 Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
12.12 Asymptotic Normality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
12.13 Asymptotic Normality with Undersmoothing . . . . . . . . . . . . . . . . . . . . . 233
12.14 Regression Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
12.15 Kernel Versus Series Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
12.16 Technical Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
13 Quantile Regression 241
13.1 Least Absolute Deviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
13.2 Quantile Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
14 Generalized Method of Moments 247
14.1 Overidentified Linear Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
14.2 GMM Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
14.3 Distribution of GMM Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
14.4 Estimation of the Efficient Weight Matrix . . . . . . . . . . . . . . . . . . . . . . . 250
14.5 GMM: The General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
14.6 Over-Identification Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
14.7 Hypothesis Testing: The Distance Statistic . . . . . . . . . . . . . . . . . . . . . . . 252
14.8 Conditional Moment Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
14.9 Bootstrap GMM Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
15 Empirical Likelihood 258
15.1 Non-Parametric Likelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
15.2 Asymptotic Distribution of EL Estimator . . . . . . . . . . . . . . . . . . . . . . . . 260
15.3 Overidentifying Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
15.4 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
15.5 Numerical Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
16 Endogeneity 265
16.1 Instrumental Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
16.2 Reduced Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
16.3 Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
16.4 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
16.5 Special Cases: IV and 2SLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
16.6 Bekker Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
16.7 Identification Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
17 Univariate Time Series 275
17.1 Stationarity and Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
17.2 Autoregressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
17.3 Stationarity of AR(1) Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
17.4 Lag Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
17.5 Stationarity of AR(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
17.6 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
17.7 Asymptotic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
17.8 Bootstrap for Autoregressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
17.9 Trend Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
17.10 Testing for Omitted Serial Correlation . . . . . . . . . . . . . . . . . . . . . . . . . 282
17.11 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
17.12 Autoregressive Unit Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
18 Multivariate Time Series 285
18.1 Vector Autoregressions (VARs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
18.2 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
18.3 Restricted VARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
18.4 Single Equation from a VAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
18.5 Testing for Omitted Serial Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . 287
18.6 Selection of Lag Length in a VAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
18.7 Granger Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
18.8 Cointegration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
18.9 Cointegrated VARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
19 Limited Dependent Variables 291
19.1 Binary Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
19.2 Count Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
19.3 Censored Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
19.4 Sample Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
20 Panel Data 296
20.1 Individual-Effects Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
20.2 Fixed Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
20.3 Dynamic Panel Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
21 Nonparametric Density Estimation 299
21.1 Kernel Density Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
21.2 Asymptotic MSE for Kernel Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . 301
A Matrix Algebra 304
A.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
A.2 Matrix Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
A.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
A.4 Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
A.5 Rank and Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
A.6 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
A.7 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
A.8 Positive Definiteness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
A.9 Matrix Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
A.10 Kronecker Products and the Vec Operator . . . . . . . . . . . . . . . . . . . . . . . . 311
A.11 Vector and Matrix Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
A.12 Matrix Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
B Probability 316
B.1 Foundations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
B.2 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
B.3 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
B.4 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
B.5 Common Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
B.6 Multivariate Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
B.7 Conditional Distributions and Expectation . . . . . . . . . . . . . . . . . . . . . . . . 324
B.8 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
B.9 Normal and Related Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
B.10 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
B.11 Maximum Likelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
C Numerical Optimization 337
C.1 Grid Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
C.2 Gradient Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
C.3 Derivative-Free Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Preface
This book is intended to serve as the textbook for a first-year graduate course in econometrics.
It can be used as a stand-alone text, or be used as a supplement to another text.
Students are assumed to have an understanding of multivariate calculus, probability theory,
linear algebra, and mathematical statistics. A prior course in undergraduate econometrics would
be helpful, but not required. Two excellent undergraduate textbooks are Wooldridge (2009) and
Stock and Watson (2010).
For reference, some of the basic tools of matrix algebra, probability, and statistics are reviewed
in the Appendix.
For students wishing to deepen their knowledge of matrix algebra in relation to their study of
econometrics, I recommend Matrix Algebra by Abadir and Magnus (2005).
An excellent introduction to probability and statistics is Statistical Inference by Casella and
Berger (2002). For those wanting a deeper foundation in probability, I recommend Ash (1972)
or Billingsley (1995). For more advanced statistical theory, I recommend Lehmann and Casella
(1998), van der Vaart (1998), Shao (2003), and Lehmann and Romano (2005).
For further study in econometrics beyond this text, I recommend Davidson (1994) for asymp-
totic theory, Hamilton (1994) for time-series methods, Wooldridge (2002) for panel data and discrete
response models, and Li and Racine (2007) for nonparametrics and semiparametric econometrics.
Beyond these texts, the Handbook of Econometrics series provides advanced summaries of contem-
porary econometric methods and theory.
The end-of-chapter exercises are important parts of the text and are meant to help teach students
of econometrics. Answers are not provided, and this is intentional.
I would like to thank Ying-Ying Lee for providing research assistance in preparing some of the
empirical examples presented in the text.
As this is a manuscript in progress, some parts are quite incomplete, and there are many topics
which I plan to add. In general, chapters 1-8 are the most complete, while the remaining chapters
need significant work and revision.
Chapter 1
Introduction
1.1 What is Econometrics?
The term “econometrics” is believed to have been crafted by Ragnar Frisch (1895-1973) of
Norway, one of the three principal founders of the Econometric Society, first editor of the journal
Econometrica, and co-winner of the first Nobel Memorial Prize in Economic Sciences in 1969. It
is therefore fitting that we turn to Frisch’s own words in the introduction to the first issue of
Econometrica for an explanation of the discipline.
A word of explanation regarding the term econometrics may be in order. Its definition
is implied in the statement of the scope of the [Econometric] Society, in Section I
of the Constitution, which reads: “The Econometric Society is an international society
for the advancement of economic theory in its relation to statistics and mathematics....
Its main object shall be to promote studies that aim at a unification of the theoretical-
quantitative and the empirical-quantitative approach to economic problems....”
But there are several aspects of the quantitative approach to economics, and no single
one of these aspects, taken by itself, should be confounded with econometrics. Thus,
econometrics is by no means the same as economic statistics. Nor is it identical with
what we call general economic theory, although a considerable portion of this theory has
a definitely quantitative character. Nor should econometrics be taken as synonymous
with the application of mathematics to economics. Experience has shown that each
of these three view-points, that of statistics, economic theory, and mathematics, is
a necessary, but not by itself a sufficient, condition for a real understanding of the
quantitative relations in modern economic life. It is the unification of all three that is
powerful. And it is this unification that constitutes econometrics.
Ragnar Frisch, Econometrica, (1933), 1, pp. 1-2.
This definition remains valid today, although some terms have evolved somewhat in their usage.
Today, we would say that econometrics is the unified study of economic models, mathematical
statistics, and economic data.
Within the field of econometrics there are sub-divisions and specializations. Econometric theory
concerns the development of tools and methods, and the study of the properties of econometric
methods. Applied econometrics is a term describing the development of quantitative economic
models and the application of econometric methods to these models using economic data.
1.2 The Probability Approach to Econometrics
The unifying methodology of modern econometrics was articulated by Trygve Haavelmo (1911-
1999) of Norway, winner of the 1989 Nobel Memorial Prize in Economic Sciences, in his seminal
paper “The probability approach in econometrics”, Econometrica (1944). Haavelmo argued that
quantitative economic models must necessarily be probability models (by which today we would
mean stochastic). Deterministic models are blatantly inconsistent with observed economic quan-
tities, and it is incoherent to apply deterministic models to non-deterministic data. Economic
models should be explicitly designed to incorporate randomness; stochastic errors should not be
simply added to deterministic models to make them random. Once we acknowledge that an eco-
nomic model is a probability model, it follows naturally that an appropriate way to quantify,
estimate, and conduct inferences about the economy is through the powerful theory of mathe-
matical statistics. The appropriate method for a quantitative economic analysis follows from the
probabilistic construction of the economic model.
Haavelmo’s probability approach was quickly embraced by the economics profession. Today no
quantitative work in economics shuns its fundamental vision.
While all economists embrace the probability approach, there has been some evolution in its
implementation.
The structural approach is the closest to Haavelmo’s original idea. A probabilistic economic
model is specified, and the quantitative analysis performed under the assumption that the economic
model is correctly specified. Researchers often describe this as “taking their model seriously.” The
structural approach typically leads to likelihood-based analysis, including maximum likelihood and
Bayesian estimation.
A criticism of the structural approach is that it is misleading to treat an economic model
as correctly specified. Rather, it is more accurate to view a model as a useful abstraction or
approximation. In this case, how should we interpret structural econometric analysis? The quasi-
structural approach to inference views a structural economic model as an approximation rather
than the truth. This theory has led to the concepts of the pseudo-true value (the parameter value
defined by the estimation problem), the quasi-likelihood function, quasi-MLE, and quasi-likelihood
inference.
Closely related is the semiparametric approach. A probabilistic economic model is partially
specified but some features are left unspecified. This approach typically leads to estimation methods
such as least-squares and the Generalized Method of Moments. The semiparametric approach
dominates contemporary econometrics, and is the main focus of this textbook.
Another branch of quantitative structural economics is the calibration approach. Similar
to the quasi-structural approach, the calibration approach interprets structural models as approx-
imations and hence inherently false. The difference is that the calibrationist literature rejects
mathematical statistics as inappropriate for approximate models, and instead selects parameters
by matching model and data moments using non-statistical ad hoc¹ methods.

¹ Ad hoc means “for this purpose” – a method designed for a specific problem – and not based on a generalizable principle.
1.3 Econometric Terms and Notation
In a typical application, an econometrician has a set of repeated measurements on a set of vari-
ables. For example, in a labor application the variables could include weekly earnings, educational
attainment, age, and other descriptive characteristics. We call this information the data, dataset,
or sample.
We use the term observations to refer to the distinct repeated measurements on the variables.
An individual observation often corresponds to a specific economic unit, such as a person, household,
corporation, firm, organization, country, state, city or other geographical region. An individual
observation could also be a measurement at a point in time, such as quarterly GDP or a daily
interest rate.
Economists typically denote variables by the italicized roman characters y, x, and/or z. The
convention in econometrics is to use the character y to denote the variable to be explained, while
the characters x and z are used to denote the conditioning (explaining) variables.
Following mathematical convention, real numbers (elements of the real line R) are written using
lower case italics such as y, and vectors (elements of R^k) by lower case bold italics such as x, e.g.
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.
Upper case bold italics such as A are used for matrices.
We typically denote the number of observations by the natural number n, and subscript the
variables by the index i to denote the individual observation, e.g. y_i, x_i and z_i. In some contexts
we use indices other than i, such as in time-series applications where the index t is common, and
in panel studies we typically use the double index it to refer to individual i at a time period t.
The i’th observation is the set (y_i, x_i, z_i).
It is proper mathematical practice to use upper case X for random variables and lower case x
for realizations or specific values. This practice is not commonly followed in econometrics because
instead we use upper case to denote matrices. Thus the notation y_i will in some places refer to a
random variable, and in other places a specific realization. Hopefully there will be no confusion as
the use should be evident from the context.
We typically use Greek letters such as β, θ and σ² to denote unknown parameters of an econo-
metric model, and will use boldface, e.g. β or θ, when these are vector-valued. Estimates are
typically denoted by putting a hat “^”, tilde “~” or bar “-” over the corresponding letter, e.g. β̂
and β̃ are estimates of β.
The covariance matrix of an econometric estimator will typically be written using the capital
boldface V, often with a subscript to denote the estimator, e.g. V_β̂ = var(√n(β̂ − β)) as the
covariance matrix for √n(β̂ − β). Hopefully without causing confusion, we will use the notation
V_β = avar(β̂) to denote the asymptotic covariance matrix of √n(β̂ − β) (the variance of the
asymptotic distribution). Estimates will be denoted by appending hats or tildes, e.g. V̂_β is an
estimate of V_β.
1.4 Observational Data
A common econometric question is to quantify the impact of one set of variables on another
variable. For example, a concern in labor economics is the returns to schooling – the change in
earnings induced by increasing a worker’s education, holding other variables constant. Another
issue of interest is the earnings gap between men and women.
Ideally, we would use experimental data to answer these questions. To measure the returns to
schooling, an experiment might randomly divide children into groups, mandate different levels of
education to the different groups, and then follow the children’s wage path after they mature and
enter the labor force. The differences between the groups would be direct measurements of the ef-
fects of different levels of education. However, experiments such as this would be widely condemned
as immoral! Consequently, we see few non-laboratory experimental data sets in economics.
Instead, most economic data is observational. To continue the above example, through data
collection we can record the level of a person’s education and their wage. With such data we
can measure the joint distribution of these variables, and assess the joint dependence. But from
observational data it is difficult to infer causality, as we are not able to manipulate one variable to
see the direct effect on the other. For example, a person’s level of education is (at least partially)
determined by that person’s choices. These factors are likely to be affected by their personal abilities
and attitudes towards work. The fact that a person is highly educated suggests a high level of ability,
which suggests a high relative wage. This is an alternative explanation for an observed positive
correlation between educational levels and wages. High ability individuals do better in school,
and therefore choose to attain higher levels of education, and their high ability is the fundamental
reason for their high wages. The point is that multiple explanations are consistent with a positive
correlation between schooling levels and wages. Knowledge of the joint distribution alone may
not be able to distinguish between these explanations.
Most economic data sets are observational, not experimental. This means
that all variables must be treated as random and possibly jointly deter-
mined.
This discussion means that it is difficult to infer causality from observational data alone. Causal
inference requires identification, and this is based on strong assumptions. We will return to a
discussion of some of these issues in Chapter 16.
1.5 Standard Data Structures
There are three major types of economic data sets: cross-sectional, time-series, and panel. They
are distinguished by the dependence structure across observations.
Cross-sectional data sets have one observation per individual. Surveys are a typical source
for cross-sectional data. In typical applications, the individuals surveyed are persons, households,
firms or other economic agents. In many contemporary econometric cross-section studies the sample
size n is quite large. It is conventional to assume that cross-sectional observations are mutually
independent. Most of this text is devoted to the study of cross-section data.
Time-series data are indexed by time. Typical examples include macroeconomic aggregates,
prices and interest rates. This type of data is characterized by serial dependence so the random
sampling assumption is inappropriate. Most aggregate economic data is only available at a low
frequency (annual, quarterly or perhaps monthly) so the sample size is typically much smaller than
in cross-section studies. The exception is financial data where data are available at a high frequency
(weekly, daily, hourly, or by transaction) so sample sizes can be quite large.
Panel data combines elements of cross-section and time-series. These data sets consist of a set
of individuals (typically persons, households, or corporations) surveyed repeatedly over time. The
common modeling assumption is that the individuals are mutually independent of one another,
but a given individual’s observations are mutually dependent. This is a modified random sampling
environment.
Data Structures
• Cross-section
• Time-series
• Panel
Some contemporary econometric applications combine elements of cross-section, time-series,
and panel data modeling. These include models of spatial correlation and clustering.
As we mentioned above, most of this text will be devoted to cross-sectional data under the
assumption of mutually independent observations. By mutual independence we mean that the i’th
observation (y_i, x_i, z_i) is independent of the j’th observation (y_j, x_j, z_j) for i ≠ j. (Sometimes the
label “independent” is misconstrued. It is a statement about the relationship between observations
i and j, not a statement about the relationship between y_i and x_i and/or z_i.)
Furthermore, if the data is randomly gathered, it is reasonable to model each observation as
a random draw from the same probability distribution. In this case we say that the data are
independent and identically distributed or iid. We call this a random sample. For most of
this text we will assume that our observations come from a random sample.
Definition 1.5.1 The observations (y_i, x_i, z_i) are a random sample if
they are mutually independent and identically distributed (iid) across
i = 1, ..., n.
In the random sampling framework, we think of an individual observation (y_i, x_i, z_i) as a re-
alization from a joint probability distribution F(y, x, z) which we can call the population. This
“population” is infinitely large. This abstraction can be a source of confusion as it does not cor-
respond to a physical population in the real world. It’s an abstraction since the distribution F
is unknown, and the goal of statistical inference is to learn about features of F from the sample.
The assumption of random sampling provides the mathematical foundation for treating economic
statistics with the tools of mathematical statistics.
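To make the random sampling abstraction concrete, here is a minimal simulation sketch (not part of the text; the population distribution and parameter values are hypothetical) in which each observation (y_i, x_i) is an independent draw from the same joint distribution, while y_i and x_i are dependent within an observation.

```python
# A minimal sketch of Definition 1.5.1: each observation (y_i, x_i) is an
# independent draw from the same joint population distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # number of observations

# Hypothetical population: x ~ N(12, 2^2) and y = 1.5 + 0.3*x + e, e ~ N(0, 0.5^2),
# drawn independently across observations i = 1, ..., n.
x = rng.normal(loc=12.0, scale=2.0, size=n)
e = rng.normal(loc=0.0, scale=0.5, size=n)
y = 1.5 + 0.3 * x + e

# Independence is across observations i and j, not between y_i and x_i:
# within an observation, y_i and x_i are clearly dependent (correlation around 0.77).
print("corr(y_i, x_i) within observations:", np.corrcoef(y, x)[0, 1])
```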
The random sampling framework was a major intellectual breakthrough of the late 19th cen-
tury, allowing the application of mathematical statistics to the social sciences. Before this concep-
tual development, methods from mathematical statistics had not been applied to economic data as
they were viewed as inappropriate. The random sampling framework enabled economic samples to
be viewed as homogenous and random, a necessary precondition for the application of statistical
methods.
1.6 Sources for Economic Data
Fortunately for economists, the internet provides a convenient forum for dissemination of eco-
nomic data. Many large-scale economic datasets are available without charge from governmental
agencies. An excellent starting point is the Resources for Economists Data Links, available at
rfe.org. From this site you can find almost every publicly available economic data set. Some
specific data sources of interest include
• Bureau of Labor Statistics
• US Census
• Current Population Survey
• Survey of Income and Program Participation
• Panel Study of Income Dynamics
• Federal Reserve System (Board of Governors and regional banks)
• National Bureau of Economic Research
• U.S. Bureau of Economic Analysis
• CompuStat
• International Financial Statistics
Another good source of data is from authors of published empirical studies. Most journals
in economics require authors of published papers to make their datasets generally available. For
example, in its instructions for submission, Econometrica states:
Econometrica has the policy that all empirical, experimental and simulation results must
be replicable. Therefore, authors of accepted papers must submit data sets, programs,
and information on empirical analysis, experiments and simulations that are needed for
replication and some limited sensitivity analysis.
The American Economic Review states:
All data used in analysis must be made available to any researcher for purposes of
replication.
The Journal of Political Economy states:
It is the policy of the Journal of Political Economy to publish papers only if the data
used in the analysis are clearly and precisely documented and are readily available to
any researcher for purposes of replication.
If you are interested in using the data from a published paper, first check the journal’s website,
as many journals archive data and replication programs online. Second, check the website(s) of
the paper’s author(s). Most academic economists maintain webpages, and some make available
replication files complete with data and programs. If these investigations fail, email the author(s),
politely requesting the data. You may need to be persistent.
As a matter of professional etiquette, all authors absolutely have the obligation to make their
data and programs available. Unfortunately, many fail to do so, and typically for poor reasons.
The irony of the situation is that it is typically in the best interests of a scholar to make as much of
their work (including all data and programs) freely available, as this only increases the likelihood
of their work being cited and having an impact.
Keep this in mind as you start your own empirical project. Remember that as part of your end
product, you will need (and want) to provide all data and programs to the community of scholars.
The greatest form of flattery is to learn that another scholar has read your paper, wants to extend
your work, or wants to use your empirical methods. In addition, public openness provides a healthy
incentive for transparency and integrity in empirical analysis.
1.7 Econometric Software
Economists use a variety of econometric, statistical, and programming software.
STATA (www.stata.com) is a powerful statistical program with a broad set of pre-programmed
econometric and statistical tools. It is quite popular among economists, and is continuously being
updated with new methods. It is an excellent package for most econometric analysis, but is limited
when you want to use new or less-common econometric methods which have not yet been programmed.
R (www.r-project.org), GAUSS (www.aptech.com), MATLAB (www.mathworks.com), and Ox
(www.oxmetrics.net) are high-level matrix programming languages with a wide variety of built-in
statistical functions. Many econometric methods have been programmed in these languages and are
available on the web. The advantage of these packages is that you are in complete control of your
analysis, and it is easier to program new methods than in STATA. Some disadvantages are that
you have to do much of the programming yourself, programming complicated procedures takes
significant time, and programming errors are hard to prevent and difficult to detect and eliminate.
Of these languages, Gauss used to be quite popular among econometricians, but now Matlab is
more popular. A smaller but growing group of econometricians are enthusiastic fans of R, which,
uniquely among these languages, is open-source, user-contributed, and best of all, completely free!
For highly-intensive computational tasks, some economists write their programs in a standard
programming language such as Fortran or C. This can lead to major gains in computational speed,
at the cost of increased time in programming and debugging.
As these different packages have distinct advantages, many empirical economists end up using
more than one package. As a student of econometrics, you will learn at least one of these packages,
and probably more than one.
1.8 Reading the Manuscript
Chapter 2 covers the conditional expectation function and linear projection, the probabilistic
foundation for the regression models studied in the rest of the text. Chapters 3 through 9 deal with
the core linear regression and projection models, including Chapter 5’s review of large sample
asymptotic theory, material which should be familiar from an earlier course in statistics but is
included because of its central importance in econometric distribution theory. Chapter 10 introduces
the bootstrap. Chapters 11 through 13 cover nonparametric regression, series estimation, and
quantile regression. Chapters 14 through 16 deal with the Generalized Method of Moments, empirical
likelihood and endogeneity. Chapters 17 and 18 cover time series, and Chapters 19 through 21 cover
limited dependent variables, panel data, and nonparametric density estimation. Reviews of matrix
algebra, probability theory, maximum likelihood, and numerical optimization can be found in the
appendix.
Technical sections which may not be of interest to all readers are marked with an asterisk (*).
Chapter 2
Conditional Expectation and
Projection
2.1 Introduction
The most commonly applied econometric tool is least-squares estimation, also known as regres-
sion. As we will see, least-squares is a tool to estimate an approximate conditional mean of one
variable (the dependent variable) given another set of variables (the regressors, conditioning
variables, or covariates).
In this chapter we abstract from estimation, and focus on the probabilistic foundation of the
conditional expectation model and its projection approximation.
2.2 The Distribution of Wages
Suppose that we are interested in wage rates in the United States. Since wage rates vary across
workers, we cannot describe wage rates by a single number. Instead, we can describe wages using a
probability distribution. Formally, we view the wage of an individual worker as a random variable
wage with the probability distribution

F(u) = Pr(wage ≤ u).

When we say that a person’s wage is random we mean that we do not know their wage before it is
measured, and we treat observed wage rates as realizations from the distribution F. Treating un-
observed wages as random variables and observed wages as realizations is a powerful mathematical
abstraction which allows us to use the tools of mathematical probability.
A useful thought experiment is to imagine dialing a telephone number selected at random, and
then asking the person who responds to tell us their wage rate. (Assume for simplicity that all
workers have equal access to telephones, and that the person who answers your call will respond
honestly.) In this thought experiment, the wage of the person you have called is a single draw from
the distribution F of wages in the population. By making many such phone calls we can learn the
distribution F of the entire population.
When a distribution function F is differentiable we define the probability density function

f(u) = (d/du) F(u).

The density contains the same information as the distribution function, but the density is typically
easier to visually interpret.
[Figure omitted: left panel, wage distribution function; right panel, wage density; horizontal axes in dollars per hour.]
Figure 2.1: Wage Distribution and Density. All full-time U.S. workers
In Figure 2.1 we display estimates¹ of the probability distribution function (on the left) and
density function (on the right) of U.S. wage rates in 2009. We see that the density is peaked around
density function (on the right) of U.S. wage rates in 2009. We see that the density is peaked around
$15, and most of the probability mass appears to lie between $10 and $40. These are ranges for
typical wage rates in the U.S. population.
Important measures of central tendency are the median and the mean. The median m of a
continuous² distribution F is the unique solution to

F(m) = 1/2.

The median U.S. wage ($19.23) is indicated in the left panel of Figure 2.1 by the arrow. The median
is a robust³ measure of central tendency, but it is tricky to use for many calculations as it is not a
linear operator.
The expectation or mean of a random variable y with density f is

μ = E(y) = ∫_{−∞}^{∞} u f(u) du.

A general definition of the mean is presented in Section 2.30. The mean U.S. wage ($23.90) is
indicated in the right panel of Figure 2.1 by the arrow. Here we have used the common and
convenient convention of using the single character y to denote a random variable, rather than the
more cumbersome label wage.
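As a quick numerical check of this formula (not part of the text; the log-normal density and its parameter values are assumptions chosen only for illustration), the integral can be evaluated numerically and compared with the known closed-form mean of a log-normal distribution.

```python
# A minimal sketch: the mean as the integral mu = E(y) = \int u f(u) du,
# verified for a log-normal density, whose closed-form mean is exp(m + s^2/2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import lognorm

m, s = 3.0, 0.6                            # hypothetical parameters of log(y) ~ N(m, s^2)
f = lognorm(s=s, scale=np.exp(m)).pdf      # density f(u) of y

mu_numeric, _ = quad(lambda u: u * f(u), 0.0, np.inf)   # \int u f(u) du
mu_closed = np.exp(m + s**2 / 2)
print(mu_numeric, mu_closed)               # both approximately 24.0
```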
The mean is a convenient measure of central tendency because it is a linear operator and
arises naturally in many economic models. A disadvantage of the mean is that it is not robust,⁴
especially in the presence of substantial skewness or thick tails, which are both features of the wage
distribution, as can be seen easily in the right panel of Figure 2.1. Another way of viewing this
is that 64% of workers earn less than the mean wage of $23.90, suggesting that it is incorrect to
describe the mean as a “typical” wage rate.
¹ The distribution and density are estimated nonparametrically from the sample of 50,742 full-time non-military wage-earners reported in the March 2009 Current Population Survey. The wage rate is constructed as individual wage and salary earnings divided by hours worked.
² If F is not continuous the definition is m = inf{u : F(u) ≥ 1/2}.
³ The median is not sensitive to perturbations in the tails of the distribution.
⁴ The mean is sensitive to perturbations in the tails of the distribution.
[Figure omitted: density of log hourly wages; horizontal axis in log dollars per hour.]
Figure 2.2: Log Wage Density
In this context it is useful to transform the data by taking the natural logarithm.⁵ Figure 2.2
shows the density of log hourly wages log(wage) for the same population, with its mean 2.95 drawn
in with the arrow. The density of log wages is much less skewed and fat-tailed than the density of
the level of wages, so its mean

E(log(wage)) = 2.95

is a much better (more robust) measure⁶ of central tendency of the distribution. For this reason,
wage regressions typically use log wages as a dependent variable rather than the level of wages.
Another useful way to summarize the probability distribution 1(n) is in terms of its quantiles.
For any c ¸ (0, 1), the c’th quantile of the continuous
7
distribution 1 is the real number ¡
c
which
satis…es
1 (¡
c
) = c.
The quantile function ¡
c
, viewed as a function of c, is the inverse of the distribution function 1.
The most commonly used quantile is the median, that is, ¡
0.5
= :. We sometimes refer to quantiles
by the percentile representation of c, and in this case they are often called percentiles, e.g. the
median is the 50
tI
percentile.
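As a small numerical illustration (separate from the CPS estimates discussed above), the following Python sketch draws a right-skewed "wage" sample from a hypothetical log-normal distribution and computes the mean, median, and several quantiles. The distribution parameters are assumptions chosen only to mimic skewness; the sketch simply shows why the mean exceeds the median under right skewness.

# A minimal sketch, assuming a hypothetical log-normal wage distribution
# (not the CPS data used in the text).
import numpy as np

rng = np.random.default_rng(0)
wage = np.exp(rng.normal(loc=2.95, scale=0.60, size=50_000))  # hypothetical parameters

print("mean     :", wage.mean())                  # pulled up by the right tail
print("median   :", np.median(wage))              # the 0.5 quantile
print("quantiles:", np.quantile(wage, [0.1, 0.25, 0.5, 0.75, 0.9]))
print("share earning less than the mean:", (wage < wage.mean()).mean())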
2.3 Conditional Expectation
We saw in Figure 2.2 the density of log wages. Is this distribution the same for all workers, or does the wage distribution vary across subpopulations? To answer this question, we can compare wage distributions for different groups – for example, men and women. The plot on the left in Figure 2.3 displays the densities of log wages for U.S. men and women with their means (3.05 and 2.81) indicated by the arrows. We can see that the two wage densities take similar shapes but the density for men is somewhat shifted to the right with a higher mean.

The values 3.05 and 2.81 are the mean log wages in the subpopulations of men and women workers. They are called the conditional means (or conditional expectations) of log wages given gender.

[5] Throughout the text, we will use log(y) to denote the natural logarithm of y.
[6] More precisely, the geometric mean exp(E(log w)) = $19.11 is a robust measure of central tendency.
[7] If F is not continuous the definition is q_α = inf{u : F(u) ≥ α}.
Figure 2.3: Left: Log Wage Density for Women and Men. Right: Log Wage Density by Gender
and Race
We can write their specific values as

E(log(wage) | gender = man) = 3.05        (2.1)
E(log(wage) | gender = woman) = 2.81.     (2.2)

We call these means conditional as they are conditioning on a fixed value of the variable gender. While you might not think of a person's gender as a random variable, it is random from the viewpoint of econometric analysis. If you randomly select an individual, the gender of the individual is unknown and thus random. (In the population of U.S. workers, the probability that a worker is a woman happens to be 43%.) In observational data, it is most appropriate to view all measurements as random variables, and the means of subpopulations are then conditional means.

As the two densities in Figure 2.3 appear similar, a hasty inference might be that there is not a meaningful difference between the wage distributions of men and women. Before jumping to this conclusion let us examine the differences in the distributions of Figure 2.3 more carefully. As we mentioned above, the primary difference between the two densities appears to be their means. This difference equals

E(log(wage) | gender = man) − E(log(wage) | gender = woman) = 3.05 − 2.81 = 0.24.    (2.3)

A difference in expected log wages of 0.24 implies an average 24% difference between the wages of men and women, which is quite substantial. (For an explanation of logarithmic and percentage differences see Section 2.4.)

Consider further splitting the men and women subpopulations by race, dividing the population into whites, blacks, and other races. We display the log wage density functions of four of these groups on the right in Figure 2.3. Again we see that the primary difference between the four density functions is their central tendency.

Focusing on the means of these distributions, Table 2.1 reports the mean log wage for each of the six sub-populations.
            men     women
  white     3.07    2.82
  black     2.86    2.73
  other     3.03    2.86

  Table 2.1: Mean Log Wages by Sex and Race

The entries in Table 2.1 are the conditional means of log(wage) given gender and race. For example

E(log(wage) | gender = man, race = white) = 3.07

and

E(log(wage) | gender = woman, race = black) = 2.73.

One benefit of focusing on conditional means is that they reduce complicated distributions to a single summary measure, and thereby facilitate comparisons across groups. Because of this simplifying property, conditional means are the primary interest of regression analysis and are a major focus in econometrics.

Table 2.1 allows us to easily calculate average wage differences between groups. For example, we can see that the wage gap between men and women continues after disaggregation by race, as the average gap between white men and white women is 25%, and that between black men and black women is 13%. We also can see that there is a race gap, as the average wages of blacks are substantially less than the other race categories. In particular, the average wage gap between white men and black men is 21%, and that between white women and black women is 9%.
2.4 Log Differences*
A useful approximation for the natural logarithm for small x is

log(1 + x) ≈ x.    (2.4)

This can be derived from the infinite series expansion of log(1 + x):

log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ···
           = x + O(x²).

The symbol O(x²) means that the remainder is bounded by Ax² as x → 0 for some A < ∞. A plot of log(1 + x) and the linear approximation x is shown in the following figure. We can see that log(1 + x) and the linear approximation x are very close for |x| ≤ 0.1, and reasonably close for |x| ≤ 0.2, but the difference increases with |x|.

[Figure: log(1 + x) and the linear approximation x, plotted for x between −0.4 and 0.4.]
Now, if y* is c% greater than y, then

y* = (1 + c/100) y.

Taking natural logarithms,

log y* = log y + log(1 + c/100)

or

log y* − log y = log(1 + c/100) ≈ c/100

where the approximation is (2.4). This shows that 100 multiplied by the difference in logarithms is approximately the percentage difference between y and y*, and this approximation is quite good for |c| ≤ 10.
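A minimal numerical check of this claim is easy to carry out; the base value and the list of percentage differences below are arbitrary illustrative choices, not values from the text.

# A minimal sketch checking approximation (2.4): compare 100*(log y* - log y)
# with the exact percentage difference c.
import numpy as np

y = 100.0                        # arbitrary base value
for c in [1, 5, 10, 20, 50]:     # illustrative percentage differences
    y_star = (1 + c / 100) * y
    log_diff = 100 * (np.log(y_star) - np.log(y))
    print(f"c = {c:3d}%   100*(log y* - log y) = {log_diff:6.2f}")

The printed values stay within a few hundredths of c for |c| ≤ 10 and drift away as c grows, matching the discussion above.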
2.5 Conditional Expectation Function
An important determinant of wage levels is education. In many empirical studies economists measure educational attainment by the number of years of schooling, and we will write this variable as education.[8]

The conditional mean of log wages given gender, race, and education is a single number for each category. For example

E(log(wage) | gender = man, race = white, education = 12) = 2.84.

We display in Figure 2.4 the conditional means of log(wage) for white men and white women as a function of education. The plot is quite revealing. We see that the conditional mean is increasing in years of education, but at a different rate for schooling levels above and below nine years. Another striking feature of Figure 2.4 is that the gap between men and women is roughly constant for all education levels. As the variables are measured in logs this implies a constant average percentage gap between men and women regardless of educational attainment.
Figure 2.4: Mean Log Wage as a Function of Years of Education
[8] Here, education is defined as years of schooling beyond kindergarten. A high school graduate has education=12, a college graduate has education=16, a Master's degree has education=18, and a professional degree (medical, law or PhD) has education=20.
In many cases it is convenient to simplify the notation by writing variables using single characters, typically y, x and/or z. It is conventional in econometrics to denote the dependent variable (e.g. log(wage)) by the letter y, a conditioning variable (such as gender) by the letter x, and multiple conditioning variables (such as race, education and gender) by the subscripted letters x_1, x_2, ..., x_k.

Conditional expectations can be written with the generic notation

E(y | x_1, x_2, ..., x_k) = m(x_1, x_2, ..., x_k).

We call this the conditional expectation function (CEF). The CEF is a function of (x_1, x_2, ..., x_k) as it varies with the variables. For example, the conditional expectation of y = log(wage) given (x_1, x_2) = (gender, race) is given by the six entries of Table 2.1. The CEF is a function of (gender, race) as it varies across the entries.

For greater compactness, we will typically write the conditioning variables as a vector in R^k:

x = (x_1, x_2, ..., x_k)'.    (2.5)

Here we follow the convention of using lower case bold italics x to denote a vector. Given this notation, the CEF can be compactly written as

E(y | x) = m(x).

The CEF E(y | x) is a random variable as it is a function of the random variable x. It is also sometimes useful to view the CEF as a function of x. In this case we can write m(u) = E(y | x = u), which is a function of the argument u. The expression E(y | x = u) is the conditional expectation of y, given that we know that the random variable x equals the specific value u. However, sometimes in econometrics we take a notational shortcut and use E(y | x) to refer to this function. Hopefully, the use of E(y | x) should be apparent from the context.
2.6 Continuous Variables
In the previous sections, we implicitly assumed that the conditioning variables are discrete. However, many conditioning variables are continuous. In this section, we take up this case and assume that the variables (y, x) are continuously distributed with a joint density function f(y, x).

As an example, take y = log(wage) and x = experience, the number of years of labor market experience. The contours of their joint density are plotted on the left side of Figure 2.5 for the population of white men with 12 years of education.

Given the joint density f(y, x) the variable x has the marginal density

f_x(x) = ∫_R f(y, x) dy.

For any x such that f_x(x) > 0 the conditional density of y given x is defined as

f_{y|x}(y | x) = f(y, x) / f_x(x).    (2.6)

The conditional density is a slice of the joint density f(y, x) holding x fixed. We can visualize this by slicing the joint density function at a specific value of x parallel with the y-axis. For example, take the density contours on the left side of Figure 2.5 and slice through the contour plot at a specific value of experience. This gives us the conditional density of log(wage) for white men with 12 years of education and this level of experience.
Figure 2.5: Left: Joint density of log(wage) and experience and conditional mean of log(wage)
given experience for white men with education=12. Right: Conditional densities of log(wage) for
white men with education=12.
We do this for four levels of experience (5, 10, 25, and 40 years), and plot these densities on the right side of Figure 2.5. We can see that the distribution of wages shifts to the right and becomes more diffuse as experience increases from 5 to 10 years, and from 10 to 25 years, but there is little change from 25 to 40 years experience.

The CEF of y given x is the mean of the conditional density (2.6)

m(x) = E(y | x) = ∫_R y f_{y|x}(y | x) dy.    (2.7)

Intuitively, m(x) is the mean of y for the idealized subpopulation where the conditioning variables are fixed at x. This is idealized since x is continuously distributed so this subpopulation is infinitely small.

In Figure 2.5 the CEF of log(wage) given experience is plotted as the solid line. We can see that the CEF is a smooth but nonlinear function. The CEF is initially increasing in experience, flattens out around experience = 30, and then decreases for high levels of experience.
2.7 Law of Iterated Expectations
An extremely useful tool from probability theory is the law of iterated expectations. An important special case is known as the Simple Law.

Theorem 2.7.1 Simple Law of Iterated Expectations
If E|y| < ∞ then for any random vector x,

E(E(y | x)) = E(y).

The simple law states that the expectation of the conditional expectation is the unconditional expectation. In other words, the average of the conditional averages is the unconditional average. When x is discrete

E(E(y | x)) = Σ_{j=1}^∞ E(y | x_j) Pr(x = x_j)

and when x is continuous

E(E(y | x)) = ∫_{R^k} E(y | x) f_x(x) dx.

Going back to our investigation of average log wages for men and women, the simple law states that

E(log(wage) | gender = man) Pr(gender = man)
  + E(log(wage) | gender = woman) Pr(gender = woman)
  = E(log(wage)).

Or numerically,

3.05 × 0.57 + 2.79 × 0.43 = 2.92.
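The simple law is also easy to see by simulation. The sketch below uses a hypothetical two-group population (the group proportion, means, and spread are assumptions loosely patterned on the gender example, not the CPS estimates) and checks that the probability-weighted average of the group means matches the overall mean.

# A minimal simulation sketch of the Simple Law (Theorem 2.7.1),
# under hypothetical group proportions and distributions.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
group = rng.random(n) < 0.43                              # True with probability 0.43
y = np.where(group, rng.normal(2.81, 0.5, n), rng.normal(3.05, 0.5, n))

cond_means = np.array([y[~group].mean(), y[group].mean()])
probs = np.array([(~group).mean(), group.mean()])
print("weighted average of conditional means:", cond_means @ probs)
print("unconditional mean                   :", y.mean())   # agrees up to simulation error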
The general law of iterated expectations allows two sets of conditioning variables.

Theorem 2.7.2 Law of Iterated Expectations
If E|y| < ∞ then for any random vectors x_1 and x_2,

E(E(y | x_1, x_2) | x_1) = E(y | x_1).

Notice the way the law is applied. The inner expectation conditions on x_1 and x_2, while the outer expectation conditions only on x_1. The iterated expectation yields the simple answer E(y | x_1), the expectation conditional on x_1 alone. Sometimes we phrase this as: "The smaller information set wins."

As an example

E(log(wage) | gender = man, race = white) Pr(race = white | gender = man)
  + E(log(wage) | gender = man, race = black) Pr(race = black | gender = man)
  + E(log(wage) | gender = man, race = other) Pr(race = other | gender = man)
  = E(log(wage) | gender = man)

or numerically

3.07 × 0.84 + 2.86 × 0.08 + 3.05 × 0.08 = 3.05.

A property of conditional expectations is that when you condition on a random vector x you can effectively treat it as if it is constant. For example, E(x | x) = x and E(g(x) | x) = g(x) for any function g(·). The general property is known as the conditioning theorem.

Theorem 2.7.3 Conditioning Theorem
If

E|g(x) y| < ∞    (2.8)

then

E(g(x) y | x) = g(x) E(y | x)    (2.9)

and

E(g(x) y) = E(g(x) E(y | x)).    (2.10)

The proofs of Theorems 2.7.1, 2.7.2 and 2.7.3 are given in Section 2.33.
2.8 CEF Error

The CEF error e is defined as the difference between y and the CEF evaluated at the random vector x:

e = y − m(x).

By construction, this yields the formula

y = m(x) + e.    (2.11)

In (2.11) it is useful to understand that the error e is derived from the joint distribution of (y, x), and so its properties are derived from this construction.

A key property of the CEF error is that it has a conditional mean of zero. To see this, by the linearity of expectations, the definition m(x) = E(y | x) and the Conditioning Theorem,

E(e | x) = E((y − m(x)) | x)
         = E(y | x) − E(m(x) | x)
         = m(x) − m(x)
         = 0.

This fact can be combined with the law of iterated expectations to show that the unconditional mean is also zero:

E(e) = E(E(e | x)) = E(0) = 0.

We state this and some other results formally.

Theorem 2.8.1 Properties of the CEF error
If E|y| < ∞ then

1. E(e | x) = 0.
2. E(e) = 0.
3. If E|y|^r < ∞ for r ≥ 1 then E|e|^r < ∞.
4. For any function h(x) such that E|h(x) e| < ∞ then E(h(x) e) = 0.

The proof of the third result is deferred to Section 2.33.

The fourth result, whose proof is left to Exercise 2.3, says that e is uncorrelated with any function of the regressors.

The equations

y = m(x) + e
E(e | x) = 0

together imply that m(x) is the CEF of y given x. It is important to understand that this is not a restriction. These equations hold true by definition.

The condition E(e | x) = 0 is implied by the definition of e as the difference between y and the CEF m(x). The equation E(e | x) = 0 is sometimes called a conditional mean restriction, since the conditional mean of the error e is restricted to equal zero. The property is also sometimes called mean independence, for the conditional mean of e is 0 and thus independent of x. However, it does not imply that the distribution of e is independent of x. Sometimes the assumption "e is independent of x" is added as a convenient simplification, but it is not a generic feature of the conditional mean. Typically and generally, e and x are jointly dependent, even though the conditional mean of e is zero.
Figure 2.6: Joint density of CEF error e and experience for white men with education=12.
As an example, the contours of the joint density of e and experience are plotted in Figure 2.6 for the same population as Figure 2.5. The error e has a conditional mean of zero for all values of experience, but the shape of the conditional distribution varies with the level of experience.

As a simple example of a case where x and e are mean independent yet dependent, let e = xε where x and ε are independent N(0, 1). Then conditional on x, the error e has the distribution N(0, x²). Thus E(e | x) = 0 and e is mean independent of x, yet e is not fully independent of x. Mean independence does not imply full independence.
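This example can be checked directly by simulation. The sketch below slices simulated data by |x| and reports the conditional mean and variance of e within each slice (the slice boundaries are arbitrary illustrative choices): the mean stays near zero everywhere, while the variance clearly depends on x.

# A minimal simulation sketch of e = x*eps with x and eps independent N(0,1).
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
e = x * eps

for lo, hi in [(0.0, 0.5), (1.5, 2.0)]:            # two illustrative slices of |x|
    sel = (np.abs(x) >= lo) & (np.abs(x) < hi)
    print(f"|x| in [{lo}, {hi}):  mean(e) = {e[sel].mean():+.3f}   var(e) = {e[sel].var():.3f}")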
2.9 Regression Variance
An important measure of the dispersion about the CEF function is the unconditional variance of the CEF error e. We write this as

σ² = var(e) = E((e − E(e))²) = E(e²).

Theorem 2.8.1.3 implies the following simple but useful result.

Theorem 2.9.1 If E(y²) < ∞ then σ² < ∞.

We can call σ² the regression variance or the variance of the regression error. The magnitude of σ² measures the amount of variation in y which is not "explained" or accounted for in the conditional mean E(y | x).

The regression variance depends on the regressors x. Consider two regressions

y = E(y | x_1) + e_1
y = E(y | x_1, x_2) + e_2.

We write the two errors distinctly as e_1 and e_2 as they are different – changing the conditioning information changes the conditional mean and therefore the regression error as well.

In our discussion of iterated expectations, we have seen that by increasing the conditioning set, the conditional expectation reveals greater detail about the distribution of y. What is the implication for the regression error?
It turns out that there is a simple relationship. We can think of the conditional mean E(y | x) as the "explained portion" of y. The remainder e = y − E(y | x) is the "unexplained portion". The simple relationship we now derive shows that the variance of this unexplained portion decreases when we condition on more variables. This relationship is monotonic in the sense that increasing the amount of information always decreases the variance of the unexplained portion.

Theorem 2.9.2 If E(y²) < ∞ then

var(y) ≥ var(y − E(y | x_1)) ≥ var(y − E(y | x_1, x_2)).

Theorem 2.9.2 says that the variance of the difference between y and its conditional mean (weakly) decreases whenever an additional variable is added to the conditioning information. The proof of Theorem 2.9.2 is given in Section 2.33.
2.10 Best Predictor
Suppose that given a realized value of x, we want to create a prediction or forecast of y. We can write any predictor as a function g(x) of x. The prediction error is the realized difference y − g(x). A non-stochastic measure of the magnitude of the prediction error is the expectation of its square

E(y − g(x))².    (2.12)

We can define the best predictor as the function g(x) which minimizes (2.12). What function is the best predictor? It turns out that the answer is the CEF m(x). This holds regardless of the joint distribution of (y, x).

To see this, note that the mean squared error of a predictor g(x) is

E(y − g(x))² = E(e + m(x) − g(x))²
             = E(e²) + 2 E(e (m(x) − g(x))) + E(m(x) − g(x))²
             = E(e²) + E(m(x) − g(x))²
             ≥ E(e²)
             = E(y − m(x))²

where the first equality makes the substitution y = m(x) + e and the third equality uses Theorem 2.8.1.4. The right-hand-side after the third equality is minimized by setting g(x) = m(x), yielding the final inequality. The minimum is finite under the assumption E(y²) < ∞ as shown by Theorem 2.9.1.

We state this formally in the following result.

Theorem 2.10.1 Conditional Mean as Best Predictor
If E(y²) < ∞, then for any predictor g(x),

E(y − g(x))² ≥ E(y − m(x))²

where m(x) = E(y | x).
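The following sketch illustrates Theorem 2.10.1 on a hypothetical data generating process whose CEF is m(x) = x²; the process and the alternative predictors are assumptions chosen only for illustration. Any predictor other than the CEF, linear or not, has larger mean squared error.

# A minimal simulation sketch: the CEF has the smallest mean squared error.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.standard_normal(n)
y = x**2 + rng.standard_normal(n)                  # CEF is m(x) = x^2, error variance 1

mse = lambda pred: np.mean((y - pred) ** 2)
print("MSE of m(x) = x^2      :", mse(x**2))       # approximately 1
print("MSE of the constant 1  :", mse(np.ones(n))) # here the best *linear* predictor is E(y) = 1
print("MSE of g(x) = 2x       :", mse(2 * x))      # an arbitrary alternative predictor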
2.11 Conditional Variance
While the conditional mean is a good measure of the location of a conditional distribution, it does not provide information about the spread of the distribution. A common measure of the dispersion is the conditional variance.

Definition 2.11.1 If E(y²) < ∞, the conditional variance of y given x is

σ²(x) = var(y | x) = E((y − E(y | x))² | x) = E(e² | x).

Generally, σ²(x) is a non-trivial function of x and can take any form subject to the restriction that it is non-negative. The conditional standard deviation is its square root σ(x) = √σ²(x). One way to think about σ²(x) is that it is the conditional mean of e² given x.

As an example of how the conditional variance depends on observables, compare the conditional log wage densities for men and women displayed in Figure 2.3. The difference between the densities is not purely a location shift, but is also a difference in spread. Specifically, we can see that the density for men's log wages is somewhat more spread out than that for women, while the density for women's wages is somewhat more peaked. Indeed, the conditional standard deviation for men's wages is 3.05 and that for women is 2.81. So while men have higher average wages, they are also somewhat more dispersed.
The unconditional error variance and the conditional variance are related by the law of iterated expectations

σ² = E(e²) = E(E(e² | x)) = E(σ²(x)).

That is, the unconditional error variance is the average conditional variance.

Given the conditional variance, we can define a rescaled error

ε = e / σ(x).    (2.13)

We can calculate that since σ(x) is a function of x,

E(ε | x) = E(e / σ(x) | x) = (1 / σ(x)) E(e | x) = 0

and

var(ε | x) = E(ε² | x) = E(e² / σ²(x) | x) = (1 / σ²(x)) E(e² | x) = σ²(x) / σ²(x) = 1.

Thus ε has a conditional mean of zero, and a conditional variance of 1.

Notice that (2.13) can be rewritten as

e = σ(x) ε,

and substituting this for e in the CEF equation (2.11), we find that

y = m(x) + σ(x) ε.    (2.14)

This is an alternative (mean-variance) representation of the CEF equation.
Many econometric studies focus on the conditional mean m(x) and either ignore the conditional variance σ²(x), treat it as a constant σ²(x) = σ², or treat it as a nuisance parameter (a parameter not of primary interest). This is appropriate when the primary variation in the conditional distribution is in the mean, but can be short-sighted in other cases. Dispersion is relevant to many economic topics, including income and wealth distribution, economic inequality, and price dispersion. Conditional dispersion (variance) can be a fruitful subject for investigation.

The perverse consequences of a narrow-minded focus on the mean have been parodied in a classic joke:

    An economist was standing with one foot in a bucket of boiling water
    and the other foot in a bucket of ice. When asked how he felt, he
    replied, "On average I feel just fine."

Clearly, the economist in question ignored variance!
2.12 Homoskedasticity and Heteroskedasticity
An important special case obtains when the conditional variance σ²(x) is a constant and independent of x. This is called homoskedasticity.

Definition 2.12.1 The error is homoskedastic if E(e² | x) = σ² does not depend on x.

In the general case where σ²(x) depends on x we say that the error e is heteroskedastic.

Definition 2.12.2 The error is heteroskedastic if E(e² | x) = σ²(x) depends on x.

It is helpful to understand that the concepts homoskedasticity and heteroskedasticity concern the conditional variance, not the unconditional variance. By definition, the unconditional variance σ² is a constant and independent of the regressors x. So when we talk about the variance as a function of the regressors, we are talking about the conditional variance σ²(x).

Some older or introductory textbooks describe heteroskedasticity as the case where "the variance of e varies across observations". This is a poor and confusing definition. It is more constructive to understand that heteroskedasticity means that the conditional variance σ²(x) depends on observables.

Older textbooks also tend to describe homoskedasticity as a component of a correct regression specification, and describe heteroskedasticity as an exception or deviance. This description has influenced many generations of economists, but it is unfortunately backwards. The correct view is that heteroskedasticity is generic and "standard", while homoskedasticity is unusual and exceptional. The default in empirical work should be to assume that the errors are heteroskedastic, not the converse.

In apparent contradiction to the above statement, we will still frequently impose the homoskedasticity assumption when making theoretical investigations into the properties of estimation and inference methods. The reason is that in many cases homoskedasticity greatly simplifies the theoretical calculations, and it is therefore quite advantageous for teaching and learning. It should always be remembered, however, that homoskedasticity is never imposed because it is believed to be a correct feature of an empirical model, but rather because of its simplicity.
2.13 Regression Derivative
One way to interpret the CEF m(x) = E(y | x) is in terms of how marginal changes in the regressors x imply changes in the conditional mean of the response variable y. It is typical to consider marginal changes in a single regressor, say x_1, holding the remainder fixed. When a regressor x_1 is continuously distributed, we define the marginal effect of a change in x_1, holding the variables x_2, ..., x_k fixed, as the partial derivative of the CEF

∂m(x_1, ..., x_k)/∂x_1.

When x_1 is discrete we define the marginal effect as a discrete difference. For example, if x_1 is binary, then the marginal effect of x_1 on the CEF is

m(1, x_2, ..., x_k) − m(0, x_2, ..., x_k).

We can unify the continuous and discrete cases with the notation

∇_1 m(x) = ∂m(x_1, ..., x_k)/∂x_1,                       if x_1 is continuous
         = m(1, x_2, ..., x_k) − m(0, x_2, ..., x_k),    if x_1 is binary.

Collecting the k effects into one k × 1 vector, we define the regression derivative of x on y:

∇m(x) = (∇_1 m(x), ∇_2 m(x), ..., ∇_k m(x))'.

When all elements of x are continuous, then we have the simplification ∇m(x) = ∂m(x)/∂x, the vector of partial derivatives.

There are two important points to remember concerning our definition of the regression derivative.

First, the effect of each variable is calculated holding the other variables constant. This is the ceteris paribus concept commonly used in economics. But in the case of a regression derivative, the conditional mean does not literally hold all else constant. It only holds constant the variables included in the conditional mean. This means that the regression derivative depends on which regressors are included. For example, in a regression of wages on education, experience, race and gender, the regression derivative with respect to education shows the marginal effect of education on wages, holding constant a person's observable characteristics experience, race and gender. But it does not hold constant a person's unobservable characteristics (such as ability), or variables not included in the regression (such as the quality of education).

Second, the regression derivative is the change in the conditional expectation of y, not the change in the actual value of y for an individual. It is tempting to think of the regression derivative as the change in the actual value of y, but this is not a correct interpretation. The regression derivative ∇m(x) is the change in the actual value of y only if the error e is unaffected by the change in the regressor x. We return to a discussion of causal effects in Section 2.29.
2.14 Linear CEF
An important special case is when the CEF m(x) = E(y | x) is linear in x. In this case we can write the mean equation as

m(x) = x_1 β_1 + x_2 β_2 + ··· + x_k β_k + α.

Notationally it is convenient to write this as a simple function of the vector x. An easy way to do so is to augment the regressor vector x by listing the number "1" as an element. We call this the "constant" and the corresponding coefficient is called the "intercept". Equivalently, assuming that the final element[9] of the vector x is the intercept, then x_k = 1. Thus (2.5) has been redefined as the k × 1 vector

x = (x_1, x_2, ..., x_{k−1}, 1)'.    (2.15)

With this redefinition, then the CEF is

m(x) = x_1 β_1 + x_2 β_2 + ··· + x_k β_k = x'β    (2.16)

where

β = (β_1, ..., β_k)'    (2.17)

is a k × 1 coefficient vector. This is the linear CEF model. It is also often called the linear regression model, or the regression of y on x.

In the linear CEF model, the regression derivative is simply the coefficient vector. That is

∇m(x) = β.

This is one of the appealing features of the linear CEF model. The coefficients have simple and natural interpretations as the marginal effects of changing one variable, holding the others constant.

Linear CEF Model

y = x'β + e
E(e | x) = 0

If in addition the error is homoskedastic, we call this the homoskedastic linear CEF model.

Homoskedastic Linear CEF Model

y = x'β + e
E(e | x) = 0
E(e² | x) = σ²

[9] The order doesn't matter. It could be any element.
2.15 Linear CEF with Nonlinear Effects
The linear CEF model of the previous section is less restrictive than it might appear, as we can include as regressors nonlinear transformations of the original variables. In this sense, the linear CEF framework is flexible and can capture many nonlinear effects.

For example, suppose we have two scalar variables x_1 and x_2. The CEF could take the quadratic form

m(x_1, x_2) = x_1 β_1 + x_2 β_2 + x_1² β_3 + x_2² β_4 + x_1 x_2 β_5 + β_6.    (2.18)

This equation is quadratic in the regressors (x_1, x_2) yet linear in the coefficients (β_1, ..., β_6). We will descriptively call (2.18) a quadratic CEF, and yet (2.18) is also a linear CEF in the sense of being linear in the coefficients.

To simplify the expression, we define the transformations x_3 = x_1², x_4 = x_2², x_5 = x_1 x_2, and x_6 = 1, and redefine the regressor vector as x = (x_1, ..., x_6)'. With this redefinition,

m(x_1, x_2) = x'β

which is linear in β. For most econometric purposes (estimation and inference on β) the linearity in β is all that is important.

An exception is in the analysis of regression derivatives. In nonlinear equations such as (2.18), the regression derivative should be defined with respect to the original variables, not with respect to the transformed variables. Thus

∂m(x_1, x_2)/∂x_1 = β_1 + 2 x_1 β_3 + x_2 β_5
∂m(x_1, x_2)/∂x_2 = β_2 + 2 x_2 β_4 + x_1 β_5.

We see that in the model (2.18), the regression derivatives are not a simple coefficient, but are functions of several coefficients plus the levels of (x_1, x_2). Consequently it is difficult to interpret the coefficients individually. It is more useful to interpret them as a group.

We typically call β_5 the interaction effect. Notice that it appears in both regression derivative equations, and has a symmetric interpretation in each. If β_5 > 0 then the regression derivative of x_1 on y is increasing in the level of x_2 (and the regression derivative of x_2 on y is increasing in the level of x_1), while if β_5 < 0 the reverse is true. It is worth noting that this symmetry is an artificial implication of the quadratic equation (2.18), and is not a general feature of nonlinear conditional means m(x_1, x_2).
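The derivative formulas above are straightforward to evaluate directly. The sketch below uses hypothetical coefficient values β_1, ..., β_6 (assumptions, not estimates from the text) and evaluates the regression derivative of the quadratic CEF (2.18) at two points, showing that it varies with (x_1, x_2).

# A minimal sketch of the regression derivative of the quadratic CEF (2.18),
# with hypothetical coefficient values.
import numpy as np

b1, b2, b3, b4, b5, b6 = 0.5, -0.3, 0.1, 0.05, 0.2, 1.0   # hypothetical beta_1, ..., beta_6

def m(x1, x2):
    # quadratic CEF (2.18): quadratic in (x1, x2), linear in the coefficients
    return x1*b1 + x2*b2 + x1**2*b3 + x2**2*b4 + x1*x2*b5 + b6

def regression_derivative(x1, x2):
    # analytic partial derivatives with respect to the original variables
    d1 = b1 + 2*x1*b3 + x2*b5
    d2 = b2 + 2*x2*b4 + x1*b5
    return np.array([d1, d2])

print(regression_derivative(1.0, 2.0))   # the derivative depends on the point (x1, x2)
print(regression_derivative(0.0, 0.0))   # at the origin it reduces to (b1, b2)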
2.16 Linear CEF with Dummy Variables
When all regressors take a finite set of values, it turns out the CEF can be written as a linear function of regressors.

The simplest example is a binary variable, which takes only two distinct values. For example, the variable gender takes only the values man and woman. Binary variables are extremely common in econometric applications, and are alternatively called dummy variables or indicator variables.

Consider the simple case of a single binary regressor. In this case, the conditional mean can only take two distinct values. For example,

E(y | gender) = μ_0 if gender = man
              = μ_1 if gender = woman.

To facilitate a mathematical treatment, we typically record dummy variables with the values {0, 1}. For example

x_1 = 0 if gender = man
    = 1 if gender = woman.    (2.19)

Given this notation we can write the conditional mean as a linear function of the dummy variable x_1, that is

E(y | x_1) = α + β x_1

where α = μ_0 and β = μ_1 − μ_0. In this simple regression equation the intercept α is equal to the conditional mean of y for the x_1 = 0 subpopulation (men) and the slope β is equal to the difference in the conditional means between the two subpopulations.

Equivalently, we could have defined x_1 as

x_1 = 1 if gender = man
    = 0 if gender = woman.    (2.20)

In this case, the regression intercept is the mean for women (rather than for men) and the regression slope has switched signs. The two regressions are equivalent but the interpretation of the coefficients has changed. Therefore it is always important to understand the precise definitions of the variables, and illuminating labels are helpful. For example, labelling x_1 as "gender" does not help distinguish between definitions (2.19) and (2.20). Instead, it is better to label x_1 as "women" or "female" if definition (2.19) is used, or as "men" or "male" if (2.20) is used.

Now suppose we have two dummy variables x_1 and x_2. For example, x_2 = 1 if the person is married, else x_2 = 0. The conditional mean given x_1 and x_2 takes at most four possible values:

E(y | x_1, x_2) = μ_00 if x_1 = 0 and x_2 = 0 (unmarried men)
                = μ_01 if x_1 = 0 and x_2 = 1 (married men)
                = μ_10 if x_1 = 1 and x_2 = 0 (unmarried women)
                = μ_11 if x_1 = 1 and x_2 = 1 (married women).

In this case we can write the conditional mean as a linear function of x_1, x_2 and their product x_1 x_2:

E(y | x_1, x_2) = α + β_1 x_1 + β_2 x_2 + β_3 x_1 x_2

where α = μ_00, β_1 = μ_10 − μ_00, β_2 = μ_01 − μ_00, and β_3 = μ_11 − μ_10 − μ_01 + μ_00.

We can view the coefficient β_1 as the effect of gender on expected log wages for unmarried wage earners, the coefficient β_2 as the effect of marriage on expected log wages for male wage earners, and the coefficient β_3 as the difference between the effects of marriage on expected log wages among women and among men. Alternatively, it can also be interpreted as the difference between the effects of gender on expected log wages among married and non-married wage earners. Both interpretations are equally valid. We often describe β_3 as measuring the interaction between the two dummy variables, or the interaction effect, and describe β_3 = 0 as the case when the interaction effect is zero.

In this setting we can see that the CEF is linear in the three variables (x_1, x_2, x_1 x_2). Thus to put the model in the framework of Section 2.14, we would define the regressor x_3 = x_1 x_2 and the regressor vector as

x = (x_1, x_2, x_3, 1)'.

So even though we started with only 2 dummy variables, the number of regressors (including the intercept) is 4.
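The sketch below verifies, for hypothetical cell means μ_00, μ_01, μ_10, μ_11 (assumed values, not estimates from the text), that the coefficients defined above exactly reproduce the four conditional means.

# A minimal check that alpha + b1*x1 + b2*x2 + b3*x1*x2 recovers the cell means.
mu00, mu01, mu10, mu11 = 3.1, 3.2, 2.8, 3.0       # hypothetical cell means

alpha = mu00
b1 = mu10 - mu00
b2 = mu01 - mu00
b3 = mu11 - mu10 - mu01 + mu00

for x1 in (0, 1):
    for x2 in (0, 1):
        fitted = alpha + b1*x1 + b2*x2 + b3*x1*x2
        print(f"x1={x1}, x2={x2}:  fitted = {fitted:.2f}")   # equals the corresponding mu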
If there are 3 dummy variables x_1, x_2, x_3, then E(y | x_1, x_2, x_3) takes at most 2³ = 8 distinct values and can be written as the linear function

E(y | x_1, x_2, x_3) = α + β_1 x_1 + β_2 x_2 + β_3 x_3 + β_4 x_1 x_2 + β_5 x_1 x_3 + β_6 x_2 x_3 + β_7 x_1 x_2 x_3

which has eight regressors including the intercept.

In general, if there are p dummy variables x_1, ..., x_p then the CEF E(y | x_1, x_2, ..., x_p) takes at most 2^p distinct values, and can be written as a linear function of the 2^p regressors including x_1, x_2, ..., x_p and all cross-products. This might be excessive in practice if p is modestly large. In the next section we will discuss projection approximations which yield more parsimonious parameterizations.

We started this section by saying that the conditional mean is linear whenever all regressors take only a finite number of possible values. How can we see this? Take a categorical variable, such as race. For example, we earlier divided race into three categories. We can record categorical variables using numbers to indicate each category, for example

x_3 = 1 if white
    = 2 if black
    = 3 if other.

When doing so, the values of x_3 have no meaning in terms of magnitude, they simply indicate the relevant category.

When the regressor is categorical the conditional mean of y given x_3 takes a distinct value for each possibility:

E(y | x_3) = μ_1 if x_3 = 1
           = μ_2 if x_3 = 2
           = μ_3 if x_3 = 3.

This is not a linear function of x_3 itself, but it can be made so by constructing dummy variables for two of the three categories. For example

x_4 = 1 if black
    = 0 if not black

x_5 = 1 if other
    = 0 if not other.

In this case, the categorical variable x_3 is equivalent to the pair of dummy variables (x_4, x_5). The explicit relationship is

x_3 = 1 if x_4 = 0 and x_5 = 0
    = 2 if x_4 = 1 and x_5 = 0
    = 3 if x_4 = 0 and x_5 = 1.

Given these transformations, we can write the conditional mean of y as a linear function of x_4 and x_5:

E(y | x_3) = E(y | x_4, x_5) = α + β_1 x_4 + β_2 x_5.

We can write the CEF as either E(y | x_3) or E(y | x_4, x_5) (they are equivalent), but it is only linear as a function of x_4 and x_5.

This setting is similar to the case of two dummy variables, with the difference that we have not included the interaction term x_4 x_5. This is because the event {x_4 = 1 and x_5 = 1} is empty by construction, so x_4 x_5 = 0 by definition.
2.17 Best Linear Predictor
While the conditional mean m(x) = E(y | x) is the best predictor of y among all functions of x, its functional form is typically unknown. In particular, the linear CEF model is empirically unlikely to be accurate unless x is discrete and low-dimensional so all interactions are included. Consequently in most cases it is more realistic to view the linear specification (2.16) as an approximation. In this section we derive a specific approximation with a simple interpretation.

Theorem 2.10.1 showed that the conditional mean m(x) is the best predictor in the sense that it has the lowest mean squared error among all predictors. By extension, we can define an approximation to the CEF by the linear function with the lowest mean squared error among all linear predictors.

For this derivation we require the following regularity condition.

Assumption 2.17.1
1. E(y²) < ∞.
2. E‖x‖² < ∞.
3. Q_xx = E(xx') is positive definite.

In Assumption 2.17.1.2 we use the notation ‖x‖ = (x'x)^{1/2} to denote the Euclidean length of the vector x.

The first two parts of Assumption 2.17.1 imply that the variables y and x have finite means, variances, and covariances. The third part of the assumption is more technical, and its role will become apparent shortly. It is equivalent to imposing that the columns of Q_xx = E(xx') are linearly independent, or equivalently that the matrix Q_xx is invertible.

A linear predictor for y is a function of the form x'β for some β ∈ R^k. The mean squared prediction error is

S(β) = E(y − x'β)².

The best linear predictor of y given x, written P(y | x), is found by selecting the vector β to minimize S(β).

Definition 2.17.1 The Best Linear Predictor of y given x is

P(y | x) = x'β

where β minimizes the mean squared prediction error

S(β) = E(y − x'β)².

The minimizer

β = argmin_{β ∈ R^k} S(β)    (2.21)

is called the Linear Projection Coefficient.

We now calculate an explicit expression for its value.
The mean squared prediction error can be written out as a quadratic function of β:

S(β) = E(y²) − 2β'E(xy) + β'E(xx')β.

The quadratic structure of S(β) means that we can solve explicitly for the minimizer. The first-order condition for minimization (from Appendix A.9) is

0 = ∂S(β)/∂β = −2E(xy) + 2E(xx')β.    (2.22)

Rewriting (2.22) as

2E(xy) = 2E(xx')β

and dividing by 2, this equation takes the form

Q_xy = Q_xx β    (2.23)

where Q_xy = E(xy) is k × 1 and Q_xx = E(xx') is k × k. The solution is found by inverting the matrix Q_xx, and is written

β = Q_xx^{-1} Q_xy

or

β = (E(xx'))^{-1} E(xy).    (2.24)

It is worth taking the time to understand the notation involved in the expression (2.24). Q_xx is a k × k matrix and Q_xy is a k × 1 column vector. Therefore, alternative expressions such as E(xy)/E(xx') or E(xy)(E(xx'))^{-1} are incoherent and incorrect. We also can now see the role of Assumption 2.17.1.3. It is necessary in order for the solution (2.24) to exist. Otherwise, there would be multiple solutions to the equation (2.23).

We now have an explicit expression for the best linear predictor:

P(y | x) = x'(E(xx'))^{-1} E(xy).

This expression is also referred to as the linear projection of y on x.

The projection error is

e = y − x'β.    (2.25)

This equals the error from the regression equation when (and only when) the conditional mean is linear in x, otherwise they are distinct.

Rewriting, we obtain a decomposition of y into linear predictor and error

y = x'β + e.    (2.26)

In general we call equation (2.26) or x'β the best linear predictor of y given x, or the linear projection of y on x. Equation (2.26) is also often called the regression of y on x but this can sometimes be confusing as economists use the term regression in many contexts. (Recall that we said in Section 2.14 that the linear CEF model is also called the linear regression model.)

An important property of the projection error e is

E(xe) = 0.    (2.27)

To see this, using the definitions (2.25) and (2.24) and the matrix properties AA^{-1} = I and Ia = a,

E(xe) = E(x(y − x'β)) = E(xy) − E(xx')(E(xx'))^{-1} E(xy) = 0    (2.28)

as claimed.
Equation (2.27) is a set of k equations, one for each regressor. In other words, (2.27) is equivalent to

E(x_j e) = 0    (2.29)

for j = 1, ..., k. As in (2.15), the regressor vector x typically contains a constant, e.g. x_k = 1. In this case (2.29) for j = k is the same as

E(e) = 0.    (2.30)

Thus the projection error has a mean of zero when the regressor vector contains a constant. (When x does not have a constant, (2.30) is not guaranteed. As it is desirable for e to have a zero mean, this is a good reason to always include a constant in any regression model.)

It is also useful to observe that since cov(x_j, e) = E(x_j e) − E(x_j) E(e), then (2.29)-(2.30) together imply that the variables x_j and e are uncorrelated.

This completes the derivation of the model. We summarize some of the most important properties.

Theorem 2.17.1 Properties of Linear Projection Model
Under Assumption 2.17.1,

1. The moments E(xx') and E(xy) exist with finite elements.
2. The Linear Projection Coefficient (2.21) exists, is unique, and equals
   β = (E(xx'))^{-1} E(xy).
3. The best linear predictor of y given x is
   P(y | x) = x'(E(xx'))^{-1} E(xy).
4. The projection error e = y − x'β exists and satisfies
   E(e²) < ∞ and E(xe) = 0.
5. If x contains a constant, then E(e) = 0.
6. If E|y|^r < ∞ and E‖x‖^r < ∞ for r ≥ 1 then E|e|^r < ∞.

A complete proof of Theorem 2.17.1 is given in Section 2.33.

It is useful to reflect on the generality of Theorem 2.17.1. The only restriction is Assumption 2.17.1. Thus for any random variables (y, x) with finite variances we can define a linear equation (2.26) with the properties listed in Theorem 2.17.1. Stronger assumptions (such as the linear CEF model) are not necessary. In this sense the linear model (2.26) exists quite generally. However, it is important not to misinterpret the generality of this statement. The linear equation (2.26) is defined as the best linear predictor. It is not necessarily a conditional mean, nor a parameter of a structural or causal economic model.

Linear Projection Model

y = x'β + e
E(xe) = 0
β = (E(xx'))^{-1} E(xy)
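In sample work the population moments in (2.24) are replaced by sample averages. The sketch below uses a hypothetical nonlinear data generating process (an assumption for illustration only); the projection coefficient is still well defined even though the CEF is not linear, and the computed error satisfies the orthogonality property (2.27) up to simulation error.

# A minimal sketch of the projection coefficient from sample analogs of Q_xx and Q_xy.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
z = rng.standard_normal(n)
x = np.column_stack([z, np.ones(n)])                  # regressor vector including a constant
y = np.exp(0.5 * z) + 0.3 * rng.standard_normal(n)    # hypothetical nonlinear conditional mean

Qxx = x.T @ x / n                                     # sample analog of E(xx')
Qxy = x.T @ y / n                                     # sample analog of E(xy)
beta = np.linalg.solve(Qxx, Qxy)                      # projection coefficient
e = y - x @ beta                                      # projection error

print("beta       :", beta)
print("E(xe) ~ 0  :", x.T @ e / n)                    # orthogonality property (2.27)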
We illustrate projection using three log wage equations introduced in earlier sections.

For our first example, we consider a model with the two dummy variables for gender and race similar to Table 2.1. As we learned in Section 2.16, the entries in this table can be equivalently expressed by a linear CEF. For simplicity, let's consider the CEF of log(wage) as a function of Black and Female.

E(log(wage) | Black, Female) = −0.20 Black − 0.24 Female + 0.10 Black × Female + 3.06.    (2.31)

This is a CEF as the variables are dummies and all interactions are included.

Now consider a simpler model omitting the interaction effect. This is the linear projection on the variables Black and Female

P(log(wage) | Black, Female) = −0.15 Black − 0.23 Female + 3.06.    (2.32)

What is the difference? The full CEF (2.31) shows that the race gap is differentiated by gender: it is 20% for black men (relative to non-black men) and 10% for black women (relative to non-black women). The projection model (2.32) simplifies this analysis, calculating an average 15% wage gap for blacks, ignoring the role of gender. Notice that this is despite the fact that the gender variable is included in (2.32).

For our second example we consider the CEF of log wages as a function of years of education for white men which was illustrated in Figure 2.4 and is repeated in Figure 2.7. Superimposed on the figure are two projections. The first (given by the dashed line) is the linear projection of log wages on years of education

P(log(wage) | Education) = 1.5 + 0.11 Education.

This simple equation indicates an average 11% increase in wages for every year of education. An inspection of the Figure shows that this approximation works well for education ≥ 9, but under-predicts for individuals with lower levels of education. To correct this imbalance we use a linear spline equation which allows different rates of return above and below 9 years of education:

P(log(wage) | Education, (Education − 9) × 1(Education > 9))
    = 2.3 + 0.02 Education + 0.10 × (Education − 9) × 1(Education > 9).

This equation is displayed in Figure 2.7 using the solid line, and appears to fit much better. It indicates a 2% increase in mean wages for every year of education below 9, and a 12% increase in mean wages for every year of education above 9. It is still an approximation to the conditional mean but it appears to be fairly reasonable.

Figure 2.7: Projections of log(wage) onto Education

For our third example we take the CEF of log wages as a function of years of experience for white men with 12 years of education, which was illustrated in Figure 2.5 and is repeated as the solid line in Figure 2.8. Superimposed on the figure are two projections. The first (given by the dot-dashed line) is the linear projection on experience

P(log(wage) | Experience) = 2.5 + 0.011 Experience
and the second (given by the dashed line) is the linear projection on experience and its square

P(log(wage) | Experience) = 2.3 + 0.046 Experience − 0.0007 Experience².

It is fairly clear from an examination of Figure 2.8 that the first linear projection is a poor approximation. It over-predicts wages for young and old workers, and under-predicts for the rest. Most importantly, it misses the strong downturn in expected wages for older wage-earners. The second projection fits much better. We can call this equation a quadratic projection since the function is quadratic in experience.
Figure 2.8: Linear and Quadratic Projections of log(wage) onto Experience
Invertibility and Identification

The linear projection coefficient β = (E(xx'))^{-1} E(xy) exists and is unique as long as the k × k matrix Q_xx = E(xx') is invertible. The matrix Q_xx is sometimes called the design matrix, as in experimental settings the researcher is able to control Q_xx by manipulating the distribution of the regressors x.

Observe that for any non-zero α ∈ R^k,

α'Q_xx α = E(α'xx'α) = E(α'x)² ≥ 0

so Q_xx by construction is positive semi-definite. The assumption that it is positive definite means that this is a strict inequality, E(α'x)² > 0. Equivalently, there cannot exist a non-zero vector α such that α'x = 0 identically. This occurs when redundant variables are included in x. Positive semi-definite matrices are invertible if and only if they are positive definite. When Q_xx is invertible then β = (E(xx'))^{-1} E(xy) exists and is uniquely defined. In other words, in order for β to be uniquely defined, we must exclude the degenerate situation of redundant variables.

Theorem 2.17.1 shows that the linear projection coefficient β is identified (uniquely determined) under Assumption 2.17.1. The key is invertibility of Q_xx. Otherwise, there is no unique solution to the equation

Q_xx β = Q_xy.    (2.33)

When Q_xx is not invertible there are multiple solutions to (2.33), all of which yield an equivalent best linear predictor x'β. In this case the coefficient β is not identified as it does not have a unique value. Even so, the best linear predictor x'β is still identified. One solution is to set

β = (E(xx'))^{−} E(xy)

where A^{−} denotes the generalized inverse of A (see Appendix A.5).
2.18 Linear Predictor Error Variance
As in the CEF model, we define the error variance as

σ² = E(e²).

Setting Q_yy = E(y²) and Q_yx = E(yx') we can write σ² as

σ² = E(y − x'β)²
   = E(y²) − 2E(yx')β + β'E(xx')β
   = Q_yy − 2Q_yx Q_xx^{-1} Q_xy + Q_yx Q_xx^{-1} Q_xx Q_xx^{-1} Q_xy
   = Q_yy − Q_yx Q_xx^{-1} Q_xy
   ≡ Q_{yy·x}    (2.34)

where the final line defines Q_{yy·x}. One useful feature of this formula is that it shows that Q_{yy·x} = Q_yy − Q_yx Q_xx^{-1} Q_xy equals the variance of the error from the linear projection of y on x.
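A quick numerical check of (2.34) on simulated data (a hypothetical design, assumed only for illustration):

# A minimal check that sigma^2 = Q_yy - Q_yx Q_xx^{-1} Q_xy equals the mean squared
# projection error when population moments are replaced by sample averages.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = np.column_stack([rng.standard_normal(n), np.ones(n)])
y = 1.0 + 0.5 * x[:, 0] + rng.standard_normal(n)

Qxx = x.T @ x / n
Qxy = x.T @ y / n
Qyy = y @ y / n
Qyx = Qxy                                           # Q_yx is the transpose of Q_xy

sigma2_formula = Qyy - Qyx @ np.linalg.solve(Qxx, Qxy)
e = y - x @ np.linalg.solve(Qxx, Qxy)
print("Q_yy - Q_yx Q_xx^{-1} Q_xy :", sigma2_formula)
print("mean of e^2                :", np.mean(e**2))    # the two agree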
2.19 Regression Coefficients
Sometimes it is useful to separate the intercept from the other regressors, and write the linear projection equation in the format

y = α + x'β + e    (2.35)

where α is the intercept and x does not contain a constant.

Taking expectations of this equation, we find

E(y) = E(α) + E(x'β) + E(e)

or

μ_y = α + μ_x'β

where μ_y = E(y) and μ_x = E(x), since E(e) = 0 from (2.30). Rearranging, we find

α = μ_y − μ_x'β.

Subtracting this equation from (2.35) we find

y − μ_y = (x − μ_x)'β + e,    (2.36)

a linear equation between the centered variables y − μ_y and x − μ_x. (They are centered at their means, so are mean-zero random variables.) Because x − μ_x is uncorrelated with e, (2.36) is also a linear projection, thus by the formula for the linear projection model,

β = (E((x − μ_x)(x − μ_x)'))^{-1} E((x − μ_x)(y − μ_y))
  = var(x)^{-1} cov(x, y),

a function only of the covariances[10] of x and y.

Theorem 2.19.1 In the linear projection model

y = α + x'β + e,

then

α = μ_y − μ_x'β    (2.37)

and

β = var(x)^{-1} cov(x, y).    (2.38)

[10] The covariance matrix between vectors x and z is cov(x, z) = E((x − E(x))(z − E(z))'). The (co)variance matrix of the vector x is var(x) = cov(x, x) = E((x − E(x))(x − E(x))').
2.20 Regression Sub-Vectors
Let the regressors be partitioned as

x = [x_1; x_2].    (2.39)
We can write the projection of y on x as

y = x'β + e
  = x_1'β_1 + x_2'β_2 + e    (2.40)
E(xe) = 0.

In this section we derive formulas for the sub-vectors β_1 and β_2.

Partition Q_xx conformably with x

Q_xx = [Q_11, Q_12; Q_21, Q_22] = [E(x_1 x_1'), E(x_1 x_2'); E(x_2 x_1'), E(x_2 x_2')]

and similarly Q_xy

Q_xy = [Q_1y; Q_2y] = [E(x_1 y); E(x_2 y)].

By the partitioned matrix inversion formula (A.4)

Q_xx^{-1} = [Q_11, Q_12; Q_21, Q_22]^{-1}
          ≡ [Q^11, Q^12; Q^21, Q^22]
          = [Q_{11·2}^{-1}, −Q_{11·2}^{-1} Q_12 Q_22^{-1}; −Q_{22·1}^{-1} Q_21 Q_11^{-1}, Q_{22·1}^{-1}]    (2.41)

where Q_{11·2} ≡ Q_11 − Q_12 Q_22^{-1} Q_21 and Q_{22·1} ≡ Q_22 − Q_21 Q_11^{-1} Q_12. Thus

β = [β_1; β_2]
  = [Q_{11·2}^{-1}, −Q_{11·2}^{-1} Q_12 Q_22^{-1}; −Q_{22·1}^{-1} Q_21 Q_11^{-1}, Q_{22·1}^{-1}] [Q_1y; Q_2y]
  = [Q_{11·2}^{-1} (Q_1y − Q_12 Q_22^{-1} Q_2y); Q_{22·1}^{-1} (Q_2y − Q_21 Q_11^{-1} Q_1y)]
  = [Q_{11·2}^{-1} Q_{1y·2}; Q_{22·1}^{-1} Q_{2y·1}].

We have shown that

β_1 = Q_{11·2}^{-1} Q_{1y·2}
β_2 = Q_{22·1}^{-1} Q_{2y·1}.
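The partitioned formula can be checked numerically. The sketch below (a hypothetical design with correlated regressors, assumed only for illustration) compares β_1 computed from Q_{11·2}^{-1} Q_{1y·2} with the first element of the full projection coefficient.

# A minimal numerical check of the sub-vector formula for beta_1.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
x2 = np.column_stack([rng.standard_normal(n), np.ones(n)])        # x_2 includes the constant
x1 = (0.6 * x2[:, 0] + rng.standard_normal(n)).reshape(-1, 1)     # x_1 correlated with x_2
x = np.hstack([x1, x2])
y = x @ np.array([0.5, -0.3, 1.0]) + rng.standard_normal(n)

Q = x.T @ x / n
Qxy = x.T @ y / n
Q11, Q12, Q21, Q22 = Q[:1, :1], Q[:1, 1:], Q[1:, :1], Q[1:, 1:]
Q1y, Q2y = Qxy[:1], Qxy[1:]

Q11_2 = Q11 - Q12 @ np.linalg.solve(Q22, Q21)
Q1y_2 = Q1y - Q12 @ np.linalg.solve(Q22, Q2y)
print("beta_1 from partitioned formula:", np.linalg.solve(Q11_2, Q1y_2))
print("beta_1 from full projection    :", np.linalg.solve(Q, Qxy)[0])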
2.21 Coefficient Decomposition
In the previous section we derived the formula for the coefficient sub-vectors β_1 and β_2. We now use these formulas to give a useful interpretation of the coefficients as obtained from an iterated projection.

Take equation (2.40) for the case dim(x_1) = 1 so that β_1 ∈ R:

y = x_1 β_1 + x_2'β_2 + e.    (2.42)

Now consider the projection of x_1 on x_2:

x_1 = x_2'γ_2 + u_1
E(x_2 u_1) = 0.

From (2.24) and (2.34), γ_2 = Q_22^{-1} Q_21 and E(u_1²) = Q_{11·2} = Q_11 − Q_12 Q_22^{-1} Q_21. We can also calculate that

E(u_1 y) = E((x_1 − γ_2'x_2) y) = E(x_1 y) − γ_2'E(x_2 y) = Q_1y − Q_12 Q_22^{-1} Q_2y = Q_{1y·2}.

We have found that

β_1 = Q_{11·2}^{-1} Q_{1y·2} = E(u_1 y) / E(u_1²),

the coefficient from the simple regression of y on u_1.

What this means is that in the multivariate projection equation (2.42), the coefficient β_1 equals the projection coefficient from a regression of y on u_1, the error from a projection of x_1 on the other regressors x_2. The error u_1 can be thought of as the component of x_1 which is not linearly explained by the other regressors. Thus the coefficient β_1 equals the linear effect of x_1 on y, after stripping out the effects of the other variables.

There was nothing special in the choice of the variable x_1. So this derivation applies symmetrically to all coefficients in a linear projection. Each coefficient equals the simple regression of y on the error from a projection of that regressor on all the other regressors. Each coefficient equals the linear effect of that variable on y, after linearly controlling for all the other regressors.
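The following sketch verifies this decomposition on simulated data with a hypothetical design (the coefficients and distributions are assumptions): the coefficient on x_1 from the full projection coincides with the simple regression of y on the residual u_1.

# A minimal simulation sketch of the coefficient decomposition.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x2 = np.column_stack([rng.standard_normal(n), np.ones(n)])
x1 = 0.6 * x2[:, 0] + rng.standard_normal(n)
x = np.column_stack([x1, x2])
y = 0.5 * x1 - 0.3 * x2[:, 0] + 1.0 + rng.standard_normal(n)

# error from the projection of x_1 on x_2
gamma2 = np.linalg.solve(x2.T @ x2, x2.T @ x1)
u1 = x1 - x2 @ gamma2

beta1_iterated = (u1 @ y) / (u1 @ u1)                # simple regression of y on u_1
beta1_joint = np.linalg.solve(x.T @ x, x.T @ y)[0]   # coefficient on x_1 in the full projection
print(beta1_iterated, beta1_joint)                   # the two coincide (up to rounding)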
2.22 Omitted Variable Bias
Again, let the regressors be partitioned as in (2.39). Consider the projection of j on i
1
only.
Perhaps this is done because the variables i
2
are not observed. This is the equation
j = i
t
1
_
1
÷n (2.43)
E(i
1
n) = 0
Notice that we have written the coe¢cient on i
1
as _
1
rather than d
1
and the error as n rather
than c. This is because (2.43) is di¤erent than (2.40). Goldberger (1991) introduced the catchy
labels long regression for (2.40) and short regression for (2.43) to emphasize the distinction.
Typically, d
1
,= _
1
, except in special cases. To see this, we calculate
_
1
=
_
E
_
i
1
i
t
1
__
÷1
E(i
1
j)
=
_
E
_
i
1
i
t
1
__
÷1
E
_
i
1
_
i
t
1
d
1
÷i
t
2
d
2
÷c
__
= d
1
÷
_
E
_
i
1
i
t
1
__
÷1
E
_
i
1
i
t
2
_
d
2
= d
1
÷Id
2
where
I =
_
E
_
i
1
i
t
1
__
÷1
E
_
i
1
i
t
2
_
is the coe¢cient matrix from a projection of i
2
on i
1
.
Observe that _
1
= d
1
÷Id
2
,= d
1
unless I = 0 or d
2
= 0. Thus the short and long regressions
have di¤erent coe¢cients on i
1
. They are the same only under one of two conditions. First, if the
projection of i
2
on i
1
yields a set of zero coe¢cients (they are uncorrelated), or second, if the
coe¢cient on i
2
in (2.40) is zero. In general, the coe¢cient in (2.43) is _
1
rather than d
1
. The
di¤erence Id
2
between _
1
and d
1
is known as omitted variable bias. It is the consequence of
omission of a relevant correlated variable.
To avoid omitted variables bias the standard advice is to include all potentially relevant variables
in estimated models. By construction, the general model will be free of such bias. Unfortunately
in many cases it is not feasible to completely follow this advice as many desired variables are
not observed. In this case, the possibility of omitted variables bias should be acknowledged and
discussed in the course of an empirical investigation.
For example, suppose j is log wages, r
1
is education, and r
2
is intellectual ability. It seems
reasonable to suppose that education and intellectual ability are positively correlated (highly able
individuals attain higher levels of education) which means I 0. It also seems reasonable to
suppose that conditional on education, individuals with higher intelligence will earn higher wages
CHAPTER 2. CONDITIONAL EXPECTATION AND PROJECTION 36
so that ,
2
0. This implies that I,
2
0 and ¸
1
= ,
1
÷I,
2
,
1
. Therefore, it seems reasonable to
expect that in a regression of wages on education with ability omitted, the coe¢cient on education
is higher than in a regression where ability is included. In other words, in this context the omitted
variable biases the regression coe¢cient upwards.
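A small simulation makes the direction of the bias concrete. The data-generating process below is purely illustrative (the coefficients and the positive education-ability correlation are assumptions chosen to mimic the discussion, not estimates from the text); it compares the long-regression coefficient on education with the short-regression coefficient when ability is dropped.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

ability = rng.normal(size=n)                     # x2: unobserved ability
educ = 12 + 2 * ability + rng.normal(size=n)     # x1: education, positively correlated with ability
logwage = 1.0 + 0.10 * educ + 0.20 * ability + rng.normal(scale=0.5, size=n)

# Long regression: log wage on (1, educ, ability)
X_long = np.column_stack([np.ones(n), educ, ability])
b_long = np.linalg.solve(X_long.T @ X_long, X_long.T @ logwage)

# Short regression: log wage on (1, educ) only
X_short = np.column_stack([np.ones(n), educ])
b_short = np.linalg.solve(X_short.T @ X_short, X_short.T @ logwage)

print("beta1 (long): ", b_long[1])    # close to 0.10
print("gamma1 (short):", b_short[1])  # larger: beta1 + Gamma*beta2, since Gamma > 0 and beta2 > 0
```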
2.23 Best Linear Approximation
There are alternative ways we could construct a linear approximation $x'\beta$ to the conditional mean $m(x)$. In this section we show that one alternative approach turns out to yield the same answer as the best linear predictor.

We start by defining the mean-square approximation error of $x'\beta$ to $m(x)$ as the expected squared difference between $x'\beta$ and the conditional mean $m(x)$:
$$
d(\beta) = \mathrm{E}\left(m(x) - x'\beta\right)^2. \qquad (2.44)
$$
The function $d(\beta)$ is a measure of the deviation of $x'\beta$ from $m(x)$. If the two functions are identical then $d(\beta) = 0$, otherwise $d(\beta) > 0$. We can also view the mean-square difference $d(\beta)$ as a density-weighted average of the function $\left(m(x) - x'\beta\right)^2$, since
$$
d(\beta) = \int_{\mathbb{R}^k} \left(m(x) - x'\beta\right)^2 f_x(x)\,dx
$$
where $f_x(x)$ is the marginal density of $x$.

We can then define the best linear approximation to the conditional mean $m(x)$ as the function $x'\beta$ obtained by selecting $\beta$ to minimize $d(\beta)$:
$$
\beta = \underset{\beta \in \mathbb{R}^k}{\operatorname{argmin}}\; d(\beta). \qquad (2.45)
$$
Similar to the best linear predictor we are measuring accuracy by expected squared error. The difference is that the best linear predictor (2.21) selects $\beta$ to minimize the expected squared prediction error, while the best linear approximation (2.45) selects $\beta$ to minimize the expected squared approximation error.

Despite the different definitions, it turns out that the best linear predictor and the best linear approximation are identical. By the same steps as in (2.17) plus an application of conditional expectations we can find that
$$
\beta = \left(\mathrm{E}\left(xx'\right)\right)^{-1} \mathrm{E}\left(x\, m(x)\right) \qquad (2.46)
$$
$$
\;\;= \left(\mathrm{E}\left(xx'\right)\right)^{-1} \mathrm{E}\left(xy\right) \qquad (2.47)
$$
(see Exercise 2.19). Thus (2.45) equals (2.21). We conclude that the definition (2.45) can be viewed as an alternative motivation for the linear projection coefficient.
2.24 Normal Regression
Suppose the variables $(y, x)$ are jointly normally distributed. Consider the best linear predictor of $y$ given $x$:
$$
y = x'\beta + e
$$
$$
\beta = \left(\mathrm{E}\left(xx'\right)\right)^{-1} \mathrm{E}(xy).
$$
Since the error $e$ is a linear transformation of the normal vector $(y, x)$, it follows that $(e, x)$ is jointly normal, and since they are jointly normal and uncorrelated (since $\mathrm{E}(xe) = 0$) they are also independent (see Appendix B.9). Independence implies that
$$
\mathrm{E}(e \mid x) = \mathrm{E}(e) = 0
$$
and
$$
\mathrm{E}\left(e^2 \mid x\right) = \mathrm{E}\left(e^2\right) = \sigma^2,
$$
which are properties of a homoskedastic linear CEF.

We have shown that when $(y, x)$ are jointly normally distributed, they satisfy a normal linear CEF
$$
y = x'\beta + e
$$
where
$$
e \sim \mathrm{N}(0, \sigma^2)
$$
is independent of $x$.

This is an alternative (and traditional) motivation for the linear CEF model. This motivation has limited merit in econometric applications since economic data is typically non-normal.
2.25 Regression to the Mean
The term regression originated in an influential paper by Francis Galton published in 1886, where he examined the joint distribution of the stature (height) of parents and children. Effectively, he was estimating the conditional mean of children's height given their parent's height. Galton discovered that this conditional mean was approximately linear with a slope of 2/3. This implies that on average a child's height is more mediocre (average) than his or her parent's height. Galton called this phenomenon regression to the mean, and the label regression has stuck to this day to describe most conditional relationships.

One of Galton's fundamental insights was to recognize that if the marginal distributions of $y$ and $x$ are the same (e.g. the heights of children and parents in a stable environment) then the regression slope in a linear projection is always less than one.

To be more precise, take the simple linear projection
$$
y = \alpha + x\beta + e \qquad (2.48)
$$
where $y$ equals the height of the child and $x$ equals the height of the parent. Assume that $y$ and $x$ have the same mean, so that $\mu_y = \mu_x = \mu$. Then from (2.37)
$$
\alpha = (1 - \beta)\,\mu
$$
so we can write the linear projection (2.48) as
$$
\mathscr{P}(y \mid x) = (1 - \beta)\,\mu + x\beta.
$$
This shows that the projected height of the child is a weighted average of the population average height $\mu$ and the parent's height $x$, with the weight equal to the regression slope $\beta$. When the height distribution is stable across generations, so that $\operatorname{var}(y) = \operatorname{var}(x)$, then this slope is the simple correlation of $y$ and $x$. Using (2.38)
$$
\beta = \frac{\operatorname{cov}(x, y)}{\operatorname{var}(x)} = \operatorname{corr}(x, y).
$$
By the properties of correlation (e.g. equation (B.7) in the Appendix), $-1 \le \operatorname{corr}(x, y) \le 1$, with $\operatorname{corr}(x, y) = 1$ only in the degenerate case $y = x$. Thus if we exclude degeneracy, $\beta$ is strictly less than 1.

This means that on average a child's height is more mediocre (closer to the population average) than the parent's.
Sir Francis Galton

Sir Francis Galton (1822-1911) of England was one of the leading figures in late 19th century statistics. In addition to inventing the concept of regression, he is credited with introducing the concepts of correlation, the standard deviation, and the bivariate normal distribution. His work on heredity made a significant intellectual advance by examining the joint distributions of observables, allowing the application of the tools of mathematical statistics to the social sciences.
A common error – known as the regression fallacy – is to infer from $\beta < 1$ that the population is converging, meaning that its variance is declining towards zero. This is a fallacy because we derived the implication $\beta < 1$ under the assumption of constant means and variances. So certainly $\beta < 1$ does not imply that the variance of $y$ is less than the variance of $x$.

Another way of seeing this is to examine the conditions for convergence in the context of equation (2.48). Since $x$ and $e$ are uncorrelated, it follows that
$$
\operatorname{var}(y) = \beta^2 \operatorname{var}(x) + \operatorname{var}(e).
$$
Then $\operatorname{var}(y) < \operatorname{var}(x)$ if and only if
$$
\beta^2 < 1 - \frac{\operatorname{var}(e)}{\operatorname{var}(x)},
$$
which is not implied by the simple condition $|\beta| < 1$.

The regression fallacy arises in related empirical situations. Suppose you sort families into groups by the heights of the parents, and then plot the average heights of each subsequent generation over time. If the population is stable, the regression property implies that the plotted lines will converge – children's heights will be more average than their parents'. The regression fallacy is to incorrectly conclude that the population is converging. A message to be learned from this example is that such plots are misleading for inferences about convergence.

The regression fallacy is subtle. It is easy for intelligent economists to succumb to its temptation. A famous example is The Triumph of Mediocrity in Business by Horace Secrist, published in 1933. In this book, Secrist carefully and with great detail documented that in a sample of department stores over 1920-1930, when he divided the stores into groups based on 1920-1921 profits, and plotted the average profits of these groups for the subsequent 10 years, he found clear and persuasive evidence for convergence "toward mediocrity". Of course, there was no discovery – regression to the mean is a necessary feature of stable distributions.
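The stable-population case is easy to simulate. In the sketch below (an illustrative bivariate normal parent-child model with equal means and variances; the correlation value is an assumption, not Galton's estimate), sorting by parent height and averaging child height within groups shows the group means pulled toward the population mean even though the variance is not shrinking.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
rho = 0.5                      # assumed parent-child correlation (illustrative)
mu, sd = 68.0, 3.0             # common mean and standard deviation (stable population)

parent = rng.normal(mu, sd, size=n)
child = mu + rho * (parent - mu) + np.sqrt(1 - rho**2) * rng.normal(0, sd, size=n)

print("var(parent):", parent.var(), " var(child):", child.var())  # essentially equal

# Group by parent height and compare group means across generations
tall = parent > mu + sd
short = parent < mu - sd
print("tall group:  parents", parent[tall].mean(), "-> children", child[tall].mean())
print("short group: parents", parent[short].mean(), "-> children", child[short].mean())
# Children's group means are closer to mu than their parents' -- regression to the mean,
# even though the population distribution is unchanged (no convergence).
```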
2.26 Reverse Regression
Galton noticed another interesting feature of the bivariate distribution. There is nothing special about a regression of $y$ on $x$. We can also regress $x$ on $y$. (In his heredity example this is the best linear predictor of the height of parents given the height of their children.) This regression takes the form
$$
x = \alpha^* + y\beta^* + e^*. \qquad (2.49)
$$
This is sometimes called the reverse regression. In this equation, the coefficients $\alpha^*$, $\beta^*$ and error $e^*$ are defined by linear projection. In a stable population we find that
$$
\beta^* = \operatorname{corr}(x, y) = \beta
$$
$$
\alpha^* = (1 - \beta)\,\mu = \alpha,
$$
which are exactly the same as in the projection of $y$ on $x$! The intercept and slope have exactly the same values in the forward and reverse projections!

While this algebraic discovery is quite simple, it is counter-intuitive. Instead, a common yet mistaken guess for the form of the reverse regression is to take the equation (2.48), divide through by $\beta$ and rewrite to find the equation
$$
x = -\frac{\alpha}{\beta} + y\,\frac{1}{\beta} - \frac{1}{\beta}\,e \qquad (2.50)
$$
suggesting that the projection of $x$ on $y$ should have a slope coefficient of $1/\beta$ instead of $\beta$, and intercept of $-\alpha/\beta$ rather than $\alpha$. What went wrong? Equation (2.50) is perfectly valid, because it is a simple manipulation of the valid equation (2.48). The trouble is that (2.50) is neither a CEF nor a linear projection. Inverting a projection (or CEF) does not yield a projection (or CEF). Instead, (2.49) is a valid projection, not (2.50).

In any event, Galton's finding was that when the variables are standardized, the slope in both projections ($y$ on $x$, and $x$ on $y$) equals the correlation, and both equations exhibit regression to the mean. It is not a causal relation, but a natural feature of all joint distributions.
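The following sketch checks this numerically on simulated standardized data (the joint distribution is an illustrative assumption): the least-squares slope from regressing $y$ on $x$ and the slope from regressing $x$ on $y$ both equal the sample correlation, and neither equals the reciprocal of the other.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
rho = 0.6  # illustrative correlation

x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)   # both standardized, corr = rho

slope_y_on_x = np.cov(x, y)[0, 1] / x.var()
slope_x_on_y = np.cov(x, y)[0, 1] / y.var()

print(slope_y_on_x, slope_x_on_y, np.corrcoef(x, y)[0, 1])
# Both projection slopes are approximately rho (not 1/rho): regression to the mean in each direction.
```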
2.27 Limitations of the Best Linear Predictor
Let's compare the linear projection and linear CEF models.

From Theorem 2.8.1.4 we know that the CEF error has the property $\mathrm{E}(xe) = 0$. Thus a linear CEF is a linear projection. However, the converse is not true, as the projection error does not necessarily satisfy $\mathrm{E}(e \mid x) = 0$. Furthermore, the linear projection may be a poor approximation to the CEF.

To see these points in a simple example, suppose that the true CEF is $y = x + x^2$ and $x \sim \mathrm{N}(0, 1)$. In this case the true CEF is $m(x) = x + x^2$ and there is no error. Now consider the linear projection of $y$ on $x$ and an intercept, namely the model $y = \alpha + \beta x + u$. Since $x$ and $x^2$ are uncorrelated, the linear projection takes the form $\mathscr{P}(y \mid x) = 1 + x$. This is quite different from the true CEF $m(x) = x + x^2$. The projection error equals $e = x^2 - 1$, which is a deterministic function of $x$, yet is uncorrelated with $x$. We see in this example that a projection error need not be a CEF error, and a linear projection can be a poor approximation to the CEF.
Figure 2.9: Conditional Mean and Two Linear Projections
Another defect of linear projection is that it is sensitive to the marginal distribution of the regressors when the conditional mean is non-linear. We illustrate the issue in Figure 2.9 for a constructed¹¹ joint distribution of $y$ and $x$. The solid line is the non-linear CEF of $y$ given $x$. The data are divided in two – Group 1 and Group 2 – which have different marginal distributions for the regressor $x$, and Group 1 has a lower mean value of $x$ than Group 2. The separate linear projections of $y$ on $x$ for these two groups are displayed in the Figure by the dashed lines. These two projections are distinct approximations to the CEF. A defect with linear projection is that it leads to the incorrect conclusion that the effect of $x$ on $y$ is different for individuals in the two groups. This conclusion is incorrect because in fact there is no difference in the conditional mean function. The apparent difference is a by-product of a linear approximation to a non-linear mean, combined with different marginal distributions for the conditioning variables.
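Both points are easy to reproduce numerically. The sketch below simulates the $y = x + x^2$ example and, for the second point, reuses the Figure 2.9 design described in footnote 11 ($x \sim \mathrm{N}(2,1)$ in Group 1 and $\mathrm{N}(4,1)$ in Group 2, with conditional mean $m(x) = 2x - x^2/6$); the seeds and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# Example 1: true CEF y = x + x^2 with x ~ N(0,1); projection of y on (1, x) is 1 + x
x = rng.normal(size=n)
y = x + x**2
X = np.column_stack([np.ones(n), x])
print(np.linalg.solve(X.T @ X, X.T @ y))   # approximately [1, 1]

# Example 2: same CEF, different regressor distributions (design from footnote 11)
def group_fit(mean):
    xg = rng.normal(mean, 1.0, size=n)
    yg = 2 * xg - xg**2 / 6 + rng.normal(size=n)   # m(x) = 2x - x^2/6, unit error variance
    Xg = np.column_stack([np.ones(n), xg])
    return np.linalg.solve(Xg.T @ Xg, Xg.T @ yg)

print("Group 1 projection:", group_fit(2.0))   # steeper fitted slope
print("Group 2 projection:", group_fit(4.0))   # flatter fitted slope, despite the identical CEF
```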
2.28 Random Coefficient Model

A model which is notationally similar to but conceptually distinct from the linear CEF model is the linear random coefficient model. It takes the form
$$
y = x'\eta
$$
where the individual-specific coefficient $\eta$ is random and independent of $x$. For example, if $x$ is years of schooling and $y$ is log wages, then $\eta$ is the individual-specific returns to schooling. If a person obtains an extra year of schooling, $\eta$ is the actual change in their wage. The random coefficient model allows the returns to schooling to vary in the population. Some individuals might have a high return to education (a high $\eta$) and others a low return, possibly 0, or even negative.

In the linear CEF model the regressor coefficient equals the regression derivative – the change in the conditional mean due to a change in the regressors, $\beta = \nabla m(x)$. This is not the effect on a given individual, it is the effect on the population average. In contrast, in the random coefficient model, the random vector $\eta = \nabla\left(x'\eta\right)$ is the true causal effect – the change in the response variable $y$ itself due to a change in the regressors.

It is interesting, however, to discover that the linear random coefficient model implies a linear CEF. To see this, let $\beta$ and $\Sigma$ denote the mean and covariance matrix of $\eta$:
$$
\beta = \mathrm{E}(\eta)
$$
$$
\Sigma = \operatorname{var}(\eta)
$$
and then decompose the random coefficient as
$$
\eta = \beta + u
$$
where $u$ is distributed independently of $x$ with mean zero and covariance matrix $\Sigma$. Then we can write
$$
\mathrm{E}(y \mid x) = x'\mathrm{E}(\eta \mid x) = x'\mathrm{E}(\eta) = x'\beta
$$
so the CEF is linear in $x$, and the coefficients $\beta$ equal the mean of the random coefficient $\eta$.

We can thus write the equation as a linear CEF
$$
y = x'\beta + e \qquad (2.51)
$$
where $e = x'u$ and $u = \eta - \beta$. The error is conditionally mean zero:
$$
\mathrm{E}(e \mid x) = 0.
$$
¹¹The $x$ in Group 1 are $\mathrm{N}(2,1)$ and those in Group 2 are $\mathrm{N}(4,1)$, and the conditional distribution of $y$ given $x$ is $\mathrm{N}(m(x), 1)$ where $m(x) = 2x - x^2/6$.
Furthermore
$$
\operatorname{var}(e \mid x) = x'\operatorname{var}(\eta)\,x = x'\Sigma x
$$
so the error is conditionally heteroskedastic with its variance a quadratic function of $x$.

Theorem 2.28.1 In the linear random coefficient model $y = x'\eta$ with $\eta$ independent of $x$, $\mathrm{E}\|x\|^2 < \infty$, and $\mathrm{E}\|\eta\|^2 < \infty$, then
$$
\mathrm{E}(y \mid x) = x'\beta
$$
$$
\operatorname{var}(y \mid x) = x'\Sigma x
$$
where $\beta = \mathrm{E}(\eta)$ and $\Sigma = \operatorname{var}(\eta)$.
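Theorem 2.28.1 can be checked by simulation. The design below is an illustrative assumption (a two-regressor model with a normally distributed random coefficient); it compares the conditional mean and conditional variance of $y$ at a fixed $x$ with $x'\beta$ and $x'\Sigma x$.

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 1_000_000

beta = np.array([1.0, 0.5])                    # E(eta)
Sigma = np.array([[0.4, 0.1], [0.1, 0.2]])     # var(eta)
x0 = np.array([1.0, 2.0])                      # evaluate the CEF at a fixed x

eta = rng.multivariate_normal(beta, Sigma, size=reps)   # random coefficients, independent of x
y = eta @ x0                                            # y = x'eta at x = x0

print("E(y|x):  ", y.mean(), " vs x'beta:   ", x0 @ beta)
print("var(y|x):", y.var(),  " vs x'Sigma x:", x0 @ Sigma @ x0)
```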
2.29 Causal Effects

So far we have avoided the concept of causality, yet often the underlying goal of an econometric analysis is to uncover a causal relationship between variables. It is often of great interest to understand the causes and effects of decisions, actions, and policies. For example, we may be interested in the effect of class sizes on test scores, police expenditures on crime rates, climate change on economic activity, years of schooling on wages, institutional structure on growth, the effectiveness of rewards on behavior, the consequences of medical procedures for health outcomes, or any variety of possible causal relationships. In each case, the goal is to understand what is the actual effect on the outcome $y$ due to a change in the input $x$. We are not just interested in the conditional mean or linear projection; we would like to know the actual change. The causal effect is typically specific to an individual, and also cannot be directly observed.

For example, the causal effect of schooling on wages is the actual difference a person would receive in wages if we could change their level of education. The causal effect of a medical treatment is the actual difference in an individual's health outcome, comparing treatment versus non-treatment. In both cases the effects are individual and unobservable. For example, suppose that Jennifer would have earned $10 an hour as a high-school graduate and $20 an hour as a college graduate, while George would have earned $8 as a high-school graduate and $12 as a college graduate. In this example the causal effect of schooling is $10 an hour for Jennifer and $4 an hour for George. Furthermore, the causal effect is unobserved as we only observe the wage corresponding to the actual outcome.

A variable $x_1$ can be said to have a causal effect on the response variable $y$ if the latter changes when all other inputs are held constant. To make this precise we need a mathematical formulation. We can write a full model for the response variable $y$ as
$$
y = h(x_1, x_2, u) \qquad (2.52)
$$
where $x_1$ and $x_2$ are the observed variables, $u$ is an $\ell \times 1$ unobserved random factor, and $h$ is a functional relationship. This framework includes as a special case the random coefficient model (2.28) studied earlier. We define the causal effect of $x_1$ within this model as the change in $y$ due to a change in $x_1$ holding the other variables $x_2$ and $u$ constant.
Definition 2.29.1 In the model (2.52) the causal effect of $x_1$ on $y$ is
$$
C(x_1, x_2, u) = \nabla_1\, h(x_1, x_2, u), \qquad (2.53)
$$
the change in $y$ due to a change in $x_1$, holding $x_2$ and $u$ constant.

To understand this concept, imagine taking a single individual. As far as our structural model is concerned, this person is described by their observables $x_1$ and $x_2$, and their unobservables $u$. In a wage regression the unobservables would include characteristics such as the person's abilities, skills, work ethic, interpersonal connections, and preferences. The causal effect of $x_1$ (say, education) is the change in the wage as $x_1$ changes, holding constant all other observables and unobservables.

It may be helpful to understand that (2.53) is a definition, and does not necessarily describe causality in a fundamental or experimental sense. Perhaps it would be more appropriate to label (2.53) as a structural effect (the effect within the structural model).

Sometimes it is useful to write this relationship as a potential outcome function
$$
y(x_1) = h(x_1, x_2, u)
$$
where the notation implies that $y(x_1)$ is holding $x_2$ and $u$ constant.

A popular example arises in the analysis of treatment effects with a binary regressor $x_1$. Let $x_1 = 1$ indicate treatment (e.g. a medical procedure) and $x_1 = 0$ indicate non-treatment. In this case $y(x_1)$ can be written
$$
y(0) = h(0, x_2, u)
$$
$$
y(1) = h(1, x_2, u).
$$
In the literature on treatment effects, it is common to refer to $y(0)$ and $y(1)$ as the latent outcomes associated with non-treatment and treatment, respectively. That is, for a given individual, $y(0)$ is the health outcome if there is no treatment, and $y(1)$ is the health outcome if there is treatment.

The causal effect of treatment for the individual is the change in their health outcome due to treatment – the change in $y$ as we hold both $x_2$ and $u$ constant:
$$
C(x_2, u) = y(1) - y(0).
$$
This is random (a function of $x_2$ and $u$) as both potential outcomes $y(0)$ and $y(1)$ are different across individuals.
In a sample, we cannot observe both outcomes from the same individual; we only observe the realized value
$$
y = \begin{cases} y(0) & \text{if } x_1 = 0 \\ y(1) & \text{if } x_1 = 1. \end{cases}
$$
As the causal effect varies across individuals and is not observable, it cannot be measured on the individual level. We therefore focus on aggregate causal effects, in particular what is known as the average causal effect.

Definition 2.29.2 In the model (2.52) the average causal effect of $x_1$ on $y$ conditional on $x_2$ is
$$
ACE(x_1, x_2) = \mathrm{E}\left(C(x_1, x_2, u) \mid x_1, x_2\right) \qquad (2.54)
$$
$$
= \int_{\mathbb{R}^{\ell}} \nabla_1\, h(x_1, x_2, u)\, f(u \mid x_1, x_2)\, du
$$
where $f(u \mid x_1, x_2)$ is the conditional density of $u$ given $x_1, x_2$.
We can think of the average causal effect $ACE(x_1, x_2)$ as the average effect in the general population. In our Jennifer & George schooling example given earlier, supposing that half of the population are Jennifers and the other half Georges, then the average causal effect of college is $(10 + 4)/2 = \$7$ an hour.

What is the relationship between the average causal effect $ACE(x_1, x_2)$ and the regression derivative $\nabla_1 m(x_1, x_2)$? Equation (2.52) implies that the CEF is
$$
m(x_1, x_2) = \mathrm{E}\left(h(x_1, x_2, u) \mid x_1, x_2\right)
= \int_{\mathbb{R}^{\ell}} h(x_1, x_2, u)\, f(u \mid x_1, x_2)\, du,
$$
the average causal equation, averaged over the conditional distribution of the unobserved component $u$.
Applying the marginal effect operator, the regression derivative is
$$
\nabla_1 m(x_1, x_2) = \int_{\mathbb{R}^{\ell}} \nabla_1 h(x_1, x_2, u)\, f(u \mid x_1, x_2)\, du
+ \int_{\mathbb{R}^{\ell}} h(x_1, x_2, u)\, \nabla_1 f(u \mid x_1, x_2)\, du
$$
$$
= ACE(x_1, x_2) + \int_{\mathbb{R}^{\ell}} h(x_1, x_2, u)\, \nabla_1 f(u \mid x_1, x_2)\, du. \qquad (2.55)
$$
In general, the average causal effect is not the regression derivative. However, they are equal when the second component in (2.55) is zero. This occurs when $\nabla_1 f(u \mid x_1, x_2) = 0$, that is, when the conditional density of $u$ given $(x_1, x_2)$ does not depend on $x_1$. The condition is sufficiently important that it has a special name in the treatment effects literature.

Definition 2.29.3 Conditional Independence Assumption (CIA). Conditional on $x_2$, the random variables $x_1$ and $u$ are statistically independent.

The CIA implies that $f(u \mid x_1, x_2) = f(u \mid x_2)$ does not depend on $x_1$, and thus $\nabla_1 f(u \mid x_1, x_2) = 0$. Thus the CIA implies that $\nabla_1 m(x_1, x_2) = ACE(x_1, x_2)$: the regression derivative equals the average causal effect.
Theorem 2.29.1 In the structural model (2.52), the Conditional Independence Assumption implies
$$
\nabla_1 m(x_1, x_2) = ACE(x_1, x_2),
$$
the regression derivative equals the average causal effect for $x_1$ on $y$ conditional on $x_2$.

This is a fascinating result. It shows that whenever the unobservable is independent of the treatment variable (after conditioning on appropriate regressors) the regression derivative equals the average causal effect. In this case, the CEF has causal economic meaning, giving strong justification to estimation of the CEF. Our derivation also shows the critical role of the CIA. If the CIA fails, then the equality of the regression derivative and ACE fails.

This theorem is quite general. It applies equally to the treatment-effects model where $x_1$ is binary or to more general settings where $x_1$ is continuous.

It is also helpful to understand that the CIA is weaker than full independence of $u$ from the regressors $(x_1, x_2)$. The CIA was introduced precisely as a minimal sufficient condition to obtain the desired result. Full independence implies the CIA and implies that each regression derivative equals that variable's average causal effect, but full independence is not necessary in order to causally interpret a subset of the regressors.
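The role of the CIA can be seen in a short treatment-effects simulation. The design is an illustrative assumption (a binary treatment $x_1$, one covariate $x_2$, and an unobservable $u$), and the conditional means are approximated by averaging within bins of $x_2$: when treatment is assigned using only $x_2$ the regression contrast recovers the average causal effect, and when $u$ also drives treatment it does not.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

x2 = rng.normal(size=n)
u = rng.normal(size=n)
y0 = 1.0 + 0.5 * x2 + u          # potential outcome without treatment
y1 = y0 + 2.0                    # potential outcome with treatment: true causal effect = 2

def regression_contrast(x1):
    """Approximate E(y|x1=1,x2) - E(y|x1=0,x2), averaged over bins of x2."""
    y = np.where(x1 == 1, y1, y0)                       # realized outcome
    bins = np.digitize(x2, np.linspace(-2, 2, 21))
    diffs = [y[(x1 == 1) & (bins == b)].mean() - y[(x1 == 0) & (bins == b)].mean()
             for b in range(1, 21)]
    return np.mean(diffs)

# CIA holds: treatment depends only on x2 (plus noise independent of u)
x1_cia = (x2 + rng.normal(size=n) > 0).astype(int)
# CIA fails: treatment also depends on the unobservable u
x1_fail = (x2 + u + rng.normal(size=n) > 0).astype(int)

print("CIA holds:", regression_contrast(x1_cia))    # close to 2
print("CIA fails:", regression_contrast(x1_fail))   # biased away from 2
```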
2.30 Expectation: Mathematical Details*
We define the mean or expectation $\mathrm{E}y$ of a random variable $y$ as follows. If $y$ is discrete on the set $\{\tau_1, \tau_2, \ldots\}$ then
$$
\mathrm{E}y = \sum_{j=1}^{\infty} \tau_j \Pr\left(y = \tau_j\right),
$$
and if $y$ is continuous with density $f$ then
$$
\mathrm{E}y = \int_{-\infty}^{\infty} y f(y)\, dy.
$$
We can unify these definitions by writing the expectation as the Lebesgue integral with respect to the distribution function $F$:
$$
\mathrm{E}y = \int_{-\infty}^{\infty} y\, dF(y). \qquad (2.56)
$$
In the event that the integral (2.56) is not finite, separately evaluate the two integrals
$$
I_1 = \int_{0}^{\infty} y\, dF(y) \qquad (2.57)
$$
$$
I_2 = -\int_{-\infty}^{0} y\, dF(y). \qquad (2.58)
$$
If $I_1 = \infty$ and $I_2 < \infty$ then it is typical to define $\mathrm{E}y = \infty$. If $I_1 < \infty$ and $I_2 = \infty$ then we define $\mathrm{E}y = -\infty$. However, if both $I_1 = \infty$ and $I_2 = \infty$ then $\mathrm{E}y$ is undefined. If
$$
\mathrm{E}|y| = \int_{-\infty}^{\infty} |y|\, dF(y) = I_1 + I_2 < \infty
$$
then $\mathrm{E}y$ exists and is finite. In this case it is common to say that the mean $\mathrm{E}y$ is "well-defined".

More generally, $y$ has a finite $r$'th moment if
$$
\mathrm{E}|y|^r < \infty. \qquad (2.59)
$$
By Liapunov's Inequality (B.20), (2.59) implies $\mathrm{E}|y|^s < \infty$ for all $s \le r$. Thus, for example, if the fourth moment is finite then the first, second and third moments are also finite.

It is common in econometric theory to assume that the variables, or certain transformations of the variables, have finite moments of a certain order. How should we interpret this assumption? How restrictive is it?

One way to visualize the importance is to consider the class of Pareto densities given by
$$
f(y) = a y^{-a-1}, \qquad y > 1.
$$
The parameter $a$ of the Pareto distribution indexes the rate of decay of the tail of the density. Larger $a$ means that the tail declines to zero more quickly. See the figure below where we show the Pareto density for $a = 1$ and $a = 2$. The parameter $a$ also determines which moments are finite. We can calculate that
$$
\mathrm{E}|y|^r =
\begin{cases}
a \displaystyle\int_{1}^{\infty} y^{r-a-1}\, dy = \dfrac{a}{a - r} & \text{if } r < a \\
\infty & \text{if } r \ge a.
\end{cases}
$$
This shows that if $y$ is Pareto distributed with parameter $a$, then the $r$'th moment of $y$ is finite if and only if $r < a$. Higher $a$ means higher finite moments. Equivalently, the faster the tail of the density declines to zero, the more moments are finite.
[Figure: Pareto Densities, $a = 1$ and $a = 2$]
This connection between tail decay and finite moments is not limited to the Pareto distribution. We can make a similar analysis using a tail bound. Suppose that $y$ has density $f(y)$ which satisfies the bound $f(y) \le A|y|^{-a-1}$ for some $A < \infty$ and $a > 0$. Since $f(y)$ is bounded below a scale of a Pareto density, its tail behavior is similarly bounded. This means that for $r < a$
$$
\mathrm{E}|y|^r = \int_{-\infty}^{\infty} |y|^r f(y)\, dy
\le \int_{-1}^{1} f(y)\, dy + 2A \int_{1}^{\infty} y^{r-a-1}\, dy
\le 1 + \frac{2A}{a - r} < \infty.
$$
Thus if the tail of the density declines at the rate $|y|^{-a-1}$ or faster, then $y$ has finite moments up to (but not including) $a$. Broadly speaking, the restriction that $y$ has a finite $r$'th moment means that the tail of $y$'s density declines to zero faster than $y^{-r-1}$. The faster the tail declines, the smaller is the probability of observing an extreme value of $y$.
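A quick Monte Carlo illustrates the moment condition (this sketch simulates Pareto draws by inverse-transform sampling; the sample sizes and seed are arbitrary): for $a = 2$ the sample mean of $y$ stabilizes near the theoretical value $a/(a-1) = 2$, while the sample second moment keeps growing with the sample size because $\mathrm{E}y^2 = \infty$.

```python
import numpy as np

rng = np.random.default_rng(7)
a = 2.0  # Pareto tail index: moments of order r < 2 are finite, r >= 2 are not

def pareto_draws(n):
    # Inverse-transform sampling: F(y) = 1 - y^(-a) for y > 1, so y = U^(-1/a)
    return rng.uniform(size=n) ** (-1.0 / a)

for n in (10**4, 10**5, 10**6, 10**7):
    y = pareto_draws(n)
    print(f"n={n:>8}  mean={y.mean():8.3f}  mean of y^2={np.mean(y**2):12.1f}")
# The first moment settles near a/(a-1)=2; the second moment does not settle (E y^2 is infinite).
```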
We complete this section by adding an alternative representation of expectation in terms of the distribution function.

Theorem 2.30.1 For any non-negative random variable $y$,
$$
\mathrm{E}y = \int_{0}^{\infty} \Pr\left(y > u\right) du.
$$

Proof of Theorem 2.30.1: Let $F^*(x) = \Pr\left(y > x\right) = 1 - F(x)$, where $F(x)$ is the distribution function. By integration by parts
$$
\mathrm{E}y = \int_{0}^{\infty} y\, dF(y) = -\int_{0}^{\infty} y\, dF^*(y)
= -\left[y F^*(y)\right]_{0}^{\infty} + \int_{0}^{\infty} F^*(y)\, dy
= \int_{0}^{\infty} \Pr\left(y > u\right) du,
$$
as stated. $\blacksquare$
2.31 Existence and Uniqueness of the Conditional Expectation*
In Sections 2.3 and 2.6 we defined the conditional mean when the conditioning variables $x$ are discrete and when the variables $(y, x)$ have a joint density. We have explored these cases because these are the situations where the conditional mean is easiest to describe and understand. However, the conditional mean exists quite generally without appealing to the properties of either discrete or continuous random variables.

To justify this claim we now present a deep result from probability theory. What it says is that the conditional mean exists for all joint distributions $(y, x)$ for which $y$ has a finite mean.

Theorem 2.31.1 Existence of the Conditional Mean
If $\mathrm{E}|y| < \infty$ then there exists a function $m(x)$ such that for all measurable sets $\mathcal{X}$,
$$
\mathrm{E}\left(1\left(x \in \mathcal{X}\right) y\right) = \mathrm{E}\left(1\left(x \in \mathcal{X}\right) m(x)\right). \qquad (2.60)
$$
The function $m(x)$ is almost everywhere unique, in the sense that if $h(x)$ satisfies (2.60), then there is a set $S$ such that $\Pr(S) = 1$ and $m(x) = h(x)$ for $x \in S$. The function $m(x)$ is called the conditional mean and is written $m(x) = \mathrm{E}(y \mid x)$.

See, for example, Ash (1972), Theorem 6.3.3.

The conditional mean $m(x)$ defined by (2.60) specializes to (2.7) when $(y, x)$ have a joint density. The usefulness of definition (2.60) is that Theorem 2.31.1 shows that the conditional mean $m(x)$ exists for all finite-mean distributions. This definition allows $y$ to be discrete or continuous, for $x$ to be scalar or vector-valued, and for the components of $x$ to be discrete or continuously distributed.
2.32 Identification*

A critical and important issue in structural econometric modeling is identification, meaning that a parameter is uniquely determined by the distribution of the observed variables. It is relatively straightforward in the context of the unconditional and conditional mean, but it is worthwhile to introduce and explore the concept at this point for clarity.

Let $F$ denote the distribution of the observed data, for example the distribution of the pair $(y, x)$. Let $\mathcal{F}$ be a collection of distributions $F$. Let $\theta$ be a parameter of interest (for example, the mean $\mathrm{E}y$).

Definition 2.32.1 A parameter $\theta \in \mathbb{R}$ is identified on $\mathcal{F}$ if for all $F \in \mathcal{F}$, there is a uniquely determined value of $\theta$.

Equivalently, $\theta$ is identified if we can write it as a mapping $\theta = g(F)$ on the set $\mathcal{F}$. The restriction to the set $\mathcal{F}$ is important. Most parameters are identified only on a strict subset of the space of all distributions.

Take, for example, the mean $\mu = \mathrm{E}y$. It is uniquely determined if $\mathrm{E}|y| < \infty$, so it is clear that $\mu$ is identified for the set $\mathcal{F} = \left\{F : \int_{-\infty}^{\infty} |y|\, dF(y) < \infty\right\}$. However, $\mu$ is also well defined when it is either positive or negative infinity. Hence, defining $I_1$ and $I_2$ as in (2.57) and (2.58), we can deduce that $\mu$ is identified on the set $\mathcal{F} = \left\{F : \{I_1 < \infty\} \cup \{I_2 < \infty\}\right\}$.

Next, consider the conditional mean. Theorem 2.31.1 demonstrates that $\mathrm{E}|y| < \infty$ is a sufficient condition for identification.
Theorem 2.32.1 Identification of the Conditional Mean
If $\mathrm{E}|y| < \infty$, the conditional mean $m(x) = \mathrm{E}(y \mid x)$ is identified almost everywhere.

It might seem as if identification is a general property for parameters, so long as we exclude degenerate cases. This is true for moments of observed data, but not necessarily for more complicated models. As a case in point, consider the context of censoring. Let $y$ be a random variable with distribution $F$. Instead of observing $y$, we observe $y^*$ defined by the censoring rule
$$
y^* = \begin{cases} y & \text{if } y \le \tau \\ \tau & \text{if } y > \tau. \end{cases}
$$
That is, $y^*$ is capped at the value $\tau$. A common example is income surveys, where income responses are "top-coded", meaning that incomes above the top code $\tau$ are recorded as equaling the top code. The observed variable $y^*$ has distribution
$$
F^*(u) = \begin{cases} F(u) & \text{for } u < \tau \\ 1 & \text{for } u \ge \tau. \end{cases}
$$
We are interested in features of the distribution $F$, not the censored distribution $F^*$. For example, we are interested in the mean wage $\mu = \mathrm{E}(y)$. The difficulty is that we cannot calculate $\mu$ from $F^*$ except in the trivial case where there is no censoring, $\Pr\left(y \ge \tau\right) = 0$. Thus the mean $\mu$ is not generically identified from the censored distribution.

A typical solution to the identification problem is to assume a parametric distribution. For example, let $\mathcal{F}$ be the set of normal distributions $y \sim \mathrm{N}(\mu, \sigma^2)$. It is possible to show that the parameters $(\mu, \sigma^2)$ are identified for all $F \in \mathcal{F}$. That is, if we know that the uncensored distribution is normal, we can uniquely determine the parameters from the censored distribution. This is often called parametric identification as identification is restricted to a parametric class of distributions. In modern econometrics this is generally viewed as a second-best solution, as identification has been achieved only through the use of an arbitrary and unverifiable parametric assumption.

A pessimistic conclusion might be that it is impossible to identify parameters of interest from censored data without parametric assumptions. Interestingly, this pessimism is unwarranted. It turns out that we can identify the quantiles $q_\alpha$ of $F$ for $\alpha \le \Pr\left(y \le \tau\right)$. For example, if 20% of the distribution is censored, we can identify all quantiles for $\alpha \in (0, 0.8)$. This is often called nonparametric identification as the parameters are identified without restriction to a parametric class.

What we have learned from this little exercise is that in the context of censored data, moments can only be parametrically identified, while (non-censored) quantiles are nonparametrically identified. Part of the message is that a study of identification can help focus attention on what can be learned from the data distributions available.
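The censoring example can be illustrated with a short simulation (the log-normal income distribution and the top-code value below are arbitrary illustrative assumptions): the sample mean of the top-coded data is biased for the true mean, while sample quantiles below the censoring probability match the uncensored quantiles.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1_000_000

income = np.exp(rng.normal(10.5, 0.8, size=n))   # "true" incomes (illustrative log-normal)
tau = np.quantile(income, 0.80)                  # top code at the 80th percentile
topcoded = np.minimum(income, tau)               # observed, censored data

print("true mean:    ", income.mean())
print("censored mean:", topcoded.mean())         # biased downward: the mean is not identified

for q in (0.25, 0.50, 0.75):                     # quantiles below 0.80 are unaffected by censoring
    print(f"q={q}: true {np.quantile(income, q):10.1f}  censored {np.quantile(topcoded, q):10.1f}")
```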
2.33 Technical Proofs*
Proof of Theorem 2.7.1: For convenience, assume that the variables have a joint density $f(y, x)$. Since $\mathrm{E}(y \mid x)$ is a function of the random vector $x$ only, to calculate its expectation we integrate with respect to the density $f_x(x)$ of $x$, that is
$$
\mathrm{E}\left(\mathrm{E}(y \mid x)\right) = \int_{\mathbb{R}^k} \mathrm{E}(y \mid x)\, f_x(x)\, dx.
$$
Substituting in (2.7) and noting that $f_{y|x}(y \mid x)\, f_x(x) = f(y, x)$, we find that the above expression equals
$$
\int_{\mathbb{R}^k} \left(\int_{\mathbb{R}} y\, f_{y|x}(y \mid x)\, dy\right) f_x(x)\, dx
= \int_{\mathbb{R}^k} \int_{\mathbb{R}} y\, f(y, x)\, dy\, dx = \mathrm{E}(y),
$$
the unconditional mean of $y$. $\blacksquare$
Proof of Theorem 2.7.2: Again assume that the variables have a joint density. It is useful to observe that
$$
f(y \mid x_1, x_2)\, f(x_2 \mid x_1)
= \frac{f(y, x_1, x_2)}{f(x_1, x_2)} \cdot \frac{f(x_1, x_2)}{f(x_1)}
= f(y, x_2 \mid x_1), \qquad (2.61)
$$
the density of $(y, x_2)$ given $x_1$. Here, we have abused notation and used a single symbol $f$ to denote the various unconditional and conditional densities to reduce notational clutter.

Note that
$$
\mathrm{E}(y \mid x_1, x_2) = \int_{\mathbb{R}} y\, f(y \mid x_1, x_2)\, dy. \qquad (2.62)
$$
Integrating (2.62) with respect to the conditional density of $x_2$ given $x_1$, and applying (2.61), we find that
$$
\mathrm{E}\left(\mathrm{E}(y \mid x_1, x_2) \mid x_1\right)
= \int_{\mathbb{R}^{k_2}} \mathrm{E}(y \mid x_1, x_2)\, f(x_2 \mid x_1)\, dx_2
$$
$$
= \int_{\mathbb{R}^{k_2}} \left(\int_{\mathbb{R}} y\, f(y \mid x_1, x_2)\, dy\right) f(x_2 \mid x_1)\, dx_2
$$
$$
= \int_{\mathbb{R}^{k_2}} \int_{\mathbb{R}} y\, f(y \mid x_1, x_2)\, f(x_2 \mid x_1)\, dy\, dx_2
$$
$$
= \int_{\mathbb{R}^{k_2}} \int_{\mathbb{R}} y\, f(y, x_2 \mid x_1)\, dy\, dx_2
= \mathrm{E}(y \mid x_1),
$$
as stated. $\blacksquare$
Proof of Theorem 2.7.3:
$$
\mathrm{E}\left(g(x)\, y \mid x\right) = \int_{\mathbb{R}} g(x)\, y\, f_{y|x}(y \mid x)\, dy
= g(x) \int_{\mathbb{R}} y\, f_{y|x}(y \mid x)\, dy = g(x)\, \mathrm{E}(y \mid x).
$$
This is (2.9). The assumption that $\mathrm{E}\left|g(x)\, y\right| < \infty$ is required for the first equality to be well-defined. Equation (2.10) follows by applying the Simple Law of Iterated Expectations to (2.9). $\blacksquare$
Proof of Theorem 2.9.2: The assumption that $\mathrm{E}y^2 < \infty$ implies that all the conditional expectations below exist.

Set $z = \mathrm{E}(y \mid x_1, x_2)$. By the conditional Jensen's inequality (B.13),
$$
\left(\mathrm{E}(z \mid x_1)\right)^2 \le \mathrm{E}\left(z^2 \mid x_1\right).
$$
Taking unconditional expectations, this implies
$$
\mathrm{E}\left(\mathrm{E}(y \mid x_1)\right)^2 \le \mathrm{E}\left(\left(\mathrm{E}(y \mid x_1, x_2)\right)^2\right).
$$
Similarly,
$$
\left(\mathrm{E}y\right)^2 \le \mathrm{E}\left(\left(\mathrm{E}(y \mid x_1)\right)^2\right) \le \mathrm{E}\left(\left(\mathrm{E}(y \mid x_1, x_2)\right)^2\right). \qquad (2.63)
$$
The variables $y$, $\mathrm{E}(y \mid x_1)$ and $\mathrm{E}(y \mid x_1, x_2)$ all have the same mean $\mathrm{E}y$, so the inequality (2.63) implies that the variances are ranked monotonically:
$$
0 \le \operatorname{var}\left(\mathrm{E}(y \mid x_1)\right) \le \operatorname{var}\left(\mathrm{E}(y \mid x_1, x_2)\right). \qquad (2.64)
$$
Next, for $\mu = \mathrm{E}y$ observe that
$$
\mathrm{E}\left(y - \mathrm{E}(y \mid x)\right)\left(\mathrm{E}(y \mid x) - \mu\right) = 0,
$$
so the decomposition
$$
y - \mu = \left(y - \mathrm{E}(y \mid x)\right) + \left(\mathrm{E}(y \mid x) - \mu\right)
$$
satisfies
$$
\operatorname{var}(y) = \operatorname{var}\left(y - \mathrm{E}(y \mid x)\right) + \operatorname{var}\left(\mathrm{E}(y \mid x)\right). \qquad (2.65)
$$
The monotonicity of the variances of the conditional mean (2.64) applied to the variance decomposition (2.65) implies the reverse monotonicity of the variances of the differences, completing the proof. $\blacksquare$
Proof of Theorem 2.8.1: Applying Minkowski's Inequality (B.19) to $e = y - m(x)$,
$$
\left(\mathrm{E}|e|^r\right)^{1/r} = \left(\mathrm{E}\left|y - m(x)\right|^r\right)^{1/r}
\le \left(\mathrm{E}|y|^r\right)^{1/r} + \left(\mathrm{E}\left|m(x)\right|^r\right)^{1/r} < \infty,
$$
where the two parts on the right-hand side are finite since $\mathrm{E}|y|^r < \infty$ by assumption and $\mathrm{E}\left|m(x)\right|^r < \infty$ by the Conditional Expectation Inequality (B.14). The fact that $\left(\mathrm{E}|e|^r\right)^{1/r} < \infty$ implies $\mathrm{E}|e|^r < \infty$. $\blacksquare$
Proof of Theorem 2.17.1: For part 1, by the Expectation Inequality (B.15), (A.9) and Assumption 2.17.1,
$$
\left\| \mathrm{E}\left(xx'\right) \right\| \le \mathrm{E}\left\| xx' \right\| = \mathrm{E}\|x\|^2 < \infty.
$$
Similarly, using the Expectation Inequality (B.15), the Cauchy-Schwarz Inequality (B.17) and Assumption 2.17.1,
$$
\left\| \mathrm{E}(xy) \right\| \le \mathrm{E}\|xy\| = \left(\mathrm{E}\|x\|^2\right)^{1/2}\left(\mathrm{E}y^2\right)^{1/2} < \infty.
$$
Thus the moments $\mathrm{E}(xy)$ and $\mathrm{E}\left(xx'\right)$ are finite and well defined.

For part 2, the coefficient $\beta = \left(\mathrm{E}\left(xx'\right)\right)^{-1}\mathrm{E}(xy)$ is well defined since $\left(\mathrm{E}\left(xx'\right)\right)^{-1}$ exists under Assumption 2.17.1.

Part 3 follows from Definition 2.17.1 and part 2.

For part 4, first note that
$$
\mathrm{E}e^2 = \mathrm{E}\left(y - x'\beta\right)^2
= \mathrm{E}y^2 - 2\mathrm{E}\left(yx'\right)\beta + \beta'\mathrm{E}\left(xx'\right)\beta
= \mathrm{E}y^2 - \mathrm{E}\left(yx'\right)\left(\mathrm{E}\left(xx'\right)\right)^{-1}\mathrm{E}(xy)
\le \mathrm{E}y^2 < \infty.
$$
The first inequality holds because $\mathrm{E}\left(yx'\right)\left(\mathrm{E}\left(xx'\right)\right)^{-1}\mathrm{E}(xy)$ is a quadratic form and therefore necessarily non-negative. Second, by the Expectation Inequality (B.15), the Cauchy-Schwarz Inequality (B.17) and Assumption 2.17.1,
$$
\left\| \mathrm{E}(xe) \right\| \le \mathrm{E}\|xe\| = \left(\mathrm{E}\|x\|^2\right)^{1/2}\left(\mathrm{E}e^2\right)^{1/2} < \infty.
$$
It follows that the expectation $\mathrm{E}(xe)$ is finite, and is zero by the calculation (2.28).

For part 6, applying Minkowski's Inequality (B.19) to $e = y - x'\beta$,
$$
\left(\mathrm{E}|e|^r\right)^{1/r} = \left(\mathrm{E}\left|y - x'\beta\right|^r\right)^{1/r}
\le \left(\mathrm{E}|y|^r\right)^{1/r} + \left(\mathrm{E}\left|x'\beta\right|^r\right)^{1/r}
\le \left(\mathrm{E}|y|^r\right)^{1/r} + \left(\mathrm{E}\|x\|^r\right)^{1/r}\|\beta\| < \infty,
$$
the final inequality by assumption. $\blacksquare$
Exercises
Exercise 2.1 Find $\mathrm{E}\left(\mathrm{E}\left(\mathrm{E}\left(y \mid x_1, x_2, x_3\right) \mid x_1, x_2\right) \mid x_1\right)$.

Exercise 2.2 If $\mathrm{E}(y \mid x) = a + bx$, find $\mathrm{E}(yx)$ as a function of moments of $x$.

Exercise 2.3 Prove Theorem 2.8.1.4 using the law of iterated expectations.

Exercise 2.4 Suppose that the random variables $y$ and $x$ only take the values 0 and 1, and have the following joint probability distribution

              x = 0    x = 1
     y = 0     .1       .2
     y = 1     .4       .3

Find $\mathrm{E}(y \mid x)$, $\mathrm{E}\left(y^2 \mid x\right)$ and $\operatorname{var}(y \mid x)$ for $x = 0$ and $x = 1$.
Exercise 2.5 Show that $\sigma^2(x)$ is the best predictor of $e^2$ given $x$:

(a) Write down the mean-squared error of a predictor $h(x)$ for $e^2$.

(b) What does it mean to be predicting $e^2$?

(c) Show that $\sigma^2(x)$ minimizes the mean-squared error and is thus the best predictor.

Exercise 2.6 Use $y = m(x) + e$ to show that
$$
\operatorname{var}(y) = \operatorname{var}\left(m(x)\right) + \sigma^2.
$$

Exercise 2.7 Show that the conditional variance can be written as
$$
\sigma^2(x) = \mathrm{E}\left(y^2 \mid x\right) - \left(\mathrm{E}(y \mid x)\right)^2.
$$

Exercise 2.8 Suppose that $y$ is discrete-valued, taking values only on the non-negative integers, and the conditional distribution of $y$ given $x$ is Poisson:
$$
\Pr\left(y = j \mid x\right) = \frac{\exp\left(-x'\beta\right)\left(x'\beta\right)^j}{j!}, \qquad j = 0, 1, 2, \ldots
$$
Compute $\mathrm{E}(y \mid x)$ and $\operatorname{var}(y \mid x)$. Does this justify a linear regression model of the form $y = x'\beta + e$?

Hint: If $\Pr(y = j) = \dfrac{\exp(-\lambda)\lambda^j}{j!}$, then $\mathrm{E}y = \lambda$ and $\operatorname{var}(y) = \lambda$.
Exercise 2.9 Suppose you have two regressors: $x_1$ is binary (takes values 0 and 1) and $x_2$ is categorical with 3 categories ($A$, $B$, $C$). Write $\mathrm{E}(y \mid x_1, x_2)$ as a linear regression.

Exercise 2.10 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $\mathrm{E}(e \mid x) = 0$, then $\mathrm{E}\left(x^2 e\right) = 0$.

Exercise 2.11 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $\mathrm{E}(xe) = 0$, then $\mathrm{E}\left(x^2 e\right) = 0$.

Exercise 2.12 True or False. If $y = x'\beta + e$ and $\mathrm{E}(e \mid x) = 0$, then $e$ is independent of $x$.

Exercise 2.13 True or False. If $y = x'\beta + e$ and $\mathrm{E}(xe) = 0$, then $\mathrm{E}(e \mid x) = 0$.

Exercise 2.14 True or False. If $y = x'\beta + e$, $\mathrm{E}(e \mid x) = 0$, and $\mathrm{E}\left(e^2 \mid x\right) = \sigma^2$, a constant, then $e$ is independent of $x$.

Exercise 2.15 Consider the intercept-only model $y = \alpha + e$ defined as the best linear predictor. Show that $\alpha = \mathrm{E}(y)$.

Exercise 2.16 Let $x$ and $y$ have the joint density $f(x, y) = \tfrac{3}{2}\left(x^2 + y^2\right)$ on $0 \le x \le 1$, $0 \le y \le 1$. Compute the coefficients of the best linear predictor $y = \alpha + \beta x + e$. Compute the conditional mean $m(x) = \mathrm{E}(y \mid x)$. Are the best linear predictor and conditional mean different?
Exercise 2.17 Let $x$ be a random variable with $\mu = \mathrm{E}x$ and $\sigma^2 = \operatorname{var}(x)$. Define
$$
g\left(x \mid \mu, \sigma^2\right) = \begin{pmatrix} x - \mu \\ (x - \mu)^2 - \sigma^2 \end{pmatrix}.
$$
Show that $\mathrm{E}\,g\left(x \mid m, s\right) = 0$ if and only if $m = \mu$ and $s = \sigma^2$.

Exercise 2.18 Suppose that
$$
x = \begin{pmatrix} 1 \\ x_2 \\ x_3 \end{pmatrix}
$$
and $x_3 = \alpha_1 + \alpha_2 x_2$ is a linear function of $x_2$.

(a) Show that $Q_{xx} = \mathrm{E}\left(xx'\right)$ is not invertible.

(b) Use a linear transformation of $x$ to find an expression for the best linear predictor of $y$ given $x$. (Be explicit, do not just use the generalized inverse formula.)

Exercise 2.19 Show (2.46)-(2.47), namely that for
$$
d(\beta) = \mathrm{E}\left(m(x) - x'\beta\right)^2
$$
then
$$
\beta = \underset{\beta \in \mathbb{R}^k}{\operatorname{argmin}}\; d(\beta)
= \left(\mathrm{E}\left(xx'\right)\right)^{-1}\mathrm{E}\left(x\,m(x)\right)
= \left(\mathrm{E}\left(xx'\right)\right)^{-1}\mathrm{E}\left(xy\right).
$$
Hint: To show $\mathrm{E}\left(x\,m(x)\right) = \mathrm{E}(xy)$, use the law of iterated expectations.
Chapter 3
The Algebra of Least Squares
3.1 Introduction
In this chapter we introduce the popular least-squares estimator. Most of the discussion will be
algebraic, with questions of distribution and inference deferred to later chapters.
3.2 Random Samples
In Section 2.17 we derived and discussed the best linear predictor of $y$ given $x$ for a pair of random variables $(y, x) \in \mathbb{R} \times \mathbb{R}^k$, and called this the linear projection model. We are now interested in estimating the parameters of this model, in particular the projection coefficient
$$
\beta = \left(\mathrm{E}\left(xx'\right)\right)^{-1} \mathrm{E}(xy).
$$
We can estimate $\beta$ from observational data which includes joint measurements on the variables $(y, x)$. For example, supposing we are interested in estimating a wage equation, we would use a dataset with observations on wages (or weekly earnings), education, experience (or age), and demographic characteristics (gender, race, location). One possible dataset is the Current Population Survey (CPS), a survey of U.S. households which includes questions on employment, income, education, and demographic characteristics.

Notationally we wish to emphasize when we are discussing observations. Typically in econometrics we denote observations by appending a subscript $i$ which runs from 1 to $n$; thus the $i$'th observation is $(y_i, x_i)$, and $n$ denotes the sample size. The dataset is then $\{(y_i, x_i);\ i = 1, \ldots, n\}$.

From the viewpoint of empirical analysis, a dataset is an array of numbers often organized as a table, where the columns of the table correspond to distinct variables and the rows correspond to distinct observations. For empirical analysis, the dataset and observations are fixed in the sense that they are numbers presented to the researcher. For statistical analysis we need to view the dataset as random, or more precisely as a realization of a random process. For cross-sectional studies, the most common approach is to treat the individual observations as independent draws from an underlying population $F$. When the observations are realizations of independent and identically distributed random variables, we say that the data is a random sample.
Assumption 3.2.1 The observations $\{(y_1, x_1), \ldots, (y_i, x_i), \ldots, (y_n, x_n)\}$ are a random sample.

With a random sample, the ordering of the data is irrelevant. There is nothing special about any specific observation or ordering. You can permute the order of the observations and no information is gained or lost.
As most economic data sets are not literally the result of a random experiment, the random sampling framework is best viewed as an approximation rather than being literally true.

The linear projection model applies to the random observations $(y_i, x_i)$. This means that the probability model for the observations is the same as that described in Section 2.17. We can write the model as
$$
y_i = x_i'\beta + e_i \qquad (3.1)
$$
where the linear projection coefficient $\beta$ is defined as
$$
\beta = \underset{\beta \in \mathbb{R}^k}{\operatorname{argmin}}\; S(\beta), \qquad (3.2)
$$
the minimizer of the expected squared error
$$
S(\beta) = \mathrm{E}\left(y_i - x_i'\beta\right)^2, \qquad (3.3)
$$
and has the explicit solution
$$
\beta = \left(\mathrm{E}\left(x_i x_i'\right)\right)^{-1} \mathrm{E}\left(x_i y_i\right). \qquad (3.4)
$$
3.3 Least Squares Estimator
When a parameter is defined as the minimizer of a function as in (3.2), a standard approach to estimation is to construct an empirical analog of the function, and define the estimator of the parameter as the minimizer of the empirical function.

The empirical analog of the expected squared error (3.3) is the sample average squared error
$$
S_n(\beta) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2 = \frac{1}{n}\,SSE_n(\beta) \qquad (3.5)
$$
where
$$
SSE_n(\beta) = \sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2
$$
is called the sum-of-squared-errors function.

An estimator for $\beta$ is the minimizer of (3.5):
$$
\widehat{\beta} = \underset{\beta \in \mathbb{R}^k}{\operatorname{argmin}}\; S_n(\beta).
$$
Alternatively, as $S_n(\beta)$ is a scale multiple of $SSE_n(\beta)$, we may equivalently define $\widehat{\beta}$ as the minimizer of $SSE_n(\beta)$. Hence $\widehat{\beta}$ is commonly called the least-squares (LS) (or ordinary least squares (OLS)) estimator of $\beta$. Here, as is common in econometrics, we put a hat "^" over the parameter $\beta$ to indicate that $\widehat{\beta}$ is a sample estimate of $\beta$. This is a helpful convention, as just by seeing the symbol $\widehat{\beta}$ we can immediately interpret it as an estimator (because of the hat), and as an estimator of a parameter labelled $\beta$. Sometimes when we want to be explicit about the estimation method, we will write $\widehat{\beta}_{\mathrm{ols}}$ to signify that it is the OLS estimator. It is also common to see the notation $\widehat{\beta}_{n}$, where the subscript "$n$" indicates that the estimator depends on the sample size $n$.

It is important to understand the distinction between population parameters such as $\beta$ and sample estimates such as $\widehat{\beta}$. The population parameter $\beta$ is a non-random feature of the population while the sample estimate $\widehat{\beta}$ is a random feature of a random sample. $\beta$ is fixed, while $\widehat{\beta}$ varies across samples.

To visualize the quadratic function $S_n(\beta)$, Figure 3.1 displays an example sum-of-squared-errors function $SSE_n(\beta)$ for the case $k = 2$. The least-squares estimator $\widehat{\beta}$ is the pair $(\widehat{\beta}_1, \widehat{\beta}_2)$ minimizing this function.
Figure 3.1: Sum-of-Squared Errors Function
3.4 Solving for Least Squares with One Regressor
For simplicity, we start by considering the case $k = 1$ so that the coefficient $\beta$ is a scalar. Then the sum of squared errors is a simple quadratic
$$
SSE_n(\beta) = \sum_{i=1}^{n}\left(y_i - x_i\beta\right)^2
= \left(\sum_{i=1}^{n} y_i^2\right) - 2\beta\left(\sum_{i=1}^{n} x_i y_i\right) + \beta^2\left(\sum_{i=1}^{n} x_i^2\right).
$$
The OLS estimator $\widehat{\beta}$ minimizes this function. From elementary algebra we know that the minimizer of the quadratic function $a - 2bx + cx^2$ is $x = b/c$. Thus the minimizer of $SSE_n(\beta)$ is
$$
\widehat{\beta} = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}. \qquad (3.6)
$$
The intercept-only model is the special case $x_i = 1$. In this case we find
$$
\widehat{\beta} = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} 1} = \frac{1}{n}\sum_{i=1}^{n} y_i = \bar{y}, \qquad (3.7)
$$
the sample mean of $y_i$. Here, as is common, we put a bar over $y$ to indicate that the quantity is a sample mean. This calculation shows that the OLS estimator in the intercept-only model is the sample mean.
3.5 Solving for Least Squares with Multiple Regressors
We now consider the case with $k \ge 1$ so that the coefficient $\beta$ is a vector.

To solve for $\widehat{\beta}$, expand the SSE function to find
$$
SSE_n(\beta) = \sum_{i=1}^{n} y_i^2 - 2\beta'\sum_{i=1}^{n} x_i y_i + \beta'\sum_{i=1}^{n} x_i x_i'\,\beta.
$$
This is a quadratic expression in the vector argument $\beta$. The first-order condition for minimization of $SSE_n(\beta)$ is
$$
0 = \frac{\partial}{\partial \beta} SSE_n(\widehat{\beta}) = -2\sum_{i=1}^{n} x_i y_i + 2\sum_{i=1}^{n} x_i x_i'\,\widehat{\beta}. \qquad (3.8)
$$
We have written this using a single expression, but it is actually a system of $k$ equations with $k$ unknowns (the elements of $\widehat{\beta}$).

The solution for $\widehat{\beta}$ may be found by solving the system of $k$ equations in (3.8). We can write this solution compactly using matrix algebra. Inverting the $k \times k$ matrix $\sum_{i=1}^{n} x_i x_i'$ we find an explicit formula for the least-squares estimator
$$
\widehat{\beta} = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right). \qquad (3.9)
$$
This is the natural estimator of the best linear projection coefficient $\beta$ defined in (3.2), and can also be called the linear projection estimator.

We see that (3.9) simplifies to the expression (3.6) when $k = 1$. The expression (3.9) is a notationally simple generalization but requires careful attention to vector and matrix manipulations.

Alternatively, equation (3.4) writes the projection coefficient $\beta$ as an explicit function of the population moments $Q_{xy}$ and $Q_{xx}$. Their moment estimators are the sample moments
$$
\widehat{Q}_{xy} = \frac{1}{n}\sum_{i=1}^{n} x_i y_i
$$
$$
\widehat{Q}_{xx} = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'.
$$
The moment estimator of $\beta$ replaces the population moments in (3.4) with the sample moments:
$$
\widehat{\beta} = \widehat{Q}_{xx}^{-1}\widehat{Q}_{xy}
= \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i y_i\right)
= \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right),
$$
which is identical with (3.9).
Least Squares Estimation

Definition 3.5.1 The least-squares estimator $\widehat{\beta}$ is
$$
\widehat{\beta} = \underset{\beta \in \mathbb{R}^k}{\operatorname{argmin}}\; S_n(\beta)
$$
where
$$
S_n(\beta) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2
$$
and has the solution
$$
\widehat{\beta} = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right).
$$
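Definition 3.5.1 translates directly into a few lines of code. The sketch below (simulated data with arbitrary coefficients, purely for illustration) computes $\widehat{\beta}$ from the sample moments and checks it against a library least-squares routine.

```python
import numpy as np

rng = np.random.default_rng(10)
n, k = 1000, 3

X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # regressors with an intercept
beta_true = np.array([1.0, 2.0, -0.5])                          # illustrative coefficients
y = X @ beta_true + rng.normal(size=n)

# Least-squares estimator: (sum x_i x_i')^{-1} (sum x_i y_i)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against numpy's least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat, beta_lstsq)   # identical up to numerical precision
```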
Adrien-Marie Legendre

The method of least-squares was first published in 1805 by the French mathematician Adrien-Marie Legendre (1752-1833). Legendre proposed least-squares as a solution to the algebraic problem of solving a system of equations when the number of equations exceeded the number of unknowns. This was a vexing and common problem in astronomical measurement. As viewed by Legendre, (3.1) is a set of $n$ equations with $k$ unknowns. As the equations cannot be solved exactly, Legendre's goal was to select $\beta$ to make the set of errors as small as possible. He proposed the sum of squared error criterion, and derived the algebraic solution presented above. As he noted, the first-order conditions (3.8) are a system of $k$ equations with $k$ unknowns, which can be solved by "ordinary" methods. Hence the method became known as Ordinary Least Squares and to this day we still use the abbreviation OLS to refer to Legendre's estimation method.
3.6 Illustration
We illustrate the least-squares estimator in practice with the data set used to generate the estimates from Chapter 2. This is the March 2009 Current Population Survey, which has extensive information on the U.S. population. This data set is described in more detail in Section ?. For this illustration, we use the sub-sample of non-white married non-military female wage earners with 12 years potential work experience. This sub-sample has 61 observations. Let $y_i$ be log wages and $x_i$ be an intercept and years of education. Then
$$
\frac{1}{n}\sum_{i=1}^{n} x_i y_i = \begin{pmatrix} 3.025 \\ 47.447 \end{pmatrix}
$$
and
$$
\frac{1}{n}\sum_{i=1}^{n} x_i x_i' = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}.
$$
Thus
$$
\widehat{\beta} = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1}\begin{pmatrix} 3.025 \\ 47.447 \end{pmatrix} = \begin{pmatrix} 0.626 \\ 0.156 \end{pmatrix}. \qquad (3.10)
$$
We often write the estimated equation using the format
$$
\widehat{\log(Wage)} = 0.626 + 0.156\ education. \qquad (3.11)
$$
An interpretation of the estimated equation is that each year of education is associated with a 16% increase in mean wages.

Equation (3.11) is called a bivariate regression as there are only two variables. A multivariate regression has two or more regressors, and allows a more detailed investigation. Let's redo the example, but now including all levels of experience. This expanded sample includes 2454 observations. Including as regressors years of experience and its square ($experience^2/100$, where we divide by 100 to simplify reporting), we obtain the estimates
$$
\widehat{\log(Wage)} = 1.06 + 0.116\ education + 0.010\ experience - 0.014\ experience^2/100. \qquad (3.12)
$$
These estimates suggest a 12% increase in mean wages per year of education, holding experience constant.
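The 2-by-2 system in (3.10) can be solved directly, which is a useful check on the arithmetic. This sketch uses only the sample moments reported above (no access to the underlying CPS file is assumed).

```python
import numpy as np

# Sample moments from the 61-observation sub-sample, as reported in the text
Qxy = np.array([3.025, 47.447])          # (1/n) sum x_i y_i
Qxx = np.array([[1.0, 15.426],
                [15.426, 243.0]])        # (1/n) sum x_i x_i'

beta_hat = np.linalg.solve(Qxx, Qxy)
print(beta_hat)   # approximately [0.626, 0.156], matching equation (3.11)
```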
3.7 Least Squares Residuals
As a by-product of estimation, we define the fitted value
$$
\widehat{y}_i = x_i'\widehat{\beta}
$$
and the residual
$$
\widehat{e}_i = y_i - \widehat{y}_i = y_i - x_i'\widehat{\beta}. \qquad (3.13)
$$
Sometimes $\widehat{y}_i$ is called the predicted value, but this is a misleading label. The fitted value $\widehat{y}_i$ is a function of the entire sample, including $y_i$, and thus cannot be interpreted as a valid prediction of $y_i$. It is thus more accurate to describe $\widehat{y}_i$ as a fitted value rather than a predicted value.

Note that $y_i = \widehat{y}_i + \widehat{e}_i$ and
$$
y_i = x_i'\widehat{\beta} + \widehat{e}_i. \qquad (3.14)
$$
We make a distinction between the error $e_i$ and the residual $\widehat{e}_i$. The error $e_i$ is unobservable while the residual $\widehat{e}_i$ is a by-product of estimation. These two variables are frequently mislabeled, which can cause confusion.

Equation (3.8) implies that
$$
\sum_{i=1}^{n} x_i \widehat{e}_i = 0. \qquad (3.15)
$$
To see this by a direct calculation, using (3.13) and (3.9),
$$
\sum_{i=1}^{n} x_i \widehat{e}_i = \sum_{i=1}^{n} x_i\left(y_i - x_i'\widehat{\beta}\right)
= \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i x_i'\,\widehat{\beta}
= \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i x_i'\left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right)
= \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i y_i = 0.
$$
When $x_i$ contains a constant, an implication of (3.15) is
$$
\frac{1}{n}\sum_{i=1}^{n} \widehat{e}_i = 0. \qquad (3.16)
$$
Thus the residuals have a sample mean of zero and the sample correlation between the regressors and the residuals is zero. These are algebraic results, and hold true for all linear regression estimates.
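Equations (3.15)-(3.16) are algebraic identities, so they hold for any data set to machine precision. The short check below (on arbitrary simulated data) verifies that the regressors are exactly orthogonal to the residuals and that the residual mean is zero when an intercept is included.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # includes a constant
y = rng.normal(size=n)                                      # any y works: the result is algebraic

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat

print(X.T @ e_hat)     # (3.15): a vector of zeros (up to rounding error)
print(e_hat.mean())    # (3.16): zero, because X contains a constant
```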
3.8 Model in Matrix Notation
For many purposes, including computation, it is convenient to write the model and statistics in matrix notation. The linear equation (2.26) is a system of $n$ equations, one for each observation. We can stack these $n$ equations together as
$$
y_1 = x_1'\beta + e_1
$$
$$
y_2 = x_2'\beta + e_2
$$
$$
\vdots
$$
$$
y_n = x_n'\beta + e_n.
$$
Now define
$$
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad
X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}, \qquad
e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}.
$$
Observe that $y$ and $e$ are $n \times 1$ vectors, and $X$ is an $n \times k$ matrix. Then the system of $n$ equations can be compactly written in the single equation
$$
y = X\beta + e. \qquad (3.17)
$$
Sample sums can be written in matrix notation. For example,
$$
\sum_{i=1}^{n} x_i x_i' = X'X, \qquad \sum_{i=1}^{n} x_i y_i = X'y.
$$
Therefore the least-squares estimator can be written as
$$
\widehat{\beta} = \left(X'X\right)^{-1}\left(X'y\right). \qquad (3.18)
$$
The matrix version of (3.14) and estimated version of (3.17) is
$$
y = X\widehat{\beta} + \widehat{e},
$$
or equivalently the residual vector is
$$
\widehat{e} = y - X\widehat{\beta}.
$$
Using the residual vector, we can write (3.15) as
$$
X'\widehat{e} = 0. \qquad (3.20)
$$
Using matrix notation we have simple expressions for most estimators. This is particularly convenient for computer programming, as most languages allow matrix notation and manipulation.

Important Matrix Expressions
$$
y = X\beta + e
$$
$$
\widehat{\beta} = \left(X'X\right)^{-1}\left(X'y\right)
$$
$$
\widehat{e} = y - X\widehat{\beta}
$$
$$
X'\widehat{e} = 0.
$$

Early Use of Matrices
The earliest known treatment of the use of matrix methods to solve simultaneous systems is found in Chapter 8 of the Chinese text The Nine Chapters on the Mathematical Art, written by several generations of scholars from the 10th to 2nd century BCE.
3.9 Projection Matrix
Define the matrix
$$
P = X\left(X'X\right)^{-1}X'.
$$
Observe that
$$
PX = X\left(X'X\right)^{-1}X'X = X.
$$
This is a property of a projection matrix. More generally, for any matrix $Z$ which can be written as $Z = X\Gamma$ for some matrix $\Gamma$ (we say that $Z$ lies in the range space of $X$), then
$$
PZ = PX\Gamma = X\left(X'X\right)^{-1}X'X\Gamma = X\Gamma = Z.
$$
As an important example, if we partition the matrix $X$ into two matrices $X_1$ and $X_2$ so that
$$
X = [X_1\ \ X_2],
$$
then $PX_1 = X_1$. (See Exercise 3.7.)

The matrix $P$ is symmetric and idempotent¹. To see that it is symmetric,
$$
P' = \left(X\left(X'X\right)^{-1}X'\right)'
= \left(X'\right)'\left(\left(X'X\right)^{-1}\right)'\left(X\right)'
= X\left(\left(X'X\right)'\right)^{-1}X'
= X\left(\left(X\right)'\left(X'\right)'\right)^{-1}X' = P.
$$
To establish that it is idempotent, the fact that $PX = X$ implies that
$$
PP = PX\left(X'X\right)^{-1}X' = X\left(X'X\right)^{-1}X' = P.
$$
The matrix $P$ has the property that it creates the fitted values in a least-squares regression:
$$
Py = X\left(X'X\right)^{-1}X'y = X\widehat{\beta} = \widehat{y}.
$$
Because of this property, $P$ is also known as the "hat matrix".

¹A matrix $P$ is symmetric if $P' = P$. A matrix $P$ is idempotent if $PP = P$. See Appendix A.8.

A special example of a projection matrix occurs when $X = \mathbf{1}$ is an $n$-vector of ones. Then
$$
P_1 = \mathbf{1}\left(\mathbf{1}'\mathbf{1}\right)^{-1}\mathbf{1}' = \frac{1}{n}\mathbf{1}\mathbf{1}'.
$$
Note that
$$
P_1 y = \mathbf{1}\left(\mathbf{1}'\mathbf{1}\right)^{-1}\mathbf{1}'y = \mathbf{1}\bar{y}
$$
creates an $n$-vector whose elements are the sample mean $\bar{y}$ of $y_i$.

The $i$'th diagonal element of $P = X\left(X'X\right)^{-1}X'$ is
$$
h_{ii} = x_i'\left(X'X\right)^{-1}x_i, \qquad (3.21)
$$
which is called the leverage of the $i$'th observation.

Some useful properties of the matrix $P$ and the leverage values $h_{ii}$ are now summarized.

Theorem 3.9.1
$$
\sum_{i=1}^{n} h_{ii} = \operatorname{tr} P = k \qquad (3.22)
$$
and
$$
0 \le h_{ii} \le 1. \qquad (3.23)
$$
To show (3.22),
$$
\operatorname{tr} P = \operatorname{tr}\left(X\left(X'X\right)^{-1}X'\right)
= \operatorname{tr}\left(\left(X'X\right)^{-1}X'X\right)
= \operatorname{tr}\left(I_k\right) = k.
$$
See Appendix A.4 for the definition and properties of the trace operator. The proof of (3.23) is deferred to Section 3.18.
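The properties in this section are easy to verify numerically. The sketch below (an arbitrary simulated design matrix) checks that $P$ is symmetric and idempotent, that $Py$ reproduces the fitted values, that the leverage values lie in $[0,1]$, and that they sum to $k$ as in (3.22).

```python
import numpy as np

rng = np.random.default_rng(12)
n, k = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(P, P.T))          # symmetric
print(np.allclose(P @ P, P))        # idempotent
print(np.allclose(P @ y, X @ np.linalg.solve(X.T @ X, X.T @ y)))  # P y = fitted values

h = np.diag(P)                      # leverage values h_ii
print(h.min() >= 0, h.max() <= 1)   # each leverage lies in [0, 1]
print(h.sum())                      # equals k = 4, as in (3.22)
```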
3.10 Orthogonal Projection
Define
$$
M = I_n - P = I_n - X\left(X'X\right)^{-1}X'
$$
where $I_n$ is the $n \times n$ identity matrix. Note that
$$
MX = \left(I_n - P\right)X = X - PX = X - X = 0.
$$
Thus $M$ and $X$ are orthogonal. We call $M$ an orthogonal projection matrix or an annihilator matrix due to the property that for any matrix $Z$ in the range space of $X$,
$$
MZ = Z - PZ = 0.
$$
For example, $MX_1 = 0$ for any subcomponent $X_1$ of $X$, and $MP = 0$ (see Exercise 3.7).

The orthogonal projection matrix $M$ has many similar properties with $P$, including that $M$ is symmetric ($M' = M$) and idempotent ($MM = M$). Similarly to (3.22) we can calculate
$$
\operatorname{tr} M = n - k. \qquad (3.24)
$$
(See Exercise 3.9.) While $P$ creates fitted values, $M$ creates least-squares residuals:
$$
My = y - Py = y - X\widehat{\beta} = \widehat{e}. \qquad (3.25)
$$
As discussed in the previous section, a special example of a projection matrix occurs when $X = \mathbf{1}$ is an $n$-vector of ones, so that $P_1 = \mathbf{1}\left(\mathbf{1}'\mathbf{1}\right)^{-1}\mathbf{1}'$. Similarly, set
$$
M_1 = I_n - P_1 = I_n - \mathbf{1}\left(\mathbf{1}'\mathbf{1}\right)^{-1}\mathbf{1}'.
$$
While $P_1$ creates a vector of sample means, $M_1$ creates demeaned values:
$$
M_1 y = y - \mathbf{1}\bar{y}.
$$
For simplicity we will often write the right-hand side as $y - \bar{y}$. The $i$'th element is $y_i - \bar{y}$, the demeaned value of $y_i$.

We can also use (3.25) to write an alternative expression for the residual vector. Substituting $y = X\beta + e$ into $\widehat{e} = My$ and using $MX = 0$ we find
$$
\widehat{e} = My = M\left(X\beta + e\right) = Me, \qquad (3.26)
$$
which is free of dependence on the regression coefficient $\beta$.
3.11 Estimation of Error Variance
The error variance $\sigma^2 = \mathrm{E}e_i^2$ is a moment, so a natural estimator is a moment estimator. If the $e_i$ were observed we would estimate $\sigma^2$ by
$$
\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} e_i^2. \qquad (3.27)
$$
However, this is infeasible as $e_i$ is not observed. In this case it is common to take a two-step approach to estimation. The residuals $\widehat{e}_i$ are calculated in the first step, and then we substitute $\widehat{e}_i$ for $e_i$ in expression (3.27) to obtain the feasible estimator
$$
\widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} \widehat{e}_i^2. \qquad (3.28)
$$
In matrix notation, we can write (3.27) and (3.28) as
$$
\tilde{\sigma}^2 = n^{-1} e'e
$$
and
$$
\widehat{\sigma}^2 = n^{-1}\widehat{e}'\widehat{e}. \qquad (3.29)
$$
Recall the expressions $\widehat{e} = My = Me$ from (3.25) and (3.26). Applied to (3.29) we find
$$
\widehat{\sigma}^2 = n^{-1}\widehat{e}'\widehat{e} = n^{-1}y'MMy = n^{-1}y'My = n^{-1}e'Me,
$$
the third equality since $MM = M$.

An interesting implication is that
$$
\tilde{\sigma}^2 - \widehat{\sigma}^2 = n^{-1}e'e - n^{-1}e'Me = n^{-1}e'Pe \ge 0.
$$
The final inequality holds because $P$ is positive semi-definite and $e'Pe$ is a quadratic form. This shows that the feasible estimator $\widehat{\sigma}^2$ is numerically smaller than the idealized estimator (3.27).
3.12 Analysis of Variance

Another way of writing (3.25) is
$$ y = Py + My = \hat{y} + \hat{e}. \qquad (3.30) $$
This decomposition is orthogonal, that is,
$$ \hat{y}'\hat{e} = (Py)'(My) = y'PMy = 0. $$
It follows that
$$ y'y = \hat{y}'\hat{y} + 2\hat{y}'\hat{e} + \hat{e}'\hat{e} = \hat{y}'\hat{y} + \hat{e}'\hat{e} $$
or
$$ \sum_{i=1}^{n} y_i^2 = \sum_{i=1}^{n} \hat{y}_i^2 + \sum_{i=1}^{n} \hat{e}_i^2. $$
Now subtracting $\bar{y}$ from both sides of (3.30) we obtain
$$ y - 1\bar{y} = \hat{y} - 1\bar{y} + \hat{e}. $$
This decomposition is also orthogonal when $X$ contains a constant, as
$$ (\hat{y} - 1\bar{y})'\hat{e} = \hat{y}'\hat{e} - \bar{y}1'\hat{e} = 0 $$
under (3.16). It follows that
$$ (y - 1\bar{y})'(y - 1\bar{y}) = (\hat{y} - 1\bar{y})'(\hat{y} - 1\bar{y}) + \hat{e}'\hat{e} $$
or
$$ \sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n} \hat{e}_i^2. $$
This is commonly called the analysis-of-variance formula for least squares regression.

A commonly reported statistic is the coefficient of determination or R-squared:
$$ R^2 = \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} = 1 - \frac{\sum_{i=1}^{n} \hat{e}_i^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}. $$
It is often described as the fraction of the sample variance of $y_i$ which is explained by the least-squares fit. $R^2$ is a crude measure of regression fit. We have better measures of fit, but these require a statistical (not just algebraic) analysis and we will return to these issues later. One difficulty with $R^2$ is that it increases when regressors are added to a regression (see Exercise 3.16).
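The analysis-of-variance identity and the two equivalent expressions for $R^2$ are straightforward to verify numerically. A minimal sketch with simulated data (illustrative names only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta_hat
e_hat = y - y_hat

tss = np.sum((y - y.mean()) ** 2)            # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)        # explained sum of squares
rss = np.sum(e_hat ** 2)                     # residual sum of squares

print(np.isclose(tss, ess + rss))            # analysis-of-variance identity
print(np.isclose(ess / tss, 1 - rss / tss))  # two expressions for R-squared agree
```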
3.13 Regression Components

Partition
$$ X = [X_1 \quad X_2] $$
and
$$ \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}. $$
Then the regression model can be rewritten as
$$ y = X_1\beta_1 + X_2\beta_2 + e. \qquad (3.31) $$
The OLS estimator of $\beta = (\beta_1', \beta_2')'$ is obtained by regression of $y$ on $X = [X_1 \quad X_2]$ and can be written as
$$ y = X\hat{\beta} + \hat{e} = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}. \qquad (3.32) $$
We are interested in algebraic expressions for $\hat{\beta}_1$ and $\hat{\beta}_2$.

The algebra for the estimator is identical to that for the population coefficients as presented in Section 2.20.

Partition $\hat{Q}_{xx}$ and $\hat{Q}_{xy}$ as
$$ \hat{Q}_{xx} = \begin{pmatrix} \hat{Q}_{11} & \hat{Q}_{12} \\ \hat{Q}_{21} & \hat{Q}_{22} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{n}X_1'X_1 & \dfrac{1}{n}X_1'X_2 \\ \dfrac{1}{n}X_2'X_1 & \dfrac{1}{n}X_2'X_2 \end{pmatrix} $$
and similarly
$$ \hat{Q}_{xy} = \begin{pmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{n}X_1'y \\ \dfrac{1}{n}X_2'y \end{pmatrix}. $$
By the partitioned matrix inversion formula (A.4),
$$ \hat{Q}_{xx}^{-1} = \begin{pmatrix} \hat{Q}_{11} & \hat{Q}_{12} \\ \hat{Q}_{21} & \hat{Q}_{22} \end{pmatrix}^{-1} \overset{\mathrm{def}}{=} \begin{pmatrix} \hat{Q}^{11} & \hat{Q}^{12} \\ \hat{Q}^{21} & \hat{Q}^{22} \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot 2}^{-1} & -\hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot 1}^{-1} \end{pmatrix} \qquad (3.33) $$
where $\hat{Q}_{11\cdot 2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21}$ and $\hat{Q}_{22\cdot 1} = \hat{Q}_{22} - \hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{12}$.

Thus
$$ \hat{\beta} = \begin{pmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot 2}^{-1} & -\hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot 1}^{-1} \end{pmatrix}\begin{pmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{1y\cdot 2} \\ \hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{2y\cdot 1} \end{pmatrix}. $$
Now
$$ \hat{Q}_{11\cdot 2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21} = \frac{1}{n}X_1'X_1 - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'X_1 = \frac{1}{n}X_1'M_2X_1 $$
where
$$ M_2 = I_n - X_2\left(X_2'X_2\right)^{-1}X_2' $$
is the orthogonal projection matrix for $X_2$. Similarly
$$ \hat{Q}_{22\cdot 1} = \frac{1}{n}X_2'M_1X_2 $$
where
$$ M_1 = I_n - X_1\left(X_1'X_1\right)^{-1}X_1' $$
is the orthogonal projection matrix for $X_1$. Also
$$ \hat{Q}_{1y\cdot 2} = \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{2y} = \frac{1}{n}X_1'y - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'y = \frac{1}{n}X_1'M_2y $$
and
$$ \hat{Q}_{2y\cdot 1} = \frac{1}{n}X_2'M_1y. $$
Therefore
$$ \hat{\beta}_1 = \left(X_1'M_2X_1\right)^{-1}\left(X_1'M_2y\right) \qquad (3.34) $$
and
$$ \hat{\beta}_2 = \left(X_2'M_1X_2\right)^{-1}\left(X_2'M_1y\right). \qquad (3.35) $$
These are algebraic expressions for the sub-coefficient estimates from (3.32).
3.14 Residual Regression

As first recognized by Frisch and Waugh (1933), expressions (3.34) and (3.35) can be used to show that the least-squares estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ can be found by a two-step regression procedure.

Take (3.35). Since $M_1$ is idempotent, $M_1 = M_1M_1$ and thus
$$ \hat{\beta}_2 = \left(X_2'M_1X_2\right)^{-1}\left(X_2'M_1y\right) = \left(X_2'M_1M_1X_2\right)^{-1}\left(X_2'M_1M_1y\right) = \left(\tilde{X}_2'\tilde{X}_2\right)^{-1}\left(\tilde{X}_2'\tilde{e}_1\right) $$
where $\tilde{X}_2 = M_1X_2$ and $\tilde{e}_1 = M_1y$.

Thus the coefficient estimate $\hat{\beta}_2$ is algebraically equal to the least-squares regression of $\tilde{e}_1$ on $\tilde{X}_2$. Notice that these two are $y$ and $X_2$, respectively, premultiplied by $M_1$. But we know that multiplication by $M_1$ is equivalent to creating least-squares residuals. Therefore $\tilde{e}_1$ is simply the least-squares residual from a regression of $y$ on $X_1$, and the columns of $\tilde{X}_2$ are the least-squares residuals from the regressions of the columns of $X_2$ on $X_1$.

We have proven the following theorem.

Theorem 3.14.1 Frisch-Waugh-Lovell
In the model (3.31), the OLS estimator of $\beta_2$ and the OLS residuals $\hat{e}$ may be equivalently computed by either the OLS regression (3.32) or via the following algorithm:
1. Regress $y$ on $X_1$, obtain residuals $\tilde{e}_1$;
2. Regress $X_2$ on $X_1$, obtain residuals $\tilde{X}_2$;
3. Regress $\tilde{e}_1$ on $\tilde{X}_2$, obtain OLS estimates $\hat{\beta}_2$ and residuals $\hat{e}$.

In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm.
This result is a direct analogy of the coefficient representation obtained in Section 2.21. The result obtained in that section concerned the population projection coefficients; the result obtained here concerns the least-squares estimates. The key message is the same. In the least-squares regression (3.32), the estimated coefficient $\hat{\beta}_2$ numerically equals the regression of $y$ on the regressors $X_2$, only after the regressors $X_1$ have been linearly projected out. Similarly, the coefficient estimate $\hat{\beta}_1$ numerically equals the regression of $y$ on the regressors $X_1$, after the regressors $X_2$ have been linearly projected out. This result can be very insightful when interpreting regression coefficients.

A common application of the FWL theorem, which you may have seen in an introductory econometrics course, is the demeaning formula for regression. Partition $X = [X_1 \quad X_2]$ where $X_1 = 1$ is a vector of ones and $X_2$ is a matrix of observed regressors. In this case,
$$ M_1 = I_n - 1\left(1'1\right)^{-1}1'. $$
Observe that
$$ \tilde{X}_2 = M_1X_2 = X_2 - \bar{X}_2 $$
and
$$ \tilde{y} = M_1y = y - \bar{y} $$
are the "demeaned" variables. The FWL theorem says that $\hat{\beta}_2$ is the OLS estimate from a regression of $y_i - \bar{y}$ on $x_{2i} - \bar{x}_2$:
$$ \hat{\beta}_2 = \left( \sum_{i=1}^{n} \left(x_{2i} - \bar{x}_2\right)\left(x_{2i} - \bar{x}_2\right)' \right)^{-1}\left( \sum_{i=1}^{n} \left(x_{2i} - \bar{x}_2\right)\left(y_i - \bar{y}\right) \right). $$
Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
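A quick numerical check of the FWL theorem, using simulated data (the names and data-generating process are illustrative, not from the text): the coefficient on $X_2$ from the full regression matches the coefficient from the residual-on-residual regression.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=(n, 2))                        # regressors to be projected out
x2 = rng.normal(size=(n, 1)) + 0.5 * x1[:, :1]      # regressor of interest, correlated with x1
X1 = np.column_stack([np.ones(n), x1])
X = np.column_stack([X1, x2])
y = X @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(size=n)

# Full regression
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# FWL two-step: residuals of y and x2 after regressing each on X1
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
e1 = M1 @ y
x2_tilde = M1 @ x2
beta2_fwl = np.linalg.lstsq(x2_tilde, e1, rcond=None)[0]

print(np.allclose(beta_full[-1], beta2_fwl))        # identical slope on x2
```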
Ragnar Frisch
Ragnar Frisch (1895-1973) was co-winner with Jan Tinbergen of the first Nobel Memorial Prize in Economic Sciences in 1969 for their work in developing and applying dynamic models for the analysis of economic problems. Frisch made a number of foundational contributions to modern economics beyond the Frisch-Waugh-Lovell Theorem, including formalizing consumer theory, production theory, and business cycle theory.
3.15 Prediction Errors

The least-squares residuals $\hat{e}_i$ are not true prediction errors, as they are constructed based on the full sample including $y_i$. A proper prediction for $y_i$ should be based on estimates constructed using only the other observations. We can do this by defining the leave-one-out OLS estimator of $\beta$ as that obtained from the sample of $n-1$ observations excluding the $i$'th observation:
$$ \hat{\beta}_{(-i)} = \left( \frac{1}{n-1}\sum_{j\neq i} x_jx_j' \right)^{-1}\left( \frac{1}{n-1}\sum_{j\neq i} x_jy_j \right) = \left( X_{(-i)}'X_{(-i)} \right)^{-1}X_{(-i)}'y_{(-i)}. \qquad (3.36) $$
Here, $X_{(-i)}$ and $y_{(-i)}$ are the data matrices omitting the $i$'th row. The leave-one-out predicted value for $y_i$ is
$$ \tilde{y}_i = x_i'\hat{\beta}_{(-i)}, $$
and the leave-one-out residual or prediction error is
$$ \tilde{e}_i = y_i - \tilde{y}_i. $$
A convenient alternative expression for $\hat{\beta}_{(-i)}$ (derived in Section 3.18) is
$$ \hat{\beta}_{(-i)} = \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i \qquad (3.37) $$
where $h_{ii}$ are the leverage values as defined in (3.21).

Using (3.37) we can simplify the expression for the prediction error:
$$ \tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)} = y_i - x_i'\hat{\beta} + (1 - h_{ii})^{-1}x_i'\left(X'X\right)^{-1}x_i\hat{e}_i = \hat{e}_i + (1 - h_{ii})^{-1}h_{ii}\hat{e}_i = (1 - h_{ii})^{-1}\hat{e}_i. \qquad (3.38) $$
To write this in vector notation, define
$$ M^{*} = \left(I_n - \operatorname{diag}\{h_{11}, \ldots, h_{nn}\}\right)^{-1} = \operatorname{diag}\left\{(1 - h_{11})^{-1}, \ldots, (1 - h_{nn})^{-1}\right\}. \qquad (3.39) $$
Then (3.38) is equivalent to
$$ \tilde{e} = M^{*}\hat{e}. \qquad (3.40) $$
A convenient feature of this expression is that it shows that computation of the full vector of prediction errors $\tilde{e}$ is based on a simple linear operation, and does not really require $n$ separate estimations.

One use of the prediction errors is to estimate the out-of-sample mean squared error
$$ \tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} \tilde{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n} (1 - h_{ii})^{-2}\hat{e}_i^2. \qquad (3.41) $$
This is also known as the sample mean squared prediction error. Its square root $\tilde{\sigma} = \sqrt{\tilde{\sigma}^2}$ is the prediction standard error.
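The identity (3.38) means the leave-one-out prediction errors can be computed without refitting the model $n$ times. A short sketch with simulated data (illustrative names) comparing the shortcut with brute-force re-estimation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e_hat = y - X @ beta_hat
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)    # leverage values

e_tilde = e_hat / (1 - h)                      # prediction errors via eq. (3.38)

# Brute-force check: refit leaving out observation 0
keep = np.arange(n) != 0
beta_loo = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
print(np.isclose(e_tilde[0], y[0] - X[0] @ beta_loo))

sigma_tilde2 = np.mean(e_tilde ** 2)           # sample mean squared prediction error, eq. (3.41)
```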
3.16 Influential Observations

Another use of the leave-one-out estimator is to investigate the impact of influential observations, sometimes called outliers. We say that observation $i$ is influential if its omission from the sample induces a substantial change in a parameter of interest. From (3.37)-(3.38) we know that
$$ \hat{\beta} - \hat{\beta}_{(-i)} = (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i = \left(X'X\right)^{-1}x_i\tilde{e}_i. \qquad (3.42) $$
By direct calculation of this quantity for each observation $i$, we can directly discover if a specific observation $i$ is influential for a coefficient estimate of interest.

For a general assessment, we can focus on the predicted values. The difference between the full-sample and leave-one-out predicted values is
$$ \hat{y}_i - \tilde{y}_i = x_i'\hat{\beta} - x_i'\hat{\beta}_{(-i)} = x_i'\left(X'X\right)^{-1}x_i\tilde{e}_i = h_{ii}\tilde{e}_i, $$
which is a simple function of the leverage values $h_{ii}$ and prediction errors $\tilde{e}_i$. Observation $i$ is influential for the predicted value if $|h_{ii}\tilde{e}_i|$ is large, which requires that both $h_{ii}$ and $|\tilde{e}_i|$ are large.

One way to think about this is that a large leverage value $h_{ii}$ gives the potential for observation $i$ to be influential. A large $h_{ii}$ means that observation $i$ is unusual in the sense that the regressor $x_i$ is far from its sample mean. We call an observation with large $h_{ii}$ a leverage point. A leverage point is not necessarily influential as the latter also requires that the prediction error $\tilde{e}_i$ is large.

To determine if any individual observations are influential in this sense, several diagnostics have been proposed (some names include DFITS, Cook's Distance, and Welsch Distance). Unfortunately, from a statistical perspective it is difficult to recommend these diagnostics for applications as they are not based on statistical theory. Probably the most relevant measure is the change in the coefficient estimates given in (3.42). The ratio of these changes to the coefficient's standard error is called its DFBETA, and is a postestimation diagnostic available in STATA. While there is no magic threshold, the concern is whether or not an individual observation meaningfully changes an estimated coefficient of interest.
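As a sketch (simulated data with one planted outlier; all names illustrative), the full set of coefficient changes in (3.42) can be computed in one pass from the leverages and prediction errors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)
y[0] = 25.0                                   # plant one unusual observation

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e_hat = y - X @ beta_hat
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
e_tilde = e_hat / (1 - h)

# Row i is (X'X)^{-1} x_i * e_tilde_i, the change from deleting observation i, eq. (3.42)
delta_beta = (X @ XtX_inv) * e_tilde[:, None]
print(delta_beta[0])                          # influence of the planted outlier
```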
For illustration, consider Figure 3.2 which shows a scatter plot of random variables $(y_i, x_i)$. The 25 observations shown with the open circles are generated by $x_i \sim U[1, 10]$ and $y_i \sim N(x_i, 4)$. The 26'th observation shown with the filled circle is $x_{26} = 9$, $y_{26} = 0$. (Imagine that $y_{26} = 0$ was incorrectly recorded due to a mistaken key entry.) The Figure shows both the least-squares fitted line from the full sample and that obtained after deletion of the 26'th observation from the sample. In this example we can see how the 26'th observation (the "outlier") greatly tilts the least-squares fitted line towards the 26'th observation. In fact, the slope coefficient decreases from 0.97 (which is close to the true value of 1.00) to 0.56, which is substantially reduced. Neither $y_{26}$ nor $x_{26}$ are unusual values relative to their marginal distributions, so this outlier would not have been detected from examination of the marginal distributions of the data. The change in the slope coefficient of $-0.41$ is meaningful and should raise concern to an applied economist.

If an observation is determined to be influential, what should be done? As a common cause of influential observations is data entry error, the influential observations should be examined for evidence that the observation was mis-recorded. Perhaps the observation falls outside of permitted ranges, or some observables are inconsistent (for example, a person is listed as having a job but receives earnings of $0). If it is determined that an observation is incorrectly recorded, then the observation is typically deleted from the sample. This process is often called "cleaning the data". The decisions made in this process involve a fair amount of individual judgement. When this is done it is proper empirical practice to document such choices. (It is useful to keep the source data in its original form, a revised data file after cleaning, and a record describing the revision process. This is especially useful when revising empirical work at a later date.)

[Figure 3.2: Impact of an influential observation on the least-squares estimator. Scatter plot of y against x showing the full-sample OLS fit and the leave-one-out OLS fit.]

It is also possible that an observation is correctly measured, but unusual and influential. In this case it is unclear how to proceed. Some researchers will try to alter the specification to properly model the influential observation. Other researchers will delete the observation from the sample. The motivation for this choice is to prevent the results from being skewed or determined by individual observations, but this practice is viewed skeptically by many researchers who believe it reduces the integrity of reported empirical results.
3.17 Normal Regression Model

The normal regression model is the linear regression model under the restriction that the error $e_i$ is independent of $x_i$ and has the distribution $N(0, \sigma^2)$. We can write this as
$$ e_i \mid x_i \sim N\left(0, \sigma^2\right). $$
This assumption implies
$$ y_i \mid x_i \sim N\left(x_i'\beta, \sigma^2\right). $$
Normal regression is a parametric model, where likelihood methods can be used for estimation, testing, and distribution theory.

The log-likelihood function for the normal regression model is
$$ \log L(\beta, \sigma^2) = \sum_{i=1}^{n}\log\left( \frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left( -\frac{1}{2\sigma^2}\left(y_i - x_i'\beta\right)^2 \right) \right) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\left(\sigma^2\right) - \frac{1}{2\sigma^2}SSE_n(\beta). \qquad (3.43) $$
The maximum likelihood estimator (MLE) $(\hat{\beta}_{\mathrm{mle}}, \hat{\sigma}^2_{\mathrm{mle}})$ maximizes $\log L(\beta, \sigma^2)$. Since the latter is a function of $\beta$ only through the sum of squared errors $SSE_n(\beta)$, maximizing the likelihood is identical to minimizing $SSE_n(\beta)$. Hence
$$ \hat{\beta}_{\mathrm{mle}} = \hat{\beta}_{\mathrm{ols}}, $$
the MLE for $\beta$ equals the OLS estimator. Due to this equivalence, the least squares estimator $\hat{\beta}_{\mathrm{ols}}$ is often called the MLE.

We can also find the MLE for $\sigma^2$. Plugging $\hat{\beta}$ into the log-likelihood we obtain
$$ \log L\left(\hat{\beta}_{\mathrm{mle}}, \sigma^2\right) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\left(\sigma^2\right) - \frac{SSE_n(\hat{\beta}_{\mathrm{mle}})}{2\sigma^2}. $$
Maximization with respect to $\sigma^2$ yields the first-order condition
$$ \frac{\partial}{\partial\sigma^2}\log L\left(\hat{\beta}_{\mathrm{mle}}, \hat{\sigma}^2\right) = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\left(\hat{\sigma}^2\right)^2}SSE_n(\hat{\beta}_{\mathrm{mle}}) = 0. $$
Solving for $\hat{\sigma}^2$ yields the MLE for $\sigma^2$:
$$ \hat{\sigma}^2_{\mathrm{mle}} = \frac{SSE_n(\hat{\beta}_{\mathrm{mle}})}{n} = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2, $$
which is the same as the moment estimator (3.28).

Plugging the estimates into (3.43) we obtain the maximized log-likelihood
$$ \log L\left(\hat{\beta}_{\mathrm{mle}}, \hat{\sigma}^2_{\mathrm{mle}}\right) = -\frac{n}{2}\left(\log(2\pi) + 1\right) - \frac{n}{2}\log\left(\hat{\sigma}^2_{\mathrm{mle}}\right). \qquad (3.44) $$
The log-likelihood (or the negative log-likelihood) is typically reported as a measure of fit.

It may seem surprising that the MLE $\hat{\beta}_{\mathrm{mle}}$ is numerically equal to the OLS estimator, despite emerging from quite different motivations. It is not completely accidental. The least-squares estimator minimizes a particular sample loss function, the sum of squared error criterion, and most loss functions are equivalent to the likelihood of a specific parametric distribution, in this case the normal regression model. In this sense it is not surprising that the least-squares estimator can be motivated as either the minimizer of a sample loss function or as the maximizer of a likelihood function.
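A brief sketch (simulated data, illustrative names) confirming that the normal-model MLE coincides with OLS and that the concentrated log-likelihood matches (3.44):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
e_hat = y - X @ beta_ols
sigma2_mle = np.mean(e_hat ** 2)              # MLE of sigma^2, same as eq. (3.28)

# Log-likelihood evaluated at the estimates, eq. (3.43)
loglik = (-0.5 * n * np.log(2 * np.pi) - 0.5 * n * np.log(sigma2_mle)
          - np.sum(e_hat ** 2) / (2 * sigma2_mle))

# Concentrated form, eq. (3.44)
loglik_conc = -0.5 * n * (np.log(2 * np.pi) + 1) - 0.5 * n * np.log(sigma2_mle)
print(np.isclose(loglik, loglik_conc))
```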
Carl Friedrich Gauss
The mathematician Carl Friedrich Gauss (1777-1855) proposed the normal regression model, and derived the least squares estimator as the maximum likelihood estimator for this model. He claimed to have discovered the method in 1795 at the age of eighteen, but did not publish the result until 1809. Interest in Gauss's approach was reinforced by Laplace's simultaneous discovery of the central limit theorem, which provided a justification for viewing random disturbances as approximately normal.
3.18 Technical Proofs*

Proof of Theorem 3.9.1, equation (3.23): First, $h_{ii} = x_i'\left(X'X\right)^{-1}x_i \ge 0$ since it is a quadratic form and $X'X > 0$. Next, since $h_{ii}$ is the $i$'th diagonal element of the projection matrix $P = X\left(X'X\right)^{-1}X'$, then
$$ h_{ii} = s'Ps $$
where
$$ s = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix} $$
is a unit vector with a 1 in the $i$'th place (and zeros elsewhere).

By the spectral decomposition of the idempotent matrix $P$ (see equation (A.5)),
$$ P = H'\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}H $$
where $H'H = I_n$. Thus letting $b = Hs$ denote the $i$'th column of $H$, and partitioning $b' = \begin{pmatrix} b_1' & b_2' \end{pmatrix}$, then
$$ h_{ii} = s'H'\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}Hs = b'\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}b = b_1'b_1 \le b'b = 1, $$
the final equality since $b$ is the $i$'th column of $H$ and $H'H = I_n$. We have shown that $h_{ii} \le 1$, establishing (3.23).
Proof of Equation (3.37). The Sherman-Morrison formula (A.3) from Appendix A.5 states that for nonsingular $A$ and vector $b$,
$$ \left(A - bb'\right)^{-1} = A^{-1} + \left(1 - b'A^{-1}b\right)^{-1}A^{-1}bb'A^{-1}. $$
This implies
$$ \left(X'X - x_ix_i'\right)^{-1} = \left(X'X\right)^{-1} + (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1} $$
and thus
$$
\begin{aligned}
\hat{\beta}_{(-i)} &= \left(X'X - x_ix_i'\right)^{-1}\left(X'y - x_iy_i\right) \\
&= \left(X'X\right)^{-1}X'y - \left(X'X\right)^{-1}x_iy_i + (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1}\left(X'y - x_iy_i\right) \\
&= \hat{\beta} - \left(X'X\right)^{-1}x_iy_i + (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\left(x_i'\hat{\beta} - h_{ii}y_i\right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\left((1 - h_{ii})y_i - x_i'\hat{\beta} + h_{ii}y_i\right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i,
\end{aligned}
$$
the third equality making the substitutions $\hat{\beta} = \left(X'X\right)^{-1}X'y$ and $h_{ii} = x_i'\left(X'X\right)^{-1}x_i$, and the remainder collecting terms.
Exercises

Exercise 3.1 Let $y$ be a random variable with $\mu = Ey$ and $\sigma^2 = \operatorname{var}(y)$. Define
$$ g\left(y, \mu, \sigma^2\right) = \begin{pmatrix} y - \mu \\ (y - \mu)^2 - \sigma^2 \end{pmatrix}. $$
Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $\bar{g}_n(\hat{\mu}, \hat{\sigma}^2) = 0$ where $\bar{g}_n(m, s) = n^{-1}\sum_{i=1}^{n} g(y_i, m, s)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.

Exercise 3.2 Consider the OLS regression of the $n \times 1$ vector $y$ on the $n \times k$ matrix $X$. Consider an alternative set of regressors $Z = XC$, where $C$ is a $k \times k$ non-singular matrix. Thus, each column of $Z$ is a mixture of some of the columns of $X$. Compare the OLS estimates and residuals from the regression of $y$ on $X$ to the OLS estimates from the regression of $y$ on $Z$.

Exercise 3.3 Using matrix algebra, show $X'\hat{e} = 0$.

Exercise 3.4 Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X = [X_1 \quad X_2]$. Find $X_2'\hat{e}$.

Exercise 3.5 Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X$. Find the OLS coefficient from a regression of $\hat{e}$ on $X$.

Exercise 3.6 Let $\hat{y} = X(X'X)^{-1}X'y$. Find the OLS coefficient from a regression of $\hat{y}$ on $X$.

Exercise 3.7 Show that if $X = [X_1 \quad X_2]$ then $PX_1 = X_1$ and $MX_1 = 0$.

Exercise 3.8 Show that $M$ is idempotent: $MM = M$.

Exercise 3.9 Show that $\operatorname{tr} M = n - k$.

Exercise 3.10 Show that if $X = [X_1 \quad X_2]$ and $X_1'X_2 = 0$ then $P = P_1 + P_2$.

Exercise 3.11 Show that when $X$ contains a constant, $\dfrac{1}{n}\sum_{i=1}^{n}\hat{y}_i = \bar{y}$.
Exercise 3.12 A dummy variable takes on only the values 0 and 1. It is used for categorical data, such as an individual's gender. Let $d_1$ and $d_2$ be vectors of 1's and 0's, with the $i$'th element of $d_1$ equaling 1 and that of $d_2$ equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are $n_1$ men and $n_2$ women in the sample. Consider fitting the following three equations by OLS
$$ y = \mu + d_1\alpha_1 + d_2\alpha_2 + e \qquad (3.45) $$
$$ y = d_1\alpha_1 + d_2\alpha_2 + e \qquad (3.46) $$
$$ y = \mu + d_1\phi + e \qquad (3.47) $$
Can all three equations (3.45), (3.46), and (3.47) be estimated by OLS? Explain if not.
(a) Compare regressions (3.46) and (3.47). Is one more general than the other? Explain the relationship between the parameters in (3.46) and (3.47).
(b) Compute $\iota'd_1$ and $\iota'd_2$, where $\iota$ is an $n \times 1$ vector of ones.
(c) Letting $\alpha = (\alpha_1 \ \alpha_2)'$, write equation (3.46) as $y = X\alpha + e$. Consider the assumption $E(x_ie_i) = 0$. Is there any content to this assumption in this setting?

Exercise 3.13 Let $d_1$ and $d_2$ be defined as in the previous exercise.
(a) In the OLS regression
$$ y = d_1\hat{\gamma}_1 + d_2\hat{\gamma}_2 + \hat{u}, $$
show that $\hat{\gamma}_1$ is the sample mean of the dependent variable among the men of the sample ($\bar{y}_1$), and that $\hat{\gamma}_2$ is the sample mean among the women ($\bar{y}_2$).
(b) Let $X$ ($n \times k$) be an additional matrix of regressors. Describe in words the transformations
$$ y^* = y - d_1\bar{y}_1 - d_2\bar{y}_2 $$
$$ X^* = X - d_1\bar{x}_1' - d_2\bar{x}_2' $$
where $\bar{x}_1$ and $\bar{x}_2$ are the $k \times 1$ means of the regressors for men and women, respectively.
(c) Compare $\tilde{\beta}$ from the OLS regression
$$ y^* = X^*\tilde{\beta} + \tilde{e} $$
with $\hat{\beta}$ from the OLS regression
$$ y = d_1\hat{\alpha}_1 + d_2\hat{\alpha}_2 + X\hat{\beta} + \hat{e}. $$
Exercise 3.14 Let $\hat{\beta}_n = (X_n'X_n)^{-1}X_n'y_n$ denote the OLS estimate when $y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(y_{n+1}, x_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is
$$ \hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + x_{n+1}'(X_n'X_n)^{-1}x_{n+1}}\left(X_n'X_n\right)^{-1}x_{n+1}\left(y_{n+1} - x_{n+1}'\hat{\beta}_n\right). $$

Exercise 3.15 Prove that $R^2$ is the square of the sample correlation between $y$ and $\hat{y}$.

Exercise 3.16 Consider two least-squares regressions
$$ y = X_1\tilde{\beta}_1 + \tilde{e} $$
and
$$ y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}. $$
Let $R_1^2$ and $R_2^2$ be the $R$-squared from the two regressions. Show that $R_2^2 \ge R_1^2$. Is there a case (explain) when there is equality $R_2^2 = R_1^2$?

Exercise 3.17 Show that $\tilde{\sigma}^2 \ge \hat{\sigma}^2$. Is equality possible?
Exercise 3.18 For which observations will $\hat{\beta}_{(-i)} = \hat{\beta}$?
Exercise 3.19 The data file cps85.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.S. Census Bureau. The file contains observations on nine variables, listed in the file cps85.pdf.

V1 = education (in years)
V2 = region of residence (coded 1 if South, 0 otherwise)
V3 = (coded 1 if nonwhite and non-Hispanic, 0 otherwise)
V4 = (coded 1 if Hispanic, 0 otherwise)
V5 = gender (coded 1 if female, 0 otherwise)
V6 = marital status (coded 1 if married, 0 otherwise)
V7 = potential labor market experience (in years)
V8 = union status (coded 1 if in union job, 0 otherwise)
V9 = hourly wage (in dollars)

Estimate a regression of wage $y_i$ on education $x_{1i}$, experience $x_{2i}$, and experience-squared $x_{3i} = x_{2i}^2$ (and a constant). Report the OLS estimates.
Let $\hat{e}_i$ be the OLS residual and $\hat{y}_i$ the predicted value from the regression. Numerically calculate the following:
(a) $\sum_{i=1}^{n}\hat{e}_i$
(b) $\sum_{i=1}^{n} x_{1i}\hat{e}_i$
(c) $\sum_{i=1}^{n} x_{2i}\hat{e}_i$
(d) $\sum_{i=1}^{n} x_{1i}^2\hat{e}_i$
(e) $\sum_{i=1}^{n} x_{2i}^2\hat{e}_i$
(f) $\sum_{i=1}^{n}\hat{y}_i\hat{e}_i$
(g) $\sum_{i=1}^{n}\hat{e}_i^2$
(h) $R^2$
Are these calculations consistent with the theoretical properties of OLS? Explain.

Exercise 3.20 Using the data from the previous problem, re-estimate the slope on education using the residual regression approach. Regress $y_i$ on $(1, x_{2i}, x_{2i}^2)$, regress $x_{1i}$ on $(1, x_{2i}, x_{2i}^2)$, and regress the residuals on the residuals. Report the estimate from this regression. Does it equal the value from the first OLS regression? Explain.
In the second-stage residual regression (the regression of the residuals on the residuals), calculate the equation $R^2$ and sum of squared errors. Do they equal the values from the initial OLS regression? Explain.
Chapter 4
Least Squares Regression

4.1 Introduction

In this chapter we investigate some finite-sample properties of least-squares applied to a random sample in the linear regression model. In particular, we calculate the finite-sample mean and covariance matrix and propose standard errors for the coefficient estimates.

4.2 Sample Mean

To start with the simplest setting, we first consider the intercept-only model
$$ y_i = \mu + e_i, \qquad E(e_i) = 0, $$
which is equivalent to the regression model with $k = 1$ and $x_i = 1$. In the intercept model, $\mu = E(y_i)$ is the mean of $y_i$. (See Exercise 2.15.) The least-squares estimator $\hat{\mu} = \bar{y}$ equals the sample mean as shown in (3.7).
We now calculate the mean and variance of the estimator $\bar{y}$. Since the sample mean is a linear function of the observations, its expectation is simple to calculate:
$$ E\bar{y} = E\left(\frac{1}{n}\sum_{i=1}^{n} y_i\right) = \frac{1}{n}\sum_{i=1}^{n} Ey_i = \mu. $$
This shows that the expected value of the least-squares estimator (the sample mean) equals the projection coefficient (the population mean). An estimator with the property that its expectation equals the parameter it is estimating is called unbiased.

Definition 4.2.1 An estimator $\hat{\theta}$ for $\theta$ is unbiased if $E\hat{\theta} = \theta$.
We next calculate the variance of the estimator $\bar{y}$. Making the substitution $y_i = \mu + e_i$ we find
$$ \bar{y} - \mu = \frac{1}{n}\sum_{i=1}^{n} e_i. $$
Then
$$ \operatorname{var}(\bar{y}) = E(\bar{y} - \mu)^2 = E\left(\frac{1}{n}\sum_{i=1}^{n} e_i\right)\left(\frac{1}{n}\sum_{j=1}^{n} e_j\right) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} E(e_ie_j) = \frac{1}{n^2}\sum_{i=1}^{n}\sigma^2 = \frac{1}{n}\sigma^2. $$
The second-to-last equality holds because $E(e_ie_j) = \sigma^2$ for $i = j$ yet $E(e_ie_j) = 0$ for $i \neq j$ due to independence.

We have shown that $\operatorname{var}(\bar{y}) = \frac{1}{n}\sigma^2$. This is the familiar formula for the variance of the sample mean.
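A small Monte Carlo sketch (purely illustrative, not from the text) of these two facts, checking that the sample mean is centered at $\mu$ with variance close to $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n, reps = 2.0, 3.0, 50, 20000

means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(means.mean())                  # approximately mu (unbiasedness)
print(means.var(), sigma**2 / n)     # approximately sigma^2 / n
```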
4.3 Linear Regression Model

We now consider the linear regression model. Throughout the remainder of this chapter we maintain the following.

Assumption 4.3.1 Linear Regression Model
The observations $(y_i, x_i)$ come from a random sample and satisfy the linear regression equation
$$ y_i = x_i'\beta + e_i \qquad (4.1) $$
$$ E(e_i \mid x_i) = 0. \qquad (4.2) $$
The variables have finite second moments
$$ Ey_i^2 < \infty, \qquad E\|x_i\|^2 < \infty, $$
and an invertible design matrix
$$ Q_{xx} = E\left(x_ix_i'\right) > 0. $$

We will consider both the general case of heteroskedastic regression, where the conditional variance
$$ E\left(e_i^2 \mid x_i\right) = \sigma^2(x_i) = \sigma_i^2 $$
is unrestricted, and the specialized case of homoskedastic regression, where the conditional variance is constant. In the latter case we add the following assumption.

Assumption 4.3.2 Homoskedastic Linear Regression Model
In addition to Assumption 4.3.1,
$$ E\left(e_i^2 \mid x_i\right) = \sigma^2(x_i) = \sigma^2 \qquad (4.3) $$
is independent of $x_i$.
4.4 Mean of Least-Squares Estimator

In this section we show that the OLS estimator is unbiased in the linear regression model. This calculation can be done using either summation notation or matrix notation. We will use both.

First take summation notation. Observe that under (4.1)-(4.2),
$$ E(y_i \mid X) = E(y_i \mid x_i) = x_i'\beta. \qquad (4.4) $$
The first equality states that the conditional expectation of $y_i$ given $\{x_1, \ldots, x_n\}$ only depends on $x_i$, since the observations are independent across $i$. The second equality is the assumption of a linear conditional mean.

Using definition (3.9), the conditioning theorem, the linearity of expectations, (4.4), and properties of the matrix inverse,
$$
\begin{aligned}
E\left(\hat{\beta} \mid X\right) &= E\left(\left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\left(\sum_{i=1}^{n} x_iy_i\right) \Bigm| X\right) \\
&= \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}E\left(\left(\sum_{i=1}^{n} x_iy_i\right) \Bigm| X\right) \\
&= \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\sum_{i=1}^{n} E(x_iy_i \mid X) \\
&= \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\sum_{i=1}^{n} x_iE(y_i \mid X) \\
&= \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\sum_{i=1}^{n} x_ix_i'\beta \\
&= \beta.
\end{aligned}
$$
Now let's show the same result using matrix notation. (4.4) implies
$$ E(y \mid X) = \begin{pmatrix} \vdots \\ E(y_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ x_i'\beta \\ \vdots \end{pmatrix} = X\beta. \qquad (4.5) $$
Similarly
$$ E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0. \qquad (4.6) $$
Using definition (3.18), the conditioning theorem, the linearity of expectations, (4.5), and the properties of the matrix inverse,
$$ E\left(\hat{\beta} \mid X\right) = E\left(\left(X'X\right)^{-1}X'y \mid X\right) = \left(X'X\right)^{-1}X'E(y \mid X) = \left(X'X\right)^{-1}X'X\beta = \beta. $$
At the risk of belaboring the derivation, another way to calculate the same result is as follows. Insert $y = X\beta + e$ into the formula (3.18) for $\hat{\beta}$ to obtain
$$ \hat{\beta} = \left(X'X\right)^{-1}\left(X'(X\beta + e)\right) = \left(X'X\right)^{-1}X'X\beta + \left(X'X\right)^{-1}\left(X'e\right) = \beta + \left(X'X\right)^{-1}X'e. \qquad (4.7) $$
This is a useful linear decomposition of the estimator $\hat{\beta}$ into the true parameter $\beta$ and the stochastic component $(X'X)^{-1}X'e$. Once again, we can calculate that
$$ E\left(\hat{\beta} - \beta \mid X\right) = E\left(\left(X'X\right)^{-1}X'e \mid X\right) = \left(X'X\right)^{-1}X'E(e \mid X) = 0. $$
Regardless of the method, we have shown that $E\left(\hat{\beta} \mid X\right) = \beta$. Applying the law of iterated expectations, we find that
$$ E\left(\hat{\beta}\right) = E\left(E\left(\hat{\beta} \mid X\right)\right) = \beta. $$
We have shown the following theorem.

Theorem 4.4.1 Mean of Least-Squares Estimator
In the linear regression model (Assumption 4.3.1)
$$ E\left(\hat{\beta} \mid X\right) = \beta \qquad (4.8) $$
and
$$ E(\hat{\beta}) = \beta. \qquad (4.9) $$

Equation (4.9) says that the estimator $\hat{\beta}$ is unbiased for $\beta$, meaning that the distribution of $\hat{\beta}$ is centered at $\beta$. Equation (4.8) says that the estimator is conditionally unbiased, which is a stronger result. It says that $\hat{\beta}$ is unbiased for any realization of the regressor matrix $X$.
4.5 Variance of Least Squares Estimator

In this section we calculate the conditional variance of the OLS estimator.

For any $r \times 1$ random vector $Z$ define the $r \times r$ covariance matrix
$$ \operatorname{var}(Z) = E(Z - EZ)(Z - EZ)' = EZZ' - (EZ)(EZ)' $$
and for any pair $(Z, X)$ define the conditional covariance matrix
$$ \operatorname{var}(Z \mid X) = E\left((Z - E(Z \mid X))(Z - E(Z \mid X))' \mid X\right). $$
The conditional covariance matrix of the $n \times 1$ regression error $e$ is the $n \times n$ matrix
$$ D = E\left(ee' \mid X\right). $$
The $i$'th diagonal element of $D$ is
$$ E\left(e_i^2 \mid X\right) = E\left(e_i^2 \mid x_i\right) = \sigma_i^2, $$
while the $ij$'th off-diagonal element of $D$ is
$$ E(e_ie_j \mid X) = E(e_i \mid x_i)E(e_j \mid x_j) = 0, $$
where the first equality uses independence of the observations (Assumption 1.5.1) and the second is (4.2). Thus $D$ is a diagonal matrix with $i$'th diagonal element $\sigma_i^2$:
$$ D = \operatorname{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}. \qquad (4.10) $$
In the special case of the linear homoskedastic regression model (4.3), then $E\left(e_i^2 \mid x_i\right) = \sigma_i^2 = \sigma^2$ and we have the simplification $D = I_n\sigma^2$. In general, however, $D$ need not necessarily take this simplified form.

For any $n \times r$ matrix $A = A(X)$,
$$ \operatorname{var}(A'y \mid X) = \operatorname{var}(A'e \mid X) = A'DA. \qquad (4.11) $$
In particular, we can write $\hat{\beta} = A'y$ where $A = X(X'X)^{-1}$, and thus
$$ \operatorname{var}(\hat{\beta} \mid X) = A'DA = \left(X'X\right)^{-1}X'DX\left(X'X\right)^{-1}. $$
It is useful to note that
$$ X'DX = \sum_{i=1}^{n} x_ix_i'\sigma_i^2, $$
a weighted version of $X'X$.

Rather than working with the variance of the unscaled estimator $\hat{\beta}$, it will be useful to work with the conditional variance of the scaled estimator $\sqrt{n}\left(\hat{\beta} - \beta\right)$. We calculate that
$$ V_{\hat{\beta}} \overset{\mathrm{def}}{=} \operatorname{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right) \mid X\right) = n\operatorname{var}(\hat{\beta} \mid X) = n\left(X'X\right)^{-1}\left(X'DX\right)\left(X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}. $$
This rescaling might seem rather odd, but it will help provide continuity between the finite-sample treatment of this chapter and the asymptotic treatment of later chapters. As we will see in the next chapter, $\operatorname{var}(\hat{\beta} \mid X)$ vanishes as $n$ tends to infinity, yet $V_{\hat{\beta}}$ converges to a constant matrix.

In the special case of the linear homoskedastic regression model, $D = I_n\sigma^2$, so $X'DX = X'X\sigma^2$, and the variance matrix simplifies to
$$ V_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2. $$
It may be worth observing that without rescaling, the variance can be written as
$$ \operatorname{var}\left(\hat{\beta} \mid X\right) = \left(X'X\right)^{-1}\left(X'DX\right)\left(X'X\right)^{-1} $$
and under conditional homoskedasticity
$$ \operatorname{var}\left(\hat{\beta} \mid X\right) = \left(X'X\right)^{-1}\sigma^2. $$

Theorem 4.5.1 Variance of Least-Squares Estimator
In the linear regression model (Assumption 4.3.1)
$$ V_{\hat{\beta}} = \operatorname{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right) \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1} \qquad (4.12) $$
where $D$ is defined in (4.10).
In the homoskedastic linear regression model (Assumption 4.3.2)
$$ V_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2. $$
4.6 Gauss-Markov Theorem

Now consider the class of estimators of $\beta$ which are linear functions of the vector $y$, and thus can be written as
$$ \tilde{\beta} = A'y $$
where $A$ is an $n \times k$ function of $X$. As noted before, the least-squares estimator is the special case obtained by setting $A = X(X'X)^{-1}$. What is the best choice of $A$? The Gauss-Markov theorem, which we now present, says that the least-squares estimator is the best choice among linear unbiased estimators when the errors are homoskedastic, in the sense that the least-squares estimator has the smallest variance among all unbiased linear estimators.

To see this, since $E(y \mid X) = X\beta$, then for any linear estimator $\tilde{\beta} = A'y$ we have
$$ E\left(\tilde{\beta} \mid X\right) = A'E(y \mid X) = A'X\beta, $$
so $\tilde{\beta}$ is unbiased if (and only if) $A'X = I_k$. Furthermore, we saw in (4.11) that
$$ \operatorname{var}\left(\tilde{\beta} \mid X\right) = \operatorname{var}\left(A'y \mid X\right) = A'DA = A'A\sigma^2, $$
the last equality using the homoskedasticity assumption $D = I_n\sigma^2$. The "best" unbiased linear estimator is obtained by finding the matrix $A_0$ satisfying $A_0'X = I_k$ such that $A_0'A_0$ is minimized in the positive definite sense, in that for any other matrix $A$ satisfying $A'X = I_k$, $A'A - A_0'A_0$ is positive semi-definite.

Theorem 4.6.1 Gauss-Markov
1. In the homoskedastic linear regression model (Assumption 4.3.2), the best (minimum-variance) unbiased linear estimator is the least-squares estimator
$$ \hat{\beta} = \left(X'X\right)^{-1}X'y. $$
2. In the linear regression model (Assumption 4.3.1), the best unbiased linear estimator is
$$ \tilde{\beta} = \left(X'D^{-1}X\right)^{-1}X'D^{-1}y. \qquad (4.13) $$

The first part of the Gauss-Markov theorem is a limited efficiency justification for the least-squares estimator. The justification is limited because the class of models is restricted to homoskedastic linear regression and the class of potential estimators is restricted to linear unbiased estimators. This latter restriction is particularly unsatisfactory as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator.

The second part of the theorem shows that in the (heteroskedastic) linear regression model, within the class of linear unbiased estimators the best estimator is not least-squares but is (4.13). This is called the Generalized Least Squares (GLS) estimator. The GLS estimator is infeasible as the matrix $D$ is unknown. This result does not suggest a practical alternative to least-squares. We return to the issue of feasible implementation of GLS in Section 9.1.

We give a proof of the first part of the theorem below, and leave the proof of the second part for Exercise 4.3.
Proof of Theorem 4.6.1.1. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The variance of the least-squares estimator is $(X'X)^{-1}\sigma^2$ and that of $A'y$ is $A'A\sigma^2$. It is sufficient to show that the difference $A'A - (X'X)^{-1}$ is positive semi-definite. Set $C = A - X(X'X)^{-1}$. Note that $X'C = 0$. Then we calculate that
$$
\begin{aligned}
A'A - \left(X'X\right)^{-1} &= \left(C + X\left(X'X\right)^{-1}\right)'\left(C + X\left(X'X\right)^{-1}\right) - \left(X'X\right)^{-1} \\
&= C'C + C'X\left(X'X\right)^{-1} + \left(X'X\right)^{-1}X'C + \left(X'X\right)^{-1}X'X\left(X'X\right)^{-1} - \left(X'X\right)^{-1} \\
&= C'C.
\end{aligned}
$$
The matrix $C'C$ is positive semi-definite (see Appendix A.8) as required.
4.7 Residuals

What are some properties of the residuals $\hat{e}_i = y_i - x_i'\hat{\beta}$ and prediction errors $\tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)}$, at least in the context of the linear regression model?

Recall from (3.26) that we can write the residuals in vector notation as
$$ \hat{e} = Me $$
where $M = I_n - X(X'X)^{-1}X'$ is the orthogonal projection matrix. Using the properties of conditional expectation,
$$ E(\hat{e} \mid X) = E(Me \mid X) = ME(e \mid X) = 0 $$
and
$$ \operatorname{var}(\hat{e} \mid X) = \operatorname{var}(Me \mid X) = M\operatorname{var}(e \mid X)M = MDM \qquad (4.14) $$
where $D$ is defined in (4.10).

We can simplify this expression under the assumption of conditional homoskedasticity $E\left(e_i^2 \mid x_i\right) = \sigma^2$. In this case (4.14) simplifies to
$$ \operatorname{var}(\hat{e} \mid X) = M\sigma^2. $$
In particular, for a single observation $i$, we obtain
$$ \operatorname{var}(\hat{e}_i \mid X) = E\left(\hat{e}_i^2 \mid X\right) = (1 - h_{ii})\sigma^2 \qquad (4.15) $$
since the diagonal elements of $M$ are $1 - h_{ii}$ as defined in (3.21). Thus the residuals $\hat{e}_i$ are heteroskedastic even if the errors $e_i$ are homoskedastic.

Similarly, recall from (3.40) that the prediction errors $\tilde{e}_i = (1 - h_{ii})^{-1}\hat{e}_i$ can be written in vector notation as $\tilde{e} = M^*\hat{e}$ where $M^*$ is a diagonal matrix with $i$'th diagonal element $(1 - h_{ii})^{-1}$. Thus $\tilde{e} = M^*Me$. We can calculate that
$$ E(\tilde{e} \mid X) = M^*ME(e \mid X) = 0 $$
and
$$ \operatorname{var}(\tilde{e} \mid X) = M^*M\operatorname{var}(e \mid X)MM^* = M^*MDMM^*, $$
which simplifies under homoskedasticity to
$$ \operatorname{var}(\tilde{e} \mid X) = M^*MMM^*\sigma^2 = M^*MM^*\sigma^2. $$
The variance of the $i$'th prediction error is then
$$ \operatorname{var}(\tilde{e}_i \mid X) = E\left(\tilde{e}_i^2 \mid X\right) = (1 - h_{ii})^{-1}(1 - h_{ii})(1 - h_{ii})^{-1}\sigma^2 = (1 - h_{ii})^{-1}\sigma^2. $$
A residual with constant conditional variance can be obtained by rescaling. The standardized residuals are
$$ \bar{e}_i = (1 - h_{ii})^{-1/2}\hat{e}_i, \qquad (4.16) $$
and in vector notation
$$ \bar{e} = (\bar{e}_1, \ldots, \bar{e}_n)' = M^{*1/2}Me. $$
From our above calculations, under homoskedasticity,
$$ \operatorname{var}(\bar{e} \mid X) = M^{*1/2}MM^{*1/2}\sigma^2 $$
and
$$ \operatorname{var}(\bar{e}_i \mid X) = E\left(\bar{e}_i^2 \mid X\right) = \sigma^2, \qquad (4.17) $$
and thus these standardized residuals have the same bias and variance as the original errors when the latter are homoskedastic.
4.8 Estimation of Error Variance

The error variance $\sigma^2 = Ee_i^2$ can be a parameter of interest, even in a heteroskedastic regression or a projection model. $\sigma^2$ measures the variation in the "unexplained" part of the regression. Its method of moments estimator (MME) is the sample average of the squared residuals:
$$ \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2 $$
and equals the MLE in the normal regression model (3.28).

In the linear regression model we can calculate the mean of $\hat{\sigma}^2$. From (3.26), the properties of projection matrices and the trace operator, observe that
$$ \hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\operatorname{tr}\left(e'Me\right) = \frac{1}{n}\operatorname{tr}\left(Mee'\right). $$
Then
$$ E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\operatorname{tr}\left(E\left(Mee' \mid X\right)\right) = \frac{1}{n}\operatorname{tr}\left(ME\left(ee' \mid X\right)\right) = \frac{1}{n}\operatorname{tr}(MD). \qquad (4.18) $$
Adding the assumption of conditional homoskedasticity $E\left(e_i^2 \mid x_i\right) = \sigma^2$, so that $D = I_n\sigma^2$, then (4.18) simplifies to
$$ E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\operatorname{tr}\left(M\sigma^2\right) = \sigma^2\left(\frac{n-k}{n}\right), $$
the final equality by (3.24). This calculation shows that $\hat{\sigma}^2$ is biased towards zero. The order of the bias depends on $k/n$, the ratio of the number of estimated coefficients to the sample size.

Another way to see this is to use (4.15). Note that
$$ E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n} E\left(\hat{e}_i^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})\sigma^2 = \left(\frac{n-k}{n}\right)\sigma^2, \qquad (4.19) $$
the final equality using Theorem 3.9.1.

Since the bias takes a scale form, a classic method to obtain an unbiased estimator is by rescaling the estimator. Define
$$ s^2 = \frac{1}{n-k}\sum_{i=1}^{n}\hat{e}_i^2. \qquad (4.20) $$
By the above calculation,
$$ E\left(s^2 \mid X\right) = \sigma^2 \qquad (4.21) $$
so $E\left(s^2\right) = \sigma^2$ and the estimator $s^2$ is unbiased for $\sigma^2$. Consequently, $s^2$ is known as the "bias-corrected estimator" for $\sigma^2$ and in empirical practice $s^2$ is the most widely used estimator for $\sigma^2$.

Interestingly, this is not the only method to construct an unbiased estimator for $\sigma^2$. An estimator constructed with the standardized residuals $\bar{e}_i$ from (4.16) is
$$ \bar{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\bar{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-1}\hat{e}_i^2. \qquad (4.22) $$
You can show (see Exercise 4.6) that
$$ E\left(\bar{\sigma}^2 \mid X\right) = \sigma^2 \qquad (4.23) $$
and thus $\bar{\sigma}^2$ is unbiased for $\sigma^2$ (in the homoskedastic linear regression model).

When $k/n$ is small (typically, this occurs when $n$ is large), the estimators $\hat{\sigma}^2$, $s^2$ and $\bar{\sigma}^2$ are likely to be close. However, if not then $s^2$ and $\bar{\sigma}^2$ are generally preferred to $\hat{\sigma}^2$. Consequently it is best to use one of the bias-corrected variance estimators in applications.
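As a quick sketch (simulated data; names illustrative), the three variance estimators differ only in how the squared residuals are weighted:

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 40, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.ones(k) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
e_hat = y - X @ (XtX_inv @ X.T @ y)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)

sigma2_hat = np.mean(e_hat ** 2)                 # biased towards zero, eq. (3.28)
s2 = np.sum(e_hat ** 2) / (n - k)                # bias-corrected, eq. (4.20)
sigma2_bar = np.mean(e_hat ** 2 / (1 - h))       # standardized-residual version, eq. (4.22)
print(sigma2_hat, s2, sigma2_bar)
```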
4.9 Mean-Square Forecast Error

A major purpose of estimated regressions is to predict out-of-sample values. Consider an out-of-sample observation $(y_{n+1}, x_{n+1})$ where $x_{n+1}$ will be observed but not $y_{n+1}$. Given the coefficient estimate $\hat{\beta}$, the standard point estimate of $E(y_{n+1} \mid x_{n+1}) = x_{n+1}'\beta$ is $\tilde{y}_{n+1} = x_{n+1}'\hat{\beta}$. The forecast error is the difference between the actual value $y_{n+1}$ and the point forecast, $\tilde{e}_{n+1} = y_{n+1} - \tilde{y}_{n+1}$. The mean-squared forecast error (MSFE) is
$$ MSFE_n = E\tilde{e}_{n+1}^2. $$
In the linear regression model, $\tilde{e}_{n+1} = e_{n+1} - x_{n+1}'\left(\hat{\beta} - \beta\right)$, so
$$ MSFE_n = Ee_{n+1}^2 - 2E\left(e_{n+1}x_{n+1}'\left(\hat{\beta} - \beta\right)\right) + E\left(x_{n+1}'\left(\hat{\beta} - \beta\right)\left(\hat{\beta} - \beta\right)'x_{n+1}\right). \qquad (4.24) $$
The first term in (4.24) is $\sigma^2$. The second term in (4.24) is zero since $e_{n+1}x_{n+1}'$ is independent of $\hat{\beta} - \beta$ and both are mean zero. The third term in (4.24) is
$$
\begin{aligned}
E\left(x_{n+1}'\left(\hat{\beta} - \beta\right)\left(\hat{\beta} - \beta\right)'x_{n+1}\right) &= E\operatorname{tr}\left(\left(x_{n+1}x_{n+1}'\right)\left(\hat{\beta} - \beta\right)\left(\hat{\beta} - \beta\right)'\right) \\
&= \operatorname{tr}\left(E\left(x_{n+1}x_{n+1}'\right)E\left(\left(\hat{\beta} - \beta\right)\left(\hat{\beta} - \beta\right)'\right)\right) \\
&= \frac{1}{n}\operatorname{tr}\left(E\left(x_{n+1}x_{n+1}'\right)E\left(V_{\hat{\beta}}\right)\right) \\
&= \frac{1}{n}E\left(x_{n+1}'V_{\hat{\beta}}x_{n+1}\right), \qquad (4.25)
\end{aligned}
$$
where we use the fact that $x_{n+1}$ is independent of $\hat{\beta}$ and use the definition $V_{\hat{\beta}} = \operatorname{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right) \mid X\right)$. Thus
$$ MSFE_n = \sigma^2 + \frac{1}{n}E\left(x_{n+1}'V_{\hat{\beta}}x_{n+1}\right). $$
Under conditional homoskedasticity, this simplifies to
$$ MSFE_n = \sigma^2\left(1 + E\left(x_{n+1}'\left(X'X\right)^{-1}x_{n+1}\right)\right). $$
A simple estimator for the MSFE is the average of the squared prediction errors (3.41),
$$ \tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\tilde{e}_i^2 $$
where $\tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)} = \hat{e}_i(1 - h_{ii})^{-1}$. Indeed, we can calculate that
$$ E\tilde{\sigma}^2 = E\tilde{e}_i^2 = E\left(e_i - x_i'\left(\hat{\beta}_{(-i)} - \beta\right)\right)^2 = \sigma^2 + E\left(x_i'\left(\hat{\beta}_{(-i)} - \beta\right)\left(\hat{\beta}_{(-i)} - \beta\right)'x_i\right). $$
By the same calculations as in (4.25) we find
$$ E\tilde{\sigma}^2 = \sigma^2 + \frac{1}{n-1}E\left(x_i'V_{\hat{\beta}}x_i\right) = MSFE_{n-1}. $$
This is the MSFE based on a sample of size $n-1$, rather than size $n$. The difference arises because the in-sample prediction errors $\tilde{e}_i$ for $i \le n$ are calculated using an effective sample size of $n-1$, while the out-of-sample prediction error $\tilde{e}_{n+1}$ is calculated from a sample with the full $n$ observations. Unless $n$ is very small we should expect $MSFE_{n-1}$ (the MSFE based on $n-1$ observations) to be close to $MSFE_n$ (the MSFE based on $n$ observations). Thus $\tilde{\sigma}^2$ is a reasonable estimator for $MSFE_n$.

Theorem 4.9.1 MSFE
In the linear regression model (Assumption 4.3.1)
$$ MSFE_n = E\tilde{e}_{n+1}^2 = \sigma^2 + \frac{1}{n}E\left(x_{n+1}'V_{\hat{\beta}}x_{n+1}\right) $$
where $V_{\hat{\beta}} = \operatorname{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right) \mid X\right)$. Furthermore, $\tilde{\sigma}^2$ defined in (3.41) is an unbiased estimator of $MSFE_{n-1}$:
$$ E\tilde{\sigma}^2 = MSFE_{n-1}. $$
4.10 Covariance Matrix Estimation Under Homoskedasticity

For inference, we need an estimate of the covariance matrix $V_{\hat{\beta}}$ of the least-squares estimator. In this section we consider the homoskedastic regression model (Assumption 4.3.2).

Under homoskedasticity, the covariance matrix takes the relatively simple form
$$ V_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2, $$
which is known up to the unknown scale $\sigma^2$. In Section 4.8 we discussed three estimators of $\sigma^2$. The most commonly used choice is $s^2$, leading to the classic covariance matrix estimator
$$ \hat{V}_{\hat{\beta}}^0 = \left(\frac{1}{n}X'X\right)^{-1}s^2. \qquad (4.26) $$
Since $s^2$ is conditionally unbiased for $\sigma^2$, it is simple to calculate that $\hat{V}_{\hat{\beta}}^0$ is conditionally unbiased for $V_{\hat{\beta}}$ under the assumption of homoskedasticity:
$$ E\left(\hat{V}_{\hat{\beta}}^0 \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}E\left(s^2 \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2 = V_{\hat{\beta}}. $$
This estimator was the dominant covariance matrix estimator in applied econometrics at one time, and is still the default in most regression packages.

If the estimator (4.26) is used, but the regression error is heteroskedastic, it is possible for $\hat{V}_{\hat{\beta}}^0$ to be quite biased for the correct covariance matrix $V_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}$. For example, suppose $k = 1$ and $\sigma_i^2 = x_i^2$ with $Ex_i = 0$. The ratio of the true variance of the least-squares estimator to the expectation of the variance estimator is
$$ \frac{V_{\hat{\beta}}}{E\left(\hat{V}_{\hat{\beta}}^0 \mid X\right)} = \frac{\frac{1}{n}\sum_{i=1}^{n} x_i^4}{\sigma^2\,\frac{1}{n}\sum_{i=1}^{n} x_i^2} \approx \frac{Ex_i^4}{\left(Ex_i^2\right)^2} = \kappa. $$
(Notice that we use the fact that $\sigma_i^2 = x_i^2$ implies $\sigma^2 = E\sigma_i^2 = Ex_i^2$.) The constant $\kappa$ is the standardized fourth moment (or kurtosis) of the regressor $x_i$, and can be any number greater than one. For example, if $x_i \sim N\left(0, \sigma^2\right)$ then $\kappa = 3$, so the true variance is three times larger than the homoskedastic estimator. But $\kappa$ can be much larger. Suppose, for example, that $x_i \sim \chi_1^2 - 1$. In this case $\kappa = 15$, so that the true variance is fifteen times larger than the homoskedastic estimator. While this is an extreme and constructed example, the point is that the classic covariance matrix estimator (4.26) may be quite biased when the homoskedasticity assumption fails.
4.11 Covariance Matrix Estimation Under Heteroskedasticity

In the previous section we showed that the classic covariance matrix estimator can be highly biased if homoskedasticity fails. In this section we show how to construct covariance matrix estimators which do not require homoskedasticity.

Recall that the general form for the covariance matrix is
$$ V_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}. $$
This depends on the unknown matrix $D$ which we can write as
$$ D = \operatorname{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = E\left(ee' \mid X\right) = E(D_0 \mid X) $$
where $D_0 = \operatorname{diag}\left(e_1^2, \ldots, e_n^2\right)$. Thus $D_0$ is a conditionally unbiased estimator for $D$. Therefore, if the squared errors $e_i^2$ were observable, we could construct the unbiased estimator
$$ \hat{V}_{\hat{\beta}}^{ideal} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'D_0X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'e_i^2\right)\left(\frac{1}{n}X'X\right)^{-1}. $$
Indeed,
$$
\begin{aligned}
E\left(\hat{V}_{\hat{\beta}}^{ideal} \mid X\right) &= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'E\left(e_i^2 \mid X\right)\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'\sigma_i^2\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1} = V_{\hat{\beta}},
\end{aligned}
$$
verifying that $\hat{V}_{\hat{\beta}}^{ideal}$ is unbiased for $V_{\hat{\beta}}$.

Since the errors $e_i^2$ are unobserved, $\hat{V}_{\hat{\beta}}^{ideal}$ is not a feasible estimator. To construct a feasible estimator we can replace the errors with the least-squares residuals $\hat{e}_i$, the prediction errors $\tilde{e}_i$, or the standardized residuals $\bar{e}_i$, e.g.
$$ \hat{D} = \operatorname{diag}\left(\hat{e}_1^2, \ldots, \hat{e}_n^2\right), \quad \tilde{D} = \operatorname{diag}\left(\tilde{e}_1^2, \ldots, \tilde{e}_n^2\right), \quad \bar{D} = \operatorname{diag}\left(\bar{e}_1^2, \ldots, \bar{e}_n^2\right). \qquad (4.27) $$
Substituting these matrices into the formula for $V_{\hat{\beta}}$ we obtain the estimators
$$ \hat{V}_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\hat{D}X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1}, \qquad (4.28) $$
$$ \tilde{V}_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\tilde{D}X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-2}x_ix_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1}, $$
and
$$ \bar{V}_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'\bar{D}X\right)\left(\frac{1}{n}X'X\right)^{-1} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}(1 - h_{ii})^{-1}x_ix_i'\hat{e}_i^2\right)\left(\frac{1}{n}X'X\right)^{-1}. $$
The estimators $\hat{V}_{\hat{\beta}}$, $\tilde{V}_{\hat{\beta}}$, and $\bar{V}_{\hat{\beta}}$ are often called robust, heteroskedasticity-consistent, or heteroskedasticity-robust covariance matrix estimators. The estimator $\hat{V}_{\hat{\beta}}$ was first developed by Eicker (1963), and introduced to econometrics by White (1980), and is sometimes called the Eicker-White or White covariance matrix estimator.¹ The estimator $\tilde{V}_{\hat{\beta}}$ was introduced by Andrews (1991) based on the principle of leave-one-out cross-validation, and the estimator $\bar{V}_{\hat{\beta}}$ was introduced by Horn, Horn and Duncan (1975) as a reduced-bias covariance matrix estimator.

Since $(1 - h_{ii})^{-2} > (1 - h_{ii})^{-1} > 1$ it is straightforward to show that
$$ \hat{V}_{\hat{\beta}} < \bar{V}_{\hat{\beta}} < \tilde{V}_{\hat{\beta}} \qquad (4.29) $$
(see Exercise 4.7). The inequality $A < B$ when applied to matrices means that the matrix $B - A$ is positive definite.

In general, the bias of the estimators $\hat{V}_{\hat{\beta}}$, $\tilde{V}_{\hat{\beta}}$ and $\bar{V}_{\hat{\beta}}$ is quite complicated, but it greatly simplifies under the assumption of homoskedasticity (4.3). For example, using (4.15),
$$
\begin{aligned}
E\left(\hat{V}_{\hat{\beta}} \mid X\right) &= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'E\left(\hat{e}_i^2 \mid X\right)\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'(1 - h_{ii})\sigma^2\right)\left(\frac{1}{n}X'X\right)^{-1} \\
&= \left(\frac{1}{n}X'X\right)^{-1}\sigma^2 - \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_ix_i'h_{ii}\right)\left(\frac{1}{n}X'X\right)^{-1}\sigma^2 \\
&\le \left(\frac{1}{n}X'X\right)^{-1}\sigma^2 = V_{\hat{\beta}}.
\end{aligned}
$$
This calculation shows that $\hat{V}_{\hat{\beta}}$ is biased towards zero.

Similarly (again under homoskedasticity) we can calculate that $\tilde{V}_{\hat{\beta}}$ is biased away from zero, specifically
$$ E\left(\tilde{V}_{\hat{\beta}} \mid X\right) \ge \left(\frac{1}{n}X'X\right)^{-1}\sigma^2, \qquad (4.30) $$
while the estimator $\bar{V}_{\hat{\beta}}$ is unbiased:
$$ E\left(\bar{V}_{\hat{\beta}} \mid X\right) = \left(\frac{1}{n}X'X\right)^{-1}\sigma^2. \qquad (4.31) $$
(See Exercise 4.8.)

It might seem rather odd to compare the bias of heteroskedasticity-robust estimators under the assumption of homoskedasticity, but it does give us a baseline for comparison.

We have introduced four covariance matrix estimators, $\hat{V}_{\hat{\beta}}^0$, $\hat{V}_{\hat{\beta}}$, $\tilde{V}_{\hat{\beta}}$, and $\bar{V}_{\hat{\beta}}$. Which should you use? The classic estimator $\hat{V}_{\hat{\beta}}^0$ is typically a poor choice, as it is only valid under the unlikely homoskedasticity restriction. For this reason it is not typically used in contemporary econometric research. Of the three robust estimators, $\hat{V}_{\hat{\beta}}$ is the most commonly used, as it is the most straightforward and familiar. However, $\tilde{V}_{\hat{\beta}}$ and (in particular) $\bar{V}_{\hat{\beta}}$ are preferred based on their improved bias. Unfortunately, standard regression packages set the classic estimator $\hat{V}_{\hat{\beta}}^0$ as the default. As $\tilde{V}_{\hat{\beta}}$ and $\bar{V}_{\hat{\beta}}$ are simple to implement, this should not be a barrier. For example, in STATA, $\bar{V}_{\hat{\beta}}$ is implemented by selecting "Robust" standard errors and selecting the bias correction option "$1/(1-h)$", or using the vce(hc2) option.

¹ Often, this estimator is rescaled by multiplying by the ad hoc bias adjustment $n/(n-k)$, in analogy to the bias-corrected error variance estimator.
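The three robust estimators differ only in how the squared residuals are weighted, so they can be computed together. A sketch with simulated data (illustrative names; identifying the weightings with the HC0/HC2/HC3 labels used by some software packages is a common convention I am assuming, not a claim from the text):

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
e = rng.normal(size=n) * (1 + np.abs(X[:, 1]))      # heteroskedastic errors
y = X @ np.ones(k) + e

XtX_inv = np.linalg.inv(X.T @ X)
e_hat = y - X @ (XtX_inv @ X.T @ y)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)

def sandwich(weights):
    # Estimate of var(beta_hat | X): (X'X)^{-1} [sum_i w_i * e_i^2 * x_i x_i'] (X'X)^{-1}
    # (this is n^{-1} times the V of the text's sqrt(n) scaling)
    meat = (X * (weights * e_hat**2)[:, None]).T @ X
    return XtX_inv @ meat @ XtX_inv

V_hat = sandwich(np.ones(n))              # Eicker-White, eq. (4.28); often labeled HC0
V_bar = sandwich(1 / (1 - h))             # standardized residuals; often labeled HC2
V_tilde = sandwich(1 / (1 - h)**2)        # prediction errors; often labeled HC3
print(np.sqrt(np.diag(V_hat)), np.sqrt(np.diag(V_bar)), np.sqrt(np.diag(V_tilde)))
```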
4.12 Standard Errors

A variance estimator such as $n^{-1}\hat{V}_{\hat{\beta}}$ is an estimate of the variance of the distribution of $\hat{\beta}$. A more easily interpretable measure of spread is its square root, the standard deviation. This is so important when discussing the distribution of parameter estimates, we have a special name for estimates of their standard deviation.

Definition 4.12.1 A standard error $s(\hat{\beta})$ for a real-valued estimator $\hat{\beta}$ is an estimate of the standard deviation of the distribution of $\hat{\beta}$.

When $\beta$ is a vector with estimate $\hat{\beta}$ and covariance matrix estimate $n^{-1}\hat{V}_{\hat{\beta}}$, standard errors for individual elements are the square roots of the diagonal elements of $n^{-1}\hat{V}_{\hat{\beta}}$. That is,
$$ s(\hat{\beta}_j) = \sqrt{n^{-1}\hat{V}_{\hat{\beta}_j}} = n^{-1/2}\sqrt{\left[\hat{V}_{\hat{\beta}}\right]_{jj}}. $$
As we discussed in the previous section, there are multiple possible covariance matrix estimators, so standard errors are not unique. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions.

To illustrate the computation of the covariance matrix estimate and standard errors, we return to the log wage regression (3.11) of Section 3.6. We calculate that $s^2 = 0.215$ and
$$ \frac{1}{n}\sum_{i=1}^{n} x_ix_i'\hat{e}_i^2 = \begin{pmatrix} 0.208 & 3.200 \\ 3.200 & 49.961 \end{pmatrix}. $$
Therefore the homoskedastic and White covariance matrix estimates are
$$ \hat{V}_{\hat{\beta}}^0 = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1}0.215 = \begin{pmatrix} 10.387 & -0.650 \\ -0.650 & 0.043 \end{pmatrix} $$
and
$$ \hat{V}_{\hat{\beta}} = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1}\begin{pmatrix} 0.208 & 3.200 \\ 3.200 & 49.961 \end{pmatrix}\begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1} = \begin{pmatrix} 7.092 & -0.445 \\ -0.445 & 0.029 \end{pmatrix}. $$
The standard errors are the square roots of the diagonal elements of these matrices. For example, the White standard error for $\hat{\beta}_0$ is $\sqrt{7.092/61} = 0.341$ and that for $\hat{\beta}_1$ is $\sqrt{0.029/61} = 0.022$. A conventional format to write the estimated equation with standard errors is
$$ \widehat{\log(Wage)} = \underset{(0.341)}{0.626} + \underset{(0.022)}{0.156}\ Education. $$
Alternatively, standard errors could be calculated using $\tilde{V}_{\hat{\beta}}$ or $\bar{V}_{\hat{\beta}}$. We report the four possible standard errors in the following table:

               $\sqrt{n^{-1}\hat{V}^0_{\hat\beta}}$   $\sqrt{n^{-1}\hat{V}_{\hat\beta}}$   $\sqrt{n^{-1}\tilde{V}_{\hat\beta}}$   $\sqrt{n^{-1}\bar{V}_{\hat\beta}}$
Intercept          0.412              0.341              0.361              0.351
Education          0.026              0.022              0.023              0.022

The homoskedastic standard errors are noticeably different from the others, but the three robust standard errors are quite close to one another.
4.13 Measures of Fit

As we described in the previous chapter, a commonly reported measure of regression fit is the regression $R^2$ defined as
$$ R^2 = 1 - \frac{\sum_{i=1}^{n}\hat{e}_i^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\hat{\sigma}^2}{\hat{\sigma}_y^2}, $$
where $\hat{\sigma}_y^2 = n^{-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$. $R^2$ can be viewed as an estimator of the population parameter
$$ \rho^2 = \frac{\operatorname{var}(x_i'\beta)}{\operatorname{var}(y_i)} = 1 - \frac{\sigma^2}{\sigma_y^2}. $$
However, $\hat{\sigma}^2$ and $\hat{\sigma}_y^2$ are biased estimators. Theil (1961) proposed replacing these by the unbiased versions $s^2$ and $\tilde{\sigma}_y^2 = (n-1)^{-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$, yielding what is known as R-bar-squared or adjusted R-squared:
$$ \bar{R}^2 = 1 - \frac{s^2}{\tilde{\sigma}_y^2} = 1 - \frac{(n-1)\sum_{i=1}^{n}\hat{e}_i^2}{(n-k)\sum_{i=1}^{n}(y_i - \bar{y})^2}. $$
While $\bar{R}^2$ is an improvement on $R^2$, a much better improvement is
$$ \tilde{R}^2 = 1 - \frac{\sum_{i=1}^{n}\tilde{e}_i^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\tilde{\sigma}^2}{\hat{\sigma}_y^2} $$
where $\tilde{e}_i$ are the prediction errors (3.38) and $\tilde{\sigma}^2$ is the MSPE from (3.41). As described in Section 4.9, $\tilde{\sigma}^2$ is a good estimator of the out-of-sample mean-squared forecast error, so $\tilde{R}^2$ is a good estimator of the percentage of the forecast variance which is explained by the regression forecast. In this sense, $\tilde{R}^2$ is a good measure of fit.

One problem with $R^2$, which is partially corrected by $\bar{R}^2$ and fully corrected by $\tilde{R}^2$, is that $R^2$ necessarily increases when regressors are added to a regression model. This occurs because $R^2$ is a negative function of the sum of squared residuals which cannot increase when a regressor is added. In contrast, $\bar{R}^2$ and $\tilde{R}^2$ are non-monotonic in the number of regressors. $\tilde{R}^2$ can even be negative, which occurs when an estimated model predicts worse than a constant-only model.

In the statistical literature the MSPE $\tilde{\sigma}^2$ is known as the leave-one-out cross validation criterion, and is popular for model comparison and selection, especially in high-dimensional (non-parametric) contexts. It is equivalent to use $\tilde{R}^2$ or $\tilde{\sigma}^2$ to compare and select models. Models with high $\tilde{R}^2$ (or low $\tilde{\sigma}^2$) are better models in terms of expected out-of-sample squared error. In contrast, $R^2$ cannot be used for model selection, as it necessarily increases when regressors are added to a regression model. $\bar{R}^2$ is also an inappropriate choice for model selection (it tends to select models with too many parameters), though a justification of this assertion requires a study of the theory of model selection.

In summary, it is recommended to calculate and report $\tilde{R}^2$ and/or $\tilde{\sigma}^2$ in regression analysis, and omit $R^2$ and $\bar{R}^2$.
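A compact sketch (simulated data; illustrative names) of the three fit measures, with the leave-one-out version computed via the leverage shortcut:

```python
import numpy as np

rng = np.random.default_rng(10)
n, k = 100, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
e_hat = y - X @ (XtX_inv @ X.T @ y)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
e_tilde = e_hat / (1 - h)                                    # prediction errors

tss = np.sum((y - y.mean()) ** 2)
R2 = 1 - np.sum(e_hat ** 2) / tss
R2_bar = 1 - (n - 1) * np.sum(e_hat ** 2) / ((n - k) * tss)  # adjusted R-squared
R2_tilde = 1 - np.sum(e_tilde ** 2) / tss                    # leave-one-out R-squared
print(R2, R2_bar, R2_tilde)
```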
Henri Theil
Henri Theil (1924-2000) of Holland invented $\bar{R}^2$ and two-stage least squares, both of which are routinely seen in applied econometrics. He also wrote an early and influential advanced textbook on econometrics (Theil, 1971).
4.14 Empirical Example

We again return to our wage equation, but use an extended sample of non-military wage earners with at least 12 years of education. For regressors we include years of education, potential work experience, experience squared, and dummy variable indicators for the following: female, female union member, male union member, female married, male married, hispanic, and non-white. The available sample is 46,943 so the parameter estimates are quite precise and reported in Table 5.1.

Table 5.1 displays the parameter estimates in a standard tabular format. The table clearly states the estimation method (OLS), the dependent variable (log(Wage)), and the regressors are clearly labeled. Both parameter estimates and standard errors are reported for all coefficients. In addition to the coefficient estimates, the table also reports the estimated error standard deviation and the sample size. These are useful summary measures of fit which aid readers.

Table 5.1
OLS Estimates of Linear Equation for Log(Wage)

                            $\hat{\beta}$    $s(\hat{\beta})$
Intercept                     0.915          0.021
Education                     0.118          0.001
Experience                    0.034          0.001
Experience$^2$/100           -0.057          0.002
Female                       -0.129          0.009
Female Union Member           0.022          0.020
Male Union Member             0.095          0.020
Married Female                0.016          0.008
Married Male                  0.180          0.008
Hispanic                     -0.110          0.008
Non-White                    -0.075          0.007
$\hat{\sigma}$                0.5659
Sample Size                  46,943

Note: Standard errors are heteroskedasticity-consistent

As a general rule, it is advisable to always report standard errors along with parameter estimates. This allows readers to assess the precision of the parameter estimates, and as we will discuss in later chapters, form confidence intervals and t-tests for individual coefficients if desired.

The results in Table 5.1 confirm our earlier findings that the return to a year of education is approximately 12%, the return to experience is concave, that women earn approximately 13% less than men, and non-whites earn about 7% less than whites. In addition, we see that there are wage premiums for being a member of a labor union or being married, but the premiums appear to be much larger for men than for women.
4.15 Multicollinearity
If $\boldsymbol{X}'\boldsymbol{X}$ is singular, then $(\boldsymbol{X}'\boldsymbol{X})^{-1}$ and $\hat{\boldsymbol{\beta}}$ are not defined. This situation is called strict multicollinearity, as the columns of $\boldsymbol{X}$ are linearly dependent, i.e., there is some $\boldsymbol{\alpha} \neq \boldsymbol{0}$ such that $\boldsymbol{X\alpha} = \boldsymbol{0}$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $\boldsymbol{X}$ includes both the logs of two prices and the log of the relative price, $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$, then $\boldsymbol{X}'\boldsymbol{X}$ will necessarily be singular. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(\boldsymbol{X}'\boldsymbol{X})^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.
The more relevant situation is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $\boldsymbol{X}'\boldsymbol{X}$ matrix is near singular, when the columns of $\boldsymbol{X}$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.
One potential complication of near singularity of matrices is that the numerical reliability of the calculations may be reduced. In practice this is rarely an important concern, except when the number of regressors is very large.
A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors
$$ y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i, $$
and
$$ \frac{1}{n}\boldsymbol{X}'\boldsymbol{X} = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}. $$
In this case
$$ \operatorname{var}\left(\hat{\boldsymbol{\beta}} \mid \boldsymbol{X}\right) = \frac{\sigma^2}{n}\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}^{-1} = \frac{\sigma^2}{n\left(1-\rho^2\right)}\begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}. $$
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the variance of a coefficient estimate $\sigma^2\left[n\left(1-\rho^2\right)\right]^{-1}$ approaches infinity as $\rho$ approaches 1. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.
What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of the individual estimates is reduced. The imprecision, however, will be reflected in large standard errors, so there is no distortion in inference.
Some earlier textbooks overemphasized a concern about multicollinearity. A very amusing parody of these texts appeared in Chapter 23.3 of Goldberger's A Course in Econometrics (1991), which is reprinted below. To understand his basic point, you should notice how the estimation variance $\sigma^2\left[n\left(1-\rho^2\right)\right]^{-1}$ depends equally and symmetrically on the correlation $\rho$ and the sample size $n$.
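The following short simulation is not part of the original text; it is a minimal Python/NumPy sketch (variable names and the design are illustrative choices) comparing the Monte Carlo variance of the first OLS slope with the approximation $\sigma^2\left[n\left(1-\rho^2\right)\right]^{-1}$ as the regressor correlation rises or the sample size grows.

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0

def mc_var_beta1(n, rho, reps=2000):
    # Monte Carlo variance of the first OLS slope when the two regressors
    # have unit variances and correlation rho, with homoskedastic errors.
    chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    b1 = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((n, 2)) @ chol.T
        e = np.sqrt(sigma2) * rng.standard_normal(n)
        y = X @ np.array([1.0, 1.0]) + e            # true beta1 = beta2 = 1
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        b1[r] = beta_hat[0]
    return b1.var()

for n, rho in [(100, 0.0), (100, 0.9), (100, 0.99), (400, 0.99)]:
    approx = sigma2 / (n * (1 - rho**2))            # formula from the text
    print(f"n={n:4d} rho={rho:4.2f}  MC var={mc_var_beta1(n, rho):.4f}  formula={approx:.4f}")

Doubling the collinearity-adjusted factor $n(1-\rho^2)$, whether through a larger sample or a smaller correlation, has the same effect on precision, which is the point of the parody that follows.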
Arthur S. Goldberger
Art Goldberger (1930-2009) was one of the most distinguished members
of the Department of Economics at the University of Wisconsin. His PhD
thesis developed an early macroeconometric forecasting model (known as the
Klein-Goldberger model) but most of his career focused on microeconometric
issues. He was the leading pioneer of what has been called the Wisconsin
Tradition of empirical work – a combination of formal econometric theory
with a careful critical analysis of empirical work. Goldberger wrote a series
of highly regarded and influential graduate econometric textbooks, including Econometric Theory (1964), Topics in Regression Analysis (1968), and A Course in Econometrics (1991).
Micronumerosity
Arthur S. Goldberger
A Course in Econometrics (1991), Chapter 23.3
Econometrics texts devote many pages to the problem of multicollinearity in multiple regression, but they say little about the closely analogous problem of small sample size in estimating a univariate mean. Perhaps that imbalance is attributable to the lack of an exotic polysyllabic name for "small sample size." If so, we can remove that impediment by introducing the term micronumerosity.
Suppose an econometrician set out to write a chapter about small sample size in sampling from a univariate population. Judging from what is now written about multicollinearity, the chapter might look like this:

1. Micronumerosity
The extreme case, "exact micronumerosity," arises when $n = 0$, in which case the sample estimate of $\mu$ is not unique. (Technically, there is a violation of the rank condition $n > 0$: the matrix $0$ is singular.) The extreme case is easy enough to recognize. "Near micronumerosity" is more subtle, and yet very serious. It arises when the rank condition $n > 0$ is barely satisfied. Near micronumerosity is very prevalent in empirical economics.

2. Consequences of micronumerosity
The consequences of micronumerosity are serious. Precision of estimation is reduced. There are two aspects of this reduction: estimates of $\mu$ may have large errors, and not only that, but $V_{\bar{y}}$ will be large.
Investigators will sometimes be led to accept the hypothesis $\mu = 0$ because $\bar{y}/\hat{\sigma}_{\bar{y}}$ is small, even though the true situation may be not that $\mu = 0$ but simply that the sample data have not enabled us to pick $\mu$ up.
The estimate of $\mu$ will be very sensitive to sample data, and the addition of a few more observations can sometimes produce drastic shifts in the sample mean.
The true $\mu$ may be sufficiently large for the null hypothesis $\mu = 0$ to be rejected, even though $V_{\bar{y}} = \sigma^2/n$ is large because of micronumerosity. But if the true $\mu$ is small (although nonzero) the hypothesis $\mu = 0$ may mistakenly be accepted.

3. Testing for micronumerosity
Tests for the presence of micronumerosity require the judicious use of various fingers. Some researchers prefer a single finger, others use their toes, still others let their thumbs rule.
A generally reliable guide may be obtained by counting the number of observations. Most of the time in econometric analysis, when $n$ is close to zero, it is also far from infinity.
Several test procedures develop critical values $n^*$, such that micronumerosity is a problem only if $n$ is smaller than $n^*$. But those procedures are questionable.

4. Remedies for micronumerosity
If micronumerosity proves serious in the sense that the estimate of $\mu$ has an unsatisfactorily low degree of precision, we are in the statistical position of not being able to make bricks without straw. The remedy lies essentially in the acquisition, if possible, of larger samples from the same population.
But more data are no remedy for micronumerosity if the additional data are simply "more of the same." So obtaining lots of small samples from the same population will not help.
4.16 Normal Regression Model
In the special case of the normal linear regression model introduced in Section 3.17, we can derive
exact sampling distributions for the least-squares estimator, residuals, and variance estimator.
In particular, under the normality assumption c
i
[ i
i
~ N
_
0, o
2
_
then we have the multivariate
implication
c [ A ~ N
_
0, 1
a
o
2
_
.
That is, the error vector $\boldsymbol{e}$ is independent of $\boldsymbol{X}$ and is normally distributed. Since linear functions of normals are also normal, this implies that conditional on $\boldsymbol{X}$
$$ \begin{pmatrix} \hat{\boldsymbol{\beta}} - \boldsymbol{\beta} \\ \hat{\boldsymbol{e}} \end{pmatrix} = \begin{pmatrix} \left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1}\boldsymbol{X}' \\ \boldsymbol{M} \end{pmatrix}\boldsymbol{e} \sim \mathrm{N}\left(\boldsymbol{0}, \begin{pmatrix} \sigma^2\left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1} & \boldsymbol{0} \\ \boldsymbol{0} & \sigma^2\boldsymbol{M} \end{pmatrix}\right) $$
where $\boldsymbol{M} = \boldsymbol{I}_n - \boldsymbol{X}\left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1}\boldsymbol{X}'$. Since uncorrelated normal variables are independent, it follows that $\hat{\boldsymbol{\beta}}$ is independent of any function of the OLS residuals, including the estimated error variance $s^2$ or $\hat{\sigma}^2$ or the prediction errors $\tilde{\boldsymbol{e}}$.
The spectral decomposition (see equation (A.5)) of $\boldsymbol{M}$ yields
$$ \boldsymbol{M} = \boldsymbol{H}\begin{pmatrix} \boldsymbol{I}_{n-k} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} \end{pmatrix}\boldsymbol{H}' $$
where $\boldsymbol{H}'\boldsymbol{H} = \boldsymbol{I}_n$. Let $\boldsymbol{u} = \sigma^{-1}\boldsymbol{H}'\boldsymbol{e} \sim \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{H}'\boldsymbol{H}\right) \sim \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{I}_n\right)$. Then
$$ \frac{n\hat{\sigma}^2}{\sigma^2} = \frac{(n-k)s^2}{\sigma^2} = \frac{1}{\sigma^2}\hat{\boldsymbol{e}}'\hat{\boldsymbol{e}} = \frac{1}{\sigma^2}\boldsymbol{e}'\boldsymbol{M}\boldsymbol{e} = \frac{1}{\sigma^2}\boldsymbol{e}'\boldsymbol{H}\begin{pmatrix} \boldsymbol{I}_{n-k} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} \end{pmatrix}\boldsymbol{H}'\boldsymbol{e} = \boldsymbol{u}'\begin{pmatrix} \boldsymbol{I}_{n-k} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} \end{pmatrix}\boldsymbol{u} \sim \chi^2_{n-k}, $$
a chi-square distribution with $n-k$ degrees of freedom.
Furthermore, if standard errors are calculated using the homoskedastic formula (4.26),
$$ \frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} = \frac{\hat{\beta}_j - \beta_j}{s\sqrt{\left[\left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1}\right]_{jj}}} \sim \frac{\mathrm{N}\left(0, \sigma^2\left[\left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1}\right]_{jj}\right)}{\sqrt{\dfrac{\sigma^2}{n-k}\chi^2_{n-k}\left[\left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1}\right]_{jj}}} = \frac{\mathrm{N}\left(0,1\right)}{\sqrt{\dfrac{\chi^2_{n-k}}{n-k}}} \sim t_{n-k}, $$
a $t$ distribution with $n-k$ degrees of freedom.
Theorem 4.16.1 Normal Regression
In the linear regression model (Assumption 4.3.1), if $e_i$ is independent of $\boldsymbol{x}_i$ and distributed $\mathrm{N}\left(0, \sigma^2\right)$ then
• $\hat{\boldsymbol{\beta}} - \boldsymbol{\beta} \sim \mathrm{N}\left(\boldsymbol{0}, \sigma^2\left(\boldsymbol{X}'\boldsymbol{X}\right)^{-1}\right)$
• $\dfrac{n\hat{\sigma}^2}{\sigma^2} = \dfrac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k}$
• $\dfrac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} \sim t_{n-k}$
These are the exact finite-sample distributions of the least-squares estimator and variance estimators, and are the basis for traditional inference in linear regression.
While elegant, the difficulty in applying Theorem 4.16.1 is that the normality assumption is too restrictive to be empirically plausible, and therefore inference based on Theorem 4.16.1 has no guarantee of accuracy. We develop an alternative inference theory based on large sample (asymptotic) approximations in the following chapter.
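To complement Theorem 4.16.1, the following simulation is not part of the original text; it is a hedged Python sketch (the design matrix, parameter values, and sample size are arbitrary illustrative choices, and SciPy is used only for reference quantiles) checking that the homoskedastic t-ratio matches the $t_{n-k}$ distribution under normal errors.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k, sigma, reps = 30, 3, 2.0, 5000
beta = np.array([1.0, 0.5, -0.25])
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])  # fixed regressors
XtX_inv = np.linalg.inv(X.T @ X)

tstats = np.empty(reps)
for r in range(reps):
    y = X @ beta + sigma * rng.standard_normal(n)
    bhat = XtX_inv @ (X.T @ y)
    ehat = y - X @ bhat
    s2 = ehat @ ehat / (n - k)                 # homoskedastic variance estimate
    se1 = np.sqrt(s2 * XtX_inv[1, 1])          # standard error of the second coefficient
    tstats[r] = (bhat[1] - beta[1]) / se1

# Compare simulated quantiles with the exact t_{n-k} quantiles.
for q in (0.05, 0.5, 0.95):
    print(q, np.quantile(tstats, q).round(3), stats.t.ppf(q, df=n - k).round(3))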
William Gosset
William S. Gosset (1876-1937) of England is most famous for his derivation of the student's t distribution, published in the paper "The probable error of a mean" in 1908. At the time, Gosset worked at Guinness Brewery, which prohibited its employees from publishing in order to prevent the possible loss of trade secrets. To circumvent this barrier, Gosset published under the pseudonym "Student". Consequently, this famous distribution is known as the student's t rather than Gosset's t!
Exercises
Exercise 4.1 Explain the difference between $\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{x}_i\boldsymbol{x}_i'$ and $\mathrm{E}\left(\boldsymbol{x}_i\boldsymbol{x}_i'\right)$.
Exercise 4.2 True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $\mathrm{E}\left(e_i \mid x_i\right) = 0$, and $\hat{e}_i$ is the OLS residual from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^{n} x_i^2\hat{e}_i = 0$.
Exercise 4.3 Prove Theorem 4.6.1.2.
Exercise 4.4 In a linear model
$$ \boldsymbol{y} = \boldsymbol{X\beta} + \boldsymbol{e}, \qquad \mathrm{E}\left(\boldsymbol{e} \mid \boldsymbol{X}\right) = \boldsymbol{0}, \qquad \operatorname{var}\left(\boldsymbol{e} \mid \boldsymbol{X}\right) = \sigma^2\boldsymbol{\Sigma} $$
with $\boldsymbol{\Sigma}$ a known function of $\boldsymbol{X}$, the GLS estimator is
$$ \widetilde{\boldsymbol{\beta}} = \left(\boldsymbol{X}'\boldsymbol{\Sigma}^{-1}\boldsymbol{X}\right)^{-1}\left(\boldsymbol{X}'\boldsymbol{\Sigma}^{-1}\boldsymbol{y}\right), $$
the residual vector is $\hat{\boldsymbol{e}} = \boldsymbol{y} - \boldsymbol{X}\widetilde{\boldsymbol{\beta}}$, and an estimate of $\sigma^2$ is
$$ s^2 = \frac{1}{n-k}\hat{\boldsymbol{e}}'\boldsymbol{\Sigma}^{-1}\hat{\boldsymbol{e}}. $$
(a) Find $\mathrm{E}\left(\widetilde{\boldsymbol{\beta}} \mid \boldsymbol{X}\right)$.
(b) Find $\operatorname{var}\left(\widetilde{\boldsymbol{\beta}} \mid \boldsymbol{X}\right)$.
(c) Prove that $\hat{\boldsymbol{e}} = \boldsymbol{M}_1\boldsymbol{e}$, where $\boldsymbol{M}_1 = \boldsymbol{I} - \boldsymbol{X}\left(\boldsymbol{X}'\boldsymbol{\Sigma}^{-1}\boldsymbol{X}\right)^{-1}\boldsymbol{X}'\boldsymbol{\Sigma}^{-1}$.
(d) Prove that $\boldsymbol{M}_1'\boldsymbol{\Sigma}^{-1}\boldsymbol{M}_1 = \boldsymbol{\Sigma}^{-1} - \boldsymbol{\Sigma}^{-1}\boldsymbol{X}\left(\boldsymbol{X}'\boldsymbol{\Sigma}^{-1}\boldsymbol{X}\right)^{-1}\boldsymbol{X}'\boldsymbol{\Sigma}^{-1}$.
(e) Find $\mathrm{E}\left(s^2 \mid \boldsymbol{X}\right)$.
(f) Is $s^2$ a reasonable estimator for $\sigma^2$?
Exercise 4.5 Let $\left(y_i, \boldsymbol{x}_i\right)$ be a random sample with $\mathrm{E}\left(\boldsymbol{y} \mid \boldsymbol{X}\right) = \boldsymbol{X\beta}$. Consider the Weighted Least Squares (WLS) estimator of $\boldsymbol{\beta}$
$$ \widetilde{\boldsymbol{\beta}} = \left(\boldsymbol{X}'\boldsymbol{W}\boldsymbol{X}\right)^{-1}\left(\boldsymbol{X}'\boldsymbol{W}\boldsymbol{y}\right) $$
where $\boldsymbol{W} = \operatorname{diag}\left(w_1, \ldots, w_n\right)$ and $w_i = x_{ji}^{-2}$, where $x_{ji}$ is one of the $\boldsymbol{x}_i$.
(a) In which contexts would $\widetilde{\boldsymbol{\beta}}$ be a good estimator?
(b) Using your intuition, in which situations would you expect that $\widetilde{\boldsymbol{\beta}}$ would perform better than OLS?
Exercise 4.6 Show (4.23) in the homoskedastic regression model.
Exercise 4.7 Prove (4.29).
Exercise 4.8 Show (4.30) and (4.31) in the homoskedastic regression model.
Chapter 5
An Introduction to Large Sample
Asymptotics
5.1 Introduction
In Chapter 4 we derived the mean and variance of the least-squares estimator in the context of
the linear regression model, but this is not a complete description of the sampling distribution, nor
sufficient for inference (confidence intervals and hypothesis testing) on the unknown parameters. Furthermore, the theory does not apply in the context of the linear projection model, which is more
relevant for empirical applications.
Figure 5.1: Sampling Density of $\hat{\beta}$
To illustrate the situation with an example, let $y_i$ and $x_i$ be drawn from the joint density
$$ f(x, y) = \frac{1}{2\pi x y}\exp\left(-\frac{1}{2}\left(\log y - \log x\right)^2\right)\exp\left(-\frac{1}{2}\left(\log x\right)^2\right) $$
and let $\hat{\beta}$ be the slope coefficient estimate from a least-squares regression of $y_i$ on $x_i$ and a constant. Using simulation methods, the density function of $\hat{\beta}$ was computed and plotted in Figure 5.1 for sample sizes of $n = 25$, $n = 100$ and $n = 800$. The vertical line marks the true projection coefficient.
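A simulation along the lines used to construct Figure 5.1 can be sketched as follows; this Python code is not part of the original text and the replication count is an arbitrary choice. The joint density above corresponds to $\log x \sim \mathrm{N}(0,1)$ and $\log y \mid x \sim \mathrm{N}(\log x, 1)$.

import numpy as np

rng = np.random.default_rng(2)

def slope_estimates(n, reps=5000):
    # Draw (x, y) from the joint density in the text, then regress y on x and a constant.
    betas = np.empty(reps)
    for r in range(reps):
        u = rng.standard_normal(n)
        v = rng.standard_normal(n)
        x = np.exp(u)
        y = np.exp(u + v)
        X = np.column_stack([np.ones(n), x])
        betas[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return betas

# Under this design the population projection slope is cov(x, y)/var(x) = exp(1/2).
for n in (25, 100, 800):
    b = slope_estimates(n)
    print(n, b.mean().round(3), b.std().round(3), np.quantile(b, [0.05, 0.95]).round(3))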
From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. Is there a simple way to characterize the sampling distribution of $\hat{\beta}$?
In principle the sampling distribution of $\hat{\beta}$ is a function of the joint distribution of $\left(y_i, x_i\right)$ and the sample size $n$, but in practice this function is extremely complicated, so it is not feasible to analytically calculate the exact distribution of $\hat{\beta}$ except in very special cases. Therefore we typically rely on approximation methods.
The most widely used and versatile method is asymptotic theory, which approximates sampling distributions by taking the limit of the finite sample distribution as the sample size $n$ tends to infinity. It is important to understand that this is an approximation technique, as the asymptotic distributions are used to assess the finite sample distributions of our estimators in actual practical samples. The primary tools of asymptotic theory are the weak law of large numbers (WLLN), central limit theorem (CLT), and continuous mapping theorem (CMT). With these tools we can approximate the sampling distributions of most econometric estimators.
In this chapter we provide a concise summary. It will be useful for most students to review this material, even if most is familiar.
5.2 Asymptotic Limits
"Asymptotic analysis" is a method of approximation obtained by taking a suitable limit. There is more than one method to take limits, but the most common method in statistics and econometrics is to approximate sampling distributions by taking the limit as the sample size tends to positive infinity, written "as $n \to \infty$." It is not meant to be interpreted literally, but rather as an approximating device.
The first building block for asymptotic analysis is the concept of a limit of a sequence.
Definition 5.2.1 A sequence $a_n$ has the limit $a$, written $a_n \to a$ as $n \to \infty$, or alternatively as $\lim_{n\to\infty} a_n = a$, if for all $\delta > 0$ there is some $n_\delta < \infty$ such that for all $n \geq n_\delta$, $\left|a_n - a\right| \leq \delta$.
In words, $a_n$ has the limit $a$ if the sequence gets closer and closer to $a$ as $n$ gets larger. If a sequence has a limit, that limit is unique (a sequence cannot have two distinct limits). If $a_n$ has the limit $a$, we also say that $a_n$ converges to $a$ as $n \to \infty$.
Not all sequences have limits. For example, the sequence $\{1, 2, 1, 2, 1, 2, \ldots\}$ does not have a limit. It is therefore sometimes useful to have a more general definition of limits which always exist, and these are the limit superior and limit inferior of a sequence.
Definition 5.2.2 $\liminf_{n\to\infty} a_n = \lim_{n\to\infty} \inf_{m \geq n} a_m$
Definition 5.2.3 $\limsup_{n\to\infty} a_n = \lim_{n\to\infty} \sup_{m \geq n} a_m$
The limit inferior and limit superior always exist, and equal the limit when the limit exists. In the example given earlier, the limit inferior of $\{1, 2, 1, 2, 1, 2, \ldots\}$ is 1, and the limit superior is 2.
5.3 Convergence in Probability
A sequence of numbers may converge to a limit, but what about a sequence of random variables? For example, consider a sample mean $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$ based on a random sample of $n$ observations. As $n$ increases, the distribution of $\bar{y}$ changes. In what sense can we describe the "limit" of $\bar{y}$? In what sense does it converge?
Since $\bar{y}$ is a random variable, we cannot directly apply the deterministic concept of a sequence of numbers. Instead, we require a definition of convergence which is appropriate for random variables. There is more than one such definition, but the most commonly used is called convergence in probability.
Definition 5.3.1 A random variable $z_n \in \mathbb{R}$ converges in probability to $z$ as $n \to \infty$, denoted $z_n \xrightarrow{p} z$, or alternatively $\operatorname{plim}_{n\to\infty} z_n = z$, if for all $\delta > 0$,
$$ \lim_{n\to\infty} \Pr\left(\left|z_n - z\right| \leq \delta\right) = 1. \tag{5.1} $$
We call $z$ the probability limit (or plim) of $z_n$.
The definition looks quite abstract, but it formalizes the concept of a sequence of random variables concentrating about a point. The event $\left\{\left|z_n - z\right| \leq \delta\right\}$ occurs when $z_n$ is within $\delta$ of the point $z$. $\Pr\left(\left|z_n - z\right| \leq \delta\right)$ is the probability of this event – that $z_n$ is within $\delta$ of the point $z$. Equation (5.1) states that this probability approaches 1 as the sample size $n$ increases. The definition of convergence in probability requires that this holds for any $\delta$. So for any small interval about $z$ the distribution of $z_n$ concentrates within this interval for large $n$.
You may notice that the definition concerns the distribution of the random variables $z_n$, not their realizations. Furthermore, notice that the definition uses the concept of a conventional (deterministic) limit, but the latter is applied to a sequence of probabilities, not directly to the random variables $z_n$ or their realizations.
When $z_n \xrightarrow{p} z$ we call $z$ the probability limit (or plim) of $z_n$.
Two comments about the notation are worth mentioning. First, it is conventional to write the convergence symbol as $\xrightarrow{p}$, where the "$p$" above the arrow indicates that the convergence is "in probability". You should try and adhere to this notation, and not simply write $z_n \to z$. Second, it is also important to include the phrase "as $n \to \infty$" to be specific about how the limit is obtained.
It is common to confuse convergence in probability with convergence in expectation:
$$ \mathrm{E}z_n \to \mathrm{E}z. \tag{5.2} $$
They are related but distinct concepts. Neither (5.1) nor (5.2) implies the other.
To see the distinction it might be helpful to think through a stylized example. Consider a discrete random variable $z_n$ which takes the value 0 with probability $1 - n^{-1}$ and the value $a_n \neq 0$ with probability $n^{-1}$, or
$$ \Pr\left(z_n = 0\right) = 1 - \frac{1}{n} \tag{5.3} $$
$$ \Pr\left(z_n = a_n\right) = \frac{1}{n}. $$
In this example the probability distribution of $z_n$ concentrates at zero as $n$ increases, regardless of the sequence $a_n$. You can check that $z_n \xrightarrow{p} 0$ as $n \to \infty$.
In this example we can also calculate that the expectation of $z_n$ is
$$ \mathrm{E}z_n = \frac{a_n}{n}. $$
Despite the fact that $z_n$ converges in probability to zero, its expectation will not decrease to zero unless $a_n/n \to 0$. If $a_n$ diverges to infinity at a rate equal to $n$ (or faster) then $\mathrm{E}z_n$ will not converge to zero. For example, if $a_n = n$, then $\mathrm{E}z_n = 1$ for all $n$, even though $z_n \xrightarrow{p} 0$. This example might seem a bit artificial, but the point is that the concepts of convergence in probability and convergence in expectation are distinct, so it is important not to confuse one with the other.
Another common source of confusion with the notation surrounding probability limits is that the expression to the right of the arrow "$\xrightarrow{p}$" must be free of dependence on the sample size $n$. Thus expressions of the form "$z_n \xrightarrow{p} c_n$" are notationally meaningless and should not be used.
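A tiny simulation, not part of the original text, makes the distinction concrete for the case $a_n = n$ (the replication count is an arbitrary choice): the simulated probability that $z_n$ is nonzero vanishes while the sample average of $z_n$ stays near one.

import numpy as np

rng = np.random.default_rng(3)
for n in (10, 100, 1000, 10000):
    # z_n equals 0 with probability 1 - 1/n and equals a_n = n with probability 1/n.
    draws = np.where(rng.random(100_000) < 1.0 / n, n, 0.0)
    print(n, (draws != 0).mean(), draws.mean())
# The first number after n (the fraction of nonzero draws) shrinks toward zero,
# while the second (an estimate of E z_n) stays near 1.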
5.4 Weak Law of Large Numbers
In large samples we expect parameter estimates to be close to the population values. For example, in Section 4.2 we saw that the sample mean $\bar{y}$ is unbiased for $\mu = \mathrm{E}y$ and has variance $\sigma^2/n$. As $n$ gets large its variance decreases and thus the distribution of $\bar{y}$ concentrates about the population mean $\mu$. It turns out that this implies that the sample mean converges in probability to the population mean.
When $y$ has a finite variance there is a fairly straightforward proof by applying Chebyshev's inequality.
Theorem 5.4.1 Chebyshev's Inequality. For any random variable $z_n$ and constant $\delta > 0$,
$$ \Pr\left(\left|z_n - \mathrm{E}z_n\right| > \delta\right) \leq \frac{\operatorname{var}\left(z_n\right)}{\delta^2}. $$
Chebyshev's inequality is terrifically important in asymptotic theory. While its proof is a technical exercise in probability theory, it is quite simple so we discuss it forthwith. Let $F_n(u)$ denote the distribution of $z_n - \mathrm{E}z_n$. Then
$$ \Pr\left(\left|z_n - \mathrm{E}z_n\right| > \delta\right) = \Pr\left(\left(z_n - \mathrm{E}z_n\right)^2 > \delta^2\right) = \int_{\left\{u^2 > \delta^2\right\}} dF_n(u). $$
The integral is over the event $\left\{u^2 > \delta^2\right\}$, so that the inequality $1 \leq \frac{u^2}{\delta^2}$ holds throughout. Thus
$$ \int_{\left\{u^2 > \delta^2\right\}} dF_n(u) \leq \int_{\left\{u^2 > \delta^2\right\}} \frac{u^2}{\delta^2}\, dF_n(u) \leq \int \frac{u^2}{\delta^2}\, dF_n(u) = \frac{\mathrm{E}\left(z_n - \mathrm{E}z_n\right)^2}{\delta^2} = \frac{\operatorname{var}\left(z_n\right)}{\delta^2}, $$
which establishes the desired inequality.
Applied to the sample mean $\bar{y}$, Chebyshev's inequality shows that for any $\delta > 0$,
$$ \Pr\left(\left|\bar{y} - \mathrm{E}\bar{y}\right| > \delta\right) \leq \frac{\sigma^2}{n\delta^2}. $$
For fixed $\sigma^2$ and $\delta$, the bound on the right-hand side shrinks to zero as $n \to \infty$. Thus the probability that $\bar{y}$ is within $\delta$ of $\mathrm{E}\bar{y} = \mu$ approaches 1 as $n$ gets large, or
$$ \lim_{n\to\infty} \Pr\left(\left|\bar{y} - \mu\right| \leq \delta\right) = 1. $$
This means that $\bar{y}$ converges in probability to $\mu$ as $n \to \infty$.
This result is called the weak law of large numbers. Our derivation assumed that $y$ has a finite variance, but all that is necessary is for $y$ to have a finite mean.
Theorem 5.4.2 Weak Law of Large Numbers (WLLN)
If $y_i$ are independent and identically distributed and $\mathrm{E}\left|y\right| < \infty$, then as $n \to \infty$,
$$ \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \xrightarrow{p} \mathrm{E}(y). $$
The proof of Theorem 5.4.2 is presented in Section 5.14.
The WLLN shows that the estimator $\bar{y}$ converges in probability to the true population mean $\mu$. In general, an estimator which converges in probability to the population value is called consistent.
Definition 5.4.1 An estimator $\hat{\theta}$ of a parameter $\theta$ is consistent if $\hat{\theta} \xrightarrow{p} \theta$ as $n \to \infty$.
Consistency is a good property for an estimator to possess. It means that for any given data distribution, there is a sample size $n$ sufficiently large such that the estimator $\hat{\theta}$ will be arbitrarily close to the true value $\theta$ with high probability. Unfortunately it does not mean that $\hat{\theta}$ will actually be close to $\theta$ in a given finite sample, but it is a minimal property for an estimator to be considered a "good" estimator.
Theorem 5.4.3 If $y_i$ are independent and identically distributed and $\mathrm{E}\left|y\right| < \infty$, then $\hat{\mu} = \bar{y}$ is consistent for the population mean $\mu$.
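The following short simulation is not part of the original text; it is a Python sketch (with an arbitrary skewed data distribution and replication count) illustrating both the WLLN and the Chebyshev bound used in its derivation: the fraction of sample means farther than $\delta$ from $\mu$ shrinks with $n$ and never exceeds $\sigma^2/(n\delta^2)$.

import numpy as np

rng = np.random.default_rng(4)
mu, sigma2, delta, reps = 2.0, 3.0, 0.25, 20_000
for n in (25, 100, 400, 1600):
    # Exponential draws recentered and rescaled to have mean mu and variance sigma2.
    y = mu + np.sqrt(sigma2) * (rng.exponential(1.0, size=(reps, n)) - 1.0)
    ybar = y.mean(axis=1)
    freq = (np.abs(ybar - mu) > delta).mean()
    bound = sigma2 / (n * delta**2)
    print(f"n={n:5d}  Pr(|ybar-mu|>delta) ~ {freq:.4f}  Chebyshev bound {bound:.4f}")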
5.5 Almost Sure Convergence and the Strong Law*
Convergence in probability is sometimes called weak convergence. A related concept is
almost sure convergence, also known as strong convergence. (In probability theory the term
“almost sure” means “with probability equal to one”. An event which is random but occurs with
probability equal to one is said to be almost sure.)
De…nition 5.5.1 A random variable .
a
¸ R converges almost surely
to . as : ÷·, denoted .
a
o.c.
÷÷., if for every c 0
Ii
_
lim
a÷o
[.
a
÷.[ _ c
_
= 1. (5.4)
The convergence (5.4) is stronger than (5.1) because it computes the probability of a limit
rather than the limit of a probability. Almost sure convergence is stronger than convergence in
probability in the sense that .
a
o.c.
÷÷. implies .
a
j
÷÷..
In the example (5.3) of Section 5.3, the sequence .
a
converges in probability to zero for any
sequence a
a
, but this is not su¢cient for .
a
to converge almost surely. In order for .
a
to converge
to zero almost surely, it is necessary that a
a
÷0.
In the random sampling context the sample mean can be shown to converge almost surely to
the population mean. This is called the strong law of large numbers.
Theorem 5.5.1 Strong Law of Large Numbers (SLLN)
If j
i
are independent and identically distributed and E[j[ < ·, then as
: ÷·,
j =
1
:
a

i=1
j
i
o.c.
÷÷E(j).
The proof of the SLLN is technically quite advanced so is not presented here. For a proof see
Billingsley (1995, Section 22) or Ash (1972, Theorem 7.2.5).
The WLLN is su¢cient for most purposes in econometrics, so we will not use the SLLN in this
text.
5.6 Vector-Valued Moments
Our preceding discussion focused on the case where $y$ is real-valued (a scalar), but nothing important changes if we generalize to the case where $\boldsymbol{y} \in \mathbb{R}^m$ is a vector. To fix notation, the elements of $\boldsymbol{y}$ are
$$ \boldsymbol{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}. $$
The population mean of $\boldsymbol{y}$ is just the vector of marginal means
$$ \boldsymbol{\mu} = \mathrm{E}(\boldsymbol{y}) = \begin{pmatrix} \mathrm{E}\left(y_1\right) \\ \mathrm{E}\left(y_2\right) \\ \vdots \\ \mathrm{E}\left(y_m\right) \end{pmatrix}. $$
When working with random vectors $\boldsymbol{y}$ it is convenient to measure their magnitude by their Euclidean length, which is the Euclidean norm
$$ \left\|\boldsymbol{y}\right\| = \left(y_1^2 + \cdots + y_m^2\right)^{1/2}. $$
In vector notation we have
$$ \left\|\boldsymbol{y}\right\|^2 = \boldsymbol{y}'\boldsymbol{y}. $$
It turns out that it is equivalent to describe finiteness of moments in terms of the Euclidean norm of a vector or all individual components.
Theorem 5.6.1 For $\boldsymbol{y} \in \mathbb{R}^m$, $\mathrm{E}\left\|\boldsymbol{y}\right\| < \infty$ if and only if $\mathrm{E}\left|y_j\right| < \infty$ for $j = 1, \ldots, m$.
The $m \times m$ variance matrix of $\boldsymbol{y}$ is
$$ \boldsymbol{V} = \operatorname{var}\left(\boldsymbol{y}\right) = \mathrm{E}\left[\left(\boldsymbol{y} - \boldsymbol{\mu}\right)\left(\boldsymbol{y} - \boldsymbol{\mu}\right)'\right]. $$
$\boldsymbol{V}$ is often called a variance-covariance matrix. You can show that the elements of $\boldsymbol{V}$ are finite if $\mathrm{E}\left\|\boldsymbol{y}\right\|^2 < \infty$.
A random sample $\left\{\boldsymbol{y}_1, \ldots, \boldsymbol{y}_n\right\}$ consists of $n$ observations of independent and identical draws from the distribution of $\boldsymbol{y}$. (Each draw is an $m$-vector.) The vector sample mean
$$ \bar{\boldsymbol{y}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{y}_i = \begin{pmatrix} \bar{y}_1 \\ \bar{y}_2 \\ \vdots \\ \bar{y}_m \end{pmatrix} $$
is the vector of sample means of the individual variables.
Convergence in probability of a vector can be defined as convergence in probability of all elements in the vector. Thus $\bar{\boldsymbol{y}} \xrightarrow{p} \boldsymbol{\mu}$ if and only if $\bar{y}_j \xrightarrow{p} \mu_j$ for $j = 1, \ldots, m$. Since the latter holds if $\mathrm{E}\left|y_j\right| < \infty$ for $j = 1, \ldots, m$, or equivalently $\mathrm{E}\left\|\boldsymbol{y}\right\| < \infty$, we can state this formally as follows.
Theorem 5.6.2 Weak Law of Large Numbers (WLLN) for random vectors
If $\boldsymbol{y}_i$ are independent and identically distributed and $\mathrm{E}\left\|\boldsymbol{y}\right\| < \infty$, then as $n \to \infty$,
$$ \bar{\boldsymbol{y}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{y}_i \xrightarrow{p} \mathrm{E}(\boldsymbol{y}). $$
5.7 Convergence in Distribution
The WLLN is a useful first step, but does not give an approximation to the distribution of an estimator. A large-sample or asymptotic approximation can be obtained using the concept of convergence in distribution.
Definition 5.7.1 Let $\boldsymbol{z}_n$ be a random vector with distribution $F_n(\boldsymbol{u}) = \Pr\left(\boldsymbol{z}_n \leq \boldsymbol{u}\right)$. We say that $\boldsymbol{z}_n$ converges in distribution to $\boldsymbol{z}$ as $n \to \infty$, denoted $\boldsymbol{z}_n \xrightarrow{d} \boldsymbol{z}$, if for all $\boldsymbol{u}$ at which $F(\boldsymbol{u}) = \Pr\left(\boldsymbol{z} \leq \boldsymbol{u}\right)$ is continuous, $F_n(\boldsymbol{u}) \to F(\boldsymbol{u})$ as $n \to \infty$.
When $\boldsymbol{z}_n \xrightarrow{d} \boldsymbol{z}$, it is common to refer to $\boldsymbol{z}$ as the asymptotic distribution or limit distribution of $\boldsymbol{z}_n$.
When the limit distribution $\boldsymbol{z}$ is degenerate (that is, $\Pr\left(\boldsymbol{z} = \boldsymbol{c}\right) = 1$ for some $\boldsymbol{c}$) we can write the convergence as $\boldsymbol{z}_n \xrightarrow{d} \boldsymbol{c}$, which is equivalent to convergence in probability, $\boldsymbol{z}_n \xrightarrow{p} \boldsymbol{c}$.
The typical path to establishing convergence in distribution is through the central limit theorem (CLT), which states that a standardized sample average converges in distribution to a normal random vector.
Theorem 5.7.1 Lindeberg–Lévy Central Limit Theorem (CLT). If $\boldsymbol{y}_i$ are independent and identically distributed and $\mathrm{E}\left\|\boldsymbol{y}\right\|^2 < \infty$, then as $n \to \infty$,
$$ \sqrt{n}\left(\bar{\boldsymbol{y}} - \boldsymbol{\mu}\right) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\boldsymbol{y}_i - \boldsymbol{\mu}\right) \xrightarrow{d} \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{V}\right) $$
where $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{y}$ and $\boldsymbol{V} = \mathrm{E}\left[\left(\boldsymbol{y} - \boldsymbol{\mu}\right)\left(\boldsymbol{y} - \boldsymbol{\mu}\right)'\right]$.
The standardized sum $\boldsymbol{z}_n = \sqrt{n}\left(\bar{\boldsymbol{y}}_n - \boldsymbol{\mu}\right)$ has mean zero and variance $\boldsymbol{V}$. What the CLT adds is that the variable $\boldsymbol{z}_n$ is also approximately normally distributed, and that the normal approximation improves as $n$ increases.
The CLT is one of the most powerful and mysterious results in statistical theory. It shows that the simple process of averaging induces normality. The first version of the CLT (for the number of heads resulting from many tosses of a fair coin) was established by the French mathematician Abraham de Moivre in an article published in 1733. This was extended to cover an approximation to the binomial distribution in 1812 by Pierre-Simon Laplace in his book Théorie Analytique des Probabilités, and the most general statements are credited to articles by the Russian mathematician Aleksandr Lyapunov (1901) and the Finnish mathematician Jarl Waldemar Lindeberg (1920, 1922). The above statement is known as the classic (or Lindeberg-Lévy) CLT due to contributions by Lindeberg (1920) and the French mathematician Paul Pierre Lévy.
A more general version which does not require the restriction to identical distributions was provided by Lindeberg (1922).
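A simulation check of the CLT, not part of the original text, can be sketched in a few lines of Python (the exponential data distribution and replication count are arbitrary illustrative choices): the standardized sample mean of a strongly skewed variable has tail frequencies close to the normal value once $n$ is moderately large.

import numpy as np

rng = np.random.default_rng(5)
reps = 50_000
mu, var = 1.0, 1.0                         # mean and variance of an Exponential(1) draw
for n in (5, 50, 500):
    y = rng.exponential(1.0, size=(reps, n))
    z = np.sqrt(n) * (y.mean(axis=1) - mu) / np.sqrt(var)   # standardized sample mean
    # Compare simulated tail frequencies with the N(0,1) value of about 0.023 for |z| > 2.
    print(n, (z > 2.0).mean().round(4), (z < -2.0).mean().round(4))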
Theorem 5.7.2 Lindeberg Central Limit Theorem (CLT). Suppose that $y_i$ are independent but not necessarily identically distributed with finite means $\mu_i = \mathrm{E}y_i$ and variances $\sigma_i^2 = \mathrm{E}\left(y_i - \mu_i\right)^2$. Set $\nu_n^2 = \sum_{i=1}^{n}\sigma_i^2$. If for all $\varepsilon > 0$
$$ \lim_{n\to\infty} \frac{1}{\nu_n^2}\sum_{i=1}^{n}\mathrm{E}\left(y_i - \mu_i\right)^2 1\left(\left|y_i - \mu_i\right| \geq \varepsilon\nu_n\right) = 0 \tag{5.5} $$
then
$$ \frac{1}{\nu_n}\sum_{i=1}^{n}\left(y_i - \mu_i\right) \xrightarrow{d} \mathrm{N}\left(0, 1\right). $$
Equation (5.5) is known as Lindeberg's condition. A standard method to verify (5.5) is via Lyapunov's condition: For some $\delta > 0$
$$ \lim_{n\to\infty} \frac{1}{\nu_n^{2+\delta}}\sum_{i=1}^{n}\mathrm{E}\left|y_i - \mu_i\right|^{2+\delta} = 0. \tag{5.6} $$
It is easy to verify that (5.6) implies (5.5), and (5.6) is often easy to verify. For example, if $\sup_i \mathrm{E}\left|y_i - \mu_i\right|^{3} \leq \kappa < \infty$ and $\inf_i \sigma_i^2 \geq c > 0$ then
$$ \frac{1}{\nu_n^{3}}\sum_{i=1}^{n}\mathrm{E}\left|y_i - \mu_i\right|^{3} \leq \frac{n\kappa}{\left(nc\right)^{3/2}} \to 0 $$
so (5.6) is satisfied.
5.8 Higher Moments
Often we want to estimate a parameter $\boldsymbol{\mu}$ which is the expected value of a transformation of a random vector $\boldsymbol{y}$. That is, $\boldsymbol{\mu}$ can be written as
$$ \boldsymbol{\mu} = \mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right) $$
for some function $\boldsymbol{h} : \mathbb{R}^m \to \mathbb{R}^k$. For example, the second moment of $y$ is $\mathrm{E}y^2$, the $k$'th is $\mathrm{E}y^k$, the moment generating function is $\mathrm{E}\exp\left(ty\right)$, and the distribution function is $\mathrm{E}1\left\{y \leq x\right\}$.
Estimating parameters of this form fits into our previous analysis by defining the random variable $\boldsymbol{z} = \boldsymbol{h}\left(\boldsymbol{y}\right)$, for then $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{z}$ is just a simple moment of $\boldsymbol{z}$. This suggests the moment estimator
$$ \hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{z}_i = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right). $$
For example, the moment estimator of $\mathrm{E}y^k$ is $n^{-1}\sum_{i=1}^{n} y_i^k$, that of the moment generating function is $n^{-1}\sum_{i=1}^{n}\exp\left(ty_i\right)$, and for the distribution function the estimator is $n^{-1}\sum_{i=1}^{n} 1\left\{y_i \leq x\right\}$.
Since $\hat{\boldsymbol{\mu}}$ is a sample average, and transformations of iid variables are also iid, the asymptotic results of the previous sections immediately apply.
Theorem 5.8.1 If $\boldsymbol{y}_i$ are independent and identically distributed, $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)$, and $\mathrm{E}\left\|\boldsymbol{h}\left(\boldsymbol{y}\right)\right\| < \infty$, then for $\hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right)$, as $n \to \infty$, $\hat{\boldsymbol{\mu}} \xrightarrow{p} \boldsymbol{\mu}$.
Theorem 5.8.2 If $\boldsymbol{y}_i$ are independent and identically distributed, $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)$, and $\mathrm{E}\left\|\boldsymbol{h}\left(\boldsymbol{y}\right)\right\|^2 < \infty$, then for $\hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right)$, as $n \to \infty$,
$$ \sqrt{n}\left(\hat{\boldsymbol{\mu}} - \boldsymbol{\mu}\right) \xrightarrow{d} \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{V}\right) $$
where $\boldsymbol{V} = \mathrm{E}\left[\left(\boldsymbol{h}\left(\boldsymbol{y}\right) - \boldsymbol{\mu}\right)\left(\boldsymbol{h}\left(\boldsymbol{y}\right) - \boldsymbol{\mu}\right)'\right]$.
Theorems 5.8.1 and 5.8.2 show that the estimate $\hat{\boldsymbol{\mu}}$ is consistent for $\boldsymbol{\mu}$ and asymptotically normally distributed, so long as the stated moment conditions hold.
A word of caution. Theorems 5.8.1 and 5.8.2 give the impression that it is possible to estimate any moment of $y$. Technically this is the case, so long as that moment is finite. What is hidden by the notation, however, is that estimates of high order moments can be quite imprecise. For example, consider the sample 8th moment $\hat{\mu}_8 = \frac{1}{n}\sum_{i=1}^{n} y_i^8$, and suppose for simplicity that $y$ is $\mathrm{N}(0, 1)$. Then we can calculate$^1$ that $\operatorname{var}\left(\hat{\mu}_8\right) = n^{-1}\left(\mathrm{E}y^{16} - \left(\mathrm{E}y^{8}\right)^2\right) = n^{-1}\,2{,}016{,}000$, which is huge, even for large $n$! In general, higher-order moments are challenging to estimate because their variance depends upon even higher moments which can be quite large in some cases.
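This imprecision is easy to see numerically. The following Python snippet is not part of the original text; it is a sketch (with arbitrary sample size and replication count) comparing the Monte Carlo variance of the sample 8th moment of standard normal data with the theoretical value implied by footnote 1.

import numpy as np

rng = np.random.default_rng(6)
n, reps = 10_000, 2000
y = rng.standard_normal((reps, n))
mu8_hat = (y**8).mean(axis=1)                 # sample 8th moment in each replication
print("Monte Carlo var of mu8_hat:", mu8_hat.var())
print("theory (Ey^16 - (Ey^8)^2)/n:", (2_027_025 - 105**2) / n)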
5.9 Functions of Moments
We now expand our investigation and consider estimation of parameters which can be written as a continuous function of $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)$. That is, the parameter of interest can be written as
$$ \boldsymbol{\beta} = \boldsymbol{g}\left(\boldsymbol{\mu}\right) = \boldsymbol{g}\left(\mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)\right) \tag{5.7} $$
for some functions $\boldsymbol{g} : \mathbb{R}^k \to \mathbb{R}^{\ell}$ and $\boldsymbol{h} : \mathbb{R}^m \to \mathbb{R}^k$.
As one example, the geometric mean of wages $w$ is
$$ \gamma = \exp\left(\mathrm{E}\left(\log\left(w\right)\right)\right). \tag{5.8} $$
$^1$ By the formula for the variance of a mean, $\operatorname{var}\left(\hat{\mu}_8\right) = n^{-1}\left(\mathrm{E}y^{16} - \left(\mathrm{E}y^{8}\right)^2\right)$. Since $y$ is $\mathrm{N}(0,1)$, $\mathrm{E}y^{16} = 15!! = 2{,}027{,}025$ and $\mathrm{E}y^{8} = 7!! = 105$, where $k!! = k(k-2)\cdots$ is the double factorial.
This is (5.7) with $g(u) = \exp(u)$ and $h(w) = \log(w)$.
A simple yet common example is the variance
$$ \sigma^2 = \mathrm{E}\left(w - \mathrm{E}w\right)^2 = \mathrm{E}w^2 - \left(\mathrm{E}w\right)^2. $$
This is (5.7) with
$$ \boldsymbol{h}(w) = \begin{pmatrix} w \\ w^2 \end{pmatrix} $$
and
$$ g\left(\mu_1, \mu_2\right) = \mu_2 - \mu_1^2. $$
Similarly, the skewness of the wage distribution is
$$ sk = \frac{\mathrm{E}\left(w - \mathrm{E}w\right)^3}{\left(\mathrm{E}\left(w - \mathrm{E}w\right)^2\right)^{3/2}}. $$
This is (5.7) with
$$ \boldsymbol{h}(w) = \begin{pmatrix} w \\ w^2 \\ w^3 \end{pmatrix} $$
and
$$ g\left(\mu_1, \mu_2, \mu_3\right) = \frac{\mu_3 - 3\mu_2\mu_1 + 2\mu_1^3}{\left(\mu_2 - \mu_1^2\right)^{3/2}}. \tag{5.9} $$
The parameter $\boldsymbol{\beta} = \boldsymbol{g}\left(\boldsymbol{\mu}\right)$ is not a population moment, so it does not have a direct moment estimator. Instead, it is common to use a plug-in estimate formed by replacing the unknown $\boldsymbol{\mu}$ with its point estimate $\hat{\boldsymbol{\mu}}$ and then "plugging" this into the expression for $\boldsymbol{\beta}$. The first step is
$$ \hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right) $$
and the second step is
$$ \hat{\boldsymbol{\beta}} = \boldsymbol{g}\left(\hat{\boldsymbol{\mu}}\right). $$
Again, the hat "^" indicates that $\hat{\boldsymbol{\beta}}$ is a sample estimate of $\boldsymbol{\beta}$.
For example, the plug-in estimate of the geometric mean $\gamma$ of the wage distribution from (5.8) is
$$ \hat{\gamma} = \exp\left(\hat{\mu}\right) $$
with
$$ \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n}\log\left(wage_i\right). $$
The plug-in estimate of the variance is
$$ \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} w_i\right)^2 = \frac{1}{n}\sum_{i=1}^{n}\left(w_i - \bar{w}\right)^2, $$
and that for the skewness is
$$ \widehat{sk} = \frac{\hat{\mu}_3 - 3\hat{\mu}_2\hat{\mu}_1 + 2\hat{\mu}_1^3}{\left(\hat{\mu}_2 - \hat{\mu}_1^2\right)^{3/2}} = \frac{\frac{1}{n}\sum_{i=1}^{n}\left(w_i - \bar{w}\right)^3}{\left(\frac{1}{n}\sum_{i=1}^{n}\left(w_i - \bar{w}\right)^2\right)^{3/2}} $$
where
$$ \hat{\mu}_j = \frac{1}{n}\sum_{i=1}^{n} w_i^j. $$
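The plug-in recipe is short to code. The snippet below is not part of the original text; it is a Python sketch using simulated lognormal draws as a stand-in for wage data (the distribution and sample size are arbitrary) to compute the plug-in geometric mean, variance, and skewness from sample moments exactly as in (5.8)-(5.9).

import numpy as np

rng = np.random.default_rng(7)
w = np.exp(rng.standard_normal(5000) + 2.5)    # hypothetical positive "wage" draws

geo_mean = np.exp(np.mean(np.log(w)))          # plug-in estimate of (5.8)
m1, m2, m3 = w.mean(), (w**2).mean(), (w**3).mean()
var_hat = m2 - m1**2                           # g(mu1, mu2) = mu2 - mu1^2
skew_hat = (m3 - 3 * m2 * m1 + 2 * m1**3) / var_hat**1.5   # g from (5.9)
print(geo_mean, var_hat, skew_hat)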
A useful property is that continuous functions are limit-preserving.
Theorem 5.9.1 Continuous Mapping Theorem (CMT). If $\boldsymbol{z}_n \xrightarrow{p} \boldsymbol{c}$ as $n \to \infty$ and $\boldsymbol{g}\left(\cdot\right)$ is continuous at $\boldsymbol{c}$, then $\boldsymbol{g}\left(\boldsymbol{z}_n\right) \xrightarrow{p} \boldsymbol{g}\left(\boldsymbol{c}\right)$ as $n \to \infty$.
The proof of Theorem 5.9.1 is given in Section 5.14.
For example, if $z_n \xrightarrow{p} c$ as $n \to \infty$ then
$$ z_n + a \xrightarrow{p} c + a $$
$$ a z_n \xrightarrow{p} a c $$
$$ z_n^2 \xrightarrow{p} c^2 $$
as the functions $g(u) = u + a$, $g(u) = a u$, and $g(u) = u^2$ are continuous. Also
$$ \frac{a}{z_n} \xrightarrow{p} \frac{a}{c} $$
if $c \neq 0$. The condition $c \neq 0$ is important as the function $g(u) = a/u$ is not continuous at $u = 0$.
Recall that if $\boldsymbol{y}_i$ are independent and identically distributed, $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)$, and $\mathrm{E}\left\|\boldsymbol{h}\left(\boldsymbol{y}\right)\right\| < \infty$, then for $\hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right)$, as $n \to \infty$, $\hat{\boldsymbol{\mu}} \xrightarrow{p} \boldsymbol{\mu}$. Combining this with the CMT yields the following result.
Theorem 5.9.2 If $\boldsymbol{y}_i$ are independent and identically distributed, $\boldsymbol{\beta} = \boldsymbol{g}\left(\mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)\right)$, $\mathrm{E}\left\|\boldsymbol{h}\left(\boldsymbol{y}\right)\right\| < \infty$, and $\boldsymbol{g}\left(\boldsymbol{u}\right)$ is continuous at $\boldsymbol{u} = \boldsymbol{\mu}$, then for $\hat{\boldsymbol{\beta}} = \boldsymbol{g}\left(\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right)\right)$, as $n \to \infty$, $\hat{\boldsymbol{\beta}} \xrightarrow{p} \boldsymbol{\beta}$.
To apply Theorem 5.9.2 it is necessary to check if the function $\boldsymbol{g}$ is continuous at $\boldsymbol{\mu}$. In our first example $g(u) = \exp(u)$ is continuous everywhere. It therefore follows from Theorem 5.6.2 and Theorem 5.9.2 that if $\mathrm{E}\left|\log\left(wage\right)\right| < \infty$ then as $n \to \infty$, $\hat{\gamma} \xrightarrow{p} \gamma$.
In the example of the variance, $g$ is continuous for all $\boldsymbol{\mu}$. Thus if $\mathrm{E}w^2 < \infty$ then as $n \to \infty$, $\hat{\sigma}^2 \xrightarrow{p} \sigma^2$.
In our third example $g$ defined in (5.9) is continuous for all $\boldsymbol{\mu}$ such that $\operatorname{var}(w) = \mu_2 - \mu_1^2 > 0$, which holds unless $w$ has a degenerate distribution. Thus if $\mathrm{E}\left|w\right|^3 < \infty$ and $\operatorname{var}(w) > 0$ then as $n \to \infty$, $\widehat{sk} \xrightarrow{p} sk$.
5.10 Delta Method
In this section we introduce two tools – an extended version of the CMT and the Delta Method – which allow us to calculate the asymptotic distribution of the parameter estimate $\hat{\boldsymbol{\beta}}$.
We first present an extended version of the continuous mapping theorem which allows convergence in distribution.
Theorem 5.10.1 Continuous Mapping Theorem
If $\boldsymbol{z}_n \xrightarrow{d} \boldsymbol{z}$ as $n \to \infty$ and $\boldsymbol{g} : \mathbb{R}^m \to \mathbb{R}^k$ has the set of discontinuity points $D_g$ such that $\Pr\left(\boldsymbol{z} \in D_g\right) = 0$, then $\boldsymbol{g}\left(\boldsymbol{z}_n\right) \xrightarrow{d} \boldsymbol{g}\left(\boldsymbol{z}\right)$ as $n \to \infty$.
For a proof of Theorem 5.10.1 see Theorem 2.3 of van der Vaart (1998). It was first proved by Mann and Wald (1943) and is therefore sometimes referred to as the Mann-Wald Theorem.
Theorem 5.10.1 allows the function $\boldsymbol{g}$ to be discontinuous only if the probability of being at a discontinuity point is zero. For example, the function $g(u) = u^{-1}$ is discontinuous at $u = 0$, but if $z_n \xrightarrow{d} z \sim \mathrm{N}(0, 1)$ then $\Pr\left(z = 0\right) = 0$ so $z_n^{-1} \xrightarrow{d} z^{-1}$.
A special case of the Continuous Mapping Theorem is known as Slutsky's Theorem.
Theorem 5.10.2 Slutsky's Theorem
If $z_n \xrightarrow{d} z$ and $c_n \xrightarrow{p} c$ as $n \to \infty$, then
1. $z_n + c_n \xrightarrow{d} z + c$
2. $z_n c_n \xrightarrow{d} z c$
3. $\dfrac{z_n}{c_n} \xrightarrow{d} \dfrac{z}{c}$ if $c \neq 0$
Even though Slutsky's Theorem is a special case of the CMT, it is a useful statement as it focuses on the most common applications – addition, multiplication, and division.
Despite the fact that the plug-in estimator $\hat{\boldsymbol{\beta}}$ is a function of $\hat{\boldsymbol{\mu}}$ for which we have an asymptotic distribution, Theorem 5.10.1 does not directly give us an asymptotic distribution for $\hat{\boldsymbol{\beta}}$. This is because $\hat{\boldsymbol{\beta}} = \boldsymbol{g}\left(\hat{\boldsymbol{\mu}}\right)$ is written as a function of $\hat{\boldsymbol{\mu}}$, not of the standardized sequence $\sqrt{n}\left(\hat{\boldsymbol{\mu}} - \boldsymbol{\mu}\right)$. We need an intermediate step – a first order Taylor series expansion. This step is so critical to statistical theory that it has its own name – The Delta Method.
Theorem 5.10.3 Delta Method:
If $\sqrt{n}\left(\hat{\boldsymbol{\mu}} - \boldsymbol{\mu}\right) \xrightarrow{d} \boldsymbol{\xi}$, where $\boldsymbol{g}\left(\boldsymbol{u}\right)$ is continuously differentiable in a neighborhood of $\boldsymbol{\mu}$, then as $n \to \infty$
$$ \sqrt{n}\left(\boldsymbol{g}\left(\hat{\boldsymbol{\mu}}\right) - \boldsymbol{g}\left(\boldsymbol{\mu}\right)\right) \xrightarrow{d} \boldsymbol{G}'\boldsymbol{\xi} \tag{5.10} $$
where $\boldsymbol{G}\left(\boldsymbol{u}\right) = \frac{\partial}{\partial\boldsymbol{u}}\boldsymbol{g}\left(\boldsymbol{u}\right)'$ and $\boldsymbol{G} = \boldsymbol{G}\left(\boldsymbol{\mu}\right)$. In particular, if $\boldsymbol{\xi} \sim \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{V}\right)$ then as $n \to \infty$
$$ \sqrt{n}\left(\boldsymbol{g}\left(\hat{\boldsymbol{\mu}}\right) - \boldsymbol{g}\left(\boldsymbol{\mu}\right)\right) \xrightarrow{d} \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{G}'\boldsymbol{V}\boldsymbol{G}\right). \tag{5.11} $$
The Delta Method allows us to complete our derivation of the asymptotic distribution of the estimator $\hat{\boldsymbol{\beta}}$ of $\boldsymbol{\beta}$.
By combining Theorems 5.8.2 and 5.10.3 we can now find the asymptotic distribution of the plug-in estimator $\hat{\boldsymbol{\beta}}$.
Theorem 5.10.4 If $\boldsymbol{y}_i$ are independent and identically distributed, $\boldsymbol{\mu} = \mathrm{E}\boldsymbol{h}\left(\boldsymbol{y}\right)$, $\boldsymbol{\beta} = \boldsymbol{g}\left(\boldsymbol{\mu}\right)$, $\mathrm{E}\left\|\boldsymbol{h}\left(\boldsymbol{y}\right)\right\|^2 < \infty$, and $\boldsymbol{G}\left(\boldsymbol{u}\right) = \frac{\partial}{\partial\boldsymbol{u}}\boldsymbol{g}\left(\boldsymbol{u}\right)'$ is continuous in a neighborhood of $\boldsymbol{\mu}$, then for $\hat{\boldsymbol{\beta}} = \boldsymbol{g}\left(\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{h}\left(\boldsymbol{y}_i\right)\right)$, as $n \to \infty$,
$$ \sqrt{n}\left(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\right) \xrightarrow{d} \mathrm{N}\left(\boldsymbol{0}, \boldsymbol{G}'\boldsymbol{V}\boldsymbol{G}\right) $$
where $\boldsymbol{V} = \mathrm{E}\left[\left(\boldsymbol{h}\left(\boldsymbol{y}\right) - \boldsymbol{\mu}\right)\left(\boldsymbol{h}\left(\boldsymbol{y}\right) - \boldsymbol{\mu}\right)'\right]$ and $\boldsymbol{G} = \boldsymbol{G}\left(\boldsymbol{\mu}\right)$.
Theorem 5.9.2 established the consistency of $\hat{\boldsymbol{\beta}}$ for $\boldsymbol{\beta}$, and Theorem 5.10.4 established its asymptotic normality. It is instructive to compare the conditions required for these results. Consistency required that $\boldsymbol{h}\left(\boldsymbol{y}\right)$ have a finite mean, while asymptotic normality requires that this variable have a finite variance. Consistency required that $\boldsymbol{g}\left(\boldsymbol{u}\right)$ be continuous, while asymptotic normality required that $\boldsymbol{g}\left(\boldsymbol{u}\right)$ be continuously differentiable, the latter a stronger smoothness condition.
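A numerical check of Theorem 5.10.4, not part of the original text, can be sketched in Python for the geometric-mean example (the parameter values, sample size, and replication count are arbitrary illustrative choices): with $g(\mu) = \exp(\mu)$ and $\mu = \mathrm{E}\log(w)$, the delta-method variance is $G'VG = \exp(2\mu)\operatorname{var}(\log w)$.

import numpy as np

rng = np.random.default_rng(8)
n, reps = 500, 20_000
mu, v = 1.0, 0.25                              # mean and variance of log(w)
gamma = np.exp(mu)                             # parameter gamma = g(mu) = exp(mu)

logw = mu + np.sqrt(v) * rng.standard_normal((reps, n))
gamma_hat = np.exp(logw.mean(axis=1))          # plug-in estimator exp(mu_hat)
z = np.sqrt(n) * (gamma_hat - gamma)

print("Monte Carlo variance:      ", z.var().round(3))
print("delta method exp(2*mu)*v:  ", (np.exp(2 * mu) * v).round(3))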
5.11 Stochastic Order Symbols
It is convenient to have simple symbols for random variables and vectors which converge in
probability to zero or are stochastically bounded. In this section we introduce some of the most
commonly found notation.
It might be useful to review the common notation for non-random convergence and boundedness.
Let r
a
and a
a
, : = 1, 2, ..., be a non-random sequences. The notation
r
a
= o(1)
(pronounced “small oh-one”) is equivalent to a
a
÷0 as : ÷·. The notation
r
a
= o(a
a
)
is equivalent to a
÷1
a
r
a
÷0 as : ÷·. The notation
r
a
= O(1)
(pronounced “big oh-one”) means that r
a
is bounded uniformly in : : there exists an ' < · such
that [r
a
[ _ ' for all :. The notation
r
a
= O(a
a
)
is equivalent to a
÷1
a
r
a
= O(1).
We now introduce similar concepts for sequences of random variables. Let .
a
and a
a
, : = 1, 2, ...
be sequences of random variables. (In most applications, a
a
is non-random.) The notation
.
a
= o
j
(1)
(“small oh-P-one”) means that .
a
j
÷÷0 as : ÷·. We also write
.
a
= o
j
(a
a
)
if a
÷1
a
.
a
= o
j
(1). For example, for any consistent estimator
´
d for d we can write
´
d = d ÷o
j
(1).
Similarly, the notation .
a
= O
j
(1) (“big oh-P-one”) means that .
a
is bounded in probability.
Precisely, for any - 0 there is a constant '
.
< · such that
limsup
a÷o
Ii ([.
a
[ '
.
) _ -.
Furthermore, we write
.
a
= O
j
(a
a
)
if a
÷1
a
.
a
= O
j
(1).
O
j
(1) is weaker than o
j
(1) in the sense that .
a
= o
j
(1) implies .
a
= O
j
(1) but not the reverse.
However, if .
a
= O
j
(a
a
) then .
a
= o
j
(/
a
) for any /
a
such that a
a
,/
a
÷0.
If a random vector converges in distribution z
a
o
÷÷ z (for example, if z ~ N(0, X )) then
z
a
= O
j
(1). It follows that for estimators
´
d which satisfy the convergence of Theorem 5.10.4 then
we can write
´
d = d ÷O
j
(:
÷1¸2
).
Another useful observation is that a random sequence with a bounded moment is stochastically
bounded.
Theorem 5.11.1 If z
a
is a random vector which satis…es
E|z
a
|
c
= O(a
a
)
for some sequence a
a
and c 0, then
z
a
= O
j
(a
1¸c
a
).
Similarly, E|z
a
|
c
= o (a
a
) implies z
a
= o
j
(a
1¸c
a
).
This can be shown using Markov’s inequality (B.21). The assumptions imply that there is some
' < · such that E|z
a
|
c
_ 'a
a
for all :. For any - set 1 =
_
'
-
_
1¸c
. Then
Ii
_
a
÷1¸c
a
|z
a
| 1
_
= Ii
_
|z
a
|
c

'a
a
-
_
_
-
'a
a
E|z
a
|
c
_ -
as required.
There are many simple rules for manipulating o
j
(1) and O
j
(1) sequences which can be deduced
from the continuous mapping theorem or Slutsky’s Theorem. For example,
o
j
(1) ÷o
j
(1) = o
j
(1)
o
j
(1) ÷O
j
(1) = O
j
(1)
O
j
(1) ÷O
j
(1) = O
j
(1)
o
j
(1)o
j
(1) = o
j
(1)
o
j
(1)O
j
(1) = o
j
(1)
O
j
(1)O
j
(1) = O
j
(1)
5.12 Uniform Stochastic Bounds*
For some applications it can be useful to obtain the stochastic order of the random variable
max
1¸i¸a
[j
i
[ .
This is the magnitude of the largest observation in the sample ¦j
1
, ..., j
a
¦. If the support of the
distribution of j
i
is unbounded, then as the sample size : increases, the largest observation will
also tend to increase. It turns out that there is a simple characterization.
Theorem 5.12.1 If ¸
i
are independent and identically distributed:
If E[j[
v
< ·, then as : ÷·
:
÷1¸v
max
1¸i¸a
[j
i
[
j
÷÷0. (5.12)
If Eoxp(tj) < · for all t < ·, then
(log :)
÷1
max
1¸i¸a
[j
i
[
j
÷÷0. (5.13)
The proof of Theorem 5.12.1 is presented in Section 5.14.
Equivalently, (5.12) can be written as
max
1¸i¸a
[j
i
[ = o
j
(:
1¸v
) (5.14)
and (5.13) as
max
1¸i¸a
[j
i
[ = o
j
(log :). (5.15)
Equation (5.12) says that if j has r …nite moments, then the largest observation will diverge
at a rate slower than :
1¸v
. As r increases this rate decreases. Equation (5.13) shows that if we
strengthen this to j having all …nite moments and a …nite moment generating function (for example,
if j is normally distributed) then the largest observation will diverge slower than log :. Thus the
higher the moments, the slower the rate of divergence.
To simplify the notation, we write (5.14) as j
i
= o
j
(:
1¸v
) uniformly in 1 _ i _ :, and similarly
(5.15) as j
i
= o
j
(log :), uniformly in 1 _ i _ :. It is important to understand when the O
j
or o
j
symbols are applied to subscript i random variables whether the convergence is pointwise in i, or
is uniform in i in the sense of (5.14)-(5.15).
Theorem 5.12.1 applies to random vectors. If E|¸|
v
< · then
max
1¸i¸a

i
| = o
j
(:
1¸v
),
and if Eoxp(I
t
¸) < · for all |I| < · then
(log :)
÷1
max
1¸i¸a

i
|
j
÷÷0. (5.16)
5.13 Semiparametric E¢ciency
In this section we argue that the sample mean ´ µ and plug-in estimator
´
d = g (´ µ) are e¢cient
estimators of the parameters µ and d. Our demonstration is based on the rich but technically
challenging theory of semiparametric e¢ciency bounds. An excellent accessible review has been
provided by Newey (1990). We will also appeal to the asymptotic theory of maximum likelihood
estimation (see Section B.11).
We start by examining the sample mean ´ µ, for the asymptotic e¢ciency of
´
d will follow from
that of ´ µ.
Recall, we know that if E|¸|
2
< · then the sample mean has the asymptotic distribution
_
:(´ µ ÷µ)
o
÷÷N(0, X ) . We want to know if ´ µ is the best feasible estimator, or if there is another
estimator with a smaller asymptotic variance. While it seems intuitively unlikely that another
estimator could have a smaller asymptotic variance, how do we know that this is not the case?
When we ask if ´ µ is the best estimator, we need to be clear about the class of models – the class
of permissible distributions. For estimation of the mean µ of the distribution of ¸ the broadest
conceivable class is /
1
= ¦1 : E|¸| < ·¦ . This class is too broad : for our current purposes, as
´ µ is not asymptotically N(0, X ) for all 1 ¸ /
1
. A more realistic choice is /
2
=
_
1 : E|¸|
2
< ·
_
– the class of …nite-variance distributions. When we seek an e¢cient estimator of the mean µ in
the class of models /
2
what we are seeking is the best estimator, given that all we know is that
1 ¸ /
2
.
To show that the answer is not immediately obvious, it might be helpful to review a setting where the sample mean is inefficient. Suppose that $y \in \mathbb{R}$ has the double exponential density $f\left(y \mid \mu\right) = 2^{-1/2}\exp\left(-\left|y - \mu\right|\sqrt{2}\right)$. Since $\operatorname{var}(y) = 1$ we see that the sample mean satisfies $\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}\left(0, 1\right)$. In this model the maximum likelihood estimator (MLE) $\tilde{\mu}$ for $\mu$ is the sample median. Recall from the theory of maximum likelihood that the MLE satisfies $\sqrt{n}\left(\tilde{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}\left(0, \left(\mathrm{E}S^2\right)^{-1}\right)$ where $S = \frac{\partial}{\partial\mu}\log f\left(y \mid \mu\right) = \sqrt{2}\operatorname{sgn}\left(y - \mu\right)$ is the score. We can calculate that $\mathrm{E}S^2 = 2$ and thus conclude that $\sqrt{n}\left(\tilde{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}\left(0, 1/2\right)$. The asymptotic variance of the MLE is one-half that of the sample mean. Thus when the true density is known to be double exponential the sample mean is inefficient.
But the estimator which achieves this improved efficiency – the sample median – is not generically consistent for the population mean. It is inconsistent if the density is asymmetric or skewed. So the improvement comes at a great cost. Another way of looking at this is that the sample median is efficient in the class of densities $\left\{f\left(y \mid \mu\right) = 2^{-1/2}\exp\left(-\left|y - \mu\right|\sqrt{2}\right)\right\}$, but unless it is known that this is the correct distribution class this knowledge is not very useful.
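The variance comparison in this example is easy to verify by simulation. The snippet below is not part of the original text; it is a Python sketch (sample size and replication count are arbitrary) drawing Laplace data scaled to unit variance and comparing the scaled variances of the sample mean and sample median with the asymptotic values 1 and 1/2.

import numpy as np

rng = np.random.default_rng(9)
n, reps = 400, 20_000
# Double exponential (Laplace) draws scaled to have variance one, as in the text.
y = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2), size=(reps, n))
mean_var = n * y.mean(axis=1).var()
median_var = n * np.median(y, axis=1).var()
print("n*var(sample mean)   ~", mean_var.round(3), "(asymptotic value 1)")
print("n*var(sample median) ~", median_var.round(3), "(asymptotic value 1/2)")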
The relevant question is whether or not the sample mean is e¢cient when the form of the
distribution is unknown. We call this setting semiparametric as the parameter of interest (the
mean) is …nite dimensional while the remaining features of the distribution are unspeci…ed. In the
semiparametric context an estimator is called semiparametrically e¢cient if it has the smallest
asymptotic variance among all semiparametric estimators.
The mathematical trick is to reduce the semiparametric model to a set of parametric “submod-
els”. The Cramer-Rao variance bound can be found for each parametric submodel. The variance
bound for the semiparametric model (the union of the submodels) is then de…ned as the supremum
of the individual variance bounds.
Formally, suppose that the true density of ¸ is the unknown function )(¸) with mean µ = E¸ =
_
¸)(¸)d¸. A parametric submodel j for )(¸) is a density )
j
(¸ [ 0) which is a smooth function of
a parameter 0, and there is a true value 0
0
such that )
j
(¸ [ 0
0
) = )(¸). The index j indicates the
submodels. The equality )
j
(¸ [ 0
0
) = )(¸) means that the submodel class passes through the true
density, so the submodel is a true model. The class of submodels j and parameter 0
0
depend on
the true density ). In the submodel )
j
(¸ [ 0) , the mean is µ
j
(0) =
_
¸)
j
(¸ [ 0) d¸ which varies
with the parameter 0. Let j ¸ ì be the class of all submodels for ).
Since each submodel j is parametric we can calculate the e¢ciency bound for estimation of µ
within this submodel. Speci…cally, given the density )
j
(¸ [ 0) its likelihood score is
S
j
=
0
00
log )
j
(¸ [ 0
0
) ,
so the Cramer-Rao lower bound for estimation of 0 is
_
ES
µ
S
0
µ
_
÷1
. De…ning A
j
=
0
00
µ
j
(0
0
)
t
,
by Theorem B.11.5 the Cramer-Rao lower bound for estimation of µ within the submodel j is
X
j
= A
t
j
_
ES
µ
S
0
µ
_
÷1
A
j
.
As X
j
is the e¢ciency bound for the submodel class )
j
(¸ [ 0) , no estimator can have an
asymptotic variance smaller than X
j
for any density )
j
(¸ [ 0) in the submodel class, including the
true density ). This is true for all submodels j. Thus the asymptotic variance of any semiparametric
estimator cannot be smaller than X
j
for any conceivable submodel. Taking the supremum of the
Cramer-Rao bounds lower from all conceivable submodels we de…ne
2
X = sup
j¸ì
X
j
.
The asymptotic variance of any semiparametric estimator cannot be smaller than X , since it cannot
be smaller than any individual X
j
. We call X the semiparametric asymptotic variance bound
or semiparametric e¢ciency bound for estimation of µ, as it is a lower bound on the asymptotic
variance for any semiparametric estimator. If the asymptotic variance of a speci…c semiparametric
estimator equals the bound X we say that the estimator is semiparametrically e¢cient.
For many statistical problems it is quite challenging to calculate the semiparametric variance
bound. However, in some cases there is a simple method to …nd the solution. Suppose that
we can …nd a submodel j
0
whose Cramer-Rao lower bound satis…es X
j
0
= X
µ
where X
µ
is
the asymptotic variance of a known semiparametric estimator. In this case, we can deduce that
X = X
j
0
= X
µ
. Otherwise there would exist another submodel j
1
whose Cramer-Rao lower bound
satis…es X
j
0
< X
j
1
but this would imply X
j
< X
j
1
which contradicts the Cramer-Rao Theorem.
We now …nd this submodel for the sample mean ´ µ. Our goal is to …nd a parametric submodel
whose Cramer-Rao bound for µ is X . This can be done by creating a tilted version of the true
density. Consider the parametric submodel
)
j
(¸ [ 0) = )(¸)
_
1 ÷0
t
X
÷1
(¸ ÷µ)
_
(5.17)
where )(¸) is the true density and µ = E¸. Note that
_
)
j
(¸ [ 0) d¸ =
_
)(¸)d¸ ÷0
t
X
÷1
_
)(¸) (¸ ÷µ) d¸ = 1
and for all 0 close to zero )
j
(¸ [ 0) _ 0. Thus )
j
(¸ [ 0) is a valid density function. It is a parametric
submodel since )
j
(¸ [ 0
0
) = )(¸) when 0
0
= 0. This parametric submodel has the mean
µ(0) =
_
¸)
j
(¸ [ 0) d¸
=
_
¸)(¸)d¸ ÷
_
)(¸)¸ (¸ ÷µ)
t
X
÷1
0d¸
= µ ÷0
which is a smooth function of 0.
Since
0
00
log )
j
(¸ [ 0) =
0
00
log
_
1 ÷0
t
X
÷1
(¸ ÷µ)
_
=
X
÷1
(¸ ÷µ)
1 ÷0
t
X
÷1
(¸ ÷µ)
2
It is not obvious that this supremum exists, as V is a matrix so there is not a unique ordering of matrices.
However, in many cases (including the ones we study) the supremum exists and is unique.
it follows that the score function for 0 is
S
j
=
0
00
log )
j
(¸ [ 0
0
) = X
÷1
(¸ ÷µ) . (5.18)
By Theorem B.11.3 the Cramer-Rao lower bound for 0 is
_
E
_
S
j
S
t
j
__
÷1
=
_
X
÷1
E
_
(¸ ÷µ) (¸ ÷µ)
t
_
X
÷1
_
÷1
= X . (5.19)
The Cramer-Rao lower bound for µ(0) = µ÷0 is also X , and this equals the asymptotic variance
of the moment estimator ´ µ. This was what we set out to show.
In summary, we have shown that in the submodel (5.17) the Cramer-Rao lower bound for
estimation of µ is X which equals the asymptotic variance of the sample mean. This establishes
the following result.
Proposition 5.13.1 In the class of distributions 1 ¸ /
2
, the semipara-
metric variance bound for estimation of µ is X = vai(j
i
), and the sample
mean ´ µ is a semiparametrically e¢cient estimator of the population mean
µ.
We call this result a proposition rather than a theorem as we have not attended to the regularity
conditions.
It is a simple matter to extend this result to the plug-in estimator
´
d = j (´ µ). We know from
Theorem 5.10.4 that if E|¸|
2
< ·and j (u) is continuously di¤erentiable at u = µ then the plug-
in estimator has the asymptotic distribution
_
:
_
´
d ÷d
_
o
÷÷N(0, C
t
X C) . We therefore consider
the class of distributions
/
2
(j) =
_
1 : E|¸|
2
< ·, j (u) is continuously di¤erentiable at u = E¸
_
.
For example, if , = j
1
,j
2
where j
1
= Ej
1
and j
2
= Ej
2
then /
2
(q) =
_
1 : Ej
2
1
< ·, Ej
2
2
< ·, and Ej
2
,= 0
_
.
For any submodel j the Cramer-Rao lower bound for estimation of d = j (µ) is C
t
X
j
C by
Theorem B.11.5. For the submodel (5.17) this bound is C
t
X Cwhich equals the asymptotic variance
of
´
d from Theorem 5.10.4. Thus
´
d is semiparametrically e¢cient.
Proposition 5.13.2 In the class of distributions 1 ¸ /
2
(j) the semi-
parametric variance bound for estimation of d = j (µ) is C
t
X C, and the
plug-in estimator
´
d = j (´ µ) is a semiparametrically e¢cient estimator of
d.
The result in Proposition 5.13.2 is quite general. Smooth functions of sample moments are
e¢cient estimators for their population counterparts. This is a very powerful result, as most
econometric estimators can be written (or approximated) as smooth functions of sample means.
5.14 Technical Proofs*
In this section we provide proofs of some of the more technical points in the chapter. These
proofs may only be of interest to more mathematically inclined.
Proof of Theorem 5.4.2: Without loss of generality, we can assume E(j
i
) = 0 by recentering j
i
on its expectation.
We need to show that for all c 0 and j 0 there is some · < · so that for all : _ ·,
Ii ([j[ c) _ j. Fix c and j. Set - = cj,8. Pick C < · large enough so that
E([j
i
[ 1 ([j
i
[ C)) _ - (5.20)
(where 1 () is the indicator function) which is possible since E[j
i
[ < ·. De…ne the random variables
n
i
= j
i
1 ([j
i
[ _ C) ÷E(j
i
1 ([j
i
[ _ C))
.
i
= j
i
1 ([j
i
[ C) ÷E(j
i
1 ([j
i
[ C))
so that
j = n ÷.
and
E[j[ _ E[n[ ÷E[.[ . (5.21)
We now show that sum of the expectations on the right-hand-side can be bounded below 8-.
First, by the Triangle Inequality (A.12) and the Expectation Inequality (B.15),
E[.
i
[ = E[j
i
1 ([j
i
[ C) ÷E(j
i
1 ([j
i
[ C))[
_ E[j
i
1 ([j
i
[ C)[ ÷[E(j
i
1 ([j
i
[ C))[
_ 2E[j
i
1 ([j
i
[ C)[
_ 2-, (5.22)
and thus by the Triangle Inequality (A.12) and (5.22)
E[.[ = E
¸
¸
¸
¸
¸
1
:
a

i=1
.
i
¸
¸
¸
¸
¸
_
1
:
a

i=1
E[.
i
[ _ 2-. (5.23)
Second, by a similar argument
[n
i
[ = [j
i
1 ([j
i
[ _ C) ÷E(j
i
1 ([j
i
[ _ C))[
_ [j
i
1 ([j
i
[ _ C)[ ÷[E(j
i
1 ([j
i
[ _ C))[
_ 2 [j
i
1 ([j
i
[ _ C)[
_ 2C (5.24)
where the …nal inequality is (5.20). Then by Jensen’s Inequality (B.12), the fact that the n
i
are
iid and mean zero, and (5.24),
(E[n[)
2
_ E[n[
2
=
En
2
i
:
=
4C
2
:
_ -
2
(5.25)
the …nal inequality holding for : _ 4C
2
,-
2
= 86C
2
,c
2
j
2
. Equations (5.21), (5.23) and (5.25)
together show that
E[j[ _ 8-
2
(5.26)
as desired.
Finally, by Markov’s Inequality (B.21) and (5.26),
Ii ([j[ c) _
E[j[
c
_
8-
c
= j,
the …nal equality by the de…nition of -. We have shown that for any c 0 and j 0 then for all
: _ 86C
2
,c
2
j
2
, Ii ([j[ c) _ j, as needed.
Proof of Theorem 5.6.1: By Loève’s c
v
Inequality (A.19)
|¸| =
_
_
n

)=1
j
2
)
_
_
1¸2
_
n

)=1
[j
)
[ .
Thus if E[j
)
[ < · for , = 1, ..., :, then
E|¸| _
n

)=1
E[j
)
[ < ·.
For the reverse inequality, the Euclidean norm of a vector is larger than the length of any individual
component, so for any ,, [j
)
[ _ |¸| . Thus, if E|¸| < ·, then E[j
)
[ < · for , = 1, ..., :.
Proof of Theorem 5.7.1: The moment bound E¸
t
i
¸
i
< · is su¢cient to guarantee that the
elements of µ and X are well de…ned and …nite. Without loss of generality, it is su¢cient to
consider the case µ = 0.
Our proof method is to calculate the characteristic function of
_

a
and show that it converges
pointwise to the characteristic function of N(0, X ) . By Lévy’s Continuity Theorem (see Van der
Vaart (2008) Theorem 2.13) this is su¢cient to established that
_

a
converges in distribution to
N(0, X ) .
For X ¸ R
n
, let C (X) = Eoxp
_
iX
t
¸
i
_
denote the characteristic function of ¸
i
and set c (X) =
log C(X). Since ¸
i
has two …nite moments the …rst and second derivatives of C(X) are continuous
in `. They are
0
0X
C(X) = iE
_
¸
i
oxp
_
iX
t
¸
i
__
0
2
0X0X
t
C(X) = i
2
E
_
¸
i
¸
t
i
oxp
_
iX
t
¸
i
__
.
When evaluated at X = 0
C(0) = 1
0
0X
C(0) = iE(¸
i
) = 0
0
2
0X0X
t
C(0) = ÷E
_
¸
i
¸
t
i
_
= ÷X .
Furthermore,
c
X
(X) =
0
0X
c(X) = C(X)
÷1
0
0X
C(X)
c
XX
(X) =
0
2
0X0X
t
c(X) = C(X)
÷1
0
2
0X0X
t
C(X) ÷C(X)
÷2
0
0X
C (X)
0
0X
t
C(X)
so when evaluated at X = 0
c(0) = 0
c
A
(0) = 0
c
AA
(0) = ÷X .
By a second-order Taylor series expansion of c(X) about X = 0,
c(X) = c(0) ÷c
A
(0)
t
X ÷
1
2
X
t
c
XX
(X
+
)X =
1
2
X
t
c
XX
(X
+
)X (5.27)
where X
+
lies on the line segment joining 0 and X.
We now compute C
a
(X) = 1 oxp
_
iX
t
_

a
_
, the characteristic function of
_

a
. By the prop-
erties of the exponential function, the independence of the ¸
i
, the de…nition of c(X) and (5.27)
log C
a
(X) = log Eoxp
_
i
1
_
:
a

i=1
X
t
¸
i
_
= log E
a

i=1
oxp
_
i
1
_
:
X
t
¸
i
_
= log
a

i=1
Eoxp
_
i
1
_
:
X
t
¸
i
_
=
a

i=1
log Eoxp
_
i
1
_
:
X
t
¸
i
_
= :c
_
X
_
:
_
=
1
2
X
t
c
XX
(X
a
)X
where X
a
lies on the line segment joining 0 and X,
_
:. Since X
a
÷ 0 and c
XX
(X) is continuous,
c
XX
(X
a
) ÷c
XX
(0) = ÷X. We thus …nd that as : ÷·,
log C
a
(X) ÷÷
1
2
X
t
X X
and
C
a
(X) ÷oxp
_
÷
1
2
X
t
X X
_
which is the characteristic function of the N(0, X ) distribution. This completes the proof.
Proof of Theorem 5.9.1: Since j is continuous at c, for all - 0 we can …nd a c 0 such
that if |z
a
÷c| < c then |j (z
a
) ÷j (c)| _ -. Recall that ¹ ¸ 1 implies Ii(¹) _ Ii(1). Thus
Ii (|j (z
a
) ÷j (c)| _ -) _ Ii (|z
a
÷c| < c) ÷ 1 as : ÷ · by the assumption that z
a
j
÷÷ c.
Hence j(z
a
)
j
÷÷j(c) as : ÷·.
Proof of Theorem 5.10.3: By a vector Taylor series expansion, for each element of j,
q
)
(0
a
) = q
)
(0) ÷q
)0
(0
+
)a
) (0
a
÷0)
where 0
+
a)
lies on the line segment between 0
a
and 0 and therefore converges in probability to 0.
It follows that a
)a
= q
)0
(0
+
)a
) ÷q
)0
j
÷÷0. Stacking across elements of j, we …nd
_
:(j (0
a
) ÷j(0)) = (C÷a
a
)
t
_
:(0
a
÷0)
o
÷÷C
t
¸. (5.28)
The convergence is by Theorem 5.10.1, as C÷a
a
o
÷÷C,
_
:(0
a
÷0)
o
÷÷¸, and their product is
continuous. This establishes (5.10)
When ¸ ~ N(0, X ) , the right-hand-side of (5.28) equals
C
t
¸ = C
t
N(0, X ) = N
_
0, C
t
X C
_
establishing (5.11).
Proof of Theorem 5.12.1: First consider (5.12). Take any $\delta>0$. The event $\left\{\max_{1\le i\le n}|y_i|>\delta n^{1/r}\right\}$ means that at least one of the $|y_i|$ exceeds $\delta n^{1/r}$, which is the same as the event $\bigcup_{i=1}^n\left\{|y_i|>\delta n^{1/r}\right\}$ or equivalently $\bigcup_{i=1}^n\left\{|y_i|^r>\delta^r n\right\}$. Since the probability of the union of events is smaller than the sum of the probabilities,
$$\Pr\Big(n^{-1/r}\max_{1\le i\le n}|y_i|>\delta\Big)=\Pr\Big(\bigcup_{i=1}^n\left\{|y_i|^r>n\delta^r\right\}\Big)
\le\sum_{i=1}^n\Pr\left(|y_i|^r>n\delta^r\right)$$
$$\le\frac{1}{n\delta^r}\sum_{i=1}^n\mathbb{E}\left(|y_i|^r\,1\left(|y_i|^r>n\delta^r\right)\right)
=\frac{1}{\delta^r}\mathbb{E}\left(|y_i|^r\,1\left(|y_i|^r>n\delta^r\right)\right)$$
where the second inequality is the strong form of Markov's inequality (Theorem B.22) and the final equality is since the $y_i$ are iid. Since $\mathbb{E}|y|^r<\infty$ this final expectation converges to zero as $n\to\infty$. This is because
$$\mathbb{E}|y_i|^r=\int|y|^r dF(y)<\infty$$
implies
$$\mathbb{E}\left(|y_i|^r\,1\left(|y_i|^r>c\right)\right)=\int_{|y|^r>c}|y|^r dF(y)\to 0 \qquad (5.29)$$
as $c\to\infty$. This establishes (5.12).

Now consider (5.13). Take any $\delta>0$ and set $t=1/\delta$. By a similar calculation
$$\Pr\Big((\log n)^{-1}\max_{1\le i\le n}|y_i|>\delta\Big)=\Pr\Big(\bigcup_{i=1}^n\left\{\exp|t y_i|>\exp\left(t\delta\log n\right)\right\}\Big)
\le\sum_{i=1}^n\Pr\left(\exp|t y_i|>n\right)
\le\mathbb{E}\left(\exp|t y|\,1\left(\exp|t y|>n\right)\right)$$
where the second line uses $\exp\left(t\delta\log n\right)=\exp\left(\log n\right)=n$. The assumption $\mathbb{E}\exp(t y)<\infty$ means $\mathbb{E}\left(\exp|t y|\,1\left(\exp|t y|>n\right)\right)\to 0$ as $n\to\infty$ by the same argument as in (5.29). This establishes (5.13).
Chapter 6

Asymptotic Theory for Least Squares

6.1 Introduction

It turns out that the asymptotic theory of least-squares estimation applies equally to the projection model and the linear CEF model, and therefore the results in this chapter will be stated for the broader projection model described in Section 2.17. Recall that the model is
$$y_i=x_i'\beta+e_i$$
for $i=1,\ldots,n$, where the linear projection coefficient $\beta$ is
$$\beta=\left(\mathbb{E}\left(x_i x_i'\right)\right)^{-1}\mathbb{E}\left(x_i y_i\right).$$
Many of the results of this section hold under random sampling (Assumption 1.5.1) and finite second moments (Assumption 2.17.1). We restate these conditions here for clarity.

Assumption 6.1.1
1. The observations $(y_i,x_i)$, $i=1,\ldots,n$, are independent and identically distributed.
2. $\mathbb{E}y^2<\infty$.
3. $\mathbb{E}\|x\|^2<\infty$.
4. $Q_{xx}=\mathbb{E}\left(x x'\right)$ is positive definite.

Some of the results will require a strengthening to finite fourth moments.

Assumption 6.1.2 In addition to Assumption 6.1.1, $\mathbb{E}y_i^4<\infty$ and $\mathbb{E}\|x_i\|^4<\infty$.
6.2 Consistency of Least-Squares Estimation

In this section we use the weak law of large numbers (WLLN, Theorem 5.4.2 and Theorem 5.6.2) and the continuous mapping theorem (CMT, Theorem 5.9.1) to show that the least-squares estimator $\widehat\beta$ is consistent for the projection coefficient $\beta$.

This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the WLLN shows that sample moments converge in probability to population moments. And third, the CMT states that continuous functions preserve convergence in probability. We now explain each step in brief and then in greater detail.

First, observe that the OLS estimator
$$\widehat\beta=\Big(\frac{1}{n}\sum_{i=1}^n x_i x_i'\Big)^{-1}\Big(\frac{1}{n}\sum_{i=1}^n x_i y_i\Big)=\widehat Q_{xx}^{-1}\widehat Q_{xy}$$
is a function of the sample moments $\widehat Q_{xx}=\frac{1}{n}\sum_{i=1}^n x_i x_i'$ and $\widehat Q_{xy}=\frac{1}{n}\sum_{i=1}^n x_i y_i$.

Second, by an application of the WLLN these sample moments converge in probability to the population moments. Specifically, the fact that $(y_i,x_i)$ are mutually independent and identically distributed implies that any function of $(y_i,x_i)$ is iid, including $x_i x_i'$ and $x_i y_i$. These variables also have finite expectations by Theorem 2.17.1.1. Under these conditions, the WLLN (Theorem 5.6.2) implies that as $n\to\infty$,
$$\widehat Q_{xx}=\frac{1}{n}\sum_{i=1}^n x_i x_i'\xrightarrow{p}\mathbb{E}\left(x_i x_i'\right)=Q_{xx} \qquad (6.1)$$
and
$$\widehat Q_{xy}=\frac{1}{n}\sum_{i=1}^n x_i y_i\xrightarrow{p}\mathbb{E}\left(x_i y_i\right)=Q_{xy}. \qquad (6.2)$$
Third, the CMT (Theorem 5.9.1) allows us to combine these equations to show that $\widehat\beta$ converges in probability to $\beta$. Specifically, as $n\to\infty$,
$$\widehat\beta=\widehat Q_{xx}^{-1}\widehat Q_{xy}\xrightarrow{p}Q_{xx}^{-1}Q_{xy}=\beta. \qquad (6.3)$$
We have shown that $\widehat\beta\xrightarrow{p}\beta$ as $n\to\infty$. In words, the OLS estimator converges in probability to the projection coefficient vector $\beta$ as the sample size $n$ gets large.

To fully understand the application of the CMT we walk through it in detail. We can write
$$\widehat\beta=g\left(\widehat Q_{xx},\widehat Q_{xy}\right)$$
where $g(A,b)=A^{-1}b$ is a function of $A$ and $b$. The function $g(A,b)$ is a continuous function of $A$ and $b$ at all values of the arguments such that $A^{-1}$ exists. Assumption 2.17.1 implies that $Q_{xx}^{-1}$ exists and thus $g(A,b)$ is continuous at $A=Q_{xx}$. This justifies the application of the CMT in (6.3).

For a slightly different demonstration of (6.3), recall that (4.7) implies that
$$\widehat\beta-\beta=\widehat Q_{xx}^{-1}\widehat Q_{xe} \qquad (6.4)$$
where $\widehat Q_{xe}=\frac{1}{n}\sum_{i=1}^n x_i e_i$. The WLLN and (2.27) imply
$$\widehat Q_{xe}\xrightarrow{p}\mathbb{E}\left(x_i e_i\right)=0. \qquad (6.5)$$
Therefore
$$\widehat\beta-\beta=\widehat Q_{xx}^{-1}\widehat Q_{xe}\xrightarrow{p}Q_{xx}^{-1}\cdot 0=0$$
which is the same as $\widehat\beta\xrightarrow{p}\beta$.

Theorem 6.2.1 Consistency of Least-Squares
Under Assumption 6.1.1, $\widehat Q_{xx}\xrightarrow{p}Q_{xx}$, $\widehat Q_{xy}\xrightarrow{p}Q_{xy}$, $\widehat Q_{xx}^{-1}\xrightarrow{p}Q_{xx}^{-1}$, $\widehat Q_{xe}\xrightarrow{p}0$, and $\widehat\beta\xrightarrow{p}\beta$ as $n\to\infty$.

Theorem 6.2.1 states that the OLS estimator $\widehat\beta$ converges in probability to $\beta$ as $n$ increases, and thus $\widehat\beta$ is consistent for $\beta$. In the stochastic order notation, Theorem 6.2.1 can be equivalently written as
$$\widehat\beta=\beta+o_p(1). \qquad (6.6)$$
To illustrate the effect of sample size on the least-squares estimator consider the least-squares regression
$$\ln(Wage_i)=\beta_0+\beta_1 Education_i+\beta_2 Experience_i+\beta_3 Experience_i^2+e_i.$$
We use the sample of 30,833 white men from the March 2009 CPS. Randomly sorting the observations, and sequentially estimating the model by least-squares, starting with the first 40 observations, and continuing until the full sample is used, the sequence of estimates are displayed in Figure 6.1. You can see how the least-squares estimate changes with the sample size, but as the number of observations increases it settles down to the full-sample estimate $\widehat\beta_1=0.114$.
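The following Python sketch mimics this exercise. Since the CPS extract is not reproduced here, the data-generating process (coefficient values other than the 0.114 education slope, the error variance, and the regressor distributions) is an illustrative assumption, not the text's data:

import numpy as np

rng = np.random.default_rng(0)
N = 30833                                  # full sample size, as in the text
educ = rng.integers(8, 21, N)              # hypothetical schooling years
exper = rng.integers(0, 46, N)             # hypothetical experience
e = rng.normal(0, 0.5, N)
logwage = 1.0 + 0.114 * educ + 0.03 * exper - 0.0005 * exper**2 + e

X = np.column_stack([np.ones(N), educ, exper, exper**2])
for n in (40, 100, 1000, 5000, N):         # sequentially growing samples
    b = np.linalg.lstsq(X[:n], logwage[:n], rcond=None)[0]
    print(n, round(b[1], 4))               # education slope settles near 0.114

The early estimates bounce around while the full-sample estimate is close to the true slope, which is the pattern Figure 6.1 displays.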
6.3 Asymptotic Normality

We started this chapter discussing the need for an approximation to the distribution of the OLS estimator $\widehat\beta$. In Section 6.2 we showed that $\widehat\beta$ converges in probability to $\beta$. Consistency is a good first step, but in itself does not describe the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution.

The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. The steps are as follows.

Take equation (6.4) and multiply it by $\sqrt{n}$. This yields the expression
$$\sqrt{n}\left(\widehat\beta-\beta\right)=\Big(\frac{1}{n}\sum_{i=1}^n x_i x_i'\Big)^{-1}\Big(\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i\Big). \qquad (6.7)$$
This shows that the normalized and centered estimator $\sqrt{n}\left(\widehat\beta-\beta\right)$ is a function of the sample average $\frac{1}{n}\sum_{i=1}^n x_i x_i'$ and the normalized sample average $\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i$. Furthermore, the latter has mean zero so the central limit theorem (CLT, Theorem 5.7.1) applies.
Figure 6.1: The least-squares estimator $\widehat\beta_1$ as a function of sample size $n$ (OLS estimate plotted against the number of observations)
The product $x_i e_i$ is iid (since the observations are iid) and mean zero (since $\mathbb{E}\left(x_i e_i\right)=0$). Define the $k\times k$ covariance matrix
$$\Omega=\mathbb{E}\left(x_i x_i'e_i^2\right). \qquad (6.8)$$
We require the elements of $\Omega$ to be finite, written $\Omega<\infty$. By the Expectation Inequality (B.15),
$$\|\Omega\|\le\mathbb{E}\left\|x_i x_i'e_i^2\right\|=\mathbb{E}\left\|x_i e_i\right\|^2=\mathbb{E}\left(\|x_i\|^2 e_i^2\right)$$
or equivalently that $\mathbb{E}\left\|x_i e_i\right\|^2<\infty$. Using $\left\|x_i e_i\right\|^2=\|x_i\|^2 e_i^2$ and the Cauchy-Schwarz Inequality (B.17),
$$\|\Omega\|\le\mathbb{E}\left\|x_i x_i'e_i^2\right\|=\mathbb{E}\left\|x_i e_i\right\|^2=\mathbb{E}\left(\|x_i\|^2 e_i^2\right)\le\left(\mathbb{E}\|x_i\|^4\right)^{1/2}\left(\mathbb{E}e_i^4\right)^{1/2} \qquad (6.9)$$
which is finite if $x_i$ and $e_i$ have finite fourth moments. As $e_i$ is a linear combination of $y_i$ and $x_i$, it is sufficient that the observables have finite fourth moments (Theorem 2.17.1.6). We can then apply the CLT (Theorem 5.7.1).

Theorem 6.3.1 Under Assumption 6.1.2,
$$\|\Omega\|\le\mathbb{E}\left\|x_i e_i\right\|^2<\infty \qquad (6.10)$$
and
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i\xrightarrow{d}\mathrm{N}\left(0,\Omega\right) \qquad (6.11)$$
as $n\to\infty$.

Putting together (6.1), (6.7), and (6.11),
$$\sqrt{n}\left(\widehat\beta-\beta\right)\xrightarrow{d}Q_{xx}^{-1}\,\mathrm{N}\left(0,\Omega\right)=\mathrm{N}\left(0,Q_{xx}^{-1}\Omega Q_{xx}^{-1}\right)$$
as $n\to\infty$, where the final equality follows from the property that linear combinations of normal vectors are also normal (Theorem B.9.1).

We have derived the asymptotic normal approximation to the distribution of the least-squares estimator.

Theorem 6.3.2 Asymptotic Normality of Least-Squares Estimator
Under Assumption 6.1.2, as $n\to\infty$,
$$\sqrt{n}\left(\widehat\beta-\beta\right)\xrightarrow{d}\mathrm{N}\left(0,V_\beta\right)$$
where
$$V_\beta=Q_{xx}^{-1}\Omega Q_{xx}^{-1}, \qquad (6.12)$$
$Q_{xx}=\mathbb{E}\left(x_i x_i'\right)$, and $\Omega=\mathbb{E}\left(x_i x_i'e_i^2\right)$.
In the stochastic order notation, Theorem 6.3.2 implies that
$$\widehat\beta=\beta+O_p(n^{-1/2}) \qquad (6.13)$$
which is stronger than (6.6).

The matrix $V_\beta=\mathrm{avar}(\widehat\beta)$ is the variance of the asymptotic distribution of $\sqrt{n}\left(\widehat\beta-\beta\right)$. Consequently, $V_\beta$ is often referred to as the asymptotic covariance matrix of $\widehat\beta$. The expression $V_\beta=Q_{xx}^{-1}\Omega Q_{xx}^{-1}$ is called a sandwich form. It might be worth noticing that there is a difference between the variance of the asymptotic distribution given in (6.12) and the finite-sample conditional variance in the CEF model as given in (4.12):
$$V_{\widehat\beta}=\Big(\frac{1}{n}X'X\Big)^{-1}\Big(\frac{1}{n}X'DX\Big)\Big(\frac{1}{n}X'X\Big)^{-1}.$$
While $V_\beta$ and $V_{\widehat\beta}$ are different, the two are close if $n$ is large. Indeed, as $n\to\infty$, $V_{\widehat\beta}\xrightarrow{p}V_\beta$.

There is a special case where $\Omega$ and $V_\beta$ simplify. We say that $e_i$ is a Homoskedastic Projection Error when
$$\mathrm{cov}\left(x_i x_i',e_i^2\right)=0. \qquad (6.14)$$
Condition (6.14) holds in the homoskedastic linear regression model, but is somewhat broader. Under (6.14) the asymptotic variance formulas simplify as
$$\Omega=\mathbb{E}\left(x_i x_i'\right)\mathbb{E}\left(e_i^2\right)=Q_{xx}\sigma^2 \qquad (6.15)$$
$$V_\beta=Q_{xx}^{-1}\Omega Q_{xx}^{-1}=Q_{xx}^{-1}\sigma^2\equiv V_\beta^0. \qquad (6.16)$$
In (6.16) we define $V_\beta^0=Q_{xx}^{-1}\sigma^2$ whether (6.14) is true or false. When (6.14) is true then $V_\beta=V_\beta^0$, otherwise $V_\beta\ne V_\beta^0$. We call $V_\beta^0$ the homoskedastic asymptotic covariance matrix.

Theorem 6.3.2 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds true for all joint distributions of $(y_i,x_i)$ which satisfy the conditions of Assumption 6.1.2, and is therefore broadly applicable. Consequently, asymptotic normality is routinely used to approximate the finite sample distribution of $\sqrt{n}\left(\widehat\beta-\beta\right)$.
Figure 6.2: Density of Normalized OLS estimator with Double Pareto Error

A difficulty is that for any fixed $n$ the sampling distribution of $\widehat\beta$ can be arbitrarily far from the normal distribution. In Figure 5.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes. The normal approximation improves as $n$ increases, but how large should $n$ be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let $y_i=\beta_1 x_i+\beta_2+e_i$ where $x_i$ is $\mathrm{N}(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e)=\frac{\alpha}{2}|e|^{-\alpha-1}$, $|e|\ge 1$. If $\alpha>2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha-2)$. As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\widehat\beta_1-\beta_1\right)$ has the $\mathrm{N}(0,1)$ asymptotic distribution for any $\alpha>2$. In Figure 6.2 we display the finite sample densities of the normalized estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\widehat\beta_1-\beta_1\right)$, setting $n=100$ and varying the parameter $\alpha$. For $\alpha=3.0$ the density is very close to the $\mathrm{N}(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero.

Another example is shown in Figure 6.3. Here the model is $y_i=\beta+e_i$ where
$$e_i=\frac{u_i^k-\mathbb{E}\left(u_i^k\right)}{\left(\mathbb{E}\left(u_i^{2k}\right)-\left(\mathbb{E}\left(u_i^k\right)\right)^2\right)^{1/2}} \qquad (6.17)$$
and $u_i\sim\mathrm{N}(0,1)$. We show the sampling distribution of $\sqrt{n}\left(\widehat\beta-\beta\right)$ setting $n=100$, for $k=1$, 4, 6 and 8. As $k$ increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 6.2 and 6.3 is that the $\mathrm{N}(0,1)$ asymptotic approximation is never guaranteed to be accurate.
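A minimal Python sketch of the second experiment follows. It uses the error construction (6.17) with population normal moments computed exactly, so the only assumptions are the number of simulation replications and the use of sample skewness as a crude summary of non-normality:

import numpy as np

def normal_moment(m):
    # E[u^m] for u ~ N(0,1): zero if m is odd, (m-1)!! if m is even
    if m % 2 == 1:
        return 0.0
    out = 1.0
    for j in range(m - 1, 0, -2):
        out *= j
    return out

rng = np.random.default_rng(0)
n, k, reps = 100, 6, 20000
mu_k = normal_moment(k)
sd_k = np.sqrt(normal_moment(2 * k) - mu_k**2)

stats = np.empty(reps)
for r in range(reps):
    u = rng.normal(size=n)
    e = (u**k - mu_k) / sd_k            # error (6.17): mean zero, unit variance
    stats[r] = np.sqrt(n) * e.mean()    # sqrt(n)(beta_hat - beta) when y_i = beta + e_i
print("skewness:", ((stats - stats.mean())**3).mean() / stats.std()**3)

For $k=1$ the simulated skewness is near zero; for larger $k$ it is far from zero, echoing the skewed densities in Figure 6.3.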
Figure 6.3: Density of Normalized OLS estimator with error process (6.17)

6.4 Joint Distribution

Theorem 6.3.2 gives the joint asymptotic distribution of the coefficient estimates. We can use the result to study the covariance between the coefficient estimates. For example, suppose $k=2$ and write the estimates as $(\widehat\beta_1,\widehat\beta_2)$. For simplicity suppose that the regressors are mean zero. Then we can write
$$Q_{xx}=\begin{pmatrix}\sigma_1^2 & \rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{pmatrix}$$
where $\sigma_1^2$ and $\sigma_2^2$ are the variances of $x_{1i}$ and $x_{2i}$, and $\rho$ is their correlation. If the error is homoskedastic, then the asymptotic variance matrix for $(\widehat\beta_1,\widehat\beta_2)$ is $V_\beta^0=Q_{xx}^{-1}\sigma^2$. By the formula for inversion of a $2\times 2$ matrix,
$$Q_{xx}^{-1}=\frac{1}{\sigma_1^2\sigma_2^2\left(1-\rho^2\right)}\begin{pmatrix}\sigma_2^2 & -\rho\sigma_1\sigma_2\\ -\rho\sigma_1\sigma_2 & \sigma_1^2\end{pmatrix}.$$
Thus if $x_{1i}$ and $x_{2i}$ are positively correlated ($\rho>0$) then $\widehat\beta_1$ and $\widehat\beta_2$ are negatively correlated (and vice-versa).

For illustration, Figure 6.4 displays the probability contours of the joint asymptotic distribution of $\widehat\beta_1-\beta_1$ and $\widehat\beta_2-\beta_2$ when $\sigma_1^2=\sigma_2^2=\sigma^2=1$ and $\rho=0.5$. The coefficient estimates are negatively correlated since the regressors are positively correlated. This means that if $\widehat\beta_1$ is unusually negative, it is likely that $\widehat\beta_2$ is unusually positive, or conversely. It is also unlikely that we will observe both $\widehat\beta_1$ and $\widehat\beta_2$ unusually large and of the same sign.

Figure 6.4: Contours of Joint Distribution of $(\widehat\beta_1,\widehat\beta_2)$, homoskedastic case

This finding that the correlation of the regressors is of opposite sign of the correlation of the coefficient estimates is sensitive to the assumption of homoskedasticity. If the errors are heteroskedastic then this relationship is not guaranteed.

This can be seen through a simple constructed example. Suppose that $x_{1i}$ and $x_{2i}$ only take the values $\{-1,+1\}$, symmetrically, with $\Pr\left(x_{1i}=x_{2i}=1\right)=\Pr\left(x_{1i}=x_{2i}=-1\right)=3/8$, and $\Pr\left(x_{1i}=1,x_{2i}=-1\right)=\Pr\left(x_{1i}=-1,x_{2i}=1\right)=1/8$. You can check that the regressors are mean zero, unit variance and correlation 0.5, which is identical with the setting displayed in Figure 6.4.

Now suppose that the error is heteroskedastic. Specifically, suppose that $\mathbb{E}\left(e_i^2\mid x_{1i}=x_{2i}\right)=\frac{5}{4}$ and $\mathbb{E}\left(e_i^2\mid x_{1i}\ne x_{2i}\right)=\frac{1}{4}$. You can check that $\mathbb{E}\left(e_i^2\right)=1$, $\mathbb{E}\left(x_{1i}^2 e_i^2\right)=\mathbb{E}\left(x_{2i}^2 e_i^2\right)=1$ and $\mathbb{E}\left(x_{1i}x_{2i}e_i^2\right)=\frac{7}{8}$. Therefore
$$V_\beta=Q_{xx}^{-1}\Omega Q_{xx}^{-1}=\frac{16}{9}\begin{pmatrix}1 & -\frac{1}{2}\\ -\frac{1}{2} & 1\end{pmatrix}\begin{pmatrix}1 & \frac{7}{8}\\ \frac{7}{8} & 1\end{pmatrix}\begin{pmatrix}1 & -\frac{1}{2}\\ -\frac{1}{2} & 1\end{pmatrix}=\frac{2}{3}\begin{pmatrix}1 & \frac{1}{4}\\ \frac{1}{4} & 1\end{pmatrix}.$$
Thus the coefficient estimates $\widehat\beta_1$ and $\widehat\beta_2$ are positively correlated (their correlation is $1/4$.) The joint probability contours of their asymptotic distribution are displayed in Figure 6.5. We can see how the two estimates are positively associated.

What we found through this example is that in the presence of heteroskedasticity there is no simple relationship between the correlation of the regressors and the correlation of the parameter estimates.
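The following short Python sketch verifies the constructed example exactly, by taking expectations over the four support points of $(x_{1i},x_{2i})$ with the stated conditional error variances (nothing here is simulated; the only addition is the numerical check itself):

import numpy as np

# Support points (x1, x2) and their probabilities from the constructed example
pts = [(1, 1, 3/8), (-1, -1, 3/8), (1, -1, 1/8), (-1, 1, 1/8)]
Q = np.zeros((2, 2))
Omega = np.zeros((2, 2))
for x1, x2, p in pts:
    x = np.array([x1, x2])
    s2 = 5/4 if x1 == x2 else 1/4        # E[e^2 | x1, x2]
    Q += p * np.outer(x, x)              # builds Q_xx
    Omega += p * s2 * np.outer(x, x)     # builds Omega = E[x x' e^2]
Qi = np.linalg.inv(Q)
V = Qi @ Omega @ Qi
print(Q)        # [[1, 0.5], [0.5, 1]]
print(Omega)    # [[1, 0.875], [0.875, 1]]
print(V)        # [[2/3, 1/6], [1/6, 2/3]]  -> correlation of the estimates is 1/4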
We can extend the above analysis to study the covariance between coefficient sub-vectors. For example, partitioning $x_i'=(x_{1i}',x_{2i}')$ and $\beta'=\left(\beta_1',\beta_2'\right)$, we can write the general model as
$$y_i=x_{1i}'\beta_1+x_{2i}'\beta_2+e_i$$
and the coefficient estimates as $\widehat\beta'=\left(\widehat\beta_1',\widehat\beta_2'\right)$. Make the partitions
$$Q_{xx}=\begin{pmatrix}Q_{11} & Q_{12}\\ Q_{21} & Q_{22}\end{pmatrix},\qquad \Omega=\begin{pmatrix}\Omega_{11} & \Omega_{12}\\ \Omega_{21} & \Omega_{22}\end{pmatrix}. \qquad (6.18)$$
From (2.41)
$$Q_{xx}^{-1}=\begin{pmatrix}Q_{11\cdot 2}^{-1} & -Q_{11\cdot 2}^{-1}Q_{12}Q_{22}^{-1}\\ -Q_{22\cdot 1}^{-1}Q_{21}Q_{11}^{-1} & Q_{22\cdot 1}^{-1}\end{pmatrix}$$

Figure 6.5: Contours of Joint Distribution of $\widehat\beta_1$ and $\widehat\beta_2$, heteroskedastic case

where $Q_{11\cdot 2}=Q_{11}-Q_{12}Q_{22}^{-1}Q_{21}$ and $Q_{22\cdot 1}=Q_{22}-Q_{21}Q_{11}^{-1}Q_{12}$. Thus when the error is homoskedastic,
$$\mathrm{cov}\left(\widehat\beta_1,\widehat\beta_2\right)=-\sigma^2 Q_{11\cdot 2}^{-1}Q_{12}Q_{22}^{-1}$$
which is a matrix generalization of the two-regressor case.

In the general case, you can show that (Exercise 6.5)
$$V_\beta=\begin{pmatrix}V_{11} & V_{12}\\ V_{21} & V_{22}\end{pmatrix} \qquad (6.19)$$
where
$$V_{11}=Q_{11\cdot 2}^{-1}\left(\Omega_{11}-Q_{12}Q_{22}^{-1}\Omega_{21}-\Omega_{12}Q_{22}^{-1}Q_{21}+Q_{12}Q_{22}^{-1}\Omega_{22}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot 2}^{-1} \qquad (6.20)$$
$$V_{21}=Q_{22\cdot 1}^{-1}\left(\Omega_{21}-Q_{21}Q_{11}^{-1}\Omega_{11}-\Omega_{22}Q_{22}^{-1}Q_{21}+Q_{21}Q_{11}^{-1}\Omega_{12}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot 2}^{-1} \qquad (6.21)$$
$$V_{22}=Q_{22\cdot 1}^{-1}\left(\Omega_{22}-Q_{21}Q_{11}^{-1}\Omega_{12}-\Omega_{21}Q_{11}^{-1}Q_{12}+Q_{21}Q_{11}^{-1}\Omega_{11}Q_{11}^{-1}Q_{12}\right)Q_{22\cdot 1}^{-1} \qquad (6.22)$$
Unfortunately, these expressions are not easily interpretable.
6.5 Consistency of Error Variance Estimators

Using the methods of Section 6.2 we can show that the estimators $\widehat\sigma^2=\frac{1}{n}\sum_{i=1}^n\widehat e_i^2$ and $s^2=\frac{1}{n-k}\sum_{i=1}^n\widehat e_i^2$ are consistent for $\sigma^2$.

The trick is to write the residual $\widehat e_i$ as equal to the error $e_i$ plus a deviation term
$$\widehat e_i=y_i-x_i'\widehat\beta=e_i+x_i'\beta-x_i'\widehat\beta=e_i-x_i'\left(\widehat\beta-\beta\right).$$
Thus the squared residual equals the squared error plus a deviation
$$\widehat e_i^2=e_i^2-2e_i x_i'\left(\widehat\beta-\beta\right)+\left(\widehat\beta-\beta\right)'x_i x_i'\left(\widehat\beta-\beta\right). \qquad (6.23)$$
So when we take the average of the squared residuals we obtain the average of the squared errors, plus two terms which are (hopefully) asymptotically negligible:
$$\widehat\sigma^2=\frac{1}{n}\sum_{i=1}^n e_i^2-2\Big(\frac{1}{n}\sum_{i=1}^n e_i x_i'\Big)\left(\widehat\beta-\beta\right)+\left(\widehat\beta-\beta\right)'\Big(\frac{1}{n}\sum_{i=1}^n x_i x_i'\Big)\left(\widehat\beta-\beta\right). \qquad (6.24)$$
Indeed, the WLLN shows that
$$\frac{1}{n}\sum_{i=1}^n e_i^2\xrightarrow{p}\sigma^2,\qquad \frac{1}{n}\sum_{i=1}^n e_i x_i'\xrightarrow{p}\mathbb{E}\left(e_i x_i'\right)=0,\qquad \frac{1}{n}\sum_{i=1}^n x_i x_i'\xrightarrow{p}\mathbb{E}\left(x_i x_i'\right)=Q_{xx}$$
and Theorem 6.2.1 shows that $\widehat\beta\xrightarrow{p}\beta$. Hence (6.24) converges in probability to $\sigma^2$, as desired.

Finally, since $n/(n-k)\to 1$ as $n\to\infty$, it follows that
$$s^2=\Big(\frac{n}{n-k}\Big)\widehat\sigma^2\xrightarrow{p}\sigma^2.$$
Thus both estimators are consistent.

Theorem 6.5.1 Under Assumption 6.1.1, $\widehat\sigma^2\xrightarrow{p}\sigma^2$ and $s^2\xrightarrow{p}\sigma^2$ as $n\to\infty$.
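A minimal sketch of the two estimators in Python (the data arguments are generic; nothing is tied to a particular data set):

import numpy as np

def error_variance_estimates(y, X):
    """Return (sigma2_hat, s2), the two variance estimators of Section 6.5."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    ehat = y - X @ beta
    sigma2_hat = ehat @ ehat / n        # 1/n times the sum of squared residuals
    s2 = ehat @ ehat / (n - k)          # degrees-of-freedom adjusted version
    return sigma2_hat, s2

Both outputs converge to the same limit; they differ only by the factor n/(n-k).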
6.6 Homoskedastic Covariance Matrix Estimation

Theorem 6.3.2 describes the asymptotic covariance matrix of the least-squares estimator $\widehat\beta$. For asymptotic inference (confidence intervals and tests) we need a consistent estimate of its covariance matrix. In this section we start with the simplified problem of estimating $V_\beta^0=Q_{xx}^{-1}\sigma^2$, the asymptotic variance of $\widehat\beta$ under conditional homoskedasticity. As we described in Section 4.10, the conventional estimator is
$$\widehat V_\beta^0=\widehat Q_{xx}^{-1}s^2$$
where $\widehat Q_{xx}$ and $s^2$ are defined in (6.1) and (4.20).

We now show that this estimator is consistent for $V_\beta^0$. Since the estimator is the product of two moment estimates, the method is to show consistency of each moment estimator, and then apply the continuous mapping theorem to the product.

Theorem 6.2.1 established that $\widehat Q_{xx}^{-1}\xrightarrow{p}Q_{xx}^{-1}$, and Theorem 6.5.1 established $s^2\xrightarrow{p}\sigma^2$. It follows by the CMT that
$$\widehat V_\beta^0=\widehat Q_{xx}^{-1}s^2\xrightarrow{p}Q_{xx}^{-1}\sigma^2=V_\beta^0$$
so that $\widehat V_\beta^0$ is consistent for $V_\beta^0$, as desired.

Theorem 6.6.1 Under Assumption 6.1.1, $\widehat V_\beta^0\xrightarrow{p}V_\beta^0$ as $n\to\infty$.

It is instructive to notice that Theorem 6.6.1 does not require the assumption of homoskedasticity. That is, $\widehat V_\beta^0$ is consistent for $V_\beta^0$ regardless of whether the regression is homoskedastic or heteroskedastic. However, $V_\beta^0=V_\beta=\mathrm{avar}(\widehat\beta)$ only under homoskedasticity. Thus in the general case, $\widehat V_\beta^0$ is consistent for a well-defined but non-useful object.
6.7 Heteroskedastic Covariance Matrix Estimation

Theorem 6.3.2 established that the asymptotic variance of $\widehat\beta$ is $V_\beta=Q_{xx}^{-1}\Omega Q_{xx}^{-1}$. We now consider estimation of this covariance matrix without imposing homoskedasticity. The standard approach is to use a plug-in estimator which replaces the unknowns with sample moments.

The moment estimator for $\Omega$ is
$$\widehat\Omega=\frac{1}{n}\sum_{i=1}^n x_i x_i'\widehat e_i^2, \qquad (6.25)$$
leading to the plug-in covariance matrix estimator
$$\widehat V_\beta=\widehat Q_{xx}^{-1}\widehat\Omega\widehat Q_{xx}^{-1}. \qquad (6.26)$$
You can check that this is identical to the White covariance matrix estimator $\widehat V_{\widehat\beta}$ introduced in (4.28). Here we write the estimator as $\widehat V_\beta$ to indicate that it is an estimate of $V_\beta=\mathrm{avar}(\widehat\beta)$. We will use both $\widehat V_{\widehat\beta}$ and $\widehat V_\beta$ to indicate (6.26).

As shown in Theorem 6.2.1, $\widehat Q_{xx}^{-1}\xrightarrow{p}Q_{xx}^{-1}$, so we just need to verify the consistency of $\widehat\Omega$. The key is to replace the squared residual $\widehat e_i^2$ with the squared error $e_i^2$, and then show that the difference is asymptotically negligible.

Specifically, observe that
$$\widehat\Omega=\frac{1}{n}\sum_{i=1}^n x_i x_i'\widehat e_i^2=\frac{1}{n}\sum_{i=1}^n x_i x_i'e_i^2+\frac{1}{n}\sum_{i=1}^n x_i x_i'\left(\widehat e_i^2-e_i^2\right). \qquad (6.27)$$
The first term is an average of the iid random variables $x_i x_i'e_i^2$, and therefore by the WLLN converges in probability to its expectation, namely,
$$\frac{1}{n}\sum_{i=1}^n x_i x_i'e_i^2\xrightarrow{p}\mathbb{E}\left(x_i x_i'e_i^2\right)=\Omega.$$
Technically, this requires that $\Omega$ has finite elements, which was shown in (6.10).

So to establish that $\widehat\Omega$ is consistent for $\Omega$ it remains to show that
$$\frac{1}{n}\sum_{i=1}^n x_i x_i'\left(\widehat e_i^2-e_i^2\right)\xrightarrow{p}0. \qquad (6.28)$$
There are multiple ways to do this. A reasonably straightforward yet slightly tedious derivation is to start by applying the Triangle Inequality (A.12):
$$\Big\|\frac{1}{n}\sum_{i=1}^n x_i x_i'\left(\widehat e_i^2-e_i^2\right)\Big\|\le\frac{1}{n}\sum_{i=1}^n\left\|x_i x_i'\left(\widehat e_i^2-e_i^2\right)\right\|=\frac{1}{n}\sum_{i=1}^n\|x_i\|^2\left|\widehat e_i^2-e_i^2\right|. \qquad (6.29)$$
Then recalling the expression for the squared residual (6.23), apply the Triangle Inequality and then the Schwarz Inequality (A.10) twice:
$$\left|\widehat e_i^2-e_i^2\right|\le 2\left|e_i x_i'\left(\widehat\beta-\beta\right)\right|+\left(\widehat\beta-\beta\right)'x_i x_i'\left(\widehat\beta-\beta\right)
=2|e_i|\left|x_i'\left(\widehat\beta-\beta\right)\right|+\left|x_i'\left(\widehat\beta-\beta\right)\right|^2$$
$$\le 2|e_i|\,\|x_i\|\left\|\widehat\beta-\beta\right\|+\|x_i\|^2\left\|\widehat\beta-\beta\right\|^2. \qquad (6.30)$$
Combining (6.29) and (6.30), we find
$$\Big\|\frac{1}{n}\sum_{i=1}^n x_i x_i'\left(\widehat e_i^2-e_i^2\right)\Big\|\le 2\Big(\frac{1}{n}\sum_{i=1}^n\|x_i\|^3|e_i|\Big)\left\|\widehat\beta-\beta\right\|+\Big(\frac{1}{n}\sum_{i=1}^n\|x_i\|^4\Big)\left\|\widehat\beta-\beta\right\|^2=o_p(1). \qquad (6.31)$$
The expression is $o_p(1)$ because $\left\|\widehat\beta-\beta\right\|\xrightarrow{p}0$, and both averages in parentheses are averages of random variables with finite mean under Assumption 6.1.2. Indeed, by Hölder's Inequality (B.16)
$$\mathbb{E}\left(\|x_i\|^3|e_i|\right)\le\Big(\mathbb{E}\left(\|x_i\|^3\right)^{4/3}\Big)^{3/4}\left(\mathbb{E}e_i^4\right)^{1/4}=\left(\mathbb{E}\|x_i\|^4\right)^{3/4}\left(\mathbb{E}e_i^4\right)^{1/4}<\infty.$$
We have established (6.28), as desired.

Theorem 6.7.1 Under Assumption 6.1.2, as $n\to\infty$, $\widehat\Omega\xrightarrow{p}\Omega$ and $\widehat V_\beta\xrightarrow{p}V_\beta$.
6.8 Alternative Covariance Matrix Estimators*

In Section 4.11 we also introduced the alternative heteroskedasticity-robust covariance matrix estimators $\overline V_{\widehat\beta}$ and $\widetilde V_{\widehat\beta}$, which take the form (6.26) but with $\widehat\Omega$ replaced by
$$\overline\Omega=\frac{1}{n}\sum_{i=1}^n\left(1-h_{ii}\right)^{-2}x_i x_i'\widehat e_i^2$$
and
$$\widetilde\Omega=\frac{1}{n}\sum_{i=1}^n\left(1-h_{ii}\right)^{-1}x_i x_i'\widehat e_i^2,$$
respectively. To show that these estimators are also consistent for $V_\beta$, given $\widehat\Omega\xrightarrow{p}\Omega$, it is sufficient to show that the differences $\overline\Omega-\widehat\Omega$ and $\widetilde\Omega-\widehat\Omega$ converge in probability to zero as $n\to\infty$.

The trick is to use the fact that the leverage values are asymptotically negligible:
$$\max_{1\le i\le n}h_{ii}=o_p(1). \qquad (6.32)$$
(See Theorem 6.20.1 in Section 6.20.) Then using the Triangle Inequality and (6.32)
$$\left\|\widetilde\Omega-\widehat\Omega\right\|\le\frac{1}{n}\sum_{i=1}^n\left\|x_i x_i'\right\|\widehat e_i^2\left|\left(1-h_{ii}\right)^{-1}-1\right|
\le\Big(\frac{1}{n}\sum_{i=1}^n\|x_i\|^2\widehat e_i^2\Big)\max_{1\le i\le n}\left|\frac{h_{ii}}{1-h_{ii}}\right|=o_p(1).$$
Similarly,
$$\left\|\overline\Omega-\widehat\Omega\right\|\le\frac{1}{n}\sum_{i=1}^n\left\|x_i x_i'\right\|\widehat e_i^2\left|\left(1-h_{ii}\right)^{-2}-1\right|
\le\Big(\frac{1}{n}\sum_{i=1}^n\|x_i\|^2\widehat e_i^2\Big)\max_{1\le i\le n}\left|\frac{2h_{ii}-h_{ii}^2}{\left(1-h_{ii}\right)^2}\right|=o_p(1).$$

Theorem 6.8.1 Under Assumption 6.1.2, as $n\to\infty$, $\overline\Omega\xrightarrow{p}\Omega$, $\widetilde\Omega\xrightarrow{p}\Omega$, $\overline V_\beta\xrightarrow{p}V_\beta$, and $\widetilde V_\beta\xrightarrow{p}V_\beta$.
6.9 Functions of Parameters

Sometimes we are interested in a transformation of the coefficient vector $\beta=(\beta_1,\ldots,\beta_k)$. For example, we may be interested in a single coefficient $\beta_j$, or a ratio $\beta_j/\beta_l$. In these cases we can write the transformation as a function of the coefficients, e.g. $\theta=h(\beta)$ for some function $h:\mathbb{R}^k\to\mathbb{R}^q$. The estimate of $\theta$ is $\widehat\theta=h(\widehat\beta)$.

By the continuous mapping theorem (Theorem 5.9.1) and the fact $\widehat\beta\xrightarrow{p}\beta$ we can deduce that $\widehat\theta$ is consistent for $\theta$.

Theorem 6.9.1 Under Assumption 6.1.1, if $h(\beta)$ is continuous at the true value of $\beta$, then as $n\to\infty$, $\widehat\theta\xrightarrow{p}\theta$.

Furthermore, by the Delta Method (Theorem 5.10.3) we know that $\widehat\theta$ is asymptotically normal.

Assumption 6.9.1 $h(\beta):\mathbb{R}^k\to\mathbb{R}^q$ is continuously differentiable at the true value of $\beta$ and $H_\beta=\frac{\partial}{\partial\beta}h(\beta)'$ has rank $q$.

Theorem 6.9.2 Asymptotic Distribution of Functions of Parameters
Under Assumptions 6.1.2 and 6.9.1, as $n\to\infty$,
$$\sqrt{n}\left(\widehat\theta-\theta\right)\xrightarrow{d}\mathrm{N}\left(0,V_\theta\right) \qquad (6.33)$$
where
$$V_\theta=H_\beta'V_\beta H_\beta. \qquad (6.34)$$

In many cases, the function $h(\beta)$ is linear:
$$h(\beta)=R'\beta$$
for some $k\times q$ matrix $R$. In this case, $H_\beta=R$. In particular, if $R$ is a "selector matrix"
$$R=\begin{pmatrix}I\\ 0\end{pmatrix} \qquad (6.35)$$
then we can conformably partition $\beta=(\beta_1',\beta_2')'$ so that $R'\beta=\beta_1$. Then
$$V_\theta=\begin{pmatrix}I & 0\end{pmatrix}V_\beta\begin{pmatrix}I\\ 0\end{pmatrix}=V_{11},$$
the upper-left sub-matrix $V_{11}$ given in (6.20). In this case (6.33) states that
$$\sqrt{n}\left(\widehat\beta_1-\beta_1\right)\xrightarrow{d}\mathrm{N}\left(0,V_{11}\right).$$
That is, subsets of $\widehat\beta$ are approximately normal with variances given by the conformable subcomponents of $V_\beta$.

To illustrate the case of a nonlinear transformation, take the example $\theta=\beta_j/\beta_l$ for $j\ne l$. Then
$$H_\beta=\begin{pmatrix}0\\ \vdots\\ 1/\beta_l\\ \vdots\\ -\beta_j/\beta_l^2\\ \vdots\\ 0\end{pmatrix} \qquad (6.36)$$
so
$$V_\theta=V_{jj}/\beta_l^2+V_{ll}\beta_j^2/\beta_l^4-2V_{jl}\beta_j/\beta_l^3$$
where $V_{ab}$ denotes the $ab$'th element of $V_\beta$.

For inference we need an estimate of the asymptotic variance matrix $V_\theta=H_\beta'V_\beta H_\beta$, and for this it is typical to use a plug-in estimator. The natural estimator of $H_\beta$ is the derivative evaluated at the point estimates
$$\widehat H_\beta=\frac{\partial}{\partial\beta}h(\widehat\beta)'. \qquad (6.37)$$
The derivative in (6.37) may be calculated analytically or numerically. By analytically, we mean working out the formula for the derivative and replacing the unknowns by point estimates. For example, if $\theta=\beta_j/\beta_l$, then $\frac{\partial}{\partial\beta}h(\beta)$ is (6.36). However in some cases the function $h(\beta)$ may be extremely complicated and a formula for the analytic derivative may not be easily available. In this case calculation by numerical differentiation may be preferable. Let $\delta_l=(0\cdots 1\cdots 0)'$ be the unit vector with the "1" in the $l$'th place. Then the $jl$'th element of a numerical derivative $\widehat H_\beta$ is
$$\widehat H_{jl}=\frac{h_j(\widehat\beta+\delta_l\varepsilon)-h_j(\widehat\beta)}{\varepsilon}$$
for some small $\varepsilon$.

The estimate of $V_\theta$ is
$$\widehat V_\theta=\widehat H_\beta'\widehat V_\beta\widehat H_\beta. \qquad (6.38)$$
Alternatively, $\widehat V_\beta^0$, $\overline V_\beta$ or $\widetilde V_\beta$ may be used in place of $\widehat V_\beta$. Given (6.37), (6.38) is simple to calculate using matrix operations.
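A minimal Python sketch of the plug-in variance (6.38) using the numerical derivative just described (the ratio example and the numbers in the usage lines are hypothetical, chosen only to make the sketch self-contained):

import numpy as np

def delta_method_cov(h, beta_hat, V_beta, eps=1e-6):
    """V_theta = H' V_beta H with H estimated by one-sided numerical derivatives as in (6.37)."""
    theta = np.atleast_1d(h(beta_hat))
    k, q = beta_hat.size, theta.size
    H = np.zeros((k, q))
    for l in range(k):
        step = np.zeros(k)
        step[l] = eps
        H[l, :] = (np.atleast_1d(h(beta_hat + step)) - theta) / eps
    return H.T @ V_beta @ H

# Example: theta = beta_2 / beta_3 for a hypothetical estimate and covariance matrix
beta_hat = np.array([1.0, 0.5, 2.0])
V_beta = np.diag([0.2, 0.1, 0.3])
print(delta_method_cov(lambda b: b[1] / b[2], beta_hat, V_beta))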
As the primary justification for $\widehat V_\theta$ is the asymptotic approximation (6.33), $\widehat V_\theta$ is often called an asymptotic covariance matrix estimator.

The estimator $\widehat V_\theta$ is consistent for $V_\theta$ under the conditions of Theorem 6.9.2 since $\widehat V_\beta\xrightarrow{p}V_\beta$ by Theorem 6.7.1, and
$$\widehat H_\beta=\frac{\partial}{\partial\beta}h(\widehat\beta)'\xrightarrow{p}\frac{\partial}{\partial\beta}h(\beta)'=H_\beta$$
since $\widehat\beta\xrightarrow{p}\beta$ and the function $\frac{\partial}{\partial\beta}h(\beta)'$ is continuous.

Theorem 6.9.3 Under Assumptions 6.1.2 and 6.9.1, as $n\to\infty$, $\widehat V_\theta\xrightarrow{p}V_\theta$.
6.10 Asymptotic Standard Errors

As described in Section 4.12, a standard error is an estimate of the standard deviation of the distribution of an estimator. Thus if $\widehat V_\beta$ is an estimate of the asymptotic covariance of $\sqrt{n}\left(\widehat\beta-\beta\right)$, then $n^{-1}\widehat V_\beta$ is an estimate of the variance of $\widehat\beta$, and standard errors are the square roots of the diagonal elements of this matrix. These take the form
$$s(\widehat\beta_j)=\sqrt{n^{-1}\widehat V_{\beta_j}}=n^{-1/2}\sqrt{\big[\widehat V_\beta\big]_{jj}}.$$
When the justification for $\widehat V_\beta$ is based on asymptotic theory we call $s(\widehat\beta_j)$ an asymptotic standard error for $\widehat\beta_j$.

Standard errors for $\widehat\theta$ are constructed similarly. Supposing that $q=1$ (so $h(\beta)$ is real-valued), then the asymptotic standard error for $\widehat\theta$ is the square root of $n^{-1}\widehat V_\theta$, that is,
$$s(\widehat\theta)=n^{-1/2}\sqrt{\widehat V_\theta}=n^{-1/2}\sqrt{\widehat H_\beta'\widehat V_\beta\widehat H_\beta}.$$
When calculating and reporting coefficient estimates $\widehat\beta$ or estimates $\widehat\theta$ which are transformations of the original coefficient estimates, it is good practice to report standard errors for each reported estimate. This helps users of the work assess the estimation precision.
6.11 t statistic

Let $\theta=h(\beta):\mathbb{R}^k\to\mathbb{R}$ be any parameter of interest (for example, $\theta$ could be a single element of $\beta$), $\widehat\theta$ its estimate and $s(\widehat\theta)$ its asymptotic standard error. Consider the statistic
$$t_n(\theta)=\frac{\widehat\theta-\theta}{s(\widehat\theta)}. \qquad (6.39)$$
Different writers have called (6.39) a t-statistic, a t-ratio, a z-statistic or a studentized statistic, sometimes using the different labels to distinguish between finite-sample and asymptotic inference. As the statistics themselves are always (6.39) we won't make such a distinction, and will simply refer to $t_n(\theta)$ as a t-statistic or a t-ratio. We also often suppress the parameter dependence, writing it as $t_n$. The t-statistic is a simple function of the estimate, its standard error, and the parameter.

By Theorem 6.9.2, $\sqrt{n}\left(\widehat\theta-\theta\right)\xrightarrow{d}\mathrm{N}(0,V_\theta)$ and $\widehat V_\theta\xrightarrow{p}V_\theta$. Thus
$$t_n(\theta)=\frac{\widehat\theta-\theta}{s(\widehat\theta)}=\frac{\sqrt{n}\left(\widehat\theta-\theta\right)}{\sqrt{\widehat V_\theta}}\xrightarrow{d}\frac{\mathrm{N}(0,V_\theta)}{\sqrt{V_\theta}}=Z\sim\mathrm{N}(0,1).$$
The last equality is by the property that linear scales of normal distributions are normal.

Thus the asymptotic distribution of the t-ratio $t_n(\theta)$ is the standard normal. Since this distribution does not depend on the parameters, we say that $t_n(\theta)$ is asymptotically pivotal. In special cases (such as the normal regression model, see Section 3.17), the statistic $t_n$ has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that $t_n$ is exactly pivotal. In general, however, pivotal statistics are unavailable and we must rely on asymptotically pivotal statistics.

As we will see in the next section, it is also useful to consider the distribution of the absolute t-ratio $|t_n(\theta)|$. Since $t_n(\theta)\xrightarrow{d}Z$, the continuous mapping theorem yields $|t_n(\theta)|\xrightarrow{d}|Z|$. Letting $\Phi(u)=\Pr\left(Z\le u\right)$ denote the standard normal distribution function, we can calculate that the distribution function of $|Z|$ is
$$\Pr\left(|Z|\le u\right)=\Pr\left(-u\le Z\le u\right)=\Pr\left(Z\le u\right)-\Pr\left(Z<-u\right)=\Phi(u)-\Phi(-u)=2\Phi(u)-1=:\overline\Phi(u). \qquad (6.40)$$

Theorem 6.11.1 Under Assumptions 6.1.2 and 6.9.1, $t_n(\theta)\xrightarrow{d}Z\sim\mathrm{N}(0,1)$ and $|t_n(\theta)|\xrightarrow{d}|Z|$.

The asymptotic normality of Theorem 6.11.1 is used to justify confidence intervals and tests for the parameters.
6.12 Confidence Intervals

The OLS estimate $\widehat\beta$ is a point estimate for $\beta$, meaning that $\widehat\beta$ is a single value in $\mathbb{R}^k$. A broader concept is a set estimate $C_n$ which is a collection of values in $\mathbb{R}^k$. When the parameter $\theta$ is real-valued then it is common to focus on intervals $C_n=[L_n,U_n]$, which is called an interval estimate for $\theta$. The goal of an interval estimate $C_n$ is typically to contain the true value, e.g. $\theta\in C_n$, with high probability, yet without being too big.

The interval estimate $C_n$ is a function of the data and hence is random. The coverage probability of the interval $C_n=[L_n,U_n]$ is $\Pr_\theta\left(\theta\in C_n\right)$. The randomness comes from $C_n$ as the parameter $\theta$ is treated as fixed.

Interval estimates $C_n$ are typically called confidence intervals as the goal is typically to set the coverage probability to equal a pre-specified target, typically 90% or 95%. $C_n$ is called a $(1-\alpha)\%$ confidence interval if $\inf_\theta\Pr_\theta\left(\theta\in C_n\right)=1-\alpha$.

There is not a unique method to construct confidence intervals. For example, a simple (yet silly) interval is
$$C_n=\begin{cases}\mathbb{R} & \text{with probability }1-\alpha\\ \widehat\theta & \text{with probability }\alpha.\end{cases}$$
By construction, if $\widehat\theta$ has a continuous distribution, $\Pr(\theta\in C_n)=1-\alpha$, so this confidence interval has perfect coverage, but $C_n$ is uninformative about $\theta$ and is therefore not useful.

When we have an asymptotically normal parameter estimate $\widehat\theta$ with standard error $s(\widehat\theta)$, the standard confidence interval for $\theta$ takes the form
$$C_n=\left[\widehat\theta-c\,s(\widehat\theta),\ \widehat\theta+c\,s(\widehat\theta)\right] \qquad (6.41)$$
where $c>0$ is a pre-specified constant. This confidence interval is symmetric about the point estimate $\widehat\theta$, and its length is proportional to the standard error $s(\widehat\theta)$.

Equivalently, $C_n$ is the set of parameter values for $\theta$ such that the t-statistic $t_n(\theta)$ is smaller (in absolute value) than $c$, that is
$$C_n=\left\{\theta:|t_n(\theta)|\le c\right\}=\Big\{\theta:-c\le\frac{\widehat\theta-\theta}{s(\widehat\theta)}\le c\Big\}.$$
The coverage probability of this confidence interval is
$$\Pr\left(\theta\in C_n\right)=\Pr\left(|t_n(\theta)|\le c\right)$$
which is generally unknown. We can approximate the coverage probability by taking the asymptotic limit as $n\to\infty$. Since $|t_n(\theta)|$ is asymptotically $|Z|$ (Theorem 6.11.1), it follows that as $n\to\infty$,
$$\Pr\left(\theta\in C_n\right)\to\Pr\left(|Z|\le c\right)=\overline\Phi(c)$$
where $\overline\Phi(u)$ is given in (6.40). We call this the asymptotic coverage probability. Since the t-ratio is asymptotically pivotal, the asymptotic coverage probability is independent of the parameter $\theta$, and is only a function of $c$.

As we mentioned before, an ideal confidence interval has a pre-specified probability coverage $1-\alpha$, typically 90% or 95%. This means selecting the constant $c$ so that
$$\overline\Phi(c)=1-\alpha.$$
Effectively, this makes $c$ a function of $\alpha$, and it can be backed out of a normal distribution table. For example, $\alpha=0.05$ (a 95% interval) implies $c=1.96$ and $\alpha=0.1$ (a 90% interval) implies $c=1.645$. Rounding 1.96 to 2, we obtain the most commonly used confidence interval in applied econometric practice
$$C_n=\left[\widehat\theta-2s(\widehat\theta),\ \widehat\theta+2s(\widehat\theta)\right]. \qquad (6.42)$$
This is a useful rule-of-thumb. This asymptotic 95% confidence interval $C_n$ is simple to compute and can be roughly calculated from tables of coefficient estimates and standard errors. (Technically, it is an asymptotic 95.4% interval, due to the substitution of 2.0 for 1.96, but this distinction is meaningless.)

Theorem 6.12.1 Under Assumptions 6.1.2 and 6.9.1, for $C_n$ defined in (6.41), $\Pr\left(\theta\in C_n\right)\to\overline\Phi(c)$. For $c=1.96$, $\Pr\left(\theta\in C_n\right)\to 0.95$.

Confidence intervals are a simple yet effective tool to assess estimation uncertainty. When reading a set of empirical results, look at the estimated coefficient estimates and the standard errors. For a parameter of interest, compute the confidence interval $C_n$ and consider the meaning of the spread of the suggested values. If the range of values in the confidence interval is too wide to learn about $\theta$, then do not jump to a conclusion about $\theta$ based on the point estimate alone.
6.13 Regression Intervals

In the linear regression model the conditional mean of $y_i$ given $x_i=x$ is
$$m(x)=\mathbb{E}\left(y_i\mid x_i=x\right)=x'\beta.$$
In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta)=x'\beta$ and $\theta=h(\beta)$, we see that $\widehat m(x)=\widehat\theta=x'\widehat\beta$ and $H_\beta=x$, so $s(\widehat\theta)=\sqrt{n^{-1}x'\widehat V_\beta x}$. Thus an asymptotic 95% confidence interval for $m(x)$ is
$$\left[x'\widehat\beta\pm 2\sqrt{n^{-1}x'\widehat V_\beta x}\right].$$
It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.

To illustrate, we return to the log wage regression (3.11) of Section 3.6. The estimated regression equation is
$$\widehat{\log(Wage)}=x'\widehat\beta=0.626+0.156\,x$$
where $x=education$. The White covariance matrix estimate is
$$\widehat V_{\widehat\beta}=\begin{pmatrix}7.092 & -0.445\\ -0.445 & 0.029\end{pmatrix}$$
and the sample size is $n=61$. Thus the 95% confidence interval for the regression takes the form
$$0.626+0.156\,x\pm 2\sqrt{\frac{1}{61}\left(7.092-0.89\,x+0.029\,x^2\right)}.$$
The estimated regression and 95% intervals are shown in Figure 6.6. Notice that the confidence bands take a hyperbolic shape. This means that the regression line is less precisely estimated for very large and very small values of education.

Figure 6.6: Wage on Education Regression Intervals

Plots of the estimated regression line and confidence intervals are especially useful when the regression includes nonlinear terms. To illustrate, consider the log wage regression (3.12) which includes experience and its square,
$$\widehat{\log(Wage)}=1.06+0.116\,education+0.010\,experience-0.014\,experience^2/100 \qquad (6.43)$$
and has $n=2454$ observations. We are interested in plotting the regression estimate and regression intervals as a function of experience. Since the regression also includes education, to plot the estimates in a simple graph we need to fix education at a specific value. We select education = 12. This only affects the level of the estimated regression, since education enters without an interaction. Define the points of evaluation
$$z(x)=\begin{pmatrix}1\\ 12\\ x\\ x^2/100\end{pmatrix}$$
where $x=$ experience. The covariance matrix estimate is
$$\widehat V_{\widehat\beta}=\begin{pmatrix}22.92 & -1.0601 & -0.56687 & 0.86626\\ -1.0601 & 0.06454 & 0.0080787 & -0.0066740\\ -0.56687 & 0.0080787 & 0.040786 & -0.075588\\ 0.86626 & -0.0066740 & -0.075588 & 0.14994\end{pmatrix}.$$
Thus the regression interval for education = 12, as a function of $x=$ experience, is
$$1.06+0.116\cdot 12+0.010\,x-0.014\,x^2/100\pm 2\sqrt{\frac{1}{2454}\,z(x)'\widehat V_{\widehat\beta}\,z(x)}$$
$$=2.452+0.010\,x-0.00014\,x^2\pm\frac{2}{100}\sqrt{27.592-3.8304\,x+0.23007\,x^2-0.00616\,x^3+0.0000611\,x^4}.$$

Figure 6.7: Wage on Experience Regression Intervals

The estimated regression and 95% intervals are shown in Figure 6.7. The regression interval widens greatly for small and large values of experience, indicating considerable uncertainty about the effect of experience on mean wages for this population. The confidence bands take a more complicated shape than in Figure 6.6 due to the nonlinear specification.
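A short Python sketch of the pointwise interval, reusing the two-regressor numbers reconstructed above (so the output is only as reliable as that reconstruction; the function itself is generic):

import numpy as np

def regression_interval(x, beta_hat, V_beta, n, c=2.0):
    """Pointwise interval x'beta_hat +/- c*sqrt(n^{-1} x'V x) for the conditional mean."""
    m = x @ beta_hat
    se = np.sqrt(x @ V_beta @ x / n)
    return m - c * se, m + c * se

beta_hat = np.array([0.626, 0.156])
V_beta = np.array([[7.092, -0.445], [-0.445, 0.029]])
for educ in (8, 12, 16, 20):
    print(educ, regression_interval(np.array([1.0, educ]), beta_hat, V_beta, n=61))

The widening of the interval at the extremes of education mirrors the hyperbolic bands in Figure 6.6.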
6.14 Forecast Intervals

For a given value of $x_i=x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable rule is the conditional mean $m(x)$ as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\widehat m(x)=x'\widehat\beta$. We would also like a measure of uncertainty for the forecast.

The forecast error is $\widehat e_i=y_i-\widehat m(x)=e_i-x'\left(\widehat\beta-\beta\right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\widehat\beta$, this has variance
$$\mathbb{E}\widehat e_i^2=\mathbb{E}\left(e_i^2\mid x_i=x\right)+x'\mathbb{E}\left(\widehat\beta-\beta\right)\left(\widehat\beta-\beta\right)'x=\sigma^2(x)+n^{-1}x'V_\beta x.$$
Assuming $\mathbb{E}\left(e_i^2\mid x_i\right)=\sigma^2$, the natural estimate of this variance is $\widehat\sigma^2+n^{-1}x'\widehat V_\beta x$, so a standard error for the forecast is $\widehat s(x)=\sqrt{\widehat\sigma^2+n^{-1}x'\widehat V_\beta x}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\widetilde\sigma^2(x)=\widetilde\alpha'z$ from (9.5), then the forecast standard error is $\widehat s(x)=\sqrt{\widetilde\sigma^2(x)+n^{-1}x'\widehat V_\beta x}$.

It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
$$\left[x'\widehat\beta\pm 2\widehat s(x)\right],$$
but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
$$\frac{e_i-x'\left(\widehat\beta-\beta\right)}{\widehat s(x)}.$$
But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $\mathrm{N}(0,\sigma^2)$, which is generally invalid.

To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i=x$, which is a much more difficult task. Perhaps due to this difficulty, many applied forecasters use the simple approximate interval $\left[x'\widehat\beta\pm 2\widehat s(x)\right]$ despite the lack of a convincing justification.
6.15 Wald Statistic

Let $\theta=h(\beta):\mathbb{R}^k\to\mathbb{R}^q$ be any parameter vector of interest, $\widehat\theta$ its estimate and $\widehat V_\theta$ its covariance matrix estimator. Consider the quadratic form
$$W_n(\theta)=n\left(\widehat\theta-\theta\right)'\widehat V_\theta^{-1}\left(\widehat\theta-\theta\right). \qquad (6.44)$$
When $q=1$, then $W_n(\theta)=t_n(\theta)^2$ is the square of the t-ratio. When $q>1$, $W_n(\theta)$ is typically called a Wald statistic. We are interested in its sampling distribution.

The asymptotic distribution of $W_n(\theta)$ is simple to derive given Theorem 6.9.2 and Theorem 6.9.3, which show that
$$\sqrt{n}\left(\widehat\theta-\theta\right)\xrightarrow{d}Z\sim\mathrm{N}\left(0,V_\theta\right)$$
and
$$\widehat V_\theta\xrightarrow{p}V_\theta.$$
It follows that
$$W_n(\theta)=\sqrt{n}\left(\widehat\theta-\theta\right)'\widehat V_\theta^{-1}\sqrt{n}\left(\widehat\theta-\theta\right)\xrightarrow{d}Z'V_\theta^{-1}Z, \qquad (6.45)$$
a quadratic in the normal random vector $Z$. Here we can appeal to a useful result from probability theory. (See Theorem B.9.3 in the Appendix.)

Theorem 6.15.1 If $Z\sim\mathrm{N}(0,A)$ with $A>0$, $q\times q$, then $Z'A^{-1}Z\sim\chi_q^2$, a chi-square random variable with $q$ degrees of freedom.

The asymptotic distribution in (6.45) takes exactly this form. Note that $V_\theta>0$ since $H_\beta$ is full rank under Assumption 6.9.1. It follows that $W_n(\theta)$ converges in distribution to a chi-square random variable.

Theorem 6.15.2 Under Assumptions 6.1.2 and 6.9.1, as $n\to\infty$, $W_n(\theta)\xrightarrow{d}\chi_q^2$.

Theorem 6.15.2 is used to justify multivariate confidence regions and multivariate hypothesis tests.
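A minimal Python sketch of (6.44); the inputs are generic and the critical value mentioned in the comment is the standard chi-square quantile used in the next section:

import numpy as np

def wald_statistic(theta_hat, theta0, V_theta, n):
    """W_n = n (theta_hat - theta0)' V_theta^{-1} (theta_hat - theta0), compared with a chi^2_q quantile."""
    d = np.atleast_1d(theta_hat - theta0)
    return float(n * d @ np.linalg.solve(np.atleast_2d(V_theta), d))
    # e.g. compare with 4.605, the 90% quantile of chi^2 with q = 2 degrees of freedom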
6.16 Confidence Regions

A confidence region $C_n$ is a set estimator for $\theta\in\mathbb{R}^q$ when $q>1$. A confidence region $C_n$ is a set in $\mathbb{R}^q$ intended to cover the true parameter value with a pre-selected probability $1-\alpha$. Thus an ideal confidence region has the coverage probability $\Pr(\theta\in C_n)=1-\alpha$. In practice it is typically not possible to construct a region with exact coverage, but we can calculate its asymptotic coverage.

When the parameter estimate satisfies the conditions of Theorem 6.15.2, a good choice for a confidence region is the ellipse
$$C_n=\left\{\theta:W_n(\theta)\le c_{1-\alpha}\right\}$$
with $c_{1-\alpha}$ the $1-\alpha$'th quantile of the $\chi_q^2$ distribution. (Thus $F_q(c_{1-\alpha})=1-\alpha$.) These quantiles can be found from the $\chi_q^2$ critical value table.

Theorem 6.15.2 implies
$$\Pr\left(\theta\in C_n\right)\to\Pr\left(\chi_q^2\le c_{1-\alpha}\right)=1-\alpha$$
which shows that $C_n$ has asymptotic coverage $(1-\alpha)\%$.

To illustrate the construction of a confidence region, consider the estimated regression (6.43) of the model
$$\widehat{\log(Wage)}=\alpha+\beta_1\,education+\beta_2\,experience+\beta_3\,experience^2/100.$$
Suppose that the two parameters of interest are the percentage return to education $\theta_1=100\beta_1$ and the percentage return to experience for individuals with 10 years experience $\theta_2=100\beta_2+20\beta_3$. (We need to condition on the level of experience since the regression is quadratic in experience.) These two parameters are a linear transformation of the regression parameters with point estimates
$$\widehat\theta=\begin{pmatrix}0 & 100 & 0 & 0\\ 0 & 0 & 100 & 20\end{pmatrix}\widehat\beta=\begin{pmatrix}11.6\\ 0.72\end{pmatrix},$$
and have the covariance matrix estimate
$$\widehat V_\theta=\begin{pmatrix}0 & 100 & 0 & 0\\ 0 & 0 & 100 & 20\end{pmatrix}\widehat V_{\widehat\beta}\begin{pmatrix}0 & 0\\ 100 & 0\\ 0 & 100\\ 0 & 20\end{pmatrix}=\begin{pmatrix}645.4 & 67.387\\ 67.387 & 165\end{pmatrix}$$
with inverse
$$\widehat V_\theta^{-1}=\begin{pmatrix}0.0016184 & -0.00066098\\ -0.00066098 & 0.0063306\end{pmatrix}.$$
Thus the Wald statistic is
$$W_n(\theta)=n\left(\widehat\theta-\theta\right)'\widehat V_\theta^{-1}\left(\widehat\theta-\theta\right)
=2454\begin{pmatrix}11.6-\theta_1\\ 0.72-\theta_2\end{pmatrix}'\begin{pmatrix}0.0016184 & -0.00066098\\ -0.00066098 & 0.0063306\end{pmatrix}\begin{pmatrix}11.6-\theta_1\\ 0.72-\theta_2\end{pmatrix}$$
$$=3.97\left(11.6-\theta_1\right)^2-3.2441\left(11.6-\theta_1\right)\left(0.72-\theta_2\right)+15.535\left(0.72-\theta_2\right)^2.$$
The 90% quantile of the $\chi_2^2$ distribution is 4.605 (we use the $\chi_2^2$ distribution as the dimension of $\theta$ is two), so an asymptotic 90% confidence region for the two parameters is the interior of the ellipse
$$3.97\left(11.6-\theta_1\right)^2-3.2441\left(11.6-\theta_1\right)\left(0.72-\theta_2\right)+15.535\left(0.72-\theta_2\right)^2=4.605$$
which is displayed in Figure 6.8. Since the estimated correlation of the two coefficient estimates is small (about 0.2) the ellipse is close to circular.

Figure 6.8: Confidence Region for Return to Experience and Return to Education
6.17 Semiparametric Efficiency in the Projection Model

In Section 4.6 we presented the Gauss-Markov theorem, which stated that in the homoskedastic CEF model, in the class of linear unbiased estimators the one with the smallest variance is least-squares. As we noted in that section, the restriction to linear unbiased estimators is unsatisfactory as it leaves open the possibility that an alternative (non-linear) estimator could have a smaller asymptotic variance. In addition, the restriction to the homoskedastic CEF model is also unsatisfactory as the projection model is more relevant for empirical application. The question remains: what is the most efficient estimator of the projection coefficient $\beta$ (or functions $\theta=h(\beta)$) in the projection model?

It turns out that it is straightforward to show that the projection model falls in the estimator class considered in Proposition 5.13.2. It follows that the least-squares estimator is semiparametrically efficient in the sense that it has the smallest asymptotic variance in the class of semiparametric estimators of $\beta$. This is a more powerful and interesting result than the Gauss-Markov theorem.

To see this, it is worth rephrasing Proposition 5.13.2 with amended notation. Suppose that a parameter of interest is $\theta=g(\mu)$ where $\mu=\mathbb{E}z_i$, for which the moment estimators are $\widehat\mu=\frac{1}{n}\sum_{i=1}^n z_i$ and $\widehat\theta=g(\widehat\mu)$. Let
$$\mathcal{L}_2(g)=\left\{F:\mathbb{E}\|z\|^2<\infty,\ g(u)\text{ is continuously differentiable at }u=\mathbb{E}z\right\}$$
be the set of distributions for which $\widehat\theta$ satisfies the central limit theorem.

Proposition 6.17.1 In the class of distributions $F\in\mathcal{L}_2(g)$, $\widehat\theta$ is semiparametrically efficient for $\theta$ in the sense that its asymptotic variance equals the semiparametric efficiency bound.

Proposition 6.17.1 says that under the minimal conditions in which $\widehat\theta$ is asymptotically normal, no semiparametric estimator can have a smaller asymptotic variance than $\widehat\theta$.

To show that an estimator is semiparametrically efficient it is sufficient to show that it falls in the class covered by this Proposition. To show that the projection model falls in this class, we write $\beta=Q_{xx}^{-1}Q_{xy}=g(\mu)$ where $\mu=\mathbb{E}z_i$ and $z_i=(x_i x_i',x_i y_i)$. The class $\mathcal{L}_2(g)$ equals the class of distributions
$$\mathcal{L}_4(\beta)=\left\{F:\mathbb{E}y^4<\infty,\ \mathbb{E}\|x\|^4<\infty,\ \mathbb{E}x_i x_i'>0\right\}.$$

Proposition 6.17.2 In the class of distributions $F\in\mathcal{L}_4(\beta)$, the least-squares estimator $\widehat\beta$ is semiparametrically efficient for $\beta$.

The least-squares estimator is an asymptotically efficient estimator of the projection coefficient because the latter is a smooth function of sample moments and the model implies no further restrictions. However, if the class of permissible distributions is restricted to a strict subset of $\mathcal{L}_4(\beta)$ then least-squares can be inefficient. For example, the linear CEF model with heteroskedastic errors is a strict subset of $\mathcal{L}_4(\beta)$, and the GLS estimator has a smaller asymptotic variance than OLS. In this case, the knowledge that the true conditional mean is linear allows for more efficient estimation of the unknown parameter.

From Proposition 6.17.1 we can also deduce that plug-in estimators $\widehat\theta=h(\widehat\beta)$ are semiparametrically efficient estimators of $\theta=h(\beta)$ when $h$ is continuously differentiable. We can also deduce that other parameter estimators are semiparametrically efficient, such as $\widehat\sigma^2$ for $\sigma^2$. To see this, note that we can write
$$\sigma^2=\mathbb{E}\left(y_i-x_i'\beta\right)^2=\mathbb{E}y_i^2-2\mathbb{E}\left(y_i x_i'\right)\beta+\beta'\mathbb{E}\left(x_i x_i'\right)\beta=Q_{yy}-Q_{yx}Q_{xx}^{-1}Q_{xy}$$
which is a smooth function of the moments $Q_{yy}$, $Q_{yx}$ and $Q_{xx}$. Similarly the estimator $\widehat\sigma^2$ equals
$$\widehat\sigma^2=\frac{1}{n}\sum_{i=1}^n\widehat e_i^2=\widehat Q_{yy}-\widehat Q_{yx}\widehat Q_{xx}^{-1}\widehat Q_{xy}.$$
Since the variables $y_i^2$, $y_i x_i'$ and $x_i x_i'$ all have finite variances when $F\in\mathcal{L}_4(\beta)$, the conditions of Proposition 6.17.1 are satisfied. We conclude:

Proposition 6.17.3 In the class of distributions $F\in\mathcal{L}_4(\beta)$, $\widehat\sigma^2$ is semiparametrically efficient for $\sigma^2$.
6.18 Semiparametric Efficiency in the Homoskedastic Regression Model*

In Section 6.17 we showed that the OLS estimator is semiparametrically efficient in the projection model. What if we restrict attention to the classical homoskedastic regression model? Is OLS still efficient in this class? In this section we derive the asymptotic semiparametric efficiency bound for this model, and show that it is the same as that obtained by the OLS estimator. Therefore it turns out that least-squares is efficient in this class as well.

Recall that in the homoskedastic regression model the asymptotic variance of the OLS estimator $\widehat\beta$ for $\beta$ is $V_\beta^0=Q_{xx}^{-1}\sigma^2$. Therefore, as described in Section 5.13, it is sufficient to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V_\beta^0$. This would establish that $V_\beta^0$ is the semiparametric variance bound and the OLS estimator $\widehat\beta$ is semiparametrically efficient for $\beta$.

Let the joint density of $y$ and $x$ be written as $f(y,x)=f_1(y\mid x)f_2(x)$, the product of the conditional density of $y$ given $x$ and the marginal density of $x$. Now consider the parametric submodel
$$f(y,x\mid\theta)=f_1(y\mid x)\left(1+\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2\right)f_2(x). \qquad (6.46)$$
You can check that in this submodel the marginal density of $x$ is $f_2(x)$ and the conditional density of $y$ given $x$ is $f_1(y\mid x)\left(1+\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2\right)$. To see that the latter is a valid conditional density, observe that the regression assumption implies that $\int y f_1(y\mid x)dy=x'\beta$ and therefore
$$\int f_1(y\mid x)\left(1+\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2\right)dy=\int f_1(y\mid x)dy+\int f_1(y\mid x)\left(y-x'\beta\right)dy\left(x'\theta\right)/\sigma^2=1.$$
In this parametric submodel the conditional mean of $y$ given $x$ is
$$\mathbb{E}_\theta\left(y\mid x\right)=\int y f_1(y\mid x)\left(1+\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2\right)dy$$
$$=\int y f_1(y\mid x)dy+\int y f_1(y\mid x)\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2 dy$$
$$=\int y f_1(y\mid x)dy+\int\left(y-x'\beta\right)^2 f_1(y\mid x)\left(x'\theta\right)/\sigma^2 dy+\int\left(y-x'\beta\right)f_1(y\mid x)dy\left(x'\beta\right)\left(x'\theta\right)/\sigma^2$$
$$=x'\left(\beta+\theta\right),$$
using the homoskedasticity assumption $\int\left(y-x'\beta\right)^2 f_1(y\mid x)dy=\sigma^2$. This means that in this parametric submodel, the conditional mean is linear in $x$ and the regression coefficient is $\beta(\theta)=\beta+\theta$.

We now calculate the score for estimation of $\theta$. Since
$$\frac{\partial}{\partial\theta}\log f(y,x\mid\theta)=\frac{\partial}{\partial\theta}\log\left(1+\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2\right)=\frac{x\left(y-x'\beta\right)/\sigma^2}{1+\left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2}$$
the score is
$$s=\frac{\partial}{\partial\theta}\log f(y,x\mid\theta_0)=xe/\sigma^2.$$
The Cramer-Rao bound for estimation of $\theta$ (and therefore $\beta(\theta)$ as well) is
$$\left(\mathbb{E}\left(ss'\right)\right)^{-1}=\left(\sigma^{-4}\mathbb{E}\left((xe)(xe)'\right)\right)^{-1}=\sigma^2 Q_{xx}^{-1}=V_\beta^0.$$
We have shown that there is a parametric submodel (6.46) whose Cramer-Rao bound for estimation of $\beta$ is identical to the asymptotic variance of the least-squares estimator, which therefore is the semiparametric variance bound.

Theorem 6.18.1 In the homoskedastic regression model, the semiparametric variance bound for estimation of $\beta$ is $V_\beta^0=\sigma^2 Q_{xx}^{-1}$ and the OLS estimator is semiparametrically efficient.

This result is similar to the Gauss-Markov theorem, in that it asserts the efficiency of the least-squares estimator in the context of the homoskedastic regression model. The difference is that the Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear estimators, while Theorem 6.18.1 states that OLS has the smallest asymptotic variance among all regular estimators. This is a much more powerful statement.
6.19 Uniformly Consistent Residuals*

It seems natural to view the residuals $\widehat e_i$ as estimates of the unknown errors $e_i$. Are they consistent estimates? In this section we develop an appropriate convergence result. This is not a widely-used technique, and can safely be skipped by most readers.

Notice that we can write the residual as
$$\widehat e_i=y_i-x_i'\widehat\beta=e_i+x_i'\beta-x_i'\widehat\beta=e_i-x_i'\left(\widehat\beta-\beta\right). \qquad (6.47)$$
Since $\widehat\beta-\beta\xrightarrow{p}0$ it seems reasonable to guess that $\widehat e_i$ will be close to $e_i$ if $n$ is large.

We can bound the difference in (6.47) using the Schwarz inequality (A.10) to find
$$\left|\widehat e_i-e_i\right|=\left|x_i'\left(\widehat\beta-\beta\right)\right|\le\|x_i\|\left\|\widehat\beta-\beta\right\|. \qquad (6.48)$$
To bound (6.48) we can use $\left\|\widehat\beta-\beta\right\|=O_p(n^{-1/2})$ from Theorem 6.3.2, but we also need to bound the random variable $\|x_i\|$.

The key is Theorem 5.12.1 which shows that $\mathbb{E}\|x_i\|^r<\infty$ implies $x_i=o_p\left(n^{1/r}\right)$ uniformly in $i$, or
$$n^{-1/r}\max_{1\le i\le n}\|x_i\|\xrightarrow{p}0.$$
Applied to (6.48) we obtain
$$\max_{1\le i\le n}\left|\widehat e_i-e_i\right|\le\max_{1\le i\le n}\|x_i\|\left\|\widehat\beta-\beta\right\|=o_p(n^{-1/2+1/r}).$$
We have shown the following.

Theorem 6.19.1 Under Assumption 6.1.2 and $\mathbb{E}\|x_i\|^r<\infty$, then uniformly in $1\le i\le n$,
$$\widehat e_i=e_i+o_p(n^{-1/2+1/r}). \qquad (6.49)$$

The rate of convergence in (6.49) depends on $r$. Assumption 6.1.2 requires $r\ge 4$, so the rate of convergence is at least $o_p(n^{-1/4})$. As $r$ increases, the rate becomes close to $O_p(n^{-1/2})$. If the regressor is bounded, $\|x_i\|\le B<\infty$, then $\widehat e_i=e_i+o_p(n^{-1/2})$.
6.20 Asymptotic Leverage*

Recall the definition of leverage from (3.21),
$$h_{ii}=x_i'\left(X'X\right)^{-1}x_i.$$
These are the diagonal elements of the projection matrix $P$ and appear in the formula for leave-one-out prediction errors and several covariance matrix estimators. We can show that under iid sampling the leverage values are uniformly asymptotically small.

Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of a symmetric square matrix $A$, and note that $\lambda_{\max}(A^{-1})=\left(\lambda_{\min}(A)\right)^{-1}$.

Since $\frac{1}{n}X'X\xrightarrow{p}Q_{xx}>0$ then by the CMT, $\lambda_{\min}\left(\frac{1}{n}X'X\right)\xrightarrow{p}\lambda_{\min}\left(Q_{xx}\right)>0$. (The latter is positive since $Q_{xx}$ is positive definite and thus all its eigenvalues are positive.) Then by the Trace Inequality (A.13)
$$h_{ii}=x_i'\left(X'X\right)^{-1}x_i=\mathrm{tr}\left(\Big(\frac{1}{n}X'X\Big)^{-1}\frac{1}{n}x_i x_i'\right)\le\lambda_{\max}\left(\Big(\frac{1}{n}X'X\Big)^{-1}\right)\mathrm{tr}\left(\frac{1}{n}x_i x_i'\right)$$
$$=\left(\lambda_{\min}\Big(\frac{1}{n}X'X\Big)\right)^{-1}\frac{1}{n}\|x_i\|^2\le\left(\lambda_{\min}\left(Q_{xx}\right)+o_p(1)\right)^{-1}\frac{1}{n}\max_{1\le i\le n}\|x_i\|^2. \qquad (6.50)$$
Theorem 5.12.1 shows that $\mathbb{E}\|x_i\|^r<\infty$ implies $\max_{1\le i\le n}\|x_i\|=o_p\left(n^{1/r}\right)$ and thus (6.50) is $o_p\left(n^{2/r-1}\right)$.

Theorem 6.20.1 If $x_i$ is independent and identically distributed and $\mathbb{E}\|x_i\|^r<\infty$ for some $r\ge 2$, then uniformly in $1\le i\le n$, $h_{ii}=o_p\left(n^{2/r-1}\right)$.

For any $r\ge 2$ then $h_{ii}=o_p(1)$ (uniformly in $i\le n$). Larger $r$ implies a stronger rate of convergence; for example $r=4$ implies $h_{ii}=o_p\left(n^{-1/2}\right)$.

Theorem 6.20.1 implies that under random sampling with finite variances and large samples, no individual observation should have a large leverage value. Consequently individual observations should not be influential, unless one of these conditions is violated.
Exercises

Exercise 6.1 Take the model $y_i=x_{1i}'\beta_1+x_{2i}'\beta_2+e_i$ with $\mathbb{E}x_i e_i=0$. Suppose that $\beta_1$ is estimated by regressing $y_i$ on $x_{1i}$ only. Find the probability limit of this estimator. In general, is it consistent for $\beta_1$? If not, under what conditions is this estimator consistent for $\beta_1$?

Exercise 6.2 Let $y$ be $n\times 1$, $X$ be $n\times k$ (rank $k$), $y=X\beta+e$ with $\mathbb{E}\left(x_i e_i\right)=0$. Define the ridge regression estimator
$$\widehat\beta=\Big(\sum_{i=1}^n x_i x_i'+\lambda I_k\Big)^{-1}\Big(\sum_{i=1}^n x_i y_i\Big) \qquad (6.51)$$
where $\lambda>0$ is a fixed constant. Find the probability limit of $\widehat\beta$ as $n\to\infty$. Is $\widehat\beta$ consistent for $\beta$?

Exercise 6.3 For the ridge regression estimator (6.51), set $\lambda=cn$ where $c>0$ is fixed as $n\to\infty$. Find the probability limit of $\widehat\beta$ as $n\to\infty$.

Exercise 6.4 Verify some of the calculations reported in Section 6.4. Specifically, suppose that $x_{1i}$ and $x_{2i}$ only take the values $\{-1,+1\}$, symmetrically, with
$$\Pr\left(x_{1i}=x_{2i}=1\right)=\Pr\left(x_{1i}=x_{2i}=-1\right)=3/8$$
$$\Pr\left(x_{1i}=1,x_{2i}=-1\right)=\Pr\left(x_{1i}=-1,x_{2i}=1\right)=1/8$$
$$\mathbb{E}\left(e_i^2\mid x_{1i}=x_{2i}\right)=\frac{5}{4},\qquad\mathbb{E}\left(e_i^2\mid x_{1i}\ne x_{2i}\right)=\frac{1}{4}.$$
Verify the following:
1. $\mathbb{E}x_{1i}=0$
2. $\mathbb{E}x_{1i}^2=1$
3. $\mathbb{E}x_{1i}x_{2i}=\frac{1}{2}$
4. $\mathbb{E}\left(e_i^2\right)=1$
5. $\mathbb{E}\left(x_{1i}^2 e_i^2\right)=1$
6. $\mathbb{E}\left(x_{1i}x_{2i}e_i^2\right)=\frac{7}{8}$.

Exercise 6.5 Show (6.19)-(6.22).

Exercise 6.6 The model is
$$y_i=x_i'\beta+e_i,\qquad\mathbb{E}\left(x_i e_i\right)=0,\qquad\Omega=\mathbb{E}\left(x_i x_i'e_i^2\right).$$
Find the method of moments estimators $(\widehat\beta,\widehat\Omega)$ for $(\beta,\Omega)$.
(a) In this model, are $(\widehat\beta,\widehat\Omega)$ efficient estimators of $(\beta,\Omega)$?
(b) If so, in what sense are they efficient?
Exercise 6.7 Of the variables $(y_i^*,y_i,x_i)$ only the pair $(y_i,x_i)$ are observed. In this case, we say that $y_i^*$ is a latent variable. Suppose
$$y_i^*=x_i'\beta+e_i,\qquad\mathbb{E}\left(x_i e_i\right)=0,\qquad y_i=y_i^*+u_i$$
where $u_i$ is a measurement error satisfying
$$\mathbb{E}\left(x_i u_i\right)=0,\qquad\mathbb{E}\left(y_i^* u_i\right)=0.$$
Let $\widehat\beta$ denote the OLS coefficient from the regression of $y_i$ on $x_i$.
(a) Is $\beta$ the coefficient from the linear projection of $y_i$ on $x_i$?
(b) Is $\widehat\beta$ consistent for $\beta$ as $n\to\infty$?
(c) Find the asymptotic distribution of $\sqrt{n}\left(\widehat\beta-\beta\right)$ as $n\to\infty$.

Exercise 6.8 Find the asymptotic distribution of $\sqrt{n}\left(\widehat\sigma^2-\sigma^2\right)$ as $n\to\infty$.

Exercise 6.9 The model is
$$y_i=x_i\beta+e_i,\qquad\mathbb{E}\left(e_i\mid x_i\right)=0$$
where $x_i\in\mathbb{R}$. Consider the two estimators
$$\widehat\beta=\frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2},\qquad\widetilde\beta=\frac{1}{n}\sum_{i=1}^n\frac{y_i}{x_i}.$$
(a) Under the stated assumptions, are both estimators consistent for $\beta$?
(b) Are there conditions under which either estimator is efficient?

Exercise 6.10 In the homoskedastic regression model $y=X\beta+e$ with $\mathbb{E}\left(e_i\mid x_i\right)=0$ and $\mathbb{E}\left(e_i^2\mid x_i\right)=\sigma^2$, suppose $\widehat\beta$ is the OLS estimate of $\beta$ with covariance matrix $\widehat V$, based on a sample of size $n$. Let $\widehat\sigma^2$ be the estimate of $\sigma^2$. You wish to forecast an out-of-sample value of $y_{n+1}$ given that $x_{n+1}=x$. Thus the available information is the sample $(y,X)$, the estimates $(\widehat\beta,\widehat V,\widehat\sigma^2)$, the residuals $\widehat e$, and the out-of-sample value of the regressors, $x_{n+1}$.
(a) Find a point forecast of $y_{n+1}$.
(b) Find an estimate of the variance of this forecast.
Chapter 7
Restricted Estimation
7.1 Introduction
In the linear projection model
$$y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0,$$
a common task is to impose a constraint on the coefficient vector $\beta$. For example, partitioning $x_i' = (x_{1i}', x_{2i}')$ and $\beta' = (\beta_1', \beta_2')$, a typical constraint is an exclusion restriction of the form $\beta_2 = 0$. In this case the constrained model is
$$y_i = x_{1i}'\beta_1 + e_i, \qquad E(x_i e_i) = 0.$$
At first glance this appears the same as the linear projection model, but there is one important difference: the error $e_i$ is uncorrelated with the entire regressor vector $x_i' = (x_{1i}', x_{2i}')$, not just the included regressor $x_{1i}$.
In general, a set of $q$ linear constraints on $\beta$ takes the form
$$R'\beta = c \tag{7.1}$$
where $R$ is $k \times q$, $\mathrm{rank}(R) = q < k$, and $c$ is $q \times 1$. The assumption that $R$ is full rank means that the constraints are linearly independent (there are no redundant or contradictory constraints).
The constraint $\beta_2 = 0$ discussed above is a special case of the constraint (7.1) with
$$R = \begin{pmatrix} 0 \\ I \end{pmatrix}, \tag{7.2}$$
a selector matrix, and $c = 0$.
Another common restriction is that a set of coefficients sum to a known constant, e.g. $\beta_1 + \beta_2 = 1$. This constraint arises in a constant-return-to-scale production function. Other common restrictions include the equality of coefficients, $\beta_1 = \beta_2$, and equal and offsetting coefficients, $\beta_1 = -\beta_2$.
A typical reason to impose a constraint is that we believe (or have information) that the constraint is true. By imposing the constraint we hope to improve estimation efficiency. The goal is to obtain consistent estimates with reduced variance relative to the unconstrained estimator.
The questions then arise: How should we estimate the coefficient vector $\beta$ imposing the linear restriction (7.1)? If we impose such constraints, what is the sampling distribution of the resulting estimator? How should we calculate standard errors? These are the questions explored in this chapter.
7.2 Constrained Least Squares
An intuitively appealing method to estimate a constrained linear projection is to minimize the least-squares criterion subject to the constraint $R'\beta = c$. This estimator is
$$\tilde\beta = \mathop{\mathrm{argmin}}_{R'\beta = c} SSE_n(\beta) \tag{7.3}$$
where
$$SSE_n(\beta) = \sum_{i=1}^n \left(y_i - x_i'\beta\right)^2 = y'y - 2y'X\beta + \beta'X'X\beta.$$
The estimator $\tilde\beta$ minimizes the sum of squared errors over all $\beta$ such that the restriction (7.1) holds. We call $\tilde\beta$ the constrained least-squares (CLS) estimator. We follow the convention of using a tilde "~" rather than a hat "^" to indicate that $\tilde\beta$ is a restricted estimator, in contrast to the unrestricted least-squares estimator $\hat\beta$, and write it as $\tilde\beta_{\mathrm{cls}}$ when we want to be clear that the estimation method is CLS.
One method to find the solution to (7.3) uses the technique of Lagrange multipliers. The problem (7.3) is equivalent to the minimization of the Lagrangian
$$\mathcal{L}(\beta, \lambda) = \frac{1}{2}SSE_n(\beta) + \lambda'\left(R'\beta - c\right) \tag{7.4}$$
over $(\beta, \lambda)$, where $\lambda$ is a $q \times 1$ vector of Lagrange multipliers. The first-order conditions for minimization of (7.4) are
$$\frac{\partial}{\partial \beta}\mathcal{L}(\tilde\beta, \tilde\lambda) = -X'y + X'X\tilde\beta + R\tilde\lambda = 0 \tag{7.5}$$
and
$$\frac{\partial}{\partial \lambda}\mathcal{L}(\tilde\beta, \tilde\lambda) = R'\tilde\beta - c = 0. \tag{7.6}$$
Premultiplying (7.5) by $R'(X'X)^{-1}$ we obtain
$$-R'\hat\beta + R'\tilde\beta + R'\left(X'X\right)^{-1}R\tilde\lambda = 0 \tag{7.7}$$
where $\hat\beta = (X'X)^{-1}X'y$ is the unrestricted least-squares estimator. Imposing $R'\tilde\beta - c = 0$ from (7.6) and solving for $\tilde\lambda$ we find
$$\tilde\lambda = \left(R'\left(X'X\right)^{-1}R\right)^{-1}\left(R'\hat\beta - c\right).$$
Substituting this expression into (7.5) and solving for $\tilde\beta$ we find the solution to the constrained minimization problem (7.3):
$$\tilde\beta_{\mathrm{cls}} = \hat\beta - \left(X'X\right)^{-1}R\left(R'\left(X'X\right)^{-1}R\right)^{-1}\left(R'\hat\beta - c\right). \tag{7.8}$$
This is a general formula for the CLS estimator. It also can be written as
$$\tilde\beta_{\mathrm{cls}} = \hat\beta - \hat Q_{xx}^{-1}R\left(R'\hat Q_{xx}^{-1}R\right)^{-1}\left(R'\hat\beta - c\right).$$
Given $\tilde\beta_{\mathrm{cls}}$ the residuals are
$$\tilde e_i = y_i - x_i'\tilde\beta_{\mathrm{cls}}.$$
The moment estimator of $\sigma^2$ is
$$\tilde\sigma^2_{\mathrm{cls}} = \frac{1}{n}\sum_{i=1}^n \tilde e_i^2.$$
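The CLS formula (7.8) is simple to compute from standard OLS quantities. The following minimal sketch (my own illustration, not code from the text; the function and variable names are assumptions) implements (7.8) for a generic linear restriction $R'\beta = c$:

```python
import numpy as np

def cls_estimator(y, X, R, c):
    """Constrained least squares: minimize SSE subject to R'beta = c,
    computed via the closed-form expression (7.8)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_ols = XtX_inv @ (X.T @ y)                 # unrestricted OLS estimate
    A = XtX_inv @ R                                # (X'X)^{-1} R
    middle = np.linalg.inv(R.T @ XtX_inv @ R)      # (R'(X'X)^{-1}R)^{-1}
    beta_cls = beta_ols - A @ middle @ (R.T @ beta_ols - c)
    resid = y - X @ beta_cls
    sigma2_cls = resid @ resid / len(y)            # moment estimator of sigma^2
    return beta_cls, sigma2_cls
```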
7.3 Exclusion Restriction
While (7.8) is a general formula for the CLS estimator, in most cases the estimator can be found by applying least-squares to a reparameterized equation. To illustrate, let us return to the first example presented at the beginning of the chapter – a simple exclusion restriction. Recall that the unconstrained model is
$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \tag{7.9}$$
the exclusion restriction is $\beta_2 = 0$, and the constrained equation is
$$y_i = x_{1i}'\beta_1 + e_i. \tag{7.10}$$
In this setting the CLS estimator is OLS of $y_i$ on $x_{1i}$. (See Exercise 7.1.) We can write this as
$$\tilde\beta_1 = \left(\sum_{i=1}^n x_{1i}x_{1i}'\right)^{-1}\left(\sum_{i=1}^n x_{1i}y_i\right). \tag{7.11}$$
The CLS estimator of the entire vector $\beta' = \left(\beta_1', \beta_2'\right)$ is
$$\tilde\beta = \begin{pmatrix} \tilde\beta_1 \\ 0 \end{pmatrix}. \tag{7.12}$$
It is not immediately obvious, but (7.8) and (7.12) are algebraically (and numerically) equivalent. To see this, the first component of (7.8) with (7.2) is
$$\tilde\beta_1 = \begin{pmatrix} I & 0 \end{pmatrix}\left(\hat\beta - \hat Q_{xx}^{-1}\begin{pmatrix} 0 \\ I \end{pmatrix}\left(\begin{pmatrix} 0 & I \end{pmatrix}\hat Q_{xx}^{-1}\begin{pmatrix} 0 \\ I \end{pmatrix}\right)^{-1}\begin{pmatrix} 0 & I \end{pmatrix}\hat\beta\right).$$
Using (3.33) this equals
$$\begin{aligned}
\tilde\beta_1 &= \hat\beta_1 - \hat Q^{12}\left(\hat Q^{22}\right)^{-1}\hat\beta_2 \\
&= \hat\beta_1 + \hat Q_{11\cdot 2}^{-1}\hat Q_{12}\hat Q_{22}^{-1}\hat Q_{22\cdot 1}\hat\beta_2 \\
&= \hat Q_{11\cdot 2}^{-1}\left(\hat Q_{1y} - \hat Q_{12}\hat Q_{22}^{-1}\hat Q_{2y}\right) + \hat Q_{11\cdot 2}^{-1}\hat Q_{12}\hat Q_{22}^{-1}\hat Q_{22\cdot 1}\hat Q_{22\cdot 1}^{-1}\left(\hat Q_{2y} - \hat Q_{21}\hat Q_{11}^{-1}\hat Q_{1y}\right) \\
&= \hat Q_{11\cdot 2}^{-1}\left(\hat Q_{1y} - \hat Q_{12}\hat Q_{22}^{-1}\hat Q_{21}\hat Q_{11}^{-1}\hat Q_{1y}\right) \\
&= \hat Q_{11\cdot 2}^{-1}\left(\hat Q_{11} - \hat Q_{12}\hat Q_{22}^{-1}\hat Q_{21}\right)\hat Q_{11}^{-1}\hat Q_{1y} \\
&= \hat Q_{11}^{-1}\hat Q_{1y},
\end{aligned}$$
where $\hat Q^{12}$ and $\hat Q^{22}$ denote blocks of $\hat Q_{xx}^{-1}$. This is (7.12) as originally claimed.
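The equivalence of (7.8) and (7.12) is also easy to confirm numerically. The short check below uses simulated data; the data-generating process and all names are illustrative assumptions of the sketch, not part of the text.

```python
import numpy as np
rng = np.random.default_rng(0)

n, k1, k2 = 200, 3, 2
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
X = np.hstack([X1, X2])
y = X1 @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)   # beta2 = 0 holds

# Exclusion restriction beta2 = 0: R is the selector matrix (7.2), c = 0
R = np.vstack([np.zeros((k1, k2)), np.eye(k2)])
c = np.zeros(k2)

# General CLS formula (7.8)
XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y
b_cls = b_ols - XtX_inv @ R @ np.linalg.inv(R.T @ XtX_inv @ R) @ (R.T @ b_ols - c)

# Reparameterized estimator (7.11)-(7.12): OLS of y on X1, with zeros appended
b_short = np.concatenate([np.linalg.lstsq(X1, y, rcond=None)[0], np.zeros(k2)])

print(np.allclose(b_cls, b_short))   # True: (7.8) and (7.12) coincide
```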
7.4 Minimum Distance
A minimum distance estimator tries to find a parameter value which satisfies the constraint and which is as close as possible to the unconstrained estimate. Let $\hat\beta$ be the unconstrained least-squares estimator, and for some $k \times k$ positive definite weight matrix $W_n > 0$ define
$$J_n(\beta) = n\left(\hat\beta - \beta\right)'W_n\left(\hat\beta - \beta\right). \tag{7.13}$$
This is a (squared) weighted Euclidean distance between $\hat\beta$ and $\beta$. $J_n(\beta)$ is small if $\beta$ is close to $\hat\beta$, and is minimized at zero only if $\beta = \hat\beta$. A minimum distance estimator $\tilde\beta$ for $\beta$ minimizes $J_n(\beta)$ subject to the constraint (7.1), that is,
$$\tilde\beta_{\mathrm{md}} = \mathop{\mathrm{argmin}}_{R'\beta = c} J_n(\beta). \tag{7.14}$$
The CLS estimator is the special case when $W_n = \left(\hat V_\beta^0\right)^{-1}$. To see this, rewrite the least-squares criterion as follows. Write the unconstrained least-squares fitted equation as $y_i = x_i'\hat\beta + \hat e_i$ and substitute this equation into $SSE_n(\beta)$ to obtain
$$\begin{aligned}
SSE_n(\beta) &= \sum_{i=1}^n \left(y_i - x_i'\beta\right)^2 \\
&= \sum_{i=1}^n \left(x_i'\hat\beta + \hat e_i - x_i'\beta\right)^2 \\
&= \sum_{i=1}^n \hat e_i^2 + \left(\hat\beta - \beta\right)'\left(\sum_{i=1}^n x_i x_i'\right)\left(\hat\beta - \beta\right) \\
&= s^2\left(n - k + J_n(\beta)\right) \tag{7.15}
\end{aligned}$$
where the third equality uses the fact that $\sum_{i=1}^n x_i \hat e_i = 0$, and the last line holds when $W_n = \left(\hat V_\beta^0\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)s^{-2}$. The expression (7.15) only depends on $\beta$ through $J_n(\beta)$. Thus minimization of $SSE_n(\beta)$ and $J_n(\beta)$ are equivalent, and hence $\tilde\beta_{\mathrm{md}} = \tilde\beta_{\mathrm{cls}}$ when $W_n = \left(\hat V_\beta^0\right)^{-1}$.
We can solve for $\tilde\beta_{\mathrm{md}}$ explicitly by the method of Lagrange multipliers. The Lagrangian is
$$\mathcal{L}(\beta, \lambda) = \frac{1}{2}J_n(\beta, W_n) + \lambda'\left(R'\beta - c\right)$$
which is minimized over $(\beta, \lambda)$. The solution is
$$\tilde\beta_{\mathrm{md}} = \hat\beta - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}\left(R'\hat\beta - c\right). \tag{7.16}$$
(See Exercise 7.5.) Examining (7.16) we can see that $\tilde\beta_{\mathrm{md}}$ specializes to $\tilde\beta_{\mathrm{cls}}$ when we set $W_n = \left(\hat V_\beta^0\right)^{-1}$.
An obvious question is which weight matrix $W_n$ is best. We will address this question after we derive the asymptotic distribution for a general weight matrix.
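Because (7.16) is a closed-form expression, the minimum distance estimator for any weight matrix can be computed in one line. A minimal sketch (the function and argument names are my own assumptions, not from the text):

```python
import numpy as np

def md_estimator(beta_ols, W_n, R, c):
    """Minimum distance estimator (7.16): beta_ols is the unrestricted OLS
    estimate, W_n a k x k positive definite weight matrix, and R'beta = c
    the linear constraint."""
    W_inv = np.linalg.inv(W_n)
    adjust = W_inv @ R @ np.linalg.inv(R.T @ W_inv @ R) @ (R.T @ beta_ols - c)
    return beta_ols - adjust
```

Setting `W_n` to the inverse of the homoskedastic covariance matrix estimate reproduces CLS, while setting it to the inverse of a heteroskedasticity-robust estimate of $V_\beta$ gives the efficient minimum distance estimator discussed below.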
7.5 Asymptotic Distribution
We first show that the class of minimum distance estimators are consistent for the population parameters when the constraints are valid.

Assumption 7.5.1 $R'\beta = c$ where $R$ is $k \times q$ with $\mathrm{rank}(R) = q$.

Assumption 7.5.2 $W_n \xrightarrow{p} W > 0$.

Theorem 7.5.1 Consistency
Under Assumptions 6.1.1, 7.5.1, and 7.5.2, $\tilde\beta_{\mathrm{md}} \xrightarrow{p} \beta$ as $n \to \infty$.

Theorem 7.5.1 shows that consistency holds for any weight matrix with a positive definite limit, so the result includes the CLS estimator.
Similarly, the constrained estimators are asymptotically normally distributed.

Theorem 7.5.2 Asymptotic Normality
Under Assumptions 6.1.2, 7.5.1, and 7.5.2,
$$\sqrt{n}\left(\tilde\beta_{\mathrm{md}} - \beta\right) \xrightarrow{d} N\left(0, V_\beta(W)\right) \tag{7.17}$$
as $n \to \infty$, where
$$V_\beta(W) = V_\beta - W^{-1}R\left(R'W^{-1}R\right)^{-1}R'V_\beta - V_\beta R\left(R'W^{-1}R\right)^{-1}R'W^{-1} + W^{-1}R\left(R'W^{-1}R\right)^{-1}R'V_\beta R\left(R'W^{-1}R\right)^{-1}R'W^{-1} \tag{7.18}$$
and $V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1}$.

Theorem 7.5.2 shows that the minimum distance estimator is asymptotically normal for all positive definite weight matrices. The asymptotic variance depends on $W$. The theorem includes the CLS estimator as a special case by setting $W = Q_{xx}$.

Theorem 7.5.3 Asymptotic Distribution of CLS Estimator
Under Assumptions 6.1.2 and 7.5.1, as $n \to \infty$,
$$\sqrt{n}\left(\tilde\beta_{\mathrm{cls}} - \beta\right) \xrightarrow{d} N\left(0, V_{\mathrm{cls}}\right)$$
where
$$V_{\mathrm{cls}} = V_\beta - Q_{xx}^{-1}R\left(R'Q_{xx}^{-1}R\right)^{-1}R'V_\beta - V_\beta R\left(R'Q_{xx}^{-1}R\right)^{-1}R'Q_{xx}^{-1} + Q_{xx}^{-1}R\left(R'Q_{xx}^{-1}R\right)^{-1}R'V_\beta R\left(R'Q_{xx}^{-1}R\right)^{-1}R'Q_{xx}^{-1}.$$
7.6 Efficient Minimum Distance Estimator

Theorem 7.5.2 shows that the minimum distance estimators, which include CLS as a special case, are asymptotically normal with an asymptotic covariance matrix which depends on the weight matrix $W$. The asymptotically optimal weight matrix is the one which minimizes the asymptotic variance $V_\beta(W)$. This turns out to be $W = V_\beta^{-1}$, as is shown in Theorem 7.6.1 below. Since $V_\beta^{-1}$ is unknown this weight matrix cannot be used for a feasible estimator, but we can replace $V_\beta^{-1}$ with a consistent estimate $\hat V_\beta^{-1}$ and the asymptotic distribution (and efficiency) are unchanged. We call the minimum distance estimator setting $W_n = \hat V_\beta^{-1}$ the efficient minimum distance estimator. It takes the form
$$\tilde\beta_{\mathrm{emd}} = \hat\beta - \hat V_\beta R\left(R'\hat V_\beta R\right)^{-1}\left(R'\hat\beta - c\right). \tag{7.19}$$
The asymptotic distribution of (7.19) can be deduced from Theorem 7.5.2.

Theorem 7.6.1 Efficient Minimum Distance Estimator
Under Assumptions 6.1.2 and 7.5.1,
$$\sqrt{n}\left(\tilde\beta_{\mathrm{emd}} - \beta\right) \xrightarrow{d} N\left(0, V_\beta^*\right)$$
as $n \to \infty$, where
$$V_\beta^* = V_\beta - V_\beta R\left(R'V_\beta R\right)^{-1}R'V_\beta. \tag{7.20}$$
Since
$$V_\beta^* \leq V_\beta \tag{7.21}$$
the estimator (7.19) has lower asymptotic variance than the unrestricted estimator. Furthermore, for any $W$,
$$V_\beta^* \leq V_\beta(W) \tag{7.22}$$
so (7.19) is asymptotically efficient in the class of minimum distance estimators.

Theorem 7.6.1 shows that the minimum distance estimator with the smallest asymptotic variance is (7.19). One implication is that the constrained least squares estimator is generally inefficient. The interesting exception is the case of conditional homoskedasticity, in which case the optimal weight matrix is $W = \left(V_\beta^0\right)^{-1}$, so in this case CLS is an efficient minimum distance estimator. Otherwise, when the error is conditionally heteroskedastic, there are asymptotic efficiency gains from using minimum distance rather than least squares.
The fact that CLS is generally inefficient is counter-intuitive and requires some reflection to understand. Standard intuition suggests applying the same estimation method (least squares) to the unconstrained and constrained models, and this is the most common empirical practice. But Theorem 7.6.1 shows that this is not the efficient estimation method. Instead, the efficient minimum distance estimator has a smaller asymptotic variance. Why? The reason is that the least-squares estimator does not make use of the regressor $x_{2i}$. It ignores the information $E(x_{2i}e_i) = 0$. This information is relevant when the error is heteroskedastic and the excluded regressors are correlated with the included regressors.
Inequality (7.21) shows that the efficient minimum distance estimator $\tilde\beta$ has a smaller asymptotic variance than the unrestricted least squares estimator $\hat\beta$. This means that estimation is more efficient by imposing correct restrictions when we use the minimum distance method.
7.7 Exclusion Restriction Revisited
We return to the example of estimation with a simple exclusion restriction. The model is
$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$$
with the exclusion restriction $\beta_2 = 0$. We have introduced three estimators of $\beta_1$. The first is unconstrained least-squares applied to (7.9), which can be written as
$$\hat\beta_1 = \hat Q_{11\cdot 2}^{-1}\hat Q_{1y\cdot 2}.$$
From Theorem 6.33 and equation (6.20) its asymptotic variance is
$$\mathrm{avar}(\hat\beta_1) = Q_{11\cdot 2}^{-1}\left(\Omega_{11} - Q_{12}Q_{22}^{-1}\Omega_{21} - \Omega_{12}Q_{22}^{-1}Q_{21} + Q_{12}Q_{22}^{-1}\Omega_{22}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot 2}^{-1}.$$
The second estimator of $\beta_1$ is the CLS estimator, which can be written as
$$\tilde\beta_{1,\mathrm{cls}} = \hat Q_{11}^{-1}\hat Q_{1y}.$$
Its asymptotic variance can be deduced from Theorem 7.5.3, but it is simpler to apply the CLT directly to show that
$$\mathrm{avar}(\tilde\beta_{1,\mathrm{cls}}) = Q_{11}^{-1}\Omega_{11}Q_{11}^{-1}. \tag{7.23}$$
The third estimator of $\beta_1$ is the efficient minimum distance estimator. Applying (7.19), it equals
$$\tilde\beta_{1,\mathrm{md}} = \hat\beta_1 - \hat V_{12}\hat V_{22}^{-1}\hat\beta_2 \tag{7.24}$$
where we have partitioned
$$\hat V_\beta = \begin{pmatrix} \hat V_{11} & \hat V_{12} \\ \hat V_{21} & \hat V_{22} \end{pmatrix}.$$
From Theorem 7.6.1 its asymptotic variance is
$$\mathrm{avar}(\tilde\beta_{1,\mathrm{md}}) = V_{11} - V_{12}V_{22}^{-1}V_{21}. \tag{7.25}$$
In general, the three estimators are different, and they have different asymptotic variances.
It is quite instructive to compare the asymptotic variances of the CLS and unconstrained least-squares estimators to assess whether or not the constrained estimator is necessarily more efficient than the unconstrained estimator.
First, consider the case of conditional homoskedasticity. In this case the two covariance matrices simplify to
$$\mathrm{avar}(\hat\beta_1) = \sigma^2 Q_{11\cdot 2}^{-1} \qquad \text{and} \qquad \mathrm{avar}(\tilde\beta_{1,\mathrm{cls}}) = \sigma^2 Q_{11}^{-1}.$$
If $Q_{12} = 0$ (so $x_{1i}$ and $x_{2i}$ are orthogonal) then these two variance matrices are equal and the two estimators have equal asymptotic efficiency. Otherwise, since $Q_{12}Q_{22}^{-1}Q_{21} \geq 0$, then $Q_{11} \geq Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}$, and consequently
$$Q_{11}^{-1}\sigma^2 \leq \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1}\sigma^2.$$
This means that under conditional homoskedasticity, $\tilde\beta_{1,\mathrm{cls}}$ has a lower asymptotic variance matrix than $\hat\beta_1$. Therefore in this context, constrained least-squares is more efficient than unconstrained least-squares. This is consistent with our intuition that imposing a correct restriction (excluding an irrelevant regressor) improves estimation efficiency.
However, in the general case of conditional heteroskedasticity this ranking is not guaranteed. In fact what is really amazing is that the variance ranking can be reversed. The CLS estimator can have a larger asymptotic variance than the unconstrained least squares estimator.
To see this let's use the simple heteroskedastic example from Section 6.4. In that example, $Q_{11} = Q_{22} = 1$, $Q_{12} = \frac{1}{2}$, $\Omega_{11} = \Omega_{22} = 1$, and $\Omega_{12} = \frac{7}{8}$. We can calculate that $Q_{11\cdot 2} = \frac{3}{4}$ and
$$\mathrm{avar}(\hat\beta_1) = \frac{2}{3} \tag{7.26}$$
$$\mathrm{avar}(\tilde\beta_{1,\mathrm{cls}}) = 1 \tag{7.27}$$
$$\mathrm{avar}(\tilde\beta_{1,\mathrm{md}}) = \frac{5}{8}. \tag{7.28}$$
Thus the restricted least-squares estimator $\tilde\beta_{1,\mathrm{cls}}$ has a larger variance than the unrestricted least-squares estimator $\hat\beta_1$! The minimum distance estimator has the smallest variance of the three, as expected.
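The asymptotic variances (7.26)-(7.28) follow directly from the population matrices of the Section 6.4 example. The short calculation below is only a numerical check of that algebra; the matrix names are my own.

```python
import numpy as np

Q = np.array([[1.0, 0.5], [0.5, 1.0]])        # Q_xx
Omega = np.array([[1.0, 7/8], [7/8, 1.0]])    # Omega = E(x x' e^2)

Q_inv = np.linalg.inv(Q)
V = Q_inv @ Omega @ Q_inv                     # V_beta = Q^{-1} Omega Q^{-1}

avar_ols = V[0, 0]                            # 2/3, equation (7.26)
avar_cls = Omega[0, 0] / Q[0, 0]**2           # Q11^{-1} Omega11 Q11^{-1} = 1, (7.27)
avar_emd = V[0, 0] - V[0, 1]**2 / V[1, 1]     # 5/8, equation (7.28)

print(avar_ols, avar_cls, avar_emd)           # 0.666..., 1.0, 0.625
```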
What we have found is that when the estimation method is least-squares, deleting the irrelevant variable $x_{2i}$ can actually increase estimation variance, or equivalently, adding an irrelevant variable can actually decrease the estimation variance.
To repeat this unexpected finding, we have shown in a very simple example that it is possible for least-squares applied to the short regression (7.10) to be less efficient for estimation of $\beta_1$ than least-squares applied to the long regression (7.9), even though the constraint $\beta_2 = 0$ is valid! This result is strongly counter-intuitive. It seems to contradict our initial motivation for pursuing constrained estimation – to improve estimation efficiency.
It turns out that a more refined answer is appropriate. Constrained estimation is desirable, but not constrained least-squares estimation. While least-squares is asymptotically efficient for estimation of the unconstrained projection model, it is not an efficient estimator of the constrained projection model.
7.8 Variance and Standard Error Estimation
The asymptotic covariance matrix (7.20) may be estimated by replacing $V_\beta$ with a consistent estimate such as $\hat V_\beta$. This variance estimator is
$$\hat V_\beta^* = \hat V_\beta - \hat V_\beta R\left(R'\hat V_\beta R\right)^{-1}R'\hat V_\beta. \tag{7.29}$$
We can calculate standard errors for any linear combination $h'\tilde\beta$ so long as $h$ does not lie in the range space of $R$. A standard error for $h'\tilde\beta$ is
$$s(h'\tilde\beta) = \left(n^{-1}h'\hat V_\beta^* h\right)^{1/2}. \tag{7.30}$$
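Given an estimate $\hat V_\beta$ of the unrestricted asymptotic variance, (7.29) and (7.30) translate directly into code. A minimal sketch, assuming $\hat V_\beta$ has already been computed (for example by a heteroskedasticity-robust formula from Chapter 6); the function names are mine:

```python
import numpy as np

def emd_variance(V_hat, R):
    """Estimated asymptotic variance (7.29) of the efficient minimum
    distance estimator under the restriction R'beta = c."""
    A = V_hat @ R @ np.linalg.inv(R.T @ V_hat @ R) @ R.T @ V_hat
    return V_hat - A

def emd_standard_error(h, V_star, n):
    """Standard error (7.30) for the linear combination h' beta_tilde."""
    return np.sqrt(h @ V_star @ h / n)
```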
7.9 Misspecification

What are the consequences for the constrained estimator $\tilde\beta$ if the constraint (7.1) is incorrect? To be specific, suppose that
$$R'\beta = c^*$$
where $c^*$ is not necessarily equal to $c$.
This situation is a generalization of the analysis of "omitted variable bias" from Section 2.22, where we found that the short regression (e.g. (7.11)) is estimating a different projection coefficient than the long regression (e.g. (7.9)).
One mechanical answer is that we can use the formula (7.16) for the minimum distance estimator to find that
$$\tilde\beta_{\mathrm{md}} \xrightarrow{p} \beta_{\mathrm{md}}^* = \beta - W^{-1}R\left(R'W^{-1}R\right)^{-1}(c^* - c).$$
The second term, $W^{-1}R\left(R'W^{-1}R\right)^{-1}(c^* - c)$, shows that imposing an incorrect constraint leads to inconsistency – an asymptotic bias. We can call the limiting value $\beta_{\mathrm{md}}^*$ the minimum-distance projection coefficient or the pseudo-true value implied by the restriction.
However, we can say more.
For example, we can describe some characteristics of the approximating projections. The CLS estimator projection coefficient has the representation
$$\beta_{\mathrm{cls}}^* = \mathop{\mathrm{argmin}}_{R'\beta = c} E\left(y_i - x_i'\beta\right)^2,$$
the best linear predictor subject to the constraint (7.1). The minimum distance estimator converges to
$$\beta_{\mathrm{md}}^* = \mathop{\mathrm{argmin}}_{R'\beta = c} (\beta - \beta_0)'W(\beta - \beta_0)$$
where $\beta_0$ is the true coefficient. That is, $\beta_{\mathrm{md}}^*$ is the coefficient vector satisfying (7.1) closest to the true value in the weighted Euclidean norm. These calculations show that the constrained estimators are still reasonable in the sense that they produce good approximations to the true coefficient, conditional on being required to satisfy the constraint.
We can also show that $\tilde\beta_{\mathrm{md}}$ has an asymptotic normal distribution. The trick is to define the pseudo-true value
$$\beta_n^* = \beta - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}(c^* - c).$$
Then
$$\begin{aligned}
\sqrt{n}\left(\tilde\beta_{\mathrm{md}} - \beta_n^*\right) &= \sqrt{n}\left(\hat\beta - \beta\right) - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}\sqrt{n}\left(R'\hat\beta - c^*\right) \\
&= \left(I - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}R'\right)\sqrt{n}\left(\hat\beta - \beta\right) \\
&\xrightarrow{d} \left(I - W^{-1}R\left(R'W^{-1}R\right)^{-1}R'\right)N(0, V_\beta) \\
&= N\left(0, V_\beta(W)\right). \tag{7.31}
\end{aligned}$$
In particular
$$\sqrt{n}\left(\tilde\beta_{\mathrm{emd}} - \beta_n^*\right) \xrightarrow{d} N\left(0, V_\beta^*\right).$$
This means that even when the constraint (7.1) is misspecified, the conventional covariance matrix estimator (7.29) and standard errors (7.30) are appropriate measures of the sampling variance, though the distributions are centered at the pseudo-true values (or projections) $\beta_n^*$ rather than $\beta$. The fact that the estimators are biased is an unavoidable consequence of misspecification.
There is another way of representing an asymptotic distribution for the estimator under misspecification based on the concept of local alternatives. It is a technical device which might seem a bit artificial, but it is a powerful technique which yields useful distributional approximations in a wide variety of contexts. The idea is to index the true coefficient $\beta_n$ by $n$, and suppose that
$$R'\beta_n = c + \delta n^{-1/2}. \tag{7.32}$$
The asymptotic theory is then derived as $n \to \infty$ under the sequence of probability distributions with the coefficients $\beta_n$. The expression (7.32) specifies that $\beta_n$ does not satisfy (7.1), but the deviation from the constraint is $\delta n^{-1/2}$, which depends on $\delta$ and the sample size. The choice to make the deviation of this form is precisely so that the localizing parameter $\delta$ appears in the asymptotic distribution but does not dominate it.
To see this, first observe that since $\beta_n$ is the true coefficient value, then $y_i = x_i'\beta_n + e_i$ and
$$\sqrt{n}\left(\hat\beta - \beta_n\right) = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i\right) \xrightarrow{d} N(0, V_\beta)$$
as conventional. Then under (7.32),
$$\begin{aligned}
\tilde\beta_{\mathrm{md}} &= \hat\beta - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}\left(R'\hat\beta - c\right) \\
&= \hat\beta - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}R'\left(\hat\beta - \beta_n\right) - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}\delta n^{-1/2},
\end{aligned}$$
so
$$\begin{aligned}
\sqrt{n}\left(\tilde\beta_{\mathrm{md}} - \beta_n\right) &= \left(I - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}R'\right)\sqrt{n}\left(\hat\beta - \beta_n\right) - W_n^{-1}R\left(R'W_n^{-1}R\right)^{-1}\delta \\
&\xrightarrow{d} N(0, V_\beta) - W^{-1}R\left(R'W^{-1}R\right)^{-1}\delta \\
&= N\left(\delta^*, V_\beta(W)\right) \tag{7.33}
\end{aligned}$$
where
$$\delta^* = -W^{-1}R\left(R'W^{-1}R\right)^{-1}\delta.$$
The asymptotic distribution (7.33) is an alternative way of expressing the sampling distribution of the restricted estimator under misspecification. The distribution (7.33) contains an asymptotic bias component $\delta^*$. The approximation is not fundamentally different from (7.31) – they both have the same asymptotic variances, and both reflect the bias due to misspecification. The difference is that (7.31) puts the bias on the left-side of the convergence arrow, while (7.33) has the bias on the right-side. There is no substantive difference between the two, but (7.33) is more convenient for some purposes, such as the analysis of the power of tests, as we will explore in the next chapter.
7.10 Nonlinear Constraints
In some cases it is desirable to impose nonlinear constraints on the parameter vector $\beta$. They can be written as
$$r(\beta) = 0 \tag{7.34}$$
where $r : \mathbb{R}^k \to \mathbb{R}^q$. This includes the linear constraints (7.1) as a special case. An example of (7.34) which cannot be written as (7.1) is $\beta_1\beta_2 = 1$, which is (7.34) with $r(\beta) = \beta_1\beta_2 - 1$.
The minimum distance estimator of $\beta$ subject to (7.34) solves the minimization problem
$$\tilde\beta = \mathop{\mathrm{argmin}}_{r(\beta) = 0} J_n(\beta) \tag{7.35}$$
where
$$J_n(\beta) = n\left(\hat\beta - \beta\right)'W_n\left(\hat\beta - \beta\right).$$
The solution minimizes the Lagrangian
$$\mathcal{L}(\beta, \lambda) = \frac{1}{2}J_n(\beta) + \lambda'r(\beta) \tag{7.36}$$
over $(\beta, \lambda)$.
Computationally, there is no explicit expression for the solution $\tilde\beta$, so it must be found numerically. Algorithms to numerically solve (7.35) are known as constrained optimization methods, and are available in programming languages including Matlab, Gauss and R.
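As a concrete illustration of such a constrained optimization, the sketch below minimizes the criterion $J_n(\beta)$ subject to $r(\beta) = 0$ using SciPy's SLSQP routine. This is only one possible implementation choice, not the text's; the example constraint $\beta_1\beta_2 = 1$ and all names are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def md_nonlinear(beta_ols, W_n, n, r_func):
    """Minimum distance estimator (7.35) under the nonlinear constraint
    r(beta) = 0, found numerically by constrained optimization."""
    def J(beta):
        d = beta_ols - beta
        return n * d @ W_n @ d
    result = minimize(J, x0=beta_ols, method="SLSQP",
                      constraints=[{"type": "eq", "fun": r_func}])
    return result.x

# Example: impose beta1 * beta2 = 1, i.e. r(beta) = beta1*beta2 - 1
beta_ols = np.array([2.0, 0.4])
W_n = np.eye(2)
beta_tilde = md_nonlinear(beta_ols, W_n, n=100,
                          r_func=lambda b: np.array([b[0] * b[1] - 1.0]))
print(beta_tilde, beta_tilde[0] * beta_tilde[1])   # the product is (close to) 1
```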
Assumption 7.10.1 $r(\beta) = 0$ with $\mathrm{rank}(R) = q$, where $R = \frac{\partial}{\partial \beta}r(\beta)'$.

The asymptotic distribution is a simple generalization of the case of a linear constraint, but the proof is more delicate.

Theorem 7.10.1 Under Assumptions 6.1.2, 7.10.1, and 7.5.2, for $\tilde\beta$ defined in (7.35),
$$\sqrt{n}\left(\tilde\beta - \beta\right) \xrightarrow{d} N\left(0, V_\beta(W)\right)$$
as $n \to \infty$, where $V_\beta(W)$ is defined in (7.18). $V_\beta(W)$ is minimized with $W = V_\beta^{-1}$, in which case the asymptotic variance is
$$V_\beta^* = V_\beta - V_\beta R\left(R'V_\beta R\right)^{-1}R'V_\beta.$$

The asymptotic variance matrix for the efficient minimum distance estimator can be estimated by
$$\hat V_\beta^* = \hat V_\beta - \hat V_\beta\hat R\left(\hat R'\hat V_\beta\hat R\right)^{-1}\hat R'\hat V_\beta$$
where
$$\hat R = \frac{\partial}{\partial \beta}r(\tilde\beta)'.$$
Standard errors for the elements of $\tilde\beta$ are the square roots of the diagonal elements of $n^{-1}\hat V_\beta^*$.
7.11 Inequality Constraints
Inequality constraints on the parameter vector $\beta$ take the form
$$r(\beta) \geq 0$$
for some function $r : \mathbb{R}^k \to \mathbb{R}^q$. The most common example is a non-negativity constraint
$$\beta_1 \geq 0.$$
The constrained least-squares and minimum distance estimators can be written as
$$\tilde\beta_{\mathrm{cls}} = \mathop{\mathrm{argmin}}_{r(\beta) \geq 0} SSE_n(\beta) \tag{7.37}$$
and
$$\tilde\beta_{\mathrm{md}} = \mathop{\mathrm{argmin}}_{r(\beta) \geq 0} J_n(\beta). \tag{7.38}$$
Except in special cases the constrained estimators do not have simple algebraic solutions. An important exception is when there is a single non-negativity constraint, e.g. $\beta_1 \geq 0$ with $q = 1$. In this case the constrained estimator can be found by a two-step approach. First compute the unconstrained estimator $\hat\beta$. If $\hat\beta_1 \geq 0$ then $\tilde\beta = \hat\beta$. Second, if $\hat\beta_1 < 0$ then impose $\beta_1 = 0$ (eliminate the regressor $X_1$) and re-estimate. This yields the constrained least-squares estimator. While this method works when there is a single non-negativity constraint, it does not immediately generalize to other contexts.
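A minimal sketch of this two-step approach for a single non-negativity constraint on the first coefficient (the function name and the use of NumPy are assumptions of the sketch, not from the text):

```python
import numpy as np

def cls_nonnegative_first(y, X):
    """Two-step CLS under the constraint beta_1 >= 0 (first column of X).
    Step 1: unconstrained OLS. Step 2: if the estimate violates the
    constraint, impose beta_1 = 0 by dropping the regressor and re-estimating."""
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    if beta_hat[0] >= 0:
        return beta_hat
    beta_rest = np.linalg.lstsq(X[:, 1:], y, rcond=None)[0]
    return np.concatenate([[0.0], beta_rest])
```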
The computational problems (7.37) and (7.38) are examples of quadratic programming
problems. Quick and easy computer algorithms are available in programming languages including
Matlab, Gauss and R.
Inference on inequality-constrained estimators is unfortunately quite challenging. The conventional asymptotic theory gives rise to the following dichotomy. If the true parameter satisfies the strict inequality $r(\beta) > 0$, then asymptotically the estimator is not subject to the constraint and the inequality-constrained estimator has an asymptotic distribution equal to the unconstrained case. However, if the true parameter is on the boundary, e.g. $r(\beta) = 0$, then the estimator has a truncated structure. This is easiest to see in the one-dimensional case. If we have an estimator $\hat\beta$ which satisfies $\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} Z = N(0, V_\beta)$ and $\beta = 0$, then the constrained estimator $\tilde\beta = \max[\hat\beta, 0]$ will have the asymptotic distribution $\sqrt{n}\,\tilde\beta \xrightarrow{d} \max[Z, 0]$, a "half-normal" distribution.
7.12 Constrained MLE
Recall that the log-likelihood function (3.43) for the normal regression model is
$$\log L(\beta, \sigma^2) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}SSE_n(\beta).$$
The constrained maximum likelihood estimator (CMLE) $(\hat\beta_{\mathrm{cmle}}, \hat\sigma^2_{\mathrm{cmle}})$ maximizes $\log L(\beta, \sigma^2)$ subject to the constraint (7.34). Since $\log L(\beta, \sigma^2)$ is a function of $\beta$ only through the sum of squared errors $SSE_n(\beta)$, maximizing the likelihood is identical to minimizing $SSE_n(\beta)$. Hence $\hat\beta_{\mathrm{cmle}} = \hat\beta_{\mathrm{cls}}$ and $\hat\sigma^2_{\mathrm{cmle}} = \hat\sigma^2_{\mathrm{cls}}$.
7.13 Technical Proofs*
Proof of Theorem 7.6.1, Equation (7.22). Let $R_\perp$ be a full rank $k \times (k - q)$ matrix satisfying $R_\perp'V_\beta R = 0$ and then set $C = [R, R_\perp]$, which is full rank and invertible. Then we can calculate that
$$C'V_\beta^*C = \begin{pmatrix} R'V_\beta^*R & R'V_\beta^*R_\perp \\ R_\perp'V_\beta^*R & R_\perp'V_\beta^*R_\perp \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & R_\perp'V_\beta R_\perp \end{pmatrix}$$
and
$$C'V_\beta(W)C = \begin{pmatrix} R'V_\beta(W)R & R'V_\beta(W)R_\perp \\ R_\perp'V_\beta(W)R & R_\perp'V_\beta(W)R_\perp \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & R_\perp'V_\beta R_\perp + R_\perp'W^{-1}R\left(R'W^{-1}R\right)^{-1}R'V_\beta R\left(R'W^{-1}R\right)^{-1}R'W^{-1}R_\perp \end{pmatrix}.$$
Thus
$$C'\left(V_\beta(W) - V_\beta^*\right)C = C'V_\beta(W)C - C'V_\beta^*C = \begin{pmatrix} 0 & 0 \\ 0 & R_\perp'W^{-1}R\left(R'W^{-1}R\right)^{-1}R'V_\beta R\left(R'W^{-1}R\right)^{-1}R'W^{-1}R_\perp \end{pmatrix} \geq 0.$$
Since $C$ is invertible it follows that $V_\beta(W) - V_\beta^* \geq 0$, which is (7.22). $\blacksquare$

Proof of Theorem 7.10.1. For simplicity, we assume that the constrained estimator is consistent, $\tilde\beta \xrightarrow{p} \beta$. This can be shown with more effort, but requires a deeper treatment than appropriate for this textbook.
For each element $r_j(\beta)$ of the $q$-vector $r(\beta)$, by the mean value theorem there exists a $\beta_j^*$ on the line segment joining $\tilde\beta$ and $\beta$ such that
$$r_j(\tilde\beta) = r_j(\beta) + \frac{\partial}{\partial \beta}r_j(\beta_j^*)'\left(\tilde\beta - \beta\right). \tag{7.39}$$
Let $R_n^*$ be the $k \times q$ matrix
$$R_n^* = \begin{pmatrix} \frac{\partial}{\partial \beta}r_1(\beta_1^*) & \frac{\partial}{\partial \beta}r_2(\beta_2^*) & \cdots & \frac{\partial}{\partial \beta}r_q(\beta_q^*) \end{pmatrix}.$$
Since $\tilde\beta \xrightarrow{p} \beta$ it follows that $\beta_j^* \xrightarrow{p} \beta$, and by the CMT, $R_n^* \xrightarrow{p} R$. Stacking the (7.39), we obtain
$$r(\tilde\beta) = r(\beta) + R_n^{*\prime}\left(\tilde\beta - \beta\right).$$
Since $r(\tilde\beta) = 0$ by construction and $r(\beta) = 0$ by Assumption 7.5.1, this implies
$$0 = R_n^{*\prime}\left(\tilde\beta - \beta\right). \tag{7.40}$$
The first-order condition for (7.36) is
$$W_n\left(\hat\beta - \tilde\beta\right) = \tilde R\tilde\lambda.$$
Premultiplying by $R_n^{*\prime}W_n^{-1}$, inverting, and using (7.40), we find
$$\tilde\lambda = \left(R_n^{*\prime}W_n^{-1}\tilde R\right)^{-1}R_n^{*\prime}\left(\hat\beta - \tilde\beta\right) = \left(R_n^{*\prime}W_n^{-1}\tilde R\right)^{-1}R_n^{*\prime}\left(\hat\beta - \beta\right).$$
Thus
$$\tilde\beta - \beta = \left(I - W_n^{-1}\tilde R\left(R_n^{*\prime}W_n^{-1}\tilde R\right)^{-1}R_n^{*\prime}\right)\left(\hat\beta - \beta\right).$$
From Theorem 6.3.2 and Theorem 6.7.1 we find
$$\sqrt{n}\left(\tilde\beta - \beta\right) = \left(I - W_n^{-1}\tilde R\left(R_n^{*\prime}W_n^{-1}\tilde R\right)^{-1}R_n^{*\prime}\right)\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} \left(I - W^{-1}R\left(R'W^{-1}R\right)^{-1}R'\right)N(0, V_\beta) = N\left(0, V_\beta(W)\right). \qquad \blacksquare$$
Exercises
Exercise 7.1 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show directly from definition (7.3) that the CLS estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_2 = 0$ is the OLS regression of $y$ on $X_1$.

Exercise 7.2 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show directly from definition (7.3) that the CLS estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = c$ (where $c$ is some given vector), is the OLS regression of $y - X_1c$ on $X_2$.

Exercise 7.3 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, with $X_1$ and $X_2$ each $n \times k$, find the CLS estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = -\beta_2$.

Exercise 7.4 Verify that for $\tilde\beta$ defined in (7.8), $R'\tilde\beta = c$.

Exercise 7.5 Verify (7.16).

Exercise 7.6 Verify that the minimum distance estimator $\tilde\beta$ with $W_n = \hat Q_{xx}$ equals the CLS estimator.

Exercise 7.7 Prove Theorem 7.5.1.

Exercise 7.8 Prove Theorem 7.5.2.

Exercise 7.9 Prove Theorem 7.5.3. (Hint: Use that CLS is a special case of Theorem 7.5.2.)

Exercise 7.10 Verify that (7.20) is $V_\beta(W)$ with $W = V_\beta^{-1}$.

Exercise 7.11 Prove (7.21). Hint: Use (7.20).

Exercise 7.12 Verify (7.23), (7.24) and (7.25).

Exercise 7.13 Verify (7.26), (7.27), and (7.28).
Chapter 8
Hypothesis Testing
8.1 Hypotheses and Tests
It is often the goal of an empirical investigation to determine if one variable affects another. Returning to our example of wage determination, we might be interested in whether union membership affects wages. Equivalently, we might ask if the hypothesis "Union membership does not affect wages" is true or false. In hypothesis testing, we are interested in testing if a specific restriction on the parameters is compatible with the observed data. Letting $\theta$ be the coefficient in a wage regression for union membership, the hypothesis that union membership has no effect on mean wages is the restriction $\theta = 0$. Hypothesis testing is about making a decision whether the restriction $\theta = 0$ is true or false.
In general, a hypothesis is a statement (or assertion) that a restriction is true, where a restriction takes the form $\theta \in \Theta_0$ with $\Theta_0$ a strict subset of a parameter space $\Theta$.

Definition 1 A hypothesis is a statement that $\theta \in \Theta_0 \subset \Theta$.

For every hypothesis $\theta \in \Theta_0$ the statement $\theta \in \Theta_0^c$ is also a hypothesis, where $\Theta_0^c = \{\theta \in \Theta : \theta \notin \Theta_0\}$ is the complement of $\Theta_0$ in $\Theta$. We give a hypothesis and its complement special names.

Definition 2 The null hypothesis is $H_0 : \theta \in \Theta_0$ and the alternative hypothesis is its complement $H_1 : \theta \in \Theta_0^c$.
In the example given previously, the alternative hypothesis is that union membership has an effect on mean wages, or $\theta \neq 0$.
A hypothesis can either be true or not true. The goal of hypothesis testing is to provide evidence concerning the truth of the null hypothesis versus its alternative.
A hypothesis test makes one of two decisions based on the data: either "Accept $H_0$" or "Reject $H_0$". Take again the example about union membership and examine the wage regression reported in Table 5.1. We see that the coefficient for "Male Union Member" is 0.095 (a wage premium of 9.5%) with a standard error of 0.020. Given the magnitude of this estimate, it seems unlikely that the true coefficient could be zero, and so we may be inclined to reject the hypothesis that union membership does not affect wages for males. However, we can also see that the coefficient for "Female Union Member" is 0.022 with a standard error of 0.020. While the point estimate suggests a wage premium of 2.2%, the standard error suggests that it is also plausible that the true coefficient could be zero and the point estimate is merely sampling error. In this case what decision should we make?
A hypothesis test consists of a real-valued test statistic
$$T_n = T_n\left((y_1, x_1), \ldots, (y_n, x_n)\right),$$
a critical value $c$, plus the decision rule
1. Accept $H_0$ if $T_n < c$,
2. Reject $H_0$ if $T_n \geq c$.
The test statistic $T_n$ should be designed so that small values of $T_n$ are likely when $H_0$ is true and large values of $T_n$ are likely when $H_1$ is true. For example, for a test of $H_0 : \theta = \theta_0$ against $H_1 : \theta \neq \theta_0$ the standard test statistic is the absolute t-statistic $T_n = |t_n(\theta_0)|$, as we expect $T_n$ to have a well-behaved distribution when $\theta = \theta_0$, and to be large when $\theta \neq \theta_0$.
Given the two possible states of the world ($H_0$ or $H_1$) and the two possible decisions (Accept $H_0$ or Reject $H_0$), there are four possible pairings of states and decisions as is depicted in the following chart.

Hypothesis Testing Decisions
                 Accept $H_0$         Reject $H_0$
$H_0$ true       Correct Decision     Type I Error
$H_1$ true       Type II Error        Correct Decision

Hypothesis tests are useful if they avoid making errors. There are two possible errors: (1) Rejecting $H_0$ when $H_0$ is true; and (2) Accepting $H_0$ when $H_0$ is false. These two errors are called Type I and Type II errors, respectively. As the events are random it is constructive to evaluate the tests based on the probability of their making an error. For a given test we define the rejection probability function as the probability of rejecting the null hypothesis:
$$\pi_n(\theta) = \Pr(\text{Reject } H_0 \mid \theta) = \Pr(T_n \geq c \mid \theta).$$
The rejection probability in general depends on the unknown parameter $\theta$ and the sample size $n$. For parameter values in the null hypothesis, $\theta \in \Theta_0$, $\pi_n(\theta)$ is the probability of making a Type I error. We also call $\pi_n(\theta)$ the power function of the test, as for parameter values in the alternative, $\theta \in \Theta_0^c$, $\pi_n(\theta)$ equals 1 minus the probability of a Type II error.
For the reasons discussed in Chapter 6, in typical econometric models the exact sampling distribution of statistics such as $T_n$ is unknown and hence $\pi_n(\theta)$ is unknown. Therefore we typically rely on asymptotic approximations which allow a more precise characterization. In particular, we focus on the asymptotic rejection probability
$$\pi(\theta) = \lim_{n \to \infty} \pi_n(\theta).$$
Therefore, it is convenient to select a test statistic $T_n$ which has a well specified asymptotic distribution under $H_0$, that is, $T_n \xrightarrow{d} \xi$ under $H_0$. Furthermore, if the distribution $F$ of $\xi$ does not depend on $\theta \in \Theta_0$ we say that $T_n$ is asymptotically pivotal. In this case,
$$\pi(\theta) = \lim_{n \to \infty}\Pr(T_n \geq c \mid \theta) = \Pr(\xi \geq c) = 1 - F(c)$$
is only a function of $c$. For example, if $T_n$ is the absolute t-statistic, then by Theorem 6.11.1, under $H_0$, $T_n \xrightarrow{d} |Z|$ where $Z \sim N(0, 1)$, and thus $F(c) = \bar\Phi(c)$, the symmetrized normal distribution function defined in (6.40).
Given a test statistic $T_n$, how should we select the critical value $c$? Larger values of $c$ mean that the test rejects less frequently, which decreases the probability of a Type I error but increases the probability of a Type II error. How can we balance one against the other? The dominant approach is to give special priority to the null hypothesis, and select $c$ so that the Type I error probability is controlled at a specified level, meaning that we select a significance level $\alpha \in (0, 1)$ and then pick $c$ so that $\pi(\theta) \leq \alpha$ for all $\theta \in \Theta_0$. When the test statistic $T_n$ is asymptotically pivotal with distribution $F$ this is accomplished by selecting $c$ so that $1 - F(c) = \alpha$.
There is no objective scientific basis for choice of significance level $\alpha$. However, the common practice is to set $\alpha = 0.05$ (5%), which implies that the critical value $c$ is selected so that $1 - F(c) = 0.05$. For example, if $T_n$ is the absolute t-statistic, we find from a normal table that $1 - \bar\Phi(1.96) = 0.05$, so that the 5% critical value is thus $c = 1.96$.
The reasoning behind the choice of a 5% critical value is to ensure that Type I errors should be relatively unlikely – that the decision "Reject $H_0$" has scientific strength – yet the test retains power against reasonable alternatives. The decision "Reject $H_0$" means that the evidence is inconsistent with the null hypothesis, in the sense that it is relatively unlikely (1 in 20) that data generated by the null hypothesis would yield the observed test result.
In contrast, the decision "Accept $H_0$" is not a strong statement. It does not mean that the evidence supports $H_0$, only that there is insufficient evidence to reject $H_0$. Because of this, it is more accurate to use the label "Do not Reject $H_0$" instead of "Accept $H_0$".
When a test rejects $H_0$ at the 5% significance level it is common to say that the statistic is statistically significant, and if the test accepts $H_0$ it is common to say that the statistic is statistically insignificant. It is helpful to remember that this is simply a way of saying "Using the statistic $T_n$, the hypothesis $H_0$ can [cannot] be rejected at the asymptotic 5% level." When the null hypothesis $H_0 : \theta = 0$ is rejected it is common to say that the coefficient $\theta$ is statistically significant, because the test has shown that the coefficient is not equal to zero.
Let us return to the example about the union wage premium. The absolute t-statistic for the coefficient on "Male Union Member" is $0.095/0.020 = 4.75$, which is greater than the 5% asymptotic critical value of 1.96. Therefore we reject the hypothesis that union membership does not affect wages for men. However, the absolute t-statistic for the coefficient on "Female Union Member" is $0.022/0.020 = 1.10$, which is less than 1.96, and therefore we do not reject the hypothesis that union membership does not affect wages for women. We thus say that the effect of union membership for men is statistically significant, while membership for women is not statistically significant.
When a test accepts a null hypothesis, it is commonly interpreted as evidence that the null hypothesis is true. In our wage example, a common interpretation is that "the regression finds that female union membership has no effect on wages". This is an incorrect and most unfortunate interpretation. The test has failed to reject the hypothesis that the coefficient is zero, but that does not mean that the coefficient is actually zero. The test could be making a Type II error.
Consider another question: Does marriage status affect wages? To test the hypothesis that marriage status has no effect on wages, we examine the t-statistics for the coefficients on "Married Male" and "Married Female" in Table 5.1, which are $0.180/0.008 = 22.5$ and $0.016/0.008 = 2.0$, respectively. Both exceed the asymptotic 5% critical value of 1.96, so we reject the hypothesis for both men and women. But the statistic for men is exceptionally high, and that for women is only slightly above the critical value. Suppose in contrast that the t-statistic had been 1.9, which is less than the critical value, leading to the decision "Accept $H_0$" rather than "Reject $H_0$". Should we really be making a different decision if the t-statistic is 1.9 rather than 2.0? The difference in values is small, so shouldn't the difference in the decision also be small? Thinking through these examples it seems unsatisfactory to simply report "Accept $H_0$" or "Reject $H_0$". These two decisions do not summarize the evidence. Instead, the magnitude of the statistic $T_n$ suggests a "degree of evidence" against $H_0$. How can we take this into account?
The answer is to report what is known as the asymptotic p-value
$$p_n = 1 - F(T_n).$$
Since the distribution function $F$ is monotonically increasing, the p-value is a monotonically decreasing function of $T_n$ and is an equivalent test statistic. Instead of rejecting $H_0$ at the significance level $\alpha$ if $T_n \geq c$, we can reject $H_0$ if $p_n \leq \alpha$. Thus it is sufficient to report $p_n$, and let the reader decide.
Furthermore, the asymptotic p-value has a very convenient asymptotic null distribution. Since $T_n \xrightarrow{d} \xi$ under $H_0$, then $p_n = 1 - F(T_n) \xrightarrow{d} 1 - F(\xi)$, which has the distribution
$$\begin{aligned}
\Pr\left(1 - F(\xi) \leq u\right) &= \Pr\left(1 - u \leq F(\xi)\right) \\
&= 1 - \Pr\left(\xi \leq F^{-1}(1 - u)\right) \\
&= 1 - F\left(F^{-1}(1 - u)\right) \\
&= 1 - (1 - u) \\
&= u,
\end{aligned}$$
which is the uniform distribution on $[0, 1]$. Thus $p_n \xrightarrow{d} U[0, 1]$. This means that the "unusualness" of $p_n$ is easier to interpret than the "unusualness" of $T_n$.
The implication is that the best empirical practice is to compute and report the asymptotic p-value $p_n$ rather than simply the test statistic $T_n$ or the binary decision Accept/Reject. The p-value is a simple statistic, easy to interpret, and contains more information than the other choices.
For example, consider the tests for the effect of marriage status on wages. The p-value for men is 0.000, which we can interpret as a very strong rejection of the hypothesis of no effect, while the p-value for women is 0.046, which we can describe as "borderline significant".
We now summarize the main features of hypothesis testing.
1. Select a significance level $\alpha$.
2. Select a test statistic $T_n$ with asymptotic distribution $T_n \xrightarrow{d} \xi$ under $H_0$.
3. Set the critical value $c$ so that $1 - F(c) = \alpha$, where $F$ is the distribution function of $\xi$.
4. Calculate the asymptotic p-value $p_n = 1 - F(T_n)$.
5. Reject $H_0$ if $T_n \geq c$, or equivalently $p_n \leq \alpha$.
6. Accept $H_0$ if $T_n < c$, or equivalently $p_n > \alpha$.
7. Report $p_n$ to summarize the evidence concerning $H_0$ versus $H_1$.
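These steps are mechanical once the test statistic and its asymptotic null distribution are chosen. For instance, for a statistic that is asymptotically $|Z|$ (an absolute t-statistic), the decision rule and p-value can be computed as in the following sketch (the function is illustrative, not from the text):

```python
from scipy.stats import norm

def abs_t_test(T_n, alpha=0.05):
    """Asymptotic test based on an absolute t-statistic T_n -> |Z| under H0."""
    c = norm.ppf(1 - alpha / 2)          # critical value: 1.96 for alpha = 0.05
    p_value = 2 * (1 - norm.cdf(T_n))    # asymptotic p-value 1 - F(T_n)
    decision = "Reject H0" if T_n >= c else "Do not reject H0"
    return decision, p_value

print(abs_t_test(4.75))   # male union premium example: reject, p close to 0.000
print(abs_t_test(1.10))   # female union premium example: do not reject, p about 0.27
```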
8.2 t tests
The most commonly applied statistical tests are hypotheses on individual coefficients $\theta = h(\beta)$, and can be written in the form
$$H_0 : \theta = \theta_0 \tag{8.1}$$
where $\theta_0$ is some pre-specified value. Quite typically, $\theta_0 = 0$, as interest focuses on whether or not a coefficient equals zero, but this is not the only possibility. For example, interest may focus on whether an elasticity $\theta$ equals 1, in which case we may wish to test $H_0 : \theta = 1$.
As we described in the previous section, tests of $H_0$ typically are based on the t-statistic
$$t_n(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)}$$
where $\hat\theta$ is the point estimate and $s(\hat\theta)$ is its standard error. The test will depend on whether the alternative hypothesis is one-sided or two-sided. The two-sided alternative is
$$H_1 : \theta \neq \theta_0 \tag{8.2}$$
which is appropriate for general tests of significance. In this case the test statistic is the absolute value of the t-statistic
$$t_n = |t_n(\theta_0)| = \left|\frac{\hat\theta - \theta_0}{s(\hat\theta)}\right|.$$
Since Theorem 6.11.1 established that when $\theta = \theta_0$,
$$t_n \xrightarrow{d} |Z|$$
where $Z \sim N(0, 1)$, asymptotic critical values can be taken from the normal distribution table. In particular, the asymptotic 5% critical value is $c = 1.96$.
Also, the asymptotic p-value for $H_0$ against $H_1$ is
$$p_n = 2\left(1 - \Phi(t_n)\right).$$

Theorem 8.2.1 Under Assumptions 6.1.2 and 6.9.1, and $H_0$,
$$|t_n(\theta_0)| \xrightarrow{d} |Z|$$
and for $c$ satisfying $\alpha = 2\left(1 - \Phi(c)\right)$,
$$\Pr\left(|t_n(\theta_0)| \geq c \mid H_0\right) \to \alpha,$$
so the test "Reject $H_0$ if $|t_n(\theta_0)| \geq c$" has asymptotic significance level $\alpha$.

We described (8.2) as a "two-sided alternative". Sometimes we are interested in testing for one-sided alternatives such as
$$H_1 : \theta > \theta_0 \tag{8.3}$$
or
$$H_1 : \theta < \theta_0. \tag{8.4}$$
Tests of (8.1) against (8.3) or (8.4) are based on the signed t-statistic
$$t_n = t_n(\theta_0) = \frac{\hat\theta - \theta_0}{s(\hat\theta)}.$$
The hypothesis (8.1) is rejected in favor of (8.3) if $t_n \geq c$ where $c$ satisfies $\alpha = 1 - \Phi(c)$. Negative values of $t_n$ are not taken as evidence against $H_0$, as point estimates $\hat\theta$ less than $\theta_0$ do not point to (8.3). Since the critical values are taken from the single tail of the normal distribution, they are smaller than for two-sided tests. Specifically, the asymptotic 5% critical value is $c = 1.645$. Thus we reject (8.1) in favor of (8.3) if $t_n \geq 1.645$.
Conversely, tests of (8.1) against (8.4) reject $H_0$ for negative t-statistics, e.g. if $t_n \leq -c$. For this alternative large positive values of $t_n$ are not evidence against $H_0$. An asymptotic 5% test rejects if $t_n \leq -1.645$.
There seems to be an ambiguity. Should we use the two-sided critical value 1.96 or the one-sided critical value 1.645? The answer is that we should use one-sided tests and critical values only when the parameter space is known to satisfy a one-sided restriction such as $\theta \geq \theta_0$. This is when the test of (8.1) against (8.3) makes sense. If the restriction $\theta \geq \theta_0$ is not known a priori, then imposing this restriction to test (8.1) against (8.3) does not make sense. Since linear regression coefficients do not have a priori sign restrictions, we conclude that two-sided tests are generally appropriate.
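The distinction between the two-sided and one-sided tests amounts to which tail probabilities are computed. A brief sketch (the function and argument names are illustrative assumptions, not from the text):

```python
from scipy.stats import norm

def t_test_pvalue(theta_hat, theta0, se, alternative="two-sided"):
    """Asymptotic p-values for H0: theta = theta0 based on the t-statistic."""
    t = (theta_hat - theta0) / se
    if alternative == "two-sided":       # H1: theta != theta0
        return 2 * (1 - norm.cdf(abs(t)))
    if alternative == "greater":         # H1: theta > theta0
        return 1 - norm.cdf(t)
    return norm.cdf(t)                   # H1: theta < theta0
```

The 5% critical values 1.96 (two-sided) and 1.645 (one-sided) correspond to rejecting when these p-values fall below 0.05.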
8.3 t-ratios and the Abuse of Testing
In Section 4.14, we argued that a good applied practice is to report coefficient estimates $\hat\theta$ and standard errors $s(\hat\theta)$ for all coefficients of interest in estimated models. With $\hat\theta$ and $s(\hat\theta)$ the reader can easily construct confidence intervals $[\hat\theta \pm 2s(\hat\theta)]$ and t-statistics $(\hat\theta - \theta_0)/s(\hat\theta)$ for hypotheses of interest.
Some applied papers (especially older ones) instead report estimates $\hat\theta$ and t-ratios $t_n = \hat\theta/s(\hat\theta)$, not standard errors. Reporting t-ratios instead of standard errors is poor econometric practice. While the same information is being reported (you can back out standard errors by division, e.g. $s(\hat\theta) = \hat\theta/t_n$), standard errors are generally more helpful to readers than t-ratios. Standard errors help the reader focus on the estimation precision and confidence intervals, while t-ratios focus attention on statistical significance. While statistical significance is important, it is less important than the parameter estimates themselves and their confidence intervals. The focus should be on the meaning of the parameter estimates, their magnitudes, and their interpretation, not on listing which variables have significant (e.g. non-zero) coefficients. In many modern applications, sample sizes are very large so standard errors can be very small. Consequently t-ratios can be large even if the coefficient estimates are economically small. In such contexts it may not be interesting to announce "The coefficient is non-zero!" Instead, what is interesting to announce is that "The coefficient estimate is economically interesting!"
In particular, some applied papers report coefficient estimates and t-ratios, and limit their discussion of the results to describing which variables are "significant" (meaning that their t-ratios exceed 2) and the signs of the coefficient estimates. This is very poor empirical work, and should be studiously avoided. It is also a recipe for banishment of your work to lower tier economics journals.
Fundamentally, the common t-ratio is a test for the hypothesis that a coefficient equals zero. This should be reported and discussed when this is an interesting economic hypothesis of interest. But if this is not the case, it is distracting.
In general, when a coefficient $\theta$ is of interest, it is constructive to focus on the point estimate, its standard error, and its confidence interval. The point estimate gives our "best guess" for the value. The standard error is a measure of precision. The confidence interval gives us the range of values consistent with the data. If the standard error is large then the point estimate is not a good summary about $\theta$. The endpoints of the confidence interval describe the bounds on the likely possibilities. If the confidence interval embraces too broad a set of values for $\theta$, then the dataset is not sufficiently informative to render inferences about $\theta$. On the other hand, if the confidence interval is tight, then the data have produced an accurate estimate, and the focus should be on the value and interpretation of this estimate. In contrast, the statement "the t-ratio is highly significant" has little interpretive value.
The above discussion requires that the researcher knows what the coefficient $\theta$ means (in terms of the economic problem) and can interpret values and magnitudes, not just signs. This is critical for good applied econometric practice.
For example, consider the question about the effect of marriage status on mean log wages. We had found that the effect is "highly significant" for men and "marginally significant" for women. Now, let's construct asymptotic confidence intervals for the coefficients. The one for men is [0.16, 0.20] and that for women is [0.00, 0.03]. This shows that average wages for married men are about 16-20% higher than for unmarried men, which is very substantial, while the difference for women is about 0-3%, which is small. These magnitudes may be more informative than the results of the hypothesis tests.
8.4 Wald Tests
The t-test is appropriate when the null hypothesis is a real-valued restriction. More generally, there may be multiple restrictions on the coefficient vector $\beta$. For a $q \times 1$ vector of functions $r$, we can write a multiple testing problem as
$$H_0 : r(\beta) = 0 \qquad H_1 : r(\beta) \neq 0.$$
It is natural to estimate $\theta = r(\beta)$ by the plug-in estimate $\hat\theta = r(\hat\beta)$. As this is a $q \times 1$ vector, we can assess its magnitude by constructing a quadratic form such as the Wald statistic (6.44) evaluated at the null hypothesis
$$W_n = n\,\hat\theta'\hat V_\theta^{-1}\hat\theta = n\,r(\hat\beta)'\left(\hat R'\hat V_\beta\hat R\right)^{-1}r(\hat\beta) \tag{8.5}$$
where
$$\hat R = \frac{\partial}{\partial \beta}r(\hat\beta)'.$$
The Wald statistic $W_n$ is a weighted Euclidean measure of the length of the vector $\hat\theta$. When $q = 1$ then $W_n = t_n^2$, the square of the t-statistic, so hypothesis tests based on $W_n$ and $|t_n|$ are equivalent. The Wald statistic (8.5) is a generalization of the t-statistic to the case of multiple restrictions. When $r(\beta) = R'\beta - c$ is a linear function of $\beta$, then the Wald statistic simplifies to
$$W_n = n\left(R'\hat\beta - c\right)'\left(R'\hat V_\beta R\right)^{-1}\left(R'\hat\beta - c\right).$$
As shown in Theorem 6.15.2, when $r(\beta) = 0$ then $W_n \xrightarrow{d} \chi^2_q$, a chi-square random variable with $q$ degrees of freedom. Let $F_q(u)$ denote the $\chi^2_q$ distribution function. For a given significance level $\alpha$, the asymptotic critical value $c$ satisfies $\alpha = 1 - F_q(c)$ and can be found from the chi-square distribution table. For example, the 5% critical values for $q = 1$, $q = 2$, and $q = 3$ are 3.84, 5.99, and 7.82, respectively. An asymptotic test rejects $H_0$ in favor of $H_1$ if $W_n \geq c$. As with t-tests, it is conventional to describe a Wald test as "significant" if $W_n$ exceeds the 5% critical value.
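For linear restrictions the Wald statistic and its chi-square p-value take only a few lines. The sketch below assumes the unrestricted estimate $\hat\beta$ and a consistent covariance matrix estimate $\hat V_\beta$ are already available; all names are my own, not the text's.

```python
import numpy as np
from scipy.stats import chi2

def wald_test_linear(beta_hat, V_hat, R, c, n):
    """Wald statistic for H0: R'beta = c, with asymptotic chi-square p-value."""
    diff = R.T @ beta_hat - c
    W = n * diff @ np.linalg.inv(R.T @ V_hat @ R) @ diff
    q = R.shape[1]                       # number of restrictions
    p_value = 1 - chi2.cdf(W, df=q)
    return W, p_value
```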
Theorem 8.4.1 Under Assumptions 6.1.2 and 6.9.1, $\mathrm{rank}(R) = q$, and $H_0$, then $W_n \xrightarrow{d} \chi^2_q$, and for $c$ satisfying $\alpha = 1 - F_q(c)$,
$$\Pr(W_n \geq c \mid H_0) \to \alpha,$$
so the test "Reject $H_0$ if $W_n \geq c$" has asymptotic significance level $\alpha$.

Notice that the asymptotic distribution in Theorem 8.4.1 depends solely on $q$ – the number of restrictions being tested. It does not depend on $k$ – the number of parameters estimated.
The asymptotic p-value for $W_n$ is $p_n = 1 - F_q(W_n)$. For multiple hypothesis tests it is particularly useful to report p-values instead of the Wald statistic. For example, if you write that a Wald test on eight restrictions ($q = 8$) has the value $W_n = 11.2$, it is difficult for a reader to assess the magnitude of this statistic without the time-consuming and cumbersome process of looking up the critical values from a table. Instead, if you write that the p-value is $p_n = 0.19$ (as is the case for $W_n = 11.2$ and $q = 8$) then it is simple for a reader to interpret its magnitude.
For example, consider the empirical results presented in Table 5.1. The hypothesis "Union membership does not affect wages" is the joint restriction that the coefficients on "Male Union Member" and "Female Union Member" are jointly zero. We calculate the Wald statistic (8.5) for this joint hypothesis and find $W_n = 28.14$ with a p-value of $p_n = 0.00$. Thus we can reject the hypothesis in favor of the alternative that at least one of the coefficients is non-zero. This does not mean that both coefficients are non-zero, just that one of the two is non-zero. Therefore examining the joint Wald statistic and the individual t-statistics is useful for interpreting the coefficients.
If the error is known to be homoskedastic, then it is appropriate to replace $\hat V_\beta$ in (8.5) with the homoskedastic covariance matrix estimate $\hat V_\beta^0 = \hat Q_{xx}^{-1}s^2$. In this case the Wald statistic equals
$$W_n^0 = n\,r(\hat\beta)'\left(\hat R'\hat V_\beta^0\hat R\right)^{-1}r(\hat\beta) = \frac{n\,r(\hat\beta)'\left(\hat R'\hat Q_{xx}^{-1}\hat R\right)^{-1}r(\hat\beta)}{s^2} = \frac{r(\hat\beta)'\left(\hat R'\left(X'X\right)^{-1}\hat R\right)^{-1}r(\hat\beta)}{s^2}. \tag{8.6}$$
The Wald statistic is named after the statistician Abraham Wald, who showed that $W_n$ has optimal weighted average power in certain settings.
8.5 Minimum Distance Tests
A minimum distance test measures the distance between $\hat\beta$ and the restricted estimate $\tilde\beta$. Recall that under the restriction
$$r(\beta) = 0$$
the minimum distance estimate solves the minimization problem
$$\tilde\beta = \mathop{\mathrm{argmin}}_{r(\beta) = 0} J_n(\beta)$$
where
$$J_n(\beta) = n\left(\hat\beta - \beta\right)'W_n\left(\hat\beta - \beta\right)$$
and $W_n$ is a weight matrix. Setting $W_n = \left(\hat V_\beta^0\right)^{-1}$ yields the constrained least squares estimator $\tilde\beta_{\mathrm{cls}}$ and setting $W_n = \hat V_\beta^{-1}$ yields the efficient minimum distance estimator $\tilde\beta_{\mathrm{emd}}$.
The minimum distance test statistic of $H_0$ against $H_1$ is
$$J_n = J_n(\tilde\beta) = \min_{r(\beta) = 0} J_n(\beta) = n\left(\hat\beta - \tilde\beta\right)'W_n\left(\hat\beta - \tilde\beta\right).$$
When $r(\beta) = R'\beta - c$ is linear then
$$J_n = n\left(R'\hat\beta - c\right)'\left(R'W_n^{-1}R\right)^{-1}\left(R'\hat\beta - c\right).$$
Setting $W_n = \left(\hat V_\beta^0\right)^{-1}$ we find
$$J_n^0 := J_n(\tilde\beta_{\mathrm{cls}}) = n\left(R'\hat\beta - c\right)'\left(R'\hat V_\beta^0 R\right)^{-1}\left(R'\hat\beta - c\right) = W_n^0 \tag{8.7}$$
and setting $W_n = \hat V_\beta^{-1}$ we find
$$J_n^* := J_n(\tilde\beta_{\mathrm{emd}}) = n\left(R'\hat\beta - c\right)'\left(R'\hat V_\beta R\right)^{-1}\left(R'\hat\beta - c\right) = W_n.$$
Thus for linear hypotheses $r(\beta) = R'\beta - c$, $J_n^*$ is identical to the Wald statistic (8.5), and $J_n^0$ equals the homoskedastic Wald statistic (8.6). When $r(\beta)$ is non-linear then the Wald and minimum distance statistics are different.

Theorem 8.5.1 Under Assumptions 6.1.2 and 6.9.1, $\mathrm{rank}(R) = q$, and $H_0$, then $J_n^* \xrightarrow{d} \chi^2_q$.

Testing using the minimum distance statistic $J_n^*$ is similar to testing using the Wald statistic $W_n$. Critical values and p-values are computed using the $\chi^2_q$ distribution. $H_0$ is rejected in favor of $H_1$ if $J_n^*$ exceeds the level $\alpha$ critical value. The asymptotic p-value is $p_n = 1 - F_q(J_n^*)$.
8.6 F Tests
When the hypothesis is the linear restriction
$$H_0 : R'\beta - c = 0,$$
then a classic statistic for $H_0$ against $H_1$ is the $F$ statistic
$$F_n = \frac{\left(SSE_n(\tilde\beta_{\mathrm{cls}}) - SSE_n(\hat\beta)\right)/q}{SSE_n(\hat\beta)/(n - k)} \tag{8.8}$$
$$= \left(\frac{n - k}{q}\right)\frac{\tilde\sigma^2 - \hat\sigma^2}{\hat\sigma^2} \tag{8.9}$$
where
$$\hat\sigma^2 = \frac{SSE_n(\hat\beta)}{n} = \frac{1}{n}\sum_{i=1}^n \hat e_i^2$$
is the residual variance estimate under $H_1$ and
$$\tilde\sigma^2 = \frac{SSE_n(\tilde\beta_{\mathrm{cls}})}{n} = \frac{1}{n}\sum_{i=1}^n \tilde e_i^2$$
with $\tilde e_i = y_i - x_i'\tilde\beta_{\mathrm{cls}}$ is the residual variance estimate under $H_0$.
If we recall equation (7.15), which showed that $SSE_n(\beta) = s^2\left(n - k + J_n(\beta)\right)$, then we can also write $F_n$ as
$$F_n = \left(J_n(\tilde\beta_{\mathrm{cls}}) - J_n(\hat\beta)\right)/q = J_n(\tilde\beta_{\mathrm{cls}})/q = W_n^0/q.$$
The second equality uses the fact that $J_n(\hat\beta) = 0$ and the third equality is (8.7). Thus the $F_n$ statistic equals the homoskedastic Wald statistic divided by $q$. It follows that they are equivalent tests for $H_0$ against $H_1$.
In many statistical packages, linear hypothesis tests are reported as $F_n$ rather than $W_n$. While they are equivalent, it is important to know which is being reported to know which critical values to use. (If p-values are directly reported this is not an issue.)
Most packages will calculate critical values and p-values using the $F(q, n - k)$ distribution rather than $\chi^2_q$. This is a prudent small sample adjustment, as the $F$ distribution is exact when the errors are independent of the regressors and normally distributed. However, when the degrees of freedom $n - k$ are large then the difference is negligible. More relevantly, if $n - k$ is small enough to make a difference, probably we shouldn't be trusting the asymptotic approximation anyway!
An elegant feature about (8.8) is that it is directly computable from the standard output from two simple OLS regressions, as the sum of squared errors (or regression variance) is a typical output from statistical packages, and is often reported in applied tables. Thus $F_n$ can be calculated by hand from standard reported statistics even if you don't have the original data (or if you are sitting in a seminar and listening to a presentation!).
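In other words, the F statistic (8.8) requires only the two sums of squared errors and the dimensions, as in this small sketch (the function name and the use of the $F(q, n-k)$ reference distribution for the p-value are my own choices):

```python
from scipy.stats import f as f_dist

def f_statistic(sse_restricted, sse_unrestricted, n, k, q):
    """F statistic (8.8) comparing the restricted and unrestricted regressions,
    with a p-value from the F(q, n-k) reference distribution."""
    F = ((sse_restricted - sse_unrestricted) / q) / (sse_unrestricted / (n - k))
    p_value = 1 - f_dist.cdf(F, q, n - k)
    return F, p_value
```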
If you are presented with an $F$ statistic (or a Wald statistic, as you can just divide by $q$) but don't have access to critical values, a useful rule of thumb is to know that for large $n$, the 5% asymptotic critical value is decreasing as $q$ increases, and is less than 2 for $q \geq 7$.
In many statistical packages, when an OLS regression is estimated, an "F-statistic" is reported. This is $F_n$ where $H_0$ restricts all coefficients except the intercept to be zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is nearly always highly significant. While there are special cases where this F statistic is useful, these cases are atypical. As a general rule, there is no reason to report this F statistic.
The $F$ statistic is named after the statistician Ronald Fisher, one of the founders of modern statistical theory.
8.7 Likelihood Ratio Test
For a model with parameter $\theta \in \Theta$ and likelihood function $L_n(\theta)$ the likelihood ratio statistic for $H_0 : \theta \in \Theta_0$ versus $H_1 : \theta \in \Theta_0^c$ is
$$LR_n = 2\left(\max_{\theta \in \Theta}\log L_n(\theta) - \max_{\theta \in \Theta_0}\log L_n(\theta)\right) = 2\left(\log L_n(\hat\theta) - \log L_n(\tilde\theta)\right)$$
where $\hat\theta$ and $\tilde\theta$ are the unrestricted and constrained MLE.
In the normal linear model the maximized log likelihood (3.44) at the unrestricted and restricted estimates are
$$\log L\left(\hat\beta_{\mathrm{mle}}, \hat\sigma^2_{\mathrm{mle}}\right) = -\frac{n}{2}\left(\log(2\pi) + 1\right) - \frac{n}{2}\log\left(\hat\sigma^2\right)$$
and
$$\log L\left(\tilde\beta_{\mathrm{cmle}}, \tilde\sigma^2_{\mathrm{cmle}}\right) = -\frac{n}{2}\left(\log(2\pi) + 1\right) - \frac{n}{2}\log\left(\tilde\sigma^2\right)$$
respectively. Thus the LR statistic is
$$LR_n = n\left(\log\left(\tilde\sigma^2\right) - \log\left(\hat\sigma^2\right)\right) = n\log\left(\frac{\tilde\sigma^2}{\hat\sigma^2}\right)$$
which is a monotonic function of $\tilde\sigma^2/\hat\sigma^2$. Recall that the $F$ statistic (8.9) is also a monotonic function of $\tilde\sigma^2/\hat\sigma^2$. Thus $LR_n$ and $F_n$ are fundamentally the same statistic and have the same information about $H_0$ versus $H_1$.
Furthermore, by a first-order Taylor series approximation
$$LR_n/q = \frac{n}{q}\log\left(1 + \frac{\tilde\sigma^2}{\hat\sigma^2} - 1\right) \simeq \frac{n}{q}\left(\frac{\tilde\sigma^2}{\hat\sigma^2} - 1\right) \simeq F_n.$$
This shows that the two statistics ($LR_n$ and $F_n$) will be numerically close. It also shows that the $F$ statistic and the homoskedastic Wald statistic for linear hypotheses can also be interpreted as approximate likelihood ratio statistics under normality.
8.8 Problems with Tests of NonLinear Hypotheses

While the t and Wald tests work well when the hypothesis is a linear restriction on $\beta$, they can work quite poorly when the restrictions are nonlinear. This can be seen by a simple example introduced by Lafontaine and White (1986). Take the model
$$y_i = \beta + e_i, \qquad e_i \sim \mathrm{N}(0, \sigma^2)$$
and consider the hypothesis
$$H_0 : \beta = 1.$$
Let $\hat\beta$ and $\hat\sigma^2$ be the sample mean and variance of $y_i$. The standard Wald test for $H_0$ is
$$W_n = n\frac{\left(\hat\beta - 1\right)^2}{\hat\sigma^2}.$$
Now notice that $H_0$ is equivalent to the hypothesis
$$H_0(r) : \beta^r = 1$$
for any positive integer $r$. Letting $h(\beta) = \beta^r$, and noting $h_\beta = r\beta^{r-1}$, we find that the standard Wald test for $H_0(r)$ is
$$W_n(r) = n\frac{\left(\hat\beta^r - 1\right)^2}{\hat\sigma^2 r^2\hat\beta^{2r-2}}.$$
While the hypothesis $\beta^r = 1$ is unaffected by the choice of $r$, the statistic $W_n(r)$ varies with $r$. This is an unfortunate feature of the Wald statistic.
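A short numerical check (Python; an illustrative sketch of the formula above, not taken from the text) evaluates $W_n(r)$ on a grid of $r$ with $n/\hat\sigma^2 = 10$, the setting used in Figure 8.1 below, for the two cases $\hat\beta = 0.8$ and $\hat\beta = 1.6$.

```python
# Sketch: the Wald statistic W_n(r) = n (b^r - 1)^2 / (s2 r^2 b^(2r-2))
# from the Lafontaine-White example, on a grid of r with n/s2 = 10.
import numpy as np

def wald_r(b_hat, r, n_over_s2=10.0):
    return n_over_s2 * (b_hat**r - 1.0)**2 / (r**2 * b_hat**(2 * r - 2))

r_grid = np.arange(1, 11)
for b_hat in (0.8, 1.6):
    print(b_hat, np.round(wald_r(b_hat, r_grid), 2))  # compare to the 5% critical value 3.84
```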
To demonstrate this effect, we have plotted in Figure 8.1 the Wald statistic $W_n(r)$ as a function of $r$, setting $n/\sigma^2 = 10$. The increasing solid line is for the case $\hat\beta = 0.8$. The decreasing dashed line is for the case $\hat\beta = 1.6$. It is easy to see that in each case there are values of $r$ for which the test statistic is significant relative to asymptotic critical values, while there are other values of $r$ for which the test statistic is insignificant. This is distressing since the choice of $r$ is arbitrary and irrelevant to the actual hypothesis.

Figure 8.1: Wald Statistic as a function of $r$

Our first-order asymptotic theory is not useful to help pick $r$, as $W_n(r) \to_d \chi^2_1$ under $H_0$ for any $r$. This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create artificial datasets, to which we apply the statistical tools of interest. This produces random draws from the statistic's sampling distribution. Through repetition, features of this distribution can be calculated.
In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84 – the probability of a false rejection, $\Pr(W_n(r) > 3.84 \mid \beta = 1)$. Given the simplicity of the model, this probability depends only on $r$, $n$, and $\sigma^2$. In Table 8.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of $r$ is varied from 1 to 10, $n$ is varied among 20, 100 and 500, and $\sigma$ is varied among 1 and 3. The Table reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of $r$ – and thus corresponds to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of $n$ and $\sigma$. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics $W_n(r)$ which are larger than 3.84. The null hypothesis $\beta^r = 1$ is true, so these probabilities are Type I error.
To interpret the table, remember that the ideal Type I error probability is 5% (.05) with deviations indicating distortion. Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range. For this particular example the only test which meets this criterion is the conventional $W_n = W_n(1)$ test. Any other choice of $r$ leads to a test with unacceptable Type I error probabilities.
In Table 8.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size $n$ increases. There is, however, no magic choice of $n$ for which all tests perform uniformly well. Test performance deteriorates as $r$ increases, which is not surprising given the dependence of $W_n(r)$ on $r$ as shown in Figure 8.1.
Table 8.1
Type I error Probability of Asymptotic 5% $W_n(r)$ Test

                sigma = 1                  sigma = 3
  r     n=20   n=100   n=500      n=20   n=100   n=500
  1     .06    .05     .05        .07    .05     .05
  2     .08    .06     .05        .15    .08     .06
  3     .10    .06     .05        .21    .12     .07
  4     .13    .07     .06        .25    .15     .08
  5     .15    .08     .06        .28    .18     .10
  6     .17    .09     .06        .30    .20     .11
  7     .19    .10     .06        .31    .22     .13
  8     .20    .12     .07        .33    .24     .14
  9     .22    .13     .07        .34    .25     .15
 10     .23    .14     .08        .35    .26     .16

Note: Rejection frequencies from 50,000 simulated random samples
In this example it is not surprising that the choice $r = 1$ yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.
This point can be illustrated through another example which is similar to one developed in Gregory and Veall (1985). Take the model
$$y_i = \beta_0 + x_{1i}\beta_1 + x_{2i}\beta_2 + e_i \qquad (8.10)$$
$$\mathrm{E}(x_i e_i) = 0$$
and the hypothesis
$$H_0 : \frac{\beta_1}{\beta_2} = r$$
where $r$ is a known constant. Equivalently, define $\theta = \beta_1/\beta_2$, so the hypothesis can be stated as $H_0 : \theta = r$.
Let $\hat\beta = (\hat\beta_0, \hat\beta_1, \hat\beta_2)$ be the least-squares estimates of (8.10), let $\hat V_\beta$ be an estimate of the asymptotic covariance matrix for $\hat\beta$ and set $\hat\theta = \hat\beta_1/\hat\beta_2$. Define
$$\hat H_1 = \begin{pmatrix} 0 \\ 1/\hat\beta_2 \\ -\hat\beta_1/\hat\beta_2^2 \end{pmatrix}$$
so that the standard error for $\hat\theta$ is $s(\hat\theta) = \left(n^{-1}\hat H_1'\hat V_\beta\hat H_1\right)^{1/2}$. In this case a t-statistic for $H_0$ is
$$t_{1n} = \frac{\hat\beta_1/\hat\beta_2 - r}{s(\hat\theta)}.$$
An alternative statistic can be constructed through reformulating the null hypothesis as
$$H_0 : \beta_1 - r\beta_2 = 0.$$
A t-statistic based on this formulation of the hypothesis is
$$t_{2n} = \frac{\hat\beta_1 - r\hat\beta_2}{\left(n^{-1}H_2'\hat V_\beta H_2\right)^{1/2}}$$
where
$$H_2 = \begin{pmatrix} 0 \\ 1 \\ -r \end{pmatrix}.$$
To compare $t_{1n}$ and $t_{2n}$ we perform another simple Monte Carlo simulation. We let $x_{1i}$ and $x_{2i}$ be mutually independent $\mathrm{N}(0, 1)$ variables, $e_i$ be an independent $\mathrm{N}(0, \sigma^2)$ draw with $\sigma = 3$, and normalize $\beta_0 = 0$ and $\beta_1 = 1$. This leaves $\beta_2$ as a free parameter, along with sample size $n$. We vary $\beta_2$ among .1, .25, .50, .75, and 1.0 and $n$ among 100 and 500.
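The following sketch (Python) computes the two t-ratios on a single simulated sample from this design; for simplicity it uses a homoskedastic covariance estimate for $\hat\beta$ with finite-sample scaling, so it illustrates the two formulas rather than replicating the table below.

```python
# Sketch: the delta-method t-ratio t_1n for beta1/beta2 = r versus the
# linear-formulation t-ratio t_2n for beta1 - r*beta2 = 0, on one sample.
import numpy as np

rng = np.random.default_rng(0)
n, beta2 = 100, 0.25
r = 1.0 / beta2                                    # true value of theta = beta1/beta2
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = 1.0 * X[:, 1] + beta2 * X[:, 2] + rng.normal(scale=3.0, size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
V = np.linalg.inv(X.T @ X) * (e @ e) / (n - 3)     # homoskedastic covariance of b

H1 = np.array([0.0, 1.0 / b[2], -b[1] / b[2]**2])  # delta-method gradient
t1 = (b[1] / b[2] - r) / np.sqrt(H1 @ V @ H1)
H2 = np.array([0.0, 1.0, -r])                      # linear formulation
t2 = (b[1] - r * b[2]) / np.sqrt(H2 @ V @ H2)
print(t1, t2)
```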
Table 8.2
Type I error Probability of Asymptotic 5% t-tests

                          n = 100                                    n = 500
           Pr(t_n < -1.645)   Pr(t_n > 1.645)       Pr(t_n < -1.645)   Pr(t_n > 1.645)
  beta_2    t_1n    t_2n       t_1n    t_2n           t_1n    t_2n       t_1n    t_2n
   .10      .47     .06        .00     .06            .28     .05        .00     .05
   .25      .26     .06        .00     .06            .15     .05        .00     .05
   .50      .15     .06        .00     .06            .10     .05        .00     .05
   .75      .12     .06        .00     .06            .09     .05        .00     .05
  1.00      .10     .06        .00     .06            .07     .05        .02     .05
The one-sided Type I error probabilities $\Pr(t_n < -1.645)$ and $\Pr(t_n > 1.645)$ are calculated from 50,000 simulated samples. The results are presented in Table 8.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the $t_{1n}$ statistic diverge greatly from this value, especially for small values of $\beta_2$. The left tail probabilities $\Pr(t_{1n} < -1.645)$ greatly exceed 5%, while the right tail probabilities $\Pr(t_{1n} > 1.645)$ are close to zero in most cases. In contrast, the rejection rates for the linear $t_{2n}$ statistic are invariant to the value of $\beta_2$, and are close to the ideal 5% rate for both sample sizes. The implication of Table 8.2 is that the two t-ratios have dramatically different sampling behavior.
The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis.
A simple solution is to use the minimum distance statistic $J_n$, which equals $W_n$ with $r = 1$ in the first example, and $t_{2n}$ in the second example. The minimum distance statistic is invariant to the algebraic formulation of the null hypothesis, so is immune to this problem. Whenever possible, the Wald statistic should not be used to test nonlinear hypotheses.
8.9 Monte Carlo Simulation

In Section 8.8 we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses. In this section we describe the method in more detail.
Recall, our data consist of observations $(y_i, x_i)$ which are random draws from a population distribution $F$. Let $\theta$ be a parameter and let $T_n = T_n\left((y_1, x_1), \ldots, (y_n, x_n), \theta\right)$ be a statistic of interest, for example an estimator $\hat\theta$ or a t-statistic $(\hat\theta - \theta)/s(\hat\theta)$. The exact distribution of $T_n$ is
$$G_n(u, F) = \Pr(T_n \leq u \mid F).$$
While the asymptotic distribution of $T_n$ might be known, the exact (finite sample) distribution $G_n$ is generally unknown.
Monte Carlo simulation uses numerical simulation to compute $G_n(u, F)$ for selected choices of $F$. This is useful to investigate the performance of the statistic $T_n$ in reasonable situations and sample sizes. The basic idea is that for any given $F$, the distribution function $G_n(u, F)$ can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort where games of chance are played.
The method of Monte Carlo is quite simple to describe. The researcher chooses $F$ (the distribution of the data) and the sample size $n$. A "true" value of $\theta$ is implied by this choice, or equivalently the value $\theta$ is selected directly by the researcher which implies restrictions on $F$.
Then the following experiment is conducted by computer simulation:
1. $n$ independent random pairs $(y_i^*, x_i^*)$, $i = 1, \ldots, n$, are drawn from the distribution $F$ using the computer's random number generator.
2. The statistic $T_n = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), \theta\right)$ is calculated on this pseudo data.
For step 1, most computer packages have built-in procedures for generating U[0, 1] and N(0, 1) random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)
For step 2, it is important that the statistic be evaluated at the "true" value of $\theta$ corresponding to the choice of $F$.
The above experiment creates one random draw from the distribution $G_n(u, F)$. This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment $B$ times, where $B$ is a large number. Typically, we set $B = 1000$ or $B = 5000$. We will discuss this choice later.
Notationally, let the $b$'th experiment result in the draw $T_{nb}$, $b = 1, \ldots, B$. These results are stored. After all $B$ experiments have been calculated, these results constitute a random sample of size $B$ from the distribution of $G_n(u, F) = \Pr(T_{nb} \leq u) = \Pr(T_n \leq u \mid F)$.
From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:
Suppose we are interested in the bias, mean-squared error (MSE), and/or variance of the distribution of $\hat\theta - \theta$. We then set $T_n = \hat\theta - \theta$, run the above experiment, and calculate
$$\widehat{\mathrm{Bias}}(\hat\theta) = \frac{1}{B}\sum_{b=1}^B T_{nb} = \frac{1}{B}\sum_{b=1}^B\hat\theta_b - \theta$$
$$\widehat{\mathrm{MSE}}(\hat\theta) = \frac{1}{B}\sum_{b=1}^B\left(T_{nb}\right)^2 = \frac{1}{B}\sum_{b=1}^B\left(\hat\theta_b - \theta\right)^2$$
$$\widehat{\mathrm{var}}(\hat\theta) = \widehat{\mathrm{MSE}}(\hat\theta) - \left(\widehat{\mathrm{Bias}}(\hat\theta)\right)^2$$
Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set $T_n = \left|\hat\theta - \theta\right|/s(\hat\theta)$ and calculate
$$\hat P = \frac{1}{B}\sum_{b=1}^B 1\left(T_{nb} \geq 1.96\right), \qquad (8.11)$$
the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.
Suppose we are interested in the 5% and 95% quantile of $T_n = \hat\theta$ or $T_n = \left(\hat\theta - \theta\right)/s(\hat\theta)$. We then compute the 5% and 95% sample quantiles of the sample $\{T_{nb}\}$. The $\alpha$% sample quantile is a number $q_\alpha$ such that $\alpha$% of the sample are less than $q_\alpha$. A simple way to compute sample quantiles is to sort the sample $\{T_{nb}\}$ from low to high. Then $q_\alpha$ is the $N$'th number in this ordered sequence, where $N = (B+1)\alpha$. It is therefore convenient to pick $B$ so that $N$ is an integer. For example, if we set $B = 999$, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.
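The following sketch (Python) runs the full experiment for a deliberately simple design in which the data are N(θ, 1), the estimator is the sample mean, and the statistic is its t-ratio; the bias, MSE, variance, rejection rate and quantiles are then formed from the stored draws exactly as described above. The design is an illustrative assumption, not one taken from the text.

```python
# Sketch of a Monte Carlo experiment: B replications of an estimator and its
# t-ratio, followed by method-of-moments summaries of the stored draws.
import numpy as np

rng = np.random.default_rng(42)
n, B, theta = 50, 5000, 0.0
est = np.empty(B)
tstat = np.empty(B)
for b in range(B):
    y = rng.normal(loc=theta, scale=1.0, size=n)      # step 1: draw pseudo data
    est[b] = y.mean()                                 # step 2: compute statistics
    tstat[b] = (est[b] - theta) / (y.std(ddof=1) / np.sqrt(n))

bias = np.mean(est - theta)
mse = np.mean((est - theta) ** 2)
var = mse - bias ** 2
reject = np.mean(np.abs(tstat) >= 1.96)               # estimate of (8.11)
q05, q95 = np.percentile(tstat, [5, 95])
print(bias, mse, var, reject, q05, q95)
```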
The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on $n$ and $F$. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of $n$ and $F$.
As discussed above, the researcher must select the number of experiments, $B$. Often this is called the number of replications. Quite simply, a larger $B$ results in more precise estimates of the features of interest of $G_n$, but requires more computational time. In practice, therefore, the choice of $B$ is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size $B$, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then $B$ will have to be increased.
In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (8.11). The random variable $1(T_{nb} \geq 1.96)$ is iid Bernoulli, equalling 1 with probability $p = \mathrm{E}\,1(T_{nb} \geq 1.96)$. The average (8.11) is therefore an unbiased estimator of $p$ with standard error $s(\hat p) = \sqrt{p(1-p)/B}$. As $p$ is unknown, this may be approximated by replacing $p$ with $\hat p$ or with an hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set $s(\hat p) = \sqrt{(.05)(.95)/B} \simeq .22/\sqrt{B}$. Hence, standard errors for $B = 100$, 1000, and 5000, are, respectively, $s(\hat p) = .022$, .007, and .003.
Most papers in econometric methods, and some empirical papers, include the results of Monte Carlo simulations to illustrate the performance of their methods. When extending existing results, it is good practice to start by replicating existing (published) results. This is not exactly possible in the case of simulation results, as they are inherently random. For example suppose a paper investigates a statistical test, and reports a simulated rejection probability of 0.07 based on a simulation with $B = 100$ replications. Suppose you attempt to replicate this result, and find a rejection probability of 0.03 (again using $B = 100$ simulation replications). Should you conclude that you have failed in your attempt? Absolutely not! Under the hypothesis that both simulations are identical, you have two independent estimates, $\hat p_1 = 0.07$ and $\hat p_2 = 0.03$, of a common probability $p$. The asymptotic (as $B \to \infty$) distribution of their difference is $\sqrt{B}(\hat p_1 - \hat p_2) \to_d \mathrm{N}(0, 2p(1-p))$, so a standard error for $\hat p_1 - \hat p_2 = 0.04$ is $\hat s = \sqrt{2p(1-p)/B} \simeq 0.03$, where I estimate $p = (\hat p_1 + \hat p_2)/2$. Since the t-ratio $0.04/0.03 = 1.3$ is not statistically significant, it is incorrect to reject the null hypothesis that the two simulations are identical. The difference between the results $\hat p_1 = 0.07$ and $\hat p_2 = 0.03$ is consistent with random variation.
What should be done? The first mistake was to copy the previous paper's choice of $B = 100$. Instead, suppose you set $B = 5000$ and now obtain $\hat p_2 = 0.04$. Then $\hat p_1 - \hat p_2 = 0.03$ and a standard error is $\hat s = \sqrt{p(1-p)(1/100 + 1/5000)} \simeq 0.02$. Still we cannot reject the hypothesis that the two simulations are equivalent. Even though the estimates (0.07 and 0.04) appear to be quite different, the difficulty is that the original simulation used a very small number of replications ($B = 100$) so the reported estimate is quite imprecise. In this case, it is appropriate to conclude that your results "replicate" the previous study, as there is no statistical evidence to reject the hypothesis that they are equivalent.
Most journals have policies requiring authors to make available their data sets and computer programs required for empirical results. They do not have similar policies regarding simulations. Nevertheless, it is good professional practice to make your simulations available. The best practice is to post your simulation code on your webpage. This invites others to build on and use your results, leading to possible collaboration, citation, and/or advancement.
8.10 Confidence Intervals by Test Inversion

There is a close relationship between hypothesis tests and confidence intervals. We observed in Section 6.12 that the standard 95% asymptotic confidence interval for a parameter $\theta$ is
$$C_n = \left[\hat\theta - 1.96\,s(\hat\theta),\ \hat\theta + 1.96\,s(\hat\theta)\right] \qquad (8.12)$$
$$= \{\theta : |t_n(\theta)| \leq 1.96\}.$$
That is, we can describe $C_n$ as "The point estimate plus or minus 2 standard errors" or "The set of parameter values not rejected by a two-sided t-test." The second definition, known as "test statistic inversion" is a general method for finding confidence intervals, and typically produces confidence intervals with excellent properties.
Given a test statistic $T_n(\theta)$ and critical value $c$, the acceptance region "Accept if $T_n(\theta) \leq c$" is identical to the confidence interval $C_n = \{\theta : T_n(\theta) \leq c\}$. Since the regions are identical, the probability of coverage $\Pr(\theta \in C_n)$ equals the probability of correct acceptance $\Pr(\mathrm{Accept} \mid \theta)$ which is exactly 1 minus the Type I error probability. Thus inverting tests with good Type I error probabilities yields a confidence interval with good coverage probabilities.
Now suppose that the parameter of interest $\theta = h(\beta)$ is a nonlinear function of the coefficient vector $\beta$. In this case the standard confidence interval for $\theta$ is the set $C_n$ as in (8.12) where $\hat\theta = h(\hat\beta)$ is the point estimate and $s(\hat\theta) = n^{-1/2}\sqrt{\hat H_\beta'\hat V_\beta\hat H_\beta}$ is the delta method standard error. This confidence interval is inverting the t-test based on the nonlinear hypothesis $h(\beta) = \theta$. The trouble is that in Section 8.8 we learned that there is no unique t-statistic for tests of nonlinear hypotheses and that the choice of parameterization matters greatly.
For example, if $\theta = \beta_1/\beta_2$ then the coverage probability of the standard interval (8.12) is 1 minus the probability of the Type I error, which as shown in Table 8.2 can be far from the nominal 5%.
In this example a good solution is the same as discussed in Section 8.8 – to rewrite the hypothesis as a linear restriction. The hypothesis $\theta = \beta_1/\beta_2$ is the same as $\theta\beta_2 = \beta_1$. The t-statistic for this restriction is
$$t_n(\theta) = \frac{\hat\beta_1 - \hat\beta_2\theta}{\left(h_\theta'\hat V h_\theta\right)^{1/2}}$$
where
$$h_\theta = \begin{pmatrix} 1 \\ -\theta \end{pmatrix}$$
and $\hat V$ is the covariance matrix for $(\hat\beta_1, \hat\beta_2)$. A 95% confidence interval for $\theta = \beta_1/\beta_2$ is the set of values of $\theta$ such that $|t_n(\theta)| \leq 1.96$. Since $\theta$ appears in both the numerator and denominator, $t_n(\theta)$ is a non-linear function of $\theta$ so the easiest method to find the confidence set is by grid search over $\theta$.
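A minimal sketch of the grid search (Python), assuming the estimates $\hat\beta_1$, $\hat\beta_2$ and their $2\times 2$ covariance matrix are available; the numbers used here are hypothetical placeholders.

```python
# Sketch: confidence set for theta = beta1/beta2 by inverting the linear
# t-statistic t_n(theta) = (b1 - b2*theta) / sqrt(h' V h) with h = (1, -theta)'.
import numpy as np

b1, b2 = 1.0, 0.25                                 # hypothetical estimates
V = np.array([[0.04, 0.01], [0.01, 0.02]])         # hypothetical covariance of (b1, b2)

def t_stat(theta):
    h = np.array([1.0, -theta])
    return (b1 - b2 * theta) / np.sqrt(h @ V @ h)

grid = np.linspace(-20.0, 20.0, 4001)
accepted = grid[np.abs([t_stat(th) for th in grid]) <= 1.96]
print(accepted.min(), accepted.max())              # endpoints of the inverted 95% set
```

Note that when $\hat\beta_2$ is imprecisely estimated the accepted set need not be a bounded interval, so it is worth inspecting the full accepted grid rather than only its endpoints.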
For example, in the wage equation
$$\log(Wage) = \beta_1 Experience + \beta_2 Experience^2/100 + \cdots$$
the highest expected wage occurs at $Experience = -50\beta_1/\beta_2$. From Table 5.1 we have the point estimate $\hat\theta = 29.8$ and we can calculate the standard error $s(\hat\theta) = 0.022$ for a 95% confidence interval [29.8, 29.9]. However, if we instead invert the linear form of the test we can numerically find the interval [29.1, 30.6] which is much larger. From the evidence presented in Section 8.8 we know the first interval can be quite inaccurate and the second interval is greatly preferred.
8.11 Asymptotic Power

The power of a test is the probability of rejecting $H_0$ when $H_1$ is true.
For simplicity suppose that $y_i$ is i.i.d. $\mathrm{N}(\mu, \sigma^2)$ with $\sigma^2$ known, consider the t-statistic $t_n(\mu) = \sqrt{n}(\bar y - \mu)/\sigma$, and tests of $H_0 : \mu = 0$ against $H_1 : \mu > 0$. We reject $H_0$ if $t_n = t_n(0) > c$. Note that
$$t_n = t_n(\mu) + \sqrt{n}\mu/\sigma$$
and $t_n(\mu) = Z$ has an exact N(0, 1) distribution. This is because $t_n(\mu)$ is centered at the true mean $\mu$, while the test statistic $t_n(0)$ is centered at the (false) hypothesized mean of 0.
The power of the test is
$$\Pr(t_n > c) = \Pr\left(Z + \sqrt{n}\mu/\sigma > c\right) = 1 - \Phi\left(c - \sqrt{n}\mu/\sigma\right).$$
This function is monotonically increasing in $\mu$ and $n$, and decreasing in $\sigma$.
Notice that for any $c$ and $\mu \neq 0$, the power increases to 1 as $n \to \infty$. This means that for $\theta \in H_1$, the test will reject $H_0$ with probability approaching 1 as the sample size gets large. We call this property test consistency.

Definition 8.11.1 An asymptotic level $\alpha$ test of $H_0 : \theta \in \Theta_0$ is consistent against fixed alternatives if for all $\theta \in \Theta_1$, $\Pr(\text{Reject } H_0 \mid \theta) \to 1$ as $n \to \infty$.

For asymptotic level $\alpha$ tests of the form "Reject $H_0$ if $T_n \geq c$", a sufficient condition for test consistency is that $T_n$ diverges to positive infinity with probability one for all $\theta \in \Theta_1$.

Definition 8.11.2 $T_n \to_p \infty$ as $n \to \infty$ if for all $M < \infty$, $\Pr(T_n \leq M) \to 0$ as $n \to \infty$.

In general, t-tests and Wald tests are consistent against fixed alternatives. Take a t-statistic for a test of $H_0 : \theta = \theta_0$,
$$t_n = \frac{\hat\theta - \theta_0}{s(\hat\theta)}$$
where $\theta_0$ is a known value and $s(\hat\theta) = \sqrt{n^{-1}\hat V}$. Note that
$$t_n = \frac{\hat\theta - \theta}{s(\hat\theta)} + \frac{\sqrt{n}(\theta - \theta_0)}{\sqrt{\hat V}}$$
where the first term converges in distribution to N(0, 1) and the second term converges in probability to $+\infty$ if $\theta > \theta_0$ and converges to $-\infty$ if $\theta < \theta_0$. Thus the two-sided t-test is consistent against $H_1 : \theta \neq \theta_0$, and one-sided t-tests are consistent against the alternatives for which they are designed.
The Wald statistic for $H_0 : \theta = r(\beta) = 0$ against $H_1 : \theta \neq 0$ is
$$W_n = n\hat\theta'\hat V_\theta^{-1}\hat\theta.$$
Under $H_1$, $\hat\theta \to_p \theta \neq 0$. Thus $\hat\theta'\hat V_\theta^{-1}\hat\theta \to_p \theta' V_\theta^{-1}\theta > 0$. Hence under $H_1$, $W_n \to_p \infty$. Again, this implies that Wald tests are consistent tests.

Theorem 8.11.1 Under Assumptions 6.1.2 and 6.9.1, and $\mathrm{rank}(H_\beta) = q$, for $\theta = r(\beta) \neq 0$, then $W_n \to_p \infty$, so the test "Reject $H_0$ if $W_n \geq c$" is consistent against fixed alternatives.

Consistency is a good property for a test, but does not give a useful approximation to the power function. One useful asymptotic method to compute a continuous approximation to the power function is based on local alternatives similar to our analysis of restriction estimation under misspecification (Section 7.9). The technique is to index the parameter by sample size so that the asymptotic distribution of the statistic is continuous in a localizing parameter. Specifically, suppose that
$$\theta = r(\beta) = n^{-1/2}h$$
where $h$ is the $q \times 1$ localizing parameter. Then
$$\sqrt{n}\hat\theta = \sqrt{n}\left(\hat\theta - \theta\right) + \sqrt{n}\theta = \sqrt{n}\left(\hat\theta - \theta\right) + h \to_d Z_h = \mathrm{N}(h, V_\theta),$$
a non-central normal distribution, and
$$W_n = n\hat\theta'\hat V_\theta^{-1}\hat\theta \to_d Z_h'V_\theta^{-1}Z_h = \chi^2_q(\lambda)$$
where $\lambda = h'V_\theta^{-1}h$. The distribution $\chi^2_q(\lambda)$ is the noncentral chi-square distribution with degrees of freedom $q$ and noncentrality parameter $\lambda$, and is a function of $q$ and $\lambda$ only.
This is convenient as we then obtain the following approximation to the power function. As $n \to \infty$,
$$\Pr(W_n \geq c) \to \Pr\left(\chi^2_q(\lambda) \geq c\right) \equiv \pi_q(\lambda, c).$$
The function $\pi_q(\lambda, c)$ is known as the asymptotic local power function, and for given $c$ and $q$, depends only on the real-valued parameter $\lambda \geq 0$.
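The local power function is easy to evaluate numerically; here is a small sketch (Python with scipy) computing $\pi_q(\lambda, c)$ at the asymptotic 5% critical value for a few values of $q$ and $\lambda$.

```python
# Sketch: asymptotic local power pi_q(lambda, c) = Pr(chi2_q(lambda) > c),
# evaluated at the 5% chi-square critical value.
import numpy as np
from scipy import stats

lam = np.array([1.0, 4.0, 9.0, 16.0])          # noncentrality parameters
for q in (1, 2, 3):
    c = stats.chi2.ppf(0.95, q)                # asymptotic 5% critical value
    power = stats.ncx2.sf(c, q, lam)           # noncentral chi-square tail probability
    print(q, np.round(c, 2), np.round(power, 3))
```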
Theorem 8.11.2 Under Assumptions 6.1.2 and 6.9.1, $\mathrm{rank}(H_\beta) = q$, and $\theta = r(\beta) = n^{-1/2}h$, then
$$W_n \to_d \chi^2_q(\lambda)$$
where $\lambda = h'V_\theta^{-1}h$. Furthermore,
$$\Pr(W_n \geq c) \to \Pr\left(\chi^2_q(\lambda) \geq c\right) = \pi_q(\lambda, c).$$
Exercises

Exercise 8.1 Prove that if an additional regressor $X_{k+1}$ is added to $X$, Theil's adjusted $\bar R^2$ increases if and only if $|t_{k+1}| > 1$, where $t_{k+1} = \hat\beta_{k+1}/s(\hat\beta_{k+1})$ is the t-ratio for $\hat\beta_{k+1}$ and
$$s(\hat\beta_{k+1}) = \left(s^2\left[(X'X)^{-1}\right]_{k+1,k+1}\right)^{1/2}$$
is the homoskedasticity-formula standard error.

Exercise 8.2 You have two independent samples $(y_1, X_1)$ and $(y_2, X_2)$ which satisfy $y_1 = X_1\beta_1 + e_1$ and $y_2 = X_2\beta_2 + e_2$, where $\mathrm{E}(x_{1i}e_{1i}) = 0$ and $\mathrm{E}(x_{2i}e_{2i}) = 0$, and both $X_1$ and $X_2$ have $k$ columns. Let $\hat\beta_1$ and $\hat\beta_2$ be the OLS estimates of $\beta_1$ and $\beta_2$. For simplicity, you may assume that both samples have the same number of observations $n$.
(a) Find the asymptotic distribution of $\sqrt{n}\left(\left(\hat\beta_2 - \hat\beta_1\right) - (\beta_2 - \beta_1)\right)$ as $n \to \infty$.
(b) Find an appropriate test statistic for $H_0 : \beta_2 = \beta_1$.
(c) Find the asymptotic distribution of this statistic under $H_0$.

Exercise 8.3 The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are
• $I_i$ Investment to Capital Ratio (multiplied by 100).
• $Q_i$ Total Market Value to Asset Ratio (Tobin's Q).
• $C_i$ Cash Flow to Asset Ratio.
• $D_i$ Long Term Debt to Asset Ratio.
The flow variables are annual sums for 1987. The stock variables are beginning of year.
(a) Estimate a linear regression of $I_i$ on the other variables. Calculate appropriate standard errors.
(b) Calculate asymptotic confidence intervals for the coefficients.
(c) This regression is related to Tobin's $q$ theory of investment, which suggests that investment should be predicted solely by $Q_i$. Thus the coefficient on $Q_i$ should be positive and the others should be zero. Test the joint hypothesis that the coefficients on $C_i$ and $D_i$ are zero. Test the hypothesis that the coefficient on $Q_i$ is zero. Are the results consistent with the predictions of the theory?
(d) Now try a non-linear (quadratic) specification. Regress $I_i$ on $Q_i$, $C_i$, $D_i$, $Q_i^2$, $C_i^2$, $D_i^2$, $Q_iC_i$, $Q_iD_i$, $C_iD_i$. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.

Exercise 8.4 In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi). The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlov was interested in estimating a cost function: $TC = f(Q, PL, PF, PK)$.
(a) First estimate an unrestricted Cobb-Douglass specification
$$\log TC_i = \beta_1 + \beta_2\log Q_i + \beta_3\log PL_i + \beta_4\log PF_i + \beta_5\log PK_i + e_i. \qquad (8.13)$$
Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.
(b) What is the economic meaning of the restriction $H_0 : \beta_3 + \beta_4 + \beta_5 = 1$?
(c) Estimate (8.13) by constrained least-squares imposing $\beta_3 + \beta_4 + \beta_5 = 1$. Report your parameter estimates and standard errors.
(d) Estimate (8.13) by efficient minimum distance imposing $\beta_3 + \beta_4 + \beta_5 = 1$. Report your parameter estimates and standard errors.
(e) Test $H_0 : \beta_3 + \beta_4 + \beta_5 = 1$ using a Wald statistic.
(f) Test $H_0 : \beta_3 + \beta_4 + \beta_5 = 1$ using a minimum distance statistic.
Chapter 9

Regression Extensions

9.1 Generalized Least Squares

In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model
$$y_i = x_i'\beta + e_i$$
$$\mathrm{E}(e_i \mid x_i) = 0,$$
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator (4.13) introduced in Section 4.6.1. The GLS estimator is sometimes called the Aitken estimator. The GLS estimator (9.1) is infeasible since the matrix $D$ is unknown. A feasible GLS (FGLS) estimator replaces the unknown $D$ with an estimate $\hat D = \mathrm{diag}\{\hat\sigma^2_1, \ldots, \hat\sigma^2_n\}$. We now discuss this estimation problem.
Suppose that we model the conditional variance using the parametric form
$$\sigma^2_i = \alpha_0 + z_{1i}'\alpha_1 = \alpha'z_i,$$
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.
Let $\eta_i = e_i^2$. Then
$$\mathrm{E}(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1$$
and we have the regression equation
$$\eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i \qquad (9.1)$$
$$\mathrm{E}(\xi_i \mid x_i) = 0.$$
This regression error $\xi_i$ is generally heteroskedastic and has the conditional variance
$$\mathrm{var}(\xi_i \mid x_i) = \mathrm{var}\left(e_i^2 \mid x_i\right) = \mathrm{E}\left(\left(e_i^2 - \mathrm{E}\left(e_i^2 \mid x_i\right)\right)^2 \mid x_i\right) = \mathrm{E}\left(e_i^4 \mid x_i\right) - \left(\mathrm{E}\left(e_i^2 \mid x_i\right)\right)^2.$$
Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
$$\hat\alpha = \left(Z'Z\right)^{-1}Z'\eta \to_p \alpha$$
and
$$\sqrt{n}(\hat\alpha - \alpha) \to_d \mathrm{N}(0, V_\alpha)$$
where
$$V_\alpha = \left(\mathrm{E}\left(z_iz_i'\right)\right)^{-1}\mathrm{E}\left(z_iz_i'\xi_i^2\right)\left(\mathrm{E}\left(z_iz_i'\right)\right)^{-1}. \qquad (9.2)$$
While $e_i$ is not observed, we have the OLS residual $\hat e_i = y_i - x_i'\hat\beta = e_i - x_i'(\hat\beta - \beta)$. Thus
$$\phi_i \equiv \hat\eta_i - \eta_i = \hat e_i^2 - e_i^2 = -2e_ix_i'\left(\hat\beta - \beta\right) + (\hat\beta - \beta)'x_ix_i'(\hat\beta - \beta).$$
And then
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i\phi_i = \frac{-2}{n}\sum_{i=1}^n z_ie_ix_i'\sqrt{n}\left(\hat\beta - \beta\right) + \frac{1}{n}\sum_{i=1}^n z_i(\hat\beta - \beta)'x_ix_i'(\hat\beta - \beta)\sqrt{n} \to_p 0.$$
Let
$$\tilde\alpha = \left(Z'Z\right)^{-1}Z'\hat\eta \qquad (9.3)$$
be from OLS regression of $\hat\eta_i$ on $z_i$. Then
$$\sqrt{n}(\tilde\alpha - \alpha) = \sqrt{n}(\hat\alpha - \alpha) + \left(n^{-1}Z'Z\right)^{-1}n^{-1/2}Z'\phi \to_d \mathrm{N}(0, V_\alpha). \qquad (9.4)$$
Thus the fact that $\eta_i$ is replaced with $\hat\eta_i$ is asymptotically irrelevant. We call (9.3) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$. We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma^2_i = z_i'\alpha$ by
$$\tilde\sigma^2_i = \tilde\alpha'z_i. \qquad (9.5)$$
Suppose that $\tilde\sigma^2_i > 0$ for all $i$. Then set
$$\tilde D = \mathrm{diag}\{\tilde\sigma^2_1, \ldots, \tilde\sigma^2_n\}$$
and
$$\tilde\beta = \left(X'\tilde D^{-1}X\right)^{-1}X'\tilde D^{-1}y.$$
This is the feasible GLS, or FGLS, estimator of $\beta$. Since there is not a unique specification for the conditional variance the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.
One typical problem with implementation of FGLS estimation is that in the linear specification (9.1), there is no guarantee that $\tilde\sigma^2_i > 0$ for all $i$. If $\tilde\sigma^2_i < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde\sigma^2_i \approx 0$ for some $i$ then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule takes the form
$$\bar\sigma^2_i = \max\left[\tilde\sigma^2_i,\ c\hat\sigma^2\right]$$
for some $c > 0$. For example, setting $c = 1/4$ means that the conditional variance function is constrained to exceed one-fourth of the unconditional variance. As there is no clear method to select $c$, this introduces a degree of arbitrariness. In this context it is useful to re-estimate the model with several choices for the trimming parameter. If the estimates turn out to be sensitive to its choice, the estimation method should probably be reconsidered.
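To fix ideas, here is a compact sketch (Python) of the full FGLS recipe: OLS residuals, the skedastic regression of $\hat e_i^2$ on $z_i$, the trimming rule, and the weighted least-squares step. The data-generating process and the choice of $z_i$ are illustrative assumptions.

```python
# Sketch of feasible GLS: OLS residuals, skedastic regression of e^2 on z,
# trimming of the fitted variances, then weighted least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sd = np.sqrt(0.5 + 0.5 * X[:, 1] ** 2)                # true conditional std. dev.
y = X @ np.array([1.0, 2.0]) + sd * rng.normal(size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b_ols

Z = np.column_stack([np.ones(n), X[:, 1] ** 2])        # skedastic regressors z_i
alpha = np.linalg.solve(Z.T @ Z, Z.T @ e ** 2)         # skedastic regression (9.3)
s2 = Z @ alpha                                         # fitted variances (9.5)
s2 = np.maximum(s2, 0.25 * np.mean(e ** 2))            # trimming rule with c = 1/4

w = 1.0 / s2
b_fgls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(b_ols, b_fgls)
```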
It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS. As the proof is tricky, we just state the result without proof.

Theorem 9.1.1 If the skedastic regression is correctly specified,
$$\sqrt{n}\left(\tilde\beta_{GLS} - \tilde\beta_{FGLS}\right) \to_p 0,$$
and thus
$$\sqrt{n}\left(\tilde\beta_{FGLS} - \beta\right) \to_d \mathrm{N}(0, V_\beta),$$
where
$$V_\beta = \left(\mathrm{E}\left(\sigma_i^{-2}x_ix_i'\right)\right)^{-1}.$$

Examining the asymptotic distribution of Theorem 9.1.1, the natural estimator of the asymptotic variance of $\tilde\beta$ is
$$\tilde V^0_\beta = \left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-2}x_ix_i'\right)^{-1} = \left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1},$$
which is consistent for $V_\beta$ as $n \to \infty$. This estimator $\tilde V^0_\beta$ is appropriate when the skedastic regression (9.1) is correctly specified.
It may be the case that $\alpha'z_i$ is only an approximation to the true conditional variance $\sigma^2_i = \mathrm{E}(e_i^2 \mid x_i)$. In this case we interpret $\alpha'z_i$ as a linear projection of $e_i^2$ on $z_i$. $\tilde\beta$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 9.1.1. Instead,
$$V_\beta = \left(\mathrm{E}\left(\left(\alpha'z_i\right)^{-1}x_ix_i'\right)\right)^{-1}\left(\mathrm{E}\left(\left(\alpha'z_i\right)^{-2}\sigma^2_ix_ix_i'\right)\right)\left(\mathrm{E}\left(\left(\alpha'z_i\right)^{-1}x_ix_i'\right)\right)^{-1}.$$
$V_\beta$ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless $\sigma^2_i = \alpha'z_i$, $\tilde V^0_\beta$ is inconsistent for $V_\beta$.
An appropriate solution is to use a White-type estimator in place of $\tilde V^0_\beta$. This may be written as
$$\tilde V_\beta = \left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-2}x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-4}\hat e_i^2x_ix_i'\right)\left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-2}x_ix_i'\right)^{-1}$$
$$= \left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1}\left(\frac{1}{n}X'\tilde D^{-1}\hat D\tilde D^{-1}X\right)\left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1}$$
where $\hat D = \mathrm{diag}\{\hat e_1^2, \ldots, \hat e_n^2\}$. This estimator is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on specification and estimation of the skedastic regression. Since the form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated conditional variances may contain more noise than information about the true conditional variances. In this case, FGLS can do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, and probably most importantly, OLS is a robust estimator of the parameter vector. It is consistent not only in the regression model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection and not a conditional mean, then the OLS and FGLS estimators will converge in probability to different limits as they will be estimating two different projections. The FGLS probability limit will depend on the particular function selected for the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger assumption of a correct conditional mean, and the cost is a loss of robustness to misspecification.
9.2 Testing for Heteroskedasticity

The hypothesis of homoskedasticity is that $\mathrm{E}\left(e_i^2 \mid x_i\right) = \sigma^2$, or equivalently that
$$H_0 : \alpha_1 = 0$$
in the regression (9.1). We may therefore test this hypothesis by the estimation (9.3) and constructing a Wald statistic. In the classic literature it is typical to impose the stronger assumption that $e_i$ is independent of $x_i$, in which case $\xi_i$ is independent of $x_i$ and the asymptotic variance (9.2) for $\tilde\alpha$ simplifies to
$$V_\alpha = \left(\mathrm{E}\left(z_iz_i'\right)\right)^{-1}\mathrm{E}\left(\xi_i^2\right). \qquad (9.6)$$
Hence the standard test of $H_0$ is a classic $F$ (or Wald) test for exclusion of all regressors from the skedastic regression (9.3). The asymptotic distribution (9.4) and the asymptotic variance (9.6) under independence show that this test has an asymptotic chi-square distribution.

Theorem 9.2.1 Under $H_0$ and $e_i$ independent of $x_i$, the Wald test of $H_0$ is asymptotically $\chi^2_q$.

Most tests for heteroskedasticity take this basic form. The main differences between popular tests are which transformations of $x_i$ enter $z_i$. Motivated by the form of the asymptotic variance of the OLS estimator $\hat\beta$, White (1980) proposed that the test for heteroskedasticity be based on setting $z_i$ to equal all non-redundant elements of $x_i$, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of $z_i$, and replaced $\mathrm{E}\left(\xi_i^2\right)$ with $2\sigma^4$ which holds when $e_i$ is $\mathrm{N}\left(0, \sigma^2\right)$. If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.
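A minimal sketch (Python) of such a test: regress the squared OLS residuals on a constant plus a chosen $z_i$ (here a regressor and its square, an illustrative choice) and form the Wald statistic for excluding everything but the intercept, using the classical covariance that corresponds to the independence simplification.

```python
# Sketch of a White/Breusch-Pagan-type test: Wald statistic for alpha_1 = 0
# in the skedastic regression of squared residuals on z_i.
import numpy as np
from scipy import stats

def hetero_wald(x1, e):
    """x1: a regressor (1-d array); e: OLS residuals from the mean regression."""
    n = len(e)
    Z = np.column_stack([np.ones(n), x1, x1 ** 2])     # illustrative z_i
    eta = e ** 2
    a = np.linalg.solve(Z.T @ Z, Z.T @ eta)
    xi = eta - Z @ a
    V = np.linalg.inv(Z.T @ Z) * np.mean(xi ** 2)      # classical covariance of a-hat
    q = Z.shape[1] - 1
    W = a[1:] @ np.linalg.solve(V[1:, 1:], a[1:])
    return W, stats.chi2.sf(W, q)

# toy usage on simulated heteroskedastic residuals (hypothetical design)
rng = np.random.default_rng(2)
x1 = rng.normal(size=400)
e = rng.normal(size=400) * np.sqrt(0.5 + x1 ** 2)
print(hetero_wald(x1, e))
```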
It is important not to misuse tests for heteroskedasticity. They should not be used to determine whether to estimate a regression equation by OLS or FGLS, nor to determine whether classic or White standard errors should be reported. Hypothesis tests are not designed for these purposes. Rather, tests for heteroskedasticity should be used to answer the scientific question of whether or not the conditional variance is a function of the regressors. If this question is not of economic interest, then there is no value in conducting a test for heteroskedasticity.
9.3 NonLinear Least Squares

In some cases we might use a parametric regression function $m(x, \theta) = \mathrm{E}(y_i \mid x_i = x)$ which is a non-linear function of the parameters $\theta$. We describe this setting as non-linear regression. Examples of nonlinear regression functions include
$$m(x, \theta) = \theta_1 + \theta_2\frac{x}{1 + \theta_3 x}$$
$$m(x, \theta) = \theta_1 + \theta_2 x^{\theta_3}$$
$$m(x, \theta) = \theta_1 + \theta_2\exp(\theta_3 x)$$
$$m(x, \theta) = G(x'\theta),\quad G \text{ known}$$
$$m(x, \theta) = \theta_1'x_1 + \left(\theta_2'x_1\right)\Phi\left(\frac{x_2 - \theta_3}{\theta_4}\right)$$
$$m(x, \theta) = \theta_1 + \theta_2 x + \theta_3(x - \theta_4)1(x > \theta_4)$$
$$m(x, \theta) = \left(\theta_1'x_1\right)1(x_2 < \theta_3) + \left(\theta_2'x_1\right)1(x_2 > \theta_3)$$
In the first five examples, $m(x, \theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_4$ and $\theta_3$ which alters some of the analysis. When it exists, let
$$m_\theta(x, \theta) = \frac{\partial}{\partial\theta}m(x, \theta).$$
Nonlinear regression is sometimes adopted because the functional form $m(x, \theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.
The least squares estimator $\hat\theta$ minimizes the normalized sum-of-squared-errors
$$S_n(\theta) = \frac{1}{n}\sum_{i=1}^n\left(y_i - m(x_i, \theta)\right)^2.$$
When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are $\hat e_i = y_i - m\left(x_i, \hat\theta\right)$.
One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta\mathrm{E}\left(y_i - m(x_i, \theta)\right)^2$.
Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat\theta$ must be found by numerical methods. See Appendix E. When $m(x, \theta)$ is differentiable, then the FOC for minimization are
$$0 = \sum_{i=1}^n m_\theta\left(x_i, \hat\theta\right)\hat e_i. \qquad (9.7)$$

Theorem 9.3.1 Asymptotic Distribution of NLLS Estimator
If the model is identified and $m(x, \theta)$ is differentiable with respect to $\theta$,
$$\sqrt{n}\left(\hat\theta - \theta_0\right) \to_d \mathrm{N}(0, V_\theta)$$
$$V_\theta = \left(\mathrm{E}\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}\left(\mathrm{E}\left(m_{\theta i}m_{\theta i}'e_i^2\right)\right)\left(\mathrm{E}\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}$$
where $m_{\theta i} = m_\theta(x_i, \theta_0)$.

Based on Theorem 9.3.1, an estimate of the asymptotic variance $V_\theta$ is
$$\hat V_\theta = \left(\frac{1}{n}\sum_{i=1}^n\hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n\hat m_{\theta i}\hat m_{\theta i}'\hat e_i^2\right)\left(\frac{1}{n}\sum_{i=1}^n\hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}$$
where $\hat m_{\theta i} = m_\theta(x_i, \hat\theta)$ and $\hat e_i = y_i - m(x_i, \hat\theta)$.
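As an illustration, the following sketch (Python with scipy) estimates the exponential specification $m(x, \theta) = \theta_1 + \theta_2\exp(\theta_3 x)$ by numerical least squares and forms the variance estimate above from the Jacobian of the residual vector; the data-generating process is an assumption chosen for the example.

```python
# Sketch: NLLS for m(x, theta) = t1 + t2*exp(t3*x) with the sandwich variance
# of Theorem 9.3.1 built from the Jacobian at the solution.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0.0, 2.0, size=n)
y = 1.0 + 0.5 * np.exp(1.5 * x) + 0.2 * rng.normal(size=n)

def resid(t):
    return y - (t[0] + t[1] * np.exp(t[2] * x))

fit = least_squares(resid, x0=np.array([0.0, 1.0, 1.0]))
e_hat = resid(fit.x)
M = -fit.jac                                     # rows are m_theta(x_i, theta_hat)
Q = M.T @ M / n
Omega = (M * e_hat[:, None] ** 2).T @ M / n
V = np.linalg.inv(Q) @ Omega @ np.linalg.inv(Q)  # asy. variance of sqrt(n)(theta_hat - theta0)
se = np.sqrt(np.diag(V) / n)
print(fit.x, se)
```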
Identification is often tricky in nonlinear regression models. Suppose that
$$m(x_i, \theta) = \beta_1'z_i + \beta_2'x_i(\gamma)$$
where $x_i(\gamma)$ is a function of $x_i$ and the unknown parameter $\gamma$. Examples include $x_i(\gamma) = x_i^\gamma$, $x_i(\gamma) = \exp(\gamma x_i)$, and $x_i(\gamma) = x_i1(g(x_i) > \gamma)$. The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
$$H_0 : \beta_2 = 0.$$
However, under $H_0$, the model is
$$y_i = \beta_1'z_i + e_i$$
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions.
The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. E. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.

Proof of Theorem 9.3.1 (Sketch). NLLS estimation falls in the class of optimization estimators. For this theory, it is useful to denote the true value of the parameter $\theta$ as $\theta_0$.
The first step is to show that $\hat\theta \to_p \theta_0$. Proving that nonlinear estimators are consistent is more challenging than for linear estimators. We sketch the main argument. The idea is that $\hat\theta$ minimizes the sample criterion function $S_n(\theta)$, which (for any $\theta$) converges in probability to the mean-squared error function $\mathrm{E}\left(y_i - m(x_i, \theta)\right)^2$. Thus it seems reasonable that the minimizer $\hat\theta$ will converge in probability to $\theta_0$, the minimizer of $\mathrm{E}\left(y_i - m(x_i, \theta)\right)^2$. It turns out that to show this rigorously, we need to show that $S_n(\theta)$ converges uniformly to its expectation $\mathrm{E}\left(y_i - m(x_i, \theta)\right)^2$, which means that the maximum discrepancy must converge in probability to zero, to exclude the possibility that $S_n(\theta)$ is excessively wiggly in $\theta$. Proving uniform convergence is technically challenging, but it can be shown to hold broadly for relevant nonlinear regression models, especially if the regression function $m(x_i, \theta)$ is differentiable in $\theta$. For a complete treatment of the theory of optimization estimators see Newey and McFadden (1994).
Since $\hat\theta \to_p \theta_0$, $\hat\theta$ is close to $\theta_0$ for $n$ large, so the minimization of $S_n(\theta)$ only needs to be examined for $\theta$ close to $\theta_0$. Let
$$y_i^0 = e_i + m_{\theta i}'\theta_0.$$
For $\theta$ close to the true value $\theta_0$, by a first-order Taylor series approximation,
$$m(x_i, \theta) \simeq m(x_i, \theta_0) + m_{\theta i}'(\theta - \theta_0).$$
Thus
$$y_i - m(x_i, \theta) \simeq (e_i + m(x_i, \theta_0)) - \left(m(x_i, \theta_0) + m_{\theta i}'(\theta - \theta_0)\right) = e_i - m_{\theta i}'(\theta - \theta_0) = y_i^0 - m_{\theta i}'\theta.$$
Hence the sum of squared errors function is
$$S_n(\theta) = \sum_{i=1}^n\left(y_i - m(x_i, \theta)\right)^2 \simeq \sum_{i=1}^n\left(y_i^0 - m_{\theta i}'\theta\right)^2$$
and the right-hand-side is the SSE function for a linear regression of $y_i^0$ on $m_{\theta i}$. Thus the NLLS estimator $\hat\theta$ has the same asymptotic distribution as the (infeasible) OLS regression of $y_i^0$ on $m_{\theta i}$, which is that stated in the theorem.
9.4 Testing for Omitted NonLinearity

If the goal is to estimate the conditional expectation $\mathrm{E}(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.
One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat\beta + \hat e_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde e_i$ by OLS, and form a Wald statistic for $\gamma = 0$.
Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
$$y_i = x_i'\beta + e_i$$
which is estimated by OLS, yielding predicted values $\hat y_i = x_i'\hat\beta$. Now let
$$z_i = \begin{pmatrix}\hat y_i^2 \\ \vdots \\ \hat y_i^m\end{pmatrix}$$
be a $(m-1)$-vector of powers of $\hat y_i$. Then run the auxiliary regression
$$y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde e_i \qquad (9.8)$$
by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. It is easy (although somewhat tedious) to show that under the null hypothesis, $W_n \to_d \chi^2_{m-1}$. Thus the null is rejected at the $\alpha$% level if $W_n$ exceeds the upper $\alpha$% tail critical value of the $\chi^2_{m-1}$ distribution.
To implement the test, $m$ must be selected in advance. Typically, small values such as $m = 2$, 3, or 4 seem to work best.
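Here is a minimal sketch of the RESET test with $m = 3$ (Python). A classical homoskedastic covariance is used in the Wald statistic for simplicity; a heteroskedasticity-robust version could be substituted. The data-generating process in the usage example is hypothetical.

```python
# Sketch of the RESET test: augment the regression with yhat^2, ..., yhat^m
# and test the added coefficients with a Wald statistic.
import numpy as np
from scipy import stats

def reset_test(X, y, m=3):
    n, k = X.shape
    b = np.linalg.solve(X.T @ X, X.T @ y)
    yhat = X @ b
    Z = np.column_stack([yhat ** p for p in range(2, m + 1)])   # (m-1) powers
    Xa = np.column_stack([X, Z])
    ba = np.linalg.solve(Xa.T @ Xa, Xa.T @ y)
    ea = y - Xa @ ba
    V = np.linalg.inv(Xa.T @ Xa) * (ea @ ea) / (n - Xa.shape[1])
    g = ba[k:]
    W = g @ np.linalg.solve(V[k:, k:], g)
    return W, stats.chi2.sf(W, m - 1)

# toy usage: single-index DGP where the linear model is misspecified
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = np.exp(0.5 * X[:, 1]) + rng.normal(size=500)
print(reset_test(X, y))
```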
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
$$y_i = G(x_i'\beta) + e_i$$
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (9.8) may be written as
$$y_i = x_i'\tilde\beta + \left(x_i'\hat\beta\right)^2\tilde\gamma_1 + \left(x_i'\hat\beta\right)^3\tilde\gamma_2 + \cdots + \left(x_i'\hat\beta\right)^m\tilde\gamma_{m-1} + \tilde e_i$$
which has essentially approximated $G(\cdot)$ by an $m$'th order polynomial.
Exercises

Exercise 9.1 Suppose that $y_i = g(x_i, \theta) + e_i$ with $\mathrm{E}(e_i \mid x_i) = 0$, $\hat\theta$ is the NLLS estimator, and $\hat V$ is the estimate of $\mathrm{var}(\hat\theta)$. You are interested in the conditional mean function $\mathrm{E}(y_i \mid x_i = x) = g(x)$ at some $x$. Find an asymptotic 95% confidence interval for $g(x)$.

Exercise 9.2 In Exercise 8.4, you estimated a cost function on a cross-section of electric companies. The equation you estimated was
$$\log TC_i = \beta_1 + \beta_2\log Q_i + \beta_3\log PL_i + \beta_4\log PF_i + \beta_5\log PK_i + e_i. \qquad (9.9)$$
(a) Following Nerlove, add the variable $(\log Q_i)^2$ to the regression. Do so. Assess the merits of this new specification using a hypothesis test. Do you agree with this modification?
(b) Now try a non-linear specification. Consider model (9.9) plus the extra term $\beta_6 z_i$, where
$$z_i = \log Q_i\left(1 + \exp\left(-(\log Q_i - \beta_7)\right)\right)^{-1}.$$
In addition, impose the restriction $\beta_3 + \beta_4 + \beta_5 = 1$. This model is called a smooth threshold model. For values of $\log Q_i$ much below $\beta_7$, the variable $\log Q_i$ has a regression slope of $\beta_2$. For values much above $\beta_7$, the regression slope is $\beta_2 + \beta_6$, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter $\beta_7$.
The model works best when $\beta_7$ is selected so that several values (in this example, at least 10 to 15) of $\log Q_i$ are both below and above $\beta_7$. Examine the data and pick an appropriate range for $\beta_7$.
(c) Estimate the model by non-linear least squares. I recommend the concentration method: Pick 10 (or more if you like) values of $\beta_7$ in this range. For each value of $\beta_7$, calculate $z_i$ and estimate the model by OLS. Record the sum of squared errors, and find the value of $\beta_7$ for which the sum of squared errors is minimized.
(d) Calculate standard errors for all the parameters $(\beta_1, \ldots, \beta_7)$.

Exercise 9.3 The data file cps78.dat contains 550 observations on 20 variables taken from the May 1978 current population survey. Variables are listed in the file cps78.pdf. The goal of the exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the conditioning variables.
(a) Start by an OLS regression of LNWAGE on the other variables. Report coefficient estimates and standard errors.
(b) Consider augmenting the model by squares and/or cross-products of the conditioning variables. Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may re-estimate the model without these variables, if desired.
(d) Test whether the error variance is different for men and women. Interpret.
(e) Test whether the error variance is different for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general heteroskedasticity and report the results.
(g) Using this model for the conditional variance, re-estimate the model from part (c) using FGLS. Report the results.
(h) Do the OLS and FGLS estimates differ greatly? Note any interesting differences.
(i) Compare the estimated standard errors. Note any interesting differences.
Chapter 10

The Bootstrap

10.1 Definition of the Bootstrap

Let $F$ denote a distribution function for the population of observations $(y_i, x_i)$. Let
$$T_n = T_n\left((y_1, x_1), \ldots, (y_n, x_n), F\right)$$
be a statistic of interest, for example an estimator $\hat\theta$ or a t-statistic $\left(\hat\theta - \theta\right)/s(\hat\theta)$. Note that we write $T_n$ as possibly a function of $F$. For example, the t-statistic is a function of the parameter $\theta$ which itself is a function of $F$.
The exact CDF of $T_n$ when the data are sampled from the distribution $F$ is
$$G_n(u, F) = \Pr(T_n \leq u \mid F).$$
In general, $G_n(u, F)$ depends on $F$, meaning that $G$ changes as $F$ changes.
Ideally, inference would be based on $G_n(u, F)$. This is generally impossible since $F$ is unknown. Asymptotic inference is based on approximating $G_n(u, F)$ with $G(u, F) = \lim_{n\to\infty}G_n(u, F)$. When $G(u, F) = G(u)$ does not depend on $F$, we say that $T_n$ is asymptotically pivotal and use the distribution function $G(u)$ for inferential purposes.
In a seminal contribution, Efron (1979) proposed the bootstrap, which makes a different approximation. The unknown $F$ is replaced by a consistent estimate $F_n$ (one choice is discussed in the next section). Plugged into $G_n(u, F)$ we obtain
$$G_n^*(u) = G_n(u, F_n). \qquad (10.1)$$
We call $G_n^*$ the bootstrap distribution. Bootstrap inference is based on $G_n^*(u)$.
Let $(y_i^*, x_i^*)$ denote random variables with the distribution $F_n$. A random sample from this distribution is called the bootstrap data. The statistic $T_n^* = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n\right)$ constructed on this sample is a random variable with distribution $G_n^*$. That is, $\Pr(T_n^* \leq u) = G_n^*(u)$. We call $T_n^*$ the bootstrap statistic. The distribution of $T_n^*$ is identical to that of $T_n$ when the true CDF is $F_n$ rather than $F$.
The bootstrap distribution is itself random, as it depends on the sample through the estimator $F_n$.
In the next sections we describe computation of the bootstrap distribution.
10.2 The Empirical Distribution Function

Recall that $F(y, x) = \Pr(y_i \leq y, x_i \leq x) = \mathrm{E}\left(1(y_i \leq y)1(x_i \leq x)\right)$, where $1(\cdot)$ is the indicator function. This is a population moment. The method of moments estimator is the corresponding sample moment:
$$F_n(y, x) = \frac{1}{n}\sum_{i=1}^n 1(y_i \leq y)1(x_i \leq x). \qquad (10.2)$$
$F_n(y, x)$ is called the empirical distribution function (EDF). $F_n$ is a nonparametric estimate of $F$. Note that while $F$ may be either discrete or continuous, $F_n$ is by construction a step function.
The EDF is a consistent estimator of the CDF. To see this, note that for any $(y, x)$, $1(y_i \leq y)1(x_i \leq x)$ is an iid random variable with expectation $F(y, x)$. Thus by the WLLN (Theorem 5.4.2), $F_n(y, x) \to_p F(y, x)$. Furthermore, by the CLT (Theorem 5.7.1),
$$\sqrt{n}\left(F_n(y, x) - F(y, x)\right) \to_d \mathrm{N}\left(0, F(y, x)(1 - F(y, x))\right).$$
To see the effect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for random samples of size $n = 25$, 50, 100, and 500. The random draws are from the N(0, 1) distribution. For $n = 25$, the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the large $n$. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.

Figure 10.1: Empirical Distribution Functions

The EDF is a valid discrete probability distribution which puts probability mass $1/n$ at each pair $(y_i, x_i)$, $i = 1, \ldots, n$. Notationally, it is helpful to think of a random pair $(y_i^*, x_i^*)$ with the distribution $F_n$. That is,
$$\Pr(y_i^* \leq y, x_i^* \leq x) = F_n(y, x).$$
We can easily calculate the moments of functions of $(y_i^*, x_i^*)$:
$$\mathrm{E}\,h(y_i^*, x_i^*) = \int h(y, x)dF_n(y, x) = \sum_{i=1}^n h(y_i, x_i)\Pr(y_i^* = y_i, x_i^* = x_i) = \frac{1}{n}\sum_{i=1}^n h(y_i, x_i),$$
the empirical sample average.
10.3 Nonparametric Bootstrap

The nonparametric bootstrap is obtained when the bootstrap distribution (10.1) is defined using the EDF (10.2) as the estimate $F_n$ of $F$.
Since the EDF $F_n$ is a multinomial (with $n$ support points), in principle the distribution $G_n^*$ could be calculated by direct methods. However, as there are $\binom{2n-1}{n}$ possible samples $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$, such a calculation is computationally infeasible. The popular alternative is to use simulation to approximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following points of clarification:
• The sample size $n$ used for the simulation is the same as the sample size.
• The random vectors $(y_i^*, x_i^*)$ are drawn randomly from the empirical distribution. This is equivalent to sampling a pair $(y_i, x_i)$ randomly from the sample.
The bootstrap statistic $T_n^* = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n\right)$ is calculated for each bootstrap sample. This is repeated $B$ times. $B$ is known as the number of bootstrap replications. A theory for the determination of the number of bootstrap replications $B$ has been developed by Andrews and Buchinsky (2000). It is desirable for $B$ to be large, so long as the computational costs are reasonable. $B = 1000$ typically suffices.
When the statistic $T_n$ is a function of $F$, it is typically through dependence on a parameter. For example, the t-ratio $\left(\hat\theta - \theta\right)/s(\hat\theta)$ depends on $\theta$. As the bootstrap statistic replaces $F$ with $F_n$, it similarly replaces $\theta$ with $\theta_n$, the value of $\theta$ implied by $F_n$. Typically $\theta_n = \hat\theta$, the parameter estimate. (When in doubt use $\hat\theta$.)
Sampling from the EDF is particularly easy. Since $F_n$ is a discrete probability distribution putting probability mass $1/n$ at each sample point, sampling from the EDF is equivalent to random sampling a pair $(y_i, x_i)$ from the observed data with replacement. In consequence, a bootstrap sample $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$ will necessarily have some ties and multiple values, which is generally not a problem.
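The resampling algorithm is only a few lines of code. The sketch below (Python) draws $B$ bootstrap samples by sampling row indices with replacement and recomputes the statistic on each; the statistic used here, an OLS slope on simulated data, is an illustrative choice.

```python
# Sketch of the nonparametric bootstrap: resample (y_i, x_i) pairs with
# replacement and store the B draws of the recomputed statistic.
import numpy as np

rng = np.random.default_rng(5)
n, B = 200, 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

def slope(Xs, ys):
    return np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)[1]

theta_hat = slope(X, y)
theta_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)        # draw n observations with replacement
    theta_star[b] = slope(X[idx], y[idx])
```

The stored draws $\hat\theta^*_b$ are the raw material for the bias, variance, and interval estimates discussed in the sections that follow.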
10.4 Bootstrap Estimation of Bias and Variance

The bias of $\hat\theta$ is $\tau_n = \mathrm{E}(\hat\theta - \theta_0)$. Let $T_n(\theta) = \hat\theta - \theta$. Then $\tau_n = \mathrm{E}(T_n(\theta_0))$. The bootstrap counterparts are $\hat\theta^* = \hat\theta\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\right)$ and $T_n^* = \hat\theta^* - \theta_n = \hat\theta^* - \hat\theta$. The bootstrap estimate of $\tau_n$ is
$$\tau_n^* = \mathrm{E}(T_n^*).$$
If this is calculated by the simulation described in the previous section, the estimate of $\tau_n^*$ is
$$\hat\tau_n^* = \frac{1}{B}\sum_{b=1}^B T_{nb}^* = \frac{1}{B}\sum_{b=1}^B\hat\theta_b^* - \hat\theta = \bar\theta^* - \hat\theta,$$
where $\bar\theta^* = B^{-1}\sum_{b=1}^B\hat\theta_b^*$ denotes the average of the bootstrap draws.
If $\hat\theta$ is biased, it might be desirable to construct a bias-corrected estimator (one with reduced bias). Ideally, this would be
$$\tilde\theta = \hat\theta - \tau_n,$$
but $\tau_n$ is unknown. The (estimated) bootstrap bias-corrected estimator is
$$\tilde\theta^* = \hat\theta - \hat\tau_n^* = \hat\theta - (\bar\theta^* - \hat\theta) = 2\hat\theta - \bar\theta^*.$$
Note, in particular, that the bias-corrected estimator is not $\bar\theta^*$. Intuitively, the bootstrap makes the following experiment. Suppose that $\hat\theta$ is the truth. Then what is the average value of $\hat\theta$ calculated from such samples? The answer is $\bar\theta^*$. If this is lower than $\hat\theta$, this suggests that the estimator is downward-biased, so a bias-corrected estimator of $\theta$ should be larger than $\hat\theta$, and the best guess is the difference between $\hat\theta$ and $\bar\theta^*$. Similarly if $\bar\theta^*$ is higher than $\hat\theta$, then the estimator is upward-biased and the bias-corrected estimator should be lower than $\hat\theta$.
Let $T_n = \hat\theta$. The variance of $\hat\theta$ is
$$V_n = \mathrm{E}(T_n - \mathrm{E}T_n)^2.$$
Let $T_n^* = \hat\theta^*$. It has variance
$$V_n^* = \mathrm{E}(T_n^* - \mathrm{E}T_n^*)^2.$$
The simulation estimate is
$$\hat V_n^* = \frac{1}{B}\sum_{b=1}^B\left(\hat\theta_b^* - \bar\theta^*\right)^2.$$
A bootstrap standard error for $\hat\theta$ is the square root of the bootstrap estimate of variance,
$$s^*(\hat\theta) = \sqrt{\hat V_n^*}.$$
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspected. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.
10.5 Percentile Intervals
For a distribution function G
a
(n, 1), let ¡
a
(c, 1) denote its quantile function. This is the
function which solves
G
a

a
(c, 1), 1) = c.
[When G
a
(n, 1) is discrete, ¡
a
(c, 1) may be non-unique, but we will ignore such complications.]
Let ¡
a
(c) denote the quantile function of the true sampling distribution, and ¡
+
a
(c) = ¡
a
(c, 1
a
)
denote the quantile function of the bootstrap distribution. Note that this function will change
depending on the underlying statistic T
a
whose distribution is G
a
.
Let T
a
=
´
0, an estimate of a parameter of interest. In (1 ÷c)/ of samples,
´
0 lies in the region

a
(c,2), ¡
a
(1 ÷c,2)[. This motivates a con…dence interval proposed by Efron:
C
1
= [¡
+
a
(c,2), ¡
+
a
(1 ÷c,2)[.
This is often called the percentile con…dence interval.
Computationally, the quantile ¡
+
a
(c) is estimated by ´ ¡
+
a
(c), the c’th sample quantile of the
simulated statistics ¦T
+
a1
, ..., T
+
a1
¦, as discussed in the section on Monte Carlo simulation. The
(1 ÷c)/ Efron percentile interval is then [´ ¡
+
a
(c,2), ´ ¡
+
a
(1 ÷c,2)[.
CHAPTER 10. THE BOOTSTRAP 196
The interval $C_1$ is a popular bootstrap confidence interval often used in empirical practice. This is because it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap, and also has the feature that it is translation invariant. That is, if we define $\phi = f(\theta)$ as the parameter of interest for a monotonically increasing function $f$, then the percentile method applied to this problem will produce the confidence interval $[f(q_n^*(\alpha/2)),\ f(q_n^*(1-\alpha/2))]$, which is a naturally good property.

However, as we show now, $C_1$ is in a deep sense very poorly motivated.

It will be useful if we introduce an alternative definition of $C_1$. Let $T_n(\theta) = \hat\theta - \theta$ and let $q_n(\alpha)$ be the quantile function of its distribution. (These are the original quantiles, with $\theta$ subtracted.) Then $C_1$ can alternatively be written as
$$C_1 = \left[\hat\theta + q_n^*(\alpha/2),\ \hat\theta + q_n^*(1-\alpha/2)\right].$$
This is a bootstrap estimate of the "ideal" confidence interval
$$C_1^0 = \left[\hat\theta + q_n(\alpha/2),\ \hat\theta + q_n(1-\alpha/2)\right].$$
The latter has coverage probability
$$\begin{aligned}
\Pr\left(\theta_0 \in C_1^0\right) &= \Pr\left(\hat\theta + q_n(\alpha/2) \le \theta_0 \le \hat\theta + q_n(1-\alpha/2)\right)\\
&= \Pr\left(-q_n(1-\alpha/2) \le \hat\theta - \theta_0 \le -q_n(\alpha/2)\right)\\
&= G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1-\alpha/2), F_0\right)
\end{aligned}$$
which generally is not $1-\alpha$! There is one important exception. If $\hat\theta - \theta_0$ has a symmetric distribution about 0, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
$$\begin{aligned}
\Pr\left(\theta_0 \in C_1^0\right) &= G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1-\alpha/2), F_0\right)\\
&= \left(1 - G_n\left(q_n(\alpha/2), F_0\right)\right) - \left(1 - G_n\left(q_n(1-\alpha/2), F_0\right)\right)\\
&= \left(1 - \frac{\alpha}{2}\right) - \left(1 - \left(1 - \frac{\alpha}{2}\right)\right)\\
&= 1 - \alpha
\end{aligned}$$
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat\theta$ has a symmetric distribution about $\theta_0$.

When $\hat\theta$ does not have a symmetric distribution, $C_1$ may perform quite poorly.

However, by the translation invariance argument presented above, it also follows that if there exists some monotonically increasing transformation $f(\cdot)$ such that $f(\hat\theta)$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.

Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.

The problems with the percentile method can be circumvented, at least in principle, by an alternative method.

Let $T_n(\theta) = \hat\theta - \theta$. Then
$$\begin{aligned}
1 - \alpha &= \Pr\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1-\alpha/2)\right)\\
&= \Pr\left(\hat\theta - q_n(1-\alpha/2) \le \theta_0 \le \hat\theta - q_n(\alpha/2)\right),
\end{aligned}$$
so an exact $(1-\alpha)\%$ confidence interval for $\theta_0$ would be
$$C_2^0 = \left[\hat\theta - q_n(1-\alpha/2),\ \hat\theta - q_n(\alpha/2)\right].$$
This motivates a bootstrap analog
$$C_2 = \left[\hat\theta - q_n^*(1-\alpha/2),\ \hat\theta - q_n^*(\alpha/2)\right].$$
Notice that generally this is very different from the Efron interval $C_1$! They coincide in the special case that $G_n^*(u)$ is symmetric about $\hat\theta$, but otherwise they differ.

Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics $T_n^* = \hat\theta^* - \hat\theta$, which are centered at the sample estimate $\hat\theta$. These are sorted to yield the quantile estimates $\hat q_n^*(.025)$ and $\hat q_n^*(.975)$. The 95% confidence interval is then $\left[\hat\theta - \hat q_n^*(.975),\ \hat\theta - \hat q_n^*(.025)\right]$.

This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.
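For comparison with the Efron sketch above, a minimal illustration of computing $C_2$ from the same simulated draws; the names `theta_star` and `theta_hat` are illustrative, and the quantiles are taken of $\hat\theta^* - \hat\theta$, as in the definition above.

```python
import numpy as np

def alt_percentile(theta_star, theta_hat, alpha=0.05):
    """(1 - alpha) 'alternative' percentile interval C_2."""
    q_lo = np.quantile(theta_star - theta_hat, alpha / 2)
    q_hi = np.quantile(theta_star - theta_hat, 1 - alpha / 2)
    return theta_hat - q_hi, theta_hat - q_lo
```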
10.6 Percentile-t Equal-Tailed Interval

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta < \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $T_n(\theta_0) < c$, where $c$ would be selected so that
$$\Pr\left(T_n(\theta_0) < c\right) = \alpha.$$
Thus $c = q_n(\alpha)$. Since this is unknown, a bootstrap test replaces $q_n(\alpha)$ with the bootstrap estimate $q_n^*(\alpha)$, and the test rejects if $T_n(\theta_0) < q_n^*(\alpha)$.

Similarly, if the alternative is $H_1: \theta > \theta_0$, the bootstrap test rejects if $T_n(\theta_0) > q_n^*(1-\alpha)$.

Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics $T_n^* = \left(\hat\theta^* - \hat\theta\right)/s(\hat\theta^*)$. Note, and this is important, that the bootstrap test statistic is centered at the estimate $\hat\theta$, and the standard error $s(\hat\theta^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat q_n^*(\alpha)$ and/or $\hat q_n^*(1-\alpha)$.

Let $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$. Then taking the intersection of two one-sided intervals,
$$\begin{aligned}
1 - \alpha &= \Pr\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1-\alpha/2)\right)\\
&= \Pr\left(q_n(\alpha/2) \le \left(\hat\theta - \theta_0\right)/s(\hat\theta) \le q_n(1-\alpha/2)\right)\\
&= \Pr\left(\hat\theta - s(\hat\theta)\, q_n(1-\alpha/2) \le \theta_0 \le \hat\theta - s(\hat\theta)\, q_n(\alpha/2)\right),
\end{aligned}$$
so an exact $(1-\alpha)\%$ confidence interval for $\theta_0$ would be
$$C_3^0 = \left[\hat\theta - s(\hat\theta)\, q_n(1-\alpha/2),\ \hat\theta - s(\hat\theta)\, q_n(\alpha/2)\right].$$
This motivates a bootstrap analog
$$C_3 = \left[\hat\theta - s(\hat\theta)\, q_n^*(1-\alpha/2),\ \hat\theta - s(\hat\theta)\, q_n^*(\alpha/2)\right].$$
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that $\theta_0$ is below the left endpoint approximately equals the probability that $\theta_0$ is above the right endpoint, each $\alpha/2$.

Computationally, this is based on the critical values from the one-sided hypothesis tests discussed above.
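The following sketch illustrates the equal-tailed percentile-t interval in the simple case of a population mean, where the standard error on each bootstrap sample has a closed form; it is a minimal example under that assumption, not a general-purpose implementation.

```python
import numpy as np

def percentile_t_mean(y, alpha=0.05, B=1000, seed=0):
    """Equal-tailed percentile-t interval for a population mean (minimal sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta_hat = y.mean()
    se_hat = y.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(B)
    for b in range(B):
        yb = y[rng.integers(0, n, size=n)]
        se_b = yb.std(ddof=1) / np.sqrt(n)          # standard error on the bootstrap sample
        t_star[b] = (yb.mean() - theta_hat) / se_b  # centered at the sample estimate
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return theta_hat - se_hat * q_hi, theta_hat - se_hat * q_lo
```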
10.7 Symmetric Percentile-t Intervals

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta \ne \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $|T_n(\theta_0)| > c$, where $c$ would be selected so that
$$\Pr\left(|T_n(\theta_0)| > c\right) = \alpha.$$
Note that
$$\begin{aligned}
\Pr\left(|T_n(\theta_0)| < c\right) &= \Pr\left(-c < T_n(\theta_0) < c\right)\\
&= G_n(c) - G_n(-c)\\
&\equiv \overline{G}_n(c),
\end{aligned}$$
which is a symmetric distribution function. The ideal critical value $c = q_n(\alpha)$ solves the equation
$$\overline{G}_n\left(q_n(\alpha)\right) = 1 - \alpha.$$
Equivalently, $q_n(\alpha)$ is the $1-\alpha$ quantile of the distribution of $|T_n(\theta_0)|$.

The bootstrap estimate is $q_n^*(\alpha)$, the $1-\alpha$ quantile of the distribution of $|T_n^*|$, or the number which solves the equation
$$\overline{G}_n^*\left(q_n^*(\alpha)\right) = G_n^*\left(q_n^*(\alpha)\right) - G_n^*\left(-q_n^*(\alpha)\right) = 1 - \alpha.$$
Computationally, $q_n^*(\alpha)$ is estimated from a bootstrap simulation by sorting the bootstrap t-statistics $|T_n^*| = \left|\hat\theta^* - \hat\theta\right|/s(\hat\theta^*)$, and taking the upper $\alpha\%$ quantile. The bootstrap test rejects if $|T_n(\theta_0)| > q_n^*(\alpha)$.

Let
$$C_4 = \left[\hat\theta - s(\hat\theta)\, q_n^*(\alpha),\ \hat\theta + s(\hat\theta)\, q_n^*(\alpha)\right],$$
where $q_n^*(\alpha)$ is the bootstrap critical value for a two-sided hypothesis test. $C_4$ is called the symmetric percentile-t interval. It is designed to work well since
$$\begin{aligned}
\Pr\left(\theta_0 \in C_4\right) &= \Pr\left(\hat\theta - s(\hat\theta)\, q_n^*(\alpha) \le \theta_0 \le \hat\theta + s(\hat\theta)\, q_n^*(\alpha)\right)\\
&= \Pr\left(|T_n(\theta_0)| < q_n^*(\alpha)\right)\\
&\approx \Pr\left(|T_n(\theta_0)| < q_n(\alpha)\right)\\
&= 1 - \alpha.
\end{aligned}$$

If $\theta$ is a vector, then to test $H_0: \theta = \theta_0$ against $H_1: \theta \ne \theta_0$ at size $\alpha$, we would use a Wald statistic
$$W_n(\theta) = n\left(\hat\theta - \theta\right)'\widehat{V}_\theta^{-1}\left(\hat\theta - \theta\right)$$
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \ge q_n(\alpha)$, where $q_n(\alpha)$ is the $(1-\alpha)\%$ quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \ge q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $(1-\alpha)\%$ quantile of the distribution of
$$W_n^* = n\left(\hat\theta^* - \hat\theta\right)'\widehat{V}_\theta^{*-1}\left(\hat\theta^* - \hat\theta\right).$$
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\left(\hat\theta^* - \hat\theta\right)$, not $\left(\hat\theta^* - \theta_0\right)$. [This is a typical mistake made by practitioners.]
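A companion sketch for the symmetric percentile-t interval, again for a population mean so that the bootstrap standard errors are available in closed form (an illustrative assumption):

```python
import numpy as np

def symmetric_percentile_t_mean(y, alpha=0.05, B=1000, seed=0):
    """Symmetric percentile-t interval C_4 for a population mean (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta_hat, se_hat = y.mean(), y.std(ddof=1) / np.sqrt(n)
    abs_t = np.empty(B)
    for b in range(B):
        yb = y[rng.integers(0, n, size=n)]
        abs_t[b] = abs(yb.mean() - theta_hat) / (yb.std(ddof=1) / np.sqrt(n))
    q = np.quantile(abs_t, 1 - alpha)            # bootstrap critical value q*_n(alpha)
    return theta_hat - se_hat * q, theta_hat + se_hat * q
```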
10.8 Asymptotic Expansions

Let $T_n \in \mathbb{R}$ be a statistic such that
$$T_n \xrightarrow{d} N(0, \sigma^2). \qquad (10.3)$$
In some cases, such as when $T_n$ is a t-ratio, then $\sigma^2 = 1$. In other cases $\sigma^2$ is unknown. Equivalently, writing $T_n \sim G_n(u, F)$ then for each $u$ and $F$
$$\lim_{n\to\infty} G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right),$$
or
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + o(1). \qquad (10.4)$$
While (10.4) says that $G_n$ converges to $\Phi\left(\frac{u}{\sigma}\right)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.

The following notation will be helpful. Let $a_n$ be a sequence.

Definition 10.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.

Definition 10.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.

Definition 10.8.3 $a_n = o(n^{-r})$ if $n^{r}|a_n| \to 0$ as $n \to \infty$.

Basically, $a_n = O(n^{-r})$ if it declines to zero like $n^{-r}$.

We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.

Theorem 10.8.1 Under regularity conditions and (10.3),
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) + O(n^{-3/2})$$
uniformly over $u$, where $g_1$ is an even function of $u$, and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.

The expansion in Theorem 10.8.1 is often called an Edgeworth expansion.

We can interpret Theorem 10.8.1 as follows. First, $G_n(u, F)$ converges to the normal limit at rate $n^{1/2}$. To a second order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2} g_1(u, F).$$
Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{\sigma}\right) + n^{-1/2} g_1(u, F) + n^{-1} g_2(u, F)$$
which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).

[Side Note: When $T_n = \sqrt{n}\left(\overline{X}_n - \mu\right)/\sigma$, a standardized sample mean, then
$$\begin{aligned}
g_1(u) &= -\frac{1}{6}\kappa_3\left(u^2 - 1\right)\phi(u)\\
g_2(u) &= -\left(\frac{1}{24}\kappa_4\left(u^3 - 3u\right) + \frac{1}{72}\kappa_3^2\left(u^5 - 10u^3 + 15u\right)\right)\phi(u)
\end{aligned}$$
where $\phi(u)$ is the standard normal pdf, and
$$\kappa_3 = E(X - \mu)^3/\sigma^3, \qquad \kappa_4 = E(X - \mu)^4/\sigma^4 - 3,$$
the standardized skewness and excess kurtosis of the distribution of $X$. Note that when $\kappa_3 = 0$ and $\kappa_4 = 0$, then $g_1 = 0$ and $g_2 = 0$, so the second-order Edgeworth expansion corresponds to the normal distribution.]

Francis Edgeworth
Francis Edgeworth
Francis Ysidro Edgeworth (1845-1926) of Ireland, founding editor of the Economic Journal, was a profound economic and statistical theorist, developing the theories of indifference curves and asymptotic expansions. He also could be viewed as the first econometrician due to his early use of mathematical statistics in the study of economic data.
10.9 One-Sided Tests

Using the expansion of Theorem 10.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio $T_n$. An asymptotic test is based on $\Phi(u)$.

To the second order, the exact distribution is
$$\Pr\left(T_n < u\right) = G_n(u, F_0) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1})$$
since $\sigma = 1$. The difference is
$$\begin{aligned}
\Phi(u) - G_n(u, F_0) &= -\frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1})\\
&= O(n^{-1/2}),
\end{aligned}$$
so the order of the error is $O(n^{-1/2})$.

A bootstrap test is based on $G_n^*(u)$, which from Theorem 10.8.1 has the expansion
$$G_n^*(u) = G_n(u, F_n) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_n) + O(n^{-1}).$$
Because $\Phi(u)$ appears in both expansions, the difference between the bootstrap distribution and the true distribution is
$$G_n^*(u) - G_n(u, F_0) = \frac{1}{n^{1/2}}\left(g_1(u, F_n) - g_1(u, F_0)\right) + O(n^{-1}).$$
Since $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_1$ is continuous with respect to $F$, the difference $\left(g_1(u, F_n) - g_1(u, F_0)\right)$ converges to 0 at rate $\sqrt{n}$. Heuristically,
$$\begin{aligned}
g_1(u, F_n) - g_1(u, F_0) &\approx \frac{\partial}{\partial F} g_1(u, F_0)\left(F_n - F_0\right)\\
&= O(n^{-1/2}).
\end{aligned}$$
The "derivative" $\frac{\partial}{\partial F} g_1(u, F)$ is only heuristic, as $F$ is a function. We conclude that
$$G_n^*(u) - G_n(u, F_0) = O(n^{-1}),$$
or
$$\Pr\left(T_n^* \le u\right) = \Pr\left(T_n \le u\right) + O(n^{-1}),$$
which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.
10.10 Symmetric Two-Sided Tests

If a random variable $y$ has distribution function $H(u) = \Pr(y \le u)$, then the random variable $|y|$ has distribution function
$$\overline{H}(u) = H(u) - H(-u)$$
since
$$\begin{aligned}
\Pr\left(|y| \le u\right) &= \Pr\left(-u \le y \le u\right)\\
&= \Pr\left(y \le u\right) - \Pr\left(y \le -u\right)\\
&= H(u) - H(-u).
\end{aligned}$$
For example, if $Z \sim N(0,1)$, then $|Z|$ has distribution function
$$\overline{\Phi}(u) = \Phi(u) - \Phi(-u) = 2\Phi(u) - 1.$$
Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function
$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F).$$
A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \xrightarrow{d} Z$, then $|T_n| \xrightarrow{d} |Z| \sim \overline{\Phi}$. Thus asymptotic critical values are taken from the $\overline{\Phi}$ distribution, and exact critical values are taken from the $\overline{G}_n(u, F_0)$ distribution. From Theorem 10.8.1, we can calculate that
$$\begin{aligned}
\overline{G}_n(u, F) &= G_n(u, F) - G_n(-u, F)\\
&= \left(\Phi(u) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F)\right) - \left(\Phi(-u) + \frac{1}{n^{1/2}} g_1(-u, F) + \frac{1}{n} g_2(-u, F)\right) + O(n^{-3/2})\\
&= \overline{\Phi}(u) + \frac{2}{n} g_2(u, F) + O(n^{-3/2}), \qquad (10.5)
\end{aligned}$$
where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is
$$\overline{\Phi}(u) - \overline{G}_n(u, F_0) = -\frac{2}{n} g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}).$$
The order of the error is $O(n^{-1})$.

Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.

Applying (10.5) to the bootstrap distribution, we find
$$\overline{G}_n^*(u) = \overline{G}_n(u, F_n) = \overline{\Phi}(u) + \frac{2}{n} g_2(u, F_n) + O(n^{-3/2}).$$
Thus the difference between the bootstrap and exact distributions is
$$\begin{aligned}
\overline{G}_n^*(u) - \overline{G}_n(u, F_0) &= \frac{2}{n}\left(g_2(u, F_n) - g_2(u, F_0)\right) + O(n^{-3/2})\\
&= O(n^{-3/2}),
\end{aligned}$$
the last equality because $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_2$ is continuous in $F$. Another way of writing this is
$$\Pr\left(|T_n^*| < u\right) = \Pr\left(|T_n| < u\right) + O(n^{-3/2})$$
so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than the one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.

The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.
10.11 Percentile Confidence Intervals

To evaluate the coverage rate of the percentile interval, set $T_n = \sqrt{n}\left(\hat\theta - \theta_0\right)$. We know that $T_n \xrightarrow{d} N(0, V)$, which is not pivotal, as it depends on the unknown $V$. Theorem 10.8.1 shows that a first-order approximation is
$$G_n(u, F) = \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2}),$$
where $\sigma = \sqrt{V}$, and for the bootstrap
$$G_n^*(u) = G_n(u, F_n) = \Phi\left(\frac{u}{\hat\sigma}\right) + O(n^{-1/2}),$$
where $\hat\sigma = \sqrt{V(F_n)}$ is the bootstrap estimate of $\sigma$. The difference is
$$\begin{aligned}
G_n^*(u) - G_n(u, F_0) &= \Phi\left(\frac{u}{\hat\sigma}\right) - \Phi\left(\frac{u}{\sigma}\right) + O(n^{-1/2})\\
&= -\phi\left(\frac{u}{\sigma}\right)\frac{u}{\sigma^2}\left(\hat\sigma - \sigma\right) + O(n^{-1/2})\\
&= O(n^{-1/2}).
\end{aligned}$$
Hence the order of the error is $O(n^{-1/2})$.

The good news is that the percentile-type methods (if appropriately used) can yield $\sqrt{n}$-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.

The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.

Based on these arguments, the theoretical literature (e.g. Hall, 1992, Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.
10.12 Bootstrap Methods for Regression Models

The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss bootstrap methods appropriate for the linear regression model
$$y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0.$$
The non-parametric bootstrap resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies
$$y_i^* = x_i^{*\prime}\hat\beta + e_i^*, \qquad E(x_i^* e_i^*) = 0,$$
but generally
$$E(e_i^* \mid x_i^*) \ne 0.$$
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true.)

One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.

To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat\beta + e_i^*$. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat e_1, \ldots, \hat e_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat\sigma^2)$.

For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, \ldots, x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really "fixed" or random.

The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that
$$\begin{aligned}
E(e_i^* \mid x_i) &= 0\\
E(e_i^{*2} \mid x_i) &= \hat e_i^2\\
E(e_i^{*3} \mid x_i) &= \hat e_i^3.
\end{aligned}$$
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
$$\begin{aligned}
\Pr\left(e_i^* = \left(\frac{1+\sqrt{5}}{2}\right)\hat e_i\right) &= \frac{\sqrt{5}-1}{2\sqrt{5}}\\
\Pr\left(e_i^* = \left(\frac{1-\sqrt{5}}{2}\right)\hat e_i\right) &= \frac{\sqrt{5}+1}{2\sqrt{5}}
\end{aligned}$$
For each $x_i$, you sample $e_i^*$ using this two-point distribution.
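A minimal sketch of generating wild-bootstrap errors with this two-point distribution is given below; the function names are hypothetical, and the regressors are held fixed at their sample values, which is one of the options discussed above.

```python
import numpy as np

def wild_bootstrap_errors(ehat, rng):
    """Draw wild-bootstrap errors from the two-point distribution applied to OLS residuals."""
    s5 = np.sqrt(5.0)
    a, b = (1 + s5) / 2, (1 - s5) / 2          # the two support points (times ehat)
    p_a = (s5 - 1) / (2 * s5)                  # Pr(e* = a * ehat)
    u = rng.random(ehat.shape[0])
    return np.where(u < p_a, a * ehat, b * ehat)

def wild_bootstrap_sample(X, bhat, ehat, rng):
    """One wild-bootstrap sample: y* = X bhat + e*, regressors held fixed."""
    return X @ bhat + wild_bootstrap_errors(ehat, rng)
```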
Exercises

Exercise 10.1 Let $F_n(x)$ denote the EDF of a random sample. Show that
$$\sqrt{n}\left(F_n(x) - F_0(x)\right) \xrightarrow{d} N\left(0, F_0(x)\left(1 - F_0(x)\right)\right).$$

Exercise 10.2 Take a random sample $\{y_1, \ldots, y_n\}$ with $\mu = E y_i$ and $\sigma^2 = \mathrm{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \overline{y}_n$. Find the population moments $E T_n$ and $\mathrm{var}(T_n)$. Let $\{y_1^*, \ldots, y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \overline{y}_n^*$ be its sample mean. Find the bootstrap moments $E T_n^*$ and $\mathrm{var}(T_n^*)$.

Exercise 10.3 Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat\beta$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat e = y - X\hat\beta$ the OLS residuals.

(a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat e_i) : i = 1, \ldots, n\}$. That is, draw a random integer $i'$ from $[1, 2, \ldots, n]$, and set $x^* = x_{i'}$ and $e^* = \hat e_{i'}$. Set $y^* = x^{*\prime}\hat\beta + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.

(b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat\beta^*$ and any other statistic of interest.

Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.

Exercise 10.4 Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate $\hat\theta^*$ on these samples and then calculate
$$T_n^* = \left(\hat\theta^* - \hat\theta\right)/s(\hat\theta),$$
where $s(\hat\theta)$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval
$$C = \left[\hat\theta - s(\hat\theta)\, q_n^*(.95),\ \hat\theta - s(\hat\theta)\, q_n^*(.05)\right].$$
Show that $C$ exactly equals the Alternative percentile interval (not the percentile-t interval).

Exercise 10.5 You want to test $H_0: \theta = 0$ against $H_1: \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat\theta/s(\hat\theta) > c$ where $c$ is picked so that Type I error is $\alpha$. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat\theta^*$ on these samples and then calculate
$$T_n^* = \hat\theta^*/s(\hat\theta^*).$$
Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat\theta/s(\hat\theta) > q_n^*(.95)$. What is wrong with this procedure?

Exercise 10.6 Suppose that in an application, $\hat\theta = 1.2$ and $s(\hat\theta) = .2$. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat\theta^*$ is calculated on each sample. The $\hat\theta^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat\theta^*$ are .75 and 1.3, respectively.

(a) Report the 95% Efron Percentile interval for $\theta$.

(b) Report the 95% Alternative Percentile interval for $\theta$.

(c) With the given information, can you report the 95% Percentile-t interval for $\theta$?

Exercise 10.7 The datafile hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.
Chapter 11

NonParametric Regression

11.1 Introduction

When components of $x$ are continuously distributed then the conditional expectation function
$$E(y_i \mid x_i = x) = m(x)$$
can take any nonlinear shape. Unless an economic model restricts the form of $m(x)$ to a parametric function, the CEF is inherently nonparametric, meaning that the function $m(x)$ is an element of an infinite dimensional class. In this situation, how can we estimate $m(x)$? What is a suitable method, if we acknowledge that $m(x)$ is nonparametric?

There are two main classes of nonparametric regression estimators: kernel estimators, and series estimators. In this chapter we introduce kernel methods.

To get started, suppose that there is a single real-valued regressor $x_i$. We consider the case of vector-valued regressors later.

11.2 Binned Estimator

For clarity, fix the point $x$ and consider estimation of the single point $m(x)$. This is the mean of $y_i$ for random pairs $(y_i, x_i)$ such that $x_i = x$. If the distribution of $x_i$ were discrete then we could estimate $m(x)$ by taking the average of the sub-sample of observations $y_i$ for which $x_i = x$. But when $x_i$ is continuous then the probability is zero that $x_i$ exactly equals any specific $x$. So there is no sub-sample of observations with $x_i = x$ and we cannot simply take the average of the corresponding $y_i$ values. However, if the CEF $m(x)$ is continuous, then it should be possible to get a good approximation by taking the average of the observations for which $x_i$ is close to $x$, perhaps for the observations for which $|x_i - x| \le h$ for some small $h > 0$. Later we will call $h$ a bandwidth. This estimator can be written as
$$\hat m(x) = \frac{\sum_{i=1}^{n} 1\left(|x_i - x| \le h\right) y_i}{\sum_{i=1}^{n} 1\left(|x_i - x| \le h\right)} \qquad (11.1)$$
where $1(\cdot)$ is the indicator function. Alternatively, (11.1) can be written as
$$\hat m(x) = \sum_{i=1}^{n} w_i(x) y_i \qquad (11.2)$$
where
$$w_i(x) = \frac{1\left(|x_i - x| \le h\right)}{\sum_{j=1}^{n} 1\left(|x_j - x| \le h\right)}.$$
Notice that $\sum_{i=1}^{n} w_i(x) = 1$, so (11.2) is a weighted average of the $y_i$.
[Figure 11.1: Scatter of $(y_i, x_i)$ and Nadaraya-Watson regression]
It is possible that for some values of $x$ there are no values of $x_i$ such that $|x_i - x| \le h$, which implies that $\sum_{i=1}^{n} 1\left(|x_i - x| \le h\right) = 0$. In this case the estimator (11.1) is undefined for those values of $x$.

To visualize, Figure 11.1 displays a scatter plot of 100 observations on a random pair $(y_i, x_i)$ generated by simulation.¹ (The observations are displayed as the open circles.) The estimator (11.1) of the CEF $m(x)$ at $x = 2$ with $h = 1/2$ is the average of the $y_i$ for the observations such that $x_i$ falls in the interval $[1.5 \le x_i \le 2.5]$. (Our choice of $h = 1/2$ is somewhat arbitrary. Selection of $h$ will be discussed later.) The estimate is $\hat m(2) = 5.16$ and is shown on Figure 11.1 by the first solid square. We repeat the calculation (11.1) for $x = 3$, 4, 5, and 6, which is equivalent to partitioning the support of $x_i$ into the regions $[1.5, 2.5]$, $[2.5, 3.5]$, $[3.5, 4.5]$, $[4.5, 5.5]$, and $[5.5, 6.5]$. These partitions are shown in Figure 11.1 by the vertical dotted lines, and the estimates (11.1) by the solid squares.

These estimates $\hat m(x)$ can be viewed as estimates of the CEF $m(x)$. Sometimes called a binned estimator, this is a step-function approximation to $m(x)$ and is displayed in Figure 11.1 by the horizontal lines passing through the solid squares. This estimate roughly tracks the central tendency of the scatter of the observations $(y_i, x_i)$. However, the huge jumps in the estimated step function at the edges of the partitions are disconcerting, counter-intuitive, and clearly an artifact of the discrete binning.

If we take another look at the estimation formula (11.1) there is no reason why we need to evaluate (11.1) only on a coarse grid. We can evaluate $\hat m(x)$ for any set of values of $x$. In particular, we can evaluate (11.1) on a fine grid of values of $x$ and thereby obtain a smoother estimate of the CEF. This estimator with $h = 1/2$ is displayed in Figure 11.1 with the solid line. This is a generalization of the binned estimator and by construction passes through the solid squares.

The bandwidth $h$ determines the degree of smoothing. Larger values of $h$ increase the width of the bins in Figure 11.1, thereby increasing the smoothness of the estimate $\hat m(x)$ as a function of $x$. Smaller values of $h$ decrease the width of the bins, resulting in less smooth conditional mean estimates.

¹ The distribution is $x_i \sim N(4, 1)$ and $y_i \mid x_i \sim N(m(x_i), 16)$ with $m(x) = 10\log(x)$.
11.3 Kernel Regression

One deficiency with the estimator (11.1) is that it is a step function in $x$, as it is discontinuous at each observation $x = x_i$. That is why its plot in Figure 11.1 is jagged. The source of the discontinuity is that the weights $w_i(x)$ are constructed from indicator functions, which are themselves discontinuous. If instead the weights are constructed from continuous functions then the CEF estimator will also be continuous in $x$.

To generalize (11.1) it is useful to write the weights $1\left(|x_i - x| \le h\right)$ in terms of the uniform density function on $[-1, 1]$
$$k_0(u) = \frac{1}{2} 1\left(|u| \le 1\right).$$
Then
$$1\left(|x_i - x| \le h\right) = 1\left(\left|\frac{x_i - x}{h}\right| \le 1\right) = 2 k_0\left(\frac{x_i - x}{h}\right),$$
and (11.1) can be written as
$$\hat m(x) = \frac{\sum_{i=1}^{n} k_0\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} k_0\left(\frac{x_i - x}{h}\right)}. \qquad (11.3)$$
The uniform density $k_0(u)$ is a special case of what is known as a kernel function.

Definition 11.3.1 A second-order kernel function $k(u)$ satisfies $0 \le k(u) < \infty$, $k(u) = k(-u)$, $\int_{-\infty}^{\infty} k(u)\, du = 1$ and $\sigma_k^2 = \int_{-\infty}^{\infty} u^2 k(u)\, du < \infty$.

Essentially, a kernel function is a probability density function which is bounded and symmetric about zero. A generalization of (11.1) is obtained by replacing the uniform kernel with any other kernel function:
$$\hat m(x) = \frac{\sum_{i=1}^{n} k\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} k\left(\frac{x_i - x}{h}\right)}. \qquad (11.4)$$
The estimator (11.4) also takes the form (11.2) with
$$w_i(x) = \frac{k\left(\frac{x_i - x}{h}\right)}{\sum_{j=1}^{n} k\left(\frac{x_j - x}{h}\right)}.$$
The estimator (11.4) is known as the Nadaraya-Watson estimator, the kernel regression estimator, or the local constant estimator.

The bandwidth $h$ plays the same role in (11.4) as it does in (11.1). Namely, larger values of $h$ will result in estimates $\hat m(x)$ which are smoother in $x$, and smaller values of $h$ will result in estimates which are more erratic. It might be helpful to consider the two extreme cases $h \to 0$ and $h \to \infty$. As $h \to 0$ we can see that $\hat m(x_i) \to y_i$ (if the values of $x_i$ are unique), so that $\hat m(x)$ is simply the scatter of $y_i$ on $x_i$. In contrast, as $h \to \infty$ then for all $x$, $\hat m(x) \to \overline{y}$, the sample mean, so that the nonparametric CEF estimate is a constant function. For intermediate values of $h$, $\hat m(x)$ will lie between these two extreme cases.

The uniform density is not a good kernel choice as it produces discontinuous CEF estimates. To obtain a continuous CEF estimate $\hat m(x)$ it is necessary for the kernel $k(u)$ to be continuous. The two most commonly used choices are the Epanechnikov kernel
$$k_1(u) = \frac{3}{4}\left(1 - u^2\right) 1\left(|u| \le 1\right)$$
and the normal or Gaussian kernel
$$k_\phi(u) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right).$$
For computation of the CEF estimate (11.4) the scale of the kernel is not important so long as the bandwidth is selected appropriately. That is, for any $b > 0$, $k_b(u) = b^{-1} k\left(\frac{u}{b}\right)$ is a valid kernel function with the identical shape as $k(u)$. Kernel regression with the kernel $k(u)$ and bandwidth $h$ is identical to kernel regression with the kernel $k_b(u)$ and bandwidth $h/b$.

The estimate (11.4) using the Epanechnikov kernel and $h = 1/2$ is also displayed in Figure 11.1 with the dashed line. As you can see, this estimator appears to be much smoother than that using the uniform kernel.

Two important constants associated with a kernel function $k(u)$ are its variance $\sigma_k^2$ and roughness $R_k$, which are defined as
$$\sigma_k^2 = \int_{-\infty}^{\infty} u^2 k(u)\, du \qquad (11.5)$$
$$R_k = \int_{-\infty}^{\infty} k(u)^2\, du. \qquad (11.6)$$
Some common kernels and their roughness and variance values are reported in Table 9.1.

Table 9.1: Common Second-Order Kernels

Kernel          Equation                                                        $R_k$               $\sigma_k^2$
Uniform         $k_0(u) = \frac{1}{2} 1(|u| \le 1)$                             $1/2$               $1/3$
Epanechnikov    $k_1(u) = \frac{3}{4}(1 - u^2) 1(|u| \le 1)$                    $3/5$               $1/5$
Biweight        $k_2(u) = \frac{15}{16}(1 - u^2)^2 1(|u| \le 1)$                $5/7$               $1/7$
Triweight       $k_3(u) = \frac{35}{32}(1 - u^2)^3 1(|u| \le 1)$                $350/429$           $1/9$
Gaussian        $k_\phi(u) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right)$   $1/(2\sqrt{\pi})$   $1$
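To illustrate the computation, here is a minimal Python sketch of the Nadaraya-Watson estimator (11.4) with the Epanechnikov kernel; function names and the handling of empty neighborhoods are illustrative choices, not from the text.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel k_1(u) = 0.75 (1 - u^2) on |u| <= 1."""
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def nw_estimator(x_eval, x, y, h, kernel=epanechnikov):
    """Nadaraya-Watson estimate of m(x) at each point of x_eval, as in (11.4)."""
    x_eval = np.atleast_1d(x_eval)
    mhat = np.empty(x_eval.shape[0])
    for j, x0 in enumerate(x_eval):
        w = kernel((x - x0) / h)                 # kernel weights
        mhat[j] = np.dot(w, y) / w.sum() if w.sum() > 0 else np.nan
    return mhat
```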
11.4 Local Linear Estimator
The Nadaraya-Watson (NW) estimator is often called a local constant estimator as it locally
(about r) approximates the CEF :(r) as a constant function. One way to see this is to observe
that ´ :(r) solves the minimization problem
´ :(r) = aigmin
c
a

i=1
/
_
r
i
÷r
/
_
(j
i
÷c)
2
.
This is a weighted regression of j
i
on an intercept only. Without the weights, this estimation
problem reduces to the sample mean. The NW estimator generalizes this to a local mean.
This interpretation suggests that we can construct alternative nonparametric estimators of the
CEF by alternative local approximations. Many such local approximations are possible. A popular
choice is the local linear (LL) approximation. Instead of approximating :(r) locally as a constant,
CHAPTER 11. NONPARAMETRIC REGRESSION 210
the local linear approximation approximates the CEF locally by a linear function, and estimates
this local approximation by locally weighted least squares.
Speci…cally, for each r we solve the following minimization problem
_
´ c(r),
´
,(r)
_
= aigmin
c,o
a

i=1
/
_
r
i
÷r
/
_
(j
i
÷c ÷, (r
i
÷r))
2
.
The local linear estimator of :(r) is the estimated intercept
´ :(r) = ´ c(r)
and the local linear estimator of the regression derivative \:(r) is the estimated slope coe¢cient
¯
\:(r) =
´
,(r).
Computationally, for each r set
z
i
(r) =
_
1
r
i
÷r
_
and
/
i
(r) = /
_
r
i
÷r
/
_
.
Then
_
´ c(r)
´
,(r)
_
=
_
a

i=1
/
i
(r)z
i
(r)z
i
(r)
t
_
÷1
a

i=1
/
i
(r)z
i
(r)j
i
=
_
Z
t
1Z
_
÷1
Z
t

where 1 = oiag¦/
1
(r), ..., /
a
(r)¦.
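A minimal sketch of this computation follows; the names are hypothetical, and `kernel` can be any kernel function, for example the `epanechnikov` helper from the previous sketch.

```python
import numpy as np

def local_linear(x_eval, x, y, h, kernel):
    """Local linear estimates of m(x) and the derivative at each point of x_eval."""
    x_eval = np.atleast_1d(x_eval)
    mhat = np.empty(x_eval.shape[0])
    dmhat = np.empty(x_eval.shape[0])
    for j, x0 in enumerate(x_eval):
        k = kernel((x - x0) / h)                        # weights k_i(x)
        Z = np.column_stack([np.ones_like(x), x - x0])  # z_i(x) = (1, x_i - x)'
        coef = np.linalg.solve(Z.T @ (k[:, None] * Z),  # (Z'KZ)^{-1} Z'Ky
                               Z.T @ (k * y))
        mhat[j], dmhat[j] = coef
    return mhat, dmhat
```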
To visualize, Figure 11.2 displays the scatter plot of the same 100 observations from Figure 11.1, divided into three regions depending on the regressor $x_i$: $[1, 3]$, $[3, 5]$, $[5, 7]$. A linear regression is fit to the observations in each region, with the observations weighted by the Epanechnikov kernel with $h = 1$. The three fitted regression lines are displayed by the three straight solid lines. The values of these regression lines at $x = 2$, $x = 4$ and $x = 6$, respectively, are the local linear estimates $\hat m(x)$ at $x = 2$, 4, and 6. This estimation is repeated for all $x$ in the support of the regressors, and plotted as the continuous solid line in Figure 11.2.

One interesting feature is that as $h \to \infty$, the LL estimator approaches the full-sample linear least-squares estimator $\hat m(x) \to \hat\alpha + \hat\beta x$. That is because as $h \to \infty$ all observations receive equal weight regardless of $x$. In this sense we can see that the LL estimator is a flexible generalization of the linear OLS estimator.

Which nonparametric estimator should you use in practice: NW or LL? The theoretical literature shows that neither strictly dominates the other, but we can describe contexts where one or the other does better. Roughly speaking, the NW estimator performs better than the LL estimator when $m(x)$ is close to a flat line, but the LL estimator performs better when $m(x)$ is meaningfully non-constant. The LL estimator also performs better for values of $x$ near the boundary of the support of $x_i$.
11.5 Nonparametric Residuals and Regression Fit

The fitted regression at $x = x_i$ is $\hat m(x_i)$ and the fitted residual is
$$\hat e_i = y_i - \hat m(x_i).$$

[Figure 11.2: Scatter of $(y_i, x_i)$ and Local Linear fitted regression]

As a general rule, but especially when the bandwidth $h$ is small, it is hard to view $\hat e_i$ as a good measure of the fit of the regression. As $h \to 0$ then $\hat m(x_i) \to y_i$ and therefore $\hat e_i \to 0$. This clearly indicates overfitting as the true error is not zero. In general, since $\hat m(x_i)$ is a local average which includes $y_i$, the fitted value will be necessarily close to $y_i$ and the residual $\hat e_i$ small, and the degree of this overfitting increases as $h$ decreases.

A standard solution is to measure the fit of the regression at $x = x_i$ by re-estimating the model excluding the $i$'th observation. For Nadaraya-Watson regression, the leave-one-out estimator of $m(x)$ excluding observation $i$ is
$$\tilde m_{-i}(x) = \frac{\sum_{j \ne i} k\left(\frac{x_j - x}{h}\right) y_j}{\sum_{j \ne i} k\left(\frac{x_j - x}{h}\right)}.$$
Notationally, the "$-i$" subscript is used to indicate that the $i$'th observation is omitted.

The leave-one-out predicted value for $y_i$ at $x = x_i$ equals
$$\tilde y_i = \tilde m_{-i}(x_i) = \frac{\sum_{j \ne i} k\left(\frac{x_j - x_i}{h}\right) y_j}{\sum_{j \ne i} k\left(\frac{x_j - x_i}{h}\right)}.$$
The leave-one-out residuals (or prediction errors) are the difference between the leave-one-out predicted values and the actual observation
$$\tilde e_i = y_i - \tilde y_i.$$
Since $\tilde y_i$ is not a function of $y_i$, there is no tendency for $\tilde y_i$ to overfit for small $h$. Consequently, $\tilde e_i$ is a good measure of the fit of the estimated nonparametric regression.

Similarly, the leave-one-out local-linear residual is $\tilde e_i = y_i - \tilde\alpha_i$ with
$$\begin{pmatrix} \tilde\alpha_i \\ \tilde\beta_i \end{pmatrix} = \left(\sum_{j \ne i} k_{ij} z_{ij} z_{ij}'\right)^{-1} \sum_{j \ne i} k_{ij} z_{ij} y_j,$$
$$z_{ij} = \begin{pmatrix} 1 \\ x_j - x_i \end{pmatrix}$$
and
$$k_{ij} = k\left(\frac{x_j - x_i}{h}\right).$$
11.6 Cross-Validation Bandwidth Selection

As we mentioned before, the choice of bandwidth $h$ is crucial. As $h$ increases, the kernel regression estimators (both NW and LL) become more smooth, ironing out the bumps and wiggles. This reduces estimation variance but at the cost of increased bias and oversmoothing. As $h$ decreases the estimators become more wiggly, erratic, and noisy. It is desirable to select $h$ to trade off these features. How can this be done systematically?

To be explicit about the dependence of the estimator on the bandwidth, let us write the estimator of $m(x)$ with a given bandwidth $h$ as $\hat m(x, h)$, and our discussion will apply equally to the NW and LL estimators.

Ideally, we would like to select $h$ to minimize the mean-squared error (MSE) of $\hat m(x, h)$ as an estimate of $m(x)$. For a given value of $x$ the MSE is
$$MSE_n(x, h) = E\left(\hat m(x, h) - m(x)\right)^2.$$
We are typically interested in estimating $m(x)$ for all values in the support of $x$. A common measure for the average fit is the integrated MSE
$$\begin{aligned}
IMSE_n(h) &= \int MSE_n(x, h)\, f_x(x)\, dx\\
&= \int E\left(\hat m(x, h) - m(x)\right)^2 f_x(x)\, dx
\end{aligned}$$
where $f_x(x)$ is the marginal density of $x_i$. Notice that we have defined the IMSE as an integral with respect to the density $f_x(x)$. Other weight functions could be used, but it turns out that this is a convenient choice.

The IMSE is closely related with the MSFE of Section 4.9. Let $(y_{n+1}, x_{n+1})$ be out-of-sample observations (and thus independent of the sample) and consider predicting $y_{n+1}$ given $x_{n+1}$ and the nonparametric estimate $\hat m(x, h)$. The natural point estimate for $y_{n+1}$ is $\hat m(x_{n+1}, h)$ which has mean-squared forecast error
$$\begin{aligned}
MSFE_n(h) &= E\left(y_{n+1} - \hat m(x_{n+1}, h)\right)^2\\
&= E\left(e_{n+1} + m(x_{n+1}) - \hat m(x_{n+1}, h)\right)^2\\
&= \sigma^2 + E\left(m(x_{n+1}) - \hat m(x_{n+1}, h)\right)^2\\
&= \sigma^2 + \int E\left(\hat m(x, h) - m(x)\right)^2 f_x(x)\, dx
\end{aligned}$$
where the final equality uses the fact that $x_{n+1}$ is independent of $\hat m(x, h)$. We thus see that
$$MSFE_n(h) = \sigma^2 + IMSE_n(h).$$
Since $\sigma^2$ is a constant independent of the bandwidth $h$, $MSFE_n(h)$ and $IMSE_n(h)$ are equivalent measures of the fit of the nonparametric regression.

The optimal bandwidth $h$ is the value which minimizes $IMSE_n(h)$ (or equivalently $MSFE_n(h)$). While these functions are unknown, we learned in Theorem 4.9.1 that (at least in the case of linear regression) $MSFE_n$ can be estimated by the sample mean-squared prediction errors. It turns out that this fact extends to nonparametric regression. The nonparametric leave-one-out residuals are
$$\tilde e_i(h) = y_i - \tilde m_{-i}(x_i, h)$$
where we are being explicit about the dependence on the bandwidth $h$. The mean squared leave-one-out residuals is
$$CV(h) = \frac{1}{n}\sum_{i=1}^{n} \tilde e_i(h)^2.$$
This function of $h$ is known as the cross-validation criterion.

The cross-validation bandwidth $\hat h$ is the value which minimizes $CV(h)$
$$\hat h = \mathop{\mathrm{argmin}}_{h \ge h_\ell} CV(h) \qquad (11.7)$$
for some $h_\ell > 0$. The restriction $h \ge h_\ell$ is imposed so that $CV(h)$ is not evaluated over unreasonably small bandwidths.

There is not an explicit solution to the minimization problem (11.7), so it must be solved numerically. A typical practical method is to create a grid of values for $h$, e.g. $[h_1, h_2, \ldots, h_J]$, evaluate $CV(h_j)$ for $j = 1, \ldots, J$, and set
$$\hat h = \mathop{\mathrm{argmin}}_{h \in [h_1, h_2, \ldots, h_J]} CV(h).$$
Evaluation using a coarse grid is typically sufficient for practical application. Plots of $CV(h)$ against $h$ are a useful diagnostic tool to verify that the minimum of $CV(h)$ has been obtained.
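A minimal sketch of leave-one-out cross-validation for the NW estimator follows; function names and the grid are illustrative, and, as noted above, the grid should not include unreasonably small bandwidths (which can leave some points with no neighbors).

```python
import numpy as np

def cv_nw(h, x, y, kernel):
    """Leave-one-out cross-validation criterion CV(h) for the NW estimator."""
    n = len(y)
    e_tilde = np.empty(n)
    for i in range(n):
        w = kernel((x - x[i]) / h)
        w[i] = 0.0                                 # drop observation i (leave-one-out)
        e_tilde[i] = y[i] - np.dot(w, y) / w.sum()
    return np.mean(e_tilde**2)

def cv_bandwidth(x, y, grid, kernel):
    """Grid minimization of CV(h) over a vector of candidate bandwidths."""
    cv = np.array([cv_nw(h, x, y, kernel) for h in grid])
    return grid[np.argmin(cv)], cv

# usage (illustrative): h_hat, cv_values = cv_bandwidth(x, y, np.arange(0.2, 3.0, 0.01), epanechnikov)
```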
We said above that the cross-validation criterion is an estimator of the MSFE. This claim is based on the following result.

Theorem 11.6.1
$$E\left(CV(h)\right) = MSFE_{n-1}(h) = IMSE_{n-1}(h) + \sigma^2 \qquad (11.8)$$

Theorem 11.6.1 shows that $CV(h)$ is an unbiased estimator of $IMSE_{n-1}(h) + \sigma^2$. The first term, $IMSE_{n-1}(h)$, is the integrated MSE of the nonparametric estimator using a sample of size $n-1$. If $n$ is large, $IMSE_{n-1}(h)$ and $IMSE_n(h)$ will be nearly identical, so $CV(h)$ is essentially unbiased as an estimator of $IMSE_n(h) + \sigma^2$. Since the second term ($\sigma^2$) is unaffected by the bandwidth $h$, it is irrelevant for the problem of selection of $h$. In this sense we can view $CV(h)$ as an estimator of the IMSE, and more importantly we can view the minimizer of $CV(h)$ as an estimate of the minimizer of $IMSE_n(h)$.

To illustrate, Figure 11.3 displays the cross-validation criteria $CV(h)$ for the Nadaraya-Watson and Local Linear estimators using the data from Figure 11.1, both using the Epanechnikov kernel. The CV functions are computed on a grid with intervals 0.01. The CV-minimizing bandwidths are $h = 1.09$ for the Nadaraya-Watson estimator and $h = 1.59$ for the local linear estimator. Figure 11.3 shows the minimizing bandwidths by the arrows. It is not surprising that the CV criteria recommend a larger bandwidth for the LL estimator than for the NW estimator, as the LL employs more smoothing for a given bandwidth.

The CV criterion can also be used to select between different nonparametric estimators. The CV-selected estimator is the one with the lowest minimized CV criterion. For example, in Figure 11.3, the NW estimator has a minimized CV criterion of 16.88, while the LL estimator has a minimized CV criterion of 16.81. Since the LL estimator achieves a lower value of the CV criterion, LL is the CV-selected estimator. The difference (0.07) is small, suggesting that the two estimators are near equivalent in IMSE.

Figure 11.4 displays the fitted CEF estimates (NW and LL) using the bandwidths selected by cross-validation. Also displayed is the true CEF $m(x) = 10\ln(x)$. Notice that the nonparametric estimators with the CV-selected bandwidths (and especially the LL estimator) track the true CEF quite well.

[Figure 11.3: Cross-Validation Criteria, Nadaraya-Watson Regression and Local Linear Regression]

[Figure 11.4: Nonparametric Estimates using data-dependent (CV) bandwidths]

Proof of Theorem 11.6.1. Observe that $m(x_i) - \tilde m_{-i}(x_i, h)$ is a function only of $(x_1, \ldots, x_n)$ and $(e_1, \ldots, e_n)$ excluding $e_i$, and is thus uncorrelated with $e_i$. Since $\tilde e_i(h) = m(x_i) - \tilde m_{-i}(x_i, h) + e_i$, then
$$\begin{aligned}
E\left(CV(h)\right) &= E\left(\tilde e_i(h)^2\right)\\
&= E\left(e_i^2\right) + E\left(\tilde m_{-i}(x_i, h) - m(x_i)\right)^2 - 2E\left(\left(\tilde m_{-i}(x_i, h) - m(x_i)\right) e_i\right)\\
&= \sigma^2 + E\left(\tilde m_{-i}(x_i, h) - m(x_i)\right)^2. \qquad (11.9)
\end{aligned}$$
The second term is an expectation over the random variables $x_i$ and $\tilde m_{-i}(x, h)$, which are independent as the second is not a function of the $i$'th observation. Thus taking the conditional expectation given the sample excluding the $i$'th observation, this is the expectation over $x_i$ only, which is the integral with respect to its density
$$E_{-i}\left(\tilde m_{-i}(x_i, h) - m(x_i)\right)^2 = \int \left(\tilde m_{-i}(x, h) - m(x)\right)^2 f_x(x)\, dx.$$
Taking the unconditional expectation yields
$$E\left(\tilde m_{-i}(x_i, h) - m(x_i)\right)^2 = E\int \left(\tilde m_{-i}(x, h) - m(x)\right)^2 f_x(x)\, dx = IMSE_{n-1}(h)$$
where this is the IMSE of a sample of size $n-1$ as the estimator $\tilde m_{-i}$ uses $n-1$ observations. Combined with (11.9) we obtain (11.8), as desired.
11.7 Asymptotic Distribution

There is no finite sample distribution theory for kernel estimators, but there is a well developed asymptotic distribution theory. The theory is based on the approximation that the bandwidth $h$ decreases to zero as the sample size $n$ increases. This means that the smoothing is increasingly localized as the sample size increases. So long as the bandwidth does not decrease to zero too quickly, the estimator can be shown to be asymptotically normal, but with a non-trivial bias.

Let $f_x(x)$ denote the marginal density of $x_i$ and $\sigma^2(x) = E\left(e_i^2 \mid x_i = x\right)$ denote the conditional variance of $e_i = y_i - m(x_i)$.

Theorem 11.7.1 Let $\hat m(x)$ denote either the Nadaraya-Watson or Local Linear estimator of $m(x)$. If $x$ is interior to the support of $x_i$ and $f_x(x) > 0$, then as $n \to \infty$ and $h \to 0$ such that $nh \to \infty$,
$$\sqrt{nh}\left(\hat m(x) - m(x) - h^2\sigma_k^2 B(x)\right) \xrightarrow{d} N\left(0, \frac{R_k\,\sigma^2(x)}{f_x(x)}\right) \qquad (11.10)$$
where $\sigma_k^2$ and $R_k$ are defined in (11.5) and (11.6). For the Nadaraya-Watson estimator
$$B(x) = \frac{1}{2} m''(x) + f_x(x)^{-1} f_x'(x)\, m'(x)$$
and for the local linear estimator
$$B(x) = \frac{1}{2} m''(x).$$

There are several interesting features about the asymptotic distribution which are noticeably different than for parametric estimators. First, the estimator converges at the rate $\sqrt{nh}$, not $\sqrt{n}$. Since $h \to 0$, $\sqrt{nh}$ diverges slower than $\sqrt{n}$, thus the nonparametric estimator converges more slowly than a parametric estimator. Second, the asymptotic distribution contains a non-negligible bias term $h^2\sigma_k^2 B(x)$. This term asymptotically disappears since $h \to 0$. Third, the assumptions that $nh \to \infty$ and $h \to 0$ mean that the estimator is consistent for the CEF $m(x)$.

The fact that the estimator converges at the rate $\sqrt{nh}$ has led to the interpretation of $nh$ as the "effective sample size". This is because the number of observations being used to construct $\hat m(x)$ is proportional to $nh$, not $n$ as for a parametric estimator.

It is helpful to understand that the nonparametric estimator has a reduced convergence rate because the object being estimated, $m(x)$, is nonparametric. This is harder than estimating a finite dimensional parameter, and thus comes at a cost.

Unlike parametric estimation, the asymptotic distribution of the nonparametric estimator includes a term representing the bias of the estimator. The asymptotic distribution (11.10) shows the form of this bias. Not only is it proportional to the squared bandwidth $h^2$ (the degree of smoothing), it is proportional to the function $B(x)$ which depends on the slope and curvature of the CEF $m(x)$. Interestingly, when $m(x)$ is constant then $B(x) = 0$ and the kernel estimator has no asymptotic bias. The bias is essentially increasing in the curvature of the CEF function $m(x)$. This is because the local averaging smooths $m(x)$, and the smoothing induces more bias when $m(x)$ is curved.

Theorem 11.7.1 shows that the asymptotic distributions of the NW and LL estimators are similar, with the only difference arising in the bias function $B(x)$. The bias term for the NW estimator has an extra component which depends on the first derivative of the CEF $m(x)$ while the bias term of the LL estimator is invariant to the first derivative. The fact that the bias formula for the LL estimator is simpler and is free of dependence on the first derivative of $m(x)$ suggests that the LL estimator will generally have smaller bias than the NW estimator (but this is not a precise ranking). Since the asymptotic variances in the two distributions are the same, this means that the LL estimator achieves a reduced bias without an effect on asymptotic variance. This analysis has led to the general preference for the LL estimator over the NW estimator in the nonparametrics literature.

One implication of Theorem 11.7.1 is that we can define the asymptotic MSE (AMSE) of $\hat m(x)$ as the squared bias plus the asymptotic variance
$$AMSE\left(\hat m(x)\right) = \left(h^2\sigma_k^2 B(x)\right)^2 + \frac{R_k\,\sigma^2(x)}{nh\, f_x(x)}. \qquad (11.11)$$
Focusing on rates, this says
$$AMSE\left(\hat m(x)\right) \sim h^4 + \frac{1}{nh} \qquad (11.12)$$
which means that the AMSE is dominated by the larger of $h^4$ and $(nh)^{-1}$. Notice that the bias is increasing in $h$ and the variance is decreasing in $h$. (More smoothing means more observations are used for local estimation: this increases the bias but decreases estimation variance.) To select $h$ to minimize the AMSE, these two components should balance each other. Setting $h^4 \propto (nh)^{-1}$ means setting $h \propto n^{-1/5}$. Another way to see this is to pick $h$ to minimize the right-hand-side of (11.12). The first-order condition for $h$ is
$$\frac{\partial}{\partial h}\left(h^4 + \frac{1}{nh}\right) = 4h^3 - \frac{1}{nh^2} = 0$$
which when solved for $h$ yields $h = n^{-1/5}$. What this means is that for AMSE-efficient estimation of $m(x)$, the optimal rate for the bandwidth is $h \propto n^{-1/5}$.

Theorem 11.7.2 The bandwidth which minimizes the AMSE (11.12) is of order $h \propto n^{-1/5}$. With $h \propto n^{-1/5}$ then $AMSE\left(\hat m(x)\right) = O\left(n^{-4/5}\right)$ and $\hat m(x) = m(x) + O_p\left(n^{-2/5}\right)$.

This result means that the bandwidth should take the form $h = cn^{-1/5}$. The optimal constant $c$ depends on the kernel $k$, the bias function $B(x)$ and the marginal density $f_x(x)$. A common mis-interpretation is to set $h = n^{-1/5}$, which is equivalent to setting $c = 1$ and is completely arbitrary. Instead, an empirical bandwidth selection rule such as cross-validation should be used in practice.

When $h = cn^{-1/5}$ we can rewrite the asymptotic distribution (11.10) as
$$n^{2/5}\left(\hat m(x) - m(x)\right) \xrightarrow{d} N\left(c^2\sigma_k^2 B(x),\ \frac{R_k\,\sigma^2(x)}{c\, f_x(x)}\right).$$
In this representation, we see that $\hat m(x)$ is asymptotically normal, but with a $n^{2/5}$ rate of convergence and non-zero mean. The asymptotic distribution depends on the constant $c$ through the bias (positively) and the variance (inversely).

The asymptotic distribution in Theorem 11.7.1 allows for the optimal rate $h = cn^{-1/5}$ but this rate is not required. In particular, consider an undersmoothing (smaller than optimal) bandwidth with rate $h = o\left(n^{-1/5}\right)$. For example, we could specify that $h = cn^{-a}$ for some $c > 0$ and $1/5 < a < 1$. Then $\sqrt{nh}\,h^2 = O\left(n^{(1-5a)/2}\right) = o(1)$ so the bias term in (11.10) is asymptotically negligible so Theorem 11.7.1 implies
$$\sqrt{nh}\left(\hat m(x) - m(x)\right) \xrightarrow{d} N\left(0, \frac{R_k\,\sigma^2(x)}{f_x(x)}\right).$$
That is, the estimator is asymptotically normal without a bias component. Not having an asymptotic bias component is convenient for some theoretical manipulations, so many authors impose the undersmoothing condition $h = o\left(n^{-1/5}\right)$ to ensure this situation. This convenience comes at a cost. First, the resulting estimator is inefficient as its convergence rate is $O_p\left(n^{-(1-a)/2}\right)$, which is slower than $O_p\left(n^{-2/5}\right)$ since $a > 1/5$. Second, the distribution theory is an inherently misleading approximation as it misses a critically key ingredient of nonparametric estimation: the trade-off between bias and variance. The approximation (11.10) is superior precisely because it contains the asymptotic bias component which is a realistic implication of nonparametric estimation. Undersmoothing assumptions should be avoided when possible.
11.8 Conditional Variance Estimation

Let's consider the problem of estimation of the conditional variance
$$\sigma^2(x) = \mathrm{var}\left(y_i \mid x_i = x\right) = E\left(e_i^2 \mid x_i = x\right).$$
Even if the conditional mean $m(x)$ is parametrically specified, it is natural to view $\sigma^2(x)$ as inherently nonparametric as economic models rarely specify the form of the conditional variance. Thus it is quite appropriate to estimate $\sigma^2(x)$ nonparametrically.

We know that $\sigma^2(x)$ is the CEF of $e_i^2$ given $x_i$. Therefore if $e_i^2$ were observed, $\sigma^2(x)$ could be nonparametrically estimated using NW or LL regression. For example, the NW estimator is
$$\overline{\sigma}^2(x) = \frac{\sum_{i=1}^{n} k_i(x)\, e_i^2}{\sum_{i=1}^{n} k_i(x)}.$$
Since the errors $e_i$ are not observed, we need to replace them with an empirical residual, such as $\hat e_i = y_i - \hat m(x_i)$ where $\hat m(x)$ is the estimated CEF. (The latter could be a nonparametric estimator such as NW or LL, or even a parametric estimator.) Even better, use the leave-one-out prediction errors $\tilde e_i = y_i - \hat m_{-i}(x_i)$, as these are not subject to overfitting.

With this substitution the NW estimator of the conditional variance is
$$\hat\sigma^2(x) = \frac{\sum_{i=1}^{n} k_i(x)\, \tilde e_i^2}{\sum_{i=1}^{n} k_i(x)}. \qquad (11.13)$$
This estimator depends on a set of bandwidths $h_1, \ldots, h_q$, but there is no reason for the bandwidths to be the same as those used to estimate the conditional mean. Cross-validation can be used to select the bandwidths for estimation of $\hat\sigma^2(x)$ separately from cross-validation for estimation of $\hat m(x)$.

There is one subtle difference between CEF and conditional variance estimation. The conditional variance is inherently non-negative, $\sigma^2(x) \ge 0$, and it is desirable for our estimator to satisfy this property. Interestingly, the NW estimator (11.13) is necessarily non-negative, since it is a smoothed average of the non-negative squared residuals, but the LL estimator is not guaranteed to be non-negative for all $x$. For this reason, the NW estimator is preferred for conditional variance estimation.

Fan and Yao (1998, Biometrika) derive the asymptotic distribution of the estimator (11.13). They obtain the surprising result that the asymptotic distribution of this two-step estimator is identical to that of the one-step idealized estimator $\overline{\sigma}^2(x)$.
11.9 Standard Errors

Theorem 11.7.1 shows the asymptotic variances of both the NW and LL nonparametric regression estimators equal
$$V(x) = \frac{R_k\,\sigma^2(x)}{f_x(x)}.$$
For standard errors we need an estimate of $V(x)$. A plug-in estimate replaces the unknowns by estimates. The roughness $R_k$ can be found from Table 9.1. The conditional variance can be estimated using (11.13). The density of $x_i$ can be estimated using the methods from Section 21.1. Replacing these estimates into the formula for $V(x)$ we obtain the asymptotic variance estimate
$$\hat V(x) = \frac{R_k\,\hat\sigma^2(x)}{\hat f_x(x)}.$$
Then an asymptotic standard error for the kernel estimate $\hat m(x)$ is
$$\hat s(x) = \sqrt{\frac{1}{nh}\hat V(x)}.$$
Plots of the estimated CEF $\hat m(x)$ can be accompanied by confidence intervals $\hat m(x) \pm 2\hat s(x)$. These are known as pointwise confidence intervals, as they are designed to have correct coverage at each $x$, not uniformly in $x$.

One important caveat about the interpretation of nonparametric confidence intervals is that they are not centered at the true CEF $m(x)$, but rather are centered at the biased or pseudo-true value
$$m^*(x) = m(x) + h^2\sigma_k^2 B(x).$$
Consequently, a correct statement about the confidence interval $\hat m(x) \pm 2\hat s(x)$ is that it asymptotically contains $m^*(x)$ with probability 95%, not that it asymptotically contains $m(x)$ with probability 95%. The discrepancy is that the confidence interval does not take into account the bias $h^2\sigma_k^2 B(x)$. Unfortunately, nothing constructive can be done about this. The bias is difficult and noisy to estimate, so making a bias-correction only inflates estimation variance and decreases overall precision. A technical "trick" is to assume undersmoothing $h = o\left(n^{-1/5}\right)$ but this does not really eliminate the bias, it only assumes it away. The plain fact is that once we honestly acknowledge that the true CEF is nonparametric, it then follows that any finite sample estimate will have finite sample bias, and this bias will be inherently unknown and thus impossible to incorporate into confidence intervals.
11.10 Multiple Regressors

Our analysis has focused on the case of real-valued $x_i$ for simplicity of exposition, but the methods of kernel regression extend easily to the multiple regressor case, at the cost of a reduced rate of convergence. In this section we consider the case of estimation of the conditional expectation function
$$E(y_i \mid x_i = x) = m(x)$$
when
$$x_i = \begin{pmatrix} x_{1i} \\ \vdots \\ x_{di} \end{pmatrix}$$
is a $d$-vector.

For any evaluation point $x$ and observation $i$, define the kernel weights
$$k_i(x) = k\left(\frac{x_{1i} - x_1}{h_1}\right) k\left(\frac{x_{2i} - x_2}{h_2}\right) \cdots k\left(\frac{x_{di} - x_d}{h_d}\right),$$
a $d$-fold product kernel. The kernel weights $k_i(x)$ assess if the regressor vector $x_i$ is close to the evaluation point $x$ in the Euclidean space $\mathbb{R}^d$.

These weights depend on a set of $d$ bandwidths, $h_j$, one for each regressor. We can group them together into a single vector for notational convenience:
$$h = \begin{pmatrix} h_1 \\ \vdots \\ h_d \end{pmatrix}.$$
Given these weights, the Nadaraya-Watson estimator takes the form
$$\hat m(x) = \frac{\sum_{i=1}^{n} k_i(x)\, y_i}{\sum_{i=1}^{n} k_i(x)}.$$
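A minimal sketch of this multivariate Nadaraya-Watson estimator with a product kernel, under the illustrative choice of the Epanechnikov kernel for each component:

```python
import numpy as np

def epanechnikov(u):
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def nw_multivariate(x0, X, y, h, kernel=epanechnikov):
    """Nadaraya-Watson estimate of m(x0) using a d-fold product kernel.
    X is an (n, d) regressor matrix; x0 and h are length-d vectors."""
    u = (X - np.asarray(x0)) / np.asarray(h)   # elementwise scaled differences
    k = np.prod(kernel(u), axis=1)             # product kernel weights k_i(x0)
    return np.dot(k, y) / k.sum()
```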
For the local-linear estimator, define
$$z_i(x) = \begin{pmatrix} 1 \\ x_i - x \end{pmatrix}$$
and then the local-linear estimator can be written as $\hat m(x) = \hat\alpha(x)$ where
$$\begin{pmatrix} \hat\alpha(x) \\ \hat\beta(x) \end{pmatrix} = \left(\sum_{i=1}^{n} k_i(x)\, z_i(x) z_i(x)'\right)^{-1} \sum_{i=1}^{n} k_i(x)\, z_i(x) y_i = \left(Z'KZ\right)^{-1} Z'Ky$$
where $K = \mathrm{diag}\{k_1(x), \ldots, k_n(x)\}$.

In multiple regressor kernel regression, cross-validation remains a recommended method for bandwidth selection. The leave-one-out residuals $\tilde e_i$ and cross-validation criterion $CV(h)$ are defined identically as in the single regressor case. The only difference is that now the CV criterion is a function over the $d$-dimensional bandwidth $h$. This is a critical practical difference since finding the bandwidth vector $\hat h$ which minimizes $CV(h)$ can be computationally difficult when $h$ is high dimensional. Grid search is cumbersome and costly, since $G$ gridpoints per dimension imply evaluation of $CV(h)$ at $G^d$ distinct points, which can be a large number. Furthermore, plots of $CV(h)$ against $h$ are challenging when $d > 2$.

The asymptotic distribution of the estimators in the multiple regressor case is an extension of the single regressor case. Let $f_x(x)$ denote the marginal density of $x_i$ and $\sigma^2(x) = E\left(e_i^2 \mid x_i = x\right)$ the conditional variance of $e_i = y_i - m(x_i)$. Let $|h| = h_1 h_2 \cdots h_d$.

Theorem 11.10.1 Let $\hat m(x)$ denote either the Nadaraya-Watson or Local Linear estimator of $m(x)$. If $x$ is interior to the support of $x_i$ and $f_x(x) > 0$, then as $n \to \infty$ and $h_j \to 0$ such that $n|h| \to \infty$,
$$\sqrt{n|h|}\left(\hat m(x) - m(x) - \sigma_k^2\sum_{j=1}^{d} h_j^2 B_j(x)\right) \xrightarrow{d} N\left(0, \frac{R_k^d\,\sigma^2(x)}{f_x(x)}\right)$$
where for the Nadaraya-Watson estimator
$$B_j(x) = \frac{1}{2}\frac{\partial^2}{\partial x_j^2} m(x) + f_x(x)^{-1}\frac{\partial}{\partial x_j} f_x(x)\,\frac{\partial}{\partial x_j} m(x)$$
and for the Local Linear estimator
$$B_j(x) = \frac{1}{2}\frac{\partial^2}{\partial x_j^2} m(x).$$

For notational simplicity consider the case that there is a single common bandwidth $h$. In this case the AMSE takes the form
$$AMSE\left(\hat m(x)\right) \sim h^4 + \frac{1}{nh^d}.$$
That is, the squared bias is of order $h^4$, the same as in the single regressor case, but the variance is of larger order $(nh^d)^{-1}$. Setting $h$ to balance these two components requires setting $h \sim n^{-1/(4+d)}$.

Theorem 11.10.2 The bandwidth which minimizes the AMSE is of order $h \propto n^{-1/(4+d)}$. With $h \propto n^{-1/(4+d)}$ then $AMSE\left(\hat m(x)\right) = O\left(n^{-4/(4+d)}\right)$ and $\hat m(x) = m(x) + O_p\left(n^{-2/(4+d)}\right)$.

In all estimation problems an increase in the dimension decreases estimation precision. For example, in parametric estimation an increase in dimension typically increases the asymptotic variance. In nonparametric estimation an increase in the dimension typically decreases the convergence rate, which is a more fundamental decrease in precision. For example, in kernel regression the convergence rate $O_p\left(n^{-2/(4+d)}\right)$ decreases as $d$ increases. The reason is the estimator $\hat m(x)$ is a local average of the $y_i$ for observations such that $x_i$ is close to $x$, and when there are multiple regressors the number of such observations is inherently smaller. This phenomenon, that the rate of convergence of nonparametric estimation decreases as the dimension increases, is called the curse of dimensionality.
Chapter 12

Series Estimation

12.1 Approximation by Series

As we mentioned at the beginning of Chapter 11, there are two main methods of nonparametric regression: kernel estimation and series estimation. In this chapter we study series methods.

Series methods approximate an unknown function (e.g. the CEF $m(x)$) with a flexible parametric function, with the number of parameters treated similarly to the bandwidth in kernel regression. A series approximation to $m(x)$ takes the form $m_K(x) = m_K(x, \beta_K)$ where $m_K(x, \beta_K)$ is a known parametric family and $\beta_K$ is an unknown coefficient. The integer $K$ is the dimension of $\beta_K$ and indexes the complexity of the approximation.
A linear series approximation takes the form
$$m_K(x) = \sum_{j=1}^K z_{jK}(x)\, \beta_{jK} = z_K(x)' \beta_K \qquad (12.1)$$
where $z_{jK}(x)$ are (nonlinear) functions of $x$, and are known as basis functions or basis function transformations of $x$.

For real-valued $x$, a well-known linear series approximation is the $p$'th-order polynomial
$$m_K(x) = \sum_{j=0}^p x^j \beta_{jK}$$
where $K = p + 1$.

When $x \in \mathbb{R}^d$ is vector-valued, a $p$'th-order polynomial is
$$m_K(x) = \sum_{j_1=0}^p \cdots \sum_{j_d=0}^p x_1^{j_1} \cdots x_d^{j_d}\, \beta_{j_1,\ldots,j_d,K}.$$
This includes all powers and cross-products, and the coefficient vector has dimension $K = (p+1)^d$. In general, a common method to create a series approximation for vector-valued $x$ is to include all non-redundant cross-products of the basis function transformations of the components of $x$.
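To make the construction concrete, here is a minimal sketch (in Python/NumPy) of a tensor-product polynomial basis including all powers and cross-products; the function name `polynomial_basis` and the example data are illustrative assumptions, not part of any package.

```python
import numpy as np
from itertools import product

def polynomial_basis(X, p):
    """Tensor-product polynomial basis of order p for an (n x d) regressor matrix X.

    Returns an (n x (p+1)**d) matrix whose columns are x_1^{j1} * ... * x_d^{jd}
    for all exponent combinations 0 <= j1,...,jd <= p (the first column is the
    intercept, all exponents zero).
    """
    cols = []
    for exponents in product(range(p + 1), repeat=X.shape[1]):
        cols.append(np.prod(X ** np.array(exponents), axis=1))
    return np.column_stack(cols)

# Example: n = 100 observations, d = 2 regressors, p = 2 gives K = (p+1)^d = 9 columns.
X = np.random.default_rng(0).uniform(size=(100, 2))
Z = polynomial_basis(X, p=2)
print(Z.shape)  # (100, 9)
```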
12.2 Splines

Another common series approximation is a continuous piecewise polynomial function known as a spline. While splines can be of any polynomial order (e.g. linear, quadratic, cubic, etc.), a common choice is cubic. To impose smoothness it is common to constrain the spline function to have continuous derivatives up to the order of the spline. Thus a quadratic spline is typically constrained to have a continuous first derivative, and a cubic spline is typically constrained to have continuous first and second derivatives.

There is more than one way to define a spline series expansion. All are based on the number of knots – the join points between the polynomial segments.

To illustrate, a piecewise linear function with two segments and a knot at $\tau$ is
$$m_K(x) = \begin{cases} m_1(x) = \beta_{00} + \beta_{01}(x - \tau), & x < \tau \\ m_2(x) = \beta_{10} + \beta_{11}(x - \tau), & x \ge \tau \end{cases}$$
(For convenience we have written the segment functions as polynomials in $x - \tau$.) The function $m_K(x)$ equals the linear function $m_1(x)$ for $x < \tau$ and equals $m_2(x)$ for $x \ge \tau$. Its left limit at $x = \tau$ is $\beta_{00}$ and its right limit is $\beta_{10}$, so it is continuous if (and only if) $\beta_{00} = \beta_{10}$. Enforcing this constraint is equivalent to writing the function as
$$m_K(x) = \beta_0 + \beta_1(x - \tau) + \beta_2 (x - \tau)\, 1(x \ge \tau)$$
or, after transforming coefficients, as
$$m_K(x) = \beta_0 + \beta_1 x + \beta_2 (x - \tau)\, 1(x \ge \tau).$$
Notice that this function has $K = 3$ coefficients, the same as a quadratic polynomial.

A piecewise quadratic function with one knot at $\tau$ is
$$m_K(x) = \begin{cases} m_1(x) = \beta_{00} + \beta_{01}(x - \tau) + \beta_{02}(x - \tau)^2, & x < \tau \\ m_2(x) = \beta_{10} + \beta_{11}(x - \tau) + \beta_{12}(x - \tau)^2, & x \ge \tau \end{cases}$$
This function is continuous at $x = \tau$ if $\beta_{00} = \beta_{10}$, and has a continuous first derivative if $\beta_{01} = \beta_{11}$. Imposing these constraints and rewriting, we obtain the function
$$m_K(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 (x - \tau)^2\, 1(x \ge \tau).$$
Here, $K = 4$.

Furthermore, a piecewise cubic function with one knot and a continuous second derivative is
$$m_K(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \beta_4 (x - \tau)^3\, 1(x \ge \tau)$$
which has $K = 5$.

The polynomial order $p$ is selected to control the smoothness of the spline, as $m_K(x)$ has continuous derivatives up to $p - 1$.

In general, a $p$'th-order spline with $N$ knots at $\tau_1, \tau_2, \ldots, \tau_N$ with $\tau_1 < \tau_2 < \cdots < \tau_N$ is
$$m_K(x) = \sum_{j=0}^p \beta_j x^j + \sum_{k=1}^N \gamma_k (x - \tau_k)^p\, 1(x \ge \tau_k)$$
which has $K = N + p + 1$ coefficients.

In spline approximation, the typical approach is to treat the polynomial order $p$ as fixed, and select the number of knots $N$ to determine the complexity of the approximation. The knots $\tau_k$ are typically treated as fixed. A common choice is to set the knots to evenly partition the support $\mathcal{X}$ of $x_i$.
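A truncated-power spline basis of this form is simple to construct directly. The sketch below is a hypothetical helper (the name `spline_basis` and the example knots are assumptions for illustration); it returns the $K = N + p + 1$ columns $1, x, \ldots, x^p, (x-\tau_1)_+^p, \ldots, (x-\tau_N)_+^p$.

```python
import numpy as np

def spline_basis(x, knots, p=3):
    """p'th-order truncated power spline basis for a real-valued regressor x.

    Columns: 1, x, ..., x^p, (x - t_1)_+^p, ..., (x - t_N)_+^p,
    so K = N + p + 1 as in the text.
    """
    x = np.asarray(x)
    powers = [x ** j for j in range(p + 1)]
    truncated = [np.where(x >= t, (x - t) ** p, 0.0) for t in knots]
    return np.column_stack(powers + truncated)

# Example: cubic spline (p = 3) with N = 4 evenly spaced interior knots on [0, 1].
x = np.linspace(0, 1, 200)
knots = np.linspace(0, 1, 6)[1:-1]
Z = spline_basis(x, knots, p=3)
print(Z.shape)  # (200, 8) since K = N + p + 1 = 8
```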
12.3 Partially Linear Model

A common use of a series expansion is to allow the CEF to be nonparametric with respect to one variable, yet linear in the other variables. This allows flexibility in a particular variable of interest. A partially linear CEF with vector-valued regressor $x_1$ and real-valued continuous $x_2$ takes the form
$$m(x_1, x_2) = x_1' \beta_1 + m_2(x_2).$$
This model is commonly used when $x_1$ are discrete (e.g. binary variables) and $x_2$ is continuously distributed.

Series methods are particularly convenient for estimation of partially linear models, as we can replace the unknown function $m_2(x_2)$ with a series expansion to obtain
$$m(x) \approx m_K(x) = x_1' \beta_1 + z_K' \beta_{2K} = x_K' \beta_K$$
where $z_K = z_K(x_2)$ are the basis transformations of $x_2$ (typically polynomials or splines) and $\beta_{2K}$ are coefficients. After transformation the regressors are $x_K = (x_1', z_K')'$ and the coefficients are $\beta_K = (\beta_1', \beta_{2K}')'$.
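Estimation then reduces to a single least-squares regression on the stacked regressors. Below is a minimal sketch; it uses a polynomial basis for $x_2$ purely for illustration (a spline basis could be substituted), and the function name `partially_linear_fit` is an assumption, not from any library.

```python
import numpy as np

def partially_linear_fit(X1, x2, y, p=4):
    """Series estimate of the partially linear model m(x1, x2) = x1'b1 + m2(x2).

    Replaces the unknown m2 with a p'th-order polynomial in x2 and runs one
    OLS regression on the stacked regressors (x1, z_K(x2)), as in the text.
    The basis omits a constant since X1 is assumed to contain an intercept.
    """
    Z2 = np.column_stack([x2 ** j for j in range(1, p + 1)])
    XK = np.column_stack([X1, Z2])
    betaK, *_ = np.linalg.lstsq(XK, y, rcond=None)
    k1 = X1.shape[1]
    return betaK[:k1], betaK[k1:]   # (beta1_hat, beta2K_hat)
```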
12.4 Additively Separable Models

When $x$ is multivariate a common simplification is to treat the regression function $m(x)$ as additively separable in the individual regressors, which means that
$$m(x) = m_1(x_1) + m_2(x_2) + \cdots + m_d(x_d).$$
Series methods are quite convenient for estimation of additively separable models, as we simply apply series expansions (polynomials or splines) separately to each component $m_j(x_j)$. The advantage of additive separability is the reduction in dimensionality. While an unconstrained $p$'th order polynomial has $(p+1)^d$ coefficients, an additively separable polynomial model has only $(p+1)d$ coefficients. This can be a major reduction in the number of coefficients. The disadvantage of this simplification is that the interaction effects have been eliminated.

The decision to impose additive separability can be based on an economic model which suggests the absence of interaction effects, or can be a model selection decision similar to the selection of the number of series terms. We will discuss model selection methods below.
12.5 Uniform Approximations

A good series approximation $m_K(x)$ will have the property that it gets close to the true CEF $m(x)$ as the complexity $K$ increases. Formal statements can be derived from the theory of functional analysis.

An elegant and famous theorem is the Stone-Weierstrass theorem (Weierstrass, 1885; Stone, 1937, 1948), which states that any continuous function can be arbitrarily uniformly well approximated by a polynomial of sufficiently high order. Specifically, the theorem states that for $x \in \mathbb{R}^d$, if $m(x)$ is continuous on a compact set $\mathcal{X}$, then for any $\epsilon > 0$ there exists a polynomial $m_K(x)$ of some order $K$ which is uniformly within $\epsilon$ of $m(x)$:
$$\sup_{x \in \mathcal{X}} |m_K(x) - m(x)| \le \epsilon. \qquad (12.2)$$
Thus the true unknown $m(x)$ can be arbitrarily well approximated by selecting a suitable polynomial.
Figure 12.1: True CEF and Best Approximations

The result (12.2) can be strengthened. In particular, if the $s$'th derivative of $m(x)$ is continuous then the uniform approximation error satisfies
$$\sup_{x \in \mathcal{X}} |m_K(x) - m(x)| = O\left(K^{-\alpha}\right) \qquad (12.3)$$
as $K \to \infty$ where $\alpha = s/d$. This result is more useful than (12.2) because it gives a rate at which the approximation $m_K(x)$ approaches $m(x)$ as $K$ increases.

Both (12.2) and (12.3) hold for spline approximations as well.

Intuitively, the number of derivatives $s$ indexes the smoothness of the function $m(x)$. (12.3) says that the best rate at which a polynomial or spline approximates the CEF $m(x)$ depends on the underlying smoothness of $m(x)$. The smoother is $m(x)$, the fewer series terms (polynomial order or spline knots) are needed to obtain a good approximation.

To illustrate polynomial approximation, Figure 12.1 displays the CEF $m(x) = x^{1/4}(1-x)^{1/2}$ on $x \in [0,1]$. In addition, the best approximations using polynomials of order $K = 3$, $K = 4$, and $K = 6$ are displayed. You can see how the approximation with $K = 3$ is fairly crude, but improves with $K = 4$ and especially $K = 6$. Approximations obtained with cubic splines are quite similar so are not displayed.
As a series approximation can be written as $m_K(x) = z_K(x)'\beta_K$ as in (12.1), the coefficient of the best uniform approximation (12.3) is
$$\beta_K^* = \operatorname*{argmin}_{\beta_K} \sup_{x \in \mathcal{X}} \left| z_K(x)'\beta_K - m(x) \right|. \qquad (12.4)$$
The approximation error is
$$r_K^*(x) = m(x) - z_K(x)'\beta_K^*.$$
We can write this as
$$m(x) = z_K(x)'\beta_K^* + r_K^*(x) \qquad (12.5)$$
to emphasize that the true conditional mean can be written as the linear approximation plus error. A useful consequence of equation (12.3) is
$$\sup_{x \in \mathcal{X}} |r_K^*(x)| \le O\left(K^{-\alpha}\right). \qquad (12.6)$$
Figure 12.2: True CEF, polynomial interpolation, and spline interpolation

12.6 Runge's Phenomenon

Despite the excellent approximation implied by the Stone-Weierstrass theorem, polynomials have the troubling disadvantage that they are very poor at simple interpolation. The problem is known as Runge's phenomenon, and is illustrated in Figure 12.2. The solid line is the CEF $m(x) = (1 + x^2)^{-1}$ displayed on $[-5, 5]$. The circles display the function at the $K = 11$ integers in this interval. The long dashes display the 10'th order polynomial fit through these points. Notice that the polynomial approximation is erratic and far from the smooth CEF. This discrepancy gets worse as the number of evaluation points increases, as Runge (1901) showed that the discrepancy increases to infinity with $K$.

In contrast, splines do not exhibit Runge's phenomenon. In Figure 12.2 the short dashes display a cubic spline with seven knots fit through the same points as the polynomial. While the fitted spline displays some oscillation relative to the true CEF, the oscillations are relatively moderate.

Because of Runge's phenomenon, high-order polynomials are not used for interpolation, and are not popular choices for high-order series approximations. Instead, splines are widely used.
12.7 Approximating Regression

For each observation $i$ we observe $(y_i, x_i)$ and then construct the regressor vector $z_{Ki} = z_K(x_i)$ using the series transformations. Stacking the observations in the matrices $y$ and $Z_K$, the least squares estimate of the coefficient $\beta_K$ in the series approximation $z_K(x)'\beta_K$ is
$$\hat\beta_K = \left( Z_K' Z_K \right)^{-1} Z_K' y,$$
and the least squares estimate of the regression function is
$$\hat m_K(x) = z_K(x)'\hat\beta_K. \qquad (12.7)$$

As we learned in Chapter 2, the least-squares coefficient is estimating the best linear predictor of $y_i$ given $z_{Ki}$. This is
$$\beta_K = E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} y_i \right).$$
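In code, the estimator (12.7) is a single least-squares fit on the constructed basis matrix. The following minimal sketch assumes the basis matrix has already been built (for example with the hypothetical `spline_basis` sketched earlier); the function name `series_fit` is illustrative.

```python
import numpy as np

def series_fit(Z, y):
    """Least-squares series estimate: beta_hat = (Z'Z)^{-1} Z'y as in (12.7).

    Z is the (n x K) matrix of basis transformations z_K(x_i); returns the
    coefficient vector and a function evaluating m_hat at new basis values.
    """
    beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta_hat, lambda Znew: Znew @ beta_hat

# Usage sketch (assuming the hypothetical spline_basis() from earlier):
#   Z = spline_basis(x, knots, p=3)
#   beta_hat, m_hat = series_fit(Z, y)
#   fitted = m_hat(spline_basis(x_grid, knots, p=3))
```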
Given this coefficient, the series approximation is $z_K(x)'\beta_K$ with approximation error
$$r_K(x) = m(x) - z_K(x)'\beta_K. \qquad (12.8)$$
The true CEF equation for $y_i$ is
$$y_i = m(x_i) + e_i \qquad (12.9)$$
with $e_i$ the CEF error. Defining $r_{Ki} = r_K(x_i)$, we find
$$y_i = z_{Ki}'\beta_K + e_{Ki}$$
where the equation error is
$$e_{Ki} = r_{Ki} + e_i.$$
Observe that the error $e_{Ki}$ includes the approximation error and thus does not have the properties of a CEF error.

In matrix notation we can write these equations as
$$y = Z_K \beta_K + r_K + e = Z_K \beta_K + e_K. \qquad (12.10)$$
We now impose some regularity conditions on the regression model to facilitate the theory. Define the $K \times K$ expected design matrix
$$Q_K = E\left( z_{Ki} z_{Ki}' \right),$$
let $\mathcal{X}$ denote the support of $x_i$, and define the largest normalized length of the regressor vector in the support of $x_i$,
$$\zeta_K = \sup_{x \in \mathcal{X}} \left( z_K(x)' Q_K^{-1} z_K(x) \right)^{1/2}. \qquad (12.11)$$
$\zeta_K$ will increase with $K$. For example, if the support of the variables $z_K(x_i)$ is the unit cube $[0,1]^K$, then you can compute that $\zeta_K = \sqrt{K}$. As discussed in Newey (1997) and Li and Racine (2007, Corollary 15.1), if the support of $x_i$ is compact then $\zeta_K = O(K)$ for polynomials and $\zeta_K = O(K^{1/2})$ for splines.
Assumption 12.7.1

1. For some $\alpha > 0$ the series approximation satisfies (12.3).

2. $E\left( e_i^2 \mid x_i \right) \le \bar\sigma^2 < \infty$.

3. $\lambda_{\min}(Q_K) \ge \underline\lambda > 0$.

4. $K = K(n)$ is a function of $n$ which satisfies $K/n \to 0$ and $\zeta_K^2 K / n \to 0$ as $n \to \infty$.

Assumptions 12.7.1.1 through 12.7.1.3 concern properties of the regression model. Assumption 12.7.1.1 holds with $\alpha = s/d$ if $\mathcal{X}$ is compact and the $s$'th derivative of $m(x)$ is continuous. Assumption 12.7.1.2 allows for conditional heteroskedasticity, but requires the conditional variance to be bounded. Assumption 12.7.1.3 excludes near-singular designs. Since estimates of the conditional mean are unchanged if we replace $z_{Ki}$ with $z_{Ki}^* = B_K z_{Ki}$ for any non-singular $B_K$, Assumption 12.7.1.3 can be viewed as holding after transformation by an appropriate non-singular $B_K$.
Assumption 12.7.1.4 concerns the choice of the number of series terms, which is under the control of the user. It specifies that $K$ can increase with sample size, but at a controlled rate of growth. Since $\zeta_K = O(K)$ for polynomials and $\zeta_K = O(K^{1/2})$ for splines, Assumption 12.7.1.4 is satisfied if $K^3/n \to 0$ for polynomials and $K^2/n \to 0$ for splines. This means that while the number of series terms $K$ can increase with the sample size, $K$ must increase at a much slower rate.
In Section 12.5 we introduced the best uniform approximation, and in this section we introduced the best linear predictor. What is the relationship? They may be similar in practice, but they are not the same and we should be careful to maintain the distinction. Note that from (12.5) we can write $m(x_i) = z_{Ki}'\beta_K^* + r_{Ki}^*$ where $r_{Ki}^* = r_K^*(x_i)$ satisfies $\sup_i |r_{Ki}^*| = O(K^{-\alpha})$ from (12.6). Then the best linear predictor equals
$$\begin{aligned}
\beta_K &= E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} y_i \right) \\
&= E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} m(x_i) \right) \\
&= E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} \left( z_{Ki}'\beta_K^* + r_{Ki}^* \right) \right) \\
&= \beta_K^* + E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} r_{Ki}^* \right).
\end{aligned}$$
Thus the difference between the two approximations is
$$r_K(x) - r_K^*(x) = z_K(x)'\left( \beta_K^* - \beta_K \right) = -z_K(x)'\, E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} r_{Ki}^* \right). \qquad (12.12)$$
Observe that by the properties of projection
$$E\left( r_{Ki}^{*2} \right) - E\left( r_{Ki}^* z_{Ki} \right)' E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} r_{Ki}^* \right) \ge 0 \qquad (12.13)$$
and by (12.6)
$$E\left( r_{Ki}^{*2} \right) = \int r_K^*(x)^2 f_x(x)\, dx \le O\left( K^{-2\alpha} \right). \qquad (12.14)$$
Then applying the Schwarz inequality to (12.12), Definition (12.11), (12.13) and (12.14), we find
$$\left| r_K(x) - r_K^*(x) \right| \le \left( z_K(x)' E\left( z_{Ki} z_{Ki}' \right)^{-1} z_K(x) \right)^{1/2} \left( E\left( r_{Ki}^* z_{Ki} \right)' E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} r_{Ki}^* \right) \right)^{1/2} \le O\left( \zeta_K K^{-\alpha} \right). \qquad (12.15)$$
It follows that the best linear predictor approximation error satisfies
$$\sup_{x \in \mathcal{X}} |r_K(x)| \le O\left( \zeta_K K^{-\alpha} \right). \qquad (12.16)$$
The bound (12.16) is probably not the best possible, but it shows that the best linear predictor satisfies a uniform approximation bound. Relative to (12.6), the rate is slower by the factor $\zeta_K$. The bound (12.16) is $o(1)$ as $K \to \infty$ if $\zeta_K K^{-\alpha} \to 0$. A sufficient condition is that $\alpha > 1$ ($s > d$) for polynomials and $\alpha > 1/2$ ($s > d/2$) for splines, where $d = \dim(x)$ and $s$ is the number of continuous derivatives of $m(x)$.

It is also useful to observe that since $\beta_K$ is the best linear approximation to $m(x_i)$ in mean-square (see Section 2.23), then
$$E\, r_{Ki}^2 = E\left( m(x_i) - z_{Ki}'\beta_K \right)^2 \le E\left( m(x_i) - z_{Ki}'\beta_K^* \right)^2 \le O\left( K^{-2\alpha} \right), \qquad (12.17)$$
the final inequality by (12.14).
12.8 Residuals and Regression Fit

The fitted regression at $x = x_i$ is $\hat m_K(x_i) = z_{Ki}'\hat\beta_K$ and the fitted residual is
$$\hat e_{iK} = y_i - \hat m_K(x_i).$$
The leave-one-out prediction errors are
$$\tilde e_{iK} = y_i - \hat m_{K,-i}(x_i) = y_i - z_{Ki}'\hat\beta_{K,-i}$$
where $\hat\beta_{K,-i}$ is the least-squares coefficient with the $i$'th observation omitted. Using (3.38) we can also write
$$\tilde e_{iK} = \hat e_{iK} \left( 1 - h_{Kii} \right)^{-1}$$
where $h_{Kii} = z_{Ki}'\left( Z_K' Z_K \right)^{-1} z_{Ki}$.

As for kernel regression, the prediction errors $\tilde e_{iK}$ are better estimates of the errors than the fitted residuals $\hat e_{iK}$, as they do not have the tendency to "over-fit" when the number of series terms is large.

To assess the fit of the nonparametric regression, the estimate of the mean-square prediction error is
$$\tilde\sigma_K^2 = \frac{1}{n} \sum_{i=1}^n \tilde e_{iK}^2 = \frac{1}{n} \sum_{i=1}^n \hat e_{iK}^2 \left( 1 - h_{Kii} \right)^{-2}$$
and the prediction $R^2$ is
$$\tilde R_K^2 = 1 - \frac{\sum_{i=1}^n \tilde e_{iK}^2}{\sum_{i=1}^n (y_i - \bar y)^2}.$$
12.9 Cross-Validation Model Selection

The cross-validation criterion for selection of the number of series terms is the MSPE
$$CV(K) = \tilde\sigma_K^2 = \frac{1}{n} \sum_{i=1}^n \hat e_{iK}^2 \left( 1 - h_{Kii} \right)^{-2}.$$
By selecting the series terms to minimize $CV(K)$, or equivalently maximize $\tilde R_K^2$, we have a data-dependent rule which is designed to produce estimates with low integrated mean-squared error (IMSE) and mean-squared forecast error (MSFE). As shown in Theorem 11.6.1, $CV(K)$ is an approximately unbiased estimate of the MSFE and IMSE, so finding the model which produces the smallest value of $CV(K)$ is a good indicator that the estimated model has small MSFE and IMSE. The proof of the result is the same for all nonparametric estimators (series as well as kernels) so does not need to be repeated here.

As a practical matter, an estimator corresponds to a set of regressors $z_{Ki}$, that is, a set of transformations of the original variables $x_i$. For each set of regressors, the regression is estimated and $CV(K)$ calculated, and the estimator is selected which has the smallest value of $CV(K)$. If there are $p$ ordered regressors, then there are $p$ possible estimators. Typically, this calculation is simple even if $p$ is large. However, if the $p$ regressors are unordered (and this is typical) then there are $2^p$ possible subsets of conceivable models. If $p$ is even moderately large, $2^p$ can be immensely large so brute-force computation of all models may be computationally demanding.
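The criterion is cheap to compute because the leave-one-out errors have the leverage-value shortcut used above. The following sketch is illustrative (the function name `cv_criterion` and the commented usage reusing the hypothetical `spline_basis` from earlier are assumptions).

```python
import numpy as np

def cv_criterion(Z, y):
    """Leave-one-out cross-validation criterion CV(K) for a series regression.

    Uses the shortcut e_tilde_i = e_hat_i / (1 - h_ii), where h_ii is the i'th
    diagonal element of Z (Z'Z)^{-1} Z'.
    """
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    h = np.einsum('ij,jk,ik->i', Z, ZtZ_inv, Z)   # leverage values h_Kii
    e_hat = y - Z @ (ZtZ_inv @ (Z.T @ y))         # fitted residuals
    return np.mean((e_hat / (1.0 - h)) ** 2)

# Usage sketch: choose the number of spline knots N by minimizing CV(K).
#   cv_by_N = {N: cv_criterion(spline_basis(x, np.linspace(0, 1, N + 2)[1:-1]), y)
#              for N in range(1, 10)}
#   best_N = min(cv_by_N, key=cv_by_N.get)
```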
12.10 Convergence in Mean-Square

The series estimates $\hat\beta_K$ are indexed by $K$. The point of nonparametric estimation is to let $K$ be flexible so as to incorporate greater complexity when the data are sufficiently informative. This means that $K$ will typically be increasing with sample size $n$. This invalidates conventional asymptotic distribution theory. However, we can develop extensions which use appropriate matrix norms, and by focusing on real-valued functions of the parameters, including the estimated regression function itself.

The asymptotic theory we present in this and the next several sections is largely taken from Newey (1997).

Our first main result shows that the least-squares estimate converges to $\beta_K$ in mean-square distance.

Theorem 12.10.1 Under Assumption 12.7.1, as $n \to \infty$,
$$\left( \hat\beta_K - \beta_K \right)' Q_K \left( \hat\beta_K - \beta_K \right) = O_p\left( \frac{K}{n} \right) + o_p\left( K^{-2\alpha} \right). \qquad (12.18)$$

The proof of Theorem 12.10.1 is rather technical and deferred to Section 12.16.

The rate of convergence in (12.18) has two terms. The $O_p(K/n)$ term is due to estimation variance. Note in contrast that the corresponding rate would be $O_p(1/n)$ in the parametric case. The difference is that in the parametric case we assume that the number of regressors $K$ is fixed as $n$ increases, while in the nonparametric case we allow the number of regressors $K$ to be flexible. As $K$ increases, the estimation variance increases. The $o_p\left( K^{-2\alpha} \right)$ term in (12.18) is due to the series approximation error.
Using Theorem 12.10.1 we can establish the following convergence rate for the estimated regression function.

Theorem 12.10.2 Under Assumption 12.7.1, as $n \to \infty$,
$$\int \left( \hat m_K(x) - m(x) \right)^2 f_x(x)\, dx = O_p\left( \frac{K}{n} \right) + O_p\left( K^{-2\alpha} \right). \qquad (12.19)$$

Theorem 12.10.2 shows that the integrated squared difference between the fitted regression and the true CEF converges in probability to zero if $K \to \infty$ as $n \to \infty$. The convergence results of Theorem 12.10.2 show that the number of series terms $K$ involves a trade-off similar to the role of the bandwidth $h$ in kernel regression. Larger $K$ implies smaller approximation error but increased estimation variance.

The optimal rate which minimizes the average squared error in (12.19) is $K = O\left( n^{1/(1+2\alpha)} \right)$, yielding an optimal rate of convergence in (12.19) of $O_p\left( n^{-2\alpha/(1+2\alpha)} \right)$. This rate depends on the unknown smoothness $\alpha$ of the true CEF (the number of derivatives $s$) and so does not directly suggest a practical rule for determining $K$. Still, the implication is that when the function being estimated is less smooth ($\alpha$ is small) then it is necessary to use a larger number of series terms $K$ to reduce the bias. In contrast, when the function is more smooth then it is better to use a smaller number of series terms $K$ to reduce the variance.

To establish (12.19), using (12.7) and (12.8) we can write
$$\hat m_K(x) - m(x) = z_K(x)'\left( \hat\beta_K - \beta_K \right) - r_K(x). \qquad (12.20)$$
Since $e_{Ki}$ are projection errors, they satisfy $E\left( z_{Ki} e_{Ki} \right) = 0$ and thus $E\left( z_{Ki} r_{Ki} \right) = 0$. This means $\int z_K(x) r_K(x) f_x(x)\, dx = 0$. Also observe that $Q_K = \int z_K(x) z_K(x)' f_x(x)\, dx$ and $E\, r_{Ki}^2 = \int r_K(x)^2 f_x(x)\, dx$. Then
$$\int \left( \hat m_K(x) - m(x) \right)^2 f_x(x)\, dx = \left( \hat\beta_K - \beta_K \right)' Q_K \left( \hat\beta_K - \beta_K \right) + E\, r_{Ki}^2 \le O_p\left( \frac{K}{n} \right) + O_p\left( K^{-2\alpha} \right)$$
by (12.18) and (12.17), establishing (12.19).
12.11 Uniform Convergence

Theorem 12.10.2 established conditions under which $\hat m_K(x)$ is consistent in a squared error norm. It is also of interest to know the rate at which the largest deviation converges to zero. We have the following rate.

Theorem 12.11.1 Under Assumption 12.7.1, as $n \to \infty$,
$$\sup_{x \in \mathcal{X}} \left| \hat m_K(x) - m(x) \right| = O_p\left( \sqrt{\frac{\zeta_K^2 K}{n}} \right) + O_p\left( \zeta_K K^{-\alpha} \right). \qquad (12.21)$$

Relative to Theorem 12.10.2, the error has been increased multiplicatively by $\zeta_K$. This slower convergence rate is a penalty for the stronger uniform convergence, though it is probably not the best possible rate. Examining the bound in (12.21), notice that the first term is $o_p(1)$ under Assumption 12.7.1.4. The second term is $o_p(1)$ if $\zeta_K K^{-\alpha} \to 0$, which requires that $K \to \infty$ and that $\alpha$ be sufficiently large. A sufficient condition is that $s > d$ for polynomials and $s > d/2$ for splines, where $d = \dim(x)$ and $s$ is the number of continuous derivatives of $m(x)$. Thus higher dimensional $x$ require a smoother CEF $m(x)$ to ensure that the series estimate $\hat m_K(x)$ is uniformly consistent.

The convergence (12.21) is straightforward to show using (12.18). Using (12.20), the Triangle Inequality, the Schwarz inequality (A.10), Definition (12.11), (12.18) and (12.16),
$$\begin{aligned}
\sup_{x \in \mathcal{X}} \left| \hat m_K(x) - m(x) \right|
&\le \sup_{x \in \mathcal{X}} \left| z_K(x)'\left( \hat\beta_K - \beta_K \right) \right| + \sup_{x \in \mathcal{X}} |r_K(x)| \\
&\le \sup_{x \in \mathcal{X}} \left( z_K(x)' Q_K^{-1} z_K(x) \right)^{1/2} \left( \left( \hat\beta_K - \beta_K \right)' Q_K \left( \hat\beta_K - \beta_K \right) \right)^{1/2} + O\left( \zeta_K K^{-\alpha} \right) \\
&\le \zeta_K \left( O_p\left( \frac{K}{n} \right) + O_p\left( K^{-2\alpha} \right) \right)^{1/2} + O\left( \zeta_K K^{-\alpha} \right) \\
&= O_p\left( \sqrt{\frac{\zeta_K^2 K}{n}} \right) + O_p\left( \zeta_K K^{-\alpha} \right). \qquad (12.22)
\end{aligned}$$
This is (12.21).
12.12 Asymptotic Normality

One advantage of series methods is that the estimators are (in finite samples) equivalent to parametric estimators, so it is easy to calculate covariance matrix estimates. We now show that we can also justify normal asymptotic approximations.

The theory we present in this section will apply to any linear function of the regression function. That is, we allow the parameter of interest to be any non-trivial real-valued linear function of the entire regression function $m(\cdot)$,
$$\theta = a(m).$$
This includes the regression function $m(x)$ at a given point $x$, derivatives of $m(x)$, and integrals over $m(x)$. Given $\hat m_K(x) = z_K(x)'\hat\beta_K$ as an estimator for $m(x)$, the estimator for $\theta$ is
$$\hat\theta_K = a(\hat m_K) = a_K'\hat\beta_K$$
for some $K \times 1$ vector of constants $a_K \ne 0$. (The relationship $a(\hat m_K) = a_K'\hat\beta_K$ follows since $a$ is linear in $m$ and $\hat m_K$ is linear in $\hat\beta_K$.)

If $K$ were fixed as $n \to \infty$, then by standard asymptotic theory we would expect $\hat\theta_K$ to be asymptotically normal with variance
$$v_K = a_K' Q_K^{-1} \Omega_K Q_K^{-1} a_K$$
where
$$\Omega_K = E\left( z_{Ki} z_{Ki}' e_{Ki}^2 \right).$$
The standard justification, however, is not valid in the nonparametric case, in part because $v_K$ may diverge as $K \to \infty$, and in part due to the finite sample bias due to the approximation error. Therefore a new theory is required. Interestingly, it turns out that in the nonparametric case $\hat\theta_K$ is still asymptotically normal, and $v_K$ is still the appropriate variance for $\hat\theta_K$. The proof is different than the parametric case as the dimensions of the matrices are increasing with $K$, and we need to be attentive to the estimator's bias due to the series approximation.

Theorem 12.12.1 Under Assumption 12.7.1, if in addition $E\left( e_i^4 \mid x_i \right) \le \kappa^4 < \infty$, $E\left( e_i^2 \mid x_i \right) \ge \sigma^2 > 0$, and $\zeta_K K^{-\alpha} = O(1)$, then as $n \to \infty$,
$$\frac{\sqrt{n}\left( \hat\theta_K - \theta + a(r_K) \right)}{v_K^{1/2}} \xrightarrow{d} \mathrm{N}(0, 1). \qquad (12.23)$$
The proof of Theorem 12.12.1 can be found in Section 12.16.

Theorem 12.12.1 shows that the estimator $\hat\theta_K$ is approximately normal with bias $-a(r_K)$ and variance $v_K/n$. The variance is the same as in the parametric case, but the asymptotic distribution contains an asymptotic bias, similar as is found in kernel regression. We discuss the bias in more detail below.

Notice that Theorem 12.12.1 requires $\zeta_K K^{-\alpha} = O(1)$, which is similar to the condition found in Theorem 12.11.1 to establish uniform convergence. The bound $\zeta_K K^{-\alpha} = O(1)$ allows $K$ to be constant with $n$ or to increase with $n$. However, when $K$ is increasing the bound requires that $\alpha$ be sufficiently large so that $K^{\alpha}$ grows faster than $\zeta_K$. A sufficient condition is that $s \ge d$ for polynomials and $s \ge d/2$ for splines. The fact that the condition allows for $K$ to be constant means that Theorem 12.12.1 includes parametric least-squares as a special case with explicit attention to estimation bias.

One useful message from Theorem 12.12.1 is that the classic variance formula $v_K$ for $\hat\theta_K$ still applies for series regression. Indeed, we can estimate the asymptotic variance using the standard White formula
$$\hat v_K = a_K' \hat Q_K^{-1} \hat\Omega_K \hat Q_K^{-1} a_K$$
$$\hat\Omega_K = \frac{1}{n} \sum_{i=1}^n z_{Ki} z_{Ki}' \hat e_{iK}^2$$
$$\hat Q_K = \frac{1}{n} \sum_{i=1}^n z_{Ki} z_{Ki}'.$$
Hence a standard error for $\hat\theta_K$ is
$$\hat s(\hat\theta_K) = \sqrt{\frac{1}{n}\, a_K' \hat Q_K^{-1} \hat\Omega_K \hat Q_K^{-1} a_K}.$$
It can be shown (Newey, 1997) that $\hat v_K / v_K \xrightarrow{p} 1$ as $n \to \infty$, and thus the distribution in (12.23) is unchanged if $v_K$ is replaced with $\hat v_K$.

Theorem 12.12.1 shows that the estimator $\hat\theta_K$ has a bias term $a(r_K)$. What is this? It is the same transformation of the function $r_K(x)$ as $\theta = a(m)$ is of the regression function $m(x)$. For example, if $\theta = m(x)$ is the regression at a fixed point $x$, then $a(r_K) = r_K(x)$, the approximation error at the same point. If $\theta = \frac{d}{dx} m(x)$ is the regression derivative, then $a(r_K) = \frac{d}{dx} r_K(x)$ is the derivative of the approximation error.

This means that the bias in the estimator $\hat\theta_K$ for $\theta$ shown in Theorem 12.12.1 is simply the approximation error, transformed by the functional of interest. If we are estimating the regression function then the bias is the error in approximating the regression function; if we are estimating the regression derivative then the bias is the error in the derivative of the approximation error for the regression function.
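As a practical illustration, the sketch below computes a standard error for $\hat\theta_K = a_K'\hat\beta_K$ from the White-type formula above; the function name `series_se` and its arguments are illustrative, not from any library.

```python
import numpy as np

def series_se(Z, y, a):
    """Standard error for theta_hat = a' beta_hat in a series regression.

    Computes v_hat = a' Q_hat^{-1} Omega_hat Q_hat^{-1} a and returns
    sqrt(v_hat / n).  Z: (n x K) basis matrix, y: (n,), a: (K,) vector a_K
    (e.g. a = z_K(x0) for the regression at a point x0).
    """
    n = len(y)
    Q_hat = Z.T @ Z / n
    beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
    e_hat = y - Z @ beta_hat
    Omega_hat = (Z * e_hat[:, None] ** 2).T @ Z / n   # (1/n) sum z z' e^2
    Qinv_a = np.linalg.solve(Q_hat, a)
    v_hat = Qinv_a @ Omega_hat @ Qinv_a
    return np.sqrt(v_hat / n)
```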
12.13 Asymptotic Normality with Undersmoothing

An unpleasant aspect about Theorem 12.12.1 is the bias term. An interesting trick is that this bias term can be made asymptotically negligible if we assume that $K$ increases with $n$ at a sufficiently fast rate.

Theorem 12.13.1 Under Assumption 12.7.1, if in addition $E\left( e_i^4 \mid x_i \right) \le \kappa^4 < \infty$, $E\left( e_i^2 \mid x_i \right) \ge \sigma^2 > 0$, $a(r_K^*) \le O\left( K^{-\alpha} \right)$, $n K^{-2\alpha} \to 0$, and $a_K' Q_K^{-1} a_K$ is bounded away from zero, then
$$\frac{\sqrt{n}\left( \hat\theta_K - \theta \right)}{v_K^{1/2}} \xrightarrow{d} \mathrm{N}(0, 1). \qquad (12.24)$$

The condition $a(r_K^*) \le O\left( K^{-\alpha} \right)$ states that the function of interest (for example, the regression function, its derivative, or its integral) applied to the uniform approximation error converges to zero as the number of terms $K$ in the series approximation increases. If $a(m) = m(x)$ then this condition holds by (12.6).

The condition that $a_K' Q_K^{-1} a_K$ is bounded away from zero is simply a technical requirement to exclude degeneracy.
The critical condition is the assumption that $n K^{-2\alpha} \to 0$. This requires that $K \to \infty$ at a rate faster than $n^{1/2\alpha}$. This is a troubling condition. The optimal rate for estimation of $m(x)$ is $K = O\left( n^{1/(1+2\alpha)} \right)$. If we set $K = n^{1/(1+2\alpha)}$ by this rule then $n K^{-2\alpha} = n^{1/(1+2\alpha)} \to \infty$, not zero. Thus this assumption is equivalent to assuming that $K$ is much larger than optimal. The reason why this trick works (that is, why the bias is negligible) is that by increasing $K$, the asymptotic bias decreases and the asymptotic variance increases, and thus the variance dominates. Because $K$ is larger than optimal, we typically say that $\hat m_K(x)$ is undersmoothed relative to the optimal series estimator.

Many authors like to focus their asymptotic theory on the assumptions in Theorem 12.13.1, as the distribution (12.24) appears cleaner. However, it is a poor use of asymptotic theory. There are three problems with the assumption $n K^{-2\alpha} \to 0$ and the approximation (12.24). First, it says that if we intentionally pick $K$ to be larger than optimal, we can increase the estimation variance relative to the bias so the variance will dominate the bias. But why would we want to intentionally use an estimator which is sub-optimal? Second, the assumption $n K^{-2\alpha} \to 0$ does not eliminate the asymptotic bias, it only makes it of lower order than the variance. So the approximation (12.24) is technically valid, but the missing asymptotic bias term is just slightly smaller in asymptotic order, and thus still relevant in finite samples. Third, the condition $n K^{-2\alpha} \to 0$ is just an assumption, it has nothing to do with actual empirical practice. Thus the difference between (12.23) and (12.24) is in the assumptions, not in the actual reality or in the actual empirical practice. Eliminating a nuisance (the asymptotic bias) through an assumption is a trick, not a substantive use of theory. My strong view is that the result (12.23) is more informative than (12.24). It shows that the asymptotic distribution is normal but has a non-trivial finite sample bias.
12.14 Regression Estimation

A special yet important example of a linear estimator of the regression function is the regression function at a fixed point $x$. In the notation of the previous section, $a(m) = m(x)$ and $a_K = z_K(x)$. The series estimator of $m(x)$ is $\hat\theta_K = \hat m_K(x) = z_K(x)'\hat\beta_K$. As this is a key problem of interest, we restate the asymptotic results of Theorems 12.12.1 and 12.13.1 for this estimator.

Theorem 12.14.1 Under Assumption 12.7.1, if in addition $E\left( e_i^4 \mid x_i \right) \le \kappa^4 < \infty$, $E\left( e_i^2 \mid x_i \right) \ge \sigma^2 > 0$, and $\zeta_K K^{-\alpha} = O(1)$, then as $n \to \infty$,
$$\frac{\sqrt{n}\left( \hat m_K(x) - m(x) + r_K(x) \right)}{v_K(x)^{1/2}} \xrightarrow{d} \mathrm{N}(0, 1) \qquad (12.25)$$
where
$$v_K(x) = z_K(x)' Q_K^{-1} \Omega_K Q_K^{-1} z_K(x).$$
If $\zeta_K K^{-\alpha} = O(1)$ is replaced by $n K^{-2\alpha} \to 0$, and $z_K(x)' Q_K^{-1} z_K(x)$ is bounded away from zero, then
$$\frac{\sqrt{n}\left( \hat m_K(x) - m(x) \right)}{v_K(x)^{1/2}} \xrightarrow{d} \mathrm{N}(0, 1). \qquad (12.26)$$
There are two important features about the asymptotic distribution (12.25).

First, as mentioned in the previous section, it shows how to construct asymptotic standard errors for the CEF $m(x)$. These are
$$\hat s(x) = \sqrt{\frac{1}{n}\, z_K(x)' \hat Q_K^{-1} \hat\Omega_K \hat Q_K^{-1} z_K(x)}.$$

Second, (12.25) shows that the estimator has the asymptotic bias component $r_K(x)$. This is due to the fact that the finite order series is an approximation to the unknown CEF $m(x)$, and this results in finite sample bias.

The asymptotic distribution (12.26) shows that the bias term is negligible if $K$ diverges fast enough so that $n K^{-2\alpha} \to 0$. As discussed in the previous section, this means that $K$ is larger than optimal.

The assumption that $z_K(x)' Q_K^{-1} z_K(x)$ is bounded away from zero is a technical condition to exclude degenerate cases, and is automatically satisfied if $z_K(x)$ includes an intercept.

Plots of the CEF estimate $\hat m_K(x)$ can be accompanied by 95% confidence intervals $\hat m_K(x) \pm 2 \hat s(x)$. As we discussed in the chapter on kernel regression, this can be viewed as a confidence interval for the pseudo-true CEF $m_K^*(x) = m(x) - r_K(x)$, not for the true $m(x)$. As for kernel regression, the difference is the unavoidable consequence of nonparametric estimation.
12.15 Kernel Versus Series Regression

In this and the previous chapter we have presented two distinct methods of nonparametric regression based on kernel methods and series methods. Which should be used in practice? Both methods have advantages and disadvantages, and there is no clear overall winner.

First, while the asymptotic theory of the two estimators appears quite different, they are actually rather closely related. When the regression function $m(x)$ is twice differentiable ($s = 2$) then the rate of convergence of both the MSE of the kernel regression estimator with optimal bandwidth $h$ and the series estimator with optimal $K$ is $n^{-2/(d+4)}$. There is no difference. If the regression function is smoother than twice differentiable ($s > 2$) then the rate of convergence of the series estimator improves. This may appear to be an advantage for series methods, but kernel regression can also take advantage of the higher smoothness by using so-called higher-order kernels or local polynomial regression, so perhaps this advantage is not too large.

Both estimators are asymptotically normal and have straightforward asymptotic standard error formulae. The series estimators are a bit more convenient for this purpose, as the classic parametric standard error formulas work without amendment.

An advantage of kernel methods is that their distributional theory is easier to derive. The theory is all based on local averages, which is relatively straightforward. In contrast, series theory is more challenging, dealing with increasing parameter spaces. An important difference in the theory is that for kernel estimators we have explicit representations for the bias while we only have rates for series methods. This means that plug-in methods can be used for bandwidth selection in kernel regression. However, typically we rely on cross-validation, which is equally applicable in both kernel and series regression.

Kernel methods are also relatively easy to implement when the dimension $d$ is large. There is not a major change in the methodology as $d$ increases. In contrast, series methods become quite cumbersome as $d$ increases as the number of cross-terms increases exponentially.

A major advantage of series methods is that they have an inherently high degree of flexibility, and the user is able to implement shape restrictions quite easily. For example, in series estimation it is relatively simple to implement a partially linear CEF, an additively separable CEF, monotonicity, concavity or convexity. These restrictions are harder to implement in kernel regression.
12.16 Technical Proofs

Define $z_{Ki} = z_K(x_i)$ and let $Q_K^{1/2}$ denote the positive definite square root of $Q_K$. As mentioned before Theorem 12.10.1, the regression problem is unchanged if we replace $z_{Ki}$ with a rotated regressor such as $z_{Ki}^* = Q_K^{-1/2} z_{Ki}$. This is a convenient choice, for then $E\left( z_{Ki}^* z_{Ki}^{*\prime} \right) = I_K$. For notational convenience we will simply write the transformed regressors as $z_{Ki}$ and set $Q_K = I_K$.

We start with some convergence results for the sample design matrix
$$\hat Q_K = \frac{1}{n} Z_K' Z_K = \frac{1}{n} \sum_{i=1}^n z_{Ki} z_{Ki}'.$$

Theorem 12.16.1 Under Assumption 12.7.1 and $Q_K = I_K$, as $n \to \infty$,
$$\left\| \hat Q_K - I_K \right\| = o_p(1) \qquad (12.27)$$
and
$$\lambda_{\min}(\hat Q_K) \xrightarrow{p} 1. \qquad (12.28)$$
Proof. Since
$$\left\| \hat Q_K - I_K \right\|^2 = \sum_{j=1}^K \sum_{\ell=1}^K \left( \frac{1}{n} \sum_{i=1}^n \left( z_{jKi} z_{\ell Ki} - E\, z_{jKi} z_{\ell Ki} \right) \right)^2$$
then
$$E\left\| \hat Q_K - I_K \right\|^2 = \sum_{j=1}^K \sum_{\ell=1}^K \operatorname{var}\left( \frac{1}{n} \sum_{i=1}^n z_{jKi} z_{\ell Ki} \right) = n^{-1} \sum_{j=1}^K \sum_{\ell=1}^K \operatorname{var}\left( z_{jKi} z_{\ell Ki} \right) \le n^{-1} E\left( \sum_{j=1}^K z_{jKi}^2 \sum_{\ell=1}^K z_{\ell Ki}^2 \right) = n^{-1} E\left( z_{Ki}' z_{Ki} \right)^2. \qquad (12.29)$$
Since $z_{Ki}' z_{Ki} \le \zeta_K^2$ by definition (12.11) and using (A.1) we find
$$E\left( z_{Ki}' z_{Ki} \right) = \operatorname{tr}\left( E\, z_{Ki} z_{Ki}' \right) = \operatorname{tr} I_K = K, \qquad (12.30)$$
so that
$$E\left( z_{Ki}' z_{Ki} \right)^2 \le \zeta_K^2 K \qquad (12.31)$$
and hence (12.29) is $o(1)$ under Assumption 12.7.1.4. Theorem 5.11.1 shows that this implies (12.27).

Let $\lambda_1, \lambda_2, \ldots, \lambda_K$ be the eigenvalues of $\hat Q_K - I_K$, which are real as $\hat Q_K - I_K$ is symmetric. Then
$$\left| \lambda_{\min}(\hat Q_K) - 1 \right| = \left| \lambda_{\min}(\hat Q_K - I_K) \right| \le \left( \sum_{\ell=1}^K \lambda_\ell^2 \right)^{1/2} = \left\| \hat Q_K - I_K \right\|$$
where the second equality is (A.8). This is $o_p(1)$ by (12.27), establishing (12.28).
Proof of Theorem 12.10.1. As above, assume that the regressors have been transformed so that $Q_K = I_K$.

From expression (12.10) we can substitute to find
$$\hat\beta_K - \beta_K = \left( Z_K' Z_K \right)^{-1} Z_K' e_K = \hat Q_K^{-1} \left( \frac{1}{n} Z_K' e_K \right). \qquad (12.32)$$
Using (12.32) and the Quadratic Inequality (A.14),
$$\left( \hat\beta_K - \beta_K \right)'\left( \hat\beta_K - \beta_K \right) = n^{-2} \left( e_K' Z_K \hat Q_K^{-1} \hat Q_K^{-1} Z_K' e_K \right) \le \left( \lambda_{\max}\left( \hat Q_K^{-1} \right) \right)^2 n^{-2} \left( e_K' Z_K Z_K' e_K \right). \qquad (12.33)$$
Observe that (12.28) implies
$$\lambda_{\max}\left( \hat Q_K^{-1} \right) = \left( \lambda_{\min}\left( \hat Q_K \right) \right)^{-1} = O_p(1). \qquad (12.34)$$
Since $e_{Ki} = e_i + r_{Ki}$, and using Assumption 12.7.1.2 and (12.16), then
$$\sup_i E\left( e_{Ki}^2 \mid x_i \right) \le \bar\sigma^2 + \sup_i r_{Ki}^2 \le \bar\sigma^2 + O\left( \zeta_K^2 K^{-2\alpha} \right). \qquad (12.35)$$
As $e_{Ki}$ are projection errors, they satisfy $E\left( z_{Ki} e_{Ki} \right) = 0$. Since the observations are independent, using (12.30) and (12.35), then
$$\begin{aligned}
n^{-2} E\left( e_K' Z_K Z_K' e_K \right)
&= n^{-2} E\left( \sum_{i=1}^n e_{Ki} z_{Ki}' \sum_{j=1}^n z_{Kj} e_{Kj} \right) = n^{-2} \sum_{i=1}^n E\left( z_{Ki}' z_{Ki} e_{Ki}^2 \right) \\
&\le n^{-1} E\left( z_{Ki}' z_{Ki} \right) \sup_i E\left( e_{Ki}^2 \mid x_i \right) \le \bar\sigma^2 \frac{K}{n} + O\left( \frac{\zeta_K^2 K^{1-2\alpha}}{n} \right) = \bar\sigma^2 \frac{K}{n} + o\left( K^{-2\alpha} \right) \qquad (12.36)
\end{aligned}$$
since $\zeta_K^2 K / n = o(1)$ by Assumption 12.7.1.4. Theorem 5.11.1 shows that this implies
$$n^{-2} e_K' Z_K Z_K' e_K = O_p\left( \frac{K}{n} \right) + o_p\left( K^{-2\alpha} \right). \qquad (12.37)$$
Together, (12.33), (12.34) and (12.37) imply (12.18).
Proof of Theorem 12.12.1. As above, assume that the regressors have been transformed so that $Q_K = I_K$.

Using $m(x) = z_K(x)'\beta_K + r_K(x)$ and linearity,
$$\theta = a(m) = a\left( z_K(x)'\beta_K \right) + a(r_K) = a_K'\beta_K + a(r_K).$$
Combined with (12.32) we find
$$\hat\theta_K - \theta + a(r_K) = a_K'\left( \hat\beta_K - \beta_K \right) = \frac{1}{n} a_K' \hat Q_K^{-1} Z_K' e_K$$
and thus
$$\begin{aligned}
\sqrt{\frac{n}{v_K}}\left( \hat\theta_K - \theta + a(r_K) \right)
&= \sqrt{\frac{n}{v_K}}\, a_K'\left( \hat\beta_K - \beta_K \right) = \sqrt{\frac{1}{n v_K}}\, a_K' \hat Q_K^{-1} Z_K' e_K \\
&= \frac{1}{\sqrt{n v_K}}\, a_K' Z_K' e_K \qquad (12.38) \\
&\quad + \frac{1}{\sqrt{n v_K}}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' e \qquad (12.39) \\
&\quad + \frac{1}{\sqrt{n v_K}}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' r_K, \qquad (12.40)
\end{aligned}$$
where we have used $e_K = e + r_K$. We now take the terms in (12.38)-(12.40) separately.

First, take (12.38). We can write
$$\frac{1}{\sqrt{n v_K}}\, a_K' Z_K' e_K = \frac{1}{\sqrt{n v_K}} \sum_{i=1}^n a_K' z_{Ki} e_{Ki}. \qquad (12.41)$$
Observe that $a_K' z_{Ki} e_{Ki}$ are independent across $i$, mean zero, and have variance
$$E\left( a_K' z_{Ki} e_{Ki} \right)^2 = a_K' E\left( z_{Ki} z_{Ki}' e_{Ki}^2 \right) a_K = v_K.$$
We will apply the Lindeberg CLT 5.7.2, for which it is sufficient to verify Lyapunov's condition (5.6):
$$\frac{1}{n^2 v_K^2} \sum_{i=1}^n E\left( a_K' z_{Ki} e_{Ki} \right)^4 = \frac{1}{n v_K^2} E\left( \left( a_K' z_{Ki} \right)^4 e_{Ki}^4 \right) \to 0. \qquad (12.42)$$
The assumption $\zeta_K K^{-\alpha} = O(1)$ together with (12.16) means $\sup_i |r_{Ki}| \le \kappa_1$ for some $\kappa_1 < \infty$. Then by the $c_r$ inequality and $E\left( e_i^4 \mid x_i \right) \le \kappa^4$,
$$\sup_i E\left( e_{Ki}^4 \mid x_i \right) \le 8 \sup_i \left( E\left( e_i^4 \mid x_i \right) + r_{Ki}^4 \right) \le 8\left( \kappa^4 + \kappa_1^4 \right). \qquad (12.43)$$
Using (12.43), the Schwarz Inequality, and (12.31),
$$E\left( \left( a_K' z_{Ki} \right)^4 e_{Ki}^4 \right) = E\left( \left( a_K' z_{Ki} \right)^4 E\left( e_{Ki}^4 \mid x_i \right) \right) \le 8\left( \kappa^4 + \kappa_1^4 \right) E\left( a_K' z_{Ki} \right)^4 \le 8\left( \kappa^4 + \kappa_1^4 \right) \left( a_K' a_K \right)^2 E\left( z_{Ki}' z_{Ki} \right)^2 \le 8\left( \kappa^4 + \kappa_1^4 \right) \left( a_K' a_K \right)^2 \zeta_K^2 K. \qquad (12.44)$$
Since $E\left( e_{Ki}^2 \mid x_i \right) = E\left( e_i^2 \mid x_i \right) + r_{Ki}^2 \ge \sigma^2$,
$$v_K = a_K' E\left( z_{Ki} z_{Ki}' e_{Ki}^2 \right) a_K \ge \sigma^2 a_K' E\left( z_{Ki} z_{Ki}' \right) a_K = \sigma^2 a_K' a_K. \qquad (12.45)$$
Equations (12.44) and (12.45) combine to show that
$$\frac{1}{n v_K^2} E\left( \left( a_K' z_{Ki} \right)^4 e_{Ki}^4 \right) \le \frac{8\left( \kappa^4 + \kappa_1^4 \right)}{\sigma^4} \frac{\zeta_K^2 K}{n} = o(1)$$
under Assumption 12.7.1.4. This establishes Lyapunov's condition (12.42). Hence the Lindeberg CLT applies to (12.41) and we conclude
$$\frac{1}{\sqrt{n v_K}}\, a_K' Z_K' e_K \xrightarrow{d} \mathrm{N}(0, 1). \qquad (12.46)$$

Second, take (12.39). Since $E\left( e \mid X \right) = 0$, then applying $E\left( e_i^2 \mid x_i \right) \le \bar\sigma^2$, the Schwarz and Norm Inequalities, (12.45), (12.34) and (12.27),
$$\begin{aligned}
E\left( \left( \frac{1}{\sqrt{n v_K}}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' e \right)^2 \,\middle|\, X \right)
&= \frac{1}{n v_K}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' E\left( e e' \mid X \right) Z_K \left( \hat Q_K^{-1} - I_K \right) a_K \\
&\le \frac{\bar\sigma^2}{v_K}\, a_K'\left( \hat Q_K^{-1} - I_K \right) \hat Q_K \left( \hat Q_K^{-1} - I_K \right) a_K \\
&= \frac{\bar\sigma^2}{v_K}\, a_K'\left( \hat Q_K - I_K \right) \hat Q_K^{-1} \left( \hat Q_K - I_K \right) a_K \\
&\le \frac{\bar\sigma^2 a_K' a_K}{v_K}\, \lambda_{\max}\left( \hat Q_K^{-1} \right) \left\| \hat Q_K - I_K \right\|^2 \le \frac{\bar\sigma^2}{\sigma^2}\, o_p(1).
\end{aligned}$$
This establishes
$$\frac{1}{\sqrt{n v_K}}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' e \xrightarrow{p} 0. \qquad (12.47)$$

Third, take (12.40). By the Cauchy-Schwarz inequality, (12.45), and the Quadratic Inequality,
$$\left( \frac{1}{\sqrt{n v_K}}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' r_K \right)^2 \le \frac{a_K' a_K}{n v_K}\, r_K' Z_K \left( \hat Q_K^{-1} - I_K \right)\left( \hat Q_K^{-1} - I_K \right) Z_K' r_K \le \frac{1}{\sigma^2}\, \lambda_{\max}\left( \hat Q_K^{-1} - I_K \right)^2 \frac{1}{n}\, r_K' Z_K Z_K' r_K. \qquad (12.48)$$
Observe that since the observations are independent and $E\left( z_{Ki} r_{Ki} \right) = 0$, $z_{Ki}' z_{Ki} \le \zeta_K^2$, and (12.17),
$$E\left( \frac{1}{n}\, r_K' Z_K Z_K' r_K \right) = E\left( \frac{1}{n} \sum_{i=1}^n r_{Ki} z_{Ki}' \sum_{j=1}^n z_{Kj} r_{Kj} \right) = E\left( \frac{1}{n} \sum_{i=1}^n z_{Ki}' z_{Ki} r_{Ki}^2 \right) \le \zeta_K^2\, E\left( r_{Ki}^2 \right) = O\left( \zeta_K^2 K^{-2\alpha} \right) = O(1)$$
since $\zeta_K K^{-\alpha} = O(1)$. Thus $\frac{1}{n}\, r_K' Z_K Z_K' r_K = O_p(1)$. This means that (12.48) is $o_p(1)$ since (12.28) implies
$$\lambda_{\max}\left( \hat Q_K^{-1} - I_K \right) = \lambda_{\max}\left( \hat Q_K^{-1} \right) - 1 = o_p(1). \qquad (12.49)$$
Equivalently,
$$\frac{1}{\sqrt{n v_K}}\, a_K'\left( \hat Q_K^{-1} - I_K \right) Z_K' r_K \xrightarrow{p} 0. \qquad (12.50)$$

Equations (12.46), (12.47) and (12.50) applied to (12.38)-(12.40) show that
$$\sqrt{\frac{n}{v_K}}\left( \hat\theta_K - \theta + a(r_K) \right) \xrightarrow{d} \mathrm{N}(0, 1),$$
completing the proof.

Proof of Theorem 12.13.1. The assumption that $n K^{-2\alpha} = o(1)$ implies $K^{-\alpha} = o\left( n^{-1/2} \right)$. Thus
$$\zeta_K K^{-\alpha} \le o\left( \left( \frac{\zeta_K^2}{n} \right)^{1/2} \right) \le o\left( \left( \frac{\zeta_K^2 K}{n} \right)^{1/2} \right) = o(1)$$
so the conditions of Theorem 12.12.1 are satisfied. It is thus sufficient to show that
$$\sqrt{\frac{n}{v_K}}\, a(r_K) = o(1).$$
From (12.12),
$$r_K(x) = r_K^*(x) - z_K(x)'\gamma_K, \qquad \gamma_K = E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} r_{Ki}^* \right).$$
Thus by linearity, applying (12.45), and the Schwarz inequality,
$$\left| \sqrt{\frac{n}{v_K}}\, a(r_K) \right| = \left| \sqrt{\frac{n}{v_K}} \left( a(r_K^*) - a_K'\gamma_K \right) \right| \le \frac{n^{1/2} \left| a(r_K^*) \right|}{\left( \sigma^2 a_K' a_K \right)^{1/2}} + \frac{\left( n\, \gamma_K'\gamma_K \right)^{1/2}}{\sigma}. \qquad (12.51)\text{--}(12.52)$$
By assumption, the first term (12.51) is $O\left( n^{1/2} K^{-\alpha} \right) = o(1)$. For the second term (12.52), by (12.13), (12.14) and $n K^{-2\alpha} = o(1)$,
$$n\, \gamma_K'\gamma_K = n\, E\left( r_{Ki}^* z_{Ki}' \right) E\left( z_{Ki} z_{Ki}' \right)^{-1} E\left( z_{Ki} r_{Ki}^* \right) \le n\, O\left( K^{-2\alpha} \right) = o(1).$$
Together, both (12.51) and (12.52) are $o(1)$, as required.
Chapter 13

Quantile Regression

13.1 Least Absolute Deviations

We stated that a conventional goal in econometrics is estimation of the impact of variation in $x_i$ on the central tendency of $y_i$. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.

To recall the definition and properties of the median, let $y$ be a continuous random variable. The median $\theta = \operatorname{med}(y)$ is the value such that $\Pr(y \le \theta) = \Pr(y \ge \theta) = 0.5$. Two useful facts about the median are that
$$\theta = \operatorname*{argmin}_{\theta} E\left| y - \theta \right| \qquad (13.1)$$
and
$$E\, \operatorname{sgn}(y - \theta) = 0$$
where
$$\operatorname{sgn}(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0 \end{cases}$$
is the sign function.

These facts and definitions motivate three estimators of $\theta$. The first definition is the 50'th empirical quantile. The second is the value which minimizes $\frac{1}{n} \sum_{i=1}^n |y_i - \theta|$, and the third definition is the solution to the moment equation $\frac{1}{n} \sum_{i=1}^n \operatorname{sgn}(y_i - \theta) = 0$. These distinctions are illusory, however, as these estimators are indeed identical.

Now let's consider the conditional median of $y$ given a random vector $x$. Let $m(x) = \operatorname{med}(y \mid x)$ denote the conditional median of $y$ given $x$. The linear median regression model takes the form
$$y_i = x_i'\beta + e_i, \qquad \operatorname{med}(e_i \mid x_i) = 0.$$
In this model, the linear function $\operatorname{med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption is that the median function is linear in $x$.

Conditional analogs of the facts about the median are

• $\Pr(y_i \le x'\beta \mid x_i = x) = \Pr(y_i > x'\beta \mid x_i = x) = 0.5$

• $E\left( \operatorname{sgn}(e_i) \mid x_i \right) = 0$

• $E\left( x_i \operatorname{sgn}(e_i) \right) = 0$

• $\beta = \operatorname*{argmin}_{\beta} E\left| y_i - x_i'\beta \right|$
These facts motivate the following estimator. Let
$$LAD_n(\beta) = \frac{1}{n} \sum_{i=1}^n \left| y_i - x_i'\beta \right|$$
be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function:
$$\hat\beta = \operatorname*{argmin}_{\beta} LAD_n(\beta).$$
Equivalently, it is a solution to the moment condition
$$\frac{1}{n} \sum_{i=1}^n x_i \operatorname{sgn}\left( y_i - x_i'\hat\beta \right) = 0. \qquad (13.2)$$
The LAD estimator has an asymptotic normal distribution.
Theorem 13.1.1 Asymptotic Distribution of LAD Estimator
When the conditional median is linear in $x$,
$$\sqrt{n}\left( \hat\beta - \beta \right) \xrightarrow{d} \mathrm{N}(0, V)$$
where
$$V = \frac{1}{4} \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \left( E\, x_i x_i' \right) \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1}$$
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

The variance of the asymptotic distribution inversely depends on $f(0 \mid x)$, the conditional density of the error at its median. When $f(0 \mid x)$ is large, then there are many innovations near the median, and this improves estimation of the median. In the special case where the error is independent of $x_i$, then $f(0 \mid x) = f(0)$ and the asymptotic variance simplifies to
$$V = \frac{\left( E\, x_i x_i' \right)^{-1}}{4 f(0)^2}. \qquad (13.3)$$
This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.

Computation of standard errors for LAD estimates typically is based on equation (13.3). The main difficulty is the estimation of $f(0)$, the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 21. While a complete proof of Theorem 13.1.1 is advanced, we provide a sketch here for completeness.
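As a practical aside before the proof, the following sketch computes LAD standard errors from (13.3), estimating $f(0)$ with a Gaussian kernel density of the residuals and a rule-of-thumb bandwidth. Both of those choices, and the function name `lad_se`, are assumptions made for illustration, not a prescription.

```python
import numpy as np

def lad_se(X, e_hat):
    """Standard errors for LAD based on (13.3): V = (E x x')^{-1} / (4 f(0)^2).

    f(0), the error density at its median, is estimated by a Gaussian kernel
    density of the residuals evaluated at zero with a rule-of-thumb bandwidth.
    """
    n, k = X.shape
    h = 1.06 * np.std(e_hat) * n ** (-1 / 5)            # rule-of-thumb bandwidth
    f0 = np.mean(np.exp(-0.5 * (e_hat / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    Qxx = X.T @ X / n
    V = np.linalg.inv(Qxx) / (4 * f0 ** 2)              # asy. var of sqrt(n)(b_hat - b)
    return np.sqrt(np.diag(V) / n)
```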
Proof of Theorem 13.1.1: Similar to NLLS, LAD is an optimization estimator. Let $\beta_0$ denote the true value of $\beta$.

The first step is to show that $\hat\beta \xrightarrow{p} \beta_0$. The general nature of the proof is similar to that for the NLLS estimator, and is sketched here. For any fixed $\beta$, by the WLLN, $LAD_n(\beta) \xrightarrow{p} E\left| y_i - x_i'\beta \right|$. Furthermore, it can be shown that this convergence is uniform in $\beta$. (Proving uniform convergence is more challenging than for the NLLS criterion since the LAD criterion is not differentiable in $\beta$.) It follows that $\hat\beta$, the minimizer of $LAD_n(\beta)$, converges in probability to $\beta_0$, the minimizer of $E\left| y_i - x_i'\beta \right|$.

Since $\operatorname{sgn}(a) = 1 - 2 \cdot 1(a \le 0)$, (13.2) is equivalent to $g_n(\hat\beta) = 0$, where $g_n(\beta) = n^{-1} \sum_{i=1}^n g_i(\beta)$ and $g_i(\beta) = x_i \left( 1 - 2 \cdot 1\left( y_i \le x_i'\beta \right) \right)$. Let $g(\beta) = E\, g_i(\beta)$. We need three preliminary results. First, by the central limit theorem (Theorem 5.7.1),
$$\sqrt{n}\left( g_n(\beta_0) - g(\beta_0) \right) = n^{-1/2} \sum_{i=1}^n g_i(\beta_0) \xrightarrow{d} \mathrm{N}\left( 0, E\, x_i x_i' \right)$$
since $E\, g_i(\beta_0) g_i(\beta_0)' = E\, x_i x_i'$. Second, using the law of iterated expectations and the chain rule of differentiation,
$$\begin{aligned}
\frac{\partial}{\partial \beta'} g(\beta)
&= \frac{\partial}{\partial \beta'} E\left( x_i \left( 1 - 2 \cdot 1\left( y_i \le x_i'\beta \right) \right) \right) \\
&= -2 \frac{\partial}{\partial \beta'} E\left( x_i\, E\left( 1\left( e_i \le x_i'\beta - x_i'\beta_0 \right) \mid x_i \right) \right) \\
&= -2 \frac{\partial}{\partial \beta'} E\left( x_i \int_{-\infty}^{x_i'\beta - x_i'\beta_0} f(e \mid x_i)\, de \right) \\
&= -2\, E\left( x_i x_i' f\left( x_i'\beta - x_i'\beta_0 \mid x_i \right) \right)
\end{aligned}$$
so, evaluated at $\beta_0$,
$$\frac{\partial}{\partial \beta'} g(\beta_0) = -2\, E\left( x_i x_i' f(0 \mid x_i) \right).$$
Third, by a Taylor series expansion and the fact $g(\beta_0) = 0$,
$$g(\hat\beta) \simeq \frac{\partial}{\partial \beta'} g(\beta_0) \left( \hat\beta - \beta_0 \right).$$
Together,
$$\begin{aligned}
\sqrt{n}\left( \hat\beta - \beta_0 \right)
&\simeq \left( \frac{\partial}{\partial \beta'} g(\beta_0) \right)^{-1} \sqrt{n}\, g(\hat\beta) \\
&= \left( -2\, E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \sqrt{n}\left( g(\hat\beta) - g_n(\hat\beta) \right) \\
&\simeq \frac{1}{2} \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \sqrt{n}\left( g_n(\beta_0) - g(\beta_0) \right) \\
&\xrightarrow{d} \frac{1}{2} \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \mathrm{N}\left( 0, E\, x_i x_i' \right) = \mathrm{N}(0, V).
\end{aligned}$$
The third line follows from an asymptotic empirical process argument and the fact that $\hat\beta \xrightarrow{p} \beta_0$.
13.2 Quantile Regression

Quantile regression has become quite popular in recent econometric practice. For $\tau \in [0,1]$ the $\tau$'th quantile $Q_\tau$ of a random variable with distribution function $F(u)$ is defined as
$$Q_\tau = \inf\{ u : F(u) \ge \tau \}.$$
When $F(u)$ is continuous and strictly monotonic, then $F(Q_\tau) = \tau$, so you can think of the quantile as the inverse of the distribution function. The quantile $Q_\tau$ is the value such that $\tau$ (percent) of the mass of the distribution is less than $Q_\tau$. The median is the special case $\tau = 0.5$.

The following alternative representation is useful. If the random variable $U$ has $\tau$'th quantile $Q_\tau$, then
$$Q_\tau = \operatorname*{argmin}_{\theta} E\, \rho_\tau(U - \theta) \qquad (13.4)$$
where $\rho_\tau(q)$ is the piecewise linear function
$$\rho_\tau(q) = \begin{cases} -q(1-\tau) & q < 0 \\ q\tau & q \ge 0 \end{cases} = q\left( \tau - 1(q < 0) \right). \qquad (13.5)$$
This generalizes representation (13.1) for the median to all quantiles.

For the random variables $(y_i, x_i)$ with conditional distribution function $F(y \mid x)$ the conditional quantile function $Q_\tau(x)$ is
$$Q_\tau(x) = \inf\{ y : F(y \mid x) \ge \tau \}.$$
Again, when $F(y \mid x)$ is continuous and strictly monotonic in $y$, then $F\left( Q_\tau(x) \mid x \right) = \tau$. For fixed $\tau$, the quantile regression function $Q_\tau(x)$ describes how the $\tau$'th quantile of the conditional distribution varies with the regressors.

As functions of $x$, the quantile regression functions can take any shape. However for computational convenience it is typical to assume that they are (approximately) linear in $x$ (after suitable transformations). This linear specification assumes that $Q_\tau(x) = \beta_\tau' x$ where the coefficients $\beta_\tau$ vary across the quantiles $\tau$. We then have the linear quantile regression model
$$y_i = x_i'\beta_\tau + e_i$$
where $e_i$ is the error defined to be the difference between $y_i$ and its $\tau$'th conditional quantile $x_i'\beta_\tau$. By construction, the $\tau$'th conditional quantile of $e_i$ is zero; otherwise its properties are unspecified without further restrictions.

Given the representation (13.4), the quantile regression estimator $\hat\beta_\tau$ for $\beta_\tau$ solves the minimization problem
$$\hat\beta_\tau = \operatorname*{argmin}_{\beta} S_n^\tau(\beta)$$
where
$$S_n^\tau(\beta) = \frac{1}{n} \sum_{i=1}^n \rho_\tau\left( y_i - x_i'\beta \right)$$
and $\rho_\tau(q)$ is defined in (13.5).

Since the quantile regression criterion function $S_n^\tau(\beta)$ does not have an algebraic solution, numerical methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have been developed for this problem, and are widely available.
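To illustrate the linear programming formulation, each residual can be split into its positive and negative parts, so that the criterion becomes a linear objective with linear equality constraints. The sketch below uses SciPy's general-purpose `linprog` solver; in applied work one would typically use a dedicated quantile regression routine. Setting $\tau = 0.5$ yields the LAD estimator of Section 13.1. The function name `quantile_regression` is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Quantile regression estimate via linear programming (a minimal sketch).

    Writes min_b sum_i rho_tau(y_i - x_i'b) as an LP in (b, u, v):
    y_i - x_i'b = u_i - v_i with u_i, v_i >= 0, minimizing
    sum_i (tau * u_i + (1 - tau) * v_i).
    """
    n, k = X.shape
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]
```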
An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 13.1.1.

Theorem 13.2.1 Asymptotic Distribution of the Quantile Regression Estimator
When the $\tau$'th conditional quantile is linear in $x$,
$$\sqrt{n}\left( \hat\beta_\tau - \beta_\tau \right) \xrightarrow{d} \mathrm{N}(0, V_\tau),$$
where
$$V_\tau = \tau(1-\tau) \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1} \left( E\, x_i x_i' \right) \left( E\left( x_i x_i' f(0 \mid x_i) \right) \right)^{-1}$$
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error $e_i$ is independent of $x_i$, then $f(0 \mid x_i) = f(0)$, the unconditional density of $e_i$ at 0, and we have the simplification
$$V_\tau = \frac{\tau(1-\tau)}{f(0)^2} \left( E\left( x_i x_i' \right) \right)^{-1}.$$
A recent monograph on the details of quantile regression is Koenker (2005).
Exercises

Exercise 13.1 For any predictor $g(x_i)$ for $y_i$, the mean absolute error (MAE) is
$$E\left| y_i - g(x_i) \right|.$$
Show that the function $g(x)$ which minimizes the MAE is the conditional median $m(x) = \operatorname{med}(y_i \mid x_i)$.

Exercise 13.2 Define
$$g(u) = \tau - 1(u < 0)$$
where $1(\cdot)$ is the indicator function (takes the value 1 if the argument is true, else equals zero). Let $\theta$ satisfy $E\, g(y_i - \theta) = 0$. Is $\theta$ a quantile of the distribution of $y_i$?

Exercise 13.3 Verify equation (13.4).
Chapter 14

Generalized Method of Moments

14.1 Overidentified Linear Model

Consider the linear model
$$y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$$
$$E\left( x_i e_i \right) = 0$$
where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Now we can write the model as
$$y_i = x_{1i}'\beta_1 + e_i$$
$$E\left( x_i e_i \right) = 0.$$
In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E\left( x_i e_i \right) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.

This is a special case of a more general class of moment condition models. Let $g(y, x, z, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \ge k$ such that
$$E\, g(y_i, x_i, z_i, \beta_0) = 0 \qquad (14.1)$$
where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, z, \beta) = z\left( y - x_1'\beta_1 \right)$. In econometrics, this class of models is called moment condition models. In the statistics literature, these are known as estimating equations.

As an important special case we will devote special attention to linear moment condition models, which can be written as
$$y_i = x_i'\beta + e_i$$
$$E\left( z_i e_i \right) = 0$$
where the dimensions of $x_i$ and $z_i$ are $k \times 1$ and $\ell \times 1$, with $\ell \ge k$. If $\ell = k$ the model is just identified, otherwise it is overidentified. The variables $x_i$ may be components and functions of $z_i$, but this is not required. This model falls in the class (14.1) by setting
$$g(y, x, z, \beta_0) = z\left( y - x'\beta \right). \qquad (14.2)$$
14.2 GMM Estimator

Define the sample analog of (14.2)
$$g_n(\beta) = \frac{1}{n} \sum_{i=1}^n g_i(\beta) = \frac{1}{n} \sum_{i=1}^n z_i\left( y_i - x_i'\beta \right) = \frac{1}{n}\left( Z'y - Z'X\beta \right). \qquad (14.3)$$
The method of moments estimator for $\beta$ is defined as the parameter value which sets $g_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $g_n(\beta)$ "close" to zero.

For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$J_n(\beta) = n\, g_n(\beta)' W_n\, g_n(\beta).$$
This is a non-negative measure of the "length" of the vector $g_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n\, g_n(\beta)' g_n(\beta) = n \left\| g_n(\beta) \right\|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 14.2.1 $\hat\beta_{GMM} = \operatorname*{argmin}_{\beta} J_n(\beta)$.

Note that if $\ell = k$, then $g_n(\hat\beta) = 0$, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are
$$0 = \frac{\partial}{\partial \beta} J_n(\hat\beta) = 2n \left( \frac{\partial}{\partial \beta'} g_n(\hat\beta) \right)' W_n\, g_n(\hat\beta) = -2n \left( \frac{1}{n} X'Z \right) W_n \left( \frac{1}{n} Z'\left( y - X\hat\beta \right) \right)$$
so
$$2\left( X'Z \right) W_n \left( Z'X \right) \hat\beta = 2\left( X'Z \right) W_n \left( Z'y \right)$$
which establishes the following.

Proposition 14.2.1
$$\hat\beta_{GMM} = \left( \left( X'Z \right) W_n \left( Z'X \right) \right)^{-1} \left( X'Z \right) W_n \left( Z'y \right).$$

While the estimator depends on $W_n$, the dependence is only up to scale, for if $W_n$ is replaced by $c W_n$ for some $c > 0$, $\hat\beta_{GMM}$ does not change.
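The closed-form expression in Proposition 14.2.1 translates directly into code. The following minimal sketch (the function name `gmm_linear` is illustrative) computes the estimator for a user-supplied weight matrix.

```python
import numpy as np

def gmm_linear(y, X, Z, W):
    """GMM estimator of Proposition 14.2.1 for a given l x l weight matrix W:
    beta_hat = [(X'Z) W (Z'X)]^{-1} (X'Z) W (Z'y)."""
    XtZ = X.T @ Z
    return np.linalg.solve(XtZ @ W @ XtZ.T, XtZ @ W @ (Z.T @ y))

# In the just-identified case (l = k), any W > 0 gives the same estimate,
# the method-of-moments / IV estimator (Z'X)^{-1} Z'y.
```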
14.3 Distribution of GMM Estimator

Assume that $W_n \xrightarrow{p} W > 0$. Let
$$Q = E\left( z_i x_i' \right)$$
and
$$\Omega = E\left( z_i z_i' e_i^2 \right) = E\left( g_i g_i' \right),$$
where $g_i = z_i e_i$. Then
$$\left( \frac{1}{n} X'Z \right) W_n \left( \frac{1}{n} Z'X \right) \xrightarrow{p} Q'WQ$$
and
$$\left( \frac{1}{n} X'Z \right) W_n \left( \frac{1}{\sqrt{n}} Z'e \right) \xrightarrow{d} Q'W\, \mathrm{N}(0, \Omega).$$
We conclude:

Theorem 14.3.1 Asymptotic Distribution of GMM Estimator
$$\sqrt{n}\left( \hat\beta - \beta \right) \xrightarrow{d} \mathrm{N}(0, V_\beta),$$
where
$$V_\beta = \left( Q'WQ \right)^{-1} \left( Q'W\Omega W Q \right) \left( Q'WQ \right)^{-1}.$$

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.

The optimal weight matrix $W_0$ is one which minimizes $V_\beta$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:
$$\hat\beta = \left( X'Z\,\Omega^{-1} Z'X \right)^{-1} X'Z\,\Omega^{-1} Z'y.$$
Thus we have

Theorem 14.3.2 Asymptotic Distribution of Efficient GMM Estimator
$$\sqrt{n}\left( \hat\beta - \beta \right) \xrightarrow{d} \mathrm{N}\left( 0, \left( Q'\Omega^{-1}Q \right)^{-1} \right).$$

$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat\beta$ the efficient GMM estimator, as it has the same asymptotic distribution.

By "efficient", we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices $W_n$. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).

If it is known that $E\left( g_i(\beta) \right) = 0$, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than $\left( G'\Omega^{-1}G \right)^{-1}$ where $G = E\, \frac{\partial}{\partial \beta'} g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.

This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.
14.4 Estimation of the Efficient Weight Matrix

Given any weight matrix $W_n > 0$, the GMM estimator $\hat\beta$ is consistent yet inefficient. For example, we can set $W_n = I_\ell$. In the linear model, a better choice is $W_n = \left( Z'Z \right)^{-1}$. Given any such first-step estimator, we can define the residuals $\hat e_i = y_i - x_i'\hat\beta$ and moment equations $\hat g_i = z_i \hat e_i = g(y_i, x_i, z_i, \hat\beta)$. Construct
$$\bar g_n = g_n(\hat\beta) = \frac{1}{n} \sum_{i=1}^n \hat g_i, \qquad \hat g_i^* = \hat g_i - \bar g_n,$$
and define
$$W_n = \left( \frac{1}{n} \sum_{i=1}^n \hat g_i^* \hat g_i^{*\prime} \right)^{-1} = \left( \frac{1}{n} \sum_{i=1}^n \hat g_i \hat g_i' - \bar g_n \bar g_n' \right)^{-1}. \qquad (14.4)$$
Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.

A common alternative choice is to set
$$W_n = \left( \frac{1}{n} \sum_{i=1}^n \hat g_i \hat g_i' \right)^{-1}$$
which uses the uncentered moment conditions. Since $E\, g_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $E\, g_i \ne 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (14.4) above.

Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = \left( Z'Z \right)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residuals $\hat e_i = y_i - x_i'\hat\beta$. Then set $\hat g_i = z_i \hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$\hat\beta = \left( X'Z \left( \hat g'\hat g - n \bar g_n \bar g_n' \right)^{-1} Z'X \right)^{-1} X'Z \left( \hat g'\hat g - n \bar g_n \bar g_n' \right)^{-1} Z'y.$$
In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.
An estimator of the asymptotic variance of
`
d can be seen from the above formula. Set
´
X = :
_
A
t
Z
_
` j
t
` j ÷:j
a
j
t
a
_
÷1
Z
t
A
_
÷1
.
Asymptotic standard errors are given by the square roots of the diagonal elements of
1
:
´
X .
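To make the two-step calculation concrete, here is a minimal sketch in Python with NumPy. It is an illustration only: the function name, argument layout, and the assumption that y, X and Z are stored as arrays of dimension n, n × k and n × ℓ are ours, not part of the formal development. The sketch uses W_n = (Z'Z)⁻¹ in the first step, the centered weight matrix (14.4) in the second step, and also returns the overidentification statistic discussed in Section 14.6.

```python
import numpy as np

def two_step_gmm(y, X, Z):
    """Two-step efficient GMM for y = X b + e with instruments Z.
    Returns (coefficient estimate, standard errors, J statistic)."""
    n = len(y)
    A = X.T @ Z
    # First step: weight matrix W = (Z'Z)^{-1} (the 2SLS weight matrix)
    W1 = np.linalg.inv(Z.T @ Z)
    b1 = np.linalg.solve(A @ W1 @ A.T, A @ W1 @ (Z.T @ y))
    # Moment equations ghat_i = z_i * ehat_i and the centered weight matrix (14.4)
    g = Z * (y - X @ b1)[:, None]
    gbar = g.mean(axis=0)
    W2 = np.linalg.inv(g.T @ g / n - np.outer(gbar, gbar))   # = Omega_hat^{-1}
    # Second step: efficient GMM
    b2 = np.linalg.solve(A @ W2 @ A.T, A @ W2 @ (Z.T @ y))
    # Asymptotic variance (1/n) Vhat and standard errors
    Vn = n * np.linalg.inv(A @ W2 @ A.T)
    se = np.sqrt(np.diag(Vn))
    # J statistic n * gbar(b2)' Omega_hat^{-1} gbar(b2)
    gbar2 = (Z * (y - X @ b2)[:, None]).mean(axis=0)
    J = n * gbar2 @ W2 @ gbar2
    return b2, se, J
```

Because the GMM estimator is invariant to rescaling of the weight matrix, the code works with Ω̂⁻¹ directly rather than with (ĝ'ĝ − n ḡ_n ḡ_n')⁻¹; the two differ only by the factor n.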
There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of β. The criterion function is then
    J(β) = n ḡ_n(β)' ( (1/n) Σ_{i=1}^n g_i*(β) g_i*(β)' )⁻¹ ḡ_n(β),
where
    g_i*(β) = g_i(β) − ḡ_n(β).
The β̂ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).
The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.
14.5 GMM: The General Case

In its most general form, GMM applies whenever an economic or statistical model implies the ℓ × 1 moment condition
    E(g_i(β)) = 0.
Often, this is all that is known. Identification requires ℓ ≥ k = dim(β). The GMM estimator minimizes
    J(β) = n ḡ_n(β)' W_n ḡ_n(β)
where
    ḡ_n(β) = (1/n) Σ_{i=1}^n g_i(β)
and
    W_n = ( (1/n) Σ_{i=1}^n ĝ_i ĝ_i' − ḡ_n ḡ_n' )⁻¹,
with ĝ_i = g_i(β̃) constructed using a preliminary consistent estimator β̃, perhaps obtained by first setting W_n = I. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix W_n is updated, and then β̂ recomputed. This estimator can be iterated if needed.

Theorem 14.5.1 Distribution of Nonlinear GMM Estimator
Under general regularity conditions,
    √n (β̂ − β) →d N(0, (G'Ω⁻¹G)⁻¹),
where
    Ω = E(g_i g_i')
and
    G = E ∂g_i(β)/∂β'.
The variance of β̂ may be estimated by
    V̂_β = (Ĝ'Ω̂⁻¹Ĝ)⁻¹
where
    Ω̂ = n⁻¹ Σ_i ĝ_i* ĝ_i*'
and
    Ĝ = n⁻¹ Σ_i ∂g_i(β̂)/∂β'.

The general theory of GMM estimation and testing was exposited by L. Hansen (1982).
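For a nonlinear moment function the criterion must be minimized numerically. The following sketch (again in Python; the user-supplied moment function g_fn and the choice of the BFGS optimizer are assumptions of the example rather than prescriptions) implements the two-step procedure just described: minimize J(β) with a preliminary weight matrix, update the centered weight matrix at the first-step estimate, and minimize again.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_criterion(beta, g_fn, data, W):
    """J(beta) = n * gbar(beta)' W gbar(beta); g_fn(beta, data) returns
    the n x l matrix with rows g_i(beta)."""
    g = g_fn(beta, data)
    gbar = g.mean(axis=0)
    return len(g) * gbar @ W @ gbar

def nonlinear_gmm(g_fn, data, beta0, W0):
    """Two-step nonlinear GMM with a centered second-step weight matrix."""
    step1 = minimize(gmm_criterion, beta0, args=(g_fn, data, W0), method="BFGS")
    g = g_fn(step1.x, data)
    gbar = g.mean(axis=0)
    W1 = np.linalg.inv(g.T @ g / len(g) - np.outer(gbar, gbar))
    step2 = minimize(gmm_criterion, step1.x, args=(g_fn, data, W1), method="BFGS")
    return step2.x, W1
```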
14.6 Over-Identification Test

Overidentified models (ℓ > k) are special in the sense that there may not be a parameter value β such that the moment condition
    E g(y_i, x_i, z_i, β) = 0
holds. Thus the model — the overidentifying restrictions — are testable.
For example, take the linear model y_i = β_1'x_{1i} + β_2'x_{2i} + e_i with E(x_{1i} e_i) = 0 and E(x_{2i} e_i) = 0. It is possible that β_2 = 0, so that the linear equation may be written as y_i = β_1'x_{1i} + e_i. However, it is possible that β_2 ≠ 0, and in this case it would be impossible to find a value of β_1 so that both E(x_{1i}(y_i − x_{1i}'β_1)) = 0 and E(x_{2i}(y_i − x_{1i}'β_1)) = 0 hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.
Note that ḡ_n →p E g_i, and thus ḡ_n can be used to assess whether the hypothesis that E g_i = 0 is true. The criterion function at the parameter estimates is
    J_n = n ḡ_n' W_n ḡ_n = n² ḡ_n' (ĝ'ĝ − n ḡ_n ḡ_n')⁻¹ ḡ_n.
This is a quadratic form in ḡ_n, and is thus a natural test statistic for H_0 : E g_i = 0.

Theorem 14.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
    J_n = J_n(β̂) →d χ²_{ℓ−k}.

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.
The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic J exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic J whenever GMM is the estimation method.
When over-identified models are estimated by GMM, it is customary to report the J statistic as a general test of model adequacy.
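A short sketch of the computation, under the same illustrative conventions as before (arrays y, X, Z, and an efficient GMM estimate bhat; evaluating the centered weight matrix at bhat is one of several asymptotically equivalent choices):

```python
import numpy as np
from scipy.stats import chi2

def j_statistic(y, X, Z, bhat):
    """Sargan-Hansen J statistic and its asymptotic chi-square p-value."""
    n, l = Z.shape
    k = X.shape[1]
    g = Z * (y - X @ bhat)[:, None]        # moment equations at bhat
    gbar = g.mean(axis=0)
    W = np.linalg.inv(g.T @ g / n - np.outer(gbar, gbar))
    J = n * gbar @ W @ gbar
    return J, 1 - chi2.cdf(J, df=l - k)    # l - k overidentifying restrictions
```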
14.7 Hypothesis Testing: The Distance Statistic

We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses.
If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).
For a given weight matrix W_n, the GMM criterion function is
    J_n(β) = n ḡ_n(β)' W_n ḡ_n(β).
For h : R^k → R^r, the hypothesis is
    H_0 : h(β) = 0.
The estimates under H_1 are
    β̂ = argmin_β J_n(β)
and those under H_0 are
    β̃ = argmin_{h(β)=0} J_n(β).
The two minimizing criterion functions are J_n(β̂) and J_n(β̃). The GMM distance statistic is the difference
    D_n = J_n(β̃) − J_n(β̂).

Proposition 14.7.1 If the same weight matrix W_n is used for both null and alternative,
1. D ≥ 0
2. D →d χ²_r
3. If h is linear in β, then D equals the Wald statistic.

If h is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the D_n statistic appears to have quite good sampling properties, and is the preferred test statistic.
Newey and West (1987) suggested using the same weight matrix W_n for both null and alternative, as this ensures that D_n ≥ 0. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.
This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.
14.8 Conditional Moment Restrictions

In many contexts, the model implies more than an unconditional moment restriction of the form E g_i(β) = 0. It implies a conditional moment restriction of the form
    E(e_i(β) | z_i) = 0
where e_i(β) is some s × 1 function of the observation and the parameters. In many cases, s = 1.
It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.
Our linear model y_i = x_i'β + e_i with instruments z_i falls into this class under the stronger assumption E(e_i | z_i) = 0. Then e_i(β) = y_i − x_i'β.
It is also helpful to realize that conventional regression models also fall into this class, except that in this case x_i = z_i. For example, in linear regression, e_i(β) = y_i − x_i'β, while in a nonlinear regression model e_i(β) = y_i − g(x_i, β). In a joint model of the conditional mean and variance
    e_i(β, γ) = ( y_i − x_i'β ,  (y_i − x_i'β)² − f(x_i)'γ )'.
Here s = 2.
Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any ℓ × 1 function φ(x_i, β), we can set g_i(β) = φ(x_i, β) e_i(β) which satisfies E g_i(β) = 0 and hence defines a GMM estimator. The obvious problem is that the class of functions φ is infinite. Which should be selected?
This is equivalent to the problem of selection of the best instruments. If x_i ∈ R is a valid instrument satisfying E(e_i | x_i) = 0, then x_i, x_i², x_i³, ..., etc., are all valid instruments. Which should be used?
One solution is to construct an infinite list of potent instruments, and then use the first k instruments. How is k to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).
Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case s = 1. Let
    R_i = E( ∂e_i(β)/∂β | z_i )
and
    σ_i² = E( e_i(β)² | z_i ).
Then the "optimal instrument" is
    A_i = − σ_i⁻² R_i
so the optimal moment is
    g_i(β) = A_i e_i(β).
Setting g_i(β) to be this choice (which is k × 1, so is just-identified) yields the best GMM estimator possible.
In practice, A_i is unknown, but its form does help us think about construction of optimal instruments.
In the linear model e_i(β) = y_i − x_i'β, note that
    R_i = −E(x_i | z_i)
and
    σ_i² = E(e_i² | z_i),
so
    A_i = σ_i⁻² E(x_i | z_i).
In the case of linear regression, x_i = z_i, so A_i = σ_i⁻² z_i. Hence efficient GMM is GLS, as we discussed earlier in the course.
In the case of endogenous variables, note that the efficient instrument A_i involves the estimation of the conditional mean of x_i given z_i. In other words, to get the best instrument for x_i, we need the best conditional mean model for x_i given z_i, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of e_i. This is the same as the GLS estimator; namely that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.
14.9 Bootstrap GMM Inference

Let β̂ be the 2SLS or GMM estimator of β. Using the EDF of (y_i, z_i, x_i), we can apply the bootstrap methods discussed in Chapter 10 to compute estimates of the bias and variance of β̂, and construct confidence intervals for β, identically as in the regression model. However, caution should be applied when interpreting such results.
A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied J test will yield the wrong answer.
The problem is that in the sample, β̂ is the "true" value and yet ḡ_n(β̂) ≠ 0. Thus according to random variables (y_i*, z_i*, x_i*) drawn from the EDF F_n,
    E( g_i(β̂) ) = ḡ_n(β̂) ≠ 0.
This means that (y_i*, z_i*, x_i*) do not satisfy the same moment conditions as the population distribution.
A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample (y*, Z*, X*), define the bootstrap GMM criterion
    J_n*(β) = n ( ḡ_n*(β) − ḡ_n(β̂) )' W_n* ( ḡ_n*(β) − ḡ_n(β̂) )
where ḡ_n(β̂) is from the in-sample data, not from the bootstrap data.
Let β̂* minimize J_n*(β), and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is
    β̂*_n = ( X*'Z* W_n* Z*'X* )⁻¹ X*'Z* W_n* ( Z*'y* − Z'ê ),
where ê = y − Xβ̂ are the in-sample residuals. The bootstrap J statistic is J_n*(β̂*).
Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities p̂_i described in Chapter 15. Since Σ_{i=1}^n p̂_i g_i(β̂) = 0, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments are needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
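A minimal sketch of the Hall-Horowitz recentered bootstrap for the linear model follows, under the same illustrative conventions (NumPy arrays; the first-step weight matrix (Z*'Z*)⁻¹ is used within each bootstrap sample purely for concreteness):

```python
import numpy as np

def hall_horowitz_bootstrap(y, X, Z, bhat, B=999, rng=None):
    """Nonparametric bootstrap for linear GMM with Hall-Horowitz recentering:
    bootstrap moments are measured relative to the in-sample mean gbar(bhat),
    so the bootstrap world satisfies the moment conditions.
    Returns the B bootstrap coefficient estimates."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    gbar = (Z * (y - X @ bhat)[:, None]).mean(axis=0)   # in-sample gbar(bhat) = Z'e/n
    draws = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, n)                     # resample observations
        ys, Xs, Zs = y[idx], X[idx], Z[idx]
        W = np.linalg.inv(Zs.T @ Zs)                    # first-step weight matrix
        A = Xs.T @ Zs
        # recentered first-order condition uses Z*'y* - n*gbar in place of Z*'y*
        draws[b] = np.linalg.solve(A @ W @ A.T, A @ W @ (Zs.T @ ys - n * gbar))
    return draws
```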
Exercises

Exercise 14.1 Take the model
    y_i = x_i'β + e_i
    E(x_i e_i) = 0
    e_i² = z_i'γ + η_i
    E(z_i η_i) = 0.
Find the method of moments estimators (β̂, γ̂) for (β, γ).

Exercise 14.2 Take the single equation
    y = Xβ + e
    E(e | Z) = 0.
Assume E(e_i² | z_i) = σ². Show that if β̂ is estimated by GMM with weight matrix W_n = (Z'Z)⁻¹, then
    √n (β̂ − β) →d N(0, σ² (Q'M⁻¹Q)⁻¹)
where Q = E(z_i x_i') and M = E(z_i z_i').

Exercise 14.3 Take the model y_i = x_i'β + e_i with E(z_i e_i) = 0. Let ê_i = y_i − x_i'β̂ where β̂ is consistent for β (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
    W_n = ( (1/n) Σ_{i=1}^n z_i z_i' ê_i² )⁻¹.
Show that W_n →p Ω⁻¹ where Ω = E(z_i z_i' e_i²).

Exercise 14.4 In the linear model estimated by GMM with general weight matrix W, the asymptotic variance of β̂_GMM is
    V = (Q'WQ)⁻¹ Q'WΩWQ (Q'WQ)⁻¹.
(a) Let V_0 be this matrix when W = Ω⁻¹. Show that V_0 = (Q'Ω⁻¹Q)⁻¹.
(b) We want to show that for any W, V − V_0 is positive semi-definite (for then V_0 is the smallest possible covariance matrix and W = Ω⁻¹ is the efficient weight matrix). To do this, start by finding matrices A and B such that V = A'ΩA and V_0 = B'ΩB.
(c) Show that B'ΩA = B'ΩB and therefore that B'Ω(A − B) = 0.
(d) Use the expressions V = A'ΩA, A = B + (A − B), and B'Ω(A − B) = 0 to show that V ≥ V_0.

Exercise 14.5 The equation of interest is
    y_i = m(x_i, β) + e_i
    E(z_i e_i) = 0.
The observed data is (y_i, z_i, x_i). z_i is ℓ × 1 and β is k × 1, ℓ ≥ k. Show how to construct an efficient GMM estimator for β.

Exercise 14.6 In the linear model y = Xβ + e with E(x_i e_i) = 0, a Generalized Method of Moments (GMM) criterion function for β is defined as
    J_n(β) = (1/n) (y − Xβ)' X Ω̂⁻¹ X' (y − Xβ)    (14.5)
where Ω̂ = (1/n) Σ_{i=1}^n x_i x_i' ê_i², ê_i = y_i − x_i'β̂ are the OLS residuals, and β̂ = (X'X)⁻¹X'y is LS. The GMM estimator of β, subject to the restriction h(β) = 0, is defined as
    β̃ = argmin_{h(β)=0} J_n(β).
The GMM test statistic (the distance statistic) of the hypothesis h(β) = 0 is
    D = J_n(β̃) = min_{h(β)=0} J_n(β).    (14.6)
(a) Show that you can rewrite J_n(β) in (14.5) as
    J_n(β) = n (β − β̂)' V̂_β⁻¹ (β − β̂),
thus β̃ is the same as the minimum distance estimator.
(b) Show that in this setting, the distance statistic D in (14.6) equals the Wald statistic.

Exercise 14.7 Take the linear model
    y_i = x_i'β + e_i
    E(z_i e_i) = 0,
and consider the GMM estimator β̂ of β. Let
    J_n = n ḡ_n(β̂)' Ω̂⁻¹ ḡ_n(β̂)
denote the test of overidentifying restrictions. Show that J_n →d χ²_{ℓ−k} as n → ∞ by demonstrating each of the following:
(a) Since Ω > 0, we can write Ω⁻¹ = CC' and Ω = C'⁻¹C⁻¹.
(b) J_n = n ( C'ḡ_n(β̂) )' ( C'Ω̂C )⁻¹ C'ḡ_n(β̂).
(c) C'ḡ_n(β̂) = D_n C'ḡ_n(β_0) where
    D_n = I_ℓ − C' ((1/n) Z'X) ( ((1/n) X'Z) Ω̂⁻¹ ((1/n) Z'X) )⁻¹ ((1/n) X'Z) Ω̂⁻¹ C'⁻¹
and ḡ_n(β_0) = (1/n) Z'e.
(d) D_n →p I_ℓ − R(R'R)⁻¹R' where R = C'E(z_i x_i').
(e) n^{1/2} C'ḡ_n(β_0) →d u ~ N(0, I_ℓ).
(f) J_n →d u' ( I_ℓ − R(R'R)⁻¹R' ) u.
(g) u' ( I_ℓ − R(R'R)⁻¹R' ) u ~ χ²_{ℓ−k}.
Hint: I_ℓ − R(R'R)⁻¹R' is a projection matrix.
Chapter 15

Empirical Likelihood

15.1 Non-Parametric Likelihood

An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric analog of likelihood estimation.
The idea is to construct a multinomial distribution F(p_1, ..., p_n) which places probability p_i at each observation. To be a valid multinomial distribution, these probabilities must satisfy the requirements that p_i ≥ 0 and
    Σ_{i=1}^n p_i = 1.    (15.1)
Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is
    log L(p_1, ..., p_n) = Σ_{i=1}^n log(p_i).    (15.2)
First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities (p_1, ..., p_n) are those which maximize the log-likelihood subject to the constraint (15.1). This is equivalent to maximizing
    Σ_{i=1}^n log(p_i) − μ ( Σ_{i=1}^n p_i − 1 )
where μ is a Lagrange multiplier. The n first order conditions are 0 = p_i⁻¹ − μ. Combined with the constraint (15.1) we find that the MLE is p_i = n⁻¹ yielding the log-likelihood −n log(n).
Now consider the case of an overidentified model with moment condition
    E g_i(β_0) = 0
where g is ℓ × 1 and β is k × 1 and for simplicity we write g_i(β) = g(y_i, z_i, x_i, β). The multinomial distribution which places probability p_i at each observation (y_i, x_i, z_i) will satisfy this condition if and only if
    Σ_{i=1}^n p_i g_i(β) = 0.    (15.3)
The empirical likelihood estimator is the value of β which maximizes the multinomial log-likelihood (15.2) subject to the restrictions (15.1) and (15.3).
The Lagrangian for this maximization problem is
    L(β, p_1, ..., p_n, λ, μ) = Σ_{i=1}^n log(p_i) − μ ( Σ_{i=1}^n p_i − 1 ) − n λ' Σ_{i=1}^n p_i g_i(β)
where λ and μ are Lagrange multipliers. The first-order-conditions of L with respect to p_i, μ, and λ are
    1/p_i = μ + n λ' g_i(β)
    Σ_{i=1}^n p_i = 1
    Σ_{i=1}^n p_i g_i(β) = 0.
Multiplying the first equation by p_i, summing over i, and using the second and third equations, we find μ = n and
    p_i = 1 / ( n (1 + λ' g_i(β)) ).
Substituting into L we find
    R(β, λ) = −n log(n) − Σ_{i=1}^n log(1 + λ' g_i(β)).    (15.4)
For given β, the Lagrange multiplier λ(β) minimizes R(β, λ):
    λ(β) = argmin_λ R(β, λ).    (15.5)
This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since R(β, λ) is a convex function of λ. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 6.5). This yields the (profile) empirical log-likelihood function for β:
    R(β) = R(β, λ(β)) = −n log(n) − Σ_{i=1}^n log(1 + λ(β)' g_i(β)).
The EL estimate β̂ is the value which maximizes R(β), or equivalently minimizes its negative
    β̂ = argmin_β [ −R(β) ].    (15.6)
Numerical methods are required for calculation of β̂ (see Section 15.5).
As a by-product of estimation, we also obtain the Lagrange multiplier λ̂ = λ(β̂), probabilities
    p̂_i = 1 / ( n (1 + λ̂' g_i(β̂)) ),
and maximized empirical likelihood
    R(β̂) = Σ_{i=1}^n log(p̂_i).    (15.7)
15.2 Asymptotic Distribution of EL Estimator

Let β_0 denote the true value of β and define
    G_i(β) = ∂g_i(β)/∂β'    (15.8)
    G = E G_i(β_0)
    Ω = E( g_i(β_0) g_i(β_0)' )
and
    V_β = (G'Ω⁻¹G)⁻¹    (15.9)
    V_λ = Ω − G (G'Ω⁻¹G)⁻¹ G'.    (15.10)
For example, in the linear model, G_i(β) = −z_i x_i', G = −E(z_i x_i'), and Ω = E(z_i z_i' e_i²).

Theorem 15.2.1 Under regularity conditions,
    √n (β̂ − β_0) →d N(0, V_β)
    √n λ̂ →d Ω⁻¹ N(0, V_λ)
where V_β and V_λ are defined in (15.9) and (15.10), and √n (β̂ − β_0) and √n λ̂ are asymptotically independent.

The theorem shows that the asymptotic variance V_β for β̂ is the same as for efficient GMM. Thus the EL estimator is asymptotically efficient.
Chamberlain (1987) showed that V_β is the semiparametric efficiency bound for β in the overidentified moment condition model. This means that no consistent estimator for this class of models can have a lower asymptotic variance than V_β. Since the EL estimator achieves this bound, it is an asymptotically efficient estimator for β.

Proof of Theorem 15.2.1. (β̂, λ̂) jointly solve
    0 = ∂R(β, λ)/∂λ = − Σ_{i=1}^n g_i(β̂) / ( 1 + λ̂' g_i(β̂) )    (15.11)
    0 = ∂R(β, λ)/∂β = − Σ_{i=1}^n G_i(β̂)' λ / ( 1 + λ̂' g_i(β̂) ).    (15.12)
Let G_n = (1/n) Σ_{i=1}^n G_i(β_0), ḡ_n = (1/n) Σ_{i=1}^n g_i(β_0) and Ω_n = (1/n) Σ_{i=1}^n g_i(β_0) g_i(β_0)'.
Expanding (15.12) around β = β_0 and λ = λ_0 = 0 yields
    0 ≈ G_n' ( λ̂ − λ_0 ).    (15.13)
Expanding (15.11) around β = β_0 and λ = λ_0 = 0 yields
    0 ≈ −ḡ_n − G_n ( β̂ − β_0 ) + Ω_n λ̂.    (15.14)
Premultiplying by G_n'Ω_n⁻¹ and using (15.13) yields
    0 ≈ −G_n'Ω_n⁻¹ ḡ_n − G_n'Ω_n⁻¹ G_n ( β̂ − β_0 ) + G_n'Ω_n⁻¹ Ω_n λ̂
      = −G_n'Ω_n⁻¹ ḡ_n − G_n'Ω_n⁻¹ G_n ( β̂ − β_0 ).
Solving for β̂ and using the WLLN and CLT yields
    √n ( β̂ − β_0 ) ≈ − ( G_n'Ω_n⁻¹G_n )⁻¹ G_n'Ω_n⁻¹ √n ḡ_n    (15.15)
                   →d ( G'Ω⁻¹G )⁻¹ G'Ω⁻¹ N(0, Ω)
                    = N(0, V_β).
Solving (15.14) for λ̂ and using (15.15) yields
    √n λ̂ ≈ Ω_n⁻¹ ( I − G_n ( G_n'Ω_n⁻¹G_n )⁻¹ G_n'Ω_n⁻¹ ) √n ḡ_n    (15.16)
          →d Ω⁻¹ ( I − G ( G'Ω⁻¹G )⁻¹ G'Ω⁻¹ ) N(0, Ω)
           = Ω⁻¹ N(0, V_λ).
Furthermore, since
    G' ( I − Ω⁻¹G ( G'Ω⁻¹G )⁻¹ G' ) = 0,
√n ( β̂ − β_0 ) and √n λ̂ are asymptotically uncorrelated and hence independent.
15.3 Overidentifying Restrictions

In a parametric likelihood context, tests are based on the difference in the log likelihood functions. The same statistic can be constructed for empirical likelihood. Twice the difference between the unrestricted empirical log-likelihood −n log(n) and the maximized empirical log-likelihood for the model (15.7) is
    LR_n = Σ_{i=1}^n 2 log( 1 + λ̂' g_i(β̂) ).    (15.17)

Theorem 15.3.1 If E g_i(β_0) = 0 then LR_n →d χ²_{ℓ−k}.

The EL overidentification test is similar to the GMM overidentification test. They are asymptotically first-order equivalent, and have the same interpretation. The overidentification test is a very useful by-product of EL estimation, and it is advisable to report the statistic LR_n whenever EL is the estimation method.

Proof of Theorem 15.3.1. First, by a Taylor expansion, (15.15), and (15.16),
    (1/√n) Σ_{i=1}^n g_i(β̂) ≈ √n ( ḡ_n + G_n ( β̂ − β_0 ) )
                             ≈ ( I − G_n ( G_n'Ω_n⁻¹G_n )⁻¹ G_n'Ω_n⁻¹ ) √n ḡ_n
                             ≈ Ω_n √n λ̂.
Second, since log(1 + u) ≈ u − u²/2 for u small,
    LR_n = Σ_{i=1}^n 2 log( 1 + λ̂' g_i(β̂) )
         ≈ 2 λ̂' Σ_{i=1}^n g_i(β̂) − λ̂' Σ_{i=1}^n g_i(β̂) g_i(β̂)' λ̂
         ≈ n λ̂' Ω_n λ̂
         →d N(0, V_λ)' Ω⁻¹ N(0, V_λ)
          = χ²_{ℓ−k}
where the proof of the final equality is left as an exercise.
15.4 Testing

Let the maintained model be
    E g_i(β) = 0    (15.18)
where g is ℓ × 1 and β is k × 1. By "maintained" we mean that the overidentifying restrictions contained in (15.18) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is
    h(β) = 0,
where h : R^k → R^a. The restricted EL estimator and likelihood are the values which solve
    β̃ = argmax_{h(β)=0} R(β)
    R(β̃) = max_{h(β)=0} R(β).
Fundamentally, the restricted EL estimator β̃ is simply an EL estimator with ℓ − k + a overidentifying restrictions, so there is no fundamental change in the distribution theory for β̃ relative to β̂. To test the hypothesis h(β) while maintaining (15.18), the simple overidentifying restrictions test (15.17) is not appropriate. Instead we use the difference in log-likelihoods:
    LR_n = 2 ( R(β̂) − R(β̃) ).
This test statistic is a natural analog of the GMM distance statistic.

Theorem 15.4.1 Under (15.18) and H_0 : h(β) = 0, LR_n →d χ²_a.

The proof of this result is more challenging and is omitted.
15.5 Numerical Computation

Gauss code which implements the methods discussed below can be found at
http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives
The numerical calculations depend on derivatives of the dual likelihood function (15.4). Define
    g_i*(β, λ) = g_i(β) / ( 1 + λ' g_i(β) )
    G_i*(β, λ) = G_i(β)' λ / ( 1 + λ' g_i(β) ).
The first derivatives of (15.4) are
    R_λ = ∂R(β, λ)/∂λ = − Σ_{i=1}^n g_i*(β, λ)
    R_β = ∂R(β, λ)/∂β = − Σ_{i=1}^n G_i*(β, λ).
The second derivatives are
    R_λλ = ∂²R(β, λ)/∂λ∂λ' = Σ_{i=1}^n g_i*(β, λ) g_i*(β, λ)'
    R_λβ = ∂²R(β, λ)/∂λ∂β' = Σ_{i=1}^n [ g_i*(β, λ) G_i*(β, λ)' − G_i(β) / ( 1 + λ' g_i(β) ) ]
    R_ββ = ∂²R(β, λ)/∂β∂β' = Σ_{i=1}^n [ G_i*(β, λ) G_i*(β, λ)' − (∂²/∂β∂β') ( g_i(β)' λ ) / ( 1 + λ' g_i(β) ) ].

Inner Loop
The so-called "inner loop" solves (15.5) for given β. The modified Newton method takes a quadratic approximation to R(β, λ) yielding the iteration rule
    λ_{j+1} = λ_j − δ ( R_λλ(β, λ_j) )⁻¹ R_λ(β, λ_j),    (15.19)
where δ > 0 is a scalar steplength (to be discussed next). The starting value λ_1 can be set to the zero vector. The iteration (15.19) is continued until the gradient R_λ(β, λ_j) is smaller than some prespecified tolerance.
Efficient convergence requires a good choice of steplength δ. One method uses the following quadratic approximation. Set δ_0 = 0, δ_1 = 1/2 and δ_2 = 1. For p = 0, 1, 2, set
    λ_p = λ_j − δ_p ( R_λλ(β, λ_j) )⁻¹ R_λ(β, λ_j)
    R_p = R(β, λ_p).
A quadratic function can be fit exactly through these three points. The value of δ which minimizes this quadratic is
    δ̂ = ( R_2 + 3R_0 − 4R_1 ) / ( 4R_2 + 4R_0 − 8R_1 ),
yielding the steplength to be plugged into (15.19).
A complication is that λ must be constrained so that 0 ≤ p_i ≤ 1, which holds if
    n ( 1 + λ' g_i(β) ) ≥ 1    (15.20)
for all i. If (15.20) fails, the stepsize δ needs to be decreased.
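The following sketch implements the inner loop in Python with NumPy, given the n × ℓ matrix of moments g_i(β) for a fixed β. For brevity it uses simple step-halving to enforce (15.20) rather than the quadratic steplength approximation described above; the function name and tolerance defaults are illustrative.

```python
import numpy as np

def el_inner_loop(g, lam=None, tol=1e-10, max_iter=100):
    """Modified Newton solver for (15.5): minimize R(beta, lambda) over
    lambda for fixed beta, where g is the n x l matrix with rows g_i(beta)."""
    n, l = g.shape
    lam = np.zeros(l) if lam is None else lam
    for _ in range(max_iter):
        d = 1.0 + g @ lam                      # denominators 1 + lambda'g_i
        gstar = g / d[:, None]                 # g_i^*(beta, lambda)
        R_lam = -gstar.sum(axis=0)             # gradient R_lambda
        if np.max(np.abs(R_lam)) < tol:
            break
        R_ll = gstar.T @ gstar                 # Hessian R_{lambda lambda}
        step = np.linalg.solve(R_ll, R_lam)
        delta = 1.0
        # shrink the steplength until the constraint (15.20) is satisfied
        while delta > 1e-12 and np.any(n * (1.0 + g @ (lam - delta * step)) < 1.0):
            delta /= 2.0
        lam = lam - delta * step
    return lam
```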
Outer Loop
The outer loop is the minimization (15.6). This can be done by the modified Newton method described in the previous section. The gradient for (15.6) is
    L_β = ∂R(β)/∂β = ∂R(β, λ)/∂β = R_β + λ_β' R_λ = R_β
since R_λ(β, λ) = 0 at λ = λ(β), where
    λ_β = ∂λ(β)/∂β' = −R_λλ⁻¹ R_λβ,
the second equality following from the implicit function theorem applied to R_λ(β, λ(β)) = 0.
The Hessian for (15.6) is
    L_ββ = −∂²R(β)/∂β∂β'
         = −∂/∂β' [ R_β(β, λ(β)) + λ_β' R_λ(β, λ(β)) ]
         = −[ R_ββ(β, λ(β)) + R_λβ' λ_β + λ_β' R_λβ + λ_β' R_λλ λ_β ]
         = R_λβ' R_λλ⁻¹ R_λβ − R_ββ.
It is not guaranteed that L_ββ > 0. If not, the eigenvalues of L_ββ should be adjusted so that all are positive. The Newton iteration rule is
    β_{j+1} = β_j − δ L_ββ⁻¹ L_β
where δ is a scalar stepsize, and the rule is iterated until convergence.
Chapter 16

Endogeneity

We say that there is endogeneity in the linear model y_i = x_i'β + e_i if β is the parameter of interest and E(x_i e_i) ≠ 0. This cannot happen if β is defined by linear projection, so requires a structural interpretation. The coefficient β must have meaning separately from the definition of a conditional mean or linear projection.

Example: Measurement error in the regressor. Suppose that (y_i, x_i*) are joint random variables, E(y_i | x_i*) = x_i*'β is linear, β is the parameter of interest, and x_i* is not observed. Instead we observe x_i = x_i* + u_i where u_i is a k × 1 measurement error, independent of y_i and x_i*. Then
    y_i = x_i*'β + e_i = (x_i − u_i)'β + e_i = x_i'β + v_i
where
    v_i = e_i − u_i'β.
The problem is that
    E(x_i v_i) = E( (x_i* + u_i)(e_i − u_i'β) ) = −E(u_i u_i') β ≠ 0
if β ≠ 0 and E(u_i u_i') ≠ 0. It follows that if β̂ is the OLS estimator, then
    β̂ →p β* = β − ( E(x_i x_i') )⁻¹ E(u_i u_i') β ≠ β.
This is called measurement error bias.

Example: Supply and Demand. The variables q_i and p_i (quantity and price) are determined jointly by the demand equation
    q_i = −β_1 p_i + e_{1i}
and the supply equation
    q_i = β_2 p_i + e_{2i}.
Assume that e_i = (e_{1i}, e_{2i})' is iid, E e_i = 0, β_1 + β_2 = 1 and E e_i e_i' = I_2 (the latter for simplicity). The question is, if we regress q_i on p_i, what happens?
It is helpful to solve for q_i and p_i in terms of the errors. In matrix notation,
    [ 1  β_1 ]   [ q_i ]   [ e_{1i} ]
    [ 1 −β_2 ]   [ p_i ] = [ e_{2i} ]
so
    [ q_i ]   [ 1  β_1 ]⁻¹ [ e_{1i} ]   [ β_2  β_1 ] [ e_{1i} ]   [ β_2 e_{1i} + β_1 e_{2i} ]
    [ p_i ] = [ 1 −β_2 ]   [ e_{2i} ] = [ 1    −1  ] [ e_{2i} ] = [ e_{1i} − e_{2i}         ].
The projection of q_i on p_i yields
    q_i = β* p_i + ε_i
    E(p_i ε_i) = 0
where
    β* = E(p_i q_i) / E(p_i²) = (β_2 − β_1)/2.
Hence if it is estimated by OLS, β̂ →p β*, which does not equal either β_1 or β_2. This is called simultaneous equations bias.
16.1 Instrumental Variables

Let the equation of interest be
    y_i = x_i'β + e_i    (16.1)
where x_i is k × 1, and assume that E(x_i e_i) ≠ 0 so there is endogeneity. We call (16.1) the structural equation. In matrix notation, this can be written as
    y = Xβ + e.    (16.2)
Any solution to the problem of endogeneity requires additional information which we call instruments.

Definition 16.1.1 The ℓ × 1 random vector z_i is an instrumental variable for (16.1) if E(z_i e_i) = 0.

In a typical set-up, some regressors in x_i will be uncorrelated with e_i (for example, at least the intercept). Thus we make the partition
    x_i = ( x_{1i} , x_{2i} )'    with dimensions k_1 and k_2,    (16.3)
where E(x_{1i} e_i) = 0 yet E(x_{2i} e_i) ≠ 0. We call x_{1i} exogenous and x_{2i} endogenous. By the above definition, x_{1i} is an instrumental variable for (16.1), so should be included in z_i. So we have the partition
    z_i = ( x_{1i} , z_{2i} )'    with dimensions k_1 and ℓ_2,    (16.4)
where x_{1i} = z_{1i} are the included exogenous variables, and z_{2i} are the excluded exogenous variables. That is, z_{2i} are variables which could be included in the equation for y_i (in the sense that they are uncorrelated with e_i) yet can be excluded, as they would have true zero coefficients in the equation.
The model is just-identified if ℓ = k (i.e., if ℓ_2 = k_2) and over-identified if ℓ > k (i.e., if ℓ_2 > k_2).
We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.
16.2 Reduced Form

The reduced form relationship between the variables or "regressors" x_i and the instruments z_i is found by linear projection. Let
    Γ = E(z_i z_i')⁻¹ E(z_i x_i')
be the ℓ × k matrix of coefficients from a projection of x_i on z_i, and define
    u_i = x_i − Γ'z_i
as the projection error. Then the reduced form linear relationship between x_i and z_i is
    x_i = Γ'z_i + u_i.    (16.5)
In matrix notation, we can write (16.5) as
    X = ZΓ + U    (16.6)
where U is n × k.
By construction,
    E(z_i u_i') = 0,
so (16.5) is a projection and can be estimated by OLS:
    x_i = Γ̂'z_i + û_i
or
    X = ZΓ̂ + Û
where
    Γ̂ = (Z'Z)⁻¹(Z'X).
Substituting (16.6) into (16.2), we find
    y = (ZΓ + U)β + e = Zλ + v,    (16.7)
where
    λ = Γβ    (16.8)
and
    v = Uβ + e.
Observe that
    E(z_i v_i) = E(z_i u_i') β + E(z_i e_i) = 0.
Thus (16.7) is a projection equation and may be estimated by OLS. This is
    y = Zλ̂ + v̂,    λ̂ = (Z'Z)⁻¹(Z'y).
The equation (16.7) is the reduced form for y. (16.6) and (16.7) together are the reduced form equations for the system
    y = Zλ + v
    X = ZΓ + U.
As we showed above, OLS yields the reduced-form estimates (λ̂, Γ̂).
16.3 Identification

The structural parameter β relates to (λ, Γ) through (16.8). The parameter β is identified, meaning that it can be recovered from the reduced form, if
    rank(Γ) = k.    (16.9)
Assume that (16.9) holds. If ℓ = k, then β = Γ⁻¹λ. If ℓ > k, then for any W > 0, β = (Γ'WΓ)⁻¹Γ'Wλ.
If (16.9) is not satisfied, then β cannot be recovered from (λ, Γ). Note that a necessary (although not sufficient) condition for (16.9) is ℓ ≥ k.
Since Z and X have the common variables X_1, we can rewrite some of the expressions. Using (16.3) and (16.4) to make the matrix partitions Z = [Z_1, Z_2] and X = [Z_1, X_2], we can partition Γ as
    Γ = [ Γ_11  Γ_12 ]   [ I  Γ_12 ]
        [ Γ_21  Γ_22 ] = [ 0  Γ_22 ].
(16.6) can be rewritten as
    X_1 = Z_1
    X_2 = Z_1 Γ_12 + Z_2 Γ_22 + U_2.    (16.10)
β is identified if rank(Γ) = k, which is true if and only if rank(Γ_22) = k_2 (by the upper-diagonal structure of Γ). Thus the key to identification of the model rests on the ℓ_2 × k_2 matrix Γ_22 in (16.10).
16.4 Estimation

The model can be written as
    y_i = x_i'β + e_i
    E(z_i e_i) = 0
or
    E g_i(β) = 0
    g_i(β) = z_i ( y_i − x_i'β ).
This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in those chapters directly apply. Recall that the GMM estimator, for given weight matrix W_n, is
    β̂ = ( X'Z W_n Z'X )⁻¹ X'Z W_n Z'y.
16.5 Special Cases: IV and 2SLS

If the model is just-identified, so that ℓ = k, then the formula for GMM simplifies. We find that
    β̂ = ( X'Z W_n Z'X )⁻¹ X'Z W_n Z'y
      = (Z'X)⁻¹ W_n⁻¹ (X'Z)⁻¹ X'Z W_n Z'y
      = (Z'X)⁻¹ Z'y.
This estimator is often called the instrumental variables estimator (IV) of β, where Z is used as an instrument for X. Observe that the weight matrix W_n has disappeared. In the just-identified case, the weight matrix plays no role. This is also the method of moments estimator of β, and the EL estimator. Another interpretation stems from the fact that since β = Γ⁻¹λ, we can construct the Indirect Least Squares (ILS) estimator:
    β̂ = Γ̂⁻¹ λ̂
      = ( (Z'Z)⁻¹(Z'X) )⁻¹ ( (Z'Z)⁻¹(Z'y) )
      = (Z'X)⁻¹ (Z'Z) (Z'Z)⁻¹ (Z'y)
      = (Z'X)⁻¹ (Z'y),
which again is the IV estimator.

Recall that the optimal weight matrix is an estimate of the inverse of Ω = E(z_i z_i' e_i²). In the special case that E(e_i² | z_i) = σ² (homoskedasticity), then Ω = E(z_i z_i') σ² ∝ E(z_i z_i'), suggesting the weight matrix W_n = (Z'Z)⁻¹. Using this choice, the GMM estimator equals
    β̂_2SLS = ( X'Z (Z'Z)⁻¹ Z'X )⁻¹ X'Z (Z'Z)⁻¹ Z'y.
This is called the two-stage least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.
It is useful to observe that writing
    P = Z (Z'Z)⁻¹ Z'
    X̂ = PX = ZΓ̂,
then the 2SLS estimator is
    β̂ = ( X'PX )⁻¹ X'Py = ( X̂'X̂ )⁻¹ X̂'y.
The source of the "two-stage" name is that the estimator can be computed as follows:
• First regress X on Z, viz., Γ̂ = (Z'Z)⁻¹(Z'X) and X̂ = ZΓ̂ = PX.
• Second, regress y on X̂, viz., β̂ = (X̂'X̂)⁻¹X̂'y.
It is useful to scrutinize the projection X̂. Recall, X = [X_1, X_2] and Z = [X_1, Z_2]. Then
    X̂ = [X̂_1, X̂_2] = [PX_1, PX_2] = [X_1, PX_2] = [X_1, X̂_2],
since X_1 lies in the span of Z. Thus in the second stage, we regress y on X_1 and X̂_2. So only the endogenous variables X_2 are replaced by their fitted values:
    X̂_2 = Z_1 Γ̂_12 + Z_2 Γ̂_22.
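A minimal sketch of the two-stage computation in Python with NumPy follows (an illustration under the usual conventions: y is an n-vector, X is n × k, Z is n × ℓ; the classical homoskedastic standard errors are reported purely for concreteness):

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: project X on Z, then regress y on the
    fitted values. Returns coefficients and classical standard errors."""
    n, k = X.shape
    # First stage: Gamma_hat = (Z'Z)^{-1} Z'X and Xhat = Z Gamma_hat
    Gamma = np.linalg.solve(Z.T @ Z, Z.T @ X)
    Xhat = Z @ Gamma
    # Second stage: regress y on Xhat
    b = np.linalg.solve(Xhat.T @ Xhat, Xhat.T @ y)
    # Residuals must be computed with the original X, not with Xhat
    e = y - X @ b
    s2 = e @ e / (n - k)
    V = s2 * np.linalg.inv(Xhat.T @ Xhat)
    return b, np.sqrt(np.diag(V))
```

Note the design point embedded in the comment: the residuals used for the variance estimate are y − Xβ̂, not y − X̂β̂; running the second-stage regression mechanically and reading off its standard errors would use the wrong residuals.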
16.6 Bekker Asymptotics

Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is
    y = Xβ + e    (16.11)
    X = ZΓ + U    (16.12)
    ξ = (e, U)
    E(ξ | Z) = 0
    E(ξ'ξ | Z) = S.
As before, Z is n × ℓ so there are ℓ instruments.
First, let's analyze the approximate bias of OLS applied to (16.11). Using (16.12),
    E( (1/n) X'e ) = E(x_i e_i) = Γ'E(z_i e_i) + E(u_i e_i) = s_21
and
    E( (1/n) X'X ) = E(x_i x_i') = Γ'E(z_i z_i')Γ + E(u_i z_i')Γ + Γ'E(z_i u_i') + E(u_i u_i') = Γ'QΓ + S_22
where Q = E(z_i z_i'). Hence by a first-order approximation
    E( β̂_OLS − β ) ≈ ( E( (1/n) X'X ) )⁻¹ E( (1/n) X'e ) = ( Γ'QΓ + S_22 )⁻¹ s_21    (16.13)
which is zero only when s_21 = 0 (when X is exogenous).
We now derive a similar result for the 2SLS estimator,
    β̂_2SLS = ( X'PX )⁻¹ ( X'Py ).
Let P = Z(Z'Z)⁻¹Z'. By the spectral decomposition of an idempotent matrix, P = HΛH' where Λ = diag(I_ℓ, 0). Let Q = H'ξS^{−1/2}, which satisfies E(Q'Q) = I, and partition Q = (q_1', Q_2')' where q_1 has ℓ rows. Hence
    E( (1/n) ξ'Pξ | Z ) = (1/n) S^{1/2}' E( Q'ΛQ | Z ) S^{1/2} = (1/n) S^{1/2}' E( q_1'q_1 ) S^{1/2} = (ℓ/n) S^{1/2}'S^{1/2} = αS
where
    α = ℓ/n.
Using (16.12) and this result,
    (1/n) E( X'Pe ) = (1/n) E( Γ'Z'e ) + (1/n) E( U'Pe ) = α s_21,
and
    (1/n) E( X'PX ) = Γ'E(z_i z_i')Γ + Γ'E(z_i u_i') + E(u_i z_i')Γ + (1/n) E( U'PU ) = Γ'QΓ + αS_22.
Together
    E( β̂_2SLS − β ) ≈ ( E( (1/n) X'PX ) )⁻¹ E( (1/n) X'Pe ) = α ( Γ'QΓ + αS_22 )⁻¹ s_21.    (16.14)
In general this is non-zero, except when s_21 = 0 (when X is exogenous). It is also close to zero when α = 0. Bekker (1994) pointed out that it also has the reverse implication — that when α = ℓ/n is large, the bias in the 2SLS estimator will be large. Indeed as α → 1, the expression in (16.14) approaches that in (16.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.
Bekker (1994) showed further that under the alternative asymptotic approximation that α is fixed as n → ∞ (so that the number of instruments goes to infinity proportionately with sample size) then the expression in (16.14) is the probability limit of β̂_2SLS − β.
16.7 Identification Failure

Recall the reduced form equation
    X_2 = Z_1 Γ_12 + Z_2 Γ_22 + U_2.
The parameter β fails to be identified if Γ_22 has deficient rank. The consequences of identification failure for inference are quite severe.
Take the simplest case where k = ℓ = 1 (so there is no Z_1). Then the model may be written as
    y_i = x_i β + e_i
    x_i = z_i γ + u_i
and Γ_22 = γ = E(z_i x_i)/E(z_i²). We see that β is identified if and only if γ ≠ 0, which occurs when E(x_i z_i) ≠ 0. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.
Suppose this condition fails, so E(x_i z_i) = 0. Then by the CLT
    (1/√n) Σ_{i=1}^n z_i e_i →d N_1 ~ N( 0, E(z_i² e_i²) )    (16.15)
    (1/√n) Σ_{i=1}^n z_i x_i = (1/√n) Σ_{i=1}^n z_i u_i →d N_2 ~ N( 0, E(z_i² u_i²) )    (16.16)
therefore
    β̂ − β = ( (1/√n) Σ_{i=1}^n z_i e_i ) / ( (1/√n) Σ_{i=1}^n z_i x_i ) →d N_1/N_2 ~ Cauchy,
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when Γ_22 is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz
    Γ_22 = n^{−1/2} C,
where C is a full rank matrix. The n^{−1/2} is picked because it provides just the right balancing to allow a rich distribution theory.
To see the consequences, once again take the simple case k = ℓ = 1. Here, the instrument z_i is weak for x_i if
    γ = n^{−1/2} c.
Then (16.15) is unaffected, but (16.16) instead takes the form
    (1/√n) Σ_{i=1}^n z_i x_i = (1/√n) Σ_{i=1}^n z_i² γ + (1/√n) Σ_{i=1}^n z_i u_i = (1/n) Σ_{i=1}^n z_i² c + (1/√n) Σ_{i=1}^n z_i u_i →d Qc + N_2
therefore
    β̂ − β →d N_1 / ( Qc + N_2 ).
As in the case of complete identification failure, we find that β̂ is inconsistent for β and the asymptotic distribution of β̂ is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.
The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form
    X_2 = Z_1 Γ_12 + Z_2 Γ_22 + U_2
and identification requires rank(Γ_22) = k_2. If k_2 = 1, this requires Γ_22 ≠ 0, which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of k_2 = 1 (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for X_2, construct the test of Γ_22 = 0, and at a minimum check that the test rejects H_0 : Γ_22 = 0.
When k_2 > 1, Γ_22 ≠ 0 is not sufficient for identification. It is not even sufficient that each column of Γ_22 is non-zero (each column corresponds to a distinct endogenous variable in X_2). So while a minimal check is to test that each column of Γ_22 is non-zero, this cannot be interpreted as definitive proof that Γ_22 has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for X_2, and attempt to assess the likelihood that Γ_22 has deficient rank.
Exercises

1. Consider the single equation model
    y_i = z_i β + e_i,
where y_i and z_i are both real-valued (1 × 1). Let β̂ denote the IV estimator of β using as an instrument a dummy variable d_i (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.

2. In the linear model
    y_i = x_i'β + e_i
    E(e_i | x_i) = 0
suppose σ_i² = E(e_i² | x_i) is known. Show that the GLS estimator of β can be written as an IV estimator using some instrument z_i. (Find an expression for z_i.)

3. Take the linear model
    y = Xβ + e.
Let the OLS estimator for β be β̂ and the OLS residual be ê = y − Xβ̂.
Let the IV estimator for β using some instrument Z be β̃ and the IV residual be ẽ = y − Xβ̃.
If X is indeed endogenous, will IV "fit" better than OLS, in the sense that ẽ'ẽ < ê'ê, at least in large samples?

4. The reduced form between the regressors x_i and instruments z_i takes the form
    x_i = Γ'z_i + u_i
or
    X = ZΓ + U
where x_i is k × 1, z_i is ℓ × 1, X is n × k, Z is n × ℓ, U is n × k, and Γ is ℓ × k. The parameter Γ is defined by the population moment condition
    E(z_i u_i') = 0.
Show that the method of moments estimator for Γ is Γ̂ = (Z'Z)⁻¹(Z'X).

5. In the structural model
    y = Xβ + e
    X = ZΓ + U
with Γ ℓ × k, ℓ ≥ k, we claim that β is identified (can be recovered from the reduced form) if rank(Γ) = k. Explain why this is true. That is, show that if rank(Γ) < k then β cannot be identified.

6. Take the linear model
    y_i = x_i β + e_i
    E(e_i | x_i) = 0,
where x_i and β are 1 × 1.
(a) Show that E(x_i e_i) = 0 and E(x_i² e_i) = 0. Is z_i = (x_i, x_i²)' a valid instrumental variable for estimation of β?
(b) Define the 2SLS estimator of β, using z_i as an instrument for x_i. How does this differ from OLS?
(c) Find the efficient GMM estimator of β based on the moment condition
    E( z_i (y_i − x_i β) ) = 0.
Does this differ from 2SLS and/or OLS?

7. Suppose that price and quantity are determined by the intersection of the linear demand and supply curves
    Demand: Q = a_0 + a_1 P + a_2 Y + e_1
    Supply:  Q = b_0 + b_1 P + b_2 W + e_2
where income (Y) and wage (W) are determined outside the market. In this model, are the parameters identified?

8. The data file card.dat is taken from Card (1995). There are 2215 observations with 29 variables, listed in card.pdf. We want to estimate a wage equation
    log(Wage) = β_0 + β_1 Educ + β_2 Exper + β_3 Exper² + β_4 South + β_5 Black + e
where Educ = Education (Years), Exper = Experience (Years), and South and Black are regional and racial dummy variables.
(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments: near2 (a dummy indicating that the observation lives near a 2-year college), fatheduc (the education, in years, of the father) and motheduc (the education, in years, of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the J statistic for overidentification.
(f) Discuss your findings.
Chapter 17

Univariate Time Series

A time series y_t is a process observed in sequence over time, t = 1, ..., T. To indicate the dependence on time, we adopt new notation, and use the subscript t to denote the individual observation, and T to denote the number of observations.
Because of the sequential nature of time series, we expect that y_t and y_{t−1} are not independent, so classical assumptions are not valid.
We can separate time series into two categories: univariate (y_t ∈ R is scalar); and multivariate (y_t ∈ R^m is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model for multivariate time series is vector autoregressions (VARs).

17.1 Stationarity and Ergodicity

Definition 17.1.1 {y_t} is covariance (weakly) stationary if
    E(y_t) = μ
is independent of t, and
    cov(y_t, y_{t−k}) = γ(k)
is independent of t for all k. γ(k) is called the autocovariance function.
    ρ(k) = γ(k)/γ(0) = corr(y_t, y_{t−k})
is the autocorrelation function.

Definition 17.1.2 {y_t} is strictly stationary if the joint distribution of (y_t, ..., y_{t−k}) is independent of t for all k.

Definition 17.1.3 A stationary time series is ergodic if γ(k) → 0 as k → ∞.

The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 17.1.1 If y_t is strictly stationary and ergodic and x_t = f(y_t, y_{t−1}, ...) is a random variable, then x_t is strictly stationary and ergodic.

Theorem 17.1.2 (Ergodic Theorem). If y_t is strictly stationary and ergodic and E|y_t| < ∞, then as T → ∞,
    (1/T) Σ_{t=1}^T y_t →p E(y_t).

This allows us to consistently estimate parameters using time-series moments:
The sample mean:
    μ̂ = (1/T) Σ_{t=1}^T y_t.
The sample autocovariance:
    γ̂(k) = (1/T) Σ_{t=1}^T ( y_t − μ̂ )( y_{t−k} − μ̂ ).
The sample autocorrelation:
    ρ̂(k) = γ̂(k)/γ̂(0).
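A minimal sketch of these estimators in Python with NumPy (an illustration; the 1/T scaling and the summation over the available pairs follow the conventions above, and the function name is ours):

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocovariances gamma_hat(k) and autocorrelations rho_hat(k)
    for k = 0, ..., max_lag."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()                      # demean with the sample mean
    gamma = np.array([np.sum(d[k:] * d[:T - k]) / T for k in range(max_lag + 1)])
    rho = gamma / gamma[0]
    return gamma, rho
```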
Theorem 17.1.3 If y_t is strictly stationary and ergodic and E y_t² < ∞, then as T → ∞,
1. μ̂ →p E(y_t);
2. γ̂(k) →p γ(k);
3. ρ̂(k) →p ρ(k).

Proof of Theorem 17.1.3. Part (1) is a direct consequence of the Ergodic theorem. For Part (2), note that
    γ̂(k) = (1/T) Σ_{t=1}^T ( y_t − μ̂ )( y_{t−k} − μ̂ )
          = (1/T) Σ_{t=1}^T y_t y_{t−k} − (1/T) Σ_{t=1}^T y_t μ̂ − (1/T) Σ_{t=1}^T y_{t−k} μ̂ + μ̂².
By Theorem 17.1.1 above, the sequence y_t y_{t−k} is strictly stationary and ergodic, and it has a finite mean by the assumption that E y_t² < ∞. Thus an application of the Ergodic Theorem yields
    (1/T) Σ_{t=1}^T y_t y_{t−k} →p E( y_t y_{t−k} ).
Thus
    γ̂(k) →p E( y_t y_{t−k} ) − μ² − μ² + μ² = E( y_t y_{t−k} ) − μ² = γ(k).
Part (3) follows by the continuous mapping theorem: ρ̂(k) = γ̂(k)/γ̂(0) →p γ(k)/γ(0) = ρ(k).
17.2 Autoregressions

In time-series, the series {..., y_1, y_2, ..., y_T, ...} are jointly random. We consider the conditional expectation
    E( y_t | F_{t−1} )
where F_{t−1} = {y_{t−1}, y_{t−2}, ...} is the past history of the series.
An autoregressive (AR) model specifies that only a finite number of past lags matter:
    E( y_t | F_{t−1} ) = E( y_t | y_{t−1}, ..., y_{t−k} ).
A linear AR model (the most common type used in practice) specifies linearity:
    E( y_t | F_{t−1} ) = α + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ··· + ρ_k y_{t−k}.
Letting
    e_t = y_t − E( y_t | F_{t−1} ),
then we have the autoregressive model
    y_t = α + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ··· + ρ_k y_{t−k} + e_t
    E( e_t | F_{t−1} ) = 0.
The last property defines a special time-series process.

Definition 17.2.1 e_t is a martingale difference sequence (MDS) if E( e_t | F_{t−1} ) = 0.

Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset returns should be the sum of a constant plus a MDS.
The MDS property for the regression error plays the same role in a time-series regression as does the conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more important in the time-series context, as it is difficult to derive distribution theories without this property.
A useful property of a MDS is that e_t is uncorrelated with any function of the lagged information F_{t−1}. Thus for k > 0, E( y_{t−k} e_t ) = 0.
17.3 Stationarity of AR(1) Process

A mean-zero AR(1) is
    y_t = ρ y_{t−1} + e_t.
Assume that e_t is iid, E(e_t) = 0 and E e_t² = σ² < ∞.
By back-substitution, we find
    y_t = e_t + ρ e_{t−1} + ρ² e_{t−2} + ... = Σ_{k=0}^∞ ρ^k e_{t−k}.
Loosely speaking, this series converges if the sequence ρ^k e_{t−k} gets small as k → ∞. This occurs when |ρ| < 1.

Theorem 17.3.1 If and only if |ρ| < 1 then y_t is strictly stationary and ergodic.

We can compute the moments of y_t using the infinite sum:
    E y_t = Σ_{k=0}^∞ ρ^k E( e_{t−k} ) = 0
    var(y_t) = Σ_{k=0}^∞ ρ^{2k} var( e_{t−k} ) = σ² / (1 − ρ²).
If the equation for y_t has an intercept, the above results are unchanged, except that the mean of y_t can be computed from the relationship
    E y_t = α + ρ E y_{t−1},
and solving for E y_t = E y_{t−1} we find E y_t = α/(1 − ρ).
17.4 Lag Operator

An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Definition 17.4.1 The lag operator L satisfies L y_t = y_{t−1}.

Defining L² = LL, we see that L² y_t = L y_{t−1} = y_{t−2}. In general, L^k y_t = y_{t−k}.
The AR(1) model can be written in the format
    y_t − ρ y_{t−1} = e_t
or
    (1 − ρL) y_t = e_t.
The operator ρ(L) = (1 − ρL) is a polynomial in the operator L. We say that the root of the polynomial is 1/ρ, since ρ(z) = 0 when z = 1/ρ. We call ρ(L) the autoregressive polynomial of y_t.
From Theorem 17.3.1, an AR(1) is stationary iff |ρ| < 1. Note that an equivalent way to say this is that an AR(1) is stationary iff the root of the autoregressive polynomial is larger than one (in absolute value).
17.5 Stationarity of AR(k)

The AR(k) model is
    y_t = ρ_1 y_{t−1} + ρ_2 y_{t−2} + ··· + ρ_k y_{t−k} + e_t.
Using the lag operator,
    y_t − ρ_1 L y_t − ρ_2 L² y_t − ··· − ρ_k L^k y_t = e_t,
or
    ρ(L) y_t = e_t
where
    ρ(L) = 1 − ρ_1 L − ρ_2 L² − ··· − ρ_k L^k.
We call ρ(L) the autoregressive polynomial of y_t.
The Fundamental Theorem of Algebra says that any polynomial can be factored as
    ρ(z) = ( 1 − λ_1⁻¹ z )( 1 − λ_2⁻¹ z ) ··· ( 1 − λ_k⁻¹ z )
where the λ_1, ..., λ_k are the complex roots of ρ(z), which satisfy ρ(λ_j) = 0.
We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let |λ| denote the modulus of a complex number λ.

Theorem 17.5.1 The AR(k) is strictly stationary and ergodic if and only if |λ_j| > 1 for all j.

One way of stating this is that "All roots lie outside the unit circle."
If one of the roots equals 1, we say that ρ(L), and hence y_t, "has a unit root". This is a special case of non-stationarity, and is of great interest in applied time series.
17.6 Estimation

Let
    x_t = ( 1, y_{t−1}, y_{t−2}, ..., y_{t−k} )'
    β = ( α, ρ_1, ρ_2, ..., ρ_k )'.
Then the model can be written as
    y_t = x_t'β + e_t.
The OLS estimator is
    β̂ = ( X'X )⁻¹ X'y.
To study β̂, it is helpful to define the process u_t = x_t e_t. Note that u_t is a MDS, since
    E( u_t | F_{t−1} ) = E( x_t e_t | F_{t−1} ) = x_t E( e_t | F_{t−1} ) = 0.
By Theorem 17.1.1, it is also strictly stationary and ergodic. Thus
    (1/T) Σ_{t=1}^T x_t e_t = (1/T) Σ_{t=1}^T u_t →p E( u_t ) = 0.    (17.1)
The vector x_t is strictly stationary and ergodic, and by Theorem 17.1.1, so is x_t x_t'. Thus by the Ergodic Theorem,
    (1/T) Σ_{t=1}^T x_t x_t' →p E( x_t x_t' ) = Q.
Combined with (17.1) and the continuous mapping theorem, we see that
    β̂ = β + ( (1/T) Σ_{t=1}^T x_t x_t' )⁻¹ ( (1/T) Σ_{t=1}^T x_t e_t ) →p β + Q⁻¹ 0 = β.
We have shown the following:

Theorem 17.6.1 If the AR(k) process y_t is strictly stationary and ergodic and E y_t² < ∞, then β̂ →p β as T → ∞.
17.7 Asymptotic Distribution

Theorem 17.7.1 MDS CLT. If u_t is a strictly stationary and ergodic MDS and E( u_t u_t' ) = Ω < ∞, then as T → ∞,
    (1/√T) Σ_{t=1}^T u_t →d N(0, Ω).

Since x_t e_t is a MDS, we can apply Theorem 17.7.1 to see that
    (1/√T) Σ_{t=1}^T x_t e_t →d N(0, Ω),
where
    Ω = E( x_t x_t' e_t² ).

Theorem 17.7.2 If the AR(k) process y_t is strictly stationary and ergodic and E y_t⁴ < ∞, then as T → ∞,
    √T ( β̂ − β ) →d N( 0, Q⁻¹ΩQ⁻¹ ).

This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.
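A compact sketch of AR(k) estimation with the heteroskedasticity-robust covariance estimate Q̂⁻¹Ω̂Q̂⁻¹/T, in Python with NumPy (the function name and the convention of dropping the first k observations are illustrative):

```python
import numpy as np

def ar_ols(y, k):
    """OLS estimation of an AR(k) with intercept and robust standard errors."""
    y = np.asarray(y, dtype=float)
    T = len(y) - k                                    # effective sample size
    Y = y[k:]
    X = np.column_stack([np.ones(T)] +
                        [y[k - j:len(y) - j] for j in range(1, k + 1)])
    beta = np.linalg.solve(X.T @ X, X.T @ Y)
    e = Y - X @ beta
    Q = X.T @ X / T
    Omega = (X * e[:, None]).T @ (X * e[:, None]) / T
    Qinv = np.linalg.inv(Q)
    V = Qinv @ Omega @ Qinv / T
    return beta, np.sqrt(np.diag(V))
```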
17.8 Bootstrap for Autoregressions

In the non-parametric bootstrap, we constructed the bootstrap sample by randomly resampling from the data values {y_t, x_t}. This creates an iid bootstrap sample. Clearly, this cannot work in a time-series application, as this imposes inappropriate independence.
Briefly, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap.
1. Estimate β̂ and residuals ê_t.
2. Fix an initial condition (y_{−k+1}, y_{−k+2}, ..., y_0).
3. Simulate iid draws e_i* from the empirical distribution of the residuals {ê_1, ..., ê_T}.
4. Create the bootstrap series y_t* by the recursive formula
    y_t* = α̂ + ρ̂_1 y*_{t−1} + ρ̂_2 y*_{t−2} + ··· + ρ̂_k y*_{t−k} + e_t*.
This construction imposes homoskedasticity on the errors e_i*, which may be different than the properties of the actual e_i. It also presumes that the AR(k) structure is the truth (see the sketch at the end of this section).

Method 2: Block Resampling
1. Divide the sample into T/m blocks of length m.
2. Resample complete blocks. For each simulated sample, draw T/m blocks.
3. Paste the blocks together to create the bootstrap time-series y_t*.
4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model misspecification.
5. The results may be sensitive to the block length, and the way that the data are partitioned into blocks.
6. May not work well in small samples.
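Here is a minimal sketch of Method 1 in Python with NumPy (illustrative only: the helper names are ours, the first k sample values are used as the fixed initial condition, and each bootstrap series is re-estimated by OLS):

```python
import numpy as np

def _ar_fit(y, k):
    """OLS fit of an AR(k) with intercept; returns (beta, residuals)."""
    X = np.column_stack([np.ones(len(y) - k)] +
                        [y[k - j:len(y) - j] for j in range(1, k + 1)])
    beta = np.linalg.solve(X.T @ X, X.T @ y[k:])
    return beta, y[k:] - X @ beta

def ar_bootstrap(y, k, B=999, rng=None):
    """Model-based bootstrap for an AR(k): resample residuals, rebuild the
    series recursively, re-estimate. Returns the B bootstrap estimates."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    beta, e = _ar_fit(y, k)
    boot = np.empty((B, k + 1))
    for b in range(B):
        ystar = list(y[:k])                           # fixed initial conditions
        estar = rng.choice(e, size=len(y) - k, replace=True)
        for t in range(len(y) - k):
            lags = np.array(ystar[-k:][::-1])         # (y*_{t-1}, ..., y*_{t-k})
            ystar.append(beta[0] + beta[1:] @ lags + estar[t])
        boot[b], _ = _ar_fit(np.array(ystar), k)
    return boot
```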
17.9 Trend Stationarity

    y_t = μ_0 + μ_1 t + S_t    (17.2)
    S_t = ρ_1 S_{t−1} + ρ_2 S_{t−2} + ··· + ρ_k S_{t−k} + e_t,    (17.3)
or
    y_t = α_0 + α_1 t + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ··· + ρ_k y_{t−k} + e_t.    (17.4)
There are two essentially equivalent ways to estimate the autoregressive parameters (ρ_1, ..., ρ_k).
• You can estimate (17.4) by OLS.
• You can estimate (17.2)-(17.3) sequentially by OLS. That is, first estimate (17.2), get the residual Ŝ_t, and then perform regression (17.3) replacing S_t with Ŝ_t. This procedure is sometimes called Detrending.
The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects
There are three popular methods to deal with seasonal data.
• Include dummy variables for each season. This presumes that "seasonality" does not change over the sample.
• Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.
• First apply a seasonal differencing operator. If s is the number of seasons (typically s = 4 or s = 12),
    Δ_s y_t = y_t − y_{t−s},
or the season-to-season change. The series Δ_s y_t is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.
17.10 Testing for Omitted Serial Correlation

For simplicity, let the null hypothesis be an AR(1):
    y_t = α + ρ y_{t−1} + u_t.    (17.5)
We are interested in whether the error u_t is serially correlated. We model this as an AR(1):
    u_t = θ u_{t−1} + e_t    (17.6)
with e_t a MDS. The hypothesis of no omitted serial correlation is
    H_0 : θ = 0
    H_1 : θ ≠ 0.
We want to test H_0 against H_1.
To combine (17.5) and (17.6), we take (17.5) and lag the equation once:
    y_{t−1} = α + ρ y_{t−2} + u_{t−1}.
We then multiply this by θ and subtract from (17.5), to find
    y_t − θ y_{t−1} = α − θα + ρ y_{t−1} − θρ y_{t−2} + u_t − θ u_{t−1},
or
    y_t = α(1 − θ) + (ρ + θ) y_{t−1} − θρ y_{t−2} + e_t = AR(2).
Thus under H_0, y_t is an AR(1), and under H_1 it is an AR(2). H_0 may be expressed as the restriction that the coefficient on y_{t−2} is zero.
An appropriate test of H_0 against H_1 is therefore a Wald test that the coefficient on y_{t−2} is zero. (A simple exclusion test.)
In general, if the null hypothesis is that y_t is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative y_t is an AR(k+m), and this is equivalent to the restriction that the coefficients on y_{t−k−1}, ..., y_{t−k−m} are jointly zero. An appropriate test is the Wald test of this restriction.
CHAPTER 17. UNIVARIATE TIME SERIES 283
17.11 Model Selection
What is the appropriate choice of $k$ in practice? This is a problem of model selection.
One approach to model selection is to choose $k$ based on Wald tests.
Another is to minimize the AIC or BIC information criterion, e.g.
$$ AIC(k) = \log \hat\sigma^2(k) + \frac{2k}{T}, $$
where $\hat\sigma^2(k)$ is the estimated residual variance from an AR(k).
One ambiguity in defining the AIC criterion is that the sample available for estimation changes as $k$ changes. (If you increase $k$, you need more initial conditions.) This can induce strange behavior in the AIC. The best remedy is to fix an upper value $\bar k$, and then reserve the first $\bar k$ observations as initial conditions, and then estimate the models AR(1), AR(2), ..., AR($\bar k$) on this (unified) sample.
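As an illustration of this procedure, the following is a minimal sketch in Python with numpy; the series y and the upper bound kbar are hypothetical inputs. It reserves the first kbar observations as initial conditions and computes AIC(k) for each AR(k) on the common sample.

import numpy as np

def ar_aic(y, kbar):
    """AIC for AR(1),...,AR(kbar), all fit on the same sample that
    reserves the first kbar observations as initial conditions."""
    y = np.asarray(y, dtype=float)
    T = len(y) - kbar                       # common estimation sample size
    aic = {}
    for k in range(1, kbar + 1):
        # regressors: intercept and lags 1..k, aligned with y_t for t >= kbar
        X = np.column_stack([np.ones(T)] +
                            [y[kbar - j: len(y) - j] for j in range(1, k + 1)])
        yt = y[kbar:]
        beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
        sigma2 = np.mean((yt - X @ beta) ** 2)   # estimated residual variance
        aic[k] = np.log(sigma2) + 2 * k / T
    return aic

# usage (illustrative): aic = ar_aic(y, kbar=8); k_hat = min(aic, key=aic.get)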
17.12 Autoregressive Unit Roots
The AR(k) model is
$$ \rho(L) y_t = \mu + e_t $$
$$ \rho(L) = 1 - \rho_1 L - \cdots - \rho_k L^k. $$
As we discussed before, $y_t$ has a unit root when $\rho(1) = 0$, or
$$ \rho_1 + \rho_2 + \cdots + \rho_k = 1. $$
In this case, $y_t$ is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.
A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
$$ \Delta y_t = \mu + \alpha_0 y_{t-1} + \alpha_1 \Delta y_{t-1} + \cdots + \alpha_{k-1} \Delta y_{t-(k-1)} + e_t. \tag{17.7} $$
These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter $\alpha_0$ summarizes the information about the unit root, since $\rho(1) = -\alpha_0$. To see this, observe that the lag polynomial for the $y_t$ computed from (17.7) is
$$ (1-L) - \alpha_0 L - \alpha_1 (L - L^2) - \cdots - \alpha_{k-1} (L^{k-1} - L^k). $$
But this must equal $\rho(L)$, as the models are equivalent. Thus
$$ \rho(1) = (1-1) - \alpha_0 - (1-1) - \cdots - (1-1) = -\alpha_0. $$
Hence, the hypothesis of a unit root in $y_t$ can be stated as
$$ H_0 : \alpha_0 = 0. $$
Note that the model is stationary if $\alpha_0 < 0$. So the natural alternative is
$$ H_1 : \alpha_0 < 0. $$
Under $H_0$, the model for $y_t$ is
$$ \Delta y_t = \mu + \alpha_1 \Delta y_{t-1} + \cdots + \alpha_{k-1} \Delta y_{t-(k-1)} + e_t, $$
which is an AR(k-1) in the first-difference $\Delta y_t$. Thus if $y_t$ has a (single) unit root, then $\Delta y_t$ is a stationary AR process. Because of this property, we say that if $y_t$ is non-stationary but $\Delta^d y_t$ is stationary, then $y_t$ is "integrated of order $d$", or I($d$). Thus a time series with a unit root is I(1).
Since $\alpha_0$ is the parameter of a linear regression, the natural test statistic is the t-statistic for $H_0$ from OLS estimation of (17.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.
It would seem natural to assess the significance of the ADF statistic using the normal table. However, under $H_0$, $y_t$ is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.

Theorem 17.12.1 Dickey-Fuller Theorem.
Assume $\alpha_0 = 0$. As $T \to \infty$,
$$ T \hat\alpha_0 \xrightarrow{d} (1 - \alpha_1 - \alpha_2 - \cdots - \alpha_{k-1})\, DF_\alpha $$
$$ ADF = \frac{\hat\alpha_0}{s(\hat\alpha_0)} \xrightarrow{d} DF_t. $$

The limit distributions $DF_\alpha$ and $DF_t$ are non-normal. They are skewed to the left, and have negative means.
The first result states that $\hat\alpha_0$ converges to its true value (of zero) at rate $T$, rather than the conventional rate of $T^{1/2}$. This is called a "super-consistent" rate of convergence.
The second result states that the t-statistic for $\hat\alpha_0$ converges to a limit distribution which is non-normal, but does not depend on the parameters $\alpha$. This distribution has been extensively tabulated, and may be used for testing the hypothesis $H_0$. Note: The standard error $s(\hat\alpha_0)$ is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.
Since the alternative hypothesis is one-sided, the ADF test rejects $H_0$ in favor of $H_1$ when $ADF < c$, where $c$ is the critical value from the ADF table. If the test rejects $H_0$, this means that the evidence points to $y_t$ being stationary. If the test does not reject $H_0$, a common conclusion is that the data suggest that $y_t$ is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
$$ \Delta y_t = \mu_1 + \mu_2 t + \alpha_0 y_{t-1} + \alpha_1 \Delta y_{t-1} + \cdots + \alpha_{k-1} \Delta y_{t-(k-1)} + e_t. \tag{17.8} $$
This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, stock prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe this data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for $\alpha_0$ from OLS estimation of (17.8).
If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.
Most texts also include the critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice where intercepts are always included.
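The following is a minimal sketch of the ADF regression in Python with numpy, assuming a univariate series y and a chosen lag order k (both hypothetical inputs). It returns the t-ratio for $\alpha_0$, which must then be compared with the appropriate Dickey-Fuller table (intercept only, or intercept plus trend), not the normal table.

import numpy as np

def adf_stat(y, k, trend=False):
    """t-ratio for alpha_0 in the Dickey-Fuller regression (17.7) or (17.8)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy) - (k - 1)                        # usable sample after lags
    cols = [np.ones(T), y[k - 1: len(y) - 1]]    # intercept, y_{t-1}
    if trend:
        cols.insert(1, np.arange(1, T + 1, dtype=float))
    for j in range(1, k):                        # lagged differences
        cols.append(dy[k - 1 - j: len(dy) - j])
    X = np.column_stack(cols)
    d = dy[k - 1:]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ d
    e = d - X @ beta
    s2 = e @ e / (T - X.shape[1])                # homoskedastic variance estimate
    idx = 2 if trend else 1                      # position of alpha_0
    se = np.sqrt(s2 * XtX_inv[idx, idx])
    return beta[idx] / se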
Chapter 18
Multivariate Time Series
A multivariate time series $y_t$ is an $m \times 1$ vector process. Let $F_{t-1} = (y_{t-1}, y_{t-2}, \ldots)$ be all lagged information at time $t$. The typical goal is to find the conditional expectation $E(y_t \mid F_{t-1})$. Note that since $y_t$ is a vector, this conditional expectation is also a vector.
18.1 Vector Autoregressions (VARs)
A VAR model specifies that the conditional mean is a function of only a finite number of lags:
$$ E(y_t \mid F_{t-1}) = E(y_t \mid y_{t-1}, \ldots, y_{t-k}). $$
A linear VAR specifies that this conditional mean is linear in the arguments:
$$ E(y_t \mid y_{t-1}, \ldots, y_{t-k}) = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k}. $$
Observe that $a_0$ is $m \times 1$, and each of $A_1$ through $A_k$ are $m \times m$ matrices.
Defining the $m \times 1$ regression error
$$ e_t = y_t - E(y_t \mid F_{t-1}), $$
we have the VAR model
$$ y_t = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k} + e_t $$
$$ E(e_t \mid F_{t-1}) = 0. $$
Alternatively, defining the $(mk + 1) \times 1$ vector
$$ x_t = \begin{pmatrix} 1 \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-k} \end{pmatrix} $$
and the $m \times (mk + 1)$ matrix
$$ A = \begin{pmatrix} a_0 & A_1 & A_2 & \cdots & A_k \end{pmatrix}, $$
then
$$ y_t = A x_t + e_t. $$
The VAR model is a system of $m$ equations. One way to write this is to let $a_j'$ be the $j$th row of $A$. Then the VAR system can be written as the equations
$$ y_{jt} = a_j' x_t + e_{jt}. $$
Unrestricted VARs were introduced to econometrics by Sims (1980).
18.2 Estimation
Consider the moment conditions
$$ E(x_t e_{jt}) = 0, $$
$j = 1, \ldots, m$. These are implied by the VAR model, either as a regression, or as a linear projection.
The GMM estimator corresponding to these moment conditions is equation-by-equation OLS
$$ \hat a_j = (X'X)^{-1} X' y_j. $$
An alternative way to compute this is as follows. Note that
$$ \hat a_j' = y_j' X (X'X)^{-1}. $$
And if we stack these to create the estimate $\hat A$, we find
$$ \hat A = \begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_m' \end{pmatrix} X (X'X)^{-1} = Y' X (X'X)^{-1}, $$
where
$$ Y = \begin{pmatrix} y_1 & y_2 & \cdots & y_m \end{pmatrix} $$
is the $T \times m$ matrix of the stacked $y_t'$.
This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
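A minimal sketch of equation-by-equation OLS estimation of a VAR(k) in Python with numpy is given below; the data matrix Y (T rows, m columns) and lag order k are hypothetical inputs, and the routine returns the stacked coefficient matrix $\hat A = [\hat a_0\ \hat A_1\ \cdots\ \hat A_k]$.

import numpy as np

def var_ols(Y, k):
    """Equation-by-equation OLS for a VAR(k).
    Y is T x m; returns Ahat of shape m x (m*k + 1), plus the regressor
    matrix X and the trimmed dependent block Yk."""
    Y = np.asarray(Y, dtype=float)
    T, m = Y.shape
    rows = []
    for t in range(k, T):
        # x_t = (1, y_{t-1}', ..., y_{t-k}')'
        x_t = np.concatenate([[1.0]] + [Y[t - j] for j in range(1, k + 1)])
        rows.append(x_t)
    X = np.array(rows)                    # (T-k) x (m*k + 1)
    Yk = Y[k:]                            # (T-k) x m
    # lstsq solves X b = Yk column-by-column, i.e. b = (X'X)^{-1} X'Yk,
    # so Ahat = Yk'X(X'X)^{-1} is its transpose
    Ahat = np.linalg.lstsq(X, Yk, rcond=None)[0].T
    return Ahat, X, Yk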
18.3 Restricted VARs
The unrestricted VAR is a system of $m$ equations, each with the same set of regressors. A restricted VAR imposes restrictions on the system. For example, some regressors may be excluded from some of the equations. Restrictions may be imposed on individual equations, or across equations. The GMM framework gives a convenient method to impose such restrictions on estimation.
18.4 Single Equation from a VAR
Often, we are only interested in a single equation out of a VAR system. This takes the form
$$ y_{jt} = a_j' x_t + e_{jt}, $$
and $x_t$ consists of lagged values of $y_{jt}$ and the other $y_{lt}$'s. In this case, it is convenient to re-define the variables. Let $y_t = y_{jt}$, and $z_t$ be the other variables. Let $e_t = e_{jt}$ and $\beta = a_j$. Then the single equation takes the form
$$ y_t = x_t'\beta + e_t, \tag{18.1} $$
and
$$ x_t = \begin{pmatrix} 1 & y_{t-1} & \cdots & y_{t-k} & z_{t-1}' & \cdots & z_{t-k}' \end{pmatrix}'. $$
This is just a conventional regression with time series data.
18.5 Testing for Omitted Serial Correlation
Consider the problem of testing for omitted serial correlation in equation (18.1). Suppose that $e_t$ is an AR(1). Then
$$ y_t = x_t'\beta + e_t $$
$$ e_t = \theta e_{t-1} + u_t \tag{18.2} $$
$$ E(u_t \mid F_{t-1}) = 0. $$
Then the null and alternative are
$$ H_0 : \theta = 0 \qquad H_1 : \theta \neq 0. $$
Take the equation $y_t = x_t'\beta + e_t$, and subtract off the equation once lagged multiplied by $\theta$, to get
$$ y_t - \theta y_{t-1} = \left( x_t'\beta + e_t \right) - \theta\left( x_{t-1}'\beta + e_{t-1} \right) = x_t'\beta - \theta x_{t-1}'\beta + e_t - \theta e_{t-1}, $$
or
$$ y_t = \theta y_{t-1} + x_t'\beta + x_{t-1}'\gamma + u_t, \tag{18.3} $$
which is a valid regression model.
So testing $H_0$ versus $H_1$ is equivalent to testing for the significance of adding $(y_{t-1}, x_{t-1})$ to the regression. This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted serial correlation is to test the significance of extra lagged values of the dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which once was very popular, and is still routinely reported by conventional regression packages. The DW test is appropriate only when the regression $y_t = x_t'\beta + e_t$ is not dynamic (has no lagged values on the RHS), and $e_t$ is iid $N(0, \sigma^2)$. Otherwise it is invalid.
Another interesting fact is that (18.2) is a special case of (18.3), under the restriction $\gamma = -\beta\theta$. This restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (18.2) may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (18.2) is uncommon in recent applications.
18.6 Selection of Lag Length in a VAR
If you want a data-dependent rule to pick the lag length $k$ in a VAR, you may either use a testing-based approach (using, for example, the Wald statistic), or an information criterion approach. The formulas for the AIC and BIC are
$$ AIC(k) = \log \det\left( \hat\Omega(k) \right) + 2\frac{p}{T} $$
$$ BIC(k) = \log \det\left( \hat\Omega(k) \right) + \frac{p \log(T)}{T} $$
$$ \hat\Omega(k) = \frac{1}{T} \sum_{t=1}^T \hat e_t(k)\hat e_t(k)' $$
$$ p = m(km + 1) $$
where $p$ is the number of parameters in the model, and $\hat e_t(k)$ is the OLS residual vector from the model with $k$ lags. The log determinant is the criterion from the multivariate normal likelihood.
18.7 Granger Causality
Partition the data vector into $(y_t, z_t)$. Define the two information sets
$$ F_{1t} = \left( y_t, y_{t-1}, y_{t-2}, \ldots \right) $$
$$ F_{2t} = \left( y_t, z_t, y_{t-1}, z_{t-1}, y_{t-2}, z_{t-2}, \ldots \right) $$
The information set $F_{1t}$ is generated only by the history of $y_t$, and the information set $F_{2t}$ is generated by both $y_t$ and $z_t$. The latter has more information.
We say that $z_t$ does not Granger-cause $y_t$ if
$$ E(y_t \mid F_{1,t-1}) = E(y_t \mid F_{2,t-1}). $$
That is, conditional on information in lagged $y_t$, lagged $z_t$ does not help to forecast $y_t$. If this condition does not hold, then we say that $z_t$ Granger-causes $y_t$.
The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If $z_t$ is some sort of forecast of the future, such as a futures price, then $z_t$ may help to forecast $y_t$ even though it does not "cause" $y_t$. This definition of causality was developed by Granger (1969) and Sims (1972).
In a linear VAR, the equation for $y_t$ is
$$ y_t = \alpha + \rho_1 y_{t-1} + \cdots + \rho_k y_{t-k} + z_{t-1}'\gamma_1 + \cdots + z_{t-k}'\gamma_k + e_t. $$
In this equation, $z_t$ does not Granger-cause $y_t$ if and only if
$$ H_0 : \gamma_1 = \gamma_2 = \cdots = \gamma_k = 0. $$
This may be tested using an exclusion (Wald) test.
This idea can be applied to blocks of variables. That is, $y_t$ and/or $z_t$ can be vectors. The hypothesis can be tested by using the appropriate multivariate Wald test.
If it is found that $z_t$ does not Granger-cause $y_t$, then we deduce that our time-series model of $E(y_t \mid F_{t-1})$ does not require the use of $z_t$. Note, however, that $z_t$ may still be useful to explain other features of $y_t$, such as the conditional variance.
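A sketch of this exclusion test for a single equation is given below (Python with numpy and scipy; the arrays y and z and the lag length k are hypothetical inputs). It uses the homoskedastic Wald statistic; a heteroskedasticity-robust covariance estimate could be substituted.

import numpy as np
from scipy import stats

def granger_wald(y, z, k):
    """Wald (exclusion) test that lags 1..k of z do not help predict y,
    in the regression of y_t on an intercept, lags of y, and lags of z."""
    y = np.asarray(y, float)
    Z = np.atleast_2d(np.asarray(z, float).T).T     # ensure T x q
    T, q = len(y), Z.shape[1]
    rows_y = [y[k - j: T - j] for j in range(1, k + 1)]
    rows_z = [Z[k - j: T - j] for j in range(1, k + 1)]
    X = np.column_stack([np.ones(T - k)] + rows_y + rows_z)
    yt = y[k:]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ yt
    e = yt - X @ beta
    s2 = e @ e / (len(yt) - X.shape[1])
    sel = np.arange(1 + k, 1 + k + k * q)           # coefficients on z lags
    V = s2 * XtX_inv[np.ix_(sel, sel)]
    W = beta[sel] @ np.linalg.solve(V, beta[sel])
    return W, stats.chi2.sf(W, k * q)               # statistic and p-value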
Clive W. J. Granger
Clive Granger (1934-2009) of England was one of the leading figures in time-series econometrics, and co-winner in 2003 of the Nobel Memorial Prize in Economic Sciences (along with Robert Engle). In addition to formalizing the definition of causality known as Granger causality, he invented the concept of cointegration, introduced spectral methods into econometrics, and formalized methods for the combination of forecasts.
18.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and
Granger (1987).
Definition 18.8.1 The $m \times 1$ series $y_t$ is cointegrated if $y_t$ is I(1) yet there exists $\beta$, $m \times r$, of rank $r$, such that $z_t = \beta' y_t$ is I(0). The $r$ vectors in $\beta$ are called the cointegrating vectors.

If the series $y_t$ is not cointegrated, then $r = 0$. If $r = m$, then $y_t$ is I(0). For $0 < r < m$, $y_t$ is I(1) and cointegrated.
In some cases, it may be believed that $\beta$ is known a priori. Often, $\beta = (1\ {-1})'$. For example, if $y_t$ is a pair of interest rates, then $\beta = (1\ {-1})'$ specifies that the spread (the difference in returns) is stationary. If $y_t = (\log(C)\ \log(I))'$, then $\beta = (1\ {-1})'$ specifies that $\log(C/I)$ is stationary.
In other cases, $\beta$ may not be known.
If $y_t$ is cointegrated with a single cointegrating vector ($r = 1$), then it turns out that $\beta$ can be consistently estimated by an OLS regression of one component of $y_t$ on the others. Thus write $y_t = (y_{1t}, y_{2t})$ and $\beta = (\beta_1\ \beta_2)$ and normalize $\beta_1 = 1$. Then $\hat\beta_2 = (y_2'y_2)^{-1} y_2'y_1 \xrightarrow{p} \beta_2$. Furthermore this estimation is super-consistent: $T(\hat\beta_2 - \beta_2) \xrightarrow{d} \text{Limit}$, as first shown by Stock (1987). This is not, in general, a good method to estimate $\beta$, but it is useful in the construction of alternative estimators and tests.
We are often interested in testing the hypothesis of no cointegration:
$$ H_0 : r = 0 \qquad H_1 : r > 0. $$
Suppose that $\beta$ is known, so $z_t = \beta' y_t$ is known. Then under $H_0$, $z_t$ is I(1), yet under $H_1$, $z_t$ is I(0). Thus $H_0$ can be tested using a univariate ADF test on $z_t$.
When $\beta$ is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual $\hat z_t = \hat\beta' y_t$, from OLS of $y_{1t}$ on $y_{2t}$. Their justification was Stock's result that $\hat\beta$ is super-consistent under $H_1$. Under $H_0$, however, $\hat\beta$ is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).
When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
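A minimal sketch of the Engle-Granger first step is given below (Python with numpy; y1 and y2 are hypothetical inputs). The second step applies an ADF-type regression to the residual, for example reusing the adf_stat sketch from Section 17.12 above, but the critical values must come from the Phillips-Ouliaris tabulation, not the standard ADF table.

import numpy as np

def engle_granger_resid(y1, y2, time_trend=False):
    """Step one of the Engle-Granger procedure: OLS of y1 on an intercept,
    y2 (and optionally a time trend); returns the residual zhat and the
    estimated cointegrating coefficients."""
    y1 = np.asarray(y1, float)
    Y2 = np.atleast_2d(np.asarray(y2, float).T).T    # ensure T x (m-1)
    T = len(y1)
    cols = [np.ones(T), Y2]
    if time_trend:
        cols.insert(1, np.arange(T, dtype=float))
    X = np.column_stack(cols)
    bhat, *_ = np.linalg.lstsq(X, y1, rcond=None)
    zhat = y1 - X @ bhat
    return zhat, bhat

# usage (illustrative):
# zhat, _ = engle_granger_resid(y1, y2)
# stat = adf_stat(zhat, k=4)   # compare with Phillips-Ouliaris critical values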
18.9 Cointegrated VARs
We can write a VAR as
$$ A(L) y_t = e_t $$
$$ A(L) = I - A_1 L - A_2 L^2 - \cdots - A_k L^k $$
or alternatively as
$$ \Delta y_t = \Pi y_{t-1} + D(L) \Delta y_{t-1} + e_t $$
where
$$ \Pi = -A(1) = -I + A_1 + A_2 + \cdots + A_k. $$
Theorem 18.9.1 Granger Representation Theorem
$y_t$ is cointegrated with $m \times r$ $\beta$ if and only if $\mathrm{rank}(\Pi) = r$ and $\Pi = \alpha\beta'$ where $\alpha$ is $m \times r$, $\mathrm{rank}(\alpha) = r$.

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as
$$ \Delta y_t = \alpha\beta' y_{t-1} + D(L) \Delta y_{t-1} + e_t $$
$$ \Delta y_t = \alpha z_{t-1} + D(L) \Delta y_{t-1} + e_t. $$
If $\beta$ is known, this can be estimated by OLS of $\Delta y_t$ on $z_{t-1}$ and the lags of $\Delta y_t$.
If $\beta$ is unknown, then estimation is done by "reduced rank regression", which is least-squares subject to the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that $e_t$ is iid $N(0, \Omega)$.
One difficulty is that $\beta$ is not identified without normalization. When $r = 1$, we typically just normalize one element to equal unity. When $r > 1$, this does not work, and different authors have adopted different identification schemes.
In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for cointegration by testing the rank of $\Pi$. These tests are constructed as likelihood ratio (LR) tests. As they were discovered by Johansen (1988, 1991, 1995), they are typically called the "Johansen Max and Trace" tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.
Chapter 19
Limited Dependent Variables
A "limited dependent variable" $y$ is one which takes a "limited" set of values. The most common cases are
• Binary: $y \in \{0, 1\}$
• Multinomial: $y \in \{0, 1, 2, \ldots, k\}$
• Integer: $y \in \{0, 1, 2, \ldots\}$
• Censored: $y \in \mathbb{R}_+$
The traditional approach to the estimation of limited dependent variable (LDV) models is parametric maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function. A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional assumption. We will discuss only the first (parametric) approach, due to time constraints. Parametric models still constitute the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation, you would be advised to consider employing a semi-parametric estimation approach.
For the parametric approach, estimation is by MLE. A major practical issue is construction of the likelihood function.
19.1 Binary Choice
The dependent variable $y_i \in \{0, 1\}$. This represents a Yes/No outcome. Given some regressors $x_i$, the goal is to describe $\Pr(y_i = 1 \mid x_i)$, as this is the full conditional distribution.
The linear probability model specifies that
$$ \Pr(y_i = 1 \mid x_i) = x_i'\beta. $$
As $\Pr(y_i = 1 \mid x_i) = E(y_i \mid x_i)$, this yields the regression $y_i = x_i'\beta + e_i$, which can be estimated by OLS. However, the linear probability model does not impose the restriction that $0 \leq \Pr(y_i = 1 \mid x_i) \leq 1$. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.
The standard alternative is to use a function of the form
$$ \Pr(y_i = 1 \mid x_i) = F\left( x_i'\beta \right) $$
where $F(\cdot)$ is a known CDF, typically assumed to be symmetric about zero, so that $F(u) = 1 - F(-u)$. The two standard choices for $F$ are
• Logistic: $F(u) = (1 + e^{-u})^{-1}$.
• Normal: $F(u) = \Phi(u)$.
If $F$ is logistic, we call this the logit model, and if $F$ is normal, we call this the probit model.
This model is identical to the latent variable model
$$ y_i^* = x_i'\beta + e_i, \qquad e_i \sim F(\cdot) $$
$$ y_i = \begin{cases} 1 & \text{if } y_i^* > 0 \\ 0 & \text{otherwise.} \end{cases} $$
For then
$$ \Pr(y_i = 1 \mid x_i) = \Pr(y_i^* > 0 \mid x_i) = \Pr\left( x_i'\beta + e_i > 0 \mid x_i \right) = \Pr\left( e_i > -x_i'\beta \mid x_i \right) = 1 - F\left( -x_i'\beta \right) = F\left( x_i'\beta \right). $$
Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if $y$ is Bernoulli, such that $\Pr(y = 1) = p$ and $\Pr(y = 0) = 1 - p$, then we can write the density of $y$ as
$$ f(y) = p^y (1 - p)^{1 - y}, \qquad y = 0, 1. $$
In the binary choice model, $y_i$ is conditionally Bernoulli with $\Pr(y_i = 1 \mid x_i) = p_i = F(x_i'\beta)$. Thus the conditional density is
$$ f(y_i \mid x_i) = p_i^{y_i}(1 - p_i)^{1 - y_i} = F\left( x_i'\beta \right)^{y_i}\left( 1 - F\left( x_i'\beta \right) \right)^{1 - y_i}. $$
Hence the log-likelihood function is
$$ \log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{i=1}^n \left[ y_i \log F\left( x_i'\beta \right) + (1 - y_i)\log\left( 1 - F\left( x_i'\beta \right) \right) \right] = \sum_{y_i = 1} \log F\left( x_i'\beta \right) + \sum_{y_i = 0} \log\left( 1 - F\left( x_i'\beta \right) \right). $$
The MLE $\hat\beta$ is the value of $\beta$ which maximizes $\log L(\beta)$. Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
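As an illustration, the following sketch computes the logit MLE by numerical optimization (Python with numpy and scipy.optimize; the arrays y and X are hypothetical inputs, and X is assumed to include a constant). It is a minimal sketch rather than a full estimation routine; standard errors are omitted.

import numpy as np
from scipy.optimize import minimize

def logit_loglike(beta, y, X):
    """Log-likelihood of the logit model with F(u) = 1/(1 + exp(-u))."""
    u = X @ beta
    # log F(u) = -log(1 + exp(-u)) and log(1 - F(u)) = -log(1 + exp(u)),
    # written with logaddexp for numerical stability
    return np.sum(y * -np.logaddexp(0.0, -u) + (1 - y) * -np.logaddexp(0.0, u))

def logit_mle(y, X):
    """Maximize the log-likelihood by quasi-Newton."""
    res = minimize(lambda b: -logit_loglike(b, y, X),
                   np.zeros(X.shape[1]), method="BFGS")
    return res.x

For the probit model the same code applies with F replaced by the normal CDF (scipy.stats.norm.logcdf), at some cost in numerical care.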
19.2 Count Data
If $y \in \{0, 1, 2, \ldots\}$, a typical approach is to employ Poisson regression. This model specifies that
$$ \Pr(y_i = k \mid x_i) = \frac{\exp(-\lambda_i)\lambda_i^k}{k!}, \qquad k = 0, 1, 2, \ldots $$
$$ \lambda_i = \exp(x_i'\beta). $$
The conditional density is the Poisson with parameter $\lambda_i$. The functional form for $\lambda_i$ has been picked to ensure that $\lambda_i > 0$.
The log-likelihood function is
$$ \log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{i=1}^n \left( -\exp(x_i'\beta) + y_i x_i'\beta - \log(y_i!) \right). $$
The MLE is the value $\hat\beta$ which maximizes $\log L(\beta)$.
Since
$$ E(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta) $$
is the conditional mean, this motivates the label Poisson "regression."
Also observe that the model implies that
$$ \mathrm{var}(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta), $$
so the model imposes the restriction that the conditional mean and variance of $y_i$ are the same. This may be considered restrictive. A generalization is the negative binomial.
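A minimal sketch of Poisson regression by MLE is given below (Python with numpy and scipy; y and X are hypothetical inputs).

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_loglike(beta, y, X):
    """Poisson regression log-likelihood with lambda_i = exp(x_i'beta)."""
    xb = X @ beta
    return np.sum(-np.exp(xb) + y * xb - gammaln(y + 1))   # log(y!) = gammaln(y+1)

def poisson_mle(y, X):
    res = minimize(lambda b: -poisson_loglike(b, y, X),
                   np.zeros(X.shape[1]), method="BFGS")
    return res.x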
19.3 Censored Data
The idea of "censoring" is that some data above or below a threshold are mis-reported at the threshold. Thus the model is that there is some latent process $y_i^*$ with unbounded support, but we observe only
$$ y_i = \begin{cases} y_i^* & \text{if } y_i^* \geq 0 \\ 0 & \text{if } y_i^* < 0. \end{cases} \tag{19.1} $$
(This is written for the case of the threshold being zero; any known value can substitute.) The observed data $y_i$ therefore come from a mixed continuous/discrete distribution.
Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher) of data at the boundary of the sample support. The censoring process may be explicit in data collection, or it may be a by-product of economic constraints.
An example of censoring in data collection is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.
The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model
$$ y_i^* = x_i'\beta + e_i, \qquad e_i \overset{iid}{\sim} N(0, \sigma^2) $$
with the observed variable $y_i$ generated by the censoring equation (19.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.
The naive approach to estimate $\beta$ is to regress $y_i$ on $x_i$. This does not work because regression estimates $E(y_i \mid x_i)$, not $E(y_i^* \mid x_i) = x_i'\beta$, and the latter is of interest. Thus OLS will be biased for the parameter of interest $\beta$.
[Note: it is still possible to estimate $E(y_i \mid x_i)$ by LS techniques. The Tobit framework postulates that this is not inherently interesting, that the parameter $\beta$ is defined by an alternative statistical structure.]
Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is
$$ \Pr(y_i = 0 \mid x_i) = \Pr(y_i^* < 0 \mid x_i) = \Pr\left( x_i'\beta + e_i < 0 \mid x_i \right) = \Pr\left( \frac{e_i}{\sigma} < -\frac{x_i'\beta}{\sigma} \,\Big|\, x_i \right) = \Phi\left( -\frac{x_i'\beta}{\sigma} \right). $$
The conditional density function above zero is normal:
$$ \sigma^{-1}\phi\left( \frac{y - x_i'\beta}{\sigma} \right), \qquad y > 0. $$
Therefore, the density function for $y \geq 0$ can be written as
$$ f(y \mid x_i) = \Phi\left( -\frac{x_i'\beta}{\sigma} \right)^{1(y = 0)}\left[ \sigma^{-1}\phi\left( \frac{y - x_i'\beta}{\sigma} \right) \right]^{1(y > 0)}, $$
where $1(\cdot)$ is the indicator function.
Hence the log-likelihood is a mixture of the probit and the normal:
$$ \log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{y_i = 0} \log \Phi\left( -\frac{x_i'\beta}{\sigma} \right) + \sum_{y_i > 0} \log\left[ \sigma^{-1}\phi\left( \frac{y_i - x_i'\beta}{\sigma} \right) \right]. $$
The MLE is the value $\hat\beta$ which maximizes $\log L(\beta)$.
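The following sketch writes the Tobit log-likelihood and maximizes it numerically (Python with numpy and scipy; y and X are hypothetical inputs). The scale is parameterized as $\log\sigma$ so the positivity constraint is imposed automatically; this reparameterization is a convenience of the sketch, not part of the model.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglike(theta, y, X):
    """Negative Tobit log-likelihood; theta = (beta, log sigma)."""
    beta, sigma = theta[:-1], np.exp(theta[-1])
    xb = X @ beta
    cens = (y <= 0)
    ll = np.sum(norm.logcdf(-xb[cens] / sigma))                       # censored part
    ll += np.sum(norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma))
    return -ll

def tobit_mle(y, X):
    start = np.concatenate([np.zeros(X.shape[1]), [np.log(np.std(y) + 1e-8)]])
    res = minimize(tobit_negloglike, start, args=(y, X), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])      # (beta_hat, sigma_hat)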
19.4 Sample Selection
The problem of sample selection arises when the sample is a non-random selection of potential observations. This occurs when the observed data are systematically different from the population of interest. For example, if you ask for volunteers for an experiment, and wish to extrapolate the effects of the experiment to a general population, you should worry that the people who volunteer may be systematically different from the general population. This has great relevance for the evaluation of anti-poverty and job-training programs, where the goal is to assess the effect of "training" on the general population, not just on the volunteers.
A simple sample selection model can be written as the latent model
$$ y_i = x_i'\beta + e_{1i} $$
$$ T_i = 1\left( z_i'\gamma + e_{0i} > 0 \right) $$
where $1(\cdot)$ is the indicator function. The dependent variable $y_i$ is observed if (and only if) $T_i = 1$. Else it is unobserved.
For example, $y_i$ could be a wage, which can be observed only if a person is employed. The equation for $T_i$ is an equation specifying the probability that the person is employed.
The model is often completed by specifying that the errors are jointly normal
$$ \begin{pmatrix} e_{0i} \\ e_{1i} \end{pmatrix} \sim N\left( 0, \begin{pmatrix} 1 & \rho \\ \rho & \sigma^2 \end{pmatrix} \right). $$
It is presumed that we observe $\{x_i, z_i, T_i\}$ for all observations.
Under the normality assumption,
$$ e_{1i} = \rho e_{0i} + v_i, $$
where $v_i$ is independent of $e_{0i} \sim N(0, 1)$. A useful fact about the standard normal distribution is that
$$ E(e_{0i} \mid e_{0i} > -x) = \lambda(x) = \frac{\phi(x)}{\Phi(x)}, $$
and the function $\lambda(x)$ is called the inverse Mills ratio.
The naive estimator of $\beta$ is OLS regression of $y_i$ on $x_i$ for those observations for which $y_i$ is available. The problem is that this is equivalent to conditioning on the event $\{T_i = 1\}$. However,
$$ E(e_{1i} \mid T_i = 1, z_i) = E\left( e_{1i} \mid \{e_{0i} > -z_i'\gamma\}, z_i \right) = \rho E\left( e_{0i} \mid \{e_{0i} > -z_i'\gamma\}, z_i \right) + E\left( v_i \mid \{e_{0i} > -z_i'\gamma\}, z_i \right) = \rho\lambda\left( z_i'\gamma \right), $$
which is non-zero. Thus
$$ e_{1i} = \rho\lambda\left( z_i'\gamma \right) + u_i, $$
where
$$ E(u_i \mid T_i = 1, z_i) = 0. $$
Hence
$$ y_i = x_i'\beta + \rho\lambda\left( z_i'\gamma \right) + u_i \tag{19.2} $$
is a valid regression equation for the observations for which $T_i = 1$.
Heckman (1979) observed that we could consistently estimate $\beta$ and $\rho$ from this equation, if $\gamma$ were known. It is unknown, but also can be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows.
• Estimate $\hat\gamma$ from a Probit, using regressors $z_i$. The binary dependent variable is $T_i$.
• Estimate $(\hat\beta, \hat\rho)$ from OLS of $y_i$ on $x_i$ and $\lambda(z_i'\hat\gamma)$.
• The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.
The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some modern econometric research is exploring how to relax the normality assumption.
The estimator can also work quite poorly if $\lambda(z_i'\hat\gamma)$ does not have much in-sample variation. This can happen if the Probit equation does not "explain" much about the selection choice. Another potential problem is that if $z_i = x_i$, then $\lambda(z_i'\hat\gamma)$ can be highly collinear with $x_i$, so the second-step OLS estimator will not be able to precisely estimate $\beta$. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in $z_i$ which is not in $x_i$. If this is valid, it will ensure that $\lambda(z_i'\hat\gamma)$ is not collinear with $x_i$, and hence improve the second-stage estimator's precision.
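A minimal sketch of the two-step Heckit calculation is given below (Python with numpy and scipy; y, X, T, Z are hypothetical inputs, with the entries of y for unselected observations ignored). As noted above, the second-step OLS standard errors are not valid and are not computed here.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_mle(T, Z):
    """Probit MLE for the selection indicator T on regressors Z."""
    def negll(g):
        u = Z @ g
        return -np.sum(T * norm.logcdf(u) + (1 - T) * norm.logcdf(-u))
    return minimize(negll, np.zeros(Z.shape[1]), method="BFGS").x

def heckit_two_step(y, X, T, Z):
    """Heckman two-step: probit for selection, then OLS of y on (X, lambda)
    over the selected sample."""
    gamma_hat = probit_mle(T, Z)
    zg = Z @ gamma_hat
    lam = norm.pdf(zg) / norm.cdf(zg)          # inverse Mills ratio
    sel = (T == 1)
    W = np.column_stack([X[sel], lam[sel]])
    coef, *_ = np.linalg.lstsq(W, y[sel], rcond=None)
    return coef[:-1], coef[-1], gamma_hat      # (beta_hat, rho_hat, gamma_hat)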
Chapter 20
Panel Data
A panel is a set of observations on individuals, collected over time. An observation is the pair $\{y_{it}, x_{it}\}$, where the $i$ subscript denotes the individual, and the $t$ subscript denotes time. A panel may be balanced:
$$ \{y_{it}, x_{it}\} : \quad t = 1, \ldots, T; \ i = 1, \ldots, n, $$
or unbalanced:
$$ \{y_{it}, x_{it}\} : \quad \text{for } i = 1, \ldots, n, \quad t = \underline{t}_i, \ldots, \bar t_i. $$
20.1 Individual-Effects Model
The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression
$$ y_{it} = x_{it}'\beta + u_i + e_{it}. $$
The typical maintained assumptions are that the individuals $i$ are mutually independent, that $u_i$ and $e_{it}$ are independent, that $e_{it}$ is iid across individuals and time, and that $e_{it}$ is uncorrelated with $x_{it}$.
OLS of $y_{it}$ on $x_{it}$ is called pooled estimation. It is consistent if
$$ E(x_{it} u_i) = 0. \tag{20.1} $$
If this condition fails, then OLS is inconsistent. (20.1) fails if the individual-specific unobserved effect $u_i$ is correlated with the observed explanatory variables $x_{it}$. This is often believed to be plausible if $u_i$ is an omitted variable.
If (20.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears a poor estimation choice.
Condition (20.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.
20.2 Fixed Effects
This is the most common technique for estimation of non-dynamic linear panel regressions.
The motivation is to allow $u_i$ to be arbitrary, and have arbitrary correlation with $x_i$. The goal is to eliminate $u_i$ from the estimator, and thus achieve invariance.
There are several derivations of the estimator.
First, let
$$ d_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{else,} \end{cases} $$
and
$$ d_i = \begin{pmatrix} d_{i1} \\ \vdots \\ d_{in} \end{pmatrix}, $$
an $n \times 1$ dummy vector with a "1" in the $i$th place. Let
$$ u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}. $$
Then note that
$$ u_i = d_i'u, $$
and
$$ y_{it} = x_{it}'\beta + d_i'u + e_{it}. \tag{20.2} $$
Observe that
$$ E(e_{it} \mid x_{it}, d_i) = 0, $$
so (20.2) is a valid regression, with $d_i$ as a regressor along with $x_i$.
OLS on (20.2) yields the estimator $(\hat\beta, \hat u)$. Conventional inference applies.
Observe that
• This is generally consistent.
• If $x_{it}$ contains an intercept, it will be collinear with $d_i$, so the intercept is typically omitted from $x_{it}$.
• Any regressor in $x_{it}$ which is constant over time for all individuals (e.g., their gender) will be collinear with $d_i$, so will have to be omitted.
• There are $n + k$ regression parameters, which is quite large as typically $n$ is very large.
Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of $\beta$ proceeds by the FWL theorem. Stacking the observations together:
$$ y = X\beta + Du + e, $$
then by the FWL theorem,
$$ \hat\beta = \left( X'(I - P_D)X \right)^{-1}\left( X'(I - P_D)y \right) = \left( X^{*\prime}X^* \right)^{-1}\left( X^{*\prime}y^* \right), $$
where
$$ y^* = y - D(D'D)^{-1}D'y $$
$$ X^* = X - D(D'D)^{-1}D'X. $$
Since the regression of $y_{it}$ on $d_i$ is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean $\bar y_i$, and the residual is the demeaned value
$$ y_{it}^* = y_{it} - \bar y_i. $$
The fixed effects estimator $\hat\beta$ is OLS of $y_{it}^*$ on $x_{it}^*$, the dependent variable and regressors in deviation-from-mean form.
Another derivation of the estimator is to take the equation
$$ y_{it} = x_{it}'\beta + u_i + e_{it}, $$
and then take individual-specific means by taking the average for the $i$th individual:
$$ \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\bar t_i} y_{it} = \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\bar t_i} x_{it}'\beta + u_i + \frac{1}{T_i}\sum_{t=\underline{t}_i}^{\bar t_i} e_{it} $$
or
$$ \bar y_i = \bar x_i'\beta + u_i + \bar e_i. $$
Subtracting, we find
$$ y_{it}^* = x_{it}^{*\prime}\beta + e_{it}^*, $$
which is free of the individual effect $u_i$.
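A minimal sketch of the within (fixed-effects) estimator is given below (Python with numpy; y, X and the individual identifier array ids are hypothetical inputs). It demeans by individual and then applies OLS, which is exactly the deviation-from-mean calculation described above.

import numpy as np

def within_estimator(y, X, ids):
    """Fixed-effects (within) estimator: demean y and X by individual,
    then run OLS on the demeaned data."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    ids = np.asarray(ids)
    y_star = y.copy()
    X_star = X.copy()
    for i in np.unique(ids):
        rows = (ids == i)
        y_star[rows] -= y[rows].mean()
        X_star[rows] -= X[rows].mean(axis=0)
    beta_hat, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta_hat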
20.3 Dynamic Panel Regression
A dynamic panel regression has a lagged dependent variable
$$ y_{it} = \alpha y_{it-1} + x_{it}'\beta + u_i + e_{it}. \tag{20.3} $$
This is a model suitable for studying the dynamic behavior of individual agents.
Unfortunately, the fixed effects estimator is inconsistent, at least if $T$ is held finite as $n \to \infty$. This is because the sample mean of $y_{it-1}$ is correlated with that of $e_{it}$.
The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (20.3) eliminates the individual-specific effect:
$$ \Delta y_{it} = \alpha \Delta y_{it-1} + \Delta x_{it}'\beta + \Delta e_{it}. \tag{20.4} $$
However, if $e_{it}$ is iid, then it will be correlated with $\Delta y_{it-1}$:
$$ E(\Delta y_{it-1}\Delta e_{it}) = E\left( (y_{it-1} - y_{it-2})(e_{it} - e_{it-1}) \right) = -E(y_{it-1}e_{it-1}) = -\sigma_e^2. $$
So OLS on (20.4) will be inconsistent.
But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as $y_{it-2}$ is uncorrelated with $\Delta e_{it}$. Thus values of $y_{it-k}$, $k \geq 2$, are valid instruments.
Hence a valid estimator of $\alpha$ and $\beta$ is to estimate (20.4) by IV using $y_{it-2}$ as an instrument for $\Delta y_{it-1}$ (which is just identified). Alternatively, GMM using $y_{it-2}$ and $y_{it-3}$ as instruments (which is overidentified, but loses a time-series observation).
A more sophisticated GMM estimator recognizes that for time-periods later in the sample, there are more instruments available, so the instrument list should be different for each equation. This is conveniently organized by the GMM principle, as this enables the moments from the different time-periods to be stacked together to create a list of all the moment conditions. A simple application of GMM yields the parameter estimates and standard errors.
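As an illustration, the following sketch implements the just-identified IV estimator for the purely autoregressive case (Python with numpy; the balanced panel array y_panel is a hypothetical input). Regressors $\Delta x_{it}$ would be added to both the equation and the instrument list in applications.

import numpy as np

def dynamic_panel_iv(y_panel):
    """Just-identified IV for Delta y_it = alpha * Delta y_it-1 + Delta e_it,
    using y_it-2 as the instrument. y_panel is an n x T array (balanced panel)."""
    Y = np.asarray(y_panel, float)
    n, T = Y.shape
    dep, reg, inst = [], [], []
    for t in range(2, T):                        # need Delta y_{t-1} and y_{t-2}
        dep.append(Y[:, t] - Y[:, t - 1])        # Delta y_it
        reg.append(Y[:, t - 1] - Y[:, t - 2])    # Delta y_it-1
        inst.append(Y[:, t - 2])                 # instrument y_it-2
    d = np.concatenate(dep)
    x = np.concatenate(reg)
    z = np.concatenate(inst)
    return (z @ d) / (z @ x)                     # (Z'X)^{-1} Z'y in the scalar case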
Chapter 21
Nonparametric Density Estimation
21.1 Kernel Density Estimation
Let $X$ be a random variable with continuous distribution $F(x)$ and density $f(x) = \frac{d}{dx}F(x)$. The goal is to estimate $f(x)$ from a random sample $(X_1, \ldots, X_n)$. While $F(x)$ can be estimated by the EDF $\hat F(x) = n^{-1}\sum_{i=1}^n 1(X_i \leq x)$, we cannot define $\frac{d}{dx}\hat F(x)$ since $\hat F(x)$ is a step function. The standard nonparametric method to estimate $f(x)$ is based on smoothing using a kernel.
While we are typically interested in estimating the entire function $f(x)$, we can simply focus on the problem where $x$ is a specific fixed number, and then see how the method generalizes to estimating the entire function.

Definition 21.1.1 $K(u)$ is a second-order kernel function if it is a symmetric zero-mean density function.

Three common choices for kernels include the Normal
$$ K(u) = \frac{1}{\sqrt{2\pi}}\exp\left( -\frac{u^2}{2} \right) $$
the Epanechnikov
$$ K(u) = \begin{cases} \frac{3}{4}\left( 1 - u^2 \right), & |u| \leq 1 \\ 0, & |u| > 1 \end{cases} $$
and the Biweight or Quartic
$$ K(u) = \begin{cases} \frac{15}{16}\left( 1 - u^2 \right)^2, & |u| \leq 1 \\ 0, & |u| > 1 \end{cases} $$
In practice, the choice between these three rarely makes a meaningful difference in the estimates.
The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth $h > 0$. Let
$$ K_h(u) = \frac{1}{h}K\left( \frac{u}{h} \right) $$
be the kernel $K$ rescaled by the bandwidth $h$. The kernel density estimator of $f(x)$ is
$$ \hat f(x) = \frac{1}{n}\sum_{i=1}^n K_h(X_i - x). $$
This estimator is the average of a set of weights. If a large number of the observations $X_i$ are near $x$, then the weights are relatively large and $\hat f(x)$ is larger. Conversely, if only a few $X_i$ are near $x$, then the weights are small and $\hat f(x)$ is small. The bandwidth $h$ controls the meaning of "near".
Interestingly, $\hat f(x)$ is a valid density. That is, $\hat f(x) \geq 0$ for all $x$, and
$$ \int_{-\infty}^{\infty}\hat f(x)dx = \int_{-\infty}^{\infty}\frac{1}{n}\sum_{i=1}^n K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^{\infty}K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^{\infty}K(u)du = 1 $$
where the second-to-last equality makes the change-of-variables $u = (X_i - x)/h$.
We can also calculate the moments of the density $\hat f(x)$. The mean is
$$ \int_{-\infty}^{\infty} x\hat f(x)dx = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^{\infty} xK_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^{\infty}(X_i - uh)K(u)du = \frac{1}{n}\sum_{i=1}^n X_i\int_{-\infty}^{\infty}K(u)du - \frac{1}{n}\sum_{i=1}^n h\int_{-\infty}^{\infty}uK(u)du = \frac{1}{n}\sum_{i=1}^n X_i $$
the sample mean of the $X_i$, where the second-to-last equality used the change-of-variables $u = (X_i - x)/h$ which has Jacobian $h$.
The second moment of the estimated density is
$$ \int_{-\infty}^{\infty} x^2\hat f(x)dx = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^{\infty} x^2 K_h(X_i - x)dx = \frac{1}{n}\sum_{i=1}^n \int_{-\infty}^{\infty}(X_i - uh)^2 K(u)du = \frac{1}{n}\sum_{i=1}^n X_i^2 - \frac{2}{n}\sum_{i=1}^n X_i h\int_{-\infty}^{\infty}uK(u)du + \frac{1}{n}\sum_{i=1}^n h^2\int_{-\infty}^{\infty}u^2 K(u)du = \frac{1}{n}\sum_{i=1}^n X_i^2 + h^2\sigma_K^2 $$
where
$$ \sigma_K^2 = \int_{-\infty}^{\infty} u^2 K(u)du $$
is the variance of the kernel. It follows that the variance of the density $\hat f(x)$ is
$$ \int_{-\infty}^{\infty} x^2\hat f(x)dx - \left( \int_{-\infty}^{\infty} x\hat f(x)dx \right)^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 + h^2\sigma_K^2 - \left( \frac{1}{n}\sum_{i=1}^n X_i \right)^2 = \hat\sigma^2 + h^2\sigma_K^2. $$
Thus the variance of the estimated density is inflated by the factor $h^2\sigma_K^2$ relative to the sample moment.
21.2 Asymptotic MSE for Kernel Estimates
For fixed $x$ and bandwidth $h$ observe that
$$ E K_h(X - x) = \int_{-\infty}^{\infty}K_h(z - x)f(z)dz = \int_{-\infty}^{\infty}K_h(uh)f(x + hu)h\,du = \int_{-\infty}^{\infty}K(u)f(x + hu)du $$
The second equality uses the change-of-variables $u = (z - x)/h$. The last expression shows that the expected value is an average of $f(z)$ locally about $x$.
This integral (typically) is not analytically solvable, so we approximate it using a second-order Taylor expansion of $f(x + hu)$ in the argument $hu$ about $hu = 0$, which is valid as $h \to 0$. Thus
$$ f(x + hu) \simeq f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2 u^2 $$
and therefore
$$ E K_h(X - x) \simeq \int_{-\infty}^{\infty}K(u)\left[ f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2 u^2 \right]du = f(x)\int_{-\infty}^{\infty}K(u)du + f'(x)h\int_{-\infty}^{\infty}K(u)u\,du + \frac{1}{2}f''(x)h^2\int_{-\infty}^{\infty}K(u)u^2du = f(x) + \frac{1}{2}f''(x)h^2\sigma_K^2. $$
The bias of $\hat f(x)$ is then
$$ \mathrm{Bias}(x) = E\hat f(x) - f(x) = \frac{1}{n}\sum_{i=1}^n E K_h(X_i - x) - f(x) = \frac{1}{2}f''(x)h^2\sigma_K^2. $$
We see that the bias of $\hat f(x)$ at $x$ depends on the second derivative $f''(x)$. The sharper the derivative, the greater the bias. Intuitively, the estimator $\hat f(x)$ smooths data local to $X_i = x$, so is estimating a smoothed version of $f(x)$. The bias results from this smoothing, and is larger the greater the curvature in $f(x)$.
We now examine the variance of $\hat f(x)$. Since it is an average of iid random variables, using first-order Taylor approximations and the fact that $n^{-1}$ is of smaller order than $(nh)^{-1}$,
$$ \mathrm{var}(x) = \frac{1}{n}\mathrm{var}\left( K_h(X_i - x) \right) = \frac{1}{n}E K_h(X_i - x)^2 - \frac{1}{n}\left( E K_h(X_i - x) \right)^2 \simeq \frac{1}{nh^2}\int_{-\infty}^{\infty}K\left( \frac{z - x}{h} \right)^2 f(z)dz - \frac{1}{n}f(x)^2 = \frac{1}{nh}\int_{-\infty}^{\infty}K(u)^2 f(x + hu)du \simeq \frac{f(x)}{nh}\int_{-\infty}^{\infty}K(u)^2du = \frac{f(x)R(K)}{nh}. $$
where $R(K) = \int_{-\infty}^{\infty}K(u)^2du$ is called the roughness of $K$.
Together, the asymptotic mean-squared error (AMSE) for fixed $x$ is the sum of the approximate squared bias and approximate variance
$$ AMSE_h(x) = \frac{1}{4}f''(x)^2 h^4\sigma_K^4 + \frac{f(x)R(K)}{nh}. $$
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
$$ AMISE_h = \int AMSE_h(x)dx = \frac{h^4\sigma_K^4 R(f'')}{4} + \frac{R(K)}{nh}, \tag{21.1} $$
where $R(f'') = \int\left( f''(x) \right)^2dx$ is the roughness of $f''$. Notice that the first term (the squared bias) is increasing in $h$ and the second term (the variance) is decreasing in $nh$. Thus for the AMISE to decline with $n$, we need $h \to 0$ but $nh \to \infty$. That is, $h$ must tend to zero, but at a slower rate than $n^{-1}$.
Equation (21.1) is an asymptotic approximation to the MSE. We define the asymptotically optimal bandwidth $h_0$ as the value which minimizes this approximate MSE. That is,
$$ h_0 = \arg\min_h AMISE_h. $$
It can be found by solving the first order condition
$$ \frac{d}{dh}AMISE_h = h^3\sigma_K^4 R(f'') - \frac{R(K)}{nh^2} = 0 $$
yielding
$$ h_0 = \left( \frac{R(K)}{\sigma_K^4 R(f'')} \right)^{1/5} n^{-1/5}. \tag{21.2} $$
This solution takes the form $h_0 = cn^{-1/5}$ where $c$ is a function of $K$ and $f$, but not of $n$. We thus say that the optimal bandwidth is of order $O(n^{-1/5})$. Note that this $h$ declines to zero, but at a very slow rate.
In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (21.2) depends on $R(K)$, $\sigma_K^2$, and $R(f'')$. The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.

$K$ | $\sigma_K^2 = \int_{-\infty}^{\infty}u^2 K(u)du$ | $R(K) = \int_{-\infty}^{\infty}K(u)^2du$
Gaussian | 1 | $1/(2\sqrt{\pi})$
Epanechnikov | $1/5$ | $3/5$
Biweight | $1/7$ | $5/7$

An obvious difficulty is that $R(f'')$ is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (21.2) but replaces $R(f'')$ with $\hat\sigma^{-5}R(\phi'')$, where $\phi$ is the N(0,1) density and $\hat\sigma^2$ is an estimate of $\sigma^2 = \mathrm{var}(X)$. This choice for $h$ gives an optimal rule when $f(x)$ is normal, and gives a nearly optimal rule when $f(x)$ is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb $h$ can be quite inefficient. We can calculate that $R(\phi'') = 3/(8\sqrt{\pi})$. Together with the above table, we find the reference rules for the three kernel functions introduced earlier.

Gaussian Kernel: $h_{rule} = 1.06\,\hat\sigma n^{-1/5}$
Epanechnikov Kernel: $h_{rule} = 2.34\,\hat\sigma n^{-1/5}$
Biweight (Quartic) Kernel: $h_{rule} = 2.78\,\hat\sigma n^{-1/5}$
Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate $\hat f(x)$. There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate $R(f'')$ in a first step, and then plug this estimate into the formula (21.2). This is more treacherous than may first appear, as the optimal $h$ for estimation of the roughness $R(f'')$ is quite different than the optimal $h$ for estimation of $f(x)$. However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of $h$ is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data have some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths, leading to dramatically undersmoothed estimates. Fortunately there are remedies; the best known is smoothed cross-validation, which is a close cousin of the bootstrap.
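For concreteness, the following sketch computes the Gaussian kernel density estimate with the rule-of-thumb bandwidth (Python with numpy; the sample X and evaluation grid x_grid are hypothetical inputs).

import numpy as np

def kde_gaussian(x_grid, X, h=None):
    """Gaussian kernel density estimate evaluated on x_grid.
    If h is not given, use the reference rule h = 1.06 * sigma_hat * n^(-1/5)."""
    X = np.asarray(X, float)
    n = len(X)
    if h is None:
        h = 1.06 * X.std(ddof=1) * n ** (-0.2)
    u = (X[None, :] - np.asarray(x_grid, float)[:, None]) / h    # (grid, n)
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)               # Gaussian kernel
    return K.mean(axis=1) / h, h

# usage (illustrative):
# fhat, h = kde_gaussian(np.linspace(X.min(), X.max(), 200), X)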
Appendix A
Matrix Algebra
A.1 Notation
A scalar $a$ is a single number.
A vector $a$ is a $k \times 1$ list of numbers, typically arranged in a column. We write this as
$$ a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix} $$
Equivalently, a vector $a$ is an element of Euclidean $k$-space, written as $a \in \mathbb{R}^k$. If $k = 1$ then $a$ is a scalar.
A matrix $A$ is a $k \times r$ rectangular array of numbers, written as
$$ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1r} \\ a_{21} & a_{22} & \cdots & a_{2r} \\ \vdots & \vdots & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kr} \end{pmatrix} $$
By convention $a_{ij}$ refers to the element in the $i$th row and $j$th column of $A$. If $r = 1$ then $A$ is a column vector. If $k = 1$ then $A$ is a row vector. If $r = k = 1$, then $A$ is a scalar.
A standard convention (which we will follow in this text whenever possible) is to denote scalars by lower-case italics ($a$), vectors by lower-case bold italics ($a$), and matrices by upper-case bold italics ($A$). Sometimes a matrix $A$ is denoted by the symbol $(a_{ij})$.
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
$$ A = \begin{pmatrix} a_1 & a_2 & \cdots & a_r \end{pmatrix} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{pmatrix} $$
where
$$ a_i = \begin{pmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ki} \end{pmatrix} $$
are column vectors and
$$ \alpha_j = \begin{pmatrix} a_{j1} & a_{j2} & \cdots & a_{jr} \end{pmatrix} $$
are row vectors.
The transpose of a matrix, denoted $A'$, is obtained by flipping the matrix on its diagonal. Thus
$$ A' = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{k1} \\ a_{12} & a_{22} & \cdots & a_{k2} \\ \vdots & \vdots & & \vdots \\ a_{1r} & a_{2r} & \cdots & a_{kr} \end{pmatrix} $$
Alternatively, letting $B = A'$, then $b_{ij} = a_{ji}$. Note that if $A$ is $k \times r$, then $A'$ is $r \times k$. If $a$ is a $k \times 1$ vector, then $a'$ is a $1 \times k$ row vector. An alternative notation for the transpose of $A$ is $A^\top$.
A matrix is square if $k = r$. A square matrix is symmetric if $A = A'$, which requires $a_{ij} = a_{ji}$. A square matrix is diagonal if the off-diagonal elements are all zero, so that $a_{ij} = 0$ if $i \neq j$. A square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.
An important diagonal matrix is the identity matrix, which has ones on the diagonal. The $k \times k$ identity matrix is denoted as
$$ I_k = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}. $$
A partitioned matrix takes the form
$$ A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1r} \\ A_{21} & A_{22} & \cdots & A_{2r} \\ \vdots & \vdots & & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kr} \end{pmatrix} $$
where the $A_{ij}$ denote matrices, vectors and/or scalars.
A.2 Matrix Addition
If the matrices $A = (a_{ij})$ and $B = (b_{ij})$ are of the same order, we define the sum
$$ A + B = (a_{ij} + b_{ij}). $$
Matrix addition follows the commutative and associative laws:
$$ A + B = B + A $$
$$ A + (B + C) = (A + B) + C. $$
A.3 Matrix Multiplication
If $A$ is $k \times r$ and $c$ is real, we define their product as
$$ Ac = cA = (a_{ij}c). $$
If $a$ and $b$ are both $k \times 1$, then their inner product is
$$ a'b = a_1 b_1 + a_2 b_2 + \cdots + a_k b_k = \sum_{j=1}^k a_j b_j. $$
Note that $a'b = b'a$. We say that two vectors $a$ and $b$ are orthogonal if $a'b = 0$.
If $A$ is $k \times r$ and $B$ is $r \times s$, so that the number of columns of $A$ equals the number of rows of $B$, we say that $A$ and $B$ are conformable. In this event the matrix product $AB$ is defined. Writing $A$ as a set of row vectors and $B$ as a set of column vectors (each of length $r$), then the matrix product is defined as
$$ AB = \begin{pmatrix} a_1' \\ a_2' \\ \vdots \\ a_k' \end{pmatrix}\begin{pmatrix} b_1 & b_2 & \cdots & b_s \end{pmatrix} = \begin{pmatrix} a_1'b_1 & a_1'b_2 & \cdots & a_1'b_s \\ a_2'b_1 & a_2'b_2 & \cdots & a_2'b_s \\ \vdots & \vdots & & \vdots \\ a_k'b_1 & a_k'b_2 & \cdots & a_k'b_s \end{pmatrix}. $$
Matrix multiplication is not commutative: in general $AB \neq BA$. However, it is associative and distributive:
$$ A(BC) = (AB)C $$
$$ A(B + C) = AB + AC $$
An alternative way to write the matrix product is to use matrix partitions. For example,
$$ AB = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}\begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{pmatrix}. $$
As another example,
$$ AB = \begin{pmatrix} A_1 & A_2 & \cdots & A_r \end{pmatrix}\begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_r \end{pmatrix} = A_1 B_1 + A_2 B_2 + \cdots + A_r B_r = \sum_{j=1}^r A_j B_j $$
An important property of the identity matrix is that if $A$ is $k \times r$, then $A I_r = A$ and $I_k A = A$.
The $k \times r$ matrix $A$, $r \leq k$, is called orthogonal if $A'A = I_r$.
A.4 Trace
The trace of a $k \times k$ square matrix $A$ is the sum of its diagonal elements
$$ \mathrm{tr}(A) = \sum_{i=1}^k a_{ii}. $$
Some straightforward properties for square matrices $A$ and $B$ and real $c$ are
$$ \mathrm{tr}(cA) = c\,\mathrm{tr}(A) $$
$$ \mathrm{tr}(A') = \mathrm{tr}(A) $$
$$ \mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B) $$
$$ \mathrm{tr}(I_k) = k. $$
Also, for $k \times r$ $A$ and $r \times k$ $B$ we have
$$ \mathrm{tr}(AB) = \mathrm{tr}(BA). \tag{A.1} $$
Indeed,
$$ \mathrm{tr}(AB) = \mathrm{tr}\begin{pmatrix} a_1'b_1 & a_1'b_2 & \cdots & a_1'b_k \\ a_2'b_1 & a_2'b_2 & \cdots & a_2'b_k \\ \vdots & \vdots & & \vdots \\ a_k'b_1 & a_k'b_2 & \cdots & a_k'b_k \end{pmatrix} = \sum_{i=1}^k a_i'b_i = \sum_{i=1}^k b_i'a_i = \mathrm{tr}(BA). $$
A.5 Rank and Inverse
The rank of the $k \times r$ matrix ($r \leq k$)
$$ A = \begin{pmatrix} a_1 & a_2 & \cdots & a_r \end{pmatrix} $$
is the number of linearly independent columns $a_j$, and is written as $\mathrm{rank}(A)$. We say that $A$ has full rank if $\mathrm{rank}(A) = r$.
A square $k \times k$ matrix $A$ is said to be nonsingular if it has full rank, i.e. $\mathrm{rank}(A) = k$. This means that there is no $k \times 1$ $c \neq 0$ such that $Ac = 0$.
If a square $k \times k$ matrix $A$ is nonsingular then there exists a unique $k \times k$ matrix $A^{-1}$, called the inverse of $A$, which satisfies
$$ AA^{-1} = A^{-1}A = I_k. $$
For non-singular $A$ and $C$, some important properties include
$$ AA^{-1} = A^{-1}A = I_k $$
$$ \left( A^{-1} \right)' = \left( A' \right)^{-1} $$
$$ (AC)^{-1} = C^{-1}A^{-1} $$
$$ (A + C)^{-1} = A^{-1}\left( A^{-1} + C^{-1} \right)^{-1}C^{-1} $$
$$ A^{-1} - (A + C)^{-1} = A^{-1}\left( A^{-1} + C^{-1} \right)^{-1}A^{-1} $$
Also, if $A$ is an orthogonal matrix, then $A^{-1} = A'$.
Another useful result for non-singular $A$ is known as the Woodbury matrix identity
$$ (A + BCD)^{-1} = A^{-1} - A^{-1}BC\left( C + CDA^{-1}BC \right)^{-1}CDA^{-1}. \tag{A.2} $$
In particular, for $C = -1$, $B = b$ and $D = b'$ for vector $b$ we find what is known as the Sherman-Morrison formula
$$ \left( A - bb' \right)^{-1} = A^{-1} + \left( 1 - b'A^{-1}b \right)^{-1}A^{-1}bb'A^{-1}. \tag{A.3} $$
The following fact about inverting partitioned matrices is quite useful.
$$ \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}^{-1} = \begin{pmatrix} A^{11} & A^{12} \\ A^{21} & A^{22} \end{pmatrix} = \begin{pmatrix} A_{11\cdot 2}^{-1} & -A_{11\cdot 2}^{-1}A_{12}A_{22}^{-1} \\ -A_{22\cdot 1}^{-1}A_{21}A_{11}^{-1} & A_{22\cdot 1}^{-1} \end{pmatrix} \tag{A.4} $$
where $A_{11\cdot 2} = A_{11} - A_{12}A_{22}^{-1}A_{21}$ and $A_{22\cdot 1} = A_{22} - A_{21}A_{11}^{-1}A_{12}$. There are alternative algebraic representations for the components. For example, using the Woodbury matrix identity you can show the following alternative expressions
$$ A^{11} = A_{11}^{-1} + A_{11}^{-1}A_{12}A_{22\cdot 1}^{-1}A_{21}A_{11}^{-1} $$
$$ A^{22} = A_{22}^{-1} + A_{22}^{-1}A_{21}A_{11\cdot 2}^{-1}A_{12}A_{22}^{-1} $$
$$ A^{12} = -A_{11}^{-1}A_{12}A_{22\cdot 1}^{-1} $$
$$ A^{21} = -A_{22}^{-1}A_{21}A_{11\cdot 2}^{-1} $$
Even if a matrix $A$ does not possess an inverse, we can still define the Moore-Penrose generalized inverse $A^{-}$ as the matrix which satisfies
$$ AA^{-}A = A $$
$$ A^{-}AA^{-} = A^{-} $$
$$ AA^{-} \text{ is symmetric} $$
$$ A^{-}A \text{ is symmetric} $$
For any matrix $A$, the Moore-Penrose generalized inverse $A^{-}$ exists and is unique.
For example, if
$$ A = \begin{pmatrix} A_{11} & 0 \\ 0 & 0 \end{pmatrix} $$
then
$$ A^{-} = \begin{pmatrix} A_{11}^{-} & 0 \\ 0 & 0 \end{pmatrix}. $$
A.6 Determinant
The determinant is a measure of the volume of a square matrix.
While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let $A = (a_{ij})$ be a general $k \times k$ matrix. Let $\pi = (j_1, \ldots, j_k)$ denote a permutation of $(1, \ldots, k)$. There are $k!$ such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order $(1, \ldots, k)$), and let $\varepsilon_\pi = +1$ if this count is even and $\varepsilon_\pi = -1$ if the count is odd. Then the determinant of $A$ is defined as
$$ \det A = \sum_\pi \varepsilon_\pi\, a_{1 j_1} a_{2 j_2} \cdots a_{k j_k}. $$
For example, if $A$ is $2 \times 2$, then the two permutations of $(1, 2)$ are $(1, 2)$ and $(2, 1)$, for which $\varepsilon_{(1,2)} = 1$ and $\varepsilon_{(2,1)} = -1$. Thus
$$ \det A = \varepsilon_{(1,2)}a_{11}a_{22} + \varepsilon_{(2,1)}a_{21}a_{12} = a_{11}a_{22} - a_{12}a_{21}. $$
Some properties include
• $\det(A) = \det(A')$
• $\det(cA) = c^k \det A$
• $\det(AB) = (\det A)(\det B)$
• $\det\left( A^{-1} \right) = (\det A)^{-1}$
• $\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = (\det D)\det\left( A - BD^{-1}C \right)$ if $\det D \neq 0$
• $\det A \neq 0$ if and only if $A$ is nonsingular.
• If $A$ is triangular (upper or lower), then $\det A = \prod_{i=1}^k a_{ii}$
• If $A$ is orthogonal, then $\det A = \pm 1$
A.7 Eigenvalues
The characteristic equation of a $k \times k$ square matrix $A$ is
$$ \det(A - \lambda I_k) = 0. $$
The left side is a polynomial of degree $k$ in $\lambda$ so it has exactly $k$ roots, which are not necessarily distinct and may be real or complex. They are called the latent roots or characteristic roots or eigenvalues of $A$. If $\lambda_i$ is an eigenvalue of $A$, then $A - \lambda_i I_k$ is singular so there exists a non-zero vector $h_i$ such that
$$ (A - \lambda_i I_k)h_i = 0. $$
The vector $h_i$ is called a latent vector or characteristic vector or eigenvector of $A$ corresponding to $\lambda_i$.
We now state some useful properties. Let $\lambda_i$ and $h_i$, $i = 1, \ldots, k$, denote the $k$ eigenvalues and eigenvectors of a square matrix $A$. Let $\Lambda$ be a diagonal matrix with the characteristic roots in the diagonal, and let $H = [h_1 \cdots h_k]$.
• $\det(A) = \prod_{i=1}^k \lambda_i$
• $\mathrm{tr}(A) = \sum_{i=1}^k \lambda_i$
• $A$ is non-singular if and only if all its characteristic roots are non-zero.
• If $A$ has distinct characteristic roots, there exists a nonsingular matrix $P$ such that $A = P^{-1}\Lambda P$ and $PAP^{-1} = \Lambda$.
• If $A$ is symmetric, then $A = H\Lambda H'$ and $H'AH = \Lambda$, and the characteristic roots are all real. $A = H\Lambda H'$ is called the spectral decomposition of a matrix.
• When the eigenvalues of $k \times k$ $A$ are real they are written in descending order $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k$. We also write $\lambda_{\min}(A) = \lambda_k = \min\{\lambda_\ell\}$ and $\lambda_{\max}(A) = \lambda_1 = \max\{\lambda_\ell\}$.
• The characteristic roots of $A^{-1}$ are $\lambda_1^{-1}, \lambda_2^{-1}, \ldots, \lambda_k^{-1}$.
• The matrix $H$ has the orthonormal properties $H'H = I$ and $HH' = I$.
• $H^{-1} = H'$ and $(H')^{-1} = H$
A.8 Positive Definiteness
We say that a $k \times k$ symmetric square matrix $A$ is positive semi-definite if for all $c \neq 0$, $c'Ac \geq 0$. This is written as $A \geq 0$. We say that $A$ is positive definite if for all $c \neq 0$, $c'Ac > 0$. This is written as $A > 0$.
Some properties include:
• If $A = G'G$ for some matrix $G$, then $A$ is positive semi-definite. (For any $c \neq 0$, $c'Ac = \alpha'\alpha \geq 0$ where $\alpha = Gc$.) If $G$ has full rank, then $A$ is positive definite.
• If $A$ is positive definite, then $A$ is non-singular and $A^{-1}$ exists. Furthermore, $A^{-1} > 0$.
• $A > 0$ if and only if it is symmetric and all its characteristic roots are positive.
• By the spectral decomposition, $A = H\Lambda H'$ where $H'H = I$ and $\Lambda$ is diagonal with non-negative diagonal elements. All diagonal elements of $\Lambda$ are strictly positive if (and only if) $A > 0$.
• If $A > 0$ then $A^{-1} = H\Lambda^{-1}H'$.
• If $A \geq 0$ and $\mathrm{rank}(A) = r < k$ then $A^{-} = H\Lambda^{-}H'$ where $A^{-}$ is the Moore-Penrose generalized inverse, and $\Lambda^{-} = \mathrm{diag}\left( \lambda_1^{-1}, \lambda_2^{-1}, \ldots, \lambda_r^{-1}, 0, \ldots, 0 \right)$.
• If $A \geq 0$ we can find a matrix $B$ such that $A = BB'$. We call $B$ a matrix square root of $A$. The matrix $B$ need not be unique. One way to construct $B$ is to use the spectral decomposition $A = H\Lambda H'$ where $\Lambda$ is diagonal, and then set $B = H\Lambda^{1/2}$. There is a unique root $B$ which is also positive semi-definite, $B \geq 0$.
A square matrix $A$ is idempotent if $AA = A$. If $A$ is idempotent and symmetric then all its characteristic roots equal either zero or one, and $A$ is thus positive semi-definite. To see this, note that we can write $A = H\Lambda H'$ where $H$ is orthogonal and $\Lambda$ contains the (real) characteristic roots. Then
$$ A = AA = H\Lambda H'H\Lambda H' = H\Lambda^2 H'. $$
By the uniqueness of the characteristic roots, we deduce that $\Lambda^2 = \Lambda$ and $\lambda_i^2 = \lambda_i$ for each $i$. Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent $A$ takes the form
$$ A = H\begin{pmatrix} I_{k-r} & 0 \\ 0 & 0 \end{pmatrix}H' \tag{A.5} $$
with $H'H = I_k$. Additionally, $\mathrm{tr}(A) = \mathrm{rank}(A)$.
If $A$ is idempotent then $I - A$ is also idempotent.
One useful fact is that if $A$ is idempotent then for any conformable vector $c$,
$$ c'Ac \leq c'c \tag{A.6} $$
$$ c'(I - A)c \leq c'c \tag{A.7} $$
To see this, note that
$$ c'c = c'Ac + c'(I - A)c. $$
Since $A$ and $I - A$ are idempotent, they are both positive semi-definite, so both $c'Ac$ and $c'(I - A)c$ are non-negative. Thus they must satisfy (A.6)-(A.7).
A.9 Matrix Calculus
Let $x = (x_1, \ldots, x_k)$ be $k \times 1$ and $g(x