
Bruce E. Hansen

© 2000, 2013

University of Wisconsin

www.ssc.wisc.edu/~bhansen

This Revision: January 18, 2013

Comments Welcome

This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.

Contents

Preface

1 Introduction
1.1 What is Econometrics?
1.2 The Probability Approach to Econometrics
1.3 Econometric Terms and Notation
1.4 Observational Data
1.5 Standard Data Structures
1.6 Sources for Economic Data
1.7 Econometric Software
1.8 Reading the Manuscript

2 Conditional Expectation and Projection
2.1 Introduction
2.2 The Distribution of Wages
2.3 Conditional Expectation
2.4 Log Differences*
2.5 Conditional Expectation Function
2.6 Continuous Variables
2.7 Law of Iterated Expectations
2.8 CEF Error
2.9 Regression Variance
2.10 Best Predictor
2.11 Conditional Variance
2.12 Homoskedasticity and Heteroskedasticity
2.13 Regression Derivative
2.14 Linear CEF
2.15 Linear CEF with Nonlinear Effects
2.16 Linear CEF with Dummy Variables
2.17 Best Linear Predictor
2.18 Linear Predictor Error Variance
2.19 Regression Coefficients
2.20 Regression Sub-Vectors
2.21 Coefficient Decomposition
2.22 Omitted Variable Bias
2.23 Best Linear Approximation
2.24 Normal Regression
2.25 Regression to the Mean
2.26 Reverse Regression
2.27 Limitations of the Best Linear Predictor
2.28 Random Coefficient Model
2.29 Causal Effects
2.31 Existence and Uniqueness of the Conditional Expectation*
2.32 Identification*
2.33 Technical Proofs*
Exercises

3 The Algebra of Least Squares
3.1 Introduction
3.2 Random Samples
3.3 Least Squares Estimator
3.4 Solving for Least Squares with One Regressor
3.5 Solving for Least Squares with Multiple Regressors
3.6 Illustration
3.7 Least Squares Residuals
3.8 Model in Matrix Notation
3.9 Projection Matrix
3.10 Orthogonal Projection
3.11 Estimation of Error Variance
3.12 Analysis of Variance
3.13 Regression Components
3.14 Residual Regression
3.15 Prediction Errors
3.16 Influential Observations
3.17 Normal Regression Model
3.18 Technical Proofs*
Exercises

4 Least Squares Regression
4.1 Introduction
4.2 Sample Mean
4.3 Linear Regression Model
4.4 Mean of Least-Squares Estimator
4.5 Variance of Least Squares Estimator
4.6 Gauss-Markov Theorem
4.7 Residuals
4.8 Estimation of Error Variance
4.9 Mean-Square Forecast Error
4.10 Covariance Matrix Estimation Under Homoskedasticity
4.11 Covariance Matrix Estimation Under Heteroskedasticity
4.12 Standard Errors
4.13 Measures of Fit
4.14 Empirical Example
4.15 Multicollinearity
4.16 Normal Regression Model
Exercises

5 An Introduction to Large Sample Asymptotics
5.1 Introduction
5.2 Asymptotic Limits
5.3 Convergence in Probability
5.4 Weak Law of Large Numbers
5.5 Almost Sure Convergence and the Strong Law*
5.6 Vector-Valued Moments
5.7 Convergence in Distribution
5.8 Higher Moments
5.9 Functions of Moments
5.10 Delta Method
5.11 Stochastic Order Symbols
5.12 Uniform Stochastic Bounds*
5.13 Semiparametric Efficiency
5.14 Technical Proofs*

6 Asymptotic Theory for Least Squares
6.1 Introduction
6.2 Consistency of Least-Squares Estimation
6.3 Asymptotic Normality
6.4 Joint Distribution
6.5 Consistency of Error Variance Estimators
6.6 Homoskedastic Covariance Matrix Estimation
6.7 Heteroskedastic Covariance Matrix Estimation
6.8 Alternative Covariance Matrix Estimators*
6.9 Functions of Parameters
6.10 Asymptotic Standard Errors
6.11 t statistic
6.12 Confidence Intervals
6.13 Regression Intervals
6.14 Forecast Intervals
6.15 Wald Statistic
6.16 Confidence Regions
6.17 Semiparametric Efficiency in the Projection Model
6.18 Semiparametric Efficiency in the Homoskedastic Regression Model*
6.19 Uniformly Consistent Residuals*
6.20 Asymptotic Leverage*
Exercises

7 Restricted Estimation
7.1 Introduction
7.2 Constrained Least Squares
7.3 Exclusion Restriction
7.4 Minimum Distance
7.5 Asymptotic Distribution
7.6 Efficient Minimum Distance Estimator
7.7 Exclusion Restriction Revisited
7.8 Variance and Standard Error Estimation
7.9 Misspecification
7.10 Nonlinear Constraints
7.11 Inequality Constraints
7.12 Constrained MLE
7.13 Technical Proofs*
Exercises

8 Hypothesis Testing
8.1 Hypotheses and Tests
8.2 t tests
8.3 t-ratios and the Abuse of Testing
8.4 Wald Tests
8.5 Minimum Distance Tests
8.6 F Tests
8.7 Likelihood Ratio Test
8.8 Problems with Tests of NonLinear Hypotheses
8.9 Monte Carlo Simulation
8.10 Confidence Intervals by Test Inversion
8.11 Asymptotic Power
Exercises

9 Regression Extensions
9.1 Generalized Least Squares
9.2 Testing for Heteroskedasticity
9.3 NonLinear Least Squares
9.4 Testing for Omitted NonLinearity
Exercises

10 The Bootstrap
10.1 Definition of the Bootstrap
10.2 The Empirical Distribution Function
10.3 Nonparametric Bootstrap
10.4 Bootstrap Estimation of Bias and Variance
10.5 Percentile Intervals
10.6 Percentile-t Equal-Tailed Interval
10.7 Symmetric Percentile-t Intervals
10.8 Asymptotic Expansions
10.9 One-Sided Tests
10.10 Symmetric Two-Sided Tests
10.11 Percentile Confidence Intervals
10.12 Bootstrap Methods for Regression Models
Exercises

11 NonParametric Regression
11.1 Introduction
11.2 Binned Estimator
11.3 Kernel Regression
11.4 Local Linear Estimator
11.5 Nonparametric Residuals and Regression Fit
11.6 Cross-Validation Bandwidth Selection
11.7 Asymptotic Distribution
11.8 Conditional Variance Estimation
11.9 Standard Errors
11.10 Multiple Regressors

12 Series Estimation
12.1 Approximation by Series
12.2 Splines
12.3 Partially Linear Model
12.4 Additively Separable Models
12.5 Uniform Approximations
12.6 Runge's Phenomenon
12.7 Approximating Regression
12.8 Residuals and Regression Fit
12.9 Cross-Validation Model Selection
12.10 Convergence in Mean-Square
12.11 Uniform Convergence
12.12 Asymptotic Normality
12.13 Asymptotic Normality with Undersmoothing
12.14 Regression Estimation
12.15 Kernel Versus Series Regression
12.16 Technical Proofs

13 Quantile Regression
13.1 Least Absolute Deviations
13.2 Quantile Regression
Exercises

14 Generalized Method of Moments
14.1 Overidentified Linear Model
14.2 GMM Estimator
14.3 Distribution of GMM Estimator
14.4 Estimation of the Efficient Weight Matrix
14.5 GMM: The General Case
14.6 Over-Identification Test
14.7 Hypothesis Testing: The Distance Statistic
14.8 Conditional Moment Restrictions
14.9 Bootstrap GMM Inference
Exercises

15 Empirical Likelihood
15.1 Non-Parametric Likelihood
15.2 Asymptotic Distribution of EL Estimator
15.3 Overidentifying Restrictions
15.4 Testing
15.5 Numerical Computation

16 Endogeneity
16.1 Instrumental Variables
16.2 Reduced Form
16.3 Identification
16.4 Estimation
16.5 Special Cases: IV and 2SLS
16.6 Bekker Asymptotics
16.7 Identification Failure
Exercises

17 Univariate Time Series
17.1 Stationarity and Ergodicity
17.2 Autoregressions
17.3 Stationarity of AR(1) Process
17.4 Lag Operator
17.5 Stationarity of AR(k)
17.6 Estimation
17.7 Asymptotic Distribution
17.8 Bootstrap for Autoregressions
17.9 Trend Stationarity
17.10 Testing for Omitted Serial Correlation
17.11 Model Selection
17.12 Autoregressive Unit Roots

18 Multivariate Time Series
18.1 Vector Autoregressions (VARs)
18.2 Estimation
18.3 Restricted VARs
18.4 Single Equation from a VAR
18.5 Testing for Omitted Serial Correlation
18.6 Selection of Lag Length in a VAR
18.7 Granger Causality
18.8 Cointegration
18.9 Cointegrated VARs

19 Limited Dependent Variables
19.1 Binary Choice
19.2 Count Data
19.3 Censored Data
19.4 Sample Selection

20 Panel Data
20.1 Individual-Effects Model
20.2 Fixed Effects
20.3 Dynamic Panel Regression

21 Nonparametric Density Estimation
21.1 Kernel Density Estimation
21.2 Asymptotic MSE for Kernel Estimates

A Matrix Algebra
A.1 Notation
A.2 Matrix Addition
A.3 Matrix Multiplication
A.4 Trace
A.5 Rank and Inverse
A.6 Determinant
A.7 Eigenvalues
A.8 Positive Definiteness
A.9 Matrix Calculus
A.10 Kronecker Products and the Vec Operator
A.11 Vector and Matrix Norms

B Probability
B.1 Foundations
B.2 Random Variables
B.3 Expectation
B.4 Gamma Function
B.5 Common Distributions
B.6 Multivariate Random Variables
B.7 Conditional Distributions and Expectation
B.8 Transformations
B.9 Normal and Related Distributions
B.10 Inequalities
B.11 Maximum Likelihood

C Numerical Optimization
C.1 Grid Search
C.2 Gradient Methods
C.3 Derivative-Free Methods

Preface

This book is intended to serve as the textbook for a first-year graduate course in econometrics.

It can be used as a stand-alone text, or be used as a supplement to another text.

Students are assumed to have an understanding of multivariate calculus, probability theory,

linear algebra, and mathematical statistics. A prior course in undergraduate econometrics would

be helpful, but not required. Two excellent undergraduate textbooks are Wooldridge (2009) and

Stock and Watson (2010).

For reference, some of the basic tools of matrix algebra, probability, and statistics are reviewed

in the Appendix.

For students wishing to deepen their knowledge of matrix algebra in relation to their study of

econometrics, I recommend Matrix Algebra by Abadir and Magnus (2005).

An excellent introduction to probability and statistics is Statistical Inference by Casella and

Berger (2002). For those wanting a deeper foundation in probability, I recommend Ash (1972)

or Billingsley (1995). For more advanced statistical theory, I recommend Lehmann and Casella

(1998), van der Vaart (1998), Shao (2003), and Lehmann and Romano (2005).

For further study in econometrics beyond this text, I recommend Davidson (1994) for asymptotic theory, Hamilton (1994) for time-series methods, Wooldridge (2002) for panel data and discrete

response models, and Li and Racine (2007) for nonparametrics and semiparametric econometrics.

Beyond these texts, the Handbook of Econometrics series provides advanced summaries of contemporary econometric methods and theory.

The end-of-chapter exercises are important parts of the text and are meant to help teach students

of econometrics. Answers are not provided, and this is intentional.

I would like to thank Ying-Ying Lee for providing research assistance in preparing some of the

empirical examples presented in the text.

As this is a manuscript in progress, some parts are quite incomplete, and there are many topics which I plan to add. In general, chapters 1-8 are the most complete, while the remaining chapters need significant work and revision.


Chapter 1

Introduction

1.1 What is Econometrics?

The term econometrics is believed to have been crafted by Ragnar Frisch (1895-1973) of Norway, one of the three principal founders of the Econometric Society, first editor of the journal Econometrica, and co-winner of the first Nobel Memorial Prize in Economic Sciences in 1969. It is therefore fitting that we turn to Frisch's own words in the introduction to the first issue of Econometrica for an explanation of the discipline.

A word of explanation regarding the term econometrics may be in order. Its definition is implied in the statement of the scope of the [Econometric] Society, in Section I of the Constitution, which reads: "The Econometric Society is an international society for the advancement of economic theory in its relation to statistics and mathematics.... Its main object shall be to promote studies that aim at a unification of the theoretical-quantitative and the empirical-quantitative approach to economic problems...."

But there are several aspects of the quantitative approach to economics, and no single one of these aspects, taken by itself, should be confounded with econometrics. Thus, econometrics is by no means the same as economic statistics. Nor is it identical with what we call general economic theory, although a considerable portion of this theory has a definitely quantitative character. Nor should econometrics be taken as synonymous with the application of mathematics to economics. Experience has shown that each of these three view-points, that of statistics, economic theory, and mathematics, is a necessary, but not by itself a sufficient, condition for a real understanding of the quantitative relations in modern economic life. It is the unification of all three that is powerful. And it is this unification that constitutes econometrics.

Ragnar Frisch, Econometrica, (1933), 1, pp. 1-2.

This definition remains valid today, although some terms have evolved somewhat in their usage. Today, we would say that econometrics is the unified study of economic models, mathematical statistics, and economic data.

Within the field of econometrics there are sub-divisions and specializations. Econometric theory concerns the development of tools and methods, and the study of the properties of econometric methods. Applied econometrics is a term describing the development of quantitative economic models and the application of econometric methods to these models using economic data.

1.2 The Probability Approach to Econometrics

The unifying methodology of modern econometrics was articulated by Trygve Haavelmo (1911-1999) of Norway, winner of the 1989 Nobel Memorial Prize in Economic Sciences, in his seminal paper "The Probability Approach in Econometrics," Econometrica (1944). Haavelmo argued that

quantitative economic models must necessarily be probability models (by which today we would mean stochastic). Deterministic models are blatantly inconsistent with observed economic quantities, and it is incoherent to apply deterministic models to non-deterministic data. Economic models should be explicitly designed to incorporate randomness; stochastic errors should not be simply added to deterministic models to make them random. Once we acknowledge that an economic model is a probability model, it follows naturally that an appropriate way to quantify, estimate, and conduct inferences about the economy is through the powerful theory of mathematical statistics. The appropriate method for a quantitative economic analysis follows from the probabilistic construction of the economic model.

Haavelmo's probability approach was quickly embraced by the economics profession. Today no

quantitative work in economics shuns its fundamental vision.

While all economists embrace the probability approach, there has been some evolution in its

implementation.

The structural approach is the closest to Haavelmo's original idea. A probabilistic economic model is specified, and the quantitative analysis performed under the assumption that the economic model is correctly specified. Researchers often describe this as "taking their model seriously." The structural approach typically leads to likelihood-based analysis, including maximum likelihood and Bayesian estimation.

A criticism of the structural approach is that it is misleading to treat an economic model as correctly specified. Rather, it is more accurate to view a model as a useful abstraction or approximation. In this case, how should we interpret structural econometric analysis? The quasi-structural approach to inference views a structural economic model as an approximation rather than the truth. This theory has led to the concepts of the pseudo-true value (the parameter value defined by the estimation problem), the quasi-likelihood function, quasi-MLE, and quasi-likelihood inference.

Closely related is the semiparametric approach. A probabilistic economic model is partially

specified but some features are left unspecified. This approach typically leads to estimation methods

such as least-squares and the Generalized Method of Moments. The semiparametric approach

dominates contemporary econometrics, and is the main focus of this textbook.

Another branch of quantitative structural economics is the calibration approach. Similar to the quasi-structural approach, the calibration approach interprets structural models as approximations and hence inherently false. The difference is that the calibrationist literature rejects mathematical statistics as inappropriate for approximate models, and instead selects parameters by matching model and data moments using non-statistical ad hoc¹ methods.

¹ Ad hoc means "for this purpose": a method designed for a specific problem and not based on a generalizable principle.

1.3 Econometric Terms and Notation

In a typical application, an econometrician has a set of repeated measurements on a set of variables. For example, in a labor application the variables could include weekly earnings, educational attainment, age, and other descriptive characteristics. We call this information the data, dataset, or sample.

We use the term observations to refer to the distinct repeated measurements on the variables.

An individual observation often corresponds to a specific economic unit, such as a person, household, corporation, firm, organization, country, state, city or other geographical region. An individual

observation could also be a measurement at a point in time, such as quarterly GDP or a daily

interest rate.

Economists typically denote variables by the italicized roman characters y, x, and/or z. The convention in econometrics is to use the character y to denote the variable to be explained, while the characters x and z are used to denote the conditioning (explaining) variables.

Following mathematical convention, real numbers (elements of the real line $\mathbb{R}$) are written using lower case italics such as y, and vectors (elements of $\mathbb{R}^k$) by lower case bold italics such as $\boldsymbol{x}$, e.g.
$$
\boldsymbol{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.
$$

We typically denote the number of observations by the natural number n, and subscript the variables by the index i to denote the individual observation, e.g. $y_i$, $\boldsymbol{x}_i$ and $\boldsymbol{z}_i$. In some contexts

we use indices other than i, such as in time-series applications where the index t is common, and

in panel studies we typically use the double index it to refer to individual i at a time period t.

It is proper mathematical practice to use upper case X for random variables and lower case x

for realizations or specific values. This practice is not commonly followed in econometrics because instead we use upper case to denote matrices. Thus the notation $y_i$ will in some places refer to a random variable, and in other places a specific realization. Hopefully there will be no confusion as

the use should be evident from the context.

We typically use Greek letters such as $\beta$, $\theta$ and $\sigma^2$ to denote unknown parameters of an econometric model, and will use boldface, e.g. $\boldsymbol{\beta}$ or $\boldsymbol{\theta}$, when these are vector-valued. Estimates are typically denoted by putting a hat "^", tilde "~" or bar "-" over the corresponding letter, e.g. $\hat{\beta}$ and $\tilde{\beta}$ are estimates of $\beta$.

The covariance matrix of an econometric estimator will typically be written using the capital boldface $\boldsymbol{V}$, often with a subscript to denote the estimator, e.g. $\boldsymbol{V}_{\hat{\beta}} = \operatorname{var}\bigl(\sqrt{n}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})\bigr)$ as the covariance matrix for $\sqrt{n}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})$. Hopefully without causing confusion, we will use the notation $\boldsymbol{V}_{\beta} = \operatorname{avar}(\hat{\boldsymbol{\beta}})$ to denote the asymptotic covariance matrix of $\sqrt{n}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})$ (the variance of the asymptotic distribution). Estimates will be denoted by appending hats or tildes, e.g. $\hat{\boldsymbol{V}}_{\beta}$ is an estimate of $\boldsymbol{V}_{\beta}$.
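As a simple added illustration of this notation (not from the original text): for the sample mean $\bar{y}$ of an iid scalar sample with mean $\mu$ and finite variance $\sigma^2$, the central limit theorem gives
$$
\sqrt{n}\,(\bar{y} - \mu) \xrightarrow{d} \mathrm{N}(0, \sigma^2),
$$
so $V_{\mu} = \operatorname{avar}(\bar{y}) = \sigma^2$, and a natural estimate is $\hat{V}_{\mu} = \hat{\sigma}^2 = n^{-1} \sum_{i=1}^{n} (y_i - \bar{y})^2$.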

1.4 Observational Data

A common econometric question is to quantify the impact of one set of variables on another

variable. For example, a concern in labor economics is the returns to schooling: the change in earnings induced by increasing a worker's education, holding other variables constant. Another issue of interest is the earnings gap between men and women.

Ideally, we would use experimental data to answer these questions. To measure the returns to schooling, an experiment might randomly divide children into groups, mandate different levels of education to the different groups, and then follow the children's wage path after they mature and enter the labor force. The differences between the groups would be direct measurements of the effects of different levels of education. However, experiments such as this would be widely condemned

as immoral! Consequently, we see few non-laboratory experimental data sets in economics.

Instead, most economic data is observational. To continue the above example, through data

collection we can record the level of a person's education and their wage. With such data we can measure the joint distribution of these variables, and assess the joint dependence. But from observational data it is difficult to infer causality, as we are not able to manipulate one variable to

see the direct effect on the other. For example, a person's level of education is (at least partially) determined by that person's choices. These factors are likely to be affected by their personal abilities

and attitudes towards work. The fact that a person is highly educated suggests a high level of ability,

which suggests a high relative wage. This is an alternative explanation for an observed positive

correlation between educational levels and wages. High ability individuals do better in school,

and therefore choose to attain higher levels of education, and their high ability is the fundamental

reason for their high wages. The point is that multiple explanations are consistent with a positive

correlation between schooling levels and wages. Knowledge of the joint distribution alone may

not be able to distinguish between these explanations.

Most economic data sets are observational, not experimental. This means

that all variables must be treated as random and possibly jointly determined.

This discussion means that it is difficult to infer causality from observational data alone. Causal inference requires identification, and this is based on strong assumptions. We will return to a

discussion of some of these issues in Chapter 16.

1.5 Standard Data Structures

There are three major types of economic data sets: cross-sectional, time-series, and panel. They

are distinguished by the dependence structure across observations.

Cross-sectional data sets have one observation per individual. Surveys are a typical source

for cross-sectional data. In typical applications, the individuals surveyed are persons, households,

firms or other economic agents. In many contemporary econometric cross-section studies the sample

size n is quite large. It is conventional to assume that cross-sectional observations are mutually

independent. Most of this text is devoted to the study of cross-section data.

Time-series data are indexed by time. Typical examples include macroeconomic aggregates,

prices and interest rates. This type of data is characterized by serial dependence so the random

sampling assumption is inappropriate. Most aggregate economic data is only available at a low

frequency (annual, quarterly or perhaps monthly) so the sample size is typically much smaller than

in cross-section studies. The exception is nancial data where data are available at a high frequency

(weekly, daily, hourly, or by transaction) so sample sizes can be quite large.

Panel data combines elements of cross-section and time-series. These data sets consist of a set

of individuals (typically persons, households, or corporations) surveyed repeatedly over time. The

common modeling assumption is that the individuals are mutually independent of one another,

but a given individual's observations are mutually dependent. This is a modified random sampling

environment.

Data Structures

- Cross-section
- Time-series
- Panel

Some data structures combine elements of cross-section, time-series, and panel data modeling. These include models of spatial correlation and clustering.

As we mentioned above, most of this text will be devoted to cross-sectional data under the

assumption of mutually independent observations. By mutual independence we mean that the ith

observation $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ is independent of the jth observation $(y_j, \boldsymbol{x}_j, \boldsymbol{z}_j)$ for $i \neq j$. (Sometimes the label "independent" is misconstrued. It is a statement about the relationship between observations i and j, not a statement about the relationship between $y_i$ and $\boldsymbol{x}_i$ and/or $\boldsymbol{z}_i$.)

Furthermore, if the data is randomly gathered, it is reasonable to model each observation as

a random draw from the same probability distribution. In this case we say that the data are

independent and identically distributed or iid. We call this a random sample. For most of

this text we will assume that our observations come from a random sample.

Definition: The observations $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ are a random sample if they are mutually independent and identically distributed (iid) across i = 1, ..., n.

In the random sampling framework, we think of an individual observation $(y_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ as a realization from a joint probability distribution $F(y, \boldsymbol{x}, \boldsymbol{z})$ which we can call the population. This population is infinitely large. This abstraction can be a source of confusion as it does not correspond to a physical population in the real world. It's an abstraction since the distribution F

is unknown, and the goal of statistical inference is to learn about features of F from the sample.

The assumption of random sampling provides the mathematical foundation for treating economic

statistics with the tools of mathematical statistics.
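To make the random sampling abstraction concrete, the following is a minimal simulation sketch (an added illustration, not part of the original text; the population distribution and variable names are invented for the example). It draws n iid observations $(y_i, x_i)$ from a hypothetical joint distribution and uses the sample to estimate features of that population, exactly as described above.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A hypothetical population: years of education and log hourly wage, jointly distributed.
education = rng.normal(loc=14.0, scale=2.5, size=n)            # x_i
log_wage = 1.0 + 0.10 * education + rng.normal(0.0, 0.5, n)    # y_i

# Each row is one iid observation (y_i, x_i) drawn from the population F(y, x).
sample = np.column_stack([log_wage, education])

# Sample features estimate the corresponding features of the population F.
print("sample mean of y:", sample[:, 0].mean())
print("sample correlation of (y, x):", np.corrcoef(sample.T)[0, 1])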

The random sampling framework was a major intellectual breakthrough of the late 19th century, allowing the application of mathematical statistics to the social sciences. Before this conceptual development, methods from mathematical statistics had not been applied to economic data as

they were viewed as inappropriate. The random sampling framework enabled economic samples to

be viewed as homogenous and random, a necessary precondition for the application of statistical

methods.

1.6 Sources for Economic Data

Fortunately for economists, the internet provides a convenient forum for dissemination of economic data. Many large-scale economic datasets are available without charge from governmental

agencies. An excellent starting point is the Resources for Economists Data Links, available at

rfe.org. From this site you can find almost every publicly available economic data set. Some

specic data sources of interest include

Bureau of Labor Statistics

US Census

Current Population Survey

Survey of Income and Program Participation

Panel Study of Income Dynamics

Federal Reserve System (Board of Governors and regional banks)

National Bureau of Economic Research


CompuStat

International Financial Statistics

Another good source of data is from authors of published empirical studies. Most journals

in economics require authors of published papers to make their datasets generally available. For

example, in its instructions for submission, Econometrica states:

Econometrica has the policy that all empirical, experimental and simulation results must

be replicable. Therefore, authors of accepted papers must submit data sets, programs,

and information on empirical analysis, experiments and simulations that are needed for

replication and some limited sensitivity analysis.

The American Economic Review states:

All data used in analysis must be made available to any researcher for purposes of

replication.

The Journal of Political Economy states:

It is the policy of the Journal of Political Economy to publish papers only if the data

used in the analysis are clearly and precisely documented and are readily available to

any researcher for purposes of replication.

If you are interested in using the data from a published paper, first check the journal's website,

as many journals archive data and replication programs online. Second, check the website(s) of

the paper's author(s). Most academic economists maintain webpages, and some make available replication files complete with data and programs. If these investigations fail, email the author(s),

politely requesting the data. You may need to be persistent.

As a matter of professional etiquette, all authors absolutely have the obligation to make their

data and programs available. Unfortunately, many fail to do so, and typically for poor reasons.

The irony of the situation is that it is typically in the best interests of a scholar to make as much of

their work (including all data and programs) freely available, as this only increases the likelihood

of their work being cited and having an impact.

Keep this in mind as you start your own empirical project. Remember that as part of your end

product, you will need (and want) to provide all data and programs to the community of scholars.

The greatest form of flattery is to learn that another scholar has read your paper, wants to extend

your work, or wants to use your empirical methods. In addition, public openness provides a healthy

incentive for transparency and integrity in empirical analysis.

1.7 Econometric Software

STATA (www.stata.com) is a powerful statistical program with a broad set of pre-programmed

econometric and statistical tools. It is quite popular among economists, and is continuously being

updated with new methods. It is an excellent package for most econometric analysis, but is limited

when you want to use new or less-common econometric methods which have not yet been programmed.

R (www.r-project.org), GAUSS (www.aptech.com), MATLAB (www.mathworks.com), and Ox

(www.oxmetrics.net) are high-level matrix programming languages with a wide variety of built-in

statistical functions. Many econometric methods have been programmed in these languages and are

available on the web. The advantage of these packages is that you are in complete control of your


analysis, and it is easier to program new methods than in STATA. Some disadvantages are that

you have to do much of the programming yourself, programming complicated procedures takes

significant time, and programming errors are hard to prevent and difficult to detect and eliminate.

Of these languages, Gauss used to be quite popular among econometricians, but now Matlab is

more popular. A smaller but growing group of econometricians are enthusiastic fans of R, which among these languages is uniquely open-source, user-contributed, and, best of all, completely free!

For highly-intensive computational tasks, some economists write their programs in a standard

programming language such as Fortran or C. This can lead to major gains in computational speed,

at the cost of increased time in programming and debugging.

As these different packages have distinct advantages, many empirical economists end up using

more than one package. As a student of econometrics, you will learn at least one of these packages,

and probably more than one.

1.8

Chapter 2 is a review of moment estimation and asymptotic distribution theory. This material

should be familiar from an earlier course in statistics, but I have included this at the beginning because of its central importance in econometric distribution theory. Chapters 3 through 9 deal with

the core linear regression and projection models. Chapter 10 introduces the bootstrap. Chapters

11 through 13 deal with the Generalized Method of Moments, empirical likelihood and endogeneity.

Chapters 14 and 15 cover time series, and Chapters 16, 17 and 18 cover limited dependent variables, panel data, and nonparametrics. Reviews of matrix algebra, probability theory, maximum

likelihood, and numerical optimization can be found in the appendix.

Technical sections which may not be of interest to all readers are marked with an asterisk (*).

Chapter 2

Projection

2.1

Introduction

The most commonly applied econometric tool is least-squares estimation, also known as regression. As we will see, least-squares is a tool to estimate an approximate conditional mean of one

variable (the dependent variable) given another set of variables (the regressors, conditioning

variables, or covariates).

In this chapter we abstract from estimation, and focus on the probabilistic foundation of the

conditional expectation model and its projection approximation.

2.2

Suppose that we are interested in wage rates in the United States. Since wage rates vary across

workers, we cannot describe wage rates by a single number. Instead, we can describe wages using a

probability distribution. Formally, we view the wage of an individual worker as a random variable

wage with the probability distribution

F (u) = Pr(wage

u):

When we say that a persons wage is random we mean that we do not know their wage before it is

measured, and we treat observed wage rates as realizations from the distribution F: Treating unobserved wages as random variables and observed wages as realizations is a powerful mathematical

abstraction which allows us to use the tools of mathematical probability.

A useful thought experiment is to imagine dialing a telephone number selected at random, and

then asking the person who responds to tell us their wage rate. (Assume for simplicity that all

workers have equal access to telephones, and that the person who answers your call will respond

honestly.) In this thought experiment, the wage of the person you have called is a single draw from

the distribution F of wages in the population. By making many such phone calls we can learn the

distribution F of the entire population.

When a distribution function F is dierentiable we dene the probability density function

f (u) =

d

F (u):

du

The density contains the same information as the distribution function, but the density is typically

easier to visually interpret.

Wage Density

0.6

0.5

0.4

0.0

0.1

0.2

0.3

Wage Distribution

0.7

0.8

0.9

1.0

10

20

30

40

50

60

70

10

20

30

40

50

60

70

80

90

100

Figure 2.1: Wage Distribution and Density. All full-time U.S. workers

In Figure 2.1 we display estimates1 of the probability distribution function (on the left) and

density function (on the right) of U.S. wage rates in 2009. We see that the density is peaked around

$15, and most of the probability mass appears to lie between $10 and $40. These are ranges for

typical wage rates in the U.S. population.

Important measures of central tendency are the median and the mean. The median m of a

continuous2 distribution F is the unique solution to

1

F (m) = :

2

The median U.S. wage ($19.23) is indicated in the left panel of Figure 2.1 by the arrow. The median

is a robust3 measure of central tendency, but it is tricky to use for many calculations as it is not a

linear operator.

The expectation or mean of a random variable y with density f is

Z 1

uf (u)du:

= E (y) =

1

A general denition of the mean is presented in Section 2.30. The mean U.S. wage ($23.90) is

indicated in the right panel of Figure 2.1 by the arrow. Here we have used the common and

convenient convention of using the single character y to denote a random variable, rather than the

more cumbersome label wage.

The mean is a convenient measure of central tendency because it is a linear operator and

arises naturally in many economic models. A disadvantage of the mean is that it is not robust4

especially in the presence of substantial skewness or thick tails, which are both features of the wage

distribution as can be seen easily in the right panel of Figure 2.1. Another way of viewing this

is that 64% of workers earn less that the mean wage of $23.90, suggesting that it is incorrect to

describe the mean as a typical wage rate.

1

The distribution and density are estimated nonparametrically from the sample of 50,742 full-time non-military

wage-earners reported in the March 2009 Current Population Survey. The wage rate is constructed as individual

wage and salary earnings divided by hours worked.

1

2

If F is not continuous the denition is m = inffu : F (u)

g

2

3

The median is not sensitive to pertubations in the tails of the distribution.

4

The mean is sensitive to pertubations in the tails of the distribution.

10

In this context it is useful to transform the data by taking the natural logarithm5 . Figure 2.2

shows the density of log hourly wages log(wage) for the same population, with its mean 2.95 drawn

in with the arrow. The density of log wages is much less skewed and fat-tailed than the density of

the level of wages, so its mean

E (log(wage)) = 2:95

is a much better (more robust) measure6 of central tendency of the distribution. For this reason,

wage regressions typically use log wages as a dependent variable rather than the level of wages.

Another useful way to summarize the probability distribution F (u) is in terms of its quantiles.

For any 2 (0; 1); the th quantile of the continuous7 distribution F is the real number q which

satises

F (q ) = :

The quantile function q ; viewed as a function of ; is the inverse of the distribution function F:

The most commonly used quantile is the median, that is, q0:5 = m: We sometimes refer to quantiles

by the percentile representation of ; and in this case they are often called percentiles, e.g. the

median is the 50th percentile.

2.3

Conditional Expectation

We saw in Figure 2.2 the density of log wages. Is this distribution the same for all workers, or

does the wage distribution vary across subpopulations? To answer this question, we can compare

wage distributions for dierent groups for example, men and women. The plot on the left in

Figure 2.3 displays the densities of log wages for U.S. men and women with their means (3.05 and

2.81) indicated by the arrows. We can see that the two wage densities take similar shapes but the

density for men is somewhat shifted to the right with a higher mean.

The values 3.05 and 2.81 are the mean log wages in the subpopulations of men and women

workers. They are called the conditional means (or conditional expectations) of log wages

5

Throughout the text, we will use log(y) to denote the natural logarithm of y:

More precisely, the geometric mean exp (E (log w)) = $19:11 is a robust measure of central tendency.

7

If F is not continuous the denition is q = inffu : F (u)

g

11

Women

white men

white women

black men

black women

Men

Figure 2.3: Left: Log Wage Density for Women and Men. Right: Log Wage Density by Gender

and Race

given gender. We can write their specic values as

E (log(wage) j gender = man) = 3:05

(2.1)

(2.2)

We call these means conditional as they are conditioning on a xed value of the variable gender.

While you might not think of a persons gender as a random variable, it is random from the

viewpoint of econometric analysis. If you randomly select an individual, the gender of the individual

is unknown and thus random. (In the population of U.S. workers, the probability that a worker is a

woman happens to be 43%.) In observational data, it is most appropriate to view all measurements

as random variables, and the means of subpopulations are then conditional means.

As the two densities in Figure 2.3 appear similar, a hasty inference might be that there is not

a meaningful dierence between the wage distributions of men and women. Before jumping to this

conclusion let us examine the dierences in the distributions of Figure 2.3 more carefully. As we

mentioned above, the primary dierence between the two densities appears to be their means. This

dierence equals

E (log(wage) j gender = man)

= 0:24

2:81

(2.3)

A dierence in expected log wages of 0.24 implies an average 24% dierence between the wages

of men and women, which is quite substantial. (For an explanation of logarithmic and percentage

dierences see Section 2.4.)

Consider further splitting the men and women subpopulations by race, dividing the population

into whites, blacks, and other races. We display the log wage density functions of four of these

groups on the right in Figure 2.3. Again we see that the primary dierence between the four density

functions is their central tendency.

Focusing on the means of these distributions, Table 2.1 reports the mean log wage for each of

the six sub-populations.

men

3.07

2.86

3.03

white

black

other

12

women

2.82

2.73

2.86

The entries in Table 2.1 are the conditional means of log(wage) given gender and race. For

example

E (log(wage) j gender = man; race = white) = 3:07

and

E (log(wage) j gender = woman; race = black) = 2:73

One benet of focusing on conditional means is that they reduce complicated distributions

to a single summary measure, and thereby facilitate comparisons across groups. Because of this

simplifying property, conditional means are the primary interest of regression analysis and are a

major focus in econometrics.

Table 2.1 allows us to easily calculate average wage dierences between groups. For example,

we can see that the wage gap between men and women continues after disaggregation by race, as

the average gap between white men and white women is 25%, and that between black men and

black women is 13%. We also can see that there is a race gap, as the average wages of blacks are

substantially less than the other race categories. In particular, the average wage gap between white

men and black men is 21%, and that between white women and black women is 9%.

2.4

Log Dierences*

log (1 + x)

x:

(2.4)

x2 x3

+

2

3

2

= x + O(x ):

log (1 + x) = x

x4

+

4

The symbol O(x2 ) means that the remainder is bounded by Ax2 as x ! 0 for some A < 1: A

plot of log (1 + x) and the linear approximation x is shown in the following gure. We can see that

log (1 + x) and the linear approximation x are very close for jxj

0:1, and reasonably close for

jxj 0:2, but the dierence increases with jxj.

0.4

0.2

-0.4

-0.2

0.2

-0.2

-0.4

log(1 + x)

0.4

13

y = (1 + c=100)y:

Taking natural logarithms,

log y = log y + log(1 + c=100)

or

c

100

where the approximation is (2.4). This shows that 100 multiplied by the dierence in logarithms

is approximately the percentage dierence between y and y , and this approximation is quite good

for jcj 10:

log y

2.5

measure educational attainment by the number of years of schooling, and we will write this variable

as education 8 .

The conditional mean of log wages given gender, race, and education is a single number for each

category. For example

E (log(wage) j gender = man; race = white; education = 12) = 2:84

4.0

We display in Figure 2.4 the conditional means of log(wage) for white men and white women as a

function of education. The plot is quite revealing. We see that the conditional mean is increasing in

years of education, but at a dierent rate for schooling levels above and below nine years. Another

striking feature of Figure 2.4 is that the gap between men and women is roughly constant for all

education levels. As the variables are measured in logs this implies a constant average percentage

gap between men and women regardless of educational attainment.

3.5

3.0

2.5

2.0

10

12

14

16

white men

white women

18

20

Years of Education

8

Here, education is dened as years of schooling beyond kindergarten. A high school graduate has education=12,

a college graduate has education=16, a Masters degree has education=18, and a professional degree (medical, law or

PhD) has education=20.

14

In many cases it is convenient to simplify the notation by writing variables using single characters, typically y; x and/or z. It is conventional in econometrics to denote the dependent variable

(e.g. log(wage)) by the letter y; a conditioning variable (such as gender ) by the letter x; and

multiple conditioning variables (such as race, education and gender ) by the subscripted letters

x1 ; x2 ; :::; xk .

Conditional expectations can be written with the generic notation

E (y j x1 ; x2 ; :::; xk ) = m(x1 ; x2 ; :::; xk ):

We call this the conditional expectation function (CEF). The CEF is a function of (x1 ; x2 ; :::; xk )

as it varies with the variables. For example, the conditional expectation of y = log(wage) given

(x1 ; x2 ) = (gender ; race) is given by the six entries of Table 2.1. The CEF is a function of (gender ;

race) as it varies across the entries.

For greater compactness, we will typically write the conditioning variables as a vector in Rk :

0

1

x1

B x2 C

B

C

x = B . C:

(2.5)

.

@ . A

xk

Here we follow the convention of using lower case bold italics x to denote a vector. Given this

notation, the CEF can be compactly written as

E (y j x) = m (x) :

The CEF E (y j x) is a random variable as it is a function of the random variable x. It is also

sometimes useful to view the CEF as a function of x. In this case we can write m (u) = E (y j x = u)

, which is a function of the argument u. The expression E (y j x = u) is the conditional expectation

of y; given that we know that the random variable x equals the specic value u. However, sometimes

in econometrics we take a notational shortcut and use E (y j x) to refer to this function. Hopefully,

the use of E (y j x) should be apparent from the context.

2.6

Continuous Variables

In the previous sections, we implicitly assumed that the conditioning variables are discrete.

However, many conditioning variables are continuous. In this section, we take up this case and

assume that the variables (y; x) are continuously distributed with a joint density function f (y; x):

As an example, take y = log(wage) and x = experience, the number of years of labor market

experience. The contours of their joint density are plotted on the left side of Figure 2.5 for the

population of white men with 12 years of education.

Given the joint density f (y; x) the variable x has the marginal density

Z

fx (x) =

f (y; x)dy:

R

For any x such that fx (x) > 0 the conditional density of y given x is dened as

fyjx (y j x) =

f (y; x)

:

fx (x)

(2.6)

The conditional density is a slice of the joint density f (y; x) holding x xed. We can visualize this

by slicing the joint density function at a specic value of x parallel with the y-axis. For example,

take the density contours on the left side of Figure 2.5 and slice through the contour plot at a

specic value of experience. This gives us the conditional density of log(wage) for white men with

15

4.0

3.0

2.0

2.5

3.5

Exp=5

Exp=10

Exp=25

Exp=40

10

20

30

40

50

1.0

1.5

2.0

2.5

3.0

3.5

4.0

4.5

Figure 2.5: Left: Joint density of log(wage) and experience and conditional mean of log(wage)

given experience for white men with education=12. Right: Conditional densities of log(wage) for

white men with education=12.

12 years of education and this level of experience. We do this for four levels of experience (5, 10,

25, and 40 years), and plot these densities on the right side of Figure 2.5. We can see that the

distribution of wages shifts to the right and becomes more diuse as experience increases from 5 to

10 years, and from 10 to 25 years, but there is little change from 25 to 40 years experience.

The CEF of y given x is the mean of the conditional density (2.6)

Z

m (x) = E (y j x) =

yfyjx (y j x) dy:

(2.7)

R

Intuitively, m (x) is the mean of y for the idealized subpopulation where the conditioning variables

are xed at x. This is idealized since x is continuously distributed so this subpopulation is innitely

small.

In Figure 2.5 the CEF of log(wage) given experience is plotted as the solid line. We can see

that the CEF is a smooth but nonlinear function. The CEF is initially increasing in experience,

attens out around experience = 30, and then decreases for high levels of experience.

2.7

An extremely useful tool from probability theory is the law of iterated expectations. An

important special case is the known as the Simple Law.

If E jyj < 1 then for any random vector x,

E (E (y j x)) = E (y)

The simple law states that the expectation of the conditional expectation is the unconditional

expectation. In other words, the average of the conditional averages is the unconditional average.

When x is discrete

E (E (y j x)) =

and when x is continuous

1

X

j=1

E (E (y j x)) =

16

E (y j x j ) Pr (x = x j )

Rk

E (y j x ) fx (x)dx:

Going back to our investigation of average log wages for men and women, the simple law states

that

E (log(wage) j gender = man) Pr (gender = man)

= E (log(wage)) :

Or numerically,

The general law of iterated expectations allows two sets of conditioning variables.

Theorem 2.7.2 Law of Iterated Expectations

If E jyj < 1 then for any random vectors x1 and x2 ,

E (E (y j x1 ; x2 ) j x1 ) = E (y j x1 )

Notice the way the law is applied. The inner expectation conditions on x1 and x2 , while

the outer expectation conditions only on x1 : The iterated expectation yields the simple answer

E (y j x1 ) ; the expectation conditional on x1 alone. Sometimes we phrase this as: The smaller

information set wins.

As an example

E (log(wage) j gender = man; race = white) Pr (race = whitejgender = man)

or numerically

A property of conditional expectations is that when you condition on a random vector x you

can eectively treat it as if it is constant. For example, E (x j x) = x and E (g (x) j x) = g (x) for

any function g( ): The general property is known as the conditioning theorem.

Theorem 2.7.3 Conditioning Theorem

If

E jg (x) yj < 1

(2.8)

then

E (g (x) y j x) = g (x) E (y j x)

(2.9)

(2.10)

and

The proofs of Theorems 2.7.1, 2.7.2 and 2.7.3 are given in Section 2.33.

2.8

17

CEF Error

The CEF error e is dened as the dierence between y and the CEF evaluated at the random

vector x:

e = y m(x):

By construction, this yields the formula

y = m(x) + e:

(2.11)

In (2.11) it is useful to understand that the error e is derived from the joint distribution of

(y; x); and so its properties are derived from this construction.

A key property of the CEF error is that it has a conditional mean of zero. To see this, by the

linearity of expectations, the denition m(x) = E (y j x) and the Conditioning Theorem

E (e j x) = E ((y

m(x)) j x)

= E (y j x)

= m(x)

E (m(x) j x)

m(x)

= 0:

This fact can be combined with the law of iterated expectations to show that the unconditional

mean is also zero.

E (e) = E (E (e j x)) = E (0) = 0:

We state this and some other results formally.

Theorem 2.8.1 Properties of the CEF error

If E jyj < 1 then

1. E (e j x) = 0:

2. E (e) = 0:

3. If E jyjr < 1 for r

4. For any function h (x) such that E jh (x) ej < 1 then E (h (x) e) = 0:

The proof of the third result is deferred to Section 2.33:

The fourth result, whose proof is left to Exercise 2.3, says that e is uncorrelated with any

function of the regressors.

The equations

y = m(x) + e

E (e j x) = 0:

together imply that m(x) is the CEF of y given x. It is important to understand that this is not

a restriction. These equations hold true by denition.

The condition E (e j x) = 0 is implied by the denition of e as the dierence between y and the

CEF m (x) : The equation E (e j x) = 0 is sometimes called a conditional mean restriction, since

the conditional mean of the error e is restricted to equal zero. The property is also sometimes called

mean independence, for the conditional mean of e is 0 and thus independent of x. However,

it does not imply that the distribution of e is independent of x: Sometimes the assumption e is

independent of x is added as a convenient simplication, but it is not generic feature of the conditional mean. Typically and generally, e and x are jointly dependent, even though the conditional

mean of e is zero.

18

1.0

0.5

0.0

0.5

1.0

10

20

30

40

50

Figure 2.6: Joint density of CEF error e and experience for white men with education=12.

As an example, the contours of the joint density of e and experience are plotted in Figure 2.6

for the same population as Figure 2.5. The error e has a conditional mean of zero for all values of

experience, but the shape of the conditional distribution varies with the level of experience.

As a simple example of a case where x and e are mean independent yet dependent, let e = x"

where x and " are independent N(0; 1): Then conditional on x; the error e has the distribution

N(0; x2 ): Thus E (e j x) = 0 and e is mean independent of x; yet e is not fully independent of x:

Mean independence does not imply full independence.

2.9

Regression Variance

An important measure of the dispersion about the CEF function is the unconditional variance

of the CEF error e: We write this as

2

= var (e) = E (e

Ee)2 = E e2 :

Theorem 2.9.1 If Ey 2 < 1 then

<1

We can call 2 the regression variance or the variance of the regression error. The magnitude

of 2 measures the amount of variation in y which is not explained or accounted for in the

conditional mean E (y j x) :

The regression variance depends on the regressors x. Consider two regressions

y = E (y j x1 ) + e1

y = E (y j x1 ; x2 ) + e2 :

We write the two errors distinctly as e1 and e2 as they are dierent changing the conditioning

information changes the conditional mean and therefore the regression error as well.

In our discussion of iterated expectations, we have seen that by increasing the conditioning

set, the conditional expectation reveals greater detail about the distribution of y: What is the

implication for the regression error?

19

It turns out that there is a simple relationship. We can think of the conditional mean E (y j x)

as the explained portionof y: The remainder e = y E (y j x) is the unexplained portion. The

simple relationship we now derive shows that the variance of this unexplained portion decreases

when we condition on more variables. This relationship is monotonic in the sense that increasing

the amont of information always decreases the variance of the unexplained portion.

var (y)

var (y

E (y j x1 ))

var (y

E (y j x1 ; x2 ))

Theorem 2.9.2 says that the variance of the dierence between y and its conditional mean

(weakly) decreases whenever an additional variable is added to the conditioning information.

The proof of Theorem 2.9.2 is given in Section 2.33.

2.10

Best Predictor

Suppose that given a realized value of x, we want to create a prediction or forecast of y: We can

write any predictor as a function g (x) of x. The prediction error is the realized dierence y g(x):

A non-stochastic measure of the magnitude of the prediction error is the expectation of its square

E (y

g (x))2 :

(2.12)

We can dene the best predictor as the function g (x) which minimizes (2.12). What function

is the best predictor? It turns out that the answer is the CEF m(x). This holds regardless of the

joint distribution of (y; x):

To see this, note that the mean squared error of a predictor g (x) is

E (y

g (x))2 = E (e + m (x)

g (x))2

= Ee2 + 2E (e (m (x)

= Ee2 + E (m (x)

g (x))) + E (m (x)

g (x))2

g (x))2

Ee2

= E (y

m (x))2

where the rst equality makes the substitution y = m(x) + e and the third equality uses Theorem

2.8.1.4. The right-hand-side after the third equality is minimized by setting g (x) = m (x), yielding

the nal inequality. The minimum is nite under the assumption Ey 2 < 1 as shown by Theorem

2.9.1.

We state this formally in the following result.

If Ey 2 < 1; then for any predictor g (x),

E (y

where m (x) = E (y j x).

g (x))2

E (y

m (x))2

2.11

20

Conditional Variance

While the conditional mean is a good measure of the location of a conditional distribution,

it does not provide information about the spread of the distribution. A common measure of the

dispersion is the conditional variance.

is

2

(x) = var (y j x)

E (y j x))2 j x

= E (y

= E e2 j x

Generally, 2 (x) is a non-trivial function of x and can take any form subject to the restriction

p

2 (x):

that it is non-negative. The conditional standard deviation is its square root (x) =

2

2

One way to think about (x) is that it is the conditional mean of e given x.

As an example of how the conditional variance depends on observables, compare the conditional

log wage densities for men and women displayed in Figure 2.3. The dierence between the densities

is not purely a location shift, but is also a dierence in spread. Specically, we can see that the

density for mens log wages is somewhat more spread out than that for women, while the density

for womens wages is somewhat more peaked. Indeed, the conditional standard deviation for mens

wages is 3.05 and that for women is 2.81. So while men have higher average wages, they are also

somewhat more dispersed.

The unconditional error variance and the conditional variance are related by the law of iterated

expectations

2

= E e2 = E E e2 j x = E 2 (x) :

That is, the unconditional error variance is the average conditional variance.

Given the conditional variance, we can dene a rescaled error

"=

e

:

(x)

(2.13)

E (" j x) = E

and

var (" j x) = E "2 j x = E

e

jx

(x)

e2

jx

2 (x)

1

E (e j x) = 0

(x)

1

E e2 j x =

2 (x)

2 (x)

2 (x)

= 1:

Notice that (2.13) can be rewritten as

e = (x)":

and substituting this for e in the CEF equation (2.11), we nd that

y = m(x) + (x)":

This is an alternative (mean-variance) representation of the CEF equation.

(2.14)

21

Many econometric studies focus on the conditional mean m(x) and either ignore the conditional variance 2 (x); treat it as a constant 2 (x) = 2 ; or treat it as a nuisance parameter (a

parameter not of primary interest). This is appropriate when the primary variation in the conditional distribution is in the mean, but can be short-sighted in other cases. Dispersion is relevant

to many economic topics, including income and wealth distribution, economic inequality, and price

dispersion. Conditional dispersion (variance) can be a fruitful subject for investigation.

The perverse consequences of a narrow-minded focus on the mean has been parodied in a classic

joke:

An economist was standing with one foot in a bucket of boiling water

and the other foot in a bucket of ice. When asked how he felt, he

replied, On average I feel just ne.

2.12

pendent of x. This is called homoskedasticity.

2 (x)

does not depend on x.

2 (x)

depends on x.

2 (x)

the conditional variance, not the unconditional variance. By denition, the unconditional variance

2 is a constant and independent of the regressors x. So when we talk about the variance as a

function of the regressors, we are talking about the conditional variance 2 (x).

Some older or introductory textbooks describe heteroskedasticity as the case where the variance of e varies across observations. This is a poor and confusing denition. It is more constructive

to understand that heteroskedasticity means that the conditional variance 2 (x) depends on observables.

Older textbooks also tend to describe homoskedasticity as a component of a correct regression

specication, and describe heteroskedasticity as an exception or deviance. This description has

inuenced many generations of economists, but it is unfortunately backwards. The correct view

is that heteroskedasticity is generic and standard, while homoskedasticity is unusual and exceptional. The default in empirical work should be to assume that the errors are heteroskedastic, not

the converse.

In apparent contradiction to the above statement, we will still frequently impose the homoskedasticity assumption when making theoretical investigations into the properties of estimation

22

and inference methods. The reason is that in many cases homoskedasticity greatly simplies the

theoretical calculations, and it is therefore quite advantageous for teaching and learning. It should

always be remembered, however, that homoskedasticity is never imposed because it is believed to

be a correct feature of an empirical model, but rather because of its simplicity.

2.13

Regression Derivative

One way to interpret the CEF m(x) = E (y j x) is in terms of how marginal changes in the

regressors x imply changes in the conditional mean of the response variable y: It is typical to

consider marginal changes in a single regressor, say x1 , holding the remainder xed. When a

regressor x1 is continuously distributed, we dene the marginal eect of a change in x1 , holding

the variables x2 ; :::; xk xed, as the partial derivative of the CEF

@

m(x1 ; :::; xk ):

@x1

When x1 is discrete we dene the marginal eect as a discrete dierence. For example, if x1 is

binary, then the marginal eect of x1 on the CEF is

m(1; x2 ; :::; xk )

m(0; x2 ; :::; xk ):

We can unify the continuous and discrete cases with the notation

8

@

>

>

m(x1 ; :::; xk );

if x1 is continuous

<

@x1

r1 m(x) =

>

>

: m(1; x ; :::; x ) m(0; x ; :::; x );

if x is binary.

2

2

3

r1 m(x)

6 r2 m(x) 7

6

7

rm(x) = 6

7

..

4

5

.

rk m(x)

@

m(x), the

When all elements of x are continuous, then we have the simplication rm(x) =

@x

vector of partial derivatives.

There are two important points to remember concerning our denition of the regression derivative.

First, the eect of each variable is calculated holding the other variables constant. This is the

ceteris paribus concept commonly used in economics. But in the case of a regression derivative,

the conditional mean does not literally hold all else constant. It only holds constant the variables

included in the conditional mean. This means that the regression derivative depends on which

regressors are included. For example, in a regression of wages on education, experience, race and

gender, the regression derivative with respect to education shows the marginal eect of education

on wages, holding constant a persons observable characteristic experience, race and gender. But

it does not hold constant a persons unobservable characteristics (such as ability), or variables not

included in the regression (such as the quality of education).

Second, the regression derivative is the change in the conditional expectation of y, not the

change in the actual value of y for an individual. It is tempting to think of the regression derivative

as the change in the actual value of y, but this is not a correct interpretation. The regression

derivative rm(x) is the change in the actual value of y only if the error e is unaected by the

change in the regressor x. We return to a discussion of causal eects in Section 2.29.

2.14

23

Linear CEF

An important special case is when the CEF m (x) = E (y j x) is linear in x: In this case we can

write the mean equation as

m(x) = x1

+ x2

+ xk

+ :

Notationally it is convenient to write this as a simple function of the vector x. An easy way to do

so is to augment the regressor vector x by listing the number 1 as an element. We call this the

constantand the corresponding coe cient is called the intercept. Equivalently, assuming that

the nal element9 of the vector x is the intercept, then xk = 1. Thus (2.5) has been redened as

the k 1 vector

0

1

x1

B x2 C

B

C

B .. C

x = B . C:

(2.15)

B

C

@ xk 1 A

1

With this redenition, then the CEF is

m(x) = x1

+ x2

+ xk

= x

(2.16)

where

B

C

= @ ... A

(2.17)

is a k 1 coe cient vector. This is the linear CEF model. It is also often called the linear

regression model, or the regression of y on x:

In the linear CEF model, the regression derivative is simply the coe cient vector. That is

rm(x) = :

This is one of the appealing features of the linear CEF model. The coe cients have simple and

natural interpretations as the marginal eects of changing one variable, holding the others constant.

Linear CEF Model

y = x0 + e

E (e j x) = 0

If in addition the error is homoskedastic, we call this the homoskedastic linear CEF model.

Homoskedastic Linear CEF Model

y = x0 + e

E (e j x) = 0

E e2 j x

2.15

24

The linear CEF model of the previous section is less restrictive than it might appear, as we can

include as regressors nonlinear transformations of the original variables. In this sense, the linear

CEF framework is exible and can capture many nonlinear eects.

For example, suppose we have two scalar variables x1 and x2 : The CEF could take the quadratic

form

m(x1 ; x2 ) = x1 1 + x2 2 + x21 3 + x22 4 + x1 x2 5 + 6 :

(2.18)

This equation is quadratic in the regressors (x1 ; x2 ) yet linear in the coe cients ( 1 ; :::; 6 ): We

will descriptively call (2.18) a quadratic CEF, and yet (2.18) is also a linear CEF in the sense

of being linear in the coe cients. The key is to understand that (2.18) is quadratic in the variables

(x1 ; x2 ) yet linear in the coe cients ( 1 ; :::; 6 ):

To simplify the expression, we dene the transformations x3 = x21 ; x4 = x22 ; x5 = x1 x2 ; and

x6 = 1; and redene the regressor vector as x = (x1 ; :::; x6 ): With this redenition,

m(x1 ; x2 ) = x0

which is linear in . For most econometric purposes (estimation and inference on ) the linearity

in is all that is important.

An exception is in the analysis of regression derivatives. In nonlinear equations such as (2.18),

the regression derivative should be dened with respect to the original variables, not with respect

to the transformed variables. Thus

@

m(x1 ; x2 ) =

@x1

@

m(x1 ; x2 ) =

@x2

+ 2x1

+ x2

+ 2x2

+ x1

We see that in the model (2.18), the regression derivatives are not a simple coe cient, but are

functions of several coe cients plus the levels of (x1; x2 ): Consequently it is di cult to interpret

the coe cients individually. It is more useful to interpret them as a group.

We typically call 5 the interaction eect. Notice that it appears in both regression derivative

equations, and has a symmetric interpretation in each. If 5 > 0 then the regression derivative of

x1 on y is increasing in the level of x2 (and the regression derivative of x2 on y is increasing in the

level of x1 ); while if 5 < 0 the reverse is true. It is worth noting that this symmetry is an articial

implication of the quadratic equation (2.18), and is not a general feature of nonlinear conditional

means m(x1 ; x2 ).

2.16

When all regressors takes a nite set of values, it turns out the CEF can be written as a linear

function of regressors.

This simplest example is a binary variable, which takes only two distinct values. For example, the variable gender takes only the values man and woman. Binary variables are extremely

common in econometric applications, and are alternatively called dummy variables or indicator

variables.

Consider the simple case of a single binary regressor. In this case, the conditional mean can

only take two distinct values. For example,

8

gender=man

< 0 if

E (y j gender) =

:

1 if gender=woman

25

To facilitate a mathematical treatment, we typically record dummy variables with the values f0; 1g:

For example

0 if

gender=man

x1 =

(2.19)

1 if gender=woman

Given this notation we can write the conditional mean as a linear function of the dummy variable

x1 ; that is

E (y j x1 ) = + x1

where = 0 and = 1

is equal to the

0 : In this simple regression equation the intercept

conditional mean of y for the x1 = 0 subpopulation (men) and the slope is equal to the dierence

in the conditional means between the two subpopulations.

Equivalently, we could have dened x1 as

x1 =

1 if

gender=man

0 if gender=woman

(2.20)

In this case, the regression intercept is the mean for women (rather than for men) and the regression

slope has switched signs. The two regressions are equivalent but the interpretation of the coe cients

has changed. Therefore it is always important to understand the precise denitions of the variables,

and illuminating labels are helpful. For example, labelling x1 as genderdoes not help distinguish

between denitions (2.19) and (2.20). Instead, it is better to label x1 as women or female if

denition (2.19) is used, or as men or male if (2.20) is used.

Now suppose we have two dummy variables x1 and x2 : For example, x2 = 1 if the person is

married, else x2 = 0: The conditional mean given x1 and x2 takes at most four possible values:

8

if x1 = 0 and x2 = 0

(unmarried men)

>

>

< 00

if

x

=

0

and

x

=

1

(married men)

1

2

01

E (y j x1 ; x2 ) =

if

x

=

1

and

x

=

0

(unmarried

women)

>

1

2

>

: 10

(married women)

11 if x1 = 1 and x2 = 1

In this case we can write the conditional mean as a linear function of x1 , x2 and their product

x1 x2 :

E (y j x1 ; x2 ) = + 1 x1 + 2 x2 + 3 x1 x2

where = 00 ; 1 = 10

00 ; 2 = 01

00 ; and 3 = 11

10

01 + 00 :

We can view the coe cient 1 as the eect of gender on expected log wages for unmarried

wages earners, the coe cient 2 as the eect of marriage on expected log wages for men wage

earners, and the coe cient 3 as the dierence between the eects of marriage on expected log

wages among women and among men. Alternatively, it can also be interpreted as the dierence

between the eects of gender on expected log wages among married and non-married wage earners.

Both interpretations are equally valid. We often describe 3 as measuring the interaction between

the two dummy variables, or the interaction eect, and describe 3 = 0 as the case when the

interaction eect is zero.

In this setting we can see that the CEF is linear in the three variables (x1 ; x2 ; x1 x2 ): Thus to

put the model in the framework of Section 2.14, we would dene the regressor x3 = x1 x2 and the

regressor vector as

0

1

x1

B x2 C

C

x=B

@ x3 A :

1

So even though we started with only 2 dummy variables, the number of regressors (including the

intercept) is 4.

26

values and can be written as the linear function

E (y j x1 ; x2 ; x3 ) =

1 x1

2 x2

3 x3

4 x1 x2

5 x1 x3

6 x2 x3

7 x1 x2 x3

In general, if there are p dummy variables x1 ; :::; xp then the CEF E (y j x1 ; x2 ; :::; xp ) takes

at most 2p distinct values, and can be written as a linear function of the 2p regressors including

x1 ; x2 ; :::; xp and all cross-products. This might be excessive in practice if p is modestly large. In

the next section we will discuss projection approximations which yield more parsimonious parameterizations.

We started this section by saying that the conditional mean is linear whenever all regressors

take only a nite number of possible values. How can we see this? Take a categorical variable,

such as race. For example, we earlier divided race into three categories. We can record categorical

variables using numbers to indicate each category, for example

8

< 1 if white

2 if black

x3 =

:

3 if other

When doing so, the values of x3 have no meaning in terms of magnitude, they simply indicate the

relevant category.

When the regressor is categorical the conditional mean of y given x3 takes a distinct value for

each possibility:

8

< 1 if x3 = 1

if x3 = 2

E (y j x3 ) =

: 2

3 if x3 = 3

This is not a linear function of x3 itself, but it can be made so by constructing dummy variables

for two of the three categories. For example

x4 =

1 if

black

0 if not black

x5 =

1 if

other

0 if not other

In this case, the categorical variable x3 is equivalent to the pair of dummy variables (x4 ; x5 ): The

explicit relationship is

8

< 1 if x4 = 0 and x5 = 0

2 if x4 = 1 and x5 = 0

x3 =

:

3 if x4 = 0 and x5 = 1

Given these transformations, we can write the conditional mean of y as a linear function of x4 and

x5

E (y j x3 ) = E (y j x4 ; x5 ) = + 1 x4 + 2 x5

We can write the CEF as either E (y j x3 ) or E (y j x4 ; x5 ) (they are equivalent), but it is only linear

as a function of x4 and x5 :

This setting is similar to the case of two dummy variables, with the dierence that we have not

included the interaction term x4 x5 : This is because the event fx4 = 1 and x5 = 1g is empty by

construction, so x4 x5 = 0 by denition.

2.17

27

While the conditional mean m(x) = E (y j x) is the best predictor of y among all functions

of x; its functional form is typically unknown. In particular, the linear CEF model is empirically

unlikely to be accurate unless x is discrete and low-dimensional so all interactions are included.

Consequently in most cases it is more realistic to view the linear specication (2.16) as an approximation. In this section we derive a specic approximation with a simple interpretation.

Theorem 2.10.1 showed that the conditional mean m (x) is the best predictor in the sense

that it has the lowest mean squared error among all predictors. By extension, we can dene an

approximation to the CEF by the linear function with the lowest mean squared error among all

linear predictors.

For this derivation we require the following regularity condition.

Assumption 2.17.1

1. Ey 2 < 1:

2. E kxk2 < 1:

3. Qxx = E (xx0 ) is positive denite.

In Assumption 2.17.1.2 we use the notation kxk = (x0 x)1=2 to denote the Euclidean length of

the vector x.

The rst two parts of Assumption 2.17.1 imply that the variables y and x have nite means,

variances, and covariances. The third part of the assumption is more technical, and its role will

become apparent shortly. It is equivalent to imposing that the columns of Qxx = E (xx0 ) are

linearly independent, or equivalently that the matrix Qxx is invertible.

A linear predictor for y is a function of the form x0 for some

2 Rk . The mean squared

prediction error is

2

S( ) = E y x0

:

The best linear predictor of y given x, written P(y j x); is found by selecting the vector

minimize S( ):

P(y j x) = x0

where

S( ) = E y

x0

The minimizer

= argmin S( )

2Rk

(2.21)

to

The mean squared prediction error can be written out as a quadratic function of

S( ) = Ey 2

2 0 E (xy) +

E xx0

28

:

The quadratic structure of S( ) means that we can solve explicitly for the minimizer. The rstorder condition for minimization (from Appendix A.9) is

0=

@

S( ) =

@

2E (xy) + 2E xx0

(2.22)

Rewriting (2.22) as

2E (xy) = 2E xx0

and dividing by 2, this equation takes the form

Qxy = Qxx

(2.23)

where Qxy = E (xy) is k 1 and Qxx = E (xx0 ) is k k. The solution is found by inverting the

matrix Qxx , and is written

= Qxx1 Qxy

or

= E xx0

E (xy) :

(2.24)

It is worth taking the time to understand the notation involved in the expression (2.24). Qxx is a

E(xy)

k k matrix and Qxy is a k 1 column vector. Therefore, alternative expressions such as E(xx

0)

or E (xy) (E (xx0 )) 1 are incoherent and incorrect. We also can now see the role of Assumption

2.17.1.3. It is necessary in order for the solution (2.24) to exist. Otherwise, there would be multiple

solutions to the equation (2.23).

We now have an explicit expression for the best linear predictor:

1

P(y j x) = x0 E xx0

E (xy) :

The projection error is

e=y

x0 :

(2.25)

This equals the error from the regression equation when (and only when) the conditional mean is

linear in x; otherwise they are distinct.

Rewriting, we obtain a decomposition of y into linear predictor and error

y = x0 + e:

(2.26)

In general we call equation (2.26) or x0 the best linear predictor of y given x; or the linear

projection of y on x. Equation (2.26) is also often called the regression of y on x but this can

sometimes be confusing as economists use the term regression in many contexts. (Recall that we

said in Section 2.14 that the linear CEF model is also called the linear regression model.)

An important property of the projection error e is

E (xe) = 0:

(2.27)

To see this, using the denitions (2.25) and (2.24) and the matrix properties AA

Ia = a;

E (xe) = E x y

= E (xy)

= 0

= I and

x0

E xx0

E xx0

E (xy)

(2.28)

29

as claimed.

Equation (2.27) is a set of k equations, one for each regressor. In other words, (2.27) is equivalent

to

E (xj e) = 0

(2.29)

for j = 1; :::; k: As in (2.15), the regressor vector x typically contains a constant, e.g. xk = 1. In

this case (2.29) for j = k is the same as

E (e) = 0:

(2.30)

Thus the projection error has a mean of zero when the regressor vector contains a constant. (When

x does not have a constant, (2.30) is not guaranteed. As it is desirable for e to have a zero mean,

this is a good reason to always include a constant in any regression model.)

It is also useful to observe that since cov(xj ; e) = E (xj e) E (xj ) E (e) ; then (2.29)-(2.30)

together imply that the variables xj and e are uncorrelated.

This completes the derivation of the model. We summarize some of the most important properties.

Theorem 2.17.1 Properties of Linear Projection Model

Under Assumption 2.17.1,

1. The moments E (xx0 ) and E (xy) exist with nite elements.

2. The Linear Projection Coe cient (2.21) exists, is unique, and equals

1

= E xx0

E (xy) :

P(y j x) = x0 E xx0

4. The projection error e = y

x0

E (xy) :

E e2 < 1

and

E (xe) = 0:

5. If x contains an constant, then

E (e) = 0:

6. If E jyjr < 1 and E kxkr < 1 for r

It is useful to reect on the generality of Theorem 2.17.1. The only restriction is Assumption

2.17.1. Thus for any random variables (y; x) with nite variances we can dene a linear equation

(2.26) with the properties listed in Theorem 2.17.1. Stronger assumptions (such as the linear CEF

model) are not necessary. In this sense the linear model (2.26) exists quite generally. However,

it is important not to misinterpret the generality of this statement. The linear equation (2.26) is

dened as the best linear predictor. It is not necessarily a conditional mean, nor a parameter of a

structural or causal economic model.

30

y = x0 + e:

E (xe) = 0

=

E xx0

E (xy)

We illustrate projection using three log wage equations introduced in earlier sections.

For our rst example, we consider a model with the two dummy variables for gender and race

similar to Table 2.1. As we learned in Section 2.16, the entries in this table can be equivalently

expressed by a linear CEF. For simplicity, lets consider the CEF of log(wage) as a function of

Black and Female.

E(log(wage) j Black; F emale) =

0:20Black

This is a CEF as the variables are dummys and all interactions are included.

Now consider a simpler model omitting the interaction eect. This is the linear projection on

the variables Black and F emale

P(log(wage) j Black; F emale) =

0:15Black

(2.32)

What is the dierence? The full CEF (2.31) shows that the race gap is dierentiated by gender: it

is 20% for black men (relative to non-black men) and 10% for black women (relative to non-black

women). The projection model (2.32) simplies this analysis, calculating an average 15% wage gap

for blacks, ignoring the role of gender. Notice that this is despite the fact that the gender variable

is included in (2.32).

For our second example we consider the CEF of log wages as a function of years of education

for white men which was illustrated in Figure 2.4 and is repeated in Figure 2.7. Superimposed on

the gure are two projections. The rst (given by the dashed line) is the linear projection of log

wages on years of education

P(log(wage) j Education) = 1:5 + 0:11Education

This simple equation indicates an average 11% increase in wages for every year of education. An

inspection of the Figure shows that this approximation works well for education 9, but underpredicts for individuals with lower levels of education. To correct this imbalance we use a linear

spline equation which allows dierent rates of return above and below 9 years of education:

P (log(wage) j Education; (Education

9) 1 (Education > 9)

This equation is displayed in Figure 2.7 using the solid line, and appears to t much better. It

indicates a 2% increase in mean wages for every year of education below 9, and a 12% increase in

mean wages for every year of education above 9. It is still an approximation to the conditional

mean but it appears to be fairly reasonable.

For our third example we take the CEF of log wages as a function of years of experience for

white men with 12 years of education, which was illustrated in Figure 2.5 and is repeated as the

solid line in Figure 2.8. Superimposed on the gure are two projections. The rst (given by the

dot-dashed line) is the linear projection on experience

P(log(wage) j Experience) = 2:5 + 0:011Experience

31

4.0

3.5

3.0

2.5

2.0

10

12

14

16

18

20

Years of Education

and the second (given by the dashed line) is the linear projection on experience and its square

P(log(wage) j Experience) = 2:3 + 0:046Experience

0:0007Experience2 :

4.0

It is fairly clear from an examination of Figure 2.8 that the rst linear projection is a poor approximation. It over-predicts wages for young and old workers, and under-predicts for the rest. Most

importantly, it misses the strong downturn in expected wages for older wage-earners. The second

projection ts much better. We can call this equation a quadratic projection since the function

is quadratic in experience:

3.0

2.0

2.5

3.5

Conditional Mean

Linear Projection

Quadratic Projection

10

20

30

40

50

32

The linear projection coe cient = (E (xx0 )) 1 E (xy) exists and is

unique as long as the k k matrix Qxx = E (xx0 ) is invertible. The matrix

Qxx is sometimes called the design matrix, as in experimental settings

the researcher is able to control Qxx by manipulating the distribution of

the regressors x:

Observe that for any non-zero 2 Rk ;

0

Qxx

=E

xx0

=E

it is positive denite means that this is a strict inequality, E ( 0 x)2 >

0: Equivalently, there cannot exist a non-zero vector

such that 0 x =

0 identically. This occurs when redundant variables are included in x:

Positive semi-denite matrices are invertible if and only if they are positive

denite. When Qxx is invertible then = (E (xx0 )) 1 E (xy) exists and is

uniquely dened. In other words, in order for to be uniquely dened, we

must exclude the degenerate situation of redundant varibles.

Theorem 2.17.1 shows that the linear projection coe cient is identied (uniquely determined) under Assumptions 2.17.1. The key is invertibility of Qxx . Otherwise, there is no unique solution to the equation

= Qxy :

Qxx

(2.33)

When Qxx is not invertible there are multiple solutions to (2.33), all of

which yield an equivalent best linear predictor x0 . In this case the coe cient is not identied as it does not have a unique value. Even so, the

best linear predictor x0 still identied. One solution is to set

= E xx0

E (xy)

2.18

2

= E e2 :

2

as

=E y

x0

= Ey 2

2E yx0

= Qyy

= Qyy

E xx0

def

= Qyy x

One useful feature of this formula is that it shows that Qyy x = Qyy

variance of the error from the linear projection of y on x.

(2.34)

Qyx Qxx1 Qxy equals the

2.19

33

Sometimes it is useful to separate the intercept from the other regressors, and write the linear

projection equation in the format

y = + x0 + e

(2.35)

where is the intercept and x does not contain a constant.

Taking expectations of this equation, we nd

Ey = E + Ex0 + Ee

or

y

where

= Ey and

0

x

=

0

x

y

0

x)

= (x

+ e;

(2.36)

y and x

x . (They are centered at their

means, so are mean-zero random variables.) Because x

x is uncorrelated with e; (2.36) is also

a linear projection, thus by the formula for the linear projection model,

=

x ) (x

E (x

= var (x)

x)

E (x

x)

cov (x; y)

y=

+ x0 + e;

then

=

0

x

(2.37)

and

= var (x)

2.20

cov (x; y) :

(2.38)

Regression Sub-Vectors

x=

10

x1

x2

matrix of the vector x is var (x) = cov (x; x) = E (x Ex) (x Ex)0 :

(2.39)

Ex) (z

34

y = x0 + e

= x01

+ x02

+e

(2.40)

and

2:

E (xe) = 0:

In this section we derive formula for the sub-vectors

Partition Qxx comformably with x

Qxx =

Q11 Q12

Q21 Q22

E (x2 x01 ) E (x2 x02 )

Q1y

Q2y

Qxy =

E (x1 y)

E (x2 y)

Qxx1 =

Q11 Q12

Q21 Q22

def

def

Q11 Q12

Q21 Q22

Q1112

Q2211 Q21 Q111

=

def

Q2211

(2.41)

1

Q2211

Q1112

Q2211 Q21 Q111

=

=

Q1112 Q1y

Q2211 Q2y

Q1112 Q1y 2

Q2211 Q2y 1

Q1y

Q2y

Q21 Q111 Q1y

2.21

= Q1112 Q1y 2

= Q2211 Q2y 1

In the previous section we derived the formula for the coe cient sub-vectors 1 and 2 : We now

use these formula to give a useful interpretation of the coe cients as obtaining from an iterated

projection.

Take equation (2.40) for the case dim(x1 ) = 1 so that 1 2 R:

y = x1

+ x02

+ e:

(2.42)

x1 = x02

+ u1

E (x2 u1 ) = 0:

From (2.24) and (2.34),

that

E (u1 y) = E

x1

0

2 x2

y = E (x1 y)

0

2 E (x2 y)

= Q1y

We have found that

1

= Q1112 Q1y 2 =

35

E (u1 y)

Eu21

What this means is that in the multivariate projection equation (2.42), the coe cient 1 equals

the projection coe cient from a regression of y on u1 ; the error from a projection of x1 on the

other regressors x2 : The error u1 can be thought of as the component of x1 which is not linearly

explained by the other regressors. Thus the coe cient 1 equals the linear eect of x1 on y; after

stripping out the eects of the other variables.

There was nothing special in the choice of the variable x1 : So this derivation applies symmetrically to all coe cients in a linear projection. Each coe cient equals the simple regression of y on

the error from a projection of that regressor on all the other regressors. Each coe cient equals the

linear eect of that variable on y; after linearly controlling for all the other regressors.

2.22

Again, let the regressors be partitioned as in (2.39). Consider the projection of y on x1 only.

Perhaps this is done because the variables x2 are not observed. This is the equation

y = x01

+u

(2.43)

E (x1 u) = 0

Notice that we have written the coe cient on x1 as 1 rather than 1 and the error as u rather

than e: This is because (2.43) is dierent than (2.40). Goldberger (1991) introduced the catchy

labels long regression for (2.40) and short regression for (2.43) to emphasize the distinction.

Typically, 1 6= 1 , except in special cases. To see this, we calculate

1

=

=

E x1 x01

E x1 x01

E (x1 y)

E x1 x01

+ E x1 x01

+ x02

E x1 x02

+e

where

= E x1 x01

E x1 x02

Observe that 1 = 1 +

= 0 or 2 = 0: Thus the short and long regressions

2 6= 1 unless

have dierent coe cients on x1 : They are the same only under one of two conditions. First, if the

projection of x2 on x1 yields a set of zero coe cients (they are uncorrelated), or second, if the

coe cient on x2 in (2.40) is zero. In general, the coe cient in (2.43) is 1 rather than 1 : The

dierence

2 between 1 and 1 is known as omitted variable bias. It is the consequence of

omission of a relevant correlated variable.

To avoid omitted variables bias the standard advice is to include all potentially relevant variables

in estimated models. By construction, the general model will be free of such bias. Unfortunately

in many cases it is not feasible to completely follow this advice as many desired variables are

not observed. In this case, the possibility of omitted variables bias should be acknowledged and

discussed in the course of an empirical investigation.

For example, suppose y is log wages, x1 is education, and x2 is intellectual ability. It seems

reasonable to suppose that education and intellectual ability are positively correlated (highly able

individuals attain higher levels of education) which means

> 0. It also seems reasonable to

suppose that conditional on education, individuals with higher intelligence will earn higher wages

36

so that 2 > 0: This implies that 2 > 0 and 1 = 1 + 2 > 1 : Therefore, it seems reasonable to

expect that in a regression of wages on education with ability omitted, the coe cient on education

is higher than in a regression where ability is included. In other words, in this context the omitted

variable biases the regression coe cient upwards.

2.23 Best Linear Approximation

There are alternative ways we could construct a linear approximation $x'\beta$ to the conditional mean $m(x)$. In this section we show that one alternative approach turns out to yield the same answer as the best linear predictor.

We start by defining the mean-square approximation error of $x'\beta$ to $m(x)$ as the expected squared difference between $x'\beta$ and the conditional mean $m(x)$:

    d(\beta) = E(m(x) - x'\beta)^2.                                       (2.44)

The function $d(\beta)$ is a measure of the deviation of $x'\beta$ from $m(x)$. If the two functions are identical then $d(\beta) = 0$; otherwise $d(\beta) > 0$. We can also view the mean-square difference $d(\beta)$ as a density-weighted average of the function $(m(x) - x'\beta)^2$, since

    d(\beta) = \int_{\mathbb{R}^k} (m(x) - x'\beta)^2 f_x(x)\, dx.

We can then define the best linear approximation to the conditional mean $m(x)$ as the function $x'\beta$ obtained by selecting $\beta$ to minimize $d(\beta)$:

    \beta = \arg\min_{\beta \in \mathbb{R}^k} d(\beta).                   (2.45)

Similar to the best linear predictor we are measuring accuracy by expected squared error. The difference is that the best linear predictor (2.21) selects $\beta$ to minimize the expected squared prediction error, while the best linear approximation (2.45) selects $\beta$ to minimize the expected squared approximation error.

Despite the different definitions, it turns out that the best linear predictor and the best linear approximation are identical. By the same steps as in (2.17) plus an application of conditional expectations we can find that

    \beta = (E(xx'))^{-1} E(x\, m(x))                                     (2.46)
          = (E(xx'))^{-1} E(xy)                                           (2.47)

(see Exercise 2.19). Thus (2.45) equals (2.21). We conclude that the definition (2.45) can be viewed as an alternative motivation for the linear projection coefficient.

2.24 Normal Regression

Suppose the variables $(y, x)$ are jointly normally distributed. Consider the best linear predictor of $y$ given $x$:

    y = x'\beta + e,     \beta = (E(xx'))^{-1} E(xy).

Since the error $e$ is a linear transformation of the normal vector $(y, x)$, it follows that $(e, x)$ is jointly normal, and since they are jointly normal and uncorrelated (since $E(xe) = 0$) they are also independent (see Appendix B.9). Independence implies that

    E(e \mid x) = E(e) = 0

and

    E(e^2 \mid x) = E(e^2) = \sigma^2.

We have shown that when $(y, x)$ are jointly normally distributed, they satisfy a normal linear CEF

    y = x'\beta + e

where $e \sim N(0, \sigma^2)$ is independent of $x$.

This is an alternative (and traditional) motivation for the linear CEF model. This motivation has limited merit in econometric applications since economic data is typically non-normal.

2.25 Regression to the Mean

The term regression originated in an influential paper by Francis Galton published in 1886, where he examined the joint distribution of the stature (height) of parents and children. Effectively, he was estimating the conditional mean of children's height given their parents' height. Galton discovered that this conditional mean was approximately linear with a slope of 2/3. This implies that on average a child's height is more mediocre (average) than his or her parent's height. Galton called this phenomenon regression to the mean, and the label regression has stuck to this day to describe most conditional relationships.

One of Galton's fundamental insights was to recognize that if the marginal distributions of $y$ and $x$ are the same (e.g. the heights of children and parents in a stable environment) then the regression slope in a linear projection is always less than one.

To be more precise, take the simple linear projection

    y = \alpha + x\beta + e                                               (2.48)

where $y$ equals the height of the child and $x$ equals the height of the parent. Assume that $y$ and $x$ have the same mean, so that $\mu_y = \mu_x = \mu$. Then from (2.37)

    \alpha = (1 - \beta)\mu

so we can write the linear projection (2.48) as

    P(y \mid x) = (1 - \beta)\mu + x\beta.

This shows that the projected height of the child is a weighted average of the population average height $\mu$ and the parent's height $x$, with the weight equal to the regression slope $\beta$. When the height distribution is stable across generations, so that $\mathrm{var}(y) = \mathrm{var}(x)$, then this slope is the simple correlation of $y$ and $x$. Using (2.38),

    \beta = \frac{\mathrm{cov}(x, y)}{\mathrm{var}(x)} = \mathrm{corr}(x, y).

By the properties of correlation (e.g. equation (B.7) in the Appendix), $-1 \le \mathrm{corr}(x, y) \le 1$, with $\mathrm{corr}(x, y) = 1$ only in the degenerate case $y = x$. Thus if we exclude degeneracy, $\beta$ is strictly less than 1.

This means that on average a child's height is more mediocre (closer to the population average) than the parent's.

Sir Francis Galton (1822-1911) of England was one of the leading figures in late 19th century statistics. In addition to inventing the concept of regression, he is credited with introducing the concepts of correlation, the standard deviation, and the bivariate normal distribution. His work on heredity made a significant intellectual advance by examining the joint distributions of observables, allowing the application of the tools of mathematical statistics to the social sciences.

A common error, known as the regression fallacy, is to infer from $\beta < 1$ that the population is converging, meaning that its variance is declining towards zero. This is a fallacy because we derived the implication $\beta < 1$ under the assumption of constant means and variances. So certainly $\beta < 1$ does not imply that the variance of $y$ is less than the variance of $x$.

Another way of seeing this is to examine the conditions for convergence in the context of equation (2.48). Since $x$ and $e$ are uncorrelated, it follows that

    \mathrm{var}(y) = \beta^2\,\mathrm{var}(x) + \mathrm{var}(e).

Then $\mathrm{var}(y) < \mathrm{var}(x)$ if and only if

    \beta^2 < 1 - \frac{\mathrm{var}(e)}{\mathrm{var}(x)},

which is not implied by the simple condition $\beta < 1$.

The regression fallacy arises in related empirical situations. Suppose you sort families into groups by the heights of the parents, and then plot the average heights of each subsequent generation over time. If the population is stable, the regression property implies that the plotted lines will converge: children's height will be more average than their parents'. The regression fallacy is to incorrectly conclude that the population is converging. A message to be learned from this example is that such plots are misleading for inferences about convergence.

The regression fallacy is subtle. It is easy for intelligent economists to succumb to its temptation. A famous example is The Triumph of Mediocrity in Business by Horace Secrist, published in 1933. In this book, Secrist carefully and with great detail documented that in a sample of department stores over 1920-1930, when he divided the stores into groups based on 1920-1921 profits, and plotted the average profits of these groups for the subsequent 10 years, he found clear and persuasive evidence for convergence toward mediocrity. Of course, there was no discovery: regression to the mean is a necessary feature of stable distributions.

2.26 Reverse Regression

Galton noticed another interesting feature of the bivariate distribution. There is nothing special about a regression of $y$ on $x$. We can also regress $x$ on $y$. (In his heredity example this is the best linear predictor of the height of parents given the height of their children.) This regression takes the form

    x = \alpha^* + y\beta^* + e^*.                                        (2.49)

This is sometimes called the reverse regression. In this equation, the coefficients $\alpha^*$, $\beta^*$ and error $e^*$ are defined by linear projection. In a stable population we find that

    \beta^* = \mathrm{corr}(x, y) = \beta
    \alpha^* = (1 - \beta)\mu = \alpha

which are exactly the same as in the projection of $y$ on $x$! The intercept and slope have exactly the same values in the forward and reverse projections!

While this algebraic discovery is quite simple, it is counter-intuitive. Instead, a common yet mistaken guess for the form of the reverse regression is to take the equation (2.48), divide through by $\beta$ and rewrite to find the equation

    x = -\frac{\alpha}{\beta} + y\frac{1}{\beta} - \frac{1}{\beta}e,      (2.50)

suggesting that the projection of $x$ on $y$ should have a slope coefficient of $1/\beta$ instead of $\beta$, and intercept of $-\alpha/\beta$ rather than $\alpha$. What went wrong? Equation (2.50) is perfectly valid, because it is a simple manipulation of the valid equation (2.48). The trouble is that (2.50) is neither a CEF nor a linear projection. Inverting a projection (or CEF) does not yield a projection (or CEF). Instead, (2.49) is a valid projection, not (2.50).

In any event, Galton's finding was that when the variables are standardized, the slope in both projections ($y$ on $x$, and $x$ on $y$) equals the correlation, and both equations exhibit regression to the mean. It is not a causal relation, but a natural feature of all joint distributions.
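As a quick numerical check (a sketch, not part of the text; the joint distribution is a hypothetical stable population), the following simulation verifies that the forward and reverse projections have the same slope, equal to the correlation, when $y$ and $x$ have equal means and variances.

```python
# Minimal sketch: forward and reverse projection slopes in a stable population.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
rho = 2 / 3                                              # hypothetical correlation
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)   # same mean (0) and variance (1)

slope_y_on_x = np.cov(x, y)[0, 1] / np.var(x)   # projection of y on x
slope_x_on_y = np.cov(x, y)[0, 1] / np.var(y)   # reverse projection of x on y

print(slope_y_on_x, slope_x_on_y, np.corrcoef(x, y)[0, 1])  # all approximately 2/3
```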

2.27 Limitations of the Best Linear Predictor

From Theorem 2.8.1.4 we know that the CEF error has the property $E(xe) = 0$. Thus a linear CEF is a linear projection. However, the converse is not true, as the projection error does not necessarily satisfy $E(e \mid x) = 0$. Furthermore, the linear projection may be a poor approximation to the CEF.

To see these points in a simple example, suppose that $y = x + x^2$ where $x \sim N(0, 1)$. In this case the true CEF is $m(x) = x + x^2$ and there is no CEF error. Now consider the linear projection of $y$ on $x$ and an intercept, namely the model $y = \alpha + \beta x + u$. Since $x$ and $x^2$ are uncorrelated, the linear projection takes the form $P(y \mid x) = 1 + x$. This is quite different from the true CEF $m(x) = x + x^2$. The projection error equals $e = x^2 - 1$, which is a deterministic function of $x$, yet is uncorrelated with $x$. We see in this example that a projection error need not be a CEF error, and a linear projection can be a poor approximation to the CEF.

Another defect of linear projection is that it is sensitive to the marginal distribution of the regressors when the conditional mean is non-linear. We illustrate the issue in Figure 2.9 for a constructed joint distribution of $y$ and $x$. The solid line is the non-linear CEF of $y$ given $x$. The data are divided in two groups, Group 1 and Group 2, which have different marginal distributions for the regressor $x$, and Group 1 has a lower mean value of $x$ than Group 2. The separate linear projections of $y$ on $x$ for these two groups are displayed in the Figure by the dashed lines. These two projections are distinct approximations to the CEF. A defect with linear projection is that it leads to the incorrect conclusion that the effect of $x$ on $y$ is different for individuals in the two groups. This conclusion is incorrect because in fact there is no difference in the conditional mean function. The apparent difference is a by-product of a linear approximation to a non-linear mean, combined with different marginal distributions for the conditioning variables. (In this construction, the $x$ in Group 1 are N(2, 1) and those in Group 2 are N(4, 1), and the conditional distribution of $y$ given $x$ is $N(m(x), 1)$ where $m(x) = 2x - x^2/6$.)
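The following sketch (not part of the text) verifies the first example numerically: with $x \sim N(0,1)$ and $y = x + x^2$, the best linear predictor is approximately $1 + x$, and the projection error $x^2 - 1$ is uncorrelated with $x$ despite being a deterministic function of it.

```python
# Minimal sketch: linear projection of y = x + x^2 on (1, x) with x ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)
y = x + x**2

X = np.column_stack([np.ones_like(x), x])
alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - (alpha + beta * x)            # projection error, equals x**2 - 1 in population

print(alpha, beta)                    # approximately 1 and 1
print(np.corrcoef(x, e)[0, 1])        # approximately 0: uncorrelated, yet e = x^2 - 1
```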

2.28 Random Coefficient Model

A model which is notationally similar to but conceptually distinct from the linear CEF model is the linear random coefficient model. It takes the form

    y = x'\eta

where the individual-specific coefficient $\eta$ is random and independent of $x$. For example, if $x$ is years of schooling and $y$ is log wages, then $\eta$ is the individual-specific returns to schooling. If a person obtains an extra year of schooling, $\eta$ is the actual change in their wage. The random coefficient model allows the returns to schooling to vary in the population. Some individuals might have a high return to education (a high $\eta$) and others a low return, possibly 0, or even negative.

In the linear CEF model the regressor coefficient equals the regression derivative, the change in the conditional mean due to a change in the regressors, $\beta = \nabla m(x)$. This is not the effect on a given individual, it is the effect on the population average. In contrast, in the random coefficient model, the random vector $\eta = \nabla(x'\eta)$ is the true causal effect, the change in the response variable $y$ itself due to a change in the regressors.

It is interesting, however, to discover that the linear random coefficient model implies a linear CEF. To see this, let $\beta$ and $\Sigma$ denote the mean and covariance matrix of $\eta$:

    \beta = E(\eta)
    \Sigma = \mathrm{var}(\eta)

and then decompose the random coefficient as

    \eta = \beta + u

where $u = \eta - \beta$ has mean zero and is independent of $x$. Then we can write

    E(y \mid x) = x' E(\eta \mid x) = x' E(\eta) = x'\beta

so the CEF is linear in $x$, and the coefficients $\beta$ equal the mean of the random coefficient $\eta$.

We can thus write the equation as a linear CEF

    y = x'\beta + e                                                       (2.51)

where $e = x'u$ and $u = \eta - \beta$. The error is conditionally mean zero, $E(e \mid x) = 0$. Furthermore,

    \mathrm{var}(e \mid x) = x'\,\mathrm{var}(\eta)\,x = x'\Sigma x,

so the error is conditionally heteroskedastic, with variance a quadratic function of $x$.

Theorem 2.28.1 In the linear random coefficient model $y = x'\eta$ with $\eta$ independent of $x$, $E\|x\|^2 < \infty$, and $E\|\eta\|^2 < \infty$, then

    E(y \mid x) = x'\beta
    \mathrm{var}(y \mid x) = x'\Sigma x

where $\beta = E(\eta)$ and $\Sigma = \mathrm{var}(\eta)$.
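A small simulation sketch (not from the text; the scalar design and parameter values are hypothetical) illustrates the theorem: with a random slope the conditional mean is linear in $x$ with coefficient $E(\eta)$, while the conditional variance is approximately quadratic in $x$.

```python
# Minimal sketch: linear random coefficient model y = x * eta, eta independent of x.
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000
x = rng.uniform(1, 3, size=n)
eta = 0.5 + 0.2 * rng.normal(size=n)     # E(eta) = 0.5, var(eta) = 0.04
y = x * eta

# Check E(y | x) ~ 0.5 x and var(y | x) ~ 0.04 x^2 on a coarse grid of x-bins
bins = np.digitize(x, np.linspace(1, 3, 11))
for b in (1, 5, 10):
    xb, yb = x[bins == b], y[bins == b]
    print(f"x ~ {xb.mean():.2f}: E(y|x) = {yb.mean():.3f} "
          f"(0.5x = {0.5*xb.mean():.3f}), var(y|x) = {yb.var():.4f} "
          f"(0.04x^2 = {0.04*xb.mean()**2:.4f})")
```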

2.29 Causal Effects

So far we have avoided the concept of causality, yet often the underlying goal of an econometric analysis is to uncover a causal relationship between variables. It is often of great interest to understand the causes and effects of decisions, actions, and policies. For example, we may be interested in the effect of class sizes on test scores, police expenditures on crime rates, climate change on economic activity, years of schooling on wages, institutional structure on growth, the effectiveness of rewards on behavior, the consequences of medical procedures for health outcomes, or any variety of possible causal relationships. In each case, the goal is to understand what is the actual effect on the outcome $y$ due to a change in the input $x$. We are not just interested in the conditional mean or linear projection, we would like to know the actual change. The causal effect is typically specific to an individual, and also cannot be directly observed.

For example, the causal effect of schooling on wages is the actual difference a person would receive in wages if we could change their level of education. The causal effect of a medical treatment is the actual difference in an individual's health outcome, comparing treatment versus non-treatment. In both cases the effects are individual and unobservable. For example, suppose that Jennifer would have earned \$10 an hour as a high-school graduate and \$20 an hour as a college graduate while George would have earned \$8 as a high-school graduate and \$12 as a college graduate. In this example the causal effect of schooling is \$10 an hour for Jennifer and \$4 an hour for George. Furthermore, the causal effect is unobserved as we only observe the wage corresponding to the actual outcome.

A variable $x_1$ can be said to have a causal effect on the response variable $y$ if the latter changes when all other inputs are held constant. To make this precise we need a mathematical formulation. We can write a full model for the response variable $y$ as

    y = h(x_1, x_2, u)                                                    (2.52)

where $x_1$ and $x_2$ are the observed variables, $u$ is an $\ell \times 1$ unobserved random factor, and $h$ is a functional relationship. This framework includes as a special case the random coefficient model (2.28) studied earlier. We define the causal effect of $x_1$ within this model as the change in $y$ due to a change in $x_1$ holding the other variables $x_2$ and $u$ constant.

Definition 2.29.1 In the model (2.52) the causal effect of $x_1$ on $y$ is

    C(x_1, x_2, u) = \nabla_1 h(x_1, x_2, u),                             (2.53)

the change in $y$ due to a change in $x_1$, holding $x_2$ and $u$ constant.

To understand this concept, imagine taking a single individual. As far as our structural model is concerned, this person is described by their observables $x_1$ and $x_2$, and their unobservables $u$. In a wage regression the unobservables would include characteristics such as the person's abilities, skills, work ethic, interpersonal connections, and preferences. The causal effect of $x_1$ (say, education) is the change in the wage as $x_1$ changes, holding constant all other observables and unobservables.

It may be helpful to understand that (2.53) is a definition, and does not necessarily describe causality in a fundamental or experimental sense. Perhaps it would be more appropriate to label (2.53) as a structural effect (the effect within the structural model).

Sometimes it is useful to write this relationship as a potential outcome function

    y(x_1) = h(x_1, x_2, u)

where the notation implies that $y(x_1)$ is holding $x_2$ and $u$ constant.

A popular example arises in the analysis of treatment effects with a binary regressor $x_1$. Let $x_1 = 1$ indicate treatment (e.g. a medical procedure) and $x_1 = 0$ indicate non-treatment. In this case $y(x_1)$ can be written

    y(0) = h(0, x_2, u)
    y(1) = h(1, x_2, u).

In the literature on treatment effects, it is common to refer to $y(0)$ and $y(1)$ as the latent outcomes associated with non-treatment and treatment, respectively. That is, for a given individual, $y(0)$ is the health outcome if there is no treatment, and $y(1)$ is the health outcome if there is treatment. The causal effect of treatment for the individual is the change in their health outcome due to treatment, the change in $y$ as we hold both $x_2$ and $u$ constant:

    C(x_2, u) = y(1) - y(0).

This is random (a function of $x_2$ and $u$) as both potential outcomes $y(0)$ and $y(1)$ are different across individuals.

In a sample, we cannot observe both outcomes from the same individual, we only observe the realized value

    y = y(0)  if  x_1 = 0
    y = y(1)  if  x_1 = 1.

As the causal effect varies across individuals and is not observable, it cannot be measured on the individual level. We therefore focus on aggregate causal effects, in particular what is known as the average causal effect.

Definition 2.29.2 In the model (2.52) the average causal effect of $x_1$ on $y$ conditional on $x_2$ is

    ACE(x_1, x_2) = E(C(x_1, x_2, u) \mid x_1, x_2)
                  = \int_{\mathbb{R}^\ell} \nabla_1 h(x_1, x_2, u) f(u \mid x_1, x_2)\, du,   (2.54)

where $f(u \mid x_1, x_2)$ is the conditional density of $u$ given $(x_1, x_2)$.

We can think of the average causal effect $ACE(x_1, x_2)$ as the average effect in the general population. In our Jennifer & George schooling example given earlier, supposing that half of the population are Jennifers and the other half Georges, then the average causal effect of college is $(10 + 4)/2 = \$7$ an hour.

What is the relationship between the average causal effect $ACE(x_1, x_2)$ and the regression derivative $\nabla_1 m(x_1, x_2)$? Equation (2.52) implies that the CEF is

    m(x_1, x_2) = E(h(x_1, x_2, u) \mid x_1, x_2)
                = \int_{\mathbb{R}^\ell} h(x_1, x_2, u) f(u \mid x_1, x_2)\, du,

the average causal equation, averaged over the conditional distribution of the unobserved component $u$.

Applying the marginal effect operator, the regression derivative is

    \nabla_1 m(x_1, x_2) = \int_{\mathbb{R}^\ell} \nabla_1 h(x_1, x_2, u) f(u \mid x_1, x_2)\, du
                           + \int_{\mathbb{R}^\ell} h(x_1, x_2, u) \nabla_1 f(u \mid x_1, x_2)\, du
                         = ACE(x_1, x_2) + \int_{\mathbb{R}^\ell} h(x_1, x_2, u) \nabla_1 f(u \mid x_1, x_2)\, du.   (2.55)

In general, the average causal effect is not the regression derivative. However, they are equal when the second component in (2.55) is zero. This occurs when $\nabla_1 f(u \mid x_1, x_2) = 0$, that is, when the conditional density of $u$ given $(x_1, x_2)$ does not depend on $x_1$. The condition is sufficiently important that it has a special name in the treatment effects literature.

Definition 2.29.3 Conditional Independence Assumption (CIA): Conditional on $x_2$, the random variables $x_1$ and $u$ are statistically independent.

Thus the CIA implies that $\nabla_1 m(x_1, x_2) = ACE(x_1, x_2)$: the regression derivative equals the average causal effect.

Theorem 2.29.1 In the structural model (2.52), the Conditional Independence Assumption implies

    \nabla_1 m(x_1, x_2) = ACE(x_1, x_2),

the regression derivative equals the average causal effect for $x_1$ on $y$ conditional on $x_2$.

This is a fascinating result. It shows that whenever the unobservable is independent of the treatment variable (after conditioning on appropriate regressors) the regression derivative equals the average causal effect. In this case, the CEF has causal economic meaning, giving strong justification to estimation of the CEF. Our derivation also shows the critical role of the CIA. If the CIA fails, then the equality of the regression derivative and ACE fails.

This theorem is quite general. It applies equally to the treatment-effects model where $x_1$ is binary or to more general settings where $x_1$ is continuous.

It is also helpful to understand that the CIA is weaker than full independence of $u$ from the regressors $(x_1, x_2)$. The CIA was introduced precisely as a minimal sufficient condition to obtain the desired result. Full independence implies the CIA and implies that each regression derivative equals that variable's average causal effect, but full independence is not necessary in order to causally interpret a subset of the regressors.
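The role of the CIA can be seen in a small simulation sketch (not part of the text; the potential-outcome design is hypothetical). With a binary treatment, a regression of $y$ on the treatment indicator recovers the average causal effect when treatment is assigned independently of the unobservable, but not when assignment depends on it.

```python
# Minimal sketch: binary treatment, potential outcomes y(0) = u, y(1) = u + 1.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
u = rng.normal(size=n)           # unobserved factor
y0, y1 = u, u + 1.0              # true causal effect is 1 for every individual

# Case 1: treatment independent of u (CIA holds trivially)
d_indep = rng.integers(0, 2, size=n)
y = np.where(d_indep == 1, y1, y0)
print(y[d_indep == 1].mean() - y[d_indep == 0].mean())    # approximately 1.0

# Case 2: treatment more likely when u is large (CIA fails)
d_select = (u + rng.normal(size=n) > 0).astype(int)
y = np.where(d_select == 1, y1, y0)
print(y[d_select == 1].mean() - y[d_select == 0].mean())  # larger than 1.0
```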

2.30 Expectation: Mathematical Details*

We define the mean or expectation $Ey$ of a random variable $y$ as follows. If $y$ is discrete-valued, taking values on the set $\{\tau_1, \tau_2, \ldots\}$, then

    Ey = \sum_{j=1}^{\infty} \tau_j \Pr(y = \tau_j),

and if $y$ is continuous with density $f$ then

    Ey = \int_{-\infty}^{\infty} y f(y)\, dy.

We can unify these definitions by writing the expectation as the Lebesgue integral with respect to the distribution function $F$:

    Ey = \int_{-\infty}^{\infty} y\, dF(y).                               (2.56)

In the event that the integral (2.56) is not finite, separately evaluate the two integrals

    I_1 = \int_0^{\infty} y\, dF(y)                                       (2.57)
    I_2 = -\int_{-\infty}^{0} y\, dF(y).                                  (2.58)

If $I_1 = \infty$ and $I_2 < \infty$ it is typical to define $Ey = \infty$; if $I_1 < \infty$ and $I_2 = \infty$ we define $Ey = -\infty$. However, if both $I_1 = \infty$ and $I_2 = \infty$ then $Ey$ is undefined. If

    E|y| = \int_{-\infty}^{\infty} |y|\, dF(y) = I_1 + I_2 < \infty

then $Ey$ exists and is finite. In this case it is common to say that the mean $Ey$ is well-defined.

More generally, $y$ has a finite $r$'th moment if

    E|y|^r < \infty.                                                      (2.59)

By Liapunov's Inequality (B.20), (2.59) implies $E|y|^s < \infty$ for all $s \le r$. Thus, for example, if the fourth moment is finite then the first, second and third moments are also finite.

It is common in econometric theory to assume that the variables, or certain transformations of the variables, have finite moments of a certain order. How should we interpret this assumption? How restrictive is it?

One way to visualize the importance is to consider the class of Pareto densities given by

    f(y) = a y^{-a-1},     y > 1.

The parameter $a$ of the Pareto distribution indexes the rate of decay of the tail of the density. Larger $a$ means that the tail declines to zero more quickly. See the figure below where we show the Pareto density for $a = 1$ and $a = 2$. The parameter $a$ also determines which moments are finite. We can calculate that

    E|y|^r = a\int_1^{\infty} y^{r-a-1}\, dy = \frac{a}{a - r}   if r < a
    E|y|^r = \infty                                              if r \ge a.

This shows that if $y$ is Pareto distributed with parameter $a$, then the $r$'th moment of $y$ is finite if and only if $r < a$. Higher $a$ means higher finite moments. Equivalently, the faster the tail of the density declines to zero, the more moments are finite.

[Figure: the Pareto density f(y) = a y^{-a-1}, y > 1, plotted for a = 1 and a = 2.]
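As a numerical sketch (not from the text), we can check the moment formula by simulation: draws from the Pareto density with $a = 2$ have a finite first moment (equal to $a/(a-1) = 2$) but an infinite second moment, so the sample second moment fails to settle down as $n$ grows.

```python
# Minimal sketch: moments of the Pareto density f(y) = a * y**(-a-1), y > 1.
import numpy as np

rng = np.random.default_rng(5)
a = 2.0
for n in (10**4, 10**6, 10**7):
    u = 1.0 - rng.uniform(size=n)       # uniform on (0, 1]
    y = u ** (-1.0 / a)                 # inverse-CDF draw from the Pareto(a) density
    # First moment is finite (a/(a-1) = 2); second moment is infinite for a = 2.
    print(n, y.mean(), (y**2).mean())
```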

This connection between tail decay and finite moments is not limited to the Pareto distribution. We can make a similar analysis using a tail bound. Suppose that $y$ has density $f(y)$ which satisfies the bound $f(y) \le A|y|^{-a-1}$ for some $A < \infty$ and $a > 0$. Since $f(y)$ is bounded below a scale of a Pareto density, its tail behavior is similarly bounded. This means that for $r < a$

    E|y|^r = \int_{-\infty}^{\infty} |y|^r f(y)\, dy \le \int_{-1}^{1} f(y)\, dy + 2A\int_1^{\infty} y^{r-a-1}\, dy \le 1 + \frac{2A}{a - r} < \infty.

Thus if the tail of the density declines at the rate $|y|^{-a-1}$ or faster, then $y$ has finite moments up to (but not including) $a$. Broadly speaking, the restriction that $y$ has a finite $r$'th moment means that the tail of $y$'s density declines to zero faster than $y^{-r-1}$. The faster decline of the tail means that the probability of observing an extreme value of $y$ is a more rare event.

We complete this section by adding an alternative representation of expectation in terms of the distribution function.

Theorem 2.30.1 For any non-negative random variable $y$,

    Ey = \int_0^{\infty} \Pr(y > u)\, du.

Proof of Theorem 2.30.1: Let $\bar{F}(x) = \Pr(y > x) = 1 - F(x)$, where $F(x)$ is the distribution function. By integration by parts,

    Ey = \int_0^{\infty} y\, dF(y) = -\int_0^{\infty} y\, d\bar{F}(y) = -\left[y\bar{F}(y)\right]_0^{\infty} + \int_0^{\infty} \bar{F}(y)\, dy = \int_0^{\infty} \Pr(y > u)\, du

as stated.

2.31 Existence and Uniqueness of the Conditional Expectation*

In Sections 2.3 and 2.6 we defined the conditional mean when the conditioning variables $x$ are discrete and when the variables $(y, x)$ have a joint density. We have explored these cases because these are the situations where the conditional mean is easiest to describe and understand. However, the conditional mean exists quite generally without appealing to the properties of either discrete or continuous random variables.

To justify this claim we now present a deep result from probability theory. What it says is that the conditional mean exists for all joint distributions $(y, x)$ for which $y$ has a finite mean.

Theorem 2.31.1 Existence of the Conditional Mean
If $E|y| < \infty$ then there exists a function $m(x)$ such that for all measurable sets $\mathcal{X}$

    E(1(x \in \mathcal{X})\, y) = E(1(x \in \mathcal{X})\, m(x)).         (2.60)

The function $m(x)$ is almost everywhere unique, in the sense that if $h(x)$ satisfies (2.60), then there is a set $S$ such that $\Pr(S) = 1$ and $m(x) = h(x)$ for $x \in S$. The function $m(x)$ is called the conditional mean and is written $m(x) = E(y \mid x)$.

See, for example, Ash (1972), Theorem 6.3.3.

The conditional mean $m(x)$ defined by (2.60) specializes to (2.7) when $(y, x)$ have a joint density. The usefulness of definition (2.60) is that Theorem 2.31.1 shows that the conditional mean $m(x)$ exists for all finite-mean distributions. This definition allows $y$ to be discrete or continuous, for $x$ to be scalar or vector-valued, and for the components of $x$ to be discrete or continuously distributed.

2.32 Identification*

A critical and important issue in structural econometric modeling is identification, meaning that a parameter is uniquely determined by the distribution of the observed variables. It is relatively straightforward in the context of the unconditional and conditional mean, but it is worthwhile to introduce and explore the concept at this point for clarity.

Let $F$ denote the distribution of the observed data, for example the distribution of the pair $(y, x)$. Let $\mathcal{F}$ be a collection of distributions $F$. Let $\theta$ be a parameter of interest (for example, the mean $Ey$).

Definition 2.32.1 A parameter $\theta \in \mathbb{R}$ is identified on $\mathcal{F}$ if for all $F \in \mathcal{F}$, there is a uniquely determined value of $\theta$.

Equivalently, $\theta$ is identified if we can write it as a mapping $\theta = g(F)$ on the set $\mathcal{F}$. The restriction to the set $\mathcal{F}$ is important. Most parameters are identified only on a strict subset of the space of all distributions.

Take, for example, the mean $\mu = Ey$. It is uniquely determined if $E|y| < \infty$, so it is clear that $\mu$ is identified for the set $\mathcal{F} = \{F : \int_{-\infty}^{\infty} |y|\, dF(y) < \infty\}$. However, $\mu$ is also well defined when it is either positive or negative infinity. Hence, defining $I_1$ and $I_2$ as in (2.57) and (2.58), we can deduce that $\mu$ is identified on the set $\mathcal{F} = \{F : \{I_1 < \infty\} \cup \{I_2 < \infty\}\}$.

Next, consider the conditional mean. Theorem 2.31.1 demonstrates that $E|y| < \infty$ is a sufficient condition for identification.

Theorem 2.32.1 If $E|y| < \infty$, the conditional mean $m(x) = E(y \mid x)$ is identified almost everywhere.

It might seem as if identification is a generic property of parameters, so long as we exclude degenerate cases. This is true for moments of observed data, but not necessarily for more complicated models. As a case in point, consider the context of censoring. Let $y$ be a random variable with distribution $F$. Instead of observing $y$, we observe $y^*$ defined by the censoring rule

    y^* = y      if y \le \tau
    y^* = \tau   if y > \tau.

That is, $y^*$ is capped at the value $\tau$. A common example is income surveys, where income responses are "top-coded", meaning that incomes above the top code $\tau$ are recorded as equalling the top code. The observed variable $y^*$ has distribution

    F^*(u) = F(u)  for u < \tau
    F^*(u) = 1     for u \ge \tau.

We are interested in features of the distribution $F$, not the censored distribution $F^*$. For example, we are interested in the mean wage $\mu = E(y)$. The difficulty is that we cannot calculate $\mu$ from $F^*$ except in the trivial case where there is no censoring, $\Pr(y \ge \tau) = 0$. Thus the mean $\mu$ is not generically identified from the censored distribution.

A typical solution to the identification problem is to assume a parametric distribution. For example, let $\mathcal{F}$ be the set of normal distributions $y \sim N(\mu, \sigma^2)$. It is possible to show that the parameters $(\mu, \sigma^2)$ are identified for all $F \in \mathcal{F}$. That is, if we know that the uncensored distribution is normal, we can uniquely determine the parameters from the censored distribution. This is often called parametric identification as identification is restricted to a parametric class of distributions. In modern econometrics this is generally viewed as a second-best solution, as identification has been achieved only through the use of an arbitrary and unverifiable parametric assumption.

A pessimistic conclusion might be that it is impossible to identify parameters of interest from censored data without parametric assumptions. Interestingly, this pessimism is unwarranted. It turns out that we can identify the quantiles $q_\alpha$ of $F$ for $\alpha \le \Pr(y \le \tau)$. For example, if 20% of the distribution is censored, we can identify all quantiles for $\alpha \in (0, 0.8)$. This is often called nonparametric identification as the parameters are identified without restriction to a parametric class.

What we have learned from this little exercise is that in the context of censored data, moments can only be parametrically identified, while (non-censored) quantiles are nonparametrically identified. Part of the message is that a study of identification can help focus attention on what can be learned from the data distributions available.

2.33 Technical Proofs*

Proof of Theorem 2.7.1: For convenience, assume that the variables have a joint density $f(y, x)$. Since $E(y \mid x)$ is a function of the random vector $x$ only, to calculate its expectation we integrate with respect to the density $f_x(x)$ of $x$, that is

    E(E(y \mid x)) = \int_{\mathbb{R}^k} E(y \mid x) f_x(x)\, dx.

Substituting in (2.7) and noting that $f_{y|x}(y \mid x) f_x(x) = f(y, x)$, we find that the above expression equals

    \int_{\mathbb{R}^k}\int_{\mathbb{R}} y f_{y|x}(y \mid x) f_x(x)\, dy\, dx = \int_{\mathbb{R}^k}\int_{\mathbb{R}} y f(y, x)\, dy\, dx = E(y)

as stated.

Proof of Theorem 2.7.2: Again assume that the variables have a joint density. It is useful to observe that

    f(y \mid x_1, x_2) f(x_2 \mid x_1) = \frac{f(y, x_1, x_2)}{f(x_1, x_2)}\,\frac{f(x_1, x_2)}{f(x_1)} = f(y, x_2 \mid x_1),   (2.61)

the density of $(y, x_2)$ given $x_1$. Here, we have abused notation and used a single symbol $f$ to denote the various unconditional and conditional densities to reduce notational clutter.

Note that

    E(y \mid x_1, x_2) = \int_{\mathbb{R}} y f(y \mid x_1, x_2)\, dy.      (2.62)

Integrating (2.62) with respect to the conditional density of $x_2$ given $x_1$, and applying (2.61), we find that

    E(E(y \mid x_1, x_2) \mid x_1) = \int_{\mathbb{R}^{k_2}} E(y \mid x_1, x_2) f(x_2 \mid x_1)\, dx_2
        = \int_{\mathbb{R}^{k_2}}\int_{\mathbb{R}} y f(y \mid x_1, x_2) f(x_2 \mid x_1)\, dy\, dx_2
        = \int_{\mathbb{R}^{k_2}}\int_{\mathbb{R}} y f(y, x_2 \mid x_1)\, dy\, dx_2
        = E(y \mid x_1)

as stated.

Proof of Theorem 2.7.3:

    E(g(x) y \mid x) = \int_{\mathbb{R}} g(x) y f_{y|x}(y \mid x)\, dy = g(x)\int_{\mathbb{R}} y f_{y|x}(y \mid x)\, dy = g(x) E(y \mid x).

This is (2.9). The assumption that $E|g(x) y| < \infty$ is required for the first equality to be well-defined. Equation (2.10) follows by applying the Simple Law of Iterated Expectations to (2.9).

Proof of Theorem 2.9.2: The assumption that $Ey^2 < \infty$ implies that all the conditional expectations below exist.

Set $z = E(y \mid x_1, x_2)$. By the conditional Jensen's inequality (B.13),

    (E(z \mid x_1))^2 \le E(z^2 \mid x_1).

Taking unconditional expectations, this implies

    E(E(y \mid x_1))^2 \le E(E(y \mid x_1, x_2))^2.

Similarly,

    (Ey)^2 \le E(E(y \mid x_1))^2 \le E(E(y \mid x_1, x_2))^2.             (2.63)

The variables $y$, $E(y \mid x_1)$ and $E(y \mid x_1, x_2)$ all have the same mean $Ey$, so the inequality (2.63) implies that the variances are ranked monotonically:

    0 \le \mathrm{var}(E(y \mid x_1)) \le \mathrm{var}(E(y \mid x_1, x_2)).   (2.64)

Next, for $\mu = Ey$ observe that

    E(y - E(y \mid x)) = 0

so the decomposition

    y - \mu = (y - E(y \mid x)) + (E(y \mid x) - \mu)

satisfies

    \mathrm{var}(y) = \mathrm{var}(y - E(y \mid x)) + \mathrm{var}(E(y \mid x)).   (2.65)

The monotonicity of the variances of the conditional mean (2.64) applied to the variance decomposition (2.65) implies the reverse monotonicity of the variances of the differences, completing the proof.

Proof of Theorem 2.8.1: Applying Minkowski's Inequality (B.19) to $e = y - m(x)$,

    (E|e|^r)^{1/r} = (E|y - m(x)|^r)^{1/r} \le (E|y|^r)^{1/r} + (E|m(x)|^r)^{1/r} < \infty,

where the two parts on the right-hand side are finite since $E|y|^r < \infty$ by assumption and $E|m(x)|^r < \infty$ by the Conditional Expectation Inequality (B.14). The fact that $(E|e|^r)^{1/r} < \infty$ implies $E|e|^r < \infty$.

Proof of Theorem 2.17.1: For part 1, by the Expectation Inequality (B.15), (A.9) and Assumption 2.17.1,

    \|E(xx')\| \le E\|xx'\| = E\|x\|^2 < \infty.

Similarly, using the Expectation Inequality (B.15), the Cauchy-Schwarz Inequality (B.17) and Assumption 2.17.1,

    \|E(xy)\| \le E\|xy\| = (E\|x\|^2)^{1/2}(Ey^2)^{1/2} < \infty.

Thus the moments $E(xy)$ and $E(xx')$ are finite and well defined.

For part 2, the coefficient $\beta = (E(xx'))^{-1}E(xy)$ is well defined since $(E(xx'))^{-1}$ exists under Assumption 2.17.1.

Part 3 follows from Definition 2.17.1 and part 2.

For part 4, first note that

    Ee^2 = E(y - x'\beta)^2 = Ey^2 - 2E(yx')\beta + \beta'E(xx')\beta = Ey^2 - E(yx')(E(xx'))^{-1}E(xy) \le Ey^2 < \infty.

The first inequality holds because $E(yx')(E(xx'))^{-1}E(xy)$ is a quadratic form and therefore necessarily non-negative. Second, by the Expectation Inequality (B.15), the Cauchy-Schwarz Inequality (B.17) and Assumption 2.17.1,

    \|E(xe)\| \le E\|xe\| = (E\|x\|^2)^{1/2}(Ee^2)^{1/2} < \infty.

It follows that the expectation $E(xe)$ is finite, and is zero by the calculation (2.28).

For part 6, applying Minkowski's Inequality (B.19) to $e = y - x'\beta$,

    (E|e|^r)^{1/r} = (E|y - x'\beta|^r)^{1/r} \le (E|y|^r)^{1/r} + (E|x'\beta|^r)^{1/r} < \infty,

the final inequality by assumption.

Exercises

Exercise 2.1 Find $E(E(E(y \mid x_1, x_2, x_3) \mid x_1, x_2) \mid x_1)$.

Exercise 2.2 If $E(y \mid x) = a + bx$, find $E(yx)$ as a function of moments of $x$.

Exercise 2.3 Prove Theorem 2.8.1.4 using the law of iterated expectations.

Exercise 2.4 Suppose that the random variables $y$ and $x$ only take the values 0 and 1, and have the following joint probability distribution

              y = 0    y = 1
    x = 0      .1       .4
    x = 1      .2       .3

Find $E(y \mid x)$, $E(y^2 \mid x)$ and $\mathrm{var}(y \mid x)$ for $x = 0$ and $x = 1$.

Exercise 2.5 Show that $\sigma^2(x)$ is the best predictor of $e^2$ given $x$:
(a) Write down the mean-squared error of a predictor $h(x)$ for $e^2$.
(b) What does it mean to be predicting $e^2$?
(c) Show that $\sigma^2(x)$ minimizes the mean-squared error and is thus the best predictor.

Exercise 2.6 Use $y = m(x) + e$ to show that

    \mathrm{var}(y) = \mathrm{var}(m(x)) + \sigma^2.

Exercise 2.7 Show that the conditional variance can be written as

    \sigma^2(x) = E(y^2 \mid x) - (E(y \mid x))^2.

Exercise 2.8 Suppose that $y$ is discrete-valued, taking values only on the non-negative integers, and the conditional distribution of $y$ given $x$ is Poisson:

    \Pr(y = j \mid x) = \frac{\exp(-x'\beta)(x'\beta)^j}{j!},   j = 0, 1, 2, \ldots

Compute $E(y \mid x)$ and $\mathrm{var}(y \mid x)$. Does this justify a linear regression model of the form $y = x'\beta + e$?
Hint: If $\Pr(y = j) = \frac{\exp(-\lambda)\lambda^j}{j!}$, then $Ey = \lambda$ and $\mathrm{var}(y) = \lambda$.

Exercise 2.9 Suppose you have two regressors: $x_1$ is binary (takes values 0 and 1) and $x_2$ is categorical with 3 categories $(A, B, C)$. Write $E(y \mid x_1, x_2)$ as a linear regression.

Exercise 2.10 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(e \mid x) = 0$, then $E(x^2 e) = 0$.

Exercise 2.11 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(xe) = 0$, then $E(x^2 e) = 0$.

Exercise 2.12 True or False. If $y = x'\beta + e$ and $E(e \mid x) = 0$, then $e$ is independent of $x$.

Exercise 2.13 True or False. If $y = x'\beta + e$ and $E(xe) = 0$, then $E(e \mid x) = 0$.

Exercise 2.14 True or False. If $y = x'\beta + e$, $E(e \mid x) = 0$, and $E(e^2 \mid x) = \sigma^2$, a constant, then $e$ is independent of $x$.

Exercise 2.15 Consider the intercept-only model $y = \alpha + e$ defined as the best linear predictor. Show that $\alpha = E(y)$.

Exercise 2.16 Compute the coefficients of the best linear predictor $y = \alpha + \beta x + e$. Compute the conditional mean $m(x) = E(y \mid x)$. Are the best linear predictor and conditional mean different?

Exercise 2.17 Let $x$ be a random variable with $\mu = Ex$ and $\sigma^2 = \mathrm{var}(x)$. Define

    g(x \mid \mu, \sigma^2) = \begin{pmatrix} x - \mu \\ (x - \mu)^2 - \sigma^2 \end{pmatrix}.

Show that $E g(x \mid m, s) = 0$ if and only if $m = \mu$ and $s = \sigma^2$.

Exercise 2.18 Suppose that

    x = \begin{pmatrix} 1 \\ x_2 \\ x_3 \end{pmatrix}

where $x_3$ is a linear function of $x_2$.
(a) Show that $Q_{xx} = E(xx')$ is not invertible.
(b) Use a linear transformation of $x$ to find an expression for the best linear predictor of $y$ given $x$. (Be explicit, do not just use the generalized inverse formula.)

Exercise 2.19 Show (2.46)-(2.47), namely that for

    d(\beta) = E(m(x) - x'\beta)^2

then

    \beta = \arg\min_{\beta \in \mathbb{R}^k} d(\beta) = (E(xx'))^{-1}E(x\, m(x)) = (E(xx'))^{-1}E(xy).

Chapter 3

The Algebra of Least Squares

3.1 Introduction

In this chapter we introduce the popular least-squares estimator. Most of the discussion will be algebraic, with questions of distribution and inference deferred to later chapters.

3.2 Random Samples

In Section 2.17 we derived and discussed the best linear predictor of $y$ given $x$ for a pair of random variables $(y, x) \in \mathbb{R} \times \mathbb{R}^k$, and called this the linear projection model. We are now interested in estimating the parameters of this model, in particular the projection coefficient

    \beta = (E(xx'))^{-1} E(xy).

We can estimate $\beta$ from observational data which includes joint measurements on the variables $(y, x)$. For example, supposing we are interested in estimating a wage equation, we would use a dataset with observations on wages (or weekly earnings), education, experience (or age), and demographic characteristics (gender, race, location). One possible dataset is the Current Population Survey (CPS), a survey of U.S. households which includes questions on employment, income, education, and demographic characteristics.

Notationally we wish to emphasize when we are discussing observations. Typically in econometrics we denote observations by appending a subscript $i$ which runs from 1 to $n$; thus the $i$'th observation is $(y_i, x_i)$, and $n$ denotes the sample size. The dataset is then $\{(y_i, x_i);\ i = 1, \ldots, n\}$.

From the viewpoint of empirical analysis, a dataset is an array of numbers often organized as a table, where the columns of the table correspond to distinct variables and the rows correspond to distinct observations. For empirical analysis, the dataset and observations are fixed in the sense that they are numbers presented to the researcher. For statistical analysis we need to view the dataset as random, or more precisely as a realization of a random process. For cross-sectional studies, the most common approach is to treat the individual observations as independent draws from an underlying population $F$. When the observations are realizations of independent and identically distributed random variables, we say that the data is a random sample.

Assumption 3.2.1 The observations $\{(y_1, x_1), \ldots, (y_i, x_i), \ldots, (y_n, x_n)\}$ are a random sample.

With a random sample, the ordering of the data is irrelevant. There is nothing special about any specific observation or ordering. You can permute the order of the observations and no information is gained or lost.

As most economic data sets are not literally the result of a random experiment, the random sampling framework is best viewed as an approximation rather than being literally true.

The linear projection model applies to the random observations $(y_i, x_i)$. This means that the probability model for the observations is the same as that described in Section 2.17. We can write the model as

    y_i = x_i'\beta + e_i                                                 (3.1)

where the linear projection coefficient $\beta$ is defined as

    \beta = \arg\min_{\beta \in \mathbb{R}^k} S(\beta),                   (3.2)

the minimizer of the expected squared error

    S(\beta) = E(y_i - x_i'\beta)^2,                                      (3.3)

and has the explicit solution

    \beta = (E(x_i x_i'))^{-1} E(x_i y_i).                                (3.4)

3.3 Least Squares Estimator

A natural approach to estimation is to construct an empirical analog of the function, and define the estimator of the parameter as the minimizer of the empirical function.

The empirical analog of the expected squared error (3.3) is the sample average squared error

    S_n(\beta) = \frac{1}{n}\sum_{i=1}^{n} (y_i - x_i'\beta)^2 = \frac{1}{n} SSE_n(\beta)   (3.5)

where

    SSE_n(\beta) = \sum_{i=1}^{n} (y_i - x_i'\beta)^2

is the sum-of-squared-errors function.

An estimator for $\beta$ is the minimizer of (3.5):

    \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^k} S_n(\beta).

Alternatively, as $S_n(\beta)$ is a scale multiple of $SSE_n(\beta)$, we may equivalently define $\hat{\beta}$ as the minimizer of $SSE_n(\beta)$. Hence $\hat{\beta}$ is commonly called the least-squares (LS) (or ordinary least squares (OLS)) estimator of $\beta$. Here, as is common in econometrics, we put a hat "^" over the parameter $\beta$ to indicate that $\hat{\beta}$ is a sample estimate of $\beta$. This is a helpful convention, as just by seeing the symbol $\hat{\beta}$ we can immediately interpret it as an estimator (because of the hat), and as an estimator of a parameter labelled $\beta$. Sometimes when we want to be explicit about the estimation method, we will write $\hat{\beta}_{ols}$ to signify that it is the OLS estimator. It is also common to see the notation $\hat{\beta}_n$, where the subscript $n$ indicates that the estimator depends on the sample size $n$.

It is important to understand the distinction between population parameters such as $\beta$ and sample estimates such as $\hat{\beta}$. The population parameter $\beta$ is a non-random feature of the population, while the sample estimate $\hat{\beta}$ is a random feature of a random sample. $\beta$ is fixed, while $\hat{\beta}$ varies across samples.

To visualize the quadratic function $S_n(\beta)$, Figure 3.1 displays an example sum-of-squared-errors function $SSE_n(\beta)$ for the case $k = 2$. The least-squares estimator $\hat{\beta}$ is the pair $(\hat{\beta}_1, \hat{\beta}_2)$ minimizing this function.

3.4 Solving for Least Squares with One Regressor

For simplicity, we start by considering the case $k = 1$ so that the coefficient $\beta$ is a scalar. Then the sum of squared errors is a simple quadratic

    SSE_n(\beta) = \sum_{i=1}^{n} (y_i - x_i\beta)^2
                 = \sum_{i=1}^{n} y_i^2 - 2\beta\sum_{i=1}^{n} x_i y_i + \beta^2\sum_{i=1}^{n} x_i^2.

The OLS estimator $\hat{\beta}$ minimizes this function. From elementary algebra we know that the minimizer of the quadratic function $a - 2bx + cx^2$ is $x = b/c$. Thus the minimizer of $SSE_n(\beta)$ is

    \hat{\beta} = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}.     (3.6)

The intercept-only model is the special case $x_i = 1$. In this case we find

    \hat{\beta} = \frac{\sum_{i=1}^{n} 1 \cdot y_i}{\sum_{i=1}^{n} 1^2} = \frac{1}{n}\sum_{i=1}^{n} y_i = \bar{y},   (3.7)

the sample mean of $y_i$. Here, as is common, we put a bar over $y$ to indicate that the quantity is a sample mean. This calculation shows that the OLS estimator in the intercept-only model is the sample mean.
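A short numerical sketch (not from the text; the data are simulated) of formulas (3.6) and (3.7): the slope estimate is the ratio of sample moments, and with a regressor identically equal to one the estimate reduces to the sample mean.

```python
# Minimal sketch: one-regressor least squares (3.6) and the intercept-only case (3.7).
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)

b_slope = np.sum(x * y) / np.sum(x**2)      # equation (3.6), no intercept
b_mean = np.sum(np.ones(n) * y) / n         # equation (3.7): x_i = 1 gives the sample mean

print(b_slope)                # roughly 1.5
print(b_mean, y.mean())       # identical
```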

3.5 Solving for Least Squares with Multiple Regressors

We now consider the case $k \ge 1$ so that the coefficient $\beta$ is a vector.

To solve for $\hat{\beta}$, expand the SSE function to find

    SSE_n(\beta) = \sum_{i=1}^{n} y_i^2 - 2\beta'\sum_{i=1}^{n} x_i y_i + \beta'\left(\sum_{i=1}^{n} x_i x_i'\right)\beta.

This is a quadratic expression in the vector argument $\beta$. The first-order condition for minimization of $SSE_n(\beta)$ is

    0 = \frac{\partial}{\partial\beta} SSE_n(\hat{\beta}) = -2\sum_{i=1}^{n} x_i y_i + 2\sum_{i=1}^{n} x_i x_i'\hat{\beta}.   (3.8)

We have written this using a single expression, but it is actually a system of $k$ equations with $k$ unknowns (the elements of $\hat{\beta}$).

The solution for $\hat{\beta}$ may be found by solving the system of $k$ equations in (3.8). We can write this solution compactly using matrix algebra. Inverting the $k \times k$ matrix $\sum_{i=1}^{n} x_i x_i'$ we find an explicit formula for the least-squares estimator

    \hat{\beta} = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right).   (3.9)

This is the natural estimator of the best linear projection coefficient $\beta$ defined in (3.2), and can also be called the linear projection estimator.

We see that (3.9) simplifies to the expression (3.6) when $k = 1$. The expression (3.9) is a notationally simple generalization but requires a careful attention to vector and matrix manipulations.

Alternatively, equation (3.4) writes the projection coefficient $\beta$ as an explicit function of the population moments $Q_{xy}$ and $Q_{xx}$. Their moment estimators are the sample moments

    \hat{Q}_{xy} = \frac{1}{n}\sum_{i=1}^{n} x_i y_i
    \hat{Q}_{xx} = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'.

The corresponding moment estimator of $\beta$ is then

    \hat{\beta} = \hat{Q}_{xx}^{-1}\hat{Q}_{xy} = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i y_i\right) = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right),

which is identical with (3.9).

Definition 3.5.1 The least-squares estimator $\hat{\beta}$ is

    \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^k} S_n(\beta)

where

    S_n(\beta) = \frac{1}{n}\sum_{i=1}^{n} (y_i - x_i'\beta)^2

and has the solution

    \hat{\beta} = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right).
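The following sketch (not part of the text; simulated data) computes (3.9) directly. Using a linear solve rather than an explicit matrix inverse is a standard, numerically preferable way to evaluate the same formula.

```python
# Minimal sketch: least-squares estimator (3.9) with k = 3 regressors.
import numpy as np

rng = np.random.default_rng(7)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=n)

XtX = X.T @ X            # sum of x_i x_i'
Xty = X.T @ y            # sum of x_i y_i
beta_hat = np.linalg.solve(XtX, Xty)   # solves (X'X) b = X'y, same as (3.9)

print(beta_hat)          # approximately (1.0, 2.0, -0.5)
```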


Adrien-Marie Legendre

The method of least-squares was first published in 1805 by the French mathematician Adrien-Marie Legendre (1752-1833). Legendre proposed least-squares as a solution to the algebraic problem of solving a system of equations when the number of equations exceeded the number of unknowns. This was a vexing and common problem in astronomical measurement. As viewed by Legendre, (3.1) is a set of $n$ equations with $k$ unknowns. As the equations cannot be solved exactly, Legendre's goal was to select $\beta$ to make the set of errors as small as possible. He proposed the sum of squared error criterion, and derived the algebraic solution presented above. As he noted, the first-order conditions (3.8) is a system of $k$ equations with $k$ unknowns, which can be solved by "ordinary" methods. Hence the method became known as Ordinary Least Squares and to this day we still use the abbreviation OLS to refer to Legendre's estimation method.

3.6 Illustration

We illustrate the least-squares estimator in practice with the data set used to generate the estimates from Chapter 2. This is the March 2009 Current Population Survey, which has extensive information on the U.S. population. This data set is described in more detail in Section ?. For this illustration, we use the sub-sample of non-white married non-military female wage earners with 12 years potential work experience. This sub-sample has 61 observations. Let $y_i$ be log wages and $x_i$ be an intercept and years of education. Then

    \frac{1}{n}\sum_{i=1}^{n} x_i y_i = \begin{pmatrix} 3.025 \\ 47.447 \end{pmatrix}

and

    \frac{1}{n}\sum_{i=1}^{n} x_i x_i' = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}.

Thus

    \hat{\beta} = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1}\begin{pmatrix} 3.025 \\ 47.447 \end{pmatrix} = \begin{pmatrix} 0.626 \\ 0.156 \end{pmatrix}.   (3.10)

We often write the estimated equation using the format

    \widehat{\log(Wage)} = 0.626 + 0.156\ education.                      (3.11)

An interpretation of the estimated equation is that each year of education is associated with a 16% increase in mean wages.

Equation (3.11) is called a bivariate regression as there are only two variables. A multivariate regression has two or more regressors, and allows a more detailed investigation. Let's redo the example, but now including all levels of experience. This expanded sample includes 2454 observations. Including as regressors years of experience and its square ($experience^2/100$, where we divide by 100 to simplify reporting), we obtain the estimates

    \widehat{\log(Wage)} = 1.06 + 0.116\ education + 0.010\ experience + \cdots   (3.12)

These estimates suggest a 12% increase in mean wages per year of education, holding experience constant.

3.7 Least Squares Residuals

As a by-product of estimation, we define the fitted value

    \hat{y}_i = x_i'\hat{\beta}

and the residual

    \hat{e}_i = y_i - \hat{y}_i = y_i - x_i'\hat{\beta}.                  (3.13)

Sometimes $\hat{y}_i$ is called the predicted value, but this is a misleading label. The fitted value $\hat{y}_i$ is a function of the entire sample, including $y_i$, and thus cannot be interpreted as a valid prediction of $y_i$. It is thus more accurate to describe $\hat{y}_i$ as a fitted value rather than a predicted value.

Note that $y_i = \hat{y}_i + \hat{e}_i$ and

    y_i = x_i'\hat{\beta} + \hat{e}_i.                                    (3.14)

We make a distinction between the error $e_i$ and the residual $\hat{e}_i$. The error $e_i$ is unobservable while the residual $\hat{e}_i$ is a by-product of estimation. These two variables are frequently mislabeled, which can cause confusion.

Equation (3.8) implies that

    \sum_{i=1}^{n} x_i\hat{e}_i = 0.                                      (3.15)

To see this by a direct calculation, using (3.13) and (3.9),

    \sum_{i=1}^{n} x_i\hat{e}_i = \sum_{i=1}^{n} x_i(y_i - x_i'\hat{\beta})
        = \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i x_i'\hat{\beta}
        = \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i x_i'\left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\sum_{i=1}^{n} x_i y_i
        = \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i y_i = 0.

When $x_i$ contains a constant, an implication of (3.15) is

    \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i = 0.                               (3.16)

Thus the residuals have a sample mean of zero and the sample correlation between the regressors and the residual is zero. These are algebraic results, and hold true for all linear regression estimates.
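A quick numerical check (a sketch, not part of the text): with simulated data the sample moments $\sum_i x_i\hat{e}_i$ and the residual mean are zero to machine precision, as (3.15) and (3.16) require.

```python
# Minimal sketch: algebraic properties (3.15)-(3.16) of least-squares residuals.
import numpy as np

rng = np.random.default_rng(8)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.exponential(size=n)])
y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat

print(X.T @ e_hat)      # ~0 for every regressor, equation (3.15)
print(e_hat.mean())     # ~0 because X includes a constant, equation (3.16)
```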

3.8 Model in Matrix Notation

For many purposes, including computation, it is convenient to write the model and statistics in matrix notation. The linear equation (2.26) is a system of $n$ equations, one for each observation. We can stack these $n$ equations together as

    y_1 = x_1'\beta + e_1
    y_2 = x_2'\beta + e_2
    \vdots
    y_n = x_n'\beta + e_n.

Now define

    y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix},\quad
    X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix},\quad
    e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}.

Observe that $y$ and $e$ are $n \times 1$ vectors and $X$ is an $n \times k$ matrix. The system of $n$ equations can be compactly written in the single equation

    y = X\beta + e.                                                       (3.17)

Sample sums can also be written in matrix notation. For example,

    \sum_{i=1}^{n} x_i x_i' = X'X,\qquad \sum_{i=1}^{n} x_i y_i = X'y.

Therefore the least-squares estimator can be written as

    \hat{\beta} = (X'X)^{-1}(X'y).                                        (3.18)

The matrix version of (3.14) is

    y = X\hat{\beta} + \hat{e},

so the residual vector is

    \hat{e} = y - X\hat{\beta}.                                           (3.19)

Using the residual vector, we can write (3.15) as

    X'\hat{e} = 0.                                                        (3.20)

Using matrix notation we have simple expressions for most estimators. This is particularly convenient for computer programming, as most languages allow matrix notation and manipulation.

In summary, the key matrix expressions are

    y = X\beta + e
    \hat{\beta} = (X'X)^{-1}(X'y)
    \hat{e} = y - X\hat{\beta}
    X'\hat{e} = 0.

The earliest known treatment of the use of matrix methods

to solve simultaneous systems is found in Chapter 8 of the

Chinese text The Nine Chapters on the Mathematical Art,

written by several generations of scholars from the 10th to

2nd century BCE.

3.9 Projection Matrix

Define the matrix

    P = X(X'X)^{-1}X'.

Observe that

    PX = X(X'X)^{-1}X'X = X.

This is a property of a projection matrix. More generally, for any matrix $Z$ which can be written as $Z = X\Gamma$ for some matrix $\Gamma$ (we say that $Z$ lies in the range space of $X$), then

    PZ = PX\Gamma = X(X'X)^{-1}X'X\Gamma = X\Gamma = Z.

As an important example, if we partition the matrix $X$ into two matrices $X_1$ and $X_2$ so that $X = [X_1\ \ X_2]$, then $PX_1 = X_1$, since $X_1$ lies in the range space of $X$.

The matrix $P$ is symmetric and idempotent¹. To see that it is symmetric,

    P' = \left(X(X'X)^{-1}X'\right)' = (X')'\left((X'X)^{-1}\right)'(X)' = X\left((X'X)'\right)^{-1}X' = X(X'X)^{-1}X' = P.

To establish that it is idempotent, the fact that $PX = X$ implies that

    PP = PX(X'X)^{-1}X' = X(X'X)^{-1}X' = P.

The matrix $P$ has the property that it creates the fitted values in a least-squares regression:

    Py = X(X'X)^{-1}X'y = X\hat{\beta} = \hat{y}.

A special example of a projection matrix occurs when $X = 1$ is an $n$-vector of ones. Then

    P_1 = 1(1'1)^{-1}1' = \frac{1}{n}11'.

Note that

    P_1 y = 1(1'1)^{-1}1'y = 1\bar{y}

creates an $n$-vector whose elements are the sample mean $\bar{y}$ of $y_i$.

The $i$'th diagonal element of $P = X(X'X)^{-1}X'$ is

    h_{ii} = x_i'(X'X)^{-1}x_i.                                           (3.21)

Some useful properties of the matrix $P$ and the leverage values $h_{ii}$ are now summarized.

Theorem 3.9.1

    \sum_{i=1}^{n} h_{ii} = \mathrm{tr}\, P = k                           (3.22)

and

    0 \le h_{ii} \le 1.                                                   (3.23)

To show (3.22),

    \mathrm{tr}\, P = \mathrm{tr}\left(X(X'X)^{-1}X'\right) = \mathrm{tr}\left((X'X)^{-1}X'X\right) = \mathrm{tr}(I_k) = k.

See Appendix A.4 for definition and properties of the trace operator. The proof of (3.23) is deferred to Section 3.18.

¹ A matrix $P$ is symmetric if $P' = P$ and idempotent if $PP = P$.
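A small numerical sketch (not from the text) of the projection matrix and Theorem 3.9.1: $P$ reproduces the fitted values, is symmetric and idempotent, its trace equals $k$, and every leverage value lies in $[0, 1]$.

```python
# Minimal sketch: projection matrix P = X (X'X)^{-1} X' and leverage values.
import numpy as np

rng = np.random.default_rng(9)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

P = X @ np.linalg.solve(X.T @ X, X.T)      # n x n projection matrix
h = np.diag(P)                             # leverage values h_ii

print(np.allclose(P @ X, X))                          # True:  P X = X
print(np.allclose(P, P.T), np.allclose(P @ P, P))     # True, True: symmetric, idempotent
print(h.sum(), k)                          # trace equals k, equation (3.22)
print(h.min() >= 0, h.max() <= 1)          # bounds in (3.23)
```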

3.10 Orthogonal Projection

Define

    M = I_n - P = I_n - X(X'X)^{-1}X'

where $I_n$ is the $n \times n$ identity matrix. Note that

    MX = (I_n - P)X = X - PX = X - X = 0.

We call $M$ an annihilator matrix due to the property that for any matrix $Z$ in the range space of $X$ then

    MZ = Z - PZ = 0.

The orthogonal projection matrix $M$ has many similar properties with $P$, including that $M$ is symmetric ($M' = M$) and idempotent ($MM = M$). Similarly to (3.22) we can calculate

    \mathrm{tr}\, M = n - k.                                              (3.24)

(See Exercise 3.9.) While $P$ creates fitted values, $M$ creates least-squares residuals:

    My = y - Py = y - X\hat{\beta} = \hat{e}.                             (3.25)

As discussed in the previous section, a special example of a projection matrix occurs when $X = 1$ is an $n$-vector of ones, so that $P_1 = 1(1'1)^{-1}1'$. Similarly, set

    M_1 = I_n - P_1 = I_n - 1(1'1)^{-1}1'.

While $P_1$ creates a vector of sample means, $M_1$ creates demeaned values:

    M_1 y = y - 1\bar{y}.

For simplicity we will often write the right-hand side as $y - \bar{y}$. The $i$'th element is $y_i - \bar{y}$, the demeaned value of $y_i$.

We can also use (3.25) to write an alternative expression for the residual vector. Substituting $y = X\beta + e$ into $\hat{e} = My$ and using $MX = 0$ we find

    \hat{e} = My = M(X\beta + e) = Me,                                    (3.26)

which is free of dependence on the regression coefficient $\beta$.

3.11 Estimation of Error Variance

The error variance $\sigma^2 = Ee_i^2$ is also a parameter of interest. If the errors $e_i$ were observed we would estimate $\sigma^2$ by the moment estimator

    \tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} e_i^2.                   (3.27)

However, this is infeasible as $e_i$ is not observed. In this case it is common to take a two-step approach to estimation. The residuals $\hat{e}_i$ are calculated in the first step, and then we substitute $\hat{e}_i$ for $e_i$ in expression (3.27) to obtain the feasible estimator

    \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} \hat{e}_i^2.               (3.28)

In matrix notation, we can write (3.27) and (3.28) as

    \tilde{\sigma}^2 = n^{-1} e'e     and     \hat{\sigma}^2 = n^{-1}\hat{e}'\hat{e}.   (3.29)

Recall the expressions $\hat{e} = My = Me$ from (3.25) and (3.26). Applied to (3.29) we find

    \hat{\sigma}^2 = n^{-1}\hat{e}'\hat{e} = n^{-1} y'MMy = n^{-1} y'My = n^{-1} e'Me.

An interesting implication is that

    \tilde{\sigma}^2 - \hat{\sigma}^2 = n^{-1} e'e - n^{-1} e'Me = n^{-1} e'Pe \ge 0.

The final inequality holds because $P$ is positive semi-definite and $e'Pe$ is a quadratic form. This shows that the feasible estimator $\hat{\sigma}^2$ is numerically smaller than the idealized estimator (3.27).

3.12 Analysis of Variance

Another way of writing (3.25) is

    y = Py + My = \hat{y} + \hat{e}.                                      (3.30)

This decomposition is orthogonal, that is,

    \hat{y}'\hat{e} = (Py)'(My) = y'PMy = 0,

since $PM = 0$. It follows that

    y'y = \hat{y}'\hat{y} + 2\hat{y}'\hat{e} + \hat{e}'\hat{e} = \hat{y}'\hat{y} + \hat{e}'\hat{e}

or

    \sum_{i=1}^{n} y_i^2 = \sum_{i=1}^{n}\hat{y}_i^2 + \sum_{i=1}^{n}\hat{e}_i^2.

A similar decomposition holds in deviations from means. Since

    y - 1\bar{y} = \hat{y} - 1\bar{y} + \hat{e}

and

    (\hat{y} - 1\bar{y})'\hat{e} = \hat{y}'\hat{e} - \bar{y}1'\hat{e} = 0,

we find

    (y - 1\bar{y})'(y - 1\bar{y}) = (\hat{y} - 1\bar{y})'(\hat{y} - 1\bar{y}) + \hat{e}'\hat{e}

or

    \sum_{i=1}^{n}(y_i - \bar{y})^2 = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n}\hat{e}_i^2.

This is commonly called the analysis-of-variance formula for least squares regression.

A commonly reported statistic is the coefficient of determination or R-squared:

    R^2 = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\sum_{i=1}^{n}\hat{e}_i^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}.

It is often described as the fraction of the sample variance of $y_i$ which is "explained" by the least-squares fit. $R^2$ is a crude measure of regression fit. We have better measures of fit, but these require a statistical (not just algebraic) analysis and we will return to these issues later. One difficulty with $R^2$ is that it increases when regressors are added to a regression (see Exercise 3.16).
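A brief numerical sketch (not part of the text; simulated data): the analysis-of-variance identity holds exactly, and the two expressions for $R^2$ agree.

```python
# Minimal sketch: analysis-of-variance decomposition and R-squared.
import numpy as np

rng = np.random.default_rng(10)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.8, -0.3]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_fit = X @ beta_hat
e_hat = y - y_fit

tss = np.sum((y - y.mean())**2)        # total sum of squares
ess = np.sum((y_fit - y.mean())**2)    # explained sum of squares
rss = np.sum(e_hat**2)                 # residual sum of squares

print(np.isclose(tss, ess + rss))      # analysis-of-variance identity
print(ess / tss, 1 - rss / tss)        # the two forms of R-squared agree
```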

3.13 Regression Components

Partition

    X = [X_1\ \ X_2]

and

    \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.

Then the regression model can be written as

    y = X_1\beta_1 + X_2\beta_2 + e.                                      (3.31)

The OLS estimator of $\beta = (\beta_1', \beta_2')'$ is obtained by regression of $y$ on $X = [X_1\ X_2]$ and can be written as

    y = X\hat{\beta} + \hat{e} = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}.   (3.32)

We are interested in algebraic expressions for $\hat{\beta}_1$ and $\hat{\beta}_2$.

The algebra for the estimator is identical as that for the population coefficients as presented in Section 2.20.

Partition $\hat{Q}_{xx}$ and $\hat{Q}_{xy}$ as

    \hat{Q}_{xx} = \begin{bmatrix} \hat{Q}_{11} & \hat{Q}_{12} \\ \hat{Q}_{21} & \hat{Q}_{22} \end{bmatrix}
                 = \begin{bmatrix} \frac{1}{n}X_1'X_1 & \frac{1}{n}X_1'X_2 \\ \frac{1}{n}X_2'X_1 & \frac{1}{n}X_2'X_2 \end{bmatrix}

and similarly

    \hat{Q}_{xy} = \begin{bmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{bmatrix}
                 = \begin{bmatrix} \frac{1}{n}X_1'y \\ \frac{1}{n}X_2'y \end{bmatrix}.

By the partitioned matrix inversion formula (A.4),

    \hat{Q}_{xx}^{-1} = \begin{bmatrix} \hat{Q}_{11} & \hat{Q}_{12} \\ \hat{Q}_{21} & \hat{Q}_{22} \end{bmatrix}^{-1}
      \overset{def}{=} \begin{bmatrix} \hat{Q}^{11} & \hat{Q}^{12} \\ \hat{Q}^{21} & \hat{Q}^{22} \end{bmatrix}
      = \begin{bmatrix} \hat{Q}_{11\cdot 2}^{-1} & -\hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot 1}^{-1} \end{bmatrix}   (3.33)

where $\hat{Q}_{11\cdot 2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21}$ and $\hat{Q}_{22\cdot 1} = \hat{Q}_{22} - \hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{12}$.

Thus

    \hat{\beta} = \begin{bmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{bmatrix}
      = \begin{bmatrix} \hat{Q}_{11\cdot 2}^{-1} & -\hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot 1}^{-1} \end{bmatrix}
        \begin{bmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{bmatrix}
      = \begin{bmatrix} \hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{1y\cdot 2} \\ \hat{Q}_{22\cdot 1}^{-1}\hat{Q}_{2y\cdot 1} \end{bmatrix}

where $\hat{Q}_{1y\cdot 2} = \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{2y}$ and $\hat{Q}_{2y\cdot 1} = \hat{Q}_{2y} - \hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{1y}$.

Now

    \hat{Q}_{11\cdot 2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21}
      = \frac{1}{n}X_1'X_1 - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'X_1
      = \frac{1}{n}X_1'M_2X_1

where

    M_2 = I_n - X_2(X_2'X_2)^{-1}X_2'

is the orthogonal projection matrix for $X_2$. Similarly $\hat{Q}_{22\cdot 1} = \frac{1}{n}X_2'M_1X_2$ where

    M_1 = I_n - X_1(X_1'X_1)^{-1}X_1'

is the orthogonal projection matrix for $X_1$. Also

    \hat{Q}_{1y\cdot 2} = \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{2y}
      = \frac{1}{n}X_1'y - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'y
      = \frac{1}{n}X_1'M_2y

and $\hat{Q}_{2y\cdot 1} = \frac{1}{n}X_2'M_1y$.

Therefore

    \hat{\beta}_1 = (X_1'M_2X_1)^{-1}(X_1'M_2y)                           (3.34)

and

    \hat{\beta}_2 = (X_2'M_1X_2)^{-1}(X_2'M_1y).                          (3.35)

These are algebraic expressions for the sub-coefficient estimates from (3.32).

3.14 Residual Regression

As first recognized by Frisch and Waugh (1933), expressions (3.34) and (3.35) can be used to show that the least-squares estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ can be found by a two-step regression procedure.

Take (3.35). Since $M_1$ is idempotent, $M_1 = M_1M_1$ and thus

    \hat{\beta}_2 = (X_2'M_1X_2)^{-1}(X_2'M_1y)
                  = (X_2'M_1M_1X_2)^{-1}(X_2'M_1M_1y)
                  = (\tilde{X}_2'\tilde{X}_2)^{-1}(\tilde{X}_2'\tilde{e}_1)

where

    \tilde{X}_2 = M_1X_2

and

    \tilde{e}_1 = M_1y.

Thus the coefficient estimate $\hat{\beta}_2$ is algebraically equal to the least-squares regression of $\tilde{e}_1$ on $\tilde{X}_2$. Notice that these two are $y$ and $X_2$, respectively, premultiplied by $M_1$. But we know that multiplication by $M_1$ is equivalent to creating least-squares residuals. Therefore $\tilde{e}_1$ is simply the least-squares residual from a regression of $y$ on $X_1$, and the columns of $\tilde{X}_2$ are the least-squares residuals from the regressions of the columns of $X_2$ on $X_1$.

We have proven the following theorem.

Theorem 3.14.1 (Frisch-Waugh-Lovell)
In the model (3.31), the OLS estimator of $\beta_2$ and the OLS residuals $\hat{e}$ may be equivalently computed by either the OLS regression (3.32) or via the following algorithm:
1. Regress $y$ on $X_1$, obtain residuals $\tilde{e}_1$;
2. Regress $X_2$ on $X_1$, obtain residuals $\tilde{X}_2$;
3. Regress $\tilde{e}_1$ on $\tilde{X}_2$, obtain OLS estimates $\hat{\beta}_2$ and residuals $\hat{e}$.

In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm.

This result is a direct analogy of the coefficient representation obtained in Section 2.21. The result obtained in that section concerned the population projection coefficients; the result obtained here concerns the least-squares estimates. The key message is the same. In the least-squares regression (3.32), the estimated coefficient $\hat{\beta}_2$ numerically equals the regression of $y$ on the regressors $X_2$, only after the regressors $X_1$ have been linearly projected out. Similarly, the coefficient estimate $\hat{\beta}_1$ numerically equals the regression of $y$ on the regressors $X_1$, after the regressors $X_2$ have been linearly projected out. This result can be very insightful when interpreting regression coefficients.

A common application of the FWL theorem, which you may have seen in an introductory econometrics course, is the demeaning formula for regression. Partition $X = [X_1\ X_2]$ where $X_1 = 1$ is a vector of ones and $X_2$ is a matrix of observed regressors. In this case,

    M_1 = I_n - 1(1'1)^{-1}1'.

Observe that

    \tilde{X}_2 = M_1X_2 = X_2 - \bar{X}_2

and

    \tilde{y} = M_1y = y - \bar{y}

are the "demeaned" variables. The FWL theorem says that $\hat{\beta}_2$ is the OLS estimate from a regression of $y_i - \bar{y}$ on $x_{2i} - \bar{x}_2$:

    \hat{\beta}_2 = \left(\sum_{i=1}^{n}(x_{2i} - \bar{x}_2)(x_{2i} - \bar{x}_2)'\right)^{-1}\left(\sum_{i=1}^{n}(x_{2i} - \bar{x}_2)(y_i - \bar{y})\right).

Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
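The following sketch (not part of the text; simulated data) verifies the FWL result numerically: the coefficients on $X_2$ from the full regression equal the coefficients from regressing the $X_1$-residualized $y$ on the $X_1$-residualized $X_2$.

```python
# Minimal sketch: Frisch-Waugh-Lovell residual regression.
import numpy as np

rng = np.random.default_rng(11)
n = 300
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2)) + 0.5 * X1[:, [1]]        # correlated with X1
y = X1 @ np.array([1.0, -1.0]) + X2 @ np.array([2.0, 0.5]) + rng.normal(size=n)

# Full regression of y on [X1, X2]
X = np.hstack([X1, X2])
beta_full = np.linalg.solve(X.T @ X, X.T @ y)[X1.shape[1]:]   # coefficients on X2

# Two-step: residualize y and X2 on X1, then regress residuals on residuals
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
e1, X2t = M1 @ y, M1 @ X2
beta_fwl = np.linalg.solve(X2t.T @ X2t, X2t.T @ e1)

print(beta_full, beta_fwl)      # identical up to rounding error
```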

Ragnar Frisch

Ragnar Frisch (1895-1973) was co-winner with Jan Tinbergen of the first Nobel Memorial Prize in Economic Sciences in 1969 for their work in developing and applying dynamic models for the analysis of economic problems. Frisch made a number of foundational contributions to modern economics beyond the Frisch-Waugh-Lovell Theorem, including formalizing consumer theory, production theory, and business cycle theory.

3.15

67

Prediction Errors

The least-squares residuals $\hat{e}_i$ are not true prediction errors, as they are constructed based on the full sample including $y_i$. A proper prediction for $y_i$ should be based on estimates constructed using only the other observations. We can do this by defining the leave-one-out OLS estimator of $\beta$ as that obtained from the sample of $n-1$ observations excluding the $i$'th observation:
$$\hat{\beta}_{(-i)} = \left( \frac{1}{n-1} \sum_{j \neq i} x_j x_j' \right)^{-1} \left( \frac{1}{n-1} \sum_{j \neq i} x_j y_j \right) = \left( X_{(-i)}' X_{(-i)} \right)^{-1} X_{(-i)}' y_{(-i)}. \qquad (3.36)$$
Here, $X_{(-i)}$ and $y_{(-i)}$ are the data matrices omitting the $i$'th row. The leave-one-out predicted value for $y_i$ is
$$\tilde{y}_i = x_i' \hat{\beta}_{(-i)},$$
and the leave-one-out residual or prediction error is
$$\tilde{e}_i = y_i - \tilde{y}_i.$$
A convenient alternative expression for $\hat{\beta}_{(-i)}$ (derived in Section 3.18) is
$$\hat{\beta}_{(-i)} = \hat{\beta} - (1 - h_{ii})^{-1} (X'X)^{-1} x_i \hat{e}_i \qquad (3.37)$$
where $h_{ii}$ are the leverage values defined in (3.21).

Using (3.37) we can simplify the expression for the prediction error:
$$\tilde{e}_i = y_i - x_i' \hat{\beta}_{(-i)} = y_i - x_i'\hat{\beta} + (1 - h_{ii})^{-1} x_i'(X'X)^{-1}x_i \hat{e}_i = \hat{e}_i + (1 - h_{ii})^{-1} h_{ii} \hat{e}_i = (1 - h_{ii})^{-1} \hat{e}_i. \qquad (3.38)$$
To write this in vector notation, define
$$M^* = \left( I_n - \operatorname{diag}\{h_{11}, \ldots, h_{nn}\} \right)^{-1} = \operatorname{diag}\left\{ (1 - h_{11})^{-1}, \ldots, (1 - h_{nn})^{-1} \right\}. \qquad (3.39)$$
Then (3.38) is equivalent to
$$\tilde{e} = M^* \hat{e}. \qquad (3.40)$$
A convenient feature of this expression is that it shows that computation of the full vector of prediction errors $\tilde{e}$ is based on a simple linear operation, and does not really require $n$ separate estimations.

One use of the prediction errors is to estimate the out-of-sample mean squared error
$$\tilde{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n \tilde{e}_i^2 = \frac{1}{n} \sum_{i=1}^n (1 - h_{ii})^{-2} \hat{e}_i^2. \qquad (3.41)$$
This is also known as the sample mean squared prediction error. Its square root $\tilde{\sigma} = \sqrt{\tilde{\sigma}^2}$ is the prediction standard error.
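The following NumPy lines (a sketch continuing the hypothetical y, X, beta_full from the earlier example) illustrate how the full vector of prediction errors and $\tilde{\sigma}^2$ can be computed from a single regression using the leverage values, with no loop over observations:

    # Leverage values: diagonal of P = X (X'X)^{-1} X'
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # h_ii for each observation

    e_hat = y - X @ beta_full                     # least-squares residuals
    e_tilde = e_hat / (1.0 - h)                   # prediction errors, equation (3.40)
    sigma2_tilde = np.mean(e_tilde**2)            # equation (3.41)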

3.16 Influential Observations

Another use of the leave-one-out estimator is to investigate the impact of influential observations, sometimes called outliers. We say that observation $i$ is influential if its omission from the sample induces a substantial change in a parameter of interest. From (3.37)-(3.38) we know that
$$\hat{\beta} - \hat{\beta}_{(-i)} = (1 - h_{ii})^{-1} (X'X)^{-1} x_i \hat{e}_i = (X'X)^{-1} x_i \tilde{e}_i. \qquad (3.42)$$
By direct calculation of this quantity for each observation $i$, we can directly discover if a specific observation $i$ is influential for a coefficient estimate of interest.

For a general assessment, we can focus on the predicted values. The difference between the full-sample and leave-one-out predicted values is
$$\hat{y}_i - \tilde{y}_i = x_i'\hat{\beta} - x_i'\hat{\beta}_{(-i)} = x_i'(X'X)^{-1}x_i \tilde{e}_i = h_{ii}\tilde{e}_i,$$
which is a simple function of the leverage values $h_{ii}$ and prediction errors $\tilde{e}_i$. Observation $i$ is influential for the predicted value if $|h_{ii}\tilde{e}_i|$ is large, which requires that both $h_{ii}$ and $|\tilde{e}_i|$ are large.

One way to think about this is that a large leverage value $h_{ii}$ gives the potential for observation $i$ to be influential. A large $h_{ii}$ means that observation $i$ is unusual in the sense that the regressor $x_i$ is far from its sample mean. We call an observation with large $h_{ii}$ a leverage point. A leverage point is not necessarily influential as the latter also requires that the prediction error $\tilde{e}_i$ is large.

To determine if any individual observations are influential in this sense, several diagnostics have been proposed (some names include DFITS, Cook's Distance, and Welsch Distance). Unfortunately, from a statistical perspective it is difficult to recommend these diagnostics for applications as they are not based on statistical theory. Probably the most relevant measure is the change in the coefficient estimates given in (3.42). The ratio of these changes to the coefficient's standard error is called its DFBETA, and is a post-estimation diagnostic available in STATA. While there is no magic threshold, the concern is whether or not an individual observation meaningfully changes an estimated coefficient of interest.

For illustration, consider Figure 3.2 which shows a scatter plot of random variables $(y_i, x_i)$. The 25 observations shown with the open circles are generated by $x_i \sim U[1, 10]$ and $y_i \sim N(x_i, 4)$. The 26th observation shown with the filled circle is $x_{26} = 9$, $y_{26} = 0$. (Imagine that $y_{26} = 0$ was incorrectly recorded due to a mistaken key entry.) The figure shows both the least-squares fitted line from the full sample and that obtained after deletion of the 26th observation from the sample. In this example we can see how the 26th observation (the outlier) greatly tilts the least-squares fitted line towards the 26th observation. In fact, the slope coefficient decreases from 0.97 (which is close to the true value of 1.00) to 0.56, which is substantially reduced. Neither $y_{26}$ nor $x_{26}$ are unusual values relative to their marginal distributions, so this outlier would not have been detected from examination of the marginal distributions of the data. The change in the slope coefficient of $-0.41$ is meaningful and should raise concern to an applied economist.

[Figure 3.2: Scatter plot of $(y_i, x_i)$ showing the full-sample OLS fitted line and the leave-one-out OLS fitted line obtained after deleting the 26th observation.]

If an observation is determined to be influential, what should be done? As a common cause of influential observations is data entry error, the influential observations should be examined for evidence that the observation was mis-recorded. Perhaps the observation falls outside of permitted ranges, or some observables are inconsistent (for example, a person is listed as having a job but receives earnings of $0). If it is determined that an observation is incorrectly recorded, then the observation is typically deleted from the sample. This process is often called "cleaning" the data. The decisions made in this process involve a fair amount of individual judgement. When this is done it is proper empirical practice to document such choices. (It is useful to keep the source data in its original form, a revised data file after cleaning, and a record describing the revision process. This is especially useful when revising empirical work at a later date.)

It is also possible that an observation is correctly measured, but unusual and influential. In this case it is unclear how to proceed. Some researchers will try to alter the specification to properly model the influential observation. Other researchers will delete the observation from the sample. The motivation for this choice is to prevent the results from being skewed or determined by individual observations, but this practice is viewed skeptically by many researchers who believe it reduces the integrity of reported empirical results.
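A short sketch of the influence diagnostic in (3.42), again using the hypothetical y, X, e_tilde above (this is the raw change in coefficients, not the STATA DFBETA implementation):

    # Change in each coefficient from deleting observation i, equation (3.42)
    influence = (XtX_inv @ (X * e_tilde[:, None]).T).T     # n x k matrix of beta_hat - beta_hat(-i)
    most_influential = np.argmax(np.abs(influence[:, 1]))  # observation with largest effect on the first slope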

3.17 Normal Regression Model

The normal regression model is the linear regression model under the restriction that the error $e_i$ is independent of $x_i$ and has the distribution $N(0, \sigma^2)$. We can write this as
$$e_i \mid x_i \sim N(0, \sigma^2).$$
This assumption implies
$$y_i \mid x_i \sim N(x_i'\beta, \sigma^2).$$
Normal regression is a parametric model, where likelihood methods can be used for estimation, testing, and distribution theory.

The log-likelihood function for the normal regression model is
$$\log L(\beta, \sigma^2) = \sum_{i=1}^n \log\left( \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left( -\frac{1}{2\sigma^2}\left( y_i - x_i'\beta \right)^2 \right) \right) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2} SSE_n(\beta). \qquad (3.43)$$

The maximum likelihood estimator (MLE) $(\hat{\beta}_{mle}, \hat{\sigma}^2_{mle})$ maximizes $\log L(\beta, \sigma^2)$. Since the latter is a function of $\beta$ only through the sum of squared errors $SSE_n(\beta)$, maximizing the likelihood is identical to minimizing $SSE_n(\beta)$. Hence
$$\hat{\beta}_{mle} = \hat{\beta}_{ols},$$
the MLE for $\beta$ equals the OLS estimator. Due to this equivalence, the least squares estimator $\hat{\beta}_{ols}$ is often called the MLE.

We can also find the MLE for $\sigma^2$. Plugging $\hat{\beta}$ into the log-likelihood we obtain
$$\log L\left( \hat{\beta}_{mle}, \sigma^2 \right) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{SSE_n(\hat{\beta}_{mle})}{2\sigma^2}.$$
Maximization with respect to $\sigma^2$ yields the first-order condition
$$\frac{\partial}{\partial \sigma^2} \log L\left( \hat{\beta}_{mle}, \hat{\sigma}^2 \right) = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\left( \hat{\sigma}^2 \right)^2} SSE_n(\hat{\beta}_{mle}) = 0.$$
Solving for $\hat{\sigma}^2$ yields the MLE for $\sigma^2$:
$$\hat{\sigma}^2_{mle} = \frac{SSE_n(\hat{\beta}_{mle})}{n} = \frac{1}{n}\sum_{i=1}^n \hat{e}_i^2.$$
Plugging the estimates into (3.43) we obtain the maximized log-likelihood
$$\log L\left( \hat{\beta}_{mle}, \hat{\sigma}^2_{mle} \right) = -\frac{n}{2}\left( \log(2\pi) + 1 \right) - \frac{n}{2}\log\hat{\sigma}^2_{mle}. \qquad (3.44)$$

It may seem surprising that the MLE $\hat{\beta}_{mle}$ is numerically equal to the OLS estimator, despite emerging from quite different motivations. It is not completely accidental. The least-squares estimator minimizes a particular sample loss function (the sum of squared error criterion), and most loss functions are equivalent to the likelihood of a specific parametric distribution, in this case the normal regression model. In this sense it is not surprising that the least-squares estimator can be motivated as either the minimizer of a sample loss function or as the maximizer of a likelihood function.
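As a numerical check (a sketch reusing the hypothetical n and the OLS residuals e_hat from the earlier snippets), the MLE of the error variance and the concentrated log-likelihood (3.44) can be evaluated directly from the OLS fit:

    sigma2_mle = np.mean(e_hat**2)                                              # MLE of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi) + 1) - 0.5 * n * np.log(sigma2_mle)  # equation (3.44)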

Carl Friedrich Gauss

The mathematician Carl Friedrich Gauss (1777-1855) proposed the normal regression model, and derived the least squares estimator as the maximum likelihood estimator for this model. He claimed to have discovered the method in 1795 at the age of eighteen, but did not publish the result until 1809. Interest in Gauss's approach was reinforced by Laplace's simultaneous discovery of the central limit theorem, which provided a justification for viewing random disturbances as approximately normal.

3.18 Technical Proofs*

Proof of Equation (3.23). First, $h_{ii} = x_i'(X'X)^{-1}x_i \geq 0$ since it is a quadratic form and $X'X > 0$. Next, since $h_{ii}$ is the $i$'th diagonal element of the projection matrix $P = X(X'X)^{-1}X'$, then
$$h_{ii} = s'Ps$$
where $s = (0 \; \cdots \; 1 \; \cdots \; 0)'$ is a unit vector with a 1 in the $i$'th place and zeros elsewhere.

By the spectral decomposition of the idempotent matrix $P$ (see equation (A.5))
$$P = B'\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}B$$
where $B'B = I_n$. Thus letting $b = Bs$ denote the $i$'th column of $B$, and partitioning $b' = (b_1' \; b_2')$, then
$$h_{ii} = s'B'\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}Bs = b_1'b_1 \leq b'b = 1,$$
the final equality since $b$ is the $i$'th column of $B$ and $B'B = I_n$. We have shown that $h_{ii} \leq 1$, establishing (3.23).

Proof of Equation (3.37). The Sherman-Morrison formula (A.3) from Appendix A.5 states that for nonsingular $A$ and vector $b$
$$\left( A - bb' \right)^{-1} = A^{-1} + \left( 1 - b'A^{-1}b \right)^{-1} A^{-1}bb'A^{-1}.$$
This implies
$$\left( X'X - x_i x_i' \right)^{-1} = (X'X)^{-1} + (1 - h_{ii})^{-1}(X'X)^{-1}x_i x_i'(X'X)^{-1}$$
and thus
$$\begin{aligned}
\hat{\beta}_{(-i)} &= \left( X'X - x_i x_i' \right)^{-1}\left( X'y - x_i y_i \right) \\
&= (X'X)^{-1}X'y - (X'X)^{-1}x_i y_i + (1 - h_{ii})^{-1}(X'X)^{-1}x_i x_i'(X'X)^{-1}\left( X'y - x_i y_i \right) \\
&= \hat{\beta} - (X'X)^{-1}x_i y_i + (1 - h_{ii})^{-1}(X'X)^{-1}x_i\left( x_i'\hat{\beta} - h_{ii}y_i \right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}(X'X)^{-1}x_i\left( (1 - h_{ii})y_i - x_i'\hat{\beta} + h_{ii}y_i \right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}(X'X)^{-1}x_i\left( y_i - x_i'\hat{\beta} \right) \\
&= \hat{\beta} - (1 - h_{ii})^{-1}(X'X)^{-1}x_i\hat{e}_i,
\end{aligned}$$
the third equality making the substitutions $\hat{\beta} = (X'X)^{-1}X'y$ and $h_{ii} = x_i'(X'X)^{-1}x_i$, and the remainder collecting terms.

Exercises

Exercise 3.1 Let $y$ be a random variable with $\mu = Ey$ and $\sigma^2 = \operatorname{var}(y)$. Define
$$g\left( y, \mu, \sigma^2 \right) = \begin{pmatrix} y - \mu \\ (y - \mu)^2 - \sigma^2 \end{pmatrix}.$$
Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $\overline{g}_n(\hat{\mu}, \hat{\sigma}^2) = 0$ where $\overline{g}_n(m, s) = n^{-1}\sum_{i=1}^n g(y_i, m, s)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.

Exercise 3.2 Consider the OLS regression of the $n \times 1$ vector $y$ on the $n \times k$ matrix $X$. Consider an alternative set of regressors $Z = XC$, where $C$ is a $k \times k$ non-singular matrix. Thus, each column of $Z$ is a mixture of some of the columns of $X$. Compare the OLS estimates and residuals from the regression of $y$ on $X$ to the OLS estimates from the regression of $y$ on $Z$.

Exercise 3.3 Using matrix algebra, show $X'\hat{e} = 0$.

Exercise 3.4 Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X = [X_1 \; X_2]$. Find $X_2'\hat{e}$.

Exercise 3.5 Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X$. Find the OLS coefficient from a regression of $\hat{e}$ on $X$.

Exercise 3.6 Let $\hat{y} = X(X'X)^{-1}X'y$. Find the OLS coefficient from a regression of $\hat{y}$ on $X$.

Exercise 3.8 Show that $M$ is idempotent: $MM = M$.

Exercise 3.9 Show that $\operatorname{tr} M = n - k$.

Exercise 3.11 Show that when $X$ contains a constant, $\frac{1}{n}\sum_{i=1}^n \hat{y}_i = \overline{y}$.

Exercise 3.12 A dummy variable takes on only the values 0 and 1. It is used for categorical data, such as an individual's gender. Let $d_1$ and $d_2$ be vectors of 1's and 0's, with the $i$'th element of $d_1$ equaling 1 and that of $d_2$ equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are $n_1$ men and $n_2$ women in the sample. Consider fitting the following three equations by OLS
$$y = \mu + d_1\alpha_1 + d_2\alpha_2 + e \qquad (3.45)$$
$$y = d_1\alpha_1 + d_2\alpha_2 + e \qquad (3.46)$$
$$y = \mu + d_1\phi + e \qquad (3.47)$$
Can all three equations (3.45), (3.46), and (3.47) be estimated by OLS? Explain if not.

(a) Compare regressions (3.46) and (3.47). Is one more general than the other? Explain the relationship between the parameters in (3.46) and (3.47).

(b) Compute $\iota'd_1$ and $\iota'd_2$, where $\iota$ is an $n \times 1$ vector of ones.

(c) Letting $\alpha = (\alpha_1 \; \alpha_2)'$, write equation (3.46) as $y = X\alpha + e$. Consider the assumption $E(x_i e_i) = 0$. Is there any content to this assumption in this setting?

Exercise 3.13 Let $d_1$ and $d_2$ be defined as in Exercise 3.12.

(a) In the OLS regression
$$y = d_1\hat{\gamma}_1 + d_2\hat{\gamma}_2 + \hat{u},$$
show that $\hat{\gamma}_1$ is the sample mean of the dependent variable among the men of the sample ($\overline{y}_1$), and that $\hat{\gamma}_2$ is the sample mean among the women ($\overline{y}_2$).

(b) Let $X$ ($n \times k$) be an additional matrix of regressors. Define the demeaned variables
$$y^* = y - d_1\overline{y}_1 - d_2\overline{y}_2, \qquad X^* = X - d_1\overline{x}_1' - d_2\overline{x}_2'.$$
Compare $\tilde{\beta}$ from the OLS regression
$$y^* = X^*\tilde{\beta} + \tilde{e}$$
with $\hat{\beta}$ from the OLS regression
$$y = d_1\hat{\gamma}_1 + d_2\hat{\gamma}_2 + X\hat{\beta} + \hat{e}.$$

Exercise 3.14 Let $\hat{\beta}_n = (X_n'X_n)^{-1}X_n'y_n$ denote the OLS estimate when $y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(y_{n+1}, x_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is
$$\hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + x_{n+1}'(X_n'X_n)^{-1}x_{n+1}}(X_n'X_n)^{-1}x_{n+1}\left( y_{n+1} - x_{n+1}'\hat{\beta}_n \right).$$

Exercise 3.15 Prove that $R^2$ is the square of the sample correlation between $y$ and $\hat{y}$.

Exercise 3.16 Consider two least-squares regressions
$$y = X_1\tilde{\beta}_1 + \tilde{e}$$
and
$$y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}.$$
Let $R_1^2$ and $R_2^2$ be the R-squared from the two regressions. Show that $R_2^2 \geq R_1^2$. Is there a case (explain) when there is equality $R_2^2 = R_1^2$?

Exercise 3.17 Show that $\tilde{\sigma}^2 \geq \hat{\sigma}^2$. Is equality possible?

Exercise 3.18 For which observations will $\hat{\beta}_{(-i)} = \hat{\beta}$?

Exercise 3.19 The data file cps85.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.S. Census Bureau. The file contains observations on nine variables, listed in the file cps85.pdf.

V1 = education (in years)
V2 = region of residence (coded 1 if South, 0 otherwise)
V3 = (coded 1 if nonwhite and non-Hispanic, 0 otherwise)
V4 = (coded 1 if Hispanic, 0 otherwise)
V5 = gender (coded 1 if female, 0 otherwise)
V6 = marital status (coded 1 if married, 0 otherwise)
V7 = potential labor market experience (in years)
V8 = union status (coded 1 if in union job, 0 otherwise)
V9 = hourly wage (in dollars)

Estimate a regression of wage $y_i$ on education $x_{1i}$, experience $x_{2i}$, and experience-squared $x_{3i} = x_{2i}^2$ (and a constant). Report the OLS estimates.

Let $\hat{e}_i$ be the OLS residual and $\hat{y}_i$ the predicted value from the regression. Numerically calculate the following:

(a) $\sum_{i=1}^n \hat{e}_i$
(b) $\sum_{i=1}^n x_{1i}\hat{e}_i$
(c) $\sum_{i=1}^n x_{2i}\hat{e}_i$
(d) $\sum_{i=1}^n x_{1i}^2\hat{e}_i$
(e) $\sum_{i=1}^n x_{2i}^2\hat{e}_i$
(f) $\sum_{i=1}^n \hat{y}_i\hat{e}_i$
(g) $\sum_{i=1}^n \hat{e}_i^2$
(h) $R^2$

Are these calculations consistent with the theoretical properties of OLS? Explain.

Exercise 3.20 Using the data from the previous problem, re-estimate the slope on education using the residual regression approach. Regress $y_i$ on $(1, x_{2i}, x_{2i}^2)$, regress $x_{1i}$ on $(1, x_{2i}, x_{2i}^2)$, and regress the residuals on the residuals. Report the estimate from this regression. Does it equal the value from the first OLS regression? Explain.

In the second-stage residual regression (the regression of the residuals on the residuals), calculate the equation $R^2$ and sum of squared errors. Do they equal the values from the initial OLS regression? Explain.

Chapter 4

Least Squares Regression

4.1 Introduction

In this chapter we investigate some finite-sample properties of the least-squares estimator applied to a random sample in the linear regression model. In particular, we calculate the finite-sample mean and covariance matrix and propose standard errors for the coefficient estimates.

4.2 Sample Mean

To start with the simplest setting, we first consider the intercept-only model
$$y_i = \mu + e_i, \qquad E(e_i) = 0,$$
which is equivalent to the regression model with $k = 1$ and $x_i = 1$. In the intercept model, $\mu = E(y_i)$ is the mean of $y_i$. (See Exercise 2.15.) The least-squares estimator $\hat{\mu} = \overline{y}$ equals the sample mean as shown in (3.7).

We now calculate the mean and variance of the estimator $\overline{y}$. Since the sample mean is a linear function of the observations, its expectation is simple to calculate
$$E\overline{y} = E\left( \frac{1}{n}\sum_{i=1}^n y_i \right) = \frac{1}{n}\sum_{i=1}^n Ey_i = \mu.$$
This shows that the expected value of the least-squares estimator (the sample mean) equals the projection coefficient (the population mean). An estimator with the property that its expectation equals the parameter it is estimating is called unbiased.

Definition 4.2.1 An estimator $\hat{\theta}$ for $\theta$ is unbiased if $E\hat{\theta} = \theta$.

We next calculate the variance of the estimator $\overline{y}$. Making the substitution $y_i = \mu + e_i$ we find
$$\overline{y} - \mu = \frac{1}{n}\sum_{i=1}^n e_i.$$
Then
$$\operatorname{var}(\overline{y}) = E(\overline{y} - \mu)^2 = E\left( \frac{1}{n}\sum_{i=1}^n e_i \right)\left( \frac{1}{n}\sum_{j=1}^n e_j \right) = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n E(e_i e_j) = \frac{1}{n^2}\sum_{i=1}^n \sigma^2 = \frac{1}{n}\sigma^2.$$
The second-to-last equality is because $E(e_i e_j) = \sigma^2$ for $i = j$ yet $E(e_i e_j) = 0$ for $i \neq j$ due to independence.

We have shown that $\operatorname{var}(\overline{y}) = \frac{1}{n}\sigma^2$. This is the familiar formula for the variance of the sample mean.

4.3 Linear Regression Model

We now consider the linear regression model. Throughout the remainder of this chapter we maintain the following.

Assumption 4.3.1 Linear Regression Model
The observations $(y_i, x_i)$ come from a random sample and satisfy the linear regression equation
$$y_i = x_i'\beta + e_i \qquad (4.1)$$
$$E(e_i \mid x_i) = 0. \qquad (4.2)$$
The variables have finite second moments
$$Ey_i^2 < \infty, \qquad E\|x_i\|^2 < \infty,$$
and an invertible design matrix
$$Q_{xx} = E\left( x_i x_i' \right) > 0.$$

We will consider both the general case of heteroskedastic regression, where the conditional variance
$$E\left( e_i^2 \mid x_i \right) = \sigma^2(x_i) = \sigma_i^2$$
is unrestricted, and the specialized case of homoskedastic regression, where the conditional variance is constant. In the latter case we add the following assumption.

Assumption 4.3.2 Homoskedastic Linear Regression Model
In addition to Assumption 4.3.1,
$$E\left( e_i^2 \mid x_i \right) = \sigma^2(x_i) = \sigma^2 \qquad (4.3)$$
is independent of $x_i$.

4.4 Mean of Least-Squares Estimator

In this section we show that the OLS estimator is unbiased in the linear regression model. This calculation can be done using either summation notation or matrix notation. We will use both.

First take summation notation. Observe that under (4.1)-(4.2)
$$E(y_i \mid X) = E(y_i \mid x_i) = x_i'\beta. \qquad (4.4)$$
The first equality states that the conditional expectation of $y_i$ given $\{x_1, \ldots, x_n\}$ only depends on $x_i$, since the observations are independent across $i$. The second equality is the assumption of a linear conditional mean.

Using definition (3.9), the conditioning theorem, the linearity of expectations, (4.4), and properties of the matrix inverse,
$$\begin{aligned}
E\left( \hat{\beta} \mid X \right) &= E\left( \left( \sum_{i=1}^n x_i x_i' \right)^{-1}\left( \sum_{i=1}^n x_i y_i \right) \Bigm| X \right) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} E\left( \sum_{i=1}^n x_i y_i \Bigm| X \right) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \sum_{i=1}^n E\left( x_i y_i \mid X \right) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \sum_{i=1}^n x_i E\left( y_i \mid X \right) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \sum_{i=1}^n x_i x_i'\beta \\
&= \beta.
\end{aligned}$$

Now let's show the same result using matrix notation. (4.4) implies
$$E(y \mid X) = \begin{pmatrix} \vdots \\ E(y_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ x_i'\beta \\ \vdots \end{pmatrix} = X\beta. \qquad (4.5)$$
Similarly
$$E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0. \qquad (4.6)$$

Using definition (3.18), the conditioning theorem, the linearity of expectations, (4.5), and the properties of the matrix inverse,
$$E\left( \hat{\beta} \mid X \right) = E\left( (X'X)^{-1}X'y \mid X \right) = (X'X)^{-1}X'E(y \mid X) = (X'X)^{-1}X'X\beta = \beta.$$

At the risk of belaboring the derivation, another way to calculate the same result is as follows. Insert $y = X\beta + e$ into the formula (3.18) for $\hat{\beta}$ to obtain
$$\hat{\beta} = (X'X)^{-1}X'(X\beta + e) = (X'X)^{-1}X'X\beta + (X'X)^{-1}X'e = \beta + (X'X)^{-1}X'e. \qquad (4.7)$$
This is a useful linear decomposition of the estimator $\hat{\beta}$ into the true parameter $\beta$ and the stochastic component $(X'X)^{-1}X'e$. Once again, we can calculate that
$$E\left( \hat{\beta} - \beta \mid X \right) = E\left( (X'X)^{-1}X'e \mid X \right) = (X'X)^{-1}X'E(e \mid X) = 0.$$
Applying the law of iterated expectations, we find that
$$E\left( \hat{\beta} \right) = E\left( E\left( \hat{\beta} \mid X \right) \right) = \beta.$$
We have shown the following theorem.

Theorem 4.4.1 Mean of Least-Squares Estimator
In the linear regression model (Assumption 4.3.1)
$$E\left( \hat{\beta} \mid X \right) = \beta \qquad (4.8)$$
and
$$E\left( \hat{\beta} \right) = \beta. \qquad (4.9)$$

Equation (4.9) says that the estimator $\hat{\beta}$ is unbiased for $\beta$, meaning that the distribution of $\hat{\beta}$ is centered at $\beta$. Equation (4.8) says that the estimator is conditionally unbiased, which is a stronger result. It says that $\hat{\beta}$ is unbiased for any realization of the regressor matrix $X$.

4.5 Variance of Least Squares Estimator

For any $r \times 1$ random vector $Z$ define the $r \times r$ covariance matrix
$$\operatorname{var}(Z) = E(Z - EZ)(Z - EZ)' = EZZ' - (EZ)(EZ)'$$
and for any pair $(Z, X)$ define the conditional covariance matrix
$$\operatorname{var}(Z \mid X) = E\left( (Z - E(Z \mid X))(Z - E(Z \mid X))' \mid X \right).$$

The conditional covariance matrix of the $n \times 1$ regression error $e$ is the $n \times n$ matrix
$$D = E\left( ee' \mid X \right).$$
The $i$'th diagonal element of $D$ is
$$E\left( e_i^2 \mid X \right) = E\left( e_i^2 \mid x_i \right) = \sigma_i^2,$$
while its $ij$'th off-diagonal element is
$$E\left( e_i e_j \mid X \right) = E(e_i \mid x_i)E(e_j \mid x_j) = 0,$$
where the first equality uses independence of the observations (Assumption 1.5.1) and the second is (4.2). Thus $D$ is a diagonal matrix with $i$'th diagonal element $\sigma_i^2$:
$$D = \operatorname{diag}\left( \sigma_1^2, \ldots, \sigma_n^2 \right) = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}. \qquad (4.10)$$
In the special case of the linear homoskedastic regression model (4.3), then $E(e_i^2 \mid x_i) = \sigma_i^2 = \sigma^2$ and we have the simplification $D = I_n\sigma^2$.

For any $n \times r$ matrix $A = A(X)$,
$$\operatorname{var}\left( A'y \mid X \right) = \operatorname{var}\left( A'e \mid X \right) = A'DA. \qquad (4.11)$$
In particular, we can write $\hat{\beta} = A'y$ where $A = X(X'X)^{-1}$, and thus
$$\operatorname{var}\left( \hat{\beta} \mid X \right) = A'DA = (X'X)^{-1}X'DX(X'X)^{-1}.$$
It is useful to note that
$$X'DX = \sum_{i=1}^n x_i x_i'\sigma_i^2,$$
a weighted version of $X'X$.

Rather than working with the variance of the unscaled estimator $\hat{\beta}$, it will be useful to work with the conditional variance of the scaled estimator $\sqrt{n}\left( \hat{\beta} - \beta \right)$. We calculate that
$$V_{\hat{\beta}} \overset{def}{=} \operatorname{var}\left( \sqrt{n}\left( \hat{\beta} - \beta \right) \mid X \right) = n\operatorname{var}\left( \hat{\beta} \mid X \right) = n(X'X)^{-1}X'DX(X'X)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'DX \right)\left( \frac{1}{n}X'X \right)^{-1}.$$
This rescaling might seem rather odd, but it will help provide continuity between the finite-sample treatment of this chapter and the asymptotic treatment of later chapters. As we will see in the next chapter, $\operatorname{var}(\hat{\beta} \mid X)$ vanishes as $n$ tends to infinity, yet $V_{\hat{\beta}}$ converges to a constant matrix.

In the special case of the linear homoskedastic regression model, $D = I_n\sigma^2$, so $X'DX = X'X\sigma^2$, and the variance matrix simplifies to
$$V_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\sigma^2.$$
It may be worth observing that without rescaling, the variance can be written as
$$\operatorname{var}\left( \hat{\beta} \mid X \right) = (X'X)^{-1}\left( X'DX \right)(X'X)^{-1},$$
which under homoskedasticity simplifies to
$$\operatorname{var}\left( \hat{\beta} \mid X \right) = (X'X)^{-1}\sigma^2.$$

Theorem 4.5.1 Variance of Least-Squares Estimator
In the linear regression model (Assumption 4.3.1)
$$V_{\hat{\beta}} = \operatorname{var}\left( \sqrt{n}\left( \hat{\beta} - \beta \right) \mid X \right) = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'DX \right)\left( \frac{1}{n}X'X \right)^{-1}. \qquad (4.12)$$
In the homoskedastic linear regression model (Assumption 4.3.2)
$$V_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\sigma^2.$$

2

Gauss-Markov Theorem

can be written as

e = A0 y

where A is an n k function of X. As noted before, the least-squares estimator is the special case

obtained by setting A = X(X 0 X) 1 : What is the best choice of A? The Gauss-Markov theorem,

which we now present, says that the least-squares estimator is the best choice among linear unbiased

estimators when the errors are homoskedastic, in the sense that the least-squares estimator has the

smallest variance among all unbiased linear estimators.

To see this, since E (y j X) = X ; then for any linear estimator e = A0 y we have

E e j X = A0 E (y j X) = A0 X ;

var e j X = var A0 y j X = A0 DA = A0 A

81

the last equality using the homoskedasticity assumption D = I n 2 . The best unbiased linear

estimator is obtained by nding the matrix A0 satisfying A00 X = I k such that A00 A0 is minimized

in the positive denite sense, in that for any other matrix A satisfying A0 X = I k ; then A0 A A00 A0

is positive semi-denite.

1. In the homoskedastic linear regression model (Assumption 4.3.2),

the best (minimum-variance) unbiased linear estimator is the leastsquares estimator

b = X 0X 1 X 0y

2. In the linear regression model (Assumption 4.3.1), the best unbiased

linear estimator is

e = X 0D

X 0D

(4.13)

The rst part of the Gauss-Markov theorem is a limited e ciency justication for the leastsquares estimator. The justication is limited because the class of models is restricted to homoskedastic linear regression and the class of potential estimators is restricted to linear unbiased

estimators. This latter restriction is particularly unsatisfactory as the theorem leaves open the

possibility that a non-linear or biased estimator could have lower mean squared error than the

least-squares estimator.

The second part of the theorem shows that in the (heteroskedastic) linear regression model,

within the class of linear unbiased estimators the best estimator is not least-squares but is (4.13).

This is called the Generalized Least Squares (GLS) estimator. The GLS estimator is infeasible

as the matrix D is unknown. This result does not suggest a practical alternative to least-squares.

We return to the issue of feasible implementation of GLS in Section 9.1.

We give a proof of the rst part of the theorem below, and leave the proof of the second part

for Exercise 4.3.

Proof of Theorem 4.6.1.1. Let A be any n k function of X such that A0 X = I k : The variance

1 2

of the least-squares estimator is (X 0 X)

and that of A0 y is A0 A 2 : It is su cient to show

1

1

0

0

that the dierence A A (X X) is positive semi-denite. Set C = A X (X 0 X) : Note that

X 0 C = 0: Then we calculate that

A0 A

X 0X

C + X X 0X

1 0

= C 0C + C 0X X 0X

+ X 0X

C + X X 0X

1

+ X 0X

X 0X

X 0X X 0X

X 0C

1

= C 0C

The matrix C 0 C is positive semi-denite (see Appendix A.8) as required.

X 0X

4.7 Residuals

What are some properties of the residuals $\hat{e}_i = y_i - x_i'\hat{\beta}$ and prediction errors $\tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)}$, at least in the context of the linear regression model?

Recall from (3.26) that we can write the residuals in vector notation as
$$\hat{e} = Me$$
where $M = I_n - X(X'X)^{-1}X'$. We can calculate the conditional expectation and variance:
$$E(\hat{e} \mid X) = E(Me \mid X) = ME(e \mid X) = 0$$
and
$$\operatorname{var}(\hat{e} \mid X) = \operatorname{var}(Me \mid X) = M\operatorname{var}(e \mid X)M = MDM. \qquad (4.14)$$
We can simplify this expression under the assumption of conditional homoskedasticity
$$E\left( e_i^2 \mid x_i \right) = \sigma^2.$$
In this case (4.14) simplifies to
$$\operatorname{var}(\hat{e} \mid X) = M\sigma^2.$$
In particular, for a single observation $i$,
$$\operatorname{var}(\hat{e}_i \mid X) = E\left( \hat{e}_i^2 \mid X \right) = (1 - h_{ii})\sigma^2 \qquad (4.15)$$
since the diagonal elements of $M$ are $1 - h_{ii}$ as defined in (3.21). Thus the residuals $\hat{e}_i$ are heteroskedastic even if the errors $e_i$ are homoskedastic.

Similarly, recall from (3.40) that the prediction errors $\tilde{e}_i = (1 - h_{ii})^{-1}\hat{e}_i$ can be written in vector notation as $\tilde{e} = M^*\hat{e}$ where $M^*$ is a diagonal matrix with $i$'th diagonal element $(1 - h_{ii})^{-1}$. Thus $\tilde{e} = M^*Me$. We can calculate that
$$E(\tilde{e} \mid X) = M^*ME(e \mid X) = 0$$
and
$$\operatorname{var}(\tilde{e} \mid X) = M^*M\operatorname{var}(e \mid X)MM^* = M^*MDMM^*,$$
which simplifies under homoskedasticity to
$$\operatorname{var}(\tilde{e} \mid X) = M^*MMM^*\sigma^2 = M^*MM^*\sigma^2.$$
The variance of the $i$'th prediction error is then
$$\operatorname{var}(\tilde{e}_i \mid X) = E\left( \tilde{e}_i^2 \mid X \right) = (1 - h_{ii})^{-1}(1 - h_{ii})(1 - h_{ii})^{-1}\sigma^2 = (1 - h_{ii})^{-1}\sigma^2.$$

A residual with constant conditional variance can be obtained by rescaling. The standardized residuals are
$$\bar{e}_i = (1 - h_{ii})^{-1/2}\hat{e}_i, \qquad (4.16)$$
and in vector notation
$$\bar{e} = (\bar{e}_1, \ldots, \bar{e}_n)' = M^{*1/2}Me.$$
From our above calculations, under homoskedasticity,
$$\operatorname{var}(\bar{e} \mid X) = M^{*1/2}MM^{*1/2}\sigma^2$$
and
$$\operatorname{var}(\bar{e}_i \mid X) = E\left( \bar{e}_i^2 \mid X \right) = \sigma^2, \qquad (4.17)$$
and thus these standardized residuals have the same bias and variance as the original errors when the latter are homoskedastic.

4.8 Estimation of Error Variance

The error variance $\sigma^2 = Ee_i^2$ can be a parameter of interest, even in a heteroskedastic regression or a projection model. $\sigma^2$ measures the variation in the "unexplained" part of the regression. Its method of moments estimator (MME) is the sample average of the squared residuals:
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \hat{e}_i^2.$$

In the linear regression model we can calculate the mean of $\hat{\sigma}^2$. From (3.26), the properties of projection matrices and the trace operator, observe that
$$\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\operatorname{tr}\left( e'Me \right) = \frac{1}{n}\operatorname{tr}\left( Mee' \right).$$
Then
$$E\left( \hat{\sigma}^2 \mid X \right) = \frac{1}{n}\operatorname{tr}\left( E\left( Mee' \mid X \right) \right) = \frac{1}{n}\operatorname{tr}\left( ME\left( ee' \mid X \right) \right) = \frac{1}{n}\operatorname{tr}(MD). \qquad (4.18)$$
Adding the assumption of conditional homoskedasticity $E(e_i^2 \mid x_i) = \sigma^2$, so that $D = I_n\sigma^2$, then (4.18) simplifies to
$$E\left( \hat{\sigma}^2 \mid X \right) = \frac{1}{n}\operatorname{tr}(M)\sigma^2 = \sigma^2\left( \frac{n-k}{n} \right),$$
the final equality by (3.24). This calculation shows that $\hat{\sigma}^2$ is biased towards zero. The order of the bias depends on $k/n$, the ratio of the number of estimated coefficients to the sample size.

Another way to see this is to use (4.15). Note that
$$E\left( \hat{\sigma}^2 \mid X \right) = \frac{1}{n}\sum_{i=1}^n E\left( \hat{e}_i^2 \mid X \right) = \frac{1}{n}\sum_{i=1}^n (1 - h_{ii})\sigma^2. \qquad (4.19)$$

Since the bias takes a scale form, a classic method to obtain an unbiased estimator is by rescaling the estimator. Define
$$s^2 = \frac{1}{n-k}\sum_{i=1}^n \hat{e}_i^2. \qquad (4.20)$$
By the above calculation,
$$E\left( s^2 \mid X \right) = \sigma^2$$
so
$$E\left( s^2 \right) = \sigma^2 \qquad (4.21)$$
and the estimator $s^2$ is unbiased for $\sigma^2$. Consequently, $s^2$ is known as the "bias-corrected estimator" for $\sigma^2$ and in empirical practice $s^2$ is the most widely used estimator for $\sigma^2$.

Interestingly, this is not the only method to construct an unbiased estimator for $\sigma^2$. An estimator constructed with the standardized residuals $\bar{e}_i$ from (4.16) is
$$\bar{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \bar{e}_i^2 = \frac{1}{n}\sum_{i=1}^n (1 - h_{ii})^{-1}\hat{e}_i^2. \qquad (4.22)$$
You can show (see Exercise 4.6) that
$$E\left( \bar{\sigma}^2 \mid X \right) = \sigma^2 \qquad (4.23)$$
and thus $\bar{\sigma}^2$ is unbiased for $\sigma^2$ (in the homoskedastic linear regression model).

When $k/n$ is small (typically, this occurs when $n$ is large), the estimators $\hat{\sigma}^2$, $s^2$ and $\bar{\sigma}^2$ are likely to be close. However, if not then $s^2$ and $\bar{\sigma}^2$ are generally preferred to $\hat{\sigma}^2$. Consequently it is best to use one of the bias-corrected variance estimators in applications.
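As a quick illustration (a sketch reusing the hypothetical y, X, e_hat, h and n defined in the earlier snippets; k is the number of columns of X), the three variance estimators differ only in how they weight the squared residuals:

    k = X.shape[1]
    sigma2_hat = np.mean(e_hat**2)                 # method of moments estimator
    s2 = np.sum(e_hat**2) / (n - k)                # bias-corrected estimator, equation (4.20)
    sigma2_bar = np.mean(e_hat**2 / (1.0 - h))     # standardized-residual estimator, equation (4.22)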

4.9 Mean-Square Forecast Error

A major purpose of estimated regressions is to predict out-of-sample values. Consider an out-of-sample observation $(y_{n+1}, x_{n+1})$ where $x_{n+1}$ will be observed but not $y_{n+1}$. Given the coefficient estimate $\hat{\beta}$ the standard point estimate of $E(y_{n+1} \mid x_{n+1}) = x_{n+1}'\beta$ is $\tilde{y}_{n+1} = x_{n+1}'\hat{\beta}$. The forecast error is the difference between the actual value $y_{n+1}$ and the point forecast, $\tilde{e}_{n+1} = y_{n+1} - \tilde{y}_{n+1}$. The mean-squared forecast error (MSFE) is
$$MSFE_n = E\tilde{e}_{n+1}^2.$$
In the linear regression model, $\tilde{e}_{n+1} = e_{n+1} - x_{n+1}'\left( \hat{\beta} - \beta \right)$, so
$$MSFE_n = Ee_{n+1}^2 - 2E\left( e_{n+1}x_{n+1}'\left( \hat{\beta} - \beta \right) \right) + E\left( x_{n+1}'\left( \hat{\beta} - \beta \right)\left( \hat{\beta} - \beta \right)'x_{n+1} \right). \qquad (4.24)$$
The first term in (4.24) is $\sigma^2$. The second term in (4.24) is zero since $e_{n+1}x_{n+1}'$ is independent of $\hat{\beta} - \beta$ and both are mean zero. The third term in (4.24) is
$$\operatorname{tr}\left( E\left( x_{n+1}x_{n+1}' \right)E\left( \left( \hat{\beta} - \beta \right)\left( \hat{\beta} - \beta \right)' \right) \right) = \frac{1}{n}\operatorname{tr}\left( E\left( x_{n+1}x_{n+1}' \right)E\left( V_{\hat{\beta}} \right) \right) = \frac{1}{n}E\left( \operatorname{tr}\left( x_{n+1}x_{n+1}'V_{\hat{\beta}} \right) \right) = \frac{1}{n}E\left( x_{n+1}'V_{\hat{\beta}}x_{n+1} \right), \qquad (4.25)$$
where we use the fact that $x_{n+1}$ is independent of $\hat{\beta}$ and use the definition $V_{\hat{\beta}} = \operatorname{var}\left( \sqrt{n}\left( \hat{\beta} - \beta \right) \mid X \right)$. Thus
$$MSFE_n = \sigma^2 + \frac{1}{n}E\left( x_{n+1}'V_{\hat{\beta}}x_{n+1} \right).$$
Under conditional homoskedasticity, this simplifies to
$$MSFE_n = \sigma^2\left( 1 + E\left( x_{n+1}'(X'X)^{-1}x_{n+1} \right) \right).$$
A simple estimator for the MSFE is the averaging of squared prediction errors (3.41)
$$\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \tilde{e}_i^2$$
where $\tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)} = \hat{e}_i(1 - h_{ii})^{-1}$. Indeed, we can calculate that
$$E\tilde{\sigma}^2 = E\tilde{e}_i^2 = E\left( e_i - x_i'\left( \hat{\beta}_{(-i)} - \beta \right) \right)^2 = \sigma^2 + E\left( x_i'\left( \hat{\beta}_{(-i)} - \beta \right)\left( \hat{\beta}_{(-i)} - \beta \right)'x_i \right).$$
By a calculation similar to (4.25),
$$E\tilde{\sigma}^2 = \sigma^2 + \frac{1}{n-1}E\left( x_i'V_{\hat{\beta}_{(-i)}}x_i \right) = MSFE_{n-1}.$$
This is the MSFE based on a sample of size $n-1$, rather than size $n$. The difference arises because the in-sample prediction errors $\tilde{e}_i$ for $i \leq n$ are calculated using an effective sample size of $n-1$, while the out-of-sample prediction error $\tilde{e}_{n+1}$ is calculated from a sample with the full $n$ observations. Unless $n$ is very small we should expect $MSFE_{n-1}$ (the MSFE based on $n-1$ observations) to be close to $MSFE_n$ (the MSFE based on $n$ observations). Thus $\tilde{\sigma}^2$ is a reasonable estimator for $MSFE_n$.

Theorem 4.9.1 MSFE
In the linear regression model (Assumption 4.3.1)
$$MSFE_n = E\tilde{e}_{n+1}^2 = \sigma^2 + \frac{1}{n}E\left( x_{n+1}'V_{\hat{\beta}}x_{n+1} \right)$$
where $V_{\hat{\beta}} = \operatorname{var}\left( \sqrt{n}\left( \hat{\beta} - \beta \right) \mid X \right)$. Furthermore, $\tilde{\sigma}^2$ defined in (3.41) is an unbiased estimator of $MSFE_{n-1}$:
$$E\tilde{\sigma}^2 = MSFE_{n-1}.$$

4.10 Covariance Matrix Estimation Under Homoskedasticity

For inference, we need an estimate of the covariance matrix $V_{\hat{\beta}}$ of the least-squares estimator. In this section we consider the homoskedastic regression model (Assumption 4.3.2).

Under homoskedasticity, the covariance matrix takes the relatively simple form
$$V_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\sigma^2$$
which is known up to the unknown scale $\sigma^2$. In Section 4.8 we discussed three estimators of $\sigma^2$. The most commonly used choice is $s^2$, leading to the classic covariance matrix estimator
$$\hat{V}^0_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}s^2. \qquad (4.26)$$
Since $s^2$ is conditionally unbiased for $\sigma^2$, it is simple to calculate that $\hat{V}^0_{\hat{\beta}}$ is conditionally unbiased for $V_{\hat{\beta}}$ under the assumption of homoskedasticity:
$$E\left( \hat{V}^0_{\hat{\beta}} \mid X \right) = \left( \frac{1}{n}X'X \right)^{-1}E\left( s^2 \mid X \right) = \left( \frac{1}{n}X'X \right)^{-1}\sigma^2 = V_{\hat{\beta}}.$$
This estimator was the dominant covariance matrix estimator in applied econometrics at one time, and is still the default in most regression packages.

If the estimator (4.26) is used, but the regression error is heteroskedastic, it is possible for $\hat{V}^0_{\hat{\beta}}$ to be quite biased for the correct covariance matrix $V_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'DX \right)\left( \frac{1}{n}X'X \right)^{-1}$. For example, suppose $k = 1$ and $\sigma_i^2 = x_i^2$ with $Ex_i = 0$. The ratio of the true variance of the least-squares estimator to the expectation of the variance estimator is
$$\frac{V_{\hat{\beta}}}{E\left( \hat{V}^0_{\hat{\beta}} \mid X \right)} = \frac{\frac{1}{n}\sum_{i=1}^n x_i^4}{\sigma^2\frac{1}{n}\sum_{i=1}^n x_i^2} \simeq \frac{Ex_i^4}{\left( Ex_i^2 \right)^2} = \kappa.$$
(Notice that we use the fact that $\sigma_i^2 = x_i^2$ implies $\sigma^2 = E\sigma_i^2 = Ex_i^2$.) The constant $\kappa$ is the standardized fourth moment (or kurtosis) of the regressor $x_i$, and can be any number greater than one. For example, if $x_i \sim N(0, \sigma^2)$ then $\kappa = 3$, so the true variance is three times larger than the homoskedastic estimator. But $\kappa$ can be much larger. Suppose, for example, that $x_i \sim \chi^2_1 - 1$. In this case $\kappa = 15$, so that the true variance is fifteen times larger than the homoskedastic estimator. While this is an extreme and constructed example, the point is that the classic covariance matrix estimator (4.26) may be quite biased when the homoskedasticity assumption fails.

4.11 Covariance Matrix Estimation Under Heteroskedasticity

In the previous section we showed that the classic covariance matrix estimator can be highly biased if homoskedasticity fails. In this section we show how to construct covariance matrix estimators which do not require homoskedasticity.

Recall that the general form for the covariance matrix is
$$V_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'DX \right)\left( \frac{1}{n}X'X \right)^{-1}$$
with
$$D = \operatorname{diag}\left( \sigma_1^2, \ldots, \sigma_n^2 \right) = E\left( ee' \mid X \right) = E\left( D_0 \mid X \right)$$
where $D_0 = \operatorname{diag}\left( e_1^2, \ldots, e_n^2 \right)$. Thus $D_0$ is a conditionally unbiased estimator for $D$. Therefore, if the squared errors $e_i^2$ were observable, we could construct the unbiased estimator
$$\hat{V}^{ideal}_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'D_0X \right)\left( \frac{1}{n}X'X \right)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'e_i^2 \right)\left( \frac{1}{n}X'X \right)^{-1}.$$
Indeed,
$$E\left( \hat{V}^{ideal}_{\hat{\beta}} \mid X \right) = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'E\left( e_i^2 \mid X \right) \right)\left( \frac{1}{n}X'X \right)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'\sigma_i^2 \right)\left( \frac{1}{n}X'X \right)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'DX \right)\left( \frac{1}{n}X'X \right)^{-1} = V_{\hat{\beta}},$$
verifying that $\hat{V}^{ideal}_{\hat{\beta}}$ is unbiased for $V_{\hat{\beta}}$.

Since the errors $e_i^2$ are unobserved, $\hat{V}^{ideal}_{\hat{\beta}}$ is not a feasible estimator. To construct a feasible estimator we can replace the errors with the least-squares residuals $\hat{e}_i$, the prediction errors $\tilde{e}_i$ or the standardized residuals $\bar{e}_i$, e.g.
$$\hat{D} = \operatorname{diag}\left( \hat{e}_1^2, \ldots, \hat{e}_n^2 \right), \qquad \tilde{D} = \operatorname{diag}\left( \tilde{e}_1^2, \ldots, \tilde{e}_n^2 \right), \qquad \bar{D} = \operatorname{diag}\left( \bar{e}_1^2, \ldots, \bar{e}_n^2 \right). \qquad (4.27)$$
Substituting these matrices into the formula for $V_{\hat{\beta}}$ we obtain the estimators
$$\hat{V}_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'\hat{D}X \right)\left( \frac{1}{n}X'X \right)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'\hat{e}_i^2 \right)\left( \frac{1}{n}X'X \right)^{-1}, \qquad (4.28)$$
$$\tilde{V}_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'\tilde{D}X \right)\left( \frac{1}{n}X'X \right)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n (1 - h_{ii})^{-2}x_i x_i'\hat{e}_i^2 \right)\left( \frac{1}{n}X'X \right)^{-1},$$
and
$$\bar{V}_{\hat{\beta}} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}X'\bar{D}X \right)\left( \frac{1}{n}X'X \right)^{-1} = \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n (1 - h_{ii})^{-1}x_i x_i'\hat{e}_i^2 \right)\left( \frac{1}{n}X'X \right)^{-1}.$$
These are often called heteroskedasticity-robust covariance matrix estimators. The estimator $\hat{V}_{\hat{\beta}}$ was first developed by Eicker (1963), and introduced to econometrics by White (1980), and is sometimes called the Eicker-White or White covariance matrix estimator. The estimator $\tilde{V}_{\hat{\beta}}$ was introduced by Andrews (1991) based on the principle of leave-one-out cross-validation, and the estimator $\bar{V}_{\hat{\beta}}$ was introduced by Horn, Horn and Duncan (1975) as a reduced-bias covariance matrix estimator.

Since $(1 - h_{ii})^{-2} > (1 - h_{ii})^{-1} > 1$ it is straightforward to show that
$$\hat{V}_{\hat{\beta}} < \bar{V}_{\hat{\beta}} < \tilde{V}_{\hat{\beta}} \qquad (4.29)$$
(see Exercise 4.7). The inequality $A < B$ when applied to matrices means that the matrix $B - A$ is positive definite.

In general, the bias of the estimators $\hat{V}_{\hat{\beta}}$, $\tilde{V}_{\hat{\beta}}$ and $\bar{V}_{\hat{\beta}}$ is quite complicated, but it greatly simplifies under the assumption of homoskedasticity (4.3). For example, using (4.15),
$$\begin{aligned}
E\left( \hat{V}_{\hat{\beta}} \mid X \right) &= \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'E\left( \hat{e}_i^2 \mid X \right) \right)\left( \frac{1}{n}X'X \right)^{-1} \\
&= \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'(1 - h_{ii})\sigma^2 \right)\left( \frac{1}{n}X'X \right)^{-1} \\
&= \left( \frac{1}{n}X'X \right)^{-1}\sigma^2 - \left( \frac{1}{n}X'X \right)^{-1}\left( \frac{1}{n}\sum_{i=1}^n x_i x_i'h_{ii} \right)\left( \frac{1}{n}X'X \right)^{-1}\sigma^2 \\
&\leq \left( \frac{1}{n}X'X \right)^{-1}\sigma^2 = V_{\hat{\beta}}.
\end{aligned}$$
Similarly, (again under homoskedasticity) we can calculate that $\tilde{V}_{\hat{\beta}}$ is biased away from zero, specifically
$$E\left( \tilde{V}_{\hat{\beta}} \mid X \right) \geq \left( \frac{1}{n}X'X \right)^{-1}\sigma^2 \qquad (4.30)$$
while the estimator $\bar{V}_{\hat{\beta}}$ is unbiased
$$E\left( \bar{V}_{\hat{\beta}} \mid X \right) = \left( \frac{1}{n}X'X \right)^{-1}\sigma^2. \qquad (4.31)$$
(See Exercise 4.8.)

It might seem rather odd to compare the bias of heteroskedasticity-robust estimators under the assumption of homoskedasticity, but it does give us a baseline for comparison.

We have introduced four covariance matrix estimators, $\hat{V}^0_{\hat{\beta}}$, $\hat{V}_{\hat{\beta}}$, $\tilde{V}_{\hat{\beta}}$, and $\bar{V}_{\hat{\beta}}$. Which should you use? The classic estimator $\hat{V}^0_{\hat{\beta}}$ is typically a poor choice, as it is only valid under the unlikely homoskedasticity restriction. For this reason it is not typically used in contemporary econometric research. Of the three robust estimators, $\hat{V}_{\hat{\beta}}$ is the most commonly used, as it is the most straightforward and familiar. However, $\tilde{V}_{\hat{\beta}}$ and (in particular) $\bar{V}_{\hat{\beta}}$ are preferred based on their improved bias. Unfortunately, standard regression packages set the classic estimator $\hat{V}^0_{\hat{\beta}}$ as the default. As $\tilde{V}_{\hat{\beta}}$ and $\bar{V}_{\hat{\beta}}$ are simple to implement, this should not be a barrier. For example, in STATA, $\bar{V}_{\hat{\beta}}$ is implemented by selecting "Robust" standard errors and selecting the bias correction option "1/(1-h)", or using the vce(hc2) option.
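A compact sketch of these robust estimators (reusing the hypothetical X, e_hat, h and n from the earlier snippets; this mirrors the formulas above, not any particular package's implementation):

    XtX_n_inv = np.linalg.inv(X.T @ X / n)

    def sandwich(weights):
        # (1/n X'X)^{-1} (1/n sum_i x_i x_i' w_i) (1/n X'X)^{-1}
        meat = (X * weights[:, None]).T @ X / n
        return XtX_n_inv @ meat @ XtX_n_inv

    V_hat   = sandwich(e_hat**2)                 # Eicker-White estimator, equation (4.28)
    V_tilde = sandwich(e_hat**2 / (1 - h)**2)    # prediction-error version
    V_bar   = sandwich(e_hat**2 / (1 - h))       # standardized-residual (reduced-bias) version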

4.12 Standard Errors

A variance estimator such as $n^{-1}\hat{V}_{\hat{\beta}}$ is an estimate of the variance of the distribution of $\hat{\beta}$. A more easily interpretable measure of spread is its square root, the standard deviation. This is so important when discussing the distribution of parameter estimates, we have a special name for estimates of their standard deviation.

Definition 4.12.1 A standard error $s(\hat{\beta})$ for a real-valued estimator $\hat{\beta}$ is an estimate of the standard deviation of the distribution of $\hat{\beta}$.

When $\beta$ is a vector with estimate $\hat{\beta}$ and covariance matrix estimate $n^{-1}\hat{V}_{\hat{\beta}}$, standard errors for individual elements are the square roots of the diagonal elements of $n^{-1}\hat{V}_{\hat{\beta}}$. That is,
$$s(\hat{\beta}_j) = \sqrt{n^{-1}\hat{V}_{\hat{\beta}_j}} = \sqrt{n^{-1}\left[ \hat{V}_{\hat{\beta}} \right]_{jj}}.$$
As we discussed in the previous section, there are multiple possible covariance matrix estimators, so standard errors are not unique. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions.

To illustrate the computation of the covariance matrix estimate and standard errors, we return to the log wage regression (3.11) of Section 3.6. We calculate that $s^2 = 0.215$ and
$$\frac{1}{n}\sum_{i=1}^n x_i x_i'\hat{e}_i^2 = \begin{pmatrix} 0.208 & 3.200 \\ 3.200 & 49.961 \end{pmatrix}.$$
Therefore the homoskedastic and White covariance matrix estimates are
$$\hat{V}^0_{\hat{\beta}} = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1} \times 0.215 = \begin{pmatrix} 10.387 & -0.659 \\ -0.659 & 0.043 \end{pmatrix}$$
and
$$\hat{V}_{\hat{\beta}} = \begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1}\begin{pmatrix} 0.208 & 3.200 \\ 3.200 & 49.961 \end{pmatrix}\begin{pmatrix} 1 & 15.426 \\ 15.426 & 243 \end{pmatrix}^{-1} = \begin{pmatrix} 7.092 & -0.445 \\ -0.445 & 0.029 \end{pmatrix}.$$
The standard errors are the square roots of $n^{-1}$ times the diagonal elements of these matrices. For example, the White standard error for $\hat{\beta}_0$ is $\sqrt{7.092/61} = 0.341$ and that for $\hat{\beta}_1$ is $\sqrt{0.029/61} = 0.022$. A conventional format to write the estimated equation with standard errors is
$$\widehat{\log(Wage)} = \underset{(0.341)}{\hat{\beta}_0} + \underset{(0.022)}{\hat{\beta}_1}\,Education.$$

Alternatively, standard errors could be calculated using $\tilde{V}_{\hat{\beta}}$ or $\bar{V}_{\hat{\beta}}$. We report the four possible standard errors in the following table

               $\sqrt{n^{-1}\hat{V}^0_{\hat{\beta}}}$   $\sqrt{n^{-1}\hat{V}_{\hat{\beta}}}$   $\sqrt{n^{-1}\tilde{V}_{\hat{\beta}}}$   $\sqrt{n^{-1}\bar{V}_{\hat{\beta}}}$
Intercept             0.412                                 0.341                                 0.361                                  0.351
Education             0.026                                 0.022                                 0.023                                  0.022

The homoskedastic standard errors are noticeably different than the others, but the three robust standard errors are quite close to one another.
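In code, standard errors are simply the square roots of the scaled diagonal elements (a sketch continuing the hypothetical covariance estimates computed above):

    se_white = np.sqrt(np.diag(V_hat) / n)    # White standard errors for each coefficient
    se_hc2   = np.sqrt(np.diag(V_bar) / n)    # standard errors from the reduced-bias estimator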

4.13 Measures of Fit

A commonly reported measure of regression fit is the regression $R^2$ defined as
$$R^2 = 1 - \frac{\sum_{i=1}^n \hat{e}_i^2}{\sum_{i=1}^n (y_i - \overline{y})^2} = 1 - \frac{\hat{\sigma}^2}{\hat{\sigma}_y^2}$$
where $\hat{\sigma}_y^2 = n^{-1}\sum_{i=1}^n (y_i - \overline{y})^2$. $R^2$ can be viewed as an estimator of the population parameter
$$\rho^2 = \frac{\operatorname{var}(x_i'\beta)}{\operatorname{var}(y_i)} = 1 - \frac{\sigma^2}{\sigma_y^2}.$$
However, $\hat{\sigma}^2$ and $\hat{\sigma}_y^2$ are biased estimators. Theil (1961) proposed replacing these by the unbiased versions $s^2$ and $\tilde{\sigma}_y^2 = (n-1)^{-1}\sum_{i=1}^n (y_i - \overline{y})^2$ yielding what is known as R-bar-squared or adjusted R-squared:
$$\overline{R}^2 = 1 - \frac{s^2}{\tilde{\sigma}_y^2} = 1 - \frac{(n-1)\sum_{i=1}^n \hat{e}_i^2}{(n-k)\sum_{i=1}^n (y_i - \overline{y})^2}.$$
While $\overline{R}^2$ is an improvement on $R^2$, a much better improvement is
$$\widetilde{R}^2 = 1 - \frac{\sum_{i=1}^n \tilde{e}_i^2}{\sum_{i=1}^n (y_i - \overline{y})^2} = 1 - \frac{\tilde{\sigma}^2}{\hat{\sigma}_y^2}$$
where $\tilde{e}_i$ are the prediction errors (3.38) and $\tilde{\sigma}^2$ is the MSPE from (3.41). As described in Section (4.9), $\tilde{\sigma}^2$ is a good estimator of the out-of-sample mean-squared forecast error, so $\widetilde{R}^2$ is a good estimator of the percentage of the forecast variance which is explained by the regression forecast. In this sense, $\widetilde{R}^2$ is a good measure of fit.

One problem with $R^2$, which is partially corrected by $\overline{R}^2$ and fully corrected by $\widetilde{R}^2$, is that $R^2$ necessarily increases when regressors are added to a regression model. This occurs because $R^2$ is a negative function of the sum of squared residuals which cannot increase when a regressor is added. In contrast, $\overline{R}^2$ and $\widetilde{R}^2$ are non-monotonic in the number of regressors. $\widetilde{R}^2$ can even be negative, which occurs when an estimated model predicts worse than a constant-only model.

In the statistical literature the MSPE $\tilde{\sigma}^2$ is known as the leave-one-out cross validation criterion, and is popular for model comparison and selection, especially in high-dimensional (nonparametric) contexts. It is equivalent to use $\widetilde{R}^2$ or $\tilde{\sigma}^2$ to compare and select models. Models with high $\widetilde{R}^2$ (or low $\tilde{\sigma}^2$) are better models in terms of expected out of sample squared error. In contrast, $R^2$ cannot be used for model selection, as it necessarily increases when regressors are added to a regression model. $\overline{R}^2$ is also an inappropriate choice for model selection (it tends to select models with too many parameters), though a justification of this assertion requires a study of the theory of model selection.

In summary, it is recommended to calculate and report $\widetilde{R}^2$ and/or $\tilde{\sigma}^2$ in regression analysis, and omit $R^2$ and $\overline{R}^2$.
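A sketch of the three fit measures (again using the hypothetical y, e_hat, e_tilde, n and k from the earlier snippets):

    sst = np.sum((y - y.mean())**2)
    R2       = 1 - np.sum(e_hat**2) / sst
    R2_bar   = 1 - (np.sum(e_hat**2) / (n - k)) / (sst / (n - 1))
    R2_tilde = 1 - np.sum(e_tilde**2) / sst       # based on leave-one-out prediction errors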

Henri Theil

Henri Theil (1924-2000) of Holland invented $\overline{R}^2$ and two-stage least squares, both of which are routinely seen in applied econometrics. He also wrote an early and influential advanced textbook on econometrics (Theil, 1971).

4.14 Empirical Example

We again return to our wage equation, but use an extended sample of non-military wage earners with at least 12 years of education. For regressors we include years of education, potential work experience, experience squared, and dummy variable indicators for the following: female, female union member, male union member, female married, male married, hispanic, and non-white. The available sample is 46,943 so the parameter estimates are quite precise and are reported in Table 5.1.

Table 5.1 displays the parameter estimates in a standard tabular format. The table clearly states the estimation method (OLS), the dependent variable (log(Wage)), and the regressors are clearly labeled. Both parameter estimates and standard errors are reported for all coefficients. In addition to the coefficient estimates, the table also reports the estimated error standard deviation and the sample size. These are useful summary measures of fit which aid readers.

Table 5.1
OLS Estimates of Linear Equation for Log(Wage)

                            $\hat{\beta}$    $s(\hat{\beta})$
Intercept                      0.915            0.021
Education                      0.118            0.001
Experience                     0.034            0.001
Experience$^2$/100            -0.057            0.002
Female                        -0.129            0.009
Female Union Member            0.022            0.020
Male Union Member              0.095            0.020
Married Female                 0.016            0.008
Married Male                   0.180            0.008
Hispanic                      -0.110            0.008
Non-White                     -0.075            0.007
$\hat{\sigma}$                 0.5659
Sample Size                   46,943

Note: Standard errors are heteroskedasticity-consistent.

As a general rule, it is advisable to always report standard errors along with parameter estimates. This allows readers to assess the precision of the parameter estimates, and as we will discuss in later chapters, form confidence intervals and t-tests for individual coefficients if desired.

The results in Table 5.1 confirm our earlier findings that the return to a year of education is approximately 12%, the return to experience is concave, that women earn approximately 13% less than men, and non-whites earn about 7% less than whites. In addition, we see that there are wage premiums for being a member of a labor union or being married, but the premiums appear to be much larger for men than for women.

4.15 Multicollinearity

If $X'X$ is singular, then $(X'X)^{-1}$ and $\hat{\beta}$ are not defined. This situation is called strict multicollinearity, as the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that $X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $X$ includes both the logs of two prices and the log of the relative prices, $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$, then $X'X$ will necessarily be singular. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.

The more relevant situation is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $X'X$ matrix is near singular, when the columns of $X$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.

One potential complication of near singularity of matrices is that the numerical reliability of the calculations may be reduced. In practice this is rarely an important concern, except when the number of regressors is very large.

A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors
$$y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,$$
and
$$\frac{1}{n}X'X = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$
In this case
$$\operatorname{var}\left( \hat{\beta} \mid X \right) = \frac{\sigma^2}{n}\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}^{-1} = \frac{\sigma^2}{n(1 - \rho^2)}\begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.$$
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the variance of a coefficient estimate $\sigma^2\left[ n(1 - \rho^2) \right]^{-1}$ approaches infinity as $\rho$ approaches 1. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.

What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of individual estimates is reduced. The imprecision, however, will be reflected by large standard errors, so there is no distortion in inference.

Some earlier textbooks overemphasized a concern about multicollinearity. A very amusing parody of these texts appeared in Chapter 23.3 of Goldberger's A Course in Econometrics (1991), which is reprinted below. To understand his basic point, you should notice how the estimation variance $\sigma^2\left[ n(1 - \rho^2) \right]^{-1}$ depends equally and symmetrically on the correlation $\rho$ and the sample size $n$.
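A two-line illustration of this formula (hypothetical correlation values; it simply evaluates the factor $\left[ 1 - \rho^2 \right]^{-1}$ by which the coefficient variance is inflated relative to uncorrelated regressors):

    import numpy as np
    rho = np.array([0.0, 0.9, 0.99])
    var_factor = 1.0 / (1.0 - rho**2)    # variance inflation relative to rho = 0
    print(var_factor)                    # prints approximately [1, 5.26, 50.25]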

Arthur S. Goldberger

Art Goldberger (1930-2009) was one of the most distinguished members of the Department of Economics at the University of Wisconsin. His PhD thesis developed an early macroeconometric forecasting model (known as the Klein-Goldberger model) but most of his career focused on microeconometric issues. He was the leading pioneer of what has been called the "Wisconsin Tradition" of empirical work, a combination of formal econometric theory with a careful critical analysis of empirical work. Goldberger wrote a series of highly regarded and influential graduate econometric textbooks, including Econometric Theory (1964), Topics in Regression Analysis (1968), and A Course in Econometrics (1991).

Micronumerosity
Arthur S. Goldberger
A Course in Econometrics (1991), Chapter 23.3

Econometrics texts devote many pages to the problem of multicollinearity in multiple regression, but they say little about the closely analogous problem of small sample size in estimating a univariate mean. Perhaps that imbalance is attributable to the lack of an exotic polysyllabic name for "small sample size." If so, we can remove that impediment by introducing the term micronumerosity.

Suppose an econometrician set out to write a chapter about small sample size in sampling from a univariate population. Judging from what is now written about multicollinearity, the chapter might look like this:

1. Micronumerosity
The extreme case, "exact micronumerosity," arises when n = 0, in which case the sample estimate of µ is not unique. (Technically, there is a violation of the rank condition n > 0: the matrix 0 is singular.) The extreme case is easy enough to recognize. "Near micronumerosity" is more subtle, and yet very serious. It arises when the rank condition n > 0 is barely satisfied. Near micronumerosity is very prevalent in empirical economics.

2. Consequences of micronumerosity
The consequences of micronumerosity are serious. Precision of estimation is reduced. There are two aspects of this reduction: estimates of µ may have large errors, and not only that, but Vȳ will be large.
Investigators will sometimes be led to accept the hypothesis µ = 0 because ȳ/σ̂ȳ is small, even though the true situation may be not that µ = 0 but simply that the sample data have not enabled us to pick µ up.
The estimate of µ will be very sensitive to sample data, and the addition of a few more observations can sometimes produce drastic shifts in the sample mean.
The true µ may be sufficiently large for the null hypothesis µ = 0 to be rejected, even though Vȳ = σ²/n is large because of micronumerosity. But if the true µ is small (although nonzero) the hypothesis µ = 0 may mistakenly be accepted.

3. Testing for micronumerosity
Tests for the presence of micronumerosity require the judicious use of various fingers. Some researchers prefer a single finger, others use their toes, still others let their thumbs rule.
A generally reliable guide may be obtained by counting the number of observations. Most of the time in econometric analysis, when n is close to zero, it is also far from infinity.
Several test procedures develop critical values n*, such that micronumerosity is a problem only if n is smaller than n*. But those procedures are questionable.

4. Remedies for micronumerosity
If micronumerosity proves serious in the sense that the estimate of µ has an unsatisfactorily low degree of precision, we are in the statistical position of not being able to make bricks without straw. The remedy lies essentially in the acquisition, if possible, of larger samples from the same population.
But more data are no remedy for micronumerosity if the additional data are simply "more of the same." So obtaining lots of small samples from the same population will not help.

4.16 Normal Regression Model

In the special case of the normal linear regression model introduced in Section 3.17, we can derive exact sampling distributions for the least-squares estimator, residuals, and variance estimator.

In particular, under the normality assumption $e_i \mid x_i \sim N(0, \sigma^2)$ then we have the multivariate implication
$$e \mid X \sim N\left( 0, I_n\sigma^2 \right).$$
That is, the error vector $e$ is independent of $X$ and is normally distributed. Since linear functions of normals are also normal, this implies that conditional on $X$
$$\begin{pmatrix} \hat{\beta} - \beta \\ \hat{e} \end{pmatrix} = \begin{pmatrix} (X'X)^{-1}X' \\ M \end{pmatrix}e \sim N\left( 0, \begin{pmatrix} \sigma^2(X'X)^{-1} & 0 \\ 0 & \sigma^2M \end{pmatrix} \right)$$
where $M = I_n - X(X'X)^{-1}X'$. Since uncorrelated jointly normal random variables are independent, it follows that $\hat{\beta}$ is independent of any function of the OLS residuals including the estimated error variance $s^2$ or $\hat{\sigma}^2$ or prediction errors $\tilde{e}$.

The spectral decomposition (see equation (A.5)) of $M$ yields
$$M = H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'$$
where $H'H = I_n$. Let $u = \sigma^{-1}H'e \sim N(0, H'H) = N(0, I_n)$. Then
$$\frac{n\hat{\sigma}^2}{\sigma^2} = \frac{(n-k)s^2}{\sigma^2} = \frac{1}{\sigma^2}\hat{e}'\hat{e} = \frac{1}{\sigma^2}e'Me = \frac{1}{\sigma^2}e'H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'e = u'\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}u \sim \chi^2_{n-k},$$
a chi-square distribution with $n - k$ degrees of freedom. Furthermore, if standard errors are calculated using the homoskedastic formula (4.26),
$$\frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} = \frac{\hat{\beta}_j - \beta_j}{\sqrt{s^2\left[ (X'X)^{-1} \right]_{jj}}} = \frac{N\left( 0, \sigma^2\left[ (X'X)^{-1} \right]_{jj} \right)}{\sqrt{\frac{\sigma^2}{n-k}\chi^2_{n-k}\left[ (X'X)^{-1} \right]_{jj}}} = \frac{N(0,1)}{\sqrt{\frac{\chi^2_{n-k}}{n-k}}} \sim t_{n-k},$$
a t distribution with $n - k$ degrees of freedom.

Theorem 4.16.1 Normal Regression
In the linear regression model (Assumption 4.3.1) if $e_i$ is independent of $x_i$ and distributed $N(0, \sigma^2)$ then
$$\hat{\beta} - \beta \sim N\left( 0, \sigma^2(X'X)^{-1} \right),$$
$$\frac{n\hat{\sigma}^2}{\sigma^2} = \frac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k},$$
$$\frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} \sim t_{n-k}.$$

These are the exact finite-sample distributions of the least-squares estimator and variance estimators, and are the basis for traditional inference in linear regression.

While elegant, the difficulty in applying Theorem 4.16.1 is that the normality assumption is too restrictive to be empirically plausible, and therefore inference based on Theorem 4.16.1 has no guarantee of accuracy. We develop an alternative inference theory based on large sample (asymptotic) approximations in the following chapter.

William Gosset

William S. Gosset (1876-1937) of England is most famous for his derivation of the student's t distribution, published in the paper "The probable error of a mean" in 1908. At the time, Gosset worked at Guinness Brewery, which prohibited its employees from publishing in order to prevent the possible loss of trade secrets. To circumvent this barrier, Gosset published under the pseudonym "Student". Consequently, this famous distribution is known as the student's t rather than Gosset's t!

Exercises

Exercise 4.1 Explain the difference between $\frac{1}{n}\sum_{i=1}^n x_i x_i'$ and $E\left( x_i x_i' \right)$.

Exercise 4.2 True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $E(e_i \mid x_i) = 0$, and $\hat{e}_i$ is the OLS residual from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^n x_i^2\hat{e}_i = 0$.

Exercise 4.3 Prove Theorem 4.6.1.2.

Exercise 4.4 In a linear model
$$y = X\beta + e, \qquad E(e \mid X) = 0, \qquad \operatorname{var}(e \mid X) = \sigma^2\Sigma$$
with $\Sigma$ known, the GLS estimator is
$$\tilde{\beta} = \left( X'\Sigma^{-1}X \right)^{-1}\left( X'\Sigma^{-1}y \right),$$
the residual vector is $\hat{e} = y - X\tilde{\beta}$, and an estimate of $\sigma^2$ is
$$s^2 = \frac{1}{n-k}\hat{e}'\Sigma^{-1}\hat{e}.$$

(a) Find $E\left( \tilde{\beta} \mid X \right)$.

(b) Find $\operatorname{var}\left( \tilde{\beta} \mid X \right)$.

(c) Prove that $\hat{e} = M_1 e$, where $M_1 = I - X\left( X'\Sigma^{-1}X \right)^{-1}X'\Sigma^{-1}$.

(d) Prove that $M_1'\Sigma^{-1}M_1 = \Sigma^{-1} - \Sigma^{-1}X\left( X'\Sigma^{-1}X \right)^{-1}X'\Sigma^{-1}$.

(e) Find $E\left( s^2 \mid X \right)$.

(f) Is $s^2$ a reasonable estimator for $\sigma^2$?

Exercise 4.5 Let $(y_i, x_i)$ be a random sample with $E(y \mid X) = X\beta$. Consider the Weighted Least Squares (WLS) estimator of $\beta$
$$\tilde{\beta} = \left( X'WX \right)^{-1}\left( X'Wy \right)$$
where $W = \operatorname{diag}(w_1, \ldots, w_n)$ and $w_i = x_{ji}^{-2}$, where $x_{ji}$ is one of the $x_i$.

(a) In which contexts would $\tilde{\beta}$ be a good estimator?

(b) Using your intuition, in which situations would you expect that $\tilde{\beta}$ would perform better than OLS?

Exercise 4.6 Show (4.23) in the homoskedastic regression model.

Exercise 4.7 Prove (4.29).

Exercise 4.8 Show (4.30) and (4.31) in the homoskedastic regression model.

Chapter 5

Asymptotics

5.1 Introduction

In Chapter 4 we derived the mean and variance of the least-squares estimator in the context of the linear regression model, but this is not a complete description of the sampling distribution, nor sufficient for inference (confidence intervals and hypothesis testing) on the unknown parameters. Furthermore, the theory does not apply in the context of the linear projection model, which is more relevant for empirical applications.

To illustrate the situation with an example, let $y_i$ and $x_i$ be drawn from the joint density
$$f(x, y) = \frac{1}{2\pi xy}\exp\left( -\frac{1}{2}\left( \log y - \log x \right)^2 - \frac{1}{2}\left( \log x \right)^2 \right)$$
and let $\hat{\beta}$ be the slope coefficient estimate from a least-squares regression of $y_i$ on $x_i$ and a constant. Using simulation methods, the density function of $\hat{\beta}$ was computed and plotted in Figure 5.1 for sample sizes of $n = 25$, $n = 100$ and $n = 800$. The vertical line marks the true projection coefficient.

From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. Is there a simple way to characterize the sampling distribution of $\hat{\beta}$?

In principle the sampling distribution of $\hat{\beta}$ is a function of the joint distribution of $(y_i, x_i)$ and the sample size $n$, but in practice this function is extremely complicated so it is not feasible to analytically calculate the exact distribution of $\hat{\beta}$ except in very special cases. Therefore we typically rely on approximation methods.

The most widely used and versatile method is asymptotic theory, which approximates sampling distributions by taking the limit of the finite sample distribution as the sample size $n$ tends to infinity. It is important to understand that this is an approximation technique, as the asymptotic distributions are used to assess the finite sample distributions of our estimators in actual practical samples. The primary tools of asymptotic theory are the weak law of large numbers (WLLN), central limit theorem (CLT), and continuous mapping theorem (CMT). With these tools we can approximate the sampling distributions of most econometric estimators.

In this chapter we provide a concise summary. It will be useful for most students to review this material, even if most is familiar.

5.2 Asymptotic Limits

There is more than one method to take limits, but the most common method in statistics and econometrics is to approximate sampling distributions by taking the limit as the sample size tends to positive infinity, written "as $n \to \infty$." It is not meant to be interpreted literally, but rather as an approximating device.

The first building block for asymptotic analysis is the concept of a limit of a sequence.

Definition 5.2.1 A sequence $a_n$ has the limit $a$, written $a_n \to a$ as $n \to \infty$, or alternatively as $\lim_{n\to\infty} a_n = a$, if for all $\delta > 0$ there is some $n_\delta < \infty$ such that for all $n \geq n_\delta$, $|a_n - a| \leq \delta$.

In words, $a_n$ has the limit $a$ if the sequence gets closer and closer to $a$ as $n$ gets larger. If a sequence has a limit, that limit is unique (a sequence cannot have two distinct limits). If $a_n$ has the limit $a$, we also say that $a_n$ converges to $a$ as $n \to \infty$.

Not all sequences have limits. For example, the sequence $\{1, 2, 1, 2, 1, 2, \ldots\}$ does not have a limit. It is therefore sometimes useful to have a more general definition of limits which always exist, and these are the limit superior and limit inferior of a sequence.

Definition 5.2.2 $\liminf_{n\to\infty} a_n = \lim_{n\to\infty}\inf_{m \geq n} a_m$.

Definition 5.2.3 $\limsup_{n\to\infty} a_n = \lim_{n\to\infty}\sup_{m \geq n} a_m$.

The limit inferior and limit superior always exist, and equal the limit when the limit exists. In the example given earlier, the limit inferior of $\{1, 2, 1, 2, 1, 2, \ldots\}$ is 1, and the limit superior is 2.

5.3

100

Convergence in Probability

A sequence of numbers may converge to a limit, but what about a sequence of random variables? For example, consider a sample mean $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$ based on a random sample of $n$ observations. As $n$ increases, the distribution of $\bar{y}$ changes. In what sense can we describe the limit of $\bar{y}$? In what sense does it converge?

Since $\bar{y}$ is a random variable, we cannot directly apply the deterministic concept of a sequence of numbers. Instead, we require a definition of convergence which is appropriate for random variables. There is more than one such definition, but the most commonly used is called convergence in probability.

Definition 5.3.1 A random variable $z_n \in \mathbb{R}$ converges in probability to $z$ as $n \to \infty$, denoted $z_n \xrightarrow{p} z$, or alternatively $\operatorname{plim}_{n\to\infty} z_n = z$, if for all $\delta > 0$,
$$\lim_{n\to\infty} \Pr\left(|z_n - z| \leq \delta\right) = 1. \qquad (5.1)$$

The definition looks quite abstract, but it formalizes the concept of the sequence of random variables concentrating about a point. The event $\{|z_n - z| \leq \delta\}$ occurs when $z_n$ is within $\delta$ of the point $z$, and $\Pr(|z_n - z| \leq \delta)$ is the probability of this event. Equation (5.1) states that this probability approaches 1 as the sample size $n$ increases. The definition of convergence in probability requires that this holds for any $\delta$. So for any small interval about $z$ the distribution of $z_n$ concentrates within this interval for large $n$.

You may notice that the definition concerns the distribution of the random variables $z_n$, not their realizations. Furthermore, notice that the definition uses the concept of a conventional (deterministic) limit, but the latter is applied to a sequence of probabilities, not directly to the random variables $z_n$ or their realizations.

When $z_n \xrightarrow{p} z$ we call $z$ the probability limit (or plim) of $z_n$.

Two comments about the notation are worth mentioning. First, it is conventional to write the convergence symbol as $\xrightarrow{p}$, where the $p$ above the arrow indicates that the convergence is "in probability". You should try and adhere to this notation, and not simply write $z_n \to z$. Second, it is also important to include the phrase "as $n \to \infty$" to be specific about how the limit is obtained.

It is common to confuse convergence in probability with convergence in expectation:
$$\mathrm{E} z_n \to \mathrm{E} z. \qquad (5.2)$$
They are related but distinct concepts. Neither (5.1) nor (5.2) implies the other.

To see the distinction it might be helpful to think through a stylized example. Consider a discrete random variable $z_n$ which takes the value 0 with probability $1 - n^{-1}$ and the value $a_n \neq 0$ with probability $n^{-1}$, or
$$\Pr\left(z_n = 0\right) = 1 - \frac{1}{n}, \qquad \Pr\left(z_n = a_n\right) = \frac{1}{n}. \qquad (5.3)$$
The probability distribution of $z_n$ concentrates at zero as $n$ increases, regardless of the sequence $a_n$. You can check that $z_n \xrightarrow{p} 0$ as $n \to \infty$.

In this example we can also calculate that the expectation of $z_n$ is
$$\mathrm{E} z_n = \frac{a_n}{n}.$$
Despite the fact that $z_n$ converges in probability to zero, its expectation will not decrease to zero unless $a_n / n \to 0$. If $a_n$ diverges to infinity at a rate equal to $n$ (or faster) then $\mathrm{E} z_n$ will not converge to zero. For example, if $a_n = n$, then $\mathrm{E} z_n = 1$ for all $n$, even though $z_n \xrightarrow{p} 0$. This example might seem a bit artificial, but the point is that the concepts of convergence in probability and convergence in expectation are distinct, so it is important not to confuse one with the other.

Another common source of confusion with the notation surrounding probability limits is that the expression to the right of the arrow "$\xrightarrow{p}$" must be free of dependence on the sample size $n$. Thus expressions of the form $z_n \xrightarrow{p} c_n$ are notationally meaningless and should not be used.
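The stylized example (5.3) is easy to check numerically. The following minimal Python sketch (assuming only NumPy; the replication count and seed are arbitrary choices) draws $z_n$ with $a_n = n$ and reports the empirical frequency that $|z_n| \leq \delta$ together with the sample average: the first tends to one while the second stays near one rather than zero.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1

for n in [10, 100, 1000, 10000]:
    # z_n = 0 with prob 1 - 1/n, z_n = a_n = n with prob 1/n  (equation 5.3)
    draws = rng.random(100_000)                # 100,000 Monte Carlo replications
    z = np.where(draws < 1.0 / n, n, 0.0)
    prob_close = np.mean(np.abs(z) <= delta)   # estimates Pr(|z_n - 0| <= delta)
    mean_z = np.mean(z)                        # estimates E z_n = a_n / n = 1
    print(f"n={n:6d}  Pr(|z_n|<=delta) = {prob_close:.4f}   E z_n = {mean_z:.3f}")
```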

5.4 Weak Law of Large Numbers

In large samples we expect parameter estimates to be close to the population values. For example, in Section 4.2 we saw that the sample mean $\bar{y}$ is unbiased for $\mu = \mathrm{E} y$ and has variance $\sigma^2/n$. As $n$ gets large its variance decreases and thus the distribution of $\bar{y}$ concentrates about the population mean $\mu$. It turns out that this implies that the sample mean converges in probability to the population mean.

When $y$ has a finite variance there is a fairly straightforward proof by applying Chebyshev's inequality.

Theorem 5.4.1 Chebyshev's Inequality. For any random variable $z_n$ and constant $\delta > 0$,
$$\Pr\left(|z_n - \mathrm{E} z_n| > \delta\right) \leq \frac{\operatorname{var}(z_n)}{\delta^2}.$$

While the proof of Chebyshev's inequality is a technical exercise in probability theory, it is quite simple so we discuss it forthwith. Let $F_n(u)$ denote the distribution of $z_n - \mathrm{E} z_n$. Then
$$\Pr\left(|z_n - \mathrm{E} z_n| > \delta\right) = \Pr\left(\left(z_n - \mathrm{E} z_n\right)^2 > \delta^2\right) = \int_{\{u^2 > \delta^2\}} dF_n(u).$$
The integral is over the event $\{u^2 > \delta^2\}$, on which $1 \leq u^2/\delta^2$. Thus
$$\int_{\{u^2 > \delta^2\}} dF_n(u) \leq \int_{\{u^2 > \delta^2\}} \frac{u^2}{\delta^2}\, dF_n(u) \leq \int \frac{u^2}{\delta^2}\, dF_n(u) = \frac{\mathrm{E}\left(z_n - \mathrm{E} z_n\right)^2}{\delta^2} = \frac{\operatorname{var}(z_n)}{\delta^2},$$
which establishes the desired inequality.

Applied to the sample mean $\bar{y}$, Chebyshev's inequality shows that for any $\delta > 0$,
$$\Pr\left(|\bar{y} - \mathrm{E}\bar{y}| > \delta\right) \leq \frac{\sigma^2}{n\delta^2}.$$
For fixed $\sigma^2$ and $\delta$, the bound on the right-hand side shrinks to zero as $n \to \infty$. Thus the probability that $\bar{y}$ is within $\delta$ of $\mathrm{E}\bar{y} = \mu$ approaches 1 as $n$ gets large, or
$$\lim_{n\to\infty} \Pr\left(|\bar{y} - \mu| \leq \delta\right) = 1.$$
This result is called the weak law of large numbers. Our derivation assumed that $y$ has a finite variance, but all that is necessary is for $y$ to have a finite mean.

Theorem 5.4.2 Weak Law of Large Numbers (WLLN). If $y_i$ are independent and identically distributed and $\mathrm{E}|y| < \infty$, then as $n \to \infty$,
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \xrightarrow{p} \mathrm{E}(y).$$

The WLLN shows that the estimator $\bar{y}$ converges in probability to the true population mean $\mu$. In general, an estimator which converges in probability to the population value is called consistent.

Definition 5.4.1 An estimator $\hat{\theta}$ of a parameter $\theta$ is consistent if $\hat{\theta} \xrightarrow{p} \theta$ as $n \to \infty$.

Consistency is a good property for an estimator to possess. It means that for any given data distribution, there is a sample size $n$ sufficiently large such that the estimator $\hat{\theta}$ will be arbitrarily close to the true value $\theta$ with high probability. Unfortunately it does not mean that $\hat{\theta}$ will actually be close to $\theta$ in a given finite sample, but it is a minimal property for an estimator to be considered a good estimator.

Theorem 5.4.3 If $y_i$ are independent and identically distributed and $\mathrm{E}|y| < \infty$, then $\hat{\mu} = \bar{y}$ is consistent for the population mean $\mu$.
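As an illustration, the following minimal Python sketch (assuming NumPy; the exponential design, sample sizes and seed are arbitrary choices) compares the empirical probability that $|\bar{y} - \mu| > \delta$ with the Chebyshev bound $\sigma^2/(n\delta^2)$, showing both shrinking toward zero as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 1.0, 1.0      # Exponential(1): mean 1, variance 1
delta = 0.1
reps = 10_000

for n in [25, 100, 400, 1600]:
    y = rng.exponential(scale=1.0, size=(reps, n))
    ybar = y.mean(axis=1)
    emp = np.mean(np.abs(ybar - mu) > delta)   # empirical Pr(|ybar - mu| > delta)
    bound = sigma2 / (n * delta**2)            # Chebyshev bound
    print(f"n={n:5d}  empirical={emp:.4f}  Chebyshev bound={bound:.4f}")
```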

5.5 Almost Sure Convergence and the Strong Law*

A related concept is almost sure convergence, also known as strong convergence. (In probability theory the term "almost sure" means "with probability equal to one". An event which is random but occurs with probability equal to one is said to be almost sure.)

Definition 5.5.1 A random variable $z_n \in \mathbb{R}$ converges almost surely to $z$ as $n \to \infty$, denoted $z_n \xrightarrow{a.s.} z$, if for every $\delta > 0$,
$$\Pr\left(\lim_{n\to\infty} |z_n - z| \leq \delta\right) = 1. \qquad (5.4)$$

The convergence (5.4) is stronger than (5.1) because it computes the probability of a limit rather than the limit of a probability. Almost sure convergence is stronger than convergence in probability in the sense that $z_n \xrightarrow{a.s.} z$ implies $z_n \xrightarrow{p} z$.

In the example (5.3) of Section 5.3, the sequence $z_n$ converges in probability to zero for any sequence $a_n$, but this is not sufficient for $z_n$ to converge almost surely. In order for $z_n$ to converge to zero almost surely, it is necessary that $a_n \to 0$.

In the random sampling context the sample mean can be shown to converge almost surely to the population mean. This is called the strong law of large numbers.

Theorem 5.5.1 Strong Law of Large Numbers (SLLN). If $y_i$ are independent and identically distributed and $\mathrm{E}|y| < \infty$, then as $n \to \infty$,
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \xrightarrow{a.s.} \mathrm{E}(y).$$

The proof of the SLLN is technically quite advanced so is not presented here. For a proof see Billingsley (1995, Section 22) or Ash (1972, Theorem 7.2.5).

The WLLN is sufficient for most purposes in econometrics, so we will not use the SLLN in this text.

5.6 Vector-Valued Moments

Our preceding discussion focused on the case where $y$ is real-valued (a scalar), but nothing important changes if we generalize to the case where $y \in \mathbb{R}^m$ is a vector. To fix notation, the elements of $y$ are
$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}.$$
The population mean of $y$ is the vector of means of its elements,
$$\mu = \mathrm{E}(y) = \begin{pmatrix} \mathrm{E}(y_1) \\ \mathrm{E}(y_2) \\ \vdots \\ \mathrm{E}(y_m) \end{pmatrix}.$$
When working with random vectors $y$ it is convenient to measure their magnitude by their Euclidean length, which is the Euclidean norm
$$\|y\| = \left(y_1^2 + \cdots + y_m^2\right)^{1/2}, \qquad \|y\|^2 = y'y.$$
It turns out that it is equivalent to describe finiteness of moments in terms of the Euclidean norm of a vector or all individual components.

Theorem 5.6.1 For $y \in \mathbb{R}^m$, $\mathrm{E}\|y\| < \infty$ if and only if $\mathrm{E}|y_j| < \infty$ for $j = 1, ..., m$.

The $m \times m$ variance matrix of $y$ is
$$V = \operatorname{var}(y) = \mathrm{E}\left(y - \mu\right)\left(y - \mu\right)'.$$
$V$ is often called a variance-covariance matrix. You can show that the elements of $V$ are finite if $\mathrm{E}\|y\|^2 < \infty$.

A random sample $\{y_1, ..., y_n\}$ consists of $n$ independent and identically distributed draws from the distribution of $y$. (Each draw is an $m$-vector.) The vector sample mean is
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i = \begin{pmatrix} \bar{y}_1 \\ \bar{y}_2 \\ \vdots \\ \bar{y}_m \end{pmatrix},$$
the vector of sample means of the individual variables.

Convergence in probability of a vector can be defined as convergence in probability of all elements in the vector. Thus $\bar{y} \xrightarrow{p} \mu$ if and only if $\bar{y}_j \xrightarrow{p} \mu_j$ for $j = 1, ..., m$. Since the latter holds if $\mathrm{E}|y_j| < \infty$ for $j = 1, ..., m$, or equivalently $\mathrm{E}\|y\| < \infty$, we can state this formally as follows.

Theorem 5.6.2 Weak Law of Large Numbers (WLLN) for random vectors. If $y_i$ are independent and identically distributed and $\mathrm{E}\|y\| < \infty$, then as $n \to \infty$,
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \xrightarrow{p} \mathrm{E}(y).$$

5.7 Convergence in Distribution

The WLLN is a useful first step, but does not give an approximation to the distribution of an estimator. A large-sample or asymptotic approximation can be obtained using the concept of convergence in distribution.

Definition 5.7.1 Let $z_n$ be a random vector with distribution $F_n(u) = \Pr\left(z_n \leq u\right)$. We say that $z_n$ converges in distribution to $z$ as $n \to \infty$, denoted $z_n \xrightarrow{d} z$, if for all $u$ at which $F(u) = \Pr\left(z \leq u\right)$ is continuous, $F_n(u) \to F(u)$ as $n \to \infty$.

When the limit distribution $z$ is degenerate (that is, $\Pr(z = c) = 1$ for some $c$) we can write the convergence as $z_n \xrightarrow{p} c$, which is equivalent to convergence in probability.

The typical path to establishing convergence in distribution is through the central limit theorem (CLT), which states that a standardized sample average converges in distribution to a normal random vector.

Theorem 5.7.1 Lindeberg-Lévy Central Limit Theorem (CLT). If $y_i$ are independent and identically distributed and $\mathrm{E}\|y\|^2 < \infty$, then as $n \to \infty$,
$$\sqrt{n}\left(\bar{y} - \mu\right) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(y_i - \mu\right) \xrightarrow{d} \mathrm{N}\left(0, V\right)$$
where $\mu = \mathrm{E} y$ and $V = \mathrm{E}\left(y - \mu\right)\left(y - \mu\right)'$.

The standardized sum $z_n = \sqrt{n}\left(\bar{y}_n - \mu\right)$ has mean zero and variance $V$. What the CLT adds is that the variable $z_n$ is also approximately normally distributed, and that the normal approximation improves as $n$ increases.

The CLT is one of the most powerful and mysterious results in statistical theory. It shows that the simple process of averaging induces normality. The first version of the CLT (for the number of heads resulting from many tosses of a fair coin) was established by the French mathematician Abraham de Moivre in an article published in 1733. This was extended to cover an approximation to the binomial distribution in 1812 by Pierre-Simon Laplace in his book Théorie Analytique des Probabilités, and the most general statements are credited to articles by the Russian mathematician Aleksandr Lyapunov (1901) and the Finnish mathematician Jarl Waldemar Lindeberg (1920, 1922). The above statement is known as the classic (or Lindeberg-Lévy) CLT due to contributions by Lindeberg (1920) and the French mathematician Paul Pierre Lévy.

A more general version which does not require the restriction to identical distributions was provided by Lindeberg (1922).

Theorem 5.7.2 Lindeberg Central Limit Theorem. Suppose that $y_i$ are independent but not necessarily identically distributed with finite means $\mu_i = \mathrm{E} y_i$ and variances $\sigma_i^2 = \mathrm{E}\left(y_i - \mu_i\right)^2$. Set $\sigma_n^2 = \sum_{i=1}^{n}\sigma_i^2$. If for all $\varepsilon > 0$
$$\lim_{n\to\infty} \frac{1}{\sigma_n^2}\sum_{i=1}^{n}\mathrm{E}\left(\left(y_i - \mu_i\right)^2 \mathbf{1}\left(|y_i - \mu_i| \geq \varepsilon\sigma_n\right)\right) = 0 \qquad (5.5)$$
then
$$\frac{1}{\sigma_n}\sum_{i=1}^{n}\left(y_i - \mu_i\right) \xrightarrow{d} \mathrm{N}(0, 1).$$

Equation (5.5) is known as Lindeberg's condition. A standard method to verify (5.5) is via Lyapunov's condition: For some $\delta > 0$,
$$\lim_{n\to\infty} \frac{1}{\sigma_n^{2+\delta}}\sum_{i=1}^{n}\mathrm{E}\left|y_i - \mu_i\right|^{2+\delta} = 0. \qquad (5.6)$$
It is easy to verify that (5.6) implies (5.5), and (5.6) is often easy to verify. For example, if $\sup_i \mathrm{E}\left|y_i - \mu_i\right|^{3} < \infty$ and $\inf_i \sigma_i^2 \geq c > 0$ then
$$\frac{1}{\sigma_n^{3}}\sum_{i=1}^{n}\mathrm{E}\left|y_i - \mu_i\right|^{3} \leq \frac{n\,\sup_i \mathrm{E}\left|y_i - \mu_i\right|^{3}}{\left(nc\right)^{3/2}} \to 0$$
so (5.6) is satisfied.
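To make the classic CLT concrete, the short Python simulation below (assuming only NumPy; the exponential design and quantile cutoff are arbitrary choices) standardizes sample means of skewed data and compares a tail probability against the N(0,1) value, showing the approximation improving as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.0, 1.0           # Exponential(1) has mean 1 and variance 1
reps = 50_000

for n in [5, 30, 200]:
    y = rng.exponential(1.0, size=(reps, n))
    z = np.sqrt(n) * (y.mean(axis=1) - mu) / sigma   # standardized sample mean
    # Compare Pr(z <= -1.645) with the N(0,1) value 0.05
    print(f"n={n:4d}  Pr(z <= -1.645) = {np.mean(z <= -1.645):.4f}  (normal: 0.0500)")
```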

5.8 Higher Moments

Many parameters of interest can be written as moments of a random vector $y$. That is, $\mu$ can be written as
$$\mu = \mathrm{E}\, h(y)$$
for some function $h : \mathbb{R}^m \to \mathbb{R}^k$. For example, the second moment of $y$ is $\mathrm{E} y^2$, the $k$'th is $\mathrm{E} y^k$, the moment generating function is $\mathrm{E}\exp(ty)$, and the distribution function is $\mathrm{E}\,\mathbf{1}\{y \leq x\}$.

Estimating parameters of this form fits into our previous analysis by defining the random variable $z = h(y)$, for then $\mu = \mathrm{E} z$ is just a simple moment of $z$. This suggests the moment estimator
$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} z_i = \frac{1}{n}\sum_{i=1}^{n} h(y_i).$$
For example, the moment estimator of $\mathrm{E} y^k$ is $n^{-1}\sum_{i=1}^{n} y_i^k$; that of the moment generating function is $n^{-1}\sum_{i=1}^{n}\exp(ty_i)$; and for the distribution function the estimator is $n^{-1}\sum_{i=1}^{n}\mathbf{1}\{y_i \leq x\}$.

Since $\hat{\mu}$ is a sample average, and transformations of iid variables are also iid, the asymptotic results of the previous sections immediately apply.

Theorem 5.8.1 If $y_i$ are independent and identically distributed, $\mu = \mathrm{E}\, h(y)$, and $\mathrm{E}\|h(y)\| < \infty$, then for $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} h(y_i)$, as $n \to \infty$, $\hat{\mu} \xrightarrow{p} \mu$.

Theorem 5.8.2 If $y_i$ are independent and identically distributed, $\mu = \mathrm{E}\, h(y)$, and $\mathrm{E}\|h(y)\|^2 < \infty$, then for $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} h(y_i)$, as $n \to \infty$,
$$\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}\left(0, V\right)$$
where $V = \mathrm{E}\left(h(y) - \mu\right)\left(h(y) - \mu\right)'$.

Theorems 5.8.1 and 5.8.2 show that the estimate $\hat{\mu}$ is consistent for $\mu$ and asymptotically normally distributed, so long as the stated moment conditions hold.

A word of caution. Theorems 5.8.1 and 5.8.2 give the impression that it is possible to estimate any moment of $y$. Technically this is the case, so long as that moment is finite. What is hidden by the notation, however, is that estimates of high order moments can be quite imprecise. For example, consider the sample 8'th moment $\hat{\mu}_8 = \frac{1}{n}\sum_{i=1}^{n} y_i^8$, and suppose for simplicity that $y$ is $\mathrm{N}(0,1)$. Then we can calculate that $\operatorname{var}\left(\hat{\mu}_8\right) = n^{-1}\left(\mathrm{E} y^{16} - \left(\mathrm{E} y^8\right)^2\right)$, which is huge, even for large $n$! In general, higher-order moments are challenging to estimate because their variance depends upon even higher moments which can be quite large in some cases.
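To get a feel for this imprecision, the Python sketch below (assuming only NumPy; the sample size and replication count are arbitrary) compares the Monte Carlo standard deviation of the sample mean with that of the sample 8'th moment for standard normal data at the same sample size.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 1000, 20_000

y = rng.standard_normal(size=(reps, n))
m1 = y.mean(axis=1)              # sample mean (1st moment)
m8 = (y**8).mean(axis=1)         # sample 8th moment; true value E y^8 = 105

print("sd of sample mean      :", m1.std())
print("sd of sample 8th moment:", m8.std())
```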

5.9 Functions of Moments

We now expand our investigation and consider estimation of parameters which can be written as a continuous function of $\mu = \mathrm{E}\, h(y)$. That is, the parameter of interest can be written as
$$\beta = g(\mu) = g\left(\mathrm{E}\, h(y)\right) \qquad (5.7)$$
for some functions $g$ and $h$.

As one example, the geometric mean of wages $w$ is
$$\gamma = \exp\left(\mathrm{E}\left(\log(w)\right)\right). \qquad (5.8)$$

A simple yet common example is the variance
$$\sigma^2 = \mathrm{E}\left(w - \mathrm{E} w\right)^2 = \mathrm{E} w^2 - \left(\mathrm{E} w\right)^2.$$
This is (5.7) with
$$h(w) = \begin{pmatrix} w \\ w^2 \end{pmatrix}$$
and
$$g\left(\mu_1, \mu_2\right) = \mu_2 - \mu_1^2.$$

A third example is the skewness of the wage distribution
$$sk = \frac{\mathrm{E}\left(w - \mathrm{E} w\right)^3}{\left(\mathrm{E}\left(w - \mathrm{E} w\right)^2\right)^{3/2}}.$$
This is (5.7) with
$$h(w) = \begin{pmatrix} w \\ w^2 \\ w^3 \end{pmatrix}$$
and
$$g\left(\mu_1, \mu_2, \mu_3\right) = \frac{\mu_3 - 3\mu_2\mu_1 + 2\mu_1^3}{\left(\mu_2 - \mu_1^2\right)^{3/2}}. \qquad (5.9)$$

The parameter $\beta = g(\mu)$ is not a population moment, so it does not have a direct moment estimator. Instead, it is common to use a plug-in estimate formed by replacing the unknown $\mu$ with its point estimate $\hat{\mu}$ and then plugging this into the expression for $\beta$. The first step is
$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} h(y_i)$$
and the second step is
$$\hat{\beta} = g\left(\hat{\mu}\right).$$

For example, the plug-in estimate of the geometric mean of the wage distribution from (5.8) is
$$\hat{\gamma} = \exp(\hat{\mu}) \qquad \text{with} \qquad \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n}\log\left(wage_i\right).$$
The plug-in estimate of the variance is
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} w_i\right)^2 = \frac{1}{n}\sum_{i=1}^{n}\left(w_i - \bar{w}\right)^2,$$
and the plug-in estimate of the skewness is
$$\widehat{sk} = \frac{\hat{\mu}_3 - 3\hat{\mu}_2\hat{\mu}_1 + 2\hat{\mu}_1^3}{\left(\hat{\mu}_2 - \hat{\mu}_1^2\right)^{3/2}} = \frac{\frac{1}{n}\sum_{i=1}^{n}\left(w_i - \bar{w}\right)^3}{\left(\frac{1}{n}\sum_{i=1}^{n}\left(w_i - \bar{w}\right)^2\right)^{3/2}}$$
where
$$\hat{\mu}_j = \frac{1}{n}\sum_{i=1}^{n} w_i^j.$$

A useful property is that continuous functions are limit-preserving.

Theorem 5.9.1 Continuous Mapping Theorem (CMT). If $z_n \xrightarrow{p} c$ as $n \to \infty$ and $g(\cdot)$ is continuous at $c$, then $g(z_n) \xrightarrow{p} g(c)$ as $n \to \infty$.

For example, if $z_n \xrightarrow{p} c$ as $n \to \infty$ then
$$z_n + a \xrightarrow{p} c + a, \qquad a z_n \xrightarrow{p} ac, \qquad z_n^2 \xrightarrow{p} c^2,$$
as the functions $g(u) = u + a$, $g(u) = au$, and $g(u) = u^2$ are continuous. Also
$$\frac{a}{z_n} \xrightarrow{p} \frac{a}{c}$$
if $c \neq 0$.

Recall from Theorem 5.8.1 that if $y_i$ are independent and identically distributed, $\mu = \mathrm{E}\, h(y)$, and $\mathrm{E}\|h(y)\| < \infty$, then for $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} h(y_i)$, $\hat{\mu} \xrightarrow{p} \mu$ as $n \to \infty$. Combining this with the CMT yields the consistency of plug-in estimators.

Theorem 5.9.2 If $y_i$ are independent and identically distributed, $\beta = g\left(\mathrm{E}\, h(y)\right)$, $\mathrm{E}\|h(y)\| < \infty$, and $g(u)$ is continuous at $u = \mu$, then for $\hat{\beta} = g\left(\frac{1}{n}\sum_{i=1}^{n} h(y_i)\right)$, as $n \to \infty$, $\hat{\beta} \xrightarrow{p} \beta$.

To apply Theorem 5.9.2 it is necessary to check if the function $g$ is continuous at $\mu$. In our first example $g(u) = \exp(u)$ is continuous everywhere. It therefore follows from Theorem 5.6.2 and Theorem 5.9.2 that if $\mathrm{E}\left|\log(wage)\right| < \infty$ then as $n \to \infty$, $\hat{\gamma} \xrightarrow{p} \gamma$.

In the example of the variance, $g$ is continuous for all $\mu$. Thus if $\mathrm{E} w^2 < \infty$ then as $n \to \infty$, $\hat{\sigma}^2 \xrightarrow{p} \sigma^2$.

In our third example $g$ defined in (5.9) is continuous for all $\mu$ such that $\operatorname{var}(w) = \mu_2 - \mu_1^2 > 0$, which holds unless $w$ has a degenerate distribution. Thus if $\mathrm{E}|w|^3 < \infty$ and $\operatorname{var}(w) > 0$ then as $n \to \infty$, $\widehat{sk} \xrightarrow{p} sk$.
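A small numerical sketch of these plug-in estimators (in Python, assuming NumPy; the simulated log-normal "wages" are purely stand-in data) computes the geometric mean (5.8) and the skewness (5.9) by plugging sample moments into $g$.

```python
import numpy as np

rng = np.random.default_rng(4)
wage = np.exp(rng.normal(loc=3.0, scale=0.5, size=5000))   # hypothetical wage data

# Geometric mean: gamma-hat = exp( mean of log(wage) ), equation (5.8)
gamma_hat = np.exp(np.mean(np.log(wage)))

# Skewness via the plug-in formula (5.9): g(mu1, mu2, mu3)
mu1, mu2, mu3 = (np.mean(wage**j) for j in (1, 2, 3))
sk_hat = (mu3 - 3 * mu2 * mu1 + 2 * mu1**3) / (mu2 - mu1**2) ** 1.5

print("geometric mean estimate:", gamma_hat)
print("skewness estimate      :", sk_hat)
```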

5.10 Delta Method

In this section we introduce two tools, an extended version of the CMT and the Delta Method, which allow us to calculate the asymptotic distribution of the parameter estimate $\hat{\beta}$.

We first present an extended version of the continuous mapping theorem which allows convergence in distribution.

Theorem 5.10.1 Continuous Mapping Theorem. If $z_n \xrightarrow{d} z$ as $n \to \infty$ and $g : \mathbb{R}^m \to \mathbb{R}^k$ has the set of discontinuity points $D_g$ such that $\Pr\left(z \in D_g\right) = 0$, then $g(z_n) \xrightarrow{d} g(z)$ as $n \to \infty$.

For a proof of Theorem 5.10.1 see Theorem 2.3 of van der Vaart (1998). It was first proved by Mann and Wald (1943) and is therefore sometimes referred to as the Mann-Wald Theorem.

Theorem 5.10.1 allows the function $g$ to be discontinuous only if the probability of being at a discontinuity point is zero. For example, the function $g(u) = u^{-1}$ is discontinuous at $u = 0$, but if $z_n \xrightarrow{d} z \sim \mathrm{N}(0,1)$ then $\Pr(z = 0) = 0$ so $z_n^{-1} \xrightarrow{d} z^{-1}$.

A special case of the Continuous Mapping Theorem is known as Slutsky's Theorem.

Theorem 5.10.2 Slutsky's Theorem. If $z_n \xrightarrow{d} z$ and $c_n \xrightarrow{p} c$ as $n \to \infty$, then
1. $z_n + c_n \xrightarrow{d} z + c$
2. $z_n c_n \xrightarrow{d} zc$
3. $\dfrac{z_n}{c_n} \xrightarrow{d} \dfrac{z}{c}$ if $c \neq 0$.

Even though Slutsky's Theorem is a special case of the CMT, it is a useful statement as it focuses on the most common applications: addition, multiplication, and division.

Despite the fact that the plug-in estimator $\hat{\beta}$ is a function of $\hat{\mu}$ for which we have an asymptotic distribution, Theorem 5.10.1 does not directly give us an asymptotic distribution for $\hat{\beta}$. This is because $\hat{\beta} = g(\hat{\mu})$ is written as a function of $\hat{\mu}$, not of the standardized sequence $\sqrt{n}\left(\hat{\mu} - \mu\right)$. We need an intermediate step: a first order Taylor series expansion. This step is so critical to statistical theory that it has its own name, the Delta Method.

Theorem 5.10.3 Delta Method. If $\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \xi$, where $g(u)$ is continuously differentiable in a neighborhood of $\mu$, then as $n \to \infty$
$$\sqrt{n}\left(g(\hat{\mu}) - g(\mu)\right) \xrightarrow{d} G'\xi \qquad (5.10)$$
where $G(u) = \frac{\partial}{\partial u} g(u)'$ and $G = G(\mu)$. In particular, if $\xi \sim \mathrm{N}(0, V)$ then as $n \to \infty$
$$\sqrt{n}\left(g(\hat{\mu}) - g(\mu)\right) \xrightarrow{d} \mathrm{N}\left(0, G'VG\right). \qquad (5.11)$$

The Delta Method allows us to complete our derivation of the asymptotic distribution of the estimator $\hat{\beta}$ of $\beta$. Combining Theorems 5.8.2 and 5.10.3 we can find the asymptotic distribution of the plug-in estimator $\hat{\beta}$.

Theorem 5.10.4 If $y_i$ are independent and identically distributed, $\mu = \mathrm{E}\, h(y)$, $\beta = g(\mu)$, $\mathrm{E}\|h(y)\|^2 < \infty$, and $G(u) = \frac{\partial}{\partial u} g(u)'$ is continuous in a neighborhood of $\mu$, then for $\hat{\beta} = g\left(\frac{1}{n}\sum_{i=1}^{n} h(y_i)\right)$, as $n \to \infty$,
$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} \mathrm{N}\left(0, G'VG\right)$$
where $V = \mathrm{E}\left(h(y) - \mu\right)\left(h(y) - \mu\right)'$ and $G = G(\mu)$.

Theorem 5.9.2 established the consistency of $\hat{\beta}$ for $\beta$, and Theorem 5.10.4 established its asymptotic normality. It is instructive to compare the conditions required for these results. Consistency required that $h(y)$ have a finite mean, while asymptotic normality requires that this variable have a finite variance. Consistency required that $g(u)$ be continuous, while asymptotic normality required that $g(u)$ be continuously differentiable, the latter a stronger smoothness condition.
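As a numerical check (a minimal Python sketch, assuming NumPy; the parameter values are hypothetical), the code below compares the Monte Carlo variance of $\sqrt{n}(\hat{\gamma} - \gamma)$ for the geometric mean $\gamma = \exp(\mathrm{E}\log w)$ with the Delta Method prediction $G'VG = \exp(2\mu)\operatorname{var}(\log w)$, using the fact that $g(u) = \exp(u)$ has derivative $G = \exp(\mu)$.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, s2 = 3.0, 0.25            # log(wage) ~ N(3, 0.25), so gamma = exp(3)
n, reps = 500, 20_000

logw = rng.normal(mu, np.sqrt(s2), size=(reps, n))
gamma_hat = np.exp(logw.mean(axis=1))          # plug-in estimator g(mu-hat)
mc_var = n * gamma_hat.var()                   # Monte Carlo var of sqrt(n)(gamma_hat - gamma)

delta_var = np.exp(2 * mu) * s2                # G'VG with G = exp(mu), V = var(log w)
print("Monte Carlo variance :", mc_var)
print("Delta Method variance:", delta_var)
```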

5.11 Stochastic Order Symbols

It is convenient to have simple symbols for random variables and vectors which converge in probability to zero or are stochastically bounded. In this section we introduce some of the most commonly found notation.

It might be useful to review the common notation for non-random convergence and boundedness. Let $x_n$ and $a_n$, $n = 1, 2, ...$, be non-random sequences. The notation
$$x_n = o(1)$$
(pronounced "small oh-one") is equivalent to $x_n \to 0$ as $n \to \infty$. The notation
$$x_n = o(a_n)$$
is equivalent to $a_n^{-1} x_n \to 0$ as $n \to \infty$. The notation
$$x_n = O(1)$$
(pronounced "big oh-one") means that $x_n$ is bounded uniformly in $n$: there exists an $M < \infty$ such that $|x_n| \leq M$ for all $n$. The notation
$$x_n = O(a_n)$$
is equivalent to $a_n^{-1} x_n = O(1)$.

We now introduce similar concepts for sequences of random variables. Let $z_n$ and $a_n$, $n = 1, 2, ...$, be sequences of random variables. (In most applications, $a_n$ is non-random.) The notation
$$z_n = o_p(1)$$
(pronounced "small oh-P-one") means that $z_n \xrightarrow{p} 0$ as $n \to \infty$. We also write
$$z_n = o_p(a_n)$$
if $a_n^{-1} z_n = o_p(1)$. For example, for any consistent estimator $\hat{\beta}$ for $\beta$ we can write
$$\hat{\beta} = \beta + o_p(1).$$

Similarly, the notation $z_n = O_p(1)$ (pronounced "big oh-P-one") means that $z_n$ is bounded in probability. Precisely, for any $\varepsilon > 0$ there is a constant $M_\varepsilon < \infty$ such that
$$\limsup_{n\to\infty} \Pr\left(|z_n| > M_\varepsilon\right) \leq \varepsilon.$$
Furthermore, we write
$$z_n = O_p(a_n)$$
if $a_n^{-1} z_n = O_p(1)$.

$O_p(1)$ is weaker than $o_p(1)$ in the sense that $z_n = o_p(1)$ implies $z_n = O_p(1)$ but not the reverse. However, if $z_n = O_p(a_n)$ then $z_n = o_p(b_n)$ for any $b_n$ such that $a_n / b_n \to 0$.

If a random vector converges in distribution, $z_n \xrightarrow{d} z$ (for example, if $z_n \xrightarrow{d} \mathrm{N}(0, V)$), then $z_n = O_p(1)$. It follows that for estimators $\hat{\beta}$ which satisfy the convergence of Theorem 5.10.4 we can write
$$\hat{\beta} = \beta + O_p(n^{-1/2}).$$

Another useful observation is that a random sequence with a bounded moment is stochastically bounded.

Theorem 5.11.1 If $z_n$ is a random vector which satisfies
$$\mathrm{E}\|z_n\|^\delta = O(a_n)$$
for some sequence $a_n$ and $\delta > 0$, then
$$z_n = O_p\left(a_n^{1/\delta}\right).$$

This can be shown using Markov's inequality (B.21). The assumptions imply that there is some $M < \infty$ such that $\mathrm{E}\|z_n\|^\delta \leq M a_n$ for all $n$. For any $\varepsilon$ set $B = \left(M/\varepsilon\right)^{1/\delta}$. Then
$$\Pr\left(a_n^{-1/\delta}\|z_n\| > B\right) = \Pr\left(\|z_n\|^\delta > \frac{M a_n}{\varepsilon}\right) \leq \frac{\varepsilon}{M a_n}\mathrm{E}\|z_n\|^\delta \leq \varepsilon$$
as required.

There are many simple rules for manipulating $o_p(1)$ and $O_p(1)$ sequences which can be deduced from the continuous mapping theorem or Slutsky's Theorem. For example,
$$o_p(1) + o_p(1) = o_p(1)$$
$$o_p(1) + O_p(1) = O_p(1)$$
$$O_p(1) + O_p(1) = O_p(1)$$
$$o_p(1)\, o_p(1) = o_p(1)$$
$$o_p(1)\, O_p(1) = o_p(1)$$
$$O_p(1)\, O_p(1) = O_p(1).$$
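The following short Python simulation (assuming NumPy; the uniform design and quantile level are arbitrary choices) illustrates the statement $\hat{\mu} = \mu + O_p(n^{-1/2})$ for the sample mean: the raw error $|\bar{y} - \mu|$ shrinks with $n$, while the rescaled error $\sqrt{n}\,|\bar{y} - \mu|$ stays bounded in probability, with stable upper quantiles.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, reps = 0.5, 5_000

for n in [50, 500, 5000]:
    y = rng.uniform(0, 1, size=(reps, n))             # mean mu = 0.5
    err = np.abs(y.mean(axis=1) - mu)
    q95_raw = np.quantile(err, 0.95)                  # shrinks: o_p(1)
    q95_scaled = np.quantile(np.sqrt(n) * err, 0.95)  # stabilizes: O_p(1)
    print(f"n={n:5d}  95% quantile of |err|={q95_raw:.4f}  of sqrt(n)|err|={q95_scaled:.4f}")
```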

5.12 Uniform Stochastic Bounds*

For some applications it can be useful to obtain the stochastic order of the random variable
$$\max_{1 \leq i \leq n} |y_i|.$$
This is the magnitude of the largest observation in the sample $\{y_1, ..., y_n\}$. If the support of the distribution of $y_i$ is unbounded, then as the sample size $n$ increases, the largest observation will also tend to increase. It turns out that there is a simple characterization.

Theorem 5.12.1 Assume $y_i$ are independent and identically distributed. If $\mathrm{E}|y|^r < \infty$, then as $n \to \infty$
$$n^{-1/r}\max_{1 \leq i \leq n}|y_i| \xrightarrow{p} 0. \qquad (5.12)$$
If $\mathrm{E}\exp(ty) < \infty$ for all $t$, then
$$\left(\log n\right)^{-1}\max_{1 \leq i \leq n}|y_i| \xrightarrow{p} 0. \qquad (5.13)$$

Equivalently, (5.12) can be written as
$$\max_{1 \leq i \leq n}|y_i| = o_p\left(n^{1/r}\right) \qquad (5.14)$$
and (5.13) as
$$\max_{1 \leq i \leq n}|y_i| = o_p\left(\log n\right). \qquad (5.15)$$

Equation (5.12) says that if $y$ has $r$ finite moments, then the largest observation will diverge at a rate slower than $n^{1/r}$. As $r$ increases this rate decreases. Equation (5.13) shows that if we strengthen this to $y$ having all finite moments and a finite moment generating function (for example, if $y$ is normally distributed) then the largest observation will diverge slower than $\log n$. Thus the higher the moments, the slower the rate of divergence.

To simplify the notation, we write (5.14) as $y_i = o_p(n^{1/r})$ "uniformly in $1 \leq i \leq n$", and similarly (5.15) as $y_i = o_p(\log n)$ "uniformly in $1 \leq i \leq n$". It is important to understand, when the $O_p$ or $o_p$ symbols are applied to subscript $i$ random variables, whether the convergence is pointwise in $i$ or is uniform in $i$ in the sense of (5.14)-(5.15).

Theorem 5.12.1 applies to random vectors. If $\mathrm{E}\|y\|^r < \infty$ then
$$\max_{1 \leq i \leq n}\|y_i\| = o_p\left(n^{1/r}\right), \qquad (5.16)$$
and under the analogous moment generating function condition,
$$\left(\log n\right)^{-1}\max_{1 \leq i \leq n}\|y_i\| \xrightarrow{p} 0.$$

5.13 Semiparametric Efficiency

In this section we argue that the sample mean $\hat{\mu}$ and plug-in estimator $\hat{\beta} = g(\hat{\mu})$ are efficient estimators of the parameters $\mu$ and $\beta$. Our demonstration is based on the rich but technically challenging theory of semiparametric efficiency bounds. An excellent accessible review has been provided by Newey (1990). We will also appeal to the asymptotic theory of maximum likelihood estimation (see Section B.11).

We start by examining the sample mean $\hat{\mu}$, for the asymptotic efficiency of $\hat{\beta}$ will follow from that of $\hat{\mu}$.

Recall, we know that if $\mathrm{E}\|y\|^2 < \infty$ then the sample mean has the asymptotic distribution $\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}(0, V)$. We want to know if $\hat{\mu}$ is the best feasible estimator, or if there is another estimator with a smaller asymptotic variance. While it seems intuitively unlikely that another estimator could have a smaller asymptotic variance, how do we know that this is not the case?

When we ask if $\hat{\mu}$ is the best estimator, we need to be clear about the class of models, that is, the class of permissible distributions. For estimation of the mean $\mu$ of the distribution of $y$ the broadest conceivable class is $\mathcal{L}_1 = \{F : \mathrm{E}\|y\| < \infty\}$. This class is too broad for our current purposes, as $\hat{\mu}$ is not asymptotically $\mathrm{N}(0, V)$ for all $F \in \mathcal{L}_1$. A more realistic choice is $\mathcal{L}_2 = \left\{F : \mathrm{E}\|y\|^2 < \infty\right\}$, the class of finite-variance distributions. When we seek an efficient estimator of the mean $\mu$ in the class of models $\mathcal{L}_2$, what we are seeking is the best estimator, given that all we know is that $F \in \mathcal{L}_2$.

To show that the answer is not immediately obvious, it might be helpful to review a setting where the sample mean is inefficient. Suppose that $y \in \mathbb{R}$ has the double exponential density $f(y \mid \mu) = 2^{-1/2}\exp\left(-|y - \mu|\sqrt{2}\right)$. Since $\operatorname{var}(y) = 1$ we see that the sample mean satisfies $\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}(0, 1)$. In this model the maximum likelihood estimator (MLE) $\tilde{\mu}$ for $\mu$ is the sample median. Recall from the theory of maximum likelihood that the MLE satisfies $\sqrt{n}\left(\tilde{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}\left(0, \left(\mathrm{E} S^2\right)^{-1}\right)$ where $S = \frac{\partial}{\partial\mu}\log f(y \mid \mu) = \sqrt{2}\operatorname{sgn}\left(y - \mu\right)$ is the score. We can calculate that $\mathrm{E} S^2 = 2$ and thus conclude that $\sqrt{n}\left(\tilde{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}(0, 1/2)$. The asymptotic variance of the MLE is one-half that of the sample mean. Thus when the true density is known to be double exponential the sample mean is inefficient.

But the estimator which achieves this improved efficiency (the sample median) is not generically consistent for the population mean. It is inconsistent if the density is asymmetric or skewed. So the improvement comes at a great cost. Another way of looking at this is that the sample median is efficient in the class of densities $f(y \mid \mu) = 2^{-1/2}\exp\left(-|y - \mu|\sqrt{2}\right)$, but unless it is known that this is the correct distribution class this knowledge is not very useful.

The relevant question is whether or not the sample mean is efficient when the form of the distribution is unknown. We call this setting semiparametric, as the parameter of interest (the mean) is finite dimensional while the remaining features of the distribution are unspecified. In the semiparametric context an estimator is called semiparametrically efficient if it has the smallest asymptotic variance among all semiparametric estimators.

The mathematical trick is to reduce the semiparametric model to a set of parametric "submodels". The Cramer-Rao variance bound can be found for each parametric submodel. The variance bound for the semiparametric model (the union of the submodels) is then defined as the supremum of the individual variance bounds.

Formally, suppose that the true density of $y$ is the unknown function $f(y)$ with mean $\mu = \mathrm{E} y = \int y f(y)\,dy$. A parametric submodel $\eta$ for $f(y)$ is a density $f_\eta(y \mid \theta)$ which is a smooth function of a parameter $\theta$, and there is a true value $\theta_0$ such that $f_\eta(y \mid \theta_0) = f(y)$. The index $\eta$ indicates the submodels. The equality $f_\eta(y \mid \theta_0) = f(y)$ means that the submodel class passes through the true density, so the submodel is a true model. The class of submodels $\eta$ and the parameter $\theta_0$ depend on the true density $f$. In the submodel $f_\eta(y \mid \theta)$, the mean is $\mu_\eta(\theta) = \int y f_\eta(y \mid \theta)\,dy$, which varies with the parameter $\theta$. Let $\eta \in \aleph$ denote the class of all submodels for $f$.

Since each submodel $\eta$ is parametric we can calculate the efficiency bound for estimation of $\mu$ within this submodel. Specifically, given the density $f_\eta(y \mid \theta)$ its likelihood score is
$$S_\eta = \frac{\partial}{\partial\theta}\log f_\eta(y \mid \theta_0).$$
Defining $M_\eta = \frac{\partial}{\partial\theta}\mu_\eta(\theta_0)'$, by Theorem B.11.5 the Cramer-Rao lower bound for estimation of $\mu$ within the submodel $\eta$ is
$$V_\eta = M_\eta'\left(\mathrm{E}\left(S_\eta S_\eta'\right)\right)^{-1} M_\eta.$$

As $V_\eta$ is the efficiency bound for the submodel class $f_\eta(y \mid \theta)$, no estimator can have an asymptotic variance smaller than $V_\eta$ for any density $f_\eta(y \mid \theta)$ in the submodel class, including the true density $f$. This is true for all submodels $\eta$. Thus the asymptotic variance of any semiparametric estimator cannot be smaller than $V_\eta$ for any conceivable submodel. Taking the supremum of the Cramer-Rao bounds from all conceivable submodels we define
$$\underline{V} = \sup_{\eta \in \aleph} V_\eta.$$
(It is not obvious that this supremum exists, as $V_\eta$ is a matrix so there is not a unique ordering of matrices. However, in many cases, including the ones we study, the supremum exists and is unique.)

The asymptotic variance of any semiparametric estimator cannot be smaller than $\underline{V}$, since it cannot be smaller than any individual $V_\eta$. We call $\underline{V}$ the semiparametric asymptotic variance bound or semiparametric efficiency bound for estimation of $\mu$, as it is a lower bound on the asymptotic variance for any semiparametric estimator. If the asymptotic variance of a specific semiparametric estimator equals the bound $\underline{V}$, we say that the estimator is semiparametrically efficient.

For many statistical problems it is quite challenging to calculate the semiparametric variance bound. However, in some cases there is a simple method to find the solution. Suppose that we can find a submodel $\eta_0$ whose Cramer-Rao lower bound satisfies $V_{\eta_0} = V$, where $V$ is the asymptotic variance of a known semiparametric estimator. In this case, we can deduce that $\underline{V} = V_{\eta_0} = V$. Otherwise there would exist another submodel $\eta_1$ whose Cramer-Rao lower bound satisfies $V_{\eta_0} < V_{\eta_1}$, but this would imply $V < V_{\eta_1}$, which contradicts the Cramer-Rao Theorem.

We now find this submodel for the sample mean $\hat{\mu}$. Our goal is to find a parametric submodel whose Cramer-Rao bound for $\mu$ is $V$. This can be done by creating a tilted version of the true density. Consider the parametric submodel
$$f_\eta(y \mid \theta) = f(y)\left(1 + \theta' V^{-1}\left(y - \mu\right)\right) \qquad (5.17)$$
where $f(y)$ is the true density and $\mu = \mathrm{E} y$. Note that
$$\int f_\eta(y \mid \theta)\,dy = \int f(y)\,dy + \theta' V^{-1}\int f(y)\left(y - \mu\right)dy = 1$$
and for all $\theta$ close to zero $f_\eta(y \mid \theta) \geq 0$. Thus $f_\eta(y \mid \theta)$ is a valid density function. It is a parametric submodel since $f_\eta(y \mid \theta_0) = f(y)$ when $\theta_0 = 0$. This parametric submodel has the mean
$$\mu_\eta(\theta) = \int y f_\eta(y \mid \theta)\,dy = \int y f(y)\,dy + \int f(y)\, y\left(y - \mu\right)' V^{-1}\theta\, dy = \mu + \theta.$$

Since
$$\frac{\partial}{\partial\theta}\log f_\eta(y \mid \theta) = \frac{\partial}{\partial\theta}\log\left(1 + \theta' V^{-1}\left(y - \mu\right)\right) = \frac{V^{-1}\left(y - \mu\right)}{1 + \theta' V^{-1}\left(y - \mu\right)}$$
it follows that the score function for $\theta$ is
$$S_\eta = \frac{\partial}{\partial\theta}\log f_\eta(y \mid \theta_0) = V^{-1}\left(y - \mu\right) \qquad (5.18)$$
and therefore
$$\mathrm{E}\left(S_\eta S_\eta'\right) = V^{-1}\mathrm{E}\left(\left(y - \mu\right)\left(y - \mu\right)'\right)V^{-1} = V^{-1}. \qquad (5.19)$$
The Cramer-Rao lower bound for estimation of $\theta$ is thus $\left(\mathrm{E}\left(S_\eta S_\eta'\right)\right)^{-1} = V$. The Cramer-Rao lower bound for $\mu_\eta(\theta) = \mu + \theta$ is also $V$, and this equals the asymptotic variance of the moment estimator $\hat{\mu}$. This was what we set out to show.

In summary, we have shown that in the submodel (5.17) the Cramer-Rao lower bound for estimation of $\mu$ is $V$, which equals the asymptotic variance of the sample mean. This establishes the following result.

Proposition 5.13.1 In the class of distributions $F \in \mathcal{L}_2$, the semiparametric variance bound for estimation of $\mu$ is $V = \operatorname{var}(y_i)$, and the sample mean $\hat{\mu}$ is a semiparametrically efficient estimator of the population mean $\mu$.

We call this result a proposition rather than a theorem as we have not attended to the regularity conditions.

It is a simple matter to extend this result to the plug-in estimator $\hat{\beta} = g(\hat{\mu})$. We know from Theorem 5.10.4 that if $\mathrm{E}\|y\|^2 < \infty$ and $g(u)$ is continuously differentiable at $u = \mu$ then the plug-in estimator has the asymptotic distribution $\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} \mathrm{N}\left(0, G'VG\right)$. We therefore consider the class of distributions
$$\mathcal{L}_2(g) = \left\{F : \mathrm{E}\|y\|^2 < \infty,\ g(u)\ \text{is continuously differentiable at}\ u = \mathrm{E} y\right\}.$$
For example, if $\beta = \mu_1/\mu_2$ where $\mu_1 = \mathrm{E} y_1$ and $\mu_2 = \mathrm{E} y_2$ then $\mathcal{L}_2(g) = \left\{F : \mathrm{E} y_1^2 < \infty,\ \mathrm{E} y_2^2 < \infty,\ \mathrm{E} y_2 \neq 0\right\}$.

For any submodel $\eta$ the Cramer-Rao lower bound for estimation of $\beta = g(\mu)$ is $G'V_\eta G$ by Theorem B.11.5. For the submodel (5.17) this bound is $G'VG$, which equals the asymptotic variance of $\hat{\beta}$ from Theorem 5.10.4. Thus $\hat{\beta}$ is semiparametrically efficient.

Proposition 5.13.2 In the class of distributions $F \in \mathcal{L}_2(g)$ the semiparametric variance bound for estimation of $\beta = g(\mu)$ is $G'VG$, and the plug-in estimator $\hat{\beta} = g(\hat{\mu})$ is a semiparametrically efficient estimator of $\beta$.

The result in Proposition 5.13.2 is quite general. Smooth functions of sample moments are efficient estimators for their population counterparts. This is a very powerful result, as most econometric estimators can be written (or approximated) as smooth functions of sample means.

5.14 Technical Proofs*

In this section we provide proofs of some of the more technical points in the chapter. These proofs may only be of interest to the more mathematically inclined.

Proof of Theorem 5.4.2: Without loss of generality, we can assume $\mathrm{E}(y_i) = 0$ by recentering $y_i$ on its expectation.

We need to show that for all $\delta > 0$ and $\eta > 0$ there is some $N < \infty$ so that for all $n \geq N$, $\Pr\left(|\bar{y}| > \delta\right) \leq \eta$. Fix $\delta$ and $\eta$. Set $\varepsilon = \delta\eta/3$. Pick $C < \infty$ large enough so that
$$\mathrm{E}\left(|y_i|\,\mathbf{1}\left(|y_i| > C\right)\right) \leq \varepsilon \qquad (5.20)$$
(where $\mathbf{1}(\cdot)$ is the indicator function) which is possible since $\mathrm{E}|y_i| < \infty$. Define the random variables
$$w_i = y_i\,\mathbf{1}\left(|y_i| \leq C\right) - \mathrm{E}\left(y_i\,\mathbf{1}\left(|y_i| \leq C\right)\right)$$
$$z_i = y_i\,\mathbf{1}\left(|y_i| > C\right) - \mathrm{E}\left(y_i\,\mathbf{1}\left(|y_i| > C\right)\right)$$
so that
$$\bar{y} = \bar{w} + \bar{z}$$
and
$$\mathrm{E}|\bar{y}| \leq \mathrm{E}|\bar{w}| + \mathrm{E}|\bar{z}|. \qquad (5.21)$$
We now show that the sum of the expectations on the right-hand side can be bounded below $3\varepsilon$.

First, by the Triangle Inequality (A.12) and the Expectation Inequality (B.15),
$$\mathrm{E}|z_i| = \mathrm{E}\left|y_i\,\mathbf{1}\left(|y_i| > C\right) - \mathrm{E}\left(y_i\,\mathbf{1}\left(|y_i| > C\right)\right)\right| \leq 2\,\mathrm{E}\left|y_i\,\mathbf{1}\left(|y_i| > C\right)\right| \leq 2\varepsilon, \qquad (5.22)$$
where the final inequality is (5.20), and thus
$$\mathrm{E}|\bar{z}| = \mathrm{E}\left|\frac{1}{n}\sum_{i=1}^{n} z_i\right| \leq \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}|z_i| \leq 2\varepsilon. \qquad (5.23)$$

Second, by a similar argument,
$$|w_i| = \left|y_i\,\mathbf{1}\left(|y_i| \leq C\right) - \mathrm{E}\left(y_i\,\mathbf{1}\left(|y_i| \leq C\right)\right)\right| \leq 2\left|y_i\,\mathbf{1}\left(|y_i| \leq C\right)\right| \leq 2C. \qquad (5.24)$$
Then by Jensen's Inequality (B.12), the fact that the $w_i$ are iid and mean zero, and (5.24),
$$\left(\mathrm{E}|\bar{w}|\right)^2 \leq \mathrm{E}|\bar{w}|^2 = \frac{\mathrm{E} w_i^2}{n} \leq \frac{4C^2}{n} \leq \varepsilon^2, \qquad (5.25)$$
the final inequality holding for $n \geq 4C^2/\varepsilon^2 = 36C^2/(\delta^2\eta^2)$. Equations (5.21), (5.23) and (5.25) together show that
$$\mathrm{E}|\bar{y}| \leq 3\varepsilon \qquad (5.26)$$
as desired.

Finally, by Markov's inequality and (5.26),
$$\Pr\left(|\bar{y}| > \delta\right) \leq \frac{\mathrm{E}|\bar{y}|}{\delta} \leq \frac{3\varepsilon}{\delta} = \eta,$$
the final equality by the definition of $\varepsilon$. We have shown that for any $\delta > 0$ and $\eta > 0$, for all $n \geq 36C^2/(\delta^2\eta^2)$, $\Pr\left(|\bar{y}| > \delta\right) \leq \eta$, as needed.

Proof of Theorem 5.6.1: The Euclidean norm satisfies
$$\|y\| = \left(\sum_{j=1}^{m} y_j^2\right)^{1/2} \leq \sum_{j=1}^{m}|y_j|,$$
and therefore if $\mathrm{E}|y_j| < \infty$ for all $j$ then
$$\mathrm{E}\|y\| \leq \sum_{j=1}^{m}\mathrm{E}|y_j| < \infty.$$
For the reverse inequality, the Euclidean norm of a vector is larger than the length of any individual component, so for any $j$, $|y_j| \leq \|y\|$. Thus, if $\mathrm{E}\|y\| < \infty$, then $\mathrm{E}|y_j| < \infty$ for $j = 1, ..., m$.

Proof of Theorem 5.7.1: The moment bound $\mathrm{E}\left(y_i'y_i\right) < \infty$ is sufficient to guarantee that the elements of $\mu$ and $V$ are well defined and finite. Without loss of generality, it is sufficient to consider the case $\mu = 0$.

Our proof method is to calculate the characteristic function of $\sqrt{n}\bar{y}_n$ and show that it converges pointwise to the characteristic function of $\mathrm{N}(0, V)$. By Lévy's Continuity Theorem (see Van der Vaart (2008) Theorem 2.13) this is sufficient to establish that $\sqrt{n}\bar{y}_n$ converges in distribution to $\mathrm{N}(0, V)$.

For $\lambda \in \mathbb{R}^m$, let $C(\lambda) = \mathrm{E}\exp\left(i\lambda'y_i\right)$ denote the characteristic function of $y_i$ and set $c(\lambda) = \log C(\lambda)$. Since $y_i$ has two finite moments the first and second derivatives of $C(\lambda)$ are continuous in $\lambda$. They are
$$\frac{\partial}{\partial\lambda} C(\lambda) = i\,\mathrm{E}\left(y_i\exp\left(i\lambda'y_i\right)\right)$$
$$\frac{\partial^2}{\partial\lambda\,\partial\lambda'} C(\lambda) = i^2\,\mathrm{E}\left(y_i y_i'\exp\left(i\lambda'y_i\right)\right).$$
When evaluated at $\lambda = 0$,
$$C(0) = 1, \qquad \frac{\partial}{\partial\lambda} C(0) = i\,\mathrm{E}\left(y_i\right) = 0, \qquad \frac{\partial^2}{\partial\lambda\,\partial\lambda'} C(0) = -\mathrm{E}\left(y_i y_i'\right) = -V.$$
Furthermore,
$$c_\lambda(\lambda) = \frac{\partial}{\partial\lambda} c(\lambda) = C(\lambda)^{-1}\frac{\partial}{\partial\lambda} C(\lambda)$$
$$c_{\lambda\lambda}(\lambda) = \frac{\partial^2}{\partial\lambda\,\partial\lambda'} c(\lambda) = C(\lambda)^{-1}\frac{\partial^2}{\partial\lambda\,\partial\lambda'} C(\lambda) - C(\lambda)^{-2}\frac{\partial}{\partial\lambda} C(\lambda)\frac{\partial}{\partial\lambda'} C(\lambda)$$
so when evaluated at $\lambda = 0$,
$$c(0) = 0, \qquad c_\lambda(0) = 0, \qquad c_{\lambda\lambda}(0) = -V.$$
By a second-order Taylor series expansion of $c(\lambda)$ about $\lambda = 0$,
$$c(\lambda) = c(0) + c_\lambda(0)'\lambda + \frac{1}{2}\lambda' c_{\lambda\lambda}(\lambda^*)\lambda = \frac{1}{2}\lambda' c_{\lambda\lambda}(\lambda^*)\lambda \qquad (5.27)$$
where $\lambda^*$ lies on the line segment joining 0 and $\lambda$.

We now compute $C_n(\lambda) = \mathrm{E}\exp\left(i\lambda'\sqrt{n}\bar{y}_n\right)$, the characteristic function of $\sqrt{n}\bar{y}_n$. By the properties of the exponential function, the independence of the $y_i$, the definition of $c(\lambda)$ and (5.27),
$$\log C_n(\lambda) = \log\mathrm{E}\exp\left(i\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\lambda'y_i\right) = \log\mathrm{E}\prod_{i=1}^{n}\exp\left(i\frac{1}{\sqrt{n}}\lambda'y_i\right) = \log\prod_{i=1}^{n}\mathrm{E}\exp\left(i\frac{1}{\sqrt{n}}\lambda'y_i\right) = \sum_{i=1}^{n}\log\mathrm{E}\exp\left(i\frac{1}{\sqrt{n}}\lambda'y_i\right) = n\,c\!\left(\frac{\lambda}{\sqrt{n}}\right) = \frac{1}{2}\lambda' c_{\lambda\lambda}(\lambda_n)\lambda$$
where $\lambda_n$ lies on the line segment joining 0 and $\lambda/\sqrt{n}$. Since $\lambda_n \to 0$ and $c_{\lambda\lambda}(\lambda)$ is continuous, $c_{\lambda\lambda}(\lambda_n) \to c_{\lambda\lambda}(0) = -V$. We thus find that as $n \to \infty$,
$$\log C_n(\lambda) \to -\frac{1}{2}\lambda'V\lambda$$
and
$$C_n(\lambda) \to \exp\left(-\frac{1}{2}\lambda'V\lambda\right)$$
which is the characteristic function of the $\mathrm{N}(0, V)$ distribution. This completes the proof.

Proof of Theorem 5.9.1: Since $g$ is continuous at $c$, for all $\varepsilon > 0$ we can find a $\delta > 0$ such that if $\|z_n - c\| < \delta$ then $\|g(z_n) - g(c)\| \leq \varepsilon$. Recall that $A \subset B$ implies $\Pr(A) \leq \Pr(B)$. Thus
$$\Pr\left(\|g(z_n) - g(c)\| \leq \varepsilon\right) \geq \Pr\left(\|z_n - c\| < \delta\right) \to 1$$
as $n \to \infty$ by the assumption that $z_n \xrightarrow{p} c$. Hence $g(z_n) \xrightarrow{p} g(c)$ as $n \to \infty$.

Proof of Theorem 5.10.3: By a vector Taylor series expansion, for each element of $g$,
$$g_j(\hat{\mu}) = g_j(\mu) + g_{j\mu}\left(\mu_{jn}^*\right)\left(\hat{\mu} - \mu\right)$$
where $\mu_{jn}^*$ lies on the line segment between $\hat{\mu}$ and $\mu$, and therefore converges in probability to $\mu$. It follows that $a_{jn} = g_{j\mu}\left(\mu_{jn}^*\right) - g_{j\mu}(\mu) \xrightarrow{p} 0$. Stacking across the elements of $g$, we find
$$\sqrt{n}\left(g(\hat{\mu}) - g(\mu)\right) = \left(G + a_n\right)'\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} G'\xi. \qquad (5.28)$$
The convergence is by Theorem 5.10.1, as $G + a_n \xrightarrow{p} G$, $\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \xi$, and their product is a continuous function. This establishes (5.10).

When $\xi \sim \mathrm{N}(0, V)$, the right-hand side of (5.28) equals
$$G'\xi = G'\,\mathrm{N}(0, V) = \mathrm{N}\left(0, G'VG\right),$$
establishing (5.11).

Proof of Theorem 5.12.1: First consider (5.12). Take any $\delta > 0$. The event $\left\{\max_{1\leq i\leq n}|y_i| > \delta n^{1/r}\right\}$ means that at least one of the $|y_i|$ exceeds $\delta n^{1/r}$, which is the same as the event $\bigcup_{i=1}^{n}\left\{|y_i| > \delta n^{1/r}\right\}$, or equivalently $\bigcup_{i=1}^{n}\left\{|y_i|^r > \delta^r n\right\}$. Since the probability of the union of events is smaller than the sum of the probabilities,
$$\Pr\left(n^{-1/r}\max_{1\leq i\leq n}|y_i| > \delta\right) = \Pr\left(\bigcup_{i=1}^{n}\left\{|y_i|^r > n\delta^r\right\}\right) \leq \sum_{i=1}^{n}\Pr\left(|y_i|^r > n\delta^r\right) \leq \frac{1}{n\delta^r}\sum_{i=1}^{n}\mathrm{E}\left(|y_i|^r\,\mathbf{1}\left(|y_i|^r > n\delta^r\right)\right) = \frac{1}{\delta^r}\mathrm{E}\left(|y_i|^r\,\mathbf{1}\left(|y_i|^r > n\delta^r\right)\right)$$
where the second inequality is the strong form of Markov's inequality (Theorem B.22) and the final equality is since the $y_i$ are iid. Since $\mathrm{E}|y|^r < \infty$ this final expectation converges to zero as $n \to \infty$. This is because
$$\mathrm{E}|y_i|^r = \int |y|^r dF(y) < \infty$$
implies
$$\int_{\{|y|^r > c\}} |y|^r dF(y) \to 0 \qquad (5.29)$$
as $c \to \infty$. This establishes (5.12).

Now consider (5.13). Take any $\delta > 0$ and set $t = 1/\delta$. By a similar calculation,
$$\Pr\left(\left(\log n\right)^{-1}\max_{1\leq i\leq n}|y_i| > \delta\right) = \Pr\left(\bigcup_{i=1}^{n}\left\{\exp|t y_i| > \exp\left(t\delta\log n\right)\right\}\right) \leq \sum_{i=1}^{n}\Pr\left(\exp|t y_i| > n\right) \leq \mathrm{E}\left(\exp|ty|\,\mathbf{1}\left(\exp|ty| > n\right)\right)$$
where the second line uses $\exp\left(t\delta\log n\right) = \exp\left(\log n\right) = n$ together with the strong form of Markov's inequality. The assumption $\mathrm{E}\exp(ty) < \infty$ means $\mathrm{E}\left(\exp|ty|\,\mathbf{1}\left(\exp|ty| > n\right)\right) \to 0$ as $n \to \infty$ by the same argument as in (5.29). This establishes (5.13).

Chapter 6

Asymptotic Theory for Least Squares

6.1 Introduction

It turns out that the asymptotic theory of least-squares estimation applies equally to the projection model and the linear CEF model, and therefore the results in this chapter will be stated for the broader projection model described in Section 2.17. Recall that the model is
$$y_i = x_i'\beta + e_i$$
for $i = 1, ..., n$, where the linear projection coefficient is
$$\beta = \left(\mathrm{E}\left(x_i x_i'\right)\right)^{-1}\mathrm{E}\left(x_i y_i\right).$$
Many of the results of this section hold under random sampling (Assumption 1.5.1) and finite second moments (Assumption 2.17.1). We restate these conditions here for clarity.

Assumption 6.1.1
1. The observations $(y_i, x_i)$, $i = 1, ..., n$, are independent and identically distributed.
2. $\mathrm{E} y^2 < \infty$.
3. $\mathrm{E}\|x\|^2 < \infty$.
4. $Q_{xx} = \mathrm{E}\left(xx'\right)$ is positive definite.

Some results require stronger moment conditions, which we collect as follows.

Assumption 6.1.2 In addition to Assumption 6.1.1, $\mathrm{E} y^4 < \infty$ and $\mathrm{E}\|x\|^4 < \infty$.

6.2 Consistency of Least-Squares Estimation

In this section we use the weak law of large numbers (WLLN, Theorem 5.4.2 and Theorem 5.6.2) and continuous mapping theorem (CMT, Theorem 5.9.1) to show that the least-squares estimator $\hat{\beta}$ is consistent for the projection coefficient $\beta$.

This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the WLLN shows that sample moments converge in probability to population moments. And third, the CMT states that continuous functions preserve convergence in probability. We now explain each step in brief and then in greater detail.

First, observe that the OLS estimator
$$\hat{\beta} = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i y_i\right) = \hat{Q}_{xx}^{-1}\hat{Q}_{xy}$$
is a function of the sample moments $\hat{Q}_{xx} = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'$ and $\hat{Q}_{xy} = \frac{1}{n}\sum_{i=1}^{n} x_i y_i$.

Second, by an application of the WLLN these sample moments converge in probability to the population moments. Specifically, the fact that $(y_i, x_i)$ are mutually independent and identically distributed implies that any function of $(y_i, x_i)$ is iid, including $x_i x_i'$ and $x_i y_i$. These variables also have finite expectations by Theorem 2.17.1.1. Under these conditions, the WLLN (Theorem 5.6.2) implies that as $n \to \infty$,
$$\hat{Q}_{xx} = \frac{1}{n}\sum_{i=1}^{n} x_i x_i' \xrightarrow{p} \mathrm{E}\left(x_i x_i'\right) = Q_{xx} \qquad (6.1)$$
and
$$\hat{Q}_{xy} = \frac{1}{n}\sum_{i=1}^{n} x_i y_i \xrightarrow{p} \mathrm{E}\left(x_i y_i\right) = Q_{xy}. \qquad (6.2)$$

Third, the CMT (Theorem 5.9.1) allows us to combine these equations to show that $\hat{\beta}$ converges in probability to $\beta$. Specifically, as $n \to \infty$,
$$\hat{\beta} = \hat{Q}_{xx}^{-1}\hat{Q}_{xy} \xrightarrow{p} Q_{xx}^{-1} Q_{xy} = \beta. \qquad (6.3)$$
We have shown that $\hat{\beta} \xrightarrow{p} \beta$ as $n \to \infty$. In words, the OLS estimator converges in probability to the projection coefficient vector $\beta$ as the sample size $n$ gets large.

To fully understand the application of the CMT we walk through it in detail. We can write
$$\hat{\beta} = g\left(\hat{Q}_{xx}, \hat{Q}_{xy}\right)$$
where $g(A, b) = A^{-1}b$ is a function of $A$ and $b$, continuous at all values of the arguments such that $A^{-1}$ exists. Assumption 2.17.1 implies that $Q_{xx}^{-1}$ exists and thus $g(A, b)$ is continuous at $A = Q_{xx}$. This justifies the application of the CMT in (6.3).

For a slightly different demonstration of (6.3), recall that (4.7) implies that
$$\hat{\beta} - \beta = \hat{Q}_{xx}^{-1}\hat{Q}_{xe} \qquad (6.4)$$
where
$$\hat{Q}_{xe} = \frac{1}{n}\sum_{i=1}^{n} x_i e_i.$$
The WLLN and the projection property $\mathrm{E}\left(x_i e_i\right) = 0$ imply
$$\hat{Q}_{xe} \xrightarrow{p} \mathrm{E}\left(x_i e_i\right) = 0. \qquad (6.5)$$
Therefore
$$\hat{\beta} - \beta = \hat{Q}_{xx}^{-1}\hat{Q}_{xe} \xrightarrow{p} Q_{xx}^{-1}\,0 = 0,$$
which is the same as $\hat{\beta} \xrightarrow{p} \beta$.

Theorem 6.2.1 Consistency of Least-Squares Estimation. Under Assumption 6.1.1, $\hat{Q}_{xx} \xrightarrow{p} Q_{xx}$, $\hat{Q}_{xy} \xrightarrow{p} Q_{xy}$, $\hat{Q}_{xx}^{-1} \xrightarrow{p} Q_{xx}^{-1}$, $\hat{Q}_{xe} \xrightarrow{p} 0$, and $\hat{\beta} \xrightarrow{p} \beta$ as $n \to \infty$.

Theorem 6.2.1 states that the OLS estimator $\hat{\beta}$ converges in probability to $\beta$ as $n$ increases, and thus $\hat{\beta}$ is consistent for $\beta$. In the stochastic order notation, Theorem 6.2.1 can be equivalently written as
$$\hat{\beta} = \beta + o_p(1). \qquad (6.6)$$

To illustrate the effect of sample size on the least-squares estimator consider the least-squares regression
$$\ln\left(Wage_i\right) = \beta_1 Education_i + \beta_2 Experience_i + \beta_3 Experience_i^2 + e_i.$$
We use the sample of 30,833 white men from the March 2009 CPS. Randomly sorting the observations, and sequentially estimating the model by least-squares, starting with the first 40 observations, and continuing until the full sample is used, the sequence of estimates is displayed in Figure 6.1. You can see how the least-squares estimate changes with the sample size, but as the number of observations increases it settles down to the full-sample estimate $\hat{\beta}_1 = 0.114$.
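To connect this argument to computation, the Python sketch below (assuming NumPy; it uses simulated data rather than the CPS sample, and the design is hypothetical) builds $\hat{\beta}$ directly from the sample moments $\hat{Q}_{xx}$ and $\hat{Q}_{xy}$ for increasing $n$, showing the estimates settling toward the true projection coefficients.

```python
import numpy as np

rng = np.random.default_rng(7)
beta = np.array([1.0, 0.5, -0.25])     # true projection coefficients (simulated design)

def ols_from_moments(n):
    x = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
    e = rng.standard_normal(n)
    y = x @ beta + e
    Qxx_hat = (x.T @ x) / n            # sample moment Qxx-hat
    Qxy_hat = (x.T @ y) / n            # sample moment Qxy-hat
    return np.linalg.solve(Qxx_hat, Qxy_hat)   # beta-hat = Qxx-hat^{-1} Qxy-hat

for n in [40, 400, 4000, 40000]:
    print(n, ols_from_moments(n).round(3))
```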

6.3 Asymptotic Normality

We started this chapter discussing the need for an approximation to the distribution of the OLS estimator $\hat{\beta}$. In Section 6.2 we showed that $\hat{\beta}$ converges in probability to $\beta$. Consistency is a good first step, but in itself does not describe the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution.

The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. The steps are as follows.

Take equation (6.4) and multiply it by $\sqrt{n}$. This yields the expression
$$\sqrt{n}\left(\hat{\beta} - \beta\right) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} x_i e_i\right). \qquad (6.7)$$
This shows that the normalized and centered estimator $\sqrt{n}\left(\hat{\beta} - \beta\right)$ is a function of the sample average $\frac{1}{n}\sum_{i=1}^{n} x_i x_i'$ and the normalized sample average $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} x_i e_i$. Furthermore, the latter has mean zero so the central limit theorem (CLT, Theorem 5.7.1) applies.

[Figure 6.1: Sequential OLS estimates plotted against the number of observations (about 5,000 to 20,000); the estimates range roughly between 0.08 and 0.15.]

The product $x_i e_i$ is iid (since the observations are iid) and mean zero (since $\mathrm{E}\left(x_i e_i\right) = 0$). Define the $k \times k$ covariance matrix
$$\Omega = \mathrm{E}\left(x_i x_i' e_i^2\right). \qquad (6.8)$$
We require the elements of $\Omega$ to be finite, written $\|\Omega\| < \infty$, or equivalently that $\mathrm{E}\|x_i e_i\|^2 < \infty$. Using $\|x_i e_i\|^2 = \|x_i\|^2 e_i^2$ and the Cauchy-Schwarz Inequality (B.17),
$$\|\Omega\| \leq \mathrm{E}\|x_i e_i\|^2 = \mathrm{E}\left(\|x_i\|^2 e_i^2\right) \leq \left(\mathrm{E}\|x_i\|^4\right)^{1/2}\left(\mathrm{E} e_i^4\right)^{1/2} \qquad (6.9)$$
which is finite if $x_i$ and $e_i$ have finite fourth moments. As $e_i$ is a linear combination of $y_i$ and $x_i$, it is sufficient that the observables have finite fourth moments (Theorem 2.17.1.6). We can then apply the CLT (Theorem 5.7.1).

Theorem 6.3.1 Under Assumption 6.1.2,
$$\mathrm{E}\|x_i e_i\|^2 < \infty \qquad (6.10)$$
and
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n} x_i e_i \xrightarrow{d} \mathrm{N}\left(0, \Omega\right)$$
as $n \to \infty$.

Putting together (6.1), (6.7), and Theorem 6.3.1,
$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} Q_{xx}^{-1}\,\mathrm{N}\left(0, \Omega\right) = \mathrm{N}\left(0, Q_{xx}^{-1}\Omega Q_{xx}^{-1}\right) \qquad (6.11)$$
as $n \to \infty$, where the final equality follows from the property that linear combinations of normal vectors are also normal (Theorem B.9.1).

We have derived the asymptotic normal approximation to the distribution of the least-squares estimator.

Theorem 6.3.2 Asymptotic Normality of Least-Squares Estimator. Under Assumption 6.1.2, as $n \to \infty$
$$\sqrt{n}\left(\hat{\beta} - \beta\right) \xrightarrow{d} \mathrm{N}\left(0, V_\beta\right)$$
where
$$V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1}, \qquad (6.12)$$
$Q_{xx} = \mathrm{E}\left(x_i x_i'\right)$, and $\Omega = \mathrm{E}\left(x_i x_i' e_i^2\right)$.

In the stochastic order notation, Theorem 6.3.2 implies that
$$\hat{\beta} = \beta + O_p\left(n^{-1/2}\right). \qquad (6.13)$$

The matrix $V_\beta = \operatorname{avar}(\hat{\beta})$ is the variance of the asymptotic distribution of $\sqrt{n}\left(\hat{\beta} - \beta\right)$. Consequently, $V_\beta$ is often referred to as the asymptotic covariance matrix of $\hat{\beta}$. The expression $V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1}$ is called a sandwich form. It might be worth noticing that there is a difference between the variance of the asymptotic distribution given in (6.12) and the finite-sample conditional variance in the CEF model as given in (4.12), which in rescaled form is
$$n\,V_{\hat{\beta}} = \left(\frac{1}{n}X'X\right)^{-1}\left(\frac{1}{n}X'DX\right)\left(\frac{1}{n}X'X\right)^{-1}.$$
While $V_\beta$ and $n V_{\hat{\beta}}$ are different, the two are close if $n$ is large, and indeed $n V_{\hat{\beta}} \xrightarrow{p} V_\beta$ as $n \to \infty$.

There is a special case where $\Omega$ and $V_\beta$ simplify. We say that $e_i$ is a Homoskedastic Projection Error when
$$\operatorname{cov}\left(x_i x_i', e_i^2\right) = 0. \qquad (6.14)$$
Condition (6.14) holds in the homoskedastic linear regression model, but is somewhat broader. Under (6.14) the asymptotic variance formulas simplify as
$$\Omega = \mathrm{E}\left(x_i x_i'\right)\mathrm{E}\left(e_i^2\right) = Q_{xx}\sigma^2 \qquad (6.15)$$
$$V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1} = Q_{xx}^{-1}\sigma^2 \equiv V_\beta^0. \qquad (6.16)$$
In (6.16) we define $V_\beta^0 = Q_{xx}^{-1}\sigma^2$ whether (6.14) is true or false. When (6.14) is true then $V_\beta = V_\beta^0$, otherwise $V_\beta \neq V_\beta^0$. We call $V_\beta^0$ the homoskedastic asymptotic covariance matrix.

Theorem 6.3.2 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds true for all joint distributions of $(y_i, x_i)$ which satisfy the conditions of Assumption 6.1.2, and is therefore broadly applicable. Consequently, asymptotic normality is routinely used to approximate the finite sample distribution of $\sqrt{n}\left(\hat{\beta} - \beta\right)$.
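The approximation can be checked by simulation. The Python sketch below (assuming NumPy; the heteroskedastic design, seed, and replication counts are hypothetical choices) approximates the population sandwich matrix $V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1}$ from one very large sample and compares it with the Monte Carlo variance of $\sqrt{n}(\hat{\beta} - \beta)$.

```python
import numpy as np

rng = np.random.default_rng(8)
beta = np.array([1.0, 2.0])
n, reps = 200, 5_000

def draw(n):
    x = np.column_stack([np.ones(n), rng.standard_normal(n)])
    e = rng.standard_normal(n) * (0.5 + np.abs(x[:, 1]))   # heteroskedastic errors
    return x, x @ beta + e, e

# Approximate population Qxx and Omega from one very large sample
X, Y, E = draw(2_000_000)
Qxx = X.T @ X / len(Y)
Omega = (X * (E**2)[:, None]).T @ X / len(Y)
V_beta = np.linalg.inv(Qxx) @ Omega @ np.linalg.inv(Qxx)   # sandwich form (6.12)

# Monte Carlo variance of sqrt(n) * (beta_hat - beta)
bhat = np.empty((reps, 2))
for r in range(reps):
    x, y, _ = draw(n)
    bhat[r] = np.linalg.lstsq(x, y, rcond=None)[0]

print("sandwich V_beta:\n", V_beta.round(3))
print("Monte Carlo var of sqrt(n)(bhat - beta):\n", (n * np.cov(bhat.T)).round(3))
```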

[Figure 6.2: Density of Normalized OLS estimator with Double Pareto Error]

A difficulty is that for any fixed $n$ the sampling distribution of $\hat{\beta}$ can be arbitrarily far from the normal distribution. In Figure 5.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes. The normal approximation improves as $n$ increases, but how large should $n$ be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let $y_i = \beta_1 x_i + \beta_2 + e_i$ where $x_i$ is $\mathrm{N}(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e) = \frac{\alpha}{2}|e|^{-\alpha-1}$, $|e| \geq 1$. If $\alpha > 2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha - 2)$. As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat{\beta}_1 - \beta_1\right)$ has the $\mathrm{N}(0,1)$ asymptotic distribution for any $\alpha > 2$.

In Figure 6.2 we display the finite sample densities of the normalized estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat{\beta}_1 - \beta_1\right)$, setting $n = 100$ and varying the parameter $\alpha$. For $\alpha = 3.0$ the density is very close to the $\mathrm{N}(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero.

Another example is shown in Figure 6.3. Here the model is $y_i = \mu + e_i$ where
$$e_i = \frac{u_i^k - \mathrm{E}\left(u_i^k\right)}{\left(\mathrm{E}\left(u_i^{2k}\right) - \left(\mathrm{E}\left(u_i^k\right)\right)^2\right)^{1/2}} \qquad (6.17)$$
and $u_i \sim \mathrm{N}(0,1)$. We show the sampling distribution of $\sqrt{n}\left(\hat{\mu} - \mu\right)$ setting $n = 100$, for $k = 1$, 4, 6 and 8. As $k$ increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 6.2 and 6.3 is that the $\mathrm{N}(0,1)$ asymptotic approximation is never guaranteed to be accurate.

6.4 Joint Distribution

[Figure 6.3: Density of Normalized OLS estimator with error process (6.17)]

Theorem 6.3.2 gives the joint asymptotic distribution of the coefficient estimates. We can use the result to study the covariance between the coefficient estimates. For example, suppose $k = 2$ and write the estimates as $(\hat{\beta}_1, \hat{\beta}_2)$. For simplicity suppose that the regressors are mean zero. Then we can write
$$Q_{xx} = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
where $\sigma_1^2$ and $\sigma_2^2$ are the variances of $x_{1i}$ and $x_{2i}$, and $\rho$ is their correlation. If the error is homoskedastic, then the asymptotic variance matrix for $(\hat{\beta}_1, \hat{\beta}_2)$ is $V_\beta^0 = Q_{xx}^{-1}\sigma^2$. By the formula for inversion of a $2 \times 2$ matrix,
$$Q_{xx}^{-1} = \frac{1}{\sigma_1^2\sigma_2^2\left(1 - \rho^2\right)}\begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}.$$
Thus if $x_{1i}$ and $x_{2i}$ are positively correlated ($\rho > 0$) then $\hat{\beta}_1$ and $\hat{\beta}_2$ are negatively correlated (and vice-versa).

For illustration, Figure 6.4 displays the probability contours of the joint asymptotic distribution of $\hat{\beta}_1 - \beta_1$ and $\hat{\beta}_2 - \beta_2$ when $\sigma_1^2 = \sigma_2^2 = \sigma^2 = 1$ and $\rho = 0.5$. The coefficient estimates are negatively correlated since the regressors are positively correlated. This means that if $\hat{\beta}_1$ is unusually negative, it is likely that $\hat{\beta}_2$ is unusually positive, or conversely. It is also unlikely that we will observe both $\hat{\beta}_1$ and $\hat{\beta}_2$ unusually large and of the same sign.

This finding that the correlation of the regressors is of opposite sign of the correlation of the coefficient estimates is sensitive to the assumption of homoskedasticity. If the errors are heteroskedastic then this relationship is not guaranteed.

This can be seen through a simple constructed example. Suppose that $x_{1i}$ and $x_{2i}$ only take the values $\{-1, +1\}$, symmetrically, with $\Pr\left(x_{1i} = x_{2i} = 1\right) = \Pr\left(x_{1i} = x_{2i} = -1\right) = 3/8$, and $\Pr\left(x_{1i} = 1, x_{2i} = -1\right) = \Pr\left(x_{1i} = -1, x_{2i} = 1\right) = 1/8$. You can check that the regressors are mean zero, unit variance and correlation 0.5, which is identical with the setting displayed in Figure 6.4.

Now suppose that the error is heteroskedastic. Specifically, suppose that $\mathrm{E}\left(e_i^2 \mid x_{1i} = x_{2i}\right) = \frac{5}{4}$ and $\mathrm{E}\left(e_i^2 \mid x_{1i} \neq x_{2i}\right) = \frac{1}{4}$. You can check that $\mathrm{E}\left(e_i^2\right) = 1$, $\mathrm{E}\left(x_{1i}^2 e_i^2\right) = \mathrm{E}\left(x_{2i}^2 e_i^2\right) = 1$ and $\mathrm{E}\left(x_{1i} x_{2i} e_i^2\right) = \frac{7}{8}$. Therefore
$$V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1} = \frac{16}{9}\begin{pmatrix} 1 & -\tfrac{1}{2} \\ -\tfrac{1}{2} & 1 \end{pmatrix}\begin{pmatrix} 1 & \tfrac{7}{8} \\ \tfrac{7}{8} & 1 \end{pmatrix}\begin{pmatrix} 1 & -\tfrac{1}{2} \\ -\tfrac{1}{2} & 1 \end{pmatrix} = \frac{2}{3}\begin{pmatrix} 1 & \tfrac{1}{4} \\ \tfrac{1}{4} & 1 \end{pmatrix}.$$
Thus the coefficient estimates $\hat{\beta}_1$ and $\hat{\beta}_2$ are positively correlated (their correlation is $1/4$). The joint probability contours of their asymptotic distribution are displayed in Figure 6.5. We can see how the two estimates are positively associated.

What we found through this example is that in the presence of heteroskedasticity there is no simple relationship between the correlation of the regressors and the correlation of the parameter estimates.

We can extend the above analysis to study the covariance between coefficient sub-vectors. For example, partitioning $x_i' = (x_{1i}', x_{2i}')$ and $\beta' = (\beta_1', \beta_2')$, we can write the general model as
$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$$
and the coefficient estimates as $\hat{\beta}' = (\hat{\beta}_1', \hat{\beta}_2')$. Make the partitions
$$Q_{xx} = \begin{pmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{pmatrix}, \qquad \Omega = \begin{pmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{pmatrix}.$$
From (2.41)
$$Q_{xx}^{-1} = \begin{pmatrix} Q_{11\cdot 2}^{-1} & -Q_{11\cdot 2}^{-1}Q_{12}Q_{22}^{-1} \\ -Q_{22\cdot 1}^{-1}Q_{21}Q_{11}^{-1} & Q_{22\cdot 1}^{-1} \end{pmatrix} \qquad (6.18)$$
where $Q_{11\cdot 2} = Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}$ and $Q_{22\cdot 1} = Q_{22} - Q_{21}Q_{11}^{-1}Q_{12}$. Thus when the error is homoskedastic, the asymptotic covariance between $\hat{\beta}_1$ and $\hat{\beta}_2$ is given by the off-diagonal block of $Q_{xx}^{-1}\sigma^2$, which is proportional to $-Q_{11\cdot 2}^{-1}Q_{12}Q_{22}^{-1}$.

In the general case, you can show that (Exercise 6.5)
$$V_\beta = \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix} \qquad (6.19)$$
where
$$V_{11} = Q_{11\cdot 2}^{-1}\left(\Omega_{11} - Q_{12}Q_{22}^{-1}\Omega_{21} - \Omega_{12}Q_{22}^{-1}Q_{21} + Q_{12}Q_{22}^{-1}\Omega_{22}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot 2}^{-1} \qquad (6.20)$$
$$V_{21} = Q_{22\cdot 1}^{-1}\left(\Omega_{21} - Q_{21}Q_{11}^{-1}\Omega_{11} - \Omega_{22}Q_{22}^{-1}Q_{21} + Q_{21}Q_{11}^{-1}\Omega_{12}Q_{22}^{-1}Q_{21}\right)Q_{11\cdot 2}^{-1} \qquad (6.21)$$
$$V_{22} = Q_{22\cdot 1}^{-1}\left(\Omega_{22} - Q_{21}Q_{11}^{-1}\Omega_{12} - \Omega_{21}Q_{11}^{-1}Q_{12} + Q_{21}Q_{11}^{-1}\Omega_{11}Q_{11}^{-1}Q_{12}\right)Q_{22\cdot 1}^{-1}. \qquad (6.22)$$

6.5 Consistency of Error Variance Estimators

Using the methods of Section 6.2 we can show that the estimators $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2$ and $s^2 = \frac{1}{n-k}\sum_{i=1}^{n}\hat{e}_i^2$ are consistent for $\sigma^2$.

The trick is to write the residual $\hat{e}_i$ as equal to the error $e_i$ plus a deviation term
$$\hat{e}_i = y_i - x_i'\hat{\beta} = e_i + x_i'\beta - x_i'\hat{\beta} = e_i - x_i'\left(\hat{\beta} - \beta\right).$$
Thus the squared residual equals the squared error plus a deviation
$$\hat{e}_i^2 = e_i^2 - 2 e_i x_i'\left(\hat{\beta} - \beta\right) + \left(\hat{\beta} - \beta\right)' x_i x_i'\left(\hat{\beta} - \beta\right). \qquad (6.23)$$
So when we take the average of the squared residuals we obtain the average of the squared errors, plus two terms which are (hopefully) asymptotically negligible:
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} e_i^2 - 2\left(\frac{1}{n}\sum_{i=1}^{n} e_i x_i'\right)\left(\hat{\beta} - \beta\right) + \left(\hat{\beta} - \beta\right)'\left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)\left(\hat{\beta} - \beta\right). \qquad (6.24)$$
Indeed, the WLLN shows that
$$\frac{1}{n}\sum_{i=1}^{n} e_i^2 \xrightarrow{p} \sigma^2,$$
$$\frac{1}{n}\sum_{i=1}^{n} e_i x_i' \xrightarrow{p} \mathrm{E}\left(e_i x_i'\right) = 0,$$
$$\frac{1}{n}\sum_{i=1}^{n} x_i x_i' \xrightarrow{p} \mathrm{E}\left(x_i x_i'\right) = Q_{xx},$$
and Theorem 6.2.1 shows that $\hat{\beta} \xrightarrow{p} \beta$. Hence (6.24) converges in probability to $\sigma^2$. Finally, since $n/(n-k) \to 1$ as $n \to \infty$, it follows that
$$s^2 = \frac{n}{n-k}\hat{\sigma}^2 \xrightarrow{p} \sigma^2,$$
as desired.

Theorem 6.5.1 Under Assumption 6.1.1, $\hat{\sigma}^2 \xrightarrow{p} \sigma^2$ and $s^2 \xrightarrow{p} \sigma^2$ as $n \to \infty$.

6.6 Homoskedastic Covariance Matrix Estimation

Theorem 6.3.2 describes the asymptotic covariance matrix of the least-squares estimator $\hat{\beta}$. For asymptotic inference (confidence intervals and tests) we need a consistent estimate of its covariance matrix. In this section we start with the simplified problem of estimating $V_\beta^0 = Q_{xx}^{-1}\sigma^2$, the asymptotic variance of $\hat{\beta}$ under conditional homoskedasticity. As we described in Section 4.10, the conventional estimator is $\hat{V}_\beta^0 = \hat{Q}_{xx}^{-1} s^2$, where $\hat{Q}_{xx}$ and $s^2$ are defined in (6.1) and (4.20).

We now show that this estimator is consistent for $V_\beta^0$. Since the estimator is the product of two moment estimates, the method is to show consistency of each moment estimator, and then apply the continuous mapping theorem to the product.

Theorem 6.2.1 established that $\hat{Q}_{xx}^{-1} \xrightarrow{p} Q_{xx}^{-1}$, and Theorem 6.5.1 established $s^2 \xrightarrow{p} \sigma^2$. It follows by the CMT that
$$\hat{V}_\beta^0 = \hat{Q}_{xx}^{-1} s^2 \xrightarrow{p} Q_{xx}^{-1}\sigma^2 = V_\beta^0$$
so that $\hat{V}_\beta^0$ is consistent for $V_\beta^0$, as desired.

Theorem 6.6.1 Under Assumption 6.1.1, $\hat{V}_\beta^0 \xrightarrow{p} V_\beta^0$ as $n \to \infty$.

It is instructive to notice that Theorem 6.6.1 does not require the assumption of homoskedasticity. That is, $\hat{V}_\beta^0$ is consistent for $V_\beta^0$ regardless if the regression is homoskedastic or heteroskedastic. However, $V_\beta^0 = V_\beta = \operatorname{avar}(\hat{\beta})$ only under homoskedasticity. Thus in the general case, $\hat{V}_\beta^0$ is consistent for a well-defined but non-useful object.

6.7 Heteroskedastic Covariance Matrix Estimation

Theorem 6.3.2 established that the asymptotic variance of $\hat{\beta}$ is $V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1}$. We now consider estimation of this covariance matrix without imposing homoskedasticity. The standard approach is to use a plug-in estimator which replaces the unknowns with sample moments.

The moment estimator for $\Omega$ is
$$\hat{\Omega} = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'\hat{e}_i^2, \qquad (6.25)$$
leading to the plug-in covariance matrix estimator
$$\hat{V}_\beta = \hat{Q}_{xx}^{-1}\hat{\Omega}\hat{Q}_{xx}^{-1}. \qquad (6.26)$$
You can check that this is identical to the White covariance matrix estimator $\hat{V}_{\hat{\beta}}$ introduced in (4.28). Here we write the estimator as $\hat{V}_\beta$ to indicate that it is an estimate of $V_\beta = \operatorname{avar}(\hat{\beta})$. We will use both $\hat{V}_{\hat{\beta}}$ and $\hat{V}_\beta$ to indicate (6.26).

As shown in Theorem 6.2.1, $\hat{Q}_{xx}^{-1} \xrightarrow{p} Q_{xx}^{-1}$, so it remains to verify that $\hat{\Omega}$ is consistent for $\Omega$. The key is to replace the squared residual $\hat{e}_i^2$ with the squared error $e_i^2$, and then show that the difference is asymptotically negligible.

Specifically, observe that
$$\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\hat{e}_i^2 = \frac{1}{n}\sum_{i=1}^{n} x_i x_i' e_i^2 + \frac{1}{n}\sum_{i=1}^{n} x_i x_i'\left(\hat{e}_i^2 - e_i^2\right). \qquad (6.27)$$
The first term is an average of the iid random variables $x_i x_i' e_i^2$, and therefore by the WLLN converges in probability to its expectation, namely,
$$\frac{1}{n}\sum_{i=1}^{n} x_i x_i' e_i^2 \xrightarrow{p} \mathrm{E}\left(x_i x_i' e_i^2\right) = \Omega.$$
Technically, this requires that $\Omega$ has finite elements, which was shown in (6.10).

So to establish that $\hat{\Omega}$ is consistent for $\Omega$ it remains to show that
$$\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\left(\hat{e}_i^2 - e_i^2\right) \xrightarrow{p} 0. \qquad (6.28)$$
There are multiple ways to do this. A reasonably straightforward yet slightly tedious derivation is to start by applying the Triangle Inequality (A.12)
$$\left\|\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\left(\hat{e}_i^2 - e_i^2\right)\right\| \leq \frac{1}{n}\sum_{i=1}^{n}\left\|x_i x_i'\left(\hat{e}_i^2 - e_i^2\right)\right\| = \frac{1}{n}\sum_{i=1}^{n}\|x_i\|^2\left|\hat{e}_i^2 - e_i^2\right|. \qquad (6.29)$$
Then recalling the expression for the squared residual (6.23), apply the Triangle Inequality and then the Schwarz Inequality (A.10) twice
$$\left|\hat{e}_i^2 - e_i^2\right| \leq 2\left|e_i x_i'\left(\hat{\beta} - \beta\right)\right| + \left(\hat{\beta} - \beta\right)'x_i x_i'\left(\hat{\beta} - \beta\right) \leq 2|e_i|\,\|x_i\|\left\|\hat{\beta} - \beta\right\| + \|x_i\|^2\left\|\hat{\beta} - \beta\right\|^2. \qquad (6.30)$$
Combining (6.29) and (6.30), we find
$$\left\|\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\left(\hat{e}_i^2 - e_i^2\right)\right\| \leq 2\left(\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^3|e_i|\right)\left\|\hat{\beta} - \beta\right\| + \left(\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^4\right)\left\|\hat{\beta} - \beta\right\|^2 = o_p(1). \qquad (6.31)$$
The final equality holds because $\hat{\beta} - \beta \xrightarrow{p} 0$, and both averages in parenthesis are averages of random variables with finite mean under Assumption 6.1.2. Indeed, by Hölder's Inequality (B.16)
$$\mathrm{E}\left(\|x_i\|^3|e_i|\right) \leq \left(\mathrm{E}\left(\|x_i\|^3\right)^{4/3}\right)^{3/4}\left(\mathrm{E} e_i^4\right)^{1/4} = \left(\mathrm{E}\|x_i\|^4\right)^{3/4}\left(\mathrm{E} e_i^4\right)^{1/4} < \infty.$$
We have established (6.28), and thus $\hat{\Omega} \xrightarrow{p} \Omega$. Combined with (6.1) and the CMT, this yields the consistency of $\hat{V}_\beta$.

Theorem 6.7.1 Under Assumption 6.1.2, as $n \to \infty$, $\hat{\Omega} \xrightarrow{p} \Omega$ and $\hat{V}_\beta \xrightarrow{p} V_\beta$.
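For concreteness, the Python sketch below (assuming NumPy; the simulated heteroskedastic data and the helper name ols_covariances are hypothetical choices) computes the plug-in estimators (6.25) and (6.26) from a data matrix, alongside the homoskedastic estimator of Section 6.6.

```python
import numpy as np

def ols_covariances(x, y):
    """Return beta-hat, the White estimator V_beta-hat (6.26), and the
    homoskedastic estimator V0_beta-hat = Qxx-hat^{-1} s^2."""
    n, k = x.shape
    Qxx_hat = x.T @ x / n
    beta_hat = np.linalg.solve(x.T @ x, x.T @ y)
    e_hat = y - x @ beta_hat
    Omega_hat = (x * (e_hat**2)[:, None]).T @ x / n          # (6.25)
    Qxx_inv = np.linalg.inv(Qxx_hat)
    V_hat = Qxx_inv @ Omega_hat @ Qxx_inv                    # (6.26)
    s2 = e_hat @ e_hat / (n - k)
    V0_hat = Qxx_inv * s2
    return beta_hat, V_hat, V0_hat

# Example usage with simulated heteroskedastic data
rng = np.random.default_rng(9)
n = 1000
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = x @ np.array([1.0, 2.0]) + rng.standard_normal(n) * (1 + x[:, 1]**2)
b, V, V0 = ols_covariances(x, y)
print("beta-hat:", b.round(3))
print("White V_beta-hat:\n", V.round(3))
print("homoskedastic V0:\n", V0.round(3))
```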

6.8 Alternative Covariance Matrix Estimators*

Recall the alternative covariance matrix estimators $\tilde{V}_\beta$ and $\bar{V}_\beta$, which take the form (6.26) but with $\hat{\Omega}$ replaced by
$$\tilde{\Omega} = \frac{1}{n}\sum_{i=1}^{n}\left(1 - h_{ii}\right)^{-2} x_i x_i'\hat{e}_i^2$$
and
$$\bar{\Omega} = \frac{1}{n}\sum_{i=1}^{n}\left(1 - h_{ii}\right)^{-1} x_i x_i'\hat{e}_i^2,$$
respectively. To show that these estimators are also consistent for $V_\beta$, given $\hat{\Omega} \xrightarrow{p} \Omega$, it is sufficient to show that the differences $\tilde{\Omega} - \hat{\Omega}$ and $\bar{\Omega} - \hat{\Omega}$ converge in probability to zero as $n \to \infty$.

The trick is to use the fact that the leverage values are asymptotically negligible:
$$\max_{1 \leq i \leq n} h_{ii} = o_p(1). \qquad (6.32)$$
(See Theorem 6.20.1 in Section 6.20.) Then using the Triangle Inequality and (6.32),
$$\left\|\bar{\Omega} - \hat{\Omega}\right\| \leq \frac{1}{n}\sum_{i=1}^{n}\left\|x_i x_i'\right\|\hat{e}_i^2\left|\left(1 - h_{ii}\right)^{-1} - 1\right| \leq \left(\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^2\hat{e}_i^2\right)\max_{1 \leq i \leq n}\frac{h_{ii}}{1 - h_{ii}} = o_p(1).$$
Similarly,
$$\left\|\tilde{\Omega} - \hat{\Omega}\right\| \leq \frac{1}{n}\sum_{i=1}^{n}\left\|x_i x_i'\right\|\hat{e}_i^2\left|\left(1 - h_{ii}\right)^{-2} - 1\right| \leq \left(\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^2\hat{e}_i^2\right)\max_{1 \leq i \leq n}\frac{2 h_{ii} - h_{ii}^2}{\left(1 - h_{ii}\right)^2} = o_p(1).$$

Theorem 6.8.1 Under Assumption 6.1.2, as $n \to \infty$, $\tilde{\Omega} \xrightarrow{p} \Omega$, $\bar{\Omega} \xrightarrow{p} \Omega$, $\tilde{V}_\beta \xrightarrow{p} V_\beta$, and $\bar{V}_\beta \xrightarrow{p} V_\beta$.

6.9 Functions of Parameters

Sometimes we are interested in a transformation of the coefficient vector $\beta = (\beta_1, ..., \beta_k)$. For example, we may be interested in a single coefficient $\beta_j$, or a ratio $\beta_j/\beta_l$. In these cases we can write the transformation as a function of the coefficients, e.g. $\theta = h(\beta)$ for some function $h : \mathbb{R}^k \to \mathbb{R}^q$. The estimate of $\theta$ is
$$\hat{\theta} = h(\hat{\beta}).$$
By the continuous mapping theorem (Theorem 5.9.1) and the fact $\hat{\beta} \xrightarrow{p} \beta$, $\hat{\theta}$ is consistent for $\theta$.

Theorem 6.9.1 Under Assumption 6.1.1, if $h(\beta)$ is continuous at the true value of $\beta$, then as $n \to \infty$, $\hat{\theta} \xrightarrow{p} \theta$.

Furthermore, by the Delta Method (Theorem 5.10.3) we know that $\hat{\theta}$ is asymptotically normal.

Assumption 6.9.1 $h(\beta) : \mathbb{R}^k \to \mathbb{R}^q$ is continuously differentiable at the true value of $\beta$ and $H_\beta = \frac{\partial}{\partial\beta} h(\beta)'$ has rank $q$.

Theorem 6.9.2 Under Assumptions 6.1.2 and 6.9.1, as $n \to \infty$,
$$\sqrt{n}\left(\hat{\theta} - \theta\right) \xrightarrow{d} \mathrm{N}\left(0, V_\theta\right) \qquad (6.33)$$
where
$$V_\theta = H_\beta' V_\beta H_\beta. \qquad (6.34)$$

In many cases the function $h(\beta)$ is linear:
$$h(\beta) = R'\beta \qquad (6.35)$$
for some $k \times q$ matrix $R$. In particular, if $R$ is a selector matrix
$$R = \begin{pmatrix} I \\ 0 \end{pmatrix}$$
then we can conformably partition $\beta = (\beta_1', \beta_2')'$ so that $R'\beta = \beta_1$ for $\beta = (\beta_1', \beta_2')'$. Then
$$V_\theta = \begin{pmatrix} I & 0 \end{pmatrix} V_\beta \begin{pmatrix} I \\ 0 \end{pmatrix} = V_{11},$$
the upper-left sub-matrix of $V_\beta$ given in (6.20). In this case (6.33) states that
$$\sqrt{n}\left(\hat{\beta}_1 - \beta_1\right) \xrightarrow{d} \mathrm{N}\left(0, V_{11}\right).$$
That is, subsets of $\hat{\beta}$ are approximately normal with variances given by the conformable subcomponents of $V_\beta$.

To illustrate the case of a nonlinear transformation, take the example $\theta = \beta_j/\beta_l$ for $j \neq l$. Then
$$H_\beta = \frac{\partial}{\partial\beta} h(\beta) = \begin{pmatrix} 0 \\ \vdots \\ 1/\beta_l \\ \vdots \\ -\beta_j/\beta_l^2 \\ \vdots \\ 0 \end{pmatrix} \qquad (6.36)$$
(with $1/\beta_l$ in the $j$'th place and $-\beta_j/\beta_l^2$ in the $l$'th place), so
$$V_\theta = \frac{V_{jj}}{\beta_l^2} + \frac{V_{ll}\beta_j^2}{\beta_l^4} - \frac{2 V_{jl}\beta_j}{\beta_l^3}.$$

For inference we need an estimate of the asymptotic variance matrix $V_\theta = H_\beta' V_\beta H_\beta$, and for this it is typical to use a plug-in estimator. The natural estimator of $H_\beta$ is the derivative evaluated at the point estimates
$$\hat{H}_\beta = \frac{\partial}{\partial\beta} h(\hat{\beta})'. \qquad (6.37)$$
The derivative in (6.37) may be calculated analytically or numerically. By analytically, we mean working out the formula for the derivative and replacing the unknowns by point estimates. For example, if $\theta = \beta_j/\beta_l$, then $\frac{\partial}{\partial\beta} h(\beta)$ is (6.36). However in some cases the function $h(\beta)$ may be extremely complicated and a formula for the analytic derivative may not be easily available. In this case calculation by numerical differentiation may be preferable. Let $\delta_l = (0 \cdots 1 \cdots 0)'$ be the unit vector with the "1" in the $l$'th place. Then the $jl$'th element of a numerical derivative $\hat{H}_\beta$ is
$$\hat{H}_{\beta,jl} = \frac{h_j\left(\hat{\beta} + \delta_l\varepsilon\right) - h_j\left(\hat{\beta}\right)}{\varepsilon}$$
for some small $\varepsilon$.

The estimate of $V_\theta$ is
$$\hat{V}_\theta = \hat{H}_\beta'\hat{V}_\beta\hat{H}_\beta. \qquad (6.38)$$
Alternatively, $\hat{V}_\beta^0$, $\tilde{V}_\beta$ or $\bar{V}_\beta$ may be used in place of $\hat{V}_\beta$. Given (6.37), (6.38) is simple to calculate using matrix operations.

As the primary justication for Vb is the asymptotic approximation (6.33), Vb is often called

an asymptotic covariance matrix estimator.

p

The estimator Vb is consistent for V under the conditions of Theorem 6.9.2 since Vb b ! V

by Theorem 6.7.1, and

c = @ h( b )0 p! @ h( )0 = H

H

@

@

p

since b ! and the function @ h( )0 is continuous.

@

134

p

Vb

6.10

!V :
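To make the plug-in calculation concrete, here is a minimal sketch in Python/NumPy (an illustration only; the function `h`, the estimate `beta_hat`, and the matrix `V_beta_hat` are hypothetical placeholders) of the numerical derivative and the variance estimate (6.37)-(6.38):

```python
import numpy as np

def numerical_jacobian(h, beta_hat, eps=1e-6):
    """One-sided numerical derivative H_hat (k x q) of h at beta_hat."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    h0 = np.atleast_1d(h(beta_hat))
    k, q = beta_hat.size, h0.size
    H = np.zeros((k, q))
    for l in range(k):
        delta = np.zeros(k)
        delta[l] = eps
        H[l, :] = (np.atleast_1d(h(beta_hat + delta)) - h0) / eps
    return H

def delta_method_variance(h, beta_hat, V_beta_hat):
    """Plug-in estimate V_theta_hat = H' V_beta H, as in (6.38)."""
    H = numerical_jacobian(h, beta_hat)
    return H.T @ V_beta_hat @ H

# Example: theta = beta_2 / beta_3 (a hypothetical ratio of coefficients)
# h = lambda b: np.array([b[1] / b[2]])
# V_theta_hat = delta_method_variance(h, beta_hat, V_beta_hat)
```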

6.10 Asymptotic Standard Errors

As described in Section 4.12, a standard error is an estimate of the standard deviation of the distribution of an estimator. Thus if $\hat{V}_\beta$ is an estimate of the asymptotic covariance of $\sqrt{n}(\hat{\beta}-\beta)$, then $n^{-1}\hat{V}_\beta$ is an estimate of the variance of $\hat{\beta}$, and standard errors are the square roots of the diagonal elements of this matrix. These take the form

$$
s(\hat{\beta}_j) = \sqrt{n^{-1}\hat{V}_{\beta_j}} = n^{-1/2}\left[ \hat{V}_\beta \right]_{jj}^{1/2} .
$$

When the justification for $\hat{V}_\beta$ is based on asymptotic theory we call $s(\hat{\beta}_j)$ an asymptotic standard error for $\hat{\beta}_j$.

Standard errors for $\hat{\theta}$ are constructed similarly. Supposing that $q = 1$ (so $h(\beta)$ is real-valued), then the asymptotic standard error for $\hat{\theta}$ is the square root of $n^{-1}\hat{V}_\theta$, that is,

$$
s(\hat{\theta}) = \sqrt{n^{-1}\hat{V}_\theta} = n^{-1/2}\sqrt{\hat{H}_\beta'\hat{V}_\beta\hat{H}_\beta} .
$$

When calculating and reporting coefficient estimates $\hat{\beta}$ or estimates $\hat{\theta}$ which are transformations of the original coefficient estimates, it is good practice to report standard errors for each reported estimate. This helps users of the work assess the estimation precision.
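As a small computational sketch (Python/NumPy; `V_beta_hat` is assumed to be an estimate of the asymptotic covariance of $\sqrt{n}(\hat\beta-\beta)$, and the numbers in the comment come from the two-regressor wage example used later in the chapter):

```python
import numpy as np

def asymptotic_standard_errors(V_beta_hat, n):
    """Standard errors s(beta_j) = sqrt([n^{-1} V_hat]_{jj})."""
    return np.sqrt(np.diag(V_beta_hat) / n)

# Example from Section 6.13:
# V_hat = np.array([[7.092, -0.445], [-0.445, 0.029]])
# se = asymptotic_standard_errors(V_hat, 61)
```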

6.11 t statistic

Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}$ be a parameter of interest (for example, a single element of $\beta$), $\hat{\theta}$ its estimate and $s(\hat{\theta})$ its asymptotic standard error. Consider the statistic

$$
t_n(\theta) = \frac{\hat{\theta}-\theta}{s(\hat{\theta})} . \qquad (6.39)
$$

Different writers have called (6.39) a t-statistic, a t-ratio, a z-statistic or a studentized statistic, sometimes using the different labels to distinguish between finite-sample and asymptotic inference. As the statistics themselves are always (6.39) we won't make such a distinction, and will simply refer to $t_n(\theta)$ as a t-statistic or a t-ratio. We also often suppress the parameter dependence, writing it as $t_n$. The t-statistic is a simple function of the estimate, its standard error, and the parameter.

By Theorem 6.9.2, $\sqrt{n}(\hat{\theta}-\theta) \to_d N(0, V_\theta)$ and $\hat{V}_\theta \to_p V_\theta$. Thus

$$
t_n(\theta) = \frac{\hat{\theta}-\theta}{s(\hat{\theta})}
= \frac{\sqrt{n}\left(\hat{\theta}-\theta\right)}{\sqrt{\hat{V}_\theta}}
\to_d \frac{N(0, V_\theta)}{\sqrt{V_\theta}}
= Z \sim N(0,1) .
$$

The last equality is by the property that linear scales of normal distributions are normal.

Thus the asymptotic distribution of the t-ratio $t_n(\theta)$ is the standard normal. Since this distribution does not depend on the parameters, we say that $t_n(\theta)$ is asymptotically pivotal. In special cases (such as the normal regression model, see Section 3.17), the statistic $t_n$ has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that $t_n$ is exactly pivotal. In general, however, pivotal statistics are unavailable and we must rely on asymptotically pivotal statistics.

As we will see in the next section, it is also useful to consider the distribution of the absolute t-ratio $|t_n(\theta)|$. Since $t_n(\theta) \to_d Z$, the continuous mapping theorem yields $|t_n(\theta)| \to_d |Z|$. Letting $\Phi(u) = \Pr(Z \le u)$ denote the standard normal distribution function, we can calculate that the distribution function of $|Z|$ is

$$
\Pr\left(|Z| \le u\right) = \Pr\left(-u \le Z \le u\right) = \Pr\left(Z \le u\right) - \Pr\left(Z < -u\right) = \Phi(u) - \Phi(-u) = 2\Phi(u)-1 := \overline{\Phi}(u) . \qquad (6.40)
$$

Theorem 6.11.1 Under Assumptions 6.1.2 and 6.9.1, $t_n(\theta) \to_d Z \sim N(0,1)$ and $|t_n(\theta)| \to_d |Z|$.

The asymptotic normality of Theorem 6.11.1 is used to justify confidence intervals and tests for the parameters.
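A minimal numerical sketch (Python; the estimate, standard error, and hypothesized value passed in are placeholders):

```python
def t_statistic(theta_hat, theta0, se):
    """t-ratio t_n(theta0) = (theta_hat - theta0) / s(theta_hat), equation (6.39)."""
    return (theta_hat - theta0) / se

# e.g. t_statistic(0.095, 0.0, 0.020) is 4.75,
# the male union premium example revisited in Chapter 8
```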

6.12 Confidence Intervals

The OLS estimate $\hat{\beta}$ is a point estimate for $\beta$, meaning that $\hat{\beta}$ is a single value in $\mathbb{R}^k$. A broader concept is a set estimate $C_n$ which is a collection of values in $\mathbb{R}^k$. When the parameter $\theta$ is real-valued then it is common to focus on intervals $C_n = [L_n, U_n]$, which is called an interval estimate for $\theta$. The goal of an interval estimate $C_n$ is typically to contain the true value, e.g. $\theta \in C_n$, with high probability, yet without being too big.

The interval estimate $C_n$ is a function of the data and hence is random. The coverage probability of the interval $C_n = [L_n, U_n]$ is $\Pr(\theta \in C_n)$. The randomness comes from $C_n$ as the parameter $\theta$ is treated as fixed.

Interval estimates $C_n$ are typically called confidence intervals as the goal is typically to set the coverage probability to equal a pre-specified target, typically 90% or 95%. $C_n$ is called a $(1-\alpha)\%$ confidence interval if $\inf_\theta \Pr(\theta \in C_n) = 1-\alpha$.

There is not a unique method to construct confidence intervals. For example, a simple (yet silly) interval is

$$
C_n = \begin{cases} \mathbb{R} & \text{with probability } 1-\alpha \\ \{\hat{\theta}\} & \text{with probability } \alpha . \end{cases}
$$

By construction this confidence interval has perfect coverage, but $C_n$ is uninformative about $\theta$ and is therefore not useful.

When we have an asymptotically normal parameter estimate $\hat{\theta}$ with standard error $s(\hat{\theta})$, the standard confidence interval for $\theta$ takes the form

$$
C_n = \left[ \hat{\theta} - c \cdot s(\hat{\theta}),\; \hat{\theta} + c \cdot s(\hat{\theta}) \right] \qquad (6.41)
$$

where $c > 0$ is a pre-specified constant. This confidence interval is symmetric about the point estimate $\hat{\theta}$, and its length is proportional to the standard error $s(\hat{\theta})$.

Equivalently, $C_n$ is the set of parameter values for $\theta$ such that the t-statistic $t_n(\theta)$ is smaller (in absolute value) than $c$; that is

$$
C_n = \left\{ \theta : |t_n(\theta)| \le c \right\} = \left\{ \theta : -c \le \frac{\hat{\theta}-\theta}{s(\hat{\theta})} \le c \right\} .
$$

The coverage probability of this confidence interval is

$$\Pr\left(\theta \in C_n\right) = \Pr\left(|t_n(\theta)| \le c\right)$$

which is generally unknown. We can approximate the coverage probability by taking the asymptotic limit as $n \to \infty$. Since $|t_n(\theta)|$ is asymptotically $|Z|$ (Theorem 6.11.1), it follows that as $n \to \infty$,

$$\Pr\left(\theta \in C_n\right) \to \Pr\left(|Z| \le c\right) = \overline{\Phi}(c)$$

where $\overline{\Phi}(u)$ is given in (6.40). We call this the asymptotic coverage probability. Since the t-ratio is asymptotically pivotal, the asymptotic coverage probability is independent of the parameter $\theta$, and is only a function of $c$.

As we mentioned before, an ideal confidence interval has a pre-specified probability coverage $1-\alpha$, typically 90% or 95%. This means selecting the constant $c$ so that

$$\overline{\Phi}(c) = 1-\alpha .$$

Effectively, this makes $c$ a function of $\alpha$, and can be backed out of a normal distribution table. For example, $\alpha = 0.05$ (a 95% interval) implies $c = 1.96$ and $\alpha = 0.1$ (a 90% interval) implies $c = 1.645$. Rounding 1.96 to 2, we obtain the most commonly used confidence interval in applied econometric practice

$$
C_n = \left[ \hat{\theta} - 2s(\hat{\theta}),\; \hat{\theta} + 2s(\hat{\theta}) \right] . \qquad (6.42)
$$

This is a useful rule-of-thumb. This asymptotic 95% confidence interval $C_n$ is simple to compute and can be roughly calculated from tables of coefficient estimates and standard errors. (Technically, it is an asymptotic 95.4% interval, due to the substitution of 2.0 for 1.96, but this distinction is meaningless.)

Theorem 6.12.1 Under Assumptions 6.1.2 and 6.9.1, for $C_n$ defined in (6.41), $\Pr(\theta \in C_n) \to \overline{\Phi}(c)$. For $c = 1.96$, $\Pr(\theta \in C_n) \to 0.95$.

Confidence intervals are a simple yet effective tool to assess estimation uncertainty. When reading a set of empirical results, look at the estimated coefficient estimates and the standard errors. For a parameter of interest, compute the confidence interval $C_n$ and consider the meaning of the spread of the suggested values. If the range of values in the confidence interval is too wide to learn about $\theta$, then do not jump to a conclusion about $\theta$ based on the point estimate alone.
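A one-line computational sketch of (6.41) (Python; all inputs are placeholders):

```python
def confidence_interval(theta_hat, se, c=1.96):
    """Asymptotic confidence interval (6.41); c = 1.96 gives 95%, c = 1.645 gives 90%."""
    return (theta_hat - c * se, theta_hat + c * se)
```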

6.13 Regression Intervals

In the linear regression model the conditional mean of $y_i$ given $x_i = x$ is

$$m(x) = E\left(y_i \mid x_i = x\right) = x'\beta .$$

In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta) = x'\beta$ and $\theta = h(\beta)$, we see that $\hat{m}(x) = \hat{\theta} = x'\hat{\beta}$ and $H_\beta = x$, so $s(\hat{\theta}) = \sqrt{n^{-1}x'\hat{V}_\beta x}$. Thus an asymptotic 95% confidence interval for $m(x)$ is

$$\left[ x'\hat{\beta} \pm 2\sqrt{n^{-1}x'\hat{V}_\beta x} \right] .$$

It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.

To illustrate, we return to the log wage regression (3.11) of Section 3.6. The estimated regression equation is

$$\widehat{\log(Wage)} = x'\hat{\beta} = 0.626 + 0.156\,x$$

where $x$ is education. The covariance matrix estimate is

$$
\hat{V}_{\hat\beta} = \begin{pmatrix} 7.092 & -0.445 \\ -0.445 & 0.029 \end{pmatrix}
$$

and the sample size is $n = 61$. Thus the 95% confidence interval for the regression takes the form

$$
0.626 + 0.156\,x \;\pm\; 2\sqrt{\frac{1}{61}\left( 7.092 - 0.89\,x + 0.029\,x^2 \right)} .
$$

The estimated regression and 95% intervals are shown in Figure 6.6. Notice that the confidence bands take a hyperbolic shape. This means that the regression line is less precisely estimated for very large and very small values of education.

Plots of the estimated regression line and confidence intervals are especially useful when the regression includes nonlinear terms. To illustrate, consider the log wage regression (3.12) which includes experience and its square,

$$\widehat{\log(Wage)} = 1.06 + 0.116\ \text{education} + 0.010\ \text{experience} - 0.014\ \text{experience}^2/100 , \qquad (6.43)$$

and has $n = 2454$ observations. We are interested in plotting the regression estimate and regression intervals as a function of experience. Since the regression also includes education, to plot the estimates in a simple graph we need to fix education at a specific value. We select education = 12. This only affects the level of the estimated regression, since education enters without an interaction. Define the points of evaluation

$$
z(x) = \begin{pmatrix} 1 \\ 12 \\ x \\ x^2/100 \end{pmatrix}
$$

where $x$ = experience. The covariance matrix estimate is

$$
\hat{V}_{\hat\beta} = \begin{pmatrix}
22.92 & -1.0601 & -0.56687 & 0.86626 \\
-1.0601 & 0.06454 & 0.0080737 & -0.0066749 \\
-0.56687 & 0.0080737 & 0.040736 & -0.075583 \\
0.86626 & -0.0066749 & -0.075583 & 0.14994
\end{pmatrix} .
$$

Thus the 95% confidence interval for the regression, as a function of $x$ = experience, is

$$
1.06 + 0.116 \cdot 12 + 0.010\,x - 0.014\,x^2/100 \;\pm\; 2\sqrt{\frac{1}{2454}\, z(x)'\,\hat{V}_{\hat\beta}\, z(x)}
$$

where the half-width simplifies to

$$
\frac{2}{100}\sqrt{27.592 - 3.8304\,x + 0.23007\,x^2 - 0.00616\,x^3 + 0.0000611\,x^4} .
$$

The estimated regression and 95% intervals are shown in Figure 6.7. The regression interval widens greatly for small and large values of experience, indicating considerable uncertainty about the effect of experience on mean wages for this population. The confidence bands take a more complicated shape than in Figure 6.6 due to the nonlinear specification.
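The following sketch (Python/NumPy; the coefficient vector, covariance matrix, and sample size are taken from the experience example above, while the evaluation grid is merely illustrative) computes the point estimate $z(x)'\hat\beta$ and the band half-width $2\sqrt{n^{-1}z(x)'\hat V_{\hat\beta}z(x)}$ over a grid of experience values:

```python
import numpy as np

beta_hat = np.array([1.06, 0.116, 0.010, -0.014])  # intercept, education, experience, experience^2/100
V_hat = np.array([[22.92,   -1.0601,   -0.56687,   0.86626],
                  [-1.0601,  0.06454,   0.0080737, -0.0066749],
                  [-0.56687, 0.0080737, 0.040736,  -0.075583],
                  [0.86626, -0.0066749, -0.075583,  0.14994]])
n = 2454

for x in range(0, 51, 10):                       # experience from 0 to 50 years
    z = np.array([1.0, 12.0, x, x**2 / 100])     # education fixed at 12
    m_hat = z @ beta_hat                         # estimated conditional mean
    half_width = 2 * np.sqrt(z @ V_hat @ z / n)  # 95% regression-band half-width
    print(x, round(m_hat, 3), round(half_width, 3))
```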

6.14 Forecast Intervals

For a given value of the regressors $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable forecasting rule is the conditional mean $m(x)$ as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\hat{m}(x) = x'\hat{\beta}$. We would also like a measure of uncertainty for the forecast.

The forecast error is $\hat{e}_i = y_i - \hat{m}(x) = e_i - x'(\hat{\beta}-\beta)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat{\beta}$, this has variance

$$
E\hat{e}_i^2 = E\left(e_i^2 \mid x_i = x\right) + x' E\left(\hat{\beta}-\beta\right)\left(\hat{\beta}-\beta\right)' x
= \sigma^2(x) + n^{-1}x'V_\beta x .
$$

Assuming $E\left(e_i^2 \mid x_i\right) = \sigma^2$, the natural estimate of this variance is $\hat{\sigma}^2 + n^{-1}x'\hat{V}_\beta x$, so a standard error for the forecast is $\hat{s}(x) = \sqrt{\hat{\sigma}^2 + n^{-1}x'\hat{V}_\beta x}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\tilde{\sigma}^2(x) = \tilde{\alpha}'z$ from (9.5), then the forecast standard error is $\hat{s}(x) = \sqrt{\tilde{\sigma}^2(x) + n^{-1}x'\hat{V}_\beta x}$.

It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is

$$\left[ x'\hat{\beta} \pm 2\hat{s}(x) \right] ,$$

but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio

$$\frac{e_i - x'\left(\hat{\beta}-\beta\right)}{\hat{s}(x)} .$$

But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution $N(0, \sigma^2)$, which is generally invalid.

To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Perhaps due to this difficulty, many applied forecasters use the simple approximate interval $\left[ x'\hat{\beta} \pm 2\hat{s}(x) \right]$ despite the lack of a convincing justification.

6.15 Wald Statistic

Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}^q$ be any parameter vector of interest, $\hat{\theta}$ its estimate and $\hat{V}_\theta$ its covariance matrix estimator. Consider the quadratic form

$$
W_n(\theta) = n\left(\hat{\theta}-\theta\right)'\hat{V}_\theta^{-1}\left(\hat{\theta}-\theta\right) , \qquad (6.44)
$$

called a Wald statistic. We are interested in its sampling distribution.

The asymptotic distribution of $W_n(\theta)$ is simple to derive given Theorem 6.9.2 and Theorem 6.9.3, which show that

$$\sqrt{n}\left(\hat{\theta}-\theta\right) \to_d Z \sim N(0, V_\theta)$$

and

$$\hat{V}_\theta \to_p V_\theta .$$

It follows that

$$
W_n(\theta) = \sqrt{n}\left(\hat{\theta}-\theta\right)'\hat{V}_\theta^{-1}\sqrt{n}\left(\hat{\theta}-\theta\right) \to_d Z'V_\theta^{-1}Z , \qquad (6.45)
$$

a quadratic in the normal random vector $Z$. Here we can appeal to a useful result from probability theory. (See Theorem B.9.3 in the Appendix.)

Theorem 6.15.1 If $Z \sim N(0, A)$ with $A > 0$, $q \times q$, then $Z'A^{-1}Z \sim \chi^2_q$, a chi-square random variable with $q$ degrees of freedom.

The asymptotic distribution in (6.45) takes exactly this form. Note that $V_\theta > 0$ since $H_\beta$ is full rank under Assumption 6.9.1. It follows that $W_n(\theta)$ converges in distribution to a chi-square random variable.

Theorem 6.15.2 Under Assumptions 6.1.2 and 6.9.1, as $n \to \infty$,

$$W_n(\theta) \to_d \chi^2_q .$$

Theorem 6.15.2 is used to justify multivariate confidence regions and multivariate hypothesis tests.
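A minimal computational sketch of (6.44) (Python/NumPy; `theta_hat`, `theta0`, `V_theta_hat`, and `n` are placeholders; the chi-square critical value is taken from SciPy, which is assumed to be available):

```python
import numpy as np
from scipy.stats import chi2

def wald_statistic(theta_hat, theta0, V_theta_hat, n):
    """Wald statistic W_n = n (theta_hat - theta0)' V_theta^{-1} (theta_hat - theta0), eq. (6.44)."""
    d = np.asarray(theta_hat, dtype=float) - np.asarray(theta0, dtype=float)
    return n * d @ np.linalg.solve(V_theta_hat, d)

# Reject theta = theta0 at the 5% level if W exceeds the chi^2_q critical value:
# W = wald_statistic(theta_hat, theta0, V_theta_hat, n)
# reject = W > chi2.ppf(0.95, df=len(theta_hat))
```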

6.16 Confidence Regions

A confidence region $C_n$ is a set estimator for $\theta \in \mathbb{R}^q$ when $q > 1$. A confidence region $C_n$ is a set in $\mathbb{R}^q$ intended to cover the true parameter value with a pre-selected probability $1-\alpha$. Thus an ideal confidence region has the coverage probability $\Pr(\theta \in C_n) = 1-\alpha$. In practice it is typically not possible to construct a region with exact coverage, but we can calculate its asymptotic coverage.

When the parameter estimate satisfies the conditions of Theorem 6.15.2, a good choice for a confidence region is the ellipse

$$C_n = \left\{ \theta : W_n(\theta) \le c_{1-\alpha} \right\}$$

with $c_{1-\alpha}$ the $1-\alpha$'th quantile of the $\chi^2_q$ distribution. (Thus $F_q(c_{1-\alpha}) = 1-\alpha$.) These quantiles can be found from the $\chi^2_q$ critical value table.

Theorem 6.15.2 implies

$$\Pr\left(\theta \in C_n\right) \to \Pr\left(\chi^2_q \le c_{1-\alpha}\right) = 1-\alpha .$$

To illustrate the construction of a confidence region, consider the estimated regression (6.43) of the model

$$\log(Wage) = \beta_1\ \text{education} + \beta_2\ \text{experience} + \beta_3\ \text{experience}^2/100 + \beta_4 + e .$$

Suppose that the two parameters of interest are the percentage return to education $\theta_1 = 100\beta_1$ and the percentage return to experience for individuals with 10 years experience $\theta_2 = 100\beta_2 + 20\beta_3$. (We need to condition on the level of experience since the regression is quadratic in experience.) These two parameters are a linear transformation of the regression parameters with point estimates

$$
\hat{\theta} = \begin{pmatrix} 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 20 \end{pmatrix}\hat{\beta} = \begin{pmatrix} 11.6 \\ 0.72 \end{pmatrix} ,
$$

where $\hat{\beta}$ is stacked in the estimation order of (6.43): intercept, education, experience, experience$^2$/100. The covariance matrix estimate is

$$
\hat{V}_\theta = \begin{pmatrix} 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 20 \end{pmatrix}
\hat{V}_{\hat\beta}
\begin{pmatrix} 0 & 0 \\ 100 & 0 \\ 0 & 100 \\ 0 & 20 \end{pmatrix}
= \begin{pmatrix} 645.4 & 67.387 \\ 67.387 & 165 \end{pmatrix}
$$

with inverse

$$
\hat{V}_\theta^{-1} = \begin{pmatrix} 0.0016184 & -0.00066098 \\ -0.00066098 & 0.0063306 \end{pmatrix} .
$$

Thus the Wald statistic is

$$
W_n(\theta) = n\left(\hat{\theta}-\theta\right)'\hat{V}_\theta^{-1}\left(\hat{\theta}-\theta\right)
= 2454 \begin{pmatrix} 11.6-\theta_1 \\ 0.72-\theta_2 \end{pmatrix}'
\begin{pmatrix} 0.0016184 & -0.00066098 \\ -0.00066098 & 0.0063306 \end{pmatrix}
\begin{pmatrix} 11.6-\theta_1 \\ 0.72-\theta_2 \end{pmatrix}
$$
$$
= 3.97\,(11.6-\theta_1)^2 - 3.2441\,(11.6-\theta_1)(0.72-\theta_2) + 15.535\,(0.72-\theta_2)^2 .
$$

The 90% quantile of the $\chi^2_2$ distribution is 4.605 (we use the $\chi^2_2$ distribution as the dimension of $\theta$ is two), so an asymptotic 90% confidence region for the two parameters is the interior of the ellipse

$$
3.97\,(11.6-\theta_1)^2 - 3.2441\,(11.6-\theta_1)(0.72-\theta_2) + 15.535\,(0.72-\theta_2)^2 = 4.605 ,
$$

which is displayed in Figure 6.8. Since the estimated correlation of the two coefficient estimates is small (about 0.2) the ellipse is close to circular.

[Figure 6.8: Confidence Region for Return to Experience and Return to Education]

6.17 Semiparametric Efficiency in the Projection Model

In Section 4.6 we presented the Gauss-Markov theorem, which stated that in the homoskedastic CEF model, in the class of linear unbiased estimators the one with the smallest variance is least-squares. As we noted in that section, the restriction to linear unbiased estimators is unsatisfactory as it leaves open the possibility that an alternative (non-linear) estimator could have a smaller asymptotic variance. In addition, the restriction to the homoskedastic CEF model is also unsatisfactory as the projection model is more relevant for empirical application. The question remains: what is the most efficient estimator of the projection coefficient $\beta$ (or functions $\theta = h(\beta)$) in the projection model?

It turns out that it is straightforward to show that the projection model falls in the estimator class considered in Proposition 5.13.2. It follows that the least-squares estimator is semiparametrically efficient in the sense that it has the smallest asymptotic variance in the class of semiparametric estimators of $\beta$. This is a more powerful and interesting result than the Gauss-Markov theorem.

To see this, it is worth rephrasing Proposition 5.13.2 with amended notation. Suppose that a parameter of interest is $\theta = g(\mu)$ where $\mu = Ez_i$, for which the moment estimators are $\hat{\mu} = \frac{1}{n}\sum_{i=1}^n z_i$ and $\hat{\theta} = g(\hat{\mu})$. Let $\mathcal{L}_2(g) = \left\{ F : E\|z\|^2 < \infty,\ g(u)\ \text{is continuously differentiable at}\ u = Ez \right\}$ be the set of distributions for which $\hat{\theta}$ satisfies the central limit theorem.

Proposition 6.17.1 In the class of distributions $F \in \mathcal{L}_2(g)$, $\hat{\theta}$ is semiparametrically efficient for $\theta$ in the sense that its asymptotic variance equals the semiparametric efficiency bound.

Proposition 6.17.1 says that under the minimal conditions in which $\hat{\theta}$ is asymptotically normal, no semiparametric estimator can have a smaller asymptotic variance than $\hat{\theta}$.

To show that an estimator is semiparametrically efficient it is sufficient to show that it falls in the class covered by this Proposition. To show that the projection model falls in this class, we write $\beta = Q_{xx}^{-1}Q_{xy} = g(\mu)$ where $\mu = Ez_i$ and $z_i = (x_i x_i',\, x_i y_i)$. The class $\mathcal{L}_2(g)$ equals the class of distributions

$$
\mathcal{L}_4(\beta) = \left\{ F : Ey^4 < \infty,\ E\|x\|^4 < \infty,\ Ex_i x_i' > 0 \right\} .
$$

Proposition 6.17.2 In the class of distributions $F \in \mathcal{L}_4(\beta)$, the least-squares estimator $\hat{\beta}$ is semiparametrically efficient for $\beta$.

The least-squares estimator is an asymptotically efficient estimator of the projection coefficient because the latter is a smooth function of sample moments and the model implies no further restrictions. However, if the class of permissible distributions is restricted to a strict subset of $\mathcal{L}_4(\beta)$ then least-squares can be inefficient. For example, the linear CEF model with heteroskedastic errors is a strict subset of $\mathcal{L}_4(\beta)$, and the GLS estimator has a smaller asymptotic variance than OLS. In this case, the knowledge that the true conditional mean is linear allows for more efficient estimation of the unknown parameter.

From Proposition 6.17.1 we can also deduce that plug-in estimators $\hat{\theta} = h(\hat{\beta})$ are semiparametrically efficient estimators of $\theta = h(\beta)$ when $h$ is continuously differentiable. We can also deduce that other parameter estimators are semiparametrically efficient, such as $\hat{\sigma}^2$ for $\sigma^2$. To see this, note that we can write

$$
\sigma^2 = E\left(y_i - x_i'\beta\right)^2 = Ey_i^2 - 2E\left(y_i x_i'\right)\beta + \beta'E\left(x_i x_i'\right)\beta = Q_{yy} - Q_{yx}Q_{xx}^{-1}Q_{xy}
$$

which is a smooth function of the moments $Q_{yy}$, $Q_{yx}$ and $Q_{xx}$. Similarly the estimator $\hat{\sigma}^2$ equals

$$
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \hat{e}_i^2 = \hat{Q}_{yy} - \hat{Q}_{yx}\hat{Q}_{xx}^{-1}\hat{Q}_{xy} .
$$

Since the variables $y_i^2$, $y_i x_i'$ and $x_i x_i'$ all have finite variances when $F \in \mathcal{L}_4(\beta)$, the conditions of Proposition 6.17.1 are satisfied. We conclude:

Proposition 6.17.3 In the class of distributions $F \in \mathcal{L}_4(\beta)$, $\hat{\sigma}^2$ is semiparametrically efficient for $\sigma^2$.

6.18 Semiparametric Efficiency in the Homoskedastic Regression Model*

In Section 6.17 we showed that the OLS estimator is semiparametrically efficient in the projection model. What if we restrict attention to the classical homoskedastic regression model? Is OLS still efficient in this class? In this section we derive the asymptotic semiparametric efficiency bound for this model, and show that it is the same as that obtained by the OLS estimator. Therefore it turns out that least-squares is efficient in this class as well.

Recall that in the homoskedastic regression model the asymptotic variance of the OLS estimator $\hat{\beta}$ for $\beta$ is $V^0_\beta = Q_{xx}^{-1}\sigma^2$. Therefore, as described in Section 5.13, it is sufficient to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V^0_\beta$. This would establish that $V^0_\beta$ is the semiparametric variance bound and the OLS estimator $\hat{\beta}$ is semiparametrically efficient for $\beta$.

Let the joint density of $y$ and $x$ be written as $f(y, x) = f_1(y \mid x)\,f_2(x)$, the product of the conditional density of $y$ given $x$ and the marginal density of $x$. Now consider the parametric submodel

$$
f(y, x \mid \theta) = f_1(y \mid x)\left( 1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2 \right) f_2(x) . \qquad (6.46)
$$

You can check that in this submodel the marginal density of $x$ is $f_2(x)$ and the conditional density of $y$ given $x$ is $f_1(y \mid x)\left(1 + (y-x'\beta)(x'\theta)/\sigma^2\right)$. To see that the latter is a valid conditional density, observe that the regression assumption implies that $\int y f_1(y \mid x)\,dy = x'\beta$ and therefore

$$
\int f_1(y \mid x)\left( 1 + \left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2 \right) dy
= \int f_1(y \mid x)\,dy + \int f_1(y \mid x)\left(y-x'\beta\right)dy\,\left(x'\theta\right)/\sigma^2 = 1 .
$$

In this parametric submodel the conditional mean of $y$ given $x$ is

$$
E_\theta\left(y \mid x\right) = \int y f_1(y \mid x)\left( 1 + \left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2 \right) dy
$$
$$
= \int y f_1(y \mid x)\,dy + \int \left(y-x'\beta\right)^2 f_1(y \mid x)\,dy\,\left(x'\theta\right)/\sigma^2
+ \int \left(y-x'\beta\right) f_1(y \mid x)\,dy\,\left(x'\beta\right)\left(x'\theta\right)/\sigma^2
$$
$$
= x'\left(\beta + \theta\right) ,
$$

using the homoskedasticity assumption $\int (y-x'\beta)^2 f_1(y \mid x)\,dy = \sigma^2$. This means that in this parametric submodel, the conditional mean is linear in $x$ and the regression coefficient is $\beta(\theta) = \beta + \theta$.

We now calculate the score for estimation of $\theta$. Since

$$
\frac{\partial}{\partial\theta}\log f(y, x \mid \theta)
= \frac{\partial}{\partial\theta}\log\left( 1 + \left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2 \right)
= \frac{x\left(y-x'\beta\right)/\sigma^2}{1 + \left(y-x'\beta\right)\left(x'\theta\right)/\sigma^2} ,
$$

the score is

$$
s = \frac{\partial}{\partial\theta}\log f(y, x \mid \theta_0) = xe/\sigma^2 .
$$

The Cramer-Rao bound for estimation of $\theta$ (and therefore $\beta(\theta)$ as well) is

$$
\left( E\left(ss'\right) \right)^{-1} = \left( \sigma^{-4}E\left( (xe)(xe)' \right) \right)^{-1} = \sigma^2 Q_{xx}^{-1} = V^0_\beta .
$$

We have shown that there is a parametric submodel (6.46) whose Cramer-Rao bound for estimation of $\beta$ is identical to the asymptotic variance of the least-squares estimator, which therefore is the semiparametric variance bound.

Theorem 6.18.1 In the homoskedastic regression model, the semiparametric variance bound for estimation of $\beta$ is $V^0_\beta = \sigma^2 Q_{xx}^{-1}$ and the OLS estimator is semiparametrically efficient.

This result is similar to the Gauss-Markov theorem, in that it asserts the efficiency of the least-squares estimator in the context of the homoskedastic regression model. The difference is that the Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear estimators, while Theorem 6.18.1 states that OLS has the smallest asymptotic variance among all regular estimators. This is a much more powerful statement.

6.19 Consistency of the Residuals*

It seems natural to view the residuals $\hat{e}_i$ as estimates of the unknown errors $e_i$. Are they consistent estimates? In this section we develop an appropriate convergence result. This is not a widely-used technique, and can safely be skipped by most readers.

Notice that we can write the residual as

$$
\hat{e}_i = y_i - x_i'\hat{\beta} = e_i + x_i'\beta - x_i'\hat{\beta} = e_i - x_i'\left(\hat{\beta}-\beta\right) . \qquad (6.47)
$$

Since $\hat{\beta} - \beta \to_p 0$ it seems reasonable to guess that $\hat{e}_i$ will be close to $e_i$ if $n$ is large.

We can bound the difference in (6.47) using the Schwarz inequality (A.10) to find

$$
\left| \hat{e}_i - e_i \right| = \left| x_i'\left(\hat{\beta}-\beta\right) \right| \le \|x_i\|\left\|\hat{\beta}-\beta\right\| . \qquad (6.48)
$$

To bound (6.48) we can use $\|\hat{\beta}-\beta\| = O_p(n^{-1/2})$ from Theorem 6.3.2, but we also need to bound the random variable $\|x_i\|$. The key is Theorem 5.12.1, which shows that $E\|x_i\|^r < \infty$ implies $x_i = o_p(n^{1/r})$ uniformly in $i$, or

$$n^{-1/r}\max_{1 \le i \le n}\|x_i\| \to_p 0 .$$

Applied to (6.48) this yields

$$
\max_{1 \le i \le n}\left| \hat{e}_i - e_i \right| \le \max_{1 \le i \le n}\|x_i\|\left\|\hat{\beta}-\beta\right\| = o_p\left(n^{-1/2+1/r}\right) .
$$

Theorem 6.19.1 Under Assumption 6.1.2 and $E\|x_i\|^r < \infty$, uniformly in $1 \le i \le n$,

$$\hat{e}_i = e_i + o_p\left(n^{-1/2+1/r}\right) . \qquad (6.49)$$

The rate of convergence in (6.49) depends on $r$. Assumption 6.1.2 requires $r \ge 4$, so the rate of convergence is at least $o_p(n^{-1/4})$. As $r$ increases, the rate becomes close to $O_p(n^{-1/2})$. If the regressor is bounded, $\|x_i\| \le B < \infty$, then $\hat{e}_i = e_i + o_p(n^{-1/2})$.

6.20 Asymptotic Leverage*

The leverage values are

$$h_{ii} = x_i'\left(X'X\right)^{-1}x_i .$$

These are the diagonal elements of the projection matrix $P$ and appear in the formula for leave-one-out prediction errors and several covariance matrix estimators. We can show that under iid sampling the leverage values are uniformly asymptotically small.

Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of a symmetric square matrix $A$, and note that $\lambda_{\max}\left(A^{-1}\right) = \left(\lambda_{\min}(A)\right)^{-1}$.

Since $\frac{1}{n}X'X \to_p Q_{xx} > 0$, then by the CMT, $\lambda_{\min}\left(\frac{1}{n}X'X\right) \to_p \lambda_{\min}(Q_{xx}) > 0$. (The latter is positive since $Q_{xx}$ is positive definite and thus all its eigenvalues are positive.) Then by the Trace Inequality (A.13),

$$
h_{ii} = x_i'\left(X'X\right)^{-1}x_i
= \operatorname{tr}\left( \left(\frac{1}{n}X'X\right)^{-1}\frac{1}{n}x_i x_i' \right)
\le \lambda_{\max}\left( \left(\frac{1}{n}X'X\right)^{-1} \right)\operatorname{tr}\left( \frac{1}{n}x_i x_i' \right)
= \left( \lambda_{\min}\left(\frac{1}{n}X'X\right) \right)^{-1}\frac{1}{n}\|x_i\|^2
\le \left( \lambda_{\min}(Q_{xx}) + o_p(1) \right)^{-1}\frac{1}{n}\max_{1 \le i \le n}\|x_i\|^2 . \qquad (6.50)
$$

Theorem 5.12.1 shows that $E\|x_i\|^r < \infty$ implies $\max_{1 \le i \le n}\|x_i\|^2 = o_p\left(n^{2/r}\right)$, and thus (6.50) is $o_p\left(n^{2/r-1}\right)$.

Theorem 6.20.1 If $x_i$ is iid and $E\|x_i\|^r < \infty$ for some $r \ge 2$, then uniformly in $1 \le i \le n$, $h_{ii} = o_p\left(n^{2/r-1}\right)$.

For any $r \ge 2$ then $h_{ii} = o_p(1)$ (uniformly in $i \le n$). Larger $r$ implies a stronger rate of convergence; for example, $r = 4$ implies $h_{ii} = o_p\left(n^{-1/2}\right)$.

Theorem 6.20.1 implies that under random sampling with finite variances and large samples, no individual observation should have a large leverage value. Consequently individual observations should not be influential, unless one of these conditions is violated.
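To connect the theory to computation, a minimal sketch (Python/NumPy; the design matrix `X` is a placeholder) computes the leverage values directly from the definition without forming the full $n \times n$ projection matrix:

```python
import numpy as np

def leverage_values(X):
    """Diagonal of the projection matrix: h_ii = x_i' (X'X)^{-1} x_i."""
    XtX_inv = np.linalg.inv(X.T @ X)
    return np.einsum('ij,jk,ik->i', X, XtX_inv, X)

# Under the conditions of Theorem 6.20.1, max(leverage_values(X)) should be small when n is large.
```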

Exercises

Exercise 6.1 Take the model $y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$ with $Ex_i e_i = 0$. Suppose that $\beta_1$ is estimated by regressing $y_i$ on $x_{1i}$ only. Find the probability limit of this estimator. In general, is it consistent for $\beta_1$? If not, under what conditions is this estimator consistent for $\beta_1$?

Exercise 6.2 Let $y$ be $n \times 1$, $X$ be $n \times k$, and consider the ridge-type regression estimator

$$
\hat{\beta} = \left( \sum_{i=1}^n x_i x_i' + \lambda I_k \right)^{-1}\left( \sum_{i=1}^n x_i y_i \right) , \qquad (6.51)
$$

where $\lambda > 0$. Find the probability limit of $\hat{\beta}$ as $n \to \infty$.

Exercise 6.4 Verify some of the calculations reported in Section 6.4. Specifically, suppose that $x_{1i}$ and $x_{2i}$ only take the values $\{-1, +1\}$, symmetrically, with

$$\Pr\left(x_{1i} = x_{2i} = 1\right) = \Pr\left(x_{1i} = x_{2i} = -1\right) = 3/8 ,$$
$$\Pr\left(x_{1i} = 1, x_{2i} = -1\right) = \Pr\left(x_{1i} = -1, x_{2i} = 1\right) = 1/8 ,$$
$$E\left(e_i^2 \mid x_{1i} = x_{2i}\right) = \frac{5}{4} , \qquad E\left(e_i^2 \mid x_{1i} \ne x_{2i}\right) = \frac{1}{4} .$$

Verify the following:

1. $Ex_{1i} = 0$
2. $Ex_{1i}^2 = 1$
3. $Ex_{1i}x_{2i} = \frac{1}{2}$
4. $Ee_i^2 = 1$
5. $Ex_{1i}^2 e_i^2 = 1$
6. $Ex_{1i}x_{2i}e_i^2 = \frac{7}{8}$

Exercise 6.5 Show (6.19)-(6.22).

Exercise 6.6 The model is

$$y_i = x_i'\beta + e_i , \qquad E\left(x_i e_i\right) = 0 , \qquad \Omega = E\left(x_i x_i' e_i^2\right) .$$

Find the method of moments estimators $(\hat{\beta}, \hat{\Omega})$ for $(\beta, \Omega)$.

Exercise 6.7 Of the variables $(y_i^*, y_i, x_i)$ only the pair $(y_i, x_i)$ are observed. In this case, we say that $y_i^*$ is a latent variable. Suppose

$$y_i^* = x_i'\beta + e_i , \qquad E\left(x_i e_i\right) = 0 , \qquad y_i = y_i^* + u_i$$

where $u_i$ is a measurement error satisfying

$$E\left(x_i u_i\right) = 0 , \qquad E\left(y_i^* u_i\right) = 0 .$$

Let $\hat{\beta}$ denote the OLS coefficient from the regression of $y_i$ on $x_i$.

(a) Is $\beta$ the coefficient from the linear projection of $y_i$ on $x_i$?
(b) Is $\hat{\beta}$ consistent for $\beta$ as $n \to \infty$?
(c) Find the asymptotic distribution of $\sqrt{n}\left(\hat{\beta}-\beta\right)$ as $n \to \infty$.

Exercise 6.8 Find the asymptotic distribution of $\sqrt{n}\left(\hat{\sigma}^2 - \sigma^2\right)$ as $n \to \infty$.

Exercise 6.9 The model is

$$y_i = x_i\beta + e_i , \qquad E\left(e_i \mid x_i\right) = 0 ,$$

where $x_i \in \mathbb{R}$. Consider the two estimators

$$
\hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} , \qquad
\tilde{\beta} = \frac{1}{n}\sum_{i=1}^n \frac{y_i}{x_i} .
$$

(a) Under the stated assumptions, are both estimators consistent for $\beta$?
(b) Are there conditions under which either estimator is efficient?

Exercise 6.10 In the homoskedastic regression model $y = X\beta + e$ with $E(e_i \mid x_i) = 0$ and $E(e_i^2 \mid x_i) = \sigma^2$, suppose $\hat{\beta}$ is the OLS estimate of $\beta$ with covariance matrix $\hat{V}$, based on a sample of size $n$. Let $\hat{\sigma}^2$ be the estimate of $\sigma^2$. You wish to forecast an out-of-sample value of $y_{n+1}$ given that $x_{n+1} = x$. Thus the available information is the sample $(y, X)$, the estimates $(\hat{\beta}, \hat{V}, \hat{\sigma}^2)$, the residuals $\hat{e}$, and the out-of-sample value of the regressors, $x_{n+1}$.

(a) Find a point forecast of $y_{n+1}$.
(b) Find an estimate of the variance of this forecast.

Chapter 7

Restricted Estimation

7.1 Introduction

In the linear projection model

$$y_i = x_i'\beta + e_i , \qquad E\left(x_i e_i\right) = 0 ,$$

a common task is to impose a constraint on the coefficient vector $\beta$. For example, partitioning $x_i' = (x_{1i}', x_{2i}')$ and $\beta' = (\beta_1', \beta_2')$, a typical constraint is an exclusion restriction of the form $\beta_2 = 0$. In this case the constrained model is

$$y_i = x_{1i}'\beta_1 + e_i , \qquad E\left(x_i e_i\right) = 0 .$$

At first glance this appears the same as the linear projection model, but there is one important difference: the error $e_i$ is uncorrelated with the entire regressor vector $x_i' = (x_{1i}', x_{2i}')$, not just the included regressor $x_{1i}$.

In general, a set of $q$ linear constraints on $\beta$ takes the form

$$R'\beta = c \qquad (7.1)$$

where $R$ is $k \times q$, $\operatorname{rank}(R) = q < k$ and $c$ is $q \times 1$. The assumption that $R$ is full rank means that the constraints are linearly independent (there are no redundant or contradictory constraints).

The constraint $\beta_2 = 0$ discussed above is a special case of the constraint (7.1) with

$$R = \begin{pmatrix} 0 \\ I \end{pmatrix} \qquad (7.2)$$

and $c = 0$. Another common restriction is that a set of coefficients sum to a known constant, i.e. $\beta_1 + \beta_2 = 1$. This constraint arises in a constant-return-to-scale production function. Other common restrictions include the equality of coefficients $\beta_1 = \beta_2$, and equal and offsetting coefficients $\beta_1 = -\beta_2$.

A typical reason to impose a constraint is that we believe (or have information) that the constraint is true. By imposing the constraint we hope to improve estimation efficiency. The goal is to obtain consistent estimates with reduced variance relative to the unconstrained estimator.

The questions then arise: How should we estimate the coefficient vector $\beta$ imposing the linear restriction (7.1)? If we impose such constraints, what is the sampling distribution of the resulting estimator? How should we calculate standard errors? These are the questions explored in this chapter.

7.2 Constrained Least Squares

An intuitively appealing method to estimate a constrained linear projection is to minimize the least-squares criterion subject to the constraint $R'\beta = c$. This estimator is

$$\tilde{\beta} = \underset{R'\beta = c}{\operatorname{argmin}}\ SSE_n(\beta) \qquad (7.3)$$

where

$$SSE_n(\beta) = \sum_{i=1}^n \left(y_i - x_i'\beta\right)^2 = y'y - 2y'X\beta + \beta'X'X\beta .$$

The estimator $\tilde{\beta}$ minimizes the sum of squared errors over all $\beta$ such that the restriction (7.1) holds. We call $\tilde{\beta}$ the constrained least-squares (CLS) estimator. We follow the convention of using a tilde "~" rather than a hat "^" to indicate that $\tilde{\beta}$ is a restricted estimator in contrast to the unrestricted least-squares estimator $\hat{\beta}$, and write it as $\tilde{\beta}_{\mathrm{cls}}$ when we want to be clear that the estimation method is CLS.

One method to find the solution to (7.3) uses the technique of Lagrange multipliers. The problem (7.3) is equivalent to the minimization of the Lagrangian

$$\mathcal{L}(\beta, \lambda) = \frac{1}{2}SSE_n(\beta) + \lambda'\left(R'\beta - c\right) \qquad (7.4)$$

over $(\beta, \lambda)$, where $\lambda$ is a $q \times 1$ vector of Lagrange multipliers. The first-order conditions for minimization of (7.4) are

$$\frac{\partial}{\partial\beta}\mathcal{L}(\tilde{\beta}, \tilde{\lambda}) = -X'y + X'X\tilde{\beta} + R\tilde{\lambda} = 0 \qquad (7.5)$$

and

$$\frac{\partial}{\partial\lambda}\mathcal{L}(\tilde{\beta}, \tilde{\lambda}) = R'\tilde{\beta} - c = 0 . \qquad (7.6)$$

Premultiplying (7.5) by $R'\left(X'X\right)^{-1}$ we obtain

$$-R'\hat{\beta} + R'\tilde{\beta} + R'\left(X'X\right)^{-1}R\tilde{\lambda} = 0 ,$$

where $\hat{\beta} = (X'X)^{-1}X'y$ is the unrestricted least-squares estimator. Imposing $R'\tilde{\beta} - c = 0$ from (7.6) and solving for $\tilde{\lambda}$ we find

$$\tilde{\lambda} = \left[ R'\left(X'X\right)^{-1}R \right]^{-1}\left( R'\hat{\beta} - c \right) . \qquad (7.7)$$

Substituting this expression into (7.5) and solving for $\tilde{\beta}$ we find the solution to the constrained minimization problem (7.3):

$$
\tilde{\beta}_{\mathrm{cls}} = \hat{\beta} - \left(X'X\right)^{-1}R\left[ R'\left(X'X\right)^{-1}R \right]^{-1}\left( R'\hat{\beta} - c \right) . \qquad (7.8)
$$

This is a general formula for the CLS estimator. It also can be written as

$$
\tilde{\beta}_{\mathrm{cls}} = \hat{\beta} - \hat{Q}_{xx}^{-1}R\left[ R'\hat{Q}_{xx}^{-1}R \right]^{-1}\left( R'\hat{\beta} - c \right) .
$$

Given $\tilde{\beta}_{\mathrm{cls}}$ the residuals are

$$\tilde{e}_i = y_i - x_i'\tilde{\beta}_{\mathrm{cls}} ,$$

and the moment estimator of $\sigma^2$ is

$$\tilde{\sigma}^2_{\mathrm{cls}} = \frac{1}{n}\sum_{i=1}^n \tilde{e}_i^2 .$$
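A minimal implementation of formula (7.8) (Python/NumPy; `X`, `y`, `R`, and `c` are placeholder inputs) computing both the unrestricted OLS estimate and the CLS estimate:

```python
import numpy as np

def cls_estimator(X, y, R, c):
    """Constrained least squares, equation (7.8): impose R' beta = c."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_ols = XtX_inv @ (X.T @ y)
    A = R.T @ XtX_inv @ R                     # q x q matrix R'(X'X)^{-1}R
    correction = XtX_inv @ R @ np.linalg.solve(A, R.T @ beta_ols - c)
    return beta_ols - correction

# Example: with R selecting the last regressor and c = 0, this reproduces
# OLS of y on the remaining columns (the exclusion restriction of Section 7.3).
```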

7.3 Exclusion Restriction

While (7.8) is a general formula for the CLS estimator, in most cases the estimator can be found by applying least-squares to a reparameterized equation. To illustrate, let us return to the first example presented at the beginning of the chapter, a simple exclusion restriction. Recall the unconstrained model is

$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i , \qquad (7.9)$$

the exclusion restriction is $\beta_2 = 0$, and the constrained equation is

$$y_i = x_{1i}'\beta_1 + e_i . \qquad (7.10)$$

In this setting the CLS estimator is OLS of $y_i$ on $x_{1i}$. (See Exercise 7.1.) We can write this as

$$
\tilde{\beta}_1 = \left( \sum_{i=1}^n x_{1i}x_{1i}' \right)^{-1}\left( \sum_{i=1}^n x_{1i}y_i \right) . \qquad (7.11)
$$

The CLS estimator of the full vector $\beta' = (\beta_1', \beta_2')$ is

$$\tilde{\beta} = \begin{pmatrix} \tilde{\beta}_1 \\ 0 \end{pmatrix} . \qquad (7.12)$$

It is not immediately obvious, but (7.8) and (7.12) are algebraically (and numerically) equivalent. To see this, the first component of (7.8) with (7.2) is

$$
\tilde{\beta}_1 = \begin{pmatrix} I & 0 \end{pmatrix}
\left[ \hat{\beta} - \hat{Q}_{xx}^{-1}\begin{pmatrix} 0 \\ I \end{pmatrix}
\left[ \begin{pmatrix} 0 & I \end{pmatrix}\hat{Q}_{xx}^{-1}\begin{pmatrix} 0 \\ I \end{pmatrix} \right]^{-1}
\begin{pmatrix} 0 & I \end{pmatrix}\hat{\beta} \right] .
$$

Using (3.33) this equals

$$
\tilde{\beta}_1 = \hat{\beta}_1 + \hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{22\cdot 1}\hat{\beta}_2
$$
$$
= \hat{Q}_{11\cdot 2}^{-1}\left( \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{2y} \right)
+ \hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1}\left( \hat{Q}_{2y} - \hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{1y} \right)
$$
$$
= \hat{Q}_{11\cdot 2}^{-1}\left( \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{1y} \right)
$$
$$
= \hat{Q}_{11\cdot 2}^{-1}\left( \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21} \right)\hat{Q}_{11}^{-1}\hat{Q}_{1y}
$$
$$
= \hat{Q}_{11}^{-1}\hat{Q}_{1y} ,
$$

which is (7.11) as claimed.

7.4 Minimum Distance

A minimum distance estimator tries to find a parameter value which satisfies the constraint and which is as close as possible to the unconstrained estimate. Let $\hat{\beta}$ be the unconstrained least-squares estimator, and for some $k \times k$ positive definite weight matrix $W_n > 0$ define

$$
J_n(\beta) = n\left(\hat{\beta}-\beta\right)'W_n\left(\hat{\beta}-\beta\right) . \qquad (7.13)
$$

$J_n(\beta)$ is a (squared) weighted Euclidean distance between $\beta$ and $\hat{\beta}$, and is minimized at zero only if $\beta = \hat{\beta}$. A minimum distance estimator $\tilde{\beta}_{\mathrm{md}}$ for $\beta$ minimizes $J_n(\beta)$ subject to the constraint (7.1), that is,

$$
\tilde{\beta}_{\mathrm{md}} = \underset{R'\beta = c}{\operatorname{argmin}}\ J_n(\beta) . \qquad (7.14)
$$

The CLS estimator is the special case of (7.14) with $W_n = \left(\hat{V}^0_\beta\right)^{-1}$, where $\hat{V}^0_\beta = s^2\hat{Q}_{xx}^{-1}$. To see this, rewrite the least-squares criterion as follows. Write the unconstrained least-squares fitted equation as $y_i = x_i'\hat{\beta} + \hat{e}_i$ and substitute this equation into $SSE_n(\beta)$ to obtain

$$
SSE_n(\beta) = \sum_{i=1}^n\left(y_i - x_i'\beta\right)^2
= \sum_{i=1}^n\left( x_i'\hat{\beta} + \hat{e}_i - x_i'\beta \right)^2
= \sum_{i=1}^n\hat{e}_i^2 + \left(\hat{\beta}-\beta\right)'\left( \sum_{i=1}^n x_i x_i' \right)\left(\hat{\beta}-\beta\right)
= s^2\left( n - k + J_n(\beta) \right) \qquad (7.15)
$$

where the third equality uses the fact that $\sum_{i=1}^n x_i\hat{e}_i = 0$, and the last line holds when $W_n = \left(\hat{V}^0_\beta\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)s^{-2}$. The expression (7.15) only depends on $\beta$ through $J_n(\beta)$. Thus minimization of $SSE_n(\beta)$ and $J_n(\beta)$ are equivalent, and hence $\tilde{\beta}_{\mathrm{md}} = \tilde{\beta}_{\mathrm{cls}}$ when $W_n = \left(\hat{V}^0_\beta\right)^{-1}$.

We can solve for $\tilde{\beta}_{\mathrm{md}}$ explicitly by the method of Lagrange multipliers. The Lagrangian is

$$\mathcal{L}(\beta, \lambda) = \frac{1}{2}J_n(\beta, W_n) + \lambda'\left(R'\beta - c\right) ,$$

which is minimized over $(\beta, \lambda)$. The solution is

$$
\tilde{\beta}_{\mathrm{md}} = \hat{\beta} - W_n^{-1}R\left( R'W_n^{-1}R \right)^{-1}\left( R'\hat{\beta} - c \right) . \qquad (7.16)
$$

(See Exercise 7.5.) Examining (7.16) we can see that $\tilde{\beta}_{\mathrm{md}}$ specializes to $\tilde{\beta}_{\mathrm{cls}}$ when we set $W_n = \left(\hat{V}^0_\beta\right)^{-1}$.

An obvious question is which weight matrix $W_n$ is best. We will address this question after we derive the asymptotic distribution for a general weight matrix.
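A sketch of the closed-form solution (7.16) (Python/NumPy; `beta_hat`, `W_n`, `R`, and `c` are placeholders). Setting `W_n` to the inverse of an estimate of $V_\beta$ gives the efficient version discussed in Section 7.6:

```python
import numpy as np

def minimum_distance(beta_hat, W_n, R, c):
    """Minimum distance estimator (7.16):
    beta_hat - W^{-1} R (R' W^{-1} R)^{-1} (R' beta_hat - c)."""
    W_inv_R = np.linalg.solve(W_n, R)      # W_n^{-1} R
    A = R.T @ W_inv_R                      # R' W_n^{-1} R
    return beta_hat - W_inv_R @ np.linalg.solve(A, R.T @ beta_hat - c)
```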

7.5 Asymptotic Distribution

We first show that the class of minimum distance estimators are consistent for the population parameters when the constraints are valid.

Assumption 7.5.1 $R'\beta = c$ where $R$ is $k \times q$ with $\operatorname{rank}(R) = q$.

Assumption 7.5.2 $W_n \to_p W > 0$.

Theorem 7.5.1 Under Assumptions 6.1.1, 7.5.1, and 7.5.2, $\tilde{\beta}_{\mathrm{md}} \to_p \beta$ as $n \to \infty$.

Theorem 7.5.1 shows that consistency holds for any weight matrix with a positive definite limit, so the result includes the CLS estimator.

Similarly, the constrained estimators are asymptotically normally distributed.

Theorem 7.5.2 Under Assumptions 6.1.2, 7.5.1, and 7.5.2,

$$\sqrt{n}\left( \tilde{\beta}_{\mathrm{md}} - \beta \right) \to_d N\left(0, V_\beta(W)\right) \qquad (7.17)$$

as $n \to \infty$, where

$$
V_\beta(W) = V_\beta
- W^{-1}R\left(R'W^{-1}R\right)^{-1}R'V_\beta
- V_\beta R\left(R'W^{-1}R\right)^{-1}R'W^{-1}
+ W^{-1}R\left(R'W^{-1}R\right)^{-1}R'V_\beta R\left(R'W^{-1}R\right)^{-1}R'W^{-1} \qquad (7.18)
$$

and $V_\beta = Q_{xx}^{-1}\Omega Q_{xx}^{-1}$.

Theorem 7.5.2 shows that the minimum distance estimator is asymptotically normal for all positive definite weight matrices. The asymptotic variance depends on $W$. The theorem includes the CLS estimator as a special case by setting $W = Q_{xx}$.

Theorem 7.5.3 Under Assumptions 6.1.2 and 7.5.1, as $n \to \infty$,

$$\sqrt{n}\left( \tilde{\beta}_{\mathrm{cls}} - \beta \right) \to_d N\left(0, V_{\mathrm{cls}}\right)$$

where

$$
V_{\mathrm{cls}} = V_\beta
- Q_{xx}^{-1}R\left(R'Q_{xx}^{-1}R\right)^{-1}R'V_\beta
- V_\beta R\left(R'Q_{xx}^{-1}R\right)^{-1}R'Q_{xx}^{-1}
+ Q_{xx}^{-1}R\left(R'Q_{xx}^{-1}R\right)^{-1}R'V_\beta R\left(R'Q_{xx}^{-1}R\right)^{-1}R'Q_{xx}^{-1} .
$$

7.6 Efficient Minimum Distance Estimator

Theorem 7.5.2 shows that the minimum distance estimators, which include CLS as a special case, are asymptotically normal with an asymptotic covariance matrix which depends on the weight matrix $W$. The asymptotically optimal weight matrix is the one which minimizes the asymptotic variance $V_\beta(W)$. This turns out to be $W = V_\beta^{-1}$, as is shown in Theorem 7.6.1 below. Since $V_\beta^{-1}$ is unknown this weight matrix cannot be used for a feasible estimator, but we can replace $V_\beta^{-1}$ with a consistent estimate $\hat{V}_\beta^{-1}$ and the asymptotic distribution (and efficiency) are unchanged. We call the resulting estimator the efficient minimum distance estimator; it takes the form

$$
\tilde{\beta}_{\mathrm{emd}} = \hat{\beta} - \hat{V}_\beta R\left( R'\hat{V}_\beta R \right)^{-1}\left( R'\hat{\beta} - c \right) . \qquad (7.19)
$$

Theorem 7.6.1 Efficient Minimum Distance Estimator. Under Assumptions 6.1.2 and 7.5.1,

$$\sqrt{n}\left( \tilde{\beta}_{\mathrm{emd}} - \beta \right) \to_d N\left(0, V^*_\beta\right)$$

as $n \to \infty$, where

$$V^*_\beta = V_\beta - V_\beta R\left( R'V_\beta R \right)^{-1}R'V_\beta . \qquad (7.20)$$

Since

$$V^*_\beta \le V_\beta \qquad (7.21)$$

the estimator (7.19) has lower asymptotic variance than the unrestricted estimator. Furthermore, for any $W$,

$$V^*_\beta \le V_\beta(W) . \qquad (7.22)$$

Theorem 7.6.1 shows that the minimum distance estimator with the smallest asymptotic variance is (7.19). One implication is that the constrained least squares estimator is generally inefficient. The interesting exception is the case of conditional homoskedasticity, in which case the optimal weight matrix is $W = \left(V^0_\beta\right)^{-1}$, so in this case CLS is an efficient minimum distance estimator. Otherwise when the error is conditionally heteroskedastic, there are asymptotic efficiency gains by using minimum distance rather than least squares.

The fact that CLS is generally inefficient is counter-intuitive and requires some reflection to understand. Standard intuition suggests to apply the same estimation method (least squares) to the unconstrained and constrained models, and this is the most common empirical practice. But Theorem 7.6.1 shows that this is not the efficient estimation method. Instead, the efficient minimum distance estimator has a smaller asymptotic variance. Why? The reason is that the least-squares estimator does not make use of the regressor $x_{2i}$. It ignores the information $E\left(x_{2i}e_i\right) = 0$. This information is relevant when the error is heteroskedastic and the excluded regressors are correlated with the included regressors.

Inequality (7.21) shows that the efficient minimum distance estimator $\tilde{\beta}_{\mathrm{emd}}$ has a smaller asymptotic variance than the unrestricted least squares estimator $\hat{\beta}$. This means that estimation is more efficient by imposing correct restrictions when we use the minimum distance method.

7.7 Exclusion Restriction Revisited

We return to the example of estimation with a simple exclusion restriction. The model is

$$y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$$

with the exclusion restriction $\beta_2 = 0$. We have introduced three estimators of $\beta_1$. The first is unconstrained least-squares applied to (7.9), which can be written as

$$\hat{\beta}_1 = \hat{Q}_{11\cdot 2}^{-1}\hat{Q}_{1y\cdot 2} .$$

Its asymptotic variance is

$$
\operatorname{avar}(\hat{\beta}_1) = Q_{11\cdot 2}^{-1}\left( \Omega_{11} - Q_{12}Q_{22}^{-1}\Omega_{21} - \Omega_{12}Q_{22}^{-1}Q_{21} + Q_{12}Q_{22}^{-1}\Omega_{22}Q_{22}^{-1}Q_{21} \right)Q_{11\cdot 2}^{-1} .
$$

The second estimator of $\beta_1$ is the CLS estimator, which can be written as

$$\tilde{\beta}_{1,\mathrm{cls}} = \hat{Q}_{11}^{-1}\hat{Q}_{1y} .$$

Its asymptotic variance can be deduced from Theorem 7.5.3, but it is simpler to apply the CLT directly to show that

$$\operatorname{avar}(\tilde{\beta}_{1,\mathrm{cls}}) = Q_{11}^{-1}\Omega_{11}Q_{11}^{-1} . \qquad (7.23)$$

The third estimator of $\beta_1$ is the efficient minimum distance estimator. Applying (7.19), it equals

$$\tilde{\beta}_{1,\mathrm{md}} = \hat{\beta}_1 - \hat{V}_{12}\hat{V}_{22}^{-1}\hat{\beta}_2 \qquad (7.24)$$

where we have partitioned

$$\hat{V}_\beta = \begin{pmatrix} \hat{V}_{11} & \hat{V}_{12} \\ \hat{V}_{21} & \hat{V}_{22} \end{pmatrix} .$$

Its asymptotic variance is

$$\operatorname{avar}(\tilde{\beta}_{1,\mathrm{md}}) = V_{11} - V_{12}V_{22}^{-1}V_{21} . \qquad (7.25)$$

In general, the three estimators are different, and they have different asymptotic variances.

It is quite instructive to compare the asymptotic variances of the CLS and unconstrained least-squares estimators to assess whether or not the constrained estimator is necessarily more efficient than the unconstrained estimator.

First, consider the case of conditional homoskedasticity. In this case the two covariance matrices simplify to

$$\operatorname{avar}(\hat{\beta}_1) = \sigma^2 Q_{11\cdot 2}^{-1}$$

and

$$\operatorname{avar}(\tilde{\beta}_{1,\mathrm{cls}}) = \sigma^2 Q_{11}^{-1} .$$

If $Q_{12} = 0$ (so $x_{1i}$ and $x_{2i}$ are orthogonal) then these two variance matrices are equal and the two estimators have equal asymptotic efficiency. Otherwise, since $Q_{12}Q_{22}^{-1}Q_{21} \ge 0$, then $Q_{11} \ge Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}$, and consequently

$$Q_{11}^{-1}\sigma^2 \le \left( Q_{11} - Q_{12}Q_{22}^{-1}Q_{21} \right)^{-1}\sigma^2 .$$

This means that under conditional homoskedasticity, $\tilde{\beta}_{1,\mathrm{cls}}$ has a lower asymptotic variance matrix than $\hat{\beta}_1$. Therefore in this context, constrained least-squares is more efficient than unconstrained least-squares. This is consistent with our intuition that imposing a correct restriction (excluding an irrelevant regressor) improves estimation efficiency.

However, in the general case of conditional heteroskedasticity this ranking is not guaranteed. In fact what is really amazing is that the variance ranking can be reversed. The CLS estimator can have a larger asymptotic variance than the unconstrained least squares estimator.

To see this let's use the simple heteroskedastic example from Section 6.4. In that example, $Q_{11} = Q_{22} = 1$, $Q_{12} = \frac{1}{2}$, $\Omega_{11} = \Omega_{22} = 1$, and $\Omega_{12} = \frac{7}{8}$. We can calculate that $Q_{11\cdot 2} = \frac{3}{4}$ and

$$\operatorname{avar}(\hat{\beta}_1) = \frac{2}{3} \qquad (7.26)$$
$$\operatorname{avar}(\tilde{\beta}_{1,\mathrm{cls}}) = 1 \qquad (7.27)$$
$$\operatorname{avar}(\tilde{\beta}_{1,\mathrm{md}}) = \frac{5}{8} . \qquad (7.28)$$

Thus the restricted least-squares estimator $\tilde{\beta}_{1,\mathrm{cls}}$ has a larger variance than the unrestricted least-squares estimator $\hat{\beta}_1$! The minimum distance estimator has the smallest variance of the three, as expected.

What we have found is that when the estimation method is least-squares, deleting the irrelevant variable $x_{2i}$ can actually increase estimation variance; or equivalently, adding an irrelevant variable can actually decrease the estimation variance.

To repeat this unexpected finding, we have shown in a very simple example that it is possible for least-squares applied to the short regression (7.10) to be less efficient for estimation of $\beta_1$ than least-squares applied to the long regression (7.9), even though the constraint $\beta_2 = 0$ is valid! This result is strongly counter-intuitive. It seems to contradict our initial motivation for pursuing constrained estimation to improve estimation efficiency.

It turns out that a more refined answer is appropriate. Constrained estimation is desirable, but not constrained least-squares estimation. While least-squares is asymptotically efficient for estimation of the unconstrained projection model, it is not an efficient estimator of the constrained projection model.

7.8 Variance and Standard Error Estimation

The asymptotic covariance matrix (7.20) may be estimated by replacing $V_\beta$ with a consistent estimate such as $\hat{V}_\beta$. This variance estimator is

$$
\hat{V}^*_\beta = \hat{V}_\beta - \hat{V}_\beta R\left( R'\hat{V}_\beta R \right)^{-1}R'\hat{V}_\beta . \qquad (7.29)
$$

We can calculate standard errors for any linear combination $h'\tilde{\beta}$ so long as $h$ does not lie in the range space of $R$. A standard error for $h'\tilde{\beta}$ is

$$
s(h'\tilde{\beta}) = \left( n^{-1}h'\hat{V}^*_\beta h \right)^{1/2} . \qquad (7.30)
$$

7.9 Misspecification

What are the consequences for the constrained estimator $\tilde{\beta}$ if the constraint (7.1) is incorrect? To be specific, suppose that

$$R'\beta = c^*$$

where $c^*$ is not necessarily equal to $c$.

This situation is a generalization of the analysis of omitted variable bias from Section 2.22, where we found that the short regression (e.g. (7.11)) is estimating a different projection coefficient than the long regression (e.g. (7.9)).

One mechanical answer is that we can use the formula (7.16) for the minimum distance estimator to find that

$$
\tilde{\beta}_{\mathrm{md}} \to_p \beta^*_{\mathrm{md}} = \beta - W^{-1}R\left( R'W^{-1}R \right)^{-1}\left( c^* - c \right) .
$$

The second term, $W^{-1}R\left(R'W^{-1}R\right)^{-1}(c^* - c)$, shows that imposing an incorrect constraint leads to inconsistency, an asymptotic bias. We can call the limiting value $\beta^*_{\mathrm{md}}$ the minimum-distance projection coefficient or the pseudo-true value implied by the restriction.

However, we can say more. For example, we can describe some characteristics of the approximating projections. The CLS estimator projection coefficient has the representation

$$
\beta^*_{\mathrm{cls}} = \underset{R'\beta = c}{\operatorname{argmin}}\ E\left( y_i - x_i'\beta \right)^2 ,
$$

the best linear predictor subject to the constraint (7.1). The minimum distance estimator converges to

$$
\beta^*_{\mathrm{md}} = \underset{R'\beta = c}{\operatorname{argmin}}\ \left(\beta - \beta_0\right)'W\left(\beta - \beta_0\right)
$$

where $\beta_0$ is the true coefficient. That is, $\beta^*_{\mathrm{md}}$ is the coefficient vector satisfying (7.1) closest to the true value in the weighted Euclidean norm. These calculations show that the constrained estimators are still reasonable in the sense that they produce good approximations to the true coefficient, conditional on being required to satisfy the constraint.

We can also show that $\tilde{\beta}_{\mathrm{md}}$ has an asymptotic normal distribution. The trick is to define the pseudo-true value

$$
\beta^*_n = \beta - W_n^{-1}R\left( R'W_n^{-1}R \right)^{-1}\left( c^* - c \right) .
$$

Then

$$
\sqrt{n}\left( \tilde{\beta}_{\mathrm{md}} - \beta^*_n \right)
= \sqrt{n}\left( \hat{\beta} - \beta \right) - W_n^{-1}R\left( R'W_n^{-1}R \right)^{-1}\sqrt{n}\left( R'\hat{\beta} - c^* \right)
= \left( I - W_n^{-1}R\left( R'W_n^{-1}R \right)^{-1}R' \right)\sqrt{n}\left( \hat{\beta} - \beta \right)
$$
$$
\to_d \left( I - W^{-1}R\left( R'W^{-1}R \right)^{-1}R' \right)N(0, V_\beta)
= N\left(0, V_\beta(W)\right) . \qquad (7.31)
$$

In particular

$$\sqrt{n}\left( \tilde{\beta}_{\mathrm{emd}} - \beta^*_n \right) \to_d N\left(0, V^*_\beta\right) .$$

This means that even when the constraint (7.1) is misspecified, the conventional covariance matrix estimator (7.29) and standard errors (7.30) are appropriate measures of the sampling variance, though the distributions are centered at the pseudo-true values (or projections) $\beta^*_n$ rather than $\beta$. The fact that the estimators are biased is an unavoidable consequence of misspecification.

There is another way of representing an asymptotic distribution for the estimator under misspecification based on the concept of local alternatives. It is a technical device which might seem a bit artificial, but it is a powerful technique which yields useful distributional approximations in a wide variety of contexts. The idea is to index the true coefficient $\beta_n$ by $n$, and suppose that

$$R'\beta_n = c + \delta n^{-1/2} . \qquad (7.32)$$

The asymptotic theory is then derived as $n \to \infty$ under the sequence of probability distributions with the coefficients $\beta_n$. The expression (7.32) specifies that $\beta_n$ does not satisfy (7.1), but the deviation from the constraint is $n^{-1/2}\delta$, which depends on $\delta$ and the sample size. The choice to make the deviation of this form is precisely so that the localizing parameter $\delta$ appears in the asymptotic distribution but does not dominate it.

Since $\beta_n$ is the true coefficient value, then $y_i = x_i'\beta_n + e_i$ and

$$
\sqrt{n}\left( \hat{\beta} - \beta_n \right)
= \left( \frac{1}{n}\sum_{i=1}^n x_i x_i' \right)^{-1}\left( \frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i \right)
\to_d N(0, V_\beta) ,
$$

so

$$
\sqrt{n}\left( \tilde{\beta}_{\mathrm{md}} - \beta_n \right)
= \sqrt{n}\left( \hat{\beta} - \beta_n \right) - W_n^{-1}R\left( R'W_n^{-1}R \right)^{-1}\sqrt{n}\left( R'\hat{\beta} - c \right)
$$
$$
= \sqrt{n}\left( \hat{\beta} - \beta_n \right) - W_n^{-1}R\left( R'W_n^{-1}R \right)^{-1}\left( R'\sqrt{n}\left( \hat{\beta} - \beta_n \right) + \delta \right)
$$
$$
\to_d N(0, V_\beta) - W^{-1}R\left( R'W^{-1}R \right)^{-1}\left( R'N(0, V_\beta) + \delta \right)
= N\left( \delta^*,\, V_\beta(W) \right) \qquad (7.33)
$$

where

$$\delta^* = -W^{-1}R\left( R'W^{-1}R \right)^{-1}\delta .$$

The asymptotic distribution (7.33) is an alternative way of expressing the sampling distribution of the restricted estimator under misspecification. The distribution (7.33) contains an asymptotic bias component $\delta^*$. The approximation is not fundamentally different from (7.31); they both have the same asymptotic variances, and both reflect the bias due to misspecification. The difference is that (7.31) puts the bias on the left side of the convergence arrow, while (7.33) has the bias on the right side. There is no substantive difference between the two, but (7.33) is more convenient for some purposes, such as the analysis of the power of tests, as we will explore in the next chapter.

7.10 Nonlinear Constraints

In some cases it is desirable to impose nonlinear constraints on the parameter vector $\beta$. They can be written as

$$r(\beta) = 0 \qquad (7.34)$$

where $r : \mathbb{R}^k \to \mathbb{R}^q$. This includes the linear constraints (7.1) as a special case. An example of (7.34) which cannot be written as (7.1) is $\beta_1\beta_2 = 1$, which is (7.34) with $r(\beta) = \beta_1\beta_2 - 1$.

The minimum distance estimator of $\beta$ subject to (7.34) solves the minimization problem

$$\tilde{\beta} = \underset{r(\beta) = 0}{\operatorname{argmin}}\ J_n(\beta) \qquad (7.35)$$

where

$$J_n(\beta) = n\left(\hat{\beta}-\beta\right)'W_n\left(\hat{\beta}-\beta\right) .$$

The solution minimizes the Lagrangian

$$\mathcal{L}(\beta, \lambda) = \frac{1}{2}J_n(\beta) + \lambda'r(\beta) \qquad (7.36)$$

over $(\beta, \lambda)$.

Computationally, there is no explicit expression for the solution $\tilde{\beta}$, so it must be found numerically. Algorithms to numerically solve (7.35) are known as constrained optimization methods, and are available in programming languages including Matlab, Gauss and R.

Assumption 7.10.1 $r(\beta) = 0$, $r(\beta)$ is continuously differentiable at the true $\beta$, and $R = \frac{\partial}{\partial\beta}r(\beta)'$ has rank $q$.

The asymptotic distribution is a simple generalization of the case of a linear constraint, but the proof is more delicate.

Theorem 7.10.1 Under Assumptions 6.1.2, 7.10.1, and 7.5.2, for $\tilde{\beta}$ defined in (7.35),

$$\sqrt{n}\left(\tilde{\beta}-\beta\right) \to_d N\left(0, V_\beta(W)\right)$$

as $n \to \infty$, with $V_\beta(W)$ defined in (7.18). The asymptotic variance is minimized with $W = V_\beta^{-1}$, in which case the asymptotic variance is

$$V^*_\beta = V_\beta - V_\beta R\left( R'V_\beta R \right)^{-1}R'V_\beta .$$

The asymptotic variance matrix for the efficient minimum distance estimator can be estimated by

$$
\hat{V}^*_\beta = \hat{V}_\beta - \hat{V}_\beta\hat{R}\left( \hat{R}'\hat{V}_\beta\hat{R} \right)^{-1}\hat{R}'\hat{V}_\beta
$$

where

$$\hat{R} = \frac{\partial}{\partial\beta}r(\tilde{\beta})' .$$

Standard errors for the elements of $\tilde{\beta}$ are the square roots of the diagonal elements of $n^{-1}\hat{V}^*_\beta$.

7.11 Inequality Constraints

Inequality constraints on the parameter vector $\beta$ take the form

$$r(\beta) \ge 0$$

for some function $r : \mathbb{R}^k \to \mathbb{R}^q$. The constrained least-squares and minimum distance estimators can be written as

$$\tilde{\beta}_{\mathrm{cls}} = \underset{r(\beta) \ge 0}{\operatorname{argmin}}\ SSE_n(\beta) \qquad (7.37)$$

and

$$\tilde{\beta}_{\mathrm{md}} = \underset{r(\beta) \ge 0}{\operatorname{argmin}}\ J_n(\beta) . \qquad (7.38)$$

Except in special cases the constrained estimators do not have simple algebraic solutions. An important exception is when there is a single non-negativity constraint, e.g. $\beta_1 \ge 0$ with $q = 1$. In this case the constrained estimator can be found by a two-step approach. First compute the unconstrained estimator $\hat{\beta}$. If $\hat{\beta}_1 \ge 0$ then $\tilde{\beta} = \hat{\beta}$. Second, if $\hat{\beta}_1 < 0$ then impose $\beta_1 = 0$ (eliminate the regressor $X_1$) and re-estimate. This yields the constrained least-squares estimator. While this method works when there is a single non-negativity constraint, it does not immediately generalize to other contexts.

The computational problems (7.37) and (7.38) are examples of quadratic programming problems. Quick and easy computer algorithms are available in programming languages including Matlab, Gauss and R.

Inference on inequality-constrained estimators is unfortunately quite challenging. The conventional asymptotic theory gives rise to the following dichotomy. If the true parameter satisfies the strict inequality $r(\beta) > 0$, then asymptotically the estimator is not subject to the constraint and the inequality-constrained estimator has an asymptotic distribution equal to the unconstrained case. However, if the true parameter is on the boundary, e.g. $r(\beta) = 0$, then the estimator has a truncated structure. This is easiest to see in the one-dimensional case. If we have an estimator $\hat{\beta}$ which satisfies $\sqrt{n}\left(\hat{\beta}-\beta\right) \to_d Z = N(0, V_\beta)$ and $\beta = 0$, then the constrained estimator $\tilde{\beta} = \max[\hat{\beta}, 0]$ will have the asymptotic distribution $\sqrt{n}\,\tilde{\beta} \to_d \max[Z, 0]$, a half-normal distribution.
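The two-step rule for a single non-negativity constraint described above has a direct implementation (Python/NumPy; `X` and `y` are placeholders and the constrained coefficient is assumed to be attached to the first column):

```python
import numpy as np

def cls_nonnegative_first(X, y):
    """Two-step CLS for the single constraint beta_1 >= 0 (q = 1):
    keep OLS if beta_1_hat >= 0, otherwise set beta_1 = 0,
    drop the first regressor, and re-estimate."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    if beta_hat[0] >= 0:
        return beta_hat
    beta_rest, *_ = np.linalg.lstsq(X[:, 1:], y, rcond=None)
    return np.concatenate(([0.0], beta_rest))
```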

7.12 Constrained MLE

Recall that the log-likelihood function (3.43) for the normal regression model is

$$
\log L(\beta, \sigma^2) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}SSE_n(\beta) .
$$

The constrained maximum likelihood estimator (CMLE) $(\hat{\beta}_{\mathrm{cmle}}, \hat{\sigma}^2_{\mathrm{cmle}})$ maximizes $\log L(\beta, \sigma^2)$ subject to the constraint (7.34). Since $\log L(\beta, \sigma^2)$ is a function of $\beta$ only through the sum of squared errors $SSE_n(\beta)$, maximizing the likelihood is identical to minimizing $SSE_n(\beta)$. Hence $\hat{\beta}_{\mathrm{cmle}} = \hat{\beta}_{\mathrm{cls}}$ and $\hat{\sigma}^2_{\mathrm{cmle}} = \hat{\sigma}^2_{\mathrm{cls}}$.

7.13 Technical Proofs*

Proof of Theorem 7.6.1, Equation (7.22). Let $R_\perp$ be a full rank $k \times (k-q)$ matrix satisfying $R_\perp'V_\beta R = 0$ and then set $C = [R,\ R_\perp]$, which is full rank and invertible. Then we can calculate that

$$
C'V^*_\beta C = \begin{pmatrix} R'V^*_\beta R & R'V^*_\beta R_\perp \\ R_\perp'V^*_\beta R & R_\perp'V^*_\beta R_\perp \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & R_\perp'V_\beta R_\perp \end{pmatrix}
$$

and

$$
C'V_\beta(W)C = \begin{pmatrix} R'V_\beta(W)R & R'V_\beta(W)R_\perp \\ R_\perp'V_\beta(W)R & R_\perp'V_\beta(W)R_\perp \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & R_\perp'V_\beta R_\perp + R_\perp'WR\left(R'WR\right)^{-1}R'V_\beta R\left(R'WR\right)^{-1}R'WR_\perp \end{pmatrix} .
$$

Thus

$$
C'\left( V_\beta(W) - V^*_\beta \right)C
= C'V_\beta(W)C - C'V^*_\beta C
= \begin{pmatrix} 0 & 0 \\ 0 & R_\perp'WR\left(R'WR\right)^{-1}R'V_\beta R\left(R'WR\right)^{-1}R'WR_\perp \end{pmatrix}
\ge 0 .
$$

Since $C$ is invertible it follows that $V_\beta(W) - V^*_\beta \ge 0$, which is (7.22).

Proof of Theorem 7.10.1. For simplicity, we assume that the constrained estimator is consistent, $\tilde{\beta} \to_p \beta$. This can be shown with more effort, but requires a deeper treatment than appropriate for this textbook.

For each element $r_j(\beta)$ of the $q$-vector $r(\beta)$, by the mean value theorem there exists a $\beta_j^*$ on the line segment joining $\tilde{\beta}$ and $\beta$ such that

$$
r_j(\tilde{\beta}) = r_j(\beta) + \frac{\partial}{\partial\beta}r_j(\beta_j^*)'\left(\tilde{\beta}-\beta\right) . \qquad (7.39)
$$

Let $R_n^*$ be the $k \times q$ matrix

$$
R_n^* = \begin{pmatrix} \dfrac{\partial}{\partial\beta}r_1(\beta_1^*) & \dfrac{\partial}{\partial\beta}r_2(\beta_2^*) & \cdots & \dfrac{\partial}{\partial\beta}r_q(\beta_q^*) \end{pmatrix} .
$$

Since $\tilde{\beta} \to_p \beta$ it follows that $\beta_j^* \to_p \beta$ and hence $R_n^* \to_p R$. Stacking the equations (7.39), and using $r(\tilde{\beta}) = 0$ and $r(\beta) = 0$, we obtain

$$0 = R_n^{*\prime}\left(\tilde{\beta}-\beta\right) . \qquad (7.40)$$

The first-order condition for the Lagrangian (7.36) is

$$W_n\left(\hat{\beta}-\tilde{\beta}\right) = \tilde{R}\tilde{\lambda}$$

where $\tilde{R} = \frac{\partial}{\partial\beta}r(\tilde{\beta})'$. Premultiplying by $R_n^{*\prime}W_n^{-1}$, inverting, and using (7.40), we find

$$
\tilde{\lambda} = \left( R_n^{*\prime}W_n^{-1}\tilde{R} \right)^{-1}R_n^{*\prime}\left(\hat{\beta}-\tilde{\beta}\right)
= \left( R_n^{*\prime}W_n^{-1}\tilde{R} \right)^{-1}R_n^{*\prime}\left(\hat{\beta}-\beta\right) .
$$

Thus

$$
\sqrt{n}\left(\tilde{\beta}-\beta\right)
= \left( I - W_n^{-1}\tilde{R}\left( R_n^{*\prime}W_n^{-1}\tilde{R} \right)^{-1}R_n^{*\prime} \right)\sqrt{n}\left(\hat{\beta}-\beta\right)
\to_d \left( I - W^{-1}R\left( R'W^{-1}R \right)^{-1}R' \right)N(0, V_\beta)
= N\left(0, V_\beta(W)\right) .
$$

Exercises

Exercise 7.1 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show directly from definition (7.3) that the CLS estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_2 = 0$ is the OLS regression of $y$ on $X_1$.

Exercise 7.2 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show directly from definition (7.3) that the CLS estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = c$ (where $c$ is some given vector), is the OLS regression of $y - X_1 c$ on $X_2$.

Exercise 7.3 In the model $y = X_1\beta_1 + X_2\beta_2 + e$, with $X_1$ and $X_2$ each $n \times k$, find the CLS estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_1 = -\beta_2$.

Exercise 7.5 Verify (7.16).

Exercise 7.6 Verify that the minimum distance estimator $\tilde{\beta}_{\mathrm{md}}$ with $W_n = \hat{Q}_{xx}$ equals the CLS estimator.

Exercise 7.7 Prove Theorem 7.5.1.

Exercise 7.8 Prove Theorem 7.5.2.

Exercise 7.9 Prove Theorem 7.5.3. (Hint: Use that CLS is a special case of Theorem 7.5.2.)

Exercise 7.10 Verify that (7.20) is $V_\beta(W)$ with $W = V_\beta^{-1}$.

Exercise 7.11 Prove (7.21). Hint: Use (7.20).

Exercise 7.12 Verify (7.23), (7.24) and (7.25).

Exercise 7.13 Verify (7.26), (7.27), and (7.28).

Chapter 8

Hypothesis Testing

8.1

It is often the goal of an empirical investigation to determine if one variable aects another.

Returning to our example of wage determination, we might be interested if union membership

aects wages. Equivalently, we might ask if the hypothesis Union membership does not aect

wages is true or false. In hypothesis testing, we are interest in testing if a specic restriction

on the parameters is compatible with the observed data. Letting be the coe cient in a wage

regression for union membership, the hypothesis that union membership has no eect on mean

wages is the restriction = 0: Hypothesis testing is about making a decision if the restriction = 0

is true or false.

In general, a hypothesis is a statement (or assertion) that a restriction is true, where a restriction

takes the form 2 0 with 0 is a strict subset of a pararameter space .

Denition 1 A hypothesis is a statement that

is the complement of 0 in

. We give a hypothesis and its complement special names.

0

hypothesis is its complement H1 : 2 c0 :

2

=

In the example given previously, the alternative hypothesis is that union membership has an

eect on mean wages, or 6= 0:

A hypothesis can either be true or not true. The goal of hypothesis testing is to provide evidence

concerning the truth of the null hypothesis versus its alternative.

A hypothesis test makes one of two decisions based on the data: either Accept H0 or Reject

H0 . Take again the example about union membership and examine the wage regression reported

in Table 5.1. We see that the coe cient for Male Union Member is 0.095 (a wage premium

of 9.5%) with a standard error of 0.020. Given the magnitude of this estimate, it seems unlikely

that the true coe cient could be zero, and so we may be inclined to reject the hypothesis that

union membership does not aect wages for males. However, we can also see that the coe cient

for Female Union Member is 0.022 with a standard error of 0.020. While the point estimate

suggests a wage premium of 2.2%, the standard error suggests that it is also plausible that the true

163

0g

164

coe cient could be zero and the point estimate is merely sampling error. In this case what decision

should we make?

A hypothesis test consists of a real-valued test statistic

Tn = Tn ((y1 ; x1 ) ; :::; (yn ; xn )) ;

a critical value c, plus the decision rule

1. Accept H0 if Tn < c;

2. Reject H0 if Tn

c:

The test statistic Tn should be designed so that small values of Tn are likely when H0 is true

and large values of Tn are likely when H1 is true. For example, for a test of H0 : = 0 against

H1 : 6= 0 the standard test statistic is the absolute t-statistic Tn = jtn ( 0 )j ; as we expect Tn to

have a well-behaved distribution = 0 ; and to be large when 6= 0 :

Given the two possible states of the world (H0 or H1 ) and the two possible decisions (Accept H0

or Reject H0 ), there are four possible pairings of states and decisions as is depicted in the following

chart.

Hypthesis Testing Decisions

H0 true

H1 true

Accept H0

Correct Decision

Type II Error

Reject H0

Type I Error

Correct Decision

Hypothesis tests are useful if they avoid making errors. There are two possible errors: (1) rejecting H_0 when H_0 is true; and (2) accepting H_0 when H_0 is false. These two errors are called Type I and Type II errors, respectively. As the events are random it is constructive to evaluate the tests based on the probability of their making an error. For a given test we define the rejection probability function as the probability of rejecting the null hypothesis,

    \pi_n(\theta) = \Pr(\text{Reject } H_0 \mid \theta) = \Pr(T_n \ge c \mid \theta).

The rejection probability in general depends on the unknown parameter \theta and the sample size n. For parameter values in the null hypothesis, \theta \in \Theta_0, \pi_n(\theta) is the probability of making a Type I error. We also call \pi_n(\theta) the power function of the test, as for parameter values in the alternative, \theta \in \Theta_0^c, \pi_n(\theta) equals 1 minus the probability of a Type II error.

For the reasons discussed in Chapter 6, in typical econometric models the exact sampling distribution of statistics such as T_n is unknown and hence \pi_n(\theta) is unknown. Therefore we typically rely on asymptotic approximations, which allow a more precise characterization. In particular, we focus on the asymptotic rejection probability

    \pi(\theta) = \lim_{n \to \infty} \pi_n(\theta).

It is therefore convenient to select a test statistic T_n which has a well-specified asymptotic distribution under H_0, that is, T_n \to_d \xi under H_0. Furthermore, if the distribution F of \xi does not depend on \theta \in \Theta_0 we say that T_n is asymptotically pivotal. In this case,

    \pi(\theta) = \lim_{n \to \infty} \Pr(T_n \ge c \mid \theta) = \Pr(\xi \ge c) = 1 - F(c)

is only a function of c. For example, if T_n is the absolute t-statistic, then by Theorem 6.11.1, under H_0, T_n \to_d |Z| where Z \sim N(0,1), and thus F(c) = \overline{\Phi}(c), the symmetrized normal distribution function defined in (6.40).

Given a test statistic T_n, how should we select the critical value c? Larger values of c mean that the test rejects less frequently, which decreases the probability of a Type I error but increases the probability of a Type II error. How can we balance one against the other? The dominant approach is to give special priority to the null hypothesis, and select c so that the Type I error probability is controlled at a specified level, meaning that we select a significance level \alpha \in (0, 1) and then pick c so that \pi(\theta) \le \alpha for all \theta \in \Theta_0. When the test statistic T_n is asymptotically pivotal with distribution F, this is accomplished by selecting c so that 1 - F(c) = \alpha.

There is no objective scientific basis for the choice of significance level \alpha. However, the common practice is to set \alpha = 0.05 (5%), which implies that the critical value c is selected so that F(c) = 0.95. For example, if T_n is the absolute t-statistic, we find from a normal table that \overline{\Phi}(1.96) = 0.95, so the 5% critical value is c = 1.96.

The reasoning behind the choice of a 5% critical value is to ensure that Type I errors are relatively unlikely, so that the decision "Reject H_0" has scientific strength, yet the test retains power against reasonable alternatives. The decision "Reject H_0" means that the evidence is inconsistent with the null hypothesis, in the sense that it is relatively unlikely (1 in 20) that data generated by the null hypothesis would yield the observed test result.

In contrast, the decision "Accept H_0" is not a strong statement. It does not mean that the evidence supports H_0, only that there is insufficient evidence to reject H_0. Because of this, it is more accurate to use the label "Do not Reject H_0" instead of "Accept H_0".

When a test rejects H_0 at the 5% significance level it is common to say that the statistic is statistically significant, and if the test accepts H_0 it is common to say that the statistic is statistically insignificant. It is helpful to remember that this is simply a way of saying "Using the statistic T_n, the hypothesis H_0 can [cannot] be rejected at the asymptotic 5% level." When the null hypothesis H_0 : \theta = 0 is rejected it is common to say that the coefficient \theta is statistically significant, because the test has shown that the coefficient is not equal to zero.

Let us return to the example about the union wage premium. The absolute t-statistic for the coefficient on "Male Union Member" is 0.095/0.020 = 4.75, which is greater than the 5% asymptotic critical value of 1.96. Therefore we reject the hypothesis that union membership does not affect wages for men. However, the absolute t-statistic for the coefficient on "Female Union Member" is 0.022/0.020 = 1.10, which is less than 1.96, and therefore we do not reject the hypothesis that union membership does not affect wages for women. We thus say that the effect of union membership for men is statistically significant, while membership for women is not statistically significant.

When a test accepts a null hypothesis, it is commonly interpreted as evidence that the null hypothesis is true. In our wage example, a common interpretation is that the regression finds that female union membership has no effect on wages. This is an incorrect and most unfortunate interpretation. The test has failed to reject the hypothesis that the coefficient is zero, but that does not mean that the coefficient is actually zero. The test could be making a Type II error.

Consider another question: Does marriage status affect wages? To test the hypothesis that marriage status has no effect on wages, we examine the t-statistics for the coefficients on "Married Male" and "Married Female" in Table 5.1, which are 0.180/0.008 = 22.5 and 0.016/0.008 = 2.0, respectively. Both exceed the asymptotic 5% critical value of 1.96, so we reject the hypothesis for both men and women. But the statistic for men is exceptionally high, while that for women is only slightly above the critical value. Suppose in contrast that the t-statistic had been 1.9, which is less than the critical value, leading to the decision "Accept H_0" rather than "Reject H_0". Should we really be making a different decision if the t-statistic is 1.9 rather than 2.0? The difference in values is small; shouldn't the difference in the decision also be small? Thinking through these examples, it seems unsatisfactory to simply report "Accept H_0" or "Reject H_0". These two decisions do not summarize the evidence. Instead, the magnitude of the statistic T_n suggests a degree of evidence against H_0.

The answer is to report what is known as the asymptotic p-value,

    p_n = 1 - F(T_n).

Since the distribution function F is monotonically increasing, the p-value is a monotonically decreasing function of T_n and is an equivalent test statistic. Instead of rejecting H_0 at the significance level \alpha if T_n \ge c, we can reject H_0 if p_n \le \alpha. Thus it is sufficient to report p_n, and let the reader decide.

Furthermore, the asymptotic p-value has a very convenient asymptotic null distribution. Since T_n \to_d \xi under H_0, then p_n = 1 - F(T_n) \to_d 1 - F(\xi), which has the distribution

    \Pr(1 - F(\xi) \le u) = \Pr(1 - u \le F(\xi))
                          = 1 - \Pr(\xi \le F^{-1}(1 - u))
                          = 1 - F(F^{-1}(1 - u))
                          = 1 - (1 - u)
                          = u,

which is the uniform distribution on [0, 1]. Thus p_n \to_d U[0, 1]. This means that the "unusualness" of p_n is easier to interpret than the "unusualness" of T_n.

The implication is that the best empirical practice is to compute and report the asymptotic p-value p_n rather than simply the test statistic T_n or the binary decision Accept/Reject. The p-value is a simple statistic, easy to interpret, and contains more information than the other choices.

For example, consider the tests for the effect of marriage status on wages. The p-value for men is 0.000, which we can interpret as a very strong rejection of the hypothesis of no effect, while the p-value for women is 0.046, which we can describe as borderline significant.

We now summarize the main features of hypothesis testing.

1. Select a significance level \alpha.

2. Select a test statistic T_n with asymptotic distribution T_n \to_d \xi under H_0.

3. Set the critical value c so that 1 - F(c) = \alpha, where F is the distribution function of \xi.

4. Calculate the asymptotic p-value p_n = 1 - F(T_n).

5. Reject H_0 if T_n \ge c, or equivalently if p_n \le \alpha.

6. Accept H_0 if T_n < c, or equivalently if p_n > \alpha.

7. Report p_n to summarize the evidence concerning H_0 versus H_1.
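To make these steps concrete, the following Python sketch (added in this revision, not part of the original text) carries them out for an absolute t-statistic, whose asymptotic null distribution is |Z| with Z ~ N(0, 1); the numerical value of the statistic is hypothetical.

    # Minimal sketch of the testing steps for an absolute t-statistic.
    from scipy.stats import norm

    alpha = 0.05                       # step 1: significance level
    t_abs = 4.75                       # hypothetical absolute t-statistic (e.g. 0.095/0.020)
    c = norm.ppf(1 - alpha / 2)        # step 3: critical value solving 1 - F(c) = alpha
    p_n = 2 * (1 - norm.cdf(t_abs))    # step 4: asymptotic p-value p_n = 1 - F(T_n)
    reject = t_abs >= c                # steps 5-6: decision
    print(f"critical value = {c:.3f}, p-value = {p_n:.4f}, reject H0: {reject}")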

8.2 t tests

The most commonly applied statistical tests are hypotheses on individual coefficients. These can be written in the form

    H_0 : \theta = \theta_0,          (8.1)

where \theta = h(\beta) is a real-valued parameter and \theta_0 is some pre-specified value. Quite typically, \theta_0 = 0, as interest focuses on whether or not a coefficient equals zero, but this is not the only possibility. For example, interest may focus on whether an elasticity \theta equals 1, in which case we may wish to test H_0 : \theta = 1.

As we described in the previous section, tests of H_0 typically are based on the t-statistic

    t_n(\theta) = \frac{\hat{\theta} - \theta}{s(\hat{\theta})},

where \hat{\theta} is the point estimate and s(\hat{\theta}) is its standard error. The test will depend on whether the alternative hypothesis is one-sided or two-sided. The two-sided alternative is

    H_1 : \theta \ne \theta_0,          (8.2)

which is appropriate for general tests of significance. In this case the test statistic is the absolute value of the t-statistic,

    T_n = |t_n(\theta_0)| = \frac{|\hat{\theta} - \theta_0|}{s(\hat{\theta})}.

Since under H_0, T_n \to_d |Z| where Z \sim N(0, 1), asymptotic critical values can be taken from the normal distribution table. In particular, the asymptotic 5% critical value is c = 1.96. Also, the asymptotic p-value for H_0 against H_1 is

    p_n = 2\left(1 - \Phi(T_n)\right).

Indeed, under H_0, |t_n(\theta_0)| \to_d |Z|, and for c satisfying \alpha = 2(1 - \Phi(c)),

    \Pr\left(|t_n(\theta_0)| \ge c \mid H_0\right) \to \alpha,

so the test "Reject H_0 if |t_n(\theta_0)| \ge c" has asymptotic significance level \alpha.

Sometimes interest instead focuses on one-sided alternatives such as

    H_1 : \theta > \theta_0          (8.3)

or

    H_1 : \theta < \theta_0.          (8.4)

Tests of (8.1) against (8.3) or (8.4) are based on the signed t-statistic

    t_n = t_n(\theta_0) = \frac{\hat{\theta} - \theta_0}{s(\hat{\theta})}.

The hypothesis (8.1) is rejected in favor of (8.3) if t_n \ge c, where c satisfies \alpha = 1 - \Phi(c). Negative values of t_n are not taken as evidence against H_0, as point estimates \hat{\theta} less than \theta_0 do not point to (8.3). Since the critical values are taken from a single tail of the normal distribution, they are smaller than for two-sided tests. Specifically, the asymptotic 5% critical value is c = 1.645. Thus (8.1) is rejected in favor of (8.3) if t_n \ge 1.645.

Conversely, tests of (8.1) against (8.4) reject H_0 for negative t-statistics, e.g. if t_n \le -c. For this alternative, large positive values of t_n are not evidence against H_0. An asymptotic 5% test rejects if t_n \le -1.645.

There seems to be an ambiguity. Should we use the two-sided critical value 1.96 or the one-sided critical value 1.645? The answer is that we should use one-sided tests and critical values only when the parameter space is known to satisfy a one-sided restriction such as \theta \ge \theta_0. This is when the test of (8.1) against (8.3) makes sense. If the restriction \theta \ge \theta_0 is not known a priori, then imposing this restriction to test (8.1) against (8.3) does not make sense. Since linear regression coefficients typically do not have a priori sign restrictions, we conclude that two-sided tests are generally appropriate.
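As an illustration of the two-sided and one-sided decision rules just described, here is a small Python sketch (added in this revision); the estimate and standard error are hypothetical.

    # Two-sided versus one-sided t-tests of H0: theta = theta0.
    from scipy.stats import norm

    theta_hat, se, theta0 = 0.12, 0.05, 0.0    # hypothetical values
    t = (theta_hat - theta0) / se

    p_two_sided = 2 * (1 - norm.cdf(abs(t)))   # reject at 5% if |t| >= 1.96
    p_right     = 1 - norm.cdf(t)              # H1: theta > theta0, reject if t >= 1.645
    p_left      = norm.cdf(t)                  # H1: theta < theta0, reject if t <= -1.645
    print(p_two_sided, p_right, p_left)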

8.3 t-ratios and the Abuse of Testing

In Section 4.14, we argued that a good applied practice is to report coefficient estimates \hat{\theta} and standard errors s(\hat{\theta}) for all coefficients of interest in estimated models. With \hat{\theta} and s(\hat{\theta}) the reader can easily construct confidence intervals [\hat{\theta} \pm 2 s(\hat{\theta})] and t-statistics (\hat{\theta} - \theta_0)/s(\hat{\theta}) for hypotheses of interest.

Some applied papers (especially older ones) instead report estimates \hat{\theta} and t-ratios t_n = \hat{\theta}/s(\hat{\theta}), not standard errors. Reporting t-ratios instead of standard errors is poor econometric practice. While the same information is being reported (you can back out standard errors by division, e.g. s(\hat{\theta}) = \hat{\theta}/t_n), standard errors are generally more helpful to readers than t-ratios. Standard errors help the reader focus on the estimation precision and confidence intervals, while t-ratios focus attention on statistical significance. While statistical significance is important, it is less important than the parameter estimates themselves and their confidence intervals. The focus should be on the meaning of the parameter estimates, their magnitudes, and their interpretation, not on listing which variables have significant (e.g. non-zero) coefficients. In many modern applications, sample sizes are very large, so standard errors can be very small. Consequently t-ratios can be large even if the coefficient estimates are economically small. In such contexts it may not be interesting to announce "The coefficient is non-zero!" Instead, what is interesting to announce is that "The coefficient estimate is economically interesting!"

In particular, some applied papers report coefficient estimates and t-ratios, and limit their discussion of the results to describing which variables are "significant" (meaning that their t-ratios exceed 2) and the signs of the coefficient estimates. This is very poor empirical work, and should be studiously avoided. It is also a recipe for banishment of your work to lower-tier economics journals.

Fundamentally, the common t-ratio is a test for the hypothesis that a coefficient equals zero. This should be reported and discussed when this is an interesting economic hypothesis of interest. But if this is not the case, it is distracting.

In general, when a coefficient \theta is of interest, it is constructive to focus on the point estimate, its standard error, and its confidence interval. The point estimate gives our best guess for the value. The standard error is a measure of precision. The confidence interval gives us the range of values consistent with the data. If the standard error is large then the point estimate is not a good summary about \theta. The endpoints of the confidence interval describe the bounds on the likely possibilities. If the confidence interval embraces too broad a set of values for \theta, then the dataset is not sufficiently informative to render inferences about \theta. On the other hand, if the confidence interval is tight, then the data have produced an accurate estimate, and the focus should be on the value and interpretation of this estimate. In contrast, the statement "the t-ratio is highly significant" has little interpretive value.

The above discussion requires that the researcher knows what the coefficient \theta means (in terms of the economic problem) and can interpret values and magnitudes, not just signs. This is critical for good applied econometric practice.

For example, consider the question about the effect of marriage status on mean log wages. We had found that the effect is highly significant for men and marginally significant for women. Now, let's construct asymptotic confidence intervals for the coefficients. The one for men is [0.16, 0.20] and that for women is [0.00, 0.03]. This shows that average wages for married men are about 16-20% higher than for unmarried men, which is very substantial, while the difference for women is about 0-3%, which is small. These magnitudes may be more informative than the results of the hypothesis tests.

8.4 Wald Tests

The t-test is appropriate when the null hypothesis is a real-valued restriction. More generally, there may be multiple restrictions on the coefficient vector \beta. For a q \times 1 vector of functions r, we can write a multiple testing problem as

    H_0 : r(\beta) = 0
    H_1 : r(\beta) \ne 0.

It is natural to estimate \theta = r(\beta) by the plug-in estimate \hat{\theta} = r(\hat{\beta}). As this is a q \times 1 vector, we can assess its magnitude by constructing a quadratic form such as the Wald statistic (6.44) evaluated at the null hypothesis,

    W_n = n \hat{\theta}' \hat{V}_\theta^{-1} \hat{\theta} = n \, r(\hat{\beta})' \left( \hat{R}' \hat{V}_\beta \hat{R} \right)^{-1} r(\hat{\beta}),          (8.5)

where

    \hat{R} = \frac{\partial}{\partial \beta} r(\hat{\beta})'.

The Wald statistic W_n is a weighted Euclidean measure of the length of the vector \hat{\theta}. When q = 1 then W_n = t_n^2, the square of the t-statistic, so hypothesis tests based on W_n and |t_n| are equivalent. The Wald statistic (8.5) is a generalization of the t-statistic to the case of multiple restrictions.

When r(\beta) = R'\beta - c is a linear function of \beta, then the Wald statistic simplifies to

    W_n = n \left( R'\hat{\beta} - c \right)' \left( R' \hat{V}_\beta R \right)^{-1} \left( R'\hat{\beta} - c \right).

As stated in Theorem 8.4.1 below, under H_0 the Wald statistic is asymptotically chi-square with q degrees of freedom. Let F_q(u) denote the \chi^2_q distribution function. For a given significance level \alpha, the asymptotic critical value c satisfies \alpha = 1 - F_q(c) and can be found from the chi-square distribution table. For example, the 5% critical values for q = 1, q = 2, and q = 3 are 3.84, 5.99, and 7.82, respectively. An asymptotic test rejects H_0 in favor of H_1 if W_n \ge c. As with t-tests, it is conventional to describe a Wald test as "significant" if W_n exceeds the 5% critical value.

Theorem 8.4.1  Under H_0, W_n \to_d \chi^2_q, and for c satisfying \alpha = 1 - F_q(c),

    \Pr(W_n \ge c \mid H_0) \to \alpha.

Notice that the asymptotic distribution in Theorem 8.4.1 depends solely on q, the number of restrictions being tested. It does not depend on k, the number of parameters estimated.

The asymptotic p-value for W_n is p_n = 1 - F_q(W_n). For multiple hypothesis tests it is particularly useful to report p-values instead of the Wald statistic. For example, if you write that a Wald test on eight restrictions (q = 8) has the value W_n = 11.2, it is difficult for a reader to assess the magnitude of this statistic without the time-consuming and cumbersome process of looking up the critical values from a table. Instead, if you write that the p-value is p_n = 0.19 (as is the case for W_n = 11.2 and q = 8), then it is simple for a reader to interpret its magnitude.
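To illustrate the computation, here is a short Python sketch (added in this revision); the estimates, covariance matrix, and sample size are hypothetical, roughly mimicking standard errors of 0.02 with n = 1000.

    # Wald statistic (8.5) for the linear restriction R'beta = c, with chi-square p-value.
    import numpy as np
    from scipy.stats import chi2

    n = 1000
    beta_hat = np.array([0.095, 0.022])          # hypothetical coefficient estimates
    V_beta   = np.array([[0.4, 0.0],             # hypothetical asymptotic covariance of sqrt(n)*(beta_hat - beta)
                         [0.0, 0.4]])
    R = np.eye(2)                                # restrictions: both coefficients equal zero
    c = np.zeros(2)

    theta_hat = R.T @ beta_hat - c
    V_theta   = R.T @ V_beta @ R
    W_n = n * theta_hat @ np.linalg.solve(V_theta, theta_hat)
    q = len(c)
    p_value = chi2.sf(W_n, df=q)                 # p_n = 1 - F_q(W_n)
    print(W_n, p_value)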


For example, consider the empirical results presented in Table 5.1. The hypothesis "Union membership does not affect wages" is the joint restriction that the coefficients on "Male Union Member" and "Female Union Member" are jointly zero. We calculate the Wald statistic (8.5) for this joint hypothesis and find W_n = 23.14 with a p-value of p_n = 0.00. Thus we reject the hypothesis in favor of the alternative that at least one of the coefficients is non-zero. This does not mean that both coefficients are non-zero, just that one of the two is non-zero. Therefore examining both the joint Wald statistic and the individual t-statistics is useful for interpreting the coefficients.

If the error is known to be homoskedastic, then it is appropriate to replace \hat{V}_\beta in (8.5) with the homoskedastic covariance matrix estimate \hat{V}^0_\beta = \hat{Q}_{xx}^{-1} s^2. In this case the Wald statistic equals

    W_n^0 = n \, r(\hat{\beta})' \left( \hat{R}' \hat{V}^0_\beta \hat{R} \right)^{-1} r(\hat{\beta})
          = n \, r(\hat{\beta})' \left( \hat{R}' \hat{Q}_{xx}^{-1} \hat{R} \right)^{-1} r(\hat{\beta}) / s^2
          = r(\hat{\beta})' \left( \hat{R}' (X'X)^{-1} \hat{R} \right)^{-1} r(\hat{\beta}) / s^2.          (8.6)

The Wald statistic is named after the statistician Abraham Wald, who showed that W_n has optimal weighted average power in certain settings.

8.5 Minimum Distance Tests

A minimum distance test measures the distance between \hat{\beta} and the restricted estimate \tilde{\beta}. Recall that under the restriction

    r(\beta) = 0

the minimum distance estimate solves the minimization problem

    \tilde{\beta} = \underset{r(\beta) = 0}{\operatorname{argmin}} \; J_n(\beta),

where

    J_n(\beta) = n \left( \hat{\beta} - \beta \right)' W_n \left( \hat{\beta} - \beta \right)

and W_n is a weight matrix. Setting W_n = (\hat{V}^0_\beta)^{-1} yields the constrained least squares estimator \tilde{\beta}_{cls}, and setting W_n = \hat{V}_\beta^{-1} yields the efficient minimum distance estimator \tilde{\beta}_{emd}.

The minimum distance test statistic of H_0 against H_1 is

    J_n = J_n(\tilde{\beta}) = \min_{r(\beta) = 0} J_n(\beta).

When r(\beta) = R'\beta - c is linear then

    J_n = n \left( R'\hat{\beta} - c \right)' \left( R' W_n^{-1} R \right)^{-1} \left( R'\hat{\beta} - c \right).

Setting W_n = (\hat{V}^0_\beta)^{-1} we find

    J_n^0 := J_n(\tilde{\beta}_{cls}) = n \left( R'\hat{\beta} - c \right)' \left( R' \hat{V}^0_\beta R \right)^{-1} \left( R'\hat{\beta} - c \right) = W_n^0,          (8.7)

and setting W_n = \hat{V}_\beta^{-1} we find

    J_n := J_n(\tilde{\beta}_{emd}) = n \left( R'\hat{\beta} - c \right)' \left( R' \hat{V}_\beta R \right)^{-1} \left( R'\hat{\beta} - c \right) = W_n.

Thus for linear hypotheses r(\beta) = R'\beta - c, J_n is identical to the Wald statistic (8.5), and J_n^0 equals the homoskedastic Wald statistic (8.6). When r(\beta) is non-linear, then the Wald and minimum distance statistics are different.

Under H_0, J_n \to_d \chi^2_q.

Testing using the minimum distance statistic J_n is similar to testing using the Wald statistic W_n. Critical values and p-values are computed using the \chi^2_q distribution. H_0 is rejected in favor of H_1 if J_n exceeds the level \alpha critical value. The asymptotic p-value is p_n = 1 - F_q(J_n).

8.6 F Tests

Consider tests of the linear hypothesis

    H_0 : R'\beta - c = 0.

The F statistic is

    F_n = \frac{\left( SSE_n(\tilde{\beta}_{cls}) - SSE_n(\hat{\beta}) \right) / q}{SSE_n(\hat{\beta}) / (n - k)},          (8.8)

which can equivalently be written as

    F_n = \frac{\left( \tilde{\sigma}^2 - \hat{\sigma}^2 \right) / q}{\hat{\sigma}^2 / (n - k)},          (8.9)

where

    \hat{\sigma}^2 = \frac{SSE_n(\hat{\beta})}{n} = \frac{1}{n} \sum_{i=1}^n \hat{e}_i^2
    \quad \text{and} \quad
    \tilde{\sigma}^2 = \frac{SSE_n(\tilde{\beta}_{cls})}{n} = \frac{1}{n} \sum_{i=1}^n \tilde{e}_i^2

are the unrestricted and restricted variance estimates.

If we recall equation (7.15), which showed that SSE_n(\beta) = s^2 \left( n - k + J_n(\beta) \right), then we can also write F_n as

    F_n = \left( J_n(\tilde{\beta}_{cls}) - J_n(\hat{\beta}) \right) / q = J_n(\tilde{\beta}_{cls}) / q = W_n^0 / q.

The second equality uses the fact that J_n(\hat{\beta}) = 0, and the third equality is (8.7). Thus the F_n statistic equals the homoskedastic Wald statistic divided by q. It follows that they are equivalent tests for H_0 against H_1.

In many statistical packages, linear hypothesis tests are reported as F_n rather than W_n. While they are equivalent, it is important to know which is being reported in order to know which critical values to use. (If p-values are directly reported this is not an issue.)

Most packages will calculate critical values and p-values using the F(q, n - k) distribution rather than the \chi^2_q. This is a prudent small-sample adjustment, as the F distribution is exact when the errors are independent of the regressors and normally distributed. However, when the degrees of freedom n - k are large, the difference is negligible. More relevantly, if n - k is small enough to make a difference, probably we shouldn't be trusting the asymptotic approximation anyway!

An elegant feature of (8.8) is that it is directly computable from the standard output of two simple OLS regressions, as the sum of squared errors (or regression variance) is a typical output from statistical packages and is often reported in applied tables. Thus F_n can be calculated by hand from standard reported statistics even if you don't have the original data (or if you are sitting in a seminar and listening to a presentation!).

If you are presented with an F statistic (or a Wald statistic, as you can just divide by q) but don't have access to critical values, a useful rule of thumb is to know that for large n, the 5% asymptotic critical value is decreasing as q increases, and is less than 2 for q \ge 7.
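The following Python sketch (added in this revision) shows the by-hand calculation of F_n from the two sums of squared errors; all numbers are hypothetical.

    # F_n in (8.8) from the SSEs of a restricted and an unrestricted regression.
    from scipy.stats import f as f_dist, chi2

    sse_restricted   = 125.4     # hypothetical SSE from the restricted (constrained) OLS fit
    sse_unrestricted = 118.2     # hypothetical SSE from the unrestricted OLS fit
    n, k, q = 500, 12, 3         # sample size, unrestricted parameters, number of restrictions

    F_n = ((sse_restricted - sse_unrestricted) / q) / (sse_unrestricted / (n - k))
    p_F     = f_dist.sf(F_n, q, n - k)    # small-sample F(q, n-k) p-value
    p_chisq = chi2.sf(q * F_n, q)         # asymptotic chi-square p-value for W_n0 = q * F_n
    print(F_n, p_F, p_chisq)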

In many statistical packages, when an OLS regression is estimated, an "F-statistic" is reported. This is F_n where H_0 restricts all coefficients except the intercept to be zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was any explanatory power to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is nearly always highly significant. While there are special cases where this F statistic is useful, these cases are atypical. As a general rule, there is no reason to report this F statistic.

The F statistic is named after the statistician Ronald Fisher, one of the founders of modern statistical theory.

8.7 Likelihood Ratio Test

For a model with likelihood function L_n(\theta), the likelihood ratio statistic for H_0 : \theta \in \Theta_0 versus H_1 : \theta \in \Theta_0^c is

    LR_n = 2 \left( \max_{\theta \in \Theta} \log L_n(\theta) - \max_{\theta \in \Theta_0} \log L_n(\theta) \right)
         = 2 \left( \log L_n(\hat{\theta}) - \log L_n(\tilde{\theta}) \right),

twice the difference between the unrestricted and restricted maximized log-likelihoods. In the normal linear model the maximized log-likelihood (3.44) at the unrestricted and restricted estimates are

    \log L\left( \hat{\beta}_{mle}, \hat{\sigma}^2_{mle} \right) = -\frac{n}{2} \left( \log(2\pi) + 1 \right) - \frac{n}{2} \log \hat{\sigma}^2

and

    \log L\left( \tilde{\beta}_{cmle}, \tilde{\sigma}^2_{cmle} \right) = -\frac{n}{2} \left( \log(2\pi) + 1 \right) - \frac{n}{2} \log \tilde{\sigma}^2,

respectively. Thus the LR statistic is

    LR_n = n \left( \log \tilde{\sigma}^2 - \log \hat{\sigma}^2 \right) = n \log \left( \frac{\tilde{\sigma}^2}{\hat{\sigma}^2} \right),

which is a monotonic function of \tilde{\sigma}^2 / \hat{\sigma}^2. Recall that the F statistic (8.9) is also a monotonic function of \tilde{\sigma}^2 / \hat{\sigma}^2. Thus LR_n and F_n are fundamentally the same statistic and have the same information about H_0 versus H_1.

Indeed, by a first-order expansion of the logarithm,

    LR_n / q = \frac{n}{q} \log\left( 1 + \frac{\tilde{\sigma}^2 - \hat{\sigma}^2}{\hat{\sigma}^2} \right) \simeq \frac{n}{q} \cdot \frac{\tilde{\sigma}^2 - \hat{\sigma}^2}{\hat{\sigma}^2} \simeq F_n.

This shows that the two statistics (LR_n and F_n) will be numerically close. It also shows that the F statistic and the homoskedastic Wald statistic for linear hypotheses can also be interpreted as approximate likelihood ratio statistics under normality.

8.8 Problems with Tests of Nonlinear Hypotheses

While the t and Wald tests work well when the hypothesis is a linear restriction on \beta, they can work quite poorly when the restrictions are nonlinear. This can be seen by a simple example introduced by Lafontaine and White (1986). Take the model

    y_i = \beta + e_i
    e_i \sim N(0, \sigma^2)

and consider the hypothesis

    H_0 : \beta = 1.

Let \hat{\beta} and \hat{\sigma}^2 be the sample mean and variance of y_i. The standard Wald test for H_0 is

    W_n = n \frac{\left( \hat{\beta} - 1 \right)^2}{\hat{\sigma}^2}.

Now notice that H_0 is equivalent to the hypothesis

    H_0(r) : \beta^r = 1

for any positive integer r. Letting h(\beta) = \beta^r, and noting H_\beta = r\beta^{r-1}, we find that the standard Wald test for H_0(r) is

    W_n(r) = n \frac{\left( \hat{\beta}^r - 1 \right)^2}{\hat{\sigma}^2 \, r^2 \, \hat{\beta}^{2r-2}}.

While the hypothesis \beta^r = 1 is unaffected by the choice of r, the statistic W_n(r) varies with r. This is an unfortunate feature of the Wald statistic.

To demonstrate this effect, we have plotted in Figure 8.1 the Wald statistic W_n(r) as a function of r, setting n/\sigma^2 = 10. The increasing solid line is for the case \hat{\beta} = 0.8. The decreasing dashed line is for the case \hat{\beta} = 1.6. It is easy to see that in each case there are values of r for which the test statistic is significant relative to asymptotic critical values, while there are other values of r for which the test statistic is insignificant. This is distressing, since the choice of r is arbitrary and irrelevant to the actual hypothesis.
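The dependence of W_n(r) on r can be reproduced with a few lines of Python (added in this revision); the values of \hat{\beta} and n/\sigma^2 follow the figure's settings, which are otherwise hypothetical.

    # How the Wald statistic W_n(r) for H0: beta^r = 1 varies with r (cf. Figure 8.1).
    import numpy as np

    def wald_r(beta_hat, r, n_over_sigma2=10.0):
        # W_n(r) = n (beta^r - 1)^2 / (sigma^2 r^2 beta^(2r-2))
        return n_over_sigma2 * (beta_hat**r - 1.0)**2 / (r**2 * beta_hat**(2 * r - 2))

    for beta_hat in (0.8, 1.6):
        stats = [wald_r(beta_hat, r) for r in range(1, 11)]
        print(beta_hat, np.round(stats, 2))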

Our first-order asymptotic theory is not useful for picking r, as W_n(r) \to_d \chi^2_1 under H_0 for any r. This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create artificial datasets, to which we apply the statistical tools of interest. This produces random draws from the statistic's sampling distribution. Through repetition, features of this distribution can be calculated.

In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84, that is, the probability of a false rejection, \Pr(W_n(r) > 3.84 \mid \beta = 1). Given the simplicity of the model, this probability depends only on r, n, and \sigma^2. In Table 8.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of r is varied from 1 to 10, n is varied among 20, 100 and 500, and \sigma is varied among 1 and 3. The table reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of r, and thus corresponds to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of n and \sigma. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics W_n(r) which are larger than 3.84. The null hypothesis \beta^r = 1 is true, so these probabilities are Type I error.

To interpret the table, remember that the ideal Type I error probability is 5% (.05), with deviations indicating distortion. Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range. For this particular example the only test which meets this criterion is the conventional W_n = W_n(1) test. Any other choice of r leads to a test with unacceptable Type I error probabilities.

In Table 8.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size n increases. There is, however, no magic choice of n for which all tests perform uniformly well. Test performance deteriorates as r increases, which is not surprising given the dependence of W_n(r) on r as shown in Figure 8.1.

Table 8.1
Type I Error Probability of Asymptotic 5% W_n(r) Test

                  sigma = 1                      sigma = 3
     r     n=20    n=100   n=500        n=20    n=100   n=500
     1     .06     .05     .05          .07     .05     .05
     2     .08     .06     .05          .15     .08     .06
     3     .10     .06     .05          .21     .12     .07
     4     .13     .07     .06          .25     .15     .08
     5     .15     .08     .06          .28     .18     .10
     6     .17     .09     .06          .30     .20     .11
     7     .19     .10     .06          .31     .22     .13
     8     .20     .12     .07          .33     .24     .14
     9     .22     .13     .07          .34     .25     .15
    10     .23     .14     .08          .35     .26     .16

Note: Rejection frequencies from 50,000 simulated random samples.

In this example it is not surprising that the choice r = 1 yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.

This point can be illustrated through another example, which is similar to one developed in Gregory and Veall (1985). Take the model

    y_i = \beta_0 + x_{1i}\beta_1 + x_{2i}\beta_2 + e_i          (8.10)
    E(x_i e_i) = 0

and the hypothesis

    H_0 : \frac{\beta_1}{\beta_2} = r,

where r is a known constant. Equivalently, defining \theta = \beta_1/\beta_2, the hypothesis can be stated as H_0 : \theta = r.

Let \hat{\beta} = (\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2) be the least-squares estimates of (8.10), let \hat{V}_\beta be an estimate of the asymptotic covariance matrix for \hat{\beta}, and set \hat{\theta} = \hat{\beta}_1/\hat{\beta}_2. Define

    \hat{H}_1 = \begin{pmatrix} 0 \\ 1/\hat{\beta}_2 \\ -\hat{\beta}_1/\hat{\beta}_2^2 \end{pmatrix},

so that the standard error for \hat{\theta} is s(\hat{\theta}) = \left( n^{-1} \hat{H}_1' \hat{V}_\beta \hat{H}_1 \right)^{1/2}. Then a t-statistic for H_0 is

    t_{1n} = \frac{\hat{\beta}_1/\hat{\beta}_2 - r}{s(\hat{\theta})}.

An alternative way to express the same restriction is as the linear hypothesis

    H_0 : \beta_1 - r\beta_2 = 0.

A t-statistic based on this formulation is

    t_{2n} = \frac{\hat{\beta}_1 - r\hat{\beta}_2}{\left( n^{-1} H_2' \hat{V}_\beta H_2 \right)^{1/2}},

where

    H_2 = \begin{pmatrix} 0 \\ 1 \\ -r \end{pmatrix}.

To compare t_{1n} and t_{2n} we perform another simple Monte Carlo simulation. We let x_{1i} and x_{2i} be mutually independent N(0, 1) variables, e_i be an independent N(0, \sigma^2) draw with \sigma = 3, and normalize \beta_0 = 0 and \beta_1 = 1. This leaves \beta_2 as a free parameter, along with the sample size n. We vary \beta_2 among .10, .25, .50, .75, and 1.0, and n among 100 and 500.

Table 8.2
Type I Error Probability of Asymptotic 5% t-tests

                        n = 100                                  n = 500
           Pr(tn < -1.645)   Pr(tn > 1.645)      Pr(tn < -1.645)   Pr(tn > 1.645)
    beta2    t1n    t2n        t1n    t2n           t1n    t2n        t1n    t2n
     .10     .47    .06        .00    .06           .28    .05        .00    .05
     .25     .26    .06        .00    .06           .15    .05        .00    .05
     .50     .15    .06        .00    .06           .10    .05        .00    .05
     .75     .12    .06        .00    .06           .09    .05        .00    .05
    1.00     .10    .06        .00    .06           .07    .05        .02    .05

The one-sided Type I error probabilities \Pr(t_n < -1.645) and \Pr(t_n > 1.645) are calculated from 50,000 simulated samples. The results are presented in Table 8.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the t_{1n} statistic diverge greatly from this value, especially for small values of \beta_2. The left tail probabilities \Pr(t_{1n} < -1.645) greatly exceed 5%, while the right tail probabilities \Pr(t_{1n} > 1.645) are close to zero in most cases. In contrast, the rejection rates for the linear t_{2n} statistic are invariant to the value of \beta_2, and are close to the ideal 5% rate for both sample sizes. The implication of Table 8.2 is that the two t-ratios have dramatically different sampling behavior.

The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis.

A simple solution is to use the minimum distance statistic J_n, which equals W_n with r = 1 in the first example, and t_{2n} in the second example. The minimum distance statistic is invariant to the algebraic formulation of the null hypothesis, so is immune to this problem. Whenever possible, the Wald statistic should not be used to test nonlinear hypotheses.

8.9 Monte Carlo Simulation

In Section 8.8 we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses. In this section we describe the method in more detail.

Recall, our data consist of observations (y_i, x_i) which are random draws from a population distribution F. Let \theta be a parameter and let T_n = T_n((y_1, x_1), ..., (y_n, x_n), \theta) be a statistic of interest, for example an estimator \hat{\theta} or a t-statistic (\hat{\theta} - \theta)/s(\hat{\theta}). The exact distribution of T_n is

    G_n(u, F) = \Pr(T_n \le u \mid F).

While the asymptotic distribution of T_n might be known, the exact (finite sample) distribution G_n is generally unknown.

Monte Carlo simulation uses numerical simulation to compute G_n(u, F) for selected choices of F. This is useful to investigate the performance of the statistic T_n in reasonable situations and sample sizes. The basic idea is that for any given F, the distribution function G_n(u, F) can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort where games of chance are played.

The method of Monte Carlo is quite simple to describe. The researcher chooses F (the distribution of the data) and the sample size n. A "true" value of \theta is implied by this choice, or equivalently the value \theta is selected directly by the researcher, which implies restrictions on F. Then the following experiment is conducted by computer simulation:

1. n independent random pairs (y_i, x_i), i = 1, ..., n, are drawn from the distribution F using the computer's random number generator.

2. The statistic T_n = T_n((y_1, x_1), ..., (y_n, x_n), \theta) is calculated on this pseudo data.

For step 1, most computer packages have built-in procedures for generating U[0, 1] and N(0, 1) random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)

For step 2, it is important that the statistic be evaluated at the "true" value of \theta corresponding to the choice of F.

The above experiment creates one random draw from the distribution G_n(u, F). This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment B times, where B is a large number. Typically, we set B = 1000 or B = 5000. We will discuss this choice later.

Notationally, let the b'th experiment result in the draw T_{nb}, b = 1, ..., B. These results are stored. After all B experiments have been calculated, these results constitute a random sample of size B from the distribution G_n(u, F) = \Pr(T_{nb} \le u) = \Pr(T_n \le u \mid F).

From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:

Suppose we are interested in the bias, mean-squared error (MSE), and/or variance of the distribution of \hat{\theta} - \theta. We then set T_n = \hat{\theta} - \theta, run the above experiment, and calculate

    \widehat{Bias}(\hat{\theta}) = \frac{1}{B} \sum_{b=1}^{B} T_{nb} = \frac{1}{B} \sum_{b=1}^{B} \left( \hat{\theta}_b - \theta \right)

    \widehat{MSE}(\hat{\theta}) = \frac{1}{B} \sum_{b=1}^{B} \left( T_{nb} \right)^2 = \frac{1}{B} \sum_{b=1}^{B} \left( \hat{\theta}_b - \theta \right)^2

    \widehat{var}(\hat{\theta}) = \widehat{MSE}(\hat{\theta}) - \left( \widehat{Bias}(\hat{\theta}) \right)^2.

Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set T_n = |\hat{\theta} - \theta| / s(\hat{\theta}) and calculate

    \hat{P} = \frac{1}{B} \sum_{b=1}^{B} 1\left( T_{nb} \ge 1.96 \right),          (8.11)

the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.

Suppose we are interested in the 5% and 95% quantiles of T_n = \hat{\theta} or T_n = (\hat{\theta} - \theta)/s(\hat{\theta}). We then compute the 5% and 95% sample quantiles of the sample {T_{nb}}. The \alpha% sample quantile is a number q_\alpha such that \alpha% of the sample are less than q_\alpha. A simple way to compute sample quantiles is to sort the sample {T_{nb}} from low to high. Then q_\alpha is the N'th number in this ordered sequence, where N = (B + 1)\alpha. It is therefore convenient to pick B so that N is an integer. For example, if we set B = 999, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.
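The following Python sketch (added in this revision) implements the experiment above for the sample mean in the model y_i ~ N(\theta, \sigma^2); the choices of F, n, \theta and B are assumptions for illustration only.

    # Monte Carlo estimate of the Type I error and sample quantiles of a t-statistic.
    import numpy as np

    rng = np.random.default_rng(0)
    theta, sigma, n, B = 0.0, 1.0, 50, 999

    t_stats = np.empty(B)
    for b in range(B):
        y = rng.normal(theta, sigma, size=n)       # step 1: draw an artificial sample from F
        theta_hat = y.mean()
        s = y.std(ddof=1) / np.sqrt(n)             # standard error of the sample mean
        t_stats[b] = (theta_hat - theta) / s       # step 2: statistic evaluated at the true theta

    type1 = np.mean(np.abs(t_stats) >= 1.96)       # estimate (8.11) of the Type I error
    srt = np.sort(t_stats)
    q05 = srt[round(0.05 * (B + 1)) - 1]           # 5% sample quantile (50th sorted value)
    q95 = srt[round(0.95 * (B + 1)) - 1]           # 95% sample quantile (950th sorted value)
    print(type1, q05, q95)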


The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on n and F. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of n and F.

As discussed above, the researcher must select the number of experiments, B. Often this is called the number of replications. Quite simply, a larger B results in more precise estimates of the features of interest of G_n, but requires more computational time. In practice, therefore, the choice of B is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size B, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then B will have to be increased.

In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (8.11). The random variable 1(T_{nb} \ge 1.96) is iid Bernoulli, equalling 1 with probability p = E\left[ 1(T_{nb} \ge 1.96) \right]. The average (8.11) is therefore an unbiased estimator of p with standard error s(\hat{p}) = \sqrt{p(1 - p)/B}. As p is unknown, this may be approximated by replacing p with \hat{p} or with an hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set s(\hat{p}) = \sqrt{(.05)(.95)/B} \simeq .22/\sqrt{B}. Hence, standard errors for B = 100, 1000, and 5000 are, respectively, s(\hat{p}) = .022, .007, and .003.

Most papers in econometric methods, and some empirical papers, include the results of Monte Carlo simulations to illustrate the performance of their methods. When extending existing results, it is good practice to start by replicating existing (published) results. This is not exactly possible in the case of simulation results, as they are inherently random. For example, suppose a paper investigates a statistical test and reports a simulated rejection probability of 0.07 based on a simulation with B = 100 replications. Suppose you attempt to replicate this result, and find a rejection probability of 0.03 (again using B = 100 simulation replications). Should you conclude that you have failed in your attempt? Absolutely not! Under the hypothesis that both simulations are identical, you have two independent estimates, \hat{p}_1 = 0.07 and \hat{p}_2 = 0.03, of a common probability p. The asymptotic (as B \to \infty) distribution of their difference is \sqrt{B}(\hat{p}_1 - \hat{p}_2) \to_d N(0, 2p(1 - p)), so a standard error for \hat{p}_1 - \hat{p}_2 = 0.04 is \hat{s} = \sqrt{2p(1 - p)/B} \simeq 0.03, where I estimate p = (\hat{p}_1 + \hat{p}_2)/2. Since the t-ratio 0.04/0.03 = 1.3 is not statistically significant, it is incorrect to reject the null hypothesis that the two simulations are identical. The difference between the results \hat{p}_1 = 0.07 and \hat{p}_2 = 0.03 is consistent with random variation.

What should be done? The first mistake was to copy the previous paper's choice of B = 100. Instead, suppose you set B = 5000 and now obtain \hat{p}_2 = 0.04. Then \hat{p}_1 - \hat{p}_2 = 0.03 and a standard error is \hat{s} = \sqrt{p(1 - p)(1/100 + 1/5000)} \simeq 0.02. Still we cannot reject the hypothesis that the two simulations are equivalent. Even though the estimates (0.07 and 0.04) appear to be quite different, the difficulty is that the original simulation used a very small number of replications (B = 100), so the reported estimate is quite imprecise. In this case, it is appropriate to conclude that your results "replicate" the previous study, as there is no statistical evidence to reject the hypothesis that they are equivalent.
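The arithmetic of this replication comparison is short enough to write out; the Python sketch below (added in this revision) uses the numbers from the example above.

    # Standard error for the difference between two simulated rejection probabilities.
    import numpy as np

    p1, B1 = 0.07, 100      # published simulation
    p2, B2 = 0.04, 5000     # your replication with more replications
    p = (p1 + p2) / 2       # pooled estimate of the common probability p
    se = np.sqrt(p * (1 - p) * (1 / B1 + 1 / B2))
    t_ratio = (p1 - p2) / se
    print(se, t_ratio)      # roughly 0.02 and 1.3: the difference is not statistically significant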

Most journals have policies requiring authors to make available the data sets and computer programs required for empirical results. They do not have similar policies regarding simulations. Nevertheless, it is good professional practice to make your simulations available. The best practice is to post your simulation code on your webpage. This invites others to build on and use your results, leading to possible collaboration, citation, and/or advancement.

8.10 Confidence Intervals by Test Inversion

There is a close relationship between hypothesis tests and confidence intervals. We observed in Section 6.12 that the standard 95% asymptotic confidence interval for a parameter \theta is

    C_n = \left[ \hat{\theta} - 1.96 \, s(\hat{\theta}), \; \hat{\theta} + 1.96 \, s(\hat{\theta}) \right]          (8.12)
        = \left\{ \theta : |t_n(\theta)| \le 1.96 \right\}.

That is, we can describe C_n as "the point estimate plus or minus 2 standard errors" or "the set of parameter values not rejected by a two-sided t-test." The second definition, known as test statistic inversion, is a general method for finding confidence intervals, and typically produces confidence intervals with excellent properties.

Given a test statistic T_n(\theta) and critical value c, the acceptance region "Accept if T_n(\theta) \le c" is identical to the confidence interval C_n = \{ \theta : T_n(\theta) \le c \}. Since the regions are identical, the probability of coverage \Pr(\theta \in C_n) equals the probability of correct acceptance \Pr(\text{Accept} \mid \theta), which is exactly 1 minus the Type I error probability. Thus inverting tests with good Type I error probabilities yields a confidence interval with good coverage probabilities.

Now suppose that the parameter of interest \theta = h(\beta) is a nonlinear function of the coefficient vector \beta. In this case the standard confidence interval for \theta is the set C_n as in (8.12), where \hat{\theta} = h(\hat{\beta}) is the point estimate and s(\hat{\theta}) = \left( n^{-1} \hat{H}' \hat{V}_\beta \hat{H} \right)^{1/2} is the delta method standard error. This confidence interval is inverting the t-test based on the nonlinear hypothesis h(\beta) = \theta. The trouble is that in Section 8.8 we learned that there is no unique t-statistic for tests of nonlinear hypotheses and that the choice of parameterization matters greatly.

For example, if \theta = \beta_1/\beta_2, then the coverage probability of the standard interval (8.12) is 1 minus the probability of the Type I error, which as shown in Table 8.2 can be far from the nominal 5%.

In this example a good solution is the same as discussed in Section 8.8: rewrite the hypothesis as a linear restriction. The hypothesis \theta = \beta_1/\beta_2 is the same as \theta\beta_2 = \beta_1. The t-statistic for this restriction is

    t_n(\theta) = \frac{\hat{\beta}_1 - \hat{\beta}_2 \theta}{\left( h_\theta' \hat{V} h_\theta \right)^{1/2}},

where

    h_\theta = \begin{pmatrix} 1 \\ -\theta \end{pmatrix}

and \hat{V} is the covariance matrix for (\hat{\beta}_1, \hat{\beta}_2). A 95% confidence interval for \theta = \beta_1/\beta_2 is the set of values of \theta such that |t_n(\theta)| \le 1.96. Since \theta appears in both the numerator and denominator, t_n(\theta) is a non-linear function of \theta, so the easiest method to find the confidence set is by grid search over \theta.
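A grid search of this kind takes only a few lines. The Python sketch below (added in this revision) uses hypothetical estimates and a hypothetical covariance matrix of (\hat{\beta}_1, \hat{\beta}_2), and inverts the linear t-test just described.

    # Confidence set for theta = beta1/beta2 by inverting the linear t-test.
    import numpy as np

    beta1, beta2 = 1.0, 0.25                   # hypothetical point estimates
    V = np.array([[0.04,  0.001],              # hypothetical covariance matrix of the estimates
                  [0.001, 0.0025]])

    grid = np.linspace(0.0, 12.0, 2001)        # candidate values for theta
    accepted = []
    for theta in grid:
        h = np.array([1.0, -theta])
        se = np.sqrt(h @ V @ h)                # standard error of beta1_hat - theta*beta2_hat
        t = (beta1 - theta * beta2) / se
        if abs(t) <= 1.96:                     # theta is not rejected by the two-sided t-test
            accepted.append(theta)

    print(min(accepted), max(accepted))        # endpoints of the 95% confidence set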

For example, in the wage equation

    \log(Wage) = \beta_1 \, Experience + \beta_2 \, Experience^2 / 100 + \cdots,

the highest expected wage occurs at Experience = -50\beta_1/\beta_2. From Table 5.1 we have the point estimate \hat{\theta} = 29.8 and we can calculate the standard error s(\hat{\theta}) = 0.022, for a 95% confidence interval [29.8, 29.9]. However, if we instead invert the linear form of the test we numerically find the interval [29.1, 30.6], which is much larger. From the evidence presented in Section 8.8 we know the first interval can be quite inaccurate and the second interval is greatly preferred.

8.11 Asymptotic Power

For simplicity suppose that y_i is i.i.d. N(\theta, \sigma^2) with \sigma^2 known, consider the t-statistic t_n(\theta) = \sqrt{n}(\bar{y} - \theta)/\sigma, and consider tests of H_0 : \theta = 0 against H_1 : \theta > 0. We reject H_0 if t_n = t_n(0) \ge c. Note that

    t_n = t_n(\theta) + \sqrt{n}\,\theta/\sigma

and t_n(\theta) = Z has an exact N(0, 1) distribution. This is because t_n(\theta) is centered at the true mean \theta, while the test statistic t_n(0) is centered at the (false) hypothesized mean of 0.

The power of the test is

    \Pr(t_n \ge c \mid \theta) = \Pr\left( Z + \sqrt{n}\,\theta/\sigma \ge c \right) = 1 - \Phi\left( c - \sqrt{n}\,\theta/\sigma \right).

This function is monotonically increasing in \theta and n, and decreasing in \sigma.
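The power function above is easy to tabulate. The Python sketch below (added in this revision) evaluates it for a few sample sizes; the grid of \theta values and \sigma are assumptions for illustration.

    # Power Pr(t_n >= c | theta) = 1 - Phi(c - sqrt(n)*theta/sigma) of the one-sided 5% t-test.
    import numpy as np
    from scipy.stats import norm

    c, sigma = 1.645, 1.0
    theta = np.linspace(0.0, 1.0, 6)
    for n in (25, 100, 400):
        power = 1 - norm.cdf(c - np.sqrt(n) * theta / sigma)
        print(n, np.round(power, 3))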

Notice that for any c and \theta \ne 0, the power increases to 1 as n \to \infty. This means that for \theta in the alternative, the test will reject H_0 with probability approaching 1 as the sample size gets large. We call this property test consistency.

Definition 8.11.1  A test of H_0 : \theta \in \Theta_0 is consistent against fixed alternatives if for all \theta \in \Theta_1, \Pr(\text{Reject } H_0 \mid \theta) \to 1 as n \to \infty.

For tests of the form "Reject H_0 if T_n \ge c", a sufficient condition for test consistency is that T_n diverges to positive infinity with probability one for all \theta \in \Theta_1.

Definition 8.11.2  We say that T_n \to_p \infty as n \to \infty if for all M < \infty, \Pr(T_n \le M) \to 0 as n \to \infty.

In general, t-tests and Wald tests are consistent against fixed alternatives. Take a t-statistic for a test of H_0 : \theta = \theta_0,

    t_n = \frac{\hat{\theta} - \theta_0}{s(\hat{\theta})},

where \theta_0 is a known value and s(\hat{\theta}) = \sqrt{n^{-1}\hat{V}_\theta}. Note that

    t_n = \frac{\sqrt{n}\left( \hat{\theta} - \theta \right)}{\sqrt{\hat{V}_\theta}} + \frac{\sqrt{n}\left( \theta - \theta_0 \right)}{\sqrt{\hat{V}_\theta}},

where the first term converges in distribution to N(0, 1) and the second term converges in probability to +\infty if \theta > \theta_0 and converges to -\infty if \theta < \theta_0. Thus the two-sided t-test is consistent against H_1 : \theta \ne \theta_0, and one-sided t-tests are consistent against the alternatives for which they are designed.

The Wald statistic for H_0 : \theta = r(\beta) = 0 against H_1 : \theta \ne 0 is

    W_n = n \hat{\theta}' \hat{V}_\theta^{-1} \hat{\theta}.

Under H_1, \hat{\theta} \to_p \theta \ne 0. Thus \hat{\theta}' \hat{V}_\theta^{-1} \hat{\theta} \to_p \theta' V_\theta^{-1} \theta > 0, so W_n \to_p \infty. This implies that Wald tests are consistent tests: under H_1, if \theta = r(\beta) \ne 0, then W_n \to_p \infty, so the test "Reject H_0 if W_n \ge c" is consistent against fixed alternatives.

Consistency is a good property for a test, but it does not give a useful approximation to the power function. One useful asymptotic method to compute a continuous approximation to the power function is based on local alternatives, similar to our analysis of restriction estimation under misspecification (Section 7.9). The technique is to index the parameter by sample size so that the asymptotic distribution of the statistic is continuous in a localizing parameter. Specifically, suppose that

    \theta = r(\beta) = n^{-1/2} h,

where h is a q \times 1 localizing parameter. Then

    \sqrt{n}\,\hat{\theta} = \sqrt{n}\left( \hat{\theta} - \theta \right) + h \to_d Z_h = N(h, V_\theta),

a non-central normal distribution, and

    W_n = n \hat{\theta}' \hat{V}_\theta^{-1} \hat{\theta} \to_d Z_h' V_\theta^{-1} Z_h = \chi^2_q(\lambda),

a non-central chi-square distribution with degrees of freedom q and noncentrality parameter \lambda = h' V_\theta^{-1} h, whose distribution is a function of q and \lambda only.

This is convenient, as we then obtain the following approximation to the power function. As n \to \infty,

    \Pr(W_n \ge c) \to \Pr\left( \chi^2_q(\lambda) \ge c \right) := \pi_q(\lambda, c).

The function \pi_q(\lambda, c) is known as the asymptotic local power function, and for given c and q it depends only on the real-valued parameter \lambda \ge 0. In summary, under the local alternative \theta = r(\beta) = n^{-1/2} h,

    W_n \to_d \chi^2_q(\lambda), \qquad \lambda = h' V_\theta^{-1} h,

and furthermore

    \Pr(W_n \ge c) \to \Pr\left( \chi^2_q(\lambda) \ge c \right) = \pi_q(\lambda, c).

Exercises

Exercise 8.1  Prove that if an additional regressor X_{k+1} is added to X, Theil's adjusted R-squared increases if and only if |t_{k+1}| > 1, where t_{k+1} = \hat{\beta}_{k+1}/s(\hat{\beta}_{k+1}) is the t-ratio for \hat{\beta}_{k+1} and

    s(\hat{\beta}_{k+1}) = \left( s^2 \left[ (X'X)^{-1} \right]_{k+1, k+1} \right)^{1/2}.

Exercise 8.2  You have two independent samples (y_1, X_1) and (y_2, X_2) which satisfy y_1 = X_1\beta_1 + e_1 and y_2 = X_2\beta_2 + e_2, where E(x_{1i}e_{1i}) = 0 and E(x_{2i}e_{2i}) = 0, and both X_1 and X_2 have k columns. Let \hat{\beta}_1 and \hat{\beta}_2 be the OLS estimates of \beta_1 and \beta_2. For simplicity, you may assume that both samples have the same number of observations n.

(a) Find the asymptotic distribution of \sqrt{n}\left( (\hat{\beta}_2 - \hat{\beta}_1) - (\beta_2 - \beta_1) \right) as n \to \infty.

Exercise 8.3  The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are I_i, Q_i, C_i, and D_i. The flow variables are annual sums for 1987. The stock variables are beginning of year.

(a) Estimate a linear regression of I_i on the other variables. Calculate appropriate standard errors.

(b) Calculate asymptotic confidence intervals for the coefficients.

(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by Q_i. Thus the coefficient on Q_i should be positive and the others should be zero. Test the joint hypothesis that the coefficients on C_i and D_i are zero. Test the hypothesis that the coefficient on Q_i is zero. Are the results consistent with the predictions of the theory?

(d) Now try a non-linear (quadratic) specification. Regress I_i on Q_i, C_i, D_i, Q_i^2, C_i^2, D_i^2, Q_iC_i, Q_iD_i, C_iD_i. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.

Exercise 8.4  In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, Section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi.) The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlove was interested in estimating a cost function TC = f(Q, PL, PF, PK).

(a) First estimate an unrestricted Cobb-Douglas specification,

    \log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i.          (8.13)

Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.

(b) What is the economic meaning of the restriction H_0 : \beta_3 + \beta_4 + \beta_5 = 1?

(c) Estimate (8.13) by constrained least-squares imposing \beta_3 + \beta_4 + \beta_5 = 1. Report your parameter estimates and standard errors.

(d) Estimate (8.13) by efficient minimum distance imposing \beta_3 + \beta_4 + \beta_5 = 1. Report your parameter estimates and standard errors.

(e) Test H_0 : \beta_3 + \beta_4 + \beta_5 = 1 using a Wald statistic.

(f) Test H_0 : \beta_3 + \beta_4 + \beta_5 = 1 using a minimum distance statistic.

Chapter 9

Regression Extensions

9.1 Generalized Least Squares

In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model

    y_i = x_i'\beta + e_i
    E(e_i \mid x_i) = 0,

the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator (4.13) introduced in Section 4.6.1. The GLS estimator is sometimes called the Aitken estimator. The GLS estimator is infeasible since the matrix D = diag\{\sigma_1^2, ..., \sigma_n^2\} is unknown. A feasible GLS (FGLS) estimator replaces the unknown D with an estimate \hat{D} = diag\{\hat{\sigma}_1^2, ..., \hat{\sigma}_n^2\}. We now discuss this estimation problem.

Suppose that we model the conditional variance using the parametric form

    \sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha' z_i,

where z_{1i} is some q \times 1 function of x_i. Typically, z_{1i} are squares (and perhaps levels) of some (or all) elements of x_i. Often the functional form is kept simple for parsimony.

Let \eta_i = e_i^2. Then

    E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1

and we have the regression equation

    \eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i,          (9.1)
    E(\xi_i \mid x_i) = 0.

This regression error \xi_i has the conditional variance

    var(\xi_i \mid x_i) = var\left( e_i^2 \mid x_i \right) = E\left( \left( e_i^2 - E(e_i^2 \mid x_i) \right)^2 \mid x_i \right) = E\left( e_i^4 \mid x_i \right) - \left( E(e_i^2 \mid x_i) \right)^2.

Suppose e_i (and thus \eta_i) were observed. Then we could estimate \alpha by OLS,

    \hat{\alpha} = (Z'Z)^{-1} Z'\eta \to_p \alpha,

and

    \sqrt{n}\left( \hat{\alpha} - \alpha \right) \to_d N(0, V_\alpha),

where

    V_\alpha = \left( E(z_i z_i') \right)^{-1} E\left( z_i z_i' \xi_i^2 \right) \left( E(z_i z_i') \right)^{-1}.          (9.2)

While e_i is not observed, we have the OLS residual \hat{e}_i = y_i - x_i'\hat{\beta} = e_i - x_i'(\hat{\beta} - \beta). Thus

    \phi_i \equiv \hat{\eta}_i - \eta_i = \hat{e}_i^2 - e_i^2 = -2 e_i x_i'\left( \hat{\beta} - \beta \right) + \left( \hat{\beta} - \beta \right)' x_i x_i' \left( \hat{\beta} - \beta \right),

and therefore

    \frac{1}{\sqrt{n}} \sum_{i=1}^n z_i \phi_i = -\frac{2}{n} \sum_{i=1}^n z_i e_i x_i' \sqrt{n}\left( \hat{\beta} - \beta \right) + \frac{1}{n} \sum_{i=1}^n z_i \left( \hat{\beta} - \beta \right)' x_i x_i' \left( \hat{\beta} - \beta \right) \sqrt{n} \to_p 0.

Let

    \tilde{\alpha} = (Z'Z)^{-1} Z'\hat{\eta}          (9.3)

denote the OLS estimator from a regression of the squared residuals \hat{\eta}_i = \hat{e}_i^2 on z_i. Then

    \sqrt{n}\left( \tilde{\alpha} - \alpha \right) = \sqrt{n}\left( \hat{\alpha} - \alpha \right) + \left( \frac{1}{n} Z'Z \right)^{-1} \frac{1}{\sqrt{n}} Z'\phi \to_d N(0, V_\alpha).          (9.4)

Thus the fact that \eta_i is replaced with \hat{\eta}_i is asymptotically irrelevant. We call (9.3) the skedastic regression, as it is estimating the conditional variance of the regression of y_i on x_i. We have shown that \alpha is consistently estimated by a simple procedure, and hence we can estimate \sigma_i^2 = z_i'\alpha by

    \tilde{\sigma}_i^2 = \tilde{\alpha}' z_i.          (9.5)

Then set

    \tilde{D} = diag\{\tilde{\sigma}_1^2, ..., \tilde{\sigma}_n^2\}

and

    \tilde{\beta} = \left( X'\tilde{D}^{-1} X \right)^{-1} X'\tilde{D}^{-1} y.

This is the feasible GLS, or FGLS, estimator of \beta. Since there is not a unique specification for the conditional variance, the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.
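For concreteness, the following Python sketch (added in this revision) walks through the FGLS steps just described on simulated data; the data-generating process and the choice of z_i are assumptions for illustration only.

    # FGLS: first-step OLS, skedastic regression of squared residuals on z_i, then WLS.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    e = rng.normal(size=n) * np.sqrt(0.5 + 0.5 * x**2)       # heteroskedastic errors
    y = X @ np.array([1.0, 2.0]) + e

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)              # first-step OLS
    ehat2 = (y - X @ beta_ols) ** 2

    Z = np.column_stack([np.ones(n), x**2])                   # z_i: intercept and squared regressor
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ ehat2)             # skedastic regression (9.3)
    sig2 = Z @ alpha                                          # fitted conditional variances (9.5)
    sig2 = np.maximum(sig2, 0.25 * ehat2.mean())              # trimming with c = 1/4 to keep variances positive

    W = X / sig2[:, None]                                     # rows x_i / sigma_i^2, i.e. X' D^{-1} row-wise
    beta_fgls = np.linalg.solve(W.T @ X, W.T @ y)
    print(beta_ols, beta_fgls)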

One typical problem with implementation of FGLS estimation is that in the linear specification (9.1), there is no guarantee that \tilde{\sigma}_i^2 > 0 for all i. If \tilde{\sigma}_i^2 < 0 for some i, then the FGLS estimator is not well defined. Furthermore, if \tilde{\sigma}_i^2 \approx 0 for some i, then the FGLS estimator will force the regression equation through the point (y_i, x_i), which is undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule takes the form

    \bar{\sigma}_i^2 = \max\left[ \tilde{\sigma}_i^2, \, c\hat{\sigma}^2 \right]

for some c > 0. For example, setting c = 1/4 means that the conditional variance function is constrained to exceed one-fourth of the unconditional variance. As there is no clear method to select c, this introduces a degree of arbitrariness. In this context it is useful to re-estimate the model with several choices for the trimming parameter. If the estimates turn out to be sensitive to its choice, the estimation method should probably be reconsidered.

It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS. As the proof is tricky, we just state the result without proof.

Theorem 9.1.1  If the skedastic regression is correctly specified, then

    \sqrt{n}\left( \tilde{\beta}_{GLS} - \tilde{\beta}_{FGLS} \right) \to_p 0,

and thus

    \sqrt{n}\left( \tilde{\beta}_{FGLS} - \beta \right) \to_d N(0, V_\beta),

where

    V_\beta = \left( E\left( \sigma_i^{-2} x_i x_i' \right) \right)^{-1}.

Examining the asymptotic distribution of Theorem 9.1.1, the natural estimator of the asymptotic variance of \tilde{\beta} is

    \tilde{V}_\beta^0 = \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-2} x_i x_i' \right)^{-1} = \left( \frac{1}{n} X'\tilde{D}^{-1} X \right)^{-1},

which is consistent for V_\beta as n \to \infty. This estimator \tilde{V}_\beta^0 is appropriate when the skedastic regression (9.1) is correctly specified.

It may be the case that \alpha' z_i is only an approximation to the true conditional variance \sigma_i^2 = E(e_i^2 \mid x_i). In this case we interpret \alpha' z_i as a linear projection of e_i^2 on z_i. \tilde{\beta} should perhaps be called a quasi-FGLS estimator of \beta. Its asymptotic variance is not that given in Theorem 9.1.1. Instead,

    V_\beta = \left( E\left( (\alpha' z_i)^{-1} x_i x_i' \right) \right)^{-1} E\left( (\alpha' z_i)^{-2} \sigma_i^2 x_i x_i' \right) \left( E\left( (\alpha' z_i)^{-1} x_i x_i' \right) \right)^{-1}.

V_\beta takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless \sigma_i^2 = \alpha' z_i, \tilde{V}_\beta^0 is inconsistent for V_\beta.

An appropriate solution is to use a White-type estimator in place of \tilde{V}_\beta^0. This may be written as

    \tilde{V}_\beta = \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-2} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-4} \hat{e}_i^2 x_i x_i' \right) \left( \frac{1}{n} \sum_{i=1}^n \tilde{\sigma}_i^{-2} x_i x_i' \right)^{-1}
                   = \left( \frac{1}{n} X'\tilde{D}^{-1} X \right)^{-1} \left( \frac{1}{n} X'\tilde{D}^{-1} \hat{D} \tilde{D}^{-1} X \right) \left( \frac{1}{n} X'\tilde{D}^{-1} X \right)^{-1},

where \hat{D} = diag\{\hat{e}_1^2, ..., \hat{e}_n^2\}. This estimator is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).

In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively estimate regression models by FGLS? This is a good question. There are three reasons.

First, FGLS estimation depends on specification and estimation of the skedastic regression. Since the form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated conditional variances may contain more noise than information about the true conditional variances. In this case, FGLS can do worse than OLS in practice.

Second, individual estimated conditional variances may be negative, and this requires trimming to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.

Third, and probably most importantly, OLS is a robust estimator of the parameter vector. It is consistent not only in the regression model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection and not a conditional mean, then the OLS and FGLS estimators will converge in probability to different limits, as they will be estimating two different projections. The FGLS probability limit will depend on the particular function selected for the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger assumption of a correct conditional mean, and the cost is a loss of robustness to misspecification.

9.2 Testing for Heteroskedasticity

The hypothesis of homoskedasticity is that E(e_i^2 \mid x_i) = \sigma^2, or equivalently that

    H_0 : \alpha_1 = 0

in the regression (9.1). We may therefore test this hypothesis by the estimation (9.3) and constructing a Wald statistic. In the classic literature it is typical to impose the stronger assumption that e_i is independent of x_i, in which case \xi_i is independent of x_i and the asymptotic variance (9.2) for \tilde{\alpha} simplifies to

    V_\alpha = \left( E(z_i z_i') \right)^{-1} E\left( \xi_i^2 \right).          (9.6)

Hence the standard test of H_0 is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (9.3). The asymptotic distribution (9.4) and the asymptotic variance (9.6) under independence show that this test has an asymptotic chi-square distribution.

Theorem 9.2.1  Under H_0 and e_i independent of x_i, the Wald test of H_0 is asymptotically \chi^2_q.

Most tests for heteroskedasticity take this basic form. The main differences between popular tests are which transformations of x_i enter z_i. Motivated by the form of the asymptotic variance of the OLS estimator \hat{\beta}, White (1980) proposed that the test for heteroskedasticity be based on setting z_i to equal all non-redundant elements of x_i, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of z_i, and replaced E(\xi_i^2) with 2\sigma^4, which holds when e_i is N(0, \sigma^2). If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.
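To fix ideas, here is a Python sketch (added in this revision) of a White-type test on simulated data; the data-generating process and the particular choice of z_i are assumptions, and the variance formula used is the independence-based (9.6).

    # Test for heteroskedasticity: regress squared OLS residuals on z_i and test alpha_1 = 0.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    n = 400
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    y = X @ np.array([1.0, 1.0, -0.5]) + rng.normal(size=n) * (1 + 0.8 * np.abs(x1))

    ehat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    Z = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])   # levels, squares, cross-product
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ ehat**2)                    # skedastic regression
    xi = ehat**2 - Z @ alpha                                           # skedastic regression residuals

    q = Z.shape[1] - 1
    V_alpha = np.linalg.inv(Z.T @ Z / n) * xi.var()                    # variance estimate under (9.6)
    W = n * alpha[1:] @ np.linalg.solve(V_alpha[1:, 1:], alpha[1:])    # Wald statistic excluding the intercept
    print(W, chi2.sf(W, q))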

It is important not to misuse tests for heteroskedasticity. They should not be used to determine whether to estimate a regression equation by OLS or FGLS, nor to determine whether classic or White standard errors should be reported. Hypothesis tests are not designed for these purposes. Rather, tests for heteroskedasticity should be used to answer the scientific question of whether or not the conditional variance is a function of the regressors. If this question is not of economic interest, then there is no value in conducting a test for heteroskedasticity.

9.3 Nonlinear Regression

In some cases we might use a parametric regression function m(x, \theta) = E(y_i \mid x_i = x) which is a non-linear function of the parameters \theta. We describe this setting as non-linear regression. Examples of nonlinear regression functions include

    m(x, \theta) = \theta_1 + \theta_2 \frac{x}{1 + \theta_3 x}
    m(x, \theta) = \theta_1 + \theta_2 x^{\theta_3}
    m(x, \theta) = \theta_1 + \theta_2 \exp(\theta_3 x)
    m(x, \theta) = G(x'\theta), \; G \text{ known}
    m(x, \theta) = \theta_1' x_1 + \left( \theta_2' x_1 \right) \Phi\left( \frac{x_2 - \theta_3}{\theta_4} \right)
    m(x, \theta) = \theta_1 + \theta_2 x + \theta_3 (x - \theta_4) 1(x > \theta_4)
    m(x, \theta) = \theta_1' x_1 1(x_2 < \theta_3) + \theta_2' x_1 1(x_2 > \theta_3)

In the first five examples, m(x, \theta) is (generically) differentiable in the parameters \theta. In the final two examples, m is not differentiable with respect to \theta_4 and \theta_3, which alters some of the analysis. When it exists, let

    m_\theta(x, \theta) = \frac{\partial}{\partial \theta} m(x, \theta).

Nonlinear regression is sometimes adopted because the functional form m(x, \theta) is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.

The least squares estimator \hat{\theta} minimizes the normalized sum-of-squared-errors

    S_n(\theta) = \frac{1}{n} \sum_{i=1}^n \left( y_i - m(x_i, \theta) \right)^2.

When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are \hat{e}_i = y_i - m(x_i, \hat{\theta}).

One motivation for the choice of NLLS as the estimation method is that the parameter \theta is the solution to the population problem \min_\theta E\left( y_i - m(x_i, \theta) \right)^2.

Since the sum-of-squared-errors function S_n(\theta) is not quadratic, \hat{\theta} must be found by numerical methods. See Appendix E. When m(x, \theta) is differentiable, the FOC for minimization are

    0 = \sum_{i=1}^n m_\theta\left( x_i, \hat{\theta} \right) \hat{e}_i.          (9.7)

Theorem 9.3.1  If the model is identified and m(x, \theta) is differentiable with respect to \theta, then

    \sqrt{n}\left( \hat{\theta} - \theta_0 \right) \to_d N(0, V_\theta),

where

    V_\theta = \left( E\left( m_{\theta i} m_{\theta i}' \right) \right)^{-1} E\left( m_{\theta i} m_{\theta i}' e_i^2 \right) \left( E\left( m_{\theta i} m_{\theta i}' \right) \right)^{-1}

and m_{\theta i} = m_\theta(x_i, \theta_0).

Based on Theorem 9.3.1, an estimate of the asymptotic variance V_\theta is

    \hat{V}_\theta = \left( \frac{1}{n} \sum_{i=1}^n \hat{m}_{\theta i} \hat{m}_{\theta i}' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^n \hat{m}_{\theta i} \hat{m}_{\theta i}' \hat{e}_i^2 \right) \left( \frac{1}{n} \sum_{i=1}^n \hat{m}_{\theta i} \hat{m}_{\theta i}' \right)^{-1},

where \hat{m}_{\theta i} = m_\theta(x_i, \hat{\theta}) and \hat{e}_i = y_i - m(x_i, \hat{\theta}).
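As a worked illustration (added in this revision, not part of the original text), the Python sketch below estimates the exponential regression m(x, \theta) = \theta_1 + \theta_2 exp(\theta_3 x) by NLLS on simulated data and computes the robust variance estimate just given; the data-generating process and starting values are assumptions.

    # NLLS by numerical least squares, with the heteroskedasticity-robust variance estimate.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n = 300
    x = rng.uniform(0, 2, size=n)
    theta_true = np.array([1.0, 0.5, -1.0])
    y = theta_true[0] + theta_true[1] * np.exp(theta_true[2] * x) + 0.1 * rng.normal(size=n)

    def resid(theta):
        return y - (theta[0] + theta[1] * np.exp(theta[2] * x))

    fit = least_squares(resid, x0=np.array([0.5, 0.5, -0.5]))
    theta_hat = fit.x
    ehat = resid(theta_hat)

    # m_theta(x, theta) evaluated at theta_hat: columns are the partial derivatives of m.
    M = np.column_stack([np.ones(n),
                         np.exp(theta_hat[2] * x),
                         theta_hat[1] * x * np.exp(theta_hat[2] * x)])
    Q = M.T @ M / n
    Omega = (M * ehat[:, None]**2).T @ M / n
    V_hat = np.linalg.inv(Q) @ Omega @ np.linalg.inv(Q)
    se = np.sqrt(np.diag(V_hat) / n)
    print(theta_hat, se)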

Identification is often tricky in nonlinear regression models. Suppose that

    m(x_i, \theta) = \beta_1' z_i + \beta_2' x_i(\gamma),

where x_i(\gamma) is a function of x_i indexed by the parameter \gamma, for example x_i(\gamma) = \exp(\gamma x_i) or x_i(\gamma) = x_i 1(g(x_i) > \gamma). The model is linear when \beta_2 = 0, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test

    H_0 : \beta_2 = 0.

However, under H_0, the model is

    y_i = \beta_1' z_i + e_i

and both \beta_2 and \gamma have dropped out. This means that under H_0, \gamma is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that \beta_2 = 0, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of H_0 do not have asymptotic normal or chi-square distributions.

The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. E. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.

Proof of Theorem 9.3.1 (Sketch). NLLS estimation falls in the class of optimization estimators. For this theory, it is useful to denote the true value of the parameter \theta as \theta_0.

The first step is to show that \hat{\theta} \to_p \theta_0. Proving that nonlinear estimators are consistent is more challenging than for linear estimators. We sketch the main argument. The idea is that \hat{\theta} minimizes the sample criterion function S_n(\theta), which (for any \theta) converges in probability to the mean-squared error function E\left( y_i - m(x_i, \theta) \right)^2. Thus it seems reasonable that the minimizer \hat{\theta} will converge in probability to \theta_0, the minimizer of E\left( y_i - m(x_i, \theta) \right)^2. It turns out that to show this rigorously, we need to show that S_n(\theta) converges uniformly to its expectation E\left( y_i - m(x_i, \theta) \right)^2, which means that the maximum discrepancy must converge in probability to zero, to exclude the possibility that S_n(\theta) is excessively wiggly in \theta. Proving uniform convergence is technically challenging, but it can be shown to hold broadly for relevant nonlinear regression models, especially if the regression function m(x_i, \theta) is differentiable in \theta. For a complete treatment of the theory of optimization estimators see Newey and McFadden (1994).

Since \hat{\theta} \to_p \theta_0, \hat{\theta} is close to \theta_0 for n large, so the minimization of S_n(\theta) only needs to be examined for \theta close to \theta_0. Let

    y_i^0 = e_i + m_{\theta i}' \theta_0.

For \theta close to the true value \theta_0, a first-order Taylor series approximation gives

    m(x_i, \theta) \simeq m(x_i, \theta_0) + m_{\theta i}' (\theta - \theta_0).

Thus

    y_i - m(x_i, \theta) \simeq \left( e_i + m(x_i, \theta_0) \right) - \left( m(x_i, \theta_0) + m_{\theta i}' (\theta - \theta_0) \right)
                         = e_i - m_{\theta i}' (\theta - \theta_0)
                         = y_i^0 - m_{\theta i}' \theta,

and the sum-of-squared errors is approximately

    S_n(\theta) = \sum_{i=1}^n \left( y_i - m(x_i, \theta) \right)^2 \simeq \sum_{i=1}^n \left( y_i^0 - m_{\theta i}' \theta \right)^2,

and the right-hand side is the SSE function for a linear regression of y_i^0 on m_{\theta i}. Thus the NLLS estimator \hat{\theta} has the same asymptotic distribution as the (infeasible) OLS regression of y_i^0 on m_{\theta i}, which is that stated in the theorem.

9.4 Testing for Omitted Nonlinearity

If the goal is to estimate the conditional expectation $E(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.

One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat{\beta} + \hat{e}_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde{\beta} + z_i'\tilde{\gamma} + \tilde{e}_i$ by OLS, and form a Wald statistic for $\gamma = 0$.

Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
$$y_i = x_i'\beta + e_i$$
which is estimated by OLS, yielding predicted values $\hat{y}_i = x_i'\hat{\beta}$. Now let
$$z_i = \begin{pmatrix} \hat{y}_i^2 \\ \vdots \\ \hat{y}_i^m \end{pmatrix}$$
be a $(m-1)$-vector of powers of the predicted values. Then run the auxiliary regression
$$y_i = x_i'\tilde{\beta} + z_i'\tilde{\gamma} + \tilde{e}_i \qquad (9.8)$$
by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. Under the null hypothesis, $W_n \overset{d}{\to} \chi^2_{m-1}$. Thus the null is rejected at the $\alpha\%$ level if $W_n$ exceeds the upper $\alpha\%$ tail critical value of the $\chi^2_{m-1}$ distribution.

To implement the test, $m$ must be selected in advance. Typically, small values such as $m = 2$, 3, or 4 seem to work best.
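The following is a minimal sketch of the RESET test in Python (numpy and scipy). The heteroskedasticity-robust form of the Wald statistic and the helper names are my own choices for illustration, not prescriptions from the text.

```python
import numpy as np
from scipy.stats import chi2

def reset_test(y, X, m=3):
    """RESET test for neglected nonlinearity (sketch). X should include an intercept column."""
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    yhat = X @ beta_hat
    Z = np.column_stack([yhat ** p for p in range(2, m + 1)])   # (m-1) powers of fitted values
    W_mat = np.column_stack([X, Z])                              # augmented regressors
    coef = np.linalg.lstsq(W_mat, y, rcond=None)[0]
    e_tilde = y - W_mat @ coef
    # heteroskedasticity-robust (sandwich) covariance of the augmented coefficients
    Q_inv = np.linalg.inv(W_mat.T @ W_mat)
    V = Q_inv @ (W_mat.T @ (W_mat * (e_tilde ** 2)[:, None])) @ Q_inv
    k = X.shape[1]
    gamma = coef[k:]                        # coefficients on the powers of yhat
    Wn = gamma @ np.linalg.solve(V[k:, k:], gamma)
    pvalue = 1 - chi2.cdf(Wn, df=m - 1)     # compare to the chi-square(m-1) distribution
    return Wn, pvalue
```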

The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
$$y_i = G(x_i'\beta) + e_i$$
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (9.8) may be written as
$$y_i = x_i'\tilde{\beta} + \left(x_i'\hat{\beta}\right)^2\tilde{\gamma}_1 + \left(x_i'\hat{\beta}\right)^3\tilde{\gamma}_2 + \cdots + \left(x_i'\hat{\beta}\right)^m\tilde{\gamma}_{m-1} + \tilde{e}_i.$$

Exercises

Exercise 9.1 Suppose that $y_i = g(x_i, \theta) + e_i$ with $E(e_i \mid x_i) = 0$, $\hat{\theta}$ is the NLLS estimator, and $\hat{V}$ is the estimate of $\mathrm{var}(\hat{\theta})$. You are interested in the conditional mean function $E(y_i \mid x_i = x) = g(x)$ at some $x$. Find an asymptotic 95% confidence interval for $g(x)$.

Exercise 9.2 In Exercise 8.4, you estimated a cost function on a cross-section of electric companies. The equation you estimated was
$$\log TC_i = \beta_1 + \beta_2 \log Q_i + \beta_3 \log PL_i + \beta_4 \log PK_i + \beta_5 \log PF_i + e_i. \qquad (9.9)$$

(a) Following Nerlove, add the variable $(\log Q_i)^2$ to the regression. Do so. Assess the merits of this new specification using a hypothesis test. Do you agree with this modification?

(b) Now try a non-linear specification. Consider model (9.9) plus the extra term $\beta_6 z_i$, where
$$z_i = \log Q_i \left(1 + \exp\left(-\left(\log Q_i - \beta_7\right)\right)\right)^{-1}.$$
For values of $\log Q_i$ much below $\beta_7$, the variable $\log Q_i$ has a regression slope of $\beta_2$. For values much above $\beta_7$, the regression slope is $\beta_2 + \beta_6$, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter $\beta_7$. The model works best when $\beta_7$ is selected so that several values (in this example, at least 10 to 15) of $\log Q_i$ are both below and above $\beta_7$. Examine the data and pick an appropriate range for $\beta_7$.

(c) Estimate the model by non-linear least squares. I recommend the concentration method: Pick 10 (or more if you like) values of $\beta_7$ in this range. For each value of $\beta_7$, calculate $z_i$ and estimate the model by OLS. Record the sum of squared errors, and find the value of $\beta_7$ for which the sum of squared errors is minimized.

(d) Calculate standard errors for all the parameters $(\beta_1, \ldots, \beta_7)$.

Exercise 9.3 The data file cps78.dat contains 550 observations on 20 variables taken from the May 1978 current population survey. Variables are listed in the file cps78.pdf. The goal of the exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the conditioning variables.

(a) Start by an OLS regression of LNWAGE on the other variables. Report coefficient estimates and standard errors.

(b) Consider augmenting the model by squares and/or cross-products of the conditioning variables. Estimate your selected model and report the results.

(c) Are there any variables which seem to be unimportant as a determinant of wages? You may re-estimate the model without these variables, if desired.

(d) Test whether the error variance is different for men and women. Interpret.

(e) Test whether the error variance is different for whites and nonwhites. Interpret.

(f) Construct a model for the conditional variance. Estimate such a model, test for general heteroskedasticity and report the results.

(g) Using this model for the conditional variance, re-estimate the model from part (c) using FGLS. Report the results.

(h) Do the OLS and FGLS estimates differ greatly? Note any interesting differences.

(i) Compare the estimated standard errors. Note any interesting differences.

Chapter 10

The Bootstrap

10.1 Definition of the Bootstrap

Let $F$ denote a distribution function for the population of observations $(y_i, x_i)$. Let
$$T_n = T_n\left((y_1, x_1), \ldots, (y_n, x_n), F\right)$$
be a statistic of interest, for example an estimator $\hat{\theta}$ or a t-statistic $(\hat{\theta} - \theta)/s(\hat{\theta})$. Note that we write $T_n$ as possibly a function of $F$. For example, the t-statistic is a function of the parameter $\theta$ which itself is a function of $F$.

The exact CDF of $T_n$ when the data are sampled from the distribution $F$ is
$$G_n(u, F) = \Pr(T_n \le u \mid F).$$
Ideally, inference would be based on $G_n(u, F)$. This is generally impossible since $F$ is unknown.

Asymptotic inference is based on approximating $G_n(u, F)$ with $G(u, F) = \lim_{n\to\infty} G_n(u, F)$. When $G(u, F) = G(u)$ does not depend on $F$, we say that $T_n$ is asymptotically pivotal and use the distribution function $G(u)$ for inferential purposes.

In a seminal contribution, Efron (1979) proposed the bootstrap, which makes a different approximation. The unknown $F$ is replaced by a consistent estimate $F_n$ (one choice is discussed in the next section). Plugged into $G_n(u, F)$ we obtain
$$G_n^*(u) = G_n(u, F_n). \qquad (10.1)$$

Let $(y_i^*, x_i^*)$ denote random variables with the distribution $F_n$. A random sample from this distribution is called the bootstrap data. The statistic $T_n^* = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n\right)$ constructed on this sample is a random variable with distribution $G_n^*$. That is, $\Pr(T_n^* \le u) = G_n^*(u)$. We call $T_n^*$ the bootstrap statistic. The distribution of $T_n^*$ is identical to that of $T_n$ when the true CDF is $F_n$ rather than $F$.

The bootstrap distribution is itself random, as it depends on the sample through the estimator $F_n$.

In the next sections we describe computation of the bootstrap distribution.

10.2 The Empirical Distribution Function

Recall that $F(y, x) = \Pr(y_i \le y, x_i \le x) = E\left(1(y_i \le y)\, 1(x_i \le x)\right)$, where $1(\cdot)$ is the indicator function. This is a population moment. The method of moments estimator is the corresponding sample moment:
$$F_n(y, x) = \frac{1}{n}\sum_{i=1}^{n} 1(y_i \le y)\, 1(x_i \le x). \qquad (10.2)$$

Note that while $F$ may be either discrete or continuous, $F_n$ is by construction a step function.

The EDF is a consistent estimator of the CDF. To see this, note that for any $(y, x)$, $1(y_i \le y)\, 1(x_i \le x)$ is an iid random variable with expectation $F(y, x)$. Thus by the WLLN (Theorem 5.4.2), $F_n(y, x) \overset{p}{\to} F(y, x)$. Furthermore, by the CLT (Theorem 5.7.1),
$$\sqrt{n}\left(F_n(y, x) - F(y, x)\right) \overset{d}{\to} N\left(0,\, F(y, x)\left(1 - F(y, x)\right)\right).$$

To see the effect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for random samples of size $n = 25$, 50, 100, and 500. The random draws are from the $N(0, 1)$ distribution. For $n = 25$, the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the large $n$. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.

The EDF is a valid discrete probability distribution which puts probability mass $1/n$ at each pair $(y_i, x_i)$, $i = 1, \ldots, n$. Notationally, it is helpful to think of a random pair $(y_i^*, x_i^*)$ with the distribution $F_n$. That is,
$$\Pr(y_i^* \le y,\, x_i^* \le x) = F_n(y, x).$$
We can easily calculate the moments of functions of $(y_i^*, x_i^*)$:
$$E\, h(y_i^*, x_i^*) = \int h(y, x)\, dF_n(y, x) = \sum_{i=1}^{n} h(y_i, x_i) \Pr\left(y_i^* = y_i, x_i^* = x_i\right) = \frac{1}{n}\sum_{i=1}^{n} h(y_i, x_i).$$
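As a quick illustration (a sketch, not taken from the text), the EDF and expectations under it can be computed directly; the function names below are my own.

```python
import numpy as np

def edf(y_data, x_data, y, x):
    # empirical distribution function F_n(y, x) = (1/n) sum 1(y_i <= y) 1(x_i <= x)
    return np.mean((y_data <= y) & (x_data <= x))

def edf_expectation(h, y_data, x_data):
    # the expectation of h(y*, x*) under F_n is the sample average of h(y_i, x_i)
    return np.mean(h(y_data, x_data))

# usage sketch
rng = np.random.default_rng(0)
y_data, x_data = rng.normal(size=100), rng.normal(size=100)
print(edf(y_data, x_data, 0.0, 0.0))                        # roughly 1/4 for independent N(0,1) draws
print(edf_expectation(lambda y, x: y * x, y_data, x_data))  # a cross-moment under the EDF
```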

10.3 Nonparametric Bootstrap

The nonparametric bootstrap is obtained when the bootstrap distribution (10.1) is defined using the EDF (10.2) as the estimate $F_n$ of $F$.

Since the EDF $F_n$ is a multinomial (with $n$ support points), in principle the distribution $G_n^*$ could be calculated by direct methods. However, as there are $\binom{2n-1}{n}$ possible samples $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$, such a calculation is computationally infeasible. The popular alternative is to use simulation to approximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following points of clarification:

The sample size $n$ used for the simulation is the same as the sample size.

The random vectors $(y_i^*, x_i^*)$ are drawn randomly from the empirical distribution. This is equivalent to sampling a pair $(y_i, x_i)$ randomly from the sample.

The bootstrap statistic $T_n^* = T_n\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n\right)$ is calculated for each bootstrap sample. This is repeated $B$ times. $B$ is known as the number of bootstrap replications. A theory for the determination of the number of bootstrap replications $B$ has been developed by Andrews and Buchinsky (2000). It is desirable for $B$ to be large, so long as the computational costs are reasonable. $B = 1000$ typically suffices.

When the statistic $T_n$ is a function of $F$, it is typically through dependence on a parameter. For example, the t-ratio $(\hat{\theta} - \theta)/s(\hat{\theta})$ depends on $\theta$. As the bootstrap statistic replaces $F$ with $F_n$, it similarly replaces $\theta$ with $\theta_n$, the value of $\theta$ implied by $F_n$. Typically $\theta_n = \hat{\theta}$, the parameter estimate. (When in doubt use $\hat{\theta}$.)

Sampling from the EDF is particularly easy. Since $F_n$ is a discrete probability distribution putting probability mass $1/n$ at each sample point, sampling from the EDF is equivalent to random sampling a pair $(y_i, x_i)$ from the observed data with replacement. In consequence, a bootstrap sample $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$ will necessarily have some ties and multiple values, which is generally not a problem.
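A minimal sketch of this simulation algorithm in Python (numpy); the statistic passed in and the function names are illustrative assumptions.

```python
import numpy as np

def nonparametric_bootstrap(y, x, statistic, B=1000, seed=0):
    """Approximate the bootstrap distribution of statistic(y, x) by resampling pairs with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)       # sample n indices with replacement (sampling from the EDF)
        draws[b] = statistic(y[idx], x[idx])   # bootstrap statistic T_n* on the bootstrap sample
    return draws

# usage sketch: bootstrap distribution of the slope in a bivariate regression
# slope = lambda y, x: np.polyfit(x, y, 1)[0]
# draws = nonparametric_bootstrap(y, x, slope, B=1000)
```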

10.4 Bootstrap Estimation of Bias and Variance

The bias of $\hat{\theta}$ is $\tau_n = E(\hat{\theta} - \theta_0)$. Let $T_n(\theta) = \hat{\theta} - \theta$. Then $\tau_n = E(T_n(\theta_0))$. The bootstrap counterparts are $\hat{\theta}^* = \hat{\theta}\left((y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\right)$ and $T_n^* = \hat{\theta}^* - \theta_n = \hat{\theta}^* - \hat{\theta}$. The bootstrap estimate of $\tau_n$ is
$$\tau_n^* = E(T_n^*).$$
If this is calculated by the simulation described in the previous section, the estimate of $\tau_n^*$ is
$$\hat{\tau}_n^* = \frac{1}{B}\sum_{b=1}^{B} T_{nb}^* = \frac{1}{B}\sum_{b=1}^{B} \hat{\theta}_b^* - \hat{\theta} = \bar{\theta}^* - \hat{\theta}$$
where $\bar{\theta}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$. If $\hat{\theta}$ is biased, it might be desirable to construct a biased-corrected estimator (one with reduced bias). Ideally, this would be
$$\tilde{\theta} = \hat{\theta} - \tau_n,$$
but $\tau_n$ is unknown. The (estimated) bootstrap biased-corrected estimator is
$$\tilde{\theta}^* = \hat{\theta} - \hat{\tau}_n^* = \hat{\theta} - \left(\bar{\theta}^* - \hat{\theta}\right) = 2\hat{\theta} - \bar{\theta}^*.$$
Note, in particular, that the biased-corrected estimator is not $\bar{\theta}^*$. Intuitively, the bootstrap makes the following experiment. Suppose that $\hat{\theta}$ is the truth. Then what is the average value of $\hat{\theta}$ calculated from such samples? The answer is $\bar{\theta}^*$. If this is lower than $\hat{\theta}$, this suggests that the estimator is downward-biased, so a biased-corrected estimator of $\theta$ should be larger than $\hat{\theta}$, and the best guess is the difference between $\hat{\theta}$ and $\bar{\theta}^*$. Similarly if $\bar{\theta}^*$ is higher than $\hat{\theta}$, then the estimator is upward-biased and the biased-corrected estimator should be lower than $\hat{\theta}$.

Let $T_n = \hat{\theta}$. The variance of $\hat{\theta}$ is
$$V_n = E(T_n - E T_n)^2.$$
Let $T_n^* = \hat{\theta}^*$. It has variance
$$V_n^* = E(T_n^* - E T_n^*)^2.$$
The simulation estimate is
$$\hat{V}_n^* = \frac{1}{B}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\theta}^*\right)^2.$$
A bootstrap standard error for $\hat{\theta}$ is the square root of the bootstrap estimate of variance,
$$s^*(\hat{\theta}) = \sqrt{\hat{V}_n^*}.$$
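A short sketch of these calculations from $B$ simulated bootstrap estimates (Python/numpy; variable names are mine):

```python
import numpy as np

def bootstrap_bias_se(theta_hat, theta_star):
    """Bias-corrected estimate and bootstrap standard error from an array of B bootstrap draws (a sketch)."""
    theta_bar = np.mean(theta_star)
    bias_hat = theta_bar - theta_hat        # estimate of the bias tau_n
    theta_bc = 2 * theta_hat - theta_bar    # bias-corrected estimator 2*theta_hat - mean(theta*)
    se_boot = np.std(theta_star)            # sqrt of (1/B) sum (theta*_b - mean(theta*))^2
    return theta_bc, bias_hat, se_boot
```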

While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspect. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.

10.5 Percentile Intervals

For a distribution function $G_n(u, F)$, let $q_n(\alpha, F)$ denote its quantile function. This is the function which solves
$$G_n\left(q_n(\alpha, F), F\right) = \alpha.$$
[When $G_n(u, F)$ is discrete, $q_n(\alpha, F)$ may be non-unique, but we will ignore such complications.] Let $q_n(\alpha)$ denote the quantile function of the true sampling distribution, and $q_n^*(\alpha) = q_n(\alpha, F_n)$ denote the quantile function of the bootstrap distribution. Note that this function will change depending on the underlying statistic $T_n$ whose distribution is $G_n$.

Let $T_n = \hat{\theta}$, an estimate of a parameter of interest. In $(1-\alpha)\%$ of samples, $\hat{\theta}$ lies in the region $[q_n(\alpha/2),\ q_n(1-\alpha/2)]$. This motivates a confidence interval proposed by Efron:
$$C_1 = [q_n^*(\alpha/2),\ q_n^*(1-\alpha/2)].$$
Computationally, the quantile $q_n^*(\alpha)$ is estimated by $\hat{q}_n^*(\alpha)$, the $\alpha$'th sample quantile of the simulated statistics $\{T_{n1}^*, \ldots, T_{nB}^*\}$, as discussed in the section on Monte Carlo simulation. The $(1-\alpha)\%$ Efron percentile interval is then $[\hat{q}_n^*(\alpha/2),\ \hat{q}_n^*(1-\alpha/2)]$.

The interval $C_1$ is a popular bootstrap confidence interval often used in empirical practice. This is because it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap, and also has the feature that it is translation invariant. That is, if we define $\phi = f(\theta)$ as the parameter of interest for a monotonically increasing function $f$, then the percentile method applied to this problem will produce the confidence interval $[f(q_n^*(\alpha/2)),\ f(q_n^*(1-\alpha/2))]$, which is a naturally good property.

However, as we show now, $C_1$ is in a deep sense very poorly motivated.

It will be useful if we introduce an alternative definition of $C_1$. Let $T_n(\theta) = \hat{\theta} - \theta$ and let $q_n(\alpha)$ be the quantile function of its distribution. (These are the original quantiles, with $\theta$ subtracted.) Then $C_1$ can alternatively be written as
$$C_1 = [\hat{\theta} + q_n^*(\alpha/2),\ \hat{\theta} + q_n^*(1-\alpha/2)].$$
The analogous interval constructed with the true (unknown) quantiles is
$$C_1^0 = [\hat{\theta} + q_n(\alpha/2),\ \hat{\theta} + q_n(1-\alpha/2)].$$
Its coverage probability is
$$\Pr\left(\theta_0 \in C_1^0\right) = \Pr\left(\hat{\theta} + q_n(\alpha/2) \le \theta_0 \le \hat{\theta} + q_n(1-\alpha/2)\right) = \Pr\left(-q_n(1-\alpha/2) \le \hat{\theta} - \theta_0 \le -q_n(\alpha/2)\right) = G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1-\alpha/2), F_0\right)$$
which generally is not $1 - \alpha$! There is one important exception. If $\hat{\theta} - \theta_0$ has a symmetric distribution about 0, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
$$\Pr\left(\theta_0 \in C_1^0\right) = G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1-\alpha/2), F_0\right) = \left(1 - G_n\left(q_n(\alpha/2), F_0\right)\right) - \left(1 - G_n\left(q_n(1-\alpha/2), F_0\right)\right) = \left(1 - \frac{\alpha}{2}\right) - \frac{\alpha}{2} = 1 - \alpha$$
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat{\theta}$ has a symmetric distribution about $\theta_0$.

When $\hat{\theta}$ does not have a symmetric distribution, $C_1$ may perform quite poorly.

However, by the translation invariance argument presented above, it also follows that if there exists some monotonically increasing transformation $f(\cdot)$ such that $f(\hat{\theta})$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.

Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.

The problems with the percentile method can be circumvented, at least in principle, by an alternative method.

Let $T_n(\theta) = \hat{\theta} - \theta$. Then
$$1 - \alpha = \Pr\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1-\alpha/2)\right) = \Pr\left(\hat{\theta} - q_n(1-\alpha/2) \le \theta_0 \le \hat{\theta} - q_n(\alpha/2)\right),$$
so an exact $(1-\alpha)\%$ confidence interval for $\theta_0$ would be
$$C_2^0 = [\hat{\theta} - q_n(1-\alpha/2),\ \hat{\theta} - q_n(\alpha/2)].$$
This motivates a bootstrap analog
$$C_2 = [\hat{\theta} - q_n^*(1-\alpha/2),\ \hat{\theta} - q_n^*(\alpha/2)].$$
Notice that generally this is very different from the Efron interval $C_1$! They coincide in the special case that $G_n^*(u)$ is symmetric about $\hat{\theta}$, but otherwise they differ.

Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics $T_n^* = \hat{\theta}^* - \hat{\theta}$, which are centered at the sample estimate $\hat{\theta}$. These are sorted to yield the quantile estimates $\hat{q}_n^*(.025)$ and $\hat{q}_n^*(.975)$. The 95% confidence interval is then $[\hat{\theta} - \hat{q}_n^*(.975),\ \hat{\theta} - \hat{q}_n^*(.025)]$.

This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.
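A sketch of both interval constructions from $B$ simulated bootstrap estimates (Python/numpy; variable names are mine):

```python
import numpy as np

def percentile_intervals(theta_hat, theta_star, alpha=0.05):
    """Efron percentile interval C1 and the alternative percentile interval C2 (a sketch)."""
    lo, hi = np.quantile(theta_star, [alpha / 2, 1 - alpha / 2])   # bootstrap quantiles of theta*
    C1 = (lo, hi)                                                  # Efron percentile interval
    # alternative interval: quantiles of T_n* = theta* - theta_hat, reflected around theta_hat
    q_lo, q_hi = lo - theta_hat, hi - theta_hat
    C2 = (theta_hat - q_hi, theta_hat - q_lo)
    return C1, C2
```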

10.6 Percentile-t Equal-Tailed Interval

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta < \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat{\theta} - \theta)/s(\hat{\theta})$ and reject $H_0$ in favor of $H_1$ if $T_n(\theta_0) < c$, where $c$ would be selected so that
$$\Pr\left(T_n(\theta_0) < c\right) = \alpha.$$
Thus $c = q_n(\alpha)$. Since this is unknown, a bootstrap test replaces $q_n(\alpha)$ with the bootstrap estimate $q_n^*(\alpha)$, and the test rejects if $T_n(\theta_0) < q_n^*(\alpha)$.

Similarly, if the alternative is $H_1: \theta > \theta_0$, the bootstrap test rejects if $T_n(\theta_0) > q_n^*(1-\alpha)$.

Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics $T_n^* = (\hat{\theta}^* - \hat{\theta})/s(\hat{\theta}^*)$. Note, and this is important, that the bootstrap test statistic is centered at the estimate $\hat{\theta}$, and the standard error $s(\hat{\theta}^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat{q}_n^*(\alpha)$ and/or $\hat{q}_n^*(1-\alpha)$.

Let $T_n(\theta) = (\hat{\theta} - \theta)/s(\hat{\theta})$. Then taking the intersection of two one-sided intervals,
$$1 - \alpha = \Pr\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1-\alpha/2)\right) = \Pr\left(q_n(\alpha/2) \le \frac{\hat{\theta} - \theta_0}{s(\hat{\theta})} \le q_n(1-\alpha/2)\right) = \Pr\left(\hat{\theta} - s(\hat{\theta})\, q_n(1-\alpha/2) \le \theta_0 \le \hat{\theta} - s(\hat{\theta})\, q_n(\alpha/2)\right),$$
so an exact $(1-\alpha)\%$ confidence interval for $\theta_0$ would be
$$C_3^0 = [\hat{\theta} - s(\hat{\theta})\, q_n(1-\alpha/2),\ \hat{\theta} - s(\hat{\theta})\, q_n(\alpha/2)].$$
This motivates a bootstrap analog
$$C_3 = [\hat{\theta} - s(\hat{\theta})\, q_n^*(1-\alpha/2),\ \hat{\theta} - s(\hat{\theta})\, q_n^*(\alpha/2)].$$
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that $\theta_0$ is below the left endpoint approximately equals the probability that $\theta_0$ is above the right endpoint, each $\alpha/2$.

Computationally, this is based on the critical values from the one-sided hypothesis tests, discussed above.
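A minimal sketch of the equal-tailed percentile-t interval (Python/numpy). It assumes that a standard error has been recomputed on each bootstrap sample; the argument names are mine.

```python
import numpy as np

def percentile_t_interval(theta_hat, se_hat, theta_star, se_star, alpha=0.05):
    """Equal-tailed percentile-t interval C3 from bootstrap estimates theta_star and
    their bootstrap-sample standard errors se_star (a sketch)."""
    t_star = (theta_star - theta_hat) / se_star          # bootstrap t-statistics, centered at theta_hat
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return (theta_hat - se_hat * q_hi, theta_hat - se_hat * q_lo)
```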

10.7 Symmetric Percentile-t Intervals

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat{\theta} - \theta)/s(\hat{\theta})$ and reject $H_0$ in favor of $H_1$ if $|T_n(\theta_0)| > c$, where $c$ would be selected so that
$$\Pr\left(|T_n(\theta_0)| > c\right) = \alpha.$$
Note that
$$\Pr\left(|T_n(\theta_0)| < c\right) = \Pr\left(-c < T_n(\theta_0) < c\right) = G_n(c) - G_n(-c) \equiv \overline{G}_n(c),$$
which is a symmetric distribution function. The ideal critical value $c = \overline{q}_n(\alpha)$ solves the equation
$$\overline{G}_n\left(\overline{q}_n(\alpha)\right) = 1 - \alpha.$$
Equivalently, $\overline{q}_n(\alpha)$ is the $1-\alpha$ quantile of the distribution of $|T_n(\theta_0)|$.

The bootstrap estimate is $\overline{q}_n^*(\alpha)$, the $1-\alpha$ quantile of the distribution of $|T_n^*|$, or the number which solves the equation
$$\overline{G}_n^*\left(\overline{q}_n^*(\alpha)\right) = G_n^*\left(\overline{q}_n^*(\alpha)\right) - G_n^*\left(-\overline{q}_n^*(\alpha)\right) = 1 - \alpha.$$
Computationally, $\overline{q}_n^*(\alpha)$ is estimated from a bootstrap simulation by sorting the bootstrap t-statistics $|T_n^*| = |\hat{\theta}^* - \hat{\theta}|/s(\hat{\theta}^*)$, and taking the upper $\alpha\%$ quantile. The bootstrap test rejects if $|T_n(\theta_0)| > \overline{q}_n^*(\alpha)$.

Let
$$C_4 = [\hat{\theta} - s(\hat{\theta})\,\overline{q}_n^*(\alpha),\ \hat{\theta} + s(\hat{\theta})\,\overline{q}_n^*(\alpha)],$$
where $\overline{q}_n^*(\alpha)$ is the bootstrap critical value for a two-sided hypothesis test. $C_4$ is called the symmetric percentile-t interval. It is designed to work well since
$$\Pr\left(\theta_0 \in C_4\right) = \Pr\left(\hat{\theta} - s(\hat{\theta})\,\overline{q}_n^*(\alpha) \le \theta_0 \le \hat{\theta} + s(\hat{\theta})\,\overline{q}_n^*(\alpha)\right) = \Pr\left(|T_n(\theta_0)| < \overline{q}_n^*(\alpha)\right) \simeq \Pr\left(|T_n(\theta_0)| < \overline{q}_n(\alpha)\right) = 1 - \alpha.$$

If $\theta$ is a vector, then to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$ at size $\alpha$, we would use a Wald statistic
$$W_n(\theta) = n\left(\hat{\theta} - \theta\right)' \hat{V}^{-1}\left(\hat{\theta} - \theta\right)$$
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \ge q_n(\alpha)$, where $q_n(\alpha)$ is the $(1-\alpha)\%$ quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \ge q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $(1-\alpha)\%$ quantile of the distribution of
$$W_n^* = n\left(\hat{\theta}^* - \hat{\theta}\right)' \hat{V}^{*-1}\left(\hat{\theta}^* - \hat{\theta}\right).$$
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\hat{\theta}^* - \hat{\theta}$, not $\hat{\theta}^* - \theta_0$. [This is a typical mistake made by practitioners.]
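A sketch of the scalar symmetric percentile-t interval $C_4$ (Python/numpy; the argument names are mine, and the bootstrap-sample standard errors are assumed to have been computed in the resampling loop):

```python
import numpy as np

def symmetric_percentile_t(theta_hat, se_hat, theta_star, se_star, alpha=0.05):
    """Symmetric percentile-t interval C4 (a sketch): absolute bootstrap t-statistics are
    centered at theta_hat and studentized by the bootstrap-sample standard errors."""
    abs_t = np.abs(theta_star - theta_hat) / se_star
    q_bar = np.quantile(abs_t, 1 - alpha)          # bootstrap critical value for the two-sided test
    return (theta_hat - se_hat * q_bar, theta_hat + se_hat * q_bar)
```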

10.8 Asymptotic Expansions

Let $T_n$ be a statistic such that
$$T_n \overset{d}{\to} N(0, v^2). \qquad (10.3)$$
In some cases, such as when $T_n$ is a t-ratio, then $v = 1$. In other cases $v$ is unknown. Equivalently, writing $T_n \sim G_n(u, F)$, then for each $u$ and $F$
$$\lim_{n\to\infty} G_n(u, F) = \Phi\left(\frac{u}{v}\right),$$
or
$$G_n(u, F) = \Phi\left(\frac{u}{v}\right) + o(1). \qquad (10.4)$$
While (10.4) says that $G_n$ converges to $\Phi\left(\frac{u}{v}\right)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.

The following notation will be helpful. Let $a_n$ be a sequence.

Definition 10.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.

Definition 10.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.

Definition 10.8.3 $a_n = o(n^{-r})$ if $n^r |a_n| \to 0$ as $n \to \infty$.

We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.

Theorem 10.8.1 Under regularity conditions and (10.3),
$$G_n(u, F) = \Phi\left(\frac{u}{v}\right) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) + O(n^{-3/2})$$
uniformly over $u$, where $g_1$ is an even function of $u$ and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.

We can interpret Theorem 10.8.1 as follows. First, $G_n(u, F)$ converges to the normal limit at rate $n^{1/2}$. To a second order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{v}\right) + n^{-1/2} g_1(u, F).$$
Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,
$$G_n(u, F) \approx \Phi\left(\frac{u}{v}\right) + n^{-1/2} g_1(u, F) + n^{-1} g_2(u, F)$$
which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).

[Side Note: When $T_n = \sqrt{n}\left(\overline{X}_n - \mu\right)/\sigma$, a standardized sample mean, then
$$g_1(u) = -\frac{1}{6}\kappa_3\left(u^2 - 1\right)\phi(u)$$
$$g_2(u) = -\left(\frac{1}{24}\kappa_4\left(u^3 - 3u\right) + \frac{1}{72}\kappa_3^2\left(u^5 - 10u^3 + 15u\right)\right)\phi(u)$$
where $\phi(u)$ is the standard normal density and
$$\kappa_3 = E(X - \mu)^3/\sigma^3$$
$$\kappa_4 = E(X - \mu)^4/\sigma^4 - 3,$$
the standardized skewness and excess kurtosis of the distribution of $X$. Note that when $\kappa_3 = 0$ and $\kappa_4 = 0$, then $g_1 = 0$ and $g_2 = 0$, so the second-order Edgeworth expansion corresponds to the normal distribution.]

Francis Edgeworth

Francis Ysidro Edgeworth (1845-1926) of Ireland, founding editor of the Economic Journal, was a profound economic and statistical theorist, developing the theories of indifference curves and asymptotic expansions. He also could be viewed as the first econometrician due to his early use of mathematical statistics in the study of economic data.

10.9 One-Sided Tests

Using the expansion of Theorem 10.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio $T_n$. An asymptotic test is based on $\Phi(u)$.

To the second order, the exact distribution is
$$\Pr(T_n < u) = G_n(u, F_0) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1})$$
since $v = 1$. The difference is
$$\Phi(u) - G_n(u, F_0) = -\frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1}) = O(n^{-1/2}).$$

A bootstrap test is based on $G_n^*(u)$, which from Theorem 10.8.1 has the expansion
$$G_n^*(u) = G_n(u, F_n) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_n) + O(n^{-1}).$$
Because $\Phi(u)$ appears in both expansions, the difference between the bootstrap distribution and the true distribution is
$$G_n^*(u) - G_n(u, F_0) = \frac{1}{n^{1/2}}\left(g_1(u, F_n) - g_1(u, F_0)\right) + O(n^{-1}).$$

Since $F_n$ converges to $F$ at rate $\sqrt{n}$, and $g_1$ is continuous with respect to $F$, the difference $\left(g_1(u, F_n) - g_1(u, F_0)\right)$ converges to 0 at rate $\sqrt{n}$. Heuristically,
$$g_1(u, F_n) - g_1(u, F_0) \approx \frac{\partial}{\partial F} g_1(u, F_0)\left(F_n - F_0\right) = O(n^{-1/2}).$$
The "derivative" $\frac{\partial}{\partial F} g_1(u, F)$ is only heuristic, as $F$ is a function. We conclude that
$$G_n^*(u) - G_n(u, F_0) = O(n^{-1}),$$
or
$$\Pr\left(T_n^* \le u\right) = \Pr\left(T_n \le u\right) + O(n^{-1}),$$
which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.

10.10 Symmetric Two-Sided Tests

If a random variable $y$ has distribution function $H(u) = \Pr(y \le u)$, then the random variable $|y|$ has distribution function
$$\overline{H}(u) = H(u) - H(-u)$$
since
$$\Pr\left(|y| \le u\right) = \Pr\left(-u \le y \le u\right) = \Pr\left(y \le u\right) - \Pr\left(y \le -u\right) = H(u) - H(-u).$$
For example, if $Z \sim N(0, 1)$, then $|Z|$ has distribution function
$$\overline{\Phi}(u) = \Phi(u) - \Phi(-u) = 2\Phi(u) - 1.$$

Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function
$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F).$$
A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \overset{d}{\to} Z$, then $|T_n| \overset{d}{\to} |Z| \sim \overline{\Phi}$. Thus asymptotic critical values are taken from the $\overline{\Phi}$ distribution, and exact critical values are taken from the $\overline{G}_n(u, F_0)$ distribution. From Theorem 10.8.1, we can calculate that
$$\overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F)$$
$$= \left(\Phi(u) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F)\right) - \left(\Phi(-u) + \frac{1}{n^{1/2}} g_1(-u, F) + \frac{1}{n} g_2(-u, F)\right) + O(n^{-3/2})$$
$$= \overline{\Phi}(u) + \frac{2}{n} g_2(u, F) + O(n^{-3/2}), \qquad (10.5)$$
where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is
$$\overline{\Phi}(u) - \overline{G}_n(u, F_0) = -\frac{2}{n} g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}).$$

Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.

Applying (10.5) to the bootstrap distribution, we find
$$\overline{G}_n^*(u) = \overline{G}_n(u, F_n) = \overline{\Phi}(u) + \frac{2}{n} g_2(u, F_n) + O(n^{-3/2}).$$
Thus the difference between the bootstrap and the exact distribution is
$$\overline{G}_n^*(u) - \overline{G}_n(u, F_0) = \frac{2}{n}\left(g_2(u, F_n) - g_2(u, F_0)\right) + O(n^{-3/2}) = O(n^{-3/2}),$$
the last equality because $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_2$ is continuous in $F$. Another way of writing this is
$$\Pr\left(|T_n^*| < u\right) = \Pr\left(|T_n| < u\right) + O(n^{-3/2})$$
so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than the one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.

The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (Reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.

10.11 Percentile Confidence Intervals

Let $T_n = \sqrt{n}\left(\hat{\theta} - \theta_0\right)$. We know that $T_n \overset{d}{\to} N(0, V)$, which is not pivotal, as it depends on the unknown $V$. Theorem 10.8.1 shows that a first-order approximation is
$$G_n(u, F) = \Phi\left(\frac{u}{v}\right) + O(n^{-1/2}),$$
where $v = \sqrt{V}$, and for the bootstrap
$$G_n^*(u) = G_n(u, F_n) = \Phi\left(\frac{u}{\hat{v}}\right) + O(n^{-1/2}),$$
where $\hat{v}$ is the value of $v$ implied by $F_n$. Hence the difference is
$$G_n^*(u) - G_n(u, F_0) = \Phi\left(\frac{u}{\hat{v}}\right) - \Phi\left(\frac{u}{v}\right) + O(n^{-1/2}) = O(n^{-1/2}),$$
since $\hat{v} - v = O_p(n^{-1/2})$.

The good news is that the percentile-type methods (if appropriately used) can yield $\sqrt{n}$-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.

The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.

Based on these arguments, the theoretical literature (e.g. Hall, 1992, Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.

10.12 Bootstrap Methods for Regression Models

The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss bootstrap methods appropriate for the linear regression model
$$y_i = x_i'\beta + e_i$$
$$E(e_i \mid x_i) = 0.$$

The non-parametric bootstrap resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies
$$y_i^* = x_i^{*\prime}\hat{\beta} + e_i^*$$
$$E(x_i^* e_i^*) = 0$$
but generally
$$E(e_i^* \mid x_i^*) \neq 0.$$
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true.)

One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.

To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat{\beta} + e_i^*$. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat{e}_1, \ldots, \hat{e}_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat{\sigma}^2)$.

For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, \ldots, x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really fixed or random.

The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that
$$E(e_i^* \mid x_i) = 0$$
$$E(e_i^{*2} \mid x_i) = \hat{e}_i^2$$
$$E(e_i^{*3} \mid x_i) = \hat{e}_i^3.$$
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
$$\Pr\left(e_i^* = \left(\frac{1+\sqrt{5}}{2}\right)\hat{e}_i\right) = \frac{\sqrt{5}-1}{2\sqrt{5}}$$
$$\Pr\left(e_i^* = \left(\frac{1-\sqrt{5}}{2}\right)\hat{e}_i\right) = \frac{\sqrt{5}+1}{2\sqrt{5}}.$$
For each $x_i$, you sample $e_i^*$ using this two-point distribution.
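A sketch of one wild-bootstrap replication for the linear model, using this two-point distribution (Python/numpy; the function names are mine):

```python
import numpy as np

def wild_bootstrap_sample(X, beta_hat, e_hat, rng):
    """Generate one wild-bootstrap sample y* = X beta_hat + e*, where e* applies the
    two-point distribution above to the OLS residuals (a sketch)."""
    s5 = np.sqrt(5.0)
    a, b = (1 + s5) / 2, (1 - s5) / 2           # the two support points (multiplying e_hat_i)
    p_a = (s5 - 1) / (2 * s5)                   # probability of the first point
    draw = np.where(rng.random(len(e_hat)) < p_a, a, b)
    e_star = draw * e_hat                       # E(e*|x)=0, E(e*^2|x)=e_hat^2, E(e*^3|x)=e_hat^3
    return X @ beta_hat + e_star

# usage sketch: regress y* on X in each replication and collect the bootstrap estimates
```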


Exercises

Exercise 10.1 Let $F_n(x)$ denote the EDF of a random sample. Show that
$$\sqrt{n}\left(F_n(x) - F_0(x)\right) \overset{d}{\to} N\left(0,\, F_0(x)\left(1 - F_0(x)\right)\right).$$

Exercise 10.2 Take a random sample $\{y_1, \ldots, y_n\}$ with $\mu = E y_i$ and $\sigma^2 = \mathrm{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \overline{y}_n$. Find the population moments $E T_n$ and $\mathrm{var}(T_n)$. Let $\{y_1^*, \ldots, y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \overline{y}_n^*$ be its sample mean. Find the bootstrap moments $E T_n^*$ and $\mathrm{var}(T_n^*)$.

Exercise 10.3 Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat{\beta}$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat{e} = y - X\hat{\beta}$ the OLS residuals.

(a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat{e}_i) : i = 1, \ldots, n\}$. That is, draw a random integer $i'$ from $[1, 2, \ldots, n]$, and set $x^* = x_{i'}$ and $e^* = \hat{e}_{i'}$. Set $y^* = x^{*\prime}\hat{\beta} + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.

(b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat{\beta}^*$ and any other statistic of interest.

Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.

Exercise 10.4 Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate $\hat{\theta}^*$ on these samples and then calculate
$$T_n^* = (\hat{\theta}^* - \hat{\theta})/s(\hat{\theta}),$$
where $s(\hat{\theta})$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval
$$C = \left[\hat{\theta} - s(\hat{\theta})\, q_n^*(.95),\ \hat{\theta} - s(\hat{\theta})\, q_n^*(.05)\right].$$
Show that $C$ exactly equals the Alternative percentile interval (not the percentile-t interval).

Exercise 10.5 You want to test $H_0: \theta = 0$ against $H_1: \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat{\theta}/s(\hat{\theta}) > c$ where $c$ is picked so that Type I error is $\alpha$. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat{\theta}^*$ on these samples and then calculate
$$T_n^* = \hat{\theta}^*/s(\hat{\theta}^*).$$
Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat{\theta}/s(\hat{\theta}) > q_n^*(.95)$. What is wrong with this procedure?

Exercise 10.6 Suppose that in an application, $\hat{\theta} = 1.2$ and $s(\hat{\theta}) = .2$. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat{\theta}^*$ is calculated on each sample. The $\hat{\theta}^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat{\theta}^*$ are .75 and 1.3, respectively.

(a) Report the 95% Efron Percentile interval for $\theta$.

(b) Report the 95% Alternative Percentile interval for $\theta$.

(c) With the given information, can you report the 95% Percentile-t interval for $\theta$?

Exercise 10.7 The datafile hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.

Chapter 11

NonParametric Regression

11.1 Introduction

When components of $x$ are continuously distributed then the conditional expectation function
$$E(y_i \mid x_i = x) = m(x)$$
can take any nonlinear shape. Unless an economic model restricts the form of $m(x)$ to a parametric function, the CEF is inherently nonparametric, meaning that the function $m(x)$ is an element of an infinite-dimensional class. In this situation, how can we estimate $m(x)$? What is a suitable method, if we acknowledge that $m(x)$ is nonparametric?

There are two main classes of nonparametric regression estimators: kernel estimators, and series estimators. In this chapter we introduce kernel methods.

To get started, suppose that there is a single real-valued regressor $x_i$. We consider the case of vector-valued regressors later.

11.2 Binned Estimator

For clarity, fix the point $x$ and consider estimation of the single point $m(x)$. This is the mean of $y_i$ for random pairs $(y_i, x_i)$ such that $x_i = x$. If the distribution of $x_i$ were discrete then we could estimate $m(x)$ by taking the average of the sub-sample of observations $y_i$ for which $x_i = x$. But when $x_i$ is continuous then the probability is zero that $x_i$ exactly equals any specific $x$. So there is no sub-sample of observations with $x_i = x$ and we cannot simply take the average of the corresponding $y_i$ values. However, if the CEF $m(x)$ is continuous, then it should be possible to get a good approximation by taking the average of the observations for which $x_i$ is close to $x$, perhaps for the observations for which $|x_i - x| \le h$ for some small $h > 0$. Later we will call $h$ a bandwidth.

This estimator can be written as
$$\hat{m}(x) = \frac{\sum_{i=1}^{n} 1\left(|x_i - x| \le h\right) y_i}{\sum_{i=1}^{n} 1\left(|x_i - x| \le h\right)}. \qquad (11.1)$$
Alternatively, (11.1) can be written as
$$\hat{m}(x) = \sum_{i=1}^{n} w_i(x) y_i$$
where
$$w_i(x) = \frac{1\left(|x_i - x| \le h\right)}{\sum_{j=1}^{n} 1\left(|x_j - x| \le h\right)}. \qquad (11.2)$$
Notice that $\sum_{i=1}^{n} w_i(x) = 1$, so $\hat{m}(x)$ is a weighted average of the $y_i$.

It is possible that for some values of $x$ there are no values of $x_i$ such that $|x_i - x| \le h$, which implies that $\sum_{i=1}^{n} 1\left(|x_i - x| \le h\right) = 0$. In this case the estimator (11.1) is undefined for those values of $x$.

To visualize, Figure 11.1 displays a scatter plot of 100 observations on a random pair $(y_i, x_i)$ generated by simulation.¹ (The observations are displayed as the open circles.) The estimator (11.1) of the CEF $m(x)$ at $x = 2$ with $h = 1/2$ is the average of the $y_i$ for the observations such that $x_i$ falls in the interval $[1.5 \le x_i \le 2.5]$. (Our choice of $h = 1/2$ is somewhat arbitrary. Selection of $h$ will be discussed later.) The estimate is $\hat{m}(2) = 5.16$ and is shown on Figure 11.1 by the first solid square. We repeat the calculation (11.1) for $x = 3$, 4, 5, and 6, which is equivalent to partitioning the support of $x_i$ into the regions $[1.5, 2.5]$, $[2.5, 3.5]$, $[3.5, 4.5]$, $[4.5, 5.5]$, and $[5.5, 6.5]$. These partitions are shown in Figure 11.1 by the vertical dotted lines, and the estimates (11.1) by the solid squares.

These estimates $\hat{m}(x)$ can be viewed as estimates of the CEF $m(x)$. Sometimes called a binned estimator, this is a step-function approximation to $m(x)$ and is displayed in Figure 11.1 by the horizontal lines passing through the solid squares. This estimate roughly tracks the central tendency of the scatter of the observations $(y_i, x_i)$. However, the huge jumps in the estimated step function at the edges of the partitions are disconcerting, counter-intuitive, and clearly an artifact of the discrete binning.

If we take another look at the estimation formula (11.1) there is no reason why we need to evaluate (11.1) only on a coarse grid. We can evaluate $\hat{m}(x)$ for any set of values of $x$. In particular, we can evaluate (11.1) on a fine grid of values of $x$ and thereby obtain a smoother estimate of the CEF. This estimator with $h = 1/2$ is displayed in Figure 11.1 with the solid line. This is a generalization of the binned estimator and by construction passes through the solid squares.

The bandwidth $h$ determines the degree of smoothing. Larger values of $h$ increase the width of the bins in Figure 11.1, thereby increasing the smoothness of the estimate $\hat{m}(x)$ as a function of $x$. Smaller values of $h$ decrease the width of the bins, resulting in less smooth conditional mean estimates.

¹The distribution is $x_i \sim N(4, 1)$ and $y_i \mid x_i \sim N\left(m(x_i), \sigma^2\right)$ with $m(x) = 10\ln(x)$.

11.3 Kernel Regression

One deficiency with the estimator (11.1) is that it is a step function in $x$, as it is discontinuous at each observation $x = x_i$. That is why its plot in Figure 11.1 is jagged. The source of the discontinuity is that the weights $w_i(x)$ are constructed from indicator functions, which are themselves discontinuous. If instead the weights are constructed from continuous functions then the CEF estimator will also be continuous in $x$.

To generalize (11.1) it is useful to write the weights $1\left(|x_i - x| \le h\right)$ in terms of the uniform density function on $[-1, 1]$
$$k_0(u) = \frac{1}{2} 1\left(|u| \le 1\right).$$
Then
$$1\left(|x_i - x| \le h\right) = 1\left(\left|\frac{x_i - x}{h}\right| \le 1\right) = 2 k_0\left(\frac{x_i - x}{h}\right),$$
and (11.1) can be written as
$$\hat{m}(x) = \frac{\sum_{i=1}^{n} k_0\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} k_0\left(\frac{x_i - x}{h}\right)}. \qquad (11.3)$$

The uniform density $k_0(u)$ is a special case of what is known as a kernel function. A kernel function $k(u)$ satisfies $0 \le k(u) < \infty$, $k(u) = k(-u)$, $\int_{-\infty}^{\infty} k(u)\,du = 1$ and $\sigma_k^2 = \int_{-\infty}^{\infty} u^2 k(u)\,du < \infty$.

Essentially, a kernel function is a probability density function which is bounded and symmetric about zero. A generalization of (11.1) is obtained by replacing the uniform kernel with any other kernel function:
$$\hat{m}(x) = \frac{\sum_{i=1}^{n} k\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} k\left(\frac{x_i - x}{h}\right)}. \qquad (11.4)$$
In the weighted-average notation (11.2), the weights are
$$w_i(x) = \frac{k\left(\frac{x_i - x}{h}\right)}{\sum_{j=1}^{n} k\left(\frac{x_j - x}{h}\right)}.$$

The estimator (11.4) is known as the Nadaraya-Watson estimator, the kernel regression estimator, or the local constant estimator.

The bandwidth $h$ plays the same role in (11.4) as it does in (11.1). Namely, larger values of $h$ will result in estimates $\hat{m}(x)$ which are smoother in $x$, and smaller values of $h$ will result in estimates which are more erratic. It might be helpful to consider the two extreme cases $h \to 0$ and $h \to \infty$. As $h \to 0$ we can see that $\hat{m}(x_i) \to y_i$ (if the values of $x_i$ are unique), so that $\hat{m}(x)$ is simply the scatter of $y_i$ on $x_i$. In contrast, as $h \to \infty$ then for all $x$, $\hat{m}(x) \to \overline{y}$, the sample mean, so that the nonparametric CEF estimate is a constant function. For intermediate values of $h$, $\hat{m}(x)$ will lie between these two extreme cases.
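A minimal sketch of the Nadaraya-Watson estimator (11.4) in Python/numpy, using the Epanechnikov kernel described below (the function names are mine):

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel k1(u) = (3/4)(1 - u^2) 1(|u| <= 1)
    return 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)

def nw_estimator(x0, x, y, h, kernel=epanechnikov):
    """Nadaraya-Watson (local constant) estimate of m(x0) with bandwidth h (a sketch).
    Returns nan if no observation receives positive weight."""
    k = kernel((x - x0) / h)
    return np.sum(k * y) / np.sum(k) if np.sum(k) > 0 else np.nan

# usage sketch: evaluate m_hat on a fine grid of points
# grid = np.linspace(x.min(), x.max(), 200)
# m_hat = np.array([nw_estimator(g, x, y, h=0.5) for g in grid])
```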

209

The uniform density is not a good kernel choice as it produces discontinuous CEF estimates:

To obtain a continuous CEF estimate m(x)

b

it is necessary for the kernel k(u) to be continuous.

The two most commonly used choices are the Epanechnikov kernel

k1 (u) =

3

1

4

u2 1 (juj

1)

u2

2

1

k (u) = p exp

2

For computation of the CEF estimate (11.4) the scale of the kernel is not important so long as

u

the bandwidth is selected appropriately. That is, for any b > 0; kb (u) = b 1 k

is a valid kernel

b

function with the identical shape as k(u): Kernel regression with the kernel k(u) and bandwidth h

is identical to kernel regression with the kernel kb (u) and bandwidth h=b:

The estimate (11.4) using the Epanechnikov kernel and h = 1=2 is also displayed in Figure 11.1

with the dashed line. As you can see, this estimator appears to be much smoother than that using

the uniform kernel.

Two important constants associated with a kernel function k(u) are its variance 2k and roughness Rk , which are dened as

Z 1

2

=

u2 k(u)du

(11.5)

k

1

Z 1

Rk =

k(u)2 du:

(11.6)

1

Some common kernels and their roughness and variance values are reported in Table 9.1.

11.4

Kernel

Uniform

Epanechnikov

Biweight

Triweight

Equation

k0 (u) = 21 1 (juj 1)

k1 (u) = 34 1 u2 1 (juj 1)

2

k2 (u) = 15

u2 1 (juj 1)

16 1

3

35

k3 (u) = 32

1 u2 1 (juj 1)

Gaussian

k (u) =

p1

2

u2

2

exp

Rk

1=2

3=5

5=7

350=429

p

1= (2 )

2

k

1=3

1=5

1=7

1=9

1

The Nadaraya-Watson (NW) estimator is often called a local constant estimator as it locally (about $x$) approximates the CEF $m(x)$ as a constant function. One way to see this is to observe that $\hat{m}(x)$ solves the minimization problem
$$\hat{m}(x) = \underset{\alpha}{\operatorname{argmin}} \sum_{i=1}^{n} k\left(\frac{x_i - x}{h}\right)\left(y_i - \alpha\right)^2.$$
This is a weighted regression of $y_i$ on an intercept only. Without the weights, this estimation problem reduces to the sample mean. The NW estimator generalizes this to a local mean.

This interpretation suggests that we can construct alternative nonparametric estimators of the CEF by alternative local approximations. Many such local approximations are possible. A popular choice is the local linear (LL) approximation. Instead of approximating $m(x)$ locally as a constant, the local linear approximation approximates the CEF locally by a linear function, and estimates this local approximation by locally weighted least squares.

Specifically, for each $x$ we solve the following minimization problem
$$\left\{\hat{\alpha}(x), \hat{\beta}(x)\right\} = \underset{\alpha, \beta}{\operatorname{argmin}} \sum_{i=1}^{n} k\left(\frac{x_i - x}{h}\right)\left(y_i - \alpha - \beta\left(x_i - x\right)\right)^2.$$
The local linear estimator of $m(x)$ is the estimated intercept
$$\hat{m}(x) = \hat{\alpha}(x)$$
and the local linear estimator of the regression derivative $\nabla m(x)$ is the estimated slope coefficient
$$\widehat{\nabla m}(x) = \hat{\beta}(x).$$

In matrix notation, define
$$z_i(x) = \begin{pmatrix} 1 \\ x_i - x \end{pmatrix}$$
and
$$k_i(x) = k\left(\frac{x_i - x}{h}\right).$$
Then
$$\begin{pmatrix} \hat{\alpha}(x) \\ \hat{\beta}(x) \end{pmatrix} = \left(\sum_{i=1}^{n} k_i(x)\, z_i(x)\, z_i(x)'\right)^{-1} \sum_{i=1}^{n} k_i(x)\, z_i(x)\, y_i = \left(Z'KZ\right)^{-1} Z'Ky,$$
where $Z$ is the $n \times 2$ matrix with rows $z_i(x)'$ and $K = \operatorname{diag}\{k_1(x), \ldots, k_n(x)\}$.

To visualize, Figure 11.2 displays the scatter plot of the same 100 observations from Figure 11.1, divided into three regions depending on the regressor $x_i$: $[1, 3]$, $[3, 5]$, $[5, 7]$. A linear regression is fit to the observations in each region, with the observations weighted by the Epanechnikov kernel with $h = 1$. The three fitted regression lines are displayed by the three straight solid lines. The values of these regression lines at $x = 2$, $x = 4$ and $x = 6$, respectively, are the local linear estimates $\hat{m}(x)$ at $x = 2$, 4, and 6. This estimation is repeated for all $x$ in the support of the regressors, and plotted as the continuous solid line in Figure 11.2.

One interesting feature is that as $h \to \infty$, the LL estimator approaches the full-sample linear least-squares estimator $\hat{m}(x) \to \hat{\alpha} + \hat{\beta} x$. That is because as $h \to \infty$ all observations receive equal weight regardless of $x$. In this sense we can see that the LL estimator is a flexible generalization of the linear OLS estimator.

Which nonparametric estimator should you use in practice: NW or LL? The theoretical literature shows that neither strictly dominates the other, but we can describe contexts where one or the other does better. Roughly speaking, the NW estimator performs better than the LL estimator when $m(x)$ is close to a flat line, but the LL estimator performs better when $m(x)$ is meaningfully non-constant. The LL estimator also performs better for values of $x$ near the boundary of the support of $x_i$.
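A sketch of the local linear estimator by kernel-weighted least squares (Python/numpy; the kernel is defined inline and the function names are mine):

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimates of (m(x0), m'(x0)) with bandwidth h (a sketch).
    Minimizes sum_i k((x_i - x0)/h) (y_i - a - b (x_i - x0))^2 over (a, b)."""
    u = (x - x0) / h
    k = 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)         # Epanechnikov weights k_i(x0)
    Z = np.column_stack([np.ones_like(x), x - x0])       # z_i(x0) = (1, x_i - x0)'
    ZKZ = Z.T @ (Z * k[:, None])
    ZKy = Z.T @ (k * y)
    alpha_hat, beta_hat = np.linalg.solve(ZKZ, ZKy)      # (Z'KZ)^{-1} Z'Ky
    return alpha_hat, beta_hat                           # intercept = m_hat(x0), slope = derivative estimate
```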

11.5 Nonparametric Residuals and Regression Fit

The fitted regression at $x = x_i$ is $\hat{m}(x_i)$ and the fitted residual is
$$\hat{e}_i = y_i - \hat{m}(x_i).$$
As a general rule, but especially when the bandwidth $h$ is small, it is hard to view $\hat{e}_i$ as a good measure of the fit of the regression. As $h \to 0$ then $\hat{m}(x_i) \to y_i$ and therefore $\hat{e}_i \to 0$. This clearly indicates overfitting as the true error is not zero. In general, since $\hat{m}(x_i)$ is a local average which includes $y_i$, the fitted value will be necessarily close to $y_i$ and the residual $\hat{e}_i$ small, and the degree of this overfitting increases as $h$ decreases.

A standard solution is to measure the fit of the regression at $x = x_i$ by re-estimating the model excluding the $i$'th observation. For Nadaraya-Watson regression, the leave-one-out estimator of $m(x)$ excluding observation $i$ is
$$\tilde{m}_{-i}(x) = \frac{\sum_{j \neq i} k\left(\frac{x_j - x}{h}\right) y_j}{\sum_{j \neq i} k\left(\frac{x_j - x}{h}\right)}.$$
Notationally, the $-i$ subscript is used to indicate that the $i$'th observation is omitted.

The leave-one-out predicted value for $y_i$ at $x = x_i$ equals
$$\tilde{y}_i = \tilde{m}_{-i}(x_i) = \frac{\sum_{j \neq i} k\left(\frac{x_j - x_i}{h}\right) y_j}{\sum_{j \neq i} k\left(\frac{x_j - x_i}{h}\right)}.$$
The leave-one-out residuals (or prediction errors) are the difference between the leave-one-out predicted values and the actual observation
$$\tilde{e}_i = y_i - \tilde{y}_i.$$
Since $\tilde{y}_i$ is not a function of $y_i$, there is no tendency for $\tilde{y}_i$ to overfit for small $h$. Consequently, $\tilde{e}_i$ is a good measure of the fit of the estimated nonparametric regression.

Similarly, the leave-one-out local-linear residual is $\tilde{e}_i = y_i - \tilde{\alpha}_{-i}$ with
$$\begin{pmatrix} \tilde{\alpha}_{-i} \\ \tilde{\beta}_{-i} \end{pmatrix} = \left(\sum_{j \neq i} k_{ij}\, z_{ij}\, z_{ij}'\right)^{-1} \sum_{j \neq i} k_{ij}\, z_{ij}\, y_j,$$
$$z_{ij} = \begin{pmatrix} 1 \\ x_j - x_i \end{pmatrix},$$
and
$$k_{ij} = k\left(\frac{x_j - x_i}{h}\right).$$

11.6 Cross-Validation Bandwidth Selection

The choice of bandwidth $h$ is crucial: as $h$ increases, the kernel regression estimators (both NW and LL) become more smooth, ironing out the bumps and wiggles. This reduces estimation variance but at the cost of increased bias and oversmoothing. As $h$ decreases the estimators become more wiggly, erratic, and noisy. It is desirable to select $h$ to trade-off these features. How can this be done systematically?

To be explicit about the dependence of the estimator on the bandwidth, let us write the estimator of $m(x)$ with a given bandwidth $h$ as $\hat{m}(x; h)$, and our discussion will apply equally to the NW and LL estimators.

Ideally, we would like to select $h$ to minimize the mean-squared error (MSE) of $\hat{m}(x; h)$ as an estimate of $m(x)$. For a given value of $x$ the MSE is
$$MSE_n(x, h) = E\left(\hat{m}(x; h) - m(x)\right)^2.$$
We are typically interested in estimating $m(x)$ for all values in the support of $x$. A common measure for the average fit is the integrated MSE
$$IMSE_n(h) = \int MSE_n(x, h) f_x(x)\,dx = \int E\left(\hat{m}(x; h) - m(x)\right)^2 f_x(x)\,dx$$
where $f_x(x)$ is the marginal density of $x_i$. Notice that we have defined the IMSE as an integral with respect to the density $f_x(x)$. Other weight functions could be used, but it turns out that this is a convenient choice.

The IMSE is closely related with the MSFE of Section 4.9. Let $(y_{n+1}, x_{n+1})$ be out-of-sample observations (and thus independent of the sample) and consider predicting $y_{n+1}$ given $x_{n+1}$ and the nonparametric estimate $\hat{m}(x; h)$. The natural point estimate for $y_{n+1}$ is $\hat{m}(x_{n+1}; h)$ which has mean-squared forecast error
$$MSFE_n(h) = E\left(y_{n+1} - \hat{m}(x_{n+1}; h)\right)^2 = E\left(e_{n+1} + m(x_{n+1}) - \hat{m}(x_{n+1}; h)\right)^2 = \sigma^2 + E\left(m(x_{n+1}) - \hat{m}(x_{n+1}; h)\right)^2 = \sigma^2 + \int E\left(\hat{m}(x; h) - m(x)\right)^2 f_x(x)\,dx$$
where the final equality uses the fact that $x_{n+1}$ is independent of $\hat{m}(x; h)$. We thus see that
$$MSFE_n(h) = \sigma^2 + IMSE_n(h).$$
Since $\sigma^2$ is a constant independent of the bandwidth $h$, $MSFE_n(h)$ and $IMSE_n(h)$ are equivalent measures of the fit of the nonparametric regression.

The optimal bandwidth $h$ is the value which minimizes $IMSE_n(h)$ (or equivalently $MSFE_n(h)$). While these functions are unknown, we learned in Theorem 4.9.1 that (at least in the case of linear regression) $MSFE_n$ can be estimated by the sample mean-squared prediction errors. It turns out that this fact extends to nonparametric regression. The nonparametric leave-one-out residuals are
$$\tilde{e}_i(h) = y_i - \tilde{m}_{-i}(x_i; h)$$
where we are being explicit about the dependence on the bandwidth $h$. The mean squared leave-one-out residuals is
$$CV(h) = \frac{1}{n}\sum_{i=1}^{n} \tilde{e}_i(h)^2.$$
This function of $h$ is known as the cross-validation criterion. The cross-validation bandwidth $\hat{h}$ is the value which minimizes $CV(h)$
$$\hat{h} = \underset{h \ge h_\ell}{\operatorname{argmin}}\ CV(h) \qquad (11.7)$$
for some $h_\ell > 0$. The restriction $h \ge h_\ell$ is imposed so that $CV(h)$ is not evaluated over unreasonably small bandwidths.

There is not an explicit solution to the minimization problem (11.7), so it must be solved numerically. A typical practical method is to create a grid of values for $h$, e.g. $[h_1, h_2, \ldots, h_J]$, evaluate $CV(h_j)$ for $j = 1, \ldots, J$, and set
$$\hat{h} = \underset{h \in [h_1, h_2, \ldots, h_J]}{\operatorname{argmin}}\ CV(h).$$
Evaluation using a coarse grid is typically sufficient for practical application. Plots of $CV(h)$ against $h$ are a useful diagnostic tool to verify that the minimum of $CV(h)$ has been obtained.
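A sketch of the grid-search cross-validation procedure for the NW estimator (Python/numpy; function names are mine, and the grid should respect the lower bound $h_\ell$ so that every point has neighbors within $h$):

```python
import numpy as np

def cv_criterion(x, y, h):
    """Leave-one-out cross-validation criterion CV(h) for Nadaraya-Watson regression (a sketch)."""
    n = len(y)
    e_tilde = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                               # omit the i-th observation
        u = (x[mask] - x[i]) / h
        k = 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)           # Epanechnikov weights
        e_tilde[i] = y[i] - np.sum(k * y[mask]) / np.sum(k)    # leave-one-out prediction error
    return np.mean(e_tilde ** 2)

def cv_bandwidth(x, y, grid):
    """Select the bandwidth minimizing CV(h) over a grid of candidate values."""
    cv_values = np.array([cv_criterion(x, y, h) for h in grid])
    return grid[np.argmin(cv_values)], cv_values
```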

We said above that the cross-validation criterion is an estimator of the MSFE. This claim is based on the following result.

Theorem 11.6.1
$$E\left(CV(h)\right) = MSFE_{n-1}(h) = IMSE_{n-1}(h) + \sigma^2 \qquad (11.8)$$

Theorem 11.6.1 shows that $CV(h)$ is an unbiased estimator of $IMSE_{n-1}(h) + \sigma^2$. The first term, $IMSE_{n-1}(h)$, is the integrated MSE of the nonparametric estimator using a sample of size $n-1$. If $n$ is large, $IMSE_{n-1}(h)$ and $IMSE_n(h)$ will be nearly identical, so $CV(h)$ is essentially unbiased as an estimator of $IMSE_n(h) + \sigma^2$. Since the second term ($\sigma^2$) is unaffected by the bandwidth $h$, it is irrelevant for the problem of selection of $h$. In this sense we can view $CV(h)$ as an estimator of the IMSE, and more importantly we can view the minimizer of $CV(h)$ as an estimate of the minimizer of $IMSE_n(h)$.

To illustrate, Figure 11.3 displays the cross-validation criteria $CV(h)$ for the Nadaraya-Watson and Local Linear estimators using the data from Figure 11.1, both using the Epanechnikov kernel. The CV functions are computed on a grid with intervals 0.01. The CV-minimizing bandwidths are $h = 1.09$ for the Nadaraya-Watson estimator and $h = 1.59$ for the local linear estimator. Figure 11.3 shows the minimizing bandwidths by the arrows. It is not surprising that the CV criteria recommend a larger bandwidth for the LL estimator than for the NW estimator, as the LL employs more smoothing for a given bandwidth.

The CV criterion can also be used to select between different nonparametric estimators. The CV-selected estimator is the one with the lowest minimized CV criterion. For example, in Figure 11.3, the NW estimator has a minimized CV criterion of 16.88, while the LL estimator has a minimized CV criterion of 16.81. Since the LL estimator achieves a lower value of the CV criterion, LL is the CV-selected estimator. The difference (0.07) is small, suggesting that the two estimators are near equivalent in IMSE.

[Figure 11.3: Cross-Validation Criteria, Nadaraya-Watson Regression and Local Linear Regression]

Figure 11.4 displays the fitted CEF estimates (NW and LL) using the bandwidths selected by cross-validation. Also displayed is the true CEF $m(x) = 10\ln(x)$. Notice that the nonparametric estimators with the CV-selected bandwidths (and especially the LL estimator) track the true CEF quite well.

Proof of Theorem 11.6.1. Observe that $m(x_i) - \tilde{m}_{-i}(x_i; h)$ is a function only of $(x_1, \ldots, x_n)$ and $(e_1, \ldots, e_n)$ excluding $e_i$, and is thus uncorrelated with $e_i$. Since $\tilde{e}_i(h) = m(x_i) - \tilde{m}_{-i}(x_i; h) + e_i$, then
$$E\left(CV(h)\right) = E\left(\tilde{e}_i(h)^2\right) = E\left(e_i^2\right) + E\left(\tilde{m}_{-i}(x_i; h) - m(x_i)\right)^2 + 2 E\left(\left(\tilde{m}_{-i}(x_i; h) - m(x_i)\right) e_i\right) = \sigma^2 + E\left(\tilde{m}_{-i}(x_i; h) - m(x_i)\right)^2. \qquad (11.9)$$
The second term is an expectation over the random variables $x_i$ and $\tilde{m}_{-i}(x; h)$, which are independent as the second is not a function of the $i$'th observation. Thus taking the conditional expectation given the sample excluding the $i$'th observation, this is the expectation over $x_i$ only, which is the integral with respect to its density
$$E_{-i}\left(\tilde{m}_{-i}(x_i; h) - m(x_i)\right)^2 = \int \left(\tilde{m}_{-i}(x; h) - m(x)\right)^2 f_x(x)\,dx.$$
Taking the unconditional expectation yields
$$E\left(\tilde{m}_{-i}(x_i; h) - m(x_i)\right)^2 = E\int \left(\tilde{m}_{-i}(x; h) - m(x)\right)^2 f_x(x)\,dx = IMSE_{n-1}(h)$$
where this is the IMSE of a sample of size $n-1$ as the estimator $\tilde{m}_{-i}$ uses $n-1$ observations. Combined with (11.9) we obtain (11.8), as desired.

11.7 Asymptotic Distribution

There is no finite sample distribution theory for kernel estimators, but there is a well developed asymptotic distribution theory. The theory is based on the approximation that the bandwidth $h$ decreases to zero as the sample size $n$ increases. This means that the smoothing is increasingly localized as the sample size increases. So long as the bandwidth does not decrease to zero too quickly, the estimator can be shown to be asymptotically normal, but with a non-trivial bias.

Let $f_x(x)$ denote the marginal density of $x_i$ and $\sigma^2(x) = E\left(e_i^2 \mid x_i = x\right)$ denote the conditional variance of $e_i = y_i - m(x_i)$.

Theorem 11.7.1 Let $\hat{m}(x)$ denote either the Nadaraya-Watson or Local Linear estimator of $m(x)$. If $x$ is interior to the support of $x_i$ and $f_x(x) > 0$, then as $n \to \infty$ and $h \to 0$ such that $nh \to \infty$,
$$\sqrt{nh}\left(\hat{m}(x) - m(x) - h^2 \sigma_k^2 B(x)\right) \overset{d}{\to} N\left(0, \frac{R_k \sigma^2(x)}{f_x(x)}\right) \qquad (11.10)$$
where $\sigma_k^2$ and $R_k$ are defined in (11.5) and (11.6). For the Nadaraya-Watson estimator
$$B(x) = \frac{1}{2} m''(x) + f_x(x)^{-1} f_x'(x)\, m'(x)$$
and for the Local Linear estimator
$$B(x) = \frac{1}{2} m''(x).$$

There are several interesting features about the asymptotic distribution which are noticeably different than for parametric estimators. First, the estimator converges at the rate $\sqrt{nh}$, not $\sqrt{n}$. Since $h \to 0$, $\sqrt{nh}$ diverges slower than $\sqrt{n}$, thus the nonparametric estimator converges more slowly than a parametric estimator. Second, the asymptotic distribution contains a non-negligible bias term $h^2 \sigma_k^2 B(x)$. This term asymptotically disappears since $h \to 0$. Third, the assumptions that $nh \to \infty$ and $h \to 0$ mean that the estimator is consistent for the CEF $m(x)$.

The fact that the estimator converges at the rate $\sqrt{nh}$ has led to the interpretation of $nh$ as the effective sample size. This is because the number of observations being used to construct $\hat{m}(x)$ is proportional to $nh$, not $n$ as for a parametric estimator.

It is helpful to understand that the nonparametric estimator has a reduced convergence rate because the object being estimated, $m(x)$, is nonparametric. This is harder than estimating a finite dimensional parameter, and thus comes at a cost.

Unlike parametric estimation, the asymptotic distribution of the nonparametric estimator includes a term representing the bias of the estimator. The asymptotic distribution (11.10) shows the form of this bias. Not only is it proportional to the squared bandwidth $h^2$ (the degree of smoothing), it is proportional to the function $B(x)$ which depends on the slope and curvature of the CEF $m(x)$. Interestingly, when $m(x)$ is constant then $B(x) = 0$ and the kernel estimator has no asymptotic bias. The bias is essentially increasing in the curvature of the CEF function $m(x)$. This is because the local averaging smooths $m(x)$, and the smoothing induces more bias when $m(x)$ is curved.

Theorem 11.7.1 shows that the asymptotic distributions of the NW and LL estimators are similar, with the only difference arising in the bias function $B(x)$. The bias term for the NW estimator has an extra component which depends on the first derivative of the CEF $m(x)$ while the bias term of the LL estimator is invariant to the first derivative. The fact that the bias formula for the LL estimator is simpler and is free of dependence on the first derivative of $m(x)$ suggests that the LL estimator will generally have smaller bias than the NW estimator (but this is not a precise ranking). Since the asymptotic variances in the two distributions are the same, this means that the LL estimator achieves a reduced bias without an effect on asymptotic variance. This analysis has led to the general preference for the LL estimator over the NW estimator in the nonparametrics literature.

One implication of Theorem 11.7.1 is that we can define the asymptotic MSE (AMSE) of $\hat{m}(x)$ as the squared bias plus the asymptotic variance
$$AMSE\left(\hat{m}(x)\right) = \left(h^2 \sigma_k^2 B(x)\right)^2 + \frac{R_k \sigma^2(x)}{n h f_x(x)}. \qquad (11.11)$$
In terms of rates,
$$AMSE\left(\hat{m}(x)\right) \sim h^4 + \frac{1}{nh} \qquad (11.12)$$
which means that the AMSE is dominated by the larger of $h^4$ and $(nh)^{-1}$. Notice that the bias is increasing in $h$ and the variance is decreasing in $h$. (More smoothing means more observations are used for local estimation: this increases the bias but decreases estimation variance.) To select $h$ to minimize the AMSE, these two components should balance each other. Setting $h^4 \propto (nh)^{-1}$ means setting $h \propto n^{-1/5}$. Another way to see this is to pick $h$ to minimize the right-hand-side of (11.12). The first-order condition for $h$ is
$$\frac{\partial}{\partial h}\left(h^4 + \frac{1}{nh}\right) = 4h^3 - \frac{1}{nh^2} = 0$$
which when solved for $h$ yields $h = n^{-1/5}$. What this means is that for AMSE-efficient estimation of $m(x)$, the optimal rate for the bandwidth is $h \propto n^{-1/5}$.

Theorem 11.7.2 The bandwidth which minimizes the AMSE (11.12) is of order $h \propto n^{-1/5}$. With $h \propto n^{-1/5}$ then $AMSE\left(\hat{m}(x)\right) = O\left(n^{-4/5}\right)$ and
$$\hat{m}(x) = m(x) + O_p\left(n^{-2/5}\right).$$

This result means that the bandwidth should take the form $h = c n^{-1/5}$. The optimal constant $c$ depends on the kernel $k$, the bias function $B(x)$ and the marginal density $f_x(x)$. A common mis-interpretation is to set $h = n^{-1/5}$, which is equivalent to setting $c = 1$ and is completely arbitrary. Instead, an empirical bandwidth selection rule such as cross-validation should be used in practice.

When $h = c n^{-1/5}$ we can rewrite the asymptotic distribution (11.10) as
$$n^{2/5}\left(\hat{m}(x) - m(x)\right) \overset{d}{\to} N\left(c^2 \sigma_k^2 B(x),\ \frac{R_k \sigma^2(x)}{c\, f_x(x)}\right).$$
In this representation, $\hat{m}(x)$ is asymptotically normal, but with a $n^{2/5}$ rate of convergence and non-zero mean. The asymptotic distribution depends on the constant $c$ through the bias (positively) and the variance (inversely).

The asymptotic distribution in Theorem 11.7.1 allows for the optimal rate $h = c n^{-1/5}$ but this rate is not required. In particular, consider an undersmoothing (smaller than optimal) bandwidth with rate $h = o\left(n^{-1/5}\right)$. For example, we could specify that $h = c n^{-\alpha}$ for some $c > 0$ and $1/5 < \alpha < 1$. Then $\sqrt{nh}\, h^2 = O\left(n^{(1-5\alpha)/2}\right) = o(1)$ so the bias term in (11.10) is asymptotically negligible so Theorem 11.7.1 implies
$$\sqrt{nh}\left(\hat{m}(x) - m(x)\right) \overset{d}{\to} N\left(0, \frac{R_k \sigma^2(x)}{f_x(x)}\right).$$
That is, the estimator is asymptotically normal without a bias component. Not having an asymptotic bias component is convenient for some theoretical manipulations, so many authors impose the undersmoothing condition $h = o\left(n^{-1/5}\right)$ to ensure this situation. This convenience comes at a cost. First, the resulting estimator is inefficient as its convergence rate is $O_p\left(n^{-(1-\alpha)/2}\right) > O_p\left(n^{-2/5}\right)$ since $\alpha > 1/5$. Second, the distribution theory is an inherently misleading approximation as it misses a critically key ingredient of nonparametric estimation: the trade-off between bias and variance. The approximation (11.10) is superior precisely because it contains the asymptotic bias component which is a realistic implication of nonparametric estimation. Undersmoothing assumptions should be avoided when possible.

11.8 Conditional Variance Estimation

We now consider estimation of the conditional variance

\sigma^2(x) = E\left( e_i^2 \mid x_i = x \right).

Even if the conditional mean m(x) is parametrically specified, it is natural to view \sigma^2(x) as inherently nonparametric, as economic models rarely specify the form of the conditional variance. Thus it is quite appropriate to estimate \sigma^2(x) nonparametrically.

We know that \sigma^2(x) is the CEF of e_i^2 given x_i. Therefore if e_i^2 were observed, \sigma^2(x) could be nonparametrically estimated using NW or LL regression. For example, the NW estimator is

\tilde{\sigma}^2(x) = \frac{\sum_{i=1}^n k_i(x) e_i^2}{\sum_{i=1}^n k_i(x)}.

Since the errors e_i are not observed, we need to replace them with an empirical residual, such as \hat{e}_i = y_i - \hat{m}(x_i) where \hat{m}(x) is the estimated CEF. (The latter could be a nonparametric estimator such as NW or LL, or even a parametric estimator.) Even better, use the leave-one-out prediction errors \tilde{e}_i = y_i - \hat{m}_{-i}(x_i), as these are not subject to overfitting.

With this substitution the NW estimator of the conditional variance is

\hat{\sigma}^2(x) = \frac{\sum_{i=1}^n k_i(x) \tilde{e}_i^2}{\sum_{i=1}^n k_i(x)}.    (11.13)

This estimator depends on a set of bandwidths h_1, ..., h_q, but there is no reason for the bandwidths to be the same as those used to estimate the conditional mean. Cross-validation can be used to select the bandwidths for estimation of \hat{\sigma}^2(x) separately from cross-validation for estimation of \hat{m}(x).
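As an illustration of (11.13), here is a brief Python sketch (not from the manuscript) of the two-step procedure: first compute leave-one-out prediction errors from an NW fit of the mean, then apply a second NW smoother, possibly with a different bandwidth, to the squared prediction errors. The Gaussian kernel and the bandwidth values are illustrative choices.

import numpy as np

def nw_fit(y, x, xeval, h, loo=False):
    """Nadaraya-Watson estimate of E(y | x) at the points xeval (Gaussian kernel)."""
    k = np.exp(-0.5 * ((xeval[:, None] - x[None, :]) / h) ** 2)
    if loo:
        np.fill_diagonal(k, 0.0)   # leave-one-out weights (requires xeval == x)
    return k @ y / k.sum(axis=1)

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(-1, 1, n)
y = x ** 2 + (0.2 + 0.5 * np.abs(x)) * rng.standard_normal(n)

h_mean, h_var = 0.15, 0.25                        # separate bandwidths for the two steps
e_tilde = y - nw_fit(y, x, x, h_mean, loo=True)   # leave-one-out prediction errors

xeval = np.linspace(-0.9, 0.9, 5)
sigma2_hat = nw_fit(e_tilde ** 2, x, xeval, h_var)  # NW smoother of squared errors, as in (11.13)
print(np.round(sigma2_hat, 3))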

There is one subtle difference between CEF and conditional variance estimation. The conditional variance is inherently non-negative, \sigma^2(x) \ge 0, and it is desirable for our estimator to satisfy this property. Interestingly, the NW estimator (11.13) is necessarily non-negative, since it is a smoothed average of the non-negative squared residuals, but the LL estimator is not guaranteed to be non-negative for all x. For this reason, the NW estimator is preferred for conditional variance estimation.

Fan and Yao (1998, Biometrika) derive the asymptotic distribution of the estimator (11.13). They obtain the surprising result that the asymptotic distribution of this two-step estimator is identical to that of the one-step idealized estimator \tilde{\sigma}^2(x).

11.9 Standard Errors

Theorem 11.7.1 shows the asymptotic variances of both the NW and LL nonparametric regression estimators equal

V(x) = \frac{R_k \sigma^2(x)}{f_x(x)}.

For standard errors we need an estimate of V(x). A plug-in estimate replaces the unknowns by estimates. The roughness R_k can be found from Table 9.1. The conditional variance can be estimated using (11.13). The density of x_i can be estimated using the methods from Section 21.1. Replacing these estimates into the formula for V(x) we obtain the asymptotic variance estimate

\hat{V}(x) = \frac{R_k \hat{\sigma}^2(x)}{\hat{f}_x(x)}.

The standard error for \hat{m}(x) is then

\hat{s}(x) = \sqrt{ \frac{1}{nh} \hat{V}(x) }.

Plots of the estimated CEF \hat{m}(x) can be accompanied by confidence intervals \hat{m}(x) \pm 2\hat{s}(x). These are known as pointwise confidence intervals, as they are designed to have correct coverage at each x, not uniformly in x.

One important caveat about the interpretation of nonparametric confidence intervals is that they are not centered at the true CEF m(x), but rather are centered at the biased or pseudo-true value

m^*(x) = m(x) + h^2 \sigma_k^2 B(x).

Consequently, a correct statement about the confidence interval \hat{m}(x) \pm 2\hat{s}(x) is that it asymptotically contains m^*(x) with probability 95%, not that it asymptotically contains m(x) with probability 95%. The discrepancy is that the confidence interval does not take into account the bias h^2 \sigma_k^2 B(x). Unfortunately, nothing constructive can be done about this. The bias is difficult and noisy to estimate, so making a bias-correction only inflates estimation variance and decreases overall precision. A technical trick is to assume undersmoothing h = o(n^{-1/5}), but this does not really eliminate the bias, it only assumes it away. The plain fact is that once we honestly acknowledge that the true CEF is nonparametric, it then follows that any finite sample estimate will have finite sample bias, and this bias will be inherently unknown and thus impossible to incorporate into confidence intervals.
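The plug-in standard error and pointwise bands described above can be assembled as in the following sketch. This is an illustrative implementation rather than a prescribed one: it assumes a Gaussian kernel (for which R_k = 1/(2 sqrt(pi))), a simple kernel density estimate of f_x(x), and the two-step conditional variance estimator from the previous section.

import numpy as np

def gauss_k(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def nw_ci(y, x, xeval, h, h_var):
    """NW estimate with plug-in pointwise 95% bands m_hat(x) +/- 2 s_hat(x)."""
    Rk = 1.0 / (2.0 * np.sqrt(np.pi))                 # roughness of the Gaussian kernel
    k = gauss_k((xeval[:, None] - x[None, :]) / h)
    m_hat = k @ y / k.sum(axis=1)                     # NW estimate of m(x)

    k_loo = gauss_k((x[:, None] - x[None, :]) / h)    # leave-one-out errors for the variance step
    np.fill_diagonal(k_loo, 0.0)
    e_tilde = y - k_loo @ y / k_loo.sum(axis=1)

    k_var = gauss_k((xeval[:, None] - x[None, :]) / h_var)
    sigma2 = k_var @ (e_tilde ** 2) / k_var.sum(axis=1)  # conditional variance estimate
    f_hat = k.mean(axis=1) / h                            # kernel density estimate of f_x(x)

    n = len(y)
    s_hat = np.sqrt(Rk * sigma2 / (f_hat * n * h))        # s(x) = sqrt(V_hat(x) / (n h))
    return m_hat, m_hat - 2 * s_hat, m_hat + 2 * s_hat

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.4 * rng.standard_normal(400)
m, lo, hi = nw_ci(y, x, np.array([0.25, 0.5, 0.75]), h=0.08, h_var=0.15)
print(np.round(np.c_[m, lo, hi], 3))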

11.10 Multiple Regressors

Our analysis has focused on the case of real-valued x_i for simplicity of exposition, but the methods of kernel regression extend easily to the multiple regressor case, at the cost of a reduced rate of convergence. In this section we consider the case of estimation of the conditional expectation function

E(y_i \mid x_i = x) = m(x)

when

x_i = \begin{pmatrix} x_{1i} \\ \vdots \\ x_{di} \end{pmatrix}

is a d-vector.

For any evaluation point x and observation i, define the kernel weights

k_i(x) = k\left( \frac{x_{1i} - x_1}{h_1} \right) k\left( \frac{x_{2i} - x_2}{h_2} \right) \cdots k\left( \frac{x_{di} - x_d}{h_d} \right),

a d-fold product kernel. The kernel weights k_i(x) assess if the regressor vector x_i is close to the evaluation point x in the Euclidean space R^d.

These weights depend on a set of d bandwidths, h_j, one for each regressor. We can group them together into a single vector for notational convenience:

h = \begin{pmatrix} h_1 \\ \vdots \\ h_d \end{pmatrix}.

Given these weights, the Nadaraya-Watson estimator takes the form

\hat{m}(x) = \frac{\sum_{i=1}^n k_i(x) y_i}{\sum_{i=1}^n k_i(x)}.

For the local linear estimator, define

z_i(x) = \begin{pmatrix} 1 \\ x_i - x \end{pmatrix}.

The local linear estimator of m(x) is \hat{m}(x) = \hat{\alpha}(x), where

\begin{pmatrix} \hat{\alpha}(x) \\ \hat{\beta}(x) \end{pmatrix} = \left( \sum_{i=1}^n k_i(x) z_i(x) z_i(x)' \right)^{-1} \sum_{i=1}^n k_i(x) z_i(x) y_i = \left( Z'KZ \right)^{-1} Z'Ky.
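The multivariate versions of the two estimators can be written compactly as below; this sketch (not from the manuscript) assumes a Gaussian product kernel, a vector of bandwidths h, and simulated two-regressor data purely for illustration.

import numpy as np

def product_kernel_weights(X, x0, h):
    """d-fold product of Gaussian kernels: k_i(x0) for each observation i."""
    u = (X - x0) / h                      # (n, d) scaled differences
    return np.exp(-0.5 * (u ** 2).sum(axis=1))

def nw_multi(y, X, x0, h):
    """Nadaraya-Watson estimate of m(x0) with one bandwidth per regressor."""
    k = product_kernel_weights(X, x0, h)
    return k @ y / k.sum()

def ll_multi(y, X, x0, h):
    """Local linear estimate of m(x0): intercept of a weighted regression on (1, x_i - x0)."""
    k = product_kernel_weights(X, x0, h)
    Z = np.column_stack([np.ones(len(y)), X - x0])
    ZK = Z * k[:, None]
    coef = np.linalg.solve(ZK.T @ Z, ZK.T @ y)
    return coef[0]

rng = np.random.default_rng(3)
n, d = 500, 2
X = rng.uniform(-1, 1, (n, d))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.3 * rng.standard_normal(n)

x0 = np.array([0.2, -0.4])
h = np.array([0.25, 0.25])   # one bandwidth per regressor
print(nw_multi(y, X, x0, h), ll_multi(y, X, x0, h))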

In multiple regressor kernel regression, cross-validation remains a recommended method for bandwidth selection. The leave-one-out residuals \tilde{e}_i and cross-validation criterion CV(h) are defined identically as in the single regressor case. The only difference is that now the CV criterion is a function over the d-dimensional bandwidth h. This is a critical practical difference, since finding the bandwidth vector \hat{h} which minimizes CV(h) can be computationally difficult when h is high dimensional. Grid search is cumbersome and costly, since G gridpoints per dimension imply evaluation of CV(h) at G^d distinct points, which can be a large number. Furthermore, plots of CV(h) against h are challenging when d > 2.

The asymptotic distribution of the estimators in the multiple regressor case is an extension of the single regressor case. Let f_x(x) denote the marginal density of x_i and \sigma^2(x) = E(e_i^2 \mid x_i = x) the conditional variance of e_i = y_i - m(x_i). Let |h| = h_1 h_2 \cdots h_d.

Theorem 11.10.1 Let \hat{m}(x) denote either the Nadaraya-Watson or Local Linear estimator of m(x). If x is interior to the support of x_i and f_x(x) > 0, then as n \to \infty and h_j \to 0 such that n|h| \to \infty,

\sqrt{n|h|}\left( \hat{m}(x) - m(x) - \sigma_k^2 \sum_{j=1}^d h_j^2 B_j(x) \right) \to_d N\left( 0, \frac{R_k^d \sigma^2(x)}{f_x(x)} \right)

where for the Nadaraya-Watson estimator

B_j(x) = \frac{1}{2}\frac{\partial^2}{\partial x_j^2} m(x) + f_x(x)^{-1} \frac{\partial}{\partial x_j} f_x(x) \frac{\partial}{\partial x_j} m(x)

and for the Local Linear estimator

B_j(x) = \frac{1}{2}\frac{\partial^2}{\partial x_j^2} m(x).

For notational simplicity consider the case that there is a single common bandwidth h. In this case the AMSE takes the form

AMSE(\hat{m}(x)) \asymp h^4 + \frac{1}{nh^d}.

That is, the squared bias is of order h^4, the same as in the single regressor case, but the variance is of larger order (nh^d)^{-1}. Setting h to balance these two components requires setting h \sim n^{-1/(4+d)}.

Theorem 11.10.2 The bandwidth which minimizes the AMSE is of order h \propto n^{-1/(4+d)}. With h \propto n^{-1/(4+d)}, AMSE(\hat{m}(x)) = O(n^{-4/(4+d)}) and \hat{m}(x) = m(x) + O_p(n^{-2/(4+d)}).

In all estimation problems an increase in the dimension decreases estimation precision. For example, in parametric estimation an increase in dimension typically increases the asymptotic variance. In nonparametric estimation an increase in the dimension typically decreases the convergence rate, which is a more fundamental decrease in precision. For example, in kernel regression the convergence rate O_p(n^{-2/(4+d)}) decreases as d increases. The reason is that the estimator \hat{m}(x) is a local average of the y_i for observations such that x_i is close to x, and when there are multiple regressors the number of such observations is inherently smaller. This phenomenon, that the rate of convergence of nonparametric estimation decreases as the dimension increases, is called the curse of dimensionality.

Chapter 12

Series Estimation

12.1 Approximation by Series

As we mentioned at the beginning of Chapter 11, there are two main methods of nonparametric regression: kernel estimation and series estimation. In this chapter we study series methods.

Series methods approximate an unknown function (e.g. the CEF m(x)) with a flexible parametric function, with the number of parameters treated similarly to the bandwidth in kernel regression. A series approximation to m(x) takes the form m_K(x) = m_K(x, \beta_K) where m_K(x, \beta_K) is a known parametric family and \beta_K is an unknown coefficient. The integer K is the dimension of \beta_K and indexes the complexity of the approximation.

A linear series approximation takes the form

m_K(x) = \sum_{j=1}^K z_{jK}(x)\beta_{jK} = z_K(x)'\beta_K    (12.1)

where z_{jK}(x) are (nonlinear) functions of x, and are known as basis functions or basis function transformations of x.

For real-valued x, a well-known linear series approximation is the p-th order polynomial

m_K(x) = \sum_{j=0}^p x^j \beta_{jK}

where K = p + 1.

When x \in R^d is vector-valued, a p-th order polynomial is

m_K(x) = \sum_{j_1=0}^p \cdots \sum_{j_d=0}^p x_1^{j_1} \cdots x_d^{j_d} \beta_{j_1,...,j_d,K}.

This includes all powers and cross-products, and the coefficient vector has dimension K = (p+1)^d. In general, a common method to create a series approximation for vector-valued x is to include all non-redundant cross-products of the basis function transformations of the components of x.

12.2 Splines

Another common series approximation is a continuous piecewise polynomial function known as a spline. While splines can be of any polynomial order (e.g. linear, quadratic, cubic, etc.), a common choice is cubic. To impose smoothness it is common to constrain the spline function to have continuous derivatives up to the order of the spline. Thus a quadratic spline is typically constrained to have a continuous first derivative, and a cubic spline is typically constrained to have continuous first and second derivatives.

There is more than one way to define a spline series expansion. All are based on the number of knots, the join points between the polynomial segments.

To illustrate, a piecewise linear function with two segments and a knot at t is

m_K(x) = \begin{cases} m_1(x) = \beta_{00} + \beta_{01}(x-t) & x < t \\ m_2(x) = \beta_{10} + \beta_{11}(x-t) & x \ge t. \end{cases}

(For convenience we have written the segment functions as polynomials in x - t.) The function m_K(x) equals the linear function m_1(x) for x < t and equals m_2(x) for x \ge t. Its left limit at x = t is \beta_{00} and its right limit is \beta_{10}, so it is continuous if (and only if) \beta_{00} = \beta_{10}. Enforcing this constraint is equivalent to writing the function as

m_K(x) = \beta_0 + \beta_1(x - t) + \beta_2(x - t)1(x \ge t)

or, after transforming coefficients, as

m_K(x) = \beta_0 + \beta_1 x + \beta_2(x - t)1(x \ge t).

Notice that this function has K = 3 coefficients, the same as a quadratic polynomial.

A piecewise quadratic function with one knot at t is

m_K(x) = \begin{cases} m_1(x) = \beta_{00} + \beta_{01}(x-t) + \beta_{02}(x-t)^2 & x < t \\ m_2(x) = \beta_{10} + \beta_{11}(x-t) + \beta_{12}(x-t)^2 & x \ge t. \end{cases}

This function is continuous with a continuous first derivative if \beta_{00} = \beta_{10} and \beta_{01} = \beta_{11}. Imposing these constraints and rewriting, we obtain the function

m_K(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3(x - t)^2 1(x \ge t).

Here, K = 4.

Furthermore, a piecewise cubic function with one knot and a continuous second derivative is

m_K(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \beta_4(x - t)^3 1(x \ge t)

which has K = 5.

The polynomial order p is selected to control the smoothness of the spline, as m_K(x) has continuous derivatives up to order p - 1.

In general, a p-th order spline with N knots at t_1, t_2, ..., t_N with t_1 < t_2 < \cdots < t_N is

m_K(x) = \sum_{j=0}^p \beta_j x^j + \sum_{k=1}^N \gamma_k (x - t_k)^p 1(x \ge t_k),

which has K = N + p + 1 coefficients.

In spline approximation, the typical approach is to treat the polynomial order p as fixed, and select the number of knots N to determine the complexity of the approximation. The knots t_k are typically treated as fixed. A common choice is to set the knots to evenly partition the support X of x_i.
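A minimal sketch of constructing these truncated-power spline regressors follows (a hypothetical helper, not code from this manuscript); the cubic order, the number of knots, and the even knot placement over the observed support are illustrative assumptions.

import numpy as np

def spline_basis(x, p=3, num_knots=5):
    """Truncated-power spline basis: 1, x, ..., x^p, plus (x - t_k)^p * 1(x >= t_k) terms."""
    knots = np.linspace(x.min(), x.max(), num_knots + 2)[1:-1]   # evenly partition the support
    cols = [x ** j for j in range(p + 1)]
    cols += [np.where(x >= t, (x - t) ** p, 0.0) for t in knots]
    return np.column_stack(cols)          # n x (p + 1 + num_knots) regressor matrix

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 200)
Z = spline_basis(x, p=3, num_knots=5)
print(Z.shape)   # (200, 9): K = p + 1 + N coefficients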

12.3 Partially Linear Model

A common use of a series expansion is to allow the CEF to be nonparametric with respect to one variable, yet linear in the other variables. This allows flexibility in a particular variable of interest. A partially linear CEF with vector-valued regressor x_1 and real-valued continuous x_2 takes the form

m(x_1, x_2) = x_1'\beta_1 + m_2(x_2).

This model is commonly used when x_1 are discrete (e.g. binary variables) and x_2 is continuously distributed.

Series methods are particularly convenient for estimation of partially linear models, as we can replace the unknown function m_2(x_2) with a series expansion to obtain

m(x) \simeq m_K(x) = x_1'\beta_1 + z_K'\beta_{2K} = x_K'\beta_K

where z_K = z_K(x_2) are the basis transformations of x_2 (typically polynomials or splines) and \beta_{2K} are coefficients. After transformation the regressors are x_K = (x_1', z_K')' and the coefficients are \beta_K = (\beta_1', \beta_{2K}')'.

12.4 Additively Separable Models

When x is multivariate, a common simplification is to treat the regression function m(x) as additively separable in the individual regressors, which means that

m(x) = m_1(x_1) + m_2(x_2) + \cdots + m_d(x_d).

Series methods are quite convenient for estimation of additively separable models, as we simply apply series expansions (polynomials or splines) separately for each component m_j(x_j). The advantage of additive separability is the reduction in dimensionality. While an unconstrained p-th order polynomial has (p+1)^d coefficients, an additively separable polynomial model has only (p+1)d coefficients. This can be a major reduction in the number of coefficients. The disadvantage of this simplification is that the interaction effects have been eliminated.

The decision to impose additive separability can be based on an economic model which suggests the absence of interaction effects, or can be a model selection decision similar to the selection of the number of series terms. We will discuss model selection methods below.

12.5 Uniform Approximations

A good series approximation m_K(x) will have the property that it gets close to the true CEF m(x) as the complexity K increases. Formal statements can be derived from the theory of functional analysis.

An elegant and famous theorem is the Stone-Weierstrass theorem (Weierstrass, 1885; Stone, 1937, 1948), which states that any continuous function can be arbitrarily uniformly well approximated by a polynomial of sufficiently high order. Specifically, the theorem states that for x \in R^d, if m(x) is continuous on a compact set X, then for any \varepsilon > 0 there exists a polynomial m_K(x) of some order K which is uniformly within \varepsilon of m(x):

\sup_{x \in X} |m_K(x) - m(x)| \le \varepsilon.    (12.2)

Thus the true unknown m(x) can be arbitrarily well approximated by selecting a suitable polynomial.

The result (12.2) can be strengthened. In particular, if the s-th derivative of m(x) is continuous then the uniform approximation error satisfies

\sup_{x \in X} |m_K(x) - m(x)| = O(K^{-\alpha})    (12.3)

as K \to \infty, where \alpha = s/d. This result is more useful than (12.2) because it gives a rate at which the approximation m_K(x) approaches m(x) as K increases.

Both (12.2) and (12.3) hold for spline approximations as well.

Intuitively, the number of derivatives s indexes the smoothness of the function m(x). (12.3) says that the best rate at which a polynomial or spline approximates the CEF m(x) depends on the underlying smoothness of m(x). The more smooth is m(x), the fewer series terms (polynomial order or spline knots) are needed to obtain a good approximation.

To illustrate polynomial approximation, Figure 12.1 displays the CEF m(x) = x^{1/4}(1-x)^{1/2} on x \in [0,1]. In addition, the best approximations using polynomials of order K = 3, K = 4, and K = 6 are displayed. You can see how the approximation with K = 3 is fairly crude, but improves with K = 4 and especially K = 6. Approximations obtained with cubic splines are quite similar so are not displayed.

As a series approximation can be written as m_K(x) = z_K(x)'\beta_K as in (12.1), the coefficient of the best uniform approximation (12.3) is

\beta_K^* = \underset{\beta_K}{\mathrm{argmin}} \; \sup_{x \in X} \left| z_K(x)'\beta_K - m(x) \right|.    (12.4)

Its approximation error is

r_K^*(x) = m(x) - z_K(x)'\beta_K^*

and we can write

m(x) = z_K(x)'\beta_K^* + r_K^*(x)    (12.5)

to emphasize that the true conditional mean can be written as the linear approximation plus error. A useful consequence of equation (12.3) is

\sup_{x \in X} |r_K^*(x)| \le O(K^{-\alpha}).    (12.6)

12.6 Runge's Phenomenon

Despite the excellent approximation properties implied by the Stone-Weierstrass theorem, polynomials have the troubling disadvantage that they are very poor at simple interpolation. The problem is known as Runge's phenomenon, and is illustrated in Figure 12.2. The solid line is the CEF m(x) = (1+x^2)^{-1} displayed on [-5, 5]. The circles display the function at the K = 11 integers in this interval. The long dashes display the 10th order polynomial fit through these points. Notice that the polynomial approximation is erratic and far from the smooth CEF. This discrepancy gets worse as the number of evaluation points increases, as Runge (1901) showed that the discrepancy increases to infinity with K.

In contrast, splines do not exhibit Runge's phenomenon. In Figure 12.2 the short dashes display a cubic spline with seven knots fit through the same points as the polynomial. While the fitted spline displays some oscillation relative to the true CEF, the deviations are relatively moderate.

Because of Runge's phenomenon, high-order polynomials are not used for interpolation, and are not popular choices for high-order series approximations. Instead, splines are widely used.

12.7 Approximating Regression

For each observation i we observe (y_i, x_i) and then construct the regressor vector z_{Ki} = z_K(x_i) using the series transformations. Stacking the observations in the matrices y and Z_K, the least squares estimate of the coefficient \beta_K in the series approximation z_K(x)'\beta_K is

\hat{\beta}_K = \left( Z_K'Z_K \right)^{-1} Z_K'y,

and the least squares estimate of the regression function is

\hat{m}_K(x) = z_K(x)'\hat{\beta}_K.    (12.7)

As we learned in Chapter 2, the least-squares coefficient is estimating the best linear predictor of y_i given z_{Ki}. This is

\beta_K = \left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}y_i \right).

Given this coefficient, the approximation error is

r_K(x) = m(x) - z_K(x)'\beta_K.    (12.8)

The true CEF equation for y_i is

y_i = m(x_i) + e_i    (12.9)

with the CEF error e_i. Defining r_{Ki} = r_K(x_i), we find

y_i = z_{Ki}'\beta_K + e_{Ki}

where the equation error is

e_{Ki} = r_{Ki} + e_i.

Observe that the error e_{Ki} includes the approximation error and thus does not have the properties of a CEF error.

In matrix notation we can write these equations as

y = Z_K\beta_K + r_K + e = Z_K\beta_K + e_K.    (12.10)
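The estimator (12.7) is ordinary least squares on constructed regressors, so its computation is elementary. The following is an illustrative sketch (not from the manuscript) using a simple polynomial basis; the basis choice, the target function, and the evaluation points are assumptions made only for the example.

import numpy as np

def poly_basis(x, K):
    """Polynomial basis z_K(x) = (1, x, ..., x^(K-1))."""
    return np.column_stack([x ** j for j in range(K)])

def series_fit(y, x, K, xeval):
    """Least-squares series estimate m_hat_K(x) = z_K(x)' beta_hat_K, as in (12.7)."""
    Z = poly_basis(x, K)
    beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return poly_basis(xeval, K) @ beta_hat

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 300)
y = x ** 0.25 * (1 - x) ** 0.5 + 0.05 * rng.standard_normal(300)

xeval = np.array([0.1, 0.5, 0.9])
for K in (4, 7):
    print(K, np.round(series_fit(y, x, K, xeval), 3))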

We now impose some regularity conditions on the regression model to facilitate the theory. Define the K x K expected design matrix

Q_K = E\left( z_{Ki}z_{Ki}' \right),

let X denote the support of x_i, and define the largest normalized length of the regressor vector in the support of x_i,

\zeta_K = \sup_{x \in X} \left( z_K(x)'Q_K^{-1}z_K(x) \right)^{1/2}.    (12.11)

\zeta_K will increase with K. For example, if the support of the variables z_K(x_i) is the unit cube [0,1]^K, then you can compute that \zeta_K = \sqrt{K}. As discussed in Newey (1997) and Li and Racine (2007, Corollary 15.1), if the support of x_i is compact then \zeta_K = O(K) for polynomials and \zeta_K = O(K^{1/2}) for splines.

Assumption 12.7.1

1. For some \alpha > 0 the series approximation satisfies the uniform approximation bound (12.6).

2. E\left( e_i^2 \mid x_i \right) \le \bar{\sigma}^2 < \infty.

3. \lambda_{\min}(Q_K) \ge \lambda > 0.

4. \zeta_K^2 K / n \to 0 as n \to \infty.

Assumptions 12.7.1.1 through 12.7.1.3 concern properties of the regression model. Assumption 12.7.1.1 holds with \alpha = s/d if X is compact and the s-th derivative of m(x) is continuous. Assumption 12.7.1.2 allows for conditional heteroskedasticity, but requires the conditional variance to be bounded. Assumption 12.7.1.3 excludes near-singular designs. Since estimates of the conditional mean are unchanged if we replace z_{Ki} with z_{Ki}^* = B_K z_{Ki} for any non-singular B_K, Assumption 12.7.1.3 can be viewed as holding after transformation by an appropriate non-singular B_K.

Assumption 12.7.1.4 concerns the choice of the number of series terms, which is under the control of the user. It specifies that K can increase with sample size, but at a controlled rate of growth. Since \zeta_K = O(K) for polynomials and \zeta_K = O(K^{1/2}) for splines, Assumption 12.7.1.4 is satisfied if K^3/n \to 0 for polynomials and K^2/n \to 0 for splines. This means that while the number of series terms K can increase with the sample size, K must increase at a much slower rate.

In Section 12.5 we introduced the best uniform approximation, and in this section we introduced the best linear predictor. What is the relationship? They may be similar in practice, but they are not the same and we should be careful to maintain the distinction. Note that from (12.5) we can write m(x_i) = z_{Ki}'\beta_K^* + r_{Ki}^* where r_{Ki}^* = r_K^*(x_i) satisfies \sup_i |r_{Ki}^*| = O(K^{-\alpha}) from (12.6). Then the best linear predictor equals

\beta_K = \left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}y_i \right)
        = \left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}m(x_i) \right)
        = \left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}\left( z_{Ki}'\beta_K^* + r_{Ki}^* \right) \right)
        = \beta_K^* + \left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}r_{Ki}^* \right).

Thus the difference between the two approximation errors is

r_K(x) - r_K^*(x) = -z_K(x)'\left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}r_{Ki}^* \right).    (12.12)

Observe that since E\left( z_{Ki}z_{Ki}' \right)^{-1} is positive semi-definite,

E\left( r_{Ki}^* z_{Ki} \right)'\left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}r_{Ki}^* \right) \le E\left( r_{Ki}^{*2} \right)    (12.13)

and by (12.6)

E\left( r_{Ki}^{*2} \right) = \int r_K^*(x)^2 f_x(x) dx \le O\left( K^{-2\alpha} \right).    (12.14)

Then applying the Schwarz inequality to (12.12), Definition (12.11), (12.13) and (12.14), we find

|r_K(x) - r_K^*(x)| \le \left( z_K(x)'\left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1}z_K(x) \right)^{1/2}\left( E\left( r_{Ki}^* z_{Ki} \right)'\left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1} E\left( z_{Ki}r_{Ki}^* \right) \right)^{1/2} \le O\left( \zeta_K K^{-\alpha} \right).    (12.15)

Combined with (12.6), this implies the uniform bound

\sup_{x \in X} |r_K(x)| \le O\left( \zeta_K K^{-\alpha} \right).    (12.16)

The bound (12.16) is probably not the best possible, but it shows that the best linear predictor satisfies a uniform approximation bound. Relative to (12.6), the rate is slower by the factor \zeta_K. The bound (12.16) term is o(1) as K \to \infty if \zeta_K K^{-\alpha} \to 0. A sufficient condition is that \alpha > 1 (s > d) for polynomials and \alpha > 1/2 (s > d/2) for splines, where d = \dim(x) and s is the number of continuous derivatives of m(x).

It is also useful to observe that since \beta_K is the best linear approximation to m(x_i) in mean-square (see Section 2.23), then

E\left( m(x_i) - z_{Ki}'\beta_K \right)^2 = E\left( r_{Ki}^2 \right) \le E\left( m(x_i) - z_{Ki}'\beta_K^* \right)^2 = E\left( r_{Ki}^{*2} \right) \le O\left( K^{-2\alpha} \right),    (12.17)

the final inequality by (12.14).

12.8 Residuals and Regression Fit

The fitted values are \hat{m}_K(x_i) = z_{Ki}'\hat{\beta}_K and the fitted residuals are

\hat{e}_{iK} = y_i - \hat{m}_K(x_i).

The leave-one-out prediction errors are

\tilde{e}_{iK} = y_i - \hat{m}_{K,-i}(x_i) = y_i - z_{Ki}'\hat{\beta}_{K,-i}

where \hat{\beta}_{K,-i} is the least-squares coefficient with the i-th observation omitted. Using (3.38) we can also write

\tilde{e}_{iK} = \hat{e}_{iK}\left( 1 - h_{Kii} \right)^{-1}

where h_{Kii} = z_{Ki}'\left( Z_K'Z_K \right)^{-1}z_{Ki} is the i-th leverage value.

As for kernel regression, the prediction errors \tilde{e}_{iK} are better estimates of the errors than the fitted residuals \hat{e}_{iK}, as they do not have the tendency to overfit when the number of series terms is large.

To assess the fit of the nonparametric regression, the estimate of the mean-square prediction error is

\tilde{\sigma}_K^2 = \frac{1}{n}\sum_{i=1}^n \tilde{e}_{iK}^2 = \frac{1}{n}\sum_{i=1}^n \hat{e}_{iK}^2\left( 1 - h_{Kii} \right)^{-2}.

12.9 Cross-Validation Model Selection

The prediction R^2 is

\tilde{R}_K^2 = 1 - \frac{\sum_{i=1}^n \tilde{e}_{iK}^2}{\sum_{i=1}^n (y_i - \bar{y})^2}.

The cross-validation criterion for selection of the number of series terms is the MSPE

CV(K) = \tilde{\sigma}_K^2 = \frac{1}{n}\sum_{i=1}^n \hat{e}_{iK}^2\left( 1 - h_{Kii} \right)^{-2}.

By selecting the series terms to minimize CV(K), or equivalently maximize \tilde{R}_K^2, we have a data-dependent rule which is designed to produce estimates with low integrated mean-squared error (IMSE) and mean-squared forecast error (MSFE). As shown in Theorem 11.6.1, CV(K) is an approximately unbiased estimate of the MSFE and IMSE, so finding the model which produces the smallest value of CV(K) is a good indicator that the estimated model has small MSFE and IMSE. The proof of the result is the same for all nonparametric estimators (series as well as kernels) so does not need to be repeated here.

As a practical matter, an estimator corresponds to a set of regressors z_{Ki}, that is, a set of transformations of the original variables x_i. For each set of regressors, the regression is estimated and CV(K) calculated, and the estimator is selected which has the smallest value of CV(K). If there are p ordered regressors, then there are p possible estimators. Typically, this calculation is simple even if p is large. However, if the p regressors are unordered (and this is typical) then there are 2^p possible subsets of conceivable models. If p is even moderately large, 2^p can be immensely large, so brute-force computation of all models may be computationally demanding.
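For ordered regressors the CV(K) computation only requires the fitted residuals and leverage values from each candidate regression, so it is inexpensive. The following sketch (illustrative assumptions: polynomial bases indexed by K, simulated data) shows the calculation.

import numpy as np

def cv_series(y, Z):
    """Cross-validation criterion CV(K): mean of squared leave-one-out prediction errors,
    computed from fitted residuals and leverage values h_Kii."""
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    H_diag = np.einsum("ij,jk,ik->i", Z, ZtZ_inv, Z)    # leverage values h_Kii
    e_hat = y - Z @ (ZtZ_inv @ (Z.T @ y))               # fitted residuals
    return np.mean((e_hat / (1.0 - H_diag)) ** 2)

rng = np.random.default_rng(6)
x = rng.uniform(0, 1, 400)
y = np.sin(3 * x) + 0.2 * rng.standard_normal(400)

# select the polynomial order by minimizing CV(K)
cv = {K: cv_series(y, np.column_stack([x ** j for j in range(K)])) for K in range(2, 10)}
print("CV-selected K:", min(cv, key=cv.get))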

12.10 Convergence in Mean-Square

The series estimates \hat{\beta}_K are indexed by K. The point of nonparametric estimation is to let K be flexible so as to incorporate greater complexity when the data are sufficiently informative. This means that K will typically be increasing with sample size n. This invalidates conventional asymptotic distribution theory. However, we can develop extensions which use appropriate matrix norms, and by focusing on real-valued functions of the parameters including the estimated regression function itself.

The asymptotic theory we present in this and the next several sections is largely taken from Newey (1997).

Our first main result shows that the least-squares estimate converges to \beta_K in mean-square distance.

Theorem 12.10.1 Under Assumption 12.7.1, as n \to \infty,

\left( \hat{\beta}_K - \beta_K \right)'Q_K\left( \hat{\beta}_K - \beta_K \right) = O_p\left( \frac{K}{n} \right) + o_p\left( K^{-2\alpha} \right).    (12.18)

The proof of Theorem 12.10.1 is rather technical and deferred to Section 12.16.

The rate of convergence in (12.18) has two terms. The O_p(K/n) term is due to estimation variance. Note in contrast that the corresponding rate would be O_p(1/n) in the parametric case. The difference is that in the parametric case we assume that the number of regressors K is fixed as n increases, while in the nonparametric case we allow the number of regressors K to be flexible. As K increases, the estimation variance increases. The o_p(K^{-2\alpha}) term in (12.18) is due to the series approximation error.

Using Theorem 12.10.1 we can establish the following convergence rate for the estimated regression function.

Theorem 12.10.2 Under Assumption 12.7.1, as n \to \infty,

\int \left( \hat{m}_K(x) - m(x) \right)^2 f_x(x) dx = O_p\left( \frac{K}{n} \right) + O_p\left( K^{-2\alpha} \right).    (12.19)

Theorem 12.10.2 shows that the integrated squared difference between the fitted regression and the true CEF converges in probability to zero if K \to \infty as n \to \infty. The convergence results of Theorem 12.10.2 show that the number of series terms K involves a trade-off similar to the role of the bandwidth h in kernel regression. Larger K implies smaller approximation error but increased estimation variance.

The optimal rate which minimizes the average squared error in (12.19) is K = O\left( n^{1/(1+2\alpha)} \right), yielding an optimal rate of convergence in (12.19) of O_p\left( n^{-2\alpha/(1+2\alpha)} \right). This rate depends on the unknown smoothness \alpha of the true CEF (the number of derivatives s) and so does not directly suggest a practical rule for determining K. Still, the implication is that when the function being estimated is less smooth (\alpha is small) then it is necessary to use a larger number of series terms K to reduce the bias. In contrast, when the function is more smooth then it is better to use a smaller number of series terms K to reduce the variance.

To establish (12.19), using (12.7) and (12.8) we can write

\hat{m}_K(x) - m(x) = z_K(x)'\left( \hat{\beta}_K - \beta_K \right) - r_K(x).    (12.20)

Since e_{Ki} are projection errors, they satisfy E\left( z_{Ki}e_{Ki} \right) = 0 and thus E\left( z_{Ki}r_{Ki} \right) = 0. This means \int z_K(x)r_K(x)f_x(x)dx = 0. Also observe that Q_K = \int z_K(x)z_K(x)'f_x(x)dx and E\left( r_{Ki}^2 \right) = \int r_K(x)^2 f_x(x)dx. Then

\int \left( \hat{m}_K(x) - m(x) \right)^2 f_x(x) dx = \left( \hat{\beta}_K - \beta_K \right)'Q_K\left( \hat{\beta}_K - \beta_K \right) + E\left( r_{Ki}^2 \right) \le O_p\left( \frac{K}{n} \right) + o_p\left( K^{-2\alpha} \right) + O\left( K^{-2\alpha} \right)

by (12.18) and (12.17), establishing (12.19).

12.11 Uniform Convergence

Theorem 12.10.2 shows that \hat{m}_K(x) is consistent in a squared error norm. It is also of interest to know the rate at which the largest deviation converges to zero. We have the following rate.

Theorem 12.11.1 Under Assumption 12.7.1, as n \to \infty,

\sup_{x \in X} \left| \hat{m}_K(x) - m(x) \right| = O_p\left( \sqrt{\frac{\zeta_K^2 K}{n}} \right) + O_p\left( \zeta_K K^{-\alpha} \right).    (12.21)

Relative to Theorem 12.10.2, the error has been increased multiplicatively by \zeta_K. This slower convergence rate is a penalty for the stronger uniform convergence, though it is probably not the best possible rate. Examining the bound in (12.21), notice that the first term is o_p(1) under Assumption 12.7.1.4. The second term is o_p(1) if \zeta_K K^{-\alpha} \to 0, which requires that K \to \infty and that \alpha be sufficiently large. A sufficient condition is that s > d for polynomials and s > d/2 for splines, where d = \dim(x) and s is the number of continuous derivatives of m(x). Thus higher dimensional x require a smoother CEF m(x) to ensure that the series estimate \hat{m}_K(x) is uniformly consistent.

The convergence (12.21) is straightforward to show using (12.18). Using (12.20), the Triangle Inequality, the Schwarz inequality (A.10), Definition (12.11), (12.18) and (12.16),

\sup_{x \in X} \left| \hat{m}_K(x) - m(x) \right|
  \le \sup_{x \in X} \left| z_K(x)'\left( \hat{\beta}_K - \beta_K \right) \right| + \sup_{x \in X} \left| r_K(x) \right|
  \le \zeta_K \left( \left( \hat{\beta}_K - \beta_K \right)'Q_K\left( \hat{\beta}_K - \beta_K \right) \right)^{1/2} + O\left( \zeta_K K^{-\alpha} \right)
  \le \zeta_K\, O_p\left( \left( \frac{K}{n} \right)^{1/2} + K^{-\alpha} \right) + O\left( \zeta_K K^{-\alpha} \right)
  = O_p\left( \sqrt{\frac{\zeta_K^2 K}{n}} \right) + O_p\left( \zeta_K K^{-\alpha} \right).    (12.22)

This is (12.21).

12.12 Asymptotic Normality

One advantage of series methods is that the estimators are (in finite samples) equivalent to parametric estimators, so it is easy to calculate covariance matrix estimates. We now show that we can also justify normal asymptotic approximations.

The theory we present in this section will apply to any linear function of the regression function. That is, we allow the parameter of interest to be any non-trivial real-valued linear function of the entire regression function m(\cdot),

\theta = a(m).

This includes the regression function m(x) at a given point x, derivatives of m(x), and integrals over m(x). Given \hat{m}_K(x) = z_K(x)'\hat{\beta}_K as an estimator for m(x), the estimator for \theta is

\hat{\theta}_K = a(\hat{m}_K) = a_K'\hat{\beta}_K

for some K x 1 vector a_K. (The representation a(\hat{m}_K) = a_K'\hat{\beta}_K follows since a is linear in m and \hat{m}_K is linear in \hat{\beta}_K.)

If K were fixed as n \to \infty, then by standard asymptotic theory we would expect \hat{\theta}_K to be asymptotically normal with variance

v_K = a_K'Q_K^{-1}\Omega_K Q_K^{-1}a_K

where

\Omega_K = E\left( z_{Ki}z_{Ki}'e_{Ki}^2 \right).

The standard justification, however, is not valid in the nonparametric case, in part because v_K may diverge as K \to \infty, and in part due to the finite sample bias due to the approximation error. Therefore a new theory is required. Interestingly, it turns out that in the nonparametric case \hat{\theta}_K is still asymptotically normal, and v_K is still the appropriate variance for \hat{\theta}_K. The proof is different than the parametric case as the dimensions of the matrices are increasing with K, and we need to be attentive to the estimator's bias due to the series approximation.

Theorem 12.12.1 Under Assumption 12.7.1, if in addition E\left( e_i^4 \mid x_i \right) \le \bar{\kappa} < \infty, E\left( e_i^2 \mid x_i \right) \ge \underline{\sigma}^2 > 0, and \zeta_K K^{-\alpha} = O(1), then as n \to \infty,

\frac{\sqrt{n}\left( \hat{\theta}_K - \theta + a(r_K) \right)}{\sqrt{v_K}} \to_d N(0,1).    (12.23)

Theorem 12.12.1 shows that the estimator \hat{\theta}_K is approximately normal with a bias term a(r_K) and variance v_K/n. The variance is the same as in the parametric case, but the asymptotic distribution contains an asymptotic bias, similar as is found in kernel regression. We discuss the bias in more detail below.

Notice that Theorem 12.12.1 requires \zeta_K K^{-\alpha} = O(1), which is similar to the condition found in Theorem 12.11.1 to establish uniform convergence. The bound \zeta_K K^{-\alpha} = O(1) allows K to be constant with n or to increase with n. However, when K is increasing the bound requires that \alpha be sufficiently large so that K^{\alpha} grows at least as fast as \zeta_K. A sufficient condition is that s \ge d for polynomials and s \ge d/2 for splines. The fact that the condition allows for K to be constant means that Theorem 12.12.1 includes parametric least-squares as a special case with explicit attention to estimation bias.

One useful message from Theorem 12.12.1 is that the classic variance formula v_K for \hat{\theta}_K still applies for series regression. Indeed, we can estimate the asymptotic variance using the standard White formula

\hat{v}_K = a_K'\hat{Q}_K^{-1}\hat{\Omega}_K\hat{Q}_K^{-1}a_K

where

\hat{\Omega}_K = \frac{1}{n}\sum_{i=1}^n z_{Ki}z_{Ki}'\hat{e}_{iK}^2, \qquad \hat{Q}_K = \frac{1}{n}\sum_{i=1}^n z_{Ki}z_{Ki}'.

The standard error for \hat{\theta}_K is then

\hat{s}(\hat{\theta}_K) = \sqrt{ \frac{1}{n}a_K'\hat{Q}_K^{-1}\hat{\Omega}_K\hat{Q}_K^{-1}a_K }.

It can be shown (Newey, 1997) that \hat{v}_K/v_K \to_p 1 as n \to \infty and thus the distribution in (12.23) is unchanged if v_K is replaced with \hat{v}_K.
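Because the series estimator is ordinary least squares in finite samples, the White-formula variance estimate above is a routine calculation. The sketch below (illustrative, not the manuscript's code) estimates \theta = m(x_0) at an assumed point x_0 = 0.5 with a polynomial basis and reports the corresponding standard error.

import numpy as np

def series_estimate_with_se(y, Z, a):
    """Series estimate theta_hat = a' beta_hat with the White-formula standard error
    s_hat = sqrt( a' Q_hat^{-1} Omega_hat Q_hat^{-1} a / n )."""
    n = len(y)
    beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
    e_hat = y - Z @ beta_hat
    Q_hat = Z.T @ Z / n
    Omega_hat = (Z * (e_hat ** 2)[:, None]).T @ Z / n
    Qinv_a = np.linalg.solve(Q_hat, a)
    v_hat = Qinv_a @ Omega_hat @ Qinv_a
    return a @ beta_hat, np.sqrt(v_hat / n)

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 500)
y = np.exp(x) + 0.3 * rng.standard_normal(500)

K = 5
Z = np.column_stack([x ** j for j in range(K)])
a = np.array([0.5 ** j for j in range(K)])      # a_K = z_K(x0) for the point x0 = 0.5
theta_hat, se = series_estimate_with_se(y, Z, a)
print(round(theta_hat, 3), round(se, 3), "vs true m(0.5) =", round(np.exp(0.5), 3))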

Theorem 12.12.1 shows that the estimator \hat{\theta}_K has a bias term a(r_K). What is this? It is the same transformation of the function r_K(x) as \theta = a(m) is of the regression function m(x). For example, if \theta = m(x) is the regression at a fixed point x, then a(r_K) = r_K(x), the approximation error at the same point. If \theta = \frac{d}{dx}m(x) is the regression derivative, then a(r_K) = \frac{d}{dx}r_K(x) is the derivative of the approximation error.

This means that the bias in the estimator \hat{\theta}_K for \theta shown in Theorem 12.12.1 is simply the approximation error, transformed by the functional of interest. If we are estimating the regression function then the bias is the error in approximating the regression function; if we are estimating the regression derivative then the bias is the derivative of the approximation error for the regression function.

12.13 Asymptotic Normality with Undersmoothing

An unpleasant aspect about Theorem 12.12.1 is the bias term. An interesting trick is that this bias term can be made asymptotically negligible if we assume that K increases with n at a sufficiently fast rate.

Theorem 12.13.1 Under Assumption 12.7.1, if in addition E\left( e_i^4 \mid x_i \right) \le \bar{\kappa} < \infty, E\left( e_i^2 \mid x_i \right) \ge \underline{\sigma}^2 > 0, a(r_K^*) \le O\left( K^{-\alpha} \right), nK^{-2\alpha} \to 0, and a_K'Q_K^{-1}a_K is bounded away from zero, then as n \to \infty,

\frac{\sqrt{n}\left( \hat{\theta}_K - \theta \right)}{\sqrt{v_K}} \to_d N(0,1).    (12.24)

The condition a(r_K^*) \le O\left( K^{-\alpha} \right) states that the function of interest (for example, the regression function, its derivative, or its integral) applied to the uniform approximation error converges to zero as the number of terms K in the series approximation increases. If a(m) = m(x) then this condition holds by (12.6).

The condition that a_K'Q_K^{-1}a_K is bounded away from zero is simply a technical requirement to exclude degeneracy.

The condition nK^{-2\alpha} \to 0 means that K must increase with n at a rate faster than n^{1/(2\alpha)}. This is a troubling condition. The optimal rate for estimation of m(x) is K = O\left( n^{1/(1+2\alpha)} \right). If we set K = n^{1/(1+2\alpha)} by this rule then nK^{-2\alpha} = n^{1/(1+2\alpha)} \to \infty, not zero. Thus this assumption is equivalent to assuming that K is much larger than optimal. The reason why this trick works (that is, why the bias is negligible) is that by increasing K, the asymptotic bias decreases and the asymptotic variance increases and thus the variance dominates. Because K is larger than optimal, we typically say that \hat{m}_K(x) is undersmoothed relative to the optimal series estimator.

Many authors like to focus their asymptotic theory on the assumptions in Theorem 12.13.1, as the distribution (12.24) appears cleaner. However, it is a poor use of asymptotic theory. There are three problems with the assumption nK^{-2\alpha} \to 0 and the approximation (12.24). First, it says that if we intentionally pick K to be larger than optimal, we can increase the estimation variance relative to the bias so the variance will dominate the bias. But why would we want to intentionally use an estimator which is sub-optimal? Second, the assumption nK^{-2\alpha} \to 0 does not eliminate the asymptotic bias, it only makes it of lower order than the variance. So the approximation (12.24) is technically valid, but the missing asymptotic bias term is just slightly smaller in asymptotic order, and thus still relevant in finite samples. Third, the condition nK^{-2\alpha} \to 0 is just an assumption, it has nothing to do with actual empirical practice. Thus the difference between (12.23) and (12.24) is in the assumptions, not in the actual reality or in the actual empirical practice. Eliminating a nuisance (the asymptotic bias) through an assumption is a trick, not a substantive use of theory.

My strong view is that the result (12.23) is more informative than (12.24). It shows that the asymptotic distribution is normal but has a non-trivial finite sample bias.

12.14 Regression Estimation

A special yet important example of a linear estimator of the regression function is the regression function at a fixed point x. In the notation of the previous section, a(m) = m(x) and a_K = z_K(x). The series estimator of \theta = m(x) is \hat{\theta}_K = \hat{m}_K(x) = z_K(x)'\hat{\beta}_K. As this is a key problem of interest, we restate the asymptotic results of Theorems 12.12.1 and 12.13.1 for this estimator.

Theorem 12.14.1 Under Assumption 12.7.1, if in addition E\left( e_i^4 \mid x_i \right) \le \bar{\kappa} < \infty, E\left( e_i^2 \mid x_i \right) \ge \underline{\sigma}^2 > 0, and \zeta_K K^{-\alpha} = O(1), then as n \to \infty,

\frac{\sqrt{n}\left( \hat{m}_K(x) - m(x) + r_K(x) \right)}{\sqrt{v_K(x)}} \to_d N(0,1)    (12.25)

where

v_K(x) = z_K(x)'Q_K^{-1}\Omega_K Q_K^{-1}z_K(x).

If \zeta_K K^{-\alpha} = O(1) is replaced by nK^{-2\alpha} \to 0, and z_K(x)'Q_K^{-1}z_K(x) is bounded away from zero, then

\frac{\sqrt{n}\left( \hat{m}_K(x) - m(x) \right)}{\sqrt{v_K(x)}} \to_d N(0,1).    (12.26)

There are two important features about the asymptotic distribution (12.25).

First, as mentioned in the previous section, it shows how to construct asymptotic standard errors for the CEF m(x). These are

\hat{s}(x) = \sqrt{ \frac{1}{n}z_K(x)'\hat{Q}_K^{-1}\hat{\Omega}_K\hat{Q}_K^{-1}z_K(x) }.

Second, (12.25) shows that the estimator has the asymptotic bias component r_K(x). This is due to the fact that the finite order series is an approximation to the unknown CEF m(x), and this results in finite sample bias.

The asymptotic distribution (12.26) shows that the bias term is negligible if K diverges fast enough so that nK^{-2\alpha} \to 0. As discussed in the previous section, this means that K is larger than optimal.

The assumption that z_K(x)'Q_K^{-1}z_K(x) is bounded away from zero is a technical condition to exclude degenerate cases, and is automatically satisfied if z_K(x) includes an intercept.

Plots of the CEF estimate \hat{m}_K(x) can be accompanied by 95% confidence intervals \hat{m}_K(x) \pm 2\hat{s}(x). As we discussed in the chapter on kernel regression, this can be viewed as a confidence interval for the pseudo-true CEF m_K^*(x) = m(x) - r_K(x), not for the true m(x). As for kernel regression, the difference is the unavoidable consequence of nonparametric estimation.

12.15 Kernel Versus Series Regression

In this and the previous chapter we have presented two distinct methods of nonparametric regression based on kernel methods and series methods. Which should be used in practice? Both methods have advantages and disadvantages and there is no clear overall winner.

First, while the asymptotic theory of the two estimators appears quite different, they are actually rather closely related. When the regression function m(x) is twice differentiable (s = 2) then the rate of convergence of both the MSE of the kernel regression estimator with optimal bandwidth h and the series estimator with optimal K is n^{-2/(d+4)}. There is no difference. If the regression function is smoother than twice differentiable (s > 2) then the rate of convergence of the series estimator improves. This may appear to be an advantage for series methods, but kernel regression can also take advantage of the higher smoothness by using so-called higher-order kernels or local polynomial regression, so perhaps this advantage is not too large.

Both estimators are asymptotically normal and have straightforward asymptotic standard error formulae. The series estimators are a bit more convenient for this purpose, as classic parametric standard error formulae work without amendment.

An advantage of kernel methods is that their distributional theory is easier to derive. The theory is all based on local averages, which is relatively straightforward. In contrast, series theory is more challenging, dealing with increasing parameter spaces. An important difference in the theory is that for kernel estimators we have explicit representations for the bias while we only have rates for series methods. This means that plug-in methods can be used for bandwidth selection in kernel regression. However, typically we rely on cross-validation, which is equally applicable in both kernel and series regression.

Kernel methods are also relatively easy to implement when the dimension d is large. There is not a major change in the methodology as d increases. In contrast, series methods become quite cumbersome as d increases, as the number of cross-terms increases exponentially.

A major advantage of series methods is that they have inherently a high degree of flexibility, and the user is able to implement shape restrictions quite easily. For example, in series estimation it is relatively simple to implement a partially linear CEF, an additively separable CEF, monotonicity, concavity or convexity. These restrictions are harder to implement in kernel regression.

12.16 Technical Proofs

Define z_{Ki} = z_K(x_i) and let Q_K^{1/2} denote the positive definite square root of Q_K. As mentioned before Theorem 12.10.1, the regression problem is unchanged if we replace z_{Ki} with a rotated regressor such as z_{Ki}^* = Q_K^{-1/2}z_{Ki}. This is a convenient choice, for then E\left( z_{Ki}^*z_{Ki}^{*\prime} \right) = I_K. For notational convenience we will simply write the transformed regressors as z_{Ki} and set Q_K = I_K.

We start with some convergence results for the sample design matrix

\hat{Q}_K = \frac{1}{n}Z_K'Z_K = \frac{1}{n}\sum_{i=1}^n z_{Ki}z_{Ki}'.

We first show that under Assumption 12.7.1,

\left\| \hat{Q}_K - I_K \right\| = o_p(1)    (12.27)

and

\lambda_{\min}(\hat{Q}_K) \to_p 1.    (12.28)

Proof. Since

\left\| \hat{Q}_K - I_K \right\|^2 \le \sum_{j=1}^K \sum_{\ell=1}^K \left( \frac{1}{n}\sum_{i=1}^n \left( z_{jKi}z_{\ell Ki} - E\left( z_{jKi}z_{\ell Ki} \right) \right) \right)^2,

then

E\left\| \hat{Q}_K - I_K \right\|^2 \le \sum_{j=1}^K \sum_{\ell=1}^K \mathrm{var}\left( \frac{1}{n}\sum_{i=1}^n z_{jKi}z_{\ell Ki} \right)
 = n^{-1}\sum_{j=1}^K \sum_{\ell=1}^K \mathrm{var}\left( z_{jKi}z_{\ell Ki} \right)
 \le n^{-1} E\left( \sum_{j=1}^K z_{jKi}^2 \sum_{\ell=1}^K z_{\ell Ki}^2 \right)
 = n^{-1} E\left( z_{Ki}'z_{Ki} \right)^2.    (12.29)

Since z_{Ki}'z_{Ki} \le \zeta_K^2 and

E\left( z_{Ki}'z_{Ki} \right) = \mathrm{tr}\left( E\left( z_{Ki}z_{Ki}' \right) \right) = \mathrm{tr}\left( I_K \right) = K,    (12.30)

it follows that

E\left( z_{Ki}'z_{Ki} \right)^2 \le \zeta_K^2 K,    (12.31)

and hence (12.29) is o(1) under Assumption 12.7.1.4. Theorem 5.11.1 shows that this implies (12.27).

Let \lambda_1, \lambda_2, ..., \lambda_K be the eigenvalues of \hat{Q}_K - I_K, which are real as \hat{Q}_K - I_K is symmetric. Then

\left| \lambda_{\min}(\hat{Q}_K) - 1 \right| = \left| \lambda_{\min}(\hat{Q}_K - I_K) \right| \le \left( \sum_{\ell=1}^K \lambda_\ell^2 \right)^{1/2} = \left\| \hat{Q}_K - I_K \right\|

where the second equality is (A.8). This is o_p(1) by (12.27), establishing (12.28).

Proof of Theorem 12.10.1. As above, assume that the regressors have been transformed so that Q_K = I_K.

From expression (12.10) we can substitute to find

\hat{\beta}_K - \beta_K = \left( Z_K'Z_K \right)^{-1}Z_K'e_K = \hat{Q}_K^{-1}\left( \frac{1}{n}Z_K'e_K \right).    (12.32)

Using (12.32) and the Quadratic Inequality,

\left( \hat{\beta}_K - \beta_K \right)'\left( \hat{\beta}_K - \beta_K \right) = n^{-2}e_K'Z_K\hat{Q}_K^{-1}\hat{Q}_K^{-1}Z_K'e_K \le \left( \lambda_{\max}\left( \hat{Q}_K^{-1} \right) \right)^2 n^{-2}e_K'Z_KZ_K'e_K.    (12.33)

Observe that (12.28) implies

\lambda_{\max}\left( \hat{Q}_K^{-1} \right) = \left( \lambda_{\min}\left( \hat{Q}_K \right) \right)^{-1} = O_p(1).    (12.34)

Since e_{Ki} = e_i + r_{Ki}, and using Assumption 12.7.1.2 and (12.16),

\sup_i E\left( e_{Ki}^2 \mid x_i \right) \le \bar{\sigma}^2 + \sup_i r_{Ki}^2 \le \bar{\sigma}^2 + O\left( \zeta_K^2 K^{-2\alpha} \right).    (12.35)

As e_{Ki} are projection errors, they satisfy E\left( z_{Ki}e_{Ki} \right) = 0. Since the observations are independent, using (12.30) and (12.35),

n^{-2}E\left( e_K'Z_KZ_K'e_K \right) = n^{-2}E\left( \sum_{i=1}^n e_{Ki}z_{Ki}' \sum_{j=1}^n z_{Kj}e_{Kj} \right)
 = n^{-2}\sum_{i=1}^n E\left( z_{Ki}'z_{Ki}e_{Ki}^2 \right)
 \le n^{-1}K\left( \bar{\sigma}^2 + O\left( \zeta_K^2 K^{-2\alpha} \right) \right)
 = O\left( \frac{K}{n} \right) + o\left( K^{-2\alpha} \right)    (12.36)

since \zeta_K^2 K/n = o(1). By Markov's inequality it follows that

n^{-2}e_K'Z_KZ_K'e_K = O_p\left( \frac{K}{n} \right) + o_p\left( K^{-2\alpha} \right).    (12.37)

Together, (12.33), (12.34) and (12.37) imply (12.18).

Proof of Theorem 12.12.1. As above, assume that the regressors have been transformed so that Q_K = I_K.

Using m(x) = z_K(x)'\beta_K + r_K(x) and linearity,

\theta = a(m) = a\left( z_K(x)'\beta_K \right) + a(r_K) = a_K'\beta_K + a(r_K).

Combined with (12.32) we find

\hat{\theta}_K - \theta + a(r_K) = a_K'\left( \hat{\beta}_K - \beta_K \right) = \frac{1}{n}a_K'\hat{Q}_K^{-1}Z_K'e_K

and thus, using e_K = e + r_K,

\sqrt{\frac{n}{v_K}}\left( \hat{\theta}_K - \theta + a(r_K) \right)
 = \frac{1}{\sqrt{nv_K}}a_K'\hat{Q}_K^{-1}Z_K'e_K
 = \frac{1}{\sqrt{nv_K}}a_K'Z_K'e_K    (12.38)
 + \frac{1}{\sqrt{nv_K}}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'e    (12.39)
 + \frac{1}{\sqrt{nv_K}}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'r_K.    (12.40)

We take the three terms in turn.

First, take (12.38). We can write

\frac{1}{\sqrt{nv_K}}a_K'Z_K'e_K = \frac{1}{\sqrt{nv_K}}\sum_{i=1}^n a_K'z_{Ki}e_{Ki}.    (12.41)

Observe that a_K'z_{Ki}e_{Ki} are independent across i, mean zero, and have variance v_K. We will apply the Lindeberg CLT 5.7.2, for which it is sufficient to verify Lyapunov's condition (5.6):

\frac{1}{n^2v_K^2}\sum_{i=1}^n E\left( a_K'z_{Ki}e_{Ki} \right)^4 = \frac{1}{nv_K^2}E\left( \left( a_K'z_{Ki} \right)^4 e_{Ki}^4 \right) \to 0.    (12.42)

The assumption \zeta_K K^{-\alpha} = O(1) together with (12.16) means that r_{Ki} is uniformly bounded, say r_{Ki}^2 \le C < \infty. Then using E\left( e_i^4 \mid x_i \right) \le \bar{\kappa},

E\left( e_{Ki}^4 \mid x_i \right) \le 8\left( E\left( e_i^4 \mid x_i \right) + r_{Ki}^4 \right) \le 8\left( \bar{\kappa} + C^2 \right).    (12.43)

Since \left( a_K'z_{Ki} \right)^2 \le \left( a_K'a_K \right)\left( z_{Ki}'z_{Ki} \right) \le \left( a_K'a_K \right)\zeta_K^2 and E\left( a_K'z_{Ki} \right)^2 = a_K'a_K (as Q_K = I_K),

E\left( \left( a_K'z_{Ki} \right)^4 e_{Ki}^4 \right) \le 8\left( \bar{\kappa} + C^2 \right)E\left( a_K'z_{Ki} \right)^4 \le 8\left( \bar{\kappa} + C^2 \right)\left( a_K'a_K \right)^2\zeta_K^2.    (12.44)

Since E\left( e_{Ki}^2 \mid x_i \right) = E\left( e_i^2 \mid x_i \right) + r_{Ki}^2 \ge \underline{\sigma}^2,

v_K = E\left( \left( a_K'z_{Ki} \right)^2 e_{Ki}^2 \right) \ge \underline{\sigma}^2 a_K'E\left( z_{Ki}z_{Ki}' \right)a_K = \underline{\sigma}^2 a_K'a_K.    (12.45)

Combining (12.44) and (12.45),

\frac{1}{nv_K^2}E\left( \left( a_K'z_{Ki} \right)^4 e_{Ki}^4 \right) \le \frac{8\left( \bar{\kappa} + C^2 \right)\zeta_K^2}{\underline{\sigma}^4 n} = o(1)

under Assumption 12.7.1.4. This establishes Lyapunov's condition (12.42). Hence the Lindeberg CLT applies to (12.41) and we conclude

\frac{1}{\sqrt{nv_K}}a_K'Z_K'e_K \to_d N(0,1).

Second, take (12.39). Since E\left( e \mid X \right) = 0 and E\left( ee' \mid X \right) \le \bar{\sigma}^2 I_n, then by the Quadratic Inequality, (12.45), (12.34) and (12.27),

E\left( \left( \frac{1}{\sqrt{nv_K}}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'e \right)^2 \Big| X \right)
 = \frac{1}{nv_K}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'E\left( ee' \mid X \right)Z_K\left( \hat{Q}_K^{-1} - I_K \right)a_K    (12.46)
 \le \frac{\bar{\sigma}^2}{v_K}a_K'\left( \hat{Q}_K^{-1} - I_K \right)\hat{Q}_K\left( \hat{Q}_K^{-1} - I_K \right)a_K
 = \frac{\bar{\sigma}^2}{v_K}a_K'\left( \hat{Q}_K - I_K \right)\hat{Q}_K^{-1}\left( \hat{Q}_K - I_K \right)a_K
 \le \frac{\bar{\sigma}^2 a_K'a_K}{v_K}\lambda_{\max}\left( \hat{Q}_K^{-1} \right)\left\| \hat{Q}_K - I_K \right\|^2
 \le \frac{\bar{\sigma}^2}{\underline{\sigma}^2}\, o_p(1).

This establishes

\frac{1}{\sqrt{nv_K}}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'e \to_p 0.    (12.47)

Third, take (12.40). By the Cauchy-Schwarz inequality, (12.45), and the Quadratic Inequality,

\left( \frac{1}{\sqrt{nv_K}}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'r_K \right)^2
 \le \frac{a_K'a_K}{nv_K}r_K'Z_K\left( \hat{Q}_K^{-1} - I_K \right)\left( \hat{Q}_K^{-1} - I_K \right)Z_K'r_K
 \le \frac{1}{\underline{\sigma}^2}\lambda_{\max}^2\left( \hat{Q}_K^{-1} - I_K \right)\frac{1}{n}r_K'Z_KZ_K'r_K.    (12.48)

Observe that since the observations are independent, E\left( z_{Ki}r_{Ki} \right) = 0, z_{Ki}'z_{Ki} \le \zeta_K^2, and (12.17),

E\left( \frac{1}{n}r_K'Z_KZ_K'r_K \right) = E\left( \frac{1}{n}\sum_{i=1}^n z_{Ki}'z_{Ki}r_{Ki}^2 \right) \le \zeta_K^2 E\left( r_{Ki}^2 \right) \le O\left( \zeta_K^2 K^{-2\alpha} \right) = O(1)

since \zeta_K K^{-\alpha} = O(1). Thus \frac{1}{n}r_K'Z_KZ_K'r_K = O_p(1). This means that (12.48) is o_p(1), since (12.28) implies

\lambda_{\max}\left( \hat{Q}_K^{-1} - I_K \right) = \lambda_{\max}\left( \hat{Q}_K^{-1} \right) - 1 = o_p(1).    (12.49)

Equivalently,

\frac{1}{\sqrt{nv_K}}a_K'\left( \hat{Q}_K^{-1} - I_K \right)Z_K'r_K \to_p 0.    (12.50)

Together, the Lindeberg CLT conclusion, (12.47) and (12.50) applied to the decomposition (12.38)-(12.40) imply

\sqrt{\frac{n}{v_K}}\left( \hat{\theta}_K - \theta + a(r_K) \right) \to_d N(0,1),

completing the proof.

Proof of Theorem 12.13.1. The condition nK^{-2\alpha} \to 0 implies K^{-\alpha} = o\left( n^{-1/2} \right), and thus

\zeta_K K^{-\alpha} \le \zeta_K\, o\left( n^{-1/2} \right) \le o\left( \left( \frac{\zeta_K^2 K}{n} \right)^{1/2} \right) = o(1),

so the conditions of Theorem 12.12.1 are satisfied. It is thus sufficient to show that

\sqrt{\frac{n}{v_K}}\, a(r_K) = o(1).

From (12.12),

r_K(x) = r_K^*(x) + z_K(x)'\gamma_K, \qquad \gamma_K = -\left( E\left( z_{Ki}z_{Ki}' \right) \right)^{-1}E\left( z_{Ki}r_{Ki}^* \right),

so by linearity

\sqrt{\frac{n}{v_K}}\, a(r_K) = \sqrt{\frac{n}{v_K}}\, a(r_K^*)    (12.51)
 + \sqrt{\frac{n}{v_K}}\, a_K'\gamma_K.    (12.52)

By the assumption a(r_K^*) \le O\left( K^{-\alpha} \right), the lower bound (12.45) on v_K, and the assumption that a_K'Q_K^{-1}a_K = a_K'a_K is bounded away from zero, the term (12.51) satisfies

\sqrt{\frac{n}{v_K}}\, a(r_K^*) \le \frac{n^{1/2}O\left( K^{-\alpha} \right)}{\underline{\sigma}\left( a_K'a_K \right)^{1/2}} = O\left( n^{1/2}K^{-\alpha} \right) = o(1)

since nK^{-2\alpha} \to 0. For the term (12.52), the Cauchy-Schwarz inequality, (12.45), and the projection bound \gamma_K'\gamma_K \le E\left( r_{Ki}^{*2} \right) \le O\left( K^{-2\alpha} \right) from (12.13)-(12.14) give

\left( \sqrt{\frac{n}{v_K}}\, a_K'\gamma_K \right)^2 \le \frac{n\left( a_K'a_K \right)\gamma_K'\gamma_K}{v_K} \le \frac{n}{\underline{\sigma}^2}O\left( K^{-2\alpha} \right) = o(1).

Together, both (12.51) and (12.52) are o(1), as required.

Chapter 13

Quantile Regression

13.1 Least Absolute Deviations

Thus far we have focused on measures of the central tendency of y_i. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.

To recall the definition and properties of the median, let y be a continuous random variable. The median \theta = \mathrm{med}(y) is the value such that \Pr(y \le \theta) = \Pr(y \ge \theta) = 0.5. Two useful facts about the median are that

\theta = \underset{\mu}{\mathrm{argmin}} \; E\left| y - \mu \right|    (13.1)

and

E\left( \mathrm{sgn}\left( y - \theta \right) \right) = 0

where

\mathrm{sgn}(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0. \end{cases}

These facts and definitions motivate three estimators of \theta. The first definition is the 50th empirical quantile. The second is the value which minimizes \frac{1}{n}\sum_{i=1}^n |y_i - \theta|, and the third definition is the solution to the moment equation \frac{1}{n}\sum_{i=1}^n \mathrm{sgn}(y_i - \theta) = 0. These distinctions are illusory, however, as these estimators are essentially identical.

Now let's consider the conditional median of y given a random vector x. Let m(x) = \mathrm{med}(y \mid x) denote the conditional median of y given x. The linear median regression model takes the form

y_i = x_i'\beta + e_i, \qquad \mathrm{med}(e_i \mid x_i) = 0.

In this model, the linear function \mathrm{med}(y_i \mid x_i = x) = x'\beta is the conditional median function, and the substantive assumption is that the median function is linear in x.

Conditional analogs of the facts about the median are

\Pr(y_i \le x'\beta \mid x_i = x) = \Pr(y_i > x'\beta \mid x_i = x) = 0.5,

E\left( \mathrm{sgn}(e_i) \mid x_i \right) = 0,

E\left( x_i\mathrm{sgn}(e_i) \right) = 0,

and

\beta = \underset{b}{\mathrm{argmin}} \; E\left| y_i - x_i'b \right|.

Let

LAD_n(\beta) = \frac{1}{n}\sum_{i=1}^n \left| y_i - x_i'\beta \right|

be the average of absolute deviations. The least absolute deviations (LAD) estimator of \beta minimizes this function:

\hat{\beta} = \underset{\beta}{\mathrm{argmin}} \; LAD_n(\beta).

Equivalently, it is a solution to the moment condition

\frac{1}{n}\sum_{i=1}^n x_i\,\mathrm{sgn}\left( y_i - x_i'\hat{\beta} \right) = 0.    (13.2)

Theorem 13.1.1 Asymptotic Distribution of the LAD Estimator
When the conditional median is linear in x,

\sqrt{n}\left( \hat{\beta} - \beta \right) \to_d N(0, V)

where

V = \frac{1}{4}\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}\left( E\left( x_ix_i' \right) \right)\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}

and f(e \mid x) is the conditional density of the error e_i given x_i = x.

The asymptotic variance depends inversely on f(0 \mid x), the conditional density of the error at its median. When f(0 \mid x) is large, then there are many innovations near to the median, and this improves estimation of the median. In the special case where the error is independent of x_i, then f(0 \mid x) = f(0) and the asymptotic variance simplifies to

V = \frac{\left( E\left( x_ix_i' \right) \right)^{-1}}{4f(0)^2}.    (13.3)

This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.

Computation of standard errors for LAD estimates typically is based on equation (13.3). The main difficulty is the estimation of f(0), the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 21. While a complete proof of Theorem 13.1.1 is advanced, we provide a sketch here for completeness.

Proof of Theorem 13.1.1: Similar to NLLS, LAD is an optimization estimator. Let \beta_0 denote the true value of \beta.

The first step is to show that \hat{\beta} \to_p \beta_0. The general nature of the proof is similar to that for the NLLS estimator, and is sketched here. For any fixed \beta, by the WLLN, LAD_n(\beta) \to_p E\left| y_i - x_i'\beta \right|. Furthermore, it can be shown that this convergence is uniform in \beta. (Proving uniform convergence is more challenging than for the NLLS criterion since the LAD criterion is not differentiable in \beta.) It follows that \hat{\beta}, the minimizer of LAD_n(\beta), converges in probability to \beta_0, the minimizer of E\left| y_i - x_i'\beta \right|.

Since \mathrm{sgn}(a) = 1 - 2 \cdot 1(a < 0), (13.2) is equivalent to \bar{g}_n(\hat{\beta}) = 0, where \bar{g}_n(\beta) = n^{-1}\sum_{i=1}^n g_i(\beta) and g_i(\beta) = x_i\left( 1 - 2 \cdot 1\left( y_i < x_i'\beta \right) \right). Let g(\beta) = E\left( g_i(\beta) \right). We need three preliminary results. First, by the central limit theorem (Theorem 5.7.1),

\sqrt{n}\left( \bar{g}_n(\beta_0) - g(\beta_0) \right) = n^{-1/2}\sum_{i=1}^n g_i(\beta_0) \to_d N\left( 0, E\left( x_ix_i' \right) \right)

since E\left( g_i(\beta_0)g_i(\beta_0)' \right) = E\left( x_ix_i' \right). Second, using the law of iterated expectations and the chain rule of differentiation,

\frac{\partial}{\partial \beta'}g(\beta) = \frac{\partial}{\partial \beta'}E\left( x_i\left( 1 - 2 \cdot 1\left( y_i < x_i'\beta \right) \right) \right)
 = -2\frac{\partial}{\partial \beta'}E\left( x_i E\left( 1\left( e_i < x_i'\beta - x_i'\beta_0 \right) \mid x_i \right) \right)
 = -2\frac{\partial}{\partial \beta'}E\left( x_i \int_{-\infty}^{x_i'\beta - x_i'\beta_0} f(e \mid x_i) de \right)
 = -2E\left( x_ix_i'f\left( x_i'\beta - x_i'\beta_0 \mid x_i \right) \right),

so

\frac{\partial}{\partial \beta'}g(\beta_0) = -2E\left( x_ix_i'f(0 \mid x_i) \right).

Together,

\sqrt{n}\left( \hat{\beta} - \beta_0 \right) \simeq \left( \frac{\partial}{\partial \beta'}g(\beta_0) \right)^{-1}\sqrt{n}\,g(\hat{\beta})
 = -\frac{1}{2}\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}\sqrt{n}\left( g(\hat{\beta}) - \bar{g}_n(\hat{\beta}) \right)
 \simeq \frac{1}{2}\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}\sqrt{n}\left( \bar{g}_n(\beta_0) - g(\beta_0) \right)
 \to_d \frac{1}{2}\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}N\left( 0, E\left( x_ix_i' \right) \right) = N(0, V).

The third line follows from an asymptotic empirical process argument and the fact that \hat{\beta} \to_p \beta_0.

13.2 Quantile Regression

Quantile regression has become quite popular in recent econometric practice. For \tau \in [0,1] the \tau-th quantile Q_\tau of a random variable with distribution function F(u) is defined as

Q_\tau = \inf\{ u : F(u) \ge \tau \}.

When F(u) is continuous and strictly monotonic, then F(Q_\tau) = \tau, so you can think of the quantile as the inverse of the distribution function. The quantile Q_\tau is the value such that \tau (percent) of the mass of the distribution is less than Q_\tau. The median is the special case \tau = 0.5.

The following alternative representation is useful. If the random variable U has \tau-th quantile Q_\tau, then

Q_\tau = \underset{\theta}{\mathrm{argmin}} \; E\left( \rho_\tau\left( U - \theta \right) \right)    (13.4)

where \rho_\tau(q) is the piecewise linear function

\rho_\tau(q) = \begin{cases} -q(1-\tau) & q < 0 \\ q\tau & q \ge 0 \end{cases} = q\left( \tau - 1(q < 0) \right).    (13.5)

For the random variables (y_i, x_i) with conditional distribution function F(y \mid x), the conditional quantile function Q_\tau(x) is

Q_\tau(x) = \inf\{ y : F(y \mid x) \ge \tau \}.

For fixed \tau, the quantile regression function Q_\tau(x) describes how the \tau-th quantile of the conditional distribution varies with the regressors.

As functions of x, the quantile regression functions can take any shape. However for computational convenience it is typical to assume that they are (approximately) linear in x (after suitable transformations). This linear specification assumes that Q_\tau(x) = \beta_\tau'x where the coefficients \beta_\tau vary across the quantiles \tau. We then have the linear quantile regression model

y_i = x_i'\beta_\tau + e_i

where e_i is the error defined to be the difference between y_i and its \tau-th conditional quantile x_i'\beta_\tau. By construction, the \tau-th conditional quantile of e_i is zero; otherwise its properties are unspecified without further restrictions.

Given the representation (13.4), the quantile regression estimator \hat{\beta}_\tau for \beta_\tau solves the minimization problem

\hat{\beta}_\tau = \underset{\beta}{\mathrm{argmin}} \; S_n^\tau(\beta)

where

S_n^\tau(\beta) = \frac{1}{n}\sum_{i=1}^n \rho_\tau\left( y_i - x_i'\beta \right)

and \rho_\tau(q) is defined in (13.5).

Since the quantile regression criterion function S_n^\tau(\beta) does not have an algebraic solution, numerical methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have been developed for this problem, and are widely available.

An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 13.1.1.

Theorem 13.2.1 Asymptotic Distribution of the Quantile Regression Estimator
When the \tau-th conditional quantile is linear in x,

\sqrt{n}\left( \hat{\beta}_\tau - \beta_\tau \right) \to_d N(0, V_\tau),

where

V_\tau = \tau(1-\tau)\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}\left( E\left( x_ix_i' \right) \right)\left( E\left( x_ix_i'f(0 \mid x_i) \right) \right)^{-1}

and f(e \mid x) denotes the conditional density of e_i given x_i = x.

In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error e_i is independent of x_i, then f(0 \mid x_i) = f(0), the unconditional density of e_i at 0, and we have the simplification

V_\tau = \frac{\tau(1-\tau)}{f(0)^2}\left( E\left( x_ix_i' \right) \right)^{-1}.
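The linear programming reformulation mentioned above is simple to state: write y_i - x_i'\beta = u_i - v_i with u_i, v_i \ge 0 and minimize \tau \sum u_i + (1-\tau)\sum v_i. The following Python sketch (an illustration, not the manuscript's code; it assumes the scipy LP solver and simulated heteroskedastic data) implements exactly this formulation, with LAD as the special case \tau = 0.5.

import numpy as np
from scipy.optimize import linprog

def quantile_regression(y, X, tau):
    """Quantile regression coefficients via the standard LP formulation:
    minimize tau*sum(u) + (1-tau)*sum(v) subject to X b + u - v = y, u, v >= 0."""
    n, k = X.shape
    # decision variables: (b_1..b_k, u_1..u_n, v_1..v_n)
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

rng = np.random.default_rng(8)
n = 500
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

for tau in (0.25, 0.5, 0.75):
    print(tau, np.round(quantile_regression(y, X, tau), 2))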

Exercises

Exercise 13.1 For any predictor g(x_i) for y_i, the mean absolute error (MAE) is

E\left| y_i - g(x_i) \right|.

Show that the function g(x) which minimizes the MAE is the conditional median m(x) = \mathrm{med}(y_i \mid x_i).

Exercise 13.2 Define

g(u) = \tau - 1(u < 0)

where 1(\cdot) is the indicator function (takes the value 1 if the argument is true, else equals zero). Let \theta satisfy E\left( g(y_i - \theta) \right) = 0. Is \theta a quantile of the distribution of y_i?

Exercise 13.3 Verify equation (13.4).

Chapter 14

Generalized Method of Moments

14.1 Overidentified Linear Model

Consider the linear model

y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E\left( x_ie_i \right) = 0,

where x_{1i} is k x 1 and x_{2i} is r x 1 with \ell = k + r. We know that without further restrictions, an asymptotically efficient estimator of \beta is the OLS estimator. Now suppose that we are given the information that \beta_2 = 0. Now we can write the model as

y_i = x_{1i}'\beta_1 + e_i, \qquad E\left( x_ie_i \right) = 0.

In this case, how should \beta_1 be estimated? One method is OLS regression of y_i on x_{1i} alone. This method, however, is not necessarily efficient, as there are \ell restrictions in E\left( x_ie_i \right) = 0, while \beta_1 is of dimension k < \ell. This situation is called overidentified. There are \ell - k = r more moment restrictions than free parameters. We call r the number of overidentifying restrictions.

This is a special case of a more general class of moment condition models. Let g(y, x, z, \beta) be an \ell x 1 function of a k x 1 parameter \beta with \ell \ge k such that

E\left( g(y_i, x_i, z_i, \beta_0) \right) = 0    (14.1)

where \beta_0 is the true value of \beta. In our previous example, g(y, z, \beta) = z(y - x_1'\beta_1). In econometrics, this class of models is called moment condition models. In the statistics literature, these are known as estimating equations.

As an important special case we will devote special attention to linear moment condition models, which can be written as

y_i = x_i'\beta + e_i, \qquad E\left( z_ie_i \right) = 0,

where the dimensions of x_i and z_i are k x 1 and \ell x 1, with \ell \ge k. If k = \ell the model is just identified, otherwise it is overidentified. The variables x_i may be components and functions of z_i, but this is not required. This model falls in the class (14.1) by setting

g(y, x, z, \beta_0) = z(y - x'\beta_0).    (14.2)

14.2 GMM Estimator

The sample analog of the moment condition (14.2) is

\bar{g}_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta) = \frac{1}{n}\sum_{i=1}^n z_i\left( y_i - x_i'\beta \right) = \frac{1}{n}\left( Z'y - Z'X\beta \right).    (14.3)

The method of moments estimator for \beta is defined as the parameter value which sets \bar{g}_n(\beta) = 0. This is generally not possible when \ell > k, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets \bar{g}_n(\beta) close to zero.

For some \ell x \ell weight matrix W_n > 0, let

J_n(\beta) = n\,\bar{g}_n(\beta)'W_n\bar{g}_n(\beta).

This is a non-negative measure of the "length" of the vector \bar{g}_n(\beta). For example, if W_n = I, then J_n(\beta) = n\,\bar{g}_n(\beta)'\bar{g}_n(\beta) = n\left\| \bar{g}_n(\beta) \right\|^2, the square of the Euclidean length. The GMM estimator minimizes J_n(\beta):

\hat{\beta}_{GMM} = \underset{\beta}{\mathrm{argmin}} \; J_n(\beta).

Note that if k = \ell, then \bar{g}_n(\hat{\beta}) = 0, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are

0 = \frac{\partial}{\partial \beta}J_n(\hat{\beta}) = 2\frac{\partial}{\partial \beta}\bar{g}_n(\hat{\beta})'W_n\bar{g}_n(\hat{\beta}) = -2\left( \frac{1}{n}X'Z \right)W_n\left( \frac{1}{n}Z'\left( y - X\hat{\beta} \right) \right)

so

2\,X'ZW_nZ'X\hat{\beta} = 2\,X'ZW_nZ'y,

which yields the following.

Proposition 14.2.1

\hat{\beta}_{GMM} = \left( X'ZW_nZ'X \right)^{-1}X'ZW_nZ'y.

While the estimator depends on W_n, the dependence is only up to scale, for if W_n is replaced by cW_n for some c > 0, \hat{\beta}_{GMM} does not change.

14.3 Distribution of GMM Estimator

Assume that W_n \to_p W > 0. Let

Q = E\left( z_ix_i' \right)

and

\Omega = E\left( z_iz_i'e_i^2 \right) = E\left( g_ig_i' \right),

where g_i = z_ie_i. Then

\left( \frac{1}{n}X'Z \right)W_n\left( \frac{1}{n}Z'X \right) \to_p Q'WQ

and

\left( \frac{1}{n}X'Z \right)W_n\left( \frac{1}{\sqrt{n}}Z'e \right) \to_d Q'W\,N(0, \Omega).

We conclude:

Theorem 14.3.1 Asymptotic Distribution of GMM Estimator

\sqrt{n}\left( \hat{\beta} - \beta \right) \to_d N(0, V_\beta),

where

V_\beta = \left( Q'WQ \right)^{-1}\left( Q'W\Omega WQ \right)\left( Q'WQ \right)^{-1}.

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.

The optimal weight matrix W_0 is one which minimizes V_\beta. This turns out to be W_0 = \Omega^{-1}. The proof is left as an exercise. This yields the efficient GMM estimator

\hat{\beta} = \left( X'Z\Omega^{-1}Z'X \right)^{-1}X'Z\Omega^{-1}Z'y.

Thus we have

\sqrt{n}\left( \hat{\beta} - \beta \right) \to_d N\left( 0, \left( Q'\Omega^{-1}Q \right)^{-1} \right).

In practice, W_0 = \Omega^{-1} is not known, but it can be estimated consistently. For any W_n \to_p W_0, we still call \hat{\beta} the efficient GMM estimator, as it has the same asymptotic distribution.

By "efficient", we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices W_n. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).

If it is known that E\left( g_i(\beta) \right) = 0, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than \left( G'\Omega^{-1}G \right)^{-1} where G = E\left( \frac{\partial}{\partial \beta'}g_i(\beta) \right). Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.

This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.

14.4 Estimation of the Efficient Weight Matrix

Given any weight matrix W_n > 0, the GMM estimator β̂ is consistent yet inefficient. For example, we can set W_n = I_ℓ. In the linear model, a better choice is W_n = (Z'Z)^{−1}. Given any such first-step estimator, we can define the residuals ê_i = y_i − x_i'β̂ and moment equations ĝ_i = z_i ê_i = g(y_i, x_i, z_i, β̂). Construct

ḡ_n = ḡ_n(β̂) = (1/n) Σ_{i=1}^n ĝ_i,
ĝ_i* = ĝ_i − ḡ_n,

and define

W_n = ((1/n) Σ_{i=1}^n ĝ_i* ĝ_i*')^{−1} = ((1/n) Σ_{i=1}^n ĝ_i ĝ_i' − ḡ_n ḡ_n')^{−1}.    (14.4)

Then W_n →p Ω^{−1} = W_0, and GMM using W_n as the weight matrix is asymptotically efficient.

A common alternative choice is to set

W_n = ((1/n) Σ_{i=1}^n ĝ_i ĝ_i')^{−1},

which uses the uncentered moment conditions. Since Eg_i = 0, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. Eg_i ≠ 0, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (14.4) above.

Here is a simple way to compute the efficient GMM estimator for the linear model. First, set W_n = (Z'Z)^{−1}, estimate β̂ using this weight matrix, and construct the residual ê_i = y_i − x_i'β̂. Then set ĝ_i = z_i ê_i, and let ĝ be the associated n×ℓ matrix. Then the efficient GMM estimator is

β̂ = (X'Z (ĝ'ĝ − n ḡ_n ḡ_n')^{−1} Z'X)^{−1} X'Z (ĝ'ĝ − n ḡ_n ḡ_n')^{−1} Z'y.

In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.

An estimator of the asymptotic variance of β̂ can be seen from the above formula. Set

V̂ = n (X'Z (ĝ'ĝ − n ḡ_n ḡ_n')^{−1} Z'X)^{−1}.

Asymptotic standard errors are given by the square roots of the diagonal elements of (1/n)V̂.

There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of β. The criterion function is then

J(β) = n ḡ_n(β)' ((1/n) Σ_{i=1}^n g_i*(β) g_i*(β)')^{−1} ḡ_n(β),

where

g_i*(β) = g_i(β) − ḡ_n(β).

The β̂ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).

The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.
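The two-step procedure described above is straightforward to code. The following sketch (Python/NumPy; the names are illustrative and it reuses the hypothetical gmm_linear helper from the earlier sketch) uses W_n = (Z'Z)^{-1} in the first step and the centered weight matrix (14.4) in the second, and returns asymptotic standard errors computed from the efficient variance formula.

    import numpy as np

    def efficient_gmm_linear(y, X, Z):
        n = len(y)
        # Step 1: first-step GMM with W = (Z'Z)^{-1}
        b1 = gmm_linear(y, X, Z, np.linalg.inv(Z.T @ Z))
        # Centered weight matrix (14.4): g_i = z_i * e_i
        e = y - X @ b1
        g = Z * e[:, None]                            # n x l, rows g_i'
        gbar = g.mean(axis=0)
        Omega = g.T @ g / n - np.outer(gbar, gbar)
        W2 = np.linalg.inv(Omega)
        # Step 2: efficient GMM
        b2 = gmm_linear(y, X, Z, W2)
        # Standard errors: V_hat/n where V_hat is the variance of sqrt(n)(b - beta)
        XZ = X.T @ Z
        V_scaled = np.linalg.inv(XZ @ W2 @ XZ.T)      # equals V_hat / n^2
        se = np.sqrt(n * np.diag(V_scaled))
        return b2, se, W2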

14.5 GMM: The General Case

In its most general form, GMM applies whenever an economic or statistical model implies the ℓ×1 moment condition

E(g_i(β)) = 0.

Often, this is all that is known. Identification requires ℓ ≥ k = dim(β). The GMM estimator minimizes

J(β) = n ḡ_n(β)' W_n ḡ_n(β)

where

ḡ_n(β) = (1/n) Σ_{i=1}^n g_i(β)

and

W_n = ((1/n) Σ_{i=1}^n ĝ_i ĝ_i' − ḡ_n ḡ_n')^{−1},

with ĝ_i = g_i(β̃) constructed using a preliminary consistent estimator β̃, perhaps obtained by first setting W_n = I. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix W_n is updated, and then β̂ recomputed. This estimator can be iterated if needed.

Theorem 14.5.1 Distribution of Nonlinear GMM Estimator
Under general regularity conditions,

√n(β̂ − β) →d N(0, (G'Ω^{−1}G)^{−1}),

where Ω = E(g_i g_i') and G = E (∂/∂β') g_i(β). The variance of β̂ may be estimated by

V̂ = (Ĝ'Ω̂^{−1}Ĝ)^{−1}

where Ω̂ = n^{−1} Σ_i ĝ_i* ĝ_i*' and Ĝ = n^{−1} Σ_i (∂/∂β') g_i(β̂).

The general theory of GMM estimation and testing was exposited by L. Hansen (1982).

14.6 Over-Identification Test

Overidentified models (ℓ > k) are special in the sense that there may not be a parameter value β such that the moment condition

Eg(y_i, x_i, z_i, β) = 0

holds. Thus the overidentifying restrictions of the model are testable.

For example, take the linear model y_i = β_1'x_{1i} + β_2'x_{2i} + e_i with E(x_{1i}e_i) = 0 and E(x_{2i}e_i) = 0. It is possible that β_2 = 0, so that the linear equation may be written as y_i = β_1'x_{1i} + e_i. However, it is possible that β_2 ≠ 0, and in this case it would be impossible to find a value of β_1 so that both E(x_{1i}(y_i − x_{1i}'β_1)) = 0 and E(x_{2i}(y_i − x_{1i}'β_1)) = 0 hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.

Note that ḡ_n →p Eg_i, and thus ḡ_n can be used to assess whether or not the hypothesis that Eg_i = 0 is true or not. The criterion function at the parameter estimates is

J_n = n ḡ_n' W_n ḡ_n = n² ḡ_n' (ĝ'ĝ − n ḡ_n ḡ_n')^{−1} ḡ_n.

Theorem 14.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,

J_n = J_n(β̂) →d χ²_{ℓ−k}.

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.

The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic J exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic J whenever GMM is the estimation method.

When over-identified models are estimated by GMM, it is customary to report the J statistic as a general test of model adequacy.
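Computing J is essentially free once the efficient GMM estimate is in hand. A minimal sketch (Python/NumPy with SciPy for the chi-square p-value; names are illustrative, and W is the efficient weight matrix from the earlier sketch):

    import numpy as np
    from scipy import stats

    def j_statistic(y, X, Z, b, W):
        # J_n = n * gbar_n(b)' W gbar_n(b), with W the efficient weight matrix
        n = len(y)
        gbar = Z.T @ (y - X @ b) / n
        J = n * gbar @ W @ gbar
        df = Z.shape[1] - X.shape[1]            # l - k overidentifying restrictions
        pvalue = 1.0 - stats.chi2.cdf(J, df)
        return J, pvalue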

14.7 Hypothesis Testing: The Distance Statistic

We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses.

If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).

For a given weight matrix W_n, the GMM criterion function is

J_n(β) = n ḡ_n(β)' W_n ḡ_n(β).

For h : R^k → R^r, the hypothesis is

H_0 : h(β) = 0.

The estimates under H_1 are

β̂ = argmin_β J_n(β)

and those under H_0 are

β̃ = argmin_{h(β)=0} J_n(β).

The two minimizing criterion functions are J_n(β̂) and J_n(β̃). The GMM distance statistic is the difference

D_n = J_n(β̃) − J_n(β̂).

Proposition 14.7.1 If the same weight matrix W_n is used for both null and alternative,

1. D_n ≥ 0
2. D_n →d χ²_r
3. If h is linear in β, then D_n equals the Wald statistic.

If h is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the D_n statistic appears to have quite good sampling properties, and is the preferred test statistic.

Newey and West (1987) suggested to use the same weight matrix W_n for both null and alternative, as this ensures that D_n ≥ 0. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.

This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.

14.8 Conditional Moment Restrictions

In many contexts, the model implies more than an unconditional moment restriction of the form Eg_i(β) = 0. It implies a conditional moment restriction of the form

E(e_i(β) | z_i) = 0

where e_i(β) is some s×1 function of the observation and the parameters. In many cases, s = 1.

It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.

Our linear model y_i = x_i'β + e_i with instruments z_i falls into this class under the stronger assumption E(e_i | z_i) = 0. Then e_i(β) = y_i − x_i'β.

It is also helpful to realize that conventional regression models also fall into this class, except that in this case x_i = z_i. For example, in linear regression, e_i(β) = y_i − x_i'β, while in a nonlinear regression model e_i(β) = y_i − g(x_i, β). In a joint model of the conditional mean and variance

e_i(β, γ) = ( y_i − x_i'β ,  (y_i − x_i'β)² − f(x_i)'γ )'.

Here s = 2.

Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any ℓ×1 function φ(x_i, β), we can set g_i(β) = φ(x_i, β) e_i(β), which satisfies Eg_i(β) = 0 and hence defines a GMM estimator. The obvious problem is that the class of functions φ is infinite. Which should be selected?

This is equivalent to the problem of selection of the best instruments. If x_i is a valid instrument satisfying E(e_i | x_i) = 0, then x_i, x_i², x_i³, ..., etc., are all valid instruments. Which should be used?

One solution is to construct an infinite list of potent instruments, and then use the first k instruments. How is k to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).

Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case s = 1. Let

R_i = E( (∂/∂β) e_i(β) | z_i )

and

σ_i² = E( e_i(β)² | z_i ).

Then the optimal instrument is

A_i = −σ_i^{−2} R_i,

yielding the moment condition

g_i(β) = A_i e_i(β).

Setting g_i(β) to be this choice (which is k×1, so is just-identified) yields the best GMM estimator possible.

In practice, A_i is unknown, but its form does help us think about construction of optimal instruments.

In the linear model e_i(β) = y_i − x_i'β, note that

R_i = −E(x_i | z_i)

and

σ_i² = E(e_i² | z_i),

so

A_i = σ_i^{−2} E(x_i | z_i).

In the case of linear regression, where x_i = z_i, this reduces to weighting by the inverse conditional variance, the idea behind GLS as discussed earlier in the course.

In the case of endogenous variables, note that the efficient instrument A_i involves the estimation of the conditional mean of x_i given z_i. In other words, to get the best instrument for x_i, we need the best conditional mean model for x_i given z_i, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of e_i. This is the same as the GLS estimator; namely, that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.

14.9 Bootstrap GMM Inference

Let β̂ be the 2SLS or GMM estimator of β. Using the EDF of (y_i, z_i, x_i), we can apply the bootstrap methods discussed in Chapter 10 to compute estimates of the bias and variance of β̂, and construct confidence intervals for β, identically as in the regression model. However, caution should be applied when interpreting such results.

A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied J test will yield the wrong answer.

The problem is that in the sample, β̂ is the "true" value and yet ḡ_n(β̂) ≠ 0. Thus according to random variables (y_i*, z_i*, x_i*) drawn from the EDF F_n,

E(g_i*(β̂)) = ḡ_n(β̂) ≠ 0.

This means that (y_i*, z_i*, x_i*) do not satisfy the same moment conditions as the population distribution.

A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample (y*, Z*, X*), define the bootstrap GMM criterion

J_n*(β) = n (ḡ_n*(β) − ḡ_n(β̂))' W_n* (ḡ_n*(β) − ḡ_n(β̂)),

where ḡ_n(β̂) is from the in-sample data, not from the bootstrap data.

Let β̂* minimize J_n*(β), and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is

β̂* = (X*'Z* W_n* Z*'X*)^{−1} X*'Z* W_n* (Z*'y* − Z'ê),

where ê = y − Xβ̂ are the in-sample residuals. The bootstrap J statistic is J_n*(β̂*).

Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities p̂_i described in Chapter 15. Since Σ_{i=1}^n p̂_i g_i(β̂) = 0, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments is needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
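The key computational point in the Hall-Horowitz correction is the recentering of the bootstrap moments at ḡ_n(β̂). A minimal sketch of the recentered bootstrap for the linear model (Python/NumPy; names are illustrative, and the weight matrix is held fixed across bootstrap draws purely for brevity, which is only one of several possible choices):

    import numpy as np

    def hall_horowitz_bootstrap(y, X, Z, b_hat, W, B=999, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        n = len(y)
        gbar_hat = Z.T @ (y - X @ b_hat) / n       # in-sample moment; nonzero if l > k
        draws = []
        for _ in range(B):
            idx = rng.integers(0, n, size=n)       # nonparametric resample of (y, x, z)
            ys, Xs, Zs = y[idx], X[idx], Z[idx]
            # Closed form for the minimizer of the recentered criterion
            XZ = Xs.T @ Zs
            A = XZ @ W @ XZ.T
            rhs = XZ @ W @ (Zs.T @ ys - n * gbar_hat)
            draws.append(np.linalg.solve(A, rhs))
        return np.array(draws)                      # B x k bootstrap estimates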

Exercises

Exercise 14.1 Take the model

y_i = x_i'β + e_i
E(x_i e_i) = 0
e_i² = z_i'γ + η_i
E(z_i η_i) = 0.

Find the method of moments estimators (β̂, γ̂) for (β, γ).

Exercise 14.2 Take the single equation

y = Xβ + e
E(e | Z) = 0.

Assume E(e_i² | z_i) = σ². Show that if β̂ is estimated by GMM with weight matrix W_n = (Z'Z)^{−1}, then

√n(β̂ − β) →d N(0, σ²(Q'M^{−1}Q)^{−1}),

where Q = E(z_i x_i') and M = E(z_i z_i').

Exercise 14.3 Take the model y_i = x_i'β + e_i with E(z_i e_i) = 0. Let ê_i = y_i − x_i'β̂ where β̂ is consistent for β (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix

W_n = ((1/n) Σ_{i=1}^n z_i z_i' ê_i²)^{−1}.

Show that W_n →p Ω^{−1} where Ω = E(z_i z_i' e_i²).

Exercise 14.4 In the linear model estimated by GMM with general weight matrix W, the asymptotic variance of β̂_GMM is

V = (Q'WQ)^{−1} Q'WΩWQ (Q'WQ)^{−1}.

(a) Let V_0 be this matrix when W = Ω^{−1}. Show that V_0 = (Q'Ω^{−1}Q)^{−1}.

(b) We want to show that for any W, V − V_0 is positive semi-definite (for then V_0 is the smaller possible covariance matrix and W = Ω^{−1} is the efficient weight matrix). To do this, start by finding matrices A and B such that V = A'ΩA and V_0 = B'ΩB.

(c) Show that B'ΩA = B'ΩB and therefore that B'Ω(A − B) = 0.

(d) Use the expressions V = A'ΩA, A = B + (A − B), and B'Ω(A − B) = 0 to show that V ≥ V_0.

Exercise 14.5 The equation of interest is

y_i = m(x_i, β) + e_i
E(z_i e_i) = 0.

The observed data is (y_i, z_i, x_i); z_i is ℓ×1 and β is k×1, with ℓ ≥ k. Show how to construct an efficient GMM estimator for β.

Exercise 14.6 In the linear model y = Xβ + e with E(x_i e_i) = 0, a Generalized Method of Moments (GMM) criterion function for β is defined as

J_n(β) = (1/n)(y − Xβ)'X Ω̂^{−1} X'(y − Xβ),    (14.5)

where Ω̂ = (1/n) Σ_{i=1}^n x_i x_i' ê_i², ê_i = y_i − x_i'β̂ are the OLS residuals, and β̂ = (X'X)^{−1}X'y is LS. The GMM estimator of β, subject to the restriction h(β) = 0, is defined as

β̃ = argmin_{h(β)=0} J_n(β).

The GMM test statistic (distance statistic) of the hypothesis h(β) = 0 is

D = J_n(β̃) = min_{h(β)=0} J_n(β).    (14.6)

(a) Show that you can rewrite J_n(β) in (14.5) as J_n(β) = n(β − β̂)' V̂^{−1} (β − β̂), where V̂ = n(X'X)^{−1}(Σ_{i=1}^n x_i x_i' ê_i²)(X'X)^{−1}.

(b) Show that in this setting, the distance statistic D in (14.6) equals the Wald statistic.

Exercise 14.7 Take the linear model

y_i = x_i'β + e_i
E(z_i e_i) = 0,

and consider the GMM estimator β̂ of β. Let

J_n = n ḡ_n(β̂)' Ω̂^{−1} ḡ_n(β̂)

denote the test of overidentifying restrictions. Show that J_n →d χ²_{ℓ−k} as n → ∞ by demonstrating each of the following:

(a) Since Ω > 0, we can write Ω^{−1} = CC' and Ω = (C')^{−1}C^{−1}.

(b) J_n = n (C'ḡ_n(β̂))' (C'Ω̂C)^{−1} C'ḡ_n(β̂).

(c) C'ḡ_n(β̂) = D_n C'ḡ_n(β_0), where

D_n = I_ℓ − C'((1/n)Z'X)(((1/n)X'Z) Ω̂^{−1} ((1/n)Z'X))^{−1}((1/n)X'Z) Ω̂^{−1} (C')^{−1}

and ḡ_n(β_0) = (1/n)Z'e.

(d) D_n →p I_ℓ − R(R'R)^{−1}R', where R = C'E(z_i x_i').

(e) n^{1/2} C'ḡ_n(β_0) →d u ~ N(0, I_ℓ).

(f) J_n →d u'(I_ℓ − R(R'R)^{−1}R')u.

(g) u'(I_ℓ − R(R'R)^{−1}R')u ~ χ²_{ℓ−k}.

Hint: I_ℓ − R(R'R)^{−1}R' is a projection matrix.

Chapter 15

Empirical Likelihood

15.1 Non-Parametric Likelihood

An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric analog of likelihood estimation.

The idea is to construct a multinomial distribution F(p_1, ..., p_n) which places probability p_i at each observation. To be a valid multinomial distribution, these probabilities must satisfy the requirements that p_i ≥ 0 and

Σ_{i=1}^n p_i = 1.    (15.1)

Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is

log L(p_1, ..., p_n) = Σ_{i=1}^n log(p_i).    (15.2)

First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities (p_1, ..., p_n) are those which maximize the log-likelihood subject to the constraint (15.1). This is equivalent to maximizing

Σ_{i=1}^n log(p_i) − μ(Σ_{i=1}^n p_i − 1),

where μ is a Lagrange multiplier. Combined with the constraint (15.1) we find that the MLE is p_i = n^{−1}, yielding the log-likelihood −n log(n).

Now consider the case of an overidentified model with moment condition

Eg_i(β_0) = 0.

A multinomial distribution which places probability p_i at each observation (y_i, x_i, z_i) will satisfy this condition if and only if

Σ_{i=1}^n p_i g_i(β) = 0.    (15.3)

The empirical likelihood estimator is the value of β (and probabilities) which maximizes the multinomial log-likelihood (15.2) subject to the restrictions (15.1) and (15.3).

The Lagrangian for this constrained maximization problem is

L(β, p_1, ..., p_n, λ, μ) = Σ_{i=1}^n log(p_i) − μ(Σ_{i=1}^n p_i − 1) − n λ' Σ_{i=1}^n p_i g_i(β),

where λ and μ are Lagrange multipliers. The first-order conditions with respect to p_i, μ and λ are

1/p_i = μ + n λ'g_i(β),
Σ_{i=1}^n p_i = 1,
Σ_{i=1}^n p_i g_i(β) = 0.

Multiplying the first equation by p_i, summing over i, and using the second and third equations, we find μ = n and

p_i = 1 / ( n (1 + λ'g_i(β)) ).

Substituting into L we find

R(β, λ) = −n log(n) − Σ_{i=1}^n log(1 + λ'g_i(β)).    (15.4)

For given β, the Lagrange multiplier λ(β) minimizes R(β, λ):

λ(β) = argmin_λ R(β, λ).    (15.5)

This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since R(β, λ) is a convex function of λ. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 15.5). This yields the (profile) empirical log-likelihood function for β:

R(β) = R(β, λ(β)) = −n log(n) − Σ_{i=1}^n log(1 + λ(β)'g_i(β)).

The EL estimate β̂ is the value which maximizes R(β), or equivalently minimizes its negative:

β̂ = argmin_β [−R(β)].    (15.6)

As a by-product of estimation, we also obtain the Lagrange multiplier λ̂ = λ(β̂), the probabilities

p̂_i = 1 / ( n (1 + λ̂'g_i(β̂)) ),

and the maximized empirical likelihood

R(β̂) = Σ_{i=1}^n log(p̂_i).    (15.7)

15.2 Asymptotic Distribution of EL Estimator

Let β_0 denote the true value of β and define

G_i(β) = (∂/∂β') g_i(β),    (15.8)
G = E G_i(β_0),
Ω = E( g_i(β_0) g_i(β_0)' ),    (15.9)

and

V = (G'Ω^{−1}G)^{−1},
V_λ = Ω − G(G'Ω^{−1}G)^{−1}G'.    (15.10)

For example, in the linear model, G_i(β) = −z_i x_i', G = −E(z_i x_i'), and Ω = E(z_i z_i' e_i²).

Theorem 15.2.1 Under regularity conditions,

√n(β̂ − β_0) →d N(0, V),
√n λ̂ →d Ω^{−1} N(0, V_λ),

where V and V_λ are defined in (15.10), and √n(β̂ − β_0) and √n λ̂ are asymptotically independent.

The theorem shows that the asymptotic variance V for β̂ is the same as for efficient GMM. Thus the EL estimator is asymptotically efficient.

Chamberlain (1987) showed that V is the semiparametric efficiency bound for β in the overidentified moment condition model. This means that no consistent estimator for this class of models can have a lower asymptotic variance than V. Since the EL estimator achieves this bound, it is an asymptotically efficient estimator for β.

Proof of Theorem 15.2.1. (β̂, λ̂) jointly solve

0 = (∂/∂λ) R(β, λ) = −Σ_{i=1}^n g_i(β̂) / (1 + λ̂'g_i(β̂)),    (15.11)
0 = (∂/∂β) R(β, λ) = −Σ_{i=1}^n G_i(β̂)'λ̂ / (1 + λ̂'g_i(β̂)).    (15.12)

Let G_n = (1/n) Σ_{i=1}^n G_i(β_0), ḡ_n = (1/n) Σ_{i=1}^n g_i(β_0), and Ω_n = (1/n) Σ_{i=1}^n g_i(β_0) g_i(β_0)'.

Expanding (15.12) around β = β_0 and λ = λ_0 = 0 yields

0 ≃ G_n'λ̂.    (15.13)

Expanding (15.11) around β = β_0 and λ = λ_0 = 0 yields

0 ≃ −ḡ_n − G_n(β̂ − β_0) + Ω_n λ̂.    (15.14)

Premultiplying by G_n'Ω_n^{−1} and using (15.13) yields

0 ≃ −G_n'Ω_n^{−1}ḡ_n − G_n'Ω_n^{−1}G_n(β̂ − β_0) + G_n'Ω_n^{−1}Ω_n λ̂
  = −G_n'Ω_n^{−1}ḡ_n − G_n'Ω_n^{−1}G_n(β̂ − β_0).

Solving for β̂ and using the WLLN and CLT yields

√n(β̂ − β_0) ≃ −(G_n'Ω_n^{−1}G_n)^{−1} G_n'Ω_n^{−1} √n ḡ_n    (15.15)
  →d (G'Ω^{−1}G)^{−1} G'Ω^{−1} N(0, Ω)
  = N(0, V).

Solving (15.14) for λ̂ and using (15.15) yields

√n λ̂ ≃ Ω_n^{−1}( I − G_n(G_n'Ω_n^{−1}G_n)^{−1}G_n'Ω_n^{−1} ) √n ḡ_n    (15.16)
  →d Ω^{−1}( I − G(G'Ω^{−1}G)^{−1}G'Ω^{−1} ) N(0, Ω)
  = Ω^{−1} N(0, V_λ).

Furthermore, since

G'( I − Ω^{−1}G(G'Ω^{−1}G)^{−1}G' ) = 0,

√n(β̂ − β_0) and √n λ̂ are asymptotically uncorrelated and hence independent.

15.3 Overidentifying Restrictions

In a parametric likelihood context, tests are based on the difference in the log likelihood functions. The same statistic can be constructed for empirical likelihood. Twice the difference between the unrestricted empirical log-likelihood −n log(n) and the maximized empirical log-likelihood for the model (15.7) is

LR_n = Σ_{i=1}^n 2 log( 1 + λ̂'g_i(β̂) ).    (15.17)

Theorem 15.3.1 If Eg_i(β_0) = 0 then LR_n →d χ²_{ℓ−k}.

The EL overidentification test is similar to the GMM overidentification test. They are asymptotically first-order equivalent, and have the same interpretation. The overidentification test is a very useful by-product of EL estimation, and it is advisable to report the statistic LR_n whenever EL is the estimation method.

Proof of Theorem 15.3.1. First, by a Taylor expansion, (15.15), and (15.16),

(1/√n) Σ_{i=1}^n g_i(β̂) ≃ √n ( ḡ_n + G_n(β̂ − β_0) )
  ≃ ( I − G_n(G_n'Ω_n^{−1}G_n)^{−1}G_n'Ω_n^{−1} ) √n ḡ_n
  ≃ Ω_n √n λ̂.

Second, since log(1 + u) ≃ u − u²/2 for u small,

LR_n = Σ_{i=1}^n 2 log( 1 + λ̂'g_i(β̂) )
  ≃ 2 λ̂' Σ_{i=1}^n g_i(β̂) − λ̂' Σ_{i=1}^n g_i(β̂) g_i(β̂)' λ̂
  ≃ n λ̂' Ω_n λ̂
  →d N(0, V_λ)' Ω^{−1} N(0, V_λ)
  = χ²_{ℓ−k}.

15.4 Testing

Let the maintained model be

Eg_i(β) = 0,    (15.18)

where g is ℓ×1 and β is k×1. By "maintained" we mean that the overidentifying restrictions contained in (15.18) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is

h(β) = 0,

where h : R^k → R^a. The restricted EL estimator and likelihood are the values which solve

β̃ = argmax_{h(β)=0} R(β),
R(β̃) = max_{h(β)=0} R(β).

The restricted EL estimator β̃ is simply an EL estimator with additional overidentifying restrictions, so there is no fundamental change in the distribution theory for β̃ relative to β̂. To test the hypothesis h(β) while maintaining (15.18), the simple overidentifying restrictions test (15.17) is not appropriate. Instead we use the difference in log-likelihoods:

LR_n = 2( R(β̂) − R(β̃) ).

This statistic is asymptotically χ²_a under the null hypothesis.

15.5 Numerical Computation

Gauss code which implements the methods discussed below can be found at
http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives

The numerical calculations depend on derivatives of the dual likelihood function (15.4). Define

g_i*(β, λ) = g_i(β) / (1 + λ'g_i(β)),
G_i*(β, λ) = G_i(β)'λ / (1 + λ'g_i(β)).

The first derivatives of (15.4) are

R_λ = (∂/∂λ) R(β, λ) = −Σ_{i=1}^n g_i*(β, λ),
R_β = (∂/∂β) R(β, λ) = −Σ_{i=1}^n G_i*(β, λ).

The second derivatives are

R_λλ = (∂²/∂λ∂λ') R(β, λ) = Σ_{i=1}^n g_i*(β, λ) g_i*(β, λ)',
R_λβ = (∂²/∂λ∂β') R(β, λ) = Σ_{i=1}^n ( g_i*(β, λ) G_i*(β, λ)' − G_i(β) / (1 + λ'g_i(β)) ),
R_ββ = (∂²/∂β∂β') R(β, λ) = Σ_{i=1}^n ( G_i*(β, λ) G_i*(β, λ)' − (∂²/∂β∂β')(λ'g_i(β)) / (1 + λ'g_i(β)) ).

Inner Loop

The so-called "inner loop" solves (15.5) for given β. The modified Newton method takes a quadratic approximation to R(β, λ), yielding the iteration rule

λ_{j+1} = λ_j − δ ( R_λλ(β, λ_j) )^{−1} R_λ(β, λ_j),    (15.19)

where δ > 0 is a scalar steplength (to be discussed next). The starting value λ_1 can be set to the zero vector. The iteration (15.19) is continued until the gradient R_λ(β, λ_j) is smaller than some prespecified tolerance.

Efficient convergence requires a good choice of steplength δ. One method uses the following quadratic approximation. Set δ_0 = 0, δ_1 = 1/2 and δ_2 = 1. For p = 0, 1, 2, set

λ_p = λ_j − δ_p ( R_λλ(β, λ_j) )^{−1} R_λ(β, λ_j),
R_p = R(β, λ_p).

A quadratic function can be fit exactly through these three points. The value of δ which minimizes this quadratic is

δ̂ = ( R_2 + 3R_0 − 4R_1 ) / ( 4R_2 + 4R_0 − 8R_1 ),

yielding the steplength to be plugged into (15.19).

A complication is that λ must be constrained so that the implied probabilities p_i do not exceed 1, which holds if

n ( 1 + λ'g_i(β) ) ≥ 1    (15.20)

for all i.

Outer Loop

The outer loop is the minimization (15.6). This can be done by the modified Newton method described in the previous section. The gradient for (15.6) is

L_β = (∂/∂β)(−R(β)) = −R_β − λ_β'R_λ = −R_β,

since R_λ(β, λ) = 0 at λ = λ(β), where

λ_β = (∂/∂β') λ(β) = −R_λλ^{−1} R_λβ,

the second equality following from the implicit function theorem applied to R_λ(β, λ(β)) = 0.

The Hessian for (15.6) is

L_ββ = −(∂²/∂β∂β') R(β)
     = −(∂/∂β')( R_β(β, λ(β)) + λ_β'R_λ(β, λ(β)) )
     = −( R_ββ + R_λβ'λ_β + λ_β'R_λβ + λ_β'R_λλλ_β )
     = R_λβ'R_λλ^{−1}R_λβ − R_ββ,

evaluated at λ = λ(β). It is not guaranteed that L_ββ > 0; if not, the eigenvalues can be adjusted so that all are positive. The Newton iteration rule is

β_{j+1} = β_j − δ L_ββ^{−1} L_β,

where δ is a scalar stepsize, and the iteration is continued until convergence.
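As an illustration of the inner loop, here is a minimal sketch (Python/NumPy; the function name and defaults are illustrative) of the Newton iteration (15.19) for λ(β) with β held fixed. For brevity it uses a unit steplength and omits the quadratic steplength search and the constraint (15.20); a full implementation would add both.

    import numpy as np

    def el_lambda(g, tol=1e-10, max_iter=100):
        # g: n x l matrix whose rows are g_i(beta) for a fixed beta
        n, l = g.shape
        lam = np.zeros(l)
        for _ in range(max_iter):
            denom = 1.0 + g @ lam                 # 1 + lambda'g_i
            gstar = g / denom[:, None]            # g_i*(beta, lambda)
            R_lam = -gstar.sum(axis=0)            # gradient R_lambda
            if np.max(np.abs(R_lam)) < tol:
                break
            R_ll = gstar.T @ gstar                # Hessian R_lambda,lambda
            lam = lam - np.linalg.solve(R_ll, R_lam)
        return lam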

Chapter 16

Endogeneity

We say that there is endogeneity in the linear model y_i = x_i'β + e_i if β is the parameter of interest and E(x_i e_i) ≠ 0. This cannot happen if β is defined by linear projection, so requires a structural interpretation. The coefficient β must have meaning separately from the definition of a conditional mean or linear projection.

Example: Measurement error in the regressor. Suppose that (y_i, x_i*) are joint random variables, E(y_i | x_i*) = x_i*'β is linear, β is the parameter of interest, and x_i* is not observed. Instead we observe x_i = x_i* + u_i where u_i is a k×1 measurement error, independent of y_i and x_i*. Then

y_i = x_i*'β + e_i = (x_i − u_i)'β + e_i = x_i'β + v_i,

where v_i = e_i − u_i'β. The problem is that

E(x_i v_i) = E( (x_i* + u_i)(e_i − u_i'β) ) = −E(u_i u_i')β ≠ 0

if β ≠ 0 and E(u_i u_i') ≠ 0. It follows that if β̂ is the OLS estimator, then

β̂ →p β − (E(x_i x_i'))^{−1} E(u_i u_i') β ≠ β.

Example: Supply and Demand. The variables q_i and p_i (quantity and price) are determined jointly by the demand equation

q_i = −β_1 p_i + e_{1i}

and the supply equation

q_i = β_2 p_i + e_{2i}.

Assume that e_i = (e_{1i}, e_{2i})' is iid, Ee_i = 0, β_1 + β_2 = 1 and Ee_i e_i' = I_2 (the latter for simplicity). The question is, if we regress q_i on p_i, what happens?

It is helpful to solve for q_i and p_i in terms of the errors. In matrix notation,

( 1   β_1 ) ( q_i )   ( e_{1i} )
( 1  −β_2 ) ( p_i ) = ( e_{2i} ),

so

( q_i )   ( 1   β_1 )^{−1} ( e_{1i} )   ( β_2   β_1 ) ( e_{1i} )   ( β_2 e_{1i} + β_1 e_{2i} )
( p_i ) = ( 1  −β_2 )      ( e_{2i} ) = ( 1     −1  ) ( e_{2i} ) = ( e_{1i} − e_{2i} ).

The projection of q_i on p_i yields

q_i = β* p_i + ε_i,
E(p_i ε_i) = 0,

where

β* = E(p_i q_i) / E(p_i²) = (β_2 − β_1)/2.

Hence if it is estimated by OLS, β̂ →p β*, which does not equal either β_1 or β_2. This is called simultaneous equations bias.

Instrumental Variables

yi = x0i + ei

(16.1)

where xi is k 1; and assume that E(xi ei ) 6= 0 so there is endogeneity. We call (16.1) the

structural equation. In matrix notation, this can be written as

y = X + e:

(16.2)

Any solution to the problem of endogeneity requires additional information which we call instruments.

Denition 16.1.1 The ` 1 random vector z i is an instrumental variable for (16.1) if E (z i ei ) = 0:

In a typical set-up, some regressors in xi will be uncorrelated with ei (for example, at least the

intercept). Thus we make the partition

xi =

x1i

x2i

k1

k2

(16.3)

where E(x1i ei ) = 0 yet E(x2i ei ) 6= 0: We call x1i exogenous and x2i endogenous. By the above

denition, x1i is an instrumental variable for (16.1); so should be included in z i : So we have the

partition

x1i

k1

zi =

(16.4)

z 2i

`2

where x1i = z 1i are the included exogenous variables, and z 2i are the excluded exogenous

variables. That is z 2i are variables which could be included in the equation for yi (in the sense

that they are uncorrelated with ei ) yet can be excluded, as they would have true zero coe cients

in the equation.

The model is just-identied if ` = k (i.e., if `2 = k2 ) and over-identied if ` > k (i.e., if

`2 > k2 ):

We have noted that any solution to the problem of endogeneity requires instruments. This does

not mean that valid instruments actually exist.

16.2 Reduced Form

The reduced form relationship between the variables or "regressors" x_i and the instruments z_i is found by linear projection. Let

Γ = ( E(z_i z_i') )^{−1} E(z_i x_i')

be the ℓ×k matrix of coefficients from a projection of x_i on z_i, and define

u_i = x_i − Γ'z_i

as the projection error. Then the reduced form linear relationship between x_i and z_i is

x_i = Γ'z_i + u_i.    (16.5)

In matrix notation, we can write (16.5) as

X = ZΓ + U,    (16.6)

where U is n×k.

By construction,

E(z_i u_i') = 0,

so (16.5) is a projection and can be estimated by OLS:

x_i = Γ̂'z_i + û_i,

or

X = ZΓ̂ + Û,

where

Γ̂ = (Z'Z)^{−1} Z'X.

Substituting (16.6) into (16.2), we find

y = (ZΓ + U)β + e = Zλ + v,    (16.7)

where

λ = Γβ    (16.8)

and

v = Uβ + e.

Observe that

E(z_i v_i) = E(z_i u_i')β + E(z_i e_i) = 0.

Thus (16.7) is also a projection and can be estimated by OLS:

y = Zλ̂ + v̂,
λ̂ = (Z'Z)^{−1} Z'y.

The equation (16.7) is the reduced form for y. (16.6) and (16.7) together are the reduced form equations for the system

y = Zλ + v
X = ZΓ + U.

As we showed above, OLS yields the reduced-form estimates (λ̂, Γ̂).

16.3 Identification

The structural parameter β relates to (λ, Γ) through (16.8). The parameter β is identified, meaning that it can be recovered from the reduced form, if

rank(Γ) = k.    (16.9)

Assume that (16.9) holds. If ℓ = k, then β = Γ^{−1}λ. If ℓ > k, then for any W > 0,

β = (Γ'WΓ)^{−1} Γ'Wλ.

If (16.9) is not satisfied, then β cannot be recovered from (λ, Γ). Note that a necessary (although not sufficient) condition for (16.9) is ℓ ≥ k.

Since Z and X have the common variables X_1, we can rewrite some of the expressions. Using (16.3) and (16.4) to make the matrix partitions Z = [Z_1, Z_2] and X = [Z_1, X_2], we can partition Γ as

Γ = ( Γ_11  Γ_12 ; Γ_21  Γ_22 ) = ( I  Γ_12 ; 0  Γ_22 ).

(16.6) can be rewritten as

X_1 = Z_1
X_2 = Z_1Γ_12 + Z_2Γ_22 + U_2.    (16.10)

β is identified if rank(Γ) = k, which is true if and only if rank(Γ_22) = k_2 (by the upper-diagonal structure of Γ). Thus the key to identification of the model rests on the ℓ_2×k_2 matrix Γ_22 in (16.10).

16.4 Estimation

The model can be written as

y_i = x_i'β + e_i
E(z_i e_i) = 0,

or

Eg_i(β) = 0,
g_i(β) = z_i(y_i − x_i'β).

This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in Chapters 14 and 15 directly apply. Recall that the GMM estimator, for given weight matrix W_n, is

β̂ = (X'Z W_n Z'X)^{−1} X'Z W_n Z'y.

16.5 Special Cases: IV and 2SLS

If the model is just-identified, so that k = ℓ, then the formula for GMM simplifies. We find that

β̂ = (X'Z W_n Z'X)^{−1} X'Z W_n Z'y
  = (Z'X)^{−1} W_n^{−1} (X'Z)^{−1} X'Z W_n Z'y
  = (Z'X)^{−1} Z'y.

This estimator is often called the instrumental variables estimator (IV) of β, where Z is used as an instrument for X. Observe that the weight matrix W_n has disappeared. In the just-identified case, the weight matrix plays no role. This is also the method of moments estimator of β, and the EL estimator. Another interpretation stems from the fact that since β = Γ^{−1}λ, we can construct the Indirect Least Squares (ILS) estimator:

β̂ = Γ̂^{−1}λ̂
  = ( (Z'Z)^{−1}(Z'X) )^{−1} ( (Z'Z)^{−1}(Z'y) )
  = (Z'X)^{−1}(Z'Z)(Z'Z)^{−1}(Z'y)
  = (Z'X)^{−1}(Z'y),

which again is the IV estimator.

Recall that the optimal weight matrix is an estimate of the inverse of Ω = E(z_i z_i' e_i²). In the special case that E(e_i² | z_i) = σ² (homoskedasticity), then Ω = E(z_i z_i')σ² ∝ E(z_i z_i'), suggesting the weight matrix W_n = (Z'Z)^{−1}. Using this choice, the GMM estimator equals

β̂_2SLS = (X'Z (Z'Z)^{−1} Z'X)^{−1} X'Z (Z'Z)^{−1} Z'y.

This is called the two-stage-least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.

It is useful to observe that writing

P = Z(Z'Z)^{−1}Z',
X̂ = PX = ZΓ̂,

then the 2SLS estimator is

β̂ = (X'PX)^{−1} X'Py = (X̂'X̂)^{−1} X̂'y.

The source of the "two-stage" name is that it can be computed as follows:

First regress X on Z, vis., Γ̂ = (Z'Z)^{−1}(Z'X) and X̂ = ZΓ̂ = PX.

Second, regress y on X̂, vis., β̂ = (X̂'X̂)^{−1} X̂'y.

It is useful to scrutinize the projection X̂. Recall, X = [X_1, X_2] and Z = [X_1, Z_2]. Then

X̂ = [X̂_1, X̂_2] = [PX_1, PX_2] = [X_1, PX_2] = [X_1, X̂_2],

since X_1 lies in the span of Z. Thus in the second stage, we regress y on X_1 and X̂_2. So only the endogenous variables X_2 are replaced by their fitted values:

X̂_2 = Z_1Γ̂_12 + Z_2Γ̂_22.
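For concreteness, a minimal sketch of the 2SLS computation in Python/NumPy (illustrative names, not from any package), using the projection form β̂ = (X̂'X̂)^{-1}X̂'y:

    import numpy as np

    def tsls(y, X, Z):
        # First stage: Gamma_hat = (Z'Z)^{-1} Z'X and X_hat = Z Gamma_hat
        Gamma = np.linalg.solve(Z.T @ Z, Z.T @ X)
        X_hat = Z @ Gamma
        # Second stage: regress y on X_hat
        beta = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
        return beta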

16.6 Bekker Asymptotics

Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is

y = Xβ + e    (16.11)
X = ZΓ + U    (16.12)
ξ = (e, U)
E(ξ | Z) = 0
E(ξ'ξ | Z) = S.

First, let us analyze the approximate bias of OLS applied to (16.11). Using (16.12),

E( (1/n)X'e ) = E(x_i e_i) = Γ'E(z_i e_i) + E(u_i e_i) = s_{21}

and

E( (1/n)X'X ) = E(x_i x_i') = Γ'E(z_i z_i')Γ + E(u_i z_i')Γ + Γ'E(z_i u_i') + E(u_i u_i') = Γ'QΓ + S_{22},

where Q = E(z_i z_i'). Hence, to a first-order approximation,

E(β̂_OLS − β) ≈ ( Γ'QΓ + S_{22} )^{−1} s_{21}.    (16.13)

We now derive a similar result for the 2SLS estimator:

β̂_2SLS = (X'PX)^{−1} X'Py,

where P = Z(Z'Z)^{−1}Z'. Write P = HΛH' where Λ = diag(I_ℓ, 0). Let Q = H'ξS^{−1/2}, which satisfies EQ'Q = I_n, and partition Q = (q_1', Q_2')' where q_1 is ℓ×1. Hence

E( (1/n)ξ'Pξ | Z ) = (1/n) S^{1/2}' E(Q'ΛQ | Z) S^{1/2} = (1/n) S^{1/2}' E(q_1'q_1) S^{1/2} = (ℓ/n) S^{1/2}'S^{1/2} = αS,

where

α = ℓ/n.

Using (16.12) and this result,

(1/n) E(X'Pe) = (1/n) E(Γ'Z'e) + (1/n) E(U'Pe) = αs_{21},

and

(1/n) E(X'PX) = Γ'E(z_i z_i')Γ + Γ'E(z_i u_i') + E(u_i z_i')Γ + (1/n)E(U'PU) = Γ'QΓ + αS_{22}.

Together,

E(β̂_2SLS − β) ≈ α ( Γ'QΓ + αS_{22} )^{−1} s_{21}.    (16.14)

In general this is non-zero, except when s_{21} = 0 (when X is exogenous). It is also close to zero when α = 0. Bekker (1994) pointed out that it also has the reverse implication: when α = ℓ/n is large, the bias in the 2SLS estimator will be large. Indeed as α → 1, the expression in (16.14) approaches that in (16.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.

Bekker (1994) showed further that under the alternative asymptotic approximation that α is fixed as n → ∞ (so that the number of instruments goes to infinity proportionately with sample size) then the expression in (16.14) is the probability limit of β̂_2SLS − β.

16.7 Identification Failure

Recall the reduced form equation

X_2 = Z_1Γ_12 + Z_2Γ_22 + U_2.

The parameter β fails to be identified if Γ_22 has deficient rank. The consequences of identification failure for inference are quite severe.

Take the simplest case where k = ℓ = 1 (so there is no Z_1). Then the model may be written as

y_i = x_iβ + e_i
x_i = z_iγ + u_i,

and Γ_22 = γ = E(z_i x_i)/E(z_i²). We see that β is identified if and only if γ ≠ 0, which occurs when E(x_i z_i) ≠ 0. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.

Suppose this condition fails, so E(x_i z_i) = 0. Then by the CLT

(1/√n) Σ_{i=1}^n z_i e_i →d N_1 ~ N(0, E(z_i²e_i²))    (16.15)

(1/√n) Σ_{i=1}^n z_i x_i = (1/√n) Σ_{i=1}^n z_i u_i →d N_2 ~ N(0, E(z_i²u_i²)),    (16.16)

therefore

β̂ − β = ( (1/√n) Σ z_i e_i ) / ( (1/√n) Σ z_i x_i ) →d N_1/N_2 ~ Cauchy,

since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).

Suppose that identification does not completely fail, but is weak. This occurs when Γ_22 is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz

Γ_22 = n^{−1/2} C,

where C is a full rank matrix. The n^{−1/2} is picked because it provides just the right balancing to allow a rich distribution theory.

To see the consequences, once again take the simple case k = ℓ = 1. Here, the instrument z_i is weak for x_i if

γ = n^{−1/2} c.

Then (16.15) is unaffected, but (16.16) instead takes the form

(1/√n) Σ z_i x_i = (1/√n) Σ z_i²γ + (1/√n) Σ z_i u_i = (1/n) Σ z_i²c + (1/√n) Σ z_i u_i →d Qc + N_2,

therefore

β̂ − β →d N_1 / (Qc + N_2).

As in the case of complete identification failure, we find that β̂ is inconsistent for β and the asymptotic distribution of β̂ is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.

The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).

The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form

X_2 = Z_1Γ_12 + Z_2Γ_22 + U_2,

and the key question is whether Γ_22 has full rank, which is possible to assess using a hypothesis test on the reduced form. Therefore in the case of k_2 = 1 (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for X_2, construct the test of Γ_22 = 0, and at a minimum check that the test rejects H_0 : Γ_22 = 0.

When k_2 > 1, Γ_22 ≠ 0 is not sufficient for identification. It is not even sufficient that each column of Γ_22 is non-zero (each column corresponds to a distinct endogenous variable in X_2). So while a minimal check is to test that each column of Γ_22 is non-zero, this cannot be interpreted as definitive proof that Γ_22 has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for X_2, and attempt to assess the likelihood that Γ_22 has deficient rank.
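As a sketch of this recommendation in the simplest case of one endogenous regressor (Python/NumPy; names are illustrative, and Z1 is assumed to contain the intercept and included exogenous variables), one can run the first-stage regression of x2 on (Z1, Z2) and compute a Wald statistic for excluding Z2:

    import numpy as np

    def first_stage_wald(x2, Z1, Z2):
        Z = np.column_stack([Z1, Z2])
        gamma = np.linalg.solve(Z.T @ Z, Z.T @ x2)
        u = x2 - Z @ gamma
        n, l = Z.shape
        l2 = Z2.shape[1]
        s2 = u @ u / (n - l)
        V = s2 * np.linalg.inv(Z.T @ Z)               # homoskedastic variance of gamma_hat
        g2 = gamma[-l2:]
        W = g2 @ np.linalg.solve(V[-l2:, -l2:], g2)   # Wald statistic for Gamma_22 = 0
        return W                                      # compare with chi-square(l2) critical value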

Exercises

1. Consider the single equation model

y_i = z_iβ + e_i,

where y_i and z_i are both real-valued (1×1). Let β̂ denote the IV estimator of β using as an instrument a dummy variable d_i (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.

2. In the linear model

y_i = x_i'β + e_i
E(e_i | x_i) = 0,

suppose σ_i² = E(e_i² | x_i) is known. Show that the GLS estimator of β can be written as an IV estimator using some instrument z_i. (Find an expression for z_i.)

3. Take the linear model

y = Xβ + e.

Let the OLS estimator for β be β̂ and the OLS residual be ê = y − Xβ̂. Let the IV estimator for β using some instrument Z be β̃ and the IV residual be ẽ = y − Xβ̃. If X is indeed endogenous, will IV "fit" better than OLS, in the sense that ẽ'ẽ < ê'ê, at least in large samples?

4. The reduced form between the regressors x_i and instruments z_i takes the form

x_i = Γ'z_i + u_i

or

X = ZΓ + U,

where x_i is k×1, z_i is ℓ×1, X is n×k, Z is n×ℓ, U is n×k, and Γ is ℓ×k. The parameter Γ is defined by the population moment condition

E(z_i u_i') = 0.

Show that the method of moments estimator for Γ is Γ̂ = (Z'Z)^{−1}(Z'X).

5. In the structural model

y = Xβ + e
X = ZΓ + U,

with Γ of dimension ℓ×k, ℓ ≥ k, we claim that β is identified (can be recovered from the reduced form) if rank(Γ) = k. Explain why this is true. That is, show that if rank(Γ) < k then β cannot be identified.

6. Take the linear model

y_i = x_iβ + e_i
E(e_i | x_i) = 0,

where x_i and β are 1×1.

(a) Show that E(x_i e_i) = 0 and E(x_i²e_i) = 0. Is z_i = (x_i, x_i²)' a valid instrumental variable for estimation of β?

(b) Define the 2SLS estimator of β, using z_i as an instrument for x_i. How does this differ from OLS?

(c) Find the efficient GMM estimator of β based on the moment condition

E(z_i(y_i − x_iβ)) = 0.

7. Suppose that price and quantity are determined by the intersection of the linear demand and supply curves

Demand: Q = a_0 + a_1P + a_2Y + e_1
Supply:  Q = b_0 + b_1P + b_2W + e_2,

where income (Y) and wage (W) are determined outside the market. In this model, are the parameters identified?

8. The data file card.dat is taken from Card (1995). There are 2215 observations with 29 variables, listed in card.pdf. We want to estimate a wage equation

log(Wage) = β_1 Educ + β_2 Exper + β_3 Exper² + β_4 South + β_5 Black + e,

where Educ = Education (Years), Exper = Experience (Years), and South and Black are regional and racial dummy variables.

(a) Estimate the model by OLS. Report estimates and standard errors.

(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near a 4-year college. Report estimates and standard errors.

(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments: near2 (a dummy indicating that the observation lives near a 2-year college), fatheduc (the education, in years, of the father) and motheduc (the education, in years, of the mother).

(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.

(e) Calculate and report the J statistic for overidentification.

(f) Discuss your findings.

Chapter 17

Univariate Time Series

A time series y_t is a process observed in sequence over time, t = 1, ..., T. To indicate the dependence on time, we adopt new notation, and use the subscript t to denote the individual observation, and T to denote the number of observations.

Because of the sequential nature of time series, we expect that y_t and y_{t−1} are not independent, so classical assumptions are not valid.

We can separate time series into two categories: univariate (y_t ∈ R is scalar) and multivariate (y_t ∈ R^m is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model for multivariate time series is vector autoregressions (VARs).

17.1 Stationarity and Ergodicity

Definition 17.1.1 {y_t} is covariance (weakly) stationary if E(y_t) = μ is independent of t, and cov(y_t, y_{t−k}) = γ(k) is independent of t for all k. γ(k) is called the autocovariance function, and ρ(k) = γ(k)/γ(0) = corr(y_t, y_{t−k}) is the autocorrelation function.

{y_t} is strictly stationary if the joint distribution of (y_t, ..., y_{t−k}) is independent of t for all k. A stationary time series is ergodic if γ(k) → 0 as k → ∞.

The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 17.1.1 If y_t is strictly stationary and ergodic and x_t = f(y_t, y_{t−1}, ...) is a random variable, then x_t is strictly stationary and ergodic.

Theorem 17.1.2 (Ergodic Theorem). If y_t is strictly stationary and ergodic and E|y_t| < ∞, then as T → ∞,

(1/T) Σ_{t=1}^T y_t →p E(y_t).

This allows us to consistently estimate parameters using time-series moments. The sample mean:

μ̂ = (1/T) Σ_{t=1}^T y_t.

The sample autocovariance:

γ̂(k) = (1/T) Σ_{t=1}^T (y_t − μ̂)(y_{t−k} − μ̂).

The sample autocorrelation:

ρ̂(k) = γ̂(k)/γ̂(0).

Theorem 17.1.3 If y_t is strictly stationary and ergodic and Ey_t² < ∞, then as T → ∞,

1. μ̂ →p E(y_t);
2. γ̂(k) →p γ(k);
3. ρ̂(k) →p ρ(k).

Proof of Theorem 17.1.3. Part (1) is a direct consequence of the Ergodic theorem. For Part (2), note that

γ̂(k) = (1/T) Σ_{t=1}^T (y_t − μ̂)(y_{t−k} − μ̂)
      = (1/T) Σ_{t=1}^T y_t y_{t−k} − (1/T) Σ_{t=1}^T y_t μ̂ − (1/T) Σ_{t=1}^T y_{t−k} μ̂ + μ̂².

By Theorem 17.1.1 above, the sequence y_t y_{t−k} is strictly stationary and ergodic, and it has a finite mean by the assumption that Ey_t² < ∞. Thus an application of the Ergodic Theorem yields

(1/T) Σ_{t=1}^T y_t y_{t−k} →p E(y_t y_{t−k}).

Thus

γ̂(k) →p E(y_t y_{t−k}) − μ² − μ² + μ² = E(y_t y_{t−k}) − μ² = γ(k).

Part (3) follows by the continuous mapping theorem: ρ̂(k) = γ̂(k)/γ̂(0) →p γ(k)/γ(0) = ρ(k).
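These sample moments are simple to compute directly. A minimal sketch in Python/NumPy (illustrative names), returning the first K sample autocorrelations with the 1/T convention used above:

    import numpy as np

    def sample_autocorrelations(y, K):
        T = len(y)
        yc = y - y.mean()
        gamma0 = np.sum(yc * yc) / T
        rho = [np.sum(yc[k:] * yc[:T - k]) / T / gamma0 for k in range(1, K + 1)]
        return np.array(rho)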

17.2 Autoregressions

In time-series, the series {..., y_1, y_2, ..., y_T, ...} are jointly random. We consider the conditional expectation

E(y_t | F_{t−1}),

where F_{t−1} = {y_{t−1}, y_{t−2}, ...} is the past history of the series.

An autoregressive (AR) model specifies that only a finite number of past lags matter:

E(y_t | F_{t−1}) = E(y_t | y_{t−1}, ..., y_{t−k}).

A linear AR model (the most common type used in practice) specifies linearity:

E(y_t | F_{t−1}) = α + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ... + ρ_k y_{t−k}.

Letting

e_t = y_t − E(y_t | F_{t−1}),

then we have the autoregressive model

y_t = α + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ... + ρ_k y_{t−k} + e_t,
E(e_t | F_{t−1}) = 0.

The last property defines a special time-series process: e_t is a martingale difference sequence (MDS) if E(e_t | F_{t−1}) = 0.

Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset returns should be the sum of a constant plus a MDS.

The MDS property for the regression error plays the same role in a time-series regression as does the conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more important in the time-series context, as it is difficult to derive distribution theories without this property.

A useful property of a MDS is that e_t is uncorrelated with any function of the lagged information F_{t−1}. Thus for k > 0, E(y_{t−k}e_t) = 0.

17.3 Stationarity of AR(1) Process

A mean-zero AR(1) is

y_t = ρy_{t−1} + e_t.

Assume that e_t is iid, E(e_t) = 0 and Ee_t² = σ² < ∞.

By back-substitution, we find

y_t = e_t + ρe_{t−1} + ρ²e_{t−2} + ... = Σ_{k=0}^∞ ρ^k e_{t−k}.

Loosely speaking, this series converges if the sequence ρ^k e_{t−k} gets small as k → ∞. This occurs when |ρ| < 1, in which case y_t is strictly stationary and ergodic.

We can compute the moments of y_t using the infinite sum:

Ey_t = Σ_{k=0}^∞ ρ^k E(e_{t−k}) = 0,

var(y_t) = Σ_{k=0}^∞ ρ^{2k} var(e_{t−k}) = σ²/(1 − ρ²).

If the equation for y_t has an intercept, the above results are unchanged, except that the mean of y_t can be computed from the relationship

Ey_t = α + ρEy_{t−1},

and solving for Ey_t = Ey_{t−1} we find Ey_t = α/(1 − ρ).

Lag Operator

An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Denition 17.4.1 The lag operator L satises Lyt = yt

The AR(1) model can be written in the format

2:

In general, Lk yt = yt

yt

yt

(1

L) yt = et :

1:

k:

= et

or

The operator (L) = (1

L) is a polynomial in the operator L: We say that the root of the

polynomial is 1= ; since (z) = 0 when z = 1= : We call (L) the autoregressive polynomial of yt .

From Theorem 17.3.1, an AR(1) is stationary i j j < 1: Note that an equivalent way to say

this is that an AR(1) is stationary i the root of the autoregressive polynomial is larger than one

(in absolute value).

17.5 Stationarity of AR(k)

The AR(k) model is

y_t = ρ_1 y_{t−1} + ρ_2 y_{t−2} + ... + ρ_k y_{t−k} + e_t.

Using the lag operator,

y_t − ρ_1 L y_t − ρ_2 L² y_t − ... − ρ_k L^k y_t = e_t,

or

ρ(L) y_t = e_t,

where

ρ(L) = 1 − ρ_1 L − ρ_2 L² − ... − ρ_k L^k.

The Fundamental Theorem of Algebra says that any polynomial can be factored as

ρ(z) = (1 − λ_1^{−1} z)(1 − λ_2^{−1} z) ⋯ (1 − λ_k^{−1} z),

where the λ_1, ..., λ_k are the complex roots of ρ(z), which satisfy ρ(λ_j) = 0.

We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let |λ| denote the modulus of a complex number λ.

Theorem 17.5.1 The AR(k) is strictly stationary and ergodic if and only if |λ_j| > 1 for all j.

One way of stating this is that "all roots lie outside the unit circle."

If one of the roots equals 1, we say that ρ(L), and hence y_t, has a "unit root". This is a special case of non-stationarity, and is of great interest in applied time series.

17.6 Estimation

Let

x_t = (1, y_{t−1}, y_{t−2}, ..., y_{t−k})'
β = (α, ρ_1, ρ_2, ..., ρ_k)'.

Then the model can be written as

y_t = x_t'β + e_t.

The OLS estimator is

β̂ = (X'X)^{−1} X'y.

To study β̂, it is helpful to define the process u_t = x_t e_t. Note that u_t is a MDS, since

E(u_t | F_{t−1}) = E(x_t e_t | F_{t−1}) = x_t E(e_t | F_{t−1}) = 0.

By Theorem 17.1.1, it is also strictly stationary and ergodic, so

(1/T) Σ_{t=1}^T x_t e_t = (1/T) Σ_{t=1}^T u_t →p E(u_t) = 0.    (17.1)

The vector x_t is strictly stationary and ergodic, and by Theorem 17.1.1, so is x_t x_t'. Thus by the Ergodic Theorem,

(1/T) Σ_{t=1}^T x_t x_t' →p E(x_t x_t') = Q.

Combined with (17.1) and the continuous mapping theorem, we see that

β̂ − β = ( (1/T) Σ_{t=1}^T x_t x_t' )^{−1} ( (1/T) Σ_{t=1}^T x_t e_t ) →p Q^{−1} 0 = 0.

We have shown the following: if the AR(k) process y_t is strictly stationary and ergodic and Ey_t² < ∞, then β̂ →p β as T → ∞.

17.7 Asymptotic Distribution

Theorem 17.7.1 MDS CLT. If u_t is a strictly stationary and ergodic MDS and E(u_t u_t') = Ω < ∞, then as T → ∞,

(1/√T) Σ_{t=1}^T u_t →d N(0, Ω).

Since x_t e_t is a MDS, we can apply Theorem 17.7.1 to see that

(1/√T) Σ_{t=1}^T x_t e_t →d N(0, Ω),

where Ω = E(x_t x_t' e_t²).

Theorem 17.7.2 If the AR(k) process y_t is strictly stationary and ergodic and Ey_t⁴ < ∞, then as T → ∞,

√T(β̂ − β) →d N(0, Q^{−1}ΩQ^{−1}).

This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.

17.8 Bootstrap for Autoregressions

In the nonparametric bootstrap discussed earlier, the bootstrap sample is created by randomly resampling from the data values {y_t, x_t}. This creates an iid bootstrap sample. Clearly, this cannot work in a time-series application, as this imposes inappropriate independence.

Briefly, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap.

1. Estimate β̂ and residuals ê_t.
2. Fix an initial condition (y_{−k+1}, ..., y_0).
3. Simulate iid draws e_t* from the empirical distribution of the residuals {ê_1, ..., ê_T}.
4. Create the bootstrap series y_t* by the recursive formula

y_t* = α̂ + ρ̂_1 y*_{t−1} + ρ̂_2 y*_{t−2} + ... + ρ̂_k y*_{t−k} + e_t*.

This construction imposes homoskedasticity on the errors e_t*, which may be different than the properties of the actual e_t. It also presumes that the AR(k) structure is the truth.

Method 2: Block Resampling

1. Divide the sample into T/m blocks of length m.
2. Resample complete blocks. For each simulated sample, draw T/m blocks.
3. Paste the blocks together to create the bootstrap time-series y_t*.
4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model misspecification.
5. The results may be sensitive to the block length, and the way that the data are partitioned into blocks.
6. May not work well in small samples.
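A sketch of the model-based bootstrap recursion for an AR(k) (Python/NumPy; names are illustrative). It takes the estimated intercept and AR coefficients, resamples the residuals with replacement, and builds each bootstrap series recursively from the chosen initial conditions.

    import numpy as np

    def ar_bootstrap_series(alpha, rho, resid, y_init, T, rng):
        # rho: array (rho_1, ..., rho_k); y_init: k initial values, most recent last
        k = len(rho)
        y = list(y_init)
        e_star = rng.choice(resid, size=T, replace=True)   # iid draws from residual EDF
        for t in range(T):
            lags = y[-1:-k-1:-1]                           # y*_{t-1}, ..., y*_{t-k}
            y.append(alpha + np.dot(rho, lags) + e_star[t])
        return np.array(y[k:])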

17.9 Trend Stationarity

A trend-stationary model takes the form

y_t = μ_0 + μ_1 t + S_t    (17.2)
S_t = ρ_1 S_{t−1} + ρ_2 S_{t−2} + ... + ρ_k S_{t−k} + e_t,    (17.3)

or

y_t = α_0 + α_1 t + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ... + ρ_k y_{t−k} + e_t.    (17.4)

There are two essentially equivalent ways to estimate the autoregressive parameters (ρ_1, ..., ρ_k):

You can estimate (17.4) directly by OLS.

You can estimate (17.2)-(17.3) sequentially by OLS. That is, first estimate (17.2), get the residual Ŝ_t, and then perform regression (17.3) replacing S_t with Ŝ_t. This procedure is sometimes called Detrending.

The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects

There are three popular methods to deal with seasonal data.

Include dummy variables for each season. This presumes that seasonality does not change over the sample.

Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.

First apply a seasonal differencing operator. If s is the number of seasons (typically s = 4 or s = 12),

Δ_s y_t = y_t − y_{t−s},

or the season-to-season change. The series Δ_s y_t is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.

17.10 Testing for Omitted Serial Correlation

Consider the AR(1) model

y_t = α + ρy_{t−1} + u_t.    (17.5)

We are interested in the question if the error u_t is serially correlated. We model this as an AR(1):

u_t = θu_{t−1} + e_t,    (17.6)

and the hypotheses are

H_0 : θ = 0
H_1 : θ ≠ 0.

To combine (17.5) and (17.6), we take (17.5) and lag the equation once:

y_{t−1} = α + ρy_{t−2} + u_{t−1}.

We then multiply this by θ and subtract from (17.5), to find

y_t − θy_{t−1} = α − θα + ρy_{t−1} − θρy_{t−2} + u_t − θu_{t−1},

or

y_t = α(1 − θ) + (ρ + θ)y_{t−1} − θρy_{t−2} + e_t = AR(2).

Thus under H_0, y_t is an AR(1), and under H_1 it is an AR(2). H_0 may be expressed as the restriction that the coefficient on y_{t−2} is zero.

An appropriate test of H_0 against H_1 is therefore a Wald test that the coefficient on y_{t−2} is zero. (A simple exclusion test.)

In general, if the null hypothesis is that y_t is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative y_t is an AR(k+m), and this is equivalent to the restriction that the coefficients on y_{t−k−1}, ..., y_{t−k−m} are jointly zero. An appropriate test is the Wald test of this restriction.

17.11 Model Selection

One approach to model selection is to choose k based on Wald tests. Another is to minimize the AIC or BIC information criterion, e.g.

AIC(k) = log σ̂²(k) + 2k/T,

where σ̂²(k) is the estimated residual variance from an AR(k).

One ambiguity in defining the AIC criterion is that the sample available for estimation changes as k changes. (If you increase k, you need more initial conditions.) This can induce strange behavior in the AIC. The best remedy is to fix an upper value k̄, reserve the first k̄ observations as initial conditions, and then estimate the models AR(1), AR(2), ..., AR(k̄) on this (unified) sample.
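A sketch of lag selection by AIC on a unified sample (Python/NumPy; names are illustrative). The first kmax observations are reserved as initial conditions so that every AR(k) is fit to the same T − kmax observations.

    import numpy as np

    def ar_aic_select(y, kmax):
        T = len(y)
        n = T - kmax                                   # common estimation sample size
        aic = []
        for k in range(1, kmax + 1):
            # regressors: intercept and y_{t-1}, ..., y_{t-k} for t = kmax+1, ..., T
            X = np.column_stack([np.ones(n)] +
                                [y[kmax - j: T - j] for j in range(1, k + 1)])
            yy = y[kmax:]
            beta = np.linalg.lstsq(X, yy, rcond=None)[0]
            e = yy - X @ beta
            aic.append(np.log(np.mean(e ** 2)) + 2 * k / n)
        return int(np.argmin(aic)) + 1                 # AR order with smallest AIC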

17.12 Autoregressive Unit Roots

The AR(k) model is

ρ(L) y_t = α + e_t,
ρ(L) = 1 − ρ_1 L − ... − ρ_k L^k.

As discussed before, y_t has a unit root when ρ(1) = 0, or

ρ_1 + ρ_2 + ... + ρ_k = 1.

In this case, y_t is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.

A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:

Δy_t = μ + α_0 y_{t−1} + α_1 Δy_{t−1} + ... + α_{k−1} Δy_{t−(k−1)} + e_t.    (17.7)

These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter α_0 summarizes the information about the unit root, since ρ(1) = −α_0. To see this, observe that the lag polynomial for the y_t computed from (17.7) is

(1 − L) − α_0 L − α_1(L − L²) − ... − α_{k−1}(L^{k−1} − L^k).

But this must equal ρ(L), as the models are equivalent. Thus

ρ(1) = (1 − 1) − α_0 − α_1(1 − 1) − ... − α_{k−1}(1 − 1) = −α_0.

Hence, the hypothesis of a unit root in y_t can be stated as

H_0 : α_0 = 0.

Note that the model is stationary if α_0 < 0. So the natural alternative is

H_1 : α_0 < 0.

Under H_0, the model for y_t is

Δy_t = μ + α_1 Δy_{t−1} + ... + α_{k−1} Δy_{t−(k−1)} + e_t,

which is an AR(k−1) in the first-difference Δy_t. Thus if y_t has a (single) unit root, then Δy_t is a stationary AR process. Because of this property, we say that if y_t is non-stationary but Δ^d y_t is stationary, then y_t is "integrated of order d", or "I(d)". Thus a time series with unit root is I(1).

Since α_0 is the parameter of a linear regression, the natural test statistic is the t-statistic for H_0 from OLS estimation of (17.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.

It would seem natural to assess the significance of the ADF statistic using the normal table. However, under H_0, y_t is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.

Assume α_0 = 0. As T → ∞,

T α̂_0 →d (1 − α_1 − α_2 − ... − α_{k−1}) DF_α,
ADF = α̂_0 / s(α̂_0) →d DF_t.

The limit distributions DF_α and DF_t are non-normal. They are skewed to the left, and have negative means.

The first result states that α̂_0 converges to its true value (of zero) at rate T, rather than the conventional rate of T^{1/2}. This is called a "super-consistent" rate of convergence.

The second result states that the t-statistic for α̂_0 converges to a limit distribution which is non-normal, but does not depend on the parameters α. This distribution has been extensively tabulated, and may be used for testing the hypothesis H_0. Note: the standard error s(α̂_0) is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.

Since the alternative hypothesis is one-sided, the ADF test rejects H_0 in favor of H_1 when ADF < c, where c is the critical value from the ADF table. If the test rejects H_0, this means that the evidence points to y_t being stationary. If the test does not reject H_0, a common conclusion is that the data suggests that y_t is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.

We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is

Δy_t = μ_1 + μ_2 t + α_0 y_{t−1} + α_1 Δy_{t−1} + ... + α_{k−1} Δy_{t−(k−1)} + e_t.    (17.8)

This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, Stock Prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe this data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for α_0 from OLS estimation of (17.8).

If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.

Most texts include as well the critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice where intercepts are always included.
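A sketch of the ADF regression and t-ratio (Python/NumPy; names are illustrative). The resulting statistic should be compared against Dickey-Fuller critical values, not the normal table. The version below includes an intercept; a time-trend column could be appended for the trend case (17.8).

    import numpy as np

    def adf_stat(y, k):
        # Regression: dy_t = mu + a0*y_{t-1} + a1*dy_{t-1} + ... + a_{k-1}*dy_{t-k+1} + e_t
        dy = np.diff(y)
        T = len(dy)
        rows = []
        for t in range(k - 1, T):
            lagged = [dy[t - j] for j in range(1, k)]      # lagged differences
            rows.append([1.0, y[t]] + lagged)              # y[t] is the lagged level y_{t-1}
        X = np.array(rows)
        yy = dy[k - 1:]
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        e = yy - X @ beta
        s2 = e @ e / (len(yy) - X.shape[1])                # conventional (homoskedastic) variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        return beta[1] / se                                # t-ratio for a0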

Chapter 18

Multivariate Time Series

A multivariate time series y_t is a vector process m×1. Let F_{t−1} = (y_{t−1}, y_{t−2}, ...) be all lagged information at time t. The typical goal is to find the conditional expectation E(y_t | F_{t−1}). Note that since y_t is a vector, this conditional expectation is also a vector.

18.1 Vector Autoregressions (VARs)

A VAR model specifies that the conditional mean is a function of only a finite number of lags:

E(y_t | F_{t−1}) = E(y_t | y_{t−1}, ..., y_{t−k}).

A linear VAR specifies that this conditional mean is linear in the arguments:

E(y_t | y_{t−1}, ..., y_{t−k}) = a_0 + A_1 y_{t−1} + A_2 y_{t−2} + ... + A_k y_{t−k},

where a_0 is m×1 and each of A_1 through A_k are m×m matrices.

Defining the m×1 regression error

e_t = y_t − E(y_t | F_{t−1}),

we have the VAR model

y_t = a_0 + A_1 y_{t−1} + A_2 y_{t−2} + ... + A_k y_{t−k} + e_t,
E(e_t | F_{t−1}) = 0.

Alternatively, defining the (mk + 1)×1 vector

x_t = (1, y_{t−1}', y_{t−2}', ..., y_{t−k}')'

and the m×(mk + 1) matrix

A = ( a_0  A_1  A_2  ⋯  A_k ),

then

y_t = A x_t + e_t.

The VAR model is a system of m equations. One way to write this is to let a_j' be the jth row of A. Then the VAR system can be written as the equations

Y_{jt} = a_j'x_t + e_{jt}.

Unrestricted VARs were introduced to econometrics by Sims (1980).

18.2 Estimation

Consider the moment conditions

E(x_t e_{jt}) = 0,

j = 1, ..., m. These are implied by the VAR model, either as a regression, or as a linear projection.

The GMM estimator corresponding to these moment conditions is equation-by-equation OLS:

â_j = (X'X)^{−1} X'y_j.

Equivalently,

â_j' = y_j'X(X'X)^{−1},

and if we stack these to create the estimate Â, we find

Â = ( y_1'; y_2'; ...; y_m' ) X(X'X)^{−1} = Y'X(X'X)^{−1},

where

Y = ( y_1  y_2  ⋯  y_m )

is the T×m matrix of observations.

This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
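A sketch of equation-by-equation least squares for a VAR (Python/NumPy; names are illustrative), computing Â = Y'X(X'X)^{-1} in one pass:

    import numpy as np

    def var_ls(Y, k):
        # Y: T x m array of observations y_t'
        T, m = Y.shape
        rows = [np.concatenate([[1.0]] + [Y[t - j] for j in range(1, k + 1)])
                for t in range(k, T)]
        X = np.array(rows)                       # x_t = (1, y_{t-1}', ..., y_{t-k}')'
        Yk = Y[k:]
        A_hat = np.linalg.solve(X.T @ X, X.T @ Yk).T    # m x (mk+1); rows are a_j'
        return A_hat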

18.3 Restricted VARs

The unrestricted VAR is a system of m equations, each with the same set of regressors. A restricted VAR imposes restrictions on the system. For example, some regressors may be excluded from some of the equations. Restrictions may be imposed on individual equations, or across equations. The GMM framework gives a convenient method to impose such restrictions on estimation.

18.4 Single Equation from a VAR

Often, we are only interested in a single equation out of a VAR system. This takes the form

y_{jt} = a_j'x_t + e_{jt},

and x_t consists of lagged values of y_{jt} and the other y_{lt}'s. In this case, it is convenient to re-define the variables. Let y_t = y_{jt}, and let z_t be the other variables. Let e_t = e_{jt} and β = a_j. Then the single equation takes the form

y_t = x_t'β + e_t,    (18.1)

where

x_t = (1, y_{t−1}, ..., y_{t−k}, z_{t−1}', ..., z_{t−k}')'.

18.5 Testing for Omitted Serial Correlation

Consider the problem of testing for omitted serial correlation in equation (18.1). Suppose that e_t is an AR(1). Then

y_t = x_t'β + e_t
e_t = θe_{t−1} + u_t    (18.2)
E(u_t | F_{t−1}) = 0.

Then the hypothesis of no omitted serial correlation is

H_0 : θ = 0
H_1 : θ ≠ 0.

Take the equation y_t = x_t'β + e_t, and subtract off the equation once lagged multiplied by θ, to get

y_t − θy_{t−1} = (x_t'β + e_t) − θ(x_{t−1}'β + e_{t−1}) = x_t'β − θx_{t−1}'β + e_t − θe_{t−1},

or

y_t = θy_{t−1} + x_t'β + x_{t−1}'γ + u_t,    (18.3)

which is a valid regression model under the restriction γ = −βθ.

So testing H_0 versus H_1 is equivalent to testing for the significance of adding (y_{t−1}, x_{t−1}) to the regression. This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted serial correlation is to test the significance of extra lagged values of the dependent variable and regressors.

You may have heard of the Durbin-Watson test for omitted serial correlation, which once was very popular, and is still routinely reported by conventional regression packages. The DW test is appropriate only when the regression y_t = x_t'β + e_t is not dynamic (has no lagged values on the RHS), and e_t is iid N(0, σ²). Otherwise it is invalid.

Another interesting fact is that (18.2) is a special case of (18.3), under the restriction γ = −βθ. This restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (18.2) may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (18.2) is uncommon in recent applications.

18.6 Selection of Lag Length in a VAR

If you want a data-dependent rule to pick the lag length k in a VAR, you may either use a testing-based approach (using, for example, the Wald statistic), or an information criterion approach. The formulae for the AIC and BIC are

AIC(k) = log det( Omega^(k) ) + 2p/T
BIC(k) = log det( Omega^(k) ) + p log(T)/T
Omega^(k) = (1/T) sum_{t=1}^{T} e_t^(k) e_t^(k)'
p = m(km + 1),

where e_t^(k) is the OLS residual vector from the model with k lags. The log determinant is the criterion from the multivariate normal likelihood.
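As an illustration, the criteria above can be evaluated on a common estimation sample. A minimal sketch, reusing the var_ols helper from the earlier example (both that helper and the choice of maximum candidate lag are assumptions for illustration):

import numpy as np

def var_lag_select(y, kmax):
    """Compute (AIC(k), BIC(k)) for k = 1..kmax on a common sample."""
    T_full, m = y.shape
    out = {}
    for k in range(1, kmax + 1):
        # drop the first (kmax - k) rows so every k uses the same T
        Ahat, e, X = var_ols(y[kmax - k:], k)
        T = e.shape[0]
        Omega = (e.T @ e) / T
        p = m * (k * m + 1)
        logdet = np.linalg.slogdet(Omega)[1]
        out[k] = (logdet + 2 * p / T, logdet + p * np.log(T) / T)
    return out

The lag length minimizing AIC(k) or BIC(k) is then chosen.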

18.7 Granger Causality

Partition the data vector into (y_t, z_t). Define the two information sets

F_1t = ( y_t, y_{t-1}, y_{t-2}, ... )
F_2t = ( y_t, z_t, y_{t-1}, z_{t-1}, y_{t-2}, z_{t-2}, ... ).

The information set F_1t is generated only by the history of y_t, and the information set F_2t is generated by both y_t and z_t. The latter has more information.

We say that z_t does not Granger-cause y_t if

E(y_t | F_{1,t-1}) = E(y_t | F_{2,t-1}).

That is, conditional on information in lagged y_t, lagged z_t does not help to forecast y_t. If this condition does not hold, then we say that z_t Granger-causes y_t.

The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If z_t is some sort of forecast of the future, such as a futures price, then z_t may help to forecast y_t even though it does not "cause" y_t. This definition of causality was developed by Granger (1969) and Sims (1972).

In a linear VAR, the equation for y_t is

y_t = rho_1 y_{t-1} + ... + rho_k y_{t-k} + z_{t-1}' gamma_1 + ... + z_{t-k}' gamma_k + e_t.

In this equation, z_t does not Granger-cause y_t if and only if

H_0: gamma_1 = gamma_2 = ... = gamma_k = 0.

This idea can be applied to blocks of variables. That is, y_t and/or z_t can be vectors. The hypothesis can be tested by using the appropriate multivariate Wald test.

If it is found that z_t does not Granger-cause y_t, then we deduce that our time-series model of E(y_t | F_{t-1}) does not require the use of z_t. Note, however, that z_t may still be useful to explain other features of y_t, such as the conditional variance.
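A Wald statistic for the hypothesis above, in the simple case of scalar y_t and z_t, can be sketched as follows (the data arrays, lag length, and the homoskedastic covariance estimator are illustrative assumptions; a robust covariance matrix could be substituted):

import numpy as np
from scipy.stats import chi2

def granger_wald(y, z, k):
    """Wald test of H0: the lagged z's do not enter the equation for y."""
    T = len(y)
    X, Y = [], []
    for t in range(k, T):
        row = [1.0]
        row += [y[t - j] for j in range(1, k + 1)]
        row += [z[t - j] for j in range(1, k + 1)]
        X.append(row); Y.append(y[t])
    X, Y = np.asarray(X), np.asarray(Y)
    beta = np.linalg.solve(X.T @ X, X.T @ Y)
    e = Y - X @ beta
    n, q = X.shape
    V = (e @ e / (n - q)) * np.linalg.inv(X.T @ X)   # homoskedastic covariance
    R = np.zeros((k, q)); R[:, 1 + k:] = np.eye(k)   # selects the z-lag coefficients
    W = (R @ beta) @ np.linalg.solve(R @ V @ R.T, R @ beta)
    return W, 1 - chi2.cdf(W, df=k)

Under H_0 the statistic is approximately chi-square with k degrees of freedom.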

Clive W. J. Granger

Clive Granger (1934-2009) of England was one of the leading figures in time-series econometrics, and co-winner in 2003 of the Nobel Memorial Prize in Economic Sciences (along with Robert Engle). In addition to formalizing the definition of causality known as Granger causality, he invented the concept of cointegration, introduced spectral methods into econometrics, and formalized methods for the combination of forecasts.

18.8 Cointegration

The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger (1987).

Definition: The m x 1 series y_t is cointegrated if y_t is I(1) yet there exists beta, m x r, of rank r, such that z_t = beta' y_t is I(0). The r vectors in beta are called the cointegrating vectors.

If the series y_t is not cointegrated, then r = 0. If r = m, then y_t is I(0). For 0 < r < m, y_t is I(1) and cointegrated.

In some cases, it may be believed that beta is known a priori. Often, beta = (1 -1)'. For example, if y_t is a pair of interest rates, then beta = (1 -1)' specifies that the spread (the difference in returns) is stationary. If y = (log(C) log(I))', then beta = (1 -1)' specifies that log(C/I) is stationary.

In other cases, beta may not be known.

If y_t is cointegrated with a single cointegrating vector (r = 1), then it turns out that beta can be consistently estimated by an OLS regression of one component of y_t on the others. Thus y_t = (Y_1t, Y_2t) and beta = (beta_1 beta_2), and normalize beta_1 = 1. Then beta_2^ = (y_2' y_2)^{-1} (y_2' y_1) ->_p beta_2. Furthermore this estimation is super-consistent: T(beta_2^ - beta_2) ->_d Limit, as first shown by Stock (1987). This is not, in general, a good method to estimate beta, but it is useful in the construction of alternative estimators and tests.

We are often interested in testing the hypothesis of no cointegration:

H_0: r = 0
H_1: r > 0.

Suppose that beta is known, so z_t = beta' y_t is known. Then under H_0 z_t is I(1), yet under H_1 z_t is I(0). Thus H_0 can be tested using a univariate ADF test on z_t.

When beta is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual z_t^ = beta^' y_t, from OLS of y_1t on y_2t. Their justification was Stock's result that beta^ is super-consistent under H_1. Under H_0, however, beta^ is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).

When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
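A sketch of the Engle-Granger two-step computation: first estimate the cointegrating regression by OLS, then run an ADF-type regression on the residual. The helper below only returns the ADF t-ratio on the lagged residual; the appropriate Phillips-Ouliaris critical values, which differ from the standard ADF tables, must be supplied separately. (The lag length p and the array names are assumptions for illustration.)

import numpy as np

def engle_granger_resid_adf(y1, y2, p=1):
    """Step 1: OLS of y1 on (1, y2); Step 2: ADF regression on the residual."""
    X1 = np.column_stack([np.ones_like(y2), y2])
    b = np.linalg.lstsq(X1, y1, rcond=None)[0]
    z = y1 - X1 @ b                               # estimated residual z_t
    dz = np.diff(z)
    rows, Y = [], []
    for t in range(p, len(dz)):
        # regress dz_t on z_{t-1} and lagged differences
        rows.append([z[t]] + [dz[t - j] for j in range(1, p + 1)])
        Y.append(dz[t])
    X, Y = np.asarray(rows), np.asarray(Y)
    rho = np.linalg.solve(X.T @ X, X.T @ Y)
    e = Y - X @ rho
    s2 = e @ e / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return rho[0] / se                            # ADF t-ratio on the level term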

18.9 Cointegrated VARs

We can write a VAR as

A(L) y_t = e_t
A(L) = I - A_1 L - A_2 L^2 - ... - A_k L^k,

or alternatively as

Delta y_t = Pi y_{t-1} + D(L) Delta y_{t-1} + e_t,

where

Pi = -A(1) = -I + A_1 + A_2 + ... + A_k.

Then y_t is cointegrated with m x r beta if and only if rank(Pi) = r and Pi = alpha beta', where alpha is m x r with rank(alpha) = r.

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as

Delta y_t = alpha beta' y_{t-1} + D(L) Delta y_{t-1} + e_t
Delta y_t = alpha z_{t-1} + D(L) Delta y_{t-1} + e_t.

If beta is unknown, then estimation is done by "reduced rank regression", which is least-squares subject to the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that e_t is iid N(0, Omega).

One difficulty is that beta is not identified without normalization. When r = 1, we typically just normalize one element to equal unity. When r > 1, this does not work, and different authors have adopted different identification schemes.

In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for cointegration by testing the rank of Pi. These tests are constructed as likelihood ratio (LR) tests. As they were discovered by Johansen (1988, 1991, 1995), they are typically called the Johansen Max and Trace tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.

Chapter 19

Limited Dependent Variables

A limited dependent variable y is one which takes a "limited" set of values. The most common cases are

- Binary: y in {0, 1}
- Multinomial: y in {0, 1, 2, ..., k}
- Integer: y in {0, 1, 2, ...}
- Censored: y in R_+

The traditional approach to the estimation of limited dependent variable (LDV) models is parametric maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function. A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional assumption. We will discuss only the first (parametric) approach, due to time constraints. They still constitute the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation, you would be advised to consider employing a semi-parametric estimation approach.

For the parametric approach, estimation is by MLE. A major practical issue is construction of the likelihood function.

19.1 Binary Choice

The dependent variable y_i in {0, 1}. This represents a Yes/No outcome. Given some regressors x_i, the goal is to describe Pr(y_i = 1 | x_i), as this is the full conditional distribution.

The linear probability model specifies that

Pr(y_i = 1 | x_i) = x_i' beta.

As Pr(y_i = 1 | x_i) = E(y_i | x_i), this yields the regression y_i = x_i' beta + e_i, which can be estimated by OLS. However, the linear probability model does not impose the restriction that 0 <= Pr(y_i = 1 | x_i) <= 1. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.

The standard alternative is to use a function of the form

Pr(y_i = 1 | x_i) = F(x_i' beta)

where F(.) is a known CDF, typically assumed to be symmetric about zero, so that F(u) = 1 - F(-u). The two standard choices for F are

- Logistic: F(u) = (1 + e^{-u})^{-1}.
- Normal: F(u) = Phi(u).

If F is logistic, we call this the logit model, and if F is normal, we call this the probit model.

This model is identical to the latent variable model

y_i* = x_i' beta + e_i
e_i ~ F(.)
y_i = 1 if y_i* > 0, and y_i = 0 otherwise.

For then

Pr(y_i = 1 | x_i) = Pr(y_i* > 0 | x_i)
                  = Pr(x_i' beta + e_i > 0 | x_i)
                  = Pr(e_i > -x_i' beta | x_i)
                  = 1 - F(-x_i' beta)
                  = F(x_i' beta).

Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if y is Bernoulli, such that Pr(y = 1) = p and Pr(y = 0) = 1 - p, then we can write the density of y as

f(y) = p^y (1 - p)^{1-y},   y = 0, 1.

In the binary choice model, y_i is conditionally Bernoulli with Pr(y_i = 1 | x_i) = p_i = F(x_i' beta). Thus the conditional density is

f(y_i | x_i) = p_i^{y_i} (1 - p_i)^{1-y_i} = F(x_i' beta)^{y_i} (1 - F(x_i' beta))^{1-y_i}.

Hence the log-likelihood function is

log L(beta) = sum_{i=1}^{n} log f(y_i | x_i)
            = sum_{i=1}^{n} log [ F(x_i' beta)^{y_i} (1 - F(x_i' beta))^{1-y_i} ]
            = sum_{i=1}^{n} [ y_i log F(x_i' beta) + (1 - y_i) log(1 - F(x_i' beta)) ]
            = sum_{y_i=1} log F(x_i' beta) + sum_{y_i=0} log(1 - F(x_i' beta)).

The MLE beta^ is the value of beta which maximizes log L(beta). Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
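As an illustration, the logit log-likelihood can be maximized numerically. The sketch below (numpy/scipy; the optimizer choice and starting value are implementation assumptions) evaluates log L(beta) for the logistic F and passes its negative to a generic optimizer.

import numpy as np
from scipy.optimize import minimize

def logit_mle(y, X):
    """MLE of beta in Pr(y=1|x) = F(x'beta) with F logistic."""
    def neg_loglik(b):
        u = X @ b
        # log F(u) = -log(1+exp(-u));  log(1-F(u)) = -log(1+exp(u))
        return np.sum(y * np.logaddexp(0.0, -u) + (1 - y) * np.logaddexp(0.0, u))
    b0 = np.zeros(X.shape[1])
    return minimize(neg_loglik, b0, method="BFGS").x

Replacing the two log terms with the normal log-CDF gives the corresponding probit sketch.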

19.2 Count Data

If y in {0, 1, 2, ...}, a typical approach is to employ Poisson regression. This model specifies that

Pr(y_i = k | x_i) = exp(-lambda_i) lambda_i^k / k!,   k = 0, 1, 2, ...
lambda_i = exp(x_i' beta).

The conditional density is the Poisson with parameter lambda_i. The functional form for lambda_i has been picked to ensure that lambda_i > 0.

The log-likelihood function is

log L(beta) = sum_{i=1}^{n} log f(y_i | x_i) = sum_{i=1}^{n} [ -exp(x_i' beta) + y_i x_i' beta - log(y_i!) ].

The MLE is the value beta^ which maximizes log L(beta).

Since

E(y_i | x_i) = lambda_i = exp(x_i' beta)

is the conditional mean, this motivates the label Poisson "regression."

Also observe that the model implies that

var(y_i | x_i) = lambda_i = exp(x_i' beta),

so the model imposes the restriction that the conditional mean and variance of y_i are the same. This may be considered restrictive. A generalization is the negative binomial.
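The Poisson log-likelihood translates directly into code. A minimal sketch (the optimizer and the use of gammaln for log(y!) are implementation choices, not part of the text):

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_mle(y, X):
    """MLE of beta in the Poisson regression E(y|x) = exp(x'beta)."""
    def neg_loglik(b):
        xb = X @ b
        # log(y_i!) = gammaln(y_i + 1)
        return -np.sum(-np.exp(xb) + y * xb - gammaln(y + 1.0))
    return minimize(neg_loglik, np.zeros(X.shape[1]), method="BFGS").x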

19.3 Censored Data

The idea of censoring is that some data above or below a threshold are mis-reported at the threshold. Thus the model is that there is some latent process y_i* with unbounded support, but we observe only

y_i = y_i* if y_i* >= 0, and y_i = 0 if y_i* < 0.   (19.1)

(This is written for the case of the threshold being zero; any known value can substitute.) The observed data y_i therefore come from a mixed continuous/discrete distribution.

Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher) of data at the boundary of the sample support. The censoring process may be explicit in data collection, or it may be a by-product of economic constraints.

An example of a data collection censoring is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.

The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model

y_i* = x_i' beta + e_i
e_i ~ iid N(0, sigma^2)

with the observed variable y_i generated by the censoring equation (19.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.

The naive approach to estimate beta is to regress y_i on x_i. This does not work because regression estimates E(y_i | x_i), not E(y_i* | x_i) = x_i' beta, and the latter is of interest. Thus OLS will be biased for the parameter of interest beta.

[Note: it is still possible to estimate E(y_i | x_i) by LS techniques. The Tobit framework postulates that this is not inherently interesting, that the parameter beta is defined by an alternative statistical structure.]

Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is

Pr(y_i = 0 | x_i) = Pr(y_i* < 0 | x_i)
                  = Pr(x_i' beta + e_i < 0 | x_i)
                  = Pr( e_i/sigma < -x_i' beta/sigma | x_i )
                  = Phi( -x_i' beta/sigma ).

The conditional density function above zero is normal:

(1/sigma) phi( (y - x_i' beta)/sigma ),   y > 0.

Therefore, the density function for y >= 0 can be written as

f(y | x_i) = Phi( -x_i' beta/sigma )^{1(y=0)} [ (1/sigma) phi( (y - x_i' beta)/sigma ) ]^{1(y>0)},

where 1(.) is the indicator function. Hence the log-likelihood is a mixture of the probit and the normal:

log L(beta) = sum_{i=1}^{n} log f(y_i | x_i)
            = sum_{y_i=0} log Phi( -x_i' beta/sigma ) + sum_{y_i>0} log [ (1/sigma) phi( (y_i - x_i' beta)/sigma ) ].

The MLE (beta^, sigma^) is the value which maximizes log L(beta).
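The Tobit log-likelihood above can be coded directly. A sketch that parameterizes sigma through its logarithm so a generic optimizer works on an unconstrained space (that reparameterization, and the optimizer choice, are implementation assumptions):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_mle(y, X):
    """MLE of (beta, sigma) for the censored (Tobit) regression model."""
    def neg_loglik(theta):
        b, log_s = theta[:-1], theta[-1]
        s = np.exp(log_s)
        xb = X @ b
        cens = (y == 0)
        ll = np.sum(norm.logcdf(-xb[cens] / s))                        # censored terms
        ll += np.sum(norm.logpdf((y[~cens] - xb[~cens]) / s) - log_s)  # uncensored terms
        return -ll
    theta0 = np.append(np.zeros(X.shape[1]), 0.0)
    res = minimize(neg_loglik, theta0, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])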

19.4 Sample Selection

The problem of sample selection arises when the sample is a non-random selection of potential observations. This occurs when the observed data is systematically different from the population of interest. For example, if you ask for volunteers for an experiment, and they wish to extrapolate the effects of the experiment on a general population, you should worry that the people who volunteer may be systematically different from the general population. This has great relevance for the evaluation of anti-poverty and job-training programs, where the goal is to assess the effect of training on the general population, not just on the volunteers.

A simple sample selection model can be written as the latent model

y_i = x_i' beta + e_1i
T_i = 1( z_i' gamma + e_0i > 0 )

where 1(.) is the indicator function. The dependent variable y_i is observed if (and only if) T_i = 1. Else it is unobserved.

For example, y_i could be a wage, which can be observed only if a person is employed. The equation for T_i is an equation specifying the probability that the person is employed.

The model is often completed by specifying that the errors are jointly normal

( e_0i, e_1i )' ~ N( 0, [ 1  rho ; rho  sigma^2 ] ).

Under the normality assumption,

e_1i = rho e_0i + v_i,

where v_i is independent of e_0i ~ N(0, 1). A useful fact about the standard normal distribution is that

E( e_0i | e_0i > -x ) = lambda(x) = phi(x)/Phi(x),

the inverse Mills ratio.

The naive estimator of beta is OLS regression of y_i on x_i for those observations for which y_i is available. The problem is that this is equivalent to conditioning on the event {T_i = 1}. However,

E( e_1i | T_i = 1, z_i ) = E( e_1i | {e_0i > -z_i' gamma}, z_i )
                         = rho E( e_0i | {e_0i > -z_i' gamma}, z_i ) + E( v_i | {e_0i > -z_i' gamma}, z_i )
                         = rho lambda( z_i' gamma ),

which is generally non-zero. Thus

e_1i = rho lambda( z_i' gamma ) + u_i,

where

E( u_i | T_i = 1, z_i ) = 0.

Hence

y_i = x_i' beta + rho lambda( z_i' gamma ) + u_i   (19.2)

is a valid regression equation for the observations for which T_i = 1.

Heckman (1979) observed that we could consistently estimate beta and rho from this equation, if gamma were known. It is unknown, but also can be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows:

- Estimate gamma^ from a Probit, using regressors z_i. The binary dependent variable is T_i.
- Estimate (beta^, rho^) from OLS of y_i on x_i and lambda( z_i' gamma^ ).
- The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.

The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some modern econometric research is exploring how to relax the normality assumption.

The estimator can also work quite poorly if lambda( z_i' gamma^ ) does not have much in-sample variation. This can happen if the Probit equation does not explain much about the selection choice. Another potential problem is that if z_i = x_i, then lambda( z_i' gamma^ ) can be highly collinear with x_i, so the second step OLS estimator will not be able to precisely estimate beta. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in z_i which is not in x_i. If this is valid, it will ensure that lambda( z_i' gamma^ ) is not collinear with x_i, and hence improve the second stage estimator's precision.
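A sketch of the two-step "Heckit" computation. It uses a probit MLE analogous to the earlier logit example, then second-step OLS on the selected sample; the naive second-step standard errors are not reported since, as noted above, they are incorrect without a two-step correction. (Array names and the optimizer are assumptions for illustration.)

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def heckit_two_step(y, X, Z, T):
    """y observed only when T == 1; X outcome regressors; Z selection regressors."""
    # Step 1: probit of T on Z
    def neg_loglik(g):
        u = Z @ g
        return -np.sum(T * norm.logcdf(u) + (1 - T) * norm.logcdf(-u))
    g = minimize(neg_loglik, np.zeros(Z.shape[1]), method="BFGS").x
    # Step 2: OLS of y on (X, inverse Mills ratio), selected observations only
    lam = norm.pdf(Z @ g) / norm.cdf(Z @ g)        # lambda(z'gamma_hat)
    sel = (T == 1)
    W = np.column_stack([X[sel], lam[sel]])
    coef = np.linalg.lstsq(W, y[sel], rcond=None)[0]
    return coef[:-1], coef[-1]                     # (beta_hat, coefficient on lambda)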

Chapter 20

Panel Data

A panel is a set of observations on individuals, collected over time. An observation is the pair {y_it, x_it}, where the i subscript denotes the individual, and the t subscript denotes time. A panel may be balanced,

{y_it, x_it} : t = 1, ..., T;  i = 1, ..., n,

or unbalanced,

{y_it, x_it} : for i = 1, ..., n,  t = t_i, ..., t-bar_i.

20.1 Individual-Effects Model

The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression

y_it = x_it' beta + u_i + e_it.

The typical maintained assumptions are that the individuals i are mutually independent, that u_i and e_it are independent, that e_it is iid across individuals and time, and that e_it is uncorrelated with x_it.

OLS of y_it on x_it is called pooled estimation. It is consistent if

E( x_it u_i ) = 0.   (20.1)

If this condition fails, then OLS is inconsistent. (20.1) fails if the individual-specific unobserved effect u_i is correlated with the observed explanatory variables x_it. This is often believed to be plausible if u_i is an omitted variable.

If (20.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears a poor estimation choice.

Condition (20.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.

20.2 Fixed Effects

This is the most common technique for estimation of non-dynamic linear panel regressions.

The motivation is to allow u_i to be arbitrary, and have arbitrary correlation with x_it. The goal is to eliminate u_i from the estimator, and thus achieve invariance.

There are several derivations of the estimator.

First, let

d_ij = 1 if i = j, and d_ij = 0 otherwise,

and

d_i = ( d_i1, ..., d_in )',

an n x 1 dummy vector with a "1" in the ith place. Let

u = ( u_1, ..., u_n )'.

Then note that

u_i = d_i' u,

and

y_it = x_it' beta + d_i' u + e_it.   (20.2)

Observe that

E( e_it | x_it, d_i ) = 0,

so (20.2) is a valid regression, with d_i as a regressor along with x_it.

OLS on (20.2) yields the estimator (beta^, u^). Conventional inference applies.

Observe that

- This is generally consistent.
- If x_it contains an intercept, it will be collinear with d_i, so the intercept is typically omitted from x_it.
- Any regressor in x_it which is constant over time for all individuals (e.g., their gender) will be collinear with d_i, so will have to be omitted.
- There are n + k regression parameters, which is quite large as typically n is very large.

Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of beta proceeds by the FWL theorem. Stacking the observations together,

y = X beta + D u + e,

then by the FWL theorem,

beta^ = ( X'(I - P_D) X )^{-1} ( X'(I - P_D) y ) = ( X*' X* )^{-1} ( X*' y* ),

where

y* = y - D(D'D)^{-1} D' y
X* = X - D(D'D)^{-1} D' X.

Since the regression of y_it on d_i is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean y-bar_i, and the residual is the demeaned value

y*_it = y_it - y-bar_i.

The fixed effects estimator beta^ is OLS of y*_it on x*_it, the dependent variable and regressors in deviation-from-mean form.

Another derivation of the estimator is to take the equation

y_it = x_it' beta + u_i + e_it,

and then take individual-specific means by taking the average for the ith individual:

(1/T_i) sum_{t=t_i}^{t-bar_i} y_it = (1/T_i) sum_{t=t_i}^{t-bar_i} x_it' beta + u_i + (1/T_i) sum_{t=t_i}^{t-bar_i} e_it

or

y-bar_i = x-bar_i' beta + u_i + e-bar_i.

Subtracting, we find

y*_it = x*_it' beta + e*_it,

which is free of the individual effect u_i.
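The within transformation is easy to compute without ever forming the large dummy matrix D. A sketch for a balanced panel stored as (n, T) and (n, T, k) arrays (the storage layout is an assumption for illustration):

import numpy as np

def fixed_effects(y, X):
    """Within (fixed effects) estimator.
    y : (n, T) array, X : (n, T, k) array."""
    y_dm = y - y.mean(axis=1, keepdims=True)      # y_it - ybar_i
    X_dm = X - X.mean(axis=1, keepdims=True)      # x_it - xbar_i
    Ys = y_dm.reshape(-1)
    Xs = X_dm.reshape(-1, X.shape[2])
    return np.linalg.solve(Xs.T @ Xs, Xs.T @ Ys)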

20.3 Dynamic Panel Regression

A dynamic panel regression includes a lagged dependent variable:

y_it = alpha y_{i,t-1} + x_it' beta + u_i + e_it.   (20.3)

Unfortunately, the fixed effects estimator is inconsistent, at least if T is held finite as n -> infinity. This is because the sample mean of y_{i,t-1} is correlated with that of e_it.

The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (20.3) eliminates the individual-specific effect:

Delta y_it = alpha Delta y_{i,t-1} + Delta x_it' beta + Delta e_it.   (20.4)

However, if e_it is iid, then Delta e_it is correlated with Delta y_{i,t-1}:

E( Delta y_{i,t-1} Delta e_it ) = E( (y_{i,t-1} - y_{i,t-2})(e_it - e_{i,t-1}) ) = -E( y_{i,t-1} e_{i,t-1} ) = -sigma_e^2,

so OLS on (20.4) is inconsistent.

But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as y_{t-2} is uncorrelated with Delta e_it. Thus values of y_{i,t-k}, k >= 2, are valid instruments.

Hence a valid estimator of alpha and beta is to estimate (20.4) by IV using y_{t-2} as an instrument for Delta y_{t-1} (which is just identified). Alternatively, GMM using y_{t-2} and y_{t-3} as instruments (which is overidentified, but loses a time-series observation).

A more sophisticated GMM estimator recognizes that for time-periods later in the sample, there are more instruments available, so the instrument list should be different for each equation. This is conveniently organized by the GMM principle, as this enables the moments from the different time-periods to be stacked together to create a list of all the moment conditions. A simple application of GMM yields the parameter estimates and standard errors.
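A sketch of the just-identified IV estimator of (20.4), using y_{i,t-2} as an instrument for Delta y_{i,t-1} (a simple Anderson-Hsiao-type computation; the array layout is an assumption, and the richer GMM versions with period-specific instrument lists are not shown):

import numpy as np

def dynamic_panel_iv(y, X):
    """IV estimation of  dy_it = alpha*dy_{i,t-1} + dx_it'beta + de_it,
    instrumenting dy_{i,t-1} with the level y_{i,t-2}.
    y : (n, T) array, X : (n, T, k) array."""
    dy, dX = np.diff(y, axis=1), np.diff(X, axis=1)
    dep, reg, inst = [], [], []
    n, T = y.shape
    for i in range(n):
        for t in range(2, T - 1):
            dep.append(dy[i, t])                                    # Delta y_{i,t+1}
            reg.append(np.concatenate(([dy[i, t - 1]], dX[i, t])))  # Delta y_{i,t}, Delta x
            inst.append(np.concatenate(([y[i, t - 1]], dX[i, t])))  # level y_{i,t-1} as instrument
    Yv, Rv, Zv = np.asarray(dep), np.asarray(reg), np.asarray(inst)
    return np.linalg.solve(Zv.T @ Rv, Zv.T @ Yv)    # just-identified IV: (Z'R)^{-1} Z'Y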

Chapter 21

Nonparametric Density Estimation

21.1 Kernel Density Estimation

Let X be a random variable with continuous distribution F(x) and density f(x) = (d/dx) F(x). The goal is to estimate f(x) from a random sample (X_1, ..., X_n). While F(x) can be estimated by the EDF F^(x) = n^{-1} sum_{i=1}^{n} 1(X_i <= x), we cannot define (d/dx) F^(x) since F^(x) is a step function. The standard nonparametric method to estimate f(x) is based on smoothing using a kernel.

While we are typically interested in estimating the entire function f(x), we can simply focus on the problem where x is a specific fixed number, and then see how the method generalizes to estimating the entire function.

A kernel function K(u) is a symmetric zero-mean density function. Three common choices are the Gaussian

K(u) = (1/sqrt(2 pi)) exp( -u^2/2 ),

the Epanechnikov

K(u) = (3/4)(1 - u^2) for |u| <= 1, and 0 for |u| > 1,

and the Biweight (or Quartic)

K(u) = (15/16)(1 - u^2)^2 for |u| <= 1, and 0 for |u| > 1.

In practice, the choice between these three rarely makes a meaningful difference in the estimates.

The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth h > 0. Let

K_h(u) = (1/h) K(u/h)

be the kernel K rescaled by the bandwidth h. The kernel density estimator of f(x) is

f^(x) = (1/n) sum_{i=1}^{n} K_h( X_i - x ).

This estimator is the average of a set of weights. If a large number of the observations X_i are near x, then the weights are relatively large and f^(x) is larger. Conversely, if only a few X_i are near x, then the weights are small and f^(x) is small. The bandwidth h controls the meaning of "near."

Interestingly, f^(x) is a valid density. That is, f^(x) >= 0 for all x, and

integral f^(x) dx = integral (1/n) sum_{i=1}^{n} K_h(X_i - x) dx
                  = (1/n) sum_{i=1}^{n} integral K_h(X_i - x) dx
                  = (1/n) sum_{i=1}^{n} integral K(u) du = 1,

where the second-to-last equality makes the change-of-variables u = (X_i - x)/h.

We can also calculate the moments of the density f^(x). The mean is

integral x f^(x) dx = (1/n) sum_{i=1}^{n} integral x K_h(X_i - x) dx
                    = (1/n) sum_{i=1}^{n} integral (X_i + uh) K(u) du
                    = (1/n) sum_{i=1}^{n} X_i integral K(u) du + (1/n) sum_{i=1}^{n} h integral u K(u) du
                    = (1/n) sum_{i=1}^{n} X_i,

the sample mean of the X_i, where the second-to-last equality used the change-of-variables u = (X_i - x)/h which has Jacobian h.

The second moment of the estimated density is

integral x^2 f^(x) dx = (1/n) sum_{i=1}^{n} integral x^2 K_h(X_i - x) dx
                      = (1/n) sum_{i=1}^{n} integral (X_i + uh)^2 K(u) du
                      = (1/n) sum_{i=1}^{n} X_i^2 + (2/n) sum_{i=1}^{n} X_i h integral u K(u) du + (1/n) sum_{i=1}^{n} h^2 integral u^2 K(u) du
                      = (1/n) sum_{i=1}^{n} X_i^2 + h^2 sigma_K^2,

where

sigma_K^2 = integral u^2 K(u) du

is the variance of the kernel. It follows that the variance of the density f^(x) is

integral x^2 f^(x) dx - ( integral x f^(x) dx )^2 = (1/n) sum_{i=1}^{n} X_i^2 + h^2 sigma_K^2 - ( (1/n) sum_{i=1}^{n} X_i )^2
                                                  = sigma^2^ + h^2 sigma_K^2.

Thus the variance of the estimated density is inflated by the factor h^2 sigma_K^2 relative to the sample moment.
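The estimator f^(x) = n^{-1} sum_i K_h(X_i - x) translates directly into code. A minimal sketch with the Gaussian kernel (the evaluation grid and the bandwidth argument are assumptions for illustration):

import numpy as np

def kde(X, grid, h):
    """Kernel density estimate with the Gaussian kernel K(u) = phi(u)."""
    X = np.asarray(X, dtype=float)
    u = (X[None, :] - np.asarray(grid)[:, None]) / h    # (X_i - x)/h for each grid point x
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h                            # average of K_h(X_i - x)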

21.2 Asymptotic MSE for Kernel Estimates

For fixed x and bandwidth h, observe that

E K_h(X - x) = integral K_h(z - x) f(z) dz = integral K_h(uh) f(x + hu) h du = integral K(u) f(x + hu) du.

The second equality uses the change-of-variables u = (z - x)/h. The last expression shows that the expected value is an average of f(z) locally about x.

This integral (typically) is not analytically solvable, so we approximate it using a second-order Taylor expansion of f(x + hu) in the argument hu about hu = 0, which is valid as h -> 0. Thus

f(x + hu) ~= f(x) + f'(x) hu + (1/2) f''(x) h^2 u^2

and therefore

E K_h(X - x) ~= integral K(u) [ f(x) + f'(x) hu + (1/2) f''(x) h^2 u^2 ] du
             = f(x) integral K(u) du + f'(x) h integral K(u) u du + (1/2) f''(x) h^2 integral K(u) u^2 du
             = f(x) + (1/2) f''(x) h^2 sigma_K^2.

The bias of f^(x) is then

Bias(x) = E f^(x) - f(x) = (1/n) sum_{i=1}^{n} E K_h(X_i - x) - f(x) = (1/2) f''(x) h^2 sigma_K^2.

We see that the bias of f^(x) at x depends on the second derivative f''(x). The sharper the derivative, the greater the bias. Intuitively, the estimator f^(x) smooths data local to X_i = x, so is estimating a smoothed version of f(x). The bias results from this smoothing, and is larger the greater the curvature in f(x).

We now examine the variance of f^(x). Since it is an average of iid random variables, using first-order Taylor approximations and the fact that n^{-1} is of smaller order than (nh)^{-1},

var(x) = (1/n) var( K_h(X_i - x) )
       = (1/n) E K_h(X_i - x)^2 - (1/n) ( E K_h(X_i - x) )^2
       ~= (1/(n h^2)) integral K( (z - x)/h )^2 f(z) dz - (1/n) f(x)^2
       = (1/(nh)) integral K(u)^2 f(x + hu) du
       ~= ( f(x)/(nh) ) integral K(u)^2 du
       = f(x) R(K) / (nh),

where R(K) = integral K(u)^2 du is called the roughness of K.

Together, the asymptotic mean-squared error (AMSE) for fixed x is the sum of the approximate squared bias and approximate variance

AMSE_h(x) = (1/4) f''(x)^2 h^4 sigma_K^4 + f(x) R(K)/(nh).

A global measure of precision is the asymptotic mean integrated squared error (AMISE)

AMISE_h = integral AMSE_h(x) dx = (h^4 sigma_K^4 R(f''))/4 + R(K)/(nh),   (21.1)

where R(f'') = integral ( f''(x) )^2 dx is the roughness of f''. Notice that the first term (the squared bias) is increasing in h and the second term (the variance) is decreasing in nh. Thus for the AMISE to decline with n, we need h -> 0 but nh -> infinity. That is, h must tend to zero, but at a slower rate than n^{-1}.

Equation (21.1) is an asymptotic approximation to the MSE. We define the asymptotically optimal bandwidth h_0 as the value which minimizes this approximate MSE. That is,

h_0 = argmin_h AMISE_h.

It can be found by solving the first order condition

(d/dh) AMISE_h = h^3 sigma_K^4 R(f'') - R(K)/(n h^2) = 0,

yielding

h_0 = ( R(K) / ( sigma_K^4 R(f'') ) )^{1/5} n^{-1/5}.   (21.2)

This solution takes the form h_0 = c n^{-1/5} where c is a function of K and f, but not of n. We thus say that the optimal bandwidth is of order O(n^{-1/5}). Note that this h declines to zero, but at a very slow rate.

In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (21.2) depends on R(K), sigma_K^2, and R(f''). The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.

K              sigma_K^2 = integral u^2 K(u) du     R(K) = integral K(u)^2 du
Gaussian       1                                     1/(2 sqrt(pi))
Epanechnikov   1/5                                   3/5
Biweight       1/7                                   5/7

An obvious difficulty is that R(f'') is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (21.2) but replaces R(f'') with sigma^{-5} R(phi''), where phi is the N(0,1) density and sigma^2^ is an estimate of sigma^2 = var(X). This choice for h gives an optimal rule when f(x) is normal, and gives a nearly optimal rule when f(x) is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb h can be quite inefficient. We can calculate that R(phi'') = 3/(8 sqrt(pi)). Together with the above table, we find the reference rules for the three kernel functions introduced earlier:

Gaussian kernel: h_rule = 1.06 sigma^ n^{-1/5}
Epanechnikov kernel: h_rule = 2.34 sigma^ n^{-1/5}
Biweight (Quartic) kernel: h_rule = 2.78 sigma^ n^{-1/5}
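The reference rules translate directly into code. A minimal sketch (the choice of sample standard deviation as the estimate of sigma is the only assumption):

import numpy as np

def rule_of_thumb_bandwidth(X, kernel="gaussian"):
    """Silverman-type reference bandwidths h_rule = c * sigma_hat * n^{-1/5}."""
    c = {"gaussian": 1.06, "epanechnikov": 2.34, "biweight": 2.78}[kernel]
    X = np.asarray(X, dtype=float)
    return c * X.std(ddof=1) * len(X) ** (-0.2)

The resulting h can be passed to a kernel density routine such as the kde sketch given earlier.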

Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate f^(x). There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate R(f'') in a first step, and then plug this estimate into the formula (21.2). This is more treacherous than may first appear, as the optimal h for estimation of the roughness R(f'') is quite different than the optimal h for estimation of f(x). However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of h is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data has some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths, leading to dramatically undersmoothed estimates. Fortunately there are remedies, known as smoothed cross-validation, which is a close cousin of the bootstrap.

Appendix A

Matrix Algebra

A.1 Notation

A vector a is a k x 1 list of numbers, typically arranged in a column. We write this as a = (a_1, a_2, ..., a_k)'. Equivalently, a vector is an element of Euclidean k space, written a in R^k. If k = 1 then a is a scalar.

A matrix A is a k x r rectangular array of numbers, written as

A = [ a_11 a_12 ... a_1r ; a_21 a_22 ... a_2r ; ... ; a_k1 a_k2 ... a_kr ],

where the semicolons separate rows. By convention a_ij refers to the element in the ith row and jth column of A. If r = 1 then A is a column vector. If k = 1 then A is a row vector. If r = k = 1, then A is a scalar.

A standard convention (which we will follow in this text whenever possible) is to denote scalars by lower-case italics (a), vectors by lower-case bold italics (a), and matrices by upper-case bold italics (A). Sometimes a matrix A is denoted by the symbol (a_ij).

A matrix can be written as a set of column vectors or as a set of row vectors. That is,

A = [ a_1 a_2 ... a_r ] = [ alpha_1' ; alpha_2' ; ... ; alpha_k' ],

where a_i = (a_1i, ..., a_ki)' are the columns and alpha_j' = (a_j1, a_j2, ..., a_jr) are the rows.

The transpose of a matrix, denoted A', is obtained by flipping the matrix on its diagonal. Thus

A' = [ a_11 a_21 ... a_k1 ; a_12 a_22 ... a_k2 ; ... ; a_1r a_2r ... a_kr ].

Note that if A is k x r, then A' is r x k. If a is a k x 1 vector, then a' is a 1 x k row vector. An alternative notation for the transpose of A is A^T.

A matrix is square if k = r. A square matrix is symmetric if A = A', which requires a_ij = a_ji. A square matrix is diagonal if the off-diagonal elements are all zero, so that a_ij = 0 if i != j. A square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.

An important diagonal matrix is the identity matrix, which has ones on the diagonal. The k x k identity matrix is denoted I_k = diag(1, 1, ..., 1).

A partitioned matrix takes the form

A = [ A_11 A_12 ... A_1r ; A_21 A_22 ... A_2r ; ... ; A_k1 A_k2 ... A_kr ],

where the A_ij denote matrices, vectors and/or scalars.

A.2 Matrix Addition

If the matrices A = (a_ij) and B = (b_ij) are of the same order, we define the sum

A + B = (a_ij + b_ij).

Matrix addition follows the commutative and associative laws:

A + B = B + A
A + (B + C) = (A + B) + C.

A.3 Matrix Multiplication

If A is k x r and c is real, we define their product as

Ac = cA = (a_ij c).

If a and b are both k x 1, then their inner product is

a'b = a_1 b_1 + a_2 b_2 + ... + a_k b_k = sum_{j=1}^{k} a_j b_j.

Note that a'b = b'a.

If A is k x r and B is r x s, so that the number of columns of A equals the number of rows of B, we say that A and B are conformable. In this event the matrix product AB is defined. Writing A as a set of row vectors and B as a set of column vectors (each of length r), the matrix product is defined as

AB = [ a_1' ; a_2' ; ... ; a_k' ] [ b_1 b_2 ... b_s ]
   = [ a_1'b_1 a_1'b_2 ... a_1'b_s ; a_2'b_1 a_2'b_2 ... a_2'b_s ; ... ; a_k'b_1 a_k'b_2 ... a_k'b_s ].

Matrix multiplication is associative and distributive:

A(BC) = (AB)C
A(B + C) = AB + AC.

An alternative way to write the matrix product is to use matrix partitions. For example,

AB = [ A_11 A_12 ; A_21 A_22 ] [ B_11 B_12 ; B_21 B_22 ]
   = [ A_11 B_11 + A_12 B_21   A_11 B_12 + A_12 B_22 ; A_21 B_11 + A_22 B_21   A_21 B_12 + A_22 B_22 ].

As another example,

AB = [ A_1 A_2 ... A_r ] [ B_1 ; B_2 ; ... ; B_r ] = A_1 B_1 + A_2 B_2 + ... + A_r B_r = sum_{j=1}^{r} A_j B_j.

The k x r matrix A, r <= k, is called orthogonal if A'A = I_r.

A.4 Trace

The trace of a k x k square matrix A is the sum of its diagonal elements

tr(A) = sum_{i=1}^{k} a_ii.

Some straightforward properties for square matrices A and B and real c are

tr(cA) = c tr(A)
tr(A') = tr(A)
tr(A + B) = tr(A) + tr(B)
tr(I_k) = k.

Also, for k x r A and r x k B we have

tr(AB) = tr(BA).   (A.1)

Indeed,

tr(AB) = tr [ a_1'b_1 a_1'b_2 ... a_1'b_k ; a_2'b_1 a_2'b_2 ... a_2'b_k ; ... ; a_k'b_1 a_k'b_2 ... a_k'b_k ]
       = sum_{i=1}^{k} a_i'b_i = sum_{i=1}^{k} b_i'a_i = tr(BA).

A.5 Rank and Inverse

The rank of the k x r matrix (r <= k)

A = [ a_1 a_2 ... a_r ]

is the number of linearly independent columns a_j, and is written as rank(A). We say that A has full rank if rank(A) = r.

A square k x k matrix A is said to be nonsingular if it has full rank, e.g. rank(A) = k. This means that there is no k x 1 c != 0 such that Ac = 0.

If a square k x k matrix A is nonsingular then there exists a unique k x k matrix A^{-1} called the inverse of A which satisfies

A A^{-1} = A^{-1} A = I_k.

For non-singular A and C, some useful properties include

(A^{-1})' = (A')^{-1}
(AC)^{-1} = C^{-1} A^{-1}
(A + C)^{-1} = A^{-1} ( A^{-1} + C^{-1} )^{-1} C^{-1}
A^{-1} - (A + C)^{-1} = A^{-1} ( A^{-1} + C^{-1} )^{-1} A^{-1}.

Another useful result for non-singular A is known as the Woodbury matrix identity

(A + BCD)^{-1} = A^{-1} - A^{-1} B C ( C + C D A^{-1} B C )^{-1} C D A^{-1}.   (A.2)

In particular, for C = -1, B = b and D = b' for vector b we find what is known as the Sherman-Morrison formula

(A - b b')^{-1} = A^{-1} + ( 1 - b' A^{-1} b )^{-1} A^{-1} b b' A^{-1}.   (A.3)

The following fact about inverting partitioned matrices is quite useful:

[ A_11 A_12 ; A_21 A_22 ]^{-1} = [ A_{11.2}^{-1}   -A_{11.2}^{-1} A_12 A_22^{-1} ; -A_{22.1}^{-1} A_21 A_11^{-1}   A_{22.1}^{-1} ]   (A.4)

where A_{11.2} = A_11 - A_12 A_22^{-1} A_21 and A_{22.1} = A_22 - A_21 A_11^{-1} A_12. There are alternative algebraic representations for the components. For example, using the Woodbury matrix identity you can show the following alternative expressions, writing A^{ij} for the blocks of the inverse:

A^{11} = A_11^{-1} + A_11^{-1} A_12 A_{22.1}^{-1} A_21 A_11^{-1}
A^{22} = A_22^{-1} + A_22^{-1} A_21 A_{11.2}^{-1} A_12 A_22^{-1}
A^{12} = -A_11^{-1} A_12 A_{22.1}^{-1}
A^{21} = -A_22^{-1} A_21 A_{11.2}^{-1}.

Even if a matrix A does not possess an inverse, we can still define the Moore-Penrose generalized inverse A^- as the matrix which satisfies

A A^- A = A
A^- A A^- = A^-
A A^- is symmetric
A^- A is symmetric.

For any matrix A, the Moore-Penrose generalized inverse A^- exists and is unique. For example, if

A = [ A_11 0 ; 0 0 ]

and A_11^{-1} exists, then

A^- = [ A_11^{-1} 0 ; 0 0 ].

A.6 Determinant

While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let A = (a_ij) be a general k x k matrix. Let pi = (j_1, ..., j_k) denote a permutation of (1, ..., k). There are k! such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order (1, ..., k)), and let epsilon_pi = +1 if this count is even and epsilon_pi = -1 if the count is odd. Then the determinant of A is defined as

det A = sum_pi epsilon_pi a_{1 j_1} a_{2 j_2} ... a_{k j_k}.

For example, if A is 2 x 2, then the two permutations of (1, 2) are (1, 2) and (2, 1), for which epsilon_(1,2) = 1 and epsilon_(2,1) = -1. Thus

det A = epsilon_(1,2) a_11 a_22 + epsilon_(2,1) a_21 a_12 = a_11 a_22 - a_12 a_21.

Some properties include

- det(A) = det(A')
- det(cA) = c^k det A
- det(A^{-1}) = (det A)^{-1}
- det [ A B ; C D ] = (det D) det( A - B D^{-1} C ) if det D != 0
- If A is triangular (upper or lower), then det A = prod_{i=1}^{k} a_ii
- If A is orthogonal, then det A = +1 or -1.

A.7 Eigenvalues

The characteristic equation of a k x k square matrix A is

det( A - lambda I_k ) = 0.

The left side is a polynomial of degree k in lambda so it has exactly k roots, which are not necessarily distinct and may be real or complex. They are called the latent roots or characteristic roots or eigenvalues of A. If lambda_i is an eigenvalue of A, then A - lambda_i I_k is singular so there exists a non-zero vector h_i such that

( A - lambda_i I_k ) h_i = 0.

The vector h_i is called a latent vector or characteristic vector or eigenvector of A corresponding to lambda_i.

We now state some useful properties. Let lambda_i and h_i, i = 1, ..., k denote the k eigenvalues and eigenvectors of a square matrix A. Let Lambda be a diagonal matrix with the characteristic roots in the diagonal, and let H = [ h_1 ... h_k ].

- det(A) = prod_{i=1}^{k} lambda_i
- tr(A) = sum_{i=1}^{k} lambda_i
- If A has distinct characteristic roots, there exists a nonsingular matrix P such that A = P^{-1} Lambda P and P A P^{-1} = Lambda.
- If A is symmetric, then A = H Lambda H' and H' A H = Lambda, and the characteristic roots are all real. A = H Lambda H' is called the spectral decomposition of a matrix.
- When the eigenvalues of k x k A are real they are written in descending order lambda_1 >= lambda_2 >= ... >= lambda_k. We also write lambda_min(A) = lambda_k = min{lambda_l} and lambda_max(A) = lambda_1 = max{lambda_l}.
- If A is non-singular, the characteristic roots of A^{-1} are lambda_1^{-1}, lambda_2^{-1}, ..., lambda_k^{-1}. If A is also symmetric, A^{-1} = H Lambda^{-1} H', with H^{-1} = H' and (H')^{-1} = H.

A.8 Positive Definiteness

We say that a k x k symmetric square matrix A is positive semi-definite if for all c != 0, c'Ac >= 0. This is written as A >= 0. We say that A is positive definite if for all c != 0, c'Ac > 0. This is written as A > 0.

Some properties include:

- If A = G'BG with B >= 0 and some matrix G, then A is positive semi-definite. (For any c != 0, c'Ac = alpha'B alpha >= 0 where alpha = Gc.) If G has full rank, then A is positive definite.
- If A is positive definite, then A is non-singular and A^{-1} exists. Furthermore, A^{-1} > 0.
- A > 0 if and only if it is symmetric and all its characteristic roots are positive.
- By the spectral decomposition, A = H Lambda H' where H'H = I and Lambda is diagonal with non-negative diagonal elements. All diagonal elements of Lambda are strictly positive if (and only if) A > 0.
- If A > 0 then A^{-1} = H Lambda^{-1} H'.
- If A >= 0 and rank(A) = r < k then A^- = H Lambda^- H', where A^- is the Moore-Penrose generalized inverse, and Lambda^- = diag( lambda_1^{-1}, lambda_2^{-1}, ..., lambda_r^{-1}, 0, ..., 0 ).
- If A >= 0 we can find a matrix B such that A = BB'. We call B a matrix square root of A. The matrix B need not be unique. One way to construct B is to use the spectral decomposition A = H Lambda H' where Lambda is diagonal, and then set B = H Lambda^{1/2}. There is a unique root B which is also positive semi-definite, B >= 0.

A square matrix A is idempotent if AA = A. If A is idempotent and symmetric then all its characteristic roots equal either zero or one, and A is thus positive semi-definite. To see this, note that we can write A = H Lambda H' where H is orthogonal and Lambda contains the (real) characteristic roots. Then

A = AA = H Lambda H' H Lambda H' = H Lambda^2 H'.

By the uniqueness of the characteristic roots, we deduce that Lambda^2 = Lambda and lambda_i^2 = lambda_i. Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent A takes the form

A = H [ I_{k-r} 0 ; 0 0 ] H'   (A.5)

with H'H = I_k. Additionally, tr(A) = rank(A).

If A is idempotent then I - A is also idempotent.

One useful fact is that if A is idempotent then for any conformable vector c,

c'Ac <= c'c   (A.6)
c'(I - A)c <= c'c.   (A.7)

To see this, note that

c'c = c'Ac + c'(I - A)c.

Since A and I - A are idempotent, they are both positive semi-definite, so both c'Ac and c'(I - A)c are non-negative. Thus they must satisfy (A.6)-(A.7).

A.9 Matrix Calculus

Let x = (x_1, ..., x_k) be k x 1 and g(x) = g(x_1, ..., x_k): R^k -> R. The vector derivative is

(d/dx) g(x) = ( (d/dx_1) g(x) ; ... ; (d/dx_k) g(x) )

and

(d/dx') g(x) = ( (d/dx_1) g(x), ..., (d/dx_k) g(x) ).

Some properties are now summarized.

- (d/dx)(a'x) = (d/dx)(x'a) = a
- (d/dx')(Ax) = A
- (d/dx)(x'Ax) = (A + A')x
- (d^2/dx dx')(x'Ax) = A + A'.

A.10 Kronecker Products and the Vec Operator

Let A = [ a_1 a_2 ... a_n ] be m x n. The vec of A, denoted by vec(A), is the mn x 1 vector

vec(A) = ( a_1 ; a_2 ; ... ; a_n ).

Let A = (a_ij) be an m x n matrix and let B be any matrix. The Kronecker product of A and B, denoted A (x) B, is the matrix

A (x) B = [ a_11 B a_12 B ... a_1n B ; a_21 B a_22 B ... a_2n B ; ... ; a_m1 B a_m2 B ... a_mn B ].

Some important properties are now summarized. These results hold for matrices for which all matrix multiplications are conformable.

- (A + B) (x) C = A (x) C + B (x) C
- (A (x) B)(C (x) D) = AC (x) BD
- A (x) (B (x) C) = (A (x) B) (x) C
- (A (x) B)' = A' (x) B'
- tr(A (x) B) = tr(A) tr(B)
- If A is m x m and B is n x n, det(A (x) B) = (det A)^n (det B)^m
- (A (x) B)^{-1} = A^{-1} (x) B^{-1}
- If A > 0 and B > 0, then A (x) B > 0
- vec(ABC) = (C' (x) A) vec(B)
- tr(ABCD) = vec(D')' (C' (x) A) vec(B).

A.11 Vector and Matrix Norms

The Euclidean norm of an m x 1 vector a is

||a|| = (a'a)^{1/2} = ( sum_{i=1}^{m} a_i^2 )^{1/2}.

The Euclidean norm of an m x n matrix A is

||A|| = ||vec(A)|| = ( tr(A'A) )^{1/2} = ( sum_{i=1}^{m} sum_{j=1}^{n} a_ij^2 )^{1/2}.

If an m x m matrix A is symmetric with eigenvalues lambda_l, l = 1, ..., m, then

||A|| = ( sum_{l=1}^{m} lambda_l^2 )^{1/2}.

To see this, by the spectral decomposition A = H Lambda H' with H'H = I and Lambda = diag{lambda_1, ..., lambda_m}, so

||A|| = ( tr( H Lambda H' H Lambda H' ) )^{1/2} = ( tr(Lambda^2) )^{1/2} = ( sum_{l=1}^{m} lambda_l^2 )^{1/2}.   (A.8)

A useful calculation is, for any m x 1 vectors a and b,

||a b'|| = ( tr( b a' a b' ) )^{1/2} = ( b'b a'a )^{1/2} = ||a|| ||b||,

and in particular

||a a'|| = ||a||^2.   (A.9)

There are other matrix norms. Another norm of frequent use is the spectral norm

||A||_S = ( lambda_max( A'A ) )^{1/2},

where lambda_max(B) denotes the largest eigenvalue of the matrix B.

A.12 Matrix Inequalities

Schwarz Inequality: For any m x 1 vectors a and b,

|a'b| <= ||a|| ||b||.   (A.10)

Schwarz Matrix Inequality: For any m x n matrices A and B,

||A'B|| <= ||A|| ||B||.   (A.11)

Triangle Inequality: For any m x n matrices A and B,

||A + B|| <= ||A|| + ||B||.   (A.12)

Trace Inequality: For any m x m matrices A and B such that A is symmetric and B >= 0,

tr(AB) <= lambda_max(A) tr(B).   (A.13)

Quadratic Inequality: For any m x 1 b and m x m symmetric matrix A,

b'Ab <= lambda_max(A) b'b.   (A.14)

Norm Inequality: For any m x m matrices A >= 0 and B >= 0,

||AB|| <= lambda_max(A) ||B||,   (A.15)

where lambda_max(A) is the largest eigenvalue of A.

Eigenvalue Product Inequality: For any k x k matrices A >= 0 and B >= 0, the eigenvalues lambda_l(AB) are real and satisfy

lambda_min(A) lambda_min(B) <= lambda_l(AB) = lambda_l( A^{1/2} B A^{1/2} ) <= lambda_max(A) lambda_max(B).   (A.16)

Jensen's Inequality: If g(.): R -> R is convex, then for any non-negative weights a_j such that sum_{j=1}^{m} a_j = 1, and any real numbers x_j,

g( sum_{j=1}^{m} a_j x_j ) <= sum_{j=1}^{m} a_j g(x_j).   (A.17)

In particular, setting a_j = 1/m,

g( (1/m) sum_{j=1}^{m} x_j ) <= (1/m) sum_{j=1}^{m} g(x_j).   (A.18)

Loeve's c_r Inequality: For r > 0,

| sum_{j=1}^{m} a_j |^r <= c_r sum_{j=1}^{m} |a_j|^r,   (A.19)

where c_r = 1 when r <= 1 and c_r = m^{r-1} when r >= 1.

c_2 Inequality: For any m x 1 vectors a and b,

(a + b)'(a + b) <= 2a'a + 2b'b.   (A.20)

Proof of Schwarz Inequality: First, suppose that ||b|| = 0. Then b = 0 and both |a'b| = 0 and ||a|| ||b|| = 0, so the inequality is true. Second, suppose that ||b|| > 0 and define c = a - b(b'b)^{-1}b'a. Since c is a vector, c'c >= 0. Thus

0 <= c'c = a'a - (a'b)^2/(b'b).

Rearranging, (a'b)^2 <= (a'a)(b'b). Taking the square root of each side yields the result.

Proof of Schwarz Matrix Inequality: Partition A = [a_1, ..., a_n] and B = [b_1, ..., b_n]. Then by partitioned matrix multiplication, the definition of the matrix Euclidean norm and the Schwarz inequality,

||A'B|| = || [ a_1'b_1 a_1'b_2 ... ; a_2'b_1 a_2'b_2 ... ; ... ] ||
       <= || [ ||a_1|| ||b_1||  ||a_1|| ||b_2|| ... ; ||a_2|| ||b_1||  ||a_2|| ||b_2|| ... ; ... ] ||
        = ( sum_{i=1}^{n} sum_{j=1}^{n} ||a_i||^2 ||b_j||^2 )^{1/2}
        = ( sum_{i=1}^{n} ||a_i||^2 )^{1/2} ( sum_{j=1}^{n} ||b_j||^2 )^{1/2}
        = ( sum_{i=1}^{n} sum_{j=1}^{m} a_ji^2 )^{1/2} ( sum_{i=1}^{n} sum_{j=1}^{m} b_ji^2 )^{1/2}
        = ||A|| ||B||.

Proof of Triangle Inequality: Let a = vec(A) and b = vec(B). Then by the definition of the matrix norm and the Schwarz Inequality,

||A + B||^2 = ||a + b||^2 = a'a + 2a'b + b'b <= a'a + 2|a'b| + b'b <= ||a||^2 + 2||a|| ||b|| + ||b||^2 = ( ||a|| + ||b|| )^2 = ( ||A|| + ||B|| )^2.

Proof of Trace Inequality: By the spectral decomposition for symmetric matrices, A = H Lambda H' where Lambda has the eigenvalues lambda_j of A on the diagonal and H is orthonormal. Define C = H'BH, which has non-negative diagonal elements C_jj since B is positive semi-definite. Then

tr(AB) = tr(Lambda C) = sum_{j=1}^{m} lambda_j C_jj <= max_j lambda_j sum_{j=1}^{m} C_jj = lambda_max(A) tr(C),

and the final step uses

tr(C) = tr(H'BH) = tr(HH'B) = tr(B)

since H is orthonormal. Thus tr(AB) <= lambda_max(A) tr(B) as stated.

Proof of Quadratic Inequality: In the Trace Inequality set B = bb' and note tr(AB) = b'Ab and tr(B) = b'b.

Proof of Norm Inequality: Using the Trace Inequality,

||AB|| = ( tr(BAAB) )^{1/2} = ( tr(AABB) )^{1/2} <= ( lambda_max(AA) tr(BB) )^{1/2} = lambda_max(A) ||B||.

The final equality holds since A >= 0 implies that lambda_max(AA) = lambda_max(A)^2.

Proof of Jensen's Inequality (A.17): By the definition of convexity, for any lambda in [0, 1],

g( lambda x_1 + (1 - lambda) x_2 ) <= lambda g(x_1) + (1 - lambda) g(x_2).   (A.21)

This implies

g( sum_{j=1}^{m} a_j x_j ) = g( a_1 x_1 + (1 - a_1) sum_{j=2}^{m} b_j x_j )
                          <= a_1 g(x_1) + (1 - a_1) g( sum_{j=2}^{m} b_j x_j ),

where b_j = a_j/(1 - a_1) and sum_{j=2}^{m} b_j = 1. By another application of (A.21) this is bounded by

a_1 g(x_1) + (1 - a_1) ( b_2 g(x_2) + (1 - b_2) g( sum_{j=2}^{m} c_j x_j ) )
 = a_1 g(x_1) + a_2 g(x_2) + (1 - a_1)(1 - b_2) g( sum_{j=2}^{m} c_j x_j ),

where c_j = b_j/(1 - b_2). By repeated application of (A.21) we obtain (A.17).

Proof of Loeve's c_r Inequality: For r >= 1 this is simply a rewriting of the finite form of Jensen's inequality (A.18) with g(u) = u^r. For r < 1, define b_j = |a_j| / sum_{j=1}^{m} |a_j|. The facts that 0 <= b_j <= 1 and r < 1 imply b_j <= b_j^r and thus

1 = sum_{j=1}^{m} b_j <= sum_{j=1}^{m} b_j^r,

which implies

( sum_{j=1}^{m} |a_j| )^r <= sum_{j=1}^{m} |a_j|^r.

The proof is completed by observing that

( sum_{j=1}^{m} a_j )^r <= ( sum_{j=1}^{m} |a_j| )^r.

Proof of c_2 Inequality: By the c_r inequality, (a_j + b_j)^2 <= 2a_j^2 + 2b_j^2. Thus

(a + b)'(a + b) = sum_{j=1}^{m} (a_j + b_j)^2 <= 2 sum_{j=1}^{m} a_j^2 + 2 sum_{j=1}^{m} b_j^2 = 2a'a + 2b'b.

Appendix B

Probability

B.1

Foundations

The set S of all possible outcomes of an experiment is called the sample space for the experiment. Take the simple example of tossing a coin. There are two outcomes, heads and tails, so

we can write S = fH; T g: If two coins are tossed in sequence, we can write the four outcomes as

S = fHH; HT; T H; T T g:

An event A is any collection of possible outcomes of an experiment. An event is a subset of S;

including S itself and the null set ;: Continuing the two coin example, one event is A = fHH; HT g;

the event that the rst coin is heads. We say that A and B are disjoint or mutually exclusive

if A \ B = ;: For example, the sets fHH; HT g and fT Hg are disjoint. Furthermore, if the sets

A1 ; A2 ; ::: are pairwise disjoint and [1

i=1 Ai = S; then the collection A1 ; A2 ; ::: is called a partition

of S:

The following are elementary set operations:

Union: A [ B = fx : x 2 A or x 2 Bg:

Intersection: A \ B = fx : x 2 A and x 2 Bg:

Complement: Ac = fx : x 2

= Ag:

The following are useful properties of set operations.

Communtatitivity: A [ B = B [ A;

A \ B = B \ A:

Associativity: A [ (B [ C) = (A [ B) [ C;

A \ (B \ C) = (A \ B) \ C:

Distributive Laws: A \ (B [ C) = (A \ B) [ (A \ C) ;

A [ (B \ C) = (A [ B) \ (A [ C) :

c

c

c

c

DeMorgans Laws: (A [ B) = A \ B ;

(A \ B) = Ac [ B c :

A probability function assigns probabilities (numbers between 0 and 1) to events A in S:

This is straightforward when S is countable; when S is uncountable we must be somewhat more

careful: A set B is called a sigma algebra (or Borel eld) if ; 2 B , A 2 B implies Ac 2 B, and

A1 ; A2 ; ::: 2 B implies [1

i=1 Ai 2 B. A simple example is f;; Sg which is known as the trivial sigma

algebra. For any sample space S; let B be the smallest sigma algebra which contains all of the open

sets in S: When S is countable, B is simply the collection of all subsets of S; including ; and S:

When S is the real line, then B is the collection of all open and closed intervals. We call B the

sigma algebra associated with S: We only dene probabilities for events contained in B.

We now can give the axiomatic denition of probability. Given S and B, a probability function

Pr satises Pr(S)

0 for all A 2 B, and if A1 ; A2 ; ::: 2 B are pairwise disjoint, then

P1= 1; Pr(A)

Pr ([1

A

)

=

Pr(A

):

i

i

i=1

i=1

Some important properties of the probability function include the following

Pr (;) = 0

Pr(A)

Pr (Ac ) = 1

Pr(A)

316

APPENDIX B. PROBABILITY

Pr (B \ Ac ) = Pr(B)

317

Pr(A \ B)

Pr (A [ B) = Pr(A) + Pr(B)

If A

B then Pr(A)

Pr(A \ B)

Pr(B)

Booles Inequality: Pr (A [ B)

Pr(A) + Pr(B)

Pr(A) + Pr(B)

For some elementary probability models, it is useful to have simple rules to count the number

of objects in a set. These counting rules are facilitated by using the binomial coe cients which are

dened for nonnegative integers n and r; n r; as

n

r

n!

:

r! (n r)!

When counting the number of objects in a set, there are two important distinctions. Counting

may be with replacement or without replacement. Counting may be ordered or unordered.

For example, consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is

without replacement if you are not allowed to select the same number twice, and is with replacement

if this is allowed. Counting is ordered or not depending on whether the sequential order of the

numbers is relevant to winning the lottery. Depending on these two distinctions, we have four

expressions for the number of objects (possible arrangements) of size r from n objects.

Without

Replacement

n!

(n r)!

n

r

Ordered

Unordered

With

Replacement

nr

n+r 1

r

In the lottery example, if counting is unordered and without replacement, the number of potential combinations is 49

6 = 13; 983; 816.

If Pr(B) > 0 the conditional probability of the event A given the event B is

Pr (A j B) =

Pr (A \ B)

:

Pr(B)

For any B; the conditional probability function is a valid probability function where S has been

replaced by B: Rearranging the denition, we can write

Pr(A \ B) = Pr (A j B) Pr(B)

which is often quite useful. We can say that the occurrence of B has no information about the

likelihood of event A when Pr (A j B) = Pr(A); in which case we nd

Pr(A \ B) = Pr (A) Pr(B)

(B.1)

We say that the events A and B are statistically independent when (B.1) holds. Furthermore,

we say that the collection of events A1 ; :::; Ak are mutually independent when for any subset

fAi : i 2 Ig;

!

\

Y

Pr

Ai =

Pr (Ai ) :

i2I

i2I

Theorem 3 (Bayes Rule). For any set B and any partition A1 ; A2 ; ::: of the sample space, then

for each i = 1; 2; :::

Pr (B j Ai ) Pr(Ai )

Pr (Ai j B) = P1

j=1 Pr (B j Aj ) Pr(Aj )

APPENDIX B. PROBABILITY

B.2

318

Random Variables

A random variable X is a function from a sample space S into the real line. This induces a

new sample space the real line and a new probability function on the real line. Typically, we

denote random variables by uppercase letters such as X; and use lower case letters such as x for

potential values and realized values. (This is in contrast to the notation adopted for most of the

textbook.) For a random variable X we dene its cumulative distribution function (CDF) as

F (x) = Pr (X

x) :

(B.2)

Sometimes we write this as FX (x) to denote that it is the CDF of X: A function F (x) is a CDF if

and only if the following three properties hold:

1. limx!

1 F (x)

2. F (x) is nondecreasing in x

3. F (x) is right-continuous

We say that the random variable X is discrete if F (x) is a step function. In the latter case,

the range of X consists of a countable set of real numbers 1 ; :::; r : The probability function for

X takes the form

Pr (X = j ) = j ;

j = 1; :::; r

(B.3)

Pr

where 0

1 and j=1 j = 1.

j

We say that the random variable X is continuous if F (x) is continuous in x: In this case Pr(X =

) = 0 for all 2 R so the representation (B.3) is unavailable. Instead, we represent the relative

probabilities by the probability density function (PDF)

f (x) =

so that

F (x) =

and

Pr (a

d

F (x)

dx

x

f (u)du

1

b) =

f (u)du:

These expressions only make sense if F (x) is dierentiable. While there are examples of continuous

random variables which do not possess a PDF, these cases are unusualRand are typically ignored.

1

A function f (x) is a PDF if and only if f (x) 0 for all x 2 R and 1 f (x)dx:

B.3

Expectation

For any measurable real function g; we dene the mean or expectation Eg(X) as follows. If

X is discrete,

r

X

Eg(X) =

g( j ) j ;

j=1

and if X is continuous

Eg(X) =

The latter is well dened and nite if

Z

1

1

g(x)f (x)dx:

(B.4)

APPENDIX B. PROBABILITY

319

I1 =

g(x)f (x)dx

g(x)>0

I2 =

g(x)f (x)dx

g(x)<0

Eg(X) = 1: If both I1 = 1 and I2 = 1 then Eg(X) is undened.

Since E (a + bX) = a + bEX; we say that expectation is a linear operator.

For m > 0; we dene the m0 th moment of X as EX m and the m0 th central moment as

E (X EX)m :

2 : We

Two special

moments are the mean = EX and variance 2 = E (X

)2 = EX 2

p

2

2

call =

the standard deviation of X: We can also write

= var(X). For example, this

2

allows the convenient expression var(a + bX) = b var(X):

The moment generating function (MGF) of X is

M ( ) = E exp ( X) :

The MGF does not necessarily exist. However, when it does and E jXjm < 1 then

dm

M( )

d m

= E (X m )

=0

More generally, the characteristic function (CF) of X is

where i =

C( ) = E exp (i X)

1 is the imaginary unit. The CF always exists, and when E jXjm < 1

dm

C( )

d m

The Lp norm, p

= im E (X m ) :

=0

kXkp = (E jXjp )1=p :

B.4

Gamma Function

> 0 as

Z 1

( )=

x

exp ( x) :

(1 + ) = ( )

so for positive integers n;

(n) = (n

1)!

(1) = 1

and

1

= 1=2 :

2

Sterlings formula is an expansion for the its logarithm

log ( ) =

1

log(2 ) +

2

1

2

log

z+

1

12

1

360

1

1260

APPENDIX B. PROBABILITY

B.5

320

Common Distributions

Bernoulli

Pr (X = x) = px (1

p)1

x = 0; 1;

x = 0; 1; :::; n;

EX = p

var(X) = p(1

p)

Binomial

n x

p (1

x

EX = np

p)n

Pr (X = x) =

var(X) = np(1

p)

Geometric

Pr (X = x) = p(1 p)x

1

EX =

p

1 p

var(X) =

p2

x = 1; 2; :::;

Multinomial

n!

px1 px2

x1 !x2 ! xm ! 1 2

= n;

pxmm ;

Pr (X1 = x1 ; X2 = x2 ; :::; Xm = xm ) =

x1 +

+ xm

p1 +

+ pm = 1

EXi = pi

var(Xi ) = npi (1

cov (Xi ; Xj ) =

pi )

npi pj

Negative Binomial

Pr (X = x) =

EX =

var(X) =

(r + x) r

p (1

x! (r)

r (1 p)

p

r (1 p)

p2

p)x

x = 0; 1; 2; :::;

Poisson

Pr (X = x) =

exp (

)

x!

x = 0; 1; 2; :::;

EX =

var(X) =

We now list some important continuous distributions.

>0

APPENDIX B. PROBABILITY

321

Beta

( + )

x

( ) ( )

f (x) =

=

(1

x)

1;

> 0;

>0

var(X) =

+ 1) ( + )2

( +

Cauchy

1

;

(1 + x2 )

EX = 1

f (x) =

1<x<1

var(X) = 1

Exponential

1

f (x) =

exp

x < 1;

>0

EX =

2

var(X) =

Logistic

exp ( x)

;

(1 + exp ( x))2

EX = 0

1 < x < 1;

f (x) =

var(X) =

Lognormal

f (x) =

(log x

2

1

p

exp

2 x

2

=2

var(X) = exp 2 + 2

EX = exp

)2

2

exp 2 +

x < 1;

>0

Pareto

f (x) =

EX =

+1

x < 1;

> 0;

>1

2

var(X) =

1)2 (

2)

>2

Uniform

f (x) =

EX =

var(X) =

;

b a

a+b

2

(b a)2

12

>0

APPENDIX B. PROBABILITY

322

Weibull

f (x) =

EX =

1=

var(X) =

2=

exp

x < 1;

> 0;

>0

1+

1+

1+

Gamma

f (x) =

1

( )

exp

x < 1;

> 0;

>0

EX =

2

var(X) =

Chi-Square

1

xr=2

(r=2)2r=2

f (x) =

exp

x

;

2

x < 1;

r>0

EX = r

var(X) = 2r

Normal

f (x) =

1

p

2

exp

)2

(x

2

1 < x < 1;

1<

< 1;

>0

EX =

var(X) =

Student t

f (x) =

r+1

2

x2

1+

r

( r+1

2 )

;

r

r

2

EX = 0 if r > 1

r

var(X) =

if r > 2

r 2

B.6

1 < x < 1;

r>0

A pair of bivariate random variables (X; Y ) is a function from the sample space into R2 : The

joint CDF of (X; Y ) is

F (x; y) = Pr (X x; Y

y) :

If F is continuous, the joint probability density function is

f (x; y) =

@2

F (x; y):

@x@y

Pr ((X; Y ) 2 A) =

Z Z

f (x; y)dxdy

A

APPENDIX B. PROBABILITY

323

Eg(X; Y ) =

1

1

FX (x) = Pr(X

=

=

x)

lim F (x; y)

y!1

Z x Z 1

1

f (x; y)dydx

fX (x) =

d

FX (x) =

dx

f (x; y)dy:

fY (y) =

f (x; y)dx:

The random variables X and Y are dened to be independent if f (x; y) = fX (x)fY (y):

Furthermore, X and Y are independent if and only if there exist functions g(x) and h(y) such that

f (x; y) = g(x)h(y):

If X and Y are independent, then

Z Z

E (g(X)h(Y )) =

g(x)h(y)f (y; x)dydx

Z Z

=

g(x)h(y)fY (y)fX (x)dydx

Z

Z

=

g(x)fX (x)dx h(y)fY (y)dy

= Eg (X) Eh (Y ) :

(B.5)

E(XY ) = EXEY:

Another implication of (B.5) is that if X and Y are independent and Z = X + Y; then

MZ ( ) = E exp ( (X + Y ))

= E (exp ( X) exp ( Y ))

= E exp

X E exp

= MX ( )MY ( ):

(B.6)

cov(X; Y ) =

XY

= E ((X

EX) (Y

EY )) = EXY

corr (X; Y ) =

XY

XY

X Y

EXEY:

APPENDIX B. PROBABILITY

324

j

XY j

1:

(B.7)

If X and Y are independent, then XY = 0 and XY = 0: The reverse, however, is not true.

For example, if EX = 0 and EX 3 = 0, then cov(X; X 2 ) = 0:

A useful fact is that

var (X + Y ) = var(X) + var(Y ) + 2 cov(X; Y ):

An implication is that if X and Y are independent, then

var (X + Y ) = var(X) + var(Y );

the variance of the sum is the sum of the variances.

A k 1 random vector X = (X1 ; :::; Xk )0 is a function from S to Rk : Let x = (x1 ; :::; xk )0 denote

a vector in Rk : (In this Appendix, we use bold to denote vectors. Bold capitals X are random

vectors and bold lower case x are nonrandom vectors. Again, this is in distinction to the notation

used in the bulk of the text) The vector X has the distribution and density functions

F (x) = Pr(X x)

@k

F (x):

f (x) =

@x1

@xk

For a measurable function g : Rk ! Rs ; we dene the expectation

Z

Eg(X) =

g(x)f (x)dx

Rk

1 multivariate mean

= EX

and k

k covariance matrix

= E (X

) (X

= EXX 0

)0

!

k

k

X

X

var

Xi =

var (X i )

i=1

B.7

i=1

fY jX (y j x) =

f (x; y)

fX (x)

APPENDIX B. PROBABILITY

325

if fX (x) > 0: One way to derive this expression from the denition of conditional probability is

fY jX (y j x) =

=

=

=

=

=

@

lim Pr (Y

y j x X x + ")

@y "!0

@

Pr (fY

yg \ fx X x + "g)

lim

"!0

@y

Pr(x X x + ")

@

F (x + "; y) F (x; y)

lim

@y "!0 FX (x + ") FX (x)

@

lim

@y "!0

@

@x F (x

+ "; y)

fX (x + ")

@2

@x@y F (x; y)

fX (x)

f (x; y)

:

fX (x)

Z 1

m(x) = E (Y j X = x) =

yfY jX (y j x) dy:

1

The conditional mean m(x) is a function, meaning that when X equals x; then the expected value

of Y is m(x):

Similarly, we dene the conditional variance of Y given X = x as

2

(x) = var (Y j X = x)

= E (Y

m(x))2 j X = x

= E Y2 jX =x

m(x)2 :

Evaluated at x = X; the conditional mean m(X) and conditional variance 2 (X) are random

variables, functions of X: We write this as E(Y j X) = m(X) and var (Y j X) = 2 (X): For

example, if E (Y j X = x) = + 0 x; then E (Y j X) = + 0 X; a transformation of X:

The following are important facts about conditional expectations.

Simple Law of Iterated Expectations:

E (E (Y j X)) = E (Y )

(B.8)

Proof :

E (E (Y j X)) = E (m(X))

Z 1

=

m(x)fX (x)dx

1

Z 1Z 1

=

yfY jX (y j x) fX (x)dydx

1

1

Z 1

Z 1

=

yf (y; x) dydx

1

= E(Y ):

E (E (Y j X; Z) j X) = E (Y j X)

(B.9)

APPENDIX B. PROBABILITY

326

E (g(X)Y j X) = g (X) E (Y j X)

(B.10)

Proof : Let

h(x) = E (g(X)Y j X = x)

Z 1

g(x)yfY jX (y j x) dy

=

1

Z 1

= g(x)

yfY jX (y j x) dy

1

= g(x)m(x)

g (X) E (Y j X) :

B.8 Transformations

Suppose that $X \in \mathbb{R}^k$ with continuous distribution function $F_X(x)$ and density $f_X(x)$. Let $Y = g(X)$ where $g(x) : \mathbb{R}^k \to \mathbb{R}^k$ is one-to-one, differentiable, and invertible. Let $h(y)$ denote the inverse of $g(x)$. The Jacobian is
$$J(y) = \det\left(\frac{\partial}{\partial y'} h(y)\right).$$
Consider the univariate case $k = 1$. If $g(x)$ is an increasing function, then $g(X) \le y$ if and only if $X \le h(y)$, so the distribution function of $Y$ is
$$F_Y(y) = \Pr(g(X) \le y) = \Pr(X \le h(y)) = F_X(h(y)).$$
Taking the derivative, the density of $Y$ is
$$f_Y(y) = \frac{d}{dy} F_Y(y) = f_X(h(y))\,\frac{d}{dy} h(y).$$
If $g(x)$ is a decreasing function, then $g(X) \le y$ if and only if $X \ge h(y)$, so
$$F_Y(y) = \Pr(g(X) \le y) = 1 - \Pr(X < h(y)) = 1 - F_X(h(y))$$
and the density is
$$f_Y(y) = -f_X(h(y))\,\frac{d}{dy} h(y).$$
We can write these two cases jointly as
$$f_Y(y) = f_X(h(y))\,\left|J(y)\right|. \qquad (B.11)$$
This is known as the change-of-variables formula. This same formula (B.11) holds for $k > 1$, but its justification requires deeper results from analysis.

As one example, take the case $X \sim U[0,1]$ and $Y = -\log(X)$. Here, $g(x) = -\log(x)$ and $h(y) = \exp(-y)$, so the Jacobian is $J(y) = -\exp(-y)$. As the range of $X$ is $[0,1]$, that for $Y$ is $[0,\infty)$. Since $f_X(x) = 1$ for $0 \le x \le 1$, (B.11) shows that
$$f_Y(y) = \exp(-y), \qquad 0 \le y < \infty,$$
an exponential density.
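The exponential example can be verified numerically. The following sketch is an added illustration (not from the text), assuming only NumPy; it compares a histogram-type density estimate of Y = -log(X) with exp(-y) at a few points.

```python
import numpy as np

rng = np.random.default_rng(2)

# X ~ U[0,1] and Y = -log(X); by (B.11) the density of Y is exp(-y) on [0, infinity)
x = rng.uniform(0.0, 1.0, 1_000_000)
y = -np.log(x)

width = 0.05
for c in (0.5, 1.0, 2.0):
    empirical = np.mean((y > c - width / 2) & (y <= c + width / 2)) / width
    print(c, empirical, np.exp(-c))   # empirical density vs. exp(-c)
```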

B.9 Normal and Related Distributions

The standard normal density is
$$\phi(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right), \qquad -\infty < x < \infty.$$
It is conventional to write $X \sim N(0,1)$, and to denote the standard normal density function by $\phi(x)$ and its distribution function by $\Phi(x)$. The latter has no closed-form solution. The normal density has all moments finite. Since it is symmetric about zero all odd moments are zero. By iterated integration by parts, we can also show that $EX^2 = 1$ and $EX^4 = 3$. In fact, for any positive integer $m$, $EX^{2m} = (2m-1)!! = (2m-1)\cdot(2m-3)\cdots 1$. Thus $EX^4 = 3$, $EX^6 = 15$, $EX^8 = 105$, and $EX^{10} = 945$.

If $Z$ is standard normal and $X = \mu + \sigma Z$, then using the change-of-variables formula, $X$ has density
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty,$$
which is the univariate normal density. The mean and variance of the distribution are $\mu$ and $\sigma^2$, and it is conventional to write $X \sim N(\mu, \sigma^2)$.

For $x \in \mathbb{R}^k$, the multivariate normal density is
$$f(x) = \frac{1}{(2\pi)^{k/2}\det(\Sigma)^{1/2}}\exp\left(-\frac{(x-\mu)'\Sigma^{-1}(x-\mu)}{2}\right), \qquad x \in \mathbb{R}^k.$$
The mean and covariance matrix of the distribution are $\mu$ and $\Sigma$, and it is conventional to write $X \sim N(\mu, \Sigma)$.

The MGF and CF of the multivariate normal are $\exp\left(\lambda'\mu + \lambda'\Sigma\lambda/2\right)$ and $\exp\left(i\lambda'\mu - \lambda'\Sigma\lambda/2\right)$, respectively.

If $X \in \mathbb{R}^k$ is multivariate normal and the elements of $X$ are mutually uncorrelated, then $\Sigma = \operatorname{diag}\{\sigma_j^2\}$ is a diagonal matrix. In this case the density function can be written as
$$f(x) = \frac{1}{(2\pi)^{k/2}\sigma_1\cdots\sigma_k}\exp\left(-\frac{(x_1-\mu_1)^2/\sigma_1^2 + \cdots + (x_k-\mu_k)^2/\sigma_k^2}{2}\right) = \prod_{j=1}^{k}\frac{1}{(2\pi)^{1/2}\sigma_j}\exp\left(-\frac{(x_j-\mu_j)^2}{2\sigma_j^2}\right),$$
which is the product of marginal univariate normal densities. This shows that if $X$ is multivariate normal with uncorrelated elements, then they are mutually independent.

Theorem B.9.1 If $X \sim N(\mu, \Sigma)$ and $Y = a + BX$, then $Y \sim N\left(a + B\mu,\, B\Sigma B'\right)$.

Theorem B.9.2 Let $X \sim N(0, I_r)$. Then $Q = X'X$ is distributed chi-square with $r$ degrees of freedom, written $\chi^2_r$.

Theorem B.9.3 If $Z \sim N(0, A)$ with $A > 0$, $q \times q$, then $Z'A^{-1}Z \sim \chi^2_q$.

Theorem B.9.4 Let $Z \sim N(0,1)$ and $Q \sim \chi^2_r$ be independent. Then $T_r = Z/\sqrt{Q/r}$ is distributed as student's $t$ with $r$ degrees of freedom.
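Theorem B.9.3 is easy to check by Monte Carlo. The sketch below is an added illustration (not from the text); the covariance matrix A is an arbitrary positive definite choice and only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
q, n = 3, 200_000

# An arbitrary positive definite covariance matrix A and draws Z ~ N(0, A)
A = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.5]])
C = np.linalg.cholesky(A)                  # A = C C'
Z = rng.standard_normal((n, q)) @ C.T      # each row is a N(0, A) draw

# Theorem B.9.3: Q = Z' A^{-1} Z should be chi-square with q degrees of freedom
Q = np.einsum("ij,jk,ik->i", Z, np.linalg.inv(A), Z)

print(Q.mean(), q)        # chi-square(q) has mean q
print(Q.var(), 2 * q)     # and variance 2q
```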

Proof of Theorem B.9.1. By the change-of-variables formula, the density of $Y = a + BX$ is
$$f(y) = \frac{1}{(2\pi)^{k/2}\det(\Sigma_Y)^{1/2}}\exp\left(-\frac{(y - \mu_Y)'\Sigma_Y^{-1}(y - \mu_Y)}{2}\right), \qquad y \in \mathbb{R}^k,$$
where $\mu_Y = a + B\mu$ and $\Sigma_Y = B\Sigma B'$, where we used the fact that $\det\left(B\Sigma B'\right)^{1/2} = \det(\Sigma)^{1/2}\left|\det(B)\right|$.

Proof of Theorem B.9.2. First, suppose a random variable $Q$ is distributed chi-square with $r$ degrees of freedom. It has the MGF
$$E\exp(tQ) = \int_0^{\infty}\frac{1}{\Gamma\left(\frac{r}{2}\right)2^{r/2}}\, x^{r/2-1}\exp(tx)\exp(-x/2)\,dx = (1 - 2t)^{-r/2},$$
where the second equality uses the fact that $\int_0^{\infty} y^{a-1}\exp(-by)\,dy = b^{-a}\Gamma(a)$, which can be found by applying change-of-variables to the gamma function. Our goal is to calculate the MGF of $Q = X'X$ and show that it equals $(1-2t)^{-r/2}$, which will establish that $Q \sim \chi^2_r$.

Note that we can write $Q = X'X = \sum_{j=1}^{r} Z_j^2$ where the $Z_j$ are independent $N(0,1)$. The distribution of each of the $Z_j^2$ is
$$\Pr\left(Z_j^2 \le y\right) = 2\Pr\left(0 \le Z_j \le \sqrt{y}\right) = 2\int_0^{\sqrt{y}}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)dx = \int_0^{y}\frac{1}{\Gamma\left(\frac{1}{2}\right)2^{1/2}}\, s^{-1/2}\exp\left(-\frac{s}{2}\right)ds,$$
so the density of $Z_j^2$ is
$$f_1(x) = \frac{1}{\Gamma\left(\frac{1}{2}\right)2^{1/2}}\, x^{-1/2}\exp\left(-\frac{x}{2}\right),$$
which is the $\chi^2_1$ density. Since the $Z_j^2$ are mutually independent, (B.6) implies that the MGF of $Q = \sum_{j=1}^{r} Z_j^2$ is
$$\left[(1 - 2t)^{-1/2}\right]^r = (1 - 2t)^{-r/2},$$
which is the MGF of the $\chi^2_r$ density as desired.

Proof of Theorem B.9.3. The fact that $A > 0$ means that we can write $A = CC'$ where $C$ is non-singular. Then $A^{-1} = C^{-1\prime}C^{-1}$ and
$$C^{-1}Z \sim N\left(0,\, C^{-1}AC^{-1\prime}\right) = N\left(0,\, C^{-1}CC'C^{-1\prime}\right) = N(0, I_q).$$
Thus
$$Z'A^{-1}Z = Z'C^{-1\prime}C^{-1}Z = \left(C^{-1}Z\right)'\left(C^{-1}Z\right) \sim \chi^2_q.$$

Proof of Theorem B.9.4. Using the simple law of iterated expectations, $T_r$ has distribution function
$$F(x) = \Pr\left(\frac{Z}{\sqrt{Q/r}} \le x\right) = E\left[\Pr\left(Z \le x\sqrt{\frac{Q}{r}}\,\Big|\, Q\right)\right] = E\,\Phi\left(x\sqrt{\frac{Q}{r}}\right).$$
Thus its density is
$$f(x) = E\,\frac{d}{dx}\Phi\left(x\sqrt{\frac{Q}{r}}\right) = E\left[\phi\left(x\sqrt{\frac{Q}{r}}\right)\sqrt{\frac{Q}{r}}\right]$$
$$= \int_0^{\infty}\left(\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{qx^2}{2r}\right)\right)\sqrt{\frac{q}{r}}\left(\frac{1}{\Gamma\left(\frac{r}{2}\right)2^{r/2}}\, q^{r/2-1}\exp\left(-\frac{q}{2}\right)\right)dq$$
$$= \frac{\Gamma\left(\frac{r+1}{2}\right)}{\sqrt{r\pi}\,\Gamma\left(\frac{r}{2}\right)}\left(1 + \frac{x^2}{r}\right)^{-\left(\frac{r+1}{2}\right)},$$
which is that of the student $t$ with $r$ degrees of freedom.

B.10 Inequalities

Jensen's Inequality. If $g(\cdot) : \mathbb{R}^m \to \mathbb{R}$ is convex, then for any random vector $x$ for which $E\|x\| < \infty$ and $E|g(x)| < \infty$,
$$g(E(x)) \le E\left(g(x)\right). \qquad (B.12)$$

Conditional Jensen's Inequality. If $g(\cdot) : \mathbb{R}^m \to \mathbb{R}$ is convex, then for any random vectors $(y, x)$ for which $E\|y\| < \infty$ and $E\|g(y)\| < \infty$,
$$g(E(y \mid x)) \le E\left(g(y) \mid x\right). \qquad (B.13)$$

Conditional Expectation Inequality. For any $r \ge 1$ such that $E|y|^r < \infty$,
$$E\left|E(y \mid x)\right|^r \le E|y|^r < \infty. \qquad (B.14)$$

Expectation Inequality. For any random matrix $Y$ for which $E\|Y\| < \infty$,
$$\|E(Y)\| \le E\|Y\|. \qquad (B.15)$$

Hölder's Inequality. If $p > 1$ and $q > 1$ and $\frac{1}{p} + \frac{1}{q} = 1$, then for any random $m \times n$ matrices $X$ and $Y$,
$$E\|X'Y\| \le \left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}. \qquad (B.16)$$

Cauchy-Schwarz Inequality. For any random $m \times n$ matrices $X$ and $Y$,
$$E\|X'Y\| \le \left(E\|X\|^2\right)^{1/2}\left(E\|Y\|^2\right)^{1/2}. \qquad (B.17)$$

Matrix Cauchy-Schwarz Inequality. For any random vectors $y$ and $x$,
$$E(yx')\left(E(xx')\right)^{-}E(xy') \le E(yy'). \qquad (B.18)$$

Minkowski's Inequality. For any random $m \times n$ matrices $X$ and $Y$,
$$\left(E\|X + Y\|^p\right)^{1/p} \le \left(E\|X\|^p\right)^{1/p} + \left(E\|Y\|^p\right)^{1/p}. \qquad (B.19)$$

Liapunov's Inequality. For any random $m \times n$ matrix $X$ and $1 \le r \le p$,
$$\left(E\|X\|^r\right)^{1/r} \le \left(E\|X\|^p\right)^{1/p}. \qquad (B.20)$$

Markov's Inequality (standard form). For any random vector $x$ and non-negative function $g(x) \ge 0$,
$$\Pr(g(x) > \alpha) \le \alpha^{-1}Eg(x). \qquad (B.21)$$

Markov's Inequality (strong form). For any random vector $x$ and non-negative function $g(x) \ge 0$,
$$\Pr(g(x) > \alpha) \le \alpha^{-1}E\left(g(x)\,1(g(x) > \alpha)\right). \qquad (B.22)$$

Chebyshev's Inequality. For any random variable $x$,
$$\Pr(|x - Ex| > \alpha) \le \frac{\operatorname{var}(x)}{\alpha^2}. \qquad (B.23)$$
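As a quick numerical illustration of two of these bounds (an added sketch, not from the text; it assumes only NumPy and uses standard normal draws as an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(1_000_000)

# Jensen's Inequality (B.12) with the convex function g(u) = exp(u): g(Ex) <= E g(x)
print(np.exp(np.mean(x)), np.mean(np.exp(x)))     # about 1.0 vs about exp(1/2) = 1.65

# Chebyshev's Inequality (B.23): Pr(|x - Ex| > a) <= var(x) / a**2
a = 2.0
print(np.mean(np.abs(x - x.mean()) > a), np.var(x) / a**2)   # about 0.046 <= 0.25
```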

Proof of Jensen's Inequality (B.12). Since $g(u)$ is convex, at any point $u$ there is a nonempty set of subderivatives (linear surfaces touching $g(u)$ at $u$ but lying below $g(u)$ for all $u$). Let $a + b'u$ be a subderivative of $g(u)$ at $u = Ex$. Then for all $u$, $g(u) \ge a + b'u$, yet $g(Ex) = a + b'Ex$. Applying expectations, $Eg(x) \ge a + b'Ex = g(Ex)$, as stated.

Proof of Conditional Jensen's Inequality. The same as the proof of (B.12), but using conditional expectations. The conditional expectations exist since $E\|y\| < \infty$ and $E\|g(y)\| < \infty$.

Proof of Conditional Expectation Inequality. As the function $|u|^r$ is convex for $r \ge 1$, the Conditional Jensen's inequality implies
$$\left|E(y \mid x)\right|^r \le E\left(|y|^r \mid x\right).$$
Taking unconditional expectations and applying the Simple Law of Iterated Expectations, we obtain
$$E\left|E(y \mid x)\right|^r \le E|y|^r < \infty,$$
as required.

Proof of Expectation Inequality. By the Triangle inequality, for $\lambda \in [0,1]$,
$$\|\lambda U_1 + (1 - \lambda)U_2\| \le \lambda\|U_1\| + (1 - \lambda)\|U_2\|,$$
which shows that the matrix norm $g(U) = \|U\|$ is convex. Applying Jensen's Inequality (B.12) we find (B.15).

Proof of Hölder's Inequality. Since $\frac{1}{p} + \frac{1}{q} = 1$, an application of Jensen's Inequality shows that for any real $a$ and $b$
$$\exp\left(\frac{1}{p}a + \frac{1}{q}b\right) \le \frac{1}{p}\exp(a) + \frac{1}{q}\exp(b).$$
Setting $u = \exp(a)$ and $v = \exp(b)$, this implies
$$u^{1/p}v^{1/q} \le \frac{u}{p} + \frac{v}{q}.$$
Set $u = \|X\|^p/E\|X\|^p$ and $v = \|Y\|^q/E\|Y\|^q$. Note that $Eu = Ev = 1$. By the matrix Schwarz Inequality (A.11), $\|X'Y\| \le \|X\|\|Y\|$. Thus
$$\frac{E\|X'Y\|}{\left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}} \le \frac{E\left(\|X\|\|Y\|\right)}{\left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}} = E\left(u^{1/p}v^{1/q}\right) \le E\left(\frac{u}{p} + \frac{v}{q}\right) = \frac{1}{p} + \frac{1}{q} = 1,$$
which is (B.16).

Proof of Cauchy-Schwarz Inequality. Special case of Hölder's with $p = q = 2$.

Proof of Matrix Cauchy-Schwarz Inequality. Define $e = y - \left(Eyx'\right)\left(Exx'\right)^{-}x$. Note that $Eee' \ge 0$ is positive semi-definite. We can calculate that
$$Eee' = Eyy' - \left(Eyx'\right)\left(Exx'\right)^{-}Exy'.$$
Since the left-hand-side is positive semi-definite, so is the right-hand-side, which means $Eyy' \ge \left(Eyx'\right)\left(Exx'\right)^{-}Exy'$ as stated.

Proof of Liapunov's Inequality. The function $g(u) = u^{p/r}$ is convex for $u > 0$ since $p \ge r$. Set $u = \|X\|^r$. By Jensen's inequality, $g(Eu) \le Eg(u)$, or
$$\left(E\|X\|^r\right)^{p/r} \le E\|X\|^p.$$
Raising both sides to the power $1/p$ yields (B.20).

Proof of Minkowski's Inequality. Note that by rewriting, using the triangle inequality (A.12), and then applying Hölder's Inequality to the two expectations,
$$E\|X + Y\|^p = E\left(\|X + Y\|\,\|X + Y\|^{p-1}\right)$$
$$\le E\left(\|X\|\,\|X + Y\|^{p-1}\right) + E\left(\|Y\|\,\|X + Y\|^{p-1}\right)$$
$$\le \left(E\|X\|^p\right)^{1/p}E\left(\|X + Y\|^{q(p-1)}\right)^{1/q} + \left(E\|Y\|^p\right)^{1/p}E\left(\|X + Y\|^{q(p-1)}\right)^{1/q}$$
$$= \left(\left(E\|X\|^p\right)^{1/p} + \left(E\|Y\|^p\right)^{1/p}\right)E\left(\|X + Y\|^p\right)^{(p-1)/p},$$
where the second inequality picks $q$ to satisfy $\frac{1}{p} + \frac{1}{q} = 1$, and the final equality uses this fact to make the substitution $q = p/(p-1)$ and then collects terms. Dividing both sides by $E\left(\|X + Y\|^p\right)^{(p-1)/p}$, we obtain (B.19).

Proof of Markov's Inequality. Let $F$ denote the distribution function of $x$. Then
$$\Pr(g(x) > \alpha) = \int_{\{g(u) > \alpha\}}dF(u) \le \int_{\{g(u) > \alpha\}}\frac{g(u)}{\alpha}\,dF(u) = \alpha^{-1}\int 1(g(u) > \alpha)\, g(u)\,dF(u) = \alpha^{-1}E\left(g(x)\,1(g(x) > \alpha)\right),$$
the inequality using the region of integration $\{g(u) > \alpha\}$. This establishes the strong form (B.22). Since $1(g(x) > \alpha) \le 1$, the final expression is less than $\alpha^{-1}E(g(x))$, establishing the standard form (B.21).

Proof of Chebyshev's Inequality. Define $y = (x - Ex)^2$ and note that $Ey = \operatorname{var}(x)$. The events $\{|x - Ex| > \alpha\}$ and $\{y > \alpha^2\}$ are equal, so by an application of Markov's inequality we find
$$\Pr(|x - Ex| > \alpha) = \Pr\left(y > \alpha^2\right) \le \alpha^{-2}E(y) = \alpha^{-2}\operatorname{var}(x),$$
as stated.

B.11 Maximum Likelihood

In this section we provide a brief review of the asymptotic theory of maximum likelihood estimation.

When the density of $y_i$ is $f(y \mid \theta)$ where $F$ is a known distribution function and $\theta \in \Theta$ is an unknown $m \times 1$ vector, we say that the distribution is parametric and that $\theta$ is the parameter of the distribution $F$. The space $\Theta$ is the set of permissible values for $\theta$. In this setting the method of maximum likelihood is an appropriate technique for estimation and inference on $\theta$. We let $\theta$ denote a generic value of the parameter and let $\theta_0$ denote its true value.

The joint density of a random sample $(y_1, \ldots, y_n)$ is
$$f_n(y_1, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} f(y_i \mid \theta).$$
The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of $\theta$. The log-likelihood function is its natural logarithm
$$\log L(\theta) = \sum_{i=1}^{n}\log f(y_i \mid \theta).$$
The likelihood score is the derivative of the log-likelihood, evaluated at the true parameter value,
$$S_i = \frac{\partial}{\partial\theta}\log f(y_i \mid \theta_0).$$
We also define the Hessian
$$H = -E\,\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i \mid \theta_0) \qquad (B.24)$$
and the outer product matrix
$$\Omega = E\left(S_iS_i'\right). \qquad (B.25)$$

Theorem B.11.1
$$\frac{\partial}{\partial\theta}E\log f(y \mid \theta)\Big|_{\theta=\theta_0} = 0 \qquad (B.26)$$
$$ES_i = 0 \qquad (B.27)$$
and
$$H = \Omega \equiv I. \qquad (B.28)$$

The matrix $I$ is called the information, and the equality (B.28) is called the information matrix equality.

The maximum likelihood estimator (MLE) $\hat\theta$ is the parameter value which maximizes the likelihood (equivalently, which maximizes the log-likelihood). We can write this as
$$\hat\theta = \underset{\theta\in\Theta}{\operatorname{argmax}}\,\log L(\theta). \qquad (B.29)$$
In some simple cases, we can find an explicit expression for $\hat\theta$ as a function of the data, but these cases are rare. More typically, the MLE $\hat\theta$ must be found by numerical methods.

To understand why the MLE $\hat\theta$ is a natural estimator for the parameter $\theta$, observe that the standardized log-likelihood is a sample average and an estimator of $E\log f(y_i \mid \theta)$:
$$\frac{1}{n}\log L(\theta) = \frac{1}{n}\sum_{i=1}^{n}\log f(y_i \mid \theta) \overset{p}{\longrightarrow} E\log f(y_i \mid \theta).$$
As the MLE $\hat\theta$ maximizes the left-hand-side, we can see that it is an estimator of the maximizer of the right-hand-side. The first-order condition for the latter problem is
$$0 = \frac{\partial}{\partial\theta}E\log f(y_i \mid \theta),$$
which holds at $\theta = \theta_0$ by (B.26). This suggests that $\hat\theta$ is an estimator of $\theta_0$. In fact, under conventional regularity conditions, $\hat\theta$ is consistent, $\hat\theta \overset{p}{\longrightarrow} \theta_0$ as $n \to \infty$. Furthermore, we can derive its asymptotic distribution.

Theorem B.11.2 Under regularity conditions, $\sqrt{n}\left(\hat\theta - \theta_0\right) \overset{d}{\longrightarrow} N\left(0, I^{-1}\right)$.

We omit the regularity conditions for Theorem B.11.2, but the result holds quite broadly for models which are smooth functions of the parameters. Theorem B.11.2 gives the general form for the asymptotic distribution of the MLE. A famous result shows that the asymptotic variance is the smallest possible.

Theorem B.11.3 Cramer-Rao Lower Bound. If $\tilde\theta$ is an unbiased regular estimator of $\theta$, then $\operatorname{var}(\tilde\theta) \ge (nI)^{-1}$.

The Cramer-Rao Theorem shows that the finite sample variance of an unbiased estimator is bounded below by $(nI)^{-1}$. This means that the asymptotic variance of the standardized estimator $\sqrt{n}\left(\tilde\theta - \theta_0\right)$ is bounded below by $I^{-1}$. In other words, the best possible asymptotic variance among all (regular) estimators is $I^{-1}$. An estimator is called asymptotically efficient if its asymptotic variance equals this lower bound. Theorem B.11.2 shows that the MLE has this asymptotic variance, and is thus asymptotically efficient.

Theorem B.11.4 The MLE is asymptotically efficient in the sense that its asymptotic variance equals the Cramer-Rao Lower Bound.

Theorem B.11.4 gives a strong endorsement for the MLE in parametric models.

Finally, consider functions of parameters. If $\psi = g(\theta)$ then the MLE of $\psi$ is $\hat\psi = g(\hat\theta)$. This is because maximization (e.g. (B.29)) is unaffected by parameterization and transformation. Applying the Delta Method to Theorem B.11.2 we conclude that
$$\sqrt{n}\left(\hat\psi - \psi\right) \simeq G'\sqrt{n}\left(\hat\theta - \theta\right) \overset{d}{\longrightarrow} N\left(0,\, G'I^{-1}G\right), \qquad (B.30)$$
where $G = \frac{\partial}{\partial\theta}g(\theta)'$ and $\hat\psi = g(\hat\theta)$ is the MLE. The asymptotic variance $G'I^{-1}G$ is the Cramer-Rao lower bound for estimation of $\psi$.

Theorem B.11.5 The Cramer-Rao lower bound for $\psi = g(\theta)$ is $G'I^{-1}G$, and the MLE $\hat\psi = g(\hat\theta)$ is asymptotically efficient.
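As a concrete check of Theorem B.11.2, the following sketch (an added illustration, not from the text; it assumes NumPy and picks an exponential model purely for convenience) simulates the MLE in a model where the information is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)
theta0, n, reps = 2.0, 500, 5_000

# Illustrative parametric model: f(y|theta) = theta * exp(-theta*y), y > 0.
# The MLE is theta_hat = 1/ybar and the information is I = 1/theta**2,
# so Theorem B.11.2 predicts sqrt(n)*(theta_hat - theta0) is approximately N(0, theta0**2).
y = rng.exponential(1.0 / theta0, size=(reps, n))
theta_hat = 1.0 / y.mean(axis=1)

z = np.sqrt(n) * (theta_hat - theta0)
print(z.mean())                    # approximately 0
print(z.var(), theta0**2)          # approximately the Cramer-Rao variance I**(-1)
```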

Proof of Theorem B.11.1. To see (B.26),
$$\frac{\partial}{\partial\theta}E\log f(y \mid \theta)\Big|_{\theta=\theta_0} = \frac{\partial}{\partial\theta}\int\log f(y \mid \theta)\, f(y \mid \theta_0)\,dy\,\Big|_{\theta=\theta_0}$$
$$= \int\frac{\partial}{\partial\theta}f(y \mid \theta)\,\frac{f(y \mid \theta_0)}{f(y \mid \theta)}\,dy\,\Big|_{\theta=\theta_0} = \frac{\partial}{\partial\theta}\int f(y \mid \theta)\,dy\,\Big|_{\theta=\theta_0} = \frac{\partial}{\partial\theta}1\Big|_{\theta=\theta_0} = 0.$$
Equation (B.27) follows by exchanging integration and differentiation:
$$ES_i = E\,\frac{\partial}{\partial\theta}\log f(y \mid \theta_0) = \frac{\partial}{\partial\theta}E\log f(y \mid \theta)\Big|_{\theta=\theta_0} = 0.$$

Similarly, we can show that
$$E\left(\frac{\frac{\partial^2}{\partial\theta\,\partial\theta'}f(y \mid \theta_0)}{f(y \mid \theta_0)}\right) = 0.$$
By direct computation,
$$\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y \mid \theta_0) = \frac{\frac{\partial^2}{\partial\theta\,\partial\theta'}f(y \mid \theta_0)}{f(y \mid \theta_0)} - \frac{\frac{\partial}{\partial\theta}f(y \mid \theta_0)\,\frac{\partial}{\partial\theta}f(y \mid \theta_0)'}{f(y \mid \theta_0)^2} = \frac{\frac{\partial^2}{\partial\theta\,\partial\theta'}f(y \mid \theta_0)}{f(y \mid \theta_0)} - \frac{\partial}{\partial\theta}\log f(y \mid \theta_0)\,\frac{\partial}{\partial\theta}\log f(y \mid \theta_0)'.$$
Taking expectations yields (B.28).

Proof of Theorem B.11.2. Taking the first-order condition for maximization of $\log L(\theta)$, and making a first-order Taylor series expansion,
$$0 = \frac{\partial}{\partial\theta}\log L(\theta)\Big|_{\theta=\hat\theta} = \sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f\left(y_i \mid \hat\theta\right) = \sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f(y_i \mid \theta_0) + \sum_{i=1}^{n}\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i \mid \theta_n)\left(\hat\theta - \theta_0\right),$$
where $\theta_n$ lies on a line segment joining $\hat\theta$ and $\theta_0$. (Technically, the specific value of $\theta_n$ varies by row in this expansion.) Rewriting this equation, we find
$$\left(\hat\theta - \theta_0\right) = \left(-\sum_{i=1}^{n}\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i \mid \theta_n)\right)^{-1}\left(\sum_{i=1}^{n}S_i\right),$$
where $S_i$ are the likelihood scores. Since the score $S_i$ is mean-zero (B.27) with covariance matrix $\Omega$ (equation B.25), an application of the CLT yields
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}S_i \overset{d}{\longrightarrow} N(0, \Omega).$$
The analysis of the sample Hessian is somewhat more complicated due to the presence of $\theta_n$. Let $H(\theta) = -E\,\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i, \theta)$. If it is continuous in $\theta$, then since $\theta_n \overset{p}{\longrightarrow} \theta_0$ it follows that $H(\theta_n) \overset{p}{\longrightarrow} H$ and so
$$-\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i, \theta_n) = \frac{1}{n}\sum_{i=1}^{n}\left(-\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(y_i, \theta_n) - H(\theta_n)\right) + H(\theta_n) \overset{p}{\longrightarrow} H,$$
by an application of a uniform WLLN. (By uniform, we mean that the WLLN holds uniformly over the parameter value. This requires the second derivative to be a smooth function of the parameter.) Together,
$$\sqrt{n}\left(\hat\theta - \theta_0\right) \overset{d}{\longrightarrow} H^{-1}N(0, \Omega) = N\left(0,\, H^{-1}\Omega H^{-1}\right) = N\left(0,\, I^{-1}\right),$$
the final equality using the information matrix equality (B.28).

Proof of Theorem B.11.3 (Cramer-Rao Lower Bound). Let $Y = (y_1, \ldots, y_n)$ denote the sample and define the full-sample score
$$S = \frac{\partial}{\partial\theta}\log f_n(Y, \theta_0) = \sum_{i=1}^{n}S_i,$$
which by Theorem B.11.1 has mean zero and variance $nI$. Write the estimator $\tilde\theta = \tilde\theta(Y)$ as a function of the data. Since $\tilde\theta$ is unbiased for any $\theta$,
$$\theta = E\tilde\theta = \int\tilde\theta(Y)\, f(Y, \theta)\,dY.$$
Differentiating with respect to $\theta$ and evaluating at $\theta_0$,
$$I_m = \int\tilde\theta(Y)\,\frac{\partial}{\partial\theta'}f(Y, \theta)\,dY = \int\tilde\theta(Y)\,\frac{\partial}{\partial\theta'}\log f(Y, \theta)\, f(Y, \theta_0)\,dY = E\left(\tilde\theta S'\right) = E\left(\left(\tilde\theta - \theta_0\right)S'\right),$$
the final equality since $E(S) = 0$. By the matrix Cauchy-Schwarz inequality (B.18) and $E\left(SS'\right) = nI$,
$$\operatorname{var}\left(\tilde\theta\right) = E\left(\left(\tilde\theta - \theta_0\right)\left(\tilde\theta - \theta_0\right)'\right) \ge E\left(\left(\tilde\theta - \theta_0\right)S'\right)\left(E\left(SS'\right)\right)^{-1}E\left(S\left(\tilde\theta - \theta_0\right)'\right) = \left(E\left(SS'\right)\right)^{-1} = (nI)^{-1},$$
as stated.

Appendix C

Numerical Optimization

Many econometric estimators are defined by an optimization problem of the form
$$\hat\theta = \underset{\theta\in\Theta}{\operatorname{argmin}}\; Q(\theta) \qquad (C.1)$$
where the parameter space is $\Theta \subset \mathbb{R}^m$ and the criterion function is $Q(\theta) : \Theta \to \mathbb{R}$. For example NLLS, GLS, MLE and GMM estimators take this form. In most cases, $Q(\theta)$ can be computed for given $\theta$, but $\hat\theta$ is not available in closed form. In this case, numerical methods are required to obtain $\hat\theta$.

C.1 Grid Search

Many optimization problems are either one dimensional ($m = 1$) or involve one-dimensional optimization as a sub-problem (for example, a line search). In this context grid search may be employed.

Grid Search. Let $\Theta = [a, b]$ be an interval. Pick some $\varepsilon > 0$ and set $G = (b - a)/\varepsilon$ to be the number of gridpoints. Construct an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b - a)/G : j = 0, \ldots, G\}$. At each point evaluate the criterion function and find the gridpoint which yields the smallest value of the criterion, which is $\theta(\hat\jmath)$ where $\hat\jmath = \operatorname{argmin}_{0 \le j \le G} Q(\theta(j))$. This value $\theta(\hat\jmath)$ is the gridpoint estimate of $\hat\theta$. If the grid is sufficiently fine to capture small oscillations in $Q(\theta)$, the approximation error is bounded by $\varepsilon$, that is, $|\theta(\hat\jmath) - \hat\theta| \le \varepsilon$. Plots of $Q(\theta(j))$ against $\theta(j)$ can help diagnose errors in grid selection. This method is quite robust but potentially costly.

Two-Step Grid Search. The grid search method can be refined by a two-step execution. For an error bound of $\varepsilon$ pick $G$ so that $G^2 = (b - a)/\varepsilon$. For the first step define an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b - a)/G : j = 0, \ldots, G\}$. At each point evaluate the criterion function and let $\hat\jmath = \operatorname{argmin}_{0 \le j \le G} Q(\theta(j))$. For the second step define an equally spaced grid on $[\theta(\hat\jmath - 1), \theta(\hat\jmath + 1)]$ with $G$ gridpoints, which is $\{\theta'(k) = \theta(\hat\jmath - 1) + 2k(b - a)/G^2 : k = 0, \ldots, G\}$. Let $\hat k = \operatorname{argmin}_{0 \le k \le G} Q(\theta'(k))$. The estimate of $\hat\theta$ is $\theta'(\hat k)$. The advantage of the two-step method over a one-step grid search is that the number of function evaluations has been reduced from $(b - a)/\varepsilon$ to $2\sqrt{(b - a)/\varepsilon}$, which can be substantial. The disadvantage is that if the function $Q(\theta)$ is irregular, the first-step grid may not bracket $\hat\theta$, which thus would be missed.
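The following sketch (an added illustration, not part of the text; it assumes NumPy and an arbitrary example criterion) implements the one-step grid search just described.

```python
import numpy as np

def grid_search(Q, a, b, eps):
    """One-step grid search: evaluate Q on an equally spaced grid on [a, b]
    with G = (b - a)/eps gridpoints and return the best gridpoint."""
    G = int(np.ceil((b - a) / eps))
    grid = a + np.arange(G + 1) * (b - a) / G
    values = np.array([Q(theta) for theta in grid])
    return grid[np.argmin(values)]

# Example criterion with small oscillations around a global minimum near 0.7
Q = lambda theta: (theta - 0.7) ** 2 + 0.05 * np.cos(20.0 * theta)
print(grid_search(Q, 0.0, 2.0, 1e-4))
```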

C.2 Gradient Methods

Gradient methods are iterative methods which produce a sequence $\theta_i : i = 1, 2, \ldots$ which are designed to converge to $\hat\theta$. All require the choice of a starting value $\theta_1$, and all require the computation of the gradient of $Q(\theta)$,
$$g(\theta) = \frac{\partial}{\partial\theta}Q(\theta),$$
and some require the Hessian
$$H(\theta) = \frac{\partial^2}{\partial\theta\,\partial\theta'}Q(\theta).$$
If the functions $g(\theta)$ and $H(\theta)$ are not analytically available, they can be calculated numerically. Take the $j$'th element of $g(\theta)$. Let $\delta_j$ be the $j$'th unit vector (zeros everywhere except for a one in the $j$'th row). Then for $\varepsilon$ small
$$g_j(\theta) \simeq \frac{Q(\theta + \delta_j\varepsilon) - Q(\theta)}{\varepsilon}.$$
Similarly,
$$g_{jk}(\theta) \simeq \frac{Q(\theta + \delta_j\varepsilon + \delta_k\varepsilon) - Q(\theta + \delta_j\varepsilon) - Q(\theta + \delta_k\varepsilon) + Q(\theta)}{\varepsilon^2}.$$
In many cases, numerical derivatives can work well but can be computationally costly relative to analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.

Most gradient methods are a variant of Newton's method which is based on a quadratic approximation. By a Taylor's expansion for $\theta$ close to $\hat\theta$,
$$0 = g(\hat\theta) \simeq g(\theta) + H(\theta)\left(\hat\theta - \theta\right),$$
which implies
$$\hat\theta = \theta - H(\theta)^{-1}g(\theta).$$
This suggests the iteration rule
$$\hat\theta_{i+1} = \theta_i - H(\theta_i)^{-1}g(\theta_i),$$
where $\theta_i$ is the value at the $i$'th iteration.

One problem with Newton's method is that it will send the iterations in the wrong direction if $H(\theta_i)$ is not positive definite. One modification to prevent this possibility is quadratic hill-climbing which sets
$$\hat\theta_{i+1} = \theta_i - \left(H(\theta_i) + \alpha_iI_m\right)^{-1}g(\theta_i),$$
where $\alpha_i$ is set just above the smallest eigenvalue of $H(\theta_i)$ if $H(\theta)$ is not positive definite.

Another productive modification is to add a scalar steplength $\lambda_i$. In this case the iteration rule takes the form
$$\theta_{i+1} = \theta_i - D_ig_i\lambda_i \qquad (C.2)$$
where $g_i = g(\theta_i)$ and $D_i = H(\theta_i)^{-1}$ for Newton's method and $D_i = \left(H(\theta_i) + \alpha_iI_m\right)^{-1}$ for quadratic hill-climbing.

Allowing the steplength $\lambda_i$ to be a free parameter allows for a line search, a one-dimensional optimization. To pick $\lambda_i$ write the criterion function as a function of $\lambda$,
$$Q(\lambda) = Q(\theta_i - D_ig_i\lambda),$$
a one-dimensional optimization problem. There are two common methods to perform a line search. A quadratic approximation evaluates the first and second derivatives of $Q(\lambda)$ with respect to $\lambda$, and picks $\lambda_i$ as the value minimizing this approximation. The half-step method considers the sequence $\lambda = 1, 1/2, 1/4, 1/8, \ldots$. Each value in the sequence is considered and the criterion $Q(\theta_i - \lambda D_ig_i)$ evaluated. If the criterion has improved over $Q(\theta_i)$, use this value, otherwise move to the next element in the sequence.
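The sketch below (an added illustration, not from the text; assumptions: NumPy only, forward-difference derivatives with arbitrary step sizes, and a made-up smooth criterion) combines the numerical derivatives, the Newton iteration (C.2) with D_i = H(theta_i)^(-1), and the half-step line search described above.

```python
import numpy as np

def num_gradient(Q, theta, eps=1e-6):
    # Forward-difference approximation to g_j(theta)
    g = np.zeros_like(theta)
    for j in range(theta.size):
        d = np.zeros_like(theta); d[j] = eps
        g[j] = (Q(theta + d) - Q(theta)) / eps
    return g

def num_hessian(Q, theta, eps=1e-4):
    # Forward-difference approximation to g_jk(theta)
    m = theta.size
    H = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            dj = np.zeros(m); dj[j] = eps
            dk = np.zeros(m); dk[k] = eps
            H[j, k] = (Q(theta + dj + dk) - Q(theta + dj) - Q(theta + dk) + Q(theta)) / eps**2
    return H

def newton_halfstep(Q, theta, max_iter=200, tol=1e-8):
    # Iteration (C.2) with D_i = H(theta_i)^(-1) and a half-step line search
    for _ in range(max_iter):
        g, H = num_gradient(Q, theta), num_hessian(Q, theta)
        step = np.linalg.solve(H, g)
        lam = 1.0
        while lam > 1e-10 and Q(theta - lam * step) >= Q(theta):
            lam /= 2.0
        theta_new = theta - lam * step
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta

# Example smooth criterion with minimizer (1, 2)
Q = lambda t: (t[0] - 1.0) ** 2 + 2.0 * (t[1] - 2.0) ** 2 + 0.5 * (t[0] * t[1] - 2.0) ** 2
print(newton_halfstep(Q, np.array([0.0, 0.0])))
```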

Newton's method does not perform well if $Q(\theta)$ is irregular, and it can be quite computationally costly if $H(\theta)$ is not analytically available. These problems have motivated alternative choices for the weight matrix $D_i$. These methods are called Quasi-Newton methods. Two popular methods are due to Davidson-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS).

Let
$$\Delta g_i = g_i - g_{i-1},$$
$$\Delta\theta_i = \theta_i - \theta_{i-1}.$$
The DFP method sets
$$D_i = D_{i-1} + \frac{\Delta\theta_i\,\Delta\theta_i'}{\Delta\theta_i'\,\Delta g_i} - \frac{D_{i-1}\,\Delta g_i\,\Delta g_i'\,D_{i-1}}{\Delta g_i'\,D_{i-1}\,\Delta g_i}.$$
The BFGS method sets
$$D_i = D_{i-1} + \frac{\Delta\theta_i\,\Delta\theta_i'}{\Delta\theta_i'\,\Delta g_i} + \frac{\Delta\theta_i\,\Delta\theta_i'}{\left(\Delta\theta_i'\,\Delta g_i\right)^2}\,\Delta g_i'\,D_{i-1}\,\Delta g_i - \frac{D_{i-1}\,\Delta g_i\,\Delta\theta_i' + \Delta\theta_i\,\Delta g_i'\,D_{i-1}}{\Delta\theta_i'\,\Delta g_i}.$$
For any of the gradient methods, the iterations continue until the sequence has converged in some sense. This can be defined by examining whether $|\theta_i - \theta_{i-1}|$, $|Q(\theta_i) - Q(\theta_{i-1})|$ or $|g(\theta_i)|$ has become small.

C.3 Derivative-Free Methods

All gradient methods can be quite poor in locating the global minimum when $Q(\theta)$ has several local minima. Furthermore, the methods are not well defined when $Q(\theta)$ is non-differentiable. In these cases, alternative optimization methods are required. One example is the simplex method of Nelder-Mead (1965).

A more recent innovation is the method of simulated annealing (SA). For a review see Goffe, Ferrier, and Rogers (1994). The SA method is a sophisticated random search. Like the gradient methods, it relies on an iterative sequence. At each iteration, a random variable is drawn and added to the current value of the parameter. If the resulting criterion is decreased, this new value is accepted. If the criterion is increased, it may still be accepted depending on the extent of the increase and another randomization. The latter property is needed to keep the algorithm from selecting a local minimum. As the iterations continue, the variance of the random innovations is shrunk. The SA algorithm stops when a large number of iterations is unable to improve the criterion. The SA method has been found to be successful at locating global minima. The downside is that it can take considerable computer time to execute.
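As a small illustration of derivative-free optimization (an added sketch, not part of the text; it assumes NumPy and SciPy, and the multi-start strategy here is a simple stand-in for, not an implementation of, simulated annealing), the Nelder-Mead simplex method can be run from several random starting values of a criterion with many local minima:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# A criterion with many local minima; gradient methods started badly can get stuck
Q = lambda t: np.sum((t - 1.0) ** 2) + 2.0 * np.sum(np.cos(5.0 * t) ** 2)

best = None
for _ in range(20):
    start = rng.uniform(-3.0, 3.0, size=2)
    res = minimize(Q, start, method="Nelder-Mead")   # simplex method of Nelder-Mead (1965)
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)
```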

Bibliography

[1] Abadir, Karim M. and Jan R. Magnus (2005): Matrix Algebra, Cambridge University Press.

[2] Aitken, A.C. (1935): On least squares and linear combinations of observations,Proceedings

of the Royal Statistical Society, 55, 42-48.

[3] Akaike, H. (1973): Information theory and an extension of the maximum likelihood principle. In B. Petroc and F. Csake, eds., Second International Symposium on Information

Theory.

[4] Anderson, T.W. and H. Rubin (1949): Estimation of the parameters of a single equation in

a complete system of stochastic equations,The Annals of Mathematical Statistics, 20, 46-63.

[5] Andrews, Donald W. K. (1988): Laws of large numbers for dependent non-identically distributed random variables,Econometric Theory, 4, 458-467.

[6] Andrews, Donald W. K. (1991), Asymptotic normality of series estimators for nonparametric and semiparametric regression models, Econometrica, 59, 307-345.

[7] Andrews, Donald W. K. (1993), Tests for parameter instability and structural change with unknown change point, Econometrica, 61, 821-856.

[8] Andrews, Donald W. K. and Moshe Buchinsky: (2000): A three-step method for choosing

the number of bootstrap replications, Econometrica, 68, 23-51.

[9] Andrews, Donald W. K. and Werner Ploberger (1994): Optimal tests when a nuisance

parameter is present only under the alternative, Econometrica, 62, 1383-1414.

[10] Ash, Robert B. (1972): Real Analysis and Probability, Academic Press.

[11] Basmann, R. L. (1957): A generalized classical method of linear estimation of coefficients in a structural equation, Econometrica, 25, 77-83.

[12] Bekker, P.A. (1994): Alternative approximations to the distributions of instrumental variable estimators, Econometrica, 62, 657-681.

[13] Billingsley, Patrick (1968): Convergence of Probability Measures. New York: Wiley.

[14] Billingsley, Patrick (1995): Probability and Measure, 3rd Edition, New York: Wiley.

[15] Bose, A. (1988): Edgeworth correction by bootstrap in autoregressions,Annals of Statistics,

16, 1709-1722.

[16] Breusch, T.S. and A.R. Pagan (1979): The Lagrange multiplier test and its application to model specification in econometrics, Review of Economic Studies, 47, 239-253.

[17] Brown, B. W. and Whitney K. Newey (2002): GMM, efficient bootstrapping, and improved inference, Journal of Business and Economic Statistics.


[18] Card, David (1995): Using geographic variation in college proximity to estimate the return to schooling, in Aspects of Labor Market Behavior: Essays in Honour of John Vanderkamp, L.N. Christofides, E.K. Grant, and R. Swidinsky, editors. Toronto: University of Toronto Press.

[19] Carlstein, E. (1986): The use of subseries methods for estimating the variance of a general

statistic from a stationary time series, Annals of Statistics, 14, 1171-1179.

[20] Casella, George and Roger L. Berger (2002): Statistical Inference, 2nd Edition, Duxbury

Press.

[21] Chamberlain, Gary (1987): Asymptotic efficiency in estimation with conditional moment restrictions, Journal of Econometrics, 34, 305-334.

[22] Choi, In and Peter C.B. Phillips (1992): Asymptotic and finite sample distribution theory for IV estimators and tests in partially identified structural equations, Journal of Econometrics, 51, 113-150.

[23] Chow, G.C. (1960): Tests of equality between sets of coefficients in two linear regressions, Econometrica, 28, 591-603.

[24] Cragg, John (1992): Quasi-Aitken Estimation for Heteroskedasticity of Unknown Form, Journal of Econometrics, 54, 179-201.

[25] Davidson, James (1994): Stochastic Limit Theory: An Introduction for Econometricians.

Oxford: Oxford University Press.

[26] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge

University Press.

[27] Dickey, D.A. and W.A. Fuller (1979): Distribution of the estimators for autoregressive time

series with a unit root, Journal of the American Statistical Association, 74, 427-431.

[28] Donald Stephen G. and Whitney K. Newey (2001): Choosing the number of instruments,

Econometrica, 69, 1161-1191.

[29] Dufour, J.M. (1997): Some impossibility theorems in econometrics with applications to

structural and dynamic models, Econometrica, 65, 1365-1387.

[30] Efron, Bradley (1979): Bootstrap methods: Another look at the jackknife, Annals of Statistics, 7, 1-26.

[31] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society

for Industrial and Applied Mathematics.

[32] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York:

Chapman-Hall.

[33] Eicker, F. (1963): Asymptotic normality and consistency of the least squares estimators for

families of linear regressions, Annals of Mathematical Statistics, 34, 447-456.

[34] Engle, Robert F. and Clive W. J. Granger (1987): Co-integration and error correction:

Representation, estimation and testing, Econometrica, 55, 251-276.

[35] Frisch, Ragnar (1933): Editorial, Econometrica, 1, 1-4.

[36] Frisch, Ragnar and F. Waugh (1933): Partial time regressions as compared with individual

trends, Econometrica, 1, 387-401.


[37] Gallant, A. Ronald and D.W. Nychka (1987): Seminonparametric maximum likelihood estimation, Econometrica, 55, 363-390.

[38] Gallant, A. Ronald and Halbert White (1988): A Unified Theory of Estimation and Inference for Nonlinear Dynamic Models. New York: Basil Blackwell.

[39] Galton, Francis (1886): Regression Towards Mediocrity in Hereditary Stature,The Journal

of the Anthropological Institute of Great Britain and Ireland, 15, 246-263.

[40] Goldberger, Arthur S. (1964): Econometric Theory, Wiley.

[41] Goldberger, Arthur S. (1968): Topics in Regression Analysis, Macmillan

[42] Goldberger, Arthur S. (1991): A Course in Econometrics. Cambridge: Harvard University

Press.

[43] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): Global optimization of statistical functions with simulated annealing, Journal of Econometrics, 60, 65-99.

[44] Gosset, William S. (a.k.a. Student) (1908): The probable error of a mean, Biometrika,

6, 1-25.

[45] Gauss, K.F. (1809): Theoria motus corporum coelestium, in Werke, Vol. VII, 240-254.

[46] Granger, Clive W. J. (1969): Investigating causal relations by econometric models and

cross-spectral methods, Econometrica, 37, 424-438.

[47] Granger, Clive W. J. (1981): Some properties of time series data and their use in econometric specification, Journal of Econometrics, 16, 121-130.

[48] Granger, Clive W. J. and Timo Teräsvirta (1993): Modelling Nonlinear Economic Relationships, Oxford University Press, Oxford.

[49] Gregory, A. and M. Veall (1985): On formulating Wald tests of nonlinear restrictions,

Econometrica, 53, 1465-1468,

[50] Haavelmo, T. (1944): The probability approach in econometrics, Econometrica, supplement, 12.

[51] Hall, A. R. (2000): Covariance matrix estimation and the power of the overidentifying

restrictions test, Econometrica, 68, 1517-1527,

[52] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.

[53] Hall, P. (1994): Methodology and theory for the bootstrap, Handbook of Econometrics,

Vol. IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.

[54] Hall, P. and J.L. Horowitz (1996): Bootstrap critical values for tests based on GeneralizedMethod-of-Moments estimation, Econometrica, 64, 891-916.

[55] Hahn, J. (1996): A note on bootstrapping generalized method of moments estimators,

Econometric Theory, 12, 187-197.

[56] Hamilton, James D. (1994) Time Series Analysis.

[57] Hansen, Bruce E. (1992): Efficient estimation and testing of cointegrating vectors in the presence of deterministic trends, Journal of Econometrics, 53, 87-121.


[58] Hansen, Bruce E. (1996): Inference when a nuisance parameter is not identified under the null hypothesis, Econometrica, 64, 413-430.

[59] Hansen, Bruce E. (2006): Edgeworth expansions for the Wald and GMM statistics for nonlinear restrictions, Econometric Theory and Practice: Frontiers of Analysis and Applied

Research, edited by Dean Corbae, Steven N. Durlauf and Bruce E. Hansen. Cambridge University Press.

[60] Hansen, Lars Peter (1982): Large sample properties of generalized method of moments

estimators, Econometrica, 50, 1029-1054.

[61] Hansen, Lars Peter, John Heaton, and A. Yaron (1996): Finite sample properties of some

alternative GMM estimators, Journal of Business and Economic Statistics, 14, 262-280.

[62] Hausman, J.A. (1978): Specification tests in econometrics, Econometrica, 46, 1251-1271.

[63] Heckman, J. (1979): Sample selection bias as a specification error, Econometrica, 47, 153-161.

[64] Horn, S.D., R.A. Horn, and D.B. Duncan. (1975) Estimating heteroscedastic variances in

linear model, Journal of the American Statistical Association, 70, 380-385.

[65] Horowitz, Joel (2001): The Bootstrap, Handbook of Econometrics, Vol. 5, J.J. Heckman

and E.E. Leamer, eds., Elsevier Science, 3159-3228.

[66] Imbens, G.W. (1997): One step estimators for over-identified generalized method of moments models, Review of Economic Studies, 64, 359-383.

[67] Imbens, G.W., R.H. Spady and P. Johnson (1998): Information theoretic approaches to

inference in moment condition models, Econometrica, 66, 333-357.

[68] Jarque, C.M. and A.K. Bera (1980): Efficient tests for normality, homoskedasticity and serial independence of regression residuals, Economic Letters, 6, 255-259.

[69] Johansen, S. (1988): Statistical analysis of cointegrating vectors, Journal of Economic

Dynamics and Control, 12, 231-254.

[70] Johansen, S. (1991): Estimation and hypothesis testing of cointegration vectors in the presence of linear trend, Econometrica, 59, 1551-1580.

[71] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models, Oxford University Press.

[72] Johansen, S. and K. Juselius (1992): Testing structural hypotheses in a multivariate cointegration analysis of the PPP and the UIP for the UK,Journal of Econometrics, 53, 211-244.

[73] Kitamura, Y. (2001): Asymptotic optimality and empirical likelihood for testing moment

restrictions, Econometrica, 69, 1661-1672.

[74] Kitamura, Y. and M. Stutzer (1997): An information-theoretic alternative to generalized

method of moments, Econometrica, 65, 861-874..

[75] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.

[76] Kunsch, H.R. (1989): The jackknife and the bootstrap for general stationary observations,

Annals of Statistics, 17, 1217-1241.


[77] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time

series have a unit root? Journal of Econometrics, 54, 159-178.

[78] Lafontaine, F. and K.J. White (1986): Obtaining any Wald statistic you want, Economics

Letters, 21, 35-40.

[79] Lehmann, E.L. and George Casella (1998): Theory of Point Estimation, 2nd Edition,

Springer.

[80] Lehmann, E.L. and Joseph P. Romano (2005): Testing Statistical Hypotheses, 3rd Edition,

Springer.

[81] Lindeberg, Jarl Waldemar, (1922): Eine neue Herleitung des Exponentialgesetzes in der

Wahrscheinlichkeitsrechnung, Mathematische Zeitschrift, 15, 211-225.

[82] Li, Qi and Jeffrey Racine (2007) Nonparametric Econometrics.

[83] Lovell, M.C. (1963): Seasonal adjustment of economic time series,Journal of the American

Statistical Association, 58, 993-1010.

[84] MacKinnon, James G. (1990): Critical values for cointegration, in Engle, R.F. and C.W.

Granger (eds.) Long-Run Economic Relationships: Readings in Cointegration, Oxford, Oxford

University Press.

[85] MacKinnon, James G. and Halbert White (1985): Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties, Journal of Econometrics, 29, 305-325.

[86] Magnus, J. R., and H. Neudecker (1988): Matrix Differential Calculus with Applications in Statistics and Econometrics, New York: John Wiley and Sons.

[87] Mann, H.B. and A. Wald (1943): On stochastic limit and order relationships, The Annals of Mathematical Statistics, 14, 217-226.

[88] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.

[89] Nelder, J. and R. Mead (1965): A simplex method for function minimization, Computer

Journal, 7, 308-313.

[90] Nerlove, Marc (1963): Returns to Scale in Electricity Supply, Chapter 7 of Measurement

in Economics (C. Christ, et al, eds.). Stanford: Stanford University Press, 167-198.

[91] Newey, Whitney K. (1990): Semiparametric efficiency bounds, Journal of Applied Econometrics, 5, 99-135.

[92] Newey, Whitney K. (1997): Convergence rates and asymptotic normality for series estimators, Journal of Econometrics, 79, 147-168.

[93] Newey, Whitney K. and Daniel L. McFadden (1994): Large Sample Estimation and Hypothesis Testing, in Robert Engle and Daniel McFadden, (eds.) Handbook of Econometrics,

vol. IV, 2111-2245, North Holland: Amsterdam.

[94] Newey, Whitney K. and Kenneth D. West (1987): Hypothesis testing with efficient method of moments estimation, International Economic Review, 28, 777-787.

[95] Owen, Art B. (1988): Empirical likelihood ratio confidence intervals for a single functional, Biometrika, 75, 237-249.


[96] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.

[97] Park, Joon Y. and Peter C. B. Phillips (1988): On the formulation of Wald tests of nonlinear

restrictions, Econometrica, 56, 1065-1083,

[98] Phillips, Peter C.B. (1989): Partially identified econometric models, Econometric Theory, 5, 181-240.

[99] Phillips, Peter C.B. and Sam Ouliaris (1990): Asymptotic properties of residual based tests

for cointegration, Econometrica, 58, 165-193.

[100] Politis, D.N. and J.P. Romano (1996): The stationary bootstrap,Journal of the American

Statistical Association, 89, 1303-1313.

[101] Pötscher, B.M. (1991): Effects of model selection on inference, Econometric Theory, 7, 163-185.

[102] Qin, J. and J. Lawless (1994): Empirical likelihood and general estimating equations, The

Annals of Statistics, 22, 300-325.

[103] Ramsey, J. B. (1969): Tests for specification errors in classical linear least-squares regression analysis, Journal of the Royal Statistical Society, Series B, 31, 350-371.

[104] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.

[105] Runge, Carl (1901): Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten, Zeitschrift für Mathematik und Physik, 46, 224-243.

[106] Said, S.E. and D.A. Dickey (1984): Testing for unit roots in autoregressive-moving average

models of unknown order, Biometrika, 71, 599-608.

[107] Secrist, Horace (1933): The Triumph of Mediocrity in Business. Evanston: Northwestern

University.

[108] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.

[109] Sargan, J.D. (1958): The estimation of economic relationships using instrumental variables, Econometrica, 26, 393-415.

[110] Shao, Jun (2003): Mathematical Statistics, 2nd edition, Springer.

[111] Sheather, S.J. and M.C. Jones (1991): A reliable data-based bandwidth selection method

for kernel density estimation, Journal of the Royal Statistical Society, Series B, 53, 683-690.

[112] Shin, Y. (1994): A residual-based test of the null of cointegration against the alternative of

no cointegration, Econometric Theory, 10, 91-115.

[113] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chapman and Hall.

[114] Sims, C.A. (1972): Money, income and causality,American Economic Review, 62, 540-552.

[115] Sims, C.A. (1980): Macroeconomics and reality, Econometrica, 48, 1-48.

[116] Staiger, D. and James H. Stock (1997): Instrumental variables regression with weak instruments, Econometrica, 65, 557-586.

[117] Stock, James H. (1987): Asymptotic properties of least squares estimators of cointegrating

vectors, Econometrica, 55, 1035-1056.


[118] Stock, James H. (1991): Confidence intervals for the largest autoregressive root in U.S. macroeconomic time series, Journal of Monetary Economics, 28, 435-460.

[119] Stock, James H. and Jonathan H. Wright (2000): GMM with weak identification, Econometrica, 68, 1055-1096.

[120] Stock, James H. and Mark W. Watson (2010): Introduction to Econometrics, 3rd edition,

Addison-Wesley.

[121] Stone, Marshall H. (1937): Applications of the Theory of Boolean Rings to General Topology, Transactions of the American Mathematical Society, 41, 375-481.

[122] Stone, Marshall H. (1948): The Generalized Weierstrass Approximation Theorem, Mathematics Magazine, 21, 167-184.

[123] Theil, Henri. (1953): Repeated least squares applied to complete equation systems, The

Hague, Central Planning Bureau, mimeo.

[124] Theil, Henri (1961): Economic Forecasts and Policy. Amsterdam: North Holland.

[125] Theil, Henri. (1971): Principles of Econometrics, New York: Wiley.

[126] Tobin, James (1958): Estimation of relationships for limited dependent variables, Econometrica, 26, 24-36.

[127] Tripathi, Gautam (1999): A matrix extension of the Cauchy-Schwarz inequality,Economics

Letters, 63, 1-3.

[128] van der Vaart, A.W. (1998): Asymptotic Statistics, Cambridge University Press.

[129] Wald, A. (1943): Tests of statistical hypotheses concerning several parameters when the

number of observations is large, Transactions of the American Mathematical Society, 54,

426-482.

[130] Wang, J. and E. Zivot (1998): Inference on structural parameters in instrumental variables

regression with weak instruments, Econometrica, 66, 1389-1404.

[131] Weierstrass, K. (1885): Über die analytische Darstellbarkeit sogenannter willkürlicher Functionen einer reellen Veränderlichen, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin, 1885.

[132] White, Halbert (1980): A heteroskedasticity-consistent covariance matrix estimator and a

direct test for heteroskedasticity, Econometrica, 48, 817-838.

[133] White, Halbert (1984): Asymptotic Theory for Econometricians, Academic Press.

[134] Wooldridge, Jeffrey M. (2010) Econometric Analysis of Cross Section and Panel Data, 2nd edition, MIT Press.

[135] Wooldridge, Jeffrey M. (2009) Introductory Econometrics: A Modern Approach, 4th edition, Southwestern.

[136] Zellner, Arnold (1962): An efficient method of estimating seemingly unrelated regressions, and tests for aggregation bias, Journal of the American Statistical Association, 57, 348-368.

[137] Zhang, Fuzhen and Qingling Zhang (2006): Eigenvalue inequalities for matrix product,

IEEE Transactions on Automatic Control, 51, 1506-1509.
