Bruce E. Hansen
© 2000, 2007¹
University of Wisconsin
www.ssc.wisc.edu/~bhansen
¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents

1 Introduction 1
1.1 Economic Data 1
1.2 Observational Data 1
1.3 Economic Data 2

4 Inference 27
4.1 Sampling Distribution 27
4.2 Consistency 28
4.3 Asymptotic Normality 29
4.4 Covariance Matrix Estimation 31
4.5 Alternative Covariance Matrix Estimators 33
4.6 Functions of Parameters 33
4.7 t tests 34
4.8 Confidence Intervals 35
4.9 Wald Tests 36
4.10 F Tests 37
4.11 Normal Regression Model 38
4.12 Problems with Tests of Nonlinear Hypotheses 39
4.13 Monte Carlo Simulation 42
4.14 Estimating a Wage Equation 44
4.15 Technical Proofs 45
4.16 Exercises 48

6 The Bootstrap 66
6.1 Definition of the Bootstrap 66
6.2 The Empirical Distribution Function 66
6.3 Nonparametric Bootstrap 67
6.4 Bootstrap Estimation of Bias and Variance 68
6.5 Percentile Intervals 69
6.6 Percentile-t Equal-Tailed Interval 70
6.7 Symmetric Percentile-t Intervals 71
6.8 Asymptotic Expansions 71
6.9 One-Sided Tests 72
6.10 Symmetric Two-Sided Tests 73
6.11 Percentile Confidence Intervals 74
6.12 Bootstrap Methods for Regression Models 75
6.13 Exercises 75

8 Empirical Likelihood 86
8.1 Non-Parametric Likelihood 86
8.2 Asymptotic Distribution of EL Estimator 87
8.3 Overidentifying Restrictions 88
8.4 Testing 88
8.5 Numerical Computation 89
8.6 Technical Proofs 90

9 Endogeneity 92
9.1 Instrumental Variables 93
9.2 Reduced Form 93
9.3 Identification 94
9.4 Estimation 95
9.5 Special Cases: IV and 2SLS 95
9.6 Bekker Asymptotics 96
9.7 Identification Failure 97
9.8 Exercises 99

14 Nonparametrics 123
14.1 Kernel Density Estimation 123
14.2 Asymptotic MSE for Kernel Estimates 124

B Probability 135
B.1 Foundations 135
B.2 Random Variables 136
B.3 Expectation 137
B.4 Common Distributions 138
B.5 Multivariate Random Variables 140
B.6 Conditional Distributions and Expectation 141
B.7 Transformations 143
B.8 Normal and Related Distributions 143
Chapter 1
Introduction
Econometrics is the study of estimation and inference for economic models using economic data. Econometric theory concerns the study and development of tools and methods for applied econometric applications. Applied econometrics concerns the application of these tools to economic data.
1.3 Economic Data
Fortunately for economists, the development of the internet has provided a convenient forum for dissemination of economic data. Many large-scale economic datasets are available without charge from governmental agencies. An excellent starting point is the Resources for Economists Data Links, available at http://rfe.wustl.edu/Data/index.html.
Some other excellent data sources are listed below.
Bureau of Labor Statistics: http://www.bls.gov/
Federal Reserve Bank of St. Louis: http://research.stlouisfed.org/fred2/
Board of Governors of the Federal Reserve System: http://www.federalreserve.gov/releases/
National Bureau of Economic Research: http://www.nber.org/
US Census: http://www.census.gov/econ/www/
Current Population Survey (CPS): http://www.bls.census.gov/cps/cpsmain.htm
Survey of Income and Program Participation (SIPP): http://www.sipp.census.gov/sipp/
Panel Study of Income Dynamics (PSID): http://psidonline.isr.umich.edu/
U.S. Bureau of Economic Analysis: http://www.bea.doc.gov/
CompuStat: http://www.compustat.com/www/
International Financial Statistics (IFS): http://ifs.apdi.net/imf/
Chapter 2
2.1 Variables
The most commonly applied econometric tool is regression. This is used when the goal is to quantify the impact of one set of variables (the regressors, conditioning variables, or covariates) on another variable (the dependent variable). We let y denote the dependent variable and (x_1, x_2, ..., x_k) denote the k regressors. It is convenient to write the set of regressors as a vector in R^k:

    x = (x_1, x_2, ..., x_k)'.    (2.1)
Following mathematical convention, real numbers (elements of the real line R) are written using lower case italics such as y, and vectors (elements of R^k) by lower case bold italics such as x. Upper case bold italics such as X will be used for matrices.
The random variables (y, x) have a distribution F which we call the population. This "population" is infinitely large. This abstraction can be a source of confusion as it does not correspond to a physical population in the real world. The distribution F is unknown, and the goal of statistical inference is to learn about features of F from the sample.
At this point in our analysis it is unimportant whether the observations y and x come from continuous
or discrete distributions. For example, many regressors in econometric practice are binary, taking on only
the values 0 and 1, and are typically called dummy variables.
In general, m(x) can take any form, and exists so long as E|y| < ∞. In the example presented in Figure 2.1, the mean wage for men is $27.22, and that for women is $20.73. These are indicated in Figure 2.1 by the arrows drawn to the x-axis.

¹ These are nonparametric density estimates using a Gaussian kernel with the bandwidth selected by cross-validation. See Chapter 14. The data are from the 2004 Current Population Survey.
Figure 2.1: Wage Densities for White College Grads with 10-15 Years Work Experience
Take a closer look at the density functions displayed in Figure 2.1. You can see that the right tail of the density is much thicker than the left tail. These are asymmetric (skewed) densities, which is a common feature of wage distributions. When a distribution is skewed, the mean is not necessarily a good summary of the central tendency. In this context it is often convenient to transform the data by taking the (natural) logarithm. Figure 2.2 shows the density of log hourly wages for the same population, with mean log hourly wages drawn in with the arrows. The difference in the log mean wage between men and women is 0.30, which implies a 30% average wage difference for this population. This is a more robust measure of the typical wage gap between men and women than the difference in the untransformed wage means. For this reason, wage regressions typically use log wages as a dependent variable rather than the level of wages.
The comparison in Figure 2.1 is facilitated by the fact that the control variable (gender) is discrete. When the distribution of the control variable is continuous, comparisons become more complicated. To illustrate, Figure 2.3 displays a scatter plot² of log wages against education levels. Assuming for simplicity that this is the true joint distribution, the solid line displays the conditional expectation of log wages varying with education. The conditional expectation function is close to linear; the dashed line is a linear projection approximation, which will be discussed in Section 2.6. The main point to be learned from Figure 2.3 is that the conditional expectation describes the central tendency of the conditional distribution. Of particular interest to graduate students may be the observation that the difference between a B.A. and a Ph.D. degree in mean log hourly wages is 0.36, implying an average 36% difference in wage levels.

1. E(e | x) = 0.
2. E(e) = 0.

² White non-military male wage earners with 10-15 years of potential work experience.
Figure 2.2: Log Wage Densities
To show the first statement, by the definition of e and the linearity of conditional expectations,

    E(e | x) = E((y − m(x)) | x)
             = E(y | x) − E(m(x) | x)
             = m(x) − m(x)
             = 0.

The equations

    y = m(x) + e
    E(e | x) = 0

are often stated jointly as the regression framework. It is important to understand that this is a framework, not a model, because no restrictions have been placed on the joint distribution of the data. These equations hold true by definition. A regression model imposes further restrictions on the joint distribution; most typically, restrictions on the permissible class of regression functions m(x).
The conditional mean also has the property of being the best predictor of y, in the sense of achieving the lowest mean squared error. To see this, let g(x) be an arbitrary predictor of y given x. The expected squared error using this prediction function is

    E(y − g(x))² = E(e + m(x) − g(x))²
                 = E e² + 2E(e(m(x) − g(x))) + E(m(x) − g(x))²
                 = E e² + E(m(x) − g(x))²
                 ≥ E e²

where the cross-product term vanishes by Theorem 2.3.1.3. The right-hand side is minimized by setting g(x) = m(x). Thus the mean squared error is minimized by the conditional mean.
Figure 2.3: Conditional Mean of Wages Given Education
where

    β = (β_1, ..., β_k)'    (2.7)

is a k × 1 parameter vector.
In this case (2.3) can be written as

    y = x'β + e    (2.8)
    E(e | x) = 0.    (2.9)

The homoskedastic linear regression model adds the assumption that the conditional variance of the error is constant:

    y = x'β + e
    E(e | x) = 0
    E(e² | x) = σ².

which is quadratic in β. The best linear predictor is obtained by selecting β to minimize S(β). The first-order condition for minimization (from Appendix A.9) is

    0 = (∂/∂β) S(β) = −2E(xy) + 2E(xx')β.

Solving for β we find

    β = (E(xx'))⁻¹ E(xy).    (2.10)

It is worth taking the time to understand the notation involved in this expression. E(xx') is a k × k matrix and E(xy) is a k × 1 column vector. Therefore, alternative expressions such as E(xy)/E(xx') or E(xy)(E(xx'))⁻¹ are incoherent and incorrect. Appendix A provides a comprehensive review of matrix notation and operations.
The vector β in (2.10) exists and is unique as long as the k × k matrix Q = E(xx') is invertible. The matrix Q plays an important role in least-squares theory so we will discuss some of its properties in detail. Observe that for any non-zero α ∈ R^k,

    α'Qα = E(α'xx'α) = E(α'x)² ≥ 0

so Q is by construction positive semi-definite. It is invertible if and only if it is positive definite, which requires that for all non-zero α, E(α'x)² > 0. Equivalently, there cannot exist a non-zero vector α such that α'x = 0 identically. This occurs when redundant variables are included in x. In order for β to be uniquely defined, this situation must be excluded.
Given the definition of β in (2.10), x'β is the best linear predictor for y. The error is

    e = y − x'β.    (2.11)

Notice that the error e from the linear prediction equation is equal to the error from the regression equation when (and only when) the conditional mean is linear in x; otherwise they are distinct.
Rewriting, we obtain a decomposition of y into linear predictor and error:

    y = x'β + e.    (2.12)

This completes the derivation of the model. We call x'β alternatively the best linear predictor of y given x, or the linear projection of y onto x. In general we will call equation (2.12) the linear projection model.
We now summarize the assumptions necessary for its derivation and list the implications in Theorem 2.6.1.

Assumption 2.6.1
1. x contains an intercept;
2. Ey² < ∞;
3. Ex_j² < ∞ for j = 1, ..., k;
4. Q = E(xx') is invertible.

Theorem 2.6.1 Under Assumption 2.6.1, (2.10) and (2.11) are well defined. Furthermore,

    E e² < ∞    (2.13)
    E(xe) = 0    (2.14)

and

    E(e) = 0.    (2.15)

A complete proof of Theorem 2.6.1 is presented in Section 2.7.
The two equations (2.12) and (2.14) summarize the linear projection model. Let's compare it with the linear regression model (2.8)-(2.9). Since from Theorem 2.3.1.4 we know that the regression error has the property E(xe) = 0, it follows that linear regression is a special case of the projection model. However, the converse is not true, as the projection error does not necessarily satisfy E(e | x) = 0. For example, suppose that x ∈ R with Ex = 0 and Ex³ = 0, and let e = x² − Ex². Then Exe = Ex³ − Ex·Ex² = 0, yet E(e | x) = x² − Ex² ≠ 0.
It is useful to note that the facts that E(xe) = 0 and E(e) = 0 mean that the variables x and e are uncorrelated.
The conditions listed in Assumption 2.6.1 are weak. The finite second moment Assumptions 2.6.1.2 and 2.6.1.3 are called regularity conditions. Assumption 2.6.1.4 is required to ensure that β is uniquely defined. Assumption 2.6.1.1 is employed to guarantee that (2.15) holds.
We have shown that under mild regularity conditions, for any pair (y, x) we can define a linear equation (2.12) with the properties listed in Theorem 2.6.1. No additional assumptions are required. However, it is important not to misinterpret the generality of this statement. The linear equation (2.12) is defined by the definition of the best linear predictor and the associated coefficient definition (2.10). In contrast, in many economic models the parameter β may be defined within the model. In this case (2.10) may not hold and the implications of Theorem 2.6.1 may be false. These structural models require alternative estimation methods, and are discussed in Chapter 9.
Returning to the joint distribution displayed in Figure 2.3, the dashed line is the projection of log wages onto education. In this example the linear predictor is a close approximation to the conditional mean. In other cases the two may be quite different. Figure 2.4 displays the relationship³ between mean log hourly wages and labor market experience. The solid line is the conditional mean, and the straight dashed line is the linear projection. In this case the linear projection is a poor approximation to the conditional mean. It over-predicts wages for young and old workers, and under-predicts for the rest. Most importantly, it misses the strong downturn in expected wages for those above 35 years work experience (equivalently, for those over 53 in age).
This defect in the best linear predictor can be partially corrected through a careful selection of regressors. In the example just presented, we can augment the regressor vector x to include both experience and experience². The best linear predictor of log wages given these two variables can be called a quadratic projection, since the resulting function is quadratic in experience. Other than the redefinition of the regressor vector, there are no changes in our methods or analysis. In Figure 2.4 we display as well the quadratic projection. In this example it is a much better approximation to the conditional mean than the linear projection.

³ In the population of Caucasian non-military male wage earners with 12 years of education.
Figure 2.4: Hourly Wage as a Function of Experience
Another defect of linear projection is that it is sensitive to the marginal distribution of the regressors when the conditional mean is non-linear. We illustrate the issue in Figure 2.5 for a constructed⁴ joint distribution of y and x. The solid line is the non-linear conditional mean of y given x. The data are divided into two groups, Group 1 and Group 2, which have different marginal distributions for the regressor x; Group 1 has a lower mean value of x than Group 2. The separate linear projections of y on x for these two groups are displayed in the figure by the dashed lines. These two projections are distinct approximations to the conditional mean. A defect with linear projection is that it leads to the incorrect conclusion that the effect of x on y is different for individuals in the two groups. This conclusion is incorrect because in fact there is no difference in the conditional mean function. The apparent difference is a by-product of a linear approximation to a non-linear mean, combined with different marginal distributions for the conditioning variables.
Note that for j = 1, ..., k, by the Cauchy-Schwarz Inequality (C.3) and Assumptions 2.6.1.2 and 2.6.1.3,

    E|x_j y| ≤ (Ex_j²)^{1/2} (Ey²)^{1/2} < ∞.

Thus the elements in the vector E(xy) are well defined and finite. Next, note that the jl'th element of E(xx') is E(x_j x_l). Observe that

    E|x_j x_l| ≤ (Ex_j²)^{1/2} (Ex_l²)^{1/2} < ∞

under Assumption 2.6.1.3. Thus all elements of the matrix E(xx') are finite.
Equation (2.10) states that β = (E(xx'))⁻¹ E(xy), which is well defined since (E(xx'))⁻¹ exists under Assumption 2.6.1.4. It follows that e = y − x'β as defined in (2.11) is also well defined.

⁴ The x_i in Group 1 are N(2, 1) and those in Group 2 are N(4, 1), and the conditional distribution of y given x is N(m(x), 1), where m(x) = 2x − x²/6.
Figure 2.5: Conditional Mean and Two Linear Projections
Note that the Schwarz Inequality (A.7) implies (x'β)² ≤ ||x||² ||β||², and therefore combined with (2.16) we see that

    E(x'β)² ≤ E||x||² ||β||² < ∞.    (2.17)

Using Minkowski's Inequality (C.5), Assumption 2.6.1.2 and (2.17) we find

    (E e²)^{1/2} = (E(y − x'β)²)^{1/2}
                 ≤ (Ey²)^{1/2} + (E(x'β)²)^{1/2}
                 < ∞,

establishing (2.13).
An application of the Cauchy-Schwarz Inequality (C.3) shows that for any j,

    E|x_j e| ≤ (Ex_j²)^{1/2} (E e²)^{1/2} < ∞,

and therefore the elements in the vector E(xe) are well defined and finite.
Using the definitions (2.11) and (2.10), and the matrix properties that A⁻¹A = I and Ia = a,

    E(xe) = E(x(y − x'β))
          = E(xy) − E(xx')(E(xx'))⁻¹ E(xy)
          = 0.
2.8 Exercises
1. Prove parts 2, 3 and 4 of Theorem 2.3.1.
2. Suppose that the random variables y and x only take the values 0 and 1, and have the following joint probability distribution:

            x = 0    x = 1
    y = 0    .1       .2
    y = 1    .4       .3

Find E(y | x), E(y² | x) and var(y | x) for x = 0 and x = 1.

3. Suppose that y is discrete-valued, taking values only on the non-negative integers, and the conditional distribution of y given x is Poisson:

    P(y = j | x) = exp(−x'β)(x'β)^j / j!,    j = 0, 1, 2, ...

Compute E(y | x) and var(y | x). Does this justify a linear regression model of the form y = x'β + e?
Hint: If P(y = j) = exp(−λ)λ^j / j!, then Ey = λ and var(y) = λ.

4. Let x and y have the joint density f(x, y) = (3/2)(x² + y²) on 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Compute the coefficients of the best linear predictor y = β_1 + β_2 x + e. Compute the conditional mean m(x) = E(y | x). Are they different?

5. Take the bivariate linear projection model

    y = β_1 + β_2 x + e
    E(e) = 0
    E(xe) = 0.

Define μ_y = Ey, μ_x = Ex, σ_x² = var(x), σ_y² = var(y) and σ_xy = cov(x, y). Show that β_2 = σ_xy/σ_x² and β_1 = μ_y − β_2 μ_x.

6. Let x be a random variable with μ = Ex and σ² = var(x). Define

    g(x | μ, σ²) = ( x − μ , (x − μ)² − σ² )'.

Show that Eg(x | m, s) = 0 if and only if m = μ and s = σ².
Chapter 3

Assumption 3.1.1 The observations (y_i, x_i), i = 1, ..., n, are mutually independent across observations i and identically distributed.

This chapter explores estimation and inference in the linear projection model for a random sample:

    y_i = x_i'β + e_i    (3.1)
    E(x_i e_i) = 0    (3.2)
    β = (E(x_i x_i'))⁻¹ E(x_i y_i).    (3.3)

In Sections 3.8 and 3.9, we narrow the focus to the linear regression model, but for most of the chapter we retain the broader focus on the projection model.

3.2 Estimation

Equation (3.3) writes the projection coefficient β as an explicit function of the population moments E(x_i y_i) and E(x_i x_i'). Their moment estimators are the sample moments

    Ê(x_i y_i) = (1/n) Σ_{i=1}^n x_i y_i
    Ê(x_i x_i') = (1/n) Σ_{i=1}^n x_i x_i'.

It follows that the moment estimator of β replaces the population moments in (3.3) with the sample moments:

    β̂ = (Ê(x_i x_i'))⁻¹ Ê(x_i y_i)
       = ((1/n) Σ_{i=1}^n x_i x_i')⁻¹ ((1/n) Σ_{i=1}^n x_i y_i)
       = (Σ_{i=1}^n x_i x_i')⁻¹ (Σ_{i=1}^n x_i y_i).    (3.4)
Another way to derive β̂ is as follows. Observe that (3.2) can be written in the parametric form g(β) = E(x_i(y_i − x_i'β)) = 0. The function g(β) can be estimated by

    ĝ(β) = (1/n) Σ_{i=1}^n x_i(y_i − x_i'β).

This is a set of k equations which are linear in β. The estimator β̂ is the value which jointly sets these equations equal to zero:

    0 = ĝ(β̂)    (3.5)
      = (1/n) Σ_{i=1}^n x_i(y_i − x_i'β̂)
      = (1/n) Σ_{i=1}^n x_i y_i − (1/n) Σ_{i=1}^n x_i x_i' β̂.

To illustrate, for the regression of log hourly wages on education (with an intercept), the sample moments are

    (1/n) Σ_{i=1}^n x_i x_i' = [ 1       14.14  ]        (1/n) Σ_{i=1}^n x_i y_i = [ 2.95  ]
                               [ 14.14   205.83 ]                                  [ 42.40 ]

Thus

    β̂ = [ 1       14.14  ]⁻¹ [ 2.95  ]
         [ 14.14   205.83 ]   [ 42.40 ]

       = [ 34.94   −2.40 ] [ 2.95  ]
         [ −2.40   0.170 ] [ 42.40 ]

       = [ 1.313 ]
         [ 0.128 ]

We can write the estimated equation using the format

    log(Wage) = 1.313 + 0.128 Education.

An interpretation of the estimated equation is that each year of education is associated with a 12.8% increase in mean wages.
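To make the computation in (3.4) concrete, here is a minimal sketch in Python/NumPy (not part of the original text; the simulated education and wage values are purely illustrative) of forming the sample moments and solving for β̂.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 988
    educ = rng.integers(10, 19, size=n).astype(float)      # hypothetical years of education
    logwage = 1.3 + 0.13 * educ + rng.normal(0, 0.5, n)    # hypothetical log hourly wages

    X = np.column_stack([np.ones(n), educ])                # rows are x_i' = (1, education_i)
    y = logwage

    Qhat = X.T @ X / n        # (1/n) sum x_i x_i'
    mhat = X.T @ y / n        # (1/n) sum x_i y_i
    beta_hat = np.linalg.solve(Qhat, mhat)   # (sum x_i x_i')^{-1} (sum x_i y_i)
    print(beta_hat)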
3.3 Least Squares

Least squares is another classic motivation for the estimator (3.4). Define the sum-of-squared errors (SSE) function

    S_n(β) = Σ_{i=1}^n (y_i − x_i'β)²
           = Σ_{i=1}^n y_i² − 2β' Σ_{i=1}^n x_i y_i + β' (Σ_{i=1}^n x_i x_i') β.

This is a quadratic function of β. To visualize this function, Figure 3.1 displays an example sum-of-squared errors function S_n(β) for the case k = 2.
The Ordinary Least Squares (OLS) estimator is the value of β which minimizes S_n(β). Matrix calculus (see Appendix A.9) gives the first-order conditions for minimization:

    0 = (∂/∂β) S_n(β̂)
      = −2 Σ_{i=1}^n x_i y_i + 2 Σ_{i=1}^n x_i x_i' β̂,

whose solution is (3.4). Following convention we will call β̂ the OLS estimator of β.
As a by-product of OLS estimation, we define the predicted value

    ŷ_i = x_i'β̂

and the residual

    ê_i = y_i − ŷ_i = y_i − x_i'β̂.

Note that y_i = ŷ_i + ê_i. It is important to understand the distinction between the error e_i and the residual ê_i. The error is unobservable, while the residual is a by-product of estimation. These two variables are frequently mislabeled, which can cause confusion.
Equation (3.5) implies that

    (1/n) Σ_{i=1}^n x_i ê_i = 0.

Since x_i contains a constant, one implication is that

    (1/n) Σ_{i=1}^n ê_i = 0.

Thus the residuals have a sample mean of zero and the sample correlation between the regressors and the residual is zero. These are algebraic results, and hold true for all linear regression estimates.
The error variance σ² = Ee_i² is also a parameter of interest. It measures the variation in the "unexplained" part of the regression. Its method of moments estimator is the sample average of the squared residuals

    σ̂² = (1/n) Σ_{i=1}^n ê_i².    (3.6)

An alternative estimator uses the degrees-of-freedom adjustment

    s² = (1/(n − k)) Σ_{i=1}^n ê_i².    (3.7)

A justification for the latter choice will be provided in Section 3.8.
A measure of the explained variation relative to the total variation is the coefficient of determination or R-squared,

    R² = Σ_{i=1}^n (ŷ_i − ȳ)² / Σ_{i=1}^n (y_i − ȳ)² = 1 − σ̂²/σ̂_y²

where

    σ̂_y² = (1/n) Σ_{i=1}^n (y_i − ȳ)²

is the sample variance of y_i. R² can be viewed as an estimator of the population parameter

    ρ² = var(x_i'β)/var(y_i) = 1 − σ²/σ_y²

where σ_y² = var(y_i). An alternative estimator of ρ² proposed by Theil called "R-bar-squared" is

    R̄² = 1 − s²/σ̃_y²

where

    σ̃_y² = (1/(n − 1)) Σ_{i=1}^n (y_i − ȳ)².

Theil's estimator R̄² is a ratio of adjusted variance estimators, and therefore is expected to be a better estimator of ρ² than the unadjusted estimator R².
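A small sketch (illustrative Python/NumPy with simulated data; s² as in (3.7)) of computing R² and Theil's R̄² from the OLS residuals:

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 200, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
    y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=n)

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    e_hat = y - X @ beta_hat

    sig2_hat = e_hat @ e_hat / n            # (3.6)
    sig2_y = np.var(y)                      # (1/n) sum (y_i - ybar)^2
    R2 = 1 - sig2_hat / sig2_y

    s2 = e_hat @ e_hat / (n - k)            # (3.7)
    sig2_y_adj = np.var(y, ddof=1)          # (1/(n-1)) sum (y_i - ybar)^2
    R2_bar = 1 - s2 / sig2_y_adj            # Theil's R-bar-squared
    print(R2, R2_bar)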
The log-likelihood function for the normal regression model is

    log L(β, σ²) = Σ_{i=1}^n log( (2πσ²)^{-1/2} exp( −(1/(2σ²))(y_i − x_i'β)² ) )
                 = −(n/2) log(2πσ²) − (1/(2σ²)) S_n(β).

The MLE (β̂, σ̂²) maximizes log L(β, σ²). Since log L(β, σ²) is a function of β only through the sum of squared errors S_n(β), maximizing the likelihood is identical to minimizing S_n(β). Hence the MLE for β equals the OLS estimator.
Plugging β̂ into the log-likelihood we obtain

    log L(β̂, σ²) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^n ê_i².

Maximization with respect to σ² yields the first-order condition

    (∂/∂σ²) log L(β̂, σ̂²) = −n/(2σ̂²) + (1/(2(σ̂²)²)) Σ_{i=1}^n ê_i² = 0.

Solving for σ̂² yields the method of moments estimator (3.6). Thus the MLE (β̂, σ̂²) for the normal regression model is identical to the method of moments estimators. Due to this equivalence, the OLS estimator β̂ is frequently referred to as the Gaussian MLE.
Stacking the n observations, we have the system of equations

    y_1 = x_1'β + e_1
    y_2 = x_2'β + e_2
    ⋮
    y_n = x_n'β + e_n.

Now define

    y = (y_1, y_2, ..., y_n)',    X = (x_1, x_2, ..., x_n)',    e = (e_1, e_2, ..., e_n)'.

Observe that y and e are n × 1 vectors, and X is an n × k matrix whose i'th row is x_i'. Then the system of n equations can be compactly written in the single equation

    y = Xβ + e.

Sample sums can also be written in matrix notation. For example

    Σ_{i=1}^n x_i x_i' = X'X
    Σ_{i=1}^n x_i y_i = X'y.

Thus the estimator (3.4), residual vector, and sample error variance can be written as

    β̂ = (X'X)⁻¹ X'y
    ê = y − Xβ̂
    σ̂² = n⁻¹ ê'ê.
A useful result is obtained by inserting y = Xβ + e into the formula for β̂ to obtain

    β̂ = (X'X)⁻¹ X'(Xβ + e)
       = (X'X)⁻¹ X'Xβ + (X'X)⁻¹ X'e
       = β + (X'X)⁻¹ X'e.    (3.8)

Define the projection matrices

    P = X(X'X)⁻¹X',    M = I_n − P,

where I_n is the n × n identity matrix. P and M are called projection matrices due to the property that for any matrix Z which can be written as Z = XΓ for some matrix Γ (we say that Z lies in the range space of X),

    PZ = PXΓ = X(X'X)⁻¹X'XΓ = XΓ = Z

and

    MZ = (I_n − P)Z = Z − PZ = Z − Z = 0.

As an important example of this property, partition the matrix X into two matrices X_1 and X_2, so that X = [X_1 X_2].
The matrix P is symmetric and idempotent¹. To see this, observe that

    PP = X(X'X)⁻¹X' X(X'X)⁻¹X'
       = X(X'X)⁻¹ (X'X)(X'X)⁻¹ X'
       = X(X'X)⁻¹X'
       = P.

Similarly,

    M' = (I_n − P)' = I_n − P = M

and

    MM = M(I_n − P)
       = M − MP
       = M,

since MP = 0.
Another useful property is that

    tr P = k    (3.9)
    tr M = n − k    (3.10)

(See Appendix A.4 for the definition and properties of the trace operator.) To show (3.9) and (3.10),

    tr P = tr( X(X'X)⁻¹X' )
         = tr( (X'X)⁻¹X'X )
         = tr( I_k )
         = k,

and

    tr M = tr(I_n − P) = tr(I_n) − tr(P) = n − k.

Given the definitions of P and M, observe that

    ŷ = Xβ̂ = X(X'X)⁻¹X'y = Py

and

    ê = y − Xβ̂ = y − Py = My.    (3.11)

Furthermore, since y = Xβ + e and MX = 0,

    ê = M(Xβ + e) = Me.    (3.12)

Another way of writing (3.11) is

    y = (P + M)y = Py + My = ŷ + ê.

¹ A matrix P is symmetric if P' = P. A matrix P is idempotent if PP = P. See Appendix A.8.
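The algebraic properties above are easy to verify numerically. The following sketch (Python/NumPy on simulated data; not from the text) checks idempotency, the trace identities (3.9)-(3.10), and the relations ŷ = Py and ê = My:

    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 50, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
    y = rng.normal(size=n)

    P = X @ np.linalg.inv(X.T @ X) @ X.T
    M = np.eye(n) - P

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    e_hat = y - X @ beta_hat

    print(np.allclose(P @ P, P), np.allclose(M @ M, M))               # P and M are idempotent
    print(np.isclose(np.trace(P), k), np.isclose(np.trace(M), n - k)) # (3.9) and (3.10)
    print(np.allclose(P @ y, X @ beta_hat))                           # y_hat = P y
    print(np.allclose(M @ y, e_hat))                                  # e_hat = M y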
Consider the partitioned regression model

    y = X_1β_1 + X_2β_2 + e.    (3.13)

Observe that the OLS estimator of β = (β_1', β_2')' can be obtained by regression of y on X = [X_1 X_2]. OLS estimation can be written as

    y = X_1β̂_1 + X_2β̂_2 + ê.    (3.14)

Suppose that we are primarily interested in β_2, not in β_1, so we are only interested in obtaining the OLS sub-component β̂_2. In this section we derive an alternative expression for β̂_2 which does not involve estimation of the full model.
Define

    M_1 = I_n − X_1(X_1'X_1)⁻¹X_1'.

Recalling the definition M = I_n − X(X'X)⁻¹X', observe that X_1'M_1 = 0 and thus

    M_1M = M − X_1(X_1'X_1)⁻¹X_1'M = M.
It follows that

    M_1ê = M_1Me = Me = ê.

Using this result, if we premultiply (3.14) by M_1 we obtain

    M_1y = M_1X_1β̂_1 + M_1X_2β̂_2 + M_1ê
         = M_1X_2β̂_2 + ê,    (3.15)

the second equality since M_1X_1 = 0. Premultiplying by X_2' and using X_2'ê = 0,

    X_2'M_1y = X_2'M_1X_2β̂_2 + X_2'ê = X_2'M_1X_2β̂_2.

Solving,

    β̂_2 = (X_2'M_1X_2)⁻¹ (X_2'M_1y),

an alternative expression for β̂_2.
Now, define

    X̃_2 = M_1X_2    (3.16)
    ỹ = M_1y,    (3.17)

the least-squares residuals from the regression of X_2 and y, respectively, on the matrix X_1 only. Since the matrix M_1 is idempotent, M_1 = M_1M_1 and thus

    β̂_2 = (X_2'M_1X_2)⁻¹ (X_2'M_1y)
        = (X_2'M_1M_1X_2)⁻¹ (X_2'M_1M_1y)
        = (X̃_2'X̃_2)⁻¹ (X̃_2'ỹ).

This shows that β̂_2 can be calculated by the OLS regression of ỹ on X̃_2. This technique is called residual regression.
Furthermore, using the definitions (3.16) and (3.17), expression (3.15) can be equivalently written as

    ỹ = X̃_2β̂_2 + ê.

Since β̂_2 is precisely the OLS coefficient from a regression of ỹ on X̃_2, this shows that the residual vector from this regression is ê, numerically the same residual as from the joint regression (3.14). We have proven the following theorem.
Theorem 3.7.1 (Frisch-Waugh-Lovell). In the model (3.13), the OLS estimator of β_2 and the OLS residuals ê may be equivalently computed by either the OLS regression (3.14) or via the following algorithm:
1. Regress y on X_1 and obtain the residuals ỹ;
2. Regress X_2 on X_1 and obtain the residuals X̃_2;
3. Regress ỹ on X̃_2 to obtain the OLS estimate β̂_2 and the residuals ê.
In some contexts, the FWL theorem can be used to speed computation, but in most cases there is little computational advantage to using the two-step algorithm. Rather, the primary use is theoretical.
A common application of the FWL theorem, which you may have seen in an introductory econometrics course, is the demeaning formula for regression. Partition X = [X_1 X_2] where X_1 = ι is a vector of ones, and X_2 is the matrix of observed regressors. In this case,

    M_1 = I − ι(ι'ι)⁻¹ι'.

Observe that

    X̃_2 = M_1X_2 = X_2 − ι(ι'ι)⁻¹ι'X_2 = X_2 − X̄_2

and

    ỹ = M_1y = y − ι(ι'ι)⁻¹ι'y = y − ȳ,

which are "demeaned". The FWL theorem says that β̂_2 is the OLS estimate from a regression of y_i − ȳ on x_2i − x̄_2:

    β̂_2 = ( Σ_{i=1}^n (x_2i − x̄_2)(x_2i − x̄_2)' )⁻¹ ( Σ_{i=1}^n (x_2i − x̄_2)(y_i − ȳ) ).

Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
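A brief numerical check of the demeaning result (a sketch in Python/NumPy with simulated data; not from the text): the slope coefficients from the full regression with an intercept equal the coefficients from regressing demeaned y on demeaned regressors.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    X2 = rng.normal(size=(n, 2))
    y = 1.0 + X2 @ np.array([0.5, -0.3]) + rng.normal(size=n)

    # Full regression including the intercept
    X = np.column_stack([np.ones(n), X2])
    beta_full = np.linalg.solve(X.T @ X, X.T @ y)

    # Regression with demeaned data (no intercept)
    X2_dm = X2 - X2.mean(axis=0)
    y_dm = y - y.mean()
    beta_slopes = np.linalg.solve(X2_dm.T @ X2_dm, X2_dm.T @ y_dm)

    print(np.allclose(beta_full[1:], beta_slopes))   # True: slope estimates coincide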
Using (3.8), the properties of conditional expectations, and (3.18), we can calculate

    E(β̂ − β | X) = E((X'X)⁻¹X'e | X)
                  = (X'X)⁻¹X' E(e | X)
                  = 0,

which implies

    E(β̂ | X) = β,

so the OLS estimator is conditionally unbiased. Its conditional covariance matrix is

    var(β̂ | X) = (X'X)⁻¹ (X'DX) (X'X)⁻¹

where

    D = E(ee' | X).

The i'th diagonal element of D is

    E(e_i² | X) = E(e_i² | x_i) = σ_i².

Thus D is a diagonal matrix with i'th diagonal element σ_i²:

    D = diag(σ_1², ..., σ_n²).    (3.20)

In the special case of the linear homoskedastic regression model, σ_i² = σ² and we have the simplifications D = I_nσ², X'DX = X'Xσ², and

    var(β̂ | X) = (X'X)⁻¹ σ².
We now calculate the finite sample bias of the method of moments estimator σ̂² for σ², under the additional assumption of conditional homoskedasticity E(e_i² | x_i) = σ². From (3.12), the properties of projection matrices and the trace operator, observe that

    σ̂² = (1/n) ê'ê = (1/n) e'MMe = (1/n) e'Me = (1/n) tr(e'Me) = (1/n) tr(Mee').

Then

    E(σ̂² | X) = (1/n) tr( E(Mee' | X) )
               = (1/n) tr( M E(ee' | X) )
               = (1/n) tr( M ) σ²
               = σ² (n − k)/n,

the final equality by (3.10). Thus σ̂² is biased towards zero. As an alternative, the estimator s² defined in (3.7) is unbiased for σ² by this calculation. This is the justification for the common preference of s² over σ̂² in empirical practice. It is important to remember, however, that this estimator is only unbiased in the special case of the homoskedastic linear regression model. It is not unbiased in the absence of homoskedasticity, or in the projection model.
Consider a linear estimator β̃ = A'y, where A is an n × k function of X. Then

    E(β̃ | X) = A'Xβ,
    var(β̃ | X) = A' var(e | X) A = A'A σ²,

so unbiasedness requires A'X = I_k. The "best" linear estimator is obtained by finding the matrix A for which this variance is the smallest in the positive definite sense. The following result, known as the Gauss-Markov theorem, is a famous statement of the solution.
Theorem 3.9.1 Gauss-Markov. In the homoskedastic linear regression model, the best (minimum-variance) unbiased linear estimator is OLS.

The Gauss-Markov theorem is an efficiency justification for the least-squares estimator, but it is quite limited in scope. Not only has the class of models been restricted to homoskedastic linear regressions, the class of potential estimators has been restricted to linear unbiased estimators. This latter restriction is particularly unsatisfactory, as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator.
for j = 1, ..., r, where 1(·) is the indicator function. That is, π̂_j is the percentage of the observations which fall in each category. The MLE β̂_mle for β is then the analog of (3.21) with the parameters π_j replaced by the estimates π̂_j:

    β̂_mle = ( Σ_{j=1}^r π̂_j ξ_j ξ_j' )⁻¹ ( Σ_{j=1}^r π̂_j ξ_j τ_j ),

where (τ_j, ξ_j) denote the support points of the discrete distribution of (y_i, x_i).
Thus

    β̂_mle = ( (1/n) Σ_{i=1}^n x_i x_i' )⁻¹ ( (1/n) Σ_{i=1}^n x_i y_i ) = β̂_ols.

In other words, if the data have a discrete distribution, the maximum likelihood estimator is identical to the OLS estimator. Since this is a regular parametric model the MLE is asymptotically efficient (see Appendix D), and thus so is the OLS estimator.
Chamberlain (1987) extends this argument to the case of continuously-distributed data. He observes that the above argument holds for all multinomial distributions, and any continuous distribution can be arbitrarily well approximated by a multinomial distribution. He proves that generically the OLS estimator (3.4) is an asymptotically efficient estimator for the parameter β defined in (2.10) for the class of models satisfying Assumption 2.6.1.
3.11 Multicollinearity

If rank(X'X) < k, then β̂ is not defined². This is called strict multicollinearity. This happens when the columns of X are linearly dependent, i.e., there is some α ≠ 0 such that Xα = 0. Most commonly, this arises when sets of regressors are included which are identically related. For example, if X includes both the logs of two prices and the log of the relative price: log(p_1), log(p_2) and log(p_1/p_2). When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct (X'X)⁻¹. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.
The more relevant issue is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the X'X matrix is near singular, when the columns of X are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.
One implication of near singularity of matrices is that the numerical reliability of the calculations is reduced. In extreme cases it is possible that the reported calculations will be in error due to floating-point calculation difficulties.
A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors

    y_i = x_{1i}β_1 + x_{2i}β_2 + e_i,

and

    (1/n) X'X = [ 1  ρ ]
                [ ρ  1 ]

In this case

    var(β̂ | X) = (σ²/n) [ 1  ρ ]⁻¹ = (σ²/(n(1 − ρ²))) [ 1   −ρ ]
                        [ ρ  1 ]                       [ −ρ   1 ]

The correlation ρ indexes collinearity, since as ρ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the asymptotic variance of a coefficient estimate, σ²(1 − ρ²)⁻¹, approaches infinity as ρ approaches 1. Thus the more "collinear" are the regressors, the worse the precision of the individual coefficient estimates.
What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of β_1 from that of β_2. As a consequence, the precision of individual estimates is reduced.
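The variance formula above can be tabulated directly; the following sketch (Python/NumPy, illustrative only) prints the scaled variance 1/(1 − ρ²) of each coefficient as ρ increases toward 1:

    import numpy as np

    for rho in [0.0, 0.5, 0.9, 0.99]:
        XtX_over_n = np.array([[1.0, rho],
                               [rho, 1.0]])
        scaled_var = np.linalg.inv(XtX_over_n)   # var(beta_hat | X) = (sigma^2/n) * this matrix
        print(rho, scaled_var[0, 0])             # diagonal element equals 1/(1 - rho^2)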
where X_{(−i)} and y_{(−i)} are the data matrices omitting the i'th row. A convenient alternative expression (derived in Section 3.13) is

    β̂_{(−i)} = β̂ − (1 − h_i)⁻¹ (X'X)⁻¹ x_i ê_i    (3.23)

where

    h_i = x_i'(X'X)⁻¹x_i

is the i'th diagonal element of the projection matrix X(X'X)⁻¹X'.
We can also define the leave-one-out residual

    ê_{i,−i} = y_i − x_i'β̂_{(−i)} = (1 − h_i)⁻¹ ê_i.    (3.24)
Proof of equation (3.23). Equation (A.2) in Appendix A.5 states that for nonsingular A and vector b,

    (A − bb')⁻¹ = A⁻¹ + (1 − b'A⁻¹b)⁻¹ A⁻¹bb'A⁻¹.

This implies

    (X'X − x_ix_i')⁻¹ = (X'X)⁻¹ + (1 − h_i)⁻¹ (X'X)⁻¹x_ix_i'(X'X)⁻¹

and thus

    β̂_{(−i)} = (X'X − x_ix_i')⁻¹ (X'y − x_iy_i)
             = (X'X)⁻¹X'y − (X'X)⁻¹x_iy_i + (1 − h_i)⁻¹ (X'X)⁻¹x_ix_i'(X'X)⁻¹ (X'y − x_iy_i)
             = β̂ − (X'X)⁻¹x_iy_i + (1 − h_i)⁻¹ (X'X)⁻¹x_i (x_i'β̂ − h_iy_i)
             = β̂ − (1 − h_i)⁻¹ (X'X)⁻¹x_i ( (1 − h_i)y_i − x_i'β̂ + h_iy_i )
             = β̂ − (1 − h_i)⁻¹ (X'X)⁻¹x_i ê_i,

the third equality making the substitutions β̂ = (X'X)⁻¹X'y and h_i = x_i'(X'X)⁻¹x_i, and the remainder collecting terms.
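Formulas (3.23) and (3.24) can also be checked numerically. The sketch below (Python/NumPy, simulated data; not from the text) compares the shortcut expression with brute-force re-estimation after deleting observation i:

    import numpy as np

    rng = np.random.default_rng(4)
    n, k = 40, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
    y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    e_hat = y - X @ beta_hat
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)      # leverage values h_i = x_i'(X'X)^{-1}x_i

    i = 7                                            # any observation index
    beta_loo = beta_hat - XtX_inv @ X[i] * e_hat[i] / (1 - h[i])     # formula (3.23)

    X_i, y_i = np.delete(X, i, axis=0), np.delete(y, i)              # drop the i'th row
    beta_direct = np.linalg.solve(X_i.T @ X_i, X_i.T @ y_i)

    print(np.allclose(beta_loo, beta_direct))                        # True
    print(np.isclose(y[i] - X[i] @ beta_loo, e_hat[i] / (1 - h[i]))) # formula (3.24)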
3.14 Exercises

1. Let y be a random variable with μ = Ey and σ² = var(y). Define

    g(y, μ, σ²) = ( y − μ , (y − μ)² − σ² )'.

Let (μ̂, σ̂²) be the values such that ḡ_n(μ̂, σ̂²) = 0 where ḡ_n(m, s) = n⁻¹ Σ_{i=1}^n g(y_i, m, s). Show that μ̂ and σ̂² are the sample mean and variance.
2. Consider the OLS regression of the n × 1 vector y on the n × k matrix X. Consider an alternative set of regressors Z = XC, where C is a k × k non-singular matrix. Thus, each column of Z is a mixture of some of the columns of X. Compare the OLS estimates and residuals from the regression of y on X to the OLS estimates from the regression of y on Z.

3. Let ê be the OLS residual from a regression of y on X = [X_1 X_2]. Find X_2'ê.

4. Let ê be the OLS residual from a regression of y on X. Find the OLS coefficient estimate from a regression of ê on X.

5. Let ŷ = X(X'X)⁻¹X'y. Find the OLS coefficient estimate from a regression of ŷ on X.

6. Prove that R² is the square of the simple correlation between y and ŷ.

7. Explain the difference between (1/n) Σ_{i=1}^n x_i x_i' and E(x_i x_i').

8. Let β̂_n = (X_n'X_n)⁻¹X_n'y_n denote the OLS estimate when y_n is n × 1 and X_n is n × k. A new observation (y_{n+1}, x_{n+1}) becomes available. Prove that the OLS estimate computed using this additional observation is

    β̂_{n+1} = β̂_n + (1/(1 + x_{n+1}'(X_n'X_n)⁻¹x_{n+1})) (X_n'X_n)⁻¹ x_{n+1} (y_{n+1} − x_{n+1}'β̂_n).

9. True or False. If y_i = x_iβ + e_i, x_i ∈ R, E(e_i | x_i) = 0, and ê_i is the OLS residual from the regression of y_i on x_i, then Σ_{i=1}^n x_i²ê_i = 0.
10. A dummy variable takes on only the values 0 and 1. It is used for categorical data, such as an individual's gender. Let d_1 and d_2 be vectors of 1's and 0's, with the i'th element of d_1 equaling 1 and that of d_2 equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are n_1 men and n_2 women in the sample. Consider the three regressions

    y = μ + d_1α_1 + d_2α_2 + e    (3.26)
    y = d_1α_1 + d_2α_2 + e    (3.27)
    y = μ + d_1φ + e    (3.28)

(a) Can all three regressions (3.26), (3.27), and (3.28) be estimated by OLS? Explain if not.
(b) Compare regressions (3.27) and (3.28). Is one more general than the other? Explain the relationship between the parameters in (3.27) and (3.28).
(c) Compute ι'd_1 and ι'd_2, where ι is an n × 1 vector of ones.
(d) Letting α = (α_1 α_2)', write equation (3.27) as y = Xα + e. Consider the assumption E(x_ie_i) = 0. Is there any content to this assumption in this setting?
(b) Describe in words the transformations

    y* = y − d_1ȳ_1 − d_2ȳ_2
    X* = X − d_1X̄_1 − d_2X̄_2.

(c) Compare the OLS estimates from the regression of y* on X* with those from the OLS regression

    y = d_1α̂_1 + d_2α̂_2 + Xβ̂ + ê.
12. The data file cps85.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.S. Census Bureau. The file contains observations on nine variables, listed in the file cps85.pdf.

    V1 = education (in years)
    V2 = region of residence (coded 1 if South, 0 otherwise)
    V3 = (coded 1 if nonwhite and non-Hispanic, 0 otherwise)
    V4 = (coded 1 if Hispanic, 0 otherwise)
    V5 = gender (coded 1 if female, 0 otherwise)
    V6 = marital status (coded 1 if married, 0 otherwise)
    V7 = potential labor market experience (in years)
    V8 = union status (coded 1 if in union job, 0 otherwise)
    V9 = hourly wage (in dollars)

Estimate a regression of wage y_i on education x_{1i}, experience x_{2i}, and experience-squared x_{3i} = x_{2i}² (and a constant). Report the OLS estimates.
Let ê_i be the OLS residual and ŷ_i the predicted value from the regression. Numerically calculate the following:

(a) Σ_{i=1}^n ê_i
(b) Σ_{i=1}^n x_{1i}ê_i
(c) Σ_{i=1}^n x_{2i}ê_i
(d) Σ_{i=1}^n x_{1i}²ê_i
(e) Σ_{i=1}^n x_{2i}²ê_i
(f) Σ_{i=1}^n ŷ_iê_i
(g) Σ_{i=1}^n ê_i²
(h) R²

Are these calculations consistent with the theoretical properties of OLS? Explain.
13. Use the data from the previous problem, and re-estimate the slope on education using the residual regression approach. Regress y_i on (1, x_{2i}, x_{2i}²), regress x_{1i} on (1, x_{2i}, x_{2i}²), and regress the residuals on the residuals. Report the estimate from this regression. Does it equal the value from the first OLS regression? Explain.
In the second-stage residual regression (the regression of the residuals on the residuals), calculate the equation R² and sum of squared errors. Do they equal the values from the initial OLS regression? Explain.
Chapter 4
Inference
To illustrate the possibilities in one example, let y_i and x_i be drawn from the joint density

    f(x, y) = (1/(2πxy)) exp( −(1/2)(log y − log x)² ) exp( −(1/2)(log x)² )

and let β̂_2 be the slope coefficient estimate computed on observations from this joint density. Using simulation methods, the density function of β̂_2 was computed and plotted in Figure 4.1 for sample sizes of n = 25, n = 100 and n = 800. The vertical line marks the true value of the projection coefficient.
From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. To characterize the sampling distribution more fully, we will use the methods of asymptotic approximation. A review of the most important tools in asymptotic theory is contained in Appendix C.
4.2 Consistency

As discussed in Section 4.1, the OLS estimator β̂ has a statistical distribution which is unknown. Asymptotic (large sample) methods approximate sampling distributions based on the limiting experiment that the sample size n tends to infinity. A preliminary step in this approach is the demonstration that estimators converge in probability to the true parameters as the sample size gets large. This is illustrated in Figure 4.1 by the fact that the sampling densities become more concentrated as n gets larger.
This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the weak law of large numbers (WLLN, Theorem C.2.1) shows that sample moments converge in probability to population moments. And third, the continuous mapping theorem (Theorem C.4.1) states that continuous functions preserve convergence in probability. We now explain each step.
First, the OLS estimator

    β̂ = ( (1/n) Σ_{i=1}^n x_i x_i' )⁻¹ ( (1/n) Σ_{i=1}^n x_i y_i )

is a continuous function of the sample moments (1/n) Σ_{i=1}^n x_i x_i' and (1/n) Σ_{i=1}^n x_i y_i.
Second, the WLLN states that for any iid random variable u_i such that E|u_i| < ∞,

    (1/n) Σ_{i=1}^n u_i →p E(u_i)

as n → ∞. We want to apply the WLLN to the elements of the sample moments (1/n) Σ_{i=1}^n x_i x_i' and (1/n) Σ_{i=1}^n x_i y_i. This is okay since these are sample averages of the random variables x_{ji}x_{li} and x_{ji}y_i. The WLLN says that for any j and l,

    (1/n) Σ_{i=1}^n x_{ji}x_{li} →p E(x_{ji}x_{li})

and

    (1/n) Σ_{i=1}^n x_{ji}y_i →p E(x_{ji}y_i).

Since this holds for all elements in the matrix (1/n) Σ_{i=1}^n x_i x_i' and the vector (1/n) Σ_{i=1}^n x_i y_i, it follows that

    (1/n) Σ_{i=1}^n x_i x_i' →p E(x_i x_i') = Q    (4.1)

and

    (1/n) Σ_{i=1}^n x_i y_i →p E(x_i y_i).    (4.2)
The continuous mapping theorem then implies that

    β̂ = ( (1/n) Σ_{i=1}^n x_i x_i' )⁻¹ ( (1/n) Σ_{i=1}^n x_i y_i )
       →p (E(x_i x_i'))⁻¹ (E(x_i y_i))
       = β.    (4.3)

We have shown that β̂ →p β. In words, the OLS estimator converges in probability to the true parameter vector β as the sample size n gets large.
For a slightly different demonstration of this result, recall that (3.8) implies that

    β̂ − β = ( (1/n) Σ_{i=1}^n x_i x_i' )⁻¹ ( (1/n) Σ_{i=1}^n x_i e_i ).    (4.4)

The WLLN and (3.2) imply

    (1/n) Σ_{i=1}^n x_i e_i →p E(x_i e_i) = 0.    (4.5)

Therefore

    β̂ − β = ( (1/n) Σ_{i=1}^n x_i x_i' )⁻¹ ( (1/n) Σ_{i=1}^n x_i e_i ) →p Q⁻¹ 0 = 0,

which is the same as β̂ →p β.

Theorem 4.2.1 Under Assumptions 2.6.1 and 3.1.1, β̂ →p β as n → ∞.
n
! 1 n
!
p 1X 1 X
n ^ = xi x0i p xi ei : (4.6)
n i=1 n i=1
p
This shows that the normalized and centered estimator n ^ is a function of the sample average
1
Pn 0 1
Pn
i=1 xi xi and the normalized sample average i=1 xi ei : Furthermore, the latter has mean zero so the
p
n n
central limit theorem (CLT) applies. Recall, the CLT (Theorem C.3.1) states that if ui 2 Rk is iid, Eui = 0
and Eu2ji < 1 for j = 1; :::; k; then as n ! 1
n
1 X d
p ui ! N (0; E (ui u0i )) :
n i=1
For our application, ui = xi ei ; and E (xi ei ) = 0: We calculate that E (ui u0i ) = E xi x0i e2i and de ne this
matrix by . By the CLT we condlue
n
1 X d
p xi ei ! N (0; ) : (4.7)
n i=1
29
where

    Ω = E(x_i x_i' e_i²).

Then using (4.6), (4.1), and (4.7),

    √n (β̂ − β) →d Q⁻¹ N(0, Ω) = N(0, Q⁻¹ΩQ⁻¹),

where the final equality follows from a property of the normal distribution.
A formal statement of this result requires the following strengthening of the moment conditions.

Assumption 4.3.1 In addition to Assumption 2.6.1, Ee_i⁴ < ∞ and for j = 1, ..., k, Ex_{ji}⁴ < ∞.

Theorem 4.3.1 Under Assumptions 3.1.1 and 4.3.1, as n → ∞,

    √n (β̂ − β) →d N(0, V)

where

    V = Q⁻¹ΩQ⁻¹.

In (4.10) we define V⁰ = Q⁻¹σ² whether (4.8) is true or false. When (4.8) is true then V = V⁰; otherwise V ≠ V⁰. We call V⁰ the homoskedastic covariance matrix.
The asymptotic distribution of Theorem 4.3.1 is commonly used to approximate the finite sample distribution of √n(β̂ − β). The approximation may be poor when n is small. How large should n be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let y_i = β_1 + β_2x_i + e_i where x_i is N(0, 1), and e_i is independent of x_i with the Double Pareto density f(e) = (α/2)|e|^{−α−1}, |e| ≥ 1. If α > 2 the error e_i has zero mean and variance α/(α − 2). As α approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator √(n(α − 2)/α) (β̂_2 − β_2) has the N(0, 1) asymptotic distribution. In Figure 4.2 we display the finite sample densities of the normalized estimator √(n(α − 2)/α) (β̂_2 − β_2), setting n = 100 and varying the parameter α. For α = 3.0 the density is very close to the N(0, 1) density. As α diminishes the density changes significantly, concentrating most of the probability mass around zero.

Figure 4.2: Density of Normalized OLS estimator

Another example is shown in Figure 4.3. Here the model is y_i = β_1 + e_i where

    e_i = (u_i^k − E(u_i^k)) / (E(u_i^{2k}) − (E(u_i^k))²)^{1/2}

and u_i ~ N(0, 1). We show the sampling distribution of √n(β̂_1 − β_1) setting n = 100, for k = 1, 4, 6 and 8. As k increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 4.2 and 4.3 is that the N(0, 1) asymptotic approximation is never guaranteed to be accurate.
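Simulations of this kind are straightforward to reproduce in outline. The sketch below (Python/NumPy; the number of replications and the choice k = 4 are illustrative, not the exact design of the figures) simulates the sampling distribution of √n(β̂_1 − β_1) = √n ē for the second example and compares its tails with the N(0, 1) approximation.

    import numpy as np

    rng = np.random.default_rng(5)
    n, reps, k = 100, 20000, 4
    m1, m2 = 3.0, 105.0                         # E u^4 = 3 and E u^8 = 105 for u ~ N(0,1)

    u = rng.normal(size=(reps, n))
    e = (u**k - m1) / np.sqrt(m2 - m1**2)       # standardized error: mean 0, variance 1
    t = np.sqrt(n) * e.mean(axis=1)             # sqrt(n)*(betahat_1 - beta_1), one draw per row

    print(t.mean(), t.std())                    # roughly 0 and 1, as the CLT predicts
    print(np.mean(t > 1.645), np.mean(t < -1.645))   # tail frequencies vs the nominal 5% each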
31
be the method of moments estimator for Q: The homoskedastic covariance matrix V 0 = Q 1 2
is typically
estimated by
0
^ 1 s2 :
V^ = Q (4.11)
^ p! Q and s2 p! 2 (see (4.1) and Theorem 4.2.1) it is clear that V^ 0 p! V 0 : The estimator ^ 2
Since Q
may also be substituted for s2 in (4.11) without changing this result.
To estimate V = Q 1 Q 1 ; we need an estimate of = E xi x0i e2i : The MME estimator is
X n
^ = 1 xi x0i e^2i (4.12)
n i=1
Definition 4.4.1 A standard error s(β̂) for an estimator β̂ is an estimate of the standard deviation of the distribution of β̂.

When β is scalar, and V̂ is an estimator of the variance of √n(β̂ − β), we set s(β̂) = n^{−1/2} V̂^{1/2}. When β is a vector, we focus on individual elements of β one at a time, viz., β_j for j = 0, 1, ..., k. Thus

    s(β̂_j) = n^{−1/2} (V̂_jj)^{1/2}.

Generically, standard errors are not unique, as there may be more than one estimator of the variance of the estimator. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions, just as any other estimator.
From a computational standpoint, the standard method to calculate the standard errors is to first calculate n⁻¹V̂, then take the diagonal elements, and then the square roots.
To illustrate, we return to the log wage regression of Section 3.2. We calculate that s² = 0.20 and

    Ω̂ = [ 0.199  2.80 ]
         [ 2.80   40.6 ]

and

    V̂ = [ 1       14.14  ]⁻¹ [ 0.199  2.80 ] [ 1       14.14  ]⁻¹ = [ 7.20    −0.493 ]
         [ 14.14   205.83 ]   [ 2.80   40.6 ] [ 14.14   205.83 ]     [ −0.493   0.035 ]
In this case the two estimates are quite similar. The (White) standard error for β̂_0 is √(7.2/988) = 0.085 and that for β̂_1 is √(0.035/988) = 0.006. We can write the estimated equation with standard errors using the format

    log(Wage) = 1.313 + 0.128 Education.
                (0.085)  (0.006)
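The calculation just illustrated follows a simple recipe. Below is a sketch (Python/NumPy with simulated heteroskedastic data, not the CPS sample used above) of the homoskedastic estimator (4.11), the moment estimator (4.12), the White covariance matrix V̂ = Q̂⁻¹Ω̂Q̂⁻¹, and the resulting standard errors n^{−1/2}(V̂_jj)^{1/2}.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 988
    educ = rng.uniform(10, 18, n)
    e = rng.normal(size=n) * (0.2 + 0.05 * educ)          # heteroskedastic errors
    y = 1.3 + 0.13 * educ + e
    X = np.column_stack([np.ones(n), educ])
    k = X.shape[1]

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    e_hat = y - X @ beta_hat

    Q_hat = X.T @ X / n
    Q_inv = np.linalg.inv(Q_hat)
    s2 = e_hat @ e_hat / (n - k)

    V0 = Q_inv * s2                                        # homoskedastic estimator (4.11)
    Omega_hat = (X * e_hat[:, None] ** 2).T @ X / n        # (1/n) sum x_i x_i' e_i^2, as in (4.12)
    V = Q_inv @ Omega_hat @ Q_inv                          # White covariance matrix estimator

    print(np.sqrt(np.diag(V0) / n))                        # homoskedastic standard errors
    print(np.sqrt(np.diag(V) / n))                         # White standard errors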
where

    Ω̂* = (1/n) Σ_{i=1}^n (1 − h_i)⁻² x_i x_i' ê_i².

It is similar to the MacKinnon-White estimator, but omits the mean correction. Andrews (1991) argues that simulation evidence indicates that this estimator is an improvement on V̂.
What is an appropriate standard error for θ̂? Assume that h(β) is differentiable at the true value of β. By a first-order Taylor series approximation,

    h(β̂) ≈ h(β) + H_β'(β̂ − β)

where

    H_β = (∂/∂β) h(β),    a k × q matrix.

Thus

    √n (θ̂ − θ) = √n (h(β̂) − h(β))
               ≈ H_β' √n (β̂ − β)
               →d H_β' N(0, V)
               = N(0, V_θ),    (4.15)

where

    V_θ = H_β' V H_β.

If V̂ is the estimated covariance matrix for β̂, then the natural estimate for the variance of θ̂ is

    V̂_θ = Ĥ_β' V̂ Ĥ_β

where

    Ĥ_β = (∂/∂β) h(β̂).

In many cases, the function h(β) is linear:

    h(β) = R'β

for some k × q matrix R. In this case, H_β = R and Ĥ_β = R, so V̂_θ = R'V̂R.
For example, if R is a "selector matrix"

    R = [ I ]
        [ 0 ]

then

    V̂_θ = [I 0] V̂ [ I ] = V̂_11,
                   [ 0 ]

the upper-left block of V̂.
4.7 t tests

Let θ = h(β) : R^k → R be any parameter of interest, θ̂ its estimate and s(θ̂) its asymptotic standard error. Consider the studentized statistic

    t_n(θ) = (θ̂ − θ) / s(θ̂).    (4.16)

Theorem 4.7.1 t_n(θ) →d N(0, 1).

Thus the asymptotic distribution of the t-ratio t_n(θ) is the standard normal. Since the standard normal distribution does not depend on the parameters, we say that t_n(θ) is asymptotically pivotal. In special cases (such as the normal regression model, see Section 3.4), the statistic t_n has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that t_n is an exactly pivotal statistic. In general, however, pivotal statistics are unavailable and so we must rely on asymptotically pivotal statistics.
A simple null and composite hypothesis takes the form

    H_0: θ = θ_0
    H_1: θ ≠ θ_0
where θ_0 is some pre-specified value, and θ = h(β) is some function of the parameter vector. (For example, θ could be a single element of β).
The standard test for H_0 against H_1 is the t-statistic (or studentized statistic)

    t_n = t_n(θ_0) = (θ̂ − θ_0) / s(θ̂).

Under H_0, t_n →d N(0, 1). Let z_{α/2} be the upper α/2 quantile of the standard normal distribution. That is, if Z ~ N(0, 1), then P(Z > z_{α/2}) = α/2 and P(|Z| > z_{α/2}) = α. For example, z_{.025} = 1.96 and z_{.05} = 1.645. A test of asymptotic significance α rejects H_0 if |t_n| > z_{α/2}. Otherwise the test does not reject, or "accepts" H_0. This is because

    P(reject H_0 | H_0 true) = P(|t_n| > z_{α/2} | θ = θ_0) → P(|Z| > z_{α/2}) = α.

The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing.
An alternative approach, associated with Fisher, is to report an asymptotic p-value. The asymptotic p-value for the above statistic is constructed as follows. Define the tail probability, or asymptotic p-value function

    p(t) = P(|Z| > |t|) = 2(1 − Φ(|t|)).

Then the asymptotic p-value of the statistic t_n is

    p_n = p(t_n).

If the p-value p_n is small (close to zero) then the evidence against H_0 is strong. In a sense, p-values and hypothesis tests are equivalent since p_n < α if and only if |t_n| > z_{α/2}. Thus an equivalent statement of a Neyman-Pearson test is to reject at the α% level if and only if p_n < α. The p-value is more general, however, in that the reader is allowed to pick the level of significance α, in contrast to Neyman-Pearson rejection/acceptance reporting where the researcher picks the level.
Another helpful observation is that the p-value function has simply made a unit-free transformation of the test statistic. That is, under H_0, p_n →d U[0, 1], so the "unusualness" of the test statistic can be compared to the easy-to-understand uniform distribution, regardless of the complication of the distribution of the original test statistic. To see this fact, note that the asymptotic distribution of |t_n| is F(x) = 1 − p(x). Thus

    P(1 − p_n ≤ u) = P(1 − p(t_n) ≤ u)
                   = P(F(t_n) ≤ u)
                   = P(|t_n| ≤ F⁻¹(u))
                   → F(F⁻¹(u)) = u,

establishing that 1 − p_n →d U[0, 1], from which it follows that p_n →d U[0, 1].
While there is no hard-and-fast guideline for choosing the coverage probability 1 − α, the most common professional choice is 95%, or α = .05. This corresponds to selecting the confidence interval [θ̂ ± 1.96 s(θ̂)] ≈ [θ̂ ± 2 s(θ̂)]. Thus values of θ within two standard errors of the estimated θ̂ are considered "reasonable" candidates for the true value θ, and values of θ outside two standard errors of the estimated θ̂ are considered unlikely or unreasonable candidates for the true value.
The interval has been constructed so that as n → ∞,

    P(θ ∈ C_n) = P(|t_n(θ)| ≤ z_{α/2}) → P(|Z| ≤ z_{α/2}) = 1 − α.
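A sketch (Python, using only the standard library's error function; the numbers are illustrative and not from the text) of the t-statistic, its asymptotic p-value, and the corresponding 95% confidence interval:

    import math

    theta_hat, se_theta = 0.128, 0.006        # illustrative estimate and standard error
    theta0 = 0.10                             # hypothesized value under H0

    def Phi(x):
        # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    t_n = (theta_hat - theta0) / se_theta
    p_n = 2.0 * (1.0 - Phi(abs(t_n)))         # p(t) = 2(1 - Phi(|t|))
    reject = abs(t_n) > 1.96                  # compare with z_{.025} = 1.96

    ci = (theta_hat - 1.96 * se_theta, theta_hat + 1.96 * se_theta)   # 95% confidence interval
    print(t_n, p_n, reject, ci)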
    H_0: θ = θ_0
    H_1: θ ≠ θ_0.
    V̂_θ = Ĥ_β' V̂ Ĥ_β

where

    Ĥ_β = (∂/∂β) h(β̂).

The Wald statistic for H_0 against H_1 is

    W_n = n (θ̂ − θ_0)' V̂_θ⁻¹ (θ̂ − θ_0)
        = n (h(β̂) − θ_0)' (Ĥ_β' V̂ Ĥ_β)⁻¹ (h(β̂) − θ_0).    (4.18)

When h is a linear function of β, h(β) = R'β, then the Wald statistic takes the form

    W_n = n (R'β̂ − θ_0)' (R' V̂ R)⁻¹ (R'β̂ − θ_0).

The delta method (4.15) showed that √n(θ̂ − θ) →d Z ~ N(0, V_θ), and Theorem 4.4.1 showed that V̂ →p V. Furthermore, H_β(β) is a continuous function of β, so by the continuous mapping theorem, H_β(β̂) →p H_β. Thus V̂_θ = Ĥ_β' V̂ Ĥ_β →p H_β' V H_β = V_θ > 0 if H_β has full rank q. Hence

    W_n = n (θ̂ − θ_0)' V̂_θ⁻¹ (θ̂ − θ_0) →d Z' V_θ⁻¹ Z = χ²_q.

We have established:

Theorem 4.9.1 Under H_0 and Assumption 4.3.1, if rank(H_β) = q, then W_n →d χ²_q, a chi-square random variable with q degrees of freedom.

An asymptotic Wald test rejects H_0 in favor of H_1 if W_n exceeds χ²_q(α), the upper-α quantile of the χ²_q distribution. For example, χ²_1(.05) = 3.84 = z²_{.025}. The Wald test fails to reject if W_n is less than χ²_q(α). The asymptotic p-value for W_n is p_n = p(W_n), where p(x) = P(χ²_q ≥ x) is the tail probability function of the χ²_q distribution. As before, the test rejects at the α% level iff p_n < α, and p_n is asymptotically U[0, 1] under H_0.
4.10 F Tests

Take the linear model

y = X1 β1 + X2 β2 + e

where X1 is n × k1, X2 is n × k2, and k = k1 + k2. The null hypothesis is

H0 : β2 = 0.

In this case, θ = β2, and there are q = k2 restrictions. Also h(β) = R'β is linear with selector matrix R = (0, I)' so that R'β = β2. We know that the Wald statistic takes the form

Wn = n θ̂'V̂_θ⁻¹θ̂
   = n β̂2'(R'V̂R)⁻¹β̂2.

What we will show in this section is that if V̂ is replaced with V̂⁰ = σ̂²(n⁻¹X'X)⁻¹, the covariance matrix estimator valid under homoskedasticity, then the Wald statistic can be written in the form

Wn = n (σ̃² − σ̂²)/σ̂²   (4.19)

where

σ̃² = (1/n) ẽ'ẽ,   ẽ = y − X1β̃1,   β̃1 = (X1'X1)⁻¹X1'y

are from OLS of y on X1, and

σ̂² = (1/n) ê'ê,   ê = y − Xβ̂,   β̂ = (X'X)⁻¹X'y

are from OLS of y on X = (X1, X2).
The elegant feature of (4.19) is that it is directly computable from the standard output of two simple OLS regressions, as the sum of squared errors is a typical output of statistical packages. This statistic is typically reported as an "F-statistic", which is defined as

F = ((n − k)/k2) · Wn/n = ((σ̃² − σ̂²)/k2) / (σ̂²/(n − k)).

While it should be emphasized that equality (4.19) only holds if V̂⁰ = σ̂²(n⁻¹X'X)⁻¹, this formula still often finds good use in reading applied papers. Because of this connection we call (4.19) the F form of the Wald statistic.
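To make the two-regression computation concrete, here is a minimal sketch (hypothetical inputs; numpy assumed) that returns the homoskedastic Wald statistic (4.19) and the corresponding F-statistic from the restricted and unrestricted residual variances.

```python
import numpy as np

def f_form_wald(y, X1, X2):
    """Homoskedastic Wald statistic (4.19) and F-statistic from two OLS fits."""
    n = len(y)
    X = np.hstack([X1, X2])
    k, k2 = X.shape[1], X2.shape[1]
    # restricted fit: y on X1 only
    e_tilde = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    # unrestricted fit: y on (X1, X2)
    e_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sig2_tilde = e_tilde @ e_tilde / n
    sig2_hat = e_hat @ e_hat / n
    Wn = n * (sig2_tilde - sig2_hat) / sig2_hat              # equation (4.19)
    F = ((sig2_tilde - sig2_hat) / k2) / (sig2_hat / (n - k))
    return Wn, F
```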
We now derive expression (4.19). First, note that by partitioned matrix inversion (A.3)

R'(X'X)⁻¹R = R' [X1'X1  X1'X2; X2'X1  X2'X2]⁻¹ R = (X2'M1X2)⁻¹

where M1 = I − X1(X1'X1)⁻¹X1'. Thus

(R'V̂⁰R)⁻¹ = σ̂⁻² n⁻¹ (R'(X'X)⁻¹R)⁻¹ = σ̂⁻² n⁻¹ X2'M1X2

and

Wn = n β̂2'(R'V̂⁰R)⁻¹β̂2 = β̂2'X2'M1X2β̂2 / σ̂².

To simplify this expression further, note that if we regress y on X1 alone, the residual is ẽ = M1y. Now consider the residual regression of ẽ on X̃2 = M1X2. By the FWL theorem, ẽ = X̃2β̂2 + ê and X̃2'ê = 0. Thus

ẽ'ẽ = (X̃2β̂2 + ê)'(X̃2β̂2 + ê) = β̂2'X̃2'X̃2β̂2 + ê'ê = β̂2'X2'M1X2β̂2 + ê'ê,

or alternatively,

β̂2'X2'M1X2β̂2 = ẽ'ẽ − ê'ê.

Also, since σ̂² = n⁻¹ê'ê, we conclude that

Wn = n (ẽ'ẽ − ê'ê)/(ê'ê) = n (σ̃² − σ̂²)/σ̂²,

as claimed.
In many statistical packages, when an OLS regression is reported, an "F-statistic" is reported. This is

F = ((σ̃²_y − σ̂²)/(k − 1)) / (σ̂²/(n − k))

where

σ̃²_y = (1/n)(y − ȳ)'(y − ȳ)

is the sample variance of yi, equivalently the residual variance from an intercept-only model. This special F statistic tests the hypothesis that all slope coefficients (all coefficients other than the intercept) are zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" in their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is highly "significant". While there are special cases where this F statistic is useful, these cases are atypical.
a chi-square distribution with n − k degrees of freedom. Furthermore, if standard errors are calculated using the homoskedastic formula (4.11),

(β̂j − βj)/s(β̂j) = (β̂j − βj) / √(s² [(X'X)⁻¹]_jj)
  = N(0, σ² [(X'X)⁻¹]_jj) / √((σ²/(n − k)) χ²_{n−k} [(X'X)⁻¹]_jj)
  = N(0, 1) / √(χ²_{n−k}/(n − k))
  ~ t_{n−k},

a t distribution with n − k degrees of freedom.
We summarize these findings.

Theorem 4.11.1 If ei is independent of xi and distributed N(0, σ²), and standard errors are calculated using the homoskedastic formula (4.11), then

β̂ − β ~ N(0, σ²(X'X)⁻¹),
(n − k)s²/σ² ~ χ²_{n−k},
(β̂j − βj)/s(β̂j) ~ t_{n−k}.
In Theorem 4.3.1 and Theorem 4.7.1 we showed that in large samples, β̂ and t are approximately normally distributed. In contrast, Theorem 4.11.1 shows that under the strong assumption of normality, β̂ has an exact normal distribution and t has an exact t distribution. As inference (confidence intervals) is based on the t-ratio, the notable distinction is between the N(0, 1) and t_{n−k} distributions. The critical values are quite close if n − k ≥ 30, so as a practical matter it does not matter which distribution is used (unless the sample size is unreasonably small).
Now let us partition β = (β1, β2) and consider tests of the linear restriction

H0 : β2 = 0
H1 : β2 ≠ 0.

In the context of parametric models, a good testing procedure is based on the likelihood ratio statistic, which is twice the difference in the log-likelihood function evaluated under the null and alternative hypotheses. The estimator under the alternative is the unrestricted estimator (β̂1, β̂2, σ̂²) discussed above. The Gaussian log-likelihood at these estimates is

log L(β̂1, β̂2, σ̂²) = −(n/2) log(2πσ̂²) − (1/(2σ̂²)) ê'ê
                   = −(n/2) log σ̂² − (n/2) log(2π) − n/2.

The MLE of the model under the null hypothesis is (β̃1, 0, σ̃²), where β̃1 is the OLS estimate from a regression of yi on x1i only, with residual variance σ̃². The log-likelihood of this model is

log L(β̃1, 0, σ̃²) = −(n/2) log σ̃² − (n/2) log(2π) − n/2.

The LR statistic for H0 is

LR = 2 (log L(β̂1, β̂2, σ̂²) − log L(β̃1, 0, σ̃²))
   = n (log σ̃² − log σ̂²)
   = n log(σ̃²/σ̂²).

By a first-order Taylor series approximation,

LR = n log(1 + (σ̃²/σ̂² − 1)) ≈ n (σ̃²/σ̂² − 1) = Wn,

the F form of the Wald statistic.
Consider the hypothesis

H0 : β = 1

where β is the mean of yi. Let β̂ and σ̂² be the sample mean and variance of yi. Then the standard Wald test for H0 is

Wn = n (β̂ − 1)²/σ̂².

Now notice that H0 is equivalent to the hypothesis

H0(s) : β^s = 1

for any positive integer s. Letting h(β) = β^s, and noting H = s β^{s−1}, we find that the standard Wald test for H0(s) is

Wn(s) = n (β̂^s − 1)² / (σ̂² s² β̂^{2s−2}).

While the hypothesis β^s = 1 is unaffected by the choice of s, the statistic Wn(s) varies with s. This is an unfortunate feature of the Wald statistic.
To demonstrate this effect, we have plotted in Figure 4.4 the Wald statistic Wn(s) as a function of s, setting n/σ² = 10. The increasing solid line is for the case β̂ = 0.8. The decreasing dashed line is for the case β̂ = 1.7. It is easy to see that in each case there are values of s for which the test statistic is significant relative to asymptotic critical values, while there are other values of s for which the test statistic is insignificant. This is distressing since the choice of s seems arbitrary and irrelevant to the actual hypothesis.
Our first-order asymptotic theory is not useful to help pick s, as Wn(s) →d χ²_1 under H0 for any s.
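The pattern in Figure 4.4 is easy to reproduce. The sketch below (numpy assumed; the values n/σ² = 10 and β̂ = 0.8, 1.7 are taken from the text) evaluates Wn(s) over a range of s.

```python
import numpy as np

def wald_s(beta_hat, sigma2, n, s):
    """Wald statistic Wn(s) for H0(s): beta^s = 1, using h(beta) = beta^s."""
    return n * (beta_hat**s - 1.0)**2 / (sigma2 * s**2 * beta_hat**(2 * s - 2))

# reproduce the pattern behind Figure 4.4 with n / sigma^2 = 10
for b in (0.8, 1.7):
    print(b, [round(wald_s(b, 1.0, 10, s), 2) for s in range(1, 11)])
```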
This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create an artificial dataset to which the statistical tools of interest are applied. This produces random draws from the sampling distribution of interest. Through repetition, features of this distribution can be calculated.
In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84, that is, the probability of a false rejection, P(Wn(s) > 3.84 | β = 1). Given the simplicity of the model, this probability depends only on s, n, and σ². In Table 4.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of s is varied from 1 to 10, n is varied among 20, 100 and 500, and σ is varied among 1 and 3. Table 4.1 reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of s, and thus to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of n and σ. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics Wn(s) which are larger than 3.84. The null hypothesis β^s = 1 is true, so these probabilities are Type I error.
To interpret the table, remember that the ideal Type I error probability is 5% (.05), with deviations indicating distortion. Typically, Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range. For this particular example, the only test which meets this criterion is the conventional Wn = Wn(1) test. Any other choice of s leads to a test with unacceptable Type I error probabilities.
In Table 4.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size n increases. There is, however, no magic choice of n for which all tests perform uniformly well. Test performance deteriorates as s increases, which is not surprising given the dependence of Wn(s) on s as shown in Figure 4.4.
Table 4.1
Type I error Probability of Asymptotic 5% Wn(s) Test

              σ = 1                      σ = 3
  s    n = 20  n = 100  n = 500   n = 20  n = 100  n = 500
  1     .06     .05      .05       .07     .05      .05
  2     .08     .06      .05       .15     .08      .06
  3     .10     .06      .05       .21     .12      .07
  4     .13     .07      .06       .25     .15      .08
  5     .15     .08      .06       .28     .18      .10
  6     .17     .09      .06       .30     .20      .11
  7     .19     .10      .06       .31     .22      .13
  8     .20     .12      .07       .33     .24      .14
  9     .22     .13      .07       .34     .25      .15
 10     .23     .14      .08       .35     .26      .16

Note: Rejection frequencies from 50,000 simulated random samples.
In this example it is not surprising that the choice s = 1 yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.
This point can be illustrated through another example. Take the model

yi = β0 + x1i β1 + x2i β2 + ei   (4.20)
E(xi ei) = 0

and the hypothesis

H0 : β1/β2 = r

where r is a known constant. Equivalently, define θ = β1/β2, so the hypothesis can be stated as H0 : θ = r.
Let β̂ = (β̂0, β̂1, β̂2) be the least-squares estimates of (4.20), let V̂ be an estimate of the asymptotic variance matrix for β̂, and set θ̂ = β̂1/β̂2. Define

Ĥ1 = (0, 1/β̂2, −β̂1/β̂2²)'

so that the standard error for θ̂ is s(θ̂) = (n⁻¹ Ĥ1'V̂Ĥ1)^{1/2}. In this case a t-statistic for H0 is

t1n = (β̂1/β̂2 − r) / s(θ̂).

An alternative statistic can be constructed through reformulating the null hypothesis as

H0 : β1 − rβ2 = 0.

A t-statistic based on this formulation is

t2n = (β̂1 − rβ̂2) / (n⁻¹ H2'V̂H2)^{1/2}

where

H2 = (0, 1, −r)'.
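A minimal sketch of the two constructions (numpy assumed; beta_hat and V_hat stand for the user's point estimates and estimated asymptotic covariance matrix, so they are hypothetical inputs here):

```python
import numpy as np

def two_t_stats(beta_hat, V_hat, n, r):
    """t-statistics for H0: beta1/beta2 = r, formulated two ways."""
    b0, b1, b2 = beta_hat
    # nonlinear formulation theta = b1/b2, delta-method standard error
    H1 = np.array([0.0, 1.0 / b2, -b1 / b2**2])
    t1 = (b1 / b2 - r) / np.sqrt(H1 @ V_hat @ H1 / n)
    # linear formulation b1 - r*b2 = 0
    H2 = np.array([0.0, 1.0, -r])
    t2 = (b1 - r * b2) / np.sqrt(H2 @ V_hat @ H2 / n)
    return t1, t2
```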
To compare t1n and t2n we perform another simple Monte Carlo simulation. We let x1i and x2i be mutually independent N(0, 1) variables, ei be an independent N(0, σ²) draw with σ = 3, and normalize β0 = 0 and β1 = 1. This leaves β2 as a free parameter, along with the sample size n. We vary β2 among .10, .25, .50, .75, and 1.0 and n among 100 and 500.
Table 4.2
Type I error Probability of Asymptotic 5% t-tests

                       n = 100                            n = 500
          P(tn < −1.645)   P(tn > 1.645)     P(tn < −1.645)   P(tn > 1.645)
  β2       t1n    t2n       t1n    t2n         t1n    t2n       t1n    t2n
  .10      .47    .06       .00    .06         .28    .05       .00    .05
  .25      .26    .06       .00    .06         .15    .05       .00    .05
  .50      .15    .06       .00    .06         .10    .05       .00    .05
  .75      .12    .06       .00    .06         .09    .05       .00    .05
 1.00      .10    .06       .00    .06         .07    .05       .02    .05
The one-sided Type I error probabilities P(tn < −1.645) and P(tn > 1.645) are calculated from 50,000 simulated samples. The results are presented in Table 4.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the t1n statistic diverge greatly from this value, especially for small values of β2. The left tail probabilities P(t1n < −1.645) greatly exceed 5%, while the right tail probabilities P(t1n > 1.645) are close to zero in most cases. In contrast, the rejection rates for the linear t2n statistic are invariant to the value of β2, and are close to the ideal 5% rate for both sample sizes. The implication of Table 4.2 is that the two t-ratios have dramatically different sampling behavior.
The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis. In all cases, if the hypothesis can be expressed as a linear restriction on the model parameters, this formulation should be used. If no linear formulation is feasible, then the "most linear" formulation should be selected, and alternatives to asymptotic critical values should be considered. It is also prudent to consider alternative tests to the Wald statistic, such as the GMM distance statistic which will be presented in Section 7.7.
4.13 Monte Carlo Simulation

While the asymptotic distribution of Tn might be known, the exact (finite sample) distribution Gn is generally unknown.
Monte Carlo simulation uses numerical simulation to compute Gn(u, F) for selected choices of F. This is useful to investigate the performance of the statistic Tn in reasonable situations and sample sizes. The basic idea is that for any given F, the distribution function Gn(u, F) can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort, where games of chance are played.
The method of Monte Carlo is quite simple to describe. The researcher chooses F (the distribution of the data) and the sample size n. A "true" value of θ is implied by this choice, or equivalently the value θ is selected directly by the researcher, which implies restrictions on F.
Then the following experiment is conducted:

1. n independent random pairs (yi, xi), i = 1, ..., n, are drawn from the distribution F using the computer's random number generator.
2. The statistic Tn = Tn((y1, x1), ..., (yn, xn), θ) is calculated on this pseudo data.
For step 1, most computer packages have built-in procedures for generating U[0, 1] and N(0, 1) random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)
For step 2, it is important that the statistic be evaluated at the "true" value of θ corresponding to the choice of F.
The above experiment creates one random draw from the distribution Gn(u, F). This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment B times, where B is a large number. Typically, we set B = 1000 or B = 5000. We will discuss this choice later.
Notationally, let the b'th experiment result in the draw Tnb, b = 1, ..., B. These results are stored. They constitute a random sample of size B from the distribution Gn(u, F) = P(Tnb ≤ u) = P(Tn ≤ u | F).
From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:
Suppose we are interested in the bias, mean-squared error (MSE), or variance of the distribution of β̂ − β. We then set Tn = β̂ − β, run the above experiment, and calculate

B̂ias(β̂) = (1/B) Σ_{b=1}^B Tnb = (1/B) Σ_{b=1}^B (β̂_b − β)

M̂SE(β̂) = (1/B) Σ_{b=1}^B (Tnb)²

v̂ar(β̂) = M̂SE(β̂) − (B̂ias(β̂))²
Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set Tn = |β̂ − β|/s(β̂) and calculate

P̂ = (1/B) Σ_{b=1}^B 1(Tnb ≥ 1.96),   (4.21)

the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.
Suppose we are interested in the 5% and 95% quantiles of Tn = β̂. We then compute the 5% and 95% sample quantiles of the sample {Tnb}. The α% sample quantile is a number qα such that α% of the sample are less than qα. A simple way to compute sample quantiles is to sort the sample {Tnb} from low to high. Then qα is the N'th number in this ordered sequence, where N = (B + 1)α. It is therefore convenient to pick B so that N is an integer. For example, if we set B = 999, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.
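Pulling these pieces together, here is a minimal Monte Carlo sketch (numpy assumed; the simple mean model yi = β + ei with normal errors is used purely as an illustration) that estimates the Type I error (4.21) of the asymptotic t-test and the 5% and 95% sample quantiles of the simulated statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_ratio(n, sigma, beta=1.0):
    """One simulated sample from yi = beta + ei and the t-ratio for the true beta."""
    y = beta + sigma * rng.standard_normal(n)
    return (y.mean() - beta) / (y.std(ddof=1) / np.sqrt(n))

B, n, sigma = 999, 100, 3.0
T = np.array([t_ratio(n, sigma) for _ in range(B)])

type1 = np.mean(np.abs(T) >= 1.96)        # estimate of the Type I error, cf. (4.21)
T_sorted = np.sort(T)
q05 = T_sorted[int(0.05 * (B + 1)) - 1]   # 50'th sorted value
q95 = T_sorted[int(0.95 * (B + 1)) - 1]   # 950'th sorted value
print(type1, q05, q95)
```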
The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on n and F. In many cases, an estimator or test may perform wonderfully for some values and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of n and F.
As discussed above, the researcher must select the number of experiments, B. Often this is called the number of replications. Quite simply, a larger B results in more precise estimates of the features of interest of Gn, but requires more computational time. In practice, therefore, the choice of B is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size B, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then B will have to be increased.
In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (4.21). The random variable 1(Tnb ≥ 1.96) is iid Bernoulli, equalling 1 with probability p = E 1(Tnb ≥ 1.96). The average (4.21) is therefore an unbiased estimator of p with standard error s(p̂) = √(p(1 − p)/B). As p is unknown, this may be approximated by replacing p with p̂ or with a hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set s(p̂) = √((.05)(.95)/B) ≈ .22/√B. Hence, standard errors for B = 100, 1000, and 5000 are, respectively, s(p̂) = .022, .007, and .003.
4.14 Estimating a Wage Equation

Table 4.1
OLS Estimates of Linear Equation for Log(Wage)

                       β̂        s(β̂)
Intercept             1.027      .032
Education              .101      .002
Experience             .033      .001
Experience²          −.00057     .00002
Married                .102      .008
Female                −.232      .007
Union Member           .097      .010
Immigrant             −.121      .013
Hispanic              −.102      .014
Non-White             −.070      .010
σ̂                     .4877
Sample Size           18,808
R²                     .34
One question is whether or not the state dummy variables are relevant. Computing the Wald statistic (4.18) that the state coefficients are jointly zero, we find Wn = 550. Alternatively, re-estimating the model with the 50 state dummies excluded, the restricted standard deviation estimate is σ̃ = .4945. The F form of the Wald statistic (4.19) is

Wn = n (1 − σ̂²/σ̃²) = 18,808 × (1 − .4877²/.4945²) = 515.

Notice that the two statistics are close, but not equal. Using either statistic, the hypothesis is easily rejected, as the 1% critical value for the χ²_50 distribution is 76.
Another interesting question which can be addressed from these estimates is the maximal impact of experience on mean wages. Ignoring the other coefficients, we can write this effect as

log(Wage) = β2 Experience + β3 Experience² + ...

Our question is: At which level of experience θ do workers achieve the highest wage? In this quadratic model, if β2 > 0 and β3 < 0, the solution is

θ = −β2 / (2β3).

From Table 4.1 we find the point estimate

θ̂ = −β̂2 / (2β̂3) = 28.69.

Using the Delta Method, we can calculate a standard error of s(θ̂) = .40, implying a 95% confidence interval of [27.9, 29.5].
However, this is a poor choice, as the coverage probability of this confidence interval is one minus the Type I error of the hypothesis test based on the t-test. In the previous section we discovered that such t-tests had very poor Type I error rates. Instead, we found better Type I error rates by reformulating the hypothesis as a linear restriction. These t-statistics take the form

tn(θ) = (β̂2 + 2β̂3θ) / (h_θ'V̂h_θ)^{1/2}

where

h_θ = (1, 2θ)'

and V̂ is the covariance matrix for (β̂2, β̂3).
In the present context we are interested in forming a confidence interval, not testing a hypothesis, so we have to go one step further. Our desired confidence interval will be the set of parameter values θ which are not rejected by the hypothesis test. This is the set of θ such that |tn(θ)| ≤ 1.96. Since tn(θ) is a non-linear function of θ, there is no simple expression for this set, but it can be found numerically quite easily. This set is [27.0, 29.5]. Notice that the upper end of the confidence interval is the same as that from the delta method, but the lower end is substantially lower.
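A minimal sketch of this test-inversion confidence interval (numpy assumed; b2, b3 and V23 stand for the point estimates and their estimated covariance block, which are hypothetical inputs here):

```python
import numpy as np

def inverted_ci(b2, b3, V23, n, grid):
    """Confidence interval for theta = -b2/(2*b3) by inverting the linear t-test
    t_n(theta) = (b2 + 2*theta*b3) / sqrt(h'Vh / n) with h = (1, 2*theta)'."""
    keep = []
    for theta in grid:
        h = np.array([1.0, 2.0 * theta])
        t = (b2 + 2.0 * theta * b3) / np.sqrt(h @ V23 @ h / n)
        if abs(t) <= 1.96:
            keep.append(theta)
    return (min(keep), max(keep)) if keep else None

# e.g. scan grid = np.linspace(25.0, 32.0, 1401) in steps of 0.005
```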
4.15 Technical Proofs

Proof of Theorem 4.2.1. Write

β̂ = ((1/n) Σ_{i=1}^n xi xi')⁻¹ ((1/n) Σ_{i=1}^n xi yi)
   = g((1/n) Σ_{i=1}^n xi xi', (1/n) Σ_{i=1}^n xi yi)

where g(A, b) = A⁻¹b is a function of A and b. This function is a continuous function of A and b at all values of the arguments such that A⁻¹ exists. Assumption 2.6.1.4 implies that Q⁻¹ exists and thus g(A, b) is continuous at A = Q. Hence by the continuous mapping theorem (Theorem C.4.1),

β̂ = g((1/n) Σ_{i=1}^n xi xi', (1/n) Σ_{i=1}^n xi yi)
   →p g(Q, E(xi yi))
   = (E(xi xi'))⁻¹ E(xi yi)
   = β.
Proof of Theorem 4.2.2. Note that

êi = yi − xi'β̂ = ei + xi'β − xi'β̂ = ei − xi'(β̂ − β).

Thus

êi² = ei² − 2ei xi'(β̂ − β) + (β̂ − β)'xi xi'(β̂ − β)   (4.22)

and

σ̂² = (1/n) Σ_{i=1}^n êi²
   = (1/n) Σ_{i=1}^n ei² − 2((1/n) Σ_{i=1}^n ei xi')(β̂ − β) + (β̂ − β)'((1/n) Σ_{i=1}^n xi xi')(β̂ − β)
   →p σ²,

the last line using the WLLN, (4.1), (4.5) and Theorem 4.2.1. Thus σ̂² is consistent for σ².
Finally, since n/(n − k) → 1 as n → ∞, it follows that

s² = (n/(n − k)) σ̂² →p σ².
Proof of Theorem 4.3.1. The discussion preceding the statement of the theorem, together with the proof of Theorem 4.2.1, covered all points of the derivation except for the demonstration that the elements of the vector xi ei have finite second moments, which we now show. For any j = 1, ..., k, by the Cauchy-Schwarz Inequality (C.3) and Assumption 4.3.1,

E|xji ei|² = E(xji² ei²) ≤ (E xji⁴)^{1/2} (E ei⁴)^{1/2} < ∞.   (4.23)
Proof of Theorem 4.4.1. We now show Ω̂ →p Ω, from which it follows that V̂ →p V as n → ∞. Using (4.22),

Ω̂ = (1/n) Σ_{i=1}^n xi xi'êi²
  = (1/n) Σ_{i=1}^n xi xi'ei² − (2/n) Σ_{i=1}^n xi xi'(β̂ − β)'xi ei + (1/n) Σ_{i=1}^n xi xi'((β̂ − β)'xi)².   (4.24)

Since this expectation is finite, we can apply the WLLN (Theorem C.2.1) to find that

(1/n) Σ_{i=1}^n xi xi'ei² →p E(xi xi'ei²) = Ω.

Now take the second term on the right-hand-side of (4.24). Applying the Triangle Inequality (A.9) to the matrix Euclidean norm, the Matrix Schwarz Inequality (A.8), equation (A.6) and the Schwarz Inequality (A.7),

‖(2/n) Σ_{i=1}^n xi xi'(β̂ − β)'xi ei‖ ≤ (2/n) Σ_{i=1}^n ‖xi xi'(β̂ − β)'xi ei‖
  ≤ (2/n) Σ_{i=1}^n ‖xi xi'‖ |(β̂ − β)'xi| |ei|
  ≤ ((2/n) Σ_{i=1}^n ‖xi‖³ |ei|) ‖β̂ − β‖.   (4.25)

By the WLLN,

(1/n) Σ_{i=1}^n ‖xi‖³ |ei| →p E(‖xi‖³ |ei|) < ∞.

Since β̂ − β →p 0, it follows that (4.25) converges in probability to zero. This shows that the second term on the right-hand-side of (4.24) converges in probability to zero.
We now take the third term in (4.24). Again by the Triangle Inequality, the Matrix Schwarz Inequality, (A.6) and the Schwarz Inequality,

‖(1/n) Σ_{i=1}^n xi xi'((β̂ − β)'xi)²‖ ≤ (1/n) Σ_{i=1}^n ‖xi xi'‖ ((β̂ − β)'xi)²
  ≤ ((1/n) Σ_{i=1}^n ‖xi‖⁴) ‖β̂ − β‖²
  →p 0,

the final convergence since β̂ − β →p 0 and (1/n) Σ_{i=1}^n ‖xi‖⁴ →p E‖xi‖⁴ < ∞ under Assumption 4.3.1. This shows that the third term on the right-hand-side of (4.24) converges in probability to zero.
Considering the three terms on the right-hand-side of (4.24), we have shown that the first term converges in probability to Ω, and the second and third converge in probability to zero. We conclude that Ω̂ →p Ω as claimed.
tn(θ) = (θ̂ − θ)/s(θ̂)
      = √n (θ̂ − θ) / √V̂_θ
      →d N(0, V_θ) / √V_θ
      = N(0, 1).
4.16 Exercises

For exercises 1-4, the following definition is used. In the model y = Xβ + e, the least-squares estimate of β subject to the restriction h(β) = 0 is

β̃ = argmin_{h(β)=0} Sn(β)
Sn(β) = (y − Xβ)'(y − Xβ).

That is, β̃ minimizes the sum of squared errors Sn(β) over all β such that the restriction holds.

where

β̂ = (X'X)⁻¹X'y

is the unconstrained OLS estimator.

(b) Verify that R'β̃ = r.
(c) Show that if R'β = r is true, then

β̃ − β = (Ik − (X'X)⁻¹R(R'(X'X)⁻¹R)⁻¹R')(X'X)⁻¹X'e.

(d) Under the standard assumptions plus R'β = r, find the asymptotic distribution of √n(β̃ − β) as n → ∞.
(e) Find an appropriate formula to calculate standard errors for the elements of β̃.
6. The model is

yi = xi'β + ei
E(xi ei) = 0
Ω = E(xi xi'ei²).

7. Take the model yi = x1i'β1 + x2i'β2 + ei with E(xi ei) = 0. Suppose that β1 is estimated by regressing yi on x1i only. Find the probability limit of this estimator. In general, is it consistent for β1? If not, under what conditions is this estimator consistent for β1?

8. Verify that equation (4.13) equals (4.14) as claimed in Section 4.5.

9. Prove that if an additional regressor X_{k+1} is added to X, Theil's adjusted R̄² increases if and only if |t_{k+1}| > 1, where t_{k+1} = β̂_{k+1}/s(β̂_{k+1}) is the t-ratio for β̂_{k+1} and

s(β̂_{k+1}) = (s² [(X'X)⁻¹]_{k+1,k+1})^{1/2}.
where the constant is positive and fixed. Find the probability limit of this estimator as n → ∞. Is it consistent for the parameter of interest?
11. Of the variables (yi*, yi, xi) only the pair (yi, xi) are observed. In this case, we say that yi* is a latent variable. Suppose

yi* = xi'β + ei
E(xi ei) = 0
yi = yi* + ui
E(xi ui) = 0
E(yi* ui) = 0.
12. The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are:
The flow variables are annual sums for 1987. The stock variables are beginning of year.

(a) Estimate a linear regression of Ii on the other variables. Calculate appropriate standard errors.
(b) Calculate asymptotic confidence intervals for the coefficients.
(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by Qi. Thus the coefficient on Qi should be positive and the others should be zero. Test the joint hypothesis that the coefficients on Ci and Di are zero. Test the hypothesis that the coefficient on Qi is zero. Are the results consistent with the predictions of the theory?
(d) Now try a non-linear (quadratic) specification. Regress Ii on Qi, Ci, Di, Qi², Ci², Di², QiCi, QiDi, CiDi. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.
13. In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi.) The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlove was interested in estimating a cost function: TC = f(Q, PL, PF, PK).

(a) Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.
(b) Using a Wald statistic, test the hypothesis H0 : β3 + β4 + β5 = 1.
(c) Estimate (4.26) by least-squares imposing this restriction by substitution. Report your parameter estimates and standard errors.
(d) Estimate (4.26) subject to β3 + β4 + β5 = 1 using the restricted least-squares estimator from problem 4. Do you obtain the same estimates as in part (c)?
Chapter 5
yi = xi'β + ei
E(ei | xi) = 0,

the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator

β̃ = (X'D⁻¹X)⁻¹X'D⁻¹y   (5.1)

where D = diag{σ1², ..., σn²} and σi² = σ²(xi) = E(ei² | xi). The GLS estimator is sometimes called the Aitken estimator. The GLS estimator (5.1) is infeasible since the matrix D is unknown. A feasible GLS (FGLS) estimator replaces the unknown D with an estimate D̂ = diag{σ̂1², ..., σ̂n²}. We now discuss this estimation problem.
Suppose that we model the conditional variance using the parametric form

σi² = α0 + z1i'α1 = α'zi,

where z1i is some q × 1 function of xi. Typically, z1i are squares (and perhaps levels) of some (or all) elements of xi. Often the functional form is kept simple for parsimony.
Let ηi = ei². Then

E(ηi | xi) = α0 + z1i'α1

and we have the regression equation

ηi = α0 + z1i'α1 + ξi   (5.2)
E(ξi | xi) = 0.

This regression error ξi is generally heteroskedastic and has the conditional variance
and

√n (α̂ − α) →d N(0, V_α)

where

V_α = (E(zi zi'))⁻¹ E(zi zi'ξi²) (E(zi zi'))⁻¹.   (5.3)

While ei is not observed, we have the OLS residual êi = yi − xi'β̂ = ei − xi'(β̂ − β). Thus

φi = η̂i − ηi = êi² − ei² = −2ei xi'(β̂ − β) + (β̂ − β)'xi xi'(β̂ − β).

And then

(1/√n) Σ_{i=1}^n zi φi = −(2/n) Σ_{i=1}^n zi ei xi'√n(β̂ − β) + (1/n) Σ_{i=1}^n zi (β̂ − β)'xi xi'(β̂ − β)√n →p 0.

Let

α̃ = (Z'Z)⁻¹Z'η̂   (5.4)

be from OLS regression of η̂i on zi. Then

√n (α̃ − α) = √n (α̂ − α) + (n⁻¹Z'Z)⁻¹ n^{-1/2} Z'φ →d N(0, V_α).   (5.5)

Thus the fact that ηi is replaced with η̂i is asymptotically irrelevant. We may call (5.4) the skedastic regression, as it is estimating the conditional variance of the regression of yi on xi. We have shown that α is consistently estimated by a simple procedure, and hence we can estimate σi² = α'zi by

σ̃i² = α̃'zi.   (5.6)
and

β̃ = (X'D̃⁻¹X)⁻¹X'D̃⁻¹y

where D̃ = diag{σ̃1², ..., σ̃n²}. This is the feasible GLS, or FGLS, estimator of β. Since there is not a unique specification for the conditional variance, the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.
One typical problem with implementation of FGLS estimation is that in a linear regression specification there is no guarantee that σ̃i² > 0 for all i. If σ̃i² < 0 for some i, then the FGLS estimator is not well defined. Furthermore, if σ̃i² ≈ 0 for some i, then the FGLS estimator will force the regression equation through the point (yi, xi), which is typically undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule such as

σ̄i² = max[σ̃i², τ²]

for some small τ² > 0 might make sense.
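A minimal FGLS sketch under these definitions (numpy assumed; the columns of Z and the trimming constant are user choices, shown here with hypothetical values):

```python
import numpy as np

def fgls(y, X, Z, trim=1e-4):
    """Feasible GLS via a linear skedastic regression, cf. (5.2)-(5.6).
    Z holds a constant plus chosen functions of the regressors (e.g. squares)."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e_hat = y - X @ beta_ols
    alpha = np.linalg.lstsq(Z, e_hat**2, rcond=None)[0]   # skedastic regression (5.4)
    sig2 = Z @ alpha                                      # fitted variances (5.6)
    sig2 = np.maximum(sig2, trim)                         # crude trimming rule
    w = 1.0 / sig2
    XtWX = X.T @ (X * w[:, None])
    XtWy = X.T @ (w * y)
    return np.linalg.solve(XtWX, XtWy)                    # FGLS estimate of beta
```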
and thus

√n (β̃_FGLS − β) →d N(0, V_β),

where

V_β = (E(σi⁻² xi xi'))⁻¹.

Examining the asymptotic distribution of Theorem 5.1.1, the natural estimator of the asymptotic variance of β̃ is

Ṽ_β⁰ = ((1/n) Σ_{i=1}^n σ̃i⁻² xi xi')⁻¹ = (n⁻¹ X'D̃⁻¹X)⁻¹,

which is consistent for V_β as n → ∞. This estimator Ṽ_β⁰ is appropriate when the skedastic regression (5.2) is correctly specified.
It may be the case that α'zi is only an approximation to the true conditional variance σi² = E(ei² | xi). In this case we interpret α'zi as a linear projection of ei² on zi. β̃ should perhaps be called a quasi-FGLS estimator of β. Its asymptotic variance is not that given in Theorem 5.1.1. Instead,

V_β = (E((α'zi)⁻¹ xi xi'))⁻¹ E((α'zi)⁻² σi² xi xi') (E((α'zi)⁻¹ xi xi'))⁻¹.

V_β takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless σi² = α'zi, Ṽ_β⁰ is inconsistent for V_β.
An appropriate solution is to use a White-type estimator in place of Ṽ_β⁰. This may be written as

Ṽ_β = ((1/n) Σ_{i=1}^n σ̃i⁻² xi xi')⁻¹ ((1/n) Σ_{i=1}^n σ̃i⁻⁴ êi² xi xi') ((1/n) Σ_{i=1}^n σ̃i⁻² xi xi')⁻¹
    = n (X'D̃⁻¹X)⁻¹ (X'D̃⁻¹D̂D̃⁻¹X) (X'D̃⁻¹X)⁻¹

where D̂ = diag{ê1², ..., ên²}. This is an estimator which is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on specification and estimation of the skedastic regression. Since the form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated conditional variances may contain more noise than information about the true conditional variances. In this case, FGLS will do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, OLS is a more robust estimator of the parameter vector. It is consistent not only in the regression model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection, and not a conditional mean, then the OLS and FGLS estimators will converge in probability to different limits, as they will be estimating two different projections. And the FGLS probability limit will depend on the particular function selected for the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger assumption of a correct conditional mean, and the cost is a reduction of robustness to misspecification.
5.2 Testing for Heteroskedasticity

The hypothesis of homoskedasticity is that E(ei² | xi) = σ², or equivalently that

H0 : α1 = 0

in the regression (5.2). We may therefore test this hypothesis by the estimation (5.4) and constructing a Wald statistic.
This hypothesis does not imply that ξi is independent of xi. Typically, however, we impose the stronger hypothesis and test the hypothesis that ei is independent of xi, in which case ξi is independent of xi and the asymptotic variance (5.3) for α̃ simplifies to

V_α = (E(zi zi'))⁻¹ E(ξi²).   (5.7)

Hence the standard test of H0 is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (5.4). The asymptotic distribution (5.5) and the asymptotic variance (5.7) under independence show that this test has an asymptotic chi-square distribution.
Theorem 5.2.1 Under H0 and ei independent of xi, the Wald test of H0 is asymptotically χ²_q.

Most tests for heteroskedasticity take this basic form. The main differences between popular "tests" are which transformations of xi enter zi. Motivated by the form of the asymptotic variance of the OLS estimator β̂, White (1980) proposed that the test for heteroskedasticity be based on setting zi to equal all non-redundant elements of xi, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of zi, and replaced E(ξi²) with 2σ⁴, which holds when ei is N(0, σ²). If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.
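As an illustration, the sketch below (numpy and scipy assumed; the choice of columns in Z is up to the user) implements a test of this type via the skedastic regression. For simplicity it uses the familiar n·R² form of the exclusion statistic, a standard variant that is asymptotically equivalent to the Wald form under the independence simplification.

```python
import numpy as np
from scipy import stats

def het_test(y, X, Z):
    """Regress squared OLS residuals on (1, Z) and test that the slope
    coefficients are jointly zero, using the n*R^2 chi-square form."""
    n = len(y)
    e2 = (y - X @ np.linalg.lstsq(X, y, rcond=None)[0])**2
    Zc = np.column_stack([np.ones(n), Z])
    fitted = Zc @ np.linalg.lstsq(Zc, e2, rcond=None)[0]
    R2 = 1.0 - np.sum((e2 - fitted)**2) / np.sum((e2 - e2.mean())**2)
    stat = n * R2
    q = Z.shape[1]                      # number of excluded regressors
    return stat, stats.chi2.sf(stat, df=q)
```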
m(x) = E(yi | xi = x) = x'β.

In some cases, we want to estimate m(x) at a particular point x. Notice that this is a (linear) function of β. Letting h(β) = x'β and θ = h(β), we see that m̂(x) = θ̂ = x'β̂ and H_β = x, so s(θ̂) = √(n⁻¹ x'V̂x). Thus an asymptotic 95% confidence interval for m(x) is

[x'β̂ ± 2√(n⁻¹ x'V̂x)].

It is interesting to observe that if this is viewed as a function of x, the width of the confidence set is dependent on x.
For a given value of xi = x, we may want to forecast (guess) yi out-of-sample. A reasonable rule is the conditional mean m(x), as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean m̂(x) = x'β̂. We would also like a measure of uncertainty for the forecast.
The forecast error is êi = yi − m̂(x) = ei − x'(β̂ − β). As the out-of-sample error ei is independent of the in-sample estimate β̂, this has variance

E(êi²) = E(ei² | xi = x) + x'E(β̂ − β)(β̂ − β)'x
       = σ²(x) + n⁻¹ x'V_β x.

Assuming E(ei² | xi) = σ², the natural estimate of this variance is σ̂² + n⁻¹x'V̂x, so a standard error for the forecast is ŝ(x) = √(σ̂² + n⁻¹x'V̂x). Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. σ̃²(x) = α̃'z from (5.6), then the forecast standard error is ŝ(x) = √(σ̃²(x) + n⁻¹x'V̂x).
It would appear natural to conclude that an asymptotic 95% forecast interval for yi is

[x'β̂ ± 2ŝ(x)],

but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio

(ei − x'(β̂ − β)) / ŝ(x).

But no such asymptotic approximation can be made. The only special exception is the case where ei has the exact distribution N(0, σ²), which is generally invalid.
To get an accurate forecast interval, we need to estimate the conditional distribution of ei given xi = x, which is a much more difficult task. Given the difficulty, many applied forecasters focus on the simple approximate interval [x'β̂ ± 2ŝ(x)].
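A small sketch of these calculations (numpy assumed; beta_hat, V_hat and sigma2_hat are whatever estimates the user has in hand, so they are hypothetical inputs here):

```python
import numpy as np

def forecast_interval(x, beta_hat, V_hat, sigma2_hat, n):
    """Point forecast, 95% CI for the conditional mean, and the simple
    approximate forecast interval [x'b +/- 2*s_hat(x)] discussed above."""
    m_hat = x @ beta_hat
    se_mean = np.sqrt(x @ V_hat @ x / n)                 # se for the conditional mean
    se_fcst = np.sqrt(sigma2_hat + x @ V_hat @ x / n)    # se for the forecast
    return (m_hat,
            (m_hat - 2 * se_mean, m_hat + 2 * se_mean),
            (m_hat - 2 * se_fcst, m_hat + 2 * se_fcst))
```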
5.4 NonLinear Least Squares

In some cases we might use a parametric regression function m(x, θ) = E(yi | xi = x) which is a non-linear function of the parameters θ. We describe this setting as non-linear regression. Examples of nonlinear regression functions include

m(x, θ) = θ1 + θ2 x / (1 + θ3 x)
m(x, θ) = θ1 + θ2 x^{θ3}
m(x, θ) = θ1 + θ2 exp(θ3 x)
m(x, θ) = G(x'θ), G known
m(x, θ) = θ1'x1 + θ2'x1 Φ((x2 − θ3)/θ4)
m(x, θ) = θ1 + θ2 x + θ3 (x − θ4) 1(x > θ4)
m(x, θ) = θ1'x1 1(x2 < θ3) + θ2'x1 1(x2 > θ3)

In the first five examples, m(x, θ) is (generically) differentiable in the parameters θ. In the final two examples, m is not differentiable with respect to θ4 and θ3, which alters some of the analysis. When it exists, let

m_θ(x, θ) = (∂/∂θ) m(x, θ).

Nonlinear regression is frequently adopted because the functional form m(x, θ) is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.
The least squares estimator θ̂ minimizes the sum-of-squared-errors

Sn(θ) = Σ_{i=1}^n (yi − m(xi, θ))².

When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are êi = yi − m(xi, θ̂).
One motivation for the choice of NLLS as the estimation method is that the parameter θ is the solution to the population problem min_θ E(yi − m(xi, θ))².
Since the sum-of-squared-errors function Sn(θ) is not quadratic, θ̂ must be found by numerical methods. See Appendix E. When m(x, θ) is differentiable, the FOC for minimization are

0 = Σ_{i=1}^n m_θ(xi, θ̂) êi.   (5.8)

Theorem 5.4.1 If the model is identified and m(x, θ) is differentiable with respect to θ,

√n (θ̂ − θ0) →d N(0, V)

V = (E(m_θi m_θi'))⁻¹ E(m_θi m_θi' ei²) (E(m_θi m_θi'))⁻¹

where m_θi = m_θ(xi, θ0).
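As an illustration of numerical NLLS, here is a minimal sketch (numpy and scipy assumed; the exponential specification and the simulated data are purely hypothetical) that minimizes the sum of squared residuals:

```python
import numpy as np
from scipy.optimize import least_squares

# example specification: m(x, theta) = theta1 + theta2 * exp(theta3 * x)
def resid(theta, y, x):
    return y - (theta[0] + theta[1] * np.exp(theta[2] * x))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 200)
y = 1.0 + 0.5 * np.exp(-1.0 * x) + 0.1 * rng.standard_normal(200)

fit = least_squares(resid, x0=np.array([0.5, 0.5, -0.5]), args=(y, x))
theta_hat = fit.x                 # NLLS estimate: minimizes the SSE numerically
e_hat = resid(theta_hat, y, x)    # NLLS residuals
```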
where xi(γ) is a function of xi and the unknown parameter γ. Examples include xi(γ) = xi^γ, xi(γ) = exp(γ xi), and xi(γ) = xi 1(g(xi) > γ). The model is linear when β2 = 0, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test

H0 : β2 = 0.
θ0 = argmin_θ E|y − θ|   (5.9)

and

E sgn(y − θ0) = 0

where

sgn(u) = 1 if u ≥ 0, and −1 if u < 0

is the sign function.
These facts and definitions motivate three estimators of θ. The first definition is the 50'th empirical quantile. The second is the value which minimizes (1/n) Σ_{i=1}^n |yi − θ|, and the third definition is the solution to the moment equation (1/n) Σ_{i=1}^n sgn(yi − θ). These distinctions are illusory, however, as these estimators are indeed identical.
Now let's consider the conditional median of y given a random vector x. Let m(x) = med(y | x) denote the conditional median of y given x. The linear median regression model takes the form

yi = xi'β + ei
med(ei | xi) = 0.

In this model, the linear function med(yi | xi = x) = x'β is the conditional median function, and the substantive assumption is that the median function is linear in x.
Conditional analogs of the facts about the median are
Let LADn(β) = (1/n) Σ_{i=1}^n |yi − xi'β| be the average of absolute deviations. The least absolute deviations (LAD) estimator of β minimizes this function:

β̂ = argmin_β LADn(β).

The asymptotic variance of the LAD estimator (Theorem 5.5.1) takes the form

V = (1/4) (E(xi xi' f(0 | xi)))⁻¹ (E xi xi') (E(xi xi' f(0 | xi)))⁻¹

where f(e | x) is the conditional density of ei given xi = x.
The variance of the asymptotic distribution inversely depends on f(0 | x), the conditional density of the error at its median. When f(0 | x) is large, then there are many innovations near to the median, and this improves estimation of the median. In the special case where the error is independent of xi, then f(0 | x) = f(0) and the asymptotic variance simplifies to

V = (E xi xi')⁻¹ / (4 f(0)²).   (5.11)

This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.
Computation of standard errors for LAD estimates is typically based on equation (5.11). The main difficulty is the estimation of f(0), the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 14. While a complete proof of Theorem 5.5.1 is advanced, we provide a sketch here for completeness.
The α'th quantile of a random variable with distribution function F(u) is

Q_α = inf{u : F(u) ≥ α}.

When F(u) is continuous and strictly monotonic, then F(Q_α) = α, so you can think of the quantile as the inverse of the distribution function. The quantile Q_α is the value such that α (percent) of the mass of the distribution is less than Q_α. The median is the special case α = .5.
The following alternative representation is useful. If the random variable U has α'th quantile Q_α, then

Q_α = argmin_θ E ρ_α(U − θ),   (5.12)

where ρ_α(q) is the piecewise linear function

ρ_α(q) = −q(1 − α)   if q < 0
       = qα           if q ≥ 0
       = q(α − 1(q < 0)).   (5.13)
As functions of x; the quantile regression functions can take any shape. However for computational
convenience it is typical to assume that they are (approximately) linear in x (after suitable transformations).
This linear speci cation assumes that Q (x) = 0 x where the coe cients vary across the quantiles :
We then have the linear quantile regression model
yi = x0i + ei
where ei is the error de ned to be the di erence between yi and its 'th conditional quantile x0i : By
construction, the 'th conditional quantile of ei is zero, otherwise its properties are unspeci ed without
further restrictions.
Given the representation (5.12), the quantile regression estimator ^ for solves the minimization
problem
^ = argmin Sn ( )
where
n
1X
Sn ( ) = (yi x0i )
n i=1
and (q) is de ned in (5.13).
Since the quanitle regression criterion function Sn ( ) does not have an algebraic solution, numerical
methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional
Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have
been developed for this problem, and are widely available.
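For illustration only, here is a minimal sketch (numpy and scipy assumed) that minimizes the criterion S_n^α(β) directly with a derivative-free optimizer; as the text notes, dedicated linear-programming routines are the practical tool for serious work.

```python
import numpy as np
from scipy.optimize import minimize

def rho(q, alpha):
    """Check function (5.13)."""
    return q * (alpha - (q < 0))

def quantile_reg(y, X, alpha=0.5):
    """Quantile regression by direct minimization of S_n^alpha(beta).
    Nelder-Mead is used purely for illustration on small problems."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting values
    crit = lambda b: np.mean(rho(y - X @ b, alpha))
    return minimize(crit, beta0, method="Nelder-Mead").x
```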
An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 5.5.1.

Theorem 5.6.1 √n (β̂_α − β_α) →d N(0, V_α), where

V_α = α(1 − α) (E(xi xi' f(0 | xi)))⁻¹ (E xi xi') (E(xi xi' f(0 | xi)))⁻¹.
yi = xi'β + ei

which is estimated by OLS, yielding predicted values ŷi = xi'β̂. Now let

zi = (ŷi², ŷi³, ..., ŷi^m)'

and run the auxiliary regression

yi = xi'β̃ + zi'γ̃ + ẽi   (5.14)

by OLS, and form the Wald statistic Wn for γ = 0. It is easy (although somewhat tedious) to show that under the null hypothesis, Wn →d χ²_{m−1}. Thus the null is rejected at the α% level if Wn exceeds the upper α% tail critical value of the χ²_{m−1} distribution.
To implement the test, m must be selected in advance. Typically, small values such as m = 2, 3, or 4 seem to work best.
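A minimal RESET sketch (numpy and scipy assumed; the homoskedastic Wald form is used here for simplicity):

```python
import numpy as np
from scipy import stats

def reset_test(y, X, m=3):
    """RESET: augment the regression with powers 2..m of the fitted values
    and test that their coefficients are jointly zero (chi-square, m-1 df)."""
    n, k = X.shape
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    Z = np.column_stack([yhat**p for p in range(2, m + 1)])
    XZ = np.hstack([X, Z])
    coef = np.linalg.lstsq(XZ, y, rcond=None)[0]
    e = y - XZ @ coef
    s2 = e @ e / n
    # homoskedastic covariance block for the coefficients on Z
    Vg = s2 * np.linalg.inv(XZ.T @ XZ)[k:, k:]
    g = coef[k:]
    W = g @ np.linalg.solve(Vg, g)
    return W, stats.chi2.sf(W, df=m - 1)
```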
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form

yi = G(xi'β) + ei

where G(·) is a smooth "link" function. To see why this is the case, note that (5.14) may be written as

yi = xi'β̃ + (xi'β̂)² γ̃1 + (xi'β̂)³ γ̃2 + ... + (xi'β̂)^m γ̃_{m−1} + ẽi.
yi = x1i'γ1 + ui   (5.16)
E(x1i ui) = 0

Notice that we have written the coefficient on x1i as γ1 rather than β1 and the error as ui rather than ei. This is because the model being estimated is different than (5.15). Goldberger (1991) calls (5.15) the long regression and (5.16) the short regression to emphasize the distinction.
Typically, γ1 ≠ β1, except in special cases. To see this, we calculate

γ1 = (E(x1i x1i'))⁻¹ E(x1i yi)
   = (E(x1i x1i'))⁻¹ E(x1i (x1i'β1 + x2i'β2 + ei))
   = β1 + (E(x1i x1i'))⁻¹ E(x1i x2i') β2
   = β1 + Γ β2

where

Γ = (E(x1i x1i'))⁻¹ E(x1i x2i')

is the coefficient from a regression of x2i on x1i.
Observe that γ1 ≠ β1 unless Γ = 0 or β2 = 0. Thus the short and long regressions have the same coefficient on x1i only under one of two conditions. First, the regression of x2i on x1i yields a set of zero coefficients (they are uncorrelated), or second, the coefficient on x2i in (5.15) is zero. In general, least-squares estimation of (5.16) is an estimate of γ1 = β1 + Γβ2 rather than β1. The difference Γβ2 is known as omitted variable bias. It is the consequence of omission of a relevant correlated variable.
To avoid omitted variables bias the standard advice is to include potentially relevant variables in the estimated model. By construction, the general model will be free of the omitted variables problem. Typically there are limits, as many desired variables are not available in a given dataset. In this case, the possibility of omitted variables bias should be acknowledged and discussed in the course of an empirical investigation.
5.9 Irrelevant Variables

In the model

yi = x1i'β1 + x2i'β2 + ei
E(xi ei) = 0

with β2 = 0, the asymptotic variances of the two least-squares estimators of β1 satisfy

lim_{n→∞} n var(β̃1) = (E x1i x1i')⁻¹ σ² = Q11⁻¹ σ²,

say, and

lim_{n→∞} n var(β̂1) = (E x1i x1i' − E x1i x2i'(E x2i x2i')⁻¹ E x2i x1i')⁻¹ σ² = (Q11 − Q12 Q22⁻¹ Q21)⁻¹ σ²,

say. If Q12 = 0 (so the variables are orthogonal) then these two variance matrices are equal, and the two estimators have equal asymptotic efficiency. Otherwise, since Q12 Q22⁻¹ Q21 > 0, then Q11 > Q11 − Q12 Q22⁻¹ Q21, and consequently

Q11⁻¹ σ² < (Q11 − Q12 Q22⁻¹ Q21)⁻¹ σ².

This means that β̃1 has a lower asymptotic variance matrix than β̂1. We conclude that the inclusion of irrelevant variables reduces estimation efficiency if these variables are correlated with the relevant variables.
For example, take the model yi = β0 + β1 xi + ei and suppose that β0 = 0. Let β̂1 be the estimate of β1 from the unconstrained model, and β̃1 be the estimate under the constraint β0 = 0 (the least-squares estimate with the intercept omitted). Let E xi = μ and E(xi − μ)² = σ_x². Then under (4.8),

lim_{n→∞} n var(β̃1) = σ² / (σ_x² + μ²)

while

lim_{n→∞} n var(β̂1) = σ² / σ_x².
y = X1 β1 + X2 β2 + e,

where X1 is n × k1 and X2 is n × k2. This is equivalent to the comparison of the two models

M1 : y = X1 β1 + e,              E(e | X1, X2) = 0
M2 : y = X1 β1 + X2 β2 + e,      E(e | X1, X2) = 0.

Note that M1 ⊂ M2. To be concrete, we say that M2 is true if β2 ≠ 0.
To fix notation, models 1 and 2 are estimated by OLS, with residual vectors ê1 and ê2, estimated variances σ̂1² and σ̂2², etc., respectively. To simplify some of the statistical discussion, we will on occasion use the homoskedasticity assumption E(ei² | x1i, x2i) = σ².
A model selection procedure is a data-dependent rule which selects one of the two models. We can write this as M̂. There are many possible desirable properties for a model selection procedure. One useful property is consistency, that it selects the true model with probability one if the sample is sufficiently large. A model selection procedure is consistent if

P(M̂ = M1 | M1) → 1
P(M̂ = M2 | M2) → 1.

However, this rule only makes sense when the true model is finite dimensional. If the truth is infinite dimensional, it is more appropriate to view model selection as determining the best finite sample approximation.
A common approach to model selection is to base the decision on a statistical test such as the Wald Wn. The model selection rule is as follows. For some critical level α, let cα satisfy P(χ²_{k2} > cα) = α. Then select M1 if Wn ≤ cα, else select M2.
A major problem with this approach is that the critical level α is indeterminate. The reasoning which helps guide the choice of α in hypothesis testing (controlling Type I error) is not relevant for model selection. That is, if α is set to be a small number, then P(M̂ = M1 | M1) ≈ 1 − α but P(M̂ = M2 | M2) could vary dramatically, depending on the sample size, etc. Another problem is that if α is held fixed, then this model selection procedure is inconsistent, as P(M̂ = M1 | M1) → 1 − α < 1.
Another common approach to model selection is to use a selection criterion. One popular choice is the Akaike Information Criterion (AIC). The AIC under normality for model m is

AIC_m = log σ̂²_m + 2 k_m/n,   (5.17)

where σ̂²_m is the variance estimate for model m, and k_m is the number of coefficients in the model. The AIC can be derived as an estimate of the Kullback-Leibler information distance K(M) = E(log f(y | X) − log f(y | X, M)) between the true density and the model density. The rule is to select M1 if AIC1 < AIC2, else select M2.
AIC selection is inconsistent, as the rule tends to overfit. Indeed, since under M1,

LRn = n (log σ̂1² − log σ̂2²) ≈ Wn →d χ²_{k2},   (5.18)

then

P(M̂ = M1 | M1) = P(AIC1 < AIC2 | M1)
  = P(log(σ̂1²) + 2 k1/n < log(σ̂2²) + 2 (k1 + k2)/n | M1)
  = P(LRn < 2k2 | M1)
  → P(χ²_{k2} < 2k2) < 1.

While many criteria similar to the AIC have been proposed, the most popular is one proposed by Schwarz based on Bayesian arguments. His criterion, known as the BIC, is

BIC_m = log σ̂²_m + log(n) k_m/n.   (5.19)

Since log(n) > 2 (if n > 8), the BIC places a larger penalty than the AIC on the number of estimated parameters and is more parsimonious.
In contrast to the AIC, BIC model selection is consistent. Indeed, since (5.18) holds under M1,

LRn/log(n) →p 0,
so

P(M̂ = M1 | M1) = P(BIC1 < BIC2 | M1)
  = P(LRn < log(n) k2 | M1)
  = P(LRn/log(n) < k2 | M1)
  → P(0 < k2) = 1.

Similarly, under M2,

P(M̂ = M2 | M2) = P(LRn/log(n) > k2 | M2) → 1.
We have discussed model selection between two models. The methods extend readily to the issue of selection among multiple regressors. The general problem is the model

yi = β1 x1i + β2 x2i + ... + βK xKi + ei,   E(xi ei) = 0,

and the question is which subset of the coefficients are non-zero (equivalently, which regressors enter the regression).
There are two leading cases: ordered regressors and unordered.
In the ordered case, the models are

M1 : β1 ≠ 0, β2 = β3 = ... = βK = 0
M2 : β1 ≠ 0, β2 ≠ 0, β3 = ... = βK = 0
 ⋮
MK : β1 ≠ 0, β2 ≠ 0, ..., βK ≠ 0,

which are nested. The AIC selection criterion estimates the K models by OLS, stores the residual variance σ̂² for each model, and then selects the model with the lowest AIC (5.17). Similarly for the BIC, selecting based on (5.19).
In the unordered case, a model consists of any possible subset of the regressors {x1i, ..., xKi}, and the AIC or BIC in principle can be implemented by estimating all possible subset models. However, there are 2^K such models, which can be a very large number. For example, 2^10 = 1024, and 2^20 = 1,048,576. In the latter case, a full-blown implementation of the BIC selection criterion would seem computationally prohibitive.
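For the ordered case, the selection rule is easy to code. A minimal sketch (numpy assumed; X_cols holds the K candidate regressors in their given order):

```python
import numpy as np

def aic_bic_path(y, X_cols):
    """Ordered-regressor selection: fit the model using the first k columns,
    k = 1..K, and compute AIC (5.17) and BIC (5.19) for each."""
    n = len(y)
    out = []
    for k in range(1, X_cols.shape[1] + 1):
        Xk = X_cols[:, :k]
        e = y - Xk @ np.linalg.lstsq(Xk, y, rcond=None)[0]
        s2 = e @ e / n
        aic = np.log(s2) + 2 * k / n
        bic = np.log(s2) + np.log(n) * k / n
        out.append((k, aic, bic))
    return out   # choose the k minimizing the AIC or BIC column
```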
yi⁰ = ei + m_θi'θ0.

Thus

yi − m(xi, θ) ≈ yi⁰ − m_θi'θ.

Hence the sum of squared errors function is

Sn(θ) = Σ_{i=1}^n (yi − m(xi, θ))² ≈ Σ_{i=1}^n (yi⁰ − m_θi'θ)²

and the right-hand-side is the SSE function for a linear regression of yi⁰ on m_θi. Thus the NLLS estimator θ̂ has the same asymptotic distribution as the (infeasible) OLS regression of yi⁰ on m_θi, which is that stated in the theorem.
p
Proof of Theorem 5.5.1: The rst step is to show that ^ ! 0 : We sketch the main argument. For any
p
xed ; by the WLLN, LADn ( ) ! E jyi x0i j : Furthermore, it can be shown that this convergence is
uniform in : It follows that ^ ; the minimizer of LADn ( ); converges in probability to 0 ; the minimizer of
E jyi x0i j.
Pn
Since sgn (a) = 1 2 1 (a 0) ; (5.10) is equivalent to g n ( ^ ) = 0; where g n ( ) = n 1 i=1 g i ( ) and
g i ( ) = xi (1 2 1 (yi x0i )) : Let g( ) = Eg i ( ). We need three preliminary results. First, by the
central limit theorem (Theorem C.3.1)
n
X
p 1=2 d
n (g n ( 0) g( 0 )) = n gi ( 0) ! N (0; Exi x0i )
i=1
0
since Eg i ( 0 )g i ( 0 ) = Exi x0i : Second using the law of iterated expectations and the chain rule of di eren-
tiation,
@ @
g( ) = Exi (1 2 1 (yi x0i ))
@ 0 @ 0
@
= 2 0 E [xi E (1 (ei x0i x0i 0 ) j xi )]
@
" Z 0 #
xi x0i 0
@
= 2 0 E xi f (e j xi ) de
@ 1
so
@
g( 0) = 2E [xi x0i f (0 j xi )] :
@ 0
Third, by a Taylor series expansion and the fact g( 0) =0
@
g( ^ ) ' g( 0)
^ 0 :
@ 0
Together
1
p @ p
n ^ 0 ' g( 0) ng( ^ )
@ 0
1 p
= ( 2E [xi x0i f (0 j xi )]) n g( ^ ) gn ( ^ )
1 1p
' (E [xi x0i f (0 j xi )]) n (g n ( 0 ) g( 0 ))
2
d 1 1
! (E [xi x0i f (0 j xi )]) N (0; Exi x0i )
2
= N (0; V ) :
p
The third line follows from an asymptotic empirical process argument and the fact that ^ ! 0.
5.12 Exercises

1. For any predictor g(xi) for yi, the mean absolute error (MAE) is

E|yi − g(xi)|.

Show that the function g(x) which minimizes the MAE is the conditional median m(x) = med(yi | xi).

2. Define

g(u) = τ − 1(u < 0)

where 1(·) is the indicator function (takes the value 1 if the argument is true, else equals zero). Let θ satisfy E g(yi − θ) = 0. Is θ a quantile of the distribution of yi?

3. Verify equation (5.12).

5. In a linear model

y = Xβ + e,   E(e | X) = 0,   var(e | X) = σ²Ω

with Ω known, the GLS estimator is

β̃ = (X'Ω⁻¹X)⁻¹X'Ω⁻¹y.
6. Let (yi, xi) be a random sample with E(y | X) = Xβ. Consider the Weighted Least Squares (WLS) estimator of β

β̃ = (X'WX)⁻¹X'Wy

where W = diag(w1, ..., wn) and wi = x_{ji}⁻², where xji is one of the xi.

7. Suppose that yi = g(xi, θ) + ei with E(ei | xi) = 0, θ̂ is the NLLS estimator, and V̂ is the estimate of var(θ̂). You are interested in the conditional mean function E(yi | xi = x) = g(x) at some x. Find an asymptotic 95% confidence interval for g(x).

8. The model is

yi = xi β + ei
E(ei | xi) = 0

where xi ∈ R. Consider the two estimators

β̂ = Σ_{i=1}^n xi yi / Σ_{i=1}^n xi²
β̃ = (1/n) Σ_{i=1}^n yi/xi.

(a) Under the stated assumptions, are both estimators consistent for β?
(b) Are there conditions under which either estimator is efficient?
9. In Chapter 4, Exercise 13, you estimated a cost function on a cross-section of electric companies. The equation you estimated was

(a) Following Nerlove, add the variable (log Qi)² to the regression. Do so. Assess the merits of this new specification using (i) a hypothesis test; (ii) AIC criterion; (iii) BIC criterion. Do you agree with this modification?
(b) Now try a non-linear specification. Consider model (5.20) plus the extra term β6 zi, where

zi = log Qi (1 + exp(−(log Qi − β7)))⁻¹.

In addition, impose the restriction β3 + β4 + β5 = 1. This model is called a smooth threshold model. For values of log Qi much below β7, the variable log Qi has a regression slope of β2. For values much above β7, the regression slope is β2 + β6, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter β7.
The model works best when β7 is selected so that several values (in this example, at least 10 to 15) of log Qi are both below and above β7. Examine the data and pick an appropriate range for β7.
(c) Estimate the model by non-linear least squares. I recommend the concentration method: Pick 10 (or more if you like) values of β7 in this range. For each value of β7, calculate zi and estimate the model by OLS. Record the sum of squared errors, and find the value of β7 for which the sum of squared errors is minimized.
(d) Calculate standard errors for all the parameters (β1, ..., β7).
10. The data file cps78.dat contains 550 observations on 20 variables taken from the May 1978 Current Population Survey. Variables are listed in the file cps78.pdf. The goal of the exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the conditioning variables.

(a) Start by an OLS regression of LNWAGE on the other variables. Report coefficient estimates and standard errors.
(b) Consider augmenting the model by squares and/or cross-products of the conditioning variables. Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may re-estimate the model without these variables, if desired.
(d) Test whether the error variance is different for men and women. Interpret.
(e) Test whether the error variance is different for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general heteroskedasticity and report the results.
(g) Using this model for the conditional variance, re-estimate the model from part (c) using FGLS. Report the results.
(h) Do the OLS and FGLS estimates differ greatly? Note any interesting differences.
(i) Compare the estimated standard errors. Note any interesting differences.
Chapter 6
The Bootstrap
6.1 Definition of the Bootstrap

Let Tn = Tn((y1, x1), ..., (yn, xn), F) be a statistic of interest, for example an estimator θ̂ or a t-statistic (θ̂ − θ)/s(θ̂). Note that we write Tn as possibly a function of F. For example, the t-statistic is a function of the parameter θ which itself is a function of F.
The exact CDF of Tn when the data are sampled from the distribution F is

Gn(u, F) = P(Tn ≤ u | F).
6.2 The Empirical Distribution Function

The function

Fn(y, x) = (1/n) Σ_{i=1}^n 1(yi ≤ y) 1(xi ≤ x)

is called the empirical distribution function (EDF). Fn is a nonparametric estimate of F. Note that while F may be either discrete or continuous, Fn is by construction a step function.
The EDF is a consistent estimator of the CDF. To see this, note that for any (y, x), 1(yi ≤ y)1(xi ≤ x) is an iid random variable with expectation F(y, x). Thus by the WLLN (Theorem C.2.1), Fn(y, x) →p F(y, x). Furthermore, by the CLT (Theorem C.3.1),

√n (Fn(y, x) − F(y, x)) →d N(0, F(y, x)(1 − F(y, x))).

To see the effect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for random samples of size n = 25, 50, 100, and 500. The random draws are from the N(0, 1) distribution. For n = 25 the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the larger n. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.
The EDF is a valid discrete probability distribution which puts probability mass 1/n at each pair (yi, xi), i = 1, ..., n. Notationally, it is helpful to think of a random pair (yi*, xi*) with the distribution Fn. That is,

P(yi* ≤ y, xi* ≤ x) = Fn(y, x).
The sample size n used for the simulation is the same as the sample size of the data.
The random vectors (yi*, xi*) are drawn randomly from the empirical distribution. This is equivalent to sampling a pair (yi, xi) randomly from the sample.
The bootstrap statistic Tn* = Tn((y1*, x1*), ..., (yn*, xn*), Fn) is calculated for each bootstrap sample. This is repeated B times. B is known as the number of bootstrap replications. A theory for the determination of the number of bootstrap replications B has been developed by Andrews and Buchinsky (2000). It is desirable for B to be large, so long as the computational costs are reasonable. B = 1000 typically suffices.
When the statistic Tn is a function of F, it is typically through dependence on a parameter. For example, the t-ratio (θ̂ − θ)/s(θ̂) depends on θ. As the bootstrap statistic replaces F with Fn, it similarly replaces θ with θn, the value of θ implied by Fn. Typically θn = θ̂, the parameter estimate. (When in doubt use θ̂.)
Sampling from the EDF is particularly easy. Since Fn is a discrete probability distribution putting probability mass 1/n at each sample point, sampling from the EDF is equivalent to random sampling a pair (yi, xi) from the observed data with replacement. In consequence, a bootstrap sample {(y1*, x1*), ..., (yn*, xn*)} will necessarily have some ties and multiple values, which is generally not a problem.
τn = E(Tn).

If this is calculated by the simulation described in the previous section, the estimate of τn is

τ̂n* = (1/B) Σ_{b=1}^B Tnb* = (1/B) Σ_{b=1}^B (θ̂b* − θ̂) = θ̄* − θ̂,

where θ̄* denotes the average of the bootstrap draws θ̂b*. If θ̂ is biased, it might be desirable to construct a biased-corrected estimator (one with reduced bias). Ideally, this would be

θ̃ = θ̂ − τn,

but τn is unknown. The bootstrap biased-corrected estimator replaces τn with τ̂n*:

θ̃* = θ̂ − τ̂n* = θ̂ − (θ̄* − θ̂) = 2θ̂ − θ̄*.

Note, in particular, that the biased-corrected estimator is not θ̄*. Intuitively, the bootstrap makes the following experiment. Suppose that θ̂ is the truth. Then what is the average value of θ̂ calculated from such samples? The answer is θ̄*. If this is lower than θ̂, this suggests that the estimator is downward-biased, so a biased-corrected estimator of θ should be larger than θ̂, and the best guess is the difference between θ̂ and θ̄*. Similarly, if θ̄* is higher than θ̂, then the estimator is upward-biased and the biased-corrected estimator should be lower than θ̂.
Let $T_n = \hat\theta$. The variance of $\hat\theta$ is
$$V_n = E(T_n - E T_n)^2.$$
Let $T_n^* = \hat\theta^*$. It has variance
$$V_n^* = E(T_n^* - E T_n^*)^2.$$
The simulation estimate is
$$\hat V_n^* = \frac{1}{B}\sum_{b=1}^{B}\left(\hat\theta_b^* - \overline{\hat\theta^*}\right)^2.$$
A bootstrap standard error for $\hat\theta$ is the square root of the bootstrap estimate of variance, $s^*(\hat\theta) = \sqrt{\hat V_n^*}$.
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspect. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.
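Continuing the illustrative sketch from above (an assumption of this example, not the original notes), the bootstrap bias, the bias-corrected estimate, the variance, and the standard error follow directly from the formulas in this section, with `theta_hat` the sample estimate and `theta_star` the vector of bootstrap replications.

```python
import numpy as np

def bootstrap_bias_variance(theta_hat, theta_star):
    """Bootstrap estimates of bias, the bias-corrected estimator,
    the variance, and the standard error of theta_hat."""
    theta_bar = theta_star.mean()
    bias = theta_bar - theta_hat              # tau*_n = mean(theta*) - theta_hat
    corrected = 2.0 * theta_hat - theta_bar   # bias-corrected: 2*theta_hat - mean(theta*)
    variance = np.mean((theta_star - theta_bar) ** 2)
    return bias, corrected, variance, np.sqrt(variance)
```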
$$P\left(\theta_0 \in C_1^0\right) = P\left(\hat\theta + q_n(\alpha/2) \le \theta_0 \le \hat\theta + q_n(1 - \alpha/2)\right)$$
$$= P\left(-q_n(1 - \alpha/2) \le \hat\theta - \theta_0 \le -q_n(\alpha/2)\right)$$
$$= G_n\left(-q_n(\alpha/2), F_0\right) - G_n\left(-q_n(1 - \alpha/2), F_0\right),$$
which generally is not $1 - \alpha$! There is one important exception. If $\hat\theta - \theta_0$ has a symmetric distribution, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
$$P\left(\theta_0 \in C_1^0\right) = G_n\left(q_n(1 - \alpha/2), F_0\right) - G_n\left(q_n(\alpha/2), F_0\right) = \left(1 - \alpha/2\right) - \alpha/2 = 1 - \alpha,$$
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat\theta$ has a symmetric distribution about $\theta_0$.
When $\hat\theta$ does not have a symmetric distribution, $C_1$ may perform quite poorly.
However, by the translation invariance argument presented above, it also follows that if there exists some monotonic transformation $f(\cdot)$ such that $f(\hat\theta)$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.
Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.
The problems with the percentile method can be circumvented by an alternative method.
Let $T_n(\theta) = \hat\theta - \theta$. Then
$$1 - \alpha = P\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1 - \alpha/2)\right) = P\left(\hat\theta - q_n(1 - \alpha/2) \le \theta_0 \le \hat\theta - q_n(\alpha/2)\right),$$
is centered at the estimate $\hat\theta^*$, and the standard error $s(\hat\theta^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat q_n(\alpha)$ and/or $\hat q_n(1 - \alpha)$.
Let $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$. Then taking the intersection of two one-sided intervals,
$$1 - \alpha = P\left(q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1 - \alpha/2)\right) = P\left(\hat\theta - s(\hat\theta)q_n(1 - \alpha/2) \le \theta_0 \le \hat\theta - s(\hat\theta)q_n(\alpha/2)\right),$$
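The following Python sketch (illustrative only; the `estimator` and `std_error` functions are assumptions supplied by the user) computes the equal-tailed percentile-t interval by bootstrapping the studentized statistic and inverting the quantiles as in the display above.

```python
import numpy as np

def percentile_t_interval(y, estimator, std_error, alpha=0.05, B=1000, seed=0):
    """Equal-tailed percentile-t interval based on T*_n = (theta* - theta_hat)/s(theta*)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta_hat, s_hat = estimator(y), std_error(y)
    t_star = np.empty(B)
    for b in range(B):
        yb = y[rng.integers(0, n, size=n)]
        t_star[b] = (estimator(yb) - theta_hat) / std_error(yb)
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # C = [theta_hat - s_hat * q*_n(1-alpha/2),  theta_hat - s_hat * q*_n(alpha/2)]
    return theta_hat - s_hat * q_hi, theta_hat - s_hat * q_lo

# Example with the sample mean and its usual standard error (illustrative assumptions).
y = np.random.default_rng(2).normal(size=100)
ci = percentile_t_interval(y, np.mean, lambda s: s.std(ddof=1) / np.sqrt(len(s)))
```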
6.7 Symmetric Percentile-t Intervals

Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta \ne \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $|T_n(\theta_0)| > c$, where $c$ would be selected so that
$$P\left(|T_n(\theta_0)| > c\right) = \alpha.$$
Note that
$$P\left(|T_n(\theta_0)| < c\right) = P\left(-c < T_n(\theta_0) < c\right) = G_n(c) - G_n(-c) \equiv \overline G_n(c),$$
which is a symmetric distribution function. The ideal critical value $c = q_n(\alpha)$ solves the equation
$$\overline G_n(q_n(\alpha)) = 1 - \alpha.$$
$$P\left(\theta_0 \in C_4\right) = P\left(\hat\theta - s(\hat\theta)q_n^*(\alpha) \le \theta_0 \le \hat\theta + s(\hat\theta)q_n^*(\alpha)\right) = P\left(|T_n(\theta_0)| < q_n^*(\alpha)\right) \simeq P\left(|T_n(\theta_0)| < q_n(\alpha)\right) = 1 - \alpha.$$
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \ge q_n(\alpha)$, where $q_n(\alpha)$ is the $1 - \alpha$ quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \ge q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $1 - \alpha$ quantile of the distribution of
$$W_n^* = n\left(\hat\theta^* - \hat\theta\right)'\hat V^{*-1}\left(\hat\theta^* - \hat\theta\right).$$
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\left(\hat\theta^* - \hat\theta\right)$, not $\left(\hat\theta^* - \theta_0\right)$. [This is a typical mistake made by practitioners.]
or
$$G_n(u, F) = \Phi(u) + o(1). \tag{6.4}$$
While (6.4) says that $G_n$ converges to $\Phi(u)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.
The following notation will be helpful. Let an be a sequence.
Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,
$$G_n(u, F) \approx \Phi(u) + n^{-1/2}g_1(u, F) + n^{-1}g_2(u, F),$$
which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).
Since $F_n$ converges to $F$ at rate $\sqrt n$, and $g_1$ is continuous with respect to $F$, the difference $\left(g_1(u, F_n) - g_1(u, F_0)\right)$ converges to 0 at rate $\sqrt n$. Heuristically,
$$g_1(u, F_n) - g_1(u, F_0) \approx \frac{\partial}{\partial F}g_1(u, F_0)\left(F_n - F_0\right) = O(n^{-1/2}).$$
The "derivative" $\frac{\partial}{\partial F}g_1(u, F)$ is only heuristic, as $F$ is a function. We conclude that
$$G_n^*(u) - G_n(u, F_0) = O(n^{-1}),$$
or
$$P\left(T_n^* \le u\right) = P\left(T_n \le u\right) + O(n^{-1}),$$
which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.
If a random variable $y$ has distribution function $H(u)$, then $|y|$ has distribution function $\overline H(u) = H(u) - H(-u)$, since
$$P\left(|y| \le u\right) = P\left(-u \le y \le u\right) = P\left(y \le u\right) - P\left(y \le -u\right) = H(u) - H(-u).$$
Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function
$$\overline G_n(u, F) = G_n(u, F) - G_n(-u, F).$$
A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \xrightarrow{d} Z$, then $|T_n| \xrightarrow{d} |Z|$. Thus asymptotic critical values are taken from the $\overline\Phi$ distribution, and exact critical values are taken from the $\overline G_n(u, F_0)$ distribution. From Theorem 6.8.1, we can calculate that
$$\overline G_n(u, F) = G_n(u, F) - G_n(-u, F)$$
$$= \left(\Phi(u) + \frac{1}{n^{1/2}}g_1(u, F) + \frac{1}{n}g_2(u, F)\right) - \left(\Phi(-u) + \frac{1}{n^{1/2}}g_1(-u, F) + \frac{1}{n}g_2(-u, F)\right) + O(n^{-3/2})$$
$$= \overline\Phi(u) + \frac{2}{n}g_2(u, F) + O(n^{-3/2}), \tag{6.5}$$
where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is
$$\overline\Phi(u) - \overline G_n(u, F_0) = -\frac{2}{n}g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}).$$
The order of the error is $O(n^{-1})$.

Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.
Applying (6.5) to the bootstrap distribution, we find
$$\overline G_n^*(u) = \overline G_n(u, F_n) = \overline\Phi(u) + \frac{2}{n}g_2(u, F_n) + O(n^{-3/2}).$$
Thus the difference between the bootstrap and exact distributions is
$$\overline G_n^*(u) - \overline G_n(u, F_0) = \frac{2}{n}\left(g_2(u, F_n) - g_2(u, F_0)\right) + O(n^{-3/2}) = O(n^{-3/2}),$$
the last equality because $F_n$ converges to $F_0$ at rate $\sqrt n$, and $g_2$ is continuous in $F$. Another way of writing this is
$$P\left(|T_n^*| < u\right) = P\left(|T_n| < u\right) + O(n^{-3/2}),$$
so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.
The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.
6.12 Bootstrap Methods for Regression Models

The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss some bootstrap methods appropriate for the case of a regression model where
$$y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0.$$
The non-parametric bootstrap distribution resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies
$$y_i^* = x_i^{*\prime}\hat\beta + e_i^*, \qquad E(x_i^* e_i^*) = 0,$$
but generally
$$E(e_i^* \mid x_i^*) \ne 0.$$
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true).
One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.

To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat\beta + e_i^*$. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat e_1, \ldots, \hat e_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat\sigma^2)$.

For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, \ldots, x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really "fixed" or random.
The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that
$$E(e_i^* \mid x_i) = 0, \qquad E\left(e_i^{*2} \mid x_i\right) = \hat e_i^2, \qquad E\left(e_i^{*3} \mid x_i\right) = \hat e_i^3.$$
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
$$P\left(e_i^* = \left(\frac{1 + \sqrt 5}{2}\right)\hat e_i\right) = \frac{\sqrt 5 - 1}{2\sqrt 5}, \qquad P\left(e_i^* = \left(\frac{1 - \sqrt 5}{2}\right)\hat e_i\right) = \frac{\sqrt 5 + 1}{2\sqrt 5}.$$
For each $x_i$, you sample $e_i^*$ using this two-point distribution.
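A minimal sketch of drawing wild-bootstrap errors from this two-point distribution (illustrative Python, not from the original notes). The bootstrap sample would then be formed as $y_i^* = x_i'\hat\beta + e_i^*$ with the regressors held fixed.

```python
import numpy as np

def wild_bootstrap_errors(e_hat, seed=0):
    """Draw e*_i from the two-point distribution above, conditional on the
    OLS residuals e_hat; matches the first three conditional moments."""
    rng = np.random.default_rng(seed)
    s5 = np.sqrt(5.0)
    a, b = (1 + s5) / 2, (1 - s5) / 2        # the two support points (times e_hat_i)
    p_a = (s5 - 1) / (2 * s5)                # P(e* = a * e_hat_i)
    draws = rng.choice([a, b], size=e_hat.shape, p=[p_a, 1 - p_a])
    return draws * e_hat
```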
6.13 Exercises
1. Let $F_n(x)$ denote the EDF of a random sample. Show that
$$\sqrt n\left(F_n(x) - F_0(x)\right) \xrightarrow{d} N\left(0, F_0(x)\left(1 - F_0(x)\right)\right).$$
2. Take a random sample $\{y_1, \ldots, y_n\}$ with $\mu = E y_i$ and $\sigma^2 = \operatorname{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \bar y_n$. Find the population moments $E T_n$ and $\operatorname{var}(T_n)$. Let $\{y_1^*, \ldots, y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \bar y_n^*$ be its sample mean. Find the bootstrap moments $E T_n^*$ and $\operatorname{var}(T_n^*)$.

3. Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat\beta$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat e = y - X\hat\beta$ the OLS residuals.
(a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat e_i) : i = 1, \ldots, n\}$. That is, draw a random integer $i'$ from $[1, 2, \ldots, n]$, and set $x^* = x_{i'}$ and $e^* = \hat e_{i'}$. Set $y^* = x^{*\prime}\hat\beta + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.
(b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat\beta^*$ and any other statistic of interest.
Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.

4. Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate $\hat\theta^*$ on these samples and then calculate
$$T_n^* = (\hat\theta^* - \hat\theta)/s(\hat\theta),$$
where $s(\hat\theta)$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval
$$C = \left[\hat\theta - s(\hat\theta)q_n^*(.95),\ \hat\theta - s(\hat\theta)q_n^*(.05)\right].$$
Show that $C$ exactly equals the alternative percentile interval (not the percentile-t interval).

5. You want to test $H_0: \theta = 0$ against $H_1: \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat\theta/s(\hat\theta) > c$ where $c$ is picked so that the Type I error is $\alpha$. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat\theta^*$ on these samples and then calculate
$$T_n^* = \hat\theta^*/s(\hat\theta^*).$$
Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat\theta/s(\hat\theta) > q_n^*(.95)$. What is wrong with this procedure?

6. Suppose that in an application, $\hat\theta = 1.2$ and $s(\hat\theta) = .2$. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat\theta^*$ is calculated on each sample. The $\hat\theta^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat\theta^*$ are .75 and 1.3, respectively.

7. The data file hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.
Chapter 7

Generalized Method of Moments
The method of moments estimator for $\beta$ is defined as the parameter value which sets $\bar g_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $\bar g_n(\beta)$ "close" to zero.

For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$J_n(\beta) = n\,\bar g_n(\beta)' W_n\,\bar g_n(\beta).$$
This is a non-negative measure of the "length" of the vector $\bar g_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n\,\bar g_n(\beta)'\bar g_n(\beta) = n\,\|\bar g_n(\beta)\|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 1 $\hat\beta_{GMM} = \operatorname{argmin}_{\beta} J_n(\beta)$.

Note that if $k = \ell$, then $\bar g_n(\hat\beta) = 0$, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are
$$0 = \frac{\partial}{\partial\beta}J_n(\hat\beta) = 2\frac{\partial}{\partial\beta}\bar g_n(\hat\beta)' W_n\,\bar g_n(\hat\beta) = -2\left(\frac{1}{n}Z'X\right) W_n\left(\frac{1}{n}X'\left(y - Z\hat\beta\right)\right),$$
so
$$2\left(Z'X\right)W_n\left(X'Z\right)\hat\beta = 2\left(Z'X\right)W_n\left(X'y\right),$$
which establishes the following.

Proposition 7.2.1
$$\hat\beta_{GMM} = \left(Z'X W_n X'Z\right)^{-1} Z'X W_n X'y.$$
While the estimator depends on $W_n$, the dependence is only up to scale, for if $W_n$ is replaced by $cW_n$ for some $c > 0$, $\hat\beta_{GMM}$ does not change.
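A minimal Python sketch of Proposition 7.2.1 (illustrative, not from the original notes), assuming the data are held as NumPy arrays $y$ ($n \times 1$), $Z$ ($n \times k$), $X$ ($n \times \ell$) and that a weight matrix $W_n$ is supplied.

```python
import numpy as np

def gmm_linear(y, Z, X, W):
    """beta_GMM = (Z'X W X'Z)^{-1} Z'X W X'y for a given ell x ell weight matrix W."""
    ZX = Z.T @ X                 # k x ell
    A = ZX @ W @ ZX.T            # Z'X W X'Z
    b = ZX @ W @ (X.T @ y)       # Z'X W X'y
    return np.linalg.solve(A, b)
```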
$$V = \left(Q'WQ\right)^{-1}\left(Q'W\Omega WQ\right)\left(Q'WQ\right)^{-1}.$$
In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.
The optimal weight matrix $W_0$ is one which minimizes $V$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:
$$\hat\beta = \left(Z'X\Omega^{-1}X'Z\right)^{-1}Z'X\Omega^{-1}X'y.$$
Thus we have
Theorem 7.3.2 For the efficient GMM estimator, $\sqrt n\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \left(Q'\Omega^{-1}Q\right)^{-1}\right)$.

$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat\beta$ the efficient GMM estimator, as it has the same asymptotic distribution.

By "efficient", we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices $W_n$. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).

If it is known that $E(g_i(\beta)) = 0$, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than $\left(G'\Omega^{-1}G\right)^{-1}$ where $G = E\frac{\partial}{\partial\beta'}g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.

This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.
Set $g_i^* = \hat g_i - \bar g_n$, and define
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n g_i^* g_i^{*\prime}\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \bar g_n\bar g_n'\right)^{-1}. \tag{7.4}$$
Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.

A common alternative choice is to set
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i'\right)^{-1},$$
which uses the uncentered moment conditions. Since $E g_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $E g_i \ne 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (7.4) above.
Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = (X'X)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residuals $\hat e_i = y_i - z_i'\hat\beta$. Then set $\hat g_i = x_i\hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$\hat\beta = \left(Z'X\left(\hat g'\hat g - n\,\bar g_n\bar g_n'\right)^{-1}X'Z\right)^{-1}Z'X\left(\hat g'\hat g - n\,\bar g_n\bar g_n'\right)^{-1}X'y.$$
In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.

An estimator of the asymptotic variance of $\hat\beta$ can be seen from the above formula. Set
$$\hat V = n\left(Z'X\left(\hat g'\hat g - n\,\bar g_n\bar g_n'\right)^{-1}X'Z\right)^{-1}.$$
Asymptotic standard errors are given by the square roots of the diagonal elements of $\hat V$.
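The recipe above translates directly into code. The following Python sketch is illustrative only: it assumes arrays $y$, $Z$, $X$ as before, and the standard-error scaling ($\hat V/n$) and the inclusion of the $J$ statistic from Section 7.6 are choices of this illustration rather than statements from the text.

```python
import numpy as np

def efficient_gmm(y, Z, X):
    """Two-step efficient GMM for y_i = z_i'beta + e_i with instruments x_i."""
    n = y.shape[0]
    ZX = Z.T @ X

    def gmm(W):
        return np.linalg.solve(ZX @ W @ ZX.T, ZX @ W @ (X.T @ y))

    beta1 = gmm(np.linalg.inv(X.T @ X))            # step 1: W_n = (X'X)^{-1}
    g = X * (y - Z @ beta1)[:, None]               # g_i-hat = x_i * e_i-hat  (n x ell)
    gbar = g.mean(axis=0)
    W = np.linalg.inv(g.T @ g - n * np.outer(gbar, gbar))   # centered weight matrix (7.4)
    beta2 = gmm(W)                                 # step 2: efficient GMM
    V = n * np.linalg.inv(ZX @ W @ ZX.T)           # V-hat
    se = np.sqrt(np.diag(V) / n)                   # scaling assumption of this sketch
    g2 = X * (y - Z @ beta2)[:, None]              # moments at the second-step estimate
    gbar2 = g2.mean(axis=0)
    W2 = np.linalg.inv(g2.T @ g2 - n * np.outer(gbar2, gbar2))
    J = n ** 2 * gbar2 @ W2 @ gbar2                # overidentification statistic
    return beta2, se, J
```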
There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of $\beta$. The criterion function is then
$$J(\beta) = n\,\bar g_n(\beta)'\left(\frac{1}{n}\sum_{i=1}^n g_i^*(\beta)g_i^*(\beta)'\right)^{-1}\bar g_n(\beta),$$
where
$$g_i^*(\beta) = g_i(\beta) - \bar g_n(\beta).$$
The $\hat\beta$ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).

The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.
$$J(\beta) = n\,\bar g_n(\beta)' W_n\,\bar g_n(\beta),$$
where
$$\bar g_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta)$$
and
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \bar g_n\bar g_n'\right)^{-1}.$$
The general theory of GMM estimation and testing was exposited by L. Hansen (1982).
$$E g(y_i, z_i, x_i, \beta) = 0$$
holds. Thus the model, and in particular the overidentifying restrictions, are testable.

For example, take the linear model $y_i = \beta_1'x_{1i} + \beta_2'x_{2i} + e_i$ with $E(x_{1i}e_i) = 0$ and $E(x_{2i}e_i) = 0$. It is possible that $\beta_2 = 0$, so that the linear equation may be written as $y_i = \beta_1'x_{1i} + e_i$. However, it is possible that $\beta_2 \ne 0$, and in this case it would be impossible to find a value of $\beta_1$ so that both $E\left(x_{1i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ and $E\left(x_{2i}\left(y_i - x_{1i}'\beta_1\right)\right) = 0$ hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.
Note that $\bar g_n \xrightarrow{p} E g_i$, and thus $\bar g_n$ can be used to assess whether or not the hypothesis that $E g_i = 0$ is true or not. The criterion function at the parameter estimates is
$$J = n\,\bar g_n' W_n\,\bar g_n = n^2\,\bar g_n'\left(\hat g'\hat g - n\,\bar g_n\bar g_n'\right)^{-1}\bar g_n.$$

Theorem 7.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
$$J = J(\hat\beta) \xrightarrow{d} \chi^2_{\ell-k}.$$

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.

The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic $J$ exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic $J$ whenever GMM is the estimation method.

When over-identified models are estimated by GMM, it is customary to report the $J$ statistic as a general test of model adequacy.
$$J(\beta) = n\,\bar g_n(\beta)' W_n\,\bar g_n(\beta).$$
The two minimizing criterion functions are $J(\hat\beta)$ and $J(\tilde\beta)$. The GMM distance statistic is the difference
$$D = J(\tilde\beta) - J(\hat\beta).$$

Proposition 7.7.1 If the same weight matrix $W_n$ is used for both null and alternative,
1. $D \ge 0$,
2. $D \xrightarrow{d} \chi^2_r$.

If $h$ is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the $D$ statistic appears to have quite good sampling properties, and is the preferred test statistic.
Newey and West (1987) suggested to use the same weight matrix $W_n$ for both null and alternative, as this ensures that $D \ge 0$. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.
This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.
7.8 Conditional Moment Restrictions

In many contexts, the model implies more than an unconditional moment restriction of the form $E g_i(\beta) = 0$. It implies a conditional moment restriction of the form
$$E\left(e_i(\beta) \mid x_i\right) = 0,$$
where $e_i(\beta)$ is some $s \times 1$ function of the observation and the parameters. In many cases, $s = 1$.
It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.

Our linear model $y_i = z_i'\beta + e_i$ with instruments $x_i$ falls into this class under the stronger assumption $E(e_i \mid x_i) = 0$. Then $e_i(\beta) = y_i - z_i'\beta$.
It is also helpful to realize that conventional regression models also fall into this class, except that in this case $z_i = x_i$. For example, in linear regression, $e_i(\beta) = y_i - x_i'\beta$, while in a nonlinear regression model $e_i(\beta) = y_i - g(x_i, \beta)$. In a joint model of the conditional mean and variance
$$e_i(\beta, \gamma) = \begin{pmatrix} y_i - x_i'\beta \\ \left(y_i - x_i'\beta\right)^2 - f(x_i)'\gamma \end{pmatrix}.$$
Here $s = 2$.

Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any $\ell \times 1$ function $\phi(x_i, \beta)$, we can set $g_i(\beta) = \phi(x_i, \beta)e_i(\beta)$ which satisfies $E g_i(\beta) = 0$ and hence defines a GMM estimator. The obvious problem is that the class of functions $\phi$ is infinite. Which should be selected?

This is equivalent to the problem of selection of the best instruments. If $x_i \in \mathbb{R}$ is a valid instrument satisfying $E(e_i \mid x_i) = 0$, then $x_i, x_i^2, x_i^3, \ldots$, etc., are all valid instruments. Which should be used?
One solution is to construct an infinite list of potent instruments, and then use the first $k$ instruments. How is $k$ to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).
Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case $s = 1$. Let
$$R_i = E\left(\frac{\partial}{\partial\beta}e_i(\beta) \mid x_i\right)$$
and
$$\sigma_i^2 = E\left(e_i(\beta)^2 \mid x_i\right).$$
Then the "optimal instrument" is
$$A_i = -\sigma_i^{-2}R_i,$$
so the optimal moment is
$$g_i(\beta) = A_i e_i(\beta).$$
Setting $g_i(\beta)$ to be this choice (which is $k \times 1$, so is just-identified) yields the best GMM estimator possible.
In practice, $A_i$ is unknown, but its form does help us think about construction of optimal instruments.

In the linear model $e_i(\beta) = y_i - z_i'\beta$, note that
$$R_i = -E(z_i \mid x_i)$$
and
$$\sigma_i^2 = E\left(e_i^2 \mid x_i\right),$$
so
$$A_i = \sigma_i^{-2}E(z_i \mid x_i).$$
In the case of linear regression, $z_i = x_i$, so $A_i = \sigma_i^{-2}x_i$. Hence efficient GMM is GLS, as we discussed earlier in the course.

In the case of endogenous variables, note that the efficient instrument $A_i$ involves the estimation of the conditional mean of $z_i$ given $x_i$. In other words, to get the best instrument for $z_i$, we need the best conditional mean model for $z_i$ given $x_i$, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of $e_i$. This is the same as the GLS estimator; namely that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.
7.9 Bootstrap GMM Inference

Let $\hat\beta$ be the 2SLS or GMM estimator of $\beta$. Using the EDF of $(y_i, x_i, z_i)$, we can apply the bootstrap methods discussed in Chapter 6 to compute estimates of the bias and variance of $\hat\beta$, and construct confidence intervals for $\beta$, identically as in the regression model. However, caution should be applied when interpreting such results.

A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied $J$ test will yield the wrong answer.

The problem is that in the sample, $\hat\beta$ is the "true" value and yet $\bar g_n(\hat\beta) \ne 0$. Thus according to random variables $(y_i^*, x_i^*, z_i^*)$ drawn from the EDF $F_n$,
$$E\left(g_i^*(\hat\beta)\right) = \bar g_n(\hat\beta) \ne 0.$$
This means that $(y_i^*, x_i^*, z_i^*)$ do not satisfy the same moment conditions as the population distribution.

A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample $(y^*, X^*, Z^*)$, define the bootstrap GMM criterion
$$J^*(\beta) = n\left(\bar g_n^*(\beta) - \bar g_n(\hat\beta)\right)'W_n^*\left(\bar g_n^*(\beta) - \bar g_n(\hat\beta)\right),$$
where $\bar g_n(\hat\beta)$ is from the in-sample data, not from the bootstrap data.
Let $\hat\beta^*$ minimize $J^*(\beta)$, and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is
$$\hat\beta^* = \left(Z^{*\prime}X^* W_n^* X^{*\prime}Z^*\right)^{-1}\left(Z^{*\prime}X^* W_n^*\right)\left(X^{*\prime}y^* - X'\hat e\right).$$
7.10 Exercises

1. Take the model
$$y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0,$$
$$e_i^2 = z_i'\gamma + \eta_i, \qquad E(z_i\eta_i) = 0.$$
(a) Show that you can rewrite $J_n(\beta)$ in (7.5) as
$$J_n(\beta) = \left(\hat\beta - \beta\right)'\hat V_n^{-1}\left(\hat\beta - \beta\right),$$
where
$$\hat V_n = \left(X'X\right)^{-1}\left(\sum_{i=1}^n x_i x_i'\hat e_i^2\right)\left(X'X\right)^{-1}.$$
(c) Show that if $R'\beta = r$ then $\sqrt n\left(\tilde\beta - \beta\right) \xrightarrow{d} N(0, V_R)$ where
$$V_R = V - VR\left(R'VR\right)^{-1}R'V.$$
(d) Show that in this setting, the distance statistic $D$ in (7.6) equals the Wald statistic.
$$y_i = z_i'\beta + e_i, \qquad E(x_i e_i) = 0.$$
(b) $J_n = n\left(C'\bar g_n(\hat\beta)\right)'\left(C'\hat\Omega C\right)^{-1}\left(C'\bar g_n(\hat\beta)\right)$
(c) $C'\bar g_n(\hat\beta) = D_n C'\bar g_n(\beta_0)$ where
$$D_n = I_\ell - C'\left(\tfrac{1}{n}X'Z\right)\left(\left(\tfrac{1}{n}Z'X\right)\hat\Omega^{-1}\left(\tfrac{1}{n}X'Z\right)\right)^{-1}\left(\tfrac{1}{n}Z'X\right)\hat\Omega^{-1}\left(C'\right)^{-1}$$
and
$$\bar g_n(\beta_0) = \frac{1}{n}X'e.$$
(d) $D_n \xrightarrow{p} I_\ell - R\left(R'R\right)^{-1}R'$ where $R = C'E(x_i z_i')$
(e) $n^{1/2}C'\bar g_n(\beta_0) \xrightarrow{d} Z \sim N(0, I_\ell)$
(f) $J_n \xrightarrow{d} Z'\left(I_\ell - R\left(R'R\right)^{-1}R'\right)Z$
(g) $Z'\left(I_\ell - R\left(R'R\right)^{-1}R'\right)Z \sim \chi^2_{\ell-k}$.
Hint: $I_\ell - R\left(R'R\right)^{-1}R'$ is a projection matrix.
Chapter 8
Empirical Likelihood
Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is
$$\log L(p_1, \ldots, p_n) = \sum_{i=1}^n \log(p_i). \tag{8.2}$$
First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities $(p_1, \ldots, p_n)$ are those which maximize the log-likelihood subject to the constraint (8.1). This is equivalent to maximizing
$$\sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right),$$
where $\mu$ is a Lagrange multiplier. The $n$ first order conditions are $0 = p_i^{-1} - \mu$. Combined with the constraint (8.1) we find that the MLE is $p_i = n^{-1}$, yielding the log-likelihood $-n\log(n)$.
Now consider the case of an overidentified model with moment condition
$$E g_i(\beta_0) = 0,$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$ and for simplicity we write $g_i(\beta) = g(y_i, z_i, x_i, \beta)$. The multinomial distribution which places probability $p_i$ at each observation $(y_i, x_i, z_i)$ will satisfy this condition if and only if
$$\sum_{i=1}^n p_i g_i(\beta) = 0. \tag{8.3}$$
The empirical likelihood estimator is the value of $\beta$ which maximizes the multinomial log-likelihood (8.2) subject to the restrictions (8.1) and (8.3).

The Lagrangian for this maximization problem is
$$\mathcal{L}(\beta, p_1, \ldots, p_n, \lambda, \mu) = \sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right) - n\lambda'\sum_{i=1}^n p_i g_i(\beta),$$
where $\lambda$ and $\mu$ are Lagrange multipliers. The first-order-conditions of $\mathcal{L}$ with respect to $p_i$, $\mu$, and $\lambda$ are
$$\frac{1}{p_i} = \mu + n\lambda' g_i(\beta),$$
$$\sum_{i=1}^n p_i = 1,$$
$$\sum_{i=1}^n p_i g_i(\beta) = 0.$$
Multiplying the first equation by $p_i$, summing over $i$, and using the second and third equations, we find $\mu = n$ and
$$p_i = \frac{1}{n\left(1 + \lambda' g_i(\beta)\right)}.$$
Substituting into $\mathcal{L}$ we find
$$R(\beta, \lambda) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda' g_i(\beta)\right). \tag{8.4}$$
This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since $R(\beta, \lambda)$ is a convex function of $\lambda$. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 8.5). This yields the (profile) empirical log-likelihood function for $\beta$,
$$R(\beta) = R(\beta, \lambda(\beta)) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda(\beta)' g_i(\beta)\right).$$
The EL estimate $\hat\beta$ is the value which maximizes $R(\beta)$, or equivalently minimizes its negative,
$$\hat\beta = \operatorname{argmin}_{\beta}\left[-R(\beta)\right]. \tag{8.6}$$
and
$$V = \left(G'\Omega^{-1}G\right)^{-1} \tag{8.9}$$
$$V_\lambda = \Omega - G\left(G'\Omega^{-1}G\right)^{-1}G'. \tag{8.10}$$
For example, in the linear model, $G_i(\beta) = -x_i z_i'$, $G = -E(x_i z_i')$, and $\Omega = E\left(x_i x_i' e_i^2\right)$.
Theorem 8.2.1 Under regularity conditions,
$$\sqrt n\left(\hat\beta - \beta_0\right) \xrightarrow{d} N(0, V),$$
$$\sqrt n\,\hat\lambda \xrightarrow{d} \Omega^{-1}N(0, V_\lambda),$$
where $V$ and $V_\lambda$ are defined in (8.9) and (8.10), and $\sqrt n\left(\hat\beta - \beta_0\right)$ and $\sqrt n\,\hat\lambda$ are asymptotically independent.

Theorem 8.3.1 If $E g_i(\beta_0) = 0$ then $LR_n \xrightarrow{d} \chi^2_{\ell-k}$.
8.4 Testing

Let the maintained model be
$$E g_i(\beta) = 0, \tag{8.12}$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$. By "maintained" we mean that the overidentifying restrictions contained in (8.12) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is
$$h(\beta) = 0,$$
where $h: \mathbb{R}^k \to \mathbb{R}^a$. The restricted EL estimator and likelihood are the values which solve
$$\tilde\beta = \operatorname*{argmax}_{h(\beta)=0} R(\beta), \qquad R(\tilde\beta) = \max_{h(\beta)=0} R(\beta).$$
$$LR_n = 2\left(R(\hat\beta) - R(\tilde\beta)\right).$$
8.5 Numerical Computation

Gauss code which implements the methods discussed below can be found at
http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives

The numerical calculations depend on derivatives of the dual likelihood function (8.4). Define
$$g_i^*(\beta, \lambda) = \frac{g_i(\beta)}{1 + \lambda' g_i(\beta)},$$
$$G_i^*(\beta, \lambda) = \frac{G_i(\beta)'\lambda}{1 + \lambda' g_i(\beta)}.$$

Inner Loop

The so-called "inner loop" solves (8.5) for given $\beta$. The modified Newton method takes a quadratic approximation to $R_n(\beta, \lambda)$ yielding the iteration rule
$$\lambda_{j+1} = \lambda_j - \delta\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1}R_\lambda(\beta, \lambda_j), \tag{8.13}$$
where $\delta > 0$ is a scalar steplength (to be discussed next). The starting value $\lambda_1$ can be set to the zero vector. The iteration (8.13) is continued until the gradient $R_\lambda(\beta, \lambda_j)$ is smaller than some prespecified tolerance.

Efficient convergence requires a good choice of steplength $\delta$. One method uses the following quadratic approximation. Set $\delta_0 = 0$, $\delta_1 = \frac{1}{2}$ and $\delta_2 = 1$. For $p = 0, 1, 2$, set
$$\lambda_p = \lambda_j - \delta_p\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1}R_\lambda(\beta, \lambda_j),$$
$$R_p = R(\beta, \lambda_p).$$
A quadratic function can be fit exactly through these three points. The value of $\delta$ which minimizes this quadratic is
$$\hat\delta = \frac{R_2 + 3R_0 - 4R_1}{4R_2 + 4R_0 - 8R_1},$$
yielding the steplength to be plugged into (8.13).

A complication is that $\lambda$ must be constrained so that $0 \le p_i \le 1$, which holds if
$$n\left(1 + \lambda' g_i(\beta)\right) \ge 1 \tag{8.14}$$
for all $i$.
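The inner loop can be sketched compactly. The following Python illustration is not the original Gauss code: it uses a simple backtracking rule rather than the quadratic steplength approximation above, and assumes the moments are supplied as an $n \times \ell$ matrix.

```python
import numpy as np

def el_inner_loop(g, max_iter=100, tol=1e-10):
    """Solve for lambda minimizing R(beta, lambda) = -n*log(n) - sum_i log(1 + lambda'g_i)
    for fixed beta, where g is the n x ell matrix with rows g_i(beta).
    Damped Newton iteration; the step is halved until the constraint (8.14) holds."""
    n, ell = g.shape
    lam = np.zeros(ell)
    for _ in range(max_iter):
        d = 1.0 + g @ lam                                  # 1 + lambda'g_i
        grad = -(g / d[:, None]).sum(axis=0)               # R_lambda
        if np.max(np.abs(grad)) < tol:
            break
        hess = (g / d[:, None]).T @ (g / d[:, None])       # R_lambda_lambda
        step = np.linalg.solve(hess, grad)
        delta = 1.0
        for _ in range(60):                                # backtrack so n(1 + lambda'g_i) >= 1
            if np.all(1.0 + g @ (lam - delta * step) > 1.0 / n):
                break
            delta /= 2.0
        lam = lam - delta * step
    return lam
```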
The outer loop is the minimization (8.6). This can be done by the modified Newton method described in the previous section. The gradient for (8.6) is
$$R_\beta = \frac{\partial}{\partial\beta}R(\beta) = \frac{\partial}{\partial\beta}R(\beta, \lambda) = R_\beta(\beta, \lambda) + \lambda_\beta'R_\lambda(\beta, \lambda) = R_\beta(\beta, \lambda),$$
since $R_\lambda(\beta, \lambda) = 0$ at $\lambda = \lambda(\beta)$, where
$$\lambda_\beta = \frac{\partial}{\partial\beta'}\lambda(\beta) = -R_{\lambda\lambda}^{-1}R_{\lambda\beta},$$
the second equality following from the implicit function theorem applied to $R_\lambda(\beta, \lambda(\beta)) = 0$.

The Hessian for (8.6) is
$$R_{\beta\beta} = -\frac{\partial}{\partial\beta\,\partial\beta'}R(\beta) = -\frac{\partial}{\partial\beta'}\left(R_\beta(\beta, \lambda(\beta)) + \lambda_\beta'R_\lambda(\beta, \lambda(\beta))\right) = -\left(R_{\beta\beta}(\beta, \lambda(\beta)) + R_{\beta\lambda}'\lambda_\beta + \lambda_\beta'R_{\lambda\beta} + \lambda_\beta'R_{\lambda\lambda}\lambda_\beta\right) = R_{\beta\lambda}R_{\lambda\lambda}^{-1}R_{\lambda\beta} - R_{\beta\beta}.$$
It is not guaranteed that $R_{\beta\beta} > 0$. If not, the eigenvalues of $R_{\beta\beta}$ should be adjusted so that all are positive. The Newton iteration rule is
$$\beta_{j+1} = \beta_j - \delta R_{\beta\beta}^{-1}R_\beta,$$
where $\delta$ is a scalar stepsize, and the rule is iterated until convergence.
$$0 = \frac{\partial}{\partial\lambda}R(\hat\beta, \hat\lambda) = -\sum_{i=1}^n \frac{g_i(\hat\beta)}{1 + \hat\lambda'g_i(\hat\beta)} \tag{8.15}$$
$$0 = \frac{\partial}{\partial\beta}R(\hat\beta, \hat\lambda) = -\sum_{i=1}^n \frac{G_i(\hat\beta)'\hat\lambda}{1 + \hat\lambda'g_i(\hat\beta)}. \tag{8.16}$$
Let $\overline G_n = \frac{1}{n}\sum_{i=1}^n G_i(\beta_0)$, $\bar g_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)$ and $\Omega_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)g_i(\beta_0)'$.
Expanding (8.16) around $\beta = \beta_0$ and $\lambda = \lambda_0 = 0$ yields
$$0 \simeq \bar g_n - \overline G_n\left(\hat\beta - \beta_0\right) + \Omega_n\hat\lambda. \tag{8.18}$$
Premultiplying by $\overline G_n'\Omega_n^{-1}$ and using (8.17) yields
$$0 \simeq \overline G_n'\Omega_n^{-1}\bar g_n - \overline G_n'\Omega_n^{-1}\overline G_n\left(\hat\beta - \beta_0\right) + \overline G_n'\Omega_n^{-1}\Omega_n\hat\lambda = \overline G_n'\Omega_n^{-1}\bar g_n - \overline G_n'\Omega_n^{-1}\overline G_n\left(\hat\beta - \beta_0\right).$$
Solving (8.18) for $\hat\lambda$ and using (8.19) yields
$$\sqrt n\,\hat\lambda \simeq \Omega_n^{-1}\left(I - \overline G_n\left(\overline G_n'\Omega_n^{-1}\overline G_n\right)^{-1}\overline G_n'\Omega_n^{-1}\right)\sqrt n\,\bar g_n \tag{8.20}$$
$$\xrightarrow{d} \Omega^{-1}\left(I - G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\right)N(0, \Omega) = \Omega^{-1}N(0, V_\lambda).$$
Furthermore, since
$$G'\left(I - \Omega^{-1}G\left(G'\Omega^{-1}G\right)^{-1}G'\right) = 0,$$
$\sqrt n\left(\hat\beta - \beta_0\right)$ and $\sqrt n\,\hat\lambda$ are asymptotically uncorrelated and hence independent.
$$\simeq \left(I - \overline G_n\left(\overline G_n'\Omega_n^{-1}\overline G_n\right)^{-1}\overline G_n'\Omega_n^{-1}\right)\sqrt n\,\bar g_n \simeq \sqrt n\,\Omega_n\hat\lambda.$$
Chapter 9
Endogeneity
We say that there is endogeneity in the linear model y = z 0i + ei if is the parameter of interest and
E(z i ei ) 6= 0: This cannot happen if is de ned by linear projection, so requires a structural interpretation.
The coe cient must have meaning separately from the de nition of a conditional mean or linear projection.
Example: Measurement error in the regressor. Suppose that (yi ; xi ) are joint random variables,
E(yi j xi ) = xi 0 is linear, is the parameter of interest, and xi is not observed. Instead we observe
xi = xi + ui where ui is an k 1 measurement error, independent of yi and xi : Then
yi = xi 0 + ei
0
= (xi ui ) + ei
= x0i + vi
where
vi = ei u0i :
The problem is that
E (xi vi ) = E [(xi + ui ) (ei u0i )] = E (ui u0i ) 6= 0
if 6= 0 and E (ui u0i ) 6= 0: It follows that if ^ is the OLS estimator, then
^ p! = (E (xi x0i ))
1
E (ui u0i ) 6= :
$$\begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix},$$
so
$$\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}^{-1}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \frac{1}{\beta_1 + \beta_2}\begin{pmatrix} \beta_2 & \beta_1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{pmatrix} \left(\beta_2 e_{1i} + \beta_1 e_{2i}\right)/\left(\beta_1 + \beta_2\right) \\ \left(e_{1i} - e_{2i}\right)/\left(\beta_1 + \beta_2\right) \end{pmatrix}.$$
The projection of $q_i$ on $p_i$ yields
$$q_i = \beta^* p_i + \varepsilon_i, \qquad E(p_i\varepsilon_i) = 0,$$
where
$$\beta^* = \frac{E(p_i q_i)}{E(p_i^2)} = \frac{\beta_2 - \beta_1}{2}.$$
Hence if it is estimated by OLS, $\hat\beta \xrightarrow{p} \beta^*$, which does not equal either $\beta_1$ or $\beta_2$. This is called simultaneous equations bias.
$$y = Z\beta + e. \tag{9.2}$$
Any solution to the problem of endogeneity requires additional information which we call instruments.

Definition 9.1.1 The $\ell \times 1$ random vector $x_i$ is an instrumental variable for (9.1) if $E(x_i e_i) = 0$.

In a typical set-up, some regressors in $z_i$ will be uncorrelated with $e_i$ (for example, at least the intercept). Thus we make the partition
$$z_i = \begin{pmatrix} z_{1i} \\ z_{2i} \end{pmatrix}\ \begin{matrix} k_1 \\ k_2 \end{matrix} \tag{9.3}$$
where $E(z_{1i}e_i) = 0$ yet $E(z_{2i}e_i) \ne 0$. We call $z_{1i}$ exogenous and $z_{2i}$ endogenous. By the above definition, $z_{1i}$ is an instrumental variable for (9.1), so should be included in $x_i$. So we have the partition
$$x_i = \begin{pmatrix} z_{1i} \\ x_{2i} \end{pmatrix}\ \begin{matrix} k_1 \\ \ell_2 \end{matrix} \tag{9.4}$$
where $z_{1i} = x_{1i}$ are the included exogenous variables, and $x_{2i}$ are the excluded exogenous variables. That is, $x_{2i}$ are variables which could be included in the equation for $y_i$ (in the sense that they are uncorrelated with $e_i$) yet can be excluded, as they would have true zero coefficients in the equation.
The model is just-identified if $\ell = k$ (i.e., if $\ell_2 = k_2$) and over-identified if $\ell > k$ (i.e., if $\ell_2 > k_2$).
We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.
as the projection error. Then the reduced form linear relationship between $z_i$ and $x_i$ is
$$z_i = \Gamma'x_i + u_i. \tag{9.5}$$
By construction,
$$E(x_i u_i') = 0,$$
so (9.5) is a projection and can be estimated by OLS:
$$Z = X\hat\Gamma + \hat u, \qquad \hat\Gamma = \left(X'X\right)^{-1}\left(X'Z\right).$$
$$y = \left(X\Gamma + U\right)\beta + e = X\lambda + v, \tag{9.7}$$
where
$$\lambda = \Gamma\beta \tag{9.8}$$
and
$$v = U\beta + e.$$
Observe that
$$E(x_i v_i) = E(x_i u_i')\beta + E(x_i e_i) = 0.$$
Thus (9.7) is a projection equation and may be estimated by OLS. This is
$$y = X\hat\lambda + \hat v, \qquad \hat\lambda = \left(X'X\right)^{-1}X'y.$$
The equation (9.7) is the reduced form for $y$. (9.6) and (9.7) together are the reduced form equations for the system
$$y = X\lambda + v, \qquad Z = X\Gamma + U.$$
The parameter $\beta$ is identified (can be recovered from the reduced form) if
$$\operatorname{rank}(\Gamma) = k. \tag{9.9}$$
Assume that (9.9) holds. If $\ell = k$, then $\beta = \Gamma^{-1}\lambda$. If $\ell > k$, then for any $W > 0$, $\beta = \left(\Gamma'W\Gamma\right)^{-1}\Gamma'W\lambda$.
If (9.9) is not satisfied, then $\beta$ cannot be recovered from $(\lambda, \Gamma)$. Note that a necessary (although not sufficient) condition for (9.9) is $\ell \ge k$.

Since $X$ and $Z$ have the common variables $X_1$, we can rewrite some of the expressions. Using (9.3) and (9.4) to make the matrix partitions $X = [X_1, X_2]$ and $Z = [X_1, Z_2]$, we can partition $\Gamma$ as
$$\Gamma = \begin{pmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{pmatrix} = \begin{pmatrix} I & \Gamma_{12} \\ 0 & \Gamma_{22} \end{pmatrix},$$
so that
$$Z_1 = X_1, \qquad Z_2 = X_1\Gamma_{12} + X_2\Gamma_{22} + U_2. \tag{9.10}$$
9.4 Estimation

The model can be written as
$$y_i = z_i'\beta + e_i, \qquad E(x_i e_i) = 0,$$
or
$$E g_i(\beta) = 0, \qquad g_i(\beta) = x_i\left(y_i - z_i'\beta\right).$$
This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in Chapters 7 and 8 directly apply. Recall that the GMM estimator, for given weight matrix $W_n$, is
$$\hat\beta = \left(Z'XW_nX'Z\right)^{-1}Z'XW_nX'y.$$
Recall that the optimal weight matrix is an estimate of the inverse of $\Omega = E\left(x_i x_i' e_i^2\right)$. In the special case that $E\left(e_i^2 \mid x_i\right) = \sigma^2$ (homoskedasticity), then $\Omega = E(x_i x_i')\sigma^2 \propto E(x_i x_i')$, suggesting the weight matrix $W_n = \left(X'X\right)^{-1}$. Using this choice, the GMM estimator equals
$$\hat\beta_{2SLS} = \left(Z'X\left(X'X\right)^{-1}X'Z\right)^{-1}Z'X\left(X'X\right)^{-1}X'y.$$
This is called the two-stage least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.
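A minimal Python sketch of the closed-form 2SLS formula above (illustrative only; it assumes data arrays $y$, $Z$, $X$ as in the earlier GMM sketch).

```python
import numpy as np

def tsls(y, Z, X):
    """beta_2SLS = (Z'X (X'X)^{-1} X'Z)^{-1} Z'X (X'X)^{-1} X'y."""
    XX_inv = np.linalg.inv(X.T @ X)
    ZX = Z.T @ X
    A = ZX @ XX_inv @ ZX.T
    b = ZX @ XX_inv @ (X.T @ y)
    return np.linalg.solve(A, b)
```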
It is useful to observe that writing
$$P = X\left(X'X\right)^{-1}X', \qquad \hat Z = PZ = X\hat\Gamma,$$
then
$$\hat\beta = \left(Z'PZ\right)^{-1}Z'Py = \left(\hat Z'\hat Z\right)^{-1}\hat Z'y.$$
Thus the estimator can be computed in two stages:
First regress $Z$ on $X$, vis., $\hat\Gamma = \left(X'X\right)^{-1}X'Z$ and $\hat Z = X\hat\Gamma = PZ$.
Second, regress $y$ on $\hat Z$, vis., $\hat\beta = \left(\hat Z'\hat Z\right)^{-1}\hat Z'y$.
Recall that $Z = [Z_1, Z_2]$ and $X = [Z_1, X_2]$. Then
$$\hat Z = \left[PZ_1, PZ_2\right] = \left[Z_1, PZ_2\right] = \left[Z_1, \hat Z_2\right],$$
since $Z_1$ lies in the span of $X$.
$$y = Z\beta + e \tag{9.11}$$
$$Z = X\Gamma + U \tag{9.12}$$
$$\xi = (e, U), \qquad E(\xi \mid X) = 0, \qquad E\left(\xi'\xi \mid X\right) = S.$$
$$\frac{1}{n}E\left(Z'e\right) = E(z_i e_i) = \Gamma'E(x_i e_i) + E(u_i e_i) = s_{21}$$
and
$$\frac{1}{n}E\left(Z'Z\right) = E(z_i z_i') = \Gamma'E(x_i x_i')\Gamma + E(u_i x_i')\Gamma + \Gamma'E(x_i u_i') + E(u_i u_i') = \Gamma'Q\Gamma + S_{22}.$$
Hence
$$E\left(\frac{1}{n}\xi'P\xi \mid X\right) = \frac{1}{n}S^{1/2\prime}E\left(\eta'P\eta \mid X\right)S^{1/2} = \frac{\ell}{n}S^{1/2\prime}S^{1/2} = \alpha S,$$
where
$$\alpha = \frac{\ell}{n}.$$
Using (9.12) and this result,
$$\frac{1}{n}E\left(Z'Pe\right) = \frac{1}{n}E\left(\Gamma'X'e\right) + \frac{1}{n}E\left(U'Pe\right) = \alpha s_{21},$$
and
$$\frac{1}{n}E\left(Z'PZ\right) = \Gamma'E(x_i x_i')\Gamma + \Gamma'E(x_i u_i') + E(u_i x_i')\Gamma + \frac{1}{n}E\left(U'PU\right) = \Gamma'Q\Gamma + \alpha S_{22}.$$
Together
$$E\left(\hat\beta_{2SLS} - \beta\right) \approx \left(E\left(\frac{1}{n}Z'PZ\right)\right)^{-1}E\left(\frac{1}{n}Z'Pe\right) = \alpha\left(\Gamma'Q\Gamma + \alpha S_{22}\right)^{-1}s_{21}. \tag{9.14}$$
In general this is non-zero, except when $s_{21} = 0$ (when $Z$ is exogenous). It is also close to zero when $\alpha = 0$. Bekker (1994) pointed out that it also has the reverse implication: when $\alpha = \ell/n$ is large, the bias in the 2SLS estimator will be large. Indeed as $\alpha \to 1$, the expression in (9.14) approaches that in (9.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.

Bekker (1994) showed further that under the alternative asymptotic approximation that $\alpha$ is fixed as $n \to \infty$ (so that the number of instruments goes to infinity proportionately with sample size) then the expression in (9.14) is the probability limit of $\hat\beta_{2SLS} - \beta$.
$$Z_2 = X_1\Gamma_{12} + X_2\Gamma_{22} + U_2.$$
The parameter $\beta$ fails to be identified if $\Gamma_{22}$ has deficient rank. The consequences of identification failure for inference are quite severe.

Take the simplest case where $k = \ell = 1$ (so there is no $X_1$). Then the model may be written as
$$y_i = z_i\beta + e_i, \qquad z_i = x_i\gamma + u_i,$$
and $\Gamma_{22} = \gamma = E(x_i z_i)/E x_i^2$. We see that $\beta$ is identified if and only if $\gamma \ne 0$, which occurs when $E(z_i x_i) \ne 0$. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.

Suppose this condition fails, so $E(z_i x_i) = 0$. Then by the CLT
$$\frac{1}{\sqrt n}\sum_{i=1}^n x_i e_i \xrightarrow{d} N_1 \sim N\left(0, E\left(x_i^2 e_i^2\right)\right) \tag{9.15}$$
$$\frac{1}{\sqrt n}\sum_{i=1}^n x_i z_i = \frac{1}{\sqrt n}\sum_{i=1}^n x_i u_i \xrightarrow{d} N_2 \sim N\left(0, E\left(x_i^2 u_i^2\right)\right), \tag{9.16}$$
therefore
$$\hat\beta - \beta = \frac{\frac{1}{\sqrt n}\sum_{i=1}^n x_i e_i}{\frac{1}{\sqrt n}\sum_{i=1}^n x_i z_i} \xrightarrow{d} \frac{N_1}{N_2} \sim \text{Cauchy},$$
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when $\Gamma_{22}$ is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz.
$$\Gamma_{22} = n^{-1/2}C,$$
where $C$ is a full rank matrix. The $n^{-1/2}$ is picked because it provides just the right balancing to allow a rich distribution theory.

To see the consequences, once again take the simple case $k = \ell = 1$. Here, the instrument $x_i$ is weak for $z_i$ if
$$\gamma = n^{-1/2}c.$$
Then (9.15) is unaffected, but (9.16) instead takes the form
$$\frac{1}{\sqrt n}\sum_{i=1}^n x_i z_i = \frac{1}{\sqrt n}\sum_{i=1}^n x_i^2\gamma + \frac{1}{\sqrt n}\sum_{i=1}^n x_i u_i = \frac{1}{n}\sum_{i=1}^n x_i^2 c + \frac{1}{\sqrt n}\sum_{i=1}^n x_i u_i \xrightarrow{d} Qc + N_2,$$
therefore
$$\hat\beta - \beta \xrightarrow{d} \frac{N_1}{Qc + N_2}.$$
As in the case of complete identification failure, we find that $\hat\beta$ is inconsistent for $\beta$ and the asymptotic distribution of $\hat\beta$ is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.

The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form
$$Z_2 = X_1\Gamma_{12} + X_2\Gamma_{22} + U_2,$$
and identification requires $\operatorname{rank}(\Gamma_{22}) = k_2$. If $k_2 = 1$, this requires $\Gamma_{22} \ne 0$, which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of $k_2 = 1$ (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for $Z_2$, construct the test of $\Gamma_{22} = 0$, and at a minimum check that the test rejects $H_0: \Gamma_{22} = 0$.

When $k_2 > 1$, $\Gamma_{22} \ne 0$ is not sufficient for identification. It is not even sufficient that each column of $\Gamma_{22}$ is non-zero (each column corresponds to a distinct endogenous variable in $Z_2$). So while a minimal check is to test that each column of $\Gamma_{22}$ is non-zero, this cannot be interpreted as definitive proof that $\Gamma_{22}$ has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for $Z_2$, and attempt to assess the likelihood that $\Gamma_{22}$ has deficient rank.
9.8 Exercises

1. Consider the single equation model
$$y_i = z_i\beta + e_i,$$
where $y_i$ and $z_i$ are both real-valued ($1 \times 1$). Let $\hat\beta$ denote the IV estimator of $\beta$ using as an instrument a dummy variable $d_i$ (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.

2. In the linear model
$$y_i = z_i'\beta + e_i, \qquad E(e_i \mid z_i) = 0,$$
suppose $\sigma_i^2 = E\left(e_i^2 \mid z_i\right)$ is known. Show that the GLS estimator of $\beta$ can be written as an IV estimator using some instrument $x_i$. (Find an expression for $x_i$.)

3. Take the linear model
$$y = Z\beta + e.$$
Let the OLS estimator for $\beta$ be $\hat\beta$ and the OLS residual be $\hat e = y - Z\hat\beta$.
Let the IV estimator for $\beta$ using some instrument $X$ be $\tilde\beta$ and the IV residual be $\tilde e = y - Z\tilde\beta$. If $X$ is indeed endogenous, will IV "fit" better than OLS, in the sense that $\tilde e'\tilde e < \hat e'\hat e$, at least in large samples?
4. The reduced form between the regressors $z_i$ and instruments $x_i$ takes the form
$$z_i = \Gamma'x_i + u_i$$
or
$$Z = X\Gamma + U,$$
where $z_i$ is $k \times 1$, $x_i$ is $\ell \times 1$, $Z$ is $n \times k$, $X$ is $n \times \ell$, $U$ is $n \times k$, and $\Gamma$ is $\ell \times k$. The parameter $\Gamma$ is defined by the population moment condition
$$E(x_i u_i') = 0.$$
Show that the method of moments estimator for $\Gamma$ is $\hat\Gamma = \left(X'X\right)^{-1}\left(X'Z\right)$.
5. In the structural model
$$y = Z\beta + e, \qquad Z = X\Gamma + U,$$
with $\Gamma$ $\ell \times k$, $\ell \ge k$, we claim that $\beta$ is identified (can be recovered from the reduced form) if $\operatorname{rank}(\Gamma) = k$. Explain why this is true. That is, show that if $\operatorname{rank}(\Gamma) < k$ then $\beta$ cannot be identified.

6. Take the linear model
$$y_i = x_i\beta + e_i, \qquad E(e_i \mid x_i) = 0.$$
(a) Show that $E(x_i e_i) = 0$ and $E\left(x_i^2 e_i\right) = 0$. Is $z_i = (x_i\ \ x_i^2)'$ a valid instrumental variable for estimation of $\beta$?
(b) Define the 2SLS estimator of $\beta$, using $z_i$ as an instrument for $x_i$. How does this differ from OLS?
(c) Find the efficient GMM estimator of $\beta$ based on the moment condition
$$E\left(z_i\left(y_i - x_i\beta\right)\right) = 0.$$
7. Suppose that price and quantity are determined by the intersection of the linear demand and supply curves
$$\text{Demand}: \quad Q = a_0 + a_1 P + a_2 Y + e_1$$
$$\text{Supply}: \quad Q = b_0 + b_1 P + b_2 W + e_2$$
where income ($Y$) and wage ($W$) are determined outside the market. In this model, are the parameters identified?

8. The data file card.dat is taken from David Card, "Using Geographic Variation in College Proximity to Estimate the Return to Schooling," in Aspects of Labour Market Behavior (1995). There are 2215 observations with 29 variables, listed in card.pdf. We want to estimate a wage equation
$$\log(Wage) = \beta_0 + \beta_1 Educ + \beta_2 Exper + \beta_3 Exper^2 + \beta_4 South + \beta_5 Black + e,$$
where $Educ$ = Education (Years), $Exper$ = Experience (Years), and $South$ and $Black$ are regional and racial dummy variables.
(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments: near2 (a dummy indicating that the observation lives near a 2-year college), fatheduc (the education, in years, of the father) and motheduc (the education, in years, of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the $J$ statistic for overidentification.
(f) Discuss your findings.
Chapter 10

Univariate Time Series
A time series yt is a process observed in sequence over time, t = 1; :::; T . To indicate the dependence on
time, we adopt new notation, and use the subscript t to denote the individual observation, and T to denote
the number of observations.
Because of the sequential nature of time series, we expect that yt and yt 1 are not independent, so
classical assumptions are not valid.
We can separate time series into two categories: univariate (yt 2 R is scalar); and multivariate (yt 2 Rm
is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model
for multivariate time series is vector autoregressions (VARs).
Definition 10.1.1 $\{y_t\}$ is covariance (weakly) stationary if
$$E(y_t) = \mu$$
is independent of $t$, and
$$\operatorname{cov}(y_t, y_{t-k}) = \gamma(k)$$
is independent of $t$ for all $k$. $\gamma(k)$ is called the autocovariance function.

Definition 10.1.2 $\{y_t\}$ is strictly stationary if the joint distribution of $(y_t, \ldots, y_{t-k})$ is independent of $t$ for all $k$.

Theorem 10.1.1 If $y_t$ is strictly stationary and ergodic and $x_t = f(y_t, y_{t-1}, \ldots)$ is a random variable, then $x_t$ is strictly stationary and ergodic.

Theorem 10.1.2 (Ergodic Theorem). If $y_t$ is strictly stationary and ergodic and $E|y_t| < \infty$, then as $T \to \infty$,
$$\frac{1}{T}\sum_{t=1}^T y_t \xrightarrow{p} E(y_t).$$
The sample autocovariance is
$$\hat\gamma(k) = \frac{1}{T}\sum_{t=1}^T\left(y_t - \hat\mu\right)\left(y_{t-k} - \hat\mu\right).$$
The sample autocorrelation is
$$\hat\rho(k) = \frac{\hat\gamma(k)}{\hat\gamma(0)}.$$

Theorem 10.1.3 If $y_t$ is strictly stationary and ergodic and $E y_t^2 < \infty$, then as $T \to \infty$,
1. $\hat\mu \xrightarrow{p} E(y_t)$;
2. $\hat\gamma(k) \xrightarrow{p} \gamma(k)$;
3. $\hat\rho(k) \xrightarrow{p} \rho(k)$.
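For concreteness, a short Python sketch of the sample autocovariance and autocorrelation (illustrative only; it uses the $1/T$ convention and sums over the available $T - k$ products).

```python
import numpy as np

def sample_autocorrelation(y, k):
    """Return (gamma_hat(k), rho_hat(k)) for a univariate series y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    gamma_k = np.sum(d[k:] * d[:T - k]) / T
    gamma_0 = np.sum(d * d) / T
    return gamma_k, gamma_k / gamma_0
```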
10.2 Autoregressions

In time-series, the series $\{\ldots, y_1, y_2, \ldots, y_T, \ldots\}$ are jointly random. We consider the conditional expectation
$$E(y_t \mid \mathcal{F}_{t-1}),$$
where $\mathcal{F}_{t-1} = \{y_{t-1}, y_{t-2}, \ldots\}$ is the past history of the series.
A linear AR model (the most common type used in practice) specifies linearity:
$$E(y_t \mid \mathcal{F}_{t-1}) = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k}.$$
Letting
$$e_t = y_t - E(y_t \mid \mathcal{F}_{t-1}),$$
we have the autoregressive model
$$y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t, \qquad E(e_t \mid \mathcal{F}_{t-1}) = 0.$$
10.3 Stationarity of AR(1) Process

A mean-zero AR(1) is
$$y_t = \rho y_{t-1} + e_t.$$
Assume that $e_t$ is iid, $E(e_t) = 0$ and $E e_t^2 = \sigma^2 < \infty$.
By back-substitution, we find
$$y_t = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \cdots = \sum_{k=0}^\infty \rho^k e_{t-k}.$$
Loosely speaking, this series converges if the sequence $\rho^k e_{t-k}$ gets small as $k \to \infty$. This occurs when $|\rho| < 1$.
$$\operatorname{var}(y_t) = \sum_{k=0}^\infty \rho^{2k}\operatorname{var}(e_{t-k}) = \frac{\sigma^2}{1 - \rho^2}.$$
If the equation for $y_t$ has an intercept, the above results are unchanged, except that the mean of $y_t$ can be computed from the relationship
$$E y_t = \alpha + \rho E y_{t-1},$$
and solving for $E y_t = E y_{t-1}$ we find $E y_t = \alpha/(1 - \rho)$.
$$y_t = \rho y_{t-1} + e_t$$
or
$$(1 - \rho L)y_t = e_t.$$
The operator $\rho(L) = (1 - \rho L)$ is a polynomial in the operator $L$. We say that the root of the polynomial is $1/\rho$, since $\rho(z) = 0$ when $z = 1/\rho$. We call $\rho(L)$ the autoregressive polynomial of $y_t$.
From Theorem 10.3.1, an AR(1) is stationary iff $|\rho| < 1$. Note that an equivalent way to say this is that an AR(1) is stationary iff the root of the autoregressive polynomial is larger than one (in absolute value).
We call $\rho(L)$ the autoregressive polynomial of $y_t$.
The Fundamental Theorem of Algebra says that any polynomial can be factored as
$$\rho(z) = \left(1 - \lambda_1^{-1}z\right)\left(1 - \lambda_2^{-1}z\right)\cdots\left(1 - \lambda_k^{-1}z\right),$$
where the $\lambda_1, \ldots, \lambda_k$ are the complex roots of $\rho(z)$, which satisfy $\rho(\lambda_j) = 0$.
We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let $|\lambda|$ denote the modulus of a complex number $\lambda$.

Theorem 10.5.1 The AR(k) is strictly stationary and ergodic if and only if $|\lambda_j| > 1$ for all $j$.

One way of stating this is that "All roots lie outside the unit circle."
If one of the roots equals 1, we say that $\rho(L)$, and hence $y_t$, "has a unit root". This is a special case of non-stationarity, and is of great interest in applied time series.
10.6 Estimation

Let
$$x_t = \begin{pmatrix} 1 & y_{t-1} & y_{t-2} & \cdots & y_{t-k} \end{pmatrix}', \qquad \beta = \begin{pmatrix} \alpha & \rho_1 & \rho_2 & \cdots & \rho_k \end{pmatrix}',$$
so the model can be written as $y_t = x_t'\beta + e_t$, with OLS estimator $\hat\beta = \left(X'X\right)^{-1}X'y$.
The vector $x_t$ is strictly stationary and ergodic, and by Theorem 10.1.1, so is $x_t x_t'$. Thus by the Ergodic Theorem,
$$\frac{1}{T}\sum_{t=1}^T x_t x_t' \xrightarrow{p} E(x_t x_t') = Q.$$
Combined with (10.1) and the continuous mapping theorem, we see that
$$\hat\beta = \beta + \left(\frac{1}{T}\sum_{t=1}^T x_t x_t'\right)^{-1}\left(\frac{1}{T}\sum_{t=1}^T x_t e_t\right) \xrightarrow{p} \beta + Q^{-1}\cdot 0 = \beta.$$
10.7 Asymptotic Distribution

Theorem 10.7.1 (MDS CLT). If $u_t$ is a strictly stationary and ergodic MDS and $E(u_t u_t') = \Omega < \infty$, then as $T \to \infty$,
$$\frac{1}{\sqrt T}\sum_{t=1}^T u_t \xrightarrow{d} N(0, \Omega).$$
Applied to $u_t = x_t e_t$, this yields the asymptotic distribution of $\hat\beta$, where
$$\Omega = E\left(x_t x_t' e_t^2\right).$$

Theorem 10.7.2 If the AR(k) process $y_t$ is strictly stationary and ergodic and $E y_t^4 < \infty$, then as $T \to \infty$,
$$\sqrt T\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, Q^{-1}\Omega Q^{-1}\right).$$
This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.
$$y_t^* = \hat\alpha + \hat\rho_1 y_{t-1}^* + \hat\rho_2 y_{t-2}^* + \cdots + \hat\rho_k y_{t-k}^* + e_t^*.$$
This construction imposes homoskedasticity on the errors $e_t^*$, which may be different than the properties of the actual $e_t$. It also presumes that the AR(k) structure is the truth.
10.9 Trend Stationarity
$$y_t = \mu_0 + \mu_1 t + S_t \tag{10.2}$$
$$S_t = \rho_1 S_{t-1} + \rho_2 S_{t-2} + \cdots + \rho_k S_{t-k} + e_t, \tag{10.3}$$
or
$$y_t = \alpha_0 + \alpha_1 t + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t. \tag{10.4}$$
There are two essentially equivalent ways to estimate the autoregressive parameters $(\rho_1, \ldots, \rho_k)$.
The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects

Include dummy variables for each season. This presumes that "seasonality" does not change over the sample.
Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.
First apply a seasonal differencing operator. If $s$ is the number of seasons (typically $s = 4$ or $s = 12$),
$$\Delta_s y_t = y_t - y_{t-s},$$
or the season-to-season change. The series $\Delta_s y_t$ is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.
$$y_t = \alpha + \rho y_{t-1} + u_t. \tag{10.5}$$
We are interested in the question if the error $u_t$ is serially correlated. We model this as an AR(1):
$$u_t = \theta u_{t-1} + e_t \tag{10.6}$$
$$H_0: \theta = 0 \qquad H_1: \theta \ne 0.$$
To combine the equations, take (10.5) lagged once,
$$y_{t-1} = \alpha + \rho y_{t-2} + u_{t-1},$$
multiply by $\theta$ and subtract from (10.5), to obtain
$$y_t - \theta y_{t-1} = \alpha - \theta\alpha + \rho y_{t-1} - \theta\rho y_{t-2} + u_t - \theta u_{t-1},$$
or
$$y_t = \alpha(1 - \theta) + (\rho + \theta)y_{t-1} - \theta\rho y_{t-2} + e_t = AR(2).$$
Thus under $H_0$, $y_t$ is an AR(1), and under $H_1$ it is an AR(2). $H_0$ may be expressed as the restriction that the coefficient on $y_{t-2}$ is zero.
An appropriate test of $H_0$ against $H_1$ is therefore a Wald test that the coefficient on $y_{t-2}$ is zero. (A simple exclusion test.)
In general, if the null hypothesis is that $y_t$ is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative $y_t$ is an AR(k+m), and this is equivalent to the restriction that the coefficients on $y_{t-k-1}, \ldots, y_{t-k-m}$ are jointly zero. An appropriate test is the Wald test of this restriction.
$$\rho(L)y_t = \mu + e_t,$$
$$\rho(L) = 1 - \rho_1 L - \cdots - \rho_k L^k.$$
$$\rho_1 + \rho_2 + \cdots + \rho_k = 1.$$
In this case, $y_t$ is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.
A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
$$\Delta y_t = \mu + \alpha_0 y_{t-1} + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \tag{10.7}$$
These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter $\alpha_0$ summarizes the information about the unit root, since $\rho(1) = -\alpha_0$. To see this, observe that the lag polynomial for the $y_t$ computed from (10.7) is
$$(1 - L) - \alpha_0 L - \alpha_1\left(L - L^2\right) - \cdots - \alpha_{k-1}\left(L^{k-1} - L^k\right).$$
But this must equal $\rho(L)$, as the models are equivalent. Thus
$$\rho(1) = (1 - 1) - \alpha_0 - (1 - 1) - \cdots - (1 - 1) = -\alpha_0.$$
The unit-root hypothesis and its alternative are therefore
$$H_0: \alpha_0 = 0, \qquad H_1: \alpha_0 < 0.$$
Under $H_0$, the model for $y_t$ is
$$\Delta y_t = \mu + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t,$$
which is an AR(k-1) in the first-difference $\Delta y_t$. Thus if $y_t$ has a (single) unit root, then $\Delta y_t$ is a stationary AR process. Because of this property, we say that if $y_t$ is non-stationary but $\Delta^d y_t$ is stationary, then $y_t$ is "integrated of order $d$", or $I(d)$. Thus a time series with a unit root is $I(1)$.

Since $\alpha_0$ is the parameter of a linear regression, the natural test statistic is the t-statistic for $H_0$ from OLS estimation of (10.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.
It would seem natural to assess the significance of the ADF statistic using the normal table. However, under $H_0$, $y_t$ is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.
$$ADF = \frac{\hat\alpha_0}{s(\hat\alpha_0)} \to DF_t.$$
The limit distributions $DF_\alpha$ and $DF_t$ are non-normal. They are skewed to the left, and have negative means.
The first result states that $\hat\alpha_0$ converges to its true value (of zero) at rate $T$, rather than the conventional rate of $T^{1/2}$. This is called a "super-consistent" rate of convergence.
The second result states that the t-statistic for $\hat\alpha_0$ converges to a limit distribution which is non-normal, but does not depend on the parameters $\alpha$. This distribution has been extensively tabulated, and may be used for testing the hypothesis $H_0$. Note: The standard error $s(\hat\alpha_0)$ is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.
Since the alternative hypothesis is one-sided, the ADF test rejects $H_0$ in favor of $H_1$ when $ADF < c$, where $c$ is the critical value from the ADF table. If the test rejects $H_0$, this means that the evidence points to $y_t$ being stationary. If the test does not reject $H_0$, a common conclusion is that the data suggests that $y_t$ is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.
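The ADF statistic is just a t-ratio from the OLS regression (10.7). The following Python sketch (illustrative only) constructs that regression with an intercept and $k - 1$ lagged differences and returns the t-ratio; the critical values themselves must be taken from a Dickey-Fuller table, which is not reproduced here.

```python
import numpy as np

def adf_statistic(y, k):
    """t-ratio for alpha_0 from OLS estimation of (10.7), with the conventional
    (homoskedastic) standard error.  Compare with Dickey-Fuller critical values."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack(
        [np.ones(T - k + 1), y[k - 1: len(y) - 1]] +            # intercept, y_{t-1}
        [dy[k - 1 - j: T - j] for j in range(1, k)]             # lagged differences
    )
    yy = dy[k - 1:]
    beta = np.linalg.solve(X.T @ X, X.T @ yy)
    e = yy - X @ beta
    sigma2 = e @ e / (len(yy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se
```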
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
$$\Delta y_t = \mu_1 + \mu_2 t + \alpha_0 y_{t-1} + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \tag{10.8}$$
This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, stock prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe this data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for $\alpha_0$ from OLS estimation of (10.8).
If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.
Most texts include as well the critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice where intercepts are always included.
By Theorem 10.1.1 above, the sequence $y_t y_{t-k}$ is strictly stationary and ergodic, and it has a finite mean by the assumption that $E y_t^2 < \infty$. Thus an application of the Ergodic Theorem yields
$$\frac{1}{T}\sum_{t=1}^T y_t y_{t-k} \xrightarrow{p} E(y_t y_{t-k}).$$
Thus
$$\hat\gamma(k) \xrightarrow{p} E(y_t y_{t-k}) - \mu^2 - \mu^2 + \mu^2 = E(y_t y_{t-k}) - \mu^2 = \gamma(k).$$
Part (3) follows by the continuous mapping theorem: $\hat\rho(k) = \hat\gamma(k)/\hat\gamma(0) \xrightarrow{p} \gamma(k)/\gamma(0) = \rho(k)$.
Chapter 11

Multivariate Time Series
A VAR model specifies that the conditional mean of the vector $y_t$ depends only on a finite number of lags:
$$E(y_t \mid \mathcal{F}_{t-1}) = E\left(y_t \mid y_{t-1}, \ldots, y_{t-k}\right).$$
A linear VAR specifies that this conditional mean is linear in the arguments:
$$E\left(y_t \mid y_{t-1}, \ldots, y_{t-k}\right) = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k}.$$
Letting
$$e_t = y_t - E(y_t \mid \mathcal{F}_{t-1}),$$
we have the VAR model
$$y_t = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k} + e_t, \qquad E(e_t \mid \mathcal{F}_{t-1}) = 0.$$
11.2 Estimation
Consider the moment conditions
$$E(x_t e_{jt}) = 0, \qquad j = 1, \ldots, m.$$
These are implied by the VAR model, either as a regression, or as a linear projection. The GMM estimator corresponding to these moment conditions is equation-by-equation OLS
$$\hat{a}_j = (X'X)^{-1}X'y_j.$$
An alternative way to write this is
$$\hat{a}_j' = y_j'X(X'X)^{-1}.$$
And if we stack these to create the estimate $\hat{A}$, we find
$$\hat{A} = \begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_m' \end{pmatrix} X(X'X)^{-1} = Y'X(X'X)^{-1},$$
where
$$Y = \begin{pmatrix} y_1 & y_2 & \cdots & y_m \end{pmatrix}$$
is the $T \times m$ matrix of the stacked $y_t'$.
This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
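The matrix expression above translates directly into code. The following is a minimal sketch under simulated data; the function name, lag length, and white-noise placeholder series are assumptions for the example.

```python
# Sketch: equation-by-equation OLS for a VAR(k), i.e. A_hat = Y'X(X'X)^{-1}.
import numpy as np

def var_ols(y, k):
    """y is (T_total x m). Returns A_hat of shape (m, 1 + m*k)."""
    T_total, m = y.shape
    rows, Y = [], []
    for t in range(k, T_total):
        x_t = [1.0]
        for lag in range(1, k + 1):
            x_t.extend(y[t - lag])       # stack y_{t-1}, ..., y_{t-k}
        rows.append(x_t)
        Y.append(y[t])
    X = np.array(rows)                   # T x (1 + m*k)
    Y = np.array(Y)                      # T x m
    return Y.T @ X @ np.linalg.inv(X.T @ X)

rng = np.random.default_rng(1)
y = rng.standard_normal((300, 2))        # placeholder data: two white-noise series
A_hat = var_ols(y, k=2)
print(A_hat.shape)                       # (2, 5): intercept plus two lags of each series
```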
The $j$'th equation of the VAR is
$$y_{jt} = a_j'x_t + e_{jt},$$
where $x_t$ consists of lagged values of $y_{jt}$ and the other $y_{lt}$'s. In this case, it is convenient to re-define the variables. Let $y_t = y_{jt}$, let $z_t$ be the other variables, let $e_t = e_{jt}$ and $\beta = a_j$. Then the single equation takes the form
$$y_t = x_t'\beta + e_t, \quad (11.1)$$
and
$$x_t = \begin{pmatrix} 1 & y_{t-1} & \cdots & y_{t-k} & z_{t-1}' & \cdots & z_{t-k}' \end{pmatrix}'.$$
This is just a conventional regression with time series data.
To test for omitted serial correlation in (11.1), suppose that the error follows an AR(1):
$$y_t = x_t'\beta + e_t$$
$$e_t = \theta e_{t-1} + u_t \quad (11.2)$$
$$E(u_t \mid F_{t-1}) = 0.$$
Then the null and alternative are
$$H_0: \theta = 0 \qquad H_1: \theta \neq 0.$$
Take the equation $y_t = x_t'\beta + e_t$, and subtract off the equation once lagged multiplied by $\theta$, to get
$$y_t - \theta y_{t-1} = (x_t'\beta + e_t) - \theta(x_{t-1}'\beta + e_{t-1}) = x_t'\beta - \theta x_{t-1}'\beta + e_t - \theta e_{t-1},$$
or
$$y_t = \theta y_{t-1} + x_t'\beta + x_{t-1}'\gamma + u_t, \quad (11.3)$$
which is a valid regression model.
So testing $H_0$ versus $H_1$ is equivalent to testing for the significance of adding $(y_{t-1}, x_{t-1})$ to the regression. This can be done by a Wald test. We see that an appropriate, general, and simple way to test for omitted serial correlation is to test the significance of extra lagged values of the dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which was once very popular and is still routinely reported by conventional regression packages. The DW test is appropriate only when the regression $y_t = x_t'\beta + e_t$ is not dynamic (has no lagged values on the right-hand side) and $e_t$ is iid $N(0, \sigma^2)$. Otherwise it is invalid.
Another interesting fact is that (11.2) is a special case of (11.3), under the restriction $\gamma = -\beta\theta$. This restriction, which is called a common factor restriction, may be tested if desired. If valid, the model (11.2) may be estimated by iterated GLS. (A simple version of this estimator is called Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected empirically, direct estimation of (11.2) is uncommon in recent applications.
where $p$ is the number of parameters in the model, and $\hat{e}_t(k)$ is the OLS residual vector from the model with $k$ lags. The log determinant is the criterion from the multivariate normal likelihood.
Let
$$F_{1t} = (y_t, y_{t-1}, y_{t-2}, \ldots)$$
$$F_{2t} = (y_t, z_t, y_{t-1}, z_{t-1}, y_{t-2}, z_{t-2}, \ldots).$$
The information set $F_{1t}$ is generated only by the history of $y_t$, and the information set $F_{2t}$ is generated by both $y_t$ and $z_t$. The latter has more information.
We say that $z_t$ does not Granger-cause $y_t$ if
$$E(y_t \mid F_{1,t-1}) = E(y_t \mid F_{2,t-1}).$$
That is, conditional on information in lagged $y_t$, lagged $z_t$ does not help to forecast $y_t$. If this condition does not hold, then we say that $z_t$ Granger-causes $y_t$.
The reason why we call this "Granger causality" rather than "causality" is because this is not a physical or structural definition of causality. If $z_t$ is some sort of forecast of the future, such as a futures price, then
$z_t$ may help to forecast $y_t$ even though it does not "cause" $y_t$. This definition of causality was developed by Granger (1969) and Sims (1972).
In a linear VAR, the equation for $y_t$ is
$$y_t = \alpha + \rho_1 y_{t-1} + \cdots + \rho_k y_{t-k} + z_{t-1}'\gamma_1 + \cdots + z_{t-k}'\gamma_k + e_t.$$
In this equation, $z_t$ does not Granger-cause $y_t$ if and only if
$$H_0: \gamma_1 = \gamma_2 = \cdots = \gamma_k = 0$$
holds.
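Since $H_0$ is a set of exclusion restrictions, it can be examined by comparing the restricted and unrestricted regressions. The sketch below uses an F-type statistic on simulated data; the lag length, function names, and data-generating process are assumptions for the example (a package such as statsmodels also provides a ready-made test).

```python
# Sketch: Granger-causality test by comparing y_t on its own lags with and without lags of z_t.
import numpy as np

def ols_ssr(Y, X):
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    e = Y - X @ b
    return e @ e

def granger_F(y, z, k=2):
    rows_r, rows_u, Y = [], [], []
    for t in range(k, len(y)):
        own = [1.0] + [y[t - j] for j in range(1, k + 1)]
        other = [z[t - j] for j in range(1, k + 1)]
        rows_r.append(own)
        rows_u.append(own + other)
        Y.append(y[t])
    Y, Xr, Xu = np.array(Y), np.array(rows_r), np.array(rows_u)
    ssr_r, ssr_u = ols_ssr(Y, Xr), ols_ssr(Y, Xu)
    df = len(Y) - Xu.shape[1]
    return (ssr_r - ssr_u) / k / (ssr_u / df)   # compare with F(k, df) critical values

rng = np.random.default_rng(2)
z = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):                          # z feeds into y with one lag
    y[t] = 0.3 * y[t - 1] + 0.5 * z[t - 1] + rng.standard_normal()
print("F statistic:", granger_F(y, z, k=2))
```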
11.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger
(1987).
Definition 11.8.1 The $m \times 1$ series $y_t$ is cointegrated if $y_t$ is I(1) yet there exists $\beta$, $m \times r$, of rank $r$, such that $z_t = \beta'y_t$ is I(0). The $r$ vectors in $\beta$ are called the cointegrating vectors.
If the series $y_t$ is not cointegrated, then $r = 0$. If $r = m$, then $y_t$ is I(0). For $0 < r < m$, $y_t$ is I(1) and cointegrated.
In some cases, it may be believed that $\beta$ is known a priori. Often, $\beta = (1\ -1)'$. For example, if $y_t$ is a pair of interest rates, then $\beta = (1\ -1)'$ specifies that the spread (the difference in returns) is stationary. If $y_t = (\log(\text{Consumption})\ \log(\text{Income}))'$, then $\beta = (1\ -1)'$ specifies that $\log(\text{Consumption}/\text{Income})$ is stationary.
In other cases, $\beta$ may not be known.
If $y_t$ is cointegrated with a single cointegrating vector ($r = 1$), then it turns out that $\beta$ can be consistently estimated by an OLS regression of one component of $y_t$ on the others. Thus $y_t = (Y_{1t}, Y_{2t})$ and $\beta = (\beta_1\ \beta_2)'$, and normalize $\beta_1 = 1$. Then $\hat{\beta}_2 = (y_2'y_2)^{-1}y_2'y_1 \to_p \beta_2$. Furthermore this estimation is super-consistent: $T(\hat{\beta}_2 - \beta_2) \to_d \text{Limit}$, as first shown by Stock (1987). This is not, in general, a good method to estimate $\beta$, but it is useful in the construction of alternative estimators and tests.
We are often interested in testing the hypothesis of no cointegration:
$$H_0: r = 0 \qquad H_1: r > 0.$$
Suppose that $\beta$ is known, so $z_t = \beta'y_t$ is known. Then under $H_0$, $z_t$ is I(1), yet under $H_1$, $z_t$ is I(0). Thus $H_0$ can be tested using a univariate ADF test on $z_t$.
When $\beta$ is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual $\hat{z}_t = \hat{\beta}'y_t$ from OLS of $y_{1t}$ on $y_{2t}$. Their justification was Stock's result that $\hat{\beta}$ is super-consistent under $H_1$. Under $H_0$, however, $\hat{\beta}$ is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).
When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
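The Engle-Granger two-step procedure is easy to sketch. The example below uses simulated cointegrated data and assumes the statsmodels package is available for the residual-based unit root statistic; recall that the Phillips-Ouliaris critical values, not the ordinary ADF table, apply because $\beta$ is estimated.

```python
# Sketch of the Engle-Granger procedure: OLS of y1 on y2, then an ADF-type test on the residual.
import numpy as np
from statsmodels.tsa.stattools import adfuller   # assumed available

rng = np.random.default_rng(3)
common = np.cumsum(rng.standard_normal(400))        # shared stochastic trend
y2 = common + rng.standard_normal(400)
y1 = 1.0 + 0.8 * common + rng.standard_normal(400)  # cointegrated with y2 by construction

X = np.column_stack([np.ones(400), y2])
beta_hat = np.linalg.lstsq(X, y1, rcond=None)[0]
z_hat = y1 - X @ beta_hat                           # estimated cointegrating residual

adf_stat = adfuller(z_hat, regression="c")[0]
print("beta_2 estimate:", beta_hat[1], "residual ADF statistic:", adf_stat)
```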
The VAR in levels can be written as
$$A(L)y_t = e_t$$
$$A(L) = I - A_1L - A_2L^2 - \cdots - A_kL^k,$$
or alternatively as
$$\Delta y_t = \Pi y_{t-1} + D(L)\Delta y_{t-1} + e_t$$
where
$$\Pi = -A(1) = -I + A_1 + A_2 + \cdots + A_k.$$
Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as
$$\Delta y_t = \alpha\beta'y_{t-1} + D(L)\Delta y_{t-1} + e_t$$
$$\Delta y_t = \alpha z_{t-1} + D(L)\Delta y_{t-1} + e_t.$$
Chapter 12
Limited Dependent Variables
A "limited dependent variable" $y$ is one which takes a "limited" set of values. The most common cases are
Binary: $y \in \{0, 1\}$
Multinomial: $y \in \{0, 1, 2, \ldots, k\}$
Integer: $y \in \{0, 1, 2, \ldots\}$
Censored: $y \in \mathbb{R}_+$
The traditional approach to the estimation of limited dependent variable (LDV) models is parametric maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function. A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional assumption. Due to time constraints we will discuss only the first (parametric) approach, which still accounts for the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation, you would be advised to consider employing a semi-parametric estimation approach.
For the parametric approach, estimation is by MLE. A major practical issue is construction of the
likelihood function.
The linear probability model specifies
$$P(y_i = 1 \mid x_i) = x_i'\beta.$$
As $P(y_i = 1 \mid x_i) = E(y_i \mid x_i)$, this yields the regression $y_i = x_i'\beta + e_i$, which can be estimated by OLS. However, the linear probability model does not impose the restriction that $0 \le P(y_i = 1 \mid x_i) \le 1$. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.
The standard alternative is to use a function of the form
$$P(y_i = 1 \mid x_i) = F(x_i'\beta)$$
where $F(\cdot)$ is a known CDF, typically assumed to be symmetric about zero, so that $F(u) = 1 - F(-u)$. The two standard choices for $F$ are
Logistic: $F(u) = (1 + e^{-u})^{-1}$
Normal: $F(u) = \Phi(u)$.
If $F$ is logistic, we call this the logit model, and if $F$ is normal, we call this the probit model.
This model is identical to the latent variable model
$$y_i^* = x_i'\beta + e_i, \qquad e_i \sim F(\cdot),$$
$$y_i = \begin{cases} 1 & \text{if } y_i^* > 0 \\ 0 & \text{otherwise.} \end{cases}$$
For then
$$P(y_i = 1 \mid x_i) = P(y_i^* > 0 \mid x_i) = P(e_i > -x_i'\beta \mid x_i) = 1 - F(-x_i'\beta) = F(x_i'\beta).$$
Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if $y$ is Bernoulli, such that $P(y = 1) = p$ and $P(y = 0) = 1 - p$, then we can write the density of $y$ as
$$f(y) = p^y(1 - p)^{1-y}, \qquad y = 0, 1.$$
In the binary choice model, $y_i$ is conditionally Bernoulli with $P(y_i = 1 \mid x_i) = p_i = F(x_i'\beta)$. Thus the conditional density is
$$f(y_i \mid x_i) = p_i^{y_i}(1 - p_i)^{1-y_i} = F(x_i'\beta)^{y_i}(1 - F(x_i'\beta))^{1-y_i}.$$
Hence the log-likelihood function is
$$\log L(\beta) = \sum_{i=1}^n \log f(y_i \mid x_i) = \sum_{i=1}^n\left[y_i\log F(x_i'\beta) + (1 - y_i)\log(1 - F(x_i'\beta))\right] = \sum_{y_i=1}\log F(x_i'\beta) + \sum_{y_i=0}\log(1 - F(x_i'\beta)).$$
The MLE $\hat{\beta}$ is the value of $\beta$ which maximizes $\log L(\beta)$. Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
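As an illustration, the log-likelihood above can be maximized numerically. The sketch below does this for the probit case ($F = \Phi$) on simulated data; the data-generating process and the use of scipy's optimizer are assumptions for the example.

```python
# Sketch: probit MLE by numerical maximization of the binary-choice log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(beta, y, X):
    p = norm.cdf(X @ beta)
    p = np.clip(p, 1e-10, 1 - 1e-10)        # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, -1.0])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

res = minimize(neg_loglik, x0=np.zeros(2), args=(y, X), method="BFGS")
print("MLE:", res.x)                         # should be close to beta_true
```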
The conditional density is the Poisson with parameter $\lambda_i = \exp(x_i'\beta)$. The functional form for $\lambda_i$ has been picked to ensure that $\lambda_i > 0$.
The log-likelihood function is
$$\log L(\beta) = \sum_{i=1}^n\log f(y_i \mid x_i) = \sum_{i=1}^n\left(-\exp(x_i'\beta) + y_ix_i'\beta - \log(y_i!)\right).$$
The MLE is the value $\hat{\beta}$ which maximizes $\log L(\beta)$.
Since
$$E(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta)$$
is the conditional mean, this motivates the label Poisson "regression."
Also observe that the model implies that
$$\text{var}(y_i \mid x_i) = \lambda_i = \exp(x_i'\beta),$$
so the model imposes the restriction that the conditional mean and variance of $y_i$ are the same. This may be considered restrictive. A generalization is the negative binomial.
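The Poisson log-likelihood can likewise be maximized numerically. The sketch below uses simulated data and assumes scipy is available; gammaln is used only to evaluate log(y!).

```python
# Sketch: Poisson regression by MLE, using the log-likelihood given in the text.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(beta, y, X):
    xb = X @ beta
    return -np.sum(-np.exp(xb) + y * xb - gammaln(y + 1))   # gammaln(y+1) = log(y!)

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.2, 0.7])
y = rng.poisson(np.exp(X @ beta_true)).astype(float)

res = minimize(neg_loglik, x0=np.zeros(2), args=(y, X), method="BFGS")
print("MLE:", res.x)
```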
A censored-data model arises when the latent variable $y_i^*$ is only partially observed. With censoring at zero, the observed variable is
$$y_i = \begin{cases} y_i^* & \text{if } y_i^* \ge 0 \\ 0 & \text{if } y_i^* < 0. \end{cases} \quad (12.1)$$
(This is written for the case of the threshold being zero; any known value can substitute.) The observed data $y_i$ therefore come from a mixed continuous/discrete distribution.
Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher)
of data at the boundary of the sample support. The censoring process may be explicit in data collection, or
it may be a by-product of economic constraints.
An example of data-collection censoring is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.
The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model
$$y_i^* = x_i'\beta + e_i, \qquad e_i \sim \text{iid } N(0, \sigma^2),$$
with the observed variable $y_i$ generated by the censoring equation (12.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.
The naive approach to estimate $\beta$ is to regress $y_i$ on $x_i$. This does not work because regression estimates $E(y_i \mid x_i)$, not $E(y_i^* \mid x_i) = x_i'\beta$, and the latter is of interest. Thus OLS will be biased for the parameter of interest $\beta$.
[Note: it is still possible to estimate $E(y_i \mid x_i)$ by LS techniques. The Tobit framework postulates that this is not inherently interesting, and that the parameter of interest is $\beta$, which is defined by an alternative statistical structure.]
Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is
$$P(y_i = 0 \mid x_i) = P(y_i^* < 0 \mid x_i) = P\!\left(\frac{e_i}{\sigma} < -\frac{x_i'\beta}{\sigma} \,\Big|\, x_i\right) = \Phi\!\left(-\frac{x_i'\beta}{\sigma}\right).$$
The conditional density can therefore be written as
$$f(y \mid x_i) = \Phi\!\left(-\frac{x_i'\beta}{\sigma}\right)^{1(y=0)}\left[\sigma^{-1}\phi\!\left(\frac{y - x_i'\beta}{\sigma}\right)\right]^{1(y>0)},$$
where $1(\cdot)$ is the indicator function.
Hence the log-likelihood is a mixture of the probit and the normal:
$$\log L(\beta, \sigma^2) = \sum_{i=1}^n\log f(y_i \mid x_i) = \sum_{y_i=0}\log\Phi\!\left(-\frac{x_i'\beta}{\sigma}\right) + \sum_{y_i>0}\log\left[\sigma^{-1}\phi\!\left(\frac{y_i - x_i'\beta}{\sigma}\right)\right].$$
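A minimal numerical implementation of this likelihood is sketched below under simulated data; the parameterization of sigma and the use of scipy's optimizer are assumptions for the example.

```python
# Sketch: Tobit MLE (probit piece for censored observations, normal piece otherwise).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                       # enforce sigma > 0
    xb = X @ beta
    cens = (y == 0)
    ll_cens = norm.logcdf(-xb[cens] / sigma)
    ll_pos = norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)
    return -(ll_cens.sum() + ll_pos.sum())

rng = np.random.default_rng(6)
n = 1000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, 1.0])
ystar = X @ beta_true + rng.standard_normal(n)
y = np.maximum(ystar, 0.0)                          # censoring at zero, as in (12.1)

res = minimize(neg_loglik, x0=np.zeros(3), args=(y, X), method="BFGS")
print("beta:", res.x[:2], "sigma:", np.exp(res.x[-1]))
```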
The first step is to estimate $\hat{\gamma}$ from a Probit, using regressors $z_i$; the binary dependent variable is $T_i$ (the selection indicator). The second step is OLS of $y_i$ on $x_i$ and the inverse Mills ratio $\lambda(z_i'\hat{\gamma})$ for the selected sample.
The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula, or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.
The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator
is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some
modern econometric research is exploring how to relax the normality assumption.
The estimator can also work quite poorly if $\lambda(z_i'\hat{\gamma})$ does not have much in-sample variation. This can happen if the Probit equation does not "explain" much about the selection choice. Another potential problem is that if $z_i = x_i$, then $\lambda(z_i'\hat{\gamma})$ can be highly collinear with $x_i$, so the second-step OLS estimator will not be able to estimate $\beta$ precisely. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in $z_i$ which is not in $x_i$. If this is valid, it will ensure that $\lambda(z_i'\hat{\gamma})$ is not collinear with $x_i$, and hence improve the second-stage estimator's precision.
Chapter 13
Panel Data
A panel is a set of observations on individuals, collected over time. An observation is the pair $\{y_{it}, x_{it}\}$, where the $i$ subscript denotes the individual and the $t$ subscript denotes time. A panel may be balanced,
$$\{y_{it}, x_{it}\}: \quad t = 1, \ldots, T; \; i = 1, \ldots, n,$$
or unbalanced,
$$\{y_{it}, x_{it}\}: \quad \text{for } i = 1, \ldots, n, \quad t = t_i, \ldots, \bar{t}_i.$$
The individual-effects model is
$$y_{it} = x_{it}'\beta + u_i + e_{it}.$$
OLS of $y_{it}$ on $x_{it}$ is consistent if
$$E(x_{it}u_i) = 0. \quad (13.1)$$
If this condition fails, then OLS is inconsistent. (13.1) fails if the individual-specific unobserved effect $u_i$ is correlated with the observed explanatory variables $x_{it}$. This is often believed to be plausible if $u_i$ is an omitted variable.
If (13.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears
a poor estimation choice.
Condition (13.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.
The fixed effects model eliminates $u_i$ by including individual dummies. Let $d_i$ be an $n \times 1$ dummy vector with a "1" in the $i$'th place, and let
$$u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}.$$
Then note that
$$u_i = d_i'u,$$
and
$$y_{it} = x_{it}'\beta + d_i'u + e_{it}. \quad (13.2)$$
Observe that
$$E(e_{it} \mid x_{it}, d_i) = 0,$$
so (13.2) is a valid regression, with $d_i$ as a regressor along with $x_{it}$.
OLS on (13.2) yields estimators $(\hat{\beta}, \hat{u})$. Conventional inference applies.
Observe that
This is generally consistent.
If xit contains an intercept, it will be collinear with di ; so the intercept is typically omitted from xit :
Any regressor in xit which is constant over time for all individuals (e.g., their gender) will be collinear
with di ; so will have to be omitted.
There are n + k regression parameters, which is quite large as typically n is very large.
Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of $\beta$ proceeds by the FWL theorem. Stacking the observations together:
$$y = X\beta + Du + e,$$
then by the FWL theorem,
$$\hat{\beta} = \left(X'(I - P_D)X\right)^{-1}\left(X'(I - P_D)y\right) = \left(X^{*\prime}X^*\right)^{-1}\left(X^{*\prime}y^*\right),$$
where
$$y^* = y - D(D'D)^{-1}D'y$$
$$X^* = X - D(D'D)^{-1}D'X.$$
Since the regression of $y_{it}$ on $d_i$ is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean $\bar{y}_i$, and the residual is the demeaned value
$$y_{it}^* = y_{it} - \bar{y}_i.$$
The fixed effects estimator $\hat{\beta}$ is OLS of $y_{it}^*$ on $x_{it}^*$, the dependent variable and regressors in deviation-from-mean form.
Another derivation of the estimator is to take the equation
$$y_{it} = x_{it}'\beta + u_i + e_{it},$$
and then take individual-specific means by taking the average for the $i$'th individual:
$$\frac{1}{T_i}\sum_{t=t_i}^{\bar{t}_i}y_{it} = \frac{1}{T_i}\sum_{t=t_i}^{\bar{t}_i}x_{it}'\beta + u_i + \frac{1}{T_i}\sum_{t=t_i}^{\bar{t}_i}e_{it}$$
or
$$\bar{y}_i = \bar{x}_i'\beta + u_i + \bar{e}_i.$$
Subtracting, we find
$$y_{it}^* = x_{it}^{*\prime}\beta + e_{it}^*,$$
which is free of the individual effect $u_i$.
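The demeaning derivation translates directly into code. The sketch below uses a simulated balanced panel in which the individual effect is correlated with the regressor, so pooled OLS would be biased but the within estimator is not; the data-generating process is an assumption for the example.

```python
# Sketch: the within (fixed-effects) estimator via individual demeaning.
import numpy as np

def within_estimator(y, X, ids):
    y_star = y.astype(float).copy()
    X_star = X.astype(float).copy()
    for i in np.unique(ids):
        m = ids == i
        y_star[m] -= y_star[m].mean()            # y_it - ybar_i
        X_star[m] -= X_star[m].mean(axis=0)      # x_it - xbar_i
    return np.linalg.lstsq(X_star, y_star, rcond=None)[0]

rng = np.random.default_rng(7)
n, T = 100, 5
ids = np.repeat(np.arange(n), T)
u = np.repeat(rng.standard_normal(n), T)         # individual effect
x = rng.standard_normal(n * T) + 0.5 * u         # regressor correlated with u
y = 1.5 * x + u + rng.standard_normal(n * T)

print("within estimate of beta:", within_estimator(y, x.reshape(-1, 1), ids))
```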
13.3 Dynamic Panel Regression
A dynamic panel regression has a lagged dependent variable
Chapter 14
Nonparametrics
Three common choices for the kernel function $K(u)$ are the Gaussian
$$K(u) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right),$$
the Epanechnikov
$$K(u) = \begin{cases} \frac{3}{4}(1 - u^2), & |u| \le 1 \\ 0, & |u| > 1, \end{cases}$$
and the Biweight or Quartic
$$K(u) = \begin{cases} \frac{15}{16}(1 - u^2)^2, & |u| \le 1 \\ 0, & |u| > 1. \end{cases}$$
In practice, the choice between these three rarely makes a meaningful difference in the estimates.
The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth $h > 0$. Let
$$K_h(u) = \frac{1}{h}K\!\left(\frac{u}{h}\right)$$
be the kernel $K$ rescaled by the bandwidth $h$. The kernel density estimator of $f(x)$ is
$$\hat{f}(x) = \frac{1}{n}\sum_{i=1}^n K_h(X_i - x).$$
This estimator is the average of a set of weights. If a large number of the observations Xi are near x; then
the weights are relatively large and f^(x) is larger. Conversely, if only a few Xi are near x; then the weights
are small and f^(x) is small. The bandwidth h controls the meaning of \near".
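The estimator is a one-line average once the kernel weights are formed. The sketch below evaluates a Gaussian-kernel density estimate on a grid; the bandwidth and the simulated data are assumptions for the example (bandwidth choice is discussed later in the chapter).

```python
# Sketch: kernel density estimator f_hat(x) with a Gaussian kernel, evaluated on a grid.
import numpy as np

def kde(x_grid, data, h):
    u = (data[None, :] - x_grid[:, None]) / h          # (grid, n) matrix of (X_i - x)/h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)       # Gaussian kernel
    return K.mean(axis=1) / h                          # average of K_h(X_i - x)

rng = np.random.default_rng(8)
data = rng.standard_normal(500)
grid = np.linspace(-4, 4, 9)
print(kde(grid, data, h=0.5).round(3))                 # roughly traces the N(0,1) density
```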
Interestingly, $\hat{f}(x)$ is a valid density. That is, $\hat{f}(x) \ge 0$ for all $x$, and
$$\int_{-\infty}^{\infty}\hat{f}(x)\,dx = \int_{-\infty}^{\infty}\frac{1}{n}\sum_{i=1}^nK_h(X_i - x)\,dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}K_h(X_i - x)\,dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}K(u)\,du = 1.$$
We can also calculate the moments of the density $\hat{f}(x)$. The mean is
$$\int_{-\infty}^{\infty}x\hat{f}(x)\,dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}xK_h(X_i - x)\,dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}(X_i + uh)K(u)\,du = \frac{1}{n}\sum_{i=1}^nX_i\int_{-\infty}^{\infty}K(u)\,du + \frac{1}{n}\sum_{i=1}^nh\int_{-\infty}^{\infty}uK(u)\,du = \frac{1}{n}\sum_{i=1}^nX_i,$$
the sample mean of the $X_i$, where the second-to-last equality used the change-of-variables $u = (X_i - x)/h$, which has Jacobian $h$.
The second moment of the estimated density is
$$\int_{-\infty}^{\infty}x^2\hat{f}(x)\,dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}x^2K_h(X_i - x)\,dx = \frac{1}{n}\sum_{i=1}^n\int_{-\infty}^{\infty}(X_i + uh)^2K(u)\,du = \frac{1}{n}\sum_{i=1}^nX_i^2 + \frac{2}{n}\sum_{i=1}^nX_ih\int_{-\infty}^{\infty}uK(u)\,du + \frac{1}{n}\sum_{i=1}^nh^2\int_{-\infty}^{\infty}u^2K(u)\,du = \frac{1}{n}\sum_{i=1}^nX_i^2 + h^2\sigma_K^2,$$
where
$$\sigma_K^2 = \int_{-\infty}^{\infty}u^2K(u)\,du$$
is the variance of the kernel. It follows that the variance of the density $\hat{f}(x)$ is
$$\int_{-\infty}^{\infty}x^2\hat{f}(x)\,dx - \left(\int_{-\infty}^{\infty}x\hat{f}(x)\,dx\right)^2 = \frac{1}{n}\sum_{i=1}^nX_i^2 + h^2\sigma_K^2 - \left(\frac{1}{n}\sum_{i=1}^nX_i\right)^2 = \hat{\sigma}^2 + h^2\sigma_K^2.$$
We can also calculate the expected value of $\hat{f}(x)$ at a fixed $x$:
$$E\hat{f}(x) = EK_h(X - x) = \int_{-\infty}^{\infty}K_h(z - x)f(z)\,dz = \int_{-\infty}^{\infty}K(u)f(x + hu)\,du.$$
The second equality uses the change-of-variables $u = (z - x)/h$. The last expression shows that the expected value is an average of $f(z)$ locally about $x$.
This integral (typically) is not analytically solvable, so we approximate it using a second-order Taylor expansion of $f(x + hu)$ in the argument $hu$ about $hu = 0$, which is valid as $h \to 0$. Thus
$$f(x + hu) \simeq f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2$$
and therefore
$$EK_h(X - x) \simeq \int_{-\infty}^{\infty}K(u)\left[f(x) + f'(x)hu + \frac{1}{2}f''(x)h^2u^2\right]du = f(x)\int_{-\infty}^{\infty}K(u)\,du + f'(x)h\int_{-\infty}^{\infty}K(u)u\,du + \frac{1}{2}f''(x)h^2\int_{-\infty}^{\infty}K(u)u^2\,du = f(x) + \frac{1}{2}f''(x)h^2\sigma_K^2.$$
The bias of $\hat{f}(x)$ is then
$$\text{Bias}(x) = E\hat{f}(x) - f(x) = \frac{1}{n}\sum_{i=1}^nEK_h(X_i - x) - f(x) = \frac{1}{2}f''(x)h^2\sigma_K^2.$$
We see that the bias of $\hat{f}(x)$ at $x$ depends on the second derivative $f''(x)$. The sharper the derivative, the greater the bias. Intuitively, the estimator $\hat{f}(x)$ smooths data local to $X_i = x$, so it is estimating a smoothed version of $f(x)$. The bias results from this smoothing, and is larger the greater the curvature in $f(x)$.
We now examine the variance of $\hat{f}(x)$. Since it is an average of iid random variables, using first-order Taylor approximations and the fact that $n^{-1}$ is of smaller order than $(nh)^{-1}$,
$$\text{var}(x) = \frac{1}{n}\text{var}\left(K_h(X_i - x)\right) = \frac{1}{n}EK_h(X_i - x)^2 - \frac{1}{n}\left(EK_h(X_i - x)\right)^2 \simeq \frac{1}{nh^2}\int_{-\infty}^{\infty}K\!\left(\frac{z - x}{h}\right)^2f(z)\,dz - \frac{1}{n}f(x)^2 = \frac{1}{nh}\int_{-\infty}^{\infty}K(u)^2f(x + hu)\,du - \frac{1}{n}f(x)^2 \simeq \frac{f(x)}{nh}\int_{-\infty}^{\infty}K(u)^2\,du = \frac{f(x)R(K)}{nh},$$
where $R(K) = \int_{-\infty}^{\infty}K(u)^2\,du$ is called the roughness of $K$.
Together, the asymptotic mean-squared error (AMSE) for fixed $x$ is the sum of the approximate squared bias and approximate variance
$$AMSE_h(x) = \frac{1}{4}f''(x)^2h^4\sigma_K^4 + \frac{f(x)R(K)}{nh}.$$
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
$$AMISE_h = \int AMSE_h(x)\,dx = \frac{h^4\sigma_K^4R(f'')}{4} + \frac{R(K)}{nh}, \quad (14.1)$$
where $R(f'') = \int(f''(x))^2\,dx$ is the roughness of $f''$. Notice that the first term (the squared bias) is increasing in $h$ and the second term (the variance) is decreasing in $nh$. Thus for the AMISE to decline with $n$, we need $h \to 0$ but $nh \to \infty$. That is, $h$ must tend to zero, but at a slower rate than $n^{-1}$.
Equation (14.1) is an asymptotic approximation to the MISE. We define the asymptotically optimal bandwidth $h_0$ as the value which minimizes this approximate MISE. That is,
$$h_0 = \underset{h}{\text{argmin}}\;AMISE_h.$$
The first-order condition is
$$\frac{d}{dh}AMISE_h = h^3\sigma_K^4R(f'') - \frac{R(K)}{nh^2} = 0,$$
yielding
$$h_0 = \left(\frac{R(K)}{\sigma_K^4R(f'')}\right)^{1/5}n^{-1/5}. \quad (14.2)$$
This solution takes the form $h_0 = cn^{-1/5}$ where $c$ is a function of $K$ and $f$, but not of $n$. We thus say that the optimal bandwidth is of order $O(n^{-1/5})$. Note that this $h$ declines to zero, but at a very slow rate.
In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (14.2) depends on $R(K)$, $\sigma_K^2$, and $R(f'')$. The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.
Kernel          $\sigma_K^2 = \int u^2K(u)\,du$    $R(K) = \int K(u)^2\,du$
Gaussian        1                                   $1/(2\sqrt{\pi})$
Epanechnikov    $1/5$                               $3/5$
Biweight        $1/7$                               $5/7$
An obvious difficulty is that $R(f'')$ is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (14.2) but replaces $R(f'')$ with $\hat{\sigma}^{-5}R(\phi'')$, where $\phi$ is the $N(0,1)$ density and $\hat{\sigma}^2$ is an estimate of $\sigma^2 = \text{var}(X)$. This choice for $h$ gives an optimal rule when $f(x)$ is normal, and gives a nearly optimal rule when $f(x)$ is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb $h$ can be quite inefficient. We can calculate that $R(\phi'') = 3/(8\sqrt{\pi})$. Together with the above table, we find the reference rules for the three kernel functions introduced earlier:
Gaussian Kernel: $h_{rule} = 1.06\,\hat{\sigma}\,n^{-1/5}$
Epanechnikov Kernel: $h_{rule} = 2.34\,\hat{\sigma}\,n^{-1/5}$
Biweight (Quartic) Kernel: $h_{rule} = 2.78\,\hat{\sigma}\,n^{-1/5}$
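These rules are a one-line computation. The sketch below evaluates them from the sample standard deviation; the constants 1.06, 2.34 and 2.78 are those listed above, and the simulated data are an assumption for the example.

```python
# Sketch: Silverman-type rule-of-thumb bandwidths for the three kernels above.
import numpy as np

def rule_of_thumb(data, kernel="gaussian"):
    c = {"gaussian": 1.06, "epanechnikov": 2.34, "biweight": 2.78}[kernel]
    return c * data.std(ddof=1) * len(data) ** (-1 / 5)

rng = np.random.default_rng(9)
data = rng.standard_normal(500)
print({k: round(rule_of_thumb(data, k), 3)
       for k in ("gaussian", "epanechnikov", "biweight")})
```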
Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate $\hat{f}(x)$. There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate $R(f'')$ in a first step, and then plug this estimate into the formula (14.2). This is more treacherous than may first appear, as the optimal $h$ for estimation of the roughness $R(f'')$ is quite different than the optimal $h$ for estimation of $f(x)$. However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of $h$ is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data have some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths, leading to dramatically undersmoothed estimates. Fortunately there are remedies; the best known is smoothed cross-validation, which is a close cousin of the bootstrap.
Appendix A
Matrix Algebra
A.1 Notation
A scalar a is a single number.
A vector $a$ is a $k \times 1$ list of numbers, typically arranged in a column. We write this as
$$a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix}.$$
Equivalently, a vector $a$ is an element of Euclidean $k$-space, written as $a \in \mathbb{R}^k$. If $k = 1$ then $a$ is a scalar.
A matrix $A$ is a $k \times r$ rectangular array of numbers, written as
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1r} \\ a_{21} & a_{22} & \cdots & a_{2r} \\ \vdots & \vdots & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kr} \end{bmatrix}.$$
By convention $a_{ij}$ refers to the element in the $i$'th row and $j$'th column of $A$. If $r = 1$ then $A$ is a column vector. If $k = 1$ then $A$ is a row vector. If $r = k = 1$, then $A$ is a scalar. Matrices are generally represented by capital letters such as $A$, or sometimes by the symbol $(a_{ij})$.
A standard convention (which we will follow in this text whenever possible) is to denote scalars by lower-case italics ($a$), vectors by lower-case bold italics ($\boldsymbol{a}$), and matrices by upper-case bold italics ($\boldsymbol{A}$).
A matrix can be written as a set of column vectors or as a set of row vectors, with the columns $a_i$ and the rows $\alpha_j$ given by
$$a_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ki} \end{bmatrix}, \qquad \alpha_j = \begin{bmatrix} a_{j1} & a_{j2} & \cdots & a_{jr} \end{bmatrix}.$$
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
2 3
1
6 2 7
6 7
A= a1 a2 ar =6 . 7
.
4 . 5
k
where 2 3
a1i
6 a2i 7
6 7
ai = 6 .. 7
4 . 5
aki
are column vectors and
j = aj1 aj2 ajr
are row vectors.
The transpose of a matrix, denoted $A'$, is obtained by flipping the matrix on its diagonal. Thus
$$A' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{k1} \\ a_{12} & a_{22} & \cdots & a_{k2} \\ \vdots & \vdots & & \vdots \\ a_{1r} & a_{2r} & \cdots & a_{kr} \end{bmatrix}.$$
Alternatively, letting $B = A'$, then $b_{ij} = a_{ji}$. Note that if $A$ is $k \times r$, then $A'$ is $r \times k$. If $a$ is a $k \times 1$ vector, then $a'$ is a $1 \times k$ row vector.
A matrix is square if $k = r$. A square matrix is symmetric if $A = A'$, which requires $a_{ij} = a_{ji}$. A square matrix is diagonal if the only non-zero elements appear on the diagonal, so that $a_{ij} = 0$ if $i \neq j$. A square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.
An important diagonal matrix is the identity matrix, which has ones on the diagonal. The $k \times k$ identity matrix is denoted as
$$I_k = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
A partitioned matrix takes the form
2 3
A11 A12 A1r
6 A21 A22 A2r 7
6 7
A=6 .. .. .. 7
4 . . . 5
Ak1 Ak2 Akr
A + B = (aij + bij ) :
It follows that matrix addition follows the communtative and associative laws:
A+B = B+A
A + (B + C) = (A + B) + C:
Ac = cA = (aij c) :
Matrix multiplication is not communicative: in general AB 6= BA: However, it is associative and dis-
tributive:
A (BC) = (AB) C
A (B + C) = AB + AC
An alternative way to write the matrix product is to use matrix partitions. For example,
A11 A12 B 11 B 12
AB =
A21 A22 B 21 B 22
As another example,
2 3
B1
6 B2 7
6 7
AB = A1 A2 Ar 6 .. 7
4 . 5
Br
= A1 B 1 + A2 B 2 + + Ar B r
Xr
= Aj B j
j=1
A.4 Trace
The trace of a $k \times k$ square matrix $A$ is the sum of its diagonal elements
$$\text{tr}(A) = \sum_{i=1}^ka_{ii}.$$
Some straightforward properties for square matrices $A$ and $B$ and real $c$ are
$$\text{tr}(cA) = c\,\text{tr}(A), \qquad \text{tr}(A') = \text{tr}(A), \qquad \text{tr}(A + B) = \text{tr}(A) + \text{tr}(B), \qquad \text{tr}(I_k) = k.$$
A.5 Rank and Inverse
The rank of the $k \times r$ matrix ($r \le k$)
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_r \end{bmatrix}$$
is the number of linearly independent columns $a_j$, and is written as $\text{rank}(A)$. We say that $A$ has full rank if $\text{rank}(A) = r$.
A square $k \times k$ matrix $A$ is said to be nonsingular if it has full rank, i.e. $\text{rank}(A) = k$. This means that there is no $k \times 1$ vector $c \neq 0$ such that $Ac = 0$.
If a square $k \times k$ matrix $A$ is nonsingular then there exists a unique $k \times k$ matrix $A^{-1}$, called the inverse of $A$, which satisfies
$$AA^{-1} = A^{-1}A = I_k.$$
For non-singular $A$ and $C$, some important properties include
$$AA^{-1} = A^{-1}A = I_k$$
$$(A^{-1})' = (A')^{-1}$$
$$(AC)^{-1} = C^{-1}A^{-1}$$
$$(A + C)^{-1} = A^{-1}\left(A^{-1} + C^{-1}\right)^{-1}C^{-1}$$
$$A^{-1} - (A + C)^{-1} = A^{-1}\left(A^{-1} + C^{-1}\right)^{-1}A^{-1}.$$
The following fact about inverting partitioned matrices is quite useful. If $A - BD^{-1}C$ and $D - CA^{-1}B$ are non-singular, then
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - BD^{-1}C\right)^{-1} & -\left(A - BD^{-1}C\right)^{-1}BD^{-1} \\ -\left(D - CA^{-1}B\right)^{-1}CA^{-1} & \left(D - CA^{-1}B\right)^{-1} \end{bmatrix}. \quad (A.3)$$
Even if a matrix $A$ does not possess an inverse, we can still define the generalized inverse $A^-$ as a matrix which satisfies
$$AA^-A = A. \quad (A.4)$$
The matrix $A^-$ is not necessarily unique. The Moore-Penrose generalized inverse $A^-$ satisfies (A.4) plus the following three conditions:
$$A^-AA^- = A^-$$
$$AA^- \text{ is symmetric}$$
$$A^-A \text{ is symmetric.}$$
For any matrix $A$, the Moore-Penrose generalized inverse $A^-$ exists and is unique.
A.6 Determinant
The determinant is a measure of the volume of a square matrix.
While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let $A = (a_{ij})$ be a general $k \times k$ matrix. Let $\pi = (j_1, \ldots, j_k)$ denote a permutation of $(1, \ldots, k)$. There are $k!$ such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order $(1, \ldots, k)$), and let $\varepsilon_\pi = +1$ if this count is even and $\varepsilon_\pi = -1$ if the count is odd. Then the determinant of $A$ is defined as
$$\det A = \sum_\pi\varepsilon_\pi\,a_{1j_1}a_{2j_2}\cdots a_{kj_k}.$$
For example, if $A$ is $2 \times 2$, then the two permutations of $(1, 2)$ are $(1, 2)$ and $(2, 1)$, for which $\varepsilon_{(1,2)} = 1$ and $\varepsilon_{(2,1)} = -1$, so $\det A = a_{11}a_{22} - a_{12}a_{21}$.
Some useful properties of the determinant are:
$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = (\det D)\det\left(A - BD^{-1}C\right) \text{ if } \det D \neq 0$$
$\det A \neq 0$ if and only if $A$ is nonsingular.
If $A$ is triangular (upper or lower), then $\det A = \prod_{i=1}^ka_{ii}$.
If $A$ is orthogonal, then $\det A = \pm 1$.
A.7 Eigenvalues
The characteristic equation of a square matrix $A$ is
$$\det(A - \lambda I_k) = 0.$$
The left side is a polynomial of degree $k$ in $\lambda$, so it has exactly $k$ roots, which are not necessarily distinct and may be real or complex. They are called the latent roots or characteristic roots or eigenvalues of $A$. If $\lambda_i$ is an eigenvalue of $A$, then $A - \lambda_iI_k$ is singular, so there exists a non-zero vector $h_i$ such that
$$(A - \lambda_iI_k)h_i = 0.$$
A.8 Positive De niteness
We say that a symmetric square matrix $A$ is positive semi-definite if for all $c \neq 0$, $c'Ac \ge 0$. This is written as $A \ge 0$. We say that $A$ is positive definite if for all $c \neq 0$, $c'Ac > 0$. This is written as $A > 0$.
Some properties include:
A square matrix $A$ is idempotent if $AA = A$. If $A$ is idempotent and symmetric then all its characteristic roots equal either zero or one, and $A$ is thus positive semi-definite. To see this, note that we can write $A = H\Lambda H'$ where $H$ is orthogonal and $\Lambda$ contains the (real) characteristic roots. Then
$$A = AA = H\Lambda H'H\Lambda H' = H\Lambda^2H'.$$
By the uniqueness of the characteristic roots, we deduce that $\Lambda^2 = \Lambda$ and $\lambda_i^2 = \lambda_i$ for $i = 1, \ldots, k$. Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent $A$ takes the form
$$A = H\begin{bmatrix} I_{n-k} & 0 \\ 0 & 0 \end{bmatrix}H'. \quad (A.5)$$
Let $x$ be $k \times 1$ and $g(x) : \mathbb{R}^k \to \mathbb{R}$. The vector derivatives are
$$\frac{\partial}{\partial x}g(x) = \begin{pmatrix} \frac{\partial}{\partial x_1}g(x) \\ \vdots \\ \frac{\partial}{\partial x_k}g(x) \end{pmatrix} \qquad \text{and} \qquad \frac{\partial}{\partial x'}g(x) = \begin{pmatrix} \frac{\partial}{\partial x_1}g(x) & \cdots & \frac{\partial}{\partial x_k}g(x) \end{pmatrix}.$$
Some properties are now summarized:
$$\frac{\partial}{\partial x}(a'x) = \frac{\partial}{\partial x}(x'a) = a$$
$$\frac{\partial}{\partial x'}(Ax) = A$$
$$\frac{\partial}{\partial x}(x'Ax) = (A + A')x$$
$$\frac{\partial^2}{\partial x\partial x'}(x'Ax) = A + A'.$$
Let $A = (a_{ij})$ be an $m \times n$ matrix and let $B$ be any matrix. The Kronecker product of $A$ and $B$, denoted $A \otimes B$, is the matrix
$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix}.$$
Some important properties are now summarized. These results hold for matrices for which all matrix multiplications are conformable.
$$(A + B) \otimes C = A \otimes C + B \otimes C$$
$$(A \otimes B)(C \otimes D) = AC \otimes BD$$
$$A \otimes (B \otimes C) = (A \otimes B) \otimes C$$
$$(A \otimes B)' = A' \otimes B'$$
$$\text{tr}(A \otimes B) = \text{tr}(A)\,\text{tr}(B)$$
If $A$ is $m \times m$ and $B$ is $n \times n$, $\det(A \otimes B) = (\det(A))^n(\det(B))^m$
$$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$$
If $A > 0$ and $B > 0$ then $A \otimes B > 0$
$$\text{vec}(ABC) = (C' \otimes A)\,\text{vec}(B)$$
$$\text{tr}(ABCD) = \text{vec}(D')'(C' \otimes A)\,\text{vec}(B)$$
and in particular
$$\|aa'\| = \|a\|^2. \quad (A.6)$$
Some useful inequalities are now given.
Schwarz Inequality: For any $m \times 1$ vectors $a$ and $b$,
$$|a'b| \le \|a\|\,\|b\|. \quad (A.7)$$
Schwarz Matrix Inequality: For any $m \times n$ matrices $A$ and $B$,
$$\|A'B\| \le \|A\|\,\|B\|. \quad (A.8)$$
Triangle Inequality: For any $m \times n$ matrices $A$ and $B$,
$$\|A + B\| \le \|A\| + \|B\|. \quad (A.9)$$
Proof of Schwarz Inequality: First, suppose that kbk = 0: Then b = 0 and both ja0 bj = 0 and
1 0
kak kbk = 0 so the inequality is true. Second, suppose that kbk > 0 and de ne c = a b b0 b b a: Since
0
c is a vector, c c 0: Thus
2
0 c0 c = a0 a (a0 b) = b0 b :
Rearranging, this implies that
2
(a0 b) (a0 a) b0 b :
Taking the square root of each side yields the result.
Proof of Schwarz Matrix Inequality: Partition A = [a1 ; :::; an ] and B = [b1 ; :::; bn ]. Then by
partitioned matrix multiplication, the de nition of the matrix Euclidean norm and the Schwarz inequality
a01 b1 a01 b2
A0 B = a02 b1 a02 b2
.. .. ..
. . .
ka1 k kb1 k ka1 k kb2 k
ka2 k kb1 k ka2 k kb2 k
.. .. ..
. . .
0 11=2
n X
X n
2 2
= @ kai k kbj k A
i=1 j=1
n
!1=2 n
!1=2
X 2
X 2
= kai k kbi k
i=1 i=1
0 11=2 0 11=2
n X
X m n X
X m
2A
= @ a2ji A @ kbji k
i=1 j=1 i=1 j=1
= kAk kBk
Proof of Triangle Inequality: Let a = vec (A) and b = vec (B) . Then by the de nition of the matrix
norm and the Schwarz Inequality
2 2
kA + Bk = ka + bk
= a0 a + 2a0 b + b0 b
a0 a + 2 ja0 bj + b0 b
2 2
kak + 2 kak kbk + kbk
2
= (kak + kbk)
2
= (kAk + kBk)
Appendix B
Probability
B.1 Foundations
The set S of all possible outcomes of an experiment is called the sample space for the experiment. Take
the simple example of tossing a coin. There are two outcomes, heads and tails, so we can write S = fH; T g:
If two coins are tossed in sequence, we can write the four outcomes as S = fHH; HT; T H; T T g:
An event A is any collection of possible outcomes of an experiment. An event is a subset of S; including
S itself and the null set ;: Continuing the two coin example, one event is A = fHH; HT g; the event that the
rst coin is heads. We say that A and B are disjoint or mutually exclusive if A \ B = ;: For example,
the sets fHH; HT g and fT Hg are disjoint. Furthermore, if the sets A1 ; A2 ; ::: are pairwise disjoint and
[1i=1 Ai = S; then the collection A1 ; A2 ; ::: is called a partition of S:
The following are elementary set operations:
Union: $A \cup B = \{x : x \in A \text{ or } x \in B\}$.
Intersection: $A \cap B = \{x : x \in A \text{ and } x \in B\}$.
Complement: $A^c = \{x : x \notin A\}$.
The following are useful properties of set operations.
Commutativity: $A \cup B = B \cup A$; $A \cap B = B \cap A$.
Associativity: $A \cup (B \cup C) = (A \cup B) \cup C$; $A \cap (B \cap C) = (A \cap B) \cap C$.
Distributive Laws: $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$; $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$.
DeMorgan's Laws: $(A \cup B)^c = A^c \cap B^c$; $(A \cap B)^c = A^c \cup B^c$.
A probability function assigns probabilities (numbers between 0 and 1) to events A in S: This is
straightforward when S is countable; when S is uncountable we must be somewhat more careful:A set B
is called a sigma algebra (or Borel eld) if ; 2 B , A 2 B implies Ac 2 B, and A1 ; A2 ; ::: 2 B implies
[1i=1 Ai 2 B. A simple example is f;; Sg which is known as the trivial sigma algebra. For any sample space
S; let B be the smallest sigma algebra which contains all of the open sets in S: When S is countable, B is
simply the collection of all subsets of S; including ; and S: When S is the real line, then B is the collection
of all open and closed intervals. We call B the sigma algebra associated with S: We only de ne probabilities
for events contained in B.
We now can give the axiomatic definition of probability. Given $S$ and $\mathcal{B}$, a probability function $P$ satisfies $P(S) = 1$, $P(A) \ge 0$ for all $A \in \mathcal{B}$, and if $A_1, A_2, \ldots \in \mathcal{B}$ are pairwise disjoint, then $P(\cup_{i=1}^{\infty}A_i) = \sum_{i=1}^{\infty}P(A_i)$.
Some important properties of the probability function include the following:
$P(\emptyset) = 0$
$P(A) \le 1$
$P(A^c) = 1 - P(A)$
$P(B \cap A^c) = P(B) - P(A \cap B)$
$P(A \cup B) = P(A) + P(B) - P(A \cap B)$
If $A \subset B$ then $P(A) \le P(B)$
Bonferroni's Inequality: $P(A \cap B) \ge P(A) + P(B) - 1$
Boole's Inequality: $P(A \cup B) \le P(A) + P(B)$
For some elementary probability models, it is useful to have simple rules to count the number of objects in
a set. These counting rules are facilitated by using the binomial coefficients, which are defined for nonnegative integers $n$ and $r$, $n \ge r$, as
$$\binom{n}{r} = \frac{n!}{r!(n - r)!}.$$
When counting the number of objects in a set, there are two important distinctions. Counting may be
with replacement or without replacement. Counting may be ordered or unordered. For example,
consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is without replacement
if you are not allowed to select the same number twice, and is with replacement if this is allowed. Counting
is ordered or not depending on whether the sequential order of the numbers is relevant to winning the
lottery. Depending on these two distinctions, we have four expressions for the number of objects (possible
arrangements) of size r from n objects.
Ordered, without replacement: $\dfrac{n!}{(n - r)!}$; ordered, with replacement: $n^r$.
Unordered, without replacement: $\dbinom{n}{r}$; unordered, with replacement: $\dbinom{n + r - 1}{r}$.
In the lottery example, if counting is unordered and without replacement, the number of potential combinations is $\binom{49}{6} = 13{,}983{,}816$.
If $P(B) > 0$, the conditional probability of the event $A$ given the event $B$ is
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
For any $B$, the conditional probability function is a valid probability function where $S$ has been replaced by $B$. Rearranging the definition, we can write
$$P(A \cap B) = P(A \mid B)P(B),$$
which is often quite useful. We can say that the occurrence of $B$ has no information about the likelihood of event $A$ when $P(A \mid B) = P(A)$, in which case we find
$$P(A \cap B) = P(A)P(B). \quad (B.1)$$
We say that the events $A$ and $B$ are statistically independent when (B.1) holds. Furthermore, we say that the collection of events $A_1, \ldots, A_k$ are mutually independent when for any subset $\{A_i : i \in I\}$,
$$P\left(\bigcap_{i \in I}A_i\right) = \prod_{i \in I}P(A_i).$$
Theorem 3 (Bayes' Rule). For any set $B$ and any partition $A_1, A_2, \ldots$ of the sample space, then for each $i = 1, 2, \ldots$,
$$P(A_i \mid B) = \frac{P(B \mid A_i)P(A_i)}{\sum_{j=1}^\infty P(B \mid A_j)P(A_j)}.$$
$$F(x) = P(X \le x). \quad (B.2)$$
Sometimes we write this as $F_X(x)$ to denote that it is the CDF of $X$. A function $F(x)$ is a CDF if and only if the following three properties hold:
1. $\lim_{x \to -\infty}F(x) = 0$ and $\lim_{x \to \infty}F(x) = 1$
2. $F(x)$ is nondecreasing in $x$
3. $F(x)$ is right-continuous
We say that the random variable $X$ is discrete if $F(x)$ is a step function. In this case, the range of $X$ consists of a countable set of real numbers $\tau_1, \ldots, \tau_r$. The probability function for $X$ takes the form
$$P(X = \tau_j) = \pi_j, \qquad j = 1, \ldots, r, \quad (B.3)$$
where $0 \le \pi_j \le 1$ and $\sum_{j=1}^r\pi_j = 1$.
We say that the random variable $X$ is continuous if $F(x)$ is continuous in $x$. In this case $P(X = \tau) = 0$ for all $\tau \in \mathbb{R}$, so the representation (B.3) is unavailable. Instead, we represent the relative probabilities by the probability density function (PDF)
$$f(x) = \frac{d}{dx}F(x)$$
so that
$$F(x) = \int_{-\infty}^xf(u)\,du$$
and
$$P(a \le X \le b) = \int_a^bf(u)\,du.$$
These expressions only make sense if $F(x)$ is differentiable. While there are examples of continuous random variables which do not possess a PDF, these cases are unusual and are typically ignored.
A function $f(x)$ is a PDF if and only if $f(x) \ge 0$ for all $x \in \mathbb{R}$ and $\int_{-\infty}^\infty f(x)\,dx = 1$.
B.3 Expectation
For any measurable real function $g$, we define the mean or expectation $Eg(X)$ as follows. If $X$ is discrete,
$$Eg(X) = \sum_{j=1}^rg(\tau_j)\pi_j,$$
and if $X$ is continuous,
$$Eg(X) = \int_{-\infty}^\infty g(x)f(x)\,dx.$$
The moment generating function (MGF) of $X$ is
$$M(\lambda) = E\exp(\lambda X).$$
The MGF does not necessarily exist. However, when it does and $E|X|^m < \infty$, then
$$\frac{d^m}{d\lambda^m}M(\lambda)\Big|_{\lambda=0} = E(X^m).$$
The characteristic function (CF) of $X$ is
$$C(\lambda) = E\exp(i\lambda X),$$
where $i = \sqrt{-1}$ is the imaginary unit. The CF always exists, and when $E|X|^m < \infty$,
$$\frac{d^m}{d\lambda^m}C(\lambda)\Big|_{\lambda=0} = i^mE(X^m).$$
The $L^p$ norm, $p \ge 1$, of the random variable $X$ is
$$\|X\|_p = \left(E|X|^p\right)^{1/p}.$$
We now list some important discrete distributions.
Bernoulli
$$P(X = x) = p^x(1 - p)^{1-x}, \qquad x = 0, 1; \quad 0 \le p \le 1$$
$$EX = p, \qquad \text{var}(X) = p(1 - p)$$
Binomial
$$P(X = x) = \binom{n}{x}p^x(1 - p)^{n-x}, \qquad x = 0, 1, \ldots, n; \quad 0 \le p \le 1$$
$$EX = np, \qquad \text{var}(X) = np(1 - p)$$
Geometric
$$P(X = x) = p(1 - p)^{x-1}, \qquad x = 1, 2, \ldots; \quad 0 \le p \le 1$$
$$EX = \frac{1}{p}, \qquad \text{var}(X) = \frac{1 - p}{p^2}$$
Multinomial
$$P(X_1 = x_1, X_2 = x_2, \ldots, X_m = x_m) = \frac{n!}{x_1!x_2!\cdots x_m!}p_1^{x_1}p_2^{x_2}\cdots p_m^{x_m},$$
$$x_1 + \cdots + x_m = n, \qquad p_1 + \cdots + p_m = 1$$
$$EX_i = np_i, \qquad \text{var}(X_i) = np_i(1 - p_i), \qquad \text{cov}(X_i, X_j) = -np_ip_j$$
Negative Binomial
$$P(X = x) = \frac{\Gamma(r + x)}{x!\,\Gamma(r)}p^r(1 - p)^x, \qquad x = 0, 1, 2, \ldots; \quad 0 \le p \le 1$$
$$EX = \frac{r(1 - p)}{p}, \qquad \text{var}(X) = \frac{r(1 - p)}{p^2}$$
Poisson
$$P(X = x) = \frac{\exp(-\lambda)\lambda^x}{x!}, \qquad x = 0, 1, 2, \ldots; \quad \lambda > 0$$
$$EX = \lambda, \qquad \text{var}(X) = \lambda$$
We now list some important continuous distributions.
Beta
$$f(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1 - x)^{\beta-1}, \qquad 0 \le x \le 1; \quad \alpha > 0,\ \beta > 0$$
$$EX = \frac{\alpha}{\alpha + \beta}, \qquad \text{var}(X) = \frac{\alpha\beta}{(\alpha + \beta + 1)(\alpha + \beta)^2}$$
Cauchy
$$f(x) = \frac{1}{\pi(1 + x^2)}, \qquad -\infty < x < \infty$$
$$EX = \infty, \qquad \text{var}(X) = \infty$$
Exponential
$$f(x) = \frac{1}{\theta}\exp\left(-\frac{x}{\theta}\right), \qquad 0 \le x < \infty; \quad \theta > 0$$
$$EX = \theta, \qquad \text{var}(X) = \theta^2$$
Logistic
$$f(x) = \frac{\exp(-x)}{(1 + \exp(-x))^2}, \qquad -\infty < x < \infty$$
$$EX = 0, \qquad \text{var}(X) = \frac{\pi^2}{3}$$
Lognormal
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}\exp\left(-\frac{(\log x - \mu)^2}{2\sigma^2}\right), \qquad 0 \le x < \infty; \quad \sigma > 0$$
$$EX = \exp\left(\mu + \sigma^2/2\right), \qquad \text{var}(X) = \exp\left(2\mu + 2\sigma^2\right) - \exp\left(2\mu + \sigma^2\right)$$
Pareto
$$f(x) = \frac{\beta\alpha^\beta}{x^{\beta+1}}, \qquad \alpha \le x < \infty; \quad \alpha > 0,\ \beta > 0$$
$$EX = \frac{\beta\alpha}{\beta - 1},\ \beta > 1, \qquad \text{var}(X) = \frac{\beta\alpha^2}{(\beta - 1)^2(\beta - 2)},\ \beta > 2$$
Uniform
$$f(x) = \frac{1}{b - a}, \qquad a \le x \le b$$
$$EX = \frac{a + b}{2}, \qquad \text{var}(X) = \frac{(b - a)^2}{12}$$
Weibull
$$f(x) = \frac{\gamma}{\theta}x^{\gamma-1}\exp\left(-\frac{x^\gamma}{\theta}\right), \qquad 0 \le x < \infty; \quad \gamma > 0,\ \theta > 0$$
$$EX = \theta^{1/\gamma}\Gamma\left(1 + \frac{1}{\gamma}\right), \qquad \text{var}(X) = \theta^{2/\gamma}\left(\Gamma\left(1 + \frac{2}{\gamma}\right) - \Gamma\left(1 + \frac{1}{\gamma}\right)^2\right)$$
For a pair of random variables $(X, Y)$ with joint distribution $F(x, y)$ and joint density $f(x, y)$, the marginal distribution of $X$ is
$$F_X(x) = P(X \le x) = \lim_{y \to \infty}F(x, y) = \int_{-\infty}^x\int_{-\infty}^\infty f(u, y)\,dy\,du.$$
The random variables $X$ and $Y$ are defined to be independent if $f(x, y) = f_X(x)f_Y(y)$. Furthermore, $X$ and $Y$ are independent if and only if there exist functions $g(x)$ and $h(y)$ such that $f(x, y) = g(x)h(y)$.
If $X$ and $Y$ are independent, then
$$E(g(X)h(Y)) = \int\int g(x)h(y)f(y, x)\,dy\,dx = \int\int g(x)h(y)f_Y(y)f_X(x)\,dy\,dx = \int g(x)f_X(x)\,dx\int h(y)f_Y(y)\,dy = Eg(X)\,Eh(Y). \quad (B.5)$$
In particular, if $X$ and $Y$ are independent then
$$E(XY) = (EX)(EY).$$
Another implication of (B.5) is that if $X$ and $Y$ are independent and $Z = X + Y$, then
$$M_Z(\lambda) = E\exp(\lambda(X + Y)) = E\left(\exp(\lambda X)\exp(\lambda Y)\right) = E\exp(\lambda X)\,E\exp(\lambda Y) = M_X(\lambda)M_Y(\lambda). \quad (B.6)$$
The Cauchy-Schwarz Inequality implies that $|\rho_{XY}| \le 1$. The correlation is a measure of linear dependence, free of units of measurement.
If $X$ and $Y$ are independent, then $\sigma_{XY} = 0$ and $\rho_{XY} = 0$. The reverse, however, is not true. For example, if $EX = 0$ and $EX^3 = 0$, then $\text{cov}(X, X^2) = 0$.
A useful fact is that
$$\text{var}(X + Y) = \text{var}(X) + \text{var}(Y) + 2\,\text{cov}(X, Y).$$
An implication is that if $X$ and $Y$ are independent, then
$$\text{var}(X + Y) = \text{var}(X) + \text{var}(Y).$$
For a random vector $X \in \mathbb{R}^k$, the distribution and density functions are
$$F(x) = P(X \le x), \qquad f(x) = \frac{\partial^k}{\partial x_1\cdots\partial x_k}F(x),$$
where the symbol $dx$ denotes $dx_1\cdots dx_k$. In particular, we have the $k \times 1$ multivariate mean
$$\mu = EX.$$
if $f_X(x) > 0$. One way to derive this expression from the definition of conditional probability is
$$f_{Y\mid X}(y \mid x) = \frac{\partial}{\partial y}\lim_{\varepsilon\to 0}P(Y \le y \mid x \le X \le x + \varepsilon) = \frac{\partial}{\partial y}\lim_{\varepsilon\to 0}\frac{P(\{Y \le y\}\cap\{x \le X \le x + \varepsilon\})}{P(x \le X \le x + \varepsilon)} = \frac{\partial}{\partial y}\lim_{\varepsilon\to 0}\frac{F(x + \varepsilon, y) - F(x, y)}{F_X(x + \varepsilon) - F_X(x)} = \frac{\partial}{\partial y}\lim_{\varepsilon\to 0}\frac{\frac{\partial}{\partial x}F(x + \varepsilon, y)}{f_X(x + \varepsilon)} = \frac{\frac{\partial^2}{\partial x\partial y}F(x, y)}{f_X(x)} = \frac{f(x, y)}{f_X(x)}.$$
The conditional mean $m(x)$ is a function, meaning that when $X$ equals $x$, then the expected value of $Y$ is $m(x)$.
Similarly, we define the conditional variance of $Y$ given $X = x$ as
$$\sigma^2(x) = \text{var}(Y \mid X = x) = E\left((Y - m(x))^2 \mid X = x\right) = E\left(Y^2 \mid X = x\right) - m(x)^2.$$
Evaluated at $x = X$, the conditional mean $m(X)$ and conditional variance $\sigma^2(X)$ are random variables, functions of $X$. We write this as $E(Y \mid X) = m(X)$ and $\text{var}(Y \mid X) = \sigma^2(X)$. For example, if $E(Y \mid X = x) = \alpha + \beta'x$, then $E(Y \mid X) = \alpha + \beta'X$, a transformation of $X$.
The following are important facts about conditional expectations.
Simple Law of Iterated Expectations:
$$E(E(Y \mid X)) = E(Y). \quad (B.7)$$
Proof:
$$E(E(Y \mid X)) = E(m(X)) = \int_{-\infty}^\infty m(x)f_X(x)\,dx = \int_{-\infty}^\infty\int_{-\infty}^\infty yf_{Y\mid X}(y \mid x)f_X(x)\,dy\,dx = \int_{-\infty}^\infty\int_{-\infty}^\infty yf(y, x)\,dy\,dx = E(Y).$$
Law of Iterated Expectations:
$$E(E(Y \mid X, Z) \mid X) = E(Y \mid X). \quad (B.8)$$
Proof: Let
$$h(x) = E(g(X)Y \mid X = x) = \int_{-\infty}^\infty g(x)yf_{Y\mid X}(y \mid x)\,dy = g(x)\int_{-\infty}^\infty yf_{Y\mid X}(y \mid x)\,dy = g(x)m(x)$$
where $m(x) = E(Y \mid X = x)$. Thus $h(X) = g(X)m(X)$, which is the same as $E(g(X)Y \mid X) = g(X)E(Y \mid X)$.
B.7 Transformations
Suppose that $X \in \mathbb{R}^k$ has continuous distribution function $F_X(x)$ and density $f_X(x)$. Let $Y = g(X)$ where $g(x) : \mathbb{R}^k \to \mathbb{R}^k$ is one-to-one, differentiable, and invertible. Let $h(y)$ denote the inverse of $g(x)$. The Jacobian is
$$J(y) = \left|\det\left(\frac{\partial}{\partial y'}h(y)\right)\right|.$$
Consider the univariate case $k = 1$. If $g(x)$ is an increasing function, then $g(X) \le y$ if and only if $X \le h(y)$, so the distribution function of $Y$ is
$$F_Y(y) = P(g(X) \le y) = P(X \le h(y)) = F_X(h(y)),$$
so the density of $Y$ is
$$f_Y(y) = \frac{d}{dy}F_Y(y) = f_X(h(y))\frac{d}{dy}h(y).$$
If $g(x)$ is a decreasing function, then $g(X) \le y$ if and only if $X \ge h(y)$, so
$$F_Y(y) = P(g(X) \le y) = 1 - P(X < h(y)) = 1 - F_X(h(y)),$$
and the density is $f_Y(y) = -f_X(h(y))\frac{d}{dy}h(y)$. The two cases can be written jointly as
$$f_Y(y) = f_X(h(y))J(y). \quad (B.10)$$
This is known as the change-of-variables formula. This same formula (B.10) holds for $k > 1$, but its justification requires deeper results from analysis.
As one example, take the case $X \sim U[0, 1]$ and $Y = -\log(X)$. Here, $g(x) = -\log(x)$ and $h(y) = \exp(-y)$, so the Jacobian is $J(y) = \exp(-y)$. As the range of $X$ is $[0, 1]$, that for $Y$ is $[0, \infty)$. Since $f_X(x) = 1$ for $0 \le x \le 1$, (B.10) shows that
$$f_Y(y) = \exp(-y), \qquad 0 \le y < \infty,$$
an exponential density.
The standard normal density is
$$\phi(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right), \qquad -\infty < x < \infty.$$
This density has all moments finite. Since it is symmetric about zero, all odd moments are zero. By iterated integration by parts, we can also show that $EX^2 = 1$ and $EX^4 = 3$. It is conventional to write $X \sim N(0, 1)$, and to denote the standard normal density function by $\phi(x)$ and its distribution function by $\Phi(x)$. The latter has no closed-form solution.
If $Z$ is standard normal and $X = \mu + \sigma Z$, then using the change-of-variables formula, $X$ has density
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty,$$
which is the univariate normal density. The mean and variance of the distribution are $\mu$ and $\sigma^2$, and it is conventional to write $X \sim N(\mu, \sigma^2)$.
For $x \in \mathbb{R}^k$, the multivariate normal density is
$$f(x) = \frac{1}{(2\pi)^{k/2}\det(\Sigma)^{1/2}}\exp\left(-\frac{(x - \mu)'\Sigma^{-1}(x - \mu)}{2}\right), \qquad x \in \mathbb{R}^k.$$
The mean and covariance matrix of the distribution are $\mu$ and $\Sigma$, and it is conventional to write $X \sim N(\mu, \Sigma)$.
It is useful to observe that the MGF and CF of the multivariate normal are $\exp\left(\lambda'\mu + \lambda'\Sigma\lambda/2\right)$ and $\exp\left(i\lambda'\mu - \lambda'\Sigma\lambda/2\right)$, respectively.
If $X \in \mathbb{R}^k$ is multivariate normal and the elements of $X$ are mutually uncorrelated, then $\Sigma = \text{diag}\{\sigma_j^2\}$ is a diagonal matrix. In this case the density function can be written as
$$f(x) = \frac{1}{(2\pi)^{k/2}\sigma_1\cdots\sigma_k}\exp\left(-\frac{(x_1 - \mu_1)^2/\sigma_1^2 + \cdots + (x_k - \mu_k)^2/\sigma_k^2}{2}\right) = \prod_{j=1}^k\frac{1}{(2\pi)^{1/2}\sigma_j}\exp\left(-\frac{(x_j - \mu_j)^2}{2\sigma_j^2}\right),$$
which is the product of marginal univariate normal densities. This shows that if $X$ is multivariate normal with uncorrelated elements, then they are mutually independent.
Another useful fact is that if X N ( ; ) and Y = a + BX with B an invertible matrix, then by the
change-of-variables formula, the density of Y is
0 1
1 (y Y ) (y Y )
f (y) = k=2 1=2
exp Y
; y 2 Rk :
(2 ) det ( Y)
2
1=2 1=2
where Y = a + B and Y = B B 0 ; where we used the fact that det B B 0 = det ( ) det (B) :
This shows that linear transformations of normals are also normal.
Theorem B.8.3 Let $Z \sim N(0, 1)$ and $Q \sim \chi_r^2$ be independent. Set
$$t_r = \frac{Z}{\sqrt{Q/r}}.$$
The density of $t_r$ is
$$f(x) = \frac{\Gamma\left(\frac{r+1}{2}\right)}{\sqrt{r\pi}\,\Gamma\left(\frac{r}{2}\right)\left(1 + \frac{x^2}{r}\right)^{\frac{r+1}{2}}} \quad (B.12)$$
and is known as the student's t distribution with $r$ degrees of freedom.
Proof of Theorem B.8.1. The MGF for the density (B.11) is
Z 1
1
E exp (tQ) = r y r=2 1
exp (ty) exp ( y=2) dy
0 2 2r=2
r=2
= (1 2t) (B.13)
R1
where the second equality uses the fact that 0 y a 1 exp ( by) dy = b a (a); which can be found by applying
change-of-variables to the gamma function. For Z N(0; 1) the distribution of Z 2 is
p
P Z 2 y = 2P (0 Z y)
Z py
1 x2
= 2 p exp dx
0 2 2
Z y
1 s
= 1 1=2
s 1=2 exp ds
0 2 2 2
1 p
using the change{of-variables s = x2 and the fact 2 = : Thus the density of Z 2 is (B.11) with r = 1:
1=2 Pr
2
From (B.13), we see that the MGF of Z is (1 2t) . Since we can write Q = X 0 X = j=1 Zj2 where
r=2
the Zj are independent N (0; 1), (B.6) can be used to show that the MGF of Q is (1 2t) ; which we
showed in (B.13) is the MGF of the density (B.11).
Proof of Theorem B.8.2. The fact that A > 0 means that we can write A = CC 0 where C is non-singular.
Then A 1 = C 10 C 1 and
1 1 10 1
C Z N 0; C AC = N 0; C CC 0 C 10
= N (0; I q ) :
Thus
0
Z 0A 1
Z = Z 0C 10
C 1
Z= C 1
Z C 1
Z 2
q:
Proof of Theorem B.8.3. Using the simple law of iterated expectations, tr has distribution function
!
Z
F (x) = P p x
Q=r
( r )
Q
= E Z x
r
" r !#
Q
= E P Z x jQ
r
r !
Q
= E x
r
Appendix C
Asymptotic Theory
C.1 Inequalities
The following are an important set of inequalities which are used in asymptotic distribution theory.
Jensen's Inequality. If $g(\cdot) : \mathbb{R} \to \mathbb{R}$ is convex, then for any random variable $x$ for which $E|x| < \infty$ and $E|g(x)| < \infty$,
$$g(E(x)) \le E(g(x)). \quad (C.1)$$
Expectation Inequality. For any random variable $x$ for which $E|x| < \infty$,
$$|E(x)| \le E|x|. \quad (C.2)$$
Holder's Inequality. If $p > 1$ and $q > 1$ and $\frac1p + \frac1q = 1$, then for any random $m \times n$ matrices $X$ and $Y$,
$$E\|X'Y\| \le \left(E\|X\|^p\right)^{1/p}\left(E\|Y\|^q\right)^{1/q}. \quad (C.4)$$
Minkowski's Inequality. For any random $m \times n$ matrices $X$ and $Y$ and $p \ge 1$,
$$\left(E\|X + Y\|^p\right)^{1/p} \le \left(E\|X\|^p\right)^{1/p} + \left(E\|Y\|^p\right)^{1/p}. \quad (C.5)$$
Markov's Inequality. For any random vector $x$ and non-negative function $g(x) \ge 0$,
$$P(g(x) > \alpha) \le \alpha^{-1}Eg(x). \quad (C.6)$$
Proof of Jensen's Inequality. Let a + bu be the tangent line to g(u) at u = Ex: Since g(u) is convex,
tangent lines lie below it. So for all u; g(u) a + bu yet g(Ex) = a + bEx since the curve is tangent at Ex:
Applying expectations, Eg(x) a + bEx = g(Ex); as stated.
Proof of Expecation Inequality. Follows from an application of Jensen's Inequality, noting that the
function g(u) = juj is convex.
1 1
Proof of Holder's Inequality. Since p + q = 1 an application of Jensen's Inequality shows that for any
real a and b
1 1 1 1
exp a+ b exp (a) + exp (b) :
p q p q
Setting u = exp (a) and v = exp (b) this implies
u v
u1=p v 1=q +
p q
and this inequality holds for any u > 0 and v > 0:
p p q q
Set u = kXk =E kXk and v = kY k =E kY k : Note that Eu = Ev = 1: By the matrix Schwarz
0
Inequality (A.8), X Y kXk kY k. Thus
E X 0Y E (kXk kY k)
p 1=p q 1=q p 1=p q 1=q
(E kXk ) (E kY k ) (E kXk ) (E kY k )
= E u1=p v 1=q
u v
E +
p q
1 1
= +
p q
= 1;
which is (C.4).
Proof of Minkowski's Inequality. Note that by rewriting, using the triangle inequality (A.9), and then
Holder's Inequality to the two expectations
p p 1
E kX + Y k = E kX + Y k kX + Y k
p 1 p 1
E kXk kX + Y k + E kY k kX + Y k
1=q 1=q
p 1=p q(p 1) p 1=p q(p 1)
(E kXk ) E kX + Y k + (E kY k ) E kX + Y k
p 1=p p 1=p p (p 1)=p
= (E kXk ) + (E kY k ) E (kX + Y k )
where the second equality picks q to satisfy 1=p + 1=q = 1; and the nal equality uses this fact to make the
p (p 1)=p
substitution q = p=(p 1) and then collects terms. Dividing both sides by E (kX + Y k ) ; we obtain
(C.5).
Theorem C.2.1 Weak Law of Large Numbers (WLLN). If $x_i \in \mathbb{R}$ is iid and $E|x_i| < \infty$, then as $n \to \infty$,
$$\bar{x}_n = \frac{1}{n}\sum_{i=1}^nx_i \to_p E(x_i).$$
Proof: Without loss of generality, we can set E(xi ) = 0 by recentering xi on its expectation.
We need to show that for all > 0 and > 0 there is some N < 1 so that for all n N; P (jxn j > ) :
Fix and : Set " = =3: Pick C < 1 large enough so that
(where 1 ( ) is the indicator function) which is possible since E jxi j < 1: De ne the random vectors
By the Triangle Inequality (A.9), the Expectation Inequality (C.2), and (C.7),
n
1X
E jz n j = E zi
n i=1
n
1X
E jzi j
n i=1
= E jzi j
E jxi j 1 (jxi j > C) + jE (xi 1 (jxi j > C))j
2E jxi j 1 (jxi j > C)
2": (C.8)
By Jensen's Inequality (C.1), the fact that the wi are iid and mean zero, and the bound jwi j 2C;
2
(E jwn j) Ew2n
Ewi2
=
n
4C 2
n
2
" (C.9)
Theorem C.3.1 Central Limit Theorem (CLT). If $x_i \in \mathbb{R}^k$ is iid and $Ex_{ji}^2 < \infty$ for $j = 1, \ldots, k$, then as $n \to \infty$,
$$\sqrt{n}\left(\bar{x}_n - \mu\right) = \frac{1}{\sqrt{n}}\sum_{i=1}^n(x_i - \mu) \to_d N(0, \Omega),$$
where $\mu = Ex_i$ and $\Omega = E(x_i - \mu)(x_i - \mu)'$.
Proof: The moment bound $Ex_{ji}^2 < \infty$ is sufficient to guarantee that the elements of $\mu$ and $\Omega$ are well defined and finite. Without loss of generality, it is sufficient to consider the case $\mu = 0$ and $\Omega = I_k$.
For 2 Rk ; let C ( ) = E exp i 0 xi denote the characteristic function of xi and set c ( ) = log C( ).
Then observe
@
C( ) = iE xi exp i 0 xi
@
@2
0 C( ) = i2 E xi x0i exp i 0 xi
@ @
so when evaluated at =0
C(0) = 1
@
C(0) = iE (xi ) = 0
@
@2
0 C(0) = E (xi x0i ) = Ik:
@ @
Furthermore,
@ @
c ( ) = c( ) = C( ) 1 C( )
@ @
@2 1 @2 2 @ @
c ( ) = 0 c( ) = C( ) C( ) C( ) C( ) C( )
@ @ @ @ 0 @ @ 0
so when evaluated at =0
c(0) = 0
c (0) = 0
c (0) = Ik:
= nc p
n
1 0
= c ( n)
2
p
where n ! 0 lies on the line segment joining 0 and = n: Since c ( n) !c (0) = I k ; we see that as
n ! 1;
1 0
Cn ( ) ! exp
2
the characteristic function of the N (0; I k ) distribution. This is su cient to establish the theorem.
Proof: Since g is continuous at c; for all " > 0 we can nd a > 0 such that if kz n ck < then
jg (z n ) g (c)j ": Recall that A B implies P(A) P(B): Thus P (jg (z n ) g (c)j ") P (kz n ck < ) !
p p
1 as n ! 1 by the assumption that z n ! c: Hence g(z n ) ! g(c) as n ! 1.
Theorem C.4.2 Continuous Mapping Theorem 2. If $z_n \to_d z$ as $n \to \infty$ and $g(\cdot)$ is continuous, then $g(z_n) \to_d g(z)$ as $n \to \infty$.
Theorem C.4.3 Delta Method: If $\sqrt{n}(\theta_n - \theta_0) \to_d N(0, \Sigma)$, where $\theta$ is $m \times 1$ and $\Sigma$ is $m \times m$, and $g(\theta) : \mathbb{R}^m \to \mathbb{R}^k$, $k \le m$, then
$$\sqrt{n}\left(g(\theta_n) - g(\theta_0)\right) \to_d N\left(0, g_\theta\Sigma g_\theta'\right)$$
where $g_\theta(\theta) = \frac{\partial}{\partial\theta'}g(\theta)$ and $g_\theta = g_\theta(\theta_0)$.
Proof: By a vector Taylor series expansion, for each element of $g$,
$$g_j(\theta_n) = g_j(\theta_0) + g_{j\theta}(\theta_{jn})(\theta_n - \theta_0),$$
where $\theta_{jn}$ lies on the line segment between $\theta_n$ and $\theta_0$ and therefore converges in probability to $\theta_0$. It follows that $a_{jn} = g_{j\theta}(\theta_{jn}) - g_{j\theta} \to_p 0$. Stacking across elements of $g$, we find
$$\sqrt{n}\left(g(\theta_n) - g(\theta_0)\right) = (g_\theta + a_n)\sqrt{n}(\theta_n - \theta_0) \to_d g_\theta N(0, \Sigma) = N\left(0, g_\theta\Sigma g_\theta'\right).$$
Appendix D
Maximum Likelihood
The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of $\theta$. The log-likelihood function is its natural log
$$\log L(\theta) = \sum_{i=1}^n\log f(y_i; \theta).$$
If the distribution $F$ is discrete, the likelihood and log-likelihood are constructed by setting $f(y; \theta) = P(y_i = y; \theta)$.
Define the Hessian
$$H = -E\frac{\partial^2}{\partial\theta\partial\theta'}\log f(y_i; \theta_0) \quad (D.1)$$
and the outer product matrix
$$\Omega = E\left(\frac{\partial}{\partial\theta}\log f(y_i; \theta_0)\,\frac{\partial}{\partial\theta}\log f(y_i; \theta_0)'\right). \quad (D.2)$$
Two important features of the likelihood are
Theorem D.0.4
$$\frac{\partial}{\partial\theta}E\log f(y_i; \theta)\Big|_{\theta=\theta_0} = 0 \quad (D.3)$$
$$H = \Omega \equiv I_0. \quad (D.4)$$
The matrix $I_0$ is called the information, and the equality (D.4) is often called the information matrix equality.
In some simple cases, we can find an explicit expression for $\hat{\theta}$ as a function of the data, but these cases are rare. More typically, the MLE $\hat{\theta}$ must be found by numerical methods.
Why do we believe that the MLE $\hat{\theta}$ is estimating the parameter $\theta$? Observe that when standardized, the log-likelihood is a sample average
$$\frac1n\log L(\theta) = \frac1n\sum_{i=1}^n\log f(y_i; \theta) \to_p E\log f(y_i; \theta).$$
As the MLE $\hat{\theta}$ maximizes the left-hand side, we can see that it is an estimator of the maximizer of the right-hand side. The first-order condition for the latter problem is
$$0 = \frac{\partial}{\partial\theta}E\log f(y_i; \theta),$$
which holds at $\theta = \theta_0$ by (D.3). In fact, under conventional regularity conditions, $\hat{\theta}$ is consistent for this value: $\hat{\theta} \to_p \theta_0$ as $n \to \infty$.
Theorem D.0.6 Under regularity conditions, $\sqrt{n}\left(\hat{\theta} - \theta_0\right) \to_d N\left(0, I_0^{-1}\right)$.
Thus in large samples, the approximate variance of the MLE is $(nI_0)^{-1}$, which is the Cramer-Rao lower bound. Thus in large samples the MLE has approximately the best possible variance. Therefore the MLE is called asymptotically efficient.
Typically, to estimate the asymptotic variance of the MLE we use an estimate based on the Hessian formula (D.1):
$$\hat{H} = -\frac1n\sum_{i=1}^n\frac{\partial^2}{\partial\theta\partial\theta'}\log f\left(y_i; \hat{\theta}\right). \quad (D.5)$$
We then set $\hat{I}_0^{-1} = \hat{H}^{-1}$. Asymptotic standard errors for $\hat{\theta}$ are then the square roots of the diagonal elements of $n^{-1}\hat{I}_0^{-1}$.
Sometimes a parametric density function $f(y; \theta)$ is used to approximate the true unknown density $f(y)$, but it is not literally believed that the model $f(y; \theta)$ is necessarily the true density. In this case, we refer to $\log L(\theta)$ as a quasi-likelihood and its maximizer $\hat{\theta}$ as a quasi-MLE or QMLE.
In this case there is not a "true" value of the parameter $\theta$. Instead we define the pseudo-true value $\theta_0$ as the maximizer of
$$E\log f(y_i; \theta) = \int f(y)\log f(y; \theta)\,dy,$$
which is equivalent to minimizing the Kullback-Leibler information distance between the true density $f(y)$ and the parametric density $f(y; \theta)$. Thus the QMLE $\theta_0$ is the value which makes the parametric density "closest" to the true density according to this measure of distance. The QMLE is consistent for the pseudo-true value, but has a different covariance matrix than in the pure MLE case, since the information matrix equality (D.4) does not hold. A minor adjustment to Theorem (D.0.6) yields the asymptotic distribution of the QMLE:
$$\sqrt{n}\left(\hat{\theta} - \theta_0\right) \to_d N(0, V), \qquad V = H^{-1}\Omega H^{-1}.$$
Asymptotic standard errors (sometimes called QMLE standard errors) are then the square roots of the diagonal elements of $n^{-1}\hat{V}$.
Proof of Theorem D.0.4. To see (D.3);
Z
@ @
E log f (y i ; ) = log f (y; ) f (y; 0 ) dy
@ = 0
@ = 0
Z
@ f (y; 0 )
= f (y; ) dy
@ f (y; ) = 0
Z
@
= f (y; ) dy
@ = 0
@
= 1 = 0:
@ = 0
which by Theorem (D.0.4) has mean zero and variance nH: Write the estimator ~ = ~ (Y ) as a function of
the data. Since ~ is unbiased for any ;
Z
= E~ = ~ (Y ) f (Y ; ) dY :
so
1 1
var ~ = :
var (S) nH
Proof of Theorem D.0.6 Taking the rst-order condition for maximization of log L( ), and making a
rst-order Taylor series expansion,
@
0 = log L( )
@ =^
Xn
@
= log f y i ; ^
i=1
@
Xn n
X
@ @2 ^
' log f (y i ; 0) + 0 log f (y i ; n) 0 ;
i=1
@ i=1
@ @
where n lies on a line segment joining ^ and 0: (Technically, the speci c value of n varies by row in this
expansion.) Rewriting this equation, we nd
n
! 1 n
!
X @2 X @
^ 0 = log f (y i ; n) log f (y i ; 0) :
0
i=1
@ @ i=1
@
@
Since @ log f (y i ; 0) is mean-zero with covariance matrix ; an application of the CLT yields
n
1 X @ d
p log f (y i ; 0) ! N (0; ) :
n i=1 @
The analysis of the sample Hessian is somewhat more complicated due to the presence of n : Let H( ) =
@2 p p
@ @ 0 log f (y i ; ) : If it is continuous in ; then since n ! 0 we nd H( n ) ! H and so
n n
1 X @2 1X @2
0 log f (y i ; n) = 0 log f (y i ; n) H( n) + H( n)
n i=1 @ @ n i=1 @ @
p
!H
Appendix E
Numerical Optimization
where the parameter is $\theta \in \Theta \subset \mathbb{R}^m$ and the criterion function is $Q(\theta) : \Theta \to \mathbb{R}$. For example, the NLLS, GLS, MLE and GMM estimators take this form. In most cases, $Q(\theta)$ can be computed for given $\theta$, but $\hat{\theta}$ is not available in closed form. In this case, numerical methods are required to obtain $\hat{\theta}$.
If the functions $g(\theta)$ and $H(\theta)$ are not analytically available, they can be calculated numerically. Take the $j$'th element of $g(\theta)$. Let $\delta_j$ be the $j$'th unit vector (zeros everywhere except for a one in the $j$'th row). Then for small $\varepsilon$,
$$g_j(\theta) \simeq \frac{Q(\theta + \delta_j\varepsilon) - Q(\theta)}{\varepsilon}.$$
Similarly,
$$g_{jk}(\theta) \simeq \frac{Q(\theta + \delta_j\varepsilon + \delta_k\varepsilon) - Q(\theta + \delta_k\varepsilon) - Q(\theta + \delta_j\varepsilon) + Q(\theta)}{\varepsilon^2}.$$
In many cases, numerical derivatives can work well but can be computationally costly relative to analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
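The difference formulas above translate directly into code. The sketch below implements them for an arbitrary criterion; the step size eps and the toy quadratic criterion are assumptions for the example and may need tuning in practice.

```python
# Sketch: numerical gradient and Hessian of a criterion Q by the difference formulas above.
import numpy as np

def num_grad(Q, theta, eps=1e-5):
    g = np.zeros(len(theta))
    for j in range(len(theta)):
        d = np.zeros(len(theta)); d[j] = eps
        g[j] = (Q(theta + d) - Q(theta)) / eps
    return g

def num_hess(Q, theta, eps=1e-4):
    m = len(theta)
    H = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            dj = np.zeros(m); dj[j] = eps
            dk = np.zeros(m); dk[k] = eps
            H[j, k] = (Q(theta + dj + dk) - Q(theta + dk)
                       - Q(theta + dj) + Q(theta)) / eps**2
    return H

Q = lambda t: (t[0] - 1) ** 2 + 2 * (t[1] + 0.5) ** 2   # simple test criterion
theta = np.array([0.0, 0.0])
print(num_grad(Q, theta), num_hess(Q, theta))
```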
Most gradient methods are a variant of Newton's method, which is based on a quadratic approximation. By a Taylor expansion for $\theta$ close to $\hat{\theta}$,
$$0 = g(\hat{\theta}) \simeq g(\theta) + H(\theta)\left(\hat{\theta} - \theta\right),$$
which implies
$$\hat{\theta} = \theta - H(\theta)^{-1}g(\theta).$$
This suggests the iteration rule
$$\hat{\theta}_{i+1} = \theta_i - H(\theta_i)^{-1}g(\theta_i).$$
One problem with Newton's method is that it will send the iterations in the wrong direction if $H(\theta_i)$ is not positive definite. One modification to prevent this possibility is quadratic hill-climbing, which sets
$$\hat{\theta}_{i+1} = \theta_i - \left(H(\theta_i) + \alpha_iI_m\right)^{-1}g(\theta_i),$$
where $\alpha_i$ is set just above the smallest eigenvalue of $H(\theta_i)$ if $H(\theta_i)$ is not positive definite.
Another productive modification is to add a scalar steplength $\lambda_i$. In this case the iteration rule takes the form
$$\theta_{i+1} = \theta_i - \lambda_iD_ig_i, \quad (E.2)$$
where $g_i = g(\theta_i)$, and $D_i = H(\theta_i)^{-1}$ for Newton's method and $D_i = \left(H(\theta_i) + \alpha_iI_m\right)^{-1}$ for quadratic hill-climbing.
Allowing the steplength to be a free parameter allows for a line search, a one-dimensional optimization. To pick $\lambda_i$, write the criterion function as a function of $\lambda$:
$$Q(\lambda) = Q(\theta_i - \lambda D_ig_i),$$
a one-dimensional optimization problem. There are two common methods to perform a line search. A quadratic approximation evaluates the first and second derivatives of $Q(\lambda)$ with respect to $\lambda$, and picks $\lambda_i$ as the value minimizing this approximation. The half-step method considers the sequence $\lambda = 1, 1/2, 1/4, 1/8, \ldots$. Each value in the sequence is considered and the criterion $Q(\theta_i - \lambda D_ig_i)$ evaluated. If the criterion has improved over $Q(\theta_i)$, use this value; otherwise move to the next element in the sequence.
Newton's method does not perform well if $Q(\theta)$ is irregular, and it can be quite computationally costly if $H(\theta)$ is not analytically available. These problems have motivated alternative choices for the weight matrix $D_i$. These methods are called quasi-Newton methods. Two popular methods are due to Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Let
\[
\Delta g_i = g_i - g_{i-1}, \qquad \Delta\theta_i = \theta_i - \theta_{i-1}.
\]
The DFP and BFGS methods build the weight matrix $D_i$ recursively from $D_{i-1}$ and these differences, avoiding direct computation of the Hessian.
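As a hedged illustration of how these differences enter, the following Python sketch implements the textbook-standard BFGS update of the inverse-Hessian approximation. This is the standard formula rather than necessarily the exact expression used in the text, and the function name is illustrative.

```python
import numpy as np

def bfgs_update(D_prev, dtheta, dg):
    """Standard BFGS update of the inverse-Hessian approximation D_i,
    given dtheta = theta_i - theta_{i-1} and dg = g_i - g_{i-1}."""
    dtheta = np.asarray(dtheta, dtype=float)
    dg = np.asarray(dg, dtype=float)
    rho = 1.0 / (dg @ dtheta)                      # inverse of the curvature term dg' dtheta
    m = dtheta.size
    V = np.eye(m) - rho * np.outer(dtheta, dg)
    return V @ D_prev @ V.T + rho * np.outer(dtheta, dtheta)
```

In practice the update is skipped (or the previous $D_{i-1}$ retained) when the curvature term $\Delta g_i'\Delta\theta_i$ is not positive, since otherwise $D_i$ may fail to be positive definite.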
For any of the gradient methods, the iterations continue until the sequence has converged in some sense. This can be defined by examining whether $\left|\theta_i - \theta_{i-1}\right|$, $\left|Q(\theta_i) - Q(\theta_{i-1})\right|$ or $\left|g(\theta_i)\right|$ has become small.
E.3 Derivative-Free Methods
All gradient methods can be quite poor in locating the global minimum when $Q(\theta)$ has several local minima. Furthermore, the methods are not well defined when $Q(\theta)$ is non-differentiable. In these cases, alternative optimization methods are required. One example is the simplex method of Nelder-Mead (1965).

A more recent innovation is the method of simulated annealing (SA). For a review see Goffe, Ferrier, and Rogers (1994). The SA method is a sophisticated random search. Like the gradient methods, it relies on an iterative sequence. At each iteration, a random variable is drawn and added to the current value of the parameter. If the resulting criterion is decreased, this new value is accepted. If the criterion is increased, it may still be accepted depending on the extent of the increase and another randomization. The latter property is needed to keep the algorithm from becoming stuck at a local minimum. As the iterations continue, the variance of the random innovations is shrunk. The SA algorithm stops when a large number of iterations is unable to improve the criterion. The SA method has been found to be successful at locating global minima. The downside is that it can take considerable computer time to execute.
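The following Python sketch illustrates the SA idea described above. The Metropolis-style exponential acceptance rule is one common way to implement "accepted depending on the extent of the increase and another randomization"; the cooling rate, temperature, innovation scale, and iteration count are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def simulated_annealing(Q, theta0, scale=1.0, temp=1.0, cooling=0.999,
                        n_iter=10000, seed=0):
    """Random-search minimization of Q in the spirit of simulated annealing."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    q = Q(theta)
    best_theta, best_q = theta.copy(), q
    for _ in range(n_iter):
        candidate = theta + rng.normal(scale=scale, size=theta.shape)
        q_new = Q(candidate)
        # Accept improvements outright; accept increases with a probability that
        # falls with the size of the increase and with the current temperature.
        if q_new < q or rng.uniform() < np.exp(-(q_new - q) / temp):
            theta, q = candidate, q_new
            if q < best_q:
                best_theta, best_q = theta.copy(), q
        temp *= cooling        # cool the acceptance rule
        scale *= cooling       # shrink the variance of the random innovations
    return best_theta, best_q
```

For the Nelder-Mead simplex method, an off-the-shelf implementation such as `scipy.optimize.minimize(Q, theta0, method='Nelder-Mead')` can be used instead of writing the algorithm by hand.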
Bibliography
[1] Aitken, A.C. (1935): \On least squares and linear combinations of observations," Proceedings of the
Royal Statistical Society, 55, 42-48.
[2] Akaike, H. (1973): \Information theory and an extension of the maximum likelihood principle." In B.
Petrov and F. Csaki, eds., Second International Symposium on Information Theory.
[3] Anderson, T.W. and H. Rubin (1949): \Estimation of the parameters of a single equation in a complete
system of stochastic equations," The Annals of Mathematical Statistics, 20, 46-63.
[4] Andrews, D.W.K. (1988): \Laws of large numbers for dependent non-identically distributed random
variables," Econometric Theory, 4, 458-467.
[5] Andrews, D.W.K. (1991), \Asymptotic normality of series estimators for nonparametric and semipara-
metric regression models," Econometrica, 59, 307-345.
[6] Andrews, D.W.K. (1993), \Tests for parameter instability and structural change with unknown change
point," Econometrica, 61, 821-8516.
[7] Andrews, D.W.K. and M. Buchinsky (2000): \A three-step method for choosing the number of boot-
strap replications," Econometrica, 68, 23-51.
[8] Andrews, D.W.K. and W. Ploberger (1994): \Optimal tests when a nuisance parameter is present only
under the alternative," Econometrica, 62, 1383-1414.
[9] Basmann, R. L. (1957): \A generalized classical method of linear estimation of coefficients in a structural
equation," Econometrica, 25, 77-83.
[10] Bekker, P.A. (1994): \Alternative approximations to the distributions of instrumental variable estima-
tors," Econometrica, 62, 657-681.
[11] Billingsley, P. (1968): Convergence of Probability Measures. New York: Wiley.
[12] Billingsley, P. (1979): Probability and Measure. New York: Wiley.
[13] Bose, A. (1988): \Edgeworth correction by bootstrap in autoregressions," Annals of Statistics, 16,
1709-1722.
[14] Breusch, T.S. and A.R. Pagan (1979): \The Lagrange multiplier test and its application to model
specification in econometrics," Review of Economic Studies, 47, 239-253.
[15] Brown, B.W. and W.K. Newey (2002): \GMM, efficient bootstrapping, and improved inference,"
Journal of Business and Economic Statistics.
[16] Carlstein, E. (1986): \The use of subseries methods for estimating the variance of a general statistic
from a stationary time series," Annals of Statistics, 14, 1171-1179.
[17] Chamberlain, G. (1987): \Asymptotic efficiency in estimation with conditional moment restrictions,"
Journal of Econometrics, 34, 305-334.
[18] Choi, I. and P.C.B. Phillips (1992): \Asymptotic and finite sample distribution theory for IV estimators
and tests in partially identified structural equations," Journal of Econometrics, 51, 113-150.
[19] Chow, G.C. (1960): \Tests of equality between sets of coefficients in two linear regressions," Economet-
rica, 28, 591-603.
[20] Cragg, John (1992): \Quasi-Aitken Estimation for Heteroskedasticity of Unknown Form," Journal of
Econometrics, 54, 179-201.
[21] Davidson, J. (1994): Stochastic Limit Theory: An Introduction for Econometricians. Oxford: Oxford
University Press.
[22] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge University
Press.
[23] Dickey, D.A. and W.A. Fuller (1979): \Distribution of the estimators for autoregressive time series with
a unit root," Journal of the American Statistical Association, 74, 427-431.
[24] Donald S.G. and W.K. Newey (2001): \Choosing the number of instruments," Econometrica, 69, 1161-
1191.
[25] Dufour, J.M. (1997): \Some impossibility theorems in econometrics with applications to structural and
dynamic models," Econometrica, 65, 1365-1387.
[26] Efron, Bradley (1979): \Bootstrap methods: Another look at the jackknife," Annals of Statistics, 7,
1-26.
[27] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society for Industrial
and Applied Mathematics.
[28] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York: Chapman-Hall.
[29] Eicker, F. (1963): \Asymptotic normality and consistency of the least squares estimators for families of
linear regressions," Annals of Mathematical Statistics, 34, 447-456.
[30] Engle, R.F. and C.W.J. Granger (1987): \Co-integration and error correction: Representation, estima-
tion and testing," Econometrica, 55, 251-276.
[31] Frisch, R. and F. Waugh (1933): \Partial time regressions as compared with individual trends," Econo-
metrica, 1, 387-401.
[32] Gallant, A.R. and D.W. Nychka (1987): \Seminonparametric maximum likelihood estimation," Econo-
metrica, 55, 363-390.
[33] Gallant, A.R. and H. White (1988): A Unified Theory of Estimation and Inference for Nonlinear
Dynamic Models. New York: Basil Blackwell.
[34] Goldberger, Arthur S. (1991): A Course in Econometrics. Cambridge: Harvard University Press.
[35] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): \Global optimization of statistical functions with
simulated annealing," Journal of Econometrics, 60, 65-99.
[36] Gauss, K.F. (1809): \Theoria motus corporum coelestium," in Werke, Vol. VII, 240-254.
[37] Granger, C.W.J. (1969): \Investigating causal relations by econometric models and cross-spectral meth-
ods," Econometrica, 37, 424-438.
[38] Granger, C.W.J. (1981): \Some properties of time series data and their use in econometric speci cation,"
Journal of Econometrics, 16, 121-130.
[39] Granger, C.W.J. and T. Teräsvirta (1993): Modelling Nonlinear Economic Relationships, Oxford Uni-
versity Press, Oxford.
[40] Hall, A. R. (2000): \Covariance matrix estimation and the power of the overidentifying restrictions
test," Econometrica, 68, 1517-1527,
[41] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.
[42] Hall, P. (1994): \Methodology and theory for the bootstrap," Handbook of Econometrics, Vol. IV, eds.
R.F. Engle and D.L. McFadden. New York: Elsevier Science.
[43] Hall, P. and J.L. Horowitz (1996): \Bootstrap critical values for tests based on Generalized-Method-of-
Moments estimation," Econometrica, 64, 891-916.
[44] Hahn, J. (1996): \A note on bootstrapping generalized method of moments estimators," Econometric
Theory, 12, 187-197.
[45] Hansen, B.E. (1992): \Efficient estimation and testing of cointegrating vectors in the presence of deter-
ministic trends," Journal of Econometrics, 53, 87-121.
[46] Hansen, B.E. (1996): \Inference when a nuisance parameter is not identified under the null hypothesis,"
Econometrica, 64, 413-430.
[47] Hansen, L.P. (1982): \Large sample properties of generalized method of moments estimators," Econo-
metrica, 50, 1029-1054.
[48] Hansen, L.P., J. Heaton, and A. Yaron (1996): \Finite sample properties of some alternative GMM
estimators," Journal of Business and Economic Statistics, 14, 262-280.
[49] Hausman, J.A. (1978): \Specification tests in econometrics," Econometrica, 46, 1251-1271.
[50] Heckman, J. (1979): \Sample selection bias as a specification error," Econometrica, 47, 153-161.
[51] Horowitz, Joel (2001): \The Bootstrap," Handbook of Econometrics, Vol. 5, J.J. Heckman and E.E.
Leamer, eds., Elsevier Science, 3159-3228.
[52] Imbens, G.W. (1997): \One step estimators for over-identified generalized method of moments models,"
Review of Economic Studies, 64, 359-383.
[53] Imbens, G.W., R.H. Spady and P. Johnson (1998): \Information theoretic approaches to inference in
moment condition models," Econometrica, 66, 333-357.
[54] Jarque, C.M. and A.K. Bera (1980): \Efficient tests for normality, homoskedasticity and serial indepen-
dence of regression residuals," Economics Letters, 6, 255-259.
[55] Johansen, S. (1988): \Statistical analysis of cointegrating vectors," Journal of Economic Dynamics and
Control, 12, 231-254.
[56] Johansen, S. (1991): \Estimation and hypothesis testing of cointegration vectors in the presence of
linear trend," Econometrica, 59, 1551-1580.
[57] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models, Oxford
University Press.
[58] Johansen, S. and K. Juselius (1992): \Testing structural hypotheses in a multivariate cointegration
analysis of the PPP and the UIP for the UK," Journal of Econometrics, 53, 211-244.
[59] Kitamura, Y. (2001): \Asymptotic optimality and empirical likelihood for testing moment restrictions,"
Econometrica, 69, 1661-1672.
[60] Kitamura, Y. and M. Stutzer (1997): \An information-theoretic alternative to generalized method of
moments," Econometrica, 65, 861-874..
[61] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.
[62] Kunsch, H.R. (1989): \The jackknife and the bootstrap for general stationary observations," Annals of
Statistics, 17, 1217-1241.
[63] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): \Testing the null hypothesis of
stationarity against the alternative of a unit root: How sure are we that economic time series have a
unit root?" Journal of Econometrics, 54, 159-178.
[64] Lafontaine, F. and K.J. White (1986): \Obtaining any Wald statistic you want," Economics Letters,
21, 35-40.
[65] Lovell, M.C. (1963): \Seasonal adjustment of economic time series," Journal of the American Statistical
Association, 58, 993-1010.
[66] MacKinnon, J.G. (1990): \Critical values for cointegration," in Engle, R.F. and C.W. Granger (eds.)
Long-Run Economic Relationships: Readings in Cointegration, Oxford, Oxford University Press.
[67] MacKinnon, J.G. and H. White (1985): \Some heteroskedasticity-consistent covariance matrix estima-
tors with improved finite sample properties," Journal of Econometrics, 29, 305-325.
[68] Magnus, J. R., and H. Neudecker (1988): Matrix Differential Calculus with Applications in Statistics
and Econometrics, New York: John Wiley and Sons.
[69] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.
[70] Nelder, J. and R. Mead (1965): \A simplex method for function minimization," Computer Journal, 7,
308-313.
[71] Newey, W.K. and K.D. West (1987): \Hypothesis testing with efficient method of moments estimation,"
International Economic Review, 28, 777-787.
[72] Owen, Art B. (1988): \Empirical likelihood ratio confidence intervals for a single functional,"
Biometrika, 75, 237-249.
[73] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.
[74] Phillips, P.C.B. (1989): \Partially identified econometric models," Econometric Theory, 5, 181-240.
[75] Phillips, P.C.B. and S. Ouliaris (1990): \Asymptotic properties of residual based tests for cointegration,"
Econometrica, 58, 165-193.
[76] Politis, D.N. and J.P. Romano (1994): \The stationary bootstrap," Journal of the American Statistical
Association, 89, 1303-1313.
[77] Potscher, B.M. (1991): \Effects of model selection on inference," Econometric Theory, 7, 163-185.
[78] Qin, J. and J. Lawless (1994): \Empirical likelihood and general estimating equations," The Annals of
Statistics, 22, 300-325.
[79] Ramsey, J. B. (1969): \Tests for specification errors in classical linear least-squares regression analysis,"
Journal of the Royal Statistical Society, Series B, 31, 350-371.
[80] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.
[81] Said, S.E. and D.A. Dickey (1984): \Testing for unit roots in autoregressive-moving average models of
unknown order," Biometrika, 71, 599-608.
[82] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.
[83] Sargan, J.D. (1958): \The estimation of economic relationships using instrumental variables," Econo-
metrica, 26, 393-415.
[84] Sheather, S.J. and M.C. Jones (1991): \A reliable data-based bandwidth selection method for kernel
density estimation," Journal of the Royal Statistical Society, Series B, 53, 683-690.
[85] Shin, Y. (1994): \A residual-based test of the null of cointegration against the alternative of no cointe-
gration," Econometric Theory, 10, 91-115.
[86] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chapman and
Hall.
[87] Sims, C.A. (1972): \Money, income and causality," American Economic Review, 62, 540-552.
[88] Sims, C.A. (1980): \Macroeconomics and reality," Econometrica, 48, 1-48.
[89] Staiger, D. and J.H. Stock (1997): \Instrumental variables regression with weak instruments," Econo-
metrica, 65, 557-586.
[90] Stock, J.H. (1987): \Asymptotic properties of least squares estimators of cointegrating vectors," Econo-
metrica, 55, 1035-1056.
[91] Stock, J.H. (1991): \Confidence intervals for the largest autoregressive root in U.S. macroeconomic time
series," Journal of Monetary Economics, 28, 435-460.
[92] Stock, J.H. and J.H. Wright (2000): \GMM with weak identification," Econometrica, 68, 1055-1096.
[93] Theil, H. (1953): \Repeated least squares applied to complete equation systems," The Hague, Central
Planning Bureau, mimeo.
[94] Tobin, J. (1958): \Estimation of relationships for limited dependent variables," Econometrica, 26,
24-36.
[95] Wald, A. (1943): \Tests of statistical hypotheses concerning several parameters when the number of
observations is large," Transactions of the American Mathematical Society, 54, 426-482.
[96] Wang, J. and E. Zivot (1998): \Inference on structural parameters in instrumental variables regression
with weak instruments," Econometrica, 66, 1389-1404.
[97] White, H. (1980): \A heteroskedasticity-consistent covariance matrix estimator and a direct test for
heteroskedasticity," Econometrica, 48, 817-838.