autocorrelated errors
Giorgio Calzolari
Laura Magazzini
Abstract

The performance of simulation-based estimators for panel data Tobit models (censored regression) with random effects and autocorrelated AR(1) errors is evaluated with Monte Carlo experiments. Examples show that poor identifiability of parameters can arise in this context when the autocorrelation parameter is moderately high. An example of application is provided on a model analysing the patent-R&D relationship.
1 Introduction
This paper aims at evaluating the performance of simulation-based estimation techniques in the context of a Tobit model for panel data with autocorrelated errors and random effects. The availability of panel data (i.e. repeated observations over time on the same unit) offers a great number of advantages for estimation over single cross-section or time-series data. First, panel data allow one to control for time-invariant or unit-invariant characteristics, whose omission can result in biased estimates in a cross-section or a time-series setting. Second, the availability of repeated observations on the same unit allows answering questions about the dynamic behaviour of economic variables that could not be handled in a time-series or cross-section context.
Despite that, the application of limited dependent variable models to panel data has been hampered by the intractability of the likelihood function, which contains integrals with no closed form solution, unless restrictive (and unrealistic) hypotheses are imposed on the structure of the model.
2 The model

The latent dependent variable, y*_it, is expressed as a linear function of a set of independent variables, X_it, and an error term, ε_it:

    y*_it = X_it β + ε_it                                            (1)

where i denotes the unit (household, firm, country, ...) and the index t denotes time, with i = 1, ..., N; t = 1, ..., T.
Observation of the dependent variable is driven by the following rule: y_it = max{0, y*_it}, i.e. the latent variable is observed only if non-negative; otherwise a zero is recorded. As a result, the likelihood function for the whole observed vector y is a mixture of discrete and continuous distributions.
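For a single cross-section, this mixture can be written down directly: a censored observation contributes the probability mass Φ(−X_it β/σ), an uncensored one the normal density. A minimal Python sketch of the resulting log-likelihood (the function name is ours; only the standard library is used):

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def tobit_loglik(y, x, beta0, beta1, sigma):
    """Cross-section Tobit log-likelihood: zeros contribute a discrete
    probability mass, positive values a continuous normal density."""
    ll = 0.0
    for yi, xi in zip(y, x):
        m = beta0 + beta1 * xi
        if yi == 0.0:
            ll += math.log(_nd.cdf(-m / sigma))                # P(y* <= 0)
        else:
            ll += math.log(_nd.pdf((yi - m) / sigma) / sigma)  # density at yi
    return ll
```

It is this mixed discrete/continuous structure that, combined with correlated errors, produces the multiple integrals discussed below.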
When analysing panel data, the error structure can be decomposed into three independent terms:

    ε_it = μ_i + λ_t + e_it                                          (2)

where μ_i is the individual effect, representing all the time-invariant (unobserved or unobservable) characteristics of unit i, λ_t is the time effect, representing all the characteristics of time t, invariant across all the cross-sectional units in the sample, and e_it is a random term that varies over time and individuals.
In standard settings, the error term e_it is assumed to be serially uncorrelated. This assumption is not suited to situations where the effect of unobserved variables varies systematically over time, as in the case of serially correlated omitted variables or transitory variables whose effect lasts more than one period. Recent research on linear models with random effects considers serial correlation in the time effects (Karlsson and Skoglund 2004). We instead consider idiosyncratic disturbances that are correlated over time, produced by an AR(1) process:

    e_it = ρ e_{i,t-1} + w_it                                        (3)

with w_it homoskedastic, uncorrelated, and with mean zero. The error term λ_t is not considered in the analysis, since it can easily be accounted for in a typical (short-T) panel setting by inserting time dummies in the regression.
As a result of these assumptions, the variance-covariance matrix of the error term ε_it has the following structure:

    E[ε_it ε_js] = σ²_μ + σ²_e              if i = j and t = s
                 = σ²_μ + ρ^|t-s| σ²_e      if i = j and t ≠ s
                 = 0                        if i ≠ j                 (4)
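For one unit, the (T × T) covariance block implied by (4) can be assembled directly. A short sketch, assuming numpy is available (the function name is ours):

```python
import numpy as np

def error_covariance(T, sig2_mu, sig2_e, rho):
    """T x T covariance of eps_i = mu_i + e_i: sig2_mu everywhere
    (the individual effect) plus sig2_e * rho^|t-s| (the AR(1) term)."""
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    return sig2_mu + sig2_e * rho ** lags
```

Note that with T = 2 only two distinct values appear in this matrix, which is the root of the identification issues discussed in Section 3.1.3.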
Autocorrelated disturbances in panel data linear models with random effects were first considered by Lillard and Willis (1978). The authors estimate the model parameters by first applying OLS to the data pooled over individuals and years; the variance components and the autocorrelation parameter are then estimated by applying maximum likelihood to the OLS residuals. If data are not censored, estimation can easily be handled in both the random and fixed effects approaches [1].
Difficulties arise when observations on the dependent variable are censored. Even if a single time series is considered, maximum likelihood estimation of the autocorrelated model requires the evaluation of multiple integrals (Zeger and Brookmeyer 1986). If the time dimension is sufficiently small, the integral may not be difficult to compute, and in special cases the likelihood can be decomposed into the product of one-dimensional integrals. Consistent estimates may also be obtained by ignoring serial correlation and treating the observations as independent (Robinson 1982). However, alternative estimation procedures have been devised that are shown to perform better than the estimator obtained by ignoring serial correlation (Dagenais 1989).
In the case of panel data, the issue is further complicated by the introduction of individual effects, which capture the effect of unit-specific unobservables or unobserved heterogeneity, thereby reducing the omitted variable bias that might arise in time-series or cross-section analysis.
Empirical studies have already analyzed the Tobit model with random effects and autocorrelated disturbances, where the problem of intractability of the likelihood function has been solved by the application of simulated maximum likelihood. However, to our knowledge, a Monte Carlo experiment evaluating the performance of the estimator is still lacking.
Hajivassiliou (1994) applies simulation techniques to the estimation of the incidence and extent of external financing crises of developing countries, allowing a flexible correlation structure in the unobservables. Both multiperiod Tobit and probit models are considered, and the author assumes a one-factor plus AR(1) structure, whose coefficients are estimated via smoothly simulated maximum likelihood (based on a smooth recursive conditioning simulator) and via the method of simulated scores (based on both a smooth recursive conditioning simulator and a Gibbs sampling simulator).
More recently, Schmit, Gould, Dong, Kaiser and Chung (2003) consider panel data on household cheese purchases to examine the impact of U.S. generic cheese advertising on at-home consumption. The model accounts for the panel and censored nature of the data, as well as for an autoregressive error structure. The problem of high-order integrals appearing in the likelihood function is solved using techniques for simulating the probabilities and partitioning the data, extending the procedure proposed by Zeger and Brookmeyer (1986) for the analysis of censored autocorrelated data. The authors have also applied the methodology in a study of the purchase process for a frequently purchased commodity (Dong, Schmit, Kaiser and Chung 2003).

[1] See Bhargava, Franzini and Narendranathan (1982) for a discussion of autocorrelated models within the linear fixed effects framework.
3 Simulation-based estimation

Simulation-based estimation allows us to overcome the problem of intractability of the likelihood function. Two approaches are considered and compared: (constrained) indirect estimation and simulated maximum likelihood, the latter employing the Geweke-Hajivassiliou-Keane (henceforth GHK) simulator (see e.g. Hajivassiliou and McFadden 1998 for details).
Early research on the one-way Tobit model with no autocorrelation showed that simulation-based methods perform well enough for estimation (Calzolari et al. 2001).
3.1 Indirect Estimation

Indirect estimation methods [2] represent an inferential approach suitable for situations where estimation of the statistical model of interest is too difficult to be performed directly, while it is straightforward to produce simulated values from the same model. The approach was first motivated by econometric models with latent variables, but it can be applied in virtually every situation in which direct maximization of the likelihood function turns out to be difficult.
The principle underlying the so-called Efficient Method of Moments (henceforth EMM; Gallant and Tauchen 1996) is as follows. Suppose we have a sample of observations y and a model whose likelihood function L(θ; y) is difficult to handle and maximize [3]. The maximum likelihood estimate of θ, given by

    θ̂ = arg max_θ ln L(θ; y),

is thus unavailable. Let us now take an alternative model, depending on a parameter vector β ∈ B, which will be indicated as the auxiliary model, easier to handle, and suppose we decide to use it in place of the original one. Since the model is misspecified, the quasi-ML (or pseudo-ML) estimator

    β̂ = arg max_{β ∈ B} ln L̃(β; y)

is not necessarily consistent: the idea is to exploit simulations performed under the original model to correct for the inconsistency.

[2] See, for a general treatment, the fourth chapter of Gourieroux and Monfort (1996).
[3] We remark that the model could also depend on a matrix of explanatory variables X.
One now simulates a set of S vectors from the original model on the basis of an arbitrary parameter vector θ, and denotes each of these vectors as y_s(θ). Of course, using the observed data y, the score function of the auxiliary model,

    ∂ ln L̃(β; y) / ∂β,                                              (5)

is zero when evaluated at the quasi-maximum likelihood estimate β̂. However, when evaluated at β̂ using the simulated data y_s(θ), the score is in general different from zero, and the EMM estimator of θ minimizes the quadratic form

    [ Σ_{s=1}^{S} ∂ ln L̃(β̂; y_s(θ))/∂β ]' Ω [ Σ_{s=1}^{S} ∂ ln L̃(β̂; y_s(θ))/∂β ],   (6)

where Ω is a symmetric non-negative definite matrix defining the metric [4]. This approach is especially useful when a closed form expression for the score of the auxiliary model is available. In this case, the procedure is computationally faster than other indirect estimation procedures, like indirect inference (Gourieroux et al. 1993), which would require iterated numerical re-estimation of the auxiliary model. In our specific case the score is available in closed form.
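The score-matching logic can be illustrated on a deliberately simple toy model (entirely ours, not the paper's): a censored scalar mean y = max(0, θ + noise), with auxiliary model N(β, 1) fitted by quasi-ML while ignoring the censoring. Driving the simulated auxiliary score to zero recovers θ despite the biased β̂:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta_true = 20000, 1.0

# "observed" censored data and the quasi-ML estimate of the auxiliary
# model N(beta, 1), which simply ignores the censoring mechanism
y_obs = np.maximum(0.0, rng.normal(theta_true, 1.0, n))
beta_hat = y_obs.mean()            # biased upward for theta_true

# fixed simulation draws (S = 10), reused at every trial value of theta
z = rng.standard_normal((10, n))

def avg_aux_score(theta):
    """Average auxiliary score (y - beta)/1, evaluated at beta_hat on
    data simulated under the structural model with parameter theta."""
    return np.maximum(0.0, theta + z).mean() - beta_hat

# the toy score is monotonically increasing in theta, so the quadratic
# form (6) is driven to zero by bisection
lo, hi = -2.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if avg_aux_score(mid) < 0 else (lo, mid)
theta_emm = 0.5 * (lo + hi)
```

Bisection works here only because the toy score is monotone in a scalar θ; the multi-parameter case minimizes the quadratic form (6) numerically.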
3.1.1 Constrained indirect estimation

The estimation of the auxiliary model needs to incorporate inequality constraints, to rule out negative variances and an autocorrelation parameter greater than 1, but also to avoid poorly identified regions of the parameter space of the auxiliary model. Indirect estimation in the presence of constraints is examined in Calzolari, Fiorentini and Sentana (2004): rather than the quasi-likelihood of the auxiliary model, one has to consider the Lagrangian function

    Λ(φ; y) = ln L̃(β; y) + λ' r(β),                                 (7)

where r is the functional vector containing the restrictions, λ are the multipliers and φ = (β', λ')'.
Assuming that both the log-likelihood function and the vector of constraints are twice continuously differentiable with respect to β, the latter with Jacobian matrix ∂r(β)/∂β', the first-order conditions are

    ∂ ln L̃(β; y)/∂β + [∂r(β)/∂β']' λ = 0.                           (8)
Under the constraints, the quadratic form (6) to be minimized thus becomes

    [ Σ_{s=1}^{S} ∂Λ(φ̂; y_s(θ))/∂β ]' Ω [ Σ_{s=1}^{S} ∂Λ(φ̂; y_s(θ))/∂β ],   (9)

which is no more complex than (6) if we consider that the second term of (8) does not depend on simulated data and is therefore equal to the score of the quasi-likelihood computed at β̂ with observed data. It is thus equal to zero if the restrictions are not binding, and different from zero in case they are.

[4] Details on how to obtain an optimal weighting matrix can be found in Gallant and Tauchen (1996).
To wrap up, in our specific case we will conduct constrained EMM with non-negativity constraints for σ²_μ and σ²_e and an upper bound (of course < 1) for ρ. It is important to remark (Calzolari et al. 2004) that when ρ̂ clashes with its upper bound, the information on ρ will be contained in the Kuhn-Tucker multiplier

    λ̂ = ∂ ln L̃(β̂; y) / ∂ρ;

therefore a by-product of the constrained approach will be to eschew the poor identification problem arising when ρ is close to 1, as will be explained in the next section.
The procedure is implemented in Fortran 77.
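The role of the Kuhn-Tucker multiplier at a binding bound can be seen in a one-parameter toy example (the quasi-likelihood and all numbers are made up for illustration):

```python
def dq(rho):
    """Score of the toy quasi-log-likelihood q(rho) = -(rho - 0.95)**2,
    whose unconstrained maximizer is 0.95."""
    return -2.0 * (rho - 0.95)

bound = 0.90                         # imposed upper bound on rho
rho_unconstrained = 0.95
if rho_unconstrained > bound:        # the constraint binds
    rho_hat = bound
    lam = dq(bound)                  # Kuhn-Tucker multiplier = score at the bound
else:
    rho_hat, lam = rho_unconstrained, 0.0
```

The positive multiplier records how strongly the unconstrained score pushes against the bound, which is exactly the information on ρ exploited by the constrained approach.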
3.1.2 The auxiliary model

We use as auxiliary model the same model of interest (1), where we treat the censoring process as a random cancellation process, independent of the data. In other words, the missing observations are simply disregarded, as if missingness were ignorable. This implies a misspecification, since missingness is not at all ignorable in a Tobit model: thus, parameters estimated by applying quasi-ML to the auxiliary model will be biased (inconsistent). Correcting this bias is the purpose of the indirect estimation procedure. In addition, there will be an obvious loss of efficiency (not cured by the indirect estimation procedure) due to the complete cancellation of the censored values.
If no censoring occurs, the (T × T) covariance matrix of the i-th individual's error terms is

    Σ = Cov(ε_i) = σ²_μ ιι' + σ²_e Corr(ρ)                           (10)

where ι is a (T × 1) vector of ones, and Corr(ρ) is the (T × T) correlation matrix of an AR(1) process with coefficient ρ.
Thus, the contribution of the i-th individual's data to the log-likelihood is

    -(1/2) ln|Σ| - (1/2) (y_i - X_i β)' Σ⁻¹ (y_i - X_i β).

When some observations are cancelled, the vector, still indicated as y_i - X_i β, is compacted (thus it has fewer than T elements), and the Σ matrix is compacted as well, after the rows and columns corresponding to the cancelled data are dropped: Σ_i will be the resulting matrix. The contribution to the log-likelihood of the non-cancelled data of the i-th individual is

    -(1/2) ln|Σ_i| - (1/2) (y_i - X_i β)' Σ_i⁻¹ (y_i - X_i β).
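A direct sketch of this compacted contribution, assuming numpy (the function name and the boolean mask argument are our choices):

```python
import numpy as np

def loglik_unit(y_i, X_i, beta, Sigma, observed):
    """Log-likelihood contribution of unit i (up to an additive constant),
    dropping rows and columns of the censored observations from Sigma."""
    r = (y_i - X_i @ beta)[observed]            # compacted residual vector
    S = Sigma[np.ix_(observed, observed)]       # compacted covariance
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * logdet - 0.5 * r @ np.linalg.solve(S, r)
```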
3.1.3 Identifiability and poor identification

Still in the absence of censoring, if T = 2 the covariance matrix would be

    Σ = Cov(ε_i) = | σ²_μ + σ²_e    σ²_μ + ρσ²_e |
                   | σ²_μ + ρσ²_e   σ²_μ + σ²_e  |                   (11)

thus it contains only two independent elements, from which it is impossible to separately identify the three parameters of the auxiliary model, σ²_μ, σ²_e and ρ.
If T = 3, the covariance matrix would be

    Σ = Cov(ε_i) = | σ²_μ + σ²_e     σ²_μ + ρσ²_e    σ²_μ + ρ²σ²_e |
                   | σ²_μ + ρσ²_e    σ²_μ + σ²_e     σ²_μ + ρσ²_e  |
                   | σ²_μ + ρ²σ²_e   σ²_μ + ρσ²_e    σ²_μ + σ²_e   |   (12)

where the independent elements are three, making identification possible.
Of course the situation becomes even better for larger values of T.
In practice, however, identification can be very poor when ρ is moderately high, even for values that would not be considered dangerously close to 1 in a time-series context. Some numerical examples illustrate the problem.
Let T = 4, σ²_μ = 35, σ²_e = 5 and ρ = 0.9. The covariance matrix is

    40.00 39.50 39.05 38.65
    39.50 40.00 39.50 39.05
    39.05 39.50 40.00 39.50
    38.65 39.05 39.50 40.00

and the logarithm of its determinant is 3.664.
Let instead σ²_μ = 20, σ²_e = 20 and ρ = 0.975. In this case the covariance matrix is

    40.00 39.50 39.01 38.54
    39.50 40.00 39.50 39.01
    39.01 39.50 40.00 39.50
    38.54 39.01 39.50 40.00

and the logarithm of its determinant is 3.670.
Finally, if σ²_μ = 5, σ²_e = 35 and ρ = 0.9857, the covariance matrix is

    40.00 39.50 39.01 38.52
    39.50 40.00 39.50 39.01
    39.01 39.50 40.00 39.50
    38.52 39.01 39.50 40.00

and the logarithm of its determinant is 3.673.
Of course, the matrices and the corresponding determinants are not equal, but they are quite close to each other, suggesting that an estimation procedure would not reach convergence in a simple or straightforward way. It would be necessary to have larger values of T (so that higher powers of ρ could make the difference), or a very large number of observations, to ensure reliable (and meaningful) estimation results.
One more example, to stress the point of poor identification: σ²_μ = 9, σ²_e = 1 and ρ = 0.9082 produce the covariance matrix

    10.00  9.91  9.82  9.75
     9.91 10.00  9.91  9.82
     9.82  9.91 10.00  9.91
     9.75  9.82  9.91 10.00

which is very close to the matrix that would be produced by σ²_μ = 5, σ²_e = 5 and ρ = 0.98205:

    10.00  9.91  9.82  9.74
     9.91 10.00  9.91  9.82
     9.82  9.91 10.00  9.91
     9.74  9.82  9.91 10.00
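The near-equivalence of these parameter triples is easy to check numerically, assuming numpy (the helper name is ours):

```python
import numpy as np

def cov_ar1_plus_effect(T, sig2_mu, sig2_e, rho):
    """Sigma = sig2_mu * ones + sig2_e * AR(1) correlation matrix."""
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    return sig2_mu + sig2_e * rho ** lags

# the three parameter triples (sig2_mu, sig2_e, rho) from the text, T = 4
triples = [(35.0, 5.0, 0.9), (20.0, 20.0, 0.975), (5.0, 35.0, 0.9857)]
logdets = [np.linalg.slogdet(cov_ar1_plus_effect(4, *p))[1] for p in triples]
```

The three log-determinants come out at about 3.664, 3.670 and 3.673, as reported above, although the triples (σ²_μ, σ²_e, ρ) are far apart.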
It is quite unlikely that censoring can help identification. For this reason, our first set of Monte Carlo experiments uses moderately low values of ρ (up to 0.7). Experiments with larger values of ρ are in progress.
3.2 Simulated Maximum Likelihood

The likelihood function for the panel data Tobit model with random effects and autocorrelated disturbances can be written as

    L = Π_{i=1}^{N} L_i(y_i) = Π_{i=1}^{N} ∫_{ {y*_i : y_i = max(0, y*_i)} } φ_T(y*_i - X_i β; Σ) dy*_i   (13)

where L_i(y_i) represents the likelihood function for the i-th unit (i = 1, ..., N), and φ_T is the T-variate normal density with mean zero and variance-covariance matrix Σ, given in (10). Let us indicate with y_i0 the censored observations for unit i, and with y_i1 the uncensored (positive) observations for unit i.
We can distinguish three cases:
1. All the observations for unit i are positive. In this case no problem arises in computing the contribution of unit i to the likelihood, which is equal to L_i = φ_T(y_i - X_i β; Σ).

2. All the observations for unit i are equal to zero. The contribution of unit i to the likelihood is equal to L_i = Φ_T(-X_i β; Σ), where a T-fold integral needs to be evaluated. The GHK simulator [5] is used for the evaluation of the normal probabilities.

[5] The algorithm was the most reliable among those examined by Hajivassiliou, McFadden and Ruud (1996). See e.g. Hajivassiliou and McFadden (1998) for details.
3. Observations for unit i display both positive and zero values. We partition the T observations of unit i into two mutually exclusive sets: one containing the censored observations, indexed by i0, and one containing the uncensored observations, indexed by i1, with T = T_i0 + T_i1. As a result, L_i can be decomposed as

    L_i(y_i) = L_i(y_i0, y_i1) = φ_{T_i1}(y_i1 - X_{i1} β; Σ_1) Φ_{T_i0}(-X_{i0} β | y_i1; Σ_01).

The likelihood is composed of two terms: the first (φ_{T_i1}) has a closed form expression, and the second (Φ_{T_i0}) is the probability that all components of {y*_i0} are censored (i.e. the components of {y*_i0} are negative), conditional on the set {y_i1}. The GHK simulator is employed for the evaluation of integrals of dimension higher than 1 (i.e. if T_i0 > 1; otherwise Φ_{T_i0} requires the evaluation of a one-dimensional integral, posing no computational problems).
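The orthant probabilities Φ_T above are what the GHK simulator approximates: recursively draw truncated standard normals along the Cholesky factor and multiply the conditional probabilities. A standard-library Python sketch (the function name, draw count and truncation guard are ours):

```python
import math
import random
from statistics import NormalDist

def ghk_orthant(L, upper, n_draws=20000, seed=7):
    """GHK estimate of P(Y < upper) for Y ~ N(0, L L'), L lower triangular.
    Each draw samples truncated standard normals recursively and averages
    the product of the conditional probabilities Phi(b_t)."""
    nd = NormalDist()
    rng = random.Random(seed)
    T = len(upper)
    total = 0.0
    for _ in range(n_draws):
        eta, w = [], 1.0
        for t in range(T):
            m = sum(L[t][j] * eta[j] for j in range(t))
            b = (upper[t] - m) / L[t][t]
            p = nd.cdf(b)
            w *= p
            # standard normal draw truncated above at b (inverse-cdf method)
            u = rng.random() * p
            eta.append(nd.inv_cdf(max(u, 1e-12)))
        total += w
    return total / n_draws
```

In the bivariate case with correlation 0.5 the estimate converges to the exact orthant probability 1/4 + arcsin(0.5)/(2π) = 1/3.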
The major drawback of this estimation method is its inconsistency when the number of pseudo-random values used to approximate the likelihood function is fixed. This is due to the fact that the likelihood function is approximated, whereas the log-likelihood is maximized. However, the simulated maximum likelihood estimator has the same performance as the maximum likelihood estimator if the number of observations (in our case N·T) and the number of simulated pseudo-random values (S) tend to infinity in such a way that √(N·T)/S tends to zero.
4 Monte Carlo results

To study the properties of these methods, we perform a simulation study, applying the estimation methods to pseudo-observed data produced by simulating the data generating process.
We consider the following equation:

    y*_it = β_0 + β_1 X_it + ε_it                                    (14)

Observations are censored: y*_it is observed only if it is greater than 0, and a 0 is recorded otherwise. The X_it were generated i.i.d. N(6, 4).
In the set of experiments discussed in this paper, β_1 is set equal to 2 and the intercept equals -12, corresponding to approximately 50% of censored observations.
The error term is obtained as

    ε_it = μ_i + e_it                                                (15)

with e_it = ρ e_{i,t-1} + σ_w w_it, w_it i.i.d. N(0, 1), and σ²_w = σ²_e (1 - ρ²).
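A sketch of this data generating process, assuming numpy (the function name and the stationary initial condition e_i0 ~ N(0, σ²_e) are our choices; the intercept is taken as -12 so that E[y*] = -12 + 2·6 = 0, consistent with roughly 50% censoring):

```python
import numpy as np

def simulate_panel_tobit(N=500, T=10, beta0=-12.0, beta1=2.0,
                         sig2_mu=1.0, sig2_e=1.0, rho=0.7, seed=0):
    """One pseudo-observed sample: X ~ N(6, 4), eps = mu_i + AR(1) e_it,
    observed y = max(0, y*)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(6.0, 2.0, (N, T))                 # variance 4 -> sd 2
    mu = rng.normal(0.0, np.sqrt(sig2_mu), (N, 1))   # individual effects
    e = np.empty((N, T))
    e[:, 0] = rng.normal(0.0, np.sqrt(sig2_e), N)    # stationary start
    sig_w = np.sqrt(sig2_e * (1.0 - rho ** 2))       # innovation sd
    for t in range(1, T):
        e[:, t] = rho * e[:, t - 1] + rng.normal(0.0, sig_w, N)
    y_star = beta0 + beta1 * X + mu + e
    return X, np.maximum(0.0, y_star)
```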
Each estimation considered N = 500 and T = 10, for a total of 5000 observations. As an example, the sample can resemble a panel of firms or a sample of households observed over a 10-year time period (approximately the number of observations in our empirical application; see Section 5). For indirect estimation we used S = 10; thus 50000 simulated observations are used to compute the score of the quasi-likelihood.
Three true values of ρ are used in the experiments: 0, 0.5, 0.7. Experiments with larger values of ρ are still in progress (they need a careful use of the constraints to reduce the problems of poor identification of the auxiliary model parameters). In each experiment 1000 Monte Carlo replications are performed, and means and variances of the estimated parameters are displayed in Table 2. The results for the case of ρ = 0 in Table 2 allow comparison with the same estimation method applied to the model without autocorrelation (Table 1).
In the case of simulated likelihood, S is set to 75 (results at the bottom of Table 2). Due to large increases in computational time, the method of simulated likelihood is evaluated on the basis of 100 Monte Carlo replications. Results will be extended before Conference time.

                  β_0        β_1        σ²_μ       σ²_e
True values      -12.00     2.000      1.000      1.000
Results of indirect estimation
MC mean          -12.01     2.001      0.9973     1.003
MC var.           0.2423e-1 0.3391e-3  0.6968e-2  0.1407e-2
Results of maximum likelihood (Gauss-Hermite quadrature)
MC mean          -12.00     2.001      0.9969     1.002
MC var.           0.8926e-2 0.1193e-3  0.2346e-2  0.3847e-3
Indirect estimation: mean and var. of 1000 MC replications.
Max. Lik.: mean and var. of 100 MC replications.
Estimation by Gauss-Hermite quadrature (25 points) is obtained in STATA.
Table 1: Results of Monte Carlo experiments, uncorrelated error terms.
Some considerations can be derived from the results of Tables 1 and 2.
1. When results from both methods are available, maximum likelihood (with Gauss-Hermite quadrature, or simulated) has variances between one third and one half of the corresponding variances of the indirect estimation parameters. Roughly speaking, this is more or less what we might expect, since indirect estimation has completely ignored the censored values (about a half of the total).
2. Indirect estimation gets rid of the bias due to misspecification of the auxiliary model. This is obtained with a considerable reduction of the computational cost with respect to maximum likelihood.
3. (Work in progress, no results displayed yet.) Use of the constrained indirect procedure may help in producing reasonable results when a large value of the autocorrelation parameter causes poor identification of the auxiliary model.
                  β_0        β_1        σ²_μ       σ²_e       ρ
Results of indirect estimation
True values      -12.00     2.000      1.000      1.000      0.0000
MC mean (a.m.)   -10.49     1.829      0.8849     0.9186     0.1834e-2
MC mean (m.i.)(a)-11.99     1.999      1.002      1.002      0.1451e-2
MC var. (m.i.)    0.2309e-1 0.3044e-3  0.9585e-2  0.1751e-2  0.1935e-2
True values      -12.00     2.000      1.000      1.000      0.5000
MC mean (a.m.)   -10.74     1.858      0.9328     0.9131     0.4669
MC mean (m.i.)   -11.99     2.000      1.002      1.003      0.5000
MC var. (m.i.)    0.1768e-1 0.2355e-3  0.1304e-1  0.3705e-2  0.1257e-2
True values      -12.00     2.000      1.000      1.000      0.7000
MC mean (a.m.)   -10.98     1.886      0.9948     0.9005     0.6651
MC mean (m.i.)   -11.99     2.002      1.000      1.011      0.6992
MC var. (m.i.)    0.1519e-1 0.1915e-3  0.2064e-1  0.9426e-2  0.1091e-2
True values      -12.00     2.000      5.000      5.000      0.0000
MC mean (a.m.)    -7.347    1.518      3.070      3.873      0.4718e-3
MC mean (m.i.)(b)-12.01     2.000      5.044      5.014      0.1413e-2
MC var. (m.i.)    0.1386    0.1450e-2  0.4040     0.6150e-1  0.2615e-2
True values      -12.00     2.000      5.000      5.000      0.5000
MC mean (a.m.)    -7.936    1.590      3.394      3.814      0.4054
MC mean (m.i.)   -12.01     2.000      5.055      5.020      0.4996
MC var. (m.i.)    0.1235    0.1119e-2  0.5727     0.1216     0.1557e-2
True values      -12.00     2.000      5.000      5.000      0.7000
MC mean (a.m.)    -8.529    1.664      3.827      3.679      0.5992
MC mean (m.i.)   -12.02     2.000      5.052      5.055      0.6997
MC var. (m.i.)    0.1309    0.8381e-3  0.9111     0.3026     0.1329e-2
Results of simulated maximum likelihood estimation
True values      -12.00     2.000      1.000      1.000      0.7000
MC mean          -12.01     2.000      1.002      1.008      0.7006
MC var.           0.9927e-2 0.1182e-3  0.1005e-1  0.4916e-2  0.5420e-3
m.i.: model of interest; a.m.: auxiliary model.
(a) The algorithm did not converge in 8 cases.
(b) The algorithm did not converge in 4 cases.
Indirect estimation: mean and var. of 1000 MC replications.
Simulated Max. Lik.: mean and var. of 100 MC replications.
Table 2: Results of Monte Carlo experiments, autocorrelated error terms.
5 An application to the patent-R&D relationship

Innovation and technological change are largely recognized as the main drivers of long-term economic growth. Despite that, the empirical account of the dynamic relationship between the inputs and outputs of technological activities is hindered by the difficulties in devising indicators that can proxy in a consistent and systematic way the inputs and the outputs of technological activities. Against this background, the literature on the sources of technological growth has relied on the level of R&D expenditure as a proxy for R&D input, and there is increasing acknowledgment of the idea that patents can be fruitfully employed as a proxy for R&D output (Griliches 1990, Jaffe and Trajtenberg 2002, Cincera 1997).
Most available empirical studies rely on count data models for investigating the relationship between patents and (log) R&D (Hall, Griliches and Hausman 1986, Hausman, Hall and Griliches 1984, Cincera 1997). Even though patent data can only take integer values, the variable lies on a large support [6], allowing also the estimation of a linear model. Nonetheless, a large proportion of observations report zero patents; therefore the censored regression model is more appropriate.
We apply the proposed methodology to the data employed by Hall et al. (1986), which cover information about the patenting and R&D activity of a sample of 346 US manufacturing firms over the period 1970-1979 [7]. Additional information is available for a larger set of firms but over a more limited time frame (642 US manufacturing firms observed over the period 1972-1979). The model considers the number of patents as a function of log R&D expenditure. Since the analysis of Hall et al. (1986) reveals that R&D and patents appear to be dominated by a contemporaneous relationship with little effect of leads and lags, we only focus on the contemporaneous relationship, considering the possibility of autocorrelation of the error terms.
Estimates obtained by indirect estimation and simulated maximum likelihood are reported in Table 3.
Computation of standard errors is in progress, and presumably it will help explain the difference in the estimated coefficients, particularly the coefficient of LogR, which is the variable of primary interest. However, the estimated ρ shows that substantial correlation (about 0.7) exists across the error terms in our data.
[6] In Cincera (1997) the number of patents applied for by a firm ranges from 0 to 925, whereas in Hall et al. (1986) the maximum number of patents is 831.
[7] In a future version of the paper we will also take into consideration the data analysed by Cincera (1997).

                             Sample 1            Sample 2
Variable                  N = 346, T = 10      N = 642, T = 8
Indirect Estimation
Constant                     159.1               181.2
Scientific sector            15.05               16.70
LogK                         14.24               12.66
LogR                         39.54               41.86
σ²_μ                         4418.               5348.
σ²_e                         162.6               332.3
ρ                            0.8190              0.7629
Simulated Maximum Likelihood
Constant                     47.75               43.24
Scientific sector            17.91               14.41
LogK                         17.20               16.47
LogR                         5.63                3.17
σ²_μ                         4443                4443
σ²_e                         589.4               300.6
ρ                            0.7610              0.6430
% censored                   17.49               22.88
Table 3: Estimation results.

6 Summary

This paper has shown the performance of simulated estimators in the context of the Tobit model with random effects and autocorrelated disturbances.
Monte Carlo experiments highlight the good performance of the methods in this context. A problem of poor identifiability arises for values of the autocorrelation parameter that are moderately high. Constrained indirect estimation is proposed as a tool for solving the problem.
An example of application is presented for the analysis of the patent-R&D relationship. High autocorrelation (about 0.7) is estimated.
References

Arellano, M. and Hahn, J.: 2005, Understanding Bias in Nonlinear Panel Models: Some Recent Developments, Invited Lecture at the Econometric Society World Congress, London.
Bhargava, A., Franzini, L. and Narendranathan, W.: 1982, Serial Correlation and the Fixed Effects Model, The Review of Economic Studies 49(4), 533-549.
Calzolari, G., Fiorentini, G. and Sentana, E.: 2004, Constrained Indirect Estimation, Review of Economic Studies 71(4), 945-973.
Calzolari, G., Magazzini, L. and Mealli, F.: 2001, Simulation-Based Estimation of Tobit Model with Random Effects, in R. Friedmann, L. Knüppel and H. Lütkepohl (eds), Econometric Studies - A Festschrift in Honour of Joachim Frohn, LIT Verlag Berlin-Hamburg-Münster, pp. 349-369.
Chamberlain, G.: 1980, Analysis of Covariance with Qualitative Data, The Review of Economic Studies 47(1), 225-238.
Cincera, M.: 1997, Patents, R&D, and Technological Spillovers at the Firm Level: Some Evidence from Econometric Count Models for Panel Data, Journal of Applied Econometrics 12(3), 265-280.
Dagenais, M.: 1989, Small Sample Performance of Parameter Estimators for Tobit Models with Serial Correlation, Journal of Statistical Computation and Simulation 33(1), 11-26.
Dong, D., Schmit, T., Kaiser, H. and Chung, C.: 2003, Modeling the Household Purchasing Process Using a Panel Data Tobit Model, Cornell University, Dept. of Applied Economics and Management, Research Bulletin RB 2003-07.
Gallant, A. and Tauchen, G.: 1996, Which Moments to Match?, Econometric Theory 12(4), 657-681.
Gourieroux, C. and Monfort, A.: 1996, Simulation-Based Econometric Methods, Oxford University Press, USA.
Gourieroux, C., Monfort, A. and Renault, E.: 1993, Indirect Inference, Journal of Applied Econometrics 8(S1), S85-S118.
Greene, W.: 2004, Fixed Effects and Bias Due to the Incidental Parameters Problem in the Tobit Model, Econometric Reviews 23(2), 125-147.
Griliches, Z.: 1990, Patent Statistics as Economic Indicators: A Survey, Journal of Economic Literature 28(4), 1661-1707.
Hajivassiliou, V.: 1994, A Simulation Estimation Analysis of the External Debt Crises of Developing Countries, Journal of Applied Econometrics 9(2), 109-131.
Hajivassiliou, V. and McFadden, D.: 1998, The Method of Simulated Scores for the Estimation of LDV Models, Econometrica 66(4), 863-896.
Hajivassiliou, V., McFadden, D. and Ruud, P.: 1996, Simulation of Multivariate Normal Rectangle Probabilities and Their Derivatives: Theoretical and Computational Results, Journal of Econometrics 72(1-2), 85-134.
Hall, B., Griliches, Z. and Hausman, J.: 1986, Patents and R and D: Is There a Lag?, International Economic Review 27(2), 265-283.
Hausman, J., Hall, B. and Griliches, Z.: 1984, Econometric Models for Count Data with an Application to the Patents-R&D Relationship, Econometrica 52(4), 909-938.
Jaffe, A. and Trajtenberg, M.: 2002, Patents, Citations, and Innovations: A Window on the Knowledge Economy, MIT Press.
Karlsson, S. and Skoglund, J.: 2004, Maximum-Likelihood Based Inference in the Two-Way Random Effects Model with Serially Correlated Time Effects, Empirical Economics 29(1), 79-88.
Lerman, S. and Manski, C.: 1981, On the Use of Simulated Frequencies to Approximate Choice Probabilities, in C. Manski and D. McFadden (eds), Structural Analysis of Discrete Data with Econometric Applications, Vol. 10, MIT Press, pp. 305-319.
Lillard, L. and Willis, R.: 1978, Dynamic Aspects of Earning Mobility, Econometrica 46(5), 985-1012.
Neyman, J. and Scott, E.: 1948, Consistent Estimates Based on Partially Consistent Observations, Econometrica 16(1), 1-32.
Pakes, A.: 1986, Patents as Options: Some Estimates of the Value of Holding European Patent Stocks, Econometrica 54(4), 755-784.
Robinson, P.: 1982, On the Asymptotic Properties of Estimators of Models Containing Limited Dependent Variables, Econometrica 50(1), 27-41.
Schmit, T., Gould, B., Dong, D., Kaiser, H. and Chung, C.: 2003, The Impact of Generic Advertising on US Household Cheese Purchases: A Censored Autocorrelated Regression Approach, Canadian Journal of Agricultural Economics 51(1), 15-37.
Smith Jr, A.: 1993, Estimating Nonlinear Time-Series Models Using Simulated Vector Autoregressions, Journal of Applied Econometrics 8, 63-84.
Tobin, J.: 1958, Estimation of Relationships for Limited Dependent Variables, Econometrica 26(1), 24-36.
Zeger, S. and Brookmeyer, R.: 1986, Regression Analysis with Censored Autocorrelated Data, Journal of the American Statistical Association 81(395), 722-729.