Sections

  • 1. INTRODUCTION
  • 2.1. The Agents and Their Decision Problems
  • 2.2. Exogenous Processes
  • 2.3. Equilibrium Relationships
  • 2.4. Model Solutions
  • 2.5. Measurement Equations
  • 3.1. Potential Model Misspecification
  • 3.2. Identification
  • 3.3. Priors
  • 3.4. The Road Ahead
  • 4.1. Posterior Computations
  • 4.2. Simulation Results for $\mathcal{M}_1(L)/\mathcal{D}_1(L)$
  • 4.3. A Potential Pitfall: Simulation Results for $\mathcal{M}_2(L)/\mathcal{D}_2(L)$
  • 4.4. Parameter Transformations for $\mathcal{M}_1(L)/\mathcal{D}_1(L)$
  • 4.5. Empirical Applications
  • 5.1. Posterior Predictive Checks
  • 5.2. Posterior Odds Comparisons of DSGE Models
  • 5.3. Comparison of a DSGE Model with a VAR
  • 6. NONLINEAR METHODS
  • 7. CONCLUSIONS AND OUTLOOK

Econometric Reviews, 26(2–4):113–172, 2007

Copyright © Taylor & Francis Group, LLC
ISSN: 0747-4938 print/1532-4168 online
DOI: 10.1080/07474930701220071
BAYESIAN ANALYSIS OF DSGE MODELS
Sungbae An School of Economics and Social Sciences, Singapore Management
University, Singapore
Frank Schorfheide Department of Economics, University of Pennsylvania,
Philadelphia, Pennsylvania, USA
This paper reviews Bayesian methods that have been developed in recent years to estimate
and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation
of linearized DSGE models, the evaluation of models based on Bayesian model checking,
posterior odds comparisons, and comparisons to vector autoregressions, as well as the non-linear
estimation based on a second-order accurate model solution. These methods are applied to data
generated from correctly specified and misspecified linearized DSGE models and a DSGE model
that was solved with a second-order perturbation method.
Keywords Bayesian analysis; DSGE models; Model evaluation; Vector autoregressions.
JEL Classification C11; C32; C51; C52.
1. INTRODUCTION
Dynamic stochastic general equilibrium (DSGE) models are micro-
founded optimization-based models that have become very popular in
macroeconomics over the past 25 years. They are taught in virtually
every Ph.D. program and represent a significant share of publications in
macroeconomics. For a long time the quantitative evaluation of DSGE
models was conducted without formal statistical methods. While DSGE
models provide a complete multivariate stochastic process representation
for the data, simple models impose very strong restrictions on actual time
series and are in many cases rejected against less restrictive specifications
such as vector autoregressions (VAR). Apparent model misspecifications
were used as an argument in favor of informal calibration approaches
along the lines of Kydland and Prescott (1982).
Received August 20, 2005; Accepted July 15, 2006
Address correspondence to Frank Schorfheide, Department of Economics, University of
Pennsylvania, 3718 Locust Walk, Philadelphia, PA 19104, USA; E-mail: schorf@ssc.upenn.edu
Subsequently, many authors have developed econometric frameworks
that formalize aspects of the calibration approach by taking model
misspecification explicitly into account. Examples are Smith (1993),
Watson (1993), Canova (1994), DeJong et al. (1996), Diebold et al.
(1998), Geweke (1999b), Schorfheide (2000), Dridi et al. (2007), and
Bierens (2007). At the same time, macroeconomists have improved the
structural models and relaxed many of the misspecified restrictions of
the first generation of DSGE models. As a consequence, more traditional
econometric techniques have become applicable. The most recent vintage
of DSGE models is not just attractive from a theoretical perspective but
is also emerging as a useful tool for forecasting and quantitative policy
analysis in macroeconomics. Moreover, owing to improved time series fit
these models are gaining credibility in policy-making institutions such as
central banks.
This paper reviews Bayesian estimation and evaluation techniques
that have been developed in recent years for empirical work with DSGE
models.¹
We focus on methods that are built around a likelihood function
derived from the DSGE model. The econometric analysis has to cope
with several challenges, including potential model misspecification and
identification problems. We will illustrate how a Bayesian framework can
address these challenges. Most of the techniques described in this article
have been developed and applied in other papers. Our contribution is
to give a unified perspective, by applying these methods successively to
artificial data generated from a DSGE model and a VAR. We provide some
evidence on the performance of Markov Chain Monte Carlo (MCMC)
methods that have been applied to the Bayesian estimation of DSGE
models. Moreover, we present new results on the use of first-order accurate
versus second-order accurate solutions in the estimation of DSGE models.
The paper is structured as follows. Section 2 outlines two versions of
a five-equation New Keynesian DSGE model along the lines of Woodford
(2003) that differ with respect to the monetary policy rule. This model
serves as the foundation for the current generation of large-scale models
that are used for the analysis of monetary policy in academic and central
bank circles. Section 3 discusses some preliminaries, in particular the
challenges that have to be confronted by the econometric framework.
We proceed by generating several data sets that are used subsequently
for model estimation and evaluation. We simulate samples from the first-
order accurate solution of the DSGE model presented in Section 2,
from a modified version of the DSGE model in order to introduce
misspecification, and from the second-order accurate solution of the
benchmark DSGE model.

¹ There is also an extensive literature on classical estimation and evaluation of DSGE models, but a detailed survey of these methods is beyond the scope of this article. The interested reader is referred to Kim and Pagan (1995) and the book by Canova (2007), which discusses both classical and Bayesian methods for the analysis of DSGE models.
Owing to the computational burden associated with the likelihood
evaluation for non-linear solutions of the DSGE model, most of the
empirical literature has estimated linearized DSGE models. Section 4
describes a Random-Walk Metropolis (RWM) algorithm and an Importance
Sampler (IS) algorithm that can be used to calculate the posterior
moments of DSGE model parameters and transformations thereof. We
compare the performance of the algorithms and provide an example
in which they explore the posterior distribution only locally in the
neighborhood of modes that are separated from each other by a deep
valley in the surface of the posterior density.
Model evaluation is an important part of the empirical work with
DSGE models. We consider three techniques in Section 5: posterior
predictive model checking, model comparisons based on posterior odds,
and comparisons of DSGE models to VARs. We illustrate these techniques
by fitting correctly specified and misspecified DSGE models to artificial
data. In Section 6 we construct posterior distributions for DSGE model
parameters based on a second-order accurate solution and compare them
to posteriors obtained from a linearized DSGE model. Finally, Section 7
concludes and provides an outlook on future work.
2. A PROTOTYPICAL DSGE MODEL
Our model economy consists of a final goods producing firm, a
continuum of intermediate goods producing firms, a representative
household, and a monetary as well as a fiscal authority. This model has
become a benchmark specification for the analysis of monetary policy
and is analyzed in detail, for instance, in Woodford (2003). To keep the
model specification simple, we abstract from wage rigidities and capital
accumulation. More elaborate versions can be found in Smets and Wouters
(2003) and Christiano et al. (2005).
2.1. The Agents and Their Decision Problems
The perfectly competitive, representative, final goods producing firm
combines a continuum of intermediate goods indexed by j ∈ [0, 1] using
the technology
$$Y_t = \left[\int_0^1 Y_t(j)^{1-\nu}\,dj\right]^{\frac{1}{1-\nu}}. \tag{1}$$
Here $1/\nu > 1$ represents the elasticity of demand for each intermediate good. The firm takes input prices $P_t(j)$ and output prices $P_t$ as given. Profit maximization implies that the demand for intermediate goods is
$$Y_t(j) = \left(\frac{P_t(j)}{P_t}\right)^{-1/\nu} Y_t. \tag{2}$$
The relationship between intermediate goods prices and the price of the final good is
$$P_t = \left[\int_0^1 P_t(j)^{\frac{\nu-1}{\nu}}\,dj\right]^{\frac{\nu}{\nu-1}}. \tag{3}$$
Intermediate good $j$ is produced by a monopolist who has access to the linear production technology
$$Y_t(j) = A_t N_t(j), \tag{4}$$
where $A_t$ is an exogenous productivity process that is common to all firms and $N_t(j)$ is the labor input of firm $j$. Labor is hired in a perfectly competitive factor market at the real wage $W_t$. Firms face nominal rigidities in terms of quadratic price adjustment costs
$$AC_t(j) = \frac{\phi}{2}\left(\frac{P_t(j)}{P_{t-1}(j)} - \pi\right)^2 Y_t(j), \tag{5}$$
where $\phi$ governs the price stickiness in the economy and $\pi$ is the steady-state inflation rate associated with the final good. Firm $j$ chooses its labor input $N_t(j)$ and the price $P_t(j)$ to maximize the present value of future profits
$$\mathbb{E}_t\left[\sum_{s=0}^{\infty}\beta^s Q_{t+s|t}\left(\frac{P_{t+s}(j)}{P_{t+s}}Y_{t+s}(j) - W_{t+s}N_{t+s}(j) - AC_{t+s}(j)\right)\right]. \tag{6}$$
Here $Q_{t+s|t}$ is the time $t$ value of a unit of the consumption good in period $t+s$ to the household, which is treated as exogenous by the firm.
The representative household derives utility from real money balances $M_t/P_t$ and consumption $C_t$ relative to a habit stock. We assume that the habit stock is given by the level of technology $A_t$. This assumption ensures that the economy evolves along a balanced growth path even if the utility function is additively separable in consumption, real money balances, and leisure. The household derives disutility from hours worked $H_t$ and maximizes
$$\mathbb{E}_t\left[\sum_{s=0}^{\infty}\beta^s\left(\frac{(C_{t+s}/A_{t+s})^{1-\tau} - 1}{1-\tau} + \chi_M\ln\left(\frac{M_{t+s}}{P_{t+s}}\right) - \chi_H H_{t+s}\right)\right], \tag{7}$$
where $\beta$ is the discount factor, $1/\tau$ is the intertemporal elasticity of substitution, and $\chi_M$ and $\chi_H$ are scale factors that determine steady-state real money balances and hours worked. We will set $\chi_H = 1$. The household supplies perfectly elastic labor services to the firms taking the real wage $W_t$ as given. The household has access to a domestic bond market where nominal government bonds $B_t$ are traded that pay (gross) interest $R_t$. Furthermore, it receives aggregate residual real profits $D_t$ from the firms and has to pay lump-sum taxes $T_t$. Thus the household's budget constraint is of the form
$$P_t C_t + B_t + M_t - M_{t-1} + T_t = P_t W_t H_t + R_{t-1}B_{t-1} + P_t D_t + P_t SC_t, \tag{8}$$
where $SC_t$ is the net cash inflow from trading a full set of state-contingent securities. The usual transversality condition on asset accumulation applies, which rules out Ponzi schemes.
Monetary policy is described by an interest rate feedback rule of the form
$$R_t = R_t^{*\,1-\rho_R} R_{t-1}^{\rho_R} e^{\epsilon_{R,t}}, \tag{9}$$
where $\epsilon_{R,t}$ is a monetary policy shock and $R_t^*$ is the (nominal) target rate. We consider two specifications for $R_t^*$, one in which the central bank reacts to inflation and deviations of output from potential output:
$$R_t^* = r\pi^*\left(\frac{\pi_t}{\pi^*}\right)^{\psi_1}\left(\frac{Y_t}{Y_t^*}\right)^{\psi_2} \quad \text{(output gap rule specification)} \tag{10}$$
and a second specification in which the central bank responds to deviations of output growth from its equilibrium steady-state $\gamma$:
$$R_t^* = r\pi^*\left(\frac{\pi_t}{\pi^*}\right)^{\psi_1}\left(\frac{Y_t}{\gamma Y_{t-1}}\right)^{\psi_2} \quad \text{(output growth rule specification)}. \tag{11}$$
Here $r$ is the steady-state real interest rate, $\pi_t$ is the gross inflation rate defined as $\pi_t = P_t/P_{t-1}$, and $\pi^*$ is the target inflation rate, which in equilibrium coincides with the steady-state inflation rate. $Y_t^*$ in (10) is the level of output that would prevail in the absence of nominal rigidities.
The fiscal authority consumes a fraction $\zeta_t$ of aggregate output $Y_t$, where $\zeta_t \in [0, 1]$ follows an exogenous process. The government levies a lump-sum tax (subsidy) to finance any shortfalls in government revenues (or to rebate any surplus). The government's budget constraint is given by
$$P_t G_t + R_{t-1}B_{t-1} = T_t + B_t + M_t - M_{t-1}, \tag{12}$$
where $G_t = \zeta_t Y_t$.
2.2. Exogenous Processes

The model economy is perturbed by three exogenous processes. Aggregate productivity evolves according to
$$\ln A_t = \ln\gamma + \ln A_{t-1} + \ln z_t, \quad \text{where } \ln z_t = \rho_z \ln z_{t-1} + \epsilon_{z,t}. \tag{13}$$
Thus on average technology grows at the rate $\gamma$, and $z_t$ captures exogenous fluctuations of the technology growth rate. Define $g_t = 1/(1-\zeta_t)$. We assume that
$$\ln g_t = (1-\rho_g)\ln g + \rho_g \ln g_{t-1} + \epsilon_{g,t}. \tag{14}$$
Finally, the monetary policy shock $\epsilon_{R,t}$ is assumed to be serially uncorrelated. The three innovations are independent of each other at all leads and lags and are normally distributed with means zero and standard deviations $\sigma_z$, $\sigma_g$, and $\sigma_R$, respectively.
2.3. Equilibrium Relationships

We consider the symmetric equilibrium in which all intermediate goods producing firms make identical choices so that the $j$ subscript can be omitted. The market clearing conditions are given by
$$Y_t = C_t + G_t + AC_t \quad \text{and} \quad H_t = N_t. \tag{15}$$
Since the households have access to a full set of state-contingent claims, $Q_{t+s|t}$ in (6) is
$$Q_{t+s|t} = (C_{t+s}/C_t)^{-\tau}(A_t/A_{t+s})^{1-\tau}. \tag{16}$$
It can be shown that output, consumption, interest rates, and inflation have to satisfy the following optimality conditions:
$$1 = \beta\,\mathbb{E}_t\left[\left(\frac{C_{t+1}/A_{t+1}}{C_t/A_t}\right)^{-\tau}\frac{A_t}{A_{t+1}}\frac{R_t}{\pi_{t+1}}\right] \tag{17}$$
$$1 = \frac{1}{\nu}\left[1 - \left(\frac{C_t}{A_t}\right)^{\tau}\right] + \phi(\pi_t - \pi)\left[\left(1 - \frac{1}{2\nu}\right)\pi_t + \frac{\pi}{2\nu}\right] - \phi\beta\,\mathbb{E}_t\left[\left(\frac{C_{t+1}/A_{t+1}}{C_t/A_t}\right)^{-\tau}\frac{Y_{t+1}/A_{t+1}}{Y_t/A_t}(\pi_{t+1} - \pi)\pi_{t+1}\right]. \tag{18}$$
In the absence of nominal rigidities ($\phi = 0$) aggregate output is given by
$$Y_t^* = (1-\nu)^{1/\tau}A_t g_t, \tag{19}$$
which is the target level of output that appears in the output gap rule specification.
Since the non-stationary technology process $A_t$ induces a stochastic trend in output and consumption, it is convenient to express the model in terms of detrended variables $c_t = C_t/A_t$ and $y_t = Y_t/A_t$. The model economy has a unique steady-state in terms of the detrended variables that is attained if the innovations $\epsilon_{R,t}$, $\epsilon_{g,t}$, and $\epsilon_{z,t}$ are zero at all times. The steady-state inflation $\pi$ equals the target rate $\pi^*$ and
$$r = \frac{\gamma}{\beta}, \quad R = r\pi^*, \quad c = (1-\nu)^{1/\tau}, \quad y = g(1-\nu)^{1/\tau}. \tag{20}$$
Let $\hat{x}_t = \ln(x_t/x)$ denote the percentage deviation of a variable $x_t$ from its steady-state $x$. Then the model can be expressed as
$$1 = \beta\,\mathbb{E}_t\left[e^{-\tau\hat{c}_{t+1} + \tau\hat{c}_t + \hat{R}_t - \hat{z}_{t+1} - \hat{\pi}_{t+1}}\right] \tag{21}$$
$$\frac{1-\nu}{\nu\phi\pi^2}\left(e^{\tau\hat{c}_t} - 1\right) = \left(e^{\hat{\pi}_t} - 1\right)\left[\left(1 - \frac{1}{2\nu}\right)e^{\hat{\pi}_t} + \frac{1}{2\nu}\right] - \beta\,\mathbb{E}_t\left[\left(e^{\hat{\pi}_{t+1}} - 1\right)e^{-\tau\hat{c}_{t+1} + \tau\hat{c}_t + \hat{y}_{t+1} - \hat{y}_t + \hat{\pi}_{t+1}}\right] \tag{22}$$
$$e^{\hat{c}_t - \hat{y}_t} = e^{-\hat{g}_t} - \frac{\phi\pi^2 g}{2}\left(e^{\hat{\pi}_t} - 1\right)^2 \tag{23}$$
$$\hat{R}_t = \rho_R\hat{R}_{t-1} + (1-\rho_R)\psi_1\hat{\pi}_t + (1-\rho_R)\psi_2(\hat{y}_t - \hat{g}_t) + \epsilon_{R,t} \tag{24}$$
$$\hat{g}_t = \rho_g\hat{g}_{t-1} + \epsilon_{g,t} \tag{25}$$
$$\hat{z}_t = \rho_z\hat{z}_{t-1} + \epsilon_{z,t}. \tag{26}$$
For the output growth rule specification, Equation (24) is replaced by
$$\hat{R}_t = \rho_R\hat{R}_{t-1} + (1-\rho_R)\psi_1\hat{\pi}_t + (1-\rho_R)\psi_2\left(\Delta\hat{y}_t + \hat{z}_t\right) + \epsilon_{R,t}. \tag{27}$$
2.4. Model Solutions

Equations (21) to (26) form a non-linear rational expectations system in the variables $\hat{y}_t$, $\hat{c}_t$, $\hat{\pi}_t$, $\hat{R}_t$, $\hat{g}_t$, and $\hat{z}_t$ that is driven by the vector of innovations $\epsilon_t = [\epsilon_{R,t}, \epsilon_{g,t}, \epsilon_{z,t}]'$. This rational expectations system has to be solved before the DSGE model can be estimated. Define²
$$s_t = [\hat{y}_t, \hat{c}_t, \hat{\pi}_t, \hat{R}_t, \epsilon_{R,t}, \hat{g}_t, \hat{z}_t]'.$$
The solution of the rational expectations system takes the form
$$s_t = \Phi(s_{t-1}, \epsilon_t; \theta). \tag{28}$$
From an econometric perspective, $s_t$ can be viewed as a (partially latent) state vector in a non-linear state space model and (28) is the state transition equation.

² Under the output growth rule specification for $R_t^*$ the vector $s_t$ also contains $\hat{y}_{t-1}$.
A variety of numerical techniques are available to solve rational expectations systems. In the context of likelihood-based DSGE model estimation, linear approximation methods are very popular because they lead to a state–space representation of the DSGE model that can be analyzed with the Kalman filter. Linearization and straightforward manipulation of Equations (21) to (23) yields
$$\hat{y}_t = \mathbb{E}_t[\hat{y}_{t+1}] + \hat{g}_t - \mathbb{E}_t[\hat{g}_{t+1}] - \frac{1}{\tau}\left(\hat{R}_t - \mathbb{E}_t[\hat{\pi}_{t+1}] - \mathbb{E}_t[\hat{z}_{t+1}]\right) \tag{29}$$
$$\hat{\pi}_t = \beta\,\mathbb{E}_t[\hat{\pi}_{t+1}] + \kappa(\hat{y}_t - \hat{g}_t) \tag{30}$$
$$\hat{c}_t = \hat{y}_t - \hat{g}_t, \tag{31}$$
where
$$\kappa = \tau\frac{1-\nu}{\nu\pi^2\phi}. \tag{32}$$
Equations (29) to (31) combined with (24) to (26) and the trivial identity $\epsilon_{R,t} = \epsilon_{R,t}$ form a linear rational expectations system in $s_t$ for which several solution algorithms are available, for instance, Blanchard and Kahn (1980), Binder and Pesaran (1997), King and Watson (1998), Uhlig (1999), Anderson (2000), Kim (2000), Christiano (2002), and Sims (2002). Depending on the parameterization of the DSGE model there are three possibilities: no stable rational expectations solution exists, the stable solution is unique (determinacy), or there are multiple stable solutions (indeterminacy). We will focus on the case of determinacy and restrict the parameter space accordingly. The resulting law of motion for the $j$th element of $s_t$ takes the form
$$s_{j,t} = \sum_{i=1}^{J}\Phi^{(s)}_{j,i}s_{i,t-1} + \sum_{l=1}^{n}\Phi^{(\epsilon)}_{j,l}\epsilon_{l,t}, \quad j = 1, \ldots, J. \tag{33}$$
Here $J$ denotes the number of elements of the vector $s_t$ and $n$ is the number of shocks stacked in the vector $\epsilon_t$. Here the coefficients $\Phi^{(s)}_{j,i}$ and $\Phi^{(\epsilon)}_{j,l}$ are functions of the structural parameters of the DSGE model.
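As an illustration, the following sketch simulates a state sequence from (33). The coefficient matrices here are placeholders of our own choosing; in an actual application $\Phi^{(s)}$ and $\Phi^{(\epsilon)}$ would come from a rational expectations solver evaluated at a particular $\theta$.

```python
import numpy as np

# Placeholder coefficient matrices for illustration only: in practice Phi_s
# (J x J) and Phi_eps (J x n) are produced by a solution algorithm such as
# Sims' (2002) gensys, evaluated at a structural parameter vector theta.
J, n, T = 7, 3, 80
rng = np.random.default_rng(0)
Phi_s = 0.5 * np.eye(J)                   # assumed stable transition matrix
Phi_eps = rng.standard_normal((J, n))     # assumed shock impact matrix
sigma = np.array([0.002, 0.008, 0.0045])  # std. dev. of eps_R, eps_g, eps_z

s = np.zeros(J)
states = np.empty((T, J))
for t in range(T):
    eps = sigma * rng.standard_normal(n)  # iid normal innovations
    s = Phi_s @ s + Phi_eps @ eps         # law of motion (33)
    states[t] = s
```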
While in many applications first-order approximations are sufficient, the higher-order refinement is an active area of research. For instance, if the goal of the analysis is to compare welfare across policies or market structures that do not have first-order effects on the model's steady-state or to study asset pricing implications of DSGE models, a more accurate solution may be necessary; see, for instance, Kim and Kim (2003) or Woodford (2003). A simple example can illustrate this point. Consider a one-period model in which utility is a smooth function of consumption and is approximated as
$$U(\hat{c}) = U(0) + U^{(1)}(0)\hat{c} + \frac{1}{2}U^{(2)}(0)\hat{c}^2 + o(|\hat{c}|^2), \tag{34}$$
where $\hat{c}$ is the percentage deviation of consumption from its steady-state and the superscript $i$ denotes the $i$th derivative. Suppose that consumption is a smooth function of a zero mean random shock $\epsilon$ which is scaled by $\sigma$. The second-order approximation of $\hat{c}$ around the steady-state ($\sigma = 0$) is
$$\hat{c} = C^{(1)}(c)\sigma\epsilon + \frac{1}{2}C^{(2)}(c)\sigma^2\epsilon^2 + o_p(\sigma^2). \tag{35}$$
If first-order approximations are used for both $U$ and $C$, then the expected utility is simply approximated by the steady-state utility $U(0)$, which, for instance, in the DSGE model described above, is not affected by the coefficients $\psi_1$ and $\psi_2$ of the monetary policy rule (24). A second-order approximation of both $U$ and $C$ leads to
$$\mathbb{E}[U(\hat{c})] = U(0) + \frac{1}{2}U^{(2)}(0)C^{(1)2}(c)\sigma^2 + \frac{1}{2}U^{(1)}(0)C^{(2)}(c)\sigma^2 + o(\sigma^2). \tag{36}$$
Using a second-order expansion of the utility function together with a first-order expansion of consumption ignores the second term in (36) and is only appropriate if either $U^{(1)}(0)$ or $C^{(2)}(c)$ is zero. The first case arises if the marginal utility of consumption in the steady-state is zero, and the second case arises if the percentage deviation of consumption from the steady-state is a linear function of the shock.
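The $\sigma^2$ terms in (36) can be reproduced mechanically. A small symbolic check, with generic constants standing in for the derivatives $U^{(i)}(0)$ and $C^{(i)}(c)$, confirms that both second-derivative terms survive the expectation:

```python
import sympy as sp

eps, sigma = sp.symbols('epsilon sigma')
U0, U1, U2, C1, C2 = sp.symbols('U0 U1 U2 C1 C2')

# Second-order approximations (35) and (34)
c_hat = C1 * sigma * eps + sp.Rational(1, 2) * C2 * sigma**2 * eps**2
utility = U0 + U1 * c_hat + sp.Rational(1, 2) * U2 * c_hat**2

# Expand in sigma and drop terms beyond sigma^2
expansion = sp.expand(sp.series(utility, sigma, 0, 3).removeO())

# Take expectations term by term using E[eps] = 0 and E[eps^2] = 1
expected_utility = expansion.subs(eps**2, 1).subs(eps, 0)
print(sp.expand(expected_utility))
# -> U0 + C1**2*U2*sigma**2/2 + C2*U1*sigma**2/2, matching (36)
```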
A second-order accurate solution to the DSGE model can be obtained from a second-order expansion of the equilibrium conditions (21) to (26). Algorithms to construct such solutions have been developed by Judd (1998), Collard and Juillard (2001), Jin and Judd (2002), Schmitt-Grohé and Uribe (2006), Kim et al. (2005), and Swanson et al. (2005). The resulting state transition equation can be expressed as
$$s_{j,t} = \Phi^{(0)}_j + \sum_{i=1}^{J}\Phi^{(s)}_{j,i}s_{i,t-1} + \sum_{l=1}^{n}\Phi^{(\epsilon)}_{j,l}\epsilon_{l,t} + \sum_{i=1}^{J}\sum_{l=1}^{J}\Phi^{(ss)}_{j,il}s_{i,t-1}s_{l,t-1} + \sum_{i=1}^{J}\sum_{l=1}^{n}\Phi^{(s\epsilon)}_{j,il}s_{i,t-1}\epsilon_{l,t} + \sum_{i=1}^{n}\sum_{l=1}^{n}\Phi^{(\epsilon\epsilon)}_{j,il}\epsilon_{i,t}\epsilon_{l,t}. \tag{37}$$
As before, the coefficients are functions of the parameters of the DSGE
model. For the subsequent analysis we use Sims’ (2002) procedure to
compute a first-order accurate solution of the DSGE model and Schmitt-
Grohé and Uribe’s (2006) algorithm to obtain a second-order accurate
solution.
While perturbation methods approximate policy functions only locally,
there are several global approximation schemes, including projection
methods such as the finite-elements method and Chebyshev–polynomial
method on the spectral domain. Judd (1998) covers various solution
methods, and Taylor and Uhlig (1990), Den Haan and Marcet (1994), and
Aruoba et al. (2004) compare the accuracy of alternative solution methods.
2.5. Measurement Equations
The model is completed by defining a set of measurement equations that relate the elements of $s_t$ to a set of observables. We assume that the time period $t$ in the model corresponds to one quarter and that the following observations are available for estimation: quarter-to-quarter per capita GDP growth rates (YGR), annualized quarter-to-quarter inflation rates (INFL), and annualized nominal interest rates (INT). The three series are measured in percentages, and their relationship to the model variables is given by the set of equations
$$\begin{aligned} YGR_t &= \gamma^{(Q)} + 100(\hat{y}_t - \hat{y}_{t-1} + \hat{z}_t) \\ INFL_t &= \pi^{(A)} + 400\hat{\pi}_t \\ INT_t &= \pi^{(A)} + r^{(A)} + 4\gamma^{(Q)} + 400\hat{R}_t. \end{aligned} \tag{38}$$
The parameters $\gamma^{(Q)}$, $\pi^{(A)}$, and $r^{(A)}$ are related to the steady states of the model economy as
$$\gamma = 1 + \frac{\gamma^{(Q)}}{100}, \quad \beta = \frac{1}{1 + r^{(A)}/400}, \quad \pi = 1 + \frac{\pi^{(A)}}{400}.$$
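These mappings are trivial to evaluate. A short helper (the function name is ours) converts the annualized percentage parameters into the steady-state objects, here at the DGP values reported in Table 2:

```python
def steady_state_params(gamma_Q, r_A, pi_A):
    """Map the annualized percentage parameters of (38) into model units."""
    gamma = 1 + gamma_Q / 100   # gross quarterly technology growth rate
    beta = 1 / (1 + r_A / 400)  # household discount factor
    pi = 1 + pi_A / 400         # gross steady-state (target) inflation rate
    return gamma, beta, pi

# DGP values from Table 2: gamma^(Q) = .50, r^(A) = .40, pi^(A) = 4.00
print(steady_state_params(0.50, 0.40, 4.00))  # (1.005, ~0.999, 1.01)
```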
The structural parameters are collected in the vector $\theta$. Since in the first-order approximation the parameters $\nu$ and $\phi$ are not separately identifiable, we express the model in terms of $\kappa$, defined in (32). Let
$$\theta = [\tau, \kappa, \psi_1, \psi_2, \rho_R, \rho_g, \rho_z, r^{(A)}, \pi^{(A)}, \gamma^{(Q)}, \sigma_R, \sigma_g, \sigma_z]'.$$
For the quadratic approximation the composite parameter $\kappa$ will be replaced by either $(\phi, \nu)$ or alternatively by $(\kappa, \nu)$. Moreover, $\theta$ will be augmented with the steady-state ratio $c/y$, which is equal to $1/g$.
3. PRELIMINARIES
Numerous formal and informal econometric procedures have been
proposed to parameterize and evaluate DSGE models, ranging from
calibration, e.g., Kydland and Prescott (1982), through generalized method of
moments (GMM) estimation of equilibrium relationships, e.g., Christiano
and Eichenbaum (1992), minimum distance estimation based on the
discrepancy among VAR and DSGE model impulse response functions,
e.g., Rotemberg and Woodford (1997) and Christiano et al. (2005), to
full-information likelihood-based estimation as in Altug (1989), McGrattan
(1994), Leeper and Sims (1994), and Kim (2000). Much of the
methodological debate surrounding the various estimation (and model
evaluation) techniques is summarized in papers by Kydland and Prescott
(1996), Hansen and Heckman (1996), and Sims (1996).
We focus on Bayesian estimation of DSGE models, which has three
main characteristics. First, unlike GMM estimation based on equilibrium
relationships such as the consumption Euler equation (21), the price
setting equation of the intermediate goods producing firms (22), or the
monetary policy rule (24), the Bayesian analysis is system-based and fits
the solved DSGE model to a vector of aggregate time series. Second,
the estimation is based on the likelihood function generated by the
DSGE model rather than, for instance, the discrepancy between DSGE
model responses and VAR impulse responses. Third, prior distributions
can be used to incorporate additional information into the parameter
estimation. Any estimation and evaluation method is confronted with the
following challenges: potential model misspecification and possible lack
of identification of parameters of interest. We will subsequently elaborate
these challenges and discuss how a Bayesian approach can be used to cope
with them.
Throughout the paper we will use the following notation: the $n \times 1$ vector $y_t$ stacks the time $t$ observations that are used to estimate the DSGE model. In the context of the model developed in Section 2, $y_t$ is composed of $YGR_t$, $INFL_t$, $INT_t$. The sample ranges from $t = 1$ to $T$ and the sample observations are collected in the matrix $Y$ with rows $y_t'$. We denote the prior density by $p(\theta)$, the likelihood function by $\mathcal{L}(\theta \mid Y)$, and the posterior density by $p(\theta \mid Y)$.
3.1. Potential Model Misspecification
If one predicts a vector of time series $y_t$, for instance composed of output growth, inflation, and nominal interest rates, by a function of past $y_t$'s, then the resulting forecast error covariance matrix is non-singular. Hence any DSGE model that generates a rank-deficient covariance matrix for $y_t$ is clearly at odds with the data and suffers from an obvious form of misspecification. This singularity is an obstacle to likelihood estimation.
Hence one branch of the literature has emphasized procedures that can
be applied despite the singularity, whereas the other branch has modified
the model specification to remove the singularity by adding so-called
measurement errors, e.g., Sargent (1989), Altug (1989), Ireland (2004),
or additional structural shocks as in Leeper and Sims (1994) and more
recently Smets and Wouters (2003). In this paper we pursue the latter
approach by considering a model in which the number of structural shocks
(monetary policy shock, government spending shock, technology growth
shock) equals the number of observables (output growth, inflation, interest
rates) to which the model is fitted.
A second source of misspecification that is more difficult to correct
is potentially invalid cross-coefficient restrictions on the time series
representation of $y_t$ generated by the DSGE model. Invalid restrictions
manifest themselves in poor out-of-sample fit relative to more densely
parameterized reference models such as VARs. Del Negro et al. (2006)
document that even an elaborate DSGE model with capital accumulation
and various nominal and real frictions has difficulties attaining the fit
achieved with VARs that have been estimated with well-designed shrinkage
methods. While the primary research objective in the DSGE model
literature is to overcome discrepancies between models and reality, it is
important to have empirical strategies available that are able to cope with
potential model misspecification.
Once one acknowledges that the DSGE model provides merely an approximation to the law of motion of the time series $y_t$, then it seems reasonable to assume that there need not exist a single parameter vector $\theta_0$ that delivers, say, the "true" intertemporal substitution elasticity or price adjustment costs and, simultaneously, the most precise impulse responses to a technology or monetary policy shock. Each estimation method is associated with a particular measure of discrepancy between the "true" law of motion and the class of approximating models. Likelihood-based estimators, for instance, asymptotically minimize the Kullback–Leibler distance (see White, 1982).
One reason that pure maximum likelihood estimation of DSGE models
has not turned into the estimation method of choice is the “dilemma of
absurd parameter estimates.” Estimates of structural parameters generated
with maximum likelihood procedures based on a set of observations Y
are often at odds with additional information that the researcher may have.
For example, estimates of the discount factor $\beta$ should be consistent with
our knowledge about the average magnitude of real interest rates, even if
observations on interest rates are not included in the estimation sample
Y . Time series estimates of aggregate labor supply elasticities or price
adjustment costs should be broadly consistent with microlevel evidence.
However, due to the stylized nature and the resulting misspecification
of most DSGE models, the likelihood function often peaks in regions
of the parameter space that appear to be inconsistent with extraneous
information.
In a Bayesian framework, the likelihood function is reweighted by a
prior density. The prior can bring to bear information that is not contained
in the estimation sample Y . Since priors are always subject to revision, the
shift from prior to posterior distribution can be an indicator of the tension
between different sources of information. If the likelihood function peaks
at a value that is at odds with the information that has been used to
construct the prior distribution, then the marginal data density of the
DSGE model, defined as
$$p(Y) = \int \mathcal{L}(\theta \mid Y)\,p(\theta)\,d\theta, \tag{39}$$
will be low compared to, say, a VAR, and in a posterior odds comparison the DSGE model will automatically be penalized for not being able to reconcile the two sources of information with a single set of parameters.
Section 5 will discuss several methods that have been proposed to assess
the fit of DSGE models: posterior odds comparisons of competing model
specifications, posterior predictive checks, and comparisons to reference
models that relax some of the cross-coefficient restrictions generated by the
DSGE models.
3.2. Identification
Identification problems can arise owing to a lack of informative
observations or, more fundamentally, from a probability model that
implies that different values of structural parameters lead to the same
joint distribution for the observables Y . At first glance the identification
of DSGE model parameters does not appear to be problematic. The parameter vector $\theta$ is typically low dimensional compared to VARs and the model imposes tight restrictions on the time series representation
of Y . However, recent experience with the estimation of New Keynesian
DSGE models has cast some doubt on this notion and triggered a more
careful assessment. Lack of identification is documented in papers by
Beyer and Farmer (2004) and Canova and Sala (2005). The former paper
provides an algorithm to construct families of observationally equivalent
linear rational expectations models, whereas the latter paper compares
the informativeness of different estimators with respect to key structural
parameters in a variety of DSGE models.
The delicate identification problems that arise in rational expectations
models can be illustrated in a simple example adopted from Lubik and
Schorfheide (2006). Consider the following two models, in which $y_t$ is the observed endogenous variable and $u_t$ is an unobserved shock process. In model $\mathcal{M}_1$, the $u_t$'s are serially correlated:
$$\mathcal{M}_1:\ y_t = \frac{1}{\alpha}\mathbb{E}_t[y_{t+1}] + u_t, \quad u_t = \rho u_{t-1} + \epsilon_t, \quad \epsilon_t \sim \text{iid }\mathcal{N}\big(0, (1 - \rho/\alpha)^2\big). \tag{40}$$
In model $\mathcal{M}_2$ the shocks are serially uncorrelated, but we introduce a backward-looking term $\beta y_{t-1}$ on the right-hand side to generate persistence:
$$\mathcal{M}_2:\ y_t = \frac{1}{\alpha}\mathbb{E}_t[y_{t+1}] + \beta y_{t-1} + u_t, \quad u_t = \epsilon_t, \quad \epsilon_t \sim \text{iid }\mathcal{N}\left(0, \left(\frac{\alpha + \sqrt{\alpha^2 - 4\beta\alpha}}{2\alpha}\right)^2\right). \tag{41}$$
For both specifications, the law of motion of $y_t$ is
$$y_t = \phi y_{t-1} + \eta_t, \quad \eta_t \sim \text{iid }\mathcal{N}(0, 1). \tag{42}$$
Under restrictions on the parameter spaces that guarantee uniqueness of a stable rational expectations solution³ we obtain the following relationships between $\phi$ and the structural parameters:
$$\mathcal{M}_1:\ \phi = \rho, \qquad \mathcal{M}_2:\ \phi = \frac{1}{2}\left(\alpha - \sqrt{\alpha^2 - 4\beta\alpha}\right).$$
In model $\mathcal{M}_1$ the parameter $\alpha$ is not identifiable, and in model $\mathcal{M}_2$, the parameters $\alpha$ and $\beta$ are not separately identifiable. Moreover, models $\mathcal{M}_1$ and $\mathcal{M}_2$ are observationally equivalent. The likelihood functions of $\mathcal{M}_1$ and $\mathcal{M}_2$ have ridges that can cause serious problems for numerical optimization procedures. The calculation of a valid confidence set is challenging since it is difficult to trace out the parameter subspace along which the likelihood function is constant.

While Bayesian inference is based on the likelihood function, even a weakly informative prior can introduce curvature into the posterior density surface that facilitates numerical maximization and the use of MCMC methods. Consider model $\mathcal{M}_1$. Here $\theta = [\rho, \alpha]'$ and the likelihood function can be written as $\mathcal{L}(\rho, \alpha \mid Y) = \tilde{\mathcal{L}}(\rho)$. Straightforward manipulations of Bayes' theorem yield
$$p(\rho, \alpha \mid Y) = p(\rho \mid Y)\,p(\alpha \mid \rho). \tag{43}$$

³ To ensure determinacy we impose $\alpha > 1$ in $\mathcal{M}_1$, and $|\alpha - \sqrt{\alpha^2 - 4\beta\alpha}| < 2$ and $|\alpha + \sqrt{\alpha^2 - 4\beta\alpha}| > 2$ in $\mathcal{M}_2$.
Thus the prior distribution is not updated in directions of the parameter
space in which the likelihood function is flat. This is of course well known
in Bayesian econometrics; see Poirier (1998) for a discussion.
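The observational equivalence is easy to verify numerically. The sketch below (our own helper, using the determinacy bounds of footnote 3) maps an $\mathcal{M}_2$ parameterization into the reduced-form coefficient of (42); any $\mathcal{M}_1$ parameterization with $\rho$ equal to that coefficient and $\alpha > 1$ implies the identical law of motion:

```python
import numpy as np

def phi_m2(alpha, beta):
    """Reduced-form AR(1) coefficient of (42) implied by M2(alpha, beta)."""
    disc = np.sqrt(alpha**2 - 4 * beta * alpha)
    # Determinacy conditions from footnote 3
    assert abs(alpha - disc) < 2 and abs(alpha + disc) > 2
    return 0.5 * (alpha - disc)

alpha, beta = 2.0, 0.45
phi = phi_m2(alpha, beta)  # M2 implies y_t = phi * y_{t-1} + eta_t
rho = phi                  # any M1 with rho = phi (and alpha' > 1) is
print(phi, rho)            # observationally equivalent
```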
It is difficult to directly detect identification problems in large DSGE models, since the mapping from the vector of structural parameters $\theta$ into the state–space representation that determines the joint probability distribution of $Y$ is highly non-linear and typically can only be evaluated
joint prior distribution of the parameters is proper. However, lack of
identification provides a challenge for scientific reporting, as the audience
typically would like to know what features of the posterior are generated
by the prior rather than the likelihood. A direct comparison of priors and
posteriors can often provide valuable insights about the extent to which
data provide information about the parameters of interest.
3.3. Priors
As indicated in our previous discussion, prior distributions will play an
important role in the estimation of DSGE models. They might downweight
regions of the parameter space that are at odds with observations not
contained in the estimation sample $Y$. They might also add curvature
to a likelihood function that is (nearly) flat in some dimensions of
the parameter space and therefore strongly influence the shape of the
posterior distribution.⁴ While, in principle, priors can be gleaned from
personal introspection to reflect strongly held beliefs about the validity
of economic theories, in practice most priors are chosen based on some
observations.
For instance, Lubik and Schorfheide (2006) estimate a two-country
version of the model described in Section 2. Priors for the autocorrelations
and standard deviations of the exogenous processes, the steady-state
parameters $\gamma^{(Q)}$, $\pi^{(A)}$, and $r^{(A)}$, as well as the standard deviation of the
monetary policy shock, are quantified based on regressions run on pre-
(estimation)-sample observations of output growth, inflation, and nominal
interest rates. The priors for the coefficients in the monetary policy rule
are loosely centered around values typically associated with the Taylor rule.
The prior for the parameter that governs price stickiness is chosen based
on microevidence on price setting behavior provided, for instance, in Bils
and Klenow (2004). To fine-tune the prior distribution, in particular the
distribution of shock standard deviations, it is often helpful to simulate the
prior predictive distribution for various sample moments and check whether the prior places little or no mass in a neighborhood of important features of the data. Such an occurrence would suggest that the model is incapable of explaining salient data properties. A formalization of such a prior predictive check can be found in Geweke (2005).

⁴ The role of priors in DSGE model estimation is markedly different from the role of priors in VAR estimation. In the latter case priors are essentially used to reduce the dimensionality of the econometric model and hence the sampling variability of the parameter estimates.
Table 2 lists the marginal prior distributions for the structural parameters of the DSGE model that we will use in the subsequent analysis. These priors are adopted from Lubik and Schorfheide (2006). For convenience, it is assumed that all parameters are a priori independent. In applications in which the independence assumption is unreasonable one could derive parameter transformations, such as steady-state ratios, autocorrelations, or relative volatilities, and specify independent priors on the transformed parameters, which induce dependent priors for the original parameters. As mentioned before, rational expectations models can have multiple equilibria. While this may be of independent interest we do not pursue this direction in this paper. Hence, the prior distribution is truncated at the boundary of the determinacy region. Prior to the truncation the distribution specified in Table 2 places about 2% of its mass on parameter values that imply indeterminacy. The parameters $\nu$ (conditional on $\kappa$) and $1/g$ only affect the second-order approximation of the DSGE model.
3.4. The Road Ahead
Throughout this paper we are estimating and evaluating versions of
the DSGE model presented in Section 2 based on simulated data. Table 1
provides a characterization of the model specifications and data sets.
TABLE 1 Model specifications and data sets

$\mathcal{M}_1(L)$  Benchmark model with output gap rule, consists of Eqs. (21)–(26), solved by first-order approximation.
$\mathcal{M}_1(Q)$  Benchmark model with output gap rule, consists of Eqs. (21)–(26), solved by second-order approximation.
$\mathcal{M}_2(L)$  DSGE model with output growth rule, consists of Eqs. (21)–(26); however, Eq. (24) is replaced by Eq. (27), solved by first-order approximation.
$\mathcal{M}_3(L)$  Same as $\mathcal{M}_1(L)$, except that prices are nearly flexible: $\kappa = 5$.
$\mathcal{M}_4(L)$  Same as $\mathcal{M}_1(L)$, except that the central bank does not respond to the output gap: $\psi_2 = 0$.
$\mathcal{M}_5(L)$  Same as $\mathcal{M}_1(L)$, except that in the first-order approximation Eq. (29) is replaced by
$$(1+h)\hat{y}_t = \mathbb{E}_t[\hat{y}_{t+1}] + h\hat{y}_{t-1} + \hat{g}_t - \mathbb{E}_t[\hat{g}_{t+1}] - \frac{1}{\tau}\left(\hat{R}_t - \mathbb{E}_t[\hat{\pi}_{t+1}] - \mathbb{E}_t[\hat{z}_{t+1}]\right)$$
with $h = 0.95$.
$\mathcal{D}_1(L)$  80 observations generated with $\mathcal{M}_1(L)$.
$\mathcal{D}_1(Q)$  80 observations generated with $\mathcal{M}_1(Q)$.
$\mathcal{D}_2(L)$  80 observations generated with $\mathcal{M}_2(L)$.
$\mathcal{D}_5(L)$  80 observations generated with $\mathcal{M}_5(L)$.
Model $\mathcal{M}_1$ is the benchmark specification of the DSGE model in which monetary policy follows an interest-rate rule that reacts to the output gap. $\mathcal{M}_1(L)$ is approximated by a linear rational expectations system, whereas $\mathcal{M}_1(Q)$ is solved with a second-order perturbation method. In $\mathcal{M}_2(L)$ we replace the output gap in the interest-rate feedback rule by output growth. $\mathcal{M}_3$ and $\mathcal{M}_4$ are identical to $\mathcal{M}_1(L)$, except that in one case we impose that prices are nearly flexible ($\kappa = 5$) and in the other case the central bank does not respond to output ($\psi_2 = 0$). In $\mathcal{M}_5(L)$ we replace the log-linearized consumption Euler equation by an equation that includes lagged output on the right-hand side. Loosely speaking, this specification can be motivated with a model in which agents derive utility from the difference between individual consumption and the overall level of consumption in the previous period.

Conditional on the parameter vectors reported in Tables 2 and 3 we generate four data sets with 80 observations each. We use the labels $\mathcal{D}_1(L)$, $\mathcal{D}_1(Q)$, $\mathcal{D}_2(L)$, and $\mathcal{D}_5(L)$ to denote data sets generated from $\mathcal{M}_1(L)$, $\mathcal{M}_1(Q)$, $\mathcal{M}_2(L)$, and $\mathcal{M}_5(L)$, respectively. By and large, the values for the DSGE model parameters resemble empirical estimates obtained from post-1982 U.S. data. The sample size is fairly realistic for the estimation of monetary DSGE models. Many industrialized countries experienced a high inflation episode in the 1970s that ended with a disinflation in the 1980s. Subsequently, the level and the variability of inflation and interest rates have been fairly stable, so that it is not uncommon to estimate constant coefficient DSGE models based on data sets that begin in the early 1980s.
TABLE 2 Prior distribution and DGPs—linear analysis (Sections 4 and 5)

Name      Domain   Density    Para (1)  Para (2)  DGP
τ         ℝ⁺       Gamma       2.00      .50      2.00
κ         ℝ⁺       Gamma        .20      .10       .15
ψ₁        ℝ⁺       Gamma       1.50      .25      1.50
ψ₂        ℝ⁺       Gamma        .50      .25      1.00
ρ_R       [0, 1)   Beta         .50      .20       .60
ρ_G       [0, 1)   Beta         .80      .10       .95
ρ_Z       [0, 1)   Beta         .66      .15       .65
r^(A)     ℝ⁺       Gamma        .50      .50       .40
π^(A)     ℝ⁺       Gamma       7.00     2.00      4.00
γ^(Q)     ℝ        Normal       .40      .20       .50
100σ_R    ℝ⁺       InvGamma     .40     4.00       .20
100σ_G    ℝ⁺       InvGamma    1.00     4.00       .80
100σ_Z    ℝ⁺       InvGamma     .50     4.00       .45

Notes: The DGP column lists the parameter values used to generate the data sets $\mathcal{D}_1(L)$, $\mathcal{D}_2(L)$, and $\mathcal{D}_5(L)$. Para (1) and Para (2) list the means and the standard deviations for Beta, Gamma, and Normal distributions; the upper and lower bound of the support for the Uniform distribution; and $s$ and $\nu$ for the Inverse Gamma distribution, where $p_{IG}(\sigma \mid \nu, s) \propto \sigma^{-\nu-1}e^{-\nu s^2/(2\sigma^2)}$. The effective prior is truncated at the boundary of the determinacy region.
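Since the Inverse Gamma parameterization in the notes is nonstandard, it may help to spell out how such prior draws can be generated. In the sketch below (with our own moment-matched Gamma and Beta hyperparameters), we use the fact that under $p(\sigma \mid \nu, s) \propto \sigma^{-\nu-1}e^{-\nu s^2/(2\sigma^2)}$ the quantity $\nu s^2/\sigma^2$ follows a $\chi^2_\nu$ distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_inv_gamma(s, nu, size):
    # p(sigma | nu, s) ∝ sigma^(-nu-1) exp(-nu s^2 / (2 sigma^2)):
    # draw X ~ chi2(nu) and set sigma = s * sqrt(nu / X).
    return s * np.sqrt(nu / rng.chisquare(nu, size))

n = 100_000
# A few of the Table 2 priors; shape/scale values are moment-matched from
# the reported means and standard deviations (our computation).
tau   = rng.gamma(shape=16.0, scale=0.125, size=n)  # Gamma, mean 2.00, sd .50
rho_R = rng.beta(a=2.625, b=2.625, size=n)          # Beta, mean .50, sd .20
sigR  = draw_inv_gamma(s=0.40, nu=4.0, size=n)      # InvGamma(.40, 4.00)

print(tau.mean(), tau.std(), rho_R.mean(), rho_R.std())  # ≈ 2.00/.50, .50/.20
```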
TABLE 3 Prior distribution and DGP—nonlinear analysis (Section 6)

Name      Domain   Density    Para (1)  Para (2)  DGP
τ         ℝ⁺       Gamma       2.00      .50      2.00
κ         ℝ⁺       Gamma        .30      .20       .33
ψ₁        ℝ⁺       Gamma       1.50      .25      1.50
ψ₂        ℝ⁺       Gamma        .50      .25       .125
ρ_R       [0, 1)   Beta         .50      .20       .75
ρ_G       [0, 1)   Beta         .80      .10       .95
ρ_Z       [0, 1)   Beta         .66      .15       .90
r^(A)     ℝ⁺       Gamma        .80      .50      1.00
π^(A)     ℝ⁺       Gamma       4.00     2.00      3.20
γ^(Q)     ℝ        Normal       .40      .20       .55
100σ_R    ℝ⁺       InvGamma     .30     4.00       .20
100σ_G    ℝ⁺       InvGamma     .40     4.00       .60
100σ_Z    ℝ⁺       InvGamma     .40     4.00       .30
ν         [0, 1)   Beta         .10      .05       .10
1/g       [0, 1)   Beta         .85      .10       .85

Notes: See Table 2. The DGP column lists the parameter values used to generate the data set $\mathcal{D}_1(Q)$. Parameters ν and 1/g only affect the second-order accurate solution of the DSGE model.
Section 4 illustrates the estimation of linearized DSGE models under correct specification, that is, we construct posteriors for $\mathcal{M}_1(L)$ and $\mathcal{M}_2(L)$ based on the data sets $\mathcal{D}_1(L)$ and $\mathcal{D}_2(L)$. Section 5 considers model evaluation techniques. We begin with posterior predictive checks in Section 5.1, which are implemented based on $\mathcal{M}_1(L)$ for data sets $\mathcal{D}_1(L)$ (correct specification of the DSGE model) and $\mathcal{D}_5(L)$ (misspecification of the consumption Euler equation). In Section 5.2 we proceed with the calculation of posterior model probabilities for specifications $\mathcal{M}_1(L)$, $\mathcal{M}_3(L)$, and $\mathcal{M}_4(L)$ conditional on the data set $\mathcal{D}_1(L)$. Subsequently in Section 5.3 we compare $\mathcal{M}_1(L)$ to a vector autoregressive specification using $\mathcal{D}_1(L)$ (correct specification) and $\mathcal{D}_5(L)$ (misspecification). Finally, in Section 6 we study the estimation of DSGE models solved with a second-order perturbation method. We compare posteriors for $\mathcal{M}_1(Q)$ and $\mathcal{M}_1(L)$ conditional on $\mathcal{D}_1(Q)$.
4. ESTIMATION OF LINEARIZED DSGE MODELS
We will begin by describing two algorithms that can be used to generate draws from the posterior distribution of $\theta$ and subsequently illustrate their performance in the context of $\mathcal{M}_1(L)/\mathcal{D}_1(L)$ and $\mathcal{M}_2(L)/\mathcal{D}_2(L)$.
4.1. Posterior Computations
We consider an RWM algorithm and an IS algorithm to generate draws
from the posterior distribution of $\theta$. Both algorithms require the evaluation of $\mathcal{L}(\theta \mid Y)p(\theta)$. The computation of the non-normalized posterior density proceeds in two steps. First, the linear rational expectations system is solved to obtain the state transition equation (33). If the parameter value $\theta$ implies indeterminacy (or non-existence of a stable rational expectations solution), then $\mathcal{L}(\theta \mid Y)p(\theta)$ is set to zero. If a unique stable solution exists, then the Kalman filter⁵ is used to evaluate the likelihood function associated with the linear state–space system (33) and (38). Since the prior is generated from well-known densities, the computation of $p(\theta)$ is straightforward.
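A compact sketch of this second step, for a generic linear state–space system of the form (33) and (38); the mapping from $\theta$ to the system matrices is model-specific and omitted here, and the matrix names are ours:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def kalman_loglik(Y, A, B, Phi_s, Phi_eps, Sigma_eps):
    """Gaussian log-likelihood of Y for  s_t = Phi_s s_{t-1} + Phi_eps eps_t,
    y_t = A + B s_t,  eps_t ~ N(0, Sigma_eps).  No measurement error."""
    Q = Phi_eps @ Sigma_eps @ Phi_eps.T
    s = np.zeros(Phi_s.shape[0])
    P = solve_discrete_lyapunov(Phi_s, Q)  # start at unconditional distribution
    loglik = 0.0
    for y in Y:
        # prediction step
        s, P = Phi_s @ s, Phi_s @ P @ Phi_s.T + Q
        yhat, F = A + B @ s, B @ P @ B.T
        resid = y - yhat                   # one-step-ahead forecast error
        _, logdet = np.linalg.slogdet(F)
        loglik += -0.5 * (len(y) * np.log(2 * np.pi) + logdet
                          + resid @ np.linalg.solve(F, resid))
        # updating step
        K = P @ B.T @ np.linalg.inv(F)     # Kalman gain
        s, P = s + K @ resid, P - K @ B @ P
    return loglik
```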
The RWM algorithm belongs to the more general class of Metropolis–
Hastings algorithms. This class is composed of universal algorithms that
generate Markov chains with stationary distributions that correspond
to the posterior distributions of interest. A first version of such an
algorithm had been constructed by Metropolis et al. (1953) to solve a
minimization problem and was later generalized by Hastings (1970). Chib
and Greenberg (1995) provide an excellent introduction to Metropolis–
Hastings algorithms. The RWM algorithm was first used to generate draws
from the posterior distribution of DSGE model parameters by Schorfheide
(2000) and Otrok (2001). We will use the following implementation based
on Schorfheide (2000):⁶
Random-Walk Metropolis (RWM) Algorithm

1. Use a numerical optimization routine to maximize $\ln\mathcal{L}(\theta \mid Y) + \ln p(\theta)$. Denote the posterior mode by $\tilde{\theta}$.
2. Let $\tilde{\Sigma}$ be the inverse of the Hessian computed at the posterior mode $\tilde{\theta}$.
3. Draw $\theta^{(0)}$ from $\mathcal{N}(\tilde{\theta}, c_0^2\tilde{\Sigma})$ or directly specify a starting value.
4. For $s = 1, \ldots, n_{sim}$, draw $\vartheta$ from the proposal distribution $\mathcal{N}(\theta^{(s-1)}, c^2\tilde{\Sigma})$. The jump from $\theta^{(s-1)}$ is accepted ($\theta^{(s)} = \vartheta$) with probability $\min\{1, r(\theta^{(s-1)}, \vartheta \mid Y)\}$ and rejected ($\theta^{(s)} = \theta^{(s-1)}$) otherwise. Here
$$r(\theta^{(s-1)}, \vartheta \mid Y) = \frac{\mathcal{L}(\vartheta \mid Y)\,p(\vartheta)}{\mathcal{L}(\theta^{(s-1)} \mid Y)\,p(\theta^{(s-1)})}.$$
5. Approximate the posterior expected value of a function $h(\theta)$ by $\frac{1}{n_{sim}}\sum_{s=1}^{n_{sim}} h(\theta^{(s)})$.
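A minimal sketch of Steps 3 to 5, assuming a user-supplied function log_posterior_kernel that returns $\ln\mathcal{L}(\theta \mid Y) + \ln p(\theta)$ (and $-\infty$ when $\theta$ implies indeterminacy or non-existence), with $\tilde{\theta}$ and $\tilde{\Sigma}$ taken from Steps 1 and 2:

```python
import numpy as np

def rwm(log_posterior_kernel, theta_mode, Sigma, c=0.3, c0=1.0,
        n_sim=100_000, seed=0):
    """Random-Walk Metropolis sketch of Steps 3-5 of the algorithm above."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    dim = len(theta_mode)
    theta = theta_mode + c0 * L @ rng.standard_normal(dim)      # Step 3
    logpost = log_posterior_kernel(theta)
    draws = np.empty((n_sim, dim))
    accepted = 0
    for s in range(n_sim):                                      # Step 4
        proposal = theta + c * L @ rng.standard_normal(dim)
        logpost_prop = log_posterior_kernel(proposal)
        # accept with probability min{1, r}; comparison done in logs
        if np.log(rng.uniform()) < logpost_prop - logpost:
            theta, logpost, accepted = proposal, logpost_prop, accepted + 1
        draws[s] = theta
    # Step 5: posterior moments are averages of h(draws)
    return draws, accepted / n_sim
```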
⁵ Since according to our model $s_t$ is stationary, the Kalman filter can be initialized with the unconditional distribution of $s_t$. To make the estimation in Section 4 comparable to the DSGE-VAR analysis presented in Section 5.3 we adjust the likelihood function to condition on the first four observations in the sample: $\mathcal{L}(\theta \mid Y)/\mathcal{L}(\theta \mid y_1, \ldots, y_4)$. These four observations are later used to initialize the lags of a VAR.
⁶ This version of the algorithm is available in the user-friendly DYNARE (2005) package, which automates the Bayesian estimation of linearized DSGE models and provides routines to solve DSGE models with higher-order perturbation methods.
Under fairly general regularity conditions, e.g., Walker (1969), Crowder (1988), and Kim (1998), the posterior distribution of $\theta$ will be asymptotically normal. The algorithm constructs a Gaussian approximation around the posterior mode and uses a scaled version of the asymptotic covariance matrix as the covariance matrix for the proposal distribution. This allows for an efficient exploration of the posterior distribution at least in the neighborhood of the mode.
The maximization of the posterior density kernel is carried out with
a version of the BFGS quasi-Newton algorithm, written by Chris Sims
for the maximum likelihood estimation of a DSGE model conducted in
Leeper and Sims (1994). The algorithm uses a fairly simple line search
and randomly perturbs the search direction if it reaches a cliff caused by
nonexistence or non-uniqueness of a stable rational expectations solution
for the DSGE model. Prior to the numerical maximization we transform
all parameters so that their domain is unconstrained. This parameter
transformation is only used in Step 1 of the RWM algorithm. The elements
of the Hessian matrix in Step 2 are computed numerically for various
values of the increment $d\theta$. There typically exists a range for $d\theta$ over which
the derivatives are stable. As $d\theta$ approaches zero the numerical derivatives
will eventually deteriorate owing to inaccuracies in the evaluation of
the objective function. While Steps 1 and 2 are not necessary for the
implementation of the RWM algorithm, they are often helpful.
The RWM algorithm generates a sequence of dependent draws from the posterior distribution of $\theta$ that can be averaged to approximate posterior moments. Geweke (1999a, 2005) reviews regularity conditions that guarantee the convergence of the Markov chain generated by Metropolis–Hastings algorithms to the posterior distribution of interest and the convergence of $\frac{1}{n_{sim}}\sum_{s=1}^{n_{sim}} h(\theta^{(s)})$ to the posterior expectation $\mathbb{E}[h(\theta) \mid Y]$.
DeJong et al. (2000) used an IS algorithm to calculate posterior moments of the parameters of a linearized stochastic growth model. The idea of the algorithm is based on the identity
$$\mathbb{E}[h(\theta) \mid Y] = \int h(\theta)p(\theta \mid Y)\,d\theta = \int \frac{h(\theta)p(\theta \mid Y)}{q(\theta)}q(\theta)\,d\theta.$$
Draws from the posterior density $p(\theta \mid Y)$ are replaced by draws from the density $q(\theta)$ and reweighted by the importance ratio $p(\theta \mid Y)/q(\theta)$ to obtain a numerical approximation of the posterior moment of interest. Hammersley and Handscomb (1964) were among the first to propose this method and Geweke (1989) provides important convergence results. The particular version of the IS algorithm used subsequently is of the following form.
Importance Sampling (IS) Algorithm

1. Use a numerical optimization routine to maximize $\ln\mathcal{L}(\theta \mid Y) + \ln p(\theta)$. Denote the posterior mode by $\tilde{\theta}$.
2. Let $\tilde{\Sigma}$ be the inverse of the Hessian computed at the posterior mode $\tilde{\theta}$.
3. Let $q(\theta)$ be the density of a multivariate $t$-distribution with mean $\tilde{\theta}$, scale matrix $c^2\tilde{\Sigma}$, and $\nu$ degrees of freedom.
4. For $s = 1, \ldots, n_{sim}$ generate draws $\theta^{(s)}$ from $q(\theta)$.
5. Compute $\tilde{w}_s = \mathcal{L}(\theta^{(s)} \mid Y)p(\theta^{(s)})/q(\theta^{(s)})$ and $w_s = \tilde{w}_s / \sum_{s=1}^{n_{sim}}\tilde{w}_s$.
6. Approximate the posterior expected value of a function $h(\theta)$ by $\sum_{s=1}^{n_{sim}} w_s h(\theta^{(s)})$.
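A matching sketch of Steps 3 to 6, reusing the hypothetical log_posterior_kernel from the RWM sketch; the weights are normalized in log space to guard against overflow:

```python
import numpy as np
from scipy.stats import multivariate_t

def importance_sampler(log_posterior_kernel, theta_mode, Sigma, c=1.5, nu=3,
                       n_sim=100_000, seed=0):
    """Importance sampling sketch of Steps 3-6 of the algorithm above."""
    q = multivariate_t(loc=theta_mode, shape=c**2 * Sigma, df=nu)  # Step 3
    draws = q.rvs(size=n_sim, random_state=seed)                   # Step 4
    log_w = np.array([log_posterior_kernel(th) for th in draws]) - q.logpdf(draws)
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                      # Step 5: normalized weights
    posterior_mean = w @ draws        # Step 6 with h(theta) = theta
    return draws, w, posterior_mean
```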
The accuracy of the IS approximation depends on the similarity between the posterior kernel and the importance density. If the two are equal, then the importance weights $w_s$ are constant and one averages independent draws to approximate the posterior expectations of interest. If the importance density concentrates its mass in a region of the parameter space in which the posterior density is very low, then most of the $w_s$'s will be much smaller than $1/n_{sim}$ and the approximation method is inefficient. We construct a Gaussian approximation of the posterior near the mode, scale the asymptotic covariance matrix, and replace the normal distribution with a fat-tailed $t$-distribution to implement the algorithm.
4.2. Simulation Results for $\mathcal{M}_1(L)/\mathcal{D}_1(L)$
We begin by estimating the log-linearized output gap rule specification $\mathcal{M}_1(L)$ based on data set $\mathcal{D}_1(L)$. We use the RWM algorithm ($c_0 = 1$, $c = 0.3$) to generate 1 million draws from the posterior distribution. The rejection rate is about 45%. Moreover, we generate 200 independent draws from the truncated prior distribution. Figure 1 depicts the draws from the prior as well as every 5,000th draw from the posterior distribution in two-dimensional scatter plots. We also indicate the location of the posterior mode in the 12 panels of the figure. A visual comparison of priors and posteriors suggests that the 80-observation sample contains little information on the risk-aversion parameter $\tau$ and the policy rule coefficients $\psi_1$ and $\psi_2$. There is, however, information about the degree of price stickiness, captured by the slope coefficient $\kappa$ of the Phillips-curve relationship (30). Moreover, the posteriors of the steady-state parameters, the autocorrelation coefficients, and the shock standard deviations are sharply peaked relative to the prior distributions. The lack of information about some of the key structural parameters resembles the empirical findings based on actual observations.
FIGURE 1 Prior and posterior – Model $\mathcal{M}_1(L)$, Data $\mathcal{D}_1(L)$. The panels depict 200 draws from prior and posterior distributions. Intersections of solid lines signify posterior mode values.

There exists an extensive literature on convergence diagnostics for MCMC methods such as the RWM algorithm. An introduction to this literature can be found, for instance, in Robert and Casella (1999).
These authors distinguish between convergence of the Markov chain to
its stationary distribution, convergence of empirical averages to posterior
moments, and convergence to iid sampling. While many convergence
diagnostics are based on formal statistical tests, we will consider informal
graphical methods in this paper. More specifically, we will compare draws
and recursively computed means from multiple chains.
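Recursive means of the kind plotted in Figures 3 and 6 below are cheap to compute once the chains are stored; a sketch, with chains holding the draws as an array of shape (n_chains, n_sim, dim):

```python
import numpy as np

def recursive_means(chains):
    """Running posterior-mean estimates: entry [m, s, :] is the average of
    the first s+1 draws of chain m, one line per chain in the plot."""
    cum = np.cumsum(chains, axis=1)
    denom = np.arange(1, chains.shape[1] + 1)[None, :, None]
    return cum / denom
```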
We run four independent Markov chains. Except for $\tau$ and $\psi_2$, all parameters are initialized at their posterior mode values. The initial values for $(\tau, \psi_2)$ are (3.0, .1), (3.0, 2.0), (.4, .1), and (.6, 2.0), respectively. As before, we set $c = 0.3$, generate 1 million draws for each chain, and plot every 5,000th draw of $(\tau, \psi_2)$ in Figure 2. Panels (1,1) and (1,2) of the figure depict posterior contours at the posterior mode.

FIGURE 2 Draws from multiple chains – Model $\mathcal{M}_1(L)$, Data $\mathcal{D}_1(L)$. Panels (1,1) and (1,2): contours of posterior density at "low" and "high" mode as function of $\tau$ and $\psi_2$. Panels (2,1) to (3,2): 200 draws from four Markov chains generated by the Metropolis algorithm. Intersections of solid lines signify posterior mode values.

Visual inspection of
the plots suggests that all four chains, despite the use of different starting
values, converge to the same (stationary) distribution and concentrate in
the high-posterior density region. Figure 3 plots recursive means for the
four chains. Despite different initializations, the means converge in the
long run.
We generate another set of 1 million draws using the IS algorithm with $\nu = 3$ and $c = 1.5$. As for the draws from the RWM algorithm, we compute recursive means. Since neither the IS nor the RWM approximation of the posterior means is exact, we construct standard error estimates. For the importance sampler we follow Geweke (1999b), and for the Metropolis chain we use Newey–West standard error estimates, truncated at 140 lags.⁷ In Figure 4 we plot (recursive) confidence intervals for the RWM and the IS approximations of the posterior means. For most parameters the two confidence intervals overlap. Exceptions are, for instance, $\kappa$, $r^{(A)}$, $\rho_g$, and $\sigma_g$. However, the magnitude of the discrepancy is generally small relative to, say, the difference between the posterior mode and the estimated posterior means.

FIGURE 3 Recursive means from multiple chains – Model $\mathcal{M}_1(L)$, Data $\mathcal{D}_1(L)$. Each line corresponds to recursive means (as a function of the number of draws) calculated from one of the four Markov chains generated by the Metropolis algorithm.

⁷ It is not guaranteed that the Newey–West standard errors are formally valid.
4.3. A Potential Pitfall: Simulation Results for $\mathcal{M}_2(L)/\mathcal{D}_2(L)$
We proceed by estimating the output growth rule version $\mathcal{M}_2(L)$ of the DSGE model based on 80 observations generated from this model (Data Set $\mathcal{D}_2(L)$). Unlike the posterior for the output gap version, the $\mathcal{M}_2(L)$ posterior has (at least) two modes. One of the modes, which we label the "high" mode, $\tilde{\theta}^{(h)}$, is located at $\tau = 2.06$ and $\psi_2 = .97$. The value of $\ln[\mathcal{L}(\theta \mid Y)p(\theta)]$ is equal to $-175.48$. The second ("low") mode, $\tilde{\theta}^{(l)}$, is located at $\tau = 1.44$ and $\psi_2 = .81$ and attains the value $-183.23$. The first two panels of Figure 5 depict the contours of the non-normalized posterior density as a function of $\tau$ and $\psi_2$ at the low and the high mode, respectively. Panel (1,1) is constructed by setting $\theta = \tilde{\theta}^{(l)}$ and then graphing posterior contours for $\tau \in [0, 3]$ and $\psi_2 \in [0, 2.5]$, keeping all other parameters fixed at their respective $\tilde{\theta}^{(l)}$ values. Similarly, Panel (1,2) is obtained by exploring the shape of the posterior as a function of $\tau$ and $\psi_2$, fixing the other parameters at their respective $\tilde{\theta}^{(h)}$ values. The intersection of the solid lines in panels (1,1), (2,1), and (3,1) signify the low mode, whereas the solid lines in the remaining panels indicate the location of the high mode.

FIGURE 4 RWM algorithm vs. importance sampling – Model $\mathcal{M}_1(L)$, Data $\mathcal{D}_1(L)$. Panels depict posterior modes (solid), recursively computed 95% bands for posterior means based on the Metropolis algorithm (dotted) and the importance sampler (dashed).
FIGURE 5 Draws from multiple chains – Model $\mathcal{M}_2(L)$, Data $\mathcal{D}_2(L)$. Panels (1,1) and (1,2): contours of posterior density at "low" and "high" mode as function of $\tau$ and $\psi_2$. Panels (2,1) to (3,2): 200 draws from four Markov chains generated by the Metropolis algorithm. Intersections of solid lines signify "low" (left panels) and "high" (right panels) posterior mode values.
The two modes are separated by a deep valley which is caused by complex
eigenvalues of the state transition matrix.
As in Section 4.2 we generate 1 million draws each for model $\mathcal{M}_2$ from four Markov chains that were initialized at the following values for $(\tau; \psi_2)$: (3.0; .1), (3.0; 2.0), (.4; .1), and (.6; 2.0). The remaining parameters were initialized at the low (high) mode for Chains 1 and 3 (2 and 4).⁸ We plot every 5,000th draw of $\tau$ and $\psi_2$ from the four Markov chains in Panels (2,1) to (3,2). Unlike in Panels (1,1) and (1,2), the remaining parameters are not fixed at their $\tilde{\theta}^{(l)}$ and $\tilde{\theta}^{(h)}$ values. It turns out that Chains 1 and 3 explore the posterior distribution locally in the neighborhood of the low mode, whereas Chains 2 and 4 move through the posterior surface near the high mode. Given the configuration of the RWM algorithm the likelihood of crossing the valley that separates the two modes is so small that it did not occur. The recursive means associated with the four chains are plotted in Figure 6. They exhibit two limit points, corresponding to the two posterior modes. While each chain appears to be stable, it only explores the posterior in the neighborhood of one of the modes.

⁸ The covariance matrices of the proposal distributions are obtained from the Hessians associated with the respective modes.
We explored various modifications of the RWM algorithm by changing
the scaling factor c and by setting the off-diagonal elements of Σ̄ to zero.
Nevertheless, 1,000,000 draws were insufficient for the chains initialized
at the low mode to cross over to the neighborhood of the high mode.
Two remarks are in order. First, extremum estimators that are computed
with numerical optimization methods suffer from the same problem as
the Bayes estimators in this example. One might find a local rather than
the global extremum of the objective function. Hence, it is good practice
to start the optimization from different points in the parameter space
to increase the likelihood that the global optimum is found. Similarly,
in Bayesian computation it is helpful to start MCMC methods from
different regions of the parameter space, or vary the distribution q(θ)
in the importance sampler. Second, in many applications as well as in
this example there exists a global mode that dominates the local modes.
Hence an exploration of the posterior in the neighborhood of the global
mode might still provide a good approximation to the overall posterior
distribution. All subsequent computations are based on the output gap rule
specification of the DSGE model.

FIGURE 6 Recursive means from multiple chains – Model ℳ₂(L), Data 𝒟₂(L). Each line corresponds to recursive means (as a function of the number of draws) calculated from one of the four Markov chains generated by the Metropolis algorithm.
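To make the multiple-chain practice concrete, below is a minimal, self-contained sketch (not the authors' code) of running the RWM algorithm from dispersed starting points and comparing recursive means across chains. The bimodal toy target and all names are illustrative assumptions standing in for the (τ, ψ₂) posterior surface.

```python
import numpy as np

def rwm_chain(log_post, theta0, cov, n_draws, c=1.0, seed=0):
    # Random-walk Metropolis: propose theta' ~ N(theta, c^2 cov) and
    # accept with probability alpha = min{1, r(theta, theta' | Y)}.
    rng = np.random.default_rng(seed)
    d = len(theta0)
    chol = np.linalg.cholesky(c**2 * cov)
    draws = np.empty((n_draws, d))
    theta, lp = np.asarray(theta0, dtype=float), log_post(theta0)
    for s in range(n_draws):
        prop = theta + chol @ rng.standard_normal(d)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[s] = theta
    return draws

# Bimodal toy posterior standing in for the bimodal (tau, psi2) surface:
log_post = lambda th: np.logaddexp(-0.5 * np.sum((th - 2.0)**2) / 0.05,
                                   -0.5 * np.sum((th + 2.0)**2) / 0.05)
for i, start in enumerate([np.array([3.0, 3.0]), np.array([-3.0, -3.0])]):
    chain = rwm_chain(log_post, start, 0.05 * np.eye(2), 20000, seed=i)
    rec_means = np.cumsum(chain, axis=0) / np.arange(1, 20001)[:, None]
    print(f"chain {i}: final recursive mean {rec_means[-1]}")
# Recursive means that disagree across chains flag the multimodality.
```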
4.4. Parameter Transformations for ℳ₁(L)/𝒟₁(L)
Macroeconomists are often interested in posterior distributions of
parameter transformations h(θ) to address questions such as what fraction
of the variation in output growth is caused by monetary policy shocks, and
what happens to output and inflation in response to a monetary policy
shock. Answers can be obtained from the moving average representation
associated with the state space model composed of (33) and (38). We will
focus on variance decompositions in the remainder of this subsection.

Fluctuations of output growth, inflation, and the nominal interest rate
in the DSGE model are due to three shocks: technology growth shocks,
government spending shocks, and monetary policy shocks. Hence variance
decompositions of the endogenous variables lie in a three-dimensional
simplex, which can be depicted as a triangle in ℝ². In Figure 7 we plot
200 draws from the prior distribution (left panels) and 200 draws from the
posterior (right panels) of the variance decompositions of output growth
and inflation. The posterior draws are obtained by converting every 5,000th
draw generated with the RWM algorithm. The corners Z, G, and R of the
triangles correspond to decompositions in which the shocks ε_z, ε_g, and ε_R
explain 100% of the variation, respectively.
Panels (1,1) and (2,1) indicate that the prior distribution of the
variance decomposition is informative in the sense that it is not uniform
over the simplex. For instance, the output gap policy rule specification ℳ₁
implies that the government spending shock does not affect inflation and
nominal interest rates. Hence all prior and posterior draws concentrate on
the R–Z edge of the simplex. While the prior mean of the fraction of
inflation variability explained by the monetary policy shock is about 40%,
a 90% probability interval ranges from 0 to 95%. A priori about 10% of
the variation in output growth is due to monetary policy shocks. A 90%
probability interval ranges from 0 to 20%. The data provide additional
information about the variance decomposition. The posterior distribution
is much more concentrated than the prior. The probability interval for
the contribution of monetary policy shocks to output growth fluctuations
shrinks to the range from 1 to 4.5%. The posterior probability interval for
the fraction of inflation variability explained by the monetary policy shock
ranges from 15 to 37%.

FIGURE 7 Prior and posterior variance decompositions – Model ℳ₁(L), Data 𝒟₁(L). The panels depict 200 draws from prior and posterior distributions arranged on a three-dimensional simplex. The three corners (Z, G, R) correspond to 100% of the variation being due to the shocks ε_{z,t}, ε_{g,t}, and ε_{R,t}, respectively.
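The triangle plots themselves only require mapping each three-dimensional decomposition into two-dimensional barycentric coordinates. The sketch below is our own illustration, with Dirichlet draws as placeholders for actual prior or posterior draws of a variance decomposition:

```python
import numpy as np
import matplotlib.pyplot as plt

def simplex_to_xy(w):
    # Rows of w are decompositions [Z, G, R] that are nonnegative and sum
    # to one; map them onto the corners of an equilateral triangle.
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    return w @ corners

rng = np.random.default_rng(0)
w = rng.dirichlet([4.0, 4.0, 1.0], size=200)   # placeholder draws
xy = simplex_to_xy(w)
plt.scatter(xy[:, 0], xy[:, 1], s=5)
for label, (x, y) in zip("ZGR", [(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)]):
    plt.annotate(label, (x, y))
plt.axis("off")
plt.show()
```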
4.5. Empirical Applications
There is a rapidly growing empirical literature on the Bayesian
estimation of DSGE models that applies the techniques described in
this paper. The following incomplete list of references aims to give an
overview of this work. Preceding the Bayesian literature are papers that
use maximum likelihood techniques to estimate DSGE models. Altug
(1989) estimates Kydland and Prescott’s (1982) time-to-build model, and
McGrattan (1994) studies the macroeconomic effects of taxation in an
estimated business cycle model. Leeper and Sims (1994) and Kim (2000)
estimated DSGE models that are usable for monetary policy analysis.
Canova (1994), DeJong et al. (1996), and Geweke (1999a) proposed
Bayesian approaches to calibration that do not exploit the likelihood
function of the DSGE model and provide empirical applications that assess
business cycle and asset pricing implications of simple stochastic growth
models. The literature on likelihood-based Bayesian estimation of DSGE
models began with work by Landon-Lane (1998), DeJong et al. (2000),
Schorfheide (2000), and Otrok (2001). DeJong et al. (2000) estimate a
stochastic growth model and examine its forecasting performance, Otrok
(2001) fits a real business cycle model with habit formation and time-to-build to
the data to assess the welfare costs of business cycles, and Schorfheide
(2000) considers cash-in-advance monetary DSGE models. DeJong and
Ingram (2001) study the cyclical behavior of skill accumulation, whereas
Chang et al. (2002) estimate a stochastic growth model augmented with a
learning-by-doing mechanism to amplify the propagation of shocks. Chang
and Schorfheide (2003) study the importance of labor supply shocks
and estimate a home-production model. Fernández-Villaverde and Rubio-
Ramírez (2004) use Bayesian estimation techniques to fit a cattle-cycle
model to the data.
Variants of the small-scale New Keynesian DSGE model presented in
Section 2 have been estimated by Rabanal and Rubio-Ramírez (2005a,b)
for the U.S. and the Euro Area. Lubik and Schorfheide (2004) estimate the
benchmark New Keynesian DSGE model without restricting the parameters
to the determinacy region of the parameter space. Schorfheide (2005)
allows for regime-switching of the target inflation level in the monetary
policy rule. Canova (2004) estimates a small-scale New Keynesian model
recursively to assess the stability of the structural parameters over time.
Galí and Rabanal (2005) use an estimated DSGE model to study the effect
of technology shocks on hours worked. Large-scale models that include
capital accumulation and additional real and nominal frictions along the
lines of Christiano et al. (2005) have been analyzed by Smets and Wouters
(2003, 2005) both for the U.S. and the Euro Area. Models similar to Smets
and Wouters (2003) have been estimated by Laforte (2004), Onatski and
Williams (2004), and Levin et al. (2006) to study monetary policy. Many
central banks are in the process of developing DSGE models along the
lines of Smets and Wouters (2003) that can be estimated with Bayesian
techniques and used for policy analysis and forecasting.
Bayesian estimation techniques have also been used in the open
economy literature. Lubik and Schorfheide (2003) estimate the small
open economy extension of the model presented in Section 2 to examine
whether the central banks of Australia, Canada, England, and New Zealand
respond to exchange rates. A similar model is fitted by Del Negro (2003) to
Mexican data. Justiniano and Preston (2004) extend the empirical analysis
to situations of imperfect exchange rate passthrough. Adolfson et al.
(2004) analyze an open economy model that includes capital accumulation
as well as numerous real and nominal frictions. Lubik and Schorfheide
(2006), Rabanal and Tuesta (2006), and de Walque and Wouters (2004)
have estimated multicountry DSGE models.
5. MODEL EVALUATION
In Section 4 we discussed the estimation of a linearized DSGE model
and reported results on the posterior distribution of the model parameters
and variance decompositions. We tacitly assumed that model and prior
provide an adequate probabilistic representation of the uncertainty with
respect to data and parameters. This section studies various techniques that
can be used to evaluate the model fit. We will distinguish the assessment of
absolute fit from techniques that aim to determine the fit of a DSGE model
relative to some other model. The first approach can be implemented
by a posterior predictive model check and has the flavor of a classical
hypothesis test. Relative model comparisons, on the other hand, are
typically conducted by enlarging the model space and applying Bayesian
inference and decision theory to the extended model space. Section 5.1
discusses Bayesian model checks, Section 5.2 reviews model posterior odds
comparisons, and Section 5.3 describes a more elaborate model evaluation
based on comparisons between DSGE models and VARs.
5.1. Posterior Predictive Checks
Predictive checks as a tool to assess the absolute fit of a probability
model have been advocated, for instance, by Box (1980). A probability
model is considered as discredited by the data if one observes an
event that is deemed very unlikely by the model. Such model checks
are controversial from a Bayesian perspective. Methods that determine
whether actual data lie in the tail of a model’s data distribution potentially
favor alternatives that make unreasonably diffuse predictions. Nevertheless,
posterior predictive checks have become a valuable tool in applied
Bayesian analysis though they have not been used much in the context of
DSGE models. An introduction to Bayesian model checking can be found,
for instance, in books by Gelman et al. (1995), Lancaster (2004), and
Geweke (2005).
Let Y^rep be a sample of observations of length T that we could
have observed in the past or that we might observe in the future. We
can derive the predictive distribution of Y^rep given the current state of
knowledge:

    p(Y^rep | Y) = ∫ p(Y^rep | θ) p(θ | Y) dθ.   (44)
Let h(Y) be a test quantity that reflects an aspect of the data that we
want to examine. A quantitative model check can be based on Bayesian
p-values. Suppose that the test quantity is univariate, non-negative, and
has a unimodal density. Then one could compute the probability of the tail
event by

    ∫ I{h(Y^rep) ≥ h(Y)} p(Y^rep | Y) dY^rep,   (45)

where I{x ≥ a} is the indicator function that is one if x ≥ a and zero
otherwise. A small tail probability can be viewed as evidence against model
adequacy.
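A direct Monte Carlo approximation of (45) replaces the integral by an average over posterior draws: for each draw, simulate a replicated sample and check whether its test quantity exceeds the observed one. The sketch below is illustrative only; `simulate` and `h` are hypothetical stand-ins for a DSGE model simulator and a test quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_p_value(h_obs, theta_draws, simulate, h):
    # Monte Carlo estimate of (45): the share of replicated samples whose
    # test quantity is at least as large as the observed value h(Y).
    return np.mean([h(simulate(theta)) >= h_obs for theta in theta_draws])

# Toy stand-ins: data are i.i.d. N(theta, 1); the test quantity is the
# lag-one sample autocorrelation.
simulate = lambda theta, T=80: theta + rng.standard_normal(T)
h = lambda y: np.corrcoef(y[:-1], y[1:])[0, 1]
theta_draws = rng.normal(0.0, 0.1, size=200)         # pretend posterior draws
print(bayes_p_value(0.4, theta_draws, simulate, h))  # near zero: evidence
                                                     # against model adequacy
```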
Rather than constructing numerical approximations of the tail
probabilities for univariate functions h(·), we use a graphical approach to
illustrate the model checking procedure in the context of model ℳ₁(L). We
use the following data transformations: the correlation between inflation
and lagged interest rates, lagged inflation and current interest rates,
output growth and lagged output growth, and output growth and interest
rates. To obtain draws from the posterior predictive distribution of h(Y^rep)
we take every 5,000th parameter draw θ^(s) generated with the RWM
algorithm, simulate a sample Y^rep of 80 observations[9] from the DSGE model
conditional on θ^(s), and calculate h(Y^rep).

[9] To obtain draws from the unconditional distribution of y_t we initialize s_0 = 0 and generate 180 observations, discarding the first 100.
We consider two cases: in the case of no misspecification, depicted in
the left panels of Figure 8, the posterior is constructed based on data set
𝒟₁(L), whereas under misspecification, illustrated in the two right panels
of Figure 8, the posterior is obtained from data set 𝒟₅(L). Each panel
of the figure depicts 200 draws from the posterior predictive distribution
in bivariate scatter plots. Moreover, the intersections of the solid lines
indicate the actual values h(Y). Since 𝒟₁(L) was generated from model
ℳ₁(L), whereas 𝒟₅(L) was not, we would expect the actual values of h(Y)
to lie further in the tails of the posterior predictive distribution in the
right panels than in the left panels. Indeed, a visual inspection of the plots
provides some evidence on the dimensions in which ℳ₁(L) is at odds with data
generated from a model that includes lags of output in the household's
Euler equation. The observed autocorrelation of output growth in 𝒟₅(L) is
much larger, and the actual correlation between output growth and interest
rates is much lower, than the draws generated from the posterior predictive
distribution.

FIGURE 8 Posterior predictive check for Model ℳ₁(L). We plot 200 draws from the posterior predictive distribution of various sample moments. Intersections of solid lines signify the observed sample moments.
5.2. Posterior Odds Comparisons of DSGE Models
The Bayesian framework is naturally geared toward the evaluation
of relative model fit. Researchers can place probabilities on competing
models and assess alternative specifications based on their posterior odds.
An excellent survey of model comparisons based on posterior probabilities
can be found in Kass and Raftery (1995). For concreteness, suppose that in
addition to the specification ℳ₁(L) we consider a version of the New
Keynesian model, denoted by ℳ₃(L), in which prices are nearly flexible,
that is, κ = 5. Moreover, there is a model ℳ₄(L), according to which the
central bank does not respond to output at all and ψ₂ = 0. If we are willing
to place prior probabilities π_{i,0} on the three competing specifications, then
posterior model probabilities can be computed by

    π_{i,T} = π_{i,0} p(Y | ℳ_i) / Σ_{j=1,3,4} π_{j,0} p(Y | ℳ_j),   i = 1, 3, 4.   (46)
The key object in the calculation of posterior probabilities is the marginal
data density p(Y | ℳ_i), which is defined in (39).
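In practice the computation in (46) is carried out on the log scale. A short sketch (our own), using equal prior probabilities and the log marginal data densities that will be reported in Table 4 below:

```python
import numpy as np

def posterior_model_probs(log_mdd, prior_probs):
    # Evaluate (46) from log marginal data densities ln p(Y | M_i),
    # using the log-sum-exp trick for numerical stability.
    log_num = np.log(np.asarray(prior_probs)) + np.asarray(log_mdd)
    log_num -= log_num.max()
    w = np.exp(log_num)
    return w / w.sum()

# ln p(Y | M_i) for i = 1, 3, 4 (values from Table 4 below):
print(posterior_model_probs([-196.7, -245.6, -201.9], [1/3, 1/3, 1/3]))
# -> roughly [0.995, 0.000, 0.005]
```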
Posterior model probabilities can be used to weigh predictions from
different models. In many instances, researchers take a shortcut and
use posterior probabilities to justify the selection of a particular model
specification. All subsequent analysis is then conditioned on the chosen
model. It is straightforward to verify that under a 0–1 loss function (the
loss attached to choosing the wrong model is one), the optimal decision
is to select the highest posterior probability model. This 0–1 loss function
is an attractive benchmark in situations in which the researcher believes
that all the important models are included in the analysis. However, even
in situations in which all the models under consideration are misspecified,
the selection of the highest posterior probability model has some desirable
properties. It has been shown under various regularity conditions, e.g.,
Phillips (1996) and Fernández-Villaverde and Rubio-Ramírez (2004), that
posterior odds (or their large sample approximations) asymptotically favor
the DSGE model that is closest to the ‘true’ data generating process in the
Kullback–Leibler sense. Moreover, the log marginal data density can
be rewritten as

    ln p(Y | ℳ) = Σ_{t=1}^T ln p(y_t | Y^{t−1}, ℳ)
                = Σ_{t=1}^T ln ∫ p(y_t | Y^{t−1}, θ, ℳ) p(θ | Y^{t−1}, ℳ) dθ,   (47)

so that ln p(Y | ℳ) can be interpreted as a predictive score (Good,
1952) and the model comparison based on posterior odds captures the
relative one-step-ahead predictive performance. The practical difficulty
in implementing posterior odds comparisons is the computation of the
marginal data density. We will subsequently consider two numerical
approaches: Geweke's (1999b) modified harmonic mean estimator and
Chib and Jeliazkov's (2001) algorithm.
Harmonic mean estimators are based on the identity

    1/p(Y) = ∫ [f(θ) / (L(θ | Y) p(θ))] p(θ | Y) dθ,   (48)

where f(θ) has the property that ∫ f(θ) dθ = 1 (see Gelfand and Dey,
1994). Conditional on the choice of f(θ), an obvious estimator is

    p̂_G(Y) = [ (1/n_sim) Σ_{s=1}^{n_sim} f(θ^(s)) / (L(θ^(s) | Y) p(θ^(s))) ]^{−1},   (49)

where θ^(s) is drawn from the posterior p(θ | Y). To make the numerical
approximation efficient, f(θ) should be chosen so that the summands are
of equal magnitude. Geweke (1999b) proposed to use the density of a
truncated multivariate normal distribution,

    f(θ) = τ^{−1} (2π)^{−d/2} |V_θ|^{−1/2} exp[−0.5(θ − θ̄)′ V_θ^{−1} (θ − θ̄)]
           × I{(θ − θ̄)′ V_θ^{−1} (θ − θ̄) ≤ F^{−1}_{χ²_d}(τ)}.   (50)

Here θ̄ and V_θ are the posterior mean and covariance matrix computed
from the output of the posterior simulator, d is the dimension of the
parameter vector, F_{χ²_d} is the cumulative density function of a χ² random
variable with d degrees of freedom, and τ ∈ (0, 1). If the posterior of θ
is in fact normal then the summands in (49) are approximately constant.
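A compact numerical version of (49)–(50) might look as follows; this is a sketch (not the authors' implementation) that assumes the posterior kernel values ln[L(θ|Y)p(θ)] at the draws have been stored during the simulation:

```python
import numpy as np
from scipy.stats import chi2

def geweke_mhm(draws, log_kernel, tau=0.9):
    # Modified harmonic mean estimate of ln p(Y) based on (49)-(50).
    # draws: (n_sim, d) posterior draws; log_kernel: (n_sim,) array with
    # ln[L(theta|Y) p(theta)] evaluated at each draw.
    n, d = draws.shape
    theta_bar = draws.mean(axis=0)
    V = np.cov(draws, rowvar=False)
    V_inv = np.linalg.inv(V)
    dev = draws - theta_bar
    quad = np.einsum("ij,jk,ik->i", dev, V_inv, dev)
    _, logdetV = np.linalg.slogdet(V)
    log_f = (-np.log(tau) - 0.5 * d * np.log(2.0 * np.pi)
             - 0.5 * logdetV - 0.5 * quad)
    inside = quad <= chi2.ppf(tau, d)        # truncation region in (50)
    log_ratio = log_f[inside] - log_kernel[inside]
    # 1/p(Y) is estimated by the average of f/(L p) over all n draws;
    # f(theta) is zero outside the truncation region.
    m = log_ratio.max()
    log_inv_p = m + np.log(np.exp(log_ratio - m).sum()) - np.log(n)
    return -log_inv_p
```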
Chib and Jeliazkov (2001) use the following equality as the basis for their
estimator of the marginal data density:

    p(Y) = L(θ | Y) p(θ) / p(θ | Y).   (51)

The equation is obtained by rearranging Bayes' theorem and has
to hold for all θ. While the numerator can be easily computed, the
denominator requires a numerical approximation. Thus,

    p̂_CS(Y) = L(θ̃ | Y) p(θ̃) / p̂(θ̃ | Y),   (52)

where we replaced the generic θ in (51) by the posterior mode θ̃. Within
the RWM algorithm, denote the probability of moving from θ to ϑ by

    α(θ, ϑ | Y) = min{1, r(θ, ϑ | Y)},   (53)

where r(θ, ϑ | Y) was defined in the description of the algorithm. Moreover, let
q(θ, θ̃ | Y) be the proposal density for the transition from θ to θ̃. Then the
posterior density at the mode can be approximated by

    p̂(θ̃ | Y) = [ (1/n_sim) Σ_{s=1}^{n_sim} α(θ^(s), θ̃ | Y) q(θ^(s), θ̃ | Y) ]
               / [ J^{−1} Σ_{j=1}^{J} α(θ̃, θ^(j) | Y) ],   (54)

where {θ^(s)}_{s=1}^{n_sim} are sampled draws from the posterior distribution with the
RWM algorithm and {θ^(j)}_{j=1}^{J} are draws from q(θ̃, θ | Y) given the posterior
mode value θ̃.
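The estimator (52)–(54) can be sketched for the symmetric Gaussian RWM proposal as follows (our own illustrative code; `log_kernel` is an assumed user-supplied function returning ln[L(θ|Y)p(θ)]):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def chib_jeliazkov(draws, log_kernel, theta_mode, prop_cov, J=1000, seed=0):
    # ln p(Y) = ln[L(theta~|Y) p(theta~)] - ln p_hat(theta~|Y), cf. (52),
    # with the posterior ordinate p_hat(theta~|Y) approximated as in (54).
    rng = np.random.default_rng(seed)
    lk_mode = log_kernel(theta_mode)
    q = mvn(mean=theta_mode, cov=prop_cov)
    # Numerator of (54): posterior average of alpha(theta_s, theta~|Y)
    # q(theta_s, theta~|Y); for a symmetric proposal the acceptance
    # probability is alpha = min{1, exp(lk(theta~) - lk(theta_s))}.
    alpha_in = np.array([np.exp(min(0.0, lk_mode - log_kernel(th)))
                         for th in draws])
    num = np.mean(alpha_in * q.pdf(draws))
    # Denominator of (54): average acceptance probability of moves away
    # from the mode, over J draws from the proposal density.
    props = q.rvs(size=J, random_state=rng)
    alpha_out = np.array([np.exp(min(0.0, log_kernel(th) - lk_mode))
                          for th in props])
    return lk_mode - (np.log(num) - np.log(np.mean(alpha_out)))
```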
We first use Geweke's harmonic mean estimator to approximate
the marginal data density associated with ℳ₁(L) and 𝒟₁(L). The results are
summarized in Figure 9. We calculate marginal data densities recursively
(as a function of the number of MCMC draws) for the four chains that
were initialized at different starting values as discussed in Section 4. For
the output gap rule all four chains lead to estimates of around −196.7.
Figure 10 provides a comparison of Geweke's versus Chib and Jeliazkov's
approximation of the marginal data density for the output gap rule
specification. A visual inspection of the plot suggests that both estimators
converge to the same limit point. However, the modified harmonic mean
estimator appears to be more reliable if the number of simulations is small.
FIGURE 9 Data densities from multiple chains – Model ℳ₁(L), Data 𝒟₁(L). For each Markov chain, log marginal data densities are computed recursively with Geweke's modified harmonic mean estimator and plotted as a function of the number of draws.

FIGURE 10 Geweke vs. Chib–Jeliazkov data densities – Model ℳ₁(L), Data 𝒟₁(L). Log marginal data densities are computed recursively with Geweke's modified harmonic mean estimator as well as the Chib–Jeliazkov estimator and plotted as a function of the number of draws.
TABLE 4 Log marginal data densities based on 𝒟₁(L)

    Specification                               ln p(Y | ℳ)   Bayes factor versus ℳ₁(L)
    Benchmark model ℳ₁(L)                       −196.7        1.00
    Model with nearly flexible prices ℳ₃(L)     −245.6        exp[48.9]
    No policy reaction to output ℳ₄(L)          −201.9        exp[5.2]

Notes: The log marginal data densities for the DSGE model specifications are computed based on Geweke's (1999b) modified harmonic mean estimator.
Based on data set 𝒟₁(L) we also estimate specification ℳ₃(L) in which
prices are nearly flexible and specification ℳ₄(L) in which the central
bank does not respond to output. The model comparison results are
summarized in Table 4. The marginal data density associated with ℳ₃(L)
is −245.6, which translates into a Bayes factor (ratio of posterior odds
to prior odds) of approximately e^49 in favor of ℳ₁(L). Hence, data set
𝒟₁(L) provides very strong evidence against flexible prices. Given the
fairly concentrated posterior of κ depicted in Figure 1, this result is not
surprising. The marginal data density of model ℳ₄(L) is equal to −201.9
and the Bayes factor of model ℳ₁(L) versus model ℳ₄(L) is 'only' e^5. The
DSGE model implies that when actual output is close to the target flexible
price output, inflation will also be close to its target value. Vice versa,
deviations of output from target coincide with deviations of inflation from
its target value. This mechanism makes it difficult to identify the policy rule
coefficients, and imposing an incorrect value for ψ₂ is not particularly costly
in terms of fit.
5.3. Comparison of a DSGE Model with a VAR
The notion of potential misspecification of a DSGE model can be
incorporated in the Bayesian framework by including a more general
reference model ℳ₀ in the model set. A natural choice of reference
model in dynamic macroeconomics is a VAR, as linearized DSGE models,
at least approximately, can be interpreted as restrictions on a VAR
representation. The first step of the comparison is typically to compute
posterior probabilities for the DSGE model and the VAR, which can be
used to detect the presence of misspecifications. Since the VAR parameter
space is generally much larger than the DSGE model parameter space,
the specification of a prior distribution for the VAR parameters requires
careful attention. Possible pitfalls are discussed in Sims (2003). A VAR with
a prior that is very diffuse is likely to be rejected even against a misspecified
DSGE model. In a more general context this phenomenon is often called
Lindley's paradox. We will subsequently present a procedure that allows us
to document how the marginal data density of the DSGE model changes as
the cross-coefficient restrictions that the DSGE model imposes on the VAR
are relaxed.
If the data favor the VAR over the DSGE model then it becomes
important to investigate further the deficiencies of the structural model.
This can be achieved by comparing the posterior distributions of
interesting population characteristics such as impulse response functions
obtained from the DSGE model and from a VAR representation that does
not dogmatically impose the DSGE model restrictions. We will refer to
the latter benchmark model as DSGE-VAR and discuss an identification
scheme that allows us to construct structural impulse response functions
for the vector autoregressive specification against which the DSGE model
can be evaluated. Building on work by Ingram and Whiteman (1994), the
DSGE-VAR approach of Del Negro and Schorfheide (2004) was designed
to improve forecasting and monetary policy analysis with VARs. The
framework has been extended to a model evaluation tool in Del Negro
et al. (2006) and used to assess the fit of a variant of the Smets and Wouters
(2003) model.
To construct a DSGE-VAR we proceed as follows. Consider a vector
autoregressive specification of the form

    y_t = Φ_0 + Φ_1 y_{t−1} + ··· + Φ_p y_{t−p} + u_t,   𝔼[u_t u_t′] = Σ.   (55)
Define the k×1 vector x_t = [1, y′_{t−1}, ..., y′_{t−p}]′, the coefficient matrix
Φ = [Φ_0, Φ_1, ..., Φ_p]′, the T×n matrices Y and U composed of rows y′_t and
u′_t, and the T×k matrix X with rows x′_t. Thus the VAR can be written as
Y = XΦ + U. Let 𝔼^D_θ[·] be the expectation under the DSGE model and define
the autocovariance matrices

    Γ_XX(θ) = 𝔼^D_θ[x_t x′_t],   Γ_XY(θ) = 𝔼^D_θ[x_t y′_t].

A VAR approximation of the DSGE model can be obtained from restriction
functions that relate the DSGE model parameters to the VAR parameters,

    Φ*(θ) = Γ_XX^{−1}(θ) Γ_XY(θ),   Σ*(θ) = Γ_YY(θ) − Γ_YX(θ) Γ_XX^{−1}(θ) Γ_XY(θ).   (56)

This approximation is typically not exact because the state–space
representation of the linearized DSGE model generates moving average
terms. Its accuracy depends on the number of lags p, and the magnitude
of the roots of the moving average polynomial. We will document below
that four lags are sufficient to generate a fairly precise approximation of
the model ℳ₁(L).[10]

[10] In principle one could start from a VARMA model to avoid the approximation error. The posterior computations, however, would become significantly more cumbersome.
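Given the model-implied population moments, the restriction functions (56) amount to a population regression; the sketch below (our own; the Γ matrices are assumed to be precomputed from the DSGE model solution) uses Γ_YX(θ) = Γ_XY(θ)′:

```python
import numpy as np

def var_approx(gamma_xx, gamma_xy, gamma_yy):
    # Restriction functions (56): map DSGE-implied moments Gamma_XX(theta),
    # Gamma_XY(theta), Gamma_YY(theta) into VAR coefficients.
    phi_star = np.linalg.solve(gamma_xx, gamma_xy)
    sigma_star = gamma_yy - gamma_xy.T @ np.linalg.solve(gamma_xx, gamma_xy)
    return phi_star, sigma_star
```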
In order to account for potential misspecification of the DSGE model
it is assumed that there is a vector θ and matrices Φ_Δ and Σ_Δ such that the
data are generated from the VAR in (55) with the coefficient matrices

    Φ = Φ*(θ) + Φ_Δ,   Σ = Σ*(θ) + Σ_Δ.   (57)

The matrices Φ_Δ and Σ_Δ capture deviations from the restriction functions
Φ*(θ) and Σ*(θ).

Bayesian analysis of this model requires the specification of a prior
distribution for the DSGE model parameters, p(θ), and the misspecification
matrices. Rather than specifying a prior in terms of Φ_Δ and Σ_Δ, it is
convenient to specify it in terms of Φ and Σ conditional on θ. We assume

    Σ | θ ∼ IW(λT Σ*(θ), λT − k, n)
    Φ | Σ, θ ∼ N(Φ*(θ), (1/(λT)) [Σ^{−1} ⊗ Γ_XX(θ)]^{−1}),   (58)

where IW denotes the inverted Wishart distribution.[11] The prior
distribution can be interpreted as a posterior calculated from a sample of
λT observations generated from the DSGE model with parameters θ; see
Del Negro and Schorfheide (2004). It has the property that its density is
approximately proportional to the Kullback–Leibler discrepancy between
the VAR approximation of the DSGE model and the Φ–Σ VAR, which is
emphasized in Del Negro et al. (2006). λ is a hyperparameter that scales the
prior covariance matrix. The prior is diffuse for small values of λ and shifts
its mass closer to the DSGE model restrictions as λ → ∞. In the limit the
VAR is estimated subject to the restrictions (56). The prior distribution is
proper provided that λT ≥ k + n.

[11] Ingram and Whiteman (1994) constructed a VAR prior from a stochastic growth model to improve the forecast performance of the VAR. However, their setup did not allow for the computation of a posterior distribution for θ.
The joint posterior density of VAR and DSGE model parameters can be
conveniently factorized as

    p_λ(Φ, Σ, θ | Y) = p_λ(Φ, Σ | Y, θ) p_λ(θ | Y).   (59)

The λ-subscript indicates the dependence of the posterior on the
hyperparameter. It is straightforward to show, e.g., Zellner (1971), that the
posterior distribution of Φ and Σ is also of the inverted Wishart–normal
form:

    Σ | Y, θ ∼ IW((1 + λ)T Σ̂_b(θ), (1 + λ)T − k, n)
    Φ | Y, Σ, θ ∼ N(Φ̂_b(θ), Σ ⊗ [λT Γ_XX(θ) + X′X]^{−1}),   (60)
where Φ̂_b(θ) and Σ̂_b(θ) are given by

    Φ̂_b(θ) = [ (λ/(1+λ)) Γ_XX(θ) + (1/(1+λ)) X′X/T ]^{−1}
              × [ (λ/(1+λ)) Γ_XY(θ) + (1/(1+λ)) X′Y/T ],

    Σ̂_b(θ) = (1/((1+λ)T)) [ (λT Γ_YY(θ) + Y′Y) − (λT Γ_YX(θ) + Y′X)
              × (λT Γ_XX(θ) + X′X)^{−1} (λT Γ_XY(θ) + X′Y) ].
Hence the larger the weight λ of the prior, the closer the posterior mean
of the VAR parameters is to Φ*(θ) and Σ*(θ), the values that respect
the cross-equation restrictions of the DSGE model. On the other hand,
if λ = (n + k)/T, then the posterior mean is close to the OLS estimate
(X′X)^{−1} X′Y. The marginal posterior density of θ can be obtained through
the marginal likelihood

    p_λ(Y | θ) = [ |λT Γ_XX(θ) + X′X|^{−n/2} |(1+λ)T Σ̂_b(θ)|^{−((1+λ)T−k)/2} ]
                 / [ |λT Γ_XX(θ)|^{−n/2} |λT Σ*(θ)|^{−(λT−k)/2} ]
                 × (2π)^{−nT/2} 2^{n((1+λ)T−k)/2} ∏_{i=1}^n Γ[((1+λ)T − k + 1 − i)/2]
                 / { 2^{n(λT−k)/2} ∏_{i=1}^n Γ[(λT − k + 1 − i)/2] }.

A derivation is provided in Del Negro and Schorfheide (2004). The paper
also shows that in large samples the resulting estimator of θ can be
interpreted as a Bayesian minimum distance estimator that projects the
VAR coefficient estimates onto the subspace generated by the restriction
functions (56).
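As an illustration of (60), the posterior means can be computed by adding λT artificial "dummy observations," generated by the DSGE model moments, to the actual sample moments (a sketch under the assumption that the population moments Γ(θ) and the data matrices X, Y are available):

```python
import numpy as np

def dsgevar_posterior_means(lam, T, X, Y, gamma_xx, gamma_xy, gamma_yy):
    # Posterior means in (60): combine lam*T artificial observations with
    # the actual sample moments X'X, X'Y, Y'Y.
    Gxx = lam * T * gamma_xx + X.T @ X
    Gxy = lam * T * gamma_xy + X.T @ Y
    Gyy = lam * T * gamma_yy + Y.T @ Y
    phi_b = np.linalg.solve(Gxx, Gxy)
    sigma_b = (Gyy - Gxy.T @ np.linalg.solve(Gxx, Gxy)) / ((1.0 + lam) * T)
    return phi_b, sigma_b
```

For large λ the artificial observations dominate and phi_b approaches Φ*(θ); for small λ the formula approaches the OLS estimates.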
Since the empirical performance of the DSGE-VAR procedure crucially
depends on the weight placed on the DSGE model restrictions, it is
important to consider a data-driven procedure to determine λ. A natural
criterion for the choice of λ in a Bayesian framework is the marginal data
density

    p_λ(Y) = ∫ p_λ(Y | θ) p(θ) dθ.   (61)

For computational reasons we restrict the hyperparameter to a finite grid
Λ. If one assigns equal prior probability to each grid point then the
normalized p_λ(Y)'s can be interpreted as posterior probabilities for λ.
Del Negro et al. (2006) emphasize that the posterior of λ provides a
measure of fit for the DSGE model: high posterior probabilities for large
values of λ indicate that the model is well specified and a lot of weight
should be placed on its implied restrictions. Define

    λ̂ = argmax_{λ∈Λ} p_λ(Y).   (62)
If p_λ(Y) peaks at an intermediate value of λ, say between 0.5 and 2,
then a comparison between DSGE-VAR(λ̂) and DSGE model impulse
responses can yield important insights about the misspecification of the
DSGE model.
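Concretely, once the log marginal data densities on the grid have been computed (e.g., with Geweke's estimator, cf. Step 2 of the algorithm below), the selection (62) is a few lines. This sketch plugs in the 𝒟₅(L) values from Table 5 later in this section:

```python
import numpy as np

grid = np.array([0.25, 0.50, 0.75, 1.00, 5.00, np.inf])
log_p = np.array([-253.61, -241.81, -239.40, -238.59, -241.95, -244.54])
lam_hat = grid[np.argmax(log_p)]    # eq. (62): here lam_hat = 1.0
w = np.exp(log_p - log_p.max())     # equal prior on grid points, so the
w /= w.sum()                        # normalized values are posterior probs
print(lam_hat, np.round(w, 3))      # mass concentrates near lambda = 1
```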
An impulse response function comparison requires the identification
of structural shocks in the context of the VAR. So far, the VAR given in (55)
has been specified in terms of reduced form disturbances u_t. According to
the DSGE model, the one-step-ahead forecast errors u_t are functions of the
structural shocks ε_t, which we represent by

    u_t = Σ_tr Ω ε_t.   (63)

Σ_tr is the Cholesky decomposition of Σ, and Ω is an orthonormal
matrix that is not identifiable based on the likelihood function associated
with (55). Del Negro and Schorfheide (2004) proposed to construct Ω as
follows. Let A_0(θ) be the contemporaneous impact of ε_t on y_t according to
the DSGE model. Using a QR factorization, the initial response of y_t to the
structural shocks can be uniquely decomposed into

    (∂y_t / ∂ε′_t)_DSGE = A_0(θ) = Σ*_tr(θ) Ω*(θ),   (64)

where Σ*_tr(θ) is lower triangular and Ω*(θ) is orthonormal. The initial
impact of ε_t on y_t in the VAR, on the other hand, is given by

    (∂y_t / ∂ε′_t)_VAR = Σ_tr Ω.   (65)

To identify the DSGE-VAR, we maintain the triangularization of its
covariance matrix Σ and replace the rotation Ω in (65) with the function
Ω*(θ) that appears in (64). The rotation matrix is chosen so that in the absence
of misspecification the DSGE's and the DSGE-VAR's impulse responses
to all shocks approximately coincide. To the extent that misspecification
is mainly in the dynamics, as opposed to the covariance matrix of
innovations, the identification procedure can be interpreted as matching,
at least qualitatively, the short-run responses of the VAR with those from
the DSGE model. The estimation of the DSGE-VAR can be implemented
as follows.
MCMC Algorithm for DSGE-VAR

1. Use the RWM algorithm to generate draws θ^(s) from the marginal
posterior distribution p_λ(θ | Y).
2. Use Geweke's modified harmonic mean estimator to obtain a numerical
approximation of p_λ(Y).
3. For each draw θ^(s), generate a pair Φ^(s), Σ^(s) by sampling from the
inverted Wishart–normal distribution in (60). Moreover, compute the orthonormal matrix
Ω^(s) = Ω*(θ^(s)) as described above.
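Step 3 requires the QR-based decomposition in (64). A minimal sketch (ours; it assumes the impact matrix A_0(θ) is square and nonsingular):

```python
import numpy as np

def dsge_rotation(A0):
    # Decompose A0(theta) = Sigma_tr*(theta) Omega*(theta) as in (64):
    # lower triangular times orthonormal, via QR of the transpose.
    Q, R = np.linalg.qr(A0.T)         # A0' = Q R  =>  A0 = R' Q'
    S = np.diag(np.sign(np.diag(R)))  # normalize the triangular factor
    sigma_tr_star = R.T @ S           # lower triangular, diag >= 0
    omega_star = S @ Q.T              # orthonormal rotation Omega*(theta)
    return sigma_tr_star, omega_star
```

The DSGE-VAR impact matrix for draw s is then the Cholesky factor of Σ^(s) multiplied by omega_star, which replaces Ω in (65).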
We now implement the DSGE-VAR procedure for ℳ₁(L) based on
artificially generated data. All the results reported subsequently are based
on a DSGE-VAR with p = 4 lags. The first step of our analysis is to construct
a posterior distribution for the parameter λ. We assume that λ lies on the
grid Λ = {0.25, 0.5, 0.75, 1, 5, ∞} and assign equal probabilities to each grid
point. For λ = ∞ the misspecification matrices Φ_Δ and Σ_Δ are zero and we
estimate the VAR by imposing the restrictions Φ = Φ*(θ) and Σ = Σ*(θ).
Table 5 reports log marginal data densities for the DSGE-VAR as a
function of λ for data sets 𝒟₁(L) and 𝒟₅(L). The first row reports the
marginal data density associated with the state space representation of
the DSGE model, whereas the second row corresponds to the DSGE-
VAR(∞). If the VAR approximation of the DSGE model were exact, then
the values in the first and second rows of Table 5 would be identical. For
data set 𝒟₁(L), which has been directly generated from the state space
representation of ℳ₁(L), the two values are indeed quite close. Moreover,
the marginal data densities are increasing as a function of λ, indicating that
there is no evidence of misspecification and that it is best to dogmatically
impose the DSGE model restrictions.

With respect to data set 𝒟₅(L) the DSGE model is misspecified. This
misspecification is captured in the inverse U-shape of the marginal data
density function, which is representative of the shapes found in empirical
applications in Del Negro and Schorfheide (2004) and Del Negro et al.
(2006). The value λ = 1.00 attains the highest marginal data density. The
marginal data densities drop substantially for λ ≥ 5 and λ ≤ 0.5. In order to
assess the nature of the misspecification for 𝒟₅(L) we now turn to the impulse
response function comparison.

TABLE 5 Log marginal data densities for ℳ₁(L) DSGE-VARs

    Specification         𝒟₁(L)      𝒟₅(L)
    DSGE model           −196.66    −245.62
    DSGE-VAR λ = ∞       −196.88    −244.54
    DSGE-VAR λ = 5.00    −198.87    −241.95
    DSGE-VAR λ = 1.00    −206.57    −238.59
    DSGE-VAR λ = 0.75    −209.53    −239.40
    DSGE-VAR λ = 0.50    −215.06    −241.81
    DSGE-VAR λ = 0.25    −231.20    −253.61

Notes: See Table 4.
In principle, there are three different sets of impulse responses to
be compared: responses from the state–space representation of the DSGE
model, and those from the λ = ∞ and the λ = λ̂ DSGE-VAR. Moreover, these impulse
responses can either be compared based on the same parameter values
θ or based on the respective posterior estimates of the DSGE model parameters.
Figure 11 depicts posterior mean impulse responses for the state–space
representation of the DSGE model as well as the DSGE-VAR(∞). In both
cases we use the same posterior distribution of θ, namely, the one obtained
from the DSGE-VAR(∞). The discrepancy between the posterior mean
responses (solid and dashed lines) indicates the magnitude of the error
that is made by approximating the state–space representation of the DSGE
model by a fourth-order VAR. Except for the response of output to the
government spending shock, the DSGE and DSGE-VAR(∞) responses are
virtually indistinguishable, indicating that the approximation error is small.
The (dotted) bands in Figure 11 depict pointwise 90% probability intervals
for the DSGE-VAR(∞) and reflect posterior parameter uncertainty with
respect to θ.

FIGURE 11 Impulse responses, DSGE and DSGE-VAR(λ = ∞) – Model ℳ₁(L), Data 𝒟₅(L). DSGE model responses computed from state–space representation: posterior mean (solid); DSGE-VAR(λ = ∞) responses: posterior mean (dashed) and pointwise 90% probability bands (dotted).
To assess the misspecification of the DSGE model we compare posterior
mean responses of the λ = ∞ (dashed) and λ̂ (solid) DSGE-VAR in
Figure 12. Both sets of responses are constructed from the λ̂ posterior
of θ. The (dotted) 90% probability bands reflect uncertainty with respect
to the discrepancy between the λ = ∞ and λ̂ responses. A brief description
of the computation can clarify the interpretation. For each draw of θ
from the λ̂ DSGE-VAR posterior we compute the λ̂ and the λ = ∞
response functions, calculate their differences, and construct probability
intervals of the differences. To obtain the dotted bands in the figure,
we take the upper and lower bounds of these probability intervals and
shift them according to the posterior mean of the λ = ∞ response.
Roughly speaking, the figure answers the question: to what extent do the λ̂
estimates of the VAR coefficients deviate from the DSGE model restriction
functions Φ*(θ) and Σ*(θ)? The discrepancy, however, is transformed from
the Φ–Σ space into impulse response functions because they are easier
to interpret.
While the relaxation of the DSGE model restrictions has little effect on
the propagation of the technology shock, the inflation and interest rate
responses to a monetary policy shock are markedly different. According to
the VAR approximation of the DSGE model, inflation returns quickly to
its steady-state level after a contractionary policy shock. The DSGE-VAR(λ̂)
response of inflation, on the other hand, has a slight hump shape, and the
reversion to the steady state is much slower. Unlike in the DSGE-VAR(∞),
the interest rate falls below its steady state in period four and stays negative
for several periods.
In a general equilibrium model a misspecification of the households'
decision problem can have effects on the dynamics of all the endogenous
variables. Rather than looking at the dynamic responses of the endogenous
variables to the structural shocks, we can also ask to what extent the
optimality conditions that the DSGE model imposes are satisfied by the DSGE-
VAR(λ̂) responses. According to the log-linearized DSGE model, output,
inflation, and interest rates satisfy the following relationships in response
to the shock ε_{R,t}:

    0 = ŷ_t − Ê_t[ŷ_{t+1}] + (1/τ)(R̂_t − Ê_t[π̂_{t+1}])   (66)
    0 = π̂_t − β Ê_t[π̂_{t+1}] − κ ŷ_t   (67)
    ε_{R,t} = R̂_t − ρ_R R̂_{t−1} − (1 − ρ_R) ψ₁ π̂_t − (1 − ρ_R) ψ₂ ŷ_t   (68)

FIGURE 12 Impulse responses, DSGE-VAR(λ = ∞) and DSGE-VAR(λ = 1) – Model ℳ₁(L), Data 𝒟₅(L). DSGE-VAR(λ = ∞) posterior mean responses (dashed), DSGE-VAR(λ = 1) posterior mean responses (solid). Pointwise 90% probability bands (dotted) signify shifted probability intervals for the difference between the λ = ∞ and λ = 1 responses.
Figure 13 depicts the paths of the right-hand sides of Equations (66) to (68)
in response to a monetary policy shock. Based on draws from the joint
posterior of the DSGE and VAR parameters we can first calculate identified
VAR responses to obtain the paths of output, inflation, and the interest rate,
and in a second step use the DSGE model parameters to calculate the
wedges in the Euler equation, the price setting equation, and the monetary
policy rule. The dashed lines show the posterior mean responses according
to the state–space representation of the DSGE model, and the solid lines
depict DSGE-VAR responses. The top three panels of Figure 13 show that
both for the DSGE model as well as the DSGE-VAR(∞) the three equations
are satisfied. The bottom three panels of the figure are based on the DSGE-
VAR(λ̂), which relaxes the DSGE model restrictions. While the price setting
relationship and the monetary policy rule restriction seem to be satisfied,
at least for the posterior mean, Panel (2,1) indicates a substantial violation
of the consumption Euler equation. This finding is encouraging for the
evaluation strategy since data set 𝒟₅(L) was indeed generated from a
model with a modified Euler equation.

FIGURE 13 Impulse responses, DSGE-VAR(λ = ∞) and DSGE-VAR(λ = 1) – Model ℳ₁(L), Data 𝒟₅(L). DSGE model responses: posterior mean (dashed); DSGE-VAR responses: posterior mean (solid) and pointwise 90% probability bands (dotted).
While this section focused mostly on the assessment of a single DSGE
model, Del Negro et al. (2006) use the DSGE-VAR framework also to
compare multiple DSGE models. Schorfheide (2000) proposed a loss-
function-based evaluation framework for (multiple) DSGE models that
augments potentially misspecified DSGE models with a more general
reference model, constructs a posterior distribution for population
characteristics of interest such as autocovariances and impulse responses,
and then examines the ability of the DSGE models to predict the
population characteristics. This prediction is evaluated under a loss
function and the risk is calculated under an overall posterior distribution
that averages the predictions of the DSGE models and the reference
model according to their posterior probabilities. This loss-function-based
approach nests DSGE model comparisons based on marginal data densities
as well as assessments based on a comparison of model moments to sample
moments, popularized in the calibration literature, as special cases.
6. NONLINEAR METHODS
For a non-linear/non-normal state–space model, the linear Kalman
filter cannot be used to compute the likelihood function. Instead,
numerical methods have to be used to integrate out the latent state vector. The
evaluation of the likelihood function can be implemented with a particle
filter, also known as a sequential Monte Carlo filter. Gordon et al. (1993)
and Kitagawa (1996) are early contributors to this literature. Arulampalam
et al. (2002) provide an excellent survey. In economics, the particle filter
has been applied to the analysis of stochastic volatility models by Pitt and
Shephard (1999) and Kim et al. (1998). Recently, Fernández-Villaverde
and Rubio-Ramírez (2005) used the filter to construct the likelihood for a
DSGE model solved with a projection method. We follow their approach
and use the particle filter for DSGE model ℳ₁ solved with a second-order
perturbation method. A brief description of the procedure is given below.
We use Y^t to denote the t×n matrix with rows y′_τ, τ = 1, ..., t. The vector
s_t has been defined in Section 2.4.
Particle Filter

1) Initialization: Draw N particles s^i_0, i = 1, ..., N, from the initial
distribution p(s_0 | θ). By induction, in period t we start with the particles
{s^i_{t−1}}_{i=1}^N, which approximate p(s_{t−1} | Y^{t−1}, θ).
2) Prediction: Draw one-step-ahead forecasted particles {s̃^i_t}_{i=1}^N from
p(s_t | Y^{t−1}, θ). Note that

    p(s_t | Y^{t−1}, θ) = ∫ p(s_t | s_{t−1}, θ) p(s_{t−1} | Y^{t−1}, θ) ds_{t−1}
                        ≈ (1/N) Σ_{i=1}^N p(s_t | s^i_{t−1}, θ).

Hence one can draw N particles from p(s_t | Y^{t−1}, θ) by generating one
particle from p(s_t | s^i_{t−1}, θ) for each i.
3) Filtering: The goal is to approximate

    p(s_t | Y^t, θ) = p(y_t | s_t, θ) p(s_t | Y^{t−1}, θ) / p(y_t | Y^{t−1}, θ),   (69)

which amounts to updating the probability weights assigned to the particles
{s̃^i_t}_{i=1}^N. We begin by computing the non-normalized importance weights
π̃^i_t = p(y_t | s̃^i_t, θ). The denominator in (69) can be approximated by

    p(y_t | Y^{t−1}, θ) = ∫ p(y_t | s_t, θ) p(s_t | Y^{t−1}, θ) ds_t ≈ (1/N) Σ_{i=1}^N π̃^i_t.   (70)

Now define the normalized weights

    π^i_t = π̃^i_t / Σ_{j=1}^N π̃^j_t

and note that the importance sampler {s̃^i_t, π^i_t}_{i=1}^N approximates[12] the
updated density p(s_t | Y^t, θ).

[12] The sequential importance sampler (SIS) is a Monte Carlo method that utilizes this aspect, but it is well documented that it suffers from a degeneracy problem: after a few iterations, all but one particle will have negligible weight.
4) Resampling: We now generate a new set of particles {s^i_t}_{i=1}^N by
resampling with replacement N times from an approximate discrete
representation of p(s_t | Y^t, θ) given by {s̃^i_t, π^i_t}_{i=1}^N, so that

    Pr{s^i_t = s̃^j_t} = π^j_t,   i = 1, ..., N.

The resulting sample is in fact an iid sample from the discretized density of
p(s_t | Y^t, θ) and hence is equally weighted.
5) Likelihood Evaluation: According to (70) the log-likelihood function
can be approximated by

    ln L(θ | Y^T) = ln p(y_1 | θ) + Σ_{t=2}^T ln p(y_t | Y^{t−1}, θ)
                  ≈ Σ_{t=1}^T ln[ (1/N) Σ_{i=1}^N π̃^i_t ].
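Putting steps 1) through 5) together, a compact sketch of the filter might look as follows. This is illustrative only: `draw_s0`, `draw_transition`, and `log_meas_dens` are hypothetical model-specific functions, and for brevity the resampling here is multinomial rather than the stratified scheme discussed in Section 6.1.

```python
import numpy as np

def particle_filter_loglik(y, theta, draw_s0, draw_transition, log_meas_dens,
                           N=40000, seed=0):
    # y: (T, n) data; draw_s0(theta, N, rng) draws from p(s_0|theta);
    # draw_transition(s, theta, rng) propagates particles through
    # p(s_t|s_{t-1}, theta); log_meas_dens(y_t, s, theta) returns the
    # (N,) vector ln p(y_t|s_t, theta).
    rng = np.random.default_rng(seed)
    s = draw_s0(theta, N, rng)                      # step 1: initialization
    loglik = 0.0
    for t in range(len(y)):
        s = draw_transition(s, theta, rng)          # step 2: prediction
        logw = log_meas_dens(y[t], s, theta)        # step 3: weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())              # step 5: ln p(y_t|Y^{t-1})
        idx = rng.choice(N, size=N, p=w / w.sum())  # step 4: resampling
        s = s[idx]
    return loglik
```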
To compare results from first- and second-order accurate DSGE model
solutions we simulated artificial data of 80 observations from the quadratic
approximation of model ℳ₁ with parameter values reported in Table 3
under the heading 𝒟₁(Q). In the remainder of this section we will refer
to these parameters as "true" values as opposed to "pseudotrue" values to
be defined later. These true parameter values by and large resemble the
parameter values that have been used to generate data from the log-linear
DSGE model specifications. However, there are some exceptions. The
degree of imperfect competition, ν, is set to 0.1, which implies a steady-state
markup of 11% that is consistent with the estimates of Basu (1995). 1/g
is chosen to be 0.85, which implies that steady-state consumption amounts
to 85% of output. The slope coefficient of the Phillips curve, κ, is chosen
to be 0.33, which implies less price stickiness than in the linear case. The
implied quadratic adjustment cost coefficient, φ, is 53.68. We also make
monetary policy less responsive to the output gap by setting ψ₂ = 0.125. γ^(Q),
r^(A), and π^(A) are chosen as 0.55, 1.0, and 3.2 to match the average output
growth rate, interest rate, and inflation rate between the artificial data and the
real U.S. data. These values are different from the means of the historical
observations because in the non-linear version of the DSGE model means
differ from steady states. The other exogenous process parameters, ρ_g, ρ_z, σ_R,
σ_g, and σ_z, are chosen so that the second moments are matched to the U.S.
data. For computational convenience, a measurement error is introduced
into each measurement equation, whose standard deviation is set to 20% of
that of the corresponding observation.[13]

[13] Without the measurement error two complications arise: since p(y_t | s_t) degenerates to a point mass and the distribution of s_t is that of a multivariate noncentral χ², the evaluation of p(y_t | Y^{t−1}) becomes difficult. Moreover, the computation of p(s_t | Y^t) requires the solution of a system of quadratic equations.
6.1. Configuration of the Particle Filter
There are several issues concerning the practical implementation of
the particle filter. First, we need a scheme to draw from the initial state
distribution p(s_0 | θ). In the linear case, it is straightforward to calculate the
unconditional distribution of s_t associated with the vector autoregressive
representation (33). For the second-order approximation we rewrite the
state transition equation (37) as follows. Decompose s_t = [x′_t, ζ′_t]′, where ζ_t
is composed of the exogenous processes ε_{R,t}, ĝ_t, and ẑ_t. The law of motion
for the endogenous state variables x_{j,t} can be expressed as

    x_{j,t} = Ψ^(0)_j + Ψ^(1)_j w_t + w′_t Ψ^(2)_j w_t,   (71)

where w_t = [x′_{t−1}, ζ′_t]′. We generate x_0 by drawing ζ_0 from its unconditional
distribution (the law of motion for ζ_t is linear) and setting x_{−1} = x̄, where
x̄ is the steady state of x_t.
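In code, the law of motion (71) and the initialization just described can be sketched as follows (our illustration; the Ψ arrays, the steady state x_bar, and the unconditional covariance V_zeta of the exogenous states are assumed to come from the second-order model solution):

```python
import numpy as np

def quad_transition(x_prev, zeta_t, psi0, psi1, psi2):
    # Second-order law of motion (71): for each endogenous state j,
    # x_{j,t} = psi0[j] + psi1[j] @ w_t + w_t' @ psi2[j] @ w_t,
    # where w_t = [x_{t-1}', zeta_t']'. Shapes: psi0 (nx,), psi1 (nx, nw),
    # psi2 (nx, nw, nw).
    w = np.concatenate([x_prev, zeta_t])
    return psi0 + psi1 @ w + np.einsum("i,jik,k->j", w, psi2, w)

def draw_s0(psi0, psi1, psi2, x_bar, V_zeta, N, rng):
    # Initialization described above: set x_{-1} to the steady state and
    # draw zeta_0 from its (Gaussian) unconditional distribution.
    zeta0 = rng.multivariate_normal(np.zeros(V_zeta.shape[0]), V_zeta, size=N)
    x0 = np.array([quad_transition(x_bar, z, psi0, psi1, psi2) for z in zeta0])
    return np.hstack([x0, zeta0])
```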
Second, we have to choose the number of particles. For a good
approximation of the prediction error distribution, it is desirable to
have many particles, especially enough particles to capture the tails of
p(s_t | Y^t). Moreover, the number of particles affects the performance
of the resampling algorithm. If the number of particles is too small,
the resampling will not work well. In our implementation, the stratified
resampling scheme proposed by Kitagawa (1996) is applied. It is optimal
in terms of variance in the class of unbiased resampling schemes. If the
measurement errors in the conditional distribution p(y_t | s_t) are small,
more particles are needed to obtain an accurate approximation of the
likelihood function and to ensure that the posterior weights π^i_t do not
assign probability one to a single particle. In our application, we found
that 40,000 particles were enough to get stable evaluations of the likelihood
given the size of the measurement errors.
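For reference, a minimal rendering of stratified resampling in the spirit of Kitagawa (1996) (our own sketch; it returns indices into the particle array):

```python
import numpy as np

def stratified_resample(weights, rng):
    # One uniform draw per stratum [i/N, (i+1)/N), i = 0, ..., N-1:
    # compared with multinomial resampling, this reduces the Monte Carlo
    # variance of the resampling step.
    N = len(weights)
    u = (np.arange(N) + rng.uniform(size=N)) / N
    cs = np.cumsum(weights)
    cs[-1] = 1.0                    # guard against round-off
    return np.searchsorted(cs, u)   # indices of the resampled particles
```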
Even though the estimation procedure involves extensive random
sampling, the particle filter can be readily implemented on a good desktop
computer. We implement most of the procedure in MATLAB (2005), which
lets us exploit the symbolic toolbox to solve the DSGE model, but evaluating
the likelihood function with the particle filter in MATLAB alone takes too
much time. The filtering step is therefore implemented as a FORTRAN mex
library that can be called as a function from MATLAB. Based on a sample of
80 observations we can generate 1,000 draws with 40,000 particles in 1 h and
40 min (6.0 sec per draw) on an AMD64 3000+ machine with 1 GB of RAM.
Linear methods are more than 200 times faster: 1,000 draws are generated
in 24 sec in our MATLAB routines and in 3 sec in our GAUSS routines.
6.2. A Look at the Likelihood Function
We will begin our analysis by studying profiles of likelihood functions.
For brevity, we will refer to the likelihood obtained from the first-order
(second-order) accurate solution of the DSGE model as the linear (quadratic)
likelihood. It is natural to evaluate the quadratic likelihood function in the
neighborhood of the true parameter values given in Table 3. However, we
evaluate the linear likelihood around the pseudotrue parameter values,
which are obtained by finding the mode of the linear likelihood using
3,000 observations. The parameter values for ν and 1/g are fixed at their
true values because they do not enter the linear likelihood. Recall that we
had introduced a reduced-form Phillips curve parameter κ in (32) because
ν and φ cannot be identified with the log-linearized DSGE model.
Figure 14 shows the log-likelihood profiles along each structural
parameter dimension. Two features are noticeable in this comparison.
First, the quadratic log-likelihood peaks around the true parameter values,
while the linear log-likelihood attains its maximum around the pseudotrue
parameter values. The differences between the peaks are most pronounced
for the steady-state parameters r^(A) and π^(A). While in the linearized
model steady states and long-run averages coincide, they differ if the
model is solved by a second-order approximation. For some parameters
the quadratic log-likelihood function is more concave than the linear
one, which implies that the non-linear approach is able to extract more
information on the structural parameters from the data. For instance,
it appears that a monetary policy parameter such as ψ₁ can be more
precisely estimated with the quadratic approximation.

FIGURE 14 Linear vs. quadratic approximations: likelihood profiles. Data Set 𝒟₁(Q). Likelihood profile for each parameter: ℳ₁(L)/Kalman filter (dashed) and ℳ₁(Q)/particle filter (solid). 40,000 particles are used. Vertical lines signify true (dotted) and pseudotrue (dashed-dotted) values.

FIGURE 15 Linear vs. quadratic approximations: likelihood contours. Data Set 𝒟₁(Q). Contours of the likelihood (solid) for ℳ₁(L) and ℳ₁(Q). 40,000 particles are used. Large dot indicates the true and pseudotrue parameter values. κ is constant along the dashed lines.
As noted before, the quadratic approximation can identify some
structural parameters that are unidentifiable under the log-linearized
model. g does not enter the linear version of the model and hence
the log-likelihood profile along this dimension is flat. In the quadratic
approximation, however, the log-likelihood is slightly sloped in the 1/g = c/y
dimension. Moreover, the linear likelihood is flat for the values of ν and
φ that imply a constant κ. This is illustrated in Figure 15, which depicts
the linear and quadratic likelihood contours in ν–φ space. The contours
of the linear likelihood trace out iso-κ lines. The right panel of Figure 15
indicates that the iso-κ lines intersect with the contours of the quadratic
likelihood function, which suggests that ν and φ are potentially separately
identifiable. However, our sample of 80 observations is too short to obtain
a sharp estimate of the two parameters.
6.3. The Posterior
We now compare the posterior distributions obtained from the linear
and non-linear analysis. In both cases we use the same prior distribution
reported in Table 3. Unlike in the analysis of the likelihood function, we
now substitute the adjustment cost parameter φ with the Phillips curve
parameter κ in the quadratic approximation of the DSGE model. A beta
prior with mean 0.1 and standard deviation 0.05 is placed on ν, which
roughly covers the micro evidence of a 10–15% markup. Note that 1 − 1/g is
the steady-state government spending–output ratio, and hence a beta prior
with mean 0.85 and standard deviation 0.1 is used for 1/g.
We use the RWM algorithm to generate draws from the posterior
distributions associated with the linear and quadratic approximations of
the DSGE model. We first compute the posterior mode for the linear
specification with the additional parameters, ν and 1/g, fixed at their true
values. After that, we evaluate the Hessian to be used in the RWM algorithm
at the (linear) mode without fixing ν and 1/g, so that the inverse Hessian
reflects the prior variance of the additional parameters. This Hessian is
used for both the linear and the non-linear analysis. We use scaling factors of
0.5 (linear) and 0.25 (quadratic) to target an acceptance rate of about 30%.
The Markov chains are initialized in the neighborhood of the pseudotrue
and true parameter values, respectively.
Figures 16 and 17 depict the draws from the prior, the linear posterior, and the
quadratic posterior distribution. 100,000 draws from each distribution are
generated and every 500th draw is plotted. The draws tend to be more
concentrated as we move from prior to posterior. While, for most of our
parameters, linear and quadratic posteriors are quite similar, there are a
few exceptions. The quadratic posterior mean of ψ₁ is larger than the
linear posterior mean and closer to the true value. The quadratic posterior
means for the steady-state parameters r^(A) and π^(A) are also greater than the
corresponding linear posterior means, which is consistent with the positive
difference between "true" and "pseudotrue" values of these parameters.
Now consider the parameter ν that determines the demand elasticity
for the differentiated products. Conditional on κ, this parameter is not
identifiable under the linear approximation and hence its posterior is
not updated. The parameter does, however, affect the second-order terms
that arise in the quadratic approximation of the DSGE model. Indeed,
the quadratic posterior of ν is markedly different from the prior as it
concentrates around 0.05. Moreover, there is a clear negative correlation
between κ and ν according to the quadratic posterior.
Table 6 reports the log marginal data densities for our linear (ℳ₁(L))
and quadratic (ℳ₁(Q)) approximations of the DSGE model based on
Data Set 𝒟₁(Q). The log marginal data densities are −416.06 for ℳ₁(L)
and −408.09 for ℳ₁(Q), which implies a Bayes factor of about e^8 in favor
of the quadratic approximation. This result suggests that 𝒟₁(Q) exhibits
nonlinearities that can be detected by the likelihood-based estimation
methods. Our simulation results with respect to linear versus non-linear
estimation are in line with the findings reported in Fernández-Villaverde
and Rubio-Ramírez (2005) for a version of the neoclassical stochastic
growth model. In their analysis the non-linear solution combined with
the particle filter delivers a substantially better fit of the model to the
data, as measured by the marginal data density, both for simulated as well
as actual data. Moreover, the authors point out that the differences in
terms of point estimates, although relatively small in magnitude, may have
important effects on the moments of the model.
FIGURE 16 Posterior draws: linear vs. quadratic approximation I. Data Set 𝒟₁(Q). 100,000 draws from the prior and posterior distributions. Every 500th draw is plotted. Intersections of solid and dotted lines signify true and pseudotrue parameter values. 40,000 particles are used.

FIGURE 17 Posterior draws: linear vs. quadratic approximation II. See Figure 16.
TABLE 6 Log marginal data densities based on 𝒟₁(Q)

    Specification                                 ln p(Y | ℳ₁)   Bayes factor versus ℳ₁(Q)
    Benchmark model, linear solution ℳ₁(L)        −416.06        exp[7.97]
    Benchmark model, quadratic solution ℳ₁(Q)     −408.09        1.00

Notes: See Table 4.
7. CONCLUSIONS AND OUTLOOK
There exists by now a large and growing body of empirical work on
the Bayesian estimation and evaluation of DSGE models. This paper has
surveyed the tools used in this literature and illustrated their performance
in the context of artificial data. While most of the existing empirical work
is based on linearized models, techniques to estimate DSGE models solved
with non-linear methods have become implementable thanks to advances
in computing technology. Nevertheless, many challenges remain. Model
size and dimensionality of the parameter space pose a challenge for MCMC
methods. Lack of identification of structural parameters is often difficult to
detect since the mapping from the structural form of the DSGE model into
a state–space representation is highly non-linear and creates a challenge
for scientific reporting as audiences are typically interested in disentangling
information provided by the data from information embodied in the
prior. Model misspecification is and will remain a concern in empirical
work with DSGE models despite continuous efforts by macroeconomists
to develop more adequate models. Hence it is important to develop
methods that incorporate potential model misspecification in the measures
of uncertainty constructed for forecasts and policy recommendations.
ACKNOWLEDGMENTS
Part of the research was conducted while Schorfheide was visiting
the Research Department of the Federal Reserve Bank of Philadelphia,
for whose hospitality he is grateful. The views expressed here are those
of the authors and do not necessarily reflect the views of the Federal
Reserve Bank of Philadelphia or the Federal Reserve System. We thank
Herman van Dijk and Gary Koop for inviting us to write this survey
paper for Econometric Reviews. We also thank our discussants as well as
Marco Del Negro, Jesús Fernández-Villaverde, Thomas Lubik, Juan Rubio-
Ramírez, Georg Strasser, seminar participants at Columbia University, and
the participants of the 2006 EABCN Training School in Eltville for helpful
comments and suggestions. Schorfheide gratefully acknowledges financial
support from the Alfred P. Sloan Foundation.
REFERENCES
Adolfson, M., Laséen, S., Lindé, J., Villani, M. (2004). Bayesian estimation of an open economy
DSGE model with incomplete pass-through. Journal of International Economics, forthcoming.
Altug, S. (1989). Time-to-build and aggregate fluctuations: some new evidence. International Economic
Review 30(4):889–920.
Anderson, G. (2000). A reliable and computationally efficient algorithm for imposing the saddle
point property in dynamic models. Manuscript, Federal Reserve Board of Governors.
Arulampalam, M. S., Maskell, S., Gordon, N., Clapp, T. (2002). A tutorial on particle filters
for online non-linear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing
50(2):173–188.
Aruoba, S. B., Fernández-Villaverde, J., Rubio-Ramírez, J. F. (2004). Comparing solution methods for
dynamic equilibrium economies. Journal of Economic Dynamics and Control 30(12):2477–2508.
Basu, S. (1995). Intermediate goods and business cycles: implications for productivity and welfare.
American Economic Review 85(3):512–531.
Beyer, A., Farmer, R. (2004). On the indeterminacy of New Keynesian economics. ECB Working
Paper 323.
Bierens, H. J. (2007). Econometric analysis of linearized singular dynamic stochastic general
equilibrium models. Journal of Econometrics 136(2):595–627.
Bils, M., Klenow, P. (2004). Some evidence on the importance of sticky prices. Journal of Political
Economy 112(5):947–985.
Binder, M., Pesaran, H. (1997). Multivariate linear rational expectations models: characterization
of the nature of the solutions and their fully recursive computation. Econometric Theory
13(6):877–888.
Blanchard, O. J., Kahn, C. M. (1980). The solution of linear difference models under rational
expectations. Econometrica 48(5):1305–1312.
Box, G. E. P. (1980). Sampling and Bayes inference in scientific modelling and robustness. Journal
of the Royal Statistical Society Series A 143(4):383–430.
Canova, F. (1994). Statistical inference in calibrated models. Journal of Applied Econometrics
9:S123–S144.
Canova, F. (2004). Monetary policy and the evolution of the US economy: 1948-2002. Manuscript,
IGIER and Universitat Pompeu Fabra.
Canova, F. (2007). Methods for Applied Macroeconomic Research. Princeton, NJ: Princeton University Press.
Canova, F., Sala, L. (2005). Back to square one: identification issues in DSGE models. Manuscript,
IGIER and Bocconi University.
Chang, Y., Gomes, J., Schorfheide, F. (2002). Learning-by-doing as a propagation mechanism.
American Economic Review 92(5):1498–1520.
Chang, Y., Schorfheide, F. (2003). Labor-supply shifts and economic fluctuations. Journal of Monetary
Economics 50(8):1751–1768.
Chib, S., Greenberg, E. (1995). Understanding the Metropolis–Hastings algorithm. American
Statistician 49(4):327–335.
Chib, S., Jeliazkov, I. (2001). Marginal likelihood from the Metropolis–Hastings output. Journal of the
American Statistical Association 96(453):270–281.
Christiano, L. J. (2002). Solving dynamic equilibrium models by a method of undetermined
coefficients. Computational Economics 20(1–2):21–55.
Christiano, L. J., Eichenbaum, M. (1992). Current real-business cycle theories and aggregate labor-
market fluctuations. American Economic Review 82(3):430–450.
Christiano, L. J., Eichenbaum, M., Evans, C. L. (2005). Nominal rigidities and the dynamic effects
of a shock to monetary policy. Journal of Political Economy 113(1):1–45.
Collard, F., Juillard, M. (2001). Accuracy of stochastic perturbation methods: the case of asset pricing
models. Journal of Economic Dynamics and Control 25(6–7):979–999.
Crowder, M. (1988). Asymptotic expansions of posterior expectations, distributions, and densities
for stochastic processes. Annals of the Institute for Statistical Mathematics 40:297–309.
de Walque, G., Wouters, R. (2004). An open economy DSGE model linking the Euro area and the
U.S. economy. Manuscript, National Bank of Belgium.
DeJong, D. N., Ingram, B. F. (2001). The cyclical behavior of skill acquisition. Review of Economic
Dynamics 4(3):536–561.
DeJong, D. N., Ingram, B. F., Whiteman, C. H. (1996). A Bayesian approach to calibration. Journal
of Business and Economic Statistics 14(1):1–9.
DeJong, D. N., Ingram, B. F., Whiteman, C. H. (2000). A Bayesian approach to dynamic
macroeconomics. Journal of Econometrics 98(2):203–223.
Del Negro, M. (2003). Fear of floating: a structural estimation of monetary policy in Mexico.
Manuscript, Federal Reserve Bank of Atlanta.
Del Negro, M., Schorfheide, F. (2004). Priors from general equilibrium models for VARs.
International Economic Review 45(2):643–673.
Del Negro, M., Schorfheide, F., Smets, F., Wouters, R. (2006). On the fit and forecasting
performance of New Keynesian models. Journal of Business and Economic Statistics, forthcoming.
Den Haan, W. J., Marcet, A. (1994). Accuracy in simulation. Review of Economic Studies 61(1):3–17.
Diebold, F. X., Ohanian, L. E., Berkowitz, J. (1998). Dynamic equilibrium economies: a framework
for comparing models and data. Review of Economic Studies 65(3):433–452.
Dridi, R., Guay, A., Renault, E. (2007). Indirect inference and calibration of dynamic stochastic
general equilibrium models. Journal of Econometrics 136(2):397–430.
DYNARE (2005). Software available from http://cepremap.cnrs.fr/dynare/.
Fernández-Villaverde, J., Rubio-Ramírez, J. F. (2004). Comparing dynamic equilibrium models to
data: a Bayesian approach. Journal of Econometrics 123(1):153–187.
Fernández-Villaverde, J., Rubio-Ramírez, J. F. (2005). Estimating dynamic equilibrium economies:
linear versus non-linear likelihood. Journal of Applied Econometrics 20(7):891–910.
Galí, J., Rabanal, P. (2005). Technology shocks and aggregate fluctuations: how well does the RBC
model fit postwar U.S. data. In: Gertler, M., Rogoff, K., eds. NBER Macroeconomics Annual
2004. Cambridge, MA: MIT Press, pp. 225–228.
GAUSS (2004). Software available from http://www.aptech.com/.
Gelfand, A. E., Dey, D. K. (1994). Bayesian model choice: asymptotics and exact calculations. Journal
of the Royal Statistical Society Series B 56(3):501–514.
Gelman, A., Carlin, J., Stern, H., Rubin, D. (1995). Bayesian Data Analysis. New York: Chapman &
Hall.
Geweke, J. (1989). Bayesian inference in econometric models using Monte Carlo integration.
Econometrica 57(6):1317–1339.
Geweke, J. (1999a). Computational experiments and reality. Manuscript, University of Iowa.
Geweke, J. (1999b). Using simulation methods for Bayesian econometric models: inference,
development and communication. Econometric Reviews 18(1):1–126.
Geweke, J. (2005). Contemporary Bayesian Econometrics and Statistics. Hoboken: John Wiley & Sons.
Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society Series B 14(1):107–114.
Gordon, N. J., Salmond, D. J., Smith, A. F. M. (1993). Novel approach to non-linear and non-
Gaussian Bayesian state estimation. IEE Proceedings-F 140(2):107–113.
Hammersley, J., Handscomb, D. (1964). Monte Carlo Methods. New York: John Wiley.
Hansen, L. P., Heckman, J. J. (1996). The empirical foundations of calibration. Journal of Economic
Perspectives 10(1):87–104.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications.
Biometrika 57(1):97–109.
Ingram, B. F., Whiteman, C. M. (1994). Supplanting the Minnesota prior: forecasting
macroeconomic time series using real business cycle model priors. Journal of Monetary
Economics 34(3):497–510.
Ireland, P. N. (2004). A method for taking models to the data. Journal of Economic Dynamics and
Control 28(6):1205–1226.
Jin, H.-H., Judd, K. L. (2002). Perturbation methods for general dynamic stochastic models.
Manuscript, Hoover Institution.
Judd, K. L. (1998). Numerical Methods in Economics. Cambridge, MA: MIT Press.
Justiniano, A., Preston, B. (2004). Small open economy DSGE models – specification, estimation, and
model fit. Manuscript, International Monetary Fund and Columbia University.
Kass, R. E., Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association 90(430):
773–795.
Kim, J. (2000). Constructing and estimating a realistic optimizing model of monetary policy. Journal
of Monetary Economics 45(2):329–359.
Kim, J., Kim, S. H. (2003). Spurious welfare reversal in international business cycle models. Journal
of International Economics 60(2):471–500.
Kim, J., Kim, S. H., Schaumburg, E., Sims, C. A. (2005). Calculating and using second-order accurate
solutions of discrete time dynamic equilibrium models. Manuscript, Princeton University.
Kim, J.-Y. (1998). Large sample properties of posterior densities, Bayesian information criterion, and
the likelihood principle in non-stationary time series models. Econometrica 66(2):359–380.
Kim, K., Pagan, A. (1995). Econometric analysis of calibrated macroeconomic models. In: Pesaran,
H., Wickens, M., eds. Handbook of Applied Econometrics: Macroeconomics. Oxford: Basil Blackwell,
pp. 356–390.
Kim, S., Shephard, N., Chib, S. (1998). Stochastic volatility: likelihood inference and comparison
with ARCH models. Review of Economic Studies 65(3):361–393.
King, R. G., Watson, M. W. (1998). The solution of singular linear difference systems under rational
expectations. International Economic Review 39(4):1015–1026.
Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian non-linear state space
models. Journal of Computational and Graphical Statistics 5(1):1–25.
Klein, P. (2000). Using the generalized Schur form to solve a multivariate linear rational
expectations model. Journal of Economic Dynamics and Control 24(10):1405–1423.
Kydland, F. E., Prescott, E. C. (1982). Time to build and aggregate fluctuations. Econometrica
50(6):1345–1370.
Kydland, F. E., Prescott, E. C. (1996). The computational experiment: an econometric tool. Journal
of Economic Perspectives 10(1):69–85.
Laforte, J.-P. (2004). Comparing monetary policy rules in an estimated equilibrium model of the
US economy. Manuscript, Federal Reserve Board of Governors.
Lancaster, T. (2004). An Introduction to Modern Bayesian Econometrics. Oxford: Blackwell Publishing.
Landon-Lane, J. (1998). Bayesian Comparison of Dynamic Macroeconomic Models. Ph.D.
dissertation, University of Minnesota.
Leeper, E. M., Sims, C. A. (1994). Toward a modern macroeconomic model usable for policy
analysis. In: Fischer, S., Rotemberg, J. J., eds. NBER Macroeconomics Annual 1994. Cambridge,
MA: MIT Press, pp. 81–118.
Levin, A. T., Onatski, A., Williams, J. C., Williams, N. (2006). Monetary policy under uncertainty in
micro-founded macroeconometric models. In: Gertler, M., Rogoff, K., eds. NBER Macroeconomics
Annual 2005. Cambridge, MA: MIT Press, pp. 229–287.
Lubik, T. A., Schorfheide, F. (2003). Do central banks respond to exchange rate fluctuations – a
structural investigation. Manuscript, University of Pennsylvania.
Lubik, T. A., Schorfheide, F. (2004). Testing for indeterminacy: an application to US monetary
policy. American Economic Review 94(1):190–217.
Lubik, T. A., Schorfheide, F. (2006). A Bayesian look at new open economy macroeconomics. In:
Gertler, M., Rogoff, K., eds. NBER Macroeconomics Annual 2005. Cambridge, MA: MIT Press,
pp. 313–366.
MATLAB (2005). Version 7. Software available from http://www.mathworks.com/.
McGrattan, E. R. (1994). The macroeconomic effects of distortionary taxation. Journal of Monetary
Economics 33(3):573–601.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., Teller, E. (1953). Equation of
state calculations by fast computing machines. Journal of Chemical Physics 21(6):1087–1092.
Onatski, A., Williams, N. (2004). Empirical and policy performance of a forward-looking monetary
model. Manuscript, Princeton University.
Otrok, C. (2001). On measuring the welfare cost of business cycles. Journal of Monetary Economics
47(1):61–92.
Phillips, P. C. B. (1996). Econometric model determination. Econometrica 64(4):763–812.
Pitt, M. K., Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters. Journal of the
American Statistical Association 94(446):590–599.
Poirier, D. (1998). Revising beliefs in nonidentified models. Econometric Theory 14(4):483–509.
Rabanal, P., Rubio-Ramírez, J. F. (2005a). Comparing New Keynesian models in the Euro area: A
Bayesian approach. Federal Reserve Bank of Atlanta Working Paper 2005–8.
Rabanal, P., Rubio-Ramírez, J. F. (2005b). Comparing New Keynesian models of the business cycle:
A Bayesian approach. Journal of Monetary Economics 52(6):1152–1166.
Rabanal, P., Tuesta, V. (2006). Euro-Dollar real exchange rate dynamics in an estimated two-country
model: What is important and what is not. CEPR Discussion Paper No. 5957.
Robert, C., Casella, G. (1999). Monte Carlo Statistical Methods. New York: Springer Verlag.
Rotemberg, J. J., Woodford, M. (1997). An optimization-based econometric framework for the
evaluation of monetary policy. In: Bernanke, B. S., Rotemberg, J. J., eds. NBER Macroeconomics
Annual 1997. Cambridge, MA: MIT Press, pp. 297–346.
Sargent, T. J. (1989). Two models of measurements and the investment accelerator. Journal of Political
Economy 97(2):251–287.
Schmitt-Grohé, S., Uribe, M. (2006). Optimal simple and implementable monetary and fiscal rules.
Journal of Monetary Economics, forthcoming.
Schorfheide, F. (2000). Loss function-based evaluation of DSGE models. Journal of Applied Econometrics
15(6):645–670.
Schorfheide, F. (2005). Learning and monetary policy shifts. Review of Economic Dynamics
8(2):392–419.
Sims, C. A. (1996). Macroeconomics and methodology. Journal of Economic Perspectives 10(1):105–120.
Sims, C. A. (2002). Solving linear rational expectations models. Computational Economics 20(1–2):1–20.
Sims, C. A. (2003). Probability models for monetary policy decisions. Manuscript, Princeton
University.
Smets, F., Wouters, R. (2003). An estimated dynamic stochastic general equilibrium model of the
Euro area. Journal of the European Economic Association 1(5):1123–1175.
Smets, F., Wouters, R. (2005). Comparing shocks and frictions in US and Euro area business cycles:
a Bayesian DSGE approach. Journal of Applied Econometrics 20(2):161–183.
Smith, A. A. (1993). Estimating non-linear time-series models using simulated vector autoregressions.
Journal of Applied Econometrics 8:S63–S84.
Swanson, E., Anderson, G., Levin, A. (2005). Higher-order perturbation solutions to dynamic,
discrete-time rational expectations models. Manuscript, Federal Reserve Bank of San
Francisco.
Taylor, J. B., Uhlig, H. (1990). Solving non-linear stochastic growth models: a comparison of
alternative solution methods. Journal of Business and Economic Statistics 8(1):1–17.
Uhlig, H. (1999). A toolkit for analyzing non-linear dynamic stochastic models easily. In: Marimón,
R., Scott, A., eds. Computational Methods for the Study of Dynamic Economies. Oxford, UK: Oxford
University Press, pp. 30–61.
Walker, A. M. (1969). On the asymptotic behaviour of posterior distributions. Journal of the Royal
Statistical Society Series B 31(1):80–88.
Watson, M. W. (1993). Measures of fit for calibrated models. Journal of Political Economy
101(6):1011–1041.
White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica 50(1):1–25.
Woodford, M. (2003). Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ:
Princeton University Press.
Zellner, A. (1971). Introduction to Bayesian Inference in Econometrics. New York: John Wiley & Sons.

1). and plot every 5. Visual inspection of .000th draw of ( . The initial values for ( . in Robert and Casella (1999). and ( 6. 2 0). we will consider informal graphical methods in this paper. Data 1 (L). As before. ( 4. and convergence to iid sampling. literature can be found. An and F. all parameters are initialized at their posterior mode values. The panels depict 200 draws from prior and posterior distributions. generate 1 million draws for each chain. convergence of empirical averages to posterior moments. Schorfheide FIGURE 1 Prior and posterior – Model 1 (L). Except for and 2 . 2) of the figure depict posterior contours at the posterior mode. We run four independent Markov chains. for instance. 2 ) are (3 0. 2 ) in Figure 2. we set c = 0 3. Intersections of solid lines signify posterior mode values. 2 0). While many convergence diagnostics are based on formal statistical tests. respectively. 1).134 S. we will compare draws and recursively computed means from multiple chains. 1) and (1. These authors distinguish between convergence of the Markov chain to its stationary distribution. More specifically. Panels (1. (3 0.

1) and (1. For most parameters the two 7 It is not guaranteed that the Newey–West standard errors are formally valid.2): 200 draws from four Markov chains generated by the Metropolis Algorithm. we construct standard error estimates. Figure 3 plots recursive means for the four chains. As for the draws from the RWM algorithm. We generate another set of 1 million draws using the IS algorithm with = 3 and c = 1 5. truncated at 140 lags. Panels (1. Panels (2.1) to (3.7 In Figure 4 we plot (recursive) confidence intervals for the RWM and the IS approximation of the posterior means. . Data 1 (L). converge to the same (stationary) distribution and concentrate in the high-posterior density region. Since neither the IS nor the RWM approximation of the posterior means are exact. the means converge in the long run. Despite different initializations. Intersections of solid lines signify posterior mode values. For the importance sampler we follow Geweke (1999b). and for the Metropolis chain we use Newey–West standard error estimates.2): contours of posterior density at “low” and “high” mode as function of and 2 . we compute recursive means. the plots suggests that all four chains. despite the use of different starting values.Bayesian Analysis of DSGE Models 135 FIGURE 2 Draws from multiple chains – Model 1 (L).

r (A) . However. the difference between the posterior mode and the estimated posterior means. A Potential Pitfall: Simulation Results for 2 (L)/ 2 (L) We proceed by estimating the output growth rule version 2 (L) of the DSGE model based on 80 observations generated from this model (Data Set 2 (L)). Schorfheide FIGURE 3 Recursive means from multiple chains – Model 1 (L). The value of . confidence intervals overlap. .3. which we label the “high” mode. g . ˜ (h) . Exceptions are. Data 1 (L). is located at = 2 06 and 2 = 97. say. and g . An and F. for instance. Each line corresponds to recursive means (as a function of the number of draws) calculated from one of the four Markov chains generated by the Metropolis Algorithm. Unlike the posterior for the output gap version. the 2 (L) posterior has (at least) two modes. the magnitude of the discrepancy is generally small relative to. 4.136 S. One of the modes.

Panel (1. Data 1 (L).1). respectively. and (3. fixing the other parameters at their respective ˜ (h) values. 3] and 2 ∈ [0. The second (“low”) mode. whereas the solid lines in the remaining panels indicate the location of the high mode. The first two panels of Figure 5 depict the contours of the non-normalized posterior density as a function of and 2 at the low and the high mode.Bayesian Analysis of DSGE Models 137 FIGURE 4 RWM algorithm vs. is located at = 1 44 and 2 = 81 and attains the value −183 23. The intersection of the solid lines in panels (1. .2) is obtained by exploring the shape of the posterior as a function of and 2 . ln [ ( | Y )p( )] is equal to −175 48. (2.1). Panel (1. 2 5]. Importance Sampling – Model 1 (L). Similarly.1) is constructed by setting = ˜ (l ) and then graphing posterior contours for ∈ [0. ˜ (l ) . keeping all other parameters fixed at their respective ˜ (l ) values. Panels depict posterior modes (solid). recursively computed 95% bands for posterior means based on the metropolis algorithm (dotted) and the importance sampler (dashed).1) signify the low mode.

An and F.138 S.1) to (3.1) and (1. Given the configuration of the RWM algorithm the likelihood The covariance matrices of the proposal distributions are obtained from the Hessians associated with the respective modes.2 we generate 1 million draws each for model 2 from four Markov chains that were initialized at the following values for ( .2): 200 draws from four Markov chains generated by the metropolis algorithm. As in Section 4. 2 ): (3 0.000th draw of and 2 from the four Markov chains in Panels (2. Panels (2.8 We plot every 5. 2 0). Unlike in Panels (1. (3 0. and ( 6. the remaining parameters are not fixed at their ˜ (l ) and ˜ (h) values. ( 4. It turns out that Chains 1 and 3 explore the posterior distribution locally in the neighborhood of the low mode.2): contours of posterior density at “low” and “high” mode as function of and 2 . Schorfheide FIGURE 5 Draws from multiple chains – Model 2 (L).1) and (1. 1). The remaining parameters were initialized at the low (high) mode for Chains 1 and 3 (2 and 4). whereas Chains 2 and 4 move through the posterior surface near the high mode. 1).2). The two modes are separated by a deep valley which is caused by complex eigenvalues of the state transition matrix.2). Data 2 (L). 8 . 2 0).1) to (3. Intersections of solid lines signify “low” (left panels) and “high” (right panels) posterior mode values. Panels (1.

Two remarks are in order. 1. Data ( L). corresponding to the two posterior modes. We explored various modifications of the RWM algorithm by changing the scaling factor c and by setting the off-diagonal elements of to zero. .Bayesian Analysis of DSGE Models 139 of crossing the valley that separates the two modes is so small that it did not occur. Each line corresponds to recursive means (as a function of the number of draws) calculated from one of the four Markov chains generated by the metropolis algorithm. extremum estimators that are computed with numerical optimization methods suffer from the same problem as FIGURE 6 Recursive means from multiple chains – Model 2 (L). Nevertheless. They exhibit two limit points. The recursive means associated with the four chains are plotted in Figure 6. While each chain appears to be stable. it only explores the posterior in the neighborhood of one of the modes.000 draws were insufficient for the chains initialized at the low mode to cross over to the neighborhood of the high mode. First.000.

Schorfheide the Bayes estimators in this example. Panels (1. A 90% probability interval ranges from 0 to 20%. One might find a local rather than the global extremum of the objective function. All subsequent computations are based on the output gap rule specification of the DSGE model. in many applications as well as in this example there exists a global mode that dominates the local modes. Answers can be obtained from the moving average representation associated with the state space model composed of (33) and (38).000th draw generated with the RWM algorithm. the output gap policy rule specification 1 implies that the government spending shock does not affect inflation and nominal interest rates. in Bayesian computation it is helpful to start MCMC methods from different regions of the parameter space. G . Similarly. a 90% probability interval ranges from 0 to 95%. which can be depicted as a triangle in 2 . Hence all prior and posterior draws concentrate on the R − Z edge of the simplex. Fluctuations of output growth. g . and monetary policy shocks. 4. Hence variance decompositions of the endogenous variables lie in a three-dimensional simplex. it is good practice to start the optimization from different points in the parameter space to increase the likelihood that the global optimum is found. The data provide additional information about the variance decomposition. The posterior draws are obtained by converting every 5. and R of the triangles correspond to decompositions in which the shocks z . The posterior distribution . and nominal interest rate in the DSGE model are due to three shocks: technology growth shocks. or vary the distribution q( ) in the importance sampler. and what happens to output and inflation in response to a monetary policy shock. For instance. While the prior mean of the fraction of inflation variability explained by the monetary policy shock is about 40%. and R explain 100% of the variation. Second. The corners Z . In Figure 7 we plot 200 draws from the prior distribution (left panels) and 200 draws from the posterior (right panels) of the variance decompositions of output growth and inflation. Hence.140 S. government spending shocks.1) and (2. We will focus on variance decompositions in the remainder of this subsection. An and F. Hence an exploration of the posterior in the neighborhood of the global mode might still provide a good approximation to the overall posterior distribution. inflation.4.1) indicate that the prior distribution of the variance decomposition is informative in the sense that it is not uniform over the simplex. Parameter Transformations for 1 (L)/ 1 (L) Macroeconomists are often interested in posterior distributions of parameter transformations h( ) to address questions such as what fraction of the variation in output growth is caused by monetary policy shocks. A priori about 10% of the variation in output growth is due to monetary policy shocks. respectively.

Leeper and Sims (1994) and Kim (2000) estimated DSGE models that are usable for monetary policy analysis.t . respectively.R) correspond to 100% of the variation being due to the shocks z. The probability interval for the contribution of monetary policy shocks to output growth fluctuations shrinks to the range from 1 to 4. Data 1 (L).5. g . is much more concentrated than the prior. (1996).Bayesian Analysis of DSGE Models 141 FIGURE 7 Prior and posterior variance decompositions – Model 1 (L).5%. DeJong et al.G. and McGrattan (1994) studies the macroeconomic effects of taxation in an estimated business cycle model. The three corners (Z. and R . Empirical Applications There is a rapidly growing empirical literature on the Bayesian estimation of DSGE models that applies the techniques described in this paper. The panels depict 200 draws from prior and posterior distributions arranged on a 3-dimensional simplex. Preceding the Bayesian literature are papers that use maximum likelihood techniques to estimate DSGE models.t . Altug (1989) estimates Kydland and Prescott’s (1982) time-to-build model. The posterior probability interval for the fraction of inflation variability explained by the monetary policy shock ranges from 15 to 37%.t . 4. Canova (1994). The following incomplete list of references aims to give an overview of this work. and Geweke (1999a) proposed Bayesian approaches to calibration that do not exploit the likelihood function of the DSGE model and provide empirical applications that assess business cycle and asset pricing implications of simple stochastic growth .

Variants of the small-scale New Keynesian DSGE model presented in Section 2 have been estimated by Rabanal and Rubio-Ramírez (2005a. Schorfheide models. Large-scale models that include capital accumulation and additional real and nominal frictions along the lines of Christiano et al. Canada. Galí and Rabanal (2005) use an estimated DSGE model to study the effect of technology shocks on hours worked.S. Rabanal and Tuesta (2006). Canova (2004) estimates a small-scale New Keynesian model recursively to assess the stability of the structural parameters over time. Chang and Schorfheide (2003) study the importance of labor supply shocks and estimate a home-production model. 2005) both for the U. Lubik and Schorfheide (2004) estimate the benchmark New Keynesian DSGE model without restricting the parameters to the determinacy region of the parameter space. and de Walque and Wouters (2004) have estimated multicountry DSGE models. An and F. Many central banks are in the process of developing DSGE models along the lines of Smets and Wouters (2003) that can be estimated with Bayesian techniques and used for policy analysis and forecasting. DeJong et al. Adolfson et al.b) for the U. (2002) estimate a stochastic growth model augmented with a learning-by-doing mechanism to amplify the propagation of shocks. Lubik and Schorfheide (2003) estimate the small open economy extension of the model presented in Section 2 to examine whether the central banks of Australia. (2004) analyze an open economy model that includes capital accumulation as well as numerous real and nominal frictions. and Otrok (2001). England. Schorfheide (2005) allows for regime-switching of the target inflation level in the monetary policy rule. Lubik and Schorfheide (2006). and Schorfheide (2000) considers cash-in-advance monetary DSGE models. The literature on likelihood-based Bayesian estimation of DSGE models began with work by Landon-Lane (1998). DeJong and Ingram (2001) study the cyclical behavior of skill accumulation. . (2005) have been analyzed by Smets and Wouters (2003. and the Euro Area. DeJong et al. Bayesian estimation techniques have also been used in the open economy literature. (2000) estimate a stochastic growth model and examine its forecasting performance. (2000). and New Zealand respond to exchange rates. Otrok (2001) fits a real business cycle with habit formation and time-to-build to the data to assess the welfare costs of business cycles. whereas Chang et al. A similar model is fitted by Del Negro (2003) to Mexican data. Models similar to Smets and Wouters (2003) have been estimated by Laforte (2004).S.142 S. (2006) to study monetary policy. Onatski and Williams (2004). Fernández-Villaverde and RubioRamírez (2004) use Bayesian estimation techniques to fit a cattle–cycle model to the data. Justiniano and Preston (2004) extend the empirical analysis to situations of imperfect exchange rate passthrough. and the Euro Area. and Levin et al. Schorfheide (2000).

We can derive the predictive distribution of Y rep given the current state of knowledge: p(Y rep | Y ) = p(Y rep | )p( | Y )d (44) Let h(Y ) be a test quantity that reflects an aspect of the data that we want to examine. and Section 5. We tacitly assumed that model and prior provide an adequate probabilistic representation of the uncertainty with respect to data and parameters.2 reviews model posterior odds comparisons. Section 5. 5. by Box (1980). on the other hand.1. posterior predictive checks have become a valuable tool in applied Bayesian analysis though they have not been used much in the context of DSGE models. and has a unimodal density. A probability model is considered as discredited by the data if one observes an event that is deemed very unlikely by the model. non-negative. in books by Gelman et al. An introduction to Bayesian model checking can be found. and Geweke (2005). The first approach can be implemented by a posterior predictive model check and has the flavor of a classical hypothesis test. Section 5. for instance. Posterior Predictive Checks Predictive checks as a tool to assess the absolute fit of a probability model have been advocated. are typically conducted by enlarging the model space and applying Bayesian inference and decision theory to the extended model space. A quantitative model check can be based on Bayesian p-values. Methods that determine whether actual data lie in the tail of a model’s data distribution potentially favor alternatives that make unreasonably diffuse predictions.3 describes a more elaborate model evaluation based on comparisons between DSGE models and VARs. Model Evaluation In Section 4 we discussed the estimation of a linearized DSGE model and reported results on the posterior distribution of the model parameters and variance decompositions. Then one could compute probability of the tail .1 discusses Bayesian model checks.Bayesian Analysis of DSGE Models 143 5. Relative model comparisons. Nevertheless. We will distinguish the assessment of absolute fit from techniques that aim to determine the fit of a DSGE model relative to some other model. This section studies various techniques that can be used to evaluate the model fit. Let Y rep be a sample of observations of length T that we could have observed in the past or that we might observe in the future. Lancaster (2004). for instance. (1995). Such model checks are controversial from a Bayesian perspective. Suppose that the test quantity is univariate.

Posterior Odds Comparisons of DSGE Models The Bayesian framework is naturally geared toward the evaluation of relative model fit. whereas 5 (L) was not. We consider two cases: in the case of no mis-specification. depicted in the left panels of Figure 8. Since 1 (L) was generated from model 1 (L). simulate a sample Y rep of 80 observations9 from the DSGE model conditional on (s) . An excellent survey on model comparisons based on posterior probabilities To obtain draws from the unconditional distribution of yt we initialize s0 = 0 and generate 180 observations. Schorfheide event by h(Y rep ) ≥ h(Y ) p(Y rep | Y )dY rep . Indeed a visual inspection of the plots provides some evidence in which dimensions 1 (L) is at odds with data generated from a model that includes lags of output in the household’s Euler equation. We use the following data transformations: the correlation between inflation and lagged interest rates. 5. and calculate h(Y rep ). and output growth and interest rates. the intersections of the solid lines indicate the actual values h(Y ). Rather than constructing numerical approximations of the tail probabilities for univariate functions h(·). output growth and lagged output growth. we would expect the actual values of h(Y ) to lie further in the tails of the posterior predictive distribution in the right panels than in the left panels.144 S.000th parameter draw of (s) generated with the RWM algorithm. whereas under mis-specification. Moreover. To obtain draws from the posterior predictive distribution of h(Y rep ) we take every 5. Each panel of the figure depicts 200 draws from the posterior predictive distribution in bivariate scatter plots. the posterior is obtained from data set 5 (L).2. 9 . The observed autocorrelation of output growth in 5 (L) is much larger and the actual correlation between output growth and interest rates is much lower than the draws generated from the posterior predictive distribution. lagged inflation and current interest rates. the posterior is constructed based on data set 1 (L). A small tail probability can be viewed as evidence against model adequacy. An and F. we use a graphical approach to illustrate the model checking procedure in the context of model 1 (L). discarding the first 100. (45) where x ≥ a is the indicator function that is 1 if x ≥ and zero otherwise. Researchers can place probabilities on competing models and assess alternative specifications based on their posterior odds. illustrated in the two right panels of Figure 8.

0 j) . which is defined in (39). 4 (46) The key object in the calculation of posterior probabilities is the marginal data density p(Y | i ). according to which the central bank does not respond to output at all and 2 = 0. Posterior model probabilities can be used to weigh predictions from different models. Intersections of solid lines signify the observed sample moments. However. even in situations in which all the models under consideration are misspecified. Moreover. All subsequent analysis is then conditioned on the chosen model. researchers take a shortcut and use posterior probabilities to justify the selection of a particular model specification. suppose in addition to the specification 1 (L) we consider a version of the New Keynesian model. denoted by 3 (L) in which prices are nearly flexible.3. In many instances.0 p(Y j =1. = 5. This 0–1 loss function is an attractive benchmark in situations in which the researcher believes that all the important models are included in the analysis. .T = i. We plot 200 draws from the posterior predictive distribution of various sample moments. the optimal decision is to select the highest posterior probability model.0 on the three competing specifications then posterior model probabilities can be computed by i.4 | i) p(Y | j . 3. there is a model 4 (L). If we are willing to place prior probabilities i.Bayesian Analysis of DSGE Models 145 FIGURE 8 Posterior predictive check for Model 1 (L). i = 1. It is straightforward to verify that under a 0–1 loss function (the loss attached to choosing the wrong model is one). For concreteness. can be found in Kass and Rafterty (1995). that is.

The practical difficulty in implementing posterior odds comparisons is the computation of the marginal data density. ) )p( | Y t −1 . that posterior odds (or their large sample approximations) asymptotically favor the DSGE model that is closest to the ‘true’ data generating process in the Kullback-Leibler sense. 1952) and the model comparison based on posterior odds captures the relative one-step-ahead predictive performance. We will subsequently consider two numerical approaches: Geweke’s (1999b) modified harmonic mean estimator and Chib and Jeliazkov’s (2001) algorithm. An and F. f( )= −1 (2 )−d/2 | V | −1/2 exp −0 5( − ¯ ) V −1 ( − ¯ ) ( − ¯ ) V −1 ( − ¯ ) ≤ F −1 ( ) 2 d × (50) Here ¯ and V are the posterior mean and covariance matrix computed from the output of the posterior simulator. since the log-marginal data density can be rewritten as T ln p(Y | )= t =1 T ln p(yt | Y t −1 . (47) where ln p(Y | ) can be interpreted as a predictive score (Good.. )d . Harmonic mean estimators are based on the identity 1 = p(Y ) f( ) p( | Y )d . . It has been shown under various regularity conditions. = t =1 ln p(yt | Y t −1 .146 S. Moreover. (49) where (s) is drawn from the posterior p( | Y ). e. Geweke (1999b) proposed to use the density of a truncated multivariate normal distribution. Conditional on the choice of f ( ) an obvious estimator is ˆ pG (Y ) = 1 nsim nsim s=1 f ( (s) ) ( (s) | Y )p( −1 (s) ) . d is the dimension of the . Schorfheide the selection of the highest posterior probability model has some desirable properties. Phillips (1996) and Fernández-Villaverde and Rubio-Ramírez (2004). 1994). To make the numerical approximation efficient. f ( ) should be chosen so that the summands are of equal magnitude. ( | Y )p( ) (48) where f ( ) has the property that f ( )d = 1 (see Gelfand and Dey.g.

| Y ) given the posterior mode value ˜ . (53) where r ( . the modified harmonic mean estimator appears to be more reliable if the number of simulations is small. Chib and Jeliazkov (2001) use the following equality as the basis for their estimator of the marginal data density: p(Y ) = ( | Y )p( ) p( | Y ) (51) The equation is obtained by rearranging the Bayes theorem and has to hold for all . ˜ | Y )q( (˜. r ( . the denominator requires a numerical approximation. let q( . Thus. The results are summarized in Figure 9. However. For the output gap rule all four chains lead to estimates of around −196 7 Figure 10 provides a comparison of Geweke’s versus Chib and Jeliazkov’s approximation of the marginal data density for the output gap rule specification. (j ) (s) . While the numerator can be easily computed. (54) sim where (s) s=1 are sampled draws from the posterior distribution with the J RWM algorithm and (j ) j =1 are draws from q( ˜ . . We calculate marginal data densities recursively (as a function of the number of MCMC draws) for the four chains that were initialized at different starting values as discussed in Section 4. F 2 is the cumulative density function of a 2 random d variable with d degrees of freedom. Within the RWM Algorithm denote the probability of moving from to by ( . ˆ pCS (Y ) = ( ˜ | Y )p( ˜ ) . | Y ) = min 1. Then the posterior density at the mode can be approximated by ˆ p( ˜ | Y ) = n 1 nsim nsim s=1 ( (s) . A visual inspection of the plot suggests that both estimators converge to the same limit point. | Y ) was in the description of the algorithm. ˜ | Y ) be the proposal density for the transition from to ˜ . and ∈ (0. We first use Geweke’s harmonic mean estimator to approximate the marginal data density associated with 1 (L)/ 1 (L). If the posterior of is in fact normal then the summands in (49) are approximately constant. |Y) . ˆ p( ˜ | Y ) (52) where we replaced the generic in (51) by the posterior mode ˜ .Bayesian Analysis of DSGE Models 147 parameter vector. ˜ |Y) J −1 J j =1 |Y) . Moreover. 1).

148 S. . Chib–Jeliazkov data densities – Model 1 (L). log marginal data densities are computed recursively with Geweke’s modified harmonic mean estimator and plotted as a function of the number of draws. FIGURE 10 Geweke vs. Data 1 (L). Schorfheide FIGURE 9 Data densities from multiple chains – Model 1 (L). An and F. Data 1 (L). Log marginal data densities are computed recursively with Geweke’s modified harmonic mean estimator as well as the Chib–Jeliazkov estimator and plotted as a function of the number of draws. For each Markov chain.

data set 1 (L) provides very strong evidence against flexible prices. Based on data set 1 (L) we also estimate specification 3 (L) in which prices are nearly flexible and specification 4 (L) in which the central bank does not respond to output. 5. Vice versa. The first step of the comparison is typically to compute posterior probabilities for the DSGE model and the VAR. A natural choice of reference model in dynamic macroeconomics is a VAR. as linearized DSGE models.Bayesian Analysis of DSGE Models TABLE 4 Log marginal data densities based on Specification Benchmark model 1 (L) Model with nearly flexible prices 3 (L) No policy reaction to output 4 (L) 1 (L) 149 ln p(Y | −196 7 −245 6 −201 9 ) Bayes factor versus 1. The marginal data density of model 4 (L) is equal to −201 9 and the Bayes factor of model 1 (L) versus model 4 (L) is ‘only’ e 5 . the specification of a prior distribution for the VAR parameter requires careful attention. Comparison of a DSGE Model with a VAR The notion of potential misspecification of a DSGE model can be incorporated in the Bayesian framework by including a more general reference model 0 into the model set. We will subsequently present a procedure that allows us to document how the marginal data density of the DSGE model changes as . inflation will also be close to its target value. Hence. can be interpreted as restrictions on a VAR representation. The model comparison results are summarized in Table 4. deviations of output from target coincide with deviations of inflation from its target value. which can be used to detect the presence of misspecifications. The DSGE model implies that when actual output is close to the target flexible price output. Possible pitfalls are discussed in Sims (2003). The marginal data density associated with 3 (L) is −245 6.3. at least approximately.00 exp[48 9] exp[5 2] 1 (L) Notes: The log marginal data densities for the DSGE model specifications are computed based on Geweke’s (1999a) modified harmonic mean estimator. which translates into a Bayes factor (ratio of posterior odds to prior odds) of approximately e 49 in favor of 1 (L). Given the fairly concentrated posterior of depicted in Figure 1 this result is not surprising. This mechanism makes it difficult to identify the policy rule coefficients and imposing an incorrect value for 2 is not particularly costly in terms of fit. Since the VAR parameter space is generally much larger than the DSGE model parameter space. A VAR with a prior that is very diffuse is likely to be rejected even against a misspecified DSGE model. In a more general context this phenomenon is often called Lindley’s paradox.

Its accuracy depends on the number of lags p.10 In principle one could start from a VARMA model to avoid the approximation error. however. To construct a DSGE-VAR we proceed as follows. The framework has been extended to a model evaluation tool in Del Negro et al. and the magnitude of the roots of the moving average polynomial. Schorfheide the cross-coefficient restrictions that the DSGE model imposes on the VAR are relaxed. yt −p ] . 1 . the coefficient matrix = [ 0 . XY ( )= D [xt yt ] A VAR approximation of the DSGE model can be obtained from restriction functions that relate the DSGE model parameters to the VAR parameters. and the T × k matrix X with rows xt . ∗ ( )= −1 XX ( ) XY ( ). This can be achieved by comparing the posterior distributions of interesting population characteristics such as impulse response functions obtained from the DSGE model and from a VAR representation that does not dogmatically impose the DSGE model restrictions. yt −1 .150 S. The posterior computations. Consider a vector autoregressive specification of the form yt = 0 + 1 yt −1 + ··· + p yt −p + ut . . p ] . An and F. the T × n matrices Y and U composed of rows yt and ut . If the data favor the VAR over the DSGE model then it becomes important to investigate further the deficiencies of the structural model. Thus the VAR can be written as Y = X + U . 10 . We will refer to the latter benchmark model as DSGE-VAR and discuss an identification scheme that allows us to construct structural impulse response functions for the vector autoregressive specification against which the DSGE model can be evaluated. . We will document below that four lags are sufficient to generate a fairly precise approximation of the model 1 (L). Building on work by Ingram and Whiteman (1994) the DSGE-VAR approach of Del Negro and Schorfheide (2004) was designed to improve forecasting and monetary policy analysis with VARs. ∗ ( )= YY ( )− YX ( ) −1 XX ( ) XY ( ) (56) This approximation is typically not exact because the state–space representation of the linearized DSGE model generates moving average terms. Let D [·] be the expectation under DSGE model and define the autocovariance matrices XX ( )= D [xt xt ]. [ut ut ] = (55) Define the k × 1 vector xt = [1. would become significantly more cumbersome. (2006) and used to assess the fit of a variant of the Smets and Wouters (2003) model.

VAR. that the posterior distribution of and is also of the inverted Wishart normal form: |Y. We assume | ∼ | . ⊗( T ) + X X )−1 . e. . T − k. However. ∼ W (1 + )T b( b( ). (60) 11 Ingram and Whiteman (1994) constructed a VAR prior from a stochastic growth model to improve the forecast performance of the VAR. (1 + )T − k. | Y . = ∗ ( )+ (57) and capture deviations from the restriction functions The matrices ∗ ( ) and ∗ ( ). |Y) = p ( . see Del Negro and Schorfheide (2004). The joint posterior density of VAR and DSGE model parameters can be conveniently factorized as p ( . It is straightforward to show. . Bayesian analysis of this model requires the specification of a prior distribution for the DSGE model parameters. where W denotes the inverted Wishart distribution. It has the property that its density is approximately proportional to the Kullback–Leibler discrepancy between the VAR approximation of the DSGE model and the . is a hyperparameter that scales the prior covariance matrix.g. p( ).. which is emphasized in Del Negro et al. n (58) −1 1 ( ). ∼ W T ∗ ∗ ( ). their setup did not allow for the computation of a posterior distribution for . (2006).11 The prior distribution can be interpreted as a posterior calculated from a sample of T observations generated from the DSGE model with parameters . n XX ( ). . In the limit the VAR is estimated subject to the restrictions (56). and the misspecification matrices. Rather than specifying a prior in terms of and it is convenient to specify it in terms of and conditional on .Bayesian Analysis of DSGE Models 151 In order to account for potential misspecification of the DSGE model it is assumed that there is a vector and matrices and such that the data are generated from the VAR in (55) with the coefficient matrices = ∗ ( )+ . Zeller (1971). The prior distribution is proper provided that T ≥ k + n. The prior is diffuse for small values of and shifts its mass closer to the DSGE model restrictions as −→ ∞. ∼ |Y. T ⊗ XX ( ) −1 . )p ( | Y ) (59) The -subscript indicates the dependence of the posterior on the hyperparameter.

A natural criterion for the choice of in a Bayesian framework is the marginal data density p (Y ) = p (Y | )p( )d (61) For computational reasons we restrict the hyperparameter to a finite grid . On the other hand. the closer the posterior mean of the VAR parameters is to ∗ ( ) and ∗ ( ). the values that respect the cross-equation restrictions of the DSGE model.152 S. then the posterior mean is close to the OLS estimate (X X )−1 X Y . it is important to consider a data-driven procedure to determine . The marginal posterior density of can be obtained through the marginal likelihood p (Y | ) = | T XX ( ) + X X |− 2 |(1 + )T XX ( n b( )|− (1+ )T −k 2 | T × (2 )−nT /2 2 2 )|− 2 | T n i=1 n ∗( )|− T −k 2 n((1+ )T −k) 2 n( T −k) 2 [((1 + )T − k + 1 − i)/2] n i=1 [( T − k + 1 − i)/2] A derivation is provided in Del Negro and Schorfheide (2004). if = (n + k)/T . If one assigns equal prior probability to each grid point then the normalized p (Y )’s can be interpreted as posterior probabilities for . An and F. Schorfheide b( where ) and b( ) are the given by XX ( b( )= )= 1+ )+ 1 XX 1+ T −1 1+ YX ( XY + 1 XY 1+ T b( 1 ( T (1 + )T YY ( )+Y Y)−( T ) + Y X) )+X Y) ×( T XX ( ) + X X )−1 ( T XY ( Hence the larger the weight of the prior. Del Negro et al. The paper also shows that in large samples the resulting estimator of can be interpreted as a Bayesian minimum distance estimator that projects the VAR coefficient estimates onto the subspace generated by the restriction functions (56). (2006) emphasize that the posterior of provides a measure of fit for the DSGE model: high posterior probabilities for large . Since the empirical performance of the DSGE-VAR procedure crucially depends on the weight placed on the DSGE model restrictions.

The estimation of the DSGE-VAR can be implemented as follows. the short-run responses of the VAR with those from the DSGE model. Using a QR factorization.5 and 2. at least qualitatively. According to the DSGE model. we maintain the triangularization of its covariance matrix and replace the rotation in (65) with the function ∗ ( ) that appears in (64). then a comparison between DSGE-VAR( ˆ ) and DSGE model impulse responses can yield important insights about the misspecification of the DSGE model. The initial tr impact of t on yt in the VAR. The rotation matrix is chosen so that in absence of misspecification the DSGE’s and the DSGE-VAR’s impulse responses to all shocks approximately coincide. and is an orthonormal tr is the Cholesky decomposition of matrix that is not identifiable based on the likelihood function associated with (55). say between 0. So far. (64) where ∗ ( ) is lower triangular and ∗ ( ) is orthonormal. the one-step-ahead forecast errors ut are functions of the structural shocks t .Bayesian Analysis of DSGE Models 153 values of indicate that the model is well specified and a lot of weight should be placed on its implied restrictions. . the initial response of yt to the structural shocks can be can be uniquely decomposed into yt t DSGE = A0 ( ) = ∗ tr ( ) ∗ ( ). which we represent by ut = tr t (63) . Define ˆ = argmax p (Y ) ∈ (62) If p (Y ) peaks at an intermediate value of . on the other hand. as opposed to the covariance matrix of innovations. the identification procedure can be interpreted as matching. To the extent that misspecification is mainly in the dynamics. Let A0 ( ) be the contemporaneous impact of t on yt according to the DSGE model. the VAR given in (55) has been specified in terms of reduced form disturbances ut . is given by yt t VAR = tr (65) To identify the DSGE-VAR. An impulse response function comparison requires the identification of structural shocks in the context of the VAR. Del Negro and Schorfheide (2004) proposed to construct as follows.

indicating that there is no evidence of misspecification and that it is best to dogmatically impose the DSGE model restrictions. The marginal data TABLE 5 Log marginal data densities for Specification DSGE Model DSGE-VAR = ∞ DSGE-VAR = 5 00 DSGE-VAR = 1 00 DSGE-VAR = 75 DSGE-VAR = 50 DSGE-VAR = 25 Notes: See Table 4. The first row reports the marginal data density associated with the state space representation of the DSGE model. The first step of our analysis is to construct a posterior distribution for the parameter . For data set 1 (L). by sampling from the W − distribution. the marginal data densities are increasing as a function of . then the values in the first and second row of Table 5 would be identical. The value = 1 00 has the highest likelihood. Table 5 reports log marginal data densities for the DSGE-VAR as a function of for data sets 1 (L) and 5 (L). compute the orthonormal matrix (s) = ∗ ( ) as described above. 1. An and F. For each draw (s) generate a pair (s) . We assume that lies on the grid = 25. (s) . the two values are indeed quite close. which is representative for the shapes found in empirical applications in Del Negro and Schorfheide (2004) and Del Negro et al. Use the RWM algorithm to generate draws (s) from the marginal posterior distribution p ( | Y ). 1 (L) 1 (L) DSGE-VARs 5 (L) −196 66 −196 88 −198 87 −206 57 −209 53 −215 06 −231 20 −245 62 −244 54 −241 95 −238 59 −239 40 −241 81 −253 61 . 5. Use Geweke’s modified harmonic mean estimator to obtain a numerical approximation of p (Y ). (2006). This misspecification is captured in the inverse U -shape of the marginal data density function. which has been directly generated from the state space representation of 1 (L). 3. Moreover. 75. We now implement the DSGE-VAR procedure for 1 (L) based on artificially generated data. With respect to data set 5 (L) the DSGE model is misspecified. If the VAR approximation of the DSGE model were exact. Moreover. 5. All the results reported subsequently are based on a DSGE-VAR with p = 4 lags. whereas the second row corresponds to the DSGEVAR(∞).154 S. For = ∞ the misspecification matrices estimate the VAR by imposing the restrictions = ∗ ( ) and = ∗ ( ). Schorfheide MCMC Algorithm for DSGE-VAR 1. ∞ and assign equal probabilities to each grid and are zero and we point. 2.

. the = ∞ and the = ˆ DSGE-VAR. In order to assess the nature of the misspecification for 5 (L) we now turn to the impulse response function comparison. DSGE.Bayesian Analysis of DSGE Models 155 densities drop substantially for ≥ 5 and ≤ 0 5. Moreover. DSGE model responses computed from state–space representation: posterior mean (solid). these impulse responses can either be compared based on the same parameter values or their respective posterior estimates of the DSGE model parameters. there are three different sets of impulse responses to be compared: responses from the state–space representation of the DSGE model. DSGEVAR( = ∞) responses: posterior mean (dashed) and pointwise 90% probability bands (dotted). Data 5 (L). In both FIGURE 11 Impulse responses. Figure 11 depicts posterior mean impulse responses for the state–space representation of the DSGE model as well as the DSGE-VAR(∞). In principle. and DSGE-VAR( = ∞) – Model 1 (L).

To obtain the dotted bands in the figure. indicating that the approximation error is small. and construct probability intervals of the differences. the one obtained from the DSGE-VAR(∞). the figure answers the question: to what extent do the ˆ estimates of the VAR coefficients deviate from the DSGE model restriction functions ∗ ( ) and ∗ ( ). we take the upper and lower bound of these probability intervals and shift them according to the posterior mean of the = ∞ response. the inflation and interest rate responses to a monetary policy shock are markedly different. According to the VAR approximation of the DSGE model. The (dotted) bands in Figure 11 depict pointwise 90% probability intervals for the DSGE-VAR(∞) and reflect posterior parameter uncertainty with respect to . is transformed from the – space into impulse response functions because they are easier to interpret. however. Roughly speaking. In a general equilibrium model a misspecification of the households’ decision problem can have effects on the dynamics of all the endogenous variables. An and F. A brief description of the computation can clarify the interpretation. Unlike in the DSGE-VAR(∞). calculate their differences. For each draw of from the ˆ DSGE-VAR posterior we compute the ˆ and the = ∞ response functions.156 S. the interest rate falls below steady-state in period four and stays negative for several periods. The discrepancy between the posterior mean responses (solid and dashed lines) indicates the magnitude of the error that is made by approximating the state–space representation of the DSGE model by a fourth-order VAR. The (dotted) 90% probability bands reflect uncertainty with respect to the discrepancy between the = ∞ and ˆ responses. Schorfheide cases we use the same posterior distribution of . According to the log-linearized DSGE model output. namely. Rather than looking at the dynamic responses of the endogenous variables to the structural shocks. While the relaxation of the DSGE model restrictions has little effect on the propagation of the technology shock. To assess the misspecification of the DSGE model we compare posterior mean responses of the = ∞ (dashed) and ˆ (solid) DSGE-VAR in Figure 12. Except for the response of output to the government spending shock. Both sets of responses are constructed from the ˆ posterior of . The discrepancy. we can also ask to what extent are the optimality conditions that the DSGE model imposes satisfied by the DSGEVAR( ˆ ) responses. and interest rates satisfy the following relationships in response . has a slight hump shape. and the reversion to the steady-state is much slower. on the other hand. the DSGE and DSGE-VAR(∞) responses are virtually indistinguishable. inflation returns quickly to its steady-state level after a contractionary policy shock. inflation. The DSGE-VAR( ˆ ) response of inflation.

DSGE-VAR( = ∞). Pointwise 90% probability bands (dotted) signify shifted probability intervals for the difference between = ∞ and = 1 responses. DSGE-VAR( = 1) posterior mean responses (solid). to the shock R .t R Rt −1 − yt ˆ − (1 − ˆ 2 yt − (1 − (68) Figure 13 depicts the path of the right-hand side of Equations (66) to (68) in response to a monetary policy shock. Data 5 (L).t : 0 = yt − ˆ = Rt − t [yt +1 ] 1 + (Rt − t [ ˆ t +1 ] R) 1 ˆt t [ t +1 ]) (66) (67) R) 0 = ˆt − R . and DSGE-VAR( = 1) – Model 1 (L). DSGE-VAR( = ∞) posterior mean responses (dashed).Bayesian Analysis of DSGE Models 157 FIGURE 12 Impulse responses. Based on draws from the joint posterior of the DSGE and VAR parameters we can first calculate identified .

and the solid lines depict DSGE-VAR responses. While the price setting relationship and the monetary policy rule restriction seem to be satisfied. and the monetary policy rule. VAR responses to obtain the path of output. An and F. The bottom three panels of the figure are based on the DSGEVAR( ˆ ). Del Negro et al.158 S. Schorfheide FIGURE 13 Impulse responses. inflation. This finding is encouraging for the evaluation strategy since the data set 5 (L) was indeed generated from a model with a modified Euler equation. (2006) use the DSGE-VAR framework also to compare multiple DSGE models. DSGE-VAR responses: posterior mean (solid) and pointwise 90% probability bands (dotted). at least for the posterior mean. and DSGE-VAR( = 1) – Model 1 (L). Panel (2. Data 5 (L). the price setting equation. which relaxes the DSGE model restrictions. While this section focused mostly on the assessment of a single DSGE model. and in a second step use the DSGE model parameters to calculate the wedges in the Euler equation.1) indicates a substantial violation of the consumption Euler equation. DSGE-VAR( = ∞). The dashed lines show the posterior mean responses according to the state–space representation of the DSGE model. The top three panels of Figure 13 show that both for the DSGE model as well as the DSGE-VAR(∞) the three equations are satisfied. DSGE model responses: posterior mean (dashed). Schorfheide (2000) proposed a lossfunction-based evaluation framework for (multiple) DSGE models that . and interest rate.

for each i. . i = 1. Gordon et al. This loss-function-based approach nests DSGE model comparisons based on marginal data densities as well as assessments based on a comparison of model moments to sample moments. numerical methods have to be used to integrate the latent state vector. constructs a posterior distribution for population characteristics of interest such as autocovariances and impulse responses.4. Recently Fernández-Villaverde and Rubio-Ramírez (2005) use the filter to construct the likelihood for DSGE model solved with a projection method. particle from p st | sti−1 . also known as a sequential Monte Carlo filter. by generating one . NONLINEAR METHODS For a non-linear/nonnormal state–space model. A brief description of the procedure is given below. Note that p st | Y t −1 . ) p st −1 | Y t −1 . which approximate p st −1 | Y t −1 . We follow their approach and use the particle filter for DSGE model 1 solved with a second-order perturbation method. as special cases. Instead. The evaluation of the likelihood function can be implemented with a particle filter. This prediction is evaluated under a loss function and the risk is calculated under an overall posterior distribution that averages the predictions of the DSGE models and the reference model according to their posterior probabilities. = p (st | st −1 . the linear Kalman filter cannot be used to compute the likelihood function. (1993) and Kitagawa (1996) are early contributors to this literature. in period t we start with the particles N sti−1 i=1 . and then examines the ability of the DSGE models to predict the population characteristics. (1998). Particle Filter i 1) Initialization: Draw N particles s0 . . t = 1. dst −1 ≈ 1 N N sti ˜ N i=1 from p st | sti−1 . popularized in the calibration literature. N . The vector st has been defined in Section 2. 6. from the initial distribution p(s0 | ). Arulampalam et al. By induction. (2002) provide an excellent survey. .Bayesian Analysis of DSGE Models 159 augments potentially misspecified DSGE models with a more general reference model. i=1 Hence one can draw N particles from p st | Y t −1 . We use Y to denote the × n matrix with rows yt . . . the particle filter has been applied to analyze the stochastic volatility models by Pitt and Shephard (1999) and Kim et al. In economics. 2) Prediction: Draw one-step-ahead forecasted particles p st | Y t −1 .

) and hence is equally weighted. given by sti . which amounts to updating the probability weights assigned to the particles N sti i=1 . .160 S. = p yt | st . = p yt | st . We begin by computing the non-normalized importance weights ˜ ˜ ˜ ti = p yt | sti . dst ≈ 1 N N ˜ ti i=1 (70) Now define the normalized weights i t = ˜ ti N j =1 ˜t sti . Schorfheide 3) Filtering: The goal is to approximate p st | Y t . p st | Y t −1 . . ti i=1 so that ˜ Pr sti = sti = i t. 5) Likelihood Evaluation: According to (70) the log likelihood function can be approximated by T N ln ( | Y ) = ln p(y1 | ) + T t =2 ln p yt | Y t −1 . approximates12 the N 4) Resampling: We now generate a new set of particles sti i=1 by resampling with replacement N times from an approximate discrete N ˜ representation of p st | Y t . i = 1. p st | Y t −1 . In the remainder of this section we will refer to these parameters as “true” values as opposed to “pseudotrue” values to 12 The sequential importance sampler (SIS) is a Monte Carlo method which utilized this aspect. all but one particle will have negligible weight. ≈ t =1 1 N N ˜ ti i=1 To compare results from first and second-order accurate DSGE model solutions we simulated artificial data of 80 observations from the quadratic approximation of model 1 with parameter values reported in Table 3 under the heading 1 (Q ). where after a few iterations. . but it is well documented that it suffers from a degeneracy problem.N The resulting sample is in fact an iid sample from the discretized density of p(st | Y t . . ˜ i N t i=1 j and note that the importance sampler updated density p st | Y t . . (69) p yt | Y t −1 . An and F. The denominator in (69) can be approximated by p yt | Y t −1 .

Bayesian Analysis of DSGE Models 161 be defined later. The slope coefficient of the Phillips curve. it is desirable to have many particles. g . a measurement error is introduced to each measurement equation.S. . 13 . (Q ) . .55. .S. 1. We generate x0 by drawing 0 from its unconditional distribution (the law of motion for t is linear) and setting x−1 = x. and 3. the computation of p(st | Y t ) requires the solution of a system of quadratic equations.t can be expressed as xj . data. We also make monetary policy less responsive to the output gap by setting 2 = 125.0.1. are chosen so that the second moments are matched to the U.33. which implies that the steady-state consumption amounts to 85% of output. The law of motion ˆ ˆ for the endogenous state variables xj .t = (0) j + (1) j wt + wt (2) j wt (71) where wt = [xt −1 . r (A) . Moreover. z . These true parameter values by and large resemble the parameter values that have been used to generate data from the log-linear DSGE model specifications.1. However. especially enough particles to capture the tails of Without the measurement error two complications arise: since p(yt | st ) degenerates to a pointmass and the distribution of st is that of a multivariate noncentral 2 the evaluation of p(yt | Y t −1 ) becomes difficult. p (s0 | ). whose standard deviation is set as 20% of that of each observation. 1/g is chosen as . is chosen to be .85. and inflation between the artificial data and the real U. First. R .2 to match the average output growth rate. Other exogenous process parameters.68. For a good approximation of the prediction error distribution. The implied quadratic adjustment cost coefficient. data. These values are different from the mean of the historical observations because in the non-linear version of the DSGE model means differ from steady states.t . it is straightforward to calculate the unconditional distribution of st associated with the vector autoregressive representation (33). is 53. gt . t ] . Decompose st = [xt . is set to 0. For the second-order approximation we rewrite the state transition equation (37) as follows. Second. where t is composed of the exogenous processes R . and zt . there are some exceptions. t ] . which implies a steady-state markup of 11% that is consistent with the estimates of Basu (1995). we have to choose the number of particles. The degree of imperfect competition. and (A) are chosen as . which implies less price stickiness than in the linear case. and z . g . where x is the steady-state of xt . we need a scheme to draw from the initial state distribution. For computational convenience.13 6. In the linear case. interest rate. Configuration of the Particle Filter There are several issues concerning the practical implementation of the particle filter.

If the measurement errors in the conditional distribution p(y_t | s_t) are small, more particles are needed to obtain an accurate approximation of the likelihood function and to ensure that the posterior weights π_t^i do not assign probability one to a single particle. In our application, we found that 40,000 particles were enough to obtain stable evaluations of the likelihood given the size of the measurement errors. Moreover, the number of particles affects the performance of the resampling algorithm. If the number of particles is too small, the resampling will not work well. In our implementation, the stratified resampling scheme proposed by Kitagawa (1996) is applied, as in the sketch above; it is optimal in terms of variance in the class of unbiased resampling schemes.

Even though the estimation procedure involves extensive random sampling, the particle filter can be readily implemented on a good desktop computer. We implement most of the procedure in MATLAB (2005) so that we can exploit the symbolic toolbox to solve the DSGE model. The filtering step is implemented as a FORTRAN mex library, so we can call it as a function in MATLAB. Based on a sample of 80 observations we can generate 1,000 draws with 40,000 particles in 1 h and 40 min (6.0 sec per draw) on an AMD64 3000+ machine with 1 GB RAM. Linear methods are more than 200 times faster: 1,000 draws are generated in 24 sec in our MATLAB routines and 3 sec in our GAUSS routines.

6.2. A Look at the Likelihood Function

We will begin our analysis by studying profiles of likelihood functions. For brevity, we will refer to the likelihood obtained from the first-order (second-order) accurate solution of the DSGE model as the linear (quadratic) likelihood. It is natural to evaluate the quadratic likelihood function in the neighborhood of the true parameter values given in Table 3. Recall that we had introduced the reduced-form Phillips curve parameter κ in (32) because φ and ν cannot be identified with the log-linearized DSGE model. While in the linearized model steady states and long-run averages coincide, they differ if the model is solved by a second-order approximation. Hence, we evaluate the linear likelihood around the pseudotrue parameter values, which are obtained by finding the mode of the linear likelihood. The parameter values for ν and 1/g are fixed at their true values because they do not enter the linear likelihood.

Figure 14 shows the log-likelihood profiles along each structural parameter dimension. Two features are noticeable in this comparison. First, the quadratic log-likelihood peaks around the true parameter values, while the linear log-likelihood attains its maximum around the pseudotrue parameter values. The differences between the peaks are most pronounced for the steady-state parameters r(A) and π(A).
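Profiles of this kind can be traced out with a simple loop over a one-dimensional grid, re-evaluating the likelihood at each grid point. In the sketch below, dsge_loglik is a hypothetical wrapper that solves the model for a given parameter vector and evaluates the likelihood (Kalman filter for the linear solution, particle filter for the quadratic one), Y is the 80-observation data set, and idx_psi1 marks the position of ψ1 in the parameter vector:

% Log-likelihood profile along the psi1 dimension, remaining
% parameters held fixed at their true values.
theta     = theta_true;
psi1_grid = linspace(1.0, 2.0, 41);       % illustrative range for psi1
prof      = zeros(size(psi1_grid));
for k = 1:numel(psi1_grid)
    theta(idx_psi1) = psi1_grid(k);
    prof(k) = dsge_loglik(theta, Y);      % hypothetical likelihood wrapper
end
plot(psi1_grid, prof); xlabel('\psi_1'); ylabel('log likelihood');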

FIGURE 14 Linear vs. quadratic approximations: likelihood profiles. Data Set M1(Q); 40,000 particles are used. Likelihood profile for each parameter: M1(L)/Kalman filter (dashed) and M1(Q)/particle filter (solid). Vertical lines signify true (dotted) and pseudotrue (dashed-dotted) values.

For some parameters, the quadratic log-likelihood function is more concave than the linear one, which implies that the non-linear approach is able to extract more information on the structural parameters from the data. For instance, it appears that monetary policy parameters such as ψ1 can be more precisely estimated with the quadratic approximation. Moreover, the quadratic approximation can identify some structural parameters that are unidentifiable under the log-linearized model. As noted before, g does not enter the linear version of the model and hence the log-likelihood profile along this dimension is flat. In the quadratic approximation, however, the log-likelihood is slightly sloped in the 1/g = c/y dimension. Similarly, the linear likelihood is flat for the values of φ and ν that imply a constant κ. This is illustrated in Figure 15, which depicts the linear and quadratic likelihood contours in φ–ν space. The contours of the linear likelihood trace out iso-κ lines: κ is constant along the dashed lines. The right panel of Figure 15 indicates that the iso-κ lines intersect with the contours of the quadratic likelihood function, which suggests that φ and ν are potentially separately identifiable. However, our sample of 80 observations is too short to obtain a sharp estimate of the two parameters.

FIGURE 15 Linear vs. quadratic approximations: likelihood contours. Data Set M1(Q); 40,000 particles are used. Contours of the likelihood (solid) for M1(L) and M1(Q). κ is constant along the dashed lines. The large dot indicates the true and pseudotrue parameter value.

6.3. The Posterior

We now compare the posterior distributions obtained from the linear and the non-linear analysis. In both cases we use the same prior distribution reported in Table 3. Unlike in the analysis of the likelihood function, we now substitute the adjustment cost parameter φ with the Phillips-curve parameter κ in the quadratic approximation of the DSGE model. A beta prior with mean 0.1 and standard deviation 0.05 is placed on ν, which roughly covers the micro evidence of a 10–15% markup. Note that 1 − 1/g is the steady-state government spending–output ratio, and hence a beta prior with mean 0.85 and standard deviation 0.1 is used for 1/g.
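These beta priors are parameterized by their means and standard deviations; the implied shape parameters follow from the moment conditions m = a/(a+b) and v = ab/((a+b)^2 (a+b+1)) of the beta distribution. A small sketch of the conversion:

% Shape parameters (a, b) of a beta prior with mean m and std s;
% the two moment conditions imply a + b = m*(1-m)/v - 1.
m  = 0.10; s = 0.05;           % prior for nu
ab = m * (1 - m) / s^2 - 1;    % a + b = 35
a  = m * ab                    % a = 3.5
b  = (1 - m) * ab              % b = 31.5
% betastat(a, b) from the Statistics Toolbox recovers [m, s^2].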

We use the RWM algorithm to generate draws from the posterior distributions associated with the linear and quadratic approximations of the DSGE model. We first compute the posterior mode for the linear specification with the additional parameters ν and 1/g fixed at their true values. After that, we evaluate the Hessian to be used in the RWM algorithm at the (linear) mode without fixing ν and 1/g, so that the inverse Hessian reflects the prior variance of the additional parameters. This Hessian is used for both the linear and the non-linear analysis. We use scaling factors of 0.5 (linear) and 0.25 (quadratic) to target an acceptance rate of about 30%. The Markov chains are initialized in the neighborhood of the pseudotrue and true parameter values, respectively. 100,000 draws from each distribution are generated and every 500th draw is plotted.

Figures 16 and 17 depict the draws from the prior, the linear posterior, and the quadratic posterior distribution. The draws tend to be more concentrated as we move from prior to posterior. While linear and quadratic posteriors are quite similar for most of our parameters, there are a few exceptions. The quadratic posterior mean of ψ1 is larger than the linear posterior mean and closer to the true value. The quadratic posterior means for the steady-state parameters r(A) and π(A) are also greater than the corresponding linear posterior means, which is consistent with the positive difference between "true" and "pseudotrue" values of these parameters. Now consider the parameter ν that determines the demand elasticity for the differentiated products. Conditional on κ, this parameter is not identifiable under the linear approximation and hence its posterior is not updated. The parameter does, however, affect the second-order terms that arise in the quadratic approximation of the DSGE model. Indeed, the quadratic posterior of ν is markedly different from the prior as it concentrates around 0.05. Moreover, there is a clear negative correlation between ν and κ according to the quadratic posterior.

Table 6 reports the log marginal data densities for the linear (M1(L)) and quadratic (M1(Q)) approximations of the DSGE model based on Data Set M1(Q). The log marginal likelihoods are −416.06 for M1(L) and −408.09 for M1(Q), which implies a Bayes factor of about e^8 in favor of the quadratic approximation. This result suggests that Data Set M1(Q) exhibits nonlinearities that can be detected by the likelihood-based estimation methods. Our simulation results with respect to linear versus non-linear estimation are in line with the findings reported in Fernández-Villaverde and Rubio-Ramírez (2005) for a version of the neoclassical stochastic growth model. In their analysis the non-linear solution combined with the particle filter delivers a substantially better fit of the model to the data as measured by the marginal data density, both for simulated as well as actual data.
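As a check on the entries of Table 6, the Bayes factor is simply the exponentiated difference of the two log marginal data densities:

% Bayes factor implied by the log marginal data densities in Table 6
lnpY_L = -416.06;              % ln p(Y | M1(L))
lnpY_Q = -408.09;              % ln p(Y | M1(Q))
lnBF   = lnpY_Q - lnpY_L       % = 7.97
BF     = exp(lnBF)             % approx. 2.9e3, i.e., about e^8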

FIGURE 16 Posterior draws: linear vs. quadratic approximation I. Data Set M1(Q); 40,000 particles are used. 100,000 draws from the prior and posterior distributions are generated and every 500th draw is plotted. Intersections of solid and dotted lines signify true and pseudotrue parameter values.

FIGURE 17 Posterior draws: linear vs. quadratic approximation II. See Figure 16.

Moreover, the authors point out that the differences in terms of point estimates, although relatively small in magnitude, may have important effects on the moments of the model.

TABLE 6 Log marginal data densities based on Data Set M1(Q)

Specification                                      ln p(Y | Mi)    Bayes factor versus M1(Q)
Benchmark model, linear solution M1(L)             −416.06         exp[7.97]
Benchmark model, quadratic solution M1(Q)          −408.09         1.00

Notes: See Table 4.

7. CONCLUSIONS AND OUTLOOK

There exists by now a large and growing body of empirical work on the Bayesian estimation and evaluation of DSGE models. This paper has surveyed the tools used in this literature and illustrated their performance in the context of artificial data. While most of the existing empirical work is based on linearized models, techniques to estimate DSGE models solved with non-linear methods have become implementable thanks to advances in computing technology. Nevertheless, many challenges remain. Model size and the dimensionality of the parameter space pose a challenge for MCMC methods. Lack of identification of structural parameters is often difficult to detect, since the mapping from the structural form of the DSGE model into a state-space representation is highly non-linear; this creates a challenge for scientific reporting, as audiences are typically interested in disentangling the information provided by the data from the information embodied in the prior. Model misspecification is and will remain a concern in empirical work with DSGE models despite continuous efforts by macroeconomists to develop more adequate models. Hence it is important to develop methods that incorporate potential model misspecification in the measures of uncertainty constructed for forecasts and policy recommendations.

ACKNOWLEDGMENTS

Part of the research was conducted while Schorfheide was visiting the Research Department of the Federal Reserve Bank of Philadelphia, for whose hospitality he is grateful. We thank Herman van Dijk and Gary Koop for inviting us to write this survey paper for Econometric Reviews. We also thank our discussants as well as Marco Del Negro, Jesús Fernández-Villaverde, Thomas Lubik, Juan Rubio-Ramírez, Georg Strasser, seminar participants at Columbia University, and the participants of the 2006 EABCN Training School in Eltville for helpful comments and suggestions. Schorfheide gratefully acknowledges financial support from the Alfred P. Sloan Foundation. The views expressed here are those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.

REFERENCES

Adolfson, M., Laséen, S., Lindé, J., Villani, M. (2007). Bayesian estimation of an open economy DSGE model with incomplete pass-through. Journal of International Economics, forthcoming.

Altug, S. (1989). Time-to-build and aggregate fluctuations: some new evidence. International Economic Review 30(4):889–920.

An, S. (2005). Bayesian estimation of DSGE models: lessons from second-order approximations. Manuscript, University of Pennsylvania.

Anderson, G. (2000). A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Manuscript, Federal Reserve Board of Governors.

Aruoba, S. B., Fernández-Villaverde, J., Rubio-Ramírez, J. F. (2006). Comparing solution methods for dynamic equilibrium economies. Journal of Economic Dynamics and Control 30(12):2477–2508.

Arulampalam, S., Maskell, S., Gordon, N., Clapp, T. (2002). A tutorial on particle filters for online non-linear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing 50(2):173–188.

Basu, S. (1995). Intermediate goods and business cycles: implications for productivity and welfare. American Economic Review 85(3):512–531.

Beyer, A., Farmer, R. (2004). On the indeterminacy of new Keynesian economics. ECB Working Paper 323.

Bierens, H. (2007). Econometric analysis of linearized singular dynamic stochastic general equilibrium models. Journal of Econometrics 136(2):595–627.

Bils, M., Klenow, P. (2004). Some evidence on the importance of sticky prices. Journal of Political Economy 112(5):947–985.

Binder, M., Pesaran, M. H. (1997). Multivariate linear rational expectations models: characterization of the nature of the solutions and their fully recursive computation. Econometric Theory 13(6):877–888.

Blanchard, O., Kahn, C. (1980). The solution of linear difference models under rational expectations. Econometrica 48(5):1305–1312.

Box, G. (1980). Sampling and Bayes inference in scientific modelling and robustness. Journal of the Royal Statistical Society Series A 143(4):383–430.

Canova, F. (1994). Statistical inference in calibrated models. Journal of Applied Econometrics 9:S123–S144.

Canova, F. (2004). Monetary policy and the evolution of the US economy: 1948–2002. Manuscript, IGIER and Universitat Pompeu Fabra.

Canova, F. (2007). Methods for Applied Macroeconomic Research. Princeton, NJ: Princeton University Press.

Canova, F., Sala, L. (2005). Back to square one: identification issues in DSGE models. Manuscript, IGIER and Bocconi University.

Chang, Y., Gomes, J., Schorfheide, F. (2002). Learning-by-doing as a propagation mechanism. American Economic Review 92(5):1498–1520.

Chang, Y., Schorfheide, F. (2003). Labor-supply shifts and economic fluctuations. Journal of Monetary Economics 50(8):1751–1768.

Chib, S., Greenberg, E. (1995). Understanding the Metropolis–Hastings algorithm. American Statistician 49(4):327–335.

Chib, S., Jeliazkov, I. (2001). Marginal likelihood from the Metropolis–Hastings output. Journal of the American Statistical Association 96(453):270–281.

Christiano, L. J. (2002). Solving dynamic equilibrium models by a method of undetermined coefficients. Computational Economics 20(1–2):21–55.

Christiano, L. J., Eichenbaum, M. (1992). Current real-business-cycle theories and aggregate labor-market fluctuations. American Economic Review 82(3):430–450.

Christiano, L. J., Eichenbaum, M., Evans, C. (2005). Nominal rigidities and the dynamic effects of a shock to monetary policy. Journal of Political Economy 113(1):1–45.

Collard, F., Juillard, M. (2001). Accuracy of stochastic perturbation methods: the case of asset pricing models. Journal of Economic Dynamics and Control 25(6–7):979–999.

Crowder, M. (1988). Asymptotic expansions of posterior expectations, distributions, and densities for stochastic processes. Annals of the Institute for Statistical Mathematics 40:297–309.

DeJong, D., Ingram, B. (2001). The cyclical behavior of skill acquisition. Review of Economic Dynamics 4(3):536–561.

DeJong, D., Ingram, B., Whiteman, C. (1996). A Bayesian approach to calibration. Journal of Business and Economic Statistics 14(1):1–9.

DeJong, D., Ingram, B., Whiteman, C. (2000). A Bayesian approach to dynamic macroeconomics. Journal of Econometrics 98(2):203–223.

Del Negro, M. (2003). Fear of floating: a structural estimation of monetary policy in Mexico. Manuscript, Federal Reserve Bank of Atlanta.

Del Negro, M., Schorfheide, F. (2004). Priors from general equilibrium models for VARs. International Economic Review 45(2):643–673.

Del Negro, M., Schorfheide, F., Smets, F., Wouters, R. (2007). On the fit and forecasting performance of new Keynesian models. Journal of Business and Economic Statistics, forthcoming.

Den Haan, W., Marcet, A. (1994). Accuracy in simulation. Review of Economic Studies 61(1):3–17.

de Walque, G., Smets, F., Wouters, R. (2005). An open economy DSGE model linking the Euro area and the U.S. economy. Manuscript, National Bank of Belgium.

Diebold, F., Ohanian, L., Berkowitz, J. (1998). Dynamic equilibrium economies: a framework for comparing models and data. Review of Economic Studies 65(3):433–452.

Dridi, R., Guay, A., Renault, E. (2007). Indirect inference and calibration of dynamic stochastic general equilibrium models. Journal of Econometrics 136(2):397–430.

DYNARE (2005). Software available from http://cepremap.cnrs.fr/dynare/.

Fernández-Villaverde, J., Rubio-Ramírez, J. F. (2004). Comparing dynamic equilibrium models to data: a Bayesian approach. Journal of Econometrics 123(1):153–187.

Fernández-Villaverde, J., Rubio-Ramírez, J. F. (2005). Estimating dynamic equilibrium economies: linear versus non-linear likelihood. Journal of Applied Econometrics 20(7):891–910.

Galí, J., Rabanal, P. (2004). Technology shocks and aggregate fluctuations: how well does the RBC model fit postwar U.S. data? In: Gertler, M., Rogoff, K., eds. NBER Macroeconomics Annual 2004. Cambridge, MA: MIT Press, pp. 225–288.

GAUSS (2004). Software available from http://www.aptech.com/.

Gelfand, A., Dey, D. (1994). Bayesian model choice: asymptotics and exact calculations. Journal of the Royal Statistical Society Series B 56(3):501–514.

Gelman, A., Carlin, J., Stern, H., Rubin, D. (1995). Bayesian Data Analysis. New York: Chapman & Hall.

Geweke, J. (1989). Bayesian inference in econometric models using Monte Carlo integration. Econometrica 57(6):1317–1339.

Geweke, J. (1999a). Using simulation methods for Bayesian econometric models: inference, development and communication. Econometric Reviews 18(1):1–126.

Geweke, J. (1999b). Computational experiments and reality. Manuscript, University of Iowa.

Geweke, J. (2005). Contemporary Bayesian Econometrics and Statistics. Hoboken: John Wiley & Sons.

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society Series B 14(1):107–114.

Gordon, N., Salmond, D., Smith, A. (1993). Novel approach to non-linear and non-Gaussian Bayesian state estimation. IEE Proceedings-F 140(2):107–113.

Hammersley, J., Handscomb, D. (1964). Monte Carlo Methods. New York: John Wiley.

Hansen, L., Heckman, J. (1996). The empirical foundations of calibration. Journal of Economic Perspectives 10(1):87–104.

Hastings, W. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1):97–109.

Ingram, B., Whiteman, C. (1994). Supplanting the Minnesota prior: forecasting macroeconomic time series using real business cycle model priors. Journal of Monetary Economics 34(3):497–510.

Ireland, P. (2004). A method for taking models to the data. Journal of Economic Dynamics and Control 28(6):1205–1226.

Jin, H.-H., Judd, K. (2002). Perturbation methods for general dynamic stochastic models. Manuscript, Hoover Institute.

Judd, K. (1998). Numerical Methods in Economics. Cambridge, MA: MIT Press.

Justiniano, A., Preston, B. (2004). Small open economy DSGE models: specification, estimation, and model fit. Manuscript, International Monetary Fund and Columbia University.

Kass, R., Raftery, A. (1995). Bayes factors. Journal of the American Statistical Association 90(430):773–795.

Kim, J. (2000). Constructing and estimating a realistic optimizing model of monetary policy. Journal of Monetary Economics 45(2):329–359.

Kim, J., Kim, S. (2003). Spurious welfare reversal in international business cycle models. Journal of International Economics 60(2):471–500.

Kim, J., Kim, S., Schaumburg, E., Sims, C. (2005). Calculating and using second-order accurate solutions of discrete time dynamic equilibrium models. Manuscript, Princeton University.

Kim, J.-Y. (1998). Large sample properties of posterior densities, Bayesian information criterion, and the likelihood principle in non-stationary time series models. Econometrica 66(2):359–380.

Kim, K., Pagan, A. (1995). Econometric analysis of calibrated macroeconomic models. In: Pesaran, M. H., Wickens, M., eds. Handbook of Applied Econometrics: Macroeconomics. Oxford: Basil Blackwell, pp. 356–390.

Kim, S., Shephard, N., Chib, S. (1998). Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65(3):361–393.

King, R., Watson, M. (1998). The solution of singular linear difference systems under rational expectations. International Economic Review 39(4):1015–1026.

Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian non-linear state space models. Journal of Computational and Graphical Statistics 5(1):1–25.

Klein, P. (2000). Using the generalized Schur form to solve a multivariate linear rational expectations model. Journal of Economic Dynamics and Control 24(10):1405–1423.

Kydland, F., Prescott, E. (1982). Time to build and aggregate fluctuations. Econometrica 50(6):1345–1370.

Kydland, F., Prescott, E. (1996). The computational experiment: an econometric tool. Journal of Economic Perspectives 10(1):69–85.

Laforte, J.-P. (2004). Comparing monetary policy rules in an estimated equilibrium model of the US economy. Manuscript, Federal Reserve Board of Governors.

Lancaster, T. (2004). An Introduction to Modern Bayesian Econometrics. Oxford: Blackwell Publishing.

Landon-Lane, J. (1998). Bayesian Comparison of Dynamic Macroeconomic Models. Ph.D. dissertation, University of Minnesota.

Leeper, E., Sims, C. (1994). Toward a modern macroeconomic model usable for policy analysis. In: Fischer, S., Rotemberg, J., eds. NBER Macroeconomics Annual 1994. Cambridge, MA: MIT Press, pp. 81–118.

Levin, A., Onatski, A., Williams, J., Williams, N. (2006). Monetary policy under uncertainty in micro-founded macroeconometric models. In: Gertler, M., Rogoff, K., eds. NBER Macroeconomics Annual 2005. Cambridge, MA: MIT Press, pp. 229–287.

Lubik, T., Schorfheide, F. (2004). Testing for indeterminacy: an application to US monetary policy. American Economic Review 94(1):190–217.

Lubik, T., Schorfheide, F. (2006). A Bayesian look at new open economy macroeconomics. In: Gertler, M., Rogoff, K., eds. NBER Macroeconomics Annual 2005. Cambridge, MA: MIT Press, pp. 313–366.

Lubik, T., Schorfheide, F. (2007). Do central banks respond to exchange rate movements? A structural investigation. Journal of Monetary Economics, forthcoming.

MATLAB (2005). Version 7.0. Software available from http://www.mathworks.com/.

McGrattan, E. (1994). The macroeconomic effects of distortionary taxation. Journal of Monetary Economics 33(3):573–601.

Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics 21(6):1087–1092.

Onatski, A., Williams, N. (2004). Empirical and policy performance of a forward-looking monetary model. Manuscript, Princeton University.

Otrok, C. (2001). On measuring the welfare cost of business cycles. Journal of Monetary Economics 47(1):61–92.

Phillips, P. (1996). Econometric model determination. Econometrica 64(4):763–812.

Pitt, M., Shephard, N. (1999). Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association 94(446):590–599.

Poirier, D. (1998). Revising beliefs in nonidentified models. Econometric Theory 14(4):483–509.

Rabanal, P., Rubio-Ramírez, J. F. (2005a). Comparing new Keynesian models of the business cycle: a Bayesian approach. Journal of Monetary Economics 52(6):1152–1166.

Rabanal, P., Rubio-Ramírez, J. F. (2005b). Comparing new Keynesian models in the Euro area: a Bayesian approach. Federal Reserve Bank of Atlanta Working Paper 2005-8.

Rabanal, P., Tuesta, V. (2006). Euro-dollar real exchange rate dynamics in an estimated two-country model: what is important and what is not. CEPR Discussion Paper No. 5957.

Robert, C., Casella, G. (1999). Monte Carlo Statistical Methods. New York: Springer Verlag.

Rotemberg, J., Woodford, M. (1997). An optimization-based econometric framework for the evaluation of monetary policy. In: Bernanke, B., Rotemberg, J., eds. NBER Macroeconomics Annual 1997. Cambridge, MA: MIT Press, pp. 297–346.

Sargent, T. (1989). Two models of measurements and the investment accelerator. Journal of Political Economy 97(2):251–287.

Schmitt-Grohé, S., Uribe, M. (2006). Optimal simple and implementable monetary and fiscal rules. Manuscript, Duke University.

Schorfheide, F. (2000). Loss function-based evaluation of DSGE models. Journal of Applied Econometrics 15(6):645–670.

Schorfheide, F. (2005). Learning and monetary policy shifts. Review of Economic Dynamics 8(2):392–419.

Sims, C. (1996). Macroeconomics and methodology. Journal of Economic Perspectives 10(1):105–120.

Sims, C. (2002). Solving linear rational expectations models. Computational Economics 20(1–2):1–20.

Sims, C. (2003). Probability models for monetary policy decisions. Manuscript, Princeton University.

Smets, F., Wouters, R. (2003). An estimated dynamic stochastic general equilibrium model of the Euro area. Journal of the European Economic Association 1(5):1123–1175.

Smets, F., Wouters, R. (2005). Comparing shocks and frictions in US and Euro area business cycles: a Bayesian DSGE approach. Journal of Applied Econometrics 20(2):161–183.

Smith, A. (1993). Estimating non-linear time-series models using simulated vector autoregressions. Journal of Applied Econometrics 8:S63–S84.

Swanson, E., Anderson, G., Levin, A. (2005). Higher-order perturbation solutions to dynamic, discrete-time rational expectations models. Manuscript, Federal Reserve Bank of San Francisco.

Taylor, J., Uhlig, H. (1990). Solving non-linear stochastic growth models: a comparison of alternative solution methods. Journal of Business and Economic Statistics 8(1):1–17.

Uhlig, H. (1999). A toolkit for analyzing non-linear dynamic stochastic models easily. In: Marimón, R., Scott, A., eds. Computational Methods for the Study of Dynamic Economies. Oxford, UK: Oxford University Press, pp. 30–61.

Walker, A. (1969). On the asymptotic behaviour of posterior distributions. Journal of the Royal Statistical Society Series B 31(1):80–88.

Watson, M. (1993). Measures of fit for calibrated models. Journal of Political Economy 101(6):1011–1041.

White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica 50(1):1–25.

Woodford, M. (2003). Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press.

Zellner, A. (1971). Introduction to Bayesian Inference in Econometrics. New York: John Wiley & Sons.
