This action might not be possible to undo. Are you sure you want to continue?
Preliminary and incomplete.
First complete draft: May, 2006
This version: March 19, 2007.
Francesca Monti
∗
ECARES, Universit´e Libre de Bruxelles
CP 114 Av.F.D. Roosevelt, 50.
B1050 Bruxelles
fmonti@ulb.ac.be
Abstract
This paper proposes a parsimonious and modelconsistent method for combining
forecasts generated by structural microfounded models and judgmental fore
casts. The goal is to produce forecasts that are modelbased, and therefore
disciplined by the rigor of the economic model, but that can also incorporate
judgmental information. In our setup, there are three actors: the economic
agents and two types of forecasters, the purelymodel based and the judgmental
forecasters. They all know the true model of the economy, but their information
sets diﬀer. The economic agents observe shocks as they realize and make their
decisions consequently, while the forecasters do not observe current shocks. The
judgmental forecasters however have access to more timely information than the
purely modelbased forecasters, but their forecasts are aﬀected by some noise
(i.e. they are not perfectly rational). Thus, the idea is to extract such informa
tion from the judgmental forecasts. This method also allows interpreting the
judgmental forecasts through the lens of the model. We illustrate the proposed
methodology with a realtime forecasting exercise, using a strippedtothebone
version of an RBC model and the Survey of Professional Forecasters.
∗
I am greatly indebted to Domenico Giannone, Lucrezia Reichlin and Philippe Weil for
invaluable guidance and advice. I am grateful to Gunter Coenen, David De Antonio Liedo,
Marco Del Negro, Andrea Ferrero, Mark Gertler, Simon Potter, Paulo Santos Monteiro, Argia
Sbordone, Frank Schorfheide, Andrea Tambalotti and all seminar participants at the ECB,
University of Pennsylvania, Federal Reserve Bank of NY for comments and useful discussions.
The usual disclaimer applies.
JEL Classiﬁcation: C32, C53
Keywords: Forecasting, Judgment, Kalman ﬁlter, Real time
1 Introduction
Much of the macroeconometric literature of the last decade has focused on
making microfounded dynamic stochastic general equilibrium (DSGE) models
a viable option for policy analysis and forecasting. Since Smets and Wouters
(2004) have shown that DSGE models estimated with Bayesian techniques seem
to perform quite well in forecasting relative to standard benchmark models such
as VARs, DSGE models are playing a more relevant role in practice and have
indeed become an increasingly important tool for policy analysis and forecasting
at central banks. The attractiveness of using these models to forecast derives
from the fact that they are theoretically consistent and based on ﬁrst princi
ples. The microfoundations imply that the parameters are more likely to be
truly structural and allow interpreting the forecasts in an economically intuitive
way. Moreover, the structural nature of the model allows computing forecasts
conditional on a policy path and allows examining the structural sources of the
forecast errors and their implications for monetary policy.
Despite their growing employment in practice, modelbased forecasts still
seem to be outperformed at short horizons and particularly in the nowcast
1

by forecasts produced by institutional and professional forecasters, such as the
Federal Reserve’s Greenbook (e.g. Sims, 2003) or the Survey of Professional
Forecasters.
2
Where does this advantage come from?
Professional forecasters monitor and analyze literally hundreds of data se
ries, using informal methods to distill information from the available data. Not
only they access what is generally called hard data (data series that are re
leased by the statistical agencies, as for example, GDP, industrial production,
etc), they also gather socalled soft information, i.e. things like the quantity
of goods transported by railway in each month (Bruno and Lupi, 2004) or the
electricity consumed each month (Marchetti and Parigi, 1998). Moreover, pro
fessional forecasters are able to incorporate new data and new information as
it becomes available throughout the month or the quarter and therefore are
can take advantage of the timeliness of this information. Indeed, as Giannone,
Reichlin, Small (2005) point out, timely information seems to play a very im
portant role in improving the quality of the forecasts, and of the nowcasts in
particular. Finally, in their forecasts, professional forecasters account also for
all sheerly judgmental information. A typical example is the adjustments of the
forecasts made in 1999 in order to account for the fear of the Y2K bug. Indeed
this seemed at the time a very important event, but since it had never happened
no model could be expected to encompass it, while the institutional forecasters
could.
1
Nowcasts are estimates of the current value of variables, such as GDP, that are unknown
in the current period due to information lags
2
This view has recently been challenged by Edge, Kiley and Laforte (2006), who suggest
that a richly speciﬁed DSGE models have a forecasting performance that is comparable to
that of the Greenbooks. We believe that their results are very much related to the sample
they choose for their out of sample exercise, i.e. 19962001. We will discuss this in more detail
in the empirical application.
1
Hence, judgement  i.e. information, knowledge and views outside the scope
of a particular model
3
 strongly informs the institutional forecasts. The empir
ical evidence at hand suggests that the ability to account for more, more timely
and ”softer” information is what makes the professional or judgmental (I will
use the two terms interchangeably from now on) forecasts better at nowcasting
and forecasting short horizons.
The introduction of DSGE models in a policy and projection environment
has given rise to a literature on how the model’s outcomes should be combined
with judgmental input and oﬀmodel information. The aim of this paper is to
propose a method for combining judgment  proxied by judgmental forecasts 
with modelbased forecasts, in order to make predictions that are more accurate
but nevertheless disciplined by rigorous economic theory. In particular, we
propose to interpret the judgmental forecasts as an estimate  made with a
diﬀerent, possibly more informative, information set  of the real signal, estimate
which can be ﬁltered in order to extract the information it possibly contains.
We then use the model to generate another forecast that can now also account
for judgmental information and therefore make more accurate predictions. The
new forecast that we generate is a combination of the modelbased forecast and
the judgmental forecast: the Kalman ﬁlter will automatically associate weights
to the two forecasts depending on the information content of the judgmental
forecasts. Moreover, with the methodology we propose, we will be able to
look at the judgmental forecasts through the lens of the model. Storytelling is
diﬃcult when it comes to judgmental forecasts; in our setup we will be able to
interpret the forecasts in light of the model and therefore somehow structuralize
the forecasts. The approach we propose is similar in spirit to the one used by
Coenen, Levin and Wieland (2005) and Koenig (2005) to deal with revisions.
One of the key factors that distinguishes our approach from theirs is that we
consider the judgmental forecasts as optimal forecasts made with a diﬀerent
information set, not a noisy signal of the actual variables.
Recently, other authors have addressed the issue of how to use soft data
and judgment in models. Svensson (2005), Svensson and Tetlow (2005) and
Svensson and Williams (2005) develop diﬀerent frameworks that allow account
ing for centralbank judgment when constructing optimal policy projections of
the target variables and the instrument rate. They show that such monetary
policy may perform substantially better than monetary policy that disregards
judgment and follows a given instrument rule. Our approach diﬀers quite sub
stantially from theirs: our goal is solely to produce modelbased forecasts that
can account for judgmental and oﬀmodel information. Our approach leaves
the structure of the DSGE model unchanged and combines the modelbased
forecasts with the judgmental forecasts.
In a Bayesian framework, Robertson, Tallman and Whiteman (2005) suggest
a minimum relative entropy procedure for imposing moment restrictions on sim
ulated forecasts distributions from a variety of models. This technique involves
changing the initial predictive distribution to a new one that satisﬁes speciﬁc
moment conditions that come from outside of the models, i.e. that are judg
mental. Therefore, minimumentropy methods allow adjusting the full posterior
distribution of the DSGE models to match a given experts’ assessment.
Another approach that can be used to incorporate judgmental and oﬀmodel
3
This deﬁnition appears in Svensson (2005)
2
information is that of Boivin and Giannoni (2005). They build on the factor
model literature, started by Stock and Watson (1999, 2002) and Forni, Hallin,
Lippi and Reichlin (2000), and propose an empirical framework for the estima
tion of DSGE models that exploits the relevant information from a datarich
environment. Their methodology allows using as much information as possi
ble to estimate the structural model and to update the estimates of the state
variables featuring in the model. In this way, soft information can be used sys
tematically to update the current assessment of the state variables as well as of
the shortterm forecast.
The paper is structured as follows. In Section 2 we outline the framework
and describe the proposed methodology; we illustrate how to extract the weights
given to the modelbased and the judgmental forecast; and describe how to
structuralize the professional forecasts. In Section 3 we apply the proposed
methodology on a strippedtothebone version of an RBC model using the
Survey of Professional Forecasters’ forecasts to extract eventual judgmental in
formation. Section 4 presents the results of the empirical application described
in the previous section. In Section 5 we give some conclusions and outline future
extensions of this paper.
2 The Econometric Methodology
2.1 The Framework
Let us consider a general linear(ized) rational expectations model of the form
AE
t
_
z
t+1
Z
t+1
_
= B
_
z
t
Z
t
_
+Cx
t
(1)
x
t
= Mx
t−1
+ε
t
(2)
ε
t
∼ WN(0, Q)
where z
t
is a vector of non predetermined endogenous variables, Z
t
is a vector of
predetermined endogenous variables or of lagged exogenous variables such that
E
t
Z
t+1
= Z
t+1
, x
t
is a vector of exogenous variables following the process (2), Q
is diagonal and A, B, C and M are conformable matrices of coeﬃcients that form
a structural parameter space that we shall call Θ. Linear(ized) general equi
librium models containing additional lags, lagged expectations or expectations
farther in the future can be cast in this form simply by expanding the vectors
z
t
and Z
t
appropriately. Several numerical techniques have been developed to
solve models of the form (1)(2), see, e.g., Blanchard and Kahn (1980), Klein
(1997) and Sims (2000). The solution of the model has the following statespace
representation
S
t
=
_
Z
t
x
t
_
= F(θ)S
t−1
+H(θ)ε
t
(3)
z
t
= N(θ)S
t
, (4)
where F(θ), H(θ) and N(θ) are functions of the underlying structural parame
ters.
To allow for greater generality we augment each equation in (4) with a (pos
sibly serially correlated) residual, or error term, as in Ireland (2004). The model
3
now consists of the transition equation (3) and the new observation equation
y
t
= z
t
+v
t
= N(θ)S
t
+v
t
, (5)
where
v
t+1
= Dv
t
+ξ
t+1
(6)
for all t = 1, 2, ... and ξ
t
is a vector of zero mean, serially uncorrelated inno
vations that is normally distributed with covariance matrix Eξ
t
ξ
′
t
= V and is
uncorrelated with the innovation ε
t
. There are two appealing features in this
setup. First, depending on the way in which the matrices D and V are deﬁned,
the residuals can be interpreted as measurement errors or as additional elements
capturing all of the movements and comovements in the data that the DSGE
models, because of their elegance and parsimony, cannot explain. Second, the
model consisting of (3),(5) and (6) overcomes the wellknown stochastic singu
larity problem, pointed out by Ingram et al.(1994), that often appears in DSGE
models, deriving from the assumption that few fundamental shocks drive all the
dynamics of the model.
Model (3),(5) and (6) can be rewritten more compactly as:
s
t+1
=
_
S
t+1
v
t+1
_
= G(θ)s
t
+ν
t+1
(7)
y
t
= Λ(θ)s
t
(8)
where G(θ) =
_
F(θ) 0
0 D
_
, Λ(θ) =
_
N(θ) I
¸
and ν
t
=
_
H(θ)ε
t
ξ
t
_
is seri
ally uncorrelated, normally distributed with zero mean and covariance matrix
Eν
t
ν
′
t
= Q =
_
H(θ)ΣH(θ)
′
0
0 V
_
.
From now on, for notational simplicity, we will drop the indication that the
matrices G, Λ, etc... are function of the structural parameters θ.
Associated with the state space representation (7)(8) is the innovations
representation
4
ˆ s
tt
= Gˆ s
t−1t−1
+Ku
t
(9)
y
t
= ΛGˆ s
t−1t−1
+u
t
(10)
where ˆ s
tt
= E[s
t
y
t
, y
t−1
, ..., y
0
, ˆ s
0
] is the estimate of the state vector s
t
based
on the observations of y
τ
up to date t, u
t
= y
t
− y
tt−1
= y
t
− E[y
t
y
t−1
, ..., y
0
]
the forecast error made when forecasting y
t
given the observations of y
τ
up to
date t1, K = GPΛ
′
(ΛPΛ
′
)
−1
is the steady state Kalman gain, and P is unique
positive semideﬁnite solution that satisﬁes the algebraic Riccati equation
P = Q+GPG
′
−GPΛ
′
(ΛPΛ
′
)
−1
ΛPG
′
. (11)
P is the steadystate covariance matrix of the innovations s
t
−s
tt−1
given the
information in period t −1. It is useful to rewrite the innovations representation
4
The conditions for the existence of this representation are stated carefully, among other
places, in Anderson, Hansen, McGrattan, and Sargent (1996). The conditions are that that
(F,H,N) be such that iterations on the Riccati equation for Σt = E(xt −ˆ xt)(xt −ˆ xt)
′
converge,
which makes the associated Kalman gain Kt converge to K. Suﬃcient conditions are that
(F, N
′
) is stabilizable and that (F
′
, H
′
) is detectable. See Anderson, Hansen, McGrattan,
and Sargent (1996, page 175) for deﬁnitions of stabilizable and detectable.
4
given by (9) and (10) as the sum of two components: one that is forecastable
given the information set containing all observations of y
τ
up to date t1,
M
t
= Sp{y
t−1
, y
t−2
, ...y
0
}, (12)
and a sequence of innovation terms. That is,
ˆ s
t+ht+h
= G
h+1
ˆ s
t−1t−1
+
h−1
j=0
G
j
Ku
t+h−j
(13)
y
t+h
= ΛG
h+1
ˆ s
t−1t−1
+ ΛG
h
i=1
G
i
Ku
t+h−i
+u
t+h
. (14)
2.2 Model of the Professional Forecasts
The goal of this section is to show how to incorporate judgmental forecasts
into models of the form (7)(8), or equivalently in (9)(10). In order to do so,
we need to somehow formalize these forecasts. More speciﬁcally, we need to
make assumptions on the model and the information set that the professional
forecasters use to generate their forecasters. In this paper, we will make the
assumption that the professional forecasters know the structure of the economy,
i.e. know the model (7)(8) and can use it to forecast, but they access an
information set, I
t
, which is more informative than M
t
.
The assumptions on the information available in each period t are outlined in
Table 1. We assume that the shocks hit the economy at the beginning of period
t. The agents observe the shocks and base their decisions on this knowledge. We
then assume that there are three types of forecasters. The ﬁrst type generates
his forecasts on the basis of the model and the data released by the statistical
agency. The data reporting agency releases data about the current period at the
end of the period, so in period t there is data available only up to t1. Therefore
the information set available to the ﬁrst type of forecaster in time t comprises
exclusively information up to time t1: his information set is M
t
, as deﬁned in
(12). From now on I will call the ﬁrst type of forecaster ’purely’ modelbased
forecaster.
The second type of forecaster also knows the model of the economy and
uses it to make its forecasts, but accesses another information set I
t
, which
comprises M
t
but is possibly more informative. This type represents the pro
fessional forecasters (PF from now on). As highlighted in the Introduction,
their information set is plausibly richer than M
t
: they collect soft, intraperiod
extramodel information, such as monthly electricity consumption or quantity
of goods transported by railway in each month. In what follows, we assume
that professional forecasters have extramodel information only on the current
period’s shock but not on future shocks. In appendix we illustrate the case in
which we assume that they have some extramodel information, not exclusively
on the current shocks, but possibly also on future shocks. Considering the way
the professional forecasts are made and the type of extrainformation they use 
i.e., information about this month electricity consumption, but also information
on future tax raises or on the possible eﬀects of extraordinary events like the
Y2K bug or the World Cup , both cases are credible. However, the possible in
consistency arising from assuming that the Professional Forecasters know more
5
Table 1: Information structure
t
t + 1
shocks hit the economy
agents observe them
PF collect information
and release their forecast
stat. agency releases
data on period t
about the future that the agents, but that the agents don’t incorporate them in
their solution persuaded me to relegate this part to the appendix.
Finally, the third type of forecaster will use the method I propose: I will
deﬁne the forecasts they produce augmented forecasts, since the use the PF to
augment their information set.
Let us formalize rigorously the second type of forecaster, the professional
forecasters. PF are able to obtain information on the current period’s shocks.
At any given time T their information set I
T
comprises M
T
but is such that,
for h = 1, 2, 3, 4
E[u
T
I
′
T
] = 0, (15)
u
T+h
⊥ I
T
The forecasters know the model of the economy and produce their forecasts
as linear least squares forecasts given their information set plus an error term.
5
Notice that if we just modelled the PF’s forecasts as a noisy version of the actual
signal, i.e.
s
t+1+h
= Gs
t+h
+ν
t+1+h
y
PF
t+h
= Λs
t+h
+e
t+h
this formulation would be totally inconsistent with our assumptions. First of all,
the e
t+h
’s would not be forecast errors  since they are not orthogonal to the past
 and therefore the PF would not be forecasts, but rather noisy signals of actual
future variables. This would mean that we are assuming that the PF have crystal
balls through which they see the future. This is neither realistic, nor model
consistent. Instead we want to model the output of the professional forecasters
as forecasts, as shown in detail below. Sargent (1989) ﬁrst distinguished among
these two possible modelizations, discussing two models of a statistical agency
that is collecting and reporting observations on a dynamical linear stochastic
economy.
For t = 1, 2, ..., T − 1 both purely modelbased forecasters and professional
forecasters are going to construct the innovations representation (9) (10). For
t T, professional forecasters will report the following: for h=0,
ˆ s
TT
= Gˆ s
T−1T−1
+Ku
T
(16)
y
PF
t
= E[y
T
I
T
] +η
TT
(17)
where E[y
T
I
T
] = ΛGˆ s
T−1T−1
+ E[u
T
I
T
] is the least squares forecast made
by the PF with their information set I
t
and η
TT
is the measurement error (the
5
For a recent review of the debate on the rationality (unbiasedness and eﬃciency) of
macroeconomic forecasts see Schuh(2001)
6
typo) made by the professional forecasters in T while reporting their forecast.
This can be cast in statespace form as follows. For h=0,
ˆ s
TT
= Gˆ s
T−1T−1
+Ku
T
(18)
y
PF
t
= ΛGˆ s
T−1T−1
+w
TT
,
where
w
TT
= E[u
T
I
T
] +η
TT
,
and
_
Ku
T
w
TT
_
∼ WN
_
0 , Σ
0
_
with
Σ
0
=
_
KE(u
T
u
′
T
)K
′
KE(u
T
w
′
TT
)
E(w
TT
u
′
T
)K
′
Ew
TT
w
′
TT
_
.
For h = 1, 2, 3, 4 we have
ˆ s
T+hT+h
= Gˆ s
T+h−1T+h−1
+Ku
T+h
y
PF
T+h
= ΛGˆ s
T−1T−1
+ ΛG
h
KE[u
T
I
T
] +η
T+hT
, (19)
= ΛGˆ s
T+h−1T+h−1
+w
T+hT
where
w
T+hT
= −Λ
h−1
j=1
G
j
Ku
T+h−j
− ΛG
h
K(u
T
−E[u
T
I
T
]) +η
T+hT
(20)
_
Ku
T+h
w
T+hT
_
∼ WN
_
0 , Σ
h
_
and
Σ
h
=
_
KE(u
T+h
u
′
T+h
)K
′
0
0 Ew
T+hT
w
′
T+hT
_
.
As above, η
T+hT
is the measurement error (the typo) made by the professional
forecasters in T while reporting their forecast for period T +h. We assume that
η
sT
⊥ u
τ
, for any s and τ, and that η
T+hT
⊥ E(u
T
I
T
) for h = 0, 1, 2, 3, 4.
Clearly, the form of the matrices Σ
h
depends crucially on the assumptions we
made on the information set of the professional forecasters. Here Σ
h
will be
block diagonal for all h = 0, since we have modeled the PF as having some
extrainformation only on the current shock. In appendix we present the case
in which the PF can have extrainformation up to 4 periods ahead. In that
case, the Σ
h
will not necessarily be block diagonal: the value of the oﬀdiagonal
terms will depend on the information on future shocks actually carried by the
professional forecasts.
We want to extract the optimal linear projection of y
T+h
given I
T
, i.e
E[y
T+h
I
T
], from y
PF
T+h
. To understand how the augmented forecasts are built,
consider the fact that models (18) and (19) can be seen as a new state space
model in which the new observables are the professional forecasts. By ﬁltering
model (18)(19) with a timevarying Kalman smoother one can obtain optimal
7
estimates of the state variables s
+
T+hIT
that comprise the extrainformation con
tained in the professional forecasts and employ it optimally within the model.
6
Having ˆ s
+
T+hIT
for we can ﬁnally construct the augmented forecasts y
+
T+hIT
y
+
T+hIT
= Λˆ s
+
T+hIT
(21)
that incorporates optimally in the modelbased framework the judgemental in
formation coming from the conjunctural forecasters.
In order to implement the Kalman ﬁlter, we need to be able to recover the
exact form of the covariance matrices Σ
0
and Σ
h
for h=1,2,3,4. To recover all
the elements of Σ
0
, we proceed as follows. First of all let us point out that,
since η
TT
⊥ u
T
by assumption, then
E(u
T
w
′
TT
) = E[u
T
E(u
T
I
T
)
′
] +E[u
T
η
′
TT
] = E[u
T
E(u
T
I
T
)
′
].
The element on the right hand side of this equation can be calculated as follows.
First notice that the following equality holds
E[u
T
y
PF′
T
] = E[u
T
(ΛGˆ s
T−1T−1
)
′
] +E[u
T
E(u
T
I
T
)
′
] +E[u
T
η
′
TT
]
= E[u
T
E(u
T
I
t
)′],
The second equality derives from the fact that u
T
⊥ ΛGˆ s
T−1T−1
by construc
tion and that η
T
⊥ u
T
by assumption. Finally, as the series for the u
T
’s are
readily available via the Kalman ﬁlter, we are able to recover empirically the
value of E[u
T
y
PF′
T
], and therefore of E[u
T
E(u
T
I
T
)
′
]. Moreover, notice that,
since E(u
T
I
T
) is a linear projection of u
T
on Sp(I
T
), the space spanned by I
T
,
then
u
T
= E(u
T
I
T
) +µ
T
where µ
T
is orthogonal to the space spanned by I
T
. Therefore,
E[u
T
E(u
T
I
T
)
′
] = E[E(u
T
I
T
)E(u
T
I
T
)
′
], (22)
i.e. we have determined also the variance of the expected value of the current
shock given the information set I
T
, E(u
T
I
T
), and we showed it is equal to the
covariance among the shock and its expected value.
In order to recover
E[w
TT
w
′
TT
] = E[E(u
T
I
T
)E(u
T
I
T
)
′
] +E[η
TT
η
′
TT
], (23)
(the equality holds because η
TT
⊥ E(u
T
I
T
) by assumption), we will ﬁrst deﬁne
the forecasters’ forecast error as
e
T
= y
T
−y
PF
T
= u
T
−w
TT
.
= u
T
−E(u
T
I
T
) −η
TT
.
Its variance, whose value can be recovered from sample data, is:
E(e
T
e
′
T
) = E(u
T
u
′
T
) −E[u
T
E(u
T
I
T
)
′
] + (24)
− E[u
T
E(u
T
I
T
)
′
]
′
+E[w
TT
w
′
TT
]
6
The notation ˆ s
+
T+hI
T
mean the estimate of the state s
T+h
made in T give the augmented
information set I
T
.
8
E(u
T
u
′
T
) can be obtained by the Kalman ﬁlter on the system of equations (7)
and (8) as follows
E(u
T
u
′
T
) = ΛPΛ
′
Where P is the solution of the Riccati equation deﬁned in (11). Using the above
equations, we can ﬁnally recover E[w
TT
w
′
TT
] and we therefore have pinned
down all the values of the matrix Σ
0
.
Moreover, it is possible to recover the value of the variance of the typo
E[η
TT
η
′
TT
]; from (23) and (24) we infer the following equation, which can be
reshuﬄed to obtain E[η
TT
η
′
TT
].
E(e
T
e
′
T
) = E(u
T
u
′
T
) −E[E(u
T
I
T
)E(u
T
I
T
)
′
] +Eη
TT
η
′
TT
. (25)
The procedure to recover Σ
h
, for h=1,2,3,4 is very similar.
2.3 Modelconsistent weights for forecast pooling
Now we discuss how the timevarying Kalman smoother we use to generate
the ”judgmentaugmented” forecasts actually combines the judgmental fore
casts with the purely modelbased forecasts. Let us turn back to the system of
equations we smooth to obtain the augmented forecasts, i.e.
ˆ s
T+hT+h
= Gˆ s
T+h−1T+h−1
+Ku
T+h
(26)
y
PF
T+h
= ΛGˆ s
T+h−1T+h−1
+w
T+hT
,
where for h=0
w
TT
= E[u
T
I
T
] +η
TT
(27)
and for h=1,2,3,4,
w
T+hT
= −Λ
h−1
j=1
G
j
Ku
T+h−j
− ΛG
h
K(u
T
−E[u
T
I
T
]) +η
T+hT
and
_
Ku
T+h
w
T+hT
_
∼ WN
_
0 ,
_
Σ
h
11
Σ
h
12
Σ
h
21
Σ
h
22
_ _
.
In period T we ﬁlter (26) and we generate a new innovations representation,
initialized with ˆ s
+
T−1IT
= ˆ s
T−1T−1
ˆ s
+
T+hIT
= Gˆ s
+
T+h−1IT
+K
2h
a
T+h
(28)
y
PF
T+h
= ΛGˆ s
T+h−1IT
+a
T+h
,
where
ˆ s
+
T+hIT
= E(ˆ s
+
T+hIT
y
PF
T+h
, y
PF
T+h−1
, ..., y
PF
T
, y
T−1
, ....y
1
, ˆ s
T−1T−1
),
a
T+h
= y
PF
T+h
−E
_
y
PF
T+h
y
PF
T+h
, y
PF
T+h−1
, ..., y
PF
T
, y
T−1
, ....y
1
, ˆ s
T−1T−1
¸
.
The Kalman gain K
2h
is timevarying and takes the form: for h=0,
K
20
= Σ
0
12
Σ
0
22
−1
9
and
K
2h
= (GP
hh−1
G
′
Λ
′
+ Σ
h
12
)(ΛGP
hh−1
G
′
Λ
′
+ Σ
h
22
)
−1
(29)
otherwise. In order to understand how the Kalman ﬁlter is combining the purely
modelbased forecast and the judgemental forecast, let us consider the case of
the nowcast:
ˆ s
+
TIT
= (I − K
20
)Gˆ s
T−1T−1
+K
20
y
PF
T
, (30)
where ˆ s
+
T−1IT
= ˆ s
T−1T−1
and Gˆ s
T−1T−1
is the purely modelbased forecast of
the state at time T. Therefore the augmented nowcast for y
T
is
y
+
TIT
= Λˆ s
+
TIT
= Λ(I −K
20
)Gˆ s
T−1T−1
+ ΛK
20
y
PF
T+h
(31)
Since K
20
has the form described in equation (29), the weight given to the
judgmental forecast y
PF
T
is directly proportional to Σ
0
12
, i.e. to the correlation
among u
T
and w
TT
, and inversely proportional to the variance of w
TT
Σ
0
22
.
That is, the more the professional forecasters are able to gather information on
the current period’s shock, the more the Kalman ﬁlter will use the professional
forecasts when combining the two forecasts, but it will downweigh them if the
variance of their forecast errors is too large. Similarly for higher horizons.
The assumption that the professional forecasters have information only on
the current period shocks is crucial in determining the negligible weights as
signed to the professional forecasts for h = 0. In appendix we present some
results for the case in which the PF are assumed to have some oﬀmodel infor
mation on current and future shocks up to four periods ahead. In that case, the
weight associated to the judgmental forecast will be sizeable depending on the
informational content, of course at all horizons considered.
2.4 Using the model to interpret judgemental forecasts
Another interesting aspect of this procedure is that it also allows to see the
judgmental forecasts through the lens of the model. Storytelling is diﬃcult when
it comes to judgmental forecasts; in our setup we will be able to interpret the
forecasts in light of the model and therefore somehow structuralize the forecasts.
Let us have another look at model (28). The element K
20
a
T
, with
a
T
= y
PF
T
−E
_
y
PF
T
ˆ s
+
T−1IT
_
,
is the estimate of the current period’s structural shocks made by the professional
forecasters with their information set I
T
. That is, these are the structural shocks
the professional forecasters perceive given their information set; they do not
necessarily coincide with the ”real” structural shocks. Indeed
K
20
a
T
= E[Ku
T
I
T
].
Moreover, given this information, we can also derive [u
T
I
T
] as follows:
E[u
T
I
T
] = (K
′
K)
−1
K
′
E[Ku
T
I
T
]
= (K
′
K)
−1
K
′
(K
20
a
T
).
10
Now, recalling that the professional’s forecasts are represented by equations
(18)(19), we can evaluate exactly how much of the forecast is due to extra
information on the current shocks and how much of it is instead due to mea
surement errors. Moreover, we can construct diﬀerent scenarios  e.g. assume
that the professional forecasters have extra information only on certain types
of shocks but not on others  and compare them among each other. As we will
show in Section 4, this can be very informative.
3 An application
In order to illustrate the methodology proposed in the previous section, we will
apply it to a strippedtothebone version of an RBC model with unit root tech
nology. Let us brieﬂy outline the model. A representative consumer has pref
erences deﬁned over consumption C
t
during each period t=1,2,..., as described
by the expected utility function
E
∞
i=1
β
t
ln(C
t
) (32)
where the discount factor satisﬁes 0 < β < 1. In this economy there is only one
ﬁnal good Y
t
and it is produced using capital K
t
and labor N
t
according to the
constantreturnstoscale technology
Y
t
= (A
t
N
t
)
α
K
(1−α)
t
, (33)
where 0 < α < 1 and A
t
is a laboraugmenting technological change process. We
will assume that labor is supplied inelastically and that there is no population
growth: in such case N
t
is constant for any t and we can normalize it to 1, i.e.
N
t
= 1 for any t. We there for rewrite equation (33) as
Y
t
= A
α
t
K
(1−α)
t
. (34)
The logs of the technology shock A
t
follow a ﬁrst order autoregressive process
of the form:
ln(A
t
) = ln(A
t−1
) +γ +ε
t
, (35)
where the innovation ε
t
is serially uncorrelated and normally distributed with
mean zero and standard deviation σ.
In each period the representative agent decides how much of output Y
t
to
consume and how much to invest, subject to the resource constraint
Y
t
= C
t
+I
t
. (36)
By investing I
t
units of output in period t, the agent increased the capital stock
K
t+1
available in period t + 1 according to
K
t+1
= (1 −δ)K
t
+I
t
, (37)
where δ is the depreciation rate and satisﬁes 0 < δ < 1.
The standard method of analyzing models with steady state growth is to
transform the economy into a stationary one where the dynamics are more
11
tractable. The transformation, which is shown in great detail in King, Plosser
and Rebelo (1988a, 1988b), involves dividing all variables in the system by the
growth component, which in our setting corresponds to A
t
.The stationarized
model is very similar to the untransformed model, with some exceptions that
we will highlight in what follows.
Let us start deﬁning E[
At+1
At
] = γ
A
, y
t
=
Yt
At
, c
t
=
Ct
At
, k
t
=
Kt
At
and so on.
The stationary version of the model deﬁned by equations (32), (34), (35), (36)
and (37) is
max E
∞
i=1
β
t
ln(C
t
) (38)
subject to
y
t
= k
(1−α)
t
, (39)
y
t
= c
t
+i
t
, (40)
and
(γ
A
e
εt+1
)k
t+1
= k
t
+i
t
. (41)
The ﬁrst order conditions for this problem are
R
t
= [(1 −α)k
−α
t
+ (1 −δ)] (42)
where R
t
is the gross rate of return of capital in t and
c
−1
t
= βE
t
[
c
−1
t+1
R
t+1
e
εt+1
]. (43)
Equation (43) equates the marginal rate of substitution to the marginal product
of capital for all t=1,2,..... Equations (39)(43) form a system of ﬁve nonlinear
stochastic diﬀerence equations in the model’s ﬁve variables y
t
, c
t
, k
t
, i
t
, r
t
. A
linear approximation of this system can be derived loglinearizing it around its
steady state, as shown in Appendix A. The system of equations (39)(43) has
an approximate solution of the form:
x
t+1
=
_
k
t+1
k
t
_
= G(θ)x
t
+
_
1
0
_
ε
t+1
(44)
z
t
=
_
_
∆y
t
c
t
−y
t
i
t
−y
t
_
_
= Λ(θ)x
t
, (45)
where the expressions for the matrices G(θ) and Λ(θ) in terms of the structural
parameters θ can be found in Appendix A. A particular feature of this model
is that one shock  the aggregate technology shock ε
t
 drives all business cycle
ﬂuctuations.
Any equation of the form (45) can be rewritten in terms of diﬀerences, simply
by premultiplying everything with a suitable matrix:
y
t
=
_
_
∆y
t
∆c
t
∆i
t
_
_
=
_
_
1 0 0
1 1 −L 0
1 0 1 −L
_
_
z
t
=
_
_
1 0 0
1 1 −L 0
1 0 1 −L
_
_
Λx
t
. (46)
In general, equation (44) will change conformably to take into account that (46)
is now deﬁned in terms of current and lagged states s
t
. In this speciﬁc case the
12
state equation will not need to be rewritten, since it already contains the lagged
state.
We also augment each equation in (46) with a serially correlated residual,
or error term, so that the model now consists of (44),
y
t
=
_
_
∆y
t
∆c
t
∆i
t
_
_
= Λ
∗
x
t
+v
t
, (47)
where Λ
∗
=
_
_
1 0 0
1 1 −L 0
1 0 1 −L
_
_
Λ and
v
t+1
= Dv
t
+ξ
t+1
(48)
for all t = 1, 2, ... and ξ
t
is a vector of zero mean, serially uncorrelated inno
vations that is normally distributed with covariance matrix Eξ
t
ξ
′
t
= V and is
uncorrelated with the innovation ε
t
.
We have considered three possible speciﬁcations of v
t
, the ﬁrst one assumes
v
t
is white noise, the second one allows for autocorrelation (but not serial cor
relation) in v
t
, while in the third speciﬁcation we allow v
t
to be a VAR(1). The
results were very similar for each of the three speciﬁcations, therefore we will
present results only for the model with autocorrelated but not crosscorrelated
residuals, which is our best performing one. All results are robust to the speci
ﬁcation of the measurement error v
t
.
We calibrated all the model’s parameters, with the exclusion of the variance
of the technological shock and and the parameters describing the measurement
errors, using values from King, Plosser and Rebelo (1988a, 1988b) and Ireland
(2004). All parameters are reported in Table 7 in Appendix A. We estimate the
variance of technological shock and the parameters of the measurement error
only once, using maximum likelihood, and then we take them as calibrated. The
covariance matrices Σ
i
in (20) are instead estimated using a rolling window.
For the estimation and the forecasting exercise we use realtime quarterly
data for real GDP and real consumption for the US from the Philadelphia Fed
realtime dataset. The dataset covers the period 1947 through 2005 and the
ﬁrst available vintage is 1965:Q4. Due to the unavailability of realtime data
on population, we have made the somewhat heroic assumption that the popu
lation has been constant throughout the period considered. In what follows, we
will perform an outofsample realtime forecasting exercise, using as evaluation
sample the period 19872004.
We use the Survey of Professional Forecasters (SPF) as example of judgmen
tal forecast. The Survey of Professional Forecasters, conducted by the Federal
Reserve Bank of Philadelphia, is based on many individual commercial forecasts,
which are then grouped in mean or median forecasts. The Survey is conducted
near the end of the second month of each quarter and publishes forecasts for
the current quarter and the next 4 quarters in the future. We consider a sample
period going from the second quarter of 1987 to the fourth quarter of 2004.
An important datarelated issue regards the appropriate ”actual” series to
use when comparing the various forecasts. Because macroeconomics data is
continuously revised, we need to make a choice about which revision to use.
Following Romer and Romer (2000), we choose to use the second revision, i.e.
13
Table 2: Relative MSFE of forecasts of GDP growth with respect to naive
benchmark. Asterisks denote forecasts that are statistically more accurate than
he naive benchmark at 1% (∗ ∗ ∗), 5%(∗∗) and 10%(∗)
relative to constant growth
EVALUATION SAMPLE: 1987:2  2004:4
Forecast horizon RBC SPF
Q0 0.86 0.68 ∗∗
Q1 0.88 ∗∗ 0.77
Q2 0.92 0.88
Q3 0.94 ∗∗ 0.94
Q4 0.97 0.95
relative to constant growth
EVALUATION SAMPLE: 1996:2  2004:4
Forecast horizon RBC SPF
Q0 0.81∗∗ 0.80∗∗
Q1 0.85 ∗ ∗ ∗ 0.94
Q2 0.84∗ ∗ ∗ 1.01
Q3 0.84 ∗ ∗ ∗ 1.01
Q4 0.87 ∗∗ 1.01
the one done at the end of the subsequent quarter. The second revision seems to
be the appropriate series to use because it is based on relatively complete data,
but it is still roughly contemporaneous with the forecasts we are analyzing. This
series does not include rebenchmarking and deﬁnitional changes that occur in
the annual and quinquennial revisions and should, therefore, be conceptually
similar to the series being forecast.
Let us now present some forecasting results that will highlight the motivation
of this paper. We will compare all the forecasts of GDP and Consumption
growth to a naive benchmark: the constant growth model, i.e. random walk in
levels. Tables 2 and 3 report the performance in outofsample forecasting of
the forecasts generated with the RBC model, speciﬁed as having autocorrelated,
but not serially correlated residuals (i.e. D is diagonal) and of the SPF relative
to the naive benchmark, for GDP growth and consumption growth respectively.
Each table is divided in two subtables which consider the full sample period
1987:Q22004:Q4 and the subsample 1996:Q22004:Q4. In the ﬁrst column of
each table one can see the ratio of the mean square error of the purely model
based forecast (RBC) against the mean square error of the naive benchmark,
while the second column reports the ratio of the mean square error of the SPF
against the mean square error of the naive benchmark. Asterisks indicate a
rejection of the test of equal predictive accuracy between each forecast and the
naive benchmark.
7
7
Following Romer and Romer (2000), our inference is based on the regression: (z
ht
−ˆ z
m
ht
)
2
−
(z
ht
−ˆ z
naive
ht
)
2
= c+u
ht
where z is the variable to be forecasted at horizon h using model m .
The estimate of c is simply the diﬀerence between forecastm and a Naive model MSFEs, and
the standard error is corrected for heteroskedasticity and serial correlation over h1 months.
This testing procedure falls in the DieboldMarianoWest framework, and Giacomini and
White (2006, Section 3.2, see in particular Comment 4) show that by using rolling window
14
Table 3: Relative MSFE of forecasts of Consumption growth with respect to
naive benchmark. Asterisks denote forecasts that are statistically more accurate
than he naive benchmark at 1% (∗ ∗ ∗), 5%(∗∗) and 10%(∗)
relative to constant growth
EVALUATION SAMPLE: 1987:2  2004:4
Forecast horizon RBC SPF
Q0 0.97 0.70
Q1 0.92 ∗∗ 0.89
Q2 0.94 ∗∗ 0.94
Q3 0.94 ∗ 0.99
Q4 0.94 ∗ 1.00
relative to constant growth
EVALUATION SAMPLE: 1996:2  2004:4
Forecast horizon RBC SPF
Q0 0.89 ∗∗ 0.96
Q1 0.89 ∗∗ 1.06
Q2 0.88 ∗∗ 1.03
Q3 0.87 ∗∗ 1.03
Q4 0.86 ∗∗ 1.08
Over the full sample period 1987:Q22004:Q4, SPF forecasts of GDP growth
outperform the modelbased forecasts at all horizons, but their advantage re
duces as the forecasting horizon grows. Considering the subsample 1996:Q2
2004:Q4, we notice the model seems to perform better at all horizons excluding
the nowcast. The modelbased forecasts of consumption growth are poorer in
the very short run than in the medium term (4 quarters ahead). Over the full
sample period 1987:Q22004:Q4, SPF forecasts of consumption growth outper
form the modelbased forecasts in the nowcast and 1 period ahead, and never
in the subsample 1996:Q22004:Q4. The two results on consumption growth
point to the addedvalue of enforcing accounting identities, especially in the
mediumterm.
Figure 1 reports the smoothed forecast errors for the nowcast of GDP (cen
tered moving average 4 quarters on each side) over the full sample period. Model
and judgement seem to contain information that is useful in diﬀerent points in
time: in some periods the model does better than the benchmark and in oth
ers it does worse, and the same holds for SPF. Particularly during recessions,
judgement seems to fare much better than the model (a similar result can be
found in Giannone, Reichlin and Sala, 2005).
In the last part of the sample, and particularly between 1996 and 2000,
judgmental forecasts have performed quite bad, while the model seems to fare
acceptably. A similar result can be found in Edge, Kiley and Laforte (2006),
where the authors compare the forecasting performance of a richly speciﬁed
DSGE models to the Greenbooks in the sample 19962000 and ﬁnd that the
model’s forecasting performance is comparable to the one of the Greenbooks.
estimators, as we do here, the limiting behavior of this type of tests is standard, and therefore
standard asymptotic theory can be used for inference on the diﬀerence in predictive accuracy.
15
Figure 1: NOWCASTS: Smoothed Square forecast errors
Q1−90 Q3−92 Q1−95 Q3−97 Q1−00 Q3−02
0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
1.8
NOWCAST: Smoothed Square Forecast errors (relative to constant growth)
SPF
RBC
AUG
In fact, not only the richly speciﬁed DSGE model that Edge, Kiley and Laforte
(2006) propose, but also the toy model I use seem to fare better than judgment
in that period.
8
Furthermore, Figure 1 highlights how performances of the purely model
based and the judgmental forecasts do not seem positively correlated, but rather
they seem to somehow counterbalance each other. Therefore, it is plausible that
combining the two forecasts can be advantageous. The method I propose indeed
allows for a modelbased and hence interpretable averaging of the two forecasts.
In the following section we present the results obtained when applying the
methodology proposed in Section 2 to the toy model presented above and using
the SPF forecasts.
4 Results of the Forecasting Exercise
The main goal of this section is to present modelbased forecasts for real GDP
and real consumption that can account for the judgmental information contained
in the SPF forecasts. We will compare their performance on the basis of their
mean square forecast error, i.e. deeming better a forecast with a smaller MSE.
We will perform outofsample forecasting exercises on the full evaluation sample
1987:Q22004:Q4 and on the subsample 1996:Q22004:Q4.
8
It is worth stressing that the naive benchmark model Edge, Kiley and Laforte use for
GDP growth, i.e. a random walk, is not model compatible.
16
Table 4: Relative MSFE of forecasts of GDP growth with respect to naive
benchmark. Asterisks denote forecasts that are statistically more accurate than
he naive benchmark at 1% (∗ ∗ ∗), 5%(∗∗) and 10%(∗)
relative to constant growth
EVALUATION SAMPLE: 1987:2  2004:4
Forecast horizon RBC SPF AUG
Q0 0.86 0.68 ∗∗ 0.68 ∗∗
Q1 0.88 ∗∗ 0.77 0.84 ∗∗
Q2 0.92 0.88 0.91
Q3 0.94 ∗∗ 0.94 0.94 ∗
Q4 0.97 0.95 0.97
relative to constant growth
EVALUATION SAMPLE: 1996:2  2004:4
Forecast horizon RBC SPF AUG
Q0 0.81 ∗∗ 0.80 ∗∗ 0.75 ∗ ∗ ∗
Q1 0.85 ∗ ∗ ∗ 0.94 0.85 ∗∗
Q2 0.84 ∗ ∗ ∗ 1.01 0.83 ∗ ∗ ∗
Q3 0.84 ∗ ∗ ∗ 1.01 0.85 ∗ ∗ ∗
Q4 0.87 ∗∗ 1.01 0.88 ∗∗
The purely modelbased forecasts are generated with model (44), (47) and
(48), with v
t
is autocorrelated, but not serially correlated, i.e. D is diagonal.
The results however are robust to the speciﬁcation of the measurement error. We
construct the augmented forecasts following the procedure described in previous
Section  assuming that the professional forecasters have extrainformation on
the current period. In appendix we present some results for the case in which
we assume that the professional forecasters have some extrainformation also on
future shocks.
Table 8 reports the mean square forecast error (MSE) of the purely model
based forecasts, the SPF and the augmented forecasts relative to the naive model
(random walk in levels), when forecasting GDP growth.
In all samples considered, the augmented forecasts outperform quite consis
tently the modelbased forecasts. The greatest gain is achieved in the nowcast,
since that is where the judgmental forecasts help. For higher horizons, the
weight associated to the judgmental forecast is very small, and therefore the
augmented forecast is very similar to the purely modelbased one. This re
sult crucially depends on the assumptions made on the information set of the
professional forecasters.
Table 9 reports the weights that the ﬁlter gives to the judgmental forecasts
of real GDP growth and real consumption growth when constructing the aug
mented forecasts. In particular, the ﬁrst column gives the weights associated to
the SPF forecasts of real GDP growth when constructing the augmented forecast
of real GDP growth; the second column instead gives the weights associated to
the SPF forecasts of real consumption growth when constructing the augmented
forecast of real consumption growth. Clearly, since the Professional Forecasters
are assumed to have information only on current shock, the judgmental forecast
receives a signiﬁcant weight in the construction of the augmented forecasts only
17
Table 5: Weight given to the SPF forecast of ∆GDP and ∆CONS in the
augmented forecast . 2004:4
∆GDP ∆CONS
nowcast 0.3941 0.2574
1 step ahead 0.1038 0.0020
2 step ahead 0.0341 0.0018
3 step ahead 0.0185 0.0019
4 step ahead 0.0078 0.0014
in the nowcast. The reason is that the weights given by the Kalman ﬁlter to
the SPF tend to zero as their information content goes to zero. Finally, notice
that the weight associated to the SPF forecast of consumption growth is smaller
than those associated to the SPF forecast of GDP growth. This indicates that
the extramodel information accessed by the professional forecasters is more
informative on GDP than on consumption.
Let us brieﬂy compare these results with the ones reported in appendix for
the case in which the SPF are assumed to have extra some extrainformation up
to 4 periods ahead. In the latter case, the augmented forecasts beat both the
SPF forecasts and the modelbased forecasts at all horizons over the full eval
uation sample 1987:Q22004:Q4. Moreover, in the full sample, the augmented
forecasts built under the assumption that the PF have some extrainformation
on the current and future shocks perform much better that the augmented fore
cast obtained by assuming that the PF have extramodel information only on
the current shock. Interestingly, the opposite is true in the subsample 1996:Q2
2004:Q4. This indicates that the SPF are better modeled by assuming that they
have extramodel information on current and future shocks in the full evaluation
sample, while in the subsample 1996:Q22004:Q4 they are better represented by
assuming they have extrainformation only on the current shock. The above re
sults point out to the decline in predictability highlight in D’Agostino, Giannone
and Surico (2006).
Table 6 reports the mean square forecast error (MSE) of the purely model
based forecasts, the SPF and the augmented forecasts relative to the naive model
(random walk in levels), when forecasting consumption growth.
Figures (2)(8) report the nowcasts and the forecasts, plotted against the
data. In each ﬁgure, the green/grey solid line represents the actual data; the
solid line with circles is the series of the SPF forecasts, the dotted line portrays
the purely modelbased forecasts; the dashed line corresponds to the augmented
forecasts. The latter forecasts becomes more and more similar to the model
based forecast as we increase the forecasting horizon. This is not surprising,
since we have assumed that the SPF have extra information only regarding the
current period; thus the informational value of their forecast is much lower at
horizons higher than the nowcast and will therefore be given less weight. This
would clearly not be the case under diﬀerent assumptions for the information
18
Table 6: Relative MSFE of forecasts of Consumption growth with respect to
naive benchmark. Asterisks denote forecasts that are statistically more accurate
than he naive benchmark at 1% (∗ ∗ ∗), 5%(∗∗) and 10%(∗)
relative to constant growth
EVALUATION SAMPLE: 1987:2  2004:4
Forecast horizon RBC SPF AUG
Q0 0.97 0.70 0.84
Q1 0.92 ∗∗ 0.89 0.91∗∗
Q2 0.94 ∗∗ 0.94 0.93∗∗
Q3 0.94 ∗ 0.99 0.93 ∗
Q4 0.94 ∗ 1.00 0.93 ∗
relative to constant growth
EVALUATION SAMPLE: 1996:2  2004:4
Forecast horizon RBC SPF AUG
Q0 0.89 ∗∗ 0.96 0.88∗
Q1 0.89 ∗∗ 1.06 0.89 ∗∗
Q2 0.88 ∗∗ 1.03 0.87 ∗∗
Q3 0.87 ∗∗ 1.03 0.87 ∗∗
Q4 0.86 ∗∗ 1.08 0.86∗∗
set of the SPF. If, as in appendix, we assumed that they have some extramodel
information not only on current shocks, but also on future ones, then the SPF
could be given sizeable weights (see Tables in Appendix) also at horizons higher
than than the current quarter.
Finally, we report few illustrative results on the ”structural” analysis of
the SPF forecasts. First of all, we can make some considerations regarding
information content of the SPF forecasts, according to the extent that they will
reduce uncertainty surrounding the estimation problem faced by the agents (as
in Coenen, Levin and Wieland, 2005). Figure 9 plots the nowcasts and their
conﬁdence bands
9
. The black solid line is the purely modelbased nowcast, while
the black dasheddotted lines are its conﬁdence bands. Similarly, the red lines
represent the augmented augmented nowcast (and its conﬁdence bands). It is
clear from the picture that the conﬁdence bands for the augmented nowcast
are smaller than the ones of the purely modelbased nowcast. This means that
there is less uncertainty when nowcasting using also the SPF, and therefore that
the SPF are indeed somewhat informative on the current state of the economy.
Similarly for higher horizons.
Second, we can infer the type of shocks the Professional Forecasters saw
when performing their forecasts. Figure 10 reports actual real GDP growth and
the diﬀerent shocks they perceived while doing their nowcast. The dotted line
represented actual real GDP growth. The blue line is the technological shock
as perceived by the SPF, the green line is the measurement error they perceive
on GDP growth, while the red line is the measurement error they perceive on
real consumption growth. In this very simple model the only shocks diﬀerent
9
While constructing these conﬁdence bands, we consider only the uncertainty in the esti
mation of the state and assume that the parameters, estimated previously, are known
19
Q1−90 Q1−95 Q1−00
−0.5
0
0.5
1
1.5
Figure 2: Nowcast for GDP  EVALUATION SAMPLE 1987:22004:4.
The green/grey solid line represents the actual data; the solid line with circles is the
series of the SPF forecasts, the dotted line portrays the fully modelbased forecasts;
the dashed line corresponds to the the augmented forecasts
from the technological shocks are the shocks to the residual term that is meant
to describe all the dynamics that is not captured by the RBC model. However
the proposed procedure would, when applied to a more elaborate DSGE model,
allow for understanding the perception of the diﬀerent shock  monetary, ﬁscal,
etc..  that the SPF have.
Finally, interesting counterfactual exercises can be done in this setup. Figure
11 reports, for the period 1987:21994:4, the the actual SPF nowcast and the
nowcasts they would have done if they had only parts of the information they
actually have. In particular, the black solid line represents actual real GDP
growth; the starred line is the actual SPF nowcast. The line marked with
plus signs is the nowcast the SPF would have made if they would have had no
information on any of the shocks, the line with squares plots the nowcast they
would have made if they had extrainformation only on the technological shock,
while the line with diamonds is the nowcast they would have made if they had
extrainformation only on the measurement error shocks.
From this ﬁgure we can extract some very relevant information. For exam
ple, consider the fall in actual GDP growth in the last quarter of 1989. The
actual SPF nowcast for that quarter is very close to the actual ﬁgure and, in
terestingly, coincides with the nowcast the SPF would have made if they only
saw the technology shock (green line). On the other hand, the nowcast the SPF
would have made if they only saw the ”measurement error” shock (red line)
coincides with the one they would have made if the had no extrainformation
at all (blue line). This means that this speciﬁc movement in GDP growth was
most probably due to a technological shock. Similarly, in the third quarter of
20
Q1−88 Q1−90 Q1−92 Q1−94 Q1−96
−0.5
0
0.5
1
1.5
Figure 3: Nowcast for GDP  PERIOD I 1987:21996:1. The green/grey
solid line represents the actual data; the solid line with circles is the series of the SPF
forecasts, the dotted line portrays the fully modelbased forecasts; the dashed line
corresponds to the the augmented forecasts
Q1−98 Q1−00 Q1−02 Q1−04
−0.4
−0.2
0
0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
Figure 4: Nowcast for GDP  PERIOD II 1996:22004:4. The green/grey
solid line represents the actual data; the solid line with circles is the series of the SPF
forecasts, the dotted line portrays the fully modelbased forecasts; the dashed line
corresponds to the the augmented forecasts
21
Q1−90 Q3−92 Q1−95 Q3−97 Q1−00 Q3−02
−0.5
0
0.5
1
1.5
Figure 5: Forecasts 1 step ahead for GDP  EVALUATION SAMPLE
1987:22004:4. The green/grey solid line represents the actual data; the solid line
with circles is the series of the SPF forecasts, the dotted line portrays the fully model
based forecasts; the dashed line corresponds to the the augmented forecasts
Q1−90 Q3−92 Q1−95 Q3−97 Q1−00 Q3−02
−0.5
0
0.5
1
1.5
Figure 6: Forecasts 2 step ahead for GDP  EVALUATION SAMPLE
1987:22004:4. The green/grey solid line represents the actual data; the solid line
with circles is the series of the SPF forecasts, the dotted line portrays the fully model
based forecasts; the dashed line corresponds to the the augmented forecasts
22
Q1−90 Q1−95 Q1−00 Q1−05
−0.5
0
0.5
1
1.5
Figure 7: Forecasts 3 step ahead for GDP  EVALUATION SAMPLE
1987:22004:4. TThe green/grey solid line represents the actual data; the solid
line with circles is the series of the SPF forecasts, the dotted line portrays the fully
modelbased forecasts; the dashed line corresponds to the the augmented forecasts
Q1−90 Q1−95 Q1−00 Q1−05
−0.5
0
0.5
1
1.5
Figure 8: Forecasts 4 step ahead for GDP  EVALUATION SAMPLE
1987:22004:4. The green/grey solid line represents the actual data; the solid line
with circles is the series of the SPF forecasts, the dotted line portrays the fully model
based forecasts; the dashed line corresponds to the the augmented forecasts
23
Q1−90 Q1−95 Q1−00
−0.2
0
0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
Nowcasts and confidence bands (no parameter uncertainty) 0 quarters ahead
Figure 9: Nowcasts with conﬁdence bands (no parameter uncertainty)
1987:22004:4. The black solid line is the purely modelbased nowcast, while the
black dasheddotted lines are its conﬁdences bands. Similarly, the blue lines represent
the augmented nowcast (and its conﬁdence bands). When constructing the conﬁdence
bands, we only consider the uncertainty in the estimation of the state, but no param
eter uncertainty. the current period’s shock.
24
Q1−90 Q1−95 Q1−00
−0.5
0
0.5
1
1.5
techonology
measurement error on Dy
measurement error on Dc
Dy
Figure 10: Current structural shocks perceived by the SPF when now
casting 1987:22004:4. The dotted line represents actual real GDP growth. The
blue line is the technological shock the SPF see, the green line is the measurement
error they perceive on GDP growth, while the red line is the measurement error they
perceive on real consumption growth.
1992, the SPF nowcast coincides with the nowcast the SPF would have made if
they only saw the ”measurement error” shock (red line), while have information
on the technological shock is like having no information. In this case, then,
the variation of the GDP was certainly not due to technology, but due to the
”measurement errors” and some other factors. Of course, this exercise would
be much more interesting if we developed it in a setup in which we were able
to distinguish, for example, monetary or ﬁscal shocks, but it is any how quite
informative.
5 Conclusions and Extensions
In this paper we have proposed a method to incorporate judgmental informa
tion, proxied by professional forecasts, into modelbased forecasts. We suggested
modeling the professional forecasts as optimal estimates of the variables of inter
est, made with a diﬀerent, possibly more informative, information set; we then
have shown how they can be accounted for in the framework of a linearized and
solved DSGE model. The methodology we propose allows generating forecasts
that are more accurate than the purely modelbased ones, but that are still
disciplined by the economic rigor of the model.
We have also highlighted how to infer the information content of the SPF
forecasts from the weights that the Kalman ﬁlter assigns to them. In particular,
the weights given by the Kalman ﬁlter to the SPF go to zero as their information
content goes to zero. More precisely, the more the professional forecasters are
able to gather information on the shocks, the more the Kalman ﬁlter will use
the professional forecasts when combining them with the predictions from the
25
Q1−88 Q1−90 Q1−92 Q1−94
−0.5
0
0.5
1
1.5
Figure 11: Counterfactual exercise nowcast 1987:21994:4. The black
solid line represents actual real GDP growth; the dasheddotted line is the actual SPF
nowcast. The blue line is the nowcast the SPF would have made if the would have
had no information on any of the shocks, the green line plots the nowcast they would
have made if they had extrainformation only on the technological shock, while the
red line is the nowcast they would have made if they had extrainformation only on
the measurement error shocks.
26
model, but it will downweigh them if the variance of their forecast errors is too
large.
Finally we have described how to interpret the forecasts through the lens of
the model. We were able to extract the structural shocks as they were perceived
by the professional forecasters and to make several counterfactual exercises on
the forecasts the professional forecasters would have done if they saw only some
of the shocks.
We working on several extensions to this paper. First, in a joint paper with
Domenico Giannone and Lucrezia Reichlin we allow for the timely information
to enter the model directly, not processed by the professional forecasters. We
diﬀer from Boivin and Giannoni (2005) in that we consider a large dataset
containing intraperiod data, in order to really capture the eﬀects of timely
information.
Then, we are also working on reformulating the problem so to be able to
account for the possibility that the forecasts feedback into the model. If for
example we wanted to include a policymaker that targeted current inﬂation
and current output gap, the estimate of inﬂation and output made using judg
mental information should feedback into the model via the policy rule. This
extension can be implemented using the extension of the Kalman ﬁlter pro
posed by Svensson and Woodford (2003) that allows to do signalextraction
with forwardlooking obsverables
10
.
Once we have reformulated the problem, we will be able to apply the method
ology we propose to richer DSGE models, with more shocks. This can be very
interesting from a storytelling perspective, because, as we have sketched in the
simple application we consider in this paper, it will allows us to understand
which types of shocks the professional forecasters perceived. More importantly,
we will be able to understand, in the cases in which the professional forecasters
forecasts where very close to the actual data, the extent to which the perception
of certain shocks helped them forecasting so well and infer if the movements in
the data were due to, say, a technology or a monetary shock.
10
Since the forecasts feedback into the model, we cannot solve the model ﬁrst, as we did in
our simple RBC case, and then forecast. For this reason we will need to deal with forwarding
looking observables.
27
References
[1] AlvarezLois, P., R. Harrison, L. Piscitelli and A. Scott (2005), ‘Taking
DSGE Models to the Policy Environment’
[2] Anderson B.D.O. and J.B. Moore (1979), Optimal Filtering
[3] Anderson, E., L. P. Hansen, E. R. MCGrattan and T. J. Sargent (1996):
“Mechanics of Forming and Estimating Dynamic Linear Economies,” in
Handbook of Computational Economics, Volume 1, ed. by D. A. K. Hans
M. Amman, and J. Rust, pp. 171–252. NorthHolland
[4] Aruoba, B. (2005), ”Data Revisions are not Well Behaved” EABCN/CEPR
Working Paper Series 21/2005.
[5] Blanchard, O.J. and C.M. Kahn (1980), ”The Solution of Linear Diﬀerence
Models under Rational Expectations,”, Econometrica 48(5), 13051311
[6] Boivin, J. and M. Giannoni (2005), ”DGSE in a DataRich Environment”
[7] Bruno, G. and C. Lupi (2004), ”Forecasting industrial production and the
early detection of turning points”, Empirical Economics, 29, 647671
[8] Campbell, John Y. (1994), ” Inspecting the Mechanism: An Analytical Ap
proach to the Stochastic Growth Model,” Journal of Monetary Economics
33, 463506.
[9] Coenen, G., Levin and V. Wieland, (2005). ”A Data Uncertainty and the
Role of Money as an Information Variable for Monetary Policy,” European
Economic Review, 49(4), 9751006.
[10] Cogley, T., S. Morozov and T. Sargent (2005), ”Bayesian fan charts for U.K.
inﬂation: Forecasting and sources of uncertainty in an evolving montary
system,” Journal of Economic Dynamics and Control 29, 18931925.
[11] D’Agostino, A., D. Giannone and P. Surico (2006), ”(Un)predictability and
Macroeconomic Stability”, ECB Working Paper No 605
[12] Diebold, F.X. and R.S. Mariano (1995),”Comparing Predictive Accuracy,”,
Journal of Business Economics and Statistics, 13, 253265
[13] Edge, R.M., M.T. Kiley, J. Laforte (2006) ,”A Comparison of Forecast Per
formance between Federal Reserve Staﬀ Forecasts, Simple ReducedForm
Models and a DSGE Model,” Federal Reserve Board (mimeo)
[14] Forni, M., M. Hallin, M. Lippi, and L. Reichlin (2000), ”The Generalized
Dynamic Factor Model: Identiﬁcation and Estimation,” Review of Eco
nomics and Statistics 82:4, 540—554.
[15] Friedman, M (1961), ”The lag in eﬀect of monetary policy”, Journal of
Political Economy, 69, 44766.
[16] Giacomini, R. and H. White (2006), ”Tests of Conditional Predictive Abil
ity”, Econometrica, vol. 74(6=, 15451578
28
[17] Giannone, D., Reichlin, L. and Sala, L., (2005). ”Monetary Policy in Real
Time,” CEPR Discussion Papers 4981, C.E.P.R. Discussion Papers.
[18] Giannone, D., Reichlin, L. and Small, D., (2005). ”Nowcasting GDP and
Inﬂation: The Real Time Informational Content of Macroeconomic Data
Releases,” CEPR Discussion Papers 5178, C.E.P.R. Discussion Papers.
[19] Hamilton,J.D. (1994) Time Series Analysis, Princeton University
Press,Princeton,NJ.
[20] Ingram,B.F.,Kocherlakota,N.R.,Savin,N.E. (1994), ”Explaining business
cycles:a multipleshock approach. Journal of Monetary Economics” 34,415
–428.
[21] Ireland, P.N., (2004), ”A method for taking models to the data,” Journal
of Economic Dynamics and Control, Elsevier, vol. 28(6), pages 12051226
[22] Hansen,G.D.,(1985), ”Indivisible labor and the business cycle,” Journal of
Monetary Economics 16,309 –327.Journal of Monetary Economics 16,309
–327.
[23] King, R. G., C. I. Plosser et S. T. Rebelo (1988a), ”Production, Growth and
Business Cycles: 1. The Basic Neoclassical Model,” Journal of Monetary
Economics, 21, 195232.
[24] King, R. G., C. I. Plosser et S. T. Rebelo (1988b), ”Production, Growth
and Business Cycles: 2. New Directions,” Journal of Monetary Economics,
21, 309341.
[25] Klein, P. (2000), ”Using the Generalized Schur Form to Solve a System of
Linear Expectational Diﬀerence Equations”, Journal of Economic Dynam
ics and Control 24(10), 14051423
[26] Mankiw, N.G. and M.D. Shapiro, (1986), ”News or Noise: An Analysis of
GNP Revisions,” Survey of Current Business, 66 , 2025.
[27] Marchetti, D.J. and G. Parigi (1998), ”Energy Cconsumption, Survey Data
and the Prediction of Industrial Production in Italy”, Tem di Discussione
del Servizio Studi della Banca d’ Italia, No 342
[28] McNees, S.K. (1990), ”The role of judgment in macroeconomic forecasting
accuracy,” International Journal of Forecasting,. 6, 287299.
[29]
¨
Osterholm, P., (2006), ”Judgement and Fan Charts  Incorporation and
Evaluation”
[30] Reifschneider, D.,D.J. Stockton and D.W. Wilcox (1997),”Econometric
Models and the Monetary Policy Process” , CarnegieRochester Confer
ence Series on Public Policy, vol. 47, pp.137
[31] , Robertson, J., E. Tallman and C. Whiteman, (2005)”Forecasting using
Relative Entropy,” Journal of Money Credit and Banking 37, 383402.
Banking 37, 383402.
29
[32] Romer, C. and D. Romer(2000), ”Federal Reserve Information and the
Behavior of Interest Rates”, The American Economic Review, 90, 3, 429
457
[33] Sargent, T.J. (1989), ”Two Models of Measurement and the Investment
Accelerator”, Journal of Political Economy, 97,2,251287
[34] Schuh, S. (2001), ”An Evaluation of Recent Macroeconomic Forecast Er
rors”, New England Economic Review
[35] Smets, F. and R. Wouters, (2004) ”Forecasting with a Bayesian DSGE
model  an application to the euro area,” Working Paper Series 389, Euro
pean Central Bank
[36] Sims, C.A (2002), ”Solving Linear Rational Expectations Models,” Com
putational Economics, Springer, 20(12), 120.
[37] Sims, C.A (2003), ”The role of models and probabilities in the monetary
policy process,” Brooking Papers on Economic Activity, 2002:2 163
[38] Stock, J.H., and M. W. Watson (1999), ”Forecasting Inﬂation,” Journal of
Monetary Economics 44, 293335.
[39] Stock, J.H., and M. W. Watson (2002), ”Macroeconomic Forecasting Using
Many Predictors”.
[40] Svensson, L.E.O (2005), ”Monetary Policy with Judgement: Forecast Tar
geting”, NBER Working Paper 11167
[41] Svensson, L.E.O and R.J. Tetlow (2005), ”Optimal Policy Projections”,
NBER Working Paper 11392
[42] Svensson, L.E.O. and N. Williams (2005), ”Monetary Policy with Model
Uncertainty: Distribution Forecast Targeting”, NBER Working Paper
11733
[43] Svensson, L.E.O. and M. Woodford (2003), ”Indicator Variables for Opti
mal Policy”, Journal of Monetary Economics 50, 691720
[44] Tinsley, P. A., Spindt, P. A. and Friar, M. E., 1980. ”Indicator and ﬁlter
attributes of monetary aggregates : A nitpicking case for disaggregation,”
Journal of Econometrics, Elsevier, vol. 14(1), 6191
30
6 Appendix A: Solving the RBC model
The equilibrium conditions for the optimization problem are
y
t
= k
(1−α)
t
, (49)
y
t
= c
t
+i
t
, (50)
(γ
A
e
εt
)k
t+1
= k
t
+i
t
. (51)
R
t
= [(1 −α)k
−α
t
+ (1 −δ)] (52)
c
−η
t
=
β
γ
η
A
E
t
[
c
−η
t+1
R
t+1
e
εt+1
]. (53)
In absence of shocks, the economy converges to the following deterministic bal
anced growth path/steady state that can be obtained from equations (49)(53)
dropping the time subscripts and through some simple manipulation.
R =
γ
η
A
β
Y
K
=
γ
η
A
β
+ 1 −δ
1 −α
I
K
= γ
A
− (1 −δ)
C
Y
= 1 −
(1 −α)(γ
A
− (1 −δ))
R − (1 −δ)
I
Y
=
(1 −α)(γ
A
− (1 −δ))
R − (1 −δ)
A linear approximation of system (49)(53) can be derived loglinearizing it
around its steady state. Let us deﬁne, for and variable x, ˆ x
t
= ln(
xt
X
), i.e. the
logdeviation of x
t
from its steady state X. With some manipulation we obtained
the following linearization for the system (49)(53).
ˆ y
t
=
C
Y
ˆ c
t
+
I
Y
ˆ
i
t
ˆ
k
t+1
=
I
γ
A
K
ˆ
i
t
+
1 −δ
γ
A
ˆ
k
t
−ε
t
ˆ y
t
= (1 −α)ˆ y
t
ˆ r
t
=
1 −α
R
Y
K
(ˆ y
t
−
ˆ
k
t
)
0 = E
t
[−η(ˆ c
t+1
− ˆ c
t
) + ˆ r
t+1
]
Manipulating it a bit, this system can also be rewritten all as a function of
ˆ
k
t+1
,
ˆ c
t
and ε
t+1
.
ˆ
k
t+1
= λ
1
ˆ
k
t
+λ
2
ˆ c
t
−ε
t
(54)
E
t
[∆ˆ c
t
] =
λ
3
η
E
t
ˆ
k
t+1
(55)
ˆ y
t
= (1 −α)
ˆ
k
t
(56)
31
where
λ
1
=
R
γ
A
λ
2
= 1−
R +α(1 −δ)
γ
A
(1 −α)
λ
3
= −
α(R − (1 −δ))
R
Various methods are available for solving linear diﬀerence models like (54)
(56) under rational expectations, but given the simplicity of our model we do
not need to resort to elaborate methods, we can simply apply the method of
undetermined coeﬃcients as described, e.g., in Campbell (1994). That is, we
”guess” the functional form of
ˆ
k
t+1
and ˆ c
t
and then we verify it by ﬁnding
parameters that satisfy the restrictions of the approximate loglinear model.
As pointed out in King, Plosser and Rebelo (1988b), if technology is a loga
rithmic random walk with drift, then the solution to the transformed economy
is particularly simple. The only impact of technological progress is to reset the
transformed economy’s capital stock relative to its longsum stationary level. A
positive 1% technological innovation in the untransformed economy leads to a
1% decline in the transformed economy’s capital stock. Therefore we assume
that
ˆ
k
t+1
= µ
ˆ
k
t
−ε
t+1
(57)
ˆ c
t
= π
ck
ˆ
k
t
(58)
and similarly, ˆ y
t
= π
yk
ˆ
k
t
, etcetera. Substituting (57) and (58) into (54)(56)
and manipulating, we obtain a second order equation for π
c
k which has only
one positive solution (as required by the problem). Therefore we obtain
ˆ
k
t+1
= µ
ˆ
k
t
−ε
t+1
(59)
_
ˆ y
t
ˆ c
t
_
=
_
π
ck
(1 −α)
_
ˆ
k
t
,
where
π
ck
=
−(λ
1
− 1 +
1
η
)λ
2
λ
3
+
_
(λ
1
− 1 +
1
η
)λ
2
λ
3
)
2
+ 4λ
2
λ1λ3
η
2λ
2
µ = λ
1
+λ
2
π
ck
.
In order to be able to bring the model to the data we still need to work on
it a bit more. If we consider the variables in levels, we can write
ln y
t
= a
t
+ ln(Y ) + ˆ y
t
ln c
t
= a
t
+ ln(c) + ˆ c
t
ln i
t
= a
t
+ ln(I) +
ˆ
i
t
Since we are in fact looking at a balanced growth path rather than a steady
state, we cannot recover the steady state values Y, C, I, we can only pin down
32
some ratios as
C
Y
and
Y
K
. Therefore, in order to relate the model (59) to the
data, we rather look at
∆(ln y
t
) = ln γ
A
+ (α +µ − 1)
ˆ
k
t−1
−α
ˆ
k
t
ln c
t
− ln y
t
= log(
C
Y
) + [π
ck
− (1 −α)]
ˆ
k
t
ln i
t
− ln y
t
= log(
I
Y
) + [π
ik
− (1 −α)]
ˆ
k
t
. (60)
Finally we obtain
_
ˆ
k
t+1
ˆ
k
t
_
=
_
µ 0
1 0
_ _
ˆ
k
t
ˆ
k
t−1
_
+
_
−1
0
_
ε
t+1
(61)
_
_
∆ln y
t
ln c
t
− ln y
t
ln i
t
− ln y
t
_
_
=
_
_
ln γ
A
log(
C
Y
)
log(
I
Y
)
_
_
+
_
_
−α α +µ − 1
π
ck
− (1 −α) 0
π
ik
− (1 −α) 0
_
_
_
ˆ
k
t
ˆ
k
t−1
_
,
in the form of a statespace econometric model, allowing the procedure for eval
uating the likelihood function to continue using the Kalman ﬁltering algorithms
outlined, for example, by Hamilton (1994,Chapter 13).
Table 7 reports the parameters values for the model with the autocorre
lated but not crosscorrelated residual. All other speciﬁcations have the same
calibrated parameters, but diﬀer slightly in the estimated parameters.
Table 7: Parameter values for the model with the autocorrelated but not cross
correlated residual.
Parameter value
β 0.988 calibrated
α 0.667 calibrated
δ 0.01 calibrated
η 1 (log utility) calibrated
γ 1.0072 calibrated
σ
2
ε
0.01 estimated
D
yy
0.6692 estimated
D
cc
0.2215 estimated
V
yy
0.01 estimated
V
cc
0.0075 estimated
V
yc
0.0483 estimated
7 Appendix B: alternative modeling of the pro
fessional forecasters
Here I show how to construct an augmented forecast that extracts and accounts
for the information contained in professional forecasts under the assumption that
the professional forecasters have extramodel information on current and future
33
shocks.
11
Their information set I
T
⊇ M
T
and is such that, for h = 0, 1, 2, 3, 4
E[u
T+h
I
′
T+h
] = 0. (62)
In this case, the forecasters will report the following statespace form. For
h=0,
ˆ s
TT
= Gˆ s
T−1T−1
+Ku
T
(63)
E[y
T
I
T
] = ΛGˆ s
T−1T−1
+w
TT
,
where
w
TT
= E[u
T
I
T
] +η
TT
and
_
Ku
T
w
TT
_
∼ WN
_
0 , Σ
0
_
where
Σ
0
=
_
KE(u
T
u
′
T
)K
′
KE(u
T
w
′
TT
)
E(w
TT
u
′
T
)K
′
Ew
TT
w
′
TT
_
,
η
TT
is, as before, the measurement error (the typo) made by the forecasters
in T while reporting their forecast for period T. The professional nowcast co
incides with the one generated under the assumption that the forecasters have
information only on the current period; the forecasts will instead be diﬀerent.
For h = 1, 2, 3, 4 we have
ˆ s
T+hT+h
= Gˆ s
T+h−1T+h−1
+Ku
T+h
E[y
T+h
I
T
] = ΛGˆ s
T−1T−1
+
h
i=1
ΛG
i
KE[u
T+h−i
I
T
] +η
T+hT
, (64)
= ΛGˆ s
T+h−1T+h−1
+w
T+hT
where
w
T+hT
= Λ
h
i=1
G
i
K [E(u
T+h−i
I
T
) −u
T+h−i
] +E[u
T+h
I
T
] +η
T+hT
(65)
_
Ku
T+h
w
T+hT
_
∼ WN
_
0 , Σ
h
_
and
Σ
h
=
_
KE(u
T+h
u
′
T+h
)K
′
KE(u
T+h
w
′
T+hT
)
(w
T+hT
u
′
T+h
)K
′
Ew
T+hT
w
′
T+hT
_
.
As above, η
T+hT
is, as before, the measurement error (the typo) made by the
forecasters in T while reporting their forecast for period T + h. We assume
11
We acknowledge, but for now ignore the potential inconsistency arising from the fact of
assuming that professional forecasters have information on future shocks, while agents do not
and fail to look at the professional forecasters to have more information.
34
that η
sT
⊥ u
τ
, for any s and τ, and that η
T+hT
⊥ E(u
T+i
I
T
) for any i, h.
Moreover we will make the following assumptions:
u
t
⊥ E(u
τ
I
T
) ∀τ = t
E(u
t
I
T
) ⊥ E(u
τ
I
T
) ∀τ = t (66)
u
t
⊥ E(u
τ
I
T
) ∀τ = t
which will allow us to recover Σ
0
and Σ
h
for h=1,2,3,4.
Once we have recovered the exact form of the covariance matrices Σ
0
and
Σ
h
we can then smooth (63) and (64) with a timevarying Kalman smoother,
in order to obtain optimal estimates s
+
T+hIT
,for h=0,1,2,3,4, that comprise the
extrainformation contained in the forecasts and employs it optimally within
the model. Therefore, using ˆ s
+
T+hIt
we can create a forecast ˆ y
+
T+hIT
,
y
+
T+hIT
= Λˆ s
+
T+hIT
, (67)
that incorporates optimally in the modelbased framework the judgemental in
formation coming from the conjunctural forecasters.
Since the nowcasts coincides with the ones of the previous subsection, Σ
0
can be recovered exactly in the same way. To recover all the elements of Σ
h
, we
proceed as follows. Given the assumptions we made in (66), it is easy to show
that
E(u
T+h
w
′
T+hT
) = E[u
T+h
E(u
T+h
I
T
)
′
].
On the basis of assumptions (66), the following equality holds
E[u
T+h
E(y
T+h
I
T
)
′
] = E[u
T+h
E(u
T+h
I
T
)′],
As the series for the u
T+h
are readily available via the Kalman ﬁlter, we are
able to recover empirically the value of E[u
T+h
E(y
T+h
I
T
)
′
], and therefore of
E[u
T+h
E(u
T+h
I
T
)
′
]. Moreover, notice that, since E(u
T+h
I
T
) is a linear pro
jection of u
T+h
on the space spanned by I
T
, Sp(I
T
), then
u
T+h
= E(u
T+h
I
T
) +µ
T
where µ
T
is orthogonal to the space spanned by I
T
. Therefore,
E[u
T+h
E(u
T+h
I
T
)
′
] = E[E(u
T+h
I
T
)E(u
T+h
I
T
)
′
], (68)
i.e. we have determined also the variance of the expected value of the current
shock given the information set I
T
, E(u
T+h
I
T
), and we showed it is equal to
the covariance among the shock and its expected value.
In order to recover Ew
T+hT+h
w
′
T+hT+h
we will ﬁrst deﬁne the forecasters’
forecast error as
e
T+h
= y
T+h
−E(y
T+h
I
T
)
= u
T+h
−w
T+hT
.
The variance of e
T+h
, whose value can be recovered from sample data, is:
E(e
T+h
e
′
T+h
) = E(u
T+h
u
′
T+h
) −E[u
T+h
w
′
T+hT
] +
− E[u
T+h
w
T+hT
]
′
+E[w
T+hT
w
′
T+hT
]
35
E(u
T+h
u
′
T+h
) can be obtained by the Kalman ﬁlter as follows
E(u
T+h
u
′
T+h
) = ΛPΛ
′
(69)
Where P is the solution of the Riccati equation deﬁned in (11). Using the above
equations, we can ﬁnally recover E[w
T+hT
w
′
T+hT
] and we therefore have pinned
down all the values of the matrix Σ
h
.
We have shown how to construct modelbased forecasts that incorporate
extramodel information coming from the professional forecasts under two pos
sible modelizations of the professional forecasters’ information set. Similar re
sults would hold however under all intermediate assumptions, as, for example,
the assumption that the professional forecasters have extramodel information
on the current and 1step ahead shock. The only diﬀerence would be in the
deﬁnition of wT +hT, which depends on the assumptions made.
Table 8: Relative MSFE of forecasts of GDP growth with respect to naive
benchmark. Asterisks denote forecasts that are statistically more accurate than
he naive benchmark at 1% (∗ ∗ ∗), 5%(∗∗) and 10%(∗)
relative to constant growth
EVALUATION SAMPLE: 1987:2  2004:4
Forecast horizon RBC SPF AUG
Q0 0.86 0.68 ∗ 0.68 ∗∗
Q1 0.88 ∗∗ 0.77 0.76 ∗∗
Q2 0.92 0.88 0.82 ∗ ∗ ∗
Q3 0.94 ∗∗ 0.94 0.90 ∗ ∗ ∗
Q4 0.97 0.95 0.92∗ ∗ ∗
relative to constant growth
EVALUATION SAMPLE: 1996:2  2004:4
Forecast horizon RBC SPF AUG
Q0 0.81 ∗∗ 0.80 ∗∗ 0.75 ∗ ∗ ∗
Q1 0.85 ∗ ∗ ∗ 0.94 0.86 ∗∗
Q2 0.84 ∗ ∗ ∗ 1.01 0.88 ∗ ∗ ∗
Q3 0.84 ∗ ∗ ∗ 1.01 0.92 ∗ ∗ ∗
Q4 0.87 ∗∗ 1.01 0.92 ∗∗
36
Table 9: Weight given to the SPF forecast of ∆GDP and ∆CONS in the
augmented forecast . 2004:4
∆GDP ∆CONS
nowcast 0.3941 0.2574
1 step ahead 0.3606 0.1261
2 step ahead 0.3236 0.0931
3 step ahead 0.3387 0.0832
4 step ahead 0.3321 0.0818
37
JEL Classiﬁcation: C32, C53 Keywords: Forecasting, Judgment, Kalman ﬁlter, Real time
1
Introduction
Much of the macroeconometric literature of the last decade has focused on making microfounded dynamic stochastic general equilibrium (DSGE) models a viable option for policy analysis and forecasting. Since Smets and Wouters (2004) have shown that DSGE models estimated with Bayesian techniques seem to perform quite well in forecasting relative to standard benchmark models such as VARs, DSGE models are playing a more relevant role in practice and have indeed become an increasingly important tool for policy analysis and forecasting at central banks. The attractiveness of using these models to forecast derives from the fact that they are theoretically consistent and based on ﬁrst principles. The microfoundations imply that the parameters are more likely to be truly structural and allow interpreting the forecasts in an economically intuitive way. Moreover, the structural nature of the model allows computing forecasts conditional on a policy path and allows examining the structural sources of the forecast errors and their implications for monetary policy. Despite their growing employment in practice, modelbased forecasts still seem to be outperformed at short horizons and particularly in the nowcast1 by forecasts produced by institutional and professional forecasters, such as the Federal Reserve’s Greenbook (e.g. Sims, 2003) or the Survey of Professional Forecasters.2 Where does this advantage come from? Professional forecasters monitor and analyze literally hundreds of data series, using informal methods to distill information from the available data. Not only they access what is generally called hard data (data series that are released by the statistical agencies, as for example, GDP, industrial production, etc), they also gather socalled soft information, i.e. things like the quantity of goods transported by railway in each month (Bruno and Lupi, 2004) or the electricity consumed each month (Marchetti and Parigi, 1998). Moreover, professional forecasters are able to incorporate new data and new information as it becomes available throughout the month or the quarter and therefore are can take advantage of the timeliness of this information. Indeed, as Giannone, Reichlin, Small (2005) point out, timely information seems to play a very important role in improving the quality of the forecasts, and of the nowcasts in particular. Finally, in their forecasts, professional forecasters account also for all sheerly judgmental information. A typical example is the adjustments of the forecasts made in 1999 in order to account for the fear of the Y2K bug. Indeed this seemed at the time a very important event, but since it had never happened no model could be expected to encompass it, while the institutional forecasters could.
1 Nowcasts are estimates of the current value of variables, such as GDP, that are unknown in the current period due to information lags 2 This view has recently been challenged by Edge, Kiley and Laforte (2006), who suggest that a richly speciﬁed DSGE models have a forecasting performance that is comparable to that of the Greenbooks. We believe that their results are very much related to the sample they choose for their out of sample exercise, i.e. 19962001. We will discuss this in more detail in the empirical application.
1
Hence, judgement  i.e. information, knowledge and views outside the scope of a particular model 3  strongly informs the institutional forecasts. The empirical evidence at hand suggests that the ability to account for more, more timely and ”softer” information is what makes the professional or judgmental (I will use the two terms interchangeably from now on) forecasts better at nowcasting and forecasting short horizons. The introduction of DSGE models in a policy and projection environment has given rise to a literature on how the model’s outcomes should be combined with judgmental input and oﬀmodel information. The aim of this paper is to propose a method for combining judgment  proxied by judgmental forecasts with modelbased forecasts, in order to make predictions that are more accurate but nevertheless disciplined by rigorous economic theory. In particular, we propose to interpret the judgmental forecasts as an estimate  made with a diﬀerent, possibly more informative, information set  of the real signal, estimate which can be ﬁltered in order to extract the information it possibly contains. We then use the model to generate another forecast that can now also account for judgmental information and therefore make more accurate predictions. The new forecast that we generate is a combination of the modelbased forecast and the judgmental forecast: the Kalman ﬁlter will automatically associate weights to the two forecasts depending on the information content of the judgmental forecasts. Moreover, with the methodology we propose, we will be able to look at the judgmental forecasts through the lens of the model. Storytelling is diﬃcult when it comes to judgmental forecasts; in our setup we will be able to interpret the forecasts in light of the model and therefore somehow structuralize the forecasts. The approach we propose is similar in spirit to the one used by Coenen, Levin and Wieland (2005) and Koenig (2005) to deal with revisions. One of the key factors that distinguishes our approach from theirs is that we consider the judgmental forecasts as optimal forecasts made with a diﬀerent information set, not a noisy signal of the actual variables. Recently, other authors have addressed the issue of how to use soft data and judgment in models. Svensson (2005), Svensson and Tetlow (2005) and Svensson and Williams (2005) develop diﬀerent frameworks that allow accounting for centralbank judgment when constructing optimal policy projections of the target variables and the instrument rate. They show that such monetary policy may perform substantially better than monetary policy that disregards judgment and follows a given instrument rule. Our approach diﬀers quite substantially from theirs: our goal is solely to produce modelbased forecasts that can account for judgmental and oﬀmodel information. Our approach leaves the structure of the DSGE model unchanged and combines the modelbased forecasts with the judgmental forecasts. In a Bayesian framework, Robertson, Tallman and Whiteman (2005) suggest a minimum relative entropy procedure for imposing moment restrictions on simulated forecasts distributions from a variety of models. This technique involves changing the initial predictive distribution to a new one that satisﬁes speciﬁc moment conditions that come from outside of the models, i.e. that are judgmental. Therefore, minimumentropy methods allow adjusting the full posterior distribution of the DSGE models to match a given experts’ assessment. Another approach that can be used to incorporate judgmental and oﬀmodel
3 This
deﬁnition appears in Svensson (2005)
2
and propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a datarich environment. lagged expectations or expectations farther in the future can be cast in this form simply by expanding the vectors zt and Zt appropriately. The model 3 .information is that of Boivin and Giannoni (2005). The solution of the model has the following statespace representation St = Zt xt zt = = F (θ)St−1 + H(θ)εt N (θ)St . To allow for greater generality we augment each equation in (4) with a (possibly serially correlated) residual.1 The Econometric Methodology The Framework zt+1 Zt+1 zt Zt Let us consider a general linear(ized) rational expectations model of the form AEt =B + Cxt (1) (2) xt = M xt−1 + εt εt ∼ W N (0. or error term. In this way. H(θ) and N (θ) are functions of the underlying structural parameters. e. as in Ireland (2004). Section 4 presents the results of the empirical application described in the previous section. In Section 3 we apply the proposed methodology on a strippedtothebone version of an RBC model using the Survey of Professional Forecasters’ forecasts to extract eventual judgmental information.. 2 2. we illustrate how to extract the weights given to the modelbased and the judgmental forecast. They build on the factor model literature. In Section 5 we give some conclusions and outline future extensions of this paper. xt is a vector of exogenous variables following the process (2). The paper is structured as follows. see. Klein (1997) and Sims (2000). Blanchard and Kahn (1980). Their methodology allows using as much information as possible to estimate the structural model and to update the estimates of the state variables featuring in the model. (3) (4) where F (θ). Lippi and Reichlin (2000). Zt is a vector of predetermined endogenous variables or of lagged exogenous variables such that Et Zt+1 = Zt+1 . 2002) and Forni.g. started by Stock and Watson (1999. Hallin. Several numerical techniques have been developed to solve models of the form (1)(2). soft information can be used systematically to update the current assessment of the state variables as well as of the shortterm forecast. Q) where zt is a vector of non predetermined endogenous variables. B. Linear(ized) general equilibrium models containing additional lags. and describe how to structuralize the professional forecasts. C and M are conformable matrices of coeﬃcients that form a structural parameter space that we shall call Θ. In Section 2 we outline the framework and describe the proposed methodology. Q is diagonal and A.
depending on the way in which the matrices D and V are deﬁned. . The conditions are that that (F. .. Hansen. y0 . 2. serially uncorrelated inno′ vations that is normally distributed with covariance matrix Eξt ξt = V and is uncorrelated with the innovation εt . There are two appealing features in this setup. McGrattan. cannot explain. that often appears in DSGE models. It is useful to rewrite the innovations representation 4 The conditions for the existence of this representation are stated carefully. among other places. s0 ] is the estimate of the state vector st based ˆ ˆ on the observations of yτ up to date t. etc. Associated with the state space representation (7)(8) is the innovations representation4 stt = Gˆt−1t−1 + Kut ˆ s (9) yt = ΛGˆt−1t−1 + ut s (10) where stt = E[st yt . Λ. pointed out by Ingram et al.H.(5) and (6) can be rewritten more compactly as: st+1 = St+1 vt+1 = G(θ)st + νt+1 (7) (8) (5) yt = Λ(θ)st where G(θ) = F (θ) 0 H(θ)εt .now consists of the transition equation (3) and the new observation equation yt = zt + vt = N (θ)St + vt .. and Sargent (1996. where vt+1 = Dvt + ξt+1 (6) for all t = 1. 4 . ˆ ˆ which makes the associated Kalman gain Kt converge to K. Λ(θ) = N (θ) I and νt = is seri0 D ξt ally uncorrelated. See Anderson.. the model consisting of (3). K = GP Λ′ (ΛP Λ′ )−1 is the steady state Kalman gain. Hansen. ut = yt − ytt−1 = yt − E[yt yt−1 .N) be such that iterations on the Riccati equation for Σt = E(xt − xt )(xt − xt )′ converge. normally distributed with zero mean and covariance matrix H(θ)ΣH(θ)′ 0 ′ Eνt νt = Q = . McGrattan. for notational simplicity. Model (3)... are function of the structural parameters θ. (11) P is the steadystate covariance matrix of the innovations st − stt−1 given the information in period t − 1.. because of their elegance and parsimony.. page 175) for deﬁnitions of stabilizable and detectable. we will drop the indication that the matrices G. Second. .(5) and (6) overcomes the wellknown stochastic singularity problem. Suﬃcient conditions are that (F.(1994). in Anderson. yt−1 . 0 V From now on. y0 ] the forecast error made when forecasting yt given the observations of yτ up to date t1.. H ′ ) is detectable. deriving from the assumption that few fundamental shocks drive all the dynamics of the model.. and Sargent (1996). the residuals can be interpreted as measurement errors or as additional elements capturing all of the movements and comovements in the data that the DSGE models. and ξt is a vector of zero mean. First.. N ′ ) is stabilizable and that (F ′ . and P is unique positive semideﬁnite solution that satisﬁes the algebraic Riccati equation P = Q + GP G′ − GP Λ′ (ΛP Λ′ )−1 ΛP G′ .
The second type of forecaster also knows the model of the economy and uses it to make its forecasts. (14) 2.given by (9) and (10) as the sum of two components: one that is forecastable given the information set containing all observations of yτ up to date t1. so in period t there is data available only up to t1. but possibly also on future shocks. It . The assumptions on the information available in each period t are outlined in Table 1. The ﬁrst type generates his forecasts on the basis of the model and the data released by the statistical agency. This type represents the professional forecasters (PF from now on). h−1 (12) st+ht+h = Gh+1 st−1t−1 + ˆ ˆ j=0 h Gj Kut+h−j (13) yt+h = ΛGh+1 st−1t−1 + ΛG ˆ i=1 Gi Kut+h−i + ut+h .e. which is more informative than Mt . The data reporting agency releases data about the current period at the end of the period. However. In order to do so. we need to make assumptions on the model and the information set that the professional forecasters use to generate their forecasters. both cases are credible. . In this paper. not exclusively on the current shocks. but also information on future tax raises or on the possible eﬀects of extraordinary events like the Y2K bug or the World Cup. In what follows. know the model (7)(8) and can use it to forecast. which comprises Mt but is possibly more informative. and a sequence of innovation terms. their information set is plausibly richer than Mt : they collect soft. We then assume that there are three types of forecasters. As highlighted in the Introduction. the possible inconsistency arising from assuming that the Professional Forecasters know more 5 . such as monthly electricity consumption or quantity of goods transported by railway in each month.2 Model of the Professional Forecasts The goal of this section is to show how to incorporate judgmental forecasts into models of the form (7)(8).. More speciﬁcally.. we assume that professional forecasters have extramodel information only on the current period’s shock but not on future shocks. but they access an information set. yt−2 . i. information about this month electricity consumption. The agents observe the shocks and base their decisions on this knowledge..e. Mt = Sp{yt−1 . Considering the way the professional forecasts are made and the type of extrainformation they use i. but accesses another information set It . That is. or equivalently in (9)(10).. We assume that the shocks hit the economy at the beginning of period t. intraperiod extramodel information. From now on I will call the ﬁrst type of forecaster ’purely’ modelbased forecaster. In appendix we illustrate the case in which we assume that they have some extramodel information. we will make the assumption that the professional forecasters know the structure of the economy.y0 }. we need to somehow formalize these forecasts. Therefore the information set available to the ﬁrst type of forecaster in time t comprises exclusively information up to time t1: his information set is Mt . as deﬁned in (12).
the et+h ’s would not be forecast errors . i. but rather noisy signals of actual future variables. Sargent (1989) ﬁrst distinguished among these two possible modelizations. At any given time T their information set IT comprises MT but is such that. 4 ′ E[uT IT ] = 0. Let us formalize rigorously the second type of forecaster. PF are able to obtain information on the current period’s shocks. professional forecasters will report the following: for h=0.and therefore the PF would not be forecasts. Instead we want to model the output of the professional forecasters as forecasts. For t T .5 Notice that if we just modelled the PF’s forecasts as a noisy version of the actual signal.. . discussing two models of a statistical agency that is collecting and reporting observations on a dynamical linear stochastic economy. 2. 2. the third type of forecaster will use the method I propose: I will deﬁne the forecasts they produce augmented forecasts. st+1+h = Gst+h + νt+1+h PF yt+h = Λst+h + et+h this formulation would be totally inconsistent with our assumptions.. This would mean that we are assuming that the PF have crystal balls through which they see the future. 3. since the use the PF to augment their information set. for h = 1.since they are not orthogonal to the past . (15) uT +h ⊥ IT The forecasters know the model of the economy and produce their forecasts as linear least squares forecasts given their information set plus an error term. the professional forecasters. Finally. but that the agents don’t incorporate them in their solution persuaded me to relegate this part to the appendix. nor modelconsistent.e.. This is neither realistic. First of all. agency releases data on period t about the future that the agents. T − 1 both purely modelbased forecasters and professional forecasters are going to construct the innovations representation (9).(10). For t = 1. sT T = GˆT −1T −1 + KuT ˆ s P yt F = E[yT IT ] + ηT T (16) (17) s where E[yT IT ] = ΛGˆT −1T −1 + E[uT IT ] is the least squares forecast made by the PF with their information set It and ηT T is the measurement error (the 5 For a recent review of the debate on the rationality (unbiasedness and eﬃciency) of macroeconomic forecasts see Schuh(2001) 6 . as shown in detail below.Table 1: Information structure t shocks hit the economy agents observe them PF collect information and release their forecast t+1 stat.
Σh 0 ′ EwT +hT wT +hT and Σh = KE(uT +h u′ +h )K ′ T 0 . In appendix we present the case in which the PF can have extrainformation up to 4 periods ahead. sT T ˆ PF yt = where wT T = E[uT IT ] + ηT T . We assume that ηsT ⊥ uτ . Here Σh will be block diagonal for all h = 0. 1. for any s and τ . We want to extract the optimal linear projection of yT +h given IT .e PF E[yT +h IT ]. Clearly. By ﬁltering model (18)(19) with a timevarying Kalman smoother one can obtain optimal 7 . and KuT wT T Σ0 = For h = 1. from yT +h . s (18) ∼ WN 0 . 3. 3. 4 we have sT +hT +h ˆ PF yT +h = GˆT −1T −1 + KuT s ΛGˆT −1T −1 + wT T . As above. This can be cast in statespace form as follows. the form of the matrices Σh depends crucially on the assumptions we made on the information set of the professional forecasters. s = ΛGˆT +h−1T +h−1 + wT +hT s (19) where h−1 wT +hT = −Λ j=1 Gj KuT +h−j − ΛGh K(uT − E[uT IT ]) + ηT +hT KuT +h wT +hT (20) ∼ WN 0 .typo) made by the professional forecasters in T while reporting their forecast. ηT +hT is the measurement error (the typo) made by the professional forecasters in T while reporting their forecast for period T + h. = GˆT +h−1T +h−1 + KuT +h s = ΛGˆT −1T −1 + ΛGh KE[uT IT ] + ηT +hT . since we have modeled the PF as having some extrainformation only on the current shock. In that case. Σ0 with KE(uT u′ )K ′ T E(wT T u′ )K ′ T ′ KE(uT wT T ) ′ EwT T wT T . 2. 4. To understand how the augmented forecasts are built. and that ηT +hT ⊥ E(uT IT ) for h = 0. the Σh will not necessarily be block diagonal: the value of the oﬀdiagonal terms will depend on the information on future shocks actually carried by the professional forecasts. i. For h=0. consider the fact that models (18) and (19) can be seen as a new state space model in which the new observables are the professional forecasts. 2.
3. is: ′ E(eT eT ) = − E(uT u′ ) − E[uT E(uT IT )′ ] + T ′ E[uT E(uT IT )′ ]′ + E[wT T wT T ] (24) 6 The notation s+ ˆT +hI mean the estimate of the state sT +h made in T give the augmented T information set IT . (23) (the equality holds because ηT T ⊥ E(uT IT ) by assumption). E(uT IT ). we proceed as follows. since E(uT IT ) is a linear projection of uT on Sp(IT ). whose value can be recovered from sample data. and therefore of E[uT E(uT IT )′ ].6 + Having s++hIT for we can ﬁnally construct the augmented forecasts yT +hIT ˆT + yT +hIT = Λˆ++hIT sT (21) that incorporates optimally in the modelbased framework the judgemental information coming from the conjunctural forecasters. The element on the right hand side of this equation can be calculated as follows. = uT − E(uT IT ) − ηT T . as the series for the uT ’s are readily available via the Kalman ﬁlter. we have determined also the variance of the expected value of the current shock given the information set IT . In order to implement the Kalman ﬁlter. Finally. (22) i. then ′ ′ E(uT wT T ) = E[uT E(uT IT )′ ] + E[uT ηT T ] = E[uT E(uT IT )′ ]. To recover all the elements of Σ0 . the space spanned by IT . we need to be able to recover the exact form of the covariance matrices Σ0 and Σh for h=1.e. Therefore. First notice that the following equality holds P E[uT yT F ′ ] ′ = E[uT (ΛGˆT −1T −1 )′ ] + E[uT E(uT IT )′ ] + E[uT ηT T ] s = E[uT E(uT It )′]. The second equality derives from the fact that uT ⊥ ΛGˆT −1T −1 by construcs tion and that ηT ⊥ uT by assumption. we are able to recover empirically the P value of E[uT yT F ′ ]. First of all let us point out that. notice that.4. E[uT E(uT IT )′ ] = E[E(uT IT )E(uT IT )′ ]. Its variance.estimates of the state variables s++hIT that comprise the extrainformation conT tained in the professional forecasts and employ it optimally within the model.2. and we showed it is equal to the covariance among the shock and its expected value. 8 . since ηT T ⊥ uT by assumption. Moreover. then uT = E(uT IT ) + µT where µT is orthogonal to the space spanned by IT . we will ﬁrst deﬁne the forecasters’ forecast error as eT P = yT − yT F = uT − wT T . In order to recover ′ ′ E[wT T wT T ] = E[E(uT IT )E(uT IT )′ ] + E[ηT T ηT T ].
...... i.2. s where PF PF P ˆ sT s++hIT = E(ˆ++hIT yT +h . E(eT e′ ) = T ′ E(uT u′ ) − E[E(uT IT )E(uT IT )′ ] + EηT T ηT T .y1 .3. Moreover. ˆ The Kalman gain K2h is timevarying and takes the form: for h=0. yT F .. K20 = Σ0 Σ0 12 22 9 −1 .. it is possible to recover the value of the variance of the typo ′ E[ηT T ηT T ]. from (23) and (24) we infer the following equation. Using the above ′ equations.. Let us turn back to the system of equations we smooth to obtain the augmented forecasts.4. h−1 wT +hT = −Λ j=1 Gj KuT +h−j − ΛGh K(uT − E[uT IT ]) + ηT +hT and KuT +h wT +hT ∼ WN 0 . T (25) The procedure to recover Σh . sT −1T −1 . 2.3 Modelconsistent weights for forecast pooling Now we discuss how the timevarying Kalman smoother we use to generate the ”judgmentaugmented” forecasts actually combines the judgmental forecasts with the purely modelbased forecasts. sT −1T −1 ). which can be ′ reshuﬄed to obtain E[ηT T ηT T ]..y1 . yT −1 .. initialized with s+−1IT = sT −1T −1 ˆT ˆ s++hIT = Gˆ++h−1IT + K2h aT +h ˆT sT PF yT +h (28) = ΛGˆT +h−1IT + aT +h . In period T we ﬁlter (26) and we generate a new innovations representation.e.. sT +hT +h = GˆT +h−1T +h−1 + KuT +h ˆ s PF yT +h (26) = ΛGˆT +h−1T +h−1 + wT +hT . s wT T = E[uT IT ] + ηT T (27) where for h=0 and for h=1. Σh 11 Σh 21 Σh 12 Σh 22 . ˆT PF PF PF PF P aT +h = yT +h − E yT +h yT +h .. yT +h−1 . . yT +h−1 .E(uT u′ ) can be obtained by the Kalman ﬁlter on the system of equations (7) T and (8) as follows E(uT u′ ) = ΛP Λ′ T Where P is the solution of the Riccati equation deﬁned in (11). for h=1. we can ﬁnally recover E[wT T wT T ] and we therefore have pinned down all the values of the matrix Σ0 .3. .4 is very similar. yT −1 .2. yT F . .
but it will downweigh them if the variance of their forecast errors is too large. 2. with P P aT = yT F − E yT F ˆ+−1IT . the weight given to the P judgmental forecast yT F is directly proportional to Σ0 . of course. in our setup we will be able to interpret the forecasts in light of the model and therefore somehow structuralize the forecasts. i.at all horizons considered. That is.e. the more the professional forecasters are able to gather information on the current period’s shock. let us consider the case of the nowcast: s+IT ˆT = P (I − K20 )GˆT −1T −1 + K20 yT F . sT is the estimate of the current period’s structural shocks made by the professional forecasters with their information set IT . Indeed K20 aT = E[KuT IT ]. Moreover. to the correlation 12 among uT and wT T . and inversely proportional to the variance of wT T Σ0 . In appendix we present some results for the case in which the PF are assumed to have some oﬀmodel information on current and future shocks up to four periods ahead. Let us have another look at model (28).and K2h = (GPhh−1 G′ Λ′ + Σh )(ΛGPhh−1 G′ Λ′ + Σh )−1 12 22 (29) otherwise. In order to understand how the Kalman ﬁlter is combining the purely modelbased forecast and the judgemental forecast. the more the Kalman ﬁlter will use the professional forecasts when combining the two forecasts. we can also derive [uT IT ] as follows: E[uT IT ] = (K ′ K)−1 K ′ E[KuT IT ] = (K ′ K)−1 K ′ (K20 aT ). In that case. Therefore the augmented nowcast for yT is + yT IT = Λˆ+IT sT PF = Λ(I − K20 )GˆT −1T −1 + ΛK20 yT +h s (31) Since K20 has the form described in equation (29). they do not necessarily coincide with the ”real” structural shocks. the weight associated to the judgmental forecast will be sizeable depending on the informational content. The assumption that the professional forecasters have information only on the current period shocks is crucial in determining the negligible weights assigned to the professional forecasts for h = 0. s (30) where s+−1IT = sT −1T −1 and GˆT −1T −1 is the purely modelbased forecast of ˆT ˆ s the state at time T. The element K20 aT . Similarly for higher horizons. these are the structural shocks the professional forecasters perceive given their information set. given this information. 10 . Storytelling is diﬃcult when it comes to judgmental forecasts. 22 That is.4 Using the model to interpret judgemental forecasts Another interesting aspect of this procedure is that it also allows to see the judgmental forecasts through the lens of the model.
(35) where the innovation εt is serially uncorrelated and normally distributed with mean zero and standard deviation σ.2. this can be very informative.g.Now. the agent increased the capital stock Kt+1 available in period t + 1 according to Kt+1 = (1 − δ)Kt + It . we can construct diﬀerent scenarios . we can evaluate exactly how much of the forecast is due to extra information on the current shocks and how much of it is instead due to measurement errors. In this economy there is only one ﬁnal good Yt and it is produced using capital Kt and labor Nt according to the constantreturnstoscale technology Yt = (At Nt )α Kt (1−α) . A representative consumer has preferences deﬁned over consumption Ct during each period t=1. In each period the representative agent decides how much of output Yt to consume and how much to invest.e. We will assume that labor is supplied inelastically and that there is no population growth: in such case Nt is constant for any t and we can normalize it to 1. Moreover. recalling that the professional’s forecasts are represented by equations (18)(19). as described by the expected utility function ∞ E i=1 β t ln(Ct ) (32) where the discount factor satisﬁes 0 < β < 1. (37) where δ is the depreciation rate and satisﬁes 0 < δ < 1. (36) By investing It units of output in period t. We there for rewrite equation (33) as Yt = Aα Kt t (1−α) . 3 An application In order to illustrate the methodology proposed in the previous section.. Nt = 1 for any t.e. i.. (33) where 0 < α < 1 and At is a laboraugmenting technological change process. we will apply it to a strippedtothebone version of an RBC model with unit root technology.. assume that the professional forecasters have extra information only on certain types of shocks but not on others .and compare them among each other. Let us brieﬂy outline the model. As we will show in Section 4. (34) The logs of the technology shock At follow a ﬁrst order autoregressive process of the form: ln(At ) = ln(At−1 ) + γ + εt . The standard method of analyzing models with steady state growth is to transform the economy into a stationary one where the dynamics are more 11 . subject to the resource constraint Yt = Ct + It ..
(42) where Rt is the gross rate of return of capital in t and c−1 = βEt [ t c−1 Rt+1 t+1 ]. it − yt (45) .drives all business cycle ﬂuctuations. kt .the aggregate technology shock εt . Any equation of the form (45) can be rewritten in terms of diﬀerences. involves dividing all variables in the system by the growth component. which is shown in great detail in King. equation (44) will change conformably to take into account that (46) is now deﬁned in terms of current and lagged states st . ct .tractable. The system of equations (39)(43) has an approximate solution of the form: xt+1 = kt+1 kt = G(θ)xt + 1 0 εt+1 (44) In general. The transformation. rt . with some exceptions that we will highlight in what follows. as shown in Appendix A. ct = At . (39) (40) (41) y t = ct + i t .. A particular feature of this model is that one shock .The stationarized model is very similar to the untransformed model.. it . A t t The stationary version of the model deﬁned by equations (32). simply by premultiplying everything with a suitable matrix: ∆yt 1 0 0 1 0 0 0 zt = 1 1 − L 0 Λxt .. yt = At . In this speciﬁc case the 12 where the expressions for the matrices G(θ) and Λ(θ) in terms of the structural parameters θ can be found in Appendix A. (34). (36) and (37) is ∞ max E i=1 β t ln(Ct ) (38) subject to yt = kt and (γA eεt+1 )kt+1 = kt + it . 1988b). (46) yt = ∆ct = 1 1 − L ∆it 1 0 1−L 1 0 1−L ∆yt zt = ct − yt = Λ(θ)xt . (35).. A linear approximation of this system can be derived loglinearizing it around its steady state.2. Plosser and Rebelo (1988a. The ﬁrst order conditions for this problem are −α Rt = [(1 − α)kt + (1 − δ)] (1−α) . eεt+1 (43) Equation (43) equates the marginal rate of substitution to the marginal product of capital for all t=1. kt = Kt and so on.. Equations (39)(43) form a system of ﬁve nonlinear stochastic diﬀerence equations in the model’s ﬁve variables yt . which in our setting corresponds to At . Y C t+1 t Let us start deﬁning E[ AAt ] = γA .
we will perform an outofsample realtime forecasting exercise. 1988b) and Ireland (2004). The covariance matrices Σi in (20) are instead estimated using a rolling window. we have made the somewhat heroic assumption that the population has been constant throughout the period considered. 13 . (47) ∆it 1 0 0 0 Λ and where Λ∗ = 1 1 − L 1 0 1−L vt+1 = Dvt + ξt+1 (48) for all t = 1. or error term. serially uncorrelated inno′ vations that is normally distributed with covariance matrix Eξt ξt = V and is uncorrelated with the innovation εt . We use the Survey of Professional Forecasters (SPF) as example of judgmental forecast. we need to make a choice about which revision to use. using values from King. For the estimation and the forecasting exercise we use realtime quarterly data for real GDP and real consumption for the US from the Philadelphia Fed realtime dataset. 2. using maximum likelihood. Due to the unavailability of realtime data on population. using as evaluation sample the period 19872004. We calibrated all the model’s parameters. while in the third speciﬁcation we allow vt to be a VAR(1).e. conducted by the Federal Reserve Bank of Philadelphia. and ξt is a vector of zero mean. so that the model now consists of (44). and then we take them as calibrated. the second one allows for autocorrelation (but not serial correlation) in vt . Plosser and Rebelo (1988a. is based on many individual commercial forecasts. We consider a sample period going from the second quarter of 1987 to the fourth quarter of 2004. Following Romer and Romer (2000). i. . All results are robust to the speciﬁcation of the measurement error vt .. The dataset covers the period 1947 through 2005 and the ﬁrst available vintage is 1965:Q4. since it already contains the lagged state. In what follows. therefore we will present results only for the model with autocorrelated but not crosscorrelated residuals. We have considered three possible speciﬁcations of vt . We also augment each equation in (46) with a serially correlated residual.state equation will not need to be rewritten. the ﬁrst one assumes vt is white noise. The results were very similar for each of the three speciﬁcations. We estimate the variance of technological shock and the parameters of the measurement error only once. All parameters are reported in Table 7 in Appendix A. The Survey of Professional Forecasters. The Survey is conducted near the end of the second month of each quarter and publishes forecasts for the current quarter and the next 4 quarters in the future. with the exclusion of the variance of the technological shock and and the parameters describing the measurement errors. ∆yt yt = ∆ct = Λ∗ xt + vt . we choose to use the second revision. An important datarelated issue regards the appropriate ”actual” series to use when comparing the various forecasts.. which is our best performing one. Because macroeconomics data is continuously revised. which are then grouped in mean or median forecasts.
2004:4 Forecast horizon RBC SPF Q0 0.01 the one done at the end of the subsequent quarter. see in particular Comment 4) show that by using rolling window 14 .86 0. 5%(∗∗) and 10%(∗) relative to constant growth EVALUATION SAMPLE: 1987:2 .92 0. and the standard error is corrected for heteroskedasticity and serial correlation over h1 months.88 ∗∗ 0. be conceptually similar to the series being forecast.84 ∗ ∗ ∗ 1.01 Q4 0. Asterisks denote forecasts that are statistically more accurate than he naive benchmark at 1% (∗ ∗ ∗).97 0.94 Q2 0. D is diagonal) and of the SPF relative to the naive benchmark.94 ∗∗ 0. Tables 2 and 3 report the performance in outofsample forecasting of the forecasts generated with the RBC model. Let us now present some forecasting results that will highlight the motivation of this paper.68 ∗∗ Q1 0. random walk in levels.e. our inference is based on the regression: (z − z m )2 − ht ˆht (zht − zht ˆnaive )2 = c + uht where z is the variable to be forecasted at horizon h using model m .95 relative to constant growth EVALUATION SAMPLE: 1996:2 . In the ﬁrst column of each table one can see the ratio of the mean square error of the purely modelbased forecast (RBC) against the mean square error of the naive benchmark. The estimate of c is simply the diﬀerence between forecastm and a Naive model MSFEs. i. Asterisks indicate a rejection of the test of equal predictive accuracy between each forecast and the naive benchmark. speciﬁed as having autocorrelated.80∗∗ Q1 0.85 ∗ ∗ ∗ 0.81∗∗ 0. This series does not include rebenchmarking and deﬁnitional changes that occur in the annual and quinquennial revisions and should. 7 7 Following Romer and Romer (2000).2.88 Q3 0.87 ∗∗ 1. while the second column reports the ratio of the mean square error of the SPF against the mean square error of the naive benchmark. therefore. for GDP growth and consumption growth respectively. but it is still roughly contemporaneous with the forecasts we are analyzing. This testing procedure falls in the DieboldMarianoWest framework.Table 2: Relative MSFE of forecasts of GDP growth with respect to naive benchmark. The second revision seems to be the appropriate series to use because it is based on relatively complete data. but not serially correlated residuals (i.94 Q4 0. Section 3.2004:4 Forecast horizon RBC SPF Q0 0. Each table is divided in two subtables which consider the full sample period 1987:Q22004:Q4 and the subsample 1996:Q22004:Q4.84∗ ∗ ∗ 1. We will compare all the forecasts of GDP and Consumption growth to a naive benchmark: the constant growth model.01 Q3 0.77 Q2 0. and Giacomini and White (2006.e.
Table 3: Relative MSFE of forecasts of Consumption growth with respect to naive benchmark.03 Q4 0. and the same holds for SPF. Figure 1 reports the smoothed forecast errors for the nowcast of GDP (centered moving average 4 quarters on each side) over the full sample period. The modelbased forecasts of consumption growth are poorer in the very short run than in the medium term (4 quarters ahead). Asterisks denote forecasts that are statistically more accurate than he naive benchmark at 1% (∗ ∗ ∗).06 Q2 0. judgmental forecasts have performed quite bad. Over the full sample period 1987:Q22004:Q4.92 ∗∗ 0. Particularly during recessions. SPF forecasts of consumption growth outperform the modelbased forecasts in the nowcast and 1 period ahead. The two results on consumption growth point to the addedvalue of enforcing accounting identities. 15 . 2005).86 ∗∗ 1.70 Q1 0. we notice the model seems to perform better at all horizons excluding the nowcast. Considering the subsample 1996:Q22004:Q4. Kiley and Laforte (2006).99 Q4 0. In the last part of the sample. Model and judgement seem to contain information that is useful in diﬀerent points in time: in some periods the model does better than the benchmark and in others it does worse. A similar result can be found in Edge.94 ∗ 1.08 Over the full sample period 1987:Q22004:Q4.03 Q3 0.89 ∗∗ 0. where the authors compare the forecasting performance of a richly speciﬁed DSGE models to the Greenbooks in the sample 19962000 and ﬁnd that the model’s forecasting performance is comparable to the one of the Greenbooks. and particularly between 1996 and 2000.94 ∗ 0. but their advantage reduces as the forecasting horizon grows. while the model seems to fare acceptably.87 ∗∗ 1. 5%(∗∗) and 10%(∗) relative to constant growth EVALUATION SAMPLE: 1987:2 .94 ∗∗ 0. as we do here.00 relative to constant growth EVALUATION SAMPLE: 1996:2 .89 Q2 0. estimators.94 Q3 0. judgement seems to fare much better than the model (a similar result can be found in Giannone.96 Q1 0. the limiting behavior of this type of tests is standard. and never in the subsample 1996:Q22004:Q4. Reichlin and Sala.2004:4 Forecast horizon RBC SPF Q0 0.89 ∗∗ 1. especially in the mediumterm. and therefore standard asymptotic theory can be used for inference on the diﬀerence in predictive accuracy. SPF forecasts of GDP growth outperform the modelbased forecasts at all horizons.2004:4 Forecast horizon RBC SPF Q0 0.88 ∗∗ 1.97 0.
4 1.6 1. Kiley and Laforte use for GDP growth. In the following section we present the results obtained when applying the methodology proposed in Section 2 to the toy model presented above and using the SPF forecasts. is not model compatible. 4 Results of the Forecasting Exercise The main goal of this section is to present modelbased forecasts for real GDP and real consumption that can account for the judgmental information contained in the SPF forecasts.2 1 0. Figure 1 highlights how performances of the purely modelbased and the judgmental forecasts do not seem positively correlated.e.Figure 1: NOWCASTS: Smoothed Square forecast errors NOWCAST: Smoothed Square Forecast errors (relative to constant growth) 1. not only the richly speciﬁed DSGE model that Edge.8 1.8 0.6 0. a random walk. it is plausible that combining the two forecasts can be advantageous.8 Furthermore.e. 8 It 16 . The method I propose indeed allows for a modelbased and hence interpretable averaging of the two forecasts. deeming better a forecast with a smaller MSE. but rather they seem to somehow counterbalance each other. i. is worth stressing that the naive benchmark model Edge.2 Q1−90 Q3−92 Q1−95 Q3−97 Q1−00 Q3−02 SPF RBC AUG In fact.4 0. We will perform outofsample forecasting exercises on the full evaluation sample 1987:Q22004:Q4 and on the subsample 1996:Q22004:Q4. i. We will compare their performance on the basis of their mean square forecast error. Therefore. Kiley and Laforte (2006) propose. but also the toy model I use seem to fare better than judgment in that period.
81 ∗∗ 0.85 ∗∗ Q2 0. Table 9 reports the weights that the ﬁlter gives to the judgmental forecasts of real GDP growth and real consumption growth when constructing the augmented forecasts.68 ∗∗ Q1 0.85 ∗ ∗ ∗ 0. 5%(∗∗) and 10%(∗) relative to constant growth EVALUATION SAMPLE: 1987:2 .01 0.2004:4 Forecast horizon RBC SPF AUG Q0 0.01 0.01 0. since that is where the judgmental forecasts help.92 0. Clearly.88 0. In appendix we present some results for the case in which we assume that the professional forecasters have some extrainformation also on future shocks.94 0.85 ∗ ∗ ∗ Q4 0. the augmented forecasts outperform quite consistently the modelbased forecasts.88 ∗∗ 0.84 ∗ ∗ ∗ 1.94 0. Table 8 reports the mean square forecast error (MSE) of the purely modelbased forecasts. when forecasting GDP growth. i. For higher horizons. (47) and (48).assuming that the professional forecasters have extrainformation on the current period.84 ∗∗ Q2 0. since the Professional Forecasters are assumed to have information only on current shock. the SPF and the augmented forecasts relative to the naive model (random walk in levels). with vt is autocorrelated. We construct the augmented forecasts following the procedure described in previous Section . and therefore the augmented forecast is very similar to the purely modelbased one.97 relative to constant growth EVALUATION SAMPLE: 1996:2 .83 ∗ ∗ ∗ Q3 0. The results however are robust to the speciﬁcation of the measurement error.80 ∗∗ 0.88 ∗∗ The purely modelbased forecasts are generated with model (44). Asterisks denote forecasts that are statistically more accurate than he naive benchmark at 1% (∗ ∗ ∗).84 ∗ ∗ ∗ 1.87 ∗∗ 1. the weight associated to the judgmental forecast is very small. D is diagonal.e. the ﬁrst column gives the weights associated to the SPF forecasts of real GDP growth when constructing the augmented forecast of real GDP growth.75 ∗ ∗ ∗ Q1 0. This result crucially depends on the assumptions made on the information set of the professional forecasters.94 ∗∗ 0. but not serially correlated. In particular.94 ∗ Q4 0.95 0.91 Q3 0. the second column instead gives the weights associated to the SPF forecasts of real consumption growth when constructing the augmented forecast of real consumption growth.97 0.77 0.86 0. The greatest gain is achieved in the nowcast.Table 4: Relative MSFE of forecasts of GDP growth with respect to naive benchmark. the judgmental forecast receives a signiﬁcant weight in the construction of the augmented forecasts only 17 .2004:4 Forecast horizon RBC SPF AUG Q0 0.68 ∗∗ 0. In all samples considered.
Moreover. when forecasting consumption growth. 2004:4 ∆GDP ∆CON S nowcast 0. Figures (2)(8) report the nowcasts and the forecasts.0185 0. the dotted line portrays the purely modelbased forecasts. since we have assumed that the SPF have extra information only regarding the current period. Let us brieﬂy compare these results with the ones reported in appendix for the case in which the SPF are assumed to have extra some extrainformation up to 4 periods ahead. Giannone and Surico (2006). Table 6 reports the mean square forecast error (MSE) of the purely modelbased forecasts.0078 0. This is not surprising. the SPF and the augmented forecasts relative to the naive model (random walk in levels). the augmented forecasts built under the assumption that the PF have some extrainformation on the current and future shocks perform much better that the augmented forecast obtained by assuming that the PF have extramodel information only on the current shock.0341 0. the opposite is true in the subsample 1996:Q22004:Q4. in the full sample. the dashed line corresponds to the augmented forecasts. In each ﬁgure. thus the informational value of their forecast is much lower at horizons higher than the nowcast and will therefore be given less weight. In the latter case. while in the subsample 1996:Q22004:Q4 they are better represented by assuming they have extrainformation only on the current shock. The above results point out to the decline in predictability highlight in D’Agostino.0018 0.1038 0. notice that the weight associated to the SPF forecast of consumption growth is smaller than those associated to the SPF forecast of GDP growth.Table 5: Weight given to the SPF forecast of ∆GDP and ∆CON S in the augmented forecast .0020 0.3941 0. This indicates that the SPF are better modeled by assuming that they have extramodel information on current and future shocks in the full evaluation sample. plotted against the data. The latter forecasts becomes more and more similar to the modelbased forecast as we increase the forecasting horizon. the green/grey solid line represents the actual data. This indicates that the extramodel information accessed by the professional forecasters is more informative on GDP than on consumption. The reason is that the weights given by the Kalman ﬁlter to the SPF tend to zero as their information content goes to zero.2574 1 step ahead 2 step ahead 3 step ahead 4 step ahead 0. Finally. This would clearly not be the case under diﬀerent assumptions for the information 18 .0019 0.0014 in the nowcast. the solid line with circles is the series of the SPF forecasts. the augmented forecasts beat both the SPF forecasts and the modelbased forecasts at all horizons over the full evaluation sample 1987:Q22004:Q4. Interestingly.
86 ∗∗ 1. the green line is the measurement error they perceive on GDP growth. Similarly for higher horizons. the red lines represent the augmented augmented nowcast (and its conﬁdence bands).06 0. 2005). then the SPF could be given sizeable weights (see Tables in Appendix) also at horizons higher than than the current quarter.99 0. according to the extent that they will reduce uncertainty surrounding the estimation problem faced by the agents (as in Coenen.87 ∗∗ Q4 0.94 ∗ 0. but also on future ones. we can infer the type of shocks the Professional Forecasters saw when performing their forecasts.96 0. This means that there is less uncertainty when nowcasting using also the SPF.87 ∗∗ 1.94 0. Figure 9 plots the nowcasts and their conﬁdence bands9 . Finally.89 ∗∗ Q2 0.2004:4 Forecast horizon RBC SPF AUG Q0 0. while the black dasheddotted lines are its conﬁdence bands.91∗∗ Q2 0.08 0.00 0. we can make some considerations regarding information content of the SPF forecasts.89 0.93 ∗ Q4 0.89 ∗∗ 1. It is clear from the picture that the conﬁdence bands for the augmented nowcast are smaller than the ones of the purely modelbased nowcast. we consider only the uncertainty in the estimation of the state and assume that the parameters.70 0. 5%(∗∗) and 10%(∗) relative to constant growth EVALUATION SAMPLE: 1987:2 .2004:4 Forecast horizon RBC SPF AUG Q0 0.97 0. The dotted line represented actual real GDP growth. while the red line is the measurement error they perceive on real consumption growth.Table 6: Relative MSFE of forecasts of Consumption growth with respect to naive benchmark. Levin and Wieland.84 Q1 0. estimated previously. Figure 10 reports actual real GDP growth and the diﬀerent shocks they perceived while doing their nowcast. Second.94 ∗ 1.03 0. The blue line is the technological shock as perceived by the SPF.93∗∗ Q3 0.87 ∗∗ Q3 0. First of all.93 ∗ relative to constant growth EVALUATION SAMPLE: 1996:2 . Similarly.94 ∗∗ 0.88∗ Q1 0. are known 19 .86∗∗ set of the SPF.89 ∗∗ 0. Asterisks denote forecasts that are statistically more accurate than he naive benchmark at 1% (∗ ∗ ∗).03 0. and therefore that the SPF are indeed somewhat informative on the current state of the economy. The black solid line is the purely modelbased nowcast. In this very simple model the only shocks diﬀerent 9 While constructing these conﬁdence bands.92 ∗∗ 0. If. as in appendix.88 ∗∗ 1. we report few illustrative results on the ”structural” analysis of the SPF forecasts. we assumed that they have some extramodel information not only on current shocks.
The line marked with plus signs is the nowcast the SPF would have made if they would have had no information on any of the shocks. the solid line with circles is the series of the SPF forecasts. This means that this speciﬁc movement in GDP growth was most probably due to a technological shock. The actual SPF nowcast for that quarter is very close to the actual ﬁgure and. etc. the starred line is the actual SPF nowcast. the dotted line portrays the fully modelbased forecasts. Figure 11 reports. the line with squares plots the nowcast they would have made if they had extrainformation only on the technological shock.. ﬁscal. the black solid line represents actual real GDP growth.5 1 0. From this ﬁgure we can extract some very relevant information.that the SPF have.5 0 −0. .EVALUATION SAMPLE 1987:22004:4. when applied to a more elaborate DSGE model. the the actual SPF nowcast and the nowcasts they would have done if they had only parts of the information they actually have. Similarly. allow for understanding the perception of the diﬀerent shock .5 Q1−90 Q1−95 Q1−00 Figure 2: Nowcast for GDP . However the proposed procedure would. in the third quarter of 20 . the dashed line corresponds to the the augmented forecasts from the technological shocks are the shocks to the residual term that is meant to describe all the dynamics that is not captured by the RBC model.1. interesting counterfactual exercises can be done in this setup. coincides with the nowcast the SPF would have made if they only saw the technology shock (green line). for the period 1987:21994:4.monetary. while the line with diamonds is the nowcast they would have made if they had extrainformation only on the measurement error shocks. interestingly. For example. Finally. On the other hand. consider the fall in actual GDP growth in the last quarter of 1989. In particular. the nowcast the SPF would have made if they only saw the ”measurement error” shock (red line) coincides with the one they would have made if the had no extrainformation at all (blue line). The green/grey solid line represents the actual data.
the dashed line corresponds to the the augmented forecasts 1.5 1 0.4 1.6 0.1.2 1 0.5 0 −0. The green/grey solid line represents the actual data. the dotted line portrays the fully modelbased forecasts.4 Q1−98 Q1−00 Q1−02 Q1−04 Figure 4: Nowcast for GDP .2 −0.PERIOD I 1987:21996:1. the dashed line corresponds to the the augmented forecasts 21 .PERIOD II 1996:22004:4. The green/grey solid line represents the actual data.8 0. the dotted line portrays the fully modelbased forecasts.6 1. the solid line with circles is the series of the SPF forecasts. the solid line with circles is the series of the SPF forecasts.2 0 −0.4 0.5 Q1−88 Q1−90 Q1−92 Q1−94 Q1−96 Figure 3: Nowcast for GDP .
the dashed line corresponds to the the augmented forecasts 1. The green/grey solid line represents the actual data. The green/grey solid line represents the actual data.5 Q1−90 Q3−92 Q1−95 Q3−97 Q1−00 Q3−02 Figure 6: Forecasts 2 step ahead for GDP .5 Q1−90 Q3−92 Q1−95 Q3−97 Q1−00 Q3−02 Figure 5: Forecasts 1 step ahead for GDP .EVALUATION SAMPLE 1987:22004:4.5 0 −0.5 1 0.5 1 0. the dotted line portrays the fully modelbased forecasts.5 0 −0. the solid line with circles is the series of the SPF forecasts. the dashed line corresponds to the the augmented forecasts 22 .1.EVALUATION SAMPLE 1987:22004:4. the solid line with circles is the series of the SPF forecasts. the dotted line portrays the fully modelbased forecasts.
the dashed line corresponds to the the augmented forecasts 23 .EVALUATION SAMPLE 1987:22004:4. The green/grey solid line represents the actual data.5 1 0. the solid line with circles is the series of the SPF forecasts.1. the dotted line portrays the fully modelbased forecasts.5 0 −0. the dashed line corresponds to the the augmented forecasts 1.5 1 0. the solid line with circles is the series of the SPF forecasts.5 Q1−90 Q1−95 Q1−00 Q1−05 Figure 7: Forecasts 3 step ahead for GDP . TThe green/grey solid line represents the actual data.5 0 −0.5 Q1−90 Q1−95 Q1−00 Q1−05 Figure 8: Forecasts 4 step ahead for GDP . the dotted line portrays the fully modelbased forecasts.EVALUATION SAMPLE 1987:22004:4.
When constructing the conﬁdence bands. Similarly. but no parameter uncertainty.6 1. The black solid line is the purely modelbased nowcast. the blue lines represent the augmented nowcast (and its conﬁdence bands).2 Q1−90 Q1−95 Q1−00 Figure 9: Nowcasts with conﬁdence bands (no parameter uncertainty) 1987:22004:4.8 0. the current period’s shock.6 0. while the black dasheddotted lines are its conﬁdences bands.4 0. 24 .2 0 −0.4 1. we only consider the uncertainty in the estimation of the state.Nowcasts and confidence bands (no parameter uncertainty) 0 quarters ahead 1.2 1 0.
We have also highlighted how to infer the information content of the SPF forecasts from the weights that the Kalman ﬁlter assigns to them. the weights given by the Kalman ﬁlter to the SPF go to zero as their information content goes to zero. we then have shown how they can be accounted for in the framework of a linearized and solved DSGE model. but it is any how quite informative.5 Q1−90 Q1−95 Q1−00 Figure 10: Current structural shocks perceived by the SPF when nowcasting 1987:22004:4. The methodology we propose allows generating forecasts that are more accurate than the purely modelbased ones. but that are still disciplined by the economic rigor of the model. made with a diﬀerent. the more the professional forecasters are able to gather information on the shocks. 5 Conclusions and Extensions In this paper we have proposed a method to incorporate judgmental information. possibly more informative. this exercise would be much more interesting if we developed it in a setup in which we were able to distinguish.5 techonology measurement error on Dy measurement error on Dc Dy 1 0. the variation of the GDP was certainly not due to technology. Of course. then. the SPF nowcast coincides with the nowcast the SPF would have made if they only saw the ”measurement error” shock (red line). for example. In particular. the green line is the measurement error they perceive on GDP growth. into modelbased forecasts. information set.1. while the red line is the measurement error they perceive on real consumption growth. while have information on the technological shock is like having no information. More precisely. 1992. monetary or ﬁscal shocks. The dotted line represents actual real GDP growth. In this case. The blue line is the technological shock the SPF see. We suggested modeling the professional forecasts as optimal estimates of the variables of interest. proxied by professional forecasts.5 0 −0. but due to the ”measurement errors” and some other factors. the more the Kalman ﬁlter will use the professional forecasts when combining them with the predictions from the 25 .
The black solid line represents actual real GDP growth. the dasheddotted line is the actual SPF nowcast.5 1 0. 26 . the green line plots the nowcast they would have made if they had extrainformation only on the technological shock.5 0 −0.5 Q1−88 Q1−90 Q1−92 Q1−94 Figure 11: Counterfactual exercise nowcast.1. while the red line is the nowcast they would have made if they had extrainformation only on the measurement error shocks.1987:21994:4. The blue line is the nowcast the SPF would have made if the would have had no information on any of the shocks.
the extent to which the perception of certain shocks helped them forecasting so well and infer if the movements in the data were due to. we will be able to apply the methodology we propose to richer DSGE models. More importantly. we cannot solve the model ﬁrst. the estimate of inﬂation and output made using judgmental information should feedback into the model via the policy rule. with more shocks. If for example we wanted to include a policymaker that targeted current inﬂation and current output gap. This extension can be implemented using the extension of the Kalman ﬁlter proposed by Svensson and Woodford (2003) that allows to do signalextraction with forwardlooking obsverables10. We diﬀer from Boivin and Giannoni (2005) in that we consider a large dataset containing intraperiod data. in a joint paper with Domenico Giannone and Lucrezia Reichlin we allow for the timely information to enter the model directly. First. we will be able to understand. as we did in our simple RBC case. because. in order to really capture the eﬀects of timely information. we are also working on reformulating the problem so to be able to account for the possibility that the forecasts feedback into the model. Then. not processed by the professional forecasters. but it will downweigh them if the variance of their forecast errors is too large. 10 Since the forecasts feedback into the model. a technology or a monetary shock. it will allows us to understand which types of shocks the professional forecasters perceived. in the cases in which the professional forecasters forecasts where very close to the actual data. say. This can be very interesting from a storytelling perspective. We were able to extract the structural shocks as they were perceived by the professional forecasters and to make several counterfactual exercises on the forecasts the professional forecasters would have done if they saw only some of the shocks. Once we have reformulated the problem. as we have sketched in the simple application we consider in this paper. We working on several extensions to this paper. and then forecast.model. 27 . Finally we have described how to interpret the forecasts through the lens of the model. For this reason we will need to deal with forwardinglooking observables.
74(6=. ”Forecasting industrial production and the early detection of turning points”. White (2006). and H. and L. J. Mariano (1995). P.M. Kiley. Econometrica. O. [15] Friedman. G. Sargent (1996): “Mechanics of Forming and Estimating Dynamic Linear Economies. Hans M. Lupi (2004). Sargent (2005). 13.” European Economic Review. G. (2005). 29. Giannone and P. ”(Un)predictability and Macroeconomic Stability”. Levin and V. F. Hallin. Simple ReducedForm Models and a DSGE Model. L. Surico (2006). 647671 [8] Campbell. R. Journal of Political Economy. E. and J. T. 69. vol. ”Data Revisions are not Well Behaved” EABCN/CEPR Working Paper Series 21/2005. ”Tests of Conditional Predictive Ability”. R. [16] Giacomini.” Review of Economics and Statistics 82:4. Amman. John Y. Morozov and T.M. ”The Generalized Dynamic Factor Model: Identiﬁcation and Estimation.”. D. R. NorthHolland [4] Aruoba. 18931925. 463506.S. K. Scott (2005). ”Bayesian fan charts for U. 9751006. Moore (1979). and R.”Comparing Predictive Accuracy.. B. 15451578 28 . M. Lippi. M (1961). ”A Data Uncertainty and the Role of Money as an Information Variable for Monetary Policy. MCGrattan and T. J. A. Econometrica 48(5). A.B. ed. Rust.” Journal of Monetary Economics 33. and M.X. by D. (2005).T. ‘Taking DSGE Models to the Policy Environment’ [2] Anderson B. 49(4). 13051311 [6] Boivin. ” Inspecting the Mechanism: An Analytical Approach to the Stochastic Growth Model. and C.. [11] D’Agostino. 44766. Piscitelli and A. Reichlin (2000). ”DGSE in a DataRich Environment” [7] Bruno. Kahn (1980). 540—554.D. 253265 [13] Edge. Volume 1. ”The lag in eﬀect of monetary policy”. ECB Working Paper No 605 [12] Diebold. [9] Coenen.” in Handbook of Computational Economics. S.. pp.. [5] Blanchard.” Journal of Economic Dynamics and Control 29. [10] Cogley. M. and C.References [1] AlvarezLois. and J.. Laforte (2006) . Journal of Business Economics and Statistics.K. 171–252. Empirical Economics. Wieland. L. inﬂation: Forecasting and sources of uncertainty in an evolving montary system.J.”.” Federal Reserve Board (mimeo) [14] Forni. R. M. Hansen. E.. P. Giannoni (2005). (1994).O. M. ”The Solution of Linear Diﬀerence Models under Rational Expectations.”A Comparison of Forecast Performance between Federal Reserve Staﬀ Forecasts. Harrison. J. Optimal Filtering [3] Anderson..
New Directions. ”Indivisible labor and the business cycle. R. Shapiro. S.D. and G. D. (1994). L... D. 6.E.P. Press. C. L. Growth and Business Cycles: 2.G. Elsevier. ”Energy Cconsumption. Whiteman. Plosser et S. [18] Giannone. 21. Discussion Papers..”Econometric Models and the Monetary Policy Process” . pages 12051226 [22] Hansen. 309341. ”A method for taking models to the data.R.J. Princeton University [20] Ingram. Reichlin.D. vol. Wilcox (1997).R. ”Production. D.” CEPR Discussion Papers 5178. Discussion Papers. [21] Ireland. [27] Marchetti. T. T.N. C.F.. 14051423 [26] Mankiw. Tallman and C. (2005)”Forecasting using Relative Entropy.D. Rebelo (1988a).G. and Sala.E. (2004). [25] Klein.Incorporation and Evaluation” [30] Reifschneider.N.. (2005). G.K..” Journal of Monetary Economics 16. ”Judgement and Fan Charts .” Journal of Economic Dynamics and Control. 47. ”Nowcasting GDP and Inﬂation: The Real Time Informational Content of Macroeconomic Data Releases. P. (1986).309 –327.Princeton. Parigi (1998).309 –327. 287299. D. J. (2006). Plosser et S.415 –428. ”The role of judgment in macroeconomic forecasting accuracy. Survey Data and the Prediction of Industrial Production in Italy”. The Basic Neoclassical Model. R..” Journal of Money Credit and Banking 37..D. CarnegieRochester Conference Series on Public Policy. P. Growth and Business Cycles: 1. ”Monetary Policy in Real Time. ”Using the Generalized Schur Form to Solve a System of Linear Expectational Diﬀerence Equations”.E.R.B. and Small.J. Banking 37. pp. ”Production. L. 383402. G. D... ”Explaining business cycles:a multipleshock approach. (2005). 66 . I. C. Rebelo (1988b). [19] Hamilton. (2000).” Survey of Current Business.. C. 28(6). No 342 [28] McNees.” Journal of Monetary Economics. (1994) Time Series Analysis. Reichlin. 383402. I.. Stockton and D.” Journal of Monetary Economics. P.N. and M.137 [31] .Kocherlakota.W. vol.[17] Giannone. Robertson. Journal of Monetary Economics” 34.Journal of Monetary Economics 16.” CEPR Discussion Papers 4981.Savin.(1985). [24] King. 2025. (1990). 21. 195232. 29 . ”News or Noise: An Analysis of GNP Revisions. ¨ [29] Osterholm..J. E. N.” International Journal of Forecasting. [23] King.P.NJ. Tem di Discussione del Servizio Studi della Banca d’ Italia. Journal of Economic Dynamics and Control 24(10)..
E. ”Forecasting Inﬂation. Spindt. L.A (2003).” Computational Economics. C. 429457 [33] Sargent. Woodford (2003). ”Two Models of Measurement and the Investment Accelerator”. NBER Working Paper 11167 [41] Svensson. and Friar. ”Monetary Policy with Judgement: Forecast Targeting”. L. 691720 [44] Tinsley. vol. NBER Working Paper 11392 [42] Svensson. ”Indicator Variables for Optimal Policy”.H. [37] Sims. T. P. [40] Svensson. ”Optimal Policy Projections”.[32] Romer. L. Williams (2005). (1989). J.251287 [34] Schuh. 3.. L.E.” Journal of Econometrics. ”An Evaluation of Recent Macroeconomic Forecast Errors”.O and R. ”Indicator and ﬁlter attributes of monetary aggregates : A nitpicking case for disaggregation. and D.J.2. NBER Working Paper 11733 [43] Svensson. 97.J. ”Federal Reserve Information and the Behavior of Interest Rates”. [39] Stock. F.O. E.an application to the euro area. 2002:2 163 [38] Stock. ”Solving Linear Rational Expectations Models. A.” Working Paper Series 389.E. ”Macroeconomic Forecasting Using Many Predictors”. and M. S. Journal of Political Economy. P. Tetlow (2005).” Journal of Monetary Economics 44. Journal of Monetary Economics 50. C. Romer(2000). 14(1). and M.. Springer. Watson (1999). and R. 120. W. Wouters. J. 6191 30 .E. 90.. and N.H. A. ”The role of models and probabilities in the monetary policy process. European Central Bank [36] Sims. M.” Brooking Papers on Economic Activity.O. and M.. (2001). New England Economic Review [35] Smets. ”Monetary Policy with Model Uncertainty: Distribution Forecast Targeting”. 20(12).O (2005).A (2002). (2004) ”Forecasting with a Bayesian DSGE model . 293335. Elsevier. 1980. C. W. Watson (2002). The American Economic Review.
6 Appendix A: Solving the RBC model yt = kt (1−α) The equilibrium conditions for the optimization problem are . yt ˆ ˆ kt+1 yt ˆ ˆ rt = = = = I C ct + ˆt ˆ i Y Y I ˆ 1−δˆ it + kt − εt γA K γA (1 − α)ˆt y 1−α Y ˆ (ˆt − kt ) y R K Et [−η(ˆt+1 − ct ) + rt+1 ] c ˆ ˆ 0 = ˆ Manipulating it a bit. xt = ln( xt ). R= γη η γA β A Y β +1−δ = K 1−α I = γA − (1 − δ) K C (1 − α)(γA − (1 − δ)) =1− Y R − (1 − δ) (1 − α)(γA − (1 − δ)) I = Y R − (1 − δ) A linear approximation of system (49)(53) can be derived loglinearizing it around its steady state. this system can also be rewritten all as a function of kt+1 . ˆ ˆ kt+1 = ˆ λ1 kt + λ2 ct − εt ˆ λ3 ˆ Et kt+1 η ˆ (1 − α)kt 31 (54) (55) (56) Et [∆ˆt ] = c yt ˆ = . the economy converges to the following deterministic balanced growth path/steady state that can be obtained from equations (49)(53) dropping the time subscripts and through some simple manipulation. i. (γA e )kt+1 = kt + it .e. (49) (50) (51) (52) (53) y t = ct + i t . Let us deﬁne. With some manipulation we obtained the following linearization for the system (49)(53). the ˆ X logdeviation of xt from its steady state X. η γA eε In absence of shocks. −α Rt = [(1 − α)kt + (1 − δ)] εt c−η = t c−η Rt+1 β Et [ t+1t+1 ]. for and variable x. ct and εt+1 .
A positive 1% technological innovation in the untransformed economy leads to a 1% decline in the transformed economy’s capital stock. C. but given the simplicity of our model we do not need to resort to elaborate methods. If we consider the variables in levels. That is. etcetera. = (1 − α) (59) 2λ2 λ1 + λ2 πck .where λ1 λ2 λ3 R γA R + α(1 − δ) = 1− γA (1 − α) α(R − (1 − δ)) = − R = Various methods are available for solving linear diﬀerence models like (54)(56) under rational expectations. Therefore we assume that ˆ kt+1 ct ˆ = = ˆ µkt − εt+1 ˆ πck kt (57) (58) ˆ and similarly. then the solution to the transformed economy is particularly simple. we can write ln yt ln ct ln it = = = at + ln(Y ) + yt ˆ at + ln(c) + ct ˆ ˆt at + ln(I) + i Since we are in fact looking at a balanced growth path rather than a steady state. Therefore we obtain ˆ kt+1 yt ˆ ct ˆ where πck = 1 −(λ1 − 1 + η )λ2 λ3 + 1 (λ1 − 1 + η )λ2 λ3 )2 + 4λ2 λ1ηλ3 ˆ = µkt − εt+1 πck ˆ kt . we can only pin down 32 .g. if technology is a logarithmic random walk with drift. we cannot recover the steady state values Y. The only impact of technological progress is to reset the transformed economy’s capital stock relative to its longsum stationary level. e. we can simply apply the method of undetermined coeﬃcients as described.. in Campbell (1994). we ˆ ”guess” the functional form of kt+1 and ct and then we verify it by ﬁnding ˆ parameters that satisfy the restrictions of the approximate loglinear model. I. As pointed out in King. we obtain a second order equation for πc k which has only one positive solution (as required by the problem). Plosser and Rebelo (1988b). µ = In order to be able to bring the model to the data we still need to work on it a bit more. Substituting (57) and (58) into (54)(56) ˆ and manipulating. yt = πyk kt .
01 calibrated η 1 (log utility) calibrated γ 1. 7 Appendix B: alternative modeling of the professional forecasters Here I show how to construct an augmented forecast that extracts and accounts for the information contained in professional forecasts under the assumption that the professional forecasters have extramodel information on current and future 33 . Table 7: Parameter values for the model with the autocorrelated but not crosscorrelated residual.667 calibrated δ 0.01 estimated Dyy 0.C Y some ratios as Y and K . Table 7 reports the parameters values for the model with the autocorrelated but not crosscorrelated residual. in order to relate the model (59) to the data. we rather look at ∆(ln yt ) ln ct − ln yt ln it − ln yt Finally we obtain ˆ kt+1 ˆ kt = ˆ ˆ = ln γA + (α + µ − 1)kt−1 − αkt C ˆ = log( ) + [πck − (1 − α)]kt Y I ˆ = log( ) + [πik − (1 − α)]kt .01 estimated Vcc 0. allowing the procedure for evaluating the likelihood function to continue using the Kalman ﬁltering algorithms outlined.0075 estimated Vyc 0. Parameter value β 0.988 calibrated α 0. Y (60) µ 0 1 0 in the form of a statespace econometric model. Therefore.2215 estimated Vyy 0.0483 estimated ∆ ln yt ln ct − ln yt = ln it − ln yt ln γA −α log( C ) + πck − (1 − α) Y I πik − (1 − α) log( Y ) ˆ kt ˆt−1 k + −1 0 εt+1 α+µ−1 0 0 (61) ˆ kt ˆ kt−1 . but diﬀer slightly in the estimated parameters.6692 estimated Dcc 0. All other speciﬁcations have the same calibrated parameters. for example.Chapter 13).0072 calibrated 2 σε 0. by Hamilton (1994.
as before. s where wT T = E[uT IT ] + ηT T and KuT wT T Σ0 = ∼ WN 0 . 1. 3. For h=0. the measurement error (the typo) made by the forecasters in T while reporting their forecast for period T . for h = 0. 2. (64) = ΛGˆT +h−1T +h−1 + wT +hT s where h wT +hT = Λ i=1 Gi K [E(uT +h−i IT ) − uT +h−i ] + E[uT +h IT ] + ηT +hT KuT +h wT +hT (65) ∼ WN 0 . 4 we have sT +hT +h ˆ = GˆT +h−1T +h−1 + KuT +h s h E[yT +h IT ] = ΛGˆT −1T −1 + s i=1 ΛGi KE[uT +h−i IT ] + ηT +hT . Σh ′ KE(uT +h wT +hT ) ′ EwT +hT wT +hT and Σh = KE(uT +h u′ +h )K ′ T (wT +hT u′ +h )K ′ T . 4 ′ E[uT +h IT +h ] = 0. 3. Σ0 (63) where KE(uT u′ )K ′ T E(wT T u′ )K ′ T ′ KE(uT wT T ) ′ EwT T wT T . but for now ignore the potential inconsistency arising from the fact of assuming that professional forecasters have information on future shocks. while agents do not and fail to look at the professional forecasters to have more information. sT T = GˆT −1T −1 + KuT ˆ s E[yT IT ] = ΛGˆT −1T −1 + wT T . For h = 1. the forecasters will report the following statespace form.11 Their information set IT ⊇ MT and is such that. as before. 34 . the forecasts will instead be diﬀerent. We assume 11 We acknowledge. 2.shocks. the measurement error (the typo) made by the forecasters in T while reporting their forecast for period T + h. ηT +hT is. ηT T is. As above. The professional nowcast coincides with the one generated under the assumption that the forecasters have information only on the current period. (62) In this case.
for h=0. Sp(IT ).4. (68) i. it is easy to show that ′ E(uT +h wT +hT ) = E[uT +h E(uT +h IT )′ ]. notice that. The variance of eT +h . As the series for the uT +h are readily available via the Kalman ﬁlter. the following equality holds E[uT +h E(yT +h IT )′ ] = E[uT +h E(uT +h IT )′].4. then uT +h = E(uT +h IT ) + µT where µT is orthogonal to the space spanned by IT . Since the nowcasts coincides with the ones of the previous subsection. Therefore.3. ′ In order to recover EwT +hT +h wT +hT +h we will ﬁrst deﬁne the forecasters’ forecast error as eT +h = = yT +h − E(yT +h IT ) uT +h − wT +hT . and that ηT +hT ⊥ E(uT +i IT ) for any i. sT (67) that incorporates optimally in the modelbased framework the judgemental information coming from the conjunctural forecasters. and therefore of E[uT +h E(uT +h IT )′ ]. since E(uT +h IT ) is a linear projection of uT +h on the space spanned by IT . Moreover. h. Once we have recovered the exact form of the covariance matrices Σ0 and Σh we can then smooth (63) and (64) with a timevarying Kalman smoother.2. and we showed it is equal to the covariance among the shock and its expected value. E[uT +h E(uT +h IT )′ ] = E[E(uT +h IT )E(uT +h IT )′ ]. Σ0 can be recovered exactly in the same way. E(uT +h IT ). we are able to recover empirically the value of E[uT +h E(yT +h IT )′ ].e. Given the assumptions we made in (66). Therefore. whose value can be recovered from sample data. ˆ+ ˆ+ + yT +hIT = Λˆ++hIT . in order to obtain optimal estimates s++hIT . we proceed as follows.that ηsT ⊥ uτ .2. is: ′ E(eT +h eT +h ) = ′ E(uT +h u′ +h ) − E[uT +h wT +hT ] + T ′ E[uT +h wT +hT ]′ + E[wT +hT wT +hT ] − 35 .3. that comprise the T extrainformation contained in the forecasts and employs it optimally within the model. we have determined also the variance of the expected value of the current shock given the information set IT .1. To recover all the elements of Σh . On the basis of assumptions (66). for any s and τ . Moreover we will make the following assumptions: ut ⊥ E(uτ IT ) E(ut IT ) ⊥ E(uτ IT ) ut ⊥ E(uτ IT ) ∀τ = t ∀τ = t ∀τ = t (66) which will allow us to recover Σ0 and Σh for h=1. using sT +hIt we can create a forecast yT +hIT .
80 ∗∗ 0.95 0.86 ∗∗ Q2 0.2004:4 Forecast horizon RBC SPF AUG Q0 0.82 ∗ ∗ ∗ Q3 0.E(uT +h u′ +h ) can be obtained by the Kalman ﬁlter as follows T E(uT +h u′ +h ) = ΛP Λ′ T (69) Where P is the solution of the Riccati equation deﬁned in (11).68 ∗ 0.84 ∗ ∗ ∗ 1.81 ∗∗ 0. for example. the assumption that the professional forecasters have extramodel information on the current and 1step ahead shock.77 0. which depends on the assumptions made.88 ∗ ∗ ∗ Q3 0.68 ∗∗ Q1 0.92 0. 5%(∗∗) and 10%(∗) relative to constant growth EVALUATION SAMPLE: 1987:2 .92 ∗ ∗ ∗ Q4 0.85 ∗ ∗ ∗ 0. We have shown how to construct modelbased forecasts that incorporate extramodel information coming from the professional forecasts under two possible modelizations of the professional forecasters’ information set.01 0.88 ∗∗ 0. The only diﬀerence would be in the deﬁnition of wT + hT .94 ∗∗ 0.94 0.01 0. as.75 ∗ ∗ ∗ Q1 0.90 ∗ ∗ ∗ Q4 0.86 0.87 ∗∗ 1. Similar results would hold however under all intermediate assumptions.84 ∗ ∗ ∗ 1.2004:4 Forecast horizon RBC SPF AUG Q0 0.92∗ ∗ ∗ relative to constant growth EVALUATION SAMPLE: 1996:2 . Using the above ′ equations.97 0.76 ∗∗ Q2 0.01 0. we can ﬁnally recover E[wT +hT wT +hT ] and we therefore have pinned down all the values of the matrix Σh .88 0.92 ∗∗ 36 .94 0. Table 8: Relative MSFE of forecasts of GDP growth with respect to naive benchmark. Asterisks denote forecasts that are statistically more accurate than he naive benchmark at 1% (∗ ∗ ∗).
3321 0.3941 0.3606 0.2574 1 step ahead 2 step ahead 3 step ahead 4 step ahead 0.0818 37 .3387 0.3236 0.0832 0.0931 0.Table 9: Weight given to the SPF forecast of ∆GDP and ∆CON S in the augmented forecast . 2004:4 ∆GDP ∆CON S nowcast 0.1261 0.
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue listening from where you left off, or restart the preview.