Olivier Blanchard*
Abstract: Macroeconomics has been under scrutiny as a field since the financial crisis, which brought an abrupt end to the optimism of the Great Moderation. There is widespread acknowledgement that the prevailing dynamic stochastic general equilibrium (DSGE) models performed poorly, but little agreement on what alternative future paradigm should be pursued. This article is the elaboration of four blog posts that together present a clear message: current DSGE models are flawed, but they contain the right foundations and must be improved rather than discarded. Further, we need different types of macroeconomic models for different purposes. Specifically, there should be five kinds of general equilibrium models: a common core, plus foundational theory, policy, toy, and forecasting models. The different classes of models have a lot to learn from each other, but the goal of full integration has proven counterproductive. No model can be all things to all people.
Keywords: macroeconomics, methodology, macroeconomic model, DSGE, dynamic stochastic general
equilibrium, forecasting, macroeconomic policy making, graduate teaching
JEL classification: B41, E00, E12, E13, E17, E61, A23
I. Introduction
One of the best pieces of advice Rudi Dornbusch gave me was: never talk about methodology; just do it. Yet, I shall make him unhappy, and take the plunge.
This set of remarks on the future of macroeconomic models was triggered by a project, led by David Vines, to assess how dynamic stochastic general equilibrium (DSGE) models had performed during the financial crisis (namely, badly) and to examine how these models might be improved. Needled by David’s opinions, I wrote a PIIE Policy Brief (Blanchard, 2016a). That brief piece went nearly viral (by the standards of blogs on DSGEs). The many comments I received led me to write a second piece, a PIIE Real Time blog (Blanchard, 2016b), which again led to a further round of reactions, prompting me to write a third blog (Blanchard, 2017a), which I hoped and fully expected would be my last piece on this topic. I thought I was done, but last February David organized a one-day conference on the topic, from which I learned a lot, which led me to write a fourth blog (Blanchard, 2017b) and now, with this article, my final piece (I promise) on this topic. In doing so I have brought all my previous contributions together into this one longer piece.
A common theme runs through all four pieces: we need different types of macro
models. One type is not better than the others. They are all needed, and indeed they
should all interact. In the final section of my piece I present my attempt at a typology,
distinguishing between five types.
My remarks about needing many types of models would be trivial and superfluous if
that proposition were widely accepted, and there were no wars of religion. But it is not,
and there are. As a result, my contribution seems useful. To show how, and why, I have
ended up where I am, I have kept the pieces in the order in which I wrote them, rather
than reorganize them in a new, more integrated piece.
I limit myself to general equilibrium models. Much of macro must, however, be about
building the individual pieces, constructing partial equilibrium models, and examining
the corresponding empirical evidence. These are the components on which the general
equilibrium models must then build.
1 While a ‘standard DSGE model’ does not exist, a standard reference remains the model developed by
Frank Smets and Rafael Wouters (2007). See Lindé et al. (2016) for a recent assessment with many references.
2 More specifically, the equation characterizing the behaviour of consumers is the first order condition of the corresponding optimization problem and is known as the ‘Euler equation’. The equation characterizing the behaviour of prices is derived from a formalization offered by Guillermo Calvo and is thus known as ‘Calvo pricing’.
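For readers who want the shapes of these two relations, their standard log-linearized textbook versions run roughly as follows (a sketch of the baseline forms, not the exact specification of any particular DSGE):

```latex
% Log-linearized consumption Euler equation: consumption today depends on
% expected consumption tomorrow and the ex ante real interest rate.
c_t \;=\; \mathbb{E}_t\,c_{t+1} \;-\; \frac{1}{\sigma}\bigl(i_t - \mathbb{E}_t\,\pi_{t+1} - \rho\bigr)

% Calvo pricing aggregates to the New Keynesian Phillips curve: inflation
% depends on expected inflation and the output gap x_t, with a slope that
% falls as the probability \theta of not being able to reset prices rises.
\pi_t \;=\; \beta\,\mathbb{E}_t\,\pi_{t+1} \;+\; \kappa\,x_t,
\qquad \kappa \;=\; \frac{(1-\theta)(1-\beta\theta)}{\theta}\,(\sigma+\varphi)
```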
3 In some cases, maximum likelihood estimates of the parameters are well identified but highly implausible on theoretical grounds. In this case, tight Bayesian priors lead to more plausible estimates. It is clear, however, that the problem in this case comes from an incorrect specification of the model and that tight Bayesian priors are again a repair rather than a solution.
extensions. And, indeed, having a common core enriches the discussion among those
who actually produce these models and have acquired, through many simulations, some
sense of their entrails (leaving aside whether the common core is the right one, the issue
raised in the first criticism above). But, for the more casual reader, it is often extremely
hard to understand what a particular distortion does on its own and then how it interacts with other distortions in the model.
All these objections are serious. Do they add up to a case for discarding DSGEs and
exploring other approaches? I do not think so. I believe the DSGEs make the right basic
strategic choices and the current flaws can be addressed.
Let me explain these two claims. The pursuit of a widely accepted analytical macroeconomic core, in which to locate discussions and extensions, may be a pipe dream, but
it is a dream surely worth pursuing. If so, the three main modelling choices of DSGEs
are the right ones. Starting from explicit microfoundations is clearly essential; where
else to start from? Ad hoc equations will not do for that purpose. Thinking in terms of
a set of distortions to a competitive economy implies a long slog from the competitive
model to a reasonably plausible description of the economy. But, again, it is hard to
see where else to start from. Turning to estimation, calibrating/estimating the model
as a system rather than equation-by-equation also seems essential. Experience from
past equation-by-equation models has shown that their dynamic properties can be very
much at odds with the actual dynamics of the system.
Models can have different degrees of theoretical purity. At one end, maximum theoretical purity is indeed the niche of DSGEs. For those models, fitting the data closely
is less important than clarity of structure. Next come models used for policy purposes,
for example, models by central banks or international organizations. Those must fit the
data more closely, and this is likely to require in particular more flexible, less microfounded lag structures (an example of such a model is the FRB/US model used by the Federal Reserve, which starts from microfoundations but allows the data to determine the dynamic structure of the various relations). Finally come the models used for forecasting. It may well be that, for these purposes, reduced form models will continue to
beat structural models for some time; theoretical purity may be for the moment more
of a hindrance than a strength.
Models can also differ in their degree of simplicity. Not all models have to be
explicitly microfounded. While this will sound like a plea for my own cause, I strongly believe that ad hoc macro models, from various versions of the IS–LM to the Mundell–Fleming model, have an important role to play in relation to DSGE models. They can be useful upstream, before DSGE modelling, as a first cut to think about the effects of a particular distortion or a particular policy. They can be useful downstream, after DSGE modelling, to present the major insight of the model
in a lighter and pedagogical fashion. Here again, there is room for a variety of
models, depending on the degree of ad hocery: one can think, for example, of the
New Keynesian model as a hybrid, a microfounded but much simplified version of
larger DSGEs.
Somebody has said that such ad hoc models are more art than science, and I think
this is right. In the right hands, they are beautiful art, but not all economists can or
should be artists. There is room for both science and art. I have found, for example,
that I could often, as a discussant, summarize the findings of a DSGE paper in a
simple graph. I had learned something from the formal model, but I was able (and
allowed as the discussant) to present the basic insight more simply than the author
of the paper. The DSGE and the ad hoc models were complements, not substitutes.
So, to return to the initial question: I suspect that even DSGE modellers will agree
that current DSGE models are flawed. But DSGE models can fulfil an important
need in macroeconomics: that of offering a core structure around which to build
and organize discussions. To do that, however, they have to build more on the rest
of macroeconomics and agree to share the scene with other types of general equilibrium models.
(i) The specific role of DSGEs in the panoply of general equilibrium models is to
provide a basic macroeconomic Meccano set. By this I mean a formal, analytical platform for discussion and integration of new elements: for example, as a
base from which to explore the role of bargaining in the labour market, the role
of price-setting in the goods markets, the role of banks in intermediation, the
role of liquidity constraints on consumption, the scope for multiple equilibria,
etc. Some seem to think that it is a dangerous and counterproductive dream,
a likely waste of time, or at least one with a high opportunity cost. I do not.
I believe that aiming for such a structure is desirable and achievable.
(ii) The only way in which DSGEs can play this role is if they are built on explicit
micro foundations. This is not because models with micro foundations are holier than others, but because any other approach makes it difficult to integrate
new elements and have a formal dialogue. For those who believe that there are
few distortions (say, the believers in the original real business cycle view of the
world), this is a straightforward and natural approach. For those of us who believe that there are many distortions relevant to macroeconomic fluctuations, this makes for a longer and messier journey, the introduction of many distortions, and the risk of an inelegant machine at the end. One wishes there were a
short cut and a different starting point. I do not think either exists.
If, however, one does accept these two propositions (even if reluctantly), then wholesale dismissal of
DSGEs is not an option. The discussion must be about the nature of the micro foundations and the distortions current models embody, and how we can do better.
Do current DSGEs represent a basic Meccano set, a set that most macroeconomists
are willing to use as a starting point? (For Meccano enthusiasts, the Meccano set number is 10. The Meccano company has actually also gone through major crises, having
to reinvent itself a few times—a lesson for DSGE modellers.) I believe the answer to
this is no. (I have presented my objections in my first piece on the topic.) So, to me, the
research priorities are clear:
First, can we write down a basic model most of us would be willing to take as a
starting point, the way the IS–LM model was a widely accepted starting point earlier in
time? Given technological progress and the easy use of simulation programmes, such a
model can and probably must be substantially larger than the IS–LM but still remain
transparent. To me, this means starting from the New Keynesian model, but with more
realistic consumption- and price-setting equations, and adding capital and thus investment decisions.
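As a point of reference for this agenda, the basic New Keynesian model mentioned above is often summarized by three log-linearized equations (a standard textbook sketch; notation and details vary across presentations): a forward-looking IS curve, a Phillips curve, and a monetary policy rule.

```latex
% Dynamic IS curve, derived from the consumption Euler equation
% (x_t: output gap; r^n_t: natural real rate).
x_t \;=\; \mathbb{E}_t\,x_{t+1} \;-\; \frac{1}{\sigma}\bigl(i_t - \mathbb{E}_t\,\pi_{t+1} - r^n_t\bigr)

% New Keynesian Phillips curve, derived from Calvo price-setting.
\pi_t \;=\; \beta\,\mathbb{E}_t\,\pi_{t+1} \;+\; \kappa\,x_t

% Monetary policy described by a simple interest rate (Taylor-type) rule.
i_t \;=\; \rho \;+\; \phi_\pi\,\pi_t \;+\; \phi_x\,x_t
```

On this reading, making the consumption and price-setting equations more realistic, and adding capital, means enriching the first two blocks while keeping this overall structure.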
Second, can we then explore serious improvements in various dimensions? Can we
have a more realistic model of consumer behaviour? How can we deviate from rational
expectations, while keeping the notion that people and firms care about the future?
What is the best way to introduce financial intermediation? How can we deal with
aggregation issues (which have been largely swept under the rug)? How do we want to
proceed with estimation (which is seriously flawed at this point)? And, in each case, if
we do not like the way it is currently done, what do we propose as an alternative? These
are the discussions that must take place, not grand pronouncements on whether we
should have DSGEs or not, or on the usefulness of macroeconomics in general.
It would be nice if there were a model that did both, namely had a tight, elegant, theoretical structure and fitted the data well. But this is a dangerous illusion. Perhaps one
of the main lessons of empirical work (at least in macro, and in my experience) is how
messy the evidence typically is, how difficult aggregate dynamics are to rationalize, and
how unstable many relations are over time. This may not be too surprising. We know,
for example, that aggregation can make aggregate relations bear little resemblance to
underlying individual behaviour.
So, models that aim at achieving both tasks are doomed to fail, in both dimensions.
The French have an apt expression to describe such contraptions: ‘the marriage of a
carp and a rabbit’.
Take theory models. DSGE modellers, confronted with complex dynamics and the
desire to fit the data, have extended the original structure to add, for example, external
habit persistence (not just regular, old habit persistence), costs of changing investment
(not just costs of changing capital), and indexing of prices (which we do not observe in
reality), etc. These changes are entirely ad hoc, do not correspond to any micro evidence,
and have made the theoretical structure of the models heavier and more opaque. (I have
a number of other problems with existing DSGEs, but these were the topics of my previous blogs.)
Now take policy models. Policy modellers, looking to tighten the theoretical structure of their models, have, in some cases, attempted to derive the observed lag structures from some form of optimization. For example, in the main model used by the
Federal Reserve, the FRB/US model, the dynamic equations are constrained to be
solutions to optimization problems under high order adjustment cost structures. This
strikes me as wrongheaded. Actual dynamics probably reflect many factors other than
costs of adjustment. And the constraints that are imposed (for example, on the way
the past and the expected future enter the equations) have little justification, theoretical or empirical.
So what should be done? My suggestion is that the two classes should go their
separate ways.
DSGE modellers should accept the fact that theoretical models cannot, and thus
should not, fit reality closely. The models should capture what we believe are the macro-
essential characteristics of the behaviour of firms and people, and not try to capture all
relevant dynamics. Only then can they serve their purpose, remain simple enough, and
provide a platform for theoretical discussions. Ironically, I find myself fairly close to my
interpretation of Ed Prescott’s position on this issue, and his dislike of econometrics
for those purposes.
Policy modellers should accept the fact that equations that truly fit the data can
have only a loose theoretical justification. In that, the early macroeconomic models
had it right: the permanent income theory, the life-cycle theory, and the Q theory provided guidance for the specification of consumption and investment behaviour, but
the data then determined the final specification. As Ray Fair (2015) has shown, there
is no incompatibility between doing this and allowing for forward-looking, rational
expectations.
Both classes should clearly interact and benefit from each other. To use an expression
suggested by Ricardo Reis, there should be scientific cointegration. But the goal of full
integration has, I believe, proven counterproductive. No model can be all things to all
people.
Foundational models
The purpose of these models is to make a deep theoretical point, likely of relevance to
nearly any macro model, but not pretending to capture reality closely. I would put there
the consumption-loan model of Paul Samuelson, the overlapping generation model
of Peter Diamond, the models of money by Neil Wallace or Randy Wright (Randy deserves to be there, but the reason I list him is that he was one of the participants in the Vines conference; I learned from him that what feels like micro-foundations to one economist feels like total ad-hocery to another), the equity premium model of Edward Prescott and Rajnish Mehra (Mehra and Prescott, 1985), and the search models of Peter Diamond, Dale Mortensen, and Chris Pissarides.
DSGE models
The purpose of these models is to explore the macro implications of distortions or sets
of distortions. To allow for a productive discussion, they must be built around a largely
agreed upon common core, with each model then exploring additional distortions, be
it bounded rationality, asymmetric information, different forms of heterogeneity, etc.
(At the conference, Ricardo Reis had a nice list of extensions that one would want to
see in a DSGE model.)
These were the models David Vines (and many others) was criticizing when he started
his project and, in their current incarnation, they raise two issues.
The first is what the core model should be. The current core, roughly an RBC structure with one main distortion, nominal rigidities, seems too much at odds with reality to be the best starting point. Both the Euler equation for consumers and the pricing equation for price-setters seem to imply, in combination with rational expectations, an implausible degree of forward-lookingness on the part of economic agents. I, and many others,
have discussed these issues elsewhere, and I shall not return to them here. (I learned at
the conference that some economists, those working with agent-based models, reject
this approach altogether. If their view of the world is correct, and network interactions
are of the essence, they may well be right. But they have not provided an alternative core
from which to start.)
The second issue is how close these models should be to reality. My view is that they
should obviously aim to be close, but not through ad hoc additions and repairs, such as
arbitrary and undocumented higher order costs introduced only to deliver more realistic lag structures. Fitting reality closely should be left to policy models.
Policy models
The purpose of these models is to help policy-makers: to study the dynamic effects of specific shocks and to allow for the exploration of alternative policies. If China slows down, what
will be the effect on Latin America? If the Trump administration embarks on a fiscal
expansion, what will be the effects on other countries?
For these models, capturing actual dynamics is clearly essential. So is having enough theoretical structure that the model can be used to trace the effects of shocks and policies. But the theoretical structure must by necessity be looser than for DSGEs: aggregation and heterogeneity lead to much more complex aggregate dynamics than a tight theoretical model can
hope to capture. Old-fashioned policy models started from theory as motivation, and then
let the data speak, equation by equation. Some new-fashioned models start from a DSGE
structure, and then let the data determine the richer dynamics. One of the main models used
at the Fed, the FRB/US model, uses theory to restrict long-run relations, and then allows for
potentially high order costs of adjustment to fit the dynamics of the data. I am sceptical that
this is the best approach, as I do not see what is gained by constraining dynamics in this way.
In any case, for this class of models, the rules of the game must be different from those for DSGEs. Does the model fit well, for example in the sense of being consistent with
the dynamics of a VAR characterization? Does it capture well the effects of past poli-
cies? Does it allow us to think about alternative policies?
Toy models
Here, I have in mind models such as the many variations of the IS–LM model, the
Mundell–Fleming model, the RBC model, and the New Keynesian model. As my list
indicates, some may be only loosely based on theory, others more explicitly so. But they
have the same purpose. They allow for a quick first pass at some question, and present
the essence of the answer from a more complicated model or from a class of models.
For the researcher, they may come before writing a more elaborate model, or after, once
the elaborate model has been worked out.
How close they remain formally to theory is not a relevant criterion here. In the
right hands, and here I think of master craftsmen such as Robert Mundell or Rudi
Dornbusch, they can be illuminating. There is a reason why they dominate undergraduate macroeconomics textbooks: they work as pedagogical devices. They are art as much as science. But art is of much value.
Forecasting models
The purpose of these models is straightforward: give the best forecasts. And this is the
only criterion by which to judge them. If theory is useful in improving the forecasts,
then theory should be used. If it is not, it should be ignored. My reading of the evidence is that the jury is still out on how much theory helps. The issues are then statistical, from overparameterization to how to deal with the instability of the underlying relations, etc.
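The overparameterization point can be made concrete with a small simulated exercise (a minimal sketch; the process, lag lengths, and sample sizes are all illustrative): an AR(12) fitted by least squares to data generated by an AR(1) always fits at least as well in sample, because its regressors nest those of the AR(1), but the eleven superfluous lags are estimated noise that the forecaster carries out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate T observations from a simple AR(1) "true" process:
# y_t = 0.8 * y_{t-1} + e_t
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.normal()

def fit_ar(y, p, t0, t1):
    """OLS estimates of an AR(p) fitted on observations t0..t1-1."""
    X = np.column_stack([y[t0 - k:t1 - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y[t0:t1], rcond=None)
    return beta

def one_step_errors(y, beta, t0, t1):
    """One-step-ahead forecast errors on observations t0..t1-1."""
    p = len(beta)
    X = np.column_stack([y[t0 - k:t1 - k] for k in range(1, p + 1)])
    return y[t0:t1] - X @ beta

# Fit a parsimonious AR(1) and an overparameterized AR(12) on the same
# estimation sample (starting at t = 12 so both use identical rows).
b1 = fit_ar(y, 1, 12, 300)
b12 = fit_ar(y, 12, 12, 300)

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
in1, in12 = one_step_errors(y, b1, 12, 300), one_step_errors(y, b12, 12, 300)
out1, out12 = one_step_errors(y, b1, 300, T), one_step_errors(y, b12, 300, T)

# The richer model always wins in sample (its regressors nest the AR(1)'s),
# while its extra lags add nothing but estimation noise out of sample.
print("in-sample  RMSE: AR(1) %.3f, AR(12) %.3f" % (rmse(in1), rmse(in12)))
print("out-sample RMSE: AR(1) %.3f, AR(12) %.3f" % (rmse(out1), rmse(out12)))
```

The same trade-off, between fitting the estimation sample and forecasting outside it, is what the statistical literature on model selection and shrinkage is designed to manage.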
VII. Conclusion
Attempts to make some of these models do more than they were designed to do seem overambitious. I am not optimistic that DSGEs will be good policy models unless
they become much looser about constraints from theory. I am willing to see them used for
forecasting, but I am again sceptical that they will win that game. This being said, the different classes of models have a lot to learn from each other, and would benefit from more interactions. Old-fashioned policy models would benefit from the work on heterogeneity and liquidity constraints embodied in some DSGEs. And, to repeat a point made at the beginning, all should be built on solid partial equilibrium foundations and empirical evidence.
References
Blanchard, O. (2016a), ‘Do DSGE Models Have a Future?’, available at https://piie.com/publications/
policy-briefs/do-dsge-models-have-future
— (2016b), ‘Further Thoughts on DSGE Models: What We Agree On and What We Do Not’, availa-
ble at https://piie.com/blogs/realtime-economic-issues-watch/further-thoughts-dsge-models
— (2017a), ‘The Need for Different Classes of Macroeconomic Models’, available at https://piie.com/
blogs/realtime-economic-issues-watch/need-different-classes-macroeconomic-models
— (2017b), ‘On the Need for (At Least) Five Classes of Macro Models’, available at https://piie.com/
blogs/realtime-economic-issues-watch/need-least-five-classes-macro-models
— Erceg, C., and Lindé, J. (2016), ‘Jump-starting the Euro Area Recovery: Would a Rise in Core Fiscal Spending Help the Periphery?’, NBER Macroeconomics Annual, Cambridge, MA, National Bureau of Economic Research, forthcoming.
DeLong, B. (2016), ‘Must-read’, available at: http://www.bradford-delong.com/2016/08/must-read-the-
thing-about-this-paper-that-surprises-me-is-in-its-last-paragraphs-that-olivier-blanchard-answers-
the-que.html
Fair, R. C. (2015), ‘Reflections on Macroeconometric Modelling’, The B.E. Journal of Macroeconomics,
15(1), 445–66.
Farmer, R. (2014), ‘Faust, Keynes and the DSGE Approach to Macroeconomics’, available at http://
rogerfarmerblog.blogspot.co.uk/2014/02/faust-keynes-and-dsge-approach-to.html
Keen, S. (2016), ‘The Need for Pluralism in Economics’, available at http://www.debtdeflation.com/
blogs/2016/08/13/the-need-for-pluralism-in-economics/
Kocherlakota, N. (2016), ‘Toy Models’, available at https://docs.google.com/viewer?a=v&pid=sites&-
srcid=ZGVmYXVsdGRvbWFpbnxrb2NoZXJsYWtvdGEwMDl8Z3g6MTAyZmIzODcxNGZi
OGY4Yg
Korinek, A. (2015), ‘Thoughts on DSGE Macroeconomics: Matching the Moment, But Missing the
Point?’, available at https://www.ineteconomics.org/uploads/downloads/Korinek-DSGE-Macro-
Essay.pdf
Krugman, P. (2016), ‘The State of Macro is Sad (Wonkish)’, available at http://krugman.blogs.nytimes.
com/2016/08/12/the-state-of-macro-is-sad-wonkish/?_r=0
Lindé, J., Smets, F., and Wouters, R. (2016), ‘Challenges for Central Banks’ Macro Models’, Riksbank Research Paper Series No. 147, available at papers.ssrn.com/sol3/papers.cfm?abstract_id=2780455
Mehra, R., and Prescott, E. (1985), ‘The Equity Premium’, Journal of Monetary Economics, 15, 145–61.
Prescott, E. (1986), ‘Theory Ahead of Business Cycle Measurement’, Carnegie-Rochester Conference
Series on Public Policy, 25, 11–44.
Romer, P. (2016), ‘The Trouble with Macroeconomics’, available at https://paulromer.net/wp-content/
uploads/2016/09/WP-Trouble.pdf
Smets, F., and Wouters, R. (2007), ‘Shocks and Frictions in US Business Cycles: A Bayesian
DSGE Approach’, American Economic Review, 97(3), 586–606, available at www.aeaweb.org/
articles?id=10.1257/aer.97.3.586
Smith, N. (2016), ‘Economics Struggles to Cope with Reality’, available at https://www.bloomberg.
com/view/articles/2016-06-10/economics-struggles-to-cope-with-reality
Wren-Lewis, S. (2016), ‘Blanchard on DSGE’, available at https://mainlymacro.blogspot.co.uk/2016/08/
blanchard-on-dsge.html