
Macroeconomic Dynamics, 16 (Supplement 1), 2012, 1–4. Printed in the United States of America.

doi:10.1017/S1365100511000599

INTRODUCTION TO
MACROECONOMIC DYNAMICS
SPECIAL ISSUE IN HONOR OF
KAZUO NISHIMURA: NONLINEAR
DYNAMICS IN EQUILIBRIUM
MODELS

JOHN STACHURSKI
Australian National University
ALAIN VENDITTI
CNRS-GREQAM
and
EDHEC
MAKOTO YANO
Kyoto University

Over the past three decades, analysis of dynamics has come to the forefront of
macroeconomic theory. A key impetus for progress on this front has been the
connections developed between equilibrium growth theory, on one hand, and
the field of nonlinear dynamics, on the other. Kazuo Nishimura’s work has been
at the center of these advances, and the lines of research he initiated remain an
exciting area of study for young researchers with strong technical skills.
Since his first papers appeared in the late 1970s, Kazuo Nishimura has lit a
bright path for those of us who seek to understand optimal growth and nonlinear
dynamics. He has made outstanding contributions to three main fields of economic
theory: international trade, general equilibrium, and economic growth. With his re-
markable insight, creativity, and energy, Kazuo has transformed our understanding
of economic growth, business cycles, and the relationship between them.

We thank Raouf Boucekkine for allowing us to initiate this special issue of Macroeconomic Dynamics. We also thank
the editor, Bill Barnett, and the special issue editors, Steve Turnovsky and Lee Ohanian, for their support. Most of
the contributions in this volume were presented at the international conference “New Challenges for Macroeconomic
Regulation: Financial Crisis, Stabilisation Policy and Sustainable Development,” GREQAM, Marseille, France, June
9–11, 2011. Financial support from CNRS, Ecole d’Economie de Paris, Centre d’Economie de la Sorbonne—
Pôle Macroéconomie, Centre d’Economie de la Sorbonne—Pôle Economie Mathématique, GREQAM, ADRES,
CEPREMAP, Ministère de la Recherche, Université de la Méditerranée, Université Paul Cézanne, and Université
Paris I Panthéon–Sorbonne is gratefully acknowledged. Special thanks go to Isabelle Mauduech, Corinne Michaud,
and Aziza Sikar, whose help in organizing the conference was invaluable. Address correspondence to: Alain Venditti,
GREQAM, 2 rue de la Charité, 13002 Marseille, France; e-mail: alain.venditti@univmed.fr.



As editors of the current volume, we are truly delighted to present this special
issue honoring Professor Kazuo Nishimura on the occasion of his 65th birthday.
The authors of the research papers published here have all had the privilege of
working with Kazuo, and their contributions are evidence of his far-reaching
influence on economic theory. In the lead article, Jean-Michel Grandmont, a long-
time friend, describes Kazuo and his most important contributions. As he puts it,
“Kazuo Nishimura is a great economic theorist who has devoted his scientific life
in this field to the analysis of multiple equilibria and business cycles in economic
models. Actually it is striking to see that his concern with multiplicity of economic
equilibria started quite early in his academic life.”1 The papers collected in this
special issue illustrate this statement, but also show that Kazuo’s influence on the
economics profession goes far beyond these topics.
The first set of three contributions is concerned with aggregate optimal growth
models. The first paper, by Robert Becker and Tapan Mitra, “Efficient Ramsey
Equilibria,” shows that Ramsey equilibrium models with heterogeneous agents and
borrowing constraints yield efficient equilibrium sequences of aggregate capital
and consumption. The proof of this result is based on verifying that equilibrium
sequences of prices satisfy the Malinvaud criterion for efficiency.
This paper is followed by “Existence of Competitive Equilibrium in an Opti-
mal Growth Model with Heterogeneous Agents and Endogenous Leisure.” Here,
Aditya Goenka, Cuong Le Van, and Manh-Hung Nguyen prove the existence of
competitive equilibrium in a single-sector dynamic economy with heterogeneous
agents, an elastic labor supply, and complete assets markets. The method of proof
relies on some recent results concerning the existence of Lagrange multipliers in
infinite-dimensional spaces and their representation as summable sequences. It
contains an application of the inward-boundary fixed point theorem.
In the third paper in this group, “Discrete Choice and Complex Dynamics in
Deterministic Optimization Problems,” Takashi Kamihigashi shows that complex
dynamics arises naturally in deterministic discrete choice problems. In particular,
he shows that if the objective function of a maximization problem can be written as
a function of a sequence of discrete variables, and if the maximized value function
is strictly increasing in an exogenous variable, then for almost all values of the
exogenous variable, any optimal path exhibits aperiodic dynamics. This result is
applied to a maximization problem with indivisible durable goods, as well as to a
Ramsey model with an indivisible consumption good. In each model, it is shown
that optimal dynamics is almost always complex.
These three papers are followed by a second set of papers that deal with two-
sector growth models. In “Long-Run Optimal Behavior in a Two-Sector Robinson–
Solow–Srinivasan Model,” Ali Khan and Tapan Mitra study the nature of long-
run behavior in a two-sector model of optimal growth. They provide an explicit
solution of the optimal policy function generated by the optimal growth model. For
a particular configuration of parameter values, they provide an explicit solution of
the unique absolutely continuous invariant ergodic distribution generated by the
optimal policy function.


This paper is followed by “Endogenous Business Cycles in OLG Economies
with Multiple Consumption Goods,” in which Carine Nourry and Alain Venditti
consider a two-sector OLG economy with a pure consumption good and a mixed
good that can be either consumed or used as capital. They prove that the existence
of Pareto-optimal expectations-driven fluctuations is compatible with standard
sectoral technologies if the share of the pure consumption good is low enough.
This result suggests that some fiscal policy rules can prevent the existence of
business-cycle fluctuations by driving the economy to the optimal steady state
as soon as the rule is announced.
The third paper in this group is “Does the Capital Intensity Matter? Evidence
from the Postwar Japanese Economy and Other OECD Countries,” by Harutaka
Takahashi, Kohichi Mashiyama, and Tomoya Sakagami. This paper focuses on
capital intensity, which plays an important role in two-sector and multisector
growth models. The purpose of their research is to measure the capital intensities
of the consumption good and the investment good sector in the postwar Japanese
economy, as well as in other OECD countries. Their empirical evidence strongly
supports the common capital-intensity assumption: the consumption good sector
is more capital-intensive than the capital good sector.
In a third set of papers, stochastic dynamic models are considered. In “Bound-
ing Tail Probabilities in Dynamic Economic Models,” John Stachurski provides
conditions for bounding tail probabilities in stochastic economic models in terms
of their transition laws and shock distributions. Particular attention is given to
conditions under which the tails of stationary equilibria have exponential decay.
By way of illustration, the technique is applied to a threshold autoregression model
of exchange rates.
Kenji Sato and Makoto Yano, in their paper “On a Stochastic Growth Model
with L∞ Dual Vectors: A Differential Analysis,” investigate a stochastic growth
model in which dual vectors lie in an L∞ space. This condition ensures that the
value of a stock vector is jointly continuous with respect to the stock vector and its
support price vector. The result is based on the differentiation method in Banach
spaces that Makoto Yano developed earlier for stochastic optimal growth models.
The final two papers are more oriented toward empirical considerations. In
“The Equity Premium in Consumption and Production Models,” Levent Akdeniz
and Davis Dechert use a simple model with a single Cobb–Douglas firm and a
consumer with a CRRA utility function to show the difference in the equity premia
in the production-based Brock model and the consumption-based Lucas model.
They show that the equity premium in the production-based model exceeds that
in the consumption-based model with probability one.
In the final paper of this volume, “Does Fiscal Policy Matter? Blinder and Solow
Revisited,” Roger Farmer and Dmitry Plotnikov use an old-Keynesian represen-
tative agent model and consider temporary bond-financed paths of government
purchases that are similar to the actual path that occurred during WWII. They
show that a temporary increase in government purchases does crowd out private
consumption expenditure but can also reduce unemployment.


Most of the contributions in this volume were presented during the conference
“New Challenges for Macroeconomic Regulation: Financial Crisis, Stabilisation
Policy and Sustainable Development,” which was held on June 9–11, 2011 at
GREQAM in Marseille, and was dedicated to Kazuo. In reviewing these papers,
we have been struck by the enduring and fundamental importance of Kazuo’s research,
and the influence it has had on researchers in our field.
We, the editors of this volume, have been privileged to benefit from Kazuo’s
warmth, generosity, and insight over many years. We hope that this special issue
of Macroeconomic Dynamics will not only celebrate a great economist who has
made such important contributions to economic theory, but also stimulate much
additional research on macroeconomic dynamics. Within this new research, we
have little doubt that Kazuo’s own contributions will continue to lead the way. It
is with great pleasure that we present this collection of papers as a special issue in
his honor.

NOTE

1. For a complete list of Kazuo’s contributions, see Grandmont’s paper in this issue.

Macroeconomic Dynamics, 12 (Supplement 2), 2008, 149–153. Printed in the United States of America.
doi:10.1017/S1365100508070430

INTRODUCTION TO
MACROECONOMIC DYNAMICS
SPECIAL ISSUE: INEQUALITY

ROBERT M. TOWNSEND

This special issue features inequality. This is a subject that rightly draws immediate
attention from both the profession and the popular press. The numbers themselves
are intrinsically interesting, if not disturbing. There is, on the one hand, great
variety in the distribution of earnings and an enormously right-skewed distribution
of wealth. On the other hand, there is absolute and relative poverty. In developing
countries there are extremes coexisting on both ends of the distribution. But
developing economies also feature growth with time-varying levels of inequality.
Macroeconomic growth, stability, and social policies seem correlated with poverty
reduction in some instances.
More specifically, Cagetti and DeNardi concentrate primarily on the distribution
of wealth in the United States. The richest 1% hold one-third of total wealth
and the richest 5% hold more than half. Cunha and Heckman concentrate on the
distribution of earnings in the United States. Many who go to college earn less than
those who complete only high school and, more generally, the two distributions
are overlapping (even controlling for selection, as in their model). Brazil is a
country that at times has had virtually the highest level of inequality in income
in the world (the Gini index at 0.625, behind Sierra Leone at 0.629 in 1989).
The spread between the rich and poor is enormous. The distribution is again right
skewed with the mean at 302 and the median at only 162 in 1992, expressed in 1994
monthly reals. Brazil is featured here in the paper of Ferreira and Litchfield. Brazil
and especially Thailand have experienced substantial growth, with increasing—
and then decreasing—inequality. In the case of Thailand, as with other Asian
miracles, the growth rate averaged 5% per year over 1976–1996 and reached 12%
during the late 1980s. Poverty was reduced from 48% to 13%, and the distance of
those below the poverty line closed substantially. Thailand is featured here in the
work of Jeong, and also in Jeong and Townsend.
We present in this special issue not simply these facts but also diverse takes on
these subjects, ranging from measurement to theory, typically featuring both in
some degree. Ferreira and Litchfield also feature concerns about the data per se;
for example, the construction of the poverty line and an unexplained discrepancy
in income in Brazil in 1986. More generally, the use of data and econometric
approaches also vary considerably across the various papers. Part of the goal of
having these papers written for joint appearance in this special issue is to draw the
reader into thinking about this variety, hopefully more aware of the trade-offs and
perhaps creating new hybrids in future work.
Accounting decompositions of levels and changes in income, inequality, and
poverty in Thailand and Brazil are important exercises in measurement and provide
important data summaries. The emphasis here is on what proportion of observed
levels or changes can be accounted for by observable categories in available data.
Implicit also are suggestions as to what subsequent generations of models should
focus on, for example, individual and household choice problems, what policies
may be needed as remedies, or which policies have had an apparent impact.
In Brazil, measured levels of inequality are best accounted for by education, at
34%–42%, whereas race and family types are important at 6%–11%, depending on
the category. Convergence seems to be in evidence in the diminishing importance
of regional and urban rural gaps. Earnings from employment versus employer
earnings are the most important contributions to income differences, although with
the former declining and the latter—along with Social Security—rising. Oddly,
decompositions of changes do not reveal many patterns, with many movements
offsetting one another and the majority falling into the residual “unexplained”
category. There are exceptions, such as a decline in the mean returns to schooling
in the latter period.
In Thailand, the key measured observables are what Jeong refers to as self-
selection categories—education, occupation, and use of financial institutions—
and in his paper the emphasis is on change. In Thailand, all three key categories
enter into changing per capita income levels, poverty reduction, and changing
inequality. Indeed, they account for 39% of the change in income and poverty
reduction and over three-fourths of the change in inequality. Astoundingly, the
latter rises to 98% in the high-growth period during which changing financial
sector access plays a key role. Income convergence across all three categories
contributes 99% of inequality change in the growth-with-declining-inequality pe-
riod. More generally, composition/population shift effects are high for education
and financial access, whereas diverging and then converging income levels are
more salient for occupation/sector contributions. The punch line of Jeong’s work
is that growth, inequality, and poverty dynamics are linked through self-selection
in a way envisioned by Kuznets.
As with any accounting identity, the “dark” or unknown side is clearly documented in
the tables of both sets of authors, Ferreira et al. and Jeong; that is, the proportion of
inequality within a category that is simply unexplained. Relatedly, gender and age of
the household head as categories are seemingly unimportant in both countries, but,
in fact, what may be going on are distinctions that are blurred (death of husband vs.
migration) or various ways of taking care of the elderly. Both authors cautiously
go beyond correlation to causal statements or hypotheses, but only as agendas for
future work. Naturally enough, these conjectures are driven by what is seen to be
large in the decompositions, or correlated in the data.
By contrast, Cunha and Heckman focus on unobserved heterogeneity and the
theory of choice. Although of course there is a list of covariates for which they
control (mother’s education, father’s education, distance from school, family in-
come at age 17, parents divorced), their interest is a theory that can explain
the observed variety in outcomes for households that appear otherwise equal,
diversity created via selection into anticipated outcomes and diversity created ex
post via luck. Multidimensional unobserved talents and how to measure them are
key in their framework. The first of two factors accounts largely for selection
into different levels of education and the second for heterogeneity in subsequent
earnings. Evidently, much of subsequent uncertainty (from the standpoint of the
econometrician) can be forecast by the agents at the time of their choices.
Essentially, all underlying parameters of the model, including factors with di-
verse factor loadings and the distributions of disturbances, are estimated using
an explicit, well-articulated framework. Moreover, a variety of factor models and
timings are considered—an agent knowing no factors, knowing one factor for
selection, or both. This is the motivation behind the title of their paper.
Cagetti and DeNardi take the reader through a vast array of general equilibrium
models of the U.S. economy, focusing on a recursive framework and discounted
expected utility maximization. Key variables are savings, occupation choice, and
bequests. These models include dynamic, infinitely lived, representative-consumer
models, which do not work well, and the same setups with more variety, typically
in observables such as occupation; some of the literature that they review has
selection based on talent and some advance knowledge of shocks. Other models
are life cycle, intergenerational with retirement, uncertain lifetimes, accidental and
deliberate bequests, as well as mixtures of these. Practically all of the work Cagetti
and DeNardi report, including their own, is in the RBC (real business cycle)
calibration tradition of macroeconomics, although Markov chain parameters and
various key moments or ratios are estimated from micro data. They also report on
model sensitivity checks, feeding in alternative values (not the calibrated values)
of risk aversion and the discount rate/impatience parameters.
Jeong and Townsend take the reader through two explicit models of wealth-
constrained occupational choice and financial deepening, representative of models
widely featured in the development literature—and key categories in Jeong’s
work. The models have both observed covariates such as wealth, and unobserved
heterogeneity as in talent or draws of idiosyncratic shocks. Most of the parameters
of preferences and technology are estimated via maximum likelihood from micro
data on household choices in initial cross-sections, and other data are deliberately
set aside for comparisons with the models’ simulations/predictions. Specifically,
each of the models is simulated over time, drawing macro shocks from well-defined
distributions and/or computing market clearing prices. The focus is on how well
the models at estimated parameters (and with sensitivity checks at alternative
values within standard error bands) can explain levels and movements in growth,
inequality, and poverty at both macro/aggregated and sector/disaggregated levels.
The sectors use key categories suggested by the models, for example, entrepreneurs
versus wage earners and those using the financial system versus those who do not.
The reporting of anomalies and a model comparison section is a deliberate attempt
to fuel further iterations of theory with data, very much in the spirit of the work
and literature that Cagetti and DeNardi report so well. The point in both Cagetti
and DeNardi, as well as in Jeong and Townsend, is that new models are needed
to address the anomalies. Cunha and Heckman’s work also might be placed in the
context of a larger literature, for example, the evolving literature both interpreting,
and altering, Mincer regressions.
Most of the models used by the various authors of this special issue are general
equilibrium models of entire economies. This is quite clear in the work of Cagetti
and DeNardi, as aggregated capital and labor supply, the integrals over micro
decisions, are used in an aggregate production function to generate marginal
productivities; hence the interest rates and wages taken as given in these same
household decisions. Although less obvious in Cunha and Heckman, in which
the earning equations appear more as reduced forms, the same aggregation and
pricing can be done in this literature on schooling, and the authors do propose
this as the next step, following Heckman, Lochner, and Taber (1998a,b). The
Jeong and Townsend (2008) model of occupation choice is also explicitly general
equilibrium, with labor and capital demand and supply obtained as integrals over
observed and unobserved characteristics—the computation of interest and wages
is more demanding, as the framework cannot be tricked into an approximate
representative consumer framework with aggregate technology. A related TFP
paper by the same authors, Jeong and Townsend (2007), makes that point.
The various models across the papers do vary in other substantive ways. Some
assume perfect credit markets, as in Cunha and Heckman, although they report
that this does not matter, whereas Cagetti and DeNardi use a Bewley-Aiyagari
model of limited credit [Aiyagari (1994) and Bewley (1977)]. The models of
Jeong and Townsend have these two kinds of extremes, financial autarky versus
complete markets, embedded in the same overall context, either exogenously as
in the Lloyd-Ellis and Bernhardt (2000) model of occupation choice or endogenously
as in the Greenwood and Jovanovic model of financial access. Finally, Cagetti-
DeNardi and Cunha-Heckman are essentially steady state models, whereas Jeong
and Townsend is a model in which transitions are very much featured.
The various papers differ in their attention to policy issues, but practically all
raise the subject. Ferreira and Litchfield point to reduced inequality as consistent
with the impact of various Brazilian social transfer programs, and reduced poverty
and declining inequality with the impact of macroeconomic stabilization policies
that lowered inflation. Ferreira and Litchfield also note that returns to schooling
seem to be diminishing, although this is based on observed differentials across
school categories. By contrast, the point of Cunha and Heckman is not to explain
inequality per se but, rather, to address policy issues, such as tuition subsidies for
those below the mean, financed by taxes on others. With unobserved heterogeneity
in talent, different segments of the population will self-select into different levels
of schooling, depending on the policy. Some may leave college, for example.
The point is that observed averages in the data reflect selection “bias.” Returns
to schooling might appear to decrease only because the less talented are drawn into
higher levels. Other papers touch on policy issues. Cagetti and DeNardi mention
tax policy, especially on the estates of the rich. The models used by Jeong and
Townsend have been subjected to policy experiments in other work, Giné and
Townsend (2004) and also Townsend and Ueda (2006, 2007), computing the
distribution of gains and losses to credit market expansion, financial liberalization,
or foreign capital inflows.
In the end, the diverse approaches of these papers to inequality complement
each other. We understand a given approach more in comparison with others than
in isolation as a stand-alone contribution. Thus, hopefully, the whole of this special
issue will appear greater than the sum of the constituent parts. Ideally, the reader
will be drawn into this literature and encouraged to contribute to these lively,
productive debates. I want to thank each of the teams of authors for allowing the
Journal to publish their important work.

REFERENCES
∗ included in this issue
Aiyagari, S. Rao (1994) Uninsured idiosyncratic risk and aggregate saving. Quarterly Journal of
Economics 109(3), 659–684.
Bewley, Truman F. (1977) The permanent income hypothesis: A theoretical formulation. Journal of
Economic Theory 16(2), 252–292.
∗ Cagetti, Marco and Mariacristina De Nardi (2008) Wealth inequality: Data and models. Macroeconomic Dynamics 12(Supplement 2), 285–313.
∗ Cunha, Flavio and James Heckman (2008) A new framework for the analysis of inequality. Macroeconomic Dynamics 12(Supplement 2), 315–354.
∗ Ferreira, Francisco H.G., Phillippe G. Leite, and Julie A. Litchfield (2008) The rise and fall of Brazilian inequality: 1981–2004. Macroeconomic Dynamics 12(Supplement 2), 199–230.
Giné, Xavier and Robert M. Townsend (2004) Evaluation of financial liberalization: A general equilib-
rium model with constrained occupation choice. Journal of Development Economics 74, 269–304.
Heckman, J.J., L.J. Lochner, and C. Taber (1998a) Explaining rising wage inequality: Explorations
with a dynamic general equilibrium model of labor earnings with heterogeneous agents. Review of
Economic Dynamics 1(1), 1–58.
Heckman, J.J., L.J. Lochner, and C. Taber (1998b) General-equilibrium treatment effects: A study of
tuition policy. American Economic Review 88(2), 381–386.
∗ Jeong, Hyeok (2008) Assessment of relationship between growth and inequality: Micro evidence from Thailand. Macroeconomic Dynamics 12(Supplement 2), 155–197.
∗ Jeong, Hyeok and Robert M. Townsend (2008) Growth and inequality: Model evaluation based on an estimation-calibration strategy. Macroeconomic Dynamics 12(Supplement 2), 231–284.
Jeong, Hyeok and Robert M. Townsend (2007) Sources of TFP growth: Occupational choice and
financial deepening. Economic Theory 32(1), 179–221. [July 2007 special issue honoring Edward
Prescott.]
Lloyd-Ellis, Huw and Dan Bernhardt (2000) Enterprise, inequality and economic development. Review
of Economic Studies 67(1), 147–168.
Townsend, Robert M. and Kenichi Ueda (2006) Financial deepening, inequality, and growth: A model-
based quantitative evaluation. Review of Economic Studies 73(1), 251–293.
Townsend, Robert M. and Kenichi Ueda (2007) Welfare Gains from Financial Liberalization. IMF
Publication ISSN: 1934-7073, WPIEA2007154.

Macroeconomic Dynamics, 20, 2016, 461–465. Printed in the United States of America.
doi:10.1017/S1365100514000261

INTRODUCTION TO
MACROECONOMIC DYNAMICS
SPECIAL ISSUE ON COMPLEXITY
IN ECONOMIC SYSTEMS

APOSTOLOS SERLETIS
University of Calgary

In the aftermath of the global financial crisis, questions have been raised regarding the
value and applicability of modern macroeconomics. Motivated by these developments and
recent advances in dynamical systems theory, the papers in this special issue of
Macroeconomic Dynamics deal with specific aspects of the economy as a complex
evolving dynamic system.

Keywords: Complex Economic Dynamics, Endogenous Business Cycles

The 1,000-point tumble in the Dow Jones Industrial Average on May 6, 2010
“was just a small indicator of how complex and chaotic, in the formal sense, these
systems are. ... Our financial system is so complicated and so interactive—so
many different markets in different countries and so many sets of rules. ... What
happened in the stock market is just a little example of how things can cascade or
how technology can interact with market panic.”
—Ben Bernanke, Interview with the International Herald Tribune
(May 17, 2010)

Following the powerful critique by Lucas (1976), the modern core of macro-
economics includes both the real business cycle approach (known as freshwater
economics) and the New Keynesian approach (known as saltwater economics) and
makes systematic use of the dynamic stochastic general equilibrium framework,
originally associated with the real business cycle approach. It assumes rational
expectations and forward-looking economic agents, relies on market-clearing
conditions for households and firms, relies on shocks and on mechanisms that
amplify the shocks and propagate them through time, and is designed to be a
quantitative mathematical formalization of the aggregate economy.

Address correspondence to: Apostolos Serletis, Department of Economics, University of Calgary, Calgary, Alberta
T2N 1N4, Canada; e-mail: Serletis@ucalgary.ca.



However, in the aftermath of the global financial crisis, the Great Recession, and
the European debt crisis, policy makers, the media, and a number of economists
have raised questions regarding the value and applicability of modern macroeco-
nomics. For example, Caballero (2010, p. 89) wrote that
by some strange herding process the core of macroeconomics seems
to transform things that may have been useful modeling short-cuts into
a part of a new and artificial “reality,” and now suddenly everyone
uses the same language, which in the next iteration gets confused with,
and eventually replaces, reality. Along the way, this process of make-
believe substitution raises our presumption of knowledge about the
workings of a complex economy and increases the risks of a “pretense
of knowledge” about which Hayek warned us [in his (1974) Nobel-
prize acceptance lecture].
There are many criticisms of the modern core of macroeconomics—see, for exam-
ple, Farmer and Geanakoplos (2008) and Kirman (2010). One is of the assumption
that economic agents act in isolation and the only interaction between them is
through the price system. This is clearly unrealistic, as it fails to capture the
interdependence, interaction, and economic networks of the real world. Another is
of the aggregation assumption, according to which the behavior of the aggregate
(or macro) economy corresponds to that of the representative economic agent,
consistent with the reductionist belief that “the whole is the sum of its parts.”
The key property, however, of economic systems (as well as of other natural
systems such as brains, immune systems, and insect colonies) is nonlinearity,
implying that “the whole is different from the sum of the parts.” This realiza-
tion led to a move away from the traditional paradigm of reductionism and the
development of new sciences such as systems biology, chaos and information
theory, and evolutionary economics to explain complex and adaptive systems,
systems in which large numbers of entities with limited communication among
themselves collectively produce complicated global behavior and, in some cases,
evolve and learn. Macroeconomists, however, have handicapped themselves by
not taking a complex systems approach to explaining the dynamics of apparently
self-organizing economic systems.
Another criticism of the current mainstream approach to macroeconomics con-
cerns the definition of rational expectations. Expectations play an important role
in economics and finance, but the definition of rational expectations used in the
dynamic stochastic general equilibrium approach to macroeconomics needs to be
modified, because it applies to a stationary world. As Hendry and Mizon (2010,
p. 13) argue,
the present treatment of expectations in economic theories of inter-
temporal optimization is inappropriate—it cannot be proved that con-
ditional expectations based on contemporaneous distributions are min-
imum mean-square error 1-step predictors when unanticipated breaks
occur, and the law of iterated expectations then also does not hold
inter-temporally. One consequence is that dynamic stochastic general
equilibrium models are intrinsically non-structural, and must fail the
Lucas critique since their derivations depend on constant expectations
distributions.
Finally, another serious criticism of the modern core of macroeconomics pertains
to its formalization of the origins of business cycles (short-run fluctuations). As
Kocherlakota (2010, p. 16) puts it, “the difficulty in macroeconomics is that
virtually every variable is endogenous, but the macroeconomy has to be hit by
some kind of exogenously specified shocks if the endogenous variables are to
move.” Typically, state-of-the-art dynamic stochastic general equilibrium models
attribute short-run fluctuations to high-frequency shocks to fundamentals such as
preferences, technologies, or government policy. This, however, is highly unsatis-
factory. As Angeletos and La’O (2013) recently put it in their Conclusion, “if taken
literally, these shocks seem empirically implausible. Instead, short-run phenomena
appear to have a largely self-fulfilling nature—one that leads many practitioners
to attribute these phenomena to more exotic forces such as ‘animal spirits,’ ‘sen-
timents,’ or ‘market psychology,’ and one that standard macroeconomic models
have failed to capture.”
It is for these reasons that in recent years there has been a revival of interest
in dynamical systems theory, and there is a group of economists who use ideas
from complex systems research to look at economic fluctuations as deterministic
phenomena, endogenously created by market forces and aggregator (utility and
production) functions. Motivated by these developments and recent advances in
complex systems research, this special issue brings together a number of papers
dealing with specific aspects of the economy as a complex evolving dynamic
system. In what follows, I briefly describe these papers.
The first paper, by Alan Kirman, “Ants and Nonoptimal Self-Organization:
Lessons for Macroeconomics,” looks at the analogy often drawn between the economy
and the way in which social insects self-organize. The argument normally heard is
that social insects have adapted optimally over millions of years and this accounts
for the efficiency of their collective systems. This has been used to suggest that
individuals in markets have learned to behave optimally and that this accounts for
the efficiency of those markets. Kirman argues that just like ants, contrary to the
usual account, market participants are far from optimality and efficiency, and that
systems with noisy interacting agents operating with simple rules will constantly
evolve, will necessarily pass through crises, which will be endogenous, and will
never be in what could be considered to be an equilibrium state in the normal
sense.
The second paper, by Roger Farmer, “The Evolution of Endogenous Business
Cycles,” surveys the class of endogenous business cycle models that Farmer has
developed based on indeterminacy and externalities and relates them to broader
themes in the history of macroeconomics. The survey focuses on a distinction
between two generations of endogenous business cycle models. The first gener-
ation uses dynamic indeterminacy (a situation in which a dynamic equilibrium

Downloaded from https:/www.cambridge.org/core. IP address: 189.211.174.122, on 27 Mar 2017 at 23:29:32, subject to the Cambridge Core
terms of use, available at https:/www.cambridge.org/core/terms. https://doi.org/10.1017/S1365100514000261
464 APOSTOLOS SERLETIS

model is associated with a unique steady state but with multiple steady state paths
that converge to it) to get endogenous fluctuations around a deterministic steady
state. This is in contrast to the standard models of exogenous business cycles,
which usually have a unique dynamic equilibrium and a unique steady state. The
second generation exploits the search and matching frictions in the labor market
to get not just dynamic indeterminacy, but also indeterminacy of the steady state
(many steady-state equilibria). Farmer argues that models that exploit steady-state
indeterminacy provide a microfoundation for Keynesian macroeconomics and
in particular for the idea that high unemployment can persist as an equilibrium
phenomenon.
The third paper, by Jess Benhabib, Alberto Bisin, and Shenghao Zhu, “The Dis-
tribution of Wealth in the Blanchard–Yaari Model,” applies dynamic asset pricing
theory to derive some implications in an overlapping-generations model with inter-
generational transmission of wealth, lifetime uncertainty, and redistributive fiscal
policy. The authors show that idiosyncratic investment risk and uncertain lifetime
can generate a stationary double Pareto wealth distribution. This is confirmed to
be asymptotically robust even under government lump-sum redistribution policy.
Moreover, it is shown, using data from the Survey of Consumer Finances, that this
distribution matches up well with the data.
The fourth paper, by Costas Azariadis and Leo Kaas, “Capital Misallocation
and Aggregate Factor Productivity,” proposes a financial theory of aggregate pro-
ductivity that connects the sectoral allocation of capital with sectoral productivity
shocks and credit frictions. The authors emphasize frictions arising from insuffi-
cient collateral for secured loans and from limited enforcement of unsecured loans,
both of which lead to endogenous debt limits. They offer some interesting insights
into the complex dynamic patterns of competitive equilibria in economies with
endogenous debt limits and show that endogenous debt limits slow the realloca-
tion of capital, preventing the equalization of sectoral productivities and sectoral
rates of return. The theory may be useful in answering important questions in
macroeconomics, such as why relatively mild financial market frictions may be
responsible for endogenous fluctuations and an inefficient allocation of capital.
The fifth paper, by Gaetano Antinolfi, Costas Azariadis, and James Bullard, “The
Optimal Inflation Target in an Economy with Limited Enforcement,” formulates a
simple model to study the central bank’s problem of selecting the optimal inflation
rate in monetary economies with heterogeneous agents and limited enforcement.
It assumes two types of agents, cash agents who can buy consumption only with
money and credit agents who can buy consumption with money and with credit
and must voluntarily repay debts (limited enforcement). The authors show that
the optimal rate of inflation is positive, because inflation reduces the value of the
outside option for credit agents and raises their debt limits.
The next paper, by Kazuo Nishimura, Carine Nourry, Thomas Seegmuller, and
Alain Venditti, “Public Spending as a Source of Endogenous Business Cycles in
a Ramsey Model with Many Agents,” considers a Ramsey model with heteroge-
neous agents and introduces public spending financed through consumption taxes
that affects preferences as an externality. It shows that the existence of such an
externality is a source of endogenous business cycles under a set of sufficient
conditions that do not involve empirically implausible increasing returns in pro-
duction, an implausibly low elasticity of substitution between capital and labor,
or an implausibly elastic labor supply with respect to wages.
The next paper, by Quamrul Ashraf, Boris Gershman, and Peter Howitt, “How
Inflation Affects Macroeconomic Performance: An Agent-Based Computational
Investigation,” provides a theoretical and computational framework for the in-
vestigation of the macroeconomic effects of inflation. It takes an agent-based
computational approach to show how inflation can worsen macroeconomic per-
formance by disrupting the market processes that coordinate economic activity in
a decentralized market economy. The authors find that increasing the trend rate of
inflation above 3% has powerful adverse effects on macroeconomic performance,
but lowering it below 3% has no significant economic effects. This finding is
qualitatively robust to changes in parameter values and to modifications to the
model that partly address the Lucas critique.
The last paper, by William Barnett and Unal Eryilmaz, “An Analytical and
Numerical Search for Bifurcations in Open-Economy New Keynesian Models,”
explores bifurcation phenomena in an open-economy New Keynesian model, ex-
tending earlier work by Barnett and his co-authors to the open-economy case. The
authors provide a detailed stability and bifurcation analysis of the model’s equilib-
rium and show the existence of bifurcations within the feasible parameter range of
the model. They find that the introduction of parameters related to the openness of
the economy affects the values of bifurcation parameters and changes the location
of bifurcation boundaries. They conclude that the bifurcation stratification of the
confidence regions remains a serious issue in the context of this open-economy
New Keynesian model, as previously found in closed-economy New Keynesian
functional structures.

REFERENCES
Angeletos, George-Marios and Jennifer La’O (2013) Sentiments. Econometrica 81, 739–779.
Caballero, Ricardo J. (2010) Macroeconomics after the crisis: Time to deal with the pretense-of-
knowledge syndrome. Journal of Economic Perspectives 24, 85–102.
Farmer, J. Doyne and John Geanakoplos (2008) The virtues and vices of equilibrium and the future of
financial economics. Complexity 14, 11–38.
Hayek, Friedrich A. (1974) The Pretence of Knowledge. Prize Lecture, The Sveriges Riksbank Prize
in Economic Sciences in Memory of Alfred Nobel.
Hendry, David F. and Grayham E. Mizon (August 2010) On the Mathematical Basis of Inter-temporal
Optimization. Working paper, Department of Economics, Oxford University.
Kirman, Alan (2010) The economic crisis is a crisis for economic theory. CESifo Economic Studies
56, 498–535.
Kocherlakota, Narayana (2010) Modern macroeconomic models as tools for economic policy. The
Region, 5–21.
Lucas, Robert E., Jr. (1976) Econometric policy evaluation: A critique. Carnegie-Rochester Conference
Series on Public Policy 1, 19–46.

Macroeconomic Dynamics, 13 (Supplement 2), 2009, 151–168. Printed in the United States of America.
doi:10.1017/S1365100509090233

INTRODUCTION TO
MEASUREMENT WITH THEORY
WILLIAM A. BARNETT
University of Kansas
W. ERWIN DIEWERT
University of British Columbia
ARNOLD ZELLNER
University of Chicago

This paper is the introduction to the Macroeconomic Dynamics Special Issue on
Measurement with Theory. The Guest Editors of the special issue are William A. Barnett,
W. Erwin Diewert, Shigeru Iwata, and Arnold Zellner. The papers included are part of a
larger initiative to promote measurement with theory in economics.
Keywords: Measurement, Index Number Theory, Aggregation Theory

Address correspondence to: William A. Barnett, Department of Economics, University of Kansas, Snow Hall, 1460
Jayhawks Blvd., Lawrence, KS 66045-7585, USA; e-mail: barnett@ku.edu.

1. INTRODUCTION
This special issue of Macroeconomic Dynamics is devoted to papers that address
various measurement problems while drawing on economic theory in a manner
that is theoretically internally consistent at all relevant levels of aggregation and
in relation to the models within which the measured data are used. The special
issue is part of a larger initiative to promote measurement with theory, as opposed
to the well-known measurement without theory and theory without measure-
ment. The eight papers in this special issue can be grouped into three general
classes:
Papers that address some of the problems associated with making international compar-
isons of welfare and productivity;
Papers that draw primarily on production theory; and
Papers that draw primarily on consumer theory.
Of course, some papers could be slotted into more than one of the above
categories; in particular, the first two papers in the international comparisons
category draw on consumer theory, whereas the third paper in this category draws
on production theory. The second paper in the production theory category uses both
consumer theory regarding occupational choice and producer theory regarding
financial intermediation.
Sections 2, 3, and 4 will be devoted to brief descriptions of the papers that
appear in each of these three categories.

2. INTERNATIONAL COMPARISONS OF PRICES, QUANTITIES, AND PRODUCTIVITY
Three papers in this special issue make international comparisons. The first paper
is by Robert Feenstra, Hong Ma, and Prasada Rao (2009; henceforth FMR).
Their paper addresses the following problem. Suppose the World Bank or some
other organization undertakes periodic comparisons of final demand prices every
5 or 10 years so that estimates of real GDP and its components can be made
across countries for the year when the International Comparison Project (ICP)
constructs its price estimates. Then national price and expenditure data can be
used to construct comparable estimates of real GDP across time and countries. The
problem with these comparisons is that they change (sometimes very dramatically)
every time a new cross-sectional comparison is made; that is, the resulting panel
data are not consistent across time and space. FMR provide a solution to this
problem by extending the methodology developed by Neary (2004), who made
a single cross-sectional comparison, to the case where there are two sets of
cross-sectional data.
To explain the FMR methodology, it will be useful to explain Neary’s method-
ology first. Neary (2004) draws on an early method for making international
comparisons that was suggested by Geary (1958).1 Because Neary’s method for
making international comparisons is a modification of Geary’s method, we will
explain Geary’s method first.
Suppose we have data on the prices for some time period (in national currencies but in comparable units of measurement) for $N$ commodity groups and $J$ countries, $p_j \equiv [p_{j1}, \ldots, p_{jN}]$, where $p_j$ is the country $j$ price vector and $q_j \equiv [q_{j1}, \ldots, q_{jN}]$ is the corresponding country $j$ quantity vector for $j = 1, \ldots, J$. Thus total expenditure on these commodities for country $j$ in the time period under consideration is the inner product of the country $j$ price and quantity vectors, which we denote by $p_j \cdot q_j$ for $j = 1, \ldots, J$. The Geary–Khamis or GK vector of international reference prices is the vector $\pi \equiv [\pi_1, \ldots, \pi_N]$ and the vector of true exchange rates2 is the vector $\varepsilon \equiv [\varepsilon_1, \ldots, \varepsilon_J]$. The components of these two vectors are the solution to the following set of equations:3

$$\pi_n = \sum_{j=1}^{J} \varepsilon_j p_{jn} q_{jn} \Big/ \sum_{j=1}^{J} q_{jn}, \qquad n = 1, \ldots, N; \tag{1}$$

$$\varepsilon_j = \pi \cdot q_j / p_j \cdot q_j, \qquad j = 1, \ldots, J. \tag{2}$$
Once the reference prices $\pi$ have been determined, the GK quantity aggregates, $Q_j$ for $j = 1, \ldots, J$, are defined as the inner product of the international price vector $\pi$ and the country $j$ quantity vector $q_j$; that is, we have

$$Q_j \equiv \pi \cdot q_j, \qquad j = 1, \ldots, J. \tag{3}$$
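
The GK system lends itself to a simple fixed-point computation. The following is a minimal sketch (our own illustration, not code from any of the papers discussed here), assuming hypothetical price and quantity data: it alternates between (1) and (2) until the exchange rates settle, normalizing country 1 as numéraire because the system pins down $\pi$ and $\varepsilon$ only up to a common scale.

```python
import numpy as np

def geary_khamis(p, q, tol=1e-10, max_iter=1000):
    """Solve the Geary-Khamis system (1)-(2) by fixed-point iteration.

    p, q : (J, N) arrays stacking the country price and quantity vectors.
    Returns (pi, eps, Q): reference prices from (1), true exchange rates
    from (2), and the GK quantity aggregates from (3).
    """
    J, N = p.shape
    eps = np.ones(J)                           # starting exchange rates
    spending = np.einsum("jn,jn->j", p, q)     # p_j . q_j for each country
    for _ in range(max_iter):
        # (1): pi_n = sum_j eps_j p_jn q_jn / sum_j q_jn
        pi = (eps[:, None] * p * q).sum(axis=0) / q.sum(axis=0)
        # (2): eps_j = pi . q_j / (p_j . q_j), then fix the scale
        eps_new = (q @ pi) / spending
        eps_new /= eps_new[0]                  # country 1 as numeraire
        done = np.max(np.abs(eps_new - eps)) < tol
        eps = eps_new
        if done:
            break
    pi = (eps[:, None] * p * q).sum(axis=0) / q.sum(axis=0)  # consistent with final eps
    return pi, eps, q @ pi                     # (3): Q_j = pi . q_j

# Hypothetical data: J = 3 countries, N = 2 commodity groups
p = np.array([[1.0, 2.0], [1.5, 1.0], [2.0, 3.0]])
q = np.array([[10.0, 5.0], [8.0, 6.0], [4.0, 9.0]])
pi, eps, Q = geary_khamis(p, q)
```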

Neary’s modification of Geary’s method works as follows. First, Neary restricted himself to making comparisons of consumption across countries, and so consumer theory can be applied to these comparisons. Second, Neary assumed that per capita consumption data across countries can be rationalized by assuming common (nonhomothetic) tastes across countries.4 Thus using the per capita consumption data for 11 commodity groups for 60 countries for the year 1980 that were constructed by the International Comparisons Project (ICP), Neary estimated a flexible functional form $e(p, u)$ for a consumer expenditure function.5 Once the expenditure function has been estimated, country utility levels $u_j$ can be calculated by solving the following equations for $u_j$:

$$e(p_j, u_j) = p_j \cdot q_j, \qquad j = 1, \ldots, J. \tag{4}$$

Given that the expenditure function $e$ is known and the country utility levels $u_j$ have been determined via equations (4), we could choose any reference price vector $\pi$ and use money metric utility6 estimates $U_j(\pi)$ to obtain a theoretically consistent cardinal measure of the per capita consumption of country $j$ in real terms:

$$U_j(\pi) \equiv e(\pi, u_j), \qquad j = 1, \ldots, J. \tag{5}$$

The family of cardinal utility measures defined by (5) is very closely related to the family of Allen (1949) quantity indices of per capita consumption between countries $j$ and $k$:

$$Q^A(\pi, u_j, u_k) \equiv e(\pi, u_k)/e(\pi, u_j), \qquad j, k = 1, \ldots, J. \tag{6}$$

Thus, for each choice of a reference price vector $\pi$, one can define consistent measures of the per capita consumption of countries in the cross-sectional comparison using (5), and the Allen indices defined by (6) simply take ratios of the cardinal measures defined by (5).
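
Equations (4)–(6) suggest a direct computational recipe: invert (4) to recover each country's utility level, then evaluate (5) and (6) at whatever reference prices one chooses. The sketch below is our own illustration; the toy expenditure function is a hypothetical stand-in for the flexible functional forms (such as AIDS) that Neary and FMR actually estimate.

```python
import numpy as np
from scipy.optimize import brentq

def e(p, u):
    """Toy expenditure function, increasing in u; a hypothetical stand-in
    for an estimated flexible functional form (purely illustrative)."""
    price_index = np.prod(p) ** (1.0 / len(p))   # geometric-mean price index
    return price_index * (u + 0.1 * u**2)        # nonhomothetic in u

def country_utilities(p, q):
    """Solve e(p_j, u_j) = p_j . q_j, equation (4), for each country j."""
    u = np.empty(p.shape[0])
    for j in range(p.shape[0]):
        spending = p[j] @ q[j]
        u[j] = brentq(lambda v: e(p[j], v) - spending, 1e-9, 1e9)
    return u

def money_metric_utilities(pi, u):
    """Equation (5): U_j(pi) = e(pi, u_j)."""
    return np.array([e(pi, uj) for uj in u])

def allen_index(pi, u_j, u_k):
    """Equation (6): Allen quantity index of country k relative to j."""
    return e(pi, u_k) / e(pi, u_j)
```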
Basically, Neary (2004) used the per capita measures defined by (5) (or a
normalization of these measures) to rank the per capita consumption expenditures
of the 60 countries in his 1980 sample of countries. However, we now encounter
the interesting problem of how exactly the international vector of reference prices
π should be chosen.
Neary used the $\pi$ solution to the following equations, which are related to the original Geary equations, (1) and (2):7

$$\pi_n = \sum_{j=1}^{J} \varepsilon_j p_{jn} q_{jn} \Big/ \sum_{j=1}^{J} \partial e(\pi, u_j)/\partial \pi_n, \qquad n = 1, \ldots, N; \tag{7}$$

$$\varepsilon_j = e(\pi, u_j)/p_j \cdot q_j = e(\pi, u_j)/e(p_j, u_j), \qquad j = 1, \ldots, J. \tag{8}$$
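
Continuing the same toy setup (the hypothetical `e` and the utilities recovered from (4) in the sketch above), the system (7)–(8) can be iterated in the same spirit as the GK system, with the Hicksian demands $\partial e(\pi, u_j)/\partial \pi_n$, here approximated by finite differences, replacing the observed quantities in the denominator of (1). Convergence is assumed rather than proved for this illustration.

```python
import numpy as np

def geary_neary_prices(p, q, e, u, n_iter=500, h=1e-6):
    """Iterate (7)-(8) for the Geary-Neary reference prices (a sketch;
    e is an estimated expenditure function, u the utilities from (4))."""
    J, N = p.shape
    pi = p.mean(axis=0)                          # starting reference prices
    spending = np.einsum("jn,jn->j", p, q)       # p_j . q_j
    for _ in range(n_iter):
        # (8): eps_j = e(pi, u_j) / (p_j . q_j)
        eps = np.array([e(pi, u[j]) for j in range(J)]) / spending
        # Hicksian demands d e(pi, u_j) / d pi_n by forward differences
        demand = np.empty((J, N))
        for j in range(J):
            base = e(pi, u[j])
            for n in range(N):
                bumped = pi.copy()
                bumped[n] += h
                demand[j, n] = (e(bumped, u[j]) - base) / h
        # (7): pi_n = sum_j eps_j p_jn q_jn / sum_j d e / d pi_n
        pi = (eps[:, None] * p * q).sum(axis=0) / demand.sum(axis=0)
    return pi
```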

Once the vector of Neary international reference prices $\pi$ has been determined by solving (7) and (8), consistent theoretical measures of country real consumption per capita, $U_j(\pi)$, can be obtained by using the estimated expenditure function $e$ and definition (5) for this $\pi$. Neary (2004, p. 1417) called his real consumption
estimates Geary–Allen true indices of real income and he termed his overall
multilateral method for making international comparisons the Geary–Allen in-
ternational accounts (GAIA). Because multilateral methods are usually named
after the pioneers who invented the method and established the properties of the
method, it seems appropriate to call Neary’s multilateral method the Geary–Neary
(GN) method.
Now we are in a position to evaluate the contribution of FMR. They too estimated
a nonhomothetic flexible functional form for an expenditure function (the AIDS
model), but they used the per capita expenditure data pertaining to two ICP price
and quantity comparisons: the comparisons for 1980 and for 1996. They used
Neary’s data and commodity classification for 1980 and aggregated the ICP data
for the 1996 round of international comparisons to match Neary’s commodity
classification. They ended up with a consistent data set for 48 countries8 that
appeared in both the 1980 and 1996 data sets. Once the AIDS expenditure function
was estimated, FMR used the methodology illustrated by equations (4) and (5)
to construct estimates of real consumption per capita for 48 countries for the
years 1980 and 1996; that is, they succeeded in their goal of obtaining consistent
international comparisons of per capita “income” over time and space.
FMR also raise some interesting issues that have not been discussed in the
literature, which are centered on Neary’s nonhomothetic estimation methodology,
such as: What are the right reference prices π to use in equations (5), which
provide cardinal estimates of per capita consumption? Neary simply assumed that
his GAIA reference prices constructed by solving equations (7) and (8) were the
right reference prices and did not discuss the arbitrariness inherent in this choice.
FMR experiment with alternative choices for the reference prices π . Three of
their choices are Neary’s GAIA reference prices, U.S. prices,9 and the unweighted
geometric mean of all 96 (48 countries times 2 observations) country prices. Of
course, the ordinal ranking of per capita consumption across countries should not
depend on the choice of π , but the results tabulated in the paper by FMR show
that the cardinal ranking of country observations does change considerably as π
changes. The fact that the cardinal estimates of per capita consumption change
considerably as π changes means that it is necessary to debate what the “right”
choice of π is. From the viewpoint of any single country j, the most meaningful
choice of π would be pj , country j’s price vector. However, in the present context,
we have two country j price vectors, say pj1 and pj2 , representing the country j
price vectors for 1980 and 1996, respectively. One could argue that the 1996 price
vector (or, more generally, the last price vector in the sample) is the preferred choice between p_j^1 and p_j^2, because comparing the relative size of country budget sets in the present will be easier to accomplish if the most current set of country prices is used in the per capita consumption comparisons defined by (5) above. One could also argue that it would be more appropriate to choose reference prices for the country under consideration that are representative of the structure of relative prices over the entire sample period, and this would lead us to set the preferred country j reference prices, π_j ≡ [π_{j1}, ..., π_{jN}], equal to the geometric mean of the two country price vectors, [(p_{j1}^1 p_{j1}^2)^{1/2}, ..., (p_{jN}^1 p_{jN}^2)^{1/2}].10 When this choice π_j is used for π, country j's preferred set of per capita real consumption estimates can be defined as follows:
$$U_k^1(\pi_j) \equiv e(\pi_j, u_k^1); \qquad U_k^2(\pi_j) \equiv e(\pi_j, u_k^2); \qquad k = 1, \ldots, J, \tag{9}$$
where u_k^t is country k's estimated utility level for year t, which results from the econometric estimation of the expenditure function e over the panel data set. Thus, instead of a single set of cross-country comparisons, the suggested methodology leads to J sets of cross-country comparisons, one for each country being compared.
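The mechanics of (9) are equally simple. A minimal sketch follows, again with a hypothetical expenditure function and made-up utility levels, since the real exercise requires the estimated panel expenditure function:

```python
import numpy as np

alpha = np.array([0.4, 0.6])                 # hypothetical budget shares

def e(p, u):                                 # stand-in expenditure function
    return u * np.prod(p ** alpha)

p_j1 = np.array([1.0, 2.0])                  # country j prices, 1980
p_j2 = np.array([1.4, 2.6])                  # country j prices, 1996

# Country j's preferred reference prices: the componentwise geometric mean
# of its 1980 and 1996 price vectors.
pi_j = np.sqrt(p_j1 * p_j2)

# Equation (9): real consumption of a country k in both years, valued at
# country j's reference prices (u_k1, u_k2 are estimated utility levels).
u_k1, u_k2 = 1.8, 2.3
U_k1, U_k2 = e(pi_j, u_k1), e(pi_j, u_k2)
print(U_k1, U_k2)
```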
Of course, for many purposes, a single ranking of per capita “incomes” is
required and so, in the end, the J rankings defined by (9) will have to be av-
eraged. Note that we are now in exactly the same situation that occurs when
normal bilateral index number theory is applied in a multilateral context; that is,
a superlative bilateral index number formula can be used to generate estimates
of per capita consumption using country j as the base country, and this leads
to J separate star rankings of real consumption. These separate measures are
then averaged to obtain a final single ranking. The simplest way to average the
individual star rankings is to take their equally weighted geometric average. If the
superlative index number formula is the Fisher (1922) ideal quantity index,11 then
the resulting multilateral method is due to Gini (1931), and it is known as the EKS
or GEKS multilateral method. Note that this method of weighting the separate
star estimates is a democratic method of weighting; that is, each country gets the
same weight in the geometric mean of the separate country estimates. It is also
possible to weight according to the relative size of each country (this is known as
plutocratic weighting12 ), but in the present context, democratic weighting seems
more appropriate. FMR did not calculate the country-specific estimates of real
per capita consumption defined by (9) or take the geometric average of these
estimates, but we conjecture that this geometric average will generally be close
to GEKS or Caves et al. (1982) estimates based on traditional multilateral index
number theory (which does not use econometrics).
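To make the construction explicit, the following sketch (with hypothetical two-commodity data) computes Fisher star comparisons from each base country and then the equally weighted geometric average that defines the GEKS method:

```python
import numpy as np

def fisher_q(p0, q0, p1, q1):
    """Fisher (1922) ideal quantity index: geometric mean of Laspeyres and Paasche."""
    return np.sqrt(((p0 @ q1) / (p0 @ q0)) * ((p1 @ q1) / (p1 @ q0)))

# Hypothetical data: one row of prices and per capita quantities per country.
P = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.5]])
Q = np.array([[3.0, 1.0], [2.5, 1.4], [4.0, 0.8]])
J = len(P)

def star(j):
    """Star comparison with base country j: quantity index of each k relative to j."""
    return np.array([fisher_q(P[j], Q[j], P[k], Q[k]) for k in range(J)])

# GEKS: the democratic (equally weighted) geometric mean over all J star
# rankings, normalized here so that country 0 has level 1.
geks = np.exp(np.mean([np.log(star(j) / star(j)[0]) for j in range(J)], axis=0))
print(geks)
```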
In addition to generalizing Neary’s method to a panel data context, FMR have
several other very interesting results. In particular, they show (for both AIDS and translog nonhomothetic preferences) how to easily generate Allen quantity indices of the type defined by (6) above for arbitrary reference price vectors π, using only
information on the subset of parameters in these functional forms that are related
to nonunitary income elasticities. These results should be of general interest to
econometricians who estimate systems of consumer demand equations.
The second paper dealing with international comparisons in this special issue is
by Robert Hill and Peter Hill (2009). Their paper explains the new methodology
that was used in the 2005 International Comparison Program (ICP), which com-
pared relative price levels and GDP levels across 146 countries.13 In this round of
the ICP, the world was divided into five regions: OECD and CIS countries, Africa,
South America, Asia Pacific, and West Asia. What is new in this round compared
to previous rounds of the ICP is that each region was allowed to develop its own
product list and collect prices on this list for countries in the region. The regions
were then linked using a separate product list: 18 countries across the five regions collected prices for products on this list, and this information was used to
link prices and quantities across the regions. Hill and Hill provide a comprehensive
review of the new methodology that was used in ICP 2005. Their methodological
review covers three topics:
The construction of price indices at the basic heading level;
The construction of price indices between countries within one of the five regions; and
The linking of the country comparisons across regions.

Hill and Hill are well qualified to describe the new methodological developments
in the ICP, because Peter Hill (2007) wrote all of the theory chapters in the World
Bank’s methodological handbook and Robert Hill (1997, 1999, 2001, 2004) has
written numerous papers on multilateral index number theory.14
Hill and Hill look ahead to the next round of the ICP and make an interesting new
methodological suggestion that is a variant of Robert Hill's (1999, 2001, 2004) minimum-spanning-tree methodology for international comparisons. Based on a
suggestion by Diewert, they suggest that countries could be broken up into two
groups: those with well-funded statistical offices and those with less well-funded
offices. The first group of countries would be labeled as core countries, and Hill’s
spanning-tree methodology would be applied initially to only the core countries.15
The remaining countries would be linked to the initial core-country tree using an
appropriate similarity measure for the structure of relative prices in each noncore
country as compared to each country in the core group of countries. Thus, suppose
that there are C countries in the core group of countries and N countries in the
noncore group. Denote the vector of basic heading prices for a core country c by p^c for c = 1, ..., C, and for a noncore country n by P^n for n = 1, ..., N. Suppose further that a suitable measure of relative price dissimilarity, s(p, P), has been chosen.16 Let n be an arbitrary noncore country and consider the following minimization problem:
$$\min\,\{ s(p^c, P^n) : c = 1, \ldots, C \}. \tag{10}$$
Suppose that the core country c(n) solves the above minimization problem, so
that the structure of relative prices in c(n) is the most similar to the structure of
relative prices in the noncore country n.17 Then we link country n to country c(n)
using a superlative index number formula, such as the Fisher (1922) ideal formula.
Thus there are N simple minimization problems of the form (10) to be solved: one
for each noncore country. When all of the problems have been solved, all of the
noncore countries will be linked to the core-spanning tree. The resulting overall
spanning tree will have the property that none of the noncore countries will have a
large influence in the overall spanning tree (and noncore countries will be linked
to core countries which have the most similar structure of relative prices).
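A minimal sketch of this linking procedure is given below (hypothetical data throughout; the variance of the log price relatives is used as one possible dissimilarity measure satisfying the proportionality property of note 16, not necessarily the measure the authors would choose):

```python
import numpy as np

def dissimilarity(p, P):
    """A possible measure s(p, P): the variance of the log price relatives.
    It is zero exactly when p is proportional to P."""
    return np.var(np.log(p / P))

def fisher_q(p0, q0, p1, q1):
    """Fisher ideal quantity index used to link two countries."""
    return np.sqrt(((p0 @ q1) / (p0 @ q0)) * ((p1 @ q1) / (p1 @ q0)))

# Hypothetical basic-heading prices and quantities for two core countries ...
core_p = [np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.0, 2.0])]
core_q = [np.array([5.0, 2.0, 1.0]), np.array([4.0, 3.0, 1.5])]
# ... and one noncore country n, whose prices are roughly proportional to core 0.
P_n = np.array([2.1, 4.1, 6.2])
q_n = np.array([4.5, 2.2, 0.9])

# The minimization problem (10): find the core country c(n) whose structure of
# relative prices is most similar to that of country n.
c_n = min(range(len(core_p)), key=lambda c: dissimilarity(core_p[c], P_n))

# Link country n to c(n) with a superlative (Fisher) formula.
link = fisher_q(core_p[c_n], core_q[c_n], P_n, q_n)
print(c_n, link)
```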
The paper by Robert Inklaar and Marcel Timmer (2009) is our third paper that
deals with international comparisons. However, this paper deals with comparison
of industry levels of output, input, and productivity across countries rather than
with comparisons of consumption or final demand, which were the focus of the
first two papers.
The basic question that Inklaar and Timmer ask is whether productivity levels
are converging across OECD countries over time. To address this question, the
authors developed a new industry-level database for 20 OECD countries over the
years 1970–2005. The authors combined the EU KLEMS growth accounting
database with their GGDC productivity level database. This second database
provides productivity level comparisons for 20 OECD countries at a detailed
industry level for the benchmark year 1997. Then the EU KLEMS database was
used to extrapolate this benchmark through time from 1970 to 2005. This was
done at a detailed industry level. Modern production theory based on the user cost
of capital and superlative indices was used in constructing their database.18
Inklaar and Timmer find that Bernard and Jones (1996) were basically right:
patterns of convergence in a set of advanced OECD countries differ consider-
ably across sectors. Inklaar and Timmer find a process of steady convergence in
market services since the 1970s, but they find little evidence for convergence in
manufacturing or other goods-producing industries. Moreover, when they analyze
convergence at a more detailed industry level using a dataset of 24 industries, they
find that the patterns of convergence and divergence since 1980 are very different
and far from homogeneous, even within industry groups such as market services.19

3. PRODUCTION THEORY
This special issue contains two papers relevant to measurement in a production
context. The paper by Bert Balk (2009b) in this special issue looks at the relation-
ship between measures of productivity growth that are based either on a gross-
output framework or on a value-added framework. If Y^t denotes the aggregate output of a production unit in period t and X^t denotes the corresponding aggregate input used by that unit in period t, the period t (total factor) productivity (TFP) of the unit is simply the ratio Y^t/X^t. The corresponding period t TFP growth of the unit, TFPG^t, is defined as the ratio of the unit's period t TFP to the corresponding period t − 1 TFP:
$$\mathrm{TFPG}^t \equiv \frac{Y^t/X^t}{Y^{t-1}/X^{t-1}} = \frac{Y^t/Y^{t-1}}{X^t/X^{t-1}}. \tag{11}$$
The second expression in (11) indicates how TFP growth can be computed in practice: aggregate output growth Y^t/Y^{t−1} can be replaced by a bilateral quantity index, Q(p^{t−1}, p^t, y^{t−1}, y^t), where p^t and y^t are the period t output price and quantity vectors for the production unit, and aggregate input growth, X^t/X^{t−1}, can be replaced by a bilateral quantity index, Q*(w^{t−1}, w^t, x^{t−1}, x^t), where w^t and x^t are the period t input price and quantity vectors for the production unit.20 The difference between the gross-output and value-added approaches to measuring TFP growth can now be explained. In the gross-output framework, Q(p^{t−1}, p^t, y^{t−1}, y^t) is an aggregate growth rate for all of the outputs produced by the production unit and Q*(w^{t−1}, w^t, x^{t−1}, x^t) is an aggregate growth rate for all of the primary and intermediate inputs used by the production unit, whereas in the value-added framework, Q(p^{t−1}, p^t, y^{t−1}, y^t) is an aggregate growth rate for all of the outputs produced by the production unit less the intermediate inputs used by the unit21 and Q*(w^{t−1}, w^t, x^{t−1}, x^t) is an aggregate growth rate for just the primary inputs used by the production unit.
If revenues equal costs for the production unit in both periods, Balk is able to derive a very general exact relationship between the gross-output and value-added measures of TFP growth; see his Theorem 1. Balk sums up his result in words as follows: if the unit's profits are zero in both periods, then its value added–based productivity change is equal to its gross output–based productivity change times the ratio22 of average revenue to average value added for the two periods under consideration. Thus the value-added measure of TFP growth will always be larger than the corresponding gross-output measure if profits are zero in both periods. Balk also provides an
elegant proof of Domar’s (1961) aggregation rule, which relates the gross-output
TFP growth rates of a set of production units to an economy-wide measure of TFP
growth.
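As a concrete sketch of (11), the following Python fragment computes TFP growth for a single production unit, replacing the output and input growth factors with Fisher ideal quantity indices (any superlative formula could be substituted; all data are hypothetical):

```python
import numpy as np

def fisher_q(p0, q0, p1, q1):
    """Fisher ideal quantity index between periods t-1 and t."""
    return np.sqrt(((p0 @ q1) / (p0 @ q0)) * ((p1 @ q1) / (p1 @ q0)))

# Hypothetical two-commodity output and input data, periods t-1 and t.
p0, p1 = np.array([1.0, 2.0]), np.array([1.1, 2.1])    # output prices
y0, y1 = np.array([10.0, 5.0]), np.array([11.0, 5.5])  # output quantities
w0, w1 = np.array([0.5, 3.0]), np.array([0.6, 3.1])    # input prices
x0, x1 = np.array([20.0, 4.0]), np.array([20.0, 4.1])  # input quantities

# Equation (11): TFP growth is aggregate output growth over aggregate input
# growth, with each growth factor replaced by a bilateral quantity index.
tfpg = fisher_q(p0, y0, p1, y1) / fisher_q(w0, x0, w1, x1)
print(tfpg)
```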
In Section 4 of his paper, Balk turns his attention to difference measures of TFP
growth. Because many readers will not be familiar with this approach, we will
provide an introduction to this topic.23
Traditional bilateral index number theory decomposes a value ratio pertaining to the two periods under consideration (say periods 0 and 1) into the product of a price index, P(p^0, p^1, q^0, q^1), and a quantity index, Q(p^0, p^1, q^0, q^1); that is, we have
$$\frac{p^1 \cdot q^1}{p^0 \cdot q^0} = P(p^0, p^1, q^0, q^1)\, Q(p^0, p^1, q^0, q^1). \tag{12}$$
If there is only one commodity in the aggregate, then the price index P(p^0, p^1, q^0, q^1) collapses down to the single price ratio p_1^1/p_1^0 and the quantity index Q(p^0, p^1, q^0, q^1) collapses down to the single quantity ratio q_1^1/q_1^0. Thus traditional index number theory is based on a ratio principle.
Bennet (1920) and Montgomery (1937) pursued the branch of index number theory where differences replaced the ratios in (12). Thus, they looked for two functions of 4N variables, P(p^0, p^1, q^0, q^1) and Q(p^0, p^1, q^0, q^1), which added up to the value difference in the aggregate rather than the value ratio; that
is, these two functions were to satisfy the following equation:
$$p^1 \cdot q^1 - p^0 \cdot q^0 = P(p^0, p^1, q^0, q^1) + Q(p^0, p^1, q^0, q^1). \tag{13}$$
The two functions P(p^0, p^1, q^0, q^1) and Q(p^0, p^1, q^0, q^1) are to satisfy certain tests or properties that will allow us to identify P(p^0, p^1, q^0, q^1) as a measure of aggregate price change and Q(p^0, p^1, q^0, q^1) as a measure of aggregate quantity or volume change. Note that if either of these functions is determined, then the other function is also determined. The terms indicator of price and quantity change are used to describe P(p^0, p^1, q^0, q^1) and Q(p^0, p^1, q^0, q^1), respectively24 (as opposed to the terms price and quantity index, which appear in the traditional ratio approach to index number theory).
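The Bennet (1920) indicators are the leading example, and they make the decomposition (13) concrete: the price indicator values the price change at average quantities, the quantity indicator values the quantity change at average prices, and the two add up to the value difference as an algebraic identity. A minimal sketch (hypothetical data):

```python
import numpy as np

def bennet(p0, q0, p1, q1):
    """Bennet price and quantity indicators, which decompose the value
    difference p1.q1 - p0.q0 additively, as in equation (13)."""
    price_indicator = (0.5 * (q0 + q1)) @ (p1 - p0)
    quantity_indicator = (0.5 * (p0 + p1)) @ (q1 - q0)
    return price_indicator, quantity_indicator

p0, q0 = np.array([1.0, 3.0]), np.array([4.0, 2.0])
p1, q1 = np.array([1.2, 2.8]), np.array([4.5, 2.5])

P_B, Q_B = bennet(p0, q0, p1, q1)
print(P_B + Q_B, p1 @ q1 - p0 @ q0)   # the two numbers coincide exactly
```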
Where might one use the difference approach to analyzing value change? A
natural home for this approach is in the business and accounting community. The
usual ratio approach to the decomposition of value change is not one that the
business and accounting community finds natural; a manager or owner of a firm is
typically interested in analyzing profit differences rather than ratios. Thus interest
centers on decomposing cost, revenue, or profit changes into price and quantity
(or volume) effects, and this is precisely what Balk (2009b) does in Section 4 of
his paper. Note that if (13) represents a profit decomposition for a production unit, then Q(p^0, p^1, q^0, q^1) can be interpreted as a measure of efficiency improvement for that unit; that is, it is the difference counterpart to TFP growth in the usual ratio approach to index number theory.25 For example, the owner of an oil exploration company will generally be interested in knowing how much of the difference between current period profits and previous period profits is due to a change in the price of crude oil [this will show up in the P(p^0, p^1, q^0, q^1) term] and how much of the profit change is due to improvements in the operating efficiency of the company [the Q(p^0, p^1, q^0, q^1) term].
Another natural application of the difference approach to index number theory is
in consumer surplus theory. In this context, the problem is to decompose the change in a consumer's expenditures between two periods into a price-change component, P(p^0, p^1, q^0, q^1), and a quantity-change component, Q(p^0, p^1, q^0, q^1), which can be interpreted as a constant-dollar measure of utility change. This line of research was started by Marshall (1890) and Bennet (1920) and continued by Hotelling (1938, 253–254), Hicks (1941–1942, 134; 1945–1946), and Harberger (1971). This second application of the difference approach to index number theory is pursued by Diewert and Mizobuchi (2009), and we will describe their contribution in more detail in the following section.
In Section 4 of his paper, Balk (2009b) establishes a remarkably simple result
using the difference approach to describing the growth in efficiency of a production
unit over the two periods under consideration: the value added–based indicator of TFP growth is exactly equal to the corresponding gross output–based indicator of TFP growth. Thus there is no need for Domar factors when the difference approach is used to measure the growth of efficiency!
In Section 5 of his paper, Balk uses the continuous-time Divisia approach to the
measurement of TFP growth (rather than discrete-time indices) and establishes
essentially the same results that he established in Section 3; that is, under certain
assumptions, he shows that value-added TFP growth is equal to the Domar factor
times gross-output TFP growth.
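A one-line numerical illustration of this growth-rate relation, with purely hypothetical figures:

```python
revenue, value_added = 100.0, 40.0      # period-average revenue and value added
domar_factor = revenue / value_added    # = 2.5; the "Domar factor" of note 22
go_tfp_growth = 0.01                    # gross output-based TFP growth rate (1%)
va_tfp_growth = domar_factor * go_tfp_growth
print(va_tfp_growth)                    # 0.025: the value-added measure is larger
```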
In Section 6, Balk again uses a continuous-time framework, but in this section,
he introduces a production function and assumes competitive cost-minimizing
behavior on the part of the production unit, and he also assumes that the produc-
tion unit maximizes value added, taking prices as fixed. He then looks at a cost
function-based measure of technical progress and compares it to a value added
function-based measure of technical progress. He again finds a Domar-type factor
relating the two measures of technical progress. In the remainder of Section 6,
Balk makes various assumptions about competitive behavior and the structure
of technical progress in order to obtain generalizations of the results of Solow
(1957), Jorgenson and Griliches (1967), and Denny et al. (1981) relating TFP
growth (both gross output and value added) to (path-independent) measures of
technical progress. Balk finds that path independence of the gross output–based
TFP index requires that the technology exhibit Hicks input neutrality, whereas
path independence of the value added–based TFP index requires Hicks value-
added neutrality. In Balk’s view, the two sets of assumptions on the structure of
technology and the nature of technical progress are equally restrictive: they simply
imply two numerically different measures of technological change. Balk notes that
this state of affairs does not imply a breakdown of measurement, but rather reflects a structural fact: technological change simply moves the production unit's
production possibilities set through time. Measurement means that this move-
ment must be mapped into one-dimensional space. There is no unique way to do
this.
As has been emphasized by Barnett and Hahm (1994), Barnett and Zhou (1994),
and Barnett et al. (1995), empirical research on financial intermediation that brings
together production theory, index number theory, and econometrics in an internally
coherent manner is rare. Considering the nature of the current financial crisis, few
subjects can be considered as important as financial intermediation. Townsend
and Urzua, in their innovative paper in this special issue, state (regarding contract
theory models of financial intermediation and econometric policy evaluation) that
“our goal in this paper is to bring these two strands of the literature together and
discuss the assumptions that allow the researcher to go back and forth between
the theory and the data.” They consider the large literature on changes in financial
policies that affect occupational choices by changing credit constraints and/or
changing occupational choice risk conditions. As the authors state in the final
sentence of their abstract, “all in all, our objective is to assess the impact of
financial intermediation on occupational choices and income.” In their paper, they
consider, theoretically and with a number of empirical examples, various economic
theories, econometric methods, and types of data that are useful in achieving their
important objective. Because of the focus on occupational choice, this paper is at
least as relevant to consumer theory as to production theory and admirably uses theory from both sides of the market.

4. CONSUMER THEORY
Three papers in this special issue look at various measurement problems that are
associated with aspects of consumer theory. The first consumer theory paper, by
Barnett and de Peretti (2009), deals with the subject that is logically prior to
all other empirical applications in consumer theory: existence of aggregates and
existence of sectors of the economy. Without existence of admissible clusterings
of goods permitting aggregation over quantities and prices and separation of the
economy into sectors to lower the dimensions of models, there are too many
quantities and prices to permit empirical analysis with the typically small sample
sizes available to economists. The admissibility condition permitting clustering
together groups of goods as components of aggregates is blockwise weak separa-
bility. Although there is a long history of research on tests for weak separability,
that hypothesis has been resistant to successful testing, as has been shown in
Monte Carlo studies, starting with Barnett and Choi (1989).
Weak separability is equivalent to the existence of a composite function struc-
ture, a subtle hypothesis that is difficult to test by parametric means. Although
flexible function-form specifications have been successful in testing many other
hypotheses on functional structure, the imposition of blockwise weak separability
on parsimonious flexible functional forms, such as the translog and generalized
Leontief, results in loss of flexibility, often to a severe degree, as first shown
by Blackorby et al. (1977).26 Consequently, tests of weak separability become
tests of the joint hypothesis of weak separability and a very inflexible aggregator
function, with the latter subhypothesis being the most likely source of rejection.
Other parametric approaches to testing weak separability have been proposed
and used, but none so far have been shown to perform well in Monte Carlo
studies.
The inherent problems associated with parametric testing of weak separability
have resulted in the growth of nonparametric approaches, often based upon re-
vealed preference theory, with the seminal papers being those of Varian (1982,
1983), who drew on the earlier work of Afriat (1967) and Diewert (1973). But
these tests have similarly performed poorly in Monte Carlo studies, because they
have tended to be deterministic tests that do not cope well with violations of
revealed preference theory resulting from noise in the data.
In their paper, Barnett and de Peretti introduce seminonparametric testing into this literature. This middle-ground approach between parametric and nonparametric approaches solves the problems associated with the earlier approaches. Barnett and de Peretti's seminonparametric approach is fully stochastic and is not inflexible
under the null. Hence their approach is a direct test of weak separability rather
than a composite test embedding a more restrictive hypothesis within the null. By
eliminating the bias against acceptance of weak separability inherent in the earlier
approaches, Barnett and de Peretti's seminonparametric test performs well in Monte Carlo testing.
The second consumer theory paper is by Erwin Diewert and Hideyuki
Mizobuchi (2009), and it addresses the problem of finding index number for-
mulae (and indicator formulae) that are exact for flexible functional forms for
nonhomothetic preferences. Diewert (1976) addressed this problem in the index
number context, but for the most part his exact index number results assumed that the consumer had homothetic preferences.27 Diewert (1992a) also addressed the
problem of finding exact indicator formulae in the difference approach to index
number theory, but in the end, his results followed the example of Weitzman (1988),
who assumed that preferences were homothetic. Thus the problem of obtaining
indicator formulae that are exact for nonhomothetic preferences remains open.
This is the problem that Diewert and Mizobuchi address.
As noted earlier, traditional index number theory is based on ratio concepts.
Thus, if the consumer’s preferences are homothetic (so that they can be represented
by a linearly homogeneous utility function), then the family of Konüs (1939) price
indices collapses to a ratio of unit cost functions and the family of Allen (1949)
quantity indices collapses to a ratio of utility functions, where these functions are
evaluated with the data of, say, period 1 in the numerator and the data of period
0 in the denominator. After describing this traditional approach to index number
theory, Diewert and Mizobuchi switch to the economic approach pioneered by
Hicks (1941–1942, 1945–1946), which is based on differences. As noted in our
discussion of Balk’s paper, in the traditional approach to index number theory, a
value ratio is decomposed into the product of a price index and a quantity index,
whereas in the difference approach, a value difference is decomposed into the sum
of a price indicator (which is a measure of aggregate price change) and a quan-
tity indicator (which is a measure of aggregate quantity change). The difference
analogue to a theoretical Konüs price index is a Hicksian price variation, and the
difference analogue to an Allen quantity index is a Hicksian quantity variation,
such as the equivalent or compensating variation. For normal index number theory,
the theoretical Konüs and Allen indices are defined using ratios of cost functions,
but in the difference approach to index number theory, the theoretical price and
quantity variation functions are defined in terms of differences of cost functions.
Both index number formulae and indicator functions are known functions of the
price and quantity data pertaining to the two periods under consideration.
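In symbols, writing C(p, u) for the consumer's cost function, the four theoretical concepts just described can be summarized as follows (a standard rendering, with u a reference utility level and p a reference price vector):
$$P_K(p^0, p^1, u) \equiv \frac{C(p^1, u)}{C(p^0, u)}, \qquad Q_A(u^0, u^1, p) \equiv \frac{C(p, u^1)}{C(p, u^0)},$$
$$C(p^1, u) - C(p^0, u) \ \ \text{(Hicksian price variation)}, \qquad C(p, u^1) - C(p, u^0) \ \ \text{(Hicksian quantity variation)},$$
so that, for example, setting p = p^1 in the quantity variation yields the compensating variation, and setting p = p^0 yields the equivalent variation.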
Diewert and Mizobuchi define a given price or quantity indicator function
to be strongly superlative if it is exactly equal to a corresponding theoretical
price or quantity variation, under the assumption that the consumer has (general)
preferences that are dual to a flexible cost function that is subject to money metric
utility scaling. As we noted in our discussion of FMR, the term “money metric
utility scaling” is due to Samuelson (1974), and it is simply a convenient way
of cardinalizing a utility function. Diewert and Mizobuchi show that the Bennet
(1920) indicator functions are strongly superlative. Their results require that the
consumer’s preferences be represented by a certain translation-homothetic cost
function that is a variant of the normalized quadratic cost function introduced by Diewert and Wales (1987).
Diewert and Mizobuchi also show that the aggregation-over-consumers problem
in the difference approach to index number theory is not as difficult as it is in the
ratio approach to index number theory. In particular, Diewert and Mizobuchi show
that it is possible to measure the arithmetic average of the economy’s sum of
the individual household equivalent and compensating variations exactly using
only aggregate data, because this aggregate measure of welfare change is exactly
equal to the Bennet quantity indicator using only aggregate quantity data. In other
words, the difference approach to the measurement of aggregate price and quantity
change has better aggregation properties than the traditional ratio approach.
Diewert and Mizobuchi also provide an economic interpretation for each term
in the sum of terms that make up the Bennet price and quantity indicators. The
decomposition results developed here are analogues to similar results obtained
by Diewert and Morrison (1986) and Kohli (1990) in the traditional approach to
index number theory.
Diewert and Mizobuchi conclude their paper by using the difference approach
to measure aggregate Japanese consumption, and they contrast their results with
the results generated by the traditional ratio approach to the measurement of real
consumption.
The third paper dealing with measurement problems in the context of consumer
theory is by W. A. Barnett, M. Chauvet, and H. L. R. Tierney (2009). Most of
the papers in this special issue connect together measurement and theory through
the use and advancement of economic index number theory and microeconomic
aggregation theory. But there is another tradition that has been growing in impor-
tance: the use of state-space econometric modeling with measurement functions.
In this paper, Barnett et al. integrate the state-space time-series approach with
the economic index number–theoretic approach, using the state-space approach to
demonstrate that the aggregation-theoretic Divisia monetary aggregates deviate
from their common dynamics with the official atheoretical simple-sum monetary
aggregates in ways that have predictive power regarding turning points in the
business cycle.
Their state-space approach uses a factor model with regime switching. The
model separates out the common movements underlying the theoretical and athe-
oretical monetary aggregate indices, summarized in the dynamic factor, from
individual variations in each individual series, captured by the idiosyncratic terms.
The idiosyncratic terms and the measurement errors reveal where the monetary
indices differ.
In future research, it could be useful to generalize to explicit treatment of measurement error in component quantities and prices and to incorporate the Divisia monetary growth–rate variances introduced by Barnett and Serletis (1990) to measure dispersion of growth rates over components of aggregates. In fact,
the approaches used in all of the papers in this special issue could benefit from
more explicit treatment of second moments and the consequent possible sources
of error in index number theory with data that are potentially subject to errors in
the variables.

NOTES
1. For surveys of the early methods used to make international comparisons that rely on index
number theory, see Diewert (1988, 1999) and Balk (1996). In this literature, Geary’s method is known
as the Geary (1958)–Khamis (1972) method (the GK method) because Khamis provided a rigorous
proof of the existence of a solution to Geary’s equations.
2. Neary (2004, p. 1413) uses this terminology. In the international comparisons literature, country j's true exchange rate εj is usually replaced by its reciprocal, country j's purchasing power parity.
3. The system of equations (1) and (2) only determines π and ε up to a positive scalar multiple.
Thus we need an additional normalization on (1) and (2) such as ε1 = 1 or π1 = 1 to obtain a unique
solution.
4. Thus pj is now a vector of consumer prices for country j and qj is the corresponding per capita
consumption vector.
5. Neary (2004, p. 1420) estimated both the AIDS model of Deaton and Muellbauer (1980) and
the more flexible QUAIDS model of Banks et al. (1997). Neary (2004, p. 1422) noted that his AIDS
and QUAIDS models were visually indistinguishable in his Figure 1.
6. This terminology is due to Samuelson (1974, p. 1262), but the basic idea is due to Hicks
(1941–1942). Basically, this method cardinalizes utility using the distance from the origin of a family
of parallel budget hyperplanes (indexed by the reference price vector) where each budget hyperplane
is just tangent to an indifference surface.
7. As in the GK system of equations, Neary shows that one additional normalization on the
components of π and ε is needed to obtain a unique solution to (7) and (8).
8. Their analysis does rest on the assumption that the commodity units in the two ICP rounds are
comparable, an assumption that is only approximately correct.
9. The components of the U.S. π are the geometric means of the 1980 and 1996 U.S. prices for
each of the 11 components of consumption.
10. FMR use this set of country reference prices as building blocks in some of their comparisons.
11. Caves et al. (1982) applied this method using the Törnqvist–Theil bilateral quantity index as
their bilateral index.
12. See Diewert (1988, p. 71; 1999), who introduced this terminology in the multilateral context.
13. The 2005 ICP round was sponsored by the World Bank and other national and international statistical agencies. The final results of this round of GDP comparisons were released in February 2008; see World Bank (2008). The various ICP rounds play a very important role in the construction of the Penn World Tables, which are heavily utilized by many development economists and macroeconomists.
14. The World Bank’s (2008) release of the ICP 2005 results did not describe the methodology
used in the comparison. The theoretical chapters in the World Bank’s Handbook written by Peter Hill
(2007) did not describe all of the methodology actually used in the final comparison. Hence the paper
by Hill and Hill is particularly valuable in describing the actual methodology used in some of the
problem areas. For additional methodological material on ICP 2005, see Diewert (2008) and Deaton
and Heston (2008).
15. Alternatively, the initial set of core country comparisons could be obtained using the GEKS
method.
16. If p is proportional to P, then s(p, P) = 0, and if p is not proportional to P, then s(p, P) > 0.
Thus the dissimilarity measure is analogous to a distance function in the mathematics literature. For
additional material on the properties of dissimilarity measures, see Diewert (2009).
17. If more than one country c is a solution to (10), then any of these solution countries could be
taken as the link of country n to the core-spanning tree. Alternatively, all of the solution countries
could be used as links to the core-spanning tree and a geometric average of the resulting links could
be taken.
18. This literature dates back to the pioneering contributions of Jorgenson and Griliches (1967).
Later, their continuous-time methodology using Divisia indices was adapted to discrete time and the
use of superlative indexes by Diewert (1976), Caves et al. (1982), and Diewert and Morrison (1986).
19. One limitation of their analysis should be kept in mind: the EU KLEMS database does not have
the services of land or inventories included as inputs. Thus results for land-intensive sectors such as
agriculture or for inventory-intensive sectors such as retailing may be less reliable than results for other
sectors.
20. For additional material on basic productivity analysis, see Jorgenson and Griliches (1967),
Diewert (1992b), Balk (2003), and Diewert and Nakamura (2003).
21. In the value-added framework, the vector yt consists of all of the outputs produced by the unit
during period t as well as all of the intermediate inputs used during the period. The components of yt
that correspond to outputs are given positive signs, whereas the components of yt that correspond to
intermediate inputs are given negative signs.
22. Balk calls this ratio the Domar factor; see Domar (1961).
23. This topic is pursued in much greater depth by Diewert (1992a, 2005) and Balk (2009a).
24. This indicator terminology was introduced by Diewert (1992a; 2005, 349).
25. See Diewert (2005, 353) for this interpretation of Q(p^0, p^1, q^0, q^1). However, the basic idea
can be traced back to the early accounting and industrial engineering literature; see Harrison (1918,
275).
26. However, Diewert and Wales (1995) derived tests for homogeneous weak separability in the
production context, using the normalized quadratic functional form, that were not subject to this
inflexibility problem. For a survey of much of the relevant literature in a demand context, see Barnett
and Serletis (2008).
27. Diewert (1976, 122–123) did establish two exact results for translog nonhomothetic preferences.

REFERENCES
Afriat, S.N. (1967) The construction of utility functions from expenditure data. International Economic
Review 8, 67–77.
Allen, R.G.D. (1949) The economic theory of index numbers. Economica 16, 197–203.
Balk, B.M. (1996) A comparison of ten methods for multilateral international price and volume
comparisons. Journal of Official Statistics 12, 199–222.
Balk, B.M. (2003) The residual: On monitoring and benchmarking firms, industries and economies
with respect to productivity. Journal of Productivity Analysis 20, 5–47.
Balk, B.M. (2009a) Measuring productivity change without neoclassical assumptions: A conceptual
analysis. In W.E. Diewert, B.M. Balk, D. Fixler, K.J. Fox, and A.O. Nakamura (eds.), Price and
Productivity Measurement: Volume 6—Index Number Theory, Ch. 4. Victoria, Canada: Trafford
Press. In press.
Balk, B.M. (2009b) On the relation between gross output– and value added–based productivity mea-
sures: The importance of the Domar factor. Macroeconomic Dynamics 13, Supplement 2, 241–267.
Banks, J., R. Blundell, and A. Lewbel (1997) Quadratic Engel curves and consumer demand. Review
of Economics and Statistics 79, 527–539.
Barnett, W.A. and S. Choi (1989) A Monte Carlo study of tests of blockwise weak separability. Journal
of Business and Economic Statistics 7, 363–377.
Barnett, W.A. and J.H. Hahm (1994) Financial firm production of monetary services: A generalized
symmetric Barnett variable-profit function approach. Journal of Business and Economic Statistics
12, 33–46.
Barnett, W.A. and P. de Peretti (2009) Admissible clustering of aggregator components: A necessary
and sufficient stochastic seminonparametric test for weak separability. Macroeconomic Dynamics
13, Supplement 2, 317–334.
Barnett, W.A. and A. Serletis (1990) A dispersion dependency diagnostic test for aggregation error:
With applications to monetary economics and income distribution. Journal of Econometrics 43,
5–43. Reprinted in W.A. Barnett and A. Serletis (eds.), The Theory of Monetary Aggregation,
Chap. 9, pp. 5–34. Amsterdam: North-Holland, 2000.
Barnett, W.A. and A. Serletis (2008) Consumer preferences and demand systems. Journal of Econo-
metrics 147, 210–224.
Barnett, W.A. and G. Zhou (1994) Financial firms’ production and supply-side monetary aggregation
under dynamic uncertainty. Federal Reserve Bank of St. Louis Review 76, 133–165.
Barnett, W.A., M. Chauvet, and H.L.R. Tierney (2009) Measurement error in monetary aggregates: A
Markov switching factor approach. Macroeconomic Dynamics 13, Supplement 2, 381–412.
Barnett, W.A., M. Kirova, and M. Pasupathy (1995) Estimating policy-invariant deep parameters in
the financial sector when risk and growth matter. Journal of Money, Credit, and Banking 27, 1402–
1430.
Bennet, T.L. (1920) The theory of measurement of changes in cost of living. Journal of the Royal Statistical Society 83, 455–462.
Bernard, A.B. and C.I. Jones (1996) Comparing apples to oranges: Productivity convergence and
measurement across industries and countries. American Economic Review 86(5), 1216–1238.
Blackorby, C., D. Primont, and R.R. Russell (1977) On testing separability restrictions with flexible
functional forms. Journal of Econometrics 5, 195–209.
Caves, D.W., L.R. Christensen, and W.E. Diewert (1982) Multilateral comparisons of output, input
and productivity using superlative index numbers. Economic Journal 92, 73–86.
Deaton, A. and A. Heston (2008) Understanding PPPs and PPP-Based National Accounts. NBER
Working Paper 14499, National Bureau of Economic Research, Cambridge, MA.
Deaton, A.S. and J. Muellbauer (1980) An almost ideal demand system. American Economic Review
70, 312–326.
Denny, M., M. Fuss, and L. Waverman (1981) The measurement and interpretation of Total Factor
Productivity in regulated industries, with an application to Canadian telecommunications. In T.G.
Cowing and R.E. Stevenson (eds.), Productivity Measurement in Regulated Industries, pp. 179–218.
New York: Academic Press.
Diewert, W.E. (1973) Afriat and revealed preference theory. Review of Economic Studies 40, 419–
425.
Diewert, W.E. (1976) Exact and superlative index numbers. Journal of Econometrics 4, 114–145.
Diewert, W.E. (1988) Test approaches to international comparisons. In W. Eichhorn (ed.), Measurement
in Economics: Theory and Applications of Economic Indices, pp. 67–86. Heidelberg: Physica-Verlag.
Diewert, W.E. (1992a) Exact and superlative welfare change indicators. Economic Inquiry 30, 565–582.
Diewert, W.E. (1992b) The measurement of productivity. Bulletin of Economic Research 44(3), 163–
198.
Diewert, W.E. (1999) Axiomatic and economic approaches to international comparisons. In A. Heston
and R.E. Lipsey (eds.), International and Interarea Comparisons of Income, Output and Prices,
Studies in Income and Wealth, vol. 61, pp. 13–87. Chicago: University of Chicago Press.
Diewert, W.E. (2005) Index number theory using differences instead of ratios. American Journal of
Economics and Sociology 64(1), 311–360.
Diewert, W.E. (2008) New Methodological Developments for the International Comparison Program.
Discussion Paper 08-08, Department of Economics, University of British Columbia, Vancouver,
Canada.
Diewert, W.E. (2009) Similarity indexes and criteria for spatial linking. In D.S. Prasada Rao (ed.),
Purchasing Power Parities of Currencies: Recent Advances in Methods and Applications, Chap. 8,
pp. 155–176. Cheltenham, UK: Edward Elgar.
Diewert, W.E. and H. Mizobuchi (2009) Exact and superlative price and quantity indicators. Macro-
economic Dynamics 13, Supplement 2, 335–380.
Diewert, W.E. and C.J. Morrison (1986) Adjusting output and productivity indexes for changes in the
terms of trade. Economic Journal 96, 659–679.
Diewert, W.E. and A.O. Nakamura (2003) Index number concepts, measures and decompositions of
productivity growth. Journal of Productivity Analysis 19, 127–159.
Diewert, W.E. and T.J. Wales (1987) Flexible functional forms and global curvature conditions.
Econometrica 55, 43–68.
Diewert, W.E. and T.J. Wales (1995) Flexible functional forms and tests of homogeneous separability.
Journal of Econometrics 67, 259–302.
Domar, E.D. (1961) On the measurement of technological change. The Economic Journal 71, 709–721.
Feenstra, R.C., H. Ma, and D.S. Prasada Rao (2009) Consistent comparisons of real incomes across
time and space. Macroeconomic Dynamics 13, Supplement 2, 169–193.
Fisher, I. (1922) The Making of Index Numbers. Boston: Houghton-Mifflin.
Geary, R.G. (1958) A note on comparisons of exchange rates and purchasing power between countries.
Journal of the Royal Statistical Society Series A 121, 97–99.
Gini, C. (1931) On the circular test of index numbers. Metron 9(9), 3–24.
Harberger, A.C. (1971) Three basic postulates for applied welfare economics: An interpretive essay.
Journal of Economic Literature 9, 785–797.
Harrison, G.C. (1918) Cost accounting to aid production, I. Industrial Management 56, 273–282.
Hicks, J.R. (1941–1942) Consumers’ surplus and index numbers. Review of Economic Studies 9,
126–137.
Hicks, J.R. (1945–1946) The generalized theory of consumers’ surplus. Review of Economic Studies
13, 68–74.
Hill, R.J. (1997) A taxonomy of multilateral methods for making international comparisons of prices
and quantities. Review of Income and Wealth 43(1), 49–69.
Hill, R.J. (1999) Comparing price levels across countries using minimum spanning trees. Review of
Economics and Statistics 81, 135–142.
Hill, R.J. (2001) Measuring inflation and growth using spanning trees. International Economic Review
42, 167–185.
Hill, R.J. (2004) Constructing price indexes across space and time: The case of the European Union.
American Economic Review 94, 1379–1410.
Hill, R.J. and T.P. Hill (2009) Recent developments in the international comparison of prices and real
output. Macroeconomic Dynamics 13, Supplement 2, 194–217.
Hill, T.P. (2007) ICP 2003–2006 Handbook, Chaps. 11–15. Washington DC: World Bank. Available
at http://siteresources.worldbank.org/ICPINT/Resources/Ch11.doc. Accessed 17 June 2009.
Hotelling, H. (1938) The general welfare in relation to problems of taxation and of railway and utility
rates. Econometrica 6, 242–269.
Inklaar, R. and M.P. Timmer (2009) Productivity convergence across industries and countries: The
importance of theory-based measurement. Macroeconomic Dynamics 13, Supplement 2, 218–240.
Jorgenson, D.W. and Z. Griliches (1967) The explanation of productivity change. Review of Economic
Studies 34, 249–283.
Khamis, S.H. (1972) A new system of index numbers for national and international purposes. Journal
of the Royal Statistical Society Series A 135, 96–121.
Kohli, U. (1990) Growth accounting in the open economy: Parametric and nonparametric estimates.
Journal of Economic and Social Measurement 16, 125–136.
Konüs, A.A. (1939) The problem of the true index of the cost of living. Econometrica 7, 10–29.
Marshall, A. (1890) Principles of Economics. London: Macmillan Co.
Montgomery, J.K. (1937) The Mathematical Problem of the Price Index. Orchard House, Westminster,
UK: P.S. King & Son.
Neary, J.P. (2004) Rationalizing the Penn World Tables: True multilateral indices for international
comparisons of real income. American Economic Review 94, 1411–1428.
Samuelson, P.A. (1974) Complementarity—An essay on the 40th anniversary of the Hicks–Allen
revolution in demand theory. Journal of Economic Literature 12, 1255–1289.
Solow, R.M. (1957) Technical change and the aggregate production function. Review of Economics
and Statistics 39, 312–320.
Townsend, R.M. and S.S. Urzua (2009) Measuring the impact of financial intermediation: Linking
contract theory to econometric policy evaluation. Macroeconomic Dynamics 13, Supplement 2,
268–316.
Varian, H. (1982) The nonparametric approach to demand analysis. Econometrica 50, 945–973.
Varian, H. (1983) Non-parametric tests of consumer behavior. Review of Economic Studies 50, 99–110.
Weitzman, M.L. (1988) Consumer’s surplus as an exact approximation when prices are appropriately
deflated. Quarterly Journal of Economics 102, 543–553.
World Bank (2008) 2005 International Comparison Program: Tables of Final Results, Prelimi-
nary draft. Washington, DC: World Bank. Available at http://siteresources.worldbank.org/ICPINT/
Resources/ICP final-results.pdf. Accessed 17 June 2009.
Macroeconomic Dynamics, 17, 2013, 1193–1197. Printed in the United States of America.
doi:10.1017/S136510051200082X

INTRODUCTION TO THE
MACROECONOMIC DYNAMICS
SPECIAL ISSUE ON INEQUALITY,
PUBLIC INSURANCE, AND
MONETARY POLICY

PERE GOMIS-PORQUERAS
Monash University
BENOÎT JULIEN
University of New South Wales

We present a set of research papers and briefly describe the individual contributions to the Macroeconomic Dynamics special issue on inequality, public insurance, and monetary policy.
Keywords: Inequality, Public Insurance, Monetary and Fiscal Policy, Unemployment,
Political Economy

This special issue of Macroeconomic Dynamics contains a subset of the papers presented at a Workshop on Macroeconomic Dynamics held in Sydney, Australia, in 2010. The Workshop hosted a set of papers with varying focus but centered on three main themes: inequality, public insurance, and monetary policy. It is widely acknowledged and well documented that inequality is a pervasive feature of the world's economies. Some of the primary causes contributing to the creation and persistence of inequality include fiscal policy, government programs, saving rates, credit constraints, monetary policy, inflation, and even international trade.
For this issue, we have assembled two sets of papers. The first set directly tackles the link between inequality and factors such as public insurance (including health insurance), means-tested pensions financed by fiscal instruments, and cross-country inequality exacerbated by openness to international trade. These papers are built on dynamic overlapping-generations frameworks, and some also generate quantitative results, with the exception of the paper on international trade, which builds a dynamic Heckscher–Ohlin model. The second set of papers tackles monetary policy directly, focusing on its relation to unemployment and to the welfare cost of inflation.

Address correspondence to: Benoı̂t Julien, School of Economics, Australian School of Business, University of New
South Wales, Sydney, New South Wales 2052, Australia; e-mail: benoit.julien@unsw.edu.au.


© 2013 Cambridge University Press 1365-1005/13 1193
Although the presence of market frictions is common in these papers, they focus on different consequences of monetary policy: deviations from the Taylor Rule in a dynamic stochastic general equilibrium (DSGE) new Keynesian framework with unemployment and, within a monetary search framework, the effects of interactions between monetary and fiscal policy on unemployment and on the welfare cost of inflation when the quality of the goods traded is flexible. We now provide a brief summary of the individual contributions.
“Income Inequality, Mobility, and the Welfare State: A Political Economy Model,” by Luca Bossi and Gulcin Gumus, looks at the implications of income inequality and mobility for the demand for redistribution and social insurance in a political economy overlapping-generations framework. Old age pensions and transfer
payments to the working-aged are at the core of the welfare state. Various pol-
icy instruments in the welfare state can be distinguished by the extent to which
they provide redistribution and social insurance. Although some programs target
inequalities, others focus on income variations over the life cycle. In their paper,
Bossi and Gumus analyze two simultaneous programs, transfer payments for
the working-aged and old age social security, using a multidimensional-voting
political economy model. Simultaneity of the programs allows them to capture the
dynamics of how political support for redistribution and social insurance depends
on the groups to which benefits are targeted. In their model, income is endogenous
via labor supply, allowing for distortionary effects of taxation. Finally, they study
the effects of income mobility (via income shocks) on the level of redistributive
taxation. They show that the welfare state plays a significant role in equalizing
incomes across groups and over lifetimes, but income inequality and mobility also
have crucial implications for redistributive policy.
“Means-tested Age Pension and Homeownership: Is There a Link?,” by Sang-
Wook (Stanley) Cho and Renuka Sane, investigates inequality and welfare pro-
grams by studying the relationship between the Australian age pension scheme
and homeownership. The scheme currently grants an uncapped exemption for owner-occupied housing wealth in its assets test. Cho and Sane formulate a
general equilibrium overlapping-generations model with tenure choice, life-cycle
attributes, housing choice, and borrowing constraints. The model is calibrated
to match the key aspect of the data for the Australian economy and matches
the profiles of wealth and homeownership, along with wealth inequality. They
investigate the implications of abolishing or changing the exemption of owner-
occupied housing in the assets test. Removing the exemption is shown to increase
aggregate output, capital accumulation, and welfare, but reduces housing invest-
ment and homeownership. These distortions, however, imply that a policy such as lowering taxes while maintaining a fiscal balance leads to a large welfare loss for wealthy households while benefiting others.
“The Provision of Public Universal Health Insurance: Impacts on Private In-
surance, Asset Holdings, and Welfare,” by Minchung Hsu and Junsang Lee,
continues on the themes of inequality, redistribution policies, and programs to
study the impacts of public health insurance provision. Government-sponsored
mandatory universal health insurance is in effect in many OECD and middle-income countries, and is widely under consideration by countries moving in this direction.
They build an overlapping-generations framework, adding stochastic components
and public health insurance, with financial market incompleteness and endogenous
demand for private health insurance. Their quantitative analysis demonstrates clear
crowding-out effects on asset holding and private health insurance purchases. The
asset holdings reduction occurs through reduced precautionary savings, whereas private health insurance purchases decline as private coverage becomes complementary to the public scheme. If universal
health insurance (UHI) is financed by a distortionary payroll tax, the model gen-
erates a redistribution effect on wealth and welfare. The effect on wealth is not
clear, and it may worsen inequality, as the UHI introduction crowds out a greater
proportion of assets among low-wealth than among high-wealth households. The
effect on welfare is clear, with the old generations gaining more than the young, and low-wealth households gaining more than high-wealth households. The payroll tax effects
are compared with a nondistortionary lump-sum tax, allowing for identification
of the distortions brought about by the payroll tax. Hsu and Lee also study the
welfare implication of UHI policies with various expenditure coverage rates. Their
findings suggest that the actual rates in most OECD countries might be too high,
with distortions outweighing welfare gains. Finally, they incorporate a Medicare
program and private health insurance for the elderly, and calibrate the model to the
U.S. economy. Their findings are of particular interest given that the United States
has been considering a UHI program.
“Poverty Trap and Inferior Goods in a Dynamic Heckscher–Ohlin Model,” by
Eric W. Bond, Kazumichi Iwasa, and Kazuo Nishimura, looks at the link between
the poverty trap and inferior goods in a dynamic Heckscher–Ohlin model. They show
that if a labor-intensive good is inferior, multiple steady states can exist in autarky,
and a poverty trap can emerge. The results are driven by conditions on technologies
and labor endowment, assuming that for low incomes, the labor-intensive good is
a necessity, but that it is inferior for high incomes. This poverty trap is novel, given
that it arises in a model with complete markets, convex preferences and technology,
and a constant discount factor. They subsequently allow for trade between low- and
high-capital-stock countries. For a given range of capital endowments, the poorer
country can be pulled out of the autarkic poverty trap by the richer country.
The richer country also ends up with higher steady state capital stock and utility
levels with free trade than under autarky. For other ranges of endowments, both
countries can end up in a steady state with lower capital stock under free trade
than under autarky. The country not initially in a poverty trap before trade can
be pulled down into a poverty trap if trade occurs with a poorer country. These
possibilities are in contrast to the results of a dynamic Heckscher–Ohlin model
with normal goods, in which the country with higher (lower) capital stock reaches
a steady state with higher (lower) welfare level. The results suggest that if the
presence of inferior goods is important, trade can exacerbate inequality across
countries.
The second general theme of this special issue is monetary policy.
“Monetary Policy, Inflation and Unemployment: In Defense of the Federal
Reserve,” by Nicolas Groshenny, investigates whether deviations from the Taylor
Rule [Taylor (1993)] between 2002 and 2006 by the U.S. Federal Reserve helped
to promote the dual mandate of price stability and maximum sustainable employ-
ment. He performs a counterfactual experiment with monetary policy following a
Taylor Rule for the period 2002:Q1–2006:Q4, using Bayesian estimation
of a New Keynesian DSGE model with unemployment on U.S. data. The model
combines the nominal rigidities of new Keynesian models with equilibrium un-
employment generated by search and matching frictions à la Diamond (1982) and
Mortensen and Pissarides (1994). The structural estimates are used to infer shocks
that hit the economy over 2002–2009. Treating the deviations from the estimated
rule as exogenous monetary policy shocks, the findings suggest that these devia-
tions contributed significantly to macroeconomic stability over the period. In par-
ticular, strict adherence to the Taylor Rule would have generated a sizeable increase
in unemployment and undesirable inflation rates. The results provide some quanti-
tative evidence in support of the expansionary stance of monetary policy in the first
half of the 2000s, consistent with the Federal Reserve's dual mandate.
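For reference, the benchmark at issue is Taylor's (1993) original rule (shown here in its textbook form; the paper works with an estimated variant):

    i_t = \pi_t + 0.5\, x_t + 0.5\,(\pi_t - 2) + 2,

where i_t is the federal funds rate, \pi_t is inflation over the previous four quarters, x_t is the output gap, and the constants embody a 2% inflation target and a 2% equilibrium real rate.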
“Optimal Monetary and Fiscal Policies in a Search-Theoretic Model of Money
and Unemployment,” by Pere Gomis-Porqueras, Benoit Julien, and Chengsi Wang,
investigates monetary policy, but within a search-theory-based model of money,
now commonly referred to as “new monetarism” [see Williamson and Wright
(2011)]. As in Groshenny’s model, unemployment arises because of search and
matching frictions à la Diamond–Mortensen–Pissarides. But money is essential for
trade in the frictional decentralized goods market, and there are no nominal rigidi-
ties. Their model builds on Berentsen, Menzio, and Wright (2011) by introducing
fiscal policy instruments. They investigate whether inefficient outcomes generated
by the underlying frictions in the labor and goods markets can be corrected with
fiscal instruments. In this model, apart from the standard intertemporal distortion in
monetary models, the other distortions stem from the use of generalized Nash
bargaining to share the surplus from matches in the labor and goods markets.
The authors propose fiscal instruments and monetary policy to restore efficiency
of monetary equilibrium, even when the Hosios (1990) and Friedman (1959) rules
do not hold. In particular, with lump-sum money transfers, a production subsidy
financed by money printing can yield higher output of the goods exchanged in
the frictional market. Furthermore, a vacancy subsidy financed by a dividend tax
can restore efficiency even when the Hosios rule does not hold. Multiple combi-
nations of such subsidies and inflation rates exist that deliver the efficient allocation. The Friedman
rule is only one of the possible policy options, regardless of buyers’ bargaining
power. In other words, for any buyers’ bargaining power, one can find a fiscal
policy to restore efficiency whether the Friedman rule holds or not.
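For readers less familiar with these two benchmarks, compact statements (in generic search-model notation, not the paper's own) are

    Hosios (1990):   \beta = \partial \ln m(u, v) / \partial \ln u,
    Friedman (1959): i = 0, so by the Fisher relation 1 + i = (1 + r)(1 + \pi), deflation at roughly the real rate,

where \beta is the searchers' bargaining share and m(u, v) is the matching function. The Hosios condition aligns private surplus shares with the matching externalities, and the Friedman rule eliminates the intertemporal distortion from holding money.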
“Inflation and Endogenous Quality Dispersion,” by Richard Dutu, continues
on monetary policy using a monetary search model. In contrast to the previous
monetary policy papers in this issue, there is no labor market, and matching in
the decentralized frictional goods market is directed as in standard price posting
with directed search or competitive search [see Peters (1984), Moen (1997)]. It
is well known that directed search improves efficiency relative to random search
and the Hosios rule holds endogenously. The directed search paradigm maintains
the assumption that sellers can commit to the posted terms of trade. This is
a particularly strong assumption when the directed search environment allows
multilateral matching (matches with many buyers per seller) ex post. This creates local
demand conditions that can be exploited by sellers to extract a bigger surplus from
buyers. Previous search monetary models with price posting and directed search
follow the seller’s commitment assumption on both price and quality levels posted
[see Rocheteau and Wright (2005)]. Dutu questions the extent of commitment to
terms of trade in the decentralized market by allowing sellers to post and commit
to prices, but to allow quality to vary and be determined ex post by local market
conditions. In particular, crowded local demand yields lower quality exchanged
in trade at the posted price. Comparing this economy with the one under which
sellers can commit to both price and quality ex ante, he shows that, with free entry
on the buyers' side, sellers and buyers are better off under the latter. The model
is evaluated with U.S. economic data and shows that sellers’ ability to commit to
both terms of trade reduces the welfare cost of inflation relative to an economy
with price commitment only and quality varying according to ex post conditions.
This result can give rise to a normative justification for strong commitment to
terms of trade within an economy.
We believe that the papers in this special issue advance our knowledge on how
inequality may relate to different factors and causes, and on the effects of these
factors on welfare. We hope that this issue can promote interest in further research
on these issues.

REFERENCES
Berentsen, A., G. Menzio, and R. Wright (2011) Inflation and unemployment in the long run. American
Economic Review 101, 371–398.
Diamond, P. (1982) Aggregate demand management in search equilibrium. Journal of Political Econ-
omy 90, 881–894.
Friedman, M. (1959) A Program for Monetary Stability. New York: Fordham University Press.
Hosios, A.J. (1990) On the efficiency of matching and related models of search unemployment. Review
of Economic Studies 57, 279–298.
Moen, E.R. (1997) Competitive search equilibrium. Journal of Political Economy 105(2), 385–411.
Mortensen, D. and C. Pissarides (1994) Job creation and job destruction in the theory of unemployment.
Review of Economic Studies 61, 397–415.
Peters, M. (1984) Equilibrium with capacity constraints and restricted mobility. Econometrica 52,
1117–1129.
Rocheteau, G. and R. Wright (2005) Money in search equilibrium, in competitive equilibrium, and in
competitive search equilibrium. Econometrica 73, 175–202.
Taylor, J.B. (1993) Discretion versus policy rules in practice. Carnegie-Rochester Conference Series
on Public Policy 39, 195–214.
Williamson, S. and R. Wright (2011) New monetarist economics: Models. In B. Friedman and M.
Woodford (eds.), Handbook of Monetary Economics, Vol. 3A. Amsterdam: Elsevier.
Macroeconomic Dynamics, 20, 2016, 1953–1956. Printed in the United States of America.
doi:10.1017/S136510051500019X

ARTICLES
INTRODUCTION TO THE
MACROECONOMIC DYNAMICS
SPECIAL ISSUE ON TECHNOLOGY
ASPECTS IN THE PROCESS OF
DEVELOPMENT

THÉOPHILE T. AZOMAHOU
United Nations University (UNU-MERIT),
Maastricht University,
University of Auvergne,
and
CERDI
RAOUF BOUCEKKINE
Aix-Marseille School of Economics, Aix-Marseille Université
and
GREQAM
PIERRE MOHNEN AND BART VERSPAGEN
United Nations University (UNU-MERIT)
and
Maastricht University

We present a set of theoretical and empirical papers and briefly describe the specific
contributions to the Macroeconomic Dynamics special issue on technology aspects in the
process of development.

Keywords: Technology, Social and Human Capital, Environment and Natural Resources,
Public Investment, Innovation and Productivity, Structural Changes

The sources of economic growth have been a recurrent topic of research in eco-
nomics since the writings of Adam Smith. As forcefully shown by Solow (1956,
1957), and despite Alwyn Young’s (1992) claim to the contrary, input growth is not

We thank all the anonymous referees for their valuable contribution to this special issue. This work was supported by
the FERDI (Fondation pour les Etudes et Recherches sur le Développement International) and the Agence Nationale
de la Recherche of the French government through the program “Investissements d’avenir ANR-10-LABX-14-01.”
Address correspondence to: Théophile T. Azomahou, United Nations University (UNU-MERIT) and Department
of Economics, Maastricht University, Boschstraat 24, 6211 AX Maastricht, the Netherlands; e-mail: azomahou@
merit.unu.edu.
sufficient to explain output growth. Total factor productivity (TFP) growth, itself
largely driven by technological change, plays a sizeable role. This holds for the
developed as well as for the developing countries, but the individual components
of TFP may differ in the two sets of countries. However, besides technology,
there are also institutional, educational, and environmental aspects of growth and
development. There may be complementarities but also conflicts between the
different drivers of economic growth. For example, by exerting heavy pressure on
natural resources, technological progress may cause side effects such as pollution
and other environmental damage.
The economics literature seeks to evaluate theoretically and empirically the mul-
tiple aspects of technology in the growth and development process. This special
issue of Macroeconomic Dynamics is a contribution to this reflection. Technology
aspects covered in this issue are both theoretical and empirical, and include social
capital, human capital, environment and natural resources, new technologies, pub-
lic investment, innovation and productivity, and structural changes. Most of the
papers of this special issue were first presented at the 6th MEIDE (Micro Evidence
on Innovation and Development) conference organized by UNU-MERIT in Cape
Town in late 2012.
We start with three empirical papers. The first deals with the role of public
investment. Relying on a Cobb–Douglas technology, Fosu, Getachew, and Ziesemer
specify a nonlinear relationship between public investment and growth to
analyze the growth-maximizing levels of public investment for a panel of African
countries. The paper further runs welfare-maximization simulations. The results
presented contrast with previous findings. Indeed, the growth-maximizing level of
public investment is estimated at about 10% of GDP. Consistently, the simulated
optimal public investment share is between 8.1% and 9.6% of GDP, showing that
there has been significant public underinvestment in Africa.
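For intuition, a textbook benchmark (Barro's (1990) endogenous growth model, not the authors' specification) makes the order of magnitude transparent: with output y = A k^{1-\alpha} g^{\alpha}, where g is productive public spending financed by a flat income tax, the growth-maximizing public investment share is

    g / y = \alpha,

so an estimate of about 10% of GDP is consistent with an output elasticity of public capital of roughly 0.1.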
The role of technology in determining productivity differences between coun-
tries has been outlined by Prescott (1998), Hall and Jones (1999), Easterly and
Levine (2001), and Fagerberg et al. (2010). In their study, Bresson, Etienne, and
Mohnen evaluate the importance of technology, infrastructure, and institutions in
explaining differences in TFP among 82 countries over the period 1990–2008.
Relying on two kinds of common factors, those in the cross-sectional dimension
and those in the time-series dimension, the authors propose a frequentist and
a Bayesian approach to estimate a factor-augmented productivity equation. The
Bayesian estimator leads to a better fit than the frequentist model. The greatest por-
tion of the variation in TFP is explained by infrastructure, followed by technology
and finally institutions.
A last dimension of technology is its societal impact as measured by the struc-
tural changes it brings about. Bluhm, de Crombrugghe, and Szirmai test whether
stagnation can be predicted by institutional characteristics and external or internal
political shocks and whether the effects of the included determinants on the onset
of stagnation differ from their effects on the continuation of stagnation. Relying on
a panel of 127 developing and developed countries, the study shows that inflation,
negative regime changes, real exchange rate undervaluation, and financial and
trade openness have significant effects on both the onset and the continuation
of stagnation. Only in the case of trade openness is there a differential impact.
Moreover, trade openness reduces the probability of falling into a stagnation spell,
but does not improve the chances of exiting a spell.
The human capital aspect of technology has been studied more specifically in
the papers by Brezis and Dos Santos Ferreira and Diene, Diene, and Azomahou.
Brezis and Dos Santos Ferreira extend the Beckerian model of endogenous fertility
to take into account net upstream intergenerational income transfers from chil-
dren to parents in developing countries. The authors show that adding a negative
sibship size effect is a sufficient condition to restore the quantity–quality Beckerian
paradigm. Moreover, allowing the intensity of the effect to increase with sibship
size favors the emergence of multiple equilibria with contrasting regimes of child
labor, high fertility, low income, and transfers from children to parents vs. child
schooling, low fertility, high income, and transfers from parents to children. The
technology considered in this paper is embedded in parents’ decisions, with a
constant wage per efficiency unit that results from output production by competitive
firms endowed with a linear technology.
Whereas the previous study considers deterministic environments, Diene, Di-
ene, and Azomahou develop a framework to explain the role of uncertainty in
human capital formation, in particular in finding optimal policy interventions that
may improve educational outcomes. The authors posit a stochastic setting where
the technology is incorporated into the school production function as well as into
household decisions. Policy interventions are then linked to global performance
in education. The study characterizes optimal policies and conditions of social
welfare enhancement. Furthermore, the authors study the optimal growth of the
economy under uncertainty and population heterogeneity when human capital is
produced and used in the education sector. They show that the growth rate of the
unskilled population has a direct impact on the growth of human and physical
capital.
The most controversial, even philosophical, debates about technology focus
on its social value. Applications of technology reshape society's landscape, and
new technologies often raise ethical questions. These questions concern, for
example, the notion of efficiency in terms of human productivity, a term originally
applied to machines, and the challenge posed to traditional norms. The paper by Bofota,
Boucekkine, and Bala discusses this topic. The authors introduce social capital into
a growth model à la Lucas (1988) with sector-specific technologies. They consider
human capital as a channel through which social capital affects development. The
assumptions made about the cost of social capital make it possible to capture the
fact that the maintenance of social networks can be costly in terms of resources
that could be allocated to consumption or the accumulation of physical capital. It is
primarily shown that in contrast to existing alternative specifications, the authors’
specifications ensure that social capital enhances productivity gains by playing the
role of a timing belt driving the transmission and propagation of all productivity
shocks throughout society, whatever the sectoral origin of the shocks. However,
short-term dynamics and imbalance effect properties of the model depend heavily
on the elasticity of human capital with respect to social capital. In particular, the authors show
that when the substitutability of social capital for human capital increases, the econ-
omy is better equipped to surmount initial imbalances, as individuals may allocate
more working time to the final goods sector without impeding economic growth.
Chan and Laffargue develop a political economy stochastic growth model to
explain the main stylized facts of imperial China’s dynastic cycle, in particular
the evolution of taxation, public spending, and corruption over the cycle and their
effects on production and income distribution. Investment in public capital by the
emperor enhances the productivity of farmers. The authors clearly highlight an
impulse mechanism of the dynastic cycle, which is driven by random shocks to the
authority of the emperor and his central administration that change the efficiency
of the institutional capital.
Van Zon and Mupela illustrate the benefits of regional connectivity and spe-
cialization to growth. The authors show how welfare increases as the number of
connected regions increases. The results point to reductions in transportation and
communication costs in particular as a suitable vehicle for speeding up growth.
There is a strong positive effect of reductions in the cost of making new con-
nections, which in turn impact both the steady state growth rate and transitional
growth, while significantly reducing the transition period.
Nguyen and Nguyen Van study the environmental dimension of technology.
The authors develop an optimal growth model with two kinds of natural resources
entering the final sector, which employs labor to accumulate knowledge. The technology used
exhibits increasing returns to scale, and the two types of resources (renewable and
nonrenewable) are imperfect substitutes. A direct proof of existence of the optimal
solution is provided. In addition, the dynamics of transition to the steady state are
used to derive empirically testable convergence relationships.
We thank all anonymous referees for their valuable comments.

REFERENCES
Easterly, W. and R. Levine (2001) It’s not factor accumulation: Stylized facts and growth models.
World Bank Economic Review 15, 177–219.
Fagerberg, J., M. Srholec, and B. Verspagen (2010) Innovation and economic development. In B.
Hall and N. Rosenberg (eds.), Handbook of the Economics of Innovation, Vol. 2, Chap. 20. Amsterdam: North-Holland.
Hall, R. and C. Jones (1999) Why do some countries produce so much more output per worker than
others? Quarterly Journal of Economics 114, 83–116.
Lucas, R.E. (1988) On the mechanics of economic development. Journal of Monetary Economics 22,
3–42.
Prescott, E. (1998) Needed: A theory of total factor productivity. International Economic Review 39,
525–551.
Solow, R.M. (1956) A contribution to the theory of economic growth. Quarterly Journal of Economics
70, 65–94.
Solow, R.M. (1957) Technical change and the aggregate production function. Review of Economics
and Statistics 39, 312–320.
Young, A. (1992) A tale of two cities: Factor accumulation and technical change in Hong Kong
and Singapore. In O. Blanchard and S. Fischer (eds.), NBER Macroeconomics Annual, pp. 13–64.
Cambridge, MA: MIT Press.
Macroeconomic Dynamics, 19, 2015, 1167–1170. Printed in the United States of America.
doi:10.1017/S1365100514000017

ARTICLES
INTRODUCTION TO THE SPECIAL
ISSUE ON GROWTH, OPTIMAL
FISCAL AND MONETARY POLICY,
AND FINANCIAL FRICTIONS
GEORGIOS P. KOURETAS
Athens University of Economics and Business
ATHANASIOS P. PAPADOPOULOS
University of Crete

Since 1997, the Department of Economics of the University of Crete has organized
an annual international conference on macroeconomic analysis and international
finance. The articles included in this special issue are refereed versions of papers
presented at the 17th International Conference on Macroeconomic Analysis and
International Finance held at the University Campus, Rethymno, 30 May–1 June
2013, and submitted to Macroeconomic Dynamics in an open call for papers.
The central theme of this Special Issue is Growth, Optimal Fiscal and Monetary
Policy, and Financial Frictions. The topics discussed in this issue are endogenous
growth and public investment and taxation; optimal inflation and fiscal and
monetary policy; foreign reserve accumulation and China’s exchange rate policy;
and liquidity shocks and financial frictions. We begin the Special Issue with an
overview of these papers.
In “Policy Games, Distributional Conflicts, and Optimal Inflation,” Alice
Albonico and Lorenza Rossi show that limited asset market participation (LAMP)
generates an extra inflation bias when the fiscal and the monetary authority play
strategically. A fully redistributive fiscal policy eliminates the extra inflation bias,
but at the cost of reducing Ricardian welfare. A fiscal authority that redistributes
income only partially reduces the inflation bias but increases government
spending. Although a fully conservative monetary policy is necessary to get price
stability, it implies a reduction in liquidity-constrained consumers’ welfare, in
the absence of redistributive fiscal policies. Finally, under a crisis scenario, none
of the policy regimes is able to avoid the fall in economic activity when the
increase in the fraction of LAMP is coupled with a negative technology shock,

We wish to thank the discussants, referees and all participants at the Conference, whose comments have substantially
improved the papers presented in this issue. We are also grateful to the University of Crete, the Bank of Greece,
and the National Bank of Greece for their generous financial support. Last but not least, we would like to thank
Ioanna Yotopoulou and Maria Mouzouraki for their superb secretarial assistance, as well as Pericles Drakos and Kostis
Pigounakis for their technical support. Address correspondence to: Georgios P. Kouretas, Department of Business
Administration, Athens University of Economics and Business, GR-14304 Athens, Greece; e-mail: kouretas@aueb.gr.
whereas optimal policy can avoid recession when it responds to the increase in
LAMP proportion alone.
Constantine Angyridis, in his paper “Endogenous Growth with Public Capital
and Progressive Taxation,” considers an endogenous growth model with public
capital and heterogeneous agents. Heterogeneity is due to differences in discount
factors and inherent abilities, which allow the model to closely approximate the 2007 U.S.
income and wealth distributions. Furthermore, it is assumed that government
expenditures, including public investment, are financed through a progressive
income taxation scheme, along with a flat tax on consumption. The analysis
considers three revenue-neutral fiscal policy reforms: (i) an increase in the degree
of progressivity of the tax schedule that reduces the after-tax income distribution
Gini coefficient to its lowest value over the 1979–2009 period, (ii) a reduction in
the progressivity ratio that causes the Gini coefficient of the wealth distribution
to get close to 1, and (iii) an increase in the fraction of output allocated to public
investment that has the same positive impact on the growth rate as reform (ii).
The main findings of the paper show that increasing investment in public capital
is the only type of policy that simultaneously enhances growth and reduces both
types of inequality (income and wealth). Moreover, the analysis concludes that the
public-investment-to-output ratio that maximizes social welfare crucially depends
on the elasticity of the labor supply.
Gong Cheng, in “A Growth Perspective on Foreign Reserve Accumulation,” an-
alyzes the relationship between productivity growth, financial underdevelopment,
and foreign reserve accumulation in emerging market economies. The analysis
is conducted based on a dynamic open-economy macroeconomic model. The
demand for foreign reserves is derived from the interaction between positive pro-
ductivity shocks, borrowing constraints, and the lack of domestic financial assets.
Foreign reserve accumulation can thus be regarded as part of a catching-up strategy
in an economy with an underdeveloped financial market. In fact, if domestic firms
are credit-constrained, domestic saving instruments are necessary to increase their
retained earnings in order to invest in capital. The central bank plays the role of a
financial intermediary and provides domestic firms with liquid public bonds while
investing the bond proceeds abroad in the form of foreign reserves. It is also shown
that during the economic transition, the social welfare in an economy where the
central bank accumulates foreign reserves and imposes capital controls is higher
than in a financially liberalized economy. By controlling private capital flows, the
central bank can not only provide sufficient domestic liquid assets by investing
abroad in foreign reserves, but also adjust the domestic interest rate to cope with
positive productivity shocks.
In “Firm Dynamics, Endogenous Markups, and the Labor Share,” Andrea Col-
ciago and Lorenza Rossi argue that recent U.S. evidence, which suggests that the
response of the labor share to a productivity shock is characterized by countercycli-
cality and overshooting, cannot be reconciled with existing business cycle models.
The authors extend the Diamond–Mortensen–Pissarides model of search in the
labor market by considering strategic interactions among an endogenous number
of producers, which leads to countercyclical price markups. Moreover, the analysis
argues that although Nash bargaining delivers a countercyclical labor share,
countercyclical markups are fundamental to addressing the overshooting. Finally, they
show that real wage rigidity does not seem to play a crucial role in the dynamics
of the labor share of income.
Harris Dellas, Behzad Diba, and Olivier Loisel, in “Liquidity Shocks, Equity-
Market Frictions, and Optimal Policy,” study the positive and normative impli-
cations of financial shocks in a standard New Keynesian model that includes
banks and frictions in the market for bank capital. They show how such frictions
materially influence the effects of bank liquidity shocks and the properties of
optimal policy. In particular, they limit the scope for countercyclical monetary policy
in the face of these shocks. A fiscal policy instrument can complement monetary
policy by offsetting the balance-sheet effects of these shocks, and jointly optimal
policies attain the same equilibrium that monetary policy (alone) could attain in
the absence of equity-market frictions.
In “U.S. Trend Inflation Reinterpreted: The Role of Fiscal Policies and Time-
Varying Nominal Rigidities,” Giovanni di Bartolomeo, Patrizio Tirelli, and Nicola
Acocella provide a reinterpretation of the Fed’s time-varying implicit inflation
target, based on two considerations. The first is that the need to alleviate the
burden of distortionary taxation may justify the choice of a positive inflation rate.
The second is based on compelling evidence that the degree of price and wage
indexation falls with trend inflation. The analysis leads to the conclusion that
a proper characterization of the joint evolution of fiscal variables and nominal
rigidities has a strong impact on the Ramsey optimal policies, implying optimal
inflation dynamics that are consistent with the observed evolution of U.S. trend
inflation. In contrast, tax policies have been too lax, especially at the time of the
controversial Bush tax cuts.
Francesca D’Auria, in her paper “The Effects of Fiscal Shocks in a New Key-
nesian Model with Useful Government Spending,” develops an extension of the
standard dynamic general equilibrium model with price and wage rigidities in
order to account for recent evidence on the response of macroeconomic variables
to fiscal shocks. The model is augmented with two features: consumer preferences
depend on government expenditures and public capital is productivity-enhancing.
The analysis is based on a set of simulations considering alternative monetary policy
rules and plausible assumptions on the degree of complementarity between private
and public expenditures and on the output elasticity of public spending. The main
finding of the analysis is that the effects of fiscal shocks predicted by the model
are in line with the empirical evidence.
In “Fiscal Multipliers in a Monetary Union under the Zero-Lower-Bound
Constraint,” Stefanie Flotho analyzes government spending multipliers in a two-
country model of a monetary union with price stickiness and home bias in con-
sumption where monetary policy is constrained by the zero lower bound (ZLB) on
the nominal interest rate. The analysis deals with the computation of government
spending multipliers under this constraint and their comparison to fiscal multipliers
in normal times; that is, the central bank sets the nominal interest rate via a Taylor
rule. Furthermore, the trade elasticity and the parameter measuring home bias
in consumption play an important role in determining the size of the multiplier.
The author argues that the multipliers are not necessarily large under the ZLB
constraint. However, compared with the fiscal multipliers when the central bank
sets the nominal interest rate according to a Taylor rule, the multipliers under the
ZLB are bigger. Moreover, the persistence parameter of the binding ZLB plays a
crucial role.
Cecilia Garcı́a-Peñalosa and Stephen J. Turnovsky, in “Income Inequality, Mo-
bility, and the Accumulation of Capital,” examine the determinants of income
mobility and inequality in a Ramsey model with elastic labor supply and hetero-
geneous wealth and ability (labor endowment). Both agents with lower wealth
and agents with greater ability tend to supply more labor, implying that labor
supply decisions may have an equalizing or unequalizing effect, depending on the
relative importance of the two sources of heterogeneity. Moreover, these decisions
are central to the extent of mobility observed in an economy. The relationship
between mobility and inequality is complex. For example, a reduction in the
interest rate and an increase in the wage rate reduce capital income inequality and
allow upward mobility of the ability-rich. However, the increase in the labor supply
of high-ability agents in response to higher wages raises earnings dispersion and
thus has an offsetting effect. As a result, high mobility can be associated with an
increase or a decrease in overall income inequality.
In the final paper of this Special Issue, “Green Spending Reforms, Growth,
and Welfare with Endogenous Subjective Discounting,” Eugenia Vella, Evangelos
Dioikitopoulos, and Sarantis Kalyvitis study the effectiveness of optimal fiscal
policy, in the form of taxation and the allocation of tax revenues between infras-
tructure and environmental investment, in a general-equilibrium growth model
with endogenous subjective discounting. A green spending reform, defined as a
reallocation of government expenditures towards the environment, can procure
a double dividend by raising growth and improving environmental conditions,
although the environment does not impact the production technology. Also, en-
dogenous Ramsey fiscal policy eliminates the possibility of an “environmental
and economic poverty trap.” Contrary to the case of exogenous discounting, green
spending reforms are the optimal response of the Ramsey government to a rise in
the agents’ environmental concerns.
Macroeconomic Dynamics, 16 (Supplement 2), 2012, 167–175. Printed in the United States of America.
doi:10.1017/S136510051100071X

ARTICLES
INTRODUCTION TO
TIME-VARYING MODELING
WITH MACROECONOMIC AND
FINANCIAL DATA
FREDJ JAWADI
Université d’Evry Val d’Essonne
and
Amiens School of Management

1. INTRODUCTION
The dynamics of macroeconomic and financial series have evolved swiftly and
asymmetrically since the end of the 1970s, and their statistical properties have
also changed over time, suggesting complex relationships between economic and
financial variables. The transformations can be explained by considerable changes
in households' behavior, market structures, and economic systems and by the al-
ternation of exogenous shocks and financial crises that have affected the economic
cycle, with significant evidence of time variation in the major economic variables.
Hence, there is a need for new econometric protocols to take such changes into
consideration. The introduction of ARMA (autoregressive moving average) models
by Box and Jenkins (1970) led to the development of time-series econometrics,
which had a major impact on the conceptual analysis of economic and financial
data. This type of modeling offered a transition from a static setup to a new
modeling process that reproduces the time-varying features of macroeconomic
and financial series. However, the ARMA framework assumes constancy
of the first and second moments, limits the phases of a cycle to symmetrical
instances, and only reproduces the dynamics of stationary variables. It thus fails
to adequately reproduce the nonstationary relationships between major economic
and financial variables. Abrupt changes in economies and financial systems have
given evidence of nonstationary series whose statistical properties are also time-
varying, making it necessary to develop new econometric tools to capture the time
variation of economic and financial series in the mean and in the variance, and to
apprehend their dynamics in the short and long term. Among the most important
and influential studies in the 1980s’ econometrics literature were therefore those
that dealt with the introduction of the ARCH (autoregressive conditional het-
eroskedasticity) model by Engle (1982) and the cointegration theory by Engle and

Address correspondence to: Fredj Jawadi, Université d’Evry Val d’Essonne, Bâtiment La Poste, Bureau 226, 2 rue
Facteur Cheval, 91025 Evry, France; e-mail: fredj.jawadi@univ-evry.fr.
Granger (1987). The ARCH model, which focuses on the time-varying features
of volatility structure, was a major breakthrough, as it highlighted the importance
of the second moment of time series, while the cointegration framework enabled
the short- and long-term dynamics of nonstationary variables to be modeled.
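In compact form (standard textbook notation, not tied to any particular paper in this issue), the three building blocks just discussed are

    ARMA(p, q):  y_t = c + \sum_{i=1}^{p} \phi_i y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j},

    ARCH(q):     \varepsilon_t = \sigma_t z_t,  \quad \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2,  \quad z_t \sim \text{i.i.d.}(0, 1),

    ECM:         \Delta y_t = \mu + \gamma (y_{t-1} - \beta x_{t-1}) + \text{short-run lags} + u_t,

where, in the error-correction model (ECM), the term y_{t-1} - \beta x_{t-1} is stationary when y_t and x_t are cointegrated and \gamma < 0 measures the speed of adjustment toward the long-run relationship.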
Recently, however, increased use of quantitative economic and financial data,
improvements in empirical techniques, and the development of databases have
shown that these models are not always able to apprehend all the stylized facts
in economic and financial time series satisfactorily. In particular, we noted that
time variation in economic and financial series has become increasingly significant,
and the relationships appear somewhat more persistent, abnormal, asymmetrical,
and nonlinear. These properties may escape the linear cointegration framework.
Accordingly, several nonlinear models have been developed to capture persistence,
irregularity, and nonlinearity in economic and financial data.
This Introduction aims to briefly explain the time variation and irregularity
characterizing key economic and financial data and to discuss the failure of linear
modeling to capture the behavior of macroeconomic and financial variables. We
also present and discuss several empirical studies that model the statistical prop-
erties of the economic and financial series under consideration in different ways,
using diverse econometric tools. The analysis of their findings indicates significant
contributions and results.
The introduction is organized as follows. First, Section 2 briefly discusses the
importance of time variation and asymmetry in the data. In Section 3, we present
the limitations of linear models for reproducing this asymmetry. Section 4 then
provides a brief overview of time-varying models. Finally, we present the different
contributions of this special issue.

2. TIME VARIATION AND ASYMMETRY IN MACROECONOMIC


AND FINANCIAL DATA
The analysis of financial and macroeconomic data indicates that their dynamics
evolve significantly over time. This time variation appears to be due to internal
and external, microeconomic and macroeconomic, endogenous and exogenous
factors. Indeed, in recent decades changes in corporate strategies and consumer
behavior, evolution in market regulations and regimentation, increases in sup-
ply and demand, and new government regulations have had an impact on all
economies and have led to considerable changes in all economic systems. Further-
more, the international infrastructure has also experienced considerable changes,
with the recent and impressive growth in China and the emerging markets: the
move toward financial capitalism, the adoption of the floating exchange system,
the modernization of financial institutions, the development of financial inno-
vation and a number of sophisticated new financial instruments and products,
the increase in financial and economic integration, the use of new information
and communication technologies, and the rapid evolution of international capital
markets, financial globalization, and the rise in capital mobility. Such changes are
not without consequences, as they generate complex dynamics and asymmetrical
relationships between economic and financial variables, which exhibit different
types of asymmetries and nonlinearities.
Granger et al. (2010) suggest that nonlinearity involves several economic vari-
ables and can be explained in different ways. When central banks set exchange rate
boundaries, they generate nonlinearity between the exchange rate and its funda-
mentals. In labor markets, the behavior of firms employing workers may generate
asymmetric and nonlinear fluctuations in employment. Additionally, when govern-
ments introduce new reforms and policies, this can lead to asymmetric variations
in supply and demand. The presence of transaction costs in capital markets is
an important source of nonlinearity. Overall, large fluctuations and variations in
economic and financial series may occur. Moreover, their dynamics tend to evolve
over time and their adjustment is likely to vary asymmetrically according to the
phase of the economic cycle (economic growth, collapse). Investors and firms are
in competition to make money quickly, for instance, and investors may often use
arbitrage techniques and implement risky positions to this end, leading to rapid
price increases and excessive volatility.
To model the dynamic evolution of economic and financial time series, previous
studies have used different tools and techniques. Overall, they come to two major
conclusions. First, linear modeling fails to reproduce such time-varying dynamics,
as it does not enable the first and second moments to evolve over time. Second,
the failure of linear modeling encouraged researchers to look for a suitable model
that allows variable dynamics to be apprehended. Nonlinear models thus began to
supplant the linear model to improve the modeling of the dynamics of economic
and financial data.

3. LIMITATIONS OF LINEAR MODELING


Modeling economic and financial series is a difficult task, particularly when their
properties display time-varying dynamics. When using linear models, econome-
tricians often assume that series are normally distributed with constant variance.
However, a simple visual comparison of major economic and financial series with
a series generated by the standard normal distribution reveals the limitations of
this hypothesis. In several
series, the variance is not constant, and a more general distribution would be
more suitable to capture this change over time. Moreover, analyses of stylized
facts in financial series often show dynamics that exhibit excess skewness and
kurtosis compared to the normal distribution. These properties cannot be repro-
duced by linear models, which fail to capture the time-varying features of macroeco-
nomic and financial series and whose use may therefore lead to estimation biases.
Thus, linear modeling displays a number of limitations. First, ARMA mod-
els do not enable the asymmetry characterizing various economic and financial
series to be reproduced. Second, these models do not allow for time variance,
structural breaks, or switching regimes in the data. Third, ARMA modeling re-
quires the normality hypothesis. Finally, linear models exploit only the first and
second moments (mean and variance), which leaves part of the information in the
time series unexploited. These limitations can be overcome using nonlinear models,
which can improve the study of time-varying properties in economic and financial
data.
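To make the point concrete, the following minimal simulation sketch (purely illustrative and not drawn from any paper in this issue; parameter values are arbitrary) generates an ARCH(1) series and reports the excess kurtosis that a constant-variance Gaussian model would miss:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 20000
    omega, alpha = 0.2, 0.5          # ARCH(1) parameters (illustrative values)
    eps = np.empty(T)
    sigma2 = omega / (1.0 - alpha)   # start at the unconditional variance
    for t in range(T):
        eps[t] = rng.normal(scale=np.sqrt(sigma2))
        sigma2 = omega + alpha * eps[t] ** 2  # conditional variance moves with past shocks

    d = eps - eps.mean()
    excess_kurtosis = (d ** 4).mean() / ((d ** 2).mean() ** 2) - 3.0
    print(f"sample excess kurtosis: {excess_kurtosis:.2f}")  # about 6 in theory; 0 for Gaussian noise

Fitting a constant-variance ARMA model to such a series would treat the volatility clustering as noise, which is precisely the sense in which the second moment carries information that linear modeling leaves unexploited.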

4. TIME-VARYING MODELING
Several models have been developed in the literature to capture the time-varying
dynamics of economic and financial series. A number of these models have given
rise to nonlinear models, which appear useful for reproducing economic relation-
ships that evolve over time. The nonlinearity is in the mean equation and/or the
variance equation, thus identifying nonlinear models in the mean and nonlinear
models in the variance, respectively.1
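One widely used example of nonlinearity in the mean (a generic illustration, not the specification of any particular contribution below) is the smooth-transition autoregression

    y_t = \phi_1' x_t [1 - G(s_t; \gamma, c)] + \phi_2' x_t G(s_t; \gamma, c) + \varepsilon_t,  \quad G(s_t; \gamma, c) = [1 + e^{-\gamma (s_t - c)}]^{-1},

where x_t collects lags of y_t, s_t is an observable transition variable, c is a threshold, and \gamma governs how smoothly the dynamics switch between the two regimes; as \gamma \to \infty the model approaches an abrupt threshold specification.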
Interestingly, with nonlinear models, the parameters can vary over time, al-
lowing the dynamics of a series to evolve and to be different across the phases
of the cycle. However, as in Granger et al. (2010), when we try to consider a
nonlinear model to explain the variation in economic or financial time series, the
presence of several models poses a problem: which model should be retained
from the nonlinear models available? To select the best model, econometricians
should check which one provides the best fit to the data. To this end, they need
to build their model in different stages, checking its ability to apprehend the
statistical properties of the data. Indeed, in the econometrics literature related to
time-varying modeling, econometricians choose and build their models according
to the hypotheses under consideration and the results of linearity tests, as illus-
trated by the different contributions in this special issue. They then develop new
econometric tools and test their performance on real data.

5. PRESENTATION OF THE CONTRIBUTIONS


The papers in this special issue of Macroeconomic Dynamics were presented at the
first International Symposium in Computational Economics and Finance (ISCEF)
in Sousse, Tunisia, 25–27 February 2010.2 They were submitted for publication
in this issue and, after several rounds of double-blind review, were recommended
for publication. We will present the papers in the Introduction and briefly discuss
their main findings and contributions. As we will see, these papers have greatly
contributed to the current literature on time-varying modeling. They concern
different countries, focus on different data, use different modeling tools, and yield
strong evidence of time variation, persistence, asymmetry, and nonlinearity in
macroeconomic and financial data.
The first paper, entitled “Business Environment, Start-Ups, and Productivity
during Transition,” by Zuzana Brixiová (African Development Bank, Devel-
opment Research Department) and Balázs Égert (Organization for Economic
Cooperation and Development, Economics Department), focuses on the study of
business environments and productivity in Central European, Eastern European,
and Baltic countries during transition. The authors note a significant increase
in productivity, which they explain by the rapid reform and transformation of
their business environments. This implies significant variation of productivity in
these countries and a narrower productivity gap with advanced countries. The
most important contribution of this paper is the development of a dynamic search
model of company start-ups that reflects these trends. Their model shows that an
enabling business climate contributes significantly to highly productive business
start-ups at an earlier stage of transition, underscoring the importance of early
reforms. This topic has been studied in several previous papers, but only from an
empirical viewpoint. Thus, the present paper contributes to closing the research
gap by extending the dynamic model of Brixiova and Kiyotaki (1997) to capture the role
of the business climate. Their model helps to explain some of the diverging eco-
nomic outcomes observed between the economies in Central and Eastern Europe
and the Baltics (CEEB) and the Commonwealth of Independent States (CIS) by
showing how an enabling business climate can stimulate an earlier shift to highly
productive private firms and thus faster growth in the private sector, productivity,
and employment. The authors point out that the CEEB’s faster implementation
of market reforms and more enabling business climate have helped stimulate an
earlier structural shift to more productive private firms. Their model also illustrates
the way this has contributed to faster and time-varying labor productivity growth
and, consequently, to more rapid convergence with the income levels of advanced
economies, thus confirming the importance of undertaking structural reforms and
strengthening the business climate.
The second paper is by Michal Rubaszek (Economic Institute, National Bank
of Poland and Econometric Institute, Warsaw School of Economics) and concerns
“The Role of Two Interest Rates in the Intertemporal Current Account Model.”
The paper analyzes the role of the lending-deposit interest rate spread in the
dynamics of the current account in developing countries. The author extends
the standard perfect-foresight intertemporal model of the current account to allow
for an interest rate spread. Applying this modeling to panel data for 60
developing countries, the author points to significant relationships between the
current account and the interest rate spread. The paper contributes to the literature
on intertemporal current accounts in several ways. Whereas previous studies often
assumed that only one domestic interest rate existed for both deposits and loans,
this paper differentiates between the two interest rates. Because of the existence
of a spread between these two interest rates, it may be optimal for low-income,
converging economies to run balanced current accounts. Moreover, developing a
perfect-foresight general equilibrium model with two interest rates and running
several simulation exercises, the author shows that even a small change in the interest
rate spread can have a huge impact on the current account balance, especially if
the interest rate spread is close to zero. The existence of interest rate spread can
significantly change the implications of the standard intertemporal current account
and may also imply significant variation in the dynamics of other macroeconomic
variables.
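The intuition can be sketched with a standard consumption Euler condition (a stylized illustration under simplifying assumptions, not the paper's exact model): with deposit rate r^d and lending rate r^l = r^d + s for a spread s > 0,

    u'(c_t) = \beta (1 + r^d) E_t[u'(c_{t+1})]   if the economy saves abroad,
    u'(c_t) = \beta (1 + r^l) E_t[u'(c_{t+1})]   if it borrows,

so whenever \beta (1 + r^d) < 1 < \beta (1 + r^l), neither saving nor borrowing is worthwhile and a balanced current account is optimal; small changes in s near zero can therefore move the economy discretely in and out of this inaction region.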
Whereas the two first papers concern Central and Eastern European and Baltic
countries and developing countries, respectively, the paper “Real Effects of Mone-
tary Policy in Large Emerging Economies,” by Sushanta K. Mallick (Queen Mary
University of London) and Ricardo M. Sousa (University of Minho and London
School of Economics) focuses on large emerging countries, notably the monetary
policy transmission of five key emerging market economies: Brazil, Russia, India,
China, and South Africa (BRICS). The authors justify the choice of this sample by
the fact that they are the largest and fastest-growing emerging markets. Using
interest rates to identify monetary policy shocks, the paper contributes to studies on
the sources of time-varying economic fluctuations and provides evidence of mon-
etary policy transmission for the major emerging market countries. The authors
focus in particular on the real effects of monetary policy for the BRICS. The study
enhances our understanding of the impact of monetary policy shocks in emerging
market economies, while improving and extending the existing literature in several
areas. First, it looks at the impact of monetary policy not only in terms of output
and inflation, but also with regard to monetary growth rate, the exchange rate and
stock price, thus shedding light on the role of monetary policy in terms of provision
of liquidity, explaining the current account imbalances, and on its effects on the
stability of the financial markets. Second, the authors identify the monetary policy
shock using modern estimation techniques, namely, the Bayesian structural vector
autoregressive (B-SVAR) and the sign-restrictions VAR, thereby accounting for
the uncertainty regarding the impulse-response functions. Third, they use quarterly
data for a longer time period (namely, 1990:1–2008:4) and are thus able to obtain
more precise estimates. They show that a contractionary monetary policy has
a strong and negative effect on output and that contractionary monetary policy
shocks tend to stabilize inflation in these countries, while producing a strongly
persistent negative effect on real equity prices. Their findings highlight the fact
that a monetary policy contraction has a negative effect on output; leads to inflation
stabilization with persistence in the aggregate price level, coinciding with the fall
in commodity prices; produces a small liquidity effect; has a strong and negative
impact on equity markets; and generates domestic currency appreciation.
Overall, the study indicates that an exogenous increase in the short-term interest
rate tends to be followed by an immediate decline in prices and appreciation of
the exchange rate, and has a significant negative impact on output and equity
prices. However, although monetary policy has a very strong influence on eco-
nomic activity, evidence regarding the transmission of short-term interest rates to
inflation remains limited, indicating that the major focus of monetary policy for
the five large emerging economies in the BRICS has been more on stabilizing
output than on controlling inflation. This confirms the strong relationship between
monetary policy and real macroeconomic variables, and may help to improve our
understanding of time variation and the different changes characterizing the most
important macroeconomic indicators.
The paper “Modeling Nonlinear and Heterogeneous Dynamic Links in In-
ternational Monetary Markets,” written by Mohamed Arouri (EDHEC Business
School), Fredj Jawadi (University of Evry & Amiens School of Management), and
Duc Khuong Nguyen (ISC Paris School of Management), also focuses on mon-
etary policies. However, it concerns three important developed markets: France, the United Kingdom, and the United States. The paper examines the dynamic linkages of international monetary markets over the period 2004–2009, using daily short-term interbank interest rates from these three countries. The authors apply
vector error-correction models and smooth-transition error-correction models that
enable them to examine the short-run dynamics and long-run relationships of the
interest variables. These techniques also appear to be well suited for capturing
several forms of potential asymmetry, nonlinearity, and structural changes in their
adjustment dynamics. The empirical results offer strong evidence of nonlinear
and heterogeneous causality between the three interest rates under consideration.
They also show that the relationships between interest rates are asymmetrically
and nonlinearly time-varying. Furthermore, the authors find that exogenous shifts
in the U.S. short-term interest rate led those in France and in the United Kingdom
within a horizon of one to two days. These findings have important implications
regarding the actions of leading central banks (ECB, the Bank of England, and the
U.S. Fed), because the behavior of short-term interest rates can be viewed as an
indicator of the degree of interdependence in central banks’ policy. Indeed, if we
consider that short-term interest rate deviations reasonably reflect changes in target
interest rates, the convergence of short-term interest rates toward a common equi-
librium over recent years may be explained by the greater coordination between
ECB, U.S., and U.K. central bankers in an effort to manage crisis issues together.
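For reference, a generic two-regime smooth-transition error-correction model (STECM) of the kind this literature builds on can be written as follows; this is a textbook specification given for orientation, not necessarily the authors' exact parameterization. With $z_{t-1} = y_{t-1} - \theta x_{t-1}$ the error-correction term between two cointegrated rates $y_t$ and $x_t$,

\[
\Delta y_t \;=\; \alpha_1 z_{t-1} \;+\; \sum_{i=1}^{p}\beta_{1i}\,\Delta y_{t-i}
\;+\; G(s_t;\gamma,c)\Big(\alpha_2 z_{t-1} + \sum_{i=1}^{p}\beta_{2i}\,\Delta y_{t-i}\Big) \;+\; \varepsilon_t,
\qquad
G(s_t;\gamma,c) \;=\; \frac{1}{1+\exp\{-\gamma\,(s_t-c)\}}.
\]

Because the logistic transition function $G$ moves smoothly between 0 and 1 with the transition variable $s_t$ (for instance, the size of the past deviation $z_{t-1}$), the speed of adjustment toward the long-run equilibrium varies over time, which is precisely what lets this class of models capture the asymmetric and nonlinear adjustment the authors report.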
In the paper “Why Do Risk Premia Vary over Time? A Theoretical Investigation
under Habit Formation,” Bianca De Paoli (London School of Economics, Centre
for Economic Performance, and Bank of England) and Pawel Zabczyk (London
School of Economics and Bank of England) focus on the study of risk premium
dynamics and analyze their main determinants. Using a model with external habit
formation, the authors identify the time-varying character of risk premia. In par-
ticular, they show that because of changing recession risks, risk premia can be
procyclical. Their results also indicate that persistent habits, shocks, and features
generating hump-shaped consumption responses are all likely to make premia
countercyclical. The findings are interesting because they not only indicate the
importance of the role of countercyclical recession risks, but also provide evidence
that factors that help match activity data (i.e., allowing for consumption habits and
persistent shocks) are also likely to influence the asset-pricing dimension. Indeed,
changes in risk premia can substantially contribute to asset price volatility, so
having a good understanding of the factors driving them is crucial for modeling
asset prices.
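The mechanism can be previewed with a standard external-habit pricing kernel in the spirit of Campbell and Cochrane; this generic formulation is given for orientation and is not necessarily the authors' exact model. With consumption $C_t$, an external habit level $X_t$, and the surplus consumption ratio $S_t = (C_t - X_t)/C_t$, the stochastic discount factor is

\[
M_{t+1} \;=\; \delta\,\Big(\frac{S_{t+1}}{S_t}\Big)^{-\gamma}\Big(\frac{C_{t+1}}{C_t}\Big)^{-\gamma},
\]

so that effective risk aversion $\gamma/S_t$ rises as consumption approaches habit in downturns. Whether the implied risk premia turn out procyclical or countercyclical then depends on how $S_t$ and expected consumption growth comove over the cycle, which is exactly the margin governed by the persistence of habits and shocks studied in the paper.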
In their paper “Persistent Deficit, Growth, and Indeterminacy,” Alexandru
Minea (CERDI—University of Auvergne) and Patrick Villieu (LEO—University
of Orléans) focus on macroeconomic data and investigate the short- and long-
run effects of fiscal deficits on economic growth using an endogenous growth
model with productive public spending that is financed by public deficit and debt.
The question under consideration is extremely interesting, notably because public
deficits are increasing in many countries and economic growth is crucial for most
of them. As in previous studies, the authors provide evidence of a multiplicity
of long-run balanced growth paths as well as of possible indeterminacy of the
transition path. In particular, they show that starting from the high-growth steady state, a positive impulse to the deficit ratio exerts an adverse effect on long-run economic growth after an initial rise. Starting from the low-growth steady state, however, the outcome may be radically indeterminate, and the effect of fiscal
deficit impulses is subject to “optimistic” or “pessimistic” views of public debt
sustainability.
Overall, this means that a higher public deficit can generate a crowding-out effect
on economic growth, but may also ensure the development of productive infras-
tructures that promote long-run growth. However, this would depend on the initial
debt ratio and on self-fulfilling prophecies about government debt sustainability.
Formally, the model under consideration exhibits some complex dynamics, allow-
ing indeterminacy and nonlinear effects of fiscal policy. Taken together, these findings indicate a significant relationship between fiscal policies and economic growth
and, interestingly, show that the response of economic growth to fiscal deficits
may exhibit time variation, asymmetry, irregularity, and nonlinearity.
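A stylized skeleton of this class of models, in the tradition of Barro-type endogenous growth with productive public spending, helps to fix ideas; it is offered as an illustration and is not the authors' precise specification:

\[
Y_t \;=\; A\,K_t^{1-\alpha} G_t^{\alpha}, \qquad 0<\alpha<1,
\qquad\quad
\dot B_t \;=\; r_t B_t + G_t - \tau Y_t,
\]

where public spending $G_t$ raises private productivity but is financed partly by debt $B_t$. Along a balanced growth path with a fixed deficit-to-output ratio, the common growth rate must solve a nonlinear equation in which the deficit simultaneously funds growth-enhancing spending and absorbs resources through debt service; such an equation can admit two roots, corresponding to the high-growth and low-growth steady states discussed above, with expectations about debt sustainability selecting among transition paths.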
Finally, the last paper in this issue by Pascal Seppecher (CEMAFI—University
of Nice Sophia Antipolis), entitled “Flexibility of Wages and Macroeconomic
Instability in an Agent-Based Computational Model with Endogenous Money,”
investigates the effects of a decrease in nominal wages on the level of unemploy-
ment, prices, and the equilibrium of the system as a whole. According to Keynes,
one should not expect the effects of a fall in nominal wages to be limited to the
labor market, because the level of effective demand may be modified. This paper
presents a computational macroeconomic model that closely associates Keynesian thinking with an agent-based approach, a method well suited to modeling complex systems populated with large numbers of autonomous and heterogeneous agents. Adopting this approach, the author builds a model of a dynamic and complex economy in which the creation and destruction of money result from both real and monetary interactions between multiple heterogeneous agents.
Interestingly, he shows, on the one hand, the emergence of macroeconomic stability, driven by the long-term stability of the distribution of total income, wages, and profits, which can be explained by the fact that the model, in its basic version, contains a rigid labor market. On the other hand, when nominal wages are made more flexible, households become more responsive to the demands of employers, with no sustained reduction in the level of unemployment.
These downward wage adjustments lead to weaker demand, inducing not only
lower commodity prices but also lower production levels, with subsequent rises in
unemployment. These findings illustrate and confirm Keynes's reasoning that a fall in nominal wages cannot guarantee full employment and may even destabilize prices and the economic system as a whole. They also suggest the importance of time variation in real and monetary macroeconomic data.
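The demand feedback at work can be illustrated with a deliberately tiny toy loop (written for this summary and far simpler than Seppecher's model; every parameter value is arbitrary): wage income finances household spending, so cutting nominal wages when slack appears shrinks demand and employment together instead of clearing the labor market.

```python
# Toy wage-demand feedback loop (illustrative only; not Seppecher's model).
N_WORKERS = 100
SAVING_RATE = 0.05           # small demand leak that triggers the initial slump
wage, price, employed = 1.0, 1.0, N_WORKERS

for period in range(50):
    payroll = wage * employed                 # wages paid become spendable money
    demand = (1 - SAVING_RATE) * payroll      # households spend out of payroll
    sales = demand / price                    # real quantity actually sold
    employed = min(N_WORKERS, int(round(sales)))  # firms staff to match sales
    slack = 1 - employed / N_WORKERS
    wage *= 1 - 0.10 * slack                  # "flexible" wages fall with slack...
    price *= 1 - 0.05 * slack                 # ...and weak demand drags prices down

print(f"unemployment after 50 periods: {1 - employed / N_WORKERS:.0%}")
```

Because household demand is financed out of the very payroll that wage cuts reduce, the loop settles at higher unemployment and lower prices rather than at full employment, which is the Keynesian point the paper formalizes in a much richer setting.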

NOTES

1. For more details about recent developments in nonlinear time-series models, see Granger et al. (2010), which updates the book by Granger and Teräsvirta (1993). See Barnett and Jawadi (2010) for further applications of nonlinear modeling.
2. For more details about the ISCEF, see the conference website: www.iscef.com.

REFERENCES
Barnett, W. and F. Jawadi (2010) Nonlinear Modeling of Economic and Financial Time-Series. Inter-
national Symposia in Economic Theory and Econometrics, vol. 20. Bingley, UK: Emerald Group
Publishing Limited.
Box, G.E.P. and G.M. Jenkins (1970) Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day.
Brixiova, Z. and N. Kiyotaki (1997) Private sector development in transition economies. Carnegie–
Rochester Conference Series on Public Policy 46, 241–280.
Engle, R.F. (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of
United Kingdom inflation. Econometrica 50, 987–1007.
Engle, R.F. and C.W.J. Granger (1987) Cointegration and error correction: Representation, estimation
and testing. Econometrica 55, 251–276.
Granger, C.W.J. and T. Teräsvirta (1993) Modelling Nonlinear Economic Relationships. Oxford:
Oxford University Press.
Granger, C.W.J., T. Teräsvirta, and D. Tjøstheim (2010) Modelling Nonlinear Economic Time Series.
Oxford: Oxford University Press.

Macroeconomics in Agriculture
John B. Penson, Jr.

John B. Penson, Jr. is Regents Professor and Stiles Professor of Agriculture in the Department of Agricultural Economics, Texas A&M University.

Economists traditionally define the long run not in terms of months, years, or when we die (the one notable exception is John Maynard Keynes's observation that in the long run, we are all dead) but rather as a time period long enough to allow for full response to changing incentives, or where all factors are variable. My colleagues at Texas A&M University will confirm that all of my faculties are extremely variable, so I guess I have made it to the long run.

Three people have had a major impact on the direction my career has taken. One was my dissertation chairman at the University of Illinois, Dr. Chester Baker. Chet, past president of the American Agricultural Economics Association and Fellow of the Association, was a major innovator in the field of agricultural finance at the micro and macro levels. He argued, for example, that credit is more than an input; it is an integral part of the decision-making process. A second was Dr. Walter McMahon, Professor Emeritus of Economics and Medicine at the University of Illinois, a nationally recognized expert in health economics. Walter taught an advanced macroeconomics course that spurred my interest in relationships between markets in a global economy. The third was Dr. G. Edward Schuh, the Hubert Humphrey Professor at the University of Minnesota and AAEA Fellow. I had the opportunity to discuss research opportunities in the macroeconomics of agriculture with Ed while he was with the President's Council of Economic Advisors, a topic he pioneered back in the 1970s.

I spent some time reviewing past Lifetime Award Winner seminar papers in preparing my remarks. These papers are largely an introspective view of an individual's experience during his or her career. My approach is somewhat different. It is commonplace today for doctoral dissertations to consist of three papers. Not confusing my remarks with the required quality of a doctoral dissertation, I would like to focus on three facets of a central theme I think merits your consideration. My remarks will focus on three aspects of the macroeconomics of agriculture: (1) macroeconomics in agricultural economics research, (2) macroeconomics in agricultural economics undergraduate curricula, and (3) macroeconomics in agricultural loan portfolio decision making.

Macroeconomics of Agriculture in Research

My initial experience with macroeconomic models that in one way or another incorporate an agricultural sector began with Leontief's pioneering work with input-output models and Karl Fox's farm-sector equations incorporated into the Federal Reserve's econometric model. The dynamics associated with Fox's efforts were entirely recursive, whereas the Leontief model was fully simultaneous although static. The latter approach captured agriculture's use of inputs from other sectors in the economy as well as the distribution of its output throughout the economy for a particular year. Prices were unaffected by the assumption that each sector's supply was perfectly elastic. Later input-output modeling by Dorfman, Samuelson, and Solow captured the effects of a kinked supply curve reflecting the current capacity of individual sectors, whereas others endogenized demand curves and primary input supply curves (Penson and Fulton). Computable general equilibrium models by Hertel and others also capture the role of agriculture in the general economy.
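As a concrete illustration of the static Leontief structure just described (all numbers invented for the example), gross outputs x solve x = Ax + d, where A holds interindustry input requirements and d is final demand; consistent with the perfectly elastic supply assumption noted above, prices never enter the calculation.

```python
import numpy as np

# Illustrative three-sector Leontief quantity model: x = A x + d.
# Entry A[i, j] is the amount of sector i's output used per unit of
# sector j's output; d is final demand. All numbers are made up.
A = np.array([[0.10, 0.04, 0.02],    # agriculture
              [0.15, 0.20, 0.10],    # manufacturing
              [0.05, 0.10, 0.15]])   # services
d = np.array([50.0, 200.0, 300.0])

x = np.linalg.solve(np.eye(3) - A, d)    # gross output = (I - A)^(-1) d
for name, value in zip(["agriculture", "manufacturing", "services"], x):
    print(f"{name:13s} gross output: {value:7.1f}")
```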
Karl Fox's early contribution to the Federal Reserve's econometric macroeconomic model, based largely on lagged relationships, was eventually broadened in studies by Penson and Hughes, and later, by Penson and Taylor, which econometrically captured the simultaneous interrelationship between agriculture and the general economy. Both modeling approaches require substantial time and resources, however, and thus are difficult to sustain (Penson and Gardner).

The agricultural economics literature also includes a broad array of studies focusing on individual aspects of the macroeconomics of agriculture. Examples that come to mind are studies by Schuh, Chambers, and others focusing on the role of exchange rates. Other studies have focused on supply and demand in individual markets, where macroeconomic variables are captured exogenously. I understand that large-scale models require considerable resources to develop and maintain. To this end, sensitivity analysis of appropriate exogenous variables is a reasonable compromise.

The profession has continued to expand its research focus beyond the farm gate over the years to encompass economic activity all along the supply chain for food and fiber products, often in a global context. The titles of our journals today refer to agricultural and applied economics and to resource and environmental economics, as opposed to the Journal of Farm Economics. One simply has to browse the topics covered in Choices from one quarter to the next to appreciate the diversity of the scope of research programs in agricultural economics.

Macroeconomics of Agriculture in Undergraduate Curricula

Agricultural economics departments are fundamentally micro in theory and applications. Although this should not surprise you, it can be argued that a single introductory macroeconomics course focusing entirely on the behavior of the general economy is insufficient. I argue that students majoring in agricultural economics or agribusiness should have a fundamental appreciation for the sensitivity of agriculture and the food and fiber sector to macroeconomic shocks in today's domestic and global economies. Take the simple concept of supply and demand. An understanding of the forces that cause these curves to shift in the food and fiber industry (i.e., incomes, exchange rates, interest rates, and inflation, to name a few) and the events and policies that bring these shifts about cannot be found in an introductory macroeconomics course. Is knowing the value of an income elasticity for a commodity enough? Or should our students understand what might cause the denominator to change?
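To spell the point out, the income elasticity is a ratio of two percentage changes,

\[
\varepsilon_I \;=\; \frac{\%\,\Delta Q_d}{\%\,\Delta I},
\]

and it is the denominator, the percentage change in income $I$, that macroeconomic forces such as monetary and fiscal policy, exchange rates, and the business cycle drive. Knowing the value of $\varepsilon_I$ for a commodity without understanding what moves $I$ gives a student only half of the story.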
To gain a better understanding of student exposure to macroeconomic instruction, I reviewed the undergraduate curricula in departments of agricultural economics at 15 universities: 10 in the South (Arkansas, Auburn, Florida, Kentucky, Louisiana State University (LSU), Tennessee, Texas Tech, Texas A&M, North Carolina (NC) State, and Virginia Tech) and five located outside the South (Illinois, Minnesota, Wisconsin, Purdue, and Maryland). All departments surveyed require undergraduate majors to have at least one course in macroeconomics. The focus of this course is on the monetary and real economies at the aggregate level, with little more than anecdotal references to industry-level causes and effects. Some departments rely on one or two introductory courses taught in an economics department. A few departments require an introductory agricultural economics course to supplement micro and macro concepts taught in an economics department. One department relies entirely on micro and macro instruction within the department.

Importantly, only three departments appear to require that students take an intermediate or capstone course in the macroeconomics of agriculture (Florida, Minnesota, and Texas A&M). I argue that students today need to be able to relate global and domestic macroeconomic events to their major. An intermediate course addressing, among other things, the sensitivity of agriculture to macroeconomic shocks in an open economy is seemingly essential to a well-rounded undergraduate curriculum in agricultural economics or agribusiness. Macroeconomics offers one more perspective on the forces bearing on production, marketing, and financial decisions and events in specific markets and along the food and fiber supply chain, as we saw during the efforts of the Federal Reserve to fight inflation during the early 1980s and again during the Asian financial crisis in the late 1990s.

Macroeconomics of Agriculture in Lending Decisions

The concept of "due diligence" and the importance placed on risk management of loan portfolios require lending institutions to assess their exposure to various forms of risk, including credit risk, when determining their loan-loss reserves and capital requirements. Credit risk is one of the major pillars in the Basel (Switzerland) international capital accords.

Most, if not all, agricultural lending institutions, however, do little in the way of quantifying the sensitivity of future repayment capacity and the probability of default. Whereas finance textbooks by Rose and by Fraser, Gup, and Kolari, among others, underscore the need for portfolio-sensitivity analysis, such evaluation is not practiced in the real world of agricultural mortgage lending. I refer you to articles by Boehlje and Lins and by Penson in the Journal of Lending and Credit Risk Management on this subject. More can be done in the areas of data mining existing portfolios, as well as in stochastic simulation of future debt-repayment capacity at the individual borrower and portfolio segment levels. It has been my experience over the years that many lenders make 20- to 25-year mortgage loans with little enforcement of oversight obligations or risk assessment after the loan is made.

Recent events underscore the globalization of markets, including financial markets. Today we see U.S. farmers purchasing farmland in other countries, like Brazil. The responsibilities placed on both lenders and regulators in monitoring the risk associated with agricultural loan portfolios will increasingly require more in-depth analysis. The bottom line is that gains in computing capacity and database availability allow for more sophisticated sensitivity analysis of farm mortgage lending decisions and potential adverse migration in agricultural loan portfolios.
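The kind of stochastic simulation advocated here can be sketched in a few lines; this is a hypothetical illustration whose distributions and thresholds are invented, not drawn from any of the cited studies. Future debt-repayment capacity is simulated under random income and interest-rate paths, and the share of paths in which scheduled debt service exceeds repayment capacity approximates a probability of default.

```python
import numpy as np

# Hypothetical Monte Carlo stress test of a single farm mortgage borrower;
# every number and distribution below is invented for illustration.
rng = np.random.default_rng(42)
N_PATHS, YEARS = 10_000, 5

loan_balance = 500_000.0
base_income = 60_000.0        # assumed annual repayment capacity today
payment_rate = 0.09           # assumed annual debt service per dollar of balance

# Random shocks: farm income growth and a floating-rate add-on to debt service.
income_growth = rng.normal(0.00, 0.15, size=(N_PATHS, YEARS))
rate_shock = rng.normal(0.00, 0.01, size=(N_PATHS, YEARS))

income_paths = base_income * np.cumprod(1 + income_growth, axis=1)
debt_service = loan_balance * (payment_rate + rate_shock)

# Default proxy: repayment capacity falls short of debt service in any year.
shortfall = (income_paths < debt_service).any(axis=1)
print(f"simulated probability of default: {shortfall.mean():.1%}")
```

Run at the portfolio segment level, the same exercise flags potential adverse migration before it appears in loan-loss experience.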
References

Boehlje, M., and D. Lins. "Managing Stress in the Agribusiness Portfolio." Journal of Lending and Credit Risk Management 82,4 (December 1999): 37-42.
Chambers, R.G. "Agricultural and Financial Market Interdependence in the Short Run." American Journal of Agricultural Economics 66,1 (February 1984): 12-24.
Dorfman, R., P.A. Samuelson, and R.M. Solow. Linear Programming and Economic Analysis. New York: McGraw-Hill, 1958.
Fox, K.A. "A Submodel of the Agricultural Sector." Demand Analysis, Econometrics and Policy Models. S.R. Johnson, J.K. Sengupta, and E. Thorbecke, eds. Ames: Iowa State University Press, 1992.
Fraser, D.R., B.E. Gup, and J.W. Kolari. Commercial Banking: The Management of Risk. St. Paul, MN: West Publishing Company, 1996.
Hertel, T.W. "Partial vs. General Equilibrium Analysis of International Agricultural Trade." Journal of Agricultural Economics Research 3 (1993): 3-13.
Leontief, W.W. The Structure of the American Economy, 1919-1939, 2nd edition. New York: Oxford University Press, 1951.
Penson, J.B., Jr. "Stress Testing Agricultural Loan Portfolios." Journal of Lending and Credit Risk Management 83,1 (February 2000): 70-74.
Penson, J.B., Jr., and M.E. Fulton. "Impact of Localized Cutbacks in Agricultural Production on a State Economy." Western Journal of Agricultural Economics 3 (1980): 107-122.
Penson, J.B., Jr., and B.L. Gardner. "Implications of Macroeconomic Outcomes for Agriculture." American Journal of Agricultural Economics (December 1988): 1013-1022.
Penson, J.B., Jr., and D.W. Hughes. "Incorporation of General Economic Outcomes in Econometric Projections Models for Agriculture." American Journal of Agricultural Economics 61 (1979): 151-157.
Penson, J.B., Jr., and C.R. Taylor. "United States Agriculture and the General Economy: Modeling Their Interface." Agricultural Systems Journal 39,1 (January 1992).
Rose, P.S. The Irwin Series in Finance: Commercial Bank Management, 3rd ed. Chicago: Richard D. Irwin Publishing, 1995.
Schuh, G.E. "The New Macroeconomics of Agriculture." American Journal of Agricultural Economics 59 (1977): 117-125.
