Methods, Theory
and Applications
This page intentionally left blank This page intentionally left blank
NE W J E RSE Y • L ONDON • SI NGAP ORE • BE I J I NG • SHANGHAI • HONG KONG • TAI P E I • CHE NNAI
World Scientifc
ECONOMIC MODELS
Methods, Theory
and Applications
editor
Dipak Basu
Nagasaki University, Japan
British Library CataloguinginPublication Data
A catalogue record for this book is available from the British Library.
For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.
ISBN13 9789812836458
ISBN10 9812836454
Typeset by Stallion Press
Email: enquiries@stallionpress.com
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the Publisher.
Copyright © 2009 by World Scientific Publishing Co. Pte. Ltd.
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
Printed in Singapore.
ECONOMIC MODELS
Methods, Theory and Applications
Sherry  Economic Models.pmd 9/2/2009, 7:10 PM 1
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
This Volume is Dedicated to the Memory of
Prof. Tom Oskar Martin Kronsjo
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Contents
Tom Oskar Martin Kronsjo: A Proﬁle ix
About the Editor xiii
Contributors xv
Introduction xix
Chapter 1: Methods of Modelling: Mathematical
Programming
1
Topic 1: Some Unresolved Problems of Mathematical
Programming
3
by Olav Bjerkholt
Chapter 2: Methods of Modeling: Econometrics
and Adaptive Control System
21
Topic 1: A Novel Method of Estimation Under CoIntegration 23
by Alexis Lazaridis
Topic 2: Time Varying Responses of Output to Monetary
and Fiscal Policy
43
by Dipak R. Basu and Alexis Lazaridis
Chapter 3: Mathematical Modeling in Macroeconomics 67
Topic 1: The Advantages of Fiscal Leadership in an Economy
with Independent Monetary Policies
69
by Andrew Hughes Hallet
Topic 2: CheapTalk Multiple Equilibria and Pareto —
Improvement in an Environmental Taxation Games
101
by Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
vii
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
viii Contents
Chapter 4: Modeling Business Organization 119
Topic 1: Enterprise Modeling and Integration: Review
and New Directions
121
by Nikitas Spiros Koutsoukis
Topic 2: Toward a Theory of Japanese Organizational Culture
and Corporate Performance
137
by Victoria Miroshnik
Topic 3: Health Service Management Using Goal Programming 151
by AnnaMaria Mouza
Chapter 5: Modeling National Economies 171
Topic 1: Inﬂation Control in Central and Eastern European
Countries
173
by Fabrizio Iacone and Renzo Orsi
Topic 2: Credit and Income: CoIntegration Dynamics
of the US Economy
201
by Athanasios Athanasenas
Index 223
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Tom Oskar Martin Kronsjo: A Proﬁle
Lydia Kronsjo
University of Birmingham, England
Tom Kronsjo was born in Stockholm, Sweden, on 16 August 1932, the
only child of Elvira and Erik Kronsjo. Both his parents were gymnasium
teachers.
In his school days Tom Kronsjo was a bright, popular student, often
a leader among his school contemporaries and friends. During his school
years, he organized and chaired a National Science Society that became
very popular with his school colleagues.
From an early age, Tom Kronsjo’s mother encouraged him to learn
foreign languages. He put these studies to a good test during a 1948 summer
vacation when he hitchhiked through Denmark, Germany, the Netherlands,
England, Scotland, Ireland, France, and Switzerland. The same year Tom
Kronsjofoundedandchaireda technical societyinhis school. Inthe summer
vacation of 1949, Tom Kronsjo organized an expedition to North Africa
by seven members of the technical society, their ages being 14–24. The
young people traveled by coach through the austere postSecondWorldWar
Europe. All the equipment for travel, including the coach, were donated
by sympathetic individuals and companies, who were impressed by an
intelligent, enthusiastic group of youngsters, eager to learn about the world.
This event caught a wide interest in the Swedish press at that time and was
vividly reported.
Tom Kronsjo used to mention that as a particularly personally exciting
experience of his school years he remembered volunteering to give a series
of morning school sermons on the need for the world governmental organi
zations, which would encourage and support mutual understanding of the
needs of the people to live in peace.
Tom Kronsjo’s university years were spent at both sides of the great
divide. Preparing himself for a career in international economics, he wanted
to experience and see for himself the different nations’ ways of life and
points of view. He studiedinLondon, Belgrade, Warsaw, Oslo, andMoscow,
before returning to his home country, Sweden, to pass examinations and
ix
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
x Tom Oskar Martin Kronsjo
submit required work at the Universities of Uppsala, Stockholm and Lund
for his Fil.kand. degree in economics, statistics, mathematics and Slavonic
Languages, and Fil.lic. degree in economics. By then, TomKronsjo besides
his mother tongue — Swedish, was ﬂuent in six languages — German,
English, French, Polish, SerboCroatian, and Russian. During this period,
Tomwas awarded a number of scholarships for study in Sweden and abroad.
TomKronsjo met his wife while studying in Moscow. He was a visiting
graduate student at the MoscowState University and she a graduate student
with the Academy of Sciences of the USSR. They married in Moscow in
February 1962. A few years later, a very happy event in the family life was
the arrival of their adoptive son from Korea in October 1973.
As a postgraduate student in economics, Tom Kronsjo was one of a
new breed of Western social scientists that applied rigorous mathematical
and computerbased modeling to economics. In 1963, Tom Kronsjo was
invited by Professor Ragnar Frish, later Nobel prize winner in economet
rics, to work under himand Dr. Salah Hamid in Egypt as a visitingAssistant
Professor in Operations Research at the UAR Institute of National Plan
ning. His pioneering papers on optimization of foreign trade were soon
considered classic and brought invitations to work as a guest researcher by
the GDR Ministry of Foreign Trade and Inner German Trade, and by the
Polish Ministry of Foreign Trade.
In 1964, Tom Kronsjo was appointed an Associate Professor in Econo
metrics and Social statistics at the Faculty of Commerce and Social Sci
ences, the University of Birmingham, U.K. He and his wife moved to the
United Kingdom.
In Birmingham, together with Professor R.W. Davies Tom Kronsjo
developed diploma, and master of social sciences courses in national eco
nomic planning. The courses, offered from 1967, attracted many talented
students from all over the world. Many of these students then stayed
with Tom Kronsjo as research students and many of these people are
today university professors, industrialists, and economic advisors. In 1969,
at the age of 36, Tom Kronsjo was appointed Professor of Economic
Planning. By then Tom Kronsjo had published over a hundred research
papers.
Tom Kronsjo has always had a strong interest in educational innova
tions. He was among the ﬁrst professors at the university to introduce a
televisionbased teaching, making it possible to widen the scope of mate
rial taught.
TomKronsjo was generous towards his students with his research ideas,
stimulating his students intellectually, unlocking wide opportunities for his
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Tom Oskar Martin Kronsjo xi
students through this generosity. He had the ability to see the possibilities
in any situation and taught his students to use them.
Tom Kronsjo’s research interests were wide and multifaceted. One
area of his interest was planning and management at various levels of
the national economy. He developed important methods of decomposi
tion of large economic systems, which were considered by his contem
poraries a major contribution to the theory of mathematical planning and
programming. These results were published in 1972 in a book “Economics
of Foreign Trade” written jointly by Tom Kronsjo, Dr. Z. Zawada, and
Professor J. Krynicki, both of the Warsaw University, and published in
Poland.
In 1972, TomKronsjo was invited as a Visiting Professor of Economics
and Industrial Administration at Purdue University, Purdue, USA.
In 1974, Tom Kronsjo was a Distinguished Visiting Professor of Eco
nomics, at San Diego State University, San Diego, USA.
In the year 1975, Tom Kronsjo found himself as a Senior Fellow in the
Foreign Policy Studies Program of the Brookings Institution, Washington,
D.C., USA. In collaboration with three other scientists, he was engaged
in the project entitled “Trade and Employment Effects of the Multilateral
Trade Negotiations”. In the words of his colleagues, Tom Kronsjo made an
indispensable contribution to the project. Tom Kronsjo’s ingenious, tire
less, conceptual, and implementing efforts made it possible to carry out
calculations involving millions of pieces of information on trade and tariffs,
permitting the Brookings model to become the most sophisticated and com
prehensive model available at the time for investigation into effects of the
current multilateral trade negotiations. In addition, Tom Kronsjo conceived
and designed the means for implementing, perhaps the most imaginative
portion of the project, the calculation of “optimal” tariff cutting formulas
using linear programming and taking account of balance of payments and
other constraints on trade liberalization. A monograph on “Trade, Welfare,
and Employment Effects of Trade Negotiations in the Tokyo Round” by
Dr. W.R. Cline, Dr. Noboru Kawanabe, Tom Kronsjo, and T. Williams was
published in 1978.
Tom Kronsjo’s general interests have been in the ﬁeld of conﬂicts and
cooperation, war and peace, ways out of the arms race, EastWest joint
enterprises andworlddevelopment. He publishedpapers onunemployment,
work incentives, and nuclear disarmament.
Tom Kronsjo served as one of the ﬁve examiners for one of the Nobel
prizes in economic sciences. From time to time, he was invited to nominate
a candidate or candidates for the Nobel prize in economics.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
xii Tom Oskar Martin Kronsjo
Tom Kronsjo took early retirement from the University of Birmingham
in 1988 and devoted subsequent years working with young researchers from
China, Russia, Poland, Estonia, and other countries.
Tom Kronsjo was a great enthusiast of vegetarianism and veganism,
of Buddhist philosophy, yoga, and interspace communication, of aviation
and ﬂying. He learned to ﬂy in the United States and held a pilot license
since 1972.
TomKronsjo was a man who lived his life to the fullest, who constantly
looked for ways to bring peace to the world. He was an exceptional indi
vidual who had the incredible vision and an ability to foresee new trends.
He was a man of great charisma, one of those people of whom we say “he
is a man larger than life”.
Tom Oskar Martin Kronsjo died in June 2005 at the age of 72.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
About the Editor
Prof. Dipak Basu is currently a Professor in International Economics in
Nagasaki UniversityinJapan. He has obtainedhis Ph.D. fromthe University
of Birmingham, England and did his postdoctoral research in Cambridge
University. He was previously a Lecturer in Oxford University — Institute
of Agricultural Economics, Senior Economist in the Ministry of Foreign
Affairs, Saudi Arabia and Senior Economist in charge of Middle East &
Africa Division of Standard &Poor (Data Resources Inc). He has published
six books and more than 60 research papers in leading academic journals.
xiii
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
This page intentionally left blank This page intentionally left blank This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Contributors
Athanasios Athanasenas is anAssistant Professor of Operations Research
& Economic Development in the School of Administration & Economics,
Institute of Technology and Education, Serres, Greece. He was educated in
Colorado and Virginia Universities and received his Ph.D. from the Uni
versity of Minnesota. He was a FullBright scholar.
Olav Bjerkholt, Professor in Economics, University of Oslo, was previ
ously the Head of the Research at the Norwegian Statistical Ofﬁce and an
Assistant Professor of Energy Economics at the University of Oslo. He has
obtained his Ph.D. fromthe University of Oslo. He was a Visiting Professor
in the Massachusetts Institute of Technology and was a consultant to the
United Nations and the IMF.
Christophe Deissenberg is a Professor in Economics, at Universit´ e de la
M´ editerran´ ee and GREQAM, France. He is a coeditor of the Journal of
Optimization Theory and Applications and of Macroeconomic Dynamics,
a member of the editorial board of Computational Economics and of Com
putational Management Science. He is a member of the Advisory Board of
the Society for Computational Economics and a member of the Manage
ment Committee of the ESF action COSTP10–Physics of Risk, and a team
leader of the European STREP project EURACE.
Andrew Hughes Hallett is a Professor of Economics and Public Policy
in the School of Public Policy at George Mason University, USA. Previ
ously he was a Professor of Economics at Vanderbilt University, and the
University of Strathclyde in Glasgow, Scotland. He was educated in the
University of Warwick and London School of Economics, holds a doc
torate from Oxford University. He was a Visiting Professor in Princeton
University, Free University of Berlin, Universities of Warwick, Frankfurt,
Rome, Paris X and in the Copenhagen Business School. He is a Fellow of
the Royal Society of Edinburgh.
xv
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
xvi Contributors
Fabrizio Iacone obtained a Ph.D. in Economics at the London School of
Economics and is currently a Lecturer in the Department of Economics at
the University of York, UK. He has already published several articles in
leading journals.
Nikos Konidaris is currently a Ph.D. student at Hellenic Open University,
School of Social Sciences, researching in the ﬁeld of Business Administra
tion. He holds an MAdegree in Business and Management and a Bachelor’s
Degree in Economics & Marketing from Luton University UK.
Nikitas Spiros Koutsoukis is anAssistant Professor of Decision Modelling
and Scientiﬁc Management in the Department of International Economic
Relations and Development, Democritus University, Greece. He holds MSc
and Ph.D. in decision modeling from Brunel University (UK). He is inter
ested mainly in decision modeling and governance systems, with applica
tions in risk management, business intelligence and operational research
based decision support.
Alexis Lazaridis, Professor in Econometrics, Aristotle University of Thes
saloniki, Greece, has obtained his Ph.D. from the University of Birming
ham, England. He was a member of the Administrative Board of the Orga
nization of Mediation and Arbitration, of the Administrative Board of the
Centre of International and European Economic Law, and of the Supreme
Labour Council of Greece. During his term of service as Head of the Eco
nomic Department Faculty of Law and Economics, Aristotle University of
Thessaloniki, he was conferred an honorary doctoral degree by the Pres
ident of the Republic of Cyprus. He was also pronounced an honorary
member of the Centre of International and Economic European Law by the
SecretaryGeneral of the Legal Services of E.U. He has published several
books, large number of scientiﬁc papers and has patents on two computer
software programs.
Athanassios Mihiotis is an Assistant Professor at the School of Social
Science of the Hellenic Open University. He has obtained his Ph.D. in
industrial management from the National Technical University of Athens,
Greece. He is the editor of the International Journal of Decision Sciences,
Risk and Management. He has in the past worked as Planning and Logis
tics Director for multinational and domestic companies and has served as
member of project teams in the Greek public sector.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Contributors xvii
Victoria Miroshnik, is currently anAdamSmith Research Scholar, Faculty
of Law and Social Sciences, University of Glasgow, Scotland. She was
educated in Moscow State University and was an Assistant Professor in
Tbilisi State University, Georgia. She has published two books and a large
number of scientiﬁc papers in leading management journals.
AnnaMaria Mouza, Assistant Professor in Business Administration,
Technological Educational Institute of Serres, Greece, has obtained her
Ph.D. from the School of Health Sciences, Aristotle University of
Thessaloniki, Greece. Previously, she was an Assistant Professor in the
School of Technology Applications of the Technological Institute of
Thessaloniki, in the Department of Agriculture and in the Faculty of Health
Sciences of the Aristotle University of Thessaloniki. She is the author of
two books and many published scientiﬁc papers.
R. Orsi, Professor in Econometrics, University of Bologna, Italy, has
obtained his Ph.D. from Universit` e Catholique de Louvain, Belgium. He
was previously educated in the University of Bologna and in the University
of Rome. He was an Associate Professor in the University of Modena, and
Professor of Econometrics in the University of Calabria, Italy. He was a
Visiting Professor in the Center for Operation Research and Econometrics
(CORE), Universit` eCatholique de Louvain, Belgium, Nato Senior Fel
low, Head of the Department of Economics at the University of Calabria
from 1987 to 1989, Director of CIDE (the InterUniversitary Center of
Econometrics) from 1997 to1999, and from 2000 to 2002, and the Head of
Department of Economics, University of Bologna from 1999 to 2002.
Pavel
ˇ
Sevˇ c´ık, was educated in the Universit´ e de la M´ editerran´ ee. He is
presently completing his Ph.D in the Universil´ e de Montr´ eal, Canada.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Introduction
We deﬁne “model building in economics,” as the fruitful area of economics
designed to solve realworld problems using all available methods avail
able without distinctions: mathematical, computational, and analytical.
Wherever needed we should not be shy to develop new techniques —
whether mathematical or computational. This was the philosophy of Prof
Tom Kronsjo, in whose memory we dedicate this volume. The idea is to
develop stylized facts amenable to analytical methods to solve problems of
the world.
The articles in this volume are divided into three distinct groups: meth
ods, theory, and applications.
In the method section there are two parts: mathematical programming
and computation of cointegrating vectors, which are widely used in econo
metric analysis. Both these parts are important to analyze and develop poli
cies in any analytical model. Berjkholt has reviewed the history of develop
ment of mathematical programming and its impacts in economic modeling.
In the process, he has drawn our attention to some methods, ﬁrst developed
by Ragner Frisch, which can provide solution where the standard Simplex
method may fail. Frisch has developed these methods while working on the
Second Five Year’s Plan for India and has developed a method, which is
similar to that of Karmarkar.
Lazaridis presents a completely new method to compute cointegration
vectors, widely used in econometric analysis, by applying singular value
decomposition. With this method, one can easily accommodate in the co
integrating vectors any deterministic factors, such as dummies, apart from
the constant term and the trend. In addition, a comparatively simple proce
dure is developed for determining the order of integration of a different sta
tionary series. Besides, with this procedure one can directly detect whether
the differencing process produces a stationary series or not, since it seems
to be a common belief that differencing a variable (one or more times) we
will always get a stationary series, although this is not necessarily the case.
Basu and Lazaridis have proposed a new method of estimation and
solution of an econometric model with parameters moving over time. This
xix
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
xx Introduction
type of model is very realistic for economies, which are in transitional
phases. The model was applied for the Indian economy to derive monetary
ﬁscal policy structure that would respond to the changing environments of
the economy. The authors provide analysis to show that the response of the
economy would be variable i.e., not at all ﬁxed over time.
In the macroeconomic theory section HughesHallet has examined the
impacts of ﬁscal policy in a regime with independent monetary authority
and how to coordinate public policies in such a regime. Independence
of the central bank is a crucial issue for both developed and developing
country. In this theoretical model, the author has discovered some important
conclusions that ﬁscal leadership leads to improved outcomes because it
implies a degree of coordinationandreducedconﬂicts betweeninstitutions,
without the central bank having to lose its ability to act independently.
Deissenberg and Pavel consider a dynamic model of environmental
taxation that exhibits time inconsistency. There are two categories of ﬁrms,
believers (who take the nonbinding tax announcements made by the reg
ulator at face value), and nonbelievers (who make rational but costly pre
dictions of the true regulator’s decisions). The proportion of believers and
nonbelievers changes over time depending on the relative proﬁts of both
groups. If the regulator is sufﬁciently patient and if the ﬁrms react sufﬁ
cientlyfast tothe proﬁt differences, multiple equilibria canarise. Depending
on the initial number of believers, the regulator will use cheaptalk together
with actual taxes to steer the economy either to the standard, perfect pre
diction solution suggested in the literature, or to equilibrium with a mixed
population of believers and nonbelievers who both make prediction errors.
This model can be very useful in the analysis of environmental economics.
In the section on theory of business organization, Victoria Miroshnik
has formulated a model of Japanese organization and how the superior
corporate performances in the Japanese multinational companies are the
result of a multilayer structure of cultures, national, personal, organiza
tional. The model goes deep into the analysis of fundamental ingredient of
Japanese organizational culture and human resource management system
and how these contribute to corporate performance. The model produces a
novel scheme, howtoestimate phenomena whichare seeminglyunobserved
psychological and sociological characteristics.
Mihiotis et al., discussed enterprise modeling and integration chal
lenges, and rationale as well as current tools and methodology for deeper
analysis. Over the last decades, much has been written and a lot of research
has been undertaken on the issue of enterprise modeling and integration.
The purpose of this discussion is to provide an overview and understanding
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692fm
Introduction xxi
of the key concepts. The authors have outlined a framework for advancing
enterprise integration modeling based on the stateofthe art techniques.
Anna Maria Mauza ina pathbreakingresearchpresents a model suitable
for an efﬁcient budget management of a health service unit, by applying goal
programming. She analyzes all the details needed to formulate the proper
model, in order to successfully apply goal programming, in an attempt to
satisfy the expectations of the decision maker in the best possible way,
providing at the same time alternative scenarios considering various socio
economic factors.
Inthe sectionfor policyanalysis, Iacone andOrsi applieda small macro
econometric model to ascertain if the inﬂation dynamics and controls for
Poland, Czech Republic, and Slovenia, are compatible with the remaining
EU member countries. They found that the real exchange rate is the most
effective instrument to stabilize inﬂation whereas direct inﬂation control
mechanisms may be ineffective in certain cases. These experiments are
very useful to design antiinﬂation policies in open economies.
AthanasiosAthanasenas investigated the cointegration dynamics of the
credit–income nexus, within the economic growth process of the postwar
US economy, over the period from 1957 up to 2007. Given the existing
empirical research on the creditlending channel and the established rela
tionship between ﬁnancial intermediation and economic growth in general,
the main purpose is to analyze in detail the causal relationship between
ﬁnance and growth by focusing on bank credit and income GNP, in the
postwar US economy. This is a newapplication of an innovative technique
of cointegration analysis with emphasis on system stability analysis. The
results show that there is no shortrun effect of credit changes on income
changes, but only in the longrun, credit affects money income.
The bookcovers most of the important areas of economics withthe basic
analytical framework to formulate a logical structure and then suggest and
implement methods to quantify the structure to derive applicable policies.
We hope the book would be a source of joy for anyone interested to make
economics a useful discipline to enhance human welfare rather than being
a sterile discourse devoid of reality.
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Chapter 1
Methods of Modelling: Mathematical
Programming
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Topic 1
Some Unresolved Problems of Mathematical
Programming
Olav Bjerkholt
University of Oslo, Norway
1. Introduction
Linear programming (LP) emerged in the United States in the early post
war years. One may to a considerable degree see the development of LP
as a direct result of the mobilization of research efforts during the war.
George B. Dantzig, who was employed by the US Armed Forces, played a
key role in developing the new tool by his discovery in 1947 of the Simplex
method for solving LP problems. Linear programming was thus a military
product, whichsoonappearedtohave verywidespreadcivilianapplications.
The US Armed Forces continued its support of Dantzig’s LP work, as the
most widespread textbook in LP in the 1960s, namely Dantzig (1963), was
sponsored by the US Air Force.
Linear programming emerged at the same time as the ﬁrst electronic
computers were developed. The uses and improvements in LP as an opti
mization tool developed in step with the fast development of electronic
computers.
A standard formulation of a LP problem in n variables is as follows:
min
x
{c
x Ax = b, x ≥ 0}
where c ∈ R
n
, b ∈ R
m
and A a (m, n) matrix of rank m < n. The number
of linear constraints is thus m. The feasibility region of the problemconsists
of all points fulﬁlling the constraints.
S = {x ∈ R
n
Ax = b, x ≥ 0}
where S is assumed to be bounded with a nonempty interior.
Several leading economists, including a number of future Nobel Lau
reates, took active part in developing and utilizing LP at an early stage.
3
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
4 Olav Bjerkholt
Among these was also Ragnar Frisch. Frisch was broadly oriented towards
macroeconomic policy and planning problems and was highly interested
in the promising new optimization tool. Frisch, who had a strong back
ground in mathematics and also was very proﬁcient in numerical methods,
made the development of solution algorithms for LP problems a part of his
research agenda. During the 1950s, Frisch invented and tested out various
solution algorithms along other lines than the Simplex method for solving
LP problems.
With regard to the magnitude of LP problems that could be solved
at different times with the computer equipment at disposal, Orden (1993)
gives the following rough indication. In the ﬁrst year after 1950, the number
m of constraints in a solvable problem was of order 100 and has grown
with a factor of 10 for each decade, implying that currently LP problems
may have a number of constraints that runs into tens of millions. Linear
programming has been steadily taken into use for new kinds of problems
and the computer development has allowed the problems to be bigger and
bigger.
The Simplex method has throughout this period been the dominant
algorithm for solving LP problems, not least in the textbooks. It is perhaps
relativelyrare that analgorithmdevelopedat anearlystage for newproblem,
as Dantzig did with the Simplex method for the LP problem, has retained
such a position. The Simplex method will surely survive in textbook pre
sentations and for its historical role, but for the solution of largescale LP
problems it is yielding ground to alternative algorithms. Ragnar Frisch’s
work on algorithms is interesting in this perspective and has been given
little attention. It may on a modest scale be a case of a pioneering effort
that was more insightful than considered at the time and thus did not get
the attention it deserved. In the history of science, there are surely many
such cases.
Let us ﬁrst make a remark on the meaning of algorithms. The mathe
matical meaning of “algorithm” in relation to a given type of problem is a
procedure, which after ﬁnite number of steps ﬁnds the solution to the prob
lem, or determines that there is no solution. Such an algorithmic procedure
can be executed by a computer program, an idea that goes back to Alan
Turing. But when we talk about algorithms with regard to the LP prob
lem it is not in the mathematical sense, but a more practical one. An LP
algorithm is a practically executable technique for ﬁnding the solution to
the problem. Dantzig’s Simplex method is an algorithm in both meanings.
But one may have an algorithm in the mathematical sense, which is not a
practical algorithm (and indeed also vice versa).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 5
In the mathematical sense of algorithm, the LP problem is trivial. It
can easily be shown that the feasibility region is a convex set with a linear
surface. The optimum point of a linear criterion over such a set must be in a
corner or possibly in a linear manifold of dimension greater than one. As the
number of corners is ﬁnite, the solutioncanbe found, for example, bysetting
n − m of the x’s equal to zero and solve Ax = b for the remaining ones.
Then all the corners can be searched for the lowest value of the optimality
criterion.
Dantzig (1984) gives a beautiful illustration of why such a search for
optimality is not viable. He takes a classical assignment problem, the dis
tribution of a given number of jobs among the same number of workers.
Assume that 70 persons shall be assigned to 70 jobs and the return of each
workerjob combination is known. There are thus 70! possible assignment
combinations. Dantzig’s comment about the possibility of looking at all
these combinations runs as follows:
“Now 70! is a big number, greater than 10
100
. Suppose we had an IBM
370168 available at the time of the big bang 15 billion years ago. Would
it have been able to look at all the 70! combinations by the year 1981?
No! Suppose instead it could examine 1 billion assignments per second?
The answer is still no. Even if the earth were ﬁlled with such computers
all working in parallel, the answer would still be no. If, however, there
were 10
50
earths or 10
44
suns all ﬁlled with nanosecond speed computers
all programmed in parallel from the time of the big bang until sun grows
cold, then perhaps the answer is yes.” (Dantzig, 1984, p. 106)
In view of these enormous combinatorial possibilities, the Simplex
method is a most impressive tool by making the just mentioned and even
bigger problems practically solvable. The Simplex method is to search for
the optimum on the surface of the feasible region, or, more precisely, in the
corners of the feasible region, in such a way that the optimality criterion
improves at each step.
A completely different strategy to search for the optimum is to search
the interior of the feasible region and approach the optimal corner (or one of
them) fromwithin, so to speak. It may seemto speak for the advantage of the
Simplex method that it searches an area where the solution is known to be.
A watershed in the history of LP took place in 1984 when Narendra
Karmarkar’s algorithm was presented at a conference (Karmarkar, 1984a)
and published later the same year in a slightly revised version (Karmarkar,
1984b). Karmarkar’s algorithm, which the New York Times found worthy
as ﬁrst page news as a great scientiﬁc breakthrough, is such an “interior”
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
6 Olav Bjerkholt
method, searching the interior of the feasibility area in the direction of the
optimal point.
This idea had, however, been pursued by Ragnar Frisch. To the best of
my knowledge this was ﬁrst noted by Roger Koenker a few years ago:
“But it is an interesting irony, illustrating the spasmodic progress of sci
ence, that the most fruitful practical formulation of the interior point revo
lution of Karmarkar (1984) can be traced back to a series of Oslo working
papers by Ragnar Frisch in the early 1950s.” (Koenker, 2000).
2. Linear Programming in Economics
Linear programming has a somewhat curious relationship with academic
economics. Fewwould today consider LPas a part of economic science. But
LP was, so to say, launched within economics, or even within econometrics,
and given much attention by leading economists for about 10 years or so.
After around 1960, the ties to economics were severed. Linear program
ming disappeared from the economics curriculum and lost the attention of
academic economists. It belonged from then on to operations research and
management science on one hand and to computer science on the other.
It is hardly possible to answer to give an exact date for when “LP”
was born or ﬁrst appeared. It originated at around the same time as game
theory with which it shares some features, and also at the time of some
applied problems such as the transportation problem and the diet problem.
Linear programming has various roots and forerunners in economics and
mathematics in attempts to deal with economic or other problems using
linear mathematical techniques. One such forerunner, but only slightly, was
Wassily Leontief’s inputoutput analysis. Leontief developed his “closed
model” in the 1930s in an attempt to give empirical content to Walrasian
general equilibrium. Leontief’s equilibrium approach was transformed in
his cooperation with the US Bureau of Labor Statistics to the open input
output model, see Kohli (2001).
In the early postwar years, LP and inputoutput analysis seemed within
economics to be two sides of the same coin. The two terms were, for
a short term, even used interchangeably. The origin of LP could be set
to 1947, which, as already mentioned, was when Dantzig developed the
Simplex algorithm. If we state the question as to when “LP” was ﬁrst
used in its current meaning in the title of a paper presented at a sci
entiﬁc meeting, the answer is to the best of my knowledge 1948. At
the Econometric Society meeting in Cleveland at the end of December
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 7
1948, Leonid Hurwicz presented a paper on LP and the theory of opti
mal behavior with the ﬁrst paragraph providing a concise deﬁnition for
economists:
“The term linear programming is used to denote a problem of a type
familiar to economists: maximization (or minimization) under restric
tions. What distinguishes linear programming from other problems in
this class is that both the function to be maximized and the restrictions
(equalities or inequalities) are linear in the variables.” (Hurwicz, 1949).
Hurwicz’s paper appeared in a session on LP, which must have been
the ﬁrst ever with that title. One of the other two papers in the session was
by Wood and Dantzig and discussed the problem of selecting the “best”
method of accomplishing any given set of objectives within given restriction
as a LP problem, using an airlift operation as an illustrative example. The
general model was presented as an “elaboration of Leontief’s inputoutput
model.”
At an earlier meeting of the Econometric Society in 1948, Dantzig had
presented the general idea of LPin a symposiumon game theory. Koopmans
had, at the same meeting, presented his general activity analysis produc
tion model. The conference papers of both Wood and Dantzig appeared in
Econometrica in 1949. Dantzig (1949) mentioned as examples of problems
for which the new technique could be used, Stigler (1945), known in the
literature as having introduced the “diet problem,” and a paper by Koop
mans on the “transportation problem,” presented at an Econometric Society
meeting in 1947 (Koopmans, 1948).
Dantzig(1949) statedthe LPproblem, not yet instandardformat, but the
solutiontechnique was not discussed. The paper referrednot onlyLeontief’s
inputoutput model but alsotoJohnvonNeumann’s workoneconomic equi
librium growth, i.e., to recent papers ﬁrmly within the realm of economics.
Wood and Dantzig (1949) in the same issue stated an airlift problem.
In 1949, Cowles Commission and RAND jointly arranged a confer
ence on LP. The conference meant a breakthrough for the new optimiza
tion technique and had prominent participation. At the conference were
economists, Tjalling Koopmans, Paul Samuelson, Kenneth Arrow, Leonid
Hurwicz, Robert Dorfman, Abba Lerner and Nicholas GeorgescuRoegen,
mathematicians, Al Tucker, Harold Kuhn and David Gale, and several mil
itary researchers including Dantzig.
Dantzig had discovered the Simplex method but admitted many years
later that he had not really realized how important this discovery was. Few
people had a proper overviewof linear models to place the newdiscovery in
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
8 Olav Bjerkholt
context, but one of the few was John von Neumann, at the time an authority
on a wide range of problems form nuclear physics to the development
of computers. Dantzig decided to consult him about his work on solution
techniques for the LP problem:
“I decided to consult with the “great” Johnny von Neumann to see what
he could suggest in the way of solution techniques. He was considered
by many as the leading mathematician in the world. On October 3, 1947
I visited him for the ﬁrst time at the Institute for Advanced Study at
Princeton. I remember trying to describe to von Neumann, as I would to
an ordinary mortal, the Air Force problem. I began with the formulation
of the linear programming model in terms of activities and items, etc. Von
Neumann did something, which I believe was uncharacteristic of him.
“Get to the point,” he said impatiently. Having at times a somewhat low
kindling point, I said to myself “O.K., if he wants a quicky, then that’s what
he’ll get.” In under one minute I slapped the geometric and the algebraic
version of the problem on the blackboard. Von Neumann stood up and
said “Oh that!” Then for the next hour and a half, he proceeded to give
me a lecture on the mathematical theory of linear programs.” (Dantzig,
1984).
Von Neumann could immediately recognize the core issue as he sawthe
similarity with the game theory. The meeting became the ﬁrst time Dantzig
heard about duality and Farkas’ lemma.
One could underline how LP once seemed ﬁrmly anchored in eco
nomics by pointing to the number of Nobel Laureates in economics who
undertook studies involving LP. These comprise Samuelson and Solow, co
authors with Dorfman of Dorfman, Samuelson and Solow (1958), Leontief
who applied LP in his inputoutput models, and Koopmans and Stigler,
originators of the transportation problemand the diet problem, respectively.
One also has to include Frisch, as we shall look at in more detail below,
Arrow, coauthor of Arrow et al. (1958), Kantorovich, who is known to
have formulated the LP problem already in 1939, Modigliani, and Simon,
who both were coauthors of Holt et al. (1956). Finally, to be included
are also all the three Nobel Laureates in economics for 1990: Markowitz
et al. (1957), Charnes et al. (1959) and Sharpe (1967). Koopmans and
Kantorovich shared the Nobel Prize in economics for 1975 for their work
in LP. John von Neumann died long before the Nobel Prize in economics
was established, and George Dantzig never got the prize he ought to have
shared with Koopmans and Kantorovich in 1975 for reasons that are not
easy to understand.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 9
Among all these Nobel Laureates, it seems that Ragnar Frisch had the
deepest involvement with LP. But he published poorly and his achievements
went largely unrecognized.
3. Frisch and Programming, PreWar Attempts
Ragnar Frisch had a strong mathematical background and a passion for
numerical analysis. He had been a cofounder of the Econometric Society
in 1930 and to him econometrics meant both the formulation of economic
theory in a precise way by means of mathematics and the development
of methods for confronting theory with empirical data. He wrote both
of these aims into the constitution of the Econometric Society as ren
dered for many years in every issue of Econometrica. He became editor of
Econometrica fromits ﬁrst issue in 1933 and held this position for more than
20 years.
His impressive work in several ﬁelds of econometrics must be left
uncommented here. Frisch pioneered the use of matrix algebra in econo
metrics in 1929 and had introduced many other innovations in the use of
mathematics for economic analysis.
Frisch’s most active and creative as a mathematical economist coin
cided with the Great Depression, which he observed at close range both
in the United States and in Norway. In 1934, he published an article
in Econometrica (Frisch, 1934a) where he discussed the organization of
national exchange organization that could take over the need for trade
when markets had collapsed due to the depression, see Bjerkholt and Knell
(2006). In many ways, the article may seem na¨ıve with regard to the sub
ject matter, but it had a very sophisticated mathematical underpinning.
The article was 93 pages long, the longest regular article ever published
in Econometrica. Frisch had also published the article without letting it
undergo proper referee process. Perhaps it was the urgency of getting it
circulated that caused him to do this mistake for which he was rebuked
by some of his fellow econometricians. The article was written as literally
squeezed in between Frisch two most famous econometric contributions
in the 1930s, his business cycle theory (Frisch, 1933) and his conﬂuence
theory (Frisch, 1934b).
Frisch (1934a) developed a linear structure representing inputs for pro
duction in different industries, rather similar to the inputoutput model
of Leontief, although the underlying idea and motivation was quite dif
ferent. Frisch then addressed the question of how the input needs could
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
10 Olav Bjerkholt
be modiﬁed in the least costly way in order to achieve a higher total
production capacity of the entire economy. The variables in his prob
lem (x
i
) represented (relative) deviations from observed input coefﬁcients.
He formulated a cost function as a quadratic function of these devia
tions. His reasoning thus led to a quadratic target function to be mini
mized subject to a set of linear constraints. The problem was stated as
follows:
min =
1
2
x
2
1
ε
1
+
x
2
2
ε
2
+ · · · +
x
2
n
ε
n
wrt. x
k
k = 1, 2, . . . , n
subject to:
k
f
ik
x
k
≥ C
∗
−
a
i0
P
i
i = 1, . . . , n
where C
∗
is the set target of overall production and the constraint the con
dition that the deviation allowed this target value to be achieved.
By solving for alternative values of C
∗
, the function = (C
∗
),
representing the cost of achieving the production level C
∗
could be mapped.
Frisch naturally did not possess an algorithm for this problem. This did
not prevent him from working out a numerical solution of an illuminating
example and making various observations on the nature of such optimising
problems. His solution of the problem for a given example with n = 3,
determining as a function of C
∗
ran over 12 pages in Econometrica, see
Bjerkholt (2006).
Another problem Frisch worked on before the war with connections
to LP was the diet problem. One of Frisch’s students worked on a dis
sertation on health and nutrition, drawing on various investigations of the
nutritional value of alternative diets. Frisch supervised the dissertation in
the years immediately before World War II when it led him to formulate
the diet problem, years before Stigler (1945). Frisch’s work was published
as the introduction to his student’s monograph (Frisch, 1941). In a foot
note, he gave a somewhat rudimentary LP formulation of the diet problem,
cf. Sandmo (1993).
4. Frisch and Linear Programming
Frisch was not present at the Econometric Society meetings in 1948 men
tioned above; neither did he participate in the Cowles CommissionRAND
conference in 1949. He could still survey and follow these developments
rather closely. He was still the editor of Econometrica and thus had accepted
and surely studied the two papers by Dantzig in 1949. He had contact with
Cowles Commission in Chicago, its two successive research directors in
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 11
these years, Jacob Marschak and Tjalling Koopmans, and other leading
members of the Econometric Society. Frisch visited in the early postwar
years frequently in the United States, primarily due to his position as chair
man of the UN SubCommission for Employment and Economic Stability.
The new developments over the whole range of related areas of linear tech
niques had surely caught his interest.
He ﬁrst touched upon LP in lectures to his students at the University of
Oslo in September 1950. Two students drafted lecture notes that Frisch cor
roborated and approved. In these early lectures Frisch used “inputoutput
analysis” and “LP” as almost synonymous concepts. The content of these
early lectures was inputoutput analysis with emphasis on optimisation by
means of LP, e.g., optimisation under a given import constraint or given
labor supply, they did not really address LP techniques as a separate issue.
Still these lectures may well have been among the ﬁrst applying LP to
the macroeconomic problems of the day in Western Europe, years before
Dorfman et al. (1958) popularised this tool for economists’ beneﬁt. One
major reason for Frisch pioneering efforts here was, of course, that the
new ideas touched or even overlapped with his own prewar attempts as
discussed above. In the very ﬁrst postwar years, he had indeed applied
the ideas of Frisch (1934a) to the problem of achieving the largest possi
ble multilateral balanced trade in the highly constrained world economy
(Frisch, 1947; 1948).
The ﬁrst trace of LP as a topic in its own right on Frisch’s agenda
was a single lecture he gave to his students on February 15th 1952. Leif
Johansen, who was Frisch’s assistant at the time, took notes that were issued
in the Memorandum series two days later (Johansen, 1952). The lecture
gave a brief mathematical statement of the LP problem and then as an
important example discussed the problem related to producing different
qualities of airplane fuel from crude oil derivates, with reference to an
article by Charnes et al. in Econometrica. Charnes et al. (1952) is, indeed,
a famous contribution in the LP literature as the ﬁrst published paper on
an industrial application of LP. A special thing about the Frisch’s use of
this example was, however, that his lecture did not refer the students to
an article that had already appeared in Econometrica, but to one that was
forthcoming! Frisch had apparently been so eager to present these ideas
that he had simply used (or perhaps misused) his editorial prerogative in
lecturing and issuing lecture notes on an article not yet published!
After the singular lecture on LP in February 1952, Frisch did not lecture
on this topic till the autumn term in 1953, as part of another lecture series
on inputoutput analysis. In these lectures, he went by way of introduction
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
12 Olav Bjerkholt
through a number of concrete examples of problem that were amenable to
be solved by LP, most of which have already been mentioned above. When
he came to the diet problem, it was not with reference to Stigler (1945), but
to Frisch (1941). He exempliﬁed, not least, with reconstruction in Norway,
the building of dwellings under the tight postwar constraints and various
macroeconomic problems.
In his brief introduction to LP, Frisch in a splendid pedagogical way
used examples, amply illustrated with illuminating ﬁgures, and appealed to
geometric intuition. He started out with cost minimization of a potato and
herring diet with minimumrequirements of fat, proteins and vitamin B, and
then moved to macroeconomic problems within an inputoutput framework,
somewhat similar in tenor to Dorfman (1958), still in the ofﬁng.
Frisch conﬁned, however, his discussion to the formulation of the LP
problemand the characteristics of the solution, he did not enter into solution
techniques. But clearly he had already put considerable work into this.
Because after mentioning Dantzig’s Simplex method, referring to Dantzig
(1951), Dorfman (1951) and Charnes et al. (1953), he added that there
were two other methods, the logarithm potential method and the basis
variable method, both developed at the Institute of Economics. He added
the remark that when the number of constraints was moderate and thus the
number of degrees of freedom large, the Simplex method might well be
the most efﬁcient one. But in other cases, the alternative methods would be
more efﬁcient. Needless to say, there were no other sources, which referred
to these methods at this time. Frisch’s research work on LP methods and
alternatives to the Simplex method, can be followed almost step by step
thanks to his working style of incorporating new results in the Institute
Memorandum series. Between the middle of October 1953 and May 1954,
he issued10memoranda. Theywere all inNorwegian, but Frischwas clearly
preparing for presenting his work. As were to be expected Frisch’s work on
LP was sprinkled with newterms, some of themadapted fromearlier works.
Some of the memoranda comprised comprehensive numerical experiments.
The memoranda were the following:
18 October 1953 Logaritmepotensialmetoden til løsning av lineære
programmeringsproblemer [The logarithm potential
method for solving LP problems], 11 pages.
7 November 1953 Litt analytisk geometri i ﬂere dimensjoner [Some
multidimensional analytic geometry], 11 pages.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 13
(Continued )
13 November 1953 Substitumalutforming av logaritmepotensialmetoden til
løsning av lineære programmeringsproblemer
[Substitumal formulation of the logarithmic potential
method for solving LP problems], 3 pages.
14 January 1954 Notater i forbindelse med logaritmepotensialmetoden til
løsning av lineære programmeringsproblemer [Notes
in connection with the logarithmic potential method
for solving LP problems], 48 pages.
7 March 1954 Finalhopp og substitumalfølging ved lineær
programmering [Final jumps and moving along the
substitumal in LP], 8 pages.
29 March 1954 Trunkering som forberedelse til ﬁnalhopp ved lineær
programmering [Truncation as preparation for a ﬁnal
jump in LP], 6 pages.
27 April 1954 Et 22variables eksempel p˚ a anvendelse av
logaritmepotensialmetoden til løsning av lineære
programmeringsproblemer [A 22 variable example in
application of the logarithmic potential method for the
solution of LP problems].
1 May 1954 Basisvariabelmetoden til løsning av lineære
programmeringsproblemer [The basisvariable
method for solving LP problems], 21 pages.
5 May 1954 Simplexmetoden til løsning av lineære
programmeringsproblemer [The Simplex method for
solving LP problems], 14 pages.
7 May 1954 Generelle merknader om løsningsstrukturen ved det
lineære programmeringsproblem [Generfal remarks
on the solution structure of the LP problem],
12 pages.
At this stage, Frisch was ready to present his ideas internationally. This
happened ﬁrst at a conference in Varenna in July 1954, subsequently issued
as a memorandum. During the summer, Frisch added another couple of
memoranda. Then he went to India, invited by Pandit Nehru, and spent the
entire academic year 1954–55 at the Indian Statistical Institute, directed
by P. C. Mahalanobis, to take part in the preparation of the new ﬁveyear
plan for India. While he was away, he sent home the MSS for three more
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
14 Olav Bjerkholt
memoranda. These six papers were the following:
June 21st 1954 Methods of solving LP problems, Synopsis of lecture to be
given at the International Seminar on InputOutput
Analysis, Varenna (Lake Como), June–July 1954,
91 pages.
August 13th 1954 Nye notater i forbindelse med logaritmepotensialmetoden
til løsning av lineære programmeringsproblemer [New
notes in connection with the logarithmic potential
method for solving LP problems], 74 pages.
August 23rd 1954 Merknad om formuleringen av det lineære
programmeringsproblem [A remark on the formulation of
the LP problem], 3 pages.
October 18th 1954 Principles of LP. With particular reference to the double
gradient form of the logarithmic potential method,
219 pages.
March 29th 1955 A labour saving method of performing freedom truncations
in international trade. Part I, 21 pages.
April 15th 1955 A labour saving method of performing freedom truncations
in international trade. Part II, 8 pages.
Before he left for India he had already accepted an invitation to present
his ideas on programming in Paris. This he did on three different occasions
in May–June 1955. He presented in French, but issued synopsis in English
in the memorandumseries. Thus, his Paris presentation were the following:
7th May 1955 The logarithmic potential method for solving linear
programming problems, 16 pp. Synopsis of an exposition to
be made on 1 June 1955 in the seminar of Professor Ren´ e
Roy, Paris (published as Frisch, 1956b).
13 May 1955 The logarithmic potential method of convex programming. With
particular application to the dynamics of planning for
national developments, 35 pp. Synopsis of a presentation to
be made at the international colloquium in Paris, 23–28 May
1955 (published as Frisch, 1956c).
25 May 1955 Linear and convex programming problems studied by means of
the double gradient form of the logarithmic potential method,
16pp. Synopsis of a presentation to be given in the seminar of
Professor Allais, Paris, 26 May 1955.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 15
After this, Frisch published some additional memoranda on LP. He
launched a new method, named the Multiplex method, towards the end of
1955, later published in a long article in Sankhya (Frisch, 1955; 1957).
Worth mentioning is also his contribution to the festschrift for Erik Lindahl
in which he discussed the logarithmic potential method and linked it to the
general problem of macroeconomic planning (Frisch, 1956a; 1956d).
From this time, Frisch’s efforts subsided with regard to pure LP prob
lems, as he concentrated on nonlinear and more complex optimisation,
drawing naturally on the methods he had developed for LP.
As Frisch managed to publish his results only to a limited degree and
not very prominently there are few traces of Frisch in the international
literature. Dantzig (1963) has, however, references to Frisch’s work. Frisch
also corresponded with Dantzig, Charnes, von Neumann and others about
LP methods. Extracts from such correspondence are quoted in some of the
memoranda.
5. Frisch’s Radar: An Anticipation of Karmarkar’s Result
In 1972, severe doubts were created about the ability of the Simplex method
to solve even larger problems. Klee and Minty (1972), whom we must
assume knew very well how the Simplex algorithm worked, constructed
a LP problem, which when attacked by the Simplex method turned out
to need a number of elementary operations which grew exponentially in
the number of variables in the problem, i.e., the algorithm had exponential
complexity. Exponential complexity in the algorithmis for obvious reasons
a rather ruining property for attacking large problems.
In practice, however, the Simplex method did not seemto confront such
problems, but the demonstration of Klee and Minty (1972) showed a worst
case complexity, implying that it could never be proved what had been
largely assumed, namely that the Simplex method only had polynomial
complexity.
A response to Klee and Minty’s result came in 1978 by the Russian
mathematician Khachiyan 1978 (published in English in 1979 and 1980).
He used a socalled “ellipsoidal” method for solving the LP problem, a
methoddevelopedinthe 1970s for solvingconvexoptimisationproblems by
the Russian mathematicians Shor and Yudin and Nemirovskii. Khachiyan
managed to prove that this method could indeed solve LP problems and
had polynomial complexity as the time needed to reach a solution increased
with the forth power of the size of the problem. But the method itself was
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
16 Olav Bjerkholt
hopelessly ineffective compared to the Simplex method and was in practice
never used for LP problems.
A few years later, Karmarkar presented his algorithm (Karmarkar,
1984a; b). It has polynomial complexity, not only of a lower degree than
Khachiyan’s method, but is asserted to be more effective than the Simplex
method. The news about Karmarkar’s pathbreaking result became front
page news in the NewYork Times. For a while, there was some confusion as
to whether Karmarkar’s method really was more effective than the Simplex
method, partly because Karmarkar had used a somewhat special formula
tion of the LP problem. But it was soon conﬁrmed that for sufﬁcently large
problems, the Simplex method was less effcient than Karmarkar’s result.
Karmarkar’s contribution initiated a hectic new development towards even
better methods.
Karmarkar’s approach may be said to be to consider the LP just as
another convex optimalization problem rather that exploiting the fact that
the solution must be found in a corner by searching only corner points. One
of those who has improved Karmarkar’s method, Clovis C. Gonzaga, and
for a while was in the lead with regard to possessing the nest method wrt.
polynomial complexity, said the following about Karmarkar’s approach:
“Karmarkar’s algorithm performs well by avoiding the boundary of the
feasible set. And it does this with the help of a classical resource, ﬁrst
used in optimization by Frisch in 1955: the logarithmic barrier function:
x ∈ R
n
, x > 0 → p(x) = −
log x
i
.
This function grows indeﬁnitely near the boundary of the feasible set S,
and can be used as a penalty attached to these points. Combining p(·)
with the objective makes points near the boundary expensive, and forces
any minimization algorithm to avoid them.” (Gonzaga, 1992)
Gonzaga’s reference to Frisch was surprising; it was to the not very
accessible memorandumversioninEnglishof one of the three Paris papers.
1
Frisch had indeed introduced the logarithmic barrier function in his log
arithmic potential method. Frisch’s approach was summarized by Koenker
as follows:
“The basic idea of Frisch (1956b) was to replace the linear inequality
constraints of the LP, by what he called a log barrier, or potential function.
Thus, in place of the canonical linear program
1
In Gonzaga’s reference, the author’s name was given as K.R. Frisch, which can also be
found in other rerences to Frisch in recent literature, suggesting that these refrences had a
common source.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 17
(1) min{c
xAx = b, x ≥ 0}
we may associate the logarithmic barrier reformulation
(2) min{B(x, µ)Ax = b}
where
(3) B(x, µ) = c
x − µ
log x
k
In effect, (2) replaces the inequality constraints in (1) by the penalty term
of the log barrier. Solving (2) with a sequence of parameters µ such that
µ → 0 we obtain in the limit a solution to the original problem (1).”
(Koenker, 2000, p. 20)
Frisch had not provided exact proofs and he had above all not published
properly. But he had the exact same idea as Karmarkar came up with almost
30 years later. Why did he not make his results better known? In fact there
are other, even more important cases, in Frisch’s work of not publishing. The
reasons for this might be that his investigations had not yet come to an end.
In the programming work Frisch pushed on to more complex programming
problems, spurred by the possibility of using ﬁrst and secondgeneration
computers. They might have seemed powerful to him at that time, but they
hardly had the capacity to match Frisch’s ambitions. Another reason for
his results remaining largely unknown, was perhaps that he was too much
ahead. The Simplex method had not really been challenged yet, by large
enough problems to necessitate better method. The problemmay have seen,
not as much as a question of efﬁcient algorithms as that of powerful enough
computers.
We ﬁnish off with Frisch’s nice illustrative description of his method in
the only publication he got properly published (but unfortunately not very
well distributed) on the logarithmic potential method:
“Ma m´ ethode d’approche est d’une esp` ece toute diff´ erente. Dans cette
m´ ethode nous travaillons syst´ ematiquement ` a l’int´ erieur de la r´ egion
admissible et utilisons un potentiel logarithmique comme un guide–
une sorte du radar–pour nous ´ eviter de traverser la limite.” (Frisch,
1956b, p. 13)
The corresponding quote in the memorandum synoptic note is:
“My method of approach is of an entirely different sort [than the Simplex
method]. In this method we work systematically in the interior of the
admissible region and use a logarithmic potential as a guiding device — a
sort of radar — to prevent us from hitting the boundary.” (Memorandum
of 7 May 1955, p. 8)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
18 Olav Bjerkholt
References
Arrow, KJ, LHurwicz andHUzawa (1958). Studies inLinear andNonLinear Programming.
Stanford, CA: Stanford University Press.
Bjerkholt, O and M Knell (2006). Ragnar Frisch and inputoutput analysis. Economic
Systems Research 18, (forthcoming).
Charnes, A, WWCooper andAHenderson (1953). An Introduction to Linear Programming.
NewYork and London: John Wiley & Sons.
Charnes, A, WW Cooper and B Mellon (1952). Bledning aviation gasolines — a study in
programming interdependent activities in an integrated oil company, Econometrica 20,
135–159.
Charnes, A, WW Cooper and MW Miller (1959). An application of linear programming to
ﬁnancial budgeting and the costing of funds. The Journal of Business 20, 20–46.
Dantzig, GB (1949). Programming of interdependent activities: II mathematical model.
Econometrica 17, 200–211.
Dantzig, GB(1951). Maximization of a linear function of variables subject to linear inequal
ities. In Activity Analysis of Production and Allocation, T Koopmans (ed.), NewYork
and London: John Wiley & Sons, pp. 339–347.
Dantzig, GB (1963). Linear Programming and Extensions. Princeton, NJ: Princeton
University Press.
Dantzig, GB (1984). Reminiscences about the origin of linear programming. In
Mathematical Programming, RW Cottle, ML Kelmanson and B Korte (eds.),
Amsterdam: Elsevier (NorthHolland), pp. 217–226.
Dorfman, R (1951). An Application of Linear Programming to the Theory of the Firm.
Berkeley: University of California Press.
Dorfman, R, PA Samuelson and RM Solow, (1958). Linear Programming and Economic
Analysis. NewYork: McGrawHill.
Frisch, R (1933). Propagation problems and impulse problems in economics. In
Economic Essays in Honor of Gustav Cassel, R Frisch (ed.), London: Allen & Unwin,
pp. 171–205.
Frisch, R(1934a). Circulation planning: proposal for a national organization of a commodity
and service exchange. Econometrica 2, 258–336 and 422–435.
Frisch, R (1934b). Statistical Conﬂuence Analysis by Means of Complete Regression
Systems. Publikasjon nr 5, Oslo: Institute of Economics.
Frisch, R (1941). Innledning. In Kosthold og levestandard. En økonomisk undersøkelse,
K Getz Wold (ed.), Oslo: Fabritius og Sønners Forlag, pp. 1–23.
Frisch, R (1947). On the need for forecasting a multilateral balance of payments. The Amer
ican Economic Review 37(1), 535–551.
Frisch, R (1948). The problem of multicompensatory trade. Outline of a system of multi
compensatory trade. The Review of Economics and Statistics 30, 265–271.
Frisch, R (1955). The multiplex method for linear programming. Memorandum from Insti
tute of Economics, Oslo: University of Oslo (17 October 1955).
Frisch, R (1956a). Macroeconomics and linear programming. Memorandum from Institute
of Economics, Oslo: University of Oslo, (10 January 1956). (Also published shortened
and simpliﬁed as Frisch (1956d)).
Frisch, R (1956b). La r´ esolution des probl` emes de programme lin´ eaire par la m´ ethode du
potential logarithmique. In Cahiers du S´ eminaire d’Econometrie: No 4 — Programme
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
Some Unresolved Problems 19
Lin´ eaire — Agr´ egation et Nombre Indices, pp. 7–20. (The publication also gives an
extract of the oral discussion in Ren´ e Roy’s seminar, pp. 20–23.)
Frisch, R (1956c). Programme convexe et planiﬁcation pour le d´ eveloppement national.
Colloques Internationaux du Centre National de la Recherche Scientiﬁque LXII. Les
mod` eles dynamiques. Conf´ erence ` a Paris, mai 1955, 47–80.
Frisch, R (1956d). Macroeconomics and linear programming. In 25 Economic Essays. In
honour of Erik Lindahl 21 November 1956, R Frisch (ed.), pp. 38–67. Stockholm:
Ekonomisk Tidskrift, (Slightly simpliﬁed version of Frisch (1956a)).
Frisch, R (1957). The multiplex method for linear programming. Sankhy´ a: The Indian
Journal of Statistics 18, 329–362 [Practically identical to Frisch (1955)]
Gonzaga, CC (1992). Pathfollowing methods for linear programming. SIAM Review 34,
167–224.
Holt, CC, F Modigliani, JF Muth and HA Simon (1956). Derivation of a linear decision rule
for production and employment. Management Science 2, 159–177.
Hurwicz, L (1949). Linear programming and general theory of optimal behaviour (abstract).
Econometrica 17, 161–162.
Johansen, L (1952). Lineær programming. (Notes from Professor Ragnar Frisch’s lecture
15 February 1952), pp. 15. Memorandum from Institute of Economics, University of
Oslo, 17 February 1952, 15pp.
Karmarkar, NK (1984). A new polynomialtime algorithm for linear programming. Combi
natorica, 4, 373–395.
Klee, V and G Minty, (1972). How good is the simplex algorithm. In Inequalities III, O
Sisha (ed.), pp. 159–175. NewYork, NY: Academic Press.
Koenker, R (2000). Galton, Edgeworth, Frisch, and prospects for quantile regression in
econometrics. Journal of Econometrics, 95(2), 347–374.
Kohli, MC (2001). Leontief and the bureau of labor statistics, 1941–54: developing a
framework for measurement. In The Age of Economic Measurement, JL Klein and
MS Morgan (eds.), pp. 190–212. Durham and London: Duke University Press.
Koopmans, T (1948). Optimum utilization of the transportation system (abstract).
Econometrica 16, 66.
Koopmans, TC (ed.) (1951). Activity Analysis of Production and Allocation. Cowles
Commission Monographs No 13. NewYork: John Wiley & Sons.
Markowitz, H (1957). The elimination form of the inverse and its application in linear
programming. Management Science 3, 255–269.
Orden, A (1993). LP from the ’40s to the ’90s. Interfaces 23, 2–12.
Sandmo, A (1993). Ragnar Frisch on the optimal diet. History of Political Economy 25,
313–327.
Sharpe, W (1967). A linear programming algorithm for mutual fund portfolio selection.
Management Science 13, 499–511.
Stigler, G (1945). The cost of subsistence. Journal of Farm Economics 27, 303–314.
Wood, MK and GB Dantzig (1949). Programming of interdependent activities: I general
discussion. Econometrica 17, 193–199.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch01topic1
This page intentionally left blank This page intentionally left blank
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
Chapter 2
Methods of Modeling: Econometrics
and Adaptive Control System
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
This page intentionally left blank This page intentionally left blank
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
Topic 1
A Novel Method of Estimation
Under CoIntegration
Alexis Lazaridis
Aristotle University of Thessaloniki, Greece
1. Introduction
We start with the unrestricted VAR(p):
x
i
=δδδ +µµµt
i
+
p
j=1
A
j
x
i−j
+zd
i
+w
i
(1)
where x ∈ E
n
(here, E denotes the Euclidean space), t
i
and d
i
are the
trend and dummy, respectively and w is the noise vector. The matrices of
coefﬁcients A
j
, are deﬁned on E
n
×E
n
. After estimation, we may compute
matrix from:
=
_
_
p
j=1
A
j
_
_
−I (1a)
or, directly, fromthe corresponding Error CorrectionVAR(ECVAR), which
is presented later. Also, an estimate of matrixcan be obtained by applying
OLS, according to the procedure suggested by Kleibergen and Van Dijk
(1994). It is recalled that this type of VAR models like the one presented
above, have been recommended by Sims (1980), as a way to estimate the
dynamic relationships among jointly endogenous variables.
It is known that matrix can be decomposed such that = AC,
where the elements of A are the coefﬁcients of adjustment and the rows of
the matrix C are the (possible) cointegrating vectors. It should be clariﬁed
at the very outset, that in many books etc., C is denoted by βββ
(prime shows
transposition) and A by ααα, although capital letters are used for denoting
matrices. Besides, in the general linear model, y = Xβββ +u, βββ is the vector
of coefﬁcients. To avoid any confusion, we adopted the notation used here.
23
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
24 Alexis Lazaridis
A straightforward approach to obtain matrices A and C, provided that
some rank conditions of are satisﬁed, is to solve the eigenproblem(John
ston and Di Nardo, 1997, pp. 287–296).
= VV
−1
(2)
where is the diagonal matrix of eigenvalues and V is the matrix of the
corresponding eigenvectors. Note that since is not symmetric, V = V
−1
.
Eq. (2) holds if is (n ×n) and all its eigenvalues are real, which implies
that the eigenvectors are real too. In cases that is deﬁned on E
n
× E
m
,
with m > n, Eq. (2) is not applicable. If these necessary conditions hold,
then from Eq. (2) follows that matrix A = V and C = V
−1
. Further, the
cointegrating vectors are normalized so that —usually —the ﬁrst element
to be equal to 1. In other words, if c
i
1
(=0), denotes the ﬁrst element
1
of
the ith row of matrix C, i.e.,
2
c
i.
and a
j
the jth column of matrix A, then
the normalization process can be summarized as c
i.
/c
i
1
and a
i
× c
i
1
, for
i = 1, . . ., n.
It is known that according to the maximum likelihood (ML) method,
matrix C can be obtained by solving the constrained problem:
{min det(I −C
ˆ
k0
ˆ
−1
00
ˆ
0k
C
)  C
ˆ
kk
C
= I} (3)
where matrices
ˆ
00
,
ˆ
0k
,
ˆ
kk
and
ˆ
k0
are residual covariance matrices of
speciﬁc regressions (Harris, 1995, p. 78; Johansen Juselius, 1990). These
matrices appear in the last term of the concentrated likelihood of
3
Eq. (5)
together with C, where is replaced by AC.
Note that the minimization of Eq. (3) is with respect to the elements
of matrix C. It is recalled that Eq. (3) is minimized by solving a general
eigenvalue problem, i.e., ﬁnding the eigenvalues from:
λλλ
ˆ
kk
−
ˆ
k0
ˆ
−1
00
ˆ
0k
 = 0. (4)
Applying Cholesky’s factorization on the positive deﬁnite symmetric
matrix
ˆ
−1
kk
, such that
ˆ
−1
kk
= LL
⇒
ˆ
kk
= (L
)
−1
L
−1
, where L is lower
1
In some computer programs, c
i
i
is taken to be the ith element of the ith row.
2
Note that dot is necessary to distinguish the jth row of C, i.e., c
i.
, from the transposed of
the jth column of this matrix (i.e., c
i
).
3
Equation (5) is presented below.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 25
triangular, Eq. (4) takes the conventional form
4
:
λI −L
ˆ
k0
ˆ
−1
00
ˆ
0k
L = 0. (4a)
After obtaining matrix V = V
of the eigenvectors, we can easily
compute the (unormalized) C from C = V
L
. Note also that C
ˆ
kk
C
=
V
L
(L
)
−1
L
−1
LV = I. Finally, matrix A is obtained from A =
ˆ
0k
C
.
Hence, the initial problem reduces to ﬁnding one of the eigen values and
eigenvectors of a symmetric matrix, which are real. And this is the main
advantage of the MLmethod. According to this procedure, a constant and/or
trend can be included in the cointegrating vectors, but it is not clear of how
to accommodate additional deterministic factors, such as one or more dum
mies. Besides, nothing is said about the variance of the (disequilibrium)
errors, which correspond to the ﬁnally adopted cointegration vector.
We propose in the ﬁrst part of this chapter, a comparatively simple
method, to directly obtain cointegrating vectors, which may include, apart
from the standard deterministic components, any number of dummies, in
cases that this is necessary. Additionally, as we will see in what follows, the
errors corresponding to the proper cointegration vector have the property
of minimum variance.
2. Some Estimation Remarks
It was mentioned that after estimation, matrix can be computed from
Eq. (1a). It is obvious that each equation of the VAR can be estimated by
OLS. At this stage, one should be aware to comply with the basic assump
tions regarding the error terms, in the sense that proper tests must ensure
that the noises are white. This way, the order p of the model is deﬁned
imposing at the same time restrictions — if necessary — on some elements
of A’s and vectors µµµ and z to be zero.
It can be easily shown that Eq. (1) may be expressed in the form of an
equivalent ECVAR, i.e.,
x
i
=δδδ +µµµt
i
+
p−1
j=1
Q
j
x
i−j
+x
i−p
+zd
i
+w
i
(5)
4
It should be noted that it is also necessary to premultiply the elements of Eq. (4) by L
and postmultiply them by L.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
26 Alexis Lazaridis
where
Q
j
=
_
_
j
k=1
A
k
−I
_
_
.
3. The Singular Value Decomposition
Matrix in Eq. (5) may be augmented to accommodate, as an additional
column, the vector δδδ, µµµ or z. It is also possible to accommodate all these
vectors in the following way:
˜
=
_
.
.
.δδδµµµz
_
(6)
In this case,
˜
is deﬁned on E
n
×E
m
, with m > n.
For conformability, the vector x
i−p
in Eq. (5) should be augmented
accordingly by using the linear advance operator L(such that L
k
y
i
= y
i+k
),
i.e.,
˜ x
i−p
=
_
_
_
_
_
x
i−p
1
L
p
t
i−p
L
p
d
i−p
_
¸
¸
¸
_
.
Hence, Eq. (5) takes the form:
x
i
=
p−1
j=1
Q
j
x
i−j
+
˜
˜
X
i−p
+w
i
. (7)
We can compute the unique generalized inverse (or pseudoinverse) of
˜
,
denoted by
˜
+
(Lazaridis and Basu, 1981), from:
˜
+
= VF
∗
U
(8)
where
Vis of dimension (m×m), and its columns are the orthonormal eigenvectors
of
˜
˜
.
U is (n ×m), and its columns are the orthonormal eigenvectors of
˜
˜
,
corresponding to the largest m eigenvalues of this matrix so that:
U
U = V
V = VV
= I
m
(9)
F is diagonal (m × m), with elements f
ii
f
i
, called the singular values
of
˜
, which are the nonnegative square roots of the eigenvalues of
˜
˜
.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 27
If the rank of
˜
is k, i.e.,
r(
˜
) = k ≤ n, then f
1
≥ f
2
≥ · · · ≥ f
k
> 0
and f
k+1
= f
k+2
= · · · = f
m
= 0
F
∗
is diagonal (m×m), and f
∗
ii
f
∗
i
= 1/f
i
.
It should be noted that all the above matrices are real. Also note that if
˜
has full row rank, then
˜
+
is the right inverse of
˜
, i.e.,
˜
˜
+
= I
n
Hence, the singular value decomposition (SVD) of
˜
is:
˜
= UFV
. (10)
4. Computing the CoIntegrating Vectors
We proceed to form a matrix F
1
of dimension (m×n) such that:
f
1(i,j)
=
_
0, if i = j
√
f
i
, if i = j
(11)
In accordance to Eq. (11), we next form a matrix F
2
of dimension (n×m).
It is veriﬁed that F
1
F
2
= F. Hence, Eq. (10) can be written as:
˜
= AC
where A = UF
1
of dimension (n×n) and C = F
2
V of dimension (n×m).
After normalization, we get the possible cointegrating vectors as the
rows of matrix C, whereas the elements of A can be viewed as approxima
tions to the corresponding coefﬁcients of adjustment.
It is apparent that the cointegrating vectors can always be computed,
given that r(
˜
) > 0. Needless to say that the same steps are followed if ,
instead of
˜
, is considered.
5. Case Study 1: Comparing CoIntegrating Vectors
We ﬁrst consider the VAR(2) model, presented by Holden and Peaman
(1994). The x ∼I(1) vector consists of the variables C (consumption), I
(income) and W (wealth), so that x ∈ E
3
. We used the same set of data
presented at the end of this book (Table D3, pp. 196–198). To simplify
the presentation, we did not consider the three dummies DD682, DD792,
and DD883. Hence, the VAR in this case has a form analogous to Eq. (1),
without trend and dummy components.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
28 Alexis Lazaridis
Estimating the equations of this VAR by OLS, we get the following
results.
=
_
_
_
0.059472 −0.073043 0.0072427
0.5486237 −0.5244858 −0.028041
0.3595042 −0.2938160 −0.039400
_
¸
_
, δδδ =
_
_
_
0.064873
0.16781451
−0.1554421
_
¸
_
.
(12)
Hat (ˆ) is omitted for simplicity. One possible cointegration vector with
intercept, obtained by applying the ML method is:
1 −0.9574698 −0.048531 +0.2912925 (13)
and corresponds to the longrun relationship:
C
i
= 0.9574698I
i
+0.048531W
i
−0.2912925.
For the vector in Eq. (13) to be a cointegration vector, the necessary and
sufﬁcient condition is that the (disequilibrium) errors ˆ u
i
computed from:
ˆ u
i
= C
i
−0.9574698I
i
−0.048531W
i
+0.2912925 (14)
to be stationary, i.e., {ˆ u
i
} ∼ I(0), since x ∼I(1).
For distinct purposes, these ML errors obtained from Eq. (14) will be
denoted by uml
i
.
Applying SVD on
˜
= [
.
.
.δ] from Eq. (12), we get matrix C which is:
C =
_
_
_
1 −0.9206225 −0.06545528 0.1110597
1 0.3586208 −0.6305243 −6.403011
1 1.283901 −2.050713 0.4300255
_
¸
_
Euclidean norm =
_
_
_
1.365344
6.521098
2.653064
_
_
_
.
In the column at the righthand side, the (Euclidean) norm of each row
of C is presented. It is noted also that the singular values of are:
f
1
= 0.8979247, f
2
= 0.2304381 and f
3
= 0.0083471695.
All f
i
are less than 1, indicating thus that at least one row of C can be
regarded as a possible cointegration vector, in the sense that the resulting
errors are likely to be stationary. This row usually has the smallest norm,
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 29
which in this case, is the ﬁrst one. Hence, the errors corresponding to this
cointegration vector are obtained from:
ˆ u
i
= C
i
−0.9206225I
i
−0.06545528W
i
+0.1110597. (15)
These errors will be denoted by usvd
i
. It should be pointed out here
that the smallest norm condition is just an indication as to which row of
matrix C to be considered ﬁrst.
It may be useful to recall at this point that the cointegration vector
reported by the authors (p. 111), which is obtained by applying OLS after
omitting the ﬁrst two observations from the initial sample is:
1 −0.9109 −0.0790 +0.1778.
It should be clariﬁed that this vector has been obtained by applying OLS
to estimate directly the longrun relationship. Repeating this procedure,
we get:
1 −0.910971 −0.079761 +0.178455.
Hence, the errors are computed from:
ˆ u
i
= C
i
−0.910971I
i
−0.079761W
i
+0.178455. (16)
These errors will be denoted by ubook
i
.
The next task is to compare the above three series of errors, having a
ﬁrst look at the corresponding graphs, which are shown in Fig. 1.
Although by a ﬁrst glance the three series seemto be stationary, we shall
go a bit further to see their properties in detail. Before going to the next
section, it is worth to mention here that for the three series, the normality
test using the JarqueBera criterion favors the null.
Figure 1. The series {uml
i
}, {usvd
i
}, and {ubook
i
}.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
30 Alexis Lazaridis
5.1. Testing for Cointegration
Since for the ﬁrst two cases,
ˆ u
i
= 0, we estimated the following regres
sion:
ˆ u
i
= b
1
+b
2
ˆ u
i−1
+
q
j=1
b
j+2
ˆ u
i−j
+ε
i
. (17)
The value of q was set such that the noises ε
i
to be white, as it will be
explained in detail later on. Regarding the last case, the intercept is omitted
from Eq. (17), since ˆ u
i
’s are OLS residuals, so that
ˆ u
i
= 0.
The estimation results are as follows:
u ˆ ml
i
= −0.00612 −0.381773
(0.193688)
uml
i−1
−0.281651uml
i−1
t = −3.682
uˆ svd
i
= 0.006525 −0.428278
(0.108269)
usvd
i−1
−0.259066usvd
i−1
t = −3.956
ubˆ ook
i
= −0.606759
(0.09522)
ubook
i−1
t = −6.37.
Since these error series are the results of speciﬁc calculations and in
the simplest case are the OLS residuals, it is not advisable (Harris, 1995,
pp. 54–55). to apply the DickeyFuller (DF/ADF) test, as we do with any
variable in the initial data set. We have to compute the t
u
statistic fromMcK
innon (1991, p. 267–276) critical values (see also Harris, 1995, Table A6,
p. 158). This statistic (t
u
=
∞
+
1
/T+
2
/T
2
) for three (independent)
variables in the longrun relationship with intercept is:
α = 0.01 α = 0.05 α = 0.10
t
u
= −4.45 t
u
= −3.83 t
u
= −3.52.
Hence, {book
i
} is stationary for α = (0.01, 0.05, 0.10), {usvd
i
} is sta
tionary for α = (0.05, 0.10) and {uml
i
} is stationary only for α = 0.10.
Recall that the null [ ˆ u
i
∼ I(1)] is rejected in favor of H
1
[ ˆ u
i
∼ I(0)], if
t < t
u
. Besides, if we compute the standard deviation of these three series
we will get:
Standard deviation of Standard deviation of Standard deviation of
{uml
i
} {usvd
i
} {book
i
}
0.017622 0.016612 0.0164
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 31
It is obvious from these ﬁndings, the super consistency of the OLS.
However, the method we introduce here, produces almost minimum vari
ance errors, which is a quite desirable property, as suggested by Engle and
Granger (Dickey et al., 1994, p. 27). Hence, we will mainly focus on this
property in our next section.
6. Case Study 2: Further Evidence Regarding the Minimum
Variance Property
With the same set of data, the cointegration vector without an intercept
obtained by applying the ML method is:
1 −0.9422994 −0.058564
The errors corresponding to this vector are obtained from:
ˆ u
i
= C
i
−0.9422994I
i
−0.058564W
i
(18)
Again these errors will be denoted by uml
i
.
Next, we apply SVD to in Eq. (12) to obtain matrix C, which is:
C =
_
_
_
1 −0.9192042 −0.066081211
1 1.160720 −1.012970
1 0.9395341 2.063768
_
¸
_
and
Euclidean norm =
_
_
_
1.359891
1.836676
2.47828
_
_
_
The singular values of are:
f
1
= 0.89514, f
2
= 0.0403121 and f
3
= 0.0026218249
Again, all f
i
’s are less than 1. We consider the ﬁrst row of C, so that
the errors corresponding to this vector are obtained from:
ˆ u
i
= C
i
−0.9192042I
i
−0.066081211W
i
(19)
These errors will be also denoted by usvd
i
.
The cointegrating vector reported by the authors (p. 109), which is
obtained from the VAR(2), including the dummies mentioned previously,
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
32 Alexis Lazaridis
by applying the ML procedure is given by:
1 −0.93638 −0.03804
and the errors obtained from this vector are computed from:
ˆ u
i
= C
i
−0.93638I
i
−0.03804W
i
(20)
As in the previous case, these errors will be denoted by ubook
i
.
The minimum variance property can be seen from the following:
Standard deviation of Standard deviation of Standard deviation of
{uml
i
} {usvd
i
} {book
i
}
0.0169 0.0166 0.0193
Next, we consider the model presented by Banerjee et al. (1993,
pp. 292–293) that refers to VAR(2) with constant dummy and trend. The
state vector x ∈ E
4
consists of the variables (m − p), p, y, and R. The
data (in logs) are presented in Harris (1995, statistical appendix, pp. 153–
155. Note that the entries for 1967:1 and 1973:2 have to be corrected from
0.076195 and 0.56569498 to 9.076195 and 9.56569498, respectively). Con
sidering the cointegration vector reported by the authors, which has been
obtained by applying the ML method, we compute the errors from:
ˆ u
i
= (m−p)
i
+6.3966p
i
−0.8938y
i
+7.6838R
i
. (21)
These errors will be denoted by uml
i
.
Estimating the same VAR by OLS, we get,
=
_
_
_
_
_
−0.089584 0.16375 −0.184282 −0.933878
−0.012116 −0.9605 0.229635 0.206963
0.021703 0.676612 −0.476152 0.447157
−0.006877 0.118891 0.048932 −0.076414
_
¸
¸
¸
_
,
δδδ =
_
_
_
_
_
3.129631
−2.46328
5.181506
−0.470803
_
¸
¸
¸
_
and µµµ =
_
_
_
_
_
0.001867
−0.001442
0.003078
−0.000366
_
¸
¸
¸
_
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 33
Following the procedure described above, we obtain matrix Cwhich is:
C =
_
_
_
_
_
1 −48.30517 24.60144 37.75685
1 5.491415 −0.6510755 7.423318
1 −2.055229 −5.466846 0.9061699
1 0.0051181894 0.1603713 −0.1244312
_
¸
¸
¸
_
Euclidean norm =
_
_
_
_
_
66.06966
9.310488
5.99429
1.020406
_
_
_
_
_
The singular values of are:
f
1
= 1.503069, f
2
= 0.7752298, f
3
= 0.1977592
and f
4
= 0.012560162
The fact that one singular value is greater than one, is an indication that
unless a dummy is present — we hardly can trace one row of matrix C,
which may be related to stationary errors ˆ u
i
. As shown in Fig. 1, from the
graphs, from which refer to the errors corresponding to the rows of C, we
see that only the second row produces reasonably results. In this case, the
errors are computed from:
ˆ u
i
= (m−p)
i
+5.491415p
i
−0.6510755y
i
+7.42318R
i
. (22)
As usual, these errors will be denoted by usvd
i
.
The minimum variance property can be seen from:
Standard deviation of {uml
i
} Standard deviation of {usvd
i
}
0.2671 0.2375
To see that none of the two error series is stationary, as it was indicated
by the ﬁrst singular value, we present the following results.
u ˆ ml
i
= 0.268599 −0.178771
(0.065634)
uml
i−1
−0.214098uml
i−1
t = −2.724
uˆ svd
i
= 0.832329 −0.193432
(0.066421)
usvd
i−1
−0.160135usvd
i−1
t = −2.912
For a longrun relationship with three (independent) variables without
trend and intercept, the critical value t
u
for α = 0.10 is less than −3.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
34 Alexis Lazaridis
Nevertheless, the method proposed, produces better results than the ML
approach does.
Finally, we will consider the ﬁrst of the two cointegration vectors
(since, it produces slightly better results), reported by Harris (1995, p. 102).
From this vector, which was obtained by the ML method, one gets the
following errors.
ˆ u
i
= (m−p)
i
+7.325p
i
−1.073y
i
+6.808R
i
(23)
Also we obtain
standard deviation of ˆ u
i
’s = 0.268 and
ˆ
ˆ u
i
= −0.117378 −0.178896
(0.068296)
ˆ u
i−1
−0.30095417ˆ u
i−1
t = −2.619.
It is obvious that these additional results further support the previous
ﬁndings, which are due to some properties of SVD (Lazaridis, 1986).
6.1. A Note on Integration
It seems to be a common belief that differentiating a variable say n times,
we will always get a stationary series (Harris, 1995, p. 16). Hence, the
initial series is said to be I(n). But, this is not necessarily the case, as it is
veriﬁed that if we take into consideration the exports of goods and services
for Greece, as presented in Table 1. Besides if n is large enough, is it of
any good to obtain such a stationary series when economic variables are
considered?
Table 1. The series {xxx
iii
}.
Exports of goods and services in bil. Eur. Greece 1956–2000.
X
2.4944974E02 2.9053558E02 2.9347029E02 2.8760089E02 2.8173149E02
3.2575201E02 3.5803374E02 4.1379310E02 4.2553190E02 4.7248717E02
6.6030815E02 6.7498162E02 6.6030815E02 7.6008804E02 8.8041089E02
0.1000734 0.1300073 0.2022010 0.2664710 0.3325018
0.4258254 0.4763023 0.5998532 0.7325019 1.049743
1.239618 1.388114 1.788701 2.419956 2.868965
3.618782 4.510052 4.978430 5.818929 6.485106
7.690976 9.316215 9.847396 11.45708 14.08716
15.39428 18.87601 20.90506 22.56992 25.51577
Source: International Financial Statistics. (International Monetary Fund, Washington DC)
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 35
The series { x ∆ ∆ ∆
i
}, {
2
x
i
}, and {
3
x
i
}
Figure 2. Differences of the NI series {x
i
}.
Using the series {x
i
} as shown in Table 1, we can easily verify from the
corresponding graph, that even for n = 5, the series {
5
x
i
} is not stationary.
In Fig. 2, the series {x
i
}, {
2
x
i
}, and {
3
x
i
} are presented, for a better
understanding of this situation. We will call the variables, which belong to
this category near the integrated (NI) (Banerjee et al., 1993, p. 95) series.
To trace that a given series is NI, we may run the regression:
x
i
= β
1
+β
2
x
i−1
+β
3
t
i
+
q
j=1
β
j+3
x
i−j
+u
i
(24)
which is a reparametrized AR(q) with constant, where we added a trend
too. As shown in Eq. (17), the value of q is set such that the noises u
i
to
be white. A comparatively simple way for this veriﬁcation is to compute
the residuals ˆ u
i
and to consider the corresponding LjungBox Q statis
tics and particularly their pvalues, which should be much greater than
0.1, to say that no autocorrelation (AC) is present. On the other hand, if
we trace heteroscedasticity, this is a strong indication that we have a NI
series. For the data presented in Table 1, we found that q = 2. The cor
responding Q statistics (Column 4) together with pvalues are presented
in Table 2.
We see that for all k (column 1), the corresponding pvalues (column 5)
are greater than 0.1. But, if we look at the residuals graph (Fig. 3), we can
verify the presence of heteroscedasticity.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
36 Alexis Lazaridis
Table 2. Residuals: Autocorrelations, PAC, Q statistics and pppvalues.
Autocorrelation (AC), partial autocor coefﬁcients (PAC) QQQ statistics and ppp values (prob).
Values AC PAC QQQ Stat(LB) ppp value QQQ Stat(BP) ppp value
of kkk
1 0.040101 0.040101 0.072482 0.787756 0.067540 0.794952
2 −0.061329 −0.063039 0.246254 0.884151 0.225515 0.893367
3 −0.008189 −0.003064 0.249432 0.969240 0.228331 0.972891
4 −0.285494 −0.290504 4.213238 0.377916 3.651618 0.455202
5 0.042856 0.072784 4.304969 0.506394 3.728756 0.589090
6 0.099724 0.058378 4.815469 0.567689 4.146438 0.656867
7 −0.151904 −0.167680 6.033816 0.535806 5.115577 0.645861
8 −0.007846 −0.069642 6.037162 0.643069 5.118163 0.744875
9 0.001750 0.023203 6.037333 0.736176 5.118291 0.823877
10 0.109773 0.165620 6.733228 0.750367 5.624397 0.845771
11 0.134538 0.022368 7.812250 0.730017 6.384616 0.846509
12 −0.029101 −0.048363 7.864417 0.795634 6.420185 0.893438
13 −0.074997 −0.028063 8.222832 0.828786 6.656414 0.918979
14 −0.084284 −0.017969 8.691681 0.850280 6.954772 0.936441
15 −0.113984 −0.104484 9.580937 0.845240 7.500452 0.942248
16 −0.081662 −0.157803 10.05493 0.863744 7.780540 0.955133
17 −0.001015 −0.005026 10.05501 0.901288 7.780582 0.971016
18 −0.053178 −0.056574 10.27276 0.922639 7.899356 0.980097
Figure 3. The residuals from Eq. (24) (q = 2).
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 37
Another easy and practical way to trace heteroscedasticity, is to select
the explanatory variable, which yields the smallest pvalue for the corre
sponding Spearman’s correlation coefﬁcient (r
s
). Note that in models like
the one we have seen in Eq. (24), such a variable is the trend, resulting
in this case to: r
s
= 0.3739, t = 2.394, p = 0.0219. This means that
for a signiﬁcance level α > 0.022, heteroscedasticity is present. Hence,
we have an initially NI series {x
i
}, which has to be weighed somehow in
order to be transformed to a difference stationary series (DSS). In some
cases, this transformation can be achieved if the initial series is expressed
in real terms, or as percentage of growth. In usual applications, the (natu
ral) logs [i.e., ln(x
i
)] are considered, given that {x
i
/x
i
} > {ln(x
i
)} >
{x
i
/x
i−1
}.
7. Case Study 3: How to Obtain Reliable Results Regarding
the Order of Integration?
It is a common practice to apply DF/ADF for determining the order of inte
gration for a given series. However, the analytical stepbystep application
of this test (Enders, 1995, p. 257; Holden and Perman, 1994, pp. 64–65),
leaves the impression that the cure is worse than the disease itself. On the
other hand, by simply adopting the results produced by some commer
cial computer programs, which is the usual practice in many applications,
we may reach some faulty conclusions. It should be noted at this point that
according to the results obtained by a wellknown commercial package, the
series {x
i
}, as shown in Table 1, should be considered as I(2), for α = 0.05,
which is not the case.
Here, we present a comparatively simple procedure to determine the
order of integration of a DSS, by inspecting a model analogous to Eq. (24).
Assuming that {z
i
} is DSS, we proceed as follows:
• Start with k = 1 and estimate the regression:
k
z
i
= β
1
+β
2
k−1
z
i−1
+β
3
t
i
+
q
j=1
β
j+3
k
z
i−j
+u
i
. (25)
Be sure that the noises in Eq. (25) are white, by applying the relevant
tests as described above. This way, we set the laglength q. Note that if
k = 1, its value is omitted from Eq. (25) and that
0
z
i−1
= z
i−1
.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
38 Alexis Lazaridis
Table 3. Critical values for the DF Ftest.
Sample size ααα = 0.01 ααα = 0.05 ααα = 0.10
25 10.61 7.24 5.91
50 9.31 6.73 5.61
100 8.73 6.49 5.47
250 8.43 6.34 5.39
500 8.34 6.30 5.36
>500 8.27 6.25 5.34
• Proceed to test the hypothesis:
H
0
: β
2
= β
3
= 0. (26)
To compare the computed Fstatistic, we have to consider, in this case,
the DickeyFuller Fdistribution. The critical values (Dickey and Fuller,
1981) are reproduced in Table 3, to facilitate the presentation.
Note that in order to reject Eq. (26), Fstatistic should be much greater
than the corresponding critical values, as shown in Table 3. If we reject
Eq. (26), this means that the series {z
i
} is I(k −1). If the null hypothesis
is not rejected, then increase k by 1(k = k + 1) and repeat the same
procedure, i.e., starting from estimating Eq. (25), testing at each stage so
that the model disturbances are white noises.
• In case of known structural break(s) in the series, a dummy d
i
should be
added in Eq. (25), such that:
d
i
=
_
0 if i ≤ r
w
i
if i > r
where w
i
= 1(∀i) for the shift in mean, or w
i
= t
i
for the shift in
trend. It is recalled that r denotes the date of the break or shift. With
this speciﬁcation, the results obtained are alike the ones that Perron et al.
(1992a; b) procedure yields.
We applied the proposed technique to ﬁnd the order of integration of the
variable m
i
as shown in Eq. (21), which is the log of nominal M1 (money
supply). Harris (1995, p. 38) after applying DF/ADF test reports that “. . .
and the interest rate as I(1) and prices as (I2). The nominal money supply
might be either, given its path over time. . . .”. Since the latter variable is m
i
,
it is clear that, with this particular test, it was not possible to get a precise
answer as to whether {m
i
} is either I(1) or I(2).
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 39
To show the details, underline the ﬁrst stage of this procedure; we
present some estimation results with different values of q.
For q = 3
Most of the pvalues in the 5th column of the table, which is analogous
to Table 2, are equal to zero.
Obviously, this is not the proper laglength.
For q = 4
All pvalues are greater than 0.1, as shown in Table 2. r
s
= −0.193,
p = 0.05652. F(2, 94) = 4.2524.
This laglength is acceptable.
For q = 5
All pvalues are greater than 0.1, as shown in Table 2. r
s
=
−0.2094, p = 0.0395.F(2, 92) = 4.2278.
This laglength is questionable, since for α > 0.04, we face the prob
lem of heteroscedasticity.
q = 6
All pvalues are greater than 0.1, as we have seen in Table 2. r
s
=
−0.166, p = 0.1038.F(2, 90) = 5.937.
This laglength is quite acceptable.
q = 7
All pvalues are greater than 0.1, as shown in Table 2. r
s
= −0.1928, p =
0.0608.F(2, 88) = 4.768.
This laglength is also acceptable, but it is inferior when compared to
the previous one.
From these analytical results, it can be easily veriﬁed that the proper
laglength is 6 (q = 6). It may also be worthy to mention that according
to the normality test (JarqueBera), we should accept the null. The value
of F = 5.937 is less than the corresponding critical values as shown in
Table 3, for 100 observations and α = (0.01, 0.05). Hence, we accept
Eq. (26) and increase the value of k by one. The estimation results, after
properly selecting the value of q(q = 5), are:
2
m
i
= 0.0047
(0.0046)
−0.894892
(0.21784)
m
i−1
+ 0.00033
(0.000108)
t
i
+
5
j=1
ˆ
β
j+3
2
m
i−j
+ ˆ u
i
. (27)
Regarding the residuals, all pvalues in the 5th column of the table,
which is analogous to Table 2, are greater than 0.1. Further, we have:
r
s
= −0.1556, p = 0.127. JarqueBera = 0.8(p = 0.67).F(2, 91) = 8.44.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
40 Alexis Lazaridis
Figure 4. The series {m
i
} and {
2
m
i
}.
Since F < 8.73 for α = 0.01 (Table 3), the null seen in Eq. (26) is not
rejected. Thus, we increase the value of k(k = 3) for the next step. The esti
mation results, after properly setting the laglength (q = 4), are as follows.
3
m
i
= 0.000831
(0.004885)
−2.827537
(0.495485)
2
m
i−1
−0.000006
(0.000077)
t
i
+
4
j=1
ˆ
β
j+3
3
m
i−j
+ ˆ u
i
. (28)
No autocorrelation is present. r
s
= −0.164, p = 0.108. JarqueBera =
0.65(p = 0.723).F(2, 92) = 16.283. Since the value of F is much greater
than 8.73 for α = 0.01 (Table 3), the null in Eq. (26) is rejected. Hence,
the order of integration of the series {m
i
} is k − 1 = 2, i.e., {m
i
} ∼ I(2).
This is also veriﬁed from Fig. 4, where {m
i
} and {
2
m
i
} are plotted.
Even from Fig. 4, one can easily see that the series {m
i
} is not sta
tionary. Rather, it seems to be trendstationary. According to the previous
ﬁndings, it can be also veriﬁed from this ﬁgure that {
2
m
i
} is stationary.
It should be noted that in some exceptional or problematic cases, the
Fstatistic to test Eq. (26), may have a value marginally greater than the
corresponding critical value in Table 3, even for α = 0.01. In such cases,
we propose to increase the value of k and proceed to the next stage.
8. Conclusion
We have derived a simpliﬁed method for computing cointegrating vectors,
using the SVD. In many applications, we found that this technique produces
better results, compared to the ones obtained by applying the ML method.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
A Novel Method of Estimation Under CoIntegration 41
Due to the properties of SVD, we can easily and directly include in the
cointegrating vectors, as many deterministic components as we want to.
Besides, we found that the errors corresponding to the ﬁnally selected co
integration vector, are characterized by the minimum variance property. It
should be noted that the procedure proposed here is not mentioned in the
relevant literature.
Further, we analyzed the stages of a relevant technique to obtain reli
able results regarding the order of integration of a given series provided that
it is DSS. We also demonstrated the properties of the socalled near inte
grated series, which can be traced by applying this technique. It should be
emphasized that with this kind of series (NI), we showthat the conventional
methods for unit root test may produce faulty results.
References
Banerjee, A, JJ Dolado, JW Galbraith and DF Hendry (1993). Cointegration, Error Cor
rection, and the Econometric Analysis of NonStationary Data. New York: Oxford
University Press.
Dickey, DA and WA Fuller (1981). Likelihood ratio statistics for autoregressive time series
with a unit root. Econometrica 49, 1057–1072.
Dickey, DA, DW Jansen and DL Thornton (1994). A primer on cointegration with an appli
cation to money and income. In Cointegration for the Applied Economist, BB Rao
(ed.), UK: St. Martin’s Press, 9–45.
Enders, W (1995). Applied Econometrics Time Series. NewYork: John Wiley.
Harris, R(1995). Using CointegrationAnalysis in Econometric Modelling. London: Prentice
Hall.
Holden, D and R Perman (1994). Unit roots and cointegration for the economist. In Coin
tegration for the Applied Economist, BB Rao (ed.), UK: St. Martin’s Press, 47–112.
Johansen, S and K Juselius (1990). Maximum likelihood estimation and inference on coin
tegration with application to the demand for money. Oxford Bulletin of Economics and
Statistics 52, 169–210.
Johnston, J and J DiNardo (1997). Econometric Models. NewYork: McGrawHill Interna
tional.
Kleibergen, F and V Dijk (1994). Direct cointegration testing in error correction models.
Journal of Econometrics 63, 61–103.
Lazaridis, A and D Basu (1983). Stochastic optimal control by pseudoinverse. Review of
Economics and Statistics 65(2), 347–350.
Lazaridis, A(1986). Anote regarding the problemof perfect multicollinearity. Quantity and
Quality 120, 297–306.
McKinnon, J (1991). Critical values for cointegration tests. In LongRun Economic Rela
tionships, RF Engle and CWJ Granger (eds.), NewYork: Oxford University Press.
Perron, P and T Vogelsang (1992a). Nonstationarity and level shifts with an applica
tion to purchasing power parity. Journal of Business and Economic Statistics 10,
301–320.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic1
42 Alexis Lazaridis
Perron, P and T Vogelsang (1992b). Testing for a unit root in a time series with a changing
mean: Corrections and extensions. Journal of Business and Economic Statistics 10,
467–470.
Sims, CA (1980). Macroeconomics and reality. Econometrica 48, 1–48.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Topic 2
Time Varying Responses of Output to Monetary
and Fiscal Policy
Dipak R. Basu
Nagasaki University, Nagasaki, Japan
Alexis Lazaridis
Aristotle University of Thessalonikki, Greece
1. Introduction
There are large volumes of literatures on the effectiveness of monetary and
ﬁscal policies and on lags in the effects of monetary policies (Friedman and
Schwartz, 1963; Hamburger, 1971; Meyer, 1967; Tobin, 1970). Although
the lengths of lags are important in determining the role of monetaryﬁscal
policies, the relationship between money and income vary over time due
to changes in the lag structure. Mayer (1967), Poole (1975) and Warburton
(l971) attempted to analyze the empirical variability of the lag structure
by analyzing the turning points in general business activity. Sargent and
Wallace (1973) as well as Lucas (1972) have drawn attention to the role of
timedependant response coefﬁcients to changes in stabilization policies.
Cargill and Meyer (1977; 1978) have estimated the timevarying relation
ship between national income and monetaryﬁscal policies. The results
then indicated existences of time variations and exclusions of time varia
tions of the coefﬁcients, lead to exclusions of prior information inherent in
the models from the estimation process. Blanchard and Perotti (2002) as
well as Smets and Wouters (2003) have obtained similar characteristics of
the monetary and ﬁscal policy.
The existence of time dependency of effects of monetary ﬁscal policies,
reﬂects considerable doubts on the policy prescription based on constant
coefﬁcient estimates. However, more reasonable results can be obtained if
we try to estimate dynamics and movements of these relationships over
43
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
44 Dipak R. Basu and Alexis Lazaridis
time. For this purpose, adaptive control systems with timevarying param
eters (Astrom and Wittenmark, 1995; Basu and Lazaridis, 1986; Brannas
andWestlund, 1980; Nicolao, 1992; Radenkovic and Michel, 1992) provide
a fruitful approach. There are alternative estimation methods, e.g., rational
expectation modeling, VAR approaches etc., to estimate timevarying sys
tems. These methods, however, are mainly descriptive, i.e., cannot have
immediate application in policy planning. The purpose of this paper is to
explore this possibility in terms of a planning model where adaptive control
rules will be implemented so that we can derive timevarying reduced form
coefﬁcients fromthe original structural model to demonstrate the dynamics
of monetaryﬁscal policies.
The model follows the basic theoretical ideas of monetary approach to
balance of payments and structural adjustments (Berdell, 1995; Humphrey,
1981; 1993; Khan and Montiel, 1989). In Section 2, the method of adaptive
optimization and the updating method of the timevarying model are ana
lyzed. In Section 3, the model and its estimates are described. This includes
some necessary adjustments for a developing country. In Section 4, the
results of the timevarying impacts of monetaryﬁscal policies are analyzed.
2. The Method of Adaptive Optimization
Suppose a dynamic econometric model can be converted to an equivalent
ﬁrst order dynamic system of the form
˜ x
i
=
˜
A˜ x
i−1
+
˜
C˜ u
i
+
˜
D˜ z
i
+ ˜ e
i
(1)
where ˜ x
i
is the vector of endogenous variables, ˜ u
i
is the vector of con
trol variables, ˜ z
i
is the vector of exogenous variables, and ˜ e
i
is the vector
of noises, which are assumed to be white Gaussian and
˜
A,
˜
C, and
˜
D are
coefﬁcient matrices of proper dimensions. It should be noted that a certain
element of ˜ z
i
is 1 and corresponds to the constant terms. The parameters of
the above system are assumed to be random.
Shifting to period i + 1, we can write
˜ x
i+1
=
˜
A˜ x
i
+
˜
C˜ u
i+1
+
˜
D˜ z
i+1
+ ˜ e
i+1
. (2)
Now, we deﬁne the following augmented vectors and matrices.
x
i
=
_
˜ x
i
˜ u
i
_
, x
i+1
=
_
˜ x
i+1
˜ u
i+1
_
, e
i+1
=
_
˜ e
i+1
0
_
,
A =
_
˜
A 0
0 0
_
, C =
_
˜
C
I
_
, D =
_
˜
D
0
_
.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 45
Hence (2) can be written as
x
i+1
= Ax
i
+C˜ u
i+1
+D˜ z
i+1
+e
i+1
(3)
Using the linear advance operator L, such that L
k
y
i
= y
i+k
and deﬁning
the vectors u, z and ε from
u
i
= L˜ u
i
z
i
= L˜ z
i
ε
i
= Le
i
then Eq. (3) can take the form
x
i+1
= Ax
i
+Cu
i
+Dz
i
+ε
i
(4)
which is a typical linear control system.
We can formulate an optimal control problem of the general form
min J =
1
2
x
T
− ˆ x
T
2
Q
T
+
1
2
T−1
i=1
x
i
− ˆ x
i
2
Q
i
(5)
subject to the system transition equation shown in Eq. (4).
It is noted that T indicates the terminal time of the control period, {Q}
is the sequence of weighing matrices and ˆ x
i
(i = 1, 2, . . . , T ) is the desired
state and control trajectory, according to our formulation.
The solution to this problem can be obtained according to the min
imization principle by solving the Ricattitype equations (Astrom and
Wittenmark, 1995).
K
T
= Q
T
(6)
i
= −(E
i
C
K
i+1
C)
−1
(E
i
C
K
i+1
A) (7)
K
i
= E
i
A
K
i+1
A +
i
(E
i
C
K
i−1
A) +Q
i
(8)
h
T
= −Q
T
ˆ x
T
(9)
h
i
=
i
(E
i
C
K
i+1
D)z
i
+
i
(E
i
C
)h
i+1
+(E
i
A
K
i+1
D)z
i
+(E
i
A
)h
i+1
−Q
i
ˆ x
i
(10)
g
i
= −(E
i
C
K
i+1
C)
−1
[(E
i
C
K
i+1
D)z
i
+(E
i
C
)h
i+1
] (11)
x
∗
i
= [E
i
A +(E
i
C)
i
] x
∗
i
+(E
i
C) g
i
+(E
i
D)z
i
(12)
u
∗
i
=
i
x
∗
i
+g
i
(13)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
46 Dipak R. Basu and Alexis Lazaridis
where u
∗
i
(i = 0, 1, . . . , T −1), the optimal control sequence and x
∗
i+1
, the
corresponding state trajectory, which constitutes the solution to the stated
optimal control problem.
In the above equations,
i
is the matrix of feedback coefﬁcients and
g
i
is the vector of intercepts. The notation E
i
denotes the conditional
expectations, given all information up to the period i. Expressions like
E
i
C
K
i+1
C, E
i
C
K
i+1
A, E
i
C
K
i+1
D are evaluated, taking into account
the reduced form coefﬁcients of the econometric model and their co
variance matrix, whichare tobe updatedcontinuously, alongwiththe imple
mentation of the control rules. These rules should be readjusted according
to “passivelearning” methods, where the joint densities of matrices A, C,
and D are assumed to remain constant over the control period.
2.1. Updating Method of ReducedForm Coefﬁcients and Their
CoVariance Matrices
Suppose we have a simultaneousequation system of the form
XB
+U
= R (14)
where X is the matrix of endogenous variable deﬁned on E
N
× E
n
and
B is the matrix of structural coefﬁcients, which refer to the endogenous
variables and is deﬁned on E
n
×E
n
. U is the matrix of explanatory variables
deﬁned on E
N
×E
g
and is the matrix of the structural coefﬁcients, which
refer to the explanatory variables, deﬁned on E
N
× E
g
. R is the matrix of
noises deﬁned on E
N
×E
n
. The reduced formcoefﬁcients matrix is then
deﬁned from:
= −B
−1
. (14a)
Goldberger et al. (1961) have shown that the asymptotic covariance
matrix, say of the vector ˆ π, which consists of the g column of matrix
ˆ
can be approximated by
˜
=
__
ˆ
I
g
_
⊗(
ˆ
B
)
−1
_
F
__
ˆ
I
g
_
⊗(
ˆ
B
)
−1
_
(15)
where ⊗ denotes the Kroneker product,
ˆ
and
ˆ
B are the estimated coef
ﬁcients by standard econometric techniques and F denotes the asymptotic
covariance matrix of the n + g columns of (
ˆ
B
ˆ
), which assumed to be
consistent and asymptotically unbiased estimate of (B).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 47
Combining Eqs. (14) and (14a) we have
BX
= −U
+R
⇒ X
= −B
−1
U
+B
−1
R
⇒ X
= U
+W
(16)
where W
= B
−1
R
Denoting the ith column of matrix X
by x
i
and the ith column of matrix
W
by w
i
, we can write
x
i
=
_
_
_
_
_
_
_
_
u
1i
0 · · · 0 u
2i
0 · · · 0 u
gi
0 · · · 0
0 u
1i
· · · 0 0 u
2i
· · · 0 0 u
gi
· · · 0
· · · · · · · · ·
· · · · · · · · ·
· · · · · · · · ·
0 0 · · · u
1i
0 0 · · · u
2i
· · · 0 . . . u
gi
_
¸
¸
¸
¸
¸
¸
_
π +w
i
(17)
where u
ij
is the element of the jth column and ith row of matrix U. The
vector π ∈ E
ng
, as mentioned earlier, consists of the g column of matrix .
Equation (17) can be written in a compact form, as
x
i
= H
i
π +w
i
, i = 1, 2, . . . , N (17a)
where x
i
∈ E
n
, w
i
∈ E
n
and the observation matrix H
i
is deﬁned on
E
n
×E
ng
.
In a timeinvariant econometric model, the coefﬁcients vector π is
assumed random with constant expectation overtime, so that
π
i+1
= π
i
, for all i. (18)
In a timevarying and stochastic model we can have
π
i+1
= π
i
+ε
i
(18a)
where ε
i
∈ E
ng
is the noise.
Based on these, we can rewrite Eq. (17a) as
x
i+1
= H
i+1
π
i+1
+w
i+1
i = 0, 1, . . . , N − 1. (19)
We make the following assumptions.
(a) The vector x
i+1
and matrix H
i+1
can he measured exactly for all i.
(b) The noises ε
i
and w
i+1
are independent discrete white noises with
known statistics, i.e.,
E(ε
i
) = 0; E(w
i+1
) = 0
E(ε
i
w
i+1
) = 0
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
48 Dipak R. Basu and Alexis Lazaridis
E(c
i
ε
i
) = Q
1
δ
ij
, where δ
ij
is the Kronecker delta, and
E(w
i
w
i
) = Q
2
δ
ij
.
The above covariance matrices, assumed to be positive deﬁnite.
(c) The state vector is normally distributed with a ﬁnite covariance matrix.
(d) Regarding Eqs. (18a) and (19), the Jacobians of the transformation of
ε
i
into π
i+1
and of w
i+1
into x
i+1
are unities. Hence, the corresponding
conditional probability densities are:
p(π
i+1
π
i
) = p(ε
i
)
p(x
i+1
π
i+1
) = p(w
i+1
).
Under the above assumptions andgivenEqs. (18a) and(19), the problem
set is to evaluate
E(π
i+1
x
i+1
) = π
∗
i+1
and
cov (π
i+1
x
i+1
) = S
i+1
(the error covariance matrix)
where x
i+1
= x
1
, x
2
, x
3
, . . . , x
i+1
.
The solution to this problem (Basu and Lazaridis, 1986; Lazaridis,
1980) is given by the following set of recursive equations, as it is brieﬂy
shown in Appendix A.
π
∗
i+1
= π
∗
i
+K
i+1
(x
i+1
−H
i+1
π
∗
i
) (20)
K
i+1
= S
i+1
H
i+1
Q
−1
2
(21)
S
−1
i+1
= P
−1
i+1
+H
i+1
Q
−1
2
H
i+1
(22)
P
−1
i+1
= (Q
1
+S
i
)
−1
. (23)
The recursive process is initiated by regarding K
0
and H
0
as null matri
ces and computing π
∗
0
and S
0
from
π
∗
0
= ˆ π i.e., the reduced form coefﬁcients (columns of matrix
ˆ
)
S
0
= P
0
=
˜
.
The reduced form coefﬁcients, along with their covariance matrices,
can be updated by this recursive process and at each stage the set of Riccati
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 49
equations should be updated accordingly so that adaptive control rules may
be derived.
Once we estimate the model (described in the next section), using the
full information maximum likelihood (FIML) method, we can obtain both
the structural model and the probability density functions along with all
associated matrices mentioned above. We ﬁrst convert the structural econo
metric model to a statevariable formaccording to Eq. (1). Once we specify
the targets for the state and control variables, the objective function is to be
minimized, the weights attached to each state and control variables, then
we can calculate the results of the optimization process for the entire period
using Eqs. (6)–(13). Thereafter, we can update all probability density func
tions and all other associated matrices using Eqs. (20)–(23). These will
effectively update the coefﬁcients of the model in its statevariable form.
We can repeat the optimization process over and over as we update the
model, its associated matrices, probability density functions and use these
as new information. When the process will converge, i.e., the difference
between the old and new estimates are insigniﬁcant according to some pre
assigned criteria, we can obtain the ﬁnal updated coefﬁcients of the model
along with the results of the optimization process. The dynamics of the
timevarying coefﬁcients (in terms of responsemultipliers) are described
later in Section 4. This mechanism is a great improvement compared to the
ﬁxedcoefﬁcient model as we can incorporate in our calculation, chang
ing information sets derived from the implementation of the optimization
process.
3. MonetaryFiscal Policy Model
We describe below a monetaryﬁscal policy model, which was estimated
with data from the Indian economy. The model accepts the deﬁnition of
balance of payments and money stock according to monetary approach
of balance of payments (Khan, 1976; Khan and Montiel, 1989; 1990).
The decision to choose this particular model is inﬂuenced by the fact that
it is commonly known as the fundbank adjustment policy model, used
extensively by the World Bank and the International Monetary Fund (IMF).
In the version adopted for this paper, there is no explicit investment
or consumption function, because the domestic absorption can reﬂect the
combined response of both private and public investment and consump
tion, to the planned target for national income set by the planning author
ity and to various market forces reﬂected in the money market interest
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
50 Dipak R. Basu and Alexis Lazaridis
rate and exchange rate. The “New Cambridge” model of the UK econ
omy (Cripps and Godley, 1976; Godley and Lavoie, 2002; 2007) has pos
tulated a similar combined consumptioninvestment function for the UK
as well.
The model was estimated by applying FIML method, using the data
from the Indian economy for the period 1951–1996. The estimated param
eters were then used as the initial starting point for the stochastic control
model. The estimated econometric model can be transformed into a state
variable form (Basu and Lazaridis, 1986), in order to formulate the optimal
control problem, i.e., to minimize the quadratic objective function, subject
to Eq. (4), which is the system transition equation. As already mentioned,
the recursive estimation process of the timevarying responsemultipliers
is presented in brief in Appendix A.
3.1. Absorption Function and National Income
Domestic absorption reﬂects the behavior of both the private and public
sector regarding consumption and investment too. Domestic real adsorp
tion is inﬂuenced by real national income, market interest rate and foreign
exchange rate. We assume a linear relationship.
(A/P)
t
= a
0
+a
1
(Y/P)
t
−a
2
(IR)
t
−a
3
EXR
t
, t = time period.
The relation between the national income and adsorption can be deﬁned
as follows:
Y
t
= A
t
+TY
t
+R
t
−G
t
+ GBS
t
−LR
t
where A is the value of domestic absorption, P is the price level, Y is the
national income, IR is the market interest rates, EXR is the exchange rate,
TY is the government tax revenue, G is the public consumption, GBS is the
government bond sales, LR is the net lending by the central government to
the states (which is not part of the planned public expenditure) and R is
the changes in the foreign exchange reserve reﬂecting the behavior of the
foreign trade sector.
The government budget deﬁcit (BD
t
) is deﬁned by the following
equation
BD
t
= (G
t
+LR
t
+PF
t
) −(TY
t
+ GBS
t
+AF
t
+BF
t
)
where PF is the foreign payments due to existing foreign debts, which
may include both amortization and interest payments, AF is the foreign
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 51
assistance which is an insigniﬁcant feature, FB is the total foreign borrow
ing, assuming only the government can borrow from foreign sources.
We assume AF and LR as exogenous, whereas FB, G, and TY as policy
instruments. However, PF depends on the level of existing foreign debt and
the world interest rate, although a sizable part of the foreign borrowing can
be at a concessional rate
PF
t
= a
4
+a
5
t
r=−20
FB
r
+a
6
_
WIR
EXR
_
t
.
Government bond sales (GBS) depends on its attractiveness reﬂected on
the interest rate (IR), on the ability of the domestic economy to absorb (A)
on the requirements of the government (G) and on the alternative sources of
ﬁnances reﬂected on the tax revenue (TY) and on government’s borrowing
from the central bank i.e., the net domestic asset (NDA) creation by the
central bank.
GBS
t
= a
7
+a
8
A
t
+a
9
IR
t
+a
10
G
t
−a
11
TY
t
−a
12
NDA
t
3.2. Monetary Sector
We assume the ﬂow equilibrium in the money market, i.e.,
MD
t
= M
t
= MS
t
where MD is the money demand and MS is the money supply. The stock
of money supply depends on the stock of high powered money and the
moneymultiplier, as follows:
MS
t
= [(1 +CD)/(CD +RR)]
t
(R +NDA)
t
.
(R + NDA) reﬂect the stock of highpowered money and the expression
within the square bracket is the moneymultiplier, which depends on credit
to deposit ratio of the commercial banking sector (CD) and the reserve to
deposit liabilities in the commercial banking sector (RR). Whereas NDA
is an instrument, R depends on the foreign trade sector. However, the
government can inﬂuence CD and RR to control the money supply. RR,
which is the actual reserve ratio, depends on the demand for loans created by
private sectors and the commercial bank’s willingness to lend. We assume
that the desired reserve ratio RR
∗
t
is a function of national income and
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
52 Dipak R. Basu and Alexis Lazaridis
market interest rate i.e.,
RR
∗
t
= a
13
+a
14
Y
t
+a
15
IR
∗
t
.
The commercial banks may adjust their actual reserve ratio to the
desired reserve ratio with a lag.
RR
t
= α(RR
∗
t
−RR
t−1
) where 0 < α < 1.
Thus, we can write
RR
t
= αa
13
+αa
14
Y
t
+αa
15
IR
t
+(l −α)RR
t−1
.
The ratio of credit to deposit liabilities with the commercial bank system
is affected by the opportunity cost of holding currency, as measured by
the market interest rate and national income, representing the domestic
economic activity. Khan (1976) has postulated that the effect of national
income should be negative because “individuals and corporations tend to
become more efﬁcient in their management of cash balances as their income
rises”. However, a different logic mayemerge toa developingcountrywhere
the use of banks as an institution is not widespread, particularly among the
labor force. If there is an expansion in economic activity, the entrepreneurs
will have to maintain a huge cashbalance and run down deposits simply
to pay various dues, as most payments would have to be made in cash.
It is possible that corporations would be more efﬁcient, after some initial
adjustment period. We, therefore, expect the sign of the coefﬁcient for
the current national income to be positive and that for the lagged national
income to be negative.
CD
t
= a
16
+a
17
IR
t
+a
18
Y
t
−a
19
Y
t−1
.
The demand for money is assumed to be a function of the market interest
rate and the national income.
MD
t
= a
20
−a
21
IR
t
+a
22
Y
t
.
3.3. Price and Interest Rate
The market rate of interest (IR) is determined by the supply of money,
national income, and the central bank discount rate.
IR
t
= a
23
−a
29
(MS
t
) +a
29
(Y
t
) +a
26
(CI
t
).
The domestic price level depends on domestic economic activity (par
ticularly changes in the agricultural sector) and the import cost (IMC).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 53
The import cost in turn depends on the exchange rate (EXR) and the world
price of imported goods (WPM). We assume the desired price level (P
∗
) is
represented by the following equation:
P
∗
t
= a
27
−a
28
(A
t
) +a
29
(IMC
t
).
The desired price level reﬂects private sector’s reaction to their expected
domestic absorption of the expected import cost. Suppose the actual price
will move according to the difference between the desired price in period t
and the actual price level in the previous period
Pt = β(P
∗
t
−P
t−1
); 0 < β < 1.
Thus, we get
P
t
= βa
27
−βa
27
(A
t
) +βa
29
(IMC
t
) +(1 −β)P
t−1
.
Import cost (IMC) is represented by the following equation timevarying
responses of output to monetary and ﬁscal policy
IMC
t
= a
30
−a
31
(EXR
t
) +a
32
(WPM
t
)
Exchange rate (EXR) can be an instrument variable whereas WPM is an
exogenous variable.
3.4. Balance of Payments
The balance of payments (R) is equal to the changes in the stock of
international reserve i.e.,
R
t
= X
t
−IM
t
+K
t
+PFT
t
+FB
t
−PF
t
+AF
t
where Xis the value of exports, IM is the value of imports, K is the foreign
capital inﬂows, PFT is the private sectors transactions, FB is the foreign
borrowing, PF is the foreign payments by the central bank and AF is the
foreign aid and grants; where X, PFT, K and AF are exogenous, import IM
is determined by the national income, and the import cost i.e.,
IM
t
= a
33
+a
34
Y
t
−a
35
(IMC
t
).
The above analytical structure was estimated using expected values of
each variable, with expectations being adaptive. The estimated parame
ters were used as the initial starting point for the stochastic control model
developed. Estimated model is presented in Appendix B.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
54 Dipak R. Basu and Alexis Lazaridis
3.5. Stability of the Model
Characteristic roots (real) of the system transition matrix
−0.0000008
−0.0000008
−0.1213610
0.2442519
0.3087129
0.5123629
0.7824260
0.0000001
0.0000001
All the roots are real and they are less than unity, so the system is
stable. [In the equivalent control system, the rank of the controllability
matrix is the same as the dimension of the reduced state vector so that the
system is controllable and observable, i.e., the system parameters can be
identiﬁed.]
We can, however, transform our model to the following form:
Y
t
= ρY
t−1
+e
t
, t = 1, 2, . . . , n (24)
where ρ is a real number and (e
t
) is a sequence of normally distributed
random variables with mean zero and variance σ
2
t
. Box and Pierce (1970)
suggested the following test statistic
Q
m
= n
m
k=1
r
2
k
(25)
where
r
k
=
n
t=k+1
ˆ e
t
ˆ e
t−k
__
n
t=1
ˆ e
2
t
_
(26)
n = the number of observations, m = n − k, where k = the number of
parameters estimated, and ˆ e
t
are the residuals from the ﬁtted model.
If (Y
t
) satisﬁes the system, then under the null hypothesis, Q
m
is
distributed as a chisquared random variable with m degrees of freedom.
The null hypothesis is that ρ = 1 where ˆ e
t
= Y
t
− Y
t−1
and thus k = 0.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 55
The estimated value of Q
m
is 3.85, where the null hypothesis is rejected
at 0.05 signiﬁcance level. So, we accept the alternative hypothesis that if
ρ < 1, therefore the system is stable.
Das and Cristi (1990) have analyzed in detail the condition for the sta
bility and robustness of the timevarying stochastic optimal control system.
The condition is that the dynamic responsemultipliers of the model should
have slow timevariations. As demonstrated in Tables 1–7, the response
multipliers satisfy conditions of slowtimevariations (Das and Cristi, 1990;
Tsakalis and Ioannou, 1990).
Table 1. Responsemultiplier
a
and endogenous variable.
Exogenous Y GBS P BD CD MS IR
variable
EXR −0.02283 −0.00027 0.05593 0.00121 −0.00018 −0.00576 −0.00004
G 0.03178 0.00095 0.13965 0.04073 0.00041 0.01699 0.00108
TY −0.03116 −0.00426 −0.15022 0.04091 −0.00310 −0.03637 −0.01231
CI −0.00265 0.04953 −0.00884 0.00550 −0.00210 −0.00625 0.06599
a
Note: This responsemultiplier refers to the starting point of the planning; λ
2
= 563.09,
indicating overall goodness of ﬁt in a FIML estimation.
Table 2. Responsemultiplier period 1.
a
Endogenous Y GBS P BD CD MS IR
EXR −0.02254 −0.00092 0.05458 0.00111 −0.00020 −0.00543 −0.00092
G 0.03572 0.00403 0.14849 0.03987 0.00500 0.01466 0.00330
TY −0.03590 −0.01017 −0.15549 0.02791 −0.00399 −0.02682 −0.01996
CI 0.00226 0.04592 −0.00606 −0.00458 −0.00169 −0.00516 0.06120
a
Note: Period 1 refers to the ﬁrst period of the planning horizon,
Table 3. Responsemultiplier period 2.
Endogenous Y GBS P BD CD MS IR
EXR −0.02209 −0.00096 0.05336 0.00125 −0.00013 −0.00570 −0.00097
G 0.03293 0.00568 0.15603 0.03500 0.00091 0.02882 0.00599
TY −0.02961 −0.01240 −0.15688 0.02792 −0.00246 −0.04758 −0.02324
CI 0.00273 0.94546 −0.00387 −0.00488 −0.00181 −0.00668 0.06057
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
56 Dipak R. Basu and Alexis Lazaridis
Table 4. Responsemultiplier period 3.
Endogenous Y GBS P BD CD MS IR
EXR −0.02199 −0.00069 0.05269 0.00133 −0.00042 −0.00573 −0.00062
G 0.03001 0.00499 0.16536 0.02924 0.05923 0.04217 0.00570
TY −0.02470 −0.01149 −0.16281 0.02472 −0.07031 −0.06325 −0.02220
CI 0.00215 0.04676 −0.00252 −0.00399 −0.00465 −0.00671 0.06228
Table 5. Responsemultiplier period 5.
Endogenous Y GBS P BD CD MS IR
EXR −0.02133 −0.00088 0.05190 0.00152 −0.00201 −0.00595 −0.00089
G 0.02058 0.00558 0.18078 0.01160 0.50890 0.07248 0.00530
TY −0.01676 −0.00410 −0.16655 0.00695 −0.55026 −0.07779 −0.01013
CI 0.00268 0.04465 −0.00267 −0.00403 −0.01967 −0.00726 0.05953
Table 6. Responsemultiplier period 7.
Endogenous Y GBS P BD CD MS IR
EXR −0.02067 −0.00116 0.04987 0.00135 −0.00104 −0.00558 −0.00129
G 0.00733 0.01033 0.16455 0.00048 0.57357 0.10760 0.01206
TY −0.00402 −0.00345 −0.14832 −0.01434 −0.58067 −0.10904 −0.00869
CI 0.00200 0.04174 −0.00236 −0.00293 −0.02186 −0.00551 0.05566
Table 7. Response of output to monetary and ﬁscal policies.
Periods 1 2 3 5 7
dY/dCI −0.00226 −0.00273 −0.00215 −0.00268 −0.00200
dY/dEXR −0.02254 −0.02209 −0.02188 −0.02133 −0.02067
dP/dG 0.14849 0.15603 0.16536 0.18078 0.16455
dY/dG 0.03572 0.03203 0.03001 0.02058 0.00733
dP/dTY −0.15549 −0.15688 −0.16281 −0.16655 −0.14832
dY/dTY −0.03590 −0.02691 −0.02470 −0.01676 −0.00402
dGBS/dG 0.00403 0.00568 0.00499 0.00558 0.01033
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 57
4. Policy Analysis
The dynamics of the monetary and ﬁscal policy are analyzed in terms of
the analysis of the dynamic responsemultipliers and their relationship with
the monetaryﬁscal policy regimes. The responsemultiplier of the system
will move over time within an adaptive control framework (Tables 1–6).
The movements of the coefﬁcients response of responsemultipliers in a
monetary policy framework are described below. It is essential that for the
stability of the control system, these dynamics of the responsemultiplier
should be slow (Das and Cristi, 1990; Tsakalis and Ioannou, 1990).
Analysis of the responsemultipliers shows that devaluation would have
negative effect on the national income, devaluation would also have neg
ative effect on the government bond sales, CD, interest rate, and money
demand. This is due to the fact that Indian exports may not be that elastic
in response to devaluation, whereas devaluation will reduce import abili
ties signiﬁcantly. As a result, national income and domestic activity would
have negative effects, which would depress the activity of the private sector.
Because of this CD will decline.
Due to the downturn of the economic activity, there will be less demand
for government bonds. Devaluations also can have an inﬂationary effect due
to increased import costs. Government expenditure, on the other hand, will
have positive effects on every variable. This implies that increased public
expenditure will stimulate the national income and the private sector despite
increased interest rate. However, the increased public expenditure will have
inﬂationary impacts on the economy at the same time.
Increased tax revenue will depress national income, as it will reduce
government bond sales. Lower level of national income will reduce money
supplyandthe private sector’s activity, whichwill be reﬂectedinthe reduced
currency to deposit ratio and the reduced level of money demand. Reduced
level of national income, in this case, will have reduced price level but bud
get deﬁcit will go up as well because of the reduced demand for government
bonds. Increased rate of central bank’s discount rate will depress national
income and general price level. At the same time, private sector’s activity
will be reduced, as reﬂected in the CDand money demand, due to increased
rate of interest. Although government bond sales will go up, budget deﬁcit
will be increased due to reduced level of economic activity.
The effect of the central bank discount rate (CI ) on interest rate varies
from 0.061 in period 1 to 0.055 in period 7. This result is primarily due to
the fact that the inﬂuence of G (public expenditure) on IR has increased
from 0.003 in period 1 to 0.012 in period 7. If the public expenditure goes
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
58 Dipak R. Basu and Alexis Lazaridis
up, there will be direct pressure on the central bank to provide loans to the
government. The central bank can put pressure on the commercial banks to
extend some loans in public sector and less to the private sector, and the CD
of the commercial banks will be reduced as a result (at the same time reserve
ratio will be increased). While CD goes up in response to government
expenditure (G), the CI will have a similar effect on the currency to deposit
ratio.
Effects of the CD on prices declines rapidly, whereas the effect of the
exchange rate on price level is most prominent and does not vary much. It
demonstrates that although during the initial phase of planning, tight credit
policy may inﬂuence prices, in the longer run structural factors, public
expenditure, and import costs reﬂected through the exchange rate will affect
price levels more signiﬁcantly. The effect of the CI on the money supply
(MS) will be reduced from −0.005 in period 1 to −0.007 in period 6, and
it will slightly increase in period 7.
The impacts of the public expenditure and tax revenues on the two most
important variables, GNP(Y) and price level (P) can be analyzed as follows.
The increased government expenditure (G) will increase price level and
the impact will be intensiﬁed. The impact of the government expenditure
on the GNP will be reduced, which is consistent with the reduced impacts
of the tax revenues on the GNP over the planning period. Tax revenue will
have negative impacts on the GNP as well as on the price level, and the
impacts will decline over time. The negative impacts of the tax revenue on
the GNP is partly explained by the negative effects of tax revenue on the
currency to deposit ratios of the commercial banks, the main indicator for
the private sector activities. The government expenditure also have positive
impacts on the government bond sales and the impact will increase over
time due to the increasing difﬁculties of raising taxes, which will have
negative impacts on the growth prospects.
The effect of the exchange rate declines gradually, whereas the effect of
the interest rate is cyclical. This is due to the cyclical pattern of the central
discount rate in the optimum path. It is quite obvious that the impacts of
ﬁscal and monetary policies on output will ﬂuctuate over time depending
on the direction of change of these policies. The variations of the effects are
not unsystematic, as feared by Friedman and Schwartz (1963). However,
here we are analyzing an optimum path not the historical path.
5. Conclusion
The importance of timevarying coefﬁcients was recognized in statistical
literature long ago (Davis, 1941), where frequency domain approach of
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 59
the time series analysis was utilized (Mayer, 1972). Several authors since
then have used varying parameter regression analysis (Cooley and Prescott,
1973; Farley et al., 1975). In the above analysis, adaptive control system
was used in a statespace model where the reduced form parameters can
move over time.
However, variations over time are slow which indicates any absence
of explosive response (Das and Cristi, 1990; Tsakalis and Ioannou, 1990).
Cargil and Mayer (1978) also have observed stable movements of the coef
ﬁcients. Thus the role of the monetaryﬁscal policies on the economy is not
unsystematic, although it can vary over time.
Results obtained by other researchers showed that the effects of
monetary and ﬁscal policy change over time, and it is important to analyze
these changes in order to obtain timeconsistent monetaryﬁscal policy (De
Castro, 2006; Folster and Henrekson, 2001; Muscatelli and Tirelli, 2005).
The results obtained by using adaptive control method, showed similar
characteristics of the monetaryﬁscal policy.
The implication for the public policy is quite obvious. Timeconsistent
monetaryﬁscal policy demands continuous revision, otherwise, the effec
tiveness of the policy may deteriorate and as a result, the effects of monetary
and ﬁscal policy on major target variables of the economy may deviate from
their desired level. Our approach is a systematic way forward to analyze
these dynamics of monetary and ﬁscal policy.
References
Astrom, KJ and K Wittenmark (1995). Adaptive Control. Reading, MA: AddisonWesley.
Basu, DandALazaridis (1986). Amethod of stochastic optimal control by Bayesian ﬁltering
techniques. International Journal of System Sciences 17(1), 81–85.
Brannas, K and A Westlund (1980). On the recursive estimation of stochastic and time
varying parameters in economic system. In Optimization Techniques, K Iracki (ed.),
Berlin: Springer Verlag, 265–274.
Berdell, JF (1995). The prevent relevance of Hume’s openeconomy monetary dynamics.
Economic Journal l05(432), September, 1205–1217.
Box, GEP and DA Pierce (1970). Distribution of residual autocorrelation in autoregressive
integrated moving average time series models. Journal of the American Statistical
Association 65(3), 1509–1526.
Blanchard, O and R Perotti (2002). An empirical characterization of the dynamic effects of
changes in government spending and taxes on output. Quarterly Journal of Economics,
117(4), November, 1329–1368.
Cargil, TF and RAMeyer (1977). Intertemporal stability of the relationship between interest
races and prices. Journal of Finance 32, September, 427–448.
Cargil, TF and RA Mayer (1978). The time varying response of income to changes in
monetary and ﬁscal policy. Review of Economics and Statistics LX(1), February, 1–7.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
60 Dipak R. Basu and Alexis Lazaridis
Cripps, F and WAH Godley (1976). A formal analysis of the Cambridge economic policy
group Model. Economica 43, August, 335–348.
Cooley, TF and EC Prescott (1973). An adaptive regression model. International Economic
Review l4, June, 248–256.
Das, M and R Cristi (1990). Robustness of an adaptive pole placement algorithm
in the presence of bounded disturbances and slow time variation of parameters.
IEEE Transactions on Automatic Control 35(6), June, 762–802.
Davis, HT (1941). The Analysis of Economic Time Series. Bloomington: Principia Press.
De Castro, F (2006). Macroeconomic effects of ﬁscal policy in Spain. Applied Economics
38(8–10), May, 913–924.
Farley, J, MHinich and T McGuire (1975). Some comparison of tests for a shift in the slopes
of a multivariate time series model. Journal of Econometrics 3, August, 297–318.
Friedman, M and A Schwartz (1963). A monetary history of the United States, 1867–1960.
Princeton: Princeton University Press.
Folster, S and M Henrekson (2001). Growth effects of government expenditure and taxation
in rich countries. European Economic Review 45(8), 1501–1520.
Goldberger, AC, AL Nagar and HS Odeh (1961). The covariance matrices of reduced
form coefﬁcients and forecasts for a structural econometric model. Econometrica 29,
556–573.
Godley, W and M Lavoie (2002). Kaleckian models of growth in a coherent stockﬂow
monetary framework: A Kaldorian view. Journal of Post Keynesian Economics 24(2),
277–312.
Godley, W and M Lavoie (2007). Fiscal policy in a stockﬂow consistent (SFC) model.
Journal of Post Keynesian Economics 30(1), 79–100.
Hamberger, MJ (1971). The lag in the effect of monetary policy: Asurvey of recent literature.
Monthly ReviewFederal Reserve Bank of NewYork, December, 289–297.
Humphrey, TM(1981). AdamSmith and the monetary approach to the balance of payments.
Economic Review, Federal Reserve Bank of Richmond 67.
Humphrey, TM (1993). Money, Ranking and Inﬂation. Aldershot: Edward Elgar.
Khan, MS(1976). Amonetarymodel of balance of payments: The case ofVenezuela. Journal
of Monetary Economics 2, July, 311–332.
Khan, MS and PJ Montiel (1989). Growth oriented adjustment programs. IMF Staff Papers
36, June, 279–306.
Khan, MS and PJ Montiel (1990), A marriage between fund and bank models. IMF Staff
Papers 37, March, 187–191.
Lazaridis, A(1980). Application of ﬁltering methods in econometrics. International Journal
of Systems Science 11(11), 1315–1325.
Lucas, RE, Jr (1972). Expectation and the neutrality of money. Journal of Economic Theory
4, April, 103–124.
Meyer, T (1967). The lag in effect of monetary policy, some criticisms. Western Economic
Journal 5, September.
Mayer, R (1972) Estimating coefﬁcients that change over time. International Economic
Review 11, October.
Muscatelli, VAand P Tirelli (2005). Analyzing the interaction of monetary and ﬁscal policy:
does ﬁscal policy play a valuable role in stabilization. CESIFOEconomic Studies 51(4),
1–17.
Nicolao, G (1992). On the timevarying Riccati difference equation of optimal ﬁltering.
SIAM Journal on Control and Optimization 30(6), November, 1251–1269.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 61
Poole, W(1975). The relationship of monetary deceleration to business cycle peaks. Journal
of Finance 30, June, 697–712.
Radenkovic, M and A Michel (1992). Veriﬁcation of the selfstabilization mechanism in
robust stochastic adaptive control using Lyapunov function arguments. SIAM Journal
on Control and Optimization 30(6), November, 1270–1294.
Sargent, TJ and NWallace (1973). Rational expectations and the theory of economic policy.
Journal of Monetary Economics 2, April, 169–183.
Smets, V and R Wouters (2003). An estimated dynamic stochastic general equilibrium
model of the Euro area. Journal of European Economic Association 1(5), September,
1123–1175.
Tobin, J (1970). Money and income: Post Hoc Ergo Propter Hoc. Quarterly Journal of
Economics 84, May, 301–317.
Tsakalis, K and P Ioannou (1990). A new direct adaptive control scheme for timevarying
plant. IEEE Transactions on Automatic Control 35(6) June, 356–369.
Warburton, C (1971). Variability of the lag in the effect of monetary policy 1919–1965.
Western Economic Journal 9, June, 1919–1965.
Appendix A
Probability Density Function and Recursive Process
for ReEstimation
Under the assumptions stated in Section 3 and according to Bayes’ rule, the
conditional probability density function of π
i
given x
i+1
is Gaussian and is
given by:
p(π
i+1
x
i+1
) =
_
p(π
i
x
i
)p(π
i+1
, x
i+1
π
i
, x
i
)dπ
i
where
p(π
i+1
x
i
) = constant exp
_
−
1
2
π
i+1
−π
∗
i

2
S
−1
i
_
π
∗
i
= E(π
i
x
i
)
S
i
= cov(π
i
x
i
) (S
i
assumed to be invertible)
P(π
i+1
, x
i+1
π
i
, x
i
) = constant exp
_
−
1
2
_
_
_
_
π
i+1
−π
i
x
i+1
−H
i+1
π
i+1
_
_
_
_
2
C
−1
_
C =
_
Q
1
0
0 Q
2
_
, C
−1
=
_
Q
−1
1
0
0 Q
−1
2
_
.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
62 Dipak R. Basu and Alexis Lazaridis
Hence
p(π
i+1
x
i+1
) = constant
_
exp
_
−
1
2
˜
J
i
_
dπ
i
where
˜
J
i
= (π
i
−π
∗
i
)
(S
−1
i
+Q
−1
1
)(π
i
−π
∗
i
)
+(π
i+1
−π
∗
i
)
(Q
−1
1
+H
i+1
Q
−1
2
H
i+1
)(π
i+1
−π
∗
i
)
−2(π
i
−π
∗
i
)
Q
−1
1
(π
i
−π
∗
i
)
+(x
i+1
−H
i+1
π
∗
i
)
Q
−1
2
(x
i+1
−H
i+1
π
∗
i
)
−2(π
i+1
−π
∗
i
)
H
i+1
Q
−1
2
(x
i+1
−H
i+1
π
∗
i
). (a)
Now, deﬁne G
−1
i
= (S
−1
i
+Q
−1
1
) and consider the expression
˜
˜
J
i
= [(π
i
−π
∗
i
)
−G
i
Q
−1
1
(π
i+1
−π
∗
i
)]
×G
−1
i
[(π
i
−π
∗
i
)
−G
i
Q
−1
1
(π
i+1
−π
∗
i
)]. (b)
Note that G
i
is symmetric, since it is the sumof two(symmetric) covariance
matrices. Expanding Eq. (b) and noting that G
i
G
−1
i
= I we obtain
˜
˜
J
i
= (π
i
−π
∗
i
)
G
−1
i
(π
i+1
−π
∗
i
) − 2(π
i
−π
∗
i
)
Q
−1
1
(π
i+1
−π
∗
i
)
+(π
i+1
−π
∗
i
)
Q
−1
1
G
i
Q
−1
1
(π
i+1
−π
∗
i
). (c)
In view of Eqs. (b) and (c), Eq. (a) can be written as
˜
J
i
= [(π
i
−π
∗
i
)
−G
i
Q
−1
1
(π
i+1
−π
∗
i
)]
×G
−1
i
[(π
i
−π
∗
i
)
−G
i
Q
−1
1
(π
i+1
−π
∗
i
)]
+(π
i+1
−π
∗
i
)
(Q
−1
1
+H
i+1
Q
−1
2
H
i+1
−Q
−1
1
G
i
Q
−1
1
)(π
i+1
−π
∗
i
)
+(x
i+1
−H
i+1
π
∗
i
)
Q
−1
2
(x
i+1
−H
i+1
π
∗
i
)
−2(π
i+1
−π
∗
i
)
H
i+1
Q
−1
2
(x
i+1
−H
i+1
π
∗
i
).
Integration with respect to π
i
yields constant
_
exp
_
−
1
2
˜
J
i
_
dπ
i
=
constant exp
_
−
1
3
...
J
i
_
where
...
J
i
=
_
π
i+1
−π
∗
i
)
(Q
−1
1
−Q
−1
1
G
i
Q
−1
1
+H
i+1
Q
−1
2
H
i+1
)
_
π
i+1
−π
∗
i
_
+
_
x
i+1
−H
i+1
π
∗
i
)
Q
−1
2
_
x
i+1
−H
i+1
π
∗
i
_
−2
_
π
i+1
−π
∗
i
)
H
i+1
Q
−1
2
_
x
i+1
−H
i+1
π
∗
i
_
.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 63
Hence
p(π
i+1
x
i+1
) = constant exp
_
−
1
2
...
J
i
_
.
Since p(π
i+1
x
i+1
) is proportional to the likelihood function, by maximiz
ing the conditional density function, we are also maximizing the likelihood
in order to determine π
∗
i+1
. Note that minimization of
...
J
i
, is equivalent to
maximizing p(π
i+1
x
i+1
). To minimize
...
J
i
, we expand Eq. (c) eliminating
terms not containing π
i+1
. Then, we differentiate with respect to π
i+1
and
after equating to zero, we ﬁnally obtain (Lazaridis, 1980).
π
∗
i+1
= π
∗
i
+(Q
−1
1
−Q
−1
1
G
i
Q
−1
1
+H
i+1
Q
−1
2
H
i+1
)
−1
×H
i+1
Q
−1
2
(x
i+1
−H
i+1
π
∗
i
). (d)
Now, consider the composite matrix:
Q
−1
1
−Q
−1
1
G
i
Q
−1
1
where G
i
= (S
−1
i
+Q
−1
1
)
−1
.
Considering the matrix identity of householder, we can write
Q
−1
1
−Q
−1
1
(S
−1
i
+Q
−1
1
)
−1
Q
−1
1
= (Q
1
+S
i
)
−1
P
−1
i+1
.
Hence Eq. (d) takes the form:
π
∗
i+1
= π
∗
i
+K
i+1
(x
i+1
−H
i+1
π
∗
i
)
where K
i+1
, S
−1
i+1
and P
−1
i+1
are deﬁned in Eqs. (21)–(23).
It is recalled that H
0
is a null matrix, since no observations exist beyond
period 1 in the estimation process. The same applies for the vector x
0
.
Appendix B
It is recalled that the model was estimated using FIML method. The FIML
estimates are as follows (R
2
and adjusted R
2
refer to the corresponding 2
SLS estimates. Numbers in brackets are the corresponding tstatistics).
Estimated Model
1. A
t
= 0.842Y
t
(3.48)
+0.024A
t−1
(1.74)
−1.319IR
t
(1.34)
+1.051IR
t−t
(1.16)
−284EXR
t
(1.29)
+ 55191.5
(4.87)
R
2
= 0.99, R
2
= 0.92, DW = 1.76, ρ = 0.27
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
64 Dipak R. Basu and Alexis Lazaridis
2. (Y
t
) = A
t
+R
t
3. (BD)
t
= (G
t
+ LR
t
+ PF
t
) −(TY
t
+ GBS
t
+ AF
t
+ FB
t
)
4. PF
t
= 1148.80
(1.48)
+0.169 CFB
t−1
(2.39)
+144.374 WIR
t
(1.89)
5. GBS
t
= 0.641G
t
(1.14)
+0.677G
t−1
(1.41)
+1.591IR
t
(1.64)
−0.33AF
t
(0.23)
R
2
= 0.89, R
2
= 0.84, DW = 2.36, ρ = 0.23
(1.51)
6. MD = MS
7. (MS)
t
= [(1 + CD
t
)/(CD
t
+ RR
t
)](R
t
+ NDA
t
)
8. RR
t
= 0.008Y
t
(3.47)
−0.002Y
t−1
(−1.84)
+1.2571R
t
(1.87)
+2.7031R
t−1
(1.88)
−0.003T
(2.87)
+ 0.065
(2.72)
R
2
= 0.86, R
2
= 0.79, DW = 2.88, ρ = 0.26
(1.23)
9. CD
t
= 0.4391IR
t
(−1.49)
+0.158CD
t−1
(1.16)
+0.0007Y
t
(1.56)
+0.009Y
t−1
(1.73)
− 0.0057T
(−6.269)
+0.193
(11.95)
R
2
= 0.94, R
2
= 0.92, DW = 1.22, ρ = 0.64
(4.21)
10. MD
t
= 2.733RR
t
(−2.52)
−2.19IR
t
(−0.24)
+1.713IR
t−1
(2.17)
+1.275Y
t
(6.67)
−113335.0
(−0.088)
R
2
= 0.86, R
2
= 0.81, DW = 1.92, ρ = 0.26
(1.31)
11. IR
t
= 0.413MD
t−1
(1.93)
−0.814IR
t−1
(1.403)
+8.656CI
t
(1.68)
−1.351CI
t−1
+ 0.406Y
t−1
(1.56)
− 7.02
(−1.84)
R
2
= 0.98, R
2
= 0.95, DW = 2.48, ρ = 0.58
(3.21)
12. P
t
= 0.0004A
t
(−5.71)
+0.0002A
t−1
(1.77)
+0.105IMC
t
(1.48)
+0.421P
t−1
(3.79)
+32.895
(1.87)
R
2
= 0.98, R
2
= 0.93, DW = 2.09, ρ = 0.48
13. IMC
t
= 5.347
(1.34)
+19.352WPM
t
(2.03)
+9.017EXR
t
(0.97)
R
2
= 0.98, R
2
= 0.97, DW = 2.07, ρ = 0.31
(1.37)
14. R
t
= X
t
− IM
t
+K
t
+ PFT
t
+ FB
t
− PF
t
+ AF
t
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
Time Varying Responses 65
15. IM
t
= − 24.04
(−0.82)
−0.025IM
t−1
(−0.86)
+0.104Y
t
(1.59)
−0.089Y
t−1
(9.21)
−0.654IMC
t
(−1.26)
+ 1.123T
(0.27)
R
2
= 0.96, R2 = 0.92, DW = 2.52, ρ = −0.62
(−1.72)
16. CFB
t
=
t
r=−20
FB
r
λ
2
= 563.09
The estimated Chisquare satisﬁes the goodness of ﬁt test for the FIML
estimation.
Notations
A = Domestic absorption
AF = Foreign receipts (grants etc.)
BD = Government budget deﬁcits
CD = Credit to deposit ratio in the commercial banking sector
CFB = Cumulative foreign borrowing, i.e., foreign debt over a period of
20 years
CT = Discount rate of the RBI
FB = Foreign borrowing
G = Government expenditure
GBS = Government bond sales
IM = Value of imports
IMC = Import price index (1990 = 100)
IR = Interest rate in the moneymarket
K = Foreign capital inﬂows
LR = Lending (minus repayments to the states)
MD = Money demand
MS = Money supply
NDA = Net domestic asset creation by the RBI
P = Consumers’ price index (1990 = 100)
PFT = Private foreign transactions
PF = Foreign payments
R = Changes in foreign exchange reserve
RR = Reserve to deposit ratio in the commercial banking sector
TY = Government tax revenue
T = Time trend
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch02topic2
66 Dipak R. Basu and Alexis Lazaridis
WPM = World price index of India’s imports (1990 = 100)
WIR = World interest rate, average of European and US money market
rate
EXR = Exchange rate (Rs/US$)
X = Value of exports
Y = GNP at constant 1990 prices
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
Chapter 3
Mathematical
Modeling in Macroeconomics
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
Topic 1
The Advantages of Fiscal Leadership in an
Economy with Independent Monetary Policies
Andrew Hughes Hallet
James Mason University, USA
1. Introduction
In January 2006, Gordon Brown (as Britain’s ﬁnance minister) was widely
criticized by the European Commission and other policy makers for running
ﬁscal policies, which they considered to be too loose and irresponsible. The
UK ﬁscal deﬁcit had breached the 3% of GDP that is considered to be safe.
To make its point, the European Commission initiated an “excessive deﬁcit”
procedure against the UK government, claiming that its ﬁscal policies con
stituted a danger for the good performance of the UK economy and its
neighbors, if these deﬁcits were not contained and reversed. Yet the United
Kingdom, alone among its European partners, allows ﬁscal policy to lead in
order to achieve certain social expenditure and mediumterm output objec
tives and a degree of coordination with the monetary policies designed to
reach a certain inﬂation target. And the economic performance has been
no worse, and may have been better than elsewhere in the Eurozone. Was
Gordon Brown’s strategy so mistaken after all?
The British ﬁscal policy has changed radically since the days when it
tried to micromanage all of the aggregate demand with an accommodating
monetary policy in the 1960s and 1970s; and again during the 1980s, it
was designed to strengthen the economy’s supplyside responses, while the
monetary policies were intended to secure lower inﬂation.
The 1990s saw a return to more activist ﬁscal policies — but policies
designed strictly in combination with an equally active monetary policy
based on inﬂation targeting and an independent Bank of England. They are
set, in the main, to gain a series of medium to longterm objectives — low
debt, the provision of public services and investment, social equality, and
economic efﬁciency. The income stabilizing aspects of the ﬁscal policy
69
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
70 Andrew Hughes Hallet
have, therefore, been left to act passively through the automatic stabilizers,
which are part of any ﬁscal system, while the discretionary part (the bulk of
the policy measures) is set to achieve these longterm objectives. Monetary
policy, meanwhile, is left to take care of any shortrun stabilization around
the cycle; that is, beyond what, predictably, would be done by the automatic
stabilizers.
1
To draw a sharp distinction between actively managed longrun poli
cies, and nondiscretionary shortrun stabilization efforts restricted to the
automatic stabilizers, is of course the strategic policy prescription of Taylor
(2000). Marrying this with an activist, monetary policy directed at cyclical
stabilization, but based on an independent Bank of England and a mone
tary policy committee with the instrument (but not target) independence,
appears to have been the innovation in the UK policies. It implies a lead
ership role for the ﬁscal policy, which allows both ﬁscal and monetary
policies to be better coordinated — but without either losing their ability
to act independently.
2
Thus, Britain appears to have adopted a Stackelberg solution, which lies
somewhere between the discretionary (but Pareto superior) cooperative
solution, and the inferior but independent (noncooperative) solution, as
showninFig. 1.
3
Nonetheless, byforcingthe focus ontolongrunobjectives,
to the exclusion of the short term, this setup has imposed a degree of
precommitment (and potential for electoral punishment) on ﬁscal policy
because governments naturally wish to lead. But, the regime remains non
cooperative so that there is no incentive to renege on earlier plans in the
1
The Treasury estimates that the automatic stabilizers will, in normal circumstances, stabi
lize some 30% of the cycle; the remaining 70% being left to monetary policy (HM Trea
sury, 2003). The option to undertake discretionary stabilizing interventions is retained “for
exceptional circumstances” however. Nevertheless, the need for any such additional inter
ventions is unlikely: ﬁrst, because of the effectiveness of the forward looking, symmetric
and an activist inﬂation targeting mechanism adopted by the Bank of England; and, second,
because the longterm expenditure (and tax) plans are deliberately constructed in nominal
terms so that they add to the stabilizing power of the automatic stabilizers in more serious
booms or slumps.
2
For details on how this leadership vs. stabilization assignment is intended to work, see HM
Treasury (2003) and Section 3 below. Australia and Sweden operate rather similar regimes.
3
Figure 1 is the standard representation of the outcomes of the different game theory equi
libria when the reaction functions form an acute angle. The latter is assured for our context
when both sets of policy makers have some interest in inﬂation control and output stabi
lization, and the policy multipliers have the conventional signs (Hughes Hallett and Viegi,
2002). Stackelberg solutions, therefore, imply a degree of coordination in the outcomes,
relative to the noncooperative (uncoordinated) Nash solution.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 71
Figure 1. MonetaryFiscal Interactions and the Central Bank.
absence of changes in information. Thus, the policies and their intended
outcomes will be sustained by the government of the day.
4
2. The Fiscal Leadership Hypothesis
The hypothesis is that the United Kingdom’s improved performance is due
to the fact that the ﬁscal policy leads an independent monetary policy. This
leadership derives from the fact that ﬁscal policies typically have longrun
targets (sustainability and lowdebt), and is not easily reversible (public ser
vices and social equality), and does not stabilize well if efﬁciency is to be
maintained. Nevertheless, there are also automatic stabilizers in any ﬁscal
policy framework, implying that monetary policy must condition itself on
the ﬁscal stance at each point. This automatically puts the latter in a fol
lower’s role. This is helpful because it allows the economy to secure the ben
eﬁts of an independent monetary policy but also enjoys a certain measure
of coordination between the two sets of policy makers — discretionary/
automatic ﬁscal policies on one side, and active monetary policies on the
4
Stackelberg games, with ﬁscal policy leading, produce ﬁscal commitment: i.e., subgame
perfection with either strong or weak time consistency (Basar, 1989). In this sense, we have
“rules rather than discretion”. Commitment to the stabilization policies is then assured by
the independence of the monetary authority.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
72 Andrew Hughes Hallet
other. The extra coordination arises in this case because the constraints
on, and responses to an agreed leadership role reduce the externalties of
selfinterested behavior, which independent agents would otherwise impose
on one another. This allows a Pareto improvement over the conventional
noncooperative (full independence) solution, without reducing the cen
tral bank’s ability to act independently. It is important to realize that the
coordination here is implicit and therefore rulebased, not discretionary.
To show that the UK ﬁscal policy does lead monetary policy in this
sense, and that this is the reason for the Pareto improved results in Table 1,
I produce evidence in three parts:
• Institutional evidence: taken from the UK Treasury’s own account of
how its decisionmaking works, what goals it needs to achieve, and how
the monetary stance may affect it or vice versa;
Table 1. Generalized Taylor rules in the UK and EU12.
S.No. Const r
t−1
r
t−1
r
t−1
π
e
j,k
π
e
j,k
π
e
j,k
j, k j, k j, k gap pd debt
Dependent variable: central bank lending rate, r
t
:
For the UK, monthly data from June 1997–January 2004
(1) −1.72
(2.16)
0.711
(5.88)
1.394
(2.66)
+6, +18 0.540
(2.06)
— —
R
2
= 0.91, F
3,21
= 82.47, N = 29
(2) −2.57
(2.59)
0.598
(4.38)
1.289
(0.66)
+9, +21 1.10
(1.53)
−0.67
(0.83)
0.43
(0.50)
R
2
= 0.90, F
5,19
= 44.1, N = 25
For the Eurozone (EU12), monthly data from January 1999–January 2004
(1) −0.996
(0.76)
0.274
(1.04)
1.714
(3.89)
+9, +21 0.610
(1.77)
— —
R
2
= 0.82, F
3,11
= 22.2, N = 15
(2) −13.67
(0.59)
0.274
(1.04)
1.110
(1.19)
−6, +6 0.341
(1.22)
0.463
(2.89)
0.191
(0.63)
R
2
= 0.87, F
5,13
= 24.7, N = 19
Notation: j, k give the bank’s inﬂation forecast interval; π
e
j,k
represents the average inﬂation
rate expected over the interval t + j to t + k: Eπ
t+j,t+k
; gap = GDP–trend GDP; pd =
primary deﬁcit/surplus as a percentage of GDP (a surplus > 0) and debt = debt/GDP ratio.
Estimation: instrumented 2SLS; tratios in parentheses; j, k determined by search; and the
output gap is obtained from a conventional HP ﬁlter to determine trend output.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 73
• Empirical evidence: the extent to which the monetary policy is affected
by the ﬁscal policy, but ﬁscal policy with its longer objectives does not
depend on monetary policies and
• Theoretical evidence: shows how, with different goals for governments
and central banks, Pareto improving results can be expected from ﬁscal
leadership. The United Kingdom has, therefore, had the incentive, and
the capacity, to operate in this way.
3. Institutional Evidence: The Treasury’s Mandate
3.1. History
The British ﬁscal policy has changed a great deal over the past 30 years.
Fiscal policy was the principal instrument of the economic policy in the
1960s and 1970s, and was focused almost exclusively on demand manage
ment. Monetary policy and the provision of public services (subject to a
lower bound) were essentially accommodating factors, since the main con
straint was perceived to be the ﬁnancing of persistent currentaccount trade
deﬁcits. The result of this was a shortterm view, in which the conﬂicts
between the desire for growth (employment) and the recurring evidence
of overheating (trade deﬁcits) led to an unavoidable sequence of “stopgo”
policies. There were many (Dow, 1964), who argued that the real problem
was a lack of longrun goals for the ﬁscal policy; and that the lags inherent
in recognizing the need for a policy change and in implementing it through
Parliament till it takes hold in the markets, had produced a system that was
actually destabilizing rather than stabilizing. But, there would have been
conﬂicts anyway because there were two or more competing goals, but
effectively only one instrument to reach them.
Such a system could not provide the longrun goal of public services,
public investment, and social equity. It certainly proved too difﬁcult to turn
the ﬁscal policy on and off as fast and frequently as required, and pre
commit to certain expenditure and taxation plans at the same time. The
point here is that, if you cannot precommit ﬁscal policy to certain goals,
then you will not be able to precommit monetary policy either. This means
that the “stable growth with low inﬂation” objective will be lost.
In the 1980s, the strategy changed when a newregime took ofﬁce, com
mitted to reducing the size and role of the government in the economy. The
demand management role of the ﬁscal policy was phased out. This role
passed over to inﬂation control and monetary management — with limited
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
74 Andrew Hughes Hallet
success in the earlier phases of monetary and exchange rate targeting, but
with rather more success when it was formalized as an inﬂationtargeting
regime under an independent Bank of England and monetary policy com
mittee. In this period, therefore, the role of ﬁscal policy was to provide
the “conditioning information” within which the monetary management
had to work. It was split into two distinct parts. Shortterm stabilization
of output and employment was left to the automatic stabilizers inherent in
the prevailing tax and expenditures rules. Discretionary adjustments would
only be used in exceptional circumstances, if at all.
5
The rest of the ﬁscal
policy, being the larger part, could then be directed at longterm objectives:
in this instance, changes to free up the supply side (and enhance com
petitiveness) on the premise that greater microeconomic ﬂexibility would
both reduce the need for stabilizing interventions (relative wage and price
adjustments would do the job better) and provide the conditions for stronger
employment and growth (HMT, 2003). Given this, the longrun goals of
better public services, efﬁciency, and social equity could then be attained.
In addition, as the market ﬂexibility changes were made, this part of ﬁs
cal policy could be increasingly focused on providing the longrun goals
directly.
6
3.2. Current Practice
According to the treasury’s own assessment (HMT, 2003), the UKﬁscal pol
icy now leads monetary policy in two senses. First, ﬁscal policy is decided
in advance of monetary or other policies (pp. 5, 63, 64, 74),
7
and with a
longtime horizon (pp. 5, 42, 48, 61). Second, monetary policy is charged
with controlling inﬂation and stabilizing output around the cycle — rather
than steering the economy as such (pp. 61–63). Fiscal policy, therefore, sets
the conditions within which monetary policy has to respond and achieve its
own objectives (pp. 9, 15, 67–68). The shortterm ﬁscal interventions, now
less thanhalf the total anddeclining(p. 48, Box5.3, Section6), are restricted
to the automatic stabilizers — with effects that are known and predictable.
5
This follows the recommendations in Taylor (2000). However, because of the inevitable
tradeoff between cyclical stability and budget stability, it is likely to be successful only in
markets with sufﬁciently ﬂexible wages and prices (see Fatas et al., 2003).
6
This summary is taken from the UK Treasury’s own view (HMT, 2003), Sections 2, 4, and
pp. 34–38.
7
In this section, page or section numbers given without further reference are all cited from
HMT (2003).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 75
The shortrun discretionary components are negligible (p. 59, Table 5.5),
and the longrun objectives of the policy will always take precedence in
cases of conﬂict (pp. 61, 63–68). Fiscal policy would, therefore, not be
used for ﬁne tuning (pp. 11, 63), or for stabilization (pp. 1, 14). This burden
will be carried by interest rates, given the ﬁscal stance and its forward plans
(pp. 1, 7, 11, 37).
Third, given the evidence that consumers and ﬁrms often do not look
forward efﬁciently and may be creditconstrained, and also that the impacts
of the ﬁscal policy are uncertain and have variable lags (pp. 19, 26, 48;
Taylor, 2000), it makes sense for the ﬁscal policy to be used consistently
and sparingly and in a way that is clearly identiﬁed with the longterm
objectives. The United Kingdom has determined these objectives to be (pp.
11–13, 39–41, 61–63, 81–82):
(a) The achievement of sustainable public ﬁnances in the long term; low
debt (40% of GDP) and symmetric stabilization over the cycle.
(b) Asustainable andimprovingdeliveryof public services (healthandedu
cation); improving competitiveness/supplyside efﬁciency in the long
term; and the preservation of social (and intergenerational) equity and
alleviation of poverty.
(c) Recognition that the achievement of these objectives is often contrac
tual and cannot easily be reversed once committed. The long lead times
needed to build up these programs mean that the necessary commit
ments must be made well in advance. Frequent changes would conﬂict
with the government’s mediumtermsustainability objectives, and can
not be fulﬁlled. Similarly, changes in direct taxes will affect both equity
and efﬁciency, and should seldom be made.
(d) The formulation of clear numerical objectives is consistent with these
goals; a transparent set of institutional rules to achieve them; and a com
mitment to a clear separation of these goals fromeconomic stabilization
around the cycle.
8
(e) To ensure that ﬁscal policy can operate along these lines, public expen
ditures are planned with ﬁxed threeyear departmental expenditure lim
its which, when combined with decision and implementation lags of up
to two years, means that the bulk of the ﬁscal policy has to be planned
8
The credibility of ﬁscal policy, and by extension an independent monetary policy, is seen as
depending on these steps, and on symmetric stabilization. See also Dixit (2001), and Dixit
and Lambertini (2003).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
76 Andrew Hughes Hallet
with a horizon of up to ﬁve years — vs a maximum of two years in
the inﬂation targeting rules operated by the Bank of England. More
over, these spending limits are constraints deﬁned in nominal terms, so
that they will be met. In addition, the Treasury operates a tax smooth
ing approach — having rejected “formulaic rules” that might have
adjusted taxes or expenditures, or the various tax regulators, or credit
taxes that could have stabilized the economy in the short term. The
bulk of ﬁscal policy will remain focused on the medium to long term
therefore.
To summarize, the institutional structure itself has introduced ﬁscal
leadership. And, since the discretionary elements are smaller than else
where — smaller than the automatic stabilizers (Tables 1–5) and declining
(Box 5.3) — something else (monetary policy) must have taken up the bur
den of shortrun stabilization. This arrangement is, therefore, best modeled
as a Stackelberg game, with institutional parameters for goals, indepen
dence, priorities etc. I shall argue that this has had the very desirable result
of increasing the degree of coordination between policy makers without
compromising their formal or institutional independence —or indeed, their
credibility for delivering stability and low inﬂation. Constrained indepen
dence, in other words, designed to reduce the externalities, which each party
might, otherwise, impose on the other.
4. Empirical Evidence
The next step is to establish whether the UK authorities have followed
the leadership model described above. The fact that they say that they
follow such a strategy does not prove that they do so. The difﬁculty
here is that, although the asymmetry in exante (anticipated) responses
between the follower and leader is clear enough in the theoretical model —
the follower expects no further changes from the leader after the fol
lower chooses his reaction function, whereas the leader takes this reac
tion function into account — such an asymmetry and zero restriction
will not appear in the expost (observed) responses that emerge in the
ﬁnal solution (Basar and Olsder, 1999; Brandsma and Hughes Hallett,
1984; Hughes Hallett, 1984). Since I have no data on anticipations, this
makes it difﬁcult to test for leadership directly. But, we can use indirect
tests based on the degree of competition, or complimentarity, between the
instruments.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 77
4.1. Monetary Responses
For monetary policy, it is widely argued that the authorities’ decisions can
best be modeled by a Taylor rule
9
:
r
t
= ρr
t−1
+αE
t
π
t+k
+βgap
t+h
α, β, ρ ≥ 0 (1)
where k, h represent the authorities’ forecast horizon
10
and may be positive
or negative. Normally, α > 1 will be required to avoid indeterminancy:
that is, arbitrary variations in output or inﬂation, as a result of unanchored
expectations in the private sector. The relative size of α and β then reveals
the strength of the authorities’ attempts to control inﬂation vs. income sta
bilization; and ρ their preference for gradualism. I set h = 0 in Eq. (1),
since monetary policy appears not to depend on the expected output gaps
(Dieppe et al., 2004).
In order to obtain an idea of the inﬂuence of ﬁscal policies on monetary
policy, I include some Taylor rule estimates — with and without ﬁscal
variables — in Table 1. They show such rules for the United Kingdom and
the Eurozone since 1997 and 1999, respectively, the dates when new policy
regimes were introduced. The Eurozone has been included to emphasize
the potential contrast between ﬁscal leadership in the United Kingdom, and
the lack of it in Europe.
4.2. Different Types of Leadership
Conventional wisdom would suggest that Europe has either monetary lead
ership or independent policies, and hence policies which are either jointly
dependent in the usual way, or which are complementary and mutually sup
porting. The latter implies that the monetary policy tends to expand/contract
whenever ﬁscal policy needs to expand or contract — but not necessarily
vice versa, when money is expanding or contracting and is sufﬁcient to
9
Taylor (1993b). One can argue that the policy should be based on fully optimal rules
(Svensson, 2003), of which Eq. (1) will be a special case. But, it is hard to argue that policy
makers actually do optimize when the additional gains fromdoing so may be small and when
the uncertainties in their information, policy transmissions, or the economy’s responses may
be quite large. In practice, therefore, the Taylor rule approach is often found to ﬁt central
bank behavior better.
10
In principle, k and h may be positive or negative: positive if the policy rule is based on
futureexpected inﬂation, to head off an anticipated problem, as in the Bank of England. But
negative if interest rates are to follow a feedback rule to correct past mistakes or failures.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
78 Andrew Hughes Hallet
control inﬂation on its own. This is a weak form of monetary leadership in
which ﬁscal policies are an additional instrument for use in cases of par
ticular difﬁculty, rather than the policies in a Nash game with conﬂicting
aims that need to be reconciled.
More generally, leadership implies complimentarity among policy
instruments in the leader’s reaction function, but conﬂicts among them in
the follower’s responses. A weak form of leadership also allows for inde
pendence among instruments in the leader’s policy rules. Thus, monetary
leadership would imply some complimentarity (or independence) in the
Taylor rule, but conﬂicts in the ﬁscal responses. And ﬁscal leadership would
mean complimentarity or independence in the ﬁscal rule, but conﬂicts in
the monetary responses. Evidently, from Section 3, we might expect Stack
elberg leadership (with ﬁscal policy leading) in the UK, but the opposite in
the Eurozone.
4.3. Observed Behavior
The upper equations in each panel of Table 1 yield the standard results for
the monetary behavior in both the United Kingdomand the Eurozone. Both
monetary authorities have targeted expected inﬂation more than the output
gap since the late 1990s — and with horizons of 18–21 months ahead. The
European Central Bank (ECB) has been more aggressive in this respect.
But, contrary to conventional wisdom, it was also more sensitive to the
output gap and had a longer horizon and less policy inertia.
However, if we allow monetary policies to react to the changes in ﬁscal
stance, we get different results (the lower equations). Here, we see that the
UKmonetary decisions may take ﬁscal policy into account, but the effect is
not signiﬁcant or well deﬁned. However, this model of monetary behavior
does imply more activist policies, a longer forecast horizon (up to two years
as the Bank of England claims) and greater attention to the output gap —
the symmetry in the UK’s policy rule. And to the extent that ﬁscal policy
does have an inﬂuence, it would be as a substitute (or competitor) for mone
tary policy — ﬁscal deﬁcits lead to higher interest rates. This is potentially
consistent with ﬁscal leadership, since this form of leadership can allow
independence or complimentarity between instruments in the leader’s rule.
We need to check the ﬁscal reaction functions directly. The lack of sig
niﬁcance for the debt ratio is easily understood, however. Since this is a
declared longrun objective of the ﬁscal policy, it would not be necessary
for the monetary policy to take it into account. So far, the evidence could
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 79
imply a Stackelberg follower role or independence for monetary policy,
easing the competition (externalities) among policy instruments.
The ECB results look quite different. Once ﬁscal effects are included,
the concentration on inﬂation control is much reduced and the forecast
horizon shrinks to six months. Moreover, a feedback element, to correct
the past mistakes, comes in. At the same time, output stabilization becomes
less important, which implies that the symmetric targeting goes out. Instead,
monetary policy now appears to react to ﬁscal policy, but with the “wrong”
sign: the larger the primary deﬁcit, the looser will be the monetary policy.
In this case, therefore, the policies are acting as complements — circum
stances, which call for a primary deﬁcit will also call for a relaxation of the
monetary policy. Thus, we have an evidence of monetary leadership, where
ﬁscal policy is used as an additional policy instrument.
4.4. Fiscal Reaction Functions in the UK and Eurozone
Many analysts have hypothesized that ﬁscal policy responses can best be
modeled by means of a “ﬁscal Taylor rule”:
d
t
= a
t
+γ gap
t
+sd γ > 0 (2)
where sd = structural deﬁcit ratio, d
t
is the actual deﬁcit ratio (d > 0
denotes a surplus),
11
and a
t
represents all the other factors such as the
inﬂuence of monetary policy, existing or anticipated inﬂation, the debt bur
den, or discretionary ﬁscal interventions. The coefﬁcient γ then gives a
measure of an economy’s automatic stabilizers. The European Commis
sion (2002), for example, estimates γ ≈ 1/2 for Europe — a little more in
countries with an extensive social security system, a little less elsewhere.
And a similar relationship is thought to underlie UK’s ﬁscal policy (see
HMT, 2003: Boxes 5.2 and 6.2).
The expected signs of any remaining factors are not so clear. The debt
burden should increase current deﬁcits, unless there is a systematic debt
reduction programunderway. Inﬂation should have a positive impact on the
deﬁcit ratio if ﬁscal policy is used for stabilization purposes, but it had no
effect otherwise. Finally, the output gap should also have a positive impact
on the deﬁcit ratio, if the latter is being used for stabilization purposes —
in which case interest rates should be negatively correlated with the size
11
See Taylor (2000), Gali and Perotti (2003), Canzoneri (2004), and Turrini and in ‘t Veld
(2004) for similar formulations. The European Commission (2002) uses a similar rule, but
deﬁnes d > 0 to be a deﬁcit. They therefore expect γ to be negative.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
80 Andrew Hughes Hallet
Table 2. Fiscal policy reaction functions.
For the United Kingdom, sample period 1997q3–2004q2
d
t
= −0.444
(0.37)
+0.845debt
t
(7.64)
+0.0685r
t
(0.30)
+1.076gap
t
(1.72)
R
2
= 0.78,
F
3,24
= 33.23
For the Eurozone, sample period 1999q1–2004q2
d
t
= 3.36
(2.96)
+1.477d
t−1
(9.66)
−0.740π
e
+9,21
(2.21)
−0.207r
t
(1.16)
−0.337gap
t
(2.18)
R
2
= 0.95,
F
4,11
= 54.55
Key: d
t
= gross deﬁcit/GDP (%), where d < 0 denotes a deﬁcit; debt
t
= debt/GDP (%);
r
t
is the central bank lending rate; π
e
j,k
and gap
t
as in Table 2(a).
Estimation method, instrumented 2SLS; linear interpolation for quarterly deﬁcit ﬁgures;
and tratios in parentheses.
Note: At current debt, interest rates and inﬂation targets, these estimates imply a structural
deﬁcit of about 0.1% for the United Kingdom and 2.6% for the Eurozone.
of the deﬁcit, because monetary policies focused on inﬂation, and ﬁscal
policies focused on shortrun stabilization, would conﬂict. Conversely, a
negative association with the output gap, but a positive one with interest
rates, would imply no automatic stabilizer effects but mutually supporting
policies: i.e., higher interest rates go with tighter ﬁscal policies. This implies
complementary policies.
Table 2 includes our estimates of the ﬁscal policy reaction functions
for the United Kingdom and the Eurozone. Higher debt increases the sur
plus ratio in the United Kingdom, but has no effect in the Euro area. So,
while the United Kingdom, evidently, has had a systematic debt reduc
tion program, no such efforts have been made in Europe. Inﬂation, on the
other hand, has had no effect on the UK deﬁcit, but a negative one in
Europe. Similarly, the output gap produces a negative reaction in Europe,
but a positive one in the United Kingdom. These two variables, therefore,
indicate that the ﬁscal policy has been used for output stabilization in the
United Kingdom — consistent with allowing automatic stabilizers to do
the job — but for purposes other than this in the Eurozone. This result
ﬁts in neatly with Europe’s evident inability to save for a rainy day in
the upturn, and the inability to stabilize in the downturn because of the
stability pact.
12
12
Buti et al. (2003). The presence of debt in the UK rule indicates that sustainability has
been a primary target in the United Kingdom, but not in the Eurozone.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 81
Consequently, the United Kingdom appears to use ﬁscal policy for sta
bilization in a minor way as claimed, while the Euro economies seem not
to use ﬁscal policy this way at all. Conﬁrmation of this comes from the
responses to interest rate changes, which are positive but very small and
statistically insigniﬁcant in the United Kingdom, but negative and near sig
niﬁcant in Europe. This implies that the British ﬁscal policies are chosen
independently of the monetary policy, as suggested by our weak leader
ship model. But, if there is any association at all, then both the ﬁscal and
monetary policies would be compliments and weakly coordinated.
The UK results are therefore inconclusive. They are consistent with
independent policies, or ﬁscal leadership — the latter, despite the insigniﬁ
cance of the direct ﬁscalmonetary linkage, because of the signiﬁcant output
gap term in the ﬁscal equation which, given the same effect is not found in
the monetary policy reactions, suggests ﬁscal leadership may in fact have
been operating. In any event, there is no suggestion of monetary leadership;
if anything happens, the results imply independence or ﬁscal leadership.
In the Eurozone, the results are quite different. Here, the signiﬁcant
result is the conﬂict among instruments in monetary policy (and essentially
the same conﬂict in the ﬁscal policy reactions). Hence, the policies are
competitive, which suggests that they forma Nash equilibrium(or possibly
monetary leadership, since the coefﬁcient on ﬁscal policy in the Taylor
rule is small and there is no output gap smoothing). This suggests weak
monetary leadership, or a simple noncooperative game.
5. Theoretical Evidence: A Model of Fiscal Leadership
5.1. The Economic Model and Policy Constraints
The key question now is: would governments want to pursue ﬁscal pre
commitment? Do they have an incentive to do so? And would there be a
clear improvement in terms of an economic performance if they did? More
important, would the ﬁscal leadership model be more advantageous, if ﬁscal
policy was limited by a deﬁcit rule in the form of “hard” targets (as in the
original stability pact) or “soft” targets?
To answer these questions, we extend a model used in Hughes Hallett
and Weymark (2002; 2004a;b; 2005) to examine the problem of monetary
policy design when there are interactions with ﬁscal policy. For exposition
purposes, we suppress the spillovers among countries and focus on the
following three equations to represent the economic structure of any one
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
82 Andrew Hughes Hallet
country
13
:
π
t
= π
e
t
+αy
t
+u
t
(3)
y
t
= β(m
t
−π
t
) +γg
t
+ε
t
(4)
g
t
= m
t
+s(by
t
−τ
t
) (5)
where π
t
is inﬂation in period t, y
t
is the growth in output (relative to trend)
in period t, and π
e
t
represents the rate of inﬂation that the rational agents
expect to prevail in period t, conditional on the information available at the
time expectations are formed. Next, m
t
, g
t
, and τ
t
represent the growth in
the money supply, government expenditures, and tax revenues in period t,
and u
t
and ε
t
are randomdisturbances, which are distributed independently
with zero mean and constant variance. All variables are deviations from
their steadystate (or equilibrium) growth paths, and we treat trend budget
variables as precommitted and balanced. Deviations fromthe trend budget
are, therefore, the only discretionary ﬁscal policy choices available. The
coefﬁcients α, β, γ, s, and b are all positive by assumption. The assump
tion that γ is positive is sometimes controversial.
14
However, the shortrun
impact multipliers derived from Taylor’s (1993a) multicountry estimation
provide an empirical support for this assumption (as does the HMT, 2003).
According to Eq. (3), inﬂation is increasing, as the rate of inﬂation pre
dicted by private agents and in output growth. Equation (4) indicates that
both monetary and ﬁscal policies have an impact on the output gap. The
microfoundations of the aggregate supply Eq. (3), originally derived by
Lucas (1972; 1973), are well known. McCallum (1989) shows that aggre
gate demand equation like Eq. (4) can be derived from a standard, multi
period utilitymaximization problem.
Equation (5) describes the government’s budget constraint. In ear
lier chapters, we allowed discretionary tax revenues to be used for
13
Technically, we assume a blockwise orthogonalization of the traditional multicountry
model to produce independent semireduced forms for each country. The disturbance terms
may, therefore, contain foreign variables, but they will have zero means so long as these
countries remain on their longrun (equilibrium) growth paths on average (all variables being
deﬁned as deviations from their equilibrium growth paths).
14
Barro (1981) argues that government purchases have a contractionary impact on output.
Our model, by contrast, treats ﬁscal policy as important because: (i) ﬁscal policy is widely
used to achieve redistributive and public service objectives; (ii) governments cannot pre
commit monetary policy with any credibility if ﬁscal policy is not precommitted (Dixit
and Lambertini, 2003), and (iii) Central Banks, and the ECB, in particular, worry intensely
about the impact of ﬁscal policy on inﬂation and ﬁnancial stability (Dixit, 2001).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 83
redistributive purposes only, but permitted discretionary expenditures for
enhancingoutput. We further assumedthat there were twotypes of agents —
rich and poor — and that only the rich use their savings to buy government
bonds. Based on this view, b would be the proportion of pretax income
(output) going to the rich and s, the proportion of aftertax income that the
rich allocate to saving. Tax revenues, τ
t
, can then be used by the government
to redistribute income from the rich to poor, either directly or via public
services. This structure, therefore, has outputenhancing expenditures g
t
,
and discretionary transfers τ
t
. Both are ﬁnanced by aggregate tax revenues;
that is, from discretionary and trend revenues. Expenditures above these
revenues must be ﬁnanced by the sale of bonds.
5.2. An Alternative Interpretation
We could, however, take a completely different interpretation of Eq. (5).
We could take the terms(by
t
−τ
t
) to be the proportion of the budgetdeﬁcit
ratio, as it currently exists, adjusted for the effect of growth on this ratio
and any discretionary taxes raised in this period, which the government
now proposes to spend in period t. The deﬁcit ratio itself is of course
d = (G − T)/Y = (e − r), where G and T are the absolute levels of
government spending and tax revenues, and e and r are their counterparts as
a proportion of Y. If the economy grows, then d = (G−T)/Y −˙ y(e−r)
where ˙ y denotes the growth rate in national income. If we wish to see what
would happen to this deﬁcit ratio if ﬁscal policies were not changed, it
means new spending can only be allowed to take place out of new revenues
generated by growth: G = T = rY. Inserting this, we have d =
−(e − r)˙ y = −d ˙ y. Since y
t
is the deviation of Y from its steadystate
path, by
t
in Eq. (5) will equal d, the change in the existing deﬁcit ratio
under existing expenditure plans and tax codes, if b = −(e − r)/Y
1
. The
government will spend some proportion of that, s, less new discretionary
taxes in the current period. Hence, s would typically equal 1, although it
might be less, if some parts were saved in social security funds or other
assets. This deﬁnes what would happen to the deﬁcit ratio if there were no
change in existing ﬁscal policies; but the termin τ
t
shows that governments
may also raise additional taxes in order to reduce the current deﬁcit ratio
to some desired target value, θ, say. This target is likely to be θ = 0, to
balance the budget over the cycle, as required by the stability pact. We give
governments such a target, in the form of either a soft or a hard rule, in the
objective functions that are discussed in Section 5.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
84 Andrew Hughes Hallet
Given this new interpretation, we can now solve for π
e
t
, π
t
, and y
t
from
Eqs. (3) and (4). This yields the following reduced forms:
π
t
(g
t
, m
t
) = (1 +αβ)
−1
_
αβm
t
+αγg
t
+m
e
t
+
γ
β
g
e
t
+αε
t
+u
t
_
(6)
y
t
(g
t
, m
t
) = (1 +αβ)
−1
[βm
t
+γg
t
−βm
e
t
−γg
e
t
+ε
t
−βu
t
]. (7)
Solving for τ
t
using Eqs. (5) and (7), then yields:
τ
t
(g
t
, m
t
) = [s(1 +αβ)]
−1
[(1 +αβ +sbβ)m
t
−(1 +αβ −sbγ)g
t
−sbβm
e
t
−sbγg
e
t
+sb(ε
t
−βu
t
)]. (8)
5.3. Government and Central Bank Objectives
In our formulation, we allow for the possibility that the government and
an independent central bank may differ in their objectives. In particular,
we assume that the government cares about inﬂation stabilization, output
growth, and the provision of public services (and hence the size of the public
sector deﬁcit or debt); whereas the central bank, if left to itself, would be
concerned only with the ﬁrst two objectives,
15
and possibly only the ﬁrst
one. We also assume that the government has been elected by majority
vote, so that the government’s loss function reﬂects society’s preferences
to a signiﬁcant extent.
Formally, the government’s loss function is given by:
L
g
t
=
1
2
(π
t
− ˆ π)
2
−λ
g
1
y
t
+
λ
g
2
2
[(b −θ)y
t
−τ
t
]
2
(9)
where ˆ π is the government’s inﬂation target, λ
g
1
is the relative weight or
importance that the government assigns to output growth,
16
and λ
g
2
is the
relative weight, which it assigns to the debt or deﬁcit rule. The parameter θ
represents the target value for the debt or deﬁcit to the GDP ratio, which the
government would like to reach: hence (b −θ)y
t
becomes the target for its
15
Since the central bank has no instruments to control the debt itself, it can only react to
poor ﬁscal discipline indirectly: e.g., to the extent that its inﬂation objective is compromised,
where it does have an instrument.
16
Barro and Gordon (1983) also adopt a linear output target. In the delegation literature, the
output component in the government’s loss function is usually represented as quadratic to
reﬂect an output stability objective. In our model, the quadratic term in debt/deﬁcits allows
monetary and ﬁscal policy to play a stabilization role as well as pick a position on the
economy’s outputinﬂation tradeoff.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 85
discretionary revenues, τ
t
. All the other variables are as previously deﬁned.
We may regard large values of λ
g
2
, say λ
g
2
> max[1, λ
g
1
], as deﬁning a
“hard” debt or deﬁcit rule; and smaller values, λ
g
2
< min[1, λ
g
1
], as a “soft”
debt or deﬁcit rule.
The objectives of the central bank, however, may be quite different from
those of the government. We model them as follows:
L
cb
t
=
1
2
(π − ˆ π)
2
−(1 −δ)λ
cb
y
t
−δλ
g
1
y
t
+
δλ
g
2
2
[(b −θ)y
t
−τ
t
]
2
(10)
where 0 ≤ δ ≤ 1, and λ
cb
is the weight, which the central bank assigns
to the output growth. The parameter δ measures the degree to which the
central bank is forced to take the government’s objectives into account.
The closer δ is to 0, the greater is the independence of the central bank in
making its choices. And the lower is λ
cb
is, the greater is the degree of its
conservatism in making these choices.
In Eq. (9), we have deﬁned the government’s inﬂation target as ˆ π. The
fact that the same inﬂation target appears in Eq. (10) would be correct for
the cases where the bank has the instrument independence, but not target
independence. However, it is easy to relax this assumption and allow the
central bank to choose its own target, as the ECB does. But, as I show in
Hughes Hallett (2005), there is no advantage in doing so since the govern
ment would simply adjust its parameters to compensate. Hence, only if the
bank is free to choose the value of λ
cb
as well, do we get an extra advantage.
Yet, even this will not be enough to outweigh the advantages to be gained
from ﬁscal leadership — for the reasons noted in Section 5.
A second feature is that Eqs. (9) and (10) specify symmetric inﬂation
targets around ˆ π. Symmetric inﬂation targets are particularly emphasized
as being required of both monetary policy and the ﬁscal authorities (HMT,
2003). We have, therefore, speciﬁed both to be features of the government
and the central bank objective functions here. However, it is not clear that
symmetry has been a feature of the European policy making in practice.
Indeed, the ECB has acquired a reputation for being more concerned to
tighten monetary policy when π > ˆ π, than it is to loosen it when π < ˆ π.
Consequently, rather than symmetry in the outputgap target, we have
asymmetric penalties, which tighten the ﬁscal constraints in recessions,
but weaken them in a boom when the debt and deﬁcits would be falling
anyway. On one hand, this might not be ideal, since it introduces an element
of procyclicality; on the other hand, it brings the rule closer to reality.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
86 Andrew Hughes Hallet
5.4. Institutional Design and Policy Choices
We characterize the strategic interaction between the government and the
central bank as a twostage noncooperative game in which the structure of
the model and the objective functions are common knowledge. In the ﬁrst
stage, the government chooses the institutional parameters δ and λ
cb
. The
second stage is a Stackelberg game in which ﬁscal policy takes on a leader
ship role. In this stage, the government and the monetary authority set their
policy instruments, given the δ and λ
cb
values determined at the previous
stage. Private agents understand the game and form rational expectations
for future prices in the second stage. Formally, the policy game runs as
follows:
5.4.1. Stage 1
The government solves the problem:
min
δ,λ
cb
EL
g
(g
t,
m
t,
δ, λ
cb
) = E
_
1
2
[π
t
(g
t
, m
t
) − ˆ π]
2
−λ
2
1
[y
t
(g
t
, m
t
)]
_
+
λ
g
2
2
E[(b −θ)y
t
(g
t
, m
t
) −τ
t
(g
t
, m
t
)]
2
(11)
where L
g
(g
t
, m
t
, δ, λ
cb
) is Eq. (9) evaluated at (g
t
, m
t
, δ, λ
cb
), and E
denotes expectations.
5.4.2. Stage 2
(a) Private agents form rational expectations about future prices π
e
t
, before
the shocks u
t
and ε
t
are realized.
(b) The shocks u
t
and ε
t
are realized and observed by both the government
and the central bank.
(c) The government chooses g
t
, before m
t
is chosen by the central bank,
to minimize L
g
(g
t
, m
t
,
¯
δ,
¯
λ
cb
) where
¯
δ and
¯
λ
cb
are at the values deter
mined at stage 1.
(d) The central bank then chooses m
t
, taking g
t
as given, to minimize:
L
cb
(g
t
, m
t
,
¯
δ,
¯
λ
cb
) =
(1 −
¯
δ)
2
[π
t
(g
t
, m
t
) − ˆ π]
2
−(1 −
¯
δ)
¯
λ
cb
[y
t
(g
t
, m
t
)] +
¯
δL
g
(g
t
, m
t
,
¯
δ,
¯
λ
cb
)
(12)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 87
We solve this game by solving the second stage (for the policy choices)
ﬁrst, and then substituting the results back into Eq. (11) to determine the
optimal operating parameters δ and λ
cb
. From stage 2, we get:
π
t
(δ, λ
cb
) = ˆ π +
(1 −δ)β(φ −η)λ
cb
+δ(βφ +γ)λ
g
1
α[β(φ −η) +δ(βη +γ)]
(13)
y
t
(δ, λ
cb
) = −u
t
/α (14)
τ
t
(δ, λ
cb
) =
(1 −δ)βs(βη +γ)(λ
cb
−λ
g
1
)
[β(φ −η) +δ(βη +γ)]λ
9
2
−
(b −θ)u
t
α
(15)
where
η =
∂m
t
∂g
t
=
−α
2
γβs
2
+δφλ
g
2
(αβs)
2
+δ
2
λ
g
2
(16)
φ = 1 +αβ −γ θs (17)
and
= 1 +αβ +βθs. (18)
Evidently, is positive. We assume φ to be positive as well. One would
certainly expect φ > 0 since, with θ < 1 and s < 1, ﬁscal policy would oth
erwise have to have such a strong impact on national income this, together
with a Phillips curve that is sufﬁciently ﬂat and weak monetary transmis
sions, government expenditures can simultaneously boost the output and
be used to reduce deﬁcit spending without worsening the budget (and sub
sequently debt) at the same time: γ > (1 + αβ)/(θs). In practice, with
s ≈ 1, as noted above and a target deﬁcit of zero, this would require very
large ﬁscal multipliers indeed. In fact, numerical estimates for 10 larger
OECD economies place φ very close to unity, rather than negative.
17
And
with balanced budget targets (θ = 0), we would certainly have φ > 0.
Nevertheless, the conﬂict which stands revealed within the ﬁscal policy is
important. In order to get φ < 0, output has to be capable of growing fast
enough to generate sufﬁcient revenues to boost output, when needed and to
pay down the debt/deﬁcit. If this is not possible, one will have to come at
the expense of the other. This underlines the natural procyclicality of any
ﬁscal restraint mechanism under “normal” parametric values.
18
17
See Table 4 in Appendix A.
18
Formally, ∂{(b − θ)y
t
− τ
t
]/∂g
t
= φ/(1 + αβ), which implies the gap between the
ﬁscal adjustment needed and the paydown allocated, will rise as more funds are used for
stabilization, if φ > 0.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
88 Andrew Hughes Hallet
Substituting Eqs. (13)–(15) back into Eq. (11), we can now get the
stage 1 solution from:
min
δ,λ
cb
EL
g
(δ, λ
cb
) =
1
2
_
(1 −δ)β(φ −η)λ
cb
+δ(βφ +γ)λ
g
1
α[β(φ −η) +δ(βη +γ)]
_
2
+
λ
g
2
2
_
(1 −δ)βs(βη +γ)(λ
cb
−λ
g
1
)
[β(φ −η) +δ(βη +γ)]λ
g
2
_
2
. (19)
This part of the problem has ﬁrstorder conditions:
(1 −δ)(φ −η)λ
g
2
{(1 −δ)β(φ −η)λ
cb
+δ(βφ +γ)λ
g
1
}
−(1 −δ)
2
(βη +γ)
2
α
2
s
2
β(λ
g
1
−λ
cb
) = 0 (20)
and
{(1 −δ)β(φ −η)λ
cb
+δ(βφ +γ)λ
g
1
}
×(λ
g
1
−λ
cb
) {δ(1 −δ)+(φ −η)} λ
g
2
−(1 −δ)(βη +γ)α
2
s
2
β {(βη +γ) −(1 −δ)β} (λ
g
1
−λ
cb
)
2
= 0.
(21)
where = ∂η/∂δ. There are two realvalued solutions which satisfy this
pair of ﬁrstorder conditions.
19
Both are satisﬁed when δ = 1 and λ
cb
= λ
g
1
.
This solution describes a fully dependent central bank, which is not appro
priate in the Eurozone case. And, it turns out to be inferior to the second
solution: δ = λ
cb
= 0. In this solution, the central bank is fully independent
and exclusively concerned with the economy’s inﬂation performance.
Out of the two possibilities, the solution which yields the lowest welfare
loss, as measured by the government’s (society’s) loss function, can be
identiﬁed by comparing Eq. (19) to the expected loss that would be suffered
under the alternative institutional arrangement. Substituting, δ = 1 and
λ
cb
= λ
g
1
in Eq. (19) results in:
EL
g
=
(λ
g
1
)
2
2α
2
. (22)
19
Because η is a function of δ, Eq. (21) is quartic in δ. This polynomial has four distinct
roots, of which only two are realvalued. The complete solution may be found in Hughes
Hallett and Weymark (2002).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 89
But substituting, δ = λ
cb
= 0 in the righthandside of Eq. (19) yields:
EL
g
= 0. (23)
Consequently, our results show that, when there is ﬁscal leadership, soci
ety’s welfare loss (as measured by Eq. (19)) is minimized when the govern
ment appoints independent central bankers who are concerned only with
the achievement of a mandated inﬂation target and completely disregard
the impact their policies may have on the output.
However, our results alsoindicate that ﬁscal leadershipwithanindepen
dent central bank can be beneﬁcial under more general conditions. When
δ = 0, βη + γ = 0 and the externalities between policy makers are neu
tralized.
20
As a result, Eq. (19) will become:
EL
g
=
1
2
_
λ
cb
α
_
2
(24)
for any value of λ
cb
. Hence, an independent central bank will always pro
duce better results than a dependent one, so long as it is more conservative
than the government (λ
cb
< λ
g
1
), irrespective of the latter’s commitment to
the debt rule (λ
g
2
). A conservative central bank will therefore be the best,
but any bank that is more conservative than the government will do if the
debt rule is to be effective.
A more interesting question is whether ﬁscal leadership with an inde
pendent central bank also produces better outcomes from a society’s per
spective, than those obtained in a simultaneous move game without leader
ship — the model generally favored in Europe. In the simultaneous move
game, the solution to the stage 1 minimization problem is:
δ =
βφ
2
λ
cb
λ
g
2
+α
2
γ
2
s
2
β(λ
cb
−λ
g
1
)
βφ
2
λ
cb
λ
g
2
+α
2
γ
2
β(λ
cb
−λ
g
1
) −φ(βφ +γ)λ
g
1
λ
g
2
(25)
and a society’s welfare loss will then be:
EL
g
=
1
2
_
λ
g
1
α
__
(αγs)
2
(αγs)
2
+φ
2
λ
g
2
_
. (26)
21
20
Because βη + γ = (∂y/∂m)∂m/∂g + ∂y/∂g, and the central bank has already taken the
impact of its decisions on the government’s decisions (also zero in this case) into account.
21
Hughes Hallett and Weymark (2002; 2004a) derive these results in detail.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
90 Andrew Hughes Hallet
This is always smaller than the loss incurred when ﬁscal leadership is com
bined with a dependent central bank. However, the optimal degree of con
servatismfor an independent central bank, in this case, is obtained by setting
δ = 0 in Eq. (25) to yield:
λ
cb∗
=
(αγs)
2
λ
g
1
(αγs)
2
+φ
2
λ
g
2
. (27)
It is straightforward to show that the value of EL
g
in Eq. (24) is always
less than Eq. (26) as long as:
λ
cb
<[λ
g
1
λ
cb∗
]
1/2
. (28)
It is also evident that λ
cb∗
<λ
g
1
holds for any degree of commitment, how
ever small, to the debt/deﬁcit rule: λ
g
2
> 0. Consequently, ﬁscal leader
ship with any λ
cb
<λ
cb∗
will always produce better outcomes, from a
society’s point of view, than any simultaneous move game between the
central bank and the government. This is important because many inﬂation
targeting regimes, such as the ones operated by the Bank of England, the
Swedish Riksbank, and the Reserve Bank of New Zealand, operate with
ﬁscal leadership. By contrast, several others, notably the ECB and the US
Federal Reserve System, do not operate with ﬁscal leadership. They are
better thought of as being engaged in simultaneous move game with their
governments.
5.5. The Gains from Fiscal Leadership
Finally, where do these leadership gains come from? Substituting δ = 0
and λ
cb
= 0 in Eqs. (13)–(15) yields:
π
t
= ˆ π, y
t
= −u
t
/α, and τ
t
= −(b −θ)u
t
/α. (29)
as ﬁnal outcomes. By contrast, the outcomes for the simultaneous move
policy game are:
π
∗
t
= ˆ π +
α(γs)
2
[(αγs)
2
+φ
2
λ
g
2
]
= ˆ π +λ
cb∗
/α (30)
y
∗
t
= −u
t
/α (31)
τ
∗
t
=
γs(λ
cb∗
−λ
g
1
)
φλ
g
2
−
(b −θ)u
t
α
. (32)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 91
Comparing these two sets of outcomes, we have ﬁve conclusions. These
conclusions signify the main results of this paper:
(a) Fiscal leadership eliminates the inﬂationary bias, and results in lower
inﬂation without any loss in output or output volatility. The proximate
cause of this surprising result is that optimization under ﬁscal leadership
leads to higher taxes and larger debt repayments.
22
(b) The deeper reason is that there is a selflimiting aspect to the long
run design of ﬁscal policy. Unless the effect of ﬁscal policy on output
is so large as to generate savings/taxes that could ﬁnance both ﬁscal
expansions and debt repayments (this possibility is ruled out in the
deﬁcit rule case), each expansion designed to increase output would
simply be accompanied by a greater burden of debt. Hence, in order to
preserve the longrun sustainability of its ﬁnances, the government must
eventually raise taxes. This fact takes some inﬂationary pressure off the
central bank when ﬁscal expansions are called for. As a result, the bank
is less likelytotightenthe monetarypolicy, andexternalities are reduced
on both sides. This is what generates a better coordination between
the policy makers, as noted in Section 5. But, this can only happen
if the government has a genuine commitment to the longrun goals
(sustainable public ﬁnances, and a debt or deﬁcit rule in this case), since
only then will the ﬁscal policy leader take into account the predictable
reactions (of the follower) to any shortrun deviations by the leader.
The follower knows that the leader is not likely to sacriﬁce his longrun
goals by making such deviations (and anyway has the opportunity to
clear up the mess, according to the follower’s own preferences, if the
leader does so). Thus, in the shortrun under simultaneous moves, each
player fails to consider the predictable response, and costs, which the
externalities he/she imposes on his/her rival would create. The ability to
coordinate would be lost, and the outcomes worse for both. We show
this in Section 5 (and in a more formal analysis in Hughes Hallett,
2005).
(c) These results hold independently of the commitment to the debt rule
(λ
g
2
), or its target value (θ), and independently of the government’s pref
erence to stabilize or spend (λ
g
1
, s), and of the economy’s transmission
parameters (α, β, γ) or the particular value of b as discussed above.
22
Taxes are lower under simultaneous moves because λ
cb
<λ
g
1
. So Eτ = 0 in Eq. (29) vs.
Eτ
∗
t
≤ 0 in Eq. (32).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
92 Andrew Hughes Hallet
(d) Since Eτ
∗
t
<0 in Eq. (32), but Eτ
t
= 0 in (29), the simultaneous moves
regime will always end up in increasing the deﬁcit levels. Therefore, it
will eventually exceed any limit set for the debt ratio. Fiscal leadership
will do neither of these things. More generally, the simultaneous moves
game always leads to ﬁscal expansions if money ﬁnancing is small.
23
Indeed, the expected value of s(by
∗
t
− τ
∗
t
) in Eq. (5) is zero under
ﬁscal leadership, but φλ
cb∗
/(α
2
γ) > 0 in the simultaneous moves case
(as expected, since the monetary policy externalities would be larger).
And since revenues are lower in the latter case, budget deﬁcits are
larger. Hence, the gains from the ﬁscal leadership are more than just
better outcomes. Leadership also implies less expansionary budgets
and tighter public expenditure controls. The ECB should ﬁnd this a
signiﬁcantly more comfortable environment to operate in.
(e) The simultaneous moves regime approaches ﬁscal leadership, as the
debt rule becomes progressively “harder”: that is, EL
g
→ 0, λ
cb∗
→
0; π
∗
t
→ ˆ π, and Eτ
∗
t
→0, as λ
g
2
→∞.
6. The CoOrdination Effect
In our model, the central bank is independent. Without further institutional
restraints, interactions between an independent central bank and the gov
ernment would lead to noncooperative outcomes. But, if the government
is committed to longterm leadership in the manner that I have described,
the policy game will become an example of rulebased coordination in
which both parties can gain without any reduction in the central bank’s
independence. However, this is not to say that both parties gain equally. If
the inﬂation target is strengthened after the ﬁscal policy is set, the leadership
gains to the government will be less (Hughes Hallett and Viegi, 2002). This
is an important point because it implies that granting leadership to a central
bank whose inﬂation aversion is already greater than the government’s, or
whose commitment is greater, will produce no additional gains. For this
reason, granting leadership to a central bank that cannot inﬂuence debt or
deﬁcit directly, but puts a higher priority on inﬂation control will bring
smaller gains from a society’s point of view (whatever it may do for the
central bank). I demonstrate these points formally in Hughes Hallett (2005).
23
m
t
≈ 0; see Appendix B. This result still holds good when β > γ, even if monetary
ﬁnancing is allowed. It also holds good if the central bank is conservative, or the government
is liberal, when β <γ.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 93
Finally, given the structure of the Stackelberg game, there is no incen
tive to reoptimize for either party — so long as the government remains
committed to longterm leadership rather than shortterm demand manage
ment — unless both parties agree to reduce their independence through
discretionary coordination.
24
This is a general result: it holds good for any
model where both inﬂation and output depend on both monetary and ﬁs
cal policy, and where inﬂation is targeted to some degree by both players
(Demertzis et al., 2004; Hughes Hallett and Viegi, 2002). Thus, although
Pareto improvements, over no cooperation, do not emerge for both parties
in all Stackelberg games, they do so in problems of the kind examined here.
Our results are robust.
6.1. Empirical Evidence
Whether or not these results are of practical importance is another matter.
In Table 3, I have computed the expected losses under the simultaneous
move and ﬁscal leadership regimes for 10 countries when both ﬁscal policy
and the central bank are constructed optimally. The data I have used is from
1998, the year in which the Eurozone was created. The data itself, and its
sources, are summarized in Appendix A.
The countries that are selected fall into three groups:
(a) Eurozone countries, withindependentlyset ﬁscal andmonetarypolicies
and target independence: France, Germany, Italy, and the Netherlands;
(b) Explicit inﬂation targeters with ﬁscal leadership: Sweden, New
Zealand, and the United Kingdom and
(c) Federalists, with joint inﬂation and stabilization objectives: United
States, Canada, and Switzerland.
In the ﬁrst group, monetary policy is conducted at the European level
and ﬁscal policy at the national level. Policy interactions, in this group,
can be characterized in terms of a simultaneous move game, with target as
well as with instrument independence for both sets of policy authorities.
24
This supplies the political economy of our results. We need no precommitment beyond
leadership (the Stackelberg game supplies subgame perfection, see Note 4) and no punish
ment beyond electoral results. The former will hold so long as governments have a natural
commitment to maintaining sustainable public ﬁnances, public services (health, education,
and defence), or social equity, all longrun “contractual” issues. The latter will then arise
from the political competition inherent in any democracy.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
94 Andrew Hughes Hallet
Table 3. Expected losses under a deﬁcit rule, leadership vs. other strategies.
Full Fiscal Simultaneous Losses under
dependence leadership moves simultaneous
δ = 1 δ = 1 δ = 1 δ = 0 δ = 0 δ = 0; λ
g
1
= 1 λ
g
1
= 1 λ
g
1
= 1 δ = 0 δ = 0 δ = 0; λ
g
1
= 1 λ
g
1
= 1 λ
g
1
= 1 moves: Growth
λ
cb
= λ
g
1
λ
cb
= λ
g
1
λ
cb
= λ
g
1
λ
cb
= 0 λ
cb
= 0 λ
cb
= 0 λ
cb
= λ
cb
∗
λ
cb
= λ
cb
∗
λ
cb
= λ
cb
∗
rate equivalents
France 5.78 0.00 0.134 13.4
Germany 16.14 0.00 0.053 5.30
Italy 1.28 0.00 0.207 20.7
Netherlands 1.28 0.00 0.165 16.5
Sweden 4.51 0.00 0.034 3.40
New Zealand 8.40 0.00 0.071 7.10
United Kingdom 3.37 0.00 0.057 5.70
United States 6.46 0.00 0.248 24.8
of America
Canada 12.50 0.00 0.118 11.8
Switzerland 4.79 0.00 0.066 6.60
It is, perhaps, arguable that the Eurozone has adopted a monetary leader
ship regime; but the empirical evidence for this was weak, and the ﬁscal
constraints that could have sustained such a leadership were widely ignored.
The second group of countries has adopted explicit, and mostly pub
licly announced, inﬂation targets. Central banks in these countries have
been granted instrument independence, but not target independence. The
government either sets, or helps to set, the inﬂation target. In each case, the
government has adopted longterm (supply side) ﬁscal policies, leaving an
active demand management to monetary policy. These are clear cases in
which there is both ﬁscal leadership and instrument independence for the
central bank.
The third group represents a set of more ﬂexible economies, with
implicit inﬂation targeting and a statutory concern for stabilization; also fed
eralism in ﬁscal policy and independence at the central bank, and therefore
no declared leadership in either ﬁscal or monetary policies. This provides
a useful benchmark for the other two groups.
The performance under the different ﬁscal regimes when the ﬁscal con
straint is a relatively soft deﬁcit rule is reported in Table 3. Column 1
shows the losses under a dependent central bank in welfare units — lead
ership or not. Column 2 reﬂects the losses that would be incurred under
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 95
the government leadership with an independent central bank that directs
monetary policy exclusively towards the achievement of the inﬂation target
(i.e., δ = λ
cb
= 0). The third column gives the minimum loss associated
with a simultaneous decisionmaking version of the same game. In each
case, the deﬁcit rule is soft for the Eurozone countries (λ
g
2
= 0.25); medium
strength for the United States, Canada, and Switzerland (λ
g
2
= 0.5); and
harder for Sweden, the United Kingdom and New Zealand (λ
g
2
= 1). The
deﬁcit ratio targets are 0%in all countries (a balanced budget in the medium
term), but the propensity to spend, s, out of any remaining deﬁcit or any
endogenous budget improvements is one.
Evidently, complete dependence in monetary policy is extremely unfa
vorable, although the magnitude of the loss varies considerably from coun
try to country. The losses in Column 3 (Table 3) appear to be relatively small
in comparison. However, when these ﬁgures are converted into “growthrate
equivalents”, I ﬁnd these losses to be signiﬁcant. A growthrate equivalent
is the loss in output growth that would produce the same welfare loss, if all
other variables remain ﬁxed at their optimized values.
25
The ﬁgures in Column 4 are more interesting. They showthat the losses
associated with simultaneous decisionmaking are equivalent to having per
manent reductions of up to 20% in the level of national income. That is,
France, Italy, and the Netherlands could have expected to have grown about
15%–20%more, had they adopted ﬁscal leadership with deﬁcit targets; and
Sweden, the United Kingdom, and New Zealand around 3%–5% less, had
they not done so. Similarly, ﬁscal leadership with debt targeting would
be worth 10%–20% extra GDP in the United States, and Canada. These
are signiﬁcant gains and signiﬁcantly larger than could be expected from
international policy coordination itself (Currie et al., 1989).
Thus, the immediate effect of introducing a stability pact deﬁcit rule
has been to increase the costs of an uncoordinated (simultaneous moves)
regime dramatically. This reveals the weakness of the deﬁcit rule as a
25
Currie et al. (1989). To obtain these ﬁgures, I compute the marginal rates of transformation
around each government’s indifference curve to ﬁnd the change in output growth, dy
t
, that
would yield the welfare losses in Column 3. Formally,
dy
t
=
dEL
g
t
[λ
g
2
{(b −θ)y
t
−τ
t
}(b −θ) −λ
g
1
]
using Eq. (9). The minimumvalue of dy
t
is, therefore, attained when tax revenues τ
t
growat
the same rate as the repayment target (b −θ)y
t
. These are the losses reported in Column 4.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
96 Andrew Hughes Hallet
restraint mechanism. Because it is a ﬂow and not a stock, it has no inher
ent persistence. Moreover, a larger part (the current automatic stabilizers,
social expenditures, and tax revenues) will be endogenous, varying with the
level of economic activity and with the pressures of the electoral cycle. It is,
therefore, much harder to use as a commitment device with any credibility
in the long term. If we assume that it can be precommitted, as in the lead
ership scenario, then we get good results of course. But, if this assumption
is likely to be violated (or challenged), then the fact that the results are so
much worse than in the simultaneous moves case shows how difﬁcult it
is to precommit deﬁcits in advance. Being only a small component in a
moving total, it does not have the persistence of a cumulated debt criterion
because violations are not carried forward. So, when policy conﬂicts arise,
the deﬁcit either fails its target or it has to be restrained to such a degree
that it starts to do serious damage to overall economic performance.
7. Conclusions
I conclude this chapter with the following observations:
(a) Fiscal leadership leads to improved outcomes because it implies a
degree of coordination and reduced conﬂicts between institutions,
without the central bank having to lose its ability to act independently.
This places the outcomes somewhere between the superior (but discre
tionary) policies of complete cooperation; and the noncooperative
(or rulebased) policies of complete independence.
(b) The coordination gains come from the selflimiting action of ﬁscal
policy when ﬁscal policy is given a longrun focus in the form of a
(soft) debt or deﬁcit rule. Leadership is the crucial element here; these
gains do not appear in a straightforward competitive regime with the
same debt or deﬁcit rule.
(c) The leadership model predicts improvements in inﬂation and the ﬁs
cal targets, without any loss in growth or output volatility. In addition,
leadership requires less precision in the setting of the strategic or insti
tutional parameters. It is easier to implement.
(d) Debt ratios will increase in a policy game without coordination, but
do not do so under ﬁscal leadership. Likewise, deﬁcit rules without
coordination to restrain ﬁscal policy imply larger deﬁcits (and more
expansionary policies) than do debt rules of a similar speciﬁcation
because deﬁcits (being a ﬂow not a stock, and with less natural persis
tence) are less easy to precommit credibly. These two results explain
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 97
why the ECB prefers hard restraints (and why the ﬁscal policy makers
do not), and suggests that the ECBmay actually prefer ﬁscal leadership
because of the greater degree of precommitment implied.
References
Barro, RJ (1981). Output effects of government purchases. Journal of Political Economy
89, 1086–1121.
Barro, RJ and DB Gordon (1983). Rules, discretion, and reputation in a model of monetary
policy. Journal of Monetary Economics 12, 101–121.
Basar, T (1989). Time consistency and robustness of equilibria in noncooperative dynamic
games. In Dynamic Policy Games in Economics, F van der Ploeg and A de Zeeuw
(eds.), pp. 9–54. Amsterdam: North Holland.
Basar, T and GOlsder (1989). Dynamic noncooperative game theory. In Classics in Applied
Mathematics, SIAM series, Philadelphia: SIAM.
Brandsma, A and A Hughes Hallett (1984). Economic conﬂict and the solution of dynamic
games. European Economic Review 26, 13–32.
Buti, M, S Eijfﬁnger and D Franco (2003). Revisiting the stability and growth pact: grand
design or internal adjustment? Discussion Paper 3692. London: Centre for Economic
Policy Research.
Canzoneri, M (2004). A New View on The Transatlantic Transmission of Fiscal Policy
and Macroeconomic Policy Coordination. In The Transatlantic Transmission of Fiscal
Policy and Macroeconomic Policy Coordination, MButi (ed.), Cambridge: Cambridge
University Press.
Currie, DA, GHolthamandAHughes Hallett (1989). The theoryandpractice of international
economic policy coordination: does coordination pay? In Macroeconomic Policies in
an Interdependent World, R Bryant, D Currie, J Frenkel, P Masson and R Portes (eds.),
Washington, DC: International Monetary Fund, 125–148.
Demertzis, M, A Hughes Hallett and NYiegi (2004). An Independent Central Bank Faced
with Elected Governments: A Political Economy Conﬂict. In European Journal of
Political Economy 20(4), 907–922.
Dieppe, A, K Kuster and P McAdam (2004). Optimal monetary policy rules for the Euro
Area: an analysis using the area wide model. Working Paper 360. Frankfurt: European
Central Bank.
Dixit, A(2001). Games of monetary and ﬁscal interactions in the EMU. European Economic
Review 45, 589–613.
Dixit, AK and L Lambertini (2003). Interactions of commitment and discretion in monetary
and ﬁscal issues. American Economic Review 93, 1522–1542.
Dow, JCR (1964). The Management of the British Economy 1945–1960. Cambridge:
Cambridge University Press.
European Commission (2002). Public ﬁnances in EMU:2002. European Economy: Studies
and Reports 4, Brussels: European Commission.
Fatas, A, J von Hagen, A Hughes Hallett, R Strauch and A Sibert (2003). Stability and
Growth in Europe. London: Centre for Economic Policy Research (MEI13).
Gali, J and R Perotti (2003). Fiscal policy and monetary integration in Europe. Economic
Policy 37, 533–571.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
98 Andrew Hughes Hallet
HM Treasury (HMT) (2003). Fiscal Stabilization and EMU. HM Stationary Ofﬁce, Cmnd
799373. Norwich: also available from www.hmtreasury.gov.uk.
Hughes Hallett, A (1984). Noncooperative strategies for dynamic policy games and the
problem of time inconsistency. Oxford Economic Papers 36, 381–399.
Hughes Hallett, A(2005). Inpraise of ﬁscal restraint anddebt rules: what the Eurozone might
do now. Discussion Paper 5043. London: Centre for Economic Research, 251–279.
Hughes Hallett, A and DWeymark (2002). Government leadership and central bank design.
Discussion Paper 3395, London: Centre for Economic Research.
Hughes Hallett, A and N Viegi (2002). Inﬂation targeting as a coordination device. Open
Economies Review 13, 341–362.
Hughes Hallett, A and DWeymark (2004a). Policy games and the optimal design of central
banks. In Money Matters, P Minford (ed.). London: Edward Elgar.
Hughes Hallett, AandDWeymark(2004b). Independent monetarypolicies andsocial equity.
Economics Letters 85, 103–110.
Hughes Hallett, Aand DWeymark (2005). Independence before conservatism: transparency,
politics and central bank design. German Economic Review 6, 1–25.
Lucas, RE (1972). Expectations and the neutrality of money. Journal of Economic Theory
4, 103–124.
Lucas, RE (1973). Some international evidence on outputinﬂation tradeoffs. American
Economic Review 63, 326–334.
McCallum, BT (1989). Monetary Economics: Theory and Policy. NewYork: Macmillan.
Svensson, L (2003). What is wrong with Taylor rules? Using judgement in monetary rules
through targeting rules. Journal of Economic Literature 41, 426–477.
Taylor, JB (1993a). Macroeconomic Policy in a World Economy: From Econometric Design
to Practical Operation. NewYork: W.W. Norton and Company.
Taylor, JB (1993b). Discretion vs. policy rules in practice. CarnegieRochester Conference
Series on Public Policy 39, 195–214.
Taylor, JB(2000). Discretionary ﬁscal policies. Journal of Economic Perspectives 14, 1–23.
Turrini, A and J in ‘t Veld (2004). The Impact of the EU Fiscal Framework on Economic
Activity. DGII, Brussels: European Commission.
Appendix A: Data Sources for Table 3
The data used in Table 3 are set out in the table below. They come from dif
ferent sources and represent “stylized facts” for the corresponding param
eters. The Phillips curve parameter, α, is the inverse of the annualized
sacriﬁce ratios on quarterly data from 1971–1998. The values for β and γ,
the impact multipliers for ﬁscal and monetary policy, respectively, are the
oneyear simulated multipliers for these policies in Taylor’s multicountry
model (Taylor, 1993a). I have calibrated the remaining parameters using
stylized facts from 1998: the year that EMU started, and the year that new
ﬁscal and monetary regimes took effect in the United Kingdomand Sweden.
Thus, s is the automatic stablizer effect on output according to the European
Commission (2002) and HMT (2003) estimates, allowing for larger social
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
The Advantages of Fiscal Leadership in an Economy 99
Table 4. Data for the deﬁcit rule calculations.
ααα βββ γγγ sss θθθ φφφ λ
g
2
λ
g
2
λ
g
2
France 0.294 0.500 0.570 1 0 1.147 0.25
Germany 0.176 0.533 0.430 1 0 1.094 0.25
Italy 0.625 0.433 0.600 1 0 1.271 0.25
Netherlands 0.625 0.489 0.533 1 0 1.306 0.25
Sweden 0.333 0.489 0.533 1 0 1.163 1
New Zealand 0.244 0.400 0.850 1 0 1.098 1
UK 0.385 0.133 0.580 1 0 1.038 1
United States 0.278 0.467 1.150 1 0 1.130 0.5
Canada 0.200 0.400 0.850 1 0 1.080 0.5
Switzerland 0.323 0.489 0.533 1 0 1.158 0.5
security sectors in the Netherlands and Sweden. Similarly, θ is set to give a
debt target somewhat below each country’s 1998 debt ratio and the SGP’s
60% limit. Likewise, λ
g
2
was set to imply a low, high, or medium commit
ment to debt/deﬁcit limits, as reﬂected in actual behavior in 2004. Finally,
I have set λ
g
1
= 1 in each country to suggest that governments, if left to
themselves, are equally concerned about inﬂation and output stabilization.
For Table 4, I suppose that the deﬁcit that remains after growth effects
and new discretionary taxes are accounted for will be spent (s = 1), and
that the mediumterm deﬁcit target is zero (θ = 0).
Appendix B: Derivation of the Change in Fiscal Stance
Between Regimes
The expected value of s(by
t
−τ
t
) under ﬁscal leadership and simultaneous
moves can be obtained by inserting values ﬁrst fromEq. (29), and then from
Eqs. (31) and (32), respectively. Taking the expected values, this yields
zero and φλ
cb∗
/(α
2
γ) > 0 in each case. Applying this result to Eq. (4),
it shows that the expected change in output between regimes, which we
know to be zero, is (β − γ)m − βλ
cb∗
/(αλ
g
1
) + γ[s(by − τ)] = 0.
Hence,
m = {γ[s(by −τ)] +βλ
cb
∗
/(αλ
g
1
)}/(β −γ)
which, given the change in s(by
t
− τ
t
), is positive, if β > γ. In this case,
the simultaneous move regime is unambiguously more expansionary, and
will run larger deﬁcits, than a regime with ﬁscal leadership. But, since
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic1
100 Andrew Hughes Hallet
γ/(β − γ) > 1, this result may not follow if β <γ. In the latter case, the
expansion of the budget deﬁcit will be:
Deficit = m+[s(by −τ)] −τ
=
_
β
α(β −γ)
_
φ
αγ
+
1
λ
g
1
_
−
γs
φλ
g
2
_
λ
cb
∗
+
γs
ϕλ
g
2
λ
g
1
which is still clearly positive if either λ
cb
∗
is small, the central bank is
conservative, or if λ
g
1
is somewhat larger (the government is relatively
liberal).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Topic 2
CheapTalk Multiple Equilibria and Pareto —
Improvement in an Environmental
Taxation Games
Chirstophe Deissenberg
Universit´ e de la M´ editerran´ ee and GREQAM, France
Pavel
ˇ
Sevˇ c´ık
Universil´ e de Montr´ eal, Canada
1. Introduction
The use of taxes as regulatory instrument in environmental economics is a
classic topic. In a nutshell, the need for regulation usually arises because
producing causes detrimental emissions. Due to the lack of a proper market,
the ﬁrms do not internalize the impact of these emissions on the utility of
other agents. Thus, they take their decisions on the basis of prices that do not
reﬂect the true social costs of their production. Taxes can be used to modify
the prices, confronting the ﬁrms so that the socially desirable decisions are
taken. The problem has been exhaustively investigated in static settings,
where there is no room for strategic interaction between the regulator and
the ﬁrms.
Consider, however, the following situations:
(1) The emission taxes have a dual effect, they incite the ﬁrms to reduce
production and to undertake investments in abatement technology. This
is typically the case, when the emissions are increasing in the output
and decreasing in the abatement technology;
(2) Emission reduction is socially desirable. The reduction of production
is not and
(3) The investments are irreversible. In this case, the regulator must ﬁnd
an optimal compromise between implementing high taxes to motivate
high investments, and keeping the taxes low to encourage production.
101
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
102 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
The fact that the investments are irreversible introduces a strategic
element in the problem. If the ﬁrms are na¨ıve and believe his announce
ments, the regulatory can insure high production and important investments
by ﬁrst declaring high taxes and reducing them, once the corresponding
investments have been realized. More sophisticated ﬁrms, however, recog
nize that the initially high taxes will not be implemented, and are reluctant
to invest in the ﬁrst place. In other words, one is confronted with a typical
time inconsistency problem, which has been extensively treated in the mon
etary policy literature following Kydland and Prescott (1977) and Barro and
Gordon (1983). In environmental economics, the time inconsistency prob
lem has received yet only limited attention, although it frequently occurs.
See among others Gersbach and Glazer (1999) for a number of examples
and for an interesting model, see Abrego and Perroni (1999); Batabyal
(1996a;b); Dijkstra (2002); Marsiliani and Renstrom (2000) and Petrakis
and Xepapadeas (2003).
The time inconsistency is directly related to the fact that the situation
described above, deﬁnes a Stackelberg game between the regulator (the
leader) and the ﬁrms (the followers). As noted in the seminal work of
Simaan and Cruz (1973a;b), inconsistency arises because the Stackelberg
equilibrium is not deﬁned by mutual best responses. It implies that the
follower uses a best response in reaction to the leader’s action, but not that
the leader’s actions is itself a best response to the follower’s. This opens
the door to a reoptimization by the leader once the follower has played.
Thus, a regulator, who announces that it will implement the Stackelberg
solution, is not credible. An usual conclusion is that, in the absence of
additional mechanisms, the economy is doomed to converge towards the
less desirable Nash solution.
A number of options to insure credible solutions have been consid
ered in the literature — credible binding commitments by the Stackelberg
leader, reputation building, use of trigger strategies by the followers, etc.
(see McCallum(1997), for a reviewin a monetary policy context). Schemat
ically, these solutions aim at assuring the time consistency of Stackelberg
solution with either the regulator or the ﬁrms as a leader. Usually, these
solutions are not efﬁcient and can be Paretoimproved.
In Dawid et al. (2005), we demonstrated the usefulness of a new solu
tion to the time inconsistency problemin environmental policy. We showed
that tax announcements, that are not respected, can increase the payoff not
only of the regulator, but also of all ﬁrms, if these include any number
of na¨ıve believers who take the announcements at face value. Moreover,
if ﬁrms tend to adopt the behavior of the most successful ones, a unique
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 103
stable equilibrium may exist where a positive fraction of ﬁrms are believ
ers. This equilibrium Paretodominates the one, where all ﬁrms anticipate
perfectly the Regulator’s action. To attain the superior equilibrium, the
regulator builds reputation and leadership by making announcements and
implementing taxes in a way that generates good results for the believers,
rather than by precommitting to his announcements.
The potential usefulness of employing misleading announcements to
Paretoimprove, upon standard gametheoretic equilibrium solutions was
suggested for the case of general linearquadratic dynamic games in Vallee
et al. (1999) and developed by the same authors in subsequent papers.
An early application to environmental economies is Vallee (1998). The
believers/nonbelievers dichotomy was introduced by Deissenberg and
Gonzalez (2002), who study the credibility problemin monetary economics
in a discretetime framework with reinforcement learning. A similar mone
tary policy problemhas been investigated by Dawid and Deissenberg (2005)
in a continuoustime setting akin to the used in the present work.
In this chapter, we revisit the model proposed by Dawid et al. (2005)
and showthat a minor (and plausible) modiﬁcation of the regulator’s objec
tive function, opens the door to the existence of multiple stable equilib
ria with distinct basins of attraction — with important interpretations and
potential consequences for the practical conduct of environmental policy.
This chapter is organized as follows. In Section 2, we present the model
of environmental taxation and introduce the imitationtype dynamics that
determinate the evolution of the number to believers in the economy. In
Section 3, we derive and discuss the solution of the sequential Nash game,
one obtains by assuming a constant proportion of believers. In Section 4,
we formulate the dynamic problemand investigate its solution numerically.
Finally, in Section 6, we summarize the mechanisms at work in the model
and the main insights.
2. The Model
We consider an economy, consisting of a Regulator R and a continuum of
atomistic, proﬁtmaximizing ﬁrms i with an identical production technol
ogy. Time τ is continuous. To keep the notation simple, we do not index
the variables with either i or r unless it is useful for a better understanding.
In a nutshell, the situation of interest is the following: The regulator
can tax the ﬁrms to incite them to reduce their emission, and to generate
tax income, taxes, however, have a negative impact on the employment
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
104 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
(and thus, on the labor income) and the proﬁts. Thus, R has to choose the
tax level in order to achieve an optimal compromise among tax revenue,
private sector income, emission reduction, and employment levels. The
following sequence of event (S) occurs in every τ:
— R makes a nonbinding announcement t
a
≥ 0 about the tax level t ≥ 0
it will choose.
— Given t
a
, the ﬁrms form expectations t
e
about the true level of the
environmental tax. As will be described in more detail later, there are
two different ways for an individual ﬁrm to form its expectations.
— Each ﬁrm decides about its level of investment v based on its expecta
tion t
e
.
— R choose the actual level of tax t ≥ 0. The tax level t is the amount
each ﬁrm has to pay per unit of its emissions.
— Each ﬁrm produces a quantity x.
— Each ﬁrm may revise the way it forms its expectations depending on its
relative proﬁts.
All ﬁrms are identical, except for the waytheyformtheir expectations t
e
.
We assume full information: the regulator and the ﬁrms perfectly know
both the sequence, we just presented and the economic structure will be
(production and prediction technology, objective functions . . . ) described
in the remainder of this section.
2.1. The Firms, Bs and NBs
Each ﬁrm produces the same homogenous goods using a linear production
technology: the production of x units of output, requires x units of labor and
generates x units of environmentally damaging emissions. The production
costs are given by:
c(x) = wx +c
x
x
2
, (1)
where x is the output, w > 0 the ﬁxed wage rate, and c
x
> 0 a parameter for
simplicity sake, the demand is assumed inﬁnitely elastic at the given price
˜ p > w. Let p := ˜ p − w > 0. At each point of time, each ﬁrm can spend
an additional amount of money γ in order to reduce its current emissions
x. The investment
γ(v) =
1
2
v
2
(2)
is needed, in order to reduce the ﬁrm’s emissions from x to max [x −v, 0].
The investment in one period has no impact on the emissions in future
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 105
period. Rather than expenditures in emissionreducing capital, γ can there
fore be interpreted as the additional costs resulting of a temporary switch
to a cleaner resource — for e.g., of a switch from coal to natural gas.
The ﬁrms attempts to maximize their proﬁt. As we shall see at a later
place, one can assume without loss of generality that they disregard the
future and maximize in every τ, their current instantaneous proﬁt. However,
the actual tax rate t enters their instantaneous proﬁt function and is unknown
at the time when they choose v. Thus, they need to base their optimal
choice of v on an expectation (or, prediction) t
e
of t. In reality, a ﬁrm
can invest larger or smaller amounts of money in order to make or less
accurate predictions of the tax, the government will actually implement.
In this model, we assume for simplicity that each ﬁrm has two (extreme)
options for predicting the implemented tax. It can either suppose that R’s
announcement is truthful, that is, that t will be equal to t
a
, or it can do costly
research and try to better predict the t that will actually be implemented.
Depending upon the expectationformation mechanism the ﬁrm chooses,
we speak of believer B or of a nonbeliever NB:
A believer considers the regulator’s announcement to be truthful
and sets
t
e
= t
a
(3)
This does not cost anything.
A nonbeliever makes a “rational” prediction of t, considering the cur
rent number of Bs and NBs as constant. Making the prediction in any given
period τ costs δ > 0.
Details on the derivation of the NBs’ prediction will be given later. Note
that a ﬁrm can change its choice of expectationformation mechanism at
any moment in time. That is, a B can become a NB, and vice versa. We
denote with π = π(t) ∈ [0, 1] the current fraction of Bs in the population.
Thus, 1 −π is the current fraction of NBs.
2.2. The Regulator, R
The regulator’s goal is to maximize with respect to t
a
≥ 0 and t ≥ 0, the
intertemporal social welfare function.
(t
a
, t) =
∞
0
e
−ρτ
ϕ(t
a
, t)dτ
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
106 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
with
ϕ(t
a
, t) = (πx
B
+(1 −π)x
NB
) +πg
B
+(1 −π)g
NB
−(π(x
B
−v
B
) +(1 −π)(x
NB
−v
NB
))
2
+t(π(x
B
−v
B
) +(1 −π)(x
NB
−v
NB
)), (4)
where x
B
, x
NB
, v
B
, v
NB
denote the production respectively, investment
chosen by the believers B and the nonbelievers NB, and where g
B
(.)
and g
NB
(.) are their instantaneous proﬁts,
g
B
= px
B
−c
x
(x
B
)
2
−t(x
B
−v
B
) −
1
2
(v
B
)
2
g
NB
= px
NB
−c
x
(x
NB
)
2
−t(x
NB
−v
NB
) −
1
2
(v
NB
)
2
−δ.
(5)
The strictly positive parameter ρ is a social discount factor.
Thus, the instantaneous welfare φ is the sum of: (1) the economy’s
output, that is also, the level of employment (remember that output and
employment are onetoone in this economy); (2) the total proﬁts of Bs
and NBs; (3) the squared volume of emissions and (4) the tax revenue. This
function can be recognized as a reasonable approximation of the economy’s
social surplus. The difference between the results of Dawid et al. (2005) and
those of the present article stem directly from the different speciﬁcation of
the regulator’s objective function. In the former article, this function does
not include the ﬁrm’s proﬁt and is linear in the emissions.
2.3. Dynamics
The ﬁrms switch between the two possible expectationformation mecha
nisms (Bor NB), according to a imitationtype dynamics, see Dawid (1999);
Hofbauer and Sigmund (1998). Speciﬁcally, at each τ the ﬁrms meet ran
domly twobytwo, each pairing being equiprobable. At every encounter,
the ﬁrmwith the lower current proﬁt adopts the belief of the other ﬁrmwith
a probability proportional to the current difference between the individual
proﬁts. Thus, on the average, the ﬁrms tend to adopt the type of predic
tion, N or NB, which may currently leads to the highest proﬁts. Above
mechanism gives rise to the dynamics.
dπ
dt
= ˙ π = βπ(1 −π)g
B
−g
NB
. (6)
Notice that ˙ π reaches its maximum for π =
1
2
(the value or π for
which the probability, for encounter between ﬁrms with different proﬁts are
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 107
maximized), and tends towards 0, for π →0 and π →1 (for extreme values
of π, all but very few ﬁrms have the same proﬁts). The parameter β ≥ 0,
that captures the probability with which a ﬁrm changes its strategy during
one of these meetings, calibrates the speed of the information ﬂowbetween
Bs and NBs. It can be interpreted as a measure of the ﬁrms’ willingness to
change strategies, that is, of the ﬂexibility of the ﬁrms.
Equation (6) implies that by choosing the value of (t
a
, t) at time τ the
regulator not only inﬂuences the instantaneous social welfare, but also the
future proportion of Bs in the economy. This, in turn, has an impact on
the future social welfare. Hence, there are no explicit dynamics for the
economy. This again has an impact on the future social welfare. Hence,
although there are no explicit dynamics for the economic variables v and
x, R faces a nontrivial intertemporal optimization problem.
On the other hand, since the ﬁrms are atomistic, each single producer
is too small to inﬂuence the dynamics Eq. (6) of π, the only source of
dynamics in the model. Thus, the single ﬁrm does not take into account
any intertemporal effect and, independently of its true planing horizon,
maximizes its current proﬁt in every τ. This naturally leads us to examine
the (Nash) solution of the game between the regulator and the nonbelievers
that follows from the sequence (S) when, for an arbitrary, ﬁxed value of π,
(a) R maximizes the integrand φ in Eq. (4) with respect to t
a
and t and (b)
the NBs maximize their proﬁts with respect to ν
NB
and x
NB
. As previously
mentioned, we assume full information. In particular, the R and the NBs
are fully aware of the (nonstrategic) behavior of the NBs.
The analysis of the static case is also necessary since, as previously
mentioned, it provides the prediction t
e
of the NBs.
3. The Sequential Nash Game
The game is solved by backwards induction, taking into account the fact
that the ﬁrms, being atomistic, rightly assume that their individual actions
do not affect the aggregate values, that is consider these values as given. We
consider only symmetric solutions where all Bs respectively, NBs choose
the same values for v and x.
3.1. Derivation of the Solution
The last decision in the sequence (S) is the choice of the production level x
by the ﬁrms. This choice is made after v and t are known. The ﬁrms being
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
108 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
takers, the optimal symmetric production decision is:
x
B
= x
NB
= x =
p −t
2c
x
. (7)
Note that the optimal production is the same for Bs and NBs, as it
exclusively depends upon the realized taxes t.
In the previous stage, R chooses t given v
B
. Maximizing v
NB
with
respect to t, with x is given by (7) gives the optimal reaction function:
t(v
B
, v
NB
) =
p −c
x
[2(πv
B
+(1 −π)v
NB
) +1]
1 +c
x
. (8)
When the ﬁrms determine their investment v, Eqs. (7) and (8), and t
a
are known, but the actual tax rate t is not. Using Eq. (7) in Eq. (5), one
recognizes that the ﬁrms choose v in order to maximize
(p −t
e
)
4c
x
2
+t
e
v −
1
2
v
2
. (9)
That is, they set:
v = t
e
(10)
The Bs predict that t will be equal to its announced value, t
e
= t
a
. Thus,
v
B
= t
a
. (11)
The NBs know that the regulator will act according to Eq. (8). Thus, they
choose:
v
NB
=
p −c
x
[2(πv
B
+(1 −π)v
NB
) +1]
1 +c
x
. (12)
Solving this last equation for v
NB
gives for the equilibrium value:
v
NB
= v
NB
(t
a
)
p −c
x
(1 +2πt
a
)
1 +c
x
(3 −2π)
. (13)
The ﬁrst decision to be taken is the choice of t
a
by R. The regula
tor’s instantaneous objective function φ, using the results fromthe previous
stages, i.e., Eq. (7) for x, Eq. (11) for v
B
and Eq. (13) for v
NB
, is concave
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 109
with respect to t
a
for all π ∈ [0, 1], if,
c
x
> 1/
√
3. (14)
We shall assume in the following that this inequality holds. Then, the
regulator will choose
t
a$
=
1 +p
1 +3c
x
. (15)
Using this last equation in Eq. (8) with v
NB
given by Eq. (13), one obtains
for the equilibrium value of t
t
$
=
1 +p
1 +3c
x
−
1 +c
x
c
x
(3 −2π) +1
. (16)
Both t
a$
and t
$
will be nonnegative if
p ≥
3
3c
x
−2
, (17)
which we assume from now on.
3.2. Some Properties of the Solution
For all c
x
> 0 and π ∈ [0, 1], the optimal announcement t
a$
is a constant,
and always less than t
a$
. It is optimal to announce a high tax in order to
elicit a high investment from the believers, and to implement a low tax to
motivate a high production level. Moreover, t
$
decreases in π: When the
proportion of Bs augments, the announcement t
a
becomes a more powerful
instrument making a recourse to t less necessary.
The difference between the statically optimal (i.e., evaluated at t
$
) prof
its of NBs and Bs
g
NB$
−g
B$
=
(1 +c
x
)
2
2c
x
(2π −3)
2
−δ, (18)
is likewise increasing in π. Modulo δ, the proﬁt of the NBs is always higher
than the proﬁt of the Bs whenever π > 0, reﬂecting the fact that the latter
make a systematic error about the true value of t. The proﬁt of the Bs,
however, can exceed the one of the NBs if the prediction costs δ are high
enough.
Since t
a$
is constant and t
$
decreasing in π and since t
$
≤ t
a$
, the
difference t
a$
− t
$
, increases with π. Therefore, the difference Eq. (18)
between the proﬁts of the NBs and Bs is also increasing in the difference
between the announced and the implemented tax, t
a$
−t
$
,
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
110 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
Figure 1. The proﬁts of the Bs (lower curve), of the NBs (middle curve), and the
regulator’s utility(upper curve) as a functionof πfor δ = 0.00025, c
x
= 0.6, p = 2.
As one would intuitively expect, the regulator’s instantaneous utility φ
too, increases withπ. Most remarkable, however, is the fact that the proﬁts of
both the Bs and NBs are also increasing in π (see Fig. 1 for an illustration).
An intuitive explanation is the following: For π = 0 the outcome is the
Nash solution of a game between the NBs and the regulator. This outcome
is not efﬁcient leaving room for Paretoimprovement. As π increases, the
problem tends towards the solution a standard optimization situation with
the regulator as the sole decisionmaker. This solutionis efﬁcient inthe sense
that it does maximize the regulator’s objective function under the given
constraints. Since the objectives of the regulator are not extremely different
of the objectives of the ﬁrm, moving from the inefﬁcient gametheoretic
solution to the efﬁcient controltheoretic solution one improves the position
of all actors. Speciﬁcally, as π increases, the Bs enjoy the beneﬁts of a lower
t, while the investment remains ﬁxed at t
a$
. Likewise, the NBs beneﬁt from
the decrease in t. And R also beneﬁts, because a raising number of Bs
are led by R’s announcement to invest more than they would, otherwise
and, subsequently, to produce more, to make higher proﬁts, and to pollute
subsequently, to produce more, to make higher proﬁts, and to pollute less.
In dynamic settings, however, the impact of the tax announcements
on the π dynamics may incite the regulator to reduce the spread between
t and t
a
, and in particular to choose t
a
< t
a$
. Ceteris paribus, the R prefers
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 111
a high proportion of Bs to a low one, since it has one more instrument
(namely, t
a
) to inﬂuence the Bs than to regulate the NBs. A high spread
t
a
− t, however, implies that the proﬁts of the Bs will be low compared
to those of the NBs. This, by Eq. (6), reduces the value of ˙ π and leads
over time to a lower proportion of Bs, diminishing the instantaneous utility
of R. Therefore, in the dynamic problem, R will have to ﬁnd an optimal
compromise between choosing a high value of t
a
, which allows a lowvalue
of t and insures the regulator a high instantaneous utility, and choosing a
lower one, leading to a more favorable proportion of Bs in the future. We
are going to explore these mechanisms in more details in the next section.
4. The Dynamic Optimization Problem
4.1. Formulation
Being atomistic, the ﬁrms do not recognize that their individual decisions
to choose the Bor NB prediction technology inﬂuence π over time. Assume
that they consider π as constant. The regulator’s optimization problem is
then:
max (t
a
, t) 0 ≤ t
a
(τ), t(τ)
such that Eqs. (6), (7), (11), (13), and π
0
= π(0).
(19)
Using
g :=
p −c
x
(1 +2πt
a
)
1 +c
x
(3 −2π)
,
h :=
p −t
2c
x
.
(20)
The Hamiltonian of the above problem is given by
H(t
a
, t, π, λ) = h +π
h −
(p −t)
2
4c
x
+t(h −t
a
) −
t
a2
2
+(1 −π)
h −
(p −t)
2
4c
x
+t(h −g) −
1
2
g
2
−δ
+t[π(h −t
a
) +(1 −π)(h −g)]
−[π(h −t
a
) +(1 −π)(h −g)]
2
+λπ(1 −π)β
tt
a
−
t
a2
2
−th +
1
2
h
2
+δ
,
where λ is the costate.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
112 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
The ﬁrst order maximization condition with respect to t
a
and t yields
for the dynamically optimal values*
t
a∗
=
(p −c
x
) +[c
x
(3 −2π) +1](1 +c
x
)
2
(1 +3c
x
)[βλ(1 −π) +K]
, (21)
t
∗
=
(p −c
x
) +2c
x
(1 +c
x
)π[c
x
(βλ(1 +3c
x
)(1 −π) +1)]
(1 +3c
x
)[βπ(1 −π) +K]
with
K = 1 −6c
3
x
β
2
λ
2
(1 −π)
2
π +2c
x
[2 +2βλ(1 −π)π]
+c
2
x
[3 −2π +βλ(1 −π)(3 −2βλ(1 −π)π)]. (22)
According to Pontriagin’s maximum principle, an optimal solution
(π(τ), (λ)) has to satisfy the state dynamics (6), evaluated at (t
a∗
, t
∗
) plus:
˙
λ = ρλ −
∂H(t
a∗
(π, λ), t
∗
(π, λ), π, λ)
∂π
(23)
0 = lim
τ→∞
e
−ρτ
λ(τ). (24)
Due to the highly nonlinear structure of the optimization problem an
explicit derivation of the optimal (π(τ), λ(τ)) seems infeasible. Hence, we
used Mathematica
r
to carry out a numerical analysis of the canonical
system Eqs. (6)–(23) in order to shed light on the optimal solution.
4.2. Numerical Results
Depending upon the parameter values, the model can have either two stable
equilibria separated by a socalled Skiba point, or a unique stable equilib
rium. We discuss ﬁrst the more interesting case of multiple equilibria, before
carrying a summary of sensitivity analysis and brieﬂy addressing the ques
tion of the bifurcations, leading to a situation with a situation with unique
equilibrium. Unless and otherwise speciﬁed, the results presented pertain
to the reference parameter conﬁguration c
x
= 0.6, p = 2, δ = 0.00025 that
underlies Fig. 1, together with ρ = 1, 0.0015. These are robust, with respect
to sufﬁciently small parameter variations: The same qualitative insights
would be obtained for a compact set of alternative parameter conﬁgurations.
From the onset, let us stress the fundamental mechanism that underlies
most of the results. Ceteris paribus, R would like to face as many Bs as
possible, since dφ/dπ > 0. If the prediction costs δ are sufﬁciently high,
the proﬁt difference g
NB$
−g
B$
see Eq. (18), will be negative at the current
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 113
value of π. In this case, by Eq. (6), implementing the statically optimal
actions t
a$
, t
$
sufﬁces (albeit not optimally so) to insure that π increases
locally. In fact, for δ sufﬁciently large, the thus controlled system will
globally converge to a situation where there are only believers, π = 1.
However, if δ is small, the regulator cannot implement the actions t
a$
, t
$
since in that case g
NB
< g
B
, which implies by Eq. (6) a decreasing number
of believers, ˙ π < 0. Instead, Rwill use actions t
a∗
, t
∗
that insure an optimal
compromise between obtaining as high a value of π and the optimal value
˙ π
∗
of ˙ π may be negative. In this case, the economy will tend towards the
already discussed equilibrium, $ where nobody believes R, π = 0. At this
equilibrium, everybody correctly anticipates the tax level t
$
. However, the
outcome is inefﬁcient.
Alternatively, ˙ π
∗
may be positive. Then, there will be a positive number
of Bs at the equilibrium. Since for any given y
a
, y the speed of learning ˙ π
is ﬁrst increasing and then decreasing in π, see Eq. (6), it will not come
as a surprise, if for certain parameter constellations there exists a value of
π that deﬁnes a threshold between the two domains of convergence, one
towards π = 0 and one towards some πu > 0.
4.3. Multiple Equilibria
4.3.1. Phase Diagram
The phase diagram for the reference parameter values, c
x
= 0.6, p = 2,
δ = 0.00025, β = 1, and ρ = 0.0015, is shown in Fig. 2.
The ˙ π = 0 isocline, shown in dotted lines, has three branches. Two of
these correspond to the situation where there is a homogenous population
of NBs (π = 0) or of Bs (π = 1). Along the third branch, Bs and NBs
coexist but make the same prediction errors and realize the same proﬁts
and decisions. The
˙
λ = 0 isocline is shown as a continuous line. There
are three equilibria, E
L
= (0, λ
L
), E
M
= (π
M
, λ
M
), and E
U
= (π
U
, λ
U
)
with 0 < π
M
< π
U
< 1. The lower and upper equilibrium, E
L
and E
U
,
are locally stable. The middle equilibrium E
M
is an unstable spiral.
Speciﬁcally, for the reference parameter values, E
M
= (0.24, 5.98)
and E
U
= (0.93, 16.60). The characteristic roots of the Jacobian of the
Canonical system Eqs. (6), (23), evaluated at E
M
, E
U
, respectively are
0.04 and −0.03 (indicating saddlepoint stability), respectively 0.0075 ±
0.0132i (characteristic for an unstable spiral). Most interestingly, E
U
Paretodominates E
L
, although at the latter equilibrium all ﬁrms make
perfect predications of the taxes, the regulator is going to implement.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
114 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
Figure 2. Phase diagram of the canonical system, c
x
= 0.6, p = 2, δ = 0.00025,
β = 1, and ρ = 0.0015.
Table 1. Proﬁts, welfare, and taxes at E
U
E
U
E
U
and E
L
E
L
E
L
, c
x
= 0.6 c
x
= 0.6 c
x
= 0.6, p = 2 p = 2 p = 2, δ = 0.0025 δ = 0.0025 δ = 0.0025, β = 1 β = 1 β = 1,
p = 0.0015 p = 0.0015 p = 0.0015.
ggg φφφ t
a
t
a
t
a
ttt
E
U
1 2.25 1.07 0.12
E
L
0.98 2.18 N.A. 0.62
The Paretosuperiority of E
U
is illustrated in Table 1 (remember that at
the equilibrium g
N
= g
NB
).
The situation captured in Fig. 2 (an unstable equilibrium surrounded
by two stable ones) implies the existence of a threshold such that it is
optimal for R to follow a policy leading in the longrun towards the stable
equilibrium E
L
= (0, λ
L
), whenever the initial value π
0
of π is inferior to
the threshold value, π
0
< π
s
in other words, when the initial fraction of Bs
is less than π
s
, it is rational for the regulator to let the number of believers
go to zero, although the resulting stationary equilibrium is associated with
poor outcomes for all agents, R, Bs, and NBs.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 115
Whenever π
0
< π
s
on the other hand, the regulator’s optimal policy
leads to the stable equilibrium E
U
, that is, to a situation where there is in
permanence, a positive fraction of believers.
The existence of the threshold is closely associated with the properties
of the πdynamics Eq. (6). For any given actions t
a
, t (that is equivalently:
for any given value of g
B
−g
NB
), the value of ˙ π will be smaller for small
or large values of π than for intermediate ones. Note that, when a threshold
exists and for any π
0
< 1, it is never optimal for the regulator to follow
a policy that would ultimately insure that all ﬁrms are believers, π → 1.
There are twoconcurrent reasons for this: Once the πincreases, the regulator
has to deviate more and more from t
$
(π) in order to make believing more
proﬁtable than not believing. On the other hand, ˙ π decreases. Thus, the
discounted beneﬁts of increasing π decrease. The rationale for steering the
inferior equilibrium E
L
starting from any π
0
> π
s
lies in the following
fact: The “deviation costs” that the regulator will have to accept initially in
order to steer the systemtowards the more favorable E
U
are so high that the
net beneﬁt of going to E
U
would be less than the one obtained by letting
the system optimally converge towards the inferior E
L
.
The above results suggest that the policy of an environmental regulator
maybe very different depending on the fraction of the population that trusts
it when it initiates a new policy. If the initial conﬁdence is high, the costs of
building a large fraction of Bs are compensated in the longrun by accrued
beneﬁts. However, if the regulator’s trustworthiness is low from the onset,
its interest is to exploit its initial credibility for making shortterm gains,
although this ultimately leads to an inferior situation where nobody trusts
the regulator.
As already mentioned, the dynamics of the canonical system Eqs. (6)–
(23) at the unstable middle equilibrium E
M
form a spiral. Therefore, the
threshold value π
s
is typically close to, but distinct fromπ
M
(i.e., π
M
= π
s
)
and the threshold takes the particularly challenging form of a Skiba point.
No local analytical expression exists that characterizes a Skiba point. Thus,
Skiba points must be computed numerically. This was not done in this paper.
For a reference article on thresholds and Skiba points in economic models,
see Deissenberg et al. (2004).
5. Sensitivity Analysis and Bifurcations Towards
a Unique Equilibrium
Numerical analyses show that an increase in the ﬁrms’ ﬂexibility (i.e., an
increase in β) shifts the stable equilibrium E
U
to the right and the unstable
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
116 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
equilibrium E
M
as well as the Skiba point π
s
to the left. In other words,
if the ﬁrms react quickly to proﬁt differences, the regulator will follow a
policy that leads to the upper equilibrium E
U
even if the initial proportion
of Bs is small. Indeed, the high speed of reaction of the ﬁrms means that the
accumulated costs incurred by the regulator on the way to E
U
will be small
and easily compensated by the gains around and at E
U
. Moreover, R does
not have to make Bs much better off than NBs to insure a fast reaction. As a
result, for β large, the equilibriumE
U
is characterized by a large proportion
of Bs and thus insures high stationary gains to all actors.
Not surprisingly, an increase of the discount factor ρ has the opposite
effect. Impatient regulators want to build up conﬁdence, only if the initial
proportion of Bs is high. The resulting equilibrium value π
U
will be rela
tively low, since the time and efforts needed now for an additional increase
in the stock of Bs weighs heavily compared to the expected future beneﬁts.
An increase of the prediction cost δ ﬁnally, shifts E
M
to the left and
E
U
to the right: It is easier for R to incite the ﬁrms to use the B strategy,
favoring the convergence to equilibrium with a large number of believers.
The scenario with two stable equilibria separated by a Skiba point can
change qualitatively, when the parameter values are sufﬁciently altered.
For small values of β and/or δ and/or for large values of ρ, the equilib
rium E
U
collides with E
M
and disappears. Only one (stable) equilibrium
remains, E
L
. Thus, in the longrun, the proportion of believers tends to zero.
On the other hand, for large values of δ trying to outguess the regulator is
never optimal. The only remaining equilibrium is the one where everybody
is a believer.
6. Conclusion
The starting point of this paper is a static situation, where standard ratio
nal behavior by all agents leads to a Paretoinferior outcome, although
there is no fundamental conﬂict of interest either among the different ﬁrms
or between the ﬁrms and the regulator, and although the ﬁrms are per
fectly informed and perfectly predict the regulator’s actions. In addition
to the environmental problem studied here, such a situation is common for
instance in monetary economics, in disaster relief, patent protection, capital
levies, or default on debt.
The present paper and previous work strongly suggest that, in such a
situation, the existence of believers whotake the announcements of a macro
economic regulator at face value typically, Paretoimprove the economic
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
Cheaptalk Multiple Equilibria and Pareto 117
outcome. This property hinges crucially on the fact that the private sector
is atomistic, and thus, that the single agents do not anticipate the colla
tive impact of their individual decisions. Moreover, it requires a relatively
benevolent regulator, whose objective function does not differ too drasti
cally from the one of the private agents.
To introduce dynamics, we assumed that the proportion of believers
and nonbelievers in the economy changes over time as a function of the
difference in proﬁts for the two types of ﬁrms. The change obeys a wordof
mouth process driven by the relative success of the one or the other private
strategy, to believe or to predict. It is supposed that the regulator recog
nizes its ability to inﬂuence the wordofmonth dynamics, by an appropri
ate choice of actions and is interested not only in its instantaneous but also
in its future losses. It is shown that, when the ﬁrms react fast enough to the
difference in proﬁts of Bs and NBs, a sufﬁciently patient regulator may use
incorrect announcements to Paretoimprove the economic outcome com
pared to the situation where all ﬁrms are nonbelievers that perfectly predict
the regulator’s actions. A particularly interesting case arise, when a stable
Paretosuperior equilibrium with a positive fraction of believers coexists
with the (likewise stable) perfect prediction but inferior equilibrium where
all ﬁrms are nonbelievers.
In this case, the optimal policy of the regulator (to converge towards
the one or the other equilibrium) depends upon the initial proportion of
believers in the population. This history dependence of the solution suggests
a promisingmetaanalysis of the importance of goodreputationbasedonthe
previous good governance, in situations where the same macroeconomic
policymaker is confronted with recurrent, distinct decisionmaking issues.
These results of this paper are obtained under the assumption that the
nonbelievers do not exploit the fact that the regulator announced tax differs
from the equilibrium announcement for the static game. Likewise, in the
model, the nonbelievers do not attempt to predict the dynamics of the num
ber of believers. We leave to future research, the nontrivial task of exploring
the consequences of a relaxation, of these two restrictive hypotheses.
References
Abrego, L and C Perroni (1999). Investment subsidies and timeconsistent environmental
policy. Discussion Paper, Coventry: University of Warwick, 1–35.
Barro, R and D Gordon (1983). Rules, discretion and reputation in a model of monetary
policy. Journal of Monetary Economics 12, 101–122.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch03topic2
118 Chirstophe Deissenberg and Pavel
ˇ
Sevˇ c´ık
Batabyal, A (1996a). Consistency and optimality in a dynamic game of pollution control I:
competition. Environmental and Resource Economics 8, 205–220.
Batabyal, A (1996b). Consistency and optimality in a dynamic game of pollution control II:
monopoly. Environmental and Resource Economics 8, 315–330.
Dawid, H(1999). On the dynamics of word of mouth learning with and without anticipations.
Annals of Operations Research 89, 273–295.
Dawid, H and C Deissenberg (2005). On the efﬁciencyeffects of private (dis)trust in the
government. Journal of Economic Behavior and Organization 57, 530–550.
Dawid, H, C Deissenberg and P
ˇ
Sevˇ c´ık (2005). Cheap talk, gullibility and welfare in an
environmental taxation game. In: Dynamic Games: Theory and Applications, AHaurie
and G Zaccour (eds.), Amsterdam: Kluwer, 175–192.
Deissenberg, C and F Alvarez Gonzalez (2002). Paretoimproving cheating in a monetary
policy game. Journal of Economic Dynamics and Control 26, 1457–1479.
Deissenberg, C, G Feichtinger, W Semmler and F Wirl (2004). Multiple equilibria, his
tory dependence, and global dynamics in intertemporal optimization models. In Eco
nomic Complexity: NonLinear Dynamics, Multiagents Economies, and Learning,
W Barnett, C Deissenberg and G Feichtinger (eds.), ISETE, Vol. 14, Amsterdam:
Elsevier, 91–122.
Dijkstra, B (2002). Time consistency and investment incentives in environmental policy.
Discussion Paper 02/12.U.K.: School of Economics, University of Nottingham, 2–12.
Gersbach, H and A Glazer (1999). Markets and regulatory holdup problems. Journal of
Environmental Economics and Management 37, 151–164.
Hofbauer, J and K Sigmund (1998). Evolutionary Games and Population Dynamics.
Cambridge: Cambridge University Press.
Kydland, F and E Prescott (1977). Rules rather than discretion: the inconsistency of optimal
plans. Journal of Political Economy 85, 473–491.
Marsiliani, L and T Renstrom (2000). Time inconsistency in environmental policy: tax
earmarking as a commitment solution. Economic Journal 110, C123–C138.
McCallum, B (1997). Critical issues concerning central bank independence. Journal of
Monetary Economics 39, 99–112.
Petrakis, E and A Xepapadeas (2003). Location decisions of a polluting ﬁrm and the time
consistency of environmental policy. Resource and Energy Economics 25, 197–214.
Simaan, Mand JBCruz (1973a). On the Stackelberg strategy in nonzerosumgames. Journal
of Optimization Theory and Applications 11, 533–555.
Simaan, M and JB Cruz (1973b). Additional aspects of the Stackelberg strategy in nonzero
sum games. Journal of Optimization Theory and Applications 11, 613–626.
Vallee, T (1998). Comparison of different Stackelberg solutions in a deterministic
dynamic pollution control problem. Discussion Paper, LENC3E. France: University of
Nantes, 1–30.
Vallee, T, C Deissenberg and T Basar (1999). Optimal open loop cheating in dynamic
reversed linearquadratic Stackelberg games. Annals of Operations Research 88,
247–266.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Chapter 4
Modeling Business Organization
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Topic 1
Enterprise Modeling and Integration:
Review and New Directions
Nikitas Spiros Koutsoukis
Hellenic Open University, and Democritus University
of Thrace, Greece
1. Introduction
Enterprise modeling (EM), Enterprise integration (EI), and Enterprise
integration modeling (EIM) are concerned with the deﬁnition, analysis,
redesign and integration of information, human capital, business process,
processing of data and knowledge, software applications and information
systems regarding the enterprise and its external environment to achieve
advancements in organization performance (Soon et al., 1997).
Enterprise modeling and Enterprise integration still are still ongoing
ﬁelds of research where the background and interests of contributors in
these areas are heterogeneous, including disciplines like mechanical man
ufacturing, business process reengineering, and software engineering.
However, the following two deﬁnitions (Vernadat, 1996) are represen
tative for current practices:
• Enterprise modeling is concerned with the explicit representation of
knowledge regarding the enterprise in terms models.
• Enterprise integration consists of breaking down organizational barri
ers to improve synergy within the enterprise so that business goals are
achieved in a more productive way.
The main objective of this paper is to review the literature on EM and inte
gration and to provide a concise description of related topics. The paper also
discusses EM and integration challenges, and rationale as well as current
tools and methodologies for deeper analysis.
121
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
122 Nikitas Spiros Koutsoukis
2. Model and Enterprise Terminology
Model and enterprise are the two key terms that appear throughout this
paper. An enterprise can be perceived as “a set of concurrent processes exe
cuted on the enterprise means (resources or functional entities) according
to the enterprise objectives and subject to business or external constraints”
(Vernadat, 1996). A model can be deﬁned as “a structure that a system
can use to simulate or anticipate the behaviour of something else” (Hoyte,
1992). This is a generalization of a deﬁnition given by Minsky (1968):
“a model is a useful representation of some subject. It is a (more or less
formal) abstraction of a reality (or universe of discourse) expressed in terms
of some formalism (or language) deﬁned by modeling constructs for the
purpose of the user”.
Instead of attempting to forma deﬁnition, which may be redundant, we
will mention some important features of enterprise models instead.
• An enterprise model provides a computational representation of the
structure, activities, processes, information, resources, people, behav
ior, goals, and constraints of a business, government, or other enterprise.
It can be both descriptive and deﬁnitionalspanning what is and what
should be. The role of an enterprise model is to achieve modeldriven
enterprise design, analysis, and operation (Fox, 1993).
• Enterprise models must either be representedinthe mindof humanbeings
or in computers (Christensen et al., 1995).
• All enterprise models are built with a particular purpose in mind. Depend
ing on the purpose, the model will emphasize different types of informa
tion at different levels of detail.
3. Enterprise Modeling
During the last decades, EM has been partly addressed by several
approaches to the same problemof making the enterprise more efﬁcient. All
these approaches have included the construction of symbolic representa
tions of aspects of the real world, usingvarious notations andtechniques that
commonly could be denoted “conceptual modeling” (Solvberg and Kung,
1993). The idea of EM, as an extension of the standard for the exchange
of product model data (STEP) product modeling approach, contributes to
integrating design (which focuses on product information) and construction
(focusing on activity and resource information) (Kemmerer, 1999).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 123
In the 1980s and during the early 1990s, when database technology was
applied on a larger scale, the need for better application design and devel
opment methods became evident (Martin, 1982; Sager, 1988). Different
modeling methodologies, like the structured analysis and design technique
(SADT), were developed. A narrow focus on a single department or task
during development often caused operational problems across application
boundaries. The need for making more complete models aroused, integrat
ing operations across departmental or functional boundaries.
The problem in EM regarding the manufacturing of EI and control was
to ensure timely execution of business processes on the functional entities
of the enterprise (i.e., human and technical agents) to process enterprise
objects. Processes are made of functional operations and functional opera
tions are executed by functional entities. Objects ﬂowing through the enter
prise can be information entities (data) as well as material objects (parts,
products, tools, etc.) (Rumbaugh, 1993).
In order to design the ﬂexible information and communication
infrastructure needed, different enterprise models are useful, and the model
may in fact become a part of the integrated enterprise itself. The trend
now is away from the ﬁrst single perspective enterprise models, which was
only focusing on the product data information handling, to more complete
enterprise models covering both informational and human aspects of the
organization (Fox, 1993).
During the 1990s, the question aroused on how to integrate different
departments in an enterprise, and how to connect the enterprise with its
external environment i.e., suppliers and customers, to improve cooperation
and logistics. During the 1990s, conglomerate industries took a more devel
opmental approach, and the research area of EI emerged. Enterprise mod
eling is clearly a prerequisite for EI as all things to be integrated and
coordinated need to modeled to some extent (Petrie, 1992).
Since 1990, major R&D programs for computerintegrated manufac
turing (CIM) have resulted in the following tools/methods or architectures
for EM:
(1) IDEF modeling tools: These are complementary modeling tools devel
oped by the integrated computer aided manufacturing (ICAM) project,
introducing IDEF0 for functional modeling, IDEF1 for information
modeling, IDEF2 for simulation modeling, IDEF3 for business pro
cess modeling, IDEF4 for object modeling, and IDEF5 for ontology
modeling. IDEF models are mainly used for requirements deﬁnitions.
(2) Generic Artiﬁcial Intelligence (GRAI) integrated methodology: It is a
methodology which uses the GRAI grid, GRAI nets, developed by the
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
124 Nikitas Spiros Koutsoukis
GRAI laboratories of France and includes modeling IDEF0, and group
technology to model the decision making processes of the enterprise.
The combination is called GRAI integrated methodology (GIM).
(3) Computer integrated manufacturing open system architecture
(CIMOSA): CIMOSAhas beendevelopedtoassist enterprises adapt to
internal and external environmental changes so as to face global compe
tition and improve themselves regarding product prices, product qual
ity, and product delivery time. It provides four different types of views
of the enterprise: modeling function view, information view, resource
view, and organization view (Beeckman 1993; Vernadat, 1996).
(4) Purdue enterprise reference architecture (PERA): It is a methodology
developed at Purdue University in 1989 which focuses on separating
the humanbased functions of an enterprise from the manufacturing
and information functions (Williams, 1994).
(5) Generalized enterprise reference architecture and methodology
(GERAM): It is a generalization of CIMOSA, GIM, and PERA. The
general concepts, identiﬁed and deﬁned in this reference methodology,
consist of methodological guidelines for enterprise engineering (from
PERA and GIM), lifecycle guidelines (from PERA), and model views
modeling (e.g., CIMOSA constructs).
Modeling method relies upon a speciﬁc purpose deﬁning its ﬁnality, i.e.,
the goal of the modeler. This ﬁnality has a direct inﬂuence on the deﬁnition
of modeling method. We adopt the position that any enterprise is made of a
large collection of concurrent business processes, executed by an open set
of functional entities (or agents) to achieve business objectives (as set by
management). Enterprise modeling and integration is essentially a matter
of modeling and integrating these process and agents (Kosanke and Nell,
1997; Ladet and Vernadat, 1995).
The prime goal of an EMapproach is to support analysis of an enterprise
rather than to model the entire enterprise, even though this is theoretically
possible. In addition, another goal is to model relevant business process and
enterprise objects concerned with business integration.
According to Vernadat (1996), the aim of EM is to provide:
• a better perspective of the enterprise structure and operations;
• reference methods for enterprise engineering of existing or new parts of
the enterprise both in terms of analysis, simulation, and decisionmaking
and
• a model in order to manage efﬁciently the enterprise operations.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 125
Whereas, the main motivations for EM are:
• better understanding of the structure and functions of the enterprise;
• managing complexity of operations;
• creating enterprise knowledge and knowhow for all;
• efﬁcient management of processes;
• enterprise engineering and
• EI itself.
4. Enterprise Integration
Enterprise integration was discussed since the early days of computers
in industry and especially in the manufacturing industry with CIM as the
acronymfor operations integration (Vernadat, 1996). In spite of the different
understandings of the scope of integration in CIM, it was always understood
for information integration across or at least parts of the enterprise. Infor
mation integration essentially consists of providing the right information,
to the right people, at the right place, at the right time (Norrie et al., 1995).
Information that is available to the right people, at the right place, at the
right time, and that enables them to act quickly and decisively, is a crucial
requirement of enterprises that intend to grow and remain proﬁtable. The
intelligent delivery of information must obtain maturity levels consistent
with enterprise goals and must employ the appropriate technology to facili
tate this purpose. In today’s enterprise environment, users must consolidate
data and deliver it as useful information and knowledge that empowers the
right business decisions.
With this understanding the different needs in EI can be identiﬁed:
(1) Identifythe right informationtogainknowledge: Elicit knowledge from
gathering information from the intraenterprise activities and use mod
els to represent this knowledge regarding product attributes and infor
mation, process management issues, and the decisionmaking process.
(2) Provide the right informationat the right place: The needfor implement
ing information sharing systems and integration platforms to support
information transaction across heterogeneous environments regarding
hardware, different operating systems, and the legacy systems. This
is a step towards interenterprise integration overcoming barriers and
provides connection between enterprise environments by linking their
operations to create extended and virtual enterprises.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
126 Nikitas Spiros Koutsoukis
(3) Update the information in real time to reﬂect the actual state of the
enterprise operation: enterprises must be able to adopt changes regard
ing both their external environment i.e., technology, legislation, and
their internal environment i.e., production operations in order to pro
tect the enterprise goal and objective from becoming obsolete.
(4) Coordinate business processes: This requires decisional capabilities
and knowhow within the enterprise for realtime decision support
and evaluation of operational alternatives based on business process
modeling.
The aims of EI are:
• to enable communication among the various functional entities of the
enterprise;
• to provide interoperability of IT applications (plugandplay approach)
and
• to facilitate coordination of functional entities for executing business
processes so that they synergistically fulﬁll enterprise goals.
The main motivation for EI is to make possible true interoperability
among heterogeneous, remotely located, systems within, or outside the
enterprise in an independent IT vendor environment.
Enterprise integration, especially for networked companies and large
manufacturing enterprises is becoming a reality. The issue is that EI is both
an organizational and a technological matter. The organizational matter
is partly understood so far whereas the technological matter lies with the
major advances over the last decade. It concerns mostly computer commu
nications technology, data exchange formats, distributed databases, object
technology, Internet, object request brokers (ORBsuch as OMG/CORBA),
distributed computing environments (such as OSF/DCE and MS DCOM,
and now Java to enterprise edition and execution environments (J2EE).
Some important projects developing integrating infrastructure (IIS)
technology for manufacturing environments include:
• CIMOSA IIS: The CIMOSA integrated infrastructure (IIS) enables
business process control in a heterogeneous IT environment. In IIS, an
enterprise is composed of functional entities, which are connected by a
communication system. Functional entities can be a machine, a software
application, or a human (Vernadat, 2002).
• AIT IP: This is an integration platform developed as an AIT project and
based on the CIMOSA IIS concepts (Waite, 1998).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 127
• OPAL: This is also anAITproject that has proved that EI can be achieved
in design and manufacturing environments using existing ICT solutions
(Bueno, 1998).
• NIIIP (National industrial information infrastructure protocols) (Bolton
et al., 1997).
However, EI suffers frominherent complexity of the ﬁeld, lack of estab
lished standards, which sometimes happen after the fact, and the rapid and
unstable growth of the basic technology with a lack of commonly sup
ported global strategy. The EI modeling research community is growing at
a very rapid pace (Fox and Gruninger, 2003). The EIM research so far has
been mostly performed in large research organizations. A few pilot studies
have also been done in large organizations. Hunt (1991) suggests that EIM
is most beneﬁcial to large industrial and government organizations. The
emphasis of current EIM research on large organizations is risky, and may
even have hampered the growth of the ﬁeld. Enterprise wide modeling is
difﬁcult in large organizations for two reasons: there is seldom any consen
sus on exactly how the organization functions and the modeling task may
be so big that by the time any realistic model is built, the underlying domain
may have changed. Whatever these terms mean, we have to seek solutions
to solve industry problems, and this applies to EM and integration.
5. A Framework Setting for EI Modeling
Enterprise integration is a multifaceted task employed for addressing chal
lenges stemming fromvarious aspects of the globalized economy. The con
temporary business world, economically, politically, and technologically is
a fastchanging world and most if not all enterprises, from small to large,
are continually challenged to evolve as rapidly as the world around them
evolves. Enterprise integration allows an enterprise to function coherently
as a whole and to shift its primary value chain activities in tandem with
its secondary value chain activities according to the challenges or require
ments posed by the enterprise environment or goals set by the enterprise
itself, and most frequently a combination of both (Giachetti, 2004). In this
context, the value chain’s primary and secondary activities are used sim
ply as a wellknown functionaleconomic view of the enterprise in order to
communicate coherently, as opposed to a semantically accurate or complete
view of the enterprise (Porter, 1985).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
128 Nikitas Spiros Koutsoukis
The multiple facets of EI refer to the multiple dimensions, entities, and
constituents where integration is primarily aimed at, and through which
enterprise integration is achieved.
Enterprise integration dimensions represent infrastructural views, upon
which integration is founded, built and subsequently achieved. Some of the
most commonly used integration dimensions are the following:
• the data and information dimension;
• the processes dimension;
• the economic activity or business dimension and
• the technology or technical dimension.
Because these dimensions can easily be extended throughout the enter
prise regardless of the enterprise functions, they are frequently used as
fundamental reference points for EI efforts (Lankhorst, 2005).
Eachof these dimensions has beenusedsuccessfullyfor seekingout and
achieving EI. For instance, in the data and information dimension, which
is mainly the realm of corporate information systems, data warehousing
and enterprise resource planning (ERP), have been used successfully to
introduce enterprisewide, uniﬁed views of data, information and related
analyses. In the processes dimension, business process management (BPM)
or reengineering (BPR) have been successfully applied in developing and
reﬁning old, and new processes which are more integrated and efﬁcient
across the enterprise. In the economic or business dimension, business
strategy models, like the corecompetency theory, balancedscore cards,
or economic valueadded models like the activitybased costing (ABC)
are also used successfully to integrate across the enterprise (Kaplan and
Norton, 1992; Prahalad and Hamel, 1990). In the technology or technical
dimension, the key requirement is for interoperability of systems, hard
ware, and software. The success with which many multinational enter
prises operate shows that certain milestones have been reached in this
direction.
Enterprise entities refer to functionoriented enterprise constructs
that are typically goaloriented, collectively contributing to achieving
the enterprise mission. Entities form hierarchical views of the enter
prise, and frequently tend to match formal or informal structures as in
most organizational charts. From a functional perspective, entities refer
to key organizational activities or divisions, such as administration, pro
duction, accounting, sales, marketing, procurement, and R&D. As is well
known, entities can also be organized in terms of authority or delegation
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 129
layers, communication layers, goalcontribution layers, geographically
based activities and so forth.
The sole purpose for having such functionaloriented constructs is to
decompose the enterprise into smaller, more speciﬁc, and more manageable
parts. The contribution of each entity to the enterprise objectives is thus
readily identiﬁed and narrowed down in scope, horizontally or vertically.
It is only through the use of one or more dimensions that the entities are
recombined in an enterprisewide integration model.
Enterprise constituents or constituent groups refer to the human factor
which is ultimately responsible for setting out, applying and achieving EI.
The most prevalent constituent groups are the following:
• Stakeholders
• Decisionmakers
• Employees
which are internal to the enterprise, and
• Suppliers and
• Clients
which are external to the enterprise.
Some of the main pitfalls in EI lie at the constituents’ level. This is
because the constituents are, in one way or another, autonomous or semi
autonomous agents acting primarily in their own interest, which is deﬁned
mainly by their position in enterprise coordinates terms. We consider the
following fundamental dimensions as the main enterprise coordinates for
positioning the constituents:
• level of authority and
• immediacyof contributiontogoals (fromstrategic tooperational, abstract
to speciﬁc)
which can also be inverted to
• level of responsibility and
• level of activity (from strategic to operational, or abstract to speciﬁc)
The higher the level of authority, the more strategic or abstract are the
actions that the constituents take in enterprise terms. At this level, con
stituents have a bird’seye view of the enterprise and ﬁnd it easier to match
organizational outcomes to personal interest i.e., the bird’seye viewmakes
it easier to view the enterprise as a single entity. On the other hand, the
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
130 Nikitas Spiros Koutsoukis
lower the level of authority the more operational or speciﬁc the actions
that the constituents take in enterprise terms. At this level, constituents ﬁnd
it harder to consider a singleentity view of the organization, and hence
organizational outcomes are less easily matched to personal interests.
It follows that inconsistencies in the constituents’ perspectives have the
ability to cancel out some of the main beneﬁts that the other two facets bring
forward, namely the uniﬁed view of the enterprise as single entity and the
immediacy with which operational activities contribute to strategic goals.
Inconsistencies in constituents’ perspectives are being addressed mainly
through leadership and human resource practices, which aimto put in place
an organizational culture, or otherwise a uniﬁed perception of what the
enterprise stands for, its values andits practices amongother things. Mission
statements, codes of practice, and team building exercises are some of the
most frequently used tools for scoping and setting out a uniﬁed, constituent
view of the enterprise.
Thus, a strong requirement is surfacing for developing newEI modeling
frameworks, which are able to incorporate heterogeneous modeling con
structs, such as the dimensions, entities or constituents considered above.
We ﬁnd that some of the key requirements are set out as follows:
(a) Interchangeable dimensions: EI models should be able to shift between
the dimensions considered above, without loss of context.
(b) Use of hybrid modeling constructs: EI models should be able to utilize
interchangeably modeling constructs for representing any type of enter
prise activity unit, including procedural units, physical or logical units,
inputs or outputs, tangible and intangible processes, decisionmaking
points, etc.
(c) Ability to consolidate or analyze to successive levels of detail: this is
particularly useful for communicating enterprise structures throughout
multiple echelons and hierarchy levels. For instance, stakeholders will
most likely need access to a consolidated view of the integrated enter
prise whereas employees are most likely to require a positioncentred
view of the enterprise, with detailed surroundings that are consolidated
further away from the employee’s position.
(d) Ability to interchange between declarative and goalseeking states of
the integration models: when in declarative mode, models work simply
as formal structures, or simulators of how the integrated enterprise
operates. In goalseeking mode, the model emphasizes improvements
that can lead to goal attainment or “satisfying,” to use Simon’s term
(Simon, 1957).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 131
Of course, these requirements can be analyzed further and in detail, but
such a discussion would be beyond the scope of this paper. Still, we also
ﬁnd that these requirements are sufﬁce to set out an agenda for further work
in the domain of EI and EI modeling.
6. Discussion
At the current state of technology, we ﬁnd that EM and integration are
becoming familiar within the large enterprises that seem to understand the
need of acquiring their concept. However, in accordance with Williams
(1994) we also ﬁnd that the reason for the moderate success and rather not
so extended use of EM and integration initiatives include:
(a) Cost: It is commonly accepted that such efforts are very expensive and
it is rather difﬁcult to argue on the beneﬁts of the outcome from the
beginning of each project.
(b) Project size and duration: Indeed, such projects cannot be completed
but they require time. Quite a large number of people will be involved
and high levels of commitment on behalf of the enterprise are required.
(c) Complexity: EI is a neverending process and parameters such as the
legacy systems within large enterprises resisting integration have to
be seriously considered. There is the need for a global vision with a
modular implementation approach.
(d) Management support: Management facing high cost may resent to sup
port such projects. It is very important that management be part of the
project and be fully committed to it.
(e) Skilled people: The human factor concerning highskilled employees
involved in the projects is vital for success. Unfortunately, due to limited
training schemes experienced users are scarce.
Nevertheless, the enthusiasmof the research community and the indus
try remains intact trying to overcome problems and difﬁculties by creating
tools and adopting methodology that will make a difference. Nowadays,
EM and integration mostly rely upon advanced computer networks, the
worldwide web and Internet, integration platforms as well as ERPs, and
data exchange formats (Chen and Vernadat, 2002; Martin et al., 2004).
A big issue is that both EM and EI are nearly totally ignored by SME’s
and there is still a long way to go before SME’s will master these techniques
on their own and in this case, they have to quickly catch on the technology
and deﬁne which tools they have to use.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
132 Nikitas Spiros Koutsoukis
Some researchers like Vernadat, Kosanke, Tolone, and Zeigler con
sider that this technology would better penetrate and serve any kind of
enterprises if:
• They identify users and a speciﬁc EIM problem. An example of an EIM
problem is how to produce highquality products at low cost.
• They make a prototype EIM that addresses the problem, contributing in
the decisionmaking process based on a simple initial model accepted by
the majority of the users (Tolone et al., 2002).
• There was a standard vision on what EM really depend and there was an
international consensus on the underlying concept for the beneﬁt of the
business users (Kosanke et al., 2000).
• There was a standard, useroriented, interface in the form of a uniﬁed
enterprise modeling language (UEML) based on the previous consensus
to be available on all commercial modeling tools.
• There were real EM and simulation tools commercially available in the
market place taking into account function, information, resource, orga
nization, and ﬁnancial aspects of an enterprise including human aspects,
exception handling, and process coordination. Simulation tools need to
be conﬁgurable, distributed, and agentbased simulation tools (Vernadat
and Zeigler, 2000).
However, it is rather unlikely that a single EIM that will be equally
applicable to all organizations. Each organization will have to develop its
own strategy and philosophy (Fox and Huang, 2005). Despite the different
approaches (frameworks) of EIM, we always have to bear in mind the
following functional requirements of an EIM framework (Patankar and
Adiga, 1995):
(1) Decision orientation. An EI model based on any particular concept of
an EM framework should support the decisionmaking process consid
ering thoroughly all factors of the external and internal environment
affecting the enterprise.
(2) Completeness. EM frameworks should be able to represent all aspects
of the enterprise such as business units, business data, information
systems, and external partners i.e., suppliers, subcontractors, and
customers.
(3) Appropriateness. EIM frameworks should be representing the enter
prise at appropriate levels by providing models that can be understood
by people with heterogeneous backgrounds.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 133
(4) Permanence. The EI model should remain consistent with the enter
prise’s overall objectives even when the enterprise is in the process of
changing due to management changes or other. Models should remain
true and valid within the enterprise.
Finally, the EIM frameworks should be simply structured, ﬂexible,
modular, and generic to ensure that the framework is applicable across dif
ferent manufacturing operations and enterprises. In addition, the framework
should relate to both manual and automatic processes. This will provide the
basis for proper and complete integration.
Modeling and integration always mean effort in time and costs.
Nevertheless, to handle complexity in business environments we see the
abovementioned elements moving from “luxury articles” to “everyday
necessities”.
These are some of the challenges to be solved in future to build extended
interoperable manufacturing enterprises as there is unlikely to be a single
EIM methodology, which will be equally applicable to all organizations
(Li and Williams, 2004). Each organization will have to develop its own
strategy and philosophy. However, EIM promises to be a revolutionary
technology.
Finally, the success of EI efforts lies in the ability to iron out incoheren
cies and lack coordination between the multiple facets considered above.
This is a complex and challenging task, and lies at the heart for EI mod
eling. Many of the multiple modeling frameworks considered above, like
IDEF, CIMOSA, or GERAMfor example, have stemmed fromintegration
efforts in the areas of manufacturing or engineering, and while successful
in their own right, these frameworks are still lagging behind when it comes
to integrating across the multiple dimensions, entities and constituents of a
contemporary enterprise that we have identiﬁed above.
7. Epilogue
This paper has provided a description of the main issues on EM and EI.
It has become quite evident from the literature that both the issues are com
plex and require knowledge on various disciplines particularly on organiza
tional management, engineering, and computer science. As such, it can be
generalized that EM and integration mainly revolves around organizational
modeling from which the analysis, design, implementation and integration
of the organization management, business processes, software applications,
and physical computer systems within an enterprise are supported.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
134 Nikitas Spiros Koutsoukis
We believe that although we are rich in technology to solve isolated
business problems (e.g., product scheduling, quality control, production
planning, accounting, marketing, etc.), many other problems still reside
withinthe enterprise. Solvingthese problems require anintegratedapproach
to problem solving. This integration must start at the representation level
and the entire enterprise together with its structure, functionality, goals,
objectives, constraints, people, processes, products must be modeled. The
solution will probably be the outcome of an EIM framework.
References
Beeckman, D (1993). CIMOSA: computer integrated manufacturing open system archi
tecture. International Journal of Computer Integrated Manufacturing 2(2), 94–105.
Bolton, R, A Dewey, A Goldschmidt and P Horstmann (1997). NIIIP — the national indus
trial information infrastructure protocols. In Enterprise Engineering and Integration:
Building International Consensus, K Kosanke and JG Nell (eds.), Berlin: Springer
Verlag. pp. 293–306.
Bueno, R (1998). Integrated information and process management in manufacturing engi
neering. Proc. of the 9th IFAC Symposium on Information Control in Manufacturing
(INCOM’98), pp. 109–112. France: NancyMetz, May 24–26.
Chen, DandFVernadat (2002). Enterprise interoperability: astandardisationview. ICEIMT
2002, 273–282.
Christensen, LC, BW Johansen, N Midjo, J Onarheim, TG Syvertsen and T Totland (1995).
Enterprise ModelingPractices and Perspectives. In Proc. of 9th ASME Engineering
Database Symposium, Rangan (ed.), Boston: Massachusetts, 1171–1181.
Fox, MS (1993). Issues in Enterprise Modeling. Proc. of the IEEE Conference on Systems.
Man and Cybernetics, France: Le Touquet, 83–98.
Fox, MS and M Gruninger (2003). The logic of enterprise modeling, modeling and method
ologies for enterprise integration. In Modelling and Methodologies of Enterprise
Integration, P Bernus and L Nemes (eds.), Cornwall, Great Britain: Chapman and
Hall, 83–98.
Fox, MS and J Huang (2005). Knowledge provenance in enterprise information.
International Journal of Production Research 43(20), 4471–4492.
Giachetti, RE (2004). A framework to review the information integration of the enterprise.
International Journal of Production Research 42(6), 1147–1166.
Hoyte, J (1992). Digital models in engineering — a study on why and how engineers build
and operate digital models for decision support. PHDThesis, University of Trondheim,
Norway.
Hunt, DV (1991). The Integration of CALS, CE, TQM, PDES, RAMP, and CIM, Enterprise
Integration Sourcebook. San Diego, CA: Academic Press.
Kaplan, RS and DP Norton (1992). The balanced scorecard: measures that drive
performance. Harvard Business Review 70(1), 71–79.
Kemmerer, J (1999). STEP: The Grand Experience, NIST Special Publication SP939,
Washington, C 20402, USA: US Government Printing Ofﬁce.
Kosanke, K and JG NelI (1997). Enterprise Engineering and Integration: Building
International Consensus. Berlin: SpringerVerlag.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
Enterprise Modeling and Integration 135
Kosanke, K, F Vernadat and M Zelm (2000). Enterprise engineering and integration in the
global environment. Advanced Network Enterprises 2000, Berlin, 61–70.
Ladet, P and FBVernadat (1995). Integrated Manufacturing Systems Engineering. London:
Chapman & Hall.
Lankhorst, M (2005). Enterprise Architecture at Work. Berlin: SpringerVerlag.
Li, H and TJ Williams (2004). A Vision of Enterprise Integration Considerations. Toronto,
Canada: ICEIMT 2004.
Martin, J (1982). Strategic DataPlanning Methodologies. Englewood Cliffs, New York:
PrenticeHall.
Martin, J, ERobertson and J Springer (2004). Architectural Principles for Enterprise Frame
works: Guidance for Interoperability. Toronto, Canada: ICEIMT 2004.
Minsky, M (1968). Semantic Information Processing. Cambridge, MA: The MIT Press.
Norrie, M, M Wunderli, R Montau, U Leonhardt, W Schaad and HJ Schek (1995).
Coordination approaches for CIM. In Integrated Manufacturing Systems Engineering.
P Ladet and FBVernadat (eds.), London: Chapman & Hall pp. 221–235.
Patankar, AK, and S Adiga (1995). Enterprise integration modeling: a review of theory and
practice. Computer Integrated manufacturing Systems 8(1), 21–34.
Petrie, CJ (1992). Enterprise Integration Modeling. Cambridge, MA: The MIT Press.
Porter, M (1985). Competitive Advantage. NewYork: Free Press.
Prahalad, CK and G Hamel (1990). The core competence of the organisation. Harvard
Business Review 68(3), 79–91.
Rumbaugh, J (1993). Objects in the constitution — enterprise modeling. Journal on Object
Oriented Programming, January issue, 18–24.
Sager, MT(1988). Data concerned enterprise modeling methodologies —a study of practice
and potential. The Australian Computer Journal 20(3), 59–67.
Simon, H (1957). Models of Man: Social and Rational. NewYork: Wiley.
Solvberg, A and DC Kung (1993). Information Systems Engineering — An Introduction.
NewYork: SpringerVerlag.
Soon, HL, J Neal andA Pennington (1997). Enterprise modeling and integration: taxonomy
of seven key aspects. Computers in Industry 34, 339–359.
Tolone, W, BChu, G.J Ahn, RWilhelmand JESims (2002). Challenges to MultiEnterprise
Integration. ICEIMT 2002, 205–216.
Vernadat, FB (1996). Enterprise Modeling and Integration: Principles and Applications.
London: Chapman & Hall.
Vernadat, F (2002). Enterprise modeling and integration. ICEIMT 2002, 25–33.
Vernadat, FB and BP Zeigler (2000). New simulation requirements for modelbased Enter
prise Engineering. Proc. of IFAC/IEEE/INRIA Internal Conference on Manufacturing
Control and Production Logistics (MCPL’2000). Grenoble, 4–7 July 2000. CDROM.
Waite, EJ (1998). Advanced Information Technology for Design and Manufacture. In Enter
prise Engineering and Integration: Building International Consensus. K Kosanke and
JG Nell (eds.), Berlin: SpringerVerlag, pp. 256–264.
Williams, TJ (1994). The purdue enterprise reference architecture. Computers in Industry,
24(2–3), 141–158.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic1
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Topic 2
Toward a Theory of Japanese Organizational
Culture and Corporate Performance
Victoria Miroshnik
University of Glasgow, UK
1. Introduction
Japanese national culture has a speciﬁc inﬂuence on the organizational
culture (OC) of the Japanese companies. Inorder tounderstandthese aspects
of culture that are affecting corporate performance, a systematic analysis of
the national culture (NC) and OC is essential. The purpose of this chapter
is to provide a scheme to analyze this issue in a systematic way.
National culture can affect corporate OC (Aoki, 1990; Axel, 1995;
Hofstede, 1980) and corporate performance (CP) (Cameron and Quinn,
1999; Kotter and Heskett, 1992). Multinational corporations with its sup
posedly common OCdemonstrate different behaviors in different countries,
which can affect their performances (Barlett and Yoshihara, 1988; Calori
and Sarnin, 1991). National culture with its various manifestations can cre
ate unique organizational values for a country, which in conjunction with
the human resources management (HRM) practice may create some unique
OC for a corporation. Organizational culture along with a leadership style,
which is the product of OC, NC, and HRM practices, creates a synergy,
which shapes the fortune of the company’s performances. Thus, NC, orga
nizational values, OCs, leadership styles, human resources practices, and
CPs are interrelated and each depend on the other. If a company maintains
this harmony it can perform well, otherwise it declines.
The structure of this chapter is as follows. In Section 2, relationship
between NCs, OC, and performances, as given in the existing literature, is
analyzed. In Section 3, the Japanese culture was analyzed along with the
controversies on this issue. In Section 4, the new approach of the nation
organizationperformance exchange model (NOPX) is explained in detail
and the theoretical propositions underlying this model are analyzed.
137
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
138 Victoria Miroshnik
2. Corporate Performance, the Highest Stage of National
and Organizational Culture: A Synthesis
Culture is multidimensional, comprising of several layers of interrelated
variables. As society and organizations are continuously evolving, there is
no theory of culture valid at all times and locations. The core values of a
society (macrovalues (MAVs)) can be analyzed by asking the following
questions:
(a) What are the relationships between human and nature? (b) What are
the characteristics of innate human nature? (c) What is the focus regarding
time — whether past, future, or present, or a combination of all these?
(d) What are the modalities of human activity, whether spontaneous or
introspective or resultoriented or a combination of these? (e) What is the
basis of relationship between one man and another in the society?
In the structure of the NC, there are certain micro or individual val
ues, which include sense of belonging, excitement, fun and enjoyment,
warm relationship with others, selffulﬁlment, being wellrespected, and a
sense of accomplishment, security, and selfrespect (Kahle, et al., 1988).
Although Hofstede (1990; 1993) has the opinion that the perceived practices
are the roots of the OC rather than values, most researchers have accepted
the importance of values in shaping cultures in several organizations.
Combinations of microvalues (MIVs) and macrovalues (MAVs) can
give rise to an OC, given certain mesovalues (MEV) or organizational
values, which are speciﬁc for a country. Mesovalues are certain codes of
behavior, which inﬂuences the future, and the expected behavior of the
members of the society, if they want to belong to the main stream. These
vary from one society to another, as the expectations of different soci
eties, as products of historical experiences, religious, and moral values are
different.
Cultural values are shared ideas about what is good, desirable, and
justiﬁed; these are expressed in a variety of ways. Social order, respect for
traditions, security, and wisdom are important for the society, which is like
an extended family.
According to Schein (1997), OCemerges fromsome common assump
tions about the organization, which the members share as a result of their
experiences in that organization. These are reﬂected in their pattern of
behaviors, expressed values, and observed artefacts. Organizational cul
ture provides accepted solutions to known problems, which the members
learn and feel about and forms a set of shared philosophies, expectations,
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Toward a Theory of Japanese Organizational Culture 139
norms, and behavior patterns, which promotes higher level of achievements
(Kilman et al., 1985; Marcoulides and Heck, 1993; Schein, 1997).
Organizational culture, for a large and geographically dispersed organi
zation, may have many different cultures. According to Kotter and Heskett
(1992), leaders create certain vision or philosophy and business strategy
for the company. A corporate culture emerges that reﬂects the vision and
strategy of the leaders and experiences they had while implementing these
strategies. However, the question is whether it is valid for every NC.
A combination of MAVs, MEVs, and MIVs creates a speciﬁc OC,
which varies from country to country according to their differences in
NC. Hofstede (1980; 1985; 1990; 1993) showed the relationship between
NC and OC. Global leadership and organizational behavior effectiveness
(GLOBE) research project has tried to identify the relationships among
leadership, social culture, and OC (House, 1999).
Cameron and Quinn (1999) have mentioned that the most impor
tant competitive advantage of a company is its OC. If an organization
has a “strong culture” with “wellintegrated and an effective” set of val
ues, beliefs, and behaviors, it normally demonstrates high level of CPs
(Ouchi, 1981).
Denison and Mishra (1995) have attempted to relate OC and perfor
mances based on different characteristics of the OC. However, different
types of OCenhance different types of business (Kotter and Heskett, 1992).
There is no single cultural formula for longrun effectiveness. As Siehl and
Martin (1988) have observed, culture may serve as a ﬁlter for factors that
inﬂuence the performance of an organization. These factors are different for
different organizations. Thus, a thorough analysis regarding the relationship
between culture and performances is essential.
3. Japanese National and Organizational Culture
Japanese ﬁrms are considered to be of a family unit with longterm ori
entations for HRM and with close ties with other likeminded ﬁrms,
banks, and the government. The OC promotes loyalty, harmony, hard
work, selfsacriﬁce, and consensus decisionmaking. These, along with
lifetime employment and senioritybased promotions, are considered to
be the natural outcome of the Japanese NC. (Ikeda, 1987; Imai, 1986;
Ouchi, 1981). Japanese workers’ psychological dependency on companies
emerges fromtheir intimate dependent relationships withthe societyandthe
nation.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
140 Victoria Miroshnik
Japanese NC has certain MIVs: demonstration of appropriate attitudes
(taido); the way of thinking (kangaekata); and spirit (ishiki), which forms
the basic value system for the Japanese (Kobayashi, 1980). Certain MEVs
can be identiﬁed as the core of the cultural life for the Japanese, if they want
to belong to the mainstream Japanese society (Basu, 1999; Fujino, 1998).
These are (a) SenpaiKohai system; (b) Conformity; (c) HouRenSou and
(d) Kaizen or continuous improvements.
SenpaiKohai or seniorjunior relationships are formed from the level
of primary schools where junior students have followed the orders of the
senior students, who in turn, may help the juniors in learning. The process
continues throughout the lifetime for the Japanese. In work places, Senpais
will explain Kohais how to do their work, the basic code of conducts and
norms.
Fromthis systememerges the second MEV, Conformity, which is better
understood from the saying that “nails that sticks out should be beaten
down”. The inner meaning is that unless someone conforms to the rules
of the community or coworkers, he/she would be an outcaste. There is no
room for individualism in the Japanese society or in work place.
The third item, HouRenSou, is the basic feature of the Japanese
organizations. HouRenSou, is a combination of three different words in
Japanese: Houkoku i.e., to report, Renraku i.e., to inform and Soudan i.e.,
to consult or preconsult.
Subordinates should always report to the superior. Superiors and subor
dinates share information. Consultations andpreconsultations are required;
no one can make his/her decision by himself/herself even within the dele
gated authority. There is no space in which the delegation of authority may
function. Combining these words, Houkoku, Renraku, and Soudan, “Hou
RenSou” is the core value of the culture in Japan. Making suggestions, for
improvements, without preconsultations is considered to be an offensive
behavior in Japanese culture. Everyone fromthe clerk to the president, from
the entry day of the working life to the date of retirement, every Japanese
must follow the “HouRenSou” value system.
Kaizen i.e., continuous improvement is another MEV that is one of the
basic ingredients of the Japanese culture. Search for continuous improve
ment during the Meiji Government of the 19th century led themto search the
worldfor knowledge. Japanese companies todayare doingthe same interms
of both acquisitions of knowledge by setting up R&D centers throughout
the western world, and by having “quality circles” in work places in order to
implement new knowledge to improve the product quality and to increase
its efﬁciency (Kujawa, 1979; Kumagai, 1996; Miyajima, 1996).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Toward a Theory of Japanese Organizational Culture 141
3.1. Japanese OC
In order to understand the Japanese OC, it is essential to know the Japanese
systemof management, which gave rise to the unique Japanese style of OC.
What is written below is the result of several visits to the major Japanese
corporations (like Toyota, Mitsubishi, Honda, and Nissan) and interviews
with several senior executives, including members of the board and vice
presidents of these companies.
Japanese system of management is a complete philosophy of organi
zation, which can affect every part of the enterprise. There are three basic
ingredients: lean production system, total quality management (TQM) and
HRMs (Kobayashi, 1980). These three ingredients are interlinked in order
to produce total effect on the management of the Japanese enterprises.
Because Japanese overseas afﬁliates are part of the family of the parent
company, their management systems are part of the management strat
egy of the parent company (Basu, 1999; Morishima, 1996; Morita, 1992;
Shimada, 1993).
The basic idea of the lean production system and the fundamental
organizational principles is the “Humanware” (Shimada, 1993), which is
described in Fig. 1. “Humanware” is deﬁned as the integration and interde
pendence of machinery and human relations and a concept to differentiate
among different types of production systems. The purpose of the lean pro
duction philosophy, which was developed at the Toyota Motor Company,
is to lower the costs. This is done through the elimination of waste —
everything that does not add value to the product. The constant strive for
perfection (Kaizen in Japanese) is the overriding concept behind good man
agement, in which the production system is being constantly improved;
perfection is the only goal. Involving everyone in the work of improvement
is often accomplished through quality circles. Lean production systemuses
“autonomous defect control” (Pokayoke in Japanese), which is an inexpen
sive means of conducting inspection for all units to ensure zero defects.
Quality assurance is the responsibility of everyone. Manufacturing tasks
are organized into teams. The principle of “justintime” (JIT) means, each
OC NC LC CP
Where “NC” is “National Culture”; “OC” is “Organizational Culture”; “LC” is
“Leader Culture”; and, “CP” is “Corporate Performance”.
Figure 1. Submodel 1: cultureperformance system.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
142 Victoria Miroshnik
process should be provided with the right part, in the right quantity at exactly
the right point of time. The ultimate goal is that every process should be
provided with one part at a time, exactly when that part is needed.
The most important feature of the organizational setup of the lean
production system is the extensive use of multifunctional teams, which
are groups of workers who are able to perform many different works. The
teams are organized along a cellbased production ﬂow system. The num
ber of jobclassiﬁcations also declines. Workers have received training to
perform a number of different tasks, such as the statistical process control,
quality instruments, computers, setup performances, maintenance etc. In
the lean production system responsibilities are decentralized. There is no
supervisory level in the hierarchy. The multifunctional team is expected
to perform supervisory tasks. This is done through the rotations of team
leadership among workers. As a result, the number of hierarchical levels in
the organization can be reduced. The number of functional areas that are
the responsibility of the teams increases (Kumazawa and Yamada, 1989).
In a multifunctional setup, it is vital to provide information in time
and continuously in the production ﬂow. The production system is chang
ing and gradually adopting a more ﬂexible system in order to adopt itself to
changes in demand; which may reduce costs of production and increase efﬁ
ciency. There are extensive usages of Kaizen activities and TQM and Total
Productive Maintenance (TPM) to increase the effectiveness of corporate
performances.
4. A NationOrganizationLeaderPerformance Model
Organizational culture is a product of a number of factors: NC, human
resources practice, and leadership styles have prominent inﬂuences on it.
These in turn, along with the OC, affect CPs. The proposed model, as in
Fig. 1, is the synthesis of the ideas relating to this central theme. Figures 2
and 3 describe the proposed theoretical model relating OC and CPs. Cor
porate performances are measured in terms of the following criteria:
(a) customers’ satisfaction;
(b) employee’s satisfaction and commitment and
(c) contributions of the organization to the society and environment.
Most companies in Japan use these criteria to signify CPs (Basu, 1999;
Nakane, 1970).
There are several latent or hypothetical variables, which jointly can
inﬂuence the outcome. National culture is affected by two constructs:
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Toward a Theory of Japanese Organizational Culture 143
MIVs and MAVs. Microvalues (MIVs) are composed of several factors
(Kahle et al., 1988). These are: (1) the sense of belonging; (2) respect
and recognition from others and (3) a sense of life accomplishments and
(4) selfrespect.
Macrovalues, according to the opinions of the executives of the
Japanese, ﬁrms are religious values, habitual values, and moral values.
There may be other MAVs, which are important for other nations like geog
raphy, racial origin, etc., but for the Japanese, these are not so important
(Basu, 1999). Although moral and habitual values can be the results of
religious values, it is better to separate out these three values as distinct.
The moral and habitual values are different in Japan from other East Asian
nations. Honor and “respect from others” are central to the Japanese psy
chology. Ritual suicides (Harakiri) are honorable acts for the Japanese, if
they fail in some way to do their duty. Habitual values are extreme politeness
on the surface, beautiﬁcations of everything, community spirit, and clean
ness are unique in Japan, which are not followed in any other countries
in Asia.
This is particularly true if we examine certain MEVs, which are neither
MIVnor MAVvalues, but are derived fromthe NC. It is possible to identify
ﬁve different MEVs, which are important outcomes of the Japanese NC.
These are discussed as follows:
(a) Exclusivity or insideroutsider (UchiSoto in Japanese) psychology by
which Japanese exclude anyone who is not an ethnic Japanese from
social discourse; (It is different fromcolor or religious exclusivities. For
example, the Chinese or Koreans, who are living in Japan for centuries,
are excluded from the Japanese social circles.)
(b) Conformity or the doctrine of “nail that sticks up should be beaten
down” (Deru Kuiwa Utareru in Japanese) —deviations fromthe main
stream norms are not tolerated;
(c) Seniority system (SenpaiKohai in Japanese) by which every junior
must obey and show respects to the seniors;
(d) Collectivism in decisionmaking process (“HouRenSou” system in
Japanese) and
(e) Continuous improvements (Keizen in Japanese), which is the funda
mental philosophy of the Japanese society.
These MEVs are exclusively Japanese, a reﬂection of the unique
Japanese culture (Basu, 1999; Nakane, 1970) and is fundamental to the
Japanese organizations.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
144 Victoria Miroshnik
National culture construct along with the MEVs affect both the orga
nizational culture construct (OR), HRM system and leadership styles (LS)
constructs. Japanese OR construct is affected by the NC, MEVs, and the
HRM system. The similarities in OCs in different companies are due to
the similarities in NC and MEV; the differences are due mainly to the dif
ferences in HRM. In Japan human resource practices vary from one com
pany to another and as a result, OCs and performances vary. In Toyota,
it is very consistent and strong culture with great emphasis on disci
pline, Keizen, TQM and just in time (JIT) productioninventory system.
In many other companies in Japan, situations are not the same; as a result,
OC differs.
Organizational culture is affected by the NC, MEVs, and HRM. If the
company’s leader is also the founder, then only the leadership style (LS) can
affect the HRMand OC. Otherwise, when the leaders are professional man
agers trained and grown up within the company (In Japan and in Japanese
companies abroad, it is out of practice to accept an outsider for the execu
tive positions. All executives enter the company after their graduation and
stay until they retire), appropriate LCs are developed by the OC and HRM.
Due to the collective decisionmaking process (HouRenSou) leader
ship, a MEV in the model, cannot affect an organizational culture or the
HRM system. It is very different from the Western (America, European,
or Australian) companies where the leaders are hired from outside and are
expected to be innovative regarding the OC and HRM system. This is quite
alien to the Japanese culture. Leadership style in Japan is an outcome, not
an independent variable as in the Western companies. Leadership style is
affected by the NC, MEVs, OC, and the HRM.
Organizational culture (OC) is affected by four factors, according to this
proposed model. These factors are: (a) stability; (b) ﬂexibility; (c) internal
focus and (d) external focus (Cameron and Quinn, 1999). These factors are
universal, not restricted to the Japanese companies. However, how an OC
is being affected by NC, MEVs, and HRM would vary from country to
country.
Leadership style is a function of four factors. These factors are leader:
(a) as a facilitator; (b) as an innovator; (c) as a technical expert and (d) as
an effective competitor for rival ﬁrms (Cameron and Quinn, 1999). These
four characteristics of a leader are universal, but the emphasis varies from
country to country.
Finally, the corporate performance (CP) is affected by a number of
inﬂuencing factors: (a) customers’ satisfaction, (b) employees’ satisfaction
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Toward a Theory of Japanese Organizational Culture 145
OC
NC
LC
CP
MAV
MEV
MIV
HRMPJ
Where “NC” is “National Culture”; “OC” is “Organizational Culture”; “LC” is “Leader
Culture”; “MAV” is “MacroValues of Culture”; “MEV” is “MesoValues of Culture”;
“MIV” is “MicroValues of Culture”; “HRMPJ” is “Human Resources Management
Practices in Japan”, and “CP” is “Corporate Performance”.
Figure 2. Value–culture–performance: a thematic presentation.
and commitments, and (c) the contribution of the organization to the soci
ety. If we consider the contribution of the organization to society that is
already taken into account by the customers’ satisfaction and employees’
satisfaction, then CP has these two important contributory factors.
Theoretical relationship between the observed variables and the under
lying constructs are speciﬁed in an exploratory model, represented in Fig. 2.
In Fig. 3, a more elaborate model, including the unique HRM system of the
Japanese companies is described. Figure 3 represents a Linear Structural
Relations (LISREL) version (Joreskog and Sorbom, 1999) of the model,
which can be estimated using the methods of structural equation modeling
(Raykov and Marcoulides, 2000).
5. Discussion
The proposed theory describes the relationship between NC and CPs
through OC with the HRM system affecting the nature of the culture and
leadership. These interrelationships are important aspects of the Japanese
corporate behaviors in a multinational setting and this proposed theory is
an attempt to understand the Japanese corporate culture.
Analysis of the Japanese culture and organizations by outsiders so far
suffers from a number of defects (Ikeda, 1987; Watanabe, 1998). They nei
ther try to understand the effects of the Japanese value system on their
organizations nor do they give any importance to the relationship between
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
146 Victoria Miroshnik
H21
OC
NC
LC
CP
MAV
MEV
MIV
C1 C2 C3 C4 C5
D2 D3 D4 D1
F2 F3 F1 F4
G2 G1 G3 FG
B2
B1
B3
B4
H1
F5
F6
A1
A2
A3
HRMPJ
E1
E24
Where “NC” is “National Culture”; “OC” is “Organizational Culture”; “LC” is “Leader Culture”;
“MAV” is “MacroValues of Culture”; “MEV” is “MesoValues of Culture”; “MIV” is
“MicroValues of Culture”, and “CP” is “Corporate Performance”; A1 “Religious Values”, A2
“Moral Values” A3 “Habitual Values” are indicators for MAV variable; B1 “Power Distance”,
B2 “Individualism/Collectivism”, B3 “Masculinity/Femininity” are indicators for NC variable;
C1 “Uchi/Soto Value”, C2 “Deru Kuiwa Utareru Value”, C3 “Senpai/Kohai Value”, C4
“HouRenSou Value”, C5 “Kaizen Value” are indicators for MEV variable; D1 “Stability”,
D2 “Flexibility”, D3 “External Focus”, D4 “Internal Focus are indicators for OC variable”;
E1 “OnJobTraining”; E2 “Emphasis on Personality of Recruits” are indicators for HRMPJ
variable; F1 “Sense of Belonging to a Group Value”, F2 “Respect and Recognition from Others
Value”, F3 “Sense of Life Accomplishment Value”, F4 “Selfrespect Value”, F5 “Kangaekata
Value”, F6 “Ishiki Value” are indicators for MIV variable; G1 “Leader as Facilitator”, G2
“Leader as Innovator”, G3 “Leader as Technical Expert”, G4 “Leader as Competitor” are
indicators for LC variable; H1 “Customers Satisfaction”, H2 “Employees satisfaction and
Commitment” are indicators for CP variable.
Figure 3. Research model: Valuescultureperformance (VCP): predicted paths
between variables and indicators.
performance and culture. So far, the success of the Japanese companies was
analyzed only in terms of their unique production and operations manage
ment system. However, the interrelationship between the production and
operations management system, which is a part of the OC, and the spe
ciﬁc HRM system it requires for its proper implementation, was not given
sufﬁcient attention.
The typical example is the analysis of the failure of the implementa
tions of Japanese operations management system in Britain by Oliver and
Williamson (1992) or in China by Taylor (1999). They could not explain
the reason for the failure, as they have not analyzed the Japanese OC and
particularly the HRM system.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Toward a Theory of Japanese Organizational Culture 147
In a Japanese company, the LSs and the OC are designed by the HRM
system; these are not coming from outside. An effective corporate perfor
mance is the result of these underlying determinants of OC and LSs. Thus,
in a Japanese organization, LS is rooted in the HRMsystem, which emerges
from the MEVs of the Japanese national culture.
Whether efﬁcient CPs can be repeated in foreign locations depends on
the transmission mechanismof the Japanese OC, which in turn, depends on
the adaptations of the Japanese style HRM system. This can face obstacles
due to the different MEVs in a foreign society and as a result, OC in a
foreign location may be different from the OC of the company in Japan.
Adaptation of the operations management system alone will not create an
effective performance of a Japanese company in a foreign location. Creation
of an appropriate HRM system and appropriate OC are essential for an
effective performance.
6. Conclusion
A proposed theory of the relationship, among NC, OC, human resources
practices, and CPs for the Japanese multinational companies, is narrated.
Cultural differences are important and these factors affect every compo
nent of the company behaviors and performances. The question was raised
regarding the factors, which are relatively more important than the oth
ers and whether it will be possible to manipulate these factors in order to
increase the performances of the multinational companies. The proposed
theory may provide an answer to these issues.
References
Aoki, M (1990). Towards an economic model of the Japanese ﬁrm. Journal of Economic
Literature 28, 1–27.
Axel, M (1995). Culturebound aspects of Japanese management. Management Interna
tional Review 35(2), 57–73.
Barlett, CA and HYoshihara (1988). New challenges for Japanese multinationals: is orga
nization adaptation their Achilles Heel. Human Resources Management 27(1), 19–43.
Basu, S (1999). Corporate Purpose. NewYork: Garland Publishers.
Calori, Rand P Sarnin (1991). Corporate culture and economic performance: a French study.
Organization Studies 12(1), 49–74.
Cameron, KS and R Quinn (1999). Diagnosing and Changing Organizational Culture:
Based on the Competing Values Framework. NewYork: AddisonWesley.
Denison, DR and AK Mishra (1995). Toward a theory of organizational culture and effec
tiveness. Organizational Science 6(2), March–April, 204–223.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
148 Victoria Miroshnik
Fujino, T (1998). How Japanese companies have globalized. Management Japan 31(2),
1–28.
Hofstede, G (1980). Culture’s Consequences: International Differences in Workrelated
Values. Beverly Hills, CA: Sage.
Hofstede, G (1985). The interaction between national and organisational value system.
Journal of Management Studies 22(3), 347–357.
Hofstede, G (1990). Measuring organizational cultures: a qualitative and quantitative study
across twenty cases. Administrative Science Quarterly 35, 286–316.
Hofstede, G (1993). Cultural constraints in management theories. Academy of Management
Executive 7(1), 81–94.
House, RJ et al. (1999). Cultural inﬂuences on leadership: Project GLOBE. In Advances in
Global Leadership, W Moblet, J Gessner and VArnold (eds.), Vol. 1. Greenwick: CT,
JAI Press, 171–234.
Ikeda, K (1987). Nihon oyobi nihonjin ni tsuiteno imeji (Image of Japan and Japanese). In
Sekai wa Nihon wo Dou Miteiruka (How Does the World See Japan). A Tsujimura,
K Furuhata and HAkuto (eds.) Tokyo: Nihon Hyoronsa, pp. 12–31.
Imai, M (1986). Kaizen: The Key to Japan’s Competitive Success. New York, NY:
McGrawHill.
Joreskog, KG and D Sorbom (1999). LISREL 8.30: User’s Reference Guide. Chicago:
Scientiﬁc Software.
Kahle, LR, RJ Best and RJ Kennedy (1988). An alternative method for measuring value
based segmentation and advertisement positioning. Current Issues in Research in
Advertising 11, 139–155.
Kilman, R, MJ Saxton, and R Serpa (1985). Introduction: ﬁve key issues in understanding
and changing culture, In Gaining Control of the Organizational Culture, R Kilman,
M Gaxton and R Serpa (eds.), San Francisco: JosseyBass, pp. 1–16.
Kobayashi, N (1980). Nihon no Takokuseki Kigyo (Japanese multinational corporations).
Tokyo: Chuo Keizaisha.
Kotter, JP and JL Heskett (1992). Corporate Culture and Performance. NY: The Free Press.
Kujawa, D (1979). The labour relations of US multinationals abroad: comparative and
prospective views. Labour and Society, 4, 3–25.
Kumagai, F (1996). Nihon Teki Seisan Sisutemu in USA (Japanese Production System in
USA). Tokyo: JETRO.
Kumazawa, M and J Yamada (1989). Jobs and skills under the lifelong Nenko employment
practice. In The Transformation of Work. SWood (ed.). London: Unwin Hyman, 56–79.
Marcoulides, GA and RH Heck (1993). Organization culture and performance: proposing
and testing a model. Organization Science 14(2), May, 209–225.
Morishima, M (1996). Renegotiating psychological controls, Japanese style. In Trends in
Organizational Behavior. CL Cooper (ed.), Vol. 3. NewYork: John Wiley & Sons Ltd.,
149–165.
Morita, A (1992). A moment for Japanese management. JapanEcho 19(2), 1–12.
Miyajima, H (1996). The evolution and change of contingent governance structure in the
Jﬁrm system: An approach to presidential turnover and ﬁrm performance. Working
Paper No. 9606. Institute for Research in Contemporary Political and EconomicAffairs.
Tokyo: Waseda University.
Nakane, G (1970). Japanese Society. Harmondsworth: Penguin.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
Toward a Theory of Japanese Organizational Culture 149
Oliver, N and B Wilkinson (1992). The Japanization of British Industry. Cambridge,
Massachussets: Blackwell.
Ouchi, W (1981). Theory Z: How American Business can Meet the Japanese Challenge.
Reading, MA: AddisonWesley.
Raykov, T and GA Marcoulides, (2000). A First Course in Structural Equation Modeling.
London: Lawrence Erlbaum.
Schein, EH (1997). Organizational Culture and Leadership. SanFrancisco: JosseyBass
Publishers.
Shimada, H (1993). Japanese management of autoproduction in the United States: An
overviewof human technology in international labour organization. In Lean Production
and Beyond, Geneva: ILO, 23–47.
Taylor, B (1999). Patterns of control within Japanese manufacturing plants in China: doubts
about Japanization in Asia. Journal of Management Studies 36(6), 853–873.
Watanabe, S (1998). Managersubordinate relationship in crosscultural organizations: the
case of Japanese subsidiaries in the United States. International Journal of Japanese
Sociology 7, 23–43.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic2
This page intentionally left blank This page intentionally left blank
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Topic 3
Health Service Management Using
Goal Programming
AnnaMaria Mouza
Institute of Technology and Education, Greece
1. Introduction
Private production units usually operate on the basis of proﬁt maximization.
However, to face competition fromsimilar units and to be in line with some
major socioeconomic factors, some decision makers are forced to relax this
basic objective. In order to present propositions for optimal management
decisions, one should combine the priorities of the decision maker with the
technical and socioeconomic factors, which are involved in the operating
process in the best possible way. The production unit considered in this
paper is a relatively small (60 beds) private clinic in northern Greece, which
is similar to the orthopedic department of a hospital studied and described
elsewhere (Mouza, 1996).
However, in this particular case, there is no outpatient services. To
facilitate the presentation, I consider a ﬁveyear operational plan where
the personnel claims, the manpower working pattern, the various expenses,
together with the proﬁt maximization target, and the expected number of
patients are described in detail. The necessary projections are based on
reliable techniques presented elsewhere (Mouza, 2002; 2006).
After setting the priorities of the various targets, I formulate a proper
“goal programming” problem using deviational variables (d
−
i
and d
+
i
) and
obtained the ﬁrstrun solution. Then, I proceed to the goal attainment eval
uation, which dictated a readjustment of the priorities. The secondrun
solution indicates that a slower acceleration of the proﬁt maximization rate
is preferable, to avoid overcharging the patients, at the same time satisfying
most of the requirements for personnel management. Furthermore, if the
decision maker’s preference is relatively rigid, a lower rate of the increase
151
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
152 AnnaMaria Mouza
of salaries of the employees should be adopted. All computational results
are presented at the end of the relevant section.
2. The Details of the Private Clinic in Relation
to the FiveYear Plan
In Table 1, I present the composition of the personnel, and their salary
at the base year (t
0
). Note that the 5year planning horizon, refers to the
timeperiods (years) t
1
, t
2
, t
3
, t
4
, and t
5
.
In columns 4 and 5 of Table 1, the total amount of the wages for all
employees of the same group, for the initial and ﬁnal year is stated. Note
that the wages are an average of the salaries earned by each person in
the individual group. For each group, an average percent of annual salary
Table 1. Personnel of the orthopedic clinic and their wages.
Groups
Position of the
group members
Members
at each
group
Annual salary at
year t
0
t
0
t
0
for all
group members
(in thousand )
Salary at ﬁnal
year t
5
t
5
t
5
, for all
group members
(in thousand )
1 Orthopedic surgeons,
anaesthiologists and
microbiologists
10 260 356.222
2 Pharmacist 1 24.7 33.054
3 Operatingroom nurses 3 54.6 68.04
4 Nurses plus laboratory
staff
14 163.8 199.29
5 Axialtomography and
Xray technicians
3 58.5 74.307
6 Clinic manager 1 27.3 36.190
7 Secretary and
administrative
service staff
3 32.76 39.476
8 Ofﬁce clerks 4 41.6 49.647
9 Cooking and food
service staff
4 42.64 50.643
10 Other staff (guard,
bearers maintenance,
cleaning service etc.)
10 101.4 119.851
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 153
increase, say R
i
, is considered. It should be noted that in some groups, this
rate is well above the rate of inﬂation growth, and the rate stated in the
collective wage and salary agreements for each employees group. These
rates of annual salary increase for all groups are presented in Table 3.
Denoting by A
0
(i) the annual salary of all members of group i at the base
year t
0
, and the ﬁnal annual salary at year t
5
for all members of the same
group, denoted by A
n
(i), is computed from:
A
n
(i) = A
0
(i)
1 +
R
i
100
5
. (1)
Thus for group 1, where R
1
= 6.5%, the ﬁnal salary for all group members
will be:
A
n
(1) = 260(1 +0.065)
5
= 356.222.
It should be noted, that these rates of salary increases have been formed
in accordance to the personnel claims. Other additional expenses for the
ﬁnal year of of the planning period are presented in Table 2.
On the top of Table 2, the forecast regarding the expected number
of patients for the ﬁnal year is stated. It should be pointed out that not
all patients necessarily need an axialtomography and/or Xray treatment.
That is why the average per patient cost stated in Table 2 seems to be
relatively low.
The interactive voice response (IVR) systems whichare effectivelyused
in healthcare and hospitals are described by Mouza (2003). It should be
noted that replacement expenses together with expenses for obtaining new
devices, seems more reasonable to be equally distributed over the ﬁve years
of the planning period. Needless to say, that in case of replacements, the
estimated salvage on old equipment has to be subtracted from the relevant
cost. Thus, at the ﬁnal year, only one ﬁfth of such expenses is stated. For the
case under consideration, the replacement refers to the server of the local
area computer network (LAN), together with two stations. Their net cost
sum up to 2000, which has been equally spread over the ﬁve years of the
planning period.
From Table 1, I computed the average salary per employee at the ﬁnal
year which, when multiplied by the group size gives the total salary for the
corresponding group, as it can be veriﬁed from Table 3.
It can be seen fromTable 3, that X
i
(i = 1, . . . , 10) denotes the size (i.e.,
the members) of group i and c
i
, the average salary of each group member.
Thus, the total salary of all the members of group 1, for instance, can also
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
154 AnnaMaria Mouza
Table 2. Forecasted number of patients and various expenses for the
ﬁnal year.
Forecasted average
per patient, at the ﬁnal
Patient expenses period t
f
t
f
t
f
( )
Forecasted number of patients for the period (year) t
5
= 3625
Axialtomography 9.1
Xray 5.7
Medical supplies 59.4
Administrative and
miscellaneous
32.1
Other expected
expenses
Total ( ) Amount for the ﬁnal year t
5
Installation of IVR
system
3000 600
Server and computers
replacement
2350 (salvage value: 350) 400
Personnel insurance
(19% of total
salaries)
Endogenous (decision
variable X
39
)
Laboratory expenses 60,000
Fixed expenses plus
depreciation
180,000
All other expenses 80,000
be computed from:
c
1
×X
1
= 35622.2 ×10 = 356222.
If I denote by X
10+i
, the salary expenses of group i, then I may write
X
10+i
= c
i
×X
i
(i = 1, . . . , 10). (2)
so that the total salary expenses for the ﬁnal year can be determined by
evaluating the summation:
20
i=11
X
i
. (3)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 155
Table 3. Employees’ salary at the initial and ﬁnal year of the planning period.
Groups (i)
Number
of group
members
(X
i
) (X
i
) (X
i
)
Annual
salary at
year t
0
t
0
t
0
, for
all the
members of
group iii (in
thousand )
Annual
rate of
increase,
R
i
R
i
R
i
(in %)
Salary at
ﬁnal year t
f
t
f
t
f
,
for all the
members of
group iii
(in )
Average
per person
at each
group (c
i
c
i
c
i
)
1 10 260 6.5 356,222 35,622.2
2 1 24.7 6 33,054 33,054
3 3 54.6 4.5 68,040 22,680
4 14 163.8 4 199,290 14,235
5 3 58.5 4.9 74,307 24,769
6 1 27.3 5.8 36,190 36,190
7 3 32.76 3.8 39,476 13,158.666
8 4 41.6 3.6 49,647 12,411.75
9 4 42.64 3.5 50,643 12,660.75
10 10 101.4 3.4 119,851 11,985.10
Another point of interest refers to the average working hours of the
members of each group. The relative ﬁgures are analytically presented in
Table 4.
It can be easily veriﬁed that in Table 4, column 5 is obtained by multi
plying the elements of column 4 by 52.
Denoting by X
22+i
(i = 1, . . . , 10) the total average working hours per
year for all the group members, then I may write
X
22+i
= w
i
×X
i
(i = 1, . . . , 10). (4)
3. The Nature of the Problem
Given all the information presented in Tables 1–4, the resultant prob
lem is, how to formulate a feasible operating plan, which should com
bine the desired salary increases and the full utilization of working hours,
the expenses incurred, together with the desired gross proﬁt at the ﬁnal
year set by the decision maker, in the best possible way with the mini
mum sacriﬁce. This problem can be answered by the goal programming
method, originally developed by Charnes and Cooper (1961). It should
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
156 AnnaMaria Mouza
Table 4. Working hours of each group personnel.
Groups (i)
Number
of group
members
Hours per
week
(average),
for each
member of
group i (in
thousand )
Total hours
per week
for all the
members of
group iii
Total
hours per
year, for
all the
members
of each
group
Average
per person
at each
group (w
i
) (w
i
) (w
i
)
1 10 65 650 33,800 3380
2 1 45 45 2340 2340
3 3 65 195 10,140 3380
4 14 40 560 29,120 2080
5 3 40 120 6240 2080
6 1 45 45 2340 2340
7 3 40 120 6240 2080
8 4 40 160 8320 2080
9 4 30 120 6240 1560
10 10 30 300 15,600 1560
be recalled that goal programming is a mathematical programming tech
nique with the capability of handling multiple goals given certain priority
structure. Furthermore, a goal programming model may be composed of
nonhomogeneous units of measure, which is the case under consideration.
In brief, the main task here refers to the compensation of total expenses
with the decision maker’s requirement regarding gross proﬁts, without any
change in the operating pattern of the clinic, as far as the manpower level is
concerned.
3.1. Speciﬁcation of the Problem Decision Variables
• The ﬁrst 10 decision or choice variables X
i
(i = 1, . . . , 10) are
already deﬁned in Table 3 and refer to the number of personnel in
group i.
— Variables X
11
. . . X
20
, stand for the salary cost of each group.
— Variable X
21
denotes total salary cost at the ﬁnal year.
— Variables X
22
. . . X
31
refer to the total working hours, observed in
the 5th column of Table 4.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 157
• Variables X
32
. . . X
35
refer to the average cost per patient, as it is stated
in the ﬁrst four rows of Table 2.
— Variable X
36
denotes total per patient expenses.
— Variables X
37
and X
38
refer to the equipment purchase and
replacement.
• Variable X
39
stands for the personnel insurance expenses.
— Variable X
40
refers to the depreciation, laboratory, ﬁxed and other
expenses.
• Variable X
41
is used to sum up all the above expenses (i.e., X
37
+X
38
+
X
39
+X
40
)
• X
42
, which also is a deﬁnition variable, is used to denote total operating
cost for the ﬁnal year t
f
(i.e., t
5
) of the planning period.
• Finally, variable X
43
denotes the average charge per patient, so that the
desired gross proﬁt would not be less than 750,000.
3.2. Formulation of the Constraints (Goals)
To facilitate the presentation, I classiﬁed the goals into the following seven
categories.
3.2.1. Personnel Employed
The operation pattern of the clinic, regarding the personnel employed is
presented in Table 1. Given that, the decision maker does not want to alter
this pattern, I must ﬁrst introduce the following 10 constraints (goals)
X
i
+d
−
i
−d
+
i
= b
i
(i = 1, . . . , 10) (5)
where b
1
= 10, b
2
= 1, b
3
= 3, b
4
= 14, b
5
= 3, b
6
= 1, b
7
= 3, b
8
= 4,
b
9
= 4 and b
10
= 10, are the desired targets, regarding the members of
each group as shown in Tables 1, 3 and 4.
It is useful to recall that d
−
i
and d
+
i
denote the socalled deviational
variables from the goals, which are complementary to each other, so that
d
−
i
×d
+
i
= 0. If the ith goal is not completely achieved, then the observed
slack will be expressed by d
−
i
, which represents the negative deviation from
the goal. On the other hand, if the ith goal is over achieved, then d
+
i
will
take a nonzero value. Finally, if this particular goal is exactly achieved,
then both d
−
i
and d
+
i
will be zero.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
158 AnnaMaria Mouza
3.2.2. Personnel Claims for Salary Increase
According to Eq. (2), I have to introduce the following 10 constraints in
order to deﬁne the total salary expenses for each group.
X
10+i
−c
i
X
i
+d
−
10+i
−d
+
10+i
= 0, (i = 1, . . . , 10). (6)
It is recalled that coefﬁcients c
i
are presented in Table 3. An additional
variable X
21
is introduced to sum up salary expenses for all personnel at
the ﬁnal year. The computation of this total is based on Eq. (3).
X
21
−
20
i=11
X
i
+d
−
21
−d
+
21
= 0. (7)
3.2.3. Complete Utilization of Personnel Capacity in Terms
of Working Hours
The forecasted number of patients for the ﬁnal year is not exceeding the
clinic capabilities, regarding the existing infrastructure. Furthermore, the
decision maker does not want to alter the existing operation pattern, since
he considers that the present personnel manpower potential will be ade
quate to provide satisfactory service to the number of patients forecasted,
provided that the under utilization of the regular working hours will be
avoided. Hence, I introduced the following constraints in accordance to
Eq. (4), which on the other hand may be regarded as a measure to provide
job security to all personnel by avoiding underutilization of their regular
working hours as shown in Table 4.
X
21+i
−w
i
X
i
+d
−
21+i
−d
+
21+i
= 0, (i = 1, . . . , 10). (8)
Note, the coefﬁcients w
i
, are also presented in Table 4.
3.2.4. Patient expenses
The axialtomography expenses per patient can be expressed as:
X
32
+d
−
32
−d
+
32
= 9.1. (9a)
Xray expenses per patient:
X
33
+d
−
33
−d
+
33
= 5.7. (9b)
Medical expenses per patient:
X
34
+d
−
34
−d
+
34
= 59.4. (9c)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 159
Administrative and miscellaneous expenses per patient:
X
35
+d
−
35
−d
+
35
= 32.1. (9d)
Variable X
36
depicts total per patient expenses at the ﬁnal year.
X
36
−(X
32
+X
33
+X
34
+X
35
) +d
−
36
−d
+
36
= 0. (9e)
3.2.5. Other Expected Expenses
Installation of the IVR system.
X
37
+d
−
37
−d
+
37
= 600. (10a)
Computers and server replacement.
X
38
+d
−
38
−d
+
38
= 400. (10b)
The insurance funds.
FromTable 2, we see that the average expenses for personnel insurance
amounts to 19% of the total employees’ salary. This is represented by the
following constraint, taking into account the variable X
21
.
X
39
−0.19X
21
+d
−
39
−d
+
39
= 0 (10c)
Depreciation, laboratory, ﬁxed and other expenses
X
40
+d
−
40
−d
+
40
= (60,000 +180,000 +80,000). (10d)
To sum all other expenses, I introduce the following relation.
X
41
−(X
37
+X
38
+X
39
+X
40
) +d
−
41
−d
+
41
= 0. (10e)
3.2.6. Total Expenses
It is already mentioned that I introduce the variable X
43
in order to sum up
total expenses, in the following way.
X
42
−(X
21
+3625X
36
+X
41
) +d
−
42
−d
+
42
= 0. (11)
3.2.7. Total Gross Proﬁt
The gross proﬁt achievement as set by the decision maker, may be intro
duced in the following way.
3625X
43
−X
42
+d
−
43
−d
+
43
= 750,000 (12)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
160 AnnaMaria Mouza
where X
43
, as it was mentioned earlier, represents the adequate charge per
patient, in order to satisfy Eq. (12). It is recalled that the coefﬁcient 3625,
which appears in Eq. (11) too, is the stated forecast, regarding the expected
number of patients for the ﬁnal year t
f
.
With the above formulation, we see that the number of constraints is
equal to the number of choice variables (43).
3.3. Priority of the goals — The objective function
It is obvious that the decision maker, apart from determining the goals of
the ﬁveyear plan, also has to specify the priorities (Gass, 1986), denoted by
P
i
, in order to accomplish the optimum allocation of resources for achiev
ing these goals in the best possible manner. One further point, that has to
be considered in the formulation of the model, is the possibility of weigh
ing the deviational variables at the same priority level (Gass, 1987). The
list of decision maker’s goals is presented below in descending order of
importance.
1. To retain the present operating pattern of the clinic.
This implies that no alterations are desired regarding the number and
composition of the existing personnel (P
1
). Different weights have been
assigned to the various personnel groups.
2. To achieve the desired level of gross proﬁts. Also, to provide reserves for
the personnel insurance, to cover all expenses for equipment replace
ments and installation of a new IVR system (P
2
).
3. To avoid underutilization of the personnel regular working hours, pro
viding at the same time job security to all employees (P
3
).
4. To provide the funds necessary to cover all patient’s expenses (P
4
).
5. To provide adequate funds for covering laboratory and ﬁxed expenses,
depreciation etc. (P
5
).
6. To provide the wage increase, claimed by the personnel at the various
groups (P
6
).
In this case, weights are used to discriminate the salary increase of
each group. Heavier weights have been assigned to groups with lower
salaries.
According to the above ranking of the goals priority and taking into
account the relations for the corresponding totals, the following objective
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 161
function is formulated.
min z = (10P
1
d
−
1
+9P
1
d
−
2
+8P
1
d
−
3
+7P
1
d
−
4
+6P
1
d
−
5
+5P
1
d
−
6
+4P
1
d
−
7
+3P
1
d
−
8
+2P
1
d
−
9
+P
1
d
−
10
) +(P
2
d
−
37
+P
2
d
−
38
+P
2
d
−
39
+3P
2
d
−
42
+P
2
d
−
43
) +P
3
31
i=22
d
−
i
+P
4
36
i=32
d
−
i
+(P
5
d
−
40
+P
5
d
−
41
) +(P
6
d
−
21
+2P
6
d
−
11
+2P
6
d
−
12
+3P
6
d
−
13
+4P
6
d
−
14
+5P
6
d
−
15
+6P
6
d
−
16
+7P
6
d
−
17
+8P
6
d
−
18
+9P
6
d
−
19
+10P
6
d
−
20
). (13)
4. Initial Results
With this speciﬁcation of the goal programming model, I obtain the results
as given on the left side of Table 5 (solution I). It is easily veriﬁed that all
goals are achieved. Hence, this pattern dictated by the optimal solution is
readily applicable. It should be pointed out that according to this solution,
an average rate of total salary increase of about 4.9% is realized, whereas
the mean salary per worker increases from15,232.1 to 19,372.1. In fact, this
average rate r (i.e., 4.9) has been computed from Eq. (1) in the following
way, using natural logs.
ln(x)
5
= ln(1 +r), where x =
1,026,720
807,300
and r = (y −1) ×100, where y = e
Z
and Z =
ln(x)
5
However, the charge per patient (∼740 ) is considered being higher,
when compared to the average of the local market, affecting thus the
competition level of the clinic.
5. Imposition of a New Constraint
Given the social character of health service, which has to be combined with
the fact that the unit under consideration must be competitive, it is necessary
to impose the following constraint.
X
43
+d
−
44
−d
+
44
= 700. (14)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
162 AnnaMaria Mouza
Thus, the total number of constrains has become 44. It should be noted
that minimizing d
+
44
in the objective function, an upper limit of 700 is
set to the average charge per patient. Hence, I added in the objective as in
Eq. (13) the following term
2P
2
d
+
44
which implies that the restriction regarding the maximum average charge
per patient has been considered as a second priority goal, properly weighed.
5.1. SecondRun Results
With the new constraint, the solution obtained is presented on the right
side of Table 5 (solution II). I observe that constraint Eq. (14) is bind
ing, and in order to achieve the level of gross proﬁt set by the decision
maker, the solution dictates that the total salary expenses must be reduced
by 118,180.125, which is the value of the deviational variable d
−
21
, shown
in Eq. (7). Evaluating the results of the second run, one may argue as to
Table 5. Solution I (left) and solution II (right).
No. Variable name Value No. Variable name Value
The simplex solution (Number of patients = 3625)
87 ×1 10.000 89 ×1 10.000
88 ×2 1.000 90 ×2 1.000
89 ×3 3.000 91 ×3 3.000
90 ×4 14.000 92 ×4 14.000
91 ×5 3.000 93 ×5 3.000
92 ×6 1.000 94 ×6 1.000
93 ×7 3.000 95 ×7 3.000
94 ×8 4.000 96 ×8 4.000
95 ×9 4.000 97 ×9 4.000
96 ×10 10.000 98 ×10 10.000
97 ×11 356,222.000 99 ×11 356,222.000
98 ×12 33,054.000 100 ×12 33,054.000
99 ×13 68,040.000 101 ×13 68,040.000
100 ×14 199,290.000 102 ×14 199,290.000
101 ×15 74,307.000 103 ×15 74,307.000
102 ×16 36,190.000 104 ×16 36,190.000
(Continued )
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 163
Table 5. (Continued )
No. Variable name Value No. Variable name Value
The simplex solution (Number of patients = 3625)
103 ×17 39,476.000 105 ×17 39,476.000
104 ×18 49,647.000 106 ×18 49,647.000
105 ×19 50,643.000 107 ×19 50,643.000
106 ×20 119,851.000 108 ×20 119,851.000
107 ×21 1,026,720.000 21 d
−
21
118,180.125
108 ×22 33,300.000 110 ×22 33,800.000
109 ×23 2340.000 111 ×23 2340.000
110 ×24 10,140.000 112 ×24 10,140.000
111 ×25 29,120.000 113 ×25 29,120.000
112 ×26 6240.000 114 ×26 6240.000
113 ×27 2340.000 115 ×27 2340.000
114 ×28 6240.000 116 ×28 6240.000
115 ×29 8320.000 117 ×29 8320.000
116 ×30 6240.000 118 ×30 6240.000
117 ×31 15,600.000 119 ×31 15,600.000
118 ×32 9.100 120 ×32 9.100
119 ×33 5.700 121 ×33 5.700
120 ×34 59.400 122 ×34 59.400
121 ×35 32.100 123 ×35 32.100
122 ×36 106.300 124 ×36 106.300
123 ×37 600.000 125 ×37 600.000
124 ×38 400.000 126 ×38 400.000
125 ×39 195,076.797 127 ×39 172,622.578
126 ×40 320,000.000 128 ×40 320,000.000
127 ×41 516,076.812 129 ×41 493,622.562
128 ×42 1,928,134.375 130 ×42 1,787,500.000
129 ×43 738.796 131 ×43 700.000
109 ×21 908,539.875
whether this is an optimal solution, from the socioeconomic point of view,
since any reduction has to directly affect the employees salary solely. On
the other hand, the decision maker is not willing to entirely undertake this
reduction (118,180.125), decreasing gross proﬁts by almost 16%. With all
these in mind, I present the following proposition, which may be considered
of particular importance.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
164 AnnaMaria Mouza
6. Some Alternatives
After proper negotiations, the decision maker and the employees have to
agree on the proportion, say a (where 0 < a < 1), of the deﬁcit that will
be subtracted from the total employee’s salary. Let us assume that both
sides agree for a = 0.5. In this case, the total salary of all employees will
be reduced by 0.5 × 118,180.125 = 59,090.0625, and would affect their
earnings proportionally. A crucial point of particular importance, is that
special care should be taken for retaining a fair wage policy. This implies
that lower salary groups should undergo lesser decrease, compared to high
salary groups. In any case, the mean annual rate of salary increase for each
group after the reduction mentioned above should not be less than 3.1%
over the planning period, which is an estimate of the expected annual rate
of inﬂation increase. In fact, the latter requirement affects the value of “a”.
All these restrictions call for the construction of a composite index, in
order to distribute the salary restraint among different groups in the desired
and acceptable way. For this reason, the initial rates of salary increase
for each group, presented in Table 3, have been ranked properly as it is
shown in Table 6. Finally, after taking into account, the initial rates of salary
Table 6. The distribution of salary reduction among groups, using the
composite index.
Groups (iii)
Number
of group
members
(X
i
) (X
i
) (X
i
)
Annual
rate of
initial
salary
increase
r
i
r
i
r
i
(in %)
Ranking
for salary
decrease
A composite
index to
serve as the
rate of
initial salary
decrease
Corresponding
amount (in )
1 10 6.5 10 0.627689 37,090.172
2 1 6 9 0.048387 2859.187
3 3 4.5 6 0.049801 2942.746
4 14 4 5 0.108050 6384.697
5 3 4.9 7 0.069093 4082.711
6 1 5.8 8 0.045522 2689.870
7 3 3.8 4 0.016266 961.173
8 4 3.6 3 0.014535 858.898
9 4 3.5 2 0.009610 567.861
10 10 3.4 1 0.011047 652.748
Total 1 59,090.06
M
a
r
c
h
1
6
,
2
0
0
9
1
6
:
5
2
W
S
P
C
/
T
r
i
m
S
i
z
e
:
9
i
n
x
6
i
n
S
P
I

B
6
9
2
E
c
o
n
o
m
i
c
M
o
d
e
l
s
b
6
9
2

c
h
0
4
t
o
p
i
c
3
H
e
a
l
t
h
S
e
r
v
i
c
e
M
a
n
a
g
e
m
e
n
t
1
6
5
Table 7. Salary at the initial and ﬁnal years together with the annual rates of increase.
Groups (iii)
Salary at the
period t
0
t
0
t
0
( )
Initially
claimed
salary for the
ﬁnal year ( )
Amount of
reduction
Salary to be
realized at the
ﬁnal year ( )
Actual annual
rate of salary
increase (%)
New
coefﬁcients c
i
c
i
c
i
1 260,000 356,222 37,090.172 319,131.81 4.1836 31,913.181
2 24,700 33,054 2859.187 30,194.81 4.0991 30,194.81
3 54,600 68,040 2942.746 65,097.25 3.5795 21,699.083
4 163,800 199,290 6384.697 192,905.3 3.3252 13,778.95
5 58,500 74,307 4082.711 70,224.29 3.7209 23,408.096
6 27,300 36,190 2689.870 33,500.13 4.1782 33,500.13
7 32,760 39,476 961.173 38,514.83 3.2897 12,838.278
8 41,600 49,647 858.898 48,788.10 3.2391 12,197.025
9 42,640 50,643 567.861 50,075.14 3.2669 12,518.785
10 101,400 119,851 652.748 119,198.25 3.2872 11,919.825
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
166 AnnaMaria Mouza
increase for each group, the number of employees per group together with
the ranking scores, I computed the index presented in the 4th column of
Table 6. Multiplying the elements of this column by 59,090.0625, we get
the entries of the 5th column.
From the last column of Table 6, I computed the end period salary
of each group and the corresponding annual rate of increase, which are
presented in Table 7.
From Table 7, we can see that the requirement for the annual rate of
salaryincrease at eachgrouptoexceed3.1%, is fullysatisﬁed. Inanopposite
situation however, one has to reduce the value of “a” by, say, 0.05 and
to repeat all calculations. It should be also noted that with the allocation
proposed so far, total initial salary, spending will increase by an annual
rate of 3.69% (from 807,300 to 967,629.9) and the annual wage per capita
will increase by 19.86% (from 15,232.1 to 18,257.2). It is clear that all
employees will be better off with smaller value of a. The composite index
and all other entries of Tables 6 and 7 can be computed with the following
code segment written in Visual Fortran.
MODULE data_and_params
! Goups=number of groups ! Planning_hor=Planning horizon (years)
! R = Initial rate of salary increase at each group (vector)
! Members = Number of employees at each group (vector)
! Salary_in = Initial employees salary at each group (vector)
! Salary_f = Final claimed salary for each group (vector)
! Deficit = The value of deviational variable
! a = Value of coefficient a (0<a<1)
IMPLICIT NONE
INTEGER, PARAMETER :: groups= 10, planning_hor=5, out = 8
REAL, DIMENSION(groups) :: R, Salary_in, Salary_f
INTEGER, DIMENSION(groups) :: Members
REAL :: Total_Deficit, a
! To facilitate the presentation data are given directly here
DATA R / 6.5, 6.0, 4.5, 4.0, 4.9, 5.8, 3.8, 3.6, 3.5, 3.4/
DATA Members / 10; 1, 3, 14, 3, 1, 3, 4, 4, 10 /
DATA Salary_in / 260000., 24700., 54600., 163800., 58500., 27300. &
& ,32760., 41600., 42640., 101400./
DATA Salary_f / 356222., 33054., 68040., 199290., 74307., 36190. &
& ,39476., 49647., 50643., 119851./
DATA Total_Deficit, a /118180.125, 0.5/
END MODULE data_and_params
!
PROGRAM composite_index
USE Dflib
USE data_and_params
IMPLICIT NONE
INTEGER i, j, Employees
REAL DIMENSION(groups) :: Y, X, rate, Rsmall, sortR, Scores, newrate
REAL Logvector(groups),Salary(groups), Deficit, s
! Open output file
OPEN(out, FILE=’Composite_Index.out’, STATUS=’UNKNOWN’)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 167
WRITE(out,*)’ Composite index and the final rate of salary increase’
WRITE(out,’(/)’)
Deficit=Total_Deficit*a ; sortR=R ; Salary=Salary_f/members
! Perform ranking operation
CALL SORTQQ(LOC(sortR),groups,SRT$REAL4)
DO i=1,groups
s=R(i)
DO j=1,groups
IF(sortR(j) == s) scores(i)= j
ENDDO ; ENDDO
! Compute composite index
Rsmall=R/100. ; Y=Rsmall*Salary ; s=SUM(Y)
Employees=SUM(Members)
rate=(y*100)/s/100. ; X=rate*deficit/Employees
s=SUM(X)
rate=(x*100.)/s/100. ; s=SUM(X) ; X=rate*Members
s=SUM(X)
Rate=(X*100.)/s/100. ; s=SUM(Rate) ; X=rate*scores
s=SUM(X)
rate=(X*100)/s/100. ; s=SUM(rate) ; X=Deficit*rate
! Compute revised annual rates of salary increase
Y=Salary_fX
Logvector=(LOG(Y/Salary_in))/ planning_hor
newrate=(EXP(logvector) 1.)*100.
WRITE(out,*) ’ Index Reduction Final Salary New rate’
DO i=1,groups
WRITE(out,’(1X,F9.6,2X,F10.3,4X,F10.2,6X,F6.4)’) Rate(i)&
&, X(i), Y(i), newrate(i)
ENDDO
END
7. Final Results
In the last column of Table 7, the revised coefﬁcients c
i
are stated. These
new coefﬁcients are to be considered in Eq. (2), and in the corresponding
restrictions seeing in Eq. (6) for the ﬁnal run. Also, the objective in Eq. (13)
has been slightly reformed as follows:
min z = (10P
1
d
−
1
+9P
1
d
−
2
+8P
1
d
−
3
+7P
1
d
−
4
+6P
1
d
−
5
+5P
1
d
−
6
+4P
1
d
−
7
+3P
1
d
−
8
+2P
1
d
−
9
+P
1
d
−
10
) +(P
2
d
−
37
+P
2
d
−
38
+P
2
d
−
39
+3P
2
d
−
42
+2P
2
d
+
44
) +P
3
31
i=22
d
−
i
+P
4
36
i=32
d
−
i
+(P
5
d
−
40
+P
5
d
−
41
) +(P
6
d
−
21
+2P
6
d
−
11
+2P
6
d
−
12
+3P
6
d
−
13
+4P
6
d
−
14
+5P
6
d
−
15
+6P
6
d
−
16
+7P
6
d
−
17
+8P
6
d
−
18
+9P
6
d
−
19
+10P
6
d
−
20
) +P
7
d
−
43
.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
168 AnnaMaria Mouza
The solution obtained with the restated restrictions satisﬁes all require
ments. Now, total expenses sum up to 1,857,817.125. It has to be under
lined, however, that since the value of the deviational variable d
−
43
is
70,317.2, it is implied that total gross proﬁts undergo a reduction of about
9.38% (from 750,000 to 679,682.8), which is a bit higher when compared
to the corresponding percentage (∼6%) of total salary reduction. Neverthe
less, the results provide a sound basis to achieve an optimal solution, in the
sense that it is acceptable by all people involved without any violations of
the existing rules.
8. Conclusions
Many clinics like the one presented here beneﬁt the patients, the health
service in a convenient way, and help attain consistency and continuity
of treatment. These achievements are heavily dependent on a successful
budget planning. The characteristics of a model of this type for a clinic
are based upon many factors, such as the type of the clinic, the location,
the size, the medical specialty, etc. Hence, it is difﬁcult to design a general
model that can be applied to all types of clinics. However, once a budget
planning model has been developed, it can be easily modiﬁed at a later
stage to ﬁt many other types of clinics. With this in mind, I consider in this
paper an orthopedic clinic and implemented a model designed for a ﬁve
year planning period, using the goal programming approach of aggregate
budget planning for this particular health care unit. To start with, the goals
and the relevant priorities are set in advance, to formulate the corresponding
goal programming model. The solution obtained is operational in the sense
that, through the analysis of resource requirements, the acceptable charge
per patient, the desired level of gross proﬁts, the claimed rate of salary
increases by the employees, and the tradeoffs among the set of goals are
achieved with a minimum sacriﬁce. The model shows that this sacriﬁce is
negotiable between different sides, so that the business manager can ﬁnally
establish an aggregative budget planning with the best perspectives, since
the realization of a fair wage policy can be obtained through using the
composite index, introduced in this paper.
References
Charnes, A and W Cooper (1961). Management Models and Industrial Applications of
Linear Programming (Vols. 1 and 2). NewYork: Wiley.
Everett, JE (2002). A decision support simulation model for the management of an elective
surgery waiting system. Health Care Management Science 5, 89–95.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
Health Service Management 169
Flessa, S (2000). Where efﬁciency saves lives: a linear programme for the optimal allocation
of health care resources in developing countries. Health Care Management Science 3,
249–267.
Gass, SI (1986). A process for determining priorities and weights for large scale linear goal
programming. Journal of Operational Research Society 37, 779–785.
Gass, SI (1987). The setting of weights in linear goal programming. Computers and Oper
ations Research 14, 227–229.
Mouza and AnnaMaria (1996). An economic approach of the Greek health sector, based
on domestic data. Modeling the hospital operation process. Ph.D. Theses, Greece:
Aristotle University of Thessaloniki.
Mouza and AnnaMaria (2002). Estimation of the total number of hospital admissions and
bed requirements for 2011: the case for Greece. Health Service Management Research
15, 186–192.
Mouza and AnnaMaria (2003). IVR and administrative operations in healthcare and hos
pitals. Journal of Healthcare Information Management 17(1), 68–71.
Mouza and AnnaMaria (2006). A note regarding the projections of some major health
indicators. Quality Management in Health Care 15(3), 184–199.
Ogulata, SN and R Erol (2003). A hierarchical multiple criteria mathematical program
ming approach for scheduling surgery operations in large hospitals. Journal of Medical
Systems 27(3), 259–270.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch04topic3
This page intentionally left blank This page intentionally left blank
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Chapter 5
Modeling National Economies
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
This page intentionally left blank This page intentionally left blank
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Topic 1
Inﬂation Control in Central and Eastern
European Countries
Fabrizio Iacone and Renzo Orsi
University of Bologna, Italy
1. Introduction
The recent accession to the European Union (EU) of 10 new members
ofﬁcially opened the agenda of their participation in the European Monetary
Union (EMU) too.
Although, the accession of the new European partners to the euroarea
is formally analyzed, keeping the Maastricht criteria as the main reference,
these do not necessarily reﬂect the effective integration and convergence of
the economies. With this respect, we think it is important to study if and how
the Eurosystem can successfully control inﬂation after the extension of the
EMU: we then investigated the monetary policy transmission mechanismin
the Central andEasternEuropeanCountries (CEECs) tosee howcompatible
it is with one of the current members of euroarea.
Knowledge of the mechanism of transmission of monetary policy is
also necessary to the CEECs to choose the monetary strategy for the period
preceding the accession to the euroarea. The discussion is open, both in
the literature and among monetary institutions, on whether a direct inﬂation
targeting or some kind of exchange rate commitment should be preferred:
by comparing the mechanismof transmission of policy impulses in different
regimes, we can see if a certain strategy made inﬂation control easier or
more difﬁcult, and assess its cost by looking at the impact on the economic
activity.
We focused on Poland, Czech Republic, and Slovenia because they
are very different for the initial conditions from which they started the
transition to functioning market economies and later the integration in the
EU, for the relative size and the degree of openness to the international
trade, for the policies implementedindue course andevenfor their historical
173
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
174 Fabrizio Iacone and Renzo Orsi
developments: we think that because of these differences they are, together,
representative of the whole group of CEECs. For the euroarea, we took
Germany as the reference country.
2. Inﬂation Control over 1989–2004
2.1. Transition, Integration, and Accession
Although Poland, Czech Republic, and Slovenia shared the goals of imple
menting a functioning market economy and of acceding to the EU, the
policies they implemented to reach them were rather different.
The ﬁrst reason for the difference is in the productive structure at the
beginning of the transition. Yugoslavia established a quasimarket system
well before 1989, with quite independent ﬁrms and relatively hard budget
constraints; production was much more centralized and budget constraints
much softer in Poland and Czech Republic, and, to make things tougher for
these two countries, they also had to suffer the breakup of the council for
mutual economic assistance (CMEA), in which they were well integrated
(Czech Republic also had to endure the additional trade disruption, caused
by the separation from Slovakia). Poland and Czech Republic represent
an average position with respect to the general condition of the productive
structure of the CEECs at the beginning of the transition. Local differences
remain: for example, Hungary implemented some timid reforms before
1989, while the three Baltic countries experienced a much bigger shock,
because the disintegration of the Soviet Union resulted in the loss of their
largest trade partner. The stronger centralization, the softer budget con
straint, and the higher disruption caused by the break. up of the CMEA,
and of Czechoslovakia could have coeteris paribus caused a more turbulent
and longer transition for Poland and Czech Republic, but in reality Slovenia
had to face the major shock due to the collapse of Yugoslavia and of the
subsequent wars, which deprived the country of its biggest market and had
other adverse effects including the loss of revenues and hard currency due
to the fall of tourism.
A more precise ranking emerges when the size and the openness to
international trade is considered, with Poland being the largest country and
Slovenia the smallest one. As far as the monetary policies implemented for
inﬂation stabilization, all the CEECs except Slovenia, began their transi
tion with a strong commitment to a ﬁxed exchange rate. The initial price
liberalization, in fact, left the system without a nominal anchor, and, also
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 175
because of a natural price rigidity with respect to negative corrections, the
realignment of relative prices took the formof a sudden increase and a steep
inﬂation followed: by opening to the international trade (and by introducing
the proper legislation, in order to force the local producers to face a harder
budget constraint), the countries in transition had a chance to import a price
structure similar to the one of their commercial partners. In fact, being the
exchange rate ﬁxed, the domestic producers could not raise their prices too
much without losing competitiveness with respect to the foreign ones.
Slovenia followed a different approach to disinﬂation, targeting the
growth of a relatively narrowmonetary aggregate (M1); according to Capri
olo and Lavrac (2003), anyway the key characteristic of the period was the
continuous intervention on the foreign exchange market in order to prevent
an excessive real rate appreciation (contrary to the strategy of the Bundes
bank and of the Eurosystem, the control of M1 was also enforced with non
market instruments and procedures). Since mainstream optimal currency
area literature prescribes a greater incentive to adopt a stable exchange rate
to the countries more open to the international trade, the fact that the out
come is actually reversed means that price stabilization rather than trade
was the priority in the exchange rate commitment served in Poland and in
Czechoslovakia.
Fixing the exchange rate may have contributed to price stabilization,
and inﬂation actually dropped very quickly, but there are certain differences
remained with respect to the EU, leading to a strong real exchange rate
appreciation. This may have been partially due to the BalassaSamuelson
effect: assuming that the marginal productivity and the wage in the sector
of internationally traded goods, is set on the foreign market and that the
same wage is transferred to the production of the nontraded goods, then
the productivity gap between the two sectors is compensated by a price
increase in the less productive domestic sector. This yields a progressive
appreciation of the real exchange rate and higher domestic inﬂation.
The faster growth of productivity, determined by the transfer of
technology fromthe international trade partners and the more efﬁcient allo
cation of the resources within the economies, compensated part of the pres
sure on the domestic producers, but the ﬁxed exchange rate arrangements
were not sustainable for a long time: Poland switched to a regime of pre
announcedcrawlingpegas earlyas 1991, later couplingit withanoscillation
band which was gradually widened until it was ﬁnally abandoned in 2000;
Czech Republic resorted to a free ﬂoat without explicit commitment as
early as 1997 under the pressure of a speculative attack, its currency having
progressively overevaluated in real terms over time. Both the countries
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
176 Fabrizio Iacone and Renzo Orsi
replaced the monetary anchor by introducing a direct inﬂation target (Czech
Republic starting in 1998, Poland in 1999). The progressive weakening of
the exchange rate commitment took place in Hungary and Slovakia too, but
in these last two countries, the switch of targets was not complete: Hungary
retained a ﬁxed exchange rate commitment, albeit with a very wide band,
while Slovakia did not implement an explicit inﬂation targeting because of
the high role that administrated prices still had in the consumer prices. Any
way this is not an unanimous attitude: the three Baltic States kept a ﬁxed
exchange rate with an extremely tight band, apparently trying to exploit the
credibility acquired with it during the ﬁrst phase of the transition, and Esto
nia even withstood a moderate speculative attack rather than devaluating.
Once again, Slovenia went in the opposite direction: according to Capriolo
and Lavraac, a more aggressive exchange rate strategy has been pursued
since 2001, with an active, albeit informal, crawling peg.
2.2. Exchange Rate and Direct Inﬂation Targeting
The ﬁxed exchange rate played a key role in the stabilization phase; nowa
days, it is still the reference of monetary policy in the Baltic States, and it is
also present in Hungary. Since, the exchange rate commitment is also part of
the Maastricht criteria, where two years of participation to the exchange rate
mechanism (ERM) 2 is required, the decision of Poland, Czech Republic,
and Slovakia, to completely abandon it may indeed seem to be a paradox.
Surely, a small country cannot ignore the effect of the exchange rate
ﬂuctuations on the stability of ﬁnancial institutions and the economy as
a whole, but a commitment to a ﬁxed parity is a much stronger pol
icy. Whether, macroeconomic stability and inﬂation control are helped or
hindered by this depends on the credibility of the commitment itself.
There can be little doubt that an exchange rate commitment is a risky
policy: the integration of the ﬁnancial markets allows the mobilization of
large amounts of funds, and no central bank can muster enough foreign
exchange reserves to resist a sustained attack, nor indeed may desire to do
it, because the mere defence of the currency could require a substantial rise
in the domestic interest rates, for which the economy may be even more
detrimental than the devaluation that it was trying to prevent. Against an
exchange rate commitment, stands the evidence of speculative attacks in
the past: consider, for example, the break up of the ERM 1, when even
the French currency suffered some pressure, although the ﬁscal, ﬁnancial,
and macroeconomic situation of the country was not worse than it was
in Germany. The same lesson can indeed be derived by looking at the
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 177
Hungarian experience: when the two objectives were in conﬂict, as in June
2003, one had to be abandoned, and a devaluation took place.
Since the accession to the EU also required the complete adoption of
the acquis communautaire, thereby including the elimination of capital
controls, the apparent paradox of several CEECs switching to a free ﬂoat
and inﬂation targeting, for the period preceding the adoption of the euro,
is then a strategy that may protect their currencies from unrealistic parities
and speculative attacks.
On the other hand, a direct inﬂation targeting can only be successful if
the monetary authority has earned itself a solid reputation, and this primarily
requires the independence fromany political interference. Since the CEECs
only acquired a formal central bank independence in the last fewyears, as a
part of the process of adoption of the acquis communautaire, their credibi
lity may still be weak. Central bank independence is even more a concern,
when the deﬁnition is extended beyond the mere legal requirement: in most
of the cases for example, the credibility was seriously weakened because
the ﬁscal authority had the opportunity to partially ﬁnance its expenditures
through the central bank; political pressures, as experienced in the Czech
Republic and in Poland, may undermine the credibility of the central bank
even when they are resisted, because they may erode the consensus in
the country. Several indices of central bank independence, were proposed
in the literature: according to Maliszewski (2000), and to the extensive
survey in Dvorsky (2000), the CEECs were lagging behind with respect
to their Western counterparts, Poland and the Czech Republic faring better
than Slovenia. Indeed, it is exactly because the credibility of the monetary
authorities is lower in the CEECs that Amato and Gerlach (2002) proposed
to supplement the inﬂation targeting with a mild monetary commitment,
this would increase the information in the market.
It is then important to ascertain if the exchange rate commitment is a
risk worth taking, not only for the countries on the way to EMUthat are still
more than two years away from the formal discussion of their convergence
according to the Maastricht criteria, but also for other emerging economies
in the world.
2.3. A Summary of the Results of Previous Analyses
Several empirical analyses on inﬂation control in CEECs appeared in the
recent years. Iacone and Orsi (2004) surveyed many applied works: these
varied for the country that was analyzed, for the period considered and for
the methodology adopted, making comparison rather difﬁcult.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
178 Fabrizio Iacone and Renzo Orsi
Interpretation of the results was often dubious because either the vari
ables of interest were replaced by ﬁrst or second differences, or the inter
mediate steps linking the monetary instrument to the target were omitted,
thus obscuring any potential evidence about the channel through which
inﬂation may be controlled. Reliability was also hampered by the fact that
in many cases no stability analysis was provided, despite the potentially
disruptive effects of the transition on the estimates, nor any attempt was
made to model the slow formation of a market economy despite this being
the main feature of the period.
It seems nevertheless fair to conclude that the exchange rate is very
important for the stabilization of inﬂation, an appreciation of the real
exchange rate reducing the growth rate of prices. The empirical evidence
supporting the existence of a standard interest rate channel was much more
controversial, the results depending largely on the econometric approach
and model adopted by the researcher and on the sampling period considered.
2.4. A Structural Model for Monetary Policy
It is possible that the weak support provided by these analyses is at least
partially due to the methodology adopted: VAR models have the advantage
of requiring a minimal structural speciﬁcation, but, in return, they need
the estimation of many parameters. Considering the short length of the
sampling size and the potential extreme instability of the parameters due to
the transition, large standard errors and nonsigniﬁcant estimates seem to
be unavoidable consequences.
A structural model can provide a more parsimonious approach: we
followed Rudebush and Svensson (1999) and considered the model
π
t
= α
0
+α
π1
π
t−1
+α
π2
π
t−2
+α
π3
π
t−3
+α
π4
π
t−4
+α
y
y
t−1
+ε
π,t
PC
y
t
= β
0
+β
y
y
t−1
−β AD
where y
t
is a measure of the economic activity, π
t
is the annualized inﬂation
within the quarter, it is a shortterm interest rate, ¯ π =
1
4
3
j=0
π
t−j
is the
average inﬂation rate in the last four quarters (i.e., the inﬂation over the
last year), and
¯
i =
1
4
3
j=0
i
t−j
is the average interest rate over the same
period.
The equations are referred to as Phillips Curve (PC) and aggregate
demand (AD), respectively because the ﬁrst one can be given a structural
interpretation as a PC while the second one may describe the transmission
of monetary policy on the economic activity quite like in an AD function.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 179
With this speciﬁcation, agents extrapolate from the past inﬂation to for
mulate the current period price level. The mechanism of transmission of
monetary policy to inﬂation is indirect: an expansionary monetary policy
takes place as a decrease in the real rate
¯
i
t−1
− ¯ π
t−1
, which boosts the
economic activity in the AD equation; in the second moment, the pres
sure on the demand enhances inﬂation, as illustrated in the PC equation.
The instrument and the mechanism of transmission of monetary policy are
then consistent with the operating procedures of both the FED and of the
Bundesbank, and later of the ECB.
Rudebush and Svensson (1999) allowed for an additional termβ
y2
y
t−2
in the AD equation, but we found that, possibly also due to the smaller
dimension of our samples and the high collinearity with β
y2
y
t−2
, the con
tribution of the two different effects in general could not be distinguished.
We, then considered the proposed AD equation sufﬁcient, also because
one lag proved to be enough to have no serial correlation in the residuals.
Rudebush and Svensson also speciﬁed the model without intercepts α
0
, β
0
,
having formulated for meancorrected data, but we preferred to allow for
the constant and test, for its possible exclusion explicitly.
Rudebush and Svensson (2002) remarked that
4
j=1
α
πj
= 1 can be
expected as it corresponds to a vertical PC in the long run. They also
explicitly discussed the fact that the backward looking formulation of
the PC and AD equations imply adaptive rather than rationale expecta
tions, but they argued that this may well be the case especially in a phase
of monetary and inﬂationary instability; as an alternative, they suggested
that the given speciﬁcation may simply be considered as reduced form
equations.
Notice that even if the constraint
4
j=1
α
πj
= 1 is applied, the model
still does not necessarily impose a unit root to inﬂation, because the dyna
mics of y
t
has to be taken into account too: consider for example α
π1
= 1,
α
π2
= α
π3
= α
π4
= 0, β
y
= 0, and the monetary policy rule
¯
i
t
= ¯ π
t
+π
t
;
the PCequation then becomes β
r
∈ y, t+ ∈ π, t, thus, inducing a stationary
behavior. It is indeed, precisely by steering the output gap using the interest
rate that the stabilization of inﬂation is ultimately possible.
This model may well be applied to the transition economies too, espe
cially considering that the accession of these countries to the EU requires
them to adopt a monetary policy setting that is institutionally compatible
with the one of the ECB. In the case of a small open economy, though,
two other factors should be taken into account: the demand of domestic
products generated by the trade partners and the effect of the international
competition on local prices.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
180 Fabrizio Iacone and Renzo Orsi
Following Svensson (2000), we then augmented the model by introduc
ing the level of the economic activity in the international partners wt and the
percent real exchange rate depreciation (q
t
−q
t−1
), where q
t
= p
∗
t
+e
t
−p
t
is the logarithm of the real exchange rate and p
t
, p
∗
t
and e
t
are the loga
rithm of the local and foreign level of prices and of the nominal exchange
rate, respectively. Both the variables are added to the AD equation, and
the real exchange rate is added to the PC equation as well. For the AD,
the assumption is that phases of high economic activity abroad result in
higher demand of locally produced goods, while a too high level of prices
(in real terms) causes a shift of the domestic and foreign demand towards
the external producers. The real exchange rate entered the PC equation,
because the international competition imposes some price discipline to the
local producers: for given foreign prices, a nominal exchange rate appre
ciation makes foreign goods cheaper in local currency, thus, forcing the
domestic producers to lower their prices to keep up with the external com
petitors, while for given nominal exchange rate higher foreign prices give
the domestic producers to set higher prices and stay in the market; we also
refer to Svensson (2000), where he explains how an increase in foreign
prices in local currency, either due to the move of the foreign price itself or
to the exchange rate, results in higher cost of inputs and then feeds back in
higher prices of output.
Maintaining the autoregressive structure of Rudebush and Svensson,
we describe the economy with the two equations model
π
t
= α
0
+α
π1
π
t−1
+α
π2
π
t−2
+α
π3
π
t−3
+α
π4
π
t−4
+α
y
y
t−1
−α
q
(q
t−1
−q
t−5
) +ε
π,t
PC
y
t
= β
0
+β
y
y
t−1
−β
w
w
t−1
−β
q
(q
t−1
−q
t−5
) +ε
y,t
AD
where we actuallyconsideredrelevant for the exchange rate the depreciation
over the whole year.
In the original model, Svensson considered the real exchange rate rather
than the percent depreciation: in our model speciﬁcation, we preferred the
latter because as we saw the real exchange rates of most of the CEECs
appreciated since the beginning of the transition without resulting in a
dramatic decline of the economic activity.
Moreover, we kept the AD/PC structure as a general reference, but
given that the speciﬁcation is somewhat arbitrary, we tried to adapt it to
the economies considered. Our sample starts from 1990, and to avoid the
risk of model instability in the very ﬁrst part of the sample, the data were
used only to compute the output gap or as lags. In some cases, we did
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 181
not ﬁnd adequate data so the sample period was even shorter. Data are
monthly, but in order to reduce the volatility, we used quarterly averages,
adjusting the frequency properly. We took the CPI to compute inﬂation and
the real exchange rate, and the seasonally adjusted industrial production
to derive a measure of economic activity (when only the nonseasonally
adjusted series was available, we run a preliminary step using the X12
ﬁlter in EViews to remove seasonality); for the nominal interest rate we
used a threemonths interbank rate or the Treasury bills rate of the same
maturity. More details on the data used are at the end of this paper. We
always considered Germany as the international reference, even if in the
ﬁrst part of the sample period some countries pegged the exchange rate to
the US$: the EU in fact, constitutes by far the biggest trade partner for the
CEECs.
There is a strong seasonal component in the quarterly inﬂation, which
is likely to shift the weights towards the fourth lag in the PC equation;
the choice of the year appreciation of the real exchange rate was then also
convenient to remove the seasonal effect.
As a measure of the economic activity we took the output gap, deﬁned
the difference between the observed and the potential output computed
using the Hodrik Prescott (HP) ﬁlter on the quarterly average of the pro
duction index. Even if the HP gap is usually considered an inexact measure
of the economic cycle, due to its purely numerical deﬁnition, we still think
that it is at least informative of the effective output gap. Plots of the inﬂa
tion in the four countries are in Fig. 1 (ﬁgures are all collected at the end of
Figure 1. Yearly inﬂation, Germany, Poland, Czech Republic, and Slovenia.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
182 Fabrizio Iacone and Renzo Orsi
the work). The inﬂation used for this graph was deﬁned as the growth rate
of prices on the previous year: we presented it albeit, we used a different
measure in our empirical study because it is very common in the descriptive
analysis in the literature. Clearly, though, averaging the inﬂation in this way
induces spurious autocorrelation in the data, so in the empirical analysis we
measure inﬂation as the annualized growth rate of prices over each period.
We plotted this and the output gap for Germany in Fig. 2; in Figs. 3–5,
we plot the same data and the yearly real exchange rate depreciation for
Poland, Czech Republic, and Slovenia, respectively. From these ﬁgures, it
Figure 2. Quarterly inﬂation and output gap, Germany.
Figure 3. Quarterly inﬂation, output gap, real exchange rate depreciation, Poland.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 183
Figure 4. Quarterly inﬂation, output gap, real exchange rate depreciation, Czech
Republic.
Figure 5. Quarterly inﬂation, output gap, real exchange rate depreciation,
Slovenia.
is already possible to notice that, while for Germany, the dynamics of inﬂa
tion is dictated by the output gap, in the three new EU members inﬂation
is largely dictated by the real exchange rate dynamics.
Estimation was carried out using OLSequation by equation, as in Rude
bush and Svensson (1999). We did not introduce any other modiﬁcation to
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
184 Fabrizio Iacone and Renzo Orsi
the AD equation, but we rewrote the PC as
π
t
= α
0
+ρ
0
π
t−1
+ρ
1
π
t−1
+ρ
2
π
t−2
+ρ
3
π
t−3
+ρ
y
y
t−1
+α
q
(q
t−1
−q
t−5
) +ε
π,t
PC
(the term q
t−1
− q
t−5
does not appear in the equation for Germany). This
is a reparametrization of the original equation, with
ρ
0
= (α
π1
+α
π2
+α
π3
+α
π4
− 1)
ρ
1
= −(α
π2
+α
π3
+α
π4
), ρ
2
= −(α
π3
+α
π4
), ρ
3
= −α
π4
Being simply a reparametrization, the residual sum of squares that
has to be minimized is the same, so the OLS estimates are not different
than the estimates that one would obtain in the original model in levels.
The advantage of this speciﬁcation is that long and short dynamics are
explicitly separated,
4
j=1
α
πj
= 1 corresponding to p
0
= 0, and that,
more importantly, the multicollinearity due to the terms π
t−1
to π
t−4
is
much less severe.
Correct speciﬁcation was primarily checked by looking at the tests for
normality (Jarque and Bera), autocorrelation (up to the fourth lag: Breusch
.Godfrey LM) and heteroskedasticity (white without cross terms) in the
residuals: we referred to these tests using N for the normality, AC for the
autocorrelation, and H for the heteroskedasticity. If the model appeared to
be correctly speciﬁed, we then analyzed the stability of the parameters over
time. If we did not reject the hypothesis of stability either, we moved to
test speciﬁc restrictions on the parameters, and to estimate a more efﬁcient
version of the model. In all the tests, we used the conventional 5% size. In
the tables in the Appendix, we presented both the Pvalues associated to
the tests and a summary of the equation estimated when the restriction was
imposed.
We considered the normality test ﬁrst, because we used its result to
choose between the small or large sample version of the other tests. Since
the small and large sample statistics are asymptotically equivalent in large
samples, the asymptotic interpretation of the results is not affected. Admit
tedly, due to the asymptotic nature of the Jarque and Bera test, and to the
small sample lower order bias in the estimation of autoregressive coefﬁ
cients, the reader may be cautious in the interpretation of the results, espe
cially, when only portions of the dataset are used to estimate parameters
(like in the Chow stability test or in the estimates on subsamples): notice
anyway that this is a problem that depends on the dimension of the sample;
so, indeed the same caveat applies to all the empirical analyzes present in
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 185
the literature. The reader may then want to consider the estimates and the
statistics computed on the estimated parameters more as general indications
in small samples (especially, if the subsample 1998–2004 only is used),
but we think that despite these drawbacks, the estimates and the tests do
still provide some information on the structure that we intend to analyze,
albeit an approximate one.
Autocorrelation in the residuals is a serious misspeciﬁcation since it
results in inconsistent estimates given that both the equations include a
lagged dependent variable. Heteroskedasticity is a minor problem, because
estimation is simply inefﬁcient (compared to the GLS estimator) but not
inconsistent. Yet, since the standard errors indicated by the regression are
not correct, evidence of heteroskedasticity requires at least a different esti
mation of the variancecovariance matrix of the estimates.
Due to the peculiarity of the transition, we think it is particularly impor
tant to control the parameter stability over time, and possibly to model it
when apparent. We used as preliminary diagnostic the CUSUM squared
test, but also considered as a particular breakpoint in 1998, because of the
opening of the negotiation to access the EU. Clearly, this is just a broad
indication: one could also argue that by that time the negotiation opened
the transition to a market economy was already nearly complete, and the
issue was convergence to the EU standards. Yet, that period may still be
informative about a potential break even if it was not located exactly there.
We also considered countryspeciﬁc breakpoints in the PC equations to
analyze the effect on the inﬂation dynamics of monetary regime changes:
for Germany, we allowed a break in 1999Q1, when the control of monetary
policy was transferred from the Bundesbank to the ECB, because it may
have been perceived as a weakening of the antiinﬂationary stance, induc
ing higher inﬂationary expectations; for Poland, we considered a break in
2000Q2, when the exchange rate commitment was formally abandoned;
for Slovenia, the change is in 2001Q1, when the approach towards the real
exchange rate became effective. We tested the presence of the break with a
standard Chow test, unless the sample after the break was very little, in this
case we used the Forecast test, which is exactly designed to test for breaks
when only a little data are located after the break.
If the model was correctly speciﬁed and stable, we improved the efﬁ
ciency of the estimates by removing the variables whose coefﬁcients were
estimated with a sign opposite to the one prescribed in the economic theory
(we assumed that α
y
, α
q
, β
y
, β
w
, β
q
, β
r
are not negative), and by imposing
other restrictions (including
4
j=1
α
πj
= 1): we tested these with a Wald
statistic, and referred to these tests as W in the tables.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
186 Fabrizio Iacone and Renzo Orsi
Of course, given the small dimension of the samples and the potential
instability of the models due to the transition, a certain caution should still
be exerted in the interpretation of the results. But, we think that even in
this case our analysis is at least able to get a general, broad picture of the
situation: the lack of precision that may be attributed to the small dimension
of the sample, for example, is already taken into account in the standard
error. If, under normality, a test statistic rejects the null hypothesis despite
the large variability due to the small sample, then we would still regard the
estimate as informative, although may be only on the sign.
3. The Empirical Evidence
3.1. Germany
As a benchmark case, we ﬁrst estimated the model for Germany over the
period 1991Q1–2004Q1. The choice of the sample is twofold: ﬁrst, only
data after the German reuniﬁcation were considered, leaving many of the
problems related to that particular transition out of the discussion; second,
this time span is roughly the same one covered by the data of the CEECs in
most of the applied analyzes, making the results comparable in this sense.
We called this a “benchmark” model because it is widely assumed that this
transmission mechanism for monetary policy was at work in Germany in
the years we are interested in. Furthermore, Germany is geographically
and economically close to the CEECs, accounting for a large share of their
international trade. Finally, it is an interesting reference because Germany
is the biggest country in the euroarea: if the monetary transmission in the
CEECs works in a very different way, their integration in the euro may
adversely affect the dynamics of inﬂation.
The estimated model for Germany was
ˆ y
t
= 0.011
(0.005)
+0.769y
t−1
(0.084)
−0.0038y
t−1
(0.00163)
π
t
= −1.02π
t−1
(0.10)
−0.86π
t−2
(0.12)
−0.71π
t−3
(0.10)
+17.62y
t−2
(12.18)
over the period 1991Q2–2004Q1 for the AD, and 1993Q1–2004Q1 for
the PC.
According to the diagnostic tests, the AD equation was correctly spec
iﬁed and stable over the whole sample; the CUSUMsq hit the boundary in
the period 1993–1996, but we found no evidence of break with the Chow
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 187
test (further details about all the tests and a summary of the estimates are
in the tables at the end).
The inﬂation dynamics, on the other hand, was not stable when the
whole sample was taken into account. The instability, though was clearly
restricted to the ﬁrst part of the sample, and could indeed be removed drop
ping the years 1991 and 1992. It is interesting to observe that it is exactly
the period of the recovery of the inﬂationary shock due to the German
reuniﬁcation, a phenomenon that was usually regarded as temporary and
exceptional. Diagnostic tests conﬁrmed that the equation estimated using
data from 1993 onwards was correctly speciﬁed and stable; notice anyway,
the distribution of the residuals did not appear to be normal, so the interpre
tation of the other tests can only be justiﬁed asymptotically. Point estimates
of ρ
1
, ρ
2
, and ρ
3
were all very close to −1, suggesting a very high value of
α
4π
, or a strong seasonal component in inﬂation; we also found that lagged
values of yt −1 had a more signiﬁcant effect, albeit even in this case the P
value of the test H
0
: {αy = 0} vs. H
0
: {α
y
> 0} was between 5% and 10%.
Considering the model as a whole, anyway, we ﬁnd these estimates
satisfactory, although not as much as those presented by Rudebush and
Svensson for the United States: we conjecture that the lower precision of
the estimates for Germany depended on the smaller number of observations
available.
3.2. Poland
Opposite to Germany, we consider Poland (and the other CEECs later)
as a country small enough to be affected by the international trade, so
we augmented the model with the real exchange rate depreciation with
respect to Germany and with the German output gap. Albeit our dataset
included 1993 too, we found that both the equations suffered from residual
autocorrelation, resulting in inconsistent estimates. We interpreted this as
evidence of instability, because when 1994 was taken as a starting point,
the residual autocorrelation was largely removed and the other diagnostic
tests were broadly compatible with a stable, correctly speciﬁed model. The
estimated model was
ˆ y
t
= 0.020
(0.008)
+0.467y
t−1
(0.142)
−0.0027y
t−1
(0.001)
π
t
= −0.64π
t−1
(0.12)
−0.67π
t−2
(0.101)
−0.51π
t−3
(0.10)
+21.46(q
t−1
−q
t−5
)
(7.88)
over the period 1994Q1–2004Q1 for the AD, and 1994Q1–2004Q1 for
the PC.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
188 Fabrizio Iacone and Renzo Orsi
The variables referred to the trade partners were then not signiﬁcant in
theADequation: we found a certain effect at least for the real exchange rate
depreciation, and with a sign consistent with the macroeconomic theory,
but the realization of the t statistic was still rather below the critical value.
The absence of the two variables referring to the international trade could
be interpreted as a sign that Poland is still large enough to be more sensible
to the internal rather than to the international developments, but notice that
the situation was reversed for the PC equation, where only the external
conditions had a signiﬁcant effect on the inﬂationary dynamics (again we
found that the effect of the omitted explanatory variable, y
t−1
in this case,
had the sign predicted by the macroeconomic theory, but it failed to be
signiﬁcantly different than 0).
We concluded performing a Forecast test for a break in 2000Q2 in
the PC equation, rejecting it (Pvalue 0.65). Since at that point, the Polish
national bank abandoned the exchange rate commitment and left the direct
inﬂation targeting to stand alone, it is fair to conclude that the change of
monetary target did not result in a destabilization of the inﬂation dynamics.
Since the target for monetary policy is deﬁned mainly because it is assumed
that in that way the expected inﬂation can be favorably inﬂuenced, we prefer
not to give a structural interpretation of a backward looking formulation
of the inﬂation equation, but in this case we can at least conclude that the
change of target did not result in a worsening of the expectation of inﬂation.
Thus, it is likely that by 2000, the Polish central bank acquired sufﬁcient
credibility to move away from a commitment that would have exposed it
to pressure from market speculation.
Although we summarized the diagnostic tests, concluding that the insta
bility that may be associated to the transition to a market economy can be
largely conﬁned to the period before 1994, we also estimated a more restric
tive speciﬁcation, in which the sample for the AD equation is from 1995
onwards, and for the PC from 1998 onwards. We considered this second
speciﬁcation, because the residuals were not normally distributed, and an
asymptotic interpretation of the outcome of the autocorrelation test for the
AD equation was particularly dubious because the test statistic reached
approximately, the limit critical value, the Pvalue being slightly lower
than 0.05 (it was 0.046). As far as the PC equation, the Chow test did not
identify any break in 1998Q1 but the CUSUMsq statistic did indeed signal
a potential break in this year, albeit only mildly. The estimates in the more
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 189
restrictive model are
ˆ y
t
= 0.021
(0.009)
+0.454y
t−1
(0.154)
−0.0029y
t−1
(0.001)
π
t
= −0.57π
t−1
(0.19)
−0.76π
t−2
(0.13)
−0.47π
t−3
(0.20)
+16.09(q
t−1
−q
t−5
)
(7.23)
over the period 1995Q1–2004Q1 for the AD, and 1998Q1–2004Q1 for
the PC, very similar then to the estimates on the longer sample. For the PC
equation, we also present the alternative speciﬁcation
π
t
= −0.66π
t−1
(0.20)
−0.78π
t−2
(0.13)
−0.59π
t−3
(0.20)
+39.24π
t−1
(29.00)
+ 19.22(q
t−1
−q
t−5
)
(7.46)
estimated over the sample 1998Q1–2004Q1: the lagged output gap still
appeared in the determinants of the inﬂation dynamics, it had the correct
sign and it was also marginally signiﬁcant with the 10% critical value and a
onesided t test: a possible interpretation of this result could be that by 1998,
the PC equation evolved even closer to the standard textbook speciﬁcation,
but we think that a longer dataset is needed to address this particular issue.
In any case, considering the whole model, we conﬁdently conclude that
the Polish monetary authority was able to control inﬂation during these
years, but we think it mainly worked through the exchange rate manage
ment, the evidence of the closed economy transmission mechanism, in fact,
being dubious.
3.3. Czech Republic
Czech Republic originated in 1993, from the split of the former Czechoslo
vakia. Data availability is then slightly reduced, and it is also possible that
the division of the country acted as a source of instability additional to the
one generated to the transition. Although it is not possible to distinguish
between these two causes, the evidence of instability of the macroeconomic
structure was quite clear.
The sample for the estimation of theADequation started in 1994Q4, but
we found that a break took place around 1999, according to the CUSUMsq
statistic. This is approximately also the point for which the Chow test is
to be computed, and we found strong evidence in favor of it (the Pvalue
associated to it was 0.01). Comparing the estimates in the two subsamples,
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
190 Fabrizio Iacone and Renzo Orsi
the most relevant differences were in the fact that the coefﬁcient of the real
rate, that signals the transmission of the monetary policy impulse from the
monetary authority to the economy, and the real exchange rate depreciation,
were only signiﬁcant and with the correct sign in the second part of the
sample (the economic activity in Germany remained insigniﬁcant).
It is possible that instability affected the PC equation too, albeit in
this case the evidence was less compelling. Using the whole sample that
started in 1994, the CUSUMsq statistic did not indicate the presence of any
break, but according to the Chow test a discontinuity took place in 1998.
The estimated coefﬁcient of the economic activity was largely insigniﬁcant,
despite having the same sign predicted by the economic theory, while we
found that the real exchange rate appreciation had a strongly signiﬁcant
antiinﬂationary effect. The estimated model was
ˆ y
t
= 0.010
(0.006)
+0.620y
t−1
(0.116)
−0.03y
t−1
(0.001)
−0.083(q
t−1
−q
t−5
)
(0.050)
π
t
= −0.76π
t−1
(0.23)
−0.21π
t−2
(0.25)
−0.31π
t−3
(0.18)
+32.55(q
t−1
−q
t−5
)
(11.97)
over the period 1998Q1–2004Q1 for both the AD and the PC, while if we
relied on the CUSUMsq statistic rather than on the Chow one for the PC,
the PC equation estimated over the period 1994Q1–2004Q1 was
π
t
= −0.84π
t−1
(0.15)
−0.33π
t−2
(0.17)
−0.34π
t−3
(0.09)
+25.338(q
t−1
−q
t−5
)
(9.186)
Comparing these estimates, notice that if a break did indeed take place,
then the consequence was that the real exchange rate appreciation had a
stronger effect against inﬂation in the last part of the sample, that was
exactly when Czech central bank used inﬂation rather than exchange rate
targeting. As in the case of Poland, we then found no evidence that the
switch of monetary policy target resulted in a worsening of the inﬂationary
dynamics. On the contrary, if a change in the dynamics of the growth rate
of prices took place at all, when inﬂation targeting was introduced, it was
in favor of an easier control of it.
3.4. Slovenia
As we didfor CzechRepublic, inthe case of Slovenia toowe onlyconsidered
the part of the sample corresponding to the existence of an independent
state.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 191
The data available for the estimation started from 1994Q1 for the AD
equation and in 1993Q2 for the PC. We found anyway, evidence of model
misspeciﬁcation when the ﬁrst year is included, resulting in residual auto
correlation and inconsistent estimates: it is also fair to suspect, on the basis
of the CUSUMsq statistic, that a break took place in the ﬁrst part of the
sample. We then removed 1993 from the dataset, and estimated the model
from1994 onwards (sample period 1994Q1–2003Q3 for theAD, 1994Q1–
2003Q4 for the PC), obtaining
ˆ y
t
= 0.0226
(0.167)
+0.343w
t−1
(0.234)
π
t
= −0.60π
t−1
(0.14)
−0.66π
t−2
(0.12)
−0.45π
t−3
(0.12)
+52.64(q
t−1
−q
t−5
)
(15.60)
The diagnostic tests conﬁrmed that both the equations were correctly
speciﬁed and stable in the period considered, albeit, we acknowledge that
the estimates in theADequation are only signiﬁcant using a 10%threshold.
We are inclined to consider them anyway, suspecting that a more clear
link will emerge in the future, with a longer sample and an even higher
integration in the European economy. Notice, on the other hand, the absence
of the real interest rate, whose estimated coefﬁcient even failed to have the
sign prescribed in the economic theory: this supported the argument of
Capriolo and Lavraac that, until recently, the interference of the central
bank on the ﬁnancial markets made the interest rates not informative about
the monetary stance.
The estimated PC is qualitatively similar to the Polish and Czech one,
with inﬂation only driven by the exchange rate dynamics but not by the
output gap: indeed,
ˆ
β
y
even resulted as negative, so in the ﬁnal speciﬁcation
we excluded y
t−1
fromthe explanatory variables altogether. In order to see,
if the more active exchange rate policy resulted in an easier inﬂation control
and then in a larger β
q
, we also tested (with a Forecast test) for a break in
2001, but we did not reject the hypothesis of stability.
3.5. Some Concluding Remarks
Poland, Czech Republic, and Slovenia are small economies that in a few
years reshaped their productive and distributive structures, opened to the
international trade, established a functioning market economy and joined
the EU. Despite these common features, they differ quite remarkably for
the time and the condition, in which they started the transition to the market
economy, for the policies they implemented since 1989, and for the size
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
192 Fabrizio Iacone and Renzo Orsi
and the relative importance of the trade with the rest of the EU. It is then
fair to expect that both the similarities and the differences emerge, and this
is indeed the case.
The most relevant common feature is the presence of a certain instability
in the ﬁrst part of the sample. To appreciate the importance of the transition,
we observed that, according to a study of Coricelli and Jasbec (2004), the
characteristics that may be associated to the initial conditions were the main
driving forces of the real exchange rate dynamics. The variables that were
more directly related to the economic notions of supply and demand, only
acquired importance after a period of approximately ﬁve years. We found
that the macroeconomic model structure begun to show a stable pattern
only from 1994 onward, conﬁrming the ﬁndings of Coricelli and Jasbec.
Even taking 1994 as a starting value, anyway, some elements of instability
remained, especially with respect to Czech Republic.
The instability of the estimated macroeconomic relations is also infor
mative about the speed of the transition. In this respect, we found that in
Slovenia the adverse effect of theYugoslavian civil war canceled the advan
tage due to the selfmanagement reform; Czech Republic, which had to face
not only the decentralization and the breakup of CMEA but even the split
of Czechoslovakia, fared worse.
The relation between the variables got closer to the standard textbook
description of a market supply and demand structure in the last few years.
Yet, even in that part of the sample evidence of a transmission mecha
nism of monetary policy based on the interest rates was only present for
Poland, and that too turned out to be very weak. Although it is not surpris
ing that the exchange rate has a prominent role in inﬂation control in small
open economies, our ﬁndings cannot be explained simply by this argument,
because we explicitly include the exchange rate in both the equations of
the model, then taking it into account (and holding it ﬁxed in the interpre
tation of the results) in the multiple regression framework. One has rather
to conclude that the interest rate channel of transmission of monetary pol
icy was weak or obstructed, as indeed explicitly argued for Slovenia. This
also means that, although these economies were acknowledged as func
tioning market economies, they may still be lagging behind; nonetheless,
since a good development took place during the accession period, it is fair
to expect that participation to the EU will foster integration. Meanwhile,
their presence should not hamper inﬂation control too much, because the
exchange rate is clearly a valid instrument for that purpose: this means that
in a widened eurozone, the Eurosystemmay control inﬂation in the CEECs
by controlling it in the core area.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 193
Although they are all small open economies, a pattern can be envis
aged in terms of dimension and openness to international trade, and this
is reﬂected in the estimated models: Poland, is the only country in which
the international factors did not affect the AD. Another way to appreciate
the importance of the international openness is to look at the estimated
effect of a 1% appreciation of the exchange rate: the impact on the inﬂation
dynamics is clearly ranked according to the relative dimension of trade.
Stability analysis is also important because it allows us to understand if
the switch in Poland and in Czech Republic from exchange rate to inﬂation
targeting affected the price setting formation. We did not ﬁnd any evidence
of it, nor we found that a more aggressive exchange rate policy in Slovenia
eased the costs of disinﬂation from 2001 onwards. The instability that we
found for Czech Republic seems to suggest an evolution of the macroeco
nomic structure towards a market economy, and should then have more to
do with EU accession than with inﬂation targeting. One main implication
that we perceive is that the exchange rate commitment is either irrelevant
or, more likely, replaceable with inﬂation targeting as a nominal anchor.
We think that this conclusion is of particular interest to help choosing the
monetary target for the period preceding the last two years before ﬁxing
the exchange rate in the ERM2: inﬂation targeting should be preferred and
ERM2 membership should only be limited to the two mandatory years.
We also think that this lesson can be extended to other emerging market
economies, and that inﬂation targeting, rather than explicit exchange rate
commitment, should be preferred, especially if strict capital controls are
not in place.
References
Amato, JD and S Gerlach (2002). Inﬂation targeting in emerging market and transition
economies: lessons after a decade. European Economic Review 46, 781–790.
Capriolo, GandVLavrac (2003). Monetary and exchange rate policy in Slovenia. Eastward
Enlargement of the EuroZone Working Papers, 17G. Berlin: Free University Berlin,
Jean Monnet Centre of Excellence, 1–35.
Coricelli, F and B Jasbec (2004). Real exchange rate dynamics in transition economies.
Structural Change and Economic Dynamics 15, 83–100.
Dvorsky, S (2000). Measuring central bank independence in selected transition countries
and the disinﬂation process. Bank of Finland, Boﬁt Discussion Paper 13. Helsinki:
Bank of Finland, 1–14.
Iacone, F and R Orsi (2004). Exchange rate regimes and monetary policy strategies for
accession countries. In Fiscal, Monetary and Exchange Rate Issues of the Eurozone
Enlargement, K Zukrowska, R Orsi and V Lavrac (eds.), pp. 89–136. Warsaw: Warsaw
School of Economics.
Maliszewski, WS (2000). Central bank independence in transition economies. Economics
of Transition 8, 749–789.
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
194 Fabrizio Iacone and Renzo Orsi
Rudebush, GD and LEO Svensson (1999). Policy rules for inﬂation targeting. In Monetary
Policy Rules, John B Taylor (ed.), pp. 203–246. Chicago: Chicago University Press.
Rudebush, GD and LEO Svensson (2002). Eurosystem monetary targeting: lessons from
U.S. data. European Economic Review 46, 417–442.
Svensson, LEO (2000). Openeconomy inﬂation targeting. Journal of International Eco
nomics 50, 155–183.
Appendix A
Summary of the Results
Note to the Tables: for each equation, N indicates the normality test, AC
is the LM test of no autoregressive structure in the residuals and H is the
test for the heteroskedasticity; W indicates the test of those restrictions
that reduce the corresponding equation as speciﬁed in general to the one
speciﬁcally used. For each test, we report the Pvalue: the small sample
version of the test is taken, unless normality is rejected. When the estimated
equation is restricted, the test statistics N, AC, H, Chow, W are calculated
in the unrestricted model.
Table 1a. Germany.
AD [sample: 91Q2–04Q1]
ˆ y
t
= 0.011 + 0.769
y
t
−1
− 0.038r
t−1
[N: 0.38] [AC: 0.20] [H: 0.17]
PC [sample: 93Q1–04Q1]
ˆ
π
t
= −1.02π
t−1
− 0.86π
t−2
− 0.71π
t−3
+ 17.62y
t−2
[N: 0.00] [AC: 0.24] [H: 0.10] [Chow: 0.18] [W: 0.44]
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 195
Table 2a. Poland, 1994 onwards.
AD [sample: 94Q1–04Q1]
ˆ y
t
= 0.020 + 0.467y
t−1
− 0.0027r
t−1
[N: 0.00] [AC: 0.047] [H: 0.49] [Chow: 0.046] [W: 0.57]
PC [sample: 94Q1–04Q1]
ˆ
π = −0.64π
t−1
− 0.67π
t−2
− 0.51π
t−3
+ 21.46(q
t−1
−q
t−5
)
[N: 0.00] [AC: 0.056] [H: 0.29] [Chow: 0.36] [W: 0.06] [Chow: 2000Q2: 0.65]
Table 2b. Poland, restricted sample.
AD [sample: 95Q1–04Q1]
ˆ y
t
= 0.021 + 0.454y
t−1
− 0.0029r
t−1
[N: 0.00] [AC: 0.07] [H: 0.54] [Chow: 0.54] [W: 0.64]
PC [sample: 94Q1–04Q1]
ˆ
π
t
= −0.57π
t−1
− 0.76π
t−2
− 0.47π
t−3
+ 16.09(q
t−1
−q
t−5
)
[N: 0.57] [AC: 0.25] [H: 0.79] [W: 0.39]
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
196 Fabrizio Iacone and Renzo Orsi
Table 3. Czech Republic, restricted sample.
AD [sample: 98Q1–04Q1]
ˆ y
t
= 0.010
(0.006)
+0.620y
t−1
(0.116)
−0.03r
t−1
(0.001)
+0.083(q
t−1
−q
t−5
)
(0.050)
[N: 0.79] [AC: 0.07] [H: 0.54] [W: 0.15]
PC [sample: 98Q1–04Q1]
ˆ
π
t
= −0.76π
t−1
− 0.21π
t−2
− 0.31π
t−3
+ 32.55(q
t−1
−q
t−5
)
[N: 0.71] [AC: 0.34] [H: 0.33] [W: 0.26]
Table 4. Slovenia.
AD [sample: 94Q1–03Q3]
ˆ y
t
= 0.226y
t−1
(0.167)
+0.343w
t−1
(0.234)
[N: 0.30] [AC: 0.37] [H: 0.86] [Chow 0.31] [W: 0.31]
PC [sample: 94Q1–03Q4]
ˆ
π
t
= −0.60π
t−1
− 0.66π
t−2
− 0.45π
t−3
+ 52.64(q
t−1
−q
t−5
)
[N: 0.44] [AC: 0.35] [H: 0.65] [Chow: 0.83] [W: 0.15] [Break 2001Q1:0.69]
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 197
Datastream codes.
Czech
Germany Poland Republic Slovenia
Exchange rate
(US$ per ∈)
USXRUSD
Exchange rate
(vs. US$)
BDESXUSD. POI..AF. CZXRUSD. SJXRUSD.
CPI BDCONPRCF POCONPRCF CZCONPRCF SJCONPRCF
Industrial
production
BDINPRODG POIPTOT5H CZIPTOT.H SJIPTO01H
3monthrate BDINTER3 PO160C.. CZ160C.. SJI60B..
Diagnostic tests of some speciﬁcations that we dismissed:
Table 5b. Germany, PC equation, whole dataset.
PC [sample: 91Q3–04Q1]
ˆ
π
t
= 0.47
(0.48)
−0.29π
t−1
(0.18)
−0.66π
t−1
(0.18)
−0.53π
t−2
(0.17)
−0.50π
t−3
.
(0.18)
+7.22y
t−1
(13.41)
[N:0.00] [AC: 0.20] [H: 0.06] [Chow: 0.62]
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
198 Fabrizio Iacone and Renzo Orsi
Table 5c. Poland, whole dataset.
AD [sample: 93Q2–04Q1]
ˆ y
t
= 0.02
(0.009)
+0.509y
t−1
(0.166)
−0.0025r
t−1
(0.001)
−0.130w
t−1
(0.203)
−0.054(q
t−1
+q
t−5
)
(0.045)
[N: 0.00] [AC: 0.20] [H: 0.53] [Chow: 0.06]
PC [sample: 93Q2–04Q1]
ˆ
π
t
= − 0.42
(1.25)
−0.10π
t−1
(0.08)
−0.77π
t−1
(0.12)
−0.78π
t−2
(0.11)
−0.67π
t−3
.
(0.10)
+22.55y
t−1
(27.42)
+18.08(q
t−1
−q
t−5
)
(9.29)
[N: 0.00] [AC: 0.01] [H: 0.07] [Chow: 0.81]
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
Inﬂation Control in Central and Eastern European Countries 199
Table 5d. Czech Republic, whole dataset.
AD [sample: 93Q4–04Q1]
ˆ y
t
= 0.007
(0.005)
+0.617y
t−1
(0.140)
−0.0001r
t−1
(0.001)
−0.186w
t−1
(0.229)
+0.047(q
t−1
+q
t−5
)
(0.063)
[N: 0.44] [AC: 0.94] [H: 0.06] [Chow: 0.01]
PC [sample: 94Q1–04Q1]
ˆ
π
t
= 2.45
(1.17)
−0.24π
t−1
(0.19)
−0.68π
t−1
(0.21)
−0.28π
t−2
(0.19)
− 0.36π
t−3
(0.10)
+22.10y
t−1
(24.4)
+33.22(q
t−1
−q
t−5
)
[N: 0.00] [AC: 0.25] [H: 0.086 [Chow: 0.01] [W:0.19]
March 26, 2009 11:27 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic1
200 Fabrizio Iacone and Renzo Orsi
Table 5e. Slovenia, PC equation, whole dataset.
PC [sample: 93Q2–03Q4]
ˆ
π
t
= 4.71
(1.41)
−0.50π
t−1
(0.19)
−0.03π
t−1
(0.11)
−0.40π
t−2
(0.11)
− 0.10π
t−3
.
(0.07)
+2.45y
t−1
(26.70)
+48.28(q
t−1
−q
t−5
)
(17.73)
[N: 0.84] [AC: 0.03] [H: 0.02] [Chow: 0.71]
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Topic 2
Credit and Income: CoIntegration Dynamics
of the US Economy
Athanasios Athanasenas
Institute of Technology and Education, Greece
1. Introduction
In this study, I seek to identify the existence of a signiﬁcant statistical rela
tionship between credit (as commercial bank lending) and income (GDP),
for the US postwar economy, in terms of a contemporary cointegration
methodology.
The main empirical ﬁnding regarding the hypothesis that causality, in
the longrun, is directedfromcredit toincome, agrees withthe “creditview”.
This can be veriﬁed by the the postwar US data. In this chapter, I have
identiﬁed a dynamic causality effect in the short run, from income changes
to credit ones. To obtain the results cited here, I have applied a cointegration
analysis. In addition to considering the formulated VAR, I place particular
emphasis upon the stability analysis of the estimated equivalent ﬁrstorder
dynamic system. Finally, dynamic forecasts from the corresponding error
correction variance (ECVAR) are obtained, in order to verify the validity
of the cointegration vector used.
This study have four sections. Section 2 reviews, in a brief note,
the literature on credit and income growth of the postwar US economy.
Section 3 deals with methodological issues concerning the contemporary
cointegration analysis and the data used. Section 4 presents the econo
metric analysis and the empirical results; while Section 5 concludes the
creditincome causality issue, placing emphasis on the stability analysis
of the obtained ﬁrstorder dynamic system and the forecasts from the
corresponding ECVAR.
201
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
202 Athanasios Athanasenas
2. The Credit Issue Researched: A Brief Note
on the US Economy
There is an evidence that money tends to “lead” income in a historical sense
(see Friedman and Schwartz (1963) and Friedman (1961; 1964). Where the
mainstreammonetarists through the “quantity theory” explain the empirical
observations as the causal relationship running from money to income.
Since the seminal works of Sims, empirical validation of the relationship
between money and income ﬂuctuations has been established (Sims, 1972;
1980a). Since the publication of the works of Goldsmith (1969), Mckinnon
(1973), and Shaw (1973), the debate has been going on about the role of
ﬁnancial intermediaries and bank credit in promoting the longrun growth.
On the basis of historical data, Friedman and Schwartz (1963) argue that
major depressions of the US economy have been caused by autonomous
movements in money stock.
1
Sims (1980a) concludes that on one hand,
money stock emerges as causally prior accounting for a substantial fraction
of income variance during the interwar and postwar periods of the busi
ness cycles. On the other hand, old skepticism remains on the direction of
causality between money and income.
It is well known that traditional monetarists are consistent with the
oldclassic view that only “money matters”, so that even when tighten
ing of bank credits does occur during recessions, they see these as part
of the endogenous ﬁnancial system, rather than exogenous events, which
induce recessions. As Brunner and Meltzer (1988) argue, banking crises
are endogenous ﬁnancial forces, which directly affect business cycles con
ditional upon the monetary propagation mechanism.
2
In contrast, the “newcredit” viewers stress further the importance of
credit restrictions, while accepting the fundamental inefﬁciencies of the
monetary policy.
3
Bernanke and Blinder (1988), in their seminal article,
conclude that credit demand is becoming relatively more stable than money
1
Ibid.
2
Note that monetarists seem to stress mainly, the complementarity of credit and money
channels of transmission, the shortrun stability of the money multiplier, and the longrun
neutrality of money.
3
The “newcredit” view combines the “oldcredit” view of the money to spend perspective,
with the money to hold perspective of the money view. The “oldcredit” view focused on
decisions to create and spend money, on the expansion effects of bank lending, and empha
sized the issue of the nonneutrality of credit money in the longrun (see also, Trautwein,
2000).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 203
demand since 1979 and throughout the 1980s, where, money demand
shocks became much more important relative to credit demand shocks in
the 1980s, supported by their estimation of greater variance of money.
Evidently, one of the most challenging issues of the applied monetary
theory is the existence and importance of the credit channel in the monetary
transmissionmechanism. Bernanke (1988) emphasizes that accordingtothe
proponents of the credit view, bank credit (loans) deserve special treatment
due to the qualitative difference between a borrower going to the bank for
a loan and a borrower raising funds by going to the ﬁnancial markets and
issuing stock or ﬂoating bonds.
4
Kashyap and Stein (1994) leave no doubt while supporting Bernanke’s
(1983), and Bernanke’s and James’s (1991) examination of the Great
Depression in the United States, that the conventional explanation for the
depth and persistence of the Depression is one of the strongest pieces of evi
dence supporting the view that shifts in loan supply can be quite important.
The supporting evidence stands strong, considering also that the decline of
intensity as seen in recessions are partially due to a cutoff in bank lending
(Kashyap et al., 1994).
5
King and Levine (1993) stress that ﬁnancial indicators, such as the
importance of the banks relative to the central bank, the percentage of
credit allocated to private ﬁrms and the ratio of credit issued to private
ﬁrms to GDP, are strongly related with growth. For the relevant role of the
monetary process and the ﬁnancial sector, see, in particular, (Angeloni et al.,
2003; Bernanke and Blinder, 1992; Murdock and Stiglitz, 1993; Stock and
Watson, 1989; Swanson, 1998).
More recently, Kashyap and Stein (2000) proved that the impact of
monetary policy on lending behavior is stronger for the banks with less
liquid balance sheets, where liquidity is measured by the ratio of securities
4
Stiglitz (1992) stresses the fact that the distinctive nature of the mechanisms by which
credit is allocated plays a central role in the allocation process. He identiﬁes several crucial
reasons that banks are likely to reduce lending activity (p. 293), as the economy goes into
recession, where banks’ unwillingness and the inability to make loans obviates the effect
of monetary policy. Bernanke and Blinder (1992) ﬁnd that the transmission of monetary
policy works through bank loans, as well as through bank deposits. Tight monetary policy
reduces deposits and over time banks respond to tightened money by terminating old loans
and refusing to make newones. Reduction of bank loans for credit can depress the economy.
5
Both Bernanke and Blinder (1992) and Gertler and Gilchrist (1993) using innovations in
the federal funds rate as an indicator of monetary policy disturbances, ﬁnd that M1 or M2
declines immediately, while bank loans are slower to fall, moving contemporaneously with
output.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
204 Athanasios Athanasenas
to assets. Their principal conclusions can be narrowed down to: (a) the exis
tence of the creditlending channel of monetary transmission, at least for the
United States is undeniable and (b) the precise quantiﬁcation and accurate
measurement of the aggregate loansupply consequences of monetary pol
icy remain very difﬁcult in econometrics due to the large estimation biases.
See also the works of Mishkin (1995), Bernanke and Gertler (1995), and
Meltzer (1995), for the banklending channel. Lown and Morgan (2006)
stress further the substantial issue of the role of ﬂuctuations in commercial
credit standards and credit cycles being correlated with real output.
All previous works provide a clear evidence that is consistent with
the existence of a creditlending channel of monetary transmission. The
very heart of the creditlending view is the proposition that the Central
Banks, in general, and the Federal Reserve in the United States, in partic
ular, can shift banks’ loan supply schedules, by conducting openmarket
operations.
6
Consequently, given the aforementioned empirical research on the
creditlending channel and the established relationship between ﬁnancial
intermediation and economic growth in general, our main focus here is to
analyze in detail the causal relationship between ﬁnance and growth by
focusing on bank credit and income GNP, in the postwar US economy.
Analyzing in detail here means placing particular emphasis: (a) on a con
temporary cointegration approach, and (b) on the stability analysis of the
observed ﬁrstorder dynamic system and the dynamic forecasts from the
corresponding ECVAR.
3. Methodological Issues and Data
In the empirical analysis, for the case of the US postwar period 1957–2007,
we test for the existence of causality between the total loans and leases at
commercial banks, denoted by real credit, representing the development of
the banking sector, and the money income, GNP, denoted by real income,
representing national economic performance. Both the variables are quar
terly seasonally adjusted from 1957:3 to 2007:3, i.e., 201 observations, and
are expressed in real terms, indexed by 1982–1984 as base 100, in billions of
US$. Considering the natural logs, I deﬁne two newvariables (W
i
, Y
i
) such
that Y
i
= ln(real income
i
) and W
i
= ln(real credit
i
), for i = 1, 2, . . ., 201.
6
It is noted that the creditlending channel also requires an imperfect price adjustment and
that some borrowers cannot ﬁnd perfect substitutes for bank loans.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 205
My data is obtained on line from the Federal Reserve Bank of Saint Louis,
Missouri, through their open source database.
7
Researchers usually employ some measure of the stock of money over
GNP, as a proxy for the size of the “ﬁnancial depth” (see for instance,
Goldsmith, 1969; Mckinnon, 1973). This type of ﬁnancial indicator does
not inform us as to whether the liabilities are those of the central banks,
commercial banks, or other ﬁnancial intermediaries. In this way, the credit
channel issue cannot be analyzed, and the signiﬁcant role of credit cannot
be isolated. Moreover, this type of ﬁnancial indicator is mainly applicable
to the developing countries, since the ﬁnancing of the economy is to a
great extent carried out by the public sector and controlled mainly by its
central bank.
An advanced economy such as the United States, with a fully liberalized
banking and ﬁnancial system, provides an excellent case for testing the
modern credit view such that the commercial bank credit itself has to be
modeled as a fundamental, ﬁnancial, and an independent monetary variable
that affects money income, and hence the economic performance, in the
long run.
Based on the cited references, we would accept the value of commercial
bank credit to the private sector, as a measure of the ability of the banking
system to provide ﬁnanceled income growth. The extent to which money
income growth is associated with the credit provision from the ﬁnancial
sector is examined through the use of contemporary cointegration methods
and system stability analysis.
Initially, the order of integration of the two variables is investigated
using standard unit root tests (Dickey and Fuller, 1979; Harris, 1995). The
next step is the computation of cointegration vectors by applying the max
imum likelihood (ML) approach considering the errorcorrection represen
tation of the VAR model (Harris, 1995; Johansen and Juselius, 1990). We
transformed the initial VAR to an equivalent ﬁrstorder dynamic system
to perform the proper stability analysis, which revealed that the system
under consideration is stable. It should be recalled that stability is of vital
importance for obtaining consistent forecasts, and this issue is stressed
appropriately in Sections 4 (4.2 and 4.6) and 5.
I proceed with the utilization of the vector errorcorrection modeling
following Engle and Granger (1987), and testing for the shortrun dynamic
relationship on causality effects between money and income.
7
And, we are deeply thankful for this.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
206 Athanasios Athanasenas
I endupbyapplyinga dynamic forecastingtechnique withthe estimated
ECVAR. The very low value of Theil’s inequality coefﬁcient U, provides
a concrete evidence that I have formulated the suitable ECVAR obtain
ing, thus, robust results. Our particular emphasis on the system stability is
evident and it is clariﬁed in detail in the following section.
4. CoIntegration
Considering the series {Y
i
}, {W
i
} and applying the Dickey–Fuller
(DA/ADF) tests (Dickey and Fuller, 1979; 1981; Harris, 1995, pp. 28–47),
I found that both series are not stationary, but they are integrated of order
1, that is I(1). This implies that by differentiating the initial series, we
get stationary ones. In other words, the series Y
i
= Y
i
− Y
i−1
and
W
i
= W
i
− W
i−1
are stationary, that is I(0). This can be easily veriﬁed
from Fig. 1, where the nonstationary series {Y
i
}, {W
i
} are also presented.
Y
i
W
i
∆Y
i
∆W
i
Figure 1. The I(1) series {Y
i
}, {W
i
} and the stationary ones {Y
i
}, {W
i
}.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 207
Since the variables Y
i
, W
i
are used in this study, it should be recalled that
the parameters of a longrun relationship represent constant elasticities.
It should be recalled that if two series are not stationary but they have
common trends so that there exist a linear combination of these series that
is stationary, which means that they are integrated in a similar way and for
this reason, they are called cointegrated, in the sense that one or the other
of these variables will tend to adjust so as to restore a longrun equilibrium.
Also, if there is a linear trend in the data, as in the case of Y
i
and W
i
as
shown in Fig. 1, then we specify a model including a timetrend variable,
allowing thus the nonstationary relationships to drift.
4.1. The VAR Model and The Equivalent ECVAR
We start this section by formulating and estimating an unrestricted VAR of
the form:
x
i
=δδδ +µµµt
i
+
p
j=1
A
j
x
i−j
+w
i
(1)
where x ∈ E
n
(here E denotes the Euclidean space), δδδ is the vector of
constant terms, t
i
is a timetrend variable, µµµ is the vector of corresponding
coefﬁcients, and w is the noise vector. The matrices of coefﬁcients A
j
, are
deﬁned on E
n
× E
n
. It is noted that variable t
i
is included for the reason
mentioned above. It can be easily shown (Johansen, 1995), that (1) may be
expressed in the form of an equivalent ECVAR, that is:
x
i
=δδδ +µµµt
i
+
p−1
j=1
Q
j
x
i−j
+x
i−1
+w
i
(2)
where Q
j
= (
j
k=1
A
k
−I).
After estimation, we may compute matrix from:
=
_
_
p
j=1
A
j
_
_
−I (2a)
or, directly fromthe corresponding ECVARas given in Eq. (2). It is recalled
that this type of VARmodels like the one presented above, is recommended
by Sims (1980a; b), as a way to estimate the dynamic relationships among
jointly endogenous variables. It should be noted that for the case under
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
208 Athanasios Athanasenas
consideration, vector x in Eqs. (1) and (2) is 2dimensional, i.e.,
x
i
=
_
Y
i
W
i
_
and it is x ∼ I(1).
It is known that the matrix can be decomposed such that = AC,
where the elements of A are the coefﬁcients of adjustment and the rows
of matrix C are the (possible) cointegrating vectors. In some books, A is
denoted by ααα and B by βββ
, although capital letters are used for matrices.
Besides, βββ denotes the coefﬁcient vector in a general linear econometric
model. To avoid any confusion, we adopted the notation used here, as well
as in Mouza and Paschaloudis (2007).
Usually, matrices A and C are obtained by applying the ML method
(Harris, 1995; Johansen and Juselius, 1990, p. 78). According to this proce
dure, a constant and/or trend can be included in the cointegrating vectors
in the following way.
Considering Eq. (2), we can augment matrix to accommodate, as an
additional column, the vectors δδδ and µµµ in the following way.
˜
= [
.
.
.µµµδδδ]. (3)
In this case,
˜
is deﬁned on E
n
×E
m
, with m > n. For conformability,
the vector x
i−1
in Eq. (2) should be augmented accordingly by using the
linear advance operator L (such that L
k
y
i
= y
i+k
), i.e.,
˜ x
i−1
=
_
_
x
i−1
Lt
i−1
1
_
_
. (3a)
Hence, Eq. (2) takes the form:
x
i
=
p−1
j=1
Q
j
x
i−j
+
˜
˜ x
i−1
+w
i
. (4)
Note that no constants (vector δδδ), as well as coefﬁcients of the time trend
(vector µµµ) are explicitly presented in the ECVARin Eq. (4), which speciﬁes
precisely a relevant set of n ECM.
It is assumed at this point that matrices C and A refer to matrix
˜
as
seen in Eq. (4). Let us consider now the kth row of matrix C, that is c
k.
.
8
If
this row is assumed to be a cointegration vector, then the (disequilibrium)
8
Note that dot is necessary to distinguish the kth row of C, i.e., c
k.
, from the transposed of
the kth column of this matrix that is c
k
.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 209
errors computed from u
i
= c
k.
˜ x
i
must be a stationary series. Considering
now the kth column of matrix A, that is a
k
, and given that u
i−1
= c
k.
x
i−1
,
then the ECVAR in Eq. (4) can be written as:
x
i
=
p−1
j=1
Q
j
x
i−j
+a
k
u
i−1
+w
i
. (5)
This is the conventional form of an ECVAR, when matrix
˜
instead
of is considered. In any case, the maximum lag in Eq. (5) is p − 1. It
should be noted that this speciﬁcation of an ECM is fully justiﬁed from the
theoretical point of view. However, if we want to include an intercept in
Eq. (5), i.e., in the shortrun model, it may be assumed that the intercept in
the cointegration vector is cancelled by the intercept in the shortrun model,
leaving only an intercept in Eq. (5). Thus, when a vector of constants is to
be included in Eq. (4), which, in this case, takes the form:
x
i
=δδδ +
p−1
j=1
Q
j
x
i−j
+
˜
˜ x
i−1
+w
i
(6)
then Eq. (5), matrix
˜
and the augmented vector ˜ x will become:
x
i
=δδδ +
p−1
j=1
Q
j
x
i−j
+a
k
u
i−1
+w
i
(6a)
˜
= [
.
.
.µµµ] (6b)
˜ x
i−1
=
_
x
i−1
Lt
i−1
_
. (6c)
According to Eq. (6a), the cointegrating vector c
k.
, used to compute
u
i
= c
k.
˜ x
i
, includes only the coefﬁcient of the timetrend variable. The
inclusion of a trend in the model can be justiﬁed, if we examine the esti
mation results of a longrun relationship considering Y and W. In such a
case, we immediately realize that a trend should be present.
After adopting the VAR as seen in Eq. (1), the next step is to determine
the value of p from Table 1 (Holden and Perman, p. 108).
We see from Table 1 that for α ≤ 0.05, p = 4, which means that we
have to estimate a VAR(4), with only one deterministic terms (i.e., trend).
After estimation, we found that matrix
˜
(hat is omitted for simplicity),
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
210 Athanasios Athanasenas
Table 1. Determination of the value of ppp.
Lag length ppp LR LR LR Value of ppp (probability)
1 — —
2 132.91 0.0000
3 14.988 0.0047
4 14.074 0.0071
5 2.7328 0.6035
6 5.1465 0.2726
7 2.3439 0.6728
8 5.2137 0.2661
speciﬁed in Eq. (6b) has the following form:
Y
i
W
i
t
i
˜
=
_
−0.092971 0.028736 0.000320
0.016209 −0.024594 0.000124
_
. (7)
4.2. Dynamic System Stability
TheVAR(4) canbe transformedtoanequivalent ﬁrstorder dynamic system,
in the following way.
_
_
_
_
x
i
Lx
i
L
2
x
i
L
3
x
i
_
¸
¸
_
=
_
_
_
_
A
1
A
2
A
3
A
4
I
2
0 0 0
0 I
2
0 0
0 0 I
2
0
_
¸
¸
_
_
_
_
_
x
i−1
Lx
i−1
L
2
x
i−1
L
3
x
i−1
_
¸
¸
_
+
_
_
_
_
δδδ
0
0
0
_
¸
¸
_
+
_
_
_
_
µµµ
0
0
0
_
¸
¸
_
t
i
+
_
_
_
_
w
i
0
0
0
_
¸
¸
_
. (8)
L in this place, denotes the linear lag operator, such that L
k
z
i
= z
i−k
.
Equation (8) can be written in a compact form as:
y
i
=
˜
Ay
i−1
+d +zt
i
+v
i
(8a)
where
y
i
= [x
i
Lx
i
L
2
x
i
L
3
x
i
], d
= [δδδ 0 0 0],
z
= [µµµ 0 0 0], v
i
= [w
i
0 0 0]
and the dimension of matrix
˜
A is (8 ×8).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 211
Table 2. Eigen values.
No. Real part Coefﬁcient of imaginary part Length
1 0.94 0 0.94
2 0.8567 0.0579 0.8586
3 0.8567 −0.0579 0.8586
4 0.3807 0.0 0.3807
5 0.0367 0.3655 0.3674
6 0.0367 −0.3677 0.3674
7 −0.3581 0.0 0.3581
8 −0.1142 0.0 0.1142
To decide whether the dynamic system described by Eq. (8a) is stable, we
compute the eigenvalues of matrix
˜
A, presented in Table 2.
Since the greater length (0.94) is less then 1, the dynamic system as
given in Eq. (8a) is stable and can be used for forecasting purposes. A side
veriﬁcation of this stability test, is that once the system is stable, then the
elements of the state vector x in the initial VAR(4), which in this case are the
series {Y
i
}, {W
i
}, are either difference stationary series (DSS), or in some
cases, stationary series. In fact, we show that these series are DSS and in
particular, they are I(1).
4.3. The Estimated CoIntegrating Vectors
We applied the ML method, to compute matrices C and A, which have the
following form:
Y W t
C =
_
1 −0.289613 −0.003621
1 −0.653108 −0.000292
_
,
A =
_
−0.087991 −0.004980
−0.038535 0.054745
_
(9)
satisfying AC =
˜
. We can easily verify that only the ﬁrst row of matrix
C is a promising candidate, veriﬁed from the computed value of the t
statistic (−0.23), which corresponds to the (1, 2) element of matrix A, that
corresponds to the second rowof matrix C. Thus, we have the cointegrating
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
212 Athanasios Athanasenas
vector:
[1 −0.289613 −0.003621] (10)
producing the errors u
i
, which are the disequilibriumerrors obtained from:
u
i
= Y
i
−0.289613W
i
−0.003621t
i
. (10a)
It should be noted that the vector speciﬁed in Eq. (10) is a cointegration
vector, iff the series {u
i
} computed from Eq. (10a) is stationary. To ﬁnd out
that this series is stationary, we run the following regression with a constant
term, since
u
i
= 0.
u
i
= a +b
1
u
i−1
+
q
j=1
b
j+1
u
i−j
+ε
i
. (11)
Note that the value of q is set such that the noises ε
i
to be white. With q = 3,
the estimation results are as follows:
u
i
= 0.4275
(0.1174)
−0.07384
(0.02029)
u
i−1
+
3
j=1
ˆ
b
j+1
u
i−j
+ ˆ ε
i
t = −3.64.
(11a)
It should be noted that in Eq. (11a), there is no problem regarding
autocorrelation and heteroscedasticity. We have to compute the t
u
statistic
fromMacKinnon (1991, pp. 267–276) critical values (see also Granger and
Newbold, 1974; Hamilton, 1994; Harris, 1995, Table A6, p. 158; Harris,
1995, pp. 54–55). This statistic (t
u
=
∞
+
1
/T +
2
/T
2
, where T
denotes the sample size) is evaluated from the relevant table, taking into
account Eqs. (10a) and (11a). Note that in the latter equation, there is only
one deterministic term (constant). The value of the t
u
statistic for this case
is −3.3676 (α = 0.05). Hence, since −3.64 < −3.3676, we reject the
null that the series {u
i
} is not stationary. Recall that the null [u
i
∼ I(1)] is
rejected in favor of H
1
[u
i
∼ I(0)], if t < t
u
. Hence, the ﬁrst rowof matrix C
is a cointegration vector, which yields the following longrun relationship:
Y
i
= 0.289613W
i
+0.003621t
i
+u
i
which implies that a 10% increase of W (total credit) results to an increase
of Y (money income) by about 2.9%.
9
9
Another point of interest is that in Eq. (10a), the errors u
i
are computed from:
u
i
= Y
i
−
ˆ
Y
i
(12)
where
ˆ
Y
i
= 0.289613W
i
+0.003621t
i
.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 213
4.4. Essential Remarks
In seminal works about money and income (as for instance in Tobin, 1970),
the analysis is restricted in a pure theoretical consideration. Some authors
(see for instance, Friedman and Kuttner, 1992) write many explanations
about cointegration, cointegrating vectors, and errorcorrection models,
but I did not trace any attempt to compute either. In a later paper by the
same authors (Friedman and Kuttner, 1993), I read (p. 200) the phrase,
“vector autoregression system,” which in fact is understood as a VAR. The
point is that no VAR model is speciﬁed in this work. Besides, the lag length
is determined by simply applying the Ftest (p. 191). In this context, the
validity of the results is questionable. In other applications (see for instance,
Arestis and Demetriades, 1997), where a VAR model has been formulated,
nothing is mentioned about the test applied to determine the maximum lag
length. Also, in this paper, the authors are restricted to the use of trace
statistic (l
(trace)
) to test for cointegration (p. 787), although this is the
necessary but not the sufﬁcient condition, as pointed out above. From other
research works (see for instance, Arestis et al., 2001), I get the impression
that there is not a clear notion regarding stationarity and/or integration,
since the authors state [p. 22 immediately after Eq. (1)], that “D is a set
of I(0) deterministic variables such as constant, trend and dummies. . . ”.
It is clear that this veriﬁcation is false. How can a trend, for instance, be
a stationary, I(0), variable?
10
The authors declare (p. 24) that “We then
perform cointegration analysis. . . ”. The point is that apart from some
theoretical issues, I did not trace such an analysis in the way it is applied
here. Besides, nothingis mentionedabout anyECVAR, analogous tothe one
that is formulated and used in this study. Most essentially, I have not traced,
In case that the disequilibrium errors are computed from:
u
i
=
ˆ
Y
i
−Y
i
(13)
then the cointegration vector will have the form:
[−10.289613 0.003621]. (14)
It is noted also that hat (ˆ) is omitted from the disequilibrium errors u
i
, for avoiding any
confusion with the OLS disturbances.
It should be emphasized that in such a case, i.e., using the disequilibriumerrors computed
from Eq. (13), then the sign of the coefﬁcient of adjustment, as will be seen later, will be
changed.
10
It should be recalled at this point, that if t
i
is a common trend, then t
i
= 1(∀i).
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
214 Athanasios Athanasenas
in relevant works, the stability analysis applied for the system presented
in Eq. (8).
4.5. The ECM Formulation
The lagged values of the disequilibrium errors, that is u
i−1
, serve as an
error correction mechanism in a shortrun dynamic relationship, where the
additional explanatory variables may appear in lagged ﬁrst differences. All
variables in this equation, also known as ECM, are stationary so that, from
the econometric point of view, it is a standard single equation model, where
all the classical tests are applicable. It should be noted, that the lag structure
and the details of the ECM, should be in line with the formulation as seen
in Eq. (6a). Hence, I started from this relation considering the errors u
i
and estimated the following model, given that the maximum lag length is
p −1 i.e., 3.
Y
i
= α
0
+
3
j=1
α
j
Y
i−j
+
3
j=1
β
j
W
i−j
−a
Y
u
i−1
+
Y
v
i
. (15)
Note that
Y
v
i
are the model disturbances. If the adjustment coefﬁcient
a
Y
is signiﬁcant, then we may conclude that in the long run, W causes Y.
If a
Y
= 0, then no such a causality effect exists. In case that all β
j
are
signiﬁcant, then there is a causality effect in the short run, from W to
Y. If all β
j
= 0, then no such causality effect exists. The estimation
results are presented below.
Y
i
= 0.5132+0.2618Y
i−1
+0.161Y
i−2
−0.00432Y
i−3
+0.0273W
i−1
(0.125) (0.074) (0.073) (0.0738) (0.075)
p value 0.0001 0.0005 0.0276 0.953 0.719
Hansen 0.0370 0.1380 0.2380 0.234 0.128
+0.0487W
i−2
−0.076W
i−3
−0.0879u
i−1
+
Y
ˆ v
i
(0.075) (0.065) (0.022)
p value 0.519 0.242 0.0001
Hansen 0.065 0.098 0.025 (for all coefﬁcients 2.316) (16)
¯
R
2
= 0.167, s = 0.009, DWd = 1.98, F
(7,189)
= 6.65,
Condition number (CN) = 635.34.
(p value = 0.0)
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 215
We observe that the estimated adjustment coefﬁcient (−0.0879) is
signiﬁcant and almost identical to the (1, 1) element of matrix A, as seen
in Eq. (9), which is computed by the ML method. According to Hansen
statistics, all coefﬁcients seem to be stable for α = 0.05. The BG test
revealed that no autocorrelation problems exist. In models like the one seen
in Eq. (16), the term u
i−1
usually produces the smallest Spearman’s cor
relation coefﬁcient (r
s
). In this particular case, we have: r
s
= −0.0979,
t = −1.37, p = 0.172, which means that no heteroscedasticity problems
exist. Finally, the RESET test shows that there is no any speciﬁcation error.
In Eq. (16), we have an inﬂated CN due to spurious multicollinear
ity, which is veriﬁed from the revised CN, having the value of 3.42 and
as Lazaridis (2007) has shown, this means that, in fact, there is not any
serious multicollinearity problem, affecting the reliability of the estimation
results. In many applications, particularly when the variables are in logs,
then usually the value of this statistic (CN) is extremely high, which in most
cases, as stated above, is due to the spurious multicollinearity, which can be
revealed by computing the revised CN, which is not widely known and it
seems that this is the main reason, for not reporting this statistic in relevant
applications.
To test the null, H
0
: β
1
= β
2
= β
3
= 0, where β
j
are the coefﬁcients of
W
i−j
(j = 1, 2, 3), we computed the Fstatistic that is F
(3,189)
= 0.533
(p value = 0.66). Hence, the null is accepted, which means that there is not
any shortrun causality effect fromW to Y, but only in the levels, i.e., in
the long run, W causes Y. Observing the value of the adjustment coefﬁcient
(−0.0879), I may conclude that it gives a satisfactory percentage, regarding
the money income convergence towards a longrun equilibrium.
11
4.6. Dynamic Forecasting with an ECVAR
To complete the ECVAR model, it is necessary to estimate a shortrun rela
tion for W. Thus, we consider an equation which is similar to Eq. (15) i.e.,
W
i
= b
0
+
3
j=1
a
j
Y
i−j
+
3
j=1
b
j
W
i−j
−a
W
u
i−1
+
W
v
i
. (17)
11
It is already mentioned that if the disequilibrium errors are computed from Eq. (13),
instead of Eq. (12), then the estimated coefﬁcient of adjustment will have a positive sign.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
216 Athanasios Athanasenas
The estimation results are as follows
12
:
W
i
= 0.2241+0.0977Y
i−1
+0.2256Y
i−2
+0.199Y
i−3
+0.4864W
i−1
(0.125) (0.0733) (0.072) (0.073) (0.075)
p value 0.075 0.183 0.002 0.0072 0.000
Hansen 0.075 0.181 0.041 0.7130 0.045
+0.0123W
i−2
+0.068W
i−3
−0.03852u
i−1
+
W
ˆ v
i
(0.87) (0.29) (0.0216)
p value 0.519 0.242 0.076
Hansen 0.062 0.181 0.075 (for all coefﬁcients 2.268)
¯
R
2
= 0.541, s = 0.009, DWd = 1.97,
F
(7,189)
= 34.0(p value = 0.0), CN = 635.4, Revised CN = 3.42.
(18)
Presumably, we are on the right way, since the estimated adjustment co
efﬁcient is almost the same as the one computed by the ML method as in
the previous case. This is the (2, 1) element of matrix A as seen in Eq. (9).
Note that this coefﬁcient is signiﬁcant for α ≥ 0.08, which means that there
is a rather weak causality effect from Y to W in the long run.
To test the null H
0
: a
1
= a
2
= a
3
= 0, where the coefﬁcients
a
j
(j = 1, 2, 3) as seen in Eq. (17), we compute the relevant Fstatistic,
i.e., F
(3,189)
= 8.42 (pvalue = 0.0). This implies that indeed there is a
causality effect in the short run, from Y to W.
Considering the complete ECVAR, i.e., Eqs. (16) and (18), Eq. (10a) is
used to compute the series {u
i
}, together with some trivial identities, I form
the following system:
Y
i
=
ˆ
β
0
+
3
j=1
ˆ α
j
Y
i−j
+
3
j=1
ˆ
β
j
W
i−j
− ˆ a
Y
u
i−1
+
Y
ˆ v
i
W
i
=
ˆ
b
0
+
3
j=1
ˆ a
j
Y
i−j
+
3
j=1
ˆ
b
j
W
i−j
− ˆ a
W
u
i−1
+
W
ˆ v
i
12
According to Hansen statistics, all coefﬁcients seem to be stable for α = 0.05.
The Spearman’s correlation coefﬁcient (r
s
) regarding the term u
i−1
, is: r
s
= −0.075,
t = −1.051, p = 0.294, which means that we do not have to bother about heteroscedas
ticity problems. Also the revised CN indicates that no multicollinearity problems exist, in
order to affect the reliability of the estimation results.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 217
Y
i
= Y
i
+Y
i−1
W
i
= W
i
+W
i−1
u
i
= Y
i
−0.289613W
i
−0.0036218t
i
.
(19)
It may be useful to note that there are three identities in the system
and the only exogenous variable present in the system is the trend. We may
obtain dynamic simulation results from the ECVAR speciﬁed in Eq. (19),
in order to obtain predictions for Y
i
and W
i
and at the same time to verify
the validity of the cointegration vector used. This can be achieved by ﬁrst
formulating the following deterministic system.
K
0
y
i
= K
1
y
i−1
+K
2
y
i−2
+K
3
y
i−3
+
˜
Dq
i
(20)
where y
i
= [ϒ
i
W
i
u
i
Y
i
W
i
]
, q
i
= [t
i
1]
K
0
=
_
_
_
_
_
_
1 0 0 0 0
0 1 0 0 0
0 0 1 −1 0.289613
−1 0 0 1 0
0 −1 0 0 1
_
¸
¸
¸
¸
_
,
K
1
=
_
_
_
_
_
_
0.2618 0.0273 −0.0879 0 0
0.0977 0.4864 −0.03852 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 0 1
_
¸
¸
¸
¸
_
,
K
2
=
_
_
_
_
_
_
0.161 0.0487 0 0 0
0.2256 0.0123 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
_
¸
¸
¸
¸
_
,
K
3
=
_
_
_
_
_
_
−0.00432 0.076 0 0 0
0.199 −0.068 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
_
¸
¸
¸
¸
_
,
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
218 Athanasios Athanasenas
and
˜
D =
_
_
_
_
_
_
0 0.5132
0 0.2241
−0.003621 0
0 0
0 0
_
¸
¸
¸
¸
_
.
Premultiplying Eq. (20) by K
−1
0
we get:
y
i
= Q
1
y
i−1
+Q
2
y
i−2
+Q
3
y
i−3
+Dq
i
(21)
where Q
1
= K
−1
0
K
1
, Q
2
= K
−1
0
K
2
, Q
3
= K
−1
0
K
3
, and D = K
−1
0
˜
D.
The system given in Eq. (21) can be transformed to an equivalent ﬁrst
order dynamic system, in a similar way to the one already described earlier
[see Eqs. (8) and (8a)]. It is noted that in this case matrix
˜
A is of dimension
(15 × 15). It is important to mention again that we should avoid using
a dynamic system, for simulation purposes, that is unstable. Thus, from
this system, we can obtain dynamic simulation results for the variables Y
i
and W
i
. These results are graphically presented (Fig. 2).
The very low value of Theil’s inequality coefﬁcient U, for both cases
is a pronounced evidence that, apart from computing the indicated co
integration vector, we have also formulated the suitable ECVAR so that the
results obtained are undoubtfully robust.
5. Conclusions and Implications
The purpose of this study is to contribute to the empirical investigation of
the cointegration dynamics of the creditincome nexus, within the eco
nomic growth process of the postwar US economy, over the period from
1957 to 2007.
Utilizing advanced and contemporary cointegration analysis and
applying vector ECMestimation, we place special emphasis on forecasting
and system stability analysis. I can say that to the best of my knowledge,
similar system stability and forecasting analysis, as the one applied here,
is very difﬁcult to meet in the relevant literature, after taking into consider
ation similar research works. See for example, (Arestis and Demetriades,
1997; Arestis et al., 2001; Demetriades and Hussein, 1996; Friedman and
Kuttner, 1992; 1993; Levine and Zervos, 1998; Rousseau and Wachtel,
1998; 2000).
My results state clearly that there is no shortrun causality effect from
credit changes to income changes, but only in the levels, that is in the long
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 219
Figure 2. Results of dynamic simulations.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
220 Athanasios Athanasenas
run, credit causes money income. From my cointegration analysis, I found
that the adjustment coefﬁcient is signiﬁcant and its value (−0.0879), gives a
satisfactory percentage, regarding the money income convergence towards
a longrun equilibrium.
Moreover, we observe a causality effect from income changes to credit
changes, in the short run for the postwar US economy. The validity of
the credit view theorists seems rather evident, by taking into account the
contemporary cointegration and system stability approaches.
Further, in terms of reasonable research implications, following our
contemporary cointegration and system stability approaches, we may
stress the need for an indepth investigation of the credit cycle and ﬂuc
tuations in commercial credit standards.
13
References
Angeloni, I, KA Kashyap and B Mojon (2003). Monetary Policy Transmission in the Euro
Area. Part 4, Chap. 24 NY: Cambridge University Press.
Arestis, PandPDemetriades (1997). Financial development andeconomic growth: assessing
the evidence. The Economic Journal 107, 783–799.
Arestis, P, PO Demetriades and KB Luintel (2001). Financial development and economic
growth: the results of stock markets. Journal of Money, Credit and Banking 33(1),
16–41.
Bernanke, SB (1983). Nonmonetary effects of the ﬁnancial crisis in the propagation of the
great depression. American Economic Review 73(3), 257–276.
Bernanke, SB (1988). Monetary policy transmission: through money or credit? Business
Review, November/December, 3–11 (Federal Reserve Bank of Philadelphia).
Bernanke, SB and SA Blinder (1988). Credit, money, and aggregate demand. The American
Economic Review, Papers and Proceedings 78(2), 435–439.
Bernanke, SB and SA Blinder (1992). The federal funds rate and the channels of monetary
transmission. The American Economic Review 82(4), 901–921.
Bernanke, SB and M Gertler (1995). Inside the black box: the credit channel of monetary
transmission. The Journal of Economic Perspectives 9(4), 27–48.
Bernanke, SB and H James (1991). The gold standard, deﬂation, and ﬁnancial crisis in the
great depression: an International comparison. In Financial Markets and Financial
Crises, RG Hubbard (ed.), Chicago: University of Chicago Press for NBER, 33–68.
Brunner, Kand HAMeltzer (1988). Money and credit in the monetary transmission process.
American Economic Review 78, 446–451.
Demetriades, PO and K Hussein (1996). Does ﬁnancial development cause economic
growth? Journal of Development Economics 51, 387–411.
13
See, for example, Lown and Morgan (2006), for parallel research on commercial credit
standards, following “traditional” VAR analysis.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
Credit and Income 221
Dickey, DA and WA Fuller (1979). Distribution of the estimators for autoregressive
time series with a unit root. Journal of the American Statistical Association 74,
427–431.
Dickey, DA and WA Fuller (1981). Likelihood ratio statistics for autoregressive time series
with a unit root. Econometrica 49, 1057–1072.
Engle, RF and CWJ Granger (1987). Cointegration and errorcorrection: representation,
estimation, and testing. Econometrica 55, 251–276.
Friedman, M(1961). The lag in the effect of monetary policy. Journal of Political Economy,
October 69, 447–466.
Friedman, M (1964). The monetary studies of the national bureau. National Bureau of Eco
nomic Research, Annual Report. NewYork: National Bureau of Economic Research,
1–234.
Friedman, Band KNKuttner (1992). Money, income, prices and interest rates. The American
Economic Review 82(3), 472–492.
Friedman, Band KNKuttner (1993). Another look at the evidence of moneyincome causal
ity. Journal of Econometrics 57, 189–203.
Friedman, MandASchwartz (1963). Money and business cycles. Reviewof Economics and
Statistics, February, (suppl. 45), 32–64.
Gertler, M and S Gilchrist (1993). The role of credit market imperfections in the monetary
transmissionmechanism: arguments andevidence. ScandinavianJournal of Economics
95(1), 43–64.
Goldsmith, RW (1969). Financial structure and development. New Haven, CT: Yale
University Press.
Granger, CWJ and P Newbold (1974). Spurious regressions in econometric model speciﬁ
cation. Journal of Econometrics 2, 111–120.
Hamilton, JD (1994). Time Series Analysis. Princeton: Princeton University Press.
Harris, R(1995). Using Cointegration Analysis in Econometric Modeling. London: Prentice
Hall.
Holden, D and R Perman (1994). Unit roots and cointegration for the economists. In Coin
tegration for the Applied Economist, BB Rao (Ed.), UK: St. Martin’s Press, 47–112.
Johansen, S (1995). LikelihoodBased Inference in Cointegrating Vector Autoregressive
Models. NewYork: Oxford University Press.
Johansen, S and K Juselius (1990). Maximum likelihood estimation and inference on coin
tegration with application to the demand for money. Oxford Bulletin of Economics and
Statistics 52, 169–210.
Kashyap, KA and CJ Stein (1994). Monetary policy and bank lending. NBER Studies in
Business Cycles 29, 221–256.
Kashyap, KA and CJ Stein (2000). What do a million observations on banks say about the
transmission of monetary policy? The American Economic Review 90(3), 407–428.
Kashyap, KA, AO Lamont and CJ Stein (1994). Credit conditions and the cyclical behavior
of inventories. The Quarterly Journal of Economics 109(3), 565–592.
King, GRand RLevine (1993). Finance, entrepreneurship and growth: theory and evidence.
Journal of Monetary Economics 32, 1–30.
Lazaridis, A (2007). A note regarding the condition number: the case of spurious and latent
multicollinearity. Quality & Quantity 41(1), 123–135.
Levine, R and S Zervos (1998). Stock markets, banks, and economic growth. American
Economic Review 88, 537–558.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692ch05topic2
222 Athanasios Athanasenas
Lown, C and DP Morgan (2006). The credit cycle and the business cycle: new ﬁndings
using the loan ofﬁcer opinion survey. Journal of Money, Credit, and Banking 38(6),
1575–1597.
MacKinnon, J (1991). Critical values for cointegration tests. In LongRun Economic Rela
tionships, RF Engle and CWJ Granger (eds.), pp. 267–276. Oxford: Oxford University
Press.
Mckinnon, RL (1973). Money and Capital in Economic Development. Washington, DC:
Brookings Institution.
Meltzer, HA (1995). Monetary, credit and (other) transmission processes: a monetarist per
spective. The Journal of Economic Perspectives 9(4), 49–72.
Mishkin, SF (1995). Symposium on the monetary transmission mechanism. The Journal of
Economic Perspectives 9(4), 3–10.
Mouza, AM and D Paschaloudis (2007). Advertisement and sales: a contemporary cointe
gration analysis. Journal of Academy of Business and Economics 7(2), 83–95.
Murdock, K and J Stiglitz (1997). Financial restraint: towards a new paradigm. Journal of
Monetary Economics 35, 275–301.
Rousseau, PL and P Wachtel (1998). Financial intermediation and economic performance:
historical evidence from ﬁve industrialized countries. Journal of Money, Credit and
Banking 30, 657–678.
Rousseau, PL and P Wachtel (2000). Equity markets and growth: crosscountry evidence on
timing and outcomes, 1980–1995. Journal of Banking and Finance 24, 1933–1957.
Shaw, ES (1973). Financial Deepening in Economic Development. New York: Oxford
University Press.
Sims, CA (1972). Money, income, and causality. The American Economic Review 62(4),
540–552.
Sims, CA (1980a). Comparison of interwar and postwar business cycles. The American
Economic Review 70, 250–277.
Sims, CA (1980b). Macroeconomics and reality. Econometrica 48, 1–48.
Stiglitz, EJ (1992). Capital markets and economic ﬂuctuations in capitalist economies.
European Economic Review 36, 269–306.
Stock, JM and MW Watson (1988). Testing for common trends. Journal of the American
Statistical Association 83, 1097–1107.
Stock, J and MW Watson (1989). Interpreting the evidence on moneyoutput causality.
Journal of Econometrics, January, 161–181.
Swanson, N(1998). Money and output viewed through rolling window. Journal of Monetary
Economics 41, 455–473.
Tobin, J (1970). Money and income: post hoc ergo propter hoc? Quarterly Journal of
Economics 84, 301–329.
Trautwein, M (2000). The credit view, old and new. Journal of Economic Surveys 14(2)
155–189.
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692Index
INDEX
adaptive control, 44, 49, 57, 59
asymptotic covariance, 46
autocorrelation, 35, 36, 40, 182, 184, 185,
187, 188, 191, 212, 215
automatic stabilizer, 70, 71, 74, 76, 79, 80,
96, 98
balance of payments, 44, 49, 53
bank credit, 202–205
Bank of England, 69, 70, 74, 76–78, 90
British ﬁscal policy, 69, 73, 81
budget stability, 74
business cycle, 9, 202
cashbalance, 52
causality, 201, 202, 204, 205, 214–216,
218, 220
Central Bank objective function, 85
Cholesky’s factorization, 24
Chow stability, 184
cointegration, 23, 25, 28–32, 34, 41, 201,
204–206, 208, 209, 212, 213, 217, 218,
220
collectivism, 143
conformability, 26, 208
conformity, 140, 143
corporate culture, 139, 145
credit and income, 201, 204
credit demand, 202, 203
critical value, 30, 33, 38–40, 188, 189, 212
current deﬁcits, 79, 83
CUSUM, 185, 186, 188–191
cyclical stability, 74
Czech Republic, 173–177, 181–183,
189–193, 196, 197, 199
debt ratio, 72, 78, 92, 96, 99
decision orientation, 132
decomposition, 26, 27
dependent central bank, 88, 90, 94
depression, 9, 202, 203
deterministic, 25, 41, 209, 212, 213, 217
devaluation, 57, 176, 177
diagnostic, 185–188, 191, 197
Dickey–Fuller tests, 30, 206
discount rate, 52, 57–59
discretionary ﬁscal policy, 82
discretionary tax revenues, 82
disequilibrium, 25, 28, 208, 212–215
domestic absorption, 49, 50, 53, 59
dynamic response multiplier, 57
Eastern European Countries, 173
econometrics, 6, 9, 204
ECVAR, 23, 25, 201, 204, 206–209, 213,
215–218
Eigen values, 24–26, 211
emissions, 101, 103–106
enterprise integration, 121, 125–128
enterprise integration modeling, 121
enterprise modeling, 121–124, 132
environmental economics, 101–103
environmental taxation, 101, 103
ERM, 176, 193
error covariance matrix, 48
errorcorrection, 23, 201, 205, 213, 214
European Commission, 69, 79, 98
Eurozone, 69, 72, 77–81, 88, 93–95
exante responses, 76
exchange rate, 50, 53, 58, 60, 74,
173–178, 180–183, 185, 187–193, 197
exclusivity, 143
feasibility region, 3, 5
feasible set, 16
feedback rule, 77
223
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692Index
224 Index
ﬁscal expansions, 91, 92
ﬁscal leadership, 69, 71, 73, 76–78, 81,
85, 89–97, 99
ﬁxed parity, 176
function oriented enterprise, 128
functional entities, 122–124, 126
fundbank adjustment policy model, 49
Gaussian, 44, 62
Germany, 93, 94, 99, 174, 176, 181–187,
190, 194, 197
goal programming, 151, 155, 156, 161,
168
Hamiltonian, 111
health care unit, 168
health service management, 151
heteroscedasticity, 35, 37, 39, 184, 185,
194, 212, 215, 216
HouRenSou, 140, 143, 144
human resources practices, 130, 142, 144
human ware, 141
hybrid modeling constructs, 130
imitationtype dynamics, 103, 106
independent monetary policies, 71, 75
Indian economy, 49, 50
inﬂation, 69, 70, 72–74, 76–80, 82, 84, 85,
88–96, 99, 153, 164, 173–179,
181–183, 185–193
inﬂation control, 70, 73, 79, 92, 173, 174,
176, 177, 191, 192
inputoutput, 6–9, 11, 12, 14
institutional parameters, 76, 86, 96
integration challenges, 121
intertemporal optimization problem, 107
interactive voice response, 153
interest rate, 38, 50–52, 57–60, 75, 77–81,
176, 178, 179, 181, 191, 192
Japanese national culture, 137, 139, 140,
147
Japanese Organizational culture and
corporate performance, 137
JaqueBera criterion, 29
just in time, 144
Kaizen, 140–142
Kroneker product, 46
leadership, 69–73, 76–79, 81, 85, 86,
89–97, 99, 103, 130, 137, 139, 142,
144, 145
lean production system, 141, 142
linear constraints, 3, 10
linear programming, 3, 4, 6–8, 10, 14
liquidity, 203
LISREL, 145
Maastricht criteria, 173, 176, 177
macro values, 138, 143
management support, 131
mathematical programming, 3, 156
maximization, 7, 82, 112, 151
maximum likelihood, 24, 49, 205
meso values, 138
metaanalysis, 117
micro values, 138, 143
minimum variance, 25, 31–33, 41
ML method, 24, 25, 28, 31, 32, 34, 40,
208, 211, 215, 216
modeling method, 124
monetary and ﬁscal policy, 43, 44, 49, 53,
56–59, 82, 84, 93
monetary policy, 43, 44, 49, 53, 56–59,
69–82, 84, 85, 91–95, 98, 102, 103,
173, 174, 176, 178, 179, 185, 186, 188,
190, 192, 202–204
multicollinearity, 215, 216
multi functional team, 142
multiple equilibria, 101, 112, 113
Nash equilibrium, 81
Nash solution, 70, 102, 107, 110
New Cambridge model, 50
new constraint, 161, 162
new credit, 202
nonbelievers, 103, 105–107, 117
object request brokers, 126
old credit, 202
OLS, 23, 25, 28–32, 183, 184, 213
optimal control, 45, 46, 50, 55
March 16, 2009 16:52 WSPC/Trim Size: 9in x 6in SPIB692 Economic Models b692Index
Index 225
optimal currency area, 175
optimization, 3, 4, 7, 16, 44, 49, 91, 107,
110–112
organizational culture, 130, 137–139, 142,
144
output gap, 72, 77–82, 179–183, 187, 189,
191
Paretoimprovement, 110
Paretoinferior, 116
Phillips curve, 87, 98, 178
Poland, 173–177, 181, 182, 185, 187, 188,
190–193, 195, 197, 198
primary deﬁcit, 72, 79
procyclicality, 85, 87
product modeling, 122
productivity, 175
pseudoinverse, 26
public sector deﬁcit, 84
quadratic function, 10
quality circle, 141
rational expectations, 44, 86, 179
reduced form, 44, 46, 48, 58, 59, 82, 84,
179
reduced form coefﬁcients, 44, 46, 48
regulator, 76, 101–111, 113–117
regulator’s trustworthiness, 115
residuals, 24, 30, 35, 36, 39, 54, 179, 184,
185, 187, 188, 191, 194
resource information, 122
Ricattiequations, 45
robustness, 55
sequential Nash game, 103, 107
shortrun stabilization, 70, 76, 80
simplex algorithm, 6, 15
singular value, 26–28, 31, 33
Skiba point, 112, 115, 116
Slovenia, 173–177, 181–183, 185,
190–193, 196, 197, 200
stability, 11, 54, 55, 57, 74, 76, 80–84, 95,
113, 144, 176, 178, 184, 185, 191, 193,
201, 202, 204–206, 210, 211, 214, 218,
220
stabilization policy, 43, 71
standard deviation, 30, 32–34
state variable form, 49, 50
statically optimal, 109, 113
steadystate, 82, 83
structural deﬁcit, 79, 80
structural model, 44, 49, 178
symmetric stabilization, 75
tax revenue, 50, 51, 57, 58, 60, 82, 83, 95,
96, 104, 106
Theil’s inequality, 206, 218
time varying responses, 43
timeinvariant econometric model, 47
timevarying model, 44, 47
TQM, 141, 142, 144
transition, 45, 50, 54, 173–176, 178–180,
185, 186, 188, 189, 191, 192
transmission of monetary policy, 173, 178,
179, 190, 192, 203
UK government, 69
underutilization, 158, 160
unstable middle equilibrium, 115
updating method, 44, 46
US economy, 201, 202, 204, 218, 220
VAR, 23, 25, 27, 28, 32, 44, 178, 201, 205,
207, 213, 220
ECONOMIC MODELS
Methods, Theory and Applications
This page intentionally left blank
ECONOMIC MODELS
Methods, Theory and Applications
editor
Dipak Basu
Nagasaki University, Japan
World Scientific
NEW JERSEY
•
LONDON
•
SINGAPORE
•
BEIJING
•
SHANGHAI
•
HONG KONG
•
TA I P E I
•
CHENNAI
Covent Garden. ISBN13 9789812836458 ISBN10 9812836454 Typeset by Stallion Press Email: enquiries@stallionpress. including photocopying. Singapore 596224 USA office: 27 Warren Street. without written permission from the Publisher. electronic or mechanical. Ltd. 222 Rosewood Drive. Suite 401402. USA. Ltd. MA 01923. For photocopying of material in this volume. All rights reserved. or parts thereof. Hackensack. In this case permission to photocopy is not required from the publisher. Inc.Published by World Scientific Publishing Co. . Pte. please pay a copying fee through the Copyright Clearance Center. London WC2H 9HE British Library CataloguinginPublication Data A catalogue record for this book is available from the British Library. Theory and Applications Copyright © 2009 by World Scientific Publishing Co.com Printed in Singapore.. Danvers. NJ 07601 UK office: 57 Shelton Street. This book. may not be reproduced in any form or by any means. recording or any information storage and retrieval system now known or to be invented. ECONOMIC MODELS Methods. 5 Toh Tuck Link. Pte.
Tom Oskar Martin Kronsjo .This Volume is Dedicated to the Memory of Prof.
This page intentionally left blank .
Basu and Alexis Lazaridis Chapter 3: Mathematical Modeling in Macroeconomics Topic 1: The Advantages of Fiscal Leadership in an Economy with Independent Monetary Policies by Andrew Hughes Hallet Topic 2: CheapTalk Multiple Equilibria and Pareto — Improvement in an Environmental Taxation Games ˇ cı by Chirstophe Deissenberg and Pavel Sevˇ´k vii ix xiii xv xix 1 3 21 23 43 67 69 101 .Contents Tom Oskar Martin Kronsjo: A Proﬁle About the Editor Contributors Introduction Chapter 1: Methods of Modelling: Mathematical Programming Topic 1: Some Unresolved Problems of Mathematical Programming by Olav Bjerkholt Chapter 2: Methods of Modeling: Econometrics and Adaptive Control System Topic 1: A Novel Method of Estimation Under CoIntegration by Alexis Lazaridis Topic 2: Time Varying Responses of Output to Monetary and Fiscal Policy by Dipak R.
viii Contents Chapter 4: Modeling Business Organization Topic 1: Enterprise Modeling and Integration: Review and New Directions by Nikitas Spiros Koutsoukis Topic 2: Toward a Theory of Japanese Organizational Culture and Corporate Performance by Victoria Miroshnik Topic 3: Health Service Management Using Goal Programming by AnnaMaria Mouza Chapter 5: Modeling National Economies Topic 1: Inﬂation Control in Central and Eastern European Countries by Fabrizio Iacone and Renzo Orsi Topic 2: Credit and Income: CoIntegration Dynamics of the US Economy by Athanasios Athanasenas Index 119 121 137 151 171 173 201 223 .
The young people traveled by coach through the austere postSecond World War Europe. Tom Kronsjo’s university years were spent at both sides of the great divide. he organized and chaired a National Science Society that became very popular with his school colleagues. In the summer vacation of 1949. From an early age. often a leader among his school contemporaries and friends. Sweden. Germany. He put these studies to a good test during a 1948 summer vacation when he hitchhiked through Denmark. which would encourage and support mutual understanding of the needs of the people to live in peace. England Tom Kronsjo was born in Stockholm. During his school years. Tom Kronsjo’s mother encouraged him to learn foreign languages. eager to learn about the world. who were impressed by an intelligent. their ages being 14–24. Preparing himself for a career in international economics. on 16 August 1932. he wanted to experience and see for himself the different nations’ ways of life and points of view. France. England. Sweden.Tom Oskar Martin Kronsjo: A Proﬁle Lydia Kronsjo University of Birmingham. In his school days Tom Kronsjo was a bright. Tom Kronsjo used to mention that as a particularly personally exciting experience of his school years he remembered volunteering to give a series of morning school sermons on the need for the world governmental organizations. were donated by sympathetic individuals and companies. before returning to his home country. This event caught a wide interest in the Swedish press at that time and was vividly reported. Ireland. and Switzerland. Both his parents were gymnasium teachers. enthusiastic group of youngsters. and Moscow. Warsaw. Belgrade. Scotland. the only child of Elvira and Erik Kronsjo. Tom Kronsjo organized an expedition to North Africa by seven members of the technical society. All the equipment for travel. the Netherlands. He studied in London. The same year Tom Kronsjo founded and chaired a technical society in his school. Oslo. including the coach. to pass examinations and ix . popular student.
the University of Birmingham. Tom Kronsjo was invited by Professor Ragnar Frish. industrialists. The courses. By then. mathematics and Slavonic Languages. Tom Kronsjo was one of a new breed of Western social scientists that applied rigorous mathematical and computerbased modeling to economics. In Birmingham. Tom Kronsjo has always had a strong interest in educational innovations. A few years later. English. stimulating his students intellectually. and economic advisors. In 1963. later Nobel prize winner in econometrics. He was among the ﬁrst professors at the university to introduce a televisionbased teaching. By then Tom Kronsjo had published over a hundred research papers. SerboCroatian. Tom Kronsjo was appointed Professor of Economic Planning. In 1964.x Tom Oskar Martin Kronsjo submit required work at the Universities of Uppsala. together with Professor R. was ﬂuent in six languages — German. attracted many talented students from all over the world.kand. and by the Polish Ministry of Foreign Trade. As a postgraduate student in economics. unlocking wide opportunities for his . French. Davies Tom Kronsjo developed diploma. Stockholm and Lund for his Fil. to work under him and Dr. They married in Moscow in February 1962. degree in economics. Salah Hamid in Egypt as a visiting Assistant Professor in Operations Research at the UAR Institute of National Planning. He and his wife moved to the United Kingdom. Tom Kronsjo met his wife while studying in Moscow. Tom Kronsjo besides his mother tongue — Swedish. statistics.W. and Fil. offered from 1967. Polish. He was a visiting graduate student at the Moscow State University and she a graduate student with the Academy of Sciences of the USSR. During this period. and Russian.lic. and master of social sciences courses in national economic planning.K. a very happy event in the family life was the arrival of their adoptive son from Korea in October 1973. Tom was awarded a number of scholarships for study in Sweden and abroad. U. degree in economics. Many of these students then stayed with Tom Kronsjo as research students and many of these people are today university professors. at the age of 36. making it possible to widen the scope of material taught. Tom Kronsjo was generous towards his students with his research ideas. Tom Kronsjo was appointed an Associate Professor in Econometrics and Social statistics at the Faculty of Commerce and Social Sciences. His pioneering papers on optimization of foreign trade were soon considered classic and brought invitations to work as a guest researcher by the GDR Ministry of Foreign Trade and Inner German Trade. In 1969.
and implementing efforts made it possible to carry out calculations involving millions of pieces of information on trade and tariffs.R. Tom Kronsjo was a Distinguished Visiting Professor of Economics. W. In 1974. USA. at San Diego State University. Noboru Kawanabe. He had the ability to see the possibilities in any situation and taught his students to use them. ways out of the arms race. In collaboration with three other scientists. A monograph on “Trade. Purdue.C. permitting the Brookings model to become the most sophisticated and comprehensive model available at the time for investigation into effects of the current multilateral trade negotiations. war and peace. both of the Warsaw University. Tom Kronsjo. Krynicki. He published papers on unemployment. He developed important methods of decomposition of large economic systems. he was engaged in the project entitled “Trade and Employment Effects of the Multilateral Trade Negotiations”. These results were published in 1972 in a book “Economics of Foreign Trade” written jointly by Tom Kronsjo. Tom Kronsjo’s research interests were wide and multifaceted. Williams was published in 1978. and nuclear disarmament. One area of his interest was planning and management at various levels of the national economy. Zawada. and T. From time to time. Tom Kronsjo conceived and designed the means for implementing. Tom Kronsjo was invited as a Visiting Professor of Economics and Industrial Administration at Purdue University. Tom Kronsjo found himself as a Senior Fellow in the Foreign Policy Studies Program of the Brookings Institution. Washington. and Professor J. Cline. San Diego. In the year 1975. Tom Kronsjo’s ingenious. In the words of his colleagues. Welfare. tireless.. he was invited to nominate a candidate or candidates for the Nobel prize in economics.Tom Oskar Martin Kronsjo xi students through this generosity. which were considered by his contemporaries a major contribution to the theory of mathematical planning and programming. In addition. EastWest joint enterprises and world development. Tom Kronsjo made an indispensable contribution to the project. and Employment Effects of Trade Negotiations in the Tokyo Round” by Dr. Tom Kronsjo served as one of the ﬁve examiners for one of the Nobel prizes in economic sciences. perhaps the most imaginative portion of the project. and published in Poland. Dr. Dr. Z. D. USA. work incentives. conceptual. . In 1972. Tom Kronsjo’s general interests have been in the ﬁeld of conﬂicts and cooperation. the calculation of “optimal” tariff cutting formulas using linear programming and taking account of balance of payments and other constraints on trade liberalization. USA.
xii Tom Oskar Martin Kronsjo Tom Kronsjo took early retirement from the University of Birmingham in 1988 and devoted subsequent years working with young researchers from China. one of those people of whom we say “he is a man larger than life”. of Buddhist philosophy. Estonia. Tom Oskar Martin Kronsjo died in June 2005 at the age of 72. Poland. yoga. He was an exceptional individual who had the incredible vision and an ability to foresee new trends. who constantly looked for ways to bring peace to the world. of aviation and ﬂying. He was a man of great charisma. and other countries. He learned to ﬂy in the United States and held a pilot license since 1972. and interspace communication. Russia. . Tom Kronsjo was a great enthusiast of vegetarianism and veganism. Tom Kronsjo was a man who lived his life to the fullest.
Saudi Arabia and Senior Economist in charge of Middle East & Africa Division of Standard & Poor (Data Resources Inc). He has obtained his Ph. He has published six books and more than 60 research papers in leading academic journals. England and did his postdoctoral research in Cambridge University. xiii . Dipak Basu is currently a Professor in International Economics in Nagasaki University in Japan.About the Editor Prof. Senior Economist in the Ministry of Foreign Affairs. He was previously a Lecturer in Oxford University — Institute of Agricultural Economics.D. from the University of Birmingham.
This page intentionally left blank .
Previously he was a Professor of Economics at Vanderbilt University. He was a FullBright scholar. xv . was previously the Head of the Research at the Norwegian Statistical Ofﬁce and an Assistant Professor of Energy Economics at the University of Oslo. He was educated in Colorado and Virginia Universities and received his Ph. Free University of Berlin. University of Oslo. holds a doctorate from Oxford University. He was a Visiting Professor in Princeton University. a member of the editorial board of Computational Economics and of Computational Management Science. Frankfurt.D. Rome. He is a member of the Advisory Board of the Society for Computational Economics and a member of the Management Committee of the ESF action COSTP10–Physics of Risk. at Universit´ de la e M´ diterran´ e and GREQAM. Scotland. Serres. USA. Institute of Technology and Education. He was a Visiting Professor in the Massachusetts Institute of Technology and was a consultant to the United Nations and the IMF. He is a coeditor of the Journal of e e Optimization Theory and Applications and of Macroeconomic Dynamics. Paris X and in the Copenhagen Business School. Professor in Economics. Greece.Contributors Athanasios Athanasenas is an Assistant Professor of Operations Research & Economic Development in the School of Administration & Economics.D. France. from the University of Oslo. and a team leader of the European STREP project EURACE. Universities of Warwick. He has obtained his Ph. He was educated in the University of Warwick and London School of Economics. Christophe Deissenberg is a Professor in Economics. from the University of Minnesota. Andrew Hughes Hallett is a Professor of Economics and Public Policy in the School of Public Policy at George Mason University. He is a Fellow of the Royal Society of Edinburgh. and the University of Strathclyde in Glasgow. Olav Bjerkholt.
he was conferred an honorary doctoral degree by the President of the Republic of Cyprus. Greece. UK. He has published several books.xvi Contributors Fabrizio Iacone obtained a Ph. in industrial management from the National Technical University of Athens.D. Greece. He was a member of the Administrative Board of the Organization of Mediation and Arbitration. Democritus University. .D. Alexis Lazaridis. with applications in risk management. Professor in Econometrics. He has obtained his Ph.D. and of the Supreme Labour Council of Greece.U. He holds MSc and Ph. He is interested mainly in decision modeling and governance systems. Aristotle University of Thessaloniki. He has in the past worked as Planning and Logistics Director for multinational and domestic companies and has served as member of project teams in the Greek public sector. Aristotle University of Thessaloniki. England. in Economics at the London School of Economics and is currently a Lecturer in the Department of Economics at the University of York. Greece. business intelligence and operational research based decision support. large number of scientiﬁc papers and has patents on two computer software programs. He is the editor of the International Journal of Decision Sciences. Nikos Konidaris is currently a Ph. in decision modeling from Brunel University (UK). has obtained his Ph. During his term of service as Head of the Economic Department Faculty of Law and Economics.D. He holds an MA degree in Business and Management and a Bachelor’s Degree in Economics & Marketing from Luton University UK. School of Social Sciences.D. student at Hellenic Open University. Risk and Management. Nikitas Spiros Koutsoukis is an Assistant Professor of Decision Modelling and Scientiﬁc Management in the Department of International Economic Relations and Development. He was also pronounced an honorary member of the Centre of International and Economic European Law by the SecretaryGeneral of the Legal Services of E. Athanassios Mihiotis is an Assistant Professor at the School of Social Science of the Hellenic Open University. of the Administrative Board of the Centre of International and European Economic Law. from the University of Birmingham. researching in the ﬁeld of Business Administration. He has already published several articles in leading journals.
has obtained her Ph.D. and Professor of Econometrics in the University of Calabria. is currently an Adam Smith Research Scholar. ˇ cı Pavel Sevˇ´k. from the School of Health Sciences. Canada. He was a Visiting Professor in the Center for Operation Research and Econometrics (CORE). she was an Assistant Professor in the School of Technology Applications of the Technological Institute of Thessaloniki. University of Bologna. He is e e e presently completing his Ph. has obtained his Ph. Italy. from Universit` Catholique de Louvain. He was an Associate Professor in the University of Modena. Technological Educational Institute of Serres. She is the author of two books and many published scientiﬁc papers. Faculty of Law and Social Sciences. Scotland. Greece. Italy. She was educated in Moscow State University and was an Assistant Professor in Tbilisi State University. He e was previously educated in the University of Bologna and in the University of Rome. University of Glasgow. Director of CIDE (the InterUniversitary Center of Econometrics) from 1997 to1999. Greece. Belgium. AnnaMaria Mouza. e e . and from 2000 to 2002. and the Head of Department of Economics. Assistant Professor in Business Administration. Belgium.Contributors xvii Victoria Miroshnik. University of Bologna from 1999 to 2002. Orsi. R.D in the Universil´ de Montr´ al. Nato Senior Fele low. Professor in Econometrics. Georgia. Aristotle University of Thessaloniki. was educated in the Universit´ de la M´ diterran´ e. Universit` Catholique de Louvain. Head of the Department of Economics at the University of Calabria from 1987 to 1989. in the Department of Agriculture and in the Faculty of Health Sciences of the Aristotle University of Thessaloniki.D. She has published two books and a large number of scientiﬁc papers in leading management journals. Previously.
.
with this procedure one can directly detect whether the differencing process produces a stationary series or not.Introduction We deﬁne “model building in economics. This xix . which can provide solution where the standard Simplex method may fail. one can easily accommodate in the cointegrating vectors any deterministic factors. in whose memory we dedicate this volume. Frisch has developed these methods while working on the Second Five Year’s Plan for India and has developed a method. The idea is to develop stylized facts amenable to analytical methods to solve problems of the world. Both these parts are important to analyze and develop policies in any analytical model. although this is not necessarily the case. ﬁrst developed by Ragner Frisch. computational. and analytical. This was the philosophy of Prof Tom Kronsjo. Lazaridis presents a completely new method to compute cointegration vectors. With this method. which are widely used in econometric analysis. Besides. Wherever needed we should not be shy to develop new techniques — whether mathematical or computational. and applications. The articles in this volume are divided into three distinct groups: methods. theory. such as dummies. apart from the constant term and the trend. he has drawn our attention to some methods. In the process. by applying singular value decomposition. In addition. Berjkholt has reviewed the history of development of mathematical programming and its impacts in economic modeling. since it seems to be a common belief that differencing a variable (one or more times) we will always get a stationary series. a comparatively simple procedure is developed for determining the order of integration of a different stationary series. Basu and Lazaridis have proposed a new method of estimation and solution of an econometric model with parameters moving over time. In the method section there are two parts: mathematical programming and computation of cointegrating vectors. widely used in econometric analysis.” as the fruitful area of economics designed to solve realworld problems using all available methods available without distinctions: mathematical. which is similar to that of Karmarkar.
and rationale as well as current tools and methodology for deeper analysis. Deissenberg and Pavel consider a dynamic model of environmental taxation that exhibits time inconsistency.e. The authors provide analysis to show that the response of the economy would be variable i. personal. In the macroeconomic theory section HughesHallet has examined the impacts of ﬁscal policy in a regime with independent monetary authority and how to coordinate public policies in such a regime. In this theoretical model. believers (who take the nonbinding tax announcements made by the regulator at face value). perfect prediction solution suggested in the literature. Mihiotis et al. much has been written and a lot of research has been undertaken on the issue of enterprise modeling and integration. If the regulator is sufﬁciently patient and if the ﬁrms react sufﬁciently fast to the proﬁt differences. The purpose of this discussion is to provide an overview and understanding . the author has discovered some important conclusions that ﬁscal leadership leads to improved outcomes because it implies a degree of coordination and reduced conﬂicts between institutions. Over the last decades. The model produces a novel scheme. not at all ﬁxed over time. There are two categories of ﬁrms. Depending on the initial number of believers. without the central bank having to lose its ability to act independently. which are in transitional phases. Independence of the central bank is a crucial issue for both developed and developing country. This model can be very useful in the analysis of environmental economics. the regulator will use cheaptalk together with actual taxes to steer the economy either to the standard. or to equilibrium with a mixed population of believers and nonbelievers who both make prediction errors.. discussed enterprise modeling and integration challenges. Victoria Miroshnik has formulated a model of Japanese organization and how the superior corporate performances in the Japanese multinational companies are the result of a multilayer structure of cultures.. organizational. and nonbelievers (who make rational but costly predictions of the true regulator’s decisions). how to estimate phenomena which are seemingly unobserved psychological and sociological characteristics. The model goes deep into the analysis of fundamental ingredient of Japanese organizational culture and human resource management system and how these contribute to corporate performance. The proportion of believers and nonbelievers changes over time depending on the relative proﬁts of both groups. multiple equilibria can arise. In the section on theory of business organization. national. The model was applied for the Indian economy to derive monetaryﬁscal policy structure that would respond to the changing environments of the economy.xx Introduction type of model is very realistic for economies.
Anna Maria Mauza in a path breaking research presents a model suitable for an efﬁcient budget management of a health service unit. and Slovenia. In the section for policy analysis. Given the existing empirical research on the creditlending channel and the established relationship between ﬁnancial intermediation and economic growth in general. This is a new application of an innovative technique of cointegration analysis with emphasis on system stability analysis. providing at the same time alternative scenarios considering various socioeconomic factors. The book covers most of the important areas of economics with the basic analytical framework to formulate a logical structure and then suggest and implement methods to quantify the structure to derive applicable policies. She analyzes all the details needed to formulate the proper model. The authors have outlined a framework for advancing enterprise integration modeling based on the stateofthe art techniques. They found that the real exchange rate is the most effective instrument to stabilize inﬂation whereas direct inﬂation control mechanisms may be ineffective in certain cases. These experiments are very useful to design antiinﬂation policies in open economies. the main purpose is to analyze in detail the causal relationship between ﬁnance and growth by focusing on bank credit and income GNP. over the period from 1957 up to 2007. . Athanasios Athanasenas investigated the cointegration dynamics of the credit–income nexus. within the economic growth process of the postwar US economy. credit affects money income. The results show that there is no shortrun effect of credit changes on income changes. We hope the book would be a source of joy for anyone interested to make economics a useful discipline to enhance human welfare rather than being a sterile discourse devoid of reality. Czech Republic. are compatible with the remaining EU member countries.Introduction xxi of the key concepts. in an attempt to satisfy the expectations of the decision maker in the best possible way. in order to successfully apply goal programming. but only in the longrun. in the postwar US economy. by applying goal programming. Iacone and Orsi applied a small macroeconometric model to ascertain if the inﬂation dynamics and controls for Poland.
This page intentionally left blank .
Chapter 1 Methods of Modelling: Mathematical Programming .
This page intentionally left blank .
Topic 1 Some Unresolved Problems of Mathematical Programming Olav Bjerkholt University of Oslo. played a key role in developing the new tool by his discovery in 1947 of the Simplex method for solving LP problems. Introduction Linear programming (LP) emerged in the United States in the early postwar years. Rm S = {x ∈ Rn Ax = b. George B. x ≥ 0} x b∈ and A a (m. The uses and improvements in LP as an optimization tool developed in step with the fast development of electronic computers. The feasibility region of the problem consists of all points fulﬁlling the constraints. took active part in developing and utilizing LP at an early stage. Norway 1. namely Dantzig (1963). Several leading economists. Dantzig. x ≥ 0 } where S is assumed to be bounded with a nonempty interior. Linear programming emerged at the same time as the ﬁrst electronic computers were developed. as the most widespread textbook in LP in the 1960s. Rn . The US Armed Forces continued its support of Dantzig’s LP work. was sponsored by the US Air Force. n) matrix of rank m < n. Linear programming was thus a military product. One may to a considerable degree see the development of LP as a direct result of the mobilization of research efforts during the war. A standard formulation of a LP problem in n variables is as follows: min{c x Ax = b. including a number of future Nobel Laureates. who was employed by the US Armed Forces. 3 . The number where c ∈ of linear constraints is thus m. which soon appeared to have very widespread civilian applications.
The Simplex method will surely survive in textbook presentations and for its historical role. who had a strong background in mathematics and also was very proﬁcient in numerical methods. Frisch was broadly oriented towards macroeconomic policy and planning problems and was highly interested in the promising new optimization tool. which after ﬁnite number of steps ﬁnds the solution to the problem. implying that currently LP problems may have a number of constraints that runs into tens of millions. made the development of solution algorithms for LP problems a part of his research agenda. Dantzig’s Simplex method is an algorithm in both meanings. Let us ﬁrst make a remark on the meaning of algorithms. Frisch.4 Olav Bjerkholt Among these was also Ragnar Frisch. An LP algorithm is a practically executable technique for ﬁnding the solution to the problem. Such an algorithmic procedure can be executed by a computer program. as Dantzig did with the Simplex method for the LP problem. Linear programming has been steadily taken into use for new kinds of problems and the computer development has allowed the problems to be bigger and bigger. an idea that goes back to Alan Turing. The mathematical meaning of “algorithm” in relation to a given type of problem is a procedure. there are surely many such cases. Ragnar Frisch’s work on algorithms is interesting in this perspective and has been given little attention. or determines that there is no solution. which is not a practical algorithm (and indeed also vice versa). In the ﬁrst year after 1950. During the 1950s. Orden (1993) gives the following rough indication. It may on a modest scale be a case of a pioneering effort that was more insightful than considered at the time and thus did not get the attention it deserved. but a more practical one. the number m of constraints in a solvable problem was of order 100 and has grown with a factor of 10 for each decade. The Simplex method has throughout this period been the dominant algorithm for solving LP problems. With regard to the magnitude of LP problems that could be solved at different times with the computer equipment at disposal. But when we talk about algorithms with regard to the LP problem it is not in the mathematical sense. . Frisch invented and tested out various solution algorithms along other lines than the Simplex method for solving LP problems. not least in the textbooks. In the history of science. but for the solution of largescale LP problems it is yielding ground to alternative algorithms. But one may have an algorithm in the mathematical sense. has retained such a position. It is perhaps relatively rare that an algorithm developed at an early stage for new problem.
so to speak. 1984a) and published later the same year in a slightly revised version (Karmarkar. p. Dantzig (1984) gives a beautiful illustration of why such a search for optimality is not viable. in the corners of the feasible region. the answer would still be no. the Simplex method is a most impressive tool by making the just mentioned and even bigger problems practically solvable. He takes a classical assignment problem. The Simplex method is to search for the optimum on the surface of the feasible region. more precisely. in such a way that the optimality criterion improves at each step.Some Unresolved Problems 5 In the mathematical sense of algorithm. 1984b). Then all the corners can be searched for the lowest value of the optimality criterion. there were 1050 earths or 1044 suns all ﬁlled with nanosecond speed computers all programmed in parallel from the time of the big bang until sun grows cold. is such an “interior” .” (Dantzig. Karmarkar’s algorithm. which the New York Times found worthy as ﬁrst page news as a great scientiﬁc breakthrough. 1984. 106) In view of these enormous combinatorial possibilities. Suppose we had an IBM 370168 available at the time of the big bang 15 billion years ago. It can easily be shown that the feasibility region is a convex set with a linear surface. then perhaps the answer is yes. Would it have been able to look at all the 70! combinations by the year 1981? No! Suppose instead it could examine 1 billion assignments per second? The answer is still no. As the number of corners is ﬁnite. A completely different strategy to search for the optimum is to search the interior of the feasible region and approach the optimal corner (or one of them) from within. It may seem to speak for the advantage of the Simplex method that it searches an area where the solution is known to be. for example. A watershed in the history of LP took place in 1984 when Narendra Karmarkar’s algorithm was presented at a conference (Karmarkar. however. the solution can be found. Even if the earth were ﬁlled with such computers all working in parallel. or. the distribution of a given number of jobs among the same number of workers. The optimum point of a linear criterion over such a set must be in a corner or possibly in a linear manifold of dimension greater than one. Assume that 70 persons shall be assigned to 70 jobs and the return of each workerjob combination is known. Dantzig’s comment about the possibility of looking at all these combinations runs as follows: “Now 70! is a big number. the LP problem is trivial. greater than 10100 . If. There are thus 70! possible assignment combinations. by setting n − m of the x’s equal to zero and solve Ax = b for the remaining ones.
One such forerunner. To the best of my knowledge this was ﬁrst noted by Roger Koenker a few years ago: “But it is an interesting irony. but only slightly. It belonged from then on to operations research and management science on one hand and to computer science on the other. After around 1960. 2. The origin of LP could be set to 1947. illustrating the spasmodic progress of science. which. was when Dantzig developed the Simplex algorithm. LP and inputoutput analysis seemed within economics to be two sides of the same coin. or even within econometrics. Few would today consider LP as a part of economic science.6 Olav Bjerkholt method. If we state the question as to when “LP” was ﬁrst used in its current meaning in the title of a paper presented at a scientiﬁc meeting. Linear programming has various roots and forerunners in economics and mathematics in attempts to deal with economic or other problems using linear mathematical techniques. This idea had. Leontief developed his “closed model” in the 1930s in an attempt to give empirical content to Walrasian general equilibrium. In the early postwar years. see Kohli (2001). Linear programming disappeared from the economics curriculum and lost the attention of academic economists. It is hardly possible to answer to give an exact date for when “LP” was born or ﬁrst appeared. the answer is to the best of my knowledge 1948. was Wassily Leontief’s inputoutput analysis. launched within economics. 2000). so to say. that the most fruitful practical formulation of the interior point revolution of Karmarkar (1984) can be traced back to a series of Oslo working papers by Ragnar Frisch in the early 1950s. as already mentioned. however. and also at the time of some applied problems such as the transportation problem and the diet problem. the ties to economics were severed. But LP was. and given much attention by leading economists for about 10 years or so.” (Koenker. searching the interior of the feasibility area in the direction of the optimal point. Linear Programming in Economics Linear programming has a somewhat curious relationship with academic economics. At the Econometric Society meeting in Cleveland at the end of December . been pursued by Ragnar Frisch. The two terms were. even used interchangeably. It originated at around the same time as game theory with which it shares some features. for a short term. Leontief’s equilibrium approach was transformed in his cooperation with the US Bureau of Labor Statistics to the open inputoutput model.
The conference meant a breakthrough for the new optimization technique and had prominent participation. Kenneth Arrow. 1948). Robert Dorfman.. Cowles Commission and RAND jointly arranged a conference on LP. The conference papers of both Wood and Dantzig appeared in Econometrica in 1949. i. and several military researchers including Dantzig.” At an earlier meeting of the Econometric Society in 1948. Dantzig had presented the general idea of LP in a symposium on game theory. 1949). Paul Samuelson.” presented at an Econometric Society meeting in 1947 (Koopmans. at the same meeting. presented his general activity analysis production model. not yet in standard format.” and a paper by Koopmans on the “transportation problem. Al Tucker. One of the other two papers in the session was by Wood and Dantzig and discussed the problem of selecting the “best” method of accomplishing any given set of objectives within given restriction as a LP problem. Leonid Hurwicz. Koopmans had. mathematicians. Hurwicz’s paper appeared in a session on LP. Harold Kuhn and David Gale. At the conference were economists.e. In 1949. The paper referred not only Leontief’s inputoutput model but also to John von Neumann’s work on economic equilibrium growth. What distinguishes linear programming from other problems in this class is that both the function to be maximized and the restrictions (equalities or inequalities) are linear in the variables. Stigler (1945). which must have been the ﬁrst ever with that title. Dantzig (1949) mentioned as examples of problems for which the new technique could be used. Leonid Hurwicz presented a paper on LP and the theory of optimal behavior with the ﬁrst paragraph providing a concise deﬁnition for economists: “The term linear programming is used to denote a problem of a type familiar to economists: maximization (or minimization) under restrictions. The general model was presented as an “elaboration of Leontief’s inputoutput model. known in the literature as having introduced the “diet problem. Abba Lerner and Nicholas GeorgescuRoegen. using an airlift operation as an illustrative example. to recent papers ﬁrmly within the realm of economics. Wood and Dantzig (1949) in the same issue stated an airlift problem. Dantzig had discovered the Simplex method but admitted many years later that he had not really realized how important this discovery was. Few people had a proper overview of linear models to place the new discovery in . Tjalling Koopmans. Dantzig (1949) stated the LP problem. but the solution technique was not discussed.” (Hurwicz.Some Unresolved Problems 7 1948.
1984). Charnes et al. coauthors with Dorfman of Dorfman. Having at times a somewhat low kindling point. I began with the formulation of the linear programming model in terms of activities and items. (1959) and Sharpe (1967). which I believe was uncharacteristic of him. John von Neumann died long before the Nobel Prize in economics was established. Modigliani. but one of the few was John von Neumann. Samuelson and Solow (1958).” (Dantzig.. then that’s what he’ll get. as I would to an ordinary mortal. Dantzig decided to consult him about his work on solution techniques for the LP problem: “I decided to consult with the “great” Johnny von Neumann to see what he could suggest in the way of solution techniques. coauthor of Arrow et al. Koopmans and Kantorovich shared the Nobel Prize in economics for 1975 for their work in LP. On October 3. One also has to include Frisch. I said to myself “O. to be included are also all the three Nobel Laureates in economics for 1990: Markowitz et al. Von Neumann did something. the Air Force problem.8 Olav Bjerkholt context. and Simon. (1956).” In under one minute I slapped the geometric and the algebraic version of the problem on the blackboard. he proceeded to give me a lecture on the mathematical theory of linear programs. 1947 I visited him for the ﬁrst time at the Institute for Advanced Study at Princeton. as we shall look at in more detail below. at the time an authority on a wide range of problems form nuclear physics to the development of computers. He was considered by many as the leading mathematician in the world. originators of the transportation problem and the diet problem. and George Dantzig never got the prize he ought to have shared with Koopmans and Kantorovich in 1975 for reasons that are not easy to understand. who both were coauthors of Holt et al. Arrow. Kantorovich.” he said impatiently. (1958). Finally. Von Neumann could immediately recognize the core issue as he saw the similarity with the game theory. and Koopmans and Stigler. etc. who is known to have formulated the LP problem already in 1939. respectively. . Leontief who applied LP in his inputoutput models. These comprise Samuelson and Solow. One could underline how LP once seemed ﬁrmly anchored in economics by pointing to the number of Nobel Laureates in economics who undertook studies involving LP. The meeting became the ﬁrst time Dantzig heard about duality and Farkas’ lemma. “Get to the point. (1957). if he wants a quicky. I remember trying to describe to von Neumann.K. Von Neumann stood up and said “Oh that!” Then for the next hour and a half.
The article was 93 pages long. the article may seem na¨ve with regard to the subı ject matter. rather similar to the inputoutput model of Leontief. He had been a cofounder of the Econometric Society in 1930 and to him econometrics meant both the formulation of economic theory in a precise way by means of mathematics and the development of methods for confronting theory with empirical data. Frisch (1934a) developed a linear structure representing inputs for production in different industries. 1934b). He wrote both of these aims into the constitution of the Econometric Society as rendered for many years in every issue of Econometrica. Frisch pioneered the use of matrix algebra in econometrics in 1929 and had introduced many other innovations in the use of mathematics for economic analysis. In many ways. He became editor of Econometrica from its ﬁrst issue in 1933 and held this position for more than 20 years. although the underlying idea and motivation was quite different. it seems that Ragnar Frisch had the deepest involvement with LP. In 1934. His impressive work in several ﬁelds of econometrics must be left uncommented here. Frisch had also published the article without letting it undergo proper referee process. but it had a very sophisticated mathematical underpinning. which he observed at close range both in the United States and in Norway. 1933) and his conﬂuence theory (Frisch. he published an article in Econometrica (Frisch. The article was written as literally squeezed in between Frisch two most famous econometric contributions in the 1930s.Some Unresolved Problems 9 Among all these Nobel Laureates. Frisch’s most active and creative as a mathematical economist coincided with the Great Depression. 3. see Bjerkholt and Knell (2006). 1934a) where he discussed the organization of national exchange organization that could take over the need for trade when markets had collapsed due to the depression. Perhaps it was the urgency of getting it circulated that caused him to do this mistake for which he was rebuked by some of his fellow econometricians. the longest regular article ever published in Econometrica. But he published poorly and his achievements went largely unrecognized. his business cycle theory (Frisch. PreWar Attempts Ragnar Frisch had a strong mathematical background and a passion for numerical analysis. Frisch and Programming. Frisch then addressed the question of how the input needs could .
. 1941). By solving for alternative values of C∗ . its two successive research directors in . One of Frisch’s students worked on a dissertation on health and nutrition.10 Olav Bjerkholt be modiﬁed in the least costly way in order to achieve a higher total production capacity of the entire economy. . . cf. In a footnote. . He formulated a cost function as a quadratic function of these deviations. xk k = 1. n + + ··· + n ε1 ε2 εn ai0 i = 1. Frisch’s work was published as the introduction to his student’s monograph (Frisch. He could still survey and follow these developments rather closely. His solution of the problem for a given example with n = 3. min = 1 2 4. Sandmo (1993). He had contact with Cowles Commission in Chicago. Frisch and Linear Programming Frisch was not present at the Econometric Society meetings in 1948 mentioned above. He was still the editor of Econometrica and thus had accepted and surely studied the two papers by Dantzig in 1949. years before Stigler (1945). drawing on various investigations of the nutritional value of alternative diets. The problem was stated as follows: 2 2 x1 x2 x2 wrt. Frisch naturally did not possess an algorithm for this problem. . he gave a somewhat rudimentary LP formulation of the diet problem. 2. n subject to: k fik xk ≥ C∗ − Pi where C∗ is the set target of overall production and the constraint the condition that the deviation allowed this target value to be achieved. determining as a function of C∗ ran over 12 pages in Econometrica. . see Bjerkholt (2006). representing the cost of achieving the production level C∗ could be mapped. His reasoning thus led to a quadratic target function to be minimized subject to a set of linear constraints. the function = (C∗ ). . Frisch supervised the dissertation in the years immediately before World War II when it led him to formulate the diet problem. The variables in his problem (xi ) represented (relative) deviations from observed input coefﬁcients. neither did he participate in the Cowles CommissionRAND conference in 1949. . This did not prevent him from working out a numerical solution of an illuminating example and making various observations on the nature of such optimising problems. Another problem Frisch worked on before the war with connections to LP was the diet problem.
The lecture gave a brief mathematical statement of the LP problem and then as an important example discussed the problem related to producing different qualities of airplane fuel from crude oil derivates. optimisation under a given import constraint or given labor supply. (1958) popularised this tool for economists’ beneﬁt. Leif Johansen. they did not really address LP techniques as a separate issue. he had indeed applied the ideas of Frisch (1934a) to the problem of achieving the largest possible multilateral balanced trade in the highly constrained world economy (Frisch.. Two students drafted lecture notes that Frisch corroborated and approved. primarily due to his position as chairman of the UN SubCommission for Employment and Economic Stability. that his lecture did not refer the students to an article that had already appeared in Econometrica. He ﬁrst touched upon LP in lectures to his students at the University of Oslo in September 1950. e. who was Frisch’s assistant at the time. Still these lectures may well have been among the ﬁrst applying LP to the macroeconomic problems of the day in Western Europe. Charnes et al. 1948). that the new ideas touched or even overlapped with his own prewar attempts as discussed above. 1947. took notes that were issued in the Memorandum series two days later (Johansen. 1952). Frisch visited in the early postwar years frequently in the United States. and other leading members of the Econometric Society. but to one that was forthcoming! Frisch had apparently been so eager to present these ideas that he had simply used (or perhaps misused) his editorial prerogative in lecturing and issuing lecture notes on an article not yet published! After the singular lecture on LP in February 1952. One major reason for Frisch pioneering efforts here was. The ﬁrst trace of LP as a topic in its own right on Frisch’s agenda was a single lecture he gave to his students on February 15th 1952. Frisch did not lecture on this topic till the autumn term in 1953. he went by way of introduction . indeed. as part of another lecture series on inputoutput analysis. in Econometrica. A special thing about the Frisch’s use of this example was. The new developments over the whole range of related areas of linear techniques had surely caught his interest. In the very ﬁrst postwar years. Jacob Marschak and Tjalling Koopmans. with reference to an article by Charnes et al. of course.Some Unresolved Problems 11 these years.g. (1952) is. a famous contribution in the LP literature as the ﬁrst published paper on an industrial application of LP. The content of these early lectures was inputoutput analysis with emphasis on optimisation by means of LP. however. In these lectures. years before Dorfman et al. In these early lectures Frisch used “inputoutput analysis” and “LP” as almost synonymous concepts.
Between the middle of October 1953 and May 1954. he added that there were two other methods. referring to Dantzig (1951). Some of the memoranda comprised comprehensive numerical experiments. 11 pages. and appealed to geometric intuition. Dorfman (1951) and Charnes et al. Litt analytisk geometri i ﬂere dimensjoner [Some multidimensional analytic geometry]. however. most of which have already been mentioned above. he did not enter into solution techniques. Frisch conﬁned. Frisch’s research work on LP methods and alternatives to the Simplex method. . not least. Because after mentioning Dantzig’s Simplex method. can be followed almost step by step thanks to his working style of incorporating new results in the Institute Memorandum series. But in other cases. some of them adapted from earlier works. but to Frisch (1941). But clearly he had already put considerable work into this. proteins and vitamin B. In his brief introduction to LP. the alternative methods would be more efﬁcient. somewhat similar in tenor to Dorfman (1958). he issued 10 memoranda. the Simplex method might well be the most efﬁcient one. his discussion to the formulation of the LP problem and the characteristics of the solution. which referred to these methods at this time. He exempliﬁed. and then moved to macroeconomic problems within an inputoutput framework. amply illustrated with illuminating ﬁgures. the building of dwellings under the tight postwar constraints and various macroeconomic problems. Frisch in a splendid pedagogical way used examples. 11 pages. it was not with reference to Stigler (1945). with reconstruction in Norway. The memoranda were the following: 18 October 1953 7 November 1953 Logaritmepotensialmetoden til løsning av lineære programmeringsproblemer [The logarithm potential method for solving LP problems]. When he came to the diet problem. (1953). the logarithm potential method and the basis variable method. He started out with cost minimization of a potato and herring diet with minimum requirements of fat. both developed at the Institute of Economics. As were to be expected Frisch’s work on LP was sprinkled with new terms. but Frisch was clearly preparing for presenting his work. He added the remark that when the number of constraints was moderate and thus the number of degrees of freedom large. Needless to say. there were no other sources. They were all in Norwegian.12 Olav Bjerkholt through a number of concrete examples of problem that were amenable to be solved by LP. still in the ofﬁng.
Some Unresolved Problems 13
(Continued ) 13 November 1953 Substitumalutforming av logaritmepotensialmetoden til løsning av lineære programmeringsproblemer [Substitumal formulation of the logarithmic potential method for solving LP problems], 3 pages. Notater i forbindelse med logaritmepotensialmetoden til løsning av lineære programmeringsproblemer [Notes in connection with the logarithmic potential method for solving LP problems], 48 pages. Finalhopp og substitumalfølging ved lineær programmering [Final jumps and moving along the substitumal in LP], 8 pages. Trunkering som forberedelse til ﬁnalhopp ved lineær programmering [Truncation as preparation for a ﬁnal jump in LP], 6 pages. Et 22variables eksempel p˚ anvendelse av a logaritmepotensialmetoden til løsning av lineære programmeringsproblemer [A 22 variable example in application of the logarithmic potential method for the solution of LP problems]. Basisvariabelmetoden til løsning av lineære programmeringsproblemer [The basisvariable method for solving LP problems], 21 pages. Simplexmetoden til løsning av lineære programmeringsproblemer [The Simplex method for solving LP problems], 14 pages. Generelle merknader om løsningsstrukturen ved det lineære programmeringsproblem [Generfal remarks on the solution structure of the LP problem], 12 pages.
14 January 1954
7 March 1954
29 March 1954
27 April 1954
1 May 1954
5 May 1954
7 May 1954
At this stage, Frisch was ready to present his ideas internationally. This happened ﬁrst at a conference in Varenna in July 1954, subsequently issued as a memorandum. During the summer, Frisch added another couple of memoranda. Then he went to India, invited by Pandit Nehru, and spent the entire academic year 1954–55 at the Indian Statistical Institute, directed by P. C. Mahalanobis, to take part in the preparation of the new ﬁveyear plan for India. While he was away, he sent home the MSS for three more
14
Olav Bjerkholt
memoranda. These six papers were the following:
June 21st 1954 Methods of solving LP problems, Synopsis of lecture to be given at the International Seminar on InputOutput Analysis, Varenna (Lake Como), June–July 1954, 91 pages. Nye notater i forbindelse med logaritmepotensialmetoden til løsning av lineære programmeringsproblemer [New notes in connection with the logarithmic potential method for solving LP problems], 74 pages. Merknad om formuleringen av det lineære programmeringsproblem [A remark on the formulation of the LP problem], 3 pages. Principles of LP. With particular reference to the double gradient form of the logarithmic potential method, 219 pages. A labour saving method of performing freedom truncations in international trade. Part I, 21 pages. A labour saving method of performing freedom truncations in international trade. Part II, 8 pages.
August 13th 1954
August 23rd 1954
October 18th 1954
March 29th 1955 April 15th 1955
Before he left for India he had already accepted an invitation to present his ideas on programming in Paris. This he did on three different occasions in May–June 1955. He presented in French, but issued synopsis in English in the memorandum series. Thus, his Paris presentation were the following:
7th May 1955 The logarithmic potential method for solving linear programming problems, 16 pp. Synopsis of an exposition to be made on 1 June 1955 in the seminar of Professor Ren´ e Roy, Paris (published as Frisch, 1956b). The logarithmic potential method of convex programming. With particular application to the dynamics of planning for national developments, 35 pp. Synopsis of a presentation to be made at the international colloquium in Paris, 23–28 May 1955 (published as Frisch, 1956c). Linear and convex programming problems studied by means of the double gradient form of the logarithmic potential method, 16pp. Synopsis of a presentation to be given in the seminar of Professor Allais, Paris, 26 May 1955.
13 May 1955
25 May 1955
Some Unresolved Problems 15
After this, Frisch published some additional memoranda on LP. He launched a new method, named the Multiplex method, towards the end of 1955, later published in a long article in Sankhya (Frisch, 1955; 1957). Worth mentioning is also his contribution to the festschrift for Erik Lindahl in which he discussed the logarithmic potential method and linked it to the general problem of macroeconomic planning (Frisch, 1956a; 1956d). From this time, Frisch’s efforts subsided with regard to pure LP problems, as he concentrated on nonlinear and more complex optimisation, drawing naturally on the methods he had developed for LP. As Frisch managed to publish his results only to a limited degree and not very prominently there are few traces of Frisch in the international literature. Dantzig (1963) has, however, references to Frisch’s work. Frisch also corresponded with Dantzig, Charnes, von Neumann and others about LP methods. Extracts from such correspondence are quoted in some of the memoranda.
5. Frisch’s Radar: An Anticipation of Karmarkar’s Result In 1972, severe doubts were created about the ability of the Simplex method to solve even larger problems. Klee and Minty (1972), whom we must assume knew very well how the Simplex algorithm worked, constructed a LP problem, which when attacked by the Simplex method turned out to need a number of elementary operations which grew exponentially in the number of variables in the problem, i.e., the algorithm had exponential complexity. Exponential complexity in the algorithm is for obvious reasons a rather ruining property for attacking large problems. In practice, however, the Simplex method did not seem to confront such problems, but the demonstration of Klee and Minty (1972) showed a worst case complexity, implying that it could never be proved what had been largely assumed, namely that the Simplex method only had polynomial complexity. A response to Klee and Minty’s result came in 1978 by the Russian mathematician Khachiyan 1978 (published in English in 1979 and 1980). He used a socalled “ellipsoidal” method for solving the LP problem, a method developed in the 1970s for solving convex optimisation problems by the Russian mathematicians Shor and Yudin and Nemirovskii. Khachiyan managed to prove that this method could indeed solve LP problems and had polynomial complexity as the time needed to reach a solution increased with the forth power of the size of the problem. But the method itself was
16
Olav Bjerkholt
hopelessly ineffective compared to the Simplex method and was in practice never used for LP problems. A few years later, Karmarkar presented his algorithm (Karmarkar, 1984a; b). It has polynomial complexity, not only of a lower degree than Khachiyan’s method, but is asserted to be more effective than the Simplex method. The news about Karmarkar’s pathbreaking result became frontpage news in the New York Times. For a while, there was some confusion as to whether Karmarkar’s method really was more effective than the Simplex method, partly because Karmarkar had used a somewhat special formulation of the LP problem. But it was soon conﬁrmed that for sufﬁcently large problems, the Simplex method was less effcient than Karmarkar’s result. Karmarkar’s contribution initiated a hectic new development towards even better methods. Karmarkar’s approach may be said to be to consider the LP just as another convex optimalization problem rather that exploiting the fact that the solution must be found in a corner by searching only corner points. One of those who has improved Karmarkar’s method, Clovis C. Gonzaga, and for a while was in the lead with regard to possessing the nest method wrt. polynomial complexity, said the following about Karmarkar’s approach:
“Karmarkar’s algorithm performs well by avoiding the boundary of the feasible set. And it does this with the help of a classical resource, ﬁrst used in optimization by Frisch in 1955: the logarithmic barrier function: x ∈ Rn , x > 0 → p(x) = − log xi .
This function grows indeﬁnitely near the boundary of the feasible set S, and can be used as a penalty attached to these points. Combining p(·) with the objective makes points near the boundary expensive, and forces any minimization algorithm to avoid them.” (Gonzaga, 1992)
Gonzaga’s reference to Frisch was surprising; it was to the not very accessible memorandum version in English of one of the three Paris papers.1 Frisch had indeed introduced the logarithmic barrier function in his logarithmic potential method. Frisch’s approach was summarized by Koenker as follows:
“The basic idea of Frisch (1956b) was to replace the linear inequality constraints of the LP, by what he called a log barrier, or potential function. Thus, in place of the canonical linear program
1 In Gonzaga’s reference, the author’s name was given as K.R. Frisch, which can also be
found in other rerences to Frisch in recent literature, suggesting that these refrences had a common source.
´ 1956b. 13) The corresponding quote in the memorandum synoptic note is: “My method of approach is of an entirely different sort [than the Simplex method].Some Unresolved Problems 17 (1) min{c xAx = b. (2) replaces the inequality constraints in (1) by the penalty term of the log barrier. 8) . µ)Ax = b} where (3) B(x. not as much as a question of efﬁcient algorithms as that of powerful enough computers. p. The problem may have seen. They might have seemed powerful to him at that time. Solving (2) with a sequence of parameters µ such that µ → 0 we obtain in the limit a solution to the original problem (1). Dans cette e e e m´ thode nous travaillons syst´ matiquement a l’int´ rieur de la r´ gion e e ` e e admissible et utilisons un potentiel logarithmique comme un guide– une sorte du radar–pour nous eviter de traverser la limite. 20) Frisch had not provided exact proofs and he had above all not published properly. In the programming work Frisch pushed on to more complex programming problems. 2000. The Simplex method had not really been challenged yet. x ≥ 0} we may associate the logarithmic barrier reformulation (2) min{B(x. In this method we work systematically in the interior of the admissible region and use a logarithmic potential as a guiding device — a sort of radar — to prevent us from hitting the boundary.” (Frisch. The reasons for this might be that his investigations had not yet come to an end.” (Koenker. Another reason for his results remaining largely unknown. µ) = c x − µ log xk In effect. We ﬁnish off with Frisch’s nice illustrative description of his method in the only publication he got properly published (but unfortunately not very well distributed) on the logarithmic potential method: “Ma m´ thode d’approche est d’une esp` ce toute diff´ rente. but they hardly had the capacity to match Frisch’s ambitions. in Frisch’s work of not publishing. But he had the exact same idea as Karmarkar came up with almost 30 years later. p.” (Memorandum of 7 May 1955. was perhaps that he was too much ahead. p. by large enough problems to necessitate better method. Why did he not make his results better known? In fact there are other. even more important cases. spurred by the possibility of using ﬁrst.and secondgeneration computers.
135–159. R (1941). R (1956a).18 Olav Bjerkholt References Arrow. On the need for forecasting a multilateral balance of payments. A. In Kosthold og levestandard.). The Review of Economics and Statistics 30. PA Samuelson and RM Solow. Econometrica 20. GB (1951). Macroeconomics and linear programming. Reminiscences about the origin of linear programming. In Cahiers du S´ minaire d’Econometrie: No 4 — Programme e . T Koopmans (ed.). Dantzig. Linear Programming and Economic Analysis. The multiplex method for linear programming. In Activity Analysis of Production and Allocation. WW Cooper and MW Miller (1959). An Application of Linear Programming to the Theory of the Firm. R Frisch (ed. An application of linear programming to ﬁnancial budgeting and the costing of funds. Frisch. WW Cooper and A Henderson (1953). Innledning. 535–551. Studies in Linear and NonLinear Programming. New York: McGrawHill. 1–23. Charnes. Charnes. An Introduction to Linear Programming. KJ. 217–226. O and M Knell (2006). In Economic Essays in Honor of Gustav Cassel. R. pp. R (1934a). The Journal of Business 20. In Mathematical Programming. En økonomisk undersøkelse. Econometrica 17. Frisch. RW Cottle. Bledning aviation gasolines — a study in programming interdependent activities in an integrated oil company. Berkeley: University of California Press. K Getz Wold (ed. L Hurwicz and H Uzawa (1958). CA: Stanford University Press. R (1934b). R (1933).). Linear Programming and Extensions. A. 171–205. (1958). 339–347. GB (1963). Memorandum from Institute of Economics. WW Cooper and B Mellon (1952). R (1948). Oslo: Fabritius og Sønners Forlag. Stanford. New York and London: John Wiley & Sons. Charnes. New York and London: John Wiley & Sons. Frisch. Oslo: Institute of Economics. R (1947). Frisch. (forthcoming). Dorfman. Memorandum from Institute of Economics. Circulation planning: proposal for a national organization of a commodity and service exchange. Outline of a system of multicompensatory trade. Propagation problems and impulse problems in economics. Dantzig. Frisch. Oslo: University of Oslo (17 October 1955). 258–336 and 422–435. R (1955). pp. pp. GB (1949). La r´ solution des probl` mes de programme lin´ aire par la m´ thode du e e e e potential logarithmique. 200–211. R (1951). Dantzig. Princeton. Econometrica 2. (10 January 1956). Oslo: University of Oslo. Frisch. Economic Systems Research 18. Maximization of a linear function of variables subject to linear inequalities. Statistical Conﬂuence Analysis by Means of Complete Regression Systems. The problem of multicompensatory trade.). Frisch. A. Ragnar Frisch and inputoutput analysis. 20–46. Programming of interdependent activities: II mathematical model. Bjerkholt. Publikasjon nr 5. London: Allen & Unwin. ML Kelmanson and B Korte (eds. R (1956b). GB (1984). pp. Frisch. The American Economic Review 37(1). Dorfman. Dantzig. Amsterdam: Elsevier (NorthHolland). 265–271. (Also published shortened and simpliﬁed as Frisch (1956d)). Frisch. NJ: Princeton University Press.
H (1957). CC.) (1951). Econometrica 16. Koopmans. L (1952). R Frisch (ed. e Colloques Internationaux du Centre National de la Recherche Scientiﬁque LXII. Pathfollowing methods for linear programming. Linear programming and general theory of optimal behaviour (abstract). Hurwicz. MK and GB Dantzig (1949). Galton. Management Science 3. A new polynomialtime algorithm for linear programming. Kohli.Some Unresolved Problems 19 Lin´ aire — Agr´ gation et Nombre Indices. Koopmans. Stigler. pp. Frisch. W (1967). R (1956c). History of Political Economy 25. Journal of Farm Economics 27. (Notes from Professor Ragnar Frisch’s lecture 15 February 1952). G (1945).). O Sisha (ed. 167–224. 347–374. Karmarkar. Activity Analysis of Production and Allocation. TC (ed. Koenker. 4. Programme convexe et planiﬁcation pour le d´ veloppement national. 1941–54: developing a framework for measurement. New York. 38–67. In The Age of Economic Measurement. R (1956d).) e Frisch. Conf´ rence a Paris. A linear programming algorithm for mutual fund portfolio selection. JL Klein and MS Morgan (eds. 499–511. Management Science 13. 329–362 [Practically identical to Frisch (1955)] Gonzaga. Leontief and the bureau of labor statistics. SIAM Review 34. The cost of subsistence. 255–269. CC (1992). Derivation of a linear decision rule for production and employment. 15pp. Holt. In Inequalities III. e e ` Frisch. Stockholm: Ekonomisk Tidskrift. (The publication also gives an e e extract of the oral discussion in Ren´ Roy’s seminar. (1972). MC (2001). Ragnar Frisch on the optimal diet. JF Muth and HA Simon (1956).). R (1957). 2–12. Sankhy´ : The Indian a Journal of Statistics 18. LP from the ’40s to the ’90s. 159–175. Sandmo. Programming of interdependent activities: I general discussion. Cowles Commission Monographs No 13. pp. Interfaces 23. R (2000). Wood. T (1948). 303–314. 190–212. A (1993). 66. Econometrica 17. Durham and London: Duke University Press. pp. 95(2). University of Oslo. Combinatorica. mai 1955. Optimum utilization of the transportation system (abstract). 193–199. NK (1984). pp. A (1993). The elimination form of the inverse and its application in linear programming. . L (1949). Les mod` les dynamiques. 313–327. Frisch. Journal of Econometrics. 17 February 1952. 15. V and G Minty. (Slightly simpliﬁed version of Frisch (1956a)). The multiplex method for linear programming. New York: John Wiley & Sons. 161–162. How good is the simplex algorithm. In honour of Erik Lindahl 21 November 1956. 7–20. 159–177.). pp. Klee. Lineær programming. 373–395. Orden. pp. Econometrica 17. Sharpe. Edgeworth. 20–23. Macroeconomics and linear programming. F Modigliani. NY: Academic Press. In 25 Economic Essays. 47–80. Management Science 2. Markowitz. and prospects for quantile regression in econometrics. Johansen. Memorandum from Institute of Economics.
This page intentionally left blank .
Chapter 2 Methods of Modeling: Econometrics and Adaptive Control System .
This page intentionally left blank .
E denotes the Euclidean space). we adopted the notation used here. we may compute matrix from: En = p Aj − I (1a) j=1 or. It should be clariﬁed at the very outset. It is recalled that this type of VAR models like the one presented above. where the elements of A are the coefﬁcients of adjustment and the rows of the matrix C are the (possible) cointegrating vectors. Besides. β is the vector of coefﬁcients. are deﬁned on En × En . ti and di are the trend and dummy. After estimation.. Also. although capital letters are used for denoting β matrices. an estimate of matrix can be obtained by applying OLS. The matrices of coefﬁcients Aj . 23 . which is presented later. Greece 1. C is denoted by β (prime shows transposition) and A by α . To avoid any confusion. y = Xβ + u. from the corresponding Error Correction VAR (ECVAR). in the general linear model. It is known that matrix can be decomposed such that = AC. according to the procedure suggested by Kleibergen and Van Dijk (1994). respectively and w is the noise vector. directly. as a way to estimate the dynamic relationships among jointly endogenous variables. have been recommended by Sims (1980). that in many books etc. Introduction We start with the unrestricted VAR(p): p x i = δ + µ ti + j=1 Aj xi−j + zdi + wi (1) where x ∈ (here.Topic 1 A Novel Method of Estimation Under CoIntegration Alexis Lazaridis Aristotle University of Thessaloniki.
Johansen Juselius. . provided that some rank conditions of are satisﬁed. pp.24 Alexis Lazaridis A straightforward approach to obtain matrices A and C. ﬁnding the eigenvalues from: −1 λ λ ˆ kk − ˆ k0 ˆ 00 ˆ 0k  = 0. /c1 and ai × c1 .. from the transposed of i.. These matrices appear in the last term of the concentrated likelihood of 3 Eq. i. if c1 (=0). 3 Equation (5) is presented below. 1990).. with m > n. p. If these necessary conditions hold. matrix C can be obtained by solving the constrained problem: −1 {min det(I − C ˆ k0 ˆ 00 ˆ 0k C )  C ˆ kk C = I} (3) where matrices ˆ 00 . the cointegrating vectors are normalized so that — usually — the ﬁrst element i to be equal to 1. (2) holds if is (n × n) and all its eigenvalues are real.e. ci ). (3) is minimized by solving a general eigenvalue problem.e. denotes the ﬁrst element1 of the ith row of matrix C. the jth column of this matrix (i. (2) is not applicable. ˆ kk and ˆ k0 are residual covariance matrices of speciﬁc regressions (Harris. (5) together with C. 287–296). i. (4) Applying Cholesky’s factorization on the positive deﬁnite symmetric −1 −1 matrix ˆ kk . ci is taken to be the ith element of the ith row. i. c . n.. for i = 1. .2 ci. then from Eq. It is known that according to the maximum likelihood (ML) method. ˆ 0k . such that ˆ kk = LL ⇒ ˆ kk = (L )−1 L−1 . is to solve the eigenproblem (Johnston and Di Nardo. where L is lower 1 In some computer programs. = V V−1 (2) where is the diagonal matrix of eigenvalues and V is the matrix of the corresponding eigenvectors. Note that since is not symmetric. .e.e. 1997. Eq. . which implies that the eigenvectors are real too.. (2) follows that matrix A = V and C = V−1 . i 2 Note that dot is necessary to distinguish the jth row of C. Further. In other words. Note that the minimization of Eq. It is recalled that Eq. In cases that is deﬁned on En × Em . 1995. where is replaced by AC. 78. then i i the normalization process can be summarized as ci. Eq. (3) is with respect to the elements of matrix C. and aj the jth column of matrix A. V = V−1 .
in the sense that proper tests must ensure that the noises are white. Hence. (4) takes the conventional form4 : −1 λI − L ˆ k0 ˆ 00 ˆ 0k L = 0. (4) by L and postmultiply them by L. matrix can be computed from Eq. Eq. matrix A is obtained from A = ˆ 0k C . (1) may be expressed in the form of an equivalent ECVAR.e. We propose in the ﬁrst part of this chapter. one should be aware to comply with the basic assumptions regarding the error terms. 2. the initial problem reduces to ﬁnding one of the eigen values and eigenvectors of a symmetric matrix. which are real. Finally. Some Estimation Remarks It was mentioned that after estimation. as we will see in what follows. . At this stage.. It is obvious that each equation of the VAR can be estimated by OLS. (4a) After obtaining matrix V = V of the eigenvectors. in cases that this is necessary. such as one or more dummies. which may include. any number of dummies. Besides. we can easily compute the (unormalized) C from C = V L . but it is not clear of how to accommodate additional deterministic factors. p−1 xi = δ + µ ti + j=1 Qj xi−j + xi−p + zdi + wi (5) 4 It should be noted that it is also necessary to premultiply the elements of Eq. Note also that C ˆ kk C = V L (L )−1 L−1 LV = I.A Novel Method of Estimation Under CoIntegration 25 triangular. It can be easily shown that Eq. the order p of the model is deﬁned imposing at the same time restrictions — if necessary — on some elements of A’s and vectors µ and z to be zero. And this is the main advantage of the ML method. which correspond to the ﬁnally adopted cointegration vector. Additionally. apart from the standard deterministic components. This way. a comparatively simple method. nothing is said about the variance of the (disequilibrium) errors. to directly obtain cointegrating vectors. According to this procedure. i. the errors corresponding to the proper cointegration vector have the property of minimum variance. (1a). a constant and/or trend can be included in the cointegrating vectors.
with elements fii fi . and its columns are the orthonormal eigenvectors of ˜ ˜ . and its columns are the orthonormal eigenvectors of ˜ ˜ . 3. with m > n. L ti−p Lp di−p Hence. µ or z. 1981). i. . called the singular values of ˜ .δµ z ˜ = .26 Alexis Lazaridis where Qj = j k=1 Ak − I . the vector δ . (5) takes the form: p−1 xi = j=1 ˜ Qj xi−j + ˜ Xi−p + wi . The Singular Value Decomposition Matrix in Eq. (7) We can compute the unique generalized inverse (or pseudoinverse) of ˜ . (5) may be augmented to accommodate. For conformability. from: ˜ + = VF∗ U (8) where V is of dimension (m×m). (5) should be augmented accordingly by using the linear advance operator L (such that Lk yi = yi+k ). which are the nonnegative square roots of the eigenvalues of ˜ ˜ . corresponding to the largest m eigenvalues of this matrix so that: U U = V V = VV = Im (9) F is diagonal (m × m). .δ (6) In this case.e. + denoted by ˜ (Lazaridis and Basu. the vector xi−p in Eq. ˜ is deﬁned on En × Em . U is (n × m).. Eq. It is also possible to accommodate all these vectors in the following way: . xi−p 1 ˜ xi−p = p . as an additional column.
(1).. r( ˜ ) = k ≤ n.j) = √ fi . the VAR in this case has a form analogous to Eq.. i. if i = j (11) (10) In accordance to Eq. (10) can be written as: ˜ = AC where A = UF1 of dimension (n × n) and C = F2 V of dimension (n × m). we next form a matrix F2 of dimension (n × m). The x ∼I(1) vector consists of the variables C (consumption). we did not consider the three dummies DD682. then f1 ≥ f2 ≥ · · · ≥ fk > 0 and fk+1 = fk+2 = · · · = fm = 0 ∗ F∗ is diagonal (m × m). so that x ∈ E3 . instead of ˜ .e. the singular value decomposition (SVD) of ˜ is: ˜ = UFV . After normalization. 196–198).A Novel Method of Estimation Under CoIntegration 27 If the rank of ˜ is k. It is apparent that the cointegrating vectors can always be computed. Case Study 1: Comparing CoIntegrating Vectors We ﬁrst consider the VAR(2) model. DD792. presented by Holden and Peaman (1994). Computing the CoIntegrating Vectors We proceed to form a matrix F1 of dimension (m × n) such that: 0. It should be noted that all the above matrices are real. without trend and dummy components. then ˜ + is the right inverse of ˜ . and DD883. Hence. pp. . given that r( ˜ ) > 0. Needless to say that the same steps are followed if . We used the same set of data presented at the end of this book (Table D3.e. (11). i. we get the possible cointegrating vectors as the rows of matrix C. To simplify the presentation. ˜ ˜ + = In Hence. Also note that if ˜ has full row rank. if i = j f1(i. 4. It is veriﬁed that F1 F2 = F. Hence. 5. and fii fi∗ = 1/fi . I (income) and W (wealth). whereas the elements of A can be viewed as approximations to the corresponding coefﬁcients of adjustment. Eq. is considered.
1110597 0. (14) will be denoted by umli . from Eq.9206225 −0.2912925 ˆ (14) to be stationary.9574698Ii + 0. indicating thus that at least one row of C can be regarded as a possible cointegration vector.4300255 1.28 Alexis Lazaridis Estimating the equations of this VAR by OLS.048531Wi + 0.0083471695.064873 0.2938160 −0. −0. .0072427 = 0. (12). f2 = 0.9574698 − 0.06545528 0.048531Wi − 0.2912925.5486237 −0. we get the following results. we get matrix C which is: 1 −0. .050713 0.8979247. 2. For the vector in Eq. δ = 0.521098 .16781451 . One possible cointegration vector with intercept. obtained by applying the ML method is: 1 − 0.283901 −2. the necessary and sufﬁcient condition is that the (disequilibrium) errors ui computed from: ˆ ui = Ci − 0. since x ∼I(1).5244858 −0. 0.403011 C = 1 1 1. This row usually has the smallest norm. (13) All fi are less than 1.3595042 −0.365344 Euclidean norm = 6. the (Euclidean) norm of each row of C is presented. . It is noted also that the singular values of are: f1 = 0. i. {ui } ∼ I(0).2912925 and corresponds to the longrun relationship: Ci = 0.073043 0.e..028041 . ˆ For distinct purposes. these ML errors obtained from Eq.039400 (12) Hat (ˆ) is omitted for simplicity. (13) to be a cointegration vector.6305243 −6.059472 −0.653064 In the column at the righthand side.9574698Ii − 0.3586208 −0.δ] Applying SVD on ˜ = [ .2304381 and f3 = 0.1554421 0. in the sense that the resulting errors are likely to be stationary.048531 + 0.
The next task is to compare the above three series of errors. is the ﬁrst one. Hence. we shall go a bit further to see their properties in detail. .06545528Wi + 0. 111). the errors are computed from: ui = Ci − 0.9206225Ii − 0. It should be pointed out here that the smallest norm condition is just an indication as to which row of matrix C to be considered ﬁrst. and {ubooki }.0790 + 0. the normality test using the JarqueBera criterion favors the null.1778.910971 − 0.A Novel Method of Estimation Under CoIntegration 29 which in this case. It may be useful to recall at this point that the cointegration vector reported by the authors (p. {usvdi }.079761 + 0.178455. Before going to the next section. ˆ (16) These errors will be denoted by ubooki .178455. it is worth to mention here that for the three series. Although by a ﬁrst glance the three series seem to be stationary. Hence.910971Ii − 0.9109 − 0. the errors corresponding to this cointegration vector are obtained from: ui = Ci − 0. It should be clariﬁed that this vector has been obtained by applying OLS to estimate directly the longrun relationship. we get: 1 − 0. Repeating this procedure. having a ﬁrst look at the corresponding graphs.1110597. The series {umli }.079761Wi + 0. ˆ (15) These errors will be denoted by usvdi . Figure 1. which is obtained by applying OLS after omitting the ﬁrst two observations from the initial sample is: 1 − 0. which are shown in Fig. 1.
267–276) critical values (see also Harris. ˆ (17) The value of q was set such that the noises εi to be white. 158).193688) t = −3.281651 umli−1 ˆ (0. pp.45 α = 0. 54–55). it is not advisable (Harris. if we compute the standard deviation of these three series we will get: Standard deviation of {umli } 0.30 Alexis Lazaridis 5.01. Regarding the last case. so that ui = 0.606759ubooki−1 o (0. This statistic (tu = ∞ + 1 /T + 2 /T2 ) for three (independent) variables in the longrun relationship with intercept is: α = 0.0164 .108269) t = −3.10).10) and {umli } is stationary only for α = 0. we estimated the following regresˆ q ui = b1 + b2 ui−1 + ˆ ˆ j=1 bj+2 ui−j + εi . {booki } is stationary for α = (0.016612 Standard deviation of {booki } 0.259066 usvdi−1 s (0. Hence. sion: ui = 0. {usvdi } is stationary for α = (0.00612 − 0.1.83 α = 0.05. Besides. We have to compute the tu statistic from McKinnon (1991.381773umli−1 − 0.05. ˆ ˆ The estimation results are as follows: umli = −0.10. to apply the DickeyFuller (DF/ADF) test.017622 Standard deviation of {usvdi } 0. 1995. p.52. as it will be explained in detail later on.05 tu = −3. as we do with any variable in the initial data set.09522) t = −6. (17).006525 − 0. 0. Since these error series are the results of speciﬁc calculations and in the simplest case are the OLS residuals. if ˆ ˆ t < tu . Testing for Cointegration Since for the ﬁrst two cases.10 tu = −3.01 tu = −4.682 uˆ vdi = 0. Table A6. 1995. p. since ui ’s are OLS residuals. 0. the intercept is omitted from Eq. 0. Recall that the null [ui ∼ I(1)] is rejected in favor of H1 [ui ∼ I(0)].956 ubˆ oki = −0.428278usvdi−1 − 0.37.
9192042Ii − 0. 6. we apply SVD to in Eq. Hence. 27).058564 The errors corresponding to this vector are obtained from: ui = Ci − 0.836676 2. which is: 1 −0. which is a quite desirable property. . The cointegrating vector reported by the authors (p. f2 = 0. as suggested by Engle and Granger (Dickey et al. produces almost minimum variance errors. including the dummies mentioned previously.A Novel Method of Estimation Under CoIntegration 31 It is obvious from these ﬁndings. the method we introduce here. (12) to obtain matrix C.066081211Wi ˆ (19) These errors will be also denoted by usvdi .89514.063768 and 1. 109). the cointegration vector without an intercept obtained by applying the ML method is: 1 − 0. p.359891 Euclidean norm = 1.9422994Ii − 0. so that the errors corresponding to this vector are obtained from: ui = Ci − 0.012970 C = 1 1 0.9395341 2.058564Wi ˆ (18) Again these errors will be denoted by umli .0403121 and f3 = 0.0026218249 Again. 1994. Next.9422994 − 0. However. we will mainly focus on this property in our next section. We consider the ﬁrst row of C. Case Study 2: Further Evidence Regarding the Minimum Variance Property With the same set of data. which is obtained from the VAR(2)..9192042 −0. all fi ’s are less than 1.066081211 1. the super consistency of the OLS.47828 The singular values of are: f1 = 0.160720 −1.
(1993.03804 and the errors obtained from this vector are computed from: ui = Ci − 0.6838Ri .001442 −2.9605 0.476152 0. 0.076414 0. y.8938yi + 7.021703 0.048932 −0. 292–293) that refers to VAR(2) with constant dummy and trend. pp. we compute the errors from: ui = (m − p)i + 6. The data (in logs) are presented in Harris (1995.003078 5.184282 −0.16375 −0.0193 (20) Next.089584 0.676612 −0. respectively). The minimum variance property can be seen from the following: Standard deviation of {umli } 0.0166 Standard deviation of {booki } 0.000366 −0.470803 (21) .076195 and 0. Note that the entries for 1967:1 and 1973:2 have to be corrected from 0. Estimating the same VAR by OLS.006877 0.181506 −0.447157 −0.3966 pi − 0.229635 0.118891 0. statistical appendix.076195 and 9.129631 −0.933878 −0. we get.93638Ii − 0. which has been obtained by applying the ML method. ˆ These errors will be denoted by umli .206963 = .46328 and µ = δ= 0. p.0169 Standard deviation of {usvdi } 0. Considering the cointegration vector reported by the authors. we consider the model presented by Banerjee et al.03804Wi ˆ As in the previous case. The state vector x ∈ E4 consists of the variables (m − p).001867 3. these errors will be denoted by ubooki .012116 −0. and R. −0.56569498 to 9. pp. 153– 155.32 Alexis Lazaridis by applying the ML procedure is given by: 1 − 0.93638 − 0.56569498.
60144 37. f2 = 0.2671 Standard deviation of {usvdi } 0.1603713 −0.503069. 1.065634) t = −2.06966 9. which may be related to stationary errors ui .055229 −5. as it was indicated by the ﬁrst singular value.7752298.10 is less than −3.9061699 1 0.0051181894 0.A Novel Method of Estimation Under CoIntegration 33 Following the procedure described above.491415 pi − 0.42318Ri .912 For a longrun relationship with three (independent) variables without trend and intercept.1244312 66. from the ˆ graphs.012560162 The fact that one singular value is greater than one.30517 24.1977592 and f4 = 0. The minimum variance property can be seen from: Standard deviation of {umli } 0.6510755yi + 7.491415 −0. f3 = 0.466846 0.310488 Euclidean norm = 5. from which refer to the errors corresponding to the rows of C.724 uˆ vdi = 0.160135 usvdi−1 s (0. the errors are computed from: ui = (m − p)i + 5. is an indication that unless a dummy is present — we hardly can trace one row of matrix C.6510755 7.75685 1 5.020406 The singular values of are: f1 = 1.214098 umli−1 ˆ (0. ˆ As usual.832329 − 0. .268599 − 0. we obtain matrix C which is: 1 −48.178771 uml i−1 − 0.193432 usvd i−1 − 0. we present the following results.2375 (22) To see that none of the two error series is stationary. the critical value tu for α = 0.99429 1. In this case.066421) t = −2. umli = 0. we see that only the second row produces reasonably results.423318 C= 1 −2. these errors will be denoted by usvdi . As shown in Fig.
08716 15.5803374E02 4.6008804E02 8. as presented in Table 1. Greece 1956–2000. Washington DC) . which are due to some properties of SVD (Lazaridis.8173149E02 3.1300073 0.073yi + 6. Hence. we will consider the ﬁrst of the two cointegration vectors (since.8041089E02 0. is it of any good to obtain such a stationary series when economic variables are considered? x Table 1.7248717E02 6. Besides if n is large enough. (International Monetary Fund. 16).1379310E02 4.049743 1.5998532 0. which was obtained by the ML method.847396 11.1. The series {xi }.34 Alexis Lazaridis Nevertheless.8760089E02 2.3325018 0.39428 18.268 and ˆ ˆ ui = −0. p.808Ri ˆ Also we obtain standard deviation of ui ’s = 0. But.868965 3.51577 Source: International Financial Statistics.388114 1.2022010 0. 1986).56992 25.325 pi − 1. ui = (m − p)i + 7. p.2553190E02 4. Finally.7325019 1. It is obvious that these additional results further support the previous ﬁndings.2664710 0. 1995.4258254 0. reported by Harris (1995.485106 7.2575201E02 3. we will always get a stationary series (Harris. the initial series is said to be I(n). as it is veriﬁed that if we take into consideration the exports of goods and services for Greece.178896ui−1 − 0. Exports of goods and services in bil.6030815E02 6.9347029E02 2. the method proposed. X 2.239618 1.619.117378 − 0.6030815E02 7.788701 2.4763023 0.7498162E02 6. it produces slightly better results).068296) (23) t = −2.4944974E02 2.1000734 0. From this vector.316215 9.510052 4. one gets the following errors.45708 14.978430 5.90506 22.87601 20.690976 9.9053558E02 2. 102). this is not necessarily the case. Eur. produces better results than the ML approach does.419956 2. A Note on Integration It seems to be a common belief that differentiating a variable say n times. 6.618782 4.30095417 ui−1 ˆ ˆ ˆ (0.818929 6.
the corresponding pvalues (column 5) are greater than 0. this is a strong indication that we have a NI series. we can easily verify from the corresponding graph. the series { xi }. 95) series. . if we trace heteroscedasticity. the series { 5 xi } is not stationary. we found that q = 2. As shown in Eq. p.1. 2. and {∆ 3x i} Figure 2. To trace that a given series is NI. For the data presented in Table 1. 3). Differences of the NI series {xi }.A Novel Method of Estimation Under CoIntegration 35 The series {∆ x i}.. { 2 xi }. We see that for all k (column 1). to say that no autocorrelation (AC) is present. if we look at the residuals graph (Fig. The corresponding Q statistics (Column 4) together with pvalues are presented in Table 2.1. {∆ 2x i}. we can verify the presence of heteroscedasticity. In Fig. 1993. and { 3 xi } are presented. Using the series {xi } as shown in Table 1. A comparatively simple way for this veriﬁcation is to compute the residuals ui and to consider the corresponding LjungBox Q statisˆ tics and particularly their pvalues. (17). where we added a trend too. We will call the variables. we may run the regression: q xi = β1 + β2 xi−1 + β3 ti + j=1 βj+3 xi−j + ui (24) which is a reparametrized AR(q) with constant. which should be much greater than 0. But. which belong to this category near the integrated (NI) (Banerjee et al. the value of q is set such that the noises ui to be white. On the other hand. that even for n = 5. for a better understanding of this situation.
863744 0.063039 −0. partial autocor coefﬁcients (PAC) Q statistics and p values (prob).213238 4.167680 −0.109773 0.104484 −0.056574 Q Stat(LB) 0.730017 0.500452 7.420185 6.384616 6.901288 0. (24) (q = 2).003064 −0.084284 −0.980097 Figure 3.048363 −0.850280 0. Q statistics and p values.794952 0.936441 0.074997 −0.845240 0.691681 9.037333 6.228331 3.893438 0.05501 10.113984 −0.118291 5.001750 0.922639 Q Stat(BP) 0.651618 3.780540 7.899356 p value 0.787756 0.040101 −0.744875 0.058378 −0.535806 0.845771 0.733228 7.969240 0.846509 0.285494 0.042856 0.040101 −0.36 Alexis Lazaridis Table 2.157803 −0. Autocorrelation (AC).728756 4.780582 7.061329 −0.151904 −0.081662 −0. .067540 0.795634 0.005026 −0. Residuals: Autocorrelations.864417 8.037162 6.028063 −0.656414 6.001015 −0.008189 −0.022368 −0.823877 0.954772 7.222832 8.072482 0.580937 10.27276 p value 0.750367 0.812250 7.304969 4.643069 0.072784 0.942248 0.971016 0.455202 0.053178 PAC 0.815469 6.007846 0.656867 0.955133 0.893367 0.146438 5.828786 0.884151 0.377916 0.736176 0.918979 0.134538 −0.099724 −0.069642 0.023203 0.246254 0.225515 0.118163 5.567689 0.029101 −0.972891 0. PAC.05493 10.033816 6.290504 0.017969 −0. Values of k 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 AC 0. The residuals from Eq.115577 5.249432 4.645861 0.506394 0.624397 6.165620 0.589090 0.
However. its value is omitted from Eq. we set the laglength q. In some cases. Note that in models like the one we have seen in Eq. by applying the relevant tests as described above. This way. (25) and that 0 zi−1 = zi−1 . p.e. such a variable is the trend. resulting in this case to: rs = 0.. (25) Be sure that the noises in Eq. Assuming that {zi } is DSS. This means that for a signiﬁcance level α > 0. which yields the smallest pvalue for the corresponding Spearman’s correlation coefﬁcient (rs ). which has to be weighed somehow in order to be transformed to a difference stationary series (DSS). by simply adopting the results produced by some commercial computer programs. 7. is to select the explanatory variable. t = 2. (24).05. we may reach some faulty conclusions. Holden and Perman. this transformation can be achieved if the initial series is expressed in real terms. .A Novel Method of Estimation Under CoIntegration 37 Another easy and practical way to trace heteroscedasticity. for α = 0. (24). we proceed as follows: • Start with k = 1 and estimate the regression: q k zi = β1 + β2 k−1 zi−1 + β3 ti + j=1 βj+3 k zi−j + ui . Hence. heteroscedasticity is present.3739. or as percentage of growth. the (natural) logs [i. ln(xi )] are considered. the analytical stepbystep application of this test (Enders. Case Study 3: How to Obtain Reliable Results Regarding the Order of Integration? It is a common practice to apply DF/ADF for determining the order of integration for a given series. On the other hand. we present a comparatively simple procedure to determine the order of integration of a DSS. Here. leaves the impression that the cure is worse than the disease itself. It should be noted at this point that according to the results obtained by a wellknown commercial package. 64–65). the series {xi }. In usual applications. 1994. as shown in Table 1. given that { xi /xi } > { ln(xi )} > { xi /xi−1 }. 257. should be considered as I(2).0219. pp. 1995. p = 0. which is not the case. (25) are white. Note that if k = 1.394.022. by inspecting a model analogous to Eq. which is the usual practice in many applications. we have an initially NI series {xi }.
b) procedure yields. 38) after applying DF/ADF test reports that “.10 5.73 8. The nominal money supply might be either. this means that the series {zi } is I(k − 1). then increase k by 1(k = k + 1) and repeat the same procedure.47 5. i.34 • Proceed to test the hypothesis: H0 : β2 = β3 = 0. Harris (1995.36 5. .73 6. • In case of known structural break(s) in the series.43 8.24 6. (21). . such that: di = 0 wi if i ≤ r if i > r where wi = 1(∀ i) for the shift in mean. (26). It is recalled that r denotes the date of the break or shift. p. The critical values (Dickey and Fuller. testing at each stage so that the model disturbances are white noises.61 9. a dummy di should be added in Eq. F statistic should be much greater than the corresponding critical values.39 5. . with this particular test.30 6.e. Sample size 25 50 100 250 500 >500 Critical values for the DF Ftest.05 7.61 5. to facilitate the presentation. it was not possible to get a precise answer as to whether {mi } is either I(1) or I(2). With this speciﬁcation. we have to consider.49 6. (1992a.. α = 0. If we reject Eq.34 6. .38 Alexis Lazaridis Table 3. . in this case. starting from estimating Eq.91 5. it is clear that. (26) To compare the computed F statistic. We applied the proposed technique to ﬁnd the order of integration of the variable mi as shown in Eq. as shown in Table 3.34 8. the DickeyFuller F distribution.25 α = 0.”.31 8. (25). (26). . and the interest rate as I(1) and prices as (I2). or wi = ti for the shift in trend. If the null hypothesis is not rejected. 1981) are reproduced in Table 3. Note that in order to reject Eq. the results obtained are alike the ones that Perron et al. given its path over time. (25). which is the log of nominal M1 (money supply). Since the latter variable is mi .01 10.27 α = 0.
0608.1. 90) = 5.F(2. rs = −0. p = 0. 91) = 8. it can be easily veriﬁed that the proper laglength is 6 (q = 6). underline the ﬁrst stage of this procedure. as shown in Table 2. 88) = 4. q=6 All pvalues are greater than 0. rs = −0. as shown in Table 2. For q = 4 All pvalues are greater than 0. which is analogous to Table 2. as we have seen in Table 2.05).F(2. Further. q=7 All pvalues are greater than 0. For q = 3 Most of the pvalues in the 5th column of the table. From these analytical results.0395. ˆ (27) Regarding the residuals. we present some estimation results with different values of q. which is analogous to Table 2. after properly selecting the value of q(q = 5).127.A Novel Method of Estimation Under CoIntegration 39 To show the details. but it is inferior when compared to the previous one. This laglength is also acceptable. . we have: rs = −0. are: 2 mi = 0. Hence. F(2.1928. It may also be worthy to mention that according to the normality test (JarqueBera). we should accept the null.8(p = 0. are equal to zero.0047 − 0.04.21784) 2 (0. p = 0. 92) = 4. p = 0. Obviously.1.166.44. we face the problem of heteroscedasticity.2278.937 is less than the corresponding critical values as shown in Table 3.2094.F(2. JarqueBera = 0.01. This laglength is acceptable. 0.894892 mi−1 + 0. as shown in Table 2.000108) + j=1 ˆ βj+3 mi−j + ui . p = 0. This laglength is questionable. p = 0. The estimation results.F(2.1.937.05652. for 100 observations and α = (0. since for α > 0. all pvalues in the 5th column of the table. 94) = 4.1.1556.768. rs = −0. this is not the proper laglength.193.0046) 5 (0. For q = 5 All pvalues are greater than 0.67). rs = −0. This laglength is quite acceptable. we accept Eq.00033 ti (0.1038. are greater than 0.2524. (26) and increase the value of k by one.1. The value of F = 5.
73 for α = 0.495485) 3 2 mi−1 − 0. rs = −0. Hence.65(p = 0. Even from Fig. .000077) + j=1 ˆ βj+3 mi−j + ui . In such cases. are as follows. i.01. 3 mi = 0. (26) is not rejected.827537 (0. Since F < 8.01 (Table 3). the null in Eq.e. It should be noted that in some exceptional or problematic cases. 8. (26) is rejected. This is also veriﬁed from Fig.. compared to the ones obtained by applying the ML method. we propose to increase the value of k and proceed to the next stage. Rather.164. 92) = 16. we increase the value of k(k = 3) for the next step.40 Alexis Lazaridis Figure 4. after properly setting the laglength (q = 4).000831 − 2. using the SVD. it can be also veriﬁed from this ﬁgure that { 2 mi } is stationary. one can easily see that the series { mi } is not stationary. Since the value of F is much greater than 8.000006 ti (0. the null seen in Eq. 4. 4. ˆ (28) No autocorrelation is present. According to the previous ﬁndings. p = 0.723). may have a value marginally greater than the corresponding critical value in Table 3. where { mi } and { 2 mi } are plotted. even for α = 0.F(2. (26). Thus. The series { mi } and { 2m i }.73 for α = 0.004885) 4 (0. the order of integration of the series {mi } is k − 1 = 2.108. {mi } ∼ I(2). JarqueBera = 0. The estimation results. Conclusion We have derived a simpliﬁed method for computing cointegrating vectors.283. it seems to be trendstationary.01 (Table 3). In many applications. we found that this technique produces better results. the F statistic to test Eq.
London: Prentice Hall. F and V Dijk (1994). In Cointegration for the Applied Economist. BB Rao (ed. JW Galbraith and DF Hendry (1993). A and D Basu (1983). 297–306. Cointegration. R (1995). 9–45. 169–210. Lazaridis. 61–103. we can easily and directly include in the cointegrating vectors.). Review of Economics and Statistics 65(2). BB Rao (ed. J and J DiNardo (1997). . UK: St. Likelihood ratio statistics for autoregressive time series with a unit root. New York: McGrawHill International. Maximum likelihood estimation and inference on cointegration with application to the demand for money. New York: Oxford University Press. Error Correction. we found that the errors corresponding to the ﬁnally selected cointegration vector. New York: John Wiley. P and T Vogelsang (1992a). Quantity and Quality 120. Martin’s Press. D and R Perman (1994). Martin’s Press. Applied Econometrics Time Series. RF Engle and CWJ Granger (eds. A primer on cointegration with an application to money and income. DW Jansen and DL Thornton (1994). and the Econometric Analysis of NonStationary Data.). A note regarding the problem of perfect multicollinearity. Harris. Further. we show that the conventional methods for unit root test may produce faulty results. S and K Juselius (1990). JJ Dolado. Econometric Models. Direct cointegration testing in error correction models. Holden. Oxford Bulletin of Economics and Statistics 52. UK: St. Kleibergen. Stochastic optimal control by pseudoinverse. In LongRun Economic Relationships. W (1995). Perron. It should be noted that the procedure proposed here is not mentioned in the relevant literature. 47–112. Enders. as many deterministic components as we want to. It should be emphasized that with this kind of series (NI). Using Cointegration Analysis in Econometric Modelling. Besides. In Cointegration for the Applied Economist.). DA. 301–320. Unit roots and cointegration for the economist. A (1986). Critical values for cointegration tests. DA and WA Fuller (1981). Lazaridis. 1057–1072. Dickey. we analyzed the stages of a relevant technique to obtain reliable results regarding the order of integration of a given series provided that it is DSS. We also demonstrated the properties of the socalled near integrated series. Journal of Business and Economic Statistics 10.A Novel Method of Estimation Under CoIntegration 41 Due to the properties of SVD. 347–350. are characterized by the minimum variance property. which can be traced by applying this technique. References Banerjee. A. Nonstationarity and level shifts with an application to purchasing power parity. Econometrica 49. Johansen. Journal of Econometrics 63. McKinnon. Johnston. J (1991). Dickey. New York: Oxford University Press.
467–470. Journal of Business and Economic Statistics 10. . Sims. Econometrica 48. Testing for a unit root in a time series with a changing mean: Corrections and extensions. CA (1980).42 Alexis Lazaridis Perron. Macroeconomics and reality. 1–48. P and T Vogelsang (1992b).
1963. the relationship between money and income vary over time due to changes in the lag structure. Nagasaki. more reasonable results can be obtained if we try to estimate dynamics and movements of these relationships over 43 . Hamburger. Blanchard and Perotti (2002) as well as Smets and Wouters (2003) have obtained similar characteristics of the monetary and ﬁscal policy. Sargent and Wallace (1973) as well as Lucas (1972) have drawn attention to the role of timedependant response coefﬁcients to changes in stabilization policies. Mayer (1967). Cargill and Meyer (1977. Greece 1. Tobin. 1967. Although the lengths of lags are important in determining the role of monetaryﬁscal policies. reﬂects considerable doubts on the policy prescription based on constant coefﬁcient estimates. 1971. Meyer. 1970). Poole (1975) and Warburton (l971) attempted to analyze the empirical variability of the lag structure by analyzing the turning points in general business activity. However. 1978) have estimated the timevarying relationship between national income and monetaryﬁscal policies. Basu Nagasaki University. Japan Alexis Lazaridis Aristotle University of Thessalonikki. Introduction There are large volumes of literatures on the effectiveness of monetary and ﬁscal policies and on lags in the effects of monetary policies (Friedman and Schwartz. The existence of time dependency of effects of monetary ﬁscal policies.Topic 2 Time Varying Responses of Output to Monetary and Fiscal Policy Dipak R. lead to exclusions of prior information inherent in the models from the estimation process. The results then indicated existences of time variations and exclusions of time variations of the coefﬁcients.
It should be noted that a certain element of zi is 1 and corresponds to the constant terms. we can write ˜x ˜˜ ˜z xi+1 = A˜ i + Cui+1 + D˜ i+1 + ei+1 . I ei+1 = D= ei+1 ˜ . 2. 1992) provide a fruitful approach. and ei is the vector ˜ ˜ ˜ ˜ ˜ of noises. For this purpose. This includes some necessary adjustments for a developing country.g. Radenkovic and Michel. e. cannot have immediate application in policy planning. 0 ˜ D . the results of the timevarying impacts of monetaryﬁscal policies are analyzed. ui is the vector of con˜ ˜ trol variables. however. VAR approaches etc. 0 xi+1 ˜ . The Method of Adaptive Optimization Suppose a dynamic econometric model can be converted to an equivalent ﬁrst order dynamic system of the form ˜x ˜˜ ˜z xi = A˜ i−1 + Cui + D˜ i + ei ˜ ˜ (1) where xi is the vector of endogenous variables. C. Shifting to period i + 1. 1992. ui ˜ ˜ A 0 xi+1 = 0 .. 1989). Humphrey.44 Dipak R. 1993. Basu and Lazaridis. i. 1981. In Section 3. the method of adaptive optimization and the updating method of the timevarying model are analyzed. 0 A= C= . In Section 4. Brannas and Westlund. The parameters of ˜ the above system are assumed to be random. Basu and Alexis Lazaridis time. These methods. are mainly descriptive. adaptive control systems with timevarying parameters (Astrom and Wittenmark. to estimate timevarying systems. rational expectation modeling. Nicolao. and D are coefﬁcient matrices of proper dimensions.. Khan and Montiel. The model follows the basic theoretical ideas of monetary approach to balance of payments and structural adjustments (Berdell. 1986.. 1995.e. The purpose of this paper is to explore this possibility in terms of a planning model where adaptive control rules will be implemented so that we can derive timevarying reduced form coefﬁcients from the original structural model to demonstrate the dynamics of monetaryﬁscal policies. we deﬁne the following augmented vectors and matrices. the model and its estimates are described. There are alternative estimation methods. ˜ ˜ (2) Now. 1980. zi is the vector of exogenous variables. ui+1 ˜ ˜ C . xi = xi ˜ . 1995. In Section 2. which are assumed to be white Gaussian and A.
. T ) is the desired ˆ state and control trajectory. 2. .Time Varying Responses 45 Hence (2) can be written as xi+1 = Axi + Cui+1 + D˜ i+1 + ei+1 ˜ z (3) Using the linear advance operator L. according to our formulation. . {Q} is the sequence of weighing matrices and xi (i = 1. It is noted that T indicates the terminal time of the control period. KT = QT i (6) (7) (8) (9) Ki+1 D)zi + i (Ei C = −(Ei C Ki+1 C)−1 (Ei C Ki+1 A) i (Ei C Ki = Ei A Ki+1 A + ˆ hT = −QT xT hi = i (Ei C Ki−1 A) + Qi )hi+1 + (Ei A Ki+1 D)zi + (Ei A )hi+1 − Qi xi ˆ gi = −(Ei C Ki+1 C)−1 [(Ei C Ki+1 D)zi + (Ei C )hi+1 ] ∗ xi u∗ i (10) (11) (12) (13) = [Ei A + (Ei C) = ∗ i xi ∗ i ] xi + (Ei C) gi + (Ei D)zi + gi . such that Lk yi = yi+k and deﬁning the vectors u. . 1995). (4). z and ε from ui = Lui ˜ z zi = L˜ i εi = Lei then Eq. (3) can take the form xi+1 = Axi + Cui + Dzi + εi which is a typical linear control system. We can formulate an optimal control problem of the general form 1 min J = xT − xT ˆ 2 2 QT (4) 1 + 2 T −1 xi − x i ˆ i=1 2 Qi (5) subject to the system transition equation shown in Eq. The solution to this problem can be obtained according to the minimization principle by solving the Ricattitype equations (Astrom and Wittenmark.
Ei C Ki+1 D are evaluated. Basu and Alexis Lazaridis ∗ where u∗ (i = 0. i is the matrix of feedback coefﬁcients and gi is the vector of intercepts. In the above equations. say of the vector π. along with the implementation of the control rules. which constitutes the solution to the stated optimal control problem. The reduced form coefﬁcients matrix is then deﬁned from: = −B−1 . . given all information up to the period i. Updating Method of ReducedForm Coefﬁcients and Their CoVariance Matrices Suppose we have a simultaneousequation system of the form XB + U =R (14) where X is the matrix of endogenous variable deﬁned on EN × En and B is the matrix of structural coefﬁcients. . . T − 1). (14a) Goldberger et al. where the joint densities of matrices A. These rules should be readjusted according to “passivelearning” methods. deﬁned on EN × Eg . which assumed to be consistent and asymptotically unbiased estimate of (B ). and D are assumed to remain constant over the control period.1. C. The notation Ei denotes the conditional expectations. . which are to be updated continuously. taking into account the reduced form coefﬁcients of the econometric model and their covariance matrix. . 2. which refer to the endogenous variables and is deﬁned on En ×En . R is the matrix of noises deﬁned on EN × En . the i corresponding state trajectory. which refer to the explanatory variables. 1. Expressions like Ei C Ki+1 C. Ei C Ki+1 A. which consists of the g column of matrix ˆ ˆ can be approximated by ˜ = ˆ ˆ ⊗ (B )−1 Ig F ˆ ˆ ⊗ (B )−1 Ig (15) ˆ where ⊗ denotes the Kroneker product.46 Dipak R. the optimal control sequence and xi+1 . U is the matrix of explanatory variables deﬁned on EN × Eg and is the matrix of the structural coefﬁcients. ˆ and B are the estimated coefﬁcients by standard econometric techniques and F denotes the asymptotic ˆ covariance matrix of the n + g columns of (B ˆ ). (1961) have shown that the asymptotic covariance matrix.
ugi (17) where uij is the element of the jth column and ith row of matrix U. . . . The vector π ∈ Eng . E(εi ) = 0. wi ∈ En and the observation matrix Hi is deﬁned on En × Eng . we can rewrite Eq.e. N − 1. the coefﬁcients vector π is assumed random with constant expectation overtime. (18) (18a) In a timevarying and stochastic model we can have πi+1 = πi + εi is the noise. . Equation (17) can be written in a compact form. E(εi wi+1 ) = 0 E(wi+1 ) = 0 . (19) We make the following assumptions. In a timeinvariant econometric model. . where εi ∈ Based on these. we can write u1i 0 · · · 0 u2i 0 · · · 0 ugi 0 · · · 0 0 u1i · · · 0 0 u2i · · · 0 0 ugi · · · 0 · · · · · · · · · π + wi xi = · · · · · · · · · · · · · · · · · · 0 0 · · · u1i 0 0 · · · u2i · · · 0 . 2. (14) and (14a) we have BX = − U + R ⇒ X = −B−1 U + B−1 R ⇒X = B−1 R U +W (16) where W = Denoting the ith column of matrix X by xi and the ith column of matrix W by wi . i = 1. . . (a) The vector xi+1 and matrix Hi+1 can he measured exactly for all i. . N (17a) where xi ∈ En . as mentioned earlier.Time Varying Responses 47 Combining Eqs.. consists of the g column of matrix . so that πi+1 = πi . (b) The noises εi and wi+1 are independent discrete white noises with known statistics. . . as xi = H i π + w i . for all i. i. (17a) as Eng xi+1 = Hi+1 πi+1 + wi+1 i = 0. 1.
along with their covariance matrices. (c) The state vector is normally distributed with a ﬁnite covariance matrix. where δij is the Kronecker delta. the corresponding conditional probability densities are: p(πi+1 πi ) = p(εi ) p(xi+1 πi+1 ) = p(wi+1 ). Lazaridis. the Jacobians of the transformation of εi into πi+1 and of wi+1 into xi+1 are unities. . 1986.. The recursive process is initiated by regarding K0 and H0 as null matri∗ ces and computing π0 and S0 from ∗ π0 = π i. 1980) is given by the following set of recursive equations. The solution to this problem (Basu and Lazaridis. . The reduced form coefﬁcients. E(wi wi ) = Q2 δij . Hence. (18a) and (19). x2 . (18a) and (19).48 Dipak R. Under the above assumptions and given Eqs. . x3 .e. can be updated by this recursive process and at each stage the set of Riccati . xi+1 . Basu and Alexis Lazaridis E(ci εi ) = Q1 δij . (d) Regarding Eqs. the reduced form coefﬁcients (columns of matrix ˆ ) ˆ S0 = P0 = ˜ . and The above covariance matrices. as it is brieﬂy shown in Appendix A. assumed to be positive deﬁnite. ∗ ∗ ∗ πi+1 = πi + Ki+1 (xi+1 − Hi+1 πi ) (20) (21) (22) (23) Ki+1 = Si+1 Hi+1 Q−1 2 −1 −1 Si+1 = Pi+1 + Hi+1 Q−1 Hi+1 2 −1 Pi+1 = (Q1 + Si )−1 . the problem set is to evaluate ∗ E(πi+1 xi+1 ) = πi+1 and cov (πi+1 xi+1 ) = Si+1 (the error covariance matrix) where xi+1 = x1 . .
The decision to choose this particular model is inﬂuenced by the fact that it is commonly known as the fundbank adjustment policy model. using the full information maximum likelihood (FIML) method. (1).e. Khan and Montiel. its associated matrices. We ﬁrst convert the structural econometric model to a statevariable form according to Eq. the objective function is to be minimized. 1990).. probability density functions and use these as new information. we can update all probability density functions and all other associated matrices using Eqs. Thereafter. Once we specify the targets for the state and control variables. 1989. changing information sets derived from the implementation of the optimization process. (6)–(13). used extensively by the World Bank and the International Monetary Fund (IMF). In the version adopted for this paper.Time Varying Responses 49 equations should be updated accordingly so that adaptive control rules may be derived. This mechanism is a great improvement compared to the ﬁxedcoefﬁcient model as we can incorporate in our calculation. The dynamics of the timevarying coefﬁcients (in terms of responsemultipliers) are described later in Section 4. which was estimated with data from the Indian economy. the difference between the old and new estimates are insigniﬁcant according to some preassigned criteria. the weights attached to each state and control variables. then we can calculate the results of the optimization process for the entire period using Eqs. to the planned target for national income set by the planning authority and to various market forces reﬂected in the money market interest . 1976. we can obtain the ﬁnal updated coefﬁcients of the model along with the results of the optimization process. (20)–(23). there is no explicit investment or consumption function. Once we estimate the model (described in the next section). i. MonetaryFiscal Policy Model We describe below a monetaryﬁscal policy model. we can obtain both the structural model and the probability density functions along with all associated matrices mentioned above. We can repeat the optimization process over and over as we update the model. When the process will converge. 3. because the domestic absorption can reﬂect the combined response of both private and public investment and consumption. These will effectively update the coefﬁcients of the model in its statevariable form. The model accepts the deﬁnition of balance of payments and money stock according to monetary approach of balance of payments (Khan.
which may include both amortization and interest payments. i. 2007) has postulated a similar combined consumptioninvestment function for the UK as well. EXR is the exchange rate. P is the price level. Absorption Function and National Income Domestic absorption reﬂects the behavior of both the private and public sector regarding consumption and investment too. The government budget deﬁcit (BDt ) is deﬁned by the following equation BDt = (Gt + LRt + PFt ) − (TYt + GBSt + AFt + BFt ) where PF is the foreign payments due to existing foreign debts. The “New Cambridge” model of the UK economy (Cripps and Godley. The relation between the national income and adsorption can be deﬁned as follows: Yt = At + TYt + Rt − Gt + GBSt − LRt where A is the value of domestic absorption.1. t = time period. (4). in order to formulate the optimal control problem. the recursive estimation process of the timevarying responsemultipliers is presented in brief in Appendix A. using the data from the Indian economy for the period 1951–1996. which is the system transition equation. G is the public consumption. Godley and Lavoie. We assume a linear relationship. 3. 1976. The estimated econometric model can be transformed into a state variable form (Basu and Lazaridis. (A/P)t = a0 + a1 (Y/P)t − a2 (IR)t − a3 EXRt . 1986). TY is the government tax revenue. AF is the foreign .50 Dipak R. market interest rate and foreign exchange rate. GBS is the government bond sales. Basu and Alexis Lazaridis rate and exchange rate. to minimize the quadratic objective function. Y is the national income. The estimated parameters were then used as the initial starting point for the stochastic control model.. IR is the market interest rates. As already mentioned. The model was estimated by applying FIML method. LR is the net lending by the central government to the states (which is not part of the planned public expenditure) and R is the changes in the foreign exchange reserve reﬂecting the behavior of the foreign trade sector. Domestic real adsorption is inﬂuenced by real national income. subject to Eq. 2002.e.
RR. depends on the demand for loans created by private sectors and the commercial bank’s willingness to lend. We assume that the desired reserve ratio RR∗ is a function of national income and t .e. We assume AF and LR as exogenous. ( R + NDA) reﬂect the stock of highpowered money and the expression within the square bracket is the moneymultiplier. the government can inﬂuence CD and RR to control the money supply. the net domestic asset (NDA) creation by the central bank. PF depends on the level of existing foreign debt and the world interest rate. on the ability of the domestic economy to absorb (A) on the requirements of the government (G) and on the alternative sources of ﬁnances reﬂected on the tax revenue (TY) and on government’s borrowing from the central bank i. However. although a sizable part of the foreign borrowing can be at a concessional rate t PFt = a4 + a5 r=−20 FBr + a6 WIR EXR .2. t Government bond sales (GBS) depends on its attractiveness reﬂected on the interest rate (IR). whereas FB. which is the actual reserve ratio.e. as follows: MSt = [(1 + CD)/(CD + RR)]t ( R + NDA)t . GBS t = a7 + a8 At + a9 IRt + a10 Gt − a11 TYt − a12 NDAt 3.. and TY as policy instruments.Time Varying Responses 51 assistance which is an insigniﬁcant feature. i.. G. assuming only the government can borrow from foreign sources. which depends on credit to deposit ratio of the commercial banking sector (CD) and the reserve to deposit liabilities in the commercial banking sector (RR). Monetary Sector We assume the ﬂow equilibrium in the money market. However. R depends on the foreign trade sector. The stock of money supply depends on the stock of high powered money and the moneymultiplier. Whereas NDA is an instrument. MDt = Mt = MSt where MD is the money demand and MS is the money supply. FB is the total foreign borrowing.
We. The domestic price level depends on domestic economic activity (particularly changes in the agricultural sector) and the import cost (IMC ). . representing the domestic economic activity. However. and the central bank discount rate. particularly among the labor force. Basu and Alexis Lazaridis market interest rate i.e. 3. a different logic may emerge to a developing country where the use of banks as an institution is not widespread. as most payments would have to be made in cash. t t The commercial banks may adjust their actual reserve ratio to the desired reserve ratio with a lag. MDt = a20 − a21 IRt + a22 Yt . after some initial adjustment period. CDt = a16 + a17 IRt + a18 Yt − a19 Yt−1 . IRt = a23 − a29 (MSt ) + a29 (Yt ) + a26 (CIt ). as measured by the market interest rate and national income. expect the sign of the coefﬁcient for the current national income to be positive and that for the lagged national income to be negative. national income. the entrepreneurs will have to maintain a huge cashbalance and run down deposits simply to pay various dues.52 Dipak R. RRt = α(RR∗ − RRt−1 ) t Thus.. It is possible that corporations would be more efﬁcient. If there is an expansion in economic activity. we can write RRt = αa13 + αa14 Yt + αa15 IRt + (l − α)RRt−1 . The demand for money is assumed to be a function of the market interest rate and the national income. therefore. Price and Interest Rate The market rate of interest (IR) is determined by the supply of money. Khan (1976) has postulated that the effect of national income should be negative because “individuals and corporations tend to become more efﬁcient in their management of cash balances as their income rises”. RR∗ = a13 + a14 Yt + a15 IR∗ . The ratio of credit to deposit liabilities with the commercial bank system is affected by the opportunity cost of holding currency.3. where 0 < α < 1.
The estimated parameters were used as the initial starting point for the stochastic control model developed. FB is the foreign borrowing. IMt = a33 + a34 Yt − a35 (IMCt ). Estimated model is presented in Appendix B. The above analytical structure was estimated using expected values of each variable. PFT. The desired price level reﬂects private sector’s reaction to their expected domestic absorption of the expected import cost.e.. K and AF are exogenous.e. where X. Balance of Payments The balance of payments ( R) is equal to the changes in the stock of international reserve i. 0 < β < 1. IM is the value of imports. K is the foreign capital inﬂows. Rt = Xt − IMt + Kt + PFTt + FBt − PFt + AFt where X is the value of exports. 3. we get Pt = βa27 − βa27 (At ) + βa29 (IMCt ) + (1 − β)Pt−1 . PF is the foreign payments by the central bank and AF is the foreign aid and grants. and the import cost i. Thus..4.Time Varying Responses 53 The import cost in turn depends on the exchange rate (EXR) and the world price of imported goods (WPM). PFT is the private sectors transactions. import IM is determined by the national income. with expectations being adaptive. Import cost (IMC) is represented by the following equation timevarying responses of output to monetary and ﬁscal policy IMCt = a30 − a31 (EXRt ) + a32 (WPM t ) Exchange rate (EXR) can be an instrument variable whereas WPM is an exogenous variable. Suppose the actual price will move according to the difference between the desired price in period t and the actual price level in the previous period Pt = β(Pt∗ − Pt−1 ). We assume the desired price level (P ∗ ) is represented by the following equation: Pt∗ = a27 − a28 (At ) + a29 (IMCt ). .
. and et are the residuals from the ﬁtted model. n (24) where ρ is a real number and (et ) is a sequence of normally distributed random variables with mean zero and variance σt2 . so the system is stable. . The null hypothesis is that ρ = 1 where et = Yt − Yt−1 and thus k = 0. transform our model to the following form: Yt = ρYt−1 + et .7824260 0.5. the rank of the controllability matrix is the same as the dimension of the reduced state vector so that the system is controllable and observable.e.1213610 0. i. however.0000001 0. Qm is distributed as a chisquared random variable with m degrees of freedom.3087129 0. .0000008 − 0.0000001 All the roots are real and they are less than unity. then under the null hypothesis. ˆ .] We can.2442519 0. m = n − k. Box and Pierce (1970) suggested the following test statistic m Qm = n k=1 2 rk (25) where n n rk = t=k+1 et et−k ˆ ˆ t=1 e2 ˆt (26) n = the number of observations.5123629 0. [In the equivalent control system. t = 1. Stability of the Model Characteristic roots (real) of the system transition matrix − 0.54 Dipak R. Basu and Alexis Lazaridis 3. . 2. where k = the number of parameters estimated.0000008 − 0. the system parameters can be identiﬁed. ˆ If (Yt ) satisﬁes the system..
P BD CD MS IR −0.00004 0.00500 0. 1990).00403 0. where the null hypothesis is rejected at 0.00599 −0.15022 −0.15603 0.00041 0. Table 1.00884 0.00310 −0.06057 .06599 a Note: This responsemultiplier refers to the starting point of the planning.00550 −0.00543 −0.00488 −0.00426 −0.00330 −0.00108 0.01996 0. Endogenous EXR G TY CI Y GBS Responsemultiplier period 2.00387 −0. indicating overall goodness of ﬁt in a FIML estimation.13965 −0. GBS P BD CD MS IR −0.00210 −0.00091 0.03637 −0. Das and Cristi (1990) have analyzed in detail the condition for the stability and robustness of the timevarying stochastic optimal control system.00096 0.02283 −0.00568 0.04073 0.00265 0.03572 0.00125 −0. Exogenous variable EXR G TY CI Y Responsemultipliera and endogenous variable. So.00458 −0.01017 −0.00013 −0.03178 0.05336 0.02209 −0.04953 −0.04592 −0.15549 0.85.04758 −0. λ2 = 563.00095 0.03987 0.00027 0.00606 −0.01699 0. the responsemultipliers satisfy conditions of slow timevariations (Das and Cristi.03293 0.03116 −0.00169 −0.02882 0.94546 −0.15688 0.00020 −0.14849 0.01231 0.00181 −0.a P BD CD MS IR −0.02682 −0.00570 −0. The condition is that the dynamic responsemultipliers of the model should have slow timevariations.00226 0.Time Varying Responses 55 The estimated value of Qm is 3.04091 −0.05 signiﬁcance level. Endogenous EXR G TY CI Y GBS Responsemultiplier period 1.01466 0.00273 0.00668 0.00097 0.00092 0.02792 −0.02254 −0.01240 −0.00092 0. As demonstrated in Tables 1–7.03590 −0.00121 −0.05593 0.02961 −0. 1990.09. Table 3.00018 −0.00111 −0.06120 a Note: Period 1 refers to the ﬁrst period of the planning horizon. Tsakalis and Ioannou. we accept the alternative hypothesis that if ρ < 1. Table 2.00516 0.02324 0.03500 0.00246 −0.00625 0.00399 −0.02791 −0. therefore the system is stable.00576 −0.05458 0.
00726 0.06228 Table 5.00595 −0.16281 0.00133 −0.01967 −0.16455 0.00069 0.04217 0.00042 −0.00733 0.02209 0.14832 −0.00399 −0. 1 2 −0.02254 0.02067 −0.00573 −0.00201 −0.02470 −0.00089 0.14849 0.00551 0.00088 0.05269 0.00570 −0. P BD CD MS IR −0. Periods dY /dCI dY /dEXR dP/dG dY /dG dP/dTY dY /dTY dGBS/dG Response of output to monetary and ﬁscal policies.00869 0.00558 0.57357 0.07248 0.00200 0.00135 −0.00695 −0.02924 0.02186 −0.02691 0.02472 −0.00236 −0.14832 −0.16536 0.00104 −0.02470 0.02199 −0.01676 −0.02058 −0.15688 −0.00402 0.01206 −0.00129 0.01033 −0.00403 −0.16455 0. Basu and Alexis Lazaridis Table 4.00403 .06325 −0.05566 Table 7.58067 −0.10904 −0.00499 0.00268 −0.02220 0.02067 0.16655 −0.00402 −0.02133 −0.10760 0. P BD CD MS IR −0.04174 −0.00268 0.16536 0.01676 0.00733 −0.00499 5 −0.01160 0.16281 −0.05190 0.00116 0. P BD CD MS IR −0.05953 Table 6.04987 0.00671 0.04676 −0.16655 0.03001 −0.55026 −0. Endogenous EXR G TY CI Y GBS Responsemultiplier period 7.05923 0.15603 0.00048 0.00558 −0.00200 −0.00293 −0.03572 −0.00273 −0.00215 0.00530 −0.02058 0.00062 0.00345 −0.00152 −0.07031 −0.00267 −0.00568 3 −0.00410 −0.01033 0.03203 −0.18078 0.00215 −0.01434 −0.56 Dipak R.01013 0.01149 −0.03001 0.02133 0.00465 −0.03590 0.00558 7 −0. Endogenous EXR G TY CI Y GBS Responsemultiplier period 3.50890 0.00252 −0.15549 −0.02188 0.07779 −0.00226 −0.04465 −0.18078 0. Endogenous EXR G TY CI Y GBS Responsemultiplier period 5.
devaluation would also have negative effect on the government bond sales. However. national income and domestic activity would have negative effects. Increased tax revenue will depress national income. It is essential that for the stability of the control system. as reﬂected in the CD and money demand. 1990). on the other hand. these dynamics of the responsemultiplier should be slow (Das and Cristi. whereas devaluation will reduce import abilities signiﬁcantly. Analysis of the responsemultipliers shows that devaluation would have negative effect on the national income. Devaluations also can have an inﬂationary effect due to increased import costs. interest rate. the increased public expenditure will have inﬂationary impacts on the economy at the same time. which would depress the activity of the private sector. Due to the downturn of the economic activity. will have positive effects on every variable. private sector’s activity will be reduced. due to increased rate of interest. there will be less demand for government bonds. Lower level of national income will reduce money supply and the private sector’s activity. as it will reduce government bond sales. Because of this CD will decline. will have reduced price level but budget deﬁcit will go up as well because of the reduced demand for government bonds. This is due to the fact that Indian exports may not be that elastic in response to devaluation.Time Varying Responses 57 4. If the public expenditure goes . Policy Analysis The dynamics of the monetary and ﬁscal policy are analyzed in terms of the analysis of the dynamic responsemultipliers and their relationship with the monetaryﬁscal policy regimes. Reduced level of national income. Tsakalis and Ioannou. The movements of the coefﬁcients response of responsemultipliers in a monetary policy framework are described below.012 in period 7. and money demand. The responsemultiplier of the system will move over time within an adaptive control framework (Tables 1–6).003 in period 1 to 0. As a result. which will be reﬂected in the reduced currency to deposit ratio and the reduced level of money demand. CD. At the same time. This implies that increased public expenditure will stimulate the national income and the private sector despite increased interest rate.061 in period 1 to 0. This result is primarily due to the fact that the inﬂuence of G (public expenditure) on IR has increased from 0. budget deﬁcit will be increased due to reduced level of economic activity.055 in period 7. 1990. Government expenditure. Although government bond sales will go up. The effect of the central bank discount rate (CI ) on interest rate varies from 0. Increased rate of central bank’s discount rate will depress national income and general price level. in this case.
in the longer run structural factors.007 in period 6. Tax revenue will have negative impacts on the GNP as well as on the price level. as feared by Friedman and Schwartz (1963). and import costs reﬂected through the exchange rate will affect price levels more signiﬁcantly. and the impacts will decline over time. and it will slightly increase in period 7. Basu and Alexis Lazaridis up. which will have negative impacts on the growth prospects. there will be direct pressure on the central bank to provide loans to the government. The effect of the exchange rate declines gradually. The central bank can put pressure on the commercial banks to extend some loans in public sector and less to the private sector. whereas the effect of the exchange rate on price level is most prominent and does not vary much. It demonstrates that although during the initial phase of planning. which is consistent with the reduced impacts of the tax revenues on the GNP over the planning period. Effects of the CD on prices declines rapidly. The impacts of the public expenditure and tax revenues on the two most important variables. here we are analyzing an optimum path not the historical path. where frequency domain approach of . whereas the effect of the interest rate is cyclical.005 in period 1 to −0.58 Dipak R. the CI will have a similar effect on the currency to deposit ratio. This is due to the cyclical pattern of the central discount rate in the optimum path. the main indicator for the private sector activities. and the CD of the commercial banks will be reduced as a result (at the same time reserve ratio will be increased). The negative impacts of the tax revenue on the GNP is partly explained by the negative effects of tax revenue on the currency to deposit ratios of the commercial banks. The variations of the effects are not unsystematic. public expenditure. 1941). However. tight credit policy may inﬂuence prices. The impact of the government expenditure on the GNP will be reduced. While CD goes up in response to government expenditure (G). The government expenditure also have positive impacts on the government bond sales and the impact will increase over time due to the increasing difﬁculties of raising taxes. GNP (Y ) and price level (P) can be analyzed as follows. Conclusion The importance of timevarying coefﬁcients was recognized in statistical literature long ago (Davis. The effect of the CI on the money supply (MS) will be reduced from −0. 5. The increased government expenditure (G) will increase price level and the impact will be intensiﬁed. It is quite obvious that the impacts of ﬁscal and monetary policies on output will ﬂuctuate over time depending on the direction of change of these policies.
Journal of Finance 32. K Iracki (ed. Brannas. The prevent relevance of Hume’s openeconomy monetary dynamics. D and A Lazaridis (1986). Our approach is a systematic way forward to analyze these dynamics of monetary and ﬁscal policy. Folster and Henrekson. Berdell. 265–274. MA: AddisonWesley. 2006. variations over time are slow which indicates any absence of explosive response (Das and Cristi. Quarterly Journal of Economics. 1972). September. International Journal of System Sciences 17(1). Cargil. 81–85. Muscatelli and Tirelli. Basu. Journal of the American Statistical Association 65(3). An empirical characterization of the dynamic effects of changes in government spending and taxes on output. 1990. However. Timeconsistent monetaryﬁscal policy demands continuous revision. In the above analysis.Time Varying Responses 59 the time series analysis was utilized (Mayer. K and A Westlund (1980).. The implication for the public policy is quite obvious. showed similar characteristics of the monetaryﬁscal policy. and it is important to analyze these changes in order to obtain timeconsistent monetaryﬁscal policy (De Castro. otherwise. 1509–1526. GEP and DA Pierce (1970). September. References Astrom. Results obtained by other researchers showed that the effects of monetary and ﬁscal policy change over time. . TF and RA Meyer (1977). O and R Perotti (2002). 1973.). Distribution of residual autocorrelation in autoregressiveintegrated moving average time series models. 117(4). 427–448. Review of Economics and Statistics LX(1). On the recursive estimation of stochastic and time varying parameters in economic system. 2001. 1329–1368. Reading. The time varying response of income to changes in monetary and ﬁscal policy. The results obtained by using adaptive control method. Box. February. A method of stochastic optimal control by Bayesian ﬁltering techniques. Several authors since then have used varying parameter regression analysis (Cooley and Prescott. 1975). KJ and K Wittenmark (1995). November. Cargil. the effectiveness of the policy may deteriorate and as a result. the effects of monetary and ﬁscal policy on major target variables of the economy may deviate from their desired level. In Optimization Techniques. Economic Journal l05(432). 2005). 1205–1217. JF (1995). Intertemporal stability of the relationship between interest races and prices. Farley et al. adaptive control system was used in a statespace model where the reduced form parameters can move over time. Tsakalis and Ioannou. Blanchard. Cargil and Mayer (1978) also have observed stable movements of the coefﬁcients. Thus the role of the monetaryﬁscal policies on the economy is not unsystematic. 1990). 1–7. TF and RA Mayer (1978). Berlin: Springer Verlag. Adaptive Control. although it can vary over time.
Economica 43. Growth oriented adjustment programs. Muscatelli. Monthly ReviewFederal Reserve Bank of New York. AC. SIAM Journal on Control and Optimization 30(6). A formal analysis of the Cambridge economic policy group Model. Fiscal policy in a stockﬂow consistent (SFC) model. 913–924. 335–348. Basu and Alexis Lazaridis Cripps. European Economic Review 45(8). Analyzing the interaction of monetary and ﬁscal policy: does ﬁscal policy play a valuable role in stabilization. 187–191. Meyer. Jr (1972). TF and EC Prescott (1973). November. Journal of Post Keynesian Economics 30(1). R (1972) Estimating coefﬁcients that change over time. Nicolao. Hamberger. Friedman. A (1980). J. 103–124. Applied Economics 38(8–10). Macroeconomic effects of ﬁscal policy in Spain. M Hinich and T McGuire (1975). TM (1981). 311–332. A marriage between fund and bank models. MS and PJ Montiel (1990). W and M Lavoie (2007). Khan. Davis. Western Economic Journal 5. June. June. Goldberger.60 Dipak R. Growth effects of government expenditure and taxation in rich countries. The lag in the effect of monetary policy: A survey of recent literature. 277–312. Humphrey. Princeton: Princeton University Press. . An adaptive regression model. some criticisms. Journal of Econometrics 3. T (1967). Expectation and the neutrality of money. Aldershot: Edward Elgar. 297–318. M and R Cristi (1990). Money. A monetary history of the United States. Khan. Economic Review. Econometrica 29. Lucas. TM (1993). 1867–1960. Robustness of an adaptive pole placement algorithm in the presence of bounded disturbances and slow time variation of parameters. 762–802. MJ (1971). Application of ﬁltering methods in econometrics. August. Lazaridis. HT (1941). Kaleckian models of growth in a coherent stockﬂow monetary framework: A Kaldorian view. Ranking and Inﬂation. The Analysis of Economic Time Series. Das. 289–297. S and M Henrekson (2001). G (1992). CESIFO Economic Studies 51(4). Mayer. 1251–1269. August. Godley. IMF Staff Papers 36. July. March. 279–306. September. 556–573. International Economic Review 11. Journal of Post Keynesian Economics 24(2). MS and PJ Montiel (1989). AL Nagar and HS Odeh (1961). 1501–1520. Federal Reserve Bank of Richmond 67. 248–256. Some comparison of tests for a shift in the slopes of a multivariate time series model. Adam Smith and the monetary approach to the balance of payments. Khan.A monetary model of balance of payments: The case ofVenezuela. F (2006). F and WAH Godley (1976). May. The lag in effect of monetary policy. De Castro. 1315–1325. IMF Staff Papers 37. International Journal of Systems Science 11(11). Cooley. M and A Schwartz (1963). IEEE Transactions on Automatic Control 35(6). 79–100. Journal of Monetary Economics 2. Farley. MS (1976). W and M Lavoie (2002). June. On the timevarying Riccati difference equation of optimal ﬁltering. Bloomington: Principia Press. December. International Economic Review l4. Humphrey. VA and P Tirelli (2005). April. Godley. RE. The covariance matrices of reduced form coefﬁcients and forecasts for a structural econometric model. 1–17. Journal of Economic Theory 4. Folster. October.
November. Warburton. Journal of Finance 30. 169–183. Veriﬁcation of the selfstabilization mechanism in robust stochastic adaptive control using Lyapunov function arguments. TJ and N Wallace (1973). Journal of Monetary Economics 2. A new direct adaptive control scheme for timevarying plant. Tsakalis. K and P Ioannou (1990). Rational expectations and the theory of economic policy. xi ) = constant exp − C= Q1 0 0 . Radenkovic. M and A Michel (1992). Q−1 2 2 C−1 P(πi+1 . xi )dπi Si = cov(πi xi ) (Si assumed to be invertible) 1 πi+1 − πi 2 xi+1 − Hi+1 πi+1 Q−1 1 0 0 . 356–369. SIAM Journal on Control and Optimization 30(6). 1919–1965. IEEE Transactions on Automatic Control 35(6) June. Appendix A Probability Density Function and Recursive Process for ReEstimation Under the assumptions stated in Section 3 and according to Bayes’ rule. 1123–1175. J (1970). xi+1 πi . C (1971). Smets. V and R Wouters (2003). April. June. 697–712. May. The relationship of monetary deceleration to business cycle peaks. Money and income: Post Hoc Ergo Propter Hoc. Sargent. Variability of the lag in the effect of monetary policy 1919–1965. Western Economic Journal 9. the conditional probability density function of πi given xi+1 is Gaussian and is given by: p(πi+1 xi+1 ) = where 1 ∗ p(πi+1 xi ) = constant exp − πi+1 − πi 2 −1 Si 2 ∗ πi = E(πi xi ) p(πi xi )p(πi+1 . 1270–1294. Journal of European Economic Association 1(5). An estimated dynamic stochastic general equilibrium model of the Euro area. Quarterly Journal of Economics 84. 301–317. Tobin.Time Varying Responses 61 Poole. Q2 C−1 = . xi+1 πi . June. September. W (1975).
i 1 (b) Note that Gi is symmetric. 2 1 ˜ exp − J i dπi 2 (a) Now. constant exp − 1 J i where 3 . 1 1 (c) In view of Eqs. −1 −1 −1 −1 ∗ ∗ J i = πi+1 − πi ) (Q1 − Q1 Gi Q1 + Hi+1 Q2 Hi+1 ) πi+1 − πi ∗ ∗ + xi+1 − Hi+1 πi ) Q−1 xi+1 − Hi+1 πi 2 ∗ ∗ − 2 πi+1 − πi ) Hi+1 Q−1 xi+1 − Hi+1 πi . deﬁne G−1 = (Si−1 + Q−1 ) and consider the expression i 1 ∗ ∗ ˜ J˜i = [(πi − πi ) − Gi Q−1 (πi+1 − πi )] 1 ∗ ∗ × G−1 [(πi − πi ) − Gi Q−1 (πi+1 − πi )]. Eq. (b) and noting that Gi G−1 = I we obtain i ∗ ∗ ∗ ∗ ˜ J˜i = (πi − πi ) G−1 (πi+1 − πi ) − 2(πi − πi ) Q−1 (πi+1 − πi ) i 1 ∗ ∗ + (πi+1 − πi ) Q−1 Gi Q−1 (πi+1 − πi ). (a) can be written as ∗ ∗ ˜ J i = [(πi − πi ) − Gi Q−1 (πi+1 − πi )] 1 × G−1 [(πi i ∗ − πi ) ∗ − Gi Q−1 (πi+1 − πi )] 1 ∗ ∗ + (πi+1 − πi ) (Q−1 + Hi+1 Q−1 Hi+1 − Q−1 Gi Q−1 )(πi+1 − πi ) 1 2 1 1 ∗ ∗ + (xi+1 − Hi+1 πi ) Q−1 (xi+1 − Hi+1 πi ) 2 ∗ ∗ − 2(πi+1 − πi ) Hi+1 Q−1 (xi+1 − Hi+1 πi ). 2 .62 Dipak R. Basu and Alexis Lazaridis Hence p(πi+1 xi+1 ) = constant where ∗ ∗ ˜ J i = (πi − πi ) (Si−1 + Q−1 )(πi − πi ) 1 ∗ ∗ + (πi+1 − πi ) (Q−1 + Hi+1 Q−1 Hi+1 )(πi+1 − πi ) 1 2 ∗ ∗ − 2(πi − πi ) Q−1 (πi − πi ) 1 ∗ ∗ + (xi+1 − Hi+1 πi ) Q−1 (xi+1 − Hi+1 πi ) 2 ∗ ∗ − 2(πi+1 − πi ) Hi+1 Q−1 (xi+1 − Hi+1 πi ). since it is the sum of two (symmetric) covariance matrices..... 2 1 ˜ Integration with respect to πi yields constant exp − 2 J i dπi = . (b) and (c). Expanding Eq.
1980). we can write Q−1 − Q−1 (Si−1 + Q−1 )−1 Q−1 = (Q1 + Si )−1 1 1 1 1 Hence Eq. p(πi+1 xi+1 ) = constant exp − J i . At = 0. by maximizing the conditional density function. Then. To minimize J i . (21)–(23).. consider the composite matrix: Q−1 − Q−1 Gi Q−1 1 1 1 where Gi = (Si−1 + Q−1 )−1 .842Yt + 0.99. ∗ ∗ πi+1 = πi + (Q−1 − Q−1 Gi Q−1 + Hi+1 Q−1 Hi+1 )−1 1 1 1 2 ∗ × Hi+1 Q−1 (xi+1 − Hi+1 πi ).34) (1.5 (4. ρ = 0. The same applies for the vector x0 . 1 −1 Pi+1 . 2 (d) Now. we expand Eq.48) (1. 2 DW = 1. Si+1 and Pi+1 are deﬁned in Eqs.16) (1. The FIML estimates are as follows (R2 and adjusted R2 refer to the corresponding 2 SLS estimates..29) + 55191. 2 Since p(πi+1 xi+1 ) is proportional to the likelihood function.27 .024At−1 − 1.. ∗ in order to determine πi+1 .. It is recalled that H0 is a null matrix. is equivalent to ..Time Varying Responses 63 Hence 1 .76.74) (1. Estimated Model 1. we differentiate with respect to πi+1 and after equating to zero.. R = 0. Appendix B It is recalled that the model was estimated using FIML method. Note that minimization of J i . (d) takes the form: ∗ ∗ ∗ πi+1 = πi + Ki+1 (xi+1 − Hi+1 πi ) −1 −1 where Ki+1 . maximizing p(πi+1 xi+1 ).051IRt−t − 284EXRt (3.87) R2 = 0. (c) eliminating terms not containing πi+1 . we ﬁnally obtain (Lazaridis. since no observations exist beyond period 1 in the estimation process. Numbers in brackets are the corresponding tstatistics).92. Considering the matrix identity of householder. we are also maximizing the likelihood .319IRt + 1.
641Gt + 0.017EXRt (1. DW = 2.88. ρ = 0.169 CFBt−1 + 144.21) 12.98.09.677Gt−1 + 1.48. R = 0.814IRt−1 + 8.16) (11.72) R2 = 0. ρ = 0. (BD)t = (Gt + LRt + PF t ) − (TY t + GBS t + AF t + FBt ) 4.48) 5.193 (−6.79. IRt = 0.347 + 19. (1.84) 2 DW = 1.97) R = 0.81. ρ = 0.39) (1.92. R2 = 0. (6. RRt = 0. MD = MS 7.68) ρ = 0.79) (1.413MDt−1 − 0.374 WIRt (1.23 (1.58 (3.406Yt−1 − 7.2571Rt + 2.93) (1.0057T + 0.656CI t − 1.47) (−1.0002At−1 + 0.26 (1.275Yt − 113335.95. DW = 2.009Yt−1 (−1.19IRt + 1.24) 2 (2.31 (1.77) 2 (1.17) R = 0.21) (−0.98.23) 9.87) (1.86.89.31) 11. Rt = Xt − IM t + Kt + PFT t + FBt − PF t + AF t .87) R2 = 0.0007Yt + 0.41) 2 (1.0004At + 0.64) (0.7031Rt−1 − 0.95) 2 DW = 2.94.88) (2. (2.591IRt − 0.105IMC t + 0. R = 0.269) R2 = 0. R = 0.64 (4.48 13.23) DW = 2.71) (1. Basu and Alexis Lazaridis 2.14) (2.895 (−5.56) ρ = 0.34) R2 = 0. (Yt ) = At + Rt 3. (1.51) 6.52) 2 = 0. R = 0.36.93.003T (3.92.351CI t−1 (1.84) (1.33AF t (1. DW = 2. ρ = 0. Pt = 0.158CDt−1 + 0.26 (1.002Yt−1 + 1.87) + 0.56) + 0. R (−0. GBS t = 0.07.84.64 Dipak R. PF t = 1148.67) ρ = 0.49) (1.86.088) 10. CDt = 0.89) R2 = 0. MDt = 2.48) (3.80 + 0.421Pt−1 + 32.403) (−1.733RRt − 2.352WPM t + 9.03) 2 (0.065 (2.98. (1.4391IRt + 0.0 (−2.02 R2 = 0. (MS)t = [(1 + CDt )/(CDt + RRt )]( Rt + NDAt ) 8. DW = 1.73) − 0. (1.22. IMC t = 5.713IRt−1 + 1.97.37) 14. (1.008Yt − 0.
86) (1. Notations A = Domestic absorption AF = Foreign receipts (grants etc. foreign debt over a period of 20 years CT = Discount rate of the RBI FB = Foreign borrowing G = Government expenditure GBS = Government bond sales IM = Value of imports IMC = Import price index (1990 = 100) IR = Interest rate in the moneymarket K = Foreign capital inﬂows LR = Lending (minus repayments to the states) MD = Money demand MS = Money supply NDA = Net domestic asset creation by the RBI P = Consumers’ price index (1990 = 100) PFT = Private foreign transactions PF = Foreign payments R = Changes in foreign exchange reserve RR = Reserve to deposit ratio in the commercial banking sector TY = Government tax revenue T = Time trend ..654IMC t (−0.27) R2 = 0.82) (−0.92.e. DW = 2. t r=−20 FBr R2 = 0.09 The estimated Chisquare satisﬁes the goodness of ﬁt test for the FIML estimation. i. ρ = −0.04 − 0.96. CFBt = λ2 = 563. IM t = − 24.104Yt − 0.72) 16.21) (−1.62 (−1.26) + 1.089Yt−1 − 0.59) (9.Time Varying Responses 65 15.025IM t−1 + 0.52.) BD = Government budget deﬁcits CD = Credit to deposit ratio in the commercial banking sector CFB = Cumulative foreign borrowing.123T (0.
66 Dipak R. average of European and US money market rate EXR = Exchange rate (Rs/US$) X = Value of exports Y = GNP at constant 1990 prices . Basu and Alexis Lazaridis WPM = World price index of India’s imports (1990 = 100) WIR = World interest rate.
Chapter 3 Mathematical Modeling in Macroeconomics .
This page intentionally left blank .
And the economic performance has been no worse. while the monetary policies were intended to secure lower inﬂation. it was designed to strengthen the economy’s supplyside responses. The income stabilizing aspects of the ﬁscal policy 69 . claiming that its ﬁscal policies constituted a danger for the good performance of the UK economy and its neighbors. and economic efﬁciency. The UK ﬁscal deﬁcit had breached the 3% of GDP that is considered to be safe.to longterm objectives — low debt. Yet the United Kingdom.Topic 1 The Advantages of Fiscal Leadership in an Economy with Independent Monetary Policies Andrew Hughes Hallet James Mason University. if these deﬁcits were not contained and reversed. The 1990s saw a return to more activist ﬁscal policies — but policies designed strictly in combination with an equally active monetary policy based on inﬂation targeting and an independent Bank of England. social equality. alone among its European partners. which they considered to be too loose and irresponsible. and again during the 1980s. USA 1. They are set. in the main. the European Commission initiated an “excessive deﬁcit” procedure against the UK government. To make its point. and may have been better than elsewhere in the Eurozone. to gain a series of medium. the provision of public services and investment. Was Gordon Brown’s strategy so mistaken after all? The British ﬁscal policy has changed radically since the days when it tried to micromanage all of the aggregate demand with an accommodating monetary policy in the 1960s and 1970s. allows ﬁscal policy to lead in order to achieve certain social expenditure and mediumterm output objectives and a degree of coordination with the monetary policies designed to reach a certain inﬂation target. Introduction In January 2006. Gordon Brown (as Britain’s ﬁnance minister) was widely criticized by the European Commission and other policy makers for running ﬁscal policies.
is of course the strategic policy prescription of Taylor (2000). been left to act passively through the automatic stabilizers. that is.3 Nonetheless. while the discretionary part (the bulk of the policy measures) is set to achieve these longterm objectives. would be done by the automatic stabilizers. 2002). stabi lize some 30% of the cycle. and the inferior but independent (noncooperative) solution. the need for any such additional interventions is unlikely: ﬁrst. and. The latter is assured for our context when both sets of policy makers have some interest in inﬂation control and output stabilization. Monetary policy. 3 Figure 1 is the standard representation of the outcomes of the different game theory equilibria when the reaction functions form an acute angle. and the policy multipliers have the conventional signs (Hughes Hallett and Viegi. this setup has imposed a degree of precommitment (and potential for electoral punishment) on ﬁscal policy because governments naturally wish to lead. imply a degree of coordination in the outcomes. Marrying this with an activist. appears to have been the innovation in the UK policies. 2003). beyond what. the regime remains noncooperative so that there is no incentive to renege on earlier plans in the 1 The Treasury estimates that the automatic stabilizers will. But. The option to undertake discretionary stabilizing interventions is retained “for exceptional circumstances” however.2 Thus. and nondiscretionary shortrun stabilization efforts restricted to the automatic stabilizers. is left to take care of any shortrun stabilization around the cycle. Nevertheless. relative to the noncooperative (uncoordinated) Nash solution. because of the effectiveness of the forward looking. by forcing the focus onto longrun objectives. therefore. but based on an independent Bank of England and a monetary policy committee with the instrument (but not target) independence. monetary policy directed at cyclical stabilization. which allows both ﬁscal and monetary policies to be better coordinated — but without either losing their ability to act independently. meanwhile. second. as shown in Fig.1 To draw a sharp distinction between actively managed longrun policies. because the longterm expenditure (and tax) plans are deliberately constructed in nominal terms so that they add to the stabilizing power of the automatic stabilizers in more serious booms or slumps. It implies a leadership role for the ﬁscal policy. therefore. to the exclusion of the short term. 1. Stackelberg solutions. . the remaining 70% being left to monetary policy (HM Treasury. predictably. 2 For details on how this leadership vs. Britain appears to have adopted a Stackelberg solution. stabilization assignment is intended to work. see HM Treasury (2003) and Section 3 below.70 Andrew Hughes Hallet have. symmetric and an activist inﬂation targeting mechanism adopted by the Bank of England. which are part of any ﬁscal system. which lies somewhere between the discretionary (but Pareto superior) cooperative solution. in normal circumstances. Australia and Sweden operate rather similar regimes.
and does not stabilize well if efﬁciency is to be maintained. the policies and their intended outcomes will be sustained by the government of the day. implying that monetary policy must condition itself on the ﬁscal stance at each point. with ﬁscal policy leading. . In this sense. Commitment to the stabilization policies is then assured by the independence of the monetary authority. and active monetary policies on the 4 Stackelberg games. we have “rules rather than discretion”. produce ﬁscal commitment: i.e. 1989). absence of changes in information. MonetaryFiscal Interactions and the Central Bank. there are also automatic stabilizers in any ﬁscal policy framework. The Fiscal Leadership Hypothesis The hypothesis is that the United Kingdom’s improved performance is due to the fact that the ﬁscal policy leads an independent monetary policy.The Advantages of Fiscal Leadership in an Economy 71 Figure 1.4 2. This automatically puts the latter in a follower’s role. This is helpful because it allows the economy to secure the beneﬁts of an independent monetary policy but also enjoys a certain measure of coordination between the two sets of policy makers — discretionary/ automatic ﬁscal policies on one side. subgame perfection with either strong or weak time consistency (Basar.. and is not easily reversible (public services and social equality). Nevertheless. This leadership derives from the fact that ﬁscal policies typically have longrun targets (sustainability and low debt). Thus.
67 (0. It is important to realize that the coordination here is implicit and therefore rulebased. +18 0.540 (2. and how the monetary stance may affect it or vice versa. gap = GDP–trend GDP. monthly data from January 1999–January 2004 — — (1) −0. rt−1 e πj. −6. S.89) 0. pd = primary deﬁcit/surplus as a percentage of GDP (a surplus > 0) and debt = debt/GDP ratio. +21 0. This allows a Pareto improvement over the conventional noncooperative (full independence) solution. monthly data from June 1997–January 2004 (1) −1. Estimation: instrumented 2SLS.1.06) — — R = 0.711 1. .610 (0.463 (2. To show that the UK ﬁscal policy does lead monetary policy in this sense.43 R = 0.2. tratios in parentheses. Table 1. without reducing the central bank’s ability to act independently.88) (2.04) 2 1.289 (0. rt : For the UK.191 (0.No.394 +6. +9. which independent agents would otherwise impose on one another.04) (3. what goals it needs to achieve.63) R = 0.11 = 22.76) (1.13 = 24. k give the bank’s inﬂation forecast interval. k gap pd debt Dependent variable: central bank lending rate. +6 0.7.274 1.341 (1.90.274 (1. F5. (2) −13. and that this is the reason for the Pareto improved results in Table 1.57 (2. F5.110 (1. I produce evidence in three parts: • Institutional evidence: taken from the UK Treasury’s own account of how its decisionmaking works.21 = 82.82.87.714 +9.k represents the average inﬂation rate expected over the interval t + j to t + k: Eπt+j.66) (2.59) 2 F3.53) N = 29 −0. and responses to an agreed leadership role reduce the externalties of selfinterested behavior.91.10 0.89) (1. πj. not discretionary.72 Andrew Hughes Hallet other.67 (0.19 = 44.16) (5.50) 0.19) 0. N = 19 e Notation: j.t+k .47.598 (4.59) 2 F3.77) R = 0.72 0.k j. The extra coordination arises in this case because the constraints on. (2) −2. N = 25 For the Eurozone (EU12).996 0.38) 2 1.22) N = 15 0. and the output gap is obtained from a conventional HP ﬁlter to determine trend output.66) 1. +21 (1. k determined by search.83) (0. j. Const Generalized Taylor rules in the UK and EU12.
3. In the 1980s. The demand management role of the ﬁscal policy was phased out. then you will not be able to precommit monetary policy either. the strategy changed when a new regime took ofﬁce. Institutional Evidence: The Treasury’s Mandate 3. and social equity. and the capacity. The result of this was a shortterm view. if you cannot precommit ﬁscal policy to certain goals. public investment. who argued that the real problem was a lack of longrun goals for the ﬁscal policy. 1964). But. Monetary policy and the provision of public services (subject to a lower bound) were essentially accommodating factors. and precommit to certain expenditure and taxation plans at the same time. This role passed over to inﬂation control and monetary management — with limited . committed to reducing the size and role of the government in the economy. It certainly proved too difﬁcult to turn the ﬁscal policy on and off as fast and frequently as required. since the main constraint was perceived to be the ﬁnancing of persistent currentaccount trade deﬁcits. to operate in this way. in which the conﬂicts between the desire for growth (employment) and the recurring evidence of overheating (trade deﬁcits) led to an unavoidable sequence of “stopgo” policies. The United Kingdom has. there would have been conﬂicts anyway because there were two or more competing goals. This means that the “stable growth with low inﬂation” objective will be lost. had produced a system that was actually destabilizing rather than stabilizing. and was focused almost exclusively on demand management. There were many (Dow. but effectively only one instrument to reach them. with different goals for governments and central banks. therefore.1. and that the lags inherent in recognizing the need for a policy change and in implementing it through Parliament till it takes hold in the markets. Pareto improving results can be expected from ﬁscal leadership. The point here is that. had the incentive.The Advantages of Fiscal Leadership in an Economy 73 • Empirical evidence: the extent to which the monetary policy is affected by the ﬁscal policy. Fiscal policy was the principal instrument of the economic policy in the 1960s and 1970s. Such a system could not provide the longrun goal of public services. History The British ﬁscal policy has changed a great deal over the past 30 years. but ﬁscal policy with its longer objectives does not depend on monetary policies and • Theoretical evidence: shows how.
because of the inevitable tradeoff between cyclical stability and budget stability. It was split into two distinct parts. 4.2. 6 This summary is taken from the UK Treasury’s own view (HMT. the longrun goals of better public services. The shortterm ﬁscal interventions. being the larger part. it is likely to be successful only in markets with sufﬁciently ﬂexible wages and prices (see Fatas et al. and social equity could then be attained. 5.3. However. the role of ﬁscal policy was to provide the “conditioning information” within which the monetary management had to work. Fiscal policy. and pp. 74). 2003). are restricted to the automatic stabilizers — with effects that are known and predictable. 5. therefore. 63. 9. 61). ﬁscal policy is decided in advance of monetary or other policies (pp. as the market ﬂexibility changes were made. Given this. 48. 7 In this section. the UK ﬁscal policy now leads monetary policy in two senses. Shortterm stabilization of output and employment was left to the automatic stabilizers inherent in the prevailing tax and expenditures rules. . therefore. Current Practice According to the treasury’s own assessment (HMT.6 3. now less than half the total and declining (p. 2003). efﬁciency. Sections 2. changes to free up the supply side (and enhance competitiveness) on the premise that greater microeconomic ﬂexibility would both reduce the need for stabilizing interventions (relative wage and price adjustments would do the job better) and provide the conditions for stronger employment and growth (HMT. 67–68). In addition. could then be directed at longterm objectives: in this instance. First. Second. monetary policy is charged with controlling inﬂation and stabilizing output around the cycle — rather than steering the economy as such (pp. 48. Box 5.. 2003). if at all. 34–38. but with rather more success when it was formalized as an inﬂationtargeting regime under an independent Bank of England and monetary policy committee. 42. 64.7 and with a longtime horizon (pp. 5 This follows the recommendations in Taylor (2000). Section 6). sets the conditions within which monetary policy has to respond and achieve its own objectives (pp. Discretionary adjustments would only be used in exceptional circumstances. 61–63).74 Andrew Hughes Hallet success in the earlier phases of monetary and exchange rate targeting.5 The rest of the ﬁscal policy. In this period. 2003). page or section numbers given without further reference are all cited from HMT (2003). this part of ﬁscal policy could be increasingly focused on providing the longrun goals directly. 15.
when combined with decision and implementation lags of up to two years. and the longrun objectives of the policy will always take precedence in cases of conﬂict (pp. (b) A sustainable and improving delivery of public services (health and education). 14). it makes sense for the ﬁscal policy to be used consistently and sparingly and in a way that is clearly identiﬁed with the longterm objectives. Similarly. given the evidence that consumers and ﬁrms often do not look forward efﬁciently and may be creditconstrained. See also Dixit (2001). Frequent changes would conﬂict with the government’s mediumterm sustainability objectives.The Advantages of Fiscal Leadership in an Economy 75 The shortrun discretionary components are negligible (p. Table 5. and Dixit and Lambertini (2003). public expenditures are planned with ﬁxed threeyear departmental expenditure limits which. 7. This burden will be carried by interest rates. means that the bulk of the ﬁscal policy has to be planned 8 The credibility of ﬁscal policy. The United Kingdom has determined these objectives to be (pp. Taylor. (d) The formulation of clear numerical objectives is consistent with these goals. given the ﬁscal stance and its forward plans (pp. 1. . The long lead times needed to build up these programs mean that the necessary commitments must be made well in advance. 11–13. 1. Fiscal policy would. not be used for ﬁne tuning (pp. 39–41. (c) Recognition that the achievement of these objectives is often contractual and cannot easily be reversed once committed. and on symmetric stabilization. 26. and by extension an independent monetary policy. 81–82): (a) The achievement of sustainable public ﬁnances in the long term. 48. changes in direct taxes will affect both equity and efﬁciency. and the preservation of social (and intergenerational) equity and alleviation of poverty. 37). 59. and cannot be fulﬁlled. 63–68). 11. 11. is seen as depending on these steps. 61–63. 63). and a commitment to a clear separation of these goals from economic stabilization around the cycle. or for stabilization (pp. and should seldom be made. low debt (40% of GDP) and symmetric stabilization over the cycle.5). 19. 2000). therefore. and also that the impacts of the ﬁscal policy are uncertain and have variable lags (pp. Third. 61.8 (e) To ensure that ﬁscal policy can operate along these lines. improving competitiveness/supplyside efﬁciency in the long term. a transparent set of institutional rules to achieve them.
otherwise. which each party might. Empirical Evidence The next step is to establish whether the UK authorities have followed the leadership model described above. The bulk of ﬁscal policy will remain focused on the medium to long term therefore. or complimentarity. And. with institutional parameters for goals. these spending limits are constraints deﬁned in nominal terms. in other words. In addition.3) — something else (monetary policy) must have taken up the burden of shortrun stabilization. Brandsma and Hughes Hallett. whereas the leader takes this reaction function into account — such an asymmetry and zero restriction will not appear in the expost (observed) responses that emerge in the ﬁnal solution (Basar and Olsder. The difﬁculty here is that. This arrangement is. the institutional structure itself has introduced ﬁscal leadership. this makes it difﬁcult to test for leadership directly. between the instruments. therefore. priorities etc. or credit taxes that could have stabilized the economy in the short term. best modeled as a Stackelberg game. The fact that they say that they follow such a strategy does not prove that they do so. Moreover. 1999. their credibility for delivering stability and low inﬂation. I shall argue that this has had the very desirable result of increasing the degree of coordination between policy makers without compromising their formal or institutional independence — or indeed. 1984. 4. To summarize. impose on the other. designed to reduce the externalities. independence. 1984). But. we can use indirect tests based on the degree of competition. Constrained independence. since the discretionary elements are smaller than elsewhere — smaller than the automatic stabilizers (Tables 1–5) and declining (Box 5. or the various tax regulators. . Since I have no data on anticipations.76 Andrew Hughes Hallet with a horizon of up to ﬁve years — vs a maximum of two years in the inﬂation targeting rules operated by the Bank of England. the Treasury operates a tax smoothing approach — having rejected “formulaic rules” that might have adjusted taxes or expenditures. although the asymmetry in exante (anticipated) responses between the follower and leader is clear enough in the theoretical model — the follower expects no further changes from the leader after the follower chooses his reaction function. Hughes Hallett. so that they will be met.
They show such rules for the United Kingdom and the Eurozone since 1997 and 1999. The latter implies that the monetary policy tends to expand/contract whenever ﬁscal policy needs to expand or contract — but not necessarily vice versa. The relative size of α and β then reveals the strength of the authorities’ attempts to control inﬂation vs. Normally. therefore. One can argue that the policy should be based on fully optimal rules (Svensson. 2004). it is widely argued that the authorities’ decisions can best be modeled by a Taylor rule9 : rt = ρrt−1 + αEt πt+k + βgapt+h α. as a result of unanchored expectations in the private sector. respectively. In practice. the dates when new policy regimes were introduced. and ρ their preference for gradualism.The Advantages of Fiscal Leadership in an Economy 77 4. 10 In principle. since monetary policy appears not to depend on the expected output gaps (Dieppe et al. The Eurozone has been included to emphasize the potential contrast between ﬁscal leadership in the United Kingdom. In order to obtain an idea of the inﬂuence of ﬁscal policies on monetary policy. h represent the authorities’ forecast horizon10 and may be positive or negative. 2003). and the lack of it in Europe. α > 1 will be required to avoid indeterminancy: that is. k and h may be positive or negative: positive if the policy rule is based on futureexpected inﬂation. income stabilization. or the economy’s responses may be quite large.1. But negative if interest rates are to follow a feedback rule to correct past mistakes or failures. I include some Taylor rule estimates — with and without ﬁscal variables — in Table 1. as in the Bank of England. policy transmissions. it is hard to argue that policy makers actually do optimize when the additional gains from doing so may be small and when the uncertainties in their information. and hence policies which are either jointly dependent in the usual way.. Different Types of Leadership Conventional wisdom would suggest that Europe has either monetary leadership or independent policies. ρ ≥ 0 (1) where k. 4. Monetary Responses For monetary policy. I set h = 0 in Eq. when money is expanding or contracting and is sufﬁcient to 9 Taylor (1993b). to head off an anticipated problem. (1). of which Eq. arbitrary variations in output or inﬂation. β. But. or which are complementary and mutually supporting. the Taylor rule approach is often found to ﬁt central bank behavior better. (1) will be a special case. .2.
And ﬁscal leadership would mean complimentarity or independence in the ﬁscal rule. This is potentially consistent with ﬁscal leadership. Here. Both monetary authorities have targeted expected inﬂation more than the output gap since the late 1990s — and with horizons of 18–21 months ahead. And to the extent that ﬁscal policy does have an inﬂuence. But. if we allow monetary policies to react to the changes in ﬁscal stance. but conﬂicts in the monetary responses. we see that the UK monetary decisions may take ﬁscal policy into account. but the effect is not signiﬁcant or well deﬁned. we get different results (the lower equations). rather than the policies in a Nash game with conﬂicting aims that need to be reconciled. but the opposite in the Eurozone. a longer forecast horizon (up to two years as the Bank of England claims) and greater attention to the output gap — the symmetry in the UK’s policy rule. leadership implies complimentarity among policy instruments in the leader’s reaction function. this model of monetary behavior does imply more activist policies. however. but conﬂicts among them in the follower’s responses. The European Central Bank (ECB) has been more aggressive in this respect. monetary leadership would imply some complimentarity (or independence) in the Taylor rule. The lack of signiﬁcance for the debt ratio is easily understood. the evidence could . we might expect Stackelberg leadership (with ﬁscal policy leading) in the UK. 4. More generally. Observed Behavior The upper equations in each panel of Table 1 yield the standard results for the monetary behavior in both the United Kingdom and the Eurozone. Evidently. but conﬂicts in the ﬁscal responses. Thus. This is a weak form of monetary leadership in which ﬁscal policies are an additional instrument for use in cases of particular difﬁculty. it would not be necessary for the monetary policy to take it into account. However.78 Andrew Hughes Hallet control inﬂation on its own. We need to check the ﬁscal reaction functions directly. since this form of leadership can allow independence or complimentarity between instruments in the leader’s rule. from Section 3. it would be as a substitute (or competitor) for monetary policy — ﬁscal deﬁcits lead to higher interest rates. A weak form of leadership also allows for independence among instruments in the leader’s policy rules. However.3. Since this is a declared longrun objective of the ﬁscal policy. contrary to conventional wisdom. So far. it was also more sensitive to the output gap and had a longer horizon and less policy inertia.
Thus. The European Commission (2002) uses a similar rule. which call for a primary deﬁcit will also call for a relaxation of the monetary policy. if the latter is being used for stabilization purposes — in which case interest rates should be negatively correlated with the size 11 See Taylor (2000). Inﬂation should have a positive impact on the deﬁcit ratio if ﬁscal policy is used for stabilization purposes. Finally. a little less elsewhere. The expected signs of any remaining factors are not so clear. unless there is a systematic debt reduction program underway. the policies are acting as complements — circumstances. And a similar relationship is thought to underlie UK’s ﬁscal policy (see HMT. but it had no effect otherwise. and Turrini and in ‘t Veld (2004) for similar formulations. we have an evidence of monetary leadership. monetary policy now appears to react to ﬁscal policy. the debt burden. The coefﬁcient γ then gives a measure of an economy’s automatic stabilizers. estimates γ ≈ 1/2 for Europe — a little more in countries with an extensive social security system. to correct the past mistakes. where ﬁscal policy is used as an additional policy instrument. Fiscal Reaction Functions in the UK and Eurozone Many analysts have hypothesized that ﬁscal policy responses can best be modeled by means of a “ﬁscal Taylor rule”: dt = at + γ gapt + sd γ>0 (2) where sd = structural deﬁcit ratio. for example.2 and 6. comes in. the looser will be the monetary policy. Canzoneri (2004). 4. Instead. Once ﬁscal effects are included. The ECB results look quite different. In this case.2). existing or anticipated inﬂation. or discretionary ﬁscal interventions. dt is the actual deﬁcit ratio (d > 0 denotes a surplus). but with the “wrong” sign: the larger the primary deﬁcit. Gali and Perotti (2003). Moreover. The European Commission (2002). . a feedback element. but deﬁnes d > 0 to be a deﬁcit. the concentration on inﬂation control is much reduced and the forecast horizon shrinks to six months. At the same time. They therefore expect γ to be negative.The Advantages of Fiscal Leadership in an Economy 79 imply a Stackelberg follower role or independence for monetary policy. which implies that the symmetric targeting goes out. 2003: Boxes 5. the output gap should also have a positive impact on the deﬁcit ratio. therefore.4. easing the competition (externalities) among policy instruments. The debt burden should increase current deﬁcits.11 and at represents all the other factors such as the inﬂuence of monetary policy. output stabilization becomes less important.
therefore.36 + 1. Similarly.207rt − 0.66) (2.24 = 33. Table 2 includes our estimates of the ﬁscal policy reaction functions for the United Kingdom and the Eurozone.337gapt (2. but a negative one in Europe. but has no effect in the Euro area. This result ﬁts in neatly with Europe’s evident inability to save for a rainy day in the upturn. and the inability to stabilize in the downturn because of the stability pact.78. indicate that the ﬁscal policy has been used for output stabilization in the United Kingdom — consistent with allowing automatic stabilizers to do the job — but for purposes other than this in the Eurozone. Conversely. linear interpolation for quarterly deﬁcit ﬁgures. no such efforts have been made in Europe. the output gap produces a negative reaction in Europe.18) Key: dt = gross deﬁcit/GDP (%). This implies complementary policies. of the deﬁcit. . sample period 1999q1–2004q2 e dt = 3. Note: At current debt. evidently.477dt−1 − 0.444 + 0. where d < 0 denotes a deﬁcit. The presence of debt in the UK rule indicates that sustainability has been a primary target in the United Kingdom.30) (1. (2003). but a positive one with interest rates. F3.95. Inﬂation.16) (2. higher interest rates go with tighter ﬁscal policies. on the other hand.740π+9.11 = 54.72) R = 0.k and gapt as in Table 2(a). but not in the Eurozone. Higher debt increases the surplus ratio in the United Kingdom. debtt = debt/GDP (%). has had no effect on the UK deﬁcit. because monetary policies focused on inﬂation. but a positive one in the United Kingdom. For the United Kingdom. F4. these estimates imply a structural deﬁcit of about 0. So. instrumented 2SLS.64) (0. has had a systematic debt reduction program. sample period 1997q3–2004q2 dt = − 0. and ﬁscal policies focused on shortrun stabilization. would imply no automatic stabilizer effects but mutually supporting policies: i.23 R = 0.076gapt (0.12 12 Buti et al. would conﬂict.37) (7.21) (1.e. These two variables.55 2 2 For the Eurozone.96) (9.1% for the United Kingdom and 2.80 Andrew Hughes Hallet Table 2. e rt is the central bank lending rate. Estimation method. a negative association with the output gap.6% for the Eurozone.21 − 0. while the United Kingdom.845debtt + 0.. and tratios in parentheses. Fiscal policy reaction functions.0685rt + 1. interest rates and inﬂation targets. πj.
since the coefﬁcient on ﬁscal policy in the Taylor rule is small and there is no output gap smoothing).b. suggests ﬁscal leadership may in fact have been operating. or a simple noncooperative game. or ﬁscal leadership — the latter. Here. The UK results are therefore inconclusive. The Economic Model and Policy Constraints The key question now is: would governments want to pursue ﬁscal precommitment? Do they have an incentive to do so? And would there be a clear improvement in terms of an economic performance if they did? More important. the policies are competitive. we suppress the spillovers among countries and focus on the following three equations to represent the economic structure of any one . This implies that the British ﬁscal policies are chosen independently of the monetary policy. given the same effect is not found in the monetary policy reactions. as suggested by our weak leadership model. They are consistent with independent policies. we extend a model used in Hughes Hallett and Weymark (2002. then both the ﬁscal and monetary policies would be compliments and weakly coordinated. if ﬁscal policy was limited by a deﬁcit rule in the form of “hard” targets (as in the original stability pact) or “soft” targets? To answer these questions. because of the signiﬁcant output gap term in the ﬁscal equation which. the results are quite different. 2005) to examine the problem of monetary policy design when there are interactions with ﬁscal policy. 2004a. which suggests that they form a Nash equilibrium (or possibly monetary leadership. if there is any association at all.1. But. would the ﬁscal leadership model be more advantageous. but negative and near signiﬁcant in Europe. despite the insigniﬁcance of the direct ﬁscalmonetary linkage. Theoretical Evidence: A Model of Fiscal Leadership 5.The Advantages of Fiscal Leadership in an Economy 81 Consequently. there is no suggestion of monetary leadership. the signiﬁcant result is the conﬂict among instruments in monetary policy (and essentially the same conﬂict in the ﬁscal policy reactions). For exposition purposes. In the Eurozone. In any event. Conﬁrmation of this comes from the responses to interest rate changes. 5. the United Kingdom appears to use ﬁscal policy for stabilization in a minor way as claimed. This suggests weak monetary leadership. which are positive but very small and statistically insigniﬁcant in the United Kingdom. the results imply independence or ﬁscal leadership. while the Euro economies seem not to use ﬁscal policy this way at all. Hence. if anything happens.
(3). The coefﬁcients α. by contrast. therefore. originally derived by Lucas (1972. government expenditures.14 However. s.82 Andrew Hughes Hallet country13 : πt = πte + αyt + ut yt = β(mt − πt ) + γgt + εt gt = mt + s(byt − τt ) (3) (4) (5) where πt is inﬂation in period t. 2001). McCallum (1989) shows that aggregate demand equation like Eq. Our model. and the ECB. 2003). 14 Barro (1981) argues that government purchases have a contractionary impact on output. treats ﬁscal policy as important because: (i) ﬁscal policy is widely used to achieve redistributive and public service objectives. 1973). Next. inﬂation is increasing. and (iii) Central Banks. and we treat trend budget variables as precommitted and balanced. therefore. In earlier chapters. and b are all positive by assumption. The assumption that γ is positive is sometimes controversial. and τt represent the growth in the money supply. yt is the growth in output (relative to trend) in period t. Deviations from the trend budget are. Equation (5) describes the government’s budget constraint. 2003). According to Eq. γ. and tax revenues in period t. conditional on the information available at the time expectations are formed. but they will have zero means so long as these countries remain on their longrun (equilibrium) growth paths on average (all variables being deﬁned as deviations from their equilibrium growth paths). β. the only discretionary ﬁscal policy choices available. (ii) governments cannot precommit monetary policy with any credibility if ﬁscal policy is not precommitted (Dixit and Lambertini. Equation (4) indicates that both monetary and ﬁscal policies have an impact on the output gap. mt . multiperiod utilitymaximization problem. as the rate of inﬂation predicted by private agents and in output growth. . are well known. The microfoundations of the aggregate supply Eq. (4) can be derived from a standard. in particular. (3). The disturbance terms may. and ut and εt are random disturbances. the shortrun impact multipliers derived from Taylor’s (1993a) multicountry estimation provide an empirical support for this assumption (as does the HMT. worry intensely about the impact of ﬁscal policy on inﬂation and ﬁnancial stability (Dixit. we allowed discretionary tax revenues to be used for 13 Technically. All variables are deviations from their steadystate (or equilibrium) growth paths. gt . we assume a blockwise orthogonalization of the traditional multicountry model to produce independent semireduced forms for each country. and πte represents the rate of inﬂation that the rational agents expect to prevail in period t. contain foreign variables. which are distributed independently with zero mean and constant variance.
We give governments such a target. which the government now proposes to spend in period t. The deﬁcit ratio itself is of course d = (G − T)/Y = (e − r). less new discretionary taxes in the current period. s would typically equal 1. Expenditures above these revenues must be ﬁnanced by the sale of bonds. from discretionary and trend revenues. can then be used by the government to redistribute income from the rich to poor. to balance the budget over the cycle. that is. b would be the proportion of pretax income (output) going to the rich and s. Inserting this. has outputenhancing expenditures gt . either directly or via public services. We further assumed that there were two types of agents — rich and poor — and that only the rich use their savings to buy government bonds. if some parts were saved in social security funds or other assets. we have d = −(e − r)˙ = −d y.The Advantages of Fiscal Leadership in an Economy 83 redistributive purposes only. if b = −(e − r)/Y1 . and discretionary transfers τt . . 5. then d = (G−T)/Y − y(e−r) ˙ where y denotes the growth rate in national income. where G and T are the absolute levels of government spending and tax revenues.2. in the form of either a soft or a hard rule. (5) will equal d. Both are ﬁnanced by aggregate tax revenues. therefore. as it currently exists. adjusted for the effect of growth on this ratio and any discretionary taxes raised in this period. We could take the term s(byt − τt ) to be the proportion of the budgetdeﬁcit ratio. byt in Eq. take a completely different interpretation of Eq. as required by the stability pact. the change in the existing deﬁcit ratio under existing expenditure plans and tax codes. This target is likely to be θ = 0. τt . θ. This structure. but the term in τt shows that governments may also raise additional taxes in order to reduce the current deﬁcit ratio to some desired target value. it means new spending can only be allowed to take place out of new revenues generated by growth: G = T = r Y . Tax revenues. Hence. An Alternative Interpretation We could. the proportion of aftertax income that the rich allocate to saving. although it might be less. say. If the economy grows. and e and r are their counterparts as a proportion of Y . however. This deﬁnes what would happen to the deﬁcit ratio if there were no change in existing ﬁscal policies. (5). but permitted discretionary expenditures for enhancing output. Since yt is the deviation of Y from its steadystate y ˙ path. in the objective functions that are discussed in Section 5. The government will spend some proportion of that. s. If we wish to see what ˙ would happen to this deﬁcit ratio if ﬁscal policies were not changed. Based on this view.
the government’s loss function is given by: λ 1 g = (πt − π)2 − λ1 yt + 2 [(b − θ)yt − τt ]2 ˆ (9) 2 2 g where π is the government’s inﬂation target.16 and λ2 is the relative weight. mt ) = (1 + αβ)−1 αβmt + αγgt + me + t γ e g + αεt + ut β t (6) (7) e yt (gt . where it does have an instrument. the output component in the government’s loss function is usually represented as quadratic to reﬂect an output stability objective. (8) t 5.15 and possibly only the ﬁrst one. we can now solve for πte . whereas the central bank. to the extent that its inﬂation objective is compromised. Formally. the quadratic term in debt/deﬁcits allows monetary and ﬁscal policy to play a stabilization role as well as pick a position on the economy’s outputinﬂation tradeoff. mt ) = (1 + αβ)−1 [βmt + γgt − βme − γgt + εt − βut ]. if left to itself. The parameter θ represents the target value for the debt or deﬁcit to the GDP ratio. Government and Central Bank Objectives In our formulation. In our model. we assume that the government cares about inﬂation stabilization. t Solving for τt using Eqs. output growth. λ1 is the relative weight or ˆ g importance that the government assigns to output growth. and the provision of public services (and hence the size of the public sector deﬁcit or debt). would be concerned only with the ﬁrst two objectives.. In particular. it can only react to g poor ﬁscal discipline indirectly: e. We also assume that the government has been elected by majority vote. 16 Barro and Gordon (1983) also adopt a linear output target. . (3) and (4). we allow for the possibility that the government and an independent central bank may differ in their objectives. This yields the following reduced forms: πt (gt . and yt from Eqs. πt .3. which the government would like to reach: hence (b − θ)yt becomes the target for its g Lt 15 Since the central bank has no instruments to control the debt itself. then yields: τt (gt . which it assigns to the debt or deﬁcit rule.84 Andrew Hughes Hallet Given this new interpretation. mt ) = [s(1 + αβ)]−1 [(1 + αβ + sbβ)mt − (1 + αβ − sbγ)gt e − sbβme − sbγgt + sb(εt − βut )].g. In the delegation literature. so that the government’s loss function reﬂects society’s preferences to a signiﬁcant extent. (5) and (7).
Symmetric inﬂation targets are particularly emphasized ˆ as being required of both monetary policy and the ﬁscal authorities (HMT. as a “soft” debt or deﬁcit rule. the greater is the independence of the central bank in making its choices. All the other variables are as previously deﬁned. ˆ ˆ Consequently. say λ2 > max[1. speciﬁed both to be features of the government and the central bank objective functions here. the ECB has acquired a reputation for being more concerned to tighten monetary policy when π > π. τt . it is not clear that symmetry has been a feature of the European policy making in practice. A second feature is that Eqs. We model them as follows: 1 g Lcb = (π − π)2 − (1 − δ)λcb yt − δλ1 yt ˆ t 2 g δλ2 [(b − θ)yt − τt ]2 + (10) 2 where 0 ≤ δ ≤ 1. The closer δ is to 0. We have. as the ECB does. However. than it is to loosen it when π < π. we have asymmetric penalties. Hence. there is no advantage in doing so since the government would simply adjust its parameters to compensate. therefore. and λcb is the weight. Indeed. and smaller values. 2003). (10) would be correct for the cases where the bank has the instrument independence. but weaken them in a boom when the debt and deﬁcits would be falling anyway. . λ1 ]. it is easy to relax this assumption and allow the central bank to choose its own target. rather than symmetry in the outputgap target. as deﬁning a g g “hard” debt or deﬁcit rule. Yet. even this will not be enough to outweigh the advantages to be gained from ﬁscal leadership — for the reasons noted in Section 5. (9) and (10) specify symmetric inﬂation targets around π. do we get an extra advantage. And the lower is λcb is. But. The parameter δ measures the degree to which the central bank is forced to take the government’s objectives into account. which tighten the ﬁscal constraints in recessions. The objectives of the central bank. on the other hand. λ1 ]. the greater is the degree of its conservatism in making these choices. this might not be ideal.The Advantages of Fiscal Leadership in an Economy 85 discretionary revenues. it brings the rule closer to reality. we have deﬁned the government’s inﬂation target as π. however. only if the bank is free to choose the value of λcb as well. but not target independence. may be quite different from those of the government. (9). g g g We may regard large values of λ2 . In Eq. since it introduces an element of procyclicality. which the central bank assigns to the output growth. as I show in Hughes Hallett (2005). λ2 < min[1. The ˆ fact that the same inﬂation target appears in Eq. However. On one hand.
λcb ) = ¯ (1 − δ) [πt (gt . δ. 5. δ. to minimize: ¯ ¯ Lcb (gt . Formally.4. taking gt as given.86 Andrew Hughes Hallet 5. ¯ ¯ ¯ ¯ to minimize Lg (gt . λcb ) = E + 1 [πt (gt . Private agents understand the game and form rational expectations for future prices in the second stage. mt . λcb ). the policy game runs as follows: 5. and E denotes expectations. λcb ) is Eq. (c) The government chooses gt .4. before the shocks ut and εt are realized. before mt is chosen by the central bank. (9) evaluated at (gt . mt .2. given the δ and λcb values determined at the previous stage. In the ﬁrst stage. (b) The shocks ut and εt are realized and observed by both the government and the central bank. Stage 1 The government solves the problem: min ELg (gt. λcb ) where δ and λcb are at the values determined at stage 1.4. δ. mt ) − π]2 ˆ 2 ¯ ¯ ¯ ¯ ¯ − (1 − δ)λcb [yt (gt . the government and the monetary authority set their policy instruments.1. The second stage is a Stackelberg game in which ﬁscal policy takes on a leadership role. Institutional Design and Policy Choices We characterize the strategic interaction between the government and the central bank as a twostage noncooperative game in which the structure of the model and the objective functions are common knowledge.λcb (11) where Lg (gt . mt . mt ) − τt (gt . δ. mt ) − π]2 − λ2 [yt (gt . Stage 2 (a) Private agents form rational expectations about future prices πte . δ. the government chooses the institutional parameters δ and λcb . mt . λcb ) (12) . δ. mt. mt . mt )] ˆ 1 2 λ2 E[(b − θ)yt (gt . (d) The central bank then chooses mt . mt )] + δLg (gt . In this stage. mt )]2 2 g δ.
∂{(b − θ)y − τ ]/∂g = φ/(1 + αβ). ﬁscal policy would otherwise have to have such a strong impact on national income this. the conﬂict which stands revealed within the ﬁscal policy is important.The Advantages of Fiscal Leadership in an Economy 87 We solve this game by solving the second stage (for the policy choices) ﬁrst. numerical estimates for 10 larger OECD economies place φ very close to unity. rather than negative. λcb ) = where −α2 γβs2 + δφ λ2 ∂mt = η= g ∂gt (αβs)2 + δ 2 λ2 φ = 1 + αβ − γ θs and = 1 + αβ + βθs.17 And with balanced budget targets (θ = 0). 18 Formally. From stage 2. together with a Phillips curve that is sufﬁciently ﬂat and weak monetary transmissions. One would certainly expect φ > 0 since. we would certainly have φ > 0. is positive. This underlines the natural procyclicality of any ﬁscal restraint mechanism under “normal” parametric values. λ ) = π + ˆ α[β(φ − η ) + δ (βη + γ)] cb g (13) (14) yt (δ. and then substituting the results back into Eq. Nevertheless. if φ > 0.18 17 See Table 4 in Appendix A. which implies the gap between the t t t ﬁscal adjustment needed and the paydown allocated. output has to be capable of growing fast enough to generate sufﬁcient revenues to boost output. (11) to determine the optimal operating parameters δ and λcb . g (1 − δ)βs(βη + γ)(λcb − λ1 ) [β(φ − η ) + δ (βη + γ)]λ9 2 g − (b − θ)ut α (15) (16) (17) (18) Evidently. this would require very large ﬁscal multipliers indeed. one will have to come at the expense of the other. as noted above and a target deﬁcit of zero. we get: (1 − δ)β(φ − η )λcb + δ(βφ + γ )λ1 πt (δ. government expenditures can simultaneously boost the output and be used to reduce deﬁcit spending without worsening the budget (and subsequently debt) at the same time: γ > (1 + αβ)/(θs). In fact. In order to get φ < 0. will rise as more funds are used for stabilization. We assume φ to be positive as well. with s ≈ 1. λcb ) = −ut /α τt (δ. . In practice. when needed and to pay down the debt/deﬁcit. with θ < 1 and s < 1. If this is not possible.
we can now get the stage 1 solution from: 1 min EL (δ. which is not appropriate in the Eurozone case. 2α2 g (22) 19 Because η is a function of δ. The complete solution may be found in Hughes Hallett and Weymark (2002). (19) This part of the problem has ﬁrstorder conditions: (1 − δ)(φ − η )λ2 {(1 − δ)β(φ − η )λcb + δ(βφ + γ )λ1 } −(1 − δ)2 (βη + γ)2 α2 s2 β(λ1 − λcb ) = 0 and {(1 − δ)β(φ − η )λcb + δ(βφ + γ )λ1 } × (λ1 − λcb ) {δ(1 − δ) g g g g g (20) + (φ − η )} λ2 g g −(1 − δ)(βη + γ)α2 s2 β {(βη + γ) − (1 − δ)β } (λ1 − λcb )2 = 0. (19) results in: ELg = (λ1 )2 . the central bank is fully independent and exclusively concerned with the economy’s inﬂation performance. as measured by the government’s (society’s) loss function. Out of the two possibilities.19 Both are satisﬁed when δ = 1 and λcb = λ1 . And. . Substituting. In this solution. (21) is quartic in δ. can be identiﬁed by comparing Eq. of which only two are realvalued. This solution describes a fully dependent central bank. the solution which yields the lowest welfare loss. (11). This polynomial has four distinct roots.λcb g cb (1 − δ)β(φ − η )λcb + δ(βφ + γ )λ1 α[β(φ − η ) + δ (βη + γ)] g g 2 λ + 2 2 (1 − δ)βs(βη + γ)(λcb − λ1 ) g 2 [β(φ − η ) + δ (βη + γ)]λ2 g . (19) to the expected loss that would be suffered under the alternative institutional arrangement. it turns out to be inferior to the second solution: δ = λcb = 0. Eq. δ = 1 and g λcb = λ1 in Eq.88 Andrew Hughes Hallet Substituting Eqs. (13)–(15) back into Eq. (21) where = ∂η/∂δ. λ ) = 2 δ. There are two realvalued solutions which satisfy this g pair of ﬁrstorder conditions.
Hence. irrespective of the latter’s commitment to g the debt rule (λ2 ). than those obtained in a simultaneous move game without leadership — the model generally favored in Europe. (19) will become: 1 EL = 2 g λcb α 2 (24) for any value of λcb . (26)21 20 Because βη + γ = (∂y/∂m)∂m/∂g + ∂y/∂g. our results show that. 21 Hughes Hallett and Weymark (2002. (19)) is minimized when the government appoints independent central bankers who are concerned only with the achievement of a mandated inﬂation target and completely disregard the impact their policies may have on the output. and the central bank has already taken the impact of its decisions on the government’s decisions (also zero in this case) into account.The Advantages of Fiscal Leadership in an Economy 89 But substituting. when there is ﬁscal leadership. A conservative central bank will therefore be the best. . In the simultaneous move game. (23) Consequently. δ = λcb = 0 in the righthandside of Eq. the solution to the stage 1 minimization problem is: δ= βφ2 λcb λ2 + α2 γ 2 β(λcb − λ1 ) − φ(βφ + γ )λ1 λ2 g g βφ2 λcb λ2 + α2 γ 2 s2 β(λcb − λ1 ) g g g g g (25) and a society’s welfare loss will then be: 1 EL = 2 g λ1 α (αγs)2 g (αγs)2 + φ2 λ2 . but any bank that is more conservative than the government will do if the debt rule is to be effective. an independent central bank will always produce better results than a dependent one. society’s welfare loss (as measured by Eq. (19) yields: ELg = 0. 2004a) derive these results in detail. βη + γ = 0 and the externalities between policy makers are neutralized.20 As a result. When δ = 0. our results also indicate that ﬁscal leadership with an independent central bank can be beneﬁcial under more general conditions. However. Eq. so long as it is more conservative g than the government (λcb < λ1 ). A more interesting question is whether ﬁscal leadership with an independent central bank also produces better outcomes from a society’s perspective.
to the debt/deﬁcit rule: λ2 > 0. (13)–(15) yields: πt = π. in this case. the outcomes for the simultaneous move policy game are: πt∗ = π + ˆ α(γs)2 cb∗ ˆ /α g =π+λ [(αγs)2 + φ2 λ2 ] g (30) (31) yt∗ = −ut /α τt∗ = γs(λcb∗ − λ1 ) g φλ2 − (b − θ)ut . operate with ﬁscal leadership. Consequently. the optimal degree of conservatism for an independent central bank. α (32) . ˆ (29) as ﬁnal outcomes. and τt = −(b − θ)ut /α. yt = −ut /α. (25) to yield: λcb∗ = (αγs)2 λ1 g g. several others. is obtained by setting δ = 0 in Eq. (26) as long as: λcb < [λ1 λcb∗ ]1/2 . 5. g g (28) It is also evident that λcb∗ < λ1 holds for any degree of commitment. The Gains from Fiscal Leadership Finally. (αγs)2 + φ2 λ2 (27) It is straightforward to show that the value of ELg in Eq. the Swedish Riksbank. (24) is always less than Eq. ﬁscal leadership with any λcb < λcb∗ will always produce better outcomes. By contrast. However. They are better thought of as being engaged in simultaneous move game with their governments. where do these leadership gains come from? Substituting δ = 0 and λcb = 0 in Eqs. This is important because many inﬂationtargeting regimes.5. and the Reserve Bank of New Zealand. By contrast. such as the ones operated by the Bank of England. notably the ECB and the US Federal Reserve System. howg ever small. do not operate with ﬁscal leadership.90 Andrew Hughes Hallet This is always smaller than the loss incurred when ﬁscal leadership is combined with a dependent central bank. from a society’s point of view. than any simultaneous move game between the central bank and the government.
1 Eτt∗ ≤ 0 in Eq. this can only happen if the government has a genuine commitment to the longrun goals (sustainable public ﬁnances. 2005). Hence. As a result. and of the economy’s transmission parameters (α. we have ﬁve conclusions. This fact takes some inﬂationary pressure off the central bank when ﬁscal expansions are called for. each expansion designed to increase output would simply be accompanied by a greater burden of debt. since only then will the ﬁscal policy leader take into account the predictable reactions (of the follower) to any shortrun deviations by the leader. and costs. β. and results in lower inﬂation without any loss in output or output volatility. γ) or the particular value of b as discussed above. This is what generates a better coordination between the policy makers. The proximate cause of this surprising result is that optimization under ﬁscal leadership leads to higher taxes and larger debt repayments. Thus. and externalities are reduced on both sides. and the outcomes worse for both. s). in order to preserve the longrun sustainability of its ﬁnances. The ability to coordinate would be lost. These conclusions signify the main results of this paper: (a) Fiscal leadership eliminates the inﬂationary bias. the bank is less likely to tighten the monetary policy. (32). if the leader does so). 22 Taxes are lower under simultaneous moves because λcb < λg . the government must eventually raise taxes. or its target value (θ). in the shortrun under simultaneous moves. and independently of the government’s prefg erence to stabilize or spend (λ1 . as noted in Section 5.The Advantages of Fiscal Leadership in an Economy 91 Comparing these two sets of outcomes. Unless the effect of ﬁscal policy on output is so large as to generate savings/taxes that could ﬁnance both ﬁscal expansions and debt repayments (this possibility is ruled out in the deﬁcit rule case). (c) These results hold independently of the commitment to the debt rule g (λ2 ). So Eτ = 0 in Eq. . and a debt or deﬁcit rule in this case). which the externalities he/she imposes on his/her rival would create. We show this in Section 5 (and in a more formal analysis in Hughes Hallett. The follower knows that the leader is not likely to sacriﬁce his longrun goals by making such deviations (and anyway has the opportunity to clear up the mess.22 (b) The deeper reason is that there is a selflimiting aspect to the longrun design of ﬁscal policy. according to the follower’s own preferences. (29) vs. But. each player fails to consider the predictable response.
I demonstrate these points formally in Hughes Hallett (2005). see Appendix B.92 Andrew Hughes Hallet (d) Since Eτt∗ < 0 in Eq. and Eτt∗ → 0. the simultaneous moves regime will always end up in increasing the deﬁcit levels. The CoOrdination Effect In our model. interactions between an independent central bank and the government would lead to noncooperative outcomes. The ECB should ﬁnd this a signiﬁcantly more comfortable environment to operate in. the expected value of s(byt∗ − τt∗ ) in Eq. even if monetary t ﬁnancing is allowed. since the monetary policy externalities would be larger). This is an important point because it implies that granting leadership to a central bank whose inﬂation aversion is already greater than the government’s. Leadership also implies less expansionary budgets and tighter public expenditure controls. But. And since revenues are lower in the latter case. this is not to say that both parties gain equally. the gains from the ﬁscal leadership are more than just better outcomes. or the government is liberal. 2002). . Without further institutional restraints. the leadership gains to the government will be less (Hughes Hallett and Viegi. the policy game will become an example of rulebased coordination in which both parties can gain without any reduction in the central bank’s independence. ˆ 6. will produce no additional gains. but Eτt = 0 in (29). Hence. 23 m ≈ 0. or whose commitment is greater. if the government is committed to longterm leadership in the manner that I have described.23 Indeed. granting leadership to a central bank that cannot inﬂuence debt or deﬁcit directly. but φλcb∗ /(α2 γ) > 0 in the simultaneous moves case (as expected. (e) The simultaneous moves regime approaches ﬁscal leadership. but puts a higher priority on inﬂation control will bring smaller gains from a society’s point of view (whatever it may do for the central bank). Therefore. If the inﬂation target is strengthened after the ﬁscal policy is set. it will eventually exceed any limit set for the debt ratio. More generally. the simultaneous moves game always leads to ﬁscal expansions if money ﬁnancing is small. (32). πt∗ → π. (5) is zero under ﬁscal leadership. the central bank is independent. This result still holds good when β > γ. Fiscal leadership will do neither of these things. ELg → 0. when β < γ. as λ2 → ∞. budget deﬁcits are larger. λcb∗ → g 0. For this reason. It also holds good if the central bank is conservative. However. as the debt rule becomes progressively “harder”: that is.
The data itself. and its sources. We need no precommitment beyond leadership (the Stackelberg game supplies subgame perfection. with independently set ﬁscal and monetary policies and target independence: France. all longrun “contractual” issues. In the ﬁrst group. I have computed the expected losses under the simultaneous move and ﬁscal leadership regimes for 10 countries when both ﬁscal policy and the central bank are constructed optimally. New Zealand. and defence). Germany. monetary policy is conducted at the European level and ﬁscal policy at the national level. 24 This supplies the political economy of our results. The data I have used is from 1998. there is no incentive to reoptimize for either party — so long as the government remains committed to longterm leadership rather than shortterm demand management — unless both parties agree to reduce their independence through discretionary coordination. given the structure of the Stackelberg game. . can be characterized in terms of a simultaneous move game. Italy. Canada. with joint inﬂation and stabilization objectives: United States. 2002). Our results are robust. do not emerge for both parties in all Stackelberg games.1. they do so in problems of the kind examined here. and Switzerland. Empirical Evidence Whether or not these results are of practical importance is another matter. The countries that are selected fall into three groups: (a) Eurozone countries. and the United Kingdom and (c) Federalists. although Pareto improvements. with target as well as with instrument independence for both sets of policy authorities. Hughes Hallett and Viegi. (b) Explicit inﬂation targeters with ﬁscal leadership: Sweden. The latter will then arise from the political competition inherent in any democracy.24 This is a general result: it holds good for any model where both inﬂation and output depend on both monetary and ﬁscal policy. Thus. the year in which the Eurozone was created. Policy interactions. are summarized in Appendix A. The former will hold so long as governments have a natural commitment to maintaining sustainable public ﬁnances. or social equity. education. see Note 4) and no punishment beyond electoral results.. 2004. and where inﬂation is targeted to some degree by both players (Demertzis et al. in this group.The Advantages of Fiscal Leadership in an Economy 93 Finally. In Table 3. and the Netherlands. public services (health. 6. over no cooperation.
inﬂation targets. 0.14 1.79 0. perhaps.60 It is.134 0.8 11. also federalism in ﬁscal policy and independence at the central bank.40 3. The second group of countries has adopted explicit.118 0.40 7. Column 2 reﬂects the losses that would be incurred under .70 24.50 4. and mostly publicly announced.51 8.00 0. the government has adopted longterm (supply side) ﬁscal policies.00 0. The third group represents a set of more ﬂexible economies.7 16. arguable that the Eurozone has adopted a monetary leadership regime.46 12.00 0.10 5.28 4.207 0.071 0.00 0. Central banks in these countries have been granted instrument independence. and therefore no declared leadership in either ﬁscal or monetary policies.00 0.00 0.034 0. ∗ g λcb = λcb rate equivalents λcb = 0 λcb = λ1 France Germany Italy Netherlands Sweden New Zealand United Kingdom United States of America Canada Switzerland 5. Expected losses under a deﬁcit rule.30 20.057 0.165 0.00 0.00 0. This provides a useful benchmark for the other two groups.066 13. with implicit inﬂation targeting and a statutory concern for stabilization. Column 1 shows the losses under a dependent central bank in welfare units — leadership or not.053 0. and the ﬁscal constraints that could have sustained such a leadership were widely ignored. leaving an active demand management to monetary policy. These are clear cases in which there is both ﬁscal leadership and instrument independence for the central bank. In each case. the inﬂation target.5 3. The government either sets. other strategies.8 6. but the empirical evidence for this was weak.00 0.37 6.94 Andrew Hughes Hallet Table 3. but not target independence.4 5. or helps to set. The performance under the different ﬁscal regimes when the ﬁscal constraint is a relatively soft deﬁcit rule is reported in Table 3.28 1. Full Fiscal Simultaneous Losses under dependence leadership moves simultaneous g g δ=1 δ = 0 λ1 = 1 δ = 0 λ1 = 1 moves: Growth0.00 0.248 0. leadership vs.78 16.
The deﬁcit ratio targets are 0% in all countries (a balanced budget in the medium term). The losses in Column 3 (Table 3) appear to be relatively small in comparison. medium g strength for the United States. the United Kingdom. However. This reveals the weakness of the deﬁcit rule as a 25 Currie et al. Similarly. . Italy.. Evidently. I ﬁnd these losses to be signiﬁcant. (9). Formally. and New Zealand around 3%–5% less. To obtain these ﬁgures. The minimum value of dyt is. dyt .. the deﬁcit rule is soft for the Eurozone countries (λ2 = 0. The third column gives the minimum loss associated with a simultaneous decisionmaking version of the same game. if all other variables remain ﬁxed at their optimized values. Canada. the United Kingdom and New Zealand (λ2 = 1). ﬁscal leadership with debt targeting would be worth 10%–20% extra GDP in the United States. out of any remaining deﬁcit or any endogenous budget improvements is one. In each g case. attained when tax revenues τt grow at the same rate as the repayment target (b − θ)yt . that would yield the welfare losses in Column 3. Thus. had they adopted ﬁscal leadership with deﬁcit targets. but the propensity to spend. and the Netherlands could have expected to have grown about 15%–20% more. complete dependence in monetary policy is extremely unfavorable.5). and Switzerland (λ2 = 0. A growthrate equivalent is the loss in output growth that would produce the same welfare loss. They show that the losses associated with simultaneous decisionmaking are equivalent to having permanent reductions of up to 20% in the level of national income. the immediate effect of introducing a stability pact deﬁcit rule has been to increase the costs of an uncoordinated (simultaneous moves) regime dramatically.e.25 The ﬁgures in Column 4 are more interesting. when these ﬁgures are converted into “growthrate equivalents”. 1989). had they not done so. These are the losses reported in Column 4. These are signiﬁcant gains and signiﬁcantly larger than could be expected from international policy coordination itself (Currie et al. s.25). δ = λcb = 0). therefore. and Sweden. France. That is.The Advantages of Fiscal Leadership in an Economy 95 the government leadership with an independent central bank that directs monetary policy exclusively towards the achievement of the inﬂation target (i. and Canada. although the magnitude of the loss varies considerably from country to country. and g harder for Sweden. dyt = g dELt g g [λ2 {(b − θ)yt − τt }(b − θ) − λ1 ] using Eq. I compute the marginal rates of transformation around each government’s indifference curve to ﬁnd the change in output growth. (1989).
and tax revenues) will be endogenous. (d) Debt ratios will increase in a policy game without coordination. Leadership is the crucial element here. and with less natural persistence) are less easy to precommit credibly. but do not do so under ﬁscal leadership. Likewise. therefore. (c) The leadership model predicts improvements in inﬂation and the ﬁscal targets. leadership requires less precision in the setting of the strategic or institutional parameters. Moreover. as in the leadership scenario. deﬁcit rules without coordination to restrain ﬁscal policy imply larger deﬁcits (and more expansionary policies) than do debt rules of a similar speciﬁcation because deﬁcits (being a ﬂow not a stock. without any loss in growth or output volatility. 7. and the noncooperative (or rulebased) policies of complete independence. It is easier to implement. Because it is a ﬂow and not a stock. (b) The coordination gains come from the selflimiting action of ﬁscal policy when ﬁscal policy is given a longrun focus in the form of a (soft) debt or deﬁcit rule. it does not have the persistence of a cumulated debt criterion because violations are not carried forward. a larger part (the current automatic stabilizers. social expenditures. if this assumption is likely to be violated (or challenged). then we get good results of course. These two results explain . it has no inherent persistence. It is. So. these gains do not appear in a straightforward competitive regime with the same debt or deﬁcit rule. varying with the level of economic activity and with the pressures of the electoral cycle. But. This places the outcomes somewhere between the superior (but discretionary) policies of complete cooperation.96 Andrew Hughes Hallet restraint mechanism. the deﬁcit either fails its target or it has to be restrained to such a degree that it starts to do serious damage to overall economic performance. In addition. then the fact that the results are so much worse than in the simultaneous moves case shows how difﬁcult it is to precommit deﬁcits in advance. Conclusions I conclude this chapter with the following observations: (a) Fiscal leadership leads to improved outcomes because it implies a degree of coordination and reduced conﬂicts between institutions. If we assume that it can be precommitted. without the central bank having to lose its ability to act independently. Being only a small component in a moving total. much harder to use as a commitment device with any credibility in the long term. when policy conﬂicts arise.
Economic conﬂict and the solution of dynamic games. Barro. D Currie.). Amsterdam: North Holland. Dixit. Philadelphia: SIAM. Dynamic noncooperative game theory. Brussels: European Commission. JCR (1964). DC: International Monetary Fund. American Economic Review 93. M Buti (ed. A (2001). Dieppe. . J Frenkel. London: Centre for Economic Policy Research. In Classics in Applied Mathematics. Stability and Growth in Europe.The Advantages of Fiscal Leadership in an Economy 97 why the ECB prefers hard restraints (and why the ﬁscal policy makers do not). DA. S Eijfﬁnger and D Franco (2003). AK and L Lambertini (2003). Journal of Political Economy 89. 13–32. 533–571.). 9–54. Cambridge: Cambridge University Press. Economic Policy 37. In The Transatlantic Transmission of Fiscal Policy and Macroeconomic Policy Coordination. Games of monetary and ﬁscal interactions in the EMU. An Independent Central Bank Faced with Elected Governments: A Political Economy Conﬂict. Interactions of commitment and discretion in monetary and ﬁscal issues. RJ and DB Gordon (1983). A New View on The Transatlantic Transmission of Fiscal Policy and Macroeconomic Policy Coordination. In Dynamic Policy Games in Economics. R Bryant. SIAM series. P Masson and R Portes (eds. R Strauch and A Sibert (2003). Fiscal policy and monetary integration in Europe. and reputation in a model of monetary policy. Dixit. References Barro. European Economy: Studies and Reports 4. discretion. 1086–1121. Fatas. Optimal monetary policy rules for the Euro Area: an analysis using the area wide model. European Commission (2002). Cambridge: Cambridge University Press. 125–148. The Management of the British Economy 1945–1960. Revisiting the stability and growth pact: grand design or internal adjustment? Discussion Paper 3692. M. Canzoneri. Gali. A Hughes Hallett. G Holtham andA Hughes Hallett (1989). 589–613. 907–922. Washington. J von Hagen.). Output effects of government purchases. Brandsma. M (2004). T and G Olsder (1989). Basar. T (1989). A. and suggests that the ECB may actually prefer ﬁscal leadership because of the greater degree of precommitment implied. Public ﬁnances in EMU:2002. M. European Economic Review 26. A. Dow. RJ (1981). European Economic Review 45. Time consistency and robustness of equilibria in noncooperative dynamic games. F van der Ploeg and A de Zeeuw (eds. A and A Hughes Hallett (1984). Working Paper 360. In European Journal of Political Economy 20(4). 1522–1542. Journal of Monetary Economics 12. Rules. Demertzis. Frankfurt: European Central Bank. J and R Perotti (2003). A Hughes Hallett and N Yiegi (2004). 101–121. London: Centre for Economic Policy Research (MEI13). Buti. The theory and practice of international economic policy coordination: does coordination pay? In Macroeconomic Policies in an Interdependent World. pp. Basar. Currie. K Kuster and P McAdam (2004).
hmtreasury. The values for β and γ.A (2005). 103–110. 326–334. RE (1972). A (1984). Independence before conservatism: transparency. 426–477. American Economic Review 63. Cmnd 799373. 1–23. Some international evidence on outputinﬂation tradeoffs. policy rules in practice. Journal of Economic Theory 4. Discretion vs. The Phillips curve parameter. α. are the oneyear simulated multipliers for these policies in Taylor’s multicountry model (Taylor. Journal of Economic Literature 41. Oxford Economic Papers 36. Taylor. London: Centre for Economic Research. Policy games and the optimal design of central banks. Norwich: also available from www. 251–279.W. Government leadership and central bank design. s is the automatic stablizer effect on output according to the European Commission (2002) and HMT (2003) estimates. Economics Letters 85. Independent monetary policies and social equity. They come from different sources and represent “stylized facts” for the corresponding parameters. London: Centre for Economic Research. Appendix A: Data Sources for Table 3 The data used in Table 3 are set out in the table below. New York: Macmillan. A and D Weymark (2005). Discretionary ﬁscal policies. Noncooperative strategies for dynamic policy games and the problem of time inconsistency. Brussels: European Commission. DGII. JB (1993a). L (2003). HM Stationary Ofﬁce. is the inverse of the annualized sacriﬁce ratios on quarterly data from 1971–1998. BT (1989). A and N Viegi (2002). A and D Weymark (2004a). The Impact of the EU Fiscal Framework on Economic Activity. Macroeconomic Policy in a World Economy: From Econometric Design to Practical Operation. Discussion Paper 5043. Taylor. 341–362. I have calibrated the remaining parameters using stylized facts from 1998: the year that EMU started. CarnegieRochester Conference Series on Public Policy 39. Turrini. Hughes Hallett. Hughes Hallett. allowing for larger social . 1993a). A and J in ‘t Veld (2004). Norton and Company. 103–124. What is wrong with Taylor rules? Using judgement in monetary rules through targeting rules. 195–214. 381–399. Hughes Hallett. Fiscal Stabilization and EMU. the impact multipliers for ﬁscal and monetary policy. McCallum. Taylor. JB (2000).A and D Weymark (2004b). Hughes Hallett. Monetary Economics: Theory and Policy. respectively. A and D Weymark (2002). RE (1973). Hughes Hallett. Expectations and the neutrality of money. Open Economies Review 13.). JB (1993b). Journal of Economic Perspectives 14. and the year that new ﬁscal and monetary regimes took effect in the United Kingdom and Sweden. Thus. P Minford (ed. politics and central bank design. 1–25.gov.uk. In praise of ﬁscal restraint and debt rules: what the Euro zone might do now. Lucas. Inﬂation targeting as a coordination device. Svensson. London: Edward Elgar. German Economic Review 6.98 Andrew Hughes Hallet HM Treasury (HMT) (2003). In Money Matters. New York: W. Discussion Paper 3395. Lucas. Hughes Hallett. Hughes Hallett.
489 γ 0.158 g λ2 g 0. Similarly.25 1 1 1 0. Finally.25 0.147 1. than a regime with ﬁscal leadership.25 0.489 0.430 0. as reﬂected in actual behavior in 2004.580 1.850 0. given the change in s(byt − τt ).433 0.570 0.467 0.244 0. For Table 4.The Advantages of Fiscal Leadership in an Economy 99 Table 4. θ is set to give a debt target somewhat below each country’s 1998 debt ratio and the SGP’s g 60% limit. m = {γ [s(by − τ)] + βλcb /(αλ1 )}/(β − γ) which.850 0. Applying this result to Eq.533 0. is positive.080 1.163 1. and will run larger deﬁcits. (4). I suppose that the deﬁcit that remains after growth effects and new discretionary taxes are accounted for will be spent (s = 1).533 0.400 0. high. and that the mediumterm deﬁcit target is zero (θ = 0).150 0. since ∗ g . g I have set λ1 = 1 in each country to suggest that governments.400 0.385 0. this yields zero and φλcb∗ /(α2 γ) > 0 in each case.176 0. or medium commitment to debt/deﬁcit limits.278 0.271 1. But.5 0. β 0.25 0.200 0. Hence. Taking the expected values. the simultaneous move regime is unambiguously more expansionary. if β > γ. In this case. (31) and (32). are equally concerned about inﬂation and output stabilization. λ2 was set to imply a low. Appendix B: Derivation of the Change in Fiscal Stance Between Regimes The expected value of s(byt − τt ) under ﬁscal leadership and simultaneous moves can be obtained by inserting values ﬁrst from Eq. respectively.625 0.294 0.323 0. is (β − γ) m − βλcb∗ /(αλ1 ) + γ [s(by − τ)] = 0.5 0. Likewise.625 0.5 security sectors in the Netherlands and Sweden.533 s 1 1 1 1 1 1 1 1 1 1 θ 0 0 0 0 0 0 0 0 0 0 φ 1.133 0.098 1.038 1. α France Germany Italy Netherlands Sweden New Zealand UK United States Canada Switzerland Data for the deﬁcit rule calculations.306 1. (29).533 0.094 1. if left to themselves. it shows that the expected change in output between regimes.130 1.489 0.333 0. which we g know to be zero.500 0. and then from Eqs.600 0.
. or if λ1 is somewhat larger (the government is relatively liberal). the expansion of the budget deﬁcit will be: Deficit = = m+ [s(by − τ)] − φ 1 + g αγ λ1 ∗ τ − γs γs g cb∗ + g λ1 g λ φλ2 ϕλ2 β α(β − γ) which is still clearly positive if either λcb is small.100 Andrew Hughes Hallet γ/(β − γ) > 1. the central bank is g conservative. In the latter case. this result may not follow if β < γ.
when the emissions are increasing in the output and decreasing in the abatement technology. France e e e ˇ cı Pavel Sevˇ´k Universil´ de Montr´ al. however. In a nutshell. and keeping the taxes low to encourage production. Introduction The use of taxes as regulatory instrument in environmental economics is a classic topic. Canada e e 1. they incite the ﬁrms to reduce production and to undertake investments in abatement technology. the following situations: (1) The emission taxes have a dual effect. This is typically the case. the need for regulation usually arises because producing causes detrimental emissions. In this case. 101 . Due to the lack of a proper market. where there is no room for strategic interaction between the regulator and the ﬁrms. Taxes can be used to modify the prices. The reduction of production is not and (3) The investments are irreversible.Topic 2 CheapTalk Multiple Equilibria and Pareto — Improvement in an Environmental Taxation Games Chirstophe Deissenberg Universit´ de la M´ diterran´ e and GREQAM. Thus. (2) Emission reduction is socially desirable. Consider. the regulator must ﬁnd an optimal compromise between implementing high taxes to motivate high investments. they take their decisions on the basis of prices that do not reﬂect the true social costs of their production. confronting the ﬁrms so that the socially desirable decisions are taken. the ﬁrms do not internalize the impact of these emissions on the utility of other agents. The problem has been exhaustively investigated in static settings.
b). In Dawid et al. Dijkstra (2002). Moreover.ˇ cı 102 Chirstophe Deissenberg and Pavel Sevˇ ´k The fact that the investments are irreversible introduces a strategic element in the problem. See among others Gersbach and Glazer (1999) for a number of examples and for an interesting model. ı if ﬁrms tend to adopt the behavior of the most successful ones. a unique . inconsistency arises because the Stackelberg equilibrium is not deﬁned by mutual best responses. these solutions aim at assuring the time consistency of Stackelberg solution with either the regulator or the ﬁrms as a leader. the regulatory can insure high production and important investments by ﬁrst declaring high taxes and reducing them. This opens the door to a reoptimization by the leader once the follower has played. we demonstrated the usefulness of a new solution to the time inconsistency problem in environmental policy. one is confronted with a typical time inconsistency problem. Thus. A number of options to insure credible solutions have been considered in the literature — credible binding commitments by the Stackelberg leader. Schematically. in the absence of additional mechanisms. As noted in the seminal work of Simaan and Cruz (1973a. the economy is doomed to converge towards the less desirable Nash solution. An usual conclusion is that. a regulator. but not that the leader’s actions is itself a best response to the follower’s. It implies that the follower uses a best response in reaction to the leader’s action. etc. is not credible. that are not respected. recognize that the initially high taxes will not be implemented. Batabyal (1996a. If the ﬁrms are na¨ve and believe his announceı ments.b). Marsiliani and Renstrom (2000) and Petrakis and Xepapadeas (2003). these solutions are not efﬁcient and can be Paretoimproved. which has been extensively treated in the monetary policy literature following Kydland and Prescott (1977) and Barro and Gordon (1983). (see McCallum (1997). who announces that it will implement the Stackelberg solution. for a review in a monetary policy context). however. (2005). In other words. We showed that tax announcements. see Abrego and Perroni (1999). use of trigger strategies by the followers. if these include any number of na¨ve believers who take the announcements at face value. the time inconsistency problem has received yet only limited attention. In environmental economics. The time inconsistency is directly related to the fact that the situation described above. can increase the payoff not only of the regulator. although it frequently occurs. deﬁnes a Stackelberg game between the regulator (the leader) and the ﬁrms (the followers). but also of all ﬁrms. More sophisticated ﬁrms. Usually. and are reluctant to invest in the ﬁrst place. reputation building. once the corresponding investments have been realized.
however. rather than by precommitting to his announcements. The believers/nonbelievers dichotomy was introduced by Deissenberg and Gonzalez (2002). In Section 4. we summarize the mechanisms at work in the model and the main insights. In this chapter. the situation of interest is the following: The regulator can tax the ﬁrms to incite them to reduce their emission. The potential usefulness of employing misleading announcements to Paretoimprove. the regulator builds reputation and leadership by making announcements and implementing taxes in a way that generates good results for the believers. proﬁtmaximizing ﬁrms i with an identical production technology. To attain the superior equilibrium. in Section 6. (1999) and developed by the same authors in subsequent papers. (2005) and show that a minor (and plausible) modiﬁcation of the regulator’s objective function. In Section 2. have a negative impact on the employment . upon standard gametheoretic equilibrium solutions was suggested for the case of general linearquadratic dynamic games in Vallee et al. we present the model of environmental taxation and introduce the imitationtype dynamics that determinate the evolution of the number to believers in the economy. This equilibrium Paretodominates the one. This chapter is organized as follows. we derive and discuss the solution of the sequential Nash game. consisting of a Regulator R and a continuum of atomistic. 2. one obtains by assuming a constant proportion of believers. A similar monetary policy problem has been investigated by Dawid and Deissenberg (2005) in a continuoustime setting akin to the used in the present work. who study the credibility problem in monetary economics in a discretetime framework with reinforcement learning. The Model We consider an economy. and to generate tax income. To keep the notation simple. we do not index the variables with either i or r unless it is useful for a better understanding. In a nutshell. opens the door to the existence of multiple stable equilibria with distinct basins of attraction — with important interpretations and potential consequences for the practical conduct of environmental policy. In Section 3. we formulate the dynamic problem and investigate its solution numerically. we revisit the model proposed by Dawid et al. where all ﬁrms anticipate perfectly the Regulator’s action. Finally.Cheaptalk Multiple Equilibria and Pareto 103 stable equilibrium may exist where a positive fraction of ﬁrms are believers. taxes. An early application to environmental economies is Vallee (1998). Time τ is continuous.
Bs and NBs Each ﬁrm produces the same homogenous goods using a linear production technology: the production of x units of output. At each point of time. and employment levels.ˇ cı 104 Chirstophe Deissenberg and Pavel Sevˇ ´k (and thus. private sector income. — Given t a . emission reduction. The investment in one period has no impact on the emissions in future . The investment 1 γ(v) = v2 (2) 2 is needed. The Firms. ) described in the remainder of this section. — Each ﬁrm decides about its level of investment v based on its expectation t e . objective functions . Thus. except for the way they form their expectations t e . the demand is assumed inﬁnitely elastic at the given price p > w. the ﬁrms form expectations t e about the true level of the environmental tax. .1. each ﬁrm can spend ˜ ˜ an additional amount of money γ in order to reduce its current emissions x. requires x units of labor and generates x units of environmentally damaging emissions. and cx > 0 a parameter for simplicity sake. — Each ﬁrm may revise the way it forms its expectations depending on its relative proﬁts. (1) where x is the output. we just presented and the economic structure will be (production and prediction technology. there are two different ways for an individual ﬁrm to form its expectations. The tax level t is the amount each ﬁrm has to pay per unit of its emissions. All ﬁrms are identical. on the labor income) and the proﬁts. . Let p := p − w > 0. The production costs are given by: c(x) = wx + cx x2 . The following sequence of event (S) occurs in every τ: — R makes a nonbinding announcement t a ≥ 0 about the tax level t ≥ 0 it will choose. 2. 0]. R has to choose the tax level in order to achieve an optimal compromise among tax revenue. — Each ﬁrm produces a quantity x. — R choose the actual level of tax t ≥ 0. We assume full information: the regulator and the ﬁrms perfectly know both the sequence. w > 0 the ﬁxed wage rate. in order to reduce the ﬁrm’s emissions from x to max [x − v. As will be described in more detail later.
Depending upon the expectationformation mechanism the ﬁrm chooses. It can either suppose that R’s announcement is truthful. Thus. Thus. their current instantaneous proﬁt. Details on the derivation of the NBs’ prediction will be given later.. the intertemporal social welfare function. In this model. t) = 0 ∞ e−ρτ ϕ(t a . we speak of believer B or of a nonbeliever NB: A believer considers the regulator’s announcement to be truthful and sets te = ta (3) This does not cost anything. However. of a switch from coal to natural gas. and vice versa. that t will be equal to t a . prediction) t e of t. The Regulator. Rather than expenditures in emissionreducing capital. Note that a ﬁrm can change its choice of expectationformation mechanism at any moment in time.Cheaptalk Multiple Equilibria and Pareto 105 period. γ can therefore be interpreted as the additional costs resulting of a temporary switch to a cleaner resource — for e. R The regulator’s goal is to maximize with respect to t a ≥ 0 and t ≥ 0. (t a . In reality. We denote with π = π(t) ∈ [0.g. that is. a ﬁrm can invest larger or smaller amounts of money in order to make or less accurate predictions of the tax. 1] the current fraction of Bs in the population. the actual tax rate t enters their instantaneous proﬁt function and is unknown at the time when they choose v. The ﬁrms attempts to maximize their proﬁt. 1 − π is the current fraction of NBs. As we shall see at a later place. Making the prediction in any given period τ costs δ > 0. a B can become a NB. one can assume without loss of generality that they disregard the future and maximize in every τ. 2. considering the current number of Bs and NBs as constant. the government will actually implement. A nonbeliever makes a “rational” prediction of t. they need to base their optimal choice of v on an expectation (or. we assume for simplicity that each ﬁrm has two (extreme) options for predicting the implemented tax. or it can do costly research and try to better predict the t that will actually be implemented.2. t)dτ . That is.
At every encounter. according to a imitationtype dynamics. The difference between the results of Dawid et al. 1 gB = pxB − cx (xB )2 − t(xB − vB ) − (vB )2 2 (5) 1 gNB = pxNB − cx (xNB )2 − t(xNB − vNB ) − (vNB )2 − δ. Speciﬁcally. 2 The strictly positive parameter ρ is a social discount factor. the ﬁrms tend to adopt the type of prediction. Dynamics The ﬁrms switch between the two possible expectationformation mechanisms (B or NB).3. the instantaneous welfare φ is the sum of: (1) the economy’s output. Thus. each pairing being equiprobable. on the average. (4) where xB . dπ (6) = π = βπ(1 − π)gB − gNB . In the former article. at each τ the ﬁrms meet randomly twobytwo.) and gNB (. that is also. (2) the total proﬁts of Bs and NBs. for encounter between ﬁrms with different proﬁts are . 2. N or NB. Hofbauer and Sigmund (1998). t) = (πxB + (1 − π)xNB ) + πgB + (1 − π)gNB − (π(xB − vB ) + (1 − π)(xNB − vNB ))2 + t(π(xB − vB ) + (1 − π)(xNB − vNB )). (3) the squared volume of emissions and (4) the tax revenue. xNB . the ﬁrm with the lower current proﬁt adopts the belief of the other ﬁrm with a probability proportional to the current difference between the individual proﬁts. vB . this function does not include the ﬁrm’s proﬁt and is linear in the emissions. vNB denote the production respectively. and where gB (. (2005) and those of the present article stem directly from the different speciﬁcation of the regulator’s objective function. investment chosen by the believers B and the nonbelievers NB. ˙ dt 1 Notice that π reaches its maximum for π = 2 (the value or π for ˙ which the probability.ˇ cı 106 Chirstophe Deissenberg and Pavel Sevˇ ´k with ϕ(t a .) are their instantaneous proﬁts. This function can be recognized as a reasonable approximation of the economy’s social surplus. which may currently leads to the highest proﬁts. Thus. see Dawid (1999). Above mechanism gives rise to the dynamics. the level of employment (remember that output and employment are onetoone in this economy).
The ﬁrms being . 3. but also the future proportion of Bs in the economy.1. This naturally leads us to examine the (Nash) solution of the game between the regulator and the nonbelievers that follows from the sequence (S) when. it provides the prediction t e of the NBs. (6) of π. independently of its true planing horizon. since the ﬁrms are atomistic. that captures the probability with which a ﬁrm changes its strategy during one of these meetings. This. maximizes its current proﬁt in every τ. and tends towards 0. as previously mentioned. On the other hand. we assume full information. Equation (6) implies that by choosing the value of (t a . t) at time τ the regulator not only inﬂuences the instantaneous social welfare. each single producer is too small to inﬂuence the dynamics Eq. NBs choose the same values for v and x. Derivation of the Solution The last decision in the sequence (S) is the choice of the production level x by the ﬁrms. Thus. that is. It can be interpreted as a measure of the ﬁrms’ willingness to change strategies. Hence. for π → 0 and π → 1 (for extreme values of π. (4) with respect to t a and t and (b) the NBs maximize their proﬁts with respect to νNB and xNB . taking into account the fact that the ﬁrms. of the ﬂexibility of the ﬁrms. ﬁxed value of π. (a) R maximizes the integrand φ in Eq. the R and the NBs are fully aware of the (nonstrategic) behavior of the NBs. This choice is made after v and t are known. The Sequential Nash Game The game is solved by backwards induction. has an impact on the future social welfare.Cheaptalk Multiple Equilibria and Pareto 107 maximized). 3. in turn. As previously mentioned. the single ﬁrm does not take into account any intertemporal effect and. We consider only symmetric solutions where all Bs respectively. that is consider these values as given. for an arbitrary. Hence. all but very few ﬁrms have the same proﬁts). rightly assume that their individual actions do not affect the aggregate values. The parameter β ≥ 0. calibrates the speed of the information ﬂow between Bs and NBs. In particular. This again has an impact on the future social welfare. The analysis of the static case is also necessary since. there are no explicit dynamics for the economy. although there are no explicit dynamics for the economic variables v and x. being atomistic. the only source of dynamics in the model. R faces a nontrivial intertemporal optimization problem.
1 + cx (12) Solving this last equation for vNB gives for the equilibrium value: vNB = vNB (t a ) p − cx (1 + 2πt a ) . they set: v = te (10) (9) The Bs predict that t will be equal to its announced value. Thus. as it exclusively depends upon the realized taxes t. (5).e. i. one recognizes that the ﬁrms choose v in order to maximize (p − t e ) 2 1 + t e v − v2 . 1 + cx (8) When the ﬁrms determine their investment v. the optimal symmetric production decision is: xB = xNB = x = p−t . 1 + cx (3 − 2π) (13) The ﬁrst decision to be taken is the choice of t a by R. (11) for vB and Eq. In the previous stage. Eq. (13) for vNB . t e = t a . (7) in Eq. 4cx 2 That is. and t a are known. using the results from the previous stages. but the actual tax rate t is not. (11) The NBs know that the regulator will act according to Eq. R chooses t given vB . (7) and (8).ˇ cı 108 Chirstophe Deissenberg and Pavel Sevˇ ´k takers. vNB ) = p − cx [2(πvB + (1 − π)vNB ) + 1] . Using Eq. (7) for x. 2cx (7) Note that the optimal production is the same for Bs and NBs. vB = t a . Thus. Eq.. they choose: vNB = p − cx [2(πvB + (1 − π)vNB ) + 1] . is concave . The regulator’s instantaneous objective function φ. (8). Eqs. with x is given by (7) gives the optimal reaction function: t(vB . Maximizing vNB with respect to t.
Moreover. The difference between the statically optimal (i. the optimal announcement t a$ is a constant. the proﬁt of the NBs is always higher than the proﬁt of the Bs whenever π > 0. reﬂecting the fact that the latter make a systematic error about the true value of t. and to implement a low tax to motivate a high production level. one obtains for the equilibrium value of t 1+p 1 + cx t$ = . 3. however. (18) between the proﬁts of the NBs and Bs is also increasing in the difference between the announced and the implemented tax. √ cx > 1/ 3. evaluated at t $ ) profits of NBs and Bs (1 + cx )2 gNB$ − gB$ = − δ. Since t a$ is constant and t $ decreasing in π and since t $ ≤ t a$ . if. t $ decreases in π: When the proportion of Bs augments.Cheaptalk Multiple Equilibria and Pareto 109 with respect to t a for all π ∈ [0. (14) We shall assume in the following that this inequality holds. (8) with vNB given by Eq. can exceed the one of the NBs if the prediction costs δ are high enough.. (16) − 1 + 3cx cx (3 − 2π) + 1 Both t a$ and t $ will be nonnegative if 3 . (13). p≥ 3cx − 2 which we assume from now on. Modulo δ. the announcement t a becomes a more powerful instrument making a recourse to t less necessary. The proﬁt of the Bs.2. Therefore. the difference t a$ − t $ . Then.e. Some Properties of the Solution For all cx > 0 and π ∈ [0. (18) 2cx (2π − 3)2 is likewise increasing in π. 1]. t a$ − t $ . the regulator will choose 1+p . the difference Eq. increases with π. It is optimal to announce a high tax in order to elicit a high investment from the believers. 1]. (17) . (15) t a$ = 1 + 3cx Using this last equation in Eq. and always less than t a$ .
increases with π. however. to make higher proﬁts. because a raising number of Bs are led by R’s announcement to invest more than they would. to produce more. as π increases. the problem tends towards the solution a standard optimization situation with the regulator as the sole decisionmaker. to make higher proﬁts. moving from the inefﬁcient gametheoretic solution to the efﬁcient controltheoretic solution one improves the position of all actors. however. 1 for an illustration). is the fact that the proﬁts of both the Bs and NBs are also increasing in π (see Fig. to produce more. This outcome is not efﬁcient leaving room for Paretoimprovement. the NBs beneﬁt from the decrease in t. the R prefers . Most remarkable. the regulator’s instantaneous utility φ too. subsequently. and to pollute less. An intuitive explanation is the following: For π = 0 the outcome is the Nash solution of a game between the NBs and the regulator. Likewise.6. p = 2. of the NBs (middle curve).00025. the impact of the tax announcements on the π dynamics may incite the regulator to reduce the spread between t and t a . And R also beneﬁts. otherwise and. As one would intuitively expect. and to pollute subsequently. As π increases. In dynamic settings. This solution is efﬁcient in the sense that it does maximize the regulator’s objective function under the given constraints. while the investment remains ﬁxed at t a$ . and the regulator’s utility (upper curve) as a function of π for δ = 0. and in particular to choose t a < t a$ .ˇ cı 110 Chirstophe Deissenberg and Pavel Sevˇ ´k Figure 1. cx = 0. the Bs enjoy the beneﬁts of a lower t. Ceteris paribus. The proﬁts of the Bs (lower curve). Since the objectives of the regulator are not extremely different of the objectives of the ﬁrm. Speciﬁcally.
1 t a2 − th + h2 + δ . 2 2 . by Eq. Using p − cx (1 + 2πt a ) . The Dynamic Optimization Problem 4.Cheaptalk Multiple Equilibria and Pareto 111 a high proportion of Bs to a low one. implies that the proﬁts of the Bs will be low compared to those of the NBs. R will have to ﬁnd an optimal compromise between choosing a high value of t a . Formulation Being atomistic. We are going to explore these mechanisms in more details in the next section. and choosing a lower one. t. in the dynamic problem. h := 2cx The Hamiltonian of the above problem is given by g := H(t a . leading to a more favorable proportion of Bs in the future. t(τ) (19) such that Eqs. λ) = h + π h − (p − t)2 t a2 + t(h − t a ) − 4cx 2 1 (p − t)2 + t(h − g) − g2 − δ 4cx 2 (20) + (1 − π) h − + t[π(h − t a ) + (1 − π)(h − g)] − [π(h − t a ) + (1 − π)(h − g)]2 + λπ(1 − π)β tt a − where λ is the costate. Therefore. diminishing the instantaneous utility of R. and π0 = π(0). however. (13). (6). Assume that they consider π as constant. This. A high spread t a − t. which allows a low value of t and insures the regulator a high instantaneous utility. t) 0 ≤ t a (τ).1. t a ) to inﬂuence the Bs than to regulate the NBs. the ﬁrms do not recognize that their individual decisions to choose the B or NB prediction technology inﬂuence π over time. since it has one more instrument (namely. 4. reduces the value of π and leads ˙ over time to a lower proportion of Bs. π. 1 + cx (3 − 2π) p−t . (11). The regulator’s optimization problem is then: max (t a . (7). (6).
λ) ∂π −ρτ 0 = lim e λ(τ). 0. (p − cx ) + [cx (3 − 2π) + 1](1 + cx )2 .0015. Unless and otherwise speciﬁed. p = 2. π. λ). since dφ/dπ > 0. before carrying a summary of sensitivity analysis and brieﬂy addressing the question of the bifurcations. the results presented pertain to the reference parameter conﬁguration cx = 0. λ). an optimal solution (π(τ). the proﬁt difference gNB$ − gB$ see Eq. t ∗ (π. If the prediction costs δ are sufﬁciently high. 1. we used Mathematica r to carry out a numerical analysis of the canonical system Eqs. λ(τ)) seems infeasible. will be negative at the current . (1 + 3cx )[βλ(1 − π) + K] (p − cx ) + 2cx (1 + cx )π[cx (βλ(1 + 3cx )(1 − π) + 1)] (1 + 3cx )[βπ(1 − π) + K] (21) (22) According to Pontriagin’s maximum principle.ˇ cı 112 Chirstophe Deissenberg and Pavel Sevˇ ´k The ﬁrst order maximization condition with respect to t a and t yields for the dynamically optimal values* t a∗ = t∗ = with 3 K = 1 − 6cx β2 λ2 (1 − π)2 π + 2cx [2 + 2βλ(1 − π)π] 2 + cx [3 − 2π + βλ(1 − π)(3 − 2βλ(1 − π)π)]. together with ρ = 1. Hence. We discuss ﬁrst the more interesting case of multiple equilibria. (18).6. the model can have either two stable equilibria separated by a socalled Skiba point. or a unique stable equilibrium. Ceteris paribus. (λ)) has to satisfy the state dynamics (6). t ∗ ) plus: ˙ λ = ρλ − τ→∞ ∂H(t a∗ (π. From the onset. 4.2. evaluated at (t a∗ . (6)–(23) in order to shed light on the optimal solution. These are robust.00025 that underlies Fig. (23) (24) Due to the highly nonlinear structure of the optimization problem an explicit derivation of the optimal (π(τ). δ = 0. R would like to face as many Bs as possible. let us stress the fundamental mechanism that underlies most of the results. with respect to sufﬁciently small parameter variations: The same qualitative insights would be obtained for a compact set of alternative parameter conﬁgurations. leading to a situation with a situation with unique equilibrium. Numerical Results Depending upon the parameter values.
Since for any given ya .3.98) and EU = (0. 4. R will use actions t a∗ .0075 ± 0. Bs and NBs coexist but make the same prediction errors and realize the same proﬁts ˙ and decisions. implementing the statically optimal actions t a$ . t $ since in that case gNB < gB . for δ sufﬁciently large. . (6) a decreasing number of believers. The π = 0 isocline. has three branches. Most interestingly. and EU = (πU . EL = (0.04 and −0. π∗ may be positive. However. δ = 0. although at the latter equilibrium all ﬁrms make perfect predications of the taxes. Two of ˙ these correspond to the situation where there is a homogenous population of NBs (π = 0) or of Bs (π = 1). p = 2. EM = (πM . 5. Alternatively. However. In this case. evaluated at EM . The middle equilibrium EM is an unstable spiral. which implies by Eq.93. by Eq.60). In this case. Then. λM ).00025. EU . respectively 0. see Eq. Multiple Equilibria 4. π < 0. 16. are locally stable. shown in dotted lines. for the reference parameter values. (23). (6). the economy will tend towards the ˙ ˙ already discussed equilibrium. the thus controlled system will globally converge to a situation where there are only believers.3. $ where nobody believes R. π = 0. t ∗ that insure an optimal ˙ compromise between obtaining as high a value of π and the optimal value π∗ of π may be negative. if for certain parameter constellations there exists a value of π that deﬁnes a threshold between the two domains of convergence. there will be a positive number ˙ of Bs at the equilibrium. it will not come as a surprise. λU ) with 0 < πM < πU < 1. The lower and upper equilibrium. Speciﬁcally. one towards π = 0 and one towards some πu > 0. cx = 0. the outcome is inefﬁcient. EM = (0. EL and EU . EU Paretodominates EL .0132i (characteristic for an unstable spiral).03 (indicating saddlepoint stability). β = 1.0015. y the speed of learning π ˙ is ﬁrst increasing and then decreasing in π. There are three equilibria. 2. The characteristic roots of the Jacobian of the Canonical system Eqs. (6). In fact. λL ).24. respectively are 0. is shown in Fig. Along the third branch.6. The λ = 0 isocline is shown as a continuous line. (6). t $ sufﬁces (albeit not optimally so) to insure that π increases locally. and ρ = 0.Cheaptalk Multiple Equilibria and Pareto 113 value of π. Instead. At this equilibrium. everybody correctly anticipates the tax level t $ . the regulator cannot implement the actions t a$ . if δ is small. the regulator is going to implement. Phase Diagram The phase diagram for the reference parameter values.1. π = 1.
12 0. Table 1. t 0.A. δ = 0.62 The Paretosuperiority of EU is illustrated in Table 1 (remember that at the equilibrium gN = gNB ).00025.0015 0. and ρ = 0. whenever the initial value π0 of π is inferior to the threshold value. it is rational for the regulator to let the number of believers go to zero.98 φ 2. and NBs.18 ta 1. cx = 0. cx = 0. R. although the resulting stationary equilibrium is associated with poor outcomes for all agents. g EU EL 1 0. λL ). and taxes at EU and EL .6. p = 0. 2. Proﬁts. 1. The situation captured in Fig.07 N. 2 (an unstable equilibrium surrounded by two stable ones) implies the existence of a threshold such that it is optimal for R to follow a policy leading in the longrun towards the stable equilibrium EL = (0. p = 2.25 2.0015.0025 β = 1 0.ˇ cı 114 Chirstophe Deissenberg and Pavel Sevˇ ´k Figure 2.6. welfare. Phase diagram of the canonical system. .0025. Bs.0015. π0 < πs in other words. 0. when the initial fraction of Bs is less than πs . β = 1.6 p = 2 δ = 0.
t (that is equivalently: for any given value of gB − gNB ). the threshold value πs is typically close to. the costs of building a large fraction of Bs are compensated in the longrun by accrued beneﬁts. As already mentioned. The rationale for steering the inferior equilibrium EL starting from any π0 > πs lies in the following fact: The “deviation costs” that the regulator will have to accept initially in order to steer the system towards the more favorable EU are so high that the net beneﬁt of going to EU would be less than the one obtained by letting the system optimally converge towards the inferior EL . There are two concurrent reasons for this: Once the π increases. However. to a situation where there is in permanence. Note that. 5. when a threshold exists and for any π0 < 1. πM = πs ) and the threshold takes the particularly challenging form of a Skiba point. if the regulator’s trustworthiness is low from the onset. the dynamics of the canonical system Eqs. Therefore. No local analytical expression exists that characterizes a Skiba point. the regulator has to deviate more and more from t $ (π) in order to make believing more proﬁtable than not believing. π decreases. For a reference article on thresholds and Skiba points in economic models. This was not done in this paper. (2004).. it is never optimal for the regulator to follow a policy that would ultimately insure that all ﬁrms are believers. For any given actions t a . its interest is to exploit its initial credibility for making shortterm gains. that is.. see Deissenberg et al. The existence of the threshold is closely associated with the properties of the πdynamics Eq. Thus. a positive fraction of believers. the regulator’s optimal policy leads to the stable equilibrium EU . (6). If the initial conﬁdence is high. π → 1.e. the ˙ discounted beneﬁts of increasing π decrease. (6)– (23) at the unstable middle equilibrium EM form a spiral. Sensitivity Analysis and Bifurcations Towards a Unique Equilibrium Numerical analyses show that an increase in the ﬁrms’ ﬂexibility (i. an increase in β) shifts the stable equilibrium EU to the right and the unstable .e. the value of π will be smaller for small ˙ or large values of π than for intermediate ones. On the other hand. Thus. The above results suggest that the policy of an environmental regulator maybe very different depending on the fraction of the population that trusts it when it initiates a new policy. Skiba points must be computed numerically.Cheaptalk Multiple Equilibria and Pareto 115 Whenever π0 < πs on the other hand. although this ultimately leads to an inferior situation where nobody trusts the regulator. but distinct from πM (i.
favoring the convergence to equilibrium with a large number of believers. The scenario with two stable equilibria separated by a Skiba point can change qualitatively. The only remaining equilibrium is the one where everybody is a believer. Indeed. The present paper and previous work strongly suggest that. EL . in the longrun. For small values of β and/or δ and/or for large values of ρ. the equilibrium EU is characterized by a large proportion of Bs and thus insures high stationary gains to all actors. shifts EM to the left and EU to the right: It is easier for R to incite the ﬁrms to use the B strategy. where standard rational behavior by all agents leads to a Paretoinferior outcome. capital levies. An increase of the prediction cost δ ﬁnally. such a situation is common for instance in monetary economics. As a result. patent protection. for β large. On the other hand. Only one (stable) equilibrium remains. R does not have to make Bs much better off than NBs to insure a fast reaction. Conclusion The starting point of this paper is a static situation. 6.ˇ cı 116 Chirstophe Deissenberg and Pavel Sevˇ ´k equilibrium EM as well as the Skiba point πs to the left. although there is no fundamental conﬂict of interest either among the different ﬁrms or between the ﬁrms and the regulator. if the ﬁrms react quickly to proﬁt differences. Paretoimprove the economic . an increase of the discount factor ρ has the opposite effect. Thus. since the time and efforts needed now for an additional increase in the stock of Bs weighs heavily compared to the expected future beneﬁts. Not surprisingly. and although the ﬁrms are perfectly informed and perfectly predict the regulator’s actions. when the parameter values are sufﬁciently altered. only if the initial proportion of Bs is high. the high speed of reaction of the ﬁrms means that the accumulated costs incurred by the regulator on the way to EU will be small and easily compensated by the gains around and at EU . Moreover. in disaster relief. Impatient regulators want to build up conﬁdence. the equilibrium EU collides with EM and disappears. the existence of believers who take the announcements of a macroeconomic regulator at face value typically. in such a situation. for large values of δ trying to outguess the regulator is never optimal. the regulator will follow a policy that leads to the upper equilibrium EU even if the initial proportion of Bs is small. In other words. the proportion of believers tends to zero. The resulting equilibrium value πU will be relatively low. or default on debt. In addition to the environmental problem studied here.
that the single agents do not anticipate the collative impact of their individual decisions. In this case. when the ﬁrms react fast enough to the difference in proﬁts of Bs and NBs. Investment subsidies and timeconsistent environmental policy. in the model. by an appropriate choice of actions and is interested not only in its instantaneous but also in its future losses. A particularly interesting case arise. to believe or to predict. To introduce dynamics. Barro. when a stable Paretosuperior equilibrium with a positive fraction of believers coexists with the (likewise stable) perfect prediction but inferior equilibrium where all ﬁrms are nonbelievers. it requires a relatively benevolent regulator. This history dependence of the solution suggests a promising metaanalysis of the importance of good reputation based on the previous good governance. distinct decisionmaking issues. we assumed that the proportion of believers and nonbelievers in the economy changes over time as a function of the difference in proﬁts for the two types of ﬁrms. It is supposed that the regulator recognizes its ability to inﬂuence the wordofmonth dynamics. of these two restrictive hypotheses. whose objective function does not differ too drastically from the one of the private agents.Cheaptalk Multiple Equilibria and Pareto 117 outcome. These results of this paper are obtained under the assumption that the nonbelievers do not exploit the fact that the regulator announced tax differs from the equilibrium announcement for the static game. L and C Perroni (1999). . We leave to future research. It is shown that. discretion and reputation in a model of monetary policy. 101–122. in situations where the same macroeconomic policymaker is confronted with recurrent. the optimal policy of the regulator (to converge towards the one or the other equilibrium) depends upon the initial proportion of believers in the population. Journal of Monetary Economics 12. a sufﬁciently patient regulator may use incorrect announcements to Paretoimprove the economic outcome compared to the situation where all ﬁrms are nonbelievers that perfectly predict the regulator’s actions. R and D Gordon (1983). Coventry: University of Warwick. Rules. and thus. Likewise. Discussion Paper. the nontrivial task of exploring the consequences of a relaxation. The change obeys a wordofmouth process driven by the relative success of the one or the other private strategy. This property hinges crucially on the fact that the private sector is atomistic. 1–35. Moreover. References Abrego. the nonbelievers do not attempt to predict the dynamics of the number of believers.
151–164. On the dynamics of word of mouth learning with and without anticipations.ˇ cı 118 Chirstophe Deissenberg and Pavel Sevˇ ´k Batabyal. 533–555. Simaan. Deissenberg. W Barnett. 530–550. and Learning. Vallee. Journal of Environmental Economics and Management 37. W Semmler and F Wirl (2004). H. C Deissenberg and T Basar (1999).K. Comparison of different Stackelberg solutions in a deterministic dynamic pollution control problem. Gersbach. Journal of Political Economy 85. C Deissenberg and G Feichtinger (eds. Consistency and optimality in a dynamic game of pollution control I: competition. Economic Journal 110. Amsterdam: Kluwer. Dawid. T. Petrakis. Batabyal. 2–12. Additional aspects of the Stackelberg strategy in nonzerosum games. H (1999). Amsterdam: Elsevier. On the efﬁciencyeffects of private (dis)trust in the government. 247–266. France: University of Nantes. 273–295. Critical issues concerning central bank independence. M and JB Cruz (1973b). Deissenberg. LENC3E. Resource and Energy Economics 25. 14. Annals of Operations Research 89. On the Stackelberg strategy in nonzerosum games. Journal of Monetary Economics 39. Vallee. G Feichtinger. Multiple equilibria. C Deissenberg and P Sevˇ´k (2005). A (1996b). 1457–1479. A (1996a). Discussion Paper. Multiagents Economies. Annals of Operations Research 88. Dawid. Paretoimproving cheating in a monetary policy game. H and C Deissenberg (2005). Dijkstra. Time consistency and investment incentives in environmental policy. 99–112. Location decisions of a polluting ﬁrm and the time consistency of environmental policy. C. J and K Sigmund (1998). 197–214. Vol. Cambridge: Cambridge University Press.). F and E Prescott (1977). Consistency and optimality in a dynamic game of pollution control II: monopoly.U. C and F Alvarez Gonzalez (2002). history dependence. C123–C138. Optimal open loop cheating in dynamic reversed linearquadratic Stackelberg games. gullibility and welfare in an environmental taxation game. ˇ cı Dawid. and global dynamics in intertemporal optimization models. 175–192. Time inconsistency in environmental policy: tax earmarking as a commitment solution. Hofbauer. 205–220. Discussion Paper 02/12. B (2002). In Economic Complexity: NonLinear Dynamics. Journal of Optimization Theory and Applications 11. T (1998). University of Nottingham. 315–330. 1–30. Journal of Economic Behavior and Organization 57. M and JB Cruz (1973a). . L and T Renstrom (2000).: School of Economics. Simaan. Cheap talk. Kydland. 613–626. Environmental and Resource Economics 8. Journal of Economic Dynamics and Control 26. 473–491.). Rules rather than discretion: the inconsistency of optimal plans. Journal of Optimization Theory and Applications 11. McCallum. B (1997). 91–122. E and A Xepapadeas (2003). Environmental and Resource Economics 8. Markets and regulatory holdup problems. H and A Glazer (1999). ISETE. Marsiliani. A Haurie and G Zaccour (eds. In: Dynamic Games: Theory and Applications. Evolutionary Games and Population Dynamics.
Chapter 4 Modeling Business Organization .
This page intentionally left blank .
business process. human capital. including disciplines like mechanical manufacturing. redesign and integration of information. and software engineering. and Democritus University of Thrace. The paper also discusses EM and integration challenges. business process reengineering. Greece 1.. 121 . processing of data and knowledge. analysis. the following two deﬁnitions (Vernadat. Introduction Enterprise modeling (EM). • Enterprise integration consists of breaking down organizational barriers to improve synergy within the enterprise so that business goals are achieved in a more productive way. Enterprise integration (EI).Topic 1 Enterprise Modeling and Integration: Review and New Directions Nikitas Spiros Koutsoukis Hellenic Open University. The main objective of this paper is to review the literature on EM and integration and to provide a concise description of related topics. and rationale as well as current tools and methodologies for deeper analysis. 1996) are representative for current practices: • Enterprise modeling is concerned with the explicit representation of knowledge regarding the enterprise in terms models. Enterprise modeling and Enterprise integration still are still ongoing ﬁelds of research where the background and interests of contributors in these areas are heterogeneous. software applications and information systems regarding the enterprise and its external environment to achieve advancements in organization performance (Soon et al. and Enterprise integration modeling (EIM) are concerned with the deﬁnition. 1997). However.
3. • Enterprise models must either be represented in the mind of human beings or in computers (Christensen et al. Depending on the purpose. EM has been partly addressed by several approaches to the same problem of making the enterprise more efﬁcient. we will mention some important features of enterprise models instead. goals. behavior. 1996). An enterprise can be perceived as “a set of concurrent processes executed on the enterprise means (resources or functional entities) according to the enterprise objectives and subject to business or external constraints” (Vernadat. as an extension of the standard for the exchange of product model data (STEP) product modeling approach. . and constraints of a business. • All enterprise models are built with a particular purpose in mind. contributes to integrating design (which focuses on product information) and construction (focusing on activity and resource information) (Kemmerer. All these approaches have included the construction of symbolic representations of aspects of the real world. 1999). or other enterprise. resources. 1993). which may be redundant. processes. people. 1993). using various notations and techniques that commonly could be denoted “conceptual modeling” (Solvberg and Kung. Model and Enterprise Terminology Model and enterprise are the two key terms that appear throughout this paper. information. and operation (Fox. The role of an enterprise model is to achieve modeldriven enterprise design. 1992). A model can be deﬁned as “a structure that a system can use to simulate or anticipate the behaviour of something else” (Hoyte. 1995). government.122 Nikitas Spiros Koutsoukis 2. activities. the model will emphasize different types of information at different levels of detail.. analysis. • An enterprise model provides a computational representation of the structure. This is a generalization of a deﬁnition given by Minsky (1968): “a model is a useful representation of some subject. Enterprise Modeling During the last decades. It is a (more or less formal) abstraction of a reality (or universe of discourse) expressed in terms of some formalism (or language) deﬁned by modeling constructs for the purpose of the user”. The idea of EM. Instead of attempting to form a deﬁnition. It can be both descriptive and deﬁnitionalspanning what is and what should be.
and the model may in fact become a part of the integrated enterprise itself. tools.) (Rumbaugh. the question aroused on how to integrate different departments in an enterprise. IDEF2 for simulation modeling. the need for better application design and development methods became evident (Martin. conglomerate industries took a more developmental approach. and IDEF5 for ontology modeling. (2) Generic Artiﬁcial Intelligence (GRAI) integrated methodology: It is a methodology which uses the GRAI grid. Since 1990. 1988). developed by the . Processes are made of functional operations and functional operations are executed by functional entities. when database technology was applied on a larger scale.e. In order to design the ﬂexible information and communication infrastructure needed. which was only focusing on the product data information handling. IDEF4 for object modeling. The trend now is away from the ﬁrst single perspective enterprise models. 1982. and the research area of EI emerged. A narrow focus on a single department or task during development often caused operational problems across application boundaries. The need for making more complete models aroused. During the 1990s. to improve cooperation and logistics. Different modeling methodologies. Objects ﬂowing through the enterprise can be information entities (data) as well as material objects (parts.. to more complete enterprise models covering both informational and human aspects of the organization (Fox. major R&D programs for computerintegrated manufacturing (CIM) have resulted in the following tools/methods or architectures for EM: (1) IDEF modeling tools: These are complementary modeling tools developed by the integrated computer aided manufacturing (ICAM) project. 1992). IDEF1 for information modeling. 1993). suppliers and customers. human and technical agents) to process enterprise objects. IDEF3 for business process modeling. were developed. and how to connect the enterprise with its external environment i.Enterprise Modeling and Integration 123 In the 1980s and during the early 1990s. During the 1990s. IDEF models are mainly used for requirements deﬁnitions. like the structured analysis and design technique (SADT). GRAI nets. different enterprise models are useful. 1993). introducing IDEF0 for functional modeling.. The problem in EM regarding the manufacturing of EI and control was to ensure timely execution of business processes on the functional entities of the enterprise (i. Enterprise modeling is clearly a prerequisite for EI as all things to be integrated and coordinated need to modeled to some extent (Petrie.e. Sager. products. etc. integrating operations across departmental or functional boundaries.
identiﬁed and deﬁned in this reference methodology. Ladet and Vernadat. and product delivery time.g. This ﬁnality has a direct inﬂuence on the deﬁnition of modeling method. (5) Generalized enterprise reference architecture and methodology (GERAM): It is a generalization of CIMOSA. consist of methodological guidelines for enterprise engineering (from PERA and GIM). another goal is to model relevant business process and enterprise objects concerned with business integration. The prime goal of an EM approach is to support analysis of an enterprise rather than to model the entire enterprise. and PERA. (3) Computer integrated manufacturing open system architecture (CIMOSA): CIMOSA has been developed to assist enterprises adapt to internal and external environmental changes so as to face global competition and improve themselves regarding product prices. i. In addition. The combination is called GRAI integrated methodology (GIM). simulation. executed by an open set of functional entities (or agents) to achieve business objectives (as set by management). Enterprise modeling and integration is essentially a matter of modeling and integrating these process and agents (Kosanke and Nell. resource view. CIMOSA constructs).124 Nikitas Spiros Koutsoukis GRAI laboratories of France and includes modeling IDEF0. product quality.e. • reference methods for enterprise engineering of existing or new parts of the enterprise both in terms of analysis. and group technology to model the decision making processes of the enterprise. GIM. The general concepts. and decisionmaking and • a model in order to manage efﬁciently the enterprise operations. Modeling method relies upon a speciﬁc purpose deﬁning its ﬁnality. information view. 1995). even though this is theoretically possible. Vernadat. and organization view (Beeckman 1993. the aim of EM is to provide: • a better perspective of the enterprise structure and operations. (4) Purdue enterprise reference architecture (PERA): It is a methodology developed at Purdue University in 1989 which focuses on separating the humanbased functions of an enterprise from the manufacturing and information functions (Williams. According to Vernadat (1996). 1997. .. 1996).. and model views modeling (e. 1994). We adopt the position that any enterprise is made of a large collection of concurrent business processes. It provides four different types of views of the enterprise: modeling function view. lifecycle guidelines (from PERA). the goal of the modeler.
at the right place. 1995). different operating systems. at the right time. and that enables them to act quickly and decisively. In today’s enterprise environment. creating enterprise knowledge and knowhow for all. This is a step towards interenterprise integration overcoming barriers and provides connection between enterprise environments by linking their operations to create extended and virtual enterprises. to the right people. In spite of the different understandings of the scope of integration in CIM.Enterprise Modeling and Integration 125 Whereas. 1996). it was always understood for information integration across or at least parts of the enterprise. at the right time (Norrie et al. The intelligent delivery of information must obtain maturity levels consistent with enterprise goals and must employ the appropriate technology to facilitate this purpose. is a crucial requirement of enterprises that intend to grow and remain proﬁtable. and the legacy systems. Information that is available to the right people. 4. at the right place. (2) Provide the right information at the right place: The need for implementing information sharing systems and integration platforms to support information transaction across heterogeneous environments regarding hardware. efﬁcient management of processes. users must consolidate data and deliver it as useful information and knowledge that empowers the right business decisions.. Information integration essentially consists of providing the right information. the main motivations for EM are: • • • • • • better understanding of the structure and functions of the enterprise. Enterprise Integration Enterprise integration was discussed since the early days of computers in industry and especially in the manufacturing industry with CIM as the acronym for operations integration (Vernadat. and the decisionmaking process. process management issues. enterprise engineering and EI itself. . managing complexity of operations. With this understanding the different needs in EI can be identiﬁed: (1) Identify the right information to gain knowledge: Elicit knowledge from gathering information from the intraenterprise activities and use models to represent this knowledge regarding product attributes and information.
especially for networked companies and large manufacturing enterprises is becoming a reality. a software application. distributed computing environments (such as OSF/DCE and MS DCOM. In IIS.. The organizational matter is partly understood so far whereas the technological matter lies with the major advances over the last decade.. remotely located. which are connected by a communication system. The aims of EI are: • to enable communication among the various functional entities of the enterprise. Enterprise integration. technology. The main motivation for EI is to make possible true interoperability among heterogeneous. object request brokers (ORB such as OMG/CORBA). object technology. and now Java to enterprise edition and execution environments (J2EE). systems within. . legislation. (4) Coordinate business processes: This requires decisional capabilities and knowhow within the enterprise for realtime decision support and evaluation of operational alternatives based on business process modeling. or a human (Vernadat. The issue is that EI is both an organizational and a technological matter.126 Nikitas Spiros Koutsoukis (3) Update the information in real time to reﬂect the actual state of the enterprise operation: enterprises must be able to adopt changes regarding both their external environment i.e. 2002). production operations in order to protect the enterprise goal and objective from becoming obsolete. distributed databases. data exchange formats. Internet. an enterprise is composed of functional entities. Some important projects developing integrating infrastructure (IIS) technology for manufacturing environments include: • CIMOSA IIS: The CIMOSA integrated infrastructure (IIS) enables business process control in a heterogeneous IT environment. and their internal environment i. 1998). Functional entities can be a machine. • to provide interoperability of IT applications (plugandplay approach) and • to facilitate coordination of functional entities for executing business processes so that they synergistically fulﬁll enterprise goals. or outside the enterprise in an independent IT vendor environment.e. • AIT IP: This is an integration platform developed as an AIT project and based on the CIMOSA IIS concepts (Waite. It concerns mostly computer communications technology.
Whatever these terms mean. In this context. 1997). 2004). Hunt (1991) suggests that EIM is most beneﬁcial to large industrial and government organizations. • NIIIP (National industrial information infrastructure protocols) (Bolton et al.Enterprise Modeling and Integration 127 • OPAL: This is also an AIT project that has proved that EI can be achieved in design and manufacturing environments using existing ICT solutions (Bueno. 1998). we have to seek solutions to solve industry problems. and technologically is a fastchanging world and most if not all enterprises. economically. which sometimes happen after the fact. A Framework Setting for EI Modeling Enterprise integration is a multifaceted task employed for addressing challenges stemming from various aspects of the globalized economy. are continually challenged to evolve as rapidly as the world around them evolves. as opposed to a semantically accurate or complete view of the enterprise (Porter. The emphasis of current EIM research on large organizations is risky. 5. the value chain’s primary and secondary activities are used simply as a wellknown functionaleconomic view of the enterprise in order to communicate coherently. However. and most frequently a combination of both (Giachetti. . Enterprise wide modeling is difﬁcult in large organizations for two reasons: there is seldom any consensus on exactly how the organization functions and the modeling task may be so big that by the time any realistic model is built. 1985). The EIM research so far has been mostly performed in large research organizations. A few pilot studies have also been done in large organizations. politically. lack of established standards.. the underlying domain may have changed. The EI modeling research community is growing at a very rapid pace (Fox and Gruninger. Enterprise integration allows an enterprise to function coherently as a whole and to shift its primary value chain activities in tandem with its secondary value chain activities according to the challenges or requirements posed by the enterprise environment or goals set by the enterprise itself. The contemporary business world. and the rapid and unstable growth of the basic technology with a lack of commonly supported global strategy. from small to large. 2003). and this applies to EM and integration. EI suffers from inherent complexity of the ﬁeld. and may even have hampered the growth of the ﬁeld.
procurement. marketing. and R&D. information and related analyses. 1990). For instance. like the corecompetency theory. the key requirement is for interoperability of systems. upon which integration is founded. 2005). Enterprise entities refer to functionoriented enterprise constructs that are typically goaloriented. In the economic or business dimension. Because these dimensions can easily be extended throughout the enterprise regardless of the enterprise functions. hardware. they are frequently used as fundamental reference points for EI efforts (Lankhorst. have been used successfully to introduce enterprisewide. built and subsequently achieved. Entities form hierarchical views of the enterprise. or economic valueadded models like the activitybased costing (ABC) are also used successfully to integrate across the enterprise (Kaplan and Norton. the processes dimension. Prahalad and Hamel. 1992. From a functional perspective. and software. the economic activity or business dimension and the technology or technical dimension. Each of these dimensions has been used successfully for seeking out and achieving EI. balancedscore cards. accounting. data warehousing and enterprise resource planning (ERP). such as administration. which is mainly the realm of corporate information systems. production. sales. and through which enterprise integration is achieved.128 Nikitas Spiros Koutsoukis The multiple facets of EI refer to the multiple dimensions. in the data and information dimension. entities can also be organized in terms of authority or delegation . business process management (BPM) or reengineering (BPR) have been successfully applied in developing and reﬁning old. uniﬁed views of data. business strategy models. The success with which many multinational enterprises operate shows that certain milestones have been reached in this direction. and constituents where integration is primarily aimed at. and frequently tend to match formal or informal structures as in most organizational charts. entities refer to key organizational activities or divisions. As is well known. Some of the most commonly used integration dimensions are the following: • • • • the data and information dimension. In the processes dimension. and new processes which are more integrated and efﬁcient across the enterprise. In the technology or technical dimension. Enterprise integration dimensions represent infrastructural views. collectively contributing to achieving the enterprise mission. entities.
the bird’seye view makes it easier to view the enterprise as a single entity. horizontally or vertically. Some of the main pitfalls in EI lie at the constituents’ level.e.. The most prevalent constituent groups are the following: • Stakeholders • Decisionmakers • Employees which are internal to the enterprise. the . constituents have a bird’seye view of the enterprise and ﬁnd it easier to match organizational outcomes to personal interest i. The sole purpose for having such functionaloriented constructs is to decompose the enterprise into smaller. geographically based activities and so forth. communication layers. The contribution of each entity to the enterprise objectives is thus readily identiﬁed and narrowed down in scope. This is because the constituents are. We consider the following fundamental dimensions as the main enterprise coordinates for positioning the constituents: • level of authority and • immediacy of contribution to goals (from strategic to operational. or abstract to speciﬁc) The higher the level of authority. At this level. Enterprise constituents or constituent groups refer to the human factor which is ultimately responsible for setting out. abstract to speciﬁc) which can also be inverted to • level of responsibility and • level of activity (from strategic to operational. and • Suppliers and • Clients which are external to the enterprise. autonomous or semiautonomous agents acting primarily in their own interest. which is deﬁned mainly by their position in enterprise coordinates terms. more speciﬁc. On the other hand. It is only through the use of one or more dimensions that the entities are recombined in an enterprisewide integration model.Enterprise Modeling and Integration 129 layers. goalcontribution layers. and more manageable parts. in one way or another. applying and achieving EI. the more strategic or abstract are the actions that the constituents take in enterprise terms.
or otherwise a uniﬁed perception of what the enterprise stands for. with detailed surroundings that are consolidated further away from the employee’s position. At this level. and hence organizational outcomes are less easily matched to personal interests.” to use Simon’s term (Simon. codes of practice. entities or constituents considered above. the model emphasizes improvements that can lead to goal attainment or “satisfying. Mission statements. For instance. without loss of context. including procedural units. or simulators of how the integrated enterprise operates. 1957). such as the dimensions. Inconsistencies in constituents’ perspectives are being addressed mainly through leadership and human resource practices. It follows that inconsistencies in the constituents’ perspectives have the ability to cancel out some of the main beneﬁts that the other two facets bring forward. a strong requirement is surfacing for developing new EI modeling frameworks. Thus. which aim to put in place an organizational culture. constituents ﬁnd it harder to consider a singleentity view of the organization. decisionmaking points. (b) Use of hybrid modeling constructs: EI models should be able to utilize interchangeably modeling constructs for representing any type of enterprise activity unit. . In goalseeking mode. and team building exercises are some of the most frequently used tools for scoping and setting out a uniﬁed. stakeholders will most likely need access to a consolidated view of the integrated enterprise whereas employees are most likely to require a positioncentred view of the enterprise. namely the uniﬁed view of the enterprise as single entity and the immediacy with which operational activities contribute to strategic goals. constituent view of the enterprise. tangible and intangible processes. which are able to incorporate heterogeneous modeling constructs. (d) Ability to interchange between declarative and goalseeking states of the integration models: when in declarative mode.130 Nikitas Spiros Koutsoukis lower the level of authority the more operational or speciﬁc the actions that the constituents take in enterprise terms. (c) Ability to consolidate or analyze to successive levels of detail: this is particularly useful for communicating enterprise structures throughout multiple echelons and hierarchy levels. physical or logical units. etc. inputs or outputs. models work simply as formal structures. We ﬁnd that some of the key requirements are set out as follows: (a) Interchangeable dimensions: EI models should be able to shift between the dimensions considered above. its values and its practices among other things.
but such a discussion would be beyond the scope of this paper. Unfortunately. 6. (c) Complexity: EI is a neverending process and parameters such as the legacy systems within large enterprises resisting integration have to be seriously considered. A big issue is that both EM and EI are nearly totally ignored by SME’s and there is still a long way to go before SME’s will master these techniques on their own and in this case.. . Nowadays. we also ﬁnd that these requirements are sufﬁce to set out an agenda for further work in the domain of EI and EI modeling. and data exchange formats (Chen and Vernadat. Nevertheless. due to limited training schemes experienced users are scarce. the worldwide web and Internet. There is the need for a global vision with a modular implementation approach. It is very important that management be part of the project and be fully committed to it. we ﬁnd that EM and integration are becoming familiar within the large enterprises that seem to understand the need of acquiring their concept. EM and integration mostly rely upon advanced computer networks. Quite a large number of people will be involved and high levels of commitment on behalf of the enterprise are required. 2002.Enterprise Modeling and Integration 131 Of course. Martin et al. 2004). (e) Skilled people: The human factor concerning highskilled employees involved in the projects is vital for success. the enthusiasm of the research community and the industry remains intact trying to overcome problems and difﬁculties by creating tools and adopting methodology that will make a difference. such projects cannot be completed but they require time. Still. these requirements can be analyzed further and in detail. they have to quickly catch on the technology and deﬁne which tools they have to use. Discussion At the current state of technology. in accordance with Williams (1994) we also ﬁnd that the reason for the moderate success and rather not so extended use of EM and integration initiatives include: (a) Cost: It is commonly accepted that such efforts are very expensive and it is rather difﬁcult to argue on the beneﬁts of the outcome from the beginning of each project. However. (b) Project size and duration: Indeed. (d) Management support: Management facing high cost may resent to support such projects. integration platforms as well as ERPs.
132 Nikitas Spiros Koutsoukis
Some researchers like Vernadat, Kosanke, Tolone, and Zeigler consider that this technology would better penetrate and serve any kind of enterprises if: • They identify users and a speciﬁc EIM problem. An example of an EIM problem is how to produce highquality products at low cost. • They make a prototype EIM that addresses the problem, contributing in the decisionmaking process based on a simple initial model accepted by the majority of the users (Tolone et al., 2002). • There was a standard vision on what EM really depend and there was an international consensus on the underlying concept for the beneﬁt of the business users (Kosanke et al., 2000). • There was a standard, useroriented, interface in the form of a uniﬁed enterprise modeling language (UEML) based on the previous consensus to be available on all commercial modeling tools. • There were real EM and simulation tools commercially available in the market place taking into account function, information, resource, organization, and ﬁnancial aspects of an enterprise including human aspects, exception handling, and process coordination. Simulation tools need to be conﬁgurable, distributed, and agentbased simulation tools (Vernadat and Zeigler, 2000). However, it is rather unlikely that a single EIM that will be equally applicable to all organizations. Each organization will have to develop its own strategy and philosophy (Fox and Huang, 2005). Despite the different approaches (frameworks) of EIM, we always have to bear in mind the following functional requirements of an EIM framework (Patankar and Adiga, 1995): (1) Decision orientation. An EI model based on any particular concept of an EM framework should support the decisionmaking process considering thoroughly all factors of the external and internal environment affecting the enterprise. (2) Completeness. EM frameworks should be able to represent all aspects of the enterprise such as business units, business data, information systems, and external partners i.e., suppliers, subcontractors, and customers. (3) Appropriateness. EIM frameworks should be representing the enterprise at appropriate levels by providing models that can be understood by people with heterogeneous backgrounds.
Enterprise Modeling and Integration 133
(4) Permanence. The EI model should remain consistent with the enterprise’s overall objectives even when the enterprise is in the process of changing due to management changes or other. Models should remain true and valid within the enterprise. Finally, the EIM frameworks should be simply structured, ﬂexible, modular, and generic to ensure that the framework is applicable across different manufacturing operations and enterprises. In addition, the framework should relate to both manual and automatic processes. This will provide the basis for proper and complete integration. Modeling and integration always mean effort in time and costs. Nevertheless, to handle complexity in business environments we see the abovementioned elements moving from “luxury articles” to “everyday necessities”. These are some of the challenges to be solved in future to build extended interoperable manufacturing enterprises as there is unlikely to be a single EIM methodology, which will be equally applicable to all organizations (Li and Williams, 2004). Each organization will have to develop its own strategy and philosophy. However, EIM promises to be a revolutionary technology. Finally, the success of EI efforts lies in the ability to iron out incoherencies and lack coordination between the multiple facets considered above. This is a complex and challenging task, and lies at the heart for EI modeling. Many of the multiple modeling frameworks considered above, like IDEF, CIMOSA, or GERAM for example, have stemmed from integration efforts in the areas of manufacturing or engineering, and while successful in their own right, these frameworks are still lagging behind when it comes to integrating across the multiple dimensions, entities and constituents of a contemporary enterprise that we have identiﬁed above. 7. Epilogue This paper has provided a description of the main issues on EM and EI. It has become quite evident from the literature that both the issues are complex and require knowledge on various disciplines particularly on organizational management, engineering, and computer science. As such, it can be generalized that EM and integration mainly revolves around organizational modeling from which the analysis, design, implementation and integration of the organization management, business processes, software applications, and physical computer systems within an enterprise are supported.
134 Nikitas Spiros Koutsoukis
We believe that although we are rich in technology to solve isolated business problems (e.g., product scheduling, quality control, production planning, accounting, marketing, etc.), many other problems still reside within the enterprise. Solving these problems require an integrated approach to problem solving. This integration must start at the representation level and the entire enterprise together with its structure, functionality, goals, objectives, constraints, people, processes, products must be modeled. The solution will probably be the outcome of an EIM framework. References
Beeckman, D (1993). CIMOSA: computer integrated manufacturing open system architecture. International Journal of Computer Integrated Manufacturing 2(2), 94–105. Bolton, R, A Dewey, A Goldschmidt and P Horstmann (1997). NIIIP — the national industrial information infrastructure protocols. In Enterprise Engineering and Integration: Building International Consensus, K Kosanke and JG Nell (eds.), Berlin: SpringerVerlag. pp. 293–306. Bueno, R (1998). Integrated information and process management in manufacturing engineering. Proc. of the 9th IFAC Symposium on Information Control in Manufacturing (INCOM’98), pp. 109–112. France: NancyMetz, May 24–26. Chen, D and FVernadat (2002). Enterprise interoperability: a standardisation view. ICEIMT 2002, 273–282. Christensen, LC, BW Johansen, N Midjo, J Onarheim, TG Syvertsen and T Totland (1995). Enterprise ModelingPractices and Perspectives. In Proc. of 9th ASME Engineering Database Symposium, Rangan (ed.), Boston: Massachusetts, 1171–1181. Fox, MS (1993). Issues in Enterprise Modeling. Proc. of the IEEE Conference on Systems. Man and Cybernetics, France: Le Touquet, 83–98. Fox, MS and M Gruninger (2003). The logic of enterprise modeling, modeling and methodologies for enterprise integration. In Modelling and Methodologies of Enterprise Integration, P Bernus and L Nemes (eds.), Cornwall, Great Britain: Chapman and Hall, 83–98. Fox, MS and J Huang (2005). Knowledge provenance in enterprise information. International Journal of Production Research 43(20), 4471–4492. Giachetti, RE (2004). A framework to review the information integration of the enterprise. International Journal of Production Research 42(6), 1147–1166. Hoyte, J (1992). Digital models in engineering — a study on why and how engineers build and operate digital models for decision support. PHD Thesis, University of Trondheim, Norway. Hunt, DV (1991). The Integration of CALS, CE, TQM, PDES, RAMP, and CIM, Enterprise Integration Sourcebook. San Diego, CA: Academic Press. Kaplan, RS and DP Norton (1992). The balanced scorecard: measures that drive performance. Harvard Business Review 70(1), 71–79. Kemmerer, J (1999). STEP: The Grand Experience, NIST Special Publication SP939, Washington, C 20402, USA: US Government Printing Ofﬁce. Kosanke, K and JG NelI (1997). Enterprise Engineering and Integration: Building International Consensus. Berlin: SpringerVerlag.
Enterprise Modeling and Integration 135 Kosanke, K, F Vernadat and M Zelm (2000). Enterprise engineering and integration in the global environment. Advanced Network Enterprises 2000, Berlin, 61–70. Ladet, P and FB Vernadat (1995). Integrated Manufacturing Systems Engineering. London: Chapman & Hall. Lankhorst, M (2005). Enterprise Architecture at Work. Berlin: SpringerVerlag. Li, H and TJ Williams (2004). A Vision of Enterprise Integration Considerations. Toronto, Canada: ICEIMT 2004. Martin, J (1982). Strategic DataPlanning Methodologies. Englewood Cliffs, New York: PrenticeHall. Martin, J, E Robertson and J Springer (2004). Architectural Principles for Enterprise Frameworks: Guidance for Interoperability. Toronto, Canada: ICEIMT 2004. Minsky, M (1968). Semantic Information Processing. Cambridge, MA: The MIT Press. Norrie, M, M Wunderli, R Montau, U Leonhardt, W Schaad and HJ Schek (1995). Coordination approaches for CIM. In Integrated Manufacturing Systems Engineering. P Ladet and FB Vernadat (eds.), London: Chapman & Hall pp. 221–235. Patankar, AK, and S Adiga (1995). Enterprise integration modeling: a review of theory and practice. Computer Integrated manufacturing Systems 8(1), 21–34. Petrie, CJ (1992). Enterprise Integration Modeling. Cambridge, MA: The MIT Press. Porter, M (1985). Competitive Advantage. New York: Free Press. Prahalad, CK and G Hamel (1990). The core competence of the organisation. Harvard Business Review 68(3), 79–91. Rumbaugh, J (1993). Objects in the constitution — enterprise modeling. Journal on Object Oriented Programming, January issue, 18–24. Sager, MT (1988). Data concerned enterprise modeling methodologies — a study of practice and potential. The Australian Computer Journal 20(3), 59–67. Simon, H (1957). Models of Man: Social and Rational. New York: Wiley. Solvberg, A and DC Kung (1993). Information Systems Engineering — An Introduction. New York: SpringerVerlag. Soon, HL, J Neal and A Pennington (1997). Enterprise modeling and integration: taxonomy of seven key aspects. Computers in Industry 34, 339–359. Tolone, W, B Chu, G.J Ahn, R Wilhelm and JE Sims (2002). Challenges to MultiEnterprise Integration. ICEIMT 2002, 205–216. Vernadat, FB (1996). Enterprise Modeling and Integration: Principles and Applications. London: Chapman & Hall. Vernadat, F (2002). Enterprise modeling and integration. ICEIMT 2002, 25–33. Vernadat, FB and BP Zeigler (2000). New simulation requirements for modelbased Enterprise Engineering. Proc. of IFAC/IEEE/INRIA Internal Conference on Manufacturing Control and Production Logistics (MCPL’2000). Grenoble, 4–7 July 2000. CDROM. Waite, EJ (1998). Advanced Information Technology for Design and Manufacture. In Enterprise Engineering and Integration: Building International Consensus. K Kosanke and JG Nell (eds.), Berlin: SpringerVerlag, pp. 256–264. Williams, TJ (1994). The purdue enterprise reference architecture. Computers in Industry, 24(2–3), 141–158.
This page intentionally left blank .
Axel. Hofstede. which in conjunction with the human resources management (HRM) practice may create some unique OC for a corporation. The purpose of this chapter is to provide a scheme to analyze this issue in a systematic way. the new approach of the nationorganizationperformance exchange model (NOPX) is explained in detail and the theoretical propositions underlying this model are analyzed. National culture with its various manifestations can create unique organizational values for a country. 1990. In Section 2. NC. 1991). OC. 1992). Calori and Sarnin. which can affect their performances (Barlett and Yoshihara. creates a synergy. and HRM practices. 137 . Introduction Japanese national culture has a speciﬁc inﬂuence on the organizational culture (OC) of the Japanese companies. Thus. which is the product of OC. and CPs are interrelated and each depend on the other. National culture can affect corporate OC (Aoki. OCs. otherwise it declines. 1980) and corporate performance (CP) (Cameron and Quinn. leadership styles. 1995. The structure of this chapter is as follows. In Section 3. which shapes the fortune of the company’s performances. the Japanese culture was analyzed along with the controversies on this issue. 1988. is analyzed. as given in the existing literature. If a company maintains this harmony it can perform well.Topic 2 Toward a Theory of Japanese Organizational Culture and Corporate Performance Victoria Miroshnik University of Glasgow. human resources practices. organizational values. In order to understand these aspects of culture that are affecting corporate performance. In Section 4. Multinational corporations with its supposedly common OC demonstrate different behaviors in different countries. and performances. UK 1. 1999. relationship between NCs. NC. Organizational culture along with a leadership style. Kotter and Heskett. a systematic analysis of the national culture (NC) and OC is essential.
or present. desirable. or a combination of all these? (d) What are the modalities of human activity. Mesovalues are certain codes of behavior. security. which are speciﬁc for a country.. 1988). Corporate Performance. religious. and the expected behavior of the members of the society. comprising of several layers of interrelated variables. given certain mesovalues (MEV) or organizational values. the Highest Stage of National and Organizational Culture: A Synthesis Culture is multidimensional. and a sense of accomplishment. OC emerges from some common assumptions about the organization. These are reﬂected in their pattern of behaviors. et al. Combinations of microvalues (MIVs) and macrovalues (MAVs) can give rise to an OC. selffulﬁlment. and selfrespect (Kahle. . fun and enjoyment. which the members learn and feel about and forms a set of shared philosophies. as products of historical experiences. As society and organizations are continuously evolving. whether spontaneous or introspective or resultoriented or a combination of these? (e) What is the basis of relationship between one man and another in the society? In the structure of the NC. According to Schein (1997). The core values of a society (macrovalues (MAVs)) can be analyzed by asking the following questions: (a) What are the relationships between human and nature? (b) What are the characteristics of innate human nature? (c) What is the focus regarding time — whether past. if they want to belong to the main stream. expressed values. excitement. being wellrespected. there is no theory of culture valid at all times and locations. which include sense of belonging. security. These vary from one society to another. which inﬂuences the future. Cultural values are shared ideas about what is good.138 Victoria Miroshnik 2. respect for traditions. and justiﬁed. and observed artefacts. future. and moral values are different. warm relationship with others. as the expectations of different societies. 1993) has the opinion that the perceived practices are the roots of the OC rather than values. expectations. these are expressed in a variety of ways. Although Hofstede (1990. and wisdom are important for the society. which the members share as a result of their experiences in that organization. Organizational culture provides accepted solutions to known problems. Social order. which is like an extended family. there are certain micro or individual values. most researchers have accepted the importance of values in shaping cultures in several organizations.
which promotes higher level of achievements (Kilman et al. the question is whether it is valid for every NC. As Siehl and Martin (1988) have observed. Schein. If an organization has a “strong culture” with “wellintegrated and an effective” set of values. Hofstede (1980. However. along with lifetime employment and senioritybased promotions. social culture. These factors are different for different organizations. 1985. which varies from country to country according to their differences in NC. are considered to be the natural outcome of the Japanese NC. and MIVs creates a speciﬁc OC. Organizational culture. harmony. leaders create certain vision or philosophy and business strategy for the company. 1985. Japanese National and Organizational Culture Japanese ﬁrms are considered to be of a family unit with longterm orientations for HRM and with close ties with other likeminded ﬁrms. Global leadership and organizational behavior effectiveness (GLOBE) research project has tried to identify the relationships among leadership. MEVs. may have many different cultures. and behavior patterns. and behaviors. A combination of MAVs. There is no single cultural formula for longrun effectiveness. and consensus decisionmaking. 1981). selfsacriﬁce. These. different types of OC enhance different types of business (Kotter and Heskett. 1993. Ouchi. Cameron and Quinn (1999) have mentioned that the most important competitive advantage of a company is its OC. 1993) showed the relationship between NC and OC. banks. 3. and the government. culture may serve as a ﬁlter for factors that inﬂuence the performance of an organization. 1986. 1987. . Imai. However. A corporate culture emerges that reﬂects the vision and strategy of the leaders and experiences they had while implementing these strategies. 1981). hard work. Marcoulides and Heck. The OC promotes loyalty. 1997). According to Kotter and Heskett (1992). Thus..Toward a Theory of Japanese Organizational Culture 139 norms. Japanese workers’ psychological dependency on companies emerges from their intimate dependent relationships with the society and the nation. and OC (House. 1990. 1992). it normally demonstrates high level of CPs (Ouchi. beliefs. a thorough analysis regarding the relationship between culture and performances is essential. (Ikeda. Denison and Mishra (1995) have attempted to relate OC and performances based on different characteristics of the OC. 1999). for a large and geographically dispersed organization.
if they want to belong to the mainstream Japanese society (Basu. Houkoku. Combining these words. and spirit (ishiki). SenpaiKohai or seniorjunior relationships are formed from the level of primary schools where junior students have followed the orders of the senior students. 1996.. Making suggestions. which forms the basic value system for the Japanese (Kobayashi. 1996). (c) HouRenSou and (d) Kaizen or continuous improvements. which is better understood from the saying that “nails that sticks out should be beaten down”. every Japanese must follow the “HouRenSou” value system. From this system emerges the second MEV. (b) Conformity. 1998). the basic code of conducts and norms.. the way of thinking (kangaekata). Search for continuous improvement during the Meiji Government of the 19th century led them to search the world for knowledge. HouRenSou. These are (a) SenpaiKohai system.e. for improvements. Japanese companies today are doing the same in terms of both acquisitions of knowledge by setting up R&D centers throughout the western world. to consult or preconsult. Superiors and subordinates share information. The process continues throughout the lifetime for the Japanese. may help the juniors in learning. no one can make his/her decision by himself/herself even within the delegated authority. who in turn.140 Victoria Miroshnik Japanese NC has certain MIVs: demonstration of appropriate attitudes (taido).e. to inform and Soudan i. from the entry day of the working life to the date of retirement. The inner meaning is that unless someone conforms to the rules of the community or coworkers. Certain MEVs can be identiﬁed as the core of the cultural life for the Japanese. and by having “quality circles” in work places in order to implement new knowledge to improve the product quality and to increase its efﬁciency (Kujawa. 1980). Everyone from the clerk to the president. Miyajima. and Soudan. Conformity. . Renraku i. 1999. Renraku. Kaizen i. HouRenSou.e. Fujino. Kumagai.. is a combination of three different words in Japanese: Houkoku i. continuous improvement is another MEV that is one of the basic ingredients of the Japanese culture. The third item. “HouRenSou” is the core value of the culture in Japan. to report. In work places. There is no space in which the delegation of authority may function. Senpais will explain Kohais how to do their work. without preconsultations is considered to be an offensive behavior in Japanese culture. is the basic feature of the Japanese organizations. 1979. he/she would be an outcaste. There is no room for individualism in the Japanese society or in work place. Consultations and preconsultations are required. Subordinates should always report to the superior..e.
Japanese system of management is a complete philosophy of organization. Submodel 1: cultureperformance system. which is an inexpensive means of conducting inspection for all units to ensure zero defects. which gave rise to the unique Japanese style of OC. Mitsubishi. “OC” is “Organizational Culture”. 1992. which was developed at the Toyota Motor Company. in which the production system is being constantly improved. Because Japanese overseas afﬁliates are part of the family of the parent company. 1996. Involving everyone in the work of improvement is often accomplished through quality circles. These three ingredients are interlinked in order to produce total effect on the management of the Japanese enterprises. “LC” is “Leader Culture”. “Humanware” is deﬁned as the integration and interdependence of machinery and human relations and a concept to differentiate among different types of production systems. . 1980). 1993). There are three basic ingredients: lean production system.1. which can affect every part of the enterprise. and Nissan) and interviews with several senior executives. and. it is essential to know the Japanese system of management. Manufacturing tasks are organized into teams. which is described in Fig. total quality management (TQM) and HRMs (Kobayashi. Japanese OC In order to understand the Japanese OC. is to lower the costs. Quality assurance is the responsibility of everyone. “CP” is “Corporate Performance”. What is written below is the result of several visits to the major Japanese corporations (like Toyota. 1. 1999. Shimada. The purpose of the lean production philosophy. Morishima. The basic idea of the lean production system and the fundamental organizational principles is the “Humanware” (Shimada. perfection is the only goal. Figure 1. Lean production system uses “autonomous defect control” (Pokayoke in Japanese). 1993). each NC OC LC CP Where “NC” is “National Culture”. This is done through the elimination of waste — everything that does not add value to the product. including members of the board and vicepresidents of these companies. Morita. The principle of “justintime” (JIT) means. The constant strive for perfection (Kaizen in Japanese) is the overriding concept behind good management. Honda.Toward a Theory of Japanese Organizational Culture 141 3. their management systems are part of the management strategy of the parent company (Basu.
1999. In the lean production system responsibilities are decentralized. The most important feature of the organizational setup of the lean production system is the extensive use of multifunctional teams. National culture is affected by two constructs: . quality instruments. and leadership styles have prominent inﬂuences on it. There are extensive usages of Kaizen activities and TQM and Total Productive Maintenance (TPM) to increase the effectiveness of corporate performances. The production system is changing and gradually adopting a more ﬂexible system in order to adopt itself to changes in demand. which are groups of workers who are able to perform many different works. 1989). 1. is the synthesis of the ideas relating to this central theme. which jointly can inﬂuence the outcome. affect CPs. There is no supervisory level in the hierarchy. These in turn. In a multifunctional setup. The number of functional areas that are the responsibility of the teams increases (Kumazawa and Yamada. the number of hierarchical levels in the organization can be reduced. The proposed model. exactly when that part is needed. Most companies in Japan use these criteria to signify CPs (Basu. in the right quantity at exactly the right point of time. setup performances. computers. Nakane. it is vital to provide information in time and continuously in the production ﬂow. The teams are organized along a cellbased production ﬂow system. Corporate performances are measured in terms of the following criteria: (a) customers’ satisfaction.142 Victoria Miroshnik process should be provided with the right part. The multifunctional team is expected to perform supervisory tasks. which may reduce costs of production and increase efﬁciency. Workers have received training to perform a number of different tasks. 1970). There are several latent or hypothetical variables. The ultimate goal is that every process should be provided with one part at a time. 4. As a result. The number of jobclassiﬁcations also declines. as in Fig. human resources practice. (b) employee’s satisfaction and commitment and (c) contributions of the organization to the society and environment. Figures 2 and 3 describe the proposed theoretical model relating OC and CPs. along with the OC. This is done through the rotations of team leadership among workers. maintenance etc. such as the statistical process control. A NationOrganizationLeaderPerformance Model Organizational culture is a product of a number of factors: NC.
1988). . these are not so important (Basu.. Ritual suicides (Harakiri) are honorable acts for the Japanese. it is better to separate out these three values as distinct.. Nakane. (It is different from color or religious exclusivities. (d) Collectivism in decisionmaking process (“HouRenSou” system in Japanese) and (e) Continuous improvements (Keizen in Japanese). are excluded from the Japanese social circles. These are discussed as follows: (a) Exclusivity or insideroutsider (UchiSoto in Japanese) psychology by which Japanese exclude anyone who is not an ethnic Japanese from social discourse. the Chinese or Koreans. but are derived from the NC. community spirit. 1970) and is fundamental to the Japanese organizations. Habitual values are extreme politeness on the surface. but for the Japanese. who are living in Japan for centuries. and moral values. There may be other MAVs.Toward a Theory of Japanese Organizational Culture 143 MIVs and MAVs. The moral and habitual values are different in Japan from other East Asian nations. a reﬂection of the unique Japanese culture (Basu. 1999). For example. if they fail in some way to do their duty. This is particularly true if we examine certain MEVs. and cleanness are unique in Japan. Macrovalues. (c) Seniority system (SenpaiKohai in Japanese) by which every junior must obey and show respects to the seniors. which are important for other nations like geography. Honor and “respect from others” are central to the Japanese psychology.) (b) Conformity or the doctrine of “nail that sticks up should be beaten down” (Deru Kuiwa Utareru in Japanese) — deviations from the mainstream norms are not tolerated. racial origin. (2) respect and recognition from others and (3) a sense of life accomplishments and (4) selfrespect. It is possible to identify ﬁve different MEVs. ﬁrms are religious values. habitual values. 1999. according to the opinions of the executives of the Japanese. Microvalues (MIVs) are composed of several factors (Kahle et al. which are neither MIV nor MAV values. etc. which are not followed in any other countries in Asia. beautiﬁcations of everything. which are important outcomes of the Japanese NC. These MEVs are exclusively Japanese. These are: (1) the sense of belonging. Although moral and habitual values can be the results of religious values. which is the fundamental philosophy of the Japanese society.
and HRM would vary from country to country. MEVs. cannot affect an organizational culture or the HRM system. The similarities in OCs in different companies are due to the similarities in NC and MEV. These factors are: (a) stability. MEVs. not an independent variable as in the Western companies. 1999). Organizational culture (OC) is affected by four factors. HRM system and leadership styles (LS) constructs. 1999). MEVs. In Japan human resource practices vary from one company to another and as a result. according to this proposed model. situations are not the same. If the company’s leader is also the founder. then only the leadership style (LS) can affect the HRM and OC. (b) ﬂexibility. as a result. In many other companies in Japan. Finally. Leadership style is a function of four factors. or Australian) companies where the leaders are hired from outside and are expected to be innovative regarding the OC and HRM system. how an OC is being affected by NC. Keizen. OC. but the emphasis varies from country to country. (c) internal focus and (d) external focus (Cameron and Quinn. All executives enter the company after their graduation and stay until they retire). OC differs. a MEV in the model.144 Victoria Miroshnik National culture construct along with the MEVs affect both the organizational culture construct (OR). OCs and performances vary. the corporate performance (CP) is affected by a number of inﬂuencing factors: (a) customers’ satisfaction. This is quite alien to the Japanese culture. (c) as a technical expert and (d) as an effective competitor for rival ﬁrms (Cameron and Quinn. TQM and just in time (JIT) productioninventory system. Japanese OR construct is affected by the NC. and the HRM system. (b) employees’ satisfaction . Leadership style in Japan is an outcome. MEVs. Otherwise. the differences are due mainly to the differences in HRM. These four characteristics of a leader are universal. it is very consistent and strong culture with great emphasis on discipline. appropriate LCs are developed by the OC and HRM. These factors are leader: (a) as a facilitator. Leadership style is affected by the NC. However. when the leaders are professional managers trained and grown up within the company (In Japan and in Japanese companies abroad. and the HRM. not restricted to the Japanese companies. it is out of practice to accept an outsider for the executive positions. and HRM. It is very different from the Western (America. Organizational culture is affected by the NC. These factors are universal. In Toyota. Due to the collective decisionmaking process (HouRenSou) leadership. European. (b) as an innovator.
and commitments. “OC” is “Organizational Culture”. then CP has these two important contributory factors. “HRMPJ” is “Human Resources Management Practices in Japan”. which can be estimated using the methods of structural equation modeling (Raykov and Marcoulides. “MIV” is “MicroValues of Culture”. Analysis of the Japanese culture and organizations by outsiders so far suffers from a number of defects (Ikeda. These interrelationships are important aspects of the Japanese corporate behaviors in a multinational setting and this proposed theory is an attempt to understand the Japanese corporate culture. 5. a more elaborate model. Watanabe. and “CP” is “Corporate Performance”. represented in Fig. “MAV” is “MacroValues of Culture”. 1999) of the model. In Fig. 1987. They neither try to understand the effects of the Japanese value system on their organizations nor do they give any importance to the relationship between . Value–culture–performance: a thematic presentation. “LC” is “Leader Culture”. 2. including the unique HRM system of the Japanese companies is described. Discussion The proposed theory describes the relationship between NC and CPs through OC with the HRM system affecting the nature of the culture and leadership. and (c) the contribution of the organization to the society. Figure 2. “MEV” is “MesoValues of Culture”. If we consider the contribution of the organization to society that is already taken into account by the customers’ satisfaction and employees’ satisfaction.Toward a Theory of Japanese Organizational Culture 145 MAV LC NC OC CP MEV HRMPJ MIV Where “NC” is “National Culture”. 3. 2000). 1998). Figure 3 represents a Linear Structural Relations (LISREL) version (Joreskog and Sorbom. Theoretical relationship between the observed variables and the underlying constructs are speciﬁed in an exploratory model.
B3 “Masculinity/Femininity” are indicators for NC variable. C5 “Kaizen Value” are indicators for MEV variable. “OC” is “Organizational Culture”. “MAV” is “MacroValues of Culture”. G4 “Leader as Competitor” are indicators for LC variable. E2 “Emphasis on Personality of Recruits” are indicators for HRMPJ variable. F6 “Ishiki Value” are indicators for MIV variable. D1 “Stability”. D4 “Internal Focus are indicators for OC variable”. So far. E1 “OnJobTraining”. D2 “Flexibility”. C1 “Uchi/Soto Value”. H1 “Customers Satisfaction”. A1 “Religious Values”. Figure 3. B2 “Individualism/Collectivism”. However. C4 “HouRenSou Value”. G1 “Leader as Facilitator”. They could not explain the reason for the failure. F1 “Sense of Belonging to a Group Value”. A2 “Moral Values” A3 “Habitual Values” are indicators for MAV variable. G3 “Leader as Technical Expert”. F2 “Respect and Recognition from Others Value”. The typical example is the analysis of the failure of the implementations of Japanese operations management system in Britain by Oliver and Williamson (1992) or in China by Taylor (1999). performance and culture. C2 “Deru Kuiwa Utareru Value”. H2 “Employees satisfaction and Commitment” are indicators for CP variable. “MIV” is “MicroValues of Culture”. the success of the Japanese companies was analyzed only in terms of their unique production and operations management system. . “MEV” is “MesoValues of Culture”. “LC” is “Leader Culture”. F5 “Kangaekata Value”. G2 “Leader as Innovator”. the interrelationship between the production and operations management system. B1 “Power Distance”. and the speciﬁc HRM system it requires for its proper implementation. D3 “External Focus”. Research model: Valuescultureperformance (VCP): predicted paths between variables and indicators. C3 “Senpai/Kohai Value”. was not given sufﬁcient attention. and “CP” is “Corporate Performance”. as they have not analyzed the Japanese OC and particularly the HRM system. F4 “Selfrespect Value”.146 Victoria Miroshnik A2 A1 A3 D1 D2 D3 D4 H1 MAV B1 B2 B3 B4 G1 G2 G3 FG NC MEV OC LC CP H21 MIV F6 C1 C2 C3 C4 C5 HRMPJ F1 F2 F3 F4 F5 E1 E24 Where “NC” is “National Culture”. which is a part of the OC. F3 “Sense of Life Accomplishment Value”.
Conclusion A proposed theory of the relationship. these are not coming from outside. M (1990). Thus. 1–27. KS and R Quinn (1999). Cultural differences are important and these factors affect every component of the company behaviors and performances. References Aoki. Corporate culture and economic performance: a French study. Denison. human resources practices. DR and AK Mishra (1995). Culturebound aspects of Japanese management. Whether efﬁcient CPs can be repeated in foreign locations depends on the transmission mechanism of the Japanese OC. in a Japanese organization. Toward a theory of organizational culture and effectiveness. New challenges for Japanese multinationals: is organization adaptation their Achilles Heel. March–April. . Towards an economic model of the Japanese ﬁrm. Journal of Economic Literature 28. 204–223. is narrated. CA and H Yoshihara (1988). S (1999). 49–74. Calori. 6.Toward a Theory of Japanese Organizational Culture 147 In a Japanese company. 57–73. Adaptation of the operations management system alone will not create an effective performance of a Japanese company in a foreign location. The proposed theory may provide an answer to these issues. An effective corporate performance is the result of these underlying determinants of OC and LSs. and CPs for the Japanese multinational companies. which in turn. OC. among NC. Axel. which emerges from the MEVs of the Japanese national culture. Organization Studies 12(1). Cameron. 19–43. Creation of an appropriate HRM system and appropriate OC are essential for an effective performance. Organizational Science 6(2). Basu. Management International Review 35(2). Corporate Purpose. Human Resources Management 27(1). the LSs and the OC are designed by the HRM system. OC in a foreign location may be different from the OC of the company in Japan. M (1995). This can face obstacles due to the different MEVs in a foreign society and as a result. LS is rooted in the HRM system. Barlett. R and P Sarnin (1991). which are relatively more important than the others and whether it will be possible to manipulate these factors in order to increase the performances of the multinational companies. New York: Garland Publishers. Diagnosing and Changing Organizational Culture: Based on the Competing Values Framework. depends on the adaptations of the Japanese style HRM system. New York: AddisonWesley. The question was raised regarding the factors.
Vol. G (1970). In Trends in Organizational Behavior. RJ Best and RJ Kennedy (1988). T (1998). A moment for Japanese management. GA and RH Heck (1993). Renegotiating psychological controls. In Sekai wa Nihon wo Dou Miteiruka (How Does the World See Japan). Japanese style. The labour relations of US multinationals abroad: comparative and prospective views. 347–357. 171–234. Kujawa. Chicago: Scientiﬁc Software. Nakane. Greenwick: CT. R. Beverly Hills. Vol. London: Unwin Hyman. Corporate Culture and Performance. Marcoulides. H (1996).30: User’s Reference Guide. Current Issues in Research in Advertising 11. RJ et al. In Gaining Control of the Organizational Culture. May. pp. An alternative method for measuring valuebased segmentation and advertisement positioning. Hofstede. Administrative Science Quarterly 35. S Wood (ed. M Gaxton and R Serpa (eds. Kobayashi. . Nihon oyobi nihonjin ni tsuiteno imeji (Image of Japan and Japanese). NY: McGrawHill. 3–25. Academy of Management Executive 7(1). 12–31. Kumagai. 56–79. How Japanese companies have globalized. Ikeda. Morishima. JapanEcho 19(2). Hofstede. 4. Kumazawa. Institute for Research in Contemporary Political and EconomicAffairs. Culture’s Consequences: International Differences in Workrelated Values. A Tsujimura. KG and D Sorbom (1999). 3. Jobs and skills under the lifelong Nenko employment practice. Kilman. Journal of Management Studies 22(3). K (1987). M and J Yamada (1989).148 Victoria Miroshnik Fujino. M (1986). Tokyo: Waseda University. Introduction: ﬁve key issues in understanding and changing culture. 139–155. 1–12. JP and JL Heskett (1992). Working Paper No. 149–165. Hofstede. M (1996). San Francisco: JosseyBass. Measuring organizational cultures: a qualitative and quantitative study across twenty cases. Organization Science 14(2). The evolution and change of contingent governance structure in the Jﬁrm system: An approach to presidential turnover and ﬁrm performance. Tokyo: JETRO. 1–28. CL Cooper (ed. Kaizen: The Key to Japan’s Competitive Success.). A (1992). Kotter. Labour and Society. MJ Saxton.) Tokyo: Nihon Hyoronsa.). pp. and R Serpa (1985). Cultural inﬂuences on leadership: Project GLOBE. 81–94. 1–16. W Moblet. Kahle. G (1990). The interaction between national and organisational value system. G (1993). F (1996). (1999). NY: The Free Press. LR. G (1980). 286–316. CA: Sage. Organization culture and performance: proposing and testing a model. D (1979). In Advances in Global Leadership. Management Japan 31(2). G (1985). K Furuhata and H Akuto (eds. New York: John Wiley & Sons Ltd. Tokyo: Chuo Keizaisha. Morita. Imai. LISREL 8. 9606.).). JAI Press. Japanese Society. N (1980). 209–225. R Kilman. Nihon no Takokuseki Kigyo (Japanese multinational corporations). Joreskog. Harmondsworth: Penguin. J Gessner and V Arnold (eds. Hofstede. Cultural constraints in management theories. House. Nihon Teki Seisan Sisutemu in USA (Japanese Production System in USA). 1. Miyajima. New York.. In The Transformation of Work.
Organizational Culture and Leadership. Schein. B (1999). Raykov. Shimada. Reading. A First Course in Structural Equation Modeling. Watanabe. 853–873. H (1993). Taylor. Theory Z: How American Business can Meet the Japanese Challenge. The Japanization of British Industry. In Lean Production and Beyond. (2000). MA: AddisonWesley. 23–47. Japanese management of autoproduction in the United States: An overview of human technology in international labour organization. Patterns of control within Japanese manufacturing plants in China: doubts about Japanization in Asia. 23–43.Toward a Theory of Japanese Organizational Culture 149 Oliver. Geneva: ILO. International Journal of Japanese Sociology 7. London: Lawrence Erlbaum. Managersubordinate relationship in crosscultural organizations: the case of Japanese subsidiaries in the United States. EH (1997). Cambridge. Journal of Management Studies 36(6). W (1981). N and B Wilkinson (1992). . SanFrancisco: JosseyBass Publishers. S (1998). T and GA Marcoulides. Ouchi. Massachussets: Blackwell.
This page intentionally left blank .
However. a lower rate of the increase 151 . To facilitate the presentation. After setting the priorities of the various targets. one should combine the priorities of the decision maker with the technical and socioeconomic factors. there is no outpatient services. which dictated a readjustment of the priorities. 2002. at the same time satisfying most of the requirements for personnel management. to avoid overcharging the patients. to face competition from similar units and to be in line with some major socioeconomic factors. together with the proﬁt maximization target. Introduction Private production units usually operate on the basis of proﬁt maximization.Topic 3 Health Service Management Using Goal Programming AnnaMaria Mouza Institute of Technology and Education. The production unit considered in this paper is a relatively small (60 beds) private clinic in northern Greece. if the decision maker’s preference is relatively rigid. The necessary projections are based on reliable techniques presented elsewhere (Mouza. in this particular case. the various expenses. I consider a ﬁveyear operational plan where the personnel claims. In order to present propositions for optimal management decisions. the manpower working pattern. I formulate a proper “goal programming” problem using deviational variables (di− and di+ ) and obtained the ﬁrstrun solution. I proceed to the goal attainment evaluation. which are involved in the operating process in the best possible way. However. 1996). and the expected number of patients are described in detail. Then. 2006). Greece 1. some decision makers are forced to relax this basic objective. The secondrun solution indicates that a slower acceleration of the proﬁt maximization rate is preferable. Furthermore. which is similar to the orthopedic department of a hospital studied and described elsewhere (Mouza.
04 199. and t5 . for the initial and ﬁnal year is stated. The Details of the Private Clinic in Relation to the FiveYear Plan In Table 1. 2. bearers maintenance.7 54.8 58. Annual salary at Salary at ﬁnal year t5 .64 101.647 50. anaesthiologists and microbiologists Pharmacist Operatingroom nurses Nurses plus laboratory staff Axialtomography and Xray technicians Clinic manager Secretary and administrative service staff Ofﬁce clerks Cooking and food service staff Other staff (guard.643 119. In columns 4 and 5 of Table 1. All computational results are presented at the end of the relevant section. the total amount of the wages for all employees of the same group.6 163. an average percent of annual salary Table 1.6 42.054 68.3 32. For each group. I present the composition of the personnel. for all Members year t0 for all at each group members group members group (in thousand ) (in thousand ) 10 260 356.307 36.222 Position of the Groups group members 1 Orthopedic surgeons.190 39. cleaning service etc. Note that the wages are an average of the salaries earned by each person in the individual group. and their salary at the base year (t0 ).851 .4 49. Personnel of the orthopedic clinic and their wages.) 2 3 4 5 6 7 1 3 14 3 1 3 24. t4 .152 AnnaMaria Mouza of salaries of the employees should be adopted. refers to the timeperiods (years) t1 . Note that the 5year planning horizon. t3 .76 33.476 8 9 10 4 4 10 41.29 74. t2 .5 27.
the ﬁnal salary for all group members will be: An (1) = 260(1 + 0. From Table 1. the total salary of all the members of group 1. It should be noted that in some groups. seems more reasonable to be equally distributed over the ﬁve years of the planning period. The interactive voice response (IVR) systems which are effectively used in healthcare and hospitals are described by Mouza (2003). the forecast regarding the expected number of patients for the ﬁnal year is stated. It should be noted that replacement expenses together with expenses for obtaining new devices.Health Service Management 153 increase. Other additional expenses for the ﬁnal year of of the planning period are presented in Table 2. (1) Thus for group 1. 10) denotes the size (i. is computed from: Ri An (i) = A0 (i) 1 + 100 5 . .065)5 = 356. this rate is well above the rate of inﬂation growth. which has been equally spread over the ﬁve years of the planning period. These rates of annual salary increase for all groups are presented in Table 3. the replacement refers to the server of the local area computer network (LAN). Their net cost sum up to 2000. at the ﬁnal year. denoted by An (i). where R1 = 6. can also . and the ﬁnal annual salary at year t5 for all members of the same group. Needless to say. . and the rate stated in the collective wage and salary agreements for each employees group. It should be noted. Thus. Thus. is considered. . as it can be veriﬁed from Table 3. say Ri . On the top of Table 2. That is why the average per patient cost stated in Table 2 seems to be relatively low. the estimated salvage on old equipment has to be subtracted from the relevant cost. only one ﬁfth of such expenses is stated. Denoting by A0 (i) the annual salary of all members of group i at the base year t0 .222. the members) of group i and ci . It should be pointed out that not all patients necessarily need an axialtomography and/or Xray treatment. that in case of replacements.5%. the average salary of each group member. It can be seen from Table 3.e. . that Xi (i = 1. when multiplied by the group size gives the total salary for the corresponding group. For the case under consideration. for instance.. I computed the average salary per employee at the ﬁnal year which. that these rates of salary increases have been formed in accordance to the personnel claims. together with two stations.
the salary expenses of group i. .4 Administrative and 32. i=11 (3) .2 × 10 = 356222. then I may write X10+i = ci × Xi (i = 1.000 be computed from: c1 × X1 = 35622. . (2) so that the total salary expenses for the ﬁnal year can be determined by evaluating the summation: 20 Xi . 10).1 Xray 5. If I denote by X10+i .154 AnnaMaria Mouza Table 2.000 Fixed expenses plus 180. .7 Medical supplies 59. at the ﬁnal period tf ( ) Patient expenses Forecasted number of patients for the period (year) t5 = 3625 Axialtomography 9. Forecasted number of patients and various expenses for the ﬁnal year. Forecasted average per patient.000 depreciation All other expenses 80.1 miscellaneous Other expected Total ( ) Amount for the ﬁnal year t5 expenses Installation of IVR 3000 600 system Server and computers 2350 (salvage value: 350) 400 replacement Personnel insurance Endogenous (decision (19% of total variable X39 ) salaries) Laboratory expenses 60. .
It should (i = 1. the expenses incurred.054 22.76 41.5 6 4.9 5. originally developed by Charnes and Cooper (1961).222 33. (4) .Health Service Management 155 Table 3. 10).985.054 68.4 Salary at ﬁnal year tf .8 3. .8 58.680 14.64 101. for all the Average members of per person group i at each c (in ) group (ci ) 356.235 24. for all the members of group i (in thousand ) 260 24.190 39. It can be easily veriﬁed that in Table 4. .6 3. Denoting by X22+i (i = 1. which should combine the desired salary increases and the full utilization of working hours. column 5 is obtained by multiplying the elements of column 4 by 52.769 36.5 27. This problem can be answered by the goal programming method.5 3.8 3. 10) the total average working hours per year for all the group members.647 50. together with the desired gross proﬁt at the ﬁnal year set by the decision maker.307 36.7 54.5 4 4. Annual salary at year t0 . .290 74.10 Another point of interest refers to the average working hours of the members of each group.643 119. the resultant problem is.75 11.158.476 49.4 Number of group members Groups (i) (Xi ) 1 2 3 4 5 6 7 8 9 10 10 1 3 14 3 1 3 4 4 10 Annual rate of increase.660. then I may write X22+i = wi × Xi 3.411. . how to formulate a feasible operating plan.190 13.3 32. Ri (in %) 6. The Nature of the Problem Given all the information presented in Tables 1–4.040 199.622. . Employees’ salary at the initial and ﬁnal year of the planning period.2 33. . .6 163. in the best possible way with the minimum sacriﬁce.851 35. The relative ﬁgures are analytically presented in Table 4.6 42.666 12. .75 12.
. . X20 . . . X31 refer to the total working hours. Speciﬁcation of the Problem Decision Variables • The ﬁrst 10 decision or choice variables Xi (i = 1.120 6240 2340 6240 8320 6240 15. — Variables X22 . for all the members of each group 33.156 AnnaMaria Mouza Table 4. — Variable X21 denotes total salary cost at the ﬁnal year. without any change in the operating pattern of the clinic. .600 Average per person at each group (wi ) 3380 2340 3380 2080 2080 2340 2080 2080 1560 1560 be recalled that goal programming is a mathematical programming technique with the capability of handling multiple goals given certain priority structure. . the main task here refers to the compensation of total expenses with the decision maker’s requirement regarding gross proﬁts. . stand for the salary cost of each group. 3.1. Total hours per week for each for all the Number member of of group group i (in members of group i Groups (i) members thousand ) 1 2 3 4 5 6 7 8 9 10 10 1 3 14 3 1 3 4 4 10 65 45 65 40 40 45 40 40 30 30 650 45 195 560 120 45 120 160 120 300 Total hours per year. — Variables X11 . Hours per week (average). observed in the 5th column of Table 4.800 2340 10. . 10) are already deﬁned in Table 3 and refer to the number of personnel in group i. Working hours of each group personnel.140 29. In brief. a goal programming model may be composed of nonhomogeneous units of measure. Furthermore. which is the case under consideration. . as far as the manpower level is concerned.
Finally. I classiﬁed the goals into the following seven categories.Health Service Management 157 • Variables X32 .. X35 refer to the average cost per patient. 3. is used to denote total operating cost for the ﬁnal year tf (i. .2. so that di− × di+ = 0. b6 = 1. 10) (5) where b1 = 10.. then di+ will take a nonzero value. — Variables X37 and X38 refer to the equipment purchase and replacement. . variable X43 denotes the average charge per patient. which also is a deﬁnition variable. which represents the negative deviation from the goal. . • Variable X39 stands for the personnel insurance expenses. — Variable X40 refers to the depreciation. Personnel Employed The operation pattern of the clinic. b5 = 3. ﬁxed and other expenses. b2 = 1. • Finally. • Variable X41 is used to sum up all the above expenses (i. the decision maker does not want to alter this pattern. Given that.000. If the ith goal is not completely achieved. . It is useful to recall that di− and di+ denote the socalled deviational variables from the goals. X37 + X38 + X39 + X40 ) • X42 . if the ith goal is over achieved. . which are complementary to each other. as it is stated in the ﬁrst four rows of Table 2. 3 and 4. b7 = 3. . laboratory.e. b8 = 4. regarding the personnel employed is presented in Table 1.e. if this particular goal is exactly achieved. b4 = 14. so that the desired gross proﬁt would not be less than 750. I must ﬁrst introduce the following 10 constraints (goals) Xi + di− − di+ = bi (i = 1. regarding the members of each group as shown in Tables 1.2. . are the desired targets. On the other hand. b9 = 4 and b10 = 10. 3.1. t5 ) of the planning period. Formulation of the Constraints (Goals) To facilitate the presentation. b3 = 3. then both di− and di+ will be zero. then the observed slack will be expressed by di− . — Variable X36 denotes total per patient expenses.
1. (i = 1. Furthermore. The computation of this total is based on Eq. . . 3. (6) It is recalled that coefﬁcients ci are presented in Table 3. (i = 1.158 AnnaMaria Mouza 3. Hence. (9a) Xray expenses per patient: − + X33 + d33 − d33 = 5. − + X21+i − wi Xi + d21+i − d21+i = 0. (8) Note. the decision maker does not want to alter the existing operation pattern. are also presented in Table 4. regarding the existing infrastructure. (9c) . An additional variable X21 is introduced to sum up salary expenses for all personnel at the ﬁnal year. 10). . since he considers that the present personnel manpower potential will be adequate to provide satisfactory service to the number of patients forecasted. . Personnel Claims for Salary Increase According to Eq. . . (2).2.2. I introduced the following constraints in accordance to Eq. 10).4. (9b) Medical expenses per patient: − + X34 + d34 − d34 = 59.2. 20 X21 − i=11 − + Xi + d21 − d21 = 0.7. Complete Utilization of Personnel Capacity in Terms of Working Hours The forecasted number of patients for the ﬁnal year is not exceeding the clinic capabilities.3. (4). . Patient expenses The axialtomography expenses per patient can be expressed as: − + X32 + d32 − d32 = 9. − + X10+i − ci Xi + d10+i − d10+i = 0. the coefﬁcients wi . (3). I have to introduce the following 10 constraints in order to deﬁne the total salary expenses for each group. .2. provided that the under utilization of the regular working hours will be avoided.4. which on the other hand may be regarded as a measure to provide job security to all personnel by avoiding underutilization of their regular working hours as shown in Table 4. (7) 3.
Total Expenses It is already mentioned that I introduce the variable X43 in order to sum up total expenses.2.19X21 + d39 − d39 = 0 (10c) Depreciation. (9d) Variable X36 depicts total per patient expenses at the ﬁnal year. − + X41 − (X37 + X38 + X39 + X40 ) + d41 − d41 = 0. (11) 3. (9e) 3. − + X39 − 0. (10a) Computers and server replacement. we see that the average expenses for personnel insurance amounts to 19% of the total employees’ salary.2. − + 3625X43 − X42 + d43 − d43 = 750. (10d) To sum all other expenses.1. in the following way. ﬁxed and other expenses − + X40 + d40 − d40 = (60. − + X42 − (X21 + 3625X36 + X41 ) + d42 − d42 = 0.7. Other Expected Expenses Installation of the IVR system. Total Gross Proﬁt The gross proﬁt achievement as set by the decision maker. laboratory. − + X38 + d38 − d38 = 400. From Table 2.000 (12) . (10b) The insurance funds.000). I introduce the following relation. This is represented by the following constraint. may be introduced in the following way. − + X36 − (X32 + X33 + X34 + X35 ) + d36 − d36 = 0.6.000 + 80.2.5. taking into account the variable X21 . (10e) 3.Health Service Management 159 Administrative and miscellaneous expenses per patient: − + X35 + d35 − d35 = 32. − + X37 + d37 − d37 = 600.000 + 180.
5. is the possibility of weighing the deviational variables at the same priority level (Gass. 3. 6. 4. To provide the funds necessary to cover all patient’s expenses (P4 ). Also. regarding the expected number of patients for the ﬁnal year tf . To avoid underutilization of the personnel regular working hours. represents the adequate charge per patient. Priority of the goals — The objective function It is obvious that the decision maker. To retain the present operating pattern of the clinic. To provide the wage increase. as it was mentioned earlier.3. (P5 ). in order to satisfy Eq. apart from determining the goals of the ﬁveyear plan. which appears in Eq. (11) too. (12). that has to be considered in the formulation of the model. 1. One further point. to cover all expenses for equipment replacements and installation of a new IVR system (P2 ).160 AnnaMaria Mouza where X43 . Different weights have been assigned to the various personnel groups. With the above formulation. also has to specify the priorities (Gass. claimed by the personnel at the various groups (P6 ). in order to accomplish the optimum allocation of resources for achieving these goals in the best possible manner. It is recalled that the coefﬁcient 3625. denoted by Pi . 1986). weights are used to discriminate the salary increase of each group. we see that the number of constraints is equal to the number of choice variables (43). 3. In this case. According to the above ranking of the goals priority and taking into account the relations for the corresponding totals. 2. to provide reserves for the personnel insurance. depreciation etc. Heavier weights have been assigned to groups with lower salaries. 1987). To achieve the desired level of gross proﬁts. providing at the same time job security to all employees (P3 ). the following objective . To provide adequate funds for covering laboratory and ﬁxed expenses. The list of decision maker’s goals is presented below in descending order of importance. is the stated forecast. This implies that no alterations are desired regarding the number and composition of the existing personnel (P1 ).
Imposition of a New Constraint Given the social character of health service.372. − − − − − − min z = (10P1 d1 + 9P1 d2 + 8P1 d3 + 7P1 d4 + 6P1 d5 + 5P1 d6 − − − − − − + 4P1 d7 + 3P1 d8 + 2P1 d9 + P1 d10 ) + (P2 d37 + P2 d38 − + P2 d39 − + 3P2 d42 − + P2 d43 ) + P3 31 i=22 di− 36 + P4 i=32 di− − − − − − − + (P5 d40 + P5 d41 ) + (P6 d21 + 2P6 d11 + 2P6 d12 + 3P6 d13 − − − − − + 4P6 d14 + 5P6 d15 + 6P6 d16 + 7P6 d17 + 8P6 d18 − − + 9P6 d19 + 10P6 d20 ).1. + − X43 + d44 − d44 = 700. Hence.9% is realized. using natural logs. (13) 4.Health Service Management 161 function is formulated. this pattern dictated by the optimal solution is readily applicable.e. the charge per patient (∼740 ) is considered being higher. 5 where x = 1. It should be pointed out that according to this solution.300 and r = (y − 1) × 100. where y = eZ and Z = ln(x) 5 However. when compared to the average of the local market. (1) in the following way. it is necessary to impose the following constraint.026.. this average rate r (i. an average rate of total salary increase of about 4. whereas the mean salary per worker increases from 15. I obtain the results as given on the left side of Table 5 (solution I).232. (14) . ln(x) = ln(1 + r). It is easily veriﬁed that all goals are achieved. In fact. 4. 5. affecting thus the competition level of the clinic. Initial Results With this speciﬁcation of the goal programming model.720 807. which has to be combined with the fact that the unit under consideration must be competitive.1 to 19.9) has been computed from Eq.
040.222. which is the value of the deviational variable d21 .290. shown in Eq.000 3.000 3.1.000 95 94 ×8 4. SecondRun Results With the new constraint.000 89 88 ×2 1. properly weighed. the solution obtained is presented on the right side of Table 5 (solution II). I observe that constraint Eq.000 4.000 36.000 356. It should be noted + that minimizing d44 in the objective function.000 102 101 ×15 74. the solution dictates that the total salary expenses must be reduced − by 118. I added in the objective as in Eq.190.000 1.000 92 91 ×5 3.190.000 74.000 90 89 ×3 3.000 3. Variable name Value The simplex solution (Number of patients = 3625) 87 ×1 10.000 4. the total number of constrains has become 44.000 101 100 ×14 199.180.000 199.307.000 (Continued ) . (7). No. Value No.125.000 104 ×1 ×2 ×3 ×4 ×5 ×6 ×7 ×8 ×9 ×10 ×11 ×12 ×13 ×14 ×15 ×16 10.000 97 96 ×10 10.040.162 AnnaMaria Mouza Thus. Hence.000 96 95 ×9 4.290.000 1.000 103 102 ×16 36. (13) the following term + 2P2 d44 which implies that the restriction regarding the maximum average charge per patient has been considered as a second priority goal.307.054.054.000 94 93 ×7 3. Evaluating the results of the second run. one may argue as to Table 5.000 10. Variable name Solution I (left) and solution II (right). 5.000 93 92 ×6 1.000 100 99 ×13 68. an upper limit of 700 is set to the average charge per patient.000 98 97 ×11 356.222. and in order to achieve the level of gross proﬁt set by the decision maker.000 14.000 33.000 91 90 ×4 14. (14) is binding.000 99 98 ×12 33.000 68.
300 600. decreasing gross proﬁts by almost 16%.000 118.578 320.000 15.134.140.000 113 112 ×26 6240.000 125 124 ×38 400.476.700 59.000 119.622.600.000 50.000 700.643.300 124 123 ×37 600.000 49.125 33.000 106 105 ×19 50.647.140.000 111 110 ×24 10.800.000 116 115 ×29 8320.100 120 119 ×33 5.100 123 122 ×36 106.000 172.000 118 117 ×31 15.000 105 104 ×18 49.647.796 131 109 ×17 ×18 ×19 ×20 d− 21 ×22 ×23 ×24 ×25 ×26 ×27 ×28 ×29 ×30 ×31 ×32 ×33 ×34 ×35 ×36 ×37 ×38 ×39 ×40 ×41 ×42 ×43 ×21 39.539.100 5.000.875 whether this is an optimal solution.928.000 6240.851.787.000 110 109 ×23 2340. .000 10.000 2340. which may be considered of particular importance.600.000 117 116 ×30 6240.000 493. since any reduction has to directly affect the employees salary solely. from the socioeconomic point of view.000 107 106 ×20 119.000.000 108 107 ×21 1.562 1.Health Service Management 163 Table 5.000 400.000 29.000 115 114 ×28 6240. No.180.000 112 111 ×25 29. Variable name Value The simplex solution (Number of patients = 3625) 103 ×17 39. With all these in mind.000 114 113 ×27 2340.700 121 120 ×34 59. the decision maker is not willing to entirely undertake this reduction (118.000 119 118 ×32 9.076.851.300.720.180. On the other hand. Variable name Value (Continued ) No. I present the following proposition.076.812 129 128 ×42 1.000 126 125 ×39 195.000 128 127 ×41 516.026.643.400 122 121 ×35 32.000 9.120.500.000 2340.476.120.125).000 21 108 ×22 33.000 908.400 32.797 127 126 ×40 320.375 130 129 ×43 738.000 6240.100 106.000 6240.000 8320.622.
which is an estimate of the expected annual rate of inﬂation increase.172 2859. of the deﬁcit that will be subtracted from the total employee’s salary.9 5. Let us assume that both sides agree for a = 0. is that special care should be taken for retaining a fair wage policy. A crucial point of particular importance. after taking into account.173 858.5 4 4.180. presented in Table 3.6 3.125 = 59.5. in order to distribute the salary restraint among different groups in the desired and acceptable way. the decision maker and the employees have to agree on the proportion. The distribution of salary reduction among groups.045522 0.1% over the planning period.016266 0.048387 0.090.627689 0.108050 0.164 AnnaMaria Mouza 6.748 59.5 × 118.861 652.011047 1 37.014535 0.870 961. the total salary of all employees will be reduced by 0.187 2942.049801 0.090.069093 0.4 10 9 6 5 7 8 4 3 2 1 0.090. Annual A composite rate of index to initial serve as the Number salary Ranking rate of of group members increase for salary initial salary Corresponding i ri (in %) decrease decrease amount (in ) Groups (i ) (Xi ) 1 2 3 4 5 6 7 8 9 10 Total 10 1 3 14 3 1 3 4 4 10 6.8 3.8 3. the initial rates of salary increase for each group. In this case. Finally.5 6 4.898 567. In any case.06 . the mean annual rate of salary increase for each group after the reduction mentioned above should not be less than 3. have been ranked properly as it is shown in Table 6. In fact.0625. This implies that lower salary groups should undergo lesser decrease.746 6384.711 2689. For this reason. and would affect their earnings proportionally.697 4082. compared to high salary groups.009610 0. say a (where 0 < a < 1). using the composite index.5 3. the latter requirement affects the value of “a”. Some Alternatives After proper negotiations. All these restrictions call for the construction of a composite index. the initial rates of salary Table 6.
Initially claimed salary for the ﬁnal year ( ) 356.647 50.699.13 12.187 2942.097.224.0991 3.838.700 54. Salary at the initial and ﬁnal years together with the annual rates of increase.2872 New coefﬁcients ci 31.514.054 68.2391 3.198.913.300 32.222 33.600 163.025 12.500.197.697 4082.278 12.14 119.096 33.748 Salary to be realized at the ﬁnal year ( ) 319.083 13.400 Amount of reduction 37.040 199.3 70.181 30.788.746 6384.000 24.800 58.10 50.643 119.640 101.194.778.194.83 48.518.Table 7.5795 3.173 858.3252 3.172 2859.81 30.29 33.13 38.090.81 21.81 65.131.785 11.190 39.919.851 i Groups (i ) 1 2 3 4 5 6 7 8 9 10 Salary at the period t0 ( ) 260.760 41.290 74.95 23.500.476 49.1836 4.25 Actual annual rate of salary increase (%) 4.870 961.711 2689.500 27.1782 3.905.825 Health Service Management 165 .861 652.408.2669 3.25 192.600 42.2897 3.7209 4.075.898 567.307 36.
090. say. 3. 3.. 4.5/ END MODULE data_and_params ! PROGRAM composite_index USE Dflib USE data_and_params IMPLICIT NONE INTEGER i. 42640.9) and the annual wage per capita will increase by 19. It should be also noted that with the allocation proposed so far. I computed the index presented in the 4th column of Table 6. 36190.. In an opposite situation however. 199290. 3. PARAMETER :: groups= 10. 27300. 6. 33054. 1. 119851.257.232.. we get the entries of the 5th column.Salary(groups). we can see that the requirement for the annual rate of salary increase at each group to exceed 3. out = 8 REAL. 74307. 1.300 to 967. rate. the number of employees per group together with the ranking scores. a ! To facilitate the presentation data are given directly here DATA R / 6. 3. STATUS=’UNKNOWN’) . MODULE data_and_params ! Goups=number of groups ! Planning_hor=Planning horizon (years) ! R = Initial rate of salary increase at each group (vector) ! Members = Number of employees at each group (vector) ! Salary_in = Initial employees salary at each group (vector) ! Salary_f = Final claimed salary for each group (vector) ! Deficit = The value of deviational variable ! a = Value of coefficient a (0<a<1) IMPLICIT NONE INTEGER. From the last column of Table 6.. 10 / DATA Salary_in / 260000. Multiplying the elements of this column by 59. 3.69% (from 807. 54600.0. 50643.86% (from 15.0. 41600. & & .32760.. is fully satisﬁed. s ! Open output file OPEN(out. one has to reduce the value of “a” by. 14. 101400. 68040.. j.1%. 4.2). newrate REAL Logvector(groups). FILE=’Composite_Index. a /118180.. 4. The composite index and all other entries of Tables 6 and 7 can be computed with the following code segment written in Visual Fortran.4/ DATA Members / 10. Employees REAL DIMENSION(groups) :: Y..8. 24700.5. Scores. 5. which are presented in Table 7. 58500. DIMENSION(groups) :: Members REAL :: Total_Deficit. Rsmall.. 49647.. 4.8. total initial salary. 4.629. 3./ DATA Salary_f / 356222.1 to 18. 163800. From Table 7. 3... Salary_f INTEGER.out’. DIMENSION(groups) :: R.39476. Salary_in. Deficit. 0. spending will increase by an annual rate of 3.0625.125. It is clear that all employees will be better off with smaller value of a. I computed the end period salary of each group and the corresponding annual rate of increase.5. sortR.. & & .166 AnnaMaria Mouza increase for each group..6.5.. planning_hor=5.05 and to repeat all calculations./ DATA Total_Deficit. X. 0..9.
X(i). s=SUM(Y) Employees=SUM(Members) rate=(y*100)/s/100. Y(i).4X.F10. (2).groups IF(sortR(j) == s) scores(i)= j ENDDO . X=rate*Members s=SUM(X) Rate=(X*100.2. sortR=R . ENDDO ! Compute composite index Rsmall=R/100. (6) for the ﬁnal run.SRT$REAL4) DO i=1. . WRITE(out.F10. s=SUM(X) .groups s=R(i) DO j=1. newrate(i) ENDDO END 7. Also. (13) has been slightly reformed as follows: − − − − − − min z = (10P1 d1 + 9P1 d2 + 8P1 d3 + 7P1 d4 + 6P1 d5 + 5P1 d6 − − − − − − + 4P1 d7 + 3P1 d8 + 2P1 d9 + P1 d10 ) + (P2 d37 + P2 d38 − − + + P2 d39 + 3P2 d42 + 2P2 d44 ) + P3 31 i=22 di− + P4 36 i=32 di− − − − − − + (P5 d40 + P5 d41 ) + (P6 d21 + 2P6 d11 + 2P6 d12 − − − − − − + 3P6 d13 + 4P6 d14 + 5P6 d15 + 6P6 d16 + 7P6 d17 + 8P6 d18 − − − + 9P6 d19 + 10P6 d20 ) + P7 d43 . X=Deficit*rate ! Compute revised annual rates of salary increase Y=Salary_fX Logvector=(LOG(Y/Salary_in))/ planning_hor newrate=(EXP(logvector) 1.3.groups WRITE(out. Salary=Salary_f/members ! Perform ranking operation CALL SORTQQ(LOC(sortR).*)’ Composite index and the final rate of salary increase’ WRITE(out.’(1X. and in the corresponding restrictions seeing in Eq.’(/)’) Deficit=Total_Deficit*a . Final Results In the last column of Table 7.6.)/s/100.Health Service Management 167 WRITE(out.*) ’ Index Reduction Final Salary New rate’ DO i=1.F6.)*100. X=rate*scores s=SUM(X) rate=(X*100)/s/100. the objective in Eq. These new coefﬁcients are to be considered in Eq. . . . the revised coefﬁcients ci are stated.6X. X=rate*deficit/Employees s=SUM(X) rate=(x*100.F9.2X. . s=SUM(Rate) . Y=Rsmall*Salary .groups.)/s/100. . s=SUM(rate) .4)’) Rate(i)& &.
A decision support simulation model for the management of an elective surgery waiting system. in the sense that it is acceptable by all people involved without any violations of the existing rules. The solution obtained is operational in the sense that. the results provide a sound basis to achieve an optimal solution. it can be easily modiﬁed at a later stage to ﬁt many other types of clinics. 8. and the tradeoffs among the set of goals are achieved with a minimum sacriﬁce. the acceptable charge per patient. to formulate the corresponding goal programming model. etc.125.817. the health service in a convenient way. A and W Cooper (1961). With this in mind. Now. the medical specialty. 1 and 2). These achievements are heavily dependent on a successful budget planning. total expenses sum up to 1. Nevertheless. Everett.000 to 679. the desired level of gross proﬁts.168 AnnaMaria Mouza The solution obtained with the restated restrictions satisﬁes all requirements. Conclusions Many clinics like the one presented here beneﬁt the patients. it is difﬁcult to design a general model that can be applied to all types of clinics. The characteristics of a model of this type for a clinic are based upon many factors. It has to be under− lined. through the analysis of resource requirements. Health Care Management Science 5. JE (2002). however. The model shows that this sacriﬁce is negotiable between different sides. the location. References Charnes. once a budget planning model has been developed. so that the business manager can ﬁnally establish an aggregative budget planning with the best perspectives.857.38% (from 750. the goals and the relevant priorities are set in advance. which is a bit higher when compared to the corresponding percentage (∼6%) of total salary reduction. 89–95. it is implied that total gross proﬁts undergo a reduction of about 9.317. using the goal programming approach of aggregate budget planning for this particular health care unit. Management Models and Industrial Applications of Linear Programming (Vols.682. the size. the claimed rate of salary increases by the employees. To start with.2. Hence. since the realization of a fair wage policy can be obtained through using the composite index. introduced in this paper. such as the type of the clinic. that since the value of the deviational variable d43 is 70. I consider in this paper an orthopedic clinic and implemented a model designed for a ﬁveyear planning period.8). . New York: Wiley. However. and help attain consistency and continuity of treatment.
. Journal of Medical Systems 27(3). Mouza and AnnaMaria (2006). Mouza and AnnaMaria (2002). Ph. IVR and administrative operations in healthcare and hospitals. Computers and Operations Research 14. 259–270. Where efﬁciency saves lives: a linear programme for the optimal allocation of health care resources in developing countries. SI (1986). 68–71.Health Service Management 169 Flessa. 186–192. The setting of weights in linear goal programming. Mouza and AnnaMaria (1996). A hierarchical multiple criteria mathematical programming approach for scheduling surgery operations in large hospitals. Mouza and AnnaMaria (2003). An economic approach of the Greek health sector. 779–785. Modeling the hospital operation process. Gass. Journal of Healthcare Information Management 17(1). S (2000). Health Care Management Science 3. Journal of Operational Research Society 37. Theses. A note regarding the projections of some major health indicators. 249–267. A process for determining priorities and weights for large scale linear goal programming. SN and R Erol (2003). 184–199. 227–229. Ogulata. Greece: Aristotle University of Thessaloniki. Health Service Management Research 15. Quality Management in Health Care 15(3). Estimation of the total number of hospital admissions and bed requirements for 2011: the case for Greece. based on domestic data. Gass. SI (1987).D.
This page intentionally left blank .
Chapter 5 Modeling National Economies .
This page intentionally left blank .
Topic 1 Inﬂation Control in Central and Eastern European Countries Fabrizio Iacone and Renzo Orsi University of Bologna. We focused on Poland. we can see if a certain strategy made inﬂation control easier or more difﬁcult. we think it is important to study if and how the Eurosystem can successfully control inﬂation after the extension of the EMU: we then investigated the monetary policy transmission mechanism in the Central and Eastern European Countries (CEECs) to see how compatible it is with one of the current members of euroarea. both in the literature and among monetary institutions. and Slovenia because they are very different for the initial conditions from which they started the transition to functioning market economies and later the integration in the EU. for the relative size and the degree of openness to the international trade. on whether a direct inﬂation targeting or some kind of exchange rate commitment should be preferred: by comparing the mechanism of transmission of policy impulses in different regimes. these do not necessarily reﬂect the effective integration and convergence of the economies. Knowledge of the mechanism of transmission of monetary policy is also necessary to the CEECs to choose the monetary strategy for the period preceding the accession to the euroarea. Czech Republic. the accession of the new European partners to the euroarea is formally analyzed. Although. With this respect. Introduction The recent accession to the European Union (EU) of 10 new members ofﬁcially opened the agenda of their participation in the European Monetary Union (EMU) too. The discussion is open. Italy 1. and assess its cost by looking at the impact on the economic activity. keeping the Maastricht criteria as the main reference. for the policies implemented in due course and even for their historical 173 .
As far as the monetary policies implemented for inﬂation stabilization. and Accession Although Poland. Czech Republic. in fact. The ﬁrst reason for the difference is in the productive structure at the beginning of the transition. production was much more centralized and budget constraints much softer in Poland and Czech Republic. left the system without a nominal anchor. all the CEECs except Slovenia. but in reality Slovenia had to face the major shock due to the collapse of Yugoslavia and of the subsequent wars. Integration. also . the softer budget constraint. representative of the whole group of CEECs. Transition. The stronger centralization. A more precise ranking emerges when the size and the openness to international trade is considered. while the three Baltic countries experienced a much bigger shock. The initial price liberalization. Inﬂation Control over 1989–2004 2. to make things tougher for these two countries.1. 2. together. and of Czechoslovakia could have coeteris paribus caused a more turbulent and longer transition for Poland and Czech Republic. began their transition with a strong commitment to a ﬁxed exchange rate. and Slovenia shared the goals of implementing a functioning market economy and of acceding to the EU. Hungary implemented some timid reforms before 1989. and. For the euroarea. Local differences remain: for example. we took Germany as the reference country. and. they also had to suffer the breakup of the council for mutual economic assistance (CMEA). with quite independent ﬁrms and relatively hard budget constraints. Poland and Czech Republic represent an average position with respect to the general condition of the productive structure of the CEECs at the beginning of the transition. Yugoslavia established a quasimarket system well before 1989.174 Fabrizio Iacone and Renzo Orsi developments: we think that because of these differences they are. because the disintegration of the Soviet Union resulted in the loss of their largest trade partner. up of the CMEA. and the higher disruption caused by the break. with Poland being the largest country and Slovenia the smallest one. in which they were well integrated (Czech Republic also had to endure the additional trade disruption. which deprived the country of its biggest market and had other adverse effects including the loss of revenues and hard currency due to the fall of tourism. caused by the separation from Slovakia). the policies they implemented to reach them were rather different.
Slovenia followed a different approach to disinﬂation. In fact. the countries in transition had a chance to import a price structure similar to the one of their commercial partners. the domestic producers could not raise their prices too much without losing competitiveness with respect to the foreign ones. in order to force the local producers to face a harder budget constraint). This may have been partially due to the BalassaSamuelson effect: assuming that the marginal productivity and the wage in the sector of internationally traded goods. This yields a progressive appreciation of the real exchange rate and higher domestic inﬂation. then the productivity gap between the two sectors is compensated by a price increase in the less productive domestic sector. anyway the key characteristic of the period was the continuous intervention on the foreign exchange market in order to prevent an excessive real rate appreciation (contrary to the strategy of the Bundesbank and of the Eurosystem. later coupling it with an oscillation band which was gradually widened until it was ﬁnally abandoned in 2000. but the ﬁxed exchange rate arrangements were not sustainable for a long time: Poland switched to a regime of preannounced crawling peg as early as 1991. the fact that the outcome is actually reversed means that price stabilization rather than trade was the priority in the exchange rate commitment served in Poland and in Czechoslovakia. determined by the transfer of technology from the international trade partners and the more efﬁcient allocation of the resources within the economies. targeting the growth of a relatively narrow monetary aggregate (M1). Since mainstream optimal currency area literature prescribes a greater incentive to adopt a stable exchange rate to the countries more open to the international trade. the realignment of relative prices took the form of a sudden increase and a steep inﬂation followed: by opening to the international trade (and by introducing the proper legislation. leading to a strong real exchange rate appreciation. being the exchange rate ﬁxed. according to Capriolo and Lavrac (2003).Inﬂation Control in Central and Eastern European Countries 175 because of a natural price rigidity with respect to negative corrections. the control of M1 was also enforced with nonmarket instruments and procedures). compensated part of the pressure on the domestic producers. The faster growth of productivity. is set on the foreign market and that the same wage is transferred to the production of the nontraded goods. Czech Republic resorted to a free ﬂoat without explicit commitment as early as 1997 under the pressure of a speculative attack. and inﬂation actually dropped very quickly. its currency having progressively overevaluated in real terms over time. but there are certain differences remained with respect to the EU. Fixing the exchange rate may have contributed to price stabilization. Both the countries .
There can be little doubt that an exchange rate commitment is a risky policy: the integration of the ﬁnancial markets allows the mobilization of large amounts of funds. where two years of participation to the exchange rate mechanism (ERM) 2 is required. because the mere defence of the currency could require a substantial rise in the domestic interest rates. but in these last two countries. Whether. but a commitment to a ﬁxed parity is a much stronger policy. and no central bank can muster enough foreign exchange reserves to resist a sustained attack. a small country cannot ignore the effect of the exchange rate ﬂuctuations on the stability of ﬁnancial institutions and the economy as a whole. and macroeconomic situation of the country was not worse than it was in Germany. when even the French currency suffered some pressure. Once again. to completely abandon it may indeed seem to be a paradox. a more aggressive exchange rate strategy has been pursued since 2001. it is still the reference of monetary policy in the Baltic States. although the ﬁscal. The same lesson can indeed be derived by looking at the . and Estonia even withstood a moderate speculative attack rather than devaluating. stands the evidence of speculative attacks in the past: consider. the switch of targets was not complete: Hungary retained a ﬁxed exchange rate commitment. the break up of the ERM 1. for example. Exchange Rate and Direct Inﬂation Targeting The ﬁxed exchange rate played a key role in the stabilization phase. and Slovakia. macroeconomic stability and inﬂation control are helped or hindered by this depends on the credibility of the commitment itself. albeit informal. Against an exchange rate commitment. crawling peg.2. Slovenia went in the opposite direction: according to Capriolo and Lavraac. Czech Republic. The progressive weakening of the exchange rate commitment took place in Hungary and Slovakia too. Poland in 1999). apparently trying to exploit the credibility acquired with it during the ﬁrst phase of the transition. ﬁnancial. while Slovakia did not implement an explicit inﬂation targeting because of the high role that administrated prices still had in the consumer prices. 2. nowadays. and it is also present in Hungary. nor indeed may desire to do it.176 Fabrizio Iacone and Renzo Orsi replaced the monetary anchor by introducing a direct inﬂation target (Czech Republic starting in 1998. albeit with a very wide band. Since. the decision of Poland. Surely. with an active. Anyway this is not an unanimous attitude: the three Baltic States kept a ﬁxed exchange rate with an extremely tight band. for which the economy may be even more detrimental than the devaluation that it was trying to prevent. the exchange rate commitment is also part of the Maastricht criteria.
and this primarily requires the independence from any political interference. the credibility was seriously weakened because the ﬁscal authority had the opportunity to partially ﬁnance its expenditures through the central bank. may undermine the credibility of the central bank even when they are resisted. the CEECs were lagging behind with respect to their Western counterparts. Since the accession to the EU also required the complete adoption of the acquis communautaire. this would increase the information in the market. It is then important to ascertain if the exchange rate commitment is a risk worth taking. one had to be abandoned. the apparent paradox of several CEECs switching to a free ﬂoat and inﬂation targeting. . On the other hand. as experienced in the Czech Republic and in Poland. Central bank independence is even more a concern. and a devaluation took place. not only for the countries on the way to EMU that are still more than two years away from the formal discussion of their convergence according to the Maastricht criteria. political pressures. Poland and the Czech Republic faring better than Slovenia. as in June 2003. and to the extensive survey in Dvorsky (2000). when the deﬁnition is extended beyond the mere legal requirement: in most of the cases for example. A Summary of the Results of Previous Analyses Several empirical analyses on inﬂation control in CEECs appeared in the recent years. for the period considered and for the methodology adopted. making comparison rather difﬁcult. but also for other emerging economies in the world. a direct inﬂation targeting can only be successful if the monetary authority has earned itself a solid reputation. thereby including the elimination of capital controls.3. were proposed in the literature: according to Maliszewski (2000). Since the CEECs only acquired a formal central bank independence in the last few years. Several indices of central bank independence. is then a strategy that may protect their currencies from unrealistic parities and speculative attacks. it is exactly because the credibility of the monetary authorities is lower in the CEECs that Amato and Gerlach (2002) proposed to supplement the inﬂation targeting with a mild monetary commitment. Indeed. as a part of the process of adoption of the acquis communautaire. 2. because they may erode the consensus in the country. Iacone and Orsi (2004) surveyed many applied works: these varied for the country that was analyzed.Inﬂation Control in Central and Eastern European Countries 177 Hungarian experience: when the two objectives were in conﬂict. for the period preceding the adoption of the euro. their credibility may still be weak.
t PC yt = β0 + βy yt−1 − β AD where yt is a measure of the economic activity. large standard errors and nonsigniﬁcant estimates seem to be unavoidable consequences.e. π = 4 3 πt−j is the ¯ j=0 average inﬂation rate in the last four quarters (i. in return. A structural model can provide a more parsimonious approach: we followed Rudebush and Svensson (1999) and considered the model πt = α0 + απ1 πt−1 + απ2 πt−2 + απ3 πt−3 + απ4 πt−4 + αy yt−1 + επ. Considering the short length of the sampling size and the potential extreme instability of the parameters due to the transition. It seems nevertheless fair to conclude that the exchange rate is very important for the stabilization of inﬂation. The equations are referred to as Phillips Curve (PC) and aggregate demand (AD). and ¯ = 4 3 it−j is the average interest rate over the same i j=0 period. or the intermediate steps linking the monetary instrument to the target were omitted. The empirical evidence supporting the existence of a standard interest rate channel was much more controversial.4. πt is the annualized inﬂation 1 within the quarter. the inﬂation over the 1 last year). they need the estimation of many parameters. Reliability was also hampered by the fact that in many cases no stability analysis was provided. thus obscuring any potential evidence about the channel through which inﬂation may be controlled.. A Structural Model for Monetary Policy It is possible that the weak support provided by these analyses is at least partially due to the methodology adopted: VAR models have the advantage of requiring a minimal structural speciﬁcation. the results depending largely on the econometric approach and model adopted by the researcher and on the sampling period considered. respectively because the ﬁrst one can be given a structural interpretation as a PC while the second one may describe the transmission of monetary policy on the economic activity quite like in an AD function. . 2. it is a shortterm interest rate. but.178 Fabrizio Iacone and Renzo Orsi Interpretation of the results was often dubious because either the variables of interest were replaced by ﬁrst or second differences. nor any attempt was made to model the slow formation of a market economy despite this being the main feature of the period. an appreciation of the real exchange rate reducing the growth rate of prices. despite the potentially disruptive effects of the transition on the estimates.
the pressure on the demand enhances inﬂation. then considered the proposed AD equation sufﬁcient. Notice that even if the constraint 4 απj = 1 is applied. It is indeed. but we preferred to allow for the constant and test. though. We. especially considering that the accession of these countries to the EU requires them to adopt a monetary policy setting that is institutionally compatible with the one of the ECB. for its possible exclusion explicitly. and the monetary policy rule ¯t = πt + πt . because the dynamics of yt has to be taken into account too: consider for example απ1 = 1. Rudebush and Svensson (1999) allowed for an additional term βy2 yt−2 in the AD equation. i ¯ the PC equation then becomes βr ∈ y. thus. Rudebush and Svensson (2002) remarked that 4 απj = 1 can be j=1 expected as it corresponds to a vertical PC in the long run. inducing a stationary behavior. t+ ∈ π. β0 . In the case of a small open economy. as illustrated in the PC equation. and later of the ECB. they suggested that the given speciﬁcation may simply be considered as reduced form equations. as an alternative. in the second moment. agents extrapolate from the past inﬂation to formulate the current period price level. the contribution of the two different effects in general could not be distinguished. They also explicitly discussed the fact that the backward looking formulation of the PC and AD equations imply adaptive rather than rationale expectations. . possibly also due to the smaller dimension of our samples and the high collinearity with βy2 yt−2 . precisely by steering the output gap using the interest rate that the stabilization of inﬂation is ultimately possible. two other factors should be taken into account: the demand of domestic products generated by the trade partners and the effect of the international competition on local prices. but we found that.Inﬂation Control in Central and Eastern European Countries 179 With this speciﬁcation. The mechanism of transmission of monetary policy to inﬂation is indirect: an expansionary monetary policy takes place as a decrease in the real rate ¯t−1 − πt−1 . Rudebush and Svensson also speciﬁed the model without intercepts α0 . βy = 0. This model may well be applied to the transition economies too. which boosts the i ¯ economic activity in the AD equation. απ2 = απ3 = απ4 = 0. but they argued that this may well be the case especially in a phase of monetary and inﬂationary instability. t. also because one lag proved to be enough to have no serial correlation in the residuals. The instrument and the mechanism of transmission of monetary policy are then consistent with the operating procedures of both the FED and of the Bundesbank. having formulated for meancorrected data. the model j=1 still does not necessarily impose a unit root to inﬂation.
the assumption is that phases of high economic activity abroad result in higher demand of locally produced goods. thus. we then augmented the model by introducing the level of the economic activity in the international partners wt and the percent real exchange rate depreciation (qt −qt−1 ). forcing the domestic producers to lower their prices to keep up with the external competitors. while a too high level of prices (in real terms) causes a shift of the domestic and foreign demand towards the external producers. we did . because the international competition imposes some price discipline to the local producers: for given foreign prices. results in higher cost of inputs and then feeds back in higher prices of output. In the original model. Svensson considered the real exchange rate rather than the percent depreciation: in our model speciﬁcation. The real exchange rate entered the PC equation.t AD where we actually considered relevant for the exchange rate the depreciation over the whole year. where qt = p∗ +et −pt t is the logarithm of the real exchange rate and pt . we also refer to Svensson (2000). where he explains how an increase in foreign prices in local currency. In some cases. p∗ and et are the logat rithm of the local and foreign level of prices and of the nominal exchange rate. respectively. either due to the move of the foreign price itself or to the exchange rate. we describe the economy with the two equations model πt = α0 + απ1 πt−1 + απ2 πt−2 + απ3 πt−3 + απ4 πt−4 + αy yt−1 − αq (qt−1 − qt−5 ) + επ. For the AD. and to avoid the risk of model instability in the very ﬁrst part of the sample. the data were used only to compute the output gap or as lags.180 Fabrizio Iacone and Renzo Orsi Following Svensson (2000). Both the variables are added to the AD equation. we kept the AD/PC structure as a general reference. and the real exchange rate is added to the PC equation as well. Moreover. but given that the speciﬁcation is somewhat arbitrary. a nominal exchange rate appreciation makes foreign goods cheaper in local currency. we preferred the latter because as we saw the real exchange rates of most of the CEECs appreciated since the beginning of the transition without resulting in a dramatic decline of the economic activity. while for given nominal exchange rate higher foreign prices give the domestic producers to set higher prices and stay in the market.t PC yt = β0 + βy yt−1 − βw wt−1 − βq (qt−1 − qt−5 ) + εy. Our sample starts from 1990. Maintaining the autoregressive structure of Rudebush and Svensson. we tried to adapt it to the economies considered.
Poland. the choice of the year appreciation of the real exchange rate was then also convenient to remove the seasonal effect. for the nominal interest rate we used a threemonths interbank rate or the Treasury bills rate of the same maturity. constitutes by far the biggest trade partner for the CEECs. As a measure of the economic activity we took the output gap. There is a strong seasonal component in the quarterly inﬂation.Inﬂation Control in Central and Eastern European Countries 181 not ﬁnd adequate data so the sample period was even shorter. . we used quarterly averages. and the seasonally adjusted industrial production to derive a measure of economic activity (when only the nonseasonally adjusted series was available. deﬁned the difference between the observed and the potential output computed using the Hodrik Prescott (HP) ﬁlter on the quarterly average of the production index. Czech Republic. adjusting the frequency properly. We always considered Germany as the international reference. Data are monthly. More details on the data used are at the end of this paper. we still think that it is at least informative of the effective output gap. Plots of the inﬂation in the four countries are in Fig. 1 (ﬁgures are all collected at the end of Figure 1. even if in the ﬁrst part of the sample period some countries pegged the exchange rate to the US$: the EU in fact. due to its purely numerical deﬁnition. and Slovenia. Even if the HP gap is usually considered an inexact measure of the economic cycle. Yearly inﬂation. which is likely to shift the weights towards the fourth lag in the PC equation. Germany. We took the CPI to compute inﬂation and the real exchange rate. but in order to reduce the volatility. we run a preliminary step using the X12 ﬁlter in EViews to remove seasonality).
it Figure 2. . Quarterly inﬂation and output gap.182 Fabrizio Iacone and Renzo Orsi the work). Czech Republic. Figure 3. output gap. real exchange rate depreciation. though. Germany. Quarterly inﬂation. From these ﬁgures. we used a different measure in our empirical study because it is very common in the descriptive analysis in the literature. The inﬂation used for this graph was deﬁned as the growth rate of prices on the previous year: we presented it albeit. in Figs. Clearly. averaging the inﬂation in this way induces spurious autocorrelation in the data. so in the empirical analysis we measure inﬂation as the annualized growth rate of prices over each period. and Slovenia. we plot the same data and the yearly real exchange rate depreciation for Poland. respectively. 2. 3–5. Poland. We plotted this and the output gap for Germany in Fig.
Slovenia. as in Rudebush and Svensson (1999). is already possible to notice that. Quarterly inﬂation. Quarterly inﬂation. Figure 5. output gap. the dynamics of inﬂation is dictated by the output gap. output gap. Estimation was carried out using OLS equation by equation. in the three new EU members inﬂation is largely dictated by the real exchange rate dynamics. We did not introduce any other modiﬁcation to .Inﬂation Control in Central and Eastern European Countries 183 Figure 4. while for Germany. real exchange rate depreciation. real exchange rate depreciation. Czech Republic.
AC for the autocorrelation. with ρ0 = (απ1 + απ2 + απ3 + απ4 − 1) ρ1 = −(απ2 + απ3 + απ4 ). This is a reparametrization of the original equation. so. because we used its result to choose between the small or large sample version of the other tests. ρ3 = −απ4 Being simply a reparametrization. In all the tests. and H for the heteroskedasticity. In the tables in the Appendix. Since the small and large sample statistics are asymptotically equivalent in large samples. we used the conventional 5% size. Admittedly. ρ2 = −(απ3 + απ4 ). we moved to test speciﬁc restrictions on the parameters. autocorrelation (up to the fourth lag: Breusch . due to the asymptotic nature of the Jarque and Bera test. we presented both the Pvalues associated to the tests and a summary of the equation estimated when the restriction was imposed. and that.Godfrey LM) and heteroskedasticity (white without cross terms) in the residuals: we referred to these tests using N for the normality. j=1 more importantly. we then analyzed the stability of the parameters over time. so the OLS estimates are not different than the estimates that one would obtain in the original model in levels. especially. 4 απj = 1 corresponding to p0 = 0. Correct speciﬁcation was primarily checked by looking at the tests for normality (Jarque and Bera). the asymptotic interpretation of the results is not affected. and to the small sample lower order bias in the estimation of autoregressive coefﬁcients. and to estimate a more efﬁcient version of the model.184 Fabrizio Iacone and Renzo Orsi the AD equation. the multicollinearity due to the terms πt−1 to πt−4 is much less severe. indeed the same caveat applies to all the empirical analyzes present in . The advantage of this speciﬁcation is that long and short dynamics are explicitly separated. the reader may be cautious in the interpretation of the results. We considered the normality test ﬁrst. but we rewrote the PC as πt = α0 + ρ0 πt−1 + ρ1 πt−1 + ρ2 πt−2 + ρ3 πt−3 + ρy yt−1 + αq (qt−1 − qt−5 ) + επ. If the model appeared to be correctly speciﬁed.t PC (the term qt−1 − qt−5 does not appear in the equation for Germany). If we did not reject the hypothesis of stability either. the residual sum of squares that has to be minimized is the same. when only portions of the dataset are used to estimate parameters (like in the Chow stability test or in the estimates on subsamples): notice anyway that this is a problem that depends on the dimension of the sample.
we considered a break in 2000Q2. but we think that despite these drawbacks. inducing higher inﬂationary expectations. in this case we used the Forecast test. we improved the efﬁciency of the estimates by removing the variables whose coefﬁcients were estimated with a sign opposite to the one prescribed in the economic theory (we assumed that αy . we think it is particularly important to control the parameter stability over time. and the issue was convergence to the EU standards. because it may have been perceived as a weakening of the antiinﬂationary stance. and by imposing other restrictions (including 4 απj = 1): we tested these with a Wald j=1 statistic. we allowed a break in 1999Q1. Autocorrelation in the residuals is a serious misspeciﬁcation since it results in inconsistent estimates given that both the equations include a lagged dependent variable. when the exchange rate commitment was formally abandoned. if the subsample 1998–2004 only is used). Yet. albeit an approximate one. Yet. because of the opening of the negotiation to access the EU. the estimates and the tests do still provide some information on the structure that we intend to analyze. this is just a broad indication: one could also argue that by that time the negotiation opened the transition to a market economy was already nearly complete. when the control of monetary policy was transferred from the Bundesbank to the ECB. because estimation is simply inefﬁcient (compared to the GLS estimator) but not inconsistent. We tested the presence of the break with a standard Chow test. βr are not negative). and referred to these tests as W in the tables. . Heteroskedasticity is a minor problem. We also considered countryspeciﬁc breakpoints in the PC equations to analyze the effect on the inﬂation dynamics of monetary regime changes: for Germany. for Slovenia. since the standard errors indicated by the regression are not correct. unless the sample after the break was very little. that period may still be informative about a potential break even if it was not located exactly there. αq . The reader may then want to consider the estimates and the statistics computed on the estimated parameters more as general indications in small samples (especially.Inﬂation Control in Central and Eastern European Countries 185 the literature. and possibly to model it when apparent. the change is in 2001Q1. for Poland. but also considered as a particular breakpoint in 1998. βq . βy . Clearly. Due to the peculiarity of the transition. If the model was correctly speciﬁed and stable. βw . which is exactly designed to test for breaks when only a little data are located after the break. evidence of heteroskedasticity requires at least a different estimation of the variancecovariance matrix of the estimates. We used as preliminary diagnostic the CUSUM squared test. when the approach towards the real exchange rate became effective.
the CUSUMsq hit the boundary in the period 1993–1996. and 1993Q1–2004Q1 for the PC. is already taken into account in the standard error.005) + 0. Germany As a benchmark case. although may be only on the sign.00163) πt = − 1. it is an interesting reference because Germany is the biggest country in the euroarea: if the monetary transmission in the CEECs works in a very different way. Finally. under normality.18) over the period 1991Q2–2004Q1 for the AD.10) (0.10) (12. leaving many of the problems related to that particular transition out of the discussion. a test statistic rejects the null hypothesis despite the large variability due to the small sample. a certain caution should still be exerted in the interpretation of the results. 3. The Empirical Evidence 3. this time span is roughly the same one covered by the data of the CEECs in most of the applied analyzes. making the results comparable in this sense.0038yt−1 (0. We called this a “benchmark” model because it is widely assumed that this transmission mechanism for monetary policy was at work in Germany in the years we are interested in.011 yt = (0. But. we think that even in this case our analysis is at least able to get a general. If.02 πt−1 − 0. then we would still regard the estimate as informative. their integration in the euro may adversely affect the dynamics of inﬂation.71 πt−3 + 17. only data after the German reuniﬁcation were considered.769yt−1 − 0.12) (0.084) (0. for example. According to the diagnostic tests. we ﬁrst estimated the model for Germany over the period 1991Q1–2004Q1. the AD equation was correctly speciﬁed and stable over the whole sample. broad picture of the situation: the lack of precision that may be attributed to the small dimension of the sample.86 πt−2 − 0. The estimated model for Germany was ˆ 0. The choice of the sample is twofold: ﬁrst.1. but we found no evidence of break with the Chow .186 Fabrizio Iacone and Renzo Orsi Of course. second.62yt−2 (0. given the small dimension of the samples and the potential instability of the models due to the transition. Furthermore. accounting for a large share of their international trade. Germany is geographically and economically close to the CEECs.
020 yt = (0. .67 πt−2 − 0. although not as much as those presented by Rudebush and Svensson for the United States: we conjecture that the lower precision of the estimates for Germany depended on the smaller number of observations available.12) (0. was not stable when the whole sample was taken into account. Considering the model as a whole. correctly speciﬁed model.001) πt = − 0. The estimated model was ˆ 0.88) over the period 1994Q1–2004Q1 for the AD.10) (7. H0 : {αy > 0} was between 5% and 10%.2. though was clearly restricted to the ﬁrst part of the sample. albeit even in this case the Pvalue of the test H0 : {αy = 0} vs. a phenomenon that was usually regarded as temporary and exceptional. We interpreted this as evidence of instability.467yt−1 − 0. The inﬂation dynamics. so we augmented the model with the real exchange rate depreciation with respect to Germany and with the German output gap. on the other hand. we found that both the equations suffered from residual autocorrelation. because when 1994 was taken as a starting point. The instability. suggesting a very high value of α4π .0027yt−1 (0. resulting in inconsistent estimates. we consider Poland (and the other CEECs later) as a country small enough to be affected by the international trade. Point estimates of ρ1 . we also found that lagged values of yt − 1 had a more signiﬁcant effect. It is interesting to observe that it is exactly the period of the recovery of the inﬂationary shock due to the German reuniﬁcation. anyway.008) + 0. or a strong seasonal component in inﬂation. and could indeed be removed dropping the years 1991 and 1992. the residual autocorrelation was largely removed and the other diagnostic tests were broadly compatible with a stable.46(qt−1 − qt−5 ) (0.64 πt−1 − 0. and ρ3 were all very close to −1. we ﬁnd these estimates satisfactory.142) (0.51 πt−3 + 21. Poland Opposite to Germany. so the interpretation of the other tests can only be justiﬁed asymptotically. ρ2 . the distribution of the residuals did not appear to be normal. and 1994Q1–2004Q1 for the PC.Inﬂation Control in Central and Eastern European Countries 187 test (further details about all the tests and a summary of the estimates are in the tables at the end). notice anyway. Diagnostic tests conﬁrmed that the equation estimated using data from 1993 onwards was correctly speciﬁed and stable.101) (0. Albeit our dataset included 1993 too. 3.
As far as the PC equation. rejecting it (Pvalue 0. where only the external conditions had a signiﬁcant effect on the inﬂationary dynamics (again we found that the effect of the omitted explanatory variable. had the sign predicted by the macroeconomic theory. Since at that point. The estimates in the more . We considered this second speciﬁcation. it is likely that by 2000. Since the target for monetary policy is deﬁned mainly because it is assumed that in that way the expected inﬂation can be favorably inﬂuenced. the Polish central bank acquired sufﬁcient credibility to move away from a commitment that would have exposed it to pressure from market speculation.65).046).05 (it was 0. because the residuals were not normally distributed. the Polish national bank abandoned the exchange rate commitment and left the direct inﬂation targeting to stand alone. but the realization of the t statistic was still rather below the critical value. We concluded performing a Forecast test for a break in 2000Q2 in the PC equation. concluding that the instability that may be associated to the transition to a market economy can be largely conﬁned to the period before 1994.188 Fabrizio Iacone and Renzo Orsi The variables referred to the trade partners were then not signiﬁcant in the AD equation: we found a certain effect at least for the real exchange rate depreciation. but in this case we can at least conclude that the change of target did not result in a worsening of the expectation of inﬂation. and an asymptotic interpretation of the outcome of the autocorrelation test for the AD equation was particularly dubious because the test statistic reached approximately. and with a sign consistent with the macroeconomic theory. but notice that the situation was reversed for the PC equation. yt−1 in this case. the limit critical value. the Pvalue being slightly lower than 0. Thus. in which the sample for the AD equation is from 1995 onwards. but it failed to be signiﬁcantly different than 0). albeit only mildly. it is fair to conclude that the change of monetary target did not result in a destabilization of the inﬂation dynamics. Although we summarized the diagnostic tests. we prefer not to give a structural interpretation of a backward looking formulation of the inﬂation equation. and for the PC from 1998 onwards. the Chow test did not identify any break in 1998Q1 but the CUSUMsq statistic did indeed signal a potential break in this year. we also estimated a more restrictive speciﬁcation. The absence of the two variables referring to the international trade could be interpreted as a sign that Poland is still large enough to be more sensible to the internal rather than to the international developments.
from the split of the former Czechoslovakia. . we conﬁdently conclude that the Polish monetary authority was able to control inﬂation during these years.09(q − q ) t−2 t−3 t−1 t t−1 t−5 (0. This is approximately also the point for which the Chow test is to be computed.23) over the period 1995Q1–2004Q1 for the AD. and it is also possible that the division of the country acted as a source of instability additional to the one generated to the transition. In any case. Data availability is then slightly reduced.009) (0. For the PC equation. 3. it had the correct sign and it was also marginally signiﬁcant with the 10% critical value and a onesided t test: a possible interpretation of this result could be that by 1998.00) + 19.19) (0.46) estimated over the sample 1998Q1–2004Q1: the lagged output gap still appeared in the determinants of the inﬂation dynamics. the evidence of instability of the macroeconomic structure was quite clear.154) π = − 0. very similar then to the estimates on the longer sample.78 πt−2 − 0. but we think that a longer dataset is needed to address this particular issue.01).021 + 0.57 π − 0.Inﬂation Control in Central and Eastern European Countries 189 restrictive model are yt = 0.59 πt−3 + 39. but we found that a break took place around 1999. according to the CUSUMsq statistic. Although it is not possible to distinguish between these two causes.0029yt−1 ˆ (0. we also present the alternative speciﬁcation πt = −0.20) (0. the PC equation evolved even closer to the standard textbook speciﬁcation.20) (29.20) (7. Czech Republic Czech Republic originated in 1993.454yt−1 − 0. but we think it mainly worked through the exchange rate management.001) (0.66 πt−1 − 0. The sample for the estimation of theAD equation started in 1994Q4.3. considering the whole model.13) (0.47 π + 16. being dubious.22(qt−1 − qt−5 ) (7. and 1998Q1–2004Q1 for the PC.24 πt−1 (0. in fact. the evidence of the closed economy transmission mechanism.13) (0. and we found strong evidence in favor of it (the Pvalue associated to it was 0.76 π − 0. Comparing the estimates in the two subsamples.
that was exactly when Czech central bank used inﬂation rather than exchange rate targeting.15) (0.31 πt−3 + 32. . and the real exchange rate depreciation. when inﬂation targeting was introduced.006) + 0.25) (0.116) (0. It is possible that instability affected the PC equation too.050) πt = −0.338(qt−1 − qt−5 ) (0.03yt−1 − 0. As in the case of Poland. despite having the same sign predicted by the economic theory. the PC equation estimated over the period 1994Q1–2004Q1 was πt = −0.001) (0.84 πt−1 − 0.76 πt−1 − 0. we then found no evidence that the switch of monetary policy target resulted in a worsening of the inﬂationary dynamics. if a change in the dynamics of the growth rate of prices took place at all. The estimated model was ˆ 0. notice that if a break did indeed take place. that signals the transmission of the monetary policy impulse from the monetary authority to the economy. Slovenia As we did for Czech Republic.21 πt−2 − 0.010 yt = (0. were only signiﬁcant and with the correct sign in the second part of the sample (the economic activity in Germany remained insigniﬁcant).09) (9. it was in favor of an easier control of it.18) (11. albeit in this case the evidence was less compelling.083(qt−1 − qt−5 ) (0.33 πt−2 − 0.620yt−1 − 0. but according to the Chow test a discontinuity took place in 1998.190 Fabrizio Iacone and Renzo Orsi the most relevant differences were in the fact that the coefﬁcient of the real rate.34 πt−3 + 25. then the consequence was that the real exchange rate appreciation had a stronger effect against inﬂation in the last part of the sample. Using the whole sample that started in 1994. The estimated coefﬁcient of the economic activity was largely insigniﬁcant. 3.186) Comparing these estimates. while if we relied on the CUSUMsq statistic rather than on the Chow one for the PC. while we found that the real exchange rate appreciation had a strongly signiﬁcant antiinﬂationary effect. the CUSUMsq statistic did not indicate the presence of any break.23) (0.17) (0. On the contrary.97) over the period 1998Q1–2004Q1 for both the AD and the PC.55(qt−1 − qt−5 ) (0.4. in the case of Slovenia too we only considered the part of the sample corresponding to the existence of an independent state.
3. for the policies they implemented since 1989. until recently.Inﬂation Control in Central and Eastern European Countries 191 The data available for the estimation started from 1994Q1 for the AD equation and in 1993Q2 for the PC. Notice. and for the size . in which they started the transition to the market economy. The estimated PC is qualitatively similar to the Polish and Czech one. whose estimated coefﬁcient even failed to have the sign prescribed in the economic theory: this supported the argument of Capriolo and Lavraac that.0226 + 0. Czech Republic. and Slovenia are small economies that in a few years reshaped their productive and distributive structures. opened to the international trade.234) πt = − 0.12) (0. evidence of model misspeciﬁcation when the ﬁrst year is included. that a break took place in the ﬁrst part of the sample. so in the ﬁnal speciﬁcation we excluded yt−1 from the explanatory variables altogether. the absence of the real interest rate. albeit. but we did not reject the hypothesis of stability.343wt−1 (0.14) (0. the interference of the central bank on the ﬁnancial markets made the interest rates not informative about the monetary stance. established a functioning market economy and joined the EU. with a longer sample and an even higher integration in the European economy.45 πt−3 + 52.60 πt−1 − 0.12) (15. We then removed 1993 from the dataset. we also tested (with a Forecast test) for a break in 2001. and estimated the model from 1994 onwards (sample period 1994Q1–2003Q3 for the AD. we acknowledge that the estimates in the AD equation are only signiﬁcant using a 10% threshold. obtaining ˆ yt = 0.66 πt−2 − 0. resulting in residual autocorrelation and inconsistent estimates: it is also fair to suspect. We are inclined to consider them anyway.60) The diagnostic tests conﬁrmed that both the equations were correctly speciﬁed and stable in the period considered. Despite these common features. with inﬂation only driven by the exchange rate dynamics but not by the ˆ output gap: indeed.5. βy even resulted as negative. 1994Q1– 2003Q4 for the PC). suspecting that a more clear link will emerge in the future. In order to see. Some Concluding Remarks Poland. if the more active exchange rate policy resulted in an easier inﬂation control and then in a larger βq .167) (0. they differ quite remarkably for the time and the condition. We found anyway. on the basis of the CUSUMsq statistic.64(qt−1 − qt−5 ) (0. on the other hand.
One has rather to conclude that the interest rate channel of transmission of monetary policy was weak or obstructed. they may still be lagging behind. the Eurosystem may control inﬂation in the CEECs by controlling it in the core area. Meanwhile. the characteristics that may be associated to the initial conditions were the main driving forces of the real exchange rate dynamics. . It is then fair to expect that both the similarities and the differences emerge.192 Fabrizio Iacone and Renzo Orsi and the relative importance of the trade with the rest of the EU. since a good development took place during the accession period. fared worse. some elements of instability remained. according to a study of Coricelli and Jasbec (2004). because we explicitly include the exchange rate in both the equations of the model. which had to face not only the decentralization and the breakup of CMEA but even the split of Czechoslovakia. as indeed explicitly argued for Slovenia. we observed that. The most relevant common feature is the presence of a certain instability in the ﬁrst part of the sample. The relation between the variables got closer to the standard textbook description of a market supply and demand structure in the last few years. Yet. conﬁrming the ﬁndings of Coricelli and Jasbec. Even taking 1994 as a starting value. and that too turned out to be very weak. In this respect. even in that part of the sample evidence of a transmission mechanism of monetary policy based on the interest rates was only present for Poland. it is fair to expect that participation to the EU will foster integration. because the exchange rate is clearly a valid instrument for that purpose: this means that in a widened eurozone. Although it is not surprising that the exchange rate has a prominent role in inﬂation control in small open economies. nonetheless. only acquired importance after a period of approximately ﬁve years. The instability of the estimated macroeconomic relations is also informative about the speed of the transition. Czech Republic. The variables that were more directly related to the economic notions of supply and demand. anyway. especially with respect to Czech Republic. We found that the macroeconomic model structure begun to show a stable pattern only from 1994 onward. and this is indeed the case. we found that in Slovenia the adverse effect of theYugoslavian civil war canceled the advantage due to the selfmanagement reform. then taking it into account (and holding it ﬁxed in the interpretation of the results) in the multiple regression framework. This also means that. although these economies were acknowledged as functioning market economies. To appreciate the importance of the transition. our ﬁndings cannot be explained simply by this argument. their presence should not hamper inﬂation control too much.
K Zukrowska. Exchange rate regimes and monetary policy strategies for accession countries. S (2000). G and V Lavrac (2003). more likely. Real exchange rate dynamics in transition economies. and this is reﬂected in the estimated models: Poland. . and should then have more to do with EU accession than with inﬂation targeting. Monetary and exchange rate policy in Slovenia. WS (2000). Central bank independence in transition economies. Iacone. Monetary and Exchange Rate Issues of the Eurozone Enlargement. R Orsi and V Lavrac (eds. Inﬂation targeting in emerging market and transition economies: lessons after a decade. We did not ﬁnd any evidence of it. We think that this conclusion is of particular interest to help choosing the monetary target for the period preceding the last two years before ﬁxing the exchange rate in the ERM2: inﬂation targeting should be preferred and ERM2 membership should only be limited to the two mandatory years. Dvorsky. References Amato.). We also think that this lesson can be extended to other emerging market economies. Coricelli. Measuring central bank independence in selected transition countries and the disinﬂation process. especially if strict capital controls are not in place. Bank of Finland. is the only country in which the international factors did not affect the AD. Boﬁt Discussion Paper 13. Warsaw: Warsaw School of Economics. and that inﬂation targeting. nor we found that a more aggressive exchange rate policy in Slovenia eased the costs of disinﬂation from 2001 onwards. Structural Change and Economic Dynamics 15. rather than explicit exchange rate commitment. One main implication that we perceive is that the exchange rate commitment is either irrelevant or. The instability that we found for Czech Republic seems to suggest an evolution of the macroeconomic structure towards a market economy. replaceable with inﬂation targeting as a nominal anchor. a pattern can be envisaged in terms of dimension and openness to international trade. Eastward Enlargement of the EuroZone Working Papers. Capriolo. 89–136. 1–35. 749–789. 17G. In Fiscal. European Economic Review 46. Maliszewski. Another way to appreciate the importance of the international openness is to look at the estimated effect of a 1% appreciation of the exchange rate: the impact on the inﬂation dynamics is clearly ranked according to the relative dimension of trade.Inﬂation Control in Central and Eastern European Countries 193 Although they are all small open economies. 1–14. should be preferred. JD and S Gerlach (2002). F and R Orsi (2004). 83–100. Berlin: Free University Berlin. Economics of Transition 8. Stability analysis is also important because it allows us to understand if the switch in Poland and in Czech Republic from exchange rate to inﬂation targeting affected the price setting formation. 781–790. Helsinki: Bank of Finland. F and B Jasbec (2004). Jean Monnet Centre of Excellence. pp.
John B Taylor (ed. Svensson. pp.24] [H: 0. European Economic Review 46. AD yt = 0. GD and LEO Svensson (2002). data.011 + 0.71 πt−3 + 17.20] [H: 0.S. For each test.769yt −1 − 0.038rt−1 ˆ [N: 0. GD and LEO Svensson (1999).86 πt−2 − 0. AC is the LM test of no autoregressive structure in the residuals and H is the test for the heteroskedasticity. 417–442. Policy rules for inﬂation targeting.).38] [AC: 0. we report the Pvalue: the small sample version of the test is taken. Journal of International Economics 50. LEO (2000). [sample: 91Q2–04Q1] PC [sample: 93Q1–04Q1] ˆ πt = −1.62yt−2 [N: 0.17] Germany. 203–246.44] . Openeconomy inﬂation targeting.194 Fabrizio Iacone and Renzo Orsi Rudebush. 155–183. In Monetary Policy Rules. Chicago: Chicago University Press.18] [W : 0.00] [AC: 0. AC. Eurosystem monetary targeting: lessons from U. Rudebush.02 πt−1 − 0. Chow. When the estimated equation is restricted. W indicates the test of those restrictions that reduce the corresponding equation as speciﬁed in general to the one speciﬁcally used. N indicates the normality test. unless normality is rejected. the test statistics N.10] [Chow: 0. H. Appendix A Summary of the Results Note to the Tables: for each equation. W are calculated in the unrestricted model. Table 1a.
67 πt−2 − 0. Poland.76 πt−2 − 0. AD [sample: 94Q1–04Q1] yt = 0.47 πt−3 + 16.00] [AC: 0.57 πt−1 − 0.54] [W: 0.29] [Chow: 0.06] [Chow: 2000Q2: 0.25] [H: 0.467yt−1 − 0.0027rt−1 ˆ [N: 0.Inﬂation Control in Central and Eastern European Countries 195 Table 2a.056] [H: 0.00] [AC: 0.047] [H: 0.021 + 0.57] PC [sample: 94Q1–04Q1] ˆ π = −0.64 πt−1 − 0.65] Table 2b.36] [W: 0.09(qt−1 − qt−5 ) [N: 0.07] [H: 0.57] [AC: 0.79] [W: 0. restricted sample.454yt−1 − 0.00] [AC: 0.39] .49] [Chow: 0. AD yt = 0. 1994 onwards.54] [Chow: 0.46(qt−1 − qt−5 ) [N: 0.020 + 0. Poland.046] [W : 0.51 πt−3 + 21.0029rt−1 ˆ [N: 0.64] [sample: 95Q1–04Q1] PC [sample: 94Q1–04Q1] ˆ πt = −0.
44] [AC: 0.66 πt−2 − 0.15] [Break 2001Q1:0.55(qt−1 − qt−5 ) [N: 0.30] [AC: 0.31] [N: 0.07] [H: 0.050) [N: 0.26] Table 4.54] [W : 0.34] [H: 0.76 πt−1 − 0. Czech Republic.196 Fabrizio Iacone and Renzo Orsi Table 3.35] [H: 0.234) Slovenia.15] PC [sample: 98Q1–04Q1] ˆ πt = −0.71] [AC: 0.64(qt−1 − qt−5 ) [N: 0.31 πt−3 + 32.45 πt−3 + 52.620yt−1 − 0.69] .65] [Chow: 0.006) (0.86] [Chow 0. AD [sample: 98Q1–04Q1] yt = 0.31] PC [sample: 94Q1–03Q4] ˆ πt = −0.60 πt−1 − 0. [sample: 94Q1–03Q3] [W: 0.33] [W : 0.83] [W : 0. restricted sample.001) (0.083(qt−1 − qt−5 ) ˆ (0.167) (0.226yt−1 + 0.010 + 0.03rt−1 + 0.37] [H: 0.343wt−1 ˆ (0.79] [AC: 0.21 πt−2 − 0.116) (0. AD yt = 0.
47 − 0.Inﬂation Control in Central and Eastern European Countries 197 Datastream codes..22yt−1 (0.53 πt−2 − 0.66 πt−1 − 0. US$) CPI Industrial production 3monthrate USXRUSD BDESXUSD. Germany.29πt−1 − 0. SJXRUSD.62] . CZXRUSD. SJI60B...00] [AC: 0. + 7. CZ160C.48) (0.18) (0. Diagnostic tests of some speciﬁcations that we dismissed: Table 5b. BDCONPRCF POCONPRCF CZCONPRCF SJCONPRCF BDINPRODG POIPTOT5H CZIPTOT. PC equation.H SJIPTO01H BDINTER3 PO160C.18) (13.50 πt−3 .AF.41) [N:0.06] [Chow: 0. PC [sample: 91Q3–04Q1] ˆ πt = 0.. Poland Slovenia POI.20] [H: 0. Czech Republic Germany Exchange rate (US$ per ∈) Exchange rate (vs.18) (0. whole dataset.17) (0.
203) (0.42 − 0.29) (0.198 Fabrizio Iacone and Renzo Orsi Table 5c.166) (0.20] [H: 0.509yt−1 − 0.08) (0.08(qt−1 − qt−5 ) (0.0025rt−1 − 0.12) (0.07] [Chow: 0.02 + 0.06] PC [sample: 93Q2–04Q1] ˆ πt = − 0.42) (9.00] [AC: 0.10) (27.77 πt−1 − 0. whole dataset.130wt−1 − 0.009) (0.67 πt−3 .11) [N: 0.25) − 0.001) (0.53] [Chow: 0. + 22.045) [N: 0.78 πt−2 (1. AD [sample: 93Q2–04Q1] yt = 0.81] .55yt−1 + 18.01] [H: 0. Poland.10πt−1 − 0.054(qt−1 + qt−5 ) ˆ (0.00] [AC: 0.
10) (24.01] [W :0.45 − 0.19) (0.007 + 0.36 πt−3 +22.17) [sample: 94Q1–04Q1] − 0.24πt−1 − 0.229) (0.063) [N: 0.00] [AC: 0.19) [N: 0.94] [H: 0.617yt−1 − 0.19] .68 πt−1 − 0.086 [Chow: 0.44] [AC: 0.4) (0. whole dataset.01] PC ˆ πt = 2.Inﬂation Control in Central and Eastern European Countries 199 Table 5d.10yt−1 +33.25] [H: 0.140) (0.21) (0.001) (0.06] [Chow: 0. AD [sample: 93Q4–04Q1] yt = 0.28 πt−2 (1. Czech Republic.22(qt−1 − qt−5 ) (0.186wt−1 + 0.0001rt−1 − 0.047(qt−1 + qt−5 ) ˆ (0.005) (0.
200 Fabrizio Iacone and Renzo Orsi
Table 5e.
Slovenia, PC equation, whole dataset. [sample: 93Q2–03Q4]
PC ˆ πt = 4.71 − 0.50πt−1 − 0.03 πt−1 − 0.40 πt−2
(1.41) (0.19) (0.11) (0.11) (0.07) (26.70) (17.73)
− 0.10 πt−3 . + 2.45yt−1 + 48.28(qt−1 − qt−5 ) [N: 0.84] [AC: 0.03] [H: 0.02] [Chow: 0.71]
Topic 2
Credit and Income: CoIntegration Dynamics of the US Economy
Athanasios Athanasenas Institute of Technology and Education, Greece
1. Introduction In this study, I seek to identify the existence of a signiﬁcant statistical relationship between credit (as commercial bank lending) and income (GDP), for the US postwar economy, in terms of a contemporary cointegration methodology. The main empirical ﬁnding regarding the hypothesis that causality, in the long run, is directed from credit to income, agrees with the “creditview”. This can be veriﬁed by the the postwar US data. In this chapter, I have identiﬁed a dynamic causality effect in the short run, from income changes to credit ones. To obtain the results cited here, I have applied a cointegration analysis. In addition to considering the formulated VAR, I place particular emphasis upon the stability analysis of the estimated equivalent ﬁrstorder dynamic system. Finally, dynamic forecasts from the corresponding error correction variance (ECVAR) are obtained, in order to verify the validity of the cointegration vector used. This study have four sections. Section 2 reviews, in a brief note, the literature on credit and income growth of the postwar US economy. Section 3 deals with methodological issues concerning the contemporary cointegration analysis and the data used. Section 4 presents the econometric analysis and the empirical results; while Section 5 concludes the creditincome causality issue, placing emphasis on the stability analysis of the obtained ﬁrstorder dynamic system and the forecasts from the corresponding ECVAR.
201
202 Athanasios Athanasenas
2. The Credit Issue Researched: A Brief Note on the US Economy There is an evidence that money tends to “lead” income in a historical sense (see Friedman and Schwartz (1963) and Friedman (1961; 1964). Where the mainstream monetarists through the “quantity theory” explain the empirical observations as the causal relationship running from money to income. Since the seminal works of Sims, empirical validation of the relationship between money and income ﬂuctuations has been established (Sims, 1972; 1980a). Since the publication of the works of Goldsmith (1969), Mckinnon (1973), and Shaw (1973), the debate has been going on about the role of ﬁnancial intermediaries and bank credit in promoting the longrun growth. On the basis of historical data, Friedman and Schwartz (1963) argue that major depressions of the US economy have been caused by autonomous movements in money stock.1 Sims (1980a) concludes that on one hand, money stock emerges as causally prior accounting for a substantial fraction of income variance during the interwar and postwar periods of the business cycles. On the other hand, old skepticism remains on the direction of causality between money and income. It is well known that traditional monetarists are consistent with the oldclassic view that only “money matters”, so that even when tightening of bank credits does occur during recessions, they see these as part of the endogenous ﬁnancial system, rather than exogenous events, which induce recessions. As Brunner and Meltzer (1988) argue, banking crises are endogenous ﬁnancial forces, which directly affect business cycles conditional upon the monetary propagation mechanism.2 In contrast, the “newcredit” viewers stress further the importance of credit restrictions, while accepting the fundamental inefﬁciencies of the monetary policy.3 Bernanke and Blinder (1988), in their seminal article, conclude that credit demand is becoming relatively more stable than money
1 Ibid. 2 Note that monetarists seem to stress mainly, the complementarity of credit and money
channels of transmission, the shortrun stability of the money multiplier, and the longrun neutrality of money. 3 The “newcredit” view combines the “oldcredit” view of the money to spend perspective, with the money to hold perspective of the money view. The “oldcredit” view focused on decisions to create and spend money, on the expansion effects of bank lending, and emphasized the issue of the nonneutrality of credit money in the longrun (see also, Trautwein, 2000).
Credit and Income 203
demand since 1979 and throughout the 1980s, where, money demand shocks became much more important relative to credit demand shocks in the 1980s, supported by their estimation of greater variance of money. Evidently, one of the most challenging issues of the applied monetary theory is the existence and importance of the credit channel in the monetary transmission mechanism. Bernanke (1988) emphasizes that according to the proponents of the credit view, bank credit (loans) deserve special treatment due to the qualitative difference between a borrower going to the bank for a loan and a borrower raising funds by going to the ﬁnancial markets and issuing stock or ﬂoating bonds.4 Kashyap and Stein (1994) leave no doubt while supporting Bernanke’s (1983), and Bernanke’s and James’s (1991) examination of the Great Depression in the United States, that the conventional explanation for the depth and persistence of the Depression is one of the strongest pieces of evidence supporting the view that shifts in loan supply can be quite important. The supporting evidence stands strong, considering also that the decline of intensity as seen in recessions are partially due to a cutoff in bank lending (Kashyap et al., 1994).5 King and Levine (1993) stress that ﬁnancial indicators, such as the importance of the banks relative to the central bank, the percentage of credit allocated to private ﬁrms and the ratio of credit issued to private ﬁrms to GDP, are strongly related with growth. For the relevant role of the monetary process and the ﬁnancial sector, see, in particular, (Angeloni et al., 2003; Bernanke and Blinder, 1992; Murdock and Stiglitz, 1993; Stock and Watson, 1989; Swanson, 1998). More recently, Kashyap and Stein (2000) proved that the impact of monetary policy on lending behavior is stronger for the banks with less liquid balance sheets, where liquidity is measured by the ratio of securities
4 Stiglitz (1992) stresses the fact that the distinctive nature of the mechanisms by which
credit is allocated plays a central role in the allocation process. He identiﬁes several crucial reasons that banks are likely to reduce lending activity (p. 293), as the economy goes into recession, where banks’ unwillingness and the inability to make loans obviates the effect of monetary policy. Bernanke and Blinder (1992) ﬁnd that the transmission of monetary policy works through bank loans, as well as through bank deposits. Tight monetary policy reduces deposits and over time banks respond to tightened money by terminating old loans and refusing to make new ones. Reduction of bank loans for credit can depress the economy. 5 Both Bernanke and Blinder (1992) and Gertler and Gilchrist (1993) using innovations in the federal