Stochastic Dynamic Macroeconomics

:
Theory, Numerics and Empirical Evidence
Gang Gong

and Willi Semmler

October 2004

Tsinghua University, Bejing, China. Email: ggong@em.tsinghua.edu.cn

Center for Empirical Macroeconomics, Bielefeld, and New School University, New York.
Contents
List of Figures iv
List of Tables vi
Preface 1
Introduction and Overview 2
I Solution and Estimation of Stochastic Dynamic
Models 11
1 Solution Methods of Stochastic Dynamic Models 12
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 The Standard Recursive Method . . . . . . . . . . . . . . . . . 13
1.3 The First-Order Conditions . . . . . . . . . . . . . . . . . . . 15
1.4 Approximation and Solution Algorithms . . . . . . . . . . . . 17
1.5 An Algorithm for the Linear-Quadratic Approximation . . . . 23
1.6 A Dynamic Programming Algorithm . . . . . . . . . . . . . . 25
1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.8 Appendix I: Proof of Proposition 1 . . . . . . . . . . . . . . . 28
1.9 Appendix II: An Algorithm for the LQ-Approximation . . . . 29
2 Solving a Prototype Stochastic Dynamic Model 33
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 The Ramsey Problem . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 The First-Order Conditions and Approximate Solutions . . . . 35
2.4 Solving the Ramsey Problem with Different Approximations . 39
2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.6 Appendix I: The Proof of Proposition 2 and 3 . . . . . . . . . 48
2.7 Appendix II: Dynamic Programming for the Stochastic Version 50
i
CONTENTS ii
3 The Estimation and Evaluation of the Stochastic Dynamic
Model 52
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3 The Estimation Methods . . . . . . . . . . . . . . . . . . . . . 55
3.4 The Estimation Strategy . . . . . . . . . . . . . . . . . . . . . 57
3.5 A Global Optimization Algorithm: The Simulated Annealing 58
3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.7 Appendix: A Sketch of the Computer Program for Estimation 60
II The Standard Stochastic Dynamic Optimization
Model 63
4 Real Business Cycles: Theory and the Solutions 64
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.2 The Microfoundation . . . . . . . . . . . . . . . . . . . . . . . 65
4.3 The Standard RBC Model . . . . . . . . . . . . . . . . . . . . 69
4.4 Solving Standard Model with Standard Parameters . . . . . . 74
4.5 The Generalized RBC Model . . . . . . . . . . . . . . . . . . . 76
4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.7 Appendix: The Proof of Proposition 4 . . . . . . . . . . . . . 80
5 The Empirics of the Standard Real Business Cycle Model 82
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.2 Estimation with Simulated Data . . . . . . . . . . . . . . . . . 82
5.3 Estimation with Actual Data . . . . . . . . . . . . . . . . . . 86
5.4 Calibration and Matching to U. S. Time-Series Data . . . . . 89
5.5 The Issue of the Solow Residual . . . . . . . . . . . . . . . . . 93
5.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6 Asset Market Implications of Real Business Cycles 101
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 The Standard Model and Its Asset Pricing Implications . . . . 103
6.3 The Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.4 The Estimation Results . . . . . . . . . . . . . . . . . . . . . . 110
6.5 The Evaluation of Predicted and Sample Moments . . . . . . . 112
6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
CONTENTS iii
III Beyond the Standard Model — Model Variants
with Keynesian Features 116
7 Multiple Equilibria and History Dependence 117
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.2 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.3 The Existence of Multiple Steady States . . . . . . . . . . . . 121
7.4 The Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.6 Appendix: The Proof of Propositions 5 and 6 . . . . . . . . . 128
8 Business Cycles with Nonclearing Labor Market 131
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2 An Economy with Nonclearing Labor Market . . . . . . . . . 135
8.3 Estimation and Calibration for U. S. Economy . . . . . . . . . 142
8.4 Estimation and Calibration for the German Economy . . . . . 151
8.5 Differences in Labor Market Institutions . . . . . . . . . . . . 159
8.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.7 Appendix I: Wage Setting . . . . . . . . . . . . . . . . . . . . 164
9 Monopolistic Competition, Nonclearing Markets and Tech-
nology Shocks 171
9.1 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.2 Estimation and Calibration for U.S. Economy . . . . . . . . . 175
9.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
9.4 Appendix: Proof of the Proposition . . . . . . . . . . . . . . 184
10 Conclusions 186
List of Figures
2.1 The Fair-Taylor Solution in Comparison to the Exact Solution 40
2.2 The Log-linear Solution in Comparison to the Exact Solution . 42
2.3 The Linear-quadratic Solution in Comparison to the Exact
Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.4 Value Function obtained from the Linear-quadratic Solution . 45
2.5 Value Function . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.6 Path of Control . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.7 Approximated value function and final adaptive grid for our
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1 The Deterministic Solution to the Benchmark RBC Model for
the Standard Parameters . . . . . . . . . . . . . . . . . . . . . 75
4.2 The Stochastic Solution to the Benchmark RBC Model for the
Standard Parameters . . . . . . . . . . . . . . . . . . . . . . . 75
4.3 Value function for the general model . . . . . . . . . . . . . . 79
4.4 Paths of the Choice Variables C and N (depending on K) . . 79
5.1 The β - δ Surface of the Objective Function for ML Estimation 85
5.2 The θ −α Surface of the Objective Function for ML Estimation 85
5.3 Simulated and Observed Series (non detrended) . . . . . . . . 91
5.4 Simulated and Observed Series (non detrended) . . . . . . . . 92
5.5 The Solow Residual: standard (solid curve) and corrected
(dashed curve) . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.6 Sample and Predicted Moments with Innovation Given by
Corrected Solow Residual . . . . . . . . . . . . . . . . . . . . 99
6.1 Predicted and Actual Series: all variables HP detrended (ex-
cept for excess equity return) . . . . . . . . . . . . . . . . . . 113
6.2 The Second Moment Comparison:all variables detrended (ex-
cept excess equity return) . . . . . . . . . . . . . . . . . . . . 114
7.1 The Adjustment Cost Function . . . . . . . . . . . . . . . . . 122
iv
LIST OF FIGURES v
7.2 The Derivatives of the Adjustment Cost . . . . . . . . . . . . 122
7.3 Multiplicity of Equilibria: f(i) function . . . . . . . . . . . . . 124
7.4 The Welfare Performance of three Linear Decision Rules . . . 126
8.1 Simulated Economy versus Sample Economy: U.S. Case . . . . 150
8.2 Comparison of Macroeconomic Variables U. S. versus Germany 152
8.3 Comparison of Macroeconomic Variables: U. S. versus Ger-
many (data series are detrended by the HP-filter) . . . . . . . 153
8.4 Simulated Economy versus Sample Economy: German Case . 157
8.5 Comparison of demand and supply in the labor market . . . . 158
8.6 A Static Version of the Working of the Labor Market . . . . . 165
8.7 Welfare Comparison of Model II and III . . . . . . . . . . . . 169
9.1 Simulated Economy versus Sample Economy: U.S. Case . . . . 181
List of Tables
2.1 Parameterizing the Prototype Model . . . . . . . . . . . . . . 39
2.2 Number of nodes and errors for our Example . . . . . . . . . 51
4.1 Parameterizing the Standard RBC Model . . . . . . . . . . . . 74
4.2 Parameterizing the General Model . . . . . . . . . . . . . . . . 78
5.1 GMM and ML Estimation Using Simulated Data . . . . . . . 84
5.2 Estimation with Christiano’s Data Set . . . . . . . . . . . . . 88
5.3 Estimation with the NIPA Data Set . . . . . . . . . . . . . . . 88
5.4 Parameterizing the Standard RBC Model . . . . . . . . . . . . 89
5.5 Calibration of Real Business Cycle Model . . . . . . . . . . . . 90
5.6 F−Statistics for Testing Exogeneity of Solow Residual . . . . 95
5.7 The Cross-Correlation of Technology . . . . . . . . . . . . . . 98
6.1 Asset Market Facts and Real Variables . . . . . . . . . . . . . 106
6.2 Summary of Models . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3 Summary of Estimation Results . . . . . . . . . . . . . . . . . 110
6.4 Asset Pricing Implications . . . . . . . . . . . . . . . . . . . . 111
6.5 Matching the Sharpe-Ratio . . . . . . . . . . . . . . . . . . . . 111
7.1 The Parameters in the Logistic Function . . . . . . . . . . . . 121
7.2 The Standard Parameters of RBC Model . . . . . . . . . . . . 124
7.3 The Multiple Steady States . . . . . . . . . . . . . . . . . . . 125
8.1 Parameters Used for Calibration . . . . . . . . . . . . . . . . 145
8.2 Calibration of the Model Variants: U.S. Economy . . . . . . . 147
8.3 The Standard Deviations (U.S. versus Germany) . . . . . . . . 154
8.4 Parameters used for Calibration (German Economy) . . . . . . 155
8.5 Calibration of the Model Variants: German Economy . . . . . 156
9.1 Calibration of the Model Variants . . . . . . . . . . . . . . . . 179
9.2 The Correlation Coefficients of Temporary Shock in Technology.182
vi
Preface
This book intends to contribute to the study of alternative paradigms in
macroeconomics. As other recent approaches to dynamic macroeconomics,
we also build on intertemporal economic behavior of economic agents but
stress Keynesian features more than other recent literature in this area. In
general, stochastic dynamic macromodels are difficult to solve and to esti-
mate, in particular if intertemporal behavior of economic agents is involved.
Thus, beside addressing important macroeconomic issues in a dynamic frame-
work another major focus of this book is to discuss and apply solution and es-
timation methods to models with intertemporal behavior of economic agents.
The material of this book has been presented by the authors at several
universities. Chapters of the book have been presented as lectures at Biele-
feld University, Foscari University, Venice, University of Technology, Vienna,
University of Aix-en-Provence, Colombia University, New York, New School
University, New York, Bejing University, Tsinghua University, Bejing, Chi-
nese University of HongKong, City University of HongKong and European
Central Bank. Some chapters of the book have also been presented at the
annual conference of the American Economic Association, Society of Compu-
tational Economics, and Society of Nonlinear Dynamics and Econometrics.
We are grateful for comments by the participants of those conferences. We
are also grateful for discussions with Toichiro Asada, Jean-Paul Benassy, Pe-
ter Flaschel, Buz Brock, Lars Gr¨ une, Richard Day, Ray Fair, Stefan Mittnik,
James Ramsey, Malte Sieveking, Michael Woodford and colleagues of our
universities. We thank Uwe K¨oller for research assistance and Gaby Wind-
horst for editing and typing the manuscript. Financial support from the
Ministry of Education, Science and Technology is gratefully acknowledged.
1
Introduction and Overview
The dynamic general equilibrium (DGE) model, in particular its more pop-
ular version, the Real Business Cycle Model, has become a major paradigm
in macroeconomics. It has been applied in numerous fields of economics. Its
essential features are the assumptions of intertemporal optimizing behavior
of economic agents, competitive markets and price-mediated market clear-
ing through flexible wages and prices. In this type of stochastic dynamic
macromodeling only real shocks, such as technology shocks, monetary and
government spending shocks variation in tax rates or shifts in preferences
generate macro fluctuations.
Recently Keynesian features have been built into the dynamic general
equilibrium (DGE) model by preserving its characteristics such as intertem-
porally optimizing agents and market clearing, but introducing monopolistic
competition and sticky prices and wages into the model. In particular, in nu-
merous papers and in a recent book Woodford (2003) has worked out this new
paradigm in macroeconomics, which is now commonly called New Keynesian
macroeconomics. In contrast to the traditional Keynesian macromodels such
variants also presume dynamically optimizing agents and market clearing
1
,
but sluggish wage and price adjustments.
It is well known that the standard DGE model fails to replicate essential
product, labor market and asset market characteristics. In our book, differ-
ent from the DGE model, its competitive or monopolistic variants, we do
not presume clearing of all markets in all periods. As in the monopolistic
competition variant of the DGE model we permit nominal rigidities. Yet, by
stressing Keynesian features in a model with production and capital accu-
mulation, we demonstrate that even with dynamically optimizing agents not
all markets may be cleared.
1
It should be noted that the concept of market clearing in recent New Keynesian literature
is not unambiguous. We will discuss this issue in chapter 8.
2
3
Solution and Estimation Methods
Whereas models with Keynesian features are worked out and stressed in the
chapters of part III of the book, part I and II provide the ground work for
those later chapters. In part I and II of the book we build extensively on the
basics of stochastic dynamic macroeconomics.
Part I of the book can be regarded as the technical preparation for our
theoretical arguments developed in this volume. Here we provide a variety of
technical tools to solve and estimate stochastic dynamic optimization mod-
els, which is a prerequesit for a proper empirical assessment of the models
treated in our book. Solution methods are presented in chapters 1-2 whereas
estimation methods along with calibration, the current methods of empirical
assessment, are introduced in chapter 3. These methods are subsequently
applied in the remaining chapters of the book.
Solving stochastic dynamic optimization models has been an important
research topic in the last decade and many different methods have been pro-
posed. Usually, an exact and analytical solution of a dynamic decision prob-
lem is not attainable. Therefore one has to rely on an approximate solution,
which may also have to be computed by numerical methods. Recently, there
have been developed numerous methods to solve stochastic dynamic decision
problems. Among the well-known methods are the perturbation and projec-
tion methods (Judd (1998)), the parameterized expectations approach (den
Haan and Marcet (1990)) and the dynamic programming approach (Santos
and Vigo Aguiar (1998) and Gr¨ une and Semmler (2004a)). When an exact
and analytical solution to a dynamic optimization problem is not attainable
and one has to use numerical methods. A solution method with higher accu-
racy often requires more complicated procedures and extensive computation
time.
In this book, in order to allow for an empirical assessment of stochastic
dynamic models we focus on approximate solutions that are computed from
two types of first-order conditions: the Euler equation and the equation de-
rived from the Lagrangian. Given these two types of first-order conditions,
three types of approximation methods can be found in the literature: the
Fair-Taylor method, the log-linear approximation method and the linear-
quadratic approximation method. After a discussion on the variety of ap-
proximation methods, we introduce a method, which will be repeatedly used
in the subsequent chapters. The method, which has been written into a
GAUSS procedure, has the advantage of short computation time and easy
implementation without sacrificing too much accuracy. We will also compare
those methods with the dynamic programming approach.
Often the methods use a smooth approximation of first order conditions,
4
such as the Euler equation. Sometimes, as, for example, in the model of
chapter 7 smooth approximations are not useful if the value function is not
differentiable and thus is non-smooth. A method such as employed by Gr¨ une
and Semmler (2004a) can then be used.
There has been less progress made regarding the empirical assessment
and estimation of stochastic dynamic models. Given the wide application of
stochastic dynamic models expected in the future, we believe that the esti-
mation of such type of models will become an important research topic. The
discussion in chapters 3-6 can be regarded as an important step toward that
purpose. As we will find, our proposed estimation strategy requires to solve
the stochastic dynamic optimization model repeatedly, at various possible
structural parameters searched by a numerical algorithm within the parame-
ter space. This requires that the solution methods adopted in the estimation
strategy should be as little time consuming as possible while not losing too
much accuracy. After comparing different approximation methods, we find
that the proposed methods of solving stochastic dynamic optimization mod-
els, such as used in chapters 3 - 6 most useful. We also will explore the impact
of the use of different data sets on the calibration and estimation results.
RBC Model as a Benchmark
In the next part of the book, in part II, we set up a benchmark model, the
RBC model, for comparison, in terms of either theory or empirics.
The standard RBC model is a representative agent model, but it is con-
structed on the basis of neoclassical general equilibrium theory. It therefore
assumes that all markets (including product, capital and labor models) are
cleared in all periods regardless of whether the model refers to the short- or
the long-run. The imposition of market clearing requires that prices are set
at an equilibrium level. At the pure theoretical level, the existence of such
general equilibrium prices can be proved under certain assumption. Little,
however, has been told how the general equilibrium can be achieved. In an
economy in which both firms and households are price-takers, implicitly an
auctioneer is presumed to exist who adjusts the price towards some equilib-
rium. Thus, the way of how an equilibrium is brought about is essentially a
Walrasian tˆatonnement process.
Working with such a framework of competitive general equilibrium is el-
egant and perhaps a convenient starting point for economic analysis. It nev-
ertheless neglects many restrictions on the behavior of agents, the trading
process and the market clearing process, the implementation of technology
and the market structure, among many others. In part II of this volume,
5
we provide a thorough review of the standard RBC model, the representa-
tive stochastic dynamic model of competitive general equilibrium type. The
review starts with laying out microfoundation, and continues to discuss a va-
riety of empirical issues, such as the estimation of structural parameters, the
data construction, the matching with the empirical data, its asset market im-
plications and so on. The issues explored in this part of the book provide the
incentives to introduce Keynesian features into a stochastic dynamic model
as developed in Part III. Meanwhile, it also provides a reasonable ground
to judge new model variants by considering whether they can resolve some
puzzles as explored in part II of the book.
Open Ended Dynamics
One of the restrictions in the standard RBC model is that the firm does not
face any additional cost (a cost beyond the usual activities at the current
market prices) when it makes an adjustment on either price or quantity.
For example, changing the price may require the firm to pay a menu cost
and also, more importantly, a reputation cost. It is the cost, arising from
price and wage adjustments that has become an important focus of New
Keynesian research over the last decades.
2
However, adjustment cost may
also come from a change in quantity. In a production economy increasing
output requires the firm to hire new workers and add new capacity. In a
given period of time, a firm may find more and more difficulties to create
new additional capacity. This indicates that there will be an adjustment
cost in creating capacity (or capital stock via investment), and further such
adjustment cost may also be an increasing function of the size of investment.
In chapter 7, we will introduce adjustment costs into the benchmark RBC
model. This may bring about multiple equilibria toward which the economy
may move. The dynamics are open ended in the sense that it can move to low
level, or high level of economic activity.
3
Such an open ended dynamics is cer-
tainly one of the important feature of Keynesian economics. In recent times
such open ended dynamics have been found in a large number of dynamic
models with intertemporal optimization. Those models have been called in-
determinacy and multiple equilibria models. Theoretical models of this type
are studied in Benhabib and Farmer (1999) and Farmer (2001), and an em-
pirical assessment is given in Schmidt-Grohe (2001). Some of the models
2
Important papers in this reserach line are, for example, Calvo (1983) and Rotemberg
(1982). For a recent review, see Taylor (1999) and Woodford (2003, ch. 3).
3
Keynes (1936) discusses the possibility of such an open ended dynamics in chapter 5 of
his book.
6
are real models, RBC models, with increasing returns to scale and/or more
general preferences than power utility that generate indeterminacy. Local
indeterminacy and globally multiplicity of equilibria can arise here. Others
are monetary macro models, where consumers’ welfare is affected positively
by consumption and cash balances and negatively by the labor effort and
an inflation gap from some target rates. For certain substitution properties
between consumption and cash holdings those models admit unstable as well
as stable high level and low level steady states. There also can be indeter-
minacy in the sense that any initial condition in the neighborhood of one of
the steady-states is associated with a path toward, or away from, that steady
state, see Benhabib et al. (2001).
Overall, the indeterminacy and multiple equilibria models predict an open
ended dynamics, arising from sunspots, where the sunspot dynamics are fre-
quently modeled by versions with multiple steady state equilibria, where
there are also pure attractors (repellors), permitting any path in the vicin-
ity of the steady state equilibria to move back to (away from) the steady
state equilibrium. Although these are important variants of macrodynamic
models with optimizing behavior, as, however, recently has been shown
4
in-
determinacy is likely to occur only within a small set of initial conditions.
Yet, despite such unsolved problems the literature on open ended dynamics
has greatly enriched macrodynamic modeling.
Pursuing this line of research we introduce a simple model where one
does not need to refer to model variants with externalities and (increasing
returns to scale) and/or to more elaborate preferences to obtain such re-
sults. We show that due to the adjustment cost of capital we may obtain
non-uniqueness of steady state equilibria in an otherwise standard dynamic
optimization version. Multiple steady state equilibria, in turn, lead to thresh-
olds separating different domains of attraction of capital stock, consumption,
employment and welfare level. As our solution shows thresholds are impor-
tant as separation points below or above which it is advantages to move to
lower or higher levels of capital stock, consumption, employment and welfare.
Our model version thus can explain of how the economy becomes history de-
pendent and moves, after a shock or policy influences, to a low or high level
equilibria in employment and output.
Nonclearing Markets
A second important feature of Keynesian macroeconomics concerns the mod-
eling of the labor market. An important characteristic of the DGE model
4
See Beyn, Pampel and Semmler (2001) and Gr¨ une and Semmler (2004a).
7
is that it is a market clearing model. For the labor market the DGE model
predicts an excessive smoothness of labor effort in contrast to empirical data.
The low variation in the employment series is a well-known puzzle in the RBC
literature.
5
It is related to the specification of the labor market as a cleared
market. Though in its structural setting, see, for instance, Stockey et al.
(1989), the DGE model specifies both sides of a market, demand and supply,
the moments of the macro variables of the economy are, however, generated
by a one-sided force due to its assumption on wage and price flexibility and
thus equilibrium in all markets, including output, labor and capital markets.
The labor effort results only from the decision rule of the representative agent
to supply labor. In our view there should be no restriction for the other side
of the market, the demand, to have effects on the variation of labor effort.
Attempts have been made to introduce imperfect competition features
into the DGE model.
6
In those types of models, producers set the price
optimally according to their expected market demand curve. If one follows a
Calvo price setting scheme, there will be a gap between the optimal price and
the existing price. However, it is presumed that the market is still cleared
since the producer is assumed to supply the output according to what the
market demands for the existing price. This consideration also holds for the
labor market. Here the wage rate is set optimally by the household according
to the expected market demand curve for labor. Once the wage has been set,
it is assumed to be rigid (or adjusted slowly). Thus, if the expectation is not
fulfilled, there will be a gap again between the optimal wage and existing
wage. Yet in the New Keynesian models the market is still assumed to be
cleared since the household is assumed to supply labor whatever demand is
at the given wage rate.
7
In order to better fit the RBC model’s predictions with the labor market
data, search and matching theory has been employed
8
to model the labor
market in the context of an RBC model. Informational or institutional search
frictions may then explain equilibrium unemployment rates and its rise. Yet,
those models still have a hard time to explain the shift of unemployment rates
such as, for example, experienced in Europe since the 1980s, as equilibrium
unemployment rate.
9
5
A recent evaluation of this failure of the RBC model is given in Schmidt-Grohe (2001).
6
Rotemberg and Woodford (1995, 1999), King and Wollman (1999), Gali (2001) and
Woodford (2003) present a variety of models of monopolistic competition with price and
wage stickiness.
7
Yet, as we have mentioned above, this definition of market clearing is not unambiguous.
8
For further details, see ch. 8.
9
For an evaluation of the search and matching theory as well as the role of shocks to
explain the evolution of unemployment in Europe, see Ljungqvist and Sargent (2003)and
8
As concerns the labor market along Keynesian lines we pursue an ap-
proach that allows for a nonclearing labor market. In our view the decisions
with regard to price and quantities can be made separately, both subject to
optimal behavior. When the price has been set, and is sticky for a certain pe-
riod, the price is then given to the supplier when deciding on the quantities.
There is no reason why the firm cannot choose the optimal quantity rather
than what the market demands, especially when the optimum quantity is less
than the quantity demanded by the market. This consideration will allow for
nonclearing markets.
10
Our proposed new model helps to study labor market
problems by being based on adaptive optimization where households, after
a first round of optimization, have to reoptimize when facing constraints in
supplying labor in the market. On the other hand, firms may have con-
straints on the product markets. As we will show in chapters 8 and 9 such a
multiple stage optimization model will allow for larger volatility of the em-
ployment rates as compared to the standard RBC model, and provides, also
a framework to study the secular rise or fall of unemployment.
Technology and Demand Shocks
A further Keynesian feature of macromodels concerns the role of shocks. In
the standard DGE model technology shocks are the driving force of the busi-
ness cycles which is assumed to be measured by the Solow-residual. Since
the Solow residual is computed on the basis of observed output, capital and
employment, it is presumed that all factors are fully utilized. There are
several reasons to distrust the standard Solow residual as a measure of tech-
nology shock. First, Mankiw (1989) and Summers (1986) have argued that
such a measure often leads to excessive volatility in productivity and even
the possibility of technological regress, both of which seem to be empiri-
cally implausible. Second, it has been shown that the Solow residual can be
expressed by some exogenuous variables, for example demand shocks aris-
ing from military spending (Hall 1988) and changed monetary aggregates
(Evan 1992), which are unlikely to be related to factor productivity. Third,
the standard Solow residual can be contaminated if the cyclical variation in
factor utilization are significant.
Considering that the Solow-residual cannot be trusted as a measure of
Blanchard (2003)
10
There is indeed a long tradition of macroeconomic modeling with specification of the
nonclearing labor markets, see, for instance, Benassy (1995, 2002), Malinvaud (1994),
Danthine and Donaldson (1990, 1995) and Uhlig and Xu (1996). Although our approach
owes a substantial debt to disequilibrium models, we move beyond this type of literature.
9
technology shock, researchers have now developed different methods to mea-
sures technology shocks correctly. All these methods are focused on the
computation of factor utilization. There are basically three strategies. The
first strategy is to use an observed indicator to proxy for unobserved utiliza-
tion. A typical example is to employ electricity use as a proxy for capacity
utilization (see Burnside, Eichenbaum and Rebelo 1996). Another strategy
is to construct an economic model so that one could compute the factor uti-
lization from the observed variables (see Basu and Kimball 1997 and Basu,
Fernald and Kimball 1998). A third strategy uses a appropriate restriction in
a VAR estimate to identify a technology shock, see Gali (1999) and Francis
and Ramey (2001, 2003).
It is well known that one of the major celebrated arguments of real busi-
ness cycles theory is that technology shocks are pro-cyclical. A positive tech-
nology shock will increase output, consumption and employment. Yet this
result is obtained from the empirical evidence, in which the technology shock
is measured by the standard Solow-residual. As Gali (1999) and Francis and
Ramey (2001, 2003) we also find that if one uses the corrected Solow-residual,
the technology shock is negatively correlated with employment and therefore
the RBC model loses its major driving force, see chapters 5 and 9.
Puzzles to be Resolved
In order to sum up, we may say that the standard RBC model has left us
with major puzzles. The first type of puzzle is related to the asset market
and is often discussed under the heading of the equity premium puzzle. Ex-
tensive research has attempted to improve on this problem by elaborating
on more general preferences and technology shocks. Chapter 6 studies in
details the asset price implication of the RBC model. The second puzzle is,
as above mentioned, related to the labor market. The RBC model generally
predicts an excessive smoothness of labor effort in contrast to empirical data.
The model also implies an excessively high correlation between consumption
and employment while empirical data only indicates a week correlation.
11
Third, the RBC model predicts a significantly high positive correlation be-
tween technology and employment whereas empirical research demonstrates,
at least at business cycle frequency, a negative or almost zero correlation.
One might name it the technology puzzle. Whereas the first puzzle is stud-
ied in chapter 6 of the book, chapters 8-9 of part III of the book are mainly
concerned with the latter two puzzles.
11
This problem of excessive correlation has, to our knowledge, not sufficiently been studied
in the literature. It will be explored in Chapter 5 of this volume.
10
Finally, we want to note that the research along the line of Keynesian
micro-founded macroeconomics has been historically developed by two ap-
proaches: one is the tradition of non-clearing market (or disequilibrium anal-
ysis), and the other is the New Keynesian analysis of monopolistic competi-
tion and sticky (or sluggish) prices. These two approaches will be contrasted
in the last two chapters, Chapters 8 and 9. We will find that one can im-
prove on the labor market and technology puzzles once we combine these
two approaches. We want to argue that the two traditions can indeed be
complementary rather than exclusive, and therefore they can somewhat be
consolidated into a more complete system of price and quantity determina-
tion within the Keynesian tradition. The main new method we are using
here to reconcile the two traditions is a multiple stage optimization behav-
ior, adaptive optimization, where agents, reoptimize once they have perceived
and learned about market constraints. Thus, adaptive optimization permits
us to properly treat the market adjustment for nonclearing markets which,
we hope, allows us to make some progress to match better the model with
time series data.
Part I
Solution and Estimation of
Stochastic Dynamic Models
11
Chapter 1
Solution Methods of Stochastic
Dynamic Models
1.1 Introduction
The dynamic decision problem of an economic agent whose objective is to
maximize his or her utility over an infinite time horizon is often studied in
the context of a stochastic dynamic optimization model. To understand the
structure of this decision problem we describe it in terms of a recursive de-
cision problem of a dynamic programming approach. Thereafter, we discuss
some solution methods frequently employed to solve the dynamic decision
problem.
In most cases, an exact and analytical solution of the dynamic program-
ming decision is not attainable. Therefore one has to rely on an approximate
solution, which may also have to be computed by numerical methods. Re-
cently, there have been developed numerous methods to solve stochastic dy-
namic decision problems. Among the well-known methods are the perturba-
tion and projection methods (Judd (1996)), the parameterized expectations
approach (den Haan and Marcet (1990)) and the dynamic programming ap-
proach (Santos and Vigo Aguiar (1998) and Gr¨ une and Semmler (2004a)). In
this book, in order to allow for an empirical assessment of stochastic dynamic
models we focus on approximate solutions that are computed from two types
of first-order conditions: the Euler equation and the equation derived from
the Lagrangian. Given these two types of first-order conditions, three types
of approximation methods can be found in the literature: the Fair-Taylor
method, the log-linear approximation method and the linear-quadratic ap-
proximation method. Still, in most of the cases, an approximate solution
cannot be derived analytically, and therefore a numerical algorithm is called
12
13
for to facilitate the computation of the solution.
In this chapter, we discuss those various approximation methods and
then propose another numerical algorithm that can help us to compute ap-
proximate solutions. The algorithm is used to compute the solution path
obtained from the method of linear-quadratic approximation with the first-
order condition derived from the Lagrangian. While the algorithm takes full
advantage of an existing one (Chow 1993), it overcomes the limitations given
by the Chow (1993) method.
The remainder of this chapter is organized as follows. We start in Section
2 with the standard recursive method, which uses the value function for it-
eration. We will show that the standard recursive method may encounter
difficulties when being applied to compute a dynamic model. Section 3
establishes the two first-order conditions to which different approximation
methods can be applied. Section 4 briefly reviews the different approxima-
tion methods in the existing literature. Section 5 presents our new algorithm
for dynamic optimization. Appendix I provides the proof of the propositions
in the text. Finally, a GAUSS procedure that implements our suggested
algorithm is presented in Appendix II.
1.2 The Standard Recursive Method
We consider a representative agent whose objective is to find a control (or
decision) sequence {u
t
}

t=0
such that
max
{u}

t=0
E
0
_

t=0
β
t
U(x
t
, u
t
)
_
(1.1)
subject to
x
t+1
= F(x
t
, u
t
, z
t
). (1.2)
Above, x
t
is a vector of m state variables at period t; u
t
is a vector of n control
variables; z
t
is a vector of s exogenuous variables whose dynamics does not
depend on x
t
and u
t
; E
t
is the mathematical expectation conditional on the
information available at time t and β ∈ (0, 1) denotes the discount factor.
Let us first make several remarks regarding the formulation of the above
problem. First, this formulation assumes that the uncertainty of the model
only comes from the exogenuous z
t
. One popular assumption regarding the
dynamics of z
t
is to assume that z
t
follow an AR(1) process:
z
t+1
= Pz
t
+p + ǫ
t+1
(1.3)
14
where ǫ
t
is independently and identically distributed (i.i.d. ). Second, this
formulation is not restrictive to those structure models with more lags or
leads. It is well known that a model with finite lags or leads can be trans-
formed through the use of auxiliary variables into an equivalent model with
one lag or lead. Third, the initial condition (x
0
, z
0
) in this formulation is
assumed to be given.
The problem to solve a dynamic decision problem is to seek a time-
invariant policy function G mapping from the state and exogenuous (x, z)
into the control u. With such a policy function (or control equation), the
sequences of state {x
t
}

t=1
and control {u
t
}

t=0
can be generated by iterating
the control equation
u
t
= G(x
t
, z
t
) (1.4)
as well as the state equation (1.2), given the initial condition (x
0
, z
0
) and the
exogenuous sequence {z
t
}

t=1
generated by (1.3).
To find the policy function G by the recursive method, we first define a
value function V :
V (x
0
, z
0
) ≡ max
{ut}

t=0
E
0
_

t=0
β
t
U(x
t
, u
t
)
_
(1.5)
Expression (1.5) could be transformed to unveil its recursive structure. For
this purpose, we first rewrite (1.5) as follows:
V (x
0
, z
0
) = max
{ut}

t=0
_
U(x
0
, u
0
) +E
0
_

t=1
β
t
U(x
t
, u
t
)
__
.
= max
{ut}

t=0
_
U(x
0
, u
0
) +βE
0
_

t=0
β
t
U(x
t+1
, u
t+1
)
__
(1.6)
It is easy to find that the second term in (1.6) can be expressed as being
β times the value V as defined in (1.5) with the initial condition (x
1
, z
1
).
Therefore, we could rewrite (1.5) as
V (x
0
, z
0
) = max
{ut}

t=0
{U(x
0
, u
0
) + βE
0
[V (x
1
, z
1
)]} (1.7)
The formulation of equ. (1.7) represents a dynamic programming problem
which highlights the recursive structure of the decision problem. In every
period t, the planner faces the same decision problem: choosing the control
variable u
t
that maximizes the current return plus the discounted value of
the optimum plan from period t+1 onwards. Since the problem repeats itself
15
every period the time subscripts become irrelevant. We thus can write (1.7)
as
V (x, z) = max
u
{U(x, u) + βE [V (˜ x, ˜ z)]} (1.8)
where the tilde (∼) over x and z denotes the corresponding next period
values. Obviously, they are subject to (1.2) and (1.3). Equation (1.8) is
said to be the Bellman equation, named after Richard Bellman (1957). If we
know the function V, we then can solve u via the Bellman equation.
Unfortunately, all these considerations are based on the assumption that
we know the function V, which in reality we do not know in advance. The
typical method in this case is to construct a sequence of value functions by
iterating the following equation:
V
j+1
(x, z) = max
u
{U(x, u) + βE [V
j
(˜ x, ˜ z)]} (1.9)
In terms of an algorithm, the method can be described as follows:
• Step 1. Guess a differentiable and concave candidate value function,
V
j
.
• Step 2. Use the Bellman equation to find the optimum u and then
compute V
j+1
according (1.9).
• Step 3. If V
j+1
= V
j
, stop. Otherwise, update V
j
and go to step 1.
Under some regularity conditions regarding the function U and F, the
convergence of this algorithm is warranted by the contraction mapping the-
orem (Sargent 1987, Stockey et al. 1989). However, the difficulty of this
algorithm is that in each Step 2, we need to find the optimum u that maxi-
mize the right side of equ. (1.9). This task makes it difficult to write a closed
form algorithm for iterating the Bellman equation. Researchers are therefore
forced to seek different numerical approximation methods.
1.3 The First-Order Conditions
The last two decades have observed various methods of numerical approxi-
mation to solve the problem of dynamic optimization.
1
As above stated in
1
Kendrick (1981) can be regarded as a seminal work in this field. For a review up to
1990, see Taylor and Uhlig (1990). For the later development, see Chow (1997), Judd
(1999), Ljungqvist and Sargent (2000) and Marimon and Scott (1999). Recent methods
of numerically solving the above discrete time Bellman equation (1.9) can be found in
Santos and Vigo-Aguiar (1998) and Gr¨ une and Semmler (2004a).
16
this book we want to focus on the first-order conditions, that are used to
derive the decision sequence. One can find two approaches: one is to use the
Euler equation and the other the equation derived from the Lagrangian.
1.3.1 The Euler Equation
We start from the Bellman equation (1.8). The first-order condition for
maximizing the right side of the equation takes the form:
∂U(x, u)
∂u
+ βE
_
∂F
∂u
(x, u, z)
∂V
∂˜ x
(˜ x, ˜ z)
_
= 0 (1.10)
The objective here is to find ∂V/∂x. Assume V is differentiable and thus
from (1.8) it satisfies
∂V (x, z)
∂x
=
∂U(x, G(x, z))
∂x
+ βE
_
∂F
∂x
(x, G(x, z), z)
∂V
∂F
(˜ x, ˜ z)
_
(1.11)
This equation is often called the Benveniste-Scheinkman formula.
2
Assume
∂F/∂x = 0. The above formula becomes
∂V (x, z)
∂x
=
∂U(x, G(x, z))
∂x
(1.12)
Substituting this formula into (1.10) gives rise to the Euler equation:
∂U(x, u)
∂u
+ βE
_
∂F
∂u
(x, u, z)
∂U
∂˜ x
(˜ x, ˜ u)
_
= 0 (1.13)
where the tilde (∼) over u again denotes the next period value with respect
to u.
Note that to use the above Euler equation as the first-order condition for
deriving the decision sequence, one must require ∂F/∂x = 0. In economic
analysis, one often encounters models, after some transformation, in which x
does not appear in the transition law so that ∂F/∂x = 0 is satisfied. We will
show this technique in the next chapter using a prototype model as a practical
example. However, there are still models in which such transformation is not
feasible.
2
named after Benveniste and Scheinkman (1979).
17
1.3.2 Deriving the First-Order Condition from the La-
grangian
Suppose for the dynamic optimization problem as represented by (1.1) and
(1.2), we can define the Lagrangian L:
L = E
0

t=0
_
β
τ
U(x
t
, u
t
) −β
τ+1
λ

t+1
[x
t+1
−F(x
t
, u
t
, z
t
)]
_
where λ
t
, the Lagrangian multiplier, is a m× 1 vector. Setting the partial
derivatives of L to zero with respect to λ
t
, x
t
and u
t
will yield equation (1.2)
as well as
∂U(x
t
, u
t
)
∂x
t
+βE
t
_
∂F(x
t+1
, u
t+1
, z
t+1
)
∂x
t+1
λ
t+1
_
= λ
t
, (1.14)
∂U(x
t
, u
t
)
∂u
t
+E
t
λ
t+1
∂F(x
t
, u
t
, z
t
)
∂u
t
= 0, (1.15)
In comparison with the Euler equation, we find that there is an unobserv-
able variable λ
t
appearing in the system. Yet, using (1.14) and (1.15), one
does not have to transform the model into the setting that ∂F/∂x = 0. This
is an important advantage over the Euler equation. Also as we will see in the
next chapter, these two types of first-order conditions are equivalent when
we appropriately define λ
t
in terms of x
t
, u
t
and z
t
.
3
This further implies
that they can produce the same steady states when being evaluated at their
certainty equivalence forms.
1.4 Approximation and Solution Algorithms
1.4.1 The Gauss-Seidel Procedure and the Fair-Taylor
Method
The state (1.2), the exogenuous (1.3) and the first-order condition derived
either as Euler equation (1.13) or from the Lagrangian (1.14) - (1.15) form a
dynamic system from which the transition sequences {x
t+1
}

t=0
, {z
t+1
}

t=0
,
{u
t
}

t=0
and {λ
t
}

t=0
are implied given the initial condition (x
0
, z
0
) . Yet,
mostly such a system is highly nonlinear, and therefore the solution paths
usually are impossible to be computed directly. One popular approach as
suggested by Fair and Taylor (1983) is to use a numerical algorithm, called
3
See also Chow (1997).
18
Gauss-Seidel procedure. For the convenience of presentation, the following
discussion assumes only the Euler equation to be used.
Suppose the system can be be written as the following m + n equations:
f
1
(y
t
, y
t+1
, z
t
, ψ) = 0 (1.16)
f
2
(y
t
, y
t+1
, z
t
, ψ) = 0 (1.17)
.
.
.
f
m+n
(y
t
, y
t+1
, z
t
, ψ) = 0 (1.18)
Here y
t
is the vector of endogenuous variables with m+n dimensions, includ-
ing both state x
t
and control u
t
;
4
ψ is the vector of structural parameter. Also
note that in this formulation we left aside the expectation operator E. This
can be done by setting the corresponding disturbance term, if there is any,
to their expectation values (usually zero). Therefore the system is essentially
not different from the dynamic rational expectation model as considered by
Fair and Taylor (1983).
5
The system, as suggested, can be solved numerically
by an iterative technique, called Gauss Seidel procedure, to which we shall
now turn.
It is always possible to tranform the system (1.16) - (1.18) as follows:
y
1,t+1
= g
1
(y
t
, y
t+1
, z
t
, ψ) (1.19)
y
2,t+1
= g
2
(y
t
, y
t+1
, z
t
, ψ) (1.20)
.
.
.
y
m+n,t+1
= g
m+n
(y
t
, y
t+1
, z
t
, ψ) (1.21)
where, y
i,t+1
is the ith element in the vector y
t+1
, i = 1, 2, ..., m + n. Given
the initial condition y
0
= y

0
, and the sequence of exogenuous variable {z
t
}
T
t=0
,
with T to be the prescribed time horizon of our problem, the algorithm starts
by setting t = 0 and proceeds as follows:
• Step 1. Set an initial guess on y
t+1
. Call this guess y
(0)
t+1
. Compute y
t+1
according to (1.19) - (1.21) for the given y
(0)
t+1
along with y
t
. Denote
this new computed value y
(1)
t+1
.
• Step 2. If the distance between y
(1)
t+1
and y
(0)
t+1
is less than a prescribed
tolerance level, go to Step 3. Otherwise compute y
(2)
t+1
for the given y
(1)
t+1
.
This procedure will be repeated until the tolerance level is satisfied.
4
If we use (1.14) and (1.15) as the first-order condition, then there will be 2m+n equations
and y
t
should include x
t
, λ
t
and u
t
.
5
Our suggested model here is a more simplified version since we only take one lead. See
also a similar formulation in Juillard (1996) with one lag and one lead.
19
• Step 3. Update t by setting t = t + 1 and go to Step 1.
The algorithm will continue until t reaches T. This will produce a sequence
of endogenuous variable {y
t
}
T
t=0
, which include both decision {u
t
}
T
t=0
and
state {x
t
}
T
t=0
.
There is no guarantee that convergence can always be achieved for the
iteration in each period. If this is the case, a damping technique can usually
be employed to force convergence (see Fair 1984, chapter 7). The second
disadvantage of this method is the cost of computation. The procedure
requires the iteration and convergence for each period t, t = 1, 2, 3, ..., T.
This cost of computation makes it a difficult candidate solving the dynamic
optimization problem. The third and the most important problem regards
its accuracy of solution. Note that the procedure starts with the given initial
condition y
0
, and therefore the solution sequences {u
t
}
T
t=1
and {x
t
}
T
t=1
depend
virtually on the initial condition, which include not only the initial state x
0
but also the initial decisions u
0
. Yet the initial condition for the dynamic
decision problem is usually provided only by x
0
(see equation our discussion
in the last section). Considering that the weights of u
0
could be important
in the value of the objective function (1.1), there might be a problem in
accuracy. One possible way to deal with this problem is to start with different
initial u
0
. In the next chapter when we turn to a practical problem, we will
investigate these issues more thoroughly.
1.4.2 The Log-linear Approximation Method
Solving nonlinear dynamic optimization model with log-linear approximation
has been widely used and well documented. This has been proposed in
particular by King et al. (1988) and Campbell (1994) in the context of
Real Business Cycle models. In principle, this approximation method can
be applied to the first-order condition either in terms of the Euler equation
or derived from the Langrangean. Formally, let X
t
be the variables,
¯
X the
corresponding steady state. Then,
x
t
≡ ln X
t
−ln
¯
X
is regarded to be the log-deviation of X
t
. In particular, 100x
t
is the per-
centage of X
t
that it deviates from
¯
X. The general idea of this method is to
replace all the necessary equations in the model by approximations, which
are linear in the log-deviation form. Given the approximate log-linear sys-
tem, one then uses the method of undetermined coefficients to solve for the
decision rule, which is also in the form of log-linear deviations.
20
Uhlig (1999) provides a toolkit of such method for solving a general dy-
namic optimization model. The general procedure involves the following
steps:
• Step 1. Find the necessary equations characterizing the equilibrium
law of motion of the system. These necessary equations should include
the state equation (1.2), the exogenuous equation (1.3) and the first-
order condition derived either as Euler equation (1.13) or from the
Lagrangian (1.14) and (1.15).
• Step 2. Derive the steady state of the model. This requires first pa-
rameterizing the model and then evaluating the model at its certainty
equivalence form.
• Step 3. Log-linearize the necessary equations characterizing the equilib-
rium law of motion of the system. Uhlig (1999) suggests the following
building block for such log-inearization.
X
t

¯
Xe
xt
(1.22)
e
xt+ayt
≈ 1 + x
t
+ ay
t
(1.23)
x
t
y
t
≈ 0 (1.24)
• Step 4. Solve the log-linearized system for the decision rule (which is
also in log-linear form) with the method of undetermined coefficients.
In Chapter 3, we will provide a concrete example to apply the above
procedure and to solve a practical problem of dynamic optimization.
Solving a nonlinear dynamic optimization model with log-linear approx-
imation usually does not require a heavy computation in contrast to the
Fair-Taylor method. In some cases, the decision rule could be derived even
analytically. On the other hand, by assuming a log-linearized decision rule as
expressed in (1.4), the solution path does not require the initial condition of
u
0
and therefore it should be more accurate in comparison to the Fair-Taylor
method. However, the process of log-linearization and solving for the unde-
termined coefficients are not easy, and usually have to be accomplished by
hand. It is certainly desirable to have a numerical algorithm available that
can take over at least part of the analytical derivation process.
1.4.3 Linear-quadratic Approximation with Chow’s Al-
gorithm
Another important approximation method is the linear-quadratic approxi-
mation. Again in principle this method can be applied to the first-order con-
21
dition either in terms of the Euler equation or derived from the Lagrangian.
Chow (1993) was among the first who solves a dynamic optimization model
with linear-quadratic approximation applied to the first-order condition de-
rived from the Lagrangian. At the same time, he proposed a numerical
algorithm to facilitate the computation of the solution.
Chow’s method can be presented in both continuous and discrete time
form. Since the models in discrete time are more convenient for empirical
and econometric studies, we here only consider the discrete time version.
The numerical properties of this approximation method have further been
studied in Reiter (1997) and Kwan and Chow (1997).
Suppose the objective of a representative agent can again be written as
(1.1), but subject to
x
t+1
= F(x
t
, u
t
) +ǫ
t+1
(1.25)
We shall remark that the state equation here is slightly different from the
one as expressed by (1.2) in Section 2. Apparently, it is only a special case
of (1.2). Consequently, the Lagrangian L should be defined as
L = E
0

t=0
_
β
t
U(x
t
, u
t
) −β
t+1
λ

t+1
[x
t+1
−F(x
t
, u
t
) −ǫ
t+1
]
_
Setting the partial derivatives of L to zero with respect to λ
t
, x
t
and u
t
will yield equation (1.25) as well as
∂U(x
t
, u
t
)
∂x
t
+ β
∂F(x
t
, u
t
)
∂x
t
E
t
λ
t+1
= λ
t
, (1.26)
∂U(x
t
, u
t
)
∂u
t

∂F(x
t
, u
t
)
∂u
t
E
t
λ
t+1
= 0, (1.27)
The linear-quadratic approximation assumes the state equations to be linear
and the objective function to be quadratic. In other words,
∂U(x
t
, u
t
)
∂x
t
= K
1
x
t
+K
12
u
t
+ k
1
(1.28)
∂U(x
t
, u
t
)
∂u
t
= K
2
u
t
+ K
21
x
t
+ k
2
(1.29)
F(x
t
, u
t
) = Ax
t
+ Cu
t
+ b + ǫ
t+1
(1.30)
Given this linear-quadratic assumption, equation (1.26) and (1.27) can be
rewritten as
K
1
x
t
+ K
12
u
t
+ k
1
+ βA

E
t
λ
t+1
= λ
t
(1.31)
K
2
u
t
+ K
21
x
t
+ k
2
+βC

E
t
λ
t+1
= 0 (1.32)
22
Assume the transition laws of u
t
and λ
t
take the linear form:
u
t
= Gx
t
+ g (1.33)
λ
t+1
= Hx
t+1
+ h (1.34)
Chow (1993) proves that the coefficient matrices G and H and the vectors g
and h satisfy
G = −(K
2
+ βC

HC)
−1
(K
21
+ βC

HA) (1.35)
g = −(K
2
+ βC

HC)
−1
(k
2
+ βC

Hb + βC

h) (1.36)
H = K
1
+ K
12
G + βA

H(A +CG) (1.37)
h = (K
12
+ βA

HC)g + k
1
+ βA

(Hb + h) (1.38)
Generally it is impossible to find, the analytical solution to G, H, g and h is
impossible. Thus, an iterative procedure can designed as follows. First, set
the initial H and h. G and g can then be calculated by (1.35) and (1.36).
Given G and g as well as the initial H and h, the new H and h can be
calculated by (1.37) and (1.38). Using these new H and h, one calculates
the new G and g by (1.35) and (1.36) again. The process will continue until
convergence is achieved.
6
In comparison to the log-linear approximation, Chow’s method requires
less derivations, which must be accomplished by hand. Given the steady
state, the only derivation is to obtain the partial derivatives of U and F.
Yet even this can be computed with a written procedure in a major software
package.
7
Despite this significant advantage, Chow’s method has at least
three weeknesses.
First, Chow’s method can be a good approximation method only when the
state equation is linear or can be transformed into a linear one. Otherwise,
6
It should be noted that the algorithm suggested by Chow (1993) is much more compli-
cated. For any given x
t
, the approximation as represented by (1.30) - (1.32) first takes
place around (x
t
, u

), where u

is the initial guess on u
t
. Therefore, if u
t
calculated via
the decision rule (1.33), as the result of iterating (1.35) - (1.38), is different from u

,
the approximation will take place again. However, this time it will be around (x
t
, u
t
),
followed by iterating (1.35) - (1.38). The procedure will continue until convergence of
u
t
. Since the above algorithm is designed for any given x
t
, the resulting decision rule is
indeed nonlinear in x
t
. In this sense, the method, as pointed by Reiter (1997), is less con-
venient comparing to other approximation method. In a response to Reiter (1997), Kwan
and Chow (1997) propose a one-time linearization around the steady states. Therefore,
our above presentation follows the spirit of Kwan and Chow (1997) if we assume the
linearization takes place around the steady states.
7
For instance, in GAUSS, one could use the procedure GRADP for deriving the partial
derivatives.
23
the linearized first-order condition as expressed by (1.31) and (1.32) will not
be a good approximation to the non-approximated (1.26) and (1.27), since A

and C

are not good approximations to
∂F(xt,ut)
∂xt
and
∂F(xt,ut)
∂ut
.
8
Reiter (1996)
has used a concrete example to show this point. Gong (1997) points out the
same problem.
Second, the iteration with (1.35) - (1.38) may exhibit multiple solutions
since inserting (1.35) into (1.37) gives a quadratic matrix equation in H.
Indeed, one would expect that the number of solutions will increase with the
increase of the dimensions of state space.
9
Third, the assumed state equation (1.25) is only a special case of (1.2).
This will create some difficulty when being applied to the model with exo-
genuous variables. One possible way to circumvent this problem is to regard
the exogenuous variables, if there are any, as a part of state variables. This
actually has been done by Chow and Kwan (1998) when the method has
been applied to a practical problem. Yet this will increase the dimension of
state space and hence intensify the problem of multiple solutions.
1.5 An Algorithm for the Linear-Quadratic
Approximation
In this section, we shall present an algorithm for solving a dynamic optimiza-
tion model with the general formulation as expressed in (1.1) - (1.3). The
first-order condition used for this algorithm is derived from the Lagrangian.
This indicates that the method does not require the assumption ∂F/∂x = 0.
The approximation method we used here is the linear-quadratic approxima-
tion, and therefore the method does not require log-linear-approximation,
which, in many cases, needs to be accomplished by hand. The established
Proposition 1 below allows us to save further derivations when applying the
method of undetermined coefficients. Indeed, if we use an existing soft-
ware procedure to compute the partial derivatives with respect to U and F,
the only derivation in applying our algorithm is to derive the steady state.
Therefore, our suggested algorithm takes full advantage, yet overcomes all
the limitations occurring in Chow’s method.
Since our state equation takes the form of (1.2) rather than (1.25), the
first-order condition is established by (1.14) and (1.15). Evaluating the first-
8
Note that the good approximation to
∂F(xt,ut)
∂xt
should be

2
F(¯ x,¯ u)
∂x
2
(x
t
−¯ x) +

2
F(¯ x,¯ u)
∂¯ x∂u
(u
t

¯ u) +
∂F(¯ x,¯ u)
∂¯ x
. For
∂F(xt,ut)
∂ut
there is a similar problem.
9
The same problem of multiple solution should also exist for the log-linear approximation
method.
24
order condition along with (1.2) and (1.3) at their certainty equivalence form,
we are able to derive the steady states for x
t
, u
t
, λ
t
and z
t
, which we shall
denote respectively as ¯ x, ¯ u,
¯
λ and ¯ z. Taking the first-order Taylor approxi-
mation around the steady state for (1.14), (1.15) and (1.2), we obtain
F
11
x
t
+ F
12
u
t
+ F
13
E
t
λ
t+1
+ F
14
z
t
+ f
1
= λ
t
, (1.39)
F
21
x
t
+ F
22
u
t
+F
23
E
t
λ
t+1
+ F
24
z
t
+ f
2
= 0, (1.40)
x
t+1
= Ax
t
+ Cu
t
+ Wz
t
+b; (1.41)
where in particular,
A = F
x
C = F
u
b = F(¯ x, ¯ u, ¯ z) −F
x
¯ x −F
z
¯ z −F
u
¯ u W = F
z
F
11
= U
xx
+ β
¯
λF
xx
F
12
= U
xu

¯
λF
xu
F
13
= βA

F
14
= β
¯
λF
xz
f
1
= U
x
+ βA

¯
λ −F
11
¯ x −F
12
¯ u −F
13
¯
λ −F
14
¯ z F
21
= U
ux

¯
λF
ux
F
22
= U
uu
+ β
¯
λF
uu
F
23
= βC

f
2
= U
u
+ βC

¯
λ −F
21
¯ x −F
22
¯ u −F
23
¯
λ −F
24
¯ z F
24
= β
¯
λF
uz
Note that here we define U
x
as ∂U/∂x and U
xx
as ∂
2
U/∂x∂x all to be
evaluated at the steady state. The similarity is applied to U
u
, U
uu
, U
ux
, U
xu
,
F
xx
, F
uu
, F
xz
, F
ux
and F
uz
. The objective is to find the linear decision rule
and the Lagrangian function:
u
t
= Gx
t
+ Dz
t
+g (1.42)
λ
t+1
= Hx
t+1
+Qz
t+1
+ h (1.43)
The following is the proposition regarding the solution for (1.42) and (1.43):
Proposition 1 Assume u
t
and λ
t+1
follow (1.42) and (1.43) respectively.
Then the solution of G, D, g, H, Q and h satisfy
G = M +NH (1.44)
D = R +NQ (1.45)
g = Nh + m (1.46)
HCNH + H(A +CM) + F
−1
13
(F
12
N −I
m
)H + F
−1
13
(F
11
+ F
12
M) = 0
(1.47)
_
F
−1
13
(I −F
12
N) −HCN
¸
Q−QP = H(W + CR) + F
−1
13
(F
14
+ F
12
R)
(1.48)
h =
_
F
−1
13
(I −F
12
N) −HCN −I
m
¸
−1
_
H(Cm+ b) + Qp + F
−1
13
(f
1
+ F
12
m)
¸
(1.49)
25
where I
m
is the m×m identity matrix and
N =
_
F
23
F
−1
13
F
12
−F
22
_
−1
F
23
F
−1
13
, (1.50)
M =
_
F
23
F
−1
13
F
12
−F
22
_
−1
(F
21
−F
23
F
−1
13
F
11
), (1.51)
R =
_
F
23
F
−1
13
F
12
−F
22
_
−1
(F
24
−F
23
F
−1
13
F
14
), (1.52)
m =
_
F
23
F
−1
13
F
12
−F
22
_
−1
(f
2
−F
23
F
−1
13
f
1
). (1.53)
Given H, the proposition allows us to solve G, Q and h directly according
(1.44), (1.48) and (1.49). Then D and g can be computed from (1.45) and
(1.46). The solution to H is implied by (1.47), which is nonlinear (quadratic)
in H. Obviously, if the model has more than one state variables, we cannot
solve for H analytically. In this case, we shall rewrite (1.47) as
H = F(H), (1.54)
iterating (1.54) until convergence will give us a solution to H.
10
However, when one encounters the model with one state variable,
11
H
becomes a scalar, and therefore (1.47) can be written as
a
1
H
2
+ a
2
H + a
3
= 0
with the two solutions given by
H
1,2
=
1
2a
1
_
−a
2
±
_
a
2
2
−4a
1
a
3
_1
2
_
.
In other words, the solutions can be computed without iteration. Further,
in most cases, one can also easily identify the proper solution by relying on
the economic meaning of λ
t
. For example, in all the models that we will
present in this book, the state equation is the capital stock. Therefore, λ
t
is the shadow price of the capital which should be inversely related to the
quantity of capital. This indicates that only the negative solution is a proper
solution.
1.6 A Dynamic Programming Algorithm
In this section we describe a dynamic programming algorithm which en-
ables us to compute optimal value functions as well as optimal trajectories
10
though multiple solutions may exist.
11
which is mostly the case in recent economic literature.
26
of discounted optimal control problems of the type above. An extension to
a stochastic decision problem is briefly summarized in appendix III.
The basic discretization procedure goes back to Capuzzo Dolcetta (1983)
and Falcone (1987) and is applied with adaptive gridding strategy by Gr¨ une
(1997) and Gr¨ une and Semmler (2004a). We consider discounted optimal
control problems in discrete time t ∈ N
0
given by
V (x) = max
u∈U

t=0
β
t
g(x(t), u(t)) (1.55)
where
x
t+1
= f(x(t), u(t)), x(0) = x
0
∈ R
n
For the discretization in space we consider a grid Γ covering the com-
putational domain of interest. Denoting the nodes of the grid Γ by x
i
,
i = 1, . . . , P, we are now looking for an approximation V
Γ
h
satisfying
V
Γ
h
(x
i
) = T
h
(V
Γ
h
)(x
i
) (1.56)
for all nodes x
i
of the grid, where the value of V
Γ
h
for points x which are not
grid points (these are needed for the evaluation of T
h
) is determined by linear
interpolation. For a description of several iterative methods for the solution
of (1.56) we refer to Gr¨ une and Semmler (2004a).
For the estimation of the gridding error we estimate the residual of the
operator T
h
with respect to V
Γ
h
, i.e., the difference between V
Γ
h
(x) and
T
h
(V
Γ
h
)(x) for points x which are not nodes of the grid. Thus, for each
cell C
l
of the grid Γ we compute
η
l
:= max
x∈C
l
¸
¸
T
h
(V
Γ
h
)(x) −V
Γ
h
(x)
¸
¸
Using these estimates we can iteratively construct adaptive grids as follows:
(0) Pick an initial grid Γ
0
and set i = 0. Fix a refinement parameter
θ ∈ (0, 1) and a tolerance tol > 0.
(1) Compute the solution V
Γ
i
h
on Γ
i
(2) Evaluate the error estimates η
l
. If η
l
< tol for all l then stop
(3) Refine all cells C
j
with η
j
≥ θ max
l
η
l
, set i = i + 1 and go to (1).
27
For more information about this adaptive gridding procedure and a com-
parison with other adaptive dynamic programming approaches we refer to
Gr¨ une and Semmler (2004a) and Gr¨ une (1997).
In order to determine equilibria and approximately optimal trajectories
we need an approximately optimal policy, which in our discretization can be
obtained in feedback form u

(x) for the discrete time approximation using
the following procedure:
For each x in the gridding domain we choose u

(x) such that the equality
max
u∈U
{hg(x, u) + βV
h
(x
h
(1))} = {hg(x, u

(x)) +βV
h
(x

h
(1))}
holds, where x

h
(1) = x + hf(x, u

(x)). Then the resulting sequence u

i
=
u

(x
h
(i)) is an approximately optimal policy and the related piecewise con-
stant control function is approximately optimal.
1.7 Conclusion
This chapter reviews some typical approximation methods to solve a stochas-
tic dynamic optimization model. The approximation methods discussed here
use two types of first-order conditions: the Euler equation and the equations
derived from the Lagrangian. We find that the Euler equation needs a re-
striction that the state variable cannot appear as a determinant in the state
equation. Although many economic models can satisfy this restriction after
some transformation, we still cannot exclude the possibility that sometime
the restriction cannot be satisfied.
Given these two types of first-order conditions, we consider the solutions
computed by the Fair-Taylor method, the log-linear approximation method
and the linear-quadratic approximation method. For all of these different
methods, we compare their advantages and disadvantages. We find that the
Fair-Taylor method may encounter an accuracy problem due to its additional
requirement of the initial condition for the control variable. On the other
hand, the methods of log-linear approximation may need an algorithm that
can take over some heavy derivation process that otherwise must be ana-
lytically accomplished. For the linear-quadratic approximation, we therefore
propose an algorithm that could overcome the limitation of existing methods
(such as Chow’s method). We also have elaborated on dynamic programming
as a recently developed method to solve the involved Bellman equation. In
the next chapter we will turn to a practical problem and apply the diverse
methods.
28
1.8 Appendix I: Proof of Proposition 1
From (1.39), we obtain
E
t
λ
t+1
= F
−1
13

t
−F
11
x
t
−F
12
u
t
−F
14
z
t
−f
1
) (1.57)
Substituting the above equation into (1.40) and then solving for u
t
, we get
u
t
= Nλ
t
+ Mx
t
+ Rz
t
+ m, (1.58)
where N, M, R and m are all defined in the proposition. Substituting (1.58)
into (1.41) and (1.55) respectively, we obtain
x
t+1
= S
x
x
t
+ S
λ
λ
t
+ S
z
z
t
+s (1.59)
E
t
λ
t+1
= L
x
x
t
+L
λ
λ
t
+L
z
z
t
+ l. (1.60)
where
S
x
= A +CM (1.61)
S
λ
= CN (1.62)
S
z
= W + CR (1.63)
s = Cm+ b (1.64)
L
x
= −F
−1
13
(F
11
+ F
12
M) (1.65)
L
λ
= F
−1
13
(I −F
12
N) (1.66)
L
z
= −F
−1
13
(F
14
+ F
12
R) (1.67)
l = −F
−1
13
(f
1
+ F
12
m) (1.68)
Now express λ
t
in terms of (1.43) and then plug it into (1.59) and (1.60):
x
t+1
= (S
x
+ S
λ
H)x
t
+ (S
z
+ S
λ
Q)z
t
+s + S
λ
h, (1.69)
E
t
λ
t+1
= (L
x
+ L
λ
H)x
t
+ (L
z
+ L
λ
Q)z
t
+ l +L
λ
h. (1.70)
Next, taking expectation for the both sides of (1.43) and expressing x
t+1
in
terms of (1.41) while E
t
z
t+1
in terms of (1.3) we obtain
E
t
λ
t+1
= HAx
t
+ HCu
t
+ (HW + QP)z
t
+ Hb +Qp +h. (1.71)
Next, expressing u
t
in terms of (1.42) for equations (1.41) and (1.71) respec-
tively, we get
29
x
t+1
= (A + CG)x
t
+ (W + CD)z
t
+ Cg + b (1.72)
E
t
λ
t+1
= H(A + CG)x
t
+ [H(W + CD) + QP] z
t
+ H(Cg + b) +Qp + h.
(1.73)
Comparing (1.69) and (1.70) with (1.72) and (1.73), we obtain
S
x
+ S
λ
H = A + CG, (1.74)
L
x
+ L
λ
H = H(A + CG), (1.75)
S
z
+ S
λ
Q = W + CD, (1.76)
L
z
+ L
λ
Q = H(W + CD) + QP, (1.77)
s + S
λ
h = Cg + b, (1.78)
l + L
λ
h = H(Cg + b) + Qp +h. (1.79)
These 6 equations determine 6 unknown coefficient matrices and vectors G,
H, D, Q, g and h. In particular, H is resolved from (1.75) when A + CG
is replaced by (1.74). This gives rise to (1.47) as in Proposition 1 with
S
x
, S
λ
, L
x
and L
λ
to be expressed by (1.61), (1.62), (1.65) and (1.66). Given
H, G is then resolved from (1.74), which allows us to obtain (1.44). Next
Q is resolved from (1.77) when W + CD is replaced by (1.76). This gives
rise to (1.48) with S
z
, S
λ
, L
z
and L
λ
to be expressed by (1.63), (1.62), (1.67)
and (1.66). Then D is resolved from (1.76), which allows us to obtain (1.45).
Finally, h is resolved from (1.79) when Cg + b is replaced by (1.78). This
gives rise to (1.49) with S
λ
, L
λ
, s and l to be expressed by (1.62), (1.66),
(1.64) and (1.66). Finally g is resolved from (1.78), which allows us to obtain
(1.46).
1.9 Appendix II: An Algorithm for the LQ-
Approximation
The algorithm that we suggest to solve the LQ approximation of chapter
1.5 is written as a GAUSS procedure and available from the authors upon
request. We call this procedure DYNPR. The input of this procedure includes
• the steady states of x, u, λ, z and F denoted as xbar, ubar, lbar, zbar
and Fbar respectively.
• the first- and the second-order partial derivatives of x, u and z with
respect F and U, all evaluated at the steady states. They are denoted
30
as, for instance, Fx and Fxx for the first- and the second-order partial
derivatives of x with respect to F respectively.
• the discount factor β (denoted as beta) and the parameters P and p
(denoted as BP and sp respectively) appearing in the AR(1) process
of z.
The output of this procedure is the decision parameters G, D and g,
denoted respectively as BG, BD and sg.
PROC(3) = DYNPR (Fx, Fu, Fz, Fxx, Fuu, Fzz, Fxu, Fxz, Fuz, Fux,
Fzx, Ux, Uu, Uxx, Uuu, Uxu, Uux, Fbar, xbar,
ubar, zbar, lbar, beta, BP, sp);
LOCAL A, C, W, sb, F11, F12, F13, F14, f1, F21, F22,
F23, F24, f2, BM, BN, BR, sm, Sx, Slamda, Sz,
ss, BLx, BLlamda, BLz, sl, sa1, sa2, sa3, BH,
BG, sh, sg, BQ, BD;
A = Fx;
C = Fu;
W = Fz;
sb = Fbar −A ∗ xbar −C ∗ ubar −W ∗ zbar;
F11 = Uxx + beta ∗ lbar ∗ Fxx;
F12 = Uxu + beta ∗ lbar ∗ Fxu;
F13 = beta ∗ Fx;
F14 = beta ∗ lbar ∗ Fxz;
f1 = Ux +beta ∗ lbar ∗ Fx −F11 ∗ xbar−
F12 ∗ ubar −F13 ∗ lbar −F14 ∗ zbar;
F21 = Uux + beta ∗ lbar ∗ Fux;
F22 = Uuu + beta ∗ lbar ∗ Fuu;
F23 = beta ∗ (Fu’);
F24 = beta ∗ lbar ∗ Fuz;
f2 = Uu + beta ∗ (Fu′) ∗ lbar −F21 ∗ xbar−
F22 ∗ ubar −F23 ∗ lbar −F24 ∗ zbar;
BM = INV(F23∗INV(F13) ∗ F12 −F22)∗
(F21 −F23∗INV(F13) ∗ F11);
BN = INV(F23∗INV(F13) ∗ F12 −F22)∗
F23∗INV(F13);
BR = INV(F23∗INV(F13) ∗ F12 −F22)∗
(F24 −F23∗INV(F13) ∗ F14);
sm = INV(F23∗INV(F13) ∗ F12 −F22)∗
(f2 −F23∗INV(F13) ∗ f1);
Sx = A +C ∗ BM;
31
Slamda = C ∗ BN;
Sz = W + C ∗ BR;
ss = C ∗ sm+ sb;
BLx = −INV(F13) ∗ (F11 +F12 ∗ BM);
BLlamda = INV(F13) ∗ (1 −F12 ∗ BN);
BLz = −INV(F13) ∗ (F14 +F12 ∗ BR);
sl = −INV(F13) ∗ (f1 + F12 ∗ sm);
sa1 = Slamda;
sa2 = Sx −BLlamda;
sa3 = −BLx;
BH = (1/(2 ∗ sa1)) ∗ (−sa2 −(sa2ˆ2 −4 ∗ sa1 ∗ sa3)ˆ0.5);
BG = BM + BN ∗ BH;
BQ = INV(BLlamda −BH ∗ Slamda −BP)∗
(BH ∗ Sz −BLz);
BD = BR + BN ∗ BQ;
sh = INV(BLlamda −BH ∗ Slamda −1)∗
(BH ∗ ss +BQ∗ sp −sl);
sg = BN ∗ sh +sm;
RETP(BG, BD, sg);
ENDP;
1.9.1 Appendix III: The Stochastic Dynamic Program-
ming Algorithm
Our adaptive approach of chapter 1.6. is easily extended to stochastic discrete
time problems of the type
V (x) = E
_
max
uin U

t=0
β
t
g(x(t), u(t))
_
(1.80)
where
x(t + 1) = f(x(t), u(t), z
t
), x(0) = x
0
∈ R
n
(1.81)
and the z
t
are i.i.d. random variables. This problem can immediately be
applied in discrete time with time step h = 1.
12
The corresponding dynamic programming operator becomes
T
h
(V
h
)(x) = max
u∈U
E {hg(x, u) + βV
h
(ϕ(x, u, z))} , (1.82)
12
For a discretization of a continuous time stochastic optimal control problem with dynam-
ics governed by an Itˆo stochastic differential equation, see Camilli and Falcone (1995).
32
where ϕ(x, u, z) is now a random variable.
If the random variable z is discrete then the evaluation of the expectation
E is a simple summation, if z is a continuous random variable then we can
compute E via a numerical quadrature formula for the approximation of the
integral
_
z
(hg(x, u) + βV
h
(ϕ(x, u, z))p(z)dz,
where p(z) is the probability density of z. Gr¨ une and Semmler (2004a)
show the application of such a method to such a problem, where z is a
truncated Gaussian random variable and the numerical integration is done
via the trapezoidal rule.
It should be noted that despite the formal similarity, stochastic opti-
mal control problems have several features different from deterministic ones.
First, complicated dynamical behavior like multiple stable steady state equi-
libria, periodic attractors etc. is less likely because the influence of the
stochastic term tends to “smear out” the dynamics in such a way that these
phenomena disappear.
13
Furthermore, in stochastic problems the optimal
value function typically has more regularity which allows the use of high
order approximation techniques. Finally, stochastic problems can often be
formulated in terms of Markov decision problems with continuous transi-
tion probability (see Rust (1996) for a details), whose structure gives rise
to different approximation techniques, in particular allowing to avoid the
discretization of the state space.
In these situations, the above dynamic programming technique, developed
by Gr¨ une (1997) and applied in Gr¨ une and Semmler (2004a) may not be
the most efficient approach to these problems, and it has to compete with
other efficient techniques. Nevertheless, the examples in Gr¨ une and Semmler
(2004a). shows that adaptive grids as discussed in chapter 1.6 are by far more
efficient than non–adaptive methods if the same discretization technique is
used for both approaches. It should also be noted that in the smooth case
one can obtain estimates for the error in the approximation of the gradient
of V
h
from our error estimates, for details we refer to Gr¨ une (2003).
13
A remark to this extent on an earlier version of our work has been made by Buz Brock
and Michael Woodford.
Chapter 2
Solving a Prototype Stochastic
Dynamic Model
2.1 Introduction
This chapter turns to a practical problem of dynamic optimization. In partic-
ular, we shall solve a prototype model by employing different approximation
methods as we have discussed in the last chapter. The model we choose is
a Ramsey model (Ramsey 1928) to which the exact solution is computable
with the standard recursive method. This will allow us to test the accuracy
of approximations by comparing the different approximate solutions to the
exact solution.
2.2 The Ramsey Problem
2.2.1 The Model
Ramsey (1928) posed a problem of optimal resource allocation, which is now
often used as a prototype model of dynamic optimization.
1
The model pre-
sented in this section is essentially that of Ramsey (1928) yet it is augmented
by uncertainty. Let C
t
denote consumption, Y
t
output and K
t
the capital
stock. Assume that the output is produced by capital stock and it is either
consumed or invested, that is, added to the capital stock. Formally,
Y
t
= A
t
K
α
t
, (2.1)
Y
t
= C
t
+ K
t+1
−K
t
, (2.2)
1
See, for instance, Stockey et al. (1989, chapter 2), Blanchard and Fisher (1989, chapter
2) and Ljungqvist and Sargent (2000, chapter 2).
33
34
where α ∈ (0, 1) and A
t
is the technology which may follow an AR(1) process:
A
t+1
= a
0
+ a
1
A
t
+ ǫ
t+1
. (2.3)
Here we shall assume ǫ
t
to be i.i.d. Equation (2.1) and (2.2) indicates that
we could write the transition law of capital stock as
K
t+1
= A
t
K
α
t
−C
t
. (2.4)
Note that we have assumed here that the depreciation rate of capital stock
is equal to 1. This is a simplified assumption by which the exact solution is
computable.
2
The representative agent is assumed to find the control sequence {C
t
}

t=0
such that
max E
0

t=0
β
t
ln C
t
(2.5)
given the initial condition (K
0
, A
0
).
2.2.2 The Exact Solution and the Steady States
It is well known that the exact solution for this model – which could be
derived from the standard recursive method can be written as
K
t+1
= αβA
t
K
α
t
. (2.6)
This further implies from (2.4) that
C
t
= (1 −αβ)A
t
K
α
t
. (2.7)
Given the solution paths for C
t
and K
t+1
, we are then able to derive the
steady state. It is not difficult to find that one steady state is on the bound-
ary, that is
¯
K = 0 and
¯
C = 0. To obtain a more meaningful interior steady
state, we take logarithm for both sides of (2.6) and evaluate the equation at
its certainty equivalence form:
log K
t+1
= log(αβA) + αlog K
t
. (2.8)
At the steady state, K
t+1
= K
t
=
¯
K. Solving (2.8) for log
¯
K, we obtain
log
¯
K =
log(αβA)
1−α
. Therefore,
¯
K = (αβA)
1/(1−α)
. (2.9)
Given
¯
K,
¯
C is resolved from (2.4):
¯
C = A
¯
K
α

¯
K (2.10)
2
For another similar model where the depreciation rate is also set to 1 and hence the exact
solution is computable, see Long and Plosser (1983).
35
2.3 The First-Order Conditions and Approx-
imate Solutions
To solve the model with different approximation methods, we shall first es-
tablish the first-order condition derived from the Ramsey problem. As we
have mentioned in the last chapter, there are two types of first-order condi-
tions, the Euler equation and the equation derived from the Lagrangian. Let
us first consider the Euler equation.
2.3.1 The Euler Equation
Our first task is to transform the model into a setting so that the state vari-
able K
t
does not appear in F(·) as we have discussed in the last chapter. This
can be done by assuming K
t+1
(instead of C
t
) as model’s decision variable.
To achieve a notational consistency in the time subscript, we may denote the
decision variable as Z
t
. Therefore the model can be rewritten as
max E
0

t=0
β
t
ln(A
t
K
α
t
−K
t+1
). (2.11)
subject to
K
t+1
= Z
t
Note that here we have used (2.4) to express C
t
in the utility function. Also
note that in this formulation the state variable in period t is still K
t
. There-
fore ∂F/∂x = 0 and ∂F/∂u = 1. The Bellman equation in this case can be
written as
V (K
t
, A
t
) = max
K
t+1
{ln(A
t
K
α
t
−K
t+1
) +βE [V (K
t+1
, A
t+1
)]} . (2.12)
The necessary condition for maximizing the right hand side of the Bellman
equation (2.12) is given by
−1
A
t
K
α
t
−K
t+1
+βE
_
∂V
∂K
t+1
(K
t+1
, A
t+1
)
_
= 0. (2.13)
Meanwhile applying the Benveniste-Scheinkman formula,
∂V
∂K
t
(K
t
, A
t
) =
αA
t
K
α−1
t
A
t
K
α
t
−K
t+1
. (2.14)
Substituting (2.14) into (2.13) allows us to obtain the Euler equation:
−1
A
t
K
α
t
−K
t+1
+ βE
_
αA
t+1
K
α−1
t+1
A
t+1
K
α
t+1
−K
t+2
_
= 0,
36
which can further be written as

1
C
t
+ βE
_
αA
t+1
K
α−1
t+1
C
t+1
_
= 0. (2.15)
This Euler equations (2.15) along with (2.4) and (2.3) determine the transi-
tion sequences of {K
t+1
}

t=1
, {A
t+1
}

t=1
and {C
t
}

t=0
given the initial condition
K
0
and A
0
.
2.3.2 The First-Order Condition Derived from the La-
grangian
Next, we turn to derive the first-order condition from the Lagrangian. Define
the Lagrangian:
L =

t=0
β
t
ln C
t

t=0
E
t
_
β
t+1
λ
t+1
(K
t+1
−A
t
K
α
t
+ C
t
)
¸
.
Setting to zero the derivatives of L with respect to λ
t
, C
t
and K
t
, one obtains
(2.4) as well as
1/C
t
−βE
t
λ
t+1
= 0, (2.16)
βE
t
λ
t+1
αA
t
K
α−1
t
= λ
t
. (2.17)
These are the first-order conditions derived from the Lagrangian. Next we try
to demonstrate that the two first-order conditions are virtually equivalent.
This can be done as follows. Using (2.16) to express βE
t
λ
t+1
in terms 1/C
t
and then plug it into (2.17), we obtain λ
t
= αA
t
K
α−1
t
/C
t
. This further
indicates that
E
t
λ
t+1
= E
t
_
αA
t+1
K
α−1
t+1
/C
t+1
¸
(2.18)
Substitute (2.18) back into (2.16), we obtain the Euler equation (2.15).
It is also not difficult to find that the two first-order conditions imply the
same steady state as the one derived from the exact solution (see equation
(2.9) and (2.10)). Writing either (2.15) or (2.17) in terms of their certainty
equivalence form while evaluating them at the steady state, we indeed obtain
(2.9).
2.3.3 The Dynamic Programming Formulation
The dynamic programming problem for the Ramsey growth model of chapter
2.2.1 can be written, in its deterministic version, as a basic discrete time
growth model
37
V = max
C

t=0
β
t
U(C
t
) (2.19)
s.t.
C
t
+ K
t+1
= f(K
t
) (2.20)
with an one period utility function U

(C) > 0, U
′′
(C) < 0, f

(K) > 0, f
′′
(K) <
0.
Let us restate the problem above with K the state variable and K

the
control variable, where K

denotes the next period’s value of K. Substitute
C into the above intertemporal utility function by defining
C = f(K) −K

(2.21)
We then can express the discrete time Bellman-equation, representing a
dynamic programming formulation as
V (K) = max
K

{U[f(K) −K

] + βV (K

)} (2.22)
By applying the Benveniste-Scheinkman condition
3
gives
V

(K) = U

(f(K) −K

)f

(K) (2.23)
Note that K is the state variable and that in equ. (2.22) we have V (K

).
Notice that from the discrete time form of the envelope condition one again
obtains the first order condition of equ. (2.22) as
U

[f(K) −K

] + βV

(K

) = 0
which gives by using (2.23) one step forward, i.e. for V

(K

).
U

[f(K) −K

] = βU

[f(K

) −K
′′
]f

(K

) (2.24)
Note that hereby we obtain as a solution a second order difference equa-
tion in K, whereby K’ denotes the one period and K
′′
the two period ahead
value of K. Yet equ. (2.24) can be written as
1 = β
U

(C
t+1
)
U

(C
t
)
f

(K
t+1
) (2.25)
3
The Benveniste-Scheinkman condition implies that the state variable does not appear in
the transition equation, see chapter 1.3 of this book and Ljungquist and Sargent (2000,
ch. 2).
38
which represent the Euler-equation that has extensively been used in
economic theory
4
.
If we allow for log-utility as in chapter 2.2.1 the discrete time decision
problem is directly analytically solvable. We take the following form of a
utility function
V = max
Ct

t=0
β
t
ln C
t
(2.26)
s.t.
K
t+1
= AK
α
t
−C
t
(2.27)
The analytical solution for the value function is
V (K) =
˜
B +
˜
C ln (K) (2.28)
and for the sequence of capital one obtains
K
t+1
=
βCAK
α
t
1 +βC
(2.29)
with
˜
C =
α
1 −αβ
and
˜
B =
ln (
˜
C(1 −αβ)A) +
βα
1−βα
ln (αβA)
1 −β
For the optimal consumption holds
C
t
= K
t+1
−AK
α
t
(2.30)
and for the steady state equilibrium K one obtains
1
β
= αAK
α−1
or (2.31)
K = βαAK
α
(2.32)
4
The above Euler-equation is also essential not only in stochastic growth but also in
finance, to study asset pricing, in fiscal policy to evaluate treasury bonds and testing for
sustainability of fiscal policy, see Ljungqvist and Sargent (2000, chs. 2, 7, 10, 17)
39
2.4 Solving the Ramsey Problem with Differ-
ent Approximations
2.4.1 The Fair-Taylor Solution
It should be noted that one can apply the Fair-Taylor method either to the
Euler equation or to the first-order condition derived from the Lagrangian.
Here we shall use the Euler equation. Let us first write equation (2.15) in
the form as expressed by (1.19) - (1.21):
C
t+1
= αβC
t
A
t+1
K
α−1
t+1
. (2.33)
Together with (2.4) and (2.3), they form a recursive dynamic system from
which the transition paths of C
t
, K
t
and A
t
can be directly computed. Since
the model is simple in its structure, there is no necessity to employ the
Gauss-Seidel procedure as suggested in the last chapter.
Before we compute the solution path, we shall first parameterize the
model. There are all together 5 structural parameters: α, β, a
0
, a
1
and σ
ǫ
.
Table 2.1 specifies these parameters and the corresponding interior steady
state values:
Table 2.1: Parameterizing the Prototype Model
α β a
0
a
1
σ
ǫ
K C A
0.3200 0.9800 600.00 0.8000 60.000 23593 51640 3000.0
Given the parameters, as reported in Table 2.1, we provide in Figure 2.1
three solution paths computed by the Fair-Taylor method. These solution
paths are compared to the exact solution as expressed in (2.6) and (2.7):
The three solution paths are different due to their initial condition with
regard to C
0
. Since we know the exact solution, we thus can choose C
0
close to
the exact solution denoted as C

0
. Note that from (2.7) C

0
= (1 −αβ)A
0
K
α
0
.
In particular, we allow one C
0
to be equal to C

0
, and the others to deviate
1% from C

0
.
40
Figure 2.1: The Fair-Taylor Solution in Comparison to the Exact Solution:
solid curve the exact solution, dashed and dotted curves the Fair-Taylor
solution
The following is a summary of what we have found in this experiment.
• When we choose C
0
above C

0
(by 1%) the path of K
t
quickly reaches to
zero and therefore the simulations have to be subject to the constraint
C
t
< A
t
K
α
t
. In particular, we restrict C
t
≤ 0.99A
t
K
α
t
. This restriction
makes K
t
never reach zero so that the simulation can be continued.
The solution path is shown by one of the dashed curves in the figure.
• When we set C
0
below C

0
(again by 1%), the path of C
t
quickly reaches
its lower bound 0. This is shown by the dotted curve in the figure.
• When we set C
0
to C

0
, the paths of K
t
and C
t
(shown by the another
dashed curve) are close to the exact solution for small t

s. Yet when t
goes beyond a certain point, the deviation becomes significant.
What can we learn from this experiment? The exact solution to this
problem seems to be the saddle path for the system composed of (2.4), (2.3)
41
and (2.33). The eventual deviation of the solution starting with C

0
from the
exact solution is likely to be due to the computational errors resulting from
our numerical simulation. On the other hand, we have verified our previ-
ous concern that the initial condition for the control variable is extremely
important for obtaining an appropriate solution path when we employ the
Fair-Taylor method.
2.4.2 The Log-linear Solution
As the Fair-Taylor method, the log-linear approximation method can be ap-
plied to the first-order condition either from Euler equation or from the
Lagrangian. Here we shall again use the Euler equation. Our first task is
therefore to log-linearize the state, Euler and the exogenuous equations as ex-
pressed in (2.4), (2.15) and (2.3). The following is the proposition regarding
this log-linearization (the proof is provided in the appendix):
Next we try to find a solution path for c
t
, which we shall conjecture as
c
t
= η
ca
a
t
+ η
ca
k
t
(2.34)
The proposition below regards the determination of the two undetermined
coefficients η
ca
and η
ck
.
Proposition 2 Let k
t
, c
t
and a
t
denote the log deviations of K
t
, C
t
and A
t
.
Then equation (2.4), (2.15) and (2.3) can be log-linearized as
k
t+1
= ϕ
ka
a
t
+ ϕ
kk
k
t
+ ϕ
kc
c
t
, (2.35)
E [c
t+1
] = ϕ
cc
c
t
+ ϕ
ca
a
t

ck
k
t+1
, (2.36)
E [a
t+1
] = a
1
a
t
, (2.37)
where
ϕ
ka
=
¯
A
¯
K
α−1
, ϕ
kk
=
¯
A
¯
K
α−1
α,
ϕ
kc
= −(
¯
C/
¯
K), ϕ
cc
= αβ
¯
A
¯
K
α−1
,
ϕ
ca
= αβ
¯
A
¯
K
α−1
a
1
, ϕ
ck
= αβ
¯
A
¯
K
α−1
(α −1).
Next we try to find a solution path for c
t
which we shall conjecture as
c
t
= η
ca
a
t
+ η
ca
k
t
(2.38)
The proposition below regards the determination of the two undetermined
coefficients η
c
and η
ck
.
42
Proposition 3 Assume c
t
follow (2.34). Then η
ck
and η
ca
are determined
from the following equation
η
ck
=
1
2Q
2
_
−Q
1

_
Q
2
1
−4Q
0
Q
2
_
(2.39)
η
ca
=

ck
−ϕ
ck

ka
−ϕ
ca
ϕ
cc
−a
1
−ϕ
kc

ck
−ϕ
ck
)
(2.40)
where Q
2
= ϕ
kc
, Q
1
= (ϕ
kk
−ϕ
cc
−ϕ
kc
ϕ
ck
) and Q
0
= ϕ
kk
.
The solution paths of the model can now be computed by relying on (2.34)
and (2.35) with a
t
to be given by a
1
a
t−1
+ ǫ
t
/
¯
A. All the solution paths are
expressed as log deviations. Therefore, to compare the log-linear solution to
the exact solution, we shall perform the transformation via X
t
= (1 + x
t
)
¯
X
for a variable x
t
in log deviation form. Using the same parameters as reported
in Table 2.1, we show in Figure 2.2 the log-linear solution in comparison to
the exact solution.
Figure 2.2: The Log-linear Solution in Comparison to the Exact Solution:
solid curve the exact solution, dashed curves the log-linear solution
43
In contrast to the Fair-Taylor solution, one finds that the log-linear solu-
tion is quite close to the exact solution except for some initial paths.
2.4.3 The Linear-Quadratic Solution with Chow’s Al-
gorithm
To apply Chow’s method, we shall first transform the model in which the
state equation appears to be linear. This can be done by choosing investment
I
t
≡ A
t
K
α
t
− C
t
as the control variable while leaving the capital stock and
the technology as the two state variables. When considering this, the model
becomes
max E
0

t=0
β
t
ln(A
t
K
α
t
−I
t
)
subject to (2.3) as well as
K
t+1
= I
t
(2.41)
Due to the insufficiency in the specification of the model with regard to
the possible exogenuous variable, we have to treat, as suggested in the last
chapter, the technology A
t
also as a state variable. This indicates that there
are two state equations (2.41) and (2.3), both are now in linear form. Next we
shall derive the first- and the second-order partial derivatives of the utility
function around the steady state. This will allow us to obtain those K
ij
and k
j
(i, j = 1, 2) coefficient matrices and vectors as expressed in Chow’s
first-order conditions (1.31) and (1.32).
Suppose the linear decision rule can be written as
I
t
= G
11
K
t
+ G
12
A
t
+g
1
(2.42)
The coefficients G
11
, G
12
and g
1
can be computed in principle by iterating
(1.35) - (1.38) as discussed in the last chapter. Yet this requires the iteration
to be convergent. Unfortunately, this is not attainable for our particular ap-
plication, even if we start from many different initial conditions.
5
Therefore,
our attempt fails to compute the solution path with Chow’s algorithm.
2.4.4 The Linear-Quadratic Solution Using the Sug-
gested Algorithm
When we employ our new algorithm, there is no need to transform the model.
Therefore we can define F = AK
α
−C and U = ln C. Again our first step is
5
Reiter (1997) has experienced the same problem.
44
to compute the first- and second-order partial derivatives with respect to F
and U. All these partial derivatives along with the steady states can be used
as input in the GAUSS procedure provided in Appendix II of the last chap-
ters. Executing this procedure will allow us to compute the undetermined
coefficients in the following decision rule for C
t
:
C
t
= G
21
K
t
+G
22
A
t
+ g
2
(2.43)
Equation (2.43) along with (2.4) and (2.3) form the dynamic system from
which the transition path of C
t
, K
t
and A
t
are computed (see Figure 2.3 for
illustration).
Figure 2.3: The Linear-quadratic Solution in Comparison to the Exact So-
lution: solid curve for exact solution, dashed curves for linear-quadratic so-
lution
45
Figure 2.4: Value Function obtained from the Linear-quadratic Solution
In addition, in the figure 2.4, the value function obtained from the linear
quadratic solution is shown. As the figure shows the value function is clearly
concave in the capital stock, K.
2.4.5 The Dynamic Programming Solution
Next, using as example, we will compare the analytical solution of chapter
2.3.3 with the dynamic programming solution obtained from the dynamic
programming algorithm of chapter 1.6. Subsequently we report only results
from a deterministic version. Results from a stochastic version are discussed
in appendix II.
For the growth model of chapter 2.3.3 we employ the following parameters
α = 0.34
A = 5
β = 0.95
we can solve all the above expressions numerically for a grid of the capital
stock in the interval [0.1, 10] and the control variable, c, in the interval [0.1, 5].
For the parameters chosen we obtain a steady state of the capital stock
K = 2.07
For more details of the solution, see Gr¨ une and Semmler (2004a).
6
6
Moreover, as concerns asset pricing log-utility preferences provide us with a very simple
46
The solution of the growth model with the above parameters, using the
dynamic programming algorithm of chapter 1.6 with grid refinement is shown
in figures 2.5 and 2.6.
27.5
28
28.5
29
29.5
30
30.5
0 1 2 3 4 5 6 7 8 9 10
Figure 2.5: Value Function
stochastic discount factor and an analytical expression for the asset price. For U(C) =
ln(C) the asset price is
P
t
= E
t

t=1
β
j
U

(C
t+j
)
U

(C
t
)
= E
t

t=1
β
j
C
t
· C
t+j
C
t+j
=
βC
t
1 −β
.
For further details, see Cochrane (2001, ch. 9.1) and Gr¨ une and Semmler (2004 b).
47
4
3
2
1
0
0 5 10
Figure 2.6: Path of Control
As figures 2.5 and 2.6 show the value function and the control, C, are
concave in the capital stock, K in figure 2.6 the optimal consumption is
shown to depend on the state variable K for a grid of K, 0 ≤ K ≤ 10.
Moreover, as observable from figure 2.6 consumption is low when capital stock
is low (capital stock can grow) and consumption is high when capital stock
is high (capital stock will decrease)where low and high is meant in reference
to the optimal steady state capital stock K = 2.07. As reported in Gr¨ une
and Semmler (2004a) the dynamic programming algorithm with adaptive
gridding strategy as introduced in chapter 1.6 solves the value function with
high accuracy.
7
2.5 Conclusion
This chapter employs the different approximation methods to solve a proto-
type dynamic optimization model. Our purpose here is to compare the dif-
ferent approximate solutions to the exact solution, which for this model can
be derived analytically by the standard recursive method. As we have found,
there have been some difficulties when we apply the Fair-Taylor method and
the method of linear-quadratic approximation using Chow’s algorithm. Yet
7
With 100 nodes in capital stock interval the error is 3.2 · 10
−2
and with 2000 nodes the
error shrinks to 6.3 · 10
−4
, see Gr¨ une and Semmler (2004a).
48
when we apply the methods of log-linear approximation and linear-quadratic
approximation with our suggested algorithm, we find that the approximate
solutions are close to the exact solution. At the same time, we also find
that the method of log-linear approximation may need an algorithm that
can take over some heavy derivations that otherwise must be analytically
accomplished. Therefore, our experiment in this chapter verifies our previ-
ous concerns (in Chapter 2) with regard to the accuracy and the capabil-
ity of different approximation methods, including the Fair-Taylor method,
the log-linear approximation method and the linear-quadratic approxima-
tion method. Although the dynamic programming approach solves the value
function with higher accuracy, in the subsequent chapters, when we come
to the calibration of the intertemporal decision models, we will work with
the linear quadratic approximation of the Chow method since it is better
applicable to empirical assessment of the models.
2.6 Appendix I: The Proof of Proposition 2
and 3
2.6.1 The Proof of Proposition 2
For convenience, we shall write (2.4), (2.15) and (2.3) as
K
t+1
−A
t
K
α
t
−C
t
= 0 (2.44)
E
_
C
t+1
−αβC
t
A
t+1
K
α−1
t+1
¸
= 0 (2.45)
E [A
t+1
−a
0
−a
1
A
t
] = 0 (2.46)
Applying (1.22) to the above equations, we obtain
¯
Ke
k
t+1

¯
A
¯
K
α
e
at+αkt
+
¯
Ce
ct
= 0 (2.47)
E
_
¯
Ce
c
t+1
−αβ
¯
C
¯
A
¯
K
α−1
e
a
t+1
+ct+(α−1)k
t+1
¸
= 0 (2.48)
E
_
¯
Ae
a
t+1
−a
0
−a
1
¯
Ae
at
¸
= 0 (2.49)
Applying (1.23), we further obtain from the above:
¯
K(1 +k
t+1
) −
¯
A
¯
K
α
(1 +a
t
+αk
t
) +
¯
C(1 +c
t
) = 0
E
_
¯
C(1 +c
t+1
) −αβ
¯
C
¯
A
¯
K
α−1
(1 +c
t
+ a
t+1
+ (α −1)k
t+1
)
_
= 0
E
_
¯
A(1 +a
t+1
) −a
0
−a
1
¯
A(1 +a
t
)
¸
= 0
49
which can be further written as to be
¯
Kk
t+1

¯
A
¯
K
α
a
t

¯
A
¯
K
α
αk
t
+
¯
Cc
t
= 0 (2.50)
E
_
¯
Cc
t+1
−αβ
¯
C
¯
A
¯
K
α−1
(c
t
+ a
t+1
+ (α −1)k
t+1
)
_
= 0 (2.51)
E [a
t+1
−a
1
a
t
] = 0 (2.52)
Equation (2.52) indicates (2.37). Substituting it into (2.51) to express
E [a
t+1
] and re-arranging (2.50) and (2.51), we obtain (2.35) and (2.36) as
indicated in the proposition.
2.6.2 Proof of Proposition 3
Given the conjectured solution (2.34), the transition path of k
t+1
can be
derived from (2.35), which can be written as
k
t+1
= η
ka
a
t

kk
k
t
(2.53)
where
η
ka
= ϕ
ka

kc
η
ca
(2.54)
η
kk
= ϕ
kk
+ ϕ
kc
η
ck
(2.55)
Expressing c
t+1
and c
t
in terms of (2.34) while recognizing that E [a
t+1
] =
a
1
a
t
, we obtain from (2.36):
η
ca
a
1
a
t
+ η
ck
k
t+1
= ϕ
cc

ca
a
t
+ η
ck
k
t
) + ϕ
ca
a
t
+ ϕ
ck
k
t+1
which can further be written as
k
t+1
=
ϕ
cc
η
ca
+ ϕ
ca
−η
ca
a
1
η
ck
−ϕ
ck
a
t
+
ϕ
cc
η
ck
η
ck
−ϕ
ck
k
t
(2.56)
Comparing (2.56) to (2.53) with η
ka
and η
kk
to be given by (2.54) and (2.55),
we thus obtain
ϕ
cc
η
ca
+ ϕ
ca
−η
ca
a
1
η
ck
−ϕ
ck
= ϕ
ka
+ ϕ
kc
η
ca
(2.57)
ϕ
cc
η
ck

ck
−ϕ
ck
)
= ϕ
kk
+ ϕ
kc
η
ck
(2.58)
Equation (2.58) gives rise to the following quadratic function in η
ck
:
Q
2
η
2
ck
+Q
1
η
ck
+Q
0
= 0 (2.59)
with Q
2
, Q
1
and Q
0
to be given in the proposition. Solving (2.59) for η
ck
,
we obtain (2.39). Given η
ck
, η
ca
is resolved from (2.57), which gives rise to
(2.40).
50
2.7 Appendix II: Dynamic Programming for
the Stochastic Version
We here present a stochastic version of a growth model which is based on the
Ramsey model of chapter 2.1 but extended to the stochastic case. A model
of type goes back to Brock and Mirman (1972). Here the Ramsey 1d model
?? is extended using a second variable modelling a stochastic shock. The
model is given by the discrete time equations
K(t + 1) = A(t)
˜
AK(t)
α
−C(t)
A(t + 1) = exp(ρ ln A(t) + z
t
)
where α and ρ are real constants and the z(t) are i.i.d. random variables with
zero mean. The return function is again U(C) = ln C.
In our numerical computations which follows Gr¨ une and Semmler (2004)
we used the parameter values
˜
A = 5, α = 0.34, ρ = 0.9 and β = 0.95. As
the case of the Ramsey model, the exact solution is known and given by
V (K, A) = B +
˜
C ln K + DA,
where
B =
ln((1 −βα)
˜
A) +
βα
1−βα
ln(βα
˜
A)
1 −β
,
˜
C =
α
1 −αβ
, D =
1
(1 −αβ)(1 −ρβ)
We have computed the solution to this problem on the domain Ω =
[0.1, 10] × [−0.32, 0.32]. The integral over the Gaussian variable z was ap-
proximated by a trapezoidal rule with 11 discrete values equidistributed in
the interval [−0.032, 0.032] which ensures ϕ(x, u, z) ∈ Ω for x ∈ Ω and suit-
able u ∈ U = [0.5, 10.5]. For evaluating the maximum in T the set U was
discretized with 161 points. Table 2.2 shows the results of the resulting
adaptive gridding scheme applied with refinement threshold θ = 0.1 and
coarsening tolerance ctol = 0.001. Figure 2.7 shows the resulting optimal
value function and adapted grid.
51
# nodes Error estimated Error
49 1.4 · 10
0
1.6 · 10
1
56 0.5 · 10
−1
6.9 · 10
0
65 2.9 · 10
−1
3.4 · 10
0
109 1.3 · 10
−1
1.6 · 10
0
154 5.5 · 10
−2
6.8 · 10
−1
327 2.2 · 10
−2
2.4 · 10
−1
889 9.6 · 10
−3
7.3 · 10
−2
2977 4.3 · 10
−3
3.2 · 10
−2
Table 2.2: Number of nodes and errors for our Example
0 1 2 3 4 5 6 7 8 9 10
-0.4
-0.3
-0.2
-0.1
0
0.1
0.2
0.3
0.4
24
25
26
27
28
29
30
31
32
33
34
-0.3
-0.2
-0.1
0
0.1
0.2
0.3
1 2 3 4 5 6 7 8 9 10
Figure 2.7: Approximated value function and final adaptive grid for our
Example
In Santos and Vigo–Aguiar (1995) on equidistant grids with 143 × 9 =
1287 and 500 × 33 = 16500 nodes, errors of 2.1 · 10
−1
and 1.48 · 10
−2
, re-
spectively, were reported. In our adaptive iteration these accuracies could be
obtained with 109 and 889 nodes, respectively; thus we obtain a reduction
in the number of nodes of more than 90% in the first and almost 95% in the
second case, even though the anisotropy of the value function was already
taken into account in these equidistant grids. Here again in our stochastic
version of the growth model, a steep value function can best be approximated
with grid refinement.
Chapter 3
The Estimation and Evaluation
of the Stochastic Dynamic
Model
3.1 Introduction
Solving a stochastic dynamic optimization model with aproximation methods
or dynamic programming is only a first step towards the empirical assessment
of such a model. Another necessary step is to estimate the model with
some econometric techniques. To undertake this step certain approximation
methods are more useful than others. Given the estimation, one then can
evaluate the model to see how the model’s prediction can match the empirical
data.
The task of estimation has often been ignored in the current empirical
studies of a stochastic dynamic optimization model when a technique, often
referred to as calibration, is employed. The calibration approach compares
the moment statistics (usually the second moments) of major macroeconomic
time series to those obtained from simulating the model.
1
Typically the pa-
rameters employed for the model’s simulation are selected from independent
sources, such as different microeconomic studies. This approach has been
criticized because the structural parameters are assumed to be given rather
1
See, e.g., Kydland and Prescott (1982), Long and Plosser (1983), Prescott (1986), Hansen
(1985, 1988), King et.al (1988a, 1988b), Plosser (1989) among many others. Recently,
other statistics (in addition to the first and second moments) proposed by early business
cycle literature, e.g., Burns and Mitchell (1946) and Adelman and Adelman (1959), are
also employed for this comparison. See King and Plosser (1994) and Simkins (1994)
among others.
52
53
than estimated.
2
Although this may not create severe difficulties for some currently used
stochastic dynamic optimization models, such as the RBC, we believe the
problem remains for more elabrated models in the future development of
macroeconomic research. Unfortunately, we do not find many econometric
studies on how to estimate a stochastic dynamic optimization model except
for a few attempts that have been undertaken for some simple cases.
3
In this
chapter, we shall discuss two estimation methods: the Generalized Method of
Moment (GMM) estimation, and the Maximum Likelihood (ML) estimation.
Both estimation methods define an objective function to be optimized. Due
to the complexity of stochastic dynamic models, it is often unclear how the
parameters to be estimated are related to the model’s restrictions and hence
to the objective function in the estimation. We thus also need to develope
an estimation strategy that can be used to recursively search the parameter
space in order to obtain the optimum.
Section 2 will first introduce the calibration technique, which has been
used in the current empirical study of stochastic dynamic models. As one
can find there, a proper application of calibration requires us to define the
model’s structural parameters correctly. We then in Section 3 consider two
possible estimation methods, the GMM and the ML estimations. In Section
4, we propose a strategy to implement these two estimations for estimating
a dynamic optimization model. This estimation requires a global optimiza-
tion algorithm. We, therefore, subsequently in Section 5, introduce a global
optimization algorithm, called simulated annealing, which is used for execut-
ing the suggested strategy of estimation. Finally, a sketch of the computer
program for our estimation strategy will be described in the appendix of this
chapter.
3.2 Calibration
The current empirical studies of a stochastic dynamic model often rely on
calibration. This approach uses the Monte Carlo method to generate the
distribution of some moment statistics implied by the model. For an empir-
ical assessment of the model, solved through some approximation method,
we compare these moment statistics to the sample moments computed from
the data. Generally, the calibration may include the following steps:
2
For an early critique of the parameter selection employed in calibration technique, see
Singleton (1988) and Eichenbaum (1991).
3
See, e.g., Christiano and Eichenbaum (1992), Burnside et al. (1993), Chow (1993), Chow
and Kwan (1998).
54
• Step 1: Select the model’s structural parameters. These parameters
may include preference and technology parameters and those that de-
scribe the distribution of the random variables in the model, such as
σ
ǫ
, the standard deviation of ǫ
t
as in equation (1.3).
• Step 2: Select the number of times for which the iteration is conducted
in the simulation of the model. This number might be the same as the
number of observations in the sample. We denote this number by T.
• Step 3: Select the initial condition (x
0
, z
0
) and use the state equation
(1.2), the control equation (1.4) and the exogenuous equation (1.3) to
compute the solution of the model iteratively for T times. This can be
regarded as a one time simulation.
• Step 4: If necessary, detrend the simulated series generated in Step 3 to
remove its time trend. Often the HP-filter (see Hodrick and Prescott
1980) is used for this detrending.
• Step 5: Compute the moment statistics of interest using, if necessary,
a detrended series generated in Step 4. These moment statistics are
mainly those of the second moments such as variances and covariances.
• Step 6: Repeat Step 3 to 5 N times. Here N should be sufficiently large.
Then after these N times repeated runs, compute the distributions of
these moment statistics. These distributions are mainly represented by
their means and their standard deviations.
• Step 7: Compute the same moment statistics from the data sample
and check whether it falls within the proper range of the distribution
for the moment statistics generated from the Monte Carlo simulation
of the model.
Due to the stochastic innovation of ǫ
t
, the simulated series should also
be cyclically and stochastically fluctuating. The extent and the way of fluc-
tuations, reflected by some second moment statistics, should depend on the
model specification and the structural parameters including σ
ǫ
, the standard
deviation of ǫ
t
. Therefore, if we specify the model correctly with the struc-
tural parameters, including σ
ǫ
, which should also be defined properly, the
above comparison of moment statistics of the model and sample should give
us a basis to test whether the model can explain the actual business cycles
represented by the data.
55
3.3 The Estimation Methods
The application of the calibration requires techniques to select the structural
parameters accurately. This indicates that we need to estimate the model
before the calibration. We consider two possible estimation methods: the
GMM estimation and the ML estimation.
3.3.1 The Generalized Method of Moments (GMM)
Estimation
The GMM estimation starts with a set of orthogonal conditions, representing
the population moments established by the theoretical model:
E [h(y
t
, ψ)] = 0 (3.1)
where y
t
is a k-dimensional vector of observed random variables at date t; ψ
is a l-dimensional vector of unknown parameters that needs to be estimated;
h(·) is a vector-valued function mapping from R
k
×R
l
into R
m
. Let y
T
contain
all the observations of the k variables in the sample with size T. The sample
average of h(·) can then be written as
g
T
(ψ; y
T
) =
1
T
T

t=1
h(y
t
, ψ) (3.2)
Notice that g
T
(·) is also a vector-valued function with m-dimensions. The
idea behind the GMM estimation is to choose an estimator of ψ, denoted
´
ψ, such that the sample moments g
T
(
´
ψ; y
T
) are as close as possible to the
population moments reflected by (3.1). To achieve this, one needs to define
a distance function by which that closeness can be judged. Hansen (1982)
suggested the following distance function:
J(
´
ψ; y
T
) =
_
g
T
(
´
ψ; y
T
)
_

W
T
_
g
T
(
´
ψ; y
T
)
_
(3.3)
where W
T
, called the weighting matrix, is m × m, symmetric, positive
definite and depends only on the sample observation y
T
. The choice of this
weighting matrix defines a metric that makes the distance function a scalar.
The GMM estimator of ψ is the value of
´
ψ that minimizes (3.3). Hansen
(1982) proves that under certain assumption such a GMM estimator is consis-
tent and asymptotically normal. Also from the results established in Hansen
(1982), a consistent estimator of the variance-covariance matrix of
´
ψ is given
by
56
V ar(
´
ψ) =
1
T
(D
T
)
−1
W
T
(D

T
)
−1
, (3.4)
where D
T
= ∂g
T
(
´
ψ)/∂ψ

.
There is a great flexibility in the choice of W
T
for constructing a consistent
and asymptotically normal GMM estimator. In this book, we will adopt the
method by Newey and West (1987), where it is suggested that
W
−1
T
=
´

0
+
d

j=1
w(j, d)(
´

j
+
´


j
), (3.5)
with w(j, d) ≡ 1 − j/(1 + d),
´

j
≡ (1/T)

T
t=j+1
g(y
t
,
´
ψ

)g(y
t−j
,
´
ψ

)
and d to be a suitable function of T. Here
´
ψ

is required to be a consistent
estimator of ψ. Therefore, the estimation with the GMM method usually
requires two steps as suggested by Hansen and Singleton (1982). First, one
chooses a sub-optimal weighting matrix to minimize (3.3) and hence obtains
a consistent estimator
´
ψ

. Second, one then uses the consistent estimator
obtained in the first step to calculate the optimum W
T
through which ( 3.3)
is re-minimized.
3.3.2 The Maximum Likelihood (ML) Estimation
The ML estimation, proposed by Chow (1993) for estimating a dynamic
optimization model, starts with an econometric model such as follows:
By
t
+ Γx
t
= ǫ
t
(3.6)
where B is a m × m matrix; Γ is a m × k matrix; y
t
in this case is a m ×1
vector of dependent variables; x
t
is a k ×1 vector of explanatory variables
and ǫ
t
is a m ×1 vector of disturbance terms. Note that if we take the
expectations on both sides, the model to be estimated here is the same as
in the GMM estimation represented by (3.1), except here the functions are
linear. Non-linearity may pose a problem to derive the log-likelihood function
for (3.6).
Suppose there are T observations. Then the above (3.6) can be re-written
as
BY

+ ΓX

= E

(3.7)
where Y is T × m; X is T × k and E is T × m. Assuming normal and serially
uncorrelated ǫ
t
with the covariance matrix Σ, the concentrated log-likelihood
function can be derived (see Chow, 1983, p.170-171) as
log L(ψ) = const. + nlog |B| −
n
2
log |Σ| (3.8)
57
with the ML estimator of Σ given by
´
Σ = n
−1
(BY

+ ΓX

)(Y B

+ XΓ

) (3.9)
The ML estimator of ψ is the one that maximizes logL(ψ) in (3.8). The
asymptotic standard deviation of estimated parameters can be inferred from
the following variance-covariance matrix of
´
ψ (see Hamilton 1994 p.143):
E(
´
ψ −ψ)(
´
ψ −ψ)


= −
_

2
L(ψ)
∂ψ∂ψ

_
−1
. (3.10)
3.4 The Estimation Strategy
In practice, using GMM or ML method to estimate a stochastic dynamic
model is rather complicated. The first problem that we need to discuss are
the restrictions on the estimation.
One proper restriction is the state equation and a first-order condition
derived either as Euler equation or from the Lagrangian.
4
Yet, most first-
order conditions are extremely complicated and may include some auxiliary
variables, such as the Lagrangian multiplier, which are not observable. This
seems to suggest that the restrictions for a stochastic dynamic optimization
model should typically be represented by the state equation and the control
equation derived from the dynamic optimization problem.
The derivation of the control equations in approximate form from a dy-
namic optimization problem is a complicated process. As discussed in the
previous chapters often a numerical procedure is required. For an approxi-
mation method, the linearization of the system at its steady states is needed.
The linearization and the derivation of the control equation, possibly through
an iterative procedure, make it often unclear how the parameters to be es-
timated are related to the model’s restrictions and hence to the objective
function in estimation, such as (3.3) and (3.8). Therefore one is usually inca-
pable of deriving analytically the first-order conditions of minimizing (3.3) or
maximizing (3.8) with respect to the parameters. Furthermore, using first-
order conditions to minimize (3.3) and maximize (3.8) may only lead to a
local optimum, which is quite possible in general, since the system to be esti-
mated is often nonlinear in parameters. Consequently, searching a parameter
space becomes the only possible way to find the optimum.
Our search process includes the following recursive steps:
• Step 1. Start with an initial guess on ψ and use an appropriate method
of dynamic optimization to derive the decision rules.
4
The parameters in the state equation could be estimated independently.
58
• Step 2. Use the state equation and the derived control equation to
calculate the value of the objective function.
• Step 3. Apply some optimization algorithm to change the initial guess
on ψ and start again with step one.
Using this strategy to estimate a stochastic dynamic model, one needs to
employ an optimization algorithm to search the parameter space recursively.
The conventional optimization algorithms,
5
such as Newton-Raphson or re-
lated methods, may not serve our purpose well due to the possible existence
of multiple local optima. We thus need to employ a global optimization al-
gorithm to execute the estimation process as described above. One possible
candidates is the simulated annealing, which shall be discussed in the next
section.
3.5 A Global Optimization Algorithm: The
Simulated Annealing
The idea of simulated annealing has been initially proposed by Metropolis et
al. (1953) and later developed by Vanderbilt and Louie (1984), Bohachevsky
et al. (1986) and Corana et al, (1987). The algorithm operates through
an iterative random search for the optimal variables of an objective function
within an appropriate space. It moves uphill and downhill with a varying step
size to escape local optima. The step size is narrowed so that the random
search is confined to an ever smaller region when the global optimum is
approached.
The simulated annealing algorithm has been tested by Goffe et al. (1992).
For this test, Goffe et al. (1992) compute a test function with two optima
provided by Judge et al. (1985, p.956-7). By comparing it with conventional
algorithms, they find that out of 100 times conventional algorithms are suc-
cessful 52-60 times to reach the global optimum while simulated annealing
is 100 percent efficient. We thus believe that the algorithm may serve our
purpose well.
Let f(x), for example, be a function that is to be maximized and x ∈ S,
where S is the parameter space with the dimensions equal to the number
of structural parameters that need to be estimated. The space S should be
defined from the economic viewpoint and by computational convenience. The
algorithm starts with an initial parameter vector x
0
. Its value f
0
= f(x
0
)
5
For conventional optimization algorithm, see appendix B of Judge et al (1985) and Hamil-
ton (1994, ch. 5).
59
is calculated and recorded. Subsequently, we set the optimum x and f(x)
– denoted by x
opt
and f
opt
respectively – to x
0
and f(x
0
). Other initial
conditions include the initial step-length (a vector with the same dimension
as x) denoted by v
0
and an initial temperature (a scalar) denoted by T
0
.
The new variable, x

, is chosen by varying the ith element of x
0
such that
x

i
= x
0
i
+ r · v
0
i
(3.11)
where r is a uniformly distributed random number in [−1, 1]. If x

is not
in S, repeat (3.11) until x

is in S. The new function value f

= f(x

) is
then computed. If f

is larger than f
0
, x

is accepted. If not, the Metropolis
criteria,
6
denoted as p, is used to decide on acceptance, where
p = e
(f

−f)/T
0
(3.12)
This p is compared to p

, a uniformly distributed random number from [0, 1].
If p is greater than p

, x

is accepted. Besides, f

should also be compared to
the updated f
opt
. If it is larger than f
opt
, both x
opt
and f
opt
are replaced by
x

and f

.
The above steps (starting with (3.11)) should be undertaken and repeated
N
S
times
7
for each i. Subsequently, the step-length is adjusted. The ith
element of the new step-length vector (denoted as v

i
) depends on its number
of acceptances (denoted as n
i
) in its last N
S
times of the above repetition
and is given by
v

i
=
_
_
_
v
0
i
_
1 +
1
0.4
c
i
(n
i
/N
S
−0.6)
¸
if n
i
> 0.6N
S
;
v
0
i
_
1 +
1
0.4
c
i
(0.4 −n
i
/N
S
)
¸
−1
if n
i
< 0.4N
S
;
v
0
i
if 0.4N
S
≤ n
i
≤ 0.6N
S
(3.13)
where c
i
is suggested to be 2 as by Corona et al. (1987) for all i. With the
new selected step-length vector, one goes back to (3.11) and hence starts a
new round of iteration. Again after another N
S
times of such repetitions,
the step-length will be re-adjusted. These adjustments as to each v
i
should
be performed N
T
times.
8
We then come to adjust the temperature. The new
temperature (denoted as T

) will be
T

= R
T
T
0
(3.14)
with 0 < R
T
< 1.
9
With this new temperature T

, we should go back again
to (3.11). But this time, the initial variable x
0
is replaced by the updated
6
motivated by thermodynamics.
7
N
S
is suggested to be 20 as by Corana et al. (1987)
8
N
T
is suggested to be 100 by Corona et. al. (1987).
9
R
T
is suggested to be 0.85 by Corana et. al. (1987).
60
x
opt
. Of course, the temperature will be reduced further after one additional
N
T
times of adjusting the step-length of each i.
For convergence, the step-length in (3.11) is required to be very small. In
(3.13), whether the new selected step-length is enlarged or not depends on
the corresponding number of acceptances. The number of acceptance n
i
is
not only determined by whether the new selected x
i
increases the value of
objective function, but also by the Metropolis criteria which itself depends
on the temperature. Thus a convergence will ultimately be achieved with
the continuous reduction of the temperature. The algorithm will end by
comparing the value of f
opt
for the last N
ǫ
times (suggested to be 4) when
the temperature is attempted to be re-adjusted.
The simulated annealing algorithm described above has been tested by
Goffe et al. (1992). For this test, Goffe et al. (1992) compute a test function
with two optima provided by Judge et al. (1985, p. 956-7). By comparing
it with conventional algorithms, they find that out of 100 times conventional
algorithms are successful 52-60 times to reach the global optimum while sim-
ulated annealing is 100 percent efficient. We thus believe that the algorithm
may serve our purpose well. In the next chapter, we shall demonstrate the ef-
fectiveness of this estimation strategy by estimating a benchmark RBC with
simulated data.
3.6 Conclusions
In this chapter, we have first introduced the calibration method, which has
often been employed in the assessment of a stochastic dynamic model. We
then, based on some approximation methods, have presented an estimation
strategy, to estimate stochastic dynamic models employing time series data.
We have introduced both the General Method of Moments (GMM) as well
as the Maximum Likelihood (ML) estimations as strategies to match the dy-
namic decision model with time series data. Although both strategies permit
to estimate the parameters involved, often a global optimization algorithm,
for example the simulated annealing, is needed to be employed to detect the
correct parameters.
3.7 Appendix: A Sketch of the Computer
Program for Estimation
The algorithm we describe here is written in GAUSS. The entire program
consists of three parts. The first part regards some necessary steps in the
61
data processing after loading the original data. The second part is the pro-
cedure that calculates the value of objective function for the estimation. The
input of this procedure are the structural parameters, while the activation
of this procedure generates the value of objective function. We denote this
procedure as OBJF(ϕ). The third part, which is also the main part of this
program, is the simulated annealing. Of these three parts, we shall only
describe the simulated annealing.
{Set initial conditions for simulated annealing}
DO UNTIL convergence;
t = t + 1;
DO N
T
times;
n = 0; /*set the vector for recording No. of acceptances*/
DO N
s
times;
i = 0;
DO UNTIL i = the dimensions of ϕ;
i = i + 1;
HERE: ϕ
i
= ϕ
i
+ rv
i
;
ϕ

= {as if current ϕ except the ith element to be ϕ

i
};
IF ϕ

is not in S;
GOTO HERE;
ELSE;
CONTINUE;
ENDIF;
f =OBJF(ϕ

); /*f is the value of the objective function to be minimized*/
p = exp[(f

−f)/T]; /*p is the metropolis criteria*/
IF f

> f or p > p

ϕ = ϕ

;
f = f

;
n
i
= n
i
+ 1;
ELSE;
CONTINUE;
ENDIF;
IF f

> f
opt
;
ϕ
opt
= ϕ

;
f
opt
= f

;
ELSE;
CONTINUE;
ENDIF;
62
ENDO;
ENDO;
i = 0;
{define the new step-size, v

, according to n
i
}
v = v

;
ENDO;
IF change of f
opt
< ε in last N
ε
times;
REPORT ϕ
opt
and f
opt
;
BREAK
ELSE
T = R
T
T;
CONTINUE;
ENDIF;
ENDO;
Part II
The Standard Stochastic
Dynamic Optimization Model
63
Chapter 4
Real Business Cycles: Theory
and the Solutions
4.1 Introduction
The Real Business Cycle model as a prototype of a stochastic dynamic macro-
model has influenced quantitative macromodeling enormously in the last two
decades. Its concepts and methods have diffused into mainstream macroe-
conomics. The criticism of the performance of macroeconometric models of
Keynesian type in the 1970s and the associated rational expectation revolu-
tion pioneered by Lucas (1976) initiated this development. The Real Busi-
ness Cycle analysis now occupies a major position in the curriculum of many
graduate programs. To some extent, the Real Business Cycle approach has
become a new orthodoxy of macroeconomics.
The central argument by Real Business Cycel theorists is that economic
fluctuations are caused primarily by real factors. Kydland and Prescott
(1982) and Long and Plosser (1983) first strikingly illustrate this idea in a
simple representative agent optimization model with market clearing, ratio-
nal expectation and no monetary factors. Stockey, Lucas and Prescott (1989)
further illustrate that such type of model could be viewed as an Arrow-Debreu
economy so that the model can be established on a solid micro-foundation
with many (identical) agents. Therefore, as mentioned above, the RBC anal-
ysis can also be regarded as a general equilibrium approach to macrodynam-
ics.
This chapter introduces the RBC model by first describing its microeco-
nomic foundation as set out by Stockey et al. (1989). We then present the
standard RBC model as formulated in King et al. (1988). A model of this
kind will repeatedly be used in the subsequent chapters in various ways. The
64
65
model will then be solved after being parameterized by those standard values
of the model’s structural parameters.
4.2 The Microfoundation
The standard Real Business Cycle model assumes a representative agent who
solves a resource allocation problem over an infinite time horizon via dynamic
optimization. It is argued that “the solutions to planning problems of this
type can, under appropriate conditions, be interpreted as predictions about
the behavior of market economies.”(Stokey et al. 1989, p. 22)
To establish the connection to the competitive equilibrium of the Arrow-
Debreu economy,
1
several assumptions should be made for an hypothetical
economy. First, the households in the economy are identical, all with the
same preference, and firms are also identical, all producing a common out-
put with the same constant returns to scale technology. With this identical
assumption, the resource allocation problem can be viewed as an optimiza-
tion problem of a representative agent.
Second, as in Arrow-Debreu economy, the trading process is assumed
to be “once-and-for-all”. The following citation is again from Stokey et al.
(1989, p.23).
Finally, assume that all transactions take place in a single
once-and-for-all market that meets in period 0. All trading takes
place at that time, so all prices and quantities are determined
simultaneously. No further trades are negotiated later. After this
market has closed, in periods t = 0, 1, ..., T, agents simply deliver
the quantities of factors and goods they have contracted to sell
and receive those they have contracted to buy.
The third assumption regards the ownership. It is assumed that the
household owns all factors of production and all shares of the firm. Therefore,
in each period the household sells factor services to the firm. The revenue
from selling factors can only be used to buy the goods produced by the firm
either for consuming or accumulating as capital. The representative firm
owns nothing. In each period it simply hires capital and labor on a rental
basis to produce output, sells the output and transfers any profit back to the
household.
1
See Arrow and Debreu (1954) and Debreu (1959).
66
4.2.1 The Decision of the Household
At the beginning of the period 0 when the market is open, the household is
given the price sequence {p
t
, w
t
, r
t
}

t=0
at which he (or she) will choose the se-
quence of output demand and input supply
_
c
d
t
, i
d
t
, n
s
t
, k
s
t
_

t=0
that maximizes
the discounted utility:
max E
0
_

t=0
β
t
U(c
d
t
, n
s
t
)
_
(4.1)
subject to
p
t
(c
d
t
+i
d
t
) = p
t
(r
t
k
s
t
+ w
t
n
s
t
) + π
t
(4.2)
k
s
t+1
= (1 −δ)k
s
t
+ i
d
t
(4.3)
Above δ is the depreciation rate; β is the discounted factor; π
t
is the expected
dividend; c
d
t
and i
d
t
are the demands for consumption and investment; and
n
s
t
and k
s
t
are the supplies of labor and capital stock. Note that (4.2) can be
regarded as a budget constraint. The equality holds due to the assumption
U
c
> 0. Next, we shall consider how the representative household calculates
π
t
. It is reasonable to assume that
π
t
= p
t
(´ y
t
−w
t
´ n
t
−r
t
´
k
t
) (4.4)
where ´ y
t
, ´ n
t
and
´
k
t
are the realized output, labor and capital expected by
the household at given price sequence {p
t
, w
t
, r
t
}

t=0
. Thus assuming that
the household knows the production function while expecting that the mar-
ket will be cleared at the given price sequence {p
t
, w
t
, r
t
}

t=0
, (4.4) can be
rewritten as
π
t
= p
t
_
f(k
s
t
, n
s
t
,
ˆ
A
t
) −w
t
n
s
t
−r
t
k
s
t
_
(4.5)
Above, f(·) is the production function and
ˆ
A
t
is the expected technology
shock. Explaining π
t
in (4.2) in terms of (4.5) and then substituting from
(4.3) to eliminate i
d
t
, we obtain
k
s
t+1
= (1 −δ)k
s
t
+ f(k
s
t
, n
s
t
,
ˆ
A
t
) −c
d
t
(4.6)
Note that (4.1) and (4.6) represent the standard RBC model, although it
only specifies one side of the markets: output demand and input supply.
Given the initial capital stock k
s
0
, the solution of this model is the sequence
of plans
_
c
d
t
, i
d
t
, n
s
t
, k
s
t+1
_

t=0
, where k
s
t
is implied by (4.6), and
c
d
t
= G
c
(k
s
t
,
ˆ
A
t
) (4.7)
n
s
t
= G
n
(k
s
t
,
ˆ
A
t
) (4.8)
i
d
t
= f(k
s
t
, n
s
t
,
ˆ
A
t
) −c
d
t
(4.9)
67
4.2.2 The Decision of the Firm
Given the same price sequence {p
t
, w
t
, r
t
}

t=0
, and also the sequence of ex-
pected technology shocks {
ˆ
A
t
}

t=0
, the problem faced by the representative
firm is to choose input demands and output supplies
_
y
s
t
, n
d
t
, k
d
t
_

t=0
. How-
ever, since the firm simply rents capital and hires labor on a period-by-period
basis, its optimization problem is equivalent to a series of one-period maxi-
mization (Stokey et al. 1989, p25):
max p
t
(y
s
t
−r
t
k
d
t
−w
t
n
d
t
)
subject to
y
s
t
= f(k
d
t
, n
d
t
,
ˆ
A
t
) (4.10)
where t = 0, 1, 2, ..., ∞. The solution to this optimization problem satisfies:
r
t
= f
k
(k
d
t
, n
d
t
,
ˆ
A
t
)
w
t
= f
n
(k
d
t
, n
d
t
,
ˆ
A
t
)
This first-order condition allow us to derive the following equations of input
demands k
d
t
and n
d
t
:
k
d
t
= k(r
t
, w
t
,
ˆ
A) (4.11)
n
d
t
= n(r
t
, w
t
,
ˆ
A) (4.12)
4.2.3 The Competitive Equilibrium and the Walrasian
Auctioneer
A competitive equilibrium can be described as a sequence of prices {p

t
, w

t
, r

t
}

t=0
at which the two market forces (demand and supply) are equalized in all these
three markets, i.e.,
k
d
t
= k
s
t
(4.13)
n
d
t
= n
s
t
(4.14)
c
d
t
+i
d
t
= y
s
t
(4.15)
for all t’s, t = 0, 1, 2, ..., ∞. The economy is at the competitive equilibrium if
{p
t
, w
t
, r
t
}

t=0
= {p

t
, w

t
, r

t
}

t=0
Using equation (4.6) - (4.12), one can easily prove the existence of {p

t
, w

t
, r

t
}

t=0
that satisfies the equilibrium condition (4.13)-(4.15).
68
The real business cycles literature usually does not explain how the equi-
librium is achieved. Implicitly, it is assumed that there exists an auctioneer in
the market, who adjust the price towards the equilibrium. This adjsutment
process - often named as tˆatonnement process as in Walrasian economics - is
a commen solution to the adjsutment problem within the neoclassical general
equilibrium framework.
4.2.4 The Contingency Plan
It is not difficult to find that the sequence of equilibrium prices {p

t
, w

t
, r

t
}

t=0
depends on the expected technology shock {
ˆ
A
t
}

t=0
. This indeed creates a
problem how to express the equilibrium prices and the equilibrium demand
and supply which are supposed to be made at the beginning of period 0 when
the technology shock from period 1 onward are all unobserved. The Real
Business Cycle theorists circumvent this problem skillfully and ingeniously.
Their approach is to use the so-called “contingency plan”. As written by
Stokey et al.(1989, p17):
In the stochastic case, however, this is not a sequence of num-
bers but a sequence of contingency plans, one for each period.
Specifically, consumption c
t
, and end-of-period capital k
t+1
in
each period t = 1, 2, ... are contingent on the realization of the
shocks z
1
, z
2
, ..., z
t
. This sequence of realization is information
that is available when the decision is being carried out but is un-
known in period 0 when the decision is being made. Technically,
then, the planner chooses among sequence of functions....
Thus the sequence of equilibrium prices and the sequence of equilibrium
demand and supply are all contingent on the realization of the shock regard-
less that the corresponding decisions are all made at the beginning of period
0.
4.2.5 The Dynamics
Assmue that the decisions are all contingent on the future shock {A
t
}

t=0
,
and the prices are all at their equilibrium, the dynamics of our hypothetic
economy can be fully described by the following equations regarding the
69
realized consumption, employment, output, investment and capital stock:
c
t
= G
c
(k
t
, A
t
) (4.16)
n
t
= G
n
(k
t
, A
t
) (4.17)
y
t
= f(k
t
, n
t
, A
t
) (4.18)
i
t
= y
t
−c
t
(4.19)
k
t+1
= (1 −δ)k
t
+ f(k
t
, n
t
, A
t
) −c
t
(4.20)
given the initial condition k
0
and the sequence of technology shock{A
t
}

t=0
.
This indeed provides another important property of the RBC economy. Al-
though the model specifies the decision behaviors for both household and
the firm and therefore the two market forces, demand and supply, for all
major three markets: output, capital and labor markets, the dynamics of the
economy is reflected by only the household behavior, which concerns only
one side of the market forces, the output demand and input supply. The
decision of the firm does not have any impact! This is certainly due to the
equilibrium feature of the model specification.
4.3 The Standard RBC Model
4.3.1 The Model Structure
The hypothetical economy we have presented in the last section is only for ex-
plaining the theory (from microeconomic point of view) behind the standard
RBC economy. The model specified in (4.16) - (4.20) is not testable with
empirical data, not only because we do not specify the stochastic process of
{A
t
}

t=0
, but also we do not introduce the growth factor. For an empirically
testable standard RBC model, we employ here the specifications of a model
as formulated by King, et al. (1988). This empirically oriented formulation
will be repeatedly used in the subequent chapters of this volumn.
Let K
t
denote for the aggregate capital stock, Y
t
for aggregate output
and C
t
for aggregate consumption. The capital stock in the economy follow
the transition law:
K
t+1
= (1 −δ)K
t
+Y
t
−C
t
, (4.21)
where δ is the depreciation rate. Assume that the aggregate production
function take the form:
Y
t
= A
t
K
1−α
t
(N
t
X
t
)
α
(4.22)
where N
t
is per capita working hours; α is the share of labor in the produc-
tion function; A
t
is the temporary shock in technology and X
t
the permanent
70
shock that follows a growth rate γ. Note that here X
t
includes not only the
growth in labor force, but also the growth in productivity. Apparently, the
model is nonstationary due to X
t
. To transform the model into a station-
ary formulation, we divide both sides of equation (4.21) by X
t
(when Y
t
is
expressed by (4.22)):
k
t+1
=
1
1 +γ
_
(1 −δ)k
t
+ A
t
k
1−α
t
(n
t
N/0.3)
α
−c
t
_
, (4.23)
where by definition, k
t
≡ K
t
/X
t
, c
t
≡ C
t
/X
t
and n
t
≡ 0.3N
t
/N with N to be
the sample mean of N
t
. Note that n
t
is often regarded to be the normalized
hours. The sample mean of n
t
is equal to 30 %, which, as pointed out by
Hansen (1985), is the average percentage of hours attributed to work.
The representative agent in the economy is assumed to make the decision
sequence {c
t
}

t=0
and {n
t
}

t=0
so as to
max E
0

t=0
β
t
[log c
t
+ θ log(1 −n
t
)] , (4.24)
subject to the state equation (4.23). The exogenuous variable in this model
is the temporary shock A
t
, which may follow an AR(1) process:
A
t+1
= a
0
+ a
1
A
t
+ ǫ
t+1
, (4.25)
with ǫ
t
to be an i.i.d. innovation.
Note that there is no possibility to derive the exact solution with the
standard recursive method. We therefore have to rely on an approximate
solution. For this, we shall first derive the first-order conditions.
4.3.2 The First-Order Conditions
As we have discussed in the previous chapters, there are two types of first-
order conditions, the Euler equations and the equations from the Lagrangian.
The Euler equation is not used in our suggested solution method. We nev-
ertheless still present it here as an exercise and demonstrate that the two
first-order conditions are virtually equivalent.
The Euler equation
To derive the Euler equation, our first task is to transform the model into a
setting that the state variable k
t
does not appear in F(·) as we have discussed
in Chapter 1 and 2. This can be done by assuming k
t+1
(instead of c
t
) along
71
with n
t
as model’s decision variables. In this case, the objective function
takes the form:
max E
0

t=0
β
t
U(k
t+1
, n
t
, k
t
, A
t
),
where
U(k
t+1
, n
t
, k
t
, A
t
) = log [(1 −δ)k
t
+ y
t
−(1 +γ)k
t+1
] (4.26)
+θ log(1 −n
t
).
Note that here we have used (4.23) to express c
t
in the utility function; y
t
is
the stationary output via the following equation:
y
t
= A
t
k
1−α
t
(n
t
N/0.3)
α
. (4.27)
Given such an objective function, the state equation (4.23) can simply be
ignored in deriving the first-order condition. The Bellman equation in this
case can be written as
V (k
t
, A
t
) = max
k
t+1
,nt
U(k
t+1
, n
t
, k
t
, A
t
) +βE [V (k
t+1
, A
t+1
)] . (4.28)
The necessary condition for maximizing the right side of Bellman equation
(4.28) is given by
∂U
∂k
t+1
(k
t+1
, n
t
, k
t
, A
t
) + βE
_
∂V
∂k
t+1
(k
t+1
, A
t+1
)
_
= 0; (4.29)
∂U
∂n
t
(k
t+1
, n
t
, k
t
, A
t
) = 0. (4.30)
Meanwhile the application of Benveniste-Scheinkman formula gives
∂V
∂k
t
(k
t
, A
t
) =
∂U
∂k
t
(k
t+1
, n
t
, k
t
, A
t
) (4.31)
Using (4.31) to express
∂V
∂k
t+1
(k
t+1
, A
t+1
) in (4.29), we obtain
∂U
∂k
t+1
(k
t+1
, n
t
, k
t
, A
t
) + βE
_
∂U
∂k
t+1
(k
t+2
, n
t+1
, k
t+1
, A
t+1
)
_
= 0. (4.32)
From equation (4.23) and (4.26),
∂U
∂k
t+1
(k
t+1
, n
t
, k
t
, A
t
) = −
1 +γ
c
t
;
∂U
∂k
t+1
(k
t+2
, n
t+1
, k
t+1
, A
t+1
) =
(1 −δ)k
t+1
+ (1 −α)y
t+1
k
t+1
c
t+1
;
∂U
∂n
t
(k
t+1
, n
t
, k
t
, A
t
) =
αy
t
n
t
c
t

θ
1 −n
t
.
72
Substituting the above expressions into (4.32) and (4.30), we establish the
following Euler equations:

1 + γ
c
t
+ βE
_
(1 −δ)k
t+1
+ (1 −α)y
t+1
k
t+1
c
t+1
_
= 0; (4.33)
αy
t
n
t
c
t

θ
1 −n
t
= 0. (4.34)
The First-Order Condition Derived from the Lagrangian
Next, we turn to derive the first-order condition from the Lagrangian. Define
the Lagrangian:
L =

t=0
β
t
[log(c
t
) +θ log(1 −n
t
)] −

t=0
E
t
_
β
t+1
λ
t+1
_
k
t+1

1
1 +γ
_
(1 −δ)k
t
−A
t
k
1−α
t
(n
t
N/0.3)
α
+ c
t
_
__
Setting zero the derivatives of L with respect to c
t
, n
t
, k
t
and λ
t
, one obtains
the following first-order condition:
1
c
t

β
1 +γ
E
t
λ
t+1
= 0; (4.35)
−θ
1 −n
t
+
αβy
t
(1 +γ)n
t
E
t
λ
t+1
= 0; (4.36)
β
1 + γ
E
t
λ
t+1
_
(1 −δ) +
(1 −α)y
t
k
t
_
= λ
t
; (4.37)
k
t+1
=
1
1 + γ
[(1 −δ)k
t
+y
t
−c
t
] , (4.38)
with y
t
again to be given by (4.27).
Next we try to demonstrate that the two first-order conditions: (4.33)
- (4.34) and (4.35) - (4.38) are virtually equivalent. This can be done as
follows. First, expressing [β/(1+γ)]E
t
λ
t+1
in terms of 1/c
t
(which is implied
by (4.35)), we obtain from (4.37)
λ
t
=
(1 −δ)k
t
+ (1 −α)y
t
k
t
c
t
This further indicates that
E
t
λ
t+1
=
(1 −δ)k
t+1
+ (1 −α)y
t+1
k
t+1
c
t+1
(4.39)
73
Substituting (4.39) into (4.35), we obtain the first Euler equation (4.33).
Second, expressing [β/(1+γ)]E
t
λ
t+1
again in terms of 1/c
t
, and substituting
it into (4.36), we then verify the second Euler equation (4.34).
4.3.3 The Steady States
Next we try to derive the corresponding steady states. The steady state A
t
is simply determined from (4.25). The other steady states are given by the
following proposition:
Proposition 4 Assume A
t
has a steady state A. Equation (4.35) - (4.38)
along with (4.27), when evaluated in terms of their certainty equivalence
forms, determines at least two steady states: one is on boundary, denoted as
(¯ c
b
, ¯ n
b
,
¯
k
b
, ¯ y
b
,
¯
λ
b
), and the other is interior, denoted as (¯ c
i
, ¯ n
i
,
¯
k
i
, ¯ y
i
,
¯
λ
i
). In
particular,
¯ c
b
= 0,
¯ n
b
= 1,
λ
b
= ∞,
¯
k
b
=
_
A/(δ +γ)
¸
1/α
_
¯
N/0.3
_
,
¯ y
b
= (δ + γ)
¯
k
b
;
and
n
i
= αφ/ [(α + θ)φ −(δ + γ)θ] ,
k
i
= A
1/α
φ
−1/α
n
i
_
¯
N/0.3
_
c
i
= (φ −δ −γ)k
i
,
λ
i
= (1 +γ)/β¯ c
i
y
i
= φk
i
,
where
φ =
(1 +γ) −β(1 −δ)
β(1 −α)
(4.40)
Note that we have used the first-order condition from the Lagrangian
to derive the above two steady states. Since the two first-order conditions
are virtually equivalent, we expect that the same steady states can also be
derived from the Euler equations.
2
2
We however leave this exercise to the readers.
74
4.4 Solving Standard Model with Standard
Parameters
To obtain the solution path of standard RBC model, we shall first specify the
values of the structural parameters defined in the model. These are reported
in Table 4.1. We shall remark that these parameters are close to the standard
parameters, which can often be found in the RBC literature.
3
More detailed
discussion regarding the parameter selection and estimation will be provided
in the next chapter.
Table 4.1: Parameterizing the Standard RBC Model
α γ β δ θ
¯
N a
0
a
1
σ
ǫ
0.58 0.0045 0.9884 0.025 2 480 0.0333 0.9811 0.0189
The solution method that we shall employ is the method of linear-quadratic
approximation with our suggested algorithm as discussed in Chapter 1. As-
sume that decision rule take the form
c
t
= G
11
A
t
+ G
12
k
t
+g
1
(4.41)
n
t
= G
21
A
t
+ G
22
k
t
+ g
2
(4.42)
Our first step is to compute the first- and the second-order partial derivatives
with respect to F and U where
F(k, c, n, A) =
1
1 +γ
_
(1 −δ)k + Ak
1−α
(nN/0.3)
α
−c
_
U(c, n) = log(c) + θ log(1 −n)
All these partial derivatives along with the steady states can be used as
the inputs in the GAUSS procedure provided in Appendix II of Chapter
1. Executing this procedure will allow us to compute the undetermined
coefficients G
ij
and g
i
(i, j = 1, 2) in the decision rule as expressed in (4.41)
and (4.42). In Figure 4.1 and Figure 4.2 , we illustrate the solution paths, one
for the deterministic and the other for the stochastic case, for the variables
k
t
, c
t
, n
t
and A
t
.
3
Indeed, they are essentially the same as the parameters choosed by King et al. (1988)
except the last three parameters, which is related to the stochastic equation (4.25).
75
Figure 4.1: The Deterministic Solution to the Benchmark RBC Model for
the Standard Parameters
Figure 4.2: The Stochastic Solution to the Benchmark RBC Model for the
Standard Parameters
Elsewhere (see Gong and Semmler 2001), we have compared these solu-
76
tion paths to those computed by Campbell (1994)’s log-linear approximate
solution. We find that the two solutions are surprisingly close to the extent
that one can hardly observe the differences.
4.5 The Generalized RBC Model
In recent work stochastic dynamic optimization models of general equilib-
rium type have been presented in the literature that go beyond the standard
model as discussed in chapter 4.3. The recent models are more demanding
in terms of solution and estimation methods. Although we will not attempt
to estimate those more generalized versions it is worth presenting the main
structure of the generalized models and to demonstrate how they can be
solved by using dynamic programming as introduced in chapter 1.6. Since
those generalized versions can easily give rise to multiple equilibria and his-
tory dependence, in chapter 7 we will return to these types of models.
4.5.1 The Model Structure
The generalization of the standard RBC model is usually undertaken either
with respect to preferences or with respect to the technology. With respect
to preferences utility functions such as
4
U(C, N) =
_
Cexp
_
−N
1+χ
1+χ
_
_
1−σ
−1
1 −σ
. (4.43)
are used. The utility function (4.43) with consumption, C and labor
effort, N, as arguments is non-separable in consumption and leisure. We can
obtain from (4.43) a separable utility function such as
5
U(C, N) =
C
1−σ
1 −σ

N
1+χ
1 + χ
(4.44)
which is additively separable in consumption and leisure.
Moreover, by setting σ = 1 we obtain simplified preferences in log utility
and leisure.
U(C, N) = logC −
N
1+χ
1 + χ
(4.45)
4
See Bennet and Farmer (2000) and Kim (2004).
5
See Benhabib and Nishimura (1998) and Harrison(2001).
77
As concerning production technology and markets usually the following
generalizations are introduced.
6
First we can allow for increasing returns to
scale. Although the individual-level private technology generates constant
returns to scale with
Y
i
= AK
a
i
L
b
i
, a + b = 1
externalities of the form
A = (K
a
N
b
)
ξ
, ξ ≥ 0
may allow for an aggregate production function with increasing returns
to scale, with
Y = K
α
N
β
α > 0, β > 0, α + β ≥ 1 (4.46)
whereby α = (1 + ξ)a, β = (1 + ξ)b and Y, K, N represent total output,
the aggregate stock of capital and labor hours respectively. The increasing
returns to scale technology represented by equ. (4.46) can also be interpreted
as a monopolistic competition economy where there are rents arising from
inverse demand curves for monopolistic firms.
7
Another generalization as concerning production technology can be un-
dertaken by introducing adjustment cost of investment.
8
We may write
˙
K
K
= ϕ
_
I
K
_
(4.47)
with the assumption of
ϕ(δ) = 0, ϕ

(δ) = 1, ϕ
′′
(δ) ≤ 0 (4.48)
Hereby δ is the depreciation rate of the capital stock. A functional form
that satisfies the three conditions of equ. (4.47) is
δ
__
I
δK
_
1−ϕ
−1
_
/(1 −ϕ) (4.49)
For ϕ = 0, one has the standard model without adjustment cost, namely
˙
K = I −δK.
The above describes a type of a generalized model that we want to solve.
6
See Kim (2003a).
7
See Farmer (1999, ch. 7.2.4) and Benhabib and Farmer (1994).
8
For the following, see Lucas and Prescott (1971) and Kim (2003a) and Boldrin, Christiano
and Fisher (2001).
78
4.5.2 Solving the Generalized RBC Model
We write the model in continuous time and in its deterministic form
C
t
, N
t
= max
_

t=0
e
−ρt
U(C
t
, N
t
)dt. (4.50)
s.t.
˙
K
K
= ϕ
_
I
t
K
t
_
(4.51)
where preferences U(C
t
, N
t
) are chosen such as represented by equ. (4.45)
and the technology such as specified in eqs. (4.46) and (4.47)-(4.49). The
latter are used in equ. (4.51).
For solving the model with the dynamic programming algorithm as pre-
sented in chapter 1.6 we use the following parameters.
Table 4.2: Parameterizing the General Model
a b χ ρ δ ξ ϕ
0.3 0.7 0.3 0.05 0.1 0 0.05
Note that in order to stay as close as possible to the standard RBC model
we avoid externalities and therefore presume ξ = 0. Preferences and technol-
ogy (by using adjustment cost of capital) take on, however, a more general
form. Note that the dynamic decision problem (4.50)-(4.51) are written in
continuous time. Its discretization for the use of dynamic programming is un-
dertaken through the Euler procedure. Using then the deterministic variant
of our dynamic programming algorithm of ch. 1.6 and a grid for the capital
stock, K, in the interval [0.1, 10], we obtain the following value function,
representing the total utility along the optimal paths of C and N.
79
-18
-17
-16
-15
-14
-13
-12
-11
-10
-9
0 1 2 3 4 5 6 7 8 9 10
Figure 4.3: Value function for the general model
The out-of-steady state of the two choice variables namely consumption
and labor effort,(depending in feedback form on the state variable capital
stock, K) are shown in Figure 4.4.
0
0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
1.8
2 0 4 6 8 10
Figure 4.4: Paths of the Choice Variables C and N (depending on K)
As can be clearly observed the value function is concave, see Figure 4.3.
Moreover, see Figure 4.4, consumption is low and labor effort high, when
capital stock is low (so that capital stock can be built up) and consumption
is high and labor effort low (so that capital stock will shrink). The dynam-
ics generated by the response of consumption and labor effort to the state
variable capital stock, K, will thus lead to a convergence toward an interior
steady state of the capital stock, consumption and labor effort.
80
4.6 Conclusions
In this chapter we first have introduced the intertemporal general equilibrium
model on which the standard RBC model is constructed. Our attempt was
to reveal the basis of some intertemporal decision problems behind the RBC
model. This will be important for the subsequent part of the book. We then
introduce the empirically oriented standard RBC model and, based on our
previous chapters, present the solution to the model. This provides us the
groundwork for an empirical assessment of the RBC model, the task that
will be addressed in the next chapter. We also have presented a generalized
RBC model and solved it by using dynamic programming.
4.7 Appendix: The Proof of Proposition 4
Evaluating (4.35) - (4.38) along with (4.27) in their certainty equivalence
form, and assuming all the variables to be at their steady states, we obtain
1
c

β
1 +γ
λ = 0 (4.52)
−θ
1 −n
+ βλ
αy
(1 +γ)n
= 0 (4.53)
β
1 + γ
λ
_
(1 −δ) +
(1 −α)y
k
_
= λ (4.54)
k =
1
1 +γ
_
(1 −δ)k +y −c
¸
(4.55)
where from (4.27) y is given by
y = Ak
1−α
_
nN
0.3
_
α
(4.56)
The derivation of the boundary steady state is trival. Replace c, ¯ n and λ
with c
b
, ¯ n
b
and λ
b
. We find equation (4.52) - (4.54) are satisfied. Further,
¯
k
b
and ¯ y
b
can be derived from (4.55) and (4.56) given c
b
, ¯ n
b
and λ
b
.
Next we try to derive the interior steady state. For notational conve-
nience, we ignore the subscript i, and therefore all the steady state values
are understood as the interior steady states. Let
y
k
= φ. By (4.54), we obtain
81
y = φk where φ is defined by (4.40) in the proposition. According to (4.56),
y
n
= A
_
y
φ
_
1−α
_
nN
0.3
_
α
n
−1
= A
_
y
n
_
1−α
φ
α−1
_
N
0.3
_
α
= A
1
α
φ
1−
1
α
_
N
0.3
_
(4.57)
Therefore,
k
n
=
¯ y/¯ n
¯ y/
¯
k
= A
1
α
φ

1
α
_
N
0.3
_
(4.58)
Expressing y in terms of φk and then expressing k in terms of (4.58), we thus
obtain from (4.55)
c = (φ −δ −γ)k
= (φ −δ −γ)A
1
α
φ

1
α
_
N
0.3
_
n (4.59)
Meanwhile, (4.52) and (4.53) imply that
c =
_
α
θ
_
(1 −n)
_
y
n
_
=
_
α
θ
_
(1 −n)A
1
α
φ
1−
1
α
_
N
0.3
_
(4.60)
Equating (4.59) and (4.60), we thus solve the steady state of the labor
effort n. Once n is solved, k can be obtained from (4.58), y from (4.57) and
c either from (4.59) or from (4.60). Finally, λ can be derived from (4.52).
Chapter 5
The Empirics of the Standard
Real Business Cycle Model
5.1 Introduction
Many real business cycle theorists believe that the RBC model is empirically
powerful in explaining the stylized facts of business cycles. Moreover, some
theorists suggest that even a simple RBC model, like the standard model we
presented in the last chapter, despite its rather simple structure, can generate
the time series to match the macroeconomic moments from empirically ob-
served time series data. As Plosser (1989) pointed out, “the whole idea that
such a simple model with no government, no market failures of any kind, ra-
tional expectations, no adjustment cost could replicate actual experience this
well is very surprising.” (Plosser 1989:...) However, these early assessments
have also become the subject to various criticisms. In this chapter, we shall
provide a comprehensive empirical assessment of the standard RBC model.
Our previous discussion, especially in the first three chapters, has provided
a technical preparation for this assessment. We shall first estimate the stan-
dard RBC model and then evaluate the calibration results that have been
stated by early real business cycle theorists. Yet before we commence with
our formal study, we shall first demonstrate the efficiency of our estimation
strategy as discussed in Chapter 3.
5.2 Estimation with Simulated Data
In this section, we shall first apply our estimation strategy using simulated
data. The simulated data are shown in Figure 4.2 in the last chapter, which
is generated from a stochastic simulation of our standard model for the given
82
83
standard parameters reported in Table 4.1. The purpose of this estimation is
to test whether our suggested estimation strategy works well. If the strategy
works well, we expect that the estimated parameters will be close to the
standard parameters that we know in advance.
5.2.1 The Estimation Restriction
The standard model implies certain restrictions on the estimation. For the
GMM estimation, the model implies the following moment restrictions:
1
E [(1 +γ)k
t+1
−(1 −δ)k
t
−y
t
+c
t
] = 0; (5.1)
E [c
t
−G
11
k
t
−G
12
A
t
−g
13
] = 0; (5.2)
E [n
t
−G
21
k
t
−G
22
A
t
−g
23
] = 0; (5.3)
E
_
y
t
−A
t
k
1−α
t
(n
t
N/0.3)
α
_
= 0. (5.4)
Note that the moment restrictions could be nonlinear as in (5.4). Yet for the
ML estimation, the restriction Bz
t
+ Γx
t
= ε
t
(see equation (3.6))
2
must be
linear. Therefore, we shall first linearize (4.27) using Taylor approximation.
This gives us
y
t
=
y
A
A
t
+ (1 −α)
y
k
k
t
+ α
y
n
n
t
−y
We thus obtain for the ML estimation:
z
t
=
_
¸
¸
_
k
t
c
t
n
t
y
t
_
¸
¸
_
, B =
_
¸
¸
_
1 + γ 0 0 0
−G
12
1 0 0
−G
22
0 1 0
−(1 −α)
y
k
0 −α
y
n
1
_
¸
¸
_
x
t
=
_
¸
¸
¸
¸
_
k
t−1
c
t−1
y
t−1
A
t
1
_
¸
¸
¸
¸
_
, Γ =
_
¸
¸
_
−(1 −δ) 1 −1 0 0
0 0 0 −G
11
−g
13
0 0 0 −G
21
−g
23
0 0 0 −
y
A
y
_
¸
¸
_
1
Note that the parameters in the exogenuous equation could be independently estimated
since they have no feed back into the other equations. Therefore, there is no necessity to
include the exogenuous equation into the restriction.
2
Note that here we use z
t
rather than y
t
in order to distinguish it from the output y
t
in
the model.
84
5.2.2 Estimation with Simulated Data
Although the model involve many parameters, we shall only estimate the
parameters α, β, δ and θ. These are the parameters that are empirically un-
known and thus need to be estimated when we later turn to the estimation
using empirical data.
3
Table 5.1 reports our estimation with the standard
deviation included in parenthesis.
Table 5.1: GMM and ML Estimation Using Simulated Data
α β δ θ
True 0.58 0.9884 0.025 2
ML Estimation 0.5781
(2.4373E−006)
0.9946
(5.9290E−006)
0.0253
(3.4956E−007)
2.1826
(3.7174E−006)
1st Step GMM 0.5796
(6.5779E−005)
0.9821
(0.00112798)
0.02500
(9.6093E−006)
2.2919
(0.006181723)
2nd Step GMM 0.5800
(2.9958E−008)
0.9884
(3.4412E−006)
0.02505
(9.7217E−007)
2.000
(4.6369E−006)
One finds that the estimations from both methods are quite satisfying.
All estimated parameters are close to their true parameters, the parameters
that we have used to generate the data. This demonstrates the efficiency
of our estimation strategy. Yet, the GMM estimation after the second step
is more accurate than the ML estimation. This is probably because the
GMM estimation does not need to linearize (5.4). However, we should also
remark that the difference is minor whereas the time required by the ML
estimation is much shorter. The latter holds not only because the GMM
needs an additional step, but also each single step of the GMM estimation
takes much more time for the algorithm to converge. Approximately 8 hours
on a Pentium III computer is required for each step of the GMM estimation
whereas approximately only 4 hours is needed for the ML estimation.
In Figure 5.1 and 5.2, we also illustrate the surface of the objective func-
tion for our ML estimation. It shows not only the existence of multiple
optima, but also that the objective function is not smooth. This verifies the
necessity of using simulated annealing in our estimation strategy.
3
N can be regarded as the mean of per capital hour. γ does not appear in the model, but
create for transforming the model into a stationary version. The parameters with regard
to the AR(1) process of A
t
have no feedback effect on our estimation restrictions.
85
Figure 5.1: The β - δ Surface of the Objective Function for ML Estimation
Figure 5.2: The θ −α Surface of the Objective Function for ML Estimation
86
Next, we turn to estimating the standard RBC model with the data of
U. S. time series data.
5.3 Estimation with Actual Data
5.3.1 The Data Construction
Before estimating the benchmark model with empirical U. S. time series,
we shall first discuss the data that shall be used in our estimation. The
empirical studies of RBC models often require a considerable re-construction
of existing macroeconomic data. The time series employed for our estimation
should include A
t
, the temporary shock in technology, N
t
the labor input
(per capita hours), K
t
the capital stock, C
t
consumption and Y
t
output.
Here all the data can assume to be obtained from statistical sources except
the temporary shock A
t
. A common practice is to use the so-called Solow
residual for the temporary shock. Assuming that the production function
takes the form Y
t
= A
t
K
1−α
t
(N
t
X
t
)
α
, the Solow residual A
t
is computed as
follows:
A
t
=
Y
t
K
1−α
t
(N
t
X
t
)
α
(5.5)
=
y
t
k
1−α
t
N
t
α
(5.6)
where X
t
follows a constant growth rate:
X
t
= (1 +γ)
t
X
0
(5.7)
Thus, the Solow residual A
t
can be derived if the time series Y
t
, K
t
, N
t
,
and the parameter α and γ as well as the initial condition X
0
are given.
It should be noted to derive the temporary shock as above deserves some
criticism. Indeed, this approach uses macroeconomic data that posits a full-
employment assumption, a key assumption in the model, which makes it
different from other types of model, such as the business cycle model of
Keynesian type. Yet, since this is a common practice we shall here also
follow this procedure. Later in this chapter we will, however, deviate from
this practice and construct the Solow residual in a different way. This will
allow us to explore a puzzle, often called the technology puzzle in the RBC
literature.
In addition to the construction of the temporary shock, the existing
macroeconomic data (such as those from Citibase) also need to be adjusted
to be accommodated to the definition of the variables as defined in the model.
87
4
The national income as defined in the model is simply the sum of consump-
tion C
t
and investment, the latter increases the capital stock. Therefore, to
make the model’s national income account consistent with the actual data,
one should also include government consumption in C
t
. Further, it is sug-
gested that not only private investment but also government investment and
durable consumption goods (as well as inventory and the value of land) should
be included in the capital stock. Consequently, the service generated from
durable consumption goods and government capital stock should also appear
in the definition of Y
t
. Since such data are not readily available, one has to
compute them based on some assumptions.
Two Different Data Sets
To explore how this treatment of the data construction could affect the empir-
ical assessment of dynamic optimization model, we shall employ two different
data sets. The first data set, Data Set I, is constructed by Christiano (1987),
which has been used in many empirical studies of RBC model such as Chris-
tiano (1988) and Christiano and Eichenbaum (1992). The sample period of
this data set is from 1955.1 to 1984.4. All the data are quarterly.
The second data set, Data Set II, is obtained mainly from Citibase except
the capital stock which is taken from the Current Survey of Business. This
data set is taken without any modification. The sample period for this data
set is from 1952.1 to 1988.4.
5.3.2 Estimation with the Christiano Data Set
As we have mentioned before, the shock sequence A
t
is computed from the
time series of Y
t
, X
t
, N
t
and K
t
given the pre-determined α, which we denote
as α

(see equation (5.5 )). Here the time series X
t
is computed according to
equation (5.7) for the given parameter γ and the initial condition X
0
. In this
estimation, we set X
0
to 1 and γ to 0.0045.
5
Meanwhile we shall consider
two α

: one is the standard value 0.58 and the other is 0.66. We shall remark
that 0.66 is the estimated α in Christiano and Eichenbaum (1992). Table 5.2
reports the estimations after the second step of GMM estimation.
4
For a discussion on data definitions in RBC models, see Cooley and Prescott (1995).
5
We choose 0.0045 for γ to make k
t
, c
t
and y
t
to be stationary (note that we have defined
k
t

Kt
Xt
, c
t

Ct
Xt
and y
t

Yt
Xt
). Also, γ is the standard parameter as choosen by King
et al. (1988).
88
Table 5.2: Estimation with Christiano’s Data Set
α β δ θ
α

= 0.58 0.5800
(2.2377E−008)
0.9892
(0.0002)
0.0209
(0.0002)
1.9552
(0.0078)
α

= 0.66 0.6600
(9.2393E−006)
0.9935
(0.0002)
0.0209
(0.0002)
2.1111
(0.0088)
As one can observe, all the estimations seem to be quite reasonable. For
α

= 0.58, the estimated parameters, though somewhat deviating from the
standard parameters used in King et al. (1988), are all within the economi-
cally feasible range. For α

= 0.66, the estimated parameters are very close
to those in Christiano and Eichenbaum (1992). Even the parameter β esti-
mated here is very close to the β chosen (rather than estimated) by them.
In both cases, the estimated α is almost the same as the pre-determined α.
This is not surprising due to the way of computing the temporary shocks
from the Solow residual. Finally, we should also remark that the standard
errors are unusually small.
5.3.3 Estimation with the NIPA Data Set
As in the case of the estimation with Christiano’s data set, we again set X
0
to 1 and γ to 0.0045. For the two pre-determined α,we report the estimation
results in Table 5.3.
Table 5.3: Estimation with the NIPA Data Set
α β δ θ
α

= 0.58 0.4656
(71431.409)
0.8553
(54457.907)
0.0716
(89684.204)
1.2963
(454278.06)
α

= 0.66 0.6663
(35789.023)
0.9286
(39958.272)
0.0714
(45174.828)
1.8610
(283689.56)
In contrast to the estimation using the Chrisitano data set, we find that
the estimations here are much less satisfying. They deviate significantly
from the standard parameters. Some of the parameters, especially β, are
not within the economically feasible range. Furthermore, the estimates are
all statistically insignificant due to the huge standard errors. Given such a
sharp contrast for the results of the two different data sets, one is forced to
think about the data issue involved in the current empirical studies of RBC
model. Indeed, this issue is mostly suppressed in the current debate.
89
5.4 Calibration and Matching to U. S. Time-
Series Data
Given the structural parameters, one can then assess the model to see how
closely it matches with the empirical data. The current method for assessing
a stochastic dynamic optimization model of RBC type is the calibration
technique, which has already been introduced in Chapter 3. The basic idea
of calibration is to compare the time series moments generated from the
model’s stochastic simulation to those from a sample economy.
The data generation process for this stochastic simulation is given by the
following equations:
c
t
= G
11
A
t
+ G
12
k
t
+g
1
(5.8)
n
t
= G
21
A
t
+ G
22
k
t
+ g
2
(5.9)
y
t
= A
t
k
1−α
t
(n
t
N/0.3)
α
(5.10)
A
t+1
= a
0
+ a
1
A
t
+ ǫ
t+1
(5.11)
k
t+1
=
1
1 +γ
[(1 −δ)k
t
+ y
t
−c
t
] (5.12)
where ǫ
t+1
∼ N(0, σ
2
ǫ
); and G
ij
and g
i
(i, j = 1, 2) are all the complicated
functions of the structural parameters and can be computed from our GAUSS
procedure of solving dynamic optimization problem as presented in Appendix
II of Chapter 1.
The structural parameters used for this stochastic simulation are defined
as follows. First, we employ those in Table 5.2 at α

= 0.58 for the parameters
α, β, δ and θ. The parameter γ is set 0.0045 as usual. The parameters a
0
, a
1
and σ
ǫ
in the stochastic equation (5.11) are estimated by the OLS method
given the time series computed from Solow residue.
6
The parameter N is
simply the sample mean of per capita hours N
t
. For convenience, we report
all these parameters in Table 5.4.
Table 5.4: Parameterizing the Standard RBC Model
α γ β δ θ N a
0
a
1
σ
ǫ
0.5800 0.0045 0.9892 0.0209 1.9552 299.03 0.0333 0.9811 0.0189
6
Note that these are the same as in Table 4.1.
90
Table 5.5 reports our calibration from 5000 stochastic simulations. In
particular, the moment statistics of the sample economy is computed from
Christiano’s data set, while those for model economy are generated from
our stochastic simulation using the data generation process (5.8) - (5.12).
Here, the moment statistics include the standard deviations of some major
macroeconomic variables and also their correlation coefficients. For the model
economy, we can further obtain the distribution of these moment statistics,
which can be reflected by their corresponding standard deviations (those in
paratheses). Of course, the distributions are derived from our 5000 thousand
stochastic simulations. All time series data are detrended by the HP-filter.
Table 5.5: Calibration of Real Business Cycle Model (numbers in parentheses
are the corresponding standard deviations)
Consumption Capital Employment Output
Standard Deviations
Sample Economy 0.0081 0.0035 0.0165 0.0156
Model Economy 0.0090 0.0037 0.0050 0.0159
(0.0012) (0.0007) (0.0006) (0.0021)
Correlation Coefficients
Sample Economy
Consumption 1.0000
Capital Stock 0.1741 1.0000
Employment 0.4604 0.2861 1.0000
Output 0.7550 0.0954 0.7263 1.0000
Model Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.2013 1.0000
(0.1089) (0.0000)
Employment 0.9381 −0.1431 1.0000
(0.0210) (0.0906) (0.0000)
Output 0.9796 0.0575 0.9432 1.0000
(0.0031) (0.1032) (0.0083) (0.0000)
5.4.1 The Labor Market Puzzle
By observing Table 5.5, we find that among the four key variables the volatil-
ities of consumption, capital stock and output could be regarded as being
91
somewhat matched. This is indeed one of the major early results of real
business cycle theorists. However, the matching does not hold for employ-
ment. Indeed, the employment in the model economy is excessively smooth.
These results are further demonstrated by Figure 5.3 and Figure 5.4,
where we compare the observed series from the sample economy to the sim-
ulated series with innovation given by the observed Solow residual. We shall
remark that the excessive smoothness of employment is a typical problem of
the standard model that has been addressed many times in literature.
Figure 5.3: Simulated and Observed Series (non detrended): solid line ob-
served and dashed line simulated
92
Figure 5.4: Simulated and Observed Series (detrended by HP filter): solid
line observed and dashed line simulated
Now let us look at the correlations. In the sample economy, there are ba-
sically two significant correlations. One is between consumption and output,
and the other is between employment and output. Both of these two corre-
lations have also been found in our model economy. However, in addition to
these two correlations, consumption and employment in the model economy
are also significantly correlated. We remark that such an excessive correlation
has, to our knowledge, not yet been discussed in the literature. The discus-
sions have often been focused on the correlation with output. However, this
excessive correlation should not be surprising given that in the RBC model
the movements of employment and consumption reflect the movements of the
same state variables: capital stock and temporary shock. They, therefore,
should be somewhat correlated. The excessive smoothness of labor effort and
the excessive correlation between labor and consumption will be taken up in
Chapter 8.
93
5.5 The Issue of the Solow Residual
So far we may argue that one of the major achievements of the standard RBC
model is that the model could explain the volatility of some key macroeco-
nomic variables such as output, consumption and capital stock. Meanwhile,
these results rely on the hypothesis that the driving force of the business cy-
cles are technology shocks, which are assumed to be measured by the Solow
residual. The measurement of technology can impact this result in two ways.
One is that the parameters a
0
, a
1
and σ
ǫ
in the stochastic equation (5.11)
are estimated from the time series computed from Solow residual. These pa-
rameters will directly affect the results from our stochastic simulation. The
second is that the Solow residual also serves as the sequence of observed
innovations that generate the graphs in Figure 5.3 and Figure 5.4. Those
innovations are often used in the RBC literature as an additional indicator
to support the model and its matching of the empirical data.
Another major presumption of the RBC literature, not yet shown in Table
5.5, but to be shown below, is the technology-driven hypthoses, i.e., the
technology is procyclical with output, consumption and employment. Of
course, this celebrated result is obtained also from the empirical evidence, in
which the technology is measured by the standard Solow residual.
There are several reasons to distrust the standard Solow residual as a
measure of technology shock. First, Mankiw (1989) and Summers (1986) have
argued that such a measure often leads to excessive volatility in productivity
and even the possibility of technological regress, both of which seem to be
empirically implausible. Second, It has been shown that the Solow residual
can be expressed by some exogenuous variables, for example demand shocks
arising from military spending (Hall 1988) and changed monetary aggregates
(Evan 1992), which are unlikely to be related to factor productivity. Third,
the standard Solow residual is not a reliable measure of technology schocks
if the cyclical variation in factor utilization are significant.
Considering that the Solow residual cannot be trusted as a measure of the
technology shock, researchers have now developed different methods to mea-
sure technology correctly. All these methods are focused on the computation
of factor utilization. There are basically three strategies. The first strategy
is to use an observed indicator to proxy for unobserved utilization. A typical
example is to employ electricity use as a proxy for capacity utilization (see
Burnside, Eichenbaum and Rebelo 1996). Another strategy is to construct
an economic model so that one could compute the factor utilization from
the observed variables (see Basu and Kimball 1997 and Basu, Fernald and
Kimball 1998). A third strategy identifies the technology shock through an
VAR estimate, see Gali (1999) and Francis and Ramey (2001, 2003).
94
Recently, Gali (1999) and Francis and Ramey (2001) have found that if
one uses the corrected Solow residual – if one identifies the technology shock
correctly – the technology shock is negatively correlated with employment
and therefore the celebrated discovery of the RBC literature must be re-
jected. Also, if the corrected Solow residual is significantly different from the
standard Solow residual, one may find that the standard RBC model, using
the Solow residual, can match well the variations in output, consumption
and capital stock not because the model has been constructed correctly, but
because it uses a problematic measure of technology.
All these are important problematic issues that are related to the Solow
residual. Indeed, if they are confirmed, the real business cycles model, as
driven by technology shocks, may not be a realistic paradigm for macroeco-
nomic analysis any more. In this section, we will refer to all of this recent
research employing our available data set. We will first follow Hall (1988) and
Evan (1992) to test the exogeneity of Solow residual. Yet in this test, we sim-
ply use government spending, which is available in our Christiano’s data set.
We will construct a measurement of the technology shock that represents a
corrected Solow residual. This construction needs data on factor utilization.
Unlike other current research, we use empirically observed data series, the
capacity utilization of manufacturing, IPXMCAQ, obtained from Citibase.
Given our new measurement, we then explore whether the RBC model is still
able to explain the business cycles, in particular the variation in consump-
tion, output and capital stock. We shall also look at whether the technology
still moves procyclically with output, consumption and employment.
5.5.1 Testing the Exogeneity of the Solow Residual
Apparently, a critical assumption of the Solow residual to be a correct mea-
surement of the technology shock is that A
t
should be purely exogenuous.
In other words, the distribution of A
t
cannot be altered by the change in
other exogenuous variables such as the variables of monetary and fiscal pol-
icy. Therefore, testing the exogeneity of the Solow residual becomes our first
investigation to explore whether the Solow residual is a correct measure of
the technology shock.
One possible way to test the exogeneity is to employ the Granger causality
test. This is also the approach taken by Evan (1992). For this purpose, we
shall investigate the following specification:
A
t
= c + α
1
A
t−1
+· · · + α
p
A
t−p

1
g
t−1
+· · · + β
p
g
t−p
+ ε
t
(5.13)
where g
t
in this test is government spending, as an aggregate demand vari-
able, over X
t
. If the Solow residual is exogenuous, g
t
should not have any
95
explanatory power for A
t
. Therefore our null hypothesis is
H
0
: β
1
= · · · = β
p
= 0 (5.14)
The rejection of the null hypothesis is sufficient for us to refute the assump-
tion that A
t
is strictly exogenuous. It is well known that the result of any
empirical test for Granger causality can be surprisingly sensitive to the choice
of lag length p. The test therefore will be conducted for different lag lengths
p’s. Table 5.6 provides the corresponding F-statistics computed for the dif-
ferent p’s.
Table 5.6: F−Statistics for Testing Exogeneity of Solow Residual
F− statistics degrees of freedom
p = 1 9.5769969 (1, 92)
p = 2 4.3041035 (2, 90)
p = 3 3.2775435 (4, 86)
p = 4 2.3825632 (6, 82)
From Table 5.6, one finds that at 5% significance level we can reject the
null hypothesis for all the lag lengths p

s.
7
5.5.2 Corrected Technology Shocks
The analysis in our previous section indicates that the hypothesis can re-
jected that the standard Solow residual be strictly exogenuous. On the other
hand, those policy variables, such as government spending, which certainly
represents a demand shock, may have explanatory power for the variation of
the Solow residual. This finding is consistent with the results in Hall (1988)
and Evan (1992). Therefore we may have sufficient reason to distrust the
Solow residual to be a good measure of the technology shock.
Next, we present a simple way of how to extract a technology shock from
macroeconomic data. If we look at the computation of the Solow residual,
e.g., equation (5.5), we find two strong assumptions inherent in the formula-
tion of the Solow residual. First, it is assumed that the capital stock is fully
utilized. Second, it is further assumed that the population follows a constant
growth rate which is a part of γ. In other words, there is no variation in
population growth. Next we shall consider the derivation of the corrected
Solow residual by relaxing those strong assumptions.
7
Although we are not able to obtain the same at 1% significance level.
96
Let u
t
denote the utilization of capital stock, which can be measured
by IPXMCAQ from Citibase. The observed output is thus produced by
the utilized capital and labor service (expressed in terms of total observed
working hours) via the production function:
Y
t
=
˜
A
t
(u
t
K
t
)
1−α
(Z
t
E
t
H
t
)
α
(5.15)
Above,
˜
A
t
is the corrected Solow residual (which is our new measure of
temporary shock in technology); E
t
is the number of workers employed; H
t
denotes the hours per employed worker;
8
and Z
t
is the permanent shock in
technology. Note that in this formulation, we interpret the utilization of
labor service only in terms of their working hours and therefore ignore their
actual effort, which is more difficult to be observed.
Let L
t
denote the permanent shock to population so that X
t
= Z
t
L
t
while
L
t
denotes the observed population so that
EtHt
Lt
= N
t
. Dividing both sides
of (5.15) by X
t
, we then obtain
y
t
=
˜
A
t
(u
t
k
t
)
1−α
(l
t
N
t
)
α
(5.16)
Above l
t

Lt
Lt
. Given equation (5.16), the corrected Solow residual
˜
A
t
can
be computed as
˜
A
t
=
y
t
(u
t
k
t
)
1−α
(l
t
N
t
)
α
(5.17)
Comparing this with equation (5.5), one finds that our corrected Solow resid-
ual
˜
A
t
will match the standard Solow residual A
t
if and only if both u
t
and
l
t
equal 1. Figure 5.5 compares these two time series: one for non-detrended
and the for detrended series.
8
Note that this is different from our notation N
t
before, which is the hours per capita.
97
Figure 5.5: The Solow Residual: standard (solid curve) and corrected (dashed
curve)
As one can observe in the figure 5.5, the two series follow basically the
same trend while their volatilities are almost the same.
9
However, in the short
run, they rather move in different directions if we compare the detrended
series.
5.5.3 Business Cycles with Corrected Solow Residual
Next we shall use the corrected Solow residual to test the technology-driven
hypothesis. In Table 5.7, we report the cross-correlations of the technology
shock to our four key economic variables: output, consumption, employment
and capital stock. These correlations are compared for three economies: the
RBC economy (whose statistics is computed from 5000 simulations), the
Sample Economy I (in which the technology shock is represented by the
standard Solow residual) and the Sample Economy II (to be represented by
the corrected Solow residual). The data series are again detrended by the
HP-filter.
9
A similar volatility is also found in Burnside et al. (1996).
98
Table 5.7: The Cross-Correlation of Technology
output consumption employment capital stock
RBC Economy 0.9903 0.9722 0.9966 -0.0255
(0.0031) (0.0084) (0.0013) (0.1077)
Sample Economy I 0.7844 0.7008 0.1736 -0.2142
Sample Economy II -0.3422 -0.1108 -0.5854 0.0762
If we look at the Sample Economy I, where the standard Solow resid-
ual is employed, we find that the technology shock is procyclical to output,
consumption and employment. This result is exactly predicted by the RBC
Economy and represents what has been called the technology-driven hypoth-
esis. However, if we use the corrected Solow residual, as in Sample Economy
II, we find a somewhat opposite result, especially for employment. We, there-
fore, can confirm the findings of the recent research by Basu, et al. (1998),
Gali (1999) and Francis and Ramey (2001, 2003).
To test whether the model can still match the observed business cycles,
we provide in Figure 5.6 a one time simulation with the observed innovation
given by the corrected Solow residual.
10
Comparing Figure 5.6 to Figure 5.4,
we find that the results are in sharp contrast to the prediction as referred in
the standard RBC model.
10
Here the structural parameters are still the standard ones as given in Table 5.4.
99
Figure 5.6: Sample and Predicted Moments with Innovation Given by Cor-
rected Solow Residual
5.6 Conclusions
The standard RBC model has been regarded as a model that replicates the
basic moment properties of U.S. macroeconomic time series data despite its
rather simple structure. Prescott (1986) summarizes the moment implica-
tions as indicating “the match between theory and observation is excellent,
but far from perfect”. Indeed, many have felt that the RBC research has
at least passed the first test. Yet this early assessment should be subject to
certain qualification.
In the first place, this early assessment builds on the reconstruction of
U.S. macroeconomic data. Through its necessity to accommodate the data to
the model’s implication, such data reconstruction seems to force the first mo-
ments of certain macroeconomic variables of the U.S. economy to be matched
by the model’s steady state at the given economically feasible standard pa-
100
rameters. The unusual small standard errors of the estimates seem to confirm
this suspicion.
Second, although one may celebrate the fit of the variation of consump-
tion, output and capital stock when the reconstructed data series are em-
ployed, we still cannot ignore the problems of excessive smoothness of labor
effort and excessive correlation between labor and consumption. Both of
these two problems are related to the labor market specification of the RBC
model. For the model to be able to replicate employment variation, it seems
necessary to make improvement upon the labor market specification. One
possible approach for such improvement is to allow for wage stickyness and
nonclearing of the labor market, a task that we will turn to in Chapter 8.
Third, the celebrated fit of the variation in consumption, output and
capital stock may rely on the incorrected measure of technology. As we have
shown in Figure 5.6, the match does not exist any more when we use the
corrected Solow residual as the observed innovations. This incorrect measure
of technology takes us to the technology puzzle: the procyclical technology,
driving the business cycle, may not be a very plausible hypothesis. As King
et. al (1999) pointed out, “it is the final criticism that the Solow residual is a
problematic measure of technology shock that has remained the Achilles heel
of the RBC literature.” In Chapter 9, we shall address the technology puzzle
again by introducing monopolistic competition into an stochastic dynamic
macro model.
Chapter 6
Asset Market Implications of
Real Business Cycles
6.1 Introduction
In this chapter, we shall study asset price implications of the standard RBC
model. The idea of employing a basic stochastic growth model to study asset
prices goes back to Brock and Mirman (1972) and Brock (1978, 1982). Asset
prices contain valuable information about intertemporal decision making and
dynamic models explaining asset pricing are of great importance in current
research. We here want to study a production economy with asset market
and spell out its implications for asset prices and returns. In particular we
will explore to what extend it can replicate the empirically found risk-free
interest rate, equity premium and Sharpe-ratio.
Modelling asset price and risk premia in models with production is much
more challenging than in exchange economies. Most of the asset pricing
literature has followed Lucas (1978) and Mehra and Prescott (1985) in com-
puting asset prices from the consumption based asset pricing models with
an exogenous dividend streams. Production economies offer a much richer,
and realistic environment. First, in economies with an exogenous dividend
stream and no savings consumers are forced to consume their endowment. In
economies with production where asset returns and consumption are endoge-
nous consumers can save and hence transfer consumption between periods.
Second, in economies with an exogenous dividend stream the aggregate con-
sumption is usually used as a proxy for equity dividends. Empirically, this
is not a very sensible modelling choice. Since there is a capital stock in pro-
duction economies, there is a more realistic modelling of equity dividends is
possible.
101
102
Although recently further extension of the baseline stochastic growth
model of RBC type were developed to match better actual asset market
characteristics
1
we will in the current paper by and large restrict ourselves
to the baseline model. The theoretical framework in this chapter is taken
from Lettau (1999), Lettau and Uhlig (1999) and Lettau, Gong and Semm-
ler (2001) where the closed-form solutions for risk premia of equity, long-term
real bonds, the Sharpe-ratio and the risk-free interest rates are presented in a
log-linearized RBC model as developed by Campbell (1994). Those equations
can be used as additional moment restrictions in the estimation process. We
introduce the asset pricing restrictions step-by-step to clearly demonstrate
the effect of each new restriction.
First, we estimate the model using only the restrictions of real variables
as in Chapter 5. The data employed for this estimation are taken again from
Christiano (1987).
2
We then add our first asset pricing restriction, the risk-
free interest rate. We use the observed 30-day T-bill rate to match the one-
period risk-free interest rate implied by the model.
3
The second asset pricing
restriction concerns the risk-return trade-off as measured by the Sharpe-
ratio, or the price of risk. This variable determines how much expected
return agents require per unit of financial risk. Hansen and Jagannathan
(1991) and Lettau and Uhlig (1999) show how important the Sharpe-ratio
4
is in evaluating asset prices generated by different models. Introducing the
Sharpe-ratio as moment restriction in the estimation procedure requires an
iterative procedure to estimate the risk aversion parameter. We find that the
Sharpe-ratio restriction affects the estimation of the model drastically. For
each estimation, we compute the implied premia of equity and long-term real
bond. Those values are then compared to the stylized facts of asset markets.
The estimation technique in this chapter follows the Maximum Likeli-
hood (ML) method as discussed in Chapter 4. All the estimations are again
conducted through the numerical algorithm, the simulated annealing. In ad-
dition, we introduce a diagnostic procedure developed by Watson (1993) and
Diebold, Ohanian and Berkowitz (1995) to test whether the moments pre-
dicted by the model, for the estimated parameters, can match the moments
of the actual macroeconomic time series. In particular, we use the variance-
covariance matrix of the estimated parameters to infer the intervals of the
1
See, for example Jerman (1998), Boldrin, Christiano and Fisher (2001) and Gr¨ une and
Semmler (2004b).
2
Using Christiano’s data set, we implicitly assume that the standard model can, to some
extent, replicate the moments of the real variables. Of course, as the previous chapter
has shown the standard model fails also along some real dimensions.
3
Using 30-day rate allows us to keep inflation uncertainty at a minimum.
4
See also Sharpe (1964)
103
moment statistics and to study whether the actual moments derived from
the sample data fall within this interval.
The rest of the chapter is organized as follows. In section 2, we use the
standard RBC model and log-linearization as proposed by Campbell (1994)
and derive the closed-form solutions for the financial variables. Section 3
presents the estimation for the model specified by different moments restric-
tions. In Section 4, we interpret our results and contrast the asset market
implications of our estimates to the stylized facts of the asset market. Section
5 compares the second moments of the time series generated from the model
to the moments of actual time series data. Section 6 concludes.
6.2 The Standard Model and Its Asset Pric-
ing Implications
6.2.1 The Standard Model
We follow Campbell (1994) and use the notation Y
t
for output, K
t
for capital
stock, A
t
for technology, N
t
for normalized labor input and C
t
for consump-
tion. The maximization problem of a representative agent is assumed to take
the form
5
Max E
t

i=0
β
i
_
C
1−γ
t+i
1 −γ
+ θ log(1 −N
t+i
)
_
subject to
K
t+1
= (1 −δ)K
t
+ Y
t
−C
t
with Y
t
given by (A
t
N
t
)
α
K
1−α
t
. The first order condition is given by the
following Euler equation:
C
−γ
t
= βE
t
_
C
−γ
t+1
R
t+1
_
(6.1)
1
θ(1 −N
t
)
= α
A
α
t
C
t
_
K
t
N
t
_
(1−α)
(6.2)
5
Note that, as in our previous modelling, we apply here the power utility as describing the
preferences of the representative household. For the model of asset market implications
other preferences, for example, habit formation are often employed, see Jerman (1998),
Boldri, Christiano and Fisher (2001) and Cochrane (2001, ch. 21) and Gr¨ une and Semmler
(2004b).
104
where R
t+1
is the gross rate of return on investment in capital, which is equal
to the marginal product of capital in production plus undepreciated capital:
R
t+1
≡ (1 −α)
_
A
t+1
N
t+1
K
t+1
_
α
+ 1 −δ.
We allow firms to issue bonds as well as equity. Since markets are competi-
tive, real allocations will not be affected by this choice, i.e. the Modigliani-
Miller theorem is presumed to hold. We denote the leverage factor (the ratio
of bonds outstanding and total firm value) as ζ.
At the steady state, the technology, consumption, output and capital
stock all grow at a common rate G = A
t+1
/A
t
. Hence, (6.1) becomes
G = βR
where R is the steady state of R
t+1
. Taking log for both sides, we can further
write the above equation as
γg = log(β) + r. (6.3)
where g ≡ log G and r ≡ log R. This defines the relation among g, r, β and γ.
In the rest of the chapter, we use g, r, and γ as parameters to be determined,
the implied value for the discount factor β can then be deduced from (6.3).
6.2.2 The Log-linear Approximate Solution
Outside the steady state, the model characterizes a system of nonlinear equa-
tions in the logs of technology a
t
, consumption c
t
, labor n
t
and capital stock
k
t
. Note that here we use the lower case letter as the corresponding log
variables of the capital letter. In the case of incomplete capital depreciation
δ < 1, the exact analytical solution to the model is not feasible. We therefore
seek instead an approximate analytical solution. Assume that the technology
shock follows an AR(1) process:
a
t
= φa
t−1
+ ε
t
(6.4)
with ε
t
to be the i.i.d. innovation: ε
t
∼ N(0, σ
2
ε
). Campbell (1994) shows
that the solution, using the log-linear approximation method, can be written
as
c
t
= η
ck
k
t
+ η
ca
a
t
(6.5)
n
t
= η
nk
k
t
+ η
na
a
t
(6.6)
105
and the law of motion of capital is
k
t
= η
kk
k
t−1
+ η
ka
a
t−1
(6.7)
where η
ck
, η
ca
, η
nk
, η
na
, η
kk
, and η
ka
are all the complicated functions of the
parameters α, δ, r, g, γ, φ and N (the steady state value of N
t
).
6.2.3 The Asset Price Implications
The standard RBC model as presented above has strong implications for
asset pricing. First, the Euler equation (6.1) implies the following expression
regarding the risk-free rate R
f
t
:
6
R
f
t
=
_
βE
t
_
(C
t+1
/C
t
)
−γ
¸_
−1
.
Writing the equation in the log form, we obtain the risk-free rate in logs as
7
r
f
t
= γE
t
∆c
t+1

1
2
γ
2
V ar∆c
t+1
−log β. (6.8)
Using the process of consumption, capital stock and technology as expressed
in (6.5), (6.7) and (6.4) while ignoring the constant term involving the dis-
count factor and the variance of consumption growth, we derive from (6.8)
(see Lettau et al. (2001) for the details.):
r
f
t
= γ
η
ck
η
ka
1 −η
kk
L
ε
t−1
, (6.9)
where L is the lag operator. Matching this process implied by the model
to the data will give us the first asset market restriction. The second asset
market restriction will be the Sharpe-ratio which summarizes the risk-return
trade-off:
SR
t
= max
all assets
E
t
_
R
t+1
−R
f
t+1
_
σ
t
[R
t+1
]
. (6.10)
Since the model is log-linear and has normal shocks, the Sharpe-ratio can be
computed in closed form as:
8
SR = γη
ca
σ
ε
. (6.11)
6
For further details, see Cochrane (2001, chs. 1.2 and 2.1). We also want to note that in
RBC models the risk-free rate is generally too high, and its standard deviation is much
too low compared to the data, see Hornstein and Uhlig (2001).
7
Note that here we use the formula Ee
x
= e
Ex+σ
2
x
/2
.
8
See Lettau and Uhlig (1999) for the details.
106
Lastly, we consider the risk premia of equity (EP) and long-term real bonds
(LTBP). These can be computed on the basis of the log-linear solutions (6.5)-
(6.7) as:
LTBP = −γ
2
β
η
ck
η
ka
1 −βη
kk
η
2
ca
σ
2
ε
(6.12)
EP =
_
η
dk
η
nk
−η
da
η
kk
1 −βη
kk
−γβ
η
ck
η
kk
1 −βη
kk
_
γη
2
ca
σ
2
ε
. (6.13)
Again we refer to Lettau (1999) and Lettau, Gong and Semmler (2001) for
details of those computations.
6.2.4 Some Stylized Facts
Table 6.1 summarizes some key facts on asset markets and real economic ac-
tivity for the US economy. A successful model should be consistent with these
basic moments of real and financial variables. In addition to the well-known
stylized facts on macroeconomic variables, we will consider the performance
of the model concerning the following facts of asset markets.
Table 6.1: Asset Market Facts and Real Variables
Standard Deviation Mean
GDP 1.72
Consumption 1.27
Investment 8.24
Labor Input 1.59
T-Bill 0.86 0.19
SP 500 7.53 2.17
Equity Premium 7.42 1.99
Long Bond Premium 0.21 4.80
Sharpe Ratio 0.27
Note: Standard Deviations for the real variables are taken from
Cooley and Prescott (1995). The series are H-P filtered. Asset market
data are from Lettau (1999). All data are from the U.S. economy at
quarterly frequency. Units are per cent per quarters. The Sharpe-ratio
is the mean of equity premium divided by its standard deviation.
The table shows that the equity premium is roughly 2% per quarter.
The Sharpe-ratio, which measures the risk-return trade-off, equals 0.27 in
107
post-war data of the U.S.. The standard deviation of the real variables
reveal the usual hierarchy in volatility with investment being most volatile
and consumption the smoothest variable. Among the financial variables the
equity price and equity premium exhibit the highest volatility, roughly six
times higher than consumption.
6.3 The Estimation
6.3.1 The Structural Parameters to be Estimated
The RBC model presented in section 2 contains seven parameters, α, δ, r, g, γ,
φ, and N. Recall that the discount factor is determined in (6.3) for given
values of g, r and γ. The parameter θ is simply dropped due to our log-linear
approximation. Of course, we would like to estimate as many parameters
as possible. However, some of the parameters have to be pre-specified. The
computation of technology shocks requires the values for α and g. In this
paper we use the standard values of α = 0.667 and g = 0.005. N is specified
as 0.3. The parameter φ is estimated independently from (6.4) by OLS
regression. This leaves the risk aversion parameter γ, the average interest
rate r and the depreciation rate δ to be estimated. The estimation strategy is
similar to Christiano and Eichenbaum (1992). However, they fix the discount
factor and the risk aversion parameter without estimating them. In contrast,
the estimation of these parameters is central to our strategy, as we will see
shortly.
6.3.2 The Data
For the real variables of the economy, we use the data set as constructed by
Christiano (1987). The data set covers the period from the third quarter
of 1955 through the fourth quarter of 1983 (1955.1-1983.4). As we have
demonstrated in the last chapter, the Christiano data set can match the real
side of the economy better than the commonly used NIPA data set. For
the time series of the risk-free interest rate, we use the 30-day T-bill rate to
minimize unmodeled inflation risk.
To make the data suitable for estimation, we are required to detrend the
data into their log-deviation form. For a data observation X
t
, the detrended
value x
t
is assumed to take the form log( X
t
/X
t
), where X
t
is the value
of X
t
on its steady state path, i.e., X
t
= (1 + g)
t−1
X
1
. Therefore, for the
given g, which could be calculated from the sample, the computation of
x
t
depends on X
1
, the initial X
t
. We compute this initial condition based on
108
the consideration that the mean of x
t
is equal to 0. In other words,
1
T
T

i=1
log(X
t
/X
t
) =
1
T
T

i=1
log(X
t
) −
1
T
T

i=1
log(X
t
)
=
1
T
T

i=1
log(X
t
) −
1
T
T

i=1
log(X
1
) −
1
T
T

i=1
log
_
(1 +g)
t−1
¸
= 0
Solving the above equation for X
1
, we obtain
X
1
=
1
T
exp
_
T

i=1
log(X
t
) −
T

i=1
log
_
(1 +g)
t−1
¸
_
.
6.3.3 The Moment Restrictions of Estimation
For the estimation in this chapter, we use the maximum likelihood (ML)
method as discussed in Chapter 3. In order to analyze the role of each
restriction, we introduce the restrictions step-by-step. First, we constrain
the risk aversion parameter r to unity and use only moment restrictions of
the real variables, i.e. (6.5) - (6.7) so we can compare our results to those
in Christiano and Eichenbaum (1992). The remaining parameters thus to be
estimated are δ and r. We call this Model 1 (M1). The matrices for the ML
estimation are given by
B =
_
_
1 0 0
−η
ck
1 0
−η
ck
0 1
_
_
, Γ =
_
_
−η
kk
−η
ka
0
0 0 −η
ca
0 0 −η
na
_
_
,
y
t
=
_
_
k
t
c
t
n
t
_
_
, x
t
=
_
_
k
t−1
a
t−1
a
t
_
_
.
After considering the estimation with the moment restrictions only for real
variables, we add restrictions from asset markets one by one. We start by
including the following moment restriction of the risk-free interest rate in
estimation while still keeping risk aversion fixed at unity:
E
_
b
t
−r
f
t
_
= 0
109
where b
t
denotes the return on the 30-day T-bill and the risk-free rate r
f
t
is
computed as in (6.9). We refer to this version as Model 2 (M2). In this case
the matrices B and Γ and the vectors x
t
and y
t
can be written as
B =
_
¸
¸
_
1 0 0 0
−η
nk
1 0 0
−η
nk
0 1 0
0 0 0 1
_
¸
¸
_
, Γ =
_
¸
¸
_
−η
kk
−η
ka
0 0
0 0 −η
ca
0
0 0 −η
na
0
0 0 0 −1
_
¸
¸
_
,
y
t
=
_
¸
¸
_
k
t
c
t
n
t
b
t
_
¸
¸
_
, x
t
=
_
¸
¸
_
k
t−1
a
t−1
a
t
r
f
t
_
¸
¸
_
.
Model 3 (M3) uses the same moment restrictions as Model 2 but leaves the
risk aversion parameter r to be estimated rather than fixed to unity.
Finally, we impose that the dynamic model should generate a Sharpe-ratio
of 0.27 as measured in the data (see Table 1). We take this restriction into
account in two different ways. First, as a shortcut, we fix the risk aversion
at 50, a value suggested in Lettau and Uhlig (1999) for generating a Sharpe-
ratio of 0.27 using actual consumption data. Given this value, we estimate the
remaining parameter δ and r. This will be called Model 4 (M4). In the next
version, Model 5 (M5), we are simultaneously estimating γ while imposing
a Sharpe-ratio restriction of 0.27. Recall from (6.11) that the Sharpe-ratio
is a function of risk aversion, the standard deviation of the technology shock
and the elasticity of consumption with respect to the shock η
ca
. Of course,
η
ca
is itself a complicated function of γ. Hence, the Sharpe-ratio restriction
becomes
γ =
0.27
η
ca
(γ)σ
ε
. (6.14)
This equation provides the solution of γ, given the other parameters δ
and r. Since it is nonlinear in γ, we, therefore, have to use an iterative
procedure to obtain the solution. For each given δ and r, searched by the
simulated annealing, we first set an initial γ, denoted by γ
0
. Then the new γ,
denoted by γ
1
, is calculated from (6.14), which is equal to 0.27/[η
ca

0

ε
].
This procedure is continued until convergence.
We summarize the different cases in Table 6.2, where we start by using
only restrictions on real variables and fix risk aversion to unity (M1). We add
the risk-free rate restriction keeping risk aversion at one (M2), then estimate
110
it (M3). Finally we add the Sharpe-ratio restriction, fixing risk aversion at 50
(M4) and estimate it using an iterative procedure (M5). For each model we
also compute the implied values of the long-term bond and equity premium
using (6.12) and (6.13).
Table 6.2: Summary of Models
Models Estimated Parameters Fixed Parameters Asset Restrictions
M1 r, δ γ = 1 none
M2 r, δ γ = 1 risk-free rate
M3 r, δ, γ risk-free rate
M4 r, δ γ = 50 risk-free rate, Sharpe-ratio
M5 r, δ, γ risk-free rate, Sharpe-ratio
6.4 The Estimation Results
Table 6.3 summarizes the estimations for the first three models. Standard
errors are in parentheses. Entries without standard errors are preset and
hence are not estimated.
Table 6.3: Summary of Estimation Results
9
Models δ r γ
M1 0.0189 (0.0144) 0.0077 (0.0160) prefixed to 1
M2 0.0220 (0.0132) 0.0041 (0.0144) prefixed to 1
M3 0.0344 (0.0156) 0.0088 (0.0185) 2.0633 (0.4719)
Consider first Model 1, which only uses restrictions on real variables. The
depreciation rate is estimated to be just below 2% which close to Christiano
and Eichenbaum’s (1992) results. The average interest rate is 0.77% per
quarter or 3.08% on an annual basis. The implied discount factor computed
from (6.3) is 0.9972. These results confirm the estimates in Christiano and
Eichenbaum (1992). Adding the risk-free rate restriction in Model 2 does
not significantly change the estimates. The discount factor is slightly higher
while the average risk-free rate decreases.
However the implied discount factor now exceeds unity, a problem also
encountered in Eichenbaum et al. (1988). Christiano and Eichenbaum (1992)
9
The standard errors are in parenthesis.
111
avoid this by fixing the discount factor below unity rather than estimating it.
Model 3 is more general since the risk aversion parameter is estimated instead
of fixed at unity. The ML procedure estimates the risk aversion parameter
to be roughly 2 and significantly different from 1, the value implied from log-
utility function. Adding the risk-free rate restriction increases the estimates
of δ and r somewhat. Overall, the model is able to produce sensible parameter
estimates when the moment restriction for the risk-free rate is introduced.
While the implications of the dynamic optimization model concerning
the real macroeconomic variables could be considered as fairly successful,
the implications for asset prices are dismal. Table 6.4 computes the Sharpe-
ratio as well as risk premia for equity and long term real bond using (6.11) -
(6.13). Note that these variables are not used in the estimation of the model
parameters. The leverage factor ζ is set to 2/3 for the computation of the
equity premium.
10
Table 6.4: Asset Pricing Implications
Models SR LT BPrem EqPrem
M1 0.0065 0.000% -0.082%
M2 0.0065 -0.042% -0.085%
M3 0.0180 -0.053% -0.091%
Table 6.4 shows that the RBC model is not able to produce sensible asset
market prices when the model parameters are estimated from the restrictions
derived only from the real side of the model (or, as in M3, adding the risk-free
rate). The Sharpe-ratio is too small by a factor of 50 and both risk premia
are too small as well, even negative for certain cases. Introducing the risk-
free rate restriction improves the performance only a little bit. Next, we will
try to estimate the model by adding the Sharpe-ratio moment restrictions.
The estimation is reported in Table 6.5.
Table 6.5: Matching the Sharpe-Ratio
Models δ r γ
M4 1 0 prefixed to 50
M5 1 1 60
10
This value is advocated in Benninga and Protopapadakis (1990).
112
Model 4 fixes the risk aversion at 50. As explained in Lettau and Uhlig
(1999), such a high level of risk aversion has the potential to generate rea-
sonable Sharpe-ratios in consumption CAPM models. The question now is
how the moment restrictions of the real variables are affected by such a high
level of risk aversion. The first row of Table 6.5 shows that the resulting
estimates are not sensible. The estimates for the depreciation factors and
the steady-state interest rate converge to the pre-specified constraints,
11
or
the estimation does not settle down to an interior optimum. This implies
that the real side of the model does not yield reasonable results when risk
aversion is 50. High risk aversion implies a low elasticity of intertemporal
substitution so that agents are very reluctant to change their consumption
over time.
Trying to estimate risk aversion while matching the Sharpe-ratio gives
similar results. It is not possible to estimate the RBC model with simulta-
neously satisfying the moment restrictions from both the real side and the
financial side of the model, as shown in the last row in Table 7.5. Again the
parameter estimates do converge to pre-specified constraints. The depreci-
ation rate converges again to unity as does the steady-state interest rate r.
The point estimate of risk aversion parameter is high (60). The reason is
of course that a high Sharpe-ratio requires high risk aversion. The tension
between the Sharpe-ratio restriction and the real side of the model causes
the estimation to fail. It demonstrates again that the asset pricing charac-
teristics that one find in the data are fundamentally incompatible with the
standard RBC model.
6.5 The Evaluation of Predicted and Sample
Moments
Next we provide a diagnostic procedure to compare the second moments
predicted by the model with the moments implied by the sample data. Our
objective here is to ask whether our RBC model can predict the actual mo-
ments of the time series for both the real and asset market. The moments
are revealed by the spectra at various frequencies. We remark that a simi-
lar diagnostic procedure can be found in Watson (1993) and Diebold et. al
(1995).
Given the observations on k
t
, a
t
and the estimated parameters of our log-
linear model, the predicted c
t
and n
t
can be constructed from the right hand
side of (6.5) - (6.6) with k
t
and a
t
to be their actual observations. We now
11
We constraint the estimates to lie between 0 and 1.
113
consider the possible deviations of our predicted series from the sample series.
We hereby employ our most reasonable estimated Model 3. We can use the
variance-covariance matrix of our estimated parameters to infer the intervals
of our forecasted series hence also the intervals of the moment statistics that
we are interested in.
Figure 6.1: Predicted and Actual Series: solid lines (predicted series), dotted
lines (actual series) for A) consumption, B) labor, C) risk-free interest rate
and D) long term equity excess return; all variables HP detrended (except
for excess equity return)
114
Figure 6.2: The Second Moment Comparison: solid line (actual moments),
dashed and dotted lines (the intervals of predicted moments) for A) con-
sumption, B) labor, C) risk-free interest rate and D) long-term equity excess
return; all variables detrended (except excess equity return)
Figure 6.1 presents the Hodrick-Prescott (HP) filtered actual and pre-
dicted time series data on consumption, labor effort, risk-free rate and eq-
uity return. As shown in Chapter 5, the consumption series can somewhat
be matched whereas the volatility in the labor effort as well as in the risk-free
rate and equity excess return cannot be matched. The insufficient match of
the latter three series are further confirmed by Figure 6.2 where we compare
the spectra calculated from the data samples to the intervals of the spectra
predicted, at 5% significance level, by the models.
A good match of the actual and predicted second moments of the time
series would be represented by the fact that the solid line falls within the
interval of the dashed and dotted lines. In particular the time series for
115
labor effort, risk-free interest rate and equity return fail to do so.
6.6 Conclusions
Asset prices contain valuable information about intertemporal decision mak-
ing of economic agents. This chapter has estimated the parameters of a
standard RBC model taking the asset pricing implications into account. We
introduce model restrictions based on asset pricing implications in addition
to the standard restrictions of the real variables and estimate the model by
using ML method. We use the risk-free interest rate and the Sharpe-ratio
in matching actual and predicted asset market moments and compute the
implicit risk premia for long real bonds and equity. We find that though the
inclusion of the risk-free interest rate as a moment restriction can produce
sensible estimates, the computed Sharpe-ratio and the risk premia of long-
term real bonds and equity are in general counterfactual. The computed
Sharpe-ratio is too low while both risk premia are small and even negative.
Moreover, the attempt to match the Sharpe-ratio in the estimation process
can hardly generate sensible estimates. Finally, given the sensible param-
eter estimates, the second moments of labor effort, risk-free interest rate
and long-term equity return predicted by the model do not match well the
corresponding moments of the sample economy.
We conclude that the standard RBC model cannot match the asset mar-
ket restrictions, at least with the standard technology shock, constant rela-
tive risk aversion (CRRA) utility function and no adjustment costs. Other
researchers have looked at some extensions of the standard model such as
technology shocks with a greater variance, other utility functions, for exam-
ple, utility functions with habit formation, and adjustment costs of invest-
ment. The latter line of research has been pursued by Jerman (1998) and
Boldrin, Christiano and Fisher (1996, 2001).
12
Those extensions of the stan-
dard model are, to least to a certain extent, more successful in replicating
stylized asset market characteristics, yet these extensions frequently use ex-
treme parameter values to be able to match the asset price characteristics
of the model with the data. Moreover, the approximation methods for solv-
ing the models might not be very reliable since accuracy tests for the used
approximation methods are still missing.
13
12
see also W¨ohrmann, Semmler and Lettau (2001) where time varying characteristics of
asset prices are explored.
13
See Gr¨ une and Semmler (2004b).
Part III
Beyond the Standard Model —
Model Variants with Keynesian
Features
116
Chapter 7
Multiple Equilibria and History
Dependence
7.1 Introduction
One of the important features of Keynesian economics is that there is no
unique equilibrium toward which the economy moves. The dynamics are
open ended in the sense that it can move to low level, or high level of eco-
nomic activity and expectations and policy may become important to tild
the dynamics to one or the other outcomes.
1
In recent times such type of dynamics have been found in a large num-
ber of dynamic models with intertemporal optimization. Those models have
been called indeterminacy models. Theoretical models of this type are re-
viewed in Benhabib and Farmer (1999) and Farmer (12001) and an empirical
assessment is given in Schmidt-Grohe (2002). Some of the models are real
models, RBC models, with increasind returns to scale and or more gener-
ate preferences, as introduced in chapter 4.5, that can exhibit locally stable
steady state equilibria giving rise to sun spot phenomena.
2
Multiplicity of
equilibria can also arise here as a consequence of increasing returns to scale
and/or more general preferences. Others are monetary models, where con-
sumers’ welfare is affected positively by consumption and cash balances and
negatively by the labor effort and an inflation gap from some target rates.
For certain substitution properties between consumption and cash holdings
those models admit unstable as well as stable high level and low level steady
1
In Keynes (1936) such an open ended dynamic is described in Chapter 5 of his book;
Keynes describes here how higher or lower ”long term positions” associated with higher
or lower output and employment might be generated by expectational forces.
2
See, for example Kim (2004).
117
118
states. Here can be indeterminacy in the sense that any initial condition
in the neighborhood of one of the steady-states is associated with a path
forward or away from that steady state, see Benhabib et al. (2001).
When indeterminacy models exhibit multiple steady state equilibria, where
a middle one is an attractor (repellor), than this permits any path in the
vicinity of the steady state equilibria to move back to (away from) the steady
state equilibrium. Despite some unresolved issues in the literature on mul-
tiple equilibria and indeterminacy
3
it has greatly enriched macrodynamic
modelling.
Pursuing this line of research we show that one does not need to refer
to increasing returns to scale or specific preferences to obtain such results.
We show that due to the adjustment cost of capital we may obtain non-
uniqueness of steady state equilibria in an otherwise standard dynamic op-
timization version. Multiple steady state equilibria, in turn, may lead to
thresholds separating different domains of attraction of capital stock, con-
sumption, employment and welfare level. As our solution shows thresholds
are important as separation points below or above which it is advantages to
move to lower or higher levels of capital stock, consumption, employment and
welfare. Our model version thus can explain of how the economy becomes
history dependent and moves, after a shock or policy influences, to a low or
high level equilibria in employment and output.
Recently, numerous stochastic growth model have employed adjustment
cost of capital. In non-stochastic dynamic models adjustment cost has al-
ready been used in Eisner and Stroz (1963), Lucas (1967) and Hayashi (1982).
Authors in this tradition have also distinguished the absolute adjustment
cost depending on the level of investment from the adjustment cost depend-
ing on investment relative to capital stock (Uzawa 1968, Asada and Semm-
ler 1995).
4
In stochastic growth models adjustment cost has been used in
Boldrin, Christiano and Fisher (2001) and adjustment cost associated with
the rate of change of investment can be found in Christiano, Eichenbaum
and Evans (2001). In this chapter we want to show that adjustment cost
in a standard RBC model can give rise to multiple steady state equilibria.
The existence of multiple steady state equilibria entails thresholds that sep-
arate different domains of attraction for welfare and employment and allow
3
Although these are important variants of macrodynamic models with optimizing behavior,
as, however, recently has been shown, see Beyn, Pampel and Semmler (2001) and Gr¨ une
and Semmler (2004a), indeterminacy is likely to occur solely at a point in these models,
at a threshold, and not within a set as the indeterminacy literature often claims.
4
In Feichtinger et al. (2000) it is shown that relative adjustment cost, where investment as
well as cpital stock enters the adjustment costs, likely to generate multiplicity of steady
state equilibria.
119
for an open ended dynamics depending on the initial conditions and policy
influences impacting the initial conditions.
The remainder of this chapter is organized as follows. Section 2 presents
the model. Section 3 studies the adjustment cost function which gives rise to
multiple equilibria and section 4 demonstrates the existence of a threshold
5
that separates different domains of attraction. Section 5 concludes the chap-
ter. The proof of the propositions in the text is provided in the appendix.
7.2 The Model
The model we present here is the standard stochastic growth model of RBC
type, as in King et al. (1988), augmented by adjustment cost. The state
equation for the capital stock takes the form:
K
t+1
= (1 −δ)K
t
+ I
t
−Q
t
(7.1)
where
I
t
= Y
t
−C
t
(7.2)
and
Y
t
= A
t
K
1−α
t
(N
t
X
t
)
α
(7.3)
Above, K
t
, Y
t
, I
t
, C
t
and Q
t
are the level of capital stock, output, investment,
consumption and adjustment cost, all in real terms. N
t
is per capita working
hours; A
t
is the temporary shock in technology; and X
t
is the permanent
(including both population and productivity growth) shock that follows a
growth rate γ. The model is non-stationary due to X
t
. To transform the
model into a stationary version we need to detrend the variables. For this
purpose, we divide both sides of equation (7.1) - (7.3) by X
t
:
k
t+1
=
1
1 + γ
[(1 −δ)k
t
+ i
t
−q
t
]
i
t
= y
t
−c
t
(7.4)
y
t
= A
t
k
1−α
t
(n
t
N/0.3)
α
(7.5)
Above, k
t
, c
t
, i
t
, y
t
and q
t
are the detrended variables for K
t
, C
t
, I
t
, Y
t
and
Q
t
.
6
; n
t
is defined to be
0.3Nt
N
with N denoting the sample mean of N
t
. Note
that here n
t
is often regarded to be the normalized hours with its sample
5
In the literature those thresholds have been called Skiba-points (see Skiba, 1978).
6
In particular, k
t

Kt
Xt
, c
t

Ct
Xt
, i
t

It
Xt
, y
t

Yt
Xt
and q
t

Qt
Xt
.
120
mean equal to 30 %. We shall assume that the detrended adjustment cost q
t
depends on detrended investment i
t
:
q
t
= q(i
t
)
The objective function takes the form
max E
0

t=0
β
t
[log c
t
+θ log(1 −n
t
)]
To solve the model, we first form the Lagrangian:
L =

t=0
β
t
[log(c
t
) + θ log(1 −n
t
)] −

t=0
E
t
_
β
t+1
λ
t+1
_
k
t+1

1
1 + γ
[(1 −δ)k
t
+i
t
−q(i
t
)]
__
Setting to zero the derivatives of L with respect to c
t
, n
t
, k
t
and λ
t
, we obtain
the following first-order conditions:
1
c
t

β
1 +γ
E
t
λ
t+1
[1 −q

(i
t
)] = 0 (7.6)
−θ
1 −n
t
+
β
1 + γ
E
t
λ
t+1
αy
t
n
t
[1 −q

(i
t
)] = 0 (7.7)
β
1 + γ
E
t
λ
t+1
_
(1 −δ) +
(1 −α)y
t
k
t
[1 −q

(i
t
)]
_
= λ
t
(7.8)
k
t+1
=
1
1 + γ
[(1 −δ)k
t
+ i
t
−q(i
t
)] (7.9)
with i
t
and y
t
to be given by (7.4) and (7.5) respectively.
The following proposition concerns the steady states
Proposition 5 Assume A
t
has a steady state A. Equation (7.4) - (7.9),
when evaluated at their certainty equivalence form, determine the following
steady states:
[bφ(i) −1][i −q(i)] −aφ(i)
1−
1
α
−q(i) = 0 (7.10)
k =
1
γ +δ
[i −q(i)] (7.11)
121
n =
_
φ(i)
A
_
1
α
k
0.3
N
(7.12)
y = φ(i)k (7.13)
c = y −i (7.14)
λ =
(1 +γ)
βc
_
1 −q

(i)
¸ (7.15)
where
a =
α
θ
A
1
α
(N/0.3) (7.16)
b =
(1 +
α
θ
)
γ + δ
(7.17)
φ(i) =
m
1 −q

(i)
(7.18)
and
m =
(1 +γ) −(1 −δ)β
β(1 −α)
. (7.19)
Note that equation (7.10) determines the solution of i, depending on the
assumption that all other steady states can be uniquely solved via (7.11) -
(7.15). Also if q(i) is linear, equ. (7.18) indicates that φ(·) is constant, and
then from (7.10) i is uniquely determined. Therefore, if q(i) is linear, no
multiple steady state equilibria will occur.
7.3 The Existence of Multiple Steady States
Many non-linear forms of q(i) may lead to a multiplicity of equilibria. Here
we shall only consider that q(i) takes the logistic form:
q(i) =
q
0
exp(q
1
i)
exp(q
1
i) + q
2

q
0
1 + q
2
(7.20)
Figure (7.1) shows a typical shape of q(i) while Figure (7.2) shows the cor-
responding derivative q

(i) with the parameters given in Table 7.1:
Table 7.1: The Parameters in the Logistic Function
q
0
q
1
q
2
2500 0.0034 500
122
Figure 7.1: The Adjustment Cost Function
Figure 7.2: The Derivatives of the Adjustment Cost
Note that in equation (7.20) we posit a restriction such that q(0) = 0.
Another restriction, which is reflected in Figure (7.1), is that
123
q(i) < i (7.21)
indicating that the adjustment cost should never be larger than the invest-
ment itself. Both restrictions seem reasonable.
The two critical points, i
1
and i
2
, in Figure (7.1) and (7.2) need to be
discussed. These are the two points at which q

(i) = 1. Meanwhile between
i
1
and i
2
, q

(i) > 1. When q

(i) > 1, equation (7.18) indicates that φ(i) is
negative since from (7.19) m > 0. A negative φ(i) will lead to a complex
φ(i)
1−
1
α
in (7.10). We therefore obtain two feasible ranges for the existence
of steady states of i : one is (0, i
1
) and the other is (i
2
, +∞).
The following proposition concerns the existence of multiplicity of equi-
libria.
Proposition 6 Let f(i) ≡ [bφ(i) −1][i −q(i)] −aφ(i)
1−
1
α
−q(i), where q(i)
takes the form as in (7.20) subject to (7.21). Assume bm−1 > 0.
• There exists one and only one i, denoted i
1
in the range (0, i
1
) such
that f(i) = 0.
• In the range (i
2
, +∞), if there are some i

s at which f(i) < 0, then
there must exist two i’s, denoted as i
2
and i
3
such that f(i) = 0.
We shall first remark that the assumption bm− 1 > 0 is plausible given
the standard parameters for b and m also f(i) = 0 is indeed the equation
(2.10). Therefore, this proposition indicates a condition under which multiple
steady states will occur. In particular, if there exist some i

s in (i
2
, +∞) at
which f(i) < 0, three equilibria will occur. A formal mathematical proof
of the existence of this condition is intractable. In Figure 7.3, we show the
curve of f(·) given the empirically plausible parameters as reported in Table
7.1 and other standard parameters as given in Table 7.2. The curves cut the
zero line three times, indicating three steady states of i.
124
Figure 7.3: Multiplicity of Equilibria: f(i) function
Table 7.2: The Standard Parameters of RBC Model
7
α γ β δ θ N A
0.5800 0.0045 0.9930 0.2080 2.0189 480.00 1.7619
We use a numerical method to compute the three steady states of i : i
1
, i
2
and i
3
. Given these steady states, the other steady states are computed by
(7.11) - (7.15). Table 7.3 uses essentially the same parameters as reported in
Table 5.1. The result of the computations of the three steady states are:
7
The N below is calculated on the assumption of 12 weeks per quarter and 40 working
hours per week. A is derived from A
t
= a
0
+ a
1
A
t−1
+ ε
t
, with estimated a
0
and a
1
are
given respectively by 0.0333 and 0.9811.
125
Table 7.3: The Multiple Steady States
Corresponding to i
1
Corresponding to i
2
Corresponding to i
3
i 564.53140 1175.7553 4010.4778
k 18667.347 11672.642 119070.11
n 0.25436594 0.30904713 0.33781724
c 3011.4307 2111.2273 5169.5634
y 3575.9621 3286.9826 9180.0412
λ 0.00083463435 0.0017500565 0.00019568017
V 1058.7118 986.07481 1101.6369
Note that above, V is the value of the objective function at the corre-
sponding steady states. Therefore it reflects one corresponding welfare level.
The steady state corresponding to i
2
deserves some discussion. The welfare
of i
1
is larger than i
2
, its corresponding steady states in capital, output, and
consumption are all greater than those corresponding to i
2
, yet its corre-
sponding steady state in labor effort and thus employment is larger for i
2
.
This already indicates that i
2
may be inferior in terms of the welfare, at least
compared to i
1
. On the other hand i
1
and i
3
exhibit also differences in welfare
and employment.
7.4 The Solution
An analytical solution to the dynamics of the model with adjustment cost
is not feasable. We, therefore, have to rely on an approximate solution. For
this, we shall first linearize the first-order conditions around the three sets of
steady states as reported in Table 7.3. Then by applying an approximation
method as discussed in chapter 2, we obtain three sets of linear decision rules
for c
t
and n
t
corresponding to our three sets of steady states. For notational
convenience, we shall denote them as decision rule Set 1, Set 2 and Set 3
corresponding to i
1
, i
2
and i
3
. Assume that A
t
stays at its steady state A so
that we only consider the deterministic case. The ith set of decision rule can
then be written as
c
t
= G
i
c
k
t
+ g
i
c
(7.22)
n
t
= G
i
n
k
t
+ g
i
n
(7.23)
where i = 1, 2, 3. We therefore can simulate the solution paths by using the
above two equations together with (7.4), (7.5) and (7.9). The question then
126
arises as to which set of decision rule, as expressed by (7.22) and (7.23),
should be used. The likely conjecture is that this will depend on the initial
condition k
0
. For example, if k
0
is close to the k
1
, the steady state of k
corresponding to i
1
, we would expect that the decision rule 1 is appropriate.
This consideration further indicates that there must exist some thresholds
for k
0
at which intervals are divided regarding which set of decision rule
should be applied. To detect such thresholds, we shall compute the value of
the objective functions starting at different k
0
for our three decision rules.
Specifically, we compute V , where
V ≡

t=0
β
t
[log(c
t
) + θ log(1 −n
t
)]
We should choose the range of k
0
’s that covers the three steady states of
k’s as reported in Table 7.3. In this exercise, we choose the range [8000, 138000]
for k
0
. Figure 7.4 compares the welfare performance of our three sets of linear
decision rules.
Figure 7.4: The Welfare Performance of three Linear Decision Rules (solid,
dotted and dashed lines for decision rule Set 1, Set 2 and Set 3 respectively)
127
From Figure 7.4, we first realize that the value of the objective function is
always lower for decision rule Set 2. This is likely to be caused by its inferior
welfare performance at the steady states for which we compute the decision
rule. However, there is an intersection of the two welfare curves corresponding
to the decision rules Set 1 and Set 3. This intersection, occuring around
k
0
= 36900, can be regarded as a threshold. If k
0
< 36900, the household
should choose decision rule Set 1 since it will allow the household to obtain a
higher welfare. On the other hand, if k
0
> 36900, the household may choose
decision rule Set 3, since this leads to a higher welfare.
7.5 Conclusion
This chapter shows that the introduction of adjustment cost of capital may
lead to non-uniqueness of steady state equilibria in an otherwise standard
RBC model. Multiple steady state equilibria, in turn, lead to thresholds
separating different domains of attraction of capital stock, consumption, em-
ployment and welfare level. As our simulation shows thresholds are important
as separation points below or above which it is optimal to move to lower or
higher levels of capital stock, consumption, employment and welfare. Our
model thus can easily explain of how an economy become history dependent
and moves, after a shock, to a low or high level equilibrium in employment
and output. A variety of further economic models giving rise to multiple
equilibria and thresholds are presented in Gr¨ une and Semmler (2004a).
The above model stays as close as possible to the standard RBC model
except, for illustrative purpose, a specific form of adjustment cost of capital
was introduced. On the other hand, dynamic models giving rise to inde-
terminacy usually have to presume some weak externalities and increasing
returns and/or more general preferences. Kim (2004) discusses to what ex-
tent weak externalities in combination with more complex preferences will
produce indeterminacy. He in fact shows that if for the generalized RBC
model, as studied in chapter 4.5., there is a weak externality, ξ > 0, then
the model generates local indeterminacy. Overall, as shown in many recent
contributions, the issue of multiple equilibria and indeterminacy is an impor-
tant macroeconomic issue and should be pursued in further research in the
future.
128
7.6 Appendix: The Proof of Propositions 5
and 6
7.6.1 The Proof of Proposition 5
The certainty equivalence of equation (7.4) - (7.9) takes the form
1
c

β
1 + γ
λ
_
1 −q

(i)
¸
= 0 (7.24)
−θ
1 −n
+
β
1 + γ
λ
αy
n
_
1 −q

(i)
¸
= 0 (7.25)
β
1 + γ
_
(1 −δ) +
(1 −α)y
k
_
1 −q

(i)
¸
_
= 1 (7.26)
k =
1
1 +γ
_
(1 −δ)k + i −q(i)
¸
(7.27)
i = y −c (7.28)
y = Ak
1−α
(nN/0.3)
α
(7.29)
From (7.27),
k =
1
γ +δ
_
i −q(i)
¸
(7.30)
which is equation (7.11). From (7.26),
y
k
=
(1 +γ) −(1 −δ)β
β(1 −α)
_
1 −q

(i)
¸
= φ(i) (7.31)
which is equation (7.13) with φ(i) given by (refequ7.18) and (7.19). Further
from (7.29),
y
k
= Ak
−α
(nN/0.3)
α
(7.32)
Using (7.31) to express
y
k
, we derive from (7.32)
n =
_
φ(i)
A
_
1
α
k
0.3
N
(7.33)
which is (7.12). Next, using (7.28) to express i while using φ(i)k for y. We
derive from (7.30) (γ + δ)k = φ(i)k −c −q(i), which is equivalent to
c = (φ(i) −γ −δ)k −q(i)
129
= c
1
(φ(i), k, q(i)) (7.34)
Meanwhile from (7.24) and (7.25):
β
1 + γ
λ
_
1 −q

(i)
¸
=
1
c
=
θ
(1 −n)α
_
y
n
_
−1
The first equation is equivalent to (7.15) while the second equation indicates
c =
(1 −n)α
θ
_
y
n
_
(7.35)
where from (7.29),
y
n
= A
_
k
n
_
1−α
(N/0.3)
α
= A
_
y
n
_
1−α
_
y
k
_
α−1
(N/0.3)
α
= A
1
α
φ(i)
1−
1
α
(N/0.3) (7.36)
Substitute (7.36) into (7.35) and then express n in terms of (7.33):
c =
(1 −n)α
θ
A
1
α
φ(i)
1−
1
α
(N/0.3)
=
α
θ
A
1
α
(N/0.3)φ(i)
1−
1
α

α
θ
A
1
α
(N/0.3)φ(i)
1−
1
α
_
φ(i)
A
_
1
α
k
0.3
N
= aφ(i)
1−
1
α

α
θ
φ(i)k
= c
2
(φ(i), k)
= c
2
(φ(i), k) (7.37)
with a given by (7.16).
Let c
1
(·) = c
2
(·). We thus obtain
(φ(i) −γ −δ)k −q(i) = aφ(i)
1−
1
α

α
θ
φ(i)k
which is equivalent to
_
(1 +
α
θ
)φ(i) −γ −δ
_
k = aφ(i)
1−
1
α
+ q(i)
Using (7.30) for k, we obtain
_
(1 +
α
θ
)φ(i) −γ −δ
_
1
γ + δ
(i −q(i)) = aφ(i)
1−
1
α
+ q(i) (7.38)
Equation (7.37) is equivalent to (7.10) with b given by (7.17).
130
7.6.2 The Proof of Proposition 6
Note that within our two ranges (0, i
1
) and (i
2
, +∞) φ(i) is positive and
hence f(i) is continuous and differentiable. In particular,
f

(i) = φ

(i)
_
b[i −q(i)] + a(
1
α
−1)φ(i)

1
α
_
+ (bm−1) (7.39)
where
φ

(i) =
mq
′′
(i)
[1 −q

(i)]
2
(7.40)
We shall first realized that a, b and m are all positive as indicated by (7.16)
and (7.17) and (7.19), and therefore the term b[i − q(i)] + a(
1
α
− 1)φ(i)

1
α
is positive. Meanwhile in the range (0, i
1
), q
′′
(i) > 0 and hence f

(i) > 0.
However, in the range (i
2
, +∞), q
′′
(i) < 0 and hence f

(i) can either be
positive or negative due to the sign of (bm−1).
Let us first consider the range (0, i
1
). Assume i →0. In this case, q(i) →0
and therefore f(i) → −aφ(0)
1−
1
α
< 0. Next assume i → i
1
, In this case,
q

(i) → 1 and therefore φ(i) → +∞. Since 1 −
1
α
is negative, this further
indicates φ(i)
1−
1
α
→ 0. Therefore f(i) → +∞. Since f

(i) > 0, by the
intermediate value theorem, there exists one and only one i such that f(i) =
0. We thus have proved the first part of the proposition.
Next we turn to the range (i
2
, +∞). To verify the second part of the
proposition, we only need to prove that f(i) →+∞and f(i) < 0 when i →i
2
and f(i) →+∞and f(i) > 0 when i →+∞. Consider first i →i
2
, Again in
this case, q

(i) → 1, φ(i) → +∞, and φ(i)
1−
1
α
→ 0. Therefore f(i) → +∞.
Meanwhile from (7.39), φ

(i) →−∞(since q
′′
(i) < 0) and therefore f

(i) < 0.
Consider now i → +∞, In this case, q

(i) → 0 and q(i) → q
m
where q
m
is the upper limit of q(i). This indicates that [i −q(i)] → +∞. Therefore
f(i) → +∞. Meanwhile, since q
′′
(i) → 0 and hence φ

(i) → 0. Therefore
f

(i) → (bm−1), which is positive. We thus have proved the second part of
the proposition.
Chapter 8
Business Cycles with
Nonclearing Labor Market
8.1 Introduction
As discussed in the previous chapters, especially in Chapter 5, the stan-
dard real business cycle (RBC) model, despite its rather simple structure,
can explain the volatilities of some macroeconomic variables such as output,
consumption and capital stock. However, to explain the actual variation in
employment the model generally predicts an excessive smoothness of labor
effort in contrast to empirical data. This problem of excessive smoothness
in labor effort is well-known in the RBC literature. A recent evaluation of
this failure of the RBC model is given in Schmidt-Grohe (2001). There the
RBC model is compared to indeterminacy models, as developed by Benhabib
and his co-authors. Whereas in RBC models the standard deviation of the
labor effort is too low, in indeterminacy models it turns out to be excessively
high. Another problem in RBC literature related to this, is that the model
implies a excessively high correlation between consumption and employment
while empirical data only indicates a week correlation. This problem of ex-
cessive correlation has, to our knowledge, not sufficiently been studied in the
literature. It has preliminarily been explored in Chapter 5 of this volume.
Lastly, the RBC model predicts a significantly high positive correlation be-
tween technology and employment whereas empirical research demonstrates,
at least at business cycle frequency, a negative or almost zero correlation.
These are the major issues that we shall take up from now on. We
want to note that the labor market problems, the lack of variation in the
employment and the high correlation between consumption and employment
in the standard RBC model, may be related to the specification of the labor
131
132
market, and therefore we could name it the labor market puzzle. In this
chapter we are mainly concerned with this puzzle. The technology puzzle,
that is, the excessively high correlation between technology and employment,
preliminarily discussed in Chapter 5, will be taken up in Chapter 9.
Although in the specification of its model structure (see Chapter 4), the
real business cycle model specifies both sides, the demand and supply, of a
market, the moments of the economy are however reflected by the variation
on one side of markets due to its general equilibrium nature for all markets
(including output, labor and capital markets). For the labor market, the
moments of labor effort result from the decision rule of the representative
household to supply labor. The variations in labor and consumption both
reflect the moments of the two state variables, capital and technology. It is
therefore not surprising why employment is highly correlated with consump-
tion and why the variation of consumption is a smooth as labor effort. This
further suggests that to resolve the labor market puzzle in a real business
cycle model, one has to make improvement upon labor market specifications.
One possible approach for such improvement is to introduce the Keynesian
feature into the model and to allow for wage stickiness and a nonclearing
labor market.
The research along the line of micro-founded Keynesian economics has
been historically developed by the two approaches: one is the disequilibrium
analysis, which had been popular before 1980’s and the other is the New
Keynesian analysis based on monopolistic competition. Attempts have now
been made recently that introduce the Keynesian features into a dynamic
optimization model. Rotemberg and Woodford (1995, 1999), King and Woll-
man (1999), Gali (1999) and Woodford (2003) present a variety of models
with monopolistic competition and sticky price. On the other hand, there
are models of efficiency wages where nonclearing labor market could occur.
1
We shall remark that in those studies with nonclearing labor market, an ex-
plicit labor demand function is introduced from the decision problem of the
firm side. However, the decision rule with regard to labor supply in these
models is often dropped because the labor supply no longer appears in the
utility function of the household. Consequently, the moments of labor effort
become purely demand-determined.
2
In this chapter, we will present a stochastic dynamic optimization model
of RBC type but argumented by Keynesian features along the line of above
1
See Danthine and Donaldson (1990, 1995), Benassy (1995) and Uhlig and Xu (1996)
among others.
2
The labor supply in the these models is implicitly assumed to be given exogenously, and
normalized to 1. Hence nonclearing of the labor market occurs if the demand is not equal
to 1.
133
consideration. In particular, we shall allow for wage stickiness
3
and non-
clearing labor market. However, unlike other recent models that drop the
decision rule of labor supply, we view the decision rule of the labor effort as
being derived from a dynamic optimization problem as a quite natural way to
determine desired labor supply.
4
With the determination of labor demand,
derived from the marginal product of labor and other factors,
5
the two ba-
sic forces in the labor market can be formalized. One of the advantages of
this formulation, as will become clear, is that a variety of employment rules
could be adopted to specify the realization of actual employment when a
nonclearing market emerges.
6
We will assess this model by employing U.S.
and German macroeconomic time series data.
Yet before we formally present the model and its calibration we want
to note that there is a similarity of our approach chosen here and the New
Keynesian analysis. New Keynesian literature presents models with imper-
fect competition and sluggish price and wage adjustments where labor effort
is endogenized. Important work of this type can be found in Rotemberg
and Woodford (1995, 1999), King and Wollman (1999), Gali (1999), Erceg,
Henderson and Levin (2000) and Woodford (2003). However, the market
in those models are still assumed to be cleared since the producer supplies
the output according to what the market demands at the existing price. A
similar consideration is also assumed to hold for the labor market. Here the
wage rate is set optimally by a representative of the household according to
3
Already Keynes (1936) had not only observed a wide-spread phenomenon of downward
rigidity of wages but has also attributed strong stabilizing properties of wage stickiness.
4
0ne could perceive a change in secular forces concerning labor supply from the side of
households, for example, changes in preferences, demographic changes, productivity and
real wage, union bargaining, evolution of wealth, taxes and subsides which all affect
labor supply. Some of those secular forces are often mentioned in the work by Phelps,
see Phelps (1997) and Phelps and Zoega (1998). Recently, concerning Europe, generous
unemployment compensation and related welfare state benefits have been added to the
list of factors affecting the supply of labor, intensity of job search and unemployment. For
an extensive reference to those factors, see Blanchard and Wolfers (2000) and Ljungqvist
and Sargent (1998, 2003).
5
On the demand side one could add beside the pure technology shocks and the real wage,
the role of aggregate demand, high interest rates (Phelps 1997, Phelps and Zoega 1998),
hiring and firing cost, capital shortages and slow down of growth, for example, in Europe.
See Malinvaud (1994) for a more extensive list of those factors .
6
Another line of recent research on modeling unemployment in a dynamic optimization
framework can be found in the work by Merz (1999) who employs search and matching
theory to model the labor market, see also Ljungqvist and Sargent (1998, 2003). Yet,
unemployment resulting from search and matching problems can rather be viewed as
frictional unemployment (see Malinvaud (1994) for his classification of unemployment).
As will become clear, this will be different from the unemployment that we will discuss
in this chapter.
134
the expected market a demand curve for labor. Once the wage has been set,
it is assumed to be sticky for some time period and only a fraction of wages
are set optimally in each period. In those models there will be a gap again
between the optimal wage and existing wage, yet the labor market is still
cleared since the household is assumed to supply labor whatever the market
demand is at the given wage rate.
7
In this chapter, we shall present a dynamic model that allows for a non-
cleared labor market, which could be seen to be caused by staggered wage as
described by Taylor (1980), Calvo (1983) or other theories of sluggish wage
adjustment. The objective to construct a model such as ours is to approach
the two aforementioned labor market problems coherently within a single
model of dynamic optimization. Yet, we wish to argue that the New Keyne-
sian and approach are complementary rather than exclusive, and therefore
they can somewhat be consolidated as a more complete system for price and
quantity determination within the Keynesian tradition. For further details
of this consolidation, see Chapter 9. In the current chapter we are only con-
cerned with a nonclearing of the labor market as brought into the academic
discussion by the disequilibrium school. We will derive the nonclearing of
the labor market from optimizing behavior of economic agents but it will be
a multiple stage decision process that will generate the nonclearing of the
labor market.
8
The remainder of this chapter is organized as follows. Section 2 presents
the model structure. Section 3 estimates and calibrates our different model
variants for the U.S. economy. Section 4 undertakes the same exercise for the
German economy. Section 5 concludes. Appendices I and II in this chapter
contain some technical derivation of the adaptive optimization procedure
whereas Appendix III undertakes a welfare comparison of the different model
variants.
7
See, for example, Woodford (2003, ch. 3). There are also traditional Keynesian mod-
els that allow for disequilibria, see Benassy (1984) among others. Yet, the well-known
problem of these earlier disequilibrium models was that they disregard intertemporal op-
timizing behavior and never specify who sets the price. This has now been resolved by
the modern literature of monopolistic competition as can be found in Woodford (2003).
However, while resolving the price setting problem, the decision with regard to quantities
seems to be unresolved. The supplier may no longer behave optimally concerning their
supply decision, but simply supplies whatever the quantity the market demands for at
the current price.
8
For models with multiple steps of optimization in the context of learning models, see
Dawid and Day (2003), Sargent (1998) and Zhang and Semmler (2003).
135
8.2 An Economy with Nonclearing Labor Mar-
ket
We shall still follow the usual assumptions of identical households and iden-
tical firms. Therefore we are considering an economy that has two repre-
sentative agents: the representative household and the representative firm.
There are three markets in which the agents exchange their products, labor
and capital. The household owns all the factors of production and therefore
sells factor services to the firm. The revenue from selling factor services can
only be used to buy the goods produced by the firm either for consuming or
for accumulating capital. The representative firm owns nothing. It simply
hires capital and labor to produce output, sells the output and transfers the
profit back to the household.
Unlike the typical RBC model, in which one could assume an once-for-all
market, we, however, in this model shall assume that the market to be re-
opened at the beginning of each period t. This is necessary for a model with
nonclearing markets in which adjustments should take place which leads us
to a multiple stage adaptive optimization behavior. Yet, let us first describe
how prices and wages are set.
8.2.1 The Wage Determination
As usual we presume that both the household and the firm express their
desired demand and supply on the basis of given prices, including the output
price p
t
, the wage rate w
t
and the rental rate of the capital stock r
t
, we
shall first discuss how the period t prices are determined at the beginning of
period t. Note that there are three commodities in our model. One of them
should serve as a numeraire, which we assume to be the output. Therefore,
the output price p
t
always equals 1. This indicates that the wage w
t
and
the rental rate of capital stock r
t
are all measured in terms of the physical
units of output.
9
As to the rental rate of capital r
t
, it is assumed to be
adjustable so as to clear the capital market. We can then ignore its setting.
Indeed, as will become clear, one can imagine any initial value of the rental
rate of capital when the firm and the household make the quantity decisions
and express their desired demand and supply. This leaves us to focus the
discussion only on the wage setting. Let us first discuss how the wage rate
9
For our simple representative agent model without money, this simplification does not
effect our major result derived from our model. Meanwhile, it will allow us to save some
effort to explain the nominal price determination, a focus in the recent New Keyensian
literature.
136
might be set.
Most recent literature, in discussing wage setting,
10
assumes that it is the
supplier of labor, the household or its representative, that sets the wage rate
whereas the firm is simply a wage taker. On the other hand, there are also
models that discuss how firms set the wage rate.
11
In actual bargaining it is
likely, as Taylor (1999) has pointed out, that wage setting is an interacting
process between firms and households. Despite this variety of wage setting
models, we, however, follow the recent approach. We may assume that the
wage rate is set by a representative of the household which acts as a mo-
nopolistic agent for the supply of labor effort as Woodford (2003, ch. 3) has
suggested. Woodford (2003, p.221) introduces different wage setting agents
and monopolistic competition since he assumes heterogenous households as
different suppliers of differentiated types of labor. In appendix I, in close
relationship to Woodford (2003, ch.3), Erceg et al (2000) and Christiano et
al. (2001) we present a wage setting model, where wages are set optimally,
but a fraction of wages may be sticky. We neglect, however, differentiated
types of labor and refer only to aggregate wages.
We want to note, however, that recently many theories have been de-
veloped to explain wage and price stickiness. There is the so-called menu
cost for changing prices (though this seems more appropriate for the output
price). There is also a reputation cost for changing prices and wages.
12
In
addition, changing the price, or wage, needs information, computation and
communication, which may be costly.
13
All these efforts cause costs which
may be summarized as adjustment costs of changing the price or wage. The
adjustment cost for changing the wage may provide some reason for the rep-
resentative of the household to stick to the wage rate even if it is known that
current wage may not be optimal. One may also derive this stickiness of
wages from wage contracts as in Taylor (1980) with the contract period to
be longer than one period.
Since workers, or their respective representative, enter usually into long
term employment contracts involving labor supply for several periods with a
variety of job security arrangements and termination options, a wage contract
may also be understood from an asset price perspective, namely as derivative
security based on a fundamental underlying asset such as the asset price of
the firm. In principle a wage contract could be treated as a debt contract with
10
See, for instance, Erceg, Henderson and Levin (2000), Christiano, Eichenbaum and
Evans (2001) and Woodford (2003) among others.
11
These are basically the efficiency wage models that are mentioned in the introduction.
12
This is emphasized by Rotemberg (1982)
13
See the discussion in Christiano, Eichenbaum and Evans (2001) and Zbaracki, Ritson,
Levy, Dutta and Bergen (2000).
137
similar long term commitment as exists for other liabilities of the firm.
14
As
in the case of the pricing of corporate liabilities the wage contract, the value of
the derivative security, would depend on some specifications in contractual
agreements. Yet, in general it can be assumed to be arranged for several
periods.
As noted above we do not have to posit that the wage rate, w
t
, to be
completely fixed in contracts and never responds to the disequilibrium in
the labor market. One may imagine that the dynamics of the wage rate, for
example, follows the updating scheme as suggested in Calvo’s staggered price
model (1983) or in Taylor’s wage contract model (1980). In Calvo’s model,
for example, there is always a fraction of individual prices to be adjusted
in each period t.
15
This can be expressed in our model as the expiration
of some wage contracts, to be reviewed in each time period and therefore
new wage contracts will be signed in each t. The new signed wage contracts
should respond to the expected market conditions not only in period t but
also through t to t + j, where j can be regarded as the contract period.
16
Through such a pattern of wage dynamics, wages are only partially adjusted.
Explicit formulation of wage dynamics of a Calvo type of updating scheme,
particularly with differentiated types of labor, is studied in Erceg et al (2000),
Christiano et al. (2001) and Woodford (2003, ch. 3) and briefly sketched, as
underlying our model, for an aggregate wage in appendix I of this chapter.
A more explicit treatment is not needed here. Indeed, as will become clear in
section 3, the empirical study of our model does not rely on how we formulate
the wage dynamics. All we need to presume is that, wage contracts are only
partially adjusted, giving rise to a sticky aggregate wage.
8.2.2 The Household’s Desired Transactions
The next step in our multiple stage decision process is to model the quantity
decisions of the households. When the price, including the wage, has been
set, the household is then going to express its desire of demand for goods
and supply of factors. We define the household’s desired demand and sup-
ply as those that can allow the household to obtain the maximum utility
on the condition that these demand and supply can be realized at the given
set of prices. We can express the household’s desired demand and supply as
a sequence of output demand and factor supply
_
c
d
t+i
, i
d
t+i
, n
s
t+i
, k
s
t+i+1
_

i=0
,
14
For such a treatment of the wages as derivative security, see Uhlig (2003). For further
details of the pricing of such liabilities, see Gr¨ une and Semmler (2004c).
15
These are basically those prices that have not been adjusted for some periods and there
the adjustment costs (such as the reputation cost) may not be high.
16
This type of wage setting is used in Woodford (2003, ch. 4) and Erceg et al. (2000).
138
where i
t+i
is referred to investment. Note that here we have used the su-
perscripts d and s to refer to the agent’s desired demand and supply. The
decision problem for the household to derive its demand and supply can be
formulated as
max
{c
t+i
,n
t+i
}

i=0
E
t
_

i=0
β
i
U(c
d
t+i
, n
s
t+i
)
_
(8.1)
subject to
c
d
t+i
+ i
d
t+i
= r
t+i
k
s
t+i
+ w
t+i
n
s
t+i
+ π
t+i
(8.2)
k
s
t+i+1
= (1 −δ)k
s
t+i
+ i
d
t+i
(8.3)
Above π
t+i
is the expected dividend. Note that (8.2) can be regarded as a
budget constraint. The equality holds due to the assumption U
c
> 0. Next,
we shall consider how the representative household calculates π
t+i
.
Assuming that the household knows the production function f(·) while it
expects that all its optimal plans can be fulfilled at the given price sequence
{p
t+i
, w
t+i
, r
t+i
}

i=0
, we thus obtain
π
t+i
= f(k
s
t+i
, n
s
t+i
, A
t+i
) −w
t+i
n
s
t+i
−r
t+i
k
s
t+i
(8.4)
Explaining π
t+i
in (8.2) in terms of (8.4) and then substituting from (8.3) to
eliminate i
d
t
, we obtain
k
s
t+i+1
= (1 −δ)k
s
t+i
+ f(k
s
t+i
, n
s
t+i
, A
t+i
) −c
d
t+i
(8.5)
For the given technology sequence {A
t+i
}

i=0
, equations (8.1) and (8.5) form
a standard intertemporal decision problem. The solution to this problem can
be written as:
c
d
t+i
= G
c
(k
s
t+i
, A
t+i
) (8.6)
n
s
t+i
= G
n
(k
s
t+i
, A
t+i
) (8.7)
We shall remark that although the solution appears to be a sequence
_
c
d
t+i
, n
s
t+i
_

i=0
only (c
d
t
, n
s
t
) along with (i
d
t
, k
s
t
), where i
d
t
= f(k
s
t
, n
s
t
, A
t
) −
c
d
t
and k
s
t
= k
t
, are actually carried into the market by the household for
exchange due to our assumption of re-opening of the market.
8.2.3 The Firm’s Desired Transactions
As in the case of the household, the firm’s desired demand for factors and
supply of goods are those that maximize the firm’s profit under the condition
that all its intentions can be carried out at the given set of prices. The
139
optimization problem for the firm can thus be expressed as being to choose
the input demands and output supply (n
d
t
, k
d
t
, y
s
t
) that maximizes the current
profit:
max y
s
t
−r
t
k
d
t
−w
t
n
d
t
subject to
y
s
t
= f(A
t
, k
d
t
, n
d
t
) (8.8)
For regular conditions on the production function f(·), the solution to the
above optimization problem should satisfy
r
t
= f
k
(k
d
t
, n
d
t
, A
t
) (8.9)
w
t
= f
n
(k
d
t
, n
d
t
, A
t
) (8.10)
where f
k
(·) and f
n
(·) are respectively the marginal products of capital and
labor. Next we shall consider the transactions in our three markets. Let us
first consider the two factor markets.
8.2.4 Transaction in the Factor Market and Actual
Employment
We have assumed the rental rate of capital r
t
to be adjustable in each period
and thus the capital market is cleared. This indicates that
k
t
= k
s
t
= k
d
t
As concerning the labor market, there is no reason to believe that firm’s
demand for labor, as implicitly expressed in (8.10) should be equal to the
willingness of the household to supply labor as determined in (8.7) given
the way the wage determination is explained in section 8.2.1. Therefore,
we cannot regard the labor market to be cleared. An illustration of this
statement, though in a simpler version, is given in Appendix I.
17
Given a nonclearing labor market, we shall have to specify what rule
should apply regarding the realization of actual employment.
17
Strictly speaking, the so-called labor market clearing should be defined as the condition
that the firm’s willingness to demand factors is equal to the household’s willingness to
supply factors. Such concept has somehow disappeared in the new Keynesian literature
in which the household supplies the labor effort according to the market demand and
therefore it does not seem to face excess demand or supply. Yet, even in this case, the
household’s willingness to supply labor effort is not necessarily equal to its actual supply
or the market demand. At some point the marginal disutility of work may be higher
than the pre-set wage. This indicates that even if there are no adjustment costs so that
the household can adjust the wage rate at every time period t, the disequilibrium in
the labor market may still exist. In Appendix I these points are illustrated in a static
version of the working of the labor market.
140
Disequilibrium Rule: When disequilibrium occurs in the labor
market either of the following two rules will be applied:
n
t
= min(n
d
t
, n
s
t
) (8.11)
n
t
= ωn
d
t
+ (1 −ω)n
s
t
(8.12)
where ω ∈ (0, 1).
Above, the first is the famous short-side rule when nonclearing of the
market occurs. It has been widely used in the literature on disequilibrium
analysis (see, for instance, Benassy 1975, 1984, among others). The sec-
ond might be called the compromising rule. This rule indicates that when
nonclearing of the labor market occurs both firms and workers have to com-
promise. If there is excess supply, firms will employ more labor than what
they wish to employ.
18
On the other hand, when there is excess demand,
workers will have to offer more effort than they wish to offer.
19
Such mutual
compromises may be due to institutional structures and moral standards of
the society.
20
Given the rather corporate relationship of labor and firms in
Germany, for example, this compromising rule might be considered a reason-
able approximation. Such a rule that seems to hold for many other countries
was already discussed early in the economic literature, see Meyers (1968) and
also Solow (1979).
We want to note that the unemployment we discuss here is certainly dif-
ferent from the frictional unemployment as often discussed in search and
matching models. In our representative agent model, the unemployment is
mainly due to adaptive optimization of the household given the institutional
18
This could also be realized by firms by demanding the same (or less) hours per worker but
employing more workers than being optimal. This case corresponds to what is discussed
in the literature as labor hoarding where firms hesitate to fire workers during a recession
because it may be hard to find new workers in the next upswing, see Burnside et al.
(1993). Note that in this case firms may be off their marginal product curve and thus
this might require wage subsidies for firms as has been suggested by Phelps (1997).
19
This could be achieved by employing the same number of workers but each worker sup-
plying more hours (varying shift length and overtime work); for a more formal treatment
of this point, see Burnside et al. (1993).
20
Note that if firms are off their supply schedule and workers off their demand schedule,
a proper study would have to compute the firms’ cost increase and profit loss and the
workers’ welfare loss. If, however, the marginal cost for firms is rather flat (as empirical
literature has argued, see Blanchard and Fischer, 1989) and the marginal disutility is
also rather flat the overall loss may not be so high. The departure of the value function
– as measuring the welfare of the representative household from the standard case – is
studied in Gong and Semmler (2001). Results of this study are reported in Appendix
III of this chapter.
141
arrangements of the wage setting (see Chapter 8.5.). The cause for fric-
tional unemployment can arise from informational and institutional search
and matching frictions where welfare state and labor market institutions may
play a role.
21
Yet the frictions in the institutions of the matching process are
likely to explain only a certain fraction of observed unemployment.
22
8.2.5 Actual Employment and Transaction in the Prod-
uct Market
After the transactions in these two factor markets have been carried out, the
firm will engage in its production activity. The result is the output supply,
which, instead of (8.8), is now given by
y
s
t
= f(k
t
, n
t
, A
t
). (8.13)
Then the transaction needs to be carried out with respect to y
s
t
. It is im-
portant to note that when the labor market is not cleared, the previous
consumption plan as expressed by (8.6) becomes invalid due to the improper
budget constraint (8.2), which further bring the improper transition law of
capital (8.5), for deriving the plan. Therefore, the household will be required
to construct a new consumption plan, which should be derived from the
following optimization program:
max
(c
d
t
)
U(c
d
t
, n
t
) + E
t
_

i=1
β
i
U(c
d
t+i
, n
s
t+i
)
_
(8.14)
subject to
k
s
t+1
= (1 −δ)k
t
+ f(k
t
, n
t
, A
t
) −c
d
t
(8.15)
k
s
t+i+1
= (1 −δ)k
s
t+i
+f(k
s
t+i
, n
s
t+i
, A
t+i
) −c
d
t+i
(8.16)
i = 1, 2, ...
Note that in this optimization program the only decision variable is about
c
d
t
and the data includes not only A
t
and k
t
but also n
t
, which is given by
21
For a recent position representing this view, see Ljungqvist and Sargent (1998, 2003). For
comments on this view, see Blanchard (2003), see also Walsh (2002) who employs search
and matching theory to derive the persistence of real effects resulting from monetary
policy shocks.
22
Already Hicks (1963) has called this frictional unemployment. Recently, one important
form of a mismatch in the labor market seems to be the mismatch of skills, see Greiner,
Rubart and Semmler (2003).
142
either (8.11) or (8.12). We can write the solution in terms of the following
equation (see Appendix II of this chapter for the detail):
c
d
t
= G
c2
(k
t
, A
t
, n
t
) (8.17)
Given this adjusted consumption plan, the product market should be cleared
if the household demand f(k
t
, n
t
, A
t
) − c
d
t
for investment. Therefore, c
d
t
in
(8.17) should also be the realized consumption.
8.3 Estimation and Calibration for U. S. Econ-
omy
This section provides an empirical study, for the U. S. economy, of our model
as presented in the last section. However, the model in the last section is
only for illustrative purpose. It is not the model that can be tested with
empirical data, not only because we do not specify the forms of production
function, utility function and the stochastic process of A
t
, but also we do
not introduce the growth factor into the model. For an empirically testable
model, we here still employ the model as formulated by King, Plosser and
Rebelo (1988).
8.3.1 The Empirically Testable Model
Let K
t
denote for capital stock, N
t
for per capita working hours, Y
t
for output
and C
t
for consumption. Assume that the capital stock in the economy follow
the transition law:
K
t+1
= (1 −δ)K
t
+ A
t
K
1−α
t
(N
t
X
t
)
α
−C
t
, (8.18)
where δ is the depreciation rate; α is the share of labor in the production
function F(·) = A
t
K
1−α
t
(N
t
X
t
)
α
; A
t
is the temporary shock in technology
and X
t
the permanent shock that follows a growth rate γ.
23
The model is
nonstationary due to X
t
. To transform the model into a stationary setting,
we divide both sides of equation (8.18) by X
t
:
k
t+1
=
1
1 +γ
_
(1 −δ)k
t
+ A
t
k
1−α
t
(n
t
N/0.3)
α
−c
t
_
, (8.19)
where k
t
≡ K
t
/X
t
, c
t
≡ C
t
/X
t
and n
t
≡ 0.3N
t
/N with N to be the sample
mean of N
t
. Note that n
t
is often regarded to be the normalized hours. The
23
Note that X
t
includes both population and productivity growth.
143
sample mean of n
t
is equal to 30 %, which, as pointed out by Hansen (1985),
is the average percentage of hours attributed to work. Note that the above
formulation also indicates that the form of f(·) in the previous section may
follow
f(·) = A
t
k
1−α
t
(n
t
N/0.3)
α
(8.20)
while y
t
≡ Y
t
/X
t
with Y
t
to be the empirical output.
With regard to the household preference, we shall assume that the utility
function takes the form
U(c
t
, n
t
) = log c
t
+θ log(1 −n
t
) (8.21)
The temporary shock A
t
may follow an AR(1) process:
A
t+1
= a
0
+ a
1
A
t
+ ǫ
t+1
, (8.22)
where ǫ
t
is an independently and identically distributed (i.i.d.) innovation:
ǫ
t
∼ N(0, σ
2
ǫ
).
8.3.2 The Data Generating Process
For our empirical test, we consider three model variants: the standard RBC
model, as a benchmark for comparison, and the two labor market disequilib-
rium models with the disequilibrium rules as expressed in (8.11) and (8.12)
respectively. Specifically, we shall call the standard model the Model I; the
disequilibrium model with short side rule (8.11) the Model II; and the dise-
quilibrium model with the compromising rule (8.12) the Model III.
For the standard RBC model, the data generating process include (8.19),
(8.22) as well as
c
t
= G
11
A
t
+ G
12
k
t
+ g
1
(8.23)
n
t
= G
21
A
t
+ G
22
k
t
+ g
2
(8.24)
Note that here (8.23) and (8.24) are the linear approximations to (8.6) and
(8.7) when we ignore the superscripts s and d. The coefficients G
ij
and
g
i
(i = 1, 2 and j = 1, 2) are the complicated functions of the model’s struc-
tural parameters, α, β, among others. They are computed as in Chapter 5
by the numerical algorithm using the linear-quadratic approximation method
presented in Chapter 1 and 2. Given these coefficients and the parameters in
equation (8.22), including σ
ε
, we can simulate the model to generate stochas-
tically simulated data. These data can then be compared to the sample
moments of the observed economy.
144
Obviously, the standard model does not allow for nonclearing of the labor
market. The moments of the labor effort are solely reflected by the decision
rule (8.24) which is quite similar in its structure to the other decision rule
given by (8.23), i.e., they are both determined by k
t
and A
t
. This structural
similarity are expected to produce two labor market puzzles as aforemen-
tioned:
• First, the volatility of the labor effort can not be much different from
the volatility of consumption, which generally appears to be smooth.
• Second, the moments of labor effort and consumption are likely to be
strongly correlated.
To define the data generating process for our disequilibrium models, we
shall first modify (8.24) as
n
s
t
= G
21
A
t
+G
22
k
t
+g
2
(8.25)
On the other hand, the equilibrium in the product market indicates that
c
d
t
in (8.17) should be equal to c
t
. Therefore, this equation can also be
approximated as
c
t
= G
31
A
t
+ G
32
k
t
+G
33
n
t
+ g
3
(8.26)
In the appendix, we provide the details how to compute the coefficients G
3j
,
j = 1, 2, 3, and g
3
.
Next we consider the labor demand derived from the production function
F(·) = A
t
K
1−α
t
(N
t
X
t
)
α
. Let X
t
= Z
t
L
t
, with Z
t
to be the permanent shock
resulting purely from productivity growth, and L
t
from population growth.
We shall assume that L
t
has a constant growth rate µ and hence Z
t
follows
the growth rate (γ − µ). The production function can be written as Y
t
=
A
t
Z
α
t
K
1−α
t
H
α
t
, where H
t
equals N
t
L
t
, which can be regarded as total labor
hours. Taking the partial derivative with respect to H
t
and recognizing that
the marginal product of labor is equal to the real wage, we thus obtain
w
t
= αA
t
Z
t
k
1−α
t
(n
d
t
¯
N/0.3)
α−1
This equation is equivalent to (8.10). It generates the demand for labor as
n
d
t
= (αA
t
Z
t
/w
t
)
1/(1−α)
k
t
(0.3/N). (8.27)
Note that the per capita hours demanded n
d
t
should be stationary if the real
wage w
t
and productivity Z
t
grow at the same rate. This seems to be roughly
consistent with the U.S. experience that we shall now calibrate.
145
Thus, for the nonclearing market model with short side rule, Model II,
the data generating process includes (8.19), (8.22), (8.11), (8.25), (8.26) and
(8.27) with w
t
given by the observed wage rate. We thereby do not attempt
to give the actually observed sequence of wages a further theoretical founda-
tion.
24
For our purpose it suffices to take the empirically observed series of
wages. For Model III, we use (8.12) instead of (8.11).
8.3.3 The Data and the Parameters
Before we calibrate the models we shall first specify the parameters. There
are altogether 10 parameters in our three variants: a
0
, a
1
, σ
ε
, γ, µ, α, β, δ, θ,
and ω. We first specify α and γ respectively at 0.58 and 0.0045, which are
standard. This allows us to compute the data series of the temporary shock
A
t
. With this data series, we estimate the parameters a
0
, a
1
and σ
ε
. The next
three parameters β, δ and θ are estimated with the GMM method by match-
ing the moments of the model generated by (8.19), (8.23) and (8.25). The
estimation is conducted by a global optimization algorithm called simulated
annealing. These parameters have already been estimated in Chapter 5, and
therefore we shall employ them here. For the new parameters, we specify µ
at 0.001, which is close to the average growth rate of the labor force in U.S.;
the parameter ω in Model III is set to 0.1203. It is estimated by minimizing
the residual sum of square between actual employment and the model gener-
ated employment. The estimation is executed by a conventional algorithm,
the grid search. Table 8.1 illustrates these parameters:
Table 8.1: Parameters Used for Calibration
a
0
0.0333 σ
ε
0.0185 µ 0.0010 β 0.9930 θ 2.0189
a
1
0.9811 γ 0.0045 α 0.5800 δ 0.2080 ω 0.1203
The data set used in this section is taken from Christiano (1987). The
wage series are obtained from Citibase. It is re-scaled to match the model’s
implication.
25
24
One however might apply here the efficiency wage theory or other theories such as the
staggered contract theory that justify the wage stickiness.
25
Note that this re-scaling is necessary because we do not exactly know the initial condition
of Z
t
, which we set equal to 1. We re-scaled the wage series in such a way that the first
observation of employment is equal to the demand for labor as specified by equation
(8.27).
146
8.3.4 Calibration
Table 8.2 reports our calibration from 5000 stochastic simulations. The re-
sults in this table are confirmed by Figure 8.1, where a one time simulation
with the observed innovation A
t
are presented.
26
All time series are detrended
by the HP-filter.
26
Due to the discussion on Solow residual in Chapter 5, we shall now understand that A
t
computed as the Solow residual may reflect also the demand shock in addition to the
technology shock.
147
Table 8.2: Calibration of the Model Variants: U.S. Economy (numbers in
parentheses are the corresponding standard errors)
Consumption Capital Employment Output
Standard Deviations
Sample Economy 0.0081 0.0035 0.0165 0.0156
Model I Economy 0.0091 0.0036 0.0051 0.0158
(0.0012) (0.0007) (0.0006) (0.0021)
Model II Economy 0.0137 0.0095 0.0545 0.0393
(0.0098) (0.0031) (0.0198) (0.0115)
Model III Economy 0.0066 0.0052 0.0135 0.0197
(0.0010) (0.0010) (0.0020) (0.0026)
Correlation Coefficients
Sample Economy
Consumption 1.0000
Capital Stock 0.1741 1.0000
Employment 0.4604 0.2861 1.0000
Output 0.7550 0.0954 0.7263 1.0000
Model I Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.2043 1.0000
(0.1190) (0.0000)
Employment 0.9288 −0.1593 1.0000
(0.0203) (0.0906) (0.0000)
Output 0.9866 0.0566 0.9754 1.0000
(0.00332) (0.1044) (0.0076) (0.0000)
Model II Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.4944 1.0000
(0.1662) (0.0000)
Employment 0.4874 -0.0577 1.0000
(0.1362) (0.0825) (0.0000)
Output 0.6869 0.0336 0.9392 1.0000
(0.1069) (0.0717) (0.0407) (0.0000)
Model III Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.4525 1.0000
(0.1175) (0.0000)
Employment 0.6807 -0.0863 1.0000
(0.0824) (0.1045) (0.0000)
Output 0.8924 0.0576 0.9056 1.0000
(0.0268) (0.0971) (0.0327) (0.0000)
148
First we want to remark that the structural parameters that we used
here for calibration are estimated by matching the Model I Economy to the
Sample Economy. The result, reflected in Table 8.2, is therefore somewhat
biased in favor of the Model I Economy. It is not surprising that for most
variables the moments generated from the Model I Economy are closer to the
moments of the Sample Economy. Yet even in this case, there is an excessive
smoothness of the labor effort and the employment series of the data cannot
be matched. For our time period, 1955.1 to 1983.4, we find 0.32 in the
Model I Economy as the ratio of the standard deviation of labor effort to the
standard deviation of output. This ratio is roughly 1 in the Sample Economy.
The problem is, however, resolved in our Model II and Model III Economies
representing sticky wages and labor market nonclearing. There the ratio is
1.38 and 0.69 for the Model II and Model III Economies respectively.
Further evidence on the better fit of the nonclearing labor market models
– as concerns the volatility of the macroeconomic variables – is also demon-
strated in the Figure 8.1 where the horizontal figures show, from top to
bottom, actual (solid line) and simulated data (dotted line) for consump-
tion, capital stock, employment and output, the three columns representing
the figures for Model I, Model II and Model III Economies. As observable,
in particular the Model III Economy fits, along most dimensions, best the
actual data. As can be seen from the separate figures, the volatility of em-
ployment has been greatly increased for both Model II and Model III. In
particular, the volatility in the Model III Economy is close to the one in the
Sample Economy, although too high a volatility is observable in the Model
II Economy which may reflect our assumption that there are no search and
matching frictions (which, of course, in the actual economy will not hold).
We therefore may conclude that Model III is the best in matching the labor
market volatility.
We want to note that the failure of the standard model to match the
volatility of employment of the data is also described in the recent paper by
Schmidt-Grohe (2001). For her employed time series data 1948.3 - 1997.4,
Schmidt-Grohe (2001) finds that the ratio of the standard deviation of em-
ployment to the standard deviation of output is roughly 0.95, close to our
Sample Economy. Yet for the standard RBC model, the ratio is found to be
0.49, which is too low compared to the empirical data. For the indetermi-
nacy model, originating in the work by Benhabib and co-authors, she finds
the ratio to be 1.45, which seems too high. As noted above, a similarly high
ratio of standard deviations can also be observed in our Model II Economy
where the short side rule leads to excessive fluctuations of the labor effort.
Next, let us look at the cross-correlations of the macroeconomic variables.
In the Sample Economy, there are two significant correlations we can observe:
149
the correlation between consumption and output, roughly 0.75, and between
employment and output, about 0.72. These two strong correlations can also
be found in all of our simulated economies. However, in our Model I Economy
and this only holds for the Model I Economy (the standard RBC model) in
addition to these two correlations, consumption and employment are, with
0.93, also strongly correlated. Yet, empirically, this correlation is weak, about
0.46.
The latter result of the standard model is not surprising given that move-
ments of employment as well as consumption reflect the movements in the
state variables capital stock and the temporary shock. They, therefore,
should be somewhat correlated. We remark here that such an excessive
correlation has, to our knowledge, not explicitly been discussed in the RBC
literature, including the recent study by Schmidt-Grohe (2001). Discussions
have often focused on the correlation with output.
150
Figure 8.1: Simulated Economy versus Sample Economy: U.S. Case (solid
line for sample economy, dotted line for simulated economy)
A success of our nonclearing labor market models, see the Model II and
III Economies, is that employment is no longer significantly correlated with
consumption. This is because we have made a distinction between the de-
mand and supply of labor, whereas only the latter, labor supply, reflects the
moments of capital and technology as consumption does. Since the realized
employment is not necessarily the same as the labor supply, the correlation
with consumption is therefore weakened.
151
8.4 Estimation and Calibration for the Ger-
man Economy
Above we have employed a model with nonclearing labor market for the U.
S. economy. We have seen that one of the major reasons that the standard
model can not appropriately replicate the variation in employment is its lack
of introducing the demand for labor. Next, we pursue a similar study of
German economy. For this purpose we shall first summarize some stylized
facts on the German economy compared to the U.S. economy.
8.4.1 The Data
Our subsequent study of the German economy employs the time series data
from 1960.1 to 1992.1. We thus have included a short period after the uni-
fication of Germany (1990 - 1991). We use again quarterly data. The time
series data on GDP, consumption, investment and capital stock are OECD
data, see OECD (1998a), the data on total labor force is also from the OECD
(1998b). The time series data on total working hours is taken from Statis-
tisches Bundesamt (1998). The time series on the hourly real wage index is
from OECD (1998a).
8.4.2 The Stylized Facts
Next, we want to compare some stylized facts. Figures 8.2 and 8.3 com-
pare 6 key variables relevant for the models for both the German and U.S.
economies. In particular, the data in Figure 8.3 are detrended by the HP-
filter. The standard deviations of the detrended series are summarized in
Table 8.3.
152
Figure 8.2: Comparison of Macroeconomic Variables U. S. versus Germany
153
Figure 8.3: Comparison of Macroeconomic Variables: U. S. versus Germany
(data series are detrended by the HP-filter)
154
Table 8.3: The Standard Deviations (U.S. versus Germany)
Germany
(detrended)
U.S.
(detrended)
consumption 0.0146 0.0084
capital stock 0.0203 0.0036
employment 0.0100 0.0166
output 0.0258 0.0164
temporary shock 0.0230 0.0115
efficiency wage 0.0129 0.0273
Several remarks are at place here. First, employment and the efficiency
wage are among the variables with the highest volatility in the U. S. econ-
omy. However, in the German economy they are the smoothest variables.
Second, the employment (measured in terms of per capita hours) is declining
over time in Germany (see Figure 8.2 for the non-detrended series), while in
the U.S. economy, the series is approximately stationary. Third, in the U. S.
economy, the capital stock and temporary shock to technology are both rel-
atively smooth. In contrast, they are both more volatile in Germany. These
results might be due to our first remark regarding the difference in employ-
ment volatility. The volatility of output must be absorbed by some factors
in the production function. If employment is smooth, the other two factors
have to be volatile.
Should we expect that such differences will lead to different calibration
of our model variants? This will be explored next.
8.4.3 The Parameters
For the German economy, our investigation showed that an AR(1) process
does not match well the observed process of A
t
. Instead, we shall use an
AR(2) process:
A
t+1
= a
0
+ a
1
A
t
+a
2
A
t−1

t+1
The parameters used for calibration are given in Table 8.4. All of these
parameters are estimated in the same way as those for the U.S. economy.
155
Table 8.4: Parameters used for Calibration (German Economy)
a
0
0.0044 γ 0.0083 δ 0.0538
a
1
1.8880 µ 0.0019 θ 2.1507
a
2
-0.8920 α 0.6600 ω 0
σ
ε
0.0071 β 0.9876
It is important to note that the estimated ω in this case is on the boundary
0, indicating the weight of the demand is zero in the compromising rule (8.12).
In other words, the Model III Economy is almost identical to the Model I
Economy. This seems to provide us with the conjecture that the Model I
Economy, the standard model, will be the best in matching German labor
market.
8.4.4 Calibration
As for the U.S. economy we provide in Table 8.5 for the German economy
the calibration result from 5000 time stochastic simulations. In Figure 8.4 we
again compare the one-time simulation with the observed A
t
for our model
variants. Again all time series here are detrended by the HP-filter.
27
27
Note that we do not include the Model III Economy for calibration. Due to the zero
value of the weighting parameter ω, the Model III Economy is equivalent to the Model
I Economy.
156
Table 8.5: Calibration of the Model Variants: German Economy (number in
parentheses are the corresponding standard errors)
Consumption Capital Employment Output
Standard Deviations
Sample Economy 0.0146 0.0203 0.0100 0.0258
Model I Economy 0.0292 0.0241 0.0107 0.0397
(0.0106) (0.0066) (0.0023) (0.0112)
Model II Economy 0.1276 0.0425 0.0865 0.4648
(0.1533) (0.0238) (0.1519) (0.9002)
Correlation Coefficients
Sample Economy
Consumption 1.0000
Capital Stock 0.4360 1.0000
Employment 0.0039 -0.3002 1.0000
Output 0.9692 0.5423 0.0202 1.0000
Model I Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.7208 1.0000
(0.0920) (0.0000)
Employment 0.5138 −0.1842 1.0000
(0.1640) (0.1309) (0.0000)
Output 0.9473 0.4855 0.7496 1.0000
(0.0200) (0.1099) (0.1028) (0.0000)
Model II Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.6907 1.0000
(0.1461) (0.0000)
Employment 0.7147 0.3486 1.0000
(0.2319) (0.4561) (0.0000)
Output 0.8935 0.5420 0.9130 1.0000
(0.1047) (0.2362) (0.1312) (0.00000)
157
Figure 8.4: Simulated Economy versus Sample Economy: German Case (solid
line for sample economy, dotted line for simulated economy)
In contrast to U.S. economy we find some major differences. First, there is
a difference concerning the variation of employment. The standard problem
of excessive smoothness with respect to employment in the benchmark model
no longer holds for the German economy. This is likely to be due to the fact
that employment itself is smooth in the German economy (see Table 8.3 and
Figure 8.3). We shall also note that the simulated labor supply in Germany
is smoother than in the U. S. (see Figure 8.5). In most labor market studies
the German labor market is often considered less flexible than the U. S.
labor market. In particular, there are stronger influences of labor unions
and various legal restrictions on firms’ hiring and firing decisions.
28
Such
influences and legal restriction will give rise to the smoother employment
series in contrast to the U. S. Such influences and legal restriction, or what
Solow (1979) has termed the moral factor in the labor market, may also be
viewed as a readiness to compromise as our Model III suggests. Those factors
28
See, for example, Nickell (1997) and Nickell (2003), and see already Meyers (1964).
158
will indeed give rise to a smooth employment series.
Further, if we look at the labor demand and supply in Figure 8.5, the
supply of labor is mostly the short side in the Germany economy whereas
in U.S. economy demand is dominating in most periods. Note that here we
must distinguish the supply that is actually provided in the labor market and
the “supply” that is specified by the decision rule in the standard model. It
might reasonably be argued that due to the intertemporal optimization sub-
ject to the budget constraints the supply specified by the decision rule may
only approximate the decisions from those households for which unemploy-
ment is not expected to pose a problem on their budgets. Such households
are more likely to be currently employed and protected by labor unions and
legal restrictions. In other words, currently employed labor decides, through
the optimal decision rule, about labor supply and not those who are currently
unemployed. Such a shortcoming of single representative intertemporal de-
cision model could presumably be overcome by a intertemporal model with
heterogenous households.
29
Figure 8.5: Comparison of demand and supply in the labor market (solid line
for actual, dashed line for demand and dotted line for supply)
29
See, for example, Uhlig and Xu (1996).
159
The second difference concerns the trend in employment growth and un-
employment of the U.S. and Germany. So far we only have shown that our
model of nonclearing labor market seems to match better than the standard
RBC model the variation in employment. This in particular seems to be
true for the U.S. economy. We did not attempt to explain the trend of the
unemployment rate neither for the U.S. nor for Germany. We want to note
that the time series data (U. S. 1955.1 - 1983.1, Germany 1960.1 - 1992.1)
are from a period where the U.S. had higher – but falling – unemployment
rates, whereas Germany had still lower but rising unemployment rates. Yet,
since the end of the 1980s the level of the unemployment rate in Germany
has considerably moved up, partly due to the unification of Germany after
1989.
8.5 Differences in Labor Market Institutions
In Chapter 8.2 we have introduced rules that might be thought to be op-
erative when there is a nonclearing labor market. In this respect, as our
calibration in section 3 has shown, the most promising route to model, and
to match, stylized facts of the labor market, through a microbased labor mar-
ket behavior, is the compromising model. One hereby may pay attention to
some institutional characteristics of the labor market presumed in our model.
The first is the way how the agency representing the household sets the
wage rate. If the household sets the wage rate, as if it were monopolistic com-
petitor, then at this wage rate the household’s willingness to supply labor is
likely to be less than the market demand for labor unless the household suf-
ficiently under-estimates the market demand when it conducts its optimiza-
tion for wage setting. Such a way of wage setting may imply unemployment
and it is likely to be the institutional structure that gives the representative
household (or the representative of the household, such as unions), the power
to bargain with the firm in wage setting.
30
Yet, there could be, of course,
other reasons why wages do not move to a labor market clearing level – such
as efficiency wage, insider – outsider relationship, or wages determined by
standards of fairness as Solow (1979) has noted and so on.
On the other hand, there can be labor market institutions, for example
corporatist structures, also measured by our ω, which affect the actual em-
ployment. Our ω expresses how much weight is given to the desired labor
supply or desired labor demand. A small ω means that the agency, repre-
30
This is similar to Woodford’s (2003, ch. 3) idea of a deviation of the efficient and natural
level of output where the efficient level is achieved only in a competitive economy with
no frictions.
160
senting the household, has a high weight in determining the outcome of the
employment compromise. A high ω means that the firm’s side is stronger in
employment negotiations. As our empirical estimations in Gong, Ernst and
Semmler (2004) have shown the former case, a low ω, is very characteristic
of Germany, France and Italy whereas a larger ω is found for U.S. and the
U.K.
31
Given the rather corporatist relationship of labor and the firm in some Eu-
ropean countries, with some considerable labor market regulations through
legislature and union bargaining (rules of employment protection, hiring and
firing restrictions, extension of employment even if there is a shortfall of sales
etc.)
32
, our ω may thus measure differences concerning labor market insti-
tutions between the U.S. and European countries. This has already been
stated in the 1960s by Meyers. He states: ”One of the differences between
the United States and Europe lies in our attitude toward layoffs... When
business falls off, he [the typical American employer] soon begins to think of
reduction in work force... In many other industrial countries, specific laws,
collective agreements, or vigorous public opinion protect the workers against
layoffs except under the most critical circumstances. Despite falling demand,
the employer counts on retraining his permanent employees. He is obliged
to find work for them to do... These arrangements are certainly effective in
holding down unemployment”. (Meyers, 1964:)
Thus, we wish to argue that the major international difference causing
employment variation does arise less from real wage stickiness (due to the
presence of unions and the extend and duration of contractual agreements
between labor and the firm)
33
but rather it seems to be the degree to which
compromising rules exist and which side dominates the compromising rule. A
lower ω, defining, for example, the compromising rule in Euro-area countries,
can show up as difference in the variation of macroeconomic variables. This
is demonstrated in Chapter 8.4 for the German economy.
We there could observe that first, employment and the efficiency wage
(defined as real wage divided productions) are among the variables with the
31
In the paper by Gong, Ernst and Semmler (2004) it is also shown that the ω is strongly
negatively correlated with labor market institutions.
32
This could also be realized by firms by demanding the same (or less) hours per worker
but employing more workers than being optimal. The case would then correspond to
what is discussed in the literature as labor hoarding where firms hesitate to fire workers
during a recession because it may be hard to find new workers in the next upswing, see
Burnside et al. (1993). Note that in this case firms may be off their marginal product
curve and thus this might require wage subsidies for firms as has been suggested by
Phelps (1997).
33
In fact real wage rigidities in the U.S. are almost the same as in European countries, see
Flaschel, Gong and Semmler (2001).
161
highest volatility in the U. S. economy. However, in the German economy
they are the smoothest variables. Second, in the U. S. economy, the capital
stock and temporary shock to technology are both relatively smooth. In
contrast, they are both more volatile in Germany. These results are likely to
be due to our first remark regarding the difference in employment volatility.
The volatility of output must be absorbed by some factors in the production
function. If employment is smooth, the other two factors have to be volatile.
Indeed, recent Phillips curve studies do not seem to reveal much difference
in real wage stickiness between Germany and the U.S., although the German
labor market is often considered less flexible.
34
Yet, there are differences in
another sense. In Germany, there are stronger influences of labor unions and
various legal restrictions on firms’ hiring and firing decisions shorter work
week even for the same pay etc.
35
Such influences and legal restriction will
give rise to the smoother employment series in contrast to the U.S.. Such
influences and legal restriction, or what Solow (1979) has termed the moral
factor in the labor market, may also be viewed as a readiness to compromise
as our Model III suggests. Those factors will indeed give rise to a lower ω
and a smoother employment series.
36
So far we only have shown that our model of nonclearing labor market
seems to match better the variation in employment than the standard RBC
model. Yet, we did not attempt to explain the secular trend of the unem-
ployment rate neither for the U.S. nor for Germany. We want to express
a conjecture of how our model can be used to study the trend shift in em-
ployment. We want to note that the time series data for the table 8.3 (U.S.
1955.1-1983.1, Germany 1960.1-1992.1) are from a period where the U.S. had
higher – but falling – unemployment rates, whereas Germany had still lower
but rising unemployment rates. Yet, since the end of the 1980s the level of
the unemployment rate in Germany has considerably moved up, partly, of
course due to the unification of Germany after 1989.
One recent attempt to better fit the RBC model’s predictions with labor
34
See Flaschel, Gong and Semmler (2001).
35
See,for example, Nickell (1997) and Nickell et al. (2003), and see already Meyers (1964).
36
It might reasonably be argued that, due to intertemporal optimization subject to the
budget constraints, the supply specified by the decision rule may only approximate the
decisions of those households for which unemployment is not expected to pose a problem
on their budgets. Such households are more likely to be currently employed represented
by labor unions and covered by legal restrictions. In other words, currently employed
labor decides, through the optimal decision rule, about labor supply and not those
who are currently unemployed. Such a feature could presumably be better studied by
an intertemporal model with heterogenous households, see, for example, Uhlig and Xu
(1996).
162
market data has employed search and matching theory.
37
Informational or
institutional search frictions may then explain the equilibrium unemployment
rate and its rise. Yet, those models usually observe that there has been a
shift in matching functions due to evolution of unemployment rates such as,
for example, experienced in Europe since the 1980s, and that the model itself
fails to explain such a shift.
38
In contrast to the literature on institutional frictions in the search and
matching process we think that the essential impact on the trend in the
rate of unemployment seems to stem from both changes of preferences of
households as well as a changing trend in the technology shock.
39
Concerning
the latter, as shown in Chapters 5 and 9, the Solow residual, as it used in RBC
models as the technology shock, greatly depends on endogenous variables
(such as capacity utilization). Thus exogenous technology shocks constitute
only a small fraction of the Solow residual. We thus might conclude that
cyclical fluctuations in output and employment are not likely to sufficiently
be explained by productivity shocks alone. Gali (1999) and Francis and
Ramey (2001, 2003) have argued that other shocks, for example demand
shocks, are important as well.
Yet, in the long run, the change in the trend of the unemployment rate
is likely to be related to the long-run trend in the true technology shock.
Empirical evidence on the role of lagging implementation and diffusion of new
technology for low employment growth in Germany can be found in Heckman
(2003) and Greiner, Semmler and Gong (2004). In the context of our model
this would have the effect that labor demand, given by equation (8.27) may
fall short of labor supply given by equation (8.24). This is likely to occur in
the long-run if the productivity Z
t
in equation (8.27) starts tending to grow
at a lower rate which many researchers recently have maintained to have
happened in Germany, and other European countries, since the 1980s.
40
Yet,
as recent research has stressed, for example, the work by Phelps, see Phelps
(1997) and Phelps and Zoega (1998), there have also been secular changes on
the supply side of labor due to changes in preferences of households.
41
Some
of those factors affecting the households’ supply of labor have been discussed
37
See Merz (1999) and Ljungqvist and Sargent (1998, 2003).
38
For an evaluation of the search and matching theory as well as the role of shocks to
explain the evolution of unemployment in Europe, see Blanchard and Wolfers (2000)
and Blanchard (2003).
39
See Campbell (1994) for a modelling of a trend in technology shocks.
40
Of course, the trend in the wage rate is also important in the equation for labor demand
(in equation 25). For an account of the technology trend, see Flaschel, Gong and Semmler
(2001), and for an additional account of the wage rate, see Heckman (2003).
41
Phelps and his co-authors have pointed out that an important change in the households’
preferences in Europe is that households now rely more an assets instead of labor income.
163
above.
8.6 Conclusions
Market clearing is a prominent feature in the standard RBC model which
commonly presumes wage and price flexibility. In this chapter, we have
introduced an adaptive optimization behavior and a multiple stage decision
process that, given wage stickiness, results in a nonclearing labor market in
an otherwise standard stochastic dynamic model. Nonclearing labor market
is then a result of different employment rules derived on the basis of a multiple
stage decision process. Calibrations have shown that such model variants will
produce a higher volatility in employment, and thus fit the data significantly
better than the standard model.
42
As concerning international aspects of our study we presume that different
labor market institutions result in different weights defining the compromis-
ing rule. The results for Euro-area economies, for example, for Germany in
contrast to the U.S., are consistent with what has been found in many other
empirical studies with regard to the institutions of the labor market.
Finally, with respect to the trend of lower employment growth in some
European countries as compared to the U.S. since the 1980s, our model
suggests that one has to study more carefully the secular forces affecting the
supply and the demand of labor as modeled in our multiple stage decision
process of section 2. In particular, on the demand side for labor, the slow
down of technology seems to have been a major factor for the low employment
growth in Germany and other countries in Europe.
43
On the other hand
there has also been changes in the preferences of households. Our study has
provided a framework that allows to also follow up such issues.
44
42
Appendix III computes the welfare loss of our different model variants of nonclearing
labor market. There we find that similarly to Sargent and Ljungqvist (1998), that the
welfare losses are very small.
43
See Blanchard and Wolfers (2000), Greiner, Semmler and Gong (2004) and Heckman
(2003)
44
For further discussion, see also Chapter 9.
164
8.7 Appendix I: Wage Setting
Suppose now that at the beginning of t the household (of course with certain
probability denoted as 1 − ξ) decides to set up a new wage rate w

1
given
the data (A
t
, k
t
), and the sequence of expectations on {A
t+i
}

i=1
where A
t
and k
t
are referred to as the technology and capital stock respectively. If
the household knows the production f(A
t
, k
t
, n
t
, where n
t
the labor effort so
that it may also know the firm’s demand for labor, the decision problem of
the household with regard to wage setting may be expressed as follows:
max
w

t
,{c
t+i
}

i=0
E
t
_

i=0
(ξβ)
i
U(c
t+i
, n(w

t
, k
t+i
, A
t+i
))
_
(8.28)
subject to
k
t+i+1
= (1 −δ)k
t+i
+ f(A
t+i
, k
t+i
, n(w

t
, k
t+i
, A
t+i
)) −c
t+i
(8.29)
Above ξ
i
is the probability that the new wage rate w

t
will still be effective
in period t + i. Obviously, this probability will be reduced when i become
larger. U(·) is the household’s utility function, which depends on consump-
tion c
t+i
and the labor effort n(w

t
, k
t+i
, A
t+i
). Note that here n(w

t
, k
t+i
, A
t+i
)
is the function of firm’s demand for labor, which is derived from the condition
of marginal product equal to the wage rate:
w

t
= f
n
(A
t+i
, k
t+i
, n
t+i
)
We shall remark that although the decision is mainly about the choice
of w

t
, the sequence of {c
t+i
}

i=0
should also be considered for the dynamic
optimization. Of course there is no guarantee that the household will actu-
ally implement this sequence {c
t+i
}

i=0
. However, as argued by recent New
Keynesian literature, there is only a certain probability (due to the adjust-
ment cost in changing the wage) that the household will set a new wage rate
in period t. Therefore, the observed wage dynamics w
t
may follow Calvo’s
updating scheme:
w
t
= (1 −ξ)w

t
+ξw
t−1
Such a wage indicates that there exists a gap between optimum wage w

t
and the observed wage w
t
.
It should be noted that in recent New Keynesian literature where the
wage is set in a similar way as we have discussed here, the concept of non-
clearing labor market somehow disappeared. In this literature, the household
165
is assumed to supply the labor effort according to the market demand at the
existing wage rate and therefore does not seem to face the problem of excess
demand or supply. Instead, what New Keynesian economists are concerned
with is the gap between the optimum price and actual price, whose existence
is caused by the adjustment cost in changing prices. In correspondence to
the gap between optimum and actual price, there also exists a gap between
optimum output and actual output.
D
0
MR MR’
MC
D
0
D’
n
0
n* n’ n
s
w
0
w*
n
w
Figure 8.6: A Static Version of the Working of the Labor Market
Some clarifications may be obtained by referring to a static version of our
view on the working of the labor market. In figure 8.6, the supplier (or the
household, in the labor market case) first (say, at the beginning of period
0) sets its price optimally according to the expected demand curve D
0
. Let
us denote this price as w
0
. Consider now the situation that the supplier’s
expectation on demand is not fulfilled. Instead of n
0
, the market demand
at w
0
is n

. In this case, the household may reasonably believe that the
demand curve should be D

and therefore the optimum price should be w

while the optimum supply should be n

. Yet, due to the adjustment cost
in changing prices, the supplier may stick to w
0
. This produces the gaps
between optimum price w

and actual price w
0
and between optimum supply
n

and actual supply n

.
However, the existence of price and output gaps does not exclude the
166
existence of a disequilibrium or nonclearing market. New Keynesian litera-
ture presumes that at the existing wage rate, the household supplies labor
effort whatever the market demand for labor is. Note that in figure 8.6 the
household’s willingness to supply labor is n
s
. In this context the marginal
cost curve, MC, can be interpreted as marginal disutility of labor which has
also an upward slope since we use the standard log utility function as in the
RBC literature. This then means that the household’s supply of labor will be
restricted by a wage rate below, or equal, to the marginal disutility of work.
If we define the labor market demand and supply in a standard way, that
is, at the given wage rate there is a firm’s willingness to demand labor and
the household’s willingness to supply labor, and a nonclearing labor market
can be very general phenomena. This indicates that even if there are no
adjustment costs so that the household can adjust the wage rate in every t
(so that there is no price and quantity gaps as we have mentioned earlier),
the disequilibrium in the labor market may still exist.
Appendix II: Adaptive Optimization and Con-
sumption Decision
For the problem (8.14) - (8.16), we define the Lagrangian:
L = E
t
__
log c
d
t
+ θ log(1 −n
t
)
¸
+
λ
t
_
k
s
t+1

1
1 + γ
_
(1 −δ)k
s
t
+f(k
s
t
, n
t
, A
t
) −c
d
t
¸
__
+
E
t
_

i=1
β
i
_
log(c
d
t+i
) + θ log(1 −n
s
t+i
)
¸
+
β
i
λ
t+i
_
k
s
t+1+i

1
1 + γ
_
(1 −δ)k
s
t+i
+ f(k
s
t+i
, n
s
t+i
, A
t+i
) −c
d
t+i
¸
__
Since the decision is only about c
d
t
, we thus take the partial derivatives of
L with respect to c
d
t
, k
s
t+1
and λ
t
. This gives us the following first-order
167
condition:
1
c
d
t

λ
t
1 + γ
= 0, (8.30)
β
1 + γ
E
t
_
λ
t+1
_
(1 −δ) + (1 −α)A
t+1
_
k
s
t+1
_
−α
_
n
s
t+1
¯
N/0.3
_
α
__
= λ
t
,
(8.31)
k
s
t+1
=
1
1 +γ
_
(1 −δ)k
s
t
+ A
t
(k
s
t
)
1−α
_
n
t
¯
N/0.3
_
α
−c
d
t
¸
. (8.32)
Recall that in deriving the decision rules as expressed in (8.23) and (8.24)
we have postulated
λ
t+1
= Hk
s
t+1
+ QA
t+1
+ h, (8.33)
n
s
t+1
= G
21
k
s
t+1
+ G
22
A
t+1
+g
2,
(8.34)
where H, Q, h, G
21
, G
22
and g
2
have all been resolved previously in the house-
hold optimization program. We therefore obtain from (8.33) and (8.34)
E
t
λ
t+1
= Hk
s
t+1
+ Q(a
0
+ a
1
A
t
) + h, (8.35)
E
t
n
s
t+1
= G
2
k
s
t+1
+ D
2
(a
0
+ a
1
A
t
) + g
2
. (8.36)
Our next step is to linearize (8.30) - (8.32) around the steady states. Suppose
they can be written as
F
c1
c
t
+ F
c2
λ
t
+f
c
= 0, (8.37)
F
k1
E
t
λ
t+1
+ F
k2
E
t
A
t+1
+F
k3
k
s
t+1
+ F
k4
E
t
n
s
t+1
+ f
k
= λ
t
, (8.38)
k
s
t+1
= Ak
t
+ WA
t
+ C
1
c
d
t
+C
2
n
t
+b. (8.39)
Expressing E
t
λ
t+1
, E
t
n
s
t+1
and E
t
A
t+1
in terms of (8.35), (8.36) and a
0
+a
1
A
t
respectively, we obtain from (8.38)
κ
1
k
s
t+1
+ κ
2
A
t
+ κ
0
= λ
t
, (8.40)
where, in particular,
κ
0
= F
k1
(Qa
0
+ h) + F
k2
a
0
+ F
k4
(G
22
a
0
+ g
2
) + f
k
, (8.41)
κ
1
= F
k1
H + F
k3
+F
k4
G
21
, (8.42)
κ
2
= F
k1
Qa
1
+ F
k2
a
1
+ F
k4
G
22
a
1
. (8.43)
Using (8.37) to express λ
t
in (8.40), we further obtain
κ
1
k
s
t+1

2
A
t
+ κ
0
= −
F
c1
F
c2
c
d
t

f
c
F
c2
, (8.44)
168
which is equivalent to
k
s
t+1
= −
κ
2
κ
1
A
t

F
c1
F
c2
κ
1
c
d
t

κ
0
κ
1

f
c
F
c2
κ
1
. (8.45)
Comparing the right side of (8.39) and (8.45) will allow us to solve c
d
t
as
c
d
t
= −
_
F
c1
F
c2
κ
1
+C
1
_
−1
_
Ak
t
+
_
κ
2
κ
1
+ W
_
A
t
+ C
2
n
t
+
_
b +
κ
0
κ
1
+
f
c
F
c2
κ
1
__
.
Appendix III: Welfare Comparison of the Model
Variants
In this appendix we want to undertake a welfare comparison of our different
model variants. We follow here Ljungqvist and Sargent (1998) and compute
the welfare implication of the different model variants. Yet, whereas they
concentrate on the steady state, we compute the welfare also outside the
steady state. We here restrict our welfare analysis to the U. S. model variants.
It is sufficient to consider only the equilibrium (benchmark) model, and the
two models with nonclearing labor market. They are given by Simulated
Economy I, II, and III. A likely conjecture is that the benchmark model
should always be superior to the other two variants because the decisions on
labor supply - which are optimal for the representative agent - are realized
in all periods.
However, we believe that this may not generically be the case. The point
here is that the model specification in variants II and III, is somewhat differ-
ent from the the benchmark model due to the distinction between expected
and actual moments with respect to our state variable, the capital stock.
In the models of nonclearing market the representative agent may not ra-
tionally expect those moments of the capital stock. The expected moments
are represented by equation (8.5) while the actual moments are expressed by
equation (8.5). They are not necessary equal unless the labor efforts of those
two equations are equal. Also, in addition to A
t
, there is another external
variable w
t
, entering into the models, which will affect the labor employed
(via demand for labor) and hence eventually the welfare performance. The
welfare result due to these changes in the specification may therefore deviate
from what one would expect.
Our exercise here is to compute the values of the objective function for all
our three models, given the sequence of our two decision variables, consump-
tion and employment. Note that for our models variants with nonclearing
169
labor market, we use realized employment, rather than the decisions on labor
supply, to compute the utility functional. More specifically, we calculate V ,
where
V ≡

t=0
β
t
U(c
t
, n
t
)
where U(c
t
, n
t
) is given by log(c
t
) + θ log(1 −n
t
). This exercise here is con-
ducted for different initial conditions of k
t
denoted by k
0
. We choose the
different k
0
based on the grid search around the steady state of k
t
. Obvi-
ously, the value of V for any given k
0
will also depend on the external variable
A
t
and w
t
(though in the benchmark model, only A
t
appears). We consider
two different ways to treat these external variables. One is to set both ex-
ternal variables at their steady state levels for all t. The other is to employ
their observed series entering into the computation. Figure 8.7 provides the
welfare comparison of the two versions.
(a) Welfare Comparison with External Variable set at their Steady State
(Solid Line for Model II; Dashed Line for Model III)
(b) Welfare Comparison with External Variable set at their Observed Series
(Solid Line for Model II; Dashed Line for Model III)
Figure 8.7: Welfare Comparison of Model II and III
170
In Figure 8.7(a), 8.7(b), the percentage deviations of V from the corre-
sponding values of benchmark model is plotted for both Model II and Model
III given for various k
0
around the steady states. The various k
0
’s are ex-
pressed in terms of the percentages deviation from the steady state of k
t
.
It is not surprising to find that in most cases the benchmark model is
the best in its welfare performance, since most of the values are negative.
However, it is important to note that the deviations from the benchmark
model are very small. Similar results have been obtained by Ljungqvist and
Sargent (1998), they, however, compare only the steady states. Meanwhile,
not always is the benchmark model the best one. When k
0
is sufficiently high,
close to or higher than the steady state of k
t
, the deviations become 0 for the
Model II. Furthermore, in the case of using observed external variables, the
Model III will be superior in its welfare performance when k
0
is larger than
its steady state, see lower part of the figure.
Chapter 9
Monopolistic Competition,
Nonclearing Markets and
Technology Shocks
In the last chapter we have found that if we introduce some non-Walrasian
features into an intertemporal decision model with the household’s wage
setting, sluggish wage and price adjustments and adaptive optimization the
labor market may not be cleared. This model then naturally generates higher
volatility of employment and a low correlation between employment and
consumption. Next we relate our approach of nonclearing labor market to
the theory of monopolistic competition in the product market as developed
in New Keynesian economics.
In many respects, the specifications in this chapter are the same as for
the model of the last chapter. We shall still follow the assumptions with
respect to ownership, adaptive optimization and nonclearing labor markets.
The assumption of re-opening of the market shall also be adopted here. This
is necessary for a model with nonclearing markets where adjustments should
take place in real time.
9.1 The Model
As mentioned in chapter 8, price and wage stickiness is an important feature
in New Keynesian literature. As concerning wage stickiness Keynes (1936)
has attributed strong stabilizing effects to wage stickiness. Recent litera-
ture uses monopolistic competition theory to give a foundation to nominal
stickiness.
Since both household and firm make their quantity decisions on the basis
171
172
of the given prices, including the output price p
t
, the wage rate w
t
and the
rental rate of capital stock r
t
, we shall first discuss how in our model the
period t prices are determined at the beginning of period t.
Here again, as in the model of the last chapter there are three commodi-
ties. One of them should serve as a numeraire, which we assume to be the
output. Therefore, the output price p
t
always equals 1. This indicates that
the wage w
t
and the rental rate of capital stock r
t
are all measured in terms
of the physical units of output.
1
As to the rental rate of capital r
t
, it is as-
sumed to be adjustable and to clear the capital market. We can then ignore
its setting. Here, we shall follow all the specifications on price and wage
setting as in presented the chapter 8.2.1.
9.1.1 The Household’s Desired Transactions
When the prices, including wages, have been set the household is going to
express its desired demand and supply. We define the household’s willingness
as those demand and supply that can allow the household to obtain the
maximum utility on the condition that these demand and supply can be
realized at the given set of prices. We can express this as a sequence of
output demand and factor supply
_
c
d
t+i
, i
d
t+i
, n
s
t+i
, k
s
t+i+1
_

i=0
, where i
t+i
is
referred to investment. Note that here we have used the superscripts d and
s to refer to the agent’s desired demand and supply. The decision problem
for the household to derive its desired demand and supply is very similar as
in the last chapter and can be formulated as
max
{c
d
t+i
,n
s
t+i
}

i=0
E
t
_

i=0
β
i
U(c
d
t+i
, n
s
t+i
)
_
(9.1)
subject to
k
s
t+i+1
= (1 −δ)k
s
t+i
+ f(k
s
t+i
, n
s
t+i
, A
t+i
) −c
d
t+i
(9.2)
All the notations have been defined in the last chapter. For the given
technology sequence {A
t+i
}

i=0
, the solution of optimization problem can be
written as:
c
d
t+i
= G
c
(k
s
t+i
, A
t+i
) (9.3)
n
s
t+i
= G
n
(k
s
t+i
, A
t+i
) (9.4)
1
For our simple representative agent model without money, this simplification does not
effect our major result derived from our model. Meanwhile, it will allow us to save
the effort to work on the nominal price determination, a main focus in the recent new
Keyensian literature.
173
We shall remark that although the solution appears to be a sequence
_
c
d
t+i
, n
s
t+i
_

i=0
only (c
d
t
, n
s
t
) along with (i
d
t
, k
s
t
), where i
d
t
= f(k
s
t
, n
s
t
, A
t
) −c
d
t
and k
s
t
= k
t
, are
actually carried by the household into the market for exchange due to our
assumption of re-opening market.
9.1.2 The Quantity Decisions of the Firm
The problem of our representative firm in period t is to choose the current
input demand and output supply (n
d
t
, k
d
t
, y
s
t
) to maximizes the current profit.
However in this chapter, we no longer assume that the product market is
in perfect competition. Instead, we shall assume that our representative
firm behaves as a monopolistic competitor, and therefore it should face a
perceived demand curve for its product, see the discussion above. Thus given
the output price, which shall always be 1 (since it serves as a numeraire), the
firm has a perceived constraint on the market demand for its product. We
shall denote this perceived demand as ´ y
t
.
On the other hand, given the prices of output, labor and capital stock
(1, w
t
, r
t
), the firm should also have its own desired supply y

t
. This desired
supply is the amount that allows the firm to obtain a maximum profit on
the assumption that all its output can be sold. Obviously, if the expected
demand ´ y
t
is less than the firm’s desired supply y

t
, the firm will choose ´ y
t
.
Otherwise, it will simply follow the short side rule to choose y

t
as in the
general New Keynesian model.
Thus, for our representative firm, the optimization problem can be ex-
pressed as
max min(´ y
t
, y

t
) −r
t
k
d
t
−w
t
n
d
t
(9.5)
subject to
min(´ y
t
, y

t
) = f(A
t
, k
t
, n
t
) (9.6)
For the regular condition on the production function, the solutions should
satisfy
k
d
t
= f
k
(r
t
, w
t
, A
t
, ´ y
t
) (9.7)
n
d
t
= f
n
(r
t
, w
t
, A
t
, ´ y
t
) (9.8)
where r
t
and w
t
are respectively the prices (in real term) of capital and labor.
2
We are now considering the transactions in our three markets. Let us
first consider the two factor markets.
2
The detail will be provided in the appendix of this chapter.
174
9.1.3 Transaction in the Factor Market
Since the rental rate of capital stock r
t
is adjusted to clear the capital market
when the market is re-opened in period t, we have
k
t
= k
s
t
= k
d
t
(9.9)
Due to the monopolistic wage setting and the sluggish wage adjustment,
there is no reason to believe that the labor market will be cleared, see the
discussion in the last chapter. Therefore, we shall again define a realization
rule with regard to actual employment. As we have discussed in the last
chapter, the most frequent rule that has been used is the short side rule,
that is,
n
t
= min(n
d
t
, n
s
t
)
Thus, when a disequilibrium occurs, only the short side of demand and sup-
ply will be realized. Another important rule that we have discussed in the
last chapter is the compromising rule. The latter rule means that when
disequilibrium occurs in the labor market both firms and workers have to
compromise. In particular, we again formulate this rule as
n
t
= ωn
d
t
+ (1 −ω)n
s
t
(9.10)
where ω ∈ (0, 1). Our study in the last chapter indicates that the short
side rule seems to be empirically less satisfying than the compromising rule.
Therefore, in this chapter we shall only consider the compromising rule.
9.1.4 The Transaction in the Product Market
After the transactions in those two factor markets have been carried out, the
firm will engage in its production activity. The result is the output supply,
which is now given by
y
s
t
= f(k
t
, n
t
, A
t
) (9.11)
One remark should be added here. Equation (9.11) indicates that the
firm’s actual produced output is not necessarily constrained by equation
(9.6), and therefore one may argue that the output determination does follow
eventually the Keynesian way, that is, the output is constrained by demand.
However, the Keynesian way of output determination is still reflected in the
firm’s demand for inputs, capital and labor (see equation (9.7) and (9.8)). On
the other hand, if the produced output is still constrained by (9.6), one may
175
encouter the difficulty either in terms of feasibility when y
s
t
in (9.11) is less
than min(´ y
t
, y

t
) or in terms of inefficiency when y
s
t
is larger than min(´ y
t
, y

t
).
3
Given that the output is determined by (9.11), the transaction then needs
to be carried out with respect to y
s
t
. It is important to note here that when
disequilibrium occurs in the labor market the previous consumption plan as
expressed by (9.3) becomes invalid due to the improper rule of capital accu-
mulation (9.2) for deriving the plan. Therefore, the household will construct
a new plan as expressed below:
max
(c
d
t
)
E
t
_

i=0
β
i
U(c
d
t+i
, n
s
t+i
)
_
(9.12)
s.t. k
s
t+1
=
1
1 + γ
_
(1 −δ)k
s
t
+ f(k
t
, n
t
, A
t
) −c
d
t
¸
(9.13)
k
s
t+i+1
=
1
1 +γ
_
(1 −δ)k
s
t+i
+ f(k
s
t+i
, n
s
t+i
, A
t+i
) −c
d
t+i
¸
, (9.14)
i = 1, 2, ....
Above, k
t
equals k
s
t
as expressed by (9.9) and n
t
is given by (9.10) with n
s
t
and
n
d
t
are implied by (9.4) and (9.8) respectively. As we have demonstrated in
the last chapter, the solution to this further step in the optimization problem
can be written in terms of the following equation:
c
d
t
= G
c2
(k
t
, A
t
, n
t
) (9.15)
Given this consumption plan, the product market should be cleared if the
household demands the amount f(k
t
, n
t
, A
t
) −c
d
t
for investment. Therefore,
c
d
t
in (9.15) should also be the realized consumption.
9.2 Estimation and Calibration for U.S. Econ-
omy
9.2.1 The Empirically Testable Model
This section provides an empirical study of our theoretical model above pre-
sented which again, in order to make it empirically more realistic, has to
include economic growth.
3
Note that here when y
s
t
< min(´ y
t
, y

t
) there will be no sufficient inputs to produce
min(´ y
t
, y

t
). On the other hand, when y
s
t
> min(´ y
t
, y

t
), not all inputs will be used
in production, and therefore resources are somewhat wasted.
176
Let K
t
denote for capital stock, N
t
for per capita working hours, Y
t
for
output and C
t
for consumption. Assume the capital stock in the economy
follows the transition law:
K
t+1
= (1 −δ)K
t
+ A
t
K
1−α
t
(N
t
X
t
)
α
−C
t
, (9.16)
where δ is the depreciation rate; α is the share of labor in the production
function F(·) = A
t
K
1−α
t
(N
t
X
t
)
α
; A
t
is the temporary shock in technology
and X
t
the permanent shock that follows a growth rate γ. Dividing both
sides of equation (9.16) by X
t
, we obtain
k
t+1
=
1
1 +γ
_
(1 −δ)k
t
+ A
t
k
1−α
t
(n
t
N/0.3)
α
−c
t
_
, (9.17)
where k
t
≡ K
t
/X
t
, c
t
≡ C
t
/X
t
and n
t
≡ 0.3N
t
/N with N to be the sample
mean of N
t
. Note that the above formulation also indicates that the form of
f(·) in the last section may take the form
f(·) = A
t
k
1−α
t
(n
t
N/0.3)
α
(9.18)
With regard to the household preference, we shall assume that the utility
function takes the form
U(c
t
, n
t
) log c
t
+ θ log(1 −n
t
) (9.19)
The temporary shock A
t
may follow an AR(1) process:
A
t+1
= a
0
+a
1
A
t
+ ǫ
t
, (9.20)
where ǫ
t
is an independently and identically distributed (i.i.d.) innovation:
ǫ
t
∼ N(0, σ
2
ǫ
).
Finally, we shall assume that the output expectation ´ y
t
be simply equal
to y
t−1
, that is,
´ y
t
= y
t−1
(9.21)
where y
t
= Y
t
/X
t
, so that the expectation is fully adaptive to the actual
output in the last period.
4
4
Of course, one can also consider other forms of expectation. One possibility is to assume
the expectation to be rational so that it is equal to the steady state of y
t
. Indeed, we
also have done the same empirical study, yet the result is less satisfying.
177
9.2.2 The Data Generating Process
For our empirical assessment, we consider two model variants: the standard
model, as a benchmark for comparison, and our model with monopolistic
competition and nonclearing labor market. Specifically, we shall call the
benchmark model the Model I and the model with monopolistic competition
the Model IV (in distinction from the Model II and Model III in Chapter 8).
For the benchmark dynamic optimization model, the Model I, the data
generating process include (9.17), (9.20) as well as
c
t
= G
11
A
t
+ G
12
k
t
+ g
1
(9.22)
n
t
= G
21
A
t
+ G
22
k
t
+ g
2
(9.23)
Note that here (9.22) and (9.23) are the linear approximations to (9.3) and
(9.4). The coefficients G
ij
and g
i
(i = 1, 2 and j = 1, 2) are the compli-
cated functions of the model’s structural parameters, α, β, δ,among others.
They are computed by the numerical algorithm using the linear-quadratic
approximation method.
5
To define the data generating process for our model with monopolistic
competition and nonclearing labor market, the Model IV, we shall first mod-
ify (9.23) as
n
s
t
= G
21
A
t
+G
22
k
t
+g
2
(9.24)
On the other hand, the equilibrium in product market indicates that c
d
t
in
(9.15) should be equal to c
t
, and therefore this equation can also be approx-
imated by
c
t
= G
31
A
t
+ G
32
k
t
+G
33
n
t
+ g
3
(9.25)
The computation of coefficients g
3
and G
3j
, j = 1, 2, 3, are the same as in
Chapter 8.
Next we consider the demand for labor n
d
t
derived from the firm’s opti-
mization problem (9.5) - (9.8), which shall now be augmented by the growth
factor for our empirical test. The following proposition concerns the deriva-
tion of n
d
t
.
Proposition: When the capital market is cleared, the firm’s demand for
labor can be expressed as
n
d
t
=
_
¸
_
¸
_
_
0.3
¯
N
_
_
ˆ yt
At
_
1/α
_
1
kt
_
(1−α)/α
if ˆ y
t
< y

t
0.3
¯
N
_
αAtZt
wt
_ 1
1−α
k
t
if ˆ y
t
≥ y

t
(9.26)
5
The algorithm used here is again from Chapter 1 of this volume.
178
where
y

t
= (αA
t
Z
t
/w
t
)
α/(1−α)
k
t
A
t
(9.27)
Note that the first n
d
t
in the above equation responds to the condition that
the expected demand is less than the firm’s desired supply, and the second
to the condition otherwise. The proof of this proposition is provided in the
appendix to this chapter. Thus, for Model IV, the data generating process
includes (9.17), (9.20), (9.10), (9.24), (9.25), (9.26) and (9.21) with w
t
given
by the observed wage rate. Here again we do not need to attempt to give
the actually observed sequence of wages a further theoretical foundation. For
our purpose it suffices to take the empirically observed series of wages.
9.2.3 The Data and the Parameters
We here only employ time series data of the U.S. economy. To calibrate the
models, we shall first specify the structural parameters. There are altogether
10 structural parameters in Model IV: a
0
, a
1
, σ
ε
, γ, µ, α, β, δ, θ and ω. All
these parameters are essentially the same as we have employed in Chapter 8
(see Table 8.1) except for the ω. We choose ω to be 0.5203. This is estimated
according to our new model by minimizing the residual sum of square between
actual employment and the model generated employment. The estimation is
again executed by a conventional algorithm, the grid search. Note that here
again we need a rescaling of the wage series in the estimation of ω.
6
9.2.4 Calibration
Table 9.1 provides the result of our calibrations from 5000 stochastic sim-
ulations. This result is further confirmed by Figure 9.1, where a one time
simulation with the observed innovation A
t
are presented.
7
All time series
are detrended by the HP-filter.
6
Note that there is a need of rescaling the wage series in the estimation of ω. This re-
scaling is necessary because we do not exactly know the initial condition of Z
t
, which we
set equal to 1. We have followed the same rescaling procedure as we did in Chapter 8.
7
Of course, for this exercise one should still consider A
t
,the observed Solow residual, to
include not only the technology shock, but also the demand shock among others.
179
Table 9.1: of the Model Variants (numbers in parentheses are the correspond-
ing standard deviations)
Consumption Capital Employment Output
Standard Deviations
Sample Economy 0.0081 0.0035 0.0165 0.0156
Model I Economy 0.0091 0.0036 0.0051 0.0158
(0.0012) (0.0007) (0.0006) (0.0021)
Model IV Economy 0.0071 0.0058 0.0237 0.0230
(0.0015) (0.0018) (0.0084) (0.0060)
Correlation Coefficients
Sample Economy
Consumption 1.0000
Capital Stock 0.1741 1.0000
Employment 0.4604 0.2861 1.0000
Output 0.7550 0.0954 0.7263 1.0000
Model I Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.2043 1.0000
(0.1190) (0.0000)
Employment 0.9288 -0.1593 1.0000
(0.0203) (0.0906) (0.0000)
Output 0.9866 0.0566 0.9754 1.0000
(0.0033) (0.1044) (0.0076) (0.0000)
Model IV Economy
Consumption 1.0000
(0.0000)
Capital Stock 0.3878 1.0000
(0.1515) (0.0000)
Employment 0.4659 0.0278 1.0000
(0.1424) (0.1332) (0.0000)
Output 0.8374 0.0369 0.8164 1.0000
(0.0591) (0.0888) (0.1230) (0.0000)
9.2.5 The Labor Market Puzzle
Despite the bias towards Model I Economy, due to the selection of the struc-
tural parameters, we find that the labor effort is much more volatile than
180
in the Model I Economy the benchmark model. Indeed, comparing to the
benchmark model, the Model I economy, the volatility of labor effort in our
Model IV economy has much been increased if anything the volatility of the
labor effort is too high. This result is, however, not surprising since the
agents face two constraints – one in the labor market and one in the product
market. Also the excessive correlation between labor and consumption has
been weakened.
Further evidence on the better fit of our Model IV economy — as concerns
the volatility of the macroeconomic variables — is also demonstrated in the
Figure 9.1 where the horizontal figures show, from top to bottom, actual
(solid line) and simulated data (dotted line) for consumption, capital stock,
employment and output. The two columns of figures, from the left to the
right, represent the figures for Model I and Model IV economies respectively.
As observable, the employment series in Model IV economy can fit the data
better than the Model I economy.
This resolution to the labor market puzzle should not be surprising be-
cause we specify the structure of labor market essentially the same way as in
the last chapter. However, in addition to the labor market disequilibrium as
specified in the last chapter, we also allow in this chapter for monopolistic
competition in the product market. In addition to impacting the volatility
of labor effort, this may provide the possibility to resolve the another puzzle,
that is, the technology puzzle also arising in the market clearing RBC model.
181
Figure 9.1: Simulated Economy versus Sample Economy: U.S. Case (solid
line for sample economy, dotted line for simulated economy)
9.2.6 The Technology Puzzle
In economic literature, one often discusses the technology in terms of its per-
sistent and temporary effects on the economy. One possibility to investigate
the persistent effect in our models here is to look at the steady states. Given
that at the steady state all the markets will be cleared, our Model IV econ-
omy should have the same steady state as in the benchmark model. For the
convenience of our discussion, we rewrite these steady states in the following
182
equations (see the proof of Proposition 4 in Chapter 4):
n = αφ/ [(α + θ)φ −(δ + γ)θ]
k = A
1/α
φ
−1/α
n
_
N/0.3
_
c = (φ −δ −γ)k
y = φk
where
φ = [(1 +γ) −β(1 −δ)] /β(1 −α)
From the above equation, one finds that technology has the positive per-
sistent effect on output, consumption and capital stock,
8
yet zero effect on
employment.
Next, we shall look at the temporary effect of the technology shock. Ta-
ble 9.2 records the cross correlation of the temporary shock A
t
from our
5000 thousand stochastic simulation. As one can find there, the two models
predicts rather different correlations. In the Model I (RBC) economy, tech-
nology A
t
has a temporary effect not only on consumption and output, but
also on employment, which are all strongly positive. Yet in our Model IV
Economy with monopolistic competition and nonclearing labor market, we
find that the correlation is much weaker with respect to employment. This
is consistent with the widely discussed recent finding that technology has
near-zero (and even negative) effect on employment.
Table 9.2: The Correlation Coefficients of Temporary Shock in Technology.
output consumption employment capital stock
Model I Economy 0.9903 0.9722 0.9966 -0.0255
(0.0031) (0.0084) (0.0013) (0.1077)
Model IV Economy 0.8397 0.8510 0.4137 -0.1264
(0.0512) (0.0507) (0.1862) (0.1390)
At the given expected market demand, an improvement in technology
(reflected as an increase in labor productivity) will reduce the demand for
labor, if the firm follows the Keynesian way of output determination, that
is, the output is determined by demand. In this case, less labor is required
to produce the given amount of output. Technical progress, therefore, may
8
This long run effect of technology is also revealed by recent time series studies in the
context of a variety of endogenous growth models, see Greiner, Semmler and Gong (2004).
183
have an adverse effect on employment at least in the short run. This stylist
fact cannot be explained in the RBC framework since at the given wage rate,
the demand for labor is simply determined by the marginal product, which
should be increased with the improvement in technology. This chapter thus
demonstrates that if we follow the Keynesian way of quantity determina-
tion in a monopolistic competition model, the technology puzzle explored in
standard market clearing models would disappear.
9.3 Conclusions
In the last chapter, we have shown how households may be constrained in the
product market in buying consumption goods by the firms actual demand
for labor. Then noncleared labor market was derived from a multiple stage
decision process of households where we have neglected that firms may also
be demand constrained on the product market. The proposition in this
chapter which shows the firms’ constraint in the product market, explains
this additional complication that can arise due to the interaction of the labor
market and the product market constraints.
We have then shown in this chapter how the firms’ constraints on the
product market may explain the technology puzzle, namely that positive
technology shocks may have, only a weak effect on employment in the short
run - a phenomenon inconsistent with equilibrium business cycle models,
where technology shocks and employment are predicted to be positively cor-
related. This result was obtained in an economy with monopolistic compe-
tition, as in New Keynesian economics, where prices and wages are set by
a monopolistic supplier and are sticky, resulting in an updating scheme of
prices and wages where only a fraction of prices and wages are optimally set
each time period. Yet we have also introduced a nonclearing labor market,
resulting from a multiple stage decision problem, where then the households’
constraint on the labor market spills over to the product market and the
firms constraint on the product market generates employment constraints.
We could show that such a model matches better time series data of the U.S.
economy.
184
9.4 Appendix: Proof of the Proposition
Let X
t
= Z
t
L
t
, with Z
t
to be the permanent shock resulting purely from
productivity growth, and L
t
from population growth. We shall assume that
L
t
has a constant growth rate µ and hence Z
t
follows the growth rate (γ −µ).
The production function can be written as Y
t
= A
t
Z
α
t
K
1−α
t
H
α
t
, where H
t
equals N
t
L
t
and can be regarded as total labor hours.
Let us first consider the firm’s willingness to supply Y

t
, Y

t
= X
t
y

t
, under
the condition that the rental rate of capital r
t
clears the capital market while
the wage rate w
t
is given. In this case, the firm’s optimization problem can
be expressed as
max Y

t
−r
t
K
d
t
−w
t
H
d
t
subject to
Y

t
= A
t
(Z
t
)
α
_
K
d
t
_
1−α
_
H
d
t
_
α
The first order condition tell us that
(1 −α)A
t
(Z
t
)
α
_
K
d
t
_
−α
_
H
d
t
_
α
= r
t
(9.28)
αA
t
(Z
t
)
α
_
K
d
t
_
1−α
_
H
d
t
_
α−1
= w
t
(9.29)
from which we can further obtain
r
t
w
t
=
_
1 −α
α
_
H
d
t
K
d
t
(9.30)
Since the rental rate of capital r
t
is assumed to clear the capital market, we
can thus replace K
d
t
in the above equations by K
t
. Since w
t
is given, and
therefore the demand for labor can be derived from (9.29):
H
d
t
=
_
αA
t
w
t
_ 1
1−α
(Z
t
)
α
1−α
K
t
Dividing both sides of the above equation by X
t
, and then reorganizing, we
obtain
n
d
t
=
0.3
¯
N
_
αA
t
Z
t
w
t
_ 1
1−α
k
t
We shall regard this labor demand as the demand when the firm desired
activities are carried out, which is indeed the first equation in (9.26). Given
this n
d
t
, the firm’s desire to supply y

t
can be expressed as
y

t
= A
t
k
1−α
t
(n
d
t
N/0.3)
α
= A
t
k
t
_
αA
t
Z
t
w
t
_ α
1−α
(9.31)
185
This is the equation (9.27) as expressed in the proposition.
Next, we consider the case that the firm’s supply is constrained by the
expanded demand
ˆ
Y
t
,
ˆ
Y
t
= X
t
y
t
. In other words, ˆ y
t
< y

t
where y

t
is given
by (9.31). In this case, the firm’s profit maximization problem is equivalent
to the following minimization problem:
min r
t
K
d
t
+ w
t
H
d
t
subject to
ˆ
Y
t
= A
t
(Z
t
)
α
_
K
d
t
_
1−α
_
H
d
t
_
α
(9.32)
The first-order condition will still allows us to obtain (9.30). Using equation
(9.32) and (9.30), we obtain the demand for capital K
d
t
and labor H
d
t
as
K
d
t
=
_
ˆ
Y
t
A
t
Z
α
t
_
__
w
t
r
t
__
1 −α
α
__
α
H
d
t
=
_
ˆ
Y
t
A
t
Z
α
t
_
__
w
t
r
t
__
α
1 −α
__
1−α
Dividing both sides of the above two equations by X
t
, we obtain
k
d
t
=
_
´ y
t
A
t
___
w
t
r
t
Z
t
__
1 −α
α
__
α
(9.33)
n
d
t
=
_
0.3´ y
t
A
t
N
___
r
t
Z
t
w
t
__
α
1 −α
__
1−α
(9.34)
Since the real rental of capital r
t
will clear the capital market, we can replace
k
d
t
in (9.33) by k
t
.Substituting it into (9.34) for explaining r
t
, we obtain
n
d
t
=
_
0.3
N
__
´ y
t
A
t
_
1/α
_
1
k
t
_
(1−α)/α
This is the second equation in (9.26).
Chapter 10
Conclusions
In this book, we try to contribute to the current research in stochastic dy-
namic macroeconomics. We recognize that the stochastic dynamic optimiza-
tion model is important in macroeconomics, we consider the current stan-
dard model of model, the real business cycle model, only to be a simple and
starting point for macrodynamic analysis. For the model to explain the real
world more effectively, some Keynesian features should be introduced. We
have shown that with such an introduction the model can be enriched while
it becomes possible to resolve the most important puzzles, the labor market
puzzle and the technology puzzle, in RBC economy.
186
Bibliography
[1] Adelman, I. and F. L. Adelman (1959): ”The Dynamic Properties of
the Klein-Goldberger Model”, Econometrica, vol. 27, 596-625
[2] Arrow, K. J. and Debreu, G. (1954): ”Existence of an Equilibrium for
a Competitive Economy,” Econometrica 22, 265-290.
[3] Basu, S. and M. S. Kimball (1997): ”Cyclical Productivity with Un-
observed Input Variation,” NBER Working Paper Series 5915. Cam-
bridge, MA.
[4] Bellman, R. (1957): Dynamic Programming. Princeton, NJ: Princeton
University Press.
[5] Benassy, J.-P. (1995): ”Money and Wage Contract in an Optimizing
Model of the Business Cycle”, Journal of Monetary Economics, vol. 35:
303-315.
[6] Benassy, J.-P. (2002): ”The Macroeconomics of Imperfect Competition
and Nonclearing Markets”, Cambridge: MIT-Press.
[7] Benhabib, J. and R. Farmer (1994): ”Indeterminacy and Increasing
Returns”, Journal of Economic Theory 63: 19-41.
[8] Benhabib, J. and R. Farmer (1999): ”Indeterminacy and Sunspots in
Macroeconomics”, Handbook for Macroeconomics, eds. J. Taylor and
M. Woodford, North-Holland, New York, vol. 1A: 387-448
[9] Benhabib, J., S. Schmidt-Grohe and M. Uribe (2001): ”Monetary Pol-
icy and Multiple Equilibria”, American Economic Review, vol. 91, no.1:
167-186.
[10] Benninga, S. and A. Protopapadakis (1990),”Leverage, Time Prefer-
ence and the Equity Premium Puzzle,” Journal of Monetary Economics
25, 49-58.
187
BIBLIOGRAPHY 188
[11] Bennett,R. L. and R.E.A. Farmer (2000): ”Indeterminacy with Non-
separable Utility”, Journal of Economic Theory 93: 118-143.
[12] Benveniste and Scheinkman (1979): ”On the Differentiability of the
Value Function in Dynamic Economics, Econometrica, Vol. 47(3):727-
732.
[13] Beyn, W. J., Pampel, T. and W. Semmler (2001): ”Dynamic Op-
timization and Skiba Sets in Economic Examples”, Optimal Control
Applications and Methods, vol. 22, issues 5-6: 251-280.
[14] Blanchard, O. and S. Fischer (1989): ”Lectures on Macroeconomics”,
Cambridge, MIT-Press
[15] Blanchard, O. and J. Wolfers (2000): ”The Role of Shocks and In-
stitutions in the Rise of Unemployment: The Aggregate Evidence”,
Economic Journal 110:C1-C33.
[16] Blanchard, O. (2003): ”Comments on Jjungqvist and Sargent” in:
Knowledge, Information, And Expectations in Modern Macroceco-
nomics”, edited by P. Aghion, R. Frydman, J. Stiglitz and M. Wood-
ford, Princeton Unviersity Press, Princeton: 351-356.
[17] Bohachevsky, I. O., M. E. Johnson and M. L. Stein (1986), ”General-
ized Simulated Annealing for Function Optimization,” Technometrics,
vol. 28, 209-217.
[18] Bohn, H. (1995) ”The Sustainability of Budget Deficits in a stochastic
Economy”, Journal of Money, Credit and Banking, vol. 27, no. 1:257-
271.
[19] Boldrin, M., Christiano, L. and J. Fisher (1996), ” Macroeconomic
Lessons for Asset Pricing”, NBER working paper no. 5262.
[20] Boldrin, M., Christiano, L. and J. Fisher (2001), ”Habit Persistence,
Asset Returns and the Business Cycle”, American Economic Review,
vol. 91, 1:149-166.
[21] Brock, W. and Mirman (1972)”Optimal Economic Growth and Uncer-
tainty: The Discounted Case”, Journal of Economic Theory 4: 479-513.
[22] Brock, W. (1979) ”An integration of stochastic growth theory and
theory of finance, part I: the growth model”, in: J. Green and J.
Schenkman (eds.), New York, Academic Press: 165-190.
BIBLIOGRAPHY 189
[23] Brock (1982) ”Asset Pricing in a Production Economy”, The
Economies of Information and Uncertainty, ed. by J.J. McCall,
Chicago, University of Chicago Press: 165-192.
[24] Burns, A. F. and W. C. Mitchell (1946): Measuring Business Cycles,
New York: NBER.
[25] Burnside,A. C. , M. S. Eichenbaum and S. T. Rebelo (1993):
”Labor Hoarding and the Business Cycle”, Journal of Political
Economy,101:245-273.
[26] Burnside, C., M. Eichenbaum and S. T. Rebelo (1996): ”Sectoral Solow
Residual”, European Economic Review, Vol. 40: 861-869.
[27] Calvo, G.A. (1983)”Staggered Contracts in a Utility Maximization
Framework”, Journal of Monetary Economics, vol 12: 383-398.
[28] Campbell, J. (1994), ”Inspecting the Mechanism: An Analytical Ap-
proach to the Stochastic Growth Model”, Journal of Monetary Eco-
nomics 33, 463-506.
[29] F. Camilli and M. Falcone (1995), ”Approximation of Ooptimal Control
Problems with State Constraints: Estimates and Applications”, in B.S.
Mordukhovic, H.J. Sussman eds., ”Nonsmooth analysis and geometric
methods in deterministic optimal control”, IMA Volumes in Applied
Mathematics 78, Springer Verlag, 1996, 23-57.
[30] Capuzzo-Dolcetta, I. (1983): ”On a Discrete Approximation of the
Hamilton-Jacobi-Bellman Equation of Dynamic Programming”, Appl.
Math. Optim., vol. 10: 367-377.
[31] Chow, G. C. (1983),”Econometrics”, New York: MacGraw-Hill, Inc.
[32] Chow, G. C. (1993): ”Statistical Estimation and Testing of a Real Busi-
ness Cycle Model,” Econometric Research Program, Research Memo-
randum, no. 365, Princeton: Princeton University.
[33] Chow, G. C. (1993): Optimum Control without Solving the Bellman
Equation, Journal of Economic Dynamics and Control 17, 621-630.
[34] Chow, G. C. (1997): Dynamic Economics: Optimization by the La-
grange Method, New York: Oxford University Press.
BIBLIOGRAPHY 190
[35] Chow, G. C. and Kwan, Y. K. (1998): How the Basic RBC Model
Fails to Explain U.S. Time Series, Jounal of Monetary Economics 41,
308-318.
[36] Christiano, L. J. (1987): Why Does Inventory Fluctuate So Much?
Journal of Monetary Economics, vol. 21: 247-80.
[37] Christiano, L. J. (1988): ”Why Does Inventory Fluctuate So Much?”,
Journal of Monetary Economics, vol. 21: 247-80.
[38] Christiano, L. J. (1987): Technical Appendix to “Why Does Inventory
Investment Fluctuate So Much?” Research Department Working Paper
No. 380, Federal Reserve Bank of Minneapolis.
[39] Christiano, L. J. and M. Eichenbaum (1992): ”Current Real Business
Cycle Theories and Aggregate Labor Market Fluctuation,” American
Economic Review, June, 431-472.
[40] Christiano, L.J., M. Eichenbaum and C. Evans (2001), “Nominal
Rigidities and the Dynamic Effects of a Stock to Monetary Policy,
[41] Cochrane (2001): ”Asset Pricing”. Princeton: Princeton University
Press.
[42] Cooley, T. and E. Prescott (1995): ”Economic Growth and Business
Cycles”, in Cooley, T. ed., Frontiers in Business Cycle Research, Prince-
ton: Princeton University Press
[43] Corana, A., M. C. Martini, and S. Ridella (1987), ”Minimizing Multi-
modal Functions of Continuous Variables with the Simulating Anneal-
ing Algorithm,” ACM Transactions on Mathematical Software, vol. 13,
262-80.
[44] Danthine, J.P. and J.B. Donaldson (1990): ”Efficiency Wages and the
Business Cycle Puzzle”, European Economic Review 34: 1275-1301.
[45] Danthine, J.P. and J. B. Donaldson (1995): ”Non-Walrian Economies,”
in T.F. Cooly (ed), Frontiers of Business Cycle Research, Prince-
ton:Princeton University Press.
[46] Dawid, H. and R. Day (2003), ”Adaptive Economizing and Sustainable
Living: Optimally, Suboptimally and Pessimality in the One Sector
Growth Model”, mimeo, University of Bielefeld.
BIBLIOGRAPHY 191
[47] den Haan, W. and A. Marcet (1990): ”Solving the Stochastic Growth
Model by Parameterizing Expectations”. Journal of Business and Eco-
nomic Statistics, 8: 31-34.
[48] Debreu, G. (1959): Theory of Value, New York: Wiley.
[49] Diebold F.X.L.E. Ohanian and J. Berkowitz (1995), ”Dynamic Equi-
librium Economies: A Framework for Comparing Model and Data”,
Technical Working Paper No. 174, National Bureau of Economic Re-
search.
[50] Eichenbaum, M. (1991): ”Real Business Cycle Theory: Wisdom or
Whimsy?” Journal of Economic Dynamics and Control, vol. 15, 607-
626.
[51] Eichenbaum, M, L. Hansen and K. Singleton (1988): ”A Time Series
Analysis of Representative Agent Models of Consumption and Leisure
Under Uncertainty,” Quarterly Journal of Economics, 51-78
[52] Eisner, R. and R. Stroz (1963), “Determinants of Business Investment,
Impacts on Monetary Policy, Prentice Hall.
[53] Erceg, C. J., D. W. Henderson and A. T. Levin (2000), ”Optimal Mon-
etary Policy with Staggered Wage and Price Contracts”, Journal of
Monetary Economics, Vol. 46: 281 - 313.
[54] European Central Bank Report, Country Decision (2004), ”Quantify-
ing the Impact Structural Reforms”, European Central Bank, Frank-
furt.
[55] Evans, C. (1992): ”Productivity Shock and Real Business Cycles”,
Journal of Monetary Economics, Vol. 29, p191-208.
[56] Fair, R. C. (1984): Specification, Estimation, and Analysis of Macroe-
conometric Models, Cambridge, MA: Harvard University Press.
[57] Fair, R. C. and J. B. Taylor (1983): Solution and Maximum Likeli-
hood Estimation of Dynamic Nonlinear Rational Expectation Models,
Econometrica, 21(4), 1169-1185.
[58] Falcone, M. (1987) ”A Numerical Approach to the Infinite Horizon
Problem of Determinstic Control Theory”, Appl. Math. Optim., 15:
1-13.
BIBLIOGRAPHY 192
[59] Farmer (1999) ”Macroeconomics with Self-Fulfilling Expectations”,
Cambridge, MIT Press.
[60] Feichtinger, G., F.H. Hartl, P. Kort and F. Wirl (2000), “The Dynamics
of a Simple Relative Adjustment-Cost Framework, mimeo, University
of Technology, Vienna.
[61] Francis, N. and V.A. Ramey (2001): ”Is the Technology-Driven Real
Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations
Revisited”, University of California, San Diego.
[62] Francis, N. and V.A. Ramey (2003): ”The Source of Historical Eco-
nomic Fluctuations: An Analysis using Long-Run Restrictions”, Uni-
versity of California, San Diego.
[63] Gali, (1999): Technology, Employment, and the Business Cycle: Do
Technology Shocks Explain Aggregate Fluctuation? American Eco-
nomic Review, Vol. 89, p249-271.
[64] Goffe, W. L., G. Ferrier and J. Rogers (1992), ”Global Optimization of
Statistical Function,” in H. M. Amman, D. A. Belsley and L. F. Pau
eds. Computational Economics and Econometrics, vol. 1, Dordrecht:
Kluwer.
[65] Gong, G. and W. Semmler (2001): Dynamic Programming with La-
grangian Multiplier: an Improvement over Chow’s Approximation
Method, Working Paper, Center for Empirical Macreconomics, Biele-
feld University.
[66] Gong, G. and W. Semmler (2001), Real Business Cycles with disequi-
libirum in the Labor Market: A Comparison of the US and German
Economies, Center for Empirical Macroeconomics, Bielefeld University,
working paper.
[67] Gong, G. and W. Semmler: ”Stochastic Dynamic Macroeconomics:
Theory, Numerics and Empirical Evidence”, Center for Empirical
Macroeconomics, book manuscript, Bielefeld University.
[68] Gong, G., A. Greiner, W. Semmler and J. Rubart (2001): ”Economic
Growth in the U.S. and Europe: the Role of Knowledge, Human Capi-
tal, and Inventions”, in:
¨
Okonomie als Grundlage politischer Entschei-
dungen, J. Gabriel and M. Neugart (eds.), Leske und Budrich, Opladen.
BIBLIOGRAPHY 193
[69] Greiner, A., W. Semmler and G. Gong (2003): ”The Forces of Eco-
nomic Growth: A Time Series Perspective”, Princeton: Princeton Uni-
versity Press.
[70] Greiner, A., W. Semmler and G. Gong (2004)”Forces of Economic
Growth - A Time Series Perspective”, forthcoming: Princeton, Prince-
ton University Press.
[71] Greiner, A., J. Rubart and W. Semmler (2003): ”Economic Growth,
Skill-biased Technical Change and Wage Inequality. A Model and Es-
timations for the U.S. and Europe”, forthcoming Journal of Macroeco-
nomics.
[72] Gr¨ une, L. (1997) ”An Adaptive Grid Scheme for the Discrete Hamilton-
Jacobi-Bellman Equation”, Numer. Math., 75: 1288-1314.
[73] L. Gr¨ une (2003), Errorr estimation and adaptive discretiza-
tion for the discrete stochastic Hamilton–Jacobi–Bellman
equation. Preprint, University of Bayreuth. Submitted,
http://www.uni-bayreuth.de/departments/math/∼lgruene/papers/.
[74] Gr¨ une, L. and W. Semmler (2004a): ”Using Dynamic Programming
for Solving Dynamic Models in Economics”, CEM Bielefeld, working
paper, forthcoming Journal of Economic Dynamics and Control, 28:
2427-2456.
[75] Gr¨ une, L. and W. Semmler (2004b) ”Solving Asset Pricing Models with
Stochastic Dynamic Programming”, CEM Bielefeld, working paper.
[76] Gr¨ une, L. and W. Semmler (2004c) ”Default Risk, Asset Pricing
and Debt Control”, forthcoming Journal of Financial Econometrics,
2004/05.
[77] Gr¨ une, L. and W. Semmler (2004d), ”Asset Pricing - Constrained by
Past Consumption Decisions”, CEM Bielefeld, working paper.
[78] Gr¨ une, L., W. Semmler and M. Sieveking (2004), ”Creditworthiness
and Threshold in a Credit Market Model with Multiple Equilibria”,
forthcoming, Economic Theory, vol. 25, no. 2: 287-315.
[79] Hall, R. E. (1988): ”The Relation between Price and Marginal Cost in
U.S. Industry”, Journal of Political Economy, Vol. 96, p.921-947.
[80] Hamilton, J. D. (1994), ”Time Series Analysis”, Princeton: Princeton
University Press.
BIBLIOGRAPHY 194
[81] Hall, R. E. (1988): ”The Relation between Price and Marginal Cost in
U.S. Industry”, Journal of Political Economy, Vol. 96, p.921-947.
[82] Hansen, L. P. (1982): ”Large Sample Properties of Generalized Meth-
ods of Moments Estimators,” Econometrica, vol. 50, no. 4, 1029-1054.
[83] Hansen, G. H. (1985): ”Indivisible Labor and Business Cycles,” Jour-
nal of Monetary Economics, vol.16, 309-327.
[84] Hansen, G. H. (1988): ”Technical Progress and Aggregate Fluctua-
tions”, working paper, University of California, Los Angeles.
[85] Hansen, L. P. and K. J. Singleton (1982): ”Generalized Instrument
Variables Estimation of Nonlinear Rational Expectations Models,”
Econometrica, vol. 50, no. 5, 1268-1286.
[86] Harrison, S.G. (2001) ”Indeterminacy with Sector-specific Externali-
ties”, Journal of Economic Dynamics and Control, 25: 747-76...
[87] Hayashi, F. (1982), “Tobin’s Marginal q and Average q: A Neoclassical
Interpretation, Econometrica 50: 213-224.
[88] Heckman, J. (2003): ”Flexibility and Creation: Job Lessons from the
German Experience”, in: Knowledge, Information, And Expectations
in Modern Macroceconomics”, edited by P. Aghion, R. Frydman, J.
Stiglitz and M. Woodford, Princeton Unviersity Press, Princeton: 357-
393.
[89] Hicks, J.R. (1963): ”The Theory of Wages”, McMillan, London.
[90] Hodrick, R. J. and E. C. Prescott (1980): Post-war U. S. Business
Cycle: an Empirical Investigation, Working Paper, Carnegie-Mellon
University, Pittsburgh, PA.
[91] Hornstein, A. and H. Uhlig (2001), ”What is the Real Story for Interest
Rate Volatility?” German Economic Review 1(1): 43-67.
[92] Jerman, U.J. (1998), ”Asset Pricing in Pproduction Economies”, Jour-
nal of Monetary Economies 41: 257-275.
[93] Judd, K. L. (1996) ”Approximation,Pertubation, and Projection Meth-
ods in Economic Analysis”, Chapter 12 in: Amman, H.M., D.A.
Kendrick and J. Rust, eds., Handbook of Computational Economics,
Elsevier: 511-585.
BIBLIOGRAPHY 195
[94] Judd, K. L. (1998): Numerical Methods in Economics, Cambridge,
MA: MIT Press.
[95] Judge, G. G., W. E. Griffiths, R. C. Hill and T. C. Lee (1985), ”The
Theory and Practice of Econometrics”, 2nd edition, New York: Wiley.
[96] Juillard M. (1996): DYNARE: A Program for the Resolution and Sim-
ulation of Dynamic Models with Forward Variables through the Use
of a Relaxation Algorithm,” CEPREMAP Working Paper, No. 9602,
Paris, France.
[97] Kendrick, D. (1981): Stochastic Control for Economic Models, New
York, NY: McGraw-Hill Book Company.
[98] Keynes, J.M. (1936) ”The General Theory of Employment, Interest
and Money”, London, MacMillan.
[99] Kim, J. (2003) ”Indeterminacy and Investment and Adjustment Costs:
An Analytical Result”, Macroeconomic Dynamics 7: 394-406.
[100] Kim, J. (2004) ”Does Utility Curvature Matter for Indetermincy?”,
forthcoming, Journal of Economic Behavior and Organization.
[101] King, R. G. and C. I. Plosser (1994): ”Real Business Cycles and the
Test of the Adelmans”, Journal of Monetary Economics, vol. 33, 405-
438.
[102] King, R. G., C. I. Plosser, and S. T. Rebelo (1988a): ”Production,
Growth and Business Cycles I: the Basic Neo-classical Model,” Journal
of Monetary Economics, 21, 195-232.
[103] King, R. G., C. I. Plosser, and S. T. Rebelo (1988b), ”Production,
Growth and Business Cycles II: New Directions,” Journal of Monetary
Economics, vol. 21, 309-341.
[104] King, R. G. and S. T. Rebelo (1999): ”Resusciting Real Business Cy-
cles,” in Handbook of Macroeconomics, Volume I, edited by J. B. Tay-
lor and M. Woodford, Elsevier Science.
[105] King, R.G. and A.L. Wolman (1999): ” What should the Monetary
Authority do when Prices are sticky?”, in: J. Taylor (ed.) Monetary
Policy Rules, Chicago: The University of Chicago Press.
BIBLIOGRAPHY 196
[106] Kwan, Y. K. and G. C. Chow (1997): Chow’s Method of Optimum
Control: A Numerical Solution, Journal of Economic Dynamics and
Control 21, 739-752.
[107] Kydland, F. E. and E. F. Prescott (1982), ”Time to Build and Aggre-
gate Fluctuation”, Econometrica, vol. 50, 1345-1370.
[108] Lettau, M. (1999): ”Inspecting the Mechanism: The Determination of
Asset Prices in the Real Business Cycle Model,” CEPR working paper
No. 1834
[109] Lettau, M. and H. Uhlig (1999): ”Volatility Bounds and Preferences:
An Analytical Approach,” revised from CEPR Discussion Paper No.
1678
[110] Lettau, M., G. Gong and W. Semmler (2001): Statistical Estimation
and Moment Evaluation of a Stochastic Growth Model with Asset Mar-
ket Restriction, Journal of Economics Behavior and Organization, vol.
44, 85-103.
[111] Ljungqvist, L. and T. Sargent (1998): ”The European Unemployment
Dilemma”, Journal of Political Economy, vol. 106, no.3: 514-550.
[112] Ljungqvist, L. and Sargent, T. J. (2000): Recursive Macroeconomics,
Cambridge, MA: The MIT Press.
[113] Ljungqvist, L. and T. Sargent (2003): ”European Unemployment:
From a Worker’s Perspective”, in: Knowledge, Information, And Ex-
pectations in Modern Macroceconomics”, edited by P. Aghion, R. Fry-
dman, J. Stiglitz and M. Woodford, Princeton Unviersity Press, Prince-
ton: 326-350.
[114] Long, J. B. and C. I. Plosser (1983): Real Business Cycles, Journal of
Political Economy, vol. 91, 39-69.
[115] Lucas, R. E. (1967): “Adjustment Costs and the Theory of Supply,
Journal of Political Economiy 75: 321-334.
[116] Lucas, R. (1976): Econometric Policy Evaluation: A Critique,
Carnegie-Rochester Conference Series on Public Policy, 1, 19-46.
[117] Lucas, R. (1978) ”Asset Prices in an Exchange Economy”. Economet-
rica 46: 1429-1446.
BIBLIOGRAPHY 197
[118] Lucas, R. and E.C. Prescott (1971):”Investment under Uncertainty”,
Econometrica, vol. 39 (5):659ff.
[119] Malinvaud, E. (1994): ”Diagnosing Unemployment”, Cambridge, Cam-
bridge University Press.
[120] Mankiw, N. G. (1989): ”Real Business Cycles: A New Keynesian Per-
spective”, Journal of Economic Perspectives, Vol. 3. 79-90.
[121] Mankiw, N. G. (1990): A Quick Refresher Course in Macroeconomics,
Journal of Economic Literature, Vol. 27. 1645-1660.
[122] Marimon, R. and Scott, A. (1999): Computational Methods for the
Study of Dynamic Economies, New York, NY: Oxford University Press.
[123] Mehra and Prescott (1985), ”The Equity Premium Puzzle”, Journal of
Monetary Economics 15: 145-161.
[124] Merz, M. (1999): Heterogenous Job-Matches and the Cyclical Behavior
of Labor Turnover”, Journal of Monetary Economics, 43: 91-124.
[125] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A. M. and
Teller, E. (1953), ”Equation of State Calculation by Fast Computing
Machines,” The Journal of Chemical Physics, vol. 21, no. 6, 1087-1092.
[126] Meyers, R.J. (1964): ”What can we learn from European Experience,
in Unemployment and the American Economy?”, ed. by A.M. Ross,
New York: John Wiley & Sons, Inc.
[127] Meyers, R.J. (1968) ”What Can We Learn from European Experience,
in Unemployment and the American Economy?”, ed. by A.M. Ross,
New York: John Wiley & Sons, Inc.
[128] Nickell, S. (1997): ”Unemployment and Labor Market Rigidities – Eu-
rope versus North Maerica”, Journal of Economic Perspectives 3, 55-74.
[129] Nickell, S., L. Nunziata, W. Ochel and G. Quintini (2003): ”The Bev-
eridge Curve, Unemployment, and Wages in the OECD from the 1960s
to the 1990s”, in: Knowledge, Information, And Expectations in Mod-
ern Macroceconomics”, edited by P. Aghion, R. Frydman, J. Stiglitz
and M. Woodford, Princeton Unviersity Press, Princeton.
[130] OECD (1998a): ”Business Sector Data Base”, OECD Statistical Com-
pendium.
BIBLIOGRAPHY 198
[131] OECD (1998b): ”General Economic Problems.” OECD Economic Out-
look, Contry Specific Series
[132] Phelps, E. (1997): Rewarding Work, Cambridge: MIT-Press.
[133] Phelps, E. and G. Zoega (1998): ”Natural Rate Theory and OECD
Unemployment”, Economic Journal, 108 (May): 782-801.
[134] Plosser, C. I. (1989): ”Understanding Real Business Cycles,” Journal
of Economic Perspectives, vol. 3, no. 3, 51-77.
[135] Prescott, E. C. (1986): ”Theory ahead of Business Cycle Measure-
ment,” Quarterly Review, Federal Reserve Bank of Minneapolis, vol.
10, no. 4, 9-22.
[136] Ramsey, F.P. (1928): ”A Mathematical Theory of Saving”, Economic
Journal 28, 543-559.
[137] Reiter, M. (1996)
[138] Reiter, M. (1997): Chow’s Method of Optimum Control, Journal of
Economic Dynamics and Control 21, 723-737.
[139] Rotemberg, J. (1982) ”Sticky Prices in the United States”, Journal of
Political Economy, vol. 90: 1187-1211.
[140] Rotemberg, J. and M. Woodford (1995): ”Dynamic General Equilib-
rium Models with Imperfectly Competitive Product Markets,” in T.F.
Cooley (ed), Frontiers of Business Cycle Research, Princeton:Princeton
University Press.
[141] Rotemberg, J. and M. Woodford (1999): ”Interest Rate Rules in an
Estimated Sticky Price Model”, in: J. Taylor (ed.) Monetary Policy
Rules, Chicago: the University of Chicago Press.
[142] Rust, J. (1996), ”Numerical Dynamic Programming in Economics”,
in: Amman, H.M., D.A. Kendrick and J. Rust, eds., Handbook of
Computational Economics, Elsevier, pp. 620–729.
[143] Santos, M.S. and J. Vigo-Aguiar (1995):
[144] Santos, M.S. and J. Vigo-Aguiar (1998): Analysis of a Numerical Dy-
namic Programming Algorithm Applied to Economic Models. Econo-
metrica, 66(2): 409-426
BIBLIOGRAPHY 199
[145] Sargent, T. (1999): Contested Inflation. Princeton, Princeton Univer-
sity Press.
[146] Schmidt-Grohe, S. (2001): Endogenous Business Cycles and the Dy-
namics of Output, Hours and Consumption, American Economic Re-
view, vol 90, no. 5, 1136-1159.
[147] Simkins, S. P. (1994): ”Do Real Business Cycle Models Really Exhibit
Business Cycle Behavior?” Journal of Monetary Economics, vol. 33,
381-404.
[148] Singleton, K. (1988): ”Econometric Issues in the Analysis of Equilib-
rium Business Cycle Model,” Journal of Monetary Economics, vol. 21,
361-386.
[149] Skiba, A. K. (1978); “Optimal Growth with a Convex-Concave Pro-
duction Function, Econometrica 46 (May): 527-539.
[150] Solow, R. (1979): ”Another Possible Source of Wage Stickiness”, Jour-
nal of Macroeconomics, vol. 1: 79-82
[151] Statistisches Bundesamt (1998), Fachserie 18, Statistisches Bundesamt
Wiesbaden.
[152] Stockey, N. L., R. E. Lucas and E. C. Prescott (1989): ”Recursive
Methods in Economics”, Cambridge: Harvard University Press.
[153] Summers, L. H. (1986): ”Some Skeptical Observations on Real Busi-
ness Cycles Theory”, Federal Reserve Bank of Minneapolis Quarterly
Review, Vol. 10, p.23-27.
[154] Taylor, J. B. (1980): Aggregate Dynamics and Staggered Contracts,
Journal of Political Economy, Vol. 88: 1 - 24.
[155] Taylor, J. B. (1999): ”Staggered Price and Wage Setting in Macroeco-
nomics,” in Handbook of Macroeconomics, Volume I, edited by J. B.
Tayor and M. Woodford, Elsevier Science.
[156] Taylor, J.B. and Uhlig, H. (1990): Solving Nonlinear Stochastic Growth
Models: A Comparison of Alternative Solution Methods, Journal of
Business and Economic Statistics, 8, 1-17.
[157] Uhlig, H. (1999): A Toolkit for Analysing Nonlinear Dynamic Stochas-
tic Models Easily, in R. Marimon and A. Scott ed.: Computational
Methods for the Study of Dynamic Economies, New York: Oxford
University Press.
BIBLIOGRAPHY 200
[158] Uhlig, H. and Y. Xu (1996): ”Effort and the Cycle: Cyclical Implica-
tions of Efficiency Wages,”, mimeo, Tilburg Unversity.
[159] Uzawa, H. (1968): “The Penrose Effect and Optimum Growth, Eco-
nomic Studies Quarterly XIX: 1-14.
[160] Vanderbilt, D. and S. G. Louie (1984), ”A Monte Carlo Simulated An-
nealing Approach to Optimization over Continuous Variables,” Journal
of Computational Physics, vol. 56, 259-271.
[161] Walsh, L.E. (2002): ”Labor Market Search and Monetary Shocks”,
working paper, University of California, Santa Cruz.
[162] Watson, M.W. (1993), ”Measures of Fit for Calibration Models”, Jour-
nal of Political Economy, vol. 101, no. 6, 1011-1041.
[163] W¨ohrmann, P., W. Semmler and M. Lettau (2001), ”Nonparamet-
ric Estimation of Time-Varying Characteristics of Intertemporal Asset
Pricing Models”, working paper, CEM, Bielefeld University.
[164] Woodford, M. (2003): ”Interest and Prices”, Princeton University
Press, Princeton.
[165] Zbaracki, M. J., M. Ritson, D. Levy, S. Dutta and M. Bergen (2000):
The Managerial and Customer Costs of Price Adjustment: Direct Ev-
idence from Industrial Markets, Manuscript, Wharton School, Univer-
sity of Pennsylvania
[166] 2003): ”Monetary Policy Rules under Uncertainty: Adaptive Learning
and Robust Control”, forthcoming. Macroeconomic Dynamics 2004/05

Contents
List of Figures List of Tables Preface Introduction and Overview iv vi 1 2

I Solution and Estimation of Stochastic Dynamic Models 11
1 Solution Methods of Stochastic Dynamic Models 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 1.2 The Standard Recursive Method . . . . . . . . . . . . . 1.3 The First-Order Conditions . . . . . . . . . . . . . . . 1.4 Approximation and Solution Algorithms . . . . . . . . 1.5 An Algorithm for the Linear-Quadratic Approximation 1.6 A Dynamic Programming Algorithm . . . . . . . . . . 1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 Appendix I: Proof of Proposition 1 . . . . . . . . . . . 1.9 Appendix II: An Algorithm for the LQ-Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 12 13 15 17 23 25 27 28 29 33 33 33 35 39 47 48 50

2 Solving a Prototype Stochastic Dynamic Model 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 The Ramsey Problem . . . . . . . . . . . . . . . . . . . . . . . 2.3 The First-Order Conditions and Approximate Solutions . . . . 2.4 Solving the Ramsey Problem with Different Approximations . 2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 Appendix I: The Proof of Proposition 2 and 3 . . . . . . . . . 2.7 Appendix II: Dynamic Programming for the Stochastic Version

i

CONTENTS 3 The Estimation and Evaluation of the Stochastic Dynamic Model 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 The Estimation Methods . . . . . . . . . . . . . . . . . . . . . 3.4 The Estimation Strategy . . . . . . . . . . . . . . . . . . . . . 3.5 A Global Optimization Algorithm: The Simulated Annealing 3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.7 Appendix: A Sketch of the Computer Program for Estimation

ii

52 52 53 55 57 58 60 60

II The Standard Stochastic Dynamic Optimization Model 63
4 Real Business Cycles: Theory and the Solutions 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 4.2 The Microfoundation . . . . . . . . . . . . . . . . . 4.3 The Standard RBC Model . . . . . . . . . . . . . . 4.4 Solving Standard Model with Standard Parameters 4.5 The Generalized RBC Model . . . . . . . . . . . . . 4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . 4.7 Appendix: The Proof of Proposition 4 . . . . . . . 5 The 5.1 5.2 5.3 5.4 5.5 5.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 64 65 69 74 76 80 80 82 82 82 86 89 93 99

Empirics of the Standard Real Business Cycle Model Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimation with Simulated Data . . . . . . . . . . . . . . . . Estimation with Actual Data . . . . . . . . . . . . . . . . . Calibration and Matching to U. S. Time-Series Data . . . . The Issue of the Solow Residual . . . . . . . . . . . . . . . . Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6 Asset Market Implications of Real Business Cycles 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 6.2 The Standard Model and Its Asset Pricing Implications 6.3 The Estimation . . . . . . . . . . . . . . . . . . . . . . 6.4 The Estimation Results . . . . . . . . . . . . . . . . . . 6.5 The Evaluation of Predicted and Sample Moments . . . 6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . .

101 . 101 . 103 . 107 . 110 . 112 . 115

CONTENTS

iii

III Beyond the Standard Model — Model Variants with Keynesian Features 116
7 Multiple Equilibria and History Dependence 7.1 Introduction . . . . . . . . . . . . . . . . . . . 7.2 The Model . . . . . . . . . . . . . . . . . . . . 7.3 The Existence of Multiple Steady States . . . 7.4 The Solution . . . . . . . . . . . . . . . . . . 7.5 Conclusion . . . . . . . . . . . . . . . . . . . . 7.6 Appendix: The Proof of Propositions 5 and 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 117 119 121 125 127 128

8 Business Cycles with Nonclearing Labor Market 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 8.2 An Economy with Nonclearing Labor Market . . . . 8.3 Estimation and Calibration for U. S. Economy . . . . 8.4 Estimation and Calibration for the German Economy 8.5 Differences in Labor Market Institutions . . . . . . . 8.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . 8.7 Appendix I: Wage Setting . . . . . . . . . . . . . . .

131 . 131 . 135 . 142 . 151 . 159 . 163 . 164

9 Monopolistic Competition, Nonclearing Markets and Technology Shocks 171 9.1 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 9.2 Estimation and Calibration for U.S. Economy . . . . . . . . . 175 9.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 9.4 Appendix: Proof of the Proposition . . . . . . . . . . . . . . 184 10 Conclusions 186

. . . . . .2 5. . . . . . . . . . . . . . . . Value Function obtained from the Linear-quadratic Solution . . . . 40 42 44 45 46 47 51 . . . . . . . . . .List of Figures 2. . . . . . . .6 2. . . . . . 79 . .1 4. . . . Sample and Predicted Moments with Innovation Given by Corrected Solow Residual .1 5. The Stochastic Solution to the Benchmark RBC Model for the Standard Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . .4 2. . . . . . . Predicted and Actual Series: all variables HP detrended (except for excess equity return) . . .3 5. . . .4 5.3 2. . . . . . . . . . . .7 4. . . . . . . . . . . . 122 iv . . . . . . . Approximated value function and final adaptive grid for our Example . .2 4. . . . . . . . . .1 6. . . . . .5 5. . .2 2. . . . . . Paths of the Choice Variables C and N (depending on K) . . Value function for the general model . . . . . .3 4. . 114 The Adjustment Cost Function . . . . . . . . . . . . The Solow Residual: standard (solid curve) and corrected (dashed curve) . 113 The Second Moment Comparison:all variables detrended (except excess equity return) . . . The Linear-quadratic Solution in Comparison to the Exact Solution . . . . . The Deterministic Solution to the Benchmark RBC Model for the Standard Parameters . .1 The Fair-Taylor Solution in Comparison to the Exact Solution The Log-linear Solution in Comparison to the Exact Solution . . . . .6 6. . . . . . . . . . . .δ Surface of the Objective Function for ML Estimation The θ − α Surface of the Objective Function for ML Estimation Simulated and Observed Series (non detrended) . . . . . .1 2. . . . . . . . . . . . . Path of Control . . . . . . . . . . . Simulated and Observed Series (non detrended) . . . . . . . . Value Function . . . . . . . . . . . . . . .4 5. . . . . . . . . . .2 7. 75 . . . . . . . . 75 . . . . . . . . . . . . . . . . . . . . . 79 85 85 91 92 97 99 The β . . . . . . . . .5 2.

. . . . 124 The Welfare Performance of three Linear Decision Rules . . . . . . .S. . . versus Germany 152 Comparison of Macroeconomic Variables: U. versus Germany (data series are detrended by the HP-filter) . S. 150 Comparison of Macroeconomic Variables U.3 8. . .6 8. . . . . . . . Case .4 8. Case . . . . .2 7. .4 8. . . . . .3 7. 165 Welfare Comparison of Model II and III . . . . . . S.S. 181 . . . 158 A Static Version of the Working of the Labor Market . . . 122 Multiplicity of Equilibria: f(i) function .2 8. . . . . .5 8. . 157 Comparison of demand and supply in the labor market .1 8.1 v The Derivatives of the Adjustment Cost . 169 Simulated Economy versus Sample Economy: U. .LIST OF FIGURES 7. . . . . . 153 Simulated Economy versus Sample Economy: German Case .7 9. . 126 Simulated Economy versus Sample Economy: U. . .

. . . . .5 9. . . . . . . Calibration of the Model Variants: U. . . Calibration of Real Business Cycle Model . . . . . . . . Parameterizing the Standard RBC Model . .2 7. . . . . . . . . Calibration of the Model Variants: German Economy .1 4. . . . . . . . .4 8. . . Estimation with Christiano’s Data Set . . . . . . . . . . .3 8. .2 8. . . . .3 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 5. 124 The Multiple Steady States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Number of nodes and errors for our Example . Asset Market Facts and Real Variables Summary of Models . . . . . . . . 145 147 154 155 156 Calibration of the Model Variants .4 5. .1 6. . . . . . . . . . . . . . . . . . . . . . . . Matching the Sharpe-Ratio . . . . . . . . . . . . . . . . . . .S. . . . . . . . The Standard Deviations (U. . . . Economy . .2 4.2 Parameterizing the Prototype Model .1 8. . . 51 Parameterizing the Standard RBC Model . . . .S. . . . 84 88 88 89 90 95 98 106 110 110 111 111 The Parameters in the Logistic Function . . .1 7. . . .1 5. Asset Pricing Implications . . 121 The Standard Parameters of RBC Model . . . . . 125 Parameters Used for Calibration .List of Tables 2. . . . . . . . .2 6. versus Germany) . .5 7. . . . . . . . .2 5. .2 5. . . . . . . . . . . . . .182 vi . . . .1 9. . 74 Parameterizing the General Model .3 8. . . . 179 The Correlation Coefficients of Temporary Shock in Technology. . . . . . . . .1 2. . . . . . . . . . .6 5. . . Summary of Estimation Results . . . . . . . . .7 6. . . . . . . . . . . . . . . . . . .4 6. . . Estimation with the NIPA Data Set . . . . . . 78 GMM and ML Estimation Using Simulated Data .3 5. . . . F −Statistics for Testing Exogeneity of Solow Residual The Cross-Correlation of Technology . . . . . . . . . . . . Parameters used for Calibration (German Economy) .

Foscari University. u James Ramsey. University of Technology. Chapters of the book have been presented as lectures at Bielefeld University. Stefan Mittnik. Buz Brock. Vienna. 1 . New York. Jean-Paul Benassy. Peter Flaschel. City University of HongKong and European Central Bank. Some chapters of the book have also been presented at the annual conference of the American Economic Association. New York. We are also grateful for discussions with Toichiro Asada.Preface This book intends to contribute to the study of alternative paradigms in macroeconomics. We are grateful for comments by the participants of those conferences. in particular if intertemporal behavior of economic agents is involved. Michael Woodford and colleagues of our universities. The material of this book has been presented by the authors at several universities. Bejing. University of Aix-en-Provence. we also build on intertemporal economic behavior of economic agents but stress Keynesian features more than other recent literature in this area. Science and Technology is gratefully acknowledged. In general. beside addressing important macroeconomic issues in a dynamic framework another major focus of this book is to discuss and apply solution and estimation methods to models with intertemporal behavior of economic agents. stochastic dynamic macromodels are difficult to solve and to estimate. Society of Computational Economics. As other recent approaches to dynamic macroeconomics. Bejing University. Lars Gr¨ne. Richard Day. Financial support from the Ministry of Education. Chinese University of HongKong. Venice. and Society of Nonlinear Dynamics and Econometrics. Malte Sieveking. We thank Uwe K¨ller for research assistance and Gaby Windo horst for editing and typing the manuscript. Colombia University. New School University. Tsinghua University. Thus. Ray Fair.

but introducing monopolistic competition and sticky prices and wages into the model. the Real Business Cycle Model. We will discuss this issue in chapter 8. has become a major paradigm in macroeconomics. its competitive or monopolistic variants. In our book. It is well known that the standard DGE model fails to replicate essential product. labor market and asset market characteristics. such as technology shocks. in particular its more popular version. In contrast to the traditional Keynesian macromodels such variants also presume dynamically optimizing agents and market clearing1 . In this type of stochastic dynamic macromodeling only real shocks. 1 It should be noted that the concept of market clearing in recent New Keynesian literature is not unambiguous. As in the monopolistic competition variant of the DGE model we permit nominal rigidities. Recently Keynesian features have been built into the dynamic general equilibrium (DGE) model by preserving its characteristics such as intertemporally optimizing agents and market clearing. but sluggish wage and price adjustments. monetary and government spending shocks variation in tax rates or shifts in preferences generate macro fluctuations.Introduction and Overview The dynamic general equilibrium (DGE) model. 2 . It has been applied in numerous fields of economics. Its essential features are the assumptions of intertemporal optimizing behavior of economic agents. In particular. in numerous papers and in a recent book Woodford (2003) has worked out this new paradigm in macroeconomics. Yet. different from the DGE model. competitive markets and price-mediated market clearing through flexible wages and prices. we demonstrate that even with dynamically optimizing agents not all markets may be cleared. by stressing Keynesian features in a model with production and capital accumulation. we do not presume clearing of all markets in all periods. which is now commonly called New Keynesian macroeconomics.

Solving stochastic dynamic optimization models has been an important research topic in the last decade and many different methods have been proposed. which has been written into a GAUSS procedure. there have been developed numerous methods to solve stochastic dynamic decision problems. Given these two types of first-order conditions. part I and II provide the ground work for those later chapters. We will also compare those methods with the dynamic programming approach. Part I of the book can be regarded as the technical preparation for our theoretical arguments developed in this volume. Therefore one has to rely on an approximate solution. an exact and analytical solution of a dynamic decision problem is not attainable. In this book. Usually. which will be repeatedly used in the subsequent chapters. Solution methods are presented in chapters 1-2 whereas estimation methods along with calibration. These methods are subsequently applied in the remaining chapters of the book. are introduced in chapter 3. The method. which may also have to be computed by numerical methods. Recently. has the advantage of short computation time and easy implementation without sacrificing too much accuracy. A solution method with higher accuracy often requires more complicated procedures and extensive computation time. When an exact u and analytical solution to a dynamic optimization problem is not attainable and one has to use numerical methods. Among the well-known methods are the perturbation and projection methods (Judd (1998)). Here we provide a variety of technical tools to solve and estimate stochastic dynamic optimization models.3 Solution and Estimation Methods Whereas models with Keynesian features are worked out and stressed in the chapters of part III of the book. three types of approximation methods can be found in the literature: the Fair-Taylor method. the current methods of empirical assessment. . in order to allow for an empirical assessment of stochastic dynamic models we focus on approximate solutions that are computed from two types of first-order conditions: the Euler equation and the equation derived from the Lagrangian. After a discussion on the variety of approximation methods. we introduce a method. which is a prerequesit for a proper empirical assessment of the models treated in our book. Often the methods use a smooth approximation of first order conditions. In part I and II of the book we build extensively on the basics of stochastic dynamic macroeconomics. the parameterized expectations approach (den Haan and Marcet (1990)) and the dynamic programming approach (Santos and Vigo Aguiar (1998) and Gr¨ne and Semmler (2004a)). the log-linear approximation method and the linearquadratic approximation method.

4 such as the Euler equation. Sometimes, as, for example, in the model of chapter 7 smooth approximations are not useful if the value function is not differentiable and thus is non-smooth. A method such as employed by Gr¨ne u and Semmler (2004a) can then be used. There has been less progress made regarding the empirical assessment and estimation of stochastic dynamic models. Given the wide application of stochastic dynamic models expected in the future, we believe that the estimation of such type of models will become an important research topic. The discussion in chapters 3-6 can be regarded as an important step toward that purpose. As we will find, our proposed estimation strategy requires to solve the stochastic dynamic optimization model repeatedly, at various possible structural parameters searched by a numerical algorithm within the parameter space. This requires that the solution methods adopted in the estimation strategy should be as little time consuming as possible while not losing too much accuracy. After comparing different approximation methods, we find that the proposed methods of solving stochastic dynamic optimization models, such as used in chapters 3 - 6 most useful. We also will explore the impact of the use of different data sets on the calibration and estimation results.

RBC Model as a Benchmark
In the next part of the book, in part II, we set up a benchmark model, the RBC model, for comparison, in terms of either theory or empirics. The standard RBC model is a representative agent model, but it is constructed on the basis of neoclassical general equilibrium theory. It therefore assumes that all markets (including product, capital and labor models) are cleared in all periods regardless of whether the model refers to the short- or the long-run. The imposition of market clearing requires that prices are set at an equilibrium level. At the pure theoretical level, the existence of such general equilibrium prices can be proved under certain assumption. Little, however, has been told how the general equilibrium can be achieved. In an economy in which both firms and households are price-takers, implicitly an auctioneer is presumed to exist who adjusts the price towards some equilibrium. Thus, the way of how an equilibrium is brought about is essentially a Walrasian tˆtonnement process. a Working with such a framework of competitive general equilibrium is elegant and perhaps a convenient starting point for economic analysis. It nevertheless neglects many restrictions on the behavior of agents, the trading process and the market clearing process, the implementation of technology and the market structure, among many others. In part II of this volume,

5 we provide a thorough review of the standard RBC model, the representative stochastic dynamic model of competitive general equilibrium type. The review starts with laying out microfoundation, and continues to discuss a variety of empirical issues, such as the estimation of structural parameters, the data construction, the matching with the empirical data, its asset market implications and so on. The issues explored in this part of the book provide the incentives to introduce Keynesian features into a stochastic dynamic model as developed in Part III. Meanwhile, it also provides a reasonable ground to judge new model variants by considering whether they can resolve some puzzles as explored in part II of the book.

Open Ended Dynamics
One of the restrictions in the standard RBC model is that the firm does not face any additional cost (a cost beyond the usual activities at the current market prices) when it makes an adjustment on either price or quantity. For example, changing the price may require the firm to pay a menu cost and also, more importantly, a reputation cost. It is the cost, arising from price and wage adjustments that has become an important focus of New Keynesian research over the last decades. 2 However, adjustment cost may also come from a change in quantity. In a production economy increasing output requires the firm to hire new workers and add new capacity. In a given period of time, a firm may find more and more difficulties to create new additional capacity. This indicates that there will be an adjustment cost in creating capacity (or capital stock via investment), and further such adjustment cost may also be an increasing function of the size of investment. In chapter 7, we will introduce adjustment costs into the benchmark RBC model. This may bring about multiple equilibria toward which the economy may move. The dynamics are open ended in the sense that it can move to low level, or high level of economic activity.3 Such an open ended dynamics is certainly one of the important feature of Keynesian economics. In recent times such open ended dynamics have been found in a large number of dynamic models with intertemporal optimization. Those models have been called indeterminacy and multiple equilibria models. Theoretical models of this type are studied in Benhabib and Farmer (1999) and Farmer (2001), and an empirical assessment is given in Schmidt-Grohe (2001). Some of the models
Important papers in this reserach line are, for example, Calvo (1983) and Rotemberg (1982). For a recent review, see Taylor (1999) and Woodford (2003, ch. 3). 3 Keynes (1936) discusses the possibility of such an open ended dynamics in chapter 5 of his book.
2

6 are real models, RBC models, with increasing returns to scale and/or more general preferences than power utility that generate indeterminacy. Local indeterminacy and globally multiplicity of equilibria can arise here. Others are monetary macro models, where consumers’ welfare is affected positively by consumption and cash balances and negatively by the labor effort and an inflation gap from some target rates. For certain substitution properties between consumption and cash holdings those models admit unstable as well as stable high level and low level steady states. There also can be indeterminacy in the sense that any initial condition in the neighborhood of one of the steady-states is associated with a path toward, or away from, that steady state, see Benhabib et al. (2001). Overall, the indeterminacy and multiple equilibria models predict an open ended dynamics, arising from sunspots, where the sunspot dynamics are frequently modeled by versions with multiple steady state equilibria, where there are also pure attractors (repellors), permitting any path in the vicinity of the steady state equilibria to move back to (away from) the steady state equilibrium. Although these are important variants of macrodynamic models with optimizing behavior, as, however, recently has been shown4 indeterminacy is likely to occur only within a small set of initial conditions. Yet, despite such unsolved problems the literature on open ended dynamics has greatly enriched macrodynamic modeling. Pursuing this line of research we introduce a simple model where one does not need to refer to model variants with externalities and (increasing returns to scale) and/or to more elaborate preferences to obtain such results. We show that due to the adjustment cost of capital we may obtain non-uniqueness of steady state equilibria in an otherwise standard dynamic optimization version. Multiple steady state equilibria, in turn, lead to thresholds separating different domains of attraction of capital stock, consumption, employment and welfare level. As our solution shows thresholds are important as separation points below or above which it is advantages to move to lower or higher levels of capital stock, consumption, employment and welfare. Our model version thus can explain of how the economy becomes history dependent and moves, after a shock or policy influences, to a low or high level equilibria in employment and output.

Nonclearing Markets
A second important feature of Keynesian macroeconomics concerns the modeling of the labor market. An important characteristic of the DGE model
4

See Beyn, Pampel and Semmler (2001) and Gr¨ne and Semmler (2004a). u

7 is that it is a market clearing model. For the labor market the DGE model predicts an excessive smoothness of labor effort in contrast to empirical data. The low variation in the employment series is a well-known puzzle in the RBC literature.5 It is related to the specification of the labor market as a cleared market. Though in its structural setting, see, for instance, Stockey et al. (1989), the DGE model specifies both sides of a market, demand and supply, the moments of the macro variables of the economy are, however, generated by a one-sided force due to its assumption on wage and price flexibility and thus equilibrium in all markets, including output, labor and capital markets. The labor effort results only from the decision rule of the representative agent to supply labor. In our view there should be no restriction for the other side of the market, the demand, to have effects on the variation of labor effort. Attempts have been made to introduce imperfect competition features into the DGE model.6 In those types of models, producers set the price optimally according to their expected market demand curve. If one follows a Calvo price setting scheme, there will be a gap between the optimal price and the existing price. However, it is presumed that the market is still cleared since the producer is assumed to supply the output according to what the market demands for the existing price. This consideration also holds for the labor market. Here the wage rate is set optimally by the household according to the expected market demand curve for labor. Once the wage has been set, it is assumed to be rigid (or adjusted slowly). Thus, if the expectation is not fulfilled, there will be a gap again between the optimal wage and existing wage. Yet in the New Keynesian models the market is still assumed to be cleared since the household is assumed to supply labor whatever demand is at the given wage rate.7 In order to better fit the RBC model’s predictions with the labor market data, search and matching theory has been employed8 to model the labor market in the context of an RBC model. Informational or institutional search frictions may then explain equilibrium unemployment rates and its rise. Yet, those models still have a hard time to explain the shift of unemployment rates such as, for example, experienced in Europe since the 1980s, as equilibrium unemployment rate. 9
5

A recent evaluation of this failure of the RBC model is given in Schmidt-Grohe (2001). Rotemberg and Woodford (1995, 1999), King and Wollman (1999), Gali (2001) and Woodford (2003) present a variety of models of monopolistic competition with price and wage stickiness. 7 Yet, as we have mentioned above, this definition of market clearing is not unambiguous. 8 For further details, see ch. 8. 9 For an evaluation of the search and matching theory as well as the role of shocks to explain the evolution of unemployment in Europe, see Ljungqvist and Sargent (2003)and
6

As we will show in chapters 8 and 9 such a multiple stage optimization model will allow for larger volatility of the employment rates as compared to the standard RBC model. When the price has been set. the price is then given to the supplier when deciding on the quantities. it is presumed that all factors are fully utilized. 10 .10 Our proposed new model helps to study labor market problems by being based on adaptive optimization where households. which are unlikely to be related to factor productivity. Malinvaud (1994). Danthine and Donaldson (1990. see. Mankiw (1989) and Summers (1986) have argued that such a measure often leads to excessive volatility in productivity and even the possibility of technological regress. the standard Solow residual can be contaminated if the cyclical variation in factor utilization are significant. Third. In the standard DGE model technology shocks are the driving force of the business cycles which is assumed to be measured by the Solow-residual. There is no reason why the firm cannot choose the optimal quantity rather than what the market demands. have to reoptimize when facing constraints in supplying labor in the market. both subject to optimal behavior. Since the Solow residual is computed on the basis of observed output. capital and employment. 2002). This consideration will allow for nonclearing markets. Considering that the Solow-residual cannot be trusted as a measure of Blanchard (2003) There is indeed a long tradition of macroeconomic modeling with specification of the nonclearing labor markets. and is sticky for a certain period. Benassy (1995. firms may have constraints on the product markets. There are several reasons to distrust the standard Solow residual as a measure of technology shock.8 As concerns the labor market along Keynesian lines we pursue an approach that allows for a nonclearing labor market. and provides. especially when the optimum quantity is less than the quantity demanded by the market. 1995) and Uhlig and Xu (1996). Second. we move beyond this type of literature. Technology and Demand Shocks A further Keynesian feature of macromodels concerns the role of shocks. it has been shown that the Solow residual can be expressed by some exogenuous variables. after a first round of optimization. for example demand shocks arising from military spending (Hall 1988) and changed monetary aggregates (Evan 1992). Although our approach owes a substantial debt to disequilibrium models. both of which seem to be empirically implausible. First. also a framework to study the secular rise or fall of unemployment. In our view the decisions with regard to price and quantities can be made separately. for instance. On the other hand.

Eichenbaum and Rebelo 1996). The first strategy is to use an observed indicator to proxy for unobserved utilization. in which the technology shock is measured by the standard Solow-residual. Puzzles to be Resolved In order to sum up. related to the labor market. Fernald and Kimball 1998). 2003) we also find that if one uses the corrected Solow-residual. as above mentioned. we may say that the standard RBC model has left us with major puzzles. not sufficiently been studied in the literature. see Gali (1999) and Francis and Ramey (2001. One might name it the technology puzzle. The model also implies an excessively high correlation between consumption and employment while empirical data only indicates a week correlation. a negative or almost zero correlation. A third strategy uses a appropriate restriction in a VAR estimate to identify a technology shock. Another strategy is to construct an economic model so that one could compute the factor utilization from the observed variables (see Basu and Kimball 1997 and Basu. at least at business cycle frequency. It will be explored in Chapter 5 of this volume. All these methods are focused on the computation of factor utilization. The first type of puzzle is related to the asset market and is often discussed under the heading of the equity premium puzzle. to our knowledge. see chapters 5 and 9. the technology shock is negatively correlated with employment and therefore the RBC model loses its major driving force. chapters 8-9 of part III of the book are mainly concerned with the latter two puzzles. consumption and employment. Extensive research has attempted to improve on this problem by elaborating on more general preferences and technology shocks. The RBC model generally predicts an excessive smoothness of labor effort in contrast to empirical data. . 2003).11 Third.9 technology shock. Chapter 6 studies in details the asset price implication of the RBC model. 11 This problem of excessive correlation has. The second puzzle is. Yet this result is obtained from the empirical evidence. As Gali (1999) and Francis and Ramey (2001. A positive technology shock will increase output. It is well known that one of the major celebrated arguments of real business cycles theory is that technology shocks are pro-cyclical. the RBC model predicts a significantly high positive correlation between technology and employment whereas empirical research demonstrates. Whereas the first puzzle is studied in chapter 6 of the book. researchers have now developed different methods to measures technology shocks correctly. A typical example is to employ electricity use as a proxy for capacity utilization (see Burnside. There are basically three strategies.

allows us to make some progress to match better the model with time series data. we hope. and therefore they can somewhat be consolidated into a more complete system of price and quantity determination within the Keynesian tradition.10 Finally. These two approaches will be contrasted in the last two chapters. . reoptimize once they have perceived and learned about market constraints. adaptive optimization. we want to note that the research along the line of Keynesian micro-founded macroeconomics has been historically developed by two approaches: one is the tradition of non-clearing market (or disequilibrium analysis). Thus. We want to argue that the two traditions can indeed be complementary rather than exclusive. The main new method we are using here to reconcile the two traditions is a multiple stage optimization behavior. where agents. Chapters 8 and 9. and the other is the New Keynesian analysis of monopolistic competition and sticky (or sluggish) prices. We will find that one can improve on the labor market and technology puzzles once we combine these two approaches. adaptive optimization permits us to properly treat the market adjustment for nonclearing markets which.

Part I Solution and Estimation of Stochastic Dynamic Models 11 .

we discuss some solution methods frequently employed to solve the dynamic decision problem.1 Introduction The dynamic decision problem of an economic agent whose objective is to maximize his or her utility over an infinite time horizon is often studied in the context of a stochastic dynamic optimization model.Chapter 1 Solution Methods of Stochastic Dynamic Models 1. three types of approximation methods can be found in the literature: the Fair-Taylor method. Among the well-known methods are the perturbation and projection methods (Judd (1996)). Still. and therefore a numerical algorithm is called 12 . To understand the structure of this decision problem we describe it in terms of a recursive decision problem of a dynamic programming approach. there have been developed numerous methods to solve stochastic dynamic decision problems. an exact and analytical solution of the dynamic programming decision is not attainable. in most of the cases. an approximate solution cannot be derived analytically. In most cases. Thereafter. which may also have to be computed by numerical methods. Recently. Therefore one has to rely on an approximate solution. in order to allow for an empirical assessment of stochastic dynamic models we focus on approximate solutions that are computed from two types of first-order conditions: the Euler equation and the equation derived from the Lagrangian. the log-linear approximation method and the linear-quadratic approximation method. Given these two types of first-order conditions. the parameterized expectations approach (den Haan and Marcet (1990)) and the dynamic programming approach (Santos and Vigo Aguiar (1998) and Gr¨ne and Semmler (2004a)). In u this book.

Appendix I provides the proof of the propositions in the text. 1. The remainder of this chapter is organized as follows. ut .3) . Finally. Section 5 presents our new algorithm for dynamic optimization. xt is a vector of m state variables at period t. it overcomes the limitations given by the Chow (1993) method. 1) denotes the discount factor. We start in Section 2 with the standard recursive method. ut ) (1. One popular assumption regarding the dynamics of zt is to assume that zt follow an AR(1) process: zt+1 = P zt + p + ǫt+1 (1. The algorithm is used to compute the solution path obtained from the method of linear-quadratic approximation with the firstorder condition derived from the Lagrangian. ut is a vector of n control variables. We will show that the standard recursive method may encounter difficulties when being applied to compute a dynamic model. zt is a vector of s exogenuous variables whose dynamics does not depend on xt and ut . Section 3 establishes the two first-order conditions to which different approximation methods can be applied.2 The Standard Recursive Method We consider a representative agent whose objective is to find a control (or decision) sequence {ut }∞ such that t=0 ∞ {u}t=0 max E0 ∞ t=0 β t U (xt . which uses the value function for iteration. this formulation assumes that the uncertainty of the model only comes from the exogenuous zt .13 for to facilitate the computation of the solution. Let us first make several remarks regarding the formulation of the above problem. Et is the mathematical expectation conditional on the information available at time t and β ∈ (0. a GAUSS procedure that implements our suggested algorithm is presented in Appendix II. First. we discuss those various approximation methods and then propose another numerical algorithm that can help us to compute approximate solutions. While the algorithm takes full advantage of an existing one (Chow 1993). In this chapter. zt ).2) Above. Section 4 briefly reviews the different approximation methods in the existing literature.1) subject to xt+1 = F (xt . (1.

given the initial condition (x0 . With such a policy function (or control equation). z0 ) and the exogenuous sequence {zt }∞ generated by (1.5) as V (x0 . the planner faces the same decision problem: choosing the control variable ut that maximizes the current return plus the discounted value of the optimum plan from period t+1 onwards. ut ) t=1 ∞ .6) {ut }∞ t=0 max β t U (xt+1 . we first define a value function V : ∞ V (x0 . (1. Second.5) with the initial condition (x1 . In every period t. t=1 To find the policy function G by the recursive method. u0 ) + βE0 β t U (xt .6) can be expressed as being β times the value V as defined in (1. z0 ) = = {ut }∞ t=0 max U (x0 . ). z1 )]} ∞ {ut }t=0 (1. the sequences of state {xt }∞ and control {ut }∞ can be generated by iterating t=1 t=0 the control equation ut = G(xt . zt ) (1.d.i.5) could be transformed to unveil its recursive structure. z) into the control u. Therefore.7) The formulation of equ. (1. ut+1 ) t=0 It is easy to find that the second term in (1. z0 ) = max {U (x0 . ut ) (1. Since the problem repeats itself .7) represents a dynamic programming problem which highlights the recursive structure of the decision problem. For this purpose. z0 ) ≡ max E0 ∞ {ut }t=0 t=0 β t U (xt . u0 ) + βE0 [V (x1 .2). It is well known that a model with finite lags or leads can be transformed through the use of auxiliary variables into an equivalent model with one lag or lead.5) Expression (1.4) as well as the state equation (1. Third.3). we first rewrite (1.14 where ǫt is independently and identically distributed (i. z0 ) in this formulation is assumed to be given. we could rewrite (1. u0 ) + E0 U (x0 . z1 ). The problem to solve a dynamic decision problem is to seek a timeinvariant policy function G mapping from the state and exogenuous (x. this formulation is not restrictive to those structure models with more lags or leads.5) as follows: ∞ V (x0 . the initial condition (x0 .

z )]} x ˜ (1. Use the Bellman equation to find the optimum u and then compute Vj+1 according (1.9) In terms of an algorithm. u .8) is said to be the Bellman equation.9). We thus can write (1.9). Guess a differentiable and concave candidate value function. the method can be described as follows: • Step 1. • Step 3. see Chow (1997). Judd (1999). z )]} x ˜ u (1. If Vj+1 = Vj . Ljungqvist and Sargent (2000) and Marimon and Scott (1999). However.1 As above stated in 1 Kendrick (1981) can be regarded as a seminal work in this field. we need to find the optimum u that maximize the right side of equ. This task makes it difficult to write a closed form algorithm for iterating the Bellman equation. the difficulty of this algorithm is that in each Step 2. If we know the function V.3). For the later development.7) as V (x. 1. For a review up to 1990. • Step 2. the convergence of this algorithm is warranted by the contraction mapping theorem (Sargent 1987. Vj . z) = max {U (x. Otherwise.15 every period the time subscripts become irrelevant. Unfortunately. see Taylor and Uhlig (1990). which in reality we do not know in advance. stop.9) can be found in Santos and Vigo-Aguiar (1998) and Gr¨ne and Semmler (2004a). all these considerations are based on the assumption that we know the function V. we then can solve u via the Bellman equation.8) u where the tilde (∼) over x and z denotes the corresponding next period values. The typical method in this case is to construct a sequence of value functions by iterating the following equation: Vj+1 (x. 1989). update Vj and go to step 1. Stockey et al. Recent methods of numerically solving the above discrete time Bellman equation (1.3 The First-Order Conditions The last two decades have observed various methods of numerical approximation to solve the problem of dynamic optimization. z) = max {U (x.2) and (1. Obviously. they are subject to (1. Under some regularity conditions regarding the function U and F . (1. Researchers are therefore forced to seek different numerical approximation methods. u) + βE [V (˜. named after Richard Bellman (1957). Equation (1. u) + βE [Vj (˜.

In economic analysis.2 Assume ∂F/∂x = 0. there are still models in which such transformation is not feasible. G(x. z ) x ˜ ∂x ∂x ∂x ∂F (1. 2 named after Benveniste and Scheinkman (1979). u) + βE (x.3.13) (1. G(x. z). u.8) it satisfies ∂V (x. z)) = ∂x ∂x Substituting this formula into (1. in which x does not appear in the transition law so that ∂F/∂x = 0 is satisfied. u) = 0 x ˜ ∂u ∂u ∂x ˜ (1.8). Note that to use the above Euler equation as the first-order condition for deriving the decision sequence. The first-order condition for maximizing the right side of the equation takes the form: ∂U (x. z ) = 0 x ˜ ∂u ∂u ∂x ˜ (1. z) (˜. that are used to derive the decision sequence. z) (˜. We will show this technique in the next chapter using a prototype model as a practical example. z) ∂U (x. Assume V is differentiable and thus from (1. z) (˜. 1. G(x. One can find two approaches: one is to use the Euler equation and the other the equation derived from the Lagrangian. u. one must require ∂F/∂x = 0. z)) ∂V ∂F = + βE (x. The above formula becomes ∂V (x.10) The objective here is to find ∂V /∂x. . one often encounters models. u) ∂V ∂F + βE (x.11) This equation is often called the Benveniste-Scheinkman formula.1 The Euler Equation We start from the Bellman equation (1.12) where the tilde (∼) over u again denotes the next period value with respect to u.10) gives rise to the Euler equation: ∂F ∂U ∂U (x. after some transformation. However.16 this book we want to focus on the first-order conditions. z) ∂U (x.

Yet.1) and (1. ut .17 1. ut and zt .4. using (1.15) In comparison with the Euler equation. ut ) + βEt λt+1 = λt . ∂xt ∂xt+1 ∂F (xt .2). xt and ut will yield equation (1. ∂ut ∂ut (1. {zt+1 }∞ .2) as well as ∂F (xt+1 . 1. the Lagrangian multiplier. we find that there is an unobservable variable λt appearing in the system.3.14) and (1. Also as we will see in the next chapter. t=0 t=0 ∞ {ut }∞ and {λt }t=0 are implied given the initial condition (x0 . ut+1 . zt )] where λt . we can define the Lagrangian L: ∞ L = E0 t=0 β τ U (xt .15) form a dynamic system from which the transition sequences {xt+1 }∞ . one does not have to transform the model into the setting that ∂F/∂x = 0. and therefore the solution paths usually are impossible to be computed directly. called 3 See also Chow (1997).15).2). t=0 mostly such a system is highly nonlinear. Yet.14) .3 This further implies that they can produce the same steady states when being evaluated at their certainty equivalence forms. zt ) ∂U (xt .13) or from the Lagrangian (1.4 1.1 Approximation and Solution Algorithms The Gauss-Seidel Procedure and the Fair-Taylor Method The state (1.3) and the first-order condition derived either as Euler equation (1. the exogenuous (1. zt+1 ) ∂U (xt . is a m × 1 vector.14) (1. Setting the partial derivatives of L to zero with respect to λt .2 Deriving the First-Order Condition from the Lagrangian Suppose for the dynamic optimization problem as represented by (1.(1. . z0 ) . ut ) − β τ +1 λ′t+1 [xt+1 − F (xt . ut ) + Et λt+1 = 0. One popular approach as suggested by Fair and Taylor (1983) is to use a numerical algorithm. This is an important advantage over the Euler equation. these two types of first-order conditions are equivalent when we appropriately define λt in terms of xt . ut .

ym+n. . zt .(1. 2. ψ) = 0 f2 (yt . ψ) = 0 . then there will be 2m+n equations and yt should include xt . Denote (1) this new computed value yt+1 . zt .t+1 is the ith element in the vector yt+1 .t+1 = gm+n (yt . .16) (1. if there is any. the following discussion assumes only the Euler equation to be used.(1. zt .21) for the given yt+1 along with yt . fm+n (yt .5 The system. zt .18) as follows: y1. can be solved numerically by an iterative technique.18) Here yt is the vector of endogenuous variables with m + n dimensions.15) as the first-order condition. yt+1 . the algorithm starts by setting t = 0 and proceeds as follows: • Step 1. . .16) .19) (1. m + n. λt and ut . ψ) .14) and (1. For the convenience of presentation. Given ∗ the initial condition y0 = y0 . Therefore the system is essentially not different from the dynamic rational expectation model as considered by Fair and Taylor (1983). Compute yt+1 (0) according to (1.t+1 = g2 (yt . .. ψ) y2. go to Step 3. Otherwise compute yt+1 for the given yt+1 . zt . yt+1 . to their expectation values (usually zero). including both state xt and control ut .20) (1.19) . yt+1 . and the sequence of exogenuous variable {zt }T .21) where. to which we shall now turn. See also a similar formulation in Juillard (1996) with one lag and one lead. Suppose the system can be be written as the following m + n equations: f1 (yt . . 4 (0) (1) (0) If we use (1. This procedure will be repeated until the tolerance level is satisfied.18 Gauss-Seidel procedure.17) (1. It is always possible to tranform the system (1. yt+1 .. ψ) (1. as suggested. If the distance between yt+1 and yt+1 is less than a prescribed (2) (1) tolerance level. zt . ψ) = 0 (1. • Step 2. 5 Our suggested model here is a more simplified version since we only take one lead. yt+1 . Call this guess yt+1 .4 ψ is the vector of structural parameter. yt+1 . This can be done by setting the corresponding disturbance term. t=0 with T to be the prescribed time horizon of our problem..t+1 = g1 (yt . i = 1. Also note that in this formulation we left aside the expectation operator E. called Gauss Seidel procedure. Set an initial guess on yt+1 . yi.

If this is the case. This will produce a sequence T of endogenuous variable {yt }T .. Yet the initial condition for the dynamic decision problem is usually provided only by x0 (see equation our discussion in the last section). This cost of computation makes it a difficult candidate solving the dynamic optimization problem. The third and the most important problem regards its accuracy of solution. Update t by setting t = t + 1 and go to Step 1. In the next chapter when we turn to a practical problem. The second disadvantage of this method is the cost of computation. T. . The general idea of this method is to replace all the necessary equations in the model by approximations. Formally. chapter 7). In particular. The algorithm will continue until t reaches T.2 The Log-linear Approximation Method Solving nonlinear dynamic optimization model with log-linear approximation has been widely used and well documented. 2. t = 1. which include not only the initial state x0 but also the initial decisions u0 . Given the approximate log-linear system. The procedure requires the iteration and convergence for each period t. which include both decision {ut }t=0 and t=0 state {xt }T . which is also in the form of log-linear deviations. (1988) and Campbell (1994) in the context of Real Business Cycle models.1).19 • Step 3.. X the corresponding steady state. this approximation method can be applied to the first-order condition either in terms of the Euler equation ¯ or derived from the Langrangean. One possible way to deal with this problem is to start with different initial u0 . which are linear in the log-deviation form. there might be a problem in accuracy.4. 3. ¯ xt ≡ ln Xt − ln X is regarded to be the log-deviation of Xt . let Xt be the variables. .. a damping technique can usually be employed to force convergence (see Fair 1984. In principle. one then uses the method of undetermined coefficients to solve for the decision rule. Then. Note that the procedure starts with the given initial condition y0 . This has been proposed in particular by King et al. we will investigate these issues more thoroughly. 1. and therefore the solution sequences {ut }T and {xt }T depend t=1 t=1 virtually on the initial condition. Considering that the weights of u0 could be important in the value of the objective function (1. t=0 There is no guarantee that convergence can always be achieved for the iteration in each period. 100xt is the per¯ centage of Xt that it deviates from X.

22) (1. Solve the log-linearized system for the decision rule (which is also in log-linear form) with the method of undetermined coefficients.20 Uhlig (1999) provides a toolkit of such method for solving a general dynamic optimization model. This requires first parameterizing the model and then evaluating the model at its certainty equivalence form. the exogenuous equation (1.14) and (1. the process of log-linearization and solving for the undetermined coefficients are not easy.4. These necessary equations should include the state equation (1.4).24) • Step 4. Log-linearize the necessary equations characterizing the equilibrium law of motion of the system. ¯ Xt ≈ Xext ext +ayt ≈ 1 + xt + ayt xt yt ≈ 0 (1. Find the necessary equations characterizing the equilibrium law of motion of the system. In Chapter 3. • Step 3. However.13) or from the Lagrangian (1. The general procedure involves the following steps: • Step 1. 1. Derive the steady state of the model. the solution path does not require the initial condition of u0 and therefore it should be more accurate in comparison to the Fair-Taylor method. It is certainly desirable to have a numerical algorithm available that can take over at least part of the analytical derivation process.2). • Step 2. and usually have to be accomplished by hand. Again in principle this method can be applied to the first-order con- .3 Linear-quadratic Approximation with Chow’s Algorithm Another important approximation method is the linear-quadratic approximation.3) and the firstorder condition derived either as Euler equation (1. Uhlig (1999) suggests the following building block for such log-inearization. we will provide a concrete example to apply the above procedure and to solve a practical problem of dynamic optimization. In some cases.15). On the other hand.23) (1. Solving a nonlinear dynamic optimization model with log-linear approximation usually does not require a heavy computation in contrast to the Fair-Taylor method. the decision rule could be derived even analytically. by assuming a log-linearized decision rule as expressed in (1.

The numerical properties of this approximation method have further been studied in Reiter (1997) and Kwan and Chow (1997).27) The linear-quadratic approximation assumes the state equations to be linear and the objective function to be quadratic. ut ) = K1 xt + K12 ut + k1 ∂xt ∂U (xt . Chow’s method can be presented in both continuous and discrete time form. At the same time.31) (1. ∂xt ∂xt ∂U (xt .27) can be rewritten as K1 xt + K12 ut + k1 + βA Et λt+1 = λt ′ K2 ut + K21 xt + k2 + βC Et λt+1 = 0 ′ (1.25) We shall remark that the state equation here is slightly different from the one as expressed by (1. he proposed a numerical algorithm to facilitate the computation of the solution.28) (1. ut ) +β Et λt+1 = λt .30) Given this linear-quadratic assumption.25) as well as ∂F (xt .2). ut ) − β t+1 λt+1 [xt+1 − F (xt . but subject to xt+1 = F (xt . In other words. ut ) = K2 ut + K21 xt + k2 ∂ut F (xt .1). ut ) − ǫt+1 ] Setting the partial derivatives of L to zero with respect to λt . ∂ut ∂ut (1. it is only a special case of (1. Apparently. we here only consider the discrete time version.29) (1.26) and (1.2) in Section 2. the Lagrangian L should be defined as ∞ L = E0 t=0 ′ β t U (xt . xt and ut will yield equation (1. ut ) ∂F (xt . ut ) ∂U (xt . Consequently.32) .21 dition either in terms of the Euler equation or derived from the Lagrangian. Suppose the objective of a representative agent can again be written as (1. ut ) +β Et λt+1 = 0. equation (1. ut ) = Axt + Cut + b + ǫt+1 (1. Since the models in discrete time are more convenient for empirical and econometric studies. Chow (1993) was among the first who solves a dynamic optimization model with linear-quadratic approximation applied to the first-order condition derived from the Lagrangian. ∂U (xt . ut ) + ǫt+1 (1.26) (1.

35) (1. the method. the new H and h can be calculated by (1.30) . It should be noted that the algorithm suggested by Chow (1993) is much more complicated. where u∗ is the initial guess on ut . one calculates the new G and g by (1.37) and (1. 7 For instance. our above presentation follows the spirit of Kwan and Chow (1997) if we assume the linearization takes place around the steady states. as the result of iterating (1. as pointed by Reiter (1997). Chow’s method requires less derivations. in GAUSS.38). followed by iterating (1. The procedure will continue until convergence of ut .33) (1. H.32) first takes place around (xt . G and g can then be calculated by (1. Chow’s method has at least three weeknesses.7 Despite this significant advantage. Since the above algorithm is designed for any given xt . Chow’s method can be a good approximation method only when the state equation is linear or can be transformed into a linear one.6 In comparison to the log-linear approximation. an iterative procedure can designed as follows.38). First. However. is less convenient comparing to other approximation method.38).35) . Yet even this can be computed with a written procedure in a major software package. this time it will be around (xt .35) .35) and (1. the only derivation is to obtain the partial derivatives of U and F . if ut calculated via the decision rule (1. set the initial H and h. one could use the procedure GRADP for deriving the partial derivatives. For any given xt . In a response to Reiter (1997). Using these new H and h.(1.(1.35) and (1. 6 .38) Generally it is impossible to find. First.36) again. which must be accomplished by hand. the approximation will take place again. The process will continue until convergence is achieved. Given the steady state.36) (1. the analytical solution to G. Thus.36).22 Assume the transition laws of ut and λt take the linear form: ut = Gxt + g λt+1 = Hxt+1 + h (1. Given G and g as well as the initial H and h.37) (1.(1.33). the resulting decision rule is indeed nonlinear in xt . ut ). u∗ ). is different from u∗ . Therefore.34) Chow (1993) proves that the coefficient matrices G and H and the vectors g and h satisfy G = −(K2 + βC HC)−1 (K21 + βC HA) g = −(K2 + βC HC)−1 (k2 + βC Hb + βC h) H = K1 + K12 G + βA H(A + CG) h = (K12 + βA HC)g + k1 + βA (Hb + h) ′ ′ ′ ′ ′ ′ ′ ′ (1. Therefore. the approximation as represented by (1. Kwan and Chow (1997) propose a one-time linearization around the steady states. In this sense. Otherwise. g and h is impossible.

32) will not be a good approximation to the non-approximated (1. Yet this will increase the dimension of state space and hence intensify the problem of multiple solutions.35) into (1. . which. the iteration with (1.23 the linearized first-order condition as expressed by (1. Indeed.37) gives a quadratic matrix equation in H.35) . Gong (1997) points out the same problem. Evaluating the first8 Note that the good approximation to ∂F (xt . This will create some difficulty when being applied to the model with exogenuous variables.9 Third. since A′ and C ′ are not good approximations to ∂F (xtt.ut ) . The approximation method we used here is the linear-quadratic approximation.25).15).14) and (1.2) rather than (1. the first-order condition is established by (1.25) is only a special case of (1. and therefore the method does not require log-linear-approximation.ut ) ∂xt should be 2 F xu ∂ 2 F (¯.3). This indicates that the method does not require the assumption ∂F/∂x = 0.¯) xu (xt − x) + ∂ ∂ x(¯. yet overcomes all the limitations occurring in Chow’s method.¯) . Therefore. ¯ x ¯ ∂u The same problem of multiple solution should also exist for the log-linear approximation method. The established Proposition 1 below allows us to save further derivations when applying the method of undetermined coefficients. if we use an existing software procedure to compute the partial derivatives with respect to U and F .2).31) and (1.26) and (1. if there are any. needs to be accomplished by hand. Second.ut ) there is a similar problem. The first-order condition used for this algorithm is derived from the Lagrangian. in many cases. This actually has been done by Chow and Kwan (1998) when the method has been applied to a practical problem.1) .5 An Algorithm for the Linear-Quadratic Approximation In this section.27).8 Reiter (1996) ∂x ∂u has used a concrete example to show this point. Indeed. our suggested algorithm takes full advantage. the only derivation in applying our algorithm is to derive the steady state.¯) (ut − ¯ ∂x2 ¯∂u 9 (¯ u u) + ∂F∂x. as a part of state variables. One possible way to circumvent this problem is to regard the exogenuous variables. we shall present an algorithm for solving a dynamic optimization model with the general formulation as expressed in (1.(1. Since our state equation takes the form of (1. 1.(1. one would expect that the number of solutions will increase with the increase of the dimensions of state space.ut ) and ∂F (xtt.38) may exhibit multiple solutions since inserting (1. the assumed state equation (1. For ∂F (xtt.

40) (1. Fxz . Then the solution of G. Q and h satisfy G = M + NH (1. Fuu . The similarity is applied to Uu . Uux . where in particular.2) and (1. g. Uxu . (1.14).43): Proposition 1 Assume ut and λt+1 follow (1. Taking the first-order Taylor approxi¯ ¯ ¯ ¯ mation around the steady state for (1.46) −1 −1 HCN H + H(A + CM ) + F13 (F12 N − Im )H + F13 (F11 + F12 M ) = 0 (1. Fxx .3) at their certainty equivalence form.42) and (1. we obtain F11 xt + F12 ut + F13 Et λt+1 + F14 zt + f1 = λt . λ and z .48) −1 h = F13 (I − F12 N ) − HCN − Im −1 −1 H(Cm + b) + Qp + F13 (f1 + F12 m) (1. xt+1 = Axt + Cut + W zt + b.45) g = Nh + m (1.15) and (1.2).49) .41) Note that here we define Ux as ∂U/∂x and Uxx as ∂ 2 U/∂x∂x all to be evaluated at the steady state. Fux and Fuz .42) and (1.42) (1.47) −1 −1 F13 (I − F12 N ) − HCN Q − QP = H(W + CR) + F13 (F14 + F12 R) (1.43) The following is the proposition regarding the solution for (1.43) respectively. Uuu . The objective is to find the linear decision rule and the Lagrangian function: ut = Gxt + Dzt + g λt+1 = Hxt+1 + Qzt+1 + h (1. z ) − Fx x − Fz z − Fu u x ¯ ¯ ¯ ¯ ¯ ¯ xx F11 = Uxx + β λF F13 = βA′ ¯ ¯ ¯ ¯ ¯ f1 = Ux + βA′ λ − F11 x − F12 u − F13 λ − F14 z ¯ F22 = Uuu + β λFuu ¯ ¯ f2 = Uu + βC ′ λ − F21 x − F22 u − F23 λ − F24 z ¯ ¯ ¯ C = Fu W = Fz ¯ F12 = Uxu + β λFxu ¯ xz F14 = β λF ¯ F21 = Uux + β λFux ′ F23 = βC ¯ F24 = β λFuz (1. λt and zt . u.44) D = R + NQ (1. ut . F21 xt + F22 ut + F23 Et λt+1 + F24 zt + f2 = 0. we are able to derive the steady states for xt . u. D. H.24 order condition along with (1.39) (1. which we shall denote respectively as x. A = Fx b = F (¯.

we cannot solve for H analytically. Obviously. when one encounters the model with one state variable. For example.51) (1. in most cases. . 1.48) and (1. and therefore (1.47).46). (1. In this case. the proposition allows us to solve G. Given H. which is nonlinear (quadratic) in H.44).47) as H = F (H). (1.49). λt is the shadow price of the capital which should be inversely related to the quantity of capital. −1 F23 F13 F12 − F22 −1 (f2 − F23 F13 f1 ).47) can be written as a1 H 2 + a2 H + a 3 = 0 with the two solutions given by H1. In other words. −1 F23 F13 F14 ). in all the models that we will present in this book. if the model has more than one state variables.25 where Im is the m × m identity matrix and N = M = R = m = −1 F23 F13 F12 − F22 −1 F23 F13 F12 −1 F23 F13 F12 −1 −1 −1 −1 −1 F23 F13 .6 A Dynamic Programming Algorithm In this section we describe a dynamic programming algorithm which enables us to compute optimal value functions as well as optimal trajectories 10 11 though multiple solutions may exist. one can also easily identify the proper solution by relying on the economic meaning of λt . (1. This indicates that only the negative solution is a proper solution.45) and (1.10 However. Then D and g can be computed from (1. we shall rewrite (1. Therefore. Further.50) (1. the solutions can be computed without iteration.54) until convergence will give us a solution to H.11 H becomes a scalar.2 = 1 −a2 ± a2 − 4a1 a3 2 2a1 1 2 .54) iterating (1. The solution to H is implied by (1.53) − F22 − F22 (F21 − (F24 − −1 F23 F13 F11 ). the state equation is the capital stock.52) (1. Q and h directly according (1. which is mostly the case in recent economic literature.

.55) where xt+1 = f (x(t). u For the estimation of the gridding error we estimate the residual of the operator Th with respect to VhΓ . u(t)). 1) and a tolerance tol > 0. If ηl < tol for all l then stop (3) Refine all cells Cj with ηj ≥ θ maxl ηl . the difference between VhΓ (x) and Th (VhΓ )(x) for points x which are not nodes of the grid. i. x(0) = x0 ∈ Rn For the discretization in space we consider a grid Γ covering the computational domain of interest.e. where the value of VhΓ for points x which are not grid points (these are needed for the evaluation of Th ) is determined by linear interpolation. (1) Compute the solution VhΓi on Γi (2) Evaluate the error estimates ηl . Denoting the nodes of the grid Γ by xi . .26 of discounted optimal control problems of the type above. .56) we refer to Gr¨ne and Semmler (2004a). Fix a refinement parameter θ ∈ (0.56) for all nodes xi of the grid. . set i = i + 1 and go to (1). i = 1. . We consider discounted optimal u control problems in discrete time t ∈ N0 given by ∞ V (x) = max u∈U t=0 β t g(x(t). u(t)) (1. The basic discretization procedure goes back to Capuzzo Dolcetta (1983) and Falcone (1987) and is applied with adaptive gridding strategy by Gr¨ne u (1997) and Gr¨ne and Semmler (2004a). An extension to a stochastic decision problem is briefly summarized in appendix III. for each cell Cl of the grid Γ we compute ηl := max Th (VhΓ )(x) − VhΓ (x) x∈Cl Using these estimates we can iteratively construct adaptive grids as follows: (0) Pick an initial grid Γ0 and set i = 0. For a description of several iterative methods for the solution of (1. Thus. P .. we are now looking for an approximation VhΓ satisfying VhΓ (xi ) = Th (VhΓ )(xi ) (1.

the log-linear approximation method and the linear-quadratic approximation method. we therefore propose an algorithm that could overcome the limitation of existing methods (such as Chow’s method).27 For more information about this adaptive gridding procedure and a comparison with other adaptive dynamic programming approaches we refer to Gr¨ne and Semmler (2004a) and Gr¨ne (1997). which in our discretization can be obtained in feedback form u∗ (x) for the discrete time approximation using the following procedure: For each x in the gridding domain we choose u∗ (x) such that the equality ∗ max {hg(x. we still cannot exclude the possibility that sometime the restriction cannot be satisfied. u∗ (x)). For the linear-quadratic approximation. 1. we compare their advantages and disadvantages. In the next chapter we will turn to a practical problem and apply the diverse methods. u u In order to determine equilibria and approximately optimal trajectories we need an approximately optimal policy. On the other hand. we consider the solutions computed by the Fair-Taylor method. Then the resulting sequence u∗ = i h u∗ (xh (i)) is an approximately optimal policy and the related piecewise constant control function is approximately optimal. u) + βVh (xh (1))} = {hg(x. We also have elaborated on dynamic programming as a recently developed method to solve the involved Bellman equation. The approximation methods discussed here use two types of first-order conditions: the Euler equation and the equations derived from the Lagrangian. u∗ (x)) + βVh (xh (1))} u∈U holds. We find that the Euler equation needs a restriction that the state variable cannot appear as a determinant in the state equation. . the methods of log-linear approximation may need an algorithm that can take over some heavy derivation process that otherwise must be analytically accomplished. Given these two types of first-order conditions. For all of these different methods.7 Conclusion This chapter reviews some typical approximation methods to solve a stochastic dynamic optimization model. Although many economic models can satisfy this restriction after some transformation. We find that the Fair-Taylor method may encounter an accuracy problem due to its additional requirement of the initial condition for the control variable. where x∗ (1) = x + hf (x.

60): xt+1 = (Sx + Sλ H)xt + (Sz + Sλ Q)zt + s + Sλ h. (1.63) (1.55) respectively.58) where N.59) (1.40) and then solving for ut . (1.60) Now express λt in terms of (1.61) (1. Substituting (1.64) (1. Et λt+1 = (Lx + Lλ H)xt + (Lz + Lλ Q)zt + l + Lλ h. we get ut = N λt + M xt + Rzt + m. expressing ut in terms of (1.59) and (1.69) (1.70) Next.39).58) into (1.43) and expressing xt+1 in terms of (1.62) (1.41) while Et zt+1 in terms of (1.43) and then plug it into (1.28 1.41) and (1.66) (1. we obtain xt+1 = Sx xt + Sλ λt + Sz zt + s Et λt+1 = Lx xt + Lλ λt + Lz zt + l.71) respectively.41) and (1.71) Next. we obtain −1 Et λt+1 = F13 (λt − F11 xt − F12 ut − F14 zt − f1 ) (1. we get .65) (1. where Sx Sλ Sz s Lx Lλ Lz l = = = = = = = = A + CM CN W + CR Cm + b −1 −F13 (F11 + F12 M ) −1 F13 (I − F12 N ) −1 −F13 (F14 + F12 R) −1 −F13 (f1 + F12 m) (1.67) (1.68) (1.42) for equations (1.3) we obtain Et λt+1 = HAxt + HCut + (HW + QP )zt + Hb + Qp + h. M.8 Appendix I: Proof of Proposition 1 From (1. (1. taking expectation for the both sides of (1. R and m are all defined in the proposition.57) Substituting the above equation into (1.

1. Next Q is resolved from (1.44). Then D is resolved from (1. H is resolved from (1. Sz + Sλ Q = W + CD.77) (1. D.5 is written as a GAUSS procedure and available from the authors upon request.77) when W + CD is replaced by (1.74). This gives rise to (1.65) and (1. This gives rise to (1.76). (1. Lx + Lλ H = H(A + CG). The input of this procedure includes • the steady states of x.75) (1.47) as in Proposition 1 with Sx .61). u and z with respect F and U . g and h. Sλ .73) Comparing (1. we obtain Sx + Sλ H = A + CG. u. s + Sλ h = Cg + b.74) (1.63).29 Et λt+1 xt+1 = (A + CG)xt + (W + CD)zt + Cg + b (1.76).62). (1. Sλ . which allows us to obtain (1.79) These 6 equations determine 6 unknown coefficient matrices and vectors G.and the second-order partial derivatives of x. We call this procedure DYNPR.69) and (1.64) and (1. zbar and F bar respectively.48) with Sz .66).78).45).76) (1.49) with Sλ . Given H. (1. They are denoted .73). h is resolved from (1.66). Q.46). (1.66). all evaluated at the steady states.78). l + Lλ h = H(Cg + b) + Qp + h. ubar. s and l to be expressed by (1.79) when Cg + b is replaced by (1.9 Appendix II: An Algorithm for the LQApproximation The algorithm that we suggest to solve the LQ approximation of chapter 1. (1. Lλ .67) and (1. lbar.62). Finally. G is then resolved from (1.66). which allows us to obtain (1.74).72) and (1. (1. H. z and F denoted as xbar. (1. (1. This gives rise to (1.62). • the first. Finally g is resolved from (1.72) = H(A + CG)xt + [H(W + CD) + QP ] zt + H(Cg + b) + Qp + h. Lx and Lλ to be expressed by (1. which allows us to obtain (1.70) with (1. Lz + Lλ Q = H(W + CD) + QP. λ. In particular.78) (1. Lz and Lλ to be expressed by (1.75) when A + CG is replaced by (1.

BR. BLlamda. BN = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ F 23∗INV(F 13). BQ. F ux. U xu. F xu. Sz. ubar. F 11 = U xx + beta ∗ lbar ∗ F xx. F 24. F 22. Slamda. • the discount factor β (denoted as beta) and the parameters P and p (denoted as BP and sp respectively) appearing in the AR(1) process of z. f 1 = U x + beta ∗ lbar ∗ F x − F 11 ∗ xbar− F 12 ∗ ubar − F 13 ∗ lbar − F 14 ∗ zbar. F zx. F 12 = U xu + beta ∗ lbar ∗ F xu. U uu. F xx. BR = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ (F 24 − F 23∗INV(F 13) ∗ F 14). W. BLx. BP. F 22 = U uu + beta ∗ lbar ∗ F uu. denoted respectively as BG. F 21 = U ux + beta ∗ lbar ∗ F ux. sl. xbar. BM. BN. sh. F xz.and the second-order partial derivatives of x with respect to F respectively. BH. for instance. F 14. F zz. PROC(3) = DYNPR (F x. zbar. F z. F u. f 2 = U u + beta ∗ (F u′) ∗ lbar − F 21 ∗ xbar− F 22 ∗ ubar − F 23 ∗ lbar − F 24 ∗ zbar. BG. D and g. sa1. F bar. F uz. sm. BM = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ (F 21 − F 23∗INV(F 13) ∗ F 11). U x. F 23 = beta ∗ (F u’). F 13. lbar. F 24 = beta ∗ lbar ∗ F uz. ss. A. U u. sb. W = F z. U xx. sg. C = F u. BLz. Sx = A + C ∗ BM . F 11. F uu. sb = F bar − A ∗ xbar − C ∗ ubar − W ∗ zbar. F 13 = beta ∗ F x. F 14 = beta ∗ lbar ∗ F xz. F 12. F x and F xx for the first. beta. sa3. C. Sx. sa2.30 as. . BD. F 21. BD and sg. sm = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ (f 2 − F 23∗INV(F 13) ∗ f 1). LOCAL A = F x. f 2. f 1. F 23. The output of this procedure is the decision parameters G. sp). U ux.

i.31 Slamda = C ∗ BN . Sz = W + C ∗ BR. u∈U 12 (1. u(t). o . sg).5). BLlamda = INV(F 13) ∗ (1 − F 12 ∗ BN ). x(0) = x0 ∈ Rn (1. BLx = −INV(F 13) ∗ (F 11 + F 12 ∗ BM ).81) and the zt are i. 1. sa2 = Sx − BLlamda. sh = INV(BLlamda − BH ∗ Slamda − 1)∗ (BH ∗ ss + BQ ∗ sp − sl).d. is easily extended to stochastic discrete time problems of the type ∞ V (x) = E where uin U max t=0 β t g(x(t).9. z))} . BQ = INV(BLlamda − BH ∗ Slamda − BP )∗ (BH ∗ Sz − BLz). BH = (1/(2 ∗ sa1)) ∗ (−sa2 − (sa2ˆ2 − 4 ∗ sa1 ∗ sa3)ˆ0. RETP(BG. random variables. sa1 = Slamda. u) + βVh (ϕ(x. u. see Camilli and Falcone (1995). BD = BR + BN ∗ BQ. This problem can immediately be applied in discrete time with time step h = 1.6. sg = BN ∗ sh + sm.82) For a discretization of a continuous time stochastic optimal control problem with dynamics governed by an Itˆ stochastic differential equation. zt ). u(t)) (1. BLz = −INV(F 13) ∗ (F 14 + F 12 ∗ BR). sa3 = −BLx. sl = −INV(F 13) ∗ (f 1 + F 12 ∗ sm).80) x(t + 1) = f (x(t). 12 The corresponding dynamic programming operator becomes Th (Vh )(x) = max E {hg(x.1 Appendix III: The Stochastic Dynamic Programming Algorithm Our adaptive approach of chapter 1. BD. BG = BM + BN ∗ BH. ss = C ∗ sm + sb. ENDP.

developed by Gr¨ne (1997) and applied in Gr¨ne and Semmler (2004a) may not be u u the most efficient approach to these problems. if z is a continuous random variable then we can compute E via a numerical quadrature formula for the approximation of the integral (hg(x. It should also be noted that in the smooth case one can obtain estimates for the error in the approximation of the gradient of Vh from our error estimates.13 Furthermore. Gr¨ne and Semmler (2004a) u show the application of such a method to such a problem. First. in particular allowing to avoid the discretization of the state space.32 where ϕ(x. Nevertheless. complicated dynamical behavior like multiple stable steady state equilibria. u 13 A remark to this extent on an earlier version of our work has been made by Buz Brock and Michael Woodford. u. whose structure gives rise to different approximation techniques. for details we refer to Gr¨ne (2003). stochastic optimal control problems have several features different from deterministic ones.6 are by far more efficient than non–adaptive methods if the same discretization technique is used for both approaches. periodic attractors etc. shows that adaptive grids as discussed in chapter 1. If the random variable z is discrete then the evaluation of the expectation E is a simple summation. . It should be noted that despite the formal similarity. Finally. z) is now a random variable. where z is a truncated Gaussian random variable and the numerical integration is done via the trapezoidal rule. u. z))p(z)dz. z where p(z) is the probability density of z. the examples in Gr¨ne and Semmler u (2004a). is less likely because the influence of the stochastic term tends to “smear out” the dynamics in such a way that these phenomena disappear. in stochastic problems the optimal value function typically has more regularity which allows the use of high order approximation techniques. and it has to compete with other efficient techniques. the above dynamic programming technique. u) + βVh (ϕ(x. stochastic problems can often be formulated in terms of Markov decision problems with continuous transition probability (see Rust (1996) for a details). In these situations.

2) See. 2. Stockey et al.1 The Ramsey Problem The Model Ramsey (1928) posed a problem of optimal resource allocation.1) (2. that is. Assume that the output is produced by capital stock and it is either consumed or invested. which is now often used as a prototype model of dynamic optimization. The model we choose is a Ramsey model (Ramsey 1928) to which the exact solution is computable with the standard recursive method. Yt = At Ktα . chapter 2).Chapter 2 Solving a Prototype Stochastic Dynamic Model 2. (1989.2 2. Yt output and Kt the capital stock. added to the capital stock. chapter 2) and Ljungqvist and Sargent (2000.2. 33 . 1 (2. Formally. chapter 2). Let Ct denote consumption.1 The model presented in this section is essentially that of Ramsey (1928) yet it is augmented by uncertainty. Blanchard and Fisher (1989. we shall solve a prototype model by employing different approximation methods as we have discussed in the last chapter. This will allow us to test the accuracy of approximations by comparing the different approximate solutions to the exact solution. Yt = Ct + Kt+1 − Kt . In particular. for instance.1 Introduction This chapter turns to a practical problem of dynamic optimization.

Kt+1 = Kt = K. . 2.10) For another similar model where the depreciation rate is also set to 1 and hence the exact solution is computable. 1) and At is the technology which may follow an AR(1) process: At+1 = a0 + a1 At + ǫt+1 . It is not difficult to find that one steady state is on the bound¯ ¯ ary.8) for logK. (2. Solving (2. Therefore. C is resolved from (2.4): ¯ ¯ ¯ C = AK α − K 2 (2.34 where α ∈ (0.6) and evaluate the equation at its certainty equivalence form: log Kt+1 = log(αβA) + α log Kt .i. To obtain a more meaningful interior steady state.2. we take logarithm for both sides of (2. Equation (2.7) (2. we obtain ¯ log K = log(αβA) . ¯ ¯ Given K.2 The Exact Solution and the Steady States It is well known that the exact solution for this model – which could be derived from the standard recursive method can be written as Kt+1 = αβAt Ktα . 1−α ¯ K = (αβA)1/(1−α) . that is K = 0 and C = 0. A0 ).6) Given the solution paths for Ct and Kt+1 .5) given the initial condition (K0 .8) ¯ ¯ At the steady state. This further implies from (2. (2.1) and (2. see Long and Plosser (1983). we are then able to derive the steady state. (2.4) Note that we have assumed here that the depreciation rate of capital stock is equal to 1.9) (2. (2.2 The representative agent is assumed to find the control sequence {Ct }∞ t=0 such that ∞ max E0 t=0 β t ln Ct (2.4) that Ct = (1 − αβ)At Ktα .d. This is a simplified assumption by which the exact solution is computable.3) Here we shall assume ǫt to be i.2) indicates that we could write the transition law of capital stock as Kt+1 = At Ktα − Ct .

Therefore ∂F/∂x = 0 and ∂F/∂u = 1. Also note that in this formulation the state variable in period t is still Kt . At ) = max {ln(At Ktα − Kt+1 ) + βE [V (Kt+1 . Kt+1 (2. Let us first consider the Euler equation.13) Meanwhile applying the Benveniste-Scheinkman formula. The Bellman equation in this case can be written as V (Kt . there are two types of first-order conditions. As we have mentioned in the last chapter.13) allows us to obtain the Euler equation: α−1 αAt+1 Kt+1 −1 + βE = 0.4) to express Ct in the utility function. we shall first establish the first-order condition derived from the Ramsey problem. 2. At+1 )]} .35 2.12) The necessary condition for maximizing the right hand side of the Bellman equation (2. At ) = . At+1 ) = 0. (2. we may denote the decision variable as Zt . Therefore the model can be rewritten as ∞ max E0 t=0 β t ln(At Ktα − Kt+1 ).3 The First-Order Conditions and Approximate Solutions To solve the model with different approximation methods.14) .1 The Euler Equation Our first task is to transform the model into a setting so that the state variable Kt does not appear in F (·) as we have discussed in the last chapter.11) subject to Kt+1 = Zt Note that here we have used (2.14) into (2.12) is given by At Ktα ∂V −1 + βE (Kt+1 .3. (2. the Euler equation and the equation derived from the Lagrangian. This can be done by assuming Kt+1 (instead of Ct ) as model’s decision variable. To achieve a notational consistency in the time subscript. ∂Kt At Ktα − Kt+1 Substituting (2. − Kt+1 ∂Kt+1 ∂V αAt Ktα−1 (Kt . α At Ktα − Kt+1 At+1 Kt+1 − Kt+2 (2.

3 The Dynamic Programming Formulation The dynamic programming problem for the Ramsey growth model of chapter 2.18) back into (2. Using (2.16).4) as well as 1/Ct − βEt λt+1 = 0. Writing either (2.17). as a basic discrete time growth model .36 which can further be written as − α−1 αAt+1 Kt+1 1 + βE = 0.16) (2. Ct Ct+1 (2. {At+1 }∞ and {Ct }∞ given the initial condition t=1 t=0 K0 and A0 . Ct and Kt . one obtains (2.4) and (2. 2. (2.2. This further indicates that α−1 Et λt+1 = Et αAt+1 Kt+1 /Ct+1 (2. we obtain the Euler equation (2. This can be done as follows. we obtain λt = αAt Ktα−1 /Ct .15) or (2. we indeed obtain (2.9) and (2.15) along with (2. 2. we turn to derive the first-order condition from the Lagrangian.3) determine the transi∞ tion sequences of {Kt+1 }t=1 . in its deterministic version.2 The First-Order Condition Derived from the Lagrangian Next.3. βEt λt+1 αAt Ktα−1 = λt . It is also not difficult to find that the two first-order conditions imply the same steady state as the one derived from the exact solution (see equation (2. Next we try to demonstrate that the two first-order conditions are virtually equivalent. Define the Lagrangian: ∞ ∞ L= t=0 β ln Ct − t=0 t Et β t+1 λt+1 (Kt+1 − At Ktα + Ct ) .18) Substitute (2.3.9).15).10)).1 can be written. Setting to zero the derivatives of L with respect to λt .17) in terms of their certainty equivalence form while evaluating them at the steady state.17) These are the first-order conditions derived from the Lagrangian.15) This Euler equations (2.16) to express βEt λt+1 in terms 1/Ct and then plug it into (2.

Let us restate the problem above with K the state variable and K ′ the control variable. ch. see chapter 1.3 of this book and Ljungquist and Sargent (2000. Substitute C into the above intertemporal utility function by defining C = f (K) − K ′ (2.22) we have V (K ′ ).22) as U ′ [f (K) − K ′ ] + βV ′ (K ′ ) = 0 which gives by using (2.23) one step forward.37 ∞ V = max C t=0 β t U (Ct ) (2. (2. f ′′ (K) < 0. Yet equ. for V ′ (K ′ ).21) We then can express the discrete time Bellman-equation. representing a dynamic programming formulation as V (K) = max{U [f (K) − K ′ ] + βV (K ′ )} ′ K (2.22) By applying the Benveniste-Scheinkman condition3 gives V ′ (K) = U ′ (f (K) − K ′ )f ′ (K) (2. Notice that from the discrete time form of the envelope condition one again obtains the first order condition of equ.23) Note that K is the state variable and that in equ. (2.e. Ct + Kt+1 = f (Kt ) (2.24) can be written as 1=β U ′ (Ct+1 ) ′ f (Kt+1 ) U ′ (Ct ) (2. . U ′′ (C) < 0. where K ′ denotes the next period’s value of K.19) s. whereby K’ denotes the one period and K ′′ the two period ahead value of K. (2. f ′ (K) > 0.20) with an one period utility function U ′ (C) > 0.25) 3 The Benveniste-Scheinkman condition implies that the state variable does not appear in the transition equation.t.24) Note that hereby we obtain as a solution a second order difference equation in K. 2). i. U ′ [f (K) − K ′ ] = βU ′ [f (K ′ ) − K ′′ ]f ′ (K ′ ) (2.

30) (2.29) (2. to study asset pricing.t. 7. Kt+1 = AKtα − Ct The analytical solution for the value function is ˜ ˜ V (K) = B + C ln (K) and for the sequence of capital one obtains Kt+1 with ˜ C = ˜ B = α and 1 − αβ ˜ ln (C(1 − αβ)A) + 1−β βCAKtα = 1 + βC (2.38 which represent the Euler-equation that has extensively been used in economic theory4 . 10. 17) . If we allow for log-utility as in chapter 2.2.32) 4 The above Euler-equation is also essential not only in stochastic growth but also in finance. chs. in fiscal policy to evaluate treasury bonds and testing for sustainability of fiscal policy.31) (2. 2.26) s.27) βα ln 1−βα (αβA) For the optimal consumption holds Ct = Kt+1 − AKtα and for the steady state equilibrium K one obtains 1 α−1 or = αAK β K = βαAK α (2.1 the discrete time decision problem is directly analytically solvable. We take the following form of a utility function ∞ V = max Ct t=0 β t ln Ct (2.28) (2. see Ljungqvist and Sargent (2000.

Kt and At can be directly computed.7) C0 = (1 − αβ)A0 K0 .21): α−1 Ct+1 = αβCt At+1 Kt+1 . we thus can choose C0 close to ∗ ∗ α the exact solution denoted as C0 .1 specifies these parameters and the corresponding interior steady state values: Table 2.9800 a0 600. Let us first write equation (2.39 2. These solution paths are compared to the exact solution as expressed in (2.3). Note that from (2. Here we shall use the Euler equation. Table 2. β. and the others to deviate ∗ 1% from C0 .1. we shall first parameterize the model. . Since we know the exact solution.1 three solution paths computed by the Fair-Taylor method. a1 and σǫ .15) in the form as expressed by (1.8000 σǫ 60.4 2. we allow one C0 to be equal to C0 . we provide in Figure 2.1: Parameterizing the Prototype Model α 0. Since the model is simple in its structure.1 Solving the Ramsey Problem with Different Approximations The Fair-Taylor Solution It should be noted that one can apply the Fair-Taylor method either to the Euler equation or to the first-order condition derived from the Lagrangian. as reported in Table 2.00 a1 0. Before we compute the solution path.0 Given the parameters.4) and (2.4.33) Together with (2. there is no necessity to employ the Gauss-Seidel procedure as suggested in the last chapter. (2. There are all together 5 structural parameters: α. a0 .(1.7): The three solution paths are different due to their initial condition with regard to C0 .3200 β 0. they form a recursive dynamic system from which the transition paths of Ct .000 K 23593 C 51640 A 3000. ∗ In particular.19) .6) and (2.

the path of Ct quickly reaches its lower bound 0. we restrict Ct ≤ 0.99At Ktα . What can we learn from this experiment? The exact solution to this problem seems to be the saddle path for the system composed of (2. the paths of Kt and Ct (shown by the another dashed curve) are close to the exact solution for small t′ s. In particular. The solution path is shown by one of the dashed curves in the figure. the deviation becomes significant.3) .1: The Fair-Taylor Solution in Comparison to the Exact Solution: solid curve the exact solution.40 Figure 2. ∗ • When we choose C0 above C0 (by 1%) the path of Kt quickly reaches to zero and therefore the simulations have to be subject to the constraint Ct < At Ktα . ∗ • When we set C0 below C0 (again by 1%). (2. Yet when t goes beyond a certain point. dashed and dotted curves the Fair-Taylor solution The following is a summary of what we have found in this experiment. This restriction makes Kt never reach zero so that the simulation can be continued. This is shown by the dotted curve in the figure. ∗ • When we set C0 to C0 .4).

.15) and (2.3) can be log-linearized as kt+1 = ϕka at + ϕkk kt + ϕkc ct . Our first task is therefore to log-linearize the state.34) The proposition below regards the determination of the two undetermined coefficients ηca and ηck .33).15) and (2. ¯¯ ϕkk = AK α−1 α.41 ∗ and (2. Here we shall again use the Euler equation.37) Next we try to find a solution path for ct which we shall conjecture as ct = ηca at + ηca kt (2. the log-linear approximation method can be applied to the first-order condition either from Euler equation or from the Lagrangian. Ct and At . The following is the proposition regarding this log-linearization (the proof is provided in the appendix): Next we try to find a solution path for ct . The eventual deviation of the solution starting with C0 from the exact solution is likely to be due to the computational errors resulting from our numerical simulation. ¯ ¯ ϕkc = −(C/K).38) The proposition below regards the determination of the two undetermined coefficients ηc and ηck . ¯ ¯ α−1 ϕcc = αβ AK . Euler and the exogenuous equations as expressed in (2.4). we have verified our previous concern that the initial condition for the control variable is extremely important for obtaining an appropriate solution path when we employ the Fair-Taylor method.4). Then equation (2. where ¯¯ ϕka = AK α−1 . (2. ¯ ¯ α−1 ϕck = αβ AK (α − 1).36) (2. which we shall conjecture as ct = ηca at + ηca kt (2.3).2 The Log-linear Solution As the Fair-Taylor method. (2. E [ct+1 ] = ϕcc ct + ϕca at + ϕck kt+1 . ct and at denote the log deviations of Kt . (2. ¯ ¯ α−1 ϕca = αβ AK a1 .4. 2.35) (2. E [at+1 ] = a1 at . Proposition 2 Let kt . On the other hand.

The solution paths of the model can now be computed by relying on (2. dashed curves the log-linear solution .40) where Q2 = ϕkc . we show in Figure 2. All the solution paths are expressed as log deviations. Therefore.2: The Log-linear Solution in Comparison to the Exact Solution: solid curve the exact solution. Then ηck and ηca are determined from the following equation ηck = ηca 1 −Q1 − Q2 − 4Q0 Q2 1 2Q2 (ηck − ϕck )ϕka − ϕca = ϕcc − a1 − ϕkc (ηck − ϕck ) (2.2 the log-linear solution in comparison to the exact solution.42 Proposition 3 Assume ct follow (2. to compare the log-linear solution to ¯ the exact solution.35) with at to be given by a1 at−1 + ǫt /A. we shall perform the transformation via Xt = (1 + xt )X for a variable xt in log deviation form.34) ¯ and (2.34).1.39) (2. Using the same parameters as reported in Table 2. Q1 = (ϕkk − ϕcc − ϕkc ϕck ) and Q0 = ϕkk . Figure 2.

38) as discussed in the last chapter. This will allow us to obtain those Kij and kj (i. 2.43 In contrast to the Fair-Taylor solution. even if we start from many different initial conditions.3). one finds that the log-linear solution is quite close to the exact solution except for some initial paths.3) as well as Kt+1 = It (2.42) E0 t=0 β t ln(At Ktα − It ) The coefficients G11 . When considering this.4 The Linear-Quadratic Solution Using the Suggested Algorithm When we employ our new algorithm. the technology At also as a state variable.(1.4. the model becomes ∞ max subject to (2. this is not attainable for our particular application.3 The Linear-Quadratic Solution with Chow’s Algorithm To apply Chow’s method.5 Therefore.32). we have to treat. Therefore we can define F = AK α − C and U = ln C.35) . Again our first step is 5 Reiter (1997) has experienced the same problem. our attempt fails to compute the solution path with Chow’s algorithm. 2) coefficient matrices and vectors as expressed in Chow’s first-order conditions (1.41) and (2. j = 1. Suppose the linear decision rule can be written as It = G11 Kt + G12 At + g1 (2. 2.31) and (1.4. G12 and g1 can be computed in principle by iterating (1. Unfortunately. This can be done by choosing investment It ≡ At Ktα − Ct as the control variable while leaving the capital stock and the technology as the two state variables. . Yet this requires the iteration to be convergent. we shall first transform the model in which the state equation appears to be linear.41) Due to the insufficiency in the specification of the model with regard to the possible exogenuous variable. Next we shall derive the first. there is no need to transform the model. as suggested in the last chapter.and the second-order partial derivatives of the utility function around the steady state. This indicates that there are two state equations (2. both are now in linear form.

All these partial derivatives along with the steady states can be used as input in the GAUSS procedure provided in Appendix II of the last chapters.43) Equation (2.4) and (2.3: The Linear-quadratic Solution in Comparison to the Exact Solution: solid curve for exact solution.3 for illustration). Executing this procedure will allow us to compute the undetermined coefficients in the following decision rule for Ct : Ct = G21 Kt + G22 At + g2 (2. dashed curves for linear-quadratic solution .and second-order partial derivatives with respect to F and U. Kt and At are computed (see Figure 2.44 to compute the first.43) along with (2.3) form the dynamic system from which the transition path of Ct . Figure 2.

Results from a stochastic version are discussed in appendix II. in the interval [0. As the figure shows the value function is clearly concave in the capital stock.95 we can solve all the above expressions numerically for a grid of the capital stock in the interval [0.1. c.6 u 6 Moreover.5 The Dynamic Programming Solution Next. For the growth model of chapter 2.45 Figure 2. we will compare the analytical solution of chapter 2. 5]. using as example.3 with the dynamic programming solution obtained from the dynamic programming algorithm of chapter 1.4.4. 10] and the control variable. Subsequently we report only results from a deterministic version.3 we employ the following parameters α = 0. 2.34 A=5 β = 0.3. K. as concerns asset pricing log-utility preferences provide us with a very simple . in the figure 2.1.6. the value function obtained from the linear quadratic solution is shown. see Gr¨ne and Semmler (2004a).4: Value Function obtained from the Linear-quadratic Solution In addition.3. For the parameters chosen we obtain a steady state of the capital stock K = 2.07 For more details of the solution.

Ct+j 1−β = Et For further details.5: Value Function stochastic discount factor and an analytical expression for the asset price. using the dynamic programming algorithm of chapter 1.5 28 27.1) and Gr¨ne and Semmler (2004 b). 30. 9. see Cochrane (2001.6 with grid refinement is shown in figures 2.5 30 29. For U (C) = ln(C) the asset price is ∞ Pt = Et t=1 ∞ βj βj t=1 U ′ (Ct+j ) U ′ (Ct ) βCt Ct · Ct+j = .5 0 1 2 3 4 5 6 7 8 9 10 Figure 2. ch. u .5 29 28.5 and 2.46 The solution of the growth model with the above parameters.6.

07. As reported in Gr¨ne and Semmler (2004a) the dynamic programming algorithm with adaptive gridding strategy as introduced in chapter 1. 0 ≤ K ≤ 10. Moreover.6: Path of Control 10 As figures 2.47 4 3 2 1 0 0 5 Figure 2. are concave in the capital stock. as observable from figure 2. K in figure 2.6 solves the value function with high accuracy. Our purpose here is to compare the different approximate solutions to the exact solution.6 consumption is low when capital stock is low (capital stock can grow) and consumption is high when capital stock is high (capital stock will decrease)where low and high is meant in reference u to the optimal steady state capital stock K = 2.2 · 10−2 and with 2000 nodes the error shrinks to 6. Yet 7 With 100 nodes in capital stock interval the error is 3. C.5 Conclusion This chapter employs the different approximation methods to solve a prototype dynamic optimization model.6 the optimal consumption is shown to depend on the state variable K for a grid of K. there have been some difficulties when we apply the Fair-Taylor method and the method of linear-quadratic approximation using Chow’s algorithm. u .6 show the value function and the control.3 · 10−4 .5 and 2.7 2. see Gr¨ne and Semmler (2004a). As we have found. which for this model can be derived analytically by the standard recursive method.

48 when we apply the methods of log-linear approximation and linear-quadratic approximation with our suggested algorithm, we find that the approximate solutions are close to the exact solution. At the same time, we also find that the method of log-linear approximation may need an algorithm that can take over some heavy derivations that otherwise must be analytically accomplished. Therefore, our experiment in this chapter verifies our previous concerns (in Chapter 2) with regard to the accuracy and the capability of different approximation methods, including the Fair-Taylor method, the log-linear approximation method and the linear-quadratic approximation method. Although the dynamic programming approach solves the value function with higher accuracy, in the subsequent chapters, when we come to the calibration of the intertemporal decision models, we will work with the linear quadratic approximation of the Chow method since it is better applicable to empirical assessment of the models.

2.6
2.6.1

Appendix I: The Proof of Proposition 2 and 3
The Proof of Proposition 2
Kt+1 − At Ktα − Ct = 0 E Ct+1 − =0 E [At+1 − a0 − a1 At ] = 0
α−1 αβCt At+1 Kt+1

For convenience, we shall write (2.4), (2.15) and (2.3) as (2.44) (2.45) (2.46)

Applying (1.22) to the above equations, we obtain ¯ ¯ ¯¯ Kekt+1 − AK α eat +αkt + Cect = 0 ¯ ¯¯¯ E Cect+1 − αβ C AK α−1 eat+1 +ct +(α−1)kt+1 = 0 ¯ ¯ E Aeat+1 − a0 − a1 Aeat = 0 Applying (1.23), we further obtain from the above: ¯ ¯¯ ¯ K(1 + kt+1 ) − AK α (1 + at + αkt ) + C(1 + ct ) = 0 ¯ ¯ ¯ ¯ α−1 E C(1 + ct+1 ) − αβ C AK (1 + ct + at+1 + (α − 1)kt+1 ) = 0 ¯ ¯ E A(1 + at+1 ) − a0 − a1 A(1 + at ) = 0 (2.47) (2.48) (2.49)

49 which can be further written as to be ¯ ¯¯ ¯¯ ¯ Kkt+1 − AK α at − AK α αkt + Cct = 0 ¯ ¯ ¯ ¯ α−1 E Cct+1 − αβ C AK (ct + at+1 + (α − 1)kt+1 ) = 0 E [at+1 − a1 at ] = 0 (2.50) (2.51) (2.52)

Equation (2.52) indicates (2.37). Substituting it into (2.51) to express E [at+1 ] and re-arranging (2.50) and (2.51), we obtain (2.35) and (2.36) as indicated in the proposition.

2.6.2

Proof of Proposition 3

Given the conjectured solution (2.34), the transition path of kt+1 can be derived from (2.35), which can be written as kt+1 = ηka at + ηkk kt where ηka = ϕka + ϕkc ηca ηkk = ϕkk + ϕkc ηck (2.54) (2.55) (2.53)

Expressing ct+1 and ct in terms of (2.34) while recognizing that E [at+1 ] = a1 at , we obtain from (2.36): ηca a1 at + ηck kt+1 = ϕcc (ηca at + ηck kt ) + ϕca at + ϕck kt+1 which can further be written as ϕcc ηck ϕcc ηca + ϕca − ηca a1 at + kt kt+1 = ηck − ϕck ηck − ϕck

(2.56)

Comparing (2.56) to (2.53) with ηka and ηkk to be given by (2.54) and (2.55), we thus obtain ϕcc ηca + ϕca − ηca a1 = ϕka + ϕkc ηca (2.57) ηck − ϕck ϕcc ηck = ϕkk + ϕkc ηck (2.58) (ηck − ϕck ) Equation (2.58) gives rise to the following quadratic function in ηck :
2 Q2 ηck + Q1 ηck + Q0 = 0

(2.59)

with Q2 , Q1 and Q0 to be given in the proposition. Solving (2.59) for ηck , we obtain (2.39). Given ηck , ηca is resolved from (2.57), which gives rise to (2.40).

50

2.7

Appendix II: Dynamic Programming for the Stochastic Version

We here present a stochastic version of a growth model which is based on the Ramsey model of chapter 2.1 but extended to the stochastic case. A model of type goes back to Brock and Mirman (1972). Here the Ramsey 1d model ?? is extended using a second variable modelling a stochastic shock. The model is given by the discrete time equations ˜ K(t + 1) = A(t)AK(t)α − C(t) A(t + 1) = exp(ρ ln A(t) + zt ) where α and ρ are real constants and the z(t) are i.i.d. random variables with zero mean. The return function is again U (C) = ln C. In our numerical computations which follows Gr¨ne and Semmler (2004) u ˜ = 5, α = 0.34, ρ = 0.9 and β = 0.95. As we used the parameter values A the case of the Ramsey model, the exact solution is known and given by ˜ V (K, A) = B + C ln K + DA, where B= ˜ ln((1 − βα)A) +
βα 1−βα

˜ ln(βαA)

1−β

˜ , C=

α 1 , D= 1 − αβ (1 − αβ)(1 − ρβ)

We have computed the solution to this problem on the domain Ω = [0.1, 10] × [−0.32, 0.32]. The integral over the Gaussian variable z was approximated by a trapezoidal rule with 11 discrete values equidistributed in the interval [−0.032, 0.032] which ensures ϕ(x, u, z) ∈ Ω for x ∈ Ω and suitable u ∈ U = [0.5, 10.5]. For evaluating the maximum in T the set U was discretized with 161 points. Table 2.2 shows the results of the resulting adaptive gridding scheme applied with refinement threshold θ = 0.1 and coarsening tolerance ctol = 0.001. Figure 2.7 shows the resulting optimal value function and adapted grid.

51 # nodes 49 56 65 109 154 327 889 2977 Error 1.4 · 10 0 0.5 · 10−1 2.9 · 10−1 1.3 · 10−1 5.5 · 10−2 2.2 · 10−2 9.6 · 10−3 4.3 · 10−3 estimated Error 1.6 · 10 1 6.9 · 10 0 3.4 · 10 0 1.6 · 10 0 6.8 · 10−1 2.4 · 10−1 7.3 · 10−2 3.2 · 10−2

Table 2.2: Number of nodes and errors for our Example

34 33 32 31 30 29 28 27 26 25 24

0.3

0.2

0.1

0.4 0.3 0.2 0.1 0 -0.1 -0.2 -0.3 -0.4

0

-0.1

-0.2

0

1

2

3

4

5

6

7

8

9

10
-0.3 1 2 3 4 5 6 7 8 9 10

Figure 2.7: Approximated value function and final adaptive grid for our Example In Santos and Vigo–Aguiar (1995) on equidistant grids with 143 × 9 = 1287 and 500 × 33 = 16500 nodes, errors of 2.1 · 10−1 and 1.48 · 10−2 , respectively, were reported. In our adaptive iteration these accuracies could be obtained with 109 and 889 nodes, respectively; thus we obtain a reduction in the number of nodes of more than 90% in the first and almost 95% in the second case, even though the anisotropy of the value function was already taken into account in these equidistant grids. Here again in our stochastic version of the growth model, a steep value function can best be approximated with grid refinement.

al (1988a. The task of estimation has often been ignored in the current empirical studies of a stochastic dynamic optimization model when a technique. e. e.. are also employed for this comparison. Long and Plosser (1983). one then can evaluate the model to see how the model’s prediction can match the empirical data. See King and Plosser (1994) and Simkins (1994) among others. 1988b).. King et.1 Typically the parameters employed for the model’s simulation are selected from independent sources. Prescott (1986). is employed. This approach has been criticized because the structural parameters are assumed to be given rather 1 See. Another necessary step is to estimate the model with some econometric techniques.1 Introduction Solving a stochastic dynamic optimization model with aproximation methods or dynamic programming is only a first step towards the empirical assessment of such a model.Chapter 3 The Estimation and Evaluation of the Stochastic Dynamic Model 3. other statistics (in addition to the first and second moments) proposed by early business cycle literature. Recently. 52 . Given the estimation. Hansen (1985. such as different microeconomic studies. often referred to as calibration. 1988).g. Burns and Mitchell (1946) and Adelman and Adelman (1959). Kydland and Prescott (1982). To undertake this step certain approximation methods are more useful than others. Plosser (1989) among many others. The calibration approach compares the moment statistics (usually the second moments) of major macroeconomic time series to those obtained from simulating the model.g.

We thus also need to develope an estimation strategy that can be used to recursively search the parameter space in order to obtain the optimum. Due to the complexity of stochastic dynamic models. We. As one can find there. a proper application of calibration requires us to define the model’s structural parameters correctly. we compare these moment statistics to the sample moments computed from the data. we propose a strategy to implement these two estimations for estimating a dynamic optimization model. solved through some approximation method. Both estimation methods define an objective function to be optimized. a sketch of the computer program for our estimation strategy will be described in the appendix of this chapter. introduce a global optimization algorithm. 3 In this chapter. Chow and Kwan (1998). Section 2 will first introduce the calibration technique. e. which is used for executing the suggested strategy of estimation. Unfortunately. Chow (1993). see Singleton (1988) and Eichenbaum (1991).g. we do not find many econometric studies on how to estimate a stochastic dynamic optimization model except for a few attempts that have been undertaken for some simple cases. and the Maximum Likelihood (ML) estimation. such as the RBC. This approach uses the Monte Carlo method to generate the distribution of some moment statistics implied by the model. called simulated annealing. the GMM and the ML estimations. we shall discuss two estimation methods: the Generalized Method of Moment (GMM) estimation. the calibration may include the following steps: For an early critique of the parameter selection employed in calibration technique. (1993). subsequently in Section 5. In Section 4. This estimation requires a global optimization algorithm. 3 See. We then in Section 3 consider two possible estimation methods. 2 . For an empirical assessment of the model. Finally. Burnside et al.2 Although this may not create severe difficulties for some currently used stochastic dynamic optimization models. which has been used in the current empirical study of stochastic dynamic models. it is often unclear how the parameters to be estimated are related to the model’s restrictions and hence to the objective function in the estimation.53 than estimated.2 Calibration The current empirical studies of a stochastic dynamic model often rely on calibration. 3. therefore.. Christiano and Eichenbaum (1992). Generally. we believe the problem remains for more elabrated models in the future development of macroeconomic research.

• Step 7: Compute the same moment statistics from the data sample and check whether it falls within the proper range of the distribution for the moment statistics generated from the Monte Carlo simulation of the model. reflected by some second moment statistics. We denote this number by T . the control equation (1. Here N should be sufficiently large. This can be regarded as a one time simulation. Often the HP-filter (see Hodrick and Prescott 1980) is used for this detrending. the standard deviation of ǫt as in equation (1. should depend on the model specification and the structural parameters including σǫ .2).3) to compute the solution of the model iteratively for T times. detrend the simulated series generated in Step 3 to remove its time trend. the simulated series should also be cyclically and stochastically fluctuating. These distributions are mainly represented by their means and their standard deviations. This number might be the same as the number of observations in the sample. • Step 3: Select the initial condition (x0 .3). z0 ) and use the state equation (1.4) and the exogenuous equation (1. which should also be defined properly. • Step 6: Repeat Step 3 to 5 N times. These parameters may include preference and technology parameters and those that describe the distribution of the random variables in the model. Therefore. • Step 4: If necessary. if we specify the model correctly with the structural parameters. such as σǫ . including σǫ . Due to the stochastic innovation of ǫt . The extent and the way of fluctuations. compute the distributions of these moment statistics. the above comparison of moment statistics of the model and sample should give us a basis to test whether the model can explain the actual business cycles represented by the data. a detrended series generated in Step 4. These moment statistics are mainly those of the second moments such as variances and covariances. • Step 5: Compute the moment statistics of interest using. . Then after these N times repeated runs. if necessary.54 • Step 1: Select the model’s structural parameters. the standard deviation of ǫt . • Step 2: Select the number of times for which the iteration is conducted in the simulation of the model.

Hansen (1982) suggested the following distance function: J(ψ. yT ) are as close as possible to the population moments reflected by (3. ψ is a l-dimensional vector of unknown parameters that needs to be estimated. one needs to define a distance function by which that closeness can be judged.1) where yt is a k-dimensional vector of observed random variables at date t. To achieve this. a consistent estimator of the variance-covariance matrix of ψ is given by . The choice of this weighting matrix defines a metric that makes the distance function a scalar. The GMM estimator of ψ is the value of ψ that minimizes (3. We consider two possible estimation methods: the GMM estimation and the ML estimation. h(·) is a vector-valued function mapping from Rk ×Rl into Rm . representing the population moments established by the theoretical model: E [h(yt . Let yT contain all the observations of the k variables in the sample with size T . The idea behind the GMM estimation is to choose an estimator of ψ. 3. symmetric. ψ)] = 0 (3. ψ) t=1 (3.3). yT ) = gT (ψ.3) where WT .1 The Generalized Method of Moments (GMM) Estimation The GMM estimation starts with a set of orthogonal conditions.1). yT ) WT gT (ψ. such that the sample moments gT (ψ. Also from the results established in Hansen (1982).2) Notice that gT (·) is also a vector-valued function with m-dimensions. This indicates that we need to estimate the model before the calibration. positive definite and depends only on the sample observation yT . yT ) ′ (3. called the weighting matrix. is m × m.3.55 3. The sample average of h(·) can then be written as 1 gT (ψ.3 The Estimation Methods The application of the calibration requires techniques to select the structural parameters accurately. Hansen (1982) proves that under certain assumption such a GMM estimator is consistent and asymptotically normal. denoted ψ. yT ) = T T h(yt .

X is T × k and E is T × m. Second. yt in this case is a m ×1 vector of dependent variables. Assuming normal and serially uncorrelated ǫt with the covariance matrix Σ. p. Γ is a m × k matrix. Suppose there are T observations. In this book. the estimation with the GMM method usually requires two steps as suggested by Hansen and Singleton (1982).6) where B is a m × m matrix. where it is suggested that d −1 WT = Ω0 + j=1 ∗ ∗ with w(j. we will adopt the method by Newey and West (1987). + n log |B| − log |Σ| 2 . First.6) can be re-written as BY ′ + ΓX ′ = E ′ (3. d)(Ωj + Ωj ). the model to be estimated here is the same as in the GMM estimation represented by (3.56 1 ′ (DT )−1 WT (DT )−1 . T V ar(ψ) = ′ (3.5) 3. except here the functions are linear. ψ ) and d to be a suitable function of T. Then the above (3. d) ≡ 1 − j/(1 + d). one chooses a sub-optimal weighting matrix to minimize (3.3. xt is a k ×1 vector of explanatory variables and ǫt is a m ×1 vector of disturbance terms. 1983. Note that if we take the expectations on both sides.8) log L(ψ) = const. Therefore.7) where Y is T × m. Here ψ ∗ is required to be a consistent estimator of ψ. proposed by Chow (1993) for estimating a dynamic optimization model. Ωj ≡ (1/T ) T t=j+1 g(yt .170-171) as n (3. Non-linearity may pose a problem to derive the log-likelihood function for (3. one then uses the consistent estimator obtained in the first step to calculate the optimum WT through which ( 3. starts with an econometric model such as follows: Byt + Γxt = ǫt (3.6).3) and hence obtains a consistent estimator ψ ∗ . w(j. the concentrated log-likelihood function can be derived (see Chow. There is a great flexibility in the choice of WT for constructing a consistent and asymptotically normal GMM estimator.2 The Maximum Likelihood (ML) Estimation The ML estimation.4) where DT = ∂gT (ψ)/∂ψ .3) is re-minimized. ψ )g(yt−j . ′ (3.1).

4 The parameters in the state equation could be estimated independently.57 with the ML estimator of Σ given by Σ = n−1 (BY ′ + ΓX ′ )(Y B ′ + XΓ′ ) (3.8) may only lead to a local optimum. As discussed in the previous chapters often a numerical procedure is required.9) The ML estimator of ψ is the one that maximizes logL(ψ) in (3. Consequently.3) and maximize (3. which is quite possible in general.143): ∂ 2 L(ψ) E(ψ − ψ)(ψ − ψ) ∼ − = ∂ψ∂ψ ′ ′ −1 .8). For an approximation method. The first problem that we need to discuss are the restrictions on the estimation. Start with an initial guess on ψ and use an appropriate method of dynamic optimization to derive the decision rules. such as (3.8) with respect to the parameters. the linearization of the system at its steady states is needed. The derivation of the control equations in approximate form from a dynamic optimization problem is a complicated process. One proper restriction is the state equation and a first-order condition derived either as Euler equation or from the Lagrangian. such as the Lagrangian multiplier. using firstorder conditions to minimize (3.3) and (3.8). . using GMM or ML method to estimate a stochastic dynamic model is rather complicated.3) or maximizing (3. most firstorder conditions are extremely complicated and may include some auxiliary variables. The linearization and the derivation of the control equation. Therefore one is usually incapable of deriving analytically the first-order conditions of minimizing (3.10) 3. The asymptotic standard deviation of estimated parameters can be inferred from the following variance-covariance matrix of ψ (see Hamilton 1994 p. since the system to be estimated is often nonlinear in parameters. which are not observable. make it often unclear how the parameters to be estimated are related to the model’s restrictions and hence to the objective function in estimation. This seems to suggest that the restrictions for a stochastic dynamic optimization model should typically be represented by the state equation and the control equation derived from the dynamic optimization problem. searching a parameter space becomes the only possible way to find the optimum. (3. Our search process includes the following recursive steps: • Step 1.4 The Estimation Strategy In practice. Furthermore.4 Yet. possibly through an iterative procedure.

Use the state equation and the derived control equation to calculate the value of the objective function. For this test. Its value f 0 = f (x0 ) 5 For conventional optimization algorithm. • Step 3. which shall be discussed in the next section. (1986) and Corana et al. Bohachevsky et al. The algorithm operates through an iterative random search for the optimal variables of an objective function within an appropriate space. Apply some optimization algorithm to change the initial guess on ψ and start again with step one. (1992) compute a test function with two optima provided by Judge et al. (1987). We thus believe that the algorithm may serve our purpose well.5 A Global Optimization Algorithm: The Simulated Annealing The idea of simulated annealing has been initially proposed by Metropolis et al. The step size is narrowed so that the random search is confined to an ever smaller region when the global optimum is approached. It moves uphill and downhill with a varying step size to escape local optima. Let f (x). p. ch. Goffe et al. they find that out of 100 times conventional algorithms are successful 52-60 times to reach the global optimum while simulated annealing is 100 percent efficient. Using this strategy to estimate a stochastic dynamic model. 5). The space S should be defined from the economic viewpoint and by computational convenience. see appendix B of Judge et al (1985) and Hamilton (1994. . (1953) and later developed by Vanderbilt and Louie (1984). (1992).956-7). one needs to employ an optimization algorithm to search the parameter space recursively. be a function that is to be maximized and x ∈ S.58 • Step 2. The simulated annealing algorithm has been tested by Goffe et al. We thus need to employ a global optimization algorithm to execute the estimation process as described above. (1985. for example. One possible candidates is the simulated annealing. 3. where S is the parameter space with the dimensions equal to the number of structural parameters that need to be estimated. The conventional optimization algorithms. The algorithm starts with an initial parameter vector x0 . may not serve our purpose well due to the possible existence of multiple local optima. By comparing it with conventional algorithms.5 such as Newton-Raphson or related methods.

the initial variable x0 is replaced by the updated 6 7 motivated by thermodynamics. The new function value f ′ = f (x′ ) is then computed. The new variable. the Metropolis criteria. The new temperature (denoted as T ′ ) will be T ′ = RT T 0 (3.  vi 1 + 0.11) where r is a uniformly distributed random number in [−1. If not.14) with 0 < RT < 1. 9 RT is suggested to be 0. a uniformly distributed random number from [0.12) This p is compared to p′ . If f ′ is larger than f 0 . x′ is accepted.11)) should be undertaken and repeated NS times7 for each i. 1].4 ci (0.4 − ni /NS ) if ni < 0. repeat (3.6 denoted as p. (1987). x′ . If it is larger than fopt .11) until x′ is in S.4 ci (ni /NS − 0. Again after another NS times of such repetitions. Other initial conditions include the initial step-length (a vector with the same dimension as x) denoted by v 0 and an initial temperature (a scalar) denoted by T 0 .4NS .59 is calculated and recorded. The above steps (starting with (3. These adjustments as to each vi should be performed NT times.13) vi = v 1 + 0. both xopt and fopt are replaced by x′ and f ′ . is chosen by varying the ith element of x0 such that 0 x′i = x0 + r · vi i (3.6NS . one goes back to (3. we set the optimum x and f (x) – denoted by xopt and fopt respectively – to x0 and f (x0 ). Besides. al. where p = e(f −f )/T ′ 0 (3.6) −1 ′ 1 0 (3. If p is greater than p′ . is used to decide on acceptance. 1].11). al. .9 With this new temperature T ′ .  i 0 vi if 0. (1987) 8 NT is suggested to be 100 by Corona et.85 by Corana et. x′ is accepted. Subsequently. Subsequently. (1987). But this time.11) and hence starts a new round of iteration. we should go back again to (3. the step-length is adjusted.4NS ≤ ni ≤ 0. The ith ′ element of the new step-length vector (denoted as vi ) depends on its number of acceptances (denoted as ni ) in its last NS times of the above repetition and is given by  0 1 if ni > 0. If x′ is not in S. f ′ should also be compared to the updated fopt . (1987) for all i.8 We then come to adjust the temperature. the step-length will be re-adjusted. With the new selected step-length vector. NS is suggested to be 20 as by Corana et al.6NS where ci is suggested to be 2 as by Corona et al.

We have introduced both the General Method of Moments (GMM) as well as the Maximum Likelihood (ML) estimations as strategies to match the dynamic decision model with time series data.13). whether the new selected step-length is enlarged or not depends on the corresponding number of acceptances. (1985.6 Conclusions In this chapter. Thus a convergence will ultimately be achieved with the continuous reduction of the temperature. By comparing it with conventional algorithms. For convergence. For this test. 3. they find that out of 100 times conventional algorithms are successful 52-60 times to reach the global optimum while simulated annealing is 100 percent efficient. have presented an estimation strategy. often a global optimization algorithm. based on some approximation methods. p. we shall demonstrate the effectiveness of this estimation strategy by estimating a benchmark RBC with simulated data. The simulated annealing algorithm described above has been tested by Goffe et al. The entire program consists of three parts. to estimate stochastic dynamic models employing time series data. The number of acceptance ni is not only determined by whether the new selected xi increases the value of objective function. which has often been employed in the assessment of a stochastic dynamic model. for example the simulated annealing.11) is required to be very small. 3. We then.60 xopt . but also by the Metropolis criteria which itself depends on the temperature. 956-7). (1992). the temperature will be reduced further after one additional NT times of adjusting the step-length of each i. In (3. The first part regards some necessary steps in the . In the next chapter. We thus believe that the algorithm may serve our purpose well. is needed to be employed to detect the correct parameters. the step-length in (3.7 Appendix: A Sketch of the Computer Program for Estimation The algorithm we describe here is written in GAUSS. Goffe et al. we have first introduced the calibration method. (1992) compute a test function with two optima provided by Judge et al. The algorithm will end by comparing the value of fopt for the last Nǫ times (suggested to be 4) when the temperature is attempted to be re-adjusted. Although both strategies permit to estimate the parameters involved. Of course.

n = 0. {Set initial conditions for simulated annealing} DO UNTIL convergence. ELSE. ϕ′ = {as if current ϕ except the ith element to be ϕ′i }. t = t + 1. of acceptances*/ DO Ns times. IF f ′ > fopt . The second part is the procedure that calculates the value of objective function for the estimation. while the activation of this procedure generates the value of objective function. The third part. We denote this procedure as OBJF(ϕ). f = f ′. ni = ni + 1. we shall only describe the simulated annealing. CONTINUE. is the simulated annealing. CONTINUE. Of these three parts. /*p is the metropolis criteria*/ ′ ′ IF f > f or p > p ϕ = ϕ′ . ELSE. /*f is the value of the objective function to be minimized*/ p = exp[(f ′ − f )/T ]. CONTINUE. DO UNTIL i = the dimensions of ϕ. ELSE. . HERE: ϕi = ϕi + rvi . The input of this procedure are the structural parameters. ENDIF.61 data processing after loading the original data. ENDIF. DO NT times. GOTO HERE. which is also the main part of this program. i = 0. f =OBJF(ϕ′ ). ENDIF. /*set the vector for recording No. ϕopt = ϕ′ . fopt = f ′ . i = i + 1. IF ϕ′ is not in S.

i = 0. ENDO. ENDO. IF change of fopt < ε in last Nε times. ENDIF.62 ENDO. according to ni } v = v′. ENDO. REPORT ϕopt and fopt . BREAK ELSE T = RT T . v ′ . CONTINUE. . {define the new step-size.

Part II The Standard Stochastic Dynamic Optimization Model 63 .

A model of this kind will repeatedly be used in the subsequent chapters in various ways. We then present the standard RBC model as formulated in King et al. Kydland and Prescott (1982) and Long and Plosser (1983) first strikingly illustrate this idea in a simple representative agent optimization model with market clearing. To some extent. The central argument by Real Business Cycel theorists is that economic fluctuations are caused primarily by real factors. as mentioned above. rational expectation and no monetary factors. the RBC analysis can also be regarded as a general equilibrium approach to macrodynamics. the Real Business Cycle approach has become a new orthodoxy of macroeconomics. (1989). This chapter introduces the RBC model by first describing its microeconomic foundation as set out by Stockey et al. Its concepts and methods have diffused into mainstream macroeconomics. Therefore. The Real Business Cycle analysis now occupies a major position in the curriculum of many graduate programs. Stockey.Chapter 4 Real Business Cycles: Theory and the Solutions 4.1 Introduction The Real Business Cycle model as a prototype of a stochastic dynamic macromodel has influenced quantitative macromodeling enormously in the last two decades. The 64 . The criticism of the performance of macroeconometric models of Keynesian type in the 1970s and the associated rational expectation revolution pioneered by Lucas (1976) initiated this development. (1988). Lucas and Prescott (1989) further illustrate that such type of model could be viewed as an Arrow-Debreu economy so that the model can be established on a solid micro-foundation with many (identical) agents.

Therefore. the trading process is assumed to be “once-and-for-all”. Finally. all producing a common output with the same constant returns to scale technology. After this market has closed.. agents simply deliver the quantities of factors and goods they have contracted to sell and receive those they have contracted to buy.65 model will then be solved after being parameterized by those standard values of the model’s structural parameters. With this identical assumption. 1 See Arrow and Debreu (1954) and Debreu (1959).23). as in Arrow-Debreu economy. p. 1. It is assumed that the household owns all factors of production and all shares of the firm. (1989. so all prices and quantities are determined simultaneously. be interpreted as predictions about the behavior of market economies. Second. No further trades are negotiated later. First. the resource allocation problem can be viewed as an optimization problem of a representative agent. The revenue from selling factors can only be used to buy the goods produced by the firm either for consuming or accumulating as capital. T . sells the output and transfers any profit back to the household. .”(Stokey et al. in each period the household sells factor services to the firm. All trading takes place at that time. all with the same preference. In each period it simply hires capital and labor on a rental basis to produce output. under appropriate conditions.. p. assume that all transactions take place in a single once-and-for-all market that meets in period 0. 4. 1989. and firms are also identical. The following citation is again from Stokey et al.1 several assumptions should be made for an hypothetical economy. 22) To establish the connection to the competitive equilibrium of the ArrowDebreu economy. The representative firm owns nothing. in periods t = 0. It is argued that “the solutions to planning problems of this type can. .. the households in the economy are identical. The third assumption regards the ownership.2 The Microfoundation The standard Real Business Cycle model assumes a representative agent who solves a resource allocation problem over an infinite time horizon via dynamic optimization.

wt . cd and id are the demands for consumption and investment.2) (4.1 The Decision of the Household At the beginning of the period 0 when the market is open. Explaining πt in (4. The equality holds due to the assumption Uc > 0.3) Above δ is the depreciation rate. we shall consider how the representative household calculates πt .6) represent the standard RBC model.1) and (4.5) and then substituting from (4. kt+1 t=0 .2) in terms of (4.8) (4. nt . f (·) is the production function and At is the expected technology shock. the solution of this model is the sequence ∞ s s of plans cd .4) can be rewritten as s s ˆ (4. kt t=0 that maximizes t t t the discounted utility: ∞ max E0 t=0 d β t U (ct . rt }t=0 .1) subject to s pt (cd + id ) = pt (rt kt + wt ns ) + πt t t t s s d kt+1 = (1 − δ)kt + it (4. ns .2) can be regarded as a budget constraint. (4. At ) − wt ns − rt kt t t ˆ Above. β is the discounted factor. although it only specifies one side of the markets: output demand and input supply. labor and capital expected by the household at given price sequence {pt .3) to eliminate id . ns . At ) − cd t t (4. At ) t s s ˆ id = f (kt . πt is the expected dividend. wt .66 4. ns . id . Next. s Given the initial capital stock k0 .2. we obtain t s s s ˆ kt+1 = (1 − δ)kt + f (kt . and t t s s nt and kt are the supplies of labor and capital stock. where kt is implied by (4. nt and kt are the realized output.5) πt = pt f (kt . Thus assuming that t=0 the household knows the production function while expecting that the mar∞ ket will be cleared at the given price sequence {pt . At ) t s ˆ ns = Gn (kt . At ) − cd t t (4.6). Note that (4. rt }t=0 at which he (or she) will choose the ses ∞ quence of output demand and input supply cd . ns . wt . the household is ∞ given the price sequence {pt . ns ) t (4.9) . It is reasonable to assume that πt = pt (yt − wt nt − rt kt ) (4. id . rt }∞ . and t t t s ˆ cd = Gc (kt .4) where yt .7) (4.6) Note that (4.

wt . At ) t (4. nd . 1.12).15).6) . A) ˆ nd = n(rt . p25): s d max pt (yt − rt kt − wt nd ) t subject to d s ˆ yt = f (kt . d s kt = kt (4.2 The Decision of the Firm Given the same price sequence {pt . wt . one can easily prove the existence of {p∗ . rt }∞ = {p∗ . At ) t d ˆ d wt = fn (kt . its optimization problem is equivalent to a series of one-period maximization (Stokey et al.e. nt . A) t (4.10) where t = 0. The solution to this optimization problem satisfies: d ˆ rt = fk (kt . . ∞.14) s yt nd = ns t t cd t + id t = (4.13) (4.. wt . 1.3 The Competitive Equilibrium and the Walrasian Auctioneer ∗ ∗ A competitive equilibrium can be described as a sequence of prices {p∗ . since the firm simply rents capital and hires labor on a period-by-period basis. wt . Howt ever. wt . nd . rt }∞ t t=0 that satisfies the equilibrium condition (4.. rt }∞ t t=0 t=0 ∗ ∗ Using equation (4. 2. 2. .12) 4.2.. kt t=0 . nd . wt ..11) (4.(4. t = 0. . 1989.13)-(4. At ) This first-order condition allow us to derive the following equations of input d demands kt and nd : t d ˆ kt = k(rt .. wt ..2. rt }∞ . the problem faced by the representative s d ∞ firm is to choose input demands and output supplies yt . The economy is at the competitive equilibrium if ∗ ∗ {pt .15) for all t’s. and also the sequence of ext=0 ˆ t=0 pected technology shocks {At }∞ . ∞.. rt }∞ t t=0 at which the two market forces (demand and supply) are equalized in all these three markets.67 4. i.

Specifically. however.. . . one for each period. wt . t=0 and the prices are all at their equilibrium. are contingent on the realization of the shocks z1 . Implicitly.2. this is not a sequence of numbers but a sequence of contingency plans. consumption ct . then. Their approach is to use the so-called “contingency plan”. rt }∞ t t=0 ˆ t=0 depends on the expected technology shock {At }∞ . z2 .2. zt . As written by Stokey et al..often named as tˆtonnement process as in Walrasian economics . Technically. 4. This indeed creates a problem how to express the equilibrium prices and the equilibrium demand and supply which are supposed to be made at the beginning of period 0 when the technology shock from period 1 onward are all unobserved.is a a commen solution to the adjsutment problem within the neoclassical general equilibrium framework. who adjust the price towards the equilibrium. This sequence of realization is information that is available when the decision is being carried out but is unknown in period 0 when the decision is being made. The Real Business Cycle theorists circumvent this problem skillfully and ingeniously. Thus the sequence of equilibrium prices and the sequence of equilibrium demand and supply are all contingent on the realization of the shock regardless that the corresponding decisions are all made at the beginning of period 0. This adjsutment process . it is assumed that there exists an auctioneer in the market.. the dynamics of our hypothetic economy can be fully described by the following equations regarding the . 4.. 2..(1989.68 The real business cycles literature usually does not explain how the equilibrium is achieved.. and end-of-period capital kt+1 in each period t = 1. p17): In the stochastic case...5 The Dynamics Assmue that the decisions are all contingent on the future shock {At }∞ .4 The Contingency Plan ∗ ∗ It is not difficult to find that the sequence of equilibrium prices {p∗ . the planner chooses among sequence of functions.

output. for all major three markets: output. (4.16) (4. At ) − ct (4. At ) yt − c t (1 − δ)kt + f (kt . the output demand and input supply.20) is not testable with empirical data. not only because we do not specify the stochastic process of {At }∞ . This empirically oriented formulation will be repeatedly used in the subequent chapters of this volumn.69 realized consumption. Yt for aggregate output and Ct for aggregate consumption.18) (4.(4. et al. At is the temporary shock in technology and Xt the permanent .17) (4. we employ here the specifications of a model as formulated by King.1 The Standard RBC Model The Model Structure The hypothetical economy we have presented in the last section is only for explaining the theory (from microeconomic point of view) behind the standard RBC economy. The capital stock in the economy follow the transition law: Kt+1 = (1 − δ)Kt + Yt − Ct . the dynamics of the economy is reflected by only the household behavior. 4. The model specified in (4. For an empirically t=0 testable standard RBC model.21) where δ is the depreciation rate. t=0 This indeed provides another important property of the RBC economy.22) where Nt is per capita working hours. nt . At ) f (kt . (1988). nt .3. employment.20) given the initial condition k0 and the sequence of technology shock{At }∞ . α is the share of labor in the production function. At ) Gn (kt . capital and labor markets. Let Kt denote for the aggregate capital stock. The decision of the firm does not have any impact! This is certainly due to the equilibrium feature of the model specification. which concerns only one side of the market forces. demand and supply.16) . Although the model specifies the decision behaviors for both household and the firm and therefore the two market forces. Assume that the aggregate production function take the form: Yt = At Kt1−α (Nt Xt )α (4.19) (4. investment and capital stock: ct nt yt it kt+1 = = = = = Gc (kt . but also we do not introduce the growth factor.3 4.

as pointed out by Hansen (1985).3Nt /N with N to be the sample mean of Nt . but also the growth in productivity. Note that there is no possibility to derive the exact solution with the standard recursive method.23). (4. Apparently.23) where by definition. 1+γ (4. The sample mean of nt is equal to 30 %. To transform the model into a stationary formulation. the Euler equations and the equations from the Lagrangian.25) with ǫt to be an i. is the average percentage of hours attributed to work.22)): kt+1 = 1 1−α (1 − δ)kt + At kt (nt N /0. The representative agent in the economy is assumed to make the decision sequence {ct }∞ and {nt }∞ so as to t=0 t=0 ∞ max E0 t=0 β t [log ct + θ log(1 − nt )] . we divide both sides of equation (4. We therefore have to rely on an approximate solution.2 The First-Order Conditions As we have discussed in the previous chapters. The Euler equation is not used in our suggested solution method.70 shock that follows a growth rate γ. (4. innovation.3. which may follow an AR(1) process: At+1 = a0 + a1 At + ǫt+1 . which. we shall first derive the first-order conditions. our first task is to transform the model into a setting that the state variable kt does not appear in F (·) as we have discussed in Chapter 1 and 2. We nevertheless still present it here as an exercise and demonstrate that the two first-order conditions are virtually equivalent. The exogenuous variable in this model is the temporary shock At . ct ≡ Ct /Xt and nt ≡ 0.i. 4. This can be done by assuming kt+1 (instead of ct ) along .21) by Xt (when Yt is expressed by (4. kt ≡ Kt /Xt . For this.d. the model is nonstationary due to Xt . Note that here Xt includes not only the growth in labor force. there are two types of firstorder conditions.3)α − ct . The Euler equation To derive the Euler equation.24) subject to the state equation (4. Note that nt is often regarded to be the normalized hours.

kt .29) (4.3)α .27) Given such an objective function. kt . nt+1 . kt . At ). yt is the stationary output via the following equation: 1−α yt = At kt (nt N /0. nt . At+1 ) = 0.nt (4. At+1 ) = . At+1 ) = 0.32) ∂U ∂U (kt+1 . nt . At ) ∂kt ∂kt Using (4. ∂nt nt c t 1 − n t . ∂kt+1 kt+1 ct+1 ∂U αyt θ (kt+1 . At ) + βE [V (kt+1 . kt+1 . ∂U 1+γ (kt+1 . nt . where U (kt+1 . (4. At ) = (kt+1 . kt . In this case. At+1 ) ∂kt+1 (4. At ) + βE (kt+1 .23) can simply be ignored in deriving the first-order condition. At ) + βE (kt+2 . kt .71 with nt as model’s decision variables. nt . The Bellman equation in this case can be written as V (kt .31) to express ∂V (kt+1 . At+1 )] . the objective function takes the form: ∞ max E0 t=0 β t U (kt+1 . nt . nt . At ) = 0. the state equation (4. kt+1 .23) and (4. nt . ∂nt Meanwhile the application of Benveniste-Scheinkman formula gives ∂V ∂U (kt .26) Note that here we have used (4. kt . nt . (4. nt . kt . At ) = − .23) to express ct in the utility function. nt+1 . kt+1 .28) The necessary condition for maximizing the right side of Bellman equation (4. At ) = − . At ) = log [(1 − δ)kt + yt − (1 + γ)kt+1 ] +θ log(1 − nt ).26). At ) = max U (kt+1 .28) is given by ∂U ∂V (kt+1 . ∂kt+1 ∂kt+1 ∂U (kt+1 . kt .30) (4. we obtain (4.31) in (4. ∂kt+1 ct (1 − δ)kt+1 + (1 − α)yt+1 ∂U (kt+2 . kt .29). ∂kt+1 ∂kt+1 From equation (4.

nt .38) with yt again to be given by (4.36) (4.33) . 1 − nt (1 + γ)nt β (1 − α)yt Et λt+1 (1 − δ) + = λt .72 Substituting the above expressions into (4. Next we try to demonstrate that the two first-order conditions: (4.27).37) λt = This further indicates that Et λt+1 = (1 − δ)kt+1 + (1 − α)yt+1 kt+1 ct+1 (4.34) and (4.(4. one obtains the following first-order condition: β 1 − Et λt+1 = 0.35) . expressing [β/(1 + γ)]Et λt+1 in terms of 1/ct (which is implied by (4.33) (4. First. we obtain from (4.34) The First-Order Condition Derived from the Lagrangian Next.39) (1 − δ)kt + (1 − α)yt kt ct . 1+γ kt 1 [(1 − δ)kt + yt − ct ] .(4.35)).3)α + ct 1+γ Setting zero the derivatives of L with respect to ct . nt c t 1 − n t (4.38) are virtually equivalent. ct kt+1 ct+1 αyt θ − = 0.35) (4. ct 1 + γ αβyt −θ + Et λt+1 = 0.30). we establish the following Euler equations: − (1 − δ)kt+1 + (1 − α)yt+1 1+γ + βE = 0. Define the Lagrangian: ∞ L = t=0 ∞ β t [log(ct ) + θ log(1 − nt )] − Et β t+1 λt+1 kt+1 − t=0 1 1−α (1 − δ)kt − At kt (nt N /0. This can be done as follows.37) (4. kt+1 = 1+γ (4.32) and (4. we turn to derive the first-order condition from the Lagrangian. kt and λt .

1/α ¯ k i = A φ−1/α ni N /0. nb . λi = (1 + γ)/β¯i c y i = φk i .2 2 We however leave this exercise to the readers.27). yb . Equation (4. when evaluated in terms of their certainty equivalence forms. denoted as (¯b . we obtain the first Euler equation (4. The steady state At is simply determined from (4. expressing [β/(1 + γ)]Et λt+1 again in terms of 1/ct . Second. = A/(δ + γ) ¯ = (δ + γ)kb .(4. ki . 4.34). (4. In c ¯ ¯ ¯ ¯ c ¯ ¯ ¯ ¯ particular.40) Note that we have used the first-order condition from the Lagrangian to derive the above two steady states. where φ= (1 + γ) − β(1 − δ) β(1 − α) = 0.35).38) along with (4. kb . λb ).3 ci = (φ − δ − γ)k i . and substituting it into (4. = ∞. λi ). .39) into (4. = 1.3. ni .35) . 1/α ¯ N /0.3 . we then verify the second Euler equation (4.33).3 The Steady States Next we try to derive the corresponding steady states. The other steady states are given by the following proposition: Proposition 4 Assume At has a steady state A. cb ¯ nb ¯ λb ¯ kb yb ¯ and ni = αφ/ [(α + θ)φ − (δ + γ)θ] .73 Substituting (4. yi . and the other is interior. determines at least two steady states: one is on boundary. we expect that the same steady states can also be derived from the Euler equations. Since the two first-order conditions are virtually equivalent.36).25). denoted as (¯i .

nt and At .74 4.025 θ 2 ¯ N 480 a0 0. Table 4. In Figure 4.41) and (4.58 γ 0.1 and Figure 4.9884 δ 0. (1988) except the last three parameters.42). which can often be found in the RBC literature.25). Assume that decision rule take the form ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 (4.3)α − c 1+γ U (c. ct . they are essentially the same as the parameters choosed by King et al. 2) in the decision rule as expressed in (4. c.1: Parameterizing the Standard RBC Model α 0. These are reported in Table 4. Executing this procedure will allow us to compute the undetermined coefficients Gij and gi (i. one for the deterministic and the other for the stochastic case.0045 β 0. We shall remark that these parameters are close to the standard parameters.and the second-order partial derivatives with respect to F and U where F (k. which is related to the stochastic equation (4.41) (4. n) = log(c) + θ log(1 − n) All these partial derivatives along with the steady states can be used as the inputs in the GAUSS procedure provided in Appendix II of Chapter 1. j = 1. we illustrate the solution paths.42) Our first step is to compute the first. we shall first specify the values of the structural parameters defined in the model. 3 Indeed. . n.3 More detailed discussion regarding the parameter selection and estimation will be provided in the next chapter.0333 a1 0.1.2 .0189 The solution method that we shall employ is the method of linear-quadratic approximation with our suggested algorithm as discussed in Chapter 1.4 Solving Standard Model with Standard Parameters To obtain the solution path of standard RBC model. A) = 1 (1 − δ)k + Ak 1−α (nN /0.9811 σǫ 0. for the variables kt .

2: The Stochastic Solution to the Benchmark RBC Model for the Standard Parameters Elsewhere (see Gong and Semmler 2001).75 Figure 4. we have compared these solu- .1: The Deterministic Solution to the Benchmark RBC Model for the Standard Parameters Figure 4.

45) 4 5 See Bennet and Farmer (2000) and Kim (2004).43) 1−σ are used. by setting σ = 1 we obtain simplified preferences in log utility and leisure. 4. . We find that the two solutions are surprisingly close to the extent that one can hardly observe the differences. The recent models are more demanding in terms of solution and estimation methods. in chapter 7 we will return to these types of models. U (C.43) with consumption. N ) = − 1−σ 1+χ (4. 4.43) a separable utility function such as5 C 1−σ N 1+χ U (C. (4.3. Although we will not attempt to estimate those more generalized versions it is worth presenting the main structure of the generalized models and to demonstrate how they can be solved by using dynamic programming as introduced in chapter 1. We can obtain from (4.1 The Model Structure The generalization of the standard RBC model is usually undertaken either with respect to preferences or with respect to the technology. C and labor effort. With respect to preferences utility functions such as4 Cexp U (C. as arguments is non-separable in consumption and leisure. The utility function (4. Moreover.5 The Generalized RBC Model In recent work stochastic dynamic optimization models of general equilibrium type have been presented in the literature that go beyond the standard model as discussed in chapter 4.6. Since those generalized versions can easily give rise to multiple equilibria and history dependence. N ) = −N 1+χ 1+χ 1−σ −1 . See Benhabib and Nishimura (1998) and Harrison(2001). N ) = logC − N 1+χ 1+χ (4.44) which is additively separable in consumption and leisure.76 tion paths to those computed by Campbell (1994)’s log-linear approximate solution. N .5.

β = (1 + ξ)b and Y. a + b = 1 i externalities of the form A = (K a N b )ξ . with Y = K α N β α > 0.47) is I 1−ϕ − 1 /(1 − ϕ) (4. ϕ′′ (δ) ≤ 0 (4.4) and Benhabib and Farmer (1994).77 As concerning production technology and markets usually the following generalizations are introduced. The above describes a type of a generalized model that we want to solve. namely δ ˙ K = I − δK. β > 0.46) whereby α = (1 + ξ)a. the aggregate stock of capital and labor hours respectively. ch. ϕ′ (δ) = 1. (4. (4.48) (4. 7. K.47) Hereby δ is the depreciation rate of the capital stock.6 First we can allow for increasing returns to scale.49) δK For ϕ = 0. N represent total output. 6 7 See Kim (2003a). ξ ≥ 0 may allow for an aggregate production function with increasing returns to scale. one has the standard model without adjustment cost. see Lucas and Prescott (1971) and Kim (2003a) and Boldrin. See Farmer (1999. A functional form that satisfies the three conditions of equ.46) can also be interpreted as a monopolistic competition economy where there are rents arising from inverse demand curves for monopolistic firms.2. Christiano and Fisher (2001).7 Another generalization as concerning production technology can be undertaken by introducing adjustment cost of investment.8 We may write ˙ I K =ϕ K K with the assumption of ϕ(δ) = 0. Although the individual-level private technology generates constant returns to scale with Yi = AKia Lb . The increasing returns to scale technology represented by equ. α + β ≥ 1 (4. 8 For the following. .

45) and the technology such as specified in eqs. we obtain the following value function. Preferences and technology (by using adjustment cost of capital) take on.78 4. .3 0.7 0.5. (4. (4.50) s. representing the total utility along the optimal paths of C and N .2: Parameterizing the General Model a b χ ρ δ ξ ϕ 0. (4.1.51) where preferences U (Ct . Nt ) are chosen such as represented by equ.05 Note that in order to stay as close as possible to the standard RBC model we avoid externalities and therefore presume ξ = 0. ˙ K It =ϕ K Kt (4. however.49).47)-(4. Note that the dynamic decision problem (4. (4.05 0.46) and (4. For solving the model with the dynamic programming algorithm as presented in chapter 1. Table 4.50)-(4.6 and a grid for the capital stock.2 Solving the Generalized RBC Model ∞ We write the model in continuous time and in its deterministic form Ct .t.51) are written in continuous time. a more general form.51). 10].6 we use the following parameters. Using then the deterministic variant of our dynamic programming algorithm of ch. Nt = max t=0 e−ρt U (Ct .1 0 0. Nt )dt. Its discretization for the use of dynamic programming is undertaken through the Euler procedure.3 0. The latter are used in equ. in the interval [0. 1. K.

79 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 0 1 2 3 4 5 6 7 8 9 10 Figure 4. consumption is low and labor effort high.8 0.6 0.3.4.4 1.8 1. The dynamics generated by the response of consumption and labor effort to the state variable capital stock.4. see Figure 4.6 1. .4: Paths of the Choice Variables C and N (depending on K) As can be clearly observed the value function is concave.4 0. see Figure 4. K. 1. will thus lead to a convergence toward an interior steady state of the capital stock. when capital stock is low (so that capital stock can be built up) and consumption is high and labor effort low (so that capital stock will shrink). K) are shown in Figure 4.3: Value function for the general model The out-of-steady state of the two choice variables namely consumption and labor effort.2 1 0.2 0 0 2 4 6 8 10 Figure 4. Moreover. consumption and labor effort.(depending in feedback form on the state variable capital stock.

the task that will be addressed in the next chapter. present the solution to the model.53) (4.56) given cb . kb ¯ and yb can be derived from (4. nb and λb . We find equation (4.7 Appendix: The Proof of Proposition 4 Evaluating (4.56) ¯ The derivation of the boundary steady state is trival.27) y is given by y = Ak 1−α (4. We then introduce the empirically oriented standard RBC model and.80 4. 4.55) nN 0. we ignore the subscript i.35) . We also have presented a generalized RBC model and solved it by using dynamic programming. This provides us the groundwork for an empirical assessment of the RBC model. By (4.27) in their certainty equivalence form. This will be important for the subsequent part of the book.54) are satisfied. nb and λb .54) (4. and assuming all the variables to be at their steady states.52) (4. n and λ ¯ with cb .55) and (4. and therefore all the steady state values y are understood as the interior steady states.(4. based on our previous chapters. Further.6 Conclusions In this chapter we first have introduced the intertemporal general equilibrium model on which the standard RBC model is constructed.52) .54). ¯ ¯ Next we try to derive the interior steady state.(4. we obtain β 1 − λ=0 c 1+γ αy −θ + βλ =0 1−n (1 + γ)n (1 − α)y β λ (1 − δ) + =λ 1+γ k 1 (1 − δ)k + y − c k= 1+γ where from (4. Our attempt was to reveal the basis of some intertemporal decision problems behind the RBC model. For notational convenience. Replace c. Let k = φ.38) along with (4.3 α (4. we obtain .

57) = A α φ− α 1 1 N 0.59) N 0. k y /¯ ¯ n = n y /k ¯ ¯ 1 N 0.81 y = φk where φ is defined by (4.60). .3 α 1−α = A α φ1− α Therefore.57) and c either from (4.40) in the proposition.3 n (4.3 (4. λ can be derived from (4.59) or from (4. we thus solve the steady state of the labor effort n.58). k can be obtained from (4.3 (4.59) and (4. (4.3 (4. y from (4.56).53) imply that c = = α y (1 − n) θ n 1 1 α (1 − n)A α φ1− α θ 1 1 N 0.58).60) Equating (4. Once n is solved.52).3 φ α−1 α n−1 N 0. y = A n = A 1 y φ y n 1−α nN 0. Finally. we thus obtain from (4.52) and (4.55) c = (φ − δ − γ)k = (φ − δ − γ)A α φ− α Meanwhile. According to (4.58) Expressing y in terms of φk and then expressing k in terms of (4.60).

“the whole idea that such a simple model with no government. we shall provide a comprehensive empirical assessment of the standard RBC model.. despite its rather simple structure. especially in the first three chapters. Moreover.2 in the last chapter. no adjustment cost could replicate actual experience this well is very surprising. rational expectations. Yet before we commence with our formal study. The simulated data are shown in Figure 4.. Our previous discussion. We shall first estimate the standard RBC model and then evaluate the calibration results that have been stated by early real business cycle theorists. which is generated from a stochastic simulation of our standard model for the given 82 . 5. some theorists suggest that even a simple RBC model.) However. no market failures of any kind. can generate the time series to match the macroeconomic moments from empirically observed time series data. we shall first apply our estimation strategy using simulated data.1 Introduction Many real business cycle theorists believe that the RBC model is empirically powerful in explaining the stylized facts of business cycles. In this chapter. As Plosser (1989) pointed out. like the standard model we presented in the last chapter.” (Plosser 1989:.Chapter 5 The Empirics of the Standard Real Business Cycle Model 5. has provided a technical preparation for this assessment.2 Estimation with Simulated Data In this section. these early assessments have also become the subject to various criticisms. we shall first demonstrate the efficiency of our estimation strategy as discussed in Chapter 3.

2.6))2 must be linear. 1−α E yt − At kt (nt N /0.3)α = 0. Therefore. . we shall first linearize (4. 5.   −(1 − δ)  0 Γ=  0 0   1 −1 0 0 0 0 −G11 −g13   0 0 −G21 −g23  y 0 0 −y A Note that the parameters in the exogenuous equation could be independently estimated since they have no feed back into the other equations. This gives us y y y yt = At + (1 − α) kt + α nt − y n A k We thus obtain for the ML estimation:    1+γ kt  ct   −G12  zt =  B =  −G  nt  .83 standard parameters reported in Table 4. (5.3) (5.1. E [nt − G21 kt − G22 At − g23 ] = 0. there is no necessity to include the exogenuous equation into the restriction. we expect that the estimated parameters will be close to the standard parameters that we know in advance. The purpose of this estimation is to test whether our suggested estimation strategy works well. 2 Note that here we use zt rather than yt in order to distinguish it from the output yt in the model.  22 −(1 − α) y yt kt−1  ct−1  xt =  yt−1   At 1 1 k 0 1 0 0 0 0 1 y −α n    0 0   0  1   . If the strategy works well. the model implies the following moment restrictions:1 E [(1 + γ)kt+1 − (1 − δ)kt − yt + ct ] = 0.4) Note that the moment restrictions could be nonlinear as in (5.2) (5.27) using Taylor approximation.4).1 The Estimation Restriction The standard model implies certain restrictions on the estimation.1) (5. For the GMM estimation. Therefore. the restriction Bzt + Γxt = εt (see equation (3. Yet for the ML estimation. E [ct − G11 kt − G12 At − g13 ] = 0.

but also that the objective function is not smooth.9946 δ 0.1 and 5.9884 0.025 0.1: GMM and ML Estimation Using Simulated Data True ML Estimation 1st Step GMM 2nd Step GMM α 0.2 Estimation with Simulated Data Although the model involve many parameters.58 0. but also each single step of the GMM estimation takes much more time for the algorithm to converge.2. Yet.5781 β 0.4).2919 2. Table 5.7217E−007) (3.4956E−007) (9. Approximately 8 hours on a Pentium III computer is required for each step of the GMM estimation whereas approximately only 4 hours is needed for the ML estimation. The parameters with regard to the AR(1) process of At have no feedback effect on our estimation restrictions.2. This verifies the necessity of using simulated annealing in our estimation strategy.5796 0.4373E−006) (6.1 reports our estimation with the standard deviation included in parenthesis. δ and θ.02505 One finds that the estimations from both methods are quite satisfying.006181723) (4.84 5. This is probably because the GMM estimation does not need to linearize (5. 3 N can be regarded as the mean of per capital hour.9821 0. All estimated parameters are close to their true parameters. This demonstrates the efficiency of our estimation strategy.5800 0.1826 (2.6369E−006) 0.5779E−005) (2. These are the parameters that are empirically unknown and thus need to be estimated when we later turn to the estimation using empirical data. we should also remark that the difference is minor whereas the time required by the ML estimation is much shorter.4412E−006) (3.3 Table 5. It shows not only the existence of multiple optima.02500 2.6093E−006) (9. β.9290E−006) (0.00112798) (3. the parameters that we have used to generate the data. the GMM estimation after the second step is more accurate than the ML estimation. The latter holds not only because the GMM needs an additional step. However. In Figure 5.9958E−008) (5.000 0. . we shall only estimate the parameters α. we also illustrate the surface of the objective function for our ML estimation.0253 θ 2 2.9884 0. but create for transforming the model into a stationary version. γ does not appear in the model.7174E−006) (0.

85 Figure 5.δ Surface of the Objective Function for ML Estimation Figure 5.1: The β .2: The θ − α Surface of the Objective Function for ML Estimation .

time series. Later in this chapter we will. this approach uses macroeconomic data that posits a fullemployment assumption. Yet.3 5. we turn to estimating the standard RBC model with the data of U. and the parameter α and γ as well as the initial condition X0 are given. a key assumption in the model.86 Next. Indeed. Nt the labor input (per capita hours). the Solow residual At is computed as follows: At = = Yt 1−α Kt (Nt Xt )α yt 1−α kt Nt α (5. Kt the capital stock. The empirical studies of RBC models often require a considerable re-construction of existing macroeconomic data. Here all the data can assume to be obtained from statistical sources except the temporary shock At . however. we shall first discuss the data that shall be used in our estimation. Kt .5) (5. S.7) Thus.1 Estimation with Actual Data The Data Construction Before estimating the benchmark model with empirical U. often called the technology puzzle in the RBC literature. Ct consumption and Yt output. The time series employed for our estimation should include At . since this is a common practice we shall here also follow this procedure. Assuming that the production function takes the form Yt = At Kt1−α (Nt Xt )α . S.6) where Xt follows a constant growth rate: Xt = (1 + γ)t X0 (5. time series data. the Solow residual At can be derived if the time series Yt . the temporary shock in technology.3. It should be noted to derive the temporary shock as above deserves some criticism. A common practice is to use the so-called Solow residual for the temporary shock. deviate from this practice and construct the Solow residual in a different way. In addition to the construction of the temporary shock. such as the business cycle model of Keynesian type. 5. This will allow us to explore a puzzle. . Nt . the existing macroeconomic data (such as those from Citibase) also need to be adjusted to be accommodated to the definition of the variables as defined in the model. which makes it different from other types of model.

7) for the given parameter γ and the initial condition X0 . the service generated from durable consumption goods and government capital stock should also appear in the definition of Yt .58 and the other is 0.87 4 The national income as defined in the model is simply the sum of consumption Ct and investment. which we denote as α∗ (see equation (5.0045. Therefore. In this estimation.2 reports the estimations after the second step of GMM estimation. This data set is taken without any modification.5 Meanwhile we shall consider two α∗ : one is the standard value 0. We shall remark that 0. . we set X0 to 1 and γ to 0.66 is the estimated α in Christiano and Eichenbaum (1992). it is suggested that not only private investment but also government investment and durable consumption goods (as well as inventory and the value of land) should be included in the capital stock. The sample period of this data set is from 1955. ct ≡ Xt and yt ≡ Xtt ). one has to compute them based on some assumptions.4.66. Data Set I. ct and yt to be stationary (note that we have defined C Y t kt ≡ Kt . we shall employ two different data sets. Data Set II. The second data set. Further. Nt and Kt given the pre-determined α.0045 for γ to make kt . see Cooley and Prescott (1995). γ is the standard parameter as choosen by King X t et al. 5. (1988). the latter increases the capital stock. Two Different Data Sets To explore how this treatment of the data construction could affect the empirical assessment of dynamic optimization model.4. the shock sequence At is computed from the time series of Yt .2 Estimation with the Christiano Data Set As we have mentioned before. Xt . Also.5 )). Table 5. to make the model’s national income account consistent with the actual data.1 to 1988.1 to 1984. is obtained mainly from Citibase except the capital stock which is taken from the Current Survey of Business. All the data are quarterly. 4 5 For a discussion on data definitions in RBC models. Since such data are not readily available. is constructed by Christiano (1987). Here the time series Xt is computed according to equation (5. The first data set. one should also include government consumption in Ct . We choose 0. which has been used in many empirical studies of RBC model such as Christiano (1988) and Christiano and Eichenbaum (1992). The sample period for this data set is from 1952. Consequently.3.

Table 5.0002) (0.9286 0.0002) (0. ∗ 5. the estimated α is almost the same as the pre-determined α. especially β. this issue is mostly suppressed in the current debate. are not within the economically feasible range.2963 0.6663 0.06) (283689.907) (39958. though somewhat deviating from the standard parameters used in King et al.58. the estimated parameters. all the estimations seem to be quite reasonable. This is not surprising due to the way of computing the temporary shocks from the Solow residual.88 Table 5.0209 1.58 α∗ = 0. we again set X0 to 1 and γ to 0.66 ∗ (2. are all within the economically feasible range.9552 (0. we should also remark that the standard errors are unusually small. we find that the estimations here are much less satisfying. For α = 0. Finally.2393E−006) α 0.828) δ 0.2377E−008) (9.0002) (0. Given such a sharp contrast for the results of the two different data sets.0002) (0. (1988).4656 (54457.6600 0. For α∗ = 0.0714 1.409) (35789.8610 In contrast to the estimation using the Chrisitano data set.we report the estimation results in Table 5.9892 0.2: Estimation with Christiano’s Data Set α = 0.0045. .023) α 0. They deviate significantly from the standard parameters. the estimates are all statistically insignificant due to the huge standard errors.0716 (454278.3. In both cases.3.0088) As one can observe.5800 β δ θ 0.272) β 0. one is forced to think about the data issue involved in the current empirical studies of RBC model.9935 0. Even the parameter β estimated here is very close to the β chosen (rather than estimated) by them. Furthermore.0078) 0.204) (45174.3: Estimation with the NIPA Data Set α = 0.0209 2. For the two pre-determined α. the estimated parameters are very close to those in Christiano and Eichenbaum (1992).3 Estimation with the NIPA Data Set As in the case of the estimation with Christiano’s data set. Indeed.56) θ 1. Some of the parameters.8553 (89684.58 α∗ = 0.1111 (0.66 ∗ (71431.66.

5800 γ 0. Table 5. β. one can then assess the model to see how closely it matches with the empirical data. σǫ ).9892 δ 0.8) (5.9) (5.58 for the parameters α.1.89 5.0209 θ 1. a1 and σǫ in the stochastic equation (5.11) (5. First. The parameter γ is set 0.0045 as usual. The current method for assessing a stochastic dynamic optimization model of RBC type is the calibration technique.03 a0 0.4. we employ those in Table 5.12) 2 where ǫt+1 ∼ N (0. The structural parameters used for this stochastic simulation are defined as follows.10) (5.9552 N 299. we report all these parameters in Table 5. The basic idea of calibration is to compare the time series moments generated from the model’s stochastic simulation to those from a sample economy.4 Calibration and Matching to U.6 The parameter N is simply the sample mean of per capita hours Nt . which has already been introduced in Chapter 3.11) are estimated by the OLS method given the time series computed from Solow residue. For convenience. and Gij and gi (i.0333 a1 0.9811 σǫ 0.0045 β 0. 2) are all the complicated functions of the structural parameters and can be computed from our GAUSS procedure of solving dynamic optimization problem as presented in Appendix II of Chapter 1. j = 1.0189 6 Note that these are the same as in Table 4.2 at α∗ = 0. TimeSeries Data Given the structural parameters. The parameters a0 .4: Parameterizing the Standard RBC Model α 0.3)α At+1 = a0 + a1 At + ǫt+1 1 [(1 − δ)kt + yt − ct ] kt+1 = 1+γ (5. S. . The data generation process for this stochastic simulation is given by the following equations: ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 1−α yt = At kt (nt N /0. δ and θ.

the moment statistics of the sample economy is computed from Christiano’s data set.0000) −0. For the model economy.(5.90 Table 5.5.0037 (0.0000) 5. capital stock and output could be regarded as being .2013 (0.0021) 1.0159 (0.0165 0.0906) 0. the distributions are derived from our 5000 thousand stochastic simulations.4.0012) Capital 0.9796 (0.0007) Employment 0.0000 (0.0050 (0. Here.0000 1.4604 0. In particular.0006) Output 0.1089) 0.0156 0.1032) 1.0210) 0. the moment statistics include the standard deviations of some major macroeconomic variables and also their correlation coefficients. Table 5.0000) 0.0090 (0. while those for model economy are generated from our stochastic simulation using the data generation process (5.0035 0.0081 0.7550 1.1 The Labor Market Puzzle By observing Table 5.9432 (0.0000) 0. Of course.2861 0. All time series data are detrended by the HP-filter. we find that among the four key variables the volatilities of consumption.9381 (0.12).0083) 1. which can be reflected by their corresponding standard deviations (those in paratheses).5: Calibration of Real Business Cycle Model (numbers in parentheses are the corresponding standard deviations) Consumption Standard Deviations Sample Economy Model Economy Correlation Coefficients Sample Economy Consumption Capital Stock Employment Output Model Economy Consumption Capital Stock Employment Output 0.0000 (0.0954 1.1431 (0.1741 0. we can further obtain the distribution of these moment statistics.8) .0031) 1.5 reports our calibration from 5000 stochastic simulations.7263 1.0000 0.0000 (0.0000 0.0000 (0.0575 (0.0000 0.

where we compare the observed series from the sample economy to the simulated series with innovation given by the observed Solow residual. the matching does not hold for employment.4. the employment in the model economy is excessively smooth.3: Simulated and Observed Series (non detrended): solid line observed and dashed line simulated . Figure 5. Indeed.3 and Figure 5. This is indeed one of the major early results of real business cycle theorists. We shall remark that the excessive smoothness of employment is a typical problem of the standard model that has been addressed many times in literature.91 somewhat matched. However. These results are further demonstrated by Figure 5.

The excessive smoothness of labor effort and the excessive correlation between labor and consumption will be taken up in Chapter 8. In the sample economy. We remark that such an excessive correlation has.4: Simulated and Observed Series (detrended by HP filter): solid line observed and dashed line simulated Now let us look at the correlations. and the other is between employment and output. there are basically two significant correlations. therefore. should be somewhat correlated. consumption and employment in the model economy are also significantly correlated. to our knowledge. The discussions have often been focused on the correlation with output.92 Figure 5. However. One is between consumption and output. this excessive correlation should not be surprising given that in the RBC model the movements of employment and consumption reflect the movements of the same state variables: capital stock and temporary shock. not yet been discussed in the literature. . They. in addition to these two correlations. However. Both of these two correlations have also been found in our model economy.

The first strategy is to use an observed indicator to proxy for unobserved utilization. Mankiw (1989) and Summers (1986) have argued that such a measure often leads to excessive volatility in productivity and even the possibility of technological regress. 2003). but to be shown below. these results rely on the hypothesis that the driving force of the business cycles are technology shocks. both of which seem to be empirically implausible. this celebrated result is obtained also from the empirical evidence. i. see Gali (1999) and Francis and Ramey (2001. It has been shown that the Solow residual can be expressed by some exogenuous variables. Another strategy is to construct an economic model so that one could compute the factor utilization from the observed variables (see Basu and Kimball 1997 and Basu. consumption and employment. There are several reasons to distrust the standard Solow residual as a measure of technology shock. Eichenbaum and Rebelo 1996). consumption and capital stock. One is that the parameters a0 .. Second. which are assumed to be measured by the Solow residual. not yet shown in Table 5. A third strategy identifies the technology shock through an VAR estimate. the technology is procyclical with output. Meanwhile. Considering that the Solow residual cannot be trusted as a measure of the technology shock.5 The Issue of the Solow Residual So far we may argue that one of the major achievements of the standard RBC model is that the model could explain the volatility of some key macroeconomic variables such as output. for example demand shocks arising from military spending (Hall 1988) and changed monetary aggregates (Evan 1992). Another major presumption of the RBC literature. is the technology-driven hypthoses. First.5. The measurement of technology can impact this result in two ways. Those innovations are often used in the RBC literature as an additional indicator to support the model and its matching of the empirical data. The second is that the Solow residual also serves as the sequence of observed innovations that generate the graphs in Figure 5. All these methods are focused on the computation of factor utilization. These parameters will directly affect the results from our stochastic simulation. researchers have now developed different methods to measure technology correctly. Of course.4. in which the technology is measured by the standard Solow residual. which are unlikely to be related to factor productivity. the standard Solow residual is not a reliable measure of technology schocks if the cyclical variation in factor utilization are significant.e.11) are estimated from the time series computed from Solow residual. Third. . Fernald and Kimball 1998). A typical example is to employ electricity use as a proxy for capacity utilization (see Burnside. a1 and σǫ in the stochastic equation (5. There are basically three strategies.3 and Figure 5.93 5.

IPXMCAQ. the capacity utilization of manufacturing. may not be a realistic paradigm for macroeconomic analysis any more. we will refer to all of this recent research employing our available data set. Therefore. All these are important problematic issues that are related to the Solow residual. Yet in this test. This is also the approach taken by Evan (1992). but because it uses a problematic measure of technology. In this section. we shall investigate the following specification: At = c + α1 At−1 + · · · + αp At−p + β1 gt−1 + · · · + βp gt−p + εt (5. obtained from Citibase.5. Indeed. Given our new measurement. gt should not have any .13) where gt in this test is government spending. we simply use government spending. a critical assumption of the Solow residual to be a correct measurement of the technology shock is that At should be purely exogenuous. in particular the variation in consumption. the real business cycles model. can match well the variations in output. consumption and employment. as driven by technology shocks.94 Recently. Unlike other current research. Gali (1999) and Francis and Ramey (2001) have found that if one uses the corrected Solow residual – if one identifies the technology shock correctly – the technology shock is negatively correlated with employment and therefore the celebrated discovery of the RBC literature must be rejected. which is available in our Christiano’s data set. one may find that the standard RBC model. We will construct a measurement of the technology shock that represents a corrected Solow residual. if they are confirmed. output and capital stock. If the Solow residual is exogenuous. if the corrected Solow residual is significantly different from the standard Solow residual. using the Solow residual. We will first follow Hall (1988) and Evan (1992) to test the exogeneity of Solow residual. Also.1 Testing the Exogeneity of the Solow Residual Apparently. This construction needs data on factor utilization. as an aggregate demand variable. 5. For this purpose. testing the exogeneity of the Solow residual becomes our first investigation to explore whether the Solow residual is a correct measure of the technology shock. the distribution of At cannot be altered by the change in other exogenuous variables such as the variables of monetary and fiscal policy. we then explore whether the RBC model is still able to explain the business cycles. we use empirically observed data series. One possible way to test the exogeneity is to employ the Granger causality test. consumption and capital stock not because the model has been constructed correctly. We shall also look at whether the technology still moves procyclically with output. over Xt . In other words.

5). 86) (6. it is assumed that the capital stock is fully utilized. there is no variation in population growth. Next.6: F −Statistics for Testing Exogeneity of Solow Residual p=1 p=2 p=3 p=4 F − statistics 9. If we look at the computation of the Solow residual. Table 5. one finds that at 5% significance level we can reject the null hypothesis for all the lag lengths p′ s. e. On the other hand.7 5.6 provides the corresponding F -statistics computed for the different p’s. Therefore our null hypothesis is H0 : β1 = · · · = βp = 0 (5.g.14) The rejection of the null hypothesis is sufficient for us to refute the assumption that At is strictly exogenuous.2775435 2. In other words.6. it is further assumed that the population follows a constant growth rate which is a part of γ. we present a simple way of how to extract a technology shock from macroeconomic data. 82) From Table 5. It is well known that the result of any empirical test for Granger causality can be surprisingly sensitive to the choice of lag length p.2 Corrected Technology Shocks The analysis in our previous section indicates that the hypothesis can rejected that the standard Solow residual be strictly exogenuous. Table 5. Next we shall consider the derivation of the corrected Solow residual by relaxing those strong assumptions. First.3825632 degrees of freedom (1. we find two strong assumptions inherent in the formulation of the Solow residual. 90) (4.5769969 4. This finding is consistent with the results in Hall (1988) and Evan (1992). 92) (2. equation (5. such as government spending.. The test therefore will be conducted for different lag lengths p’s. those policy variables.95 explanatory power for At . Therefore we may have sufficient reason to distrust the Solow residual to be a good measure of the technology shock. may have explanatory power for the variation of the Solow residual. Second. 7 Although we are not able to obtain the same at 1% significance level.3041035 3.5. . which certainly represents a demand shock.

the corrected Solow residual At can Lt be computed as yt ˜ (5.16) ˜ Above lt ≡ Lt . . Et is the number of workers employed.15) ˜ Above.15) by Xt . one finds that our corrected Solow resid˜ ual At will match the standard Solow residual At if and only if both ut and lt equal 1. which is the hours per capita. Let Lt denote the permanent shock to population so that Xt = Zt Lt while t Lt denotes the observed population so that ELHt = Nt .8 and Zt is the permanent shock in technology.17) At = 1−α (ut kt ) (lt Nt )α Comparing this with equation (5. Note that in this formulation.5 compares these two time series: one for non-detrended and the for detrended series. Given equation (5.96 Let ut denote the utilization of capital stock. we interpret the utilization of labor service only in terms of their working hours and therefore ignore their actual effort. Figure 5. The observed output is thus produced by the utilized capital and labor service (expressed in terms of total observed working hours) via the production function: ˜ Yt = At (ut Kt )1−α (Zt Et Ht )α (5. which is more difficult to be observed. 8 Note that this is different from our notation Nt before. which can be measured by IPXMCAQ from Citibase. At is the corrected Solow residual (which is our new measure of temporary shock in technology). we then obtain ˜ yt = At (ut kt )1−α (lt Nt )α (5. Dividing both sides t of (5. Ht denotes the hours per employed worker.5).16).

. in the short run.9 However. (1996).97 Figure 5.3 Business Cycles with Corrected Solow Residual Next we shall use the corrected Solow residual to test the technology-driven hypothesis. they rather move in different directions if we compare the detrended series. consumption.5. employment and capital stock. the Sample Economy I (in which the technology shock is represented by the standard Solow residual) and the Sample Economy II (to be represented by the corrected Solow residual).5.5: The Solow Residual: standard (solid curve) and corrected (dashed curve) As one can observe in the figure 5.7. In Table 5. 9 A similar volatility is also found in Burnside et al. The data series are again detrended by the HP-filter. we report the cross-correlations of the technology shock to our four key economic variables: output. These correlations are compared for three economies: the RBC economy (whose statistics is computed from 5000 simulations). the two series follow basically the same trend while their volatilities are almost the same. 5.

0013) (0. we find that the technology shock is procyclical to output.2142 -0. We.0255 (0.3422 -0. we find a somewhat opposite result.0084) (0.4. as in Sample Economy II. can confirm the findings of the recent research by Basu. 10 Here the structural parameters are still the standard ones as given in Table 5.10 Comparing Figure 5. where the standard Solow residual is employed.4.6 a one time simulation with the observed innovation given by the corrected Solow residual.7008 0.0762 RBC Economy Sample Economy I Sample Economy II If we look at the Sample Economy I. Gali (1999) and Francis and Ramey (2001.5854 0.9903 0. 2003).7: The Cross-Correlation of Technology output consumption employment capital stock 0. if we use the corrected Solow residual.6 to Figure 5.98 Table 5.1108 -0. consumption and employment. However.9722 0.1077) 0. . therefore. et al. This result is exactly predicted by the RBC Economy and represents what has been called the technology-driven hypothesis.1736 -0. we provide in Figure 5. especially for employment.9966 -0.0031) (0. (1998). we find that the results are in sharp contrast to the prediction as referred in the standard RBC model.7844 0. To test whether the model can still match the observed business cycles.

6 Conclusions The standard RBC model has been regarded as a model that replicates the basic moment properties of U. Through its necessity to accommodate the data to the model’s implication.S. Prescott (1986) summarizes the moment implications as indicating “the match between theory and observation is excellent.S. many have felt that the RBC research has at least passed the first test.6: Sample and Predicted Moments with Innovation Given by Corrected Solow Residual 5. economy to be matched by the model’s steady state at the given economically feasible standard pa- .S.99 Figure 5. Indeed. this early assessment builds on the reconstruction of U. macroeconomic data. In the first place. such data reconstruction seems to force the first moments of certain macroeconomic variables of the U. Yet this early assessment should be subject to certain qualification. macroeconomic time series data despite its rather simple structure. but far from perfect”.

This incorrect measure of technology takes us to the technology puzzle: the procyclical technology. al (1999) pointed out. output and capital stock may rely on the incorrected measure of technology. Second.” In Chapter 9.6. output and capital stock when the reconstructed data series are employed. Both of these two problems are related to the labor market specification of the RBC model. Third.100 rameters. “it is the final criticism that the Solow residual is a problematic measure of technology shock that has remained the Achilles heel of the RBC literature. One possible approach for such improvement is to allow for wage stickyness and nonclearing of the labor market. we still cannot ignore the problems of excessive smoothness of labor effort and excessive correlation between labor and consumption. As King et. . driving the business cycle. may not be a very plausible hypothesis. The unusual small standard errors of the estimates seem to confirm this suspicion. we shall address the technology puzzle again by introducing monopolistic competition into an stochastic dynamic macro model. For the model to be able to replicate employment variation. although one may celebrate the fit of the variation of consumption. the celebrated fit of the variation in consumption. a task that we will turn to in Chapter 8. the match does not exist any more when we use the corrected Solow residual as the observed innovations. it seems necessary to make improvement upon the labor market specification. As we have shown in Figure 5.

in economies with an exogenous dividend stream the aggregate consumption is usually used as a proxy for equity dividends. equity premium and Sharpe-ratio. In particular we will explore to what extend it can replicate the empirically found risk-free interest rate. First. Second. in economies with an exogenous dividend stream and no savings consumers are forced to consume their endowment. Production economies offer a much richer. Modelling asset price and risk premia in models with production is much more challenging than in exchange economies. Most of the asset pricing literature has followed Lucas (1978) and Mehra and Prescott (1985) in computing asset prices from the consumption based asset pricing models with an exogenous dividend streams. 101 . Asset prices contain valuable information about intertemporal decision making and dynamic models explaining asset pricing are of great importance in current research. The idea of employing a basic stochastic growth model to study asset prices goes back to Brock and Mirman (1972) and Brock (1978. Empirically.1 Introduction In this chapter. we shall study asset price implications of the standard RBC model.Chapter 6 Asset Market Implications of Real Business Cycles 6. there is a more realistic modelling of equity dividends is possible. 1982). this is not a very sensible modelling choice. We here want to study a production economy with asset market and spell out its implications for asset prices and returns. and realistic environment. In economies with production where asset returns and consumption are endogenous consumers can save and hence transfer consumption between periods. Since there is a capital stock in production economies.

Gong and Semmler (2001) where the closed-form solutions for risk premia of equity. All the estimations are again conducted through the numerical algorithm. the Sharpe-ratio and the risk-free interest rates are presented in a log-linearized RBC model as developed by Campbell (1994).3 The second asset pricing restriction concerns the risk-return trade-off as measured by the Sharperatio. we implicitly assume that the standard model can. Those values are then compared to the stylized facts of asset markets. The theoretical framework in this chapter is taken from Lettau (1999). as the previous chapter has shown the standard model fails also along some real dimensions. or the price of risk. for example Jerman (1998). replicate the moments of the real variables. This variable determines how much expected return agents require per unit of financial risk. For each estimation. Hansen and Jagannathan (1991) and Lettau and Uhlig (1999) show how important the Sharpe-ratio4 is in evaluating asset prices generated by different models. we compute the implied premia of equity and long-term real bond. The data employed for this estimation are taken again from Christiano (1987). the simulated annealing. Boldrin. for the estimated parameters.2 We then add our first asset pricing restriction. We find that the Sharpe-ratio restriction affects the estimation of the model drastically. we estimate the model using only the restrictions of real variables as in Chapter 5. Ohanian and Berkowitz (1995) to test whether the moments predicted by the model. In addition. Lettau and Uhlig (1999) and Lettau. to some extent. We use the observed 30-day T-bill rate to match the oneperiod risk-free interest rate implied by the model. We introduce the asset pricing restrictions step-by-step to clearly demonstrate the effect of each new restriction. Of course.102 Although recently further extension of the baseline stochastic growth model of RBC type were developed to match better actual asset market characteristics 1 we will in the current paper by and large restrict ourselves to the baseline model. 4 See also Sharpe (1964) 1 . the riskfree interest rate. 2 Using Christiano’s data set. 3 Using 30-day rate allows us to keep inflation uncertainty at a minimum. we introduce a diagnostic procedure developed by Watson (1993) and Diebold. can match the moments of the actual macroeconomic time series. First. Introducing the Sharpe-ratio as moment restriction in the estimation procedure requires an iterative procedure to estimate the risk aversion parameter. In particular. long-term real bonds. Christiano and Fisher (2001) and Gr¨ne and u Semmler (2004b). Those equations can be used as additional moment restrictions in the estimation process. The estimation technique in this chapter follows the Maximum Likelihood (ML) method as discussed in Chapter 4. we use the variancecovariance matrix of the estimated parameters to infer the intervals of the See.

we apply here the power utility as describing the preferences of the representative household. 21) and Gr¨ne and Semmler u (2004b).2 6. In section 2. see Jerman (1998).103 moment statistics and to study whether the actual moments derived from the sample data fall within this interval. Kt for capital stock. Section 3 presents the estimation for the model specified by different moments restrictions. we use the standard RBC model and log-linearization as proposed by Campbell (1994) and derive the closed-form solutions for the financial variables. as in our previous modelling. Nt for normalized labor input and Ct for consumption. Section 5 compares the second moments of the time series generated from the model to the moments of actual time series data. For the model of asset market implications other preferences. ch. for example.2. At for technology. we interpret our results and contrast the asset market implications of our estimates to the stylized facts of the asset market.2) 1 Aα =α t θ(1 − Nt ) Ct 5 Kt Nt (1−α) Note that. Christiano and Fisher (2001) and Cochrane (2001. The first order condition is given by the following Euler equation: −γ Ct−γ = βEt Ct+1 Rt+1 (6. . The rest of the chapter is organized as follows. In Section 4.1 The Standard Model and Its Asset Pricing Implications The Standard Model We follow Campbell (1994) and use the notation Yt for output. Boldri. Section 6 concludes. 6. habit formation are often employed. The maximization problem of a representative agent is assumed to take the form5 ∞ M ax Et i=0 β i 1−γ Ct+i + θ log(1 − Nt+i ) 1−γ subject to Kt+1 = (1 − δ)Kt + Yt − Ct with Yt given by (At Nt )α Kt1−α .1) (6.

We denote the leverage factor (the ratio of bonds outstanding and total firm value) as ζ.2 The Log-linear Approximate Solution Outside the steady state. σε ). which is equal to the marginal product of capital in production plus undepreciated capital: Rt+1 ≡ (1 − α) At+1 Nt+1 Kt+1 α + 1 − δ. We allow firms to issue bonds as well as equity. In the rest of the chapter. Hence. the exact analytical solution to the model is not feasible. This defines the relation among g. 6. innovation: εt ∼ N (0. can be written as ct = ηck kt + ηca at nt = ηnk kt + ηna at (6. We therefore seek instead an approximate analytical solution. we can further write the above equation as γg = log(β) + r. Since markets are competitive. the technology.5) (6.4) 2 with εt to be the i. β and γ. the ModiglianiMiller theorem is presumed to hold.3) where g ≡ log G and r ≡ log R.i. Taking log for both sides. using the log-linear approximation method.1) becomes G = βR where R is the steady state of Rt+1 . In the case of incomplete capital depreciation δ < 1. output and capital stock all grow at a common rate G = At+1 /At . consumption. (6. we use g.6) .2. real allocations will not be affected by this choice. r. the implied value for the discount factor β can then be deduced from (6. At the steady state. Assume that the technology shock follows an AR(1) process: at = φat−1 + εt (6. i.d. consumption ct .e.104 where Rt+1 is the gross rate of return on investment in capital. Note that here we use the lower case letter as the corresponding log variables of the capital letter. r. (6. the model characterizes a system of nonlinear equations in the logs of technology at . and γ as parameters to be determined.3). Campbell (1994) shows that the solution. labor nt and capital stock kt .

ηca .4) while ignoring the constant term involving the discount factor and the variance of consumption growth. and ηka are all the complicated functions of the parameters α. r.8) (see Lettau et al. . (6. 1.3 The Asset Price Implications The standard RBC model as presented above has strong implications for asset pricing.7) and (6. the Euler equation (6.105 and the law of motion of capital is kt = ηkk kt−1 + ηka at−1 (6. capital stock and technology as expressed in (6. First.11) For further details. 2 7 Note that here we use the formula Eex = eEx+σx /2 . 1 − ηkk L (6.9) where L is the lag operator. φ and N (the steady state value of Nt ). and its standard deviation is much too low compared to the data. 6. ηna . we obtain the risk-free rate in logs as7 1 f rt = γEt ∆ct+1 − γ 2 V ar∆ct+1 − log β.8) Using the process of consumption. We also want to note that in RBC models the risk-free rate is generally too high. (2001) for the details.10) Since the model is log-linear and has normal shocks. ηkk . see Cochrane (2001.7) where ηck . (6. δ. the Sharpe-ratio can be computed in closed form as:8 SR = γηca σε . 8 See Lettau and Uhlig (1999) for the details.2.2 and 2. 6 (6. ηnk . see Hornstein and Uhlig (2001).5).): f rt = γ ηck ηka εt−1 . chs. The second asset market restriction will be the Sharpe-ratio which summarizes the risk-return trade-off: SRt = max f Et Rt+1 − Rt+1 all assets σt [Rt+1 ] .1). γ. g. Matching this process implied by the model to the data will give us the first asset market restriction. we derive from (6. 2 (6. Writing the equation in the log form.1) implies the following expression f regarding the risk-free rate Rt :6 f Rt = βEt (Ct+1 /Ct )−γ −1 .

27 8. A successful model should be consistent with these basic moments of real and financial variables. The table shows that the equity premium is roughly 2% per quarter.19 2. Gong and Semmler (2001) for details of those computations. The Sharpe-ratio. All data are from the U. (6.80 0.42 0. we consider the risk premia of equity (EP) and long-term real bonds (LTBP).59 0.4 Some Stylized Facts Table 6.86 7. Asset market data are from Lettau (1999).S. equals 0.106 Lastly. Units are per cent per quarters. The Sharpe-ratio is the mean of equity premium divided by its standard deviation.13) 1 − βηkk 1 − βηkk Again we refer to Lettau (1999) and Lettau. 6.1 summarizes some key facts on asset markets and real economic activity for the US economy. which measures the risk-return trade-off.12) LT BP = −γ 2 β 1 − βηkk ca ε ηdk ηnk − ηda ηkk ηck ηkk 2 2 EP = − γβ γηca σε .7) as: ηck ηka 2 2 η σ (6. economy at quarterly frequency.2. we will consider the performance of the model concerning the following facts of asset markets. In addition to the well-known stylized facts on macroeconomic variables.27 in .21 Mean 0. Table 6.72 1.5)(6. These can be computed on the basis of the log-linear solutions (6.27 Note: Standard Deviations for the real variables are taken from Cooley and Prescott (1995).99 4.1: Asset Market Facts and Real Variables GDP Consumption Investment Labor Input T-Bill SP 500 Equity Premium Long Bond Premium Sharpe Ratio Standard Deviation 1.24 1. The series are H-P filtered.53 7.17 1.

they fix the discount factor and the risk aversion parameter without estimating them. the initial X t . the Christiano data set can match the real side of the economy better than the commonly used NIPA data set.2 The Data For the real variables of the economy.3. The parameter φ is estimated independently from (6. γ. X t = (1 + g)t−1 X 1 . As we have demonstrated in the last chapter.3 6. For a data observation Xt . The computation of technology shocks requires the values for α and g. r and γ. the average interest rate r and the depreciation rate δ to be estimated. α. we use the 30-day T-bill rate to minimize unmodeled inflation risk. r.1 The Estimation The Structural Parameters to be Estimated The RBC model presented in section 2 contains seven parameters. We compute this initial condition based on . which could be calculated from the sample.3) for given values of g. For the time series of the risk-free interest rate.. 6. The standard deviation of the real variables reveal the usual hierarchy in volatility with investment being most volatile and consumption the smoothest variable.1-1983. we use the data set as constructed by Christiano (1987). we are required to detrend the data into their log-deviation form. the computation of xt depends on X1 .005.. Among the financial variables the equity price and equity premium exhibit the highest volatility.S. The estimation strategy is similar to Christiano and Eichenbaum (1992). To make the data suitable for estimation. In this paper we use the standard values of α = 0. Therefore. g.4).4) by OLS regression. However. 6. The parameter θ is simply dropped due to our log-linear approximation. where Xt is the value of Xt on its steady state path. roughly six times higher than consumption.3. φ.e. i. and N .107 post-war data of the U. for the given g. as we will see shortly.3. Recall that the discount factor is determined in (6. we would like to estimate as many parameters as possible. δ. Of course. some of the parameters have to be pre-specified. the detrended value xt is assumed to take the form log( Xt /Xt ). However. In contrast. the estimation of these parameters is central to our strategy. The data set covers the period from the third quarter of 1955 through the fourth quarter of 1983 (1955. This leaves the risk aversion parameter γ. N is specified as 0.667 and g = 0.

we use the maximum likelihood (ML) method as discussed in Chapter 3. we add restrictions from asset markets one by one.3 The Moment Restrictions of Estimation For the estimation in this chapter. Γ =  0 0 0 −ηna −ηck 0 1    kt−1 kt yt =  ct  .5) . we introduce the restrictions step-by-step. In other words. xt =  at−1  . B =  −ηck 1 0  . 6. First. (6. we obtain 1 X 1 = exp T T T log(Xt ) − i=1 i=1 log (1 + g)t−1 .3. The matrices for the ML estimation are given by     −ηkk −ηka 0 1 0 0 0 −ηca  . i.7) so we can compare our results to those in Christiano and Eichenbaum (1992).108 the consideration that the mean of xt is equal to 0. We call this Model 1 (M1). In order to analyze the role of each restriction. We start by including the following moment restriction of the risk-free interest rate in estimation while still keeping risk aversion fixed at unity: f E b t − rt = 0 . we constrain the risk aversion parameter r to unity and use only moment restrictions of the real variables. The remaining parameters thus to be estimated are δ and r.e. at nt  After considering the estimation with the moment restrictions only for real variables.(6. 1 T T i=1 1 log(Xt /X t ) = T 1 = T =0 T i=1 T 1 log(Xt ) − T 1 log(Xt ) − T T log(X t ) i=1 T i=1 i=1 1 log(X 1 ) − T T log (1 + g)t−1 i=1 Solving the above equation for X 1 .

we are simultaneously estimating γ while imposing a Sharpe-ratio restriction of 0.27 . In this case the matrices B and Γ and the vectors xt and yt can be written as 1  −ηnk B=  −ηnk 0  0 1 0 0 0 0 1 0    0 −ηkk −ηka 0 0  0  0  0 −ηca 0 . as a shortcut.  Model 3 (M3) uses the same moment restrictions as Model 2 but leaves the risk aversion parameter r to be estimated rather than fixed to unity.11) that the Sharpe-ratio is a function of risk aversion. Hence. denoted by γ1 . Of course. which is equal to 0. ηca is itself a complicated function of γ. ηca (γ)σε (6. we estimate the remaining parameter δ and r. Finally.27/[ηca (γ0 )σε ]. denoted by γ0 . then estimate . Recall from (6. have to use an iterative procedure to obtain the solution. Then the new γ. we fix the risk aversion at 50.14). where we start by using only restrictions on real variables and fix risk aversion to unity (M1). We add the risk-free rate restriction keeping risk aversion at one (M2). the standard deviation of the technology shock and the elasticity of consumption with respect to the shock ηca . This procedure is continued until convergence.109 f where bt denotes the return on the 30-day T-bill and the risk-free rate rt is computed as in (6. Given this value. we first set an initial γ. searched by the simulated annealing. For each given δ and r.2. We summarize the different cases in Table 6. a value suggested in Lettau and Uhlig (1999) for generating a Sharperatio of 0. First. is calculated from (6.27 using actual consumption data.27.  nt  bt This equation provides the solution of γ. f rt    .14)  kt  c  yt =  t  . therefore. we impose that the dynamic model should generate a Sharpe-ratio of 0. We take this restriction into account in two different ways. Since it is nonlinear in γ. the Sharpe-ratio restriction becomes γ= 0. given the other parameters δ and r.9).27 as measured in the data (see Table 1). This will be called Model 4 (M4). Model 5 (M5). Γ=  0 0  0 −ηna 0 −1 1 0 0 0  kt−1  at−1   xt =   at  . In the next version. We refer to this version as Model 2 (M2). we.

0633 (0. which only uses restrictions on real variables. Entries without standard errors are preset and hence are not estimated.0144) 0.08% on an annual basis.0041 (0.0132) 0. δ γ = 50 risk-free rate.0185) γ prefixed to 1 prefixed to 1 2. a problem also encountered in Eichenbaum et al.3) is 0.3: Summary of Estimation Results9 Models M1 M2 M3 δ 0. The discount factor is slightly higher while the average risk-free rate decreases. δ. Adding the risk-free rate restriction in Model 2 does not significantly change the estimates. fixing risk aversion at 50 (M4) and estimate it using an iterative procedure (M5). δ γ=1 none M2 r.0160) 0.9972.0189 (0. Table 6.3 summarizes the estimations for the first three models.0144) 0. δ γ=1 risk-free rate M3 r. These results confirm the estimates in Christiano and Eichenbaum (1992). Sharpe-ratio M5 r.0156) r 0. However the implied discount factor now exceeds unity. δ.0344 (0.4719) Consider first Model 1. Sharpe-ratio 6. (1988). The depreciation rate is estimated to be just below 2% which close to Christiano and Eichenbaum’s (1992) results.0088 (0.110 it (M3). Finally we add the Sharpe-ratio restriction. Standard errors are in parentheses. . Christiano and Eichenbaum (1992) 9 The standard errors are in parenthesis. Table 6. The implied discount factor computed from (6.2: Summary of Models Models Estimated Parameters Fixed Parameters Asset Restrictions M1 r. γ risk-free rate. For each model we also compute the implied values of the long-term bond and equity premium using (6.13). γ risk-free rate M4 r.4 The Estimation Results Table 6.12) and (6. The average interest rate is 0.0220 (0.0077 (0.77% per quarter or 3.

091% Table 6.053% EqPrem -0.4 computes the Sharperatio as well as risk premia for equity and long term real bond using (6.5. even negative for certain cases.11) (6. While the implications of the dynamic optimization model concerning the real macroeconomic variables could be considered as fairly successful. Introducing the riskfree rate restriction improves the performance only a little bit. the implications for asset prices are dismal.085% -0. as in M3.082% -0.5: Matching the Sharpe-Ratio Models M4 M5 δ 1 1 r 0 1 γ prefixed to 50 60 10 This value is advocated in Benninga and Protopapadakis (1990). . the model is able to produce sensible parameter estimates when the moment restriction for the risk-free rate is introduced.0065 0.111 avoid this by fixing the discount factor below unity rather than estimating it. the value implied from logutility function. adding the risk-free rate).0065 0.042% -0. The Sharpe-ratio is too small by a factor of 50 and both risk premia are too small as well. Table 6. we will try to estimate the model by adding the Sharpe-ratio moment restrictions.0180 LT BPrem 0. Model 3 is more general since the risk aversion parameter is estimated instead of fixed at unity. The ML procedure estimates the risk aversion parameter to be roughly 2 and significantly different from 1.10 Table 6.13).000% -0.4: Asset Pricing Implications Models M1 M2 M3 SR 0. Table 6. Note that these variables are not used in the estimation of the model parameters. The estimation is reported in Table 6. Adding the risk-free rate restriction increases the estimates of δ and r somewhat.4 shows that the RBC model is not able to produce sensible asset market prices when the model parameters are estimated from the restrictions derived only from the real side of the model (or. The leverage factor ζ is set to 2/3 for the computation of the equity premium. Next. Overall.

The tension between the Sharpe-ratio restriction and the real side of the model causes the estimation to fail. Trying to estimate risk aversion while matching the Sharpe-ratio gives similar results. The point estimate of risk aversion parameter is high (60). al (1995). The question now is how the moment restrictions of the real variables are affected by such a high level of risk aversion. such a high level of risk aversion has the potential to generate reasonable Sharpe-ratios in consumption CAPM models.(6. The estimates for the depreciation factors and the steady-state interest rate converge to the pre-specified constraints. We remark that a similar diagnostic procedure can be found in Watson (1993) and Diebold et. at and the estimated parameters of our loglinear model.5. Our objective here is to ask whether our RBC model can predict the actual moments of the time series for both the real and asset market.6) with kt and at to be their actual observations. It is not possible to estimate the RBC model with simultaneously satisfying the moment restrictions from both the real side and the financial side of the model.5) . as shown in the last row in Table 7. It demonstrates again that the asset pricing characteristics that one find in the data are fundamentally incompatible with the standard RBC model. High risk aversion implies a low elasticity of intertemporal substitution so that agents are very reluctant to change their consumption over time. 11 or the estimation does not settle down to an interior optimum. We now 11 We constraint the estimates to lie between 0 and 1.112 Model 4 fixes the risk aversion at 50.5 The Evaluation of Predicted and Sample Moments Next we provide a diagnostic procedure to compare the second moments predicted by the model with the moments implied by the sample data. . 6.5 shows that the resulting estimates are not sensible. The first row of Table 6. Again the parameter estimates do converge to pre-specified constraints. The reason is of course that a high Sharpe-ratio requires high risk aversion. The depreciation rate converges again to unity as does the steady-state interest rate r. the predicted ct and nt can be constructed from the right hand side of (6. This implies that the real side of the model does not yield reasonable results when risk aversion is 50. As explained in Lettau and Uhlig (1999). Given the observations on kt . The moments are revealed by the spectra at various frequencies.

dotted lines (actual series) for A) consumption. B) labor. We can use the variance-covariance matrix of our estimated parameters to infer the intervals of our forecasted series hence also the intervals of the moment statistics that we are interested in. We hereby employ our most reasonable estimated Model 3. Figure 6. all variables HP detrended (except for excess equity return) .113 consider the possible deviations of our predicted series from the sample series.1: Predicted and Actual Series: solid lines (predicted series). C) risk-free interest rate and D) long term equity excess return.

at 5% significance level. risk-free rate and equity return. all variables detrended (except excess equity return) Figure 6. B) labor. A good match of the actual and predicted second moments of the time series would be represented by the fact that the solid line falls within the interval of the dashed and dotted lines.2 where we compare the spectra calculated from the data samples to the intervals of the spectra predicted. dashed and dotted lines (the intervals of predicted moments) for A) consumption.114 Figure 6. The insufficient match of the latter three series are further confirmed by Figure 6. In particular the time series for . labor effort. C) risk-free interest rate and D) long-term equity excess return. the consumption series can somewhat be matched whereas the volatility in the labor effort as well as in the risk-free rate and equity excess return cannot be matched.2: The Second Moment Comparison: solid line (actual moments).1 presents the Hodrick-Prescott (HP) filtered actual and predicted time series data on consumption. As shown in Chapter 5. by the models.

The latter line of research has been pursued by Jerman (1998) and Boldrin. We use the risk-free interest rate and the Sharpe-ratio in matching actual and predicted asset market moments and compute the implicit risk premia for long real bonds and equity. Other researchers have looked at some extensions of the standard model such as technology shocks with a greater variance. to least to a certain extent. We introduce model restrictions based on asset pricing implications in addition to the standard restrictions of the real variables and estimate the model by using ML method. risk-free interest rate and long-term equity return predicted by the model do not match well the corresponding moments of the sample economy. We find that though the inclusion of the risk-free interest rate as a moment restriction can produce sensible estimates. Moreover. the approximation methods for solving the models might not be very reliable since accuracy tests for the used approximation methods are still missing. 6. 13 See Gr¨ne and Semmler (2004b). utility functions with habit formation.6 Conclusions Asset prices contain valuable information about intertemporal decision making of economic agents. other utility functions. the computed Sharpe-ratio and the risk premia of longterm real bonds and equity are in general counterfactual. given the sensible parameter estimates. We conclude that the standard RBC model cannot match the asset market restrictions. the attempt to match the Sharpe-ratio in the estimation process can hardly generate sensible estimates. and adjustment costs of investment. Moreover. This chapter has estimated the parameters of a standard RBC model taking the asset pricing implications into account. at least with the standard technology shock. for example. Finally. 2001). The computed Sharpe-ratio is too low while both risk premia are small and even negative.13 see also W¨hrmann.115 labor effort. Semmler and Lettau (2001) where time varying characteristics of o asset prices are explored. constant relative risk aversion (CRRA) utility function and no adjustment costs. yet these extensions frequently use extreme parameter values to be able to match the asset price characteristics of the model with the data. risk-free interest rate and equity return fail to do so. the second moments of labor effort. u 12 . Christiano and Fisher (1996. more successful in replicating stylized asset market characteristics.12 Those extensions of the standard model are.

Part III Beyond the Standard Model — Model Variants with Keynesian Features 116 .

Chapter 7 Multiple Equilibria and History Dependence 7. RBC models. For certain substitution properties between consumption and cash holdings those models admit unstable as well as stable high level and low level steady 1 In Keynes (1936) such an open ended dynamic is described in Chapter 5 of his book.1 In recent times such type of dynamics have been found in a large number of dynamic models with intertemporal optimization.5. as introduced in chapter 4. Some of the models are real models. Keynes describes here how higher or lower ”long term positions” associated with higher or lower output and employment might be generated by expectational forces. for example Kim (2004). where consumers’ welfare is affected positively by consumption and cash balances and negatively by the labor effort and an inflation gap from some target rates. that can exhibit locally stable steady state equilibria giving rise to sun spot phenomena.1 Introduction One of the important features of Keynesian economics is that there is no unique equilibrium toward which the economy moves. 117 . The dynamics are open ended in the sense that it can move to low level. or high level of economic activity and expectations and policy may become important to tild the dynamics to one or the other outcomes.2 Multiplicity of equilibria can also arise here as a consequence of increasing returns to scale and/or more general preferences. Others are monetary models. with increasind returns to scale and or more generate preferences. Theoretical models of this type are reviewed in Benhabib and Farmer (1999) and Farmer (12001) and an empirical assessment is given in Schmidt-Grohe (2002). 2 See. Those models have been called indeterminacy models.

Asada and Semmler 1995). however. numerous stochastic growth model have employed adjustment cost of capital. Despite some unresolved issues in the literature on multiple equilibria and indeterminacy3 it has greatly enriched macrodynamic modelling.118 states. see Benhabib et al. Lucas (1967) and Hayashi (1982). Our model version thus can explain of how the economy becomes history dependent and moves. 4 In Feichtinger et al. Here can be indeterminacy in the sense that any initial condition in the neighborhood of one of the steady-states is associated with a path forward or away from that steady state. may lead to thresholds separating different domains of attraction of capital stock. In non-stochastic dynamic models adjustment cost has already been used in Eisner and Stroz (1963). Pampel and Semmler (2001) and Gr¨ne u and Semmler (2004a). see Beyn. Authors in this tradition have also distinguished the absolute adjustment cost depending on the level of investment from the adjustment cost depending on investment relative to capital stock (Uzawa 1968. as. employment and welfare.4 In stochastic growth models adjustment cost has been used in Boldrin. When indeterminacy models exhibit multiple steady state equilibria. Recently. recently has been shown. where investment as well as cpital stock enters the adjustment costs. consumption. Pursuing this line of research we show that one does not need to refer to increasing returns to scale or specific preferences to obtain such results. Eichenbaum and Evans (2001). We show that due to the adjustment cost of capital we may obtain nonuniqueness of steady state equilibria in an otherwise standard dynamic optimization version. The existence of multiple steady state equilibria entails thresholds that separate different domains of attraction for welfare and employment and allow Although these are important variants of macrodynamic models with optimizing behavior. in turn. after a shock or policy influences. at a threshold. and not within a set as the indeterminacy literature often claims. Christiano and Fisher (2001) and adjustment cost associated with the rate of change of investment can be found in Christiano. 3 . to a low or high level equilibria in employment and output. indeterminacy is likely to occur solely at a point in these models. In this chapter we want to show that adjustment cost in a standard RBC model can give rise to multiple steady state equilibria. likely to generate multiplicity of steady state equilibria. (2001). than this permits any path in the vicinity of the steady state equilibria to move back to (away from) the steady state equilibrium. Multiple steady state equilibria. employment and welfare level. As our solution shows thresholds are important as separation points below or above which it is advantages to move to lower or higher levels of capital stock. (2000) it is shown that relative adjustment cost. consumption. where a middle one is an attractor (repellor).

The remainder of this chapter is organized as follows. Nt is per capita working hours. To transform the model into a stationary version we need to detrend the variables.2) (7. we divide both sides of equation (7. consumption and adjustment cost. output. it .5) Above. Q C I Y t In particular.2 The Model The model we present here is the standard stochastic growth model of RBC type.1) . Yt . ct . Yt and Qt . Ct and Qt are the level of capital stock. Kt . kt ≡ Kt . as in King et al. The proof of the propositions in the text is provided in the appendix.6 . Section 2 presents the model.(7. For this purpose. X t t .4) (7.3Nt with N denoting the sample mean of Nt .3)α (7. At is the temporary shock in technology. Section 5 concludes the chapter. The model is non-stationary due to Xt . investment. The state equation for the capital stock takes the form: Kt+1 = (1 − δ)Kt + It − Qt where It = Yt − Ct and Yt = At Kt1−α (Nt Xt )α (7. It . Note N that here nt is often regarded to be the normalized hours with its sample 5 6 In the literature those thresholds have been called Skiba-points (see Skiba.1) (7. ct ≡ Xt . Ct .119 for an open ended dynamics depending on the initial conditions and policy influences impacting the initial conditions. nt is defined to be 0. augmented by adjustment cost.3) Above. yt ≡ Xtt and qt ≡ Xt . all in real terms.3) by Xt : kt+1 = 1 [(1 − δ)kt + it − qt ] 1+γ it = yt − c t 1−α yt = At kt (nt N /0. yt and qt are the detrended variables for Kt . 7. (1988). it ≡ Xtt . 1978). kt . It . and Xt is the permanent (including both population and productivity growth) shock that follows a growth rate γ. Section 3 studies the adjustment cost function which gives rise to multiple equilibria and section 4 demonstrates the existence of a threshold5 that separates different domains of attraction.

4) .7) (7.9) with it and yt to be given by (7.6) (7.10) (7.120 mean equal to 30 %. The following proposition concerns the steady states Proposition 5 Assume At has a steady state A. kt and λt . we obtain the following first-order conditions: 1 β − Et λt+1 [1 − q ′ (it )] = 0 ct 1 + γ −θ β αyt + [1 − q ′ (it )] = 0 Et λt+1 1 − nt 1 + γ nt (1 − α)yt β Et λt+1 (1 − δ) + [1 − q ′ (it )] = λt 1+γ kt kt+1 = 1 [(1 − δ)kt + it − q(it )] 1+γ (7.11) . determine the following steady states: [bφ(i) − 1][i − q(i)] − aφ(i)1− α − q(i) = 0 k= 1 [i − q(i)] γ+δ 1 (7.(7. when evaluated at their certainty equivalence form. we first form the Lagrangian: ∞ L = t=0 ∞ β t [log(ct ) + θ log(1 − nt )] − Et β t+1 λt+1 kt+1 − t=0 1 [(1 − δ)kt + it − q(it )] 1+γ Setting to zero the derivatives of L with respect to ct .9). nt .5) respectively.4) and (7.8) (7. Equation (7. We shall assume that the detrended adjustment cost qt depends on detrended investment it : qt = q(it ) The objective function takes the form ∞ max E0 t=0 β t [log ct + θ log(1 − nt )] To solve the model.

Also if q(i) is linear.0034 500 . 7.3) θ (1 + α ) θ b= γ+δ m φ(i) = 1 − q ′ (i) (7.17) (7.1: Table 7.3 The Existence of Multiple Steady States Many non-linear forms of q(i) may lead to a multiplicity of equilibria.121 φ(i) n= A 1 α k 0.11) (7.2) shows the corresponding derivative q ′ (i) with the parameters given in Table 7.15). and then from (7.10) i is uniquely determined. β(1 − α) (7. (7.1: The Parameters in the Logistic Function q0 q1 q2 2500 0. if q(i) is linear.16) (7.14) (7. Here we shall only consider that q(i) takes the logistic form: q(i) = q0 q0 exp(q1 i) − exp(q1 i) + q2 1 + q2 (7.19) Note that equation (7.18) and m= (1 + γ) − (1 − δ)β . Therefore.18) indicates that φ(·) is constant. depending on the assumption that all other steady states can be uniquely solved via (7.15) y = φ(i)k c=y−i λ= where (1 + γ) βc 1 − q ′ (i) a= 1 α α A (N /0.10) determines the solution of i.13) (7.3 N (7.1) shows a typical shape of q(i) while Figure (7.12) (7.20) Figure (7. equ. no multiple steady state equilibria will occur.

1: The Adjustment Cost Function Figure 7.122 Figure 7.20) we posit a restriction such that q(0) = 0.1). Another restriction. which is reflected in Figure (7.2: The Derivatives of the Adjustment Cost Note that in equation (7. is that .

The following proposition concerns the existence of multiplicity of equilibria.20) subject to (7. Therefore.1) and (7. Both restrictions seem reasonable. where q(i) takes the form as in (7.10).21) indicating that the adjustment cost should never be larger than the investment itself.2) need to be discussed. if there are some i′ s at which f (i) < 0. three equilibria will occur. The curves cut the zero line three times. We shall first remark that the assumption bm − 1 > 0 is plausible given the standard parameters for b and m also f (i) = 0 is indeed the equation (2. • In the range (i2 . denoted i1 in the range (0.2. this proposition indicates a condition under which multiple steady states will occur.10). The two critical points.21). Proposition 6 Let f (i) ≡ [bφ(i) − 1][i − q(i)] − aφ(i)1− α − q(i).3. +∞). q ′ (i) > 1.1 and other standard parameters as given in Table 7. A negative φ(i) will lead to a complex 1 φ(i)1− α in (7. indicating three steady states of i. in Figure (7. i1 ) such that f (i) = 0. +∞) at which f (i) < 0. i1 ) and the other is (i2 .18) indicates that φ(i) is negative since from (7. i1 and i2 . 1 . In particular. if there exist some i′ s in (i2 .123 q(i) < i (7. we show the curve of f (·) given the empirically plausible parameters as reported in Table 7.19) m > 0. +∞). Meanwhile between i1 and i2 . Assume bm − 1 > 0. These are the two points at which q ′ (i) = 1. • There exists one and only one i. When q ′ (i) > 1. then there must exist two i’s. denoted as i2 and i3 such that f (i) = 0. equation (7. We therefore obtain two feasible ranges for the existence of steady states of i : one is (0. In Figure 7. A formal mathematical proof of the existence of this condition is intractable.

7619 We use a numerical method to compute the three steady states of i : i1 .0189 N 480.5800 γ 0.2: The Standard Parameters of RBC Model7 α 0.3: Multiplicity of Equilibria: f(i) function Table 7.0045 β 0.15).124 Figure 7.1. with estimated a0 and a1 are given respectively by 0. i2 and i3 .(7. . Given these steady states. the other steady states are computed by (7.9811. Table 7. A is derived from At = a0 + a1 At−1 + εt .0333 and 0.3 uses essentially the same parameters as reported in Table 5.2080 θ 2.00 A 1.9930 δ 0. The result of the computations of the three steady states are: 7 The N below is calculated on the assumption of 12 weeks per quarter and 40 working hours per week.11) .

00019568017 1101.3.2273 3286.9).4).125 Table 7. its corresponding steady states in capital.00083463435 1058. Therefore it reflects one corresponding welfare level.23) where i = 1. at least compared to i1 .4307 3575. Then by applying an approximation method as discussed in chapter 2.4 The Solution An analytical solution to the dynamics of the model with adjustment cost is not feasable. output. and consumption are all greater than those corresponding to i2 .642 0. On the other hand i1 and i3 exhibit also differences in welfare and employment. Assume that At stays at its steady state A so that we only consider the deterministic case.3: The Multiple Steady States i k n c y λ V Corresponding to i1 564.22) c i nt = Gi kt + gn n (7.4778 119070.347 0. (7. yet its corresponding steady state in labor effort and thus employment is larger for i2 . We therefore can simulate the solution paths by using the above two equations together with (7.9621 0. Set 2 and Set 3 corresponding to i1 .11 0. 7. i2 and i3 . The question then .5634 9180.6369 Note that above. This already indicates that i2 may be inferior in terms of the welfare. For this. 2. therefore. The ith set of decision rule can then be written as i ct = Gi kt + gc (7.25436594 3011.9826 0.0412 0.30904713 2111. We.7118 Corresponding to i2 1175. have to rely on an approximate solution. we obtain three sets of linear decision rules for ct and nt corresponding to our three sets of steady states.5) and (7.07481 Corresponding to i3 4010. 3. we shall first linearize the first-order conditions around the three sets of steady states as reported in Table 7. we shall denote them as decision rule Set 1. The steady state corresponding to i2 deserves some discussion.0017500565 986. The welfare of i1 is larger than i2 .53140 18667.33781724 5169.7553 11672. V is the value of the objective function at the corresponding steady states. For notational convenience.

Figure 7. if k0 is close to the k 1 . The likely conjecture is that this will depend on the initial condition k0 . should be used.4 compares the welfare performance of our three sets of linear decision rules. Set 2 and Set 3 respectively) . we choose the range [8000.23). This consideration further indicates that there must exist some thresholds for k0 at which intervals are divided regarding which set of decision rule should be applied. the steady state of k corresponding to i1 . 138000] for k0 . For example. In this exercise. we would expect that the decision rule 1 is appropriate. dotted and dashed lines for decision rule Set 1. where ∞ V ≡ t=0 β t [log(ct ) + θ log(1 − nt )] We should choose the range of k0 ’s that covers the three steady states of k’s as reported in Table 7. To detect such thresholds. as expressed by (7.3. we compute V . Specifically. Figure 7.126 arises as to which set of decision rule.4: The Welfare Performance of three Linear Decision Rules (solid. we shall compute the value of the objective functions starting at different k0 for our three decision rules.22) and (7.

A variety of further economic models giving rise to multiple equilibria and thresholds are presented in Gr¨ne and Semmler (2004a). as studied in chapter 4. a specific form of adjustment cost of capital was introduced. He in fact shows that if for the generalized RBC model. 7. since this leads to a higher welfare. Overall.5. As our simulation shows thresholds are important as separation points below or above which it is optimal to move to lower or higher levels of capital stock. consumption. there is an intersection of the two welfare curves corresponding to the decision rules Set 1 and Set 3. ξ > 0. can be regarded as a threshold. occuring around k0 = 36900. we first realize that the value of the objective function is always lower for decision rule Set 2. dynamic models giving rise to indeterminacy usually have to presume some weak externalities and increasing returns and/or more general preferences. If k0 < 36900. in turn. Kim (2004) discusses to what extent weak externalities in combination with more complex preferences will produce indeterminacy. However.127 From Figure 7. the household may choose decision rule Set 3. for illustrative purpose. employment and welfare level. then the model generates local indeterminacy. to a low or high level equilibrium in employment and output. after a shock. employment and welfare.4. Multiple steady state equilibria. consumption. lead to thresholds separating different domains of attraction of capital stock. if k0 > 36900. u The above model stays as close as possible to the standard RBC model except. On the other hand. This intersection. the household should choose decision rule Set 1 since it will allow the household to obtain a higher welfare. . This is likely to be caused by its inferior welfare performance at the steady states for which we compute the decision rule. as shown in many recent contributions. On the other hand. Our model thus can easily explain of how an economy become history dependent and moves.5 Conclusion This chapter shows that the introduction of adjustment cost of capital may lead to non-uniqueness of steady state equilibria in an otherwise standard RBC model. there is a weak externality. the issue of multiple equilibria and indeterminacy is an important macroeconomic issue and should be pursued in further research in the future..

3)α (7.13) with φ(i) given by (refequ7.18) and (7.32) φ(i) n= A 1 α k 0.4) .26) (7.6 7.24) (7.30) (nN /0.3)α 1 i − q(i) γ+δ which is equation (7. (7.28) to express i while using φ(i)k for y.30) (γ + δ)k = φ(i)k − c − q(i).31) which is equation (7.(7.9) takes the form (7.11).12).6. using (7.28) (7.128 7.26).32) k y Using (7.29). From (7. Further from (7. we derive from (7. k= (1 + γ) − (1 − δ)β y = β(1 − α) 1 − q ′ (i) k = φ(i) From (7.3 N (7.27) (7.29) (7.25) =1 (7. which is equivalent to c = (φ(i) − γ − δ)k − q(i) .1 Appendix: The Proof of Propositions 5 and 6 The Proof of Proposition 5 1 β λ 1 − q ′ (i) = 0 − c 1+γ αy β −θ λ + 1 − q ′ (i) = 0 1−n 1+γ n β (1 − α)y (1 − δ) + 1 − q ′ (i) 1+γ k 1 k= (1 − δ)k + i − q(i) 1+γ i=y−c y = Ak 1−α The certainty equivalence of equation (7. We derive from (7. Next.33) which is (7. y −α = Ak (nN /0.27).19).31) to express k .

37) with a given by (7.37) is equivalent to (7.33): c = = = = = 1 1 (1 − n)α α A φ(i)1− α (N /0.24) and (7.3)α 1−α y k α−1 (N /0.34) y n −1 The first equation is equivalent to (7.35) k n y n (N /0. q(i)) Meanwhile from (7.36) = A α φ(i)1− α (N /0. y = A n = A 1 y n (7. Let c1 (·) = c2 (·).17). we obtain 1 α φ(i)k θ (1 + (1 + 1 1 α )φ(i) − γ − δ (i − q(i)) = aφ(i)1− α + q(i) θ γ+δ (7.35) and then express n in terms of (7. k) c2 (φ(i).29).3)φ(i)1− α − A α (N /0. k) 1 α k 0.36) into (7.3) θ 1 1 1 α α α 1 φ(i) A (N /0.10) with b given by (7.3)α (7.38) Equation (7.3)φ(i)1− α θ θ A 1 α aφ(i)1− α − φ(i)k θ c2 (φ(i).129 = c1 (φ(i). k. We thus obtain (φ(i) − γ − δ)k − q(i) = aφ(i)1− α − which is equivalent to 1 α )φ(i) − γ − δ k = aφ(i)1− α + q(i) θ Using (7. .25): β 1 θ λ 1 − q ′ (i) = = 1+γ c (1 − n)α (1 − n)α θ 1−α (7.15) while the second equation indicates c= where from (7.30) for k.16).3) 1 Substitute (7.3 N (7.

In this case. i1 ) and (i2 . Therefore f (i) → +∞. we only need to prove that f (i) → +∞ and f (i) < 0 when i → i2 and f (i) → +∞ and f (i) > 0 when i → +∞. q ′ (i) → 0 and q(i) → q m where q m is the upper limit of q(i). Meanwhile from (7. 1 1 − 1)φ(i)− α α f ′ (i) = φ′ (i) b[i − q(i)] + a( where φ′ (i) = + (bm − 1) (7. Therefore f (i) → +∞. . and φ(i)1− α → 0. since q ′′ (i) → 0 and hence φ′ (i) → 0. in the range (i2 .16) 1 1 and (7.40) We shall first realized that a. Assume i → 0. Consider first i → i2 . this further 1 indicates φ(i)1− α → 0. φ′ (i) → −∞ (since q ′′ (i) < 0) and therefore f ′ (i) < 0. there exists one and only one i such that f (i) = 0.39). q ′′ (i) > 0 and hence f ′ (i) > 0. Meanwhile. which is positive. Since f ′ (i) > 0. Consider now i → +∞. Next assume i → i1 . q ′′ (i) < 0 and hence f ′ (i) can either be positive or negative due to the sign of (bm − 1). This indicates that [i − q(i)] → +∞. Since 1 − α is negative. Therefore f ′ (i) → (bm − 1).6. by the intermediate value theorem. Meanwhile in the range (0.130 7. Again in 1 this case. We thus have proved the first part of the proposition.39) mq ′′ (i) [1 − q ′ (i)]2 (7. Therefore f (i) → +∞. +∞) φ(i) is positive and hence f (i) is continuous and differentiable. Next we turn to the range (i2 . In this case. i1 ). To verify the second part of the proposition. q ′ (i) → 1. Let us first consider the range (0. and therefore the term b[i − q(i)] + a( α − 1)φ(i)− α is positive.17) and (7. q(i) → 0 1 and therefore f (i) → −aφ(0)1− α < 0. +∞). 1 q ′ (i) → 1 and therefore φ(i) → +∞. We thus have proved the second part of the proposition. However.19). i1 ). +∞). φ(i) → +∞.2 The Proof of Proposition 6 Note that within our two ranges (0. In particular. In this case. b and m are all positive as indicated by (7.

It has preliminarily been explored in Chapter 5 of this volume. especially in Chapter 5. can explain the volatilities of some macroeconomic variables such as output. at least at business cycle frequency. may be related to the specification of the labor 131 . in indeterminacy models it turns out to be excessively high. However. Another problem in RBC literature related to this. the standard real business cycle (RBC) model. We want to note that the labor market problems. as developed by Benhabib and his co-authors. a negative or almost zero correlation. not sufficiently been studied in the literature. the lack of variation in the employment and the high correlation between consumption and employment in the standard RBC model. This problem of excessive smoothness in labor effort is well-known in the RBC literature. A recent evaluation of this failure of the RBC model is given in Schmidt-Grohe (2001).1 Introduction As discussed in the previous chapters. despite its rather simple structure. Lastly. to our knowledge. to explain the actual variation in employment the model generally predicts an excessive smoothness of labor effort in contrast to empirical data. the RBC model predicts a significantly high positive correlation between technology and employment whereas empirical research demonstrates. This problem of excessive correlation has. consumption and capital stock. There the RBC model is compared to indeterminacy models.Chapter 8 Business Cycles with Nonclearing Labor Market 8. Whereas in RBC models the standard deviation of the labor effort is too low. is that the model implies a excessively high correlation between consumption and employment while empirical data only indicates a week correlation. These are the major issues that we shall take up from now on.

the demand and supply. On the other hand. Rotemberg and Woodford (1995. and normalized to 1. Gali (1999) and Woodford (2003) present a variety of models with monopolistic competition and sticky price. One possible approach for such improvement is to introduce the Keynesian feature into the model and to allow for wage stickiness and a nonclearing labor market. the excessively high correlation between technology and employment. 2 The labor supply in the these models is implicitly assumed to be given exogenously. the moments of labor effort become purely demand-determined. an explicit labor demand function is introduced from the decision problem of the firm side. In this chapter we are mainly concerned with this puzzle. that is.1 We shall remark that in those studies with nonclearing labor market. The variations in labor and consumption both reflect the moments of the two state variables. which had been popular before 1980’s and the other is the New Keynesian analysis based on monopolistic competition. Although in the specification of its model structure (see Chapter 4). one has to make improvement upon labor market specifications. The technology puzzle. labor and capital markets). The research along the line of micro-founded Keynesian economics has been historically developed by the two approaches: one is the disequilibrium analysis. This further suggests that to resolve the labor market puzzle in a real business cycle model. capital and technology. King and Wollman (1999). there are models of efficiency wages where nonclearing labor market could occur. the real business cycle model specifies both sides. Attempts have now been made recently that introduce the Keynesian features into a dynamic optimization model. However.132 market. For the labor market. Benassy (1995) and Uhlig and Xu (1996) among others. we will present a stochastic dynamic optimization model of RBC type but argumented by Keynesian features along the line of above See Danthine and Donaldson (1990. Hence nonclearing of the labor market occurs if the demand is not equal to 1. 1 .2 In this chapter. preliminarily discussed in Chapter 5. the decision rule with regard to labor supply in these models is often dropped because the labor supply no longer appears in the utility function of the household. the moments of labor effort result from the decision rule of the representative household to supply labor. 1995). will be taken up in Chapter 9. It is therefore not surprising why employment is highly correlated with consumption and why the variation of consumption is a smooth as labor effort. Consequently. 1999). of a market. and therefore we could name it the labor market puzzle. the moments of the economy are however reflected by the variation on one side of markets due to its general equilibrium nature for all markets (including output.

hiring and firing cost. evolution of wealth. See Malinvaud (1994) for a more extensive list of those factors . 3 . 5 On the demand side one could add beside the pure technology shocks and the real wage. 2003). 4 0ne could perceive a change in secular forces concerning labor supply from the side of households. Phelps and Zoega 1998). Important work of this type can be found in Rotemberg and Woodford (1995. Henderson and Levin (2000) and Woodford (2003). changes in preferences. the market in those models are still assumed to be cleared since the producer supplies the output according to what the market demands at the existing price. union bargaining. For an extensive reference to those factors. taxes and subsides which all affect labor supply. Some of those secular forces are often mentioned in the work by Phelps. see also Ljungqvist and Sargent (1998. 6 Another line of recent research on modeling unemployment in a dynamic optimization framework can be found in the work by Merz (1999) who employs search and matching theory to model the labor market. However. 1999).6 We will assess this model by employing U. we shall allow for wage stickiness3 and nonclearing labor market. see Blanchard and Wolfers (2000) and Ljungqvist and Sargent (1998. productivity and real wage. A similar consideration is also assumed to hold for the labor market. is that a variety of employment rules could be adopted to specify the realization of actual employment when a nonclearing market emerges. As will become clear.4 With the determination of labor demand. Yet before we formally present the model and its calibration we want to note that there is a similarity of our approach chosen here and the New Keynesian analysis. concerning Europe. capital shortages and slow down of growth. unlike other recent models that drop the decision rule of labor supply. Recently. derived from the marginal product of labor and other factors. generous unemployment compensation and related welfare state benefits have been added to the list of factors affecting the supply of labor. for example. the role of aggregate demand. King and Wollman (1999). However. One of the advantages of this formulation. In particular. and German macroeconomic time series data. demographic changes. high interest rates (Phelps 1997. New Keynesian literature presents models with imperfect competition and sluggish price and wage adjustments where labor effort is endogenized. 2003).133 consideration. unemployment resulting from search and matching problems can rather be viewed as frictional unemployment (see Malinvaud (1994) for his classification of unemployment). Here the wage rate is set optimally by a representative of the household according to Already Keynes (1936) had not only observed a wide-spread phenomenon of downward rigidity of wages but has also attributed strong stabilizing properties of wage stickiness. in Europe. Gali (1999). Erceg. we view the decision rule of the labor effort as being derived from a dynamic optimization problem as a quite natural way to determine desired labor supply. see Phelps (1997) and Phelps and Zoega (1998). for example.5 the two basic forces in the labor market can be formalized. as will become clear. this will be different from the unemployment that we will discuss in this chapter. Yet. intensity of job search and unemployment.S.

see Benassy (1984) among others. we shall present a dynamic model that allows for a noncleared labor market. Section 2 presents the model structure. while resolving the price setting problem. Section 5 concludes. yet the labor market is still cleared since the household is assumed to supply labor whatever the market demand is at the given wage rate. Yet. but simply supplies whatever the quantity the market demands for at the current price. There are also traditional Keynesian models that allow for disequilibria. In the current chapter we are only concerned with a nonclearing of the labor market as brought into the academic discussion by the disequilibrium school. economy. 8 For models with multiple steps of optimization in the context of learning models. the decision with regard to quantities seems to be unresolved. This has now been resolved by the modern literature of monopolistic competition as can be found in Woodford (2003). Woodford (2003. 3). and therefore they can somewhat be consolidated as a more complete system for price and quantity determination within the Keynesian tradition. Once the wage has been set. However.8 The remainder of this chapter is organized as follows.S. the well-known problem of these earlier disequilibrium models was that they disregard intertemporal optimizing behavior and never specify who sets the price. Calvo (1983) or other theories of sluggish wage adjustment. Appendices I and II in this chapter contain some technical derivation of the adaptive optimization procedure whereas Appendix III undertakes a welfare comparison of the different model variants. The supplier may no longer behave optimally concerning their supply decision. see Dawid and Day (2003). Yet. We will derive the nonclearing of the labor market from optimizing behavior of economic agents but it will be a multiple stage decision process that will generate the nonclearing of the labor market. See. 7 . Section 4 undertakes the same exercise for the German economy. see Chapter 9. we wish to argue that the New Keynesian and approach are complementary rather than exclusive. which could be seen to be caused by staggered wage as described by Taylor (1980).134 the expected market a demand curve for labor.7 In this chapter. for example. it is assumed to be sticky for some time period and only a fraction of wages are set optimally in each period. Sargent (1998) and Zhang and Semmler (2003). The objective to construct a model such as ours is to approach the two aforementioned labor market problems coherently within a single model of dynamic optimization. In those models there will be a gap again between the optimal wage and existing wage. For further details of this consolidation. Section 3 estimates and calibrates our different model variants for the U. ch.

we. it will allow us to save some effort to explain the nominal price determination. This leaves us to focus the discussion only on the wage setting. in this model shall assume that the market to be reopened at the beginning of each period t. one can imagine any initial value of the rental rate of capital when the firm and the household make the quantity decisions and express their desired demand and supply. let us first describe how prices and wages are set.2. This indicates that the wage wt and the rental rate of capital stock rt are all measured in terms of the physical units of output. the wage rate wt and the rental rate of the capital stock rt . Unlike the typical RBC model. We can then ignore its setting.1 The Wage Determination As usual we presume that both the household and the firm express their desired demand and supply on the basis of given prices. as will become clear. we shall first discuss how the period t prices are determined at the beginning of period t. There are three markets in which the agents exchange their products. a focus in the recent New Keyensian literature. It simply hires capital and labor to produce output. Therefore.2 An Economy with Nonclearing Labor Market We shall still follow the usual assumptions of identical households and identical firms. Therefore we are considering an economy that has two representative agents: the representative household and the representative firm. however. The revenue from selling factor services can only be used to buy the goods produced by the firm either for consuming or for accumulating capital. Let us first discuss how the wage rate 9 For our simple representative agent model without money. One of them should serve as a numeraire. . the output price pt always equals 1. This is necessary for a model with nonclearing markets in which adjustments should take place which leads us to a multiple stage adaptive optimization behavior. this simplification does not effect our major result derived from our model. sells the output and transfers the profit back to the household. The household owns all the factors of production and therefore sells factor services to the firm.135 8. The representative firm owns nothing. it is assumed to be adjustable so as to clear the capital market. Note that there are three commodities in our model.9 As to the rental rate of capital rt . including the output price pt . in which one could assume an once-for-all market. Meanwhile. Indeed. labor and capital. Yet. which we assume to be the output. 8.

where wages are set optimally. we. but a fraction of wages may be sticky. however. 3) has suggested. the household or its representative. or their respective representative. follow the recent approach. Dutta and Bergen (2000). There is the so-called menu cost for changing prices (though this seems more appropriate for the output price). changing the price.11 In actual bargaining it is likely. a wage contract may also be understood from an asset price perspective. ch. Woodford (2003.10 assumes that it is the supplier of labor. We may assume that the wage rate is set by a representative of the household which acts as a monopolistic agent for the supply of labor effort as Woodford (2003. Erceg. On the other hand.221) introduces different wage setting agents and monopolistic competition since he assumes heterogenous households as different suppliers of differentiated types of labor. in discussing wage setting. Eichenbaum and Evans (2001) and Zbaracki. Despite this variety of wage setting models. for instance. Christiano. computation and communication. there are also models that discuss how firms set the wage rate. In appendix I. .136 might be set. 12 This is emphasized by Rotemberg (1982) 13 See the discussion in Christiano. There is also a reputation cost for changing prices and wages. In principle a wage contract could be treated as a debt contract with 10 See. ch. however.13 All these efforts cause costs which may be summarized as adjustment costs of changing the price or wage. that recently many theories have been developed to explain wage and price stickiness. The adjustment cost for changing the wage may provide some reason for the representative of the household to stick to the wage rate even if it is known that current wage may not be optimal. differentiated types of labor and refer only to aggregate wages. as Taylor (1999) has pointed out. which may be costly. We want to note. Levy. Eichenbaum and Evans (2001) and Woodford (2003) among others. enter usually into long term employment contracts involving labor supply for several periods with a variety of job security arrangements and termination options. needs information. that sets the wage rate whereas the firm is simply a wage taker. We neglect.12 In addition. or wage. namely as derivative security based on a fundamental underlying asset such as the asset price of the firm. Henderson and Levin (2000). p. 11 These are basically the efficiency wage models that are mentioned in the introduction. (2001) we present a wage setting model. Most recent literature. in close relationship to Woodford (2003. however. that wage setting is an interacting process between firms and households. Erceg et al (2000) and Christiano et al. One may also derive this stickiness of wages from wage contracts as in Taylor (1980) with the contract period to be longer than one period.3). Since workers. Ritson.

Indeed. would depend on some specifications in contractual agreements. When the price. For further details of the pricing of such liabilities. Yet. ch. All we need to presume is that. in general it can be assumed to be arranged for several periods. As noted above we do not have to posit that the wage rate. 16 This type of wage setting is used in Woodford (2003. (2000). kt+i+1 i=0 . the household is then going to express its desire of demand for goods and supply of factors. 4) and Erceg et al. One may imagine that the dynamics of the wage rate. 3) and briefly sketched. has been set. Christiano et al. particularly with differentiated types of labor. t+i t+i t+i For such a treatment of the wages as derivative security. see Uhlig (2003). follows the updating scheme as suggested in Calvo’s staggered price model (1983) or in Taylor’s wage contract model (1980). Explicit formulation of wage dynamics of a Calvo type of updating scheme. The new signed wage contracts should respond to the expected market conditions not only in period t but also through t to t + j. where j can be regarded as the contract period. to be reviewed in each time period and therefore new wage contracts will be signed in each t. wages are only partially adjusted. there is always a fraction of individual prices to be adjusted in each period t. In Calvo’s model. for example. 14 .14 As in the case of the pricing of corporate liabilities the wage contract. We define the household’s desired demand and supply as those that can allow the household to obtain the maximum utility on the condition that these demand and supply can be realized at the given set of prices.2. the empirical study of our model does not rely on how we formulate the wage dynamics. id . 8. is studied in Erceg et al (2000). A more explicit treatment is not needed here. wt . including the wage. ns .16 Through such a pattern of wage dynamics. to be completely fixed in contracts and never responds to the disequilibrium in the labor market. giving rise to a sticky aggregate wage. as underlying our model. ch.2 The Household’s Desired Transactions The next step in our multiple stage decision process is to model the quantity decisions of the households.137 similar long term commitment as exists for other liabilities of the firm. We can express the household’s desired demand and supply as ∞ s a sequence of output demand and factor supply cd . for an aggregate wage in appendix I of this chapter. (2001) and Woodford (2003. see Gr¨ne and Semmler (2004c). wage contracts are only partially adjusted. the value of the derivative security.15 This can be expressed in our model as the expiration of some wage contracts. u 15 These are basically those prices that have not been adjusted for some periods and there the adjustment costs (such as the reputation cost) may not be high. for example. as will become clear in section 3.

ns . ns .6) (8. where id = f (kt . The solution to this problem can be written as: s cd t+i = Gc (kt+i . ns ) t+i t+i (8. are actually carried into the market by the household for t exchange due to our assumption of re-opening of the market. Note that (8. Note that here we have used the superscripts d and s to refer to the agent’s desired demand and supply.2) in terms of (8. Next.138 where it+i is referred to investment.7) We shall remark that although the solution appears to be a sequence ∞ d s s cd .1) and (8.3 The Firm’s Desired Transactions As in the case of the household. At+i ) − cd t+i t+i (8.2. ns .3) to eliminate id . At+i ) s ns t+i = Gn (kt+i .5) form i=0 a standard intertemporal decision problem.2) (8. At+i ) − wt+i ns − rt+i kt+i t+i t+i (8.2) can be regarded as a budget constraint.4) Explaining πt+i in (8. 8. The . ns ) along with (id . wt+i . we obtain t s s s kt+i+1 = (1 − δ)kt+i + f (kt+i .3) Above πt+i is the expected dividend.4) and then substituting from (8. At+i ) (8. we thus obtain i=0 s s πt+i = f (kt+i . equations (8.5) For the given technology sequence {At+i }∞ .nt+i }∞ i=0 max Et i=0 β i U (cd . At ) − t+i t+i t t t t s cd and kt = kt . rt+i }∞ .1) subject to s s c d + id t+i t+i = rt+i kt+i + wt+i nt+i + πt+i s s kt+i+1 = (1 − δ)kt+i + id t+i (8. kt ). ns i=0 only (ct . The equality holds due to the assumption Uc > 0. we shall consider how the representative household calculates πt+i . the firm’s desired demand for factors and supply of goods are those that maximize the firm’s profit under the condition that all its intentions can be carried out at the given set of prices. The decision problem for the household to derive its demand and supply can be formulated as ∞ {ct+i . Assuming that the household knows the production function f (·) while it expects that all its optimal plans can be fulfilled at the given price sequence {pt+i .

At some point the marginal disutility of work may be higher than the pre-set wage. the household’s willingness to supply labor effort is not necessarily equal to its actual supply or the market demand.17 Given a nonclearing labor market. we cannot regard the labor market to be cleared. we shall have to specify what rule should apply regarding the realization of actual employment. the solution to the above optimization problem should satisfy d rt = fk (kt .9) (8. nd ) t (8. This indicates that even if there are no adjustment costs so that the household can adjust the wage rate at every time period t. as implicitly expressed in (8. nd . there is no reason to believe that firm’s demand for labor.10) where fk (·) and fn (·) are respectively the marginal products of capital and labor. This indicates that s d kt = kt = kt As concerning the labor market. Next we shall consider the transactions in our three markets.2. nd .2. kt . Let us first consider the two factor markets. An illustration of this statement.10) should be equal to the willingness of the household to supply labor as determined in (8. Therefore. though in a simpler version. Such concept has somehow disappeared in the new Keynesian literature in which the household supplies the labor effort according to the market demand and therefore it does not seem to face excess demand or supply. 8. kt . the disequilibrium in the labor market may still exist. Yet.139 optimization problem for the firm can thus be expressed as being to choose d s the input demands and output supply (nd . yt ) that maximizes the current t profit: s d max yt − rt kt − wt nd t subject to s d yt = f (At . . At ) t (8.1. 17 Strictly speaking.7) given the way the wage determination is explained in section 8. In Appendix I these points are illustrated in a static version of the working of the labor market. is given in Appendix I. At ) t d wt = fn (kt .4 Transaction in the Factor Market and Actual Employment We have assumed the rental rate of capital rt to be adjustable in each period and thus the capital market is cleared. even in this case. the so-called labor market clearing should be defined as the condition that the firm’s willingness to demand factors is equal to the household’s willingness to supply factors.8) For regular conditions on the production function f (·).

This case corresponds to what is discussed in the literature as labor hoarding where firms hesitate to fire workers during a recession because it may be hard to find new workers in the next upswing. We want to note that the unemployment we discuss here is certainly different from the frictional unemployment as often discussed in search and matching models. It has been widely used in the literature on disequilibrium analysis (see. 20 Given the rather corporate relationship of labor and firms in Germany. (1993). Results of this study are reported in Appendix III of this chapter.11) (8. the first is the famous short-side rule when nonclearing of the market occurs. a proper study would have to compute the firms’ cost increase and profit loss and the workers’ welfare loss. 1989) and the marginal disutility is also rather flat the overall loss may not be so high. Above. 19 This could be achieved by employing the same number of workers but each worker supplying more hours (varying shift length and overtime work).12) ωnd t + (1 − ω)ns t . 1). (1993).140 Disequilibrium Rule: When disequilibrium occurs in the labor market either of the following two rules will be applied: nt = min(nd . the unemployment is mainly due to adaptive optimization of the household given the institutional This could also be realized by firms by demanding the same (or less) hours per worker but employing more workers than being optimal. firms will employ more labor than what they wish to employ. among others). the marginal cost for firms is rather flat (as empirical literature has argued. for instance. see Burnside et al. see Blanchard and Fischer. In our representative agent model. see Burnside et al.19 Such mutual compromises may be due to institutional structures and moral standards of the society. for example. Such a rule that seems to hold for many other countries was already discussed early in the economic literature. Note that in this case firms may be off their marginal product curve and thus this might require wage subsidies for firms as has been suggested by Phelps (1997). this compromising rule might be considered a reasonable approximation. 18 (8.18 On the other hand. workers will have to offer more effort than they wish to offer. The second might be called the compromising rule. when there is excess demand. for a more formal treatment of this point. Benassy 1975. 20 Note that if firms are off their supply schedule and workers off their demand schedule. If there is excess supply. This rule indicates that when nonclearing of the labor market occurs both firms and workers have to compromise. The departure of the value function – as measuring the welfare of the representative household from the standard case – is studied in Gong and Semmler (2001). however. see Meyers (1968) and also Solow (1979). ns ) t t nt = where ω ∈ (0. 1984. If.

). Recently. see also Walsh (2002) who employs search and matching theory to derive the persistence of real effects resulting from monetary policy shocks. the household will be required to construct a new consumption plan. The result is the output supply. At ) − cd t s s s s kt+i+1 = (1 − δ)kt+i + f (kt+i . which.5 Actual Employment and Transaction in the Product Market After the transactions in these two factor markets have been carried out.13) s Then the transaction needs to be carried out with respect to yt . 2.8). nt+i .5).. nt . see Blanchard (2003).22 8. 21 . Rubart and Semmler (2003).16) Note that in this optimization program the only decision variable is about cd and the data includes not only At and kt but also nt . instead of (8. ns ) t+i t+i (8. . is now given by s yt = f (kt . which further bring the improper transition law of capital (8. At+i ) − cd t+i i = 1. It is important to note that when the labor market is not cleared.2). which should be derived from the following optimization program: ∞ max (cd ) t U (cd . 22 Already Hicks (1963) has called this frictional unemployment. nt . the firm will engage in its production activity. 2003).5. For comments on this view. the previous consumption plan as expressed by (8.21 Yet the frictions in the institutions of the matching process are likely to explain only a certain fraction of observed unemployment. see Ljungqvist and Sargent (1998. one important form of a mismatch in the labor market seems to be the mismatch of skills. see Greiner. which is given by t For a recent position representing this view.6) becomes invalid due to the improper budget constraint (8. for deriving the plan.2.14) subject to s kt+1 = (1 − δ)kt + f (kt . (8. At ).. Therefore. (8.141 arrangements of the wage setting (see Chapter 8.15) (8. nt ) t + Et i=1 β i U (cd . The cause for frictional unemployment can arise from informational and institutional search and matching frictions where welfare state and labor market institutions may play a role.

11) or (8. nt . the model in the last section is only for illustrative purpose. . It is not the model that can be tested with empirical data. Yt for output and Ct for consumption. nt ) t (8. The 23 Note that Xt includes both population and productivity growth.17) Given this adjusted consumption plan. we here still employ the model as formulated by King. 8. we divide both sides of equation (8. For an empirically testable model.18) where δ is the depreciation rate.17) should also be the realized consumption.18) by Xt : kt+1 = 1 1−α (1 − δ)kt + At kt (nt N /0. for the U. of our model as presented in the last section. Assume that the capital stock in the economy follow the transition law: Kt+1 = (1 − δ)Kt + At Kt1−α (Nt Xt )α − Ct .142 either (8. At is the temporary shock in technology and Xt the permanent shock that follows a growth rate γ.3)α − ct . We can write the solution in terms of the following equation (see Appendix II of this chapter for the detail): cd = Gc2 (kt . S. not only because we do not specify the forms of production function.1 The Empirically Testable Model Let Kt denote for capital stock. Note that nt is often regarded to be the normalized hours. utility function and the stochastic process of At . α is the share of labor in the production function F (·) = At Kt1−α (Nt Xt )α . Nt for per capita working hours.23 The model is nonstationary due to Xt . To transform the model into a stationary setting. 1+γ (8. However. S. Therefore.12). cd in t t (8. but also we do not introduce the growth factor into the model.3.19) where kt ≡ Kt /Xt . the product market should be cleared if the household demand f (kt . ct ≡ Ct /Xt and nt ≡ 0. 8.3Nt /N with N to be the sample mean of Nt . Plosser and Rebelo (1988).3 Estimation and Calibration for U. At . At ) − cd for investment. (8. Economy This section provides an empirical study. economy.

as pointed out by Hansen (1985). The coefficients Gij and gi (i = 1. nt ) = log ct + θ log(1 − nt ) The temporary shock At may follow an AR(1) process: At+1 = a0 + a1 At + ǫt+1 . the disequilibrium model with short side rule (8. is the average percentage of hours attributed to work. we can simulate the model to generate stochastically simulated data.6) and (8.12) the Model III. we shall assume that the utility function takes the form U (ct . (8. Note that the above formulation also indicates that the form of f (·) in the previous section may follow 1−α (8. These data can then be compared to the sample moments of the observed economy. and the disequilibrium model with the compromising rule (8. the data generating process include (8.143 sample mean of nt is equal to 30 %.) innovation: 2 ǫt ∼ N (0. σǫ ). For the standard RBC model.3. 8. (8. With regard to the household preference.24) are the linear approximations to (8. They are computed as in Chapter 5 by the numerical algorithm using the linear-quadratic approximation method presented in Chapter 1 and 2. we shall call the standard model the Model I. as a benchmark for comparison.20) f (·) = At kt (nt N /0.21) where ǫt is an independently and identically distributed (i.23) and (8. we consider three model variants: the standard RBC model. and the two labor market disequilibrium models with the disequilibrium rules as expressed in (8.24) Note that here (8.23) (8. β.2 The Data Generating Process For our empirical test.12) respectively. Specifically.7) when we ignore the superscripts s and d.3)α while yt ≡ Yt /Xt with Yt to be the empirical output.22).i. including σε . . among others. 2 and j = 1.d.22) (8. which. Given these coefficients and the parameters in equation (8.11) and (8.22) as well as ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 (8.19). α. 2) are the complicated functions of the model’s structural parameters.11) the Model II.

which can be regarded as total labor hours. they are both determined by kt and At . and g3 . • Second. we thus obtain 1−α ¯ wt = αAt Zt kt (nd N /0.24) as ns = G21 At + G22 kt + g2 t (8. with Zt to be the permanent shock resulting purely from productivity growth.25) On the other hand. Next we consider the labor demand derived from the production function F (·) = At Kt1−α (Nt Xt )α . i. 3. It generates the demand for labor as nd = (αAt Zt /wt )1/(1−α) kt (0. . experience that we shall now calibrate. The production function can be written as Yt = At Ztα Kt1−α Htα . Therefore. which generally appears to be smooth.10). we provide the details how to compute the coefficients G3j .S.17) should be equal to ct . the equilibrium in the product market indicates that cd in (8. The moments of the labor effort are solely reflected by the decision rule (8.3/N ). we shall first modify (8. j = 1.e. To define the data generating process for our disequilibrium models. the moments of labor effort and consumption are likely to be strongly correlated.27) Note that the per capita hours demanded nd should be stationary if the real t wage wt and productivity Zt grow at the same rate. This seems to be roughly consistent with the U. the volatility of the labor effort can not be much different from the volatility of consumption. the standard model does not allow for nonclearing of the labor market. 2. t (8. Taking the partial derivative with respect to Ht and recognizing that the marginal product of labor is equal to the real wage. We shall assume that Lt has a constant growth rate µ and hence Zt follows the growth rate (γ − µ).3)α−1 t This equation is equivalent to (8. where Ht equals Nt Lt .24) which is quite similar in its structure to the other decision rule given by (8.23).144 Obviously. this equation can also be t approximated as ct = G31 At + G32 kt + G33 nt + g3 (8. Let Xt = Zt Lt . and Lt from population growth. This structural similarity are expected to produce two labor market puzzles as aforementioned: • First.26) In the appendix..

These parameters have already been estimated in Chapter 5.145 Thus.0333 σε 0. With this data series. (8.12) instead of (8.0185 µ 0. Table 8.22).S.5800 δ 0. 25 Note that this re-scaling is necessary because we do not exactly know the initial condition of Zt . 8. σε .25).0189 0. a1 .001.23) and (8. The wage series are obtained from Citibase. (8. Model II. The estimation is conducted by a global optimization algorithm called simulated annealing. which we set equal to 1. It is re-scaled to match the model’s implication. 24 . For the new parameters.2080 ω 2. which are standard. We thereby do not attempt to give the actually observed sequence of wages a further theoretical foundation. δ.19).0010 β 0. There are altogether 10 parameters in our three variants: a0 .1 illustrates these parameters: Table 8.58 and 0.1203 The data set used in this section is taken from Christiano (1987). This allows us to compute the data series of the temporary shock At .19). which is close to the average growth rate of the labor force in U. The estimation is executed by a conventional algorithm. the grid search.26) and (8.. (8.27) with wt given by the observed wage rate. we specify µ at 0.9930 θ 0. (8. γ.0045. and therefore we shall employ them here.11). β.1: Parameters Used for Calibration a0 a1 0. (8.0045 α 0.1203. It is estimated by minimizing the residual sum of square between actual employment and the model generated employment.9811 γ 0.24 For our purpose it suffices to take the empirically observed series of wages. θ. We first specify α and γ respectively at 0. for the nonclearing market model with short side rule. µ. and ω.25 One however might apply here the efficiency wage theory or other theories such as the staggered contract theory that justify the wage stickiness. α. we use (8. the data generating process includes (8. We re-scaled the wage series in such a way that the first observation of employment is equal to the demand for labor as specified by equation (8.3 The Data and the Parameters Before we calibrate the models we shall first specify the parameters.11).27). For Model III.3. The next three parameters β. we estimate the parameters a0 . a1 and σε .25). δ and θ are estimated with the GMM method by matching the moments of the model generated by (8. the parameter ω in Model III is set to 0.

1. where a one time simulation with the observed innovation At are presented. 26 Due to the discussion on Solow residual in Chapter 5. The results in this table are confirmed by Figure 8. we shall now understand that At computed as the Solow residual may reflect also the demand shock in addition to the technology shock.26 All time series are detrended by the HP-filter.4 Calibration Table 8.146 8. .2 reports our calibration from 5000 stochastic simulations.3.

0066 (0.0717) 1.0000) 1.2043 (0.9754 (0.0026) 1.0000 (0.0336 (0.4525 (0.0000) 0.0000 (0.9288 (0.0156 0.0825) 0.0000 1.0000 (0.0000) -0.0000) 0.0545 (0.0010) Employment 0.6869 (0.0198) 0.0115) 0.0971) 1.1593 (0.0203) 0.0010) Capital 0.1044) 1.0158 (0.2861 0.2: Calibration of the Model Variants: U.0020) Output 0.9392 (0.0095 (0.0000 (0.0081 0.0000 (0.4874 (0.0012) 0.0098) 0.4944 (0.0000) 0.1741 0.1069) 1.0000 0.0824) 0.9866 (0.0000 (0.S.0036 (0.1362) 0.0000) -0.0000 (0.0000) 0. Economy (numbers in parentheses are the corresponding standard errors) Consumption Standard Deviations Sample Economy Model I Economy Model II Economy Model III Economy Correlation Coefficients Sample Economy Consumption Capital Stock Employment Output Model I Economy Consumption Capital Stock Employment Output Model II Economy Consumption Capital Stock Employment Output Model III Economy Consumption Capital Stock Employment Output 0.0393 (0.9056 (0.0000) 0.0000) 1.0576 (0.1662) 0.0577 (0.1190) 0.0052 (0.0000) .0954 1.0863 (0.0000 0.0000 (0.0076) 1.1175) 0.0021) 0.4604 0.0035 0.00332) 1.0165 0.8924 (0.0000) −0.0268) 1.0091 (0.0007) 0.0407) 1.0135 (0.0906) 0.0000 (0.1045) 0.0006) 0.0000 0.0000 (0.0031) 0.0000 (0.6807 (0.7263 1.0566 (0.0051 (0.0137 (0.147 Table 8.0000 (0.0197 (0.7550 1.0000) 0.0327) 1.

As observable. along most dimensions. capital stock. Model II and Model III Economies. 1955. We want to note that the failure of the standard model to match the volatility of employment of the data is also described in the recent paper by Schmidt-Grohe (2001). the volatility in the Model III Economy is close to the one in the Sample Economy. There the ratio is 1. In particular. Next.4. which is too low compared to the empirical data.38 and 0. For the indeterminacy model.1997. which seems too high.95. although too high a volatility is observable in the Model II Economy which may reflect our assumption that there are no search and matching frictions (which. Schmidt-Grohe (2001) finds that the ratio of the standard deviation of employment to the standard deviation of output is roughly 0. let us look at the cross-correlations of the macroeconomic variables. we find 0. As noted above.1 where the horizontal figures show. In the Sample Economy. there is an excessive smoothness of the labor effort and the employment series of the data cannot be matched. is therefore somewhat biased in favor of the Model I Economy.4. The problem is. the ratio is found to be 0. employment and output. the three columns representing the figures for Model I. The result. reflected in Table 8. best the actual data. originating in the work by Benhabib and co-authors.45. This ratio is roughly 1 in the Sample Economy. It is not surprising that for most variables the moments generated from the Model I Economy are closer to the moments of the Sample Economy. Yet for the standard RBC model. however. For her employed time series data 1948. of course. For our time period. close to our Sample Economy. from top to bottom. she finds the ratio to be 1. As can be seen from the separate figures. the volatility of employment has been greatly increased for both Model II and Model III.69 for the Model II and Model III Economies respectively. Yet even in this case. in the actual economy will not hold).148 First we want to remark that the structural parameters that we used here for calibration are estimated by matching the Model I Economy to the Sample Economy. actual (solid line) and simulated data (dotted line) for consumption.2. resolved in our Model II and Model III Economies representing sticky wages and labor market nonclearing. there are two significant correlations we can observe: .1 to 1983.49. in particular the Model III Economy fits. a similarly high ratio of standard deviations can also be observed in our Model II Economy where the short side rule leads to excessive fluctuations of the labor effort.3 . We therefore may conclude that Model III is the best in matching the labor market volatility. Further evidence on the better fit of the nonclearing labor market models – as concerns the volatility of the macroeconomic variables – is also demonstrated in the Figure 8.32 in the Model I Economy as the ratio of the standard deviation of labor effort to the standard deviation of output.

Discussions have often focused on the correlation with output. These two strong correlations can also be found in all of our simulated economies. roughly 0. They. therefore. . also strongly correlated. in our Model I Economy and this only holds for the Model I Economy (the standard RBC model) in addition to these two correlations. and between employment and output. including the recent study by Schmidt-Grohe (2001).72. about 0.46. this correlation is weak.75. about 0. not explicitly been discussed in the RBC literature. Yet. consumption and employment are. empirically.93. should be somewhat correlated. The latter result of the standard model is not surprising given that movements of employment as well as consumption reflect the movements in the state variables capital stock and the temporary shock. However. with 0. to our knowledge.149 the correlation between consumption and output. We remark here that such an excessive correlation has.

dotted line for simulated economy) A success of our nonclearing labor market models. reflects the moments of capital and technology as consumption does. is that employment is no longer significantly correlated with consumption. . Case (solid line for sample economy. whereas only the latter. the correlation with consumption is therefore weakened. Since the realized employment is not necessarily the same as the labor supply.1: Simulated Economy versus Sample Economy: U. labor supply. see the Model II and III Economies. This is because we have made a distinction between the demand and supply of labor.S.150 Figure 8.

see OECD (1998a).3 compare 6 key variables relevant for the models for both the German and U. economy. For this purpose we shall first summarize some stylized facts on the German economy compared to the U.4 Estimation and Calibration for the German Economy Above we have employed a model with nonclearing labor market for the U. investment and capital stock are OECD data. S. We thus have included a short period after the unification of Germany (1990 . Next.1991). the data in Figure 8.1 The Data Our subsequent study of the German economy employs the time series data from 1960. economies. we want to compare some stylized facts. We use again quarterly data.S. consumption. In particular.4. The time series data on total working hours is taken from Statistisches Bundesamt (1998).3 are detrended by the HPfilter. we pursue a similar study of German economy.3.4. The time series data on GDP. 8. Figures 8. The time series on the hourly real wage index is from OECD (1998a). 8.1. We have seen that one of the major reasons that the standard model can not appropriately replicate the variation in employment is its lack of introducing the demand for labor. The standard deviations of the detrended series are summarized in Table 8. the data on total labor force is also from the OECD (1998b).151 8.2 and 8. .1 to 1992.2 The Stylized Facts Next.S. economy.

152 Figure 8. versus Germany . S.2: Comparison of Macroeconomic Variables U.

3: Comparison of Macroeconomic Variables: U.153 Figure 8. S. versus Germany (data series are detrended by the HP-filter) .

Third.154 Table 8. If employment is smooth. the other two factors have to be volatile.0129 0. In contrast. The volatility of output must be absorbed by some factors in the production function.0115 0. the series is approximately stationary.3 The Parameters For the German economy. our investigation showed that an AR(1) process does not match well the observed process of At . economy. economy.S. economy.0203 0. in the German economy they are the smoothest variables. These results might be due to our first remark regarding the difference in employment volatility. employment and the efficiency wage are among the variables with the highest volatility in the U.0230 0. consumption capital stock employment output temporary shock efficiency wage 0.0036 0.0100 0.0084 0. Instead.4. First.0258 0.0164 0.0166 0. 8.S.0146 0.2 for the non-detrended series). S. we shall use an AR(2) process: At+1 = a0 + a1 At + a2 At−1 + εt+1 The parameters used for calibration are given in Table 8.S. the capital stock and temporary shock to technology are both relatively smooth. However.4. All of these parameters are estimated in the same way as those for the U. versus Germany) Germany (detrended) (detrended) U. in the U.S.3: The Standard Deviations (U. Should we expect that such differences will lead to different calibration of our model variants? This will be explored next. S. . the employment (measured in terms of per capita hours) is declining over time in Germany (see Figure 8. economy. Second.0273 Several remarks are at place here. while in the U. they are both more volatile in Germany.

0538 2. the Model III Economy is almost identical to the Model I Economy.4 Calibration As for the U. Again all time series here are detrended by the HP-filter.6600 0. the Model III Economy is equivalent to the Model I Economy. 27 27 Note that we do not include the Model III Economy for calibration. .4 we again compare the one-time simulation with the observed At for our model variants. economy we provide in Table 8.8880 -0. 8.8920 0.0019 0.S.5 for the German economy the calibration result from 5000 time stochastic simulations. Due to the zero value of the weighting parameter ω. In other words.12). indicating the weight of the demand is zero in the compromising rule (8. This seems to provide us with the conjecture that the Model I Economy.4: Parameters used for Calibration (German Economy) a0 a1 a2 σε 0.0044 1. In Figure 8.155 Table 8.0071 γ µ α β 0.1507 0 It is important to note that the estimated ω in this case is on the boundary 0.9876 δ θ ω 0. will be the best in matching German labor market.0083 0. the standard model.4.

0241 (0.8935 (0.1842 (0.0146 0.0023) 0.0397 (0.9130 (0.2319) 0.0292 (0.5423 1.0000 (0.4561) 0.3002 0.0258 0.9473 (0.0920) 0.0000 1.5138 (0.1028) 1.1099) 1.1519) Output 0.0000) 0.0000) 0.0202 1.5420 (0.7496 (0.0000 (0.0000 -0.9692 1.0000 (0.0000 (0.5: Calibration of the Model Variants: German Economy (number in parentheses are the corresponding standard errors) Consumption Standard Deviations Sample Economy Model I Economy Model II Economy Correlation Coefficients Sample Economy Consumption Capital Stock Employment Output Model I Economy Consumption Capital Stock Employment Output Model II Economy Consumption Capital Stock Employment Output 1.6907 (0.0865 (0.1533) Capital 0.0238) Employment 0.0000 (0.00000) .0066) 0.1047) 0.0100 0.0000 (0.0000) 0.0000) 1.4855 (0.0112) 0.0000 0.0000 0.3486 (0.1309) 0.0039 0.0106) 0.9002) 1.7147 (0.2362) 1.156 Table 8.4648 (0.0000) 0.0107 (0.1461) 0.0200) 1.0203 0.0000 (0.0425 (0.1312) 1.0000) 0.1276 (0.1640) 0.0000) −0.7208 (0.0000 (0.4360 0.

This is likely to be due to the fact that employment itself is smooth in the German economy (see Table 8. and see already Meyers (1964). there are stronger influences of labor unions and various legal restrictions on firms’ hiring and firing decisions.4: Simulated Economy versus Sample Economy: German Case (solid line for sample economy. economy we find some major differences. In most labor market studies the German labor market is often considered less flexible than the U. S. . The standard problem of excessive smoothness with respect to employment in the benchmark model no longer holds for the German economy. Nickell (1997) and Nickell (2003). for example. dotted line for simulated economy) In contrast to U. may also be viewed as a readiness to compromise as our Model III suggests. In particular.5). Such influences and legal restriction.28 Such influences and legal restriction will give rise to the smoother employment series in contrast to the U.3).3 and Figure 8.157 Figure 8. S. S.S. First. Those factors 28 See. there is a difference concerning the variation of employment. labor market. or what Solow (1979) has termed the moral factor in the labor market. (see Figure 8. We shall also note that the simulated labor supply in Germany is smoother than in the U.

158 will indeed give rise to a smooth employment series. Further, if we look at the labor demand and supply in Figure 8.5, the supply of labor is mostly the short side in the Germany economy whereas in U.S. economy demand is dominating in most periods. Note that here we must distinguish the supply that is actually provided in the labor market and the “supply” that is specified by the decision rule in the standard model. It might reasonably be argued that due to the intertemporal optimization subject to the budget constraints the supply specified by the decision rule may only approximate the decisions from those households for which unemployment is not expected to pose a problem on their budgets. Such households are more likely to be currently employed and protected by labor unions and legal restrictions. In other words, currently employed labor decides, through the optimal decision rule, about labor supply and not those who are currently unemployed. Such a shortcoming of single representative intertemporal decision model could presumably be overcome by a intertemporal model with heterogenous households.29

Figure 8.5: Comparison of demand and supply in the labor market (solid line for actual, dashed line for demand and dotted line for supply)
29

See, for example, Uhlig and Xu (1996).

159 The second difference concerns the trend in employment growth and unemployment of the U.S. and Germany. So far we only have shown that our model of nonclearing labor market seems to match better than the standard RBC model the variation in employment. This in particular seems to be true for the U.S. economy. We did not attempt to explain the trend of the unemployment rate neither for the U.S. nor for Germany. We want to note that the time series data (U. S. 1955.1 - 1983.1, Germany 1960.1 - 1992.1) are from a period where the U.S. had higher – but falling – unemployment rates, whereas Germany had still lower but rising unemployment rates. Yet, since the end of the 1980s the level of the unemployment rate in Germany has considerably moved up, partly due to the unification of Germany after 1989.

8.5

Differences in Labor Market Institutions

In Chapter 8.2 we have introduced rules that might be thought to be operative when there is a nonclearing labor market. In this respect, as our calibration in section 3 has shown, the most promising route to model, and to match, stylized facts of the labor market, through a microbased labor market behavior, is the compromising model. One hereby may pay attention to some institutional characteristics of the labor market presumed in our model. The first is the way how the agency representing the household sets the wage rate. If the household sets the wage rate, as if it were monopolistic competitor, then at this wage rate the household’s willingness to supply labor is likely to be less than the market demand for labor unless the household sufficiently under-estimates the market demand when it conducts its optimization for wage setting. Such a way of wage setting may imply unemployment and it is likely to be the institutional structure that gives the representative household (or the representative of the household, such as unions), the power to bargain with the firm in wage setting.30 Yet, there could be, of course, other reasons why wages do not move to a labor market clearing level – such as efficiency wage, insider – outsider relationship, or wages determined by standards of fairness as Solow (1979) has noted and so on. On the other hand, there can be labor market institutions, for example corporatist structures, also measured by our ω, which affect the actual employment. Our ω expresses how much weight is given to the desired labor supply or desired labor demand. A small ω means that the agency, repre30

This is similar to Woodford’s (2003, ch. 3) idea of a deviation of the efficient and natural level of output where the efficient level is achieved only in a competitive economy with no frictions.

160 senting the household, has a high weight in determining the outcome of the employment compromise. A high ω means that the firm’s side is stronger in employment negotiations. As our empirical estimations in Gong, Ernst and Semmler (2004) have shown the former case, a low ω, is very characteristic of Germany, France and Italy whereas a larger ω is found for U.S. and the U.K.31 Given the rather corporatist relationship of labor and the firm in some European countries, with some considerable labor market regulations through legislature and union bargaining (rules of employment protection, hiring and firing restrictions, extension of employment even if there is a shortfall of sales etc.)32 , our ω may thus measure differences concerning labor market institutions between the U.S. and European countries. This has already been stated in the 1960s by Meyers. He states: ”One of the differences between the United States and Europe lies in our attitude toward layoffs... When business falls off, he [the typical American employer] soon begins to think of reduction in work force... In many other industrial countries, specific laws, collective agreements, or vigorous public opinion protect the workers against layoffs except under the most critical circumstances. Despite falling demand, the employer counts on retraining his permanent employees. He is obliged to find work for them to do... These arrangements are certainly effective in holding down unemployment”. (Meyers, 1964:) Thus, we wish to argue that the major international difference causing employment variation does arise less from real wage stickiness (due to the presence of unions and the extend and duration of contractual agreements between labor and the firm)33 but rather it seems to be the degree to which compromising rules exist and which side dominates the compromising rule. A lower ω, defining, for example, the compromising rule in Euro-area countries, can show up as difference in the variation of macroeconomic variables. This is demonstrated in Chapter 8.4 for the German economy. We there could observe that first, employment and the efficiency wage (defined as real wage divided productions) are among the variables with the
In the paper by Gong, Ernst and Semmler (2004) it is also shown that the ω is strongly negatively correlated with labor market institutions. 32 This could also be realized by firms by demanding the same (or less) hours per worker but employing more workers than being optimal. The case would then correspond to what is discussed in the literature as labor hoarding where firms hesitate to fire workers during a recession because it may be hard to find new workers in the next upswing, see Burnside et al. (1993). Note that in this case firms may be off their marginal product curve and thus this might require wage subsidies for firms as has been suggested by Phelps (1997). 33 In fact real wage rigidities in the U.S. are almost the same as in European countries, see Flaschel, Gong and Semmler (2001).
31

161 highest volatility in the U. S. economy. However, in the German economy they are the smoothest variables. Second, in the U. S. economy, the capital stock and temporary shock to technology are both relatively smooth. In contrast, they are both more volatile in Germany. These results are likely to be due to our first remark regarding the difference in employment volatility. The volatility of output must be absorbed by some factors in the production function. If employment is smooth, the other two factors have to be volatile. Indeed, recent Phillips curve studies do not seem to reveal much difference in real wage stickiness between Germany and the U.S., although the German labor market is often considered less flexible.34 Yet, there are differences in another sense. In Germany, there are stronger influences of labor unions and various legal restrictions on firms’ hiring and firing decisions shorter work week even for the same pay etc. 35 Such influences and legal restriction will give rise to the smoother employment series in contrast to the U.S.. Such influences and legal restriction, or what Solow (1979) has termed the moral factor in the labor market, may also be viewed as a readiness to compromise as our Model III suggests. Those factors will indeed give rise to a lower ω and a smoother employment series.36 So far we only have shown that our model of nonclearing labor market seems to match better the variation in employment than the standard RBC model. Yet, we did not attempt to explain the secular trend of the unemployment rate neither for the U.S. nor for Germany. We want to express a conjecture of how our model can be used to study the trend shift in employment. We want to note that the time series data for the table 8.3 (U.S. 1955.1-1983.1, Germany 1960.1-1992.1) are from a period where the U.S. had higher – but falling – unemployment rates, whereas Germany had still lower but rising unemployment rates. Yet, since the end of the 1980s the level of the unemployment rate in Germany has considerably moved up, partly, of course due to the unification of Germany after 1989. One recent attempt to better fit the RBC model’s predictions with labor
See Flaschel, Gong and Semmler (2001). See,for example, Nickell (1997) and Nickell et al. (2003), and see already Meyers (1964). 36 It might reasonably be argued that, due to intertemporal optimization subject to the budget constraints, the supply specified by the decision rule may only approximate the decisions of those households for which unemployment is not expected to pose a problem on their budgets. Such households are more likely to be currently employed represented by labor unions and covered by legal restrictions. In other words, currently employed labor decides, through the optimal decision rule, about labor supply and not those who are currently unemployed. Such a feature could presumably be better studied by an intertemporal model with heterogenous households, see, for example, Uhlig and Xu (1996).
35 34

see Blanchard and Wolfers (2000) and Blanchard (2003). Empirical evidence on the role of lagging implementation and diffusion of new technology for low employment growth in Germany can be found in Heckman (2003) and Greiner. see Phelps (1997) and Phelps and Zoega (1998). as shown in Chapters 5 and 9.41 Some of those factors affecting the households’ supply of labor have been discussed See Merz (1999) and Ljungqvist and Sargent (1998. 2003). the work by Phelps. greatly depends on endogenous variables (such as capacity utilization). Semmler and Gong (2004). for example. the change in the trend of the unemployment rate is likely to be related to the long-run trend in the true technology shock. and other European countries. for example demand shocks. 39 See Campbell (1994) for a modelling of a trend in technology shocks. since the 1980s. In the context of our model this would have the effect that labor demand. those models usually observe that there has been a shift in matching functions due to evolution of unemployment rates such as. and for an additional account of the wage rate. are important as well. the trend in the wage rate is also important in the equation for labor demand (in equation 25).40 Yet.27) may fall short of labor supply given by equation (8. given by equation (8. This is likely to occur in the long-run if the productivity Zt in equation (8. We thus might conclude that cyclical fluctuations in output and employment are not likely to sufficiently be explained by productivity shocks alone. as it used in RBC models as the technology shock. 40 Of course. Gong and Semmler (2001).39 Concerning the latter. For an account of the technology trend. Thus exogenous technology shocks constitute only a small fraction of the Solow residual. the Solow residual. for example. 38 37 . experienced in Europe since the 1980s. as recent research has stressed. Yet.38 In contrast to the literature on institutional frictions in the search and matching process we think that the essential impact on the trend in the rate of unemployment seems to stem from both changes of preferences of households as well as a changing trend in the technology shock. there have also been secular changes on the supply side of labor due to changes in preferences of households.24). For an evaluation of the search and matching theory as well as the role of shocks to explain the evolution of unemployment in Europe. 41 Phelps and his co-authors have pointed out that an important change in the households’ preferences in Europe is that households now rely more an assets instead of labor income. Yet.162 market data has employed search and matching theory. and that the model itself fails to explain such a shift. see Flaschel.37 Informational or institutional search frictions may then explain the equilibrium unemployment rate and its rise. in the long run. see Heckman (2003). Gali (1999) and Francis and Ramey (2001.27) starts tending to grow at a lower rate which many researchers recently have maintained to have happened in Germany. 2003) have argued that other shocks.

Semmler and Gong (2004) and Heckman (2003) 44 For further discussion. with respect to the trend of lower employment growth in some European countries as compared to the U. given wage stickiness. are consistent with what has been found in many other empirical studies with regard to the institutions of the labor market. In particular. There we find that similarly to Sargent and Ljungqvist (1998). 8. and thus fit the data significantly better than the standard model. 43 See Blanchard and Wolfers (2000). 42 . on the demand side for labor. for Germany in contrast to the U.. Greiner. Our study has provided a framework that allows to also follow up such issues.S. Finally.163 above. for example. In this chapter. our model suggests that one has to study more carefully the secular forces affecting the supply and the demand of labor as modeled in our multiple stage decision process of section 2.S.44 Appendix III computes the welfare loss of our different model variants of nonclearing labor market. since the 1980s. that the welfare losses are very small.6 Conclusions Market clearing is a prominent feature in the standard RBC model which commonly presumes wage and price flexibility. Nonclearing labor market is then a result of different employment rules derived on the basis of a multiple stage decision process. see also Chapter 9. Calibrations have shown that such model variants will produce a higher volatility in employment.43 On the other hand there has also been changes in the preferences of households. we have introduced an adaptive optimization behavior and a multiple stage decision process that. results in a nonclearing labor market in an otherwise standard stochastic dynamic model. the slow down of technology seems to have been a major factor for the low employment growth in Germany and other countries in Europe.42 As concerning international aspects of our study we presume that different labor market institutions result in different weights defining the compromising rule. The results for Euro-area economies.

as argued by recent New i=0 Keynesian literature. the concept of nonclearing labor market somehow disappeared. Of course there is no guarantee that the household will actually implement this sequence {ct+i }∞ . In this literature. which is derived from the condition of marginal product equal to the wage rate: ∗ wt = fn (At+i . If the household knows the production f (At . However. the household . the observed wage dynamics wt may follow Calvo’s updating scheme: ∗ wt = (1 − ξ)wt + ξwt−1 ∗ Such a wage indicates that there exists a gap between optimum wage wt and the observed wage wt . nt+i ) We shall remark that although the decision is mainly about the choice ∞ ∗ of wt . It should be noted that in recent New Keynesian literature where the wage is set in a similar way as we have discussed here. nt .{ct+i }∞ i=0 max Et i=0 ∗ (ξβ)i U (ct+i . At+i )) − ct+i (8.29) ∗ Above ξ i is the probability that the new wage rate wt will still be effective in period t + i. where nt the labor effort so that it may also know the firm’s demand for labor. n(wt . Obviously. the sequence of {ct+i }i=0 should also be considered for the dynamic optimization. At+i )) (8. At+i ) is the function of firm’s demand for labor. kt+i . kt ).7 Appendix I: Wage Setting Suppose now that at the beginning of t the household (of course with certain ∗ probability denoted as 1 − ξ) decides to set up a new wage rate w1 given ∞ the data (At . Therefore. kt+i . kt . the decision problem of the household with regard to wage setting may be expressed as follows: ∞ ∗ wt . n(wt . which depends on consump∗ ∗ tion ct+i and the labor effort n(wt . U (·) is the household’s utility function. kt+i . At+i ). and the sequence of expectations on {At+i }i=1 where At and kt are referred to as the technology and capital stock respectively. Note that here n(wt . this probability will be reduced when i become larger.164 8. kt+i . there is only a certain probability (due to the adjustment cost in changing the wage) that the household will set a new wage rate in period t. kt+i .28) subject to ∗ kt+i+1 = (1 − δ)kt+i + f (At+i . kt+i .

there also exists a gap between optimum output and actual output. In correspondence to the gap between optimum and actual price. the supplier may stick to w0 . This produces the gaps between optimum price w∗ and actual price w0 and between optimum supply n∗ and actual supply n′ . Yet.6. the existence of price and output gaps does not exclude the . w w* MC w0 MR MR’ D0 D0 D’ n0 n* n’ ns n Figure 8. at the beginning of period 0) sets its price optimally according to the expected demand curve D0 .6: A Static Version of the Working of the Labor Market Some clarifications may be obtained by referring to a static version of our view on the working of the labor market. whose existence is caused by the adjustment cost in changing prices. In figure 8. Consider now the situation that the supplier’s expectation on demand is not fulfilled. the supplier (or the household. In this case. in the labor market case) first (say. the household may reasonably believe that the demand curve should be D′ and therefore the optimum price should be w∗ while the optimum supply should be n∗ . Let us denote this price as w0 . Instead. due to the adjustment cost in changing prices.165 is assumed to supply the labor effort according to the market demand at the existing wage rate and therefore does not seem to face the problem of excess demand or supply. However. what New Keynesian economists are concerned with is the gap between the optimum price and actual price. Instead of n0 . the market demand at w0 is n′ .

(8. or equal. In this context the marginal cost curve. at the given wage rate there is a firm’s willingness to demand labor and the household’s willingness to supply labor. to the marginal disutility of work. nt .16). New Keynesian literature presumes that at the existing wage rate. Appendix II: Adaptive Optimization and Consumption Decision For the problem (8. At+i ) − cd t+i t+i 1+γ s β i λt+i kt+1+i − Since the decision is only about cd . can be interpreted as marginal disutility of labor which has also an upward slope since we use the standard log utility function as in the RBC literature. This gives us the following first-order t .6 the household’s willingness to supply labor is ns . At ) − cd λt kt+1 − t 1+γ ∞ + Et i=1 β i log(cd ) + θ log(1 − ns ) + t+i t+i 1 s s (1 − δ)kt+i + f (kt+i . This indicates that even if there are no adjustment costs so that the household can adjust the wage rate in every t (so that there is no price and quantity gaps as we have mentioned earlier).166 existence of a disequilibrium or nonclearing market. If we define the labor market demand and supply in a standard way. ns . that is. MC. we define the Lagrangian: L = Et log cd + θ log(1 − nt ) + t 1 s s s (1 − δ)kt + f (kt . the household supplies labor effort whatever the market demand for labor is. the disequilibrium in the labor market may still exist. This then means that the household’s supply of labor will be restricted by a wage rate below. and a nonclearing labor market can be very general phenomena.14) . we thus take the partial derivatives of t s L with respect to cd . Note that in figure 8. kt+1 and λt .

33) and (8. (8.30) .23) and (8. − d 1+γ ct β s Et λt+1 (1 − δ) + (1 − α)At+1 kt+1 1+γ s kt+1 = −α (8.32) around the steady states.33) (8. G22 and g2 have all been resolved previously in the household optimization program.38) (8.36) and a0 +a1 At t+1 respectively. (8.167 condition: 1 λt = 0.37) (8.44) .43) Fc1 d fc ct − . κ0 = Fk1 (Qa0 + h) + Fk2 a0 + Fk4 (G22 a0 + g2 ) + fk .35). G21 . Fc2 Fc2 (8. t+1 (8. Using (8. h.40). κ2 = Fk1 Qa1 + Fk2 a1 + Fk4 G22 a1 .39) Expressing Et λt+1 .37) to express λt in (8. we further obtain s κ1 kt+1 + κ2 At + κ0 = − (8.3 t+1 α α = λt . We therefore obtain from (8.34) where H.3 1+γ − cd .32) Recall that in deriving the decision rules as expressed in (8.(8. Suppose they can be written as Fc1 ct + Fc2 λt + fc = 0. t (8.34) s Et λt+1 = Hkt+1 + Q(a0 + a1 At ) + h. s ns = G21 kt+1 + G22 At+1 + g2. s Et ns = G2 kt+1 + D2 (a0 + a1 At ) + g2 .24) we have postulated s λt+1 = Hkt+1 + QAt+1 + h.31) 1 s s ¯ (1 − δ)kt + At (kt )1−α nt N /0. κ1 = Fk1 H + Fk3 + Fk4 G21 .36) Our next step is to linearize (8.35) (8. t (8. (8. s Fk1 Et λt+1 + Fk2 Et At+1 + Fk3 kt+1 + Fk4 Et ns + fk = λt . in particular. Q.42) (8.40) where. t+1 (8.41) (8.30) ¯ ns N /0.38) s κ1 kt+1 + κ2 At + κ0 = λt . t+1 s kt+1 = Akt + W At + C1 cd + C2 nt + b. Et ns and Et At+1 in terms of (8. we obtain from (8.

A likely conjecture is that the benchmark model should always be superior to the other two variants because the decisions on labor supply . The expected moments are represented by equation (8.5). is somewhat different from the the benchmark model due to the distinction between expected and actual moments with respect to our state variable. However. We follow here Ljungqvist and Sargent (1998) and compute the welfare implication of the different model variants. In the models of nonclearing market the representative agent may not rationally expect those moments of the capital stock. Appendix III: Welfare Comparison of the Model Variants In this appendix we want to undertake a welfare comparison of our different model variants. given the sequence of our two decision variables. the capital stock. consumption and employment. model variants. They are given by Simulated Economy I. κ1 Fc2 κ1 κ1 Fc2 κ1 (8. Also.45) Comparing the right side of (8.which are optimal for the representative agent . there is another external variable wt . Yet.5) while the actual moments are expressed by equation (8. The welfare result due to these changes in the specification may therefore deviate from what one would expect. and the two models with nonclearing labor market. which will affect the labor employed (via demand for labor) and hence eventually the welfare performance. S. II. Note that for our models variants with nonclearing . and III. entering into the models.168 which is equivalent to s kt+1 = − Fc1 d κ0 fc κ2 At − ct − − . It is sufficient to consider only the equilibrium (benchmark) model.are realized in all periods. We here restrict our welfare analysis to the U. They are not necessary equal unless the labor efforts of those two equations are equal. in addition to At . we believe that this may not generically be the case. whereas they concentrate on the steady state.39) and (8. Our exercise here is to compute the values of the objective function for all our three models.45) will allow us to solve cd as t Fc1 + C1 Fc2 κ1 −1 cd = − t Akt + κ2 +W κ1 At + C2 nt + b + κ0 fc + κ1 Fc2 κ1 . The point here is that the model specification in variants II and III. we compute the welfare also outside the steady state.

More specifically. to compute the utility functional.169 labor market. Figure 8.7: Welfare Comparison of Model II and III . rather than the decisions on labor supply. Dashed Line for Model III) Figure 8. (a) Welfare Comparison with External Variable set at their Steady State (Solid Line for Model II. nt ) where U (ct .7 provides the welfare comparison of the two versions. We choose the different k0 based on the grid search around the steady state of kt . where ∞ V ≡ t=0 β t U (ct . One is to set both external variables at their steady state levels for all t. only At appears). This exercise here is conducted for different initial conditions of kt denoted by k0 . We consider two different ways to treat these external variables. The other is to employ their observed series entering into the computation. nt ) is given by log(ct ) + θ log(1 − nt ). Dashed Line for Model III) (b) Welfare Comparison with External Variable set at their Observed Series (Solid Line for Model II. we calculate V . the value of V for any given k0 will also depend on the external variable At and wt (though in the benchmark model. we use realized employment. Obviously.

The various k0 ’s are expressed in terms of the percentages deviation from the steady state of kt . since most of the values are negative. . However. not always is the benchmark model the best one. the deviations become 0 for the Model II. When k0 is sufficiently high.7(a). 8. they.170 In Figure 8. the Model III will be superior in its welfare performance when k0 is larger than its steady state. however.7(b). in the case of using observed external variables. Similar results have been obtained by Ljungqvist and Sargent (1998). see lower part of the figure. it is important to note that the deviations from the benchmark model are very small. the percentage deviations of V from the corresponding values of benchmark model is plotted for both Model II and Model III given for various k0 around the steady states. Furthermore. It is not surprising to find that in most cases the benchmark model is the best in its welfare performance. compare only the steady states. Meanwhile. close to or higher than the steady state of kt .

This is necessary for a model with nonclearing markets where adjustments should take place in real time. In many respects. sluggish wage and price adjustments and adaptive optimization the labor market may not be cleared. As concerning wage stickiness Keynes (1936) has attributed strong stabilizing effects to wage stickiness.1 The Model As mentioned in chapter 8. The assumption of re-opening of the market shall also be adopted here. Nonclearing Markets and Technology Shocks In the last chapter we have found that if we introduce some non-Walrasian features into an intertemporal decision model with the household’s wage setting.Chapter 9 Monopolistic Competition. 9. the specifications in this chapter are the same as for the model of the last chapter. We shall still follow the assumptions with respect to ownership. adaptive optimization and nonclearing labor markets. price and wage stickiness is an important feature in New Keynesian literature. Next we relate our approach of nonclearing labor market to the theory of monopolistic competition in the product market as developed in New Keynesian economics. Since both household and firm make their quantity decisions on the basis 171 . This model then naturally generates higher volatility of employment and a low correlation between employment and consumption. Recent literature uses monopolistic competition theory to give a foundation to nominal stickiness.

where it+i is t+i t+i t+i referred to investment. At+i ) − cd t+i t+i (9. We can then ignore its setting.2) All the notations have been defined in the last chapter. Therefore.1 The Household’s Desired Transactions When the prices. the solution of optimization problem can be i=0 written as: s cd t+i = Gc (kt+i . a main focus in the recent new Keyensian literature.3) (9. it will allow us to save the effort to work on the nominal price determination. ns ) t+i t+i (9. ns . This indicates that the wage wt and the rental rate of capital stock rt are all measured in terms of the physical units of output. We can express this as a sequence of ∞ s output demand and factor supply cd . have been set the household is going to express its desired demand and supply. including the output price pt . Note that here we have used the superscripts d and s to refer to the agent’s desired demand and supply. . One of them should serve as a numeraire.ns } t+i t+i subject to max ∞ i=0 Et i=0 β i U (cd .1 As to the rental rate of capital rt . 9. At+i ) 1 (9.1. For the given technology sequence {At+i }∞ . Here. Meanwhile. At+i ) s ns t+i = Gn (kt+i . We define the household’s willingness as those demand and supply that can allow the household to obtain the maximum utility on the condition that these demand and supply can be realized at the given set of prices. the wage rate wt and the rental rate of capital stock rt . we shall follow all the specifications on price and wage setting as in presented the chapter 8. which we assume to be the output. The decision problem for the household to derive its desired demand and supply is very similar as in the last chapter and can be formulated as ∞ {cd . ns .4) For our simple representative agent model without money. Here again. we shall first discuss how in our model the period t prices are determined at the beginning of period t. as in the model of the last chapter there are three commodities.1.172 of the given prices. kt+i+1 i=0 . this simplification does not effect our major result derived from our model. including wages. id . the output price pt always equals 1.1) s s s kt+i+1 = (1 − δ)kt+i + f (kt+i . it is assumed to be adjustable and to clear the capital market.2.

see the discussion above.1. On the other hand. At . the firm will choose yt . wt . Thus. it will simply follow the short side rule to choose yt as in the general New Keynesian model. rt ). yt ) to maximizes the current profit.6) For the regular condition on the production function. kt .7) (9. labor and capital stock ∗ (1. yt ) = f (At . we no longer assume that the product market is in perfect competition.2 We are now considering the transactions in our three markets. Instead. and therefore it should face a perceived demand curve for its product. kt . At . nt ) (9. we shall assume that our representative firm behaves as a monopolistic competitor. kt ). wt . ∞ i=0 9. Obviously. This desired supply is the amount that allows the firm to obtain a maximum profit on the assumption that all its output can be sold. the firm should also have its own desired supply yt . if the expected ∗ demand yt is less than the firm’s desired supply yt .5) t subject to ∗ min(yt . ns t+i t+i d s d s d s s d s only (ct . We shall denote this perceived demand as yt . yt ) t (9. for our representative firm. . Thus given the output price. At ) − ct and kt = kt . the firm has a perceived constraint on the market demand for its product. nt ) along with (it . yt ) nd = fn (rt . are actually carried by the household into the market for exchange due to our assumption of re-opening market. yt ) − rt kt − wt nd (9. Let us first consider the two factor markets. which shall always be 1 (since it serves as a numeraire).2 The Quantity Decisions of the Firm The problem of our representative firm in period t is to choose the current d s input demand and output supply (nd . the solutions should satisfy d kt = fk (rt . ∗ Otherwise. wt . given the prices of output. nt . where it = f (kt . 2 The detail will be provided in the appendix of this chapter. t However in this chapter. the optimization problem can be expressed as ∗ d max min(yt .173 We shall remark that although the solution appears to be a sequence cd .8) where rt and wt are respectively the prices (in real term) of capital and labor.

As we have discussed in the last chapter. only the short side of demand and supply will be realized. when a disequilibrium occurs. In particular. we again formulate this rule as nt = ωnd + (1 − ω)ns t t (9. see the discussion in the last chapter. there is no reason to believe that the labor market will be cleared.11) yt = f (kt . and therefore one may argue that the output determination does follow eventually the Keynesian way.6). On the other hand. 1). Our study in the last chapter indicates that the short side rule seems to be empirically less satisfying than the compromising rule. that is.1. nt = min(nd .11) indicates that the firm’s actual produced output is not necessarily constrained by equation (9. The result is the output supply. Therefore.174 9.8)). capital and labor (see equation (9. the Keynesian way of output determination is still reflected in the firm’s demand for inputs.10) where ω ∈ (0. ns ) t t Thus. which is now given by s (9.4 The Transaction in the Product Market After the transactions in those two factor markets have been carried out. Equation (9. the firm will engage in its production activity. 9.9) Due to the monopolistic wage setting and the sluggish wage adjustment. the most frequent rule that has been used is the short side rule. nt . one may . if the produced output is still constrained by (9. we shall again define a realization rule with regard to actual employment.1. The latter rule means that when disequilibrium occurs in the labor market both firms and workers have to compromise.7) and (9. in this chapter we shall only consider the compromising rule. At ) One remark should be added here.3 Transaction in the Factor Market Since the rental rate of capital stock rt is adjusted to clear the capital market when the market is re-opened in period t. that is. However. the output is constrained by demand.6). we have s d kt = kt = kt (9. Another important rule that we have discussed in the last chapter is the compromising rule. Therefore.

ns . t 9. t cd in (9. yt ) there will be no sufficient inputs to produce ∗ s ∗ min(yt .. t+i t+i 1+γ i = 1. the household will construct a new plan as expressed below: ∞ max Et (cd ) t i=0 s s.10) with ns and t nd are implied by (9.11). s Above. Therefore. yt ). It is important to note here that when disequilibrium occurs in the labor market the previous consumption plan as expressed by (9. yt ). Therefore. At ) − cd for investment. As we have demonstrated in t the last chapter.175 s encouter the difficulty either in terms of feasibility when yt in (9. ns ) t+i (9. yt ).13) (9.S. nt ) t (9.. . the transaction then needs s to be carried out with respect to yt .t. when yt > min(yt . not all inputs will be used in production. nt . On the other hand.11) is less ∗ s ∗ than min(yt . At ) − cd t 1+γ 1 s s (1 − δ)kt+i + f (kt+i .12) (9.9) and nt is given by (9. 2.2 9. the solution to this further step in the optimization problem can be written in terms of the following equation: cd = Gc2 (kt . in order to make it empirically more realistic. kt+1 = s kt+i+1 = d β i U (ct+i . . has to include economic growth. the product market should be cleared if the household demands the amount f (kt .15) Given this consumption plan.2. kt equals kt as expressed by (9. At+i ) − cd .3) becomes invalid due to the improper rule of capital accumulation (9. nt .. 3 s ∗ Note that here when yt < min(yt .15) should also be the realized consumption.1 Estimation and Calibration for U.14) 1 s (1 − δ)kt + f (kt . and therefore resources are somewhat wasted.2) for deriving the plan.8) respectively.4) and (9. Economy The Empirically Testable Model This section provides an empirical study of our theoretical model above presented which again. yt ) or in terms of inefficiency when yt is larger than min(yt . At .3 Given that the output is determined by (9.

19) where ǫt is an independently and identically distributed (i.176 Let Kt denote for capital stock. One possibility is to assume the expectation to be rational so that it is equal to the steady state of yt .17) where kt ≡ Kt /Xt . that is.i. yt = yt−1 (9. Finally.) innovation: 2 ǫt ∼ N (0. Indeed.3)α − ct . ct ≡ Ct /Xt and nt ≡ 0. . we shall assume that the utility function takes the form U (ct .21) where yt = Yt /Xt . 1+γ (9.3)α (9. Assume the capital stock in the economy follows the transition law: Kt+1 = (1 − δ)Kt + At Kt1−α (Nt Xt )α − Ct . (9.d. σǫ ).16) where δ is the depreciation rate. we also have done the same empirical study. one can also consider other forms of expectation.20) (9. Dividing both sides of equation (9. we shall assume that the output expectation yt be simply equal to yt−1 . Yt for output and Ct for consumption. (9. Note that the above formulation also indicates that the form of f (·) in the last section may take the form 1−α f (·) = At kt (nt N /0. nt ) log ct + θ log(1 − nt ) The temporary shock At may follow an AR(1) process: At+1 = a0 + a1 At + ǫt .3Nt /N with N to be the sample mean of Nt . α is the share of labor in the production function F (·) = At Kt1−α (Nt Xt )α . we obtain kt+1 = 1 1−α (1 − δ)kt + At kt (nt N /0. At is the temporary shock in technology and Xt the permanent shock that follows a growth rate γ. Nt for per capita working hours. so that the expectation is fully adaptive to the actual output in the last period.16) by Xt . yet the result is less satisfying.18) With regard to the household preference.4 4 Of course.

α.15) should be equal to ct .4). (9.26) nt = 1  0. we shall first modify (9. Next we consider the demand for labor nd derived from the firm’s optit mization problem (9. 5 . 2) are the complicated functions of the model’s structural parameters. 2. t Proposition: When the capital market is cleared. are the same as in Chapter 8. Specifically.20) as well as ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 (9.2. 3. The coefficients Gij and gi (i = 1. as a benchmark for comparison. the Model I. They are computed by the numerical algorithm using the linear-quadratic approximation method.3) and (9. the equilibrium in product market indicates that cd in t (9.23) are the linear approximations to (9. we shall call the benchmark model the Model I and the model with monopolistic competition the Model IV (in distinction from the Model II and Model III in Chapter 8). the Model IV. δ.5) .3 αAt Zt 1−α ∗  ¯ kt if yt ≥ yt ˆ wt N The algorithm used here is again from Chapter 1 of this volume.25) The computation of coefficients g3 and G3j .among others.(9.5 To define the data generating process for our model with monopolistic competition and nonclearing labor market.17).23) Note that here (9.22) (9.23) as ns = G21 At + G22 kt + g2 (9.22) and (9.177 9.2 The Data Generating Process For our empirical assessment.8). The following proposition concerns the derivation of nd .24) t On the other hand. For the benchmark dynamic optimization model. 2 and j = 1. the data generating process include (9. and therefore this equation can also be approximated by ct = G31 At + G32 kt + G33 nt + g3 (9. we consider two model variants: the standard model.3 yt ˆ 1 ∗  ¯ if yt < yt ˆ At kt N d (9. the firm’s demand for labor can be expressed as  1/α (1−α)/α  0. j = 1. and our model with monopolistic competition and nonclearing labor market. β. which shall now be augmented by the growth factor for our empirical test.

which we set equal to 1.20). To calibrate the models. θ and ω.2. the grid search. to include not only the technology shock. (9. This result is further confirmed by Figure 9. Note that here again we need a rescaling of the wage series in the estimation of ω. 7 Of course.4 Calibration Table 9. α. β. µ. and the second to the condition otherwise.26) and (9.25). σε . For our purpose it suffices to take the empirically observed series of wages.the observed Solow residual. . for this exercise one should still consider At . The estimation is again executed by a conventional algorithm. (9.24). This rescaling is necessary because we do not exactly know the initial condition of Zt . All these parameters are essentially the same as we have employed in Chapter 8 (see Table 8. We choose ω to be 0. we shall first specify the structural parameters. The proof of this proposition is provided in the appendix to this chapter. economy. Here again we do not need to attempt to give the actually observed sequence of wages a further theoretical foundation. 6 Note that there is a need of rescaling the wage series in the estimation of ω.178 where ∗ yt = (αAt Zt /wt )α/(1−α) kt At (9. Thus. where a one time simulation with the observed innovation At are presented. (9.17).S.1.5203. (9. We have followed the same rescaling procedure as we did in Chapter 8.7 All time series are detrended by the HP-filter.21) with wt given by the observed wage rate. (9.3 The Data and the Parameters We here only employ time series data of the U.27) Note that the first nd in the above equation responds to the condition that t the expected demand is less than the firm’s desired supply. δ. but also the demand shock among others. γ. a1 .2.1) except for the ω. There are altogether 10 structural parameters in Model IV: a0 . for Model IV.1 provides the result of our calibrations from 5000 stochastic simulations. 9.10). 6 9. the data generating process includes (9. This is estimated according to our new model by minimizing the residual sum of square between actual employment and the model generated employment.

179 Table 9.1: of the Model Variants (numbers in parentheses are the corresponding standard deviations) Consumption Standard Deviations Sample Economy Model I Economy Model IV Economy Correlation Coefficients Sample Economy Consumption Capital Stock Employment Output Model I Economy Consumption Capital Stock Employment Output Model IV Economy Consumption Capital Stock Employment Output 0.0081 0.0091 (0.0012) 0.0071 (0.0015) Capital 0.0035 0.0036 (0.0007) 0.0058 (0.0018) Employment 0.0165 0.0051 (0.0006) 0.0237 (0.0084) Output 0.0156 0.0158 (0.0021) 0.0230 (0.0060)

1.0000 0.1741 0.4604 0.7550 1.0000 (0.0000) 0.2043 (0.1190) 0.9288 (0.0203) 0.9866 (0.0033) 1.0000 (0.0000) 0.3878 (0.1515) 0.4659 (0.1424) 0.8374 (0.0591)

1.0000 0.2861 0.0954

1.0000 0.7263

1.0000

1.0000 (0.0000) -0.1593 (0.0906) 0.0566 (0.1044)

1.0000 (0.0000) 0.9754 (0.0076)

1.0000 (0.0000)

1.0000 (0.0000) 0.0278 (0.1332) 0.0369 (0.0888)

1.0000 (0.0000) 0.8164 (0.1230)

1.0000 (0.0000)

9.2.5

The Labor Market Puzzle

Despite the bias towards Model I Economy, due to the selection of the structural parameters, we find that the labor effort is much more volatile than

180 in the Model I Economy the benchmark model. Indeed, comparing to the benchmark model, the Model I economy, the volatility of labor effort in our Model IV economy has much been increased if anything the volatility of the labor effort is too high. This result is, however, not surprising since the agents face two constraints – one in the labor market and one in the product market. Also the excessive correlation between labor and consumption has been weakened. Further evidence on the better fit of our Model IV economy — as concerns the volatility of the macroeconomic variables — is also demonstrated in the Figure 9.1 where the horizontal figures show, from top to bottom, actual (solid line) and simulated data (dotted line) for consumption, capital stock, employment and output. The two columns of figures, from the left to the right, represent the figures for Model I and Model IV economies respectively. As observable, the employment series in Model IV economy can fit the data better than the Model I economy. This resolution to the labor market puzzle should not be surprising because we specify the structure of labor market essentially the same way as in the last chapter. However, in addition to the labor market disequilibrium as specified in the last chapter, we also allow in this chapter for monopolistic competition in the product market. In addition to impacting the volatility of labor effort, this may provide the possibility to resolve the another puzzle, that is, the technology puzzle also arising in the market clearing RBC model.

181

Figure 9.1: Simulated Economy versus Sample Economy: U.S. Case (solid line for sample economy, dotted line for simulated economy)

9.2.6

The Technology Puzzle

In economic literature, one often discusses the technology in terms of its persistent and temporary effects on the economy. One possibility to investigate the persistent effect in our models here is to look at the steady states. Given that at the steady state all the markets will be cleared, our Model IV economy should have the same steady state as in the benchmark model. For the convenience of our discussion, we rewrite these steady states in the following

182 equations (see the proof of Proposition 4 in Chapter 4): n = αφ/ [(α + θ)φ − (δ + γ)θ] k = A
1/α −1/α

φ

n N /0.3

c = (φ − δ − γ)k y = φk where φ = [(1 + γ) − β(1 − δ)] /β(1 − α) From the above equation, one finds that technology has the positive persistent effect on output, consumption and capital stock,8 yet zero effect on employment. Next, we shall look at the temporary effect of the technology shock. Table 9.2 records the cross correlation of the temporary shock At from our 5000 thousand stochastic simulation. As one can find there, the two models predicts rather different correlations. In the Model I (RBC) economy, technology At has a temporary effect not only on consumption and output, but also on employment, which are all strongly positive. Yet in our Model IV Economy with monopolistic competition and nonclearing labor market, we find that the correlation is much weaker with respect to employment. This is consistent with the widely discussed recent finding that technology has near-zero (and even negative) effect on employment. Table 9.2: The Correlation Coefficients of Temporary Shock in Technology. output consumption employment capital stock 0.9903 0.9722 0.9966 -0.0255 (0.0031) (0.0084) (0.0013) (0.1077) 0.8397 0.8510 0.4137 -0.1264 (0.0512) (0.0507) (0.1862) (0.1390)

Model I Economy Model IV Economy

At the given expected market demand, an improvement in technology (reflected as an increase in labor productivity) will reduce the demand for labor, if the firm follows the Keynesian way of output determination, that is, the output is determined by demand. In this case, less labor is required to produce the given amount of output. Technical progress, therefore, may
8

This long run effect of technology is also revealed by recent time series studies in the context of a variety of endogenous growth models, see Greiner, Semmler and Gong (2004).

where prices and wages are set by a monopolistic supplier and are sticky. the technology puzzle explored in standard market clearing models would disappear. the demand for labor is simply determined by the marginal product. This stylist fact cannot be explained in the RBC framework since at the given wage rate. where technology shocks and employment are predicted to be positively correlated. only a weak effect on employment in the short run . where then the households’ constraint on the labor market spills over to the product market and the firms constraint on the product market generates employment constraints. resulting in an updating scheme of prices and wages where only a fraction of prices and wages are optimally set each time period. The proposition in this chapter which shows the firms’ constraint in the product market. 9. which should be increased with the improvement in technology. . Yet we have also introduced a nonclearing labor market. namely that positive technology shocks may have.S.a phenomenon inconsistent with equilibrium business cycle models. We have then shown in this chapter how the firms’ constraints on the product market may explain the technology puzzle. Then noncleared labor market was derived from a multiple stage decision process of households where we have neglected that firms may also be demand constrained on the product market. This result was obtained in an economy with monopolistic competition. resulting from a multiple stage decision problem. This chapter thus demonstrates that if we follow the Keynesian way of quantity determination in a monopolistic competition model. economy. explains this additional complication that can arise due to the interaction of the labor market and the product market constraints. We could show that such a model matches better time series data of the U.3 Conclusions In the last chapter. we have shown how households may be constrained in the product market in buying consumption goods by the firms actual demand for labor. as in New Keynesian economics.183 have an adverse effect on employment at least in the short run.

and then reorganizing. Given ∗ this nd .29): Htd = αAt wt 1 1−α (Zt ) 1−α Kt α Dividing both sides of the above equation by Xt . we obtain 1 0.29) 1−α Htd α−1 Since the rental rate of capital rt is assumed to clear the capital market. where Ht equals Nt Lt and can be regarded as total labor hours.3 αAt Zt 1−α d nt = ¯ kt wt N We shall regard this labor demand as the demand when the firm desired activities are carried out.26). and therefore the demand for labor can be derived from (9.31) .4 Appendix: Proof of the Proposition Let Xt = Zt Lt . We shall assume that Lt has a constant growth rate µ and hence Zt follows the growth rate (γ − µ).184 9.30) −α 1−α Htd α α Htd = rt = wt (9. the firm’s desire to supply yt can be expressed as t ∗ 1−α yt = At kt (nd N /0. under the condition that the rental rate of capital rt clears the capital market while the wage rate wt is given.3)α t α 1−α = At kt αAt Zt wt (9. Since wt is given. ∗ Let us first consider the firm’s willingness to supply Yt∗ . which is indeed the first equation in (9. Yt∗ = Xt yt . we can thus replace Ktd in the above equations by Kt . the firm’s optimization problem can be expressed as max Yt∗ − rt Ktd − wt Htd subject to Yt∗ = At (Zt )α Ktd The first order condition tell us that (1 − α)At (Zt )α Ktd αAt (Zt )α Ktd from which we can further obtain rt = wt 1−α α Htd Ktd (9.28) (9. In this case. The production function can be written as Yt = At Ztα Kt1−α Htα . with Zt to be the permanent shock resulting purely from productivity growth. and Lt from population growth.

we obtain the demand for capital Ktd and labor Htd as ˆ Yt At Ztα ˆ Yt At Ztα wt rt wt rt 1−α α α 1−α α Ktd = Htd = 1−α Dividing both sides of the above two equations by Xt .3yt At N wt r t Zt r t Zt wt 1−α α α 1−α α d kt = (9.32) The first-order condition will still allows us to obtain (9. Yt = Xt yt .3 N yt At 1/α 1 kt (1−α)/α This is the second equation in (9. the firm’s profit maximization problem is equivalent to the following minimization problem: min rt Ktd + wt Htd subject to ˆ Yt = At (Zt )α Ktd 1−α Htd α (9.31).27) as expressed in the proposition.30). Next.Substituting it into (9.33) by kt . yt < yt where yt is given ˆ by (9.185 This is the equation (9.34) for explaining rt . we can replace d kt in (9.26).34) Since the real rental of capital rt will clear the capital market.33) 1−α nd = t (9. In this case.32) and (9. Using equation (9.30). we obtain nd t = 0. In other words. we consider the case that the firm’s supply is constrained by the ∗ ∗ ˆ ˆ expanded demand Yt . . we obtain yt At 0.

only to be a simple and starting point for macrodynamic analysis. 186 . the labor market puzzle and the technology puzzle. We recognize that the stochastic dynamic optimization model is important in macroeconomics. in RBC economy. we try to contribute to the current research in stochastic dynamic macroeconomics. the real business cycle model.Chapter 10 Conclusions In this book. For the model to explain the real world more effectively. we consider the current standard model of model. some Keynesian features should be introduced. We have shown that with such an introduction the model can be enriched while it becomes possible to resolve the most important puzzles.

[5] Benassy. Protopapadakis (1990). (2002): ”The Macroeconomics of Imperfect Competition and Nonclearing Markets”. 27.Bibliography [1] Adelman. J. and M. (1995): ”Money and Wage Contract in an Optimizing Model of the Business Cycle”. eds. Cambridge. Farmer (1999): ”Indeterminacy and Sunspots in Macroeconomics”. 187 .” Econometrica 22. 1A: 387-448 [9] Benhabib.” NBER Working Paper Series 5915. Journal of Economic Theory 63: 19-41. Taylor and M. Econometrica. 596-625 [2] Arrow. 265-290. J. and R. and F. Time Preference and the Equity Premium Puzzle. Handbook for Macroeconomics. North-Holland. (1954): ”Existence of an Equilibrium for a Competitive Economy. J. [3] Basu. G. 49-58. [4] Bellman. Farmer (1994): ”Indeterminacy and Increasing Returns”. S. J. [6] Benassy. Adelman (1959): ”The Dynamic Properties of the Klein-Goldberger Model”. and R. vol. vol. Schmidt-Grohe and M. S. New York. American Economic Review. J. (1957): Dynamic Programming. R. vol. and A. MA. [7] Benhabib.. Cambridge: MIT-Press. Kimball (1997): ”Cyclical Productivity with Unobserved Input Variation. K.-P. vol. NJ: Princeton University Press. L. 91. J. [8] Benhabib. and Debreu. S.”Leverage.-P. Princeton. I. S.1: 167-186. 35: 303-315.” Journal of Monetary Economics 25. Woodford. [10] Benninga. Uribe (2001): ”Monetary Policy and Multiple Equilibria”. J. Journal of Monetary Economics. no.

Fisher (2001). edited by P. and S. Aghion. 47(3):727732.A. O. vol. 1:257271. 91. [20] Boldrin. 5262. part I: the growth model”. 22. [22] Brock. O. Journal of Economic Theory 4: 479-513. vol.E. Fisher (1996). New York. American Economic Review. Semmler (2001): ”Dynamic Optimization and Skiba Sets in Economic Examples”. Vol. [13] Beyn. no. issues 5-6: 251-280. Stein (1986). Green and J. I. and Mirman (1972)”Optimal Economic Growth and Uncertainty: The Discounted Case”. And Expectations in Modern Macroceconomics”. and J. Econometrica. Academic Press: 165-190. L. in: J. Christiano.BIBLIOGRAPHY 188 [11] Bennett. L. Credit and Banking. W. (2003): ”Comments on Jjungqvist and Sargent” in: Knowledge. Pampel. 28. (1995) ”The Sustainability of Budget Deficits in a stochastic Economy”. W. Optimal Control Applications and Methods.. R. [14] Blanchard.” Technometrics. ” Macroeconomic Lessons for Asset Pricing”. [21] Brock. M. vol. [19] Boldrin. [16] Blanchard. and J.R. Johnson and M. W. Schenkman (eds. O. [17] Bohachevsky. T.).. E. Wolfers (2000): ”The Role of Shocks and Institutions in the Rise of Unemployment: The Aggregate Evidence”. Journal of Money. M. and J. Information. Woodford. J. 209-217. J. Princeton Unviersity Press. 27. Asset Returns and the Business Cycle”. Fischer (1989): ”Lectures on Macroeconomics”. H. Princeton: 351-356. [12] Benveniste and Scheinkman (1979): ”On the Differentiability of the Value Function in Dynamic Economics. NBER working paper no. . Farmer (2000): ”Indeterminacy with Nonseparable Utility”. L. and W.. 1:149-166. O. L.. M. Economic Journal 110:C1-C33. [18] Bohn. (1979) ”An integration of stochastic growth theory and theory of finance. ”Habit Persistence. and R. Journal of Economic Theory 93: 118-143. Frydman. MIT-Press [15] Blanchard. Christiano. ”Generalized Simulated Annealing for Function Optimization. Cambridge. vol. Stiglitz and M.

[31] Chow. Vol. IMA Volumes in Applied Mathematics 78. A. 621-630. ”Inspecting the Mechanism: An Analytical Approach to the Stochastic Growth Model”. 1996. Rebelo (1996): ”Sectoral Solow Residual”.” Econometric Research Program. C. [29] F. 23-57. T.A. European Economic Review. ed. The Economies of Information and Uncertainty. Princeton: Princeton University. C. McCall. [33] Chow. C. Mordukhovic. (1993): ”Statistical Estimation and Testing of a Real Business Cycle Model. Optim. (1983). (1997): Dynamic Economics: Optimization by the Lagrange Method. J.. [24] Burns. vol 12: 383-398. T. Research Memorandum. F. Journal of Political Economy. I. Chicago. Springer Verlag. 40: 861-869. (1993): Optimum Control without Solving the Bellman Equation. ”Approximation of Ooptimal Control Problems with State Constraints: Estimates and Applications”. M. G. [34] Chow. [32] Chow. G. Camilli and M. Journal of Economic Dynamics and Control 17. Journal of Monetary Economics 33. [27] Calvo. Math. Journal of Monetary Economics.A. Inc. New York: MacGraw-Hill.”Econometrics”. New York: NBER. (1994). Sussman eds. C. G. Appl. University of Chicago Press: 165-192.101:245-273.. [30] Capuzzo-Dolcetta. no. [28] Campbell. Eichenbaum and S. S. C. G. C. Rebelo (1993): ”Labor Hoarding and the Business Cycle”. 10: 367-377. and W. .BIBLIOGRAPHY 189 [23] Brock (1982) ”Asset Pricing in a Production Economy”. ”Nonsmooth analysis and geometric methods in deterministic optimal control”. by J.J.. vol. G. 365. (1983): ”On a Discrete Approximation of the Hamilton-Jacobi-Bellman Equation of Dynamic Programming”. Eichenbaum and S. 463-506. [26] Burnside. . (1983)”Staggered Contracts in a Utility Maximization Framework”. New York: Oxford University Press. Falcone (1995).S. H. [25] Burnside. in B. C. M.J. Mitchell (1946): Measuring Business Cycles.

308-318. [46] Dawid. Day (2003). vol.J. Princeton:Princeton University Press. vol. vol. [42] Cooley. J. Donaldson (1995): ”Non-Walrian Economies. G. Federal Reserve Bank of Minneapolis. 21: 247-80. J. Eichenbaum (1992): ”Current Real Business Cycle Theories and Aggregate Labor Market Fluctuation.P. C.B. L. Journal of Monetary Economics. Donaldson (1990): ”Efficiency Wages and the Business Cycle Puzzle”. J.BIBLIOGRAPHY 190 [35] Chow. 13. L. [45] Danthine. C. (1987): Why Does Inventory Fluctuate So Much? Journal of Monetary Economics. [40] Christiano. Cooly (ed). University of Bielefeld. [36] Christiano. H. Prescott (1995): ”Economic Growth and Business Cycles”. L. M. 380. K. 21: 247-80.” in T. Suboptimally and Pessimality in the One Sector Growth Model”. Eichenbaum and C.. [44] Danthine. (1987): Technical Appendix to “Why Does Inventory Investment Fluctuate So Much?” Research Department Working Paper No. (1988): ”Why Does Inventory Fluctuate So Much?”. [38] Christiano. mimeo. . Jounal of Monetary Economics 41.F. A. and J. Time Series. J. Y. in Cooley. L. and E. ”Adaptive Economizing and Sustainable Living: Optimally. ”Minimizing Multimodal Functions of Continuous Variables with the Simulating Annealing Algorithm. June.. Princeton: Princeton University Press. and R. and Kwan. and S.” ACM Transactions on Mathematical Software. Evans (2001). [41] Cochrane (2001): ”Asset Pricing”.. Frontiers in Business Cycle Research. Ridella (1987). T. European Economic Review 34: 1275-1301. Princeton: Princeton University Press [43] Corana. ed. and J. “Nominal Rigidities and the Dynamic Effects of a Stock to Monetary Policy. Frontiers of Business Cycle Research.” American Economic Review. 262-80. T. 431-472. and M.S. J. M. L. B. (1998): How the Basic RBC Model Fails to Explain U. [37] Christiano.P. J. Martini. [39] Christiano.

(1991): ”Real Business Cycle Theory: Wisdom or Whimsy?” Journal of Economic Dynamics and Control. Ohanian and J. Frankfurt. [54] European Central Bank Report. Levin (2000). 21(4). 46: 281 . New York: Wiley. [50] Eichenbaum. [49] Diebold F. 15. National Bureau of Economic Research. Optim. Estimation. [48] Debreu.313. (1959): Theory of Value. C. [58] Falcone. and A. (1992): ”Productivity Shock and Real Business Cycles”. 174. G. 8: 31-34. L. “Determinants of Business Investment. D. and Analysis of Macroeconometric Models. J. W. . Hansen and K. Technical Working Paper No. 1169-1185. M. W. Stroz (1963). Marcet (1990): ”Solving the Stochastic Growth Model by Parameterizing Expectations”. ”Dynamic Equilibrium Economies: A Framework for Comparing Model and Data”. Berkowitz (1995). Vol. [55] Evans. and R. (1984): Specification. R. R. [57] Fair. R. Impacts on Monetary Policy.E. C. C.. p191-208. Prentice Hall. European Central Bank.BIBLIOGRAPHY 191 [47] den Haan. 29. 607626. ”Quantifying the Impact Structural Reforms”.. 15: 1-13. (1987) ”A Numerical Approach to the Infinite Horizon Problem of Determinstic Control Theory”. M. M. Appl.L.X. ”Optimal Monetary Policy with Staggered Wage and Price Contracts”. [53] Erceg. 51-78 [52] Eisner. and J. Country Decision (2004). [51] Eichenbaum. Math. Taylor (1983): Solution and Maximum Likelihood Estimation of Dynamic Nonlinear Rational Expectation Models. B. C. Vol. Singleton (1988): ”A Time Series Analysis of Representative Agent Models of Consumption and Leisure Under Uncertainty. T. [56] Fair. Cambridge. MA: Harvard University Press. Journal of Monetary Economics. Henderson and A. vol.” Quarterly Journal of Economics. Journal of Monetary Economics. Econometrica. Journal of Business and Economic Statistics.

Center for Empirical Macreconomics. M. “The Dynamics of a Simple Relative Adjustment-Cost Framework. P. [61] Francis. and W. and W. Bielefeld University. University of California. and W. Employment. 89. University of Technology. San Diego. Numerics and Empirical Evidence”.” in H. Semmler and J. [60] Feichtinger. Hartl. G. Dordrecht: Kluwer.. Opladen. Ramey (2003): ”The Source of Historical Economic Fluctuations: An Analysis using Long-Run Restrictions”. W. University of California. Wirl (2000). (1999): Technology. Computational Economics and Econometrics. in: Okonomie als Grundlage politischer Entscheidungen. Cambridge. . J. [67] Gong. [68] Gong.. p249-271. San Diego. F. and Inventions”. Amman. and V. G. [66] Gong. L. MIT Press. G. [62] Francis. Kort and F.. Semmler (2001). [65] Gong.). Center for Empirical Macroeconomics. and Europe: the Role of Knowledge. Ramey (2001): ”Is the Technology-Driven Real Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations Revisited”.H. ”Global Optimization of Statistical Function.S. Center for Empirical Macroeconomics. Vienna. Gabriel and M. G. and V. and the Business Cycle: Do Technology Shocks Explain Aggregate Fluctuation? American Economic Review. Ferrier and J. Bielefeld University. vol.A. Greiner. 1. Rogers (1992). N.A. Semmler: ”Stochastic Dynamic Macroeconomics: Theory. working paper. G. Leske und Budrich. Bielefeld University. D. A. A. Real Business Cycles with disequilibirum in the Labor Market: A Comparison of the US and German Economies. Semmler (2001): Dynamic Programming with Lagrangian Multiplier: an Improvement over Chow’s Approximation Method. Neugart (eds. F. Pau eds. Working Paper. book manuscript. Rubart (2001): ”Economic Growth in the U. N. Human Capi¨ tal. mimeo. W. Vol. [64] Goffe. Belsley and L.BIBLIOGRAPHY 192 [59] Farmer (1999) ”Macroeconomics with Self-Fulfilling Expectations”. [63] Gali. G.

921-947.. A. D. [73] L. Gong (2004)”Forces of Economic Growth . forthcoming: Princeton. and W. W. Semmler (2003): ”Economic Growth. [71] Greiner. . and Europe”. J. Journal of Political Economy. Rubart and W. Gr¨ne (2003). Errorr estimation and adaptive discretizau tion for the discrete stochastic Hamilton–Jacobi–Bellman equation. [80] Hamilton. [70] Greiner. forthcoming Journal of Macroeconomics. ”Asset Pricing . Vol. 2: 287-315. [74] Gr¨ne. working paper. University of Bayreuth. E. (1994). Princeton: Princeton University Press. A. Semmler and M. Industry”. [72] Gr¨ne. Semmler and G. 25. Semmler (2004b) ”Solving Asset Pricing Models with u Stochastic Dynamic Programming”. L.. and W. R. p. A Model and Estimations for the U. [76] Gr¨ne. http://www. 28: 2427-2456. L.. J. and W.uni-bayreuth. Economic Theory. Princeton University Press. Semmler (2004a): ”Using Dynamic Programming u for Solving Dynamic Models in Economics”.S. L. L.Constrained by u Past Consumption Decisions”. 96. forthcoming.. Preprint. CEM Bielefeld.BIBLIOGRAPHY 193 [69] Greiner. vol. A. Gong (2003): ”The Forces of Economic Growth: A Time Series Perspective”. no. ”Creditworthiness u and Threshold in a Credit Market Model with Multiple Equilibria”. W. 75: 1288-1314.A Time Series Perspective”. [77] Gr¨ne. working paper. Princeton: Princeton University Press. L. Semmler and G. (1997) ”An Adaptive Grid Scheme for the Discrete Hamiltonu Jacobi-Bellman Equation”. ”Time Series Analysis”. Numer. [78] Gr¨ne. CEM Bielefeld. and W. Math. Submitted. Skill-biased Technical Change and Wage Inequality. [79] Hall.S. Sieveking (2004). [75] Gr¨ne. 2004/05. Semmler (2004d). Asset Pricing u and Debt Control”. working paper. Semmler (2004c) ”Default Risk.de/departments/math/∼lgruene/papers/. W.. (1988): ”The Relation between Price and Marginal Cost in U. CEM Bielefeld. forthcoming Journal of Economic Dynamics and Control. forthcoming Journal of Financial Econometrics. L.

1268-1286. [93] Judd. R.M. Chapter 12 in: Amman. Business Cycle: an Empirical Investigation. 309-327. [86] Harrison. Pittsburgh.. J. L. 1029-1054. Working Paper. H. p.. G.Pertubation. (2001) ”Indeterminacy with Sector-specific Externalities”. Frydman. [84] Hansen.A. J. [92] Jerman. no. Carnegie-Mellon University. Stiglitz and M. H. London. Uhlig (2001). J. U. vol. no.. (1982). “Tobin’s Marginal q and Average q: A Neoclassical Interpretation. H. ”What is the Real Story for Interest Rate Volatility?” German Economic Review 1(1): 43-67. Woodford.” Econometrica.. J. Princeton Unviersity Press. and E. S. Aghion. C. [88] Heckman. PA. and H. ”Asset Pricing in Pproduction Economies”. in: Knowledge. E. and K. Kendrick and J. [85] Hansen. Information. D.R.G. Los Angeles. Journal of Political Economy. vol. [90] Hodrick. Handbook of Computational Economics. [82] Hansen. G. Journal of Economic Dynamics and Control. 25: 747-76. eds. 5. J. Econometrica 50: 213-224.” Journal of Monetary Economics. (1985): ”Indivisible Labor and Business Cycles. Prescott (1980): Post-war U. Elsevier: 511-585. and Projection Methods in Economic Analysis”.J. [83] Hansen.921-947. R. University of California. vol. (1998). F. S. And Expectations in Modern Macroceconomics”. Journal of Monetary Economies 41: 257-275. edited by P. (2003): ”Flexibility and Creation: Job Lessons from the German Experience”. K. Singleton (1982): ”Generalized Instrument Variables Estimation of Nonlinear Rational Expectations Models. P. Rust. (1982): ”Large Sample Properties of Generalized Methods of Moments Estimators.16. 96.BIBLIOGRAPHY 194 [81] Hall. . (1988): ”The Relation between Price and Marginal Cost in U. Industry”. Vol. L. (1996) ”Approximation. P. [87] Hayashi. L. R. A. McMillan. (1988): ”Technical Progress and Aggregate Fluctuations”. 50.S. 4. (1963): ”The Theory of Wages”. [91] Hornstein.” Econometrica. 50. working paper. Princeton: 357393. [89] Hicks.

.” Journal of Monetary Economics. [95] Judge. Paris. G.M. 195-232. NY: McGraw-Hill Book Company. Elsevier Science. D. MA: MIT Press. R.) Monetary Policy Rules. London. L. Woodford. [100] Kim. R. [96] Juillard M. Rebelo (1988a): ”Production. G. (2004) ”Does Utility Curvature Matter for Indetermincy?”. 21. 405438.” CEPREMAP Working Paper. Taylor (ed.. [102] King.L. [103] King. Chicago: The University of Chicago Press. Growth and Business Cycles I: the Basic Neo-classical Model. Volume I. Journal of Economic Behavior and Organization. R. Plosser (1994): ”Real Business Cycles and the Test of the Adelmans”. G. Lee (1985). Wolman (1999): ” What should the Monetary Authority do when Prices are sticky?”.. J. [99] Kim. No. Taylor and M. T. G. ”The Theory and Practice of Econometrics”. Plosser. Plosser. W. T. (1998): Numerical Methods in Economics. Interest and Money”.. and S. (1936) ”The General Theory of Employment. Cambridge. MacMillan. C. vol. Hill and T. [105] King. 21. and A. (2003) ”Indeterminacy and Investment and Adjustment Costs: An Analytical Result”. B.BIBLIOGRAPHY 195 [94] Judd. J. R. New York: Wiley. K. Growth and Business Cycles II: New Directions. 309-341. and S. C. T. 2nd edition. G. and S.G. R. France. and C. [104] King. vol. [98] Keynes. I. New York. G. Journal of Monetary Economics. forthcoming. I. [101] King. edited by J. ”Production.” in Handbook of Macroeconomics. Rebelo (1999): ”Resusciting Real Business Cycles. R. Griffiths. Rebelo (1988b). 33. Macroeconomic Dynamics 7: 394-406. (1981): Stochastic Control for Economic Models. I. in: J. 9602. (1996): DYNARE: A Program for the Resolution and Simulation of Dynamic Models with Forward Variables through the Use of a Relaxation Algorithm. E. C. C. [97] Kendrick.” Journal of Monetary Economics. J.

And Expectations in Modern Macroceconomics”. T. Journal of Political Economiy 75: 321-334. 44. (1999): ”Inspecting the Mechanism: The Determination of Asset Prices in the Real Business Cycle Model. Cambridge. M. Information. and T. [112] Ljungqvist. M. Plosser (1983): Real Business Cycles. 50. J. Econometrica. . Prescott (1982). Journal of Political Economy. J. and E. L. 106. ”Time to Build and Aggregate Fluctuation”. [117] Lucas. Chow (1997): Chow’s Method of Optimum Control: A Numerical Solution. Y. Carnegie-Rochester Conference Series on Public Policy. Journal of Economic Dynamics and Control 21. (2000): Recursive Macroeconomics. 1345-1370. E. 85-103. 91. Econometrica 46: 1429-1446.” revised from CEPR Discussion Paper No. MA: The MIT Press. 39-69. L. R. vol. Sargent (1998): ”The European Unemployment Dilemma”. Gong and W. F. [115] Lucas. Aghion. [113] Ljungqvist. vol. I. L. Journal of Political Economy. Uhlig (1999): ”Volatility Bounds and Preferences: An Analytical Approach. 1834 [109] Lettau. [116] Lucas. R.” CEPR working paper No. 739-752. K.BIBLIOGRAPHY 196 [106] Kwan. and G. E. B. [111] Ljungqvist. edited by P. Woodford. R. Journal of Economics Behavior and Organization. [108] Lettau. C. [114] Long. Princeton: 326-350. Semmler (2001): Statistical Estimation and Moment Evaluation of a Stochastic Growth Model with Asset Market Restriction. in: Knowledge. F. (1978) ”Asset Prices in an Exchange Economy”.. and H. and T. Stiglitz and M. J. 1678 [110] Lettau. (1976): Econometric Policy Evaluation: A Critique. and Sargent. Sargent (2003): ”European Unemployment: From a Worker’s Perspective”. 19-46. Frydman. no. [107] Kydland. M. Princeton Unviersity Press. and C. vol. 1. vol.3: 514-550. G. (1967): “Adjustment Costs and the Theory of Supply. R.

Information. [130] OECD (1998a): ”Business Sector Data Base”. (1997): ”Unemployment and Labor Market Rigidities – Europe versus North Maerica”. R. Prescott (1971):”Investment under Uncertainty”. [128] Nickell. Inc. Stiglitz and M. L. Rosenbluth. [124] Merz. (1964): ”What can we learn from European Experience. E. Journal of Monetary Economics 15: 145-161. W. 27. R. ed. Vol. 39 (5):659ff. and Teller. 43: 91-124. in Unemployment and the American Economy?”. and E. M. [123] Mehra and Prescott (1985). Woodford. [121] Mankiw. (1989): ”Real Business Cycles: A New Keynesian Perspective”. in Unemployment and the American Economy?”. [129] Nickell. . R. R. New York: John Wiley & Sons. ”Equation of State Calculation by Fast Computing Machines.M. Aghion. M. G. Journal of Economic Literature. (1968) ”What Can We Learn from European Experience. Rosenbluth. Quintini (2003): ”The Beveridge Curve. (1994): ”Diagnosing Unemployment”. and Scott. vol. 1087-1092. [125] Metropolis. New York: John Wiley & Sons. Econometrica. A.J. A. (1953). N.C. vol. (1999): Computational Methods for the Study of Dynamic Economies. (1999): Heterogenous Job-Matches and the Cyclical Behavior of Labor Turnover”. Unemployment. Ross.N.J.. Vol. ”The Equity Premium Puzzle”. S. [127] Meyers. E... Inc. (1990): A Quick Refresher Course in Macroeconomics. Cambridge University Press. Frydman. by A. S. Journal of Economic Perspectives. 79-90. 3. And Expectations in Modern Macroceconomics”. OECD Statistical Compendium.W. by A. Princeton. [122] Marimon. ed. A. 21. New York. edited by P. [120] Mankiw. 55-74. N..M. Nunziata. Teller. G. no. [126] Meyers. R. Ochel and G. and Wages in the OECD from the 1960s to the 1990s”. in: Knowledge.” The Journal of Chemical Physics. Journal of Monetary Economics. NY: Oxford University Press. 1645-1660. Journal of Economic Perspectives 3. [119] Malinvaud. N. Cambridge. J. Ross. 6. Princeton Unviersity Press. M.BIBLIOGRAPHY 197 [118] Lucas.

3. and G. 51-77.” Journal of Economic Perspectives. eds. E.” in T. 10.P.S. no. 90: 1187-1211. and M. 543-559. [136] Ramsey. [137] Reiter. C. [133] Phelps. [142] Rust. H. Kendrick and J. Zoega (1998): ”Natural Rate Theory and OECD Unemployment”. [135] Prescott. Journal of Political Economy.S. F. no. M. Federal Reserve Bank of Minneapolis.M. Cambridge: MIT-Press.) Monetary Policy Rules. Handbook of Computational Economics. 3.” OECD Economic Outlook. J. (1997): Rewarding Work. 9-22. Frontiers of Business Cycle Research. [140] Rotemberg. vol. [141] Rotemberg.BIBLIOGRAPHY 198 [131] OECD (1998b): ”General Economic Problems. J. (1989): ”Understanding Real Business Cycles. [139] Rotemberg. Cooley (ed). and J.A. Taylor (ed. [134] Plosser. Princeton:Princeton University Press. vol. [143] Santos. M.F. Woodford (1995): ”Dynamic General Equilibrium Models with Imperfectly Competitive Product Markets.. Chicago: the University of Chicago Press. E. Vigo-Aguiar (1998): Analysis of a Numerical Dynamic Programming Algorithm Applied to Economic Models. vol. Vigo-Aguiar (1995): [144] Santos. and J. Journal of Economic Dynamics and Control 21. E. Economic Journal. (1996). 66(2): 409-426 . (1982) ”Sticky Prices in the United States”. Econometrica. Contry Specific Series [132] Phelps.. 108 (May): 782-801. and M. 620–729.” Quarterly Review. 723-737. J. D. Rust. (1997): Chow’s Method of Optimum Control. M. 4. M. in: J. (1996) [138] Reiter. in: Amman. I. pp. Woodford (1999): ”Interest Rate Rules in an Estimated Sticky Price Model”. Economic Journal 28. (1928): ”A Mathematical Theory of Saving”. Elsevier. (1986): ”Theory ahead of Business Cycle Measurement. J. C. ”Numerical Dynamic Programming in Economics”.

(1986): ”Some Skeptical Observations on Real Business Cycles Theory”. Scott ed. (1980): Aggregate Dynamics and Staggered Contracts. edited by J.23-27. Vol. H. 21. 1136-1159. [153] Summers. vol. vol. Princeton. Journal of Business and Economic Statistics. Princeton University Press.” in Handbook of Macroeconomics. K. (1979): ”Another Possible Source of Wage Stickiness”. New York: Oxford University Press. [157] Uhlig. (1988): ”Econometric Issues in the Analysis of Equilibrium Business Cycle Model. 361-386. vol 90. (1990): Solving Nonlinear Stochastic Growth Models: A Comparison of Alternative Solution Methods. Statistisches Bundesamt Wiesbaden. “Optimal Growth with a Convex-Concave Production Function. (1999): A Toolkit for Analysing Nonlinear Dynamic Stochastic Models Easily. R. Cambridge: Harvard University Press. A. [147] Simkins. 88: 1 . S. R. American Economic Review. (1999): ”Staggered Price and Wage Setting in Macroeconomics. [156] Taylor. Fachserie 18. C. [146] Schmidt-Grohe. Econometrica 46 (May): 527-539. vol. Elsevier Science. [155] Taylor. L. [152] Stockey. Woodford. 1: 79-82 [151] Statistisches Bundesamt (1998). Federal Reserve Bank of Minneapolis Quarterly Review. [150] Solow.” Journal of Monetary Economics. H. Lucas and E. 8. Volume I.BIBLIOGRAPHY 199 [145] Sargent. (2001): Endogenous Business Cycles and the Dynamics of Output. J. L. B. (1978). 1-17. in R. (1994): ”Do Real Business Cycle Models Really Exhibit Business Cycle Behavior?” Journal of Monetary Economics.: Computational Methods for the Study of Dynamic Economies. Hours and Consumption. [154] Taylor. [149] Skiba. Tayor and M. J. B.B. and Uhlig. J. 5. . K. 33. no. [148] Singleton. S. Prescott (1989): ”Recursive Methods in Economics”. P.. p. Marimon and A. B. 10. T. N. E. Vol. Journal of Political Economy. Journal of Macroeconomics. 381-404. (1999): Contested Inflation. H.24.

Princeton University Press. 1011-1041. Bielefeld University. no. H. J. D. Semmler and M. [164] Woodford. Ritson. University of Pennsylvania [166] 2003): ”Monetary Policy Rules under Uncertainty: Adaptive Learning and Robust Control”. Bergen (2000): The Managerial and Customer Costs of Price Adjustment: Direct Evidence from Industrial Markets. working paper.” Journal of Computational Physics. 101. CEM. Macroeconomic Dynamics 2004/05 . Lettau (2001). 56. L. (2003): ”Interest and Prices”. D. Santa Cruz. H. M. vol.W. Manuscript.E. [159] Uzawa. and S. M. and Y. vol. [163] W¨hrmann. 259-271. Louie (1984). M. (1993). W. Levy. [162] Watson. Princeton. ”Measures of Fit for Calibration Models”. P. S. ”Nonparameto ric Estimation of Time-Varying Characteristics of Intertemporal Asset Pricing Models”. working paper. [160] Vanderbilt. ”A Monte Carlo Simulated Annealing Approach to Optimization over Continuous Variables. Tilburg Unversity. Economic Studies Quarterly XIX: 1-14. forthcoming. Xu (1996): ”Effort and the Cycle: Cyclical Implications of Efficiency Wages. [165] Zbaracki. Journal of Political Economy. 6.”. University of California. Dutta and M. mimeo. M. (1968): “The Penrose Effect and Optimum Growth. [161] Walsh.BIBLIOGRAPHY 200 [158] Uhlig. Wharton School.. (2002): ”Labor Market Search and Monetary Shocks”.. G.

Sign up to vote on this title
UsefulNot useful

Master Your Semester with Scribd & The New York Times

Special offer: Get 4 months of Scribd and The New York Times for just $1.87 per week!

Master Your Semester with a Special Offer from Scribd & The New York Times