This action might not be possible to undo. Are you sure you want to continue?

# Stochastic Dynamic Macroeconomics

:

Theory, Numerics and Empirical Evidence

Gang Gong

∗

and Willi Semmler

†

October 2004

∗

Tsinghua University, Bejing, China. Email: ggong@em.tsinghua.edu.cn

†

Center for Empirical Macroeconomics, Bielefeld, and New School University, New York.

Contents

List of Figures iv

List of Tables vi

Preface 1

Introduction and Overview 2

I Solution and Estimation of Stochastic Dynamic

Models 11

1 Solution Methods of Stochastic Dynamic Models 12

1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

1.2 The Standard Recursive Method . . . . . . . . . . . . . . . . . 13

1.3 The First-Order Conditions . . . . . . . . . . . . . . . . . . . 15

1.4 Approximation and Solution Algorithms . . . . . . . . . . . . 17

1.5 An Algorithm for the Linear-Quadratic Approximation . . . . 23

1.6 A Dynamic Programming Algorithm . . . . . . . . . . . . . . 25

1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

1.8 Appendix I: Proof of Proposition 1 . . . . . . . . . . . . . . . 28

1.9 Appendix II: An Algorithm for the LQ-Approximation . . . . 29

2 Solving a Prototype Stochastic Dynamic Model 33

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.2 The Ramsey Problem . . . . . . . . . . . . . . . . . . . . . . . 33

2.3 The First-Order Conditions and Approximate Solutions . . . . 35

2.4 Solving the Ramsey Problem with Diﬀerent Approximations . 39

2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

2.6 Appendix I: The Proof of Proposition 2 and 3 . . . . . . . . . 48

2.7 Appendix II: Dynamic Programming for the Stochastic Version 50

i

CONTENTS ii

3 The Estimation and Evaluation of the Stochastic Dynamic

Model 52

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.2 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3.3 The Estimation Methods . . . . . . . . . . . . . . . . . . . . . 55

3.4 The Estimation Strategy . . . . . . . . . . . . . . . . . . . . . 57

3.5 A Global Optimization Algorithm: The Simulated Annealing 58

3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.7 Appendix: A Sketch of the Computer Program for Estimation 60

II The Standard Stochastic Dynamic Optimization

Model 63

4 Real Business Cycles: Theory and the Solutions 64

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.2 The Microfoundation . . . . . . . . . . . . . . . . . . . . . . . 65

4.3 The Standard RBC Model . . . . . . . . . . . . . . . . . . . . 69

4.4 Solving Standard Model with Standard Parameters . . . . . . 74

4.5 The Generalized RBC Model . . . . . . . . . . . . . . . . . . . 76

4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.7 Appendix: The Proof of Proposition 4 . . . . . . . . . . . . . 80

5 The Empirics of the Standard Real Business Cycle Model 82

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5.2 Estimation with Simulated Data . . . . . . . . . . . . . . . . . 82

5.3 Estimation with Actual Data . . . . . . . . . . . . . . . . . . 86

5.4 Calibration and Matching to U. S. Time-Series Data . . . . . 89

5.5 The Issue of the Solow Residual . . . . . . . . . . . . . . . . . 93

5.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

6 Asset Market Implications of Real Business Cycles 101

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

6.2 The Standard Model and Its Asset Pricing Implications . . . . 103

6.3 The Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 107

6.4 The Estimation Results . . . . . . . . . . . . . . . . . . . . . . 110

6.5 The Evaluation of Predicted and Sample Moments . . . . . . . 112

6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

CONTENTS iii

III Beyond the Standard Model — Model Variants

with Keynesian Features 116

7 Multiple Equilibria and History Dependence 117

7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

7.2 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

7.3 The Existence of Multiple Steady States . . . . . . . . . . . . 121

7.4 The Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

7.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

7.6 Appendix: The Proof of Propositions 5 and 6 . . . . . . . . . 128

8 Business Cycles with Nonclearing Labor Market 131

8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

8.2 An Economy with Nonclearing Labor Market . . . . . . . . . 135

8.3 Estimation and Calibration for U. S. Economy . . . . . . . . . 142

8.4 Estimation and Calibration for the German Economy . . . . . 151

8.5 Diﬀerences in Labor Market Institutions . . . . . . . . . . . . 159

8.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

8.7 Appendix I: Wage Setting . . . . . . . . . . . . . . . . . . . . 164

9 Monopolistic Competition, Nonclearing Markets and Tech-

nology Shocks 171

9.1 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

9.2 Estimation and Calibration for U.S. Economy . . . . . . . . . 175

9.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

9.4 Appendix: Proof of the Proposition . . . . . . . . . . . . . . 184

10 Conclusions 186

List of Figures

2.1 The Fair-Taylor Solution in Comparison to the Exact Solution 40

2.2 The Log-linear Solution in Comparison to the Exact Solution . 42

2.3 The Linear-quadratic Solution in Comparison to the Exact

Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

2.4 Value Function obtained from the Linear-quadratic Solution . 45

2.5 Value Function . . . . . . . . . . . . . . . . . . . . . . . . . . 46

2.6 Path of Control . . . . . . . . . . . . . . . . . . . . . . . . . . 47

2.7 Approximated value function and ﬁnal adaptive grid for our

Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

4.1 The Deterministic Solution to the Benchmark RBC Model for

the Standard Parameters . . . . . . . . . . . . . . . . . . . . . 75

4.2 The Stochastic Solution to the Benchmark RBC Model for the

Standard Parameters . . . . . . . . . . . . . . . . . . . . . . . 75

4.3 Value function for the general model . . . . . . . . . . . . . . 79

4.4 Paths of the Choice Variables C and N (depending on K) . . 79

5.1 The β - δ Surface of the Objective Function for ML Estimation 85

5.2 The θ −α Surface of the Objective Function for ML Estimation 85

5.3 Simulated and Observed Series (non detrended) . . . . . . . . 91

5.4 Simulated and Observed Series (non detrended) . . . . . . . . 92

5.5 The Solow Residual: standard (solid curve) and corrected

(dashed curve) . . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.6 Sample and Predicted Moments with Innovation Given by

Corrected Solow Residual . . . . . . . . . . . . . . . . . . . . 99

6.1 Predicted and Actual Series: all variables HP detrended (ex-

cept for excess equity return) . . . . . . . . . . . . . . . . . . 113

6.2 The Second Moment Comparison:all variables detrended (ex-

cept excess equity return) . . . . . . . . . . . . . . . . . . . . 114

7.1 The Adjustment Cost Function . . . . . . . . . . . . . . . . . 122

iv

LIST OF FIGURES v

7.2 The Derivatives of the Adjustment Cost . . . . . . . . . . . . 122

7.3 Multiplicity of Equilibria: f(i) function . . . . . . . . . . . . . 124

7.4 The Welfare Performance of three Linear Decision Rules . . . 126

8.1 Simulated Economy versus Sample Economy: U.S. Case . . . . 150

8.2 Comparison of Macroeconomic Variables U. S. versus Germany 152

8.3 Comparison of Macroeconomic Variables: U. S. versus Ger-

many (data series are detrended by the HP-ﬁlter) . . . . . . . 153

8.4 Simulated Economy versus Sample Economy: German Case . 157

8.5 Comparison of demand and supply in the labor market . . . . 158

8.6 A Static Version of the Working of the Labor Market . . . . . 165

8.7 Welfare Comparison of Model II and III . . . . . . . . . . . . 169

9.1 Simulated Economy versus Sample Economy: U.S. Case . . . . 181

List of Tables

2.1 Parameterizing the Prototype Model . . . . . . . . . . . . . . 39

2.2 Number of nodes and errors for our Example . . . . . . . . . 51

4.1 Parameterizing the Standard RBC Model . . . . . . . . . . . . 74

4.2 Parameterizing the General Model . . . . . . . . . . . . . . . . 78

5.1 GMM and ML Estimation Using Simulated Data . . . . . . . 84

5.2 Estimation with Christiano’s Data Set . . . . . . . . . . . . . 88

5.3 Estimation with the NIPA Data Set . . . . . . . . . . . . . . . 88

5.4 Parameterizing the Standard RBC Model . . . . . . . . . . . . 89

5.5 Calibration of Real Business Cycle Model . . . . . . . . . . . . 90

5.6 F−Statistics for Testing Exogeneity of Solow Residual . . . . 95

5.7 The Cross-Correlation of Technology . . . . . . . . . . . . . . 98

6.1 Asset Market Facts and Real Variables . . . . . . . . . . . . . 106

6.2 Summary of Models . . . . . . . . . . . . . . . . . . . . . . . . 110

6.3 Summary of Estimation Results . . . . . . . . . . . . . . . . . 110

6.4 Asset Pricing Implications . . . . . . . . . . . . . . . . . . . . 111

6.5 Matching the Sharpe-Ratio . . . . . . . . . . . . . . . . . . . . 111

7.1 The Parameters in the Logistic Function . . . . . . . . . . . . 121

7.2 The Standard Parameters of RBC Model . . . . . . . . . . . . 124

7.3 The Multiple Steady States . . . . . . . . . . . . . . . . . . . 125

8.1 Parameters Used for Calibration . . . . . . . . . . . . . . . . 145

8.2 Calibration of the Model Variants: U.S. Economy . . . . . . . 147

8.3 The Standard Deviations (U.S. versus Germany) . . . . . . . . 154

8.4 Parameters used for Calibration (German Economy) . . . . . . 155

8.5 Calibration of the Model Variants: German Economy . . . . . 156

9.1 Calibration of the Model Variants . . . . . . . . . . . . . . . . 179

9.2 The Correlation Coeﬃcients of Temporary Shock in Technology.182

vi

Preface

This book intends to contribute to the study of alternative paradigms in

macroeconomics. As other recent approaches to dynamic macroeconomics,

we also build on intertemporal economic behavior of economic agents but

stress Keynesian features more than other recent literature in this area. In

general, stochastic dynamic macromodels are diﬃcult to solve and to esti-

mate, in particular if intertemporal behavior of economic agents is involved.

Thus, beside addressing important macroeconomic issues in a dynamic frame-

work another major focus of this book is to discuss and apply solution and es-

timation methods to models with intertemporal behavior of economic agents.

The material of this book has been presented by the authors at several

universities. Chapters of the book have been presented as lectures at Biele-

feld University, Foscari University, Venice, University of Technology, Vienna,

University of Aix-en-Provence, Colombia University, New York, New School

University, New York, Bejing University, Tsinghua University, Bejing, Chi-

nese University of HongKong, City University of HongKong and European

Central Bank. Some chapters of the book have also been presented at the

annual conference of the American Economic Association, Society of Compu-

tational Economics, and Society of Nonlinear Dynamics and Econometrics.

We are grateful for comments by the participants of those conferences. We

are also grateful for discussions with Toichiro Asada, Jean-Paul Benassy, Pe-

ter Flaschel, Buz Brock, Lars Gr¨ une, Richard Day, Ray Fair, Stefan Mittnik,

James Ramsey, Malte Sieveking, Michael Woodford and colleagues of our

universities. We thank Uwe K¨oller for research assistance and Gaby Wind-

horst for editing and typing the manuscript. Financial support from the

Ministry of Education, Science and Technology is gratefully acknowledged.

1

Introduction and Overview

The dynamic general equilibrium (DGE) model, in particular its more pop-

ular version, the Real Business Cycle Model, has become a major paradigm

in macroeconomics. It has been applied in numerous ﬁelds of economics. Its

essential features are the assumptions of intertemporal optimizing behavior

of economic agents, competitive markets and price-mediated market clear-

ing through ﬂexible wages and prices. In this type of stochastic dynamic

macromodeling only real shocks, such as technology shocks, monetary and

government spending shocks variation in tax rates or shifts in preferences

generate macro ﬂuctuations.

Recently Keynesian features have been built into the dynamic general

equilibrium (DGE) model by preserving its characteristics such as intertem-

porally optimizing agents and market clearing, but introducing monopolistic

competition and sticky prices and wages into the model. In particular, in nu-

merous papers and in a recent book Woodford (2003) has worked out this new

paradigm in macroeconomics, which is now commonly called New Keynesian

macroeconomics. In contrast to the traditional Keynesian macromodels such

variants also presume dynamically optimizing agents and market clearing

1

,

but sluggish wage and price adjustments.

It is well known that the standard DGE model fails to replicate essential

product, labor market and asset market characteristics. In our book, diﬀer-

ent from the DGE model, its competitive or monopolistic variants, we do

not presume clearing of all markets in all periods. As in the monopolistic

competition variant of the DGE model we permit nominal rigidities. Yet, by

stressing Keynesian features in a model with production and capital accu-

mulation, we demonstrate that even with dynamically optimizing agents not

all markets may be cleared.

1

It should be noted that the concept of market clearing in recent New Keynesian literature

is not unambiguous. We will discuss this issue in chapter 8.

2

3

Solution and Estimation Methods

Whereas models with Keynesian features are worked out and stressed in the

chapters of part III of the book, part I and II provide the ground work for

those later chapters. In part I and II of the book we build extensively on the

basics of stochastic dynamic macroeconomics.

Part I of the book can be regarded as the technical preparation for our

theoretical arguments developed in this volume. Here we provide a variety of

technical tools to solve and estimate stochastic dynamic optimization mod-

els, which is a prerequesit for a proper empirical assessment of the models

treated in our book. Solution methods are presented in chapters 1-2 whereas

estimation methods along with calibration, the current methods of empirical

assessment, are introduced in chapter 3. These methods are subsequently

applied in the remaining chapters of the book.

Solving stochastic dynamic optimization models has been an important

research topic in the last decade and many diﬀerent methods have been pro-

posed. Usually, an exact and analytical solution of a dynamic decision prob-

lem is not attainable. Therefore one has to rely on an approximate solution,

which may also have to be computed by numerical methods. Recently, there

have been developed numerous methods to solve stochastic dynamic decision

problems. Among the well-known methods are the perturbation and projec-

tion methods (Judd (1998)), the parameterized expectations approach (den

Haan and Marcet (1990)) and the dynamic programming approach (Santos

and Vigo Aguiar (1998) and Gr¨ une and Semmler (2004a)). When an exact

and analytical solution to a dynamic optimization problem is not attainable

and one has to use numerical methods. A solution method with higher accu-

racy often requires more complicated procedures and extensive computation

time.

In this book, in order to allow for an empirical assessment of stochastic

dynamic models we focus on approximate solutions that are computed from

two types of ﬁrst-order conditions: the Euler equation and the equation de-

rived from the Lagrangian. Given these two types of ﬁrst-order conditions,

three types of approximation methods can be found in the literature: the

Fair-Taylor method, the log-linear approximation method and the linear-

quadratic approximation method. After a discussion on the variety of ap-

proximation methods, we introduce a method, which will be repeatedly used

in the subsequent chapters. The method, which has been written into a

GAUSS procedure, has the advantage of short computation time and easy

implementation without sacriﬁcing too much accuracy. We will also compare

those methods with the dynamic programming approach.

Often the methods use a smooth approximation of ﬁrst order conditions,

4

such as the Euler equation. Sometimes, as, for example, in the model of

chapter 7 smooth approximations are not useful if the value function is not

diﬀerentiable and thus is non-smooth. A method such as employed by Gr¨ une

and Semmler (2004a) can then be used.

There has been less progress made regarding the empirical assessment

and estimation of stochastic dynamic models. Given the wide application of

stochastic dynamic models expected in the future, we believe that the esti-

mation of such type of models will become an important research topic. The

discussion in chapters 3-6 can be regarded as an important step toward that

purpose. As we will ﬁnd, our proposed estimation strategy requires to solve

the stochastic dynamic optimization model repeatedly, at various possible

structural parameters searched by a numerical algorithm within the parame-

ter space. This requires that the solution methods adopted in the estimation

strategy should be as little time consuming as possible while not losing too

much accuracy. After comparing diﬀerent approximation methods, we ﬁnd

that the proposed methods of solving stochastic dynamic optimization mod-

els, such as used in chapters 3 - 6 most useful. We also will explore the impact

of the use of diﬀerent data sets on the calibration and estimation results.

RBC Model as a Benchmark

In the next part of the book, in part II, we set up a benchmark model, the

RBC model, for comparison, in terms of either theory or empirics.

The standard RBC model is a representative agent model, but it is con-

structed on the basis of neoclassical general equilibrium theory. It therefore

assumes that all markets (including product, capital and labor models) are

cleared in all periods regardless of whether the model refers to the short- or

the long-run. The imposition of market clearing requires that prices are set

at an equilibrium level. At the pure theoretical level, the existence of such

general equilibrium prices can be proved under certain assumption. Little,

however, has been told how the general equilibrium can be achieved. In an

economy in which both ﬁrms and households are price-takers, implicitly an

auctioneer is presumed to exist who adjusts the price towards some equilib-

rium. Thus, the way of how an equilibrium is brought about is essentially a

Walrasian tˆatonnement process.

Working with such a framework of competitive general equilibrium is el-

egant and perhaps a convenient starting point for economic analysis. It nev-

ertheless neglects many restrictions on the behavior of agents, the trading

process and the market clearing process, the implementation of technology

and the market structure, among many others. In part II of this volume,

5

we provide a thorough review of the standard RBC model, the representa-

tive stochastic dynamic model of competitive general equilibrium type. The

review starts with laying out microfoundation, and continues to discuss a va-

riety of empirical issues, such as the estimation of structural parameters, the

data construction, the matching with the empirical data, its asset market im-

plications and so on. The issues explored in this part of the book provide the

incentives to introduce Keynesian features into a stochastic dynamic model

as developed in Part III. Meanwhile, it also provides a reasonable ground

to judge new model variants by considering whether they can resolve some

puzzles as explored in part II of the book.

Open Ended Dynamics

One of the restrictions in the standard RBC model is that the ﬁrm does not

face any additional cost (a cost beyond the usual activities at the current

market prices) when it makes an adjustment on either price or quantity.

For example, changing the price may require the ﬁrm to pay a menu cost

and also, more importantly, a reputation cost. It is the cost, arising from

price and wage adjustments that has become an important focus of New

Keynesian research over the last decades.

2

However, adjustment cost may

also come from a change in quantity. In a production economy increasing

output requires the ﬁrm to hire new workers and add new capacity. In a

given period of time, a ﬁrm may ﬁnd more and more diﬃculties to create

new additional capacity. This indicates that there will be an adjustment

cost in creating capacity (or capital stock via investment), and further such

adjustment cost may also be an increasing function of the size of investment.

In chapter 7, we will introduce adjustment costs into the benchmark RBC

model. This may bring about multiple equilibria toward which the economy

may move. The dynamics are open ended in the sense that it can move to low

level, or high level of economic activity.

3

Such an open ended dynamics is cer-

tainly one of the important feature of Keynesian economics. In recent times

such open ended dynamics have been found in a large number of dynamic

models with intertemporal optimization. Those models have been called in-

determinacy and multiple equilibria models. Theoretical models of this type

are studied in Benhabib and Farmer (1999) and Farmer (2001), and an em-

pirical assessment is given in Schmidt-Grohe (2001). Some of the models

2

Important papers in this reserach line are, for example, Calvo (1983) and Rotemberg

(1982). For a recent review, see Taylor (1999) and Woodford (2003, ch. 3).

3

Keynes (1936) discusses the possibility of such an open ended dynamics in chapter 5 of

his book.

6

are real models, RBC models, with increasing returns to scale and/or more

general preferences than power utility that generate indeterminacy. Local

indeterminacy and globally multiplicity of equilibria can arise here. Others

are monetary macro models, where consumers’ welfare is aﬀected positively

by consumption and cash balances and negatively by the labor eﬀort and

an inﬂation gap from some target rates. For certain substitution properties

between consumption and cash holdings those models admit unstable as well

as stable high level and low level steady states. There also can be indeter-

minacy in the sense that any initial condition in the neighborhood of one of

the steady-states is associated with a path toward, or away from, that steady

state, see Benhabib et al. (2001).

Overall, the indeterminacy and multiple equilibria models predict an open

ended dynamics, arising from sunspots, where the sunspot dynamics are fre-

quently modeled by versions with multiple steady state equilibria, where

there are also pure attractors (repellors), permitting any path in the vicin-

ity of the steady state equilibria to move back to (away from) the steady

state equilibrium. Although these are important variants of macrodynamic

models with optimizing behavior, as, however, recently has been shown

4

in-

determinacy is likely to occur only within a small set of initial conditions.

Yet, despite such unsolved problems the literature on open ended dynamics

has greatly enriched macrodynamic modeling.

Pursuing this line of research we introduce a simple model where one

does not need to refer to model variants with externalities and (increasing

returns to scale) and/or to more elaborate preferences to obtain such re-

sults. We show that due to the adjustment cost of capital we may obtain

non-uniqueness of steady state equilibria in an otherwise standard dynamic

optimization version. Multiple steady state equilibria, in turn, lead to thresh-

olds separating diﬀerent domains of attraction of capital stock, consumption,

employment and welfare level. As our solution shows thresholds are impor-

tant as separation points below or above which it is advantages to move to

lower or higher levels of capital stock, consumption, employment and welfare.

Our model version thus can explain of how the economy becomes history de-

pendent and moves, after a shock or policy inﬂuences, to a low or high level

equilibria in employment and output.

Nonclearing Markets

A second important feature of Keynesian macroeconomics concerns the mod-

eling of the labor market. An important characteristic of the DGE model

4

See Beyn, Pampel and Semmler (2001) and Gr¨ une and Semmler (2004a).

7

is that it is a market clearing model. For the labor market the DGE model

predicts an excessive smoothness of labor eﬀort in contrast to empirical data.

The low variation in the employment series is a well-known puzzle in the RBC

literature.

5

It is related to the speciﬁcation of the labor market as a cleared

market. Though in its structural setting, see, for instance, Stockey et al.

(1989), the DGE model speciﬁes both sides of a market, demand and supply,

the moments of the macro variables of the economy are, however, generated

by a one-sided force due to its assumption on wage and price ﬂexibility and

thus equilibrium in all markets, including output, labor and capital markets.

The labor eﬀort results only from the decision rule of the representative agent

to supply labor. In our view there should be no restriction for the other side

of the market, the demand, to have eﬀects on the variation of labor eﬀort.

Attempts have been made to introduce imperfect competition features

into the DGE model.

6

In those types of models, producers set the price

optimally according to their expected market demand curve. If one follows a

Calvo price setting scheme, there will be a gap between the optimal price and

the existing price. However, it is presumed that the market is still cleared

since the producer is assumed to supply the output according to what the

market demands for the existing price. This consideration also holds for the

labor market. Here the wage rate is set optimally by the household according

to the expected market demand curve for labor. Once the wage has been set,

it is assumed to be rigid (or adjusted slowly). Thus, if the expectation is not

fulﬁlled, there will be a gap again between the optimal wage and existing

wage. Yet in the New Keynesian models the market is still assumed to be

cleared since the household is assumed to supply labor whatever demand is

at the given wage rate.

7

In order to better ﬁt the RBC model’s predictions with the labor market

data, search and matching theory has been employed

8

to model the labor

market in the context of an RBC model. Informational or institutional search

frictions may then explain equilibrium unemployment rates and its rise. Yet,

those models still have a hard time to explain the shift of unemployment rates

such as, for example, experienced in Europe since the 1980s, as equilibrium

unemployment rate.

9

5

A recent evaluation of this failure of the RBC model is given in Schmidt-Grohe (2001).

6

Rotemberg and Woodford (1995, 1999), King and Wollman (1999), Gali (2001) and

Woodford (2003) present a variety of models of monopolistic competition with price and

wage stickiness.

7

Yet, as we have mentioned above, this deﬁnition of market clearing is not unambiguous.

8

For further details, see ch. 8.

9

For an evaluation of the search and matching theory as well as the role of shocks to

explain the evolution of unemployment in Europe, see Ljungqvist and Sargent (2003)and

8

As concerns the labor market along Keynesian lines we pursue an ap-

proach that allows for a nonclearing labor market. In our view the decisions

with regard to price and quantities can be made separately, both subject to

optimal behavior. When the price has been set, and is sticky for a certain pe-

riod, the price is then given to the supplier when deciding on the quantities.

There is no reason why the ﬁrm cannot choose the optimal quantity rather

than what the market demands, especially when the optimum quantity is less

than the quantity demanded by the market. This consideration will allow for

nonclearing markets.

10

Our proposed new model helps to study labor market

problems by being based on adaptive optimization where households, after

a ﬁrst round of optimization, have to reoptimize when facing constraints in

supplying labor in the market. On the other hand, ﬁrms may have con-

straints on the product markets. As we will show in chapters 8 and 9 such a

multiple stage optimization model will allow for larger volatility of the em-

ployment rates as compared to the standard RBC model, and provides, also

a framework to study the secular rise or fall of unemployment.

Technology and Demand Shocks

A further Keynesian feature of macromodels concerns the role of shocks. In

the standard DGE model technology shocks are the driving force of the busi-

ness cycles which is assumed to be measured by the Solow-residual. Since

the Solow residual is computed on the basis of observed output, capital and

employment, it is presumed that all factors are fully utilized. There are

several reasons to distrust the standard Solow residual as a measure of tech-

nology shock. First, Mankiw (1989) and Summers (1986) have argued that

such a measure often leads to excessive volatility in productivity and even

the possibility of technological regress, both of which seem to be empiri-

cally implausible. Second, it has been shown that the Solow residual can be

expressed by some exogenuous variables, for example demand shocks aris-

ing from military spending (Hall 1988) and changed monetary aggregates

(Evan 1992), which are unlikely to be related to factor productivity. Third,

the standard Solow residual can be contaminated if the cyclical variation in

factor utilization are signiﬁcant.

Considering that the Solow-residual cannot be trusted as a measure of

Blanchard (2003)

10

There is indeed a long tradition of macroeconomic modeling with speciﬁcation of the

nonclearing labor markets, see, for instance, Benassy (1995, 2002), Malinvaud (1994),

Danthine and Donaldson (1990, 1995) and Uhlig and Xu (1996). Although our approach

owes a substantial debt to disequilibrium models, we move beyond this type of literature.

9

technology shock, researchers have now developed diﬀerent methods to mea-

sures technology shocks correctly. All these methods are focused on the

computation of factor utilization. There are basically three strategies. The

ﬁrst strategy is to use an observed indicator to proxy for unobserved utiliza-

tion. A typical example is to employ electricity use as a proxy for capacity

utilization (see Burnside, Eichenbaum and Rebelo 1996). Another strategy

is to construct an economic model so that one could compute the factor uti-

lization from the observed variables (see Basu and Kimball 1997 and Basu,

Fernald and Kimball 1998). A third strategy uses a appropriate restriction in

a VAR estimate to identify a technology shock, see Gali (1999) and Francis

and Ramey (2001, 2003).

It is well known that one of the major celebrated arguments of real busi-

ness cycles theory is that technology shocks are pro-cyclical. A positive tech-

nology shock will increase output, consumption and employment. Yet this

result is obtained from the empirical evidence, in which the technology shock

is measured by the standard Solow-residual. As Gali (1999) and Francis and

Ramey (2001, 2003) we also ﬁnd that if one uses the corrected Solow-residual,

the technology shock is negatively correlated with employment and therefore

the RBC model loses its major driving force, see chapters 5 and 9.

Puzzles to be Resolved

In order to sum up, we may say that the standard RBC model has left us

with major puzzles. The ﬁrst type of puzzle is related to the asset market

and is often discussed under the heading of the equity premium puzzle. Ex-

tensive research has attempted to improve on this problem by elaborating

on more general preferences and technology shocks. Chapter 6 studies in

details the asset price implication of the RBC model. The second puzzle is,

as above mentioned, related to the labor market. The RBC model generally

predicts an excessive smoothness of labor eﬀort in contrast to empirical data.

The model also implies an excessively high correlation between consumption

and employment while empirical data only indicates a week correlation.

11

Third, the RBC model predicts a signiﬁcantly high positive correlation be-

tween technology and employment whereas empirical research demonstrates,

at least at business cycle frequency, a negative or almost zero correlation.

One might name it the technology puzzle. Whereas the ﬁrst puzzle is stud-

ied in chapter 6 of the book, chapters 8-9 of part III of the book are mainly

concerned with the latter two puzzles.

11

This problem of excessive correlation has, to our knowledge, not suﬃciently been studied

in the literature. It will be explored in Chapter 5 of this volume.

10

Finally, we want to note that the research along the line of Keynesian

micro-founded macroeconomics has been historically developed by two ap-

proaches: one is the tradition of non-clearing market (or disequilibrium anal-

ysis), and the other is the New Keynesian analysis of monopolistic competi-

tion and sticky (or sluggish) prices. These two approaches will be contrasted

in the last two chapters, Chapters 8 and 9. We will ﬁnd that one can im-

prove on the labor market and technology puzzles once we combine these

two approaches. We want to argue that the two traditions can indeed be

complementary rather than exclusive, and therefore they can somewhat be

consolidated into a more complete system of price and quantity determina-

tion within the Keynesian tradition. The main new method we are using

here to reconcile the two traditions is a multiple stage optimization behav-

ior, adaptive optimization, where agents, reoptimize once they have perceived

and learned about market constraints. Thus, adaptive optimization permits

us to properly treat the market adjustment for nonclearing markets which,

we hope, allows us to make some progress to match better the model with

time series data.

Part I

Solution and Estimation of

Stochastic Dynamic Models

11

Chapter 1

Solution Methods of Stochastic

Dynamic Models

1.1 Introduction

The dynamic decision problem of an economic agent whose objective is to

maximize his or her utility over an inﬁnite time horizon is often studied in

the context of a stochastic dynamic optimization model. To understand the

structure of this decision problem we describe it in terms of a recursive de-

cision problem of a dynamic programming approach. Thereafter, we discuss

some solution methods frequently employed to solve the dynamic decision

problem.

In most cases, an exact and analytical solution of the dynamic program-

ming decision is not attainable. Therefore one has to rely on an approximate

solution, which may also have to be computed by numerical methods. Re-

cently, there have been developed numerous methods to solve stochastic dy-

namic decision problems. Among the well-known methods are the perturba-

tion and projection methods (Judd (1996)), the parameterized expectations

approach (den Haan and Marcet (1990)) and the dynamic programming ap-

proach (Santos and Vigo Aguiar (1998) and Gr¨ une and Semmler (2004a)). In

this book, in order to allow for an empirical assessment of stochastic dynamic

models we focus on approximate solutions that are computed from two types

of ﬁrst-order conditions: the Euler equation and the equation derived from

the Lagrangian. Given these two types of ﬁrst-order conditions, three types

of approximation methods can be found in the literature: the Fair-Taylor

method, the log-linear approximation method and the linear-quadratic ap-

proximation method. Still, in most of the cases, an approximate solution

cannot be derived analytically, and therefore a numerical algorithm is called

12

13

for to facilitate the computation of the solution.

In this chapter, we discuss those various approximation methods and

then propose another numerical algorithm that can help us to compute ap-

proximate solutions. The algorithm is used to compute the solution path

obtained from the method of linear-quadratic approximation with the ﬁrst-

order condition derived from the Lagrangian. While the algorithm takes full

advantage of an existing one (Chow 1993), it overcomes the limitations given

by the Chow (1993) method.

The remainder of this chapter is organized as follows. We start in Section

2 with the standard recursive method, which uses the value function for it-

eration. We will show that the standard recursive method may encounter

diﬃculties when being applied to compute a dynamic model. Section 3

establishes the two ﬁrst-order conditions to which diﬀerent approximation

methods can be applied. Section 4 brieﬂy reviews the diﬀerent approxima-

tion methods in the existing literature. Section 5 presents our new algorithm

for dynamic optimization. Appendix I provides the proof of the propositions

in the text. Finally, a GAUSS procedure that implements our suggested

algorithm is presented in Appendix II.

1.2 The Standard Recursive Method

We consider a representative agent whose objective is to ﬁnd a control (or

decision) sequence {u

t

}

∞

t=0

such that

max

{u}

∞

t=0

E

0

_

∞

t=0

β

t

U(x

t

, u

t

)

_

(1.1)

subject to

x

t+1

= F(x

t

, u

t

, z

t

). (1.2)

Above, x

t

is a vector of m state variables at period t; u

t

is a vector of n control

variables; z

t

is a vector of s exogenuous variables whose dynamics does not

depend on x

t

and u

t

; E

t

is the mathematical expectation conditional on the

information available at time t and β ∈ (0, 1) denotes the discount factor.

Let us ﬁrst make several remarks regarding the formulation of the above

problem. First, this formulation assumes that the uncertainty of the model

only comes from the exogenuous z

t

. One popular assumption regarding the

dynamics of z

t

is to assume that z

t

follow an AR(1) process:

z

t+1

= Pz

t

+p + ǫ

t+1

(1.3)

14

where ǫ

t

is independently and identically distributed (i.i.d. ). Second, this

formulation is not restrictive to those structure models with more lags or

leads. It is well known that a model with ﬁnite lags or leads can be trans-

formed through the use of auxiliary variables into an equivalent model with

one lag or lead. Third, the initial condition (x

0

, z

0

) in this formulation is

assumed to be given.

The problem to solve a dynamic decision problem is to seek a time-

invariant policy function G mapping from the state and exogenuous (x, z)

into the control u. With such a policy function (or control equation), the

sequences of state {x

t

}

∞

t=1

and control {u

t

}

∞

t=0

can be generated by iterating

the control equation

u

t

= G(x

t

, z

t

) (1.4)

as well as the state equation (1.2), given the initial condition (x

0

, z

0

) and the

exogenuous sequence {z

t

}

∞

t=1

generated by (1.3).

To ﬁnd the policy function G by the recursive method, we ﬁrst deﬁne a

value function V :

V (x

0

, z

0

) ≡ max

{ut}

∞

t=0

E

0

_

∞

t=0

β

t

U(x

t

, u

t

)

_

(1.5)

Expression (1.5) could be transformed to unveil its recursive structure. For

this purpose, we ﬁrst rewrite (1.5) as follows:

V (x

0

, z

0

) = max

{ut}

∞

t=0

_

U(x

0

, u

0

) +E

0

_

∞

t=1

β

t

U(x

t

, u

t

)

__

.

= max

{ut}

∞

t=0

_

U(x

0

, u

0

) +βE

0

_

∞

t=0

β

t

U(x

t+1

, u

t+1

)

__

(1.6)

It is easy to ﬁnd that the second term in (1.6) can be expressed as being

β times the value V as deﬁned in (1.5) with the initial condition (x

1

, z

1

).

Therefore, we could rewrite (1.5) as

V (x

0

, z

0

) = max

{ut}

∞

t=0

{U(x

0

, u

0

) + βE

0

[V (x

1

, z

1

)]} (1.7)

The formulation of equ. (1.7) represents a dynamic programming problem

which highlights the recursive structure of the decision problem. In every

period t, the planner faces the same decision problem: choosing the control

variable u

t

that maximizes the current return plus the discounted value of

the optimum plan from period t+1 onwards. Since the problem repeats itself

15

every period the time subscripts become irrelevant. We thus can write (1.7)

as

V (x, z) = max

u

{U(x, u) + βE [V (˜ x, ˜ z)]} (1.8)

where the tilde (∼) over x and z denotes the corresponding next period

values. Obviously, they are subject to (1.2) and (1.3). Equation (1.8) is

said to be the Bellman equation, named after Richard Bellman (1957). If we

know the function V, we then can solve u via the Bellman equation.

Unfortunately, all these considerations are based on the assumption that

we know the function V, which in reality we do not know in advance. The

typical method in this case is to construct a sequence of value functions by

iterating the following equation:

V

j+1

(x, z) = max

u

{U(x, u) + βE [V

j

(˜ x, ˜ z)]} (1.9)

In terms of an algorithm, the method can be described as follows:

• Step 1. Guess a diﬀerentiable and concave candidate value function,

V

j

.

• Step 2. Use the Bellman equation to ﬁnd the optimum u and then

compute V

j+1

according (1.9).

• Step 3. If V

j+1

= V

j

, stop. Otherwise, update V

j

and go to step 1.

Under some regularity conditions regarding the function U and F, the

convergence of this algorithm is warranted by the contraction mapping the-

orem (Sargent 1987, Stockey et al. 1989). However, the diﬃculty of this

algorithm is that in each Step 2, we need to ﬁnd the optimum u that maxi-

mize the right side of equ. (1.9). This task makes it diﬃcult to write a closed

form algorithm for iterating the Bellman equation. Researchers are therefore

forced to seek diﬀerent numerical approximation methods.

1.3 The First-Order Conditions

The last two decades have observed various methods of numerical approxi-

mation to solve the problem of dynamic optimization.

1

As above stated in

1

Kendrick (1981) can be regarded as a seminal work in this ﬁeld. For a review up to

1990, see Taylor and Uhlig (1990). For the later development, see Chow (1997), Judd

(1999), Ljungqvist and Sargent (2000) and Marimon and Scott (1999). Recent methods

of numerically solving the above discrete time Bellman equation (1.9) can be found in

Santos and Vigo-Aguiar (1998) and Gr¨ une and Semmler (2004a).

16

this book we want to focus on the ﬁrst-order conditions, that are used to

derive the decision sequence. One can ﬁnd two approaches: one is to use the

Euler equation and the other the equation derived from the Lagrangian.

1.3.1 The Euler Equation

We start from the Bellman equation (1.8). The ﬁrst-order condition for

maximizing the right side of the equation takes the form:

∂U(x, u)

∂u

+ βE

_

∂F

∂u

(x, u, z)

∂V

∂˜ x

(˜ x, ˜ z)

_

= 0 (1.10)

The objective here is to ﬁnd ∂V/∂x. Assume V is diﬀerentiable and thus

from (1.8) it satisﬁes

∂V (x, z)

∂x

=

∂U(x, G(x, z))

∂x

+ βE

_

∂F

∂x

(x, G(x, z), z)

∂V

∂F

(˜ x, ˜ z)

_

(1.11)

This equation is often called the Benveniste-Scheinkman formula.

2

Assume

∂F/∂x = 0. The above formula becomes

∂V (x, z)

∂x

=

∂U(x, G(x, z))

∂x

(1.12)

Substituting this formula into (1.10) gives rise to the Euler equation:

∂U(x, u)

∂u

+ βE

_

∂F

∂u

(x, u, z)

∂U

∂˜ x

(˜ x, ˜ u)

_

= 0 (1.13)

where the tilde (∼) over u again denotes the next period value with respect

to u.

Note that to use the above Euler equation as the ﬁrst-order condition for

deriving the decision sequence, one must require ∂F/∂x = 0. In economic

analysis, one often encounters models, after some transformation, in which x

does not appear in the transition law so that ∂F/∂x = 0 is satisﬁed. We will

show this technique in the next chapter using a prototype model as a practical

example. However, there are still models in which such transformation is not

feasible.

2

named after Benveniste and Scheinkman (1979).

17

1.3.2 Deriving the First-Order Condition from the La-

grangian

Suppose for the dynamic optimization problem as represented by (1.1) and

(1.2), we can deﬁne the Lagrangian L:

L = E

0

∞

t=0

_

β

τ

U(x

t

, u

t

) −β

τ+1

λ

′

t+1

[x

t+1

−F(x

t

, u

t

, z

t

)]

_

where λ

t

, the Lagrangian multiplier, is a m× 1 vector. Setting the partial

derivatives of L to zero with respect to λ

t

, x

t

and u

t

will yield equation (1.2)

as well as

∂U(x

t

, u

t

)

∂x

t

+βE

t

_

∂F(x

t+1

, u

t+1

, z

t+1

)

∂x

t+1

λ

t+1

_

= λ

t

, (1.14)

∂U(x

t

, u

t

)

∂u

t

+E

t

λ

t+1

∂F(x

t

, u

t

, z

t

)

∂u

t

= 0, (1.15)

In comparison with the Euler equation, we ﬁnd that there is an unobserv-

able variable λ

t

appearing in the system. Yet, using (1.14) and (1.15), one

does not have to transform the model into the setting that ∂F/∂x = 0. This

is an important advantage over the Euler equation. Also as we will see in the

next chapter, these two types of ﬁrst-order conditions are equivalent when

we appropriately deﬁne λ

t

in terms of x

t

, u

t

and z

t

.

3

This further implies

that they can produce the same steady states when being evaluated at their

certainty equivalence forms.

1.4 Approximation and Solution Algorithms

1.4.1 The Gauss-Seidel Procedure and the Fair-Taylor

Method

The state (1.2), the exogenuous (1.3) and the ﬁrst-order condition derived

either as Euler equation (1.13) or from the Lagrangian (1.14) - (1.15) form a

dynamic system from which the transition sequences {x

t+1

}

∞

t=0

, {z

t+1

}

∞

t=0

,

{u

t

}

∞

t=0

and {λ

t

}

∞

t=0

are implied given the initial condition (x

0

, z

0

) . Yet,

mostly such a system is highly nonlinear, and therefore the solution paths

usually are impossible to be computed directly. One popular approach as

suggested by Fair and Taylor (1983) is to use a numerical algorithm, called

3

See also Chow (1997).

18

Gauss-Seidel procedure. For the convenience of presentation, the following

discussion assumes only the Euler equation to be used.

Suppose the system can be be written as the following m + n equations:

f

1

(y

t

, y

t+1

, z

t

, ψ) = 0 (1.16)

f

2

(y

t

, y

t+1

, z

t

, ψ) = 0 (1.17)

.

.

.

f

m+n

(y

t

, y

t+1

, z

t

, ψ) = 0 (1.18)

Here y

t

is the vector of endogenuous variables with m+n dimensions, includ-

ing both state x

t

and control u

t

;

4

ψ is the vector of structural parameter. Also

note that in this formulation we left aside the expectation operator E. This

can be done by setting the corresponding disturbance term, if there is any,

to their expectation values (usually zero). Therefore the system is essentially

not diﬀerent from the dynamic rational expectation model as considered by

Fair and Taylor (1983).

5

The system, as suggested, can be solved numerically

by an iterative technique, called Gauss Seidel procedure, to which we shall

now turn.

It is always possible to tranform the system (1.16) - (1.18) as follows:

y

1,t+1

= g

1

(y

t

, y

t+1

, z

t

, ψ) (1.19)

y

2,t+1

= g

2

(y

t

, y

t+1

, z

t

, ψ) (1.20)

.

.

.

y

m+n,t+1

= g

m+n

(y

t

, y

t+1

, z

t

, ψ) (1.21)

where, y

i,t+1

is the ith element in the vector y

t+1

, i = 1, 2, ..., m + n. Given

the initial condition y

0

= y

∗

0

, and the sequence of exogenuous variable {z

t

}

T

t=0

,

with T to be the prescribed time horizon of our problem, the algorithm starts

by setting t = 0 and proceeds as follows:

• Step 1. Set an initial guess on y

t+1

. Call this guess y

(0)

t+1

. Compute y

t+1

according to (1.19) - (1.21) for the given y

(0)

t+1

along with y

t

. Denote

this new computed value y

(1)

t+1

.

• Step 2. If the distance between y

(1)

t+1

and y

(0)

t+1

is less than a prescribed

tolerance level, go to Step 3. Otherwise compute y

(2)

t+1

for the given y

(1)

t+1

.

This procedure will be repeated until the tolerance level is satisﬁed.

4

If we use (1.14) and (1.15) as the ﬁrst-order condition, then there will be 2m+n equations

and y

t

should include x

t

, λ

t

and u

t

.

5

Our suggested model here is a more simpliﬁed version since we only take one lead. See

also a similar formulation in Juillard (1996) with one lag and one lead.

19

• Step 3. Update t by setting t = t + 1 and go to Step 1.

The algorithm will continue until t reaches T. This will produce a sequence

of endogenuous variable {y

t

}

T

t=0

, which include both decision {u

t

}

T

t=0

and

state {x

t

}

T

t=0

.

There is no guarantee that convergence can always be achieved for the

iteration in each period. If this is the case, a damping technique can usually

be employed to force convergence (see Fair 1984, chapter 7). The second

disadvantage of this method is the cost of computation. The procedure

requires the iteration and convergence for each period t, t = 1, 2, 3, ..., T.

This cost of computation makes it a diﬃcult candidate solving the dynamic

optimization problem. The third and the most important problem regards

its accuracy of solution. Note that the procedure starts with the given initial

condition y

0

, and therefore the solution sequences {u

t

}

T

t=1

and {x

t

}

T

t=1

depend

virtually on the initial condition, which include not only the initial state x

0

but also the initial decisions u

0

. Yet the initial condition for the dynamic

decision problem is usually provided only by x

0

(see equation our discussion

in the last section). Considering that the weights of u

0

could be important

in the value of the objective function (1.1), there might be a problem in

accuracy. One possible way to deal with this problem is to start with diﬀerent

initial u

0

. In the next chapter when we turn to a practical problem, we will

investigate these issues more thoroughly.

1.4.2 The Log-linear Approximation Method

Solving nonlinear dynamic optimization model with log-linear approximation

has been widely used and well documented. This has been proposed in

particular by King et al. (1988) and Campbell (1994) in the context of

Real Business Cycle models. In principle, this approximation method can

be applied to the ﬁrst-order condition either in terms of the Euler equation

or derived from the Langrangean. Formally, let X

t

be the variables,

¯

X the

corresponding steady state. Then,

x

t

≡ ln X

t

−ln

¯

X

is regarded to be the log-deviation of X

t

. In particular, 100x

t

is the per-

centage of X

t

that it deviates from

¯

X. The general idea of this method is to

replace all the necessary equations in the model by approximations, which

are linear in the log-deviation form. Given the approximate log-linear sys-

tem, one then uses the method of undetermined coeﬃcients to solve for the

decision rule, which is also in the form of log-linear deviations.

20

Uhlig (1999) provides a toolkit of such method for solving a general dy-

namic optimization model. The general procedure involves the following

steps:

• Step 1. Find the necessary equations characterizing the equilibrium

law of motion of the system. These necessary equations should include

the state equation (1.2), the exogenuous equation (1.3) and the ﬁrst-

order condition derived either as Euler equation (1.13) or from the

Lagrangian (1.14) and (1.15).

• Step 2. Derive the steady state of the model. This requires ﬁrst pa-

rameterizing the model and then evaluating the model at its certainty

equivalence form.

• Step 3. Log-linearize the necessary equations characterizing the equilib-

rium law of motion of the system. Uhlig (1999) suggests the following

building block for such log-inearization.

X

t

≈

¯

Xe

xt

(1.22)

e

xt+ayt

≈ 1 + x

t

+ ay

t

(1.23)

x

t

y

t

≈ 0 (1.24)

• Step 4. Solve the log-linearized system for the decision rule (which is

also in log-linear form) with the method of undetermined coeﬃcients.

In Chapter 3, we will provide a concrete example to apply the above

procedure and to solve a practical problem of dynamic optimization.

Solving a nonlinear dynamic optimization model with log-linear approx-

imation usually does not require a heavy computation in contrast to the

Fair-Taylor method. In some cases, the decision rule could be derived even

analytically. On the other hand, by assuming a log-linearized decision rule as

expressed in (1.4), the solution path does not require the initial condition of

u

0

and therefore it should be more accurate in comparison to the Fair-Taylor

method. However, the process of log-linearization and solving for the unde-

termined coeﬃcients are not easy, and usually have to be accomplished by

hand. It is certainly desirable to have a numerical algorithm available that

can take over at least part of the analytical derivation process.

1.4.3 Linear-quadratic Approximation with Chow’s Al-

gorithm

Another important approximation method is the linear-quadratic approxi-

mation. Again in principle this method can be applied to the ﬁrst-order con-

21

dition either in terms of the Euler equation or derived from the Lagrangian.

Chow (1993) was among the ﬁrst who solves a dynamic optimization model

with linear-quadratic approximation applied to the ﬁrst-order condition de-

rived from the Lagrangian. At the same time, he proposed a numerical

algorithm to facilitate the computation of the solution.

Chow’s method can be presented in both continuous and discrete time

form. Since the models in discrete time are more convenient for empirical

and econometric studies, we here only consider the discrete time version.

The numerical properties of this approximation method have further been

studied in Reiter (1997) and Kwan and Chow (1997).

Suppose the objective of a representative agent can again be written as

(1.1), but subject to

x

t+1

= F(x

t

, u

t

) +ǫ

t+1

(1.25)

We shall remark that the state equation here is slightly diﬀerent from the

one as expressed by (1.2) in Section 2. Apparently, it is only a special case

of (1.2). Consequently, the Lagrangian L should be deﬁned as

L = E

0

∞

t=0

_

β

t

U(x

t

, u

t

) −β

t+1

λ

′

t+1

[x

t+1

−F(x

t

, u

t

) −ǫ

t+1

]

_

Setting the partial derivatives of L to zero with respect to λ

t

, x

t

and u

t

will yield equation (1.25) as well as

∂U(x

t

, u

t

)

∂x

t

+ β

∂F(x

t

, u

t

)

∂x

t

E

t

λ

t+1

= λ

t

, (1.26)

∂U(x

t

, u

t

)

∂u

t

+β

∂F(x

t

, u

t

)

∂u

t

E

t

λ

t+1

= 0, (1.27)

The linear-quadratic approximation assumes the state equations to be linear

and the objective function to be quadratic. In other words,

∂U(x

t

, u

t

)

∂x

t

= K

1

x

t

+K

12

u

t

+ k

1

(1.28)

∂U(x

t

, u

t

)

∂u

t

= K

2

u

t

+ K

21

x

t

+ k

2

(1.29)

F(x

t

, u

t

) = Ax

t

+ Cu

t

+ b + ǫ

t+1

(1.30)

Given this linear-quadratic assumption, equation (1.26) and (1.27) can be

rewritten as

K

1

x

t

+ K

12

u

t

+ k

1

+ βA

′

E

t

λ

t+1

= λ

t

(1.31)

K

2

u

t

+ K

21

x

t

+ k

2

+βC

′

E

t

λ

t+1

= 0 (1.32)

22

Assume the transition laws of u

t

and λ

t

take the linear form:

u

t

= Gx

t

+ g (1.33)

λ

t+1

= Hx

t+1

+ h (1.34)

Chow (1993) proves that the coeﬃcient matrices G and H and the vectors g

and h satisfy

G = −(K

2

+ βC

′

HC)

−1

(K

21

+ βC

′

HA) (1.35)

g = −(K

2

+ βC

′

HC)

−1

(k

2

+ βC

′

Hb + βC

′

h) (1.36)

H = K

1

+ K

12

G + βA

′

H(A +CG) (1.37)

h = (K

12

+ βA

′

HC)g + k

1

+ βA

′

(Hb + h) (1.38)

Generally it is impossible to ﬁnd, the analytical solution to G, H, g and h is

impossible. Thus, an iterative procedure can designed as follows. First, set

the initial H and h. G and g can then be calculated by (1.35) and (1.36).

Given G and g as well as the initial H and h, the new H and h can be

calculated by (1.37) and (1.38). Using these new H and h, one calculates

the new G and g by (1.35) and (1.36) again. The process will continue until

convergence is achieved.

6

In comparison to the log-linear approximation, Chow’s method requires

less derivations, which must be accomplished by hand. Given the steady

state, the only derivation is to obtain the partial derivatives of U and F.

Yet even this can be computed with a written procedure in a major software

package.

7

Despite this signiﬁcant advantage, Chow’s method has at least

three weeknesses.

First, Chow’s method can be a good approximation method only when the

state equation is linear or can be transformed into a linear one. Otherwise,

6

It should be noted that the algorithm suggested by Chow (1993) is much more compli-

cated. For any given x

t

, the approximation as represented by (1.30) - (1.32) ﬁrst takes

place around (x

t

, u

∗

), where u

∗

is the initial guess on u

t

. Therefore, if u

t

calculated via

the decision rule (1.33), as the result of iterating (1.35) - (1.38), is diﬀerent from u

∗

,

the approximation will take place again. However, this time it will be around (x

t

, u

t

),

followed by iterating (1.35) - (1.38). The procedure will continue until convergence of

u

t

. Since the above algorithm is designed for any given x

t

, the resulting decision rule is

indeed nonlinear in x

t

. In this sense, the method, as pointed by Reiter (1997), is less con-

venient comparing to other approximation method. In a response to Reiter (1997), Kwan

and Chow (1997) propose a one-time linearization around the steady states. Therefore,

our above presentation follows the spirit of Kwan and Chow (1997) if we assume the

linearization takes place around the steady states.

7

For instance, in GAUSS, one could use the procedure GRADP for deriving the partial

derivatives.

23

the linearized ﬁrst-order condition as expressed by (1.31) and (1.32) will not

be a good approximation to the non-approximated (1.26) and (1.27), since A

′

and C

′

are not good approximations to

∂F(xt,ut)

∂xt

and

∂F(xt,ut)

∂ut

.

8

Reiter (1996)

has used a concrete example to show this point. Gong (1997) points out the

same problem.

Second, the iteration with (1.35) - (1.38) may exhibit multiple solutions

since inserting (1.35) into (1.37) gives a quadratic matrix equation in H.

Indeed, one would expect that the number of solutions will increase with the

increase of the dimensions of state space.

9

Third, the assumed state equation (1.25) is only a special case of (1.2).

This will create some diﬃculty when being applied to the model with exo-

genuous variables. One possible way to circumvent this problem is to regard

the exogenuous variables, if there are any, as a part of state variables. This

actually has been done by Chow and Kwan (1998) when the method has

been applied to a practical problem. Yet this will increase the dimension of

state space and hence intensify the problem of multiple solutions.

1.5 An Algorithm for the Linear-Quadratic

Approximation

In this section, we shall present an algorithm for solving a dynamic optimiza-

tion model with the general formulation as expressed in (1.1) - (1.3). The

ﬁrst-order condition used for this algorithm is derived from the Lagrangian.

This indicates that the method does not require the assumption ∂F/∂x = 0.

The approximation method we used here is the linear-quadratic approxima-

tion, and therefore the method does not require log-linear-approximation,

which, in many cases, needs to be accomplished by hand. The established

Proposition 1 below allows us to save further derivations when applying the

method of undetermined coeﬃcients. Indeed, if we use an existing soft-

ware procedure to compute the partial derivatives with respect to U and F,

the only derivation in applying our algorithm is to derive the steady state.

Therefore, our suggested algorithm takes full advantage, yet overcomes all

the limitations occurring in Chow’s method.

Since our state equation takes the form of (1.2) rather than (1.25), the

ﬁrst-order condition is established by (1.14) and (1.15). Evaluating the ﬁrst-

8

Note that the good approximation to

∂F(xt,ut)

∂xt

should be

∂

2

F(¯ x,¯ u)

∂x

2

(x

t

−¯ x) +

∂

2

F(¯ x,¯ u)

∂¯ x∂u

(u

t

−

¯ u) +

∂F(¯ x,¯ u)

∂¯ x

. For

∂F(xt,ut)

∂ut

there is a similar problem.

9

The same problem of multiple solution should also exist for the log-linear approximation

method.

24

order condition along with (1.2) and (1.3) at their certainty equivalence form,

we are able to derive the steady states for x

t

, u

t

, λ

t

and z

t

, which we shall

denote respectively as ¯ x, ¯ u,

¯

λ and ¯ z. Taking the ﬁrst-order Taylor approxi-

mation around the steady state for (1.14), (1.15) and (1.2), we obtain

F

11

x

t

+ F

12

u

t

+ F

13

E

t

λ

t+1

+ F

14

z

t

+ f

1

= λ

t

, (1.39)

F

21

x

t

+ F

22

u

t

+F

23

E

t

λ

t+1

+ F

24

z

t

+ f

2

= 0, (1.40)

x

t+1

= Ax

t

+ Cu

t

+ Wz

t

+b; (1.41)

where in particular,

A = F

x

C = F

u

b = F(¯ x, ¯ u, ¯ z) −F

x

¯ x −F

z

¯ z −F

u

¯ u W = F

z

F

11

= U

xx

+ β

¯

λF

xx

F

12

= U

xu

+β

¯

λF

xu

F

13

= βA

′

F

14

= β

¯

λF

xz

f

1

= U

x

+ βA

′

¯

λ −F

11

¯ x −F

12

¯ u −F

13

¯

λ −F

14

¯ z F

21

= U

ux

+β

¯

λF

ux

F

22

= U

uu

+ β

¯

λF

uu

F

23

= βC

′

f

2

= U

u

+ βC

′

¯

λ −F

21

¯ x −F

22

¯ u −F

23

¯

λ −F

24

¯ z F

24

= β

¯

λF

uz

Note that here we deﬁne U

x

as ∂U/∂x and U

xx

as ∂

2

U/∂x∂x all to be

evaluated at the steady state. The similarity is applied to U

u

, U

uu

, U

ux

, U

xu

,

F

xx

, F

uu

, F

xz

, F

ux

and F

uz

. The objective is to ﬁnd the linear decision rule

and the Lagrangian function:

u

t

= Gx

t

+ Dz

t

+g (1.42)

λ

t+1

= Hx

t+1

+Qz

t+1

+ h (1.43)

The following is the proposition regarding the solution for (1.42) and (1.43):

Proposition 1 Assume u

t

and λ

t+1

follow (1.42) and (1.43) respectively.

Then the solution of G, D, g, H, Q and h satisfy

G = M +NH (1.44)

D = R +NQ (1.45)

g = Nh + m (1.46)

HCNH + H(A +CM) + F

−1

13

(F

12

N −I

m

)H + F

−1

13

(F

11

+ F

12

M) = 0

(1.47)

_

F

−1

13

(I −F

12

N) −HCN

¸

Q−QP = H(W + CR) + F

−1

13

(F

14

+ F

12

R)

(1.48)

h =

_

F

−1

13

(I −F

12

N) −HCN −I

m

¸

−1

_

H(Cm+ b) + Qp + F

−1

13

(f

1

+ F

12

m)

¸

(1.49)

25

where I

m

is the m×m identity matrix and

N =

_

F

23

F

−1

13

F

12

−F

22

_

−1

F

23

F

−1

13

, (1.50)

M =

_

F

23

F

−1

13

F

12

−F

22

_

−1

(F

21

−F

23

F

−1

13

F

11

), (1.51)

R =

_

F

23

F

−1

13

F

12

−F

22

_

−1

(F

24

−F

23

F

−1

13

F

14

), (1.52)

m =

_

F

23

F

−1

13

F

12

−F

22

_

−1

(f

2

−F

23

F

−1

13

f

1

). (1.53)

Given H, the proposition allows us to solve G, Q and h directly according

(1.44), (1.48) and (1.49). Then D and g can be computed from (1.45) and

(1.46). The solution to H is implied by (1.47), which is nonlinear (quadratic)

in H. Obviously, if the model has more than one state variables, we cannot

solve for H analytically. In this case, we shall rewrite (1.47) as

H = F(H), (1.54)

iterating (1.54) until convergence will give us a solution to H.

10

However, when one encounters the model with one state variable,

11

H

becomes a scalar, and therefore (1.47) can be written as

a

1

H

2

+ a

2

H + a

3

= 0

with the two solutions given by

H

1,2

=

1

2a

1

_

−a

2

±

_

a

2

2

−4a

1

a

3

_1

2

_

.

In other words, the solutions can be computed without iteration. Further,

in most cases, one can also easily identify the proper solution by relying on

the economic meaning of λ

t

. For example, in all the models that we will

present in this book, the state equation is the capital stock. Therefore, λ

t

is the shadow price of the capital which should be inversely related to the

quantity of capital. This indicates that only the negative solution is a proper

solution.

1.6 A Dynamic Programming Algorithm

In this section we describe a dynamic programming algorithm which en-

ables us to compute optimal value functions as well as optimal trajectories

10

though multiple solutions may exist.

11

which is mostly the case in recent economic literature.

26

of discounted optimal control problems of the type above. An extension to

a stochastic decision problem is brieﬂy summarized in appendix III.

The basic discretization procedure goes back to Capuzzo Dolcetta (1983)

and Falcone (1987) and is applied with adaptive gridding strategy by Gr¨ une

(1997) and Gr¨ une and Semmler (2004a). We consider discounted optimal

control problems in discrete time t ∈ N

0

given by

V (x) = max

u∈U

∞

t=0

β

t

g(x(t), u(t)) (1.55)

where

x

t+1

= f(x(t), u(t)), x(0) = x

0

∈ R

n

For the discretization in space we consider a grid Γ covering the com-

putational domain of interest. Denoting the nodes of the grid Γ by x

i

,

i = 1, . . . , P, we are now looking for an approximation V

Γ

h

satisfying

V

Γ

h

(x

i

) = T

h

(V

Γ

h

)(x

i

) (1.56)

for all nodes x

i

of the grid, where the value of V

Γ

h

for points x which are not

grid points (these are needed for the evaluation of T

h

) is determined by linear

interpolation. For a description of several iterative methods for the solution

of (1.56) we refer to Gr¨ une and Semmler (2004a).

For the estimation of the gridding error we estimate the residual of the

operator T

h

with respect to V

Γ

h

, i.e., the diﬀerence between V

Γ

h

(x) and

T

h

(V

Γ

h

)(x) for points x which are not nodes of the grid. Thus, for each

cell C

l

of the grid Γ we compute

η

l

:= max

x∈C

l

¸

¸

T

h

(V

Γ

h

)(x) −V

Γ

h

(x)

¸

¸

Using these estimates we can iteratively construct adaptive grids as follows:

(0) Pick an initial grid Γ

0

and set i = 0. Fix a reﬁnement parameter

θ ∈ (0, 1) and a tolerance tol > 0.

(1) Compute the solution V

Γ

i

h

on Γ

i

(2) Evaluate the error estimates η

l

. If η

l

< tol for all l then stop

(3) Reﬁne all cells C

j

with η

j

≥ θ max

l

η

l

, set i = i + 1 and go to (1).

27

For more information about this adaptive gridding procedure and a com-

parison with other adaptive dynamic programming approaches we refer to

Gr¨ une and Semmler (2004a) and Gr¨ une (1997).

In order to determine equilibria and approximately optimal trajectories

we need an approximately optimal policy, which in our discretization can be

obtained in feedback form u

∗

(x) for the discrete time approximation using

the following procedure:

For each x in the gridding domain we choose u

∗

(x) such that the equality

max

u∈U

{hg(x, u) + βV

h

(x

h

(1))} = {hg(x, u

∗

(x)) +βV

h

(x

∗

h

(1))}

holds, where x

∗

h

(1) = x + hf(x, u

∗

(x)). Then the resulting sequence u

∗

i

=

u

∗

(x

h

(i)) is an approximately optimal policy and the related piecewise con-

stant control function is approximately optimal.

1.7 Conclusion

This chapter reviews some typical approximation methods to solve a stochas-

tic dynamic optimization model. The approximation methods discussed here

use two types of ﬁrst-order conditions: the Euler equation and the equations

derived from the Lagrangian. We ﬁnd that the Euler equation needs a re-

striction that the state variable cannot appear as a determinant in the state

equation. Although many economic models can satisfy this restriction after

some transformation, we still cannot exclude the possibility that sometime

the restriction cannot be satisﬁed.

Given these two types of ﬁrst-order conditions, we consider the solutions

computed by the Fair-Taylor method, the log-linear approximation method

and the linear-quadratic approximation method. For all of these diﬀerent

methods, we compare their advantages and disadvantages. We ﬁnd that the

Fair-Taylor method may encounter an accuracy problem due to its additional

requirement of the initial condition for the control variable. On the other

hand, the methods of log-linear approximation may need an algorithm that

can take over some heavy derivation process that otherwise must be ana-

lytically accomplished. For the linear-quadratic approximation, we therefore

propose an algorithm that could overcome the limitation of existing methods

(such as Chow’s method). We also have elaborated on dynamic programming

as a recently developed method to solve the involved Bellman equation. In

the next chapter we will turn to a practical problem and apply the diverse

methods.

28

1.8 Appendix I: Proof of Proposition 1

From (1.39), we obtain

E

t

λ

t+1

= F

−1

13

(λ

t

−F

11

x

t

−F

12

u

t

−F

14

z

t

−f

1

) (1.57)

Substituting the above equation into (1.40) and then solving for u

t

, we get

u

t

= Nλ

t

+ Mx

t

+ Rz

t

+ m, (1.58)

where N, M, R and m are all deﬁned in the proposition. Substituting (1.58)

into (1.41) and (1.55) respectively, we obtain

x

t+1

= S

x

x

t

+ S

λ

λ

t

+ S

z

z

t

+s (1.59)

E

t

λ

t+1

= L

x

x

t

+L

λ

λ

t

+L

z

z

t

+ l. (1.60)

where

S

x

= A +CM (1.61)

S

λ

= CN (1.62)

S

z

= W + CR (1.63)

s = Cm+ b (1.64)

L

x

= −F

−1

13

(F

11

+ F

12

M) (1.65)

L

λ

= F

−1

13

(I −F

12

N) (1.66)

L

z

= −F

−1

13

(F

14

+ F

12

R) (1.67)

l = −F

−1

13

(f

1

+ F

12

m) (1.68)

Now express λ

t

in terms of (1.43) and then plug it into (1.59) and (1.60):

x

t+1

= (S

x

+ S

λ

H)x

t

+ (S

z

+ S

λ

Q)z

t

+s + S

λ

h, (1.69)

E

t

λ

t+1

= (L

x

+ L

λ

H)x

t

+ (L

z

+ L

λ

Q)z

t

+ l +L

λ

h. (1.70)

Next, taking expectation for the both sides of (1.43) and expressing x

t+1

in

terms of (1.41) while E

t

z

t+1

in terms of (1.3) we obtain

E

t

λ

t+1

= HAx

t

+ HCu

t

+ (HW + QP)z

t

+ Hb +Qp +h. (1.71)

Next, expressing u

t

in terms of (1.42) for equations (1.41) and (1.71) respec-

tively, we get

29

x

t+1

= (A + CG)x

t

+ (W + CD)z

t

+ Cg + b (1.72)

E

t

λ

t+1

= H(A + CG)x

t

+ [H(W + CD) + QP] z

t

+ H(Cg + b) +Qp + h.

(1.73)

Comparing (1.69) and (1.70) with (1.72) and (1.73), we obtain

S

x

+ S

λ

H = A + CG, (1.74)

L

x

+ L

λ

H = H(A + CG), (1.75)

S

z

+ S

λ

Q = W + CD, (1.76)

L

z

+ L

λ

Q = H(W + CD) + QP, (1.77)

s + S

λ

h = Cg + b, (1.78)

l + L

λ

h = H(Cg + b) + Qp +h. (1.79)

These 6 equations determine 6 unknown coeﬃcient matrices and vectors G,

H, D, Q, g and h. In particular, H is resolved from (1.75) when A + CG

is replaced by (1.74). This gives rise to (1.47) as in Proposition 1 with

S

x

, S

λ

, L

x

and L

λ

to be expressed by (1.61), (1.62), (1.65) and (1.66). Given

H, G is then resolved from (1.74), which allows us to obtain (1.44). Next

Q is resolved from (1.77) when W + CD is replaced by (1.76). This gives

rise to (1.48) with S

z

, S

λ

, L

z

and L

λ

to be expressed by (1.63), (1.62), (1.67)

and (1.66). Then D is resolved from (1.76), which allows us to obtain (1.45).

Finally, h is resolved from (1.79) when Cg + b is replaced by (1.78). This

gives rise to (1.49) with S

λ

, L

λ

, s and l to be expressed by (1.62), (1.66),

(1.64) and (1.66). Finally g is resolved from (1.78), which allows us to obtain

(1.46).

1.9 Appendix II: An Algorithm for the LQ-

Approximation

The algorithm that we suggest to solve the LQ approximation of chapter

1.5 is written as a GAUSS procedure and available from the authors upon

request. We call this procedure DYNPR. The input of this procedure includes

• the steady states of x, u, λ, z and F denoted as xbar, ubar, lbar, zbar

and Fbar respectively.

• the ﬁrst- and the second-order partial derivatives of x, u and z with

respect F and U, all evaluated at the steady states. They are denoted

30

as, for instance, Fx and Fxx for the ﬁrst- and the second-order partial

derivatives of x with respect to F respectively.

• the discount factor β (denoted as beta) and the parameters P and p

(denoted as BP and sp respectively) appearing in the AR(1) process

of z.

The output of this procedure is the decision parameters G, D and g,

denoted respectively as BG, BD and sg.

PROC(3) = DYNPR (Fx, Fu, Fz, Fxx, Fuu, Fzz, Fxu, Fxz, Fuz, Fux,

Fzx, Ux, Uu, Uxx, Uuu, Uxu, Uux, Fbar, xbar,

ubar, zbar, lbar, beta, BP, sp);

LOCAL A, C, W, sb, F11, F12, F13, F14, f1, F21, F22,

F23, F24, f2, BM, BN, BR, sm, Sx, Slamda, Sz,

ss, BLx, BLlamda, BLz, sl, sa1, sa2, sa3, BH,

BG, sh, sg, BQ, BD;

A = Fx;

C = Fu;

W = Fz;

sb = Fbar −A ∗ xbar −C ∗ ubar −W ∗ zbar;

F11 = Uxx + beta ∗ lbar ∗ Fxx;

F12 = Uxu + beta ∗ lbar ∗ Fxu;

F13 = beta ∗ Fx;

F14 = beta ∗ lbar ∗ Fxz;

f1 = Ux +beta ∗ lbar ∗ Fx −F11 ∗ xbar−

F12 ∗ ubar −F13 ∗ lbar −F14 ∗ zbar;

F21 = Uux + beta ∗ lbar ∗ Fux;

F22 = Uuu + beta ∗ lbar ∗ Fuu;

F23 = beta ∗ (Fu’);

F24 = beta ∗ lbar ∗ Fuz;

f2 = Uu + beta ∗ (Fu′) ∗ lbar −F21 ∗ xbar−

F22 ∗ ubar −F23 ∗ lbar −F24 ∗ zbar;

BM = INV(F23∗INV(F13) ∗ F12 −F22)∗

(F21 −F23∗INV(F13) ∗ F11);

BN = INV(F23∗INV(F13) ∗ F12 −F22)∗

F23∗INV(F13);

BR = INV(F23∗INV(F13) ∗ F12 −F22)∗

(F24 −F23∗INV(F13) ∗ F14);

sm = INV(F23∗INV(F13) ∗ F12 −F22)∗

(f2 −F23∗INV(F13) ∗ f1);

Sx = A +C ∗ BM;

31

Slamda = C ∗ BN;

Sz = W + C ∗ BR;

ss = C ∗ sm+ sb;

BLx = −INV(F13) ∗ (F11 +F12 ∗ BM);

BLlamda = INV(F13) ∗ (1 −F12 ∗ BN);

BLz = −INV(F13) ∗ (F14 +F12 ∗ BR);

sl = −INV(F13) ∗ (f1 + F12 ∗ sm);

sa1 = Slamda;

sa2 = Sx −BLlamda;

sa3 = −BLx;

BH = (1/(2 ∗ sa1)) ∗ (−sa2 −(sa2ˆ2 −4 ∗ sa1 ∗ sa3)ˆ0.5);

BG = BM + BN ∗ BH;

BQ = INV(BLlamda −BH ∗ Slamda −BP)∗

(BH ∗ Sz −BLz);

BD = BR + BN ∗ BQ;

sh = INV(BLlamda −BH ∗ Slamda −1)∗

(BH ∗ ss +BQ∗ sp −sl);

sg = BN ∗ sh +sm;

RETP(BG, BD, sg);

ENDP;

1.9.1 Appendix III: The Stochastic Dynamic Program-

ming Algorithm

Our adaptive approach of chapter 1.6. is easily extended to stochastic discrete

time problems of the type

V (x) = E

_

max

uin U

∞

t=0

β

t

g(x(t), u(t))

_

(1.80)

where

x(t + 1) = f(x(t), u(t), z

t

), x(0) = x

0

∈ R

n

(1.81)

and the z

t

are i.i.d. random variables. This problem can immediately be

applied in discrete time with time step h = 1.

12

The corresponding dynamic programming operator becomes

T

h

(V

h

)(x) = max

u∈U

E {hg(x, u) + βV

h

(ϕ(x, u, z))} , (1.82)

12

For a discretization of a continuous time stochastic optimal control problem with dynam-

ics governed by an Itˆo stochastic diﬀerential equation, see Camilli and Falcone (1995).

32

where ϕ(x, u, z) is now a random variable.

If the random variable z is discrete then the evaluation of the expectation

E is a simple summation, if z is a continuous random variable then we can

compute E via a numerical quadrature formula for the approximation of the

integral

_

z

(hg(x, u) + βV

h

(ϕ(x, u, z))p(z)dz,

where p(z) is the probability density of z. Gr¨ une and Semmler (2004a)

show the application of such a method to such a problem, where z is a

truncated Gaussian random variable and the numerical integration is done

via the trapezoidal rule.

It should be noted that despite the formal similarity, stochastic opti-

mal control problems have several features diﬀerent from deterministic ones.

First, complicated dynamical behavior like multiple stable steady state equi-

libria, periodic attractors etc. is less likely because the inﬂuence of the

stochastic term tends to “smear out” the dynamics in such a way that these

phenomena disappear.

13

Furthermore, in stochastic problems the optimal

value function typically has more regularity which allows the use of high

order approximation techniques. Finally, stochastic problems can often be

formulated in terms of Markov decision problems with continuous transi-

tion probability (see Rust (1996) for a details), whose structure gives rise

to diﬀerent approximation techniques, in particular allowing to avoid the

discretization of the state space.

In these situations, the above dynamic programming technique, developed

by Gr¨ une (1997) and applied in Gr¨ une and Semmler (2004a) may not be

the most eﬃcient approach to these problems, and it has to compete with

other eﬃcient techniques. Nevertheless, the examples in Gr¨ une and Semmler

(2004a). shows that adaptive grids as discussed in chapter 1.6 are by far more

eﬃcient than non–adaptive methods if the same discretization technique is

used for both approaches. It should also be noted that in the smooth case

one can obtain estimates for the error in the approximation of the gradient

of V

h

from our error estimates, for details we refer to Gr¨ une (2003).

13

A remark to this extent on an earlier version of our work has been made by Buz Brock

and Michael Woodford.

Chapter 2

Solving a Prototype Stochastic

Dynamic Model

2.1 Introduction

This chapter turns to a practical problem of dynamic optimization. In partic-

ular, we shall solve a prototype model by employing diﬀerent approximation

methods as we have discussed in the last chapter. The model we choose is

a Ramsey model (Ramsey 1928) to which the exact solution is computable

with the standard recursive method. This will allow us to test the accuracy

of approximations by comparing the diﬀerent approximate solutions to the

exact solution.

2.2 The Ramsey Problem

2.2.1 The Model

Ramsey (1928) posed a problem of optimal resource allocation, which is now

often used as a prototype model of dynamic optimization.

1

The model pre-

sented in this section is essentially that of Ramsey (1928) yet it is augmented

by uncertainty. Let C

t

denote consumption, Y

t

output and K

t

the capital

stock. Assume that the output is produced by capital stock and it is either

consumed or invested, that is, added to the capital stock. Formally,

Y

t

= A

t

K

α

t

, (2.1)

Y

t

= C

t

+ K

t+1

−K

t

, (2.2)

1

See, for instance, Stockey et al. (1989, chapter 2), Blanchard and Fisher (1989, chapter

2) and Ljungqvist and Sargent (2000, chapter 2).

33

34

where α ∈ (0, 1) and A

t

is the technology which may follow an AR(1) process:

A

t+1

= a

0

+ a

1

A

t

+ ǫ

t+1

. (2.3)

Here we shall assume ǫ

t

to be i.i.d. Equation (2.1) and (2.2) indicates that

we could write the transition law of capital stock as

K

t+1

= A

t

K

α

t

−C

t

. (2.4)

Note that we have assumed here that the depreciation rate of capital stock

is equal to 1. This is a simpliﬁed assumption by which the exact solution is

computable.

2

The representative agent is assumed to ﬁnd the control sequence {C

t

}

∞

t=0

such that

max E

0

∞

t=0

β

t

ln C

t

(2.5)

given the initial condition (K

0

, A

0

).

2.2.2 The Exact Solution and the Steady States

It is well known that the exact solution for this model – which could be

derived from the standard recursive method can be written as

K

t+1

= αβA

t

K

α

t

. (2.6)

This further implies from (2.4) that

C

t

= (1 −αβ)A

t

K

α

t

. (2.7)

Given the solution paths for C

t

and K

t+1

, we are then able to derive the

steady state. It is not diﬃcult to ﬁnd that one steady state is on the bound-

ary, that is

¯

K = 0 and

¯

C = 0. To obtain a more meaningful interior steady

state, we take logarithm for both sides of (2.6) and evaluate the equation at

its certainty equivalence form:

log K

t+1

= log(αβA) + αlog K

t

. (2.8)

At the steady state, K

t+1

= K

t

=

¯

K. Solving (2.8) for log

¯

K, we obtain

log

¯

K =

log(αβA)

1−α

. Therefore,

¯

K = (αβA)

1/(1−α)

. (2.9)

Given

¯

K,

¯

C is resolved from (2.4):

¯

C = A

¯

K

α

−

¯

K (2.10)

2

For another similar model where the depreciation rate is also set to 1 and hence the exact

solution is computable, see Long and Plosser (1983).

35

2.3 The First-Order Conditions and Approx-

imate Solutions

To solve the model with diﬀerent approximation methods, we shall ﬁrst es-

tablish the ﬁrst-order condition derived from the Ramsey problem. As we

have mentioned in the last chapter, there are two types of ﬁrst-order condi-

tions, the Euler equation and the equation derived from the Lagrangian. Let

us ﬁrst consider the Euler equation.

2.3.1 The Euler Equation

Our ﬁrst task is to transform the model into a setting so that the state vari-

able K

t

does not appear in F(·) as we have discussed in the last chapter. This

can be done by assuming K

t+1

(instead of C

t

) as model’s decision variable.

To achieve a notational consistency in the time subscript, we may denote the

decision variable as Z

t

. Therefore the model can be rewritten as

max E

0

∞

t=0

β

t

ln(A

t

K

α

t

−K

t+1

). (2.11)

subject to

K

t+1

= Z

t

Note that here we have used (2.4) to express C

t

in the utility function. Also

note that in this formulation the state variable in period t is still K

t

. There-

fore ∂F/∂x = 0 and ∂F/∂u = 1. The Bellman equation in this case can be

written as

V (K

t

, A

t

) = max

K

t+1

{ln(A

t

K

α

t

−K

t+1

) +βE [V (K

t+1

, A

t+1

)]} . (2.12)

The necessary condition for maximizing the right hand side of the Bellman

equation (2.12) is given by

−1

A

t

K

α

t

−K

t+1

+βE

_

∂V

∂K

t+1

(K

t+1

, A

t+1

)

_

= 0. (2.13)

Meanwhile applying the Benveniste-Scheinkman formula,

∂V

∂K

t

(K

t

, A

t

) =

αA

t

K

α−1

t

A

t

K

α

t

−K

t+1

. (2.14)

Substituting (2.14) into (2.13) allows us to obtain the Euler equation:

−1

A

t

K

α

t

−K

t+1

+ βE

_

αA

t+1

K

α−1

t+1

A

t+1

K

α

t+1

−K

t+2

_

= 0,

36

which can further be written as

−

1

C

t

+ βE

_

αA

t+1

K

α−1

t+1

C

t+1

_

= 0. (2.15)

This Euler equations (2.15) along with (2.4) and (2.3) determine the transi-

tion sequences of {K

t+1

}

∞

t=1

, {A

t+1

}

∞

t=1

and {C

t

}

∞

t=0

given the initial condition

K

0

and A

0

.

2.3.2 The First-Order Condition Derived from the La-

grangian

Next, we turn to derive the ﬁrst-order condition from the Lagrangian. Deﬁne

the Lagrangian:

L =

∞

t=0

β

t

ln C

t

−

∞

t=0

E

t

_

β

t+1

λ

t+1

(K

t+1

−A

t

K

α

t

+ C

t

)

¸

.

Setting to zero the derivatives of L with respect to λ

t

, C

t

and K

t

, one obtains

(2.4) as well as

1/C

t

−βE

t

λ

t+1

= 0, (2.16)

βE

t

λ

t+1

αA

t

K

α−1

t

= λ

t

. (2.17)

These are the ﬁrst-order conditions derived from the Lagrangian. Next we try

to demonstrate that the two ﬁrst-order conditions are virtually equivalent.

This can be done as follows. Using (2.16) to express βE

t

λ

t+1

in terms 1/C

t

and then plug it into (2.17), we obtain λ

t

= αA

t

K

α−1

t

/C

t

. This further

indicates that

E

t

λ

t+1

= E

t

_

αA

t+1

K

α−1

t+1

/C

t+1

¸

(2.18)

Substitute (2.18) back into (2.16), we obtain the Euler equation (2.15).

It is also not diﬃcult to ﬁnd that the two ﬁrst-order conditions imply the

same steady state as the one derived from the exact solution (see equation

(2.9) and (2.10)). Writing either (2.15) or (2.17) in terms of their certainty

equivalence form while evaluating them at the steady state, we indeed obtain

(2.9).

2.3.3 The Dynamic Programming Formulation

The dynamic programming problem for the Ramsey growth model of chapter

2.2.1 can be written, in its deterministic version, as a basic discrete time

growth model

37

V = max

C

∞

t=0

β

t

U(C

t

) (2.19)

s.t.

C

t

+ K

t+1

= f(K

t

) (2.20)

with an one period utility function U

′

(C) > 0, U

′′

(C) < 0, f

′

(K) > 0, f

′′

(K) <

0.

Let us restate the problem above with K the state variable and K

′

the

control variable, where K

′

denotes the next period’s value of K. Substitute

C into the above intertemporal utility function by deﬁning

C = f(K) −K

′

(2.21)

We then can express the discrete time Bellman-equation, representing a

dynamic programming formulation as

V (K) = max

K

′

{U[f(K) −K

′

] + βV (K

′

)} (2.22)

By applying the Benveniste-Scheinkman condition

3

gives

V

′

(K) = U

′

(f(K) −K

′

)f

′

(K) (2.23)

Note that K is the state variable and that in equ. (2.22) we have V (K

′

).

Notice that from the discrete time form of the envelope condition one again

obtains the ﬁrst order condition of equ. (2.22) as

U

′

[f(K) −K

′

] + βV

′

(K

′

) = 0

which gives by using (2.23) one step forward, i.e. for V

′

(K

′

).

U

′

[f(K) −K

′

] = βU

′

[f(K

′

) −K

′′

]f

′

(K

′

) (2.24)

Note that hereby we obtain as a solution a second order diﬀerence equa-

tion in K, whereby K’ denotes the one period and K

′′

the two period ahead

value of K. Yet equ. (2.24) can be written as

1 = β

U

′

(C

t+1

)

U

′

(C

t

)

f

′

(K

t+1

) (2.25)

3

The Benveniste-Scheinkman condition implies that the state variable does not appear in

the transition equation, see chapter 1.3 of this book and Ljungquist and Sargent (2000,

ch. 2).

38

which represent the Euler-equation that has extensively been used in

economic theory

4

.

If we allow for log-utility as in chapter 2.2.1 the discrete time decision

problem is directly analytically solvable. We take the following form of a

utility function

V = max

Ct

∞

t=0

β

t

ln C

t

(2.26)

s.t.

K

t+1

= AK

α

t

−C

t

(2.27)

The analytical solution for the value function is

V (K) =

˜

B +

˜

C ln (K) (2.28)

and for the sequence of capital one obtains

K

t+1

=

βCAK

α

t

1 +βC

(2.29)

with

˜

C =

α

1 −αβ

and

˜

B =

ln (

˜

C(1 −αβ)A) +

βα

1−βα

ln (αβA)

1 −β

For the optimal consumption holds

C

t

= K

t+1

−AK

α

t

(2.30)

and for the steady state equilibrium K one obtains

1

β

= αAK

α−1

or (2.31)

K = βαAK

α

(2.32)

4

The above Euler-equation is also essential not only in stochastic growth but also in

ﬁnance, to study asset pricing, in ﬁscal policy to evaluate treasury bonds and testing for

sustainability of ﬁscal policy, see Ljungqvist and Sargent (2000, chs. 2, 7, 10, 17)

39

2.4 Solving the Ramsey Problem with Diﬀer-

ent Approximations

2.4.1 The Fair-Taylor Solution

It should be noted that one can apply the Fair-Taylor method either to the

Euler equation or to the ﬁrst-order condition derived from the Lagrangian.

Here we shall use the Euler equation. Let us ﬁrst write equation (2.15) in

the form as expressed by (1.19) - (1.21):

C

t+1

= αβC

t

A

t+1

K

α−1

t+1

. (2.33)

Together with (2.4) and (2.3), they form a recursive dynamic system from

which the transition paths of C

t

, K

t

and A

t

can be directly computed. Since

the model is simple in its structure, there is no necessity to employ the

Gauss-Seidel procedure as suggested in the last chapter.

Before we compute the solution path, we shall ﬁrst parameterize the

model. There are all together 5 structural parameters: α, β, a

0

, a

1

and σ

ǫ

.

Table 2.1 speciﬁes these parameters and the corresponding interior steady

state values:

Table 2.1: Parameterizing the Prototype Model

α β a

0

a

1

σ

ǫ

K C A

0.3200 0.9800 600.00 0.8000 60.000 23593 51640 3000.0

Given the parameters, as reported in Table 2.1, we provide in Figure 2.1

three solution paths computed by the Fair-Taylor method. These solution

paths are compared to the exact solution as expressed in (2.6) and (2.7):

The three solution paths are diﬀerent due to their initial condition with

regard to C

0

. Since we know the exact solution, we thus can choose C

0

close to

the exact solution denoted as C

∗

0

. Note that from (2.7) C

∗

0

= (1 −αβ)A

0

K

α

0

.

In particular, we allow one C

0

to be equal to C

∗

0

, and the others to deviate

1% from C

∗

0

.

40

Figure 2.1: The Fair-Taylor Solution in Comparison to the Exact Solution:

solid curve the exact solution, dashed and dotted curves the Fair-Taylor

solution

The following is a summary of what we have found in this experiment.

• When we choose C

0

above C

∗

0

(by 1%) the path of K

t

quickly reaches to

zero and therefore the simulations have to be subject to the constraint

C

t

< A

t

K

α

t

. In particular, we restrict C

t

≤ 0.99A

t

K

α

t

. This restriction

makes K

t

never reach zero so that the simulation can be continued.

The solution path is shown by one of the dashed curves in the ﬁgure.

• When we set C

0

below C

∗

0

(again by 1%), the path of C

t

quickly reaches

its lower bound 0. This is shown by the dotted curve in the ﬁgure.

• When we set C

0

to C

∗

0

, the paths of K

t

and C

t

(shown by the another

dashed curve) are close to the exact solution for small t

′

s. Yet when t

goes beyond a certain point, the deviation becomes signiﬁcant.

What can we learn from this experiment? The exact solution to this

problem seems to be the saddle path for the system composed of (2.4), (2.3)

41

and (2.33). The eventual deviation of the solution starting with C

∗

0

from the

exact solution is likely to be due to the computational errors resulting from

our numerical simulation. On the other hand, we have veriﬁed our previ-

ous concern that the initial condition for the control variable is extremely

important for obtaining an appropriate solution path when we employ the

Fair-Taylor method.

2.4.2 The Log-linear Solution

As the Fair-Taylor method, the log-linear approximation method can be ap-

plied to the ﬁrst-order condition either from Euler equation or from the

Lagrangian. Here we shall again use the Euler equation. Our ﬁrst task is

therefore to log-linearize the state, Euler and the exogenuous equations as ex-

pressed in (2.4), (2.15) and (2.3). The following is the proposition regarding

this log-linearization (the proof is provided in the appendix):

Next we try to ﬁnd a solution path for c

t

, which we shall conjecture as

c

t

= η

ca

a

t

+ η

ca

k

t

(2.34)

The proposition below regards the determination of the two undetermined

coeﬃcients η

ca

and η

ck

.

Proposition 2 Let k

t

, c

t

and a

t

denote the log deviations of K

t

, C

t

and A

t

.

Then equation (2.4), (2.15) and (2.3) can be log-linearized as

k

t+1

= ϕ

ka

a

t

+ ϕ

kk

k

t

+ ϕ

kc

c

t

, (2.35)

E [c

t+1

] = ϕ

cc

c

t

+ ϕ

ca

a

t

+ϕ

ck

k

t+1

, (2.36)

E [a

t+1

] = a

1

a

t

, (2.37)

where

ϕ

ka

=

¯

A

¯

K

α−1

, ϕ

kk

=

¯

A

¯

K

α−1

α,

ϕ

kc

= −(

¯

C/

¯

K), ϕ

cc

= αβ

¯

A

¯

K

α−1

,

ϕ

ca

= αβ

¯

A

¯

K

α−1

a

1

, ϕ

ck

= αβ

¯

A

¯

K

α−1

(α −1).

Next we try to ﬁnd a solution path for c

t

which we shall conjecture as

c

t

= η

ca

a

t

+ η

ca

k

t

(2.38)

The proposition below regards the determination of the two undetermined

coeﬃcients η

c

and η

ck

.

42

Proposition 3 Assume c

t

follow (2.34). Then η

ck

and η

ca

are determined

from the following equation

η

ck

=

1

2Q

2

_

−Q

1

−

_

Q

2

1

−4Q

0

Q

2

_

(2.39)

η

ca

=

(η

ck

−ϕ

ck

)ϕ

ka

−ϕ

ca

ϕ

cc

−a

1

−ϕ

kc

(η

ck

−ϕ

ck

)

(2.40)

where Q

2

= ϕ

kc

, Q

1

= (ϕ

kk

−ϕ

cc

−ϕ

kc

ϕ

ck

) and Q

0

= ϕ

kk

.

The solution paths of the model can now be computed by relying on (2.34)

and (2.35) with a

t

to be given by a

1

a

t−1

+ ǫ

t

/

¯

A. All the solution paths are

expressed as log deviations. Therefore, to compare the log-linear solution to

the exact solution, we shall perform the transformation via X

t

= (1 + x

t

)

¯

X

for a variable x

t

in log deviation form. Using the same parameters as reported

in Table 2.1, we show in Figure 2.2 the log-linear solution in comparison to

the exact solution.

Figure 2.2: The Log-linear Solution in Comparison to the Exact Solution:

solid curve the exact solution, dashed curves the log-linear solution

43

In contrast to the Fair-Taylor solution, one ﬁnds that the log-linear solu-

tion is quite close to the exact solution except for some initial paths.

2.4.3 The Linear-Quadratic Solution with Chow’s Al-

gorithm

To apply Chow’s method, we shall ﬁrst transform the model in which the

state equation appears to be linear. This can be done by choosing investment

I

t

≡ A

t

K

α

t

− C

t

as the control variable while leaving the capital stock and

the technology as the two state variables. When considering this, the model

becomes

max E

0

∞

t=0

β

t

ln(A

t

K

α

t

−I

t

)

subject to (2.3) as well as

K

t+1

= I

t

(2.41)

Due to the insuﬃciency in the speciﬁcation of the model with regard to

the possible exogenuous variable, we have to treat, as suggested in the last

chapter, the technology A

t

also as a state variable. This indicates that there

are two state equations (2.41) and (2.3), both are now in linear form. Next we

shall derive the ﬁrst- and the second-order partial derivatives of the utility

function around the steady state. This will allow us to obtain those K

ij

and k

j

(i, j = 1, 2) coeﬃcient matrices and vectors as expressed in Chow’s

ﬁrst-order conditions (1.31) and (1.32).

Suppose the linear decision rule can be written as

I

t

= G

11

K

t

+ G

12

A

t

+g

1

(2.42)

The coeﬃcients G

11

, G

12

and g

1

can be computed in principle by iterating

(1.35) - (1.38) as discussed in the last chapter. Yet this requires the iteration

to be convergent. Unfortunately, this is not attainable for our particular ap-

plication, even if we start from many diﬀerent initial conditions.

5

Therefore,

our attempt fails to compute the solution path with Chow’s algorithm.

2.4.4 The Linear-Quadratic Solution Using the Sug-

gested Algorithm

When we employ our new algorithm, there is no need to transform the model.

Therefore we can deﬁne F = AK

α

−C and U = ln C. Again our ﬁrst step is

5

Reiter (1997) has experienced the same problem.

44

to compute the ﬁrst- and second-order partial derivatives with respect to F

and U. All these partial derivatives along with the steady states can be used

as input in the GAUSS procedure provided in Appendix II of the last chap-

ters. Executing this procedure will allow us to compute the undetermined

coeﬃcients in the following decision rule for C

t

:

C

t

= G

21

K

t

+G

22

A

t

+ g

2

(2.43)

Equation (2.43) along with (2.4) and (2.3) form the dynamic system from

which the transition path of C

t

, K

t

and A

t

are computed (see Figure 2.3 for

illustration).

Figure 2.3: The Linear-quadratic Solution in Comparison to the Exact So-

lution: solid curve for exact solution, dashed curves for linear-quadratic so-

lution

45

Figure 2.4: Value Function obtained from the Linear-quadratic Solution

In addition, in the ﬁgure 2.4, the value function obtained from the linear

quadratic solution is shown. As the ﬁgure shows the value function is clearly

concave in the capital stock, K.

2.4.5 The Dynamic Programming Solution

Next, using as example, we will compare the analytical solution of chapter

2.3.3 with the dynamic programming solution obtained from the dynamic

programming algorithm of chapter 1.6. Subsequently we report only results

from a deterministic version. Results from a stochastic version are discussed

in appendix II.

For the growth model of chapter 2.3.3 we employ the following parameters

α = 0.34

A = 5

β = 0.95

we can solve all the above expressions numerically for a grid of the capital

stock in the interval [0.1, 10] and the control variable, c, in the interval [0.1, 5].

For the parameters chosen we obtain a steady state of the capital stock

K = 2.07

For more details of the solution, see Gr¨ une and Semmler (2004a).

6

6

Moreover, as concerns asset pricing log-utility preferences provide us with a very simple

46

The solution of the growth model with the above parameters, using the

dynamic programming algorithm of chapter 1.6 with grid reﬁnement is shown

in ﬁgures 2.5 and 2.6.

27.5

28

28.5

29

29.5

30

30.5

0 1 2 3 4 5 6 7 8 9 10

Figure 2.5: Value Function

stochastic discount factor and an analytical expression for the asset price. For U(C) =

ln(C) the asset price is

P

t

= E

t

∞

t=1

β

j

U

′

(C

t+j

)

U

′

(C

t

)

= E

t

∞

t=1

β

j

C

t

· C

t+j

C

t+j

=

βC

t

1 −β

.

For further details, see Cochrane (2001, ch. 9.1) and Gr¨ une and Semmler (2004 b).

47

4

3

2

1

0

0 5 10

Figure 2.6: Path of Control

As ﬁgures 2.5 and 2.6 show the value function and the control, C, are

concave in the capital stock, K in ﬁgure 2.6 the optimal consumption is

shown to depend on the state variable K for a grid of K, 0 ≤ K ≤ 10.

Moreover, as observable from ﬁgure 2.6 consumption is low when capital stock

is low (capital stock can grow) and consumption is high when capital stock

is high (capital stock will decrease)where low and high is meant in reference

to the optimal steady state capital stock K = 2.07. As reported in Gr¨ une

and Semmler (2004a) the dynamic programming algorithm with adaptive

gridding strategy as introduced in chapter 1.6 solves the value function with

high accuracy.

7

2.5 Conclusion

This chapter employs the diﬀerent approximation methods to solve a proto-

type dynamic optimization model. Our purpose here is to compare the dif-

ferent approximate solutions to the exact solution, which for this model can

be derived analytically by the standard recursive method. As we have found,

there have been some diﬃculties when we apply the Fair-Taylor method and

the method of linear-quadratic approximation using Chow’s algorithm. Yet

7

With 100 nodes in capital stock interval the error is 3.2 · 10

−2

and with 2000 nodes the

error shrinks to 6.3 · 10

−4

, see Gr¨ une and Semmler (2004a).

48

when we apply the methods of log-linear approximation and linear-quadratic

approximation with our suggested algorithm, we ﬁnd that the approximate

solutions are close to the exact solution. At the same time, we also ﬁnd

that the method of log-linear approximation may need an algorithm that

can take over some heavy derivations that otherwise must be analytically

accomplished. Therefore, our experiment in this chapter veriﬁes our previ-

ous concerns (in Chapter 2) with regard to the accuracy and the capabil-

ity of diﬀerent approximation methods, including the Fair-Taylor method,

the log-linear approximation method and the linear-quadratic approxima-

tion method. Although the dynamic programming approach solves the value

function with higher accuracy, in the subsequent chapters, when we come

to the calibration of the intertemporal decision models, we will work with

the linear quadratic approximation of the Chow method since it is better

applicable to empirical assessment of the models.

2.6 Appendix I: The Proof of Proposition 2

and 3

2.6.1 The Proof of Proposition 2

For convenience, we shall write (2.4), (2.15) and (2.3) as

K

t+1

−A

t

K

α

t

−C

t

= 0 (2.44)

E

_

C

t+1

−αβC

t

A

t+1

K

α−1

t+1

¸

= 0 (2.45)

E [A

t+1

−a

0

−a

1

A

t

] = 0 (2.46)

Applying (1.22) to the above equations, we obtain

¯

Ke

k

t+1

−

¯

A

¯

K

α

e

at+αkt

+

¯

Ce

ct

= 0 (2.47)

E

_

¯

Ce

c

t+1

−αβ

¯

C

¯

A

¯

K

α−1

e

a

t+1

+ct+(α−1)k

t+1

¸

= 0 (2.48)

E

_

¯

Ae

a

t+1

−a

0

−a

1

¯

Ae

at

¸

= 0 (2.49)

Applying (1.23), we further obtain from the above:

¯

K(1 +k

t+1

) −

¯

A

¯

K

α

(1 +a

t

+αk

t

) +

¯

C(1 +c

t

) = 0

E

_

¯

C(1 +c

t+1

) −αβ

¯

C

¯

A

¯

K

α−1

(1 +c

t

+ a

t+1

+ (α −1)k

t+1

)

_

= 0

E

_

¯

A(1 +a

t+1

) −a

0

−a

1

¯

A(1 +a

t

)

¸

= 0

49

which can be further written as to be

¯

Kk

t+1

−

¯

A

¯

K

α

a

t

−

¯

A

¯

K

α

αk

t

+

¯

Cc

t

= 0 (2.50)

E

_

¯

Cc

t+1

−αβ

¯

C

¯

A

¯

K

α−1

(c

t

+ a

t+1

+ (α −1)k

t+1

)

_

= 0 (2.51)

E [a

t+1

−a

1

a

t

] = 0 (2.52)

Equation (2.52) indicates (2.37). Substituting it into (2.51) to express

E [a

t+1

] and re-arranging (2.50) and (2.51), we obtain (2.35) and (2.36) as

indicated in the proposition.

2.6.2 Proof of Proposition 3

Given the conjectured solution (2.34), the transition path of k

t+1

can be

derived from (2.35), which can be written as

k

t+1

= η

ka

a

t

+η

kk

k

t

(2.53)

where

η

ka

= ϕ

ka

+ϕ

kc

η

ca

(2.54)

η

kk

= ϕ

kk

+ ϕ

kc

η

ck

(2.55)

Expressing c

t+1

and c

t

in terms of (2.34) while recognizing that E [a

t+1

] =

a

1

a

t

, we obtain from (2.36):

η

ca

a

1

a

t

+ η

ck

k

t+1

= ϕ

cc

(η

ca

a

t

+ η

ck

k

t

) + ϕ

ca

a

t

+ ϕ

ck

k

t+1

which can further be written as

k

t+1

=

ϕ

cc

η

ca

+ ϕ

ca

−η

ca

a

1

η

ck

−ϕ

ck

a

t

+

ϕ

cc

η

ck

η

ck

−ϕ

ck

k

t

(2.56)

Comparing (2.56) to (2.53) with η

ka

and η

kk

to be given by (2.54) and (2.55),

we thus obtain

ϕ

cc

η

ca

+ ϕ

ca

−η

ca

a

1

η

ck

−ϕ

ck

= ϕ

ka

+ ϕ

kc

η

ca

(2.57)

ϕ

cc

η

ck

(η

ck

−ϕ

ck

)

= ϕ

kk

+ ϕ

kc

η

ck

(2.58)

Equation (2.58) gives rise to the following quadratic function in η

ck

:

Q

2

η

2

ck

+Q

1

η

ck

+Q

0

= 0 (2.59)

with Q

2

, Q

1

and Q

0

to be given in the proposition. Solving (2.59) for η

ck

,

we obtain (2.39). Given η

ck

, η

ca

is resolved from (2.57), which gives rise to

(2.40).

50

2.7 Appendix II: Dynamic Programming for

the Stochastic Version

We here present a stochastic version of a growth model which is based on the

Ramsey model of chapter 2.1 but extended to the stochastic case. A model

of type goes back to Brock and Mirman (1972). Here the Ramsey 1d model

?? is extended using a second variable modelling a stochastic shock. The

model is given by the discrete time equations

K(t + 1) = A(t)

˜

AK(t)

α

−C(t)

A(t + 1) = exp(ρ ln A(t) + z

t

)

where α and ρ are real constants and the z(t) are i.i.d. random variables with

zero mean. The return function is again U(C) = ln C.

In our numerical computations which follows Gr¨ une and Semmler (2004)

we used the parameter values

˜

A = 5, α = 0.34, ρ = 0.9 and β = 0.95. As

the case of the Ramsey model, the exact solution is known and given by

V (K, A) = B +

˜

C ln K + DA,

where

B =

ln((1 −βα)

˜

A) +

βα

1−βα

ln(βα

˜

A)

1 −β

,

˜

C =

α

1 −αβ

, D =

1

(1 −αβ)(1 −ρβ)

We have computed the solution to this problem on the domain Ω =

[0.1, 10] × [−0.32, 0.32]. The integral over the Gaussian variable z was ap-

proximated by a trapezoidal rule with 11 discrete values equidistributed in

the interval [−0.032, 0.032] which ensures ϕ(x, u, z) ∈ Ω for x ∈ Ω and suit-

able u ∈ U = [0.5, 10.5]. For evaluating the maximum in T the set U was

discretized with 161 points. Table 2.2 shows the results of the resulting

adaptive gridding scheme applied with reﬁnement threshold θ = 0.1 and

coarsening tolerance ctol = 0.001. Figure 2.7 shows the resulting optimal

value function and adapted grid.

51

# nodes Error estimated Error

49 1.4 · 10

0

1.6 · 10

1

56 0.5 · 10

−1

6.9 · 10

0

65 2.9 · 10

−1

3.4 · 10

0

109 1.3 · 10

−1

1.6 · 10

0

154 5.5 · 10

−2

6.8 · 10

−1

327 2.2 · 10

−2

2.4 · 10

−1

889 9.6 · 10

−3

7.3 · 10

−2

2977 4.3 · 10

−3

3.2 · 10

−2

Table 2.2: Number of nodes and errors for our Example

0 1 2 3 4 5 6 7 8 9 10

-0.4

-0.3

-0.2

-0.1

0

0.1

0.2

0.3

0.4

24

25

26

27

28

29

30

31

32

33

34

-0.3

-0.2

-0.1

0

0.1

0.2

0.3

1 2 3 4 5 6 7 8 9 10

Figure 2.7: Approximated value function and ﬁnal adaptive grid for our

Example

In Santos and Vigo–Aguiar (1995) on equidistant grids with 143 × 9 =

1287 and 500 × 33 = 16500 nodes, errors of 2.1 · 10

−1

and 1.48 · 10

−2

, re-

spectively, were reported. In our adaptive iteration these accuracies could be

obtained with 109 and 889 nodes, respectively; thus we obtain a reduction

in the number of nodes of more than 90% in the ﬁrst and almost 95% in the

second case, even though the anisotropy of the value function was already

taken into account in these equidistant grids. Here again in our stochastic

version of the growth model, a steep value function can best be approximated

with grid reﬁnement.

Chapter 3

The Estimation and Evaluation

of the Stochastic Dynamic

Model

3.1 Introduction

Solving a stochastic dynamic optimization model with aproximation methods

or dynamic programming is only a ﬁrst step towards the empirical assessment

of such a model. Another necessary step is to estimate the model with

some econometric techniques. To undertake this step certain approximation

methods are more useful than others. Given the estimation, one then can

evaluate the model to see how the model’s prediction can match the empirical

data.

The task of estimation has often been ignored in the current empirical

studies of a stochastic dynamic optimization model when a technique, often

referred to as calibration, is employed. The calibration approach compares

the moment statistics (usually the second moments) of major macroeconomic

time series to those obtained from simulating the model.

1

Typically the pa-

rameters employed for the model’s simulation are selected from independent

sources, such as diﬀerent microeconomic studies. This approach has been

criticized because the structural parameters are assumed to be given rather

1

See, e.g., Kydland and Prescott (1982), Long and Plosser (1983), Prescott (1986), Hansen

(1985, 1988), King et.al (1988a, 1988b), Plosser (1989) among many others. Recently,

other statistics (in addition to the ﬁrst and second moments) proposed by early business

cycle literature, e.g., Burns and Mitchell (1946) and Adelman and Adelman (1959), are

also employed for this comparison. See King and Plosser (1994) and Simkins (1994)

among others.

52

53

than estimated.

2

Although this may not create severe diﬃculties for some currently used

stochastic dynamic optimization models, such as the RBC, we believe the

problem remains for more elabrated models in the future development of

macroeconomic research. Unfortunately, we do not ﬁnd many econometric

studies on how to estimate a stochastic dynamic optimization model except

for a few attempts that have been undertaken for some simple cases.

3

In this

chapter, we shall discuss two estimation methods: the Generalized Method of

Moment (GMM) estimation, and the Maximum Likelihood (ML) estimation.

Both estimation methods deﬁne an objective function to be optimized. Due

to the complexity of stochastic dynamic models, it is often unclear how the

parameters to be estimated are related to the model’s restrictions and hence

to the objective function in the estimation. We thus also need to develope

an estimation strategy that can be used to recursively search the parameter

space in order to obtain the optimum.

Section 2 will ﬁrst introduce the calibration technique, which has been

used in the current empirical study of stochastic dynamic models. As one

can ﬁnd there, a proper application of calibration requires us to deﬁne the

model’s structural parameters correctly. We then in Section 3 consider two

possible estimation methods, the GMM and the ML estimations. In Section

4, we propose a strategy to implement these two estimations for estimating

a dynamic optimization model. This estimation requires a global optimiza-

tion algorithm. We, therefore, subsequently in Section 5, introduce a global

optimization algorithm, called simulated annealing, which is used for execut-

ing the suggested strategy of estimation. Finally, a sketch of the computer

program for our estimation strategy will be described in the appendix of this

chapter.

3.2 Calibration

The current empirical studies of a stochastic dynamic model often rely on

calibration. This approach uses the Monte Carlo method to generate the

distribution of some moment statistics implied by the model. For an empir-

ical assessment of the model, solved through some approximation method,

we compare these moment statistics to the sample moments computed from

the data. Generally, the calibration may include the following steps:

2

For an early critique of the parameter selection employed in calibration technique, see

Singleton (1988) and Eichenbaum (1991).

3

See, e.g., Christiano and Eichenbaum (1992), Burnside et al. (1993), Chow (1993), Chow

and Kwan (1998).

54

• Step 1: Select the model’s structural parameters. These parameters

may include preference and technology parameters and those that de-

scribe the distribution of the random variables in the model, such as

σ

ǫ

, the standard deviation of ǫ

t

as in equation (1.3).

• Step 2: Select the number of times for which the iteration is conducted

in the simulation of the model. This number might be the same as the

number of observations in the sample. We denote this number by T.

• Step 3: Select the initial condition (x

0

, z

0

) and use the state equation

(1.2), the control equation (1.4) and the exogenuous equation (1.3) to

compute the solution of the model iteratively for T times. This can be

regarded as a one time simulation.

• Step 4: If necessary, detrend the simulated series generated in Step 3 to

remove its time trend. Often the HP-ﬁlter (see Hodrick and Prescott

1980) is used for this detrending.

• Step 5: Compute the moment statistics of interest using, if necessary,

a detrended series generated in Step 4. These moment statistics are

mainly those of the second moments such as variances and covariances.

• Step 6: Repeat Step 3 to 5 N times. Here N should be suﬃciently large.

Then after these N times repeated runs, compute the distributions of

these moment statistics. These distributions are mainly represented by

their means and their standard deviations.

• Step 7: Compute the same moment statistics from the data sample

and check whether it falls within the proper range of the distribution

for the moment statistics generated from the Monte Carlo simulation

of the model.

Due to the stochastic innovation of ǫ

t

, the simulated series should also

be cyclically and stochastically ﬂuctuating. The extent and the way of ﬂuc-

tuations, reﬂected by some second moment statistics, should depend on the

model speciﬁcation and the structural parameters including σ

ǫ

, the standard

deviation of ǫ

t

. Therefore, if we specify the model correctly with the struc-

tural parameters, including σ

ǫ

, which should also be deﬁned properly, the

above comparison of moment statistics of the model and sample should give

us a basis to test whether the model can explain the actual business cycles

represented by the data.

55

3.3 The Estimation Methods

The application of the calibration requires techniques to select the structural

parameters accurately. This indicates that we need to estimate the model

before the calibration. We consider two possible estimation methods: the

GMM estimation and the ML estimation.

3.3.1 The Generalized Method of Moments (GMM)

Estimation

The GMM estimation starts with a set of orthogonal conditions, representing

the population moments established by the theoretical model:

E [h(y

t

, ψ)] = 0 (3.1)

where y

t

is a k-dimensional vector of observed random variables at date t; ψ

is a l-dimensional vector of unknown parameters that needs to be estimated;

h(·) is a vector-valued function mapping from R

k

×R

l

into R

m

. Let y

T

contain

all the observations of the k variables in the sample with size T. The sample

average of h(·) can then be written as

g

T

(ψ; y

T

) =

1

T

T

t=1

h(y

t

, ψ) (3.2)

Notice that g

T

(·) is also a vector-valued function with m-dimensions. The

idea behind the GMM estimation is to choose an estimator of ψ, denoted

´

ψ, such that the sample moments g

T

(

´

ψ; y

T

) are as close as possible to the

population moments reﬂected by (3.1). To achieve this, one needs to deﬁne

a distance function by which that closeness can be judged. Hansen (1982)

suggested the following distance function:

J(

´

ψ; y

T

) =

_

g

T

(

´

ψ; y

T

)

_

′

W

T

_

g

T

(

´

ψ; y

T

)

_

(3.3)

where W

T

, called the weighting matrix, is m × m, symmetric, positive

deﬁnite and depends only on the sample observation y

T

. The choice of this

weighting matrix deﬁnes a metric that makes the distance function a scalar.

The GMM estimator of ψ is the value of

´

ψ that minimizes (3.3). Hansen

(1982) proves that under certain assumption such a GMM estimator is consis-

tent and asymptotically normal. Also from the results established in Hansen

(1982), a consistent estimator of the variance-covariance matrix of

´

ψ is given

by

56

V ar(

´

ψ) =

1

T

(D

T

)

−1

W

T

(D

′

T

)

−1

, (3.4)

where D

T

= ∂g

T

(

´

ψ)/∂ψ

′

.

There is a great ﬂexibility in the choice of W

T

for constructing a consistent

and asymptotically normal GMM estimator. In this book, we will adopt the

method by Newey and West (1987), where it is suggested that

W

−1

T

=

´

Ω

0

+

d

j=1

w(j, d)(

´

Ω

j

+

´

Ω

′

j

), (3.5)

with w(j, d) ≡ 1 − j/(1 + d),

´

Ω

j

≡ (1/T)

T

t=j+1

g(y

t

,

´

ψ

∗

)g(y

t−j

,

´

ψ

∗

)

and d to be a suitable function of T. Here

´

ψ

∗

is required to be a consistent

estimator of ψ. Therefore, the estimation with the GMM method usually

requires two steps as suggested by Hansen and Singleton (1982). First, one

chooses a sub-optimal weighting matrix to minimize (3.3) and hence obtains

a consistent estimator

´

ψ

∗

. Second, one then uses the consistent estimator

obtained in the ﬁrst step to calculate the optimum W

T

through which ( 3.3)

is re-minimized.

3.3.2 The Maximum Likelihood (ML) Estimation

The ML estimation, proposed by Chow (1993) for estimating a dynamic

optimization model, starts with an econometric model such as follows:

By

t

+ Γx

t

= ǫ

t

(3.6)

where B is a m × m matrix; Γ is a m × k matrix; y

t

in this case is a m ×1

vector of dependent variables; x

t

is a k ×1 vector of explanatory variables

and ǫ

t

is a m ×1 vector of disturbance terms. Note that if we take the

expectations on both sides, the model to be estimated here is the same as

in the GMM estimation represented by (3.1), except here the functions are

linear. Non-linearity may pose a problem to derive the log-likelihood function

for (3.6).

Suppose there are T observations. Then the above (3.6) can be re-written

as

BY

′

+ ΓX

′

= E

′

(3.7)

where Y is T × m; X is T × k and E is T × m. Assuming normal and serially

uncorrelated ǫ

t

with the covariance matrix Σ, the concentrated log-likelihood

function can be derived (see Chow, 1983, p.170-171) as

log L(ψ) = const. + nlog |B| −

n

2

log |Σ| (3.8)

57

with the ML estimator of Σ given by

´

Σ = n

−1

(BY

′

+ ΓX

′

)(Y B

′

+ XΓ

′

) (3.9)

The ML estimator of ψ is the one that maximizes logL(ψ) in (3.8). The

asymptotic standard deviation of estimated parameters can be inferred from

the following variance-covariance matrix of

´

ψ (see Hamilton 1994 p.143):

E(

´

ψ −ψ)(

´

ψ −ψ)

′

∼

= −

_

∂

2

L(ψ)

∂ψ∂ψ

′

_

−1

. (3.10)

3.4 The Estimation Strategy

In practice, using GMM or ML method to estimate a stochastic dynamic

model is rather complicated. The ﬁrst problem that we need to discuss are

the restrictions on the estimation.

One proper restriction is the state equation and a ﬁrst-order condition

derived either as Euler equation or from the Lagrangian.

4

Yet, most ﬁrst-

order conditions are extremely complicated and may include some auxiliary

variables, such as the Lagrangian multiplier, which are not observable. This

seems to suggest that the restrictions for a stochastic dynamic optimization

model should typically be represented by the state equation and the control

equation derived from the dynamic optimization problem.

The derivation of the control equations in approximate form from a dy-

namic optimization problem is a complicated process. As discussed in the

previous chapters often a numerical procedure is required. For an approxi-

mation method, the linearization of the system at its steady states is needed.

The linearization and the derivation of the control equation, possibly through

an iterative procedure, make it often unclear how the parameters to be es-

timated are related to the model’s restrictions and hence to the objective

function in estimation, such as (3.3) and (3.8). Therefore one is usually inca-

pable of deriving analytically the ﬁrst-order conditions of minimizing (3.3) or

maximizing (3.8) with respect to the parameters. Furthermore, using ﬁrst-

order conditions to minimize (3.3) and maximize (3.8) may only lead to a

local optimum, which is quite possible in general, since the system to be esti-

mated is often nonlinear in parameters. Consequently, searching a parameter

space becomes the only possible way to ﬁnd the optimum.

Our search process includes the following recursive steps:

• Step 1. Start with an initial guess on ψ and use an appropriate method

of dynamic optimization to derive the decision rules.

4

The parameters in the state equation could be estimated independently.

58

• Step 2. Use the state equation and the derived control equation to

calculate the value of the objective function.

• Step 3. Apply some optimization algorithm to change the initial guess

on ψ and start again with step one.

Using this strategy to estimate a stochastic dynamic model, one needs to

employ an optimization algorithm to search the parameter space recursively.

The conventional optimization algorithms,

5

such as Newton-Raphson or re-

lated methods, may not serve our purpose well due to the possible existence

of multiple local optima. We thus need to employ a global optimization al-

gorithm to execute the estimation process as described above. One possible

candidates is the simulated annealing, which shall be discussed in the next

section.

3.5 A Global Optimization Algorithm: The

Simulated Annealing

The idea of simulated annealing has been initially proposed by Metropolis et

al. (1953) and later developed by Vanderbilt and Louie (1984), Bohachevsky

et al. (1986) and Corana et al, (1987). The algorithm operates through

an iterative random search for the optimal variables of an objective function

within an appropriate space. It moves uphill and downhill with a varying step

size to escape local optima. The step size is narrowed so that the random

search is conﬁned to an ever smaller region when the global optimum is

approached.

The simulated annealing algorithm has been tested by Goﬀe et al. (1992).

For this test, Goﬀe et al. (1992) compute a test function with two optima

provided by Judge et al. (1985, p.956-7). By comparing it with conventional

algorithms, they ﬁnd that out of 100 times conventional algorithms are suc-

cessful 52-60 times to reach the global optimum while simulated annealing

is 100 percent eﬃcient. We thus believe that the algorithm may serve our

purpose well.

Let f(x), for example, be a function that is to be maximized and x ∈ S,

where S is the parameter space with the dimensions equal to the number

of structural parameters that need to be estimated. The space S should be

deﬁned from the economic viewpoint and by computational convenience. The

algorithm starts with an initial parameter vector x

0

. Its value f

0

= f(x

0

)

5

For conventional optimization algorithm, see appendix B of Judge et al (1985) and Hamil-

ton (1994, ch. 5).

59

is calculated and recorded. Subsequently, we set the optimum x and f(x)

– denoted by x

opt

and f

opt

respectively – to x

0

and f(x

0

). Other initial

conditions include the initial step-length (a vector with the same dimension

as x) denoted by v

0

and an initial temperature (a scalar) denoted by T

0

.

The new variable, x

′

, is chosen by varying the ith element of x

0

such that

x

′

i

= x

0

i

+ r · v

0

i

(3.11)

where r is a uniformly distributed random number in [−1, 1]. If x

′

is not

in S, repeat (3.11) until x

′

is in S. The new function value f

′

= f(x

′

) is

then computed. If f

′

is larger than f

0

, x

′

is accepted. If not, the Metropolis

criteria,

6

denoted as p, is used to decide on acceptance, where

p = e

(f

′

−f)/T

0

(3.12)

This p is compared to p

′

, a uniformly distributed random number from [0, 1].

If p is greater than p

′

, x

′

is accepted. Besides, f

′

should also be compared to

the updated f

opt

. If it is larger than f

opt

, both x

opt

and f

opt

are replaced by

x

′

and f

′

.

The above steps (starting with (3.11)) should be undertaken and repeated

N

S

times

7

for each i. Subsequently, the step-length is adjusted. The ith

element of the new step-length vector (denoted as v

′

i

) depends on its number

of acceptances (denoted as n

i

) in its last N

S

times of the above repetition

and is given by

v

′

i

=

_

_

_

v

0

i

_

1 +

1

0.4

c

i

(n

i

/N

S

−0.6)

¸

if n

i

> 0.6N

S

;

v

0

i

_

1 +

1

0.4

c

i

(0.4 −n

i

/N

S

)

¸

−1

if n

i

< 0.4N

S

;

v

0

i

if 0.4N

S

≤ n

i

≤ 0.6N

S

(3.13)

where c

i

is suggested to be 2 as by Corona et al. (1987) for all i. With the

new selected step-length vector, one goes back to (3.11) and hence starts a

new round of iteration. Again after another N

S

times of such repetitions,

the step-length will be re-adjusted. These adjustments as to each v

i

should

be performed N

T

times.

8

We then come to adjust the temperature. The new

temperature (denoted as T

′

) will be

T

′

= R

T

T

0

(3.14)

with 0 < R

T

< 1.

9

With this new temperature T

′

, we should go back again

to (3.11). But this time, the initial variable x

0

is replaced by the updated

6

motivated by thermodynamics.

7

N

S

is suggested to be 20 as by Corana et al. (1987)

8

N

T

is suggested to be 100 by Corona et. al. (1987).

9

R

T

is suggested to be 0.85 by Corana et. al. (1987).

60

x

opt

. Of course, the temperature will be reduced further after one additional

N

T

times of adjusting the step-length of each i.

For convergence, the step-length in (3.11) is required to be very small. In

(3.13), whether the new selected step-length is enlarged or not depends on

the corresponding number of acceptances. The number of acceptance n

i

is

not only determined by whether the new selected x

i

increases the value of

objective function, but also by the Metropolis criteria which itself depends

on the temperature. Thus a convergence will ultimately be achieved with

the continuous reduction of the temperature. The algorithm will end by

comparing the value of f

opt

for the last N

ǫ

times (suggested to be 4) when

the temperature is attempted to be re-adjusted.

The simulated annealing algorithm described above has been tested by

Goﬀe et al. (1992). For this test, Goﬀe et al. (1992) compute a test function

with two optima provided by Judge et al. (1985, p. 956-7). By comparing

it with conventional algorithms, they ﬁnd that out of 100 times conventional

algorithms are successful 52-60 times to reach the global optimum while sim-

ulated annealing is 100 percent eﬃcient. We thus believe that the algorithm

may serve our purpose well. In the next chapter, we shall demonstrate the ef-

fectiveness of this estimation strategy by estimating a benchmark RBC with

simulated data.

3.6 Conclusions

In this chapter, we have ﬁrst introduced the calibration method, which has

often been employed in the assessment of a stochastic dynamic model. We

then, based on some approximation methods, have presented an estimation

strategy, to estimate stochastic dynamic models employing time series data.

We have introduced both the General Method of Moments (GMM) as well

as the Maximum Likelihood (ML) estimations as strategies to match the dy-

namic decision model with time series data. Although both strategies permit

to estimate the parameters involved, often a global optimization algorithm,

for example the simulated annealing, is needed to be employed to detect the

correct parameters.

3.7 Appendix: A Sketch of the Computer

Program for Estimation

The algorithm we describe here is written in GAUSS. The entire program

consists of three parts. The ﬁrst part regards some necessary steps in the

61

data processing after loading the original data. The second part is the pro-

cedure that calculates the value of objective function for the estimation. The

input of this procedure are the structural parameters, while the activation

of this procedure generates the value of objective function. We denote this

procedure as OBJF(ϕ). The third part, which is also the main part of this

program, is the simulated annealing. Of these three parts, we shall only

describe the simulated annealing.

{Set initial conditions for simulated annealing}

DO UNTIL convergence;

t = t + 1;

DO N

T

times;

n = 0; /*set the vector for recording No. of acceptances*/

DO N

s

times;

i = 0;

DO UNTIL i = the dimensions of ϕ;

i = i + 1;

HERE: ϕ

i

= ϕ

i

+ rv

i

;

ϕ

′

= {as if current ϕ except the ith element to be ϕ

′

i

};

IF ϕ

′

is not in S;

GOTO HERE;

ELSE;

CONTINUE;

ENDIF;

f =OBJF(ϕ

′

); /*f is the value of the objective function to be minimized*/

p = exp[(f

′

−f)/T]; /*p is the metropolis criteria*/

IF f

′

> f or p > p

′

ϕ = ϕ

′

;

f = f

′

;

n

i

= n

i

+ 1;

ELSE;

CONTINUE;

ENDIF;

IF f

′

> f

opt

;

ϕ

opt

= ϕ

′

;

f

opt

= f

′

;

ELSE;

CONTINUE;

ENDIF;

62

ENDO;

ENDO;

i = 0;

{deﬁne the new step-size, v

′

, according to n

i

}

v = v

′

;

ENDO;

IF change of f

opt

< ε in last N

ε

times;

REPORT ϕ

opt

and f

opt

;

BREAK

ELSE

T = R

T

T;

CONTINUE;

ENDIF;

ENDO;

Part II

The Standard Stochastic

Dynamic Optimization Model

63

Chapter 4

Real Business Cycles: Theory

and the Solutions

4.1 Introduction

The Real Business Cycle model as a prototype of a stochastic dynamic macro-

model has inﬂuenced quantitative macromodeling enormously in the last two

decades. Its concepts and methods have diﬀused into mainstream macroe-

conomics. The criticism of the performance of macroeconometric models of

Keynesian type in the 1970s and the associated rational expectation revolu-

tion pioneered by Lucas (1976) initiated this development. The Real Busi-

ness Cycle analysis now occupies a major position in the curriculum of many

graduate programs. To some extent, the Real Business Cycle approach has

become a new orthodoxy of macroeconomics.

The central argument by Real Business Cycel theorists is that economic

ﬂuctuations are caused primarily by real factors. Kydland and Prescott

(1982) and Long and Plosser (1983) ﬁrst strikingly illustrate this idea in a

simple representative agent optimization model with market clearing, ratio-

nal expectation and no monetary factors. Stockey, Lucas and Prescott (1989)

further illustrate that such type of model could be viewed as an Arrow-Debreu

economy so that the model can be established on a solid micro-foundation

with many (identical) agents. Therefore, as mentioned above, the RBC anal-

ysis can also be regarded as a general equilibrium approach to macrodynam-

ics.

This chapter introduces the RBC model by ﬁrst describing its microeco-

nomic foundation as set out by Stockey et al. (1989). We then present the

standard RBC model as formulated in King et al. (1988). A model of this

kind will repeatedly be used in the subsequent chapters in various ways. The

64

65

model will then be solved after being parameterized by those standard values

of the model’s structural parameters.

4.2 The Microfoundation

The standard Real Business Cycle model assumes a representative agent who

solves a resource allocation problem over an inﬁnite time horizon via dynamic

optimization. It is argued that “the solutions to planning problems of this

type can, under appropriate conditions, be interpreted as predictions about

the behavior of market economies.”(Stokey et al. 1989, p. 22)

To establish the connection to the competitive equilibrium of the Arrow-

Debreu economy,

1

several assumptions should be made for an hypothetical

economy. First, the households in the economy are identical, all with the

same preference, and ﬁrms are also identical, all producing a common out-

put with the same constant returns to scale technology. With this identical

assumption, the resource allocation problem can be viewed as an optimiza-

tion problem of a representative agent.

Second, as in Arrow-Debreu economy, the trading process is assumed

to be “once-and-for-all”. The following citation is again from Stokey et al.

(1989, p.23).

Finally, assume that all transactions take place in a single

once-and-for-all market that meets in period 0. All trading takes

place at that time, so all prices and quantities are determined

simultaneously. No further trades are negotiated later. After this

market has closed, in periods t = 0, 1, ..., T, agents simply deliver

the quantities of factors and goods they have contracted to sell

and receive those they have contracted to buy.

The third assumption regards the ownership. It is assumed that the

household owns all factors of production and all shares of the ﬁrm. Therefore,

in each period the household sells factor services to the ﬁrm. The revenue

from selling factors can only be used to buy the goods produced by the ﬁrm

either for consuming or accumulating as capital. The representative ﬁrm

owns nothing. In each period it simply hires capital and labor on a rental

basis to produce output, sells the output and transfers any proﬁt back to the

household.

1

See Arrow and Debreu (1954) and Debreu (1959).

66

4.2.1 The Decision of the Household

At the beginning of the period 0 when the market is open, the household is

given the price sequence {p

t

, w

t

, r

t

}

∞

t=0

at which he (or she) will choose the se-

quence of output demand and input supply

_

c

d

t

, i

d

t

, n

s

t

, k

s

t

_

∞

t=0

that maximizes

the discounted utility:

max E

0

_

∞

t=0

β

t

U(c

d

t

, n

s

t

)

_

(4.1)

subject to

p

t

(c

d

t

+i

d

t

) = p

t

(r

t

k

s

t

+ w

t

n

s

t

) + π

t

(4.2)

k

s

t+1

= (1 −δ)k

s

t

+ i

d

t

(4.3)

Above δ is the depreciation rate; β is the discounted factor; π

t

is the expected

dividend; c

d

t

and i

d

t

are the demands for consumption and investment; and

n

s

t

and k

s

t

are the supplies of labor and capital stock. Note that (4.2) can be

regarded as a budget constraint. The equality holds due to the assumption

U

c

> 0. Next, we shall consider how the representative household calculates

π

t

. It is reasonable to assume that

π

t

= p

t

(´ y

t

−w

t

´ n

t

−r

t

´

k

t

) (4.4)

where ´ y

t

, ´ n

t

and

´

k

t

are the realized output, labor and capital expected by

the household at given price sequence {p

t

, w

t

, r

t

}

∞

t=0

. Thus assuming that

the household knows the production function while expecting that the mar-

ket will be cleared at the given price sequence {p

t

, w

t

, r

t

}

∞

t=0

, (4.4) can be

rewritten as

π

t

= p

t

_

f(k

s

t

, n

s

t

,

ˆ

A

t

) −w

t

n

s

t

−r

t

k

s

t

_

(4.5)

Above, f(·) is the production function and

ˆ

A

t

is the expected technology

shock. Explaining π

t

in (4.2) in terms of (4.5) and then substituting from

(4.3) to eliminate i

d

t

, we obtain

k

s

t+1

= (1 −δ)k

s

t

+ f(k

s

t

, n

s

t

,

ˆ

A

t

) −c

d

t

(4.6)

Note that (4.1) and (4.6) represent the standard RBC model, although it

only speciﬁes one side of the markets: output demand and input supply.

Given the initial capital stock k

s

0

, the solution of this model is the sequence

of plans

_

c

d

t

, i

d

t

, n

s

t

, k

s

t+1

_

∞

t=0

, where k

s

t

is implied by (4.6), and

c

d

t

= G

c

(k

s

t

,

ˆ

A

t

) (4.7)

n

s

t

= G

n

(k

s

t

,

ˆ

A

t

) (4.8)

i

d

t

= f(k

s

t

, n

s

t

,

ˆ

A

t

) −c

d

t

(4.9)

67

4.2.2 The Decision of the Firm

Given the same price sequence {p

t

, w

t

, r

t

}

∞

t=0

, and also the sequence of ex-

pected technology shocks {

ˆ

A

t

}

∞

t=0

, the problem faced by the representative

ﬁrm is to choose input demands and output supplies

_

y

s

t

, n

d

t

, k

d

t

_

∞

t=0

. How-

ever, since the ﬁrm simply rents capital and hires labor on a period-by-period

basis, its optimization problem is equivalent to a series of one-period maxi-

mization (Stokey et al. 1989, p25):

max p

t

(y

s

t

−r

t

k

d

t

−w

t

n

d

t

)

subject to

y

s

t

= f(k

d

t

, n

d

t

,

ˆ

A

t

) (4.10)

where t = 0, 1, 2, ..., ∞. The solution to this optimization problem satisﬁes:

r

t

= f

k

(k

d

t

, n

d

t

,

ˆ

A

t

)

w

t

= f

n

(k

d

t

, n

d

t

,

ˆ

A

t

)

This ﬁrst-order condition allow us to derive the following equations of input

demands k

d

t

and n

d

t

:

k

d

t

= k(r

t

, w

t

,

ˆ

A) (4.11)

n

d

t

= n(r

t

, w

t

,

ˆ

A) (4.12)

4.2.3 The Competitive Equilibrium and the Walrasian

Auctioneer

A competitive equilibrium can be described as a sequence of prices {p

∗

t

, w

∗

t

, r

∗

t

}

∞

t=0

at which the two market forces (demand and supply) are equalized in all these

three markets, i.e.,

k

d

t

= k

s

t

(4.13)

n

d

t

= n

s

t

(4.14)

c

d

t

+i

d

t

= y

s

t

(4.15)

for all t’s, t = 0, 1, 2, ..., ∞. The economy is at the competitive equilibrium if

{p

t

, w

t

, r

t

}

∞

t=0

= {p

∗

t

, w

∗

t

, r

∗

t

}

∞

t=0

Using equation (4.6) - (4.12), one can easily prove the existence of {p

∗

t

, w

∗

t

, r

∗

t

}

∞

t=0

that satisﬁes the equilibrium condition (4.13)-(4.15).

68

The real business cycles literature usually does not explain how the equi-

librium is achieved. Implicitly, it is assumed that there exists an auctioneer in

the market, who adjust the price towards the equilibrium. This adjsutment

process - often named as tˆatonnement process as in Walrasian economics - is

a commen solution to the adjsutment problem within the neoclassical general

equilibrium framework.

4.2.4 The Contingency Plan

It is not diﬃcult to ﬁnd that the sequence of equilibrium prices {p

∗

t

, w

∗

t

, r

∗

t

}

∞

t=0

depends on the expected technology shock {

ˆ

A

t

}

∞

t=0

. This indeed creates a

problem how to express the equilibrium prices and the equilibrium demand

and supply which are supposed to be made at the beginning of period 0 when

the technology shock from period 1 onward are all unobserved. The Real

Business Cycle theorists circumvent this problem skillfully and ingeniously.

Their approach is to use the so-called “contingency plan”. As written by

Stokey et al.(1989, p17):

In the stochastic case, however, this is not a sequence of num-

bers but a sequence of contingency plans, one for each period.

Speciﬁcally, consumption c

t

, and end-of-period capital k

t+1

in

each period t = 1, 2, ... are contingent on the realization of the

shocks z

1

, z

2

, ..., z

t

. This sequence of realization is information

that is available when the decision is being carried out but is un-

known in period 0 when the decision is being made. Technically,

then, the planner chooses among sequence of functions....

Thus the sequence of equilibrium prices and the sequence of equilibrium

demand and supply are all contingent on the realization of the shock regard-

less that the corresponding decisions are all made at the beginning of period

0.

4.2.5 The Dynamics

Assmue that the decisions are all contingent on the future shock {A

t

}

∞

t=0

,

and the prices are all at their equilibrium, the dynamics of our hypothetic

economy can be fully described by the following equations regarding the

69

realized consumption, employment, output, investment and capital stock:

c

t

= G

c

(k

t

, A

t

) (4.16)

n

t

= G

n

(k

t

, A

t

) (4.17)

y

t

= f(k

t

, n

t

, A

t

) (4.18)

i

t

= y

t

−c

t

(4.19)

k

t+1

= (1 −δ)k

t

+ f(k

t

, n

t

, A

t

) −c

t

(4.20)

given the initial condition k

0

and the sequence of technology shock{A

t

}

∞

t=0

.

This indeed provides another important property of the RBC economy. Al-

though the model speciﬁes the decision behaviors for both household and

the ﬁrm and therefore the two market forces, demand and supply, for all

major three markets: output, capital and labor markets, the dynamics of the

economy is reﬂected by only the household behavior, which concerns only

one side of the market forces, the output demand and input supply. The

decision of the ﬁrm does not have any impact! This is certainly due to the

equilibrium feature of the model speciﬁcation.

4.3 The Standard RBC Model

4.3.1 The Model Structure

The hypothetical economy we have presented in the last section is only for ex-

plaining the theory (from microeconomic point of view) behind the standard

RBC economy. The model speciﬁed in (4.16) - (4.20) is not testable with

empirical data, not only because we do not specify the stochastic process of

{A

t

}

∞

t=0

, but also we do not introduce the growth factor. For an empirically

testable standard RBC model, we employ here the speciﬁcations of a model

as formulated by King, et al. (1988). This empirically oriented formulation

will be repeatedly used in the subequent chapters of this volumn.

Let K

t

denote for the aggregate capital stock, Y

t

for aggregate output

and C

t

for aggregate consumption. The capital stock in the economy follow

the transition law:

K

t+1

= (1 −δ)K

t

+Y

t

−C

t

, (4.21)

where δ is the depreciation rate. Assume that the aggregate production

function take the form:

Y

t

= A

t

K

1−α

t

(N

t

X

t

)

α

(4.22)

where N

t

is per capita working hours; α is the share of labor in the produc-

tion function; A

t

is the temporary shock in technology and X

t

the permanent

70

shock that follows a growth rate γ. Note that here X

t

includes not only the

growth in labor force, but also the growth in productivity. Apparently, the

model is nonstationary due to X

t

. To transform the model into a station-

ary formulation, we divide both sides of equation (4.21) by X

t

(when Y

t

is

expressed by (4.22)):

k

t+1

=

1

1 +γ

_

(1 −δ)k

t

+ A

t

k

1−α

t

(n

t

N/0.3)

α

−c

t

_

, (4.23)

where by deﬁnition, k

t

≡ K

t

/X

t

, c

t

≡ C

t

/X

t

and n

t

≡ 0.3N

t

/N with N to be

the sample mean of N

t

. Note that n

t

is often regarded to be the normalized

hours. The sample mean of n

t

is equal to 30 %, which, as pointed out by

Hansen (1985), is the average percentage of hours attributed to work.

The representative agent in the economy is assumed to make the decision

sequence {c

t

}

∞

t=0

and {n

t

}

∞

t=0

so as to

max E

0

∞

t=0

β

t

[log c

t

+ θ log(1 −n

t

)] , (4.24)

subject to the state equation (4.23). The exogenuous variable in this model

is the temporary shock A

t

, which may follow an AR(1) process:

A

t+1

= a

0

+ a

1

A

t

+ ǫ

t+1

, (4.25)

with ǫ

t

to be an i.i.d. innovation.

Note that there is no possibility to derive the exact solution with the

standard recursive method. We therefore have to rely on an approximate

solution. For this, we shall ﬁrst derive the ﬁrst-order conditions.

4.3.2 The First-Order Conditions

As we have discussed in the previous chapters, there are two types of ﬁrst-

order conditions, the Euler equations and the equations from the Lagrangian.

The Euler equation is not used in our suggested solution method. We nev-

ertheless still present it here as an exercise and demonstrate that the two

ﬁrst-order conditions are virtually equivalent.

The Euler equation

To derive the Euler equation, our ﬁrst task is to transform the model into a

setting that the state variable k

t

does not appear in F(·) as we have discussed

in Chapter 1 and 2. This can be done by assuming k

t+1

(instead of c

t

) along

71

with n

t

as model’s decision variables. In this case, the objective function

takes the form:

max E

0

∞

t=0

β

t

U(k

t+1

, n

t

, k

t

, A

t

),

where

U(k

t+1

, n

t

, k

t

, A

t

) = log [(1 −δ)k

t

+ y

t

−(1 +γ)k

t+1

] (4.26)

+θ log(1 −n

t

).

Note that here we have used (4.23) to express c

t

in the utility function; y

t

is

the stationary output via the following equation:

y

t

= A

t

k

1−α

t

(n

t

N/0.3)

α

. (4.27)

Given such an objective function, the state equation (4.23) can simply be

ignored in deriving the ﬁrst-order condition. The Bellman equation in this

case can be written as

V (k

t

, A

t

) = max

k

t+1

,nt

U(k

t+1

, n

t

, k

t

, A

t

) +βE [V (k

t+1

, A

t+1

)] . (4.28)

The necessary condition for maximizing the right side of Bellman equation

(4.28) is given by

∂U

∂k

t+1

(k

t+1

, n

t

, k

t

, A

t

) + βE

_

∂V

∂k

t+1

(k

t+1

, A

t+1

)

_

= 0; (4.29)

∂U

∂n

t

(k

t+1

, n

t

, k

t

, A

t

) = 0. (4.30)

Meanwhile the application of Benveniste-Scheinkman formula gives

∂V

∂k

t

(k

t

, A

t

) =

∂U

∂k

t

(k

t+1

, n

t

, k

t

, A

t

) (4.31)

Using (4.31) to express

∂V

∂k

t+1

(k

t+1

, A

t+1

) in (4.29), we obtain

∂U

∂k

t+1

(k

t+1

, n

t

, k

t

, A

t

) + βE

_

∂U

∂k

t+1

(k

t+2

, n

t+1

, k

t+1

, A

t+1

)

_

= 0. (4.32)

From equation (4.23) and (4.26),

∂U

∂k

t+1

(k

t+1

, n

t

, k

t

, A

t

) = −

1 +γ

c

t

;

∂U

∂k

t+1

(k

t+2

, n

t+1

, k

t+1

, A

t+1

) =

(1 −δ)k

t+1

+ (1 −α)y

t+1

k

t+1

c

t+1

;

∂U

∂n

t

(k

t+1

, n

t

, k

t

, A

t

) =

αy

t

n

t

c

t

−

θ

1 −n

t

.

72

Substituting the above expressions into (4.32) and (4.30), we establish the

following Euler equations:

−

1 + γ

c

t

+ βE

_

(1 −δ)k

t+1

+ (1 −α)y

t+1

k

t+1

c

t+1

_

= 0; (4.33)

αy

t

n

t

c

t

−

θ

1 −n

t

= 0. (4.34)

The First-Order Condition Derived from the Lagrangian

Next, we turn to derive the ﬁrst-order condition from the Lagrangian. Deﬁne

the Lagrangian:

L =

∞

t=0

β

t

[log(c

t

) +θ log(1 −n

t

)] −

∞

t=0

E

t

_

β

t+1

λ

t+1

_

k

t+1

−

1

1 +γ

_

(1 −δ)k

t

−A

t

k

1−α

t

(n

t

N/0.3)

α

+ c

t

_

__

Setting zero the derivatives of L with respect to c

t

, n

t

, k

t

and λ

t

, one obtains

the following ﬁrst-order condition:

1

c

t

−

β

1 +γ

E

t

λ

t+1

= 0; (4.35)

−θ

1 −n

t

+

αβy

t

(1 +γ)n

t

E

t

λ

t+1

= 0; (4.36)

β

1 + γ

E

t

λ

t+1

_

(1 −δ) +

(1 −α)y

t

k

t

_

= λ

t

; (4.37)

k

t+1

=

1

1 + γ

[(1 −δ)k

t

+y

t

−c

t

] , (4.38)

with y

t

again to be given by (4.27).

Next we try to demonstrate that the two ﬁrst-order conditions: (4.33)

- (4.34) and (4.35) - (4.38) are virtually equivalent. This can be done as

follows. First, expressing [β/(1+γ)]E

t

λ

t+1

in terms of 1/c

t

(which is implied

by (4.35)), we obtain from (4.37)

λ

t

=

(1 −δ)k

t

+ (1 −α)y

t

k

t

c

t

This further indicates that

E

t

λ

t+1

=

(1 −δ)k

t+1

+ (1 −α)y

t+1

k

t+1

c

t+1

(4.39)

73

Substituting (4.39) into (4.35), we obtain the ﬁrst Euler equation (4.33).

Second, expressing [β/(1+γ)]E

t

λ

t+1

again in terms of 1/c

t

, and substituting

it into (4.36), we then verify the second Euler equation (4.34).

4.3.3 The Steady States

Next we try to derive the corresponding steady states. The steady state A

t

is simply determined from (4.25). The other steady states are given by the

following proposition:

Proposition 4 Assume A

t

has a steady state A. Equation (4.35) - (4.38)

along with (4.27), when evaluated in terms of their certainty equivalence

forms, determines at least two steady states: one is on boundary, denoted as

(¯ c

b

, ¯ n

b

,

¯

k

b

, ¯ y

b

,

¯

λ

b

), and the other is interior, denoted as (¯ c

i

, ¯ n

i

,

¯

k

i

, ¯ y

i

,

¯

λ

i

). In

particular,

¯ c

b

= 0,

¯ n

b

= 1,

λ

b

= ∞,

¯

k

b

=

_

A/(δ +γ)

¸

1/α

_

¯

N/0.3

_

,

¯ y

b

= (δ + γ)

¯

k

b

;

and

n

i

= αφ/ [(α + θ)φ −(δ + γ)θ] ,

k

i

= A

1/α

φ

−1/α

n

i

_

¯

N/0.3

_

c

i

= (φ −δ −γ)k

i

,

λ

i

= (1 +γ)/β¯ c

i

y

i

= φk

i

,

where

φ =

(1 +γ) −β(1 −δ)

β(1 −α)

(4.40)

Note that we have used the ﬁrst-order condition from the Lagrangian

to derive the above two steady states. Since the two ﬁrst-order conditions

are virtually equivalent, we expect that the same steady states can also be

derived from the Euler equations.

2

2

We however leave this exercise to the readers.

74

4.4 Solving Standard Model with Standard

Parameters

To obtain the solution path of standard RBC model, we shall ﬁrst specify the

values of the structural parameters deﬁned in the model. These are reported

in Table 4.1. We shall remark that these parameters are close to the standard

parameters, which can often be found in the RBC literature.

3

More detailed

discussion regarding the parameter selection and estimation will be provided

in the next chapter.

Table 4.1: Parameterizing the Standard RBC Model

α γ β δ θ

¯

N a

0

a

1

σ

ǫ

0.58 0.0045 0.9884 0.025 2 480 0.0333 0.9811 0.0189

The solution method that we shall employ is the method of linear-quadratic

approximation with our suggested algorithm as discussed in Chapter 1. As-

sume that decision rule take the form

c

t

= G

11

A

t

+ G

12

k

t

+g

1

(4.41)

n

t

= G

21

A

t

+ G

22

k

t

+ g

2

(4.42)

Our ﬁrst step is to compute the ﬁrst- and the second-order partial derivatives

with respect to F and U where

F(k, c, n, A) =

1

1 +γ

_

(1 −δ)k + Ak

1−α

(nN/0.3)

α

−c

_

U(c, n) = log(c) + θ log(1 −n)

All these partial derivatives along with the steady states can be used as

the inputs in the GAUSS procedure provided in Appendix II of Chapter

1. Executing this procedure will allow us to compute the undetermined

coeﬃcients G

ij

and g

i

(i, j = 1, 2) in the decision rule as expressed in (4.41)

and (4.42). In Figure 4.1 and Figure 4.2 , we illustrate the solution paths, one

for the deterministic and the other for the stochastic case, for the variables

k

t

, c

t

, n

t

and A

t

.

3

Indeed, they are essentially the same as the parameters choosed by King et al. (1988)

except the last three parameters, which is related to the stochastic equation (4.25).

75

Figure 4.1: The Deterministic Solution to the Benchmark RBC Model for

the Standard Parameters

Figure 4.2: The Stochastic Solution to the Benchmark RBC Model for the

Standard Parameters

Elsewhere (see Gong and Semmler 2001), we have compared these solu-

76

tion paths to those computed by Campbell (1994)’s log-linear approximate

solution. We ﬁnd that the two solutions are surprisingly close to the extent

that one can hardly observe the diﬀerences.

4.5 The Generalized RBC Model

In recent work stochastic dynamic optimization models of general equilib-

rium type have been presented in the literature that go beyond the standard

model as discussed in chapter 4.3. The recent models are more demanding

in terms of solution and estimation methods. Although we will not attempt

to estimate those more generalized versions it is worth presenting the main

structure of the generalized models and to demonstrate how they can be

solved by using dynamic programming as introduced in chapter 1.6. Since

those generalized versions can easily give rise to multiple equilibria and his-

tory dependence, in chapter 7 we will return to these types of models.

4.5.1 The Model Structure

The generalization of the standard RBC model is usually undertaken either

with respect to preferences or with respect to the technology. With respect

to preferences utility functions such as

4

U(C, N) =

_

Cexp

_

−N

1+χ

1+χ

_

_

1−σ

−1

1 −σ

. (4.43)

are used. The utility function (4.43) with consumption, C and labor

eﬀort, N, as arguments is non-separable in consumption and leisure. We can

obtain from (4.43) a separable utility function such as

5

U(C, N) =

C

1−σ

1 −σ

−

N

1+χ

1 + χ

(4.44)

which is additively separable in consumption and leisure.

Moreover, by setting σ = 1 we obtain simpliﬁed preferences in log utility

and leisure.

U(C, N) = logC −

N

1+χ

1 + χ

(4.45)

4

See Bennet and Farmer (2000) and Kim (2004).

5

See Benhabib and Nishimura (1998) and Harrison(2001).

77

As concerning production technology and markets usually the following

generalizations are introduced.

6

First we can allow for increasing returns to

scale. Although the individual-level private technology generates constant

returns to scale with

Y

i

= AK

a

i

L

b

i

, a + b = 1

externalities of the form

A = (K

a

N

b

)

ξ

, ξ ≥ 0

may allow for an aggregate production function with increasing returns

to scale, with

Y = K

α

N

β

α > 0, β > 0, α + β ≥ 1 (4.46)

whereby α = (1 + ξ)a, β = (1 + ξ)b and Y, K, N represent total output,

the aggregate stock of capital and labor hours respectively. The increasing

returns to scale technology represented by equ. (4.46) can also be interpreted

as a monopolistic competition economy where there are rents arising from

inverse demand curves for monopolistic ﬁrms.

7

Another generalization as concerning production technology can be un-

dertaken by introducing adjustment cost of investment.

8

We may write

˙

K

K

= ϕ

_

I

K

_

(4.47)

with the assumption of

ϕ(δ) = 0, ϕ

′

(δ) = 1, ϕ

′′

(δ) ≤ 0 (4.48)

Hereby δ is the depreciation rate of the capital stock. A functional form

that satisﬁes the three conditions of equ. (4.47) is

δ

__

I

δK

_

1−ϕ

−1

_

/(1 −ϕ) (4.49)

For ϕ = 0, one has the standard model without adjustment cost, namely

˙

K = I −δK.

The above describes a type of a generalized model that we want to solve.

6

See Kim (2003a).

7

See Farmer (1999, ch. 7.2.4) and Benhabib and Farmer (1994).

8

For the following, see Lucas and Prescott (1971) and Kim (2003a) and Boldrin, Christiano

and Fisher (2001).

78

4.5.2 Solving the Generalized RBC Model

We write the model in continuous time and in its deterministic form

C

t

, N

t

= max

_

∞

t=0

e

−ρt

U(C

t

, N

t

)dt. (4.50)

s.t.

˙

K

K

= ϕ

_

I

t

K

t

_

(4.51)

where preferences U(C

t

, N

t

) are chosen such as represented by equ. (4.45)

and the technology such as speciﬁed in eqs. (4.46) and (4.47)-(4.49). The

latter are used in equ. (4.51).

For solving the model with the dynamic programming algorithm as pre-

sented in chapter 1.6 we use the following parameters.

Table 4.2: Parameterizing the General Model

a b χ ρ δ ξ ϕ

0.3 0.7 0.3 0.05 0.1 0 0.05

Note that in order to stay as close as possible to the standard RBC model

we avoid externalities and therefore presume ξ = 0. Preferences and technol-

ogy (by using adjustment cost of capital) take on, however, a more general

form. Note that the dynamic decision problem (4.50)-(4.51) are written in

continuous time. Its discretization for the use of dynamic programming is un-

dertaken through the Euler procedure. Using then the deterministic variant

of our dynamic programming algorithm of ch. 1.6 and a grid for the capital

stock, K, in the interval [0.1, 10], we obtain the following value function,

representing the total utility along the optimal paths of C and N.

79

-18

-17

-16

-15

-14

-13

-12

-11

-10

-9

0 1 2 3 4 5 6 7 8 9 10

Figure 4.3: Value function for the general model

The out-of-steady state of the two choice variables namely consumption

and labor eﬀort,(depending in feedback form on the state variable capital

stock, K) are shown in Figure 4.4.

0

0.2

0.4

0.6

0.8

1

1.2

1.4

1.6

1.8

2 0 4 6 8 10

Figure 4.4: Paths of the Choice Variables C and N (depending on K)

As can be clearly observed the value function is concave, see Figure 4.3.

Moreover, see Figure 4.4, consumption is low and labor eﬀort high, when

capital stock is low (so that capital stock can be built up) and consumption

is high and labor eﬀort low (so that capital stock will shrink). The dynam-

ics generated by the response of consumption and labor eﬀort to the state

variable capital stock, K, will thus lead to a convergence toward an interior

steady state of the capital stock, consumption and labor eﬀort.

80

4.6 Conclusions

In this chapter we ﬁrst have introduced the intertemporal general equilibrium

model on which the standard RBC model is constructed. Our attempt was

to reveal the basis of some intertemporal decision problems behind the RBC

model. This will be important for the subsequent part of the book. We then

introduce the empirically oriented standard RBC model and, based on our

previous chapters, present the solution to the model. This provides us the

groundwork for an empirical assessment of the RBC model, the task that

will be addressed in the next chapter. We also have presented a generalized

RBC model and solved it by using dynamic programming.

4.7 Appendix: The Proof of Proposition 4

Evaluating (4.35) - (4.38) along with (4.27) in their certainty equivalence

form, and assuming all the variables to be at their steady states, we obtain

1

c

−

β

1 +γ

λ = 0 (4.52)

−θ

1 −n

+ βλ

αy

(1 +γ)n

= 0 (4.53)

β

1 + γ

λ

_

(1 −δ) +

(1 −α)y

k

_

= λ (4.54)

k =

1

1 +γ

_

(1 −δ)k +y −c

¸

(4.55)

where from (4.27) y is given by

y = Ak

1−α

_

nN

0.3

_

α

(4.56)

The derivation of the boundary steady state is trival. Replace c, ¯ n and λ

with c

b

, ¯ n

b

and λ

b

. We ﬁnd equation (4.52) - (4.54) are satisﬁed. Further,

¯

k

b

and ¯ y

b

can be derived from (4.55) and (4.56) given c

b

, ¯ n

b

and λ

b

.

Next we try to derive the interior steady state. For notational conve-

nience, we ignore the subscript i, and therefore all the steady state values

are understood as the interior steady states. Let

y

k

= φ. By (4.54), we obtain

81

y = φk where φ is deﬁned by (4.40) in the proposition. According to (4.56),

y

n

= A

_

y

φ

_

1−α

_

nN

0.3

_

α

n

−1

= A

_

y

n

_

1−α

φ

α−1

_

N

0.3

_

α

= A

1

α

φ

1−

1

α

_

N

0.3

_

(4.57)

Therefore,

k

n

=

¯ y/¯ n

¯ y/

¯

k

= A

1

α

φ

−

1

α

_

N

0.3

_

(4.58)

Expressing y in terms of φk and then expressing k in terms of (4.58), we thus

obtain from (4.55)

c = (φ −δ −γ)k

= (φ −δ −γ)A

1

α

φ

−

1

α

_

N

0.3

_

n (4.59)

Meanwhile, (4.52) and (4.53) imply that

c =

_

α

θ

_

(1 −n)

_

y

n

_

=

_

α

θ

_

(1 −n)A

1

α

φ

1−

1

α

_

N

0.3

_

(4.60)

Equating (4.59) and (4.60), we thus solve the steady state of the labor

eﬀort n. Once n is solved, k can be obtained from (4.58), y from (4.57) and

c either from (4.59) or from (4.60). Finally, λ can be derived from (4.52).

Chapter 5

The Empirics of the Standard

Real Business Cycle Model

5.1 Introduction

Many real business cycle theorists believe that the RBC model is empirically

powerful in explaining the stylized facts of business cycles. Moreover, some

theorists suggest that even a simple RBC model, like the standard model we

presented in the last chapter, despite its rather simple structure, can generate

the time series to match the macroeconomic moments from empirically ob-

served time series data. As Plosser (1989) pointed out, “the whole idea that

such a simple model with no government, no market failures of any kind, ra-

tional expectations, no adjustment cost could replicate actual experience this

well is very surprising.” (Plosser 1989:...) However, these early assessments

have also become the subject to various criticisms. In this chapter, we shall

provide a comprehensive empirical assessment of the standard RBC model.

Our previous discussion, especially in the ﬁrst three chapters, has provided

a technical preparation for this assessment. We shall ﬁrst estimate the stan-

dard RBC model and then evaluate the calibration results that have been

stated by early real business cycle theorists. Yet before we commence with

our formal study, we shall ﬁrst demonstrate the eﬃciency of our estimation

strategy as discussed in Chapter 3.

5.2 Estimation with Simulated Data

In this section, we shall ﬁrst apply our estimation strategy using simulated

data. The simulated data are shown in Figure 4.2 in the last chapter, which

is generated from a stochastic simulation of our standard model for the given

82

83

standard parameters reported in Table 4.1. The purpose of this estimation is

to test whether our suggested estimation strategy works well. If the strategy

works well, we expect that the estimated parameters will be close to the

standard parameters that we know in advance.

5.2.1 The Estimation Restriction

The standard model implies certain restrictions on the estimation. For the

GMM estimation, the model implies the following moment restrictions:

1

E [(1 +γ)k

t+1

−(1 −δ)k

t

−y

t

+c

t

] = 0; (5.1)

E [c

t

−G

11

k

t

−G

12

A

t

−g

13

] = 0; (5.2)

E [n

t

−G

21

k

t

−G

22

A

t

−g

23

] = 0; (5.3)

E

_

y

t

−A

t

k

1−α

t

(n

t

N/0.3)

α

_

= 0. (5.4)

Note that the moment restrictions could be nonlinear as in (5.4). Yet for the

ML estimation, the restriction Bz

t

+ Γx

t

= ε

t

(see equation (3.6))

2

must be

linear. Therefore, we shall ﬁrst linearize (4.27) using Taylor approximation.

This gives us

y

t

=

y

A

A

t

+ (1 −α)

y

k

k

t

+ α

y

n

n

t

−y

We thus obtain for the ML estimation:

z

t

=

_

¸

¸

_

k

t

c

t

n

t

y

t

_

¸

¸

_

, B =

_

¸

¸

_

1 + γ 0 0 0

−G

12

1 0 0

−G

22

0 1 0

−(1 −α)

y

k

0 −α

y

n

1

_

¸

¸

_

x

t

=

_

¸

¸

¸

¸

_

k

t−1

c

t−1

y

t−1

A

t

1

_

¸

¸

¸

¸

_

, Γ =

_

¸

¸

_

−(1 −δ) 1 −1 0 0

0 0 0 −G

11

−g

13

0 0 0 −G

21

−g

23

0 0 0 −

y

A

y

_

¸

¸

_

1

Note that the parameters in the exogenuous equation could be independently estimated

since they have no feed back into the other equations. Therefore, there is no necessity to

include the exogenuous equation into the restriction.

2

Note that here we use z

t

rather than y

t

in order to distinguish it from the output y

t

in

the model.

84

5.2.2 Estimation with Simulated Data

Although the model involve many parameters, we shall only estimate the

parameters α, β, δ and θ. These are the parameters that are empirically un-

known and thus need to be estimated when we later turn to the estimation

using empirical data.

3

Table 5.1 reports our estimation with the standard

deviation included in parenthesis.

Table 5.1: GMM and ML Estimation Using Simulated Data

α β δ θ

True 0.58 0.9884 0.025 2

ML Estimation 0.5781

(2.4373E−006)

0.9946

(5.9290E−006)

0.0253

(3.4956E−007)

2.1826

(3.7174E−006)

1st Step GMM 0.5796

(6.5779E−005)

0.9821

(0.00112798)

0.02500

(9.6093E−006)

2.2919

(0.006181723)

2nd Step GMM 0.5800

(2.9958E−008)

0.9884

(3.4412E−006)

0.02505

(9.7217E−007)

2.000

(4.6369E−006)

One ﬁnds that the estimations from both methods are quite satisfying.

All estimated parameters are close to their true parameters, the parameters

that we have used to generate the data. This demonstrates the eﬃciency

of our estimation strategy. Yet, the GMM estimation after the second step

is more accurate than the ML estimation. This is probably because the

GMM estimation does not need to linearize (5.4). However, we should also

remark that the diﬀerence is minor whereas the time required by the ML

estimation is much shorter. The latter holds not only because the GMM

needs an additional step, but also each single step of the GMM estimation

takes much more time for the algorithm to converge. Approximately 8 hours

on a Pentium III computer is required for each step of the GMM estimation

whereas approximately only 4 hours is needed for the ML estimation.

In Figure 5.1 and 5.2, we also illustrate the surface of the objective func-

tion for our ML estimation. It shows not only the existence of multiple

optima, but also that the objective function is not smooth. This veriﬁes the

necessity of using simulated annealing in our estimation strategy.

3

N can be regarded as the mean of per capital hour. γ does not appear in the model, but

create for transforming the model into a stationary version. The parameters with regard

to the AR(1) process of A

t

have no feedback eﬀect on our estimation restrictions.

85

Figure 5.1: The β - δ Surface of the Objective Function for ML Estimation

Figure 5.2: The θ −α Surface of the Objective Function for ML Estimation

86

Next, we turn to estimating the standard RBC model with the data of

U. S. time series data.

5.3 Estimation with Actual Data

5.3.1 The Data Construction

Before estimating the benchmark model with empirical U. S. time series,

we shall ﬁrst discuss the data that shall be used in our estimation. The

empirical studies of RBC models often require a considerable re-construction

of existing macroeconomic data. The time series employed for our estimation

should include A

t

, the temporary shock in technology, N

t

the labor input

(per capita hours), K

t

the capital stock, C

t

consumption and Y

t

output.

Here all the data can assume to be obtained from statistical sources except

the temporary shock A

t

. A common practice is to use the so-called Solow

residual for the temporary shock. Assuming that the production function

takes the form Y

t

= A

t

K

1−α

t

(N

t

X

t

)

α

, the Solow residual A

t

is computed as

follows:

A

t

=

Y

t

K

1−α

t

(N

t

X

t

)

α

(5.5)

=

y

t

k

1−α

t

N

t

α

(5.6)

where X

t

follows a constant growth rate:

X

t

= (1 +γ)

t

X

0

(5.7)

Thus, the Solow residual A

t

can be derived if the time series Y

t

, K

t

, N

t

,

and the parameter α and γ as well as the initial condition X

0

are given.

It should be noted to derive the temporary shock as above deserves some

criticism. Indeed, this approach uses macroeconomic data that posits a full-

employment assumption, a key assumption in the model, which makes it

diﬀerent from other types of model, such as the business cycle model of

Keynesian type. Yet, since this is a common practice we shall here also

follow this procedure. Later in this chapter we will, however, deviate from

this practice and construct the Solow residual in a diﬀerent way. This will

allow us to explore a puzzle, often called the technology puzzle in the RBC

literature.

In addition to the construction of the temporary shock, the existing

macroeconomic data (such as those from Citibase) also need to be adjusted

to be accommodated to the deﬁnition of the variables as deﬁned in the model.

87

4

The national income as deﬁned in the model is simply the sum of consump-

tion C

t

and investment, the latter increases the capital stock. Therefore, to

make the model’s national income account consistent with the actual data,

one should also include government consumption in C

t

. Further, it is sug-

gested that not only private investment but also government investment and

durable consumption goods (as well as inventory and the value of land) should

be included in the capital stock. Consequently, the service generated from

durable consumption goods and government capital stock should also appear

in the deﬁnition of Y

t

. Since such data are not readily available, one has to

compute them based on some assumptions.

Two Diﬀerent Data Sets

To explore how this treatment of the data construction could aﬀect the empir-

ical assessment of dynamic optimization model, we shall employ two diﬀerent

data sets. The ﬁrst data set, Data Set I, is constructed by Christiano (1987),

which has been used in many empirical studies of RBC model such as Chris-

tiano (1988) and Christiano and Eichenbaum (1992). The sample period of

this data set is from 1955.1 to 1984.4. All the data are quarterly.

The second data set, Data Set II, is obtained mainly from Citibase except

the capital stock which is taken from the Current Survey of Business. This

data set is taken without any modiﬁcation. The sample period for this data

set is from 1952.1 to 1988.4.

5.3.2 Estimation with the Christiano Data Set

As we have mentioned before, the shock sequence A

t

is computed from the

time series of Y

t

, X

t

, N

t

and K

t

given the pre-determined α, which we denote

as α

∗

(see equation (5.5 )). Here the time series X

t

is computed according to

equation (5.7) for the given parameter γ and the initial condition X

0

. In this

estimation, we set X

0

to 1 and γ to 0.0045.

5

Meanwhile we shall consider

two α

∗

: one is the standard value 0.58 and the other is 0.66. We shall remark

that 0.66 is the estimated α in Christiano and Eichenbaum (1992). Table 5.2

reports the estimations after the second step of GMM estimation.

4

For a discussion on data deﬁnitions in RBC models, see Cooley and Prescott (1995).

5

We choose 0.0045 for γ to make k

t

, c

t

and y

t

to be stationary (note that we have deﬁned

k

t

≡

Kt

Xt

, c

t

≡

Ct

Xt

and y

t

≡

Yt

Xt

). Also, γ is the standard parameter as choosen by King

et al. (1988).

88

Table 5.2: Estimation with Christiano’s Data Set

α β δ θ

α

∗

= 0.58 0.5800

(2.2377E−008)

0.9892

(0.0002)

0.0209

(0.0002)

1.9552

(0.0078)

α

∗

= 0.66 0.6600

(9.2393E−006)

0.9935

(0.0002)

0.0209

(0.0002)

2.1111

(0.0088)

As one can observe, all the estimations seem to be quite reasonable. For

α

∗

= 0.58, the estimated parameters, though somewhat deviating from the

standard parameters used in King et al. (1988), are all within the economi-

cally feasible range. For α

∗

= 0.66, the estimated parameters are very close

to those in Christiano and Eichenbaum (1992). Even the parameter β esti-

mated here is very close to the β chosen (rather than estimated) by them.

In both cases, the estimated α is almost the same as the pre-determined α.

This is not surprising due to the way of computing the temporary shocks

from the Solow residual. Finally, we should also remark that the standard

errors are unusually small.

5.3.3 Estimation with the NIPA Data Set

As in the case of the estimation with Christiano’s data set, we again set X

0

to 1 and γ to 0.0045. For the two pre-determined α,we report the estimation

results in Table 5.3.

Table 5.3: Estimation with the NIPA Data Set

α β δ θ

α

∗

= 0.58 0.4656

(71431.409)

0.8553

(54457.907)

0.0716

(89684.204)

1.2963

(454278.06)

α

∗

= 0.66 0.6663

(35789.023)

0.9286

(39958.272)

0.0714

(45174.828)

1.8610

(283689.56)

In contrast to the estimation using the Chrisitano data set, we ﬁnd that

the estimations here are much less satisfying. They deviate signiﬁcantly

from the standard parameters. Some of the parameters, especially β, are

not within the economically feasible range. Furthermore, the estimates are

all statistically insigniﬁcant due to the huge standard errors. Given such a

sharp contrast for the results of the two diﬀerent data sets, one is forced to

think about the data issue involved in the current empirical studies of RBC

model. Indeed, this issue is mostly suppressed in the current debate.

89

5.4 Calibration and Matching to U. S. Time-

Series Data

Given the structural parameters, one can then assess the model to see how

closely it matches with the empirical data. The current method for assessing

a stochastic dynamic optimization model of RBC type is the calibration

technique, which has already been introduced in Chapter 3. The basic idea

of calibration is to compare the time series moments generated from the

model’s stochastic simulation to those from a sample economy.

The data generation process for this stochastic simulation is given by the

following equations:

c

t

= G

11

A

t

+ G

12

k

t

+g

1

(5.8)

n

t

= G

21

A

t

+ G

22

k

t

+ g

2

(5.9)

y

t

= A

t

k

1−α

t

(n

t

N/0.3)

α

(5.10)

A

t+1

= a

0

+ a

1

A

t

+ ǫ

t+1

(5.11)

k

t+1

=

1

1 +γ

[(1 −δ)k

t

+ y

t

−c

t

] (5.12)

where ǫ

t+1

∼ N(0, σ

2

ǫ

); and G

ij

and g

i

(i, j = 1, 2) are all the complicated

functions of the structural parameters and can be computed from our GAUSS

procedure of solving dynamic optimization problem as presented in Appendix

II of Chapter 1.

The structural parameters used for this stochastic simulation are deﬁned

as follows. First, we employ those in Table 5.2 at α

∗

= 0.58 for the parameters

α, β, δ and θ. The parameter γ is set 0.0045 as usual. The parameters a

0

, a

1

and σ

ǫ

in the stochastic equation (5.11) are estimated by the OLS method

given the time series computed from Solow residue.

6

The parameter N is

simply the sample mean of per capita hours N

t

. For convenience, we report

all these parameters in Table 5.4.

Table 5.4: Parameterizing the Standard RBC Model

α γ β δ θ N a

0

a

1

σ

ǫ

0.5800 0.0045 0.9892 0.0209 1.9552 299.03 0.0333 0.9811 0.0189

6

Note that these are the same as in Table 4.1.

90

Table 5.5 reports our calibration from 5000 stochastic simulations. In

particular, the moment statistics of the sample economy is computed from

Christiano’s data set, while those for model economy are generated from

our stochastic simulation using the data generation process (5.8) - (5.12).

Here, the moment statistics include the standard deviations of some major

macroeconomic variables and also their correlation coeﬃcients. For the model

economy, we can further obtain the distribution of these moment statistics,

which can be reﬂected by their corresponding standard deviations (those in

paratheses). Of course, the distributions are derived from our 5000 thousand

stochastic simulations. All time series data are detrended by the HP-ﬁlter.

Table 5.5: Calibration of Real Business Cycle Model (numbers in parentheses

are the corresponding standard deviations)

Consumption Capital Employment Output

Standard Deviations

Sample Economy 0.0081 0.0035 0.0165 0.0156

Model Economy 0.0090 0.0037 0.0050 0.0159

(0.0012) (0.0007) (0.0006) (0.0021)

Correlation Coeﬃcients

Sample Economy

Consumption 1.0000

Capital Stock 0.1741 1.0000

Employment 0.4604 0.2861 1.0000

Output 0.7550 0.0954 0.7263 1.0000

Model Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.2013 1.0000

(0.1089) (0.0000)

Employment 0.9381 −0.1431 1.0000

(0.0210) (0.0906) (0.0000)

Output 0.9796 0.0575 0.9432 1.0000

(0.0031) (0.1032) (0.0083) (0.0000)

5.4.1 The Labor Market Puzzle

By observing Table 5.5, we ﬁnd that among the four key variables the volatil-

ities of consumption, capital stock and output could be regarded as being

91

somewhat matched. This is indeed one of the major early results of real

business cycle theorists. However, the matching does not hold for employ-

ment. Indeed, the employment in the model economy is excessively smooth.

These results are further demonstrated by Figure 5.3 and Figure 5.4,

where we compare the observed series from the sample economy to the sim-

ulated series with innovation given by the observed Solow residual. We shall

remark that the excessive smoothness of employment is a typical problem of

the standard model that has been addressed many times in literature.

Figure 5.3: Simulated and Observed Series (non detrended): solid line ob-

served and dashed line simulated

92

Figure 5.4: Simulated and Observed Series (detrended by HP ﬁlter): solid

line observed and dashed line simulated

Now let us look at the correlations. In the sample economy, there are ba-

sically two signiﬁcant correlations. One is between consumption and output,

and the other is between employment and output. Both of these two corre-

lations have also been found in our model economy. However, in addition to

these two correlations, consumption and employment in the model economy

are also signiﬁcantly correlated. We remark that such an excessive correlation

has, to our knowledge, not yet been discussed in the literature. The discus-

sions have often been focused on the correlation with output. However, this

excessive correlation should not be surprising given that in the RBC model

the movements of employment and consumption reﬂect the movements of the

same state variables: capital stock and temporary shock. They, therefore,

should be somewhat correlated. The excessive smoothness of labor eﬀort and

the excessive correlation between labor and consumption will be taken up in

Chapter 8.

93

5.5 The Issue of the Solow Residual

So far we may argue that one of the major achievements of the standard RBC

model is that the model could explain the volatility of some key macroeco-

nomic variables such as output, consumption and capital stock. Meanwhile,

these results rely on the hypothesis that the driving force of the business cy-

cles are technology shocks, which are assumed to be measured by the Solow

residual. The measurement of technology can impact this result in two ways.

One is that the parameters a

0

, a

1

and σ

ǫ

in the stochastic equation (5.11)

are estimated from the time series computed from Solow residual. These pa-

rameters will directly aﬀect the results from our stochastic simulation. The

second is that the Solow residual also serves as the sequence of observed

innovations that generate the graphs in Figure 5.3 and Figure 5.4. Those

innovations are often used in the RBC literature as an additional indicator

to support the model and its matching of the empirical data.

Another major presumption of the RBC literature, not yet shown in Table

5.5, but to be shown below, is the technology-driven hypthoses, i.e., the

technology is procyclical with output, consumption and employment. Of

course, this celebrated result is obtained also from the empirical evidence, in

which the technology is measured by the standard Solow residual.

There are several reasons to distrust the standard Solow residual as a

measure of technology shock. First, Mankiw (1989) and Summers (1986) have

argued that such a measure often leads to excessive volatility in productivity

and even the possibility of technological regress, both of which seem to be

empirically implausible. Second, It has been shown that the Solow residual

can be expressed by some exogenuous variables, for example demand shocks

arising from military spending (Hall 1988) and changed monetary aggregates

(Evan 1992), which are unlikely to be related to factor productivity. Third,

the standard Solow residual is not a reliable measure of technology schocks

if the cyclical variation in factor utilization are signiﬁcant.

Considering that the Solow residual cannot be trusted as a measure of the

technology shock, researchers have now developed diﬀerent methods to mea-

sure technology correctly. All these methods are focused on the computation

of factor utilization. There are basically three strategies. The ﬁrst strategy

is to use an observed indicator to proxy for unobserved utilization. A typical

example is to employ electricity use as a proxy for capacity utilization (see

Burnside, Eichenbaum and Rebelo 1996). Another strategy is to construct

an economic model so that one could compute the factor utilization from

the observed variables (see Basu and Kimball 1997 and Basu, Fernald and

Kimball 1998). A third strategy identiﬁes the technology shock through an

VAR estimate, see Gali (1999) and Francis and Ramey (2001, 2003).

94

Recently, Gali (1999) and Francis and Ramey (2001) have found that if

one uses the corrected Solow residual – if one identiﬁes the technology shock

correctly – the technology shock is negatively correlated with employment

and therefore the celebrated discovery of the RBC literature must be re-

jected. Also, if the corrected Solow residual is signiﬁcantly diﬀerent from the

standard Solow residual, one may ﬁnd that the standard RBC model, using

the Solow residual, can match well the variations in output, consumption

and capital stock not because the model has been constructed correctly, but

because it uses a problematic measure of technology.

All these are important problematic issues that are related to the Solow

residual. Indeed, if they are conﬁrmed, the real business cycles model, as

driven by technology shocks, may not be a realistic paradigm for macroeco-

nomic analysis any more. In this section, we will refer to all of this recent

research employing our available data set. We will ﬁrst follow Hall (1988) and

Evan (1992) to test the exogeneity of Solow residual. Yet in this test, we sim-

ply use government spending, which is available in our Christiano’s data set.

We will construct a measurement of the technology shock that represents a

corrected Solow residual. This construction needs data on factor utilization.

Unlike other current research, we use empirically observed data series, the

capacity utilization of manufacturing, IPXMCAQ, obtained from Citibase.

Given our new measurement, we then explore whether the RBC model is still

able to explain the business cycles, in particular the variation in consump-

tion, output and capital stock. We shall also look at whether the technology

still moves procyclically with output, consumption and employment.

5.5.1 Testing the Exogeneity of the Solow Residual

Apparently, a critical assumption of the Solow residual to be a correct mea-

surement of the technology shock is that A

t

should be purely exogenuous.

In other words, the distribution of A

t

cannot be altered by the change in

other exogenuous variables such as the variables of monetary and ﬁscal pol-

icy. Therefore, testing the exogeneity of the Solow residual becomes our ﬁrst

investigation to explore whether the Solow residual is a correct measure of

the technology shock.

One possible way to test the exogeneity is to employ the Granger causality

test. This is also the approach taken by Evan (1992). For this purpose, we

shall investigate the following speciﬁcation:

A

t

= c + α

1

A

t−1

+· · · + α

p

A

t−p

+β

1

g

t−1

+· · · + β

p

g

t−p

+ ε

t

(5.13)

where g

t

in this test is government spending, as an aggregate demand vari-

able, over X

t

. If the Solow residual is exogenuous, g

t

should not have any

95

explanatory power for A

t

. Therefore our null hypothesis is

H

0

: β

1

= · · · = β

p

= 0 (5.14)

The rejection of the null hypothesis is suﬃcient for us to refute the assump-

tion that A

t

is strictly exogenuous. It is well known that the result of any

empirical test for Granger causality can be surprisingly sensitive to the choice

of lag length p. The test therefore will be conducted for diﬀerent lag lengths

p’s. Table 5.6 provides the corresponding F-statistics computed for the dif-

ferent p’s.

Table 5.6: F−Statistics for Testing Exogeneity of Solow Residual

F− statistics degrees of freedom

p = 1 9.5769969 (1, 92)

p = 2 4.3041035 (2, 90)

p = 3 3.2775435 (4, 86)

p = 4 2.3825632 (6, 82)

From Table 5.6, one ﬁnds that at 5% signiﬁcance level we can reject the

null hypothesis for all the lag lengths p

′

s.

7

5.5.2 Corrected Technology Shocks

The analysis in our previous section indicates that the hypothesis can re-

jected that the standard Solow residual be strictly exogenuous. On the other

hand, those policy variables, such as government spending, which certainly

represents a demand shock, may have explanatory power for the variation of

the Solow residual. This ﬁnding is consistent with the results in Hall (1988)

and Evan (1992). Therefore we may have suﬃcient reason to distrust the

Solow residual to be a good measure of the technology shock.

Next, we present a simple way of how to extract a technology shock from

macroeconomic data. If we look at the computation of the Solow residual,

e.g., equation (5.5), we ﬁnd two strong assumptions inherent in the formula-

tion of the Solow residual. First, it is assumed that the capital stock is fully

utilized. Second, it is further assumed that the population follows a constant

growth rate which is a part of γ. In other words, there is no variation in

population growth. Next we shall consider the derivation of the corrected

Solow residual by relaxing those strong assumptions.

7

Although we are not able to obtain the same at 1% signiﬁcance level.

96

Let u

t

denote the utilization of capital stock, which can be measured

by IPXMCAQ from Citibase. The observed output is thus produced by

the utilized capital and labor service (expressed in terms of total observed

working hours) via the production function:

Y

t

=

˜

A

t

(u

t

K

t

)

1−α

(Z

t

E

t

H

t

)

α

(5.15)

Above,

˜

A

t

is the corrected Solow residual (which is our new measure of

temporary shock in technology); E

t

is the number of workers employed; H

t

denotes the hours per employed worker;

8

and Z

t

is the permanent shock in

technology. Note that in this formulation, we interpret the utilization of

labor service only in terms of their working hours and therefore ignore their

actual eﬀort, which is more diﬃcult to be observed.

Let L

t

denote the permanent shock to population so that X

t

= Z

t

L

t

while

L

t

denotes the observed population so that

EtHt

Lt

= N

t

. Dividing both sides

of (5.15) by X

t

, we then obtain

y

t

=

˜

A

t

(u

t

k

t

)

1−α

(l

t

N

t

)

α

(5.16)

Above l

t

≡

Lt

Lt

. Given equation (5.16), the corrected Solow residual

˜

A

t

can

be computed as

˜

A

t

=

y

t

(u

t

k

t

)

1−α

(l

t

N

t

)

α

(5.17)

Comparing this with equation (5.5), one ﬁnds that our corrected Solow resid-

ual

˜

A

t

will match the standard Solow residual A

t

if and only if both u

t

and

l

t

equal 1. Figure 5.5 compares these two time series: one for non-detrended

and the for detrended series.

8

Note that this is diﬀerent from our notation N

t

before, which is the hours per capita.

97

Figure 5.5: The Solow Residual: standard (solid curve) and corrected (dashed

curve)

As one can observe in the ﬁgure 5.5, the two series follow basically the

same trend while their volatilities are almost the same.

9

However, in the short

run, they rather move in diﬀerent directions if we compare the detrended

series.

5.5.3 Business Cycles with Corrected Solow Residual

Next we shall use the corrected Solow residual to test the technology-driven

hypothesis. In Table 5.7, we report the cross-correlations of the technology

shock to our four key economic variables: output, consumption, employment

and capital stock. These correlations are compared for three economies: the

RBC economy (whose statistics is computed from 5000 simulations), the

Sample Economy I (in which the technology shock is represented by the

standard Solow residual) and the Sample Economy II (to be represented by

the corrected Solow residual). The data series are again detrended by the

HP-ﬁlter.

9

A similar volatility is also found in Burnside et al. (1996).

98

Table 5.7: The Cross-Correlation of Technology

output consumption employment capital stock

RBC Economy 0.9903 0.9722 0.9966 -0.0255

(0.0031) (0.0084) (0.0013) (0.1077)

Sample Economy I 0.7844 0.7008 0.1736 -0.2142

Sample Economy II -0.3422 -0.1108 -0.5854 0.0762

If we look at the Sample Economy I, where the standard Solow resid-

ual is employed, we ﬁnd that the technology shock is procyclical to output,

consumption and employment. This result is exactly predicted by the RBC

Economy and represents what has been called the technology-driven hypoth-

esis. However, if we use the corrected Solow residual, as in Sample Economy

II, we ﬁnd a somewhat opposite result, especially for employment. We, there-

fore, can conﬁrm the ﬁndings of the recent research by Basu, et al. (1998),

Gali (1999) and Francis and Ramey (2001, 2003).

To test whether the model can still match the observed business cycles,

we provide in Figure 5.6 a one time simulation with the observed innovation

given by the corrected Solow residual.

10

Comparing Figure 5.6 to Figure 5.4,

we ﬁnd that the results are in sharp contrast to the prediction as referred in

the standard RBC model.

10

Here the structural parameters are still the standard ones as given in Table 5.4.

99

Figure 5.6: Sample and Predicted Moments with Innovation Given by Cor-

rected Solow Residual

5.6 Conclusions

The standard RBC model has been regarded as a model that replicates the

basic moment properties of U.S. macroeconomic time series data despite its

rather simple structure. Prescott (1986) summarizes the moment implica-

tions as indicating “the match between theory and observation is excellent,

but far from perfect”. Indeed, many have felt that the RBC research has

at least passed the ﬁrst test. Yet this early assessment should be subject to

certain qualiﬁcation.

In the ﬁrst place, this early assessment builds on the reconstruction of

U.S. macroeconomic data. Through its necessity to accommodate the data to

the model’s implication, such data reconstruction seems to force the ﬁrst mo-

ments of certain macroeconomic variables of the U.S. economy to be matched

by the model’s steady state at the given economically feasible standard pa-

100

rameters. The unusual small standard errors of the estimates seem to conﬁrm

this suspicion.

Second, although one may celebrate the ﬁt of the variation of consump-

tion, output and capital stock when the reconstructed data series are em-

ployed, we still cannot ignore the problems of excessive smoothness of labor

eﬀort and excessive correlation between labor and consumption. Both of

these two problems are related to the labor market speciﬁcation of the RBC

model. For the model to be able to replicate employment variation, it seems

necessary to make improvement upon the labor market speciﬁcation. One

possible approach for such improvement is to allow for wage stickyness and

nonclearing of the labor market, a task that we will turn to in Chapter 8.

Third, the celebrated ﬁt of the variation in consumption, output and

capital stock may rely on the incorrected measure of technology. As we have

shown in Figure 5.6, the match does not exist any more when we use the

corrected Solow residual as the observed innovations. This incorrect measure

of technology takes us to the technology puzzle: the procyclical technology,

driving the business cycle, may not be a very plausible hypothesis. As King

et. al (1999) pointed out, “it is the ﬁnal criticism that the Solow residual is a

problematic measure of technology shock that has remained the Achilles heel

of the RBC literature.” In Chapter 9, we shall address the technology puzzle

again by introducing monopolistic competition into an stochastic dynamic

macro model.

Chapter 6

Asset Market Implications of

Real Business Cycles

6.1 Introduction

In this chapter, we shall study asset price implications of the standard RBC

model. The idea of employing a basic stochastic growth model to study asset

prices goes back to Brock and Mirman (1972) and Brock (1978, 1982). Asset

prices contain valuable information about intertemporal decision making and

dynamic models explaining asset pricing are of great importance in current

research. We here want to study a production economy with asset market

and spell out its implications for asset prices and returns. In particular we

will explore to what extend it can replicate the empirically found risk-free

interest rate, equity premium and Sharpe-ratio.

Modelling asset price and risk premia in models with production is much

more challenging than in exchange economies. Most of the asset pricing

literature has followed Lucas (1978) and Mehra and Prescott (1985) in com-

puting asset prices from the consumption based asset pricing models with

an exogenous dividend streams. Production economies oﬀer a much richer,

and realistic environment. First, in economies with an exogenous dividend

stream and no savings consumers are forced to consume their endowment. In

economies with production where asset returns and consumption are endoge-

nous consumers can save and hence transfer consumption between periods.

Second, in economies with an exogenous dividend stream the aggregate con-

sumption is usually used as a proxy for equity dividends. Empirically, this

is not a very sensible modelling choice. Since there is a capital stock in pro-

duction economies, there is a more realistic modelling of equity dividends is

possible.

101

102

Although recently further extension of the baseline stochastic growth

model of RBC type were developed to match better actual asset market

characteristics

1

we will in the current paper by and large restrict ourselves

to the baseline model. The theoretical framework in this chapter is taken

from Lettau (1999), Lettau and Uhlig (1999) and Lettau, Gong and Semm-

ler (2001) where the closed-form solutions for risk premia of equity, long-term

real bonds, the Sharpe-ratio and the risk-free interest rates are presented in a

log-linearized RBC model as developed by Campbell (1994). Those equations

can be used as additional moment restrictions in the estimation process. We

introduce the asset pricing restrictions step-by-step to clearly demonstrate

the eﬀect of each new restriction.

First, we estimate the model using only the restrictions of real variables

as in Chapter 5. The data employed for this estimation are taken again from

Christiano (1987).

2

We then add our ﬁrst asset pricing restriction, the risk-

free interest rate. We use the observed 30-day T-bill rate to match the one-

period risk-free interest rate implied by the model.

3

The second asset pricing

restriction concerns the risk-return trade-oﬀ as measured by the Sharpe-

ratio, or the price of risk. This variable determines how much expected

return agents require per unit of ﬁnancial risk. Hansen and Jagannathan

(1991) and Lettau and Uhlig (1999) show how important the Sharpe-ratio

4

is in evaluating asset prices generated by diﬀerent models. Introducing the

Sharpe-ratio as moment restriction in the estimation procedure requires an

iterative procedure to estimate the risk aversion parameter. We ﬁnd that the

Sharpe-ratio restriction aﬀects the estimation of the model drastically. For

each estimation, we compute the implied premia of equity and long-term real

bond. Those values are then compared to the stylized facts of asset markets.

The estimation technique in this chapter follows the Maximum Likeli-

hood (ML) method as discussed in Chapter 4. All the estimations are again

conducted through the numerical algorithm, the simulated annealing. In ad-

dition, we introduce a diagnostic procedure developed by Watson (1993) and

Diebold, Ohanian and Berkowitz (1995) to test whether the moments pre-

dicted by the model, for the estimated parameters, can match the moments

of the actual macroeconomic time series. In particular, we use the variance-

covariance matrix of the estimated parameters to infer the intervals of the

1

See, for example Jerman (1998), Boldrin, Christiano and Fisher (2001) and Gr¨ une and

Semmler (2004b).

2

Using Christiano’s data set, we implicitly assume that the standard model can, to some

extent, replicate the moments of the real variables. Of course, as the previous chapter

has shown the standard model fails also along some real dimensions.

3

Using 30-day rate allows us to keep inﬂation uncertainty at a minimum.

4

See also Sharpe (1964)

103

moment statistics and to study whether the actual moments derived from

the sample data fall within this interval.

The rest of the chapter is organized as follows. In section 2, we use the

standard RBC model and log-linearization as proposed by Campbell (1994)

and derive the closed-form solutions for the ﬁnancial variables. Section 3

presents the estimation for the model speciﬁed by diﬀerent moments restric-

tions. In Section 4, we interpret our results and contrast the asset market

implications of our estimates to the stylized facts of the asset market. Section

5 compares the second moments of the time series generated from the model

to the moments of actual time series data. Section 6 concludes.

6.2 The Standard Model and Its Asset Pric-

ing Implications

6.2.1 The Standard Model

We follow Campbell (1994) and use the notation Y

t

for output, K

t

for capital

stock, A

t

for technology, N

t

for normalized labor input and C

t

for consump-

tion. The maximization problem of a representative agent is assumed to take

the form

5

Max E

t

∞

i=0

β

i

_

C

1−γ

t+i

1 −γ

+ θ log(1 −N

t+i

)

_

subject to

K

t+1

= (1 −δ)K

t

+ Y

t

−C

t

with Y

t

given by (A

t

N

t

)

α

K

1−α

t

. The ﬁrst order condition is given by the

following Euler equation:

C

−γ

t

= βE

t

_

C

−γ

t+1

R

t+1

_

(6.1)

1

θ(1 −N

t

)

= α

A

α

t

C

t

_

K

t

N

t

_

(1−α)

(6.2)

5

Note that, as in our previous modelling, we apply here the power utility as describing the

preferences of the representative household. For the model of asset market implications

other preferences, for example, habit formation are often employed, see Jerman (1998),

Boldri, Christiano and Fisher (2001) and Cochrane (2001, ch. 21) and Gr¨ une and Semmler

(2004b).

104

where R

t+1

is the gross rate of return on investment in capital, which is equal

to the marginal product of capital in production plus undepreciated capital:

R

t+1

≡ (1 −α)

_

A

t+1

N

t+1

K

t+1

_

α

+ 1 −δ.

We allow ﬁrms to issue bonds as well as equity. Since markets are competi-

tive, real allocations will not be aﬀected by this choice, i.e. the Modigliani-

Miller theorem is presumed to hold. We denote the leverage factor (the ratio

of bonds outstanding and total ﬁrm value) as ζ.

At the steady state, the technology, consumption, output and capital

stock all grow at a common rate G = A

t+1

/A

t

. Hence, (6.1) becomes

G = βR

where R is the steady state of R

t+1

. Taking log for both sides, we can further

write the above equation as

γg = log(β) + r. (6.3)

where g ≡ log G and r ≡ log R. This deﬁnes the relation among g, r, β and γ.

In the rest of the chapter, we use g, r, and γ as parameters to be determined,

the implied value for the discount factor β can then be deduced from (6.3).

6.2.2 The Log-linear Approximate Solution

Outside the steady state, the model characterizes a system of nonlinear equa-

tions in the logs of technology a

t

, consumption c

t

, labor n

t

and capital stock

k

t

. Note that here we use the lower case letter as the corresponding log

variables of the capital letter. In the case of incomplete capital depreciation

δ < 1, the exact analytical solution to the model is not feasible. We therefore

seek instead an approximate analytical solution. Assume that the technology

shock follows an AR(1) process:

a

t

= φa

t−1

+ ε

t

(6.4)

with ε

t

to be the i.i.d. innovation: ε

t

∼ N(0, σ

2

ε

). Campbell (1994) shows

that the solution, using the log-linear approximation method, can be written

as

c

t

= η

ck

k

t

+ η

ca

a

t

(6.5)

n

t

= η

nk

k

t

+ η

na

a

t

(6.6)

105

and the law of motion of capital is

k

t

= η

kk

k

t−1

+ η

ka

a

t−1

(6.7)

where η

ck

, η

ca

, η

nk

, η

na

, η

kk

, and η

ka

are all the complicated functions of the

parameters α, δ, r, g, γ, φ and N (the steady state value of N

t

).

6.2.3 The Asset Price Implications

The standard RBC model as presented above has strong implications for

asset pricing. First, the Euler equation (6.1) implies the following expression

regarding the risk-free rate R

f

t

:

6

R

f

t

=

_

βE

t

_

(C

t+1

/C

t

)

−γ

¸_

−1

.

Writing the equation in the log form, we obtain the risk-free rate in logs as

7

r

f

t

= γE

t

∆c

t+1

−

1

2

γ

2

V ar∆c

t+1

−log β. (6.8)

Using the process of consumption, capital stock and technology as expressed

in (6.5), (6.7) and (6.4) while ignoring the constant term involving the dis-

count factor and the variance of consumption growth, we derive from (6.8)

(see Lettau et al. (2001) for the details.):

r

f

t

= γ

η

ck

η

ka

1 −η

kk

L

ε

t−1

, (6.9)

where L is the lag operator. Matching this process implied by the model

to the data will give us the ﬁrst asset market restriction. The second asset

market restriction will be the Sharpe-ratio which summarizes the risk-return

trade-oﬀ:

SR

t

= max

all assets

E

t

_

R

t+1

−R

f

t+1

_

σ

t

[R

t+1

]

. (6.10)

Since the model is log-linear and has normal shocks, the Sharpe-ratio can be

computed in closed form as:

8

SR = γη

ca

σ

ε

. (6.11)

6

For further details, see Cochrane (2001, chs. 1.2 and 2.1). We also want to note that in

RBC models the risk-free rate is generally too high, and its standard deviation is much

too low compared to the data, see Hornstein and Uhlig (2001).

7

Note that here we use the formula Ee

x

= e

Ex+σ

2

x

/2

.

8

See Lettau and Uhlig (1999) for the details.

106

Lastly, we consider the risk premia of equity (EP) and long-term real bonds

(LTBP). These can be computed on the basis of the log-linear solutions (6.5)-

(6.7) as:

LTBP = −γ

2

β

η

ck

η

ka

1 −βη

kk

η

2

ca

σ

2

ε

(6.12)

EP =

_

η

dk

η

nk

−η

da

η

kk

1 −βη

kk

−γβ

η

ck

η

kk

1 −βη

kk

_

γη

2

ca

σ

2

ε

. (6.13)

Again we refer to Lettau (1999) and Lettau, Gong and Semmler (2001) for

details of those computations.

6.2.4 Some Stylized Facts

Table 6.1 summarizes some key facts on asset markets and real economic ac-

tivity for the US economy. A successful model should be consistent with these

basic moments of real and ﬁnancial variables. In addition to the well-known

stylized facts on macroeconomic variables, we will consider the performance

of the model concerning the following facts of asset markets.

Table 6.1: Asset Market Facts and Real Variables

Standard Deviation Mean

GDP 1.72

Consumption 1.27

Investment 8.24

Labor Input 1.59

T-Bill 0.86 0.19

SP 500 7.53 2.17

Equity Premium 7.42 1.99

Long Bond Premium 0.21 4.80

Sharpe Ratio 0.27

Note: Standard Deviations for the real variables are taken from

Cooley and Prescott (1995). The series are H-P ﬁltered. Asset market

data are from Lettau (1999). All data are from the U.S. economy at

quarterly frequency. Units are per cent per quarters. The Sharpe-ratio

is the mean of equity premium divided by its standard deviation.

The table shows that the equity premium is roughly 2% per quarter.

The Sharpe-ratio, which measures the risk-return trade-oﬀ, equals 0.27 in

107

post-war data of the U.S.. The standard deviation of the real variables

reveal the usual hierarchy in volatility with investment being most volatile

and consumption the smoothest variable. Among the ﬁnancial variables the

equity price and equity premium exhibit the highest volatility, roughly six

times higher than consumption.

6.3 The Estimation

6.3.1 The Structural Parameters to be Estimated

The RBC model presented in section 2 contains seven parameters, α, δ, r, g, γ,

φ, and N. Recall that the discount factor is determined in (6.3) for given

values of g, r and γ. The parameter θ is simply dropped due to our log-linear

approximation. Of course, we would like to estimate as many parameters

as possible. However, some of the parameters have to be pre-speciﬁed. The

computation of technology shocks requires the values for α and g. In this

paper we use the standard values of α = 0.667 and g = 0.005. N is speciﬁed

as 0.3. The parameter φ is estimated independently from (6.4) by OLS

regression. This leaves the risk aversion parameter γ, the average interest

rate r and the depreciation rate δ to be estimated. The estimation strategy is

similar to Christiano and Eichenbaum (1992). However, they ﬁx the discount

factor and the risk aversion parameter without estimating them. In contrast,

the estimation of these parameters is central to our strategy, as we will see

shortly.

6.3.2 The Data

For the real variables of the economy, we use the data set as constructed by

Christiano (1987). The data set covers the period from the third quarter

of 1955 through the fourth quarter of 1983 (1955.1-1983.4). As we have

demonstrated in the last chapter, the Christiano data set can match the real

side of the economy better than the commonly used NIPA data set. For

the time series of the risk-free interest rate, we use the 30-day T-bill rate to

minimize unmodeled inﬂation risk.

To make the data suitable for estimation, we are required to detrend the

data into their log-deviation form. For a data observation X

t

, the detrended

value x

t

is assumed to take the form log( X

t

/X

t

), where X

t

is the value

of X

t

on its steady state path, i.e., X

t

= (1 + g)

t−1

X

1

. Therefore, for the

given g, which could be calculated from the sample, the computation of

x

t

depends on X

1

, the initial X

t

. We compute this initial condition based on

108

the consideration that the mean of x

t

is equal to 0. In other words,

1

T

T

i=1

log(X

t

/X

t

) =

1

T

T

i=1

log(X

t

) −

1

T

T

i=1

log(X

t

)

=

1

T

T

i=1

log(X

t

) −

1

T

T

i=1

log(X

1

) −

1

T

T

i=1

log

_

(1 +g)

t−1

¸

= 0

Solving the above equation for X

1

, we obtain

X

1

=

1

T

exp

_

T

i=1

log(X

t

) −

T

i=1

log

_

(1 +g)

t−1

¸

_

.

6.3.3 The Moment Restrictions of Estimation

For the estimation in this chapter, we use the maximum likelihood (ML)

method as discussed in Chapter 3. In order to analyze the role of each

restriction, we introduce the restrictions step-by-step. First, we constrain

the risk aversion parameter r to unity and use only moment restrictions of

the real variables, i.e. (6.5) - (6.7) so we can compare our results to those

in Christiano and Eichenbaum (1992). The remaining parameters thus to be

estimated are δ and r. We call this Model 1 (M1). The matrices for the ML

estimation are given by

B =

_

_

1 0 0

−η

ck

1 0

−η

ck

0 1

_

_

, Γ =

_

_

−η

kk

−η

ka

0

0 0 −η

ca

0 0 −η

na

_

_

,

y

t

=

_

_

k

t

c

t

n

t

_

_

, x

t

=

_

_

k

t−1

a

t−1

a

t

_

_

.

After considering the estimation with the moment restrictions only for real

variables, we add restrictions from asset markets one by one. We start by

including the following moment restriction of the risk-free interest rate in

estimation while still keeping risk aversion ﬁxed at unity:

E

_

b

t

−r

f

t

_

= 0

109

where b

t

denotes the return on the 30-day T-bill and the risk-free rate r

f

t

is

computed as in (6.9). We refer to this version as Model 2 (M2). In this case

the matrices B and Γ and the vectors x

t

and y

t

can be written as

B =

_

¸

¸

_

1 0 0 0

−η

nk

1 0 0

−η

nk

0 1 0

0 0 0 1

_

¸

¸

_

, Γ =

_

¸

¸

_

−η

kk

−η

ka

0 0

0 0 −η

ca

0

0 0 −η

na

0

0 0 0 −1

_

¸

¸

_

,

y

t

=

_

¸

¸

_

k

t

c

t

n

t

b

t

_

¸

¸

_

, x

t

=

_

¸

¸

_

k

t−1

a

t−1

a

t

r

f

t

_

¸

¸

_

.

Model 3 (M3) uses the same moment restrictions as Model 2 but leaves the

risk aversion parameter r to be estimated rather than ﬁxed to unity.

Finally, we impose that the dynamic model should generate a Sharpe-ratio

of 0.27 as measured in the data (see Table 1). We take this restriction into

account in two diﬀerent ways. First, as a shortcut, we ﬁx the risk aversion

at 50, a value suggested in Lettau and Uhlig (1999) for generating a Sharpe-

ratio of 0.27 using actual consumption data. Given this value, we estimate the

remaining parameter δ and r. This will be called Model 4 (M4). In the next

version, Model 5 (M5), we are simultaneously estimating γ while imposing

a Sharpe-ratio restriction of 0.27. Recall from (6.11) that the Sharpe-ratio

is a function of risk aversion, the standard deviation of the technology shock

and the elasticity of consumption with respect to the shock η

ca

. Of course,

η

ca

is itself a complicated function of γ. Hence, the Sharpe-ratio restriction

becomes

γ =

0.27

η

ca

(γ)σ

ε

. (6.14)

This equation provides the solution of γ, given the other parameters δ

and r. Since it is nonlinear in γ, we, therefore, have to use an iterative

procedure to obtain the solution. For each given δ and r, searched by the

simulated annealing, we ﬁrst set an initial γ, denoted by γ

0

. Then the new γ,

denoted by γ

1

, is calculated from (6.14), which is equal to 0.27/[η

ca

(γ

0

)σ

ε

].

This procedure is continued until convergence.

We summarize the diﬀerent cases in Table 6.2, where we start by using

only restrictions on real variables and ﬁx risk aversion to unity (M1). We add

the risk-free rate restriction keeping risk aversion at one (M2), then estimate

110

it (M3). Finally we add the Sharpe-ratio restriction, ﬁxing risk aversion at 50

(M4) and estimate it using an iterative procedure (M5). For each model we

also compute the implied values of the long-term bond and equity premium

using (6.12) and (6.13).

Table 6.2: Summary of Models

Models Estimated Parameters Fixed Parameters Asset Restrictions

M1 r, δ γ = 1 none

M2 r, δ γ = 1 risk-free rate

M3 r, δ, γ risk-free rate

M4 r, δ γ = 50 risk-free rate, Sharpe-ratio

M5 r, δ, γ risk-free rate, Sharpe-ratio

6.4 The Estimation Results

Table 6.3 summarizes the estimations for the ﬁrst three models. Standard

errors are in parentheses. Entries without standard errors are preset and

hence are not estimated.

Table 6.3: Summary of Estimation Results

9

Models δ r γ

M1 0.0189 (0.0144) 0.0077 (0.0160) preﬁxed to 1

M2 0.0220 (0.0132) 0.0041 (0.0144) preﬁxed to 1

M3 0.0344 (0.0156) 0.0088 (0.0185) 2.0633 (0.4719)

Consider ﬁrst Model 1, which only uses restrictions on real variables. The

depreciation rate is estimated to be just below 2% which close to Christiano

and Eichenbaum’s (1992) results. The average interest rate is 0.77% per

quarter or 3.08% on an annual basis. The implied discount factor computed

from (6.3) is 0.9972. These results conﬁrm the estimates in Christiano and

Eichenbaum (1992). Adding the risk-free rate restriction in Model 2 does

not signiﬁcantly change the estimates. The discount factor is slightly higher

while the average risk-free rate decreases.

However the implied discount factor now exceeds unity, a problem also

encountered in Eichenbaum et al. (1988). Christiano and Eichenbaum (1992)

9

The standard errors are in parenthesis.

111

avoid this by ﬁxing the discount factor below unity rather than estimating it.

Model 3 is more general since the risk aversion parameter is estimated instead

of ﬁxed at unity. The ML procedure estimates the risk aversion parameter

to be roughly 2 and signiﬁcantly diﬀerent from 1, the value implied from log-

utility function. Adding the risk-free rate restriction increases the estimates

of δ and r somewhat. Overall, the model is able to produce sensible parameter

estimates when the moment restriction for the risk-free rate is introduced.

While the implications of the dynamic optimization model concerning

the real macroeconomic variables could be considered as fairly successful,

the implications for asset prices are dismal. Table 6.4 computes the Sharpe-

ratio as well as risk premia for equity and long term real bond using (6.11) -

(6.13). Note that these variables are not used in the estimation of the model

parameters. The leverage factor ζ is set to 2/3 for the computation of the

equity premium.

10

Table 6.4: Asset Pricing Implications

Models SR LT BPrem EqPrem

M1 0.0065 0.000% -0.082%

M2 0.0065 -0.042% -0.085%

M3 0.0180 -0.053% -0.091%

Table 6.4 shows that the RBC model is not able to produce sensible asset

market prices when the model parameters are estimated from the restrictions

derived only from the real side of the model (or, as in M3, adding the risk-free

rate). The Sharpe-ratio is too small by a factor of 50 and both risk premia

are too small as well, even negative for certain cases. Introducing the risk-

free rate restriction improves the performance only a little bit. Next, we will

try to estimate the model by adding the Sharpe-ratio moment restrictions.

The estimation is reported in Table 6.5.

Table 6.5: Matching the Sharpe-Ratio

Models δ r γ

M4 1 0 preﬁxed to 50

M5 1 1 60

10

This value is advocated in Benninga and Protopapadakis (1990).

112

Model 4 ﬁxes the risk aversion at 50. As explained in Lettau and Uhlig

(1999), such a high level of risk aversion has the potential to generate rea-

sonable Sharpe-ratios in consumption CAPM models. The question now is

how the moment restrictions of the real variables are aﬀected by such a high

level of risk aversion. The ﬁrst row of Table 6.5 shows that the resulting

estimates are not sensible. The estimates for the depreciation factors and

the steady-state interest rate converge to the pre-speciﬁed constraints,

11

or

the estimation does not settle down to an interior optimum. This implies

that the real side of the model does not yield reasonable results when risk

aversion is 50. High risk aversion implies a low elasticity of intertemporal

substitution so that agents are very reluctant to change their consumption

over time.

Trying to estimate risk aversion while matching the Sharpe-ratio gives

similar results. It is not possible to estimate the RBC model with simulta-

neously satisfying the moment restrictions from both the real side and the

ﬁnancial side of the model, as shown in the last row in Table 7.5. Again the

parameter estimates do converge to pre-speciﬁed constraints. The depreci-

ation rate converges again to unity as does the steady-state interest rate r.

The point estimate of risk aversion parameter is high (60). The reason is

of course that a high Sharpe-ratio requires high risk aversion. The tension

between the Sharpe-ratio restriction and the real side of the model causes

the estimation to fail. It demonstrates again that the asset pricing charac-

teristics that one ﬁnd in the data are fundamentally incompatible with the

standard RBC model.

6.5 The Evaluation of Predicted and Sample

Moments

Next we provide a diagnostic procedure to compare the second moments

predicted by the model with the moments implied by the sample data. Our

objective here is to ask whether our RBC model can predict the actual mo-

ments of the time series for both the real and asset market. The moments

are revealed by the spectra at various frequencies. We remark that a simi-

lar diagnostic procedure can be found in Watson (1993) and Diebold et. al

(1995).

Given the observations on k

t

, a

t

and the estimated parameters of our log-

linear model, the predicted c

t

and n

t

can be constructed from the right hand

side of (6.5) - (6.6) with k

t

and a

t

to be their actual observations. We now

11

We constraint the estimates to lie between 0 and 1.

113

consider the possible deviations of our predicted series from the sample series.

We hereby employ our most reasonable estimated Model 3. We can use the

variance-covariance matrix of our estimated parameters to infer the intervals

of our forecasted series hence also the intervals of the moment statistics that

we are interested in.

Figure 6.1: Predicted and Actual Series: solid lines (predicted series), dotted

lines (actual series) for A) consumption, B) labor, C) risk-free interest rate

and D) long term equity excess return; all variables HP detrended (except

for excess equity return)

114

Figure 6.2: The Second Moment Comparison: solid line (actual moments),

dashed and dotted lines (the intervals of predicted moments) for A) con-

sumption, B) labor, C) risk-free interest rate and D) long-term equity excess

return; all variables detrended (except excess equity return)

Figure 6.1 presents the Hodrick-Prescott (HP) ﬁltered actual and pre-

dicted time series data on consumption, labor eﬀort, risk-free rate and eq-

uity return. As shown in Chapter 5, the consumption series can somewhat

be matched whereas the volatility in the labor eﬀort as well as in the risk-free

rate and equity excess return cannot be matched. The insuﬃcient match of

the latter three series are further conﬁrmed by Figure 6.2 where we compare

the spectra calculated from the data samples to the intervals of the spectra

predicted, at 5% signiﬁcance level, by the models.

A good match of the actual and predicted second moments of the time

series would be represented by the fact that the solid line falls within the

interval of the dashed and dotted lines. In particular the time series for

115

labor eﬀort, risk-free interest rate and equity return fail to do so.

6.6 Conclusions

Asset prices contain valuable information about intertemporal decision mak-

ing of economic agents. This chapter has estimated the parameters of a

standard RBC model taking the asset pricing implications into account. We

introduce model restrictions based on asset pricing implications in addition

to the standard restrictions of the real variables and estimate the model by

using ML method. We use the risk-free interest rate and the Sharpe-ratio

in matching actual and predicted asset market moments and compute the

implicit risk premia for long real bonds and equity. We ﬁnd that though the

inclusion of the risk-free interest rate as a moment restriction can produce

sensible estimates, the computed Sharpe-ratio and the risk premia of long-

term real bonds and equity are in general counterfactual. The computed

Sharpe-ratio is too low while both risk premia are small and even negative.

Moreover, the attempt to match the Sharpe-ratio in the estimation process

can hardly generate sensible estimates. Finally, given the sensible param-

eter estimates, the second moments of labor eﬀort, risk-free interest rate

and long-term equity return predicted by the model do not match well the

corresponding moments of the sample economy.

We conclude that the standard RBC model cannot match the asset mar-

ket restrictions, at least with the standard technology shock, constant rela-

tive risk aversion (CRRA) utility function and no adjustment costs. Other

researchers have looked at some extensions of the standard model such as

technology shocks with a greater variance, other utility functions, for exam-

ple, utility functions with habit formation, and adjustment costs of invest-

ment. The latter line of research has been pursued by Jerman (1998) and

Boldrin, Christiano and Fisher (1996, 2001).

12

Those extensions of the stan-

dard model are, to least to a certain extent, more successful in replicating

stylized asset market characteristics, yet these extensions frequently use ex-

treme parameter values to be able to match the asset price characteristics

of the model with the data. Moreover, the approximation methods for solv-

ing the models might not be very reliable since accuracy tests for the used

approximation methods are still missing.

13

12

see also W¨ohrmann, Semmler and Lettau (2001) where time varying characteristics of

asset prices are explored.

13

See Gr¨ une and Semmler (2004b).

Part III

Beyond the Standard Model —

Model Variants with Keynesian

Features

116

Chapter 7

Multiple Equilibria and History

Dependence

7.1 Introduction

One of the important features of Keynesian economics is that there is no

unique equilibrium toward which the economy moves. The dynamics are

open ended in the sense that it can move to low level, or high level of eco-

nomic activity and expectations and policy may become important to tild

the dynamics to one or the other outcomes.

1

In recent times such type of dynamics have been found in a large num-

ber of dynamic models with intertemporal optimization. Those models have

been called indeterminacy models. Theoretical models of this type are re-

viewed in Benhabib and Farmer (1999) and Farmer (12001) and an empirical

assessment is given in Schmidt-Grohe (2002). Some of the models are real

models, RBC models, with increasind returns to scale and or more gener-

ate preferences, as introduced in chapter 4.5, that can exhibit locally stable

steady state equilibria giving rise to sun spot phenomena.

2

Multiplicity of

equilibria can also arise here as a consequence of increasing returns to scale

and/or more general preferences. Others are monetary models, where con-

sumers’ welfare is aﬀected positively by consumption and cash balances and

negatively by the labor eﬀort and an inﬂation gap from some target rates.

For certain substitution properties between consumption and cash holdings

those models admit unstable as well as stable high level and low level steady

1

In Keynes (1936) such an open ended dynamic is described in Chapter 5 of his book;

Keynes describes here how higher or lower ”long term positions” associated with higher

or lower output and employment might be generated by expectational forces.

2

See, for example Kim (2004).

117

118

states. Here can be indeterminacy in the sense that any initial condition

in the neighborhood of one of the steady-states is associated with a path

forward or away from that steady state, see Benhabib et al. (2001).

When indeterminacy models exhibit multiple steady state equilibria, where

a middle one is an attractor (repellor), than this permits any path in the

vicinity of the steady state equilibria to move back to (away from) the steady

state equilibrium. Despite some unresolved issues in the literature on mul-

tiple equilibria and indeterminacy

3

it has greatly enriched macrodynamic

modelling.

Pursuing this line of research we show that one does not need to refer

to increasing returns to scale or speciﬁc preferences to obtain such results.

We show that due to the adjustment cost of capital we may obtain non-

uniqueness of steady state equilibria in an otherwise standard dynamic op-

timization version. Multiple steady state equilibria, in turn, may lead to

thresholds separating diﬀerent domains of attraction of capital stock, con-

sumption, employment and welfare level. As our solution shows thresholds

are important as separation points below or above which it is advantages to

move to lower or higher levels of capital stock, consumption, employment and

welfare. Our model version thus can explain of how the economy becomes

history dependent and moves, after a shock or policy inﬂuences, to a low or

high level equilibria in employment and output.

Recently, numerous stochastic growth model have employed adjustment

cost of capital. In non-stochastic dynamic models adjustment cost has al-

ready been used in Eisner and Stroz (1963), Lucas (1967) and Hayashi (1982).

Authors in this tradition have also distinguished the absolute adjustment

cost depending on the level of investment from the adjustment cost depend-

ing on investment relative to capital stock (Uzawa 1968, Asada and Semm-

ler 1995).

4

In stochastic growth models adjustment cost has been used in

Boldrin, Christiano and Fisher (2001) and adjustment cost associated with

the rate of change of investment can be found in Christiano, Eichenbaum

and Evans (2001). In this chapter we want to show that adjustment cost

in a standard RBC model can give rise to multiple steady state equilibria.

The existence of multiple steady state equilibria entails thresholds that sep-

arate diﬀerent domains of attraction for welfare and employment and allow

3

Although these are important variants of macrodynamic models with optimizing behavior,

as, however, recently has been shown, see Beyn, Pampel and Semmler (2001) and Gr¨ une

and Semmler (2004a), indeterminacy is likely to occur solely at a point in these models,

at a threshold, and not within a set as the indeterminacy literature often claims.

4

In Feichtinger et al. (2000) it is shown that relative adjustment cost, where investment as

well as cpital stock enters the adjustment costs, likely to generate multiplicity of steady

state equilibria.

119

for an open ended dynamics depending on the initial conditions and policy

inﬂuences impacting the initial conditions.

The remainder of this chapter is organized as follows. Section 2 presents

the model. Section 3 studies the adjustment cost function which gives rise to

multiple equilibria and section 4 demonstrates the existence of a threshold

5

that separates diﬀerent domains of attraction. Section 5 concludes the chap-

ter. The proof of the propositions in the text is provided in the appendix.

7.2 The Model

The model we present here is the standard stochastic growth model of RBC

type, as in King et al. (1988), augmented by adjustment cost. The state

equation for the capital stock takes the form:

K

t+1

= (1 −δ)K

t

+ I

t

−Q

t

(7.1)

where

I

t

= Y

t

−C

t

(7.2)

and

Y

t

= A

t

K

1−α

t

(N

t

X

t

)

α

(7.3)

Above, K

t

, Y

t

, I

t

, C

t

and Q

t

are the level of capital stock, output, investment,

consumption and adjustment cost, all in real terms. N

t

is per capita working

hours; A

t

is the temporary shock in technology; and X

t

is the permanent

(including both population and productivity growth) shock that follows a

growth rate γ. The model is non-stationary due to X

t

. To transform the

model into a stationary version we need to detrend the variables. For this

purpose, we divide both sides of equation (7.1) - (7.3) by X

t

:

k

t+1

=

1

1 + γ

[(1 −δ)k

t

+ i

t

−q

t

]

i

t

= y

t

−c

t

(7.4)

y

t

= A

t

k

1−α

t

(n

t

N/0.3)

α

(7.5)

Above, k

t

, c

t

, i

t

, y

t

and q

t

are the detrended variables for K

t

, C

t

, I

t

, Y

t

and

Q

t

.

6

; n

t

is deﬁned to be

0.3Nt

N

with N denoting the sample mean of N

t

. Note

that here n

t

is often regarded to be the normalized hours with its sample

5

In the literature those thresholds have been called Skiba-points (see Skiba, 1978).

6

In particular, k

t

≡

Kt

Xt

, c

t

≡

Ct

Xt

, i

t

≡

It

Xt

, y

t

≡

Yt

Xt

and q

t

≡

Qt

Xt

.

120

mean equal to 30 %. We shall assume that the detrended adjustment cost q

t

depends on detrended investment i

t

:

q

t

= q(i

t

)

The objective function takes the form

max E

0

∞

t=0

β

t

[log c

t

+θ log(1 −n

t

)]

To solve the model, we ﬁrst form the Lagrangian:

L =

∞

t=0

β

t

[log(c

t

) + θ log(1 −n

t

)] −

∞

t=0

E

t

_

β

t+1

λ

t+1

_

k

t+1

−

1

1 + γ

[(1 −δ)k

t

+i

t

−q(i

t

)]

__

Setting to zero the derivatives of L with respect to c

t

, n

t

, k

t

and λ

t

, we obtain

the following ﬁrst-order conditions:

1

c

t

−

β

1 +γ

E

t

λ

t+1

[1 −q

′

(i

t

)] = 0 (7.6)

−θ

1 −n

t

+

β

1 + γ

E

t

λ

t+1

αy

t

n

t

[1 −q

′

(i

t

)] = 0 (7.7)

β

1 + γ

E

t

λ

t+1

_

(1 −δ) +

(1 −α)y

t

k

t

[1 −q

′

(i

t

)]

_

= λ

t

(7.8)

k

t+1

=

1

1 + γ

[(1 −δ)k

t

+ i

t

−q(i

t

)] (7.9)

with i

t

and y

t

to be given by (7.4) and (7.5) respectively.

The following proposition concerns the steady states

Proposition 5 Assume A

t

has a steady state A. Equation (7.4) - (7.9),

when evaluated at their certainty equivalence form, determine the following

steady states:

[bφ(i) −1][i −q(i)] −aφ(i)

1−

1

α

−q(i) = 0 (7.10)

k =

1

γ +δ

[i −q(i)] (7.11)

121

n =

_

φ(i)

A

_

1

α

k

0.3

N

(7.12)

y = φ(i)k (7.13)

c = y −i (7.14)

λ =

(1 +γ)

βc

_

1 −q

′

(i)

¸ (7.15)

where

a =

α

θ

A

1

α

(N/0.3) (7.16)

b =

(1 +

α

θ

)

γ + δ

(7.17)

φ(i) =

m

1 −q

′

(i)

(7.18)

and

m =

(1 +γ) −(1 −δ)β

β(1 −α)

. (7.19)

Note that equation (7.10) determines the solution of i, depending on the

assumption that all other steady states can be uniquely solved via (7.11) -

(7.15). Also if q(i) is linear, equ. (7.18) indicates that φ(·) is constant, and

then from (7.10) i is uniquely determined. Therefore, if q(i) is linear, no

multiple steady state equilibria will occur.

7.3 The Existence of Multiple Steady States

Many non-linear forms of q(i) may lead to a multiplicity of equilibria. Here

we shall only consider that q(i) takes the logistic form:

q(i) =

q

0

exp(q

1

i)

exp(q

1

i) + q

2

−

q

0

1 + q

2

(7.20)

Figure (7.1) shows a typical shape of q(i) while Figure (7.2) shows the cor-

responding derivative q

′

(i) with the parameters given in Table 7.1:

Table 7.1: The Parameters in the Logistic Function

q

0

q

1

q

2

2500 0.0034 500

122

Figure 7.1: The Adjustment Cost Function

Figure 7.2: The Derivatives of the Adjustment Cost

Note that in equation (7.20) we posit a restriction such that q(0) = 0.

Another restriction, which is reﬂected in Figure (7.1), is that

123

q(i) < i (7.21)

indicating that the adjustment cost should never be larger than the invest-

ment itself. Both restrictions seem reasonable.

The two critical points, i

1

and i

2

, in Figure (7.1) and (7.2) need to be

discussed. These are the two points at which q

′

(i) = 1. Meanwhile between

i

1

and i

2

, q

′

(i) > 1. When q

′

(i) > 1, equation (7.18) indicates that φ(i) is

negative since from (7.19) m > 0. A negative φ(i) will lead to a complex

φ(i)

1−

1

α

in (7.10). We therefore obtain two feasible ranges for the existence

of steady states of i : one is (0, i

1

) and the other is (i

2

, +∞).

The following proposition concerns the existence of multiplicity of equi-

libria.

Proposition 6 Let f(i) ≡ [bφ(i) −1][i −q(i)] −aφ(i)

1−

1

α

−q(i), where q(i)

takes the form as in (7.20) subject to (7.21). Assume bm−1 > 0.

• There exists one and only one i, denoted i

1

in the range (0, i

1

) such

that f(i) = 0.

• In the range (i

2

, +∞), if there are some i

′

s at which f(i) < 0, then

there must exist two i’s, denoted as i

2

and i

3

such that f(i) = 0.

We shall ﬁrst remark that the assumption bm− 1 > 0 is plausible given

the standard parameters for b and m also f(i) = 0 is indeed the equation

(2.10). Therefore, this proposition indicates a condition under which multiple

steady states will occur. In particular, if there exist some i

′

s in (i

2

, +∞) at

which f(i) < 0, three equilibria will occur. A formal mathematical proof

of the existence of this condition is intractable. In Figure 7.3, we show the

curve of f(·) given the empirically plausible parameters as reported in Table

7.1 and other standard parameters as given in Table 7.2. The curves cut the

zero line three times, indicating three steady states of i.

124

Figure 7.3: Multiplicity of Equilibria: f(i) function

Table 7.2: The Standard Parameters of RBC Model

7

α γ β δ θ N A

0.5800 0.0045 0.9930 0.2080 2.0189 480.00 1.7619

We use a numerical method to compute the three steady states of i : i

1

, i

2

and i

3

. Given these steady states, the other steady states are computed by

(7.11) - (7.15). Table 7.3 uses essentially the same parameters as reported in

Table 5.1. The result of the computations of the three steady states are:

7

The N below is calculated on the assumption of 12 weeks per quarter and 40 working

hours per week. A is derived from A

t

= a

0

+ a

1

A

t−1

+ ε

t

, with estimated a

0

and a

1

are

given respectively by 0.0333 and 0.9811.

125

Table 7.3: The Multiple Steady States

Corresponding to i

1

Corresponding to i

2

Corresponding to i

3

i 564.53140 1175.7553 4010.4778

k 18667.347 11672.642 119070.11

n 0.25436594 0.30904713 0.33781724

c 3011.4307 2111.2273 5169.5634

y 3575.9621 3286.9826 9180.0412

λ 0.00083463435 0.0017500565 0.00019568017

V 1058.7118 986.07481 1101.6369

Note that above, V is the value of the objective function at the corre-

sponding steady states. Therefore it reﬂects one corresponding welfare level.

The steady state corresponding to i

2

deserves some discussion. The welfare

of i

1

is larger than i

2

, its corresponding steady states in capital, output, and

consumption are all greater than those corresponding to i

2

, yet its corre-

sponding steady state in labor eﬀort and thus employment is larger for i

2

.

This already indicates that i

2

may be inferior in terms of the welfare, at least

compared to i

1

. On the other hand i

1

and i

3

exhibit also diﬀerences in welfare

and employment.

7.4 The Solution

An analytical solution to the dynamics of the model with adjustment cost

is not feasable. We, therefore, have to rely on an approximate solution. For

this, we shall ﬁrst linearize the ﬁrst-order conditions around the three sets of

steady states as reported in Table 7.3. Then by applying an approximation

method as discussed in chapter 2, we obtain three sets of linear decision rules

for c

t

and n

t

corresponding to our three sets of steady states. For notational

convenience, we shall denote them as decision rule Set 1, Set 2 and Set 3

corresponding to i

1

, i

2

and i

3

. Assume that A

t

stays at its steady state A so

that we only consider the deterministic case. The ith set of decision rule can

then be written as

c

t

= G

i

c

k

t

+ g

i

c

(7.22)

n

t

= G

i

n

k

t

+ g

i

n

(7.23)

where i = 1, 2, 3. We therefore can simulate the solution paths by using the

above two equations together with (7.4), (7.5) and (7.9). The question then

126

arises as to which set of decision rule, as expressed by (7.22) and (7.23),

should be used. The likely conjecture is that this will depend on the initial

condition k

0

. For example, if k

0

is close to the k

1

, the steady state of k

corresponding to i

1

, we would expect that the decision rule 1 is appropriate.

This consideration further indicates that there must exist some thresholds

for k

0

at which intervals are divided regarding which set of decision rule

should be applied. To detect such thresholds, we shall compute the value of

the objective functions starting at diﬀerent k

0

for our three decision rules.

Speciﬁcally, we compute V , where

V ≡

∞

t=0

β

t

[log(c

t

) + θ log(1 −n

t

)]

We should choose the range of k

0

’s that covers the three steady states of

k’s as reported in Table 7.3. In this exercise, we choose the range [8000, 138000]

for k

0

. Figure 7.4 compares the welfare performance of our three sets of linear

decision rules.

Figure 7.4: The Welfare Performance of three Linear Decision Rules (solid,

dotted and dashed lines for decision rule Set 1, Set 2 and Set 3 respectively)

127

From Figure 7.4, we ﬁrst realize that the value of the objective function is

always lower for decision rule Set 2. This is likely to be caused by its inferior

welfare performance at the steady states for which we compute the decision

rule. However, there is an intersection of the two welfare curves corresponding

to the decision rules Set 1 and Set 3. This intersection, occuring around

k

0

= 36900, can be regarded as a threshold. If k

0

< 36900, the household

should choose decision rule Set 1 since it will allow the household to obtain a

higher welfare. On the other hand, if k

0

> 36900, the household may choose

decision rule Set 3, since this leads to a higher welfare.

7.5 Conclusion

This chapter shows that the introduction of adjustment cost of capital may

lead to non-uniqueness of steady state equilibria in an otherwise standard

RBC model. Multiple steady state equilibria, in turn, lead to thresholds

separating diﬀerent domains of attraction of capital stock, consumption, em-

ployment and welfare level. As our simulation shows thresholds are important

as separation points below or above which it is optimal to move to lower or

higher levels of capital stock, consumption, employment and welfare. Our

model thus can easily explain of how an economy become history dependent

and moves, after a shock, to a low or high level equilibrium in employment

and output. A variety of further economic models giving rise to multiple

equilibria and thresholds are presented in Gr¨ une and Semmler (2004a).

The above model stays as close as possible to the standard RBC model

except, for illustrative purpose, a speciﬁc form of adjustment cost of capital

was introduced. On the other hand, dynamic models giving rise to inde-

terminacy usually have to presume some weak externalities and increasing

returns and/or more general preferences. Kim (2004) discusses to what ex-

tent weak externalities in combination with more complex preferences will

produce indeterminacy. He in fact shows that if for the generalized RBC

model, as studied in chapter 4.5., there is a weak externality, ξ > 0, then

the model generates local indeterminacy. Overall, as shown in many recent

contributions, the issue of multiple equilibria and indeterminacy is an impor-

tant macroeconomic issue and should be pursued in further research in the

future.

128

7.6 Appendix: The Proof of Propositions 5

and 6

7.6.1 The Proof of Proposition 5

The certainty equivalence of equation (7.4) - (7.9) takes the form

1

c

−

β

1 + γ

λ

_

1 −q

′

(i)

¸

= 0 (7.24)

−θ

1 −n

+

β

1 + γ

λ

αy

n

_

1 −q

′

(i)

¸

= 0 (7.25)

β

1 + γ

_

(1 −δ) +

(1 −α)y

k

_

1 −q

′

(i)

¸

_

= 1 (7.26)

k =

1

1 +γ

_

(1 −δ)k + i −q(i)

¸

(7.27)

i = y −c (7.28)

y = Ak

1−α

(nN/0.3)

α

(7.29)

From (7.27),

k =

1

γ +δ

_

i −q(i)

¸

(7.30)

which is equation (7.11). From (7.26),

y

k

=

(1 +γ) −(1 −δ)β

β(1 −α)

_

1 −q

′

(i)

¸

= φ(i) (7.31)

which is equation (7.13) with φ(i) given by (refequ7.18) and (7.19). Further

from (7.29),

y

k

= Ak

−α

(nN/0.3)

α

(7.32)

Using (7.31) to express

y

k

, we derive from (7.32)

n =

_

φ(i)

A

_

1

α

k

0.3

N

(7.33)

which is (7.12). Next, using (7.28) to express i while using φ(i)k for y. We

derive from (7.30) (γ + δ)k = φ(i)k −c −q(i), which is equivalent to

c = (φ(i) −γ −δ)k −q(i)

129

= c

1

(φ(i), k, q(i)) (7.34)

Meanwhile from (7.24) and (7.25):

β

1 + γ

λ

_

1 −q

′

(i)

¸

=

1

c

=

θ

(1 −n)α

_

y

n

_

−1

The ﬁrst equation is equivalent to (7.15) while the second equation indicates

c =

(1 −n)α

θ

_

y

n

_

(7.35)

where from (7.29),

y

n

= A

_

k

n

_

1−α

(N/0.3)

α

= A

_

y

n

_

1−α

_

y

k

_

α−1

(N/0.3)

α

= A

1

α

φ(i)

1−

1

α

(N/0.3) (7.36)

Substitute (7.36) into (7.35) and then express n in terms of (7.33):

c =

(1 −n)α

θ

A

1

α

φ(i)

1−

1

α

(N/0.3)

=

α

θ

A

1

α

(N/0.3)φ(i)

1−

1

α

−

α

θ

A

1

α

(N/0.3)φ(i)

1−

1

α

_

φ(i)

A

_

1

α

k

0.3

N

= aφ(i)

1−

1

α

−

α

θ

φ(i)k

= c

2

(φ(i), k)

= c

2

(φ(i), k) (7.37)

with a given by (7.16).

Let c

1

(·) = c

2

(·). We thus obtain

(φ(i) −γ −δ)k −q(i) = aφ(i)

1−

1

α

−

α

θ

φ(i)k

which is equivalent to

_

(1 +

α

θ

)φ(i) −γ −δ

_

k = aφ(i)

1−

1

α

+ q(i)

Using (7.30) for k, we obtain

_

(1 +

α

θ

)φ(i) −γ −δ

_

1

γ + δ

(i −q(i)) = aφ(i)

1−

1

α

+ q(i) (7.38)

Equation (7.37) is equivalent to (7.10) with b given by (7.17).

130

7.6.2 The Proof of Proposition 6

Note that within our two ranges (0, i

1

) and (i

2

, +∞) φ(i) is positive and

hence f(i) is continuous and diﬀerentiable. In particular,

f

′

(i) = φ

′

(i)

_

b[i −q(i)] + a(

1

α

−1)φ(i)

−

1

α

_

+ (bm−1) (7.39)

where

φ

′

(i) =

mq

′′

(i)

[1 −q

′

(i)]

2

(7.40)

We shall ﬁrst realized that a, b and m are all positive as indicated by (7.16)

and (7.17) and (7.19), and therefore the term b[i − q(i)] + a(

1

α

− 1)φ(i)

−

1

α

is positive. Meanwhile in the range (0, i

1

), q

′′

(i) > 0 and hence f

′

(i) > 0.

However, in the range (i

2

, +∞), q

′′

(i) < 0 and hence f

′

(i) can either be

positive or negative due to the sign of (bm−1).

Let us ﬁrst consider the range (0, i

1

). Assume i →0. In this case, q(i) →0

and therefore f(i) → −aφ(0)

1−

1

α

< 0. Next assume i → i

1

, In this case,

q

′

(i) → 1 and therefore φ(i) → +∞. Since 1 −

1

α

is negative, this further

indicates φ(i)

1−

1

α

→ 0. Therefore f(i) → +∞. Since f

′

(i) > 0, by the

intermediate value theorem, there exists one and only one i such that f(i) =

0. We thus have proved the ﬁrst part of the proposition.

Next we turn to the range (i

2

, +∞). To verify the second part of the

proposition, we only need to prove that f(i) →+∞and f(i) < 0 when i →i

2

and f(i) →+∞and f(i) > 0 when i →+∞. Consider ﬁrst i →i

2

, Again in

this case, q

′

(i) → 1, φ(i) → +∞, and φ(i)

1−

1

α

→ 0. Therefore f(i) → +∞.

Meanwhile from (7.39), φ

′

(i) →−∞(since q

′′

(i) < 0) and therefore f

′

(i) < 0.

Consider now i → +∞, In this case, q

′

(i) → 0 and q(i) → q

m

where q

m

is the upper limit of q(i). This indicates that [i −q(i)] → +∞. Therefore

f(i) → +∞. Meanwhile, since q

′′

(i) → 0 and hence φ

′

(i) → 0. Therefore

f

′

(i) → (bm−1), which is positive. We thus have proved the second part of

the proposition.

Chapter 8

Business Cycles with

Nonclearing Labor Market

8.1 Introduction

As discussed in the previous chapters, especially in Chapter 5, the stan-

dard real business cycle (RBC) model, despite its rather simple structure,

can explain the volatilities of some macroeconomic variables such as output,

consumption and capital stock. However, to explain the actual variation in

employment the model generally predicts an excessive smoothness of labor

eﬀort in contrast to empirical data. This problem of excessive smoothness

in labor eﬀort is well-known in the RBC literature. A recent evaluation of

this failure of the RBC model is given in Schmidt-Grohe (2001). There the

RBC model is compared to indeterminacy models, as developed by Benhabib

and his co-authors. Whereas in RBC models the standard deviation of the

labor eﬀort is too low, in indeterminacy models it turns out to be excessively

high. Another problem in RBC literature related to this, is that the model

implies a excessively high correlation between consumption and employment

while empirical data only indicates a week correlation. This problem of ex-

cessive correlation has, to our knowledge, not suﬃciently been studied in the

literature. It has preliminarily been explored in Chapter 5 of this volume.

Lastly, the RBC model predicts a signiﬁcantly high positive correlation be-

tween technology and employment whereas empirical research demonstrates,

at least at business cycle frequency, a negative or almost zero correlation.

These are the major issues that we shall take up from now on. We

want to note that the labor market problems, the lack of variation in the

employment and the high correlation between consumption and employment

in the standard RBC model, may be related to the speciﬁcation of the labor

131

132

market, and therefore we could name it the labor market puzzle. In this

chapter we are mainly concerned with this puzzle. The technology puzzle,

that is, the excessively high correlation between technology and employment,

preliminarily discussed in Chapter 5, will be taken up in Chapter 9.

Although in the speciﬁcation of its model structure (see Chapter 4), the

real business cycle model speciﬁes both sides, the demand and supply, of a

market, the moments of the economy are however reﬂected by the variation

on one side of markets due to its general equilibrium nature for all markets

(including output, labor and capital markets). For the labor market, the

moments of labor eﬀort result from the decision rule of the representative

household to supply labor. The variations in labor and consumption both

reﬂect the moments of the two state variables, capital and technology. It is

therefore not surprising why employment is highly correlated with consump-

tion and why the variation of consumption is a smooth as labor eﬀort. This

further suggests that to resolve the labor market puzzle in a real business

cycle model, one has to make improvement upon labor market speciﬁcations.

One possible approach for such improvement is to introduce the Keynesian

feature into the model and to allow for wage stickiness and a nonclearing

labor market.

The research along the line of micro-founded Keynesian economics has

been historically developed by the two approaches: one is the disequilibrium

analysis, which had been popular before 1980’s and the other is the New

Keynesian analysis based on monopolistic competition. Attempts have now

been made recently that introduce the Keynesian features into a dynamic

optimization model. Rotemberg and Woodford (1995, 1999), King and Woll-

man (1999), Gali (1999) and Woodford (2003) present a variety of models

with monopolistic competition and sticky price. On the other hand, there

are models of eﬃciency wages where nonclearing labor market could occur.

1

We shall remark that in those studies with nonclearing labor market, an ex-

plicit labor demand function is introduced from the decision problem of the

ﬁrm side. However, the decision rule with regard to labor supply in these

models is often dropped because the labor supply no longer appears in the

utility function of the household. Consequently, the moments of labor eﬀort

become purely demand-determined.

2

In this chapter, we will present a stochastic dynamic optimization model

of RBC type but argumented by Keynesian features along the line of above

1

See Danthine and Donaldson (1990, 1995), Benassy (1995) and Uhlig and Xu (1996)

among others.

2

The labor supply in the these models is implicitly assumed to be given exogenously, and

normalized to 1. Hence nonclearing of the labor market occurs if the demand is not equal

to 1.

133

consideration. In particular, we shall allow for wage stickiness

3

and non-

clearing labor market. However, unlike other recent models that drop the

decision rule of labor supply, we view the decision rule of the labor eﬀort as

being derived from a dynamic optimization problem as a quite natural way to

determine desired labor supply.

4

With the determination of labor demand,

derived from the marginal product of labor and other factors,

5

the two ba-

sic forces in the labor market can be formalized. One of the advantages of

this formulation, as will become clear, is that a variety of employment rules

could be adopted to specify the realization of actual employment when a

nonclearing market emerges.

6

We will assess this model by employing U.S.

and German macroeconomic time series data.

Yet before we formally present the model and its calibration we want

to note that there is a similarity of our approach chosen here and the New

Keynesian analysis. New Keynesian literature presents models with imper-

fect competition and sluggish price and wage adjustments where labor eﬀort

is endogenized. Important work of this type can be found in Rotemberg

and Woodford (1995, 1999), King and Wollman (1999), Gali (1999), Erceg,

Henderson and Levin (2000) and Woodford (2003). However, the market

in those models are still assumed to be cleared since the producer supplies

the output according to what the market demands at the existing price. A

similar consideration is also assumed to hold for the labor market. Here the

wage rate is set optimally by a representative of the household according to

3

Already Keynes (1936) had not only observed a wide-spread phenomenon of downward

rigidity of wages but has also attributed strong stabilizing properties of wage stickiness.

4

0ne could perceive a change in secular forces concerning labor supply from the side of

households, for example, changes in preferences, demographic changes, productivity and

real wage, union bargaining, evolution of wealth, taxes and subsides which all aﬀect

labor supply. Some of those secular forces are often mentioned in the work by Phelps,

see Phelps (1997) and Phelps and Zoega (1998). Recently, concerning Europe, generous

unemployment compensation and related welfare state beneﬁts have been added to the

list of factors aﬀecting the supply of labor, intensity of job search and unemployment. For

an extensive reference to those factors, see Blanchard and Wolfers (2000) and Ljungqvist

and Sargent (1998, 2003).

5

On the demand side one could add beside the pure technology shocks and the real wage,

the role of aggregate demand, high interest rates (Phelps 1997, Phelps and Zoega 1998),

hiring and ﬁring cost, capital shortages and slow down of growth, for example, in Europe.

See Malinvaud (1994) for a more extensive list of those factors .

6

Another line of recent research on modeling unemployment in a dynamic optimization

framework can be found in the work by Merz (1999) who employs search and matching

theory to model the labor market, see also Ljungqvist and Sargent (1998, 2003). Yet,

unemployment resulting from search and matching problems can rather be viewed as

frictional unemployment (see Malinvaud (1994) for his classiﬁcation of unemployment).

As will become clear, this will be diﬀerent from the unemployment that we will discuss

in this chapter.

134

the expected market a demand curve for labor. Once the wage has been set,

it is assumed to be sticky for some time period and only a fraction of wages

are set optimally in each period. In those models there will be a gap again

between the optimal wage and existing wage, yet the labor market is still

cleared since the household is assumed to supply labor whatever the market

demand is at the given wage rate.

7

In this chapter, we shall present a dynamic model that allows for a non-

cleared labor market, which could be seen to be caused by staggered wage as

described by Taylor (1980), Calvo (1983) or other theories of sluggish wage

adjustment. The objective to construct a model such as ours is to approach

the two aforementioned labor market problems coherently within a single

model of dynamic optimization. Yet, we wish to argue that the New Keyne-

sian and approach are complementary rather than exclusive, and therefore

they can somewhat be consolidated as a more complete system for price and

quantity determination within the Keynesian tradition. For further details

of this consolidation, see Chapter 9. In the current chapter we are only con-

cerned with a nonclearing of the labor market as brought into the academic

discussion by the disequilibrium school. We will derive the nonclearing of

the labor market from optimizing behavior of economic agents but it will be

a multiple stage decision process that will generate the nonclearing of the

labor market.

8

The remainder of this chapter is organized as follows. Section 2 presents

the model structure. Section 3 estimates and calibrates our diﬀerent model

variants for the U.S. economy. Section 4 undertakes the same exercise for the

German economy. Section 5 concludes. Appendices I and II in this chapter

contain some technical derivation of the adaptive optimization procedure

whereas Appendix III undertakes a welfare comparison of the diﬀerent model

variants.

7

See, for example, Woodford (2003, ch. 3). There are also traditional Keynesian mod-

els that allow for disequilibria, see Benassy (1984) among others. Yet, the well-known

problem of these earlier disequilibrium models was that they disregard intertemporal op-

timizing behavior and never specify who sets the price. This has now been resolved by

the modern literature of monopolistic competition as can be found in Woodford (2003).

However, while resolving the price setting problem, the decision with regard to quantities

seems to be unresolved. The supplier may no longer behave optimally concerning their

supply decision, but simply supplies whatever the quantity the market demands for at

the current price.

8

For models with multiple steps of optimization in the context of learning models, see

Dawid and Day (2003), Sargent (1998) and Zhang and Semmler (2003).

135

8.2 An Economy with Nonclearing Labor Mar-

ket

We shall still follow the usual assumptions of identical households and iden-

tical ﬁrms. Therefore we are considering an economy that has two repre-

sentative agents: the representative household and the representative ﬁrm.

There are three markets in which the agents exchange their products, labor

and capital. The household owns all the factors of production and therefore

sells factor services to the ﬁrm. The revenue from selling factor services can

only be used to buy the goods produced by the ﬁrm either for consuming or

for accumulating capital. The representative ﬁrm owns nothing. It simply

hires capital and labor to produce output, sells the output and transfers the

proﬁt back to the household.

Unlike the typical RBC model, in which one could assume an once-for-all

market, we, however, in this model shall assume that the market to be re-

opened at the beginning of each period t. This is necessary for a model with

nonclearing markets in which adjustments should take place which leads us

to a multiple stage adaptive optimization behavior. Yet, let us ﬁrst describe

how prices and wages are set.

8.2.1 The Wage Determination

As usual we presume that both the household and the ﬁrm express their

desired demand and supply on the basis of given prices, including the output

price p

t

, the wage rate w

t

and the rental rate of the capital stock r

t

, we

shall ﬁrst discuss how the period t prices are determined at the beginning of

period t. Note that there are three commodities in our model. One of them

should serve as a numeraire, which we assume to be the output. Therefore,

the output price p

t

always equals 1. This indicates that the wage w

t

and

the rental rate of capital stock r

t

are all measured in terms of the physical

units of output.

9

As to the rental rate of capital r

t

, it is assumed to be

adjustable so as to clear the capital market. We can then ignore its setting.

Indeed, as will become clear, one can imagine any initial value of the rental

rate of capital when the ﬁrm and the household make the quantity decisions

and express their desired demand and supply. This leaves us to focus the

discussion only on the wage setting. Let us ﬁrst discuss how the wage rate

9

For our simple representative agent model without money, this simpliﬁcation does not

eﬀect our major result derived from our model. Meanwhile, it will allow us to save some

eﬀort to explain the nominal price determination, a focus in the recent New Keyensian

literature.

136

might be set.

Most recent literature, in discussing wage setting,

10

assumes that it is the

supplier of labor, the household or its representative, that sets the wage rate

whereas the ﬁrm is simply a wage taker. On the other hand, there are also

models that discuss how ﬁrms set the wage rate.

11

In actual bargaining it is

likely, as Taylor (1999) has pointed out, that wage setting is an interacting

process between ﬁrms and households. Despite this variety of wage setting

models, we, however, follow the recent approach. We may assume that the

wage rate is set by a representative of the household which acts as a mo-

nopolistic agent for the supply of labor eﬀort as Woodford (2003, ch. 3) has

suggested. Woodford (2003, p.221) introduces diﬀerent wage setting agents

and monopolistic competition since he assumes heterogenous households as

diﬀerent suppliers of diﬀerentiated types of labor. In appendix I, in close

relationship to Woodford (2003, ch.3), Erceg et al (2000) and Christiano et

al. (2001) we present a wage setting model, where wages are set optimally,

but a fraction of wages may be sticky. We neglect, however, diﬀerentiated

types of labor and refer only to aggregate wages.

We want to note, however, that recently many theories have been de-

veloped to explain wage and price stickiness. There is the so-called menu

cost for changing prices (though this seems more appropriate for the output

price). There is also a reputation cost for changing prices and wages.

12

In

addition, changing the price, or wage, needs information, computation and

communication, which may be costly.

13

All these eﬀorts cause costs which

may be summarized as adjustment costs of changing the price or wage. The

adjustment cost for changing the wage may provide some reason for the rep-

resentative of the household to stick to the wage rate even if it is known that

current wage may not be optimal. One may also derive this stickiness of

wages from wage contracts as in Taylor (1980) with the contract period to

be longer than one period.

Since workers, or their respective representative, enter usually into long

term employment contracts involving labor supply for several periods with a

variety of job security arrangements and termination options, a wage contract

may also be understood from an asset price perspective, namely as derivative

security based on a fundamental underlying asset such as the asset price of

the ﬁrm. In principle a wage contract could be treated as a debt contract with

10

See, for instance, Erceg, Henderson and Levin (2000), Christiano, Eichenbaum and

Evans (2001) and Woodford (2003) among others.

11

These are basically the eﬃciency wage models that are mentioned in the introduction.

12

This is emphasized by Rotemberg (1982)

13

See the discussion in Christiano, Eichenbaum and Evans (2001) and Zbaracki, Ritson,

Levy, Dutta and Bergen (2000).

137

similar long term commitment as exists for other liabilities of the ﬁrm.

14

As

in the case of the pricing of corporate liabilities the wage contract, the value of

the derivative security, would depend on some speciﬁcations in contractual

agreements. Yet, in general it can be assumed to be arranged for several

periods.

As noted above we do not have to posit that the wage rate, w

t

, to be

completely ﬁxed in contracts and never responds to the disequilibrium in

the labor market. One may imagine that the dynamics of the wage rate, for

example, follows the updating scheme as suggested in Calvo’s staggered price

model (1983) or in Taylor’s wage contract model (1980). In Calvo’s model,

for example, there is always a fraction of individual prices to be adjusted

in each period t.

15

This can be expressed in our model as the expiration

of some wage contracts, to be reviewed in each time period and therefore

new wage contracts will be signed in each t. The new signed wage contracts

should respond to the expected market conditions not only in period t but

also through t to t + j, where j can be regarded as the contract period.

16

Through such a pattern of wage dynamics, wages are only partially adjusted.

Explicit formulation of wage dynamics of a Calvo type of updating scheme,

particularly with diﬀerentiated types of labor, is studied in Erceg et al (2000),

Christiano et al. (2001) and Woodford (2003, ch. 3) and brieﬂy sketched, as

underlying our model, for an aggregate wage in appendix I of this chapter.

A more explicit treatment is not needed here. Indeed, as will become clear in

section 3, the empirical study of our model does not rely on how we formulate

the wage dynamics. All we need to presume is that, wage contracts are only

partially adjusted, giving rise to a sticky aggregate wage.

8.2.2 The Household’s Desired Transactions

The next step in our multiple stage decision process is to model the quantity

decisions of the households. When the price, including the wage, has been

set, the household is then going to express its desire of demand for goods

and supply of factors. We deﬁne the household’s desired demand and sup-

ply as those that can allow the household to obtain the maximum utility

on the condition that these demand and supply can be realized at the given

set of prices. We can express the household’s desired demand and supply as

a sequence of output demand and factor supply

_

c

d

t+i

, i

d

t+i

, n

s

t+i

, k

s

t+i+1

_

∞

i=0

,

14

For such a treatment of the wages as derivative security, see Uhlig (2003). For further

details of the pricing of such liabilities, see Gr¨ une and Semmler (2004c).

15

These are basically those prices that have not been adjusted for some periods and there

the adjustment costs (such as the reputation cost) may not be high.

16

This type of wage setting is used in Woodford (2003, ch. 4) and Erceg et al. (2000).

138

where i

t+i

is referred to investment. Note that here we have used the su-

perscripts d and s to refer to the agent’s desired demand and supply. The

decision problem for the household to derive its demand and supply can be

formulated as

max

{c

t+i

,n

t+i

}

∞

i=0

E

t

_

∞

i=0

β

i

U(c

d

t+i

, n

s

t+i

)

_

(8.1)

subject to

c

d

t+i

+ i

d

t+i

= r

t+i

k

s

t+i

+ w

t+i

n

s

t+i

+ π

t+i

(8.2)

k

s

t+i+1

= (1 −δ)k

s

t+i

+ i

d

t+i

(8.3)

Above π

t+i

is the expected dividend. Note that (8.2) can be regarded as a

budget constraint. The equality holds due to the assumption U

c

> 0. Next,

we shall consider how the representative household calculates π

t+i

.

Assuming that the household knows the production function f(·) while it

expects that all its optimal plans can be fulﬁlled at the given price sequence

{p

t+i

, w

t+i

, r

t+i

}

∞

i=0

, we thus obtain

π

t+i

= f(k

s

t+i

, n

s

t+i

, A

t+i

) −w

t+i

n

s

t+i

−r

t+i

k

s

t+i

(8.4)

Explaining π

t+i

in (8.2) in terms of (8.4) and then substituting from (8.3) to

eliminate i

d

t

, we obtain

k

s

t+i+1

= (1 −δ)k

s

t+i

+ f(k

s

t+i

, n

s

t+i

, A

t+i

) −c

d

t+i

(8.5)

For the given technology sequence {A

t+i

}

∞

i=0

, equations (8.1) and (8.5) form

a standard intertemporal decision problem. The solution to this problem can

be written as:

c

d

t+i

= G

c

(k

s

t+i

, A

t+i

) (8.6)

n

s

t+i

= G

n

(k

s

t+i

, A

t+i

) (8.7)

We shall remark that although the solution appears to be a sequence

_

c

d

t+i

, n

s

t+i

_

∞

i=0

only (c

d

t

, n

s

t

) along with (i

d

t

, k

s

t

), where i

d

t

= f(k

s

t

, n

s

t

, A

t

) −

c

d

t

and k

s

t

= k

t

, are actually carried into the market by the household for

exchange due to our assumption of re-opening of the market.

8.2.3 The Firm’s Desired Transactions

As in the case of the household, the ﬁrm’s desired demand for factors and

supply of goods are those that maximize the ﬁrm’s proﬁt under the condition

that all its intentions can be carried out at the given set of prices. The

139

optimization problem for the ﬁrm can thus be expressed as being to choose

the input demands and output supply (n

d

t

, k

d

t

, y

s

t

) that maximizes the current

proﬁt:

max y

s

t

−r

t

k

d

t

−w

t

n

d

t

subject to

y

s

t

= f(A

t

, k

d

t

, n

d

t

) (8.8)

For regular conditions on the production function f(·), the solution to the

above optimization problem should satisfy

r

t

= f

k

(k

d

t

, n

d

t

, A

t

) (8.9)

w

t

= f

n

(k

d

t

, n

d

t

, A

t

) (8.10)

where f

k

(·) and f

n

(·) are respectively the marginal products of capital and

labor. Next we shall consider the transactions in our three markets. Let us

ﬁrst consider the two factor markets.

8.2.4 Transaction in the Factor Market and Actual

Employment

We have assumed the rental rate of capital r

t

to be adjustable in each period

and thus the capital market is cleared. This indicates that

k

t

= k

s

t

= k

d

t

As concerning the labor market, there is no reason to believe that ﬁrm’s

demand for labor, as implicitly expressed in (8.10) should be equal to the

willingness of the household to supply labor as determined in (8.7) given

the way the wage determination is explained in section 8.2.1. Therefore,

we cannot regard the labor market to be cleared. An illustration of this

statement, though in a simpler version, is given in Appendix I.

17

Given a nonclearing labor market, we shall have to specify what rule

should apply regarding the realization of actual employment.

17

Strictly speaking, the so-called labor market clearing should be deﬁned as the condition

that the ﬁrm’s willingness to demand factors is equal to the household’s willingness to

supply factors. Such concept has somehow disappeared in the new Keynesian literature

in which the household supplies the labor eﬀort according to the market demand and

therefore it does not seem to face excess demand or supply. Yet, even in this case, the

household’s willingness to supply labor eﬀort is not necessarily equal to its actual supply

or the market demand. At some point the marginal disutility of work may be higher

than the pre-set wage. This indicates that even if there are no adjustment costs so that

the household can adjust the wage rate at every time period t, the disequilibrium in

the labor market may still exist. In Appendix I these points are illustrated in a static

version of the working of the labor market.

140

Disequilibrium Rule: When disequilibrium occurs in the labor

market either of the following two rules will be applied:

n

t

= min(n

d

t

, n

s

t

) (8.11)

n

t

= ωn

d

t

+ (1 −ω)n

s

t

(8.12)

where ω ∈ (0, 1).

Above, the ﬁrst is the famous short-side rule when nonclearing of the

market occurs. It has been widely used in the literature on disequilibrium

analysis (see, for instance, Benassy 1975, 1984, among others). The sec-

ond might be called the compromising rule. This rule indicates that when

nonclearing of the labor market occurs both ﬁrms and workers have to com-

promise. If there is excess supply, ﬁrms will employ more labor than what

they wish to employ.

18

On the other hand, when there is excess demand,

workers will have to oﬀer more eﬀort than they wish to oﬀer.

19

Such mutual

compromises may be due to institutional structures and moral standards of

the society.

20

Given the rather corporate relationship of labor and ﬁrms in

Germany, for example, this compromising rule might be considered a reason-

able approximation. Such a rule that seems to hold for many other countries

was already discussed early in the economic literature, see Meyers (1968) and

also Solow (1979).

We want to note that the unemployment we discuss here is certainly dif-

ferent from the frictional unemployment as often discussed in search and

matching models. In our representative agent model, the unemployment is

mainly due to adaptive optimization of the household given the institutional

18

This could also be realized by ﬁrms by demanding the same (or less) hours per worker but

employing more workers than being optimal. This case corresponds to what is discussed

in the literature as labor hoarding where ﬁrms hesitate to ﬁre workers during a recession

because it may be hard to ﬁnd new workers in the next upswing, see Burnside et al.

(1993). Note that in this case ﬁrms may be oﬀ their marginal product curve and thus

this might require wage subsidies for ﬁrms as has been suggested by Phelps (1997).

19

This could be achieved by employing the same number of workers but each worker sup-

plying more hours (varying shift length and overtime work); for a more formal treatment

of this point, see Burnside et al. (1993).

20

Note that if ﬁrms are oﬀ their supply schedule and workers oﬀ their demand schedule,

a proper study would have to compute the ﬁrms’ cost increase and proﬁt loss and the

workers’ welfare loss. If, however, the marginal cost for ﬁrms is rather ﬂat (as empirical

literature has argued, see Blanchard and Fischer, 1989) and the marginal disutility is

also rather ﬂat the overall loss may not be so high. The departure of the value function

– as measuring the welfare of the representative household from the standard case – is

studied in Gong and Semmler (2001). Results of this study are reported in Appendix

III of this chapter.

141

arrangements of the wage setting (see Chapter 8.5.). The cause for fric-

tional unemployment can arise from informational and institutional search

and matching frictions where welfare state and labor market institutions may

play a role.

21

Yet the frictions in the institutions of the matching process are

likely to explain only a certain fraction of observed unemployment.

22

8.2.5 Actual Employment and Transaction in the Prod-

uct Market

After the transactions in these two factor markets have been carried out, the

ﬁrm will engage in its production activity. The result is the output supply,

which, instead of (8.8), is now given by

y

s

t

= f(k

t

, n

t

, A

t

). (8.13)

Then the transaction needs to be carried out with respect to y

s

t

. It is im-

portant to note that when the labor market is not cleared, the previous

consumption plan as expressed by (8.6) becomes invalid due to the improper

budget constraint (8.2), which further bring the improper transition law of

capital (8.5), for deriving the plan. Therefore, the household will be required

to construct a new consumption plan, which should be derived from the

following optimization program:

max

(c

d

t

)

U(c

d

t

, n

t

) + E

t

_

∞

i=1

β

i

U(c

d

t+i

, n

s

t+i

)

_

(8.14)

subject to

k

s

t+1

= (1 −δ)k

t

+ f(k

t

, n

t

, A

t

) −c

d

t

(8.15)

k

s

t+i+1

= (1 −δ)k

s

t+i

+f(k

s

t+i

, n

s

t+i

, A

t+i

) −c

d

t+i

(8.16)

i = 1, 2, ...

Note that in this optimization program the only decision variable is about

c

d

t

and the data includes not only A

t

and k

t

but also n

t

, which is given by

21

For a recent position representing this view, see Ljungqvist and Sargent (1998, 2003). For

comments on this view, see Blanchard (2003), see also Walsh (2002) who employs search

and matching theory to derive the persistence of real eﬀects resulting from monetary

policy shocks.

22

Already Hicks (1963) has called this frictional unemployment. Recently, one important

form of a mismatch in the labor market seems to be the mismatch of skills, see Greiner,

Rubart and Semmler (2003).

142

either (8.11) or (8.12). We can write the solution in terms of the following

equation (see Appendix II of this chapter for the detail):

c

d

t

= G

c2

(k

t

, A

t

, n

t

) (8.17)

Given this adjusted consumption plan, the product market should be cleared

if the household demand f(k

t

, n

t

, A

t

) − c

d

t

for investment. Therefore, c

d

t

in

(8.17) should also be the realized consumption.

8.3 Estimation and Calibration for U. S. Econ-

omy

This section provides an empirical study, for the U. S. economy, of our model

as presented in the last section. However, the model in the last section is

only for illustrative purpose. It is not the model that can be tested with

empirical data, not only because we do not specify the forms of production

function, utility function and the stochastic process of A

t

, but also we do

not introduce the growth factor into the model. For an empirically testable

model, we here still employ the model as formulated by King, Plosser and

Rebelo (1988).

8.3.1 The Empirically Testable Model

Let K

t

denote for capital stock, N

t

for per capita working hours, Y

t

for output

and C

t

for consumption. Assume that the capital stock in the economy follow

the transition law:

K

t+1

= (1 −δ)K

t

+ A

t

K

1−α

t

(N

t

X

t

)

α

−C

t

, (8.18)

where δ is the depreciation rate; α is the share of labor in the production

function F(·) = A

t

K

1−α

t

(N

t

X

t

)

α

; A

t

is the temporary shock in technology

and X

t

the permanent shock that follows a growth rate γ.

23

The model is

nonstationary due to X

t

. To transform the model into a stationary setting,

we divide both sides of equation (8.18) by X

t

:

k

t+1

=

1

1 +γ

_

(1 −δ)k

t

+ A

t

k

1−α

t

(n

t

N/0.3)

α

−c

t

_

, (8.19)

where k

t

≡ K

t

/X

t

, c

t

≡ C

t

/X

t

and n

t

≡ 0.3N

t

/N with N to be the sample

mean of N

t

. Note that n

t

is often regarded to be the normalized hours. The

23

Note that X

t

includes both population and productivity growth.

143

sample mean of n

t

is equal to 30 %, which, as pointed out by Hansen (1985),

is the average percentage of hours attributed to work. Note that the above

formulation also indicates that the form of f(·) in the previous section may

follow

f(·) = A

t

k

1−α

t

(n

t

N/0.3)

α

(8.20)

while y

t

≡ Y

t

/X

t

with Y

t

to be the empirical output.

With regard to the household preference, we shall assume that the utility

function takes the form

U(c

t

, n

t

) = log c

t

+θ log(1 −n

t

) (8.21)

The temporary shock A

t

may follow an AR(1) process:

A

t+1

= a

0

+ a

1

A

t

+ ǫ

t+1

, (8.22)

where ǫ

t

is an independently and identically distributed (i.i.d.) innovation:

ǫ

t

∼ N(0, σ

2

ǫ

).

8.3.2 The Data Generating Process

For our empirical test, we consider three model variants: the standard RBC

model, as a benchmark for comparison, and the two labor market disequilib-

rium models with the disequilibrium rules as expressed in (8.11) and (8.12)

respectively. Speciﬁcally, we shall call the standard model the Model I; the

disequilibrium model with short side rule (8.11) the Model II; and the dise-

quilibrium model with the compromising rule (8.12) the Model III.

For the standard RBC model, the data generating process include (8.19),

(8.22) as well as

c

t

= G

11

A

t

+ G

12

k

t

+ g

1

(8.23)

n

t

= G

21

A

t

+ G

22

k

t

+ g

2

(8.24)

Note that here (8.23) and (8.24) are the linear approximations to (8.6) and

(8.7) when we ignore the superscripts s and d. The coeﬃcients G

ij

and

g

i

(i = 1, 2 and j = 1, 2) are the complicated functions of the model’s struc-

tural parameters, α, β, among others. They are computed as in Chapter 5

by the numerical algorithm using the linear-quadratic approximation method

presented in Chapter 1 and 2. Given these coeﬃcients and the parameters in

equation (8.22), including σ

ε

, we can simulate the model to generate stochas-

tically simulated data. These data can then be compared to the sample

moments of the observed economy.

144

Obviously, the standard model does not allow for nonclearing of the labor

market. The moments of the labor eﬀort are solely reﬂected by the decision

rule (8.24) which is quite similar in its structure to the other decision rule

given by (8.23), i.e., they are both determined by k

t

and A

t

. This structural

similarity are expected to produce two labor market puzzles as aforemen-

tioned:

• First, the volatility of the labor eﬀort can not be much diﬀerent from

the volatility of consumption, which generally appears to be smooth.

• Second, the moments of labor eﬀort and consumption are likely to be

strongly correlated.

To deﬁne the data generating process for our disequilibrium models, we

shall ﬁrst modify (8.24) as

n

s

t

= G

21

A

t

+G

22

k

t

+g

2

(8.25)

On the other hand, the equilibrium in the product market indicates that

c

d

t

in (8.17) should be equal to c

t

. Therefore, this equation can also be

approximated as

c

t

= G

31

A

t

+ G

32

k

t

+G

33

n

t

+ g

3

(8.26)

In the appendix, we provide the details how to compute the coeﬃcients G

3j

,

j = 1, 2, 3, and g

3

.

Next we consider the labor demand derived from the production function

F(·) = A

t

K

1−α

t

(N

t

X

t

)

α

. Let X

t

= Z

t

L

t

, with Z

t

to be the permanent shock

resulting purely from productivity growth, and L

t

from population growth.

We shall assume that L

t

has a constant growth rate µ and hence Z

t

follows

the growth rate (γ − µ). The production function can be written as Y

t

=

A

t

Z

α

t

K

1−α

t

H

α

t

, where H

t

equals N

t

L

t

, which can be regarded as total labor

hours. Taking the partial derivative with respect to H

t

and recognizing that

the marginal product of labor is equal to the real wage, we thus obtain

w

t

= αA

t

Z

t

k

1−α

t

(n

d

t

¯

N/0.3)

α−1

This equation is equivalent to (8.10). It generates the demand for labor as

n

d

t

= (αA

t

Z

t

/w

t

)

1/(1−α)

k

t

(0.3/N). (8.27)

Note that the per capita hours demanded n

d

t

should be stationary if the real

wage w

t

and productivity Z

t

grow at the same rate. This seems to be roughly

consistent with the U.S. experience that we shall now calibrate.

145

Thus, for the nonclearing market model with short side rule, Model II,

the data generating process includes (8.19), (8.22), (8.11), (8.25), (8.26) and

(8.27) with w

t

given by the observed wage rate. We thereby do not attempt

to give the actually observed sequence of wages a further theoretical founda-

tion.

24

For our purpose it suﬃces to take the empirically observed series of

wages. For Model III, we use (8.12) instead of (8.11).

8.3.3 The Data and the Parameters

Before we calibrate the models we shall ﬁrst specify the parameters. There

are altogether 10 parameters in our three variants: a

0

, a

1

, σ

ε

, γ, µ, α, β, δ, θ,

and ω. We ﬁrst specify α and γ respectively at 0.58 and 0.0045, which are

standard. This allows us to compute the data series of the temporary shock

A

t

. With this data series, we estimate the parameters a

0

, a

1

and σ

ε

. The next

three parameters β, δ and θ are estimated with the GMM method by match-

ing the moments of the model generated by (8.19), (8.23) and (8.25). The

estimation is conducted by a global optimization algorithm called simulated

annealing. These parameters have already been estimated in Chapter 5, and

therefore we shall employ them here. For the new parameters, we specify µ

at 0.001, which is close to the average growth rate of the labor force in U.S.;

the parameter ω in Model III is set to 0.1203. It is estimated by minimizing

the residual sum of square between actual employment and the model gener-

ated employment. The estimation is executed by a conventional algorithm,

the grid search. Table 8.1 illustrates these parameters:

Table 8.1: Parameters Used for Calibration

a

0

0.0333 σ

ε

0.0185 µ 0.0010 β 0.9930 θ 2.0189

a

1

0.9811 γ 0.0045 α 0.5800 δ 0.2080 ω 0.1203

The data set used in this section is taken from Christiano (1987). The

wage series are obtained from Citibase. It is re-scaled to match the model’s

implication.

25

24

One however might apply here the eﬃciency wage theory or other theories such as the

staggered contract theory that justify the wage stickiness.

25

Note that this re-scaling is necessary because we do not exactly know the initial condition

of Z

t

, which we set equal to 1. We re-scaled the wage series in such a way that the ﬁrst

observation of employment is equal to the demand for labor as speciﬁed by equation

(8.27).

146

8.3.4 Calibration

Table 8.2 reports our calibration from 5000 stochastic simulations. The re-

sults in this table are conﬁrmed by Figure 8.1, where a one time simulation

with the observed innovation A

t

are presented.

26

All time series are detrended

by the HP-ﬁlter.

26

Due to the discussion on Solow residual in Chapter 5, we shall now understand that A

t

computed as the Solow residual may reﬂect also the demand shock in addition to the

technology shock.

147

Table 8.2: Calibration of the Model Variants: U.S. Economy (numbers in

parentheses are the corresponding standard errors)

Consumption Capital Employment Output

Standard Deviations

Sample Economy 0.0081 0.0035 0.0165 0.0156

Model I Economy 0.0091 0.0036 0.0051 0.0158

(0.0012) (0.0007) (0.0006) (0.0021)

Model II Economy 0.0137 0.0095 0.0545 0.0393

(0.0098) (0.0031) (0.0198) (0.0115)

Model III Economy 0.0066 0.0052 0.0135 0.0197

(0.0010) (0.0010) (0.0020) (0.0026)

Correlation Coeﬃcients

Sample Economy

Consumption 1.0000

Capital Stock 0.1741 1.0000

Employment 0.4604 0.2861 1.0000

Output 0.7550 0.0954 0.7263 1.0000

Model I Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.2043 1.0000

(0.1190) (0.0000)

Employment 0.9288 −0.1593 1.0000

(0.0203) (0.0906) (0.0000)

Output 0.9866 0.0566 0.9754 1.0000

(0.00332) (0.1044) (0.0076) (0.0000)

Model II Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.4944 1.0000

(0.1662) (0.0000)

Employment 0.4874 -0.0577 1.0000

(0.1362) (0.0825) (0.0000)

Output 0.6869 0.0336 0.9392 1.0000

(0.1069) (0.0717) (0.0407) (0.0000)

Model III Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.4525 1.0000

(0.1175) (0.0000)

Employment 0.6807 -0.0863 1.0000

(0.0824) (0.1045) (0.0000)

Output 0.8924 0.0576 0.9056 1.0000

(0.0268) (0.0971) (0.0327) (0.0000)

148

First we want to remark that the structural parameters that we used

here for calibration are estimated by matching the Model I Economy to the

Sample Economy. The result, reﬂected in Table 8.2, is therefore somewhat

biased in favor of the Model I Economy. It is not surprising that for most

variables the moments generated from the Model I Economy are closer to the

moments of the Sample Economy. Yet even in this case, there is an excessive

smoothness of the labor eﬀort and the employment series of the data cannot

be matched. For our time period, 1955.1 to 1983.4, we ﬁnd 0.32 in the

Model I Economy as the ratio of the standard deviation of labor eﬀort to the

standard deviation of output. This ratio is roughly 1 in the Sample Economy.

The problem is, however, resolved in our Model II and Model III Economies

representing sticky wages and labor market nonclearing. There the ratio is

1.38 and 0.69 for the Model II and Model III Economies respectively.

Further evidence on the better ﬁt of the nonclearing labor market models

– as concerns the volatility of the macroeconomic variables – is also demon-

strated in the Figure 8.1 where the horizontal ﬁgures show, from top to

bottom, actual (solid line) and simulated data (dotted line) for consump-

tion, capital stock, employment and output, the three columns representing

the ﬁgures for Model I, Model II and Model III Economies. As observable,

in particular the Model III Economy ﬁts, along most dimensions, best the

actual data. As can be seen from the separate ﬁgures, the volatility of em-

ployment has been greatly increased for both Model II and Model III. In

particular, the volatility in the Model III Economy is close to the one in the

Sample Economy, although too high a volatility is observable in the Model

II Economy which may reﬂect our assumption that there are no search and

matching frictions (which, of course, in the actual economy will not hold).

We therefore may conclude that Model III is the best in matching the labor

market volatility.

We want to note that the failure of the standard model to match the

volatility of employment of the data is also described in the recent paper by

Schmidt-Grohe (2001). For her employed time series data 1948.3 - 1997.4,

Schmidt-Grohe (2001) ﬁnds that the ratio of the standard deviation of em-

ployment to the standard deviation of output is roughly 0.95, close to our

Sample Economy. Yet for the standard RBC model, the ratio is found to be

0.49, which is too low compared to the empirical data. For the indetermi-

nacy model, originating in the work by Benhabib and co-authors, she ﬁnds

the ratio to be 1.45, which seems too high. As noted above, a similarly high

ratio of standard deviations can also be observed in our Model II Economy

where the short side rule leads to excessive ﬂuctuations of the labor eﬀort.

Next, let us look at the cross-correlations of the macroeconomic variables.

In the Sample Economy, there are two signiﬁcant correlations we can observe:

149

the correlation between consumption and output, roughly 0.75, and between

employment and output, about 0.72. These two strong correlations can also

be found in all of our simulated economies. However, in our Model I Economy

and this only holds for the Model I Economy (the standard RBC model) in

addition to these two correlations, consumption and employment are, with

0.93, also strongly correlated. Yet, empirically, this correlation is weak, about

0.46.

The latter result of the standard model is not surprising given that move-

ments of employment as well as consumption reﬂect the movements in the

state variables capital stock and the temporary shock. They, therefore,

should be somewhat correlated. We remark here that such an excessive

correlation has, to our knowledge, not explicitly been discussed in the RBC

literature, including the recent study by Schmidt-Grohe (2001). Discussions

have often focused on the correlation with output.

150

Figure 8.1: Simulated Economy versus Sample Economy: U.S. Case (solid

line for sample economy, dotted line for simulated economy)

A success of our nonclearing labor market models, see the Model II and

III Economies, is that employment is no longer signiﬁcantly correlated with

consumption. This is because we have made a distinction between the de-

mand and supply of labor, whereas only the latter, labor supply, reﬂects the

moments of capital and technology as consumption does. Since the realized

employment is not necessarily the same as the labor supply, the correlation

with consumption is therefore weakened.

151

8.4 Estimation and Calibration for the Ger-

man Economy

Above we have employed a model with nonclearing labor market for the U.

S. economy. We have seen that one of the major reasons that the standard

model can not appropriately replicate the variation in employment is its lack

of introducing the demand for labor. Next, we pursue a similar study of

German economy. For this purpose we shall ﬁrst summarize some stylized

facts on the German economy compared to the U.S. economy.

8.4.1 The Data

Our subsequent study of the German economy employs the time series data

from 1960.1 to 1992.1. We thus have included a short period after the uni-

ﬁcation of Germany (1990 - 1991). We use again quarterly data. The time

series data on GDP, consumption, investment and capital stock are OECD

data, see OECD (1998a), the data on total labor force is also from the OECD

(1998b). The time series data on total working hours is taken from Statis-

tisches Bundesamt (1998). The time series on the hourly real wage index is

from OECD (1998a).

8.4.2 The Stylized Facts

Next, we want to compare some stylized facts. Figures 8.2 and 8.3 com-

pare 6 key variables relevant for the models for both the German and U.S.

economies. In particular, the data in Figure 8.3 are detrended by the HP-

ﬁlter. The standard deviations of the detrended series are summarized in

Table 8.3.

152

Figure 8.2: Comparison of Macroeconomic Variables U. S. versus Germany

153

Figure 8.3: Comparison of Macroeconomic Variables: U. S. versus Germany

(data series are detrended by the HP-ﬁlter)

154

Table 8.3: The Standard Deviations (U.S. versus Germany)

Germany

(detrended)

U.S.

(detrended)

consumption 0.0146 0.0084

capital stock 0.0203 0.0036

employment 0.0100 0.0166

output 0.0258 0.0164

temporary shock 0.0230 0.0115

eﬃciency wage 0.0129 0.0273

Several remarks are at place here. First, employment and the eﬃciency

wage are among the variables with the highest volatility in the U. S. econ-

omy. However, in the German economy they are the smoothest variables.

Second, the employment (measured in terms of per capita hours) is declining

over time in Germany (see Figure 8.2 for the non-detrended series), while in

the U.S. economy, the series is approximately stationary. Third, in the U. S.

economy, the capital stock and temporary shock to technology are both rel-

atively smooth. In contrast, they are both more volatile in Germany. These

results might be due to our ﬁrst remark regarding the diﬀerence in employ-

ment volatility. The volatility of output must be absorbed by some factors

in the production function. If employment is smooth, the other two factors

have to be volatile.

Should we expect that such diﬀerences will lead to diﬀerent calibration

of our model variants? This will be explored next.

8.4.3 The Parameters

For the German economy, our investigation showed that an AR(1) process

does not match well the observed process of A

t

. Instead, we shall use an

AR(2) process:

A

t+1

= a

0

+ a

1

A

t

+a

2

A

t−1

+ε

t+1

The parameters used for calibration are given in Table 8.4. All of these

parameters are estimated in the same way as those for the U.S. economy.

155

Table 8.4: Parameters used for Calibration (German Economy)

a

0

0.0044 γ 0.0083 δ 0.0538

a

1

1.8880 µ 0.0019 θ 2.1507

a

2

-0.8920 α 0.6600 ω 0

σ

ε

0.0071 β 0.9876

It is important to note that the estimated ω in this case is on the boundary

0, indicating the weight of the demand is zero in the compromising rule (8.12).

In other words, the Model III Economy is almost identical to the Model I

Economy. This seems to provide us with the conjecture that the Model I

Economy, the standard model, will be the best in matching German labor

market.

8.4.4 Calibration

As for the U.S. economy we provide in Table 8.5 for the German economy

the calibration result from 5000 time stochastic simulations. In Figure 8.4 we

again compare the one-time simulation with the observed A

t

for our model

variants. Again all time series here are detrended by the HP-ﬁlter.

27

27

Note that we do not include the Model III Economy for calibration. Due to the zero

value of the weighting parameter ω, the Model III Economy is equivalent to the Model

I Economy.

156

Table 8.5: Calibration of the Model Variants: German Economy (number in

parentheses are the corresponding standard errors)

Consumption Capital Employment Output

Standard Deviations

Sample Economy 0.0146 0.0203 0.0100 0.0258

Model I Economy 0.0292 0.0241 0.0107 0.0397

(0.0106) (0.0066) (0.0023) (0.0112)

Model II Economy 0.1276 0.0425 0.0865 0.4648

(0.1533) (0.0238) (0.1519) (0.9002)

Correlation Coeﬃcients

Sample Economy

Consumption 1.0000

Capital Stock 0.4360 1.0000

Employment 0.0039 -0.3002 1.0000

Output 0.9692 0.5423 0.0202 1.0000

Model I Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.7208 1.0000

(0.0920) (0.0000)

Employment 0.5138 −0.1842 1.0000

(0.1640) (0.1309) (0.0000)

Output 0.9473 0.4855 0.7496 1.0000

(0.0200) (0.1099) (0.1028) (0.0000)

Model II Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.6907 1.0000

(0.1461) (0.0000)

Employment 0.7147 0.3486 1.0000

(0.2319) (0.4561) (0.0000)

Output 0.8935 0.5420 0.9130 1.0000

(0.1047) (0.2362) (0.1312) (0.00000)

157

Figure 8.4: Simulated Economy versus Sample Economy: German Case (solid

line for sample economy, dotted line for simulated economy)

In contrast to U.S. economy we ﬁnd some major diﬀerences. First, there is

a diﬀerence concerning the variation of employment. The standard problem

of excessive smoothness with respect to employment in the benchmark model

no longer holds for the German economy. This is likely to be due to the fact

that employment itself is smooth in the German economy (see Table 8.3 and

Figure 8.3). We shall also note that the simulated labor supply in Germany

is smoother than in the U. S. (see Figure 8.5). In most labor market studies

the German labor market is often considered less ﬂexible than the U. S.

labor market. In particular, there are stronger inﬂuences of labor unions

and various legal restrictions on ﬁrms’ hiring and ﬁring decisions.

28

Such

inﬂuences and legal restriction will give rise to the smoother employment

series in contrast to the U. S. Such inﬂuences and legal restriction, or what

Solow (1979) has termed the moral factor in the labor market, may also be

viewed as a readiness to compromise as our Model III suggests. Those factors

28

See, for example, Nickell (1997) and Nickell (2003), and see already Meyers (1964).

158

will indeed give rise to a smooth employment series.

Further, if we look at the labor demand and supply in Figure 8.5, the

supply of labor is mostly the short side in the Germany economy whereas

in U.S. economy demand is dominating in most periods. Note that here we

must distinguish the supply that is actually provided in the labor market and

the “supply” that is speciﬁed by the decision rule in the standard model. It

might reasonably be argued that due to the intertemporal optimization sub-

ject to the budget constraints the supply speciﬁed by the decision rule may

only approximate the decisions from those households for which unemploy-

ment is not expected to pose a problem on their budgets. Such households

are more likely to be currently employed and protected by labor unions and

legal restrictions. In other words, currently employed labor decides, through

the optimal decision rule, about labor supply and not those who are currently

unemployed. Such a shortcoming of single representative intertemporal de-

cision model could presumably be overcome by a intertemporal model with

heterogenous households.

29

Figure 8.5: Comparison of demand and supply in the labor market (solid line

for actual, dashed line for demand and dotted line for supply)

29

See, for example, Uhlig and Xu (1996).

159

The second diﬀerence concerns the trend in employment growth and un-

employment of the U.S. and Germany. So far we only have shown that our

model of nonclearing labor market seems to match better than the standard

RBC model the variation in employment. This in particular seems to be

true for the U.S. economy. We did not attempt to explain the trend of the

unemployment rate neither for the U.S. nor for Germany. We want to note

that the time series data (U. S. 1955.1 - 1983.1, Germany 1960.1 - 1992.1)

are from a period where the U.S. had higher – but falling – unemployment

rates, whereas Germany had still lower but rising unemployment rates. Yet,

since the end of the 1980s the level of the unemployment rate in Germany

has considerably moved up, partly due to the uniﬁcation of Germany after

1989.

8.5 Diﬀerences in Labor Market Institutions

In Chapter 8.2 we have introduced rules that might be thought to be op-

erative when there is a nonclearing labor market. In this respect, as our

calibration in section 3 has shown, the most promising route to model, and

to match, stylized facts of the labor market, through a microbased labor mar-

ket behavior, is the compromising model. One hereby may pay attention to

some institutional characteristics of the labor market presumed in our model.

The ﬁrst is the way how the agency representing the household sets the

wage rate. If the household sets the wage rate, as if it were monopolistic com-

petitor, then at this wage rate the household’s willingness to supply labor is

likely to be less than the market demand for labor unless the household suf-

ﬁciently under-estimates the market demand when it conducts its optimiza-

tion for wage setting. Such a way of wage setting may imply unemployment

and it is likely to be the institutional structure that gives the representative

household (or the representative of the household, such as unions), the power

to bargain with the ﬁrm in wage setting.

30

Yet, there could be, of course,

other reasons why wages do not move to a labor market clearing level – such

as eﬃciency wage, insider – outsider relationship, or wages determined by

standards of fairness as Solow (1979) has noted and so on.

On the other hand, there can be labor market institutions, for example

corporatist structures, also measured by our ω, which aﬀect the actual em-

ployment. Our ω expresses how much weight is given to the desired labor

supply or desired labor demand. A small ω means that the agency, repre-

30

This is similar to Woodford’s (2003, ch. 3) idea of a deviation of the eﬃcient and natural

level of output where the eﬃcient level is achieved only in a competitive economy with

no frictions.

160

senting the household, has a high weight in determining the outcome of the

employment compromise. A high ω means that the ﬁrm’s side is stronger in

employment negotiations. As our empirical estimations in Gong, Ernst and

Semmler (2004) have shown the former case, a low ω, is very characteristic

of Germany, France and Italy whereas a larger ω is found for U.S. and the

U.K.

31

Given the rather corporatist relationship of labor and the ﬁrm in some Eu-

ropean countries, with some considerable labor market regulations through

legislature and union bargaining (rules of employment protection, hiring and

ﬁring restrictions, extension of employment even if there is a shortfall of sales

etc.)

32

, our ω may thus measure diﬀerences concerning labor market insti-

tutions between the U.S. and European countries. This has already been

stated in the 1960s by Meyers. He states: ”One of the diﬀerences between

the United States and Europe lies in our attitude toward layoﬀs... When

business falls oﬀ, he [the typical American employer] soon begins to think of

reduction in work force... In many other industrial countries, speciﬁc laws,

collective agreements, or vigorous public opinion protect the workers against

layoﬀs except under the most critical circumstances. Despite falling demand,

the employer counts on retraining his permanent employees. He is obliged

to ﬁnd work for them to do... These arrangements are certainly eﬀective in

holding down unemployment”. (Meyers, 1964:)

Thus, we wish to argue that the major international diﬀerence causing

employment variation does arise less from real wage stickiness (due to the

presence of unions and the extend and duration of contractual agreements

between labor and the ﬁrm)

33

but rather it seems to be the degree to which

compromising rules exist and which side dominates the compromising rule. A

lower ω, deﬁning, for example, the compromising rule in Euro-area countries,

can show up as diﬀerence in the variation of macroeconomic variables. This

is demonstrated in Chapter 8.4 for the German economy.

We there could observe that ﬁrst, employment and the eﬃciency wage

(deﬁned as real wage divided productions) are among the variables with the

31

In the paper by Gong, Ernst and Semmler (2004) it is also shown that the ω is strongly

negatively correlated with labor market institutions.

32

This could also be realized by ﬁrms by demanding the same (or less) hours per worker

but employing more workers than being optimal. The case would then correspond to

what is discussed in the literature as labor hoarding where ﬁrms hesitate to ﬁre workers

during a recession because it may be hard to ﬁnd new workers in the next upswing, see

Burnside et al. (1993). Note that in this case ﬁrms may be oﬀ their marginal product

curve and thus this might require wage subsidies for ﬁrms as has been suggested by

Phelps (1997).

33

In fact real wage rigidities in the U.S. are almost the same as in European countries, see

Flaschel, Gong and Semmler (2001).

161

highest volatility in the U. S. economy. However, in the German economy

they are the smoothest variables. Second, in the U. S. economy, the capital

stock and temporary shock to technology are both relatively smooth. In

contrast, they are both more volatile in Germany. These results are likely to

be due to our ﬁrst remark regarding the diﬀerence in employment volatility.

The volatility of output must be absorbed by some factors in the production

function. If employment is smooth, the other two factors have to be volatile.

Indeed, recent Phillips curve studies do not seem to reveal much diﬀerence

in real wage stickiness between Germany and the U.S., although the German

labor market is often considered less ﬂexible.

34

Yet, there are diﬀerences in

another sense. In Germany, there are stronger inﬂuences of labor unions and

various legal restrictions on ﬁrms’ hiring and ﬁring decisions shorter work

week even for the same pay etc.

35

Such inﬂuences and legal restriction will

give rise to the smoother employment series in contrast to the U.S.. Such

inﬂuences and legal restriction, or what Solow (1979) has termed the moral

factor in the labor market, may also be viewed as a readiness to compromise

as our Model III suggests. Those factors will indeed give rise to a lower ω

and a smoother employment series.

36

So far we only have shown that our model of nonclearing labor market

seems to match better the variation in employment than the standard RBC

model. Yet, we did not attempt to explain the secular trend of the unem-

ployment rate neither for the U.S. nor for Germany. We want to express

a conjecture of how our model can be used to study the trend shift in em-

ployment. We want to note that the time series data for the table 8.3 (U.S.

1955.1-1983.1, Germany 1960.1-1992.1) are from a period where the U.S. had

higher – but falling – unemployment rates, whereas Germany had still lower

but rising unemployment rates. Yet, since the end of the 1980s the level of

the unemployment rate in Germany has considerably moved up, partly, of

course due to the uniﬁcation of Germany after 1989.

One recent attempt to better ﬁt the RBC model’s predictions with labor

34

See Flaschel, Gong and Semmler (2001).

35

See,for example, Nickell (1997) and Nickell et al. (2003), and see already Meyers (1964).

36

It might reasonably be argued that, due to intertemporal optimization subject to the

budget constraints, the supply speciﬁed by the decision rule may only approximate the

decisions of those households for which unemployment is not expected to pose a problem

on their budgets. Such households are more likely to be currently employed represented

by labor unions and covered by legal restrictions. In other words, currently employed

labor decides, through the optimal decision rule, about labor supply and not those

who are currently unemployed. Such a feature could presumably be better studied by

an intertemporal model with heterogenous households, see, for example, Uhlig and Xu

(1996).

162

market data has employed search and matching theory.

37

Informational or

institutional search frictions may then explain the equilibrium unemployment

rate and its rise. Yet, those models usually observe that there has been a

shift in matching functions due to evolution of unemployment rates such as,

for example, experienced in Europe since the 1980s, and that the model itself

fails to explain such a shift.

38

In contrast to the literature on institutional frictions in the search and

matching process we think that the essential impact on the trend in the

rate of unemployment seems to stem from both changes of preferences of

households as well as a changing trend in the technology shock.

39

Concerning

the latter, as shown in Chapters 5 and 9, the Solow residual, as it used in RBC

models as the technology shock, greatly depends on endogenous variables

(such as capacity utilization). Thus exogenous technology shocks constitute

only a small fraction of the Solow residual. We thus might conclude that

cyclical ﬂuctuations in output and employment are not likely to suﬃciently

be explained by productivity shocks alone. Gali (1999) and Francis and

Ramey (2001, 2003) have argued that other shocks, for example demand

shocks, are important as well.

Yet, in the long run, the change in the trend of the unemployment rate

is likely to be related to the long-run trend in the true technology shock.

Empirical evidence on the role of lagging implementation and diﬀusion of new

technology for low employment growth in Germany can be found in Heckman

(2003) and Greiner, Semmler and Gong (2004). In the context of our model

this would have the eﬀect that labor demand, given by equation (8.27) may

fall short of labor supply given by equation (8.24). This is likely to occur in

the long-run if the productivity Z

t

in equation (8.27) starts tending to grow

at a lower rate which many researchers recently have maintained to have

happened in Germany, and other European countries, since the 1980s.

40

Yet,

as recent research has stressed, for example, the work by Phelps, see Phelps

(1997) and Phelps and Zoega (1998), there have also been secular changes on

the supply side of labor due to changes in preferences of households.

41

Some

of those factors aﬀecting the households’ supply of labor have been discussed

37

See Merz (1999) and Ljungqvist and Sargent (1998, 2003).

38

For an evaluation of the search and matching theory as well as the role of shocks to

explain the evolution of unemployment in Europe, see Blanchard and Wolfers (2000)

and Blanchard (2003).

39

See Campbell (1994) for a modelling of a trend in technology shocks.

40

Of course, the trend in the wage rate is also important in the equation for labor demand

(in equation 25). For an account of the technology trend, see Flaschel, Gong and Semmler

(2001), and for an additional account of the wage rate, see Heckman (2003).

41

Phelps and his co-authors have pointed out that an important change in the households’

preferences in Europe is that households now rely more an assets instead of labor income.

163

above.

8.6 Conclusions

Market clearing is a prominent feature in the standard RBC model which

commonly presumes wage and price ﬂexibility. In this chapter, we have

introduced an adaptive optimization behavior and a multiple stage decision

process that, given wage stickiness, results in a nonclearing labor market in

an otherwise standard stochastic dynamic model. Nonclearing labor market

is then a result of diﬀerent employment rules derived on the basis of a multiple

stage decision process. Calibrations have shown that such model variants will

produce a higher volatility in employment, and thus ﬁt the data signiﬁcantly

better than the standard model.

42

As concerning international aspects of our study we presume that diﬀerent

labor market institutions result in diﬀerent weights deﬁning the compromis-

ing rule. The results for Euro-area economies, for example, for Germany in

contrast to the U.S., are consistent with what has been found in many other

empirical studies with regard to the institutions of the labor market.

Finally, with respect to the trend of lower employment growth in some

European countries as compared to the U.S. since the 1980s, our model

suggests that one has to study more carefully the secular forces aﬀecting the

supply and the demand of labor as modeled in our multiple stage decision

process of section 2. In particular, on the demand side for labor, the slow

down of technology seems to have been a major factor for the low employment

growth in Germany and other countries in Europe.

43

On the other hand

there has also been changes in the preferences of households. Our study has

provided a framework that allows to also follow up such issues.

44

42

Appendix III computes the welfare loss of our diﬀerent model variants of nonclearing

labor market. There we ﬁnd that similarly to Sargent and Ljungqvist (1998), that the

welfare losses are very small.

43

See Blanchard and Wolfers (2000), Greiner, Semmler and Gong (2004) and Heckman

(2003)

44

For further discussion, see also Chapter 9.

164

8.7 Appendix I: Wage Setting

Suppose now that at the beginning of t the household (of course with certain

probability denoted as 1 − ξ) decides to set up a new wage rate w

∗

1

given

the data (A

t

, k

t

), and the sequence of expectations on {A

t+i

}

∞

i=1

where A

t

and k

t

are referred to as the technology and capital stock respectively. If

the household knows the production f(A

t

, k

t

, n

t

, where n

t

the labor eﬀort so

that it may also know the ﬁrm’s demand for labor, the decision problem of

the household with regard to wage setting may be expressed as follows:

max

w

∗

t

,{c

t+i

}

∞

i=0

E

t

_

∞

i=0

(ξβ)

i

U(c

t+i

, n(w

∗

t

, k

t+i

, A

t+i

))

_

(8.28)

subject to

k

t+i+1

= (1 −δ)k

t+i

+ f(A

t+i

, k

t+i

, n(w

∗

t

, k

t+i

, A

t+i

)) −c

t+i

(8.29)

Above ξ

i

is the probability that the new wage rate w

∗

t

will still be eﬀective

in period t + i. Obviously, this probability will be reduced when i become

larger. U(·) is the household’s utility function, which depends on consump-

tion c

t+i

and the labor eﬀort n(w

∗

t

, k

t+i

, A

t+i

). Note that here n(w

∗

t

, k

t+i

, A

t+i

)

is the function of ﬁrm’s demand for labor, which is derived from the condition

of marginal product equal to the wage rate:

w

∗

t

= f

n

(A

t+i

, k

t+i

, n

t+i

)

We shall remark that although the decision is mainly about the choice

of w

∗

t

, the sequence of {c

t+i

}

∞

i=0

should also be considered for the dynamic

optimization. Of course there is no guarantee that the household will actu-

ally implement this sequence {c

t+i

}

∞

i=0

. However, as argued by recent New

Keynesian literature, there is only a certain probability (due to the adjust-

ment cost in changing the wage) that the household will set a new wage rate

in period t. Therefore, the observed wage dynamics w

t

may follow Calvo’s

updating scheme:

w

t

= (1 −ξ)w

∗

t

+ξw

t−1

Such a wage indicates that there exists a gap between optimum wage w

∗

t

and the observed wage w

t

.

It should be noted that in recent New Keynesian literature where the

wage is set in a similar way as we have discussed here, the concept of non-

clearing labor market somehow disappeared. In this literature, the household

165

is assumed to supply the labor eﬀort according to the market demand at the

existing wage rate and therefore does not seem to face the problem of excess

demand or supply. Instead, what New Keynesian economists are concerned

with is the gap between the optimum price and actual price, whose existence

is caused by the adjustment cost in changing prices. In correspondence to

the gap between optimum and actual price, there also exists a gap between

optimum output and actual output.

D

0

MR MR’

MC

D

0

D’

n

0

n* n’ n

s

w

0

w*

n

w

Figure 8.6: A Static Version of the Working of the Labor Market

Some clariﬁcations may be obtained by referring to a static version of our

view on the working of the labor market. In ﬁgure 8.6, the supplier (or the

household, in the labor market case) ﬁrst (say, at the beginning of period

0) sets its price optimally according to the expected demand curve D

0

. Let

us denote this price as w

0

. Consider now the situation that the supplier’s

expectation on demand is not fulﬁlled. Instead of n

0

, the market demand

at w

0

is n

′

. In this case, the household may reasonably believe that the

demand curve should be D

′

and therefore the optimum price should be w

∗

while the optimum supply should be n

∗

. Yet, due to the adjustment cost

in changing prices, the supplier may stick to w

0

. This produces the gaps

between optimum price w

∗

and actual price w

0

and between optimum supply

n

∗

and actual supply n

′

.

However, the existence of price and output gaps does not exclude the

166

existence of a disequilibrium or nonclearing market. New Keynesian litera-

ture presumes that at the existing wage rate, the household supplies labor

eﬀort whatever the market demand for labor is. Note that in ﬁgure 8.6 the

household’s willingness to supply labor is n

s

. In this context the marginal

cost curve, MC, can be interpreted as marginal disutility of labor which has

also an upward slope since we use the standard log utility function as in the

RBC literature. This then means that the household’s supply of labor will be

restricted by a wage rate below, or equal, to the marginal disutility of work.

If we deﬁne the labor market demand and supply in a standard way, that

is, at the given wage rate there is a ﬁrm’s willingness to demand labor and

the household’s willingness to supply labor, and a nonclearing labor market

can be very general phenomena. This indicates that even if there are no

adjustment costs so that the household can adjust the wage rate in every t

(so that there is no price and quantity gaps as we have mentioned earlier),

the disequilibrium in the labor market may still exist.

Appendix II: Adaptive Optimization and Con-

sumption Decision

For the problem (8.14) - (8.16), we deﬁne the Lagrangian:

L = E

t

__

log c

d

t

+ θ log(1 −n

t

)

¸

+

λ

t

_

k

s

t+1

−

1

1 + γ

_

(1 −δ)k

s

t

+f(k

s

t

, n

t

, A

t

) −c

d

t

¸

__

+

E

t

_

∞

i=1

β

i

_

log(c

d

t+i

) + θ log(1 −n

s

t+i

)

¸

+

β

i

λ

t+i

_

k

s

t+1+i

−

1

1 + γ

_

(1 −δ)k

s

t+i

+ f(k

s

t+i

, n

s

t+i

, A

t+i

) −c

d

t+i

¸

__

Since the decision is only about c

d

t

, we thus take the partial derivatives of

L with respect to c

d

t

, k

s

t+1

and λ

t

. This gives us the following ﬁrst-order

167

condition:

1

c

d

t

−

λ

t

1 + γ

= 0, (8.30)

β

1 + γ

E

t

_

λ

t+1

_

(1 −δ) + (1 −α)A

t+1

_

k

s

t+1

_

−α

_

n

s

t+1

¯

N/0.3

_

α

__

= λ

t

,

(8.31)

k

s

t+1

=

1

1 +γ

_

(1 −δ)k

s

t

+ A

t

(k

s

t

)

1−α

_

n

t

¯

N/0.3

_

α

−c

d

t

¸

. (8.32)

Recall that in deriving the decision rules as expressed in (8.23) and (8.24)

we have postulated

λ

t+1

= Hk

s

t+1

+ QA

t+1

+ h, (8.33)

n

s

t+1

= G

21

k

s

t+1

+ G

22

A

t+1

+g

2,

(8.34)

where H, Q, h, G

21

, G

22

and g

2

have all been resolved previously in the house-

hold optimization program. We therefore obtain from (8.33) and (8.34)

E

t

λ

t+1

= Hk

s

t+1

+ Q(a

0

+ a

1

A

t

) + h, (8.35)

E

t

n

s

t+1

= G

2

k

s

t+1

+ D

2

(a

0

+ a

1

A

t

) + g

2

. (8.36)

Our next step is to linearize (8.30) - (8.32) around the steady states. Suppose

they can be written as

F

c1

c

t

+ F

c2

λ

t

+f

c

= 0, (8.37)

F

k1

E

t

λ

t+1

+ F

k2

E

t

A

t+1

+F

k3

k

s

t+1

+ F

k4

E

t

n

s

t+1

+ f

k

= λ

t

, (8.38)

k

s

t+1

= Ak

t

+ WA

t

+ C

1

c

d

t

+C

2

n

t

+b. (8.39)

Expressing E

t

λ

t+1

, E

t

n

s

t+1

and E

t

A

t+1

in terms of (8.35), (8.36) and a

0

+a

1

A

t

respectively, we obtain from (8.38)

κ

1

k

s

t+1

+ κ

2

A

t

+ κ

0

= λ

t

, (8.40)

where, in particular,

κ

0

= F

k1

(Qa

0

+ h) + F

k2

a

0

+ F

k4

(G

22

a

0

+ g

2

) + f

k

, (8.41)

κ

1

= F

k1

H + F

k3

+F

k4

G

21

, (8.42)

κ

2

= F

k1

Qa

1

+ F

k2

a

1

+ F

k4

G

22

a

1

. (8.43)

Using (8.37) to express λ

t

in (8.40), we further obtain

κ

1

k

s

t+1

+κ

2

A

t

+ κ

0

= −

F

c1

F

c2

c

d

t

−

f

c

F

c2

, (8.44)

168

which is equivalent to

k

s

t+1

= −

κ

2

κ

1

A

t

−

F

c1

F

c2

κ

1

c

d

t

−

κ

0

κ

1

−

f

c

F

c2

κ

1

. (8.45)

Comparing the right side of (8.39) and (8.45) will allow us to solve c

d

t

as

c

d

t

= −

_

F

c1

F

c2

κ

1

+C

1

_

−1

_

Ak

t

+

_

κ

2

κ

1

+ W

_

A

t

+ C

2

n

t

+

_

b +

κ

0

κ

1

+

f

c

F

c2

κ

1

__

.

Appendix III: Welfare Comparison of the Model

Variants

In this appendix we want to undertake a welfare comparison of our diﬀerent

model variants. We follow here Ljungqvist and Sargent (1998) and compute

the welfare implication of the diﬀerent model variants. Yet, whereas they

concentrate on the steady state, we compute the welfare also outside the

steady state. We here restrict our welfare analysis to the U. S. model variants.

It is suﬃcient to consider only the equilibrium (benchmark) model, and the

two models with nonclearing labor market. They are given by Simulated

Economy I, II, and III. A likely conjecture is that the benchmark model

should always be superior to the other two variants because the decisions on

labor supply - which are optimal for the representative agent - are realized

in all periods.

However, we believe that this may not generically be the case. The point

here is that the model speciﬁcation in variants II and III, is somewhat diﬀer-

ent from the the benchmark model due to the distinction between expected

and actual moments with respect to our state variable, the capital stock.

In the models of nonclearing market the representative agent may not ra-

tionally expect those moments of the capital stock. The expected moments

are represented by equation (8.5) while the actual moments are expressed by

equation (8.5). They are not necessary equal unless the labor eﬀorts of those

two equations are equal. Also, in addition to A

t

, there is another external

variable w

t

, entering into the models, which will aﬀect the labor employed

(via demand for labor) and hence eventually the welfare performance. The

welfare result due to these changes in the speciﬁcation may therefore deviate

from what one would expect.

Our exercise here is to compute the values of the objective function for all

our three models, given the sequence of our two decision variables, consump-

tion and employment. Note that for our models variants with nonclearing

169

labor market, we use realized employment, rather than the decisions on labor

supply, to compute the utility functional. More speciﬁcally, we calculate V ,

where

V ≡

∞

t=0

β

t

U(c

t

, n

t

)

where U(c

t

, n

t

) is given by log(c

t

) + θ log(1 −n

t

). This exercise here is con-

ducted for diﬀerent initial conditions of k

t

denoted by k

0

. We choose the

diﬀerent k

0

based on the grid search around the steady state of k

t

. Obvi-

ously, the value of V for any given k

0

will also depend on the external variable

A

t

and w

t

(though in the benchmark model, only A

t

appears). We consider

two diﬀerent ways to treat these external variables. One is to set both ex-

ternal variables at their steady state levels for all t. The other is to employ

their observed series entering into the computation. Figure 8.7 provides the

welfare comparison of the two versions.

(a) Welfare Comparison with External Variable set at their Steady State

(Solid Line for Model II; Dashed Line for Model III)

(b) Welfare Comparison with External Variable set at their Observed Series

(Solid Line for Model II; Dashed Line for Model III)

Figure 8.7: Welfare Comparison of Model II and III

170

In Figure 8.7(a), 8.7(b), the percentage deviations of V from the corre-

sponding values of benchmark model is plotted for both Model II and Model

III given for various k

0

around the steady states. The various k

0

’s are ex-

pressed in terms of the percentages deviation from the steady state of k

t

.

It is not surprising to ﬁnd that in most cases the benchmark model is

the best in its welfare performance, since most of the values are negative.

However, it is important to note that the deviations from the benchmark

model are very small. Similar results have been obtained by Ljungqvist and

Sargent (1998), they, however, compare only the steady states. Meanwhile,

not always is the benchmark model the best one. When k

0

is suﬃciently high,

close to or higher than the steady state of k

t

, the deviations become 0 for the

Model II. Furthermore, in the case of using observed external variables, the

Model III will be superior in its welfare performance when k

0

is larger than

its steady state, see lower part of the ﬁgure.

Chapter 9

Monopolistic Competition,

Nonclearing Markets and

Technology Shocks

In the last chapter we have found that if we introduce some non-Walrasian

features into an intertemporal decision model with the household’s wage

setting, sluggish wage and price adjustments and adaptive optimization the

labor market may not be cleared. This model then naturally generates higher

volatility of employment and a low correlation between employment and

consumption. Next we relate our approach of nonclearing labor market to

the theory of monopolistic competition in the product market as developed

in New Keynesian economics.

In many respects, the speciﬁcations in this chapter are the same as for

the model of the last chapter. We shall still follow the assumptions with

respect to ownership, adaptive optimization and nonclearing labor markets.

The assumption of re-opening of the market shall also be adopted here. This

is necessary for a model with nonclearing markets where adjustments should

take place in real time.

9.1 The Model

As mentioned in chapter 8, price and wage stickiness is an important feature

in New Keynesian literature. As concerning wage stickiness Keynes (1936)

has attributed strong stabilizing eﬀects to wage stickiness. Recent litera-

ture uses monopolistic competition theory to give a foundation to nominal

stickiness.

Since both household and ﬁrm make their quantity decisions on the basis

171

172

of the given prices, including the output price p

t

, the wage rate w

t

and the

rental rate of capital stock r

t

, we shall ﬁrst discuss how in our model the

period t prices are determined at the beginning of period t.

Here again, as in the model of the last chapter there are three commodi-

ties. One of them should serve as a numeraire, which we assume to be the

output. Therefore, the output price p

t

always equals 1. This indicates that

the wage w

t

and the rental rate of capital stock r

t

are all measured in terms

of the physical units of output.

1

As to the rental rate of capital r

t

, it is as-

sumed to be adjustable and to clear the capital market. We can then ignore

its setting. Here, we shall follow all the speciﬁcations on price and wage

setting as in presented the chapter 8.2.1.

9.1.1 The Household’s Desired Transactions

When the prices, including wages, have been set the household is going to

express its desired demand and supply. We deﬁne the household’s willingness

as those demand and supply that can allow the household to obtain the

maximum utility on the condition that these demand and supply can be

realized at the given set of prices. We can express this as a sequence of

output demand and factor supply

_

c

d

t+i

, i

d

t+i

, n

s

t+i

, k

s

t+i+1

_

∞

i=0

, where i

t+i

is

referred to investment. Note that here we have used the superscripts d and

s to refer to the agent’s desired demand and supply. The decision problem

for the household to derive its desired demand and supply is very similar as

in the last chapter and can be formulated as

max

{c

d

t+i

,n

s

t+i

}

∞

i=0

E

t

_

∞

i=0

β

i

U(c

d

t+i

, n

s

t+i

)

_

(9.1)

subject to

k

s

t+i+1

= (1 −δ)k

s

t+i

+ f(k

s

t+i

, n

s

t+i

, A

t+i

) −c

d

t+i

(9.2)

All the notations have been deﬁned in the last chapter. For the given

technology sequence {A

t+i

}

∞

i=0

, the solution of optimization problem can be

written as:

c

d

t+i

= G

c

(k

s

t+i

, A

t+i

) (9.3)

n

s

t+i

= G

n

(k

s

t+i

, A

t+i

) (9.4)

1

For our simple representative agent model without money, this simpliﬁcation does not

eﬀect our major result derived from our model. Meanwhile, it will allow us to save

the eﬀort to work on the nominal price determination, a main focus in the recent new

Keyensian literature.

173

We shall remark that although the solution appears to be a sequence

_

c

d

t+i

, n

s

t+i

_

∞

i=0

only (c

d

t

, n

s

t

) along with (i

d

t

, k

s

t

), where i

d

t

= f(k

s

t

, n

s

t

, A

t

) −c

d

t

and k

s

t

= k

t

, are

actually carried by the household into the market for exchange due to our

assumption of re-opening market.

9.1.2 The Quantity Decisions of the Firm

The problem of our representative ﬁrm in period t is to choose the current

input demand and output supply (n

d

t

, k

d

t

, y

s

t

) to maximizes the current proﬁt.

However in this chapter, we no longer assume that the product market is

in perfect competition. Instead, we shall assume that our representative

ﬁrm behaves as a monopolistic competitor, and therefore it should face a

perceived demand curve for its product, see the discussion above. Thus given

the output price, which shall always be 1 (since it serves as a numeraire), the

ﬁrm has a perceived constraint on the market demand for its product. We

shall denote this perceived demand as ´ y

t

.

On the other hand, given the prices of output, labor and capital stock

(1, w

t

, r

t

), the ﬁrm should also have its own desired supply y

∗

t

. This desired

supply is the amount that allows the ﬁrm to obtain a maximum proﬁt on

the assumption that all its output can be sold. Obviously, if the expected

demand ´ y

t

is less than the ﬁrm’s desired supply y

∗

t

, the ﬁrm will choose ´ y

t

.

Otherwise, it will simply follow the short side rule to choose y

∗

t

as in the

general New Keynesian model.

Thus, for our representative ﬁrm, the optimization problem can be ex-

pressed as

max min(´ y

t

, y

∗

t

) −r

t

k

d

t

−w

t

n

d

t

(9.5)

subject to

min(´ y

t

, y

∗

t

) = f(A

t

, k

t

, n

t

) (9.6)

For the regular condition on the production function, the solutions should

satisfy

k

d

t

= f

k

(r

t

, w

t

, A

t

, ´ y

t

) (9.7)

n

d

t

= f

n

(r

t

, w

t

, A

t

, ´ y

t

) (9.8)

where r

t

and w

t

are respectively the prices (in real term) of capital and labor.

2

We are now considering the transactions in our three markets. Let us

ﬁrst consider the two factor markets.

2

The detail will be provided in the appendix of this chapter.

174

9.1.3 Transaction in the Factor Market

Since the rental rate of capital stock r

t

is adjusted to clear the capital market

when the market is re-opened in period t, we have

k

t

= k

s

t

= k

d

t

(9.9)

Due to the monopolistic wage setting and the sluggish wage adjustment,

there is no reason to believe that the labor market will be cleared, see the

discussion in the last chapter. Therefore, we shall again deﬁne a realization

rule with regard to actual employment. As we have discussed in the last

chapter, the most frequent rule that has been used is the short side rule,

that is,

n

t

= min(n

d

t

, n

s

t

)

Thus, when a disequilibrium occurs, only the short side of demand and sup-

ply will be realized. Another important rule that we have discussed in the

last chapter is the compromising rule. The latter rule means that when

disequilibrium occurs in the labor market both ﬁrms and workers have to

compromise. In particular, we again formulate this rule as

n

t

= ωn

d

t

+ (1 −ω)n

s

t

(9.10)

where ω ∈ (0, 1). Our study in the last chapter indicates that the short

side rule seems to be empirically less satisfying than the compromising rule.

Therefore, in this chapter we shall only consider the compromising rule.

9.1.4 The Transaction in the Product Market

After the transactions in those two factor markets have been carried out, the

ﬁrm will engage in its production activity. The result is the output supply,

which is now given by

y

s

t

= f(k

t

, n

t

, A

t

) (9.11)

One remark should be added here. Equation (9.11) indicates that the

ﬁrm’s actual produced output is not necessarily constrained by equation

(9.6), and therefore one may argue that the output determination does follow

eventually the Keynesian way, that is, the output is constrained by demand.

However, the Keynesian way of output determination is still reﬂected in the

ﬁrm’s demand for inputs, capital and labor (see equation (9.7) and (9.8)). On

the other hand, if the produced output is still constrained by (9.6), one may

175

encouter the diﬃculty either in terms of feasibility when y

s

t

in (9.11) is less

than min(´ y

t

, y

∗

t

) or in terms of ineﬃciency when y

s

t

is larger than min(´ y

t

, y

∗

t

).

3

Given that the output is determined by (9.11), the transaction then needs

to be carried out with respect to y

s

t

. It is important to note here that when

disequilibrium occurs in the labor market the previous consumption plan as

expressed by (9.3) becomes invalid due to the improper rule of capital accu-

mulation (9.2) for deriving the plan. Therefore, the household will construct

a new plan as expressed below:

max

(c

d

t

)

E

t

_

∞

i=0

β

i

U(c

d

t+i

, n

s

t+i

)

_

(9.12)

s.t. k

s

t+1

=

1

1 + γ

_

(1 −δ)k

s

t

+ f(k

t

, n

t

, A

t

) −c

d

t

¸

(9.13)

k

s

t+i+1

=

1

1 +γ

_

(1 −δ)k

s

t+i

+ f(k

s

t+i

, n

s

t+i

, A

t+i

) −c

d

t+i

¸

, (9.14)

i = 1, 2, ....

Above, k

t

equals k

s

t

as expressed by (9.9) and n

t

is given by (9.10) with n

s

t

and

n

d

t

are implied by (9.4) and (9.8) respectively. As we have demonstrated in

the last chapter, the solution to this further step in the optimization problem

can be written in terms of the following equation:

c

d

t

= G

c2

(k

t

, A

t

, n

t

) (9.15)

Given this consumption plan, the product market should be cleared if the

household demands the amount f(k

t

, n

t

, A

t

) −c

d

t

for investment. Therefore,

c

d

t

in (9.15) should also be the realized consumption.

9.2 Estimation and Calibration for U.S. Econ-

omy

9.2.1 The Empirically Testable Model

This section provides an empirical study of our theoretical model above pre-

sented which again, in order to make it empirically more realistic, has to

include economic growth.

3

Note that here when y

s

t

< min(´ y

t

, y

∗

t

) there will be no suﬃcient inputs to produce

min(´ y

t

, y

∗

t

). On the other hand, when y

s

t

> min(´ y

t

, y

∗

t

), not all inputs will be used

in production, and therefore resources are somewhat wasted.

176

Let K

t

denote for capital stock, N

t

for per capita working hours, Y

t

for

output and C

t

for consumption. Assume the capital stock in the economy

follows the transition law:

K

t+1

= (1 −δ)K

t

+ A

t

K

1−α

t

(N

t

X

t

)

α

−C

t

, (9.16)

where δ is the depreciation rate; α is the share of labor in the production

function F(·) = A

t

K

1−α

t

(N

t

X

t

)

α

; A

t

is the temporary shock in technology

and X

t

the permanent shock that follows a growth rate γ. Dividing both

sides of equation (9.16) by X

t

, we obtain

k

t+1

=

1

1 +γ

_

(1 −δ)k

t

+ A

t

k

1−α

t

(n

t

N/0.3)

α

−c

t

_

, (9.17)

where k

t

≡ K

t

/X

t

, c

t

≡ C

t

/X

t

and n

t

≡ 0.3N

t

/N with N to be the sample

mean of N

t

. Note that the above formulation also indicates that the form of

f(·) in the last section may take the form

f(·) = A

t

k

1−α

t

(n

t

N/0.3)

α

(9.18)

With regard to the household preference, we shall assume that the utility

function takes the form

U(c

t

, n

t

) log c

t

+ θ log(1 −n

t

) (9.19)

The temporary shock A

t

may follow an AR(1) process:

A

t+1

= a

0

+a

1

A

t

+ ǫ

t

, (9.20)

where ǫ

t

is an independently and identically distributed (i.i.d.) innovation:

ǫ

t

∼ N(0, σ

2

ǫ

).

Finally, we shall assume that the output expectation ´ y

t

be simply equal

to y

t−1

, that is,

´ y

t

= y

t−1

(9.21)

where y

t

= Y

t

/X

t

, so that the expectation is fully adaptive to the actual

output in the last period.

4

4

Of course, one can also consider other forms of expectation. One possibility is to assume

the expectation to be rational so that it is equal to the steady state of y

t

. Indeed, we

also have done the same empirical study, yet the result is less satisfying.

177

9.2.2 The Data Generating Process

For our empirical assessment, we consider two model variants: the standard

model, as a benchmark for comparison, and our model with monopolistic

competition and nonclearing labor market. Speciﬁcally, we shall call the

benchmark model the Model I and the model with monopolistic competition

the Model IV (in distinction from the Model II and Model III in Chapter 8).

For the benchmark dynamic optimization model, the Model I, the data

generating process include (9.17), (9.20) as well as

c

t

= G

11

A

t

+ G

12

k

t

+ g

1

(9.22)

n

t

= G

21

A

t

+ G

22

k

t

+ g

2

(9.23)

Note that here (9.22) and (9.23) are the linear approximations to (9.3) and

(9.4). The coeﬃcients G

ij

and g

i

(i = 1, 2 and j = 1, 2) are the compli-

cated functions of the model’s structural parameters, α, β, δ,among others.

They are computed by the numerical algorithm using the linear-quadratic

approximation method.

5

To deﬁne the data generating process for our model with monopolistic

competition and nonclearing labor market, the Model IV, we shall ﬁrst mod-

ify (9.23) as

n

s

t

= G

21

A

t

+G

22

k

t

+g

2

(9.24)

On the other hand, the equilibrium in product market indicates that c

d

t

in

(9.15) should be equal to c

t

, and therefore this equation can also be approx-

imated by

c

t

= G

31

A

t

+ G

32

k

t

+G

33

n

t

+ g

3

(9.25)

The computation of coeﬃcients g

3

and G

3j

, j = 1, 2, 3, are the same as in

Chapter 8.

Next we consider the demand for labor n

d

t

derived from the ﬁrm’s opti-

mization problem (9.5) - (9.8), which shall now be augmented by the growth

factor for our empirical test. The following proposition concerns the deriva-

tion of n

d

t

.

Proposition: When the capital market is cleared, the ﬁrm’s demand for

labor can be expressed as

n

d

t

=

_

¸

_

¸

_

_

0.3

¯

N

_

_

ˆ yt

At

_

1/α

_

1

kt

_

(1−α)/α

if ˆ y

t

< y

∗

t

0.3

¯

N

_

αAtZt

wt

_ 1

1−α

k

t

if ˆ y

t

≥ y

∗

t

(9.26)

5

The algorithm used here is again from Chapter 1 of this volume.

178

where

y

∗

t

= (αA

t

Z

t

/w

t

)

α/(1−α)

k

t

A

t

(9.27)

Note that the ﬁrst n

d

t

in the above equation responds to the condition that

the expected demand is less than the ﬁrm’s desired supply, and the second

to the condition otherwise. The proof of this proposition is provided in the

appendix to this chapter. Thus, for Model IV, the data generating process

includes (9.17), (9.20), (9.10), (9.24), (9.25), (9.26) and (9.21) with w

t

given

by the observed wage rate. Here again we do not need to attempt to give

the actually observed sequence of wages a further theoretical foundation. For

our purpose it suﬃces to take the empirically observed series of wages.

9.2.3 The Data and the Parameters

We here only employ time series data of the U.S. economy. To calibrate the

models, we shall ﬁrst specify the structural parameters. There are altogether

10 structural parameters in Model IV: a

0

, a

1

, σ

ε

, γ, µ, α, β, δ, θ and ω. All

these parameters are essentially the same as we have employed in Chapter 8

(see Table 8.1) except for the ω. We choose ω to be 0.5203. This is estimated

according to our new model by minimizing the residual sum of square between

actual employment and the model generated employment. The estimation is

again executed by a conventional algorithm, the grid search. Note that here

again we need a rescaling of the wage series in the estimation of ω.

6

9.2.4 Calibration

Table 9.1 provides the result of our calibrations from 5000 stochastic sim-

ulations. This result is further conﬁrmed by Figure 9.1, where a one time

simulation with the observed innovation A

t

are presented.

7

All time series

are detrended by the HP-ﬁlter.

6

Note that there is a need of rescaling the wage series in the estimation of ω. This re-

scaling is necessary because we do not exactly know the initial condition of Z

t

, which we

set equal to 1. We have followed the same rescaling procedure as we did in Chapter 8.

7

Of course, for this exercise one should still consider A

t

,the observed Solow residual, to

include not only the technology shock, but also the demand shock among others.

179

Table 9.1: of the Model Variants (numbers in parentheses are the correspond-

ing standard deviations)

Consumption Capital Employment Output

Standard Deviations

Sample Economy 0.0081 0.0035 0.0165 0.0156

Model I Economy 0.0091 0.0036 0.0051 0.0158

(0.0012) (0.0007) (0.0006) (0.0021)

Model IV Economy 0.0071 0.0058 0.0237 0.0230

(0.0015) (0.0018) (0.0084) (0.0060)

Correlation Coeﬃcients

Sample Economy

Consumption 1.0000

Capital Stock 0.1741 1.0000

Employment 0.4604 0.2861 1.0000

Output 0.7550 0.0954 0.7263 1.0000

Model I Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.2043 1.0000

(0.1190) (0.0000)

Employment 0.9288 -0.1593 1.0000

(0.0203) (0.0906) (0.0000)

Output 0.9866 0.0566 0.9754 1.0000

(0.0033) (0.1044) (0.0076) (0.0000)

Model IV Economy

Consumption 1.0000

(0.0000)

Capital Stock 0.3878 1.0000

(0.1515) (0.0000)

Employment 0.4659 0.0278 1.0000

(0.1424) (0.1332) (0.0000)

Output 0.8374 0.0369 0.8164 1.0000

(0.0591) (0.0888) (0.1230) (0.0000)

9.2.5 The Labor Market Puzzle

Despite the bias towards Model I Economy, due to the selection of the struc-

tural parameters, we ﬁnd that the labor eﬀort is much more volatile than

180

in the Model I Economy the benchmark model. Indeed, comparing to the

benchmark model, the Model I economy, the volatility of labor eﬀort in our

Model IV economy has much been increased if anything the volatility of the

labor eﬀort is too high. This result is, however, not surprising since the

agents face two constraints – one in the labor market and one in the product

market. Also the excessive correlation between labor and consumption has

been weakened.

Further evidence on the better ﬁt of our Model IV economy — as concerns

the volatility of the macroeconomic variables — is also demonstrated in the

Figure 9.1 where the horizontal ﬁgures show, from top to bottom, actual

(solid line) and simulated data (dotted line) for consumption, capital stock,

employment and output. The two columns of ﬁgures, from the left to the

right, represent the ﬁgures for Model I and Model IV economies respectively.

As observable, the employment series in Model IV economy can ﬁt the data

better than the Model I economy.

This resolution to the labor market puzzle should not be surprising be-

cause we specify the structure of labor market essentially the same way as in

the last chapter. However, in addition to the labor market disequilibrium as

speciﬁed in the last chapter, we also allow in this chapter for monopolistic

competition in the product market. In addition to impacting the volatility

of labor eﬀort, this may provide the possibility to resolve the another puzzle,

that is, the technology puzzle also arising in the market clearing RBC model.

181

Figure 9.1: Simulated Economy versus Sample Economy: U.S. Case (solid

line for sample economy, dotted line for simulated economy)

9.2.6 The Technology Puzzle

In economic literature, one often discusses the technology in terms of its per-

sistent and temporary eﬀects on the economy. One possibility to investigate

the persistent eﬀect in our models here is to look at the steady states. Given

that at the steady state all the markets will be cleared, our Model IV econ-

omy should have the same steady state as in the benchmark model. For the

convenience of our discussion, we rewrite these steady states in the following

182

equations (see the proof of Proposition 4 in Chapter 4):

n = αφ/ [(α + θ)φ −(δ + γ)θ]

k = A

1/α

φ

−1/α

n

_

N/0.3

_

c = (φ −δ −γ)k

y = φk

where

φ = [(1 +γ) −β(1 −δ)] /β(1 −α)

From the above equation, one ﬁnds that technology has the positive per-

sistent eﬀect on output, consumption and capital stock,

8

yet zero eﬀect on

employment.

Next, we shall look at the temporary eﬀect of the technology shock. Ta-

ble 9.2 records the cross correlation of the temporary shock A

t

from our

5000 thousand stochastic simulation. As one can ﬁnd there, the two models

predicts rather diﬀerent correlations. In the Model I (RBC) economy, tech-

nology A

t

has a temporary eﬀect not only on consumption and output, but

also on employment, which are all strongly positive. Yet in our Model IV

Economy with monopolistic competition and nonclearing labor market, we

ﬁnd that the correlation is much weaker with respect to employment. This

is consistent with the widely discussed recent ﬁnding that technology has

near-zero (and even negative) eﬀect on employment.

Table 9.2: The Correlation Coeﬃcients of Temporary Shock in Technology.

output consumption employment capital stock

Model I Economy 0.9903 0.9722 0.9966 -0.0255

(0.0031) (0.0084) (0.0013) (0.1077)

Model IV Economy 0.8397 0.8510 0.4137 -0.1264

(0.0512) (0.0507) (0.1862) (0.1390)

At the given expected market demand, an improvement in technology

(reﬂected as an increase in labor productivity) will reduce the demand for

labor, if the ﬁrm follows the Keynesian way of output determination, that

is, the output is determined by demand. In this case, less labor is required

to produce the given amount of output. Technical progress, therefore, may

8

This long run eﬀect of technology is also revealed by recent time series studies in the

context of a variety of endogenous growth models, see Greiner, Semmler and Gong (2004).

183

have an adverse eﬀect on employment at least in the short run. This stylist

fact cannot be explained in the RBC framework since at the given wage rate,

the demand for labor is simply determined by the marginal product, which

should be increased with the improvement in technology. This chapter thus

demonstrates that if we follow the Keynesian way of quantity determina-

tion in a monopolistic competition model, the technology puzzle explored in

standard market clearing models would disappear.

9.3 Conclusions

In the last chapter, we have shown how households may be constrained in the

product market in buying consumption goods by the ﬁrms actual demand

for labor. Then noncleared labor market was derived from a multiple stage

decision process of households where we have neglected that ﬁrms may also

be demand constrained on the product market. The proposition in this

chapter which shows the ﬁrms’ constraint in the product market, explains

this additional complication that can arise due to the interaction of the labor

market and the product market constraints.

We have then shown in this chapter how the ﬁrms’ constraints on the

product market may explain the technology puzzle, namely that positive

technology shocks may have, only a weak eﬀect on employment in the short

run - a phenomenon inconsistent with equilibrium business cycle models,

where technology shocks and employment are predicted to be positively cor-

related. This result was obtained in an economy with monopolistic compe-

tition, as in New Keynesian economics, where prices and wages are set by

a monopolistic supplier and are sticky, resulting in an updating scheme of

prices and wages where only a fraction of prices and wages are optimally set

each time period. Yet we have also introduced a nonclearing labor market,

resulting from a multiple stage decision problem, where then the households’

constraint on the labor market spills over to the product market and the

ﬁrms constraint on the product market generates employment constraints.

We could show that such a model matches better time series data of the U.S.

economy.

184

9.4 Appendix: Proof of the Proposition

Let X

t

= Z

t

L

t

, with Z

t

to be the permanent shock resulting purely from

productivity growth, and L

t

from population growth. We shall assume that

L

t

has a constant growth rate µ and hence Z

t

follows the growth rate (γ −µ).

The production function can be written as Y

t

= A

t

Z

α

t

K

1−α

t

H

α

t

, where H

t

equals N

t

L

t

and can be regarded as total labor hours.

Let us ﬁrst consider the ﬁrm’s willingness to supply Y

∗

t

, Y

∗

t

= X

t

y

∗

t

, under

the condition that the rental rate of capital r

t

clears the capital market while

the wage rate w

t

is given. In this case, the ﬁrm’s optimization problem can

be expressed as

max Y

∗

t

−r

t

K

d

t

−w

t

H

d

t

subject to

Y

∗

t

= A

t

(Z

t

)

α

_

K

d

t

_

1−α

_

H

d

t

_

α

The ﬁrst order condition tell us that

(1 −α)A

t

(Z

t

)

α

_

K

d

t

_

−α

_

H

d

t

_

α

= r

t

(9.28)

αA

t

(Z

t

)

α

_

K

d

t

_

1−α

_

H

d

t

_

α−1

= w

t

(9.29)

from which we can further obtain

r

t

w

t

=

_

1 −α

α

_

H

d

t

K

d

t

(9.30)

Since the rental rate of capital r

t

is assumed to clear the capital market, we

can thus replace K

d

t

in the above equations by K

t

. Since w

t

is given, and

therefore the demand for labor can be derived from (9.29):

H

d

t

=

_

αA

t

w

t

_ 1

1−α

(Z

t

)

α

1−α

K

t

Dividing both sides of the above equation by X

t

, and then reorganizing, we

obtain

n

d

t

=

0.3

¯

N

_

αA

t

Z

t

w

t

_ 1

1−α

k

t

We shall regard this labor demand as the demand when the ﬁrm desired

activities are carried out, which is indeed the ﬁrst equation in (9.26). Given

this n

d

t

, the ﬁrm’s desire to supply y

∗

t

can be expressed as

y

∗

t

= A

t

k

1−α

t

(n

d

t

N/0.3)

α

= A

t

k

t

_

αA

t

Z

t

w

t

_ α

1−α

(9.31)

185

This is the equation (9.27) as expressed in the proposition.

Next, we consider the case that the ﬁrm’s supply is constrained by the

expanded demand

ˆ

Y

t

,

ˆ

Y

t

= X

t

y

t

. In other words, ˆ y

t

< y

∗

t

where y

∗

t

is given

by (9.31). In this case, the ﬁrm’s proﬁt maximization problem is equivalent

to the following minimization problem:

min r

t

K

d

t

+ w

t

H

d

t

subject to

ˆ

Y

t

= A

t

(Z

t

)

α

_

K

d

t

_

1−α

_

H

d

t

_

α

(9.32)

The ﬁrst-order condition will still allows us to obtain (9.30). Using equation

(9.32) and (9.30), we obtain the demand for capital K

d

t

and labor H

d

t

as

K

d

t

=

_

ˆ

Y

t

A

t

Z

α

t

_

__

w

t

r

t

__

1 −α

α

__

α

H

d

t

=

_

ˆ

Y

t

A

t

Z

α

t

_

__

w

t

r

t

__

α

1 −α

__

1−α

Dividing both sides of the above two equations by X

t

, we obtain

k

d

t

=

_

´ y

t

A

t

___

w

t

r

t

Z

t

__

1 −α

α

__

α

(9.33)

n

d

t

=

_

0.3´ y

t

A

t

N

___

r

t

Z

t

w

t

__

α

1 −α

__

1−α

(9.34)

Since the real rental of capital r

t

will clear the capital market, we can replace

k

d

t

in (9.33) by k

t

.Substituting it into (9.34) for explaining r

t

, we obtain

n

d

t

=

_

0.3

N

__

´ y

t

A

t

_

1/α

_

1

k

t

_

(1−α)/α

This is the second equation in (9.26).

Chapter 10

Conclusions

In this book, we try to contribute to the current research in stochastic dy-

namic macroeconomics. We recognize that the stochastic dynamic optimiza-

tion model is important in macroeconomics, we consider the current stan-

dard model of model, the real business cycle model, only to be a simple and

starting point for macrodynamic analysis. For the model to explain the real

world more eﬀectively, some Keynesian features should be introduced. We

have shown that with such an introduction the model can be enriched while

it becomes possible to resolve the most important puzzles, the labor market

puzzle and the technology puzzle, in RBC economy.

186

Bibliography

[1] Adelman, I. and F. L. Adelman (1959): ”The Dynamic Properties of

the Klein-Goldberger Model”, Econometrica, vol. 27, 596-625

[2] Arrow, K. J. and Debreu, G. (1954): ”Existence of an Equilibrium for

a Competitive Economy,” Econometrica 22, 265-290.

[3] Basu, S. and M. S. Kimball (1997): ”Cyclical Productivity with Un-

observed Input Variation,” NBER Working Paper Series 5915. Cam-

bridge, MA.

[4] Bellman, R. (1957): Dynamic Programming. Princeton, NJ: Princeton

University Press.

[5] Benassy, J.-P. (1995): ”Money and Wage Contract in an Optimizing

Model of the Business Cycle”, Journal of Monetary Economics, vol. 35:

303-315.

[6] Benassy, J.-P. (2002): ”The Macroeconomics of Imperfect Competition

and Nonclearing Markets”, Cambridge: MIT-Press.

[7] Benhabib, J. and R. Farmer (1994): ”Indeterminacy and Increasing

Returns”, Journal of Economic Theory 63: 19-41.

[8] Benhabib, J. and R. Farmer (1999): ”Indeterminacy and Sunspots in

Macroeconomics”, Handbook for Macroeconomics, eds. J. Taylor and

M. Woodford, North-Holland, New York, vol. 1A: 387-448

[9] Benhabib, J., S. Schmidt-Grohe and M. Uribe (2001): ”Monetary Pol-

icy and Multiple Equilibria”, American Economic Review, vol. 91, no.1:

167-186.

[10] Benninga, S. and A. Protopapadakis (1990),”Leverage, Time Prefer-

ence and the Equity Premium Puzzle,” Journal of Monetary Economics

25, 49-58.

187

BIBLIOGRAPHY 188

[11] Bennett,R. L. and R.E.A. Farmer (2000): ”Indeterminacy with Non-

separable Utility”, Journal of Economic Theory 93: 118-143.

[12] Benveniste and Scheinkman (1979): ”On the Diﬀerentiability of the

Value Function in Dynamic Economics, Econometrica, Vol. 47(3):727-

732.

[13] Beyn, W. J., Pampel, T. and W. Semmler (2001): ”Dynamic Op-

timization and Skiba Sets in Economic Examples”, Optimal Control

Applications and Methods, vol. 22, issues 5-6: 251-280.

[14] Blanchard, O. and S. Fischer (1989): ”Lectures on Macroeconomics”,

Cambridge, MIT-Press

[15] Blanchard, O. and J. Wolfers (2000): ”The Role of Shocks and In-

stitutions in the Rise of Unemployment: The Aggregate Evidence”,

Economic Journal 110:C1-C33.

[16] Blanchard, O. (2003): ”Comments on Jjungqvist and Sargent” in:

Knowledge, Information, And Expectations in Modern Macroceco-

nomics”, edited by P. Aghion, R. Frydman, J. Stiglitz and M. Wood-

ford, Princeton Unviersity Press, Princeton: 351-356.

[17] Bohachevsky, I. O., M. E. Johnson and M. L. Stein (1986), ”General-

ized Simulated Annealing for Function Optimization,” Technometrics,

vol. 28, 209-217.

[18] Bohn, H. (1995) ”The Sustainability of Budget Deﬁcits in a stochastic

Economy”, Journal of Money, Credit and Banking, vol. 27, no. 1:257-

271.

[19] Boldrin, M., Christiano, L. and J. Fisher (1996), ” Macroeconomic

Lessons for Asset Pricing”, NBER working paper no. 5262.

[20] Boldrin, M., Christiano, L. and J. Fisher (2001), ”Habit Persistence,

Asset Returns and the Business Cycle”, American Economic Review,

vol. 91, 1:149-166.

[21] Brock, W. and Mirman (1972)”Optimal Economic Growth and Uncer-

tainty: The Discounted Case”, Journal of Economic Theory 4: 479-513.

[22] Brock, W. (1979) ”An integration of stochastic growth theory and

theory of ﬁnance, part I: the growth model”, in: J. Green and J.

Schenkman (eds.), New York, Academic Press: 165-190.

BIBLIOGRAPHY 189

[23] Brock (1982) ”Asset Pricing in a Production Economy”, The

Economies of Information and Uncertainty, ed. by J.J. McCall,

Chicago, University of Chicago Press: 165-192.

[24] Burns, A. F. and W. C. Mitchell (1946): Measuring Business Cycles,

New York: NBER.

[25] Burnside,A. C. , M. S. Eichenbaum and S. T. Rebelo (1993):

”Labor Hoarding and the Business Cycle”, Journal of Political

Economy,101:245-273.

[26] Burnside, C., M. Eichenbaum and S. T. Rebelo (1996): ”Sectoral Solow

Residual”, European Economic Review, Vol. 40: 861-869.

[27] Calvo, G.A. (1983)”Staggered Contracts in a Utility Maximization

Framework”, Journal of Monetary Economics, vol 12: 383-398.

[28] Campbell, J. (1994), ”Inspecting the Mechanism: An Analytical Ap-

proach to the Stochastic Growth Model”, Journal of Monetary Eco-

nomics 33, 463-506.

[29] F. Camilli and M. Falcone (1995), ”Approximation of Ooptimal Control

Problems with State Constraints: Estimates and Applications”, in B.S.

Mordukhovic, H.J. Sussman eds., ”Nonsmooth analysis and geometric

methods in deterministic optimal control”, IMA Volumes in Applied

Mathematics 78, Springer Verlag, 1996, 23-57.

[30] Capuzzo-Dolcetta, I. (1983): ”On a Discrete Approximation of the

Hamilton-Jacobi-Bellman Equation of Dynamic Programming”, Appl.

Math. Optim., vol. 10: 367-377.

[31] Chow, G. C. (1983),”Econometrics”, New York: MacGraw-Hill, Inc.

[32] Chow, G. C. (1993): ”Statistical Estimation and Testing of a Real Busi-

ness Cycle Model,” Econometric Research Program, Research Memo-

randum, no. 365, Princeton: Princeton University.

[33] Chow, G. C. (1993): Optimum Control without Solving the Bellman

Equation, Journal of Economic Dynamics and Control 17, 621-630.

[34] Chow, G. C. (1997): Dynamic Economics: Optimization by the La-

grange Method, New York: Oxford University Press.

BIBLIOGRAPHY 190

[35] Chow, G. C. and Kwan, Y. K. (1998): How the Basic RBC Model

Fails to Explain U.S. Time Series, Jounal of Monetary Economics 41,

308-318.

[36] Christiano, L. J. (1987): Why Does Inventory Fluctuate So Much?

Journal of Monetary Economics, vol. 21: 247-80.

[37] Christiano, L. J. (1988): ”Why Does Inventory Fluctuate So Much?”,

Journal of Monetary Economics, vol. 21: 247-80.

[38] Christiano, L. J. (1987): Technical Appendix to “Why Does Inventory

Investment Fluctuate So Much?” Research Department Working Paper

No. 380, Federal Reserve Bank of Minneapolis.

[39] Christiano, L. J. and M. Eichenbaum (1992): ”Current Real Business

Cycle Theories and Aggregate Labor Market Fluctuation,” American

Economic Review, June, 431-472.

[40] Christiano, L.J., M. Eichenbaum and C. Evans (2001), “Nominal

Rigidities and the Dynamic Eﬀects of a Stock to Monetary Policy,

[41] Cochrane (2001): ”Asset Pricing”. Princeton: Princeton University

Press.

[42] Cooley, T. and E. Prescott (1995): ”Economic Growth and Business

Cycles”, in Cooley, T. ed., Frontiers in Business Cycle Research, Prince-

ton: Princeton University Press

[43] Corana, A., M. C. Martini, and S. Ridella (1987), ”Minimizing Multi-

modal Functions of Continuous Variables with the Simulating Anneal-

ing Algorithm,” ACM Transactions on Mathematical Software, vol. 13,

262-80.

[44] Danthine, J.P. and J.B. Donaldson (1990): ”Eﬃciency Wages and the

Business Cycle Puzzle”, European Economic Review 34: 1275-1301.

[45] Danthine, J.P. and J. B. Donaldson (1995): ”Non-Walrian Economies,”

in T.F. Cooly (ed), Frontiers of Business Cycle Research, Prince-

ton:Princeton University Press.

[46] Dawid, H. and R. Day (2003), ”Adaptive Economizing and Sustainable

Living: Optimally, Suboptimally and Pessimality in the One Sector

Growth Model”, mimeo, University of Bielefeld.

BIBLIOGRAPHY 191

[47] den Haan, W. and A. Marcet (1990): ”Solving the Stochastic Growth

Model by Parameterizing Expectations”. Journal of Business and Eco-

nomic Statistics, 8: 31-34.

[48] Debreu, G. (1959): Theory of Value, New York: Wiley.

[49] Diebold F.X.L.E. Ohanian and J. Berkowitz (1995), ”Dynamic Equi-

librium Economies: A Framework for Comparing Model and Data”,

Technical Working Paper No. 174, National Bureau of Economic Re-

search.

[50] Eichenbaum, M. (1991): ”Real Business Cycle Theory: Wisdom or

Whimsy?” Journal of Economic Dynamics and Control, vol. 15, 607-

626.

[51] Eichenbaum, M, L. Hansen and K. Singleton (1988): ”A Time Series

Analysis of Representative Agent Models of Consumption and Leisure

Under Uncertainty,” Quarterly Journal of Economics, 51-78

[52] Eisner, R. and R. Stroz (1963), “Determinants of Business Investment,

Impacts on Monetary Policy, Prentice Hall.

[53] Erceg, C. J., D. W. Henderson and A. T. Levin (2000), ”Optimal Mon-

etary Policy with Staggered Wage and Price Contracts”, Journal of

Monetary Economics, Vol. 46: 281 - 313.

[54] European Central Bank Report, Country Decision (2004), ”Quantify-

ing the Impact Structural Reforms”, European Central Bank, Frank-

furt.

[55] Evans, C. (1992): ”Productivity Shock and Real Business Cycles”,

Journal of Monetary Economics, Vol. 29, p191-208.

[56] Fair, R. C. (1984): Speciﬁcation, Estimation, and Analysis of Macroe-

conometric Models, Cambridge, MA: Harvard University Press.

[57] Fair, R. C. and J. B. Taylor (1983): Solution and Maximum Likeli-

hood Estimation of Dynamic Nonlinear Rational Expectation Models,

Econometrica, 21(4), 1169-1185.

[58] Falcone, M. (1987) ”A Numerical Approach to the Inﬁnite Horizon

Problem of Determinstic Control Theory”, Appl. Math. Optim., 15:

1-13.

BIBLIOGRAPHY 192

[59] Farmer (1999) ”Macroeconomics with Self-Fulﬁlling Expectations”,

Cambridge, MIT Press.

[60] Feichtinger, G., F.H. Hartl, P. Kort and F. Wirl (2000), “The Dynamics

of a Simple Relative Adjustment-Cost Framework, mimeo, University

of Technology, Vienna.

[61] Francis, N. and V.A. Ramey (2001): ”Is the Technology-Driven Real

Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations

Revisited”, University of California, San Diego.

[62] Francis, N. and V.A. Ramey (2003): ”The Source of Historical Eco-

nomic Fluctuations: An Analysis using Long-Run Restrictions”, Uni-

versity of California, San Diego.

[63] Gali, (1999): Technology, Employment, and the Business Cycle: Do

Technology Shocks Explain Aggregate Fluctuation? American Eco-

nomic Review, Vol. 89, p249-271.

[64] Goﬀe, W. L., G. Ferrier and J. Rogers (1992), ”Global Optimization of

Statistical Function,” in H. M. Amman, D. A. Belsley and L. F. Pau

eds. Computational Economics and Econometrics, vol. 1, Dordrecht:

Kluwer.

[65] Gong, G. and W. Semmler (2001): Dynamic Programming with La-

grangian Multiplier: an Improvement over Chow’s Approximation

Method, Working Paper, Center for Empirical Macreconomics, Biele-

feld University.

[66] Gong, G. and W. Semmler (2001), Real Business Cycles with disequi-

libirum in the Labor Market: A Comparison of the US and German

Economies, Center for Empirical Macroeconomics, Bielefeld University,

working paper.

[67] Gong, G. and W. Semmler: ”Stochastic Dynamic Macroeconomics:

Theory, Numerics and Empirical Evidence”, Center for Empirical

Macroeconomics, book manuscript, Bielefeld University.

[68] Gong, G., A. Greiner, W. Semmler and J. Rubart (2001): ”Economic

Growth in the U.S. and Europe: the Role of Knowledge, Human Capi-

tal, and Inventions”, in:

¨

Okonomie als Grundlage politischer Entschei-

dungen, J. Gabriel and M. Neugart (eds.), Leske und Budrich, Opladen.

BIBLIOGRAPHY 193

[69] Greiner, A., W. Semmler and G. Gong (2003): ”The Forces of Eco-

nomic Growth: A Time Series Perspective”, Princeton: Princeton Uni-

versity Press.

[70] Greiner, A., W. Semmler and G. Gong (2004)”Forces of Economic

Growth - A Time Series Perspective”, forthcoming: Princeton, Prince-

ton University Press.

[71] Greiner, A., J. Rubart and W. Semmler (2003): ”Economic Growth,

Skill-biased Technical Change and Wage Inequality. A Model and Es-

timations for the U.S. and Europe”, forthcoming Journal of Macroeco-

nomics.

[72] Gr¨ une, L. (1997) ”An Adaptive Grid Scheme for the Discrete Hamilton-

Jacobi-Bellman Equation”, Numer. Math., 75: 1288-1314.

[73] L. Gr¨ une (2003), Errorr estimation and adaptive discretiza-

tion for the discrete stochastic Hamilton–Jacobi–Bellman

equation. Preprint, University of Bayreuth. Submitted,

http://www.uni-bayreuth.de/departments/math/∼lgruene/papers/.

[74] Gr¨ une, L. and W. Semmler (2004a): ”Using Dynamic Programming

for Solving Dynamic Models in Economics”, CEM Bielefeld, working

paper, forthcoming Journal of Economic Dynamics and Control, 28:

2427-2456.

[75] Gr¨ une, L. and W. Semmler (2004b) ”Solving Asset Pricing Models with

Stochastic Dynamic Programming”, CEM Bielefeld, working paper.

[76] Gr¨ une, L. and W. Semmler (2004c) ”Default Risk, Asset Pricing

and Debt Control”, forthcoming Journal of Financial Econometrics,

2004/05.

[77] Gr¨ une, L. and W. Semmler (2004d), ”Asset Pricing - Constrained by

Past Consumption Decisions”, CEM Bielefeld, working paper.

[78] Gr¨ une, L., W. Semmler and M. Sieveking (2004), ”Creditworthiness

and Threshold in a Credit Market Model with Multiple Equilibria”,

forthcoming, Economic Theory, vol. 25, no. 2: 287-315.

[79] Hall, R. E. (1988): ”The Relation between Price and Marginal Cost in

U.S. Industry”, Journal of Political Economy, Vol. 96, p.921-947.

[80] Hamilton, J. D. (1994), ”Time Series Analysis”, Princeton: Princeton

University Press.

BIBLIOGRAPHY 194

[81] Hall, R. E. (1988): ”The Relation between Price and Marginal Cost in

U.S. Industry”, Journal of Political Economy, Vol. 96, p.921-947.

[82] Hansen, L. P. (1982): ”Large Sample Properties of Generalized Meth-

ods of Moments Estimators,” Econometrica, vol. 50, no. 4, 1029-1054.

[83] Hansen, G. H. (1985): ”Indivisible Labor and Business Cycles,” Jour-

nal of Monetary Economics, vol.16, 309-327.

[84] Hansen, G. H. (1988): ”Technical Progress and Aggregate Fluctua-

tions”, working paper, University of California, Los Angeles.

[85] Hansen, L. P. and K. J. Singleton (1982): ”Generalized Instrument

Variables Estimation of Nonlinear Rational Expectations Models,”

Econometrica, vol. 50, no. 5, 1268-1286.

[86] Harrison, S.G. (2001) ”Indeterminacy with Sector-speciﬁc Externali-

ties”, Journal of Economic Dynamics and Control, 25: 747-76...

[87] Hayashi, F. (1982), “Tobin’s Marginal q and Average q: A Neoclassical

Interpretation, Econometrica 50: 213-224.

[88] Heckman, J. (2003): ”Flexibility and Creation: Job Lessons from the

German Experience”, in: Knowledge, Information, And Expectations

in Modern Macroceconomics”, edited by P. Aghion, R. Frydman, J.

Stiglitz and M. Woodford, Princeton Unviersity Press, Princeton: 357-

393.

[89] Hicks, J.R. (1963): ”The Theory of Wages”, McMillan, London.

[90] Hodrick, R. J. and E. C. Prescott (1980): Post-war U. S. Business

Cycle: an Empirical Investigation, Working Paper, Carnegie-Mellon

University, Pittsburgh, PA.

[91] Hornstein, A. and H. Uhlig (2001), ”What is the Real Story for Interest

Rate Volatility?” German Economic Review 1(1): 43-67.

[92] Jerman, U.J. (1998), ”Asset Pricing in Pproduction Economies”, Jour-

nal of Monetary Economies 41: 257-275.

[93] Judd, K. L. (1996) ”Approximation,Pertubation, and Projection Meth-

ods in Economic Analysis”, Chapter 12 in: Amman, H.M., D.A.

Kendrick and J. Rust, eds., Handbook of Computational Economics,

Elsevier: 511-585.

BIBLIOGRAPHY 195

[94] Judd, K. L. (1998): Numerical Methods in Economics, Cambridge,

MA: MIT Press.

[95] Judge, G. G., W. E. Griﬃths, R. C. Hill and T. C. Lee (1985), ”The

Theory and Practice of Econometrics”, 2nd edition, New York: Wiley.

[96] Juillard M. (1996): DYNARE: A Program for the Resolution and Sim-

ulation of Dynamic Models with Forward Variables through the Use

of a Relaxation Algorithm,” CEPREMAP Working Paper, No. 9602,

Paris, France.

[97] Kendrick, D. (1981): Stochastic Control for Economic Models, New

York, NY: McGraw-Hill Book Company.

[98] Keynes, J.M. (1936) ”The General Theory of Employment, Interest

and Money”, London, MacMillan.

[99] Kim, J. (2003) ”Indeterminacy and Investment and Adjustment Costs:

An Analytical Result”, Macroeconomic Dynamics 7: 394-406.

[100] Kim, J. (2004) ”Does Utility Curvature Matter for Indetermincy?”,

forthcoming, Journal of Economic Behavior and Organization.

[101] King, R. G. and C. I. Plosser (1994): ”Real Business Cycles and the

Test of the Adelmans”, Journal of Monetary Economics, vol. 33, 405-

438.

[102] King, R. G., C. I. Plosser, and S. T. Rebelo (1988a): ”Production,

Growth and Business Cycles I: the Basic Neo-classical Model,” Journal

of Monetary Economics, 21, 195-232.

[103] King, R. G., C. I. Plosser, and S. T. Rebelo (1988b), ”Production,

Growth and Business Cycles II: New Directions,” Journal of Monetary

Economics, vol. 21, 309-341.

[104] King, R. G. and S. T. Rebelo (1999): ”Resusciting Real Business Cy-

cles,” in Handbook of Macroeconomics, Volume I, edited by J. B. Tay-

lor and M. Woodford, Elsevier Science.

[105] King, R.G. and A.L. Wolman (1999): ” What should the Monetary

Authority do when Prices are sticky?”, in: J. Taylor (ed.) Monetary

Policy Rules, Chicago: The University of Chicago Press.

BIBLIOGRAPHY 196

[106] Kwan, Y. K. and G. C. Chow (1997): Chow’s Method of Optimum

Control: A Numerical Solution, Journal of Economic Dynamics and

Control 21, 739-752.

[107] Kydland, F. E. and E. F. Prescott (1982), ”Time to Build and Aggre-

gate Fluctuation”, Econometrica, vol. 50, 1345-1370.

[108] Lettau, M. (1999): ”Inspecting the Mechanism: The Determination of

Asset Prices in the Real Business Cycle Model,” CEPR working paper

No. 1834

[109] Lettau, M. and H. Uhlig (1999): ”Volatility Bounds and Preferences:

An Analytical Approach,” revised from CEPR Discussion Paper No.

1678

[110] Lettau, M., G. Gong and W. Semmler (2001): Statistical Estimation

and Moment Evaluation of a Stochastic Growth Model with Asset Mar-

ket Restriction, Journal of Economics Behavior and Organization, vol.

44, 85-103.

[111] Ljungqvist, L. and T. Sargent (1998): ”The European Unemployment

Dilemma”, Journal of Political Economy, vol. 106, no.3: 514-550.

[112] Ljungqvist, L. and Sargent, T. J. (2000): Recursive Macroeconomics,

Cambridge, MA: The MIT Press.

[113] Ljungqvist, L. and T. Sargent (2003): ”European Unemployment:

From a Worker’s Perspective”, in: Knowledge, Information, And Ex-

pectations in Modern Macroceconomics”, edited by P. Aghion, R. Fry-

dman, J. Stiglitz and M. Woodford, Princeton Unviersity Press, Prince-

ton: 326-350.

[114] Long, J. B. and C. I. Plosser (1983): Real Business Cycles, Journal of

Political Economy, vol. 91, 39-69.

[115] Lucas, R. E. (1967): “Adjustment Costs and the Theory of Supply,

Journal of Political Economiy 75: 321-334.

[116] Lucas, R. (1976): Econometric Policy Evaluation: A Critique,

Carnegie-Rochester Conference Series on Public Policy, 1, 19-46.

[117] Lucas, R. (1978) ”Asset Prices in an Exchange Economy”. Economet-

rica 46: 1429-1446.

BIBLIOGRAPHY 197

[118] Lucas, R. and E.C. Prescott (1971):”Investment under Uncertainty”,

Econometrica, vol. 39 (5):659ﬀ.

[119] Malinvaud, E. (1994): ”Diagnosing Unemployment”, Cambridge, Cam-

bridge University Press.

[120] Mankiw, N. G. (1989): ”Real Business Cycles: A New Keynesian Per-

spective”, Journal of Economic Perspectives, Vol. 3. 79-90.

[121] Mankiw, N. G. (1990): A Quick Refresher Course in Macroeconomics,

Journal of Economic Literature, Vol. 27. 1645-1660.

[122] Marimon, R. and Scott, A. (1999): Computational Methods for the

Study of Dynamic Economies, New York, NY: Oxford University Press.

[123] Mehra and Prescott (1985), ”The Equity Premium Puzzle”, Journal of

Monetary Economics 15: 145-161.

[124] Merz, M. (1999): Heterogenous Job-Matches and the Cyclical Behavior

of Labor Turnover”, Journal of Monetary Economics, 43: 91-124.

[125] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A. M. and

Teller, E. (1953), ”Equation of State Calculation by Fast Computing

Machines,” The Journal of Chemical Physics, vol. 21, no. 6, 1087-1092.

[126] Meyers, R.J. (1964): ”What can we learn from European Experience,

in Unemployment and the American Economy?”, ed. by A.M. Ross,

New York: John Wiley & Sons, Inc.

[127] Meyers, R.J. (1968) ”What Can We Learn from European Experience,

in Unemployment and the American Economy?”, ed. by A.M. Ross,

New York: John Wiley & Sons, Inc.

[128] Nickell, S. (1997): ”Unemployment and Labor Market Rigidities – Eu-

rope versus North Maerica”, Journal of Economic Perspectives 3, 55-74.

[129] Nickell, S., L. Nunziata, W. Ochel and G. Quintini (2003): ”The Bev-

eridge Curve, Unemployment, and Wages in the OECD from the 1960s

to the 1990s”, in: Knowledge, Information, And Expectations in Mod-

ern Macroceconomics”, edited by P. Aghion, R. Frydman, J. Stiglitz

and M. Woodford, Princeton Unviersity Press, Princeton.

[130] OECD (1998a): ”Business Sector Data Base”, OECD Statistical Com-

pendium.

BIBLIOGRAPHY 198

[131] OECD (1998b): ”General Economic Problems.” OECD Economic Out-

look, Contry Speciﬁc Series

[132] Phelps, E. (1997): Rewarding Work, Cambridge: MIT-Press.

[133] Phelps, E. and G. Zoega (1998): ”Natural Rate Theory and OECD

Unemployment”, Economic Journal, 108 (May): 782-801.

[134] Plosser, C. I. (1989): ”Understanding Real Business Cycles,” Journal

of Economic Perspectives, vol. 3, no. 3, 51-77.

[135] Prescott, E. C. (1986): ”Theory ahead of Business Cycle Measure-

ment,” Quarterly Review, Federal Reserve Bank of Minneapolis, vol.

10, no. 4, 9-22.

[136] Ramsey, F.P. (1928): ”A Mathematical Theory of Saving”, Economic

Journal 28, 543-559.

[137] Reiter, M. (1996)

[138] Reiter, M. (1997): Chow’s Method of Optimum Control, Journal of

Economic Dynamics and Control 21, 723-737.

[139] Rotemberg, J. (1982) ”Sticky Prices in the United States”, Journal of

Political Economy, vol. 90: 1187-1211.

[140] Rotemberg, J. and M. Woodford (1995): ”Dynamic General Equilib-

rium Models with Imperfectly Competitive Product Markets,” in T.F.

Cooley (ed), Frontiers of Business Cycle Research, Princeton:Princeton

University Press.

[141] Rotemberg, J. and M. Woodford (1999): ”Interest Rate Rules in an

Estimated Sticky Price Model”, in: J. Taylor (ed.) Monetary Policy

Rules, Chicago: the University of Chicago Press.

[142] Rust, J. (1996), ”Numerical Dynamic Programming in Economics”,

in: Amman, H.M., D.A. Kendrick and J. Rust, eds., Handbook of

Computational Economics, Elsevier, pp. 620–729.

[143] Santos, M.S. and J. Vigo-Aguiar (1995):

[144] Santos, M.S. and J. Vigo-Aguiar (1998): Analysis of a Numerical Dy-

namic Programming Algorithm Applied to Economic Models. Econo-

metrica, 66(2): 409-426

BIBLIOGRAPHY 199

[145] Sargent, T. (1999): Contested Inﬂation. Princeton, Princeton Univer-

sity Press.

[146] Schmidt-Grohe, S. (2001): Endogenous Business Cycles and the Dy-

namics of Output, Hours and Consumption, American Economic Re-

view, vol 90, no. 5, 1136-1159.

[147] Simkins, S. P. (1994): ”Do Real Business Cycle Models Really Exhibit

Business Cycle Behavior?” Journal of Monetary Economics, vol. 33,

381-404.

[148] Singleton, K. (1988): ”Econometric Issues in the Analysis of Equilib-

rium Business Cycle Model,” Journal of Monetary Economics, vol. 21,

361-386.

[149] Skiba, A. K. (1978); “Optimal Growth with a Convex-Concave Pro-

duction Function, Econometrica 46 (May): 527-539.

[150] Solow, R. (1979): ”Another Possible Source of Wage Stickiness”, Jour-

nal of Macroeconomics, vol. 1: 79-82

[151] Statistisches Bundesamt (1998), Fachserie 18, Statistisches Bundesamt

Wiesbaden.

[152] Stockey, N. L., R. E. Lucas and E. C. Prescott (1989): ”Recursive

Methods in Economics”, Cambridge: Harvard University Press.

[153] Summers, L. H. (1986): ”Some Skeptical Observations on Real Busi-

ness Cycles Theory”, Federal Reserve Bank of Minneapolis Quarterly

Review, Vol. 10, p.23-27.

[154] Taylor, J. B. (1980): Aggregate Dynamics and Staggered Contracts,

Journal of Political Economy, Vol. 88: 1 - 24.

[155] Taylor, J. B. (1999): ”Staggered Price and Wage Setting in Macroeco-

nomics,” in Handbook of Macroeconomics, Volume I, edited by J. B.

Tayor and M. Woodford, Elsevier Science.

[156] Taylor, J.B. and Uhlig, H. (1990): Solving Nonlinear Stochastic Growth

Models: A Comparison of Alternative Solution Methods, Journal of

Business and Economic Statistics, 8, 1-17.

[157] Uhlig, H. (1999): A Toolkit for Analysing Nonlinear Dynamic Stochas-

tic Models Easily, in R. Marimon and A. Scott ed.: Computational

Methods for the Study of Dynamic Economies, New York: Oxford

University Press.

BIBLIOGRAPHY 200

[158] Uhlig, H. and Y. Xu (1996): ”Eﬀort and the Cycle: Cyclical Implica-

tions of Eﬃciency Wages,”, mimeo, Tilburg Unversity.

[159] Uzawa, H. (1968): “The Penrose Eﬀect and Optimum Growth, Eco-

nomic Studies Quarterly XIX: 1-14.

[160] Vanderbilt, D. and S. G. Louie (1984), ”A Monte Carlo Simulated An-

nealing Approach to Optimization over Continuous Variables,” Journal

of Computational Physics, vol. 56, 259-271.

[161] Walsh, L.E. (2002): ”Labor Market Search and Monetary Shocks”,

working paper, University of California, Santa Cruz.

[162] Watson, M.W. (1993), ”Measures of Fit for Calibration Models”, Jour-

nal of Political Economy, vol. 101, no. 6, 1011-1041.

[163] W¨ohrmann, P., W. Semmler and M. Lettau (2001), ”Nonparamet-

ric Estimation of Time-Varying Characteristics of Intertemporal Asset

Pricing Models”, working paper, CEM, Bielefeld University.

[164] Woodford, M. (2003): ”Interest and Prices”, Princeton University

Press, Princeton.

[165] Zbaracki, M. J., M. Ritson, D. Levy, S. Dutta and M. Bergen (2000):

The Managerial and Customer Costs of Price Adjustment: Direct Ev-

idence from Industrial Markets, Manuscript, Wharton School, Univer-

sity of Pennsylvania

[166] 2003): ”Monetary Policy Rules under Uncertainty: Adaptive Learning

and Robust Control”, forthcoming. Macroeconomic Dynamics 2004/05

Contents

List of Figures List of Tables Preface Introduction and Overview iv vi 1 2

**I Solution and Estimation of Stochastic Dynamic Models 11
**

1 Solution Methods of Stochastic Dynamic Models 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 1.2 The Standard Recursive Method . . . . . . . . . . . . . 1.3 The First-Order Conditions . . . . . . . . . . . . . . . 1.4 Approximation and Solution Algorithms . . . . . . . . 1.5 An Algorithm for the Linear-Quadratic Approximation 1.6 A Dynamic Programming Algorithm . . . . . . . . . . 1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 Appendix I: Proof of Proposition 1 . . . . . . . . . . . 1.9 Appendix II: An Algorithm for the LQ-Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 12 13 15 17 23 25 27 28 29 33 33 33 35 39 47 48 50

2 Solving a Prototype Stochastic Dynamic Model 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 The Ramsey Problem . . . . . . . . . . . . . . . . . . . . . . . 2.3 The First-Order Conditions and Approximate Solutions . . . . 2.4 Solving the Ramsey Problem with Diﬀerent Approximations . 2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 Appendix I: The Proof of Proposition 2 and 3 . . . . . . . . . 2.7 Appendix II: Dynamic Programming for the Stochastic Version

i

CONTENTS 3 The Estimation and Evaluation of the Stochastic Dynamic Model 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 The Estimation Methods . . . . . . . . . . . . . . . . . . . . . 3.4 The Estimation Strategy . . . . . . . . . . . . . . . . . . . . . 3.5 A Global Optimization Algorithm: The Simulated Annealing 3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.7 Appendix: A Sketch of the Computer Program for Estimation

ii

52 52 53 55 57 58 60 60

**II The Standard Stochastic Dynamic Optimization Model 63
**

4 Real Business Cycles: Theory and the Solutions 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 4.2 The Microfoundation . . . . . . . . . . . . . . . . . 4.3 The Standard RBC Model . . . . . . . . . . . . . . 4.4 Solving Standard Model with Standard Parameters 4.5 The Generalized RBC Model . . . . . . . . . . . . . 4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . 4.7 Appendix: The Proof of Proposition 4 . . . . . . . 5 The 5.1 5.2 5.3 5.4 5.5 5.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 64 65 69 74 76 80 80 82 82 82 86 89 93 99

Empirics of the Standard Real Business Cycle Model Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimation with Simulated Data . . . . . . . . . . . . . . . . Estimation with Actual Data . . . . . . . . . . . . . . . . . Calibration and Matching to U. S. Time-Series Data . . . . The Issue of the Solow Residual . . . . . . . . . . . . . . . . Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6 Asset Market Implications of Real Business Cycles 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 6.2 The Standard Model and Its Asset Pricing Implications 6.3 The Estimation . . . . . . . . . . . . . . . . . . . . . . 6.4 The Estimation Results . . . . . . . . . . . . . . . . . . 6.5 The Evaluation of Predicted and Sample Moments . . . 6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . .

101 . 101 . 103 . 107 . 110 . 112 . 115

CONTENTS

iii

**III Beyond the Standard Model — Model Variants with Keynesian Features 116
**

7 Multiple Equilibria and History Dependence 7.1 Introduction . . . . . . . . . . . . . . . . . . . 7.2 The Model . . . . . . . . . . . . . . . . . . . . 7.3 The Existence of Multiple Steady States . . . 7.4 The Solution . . . . . . . . . . . . . . . . . . 7.5 Conclusion . . . . . . . . . . . . . . . . . . . . 7.6 Appendix: The Proof of Propositions 5 and 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 117 119 121 125 127 128

8 Business Cycles with Nonclearing Labor Market 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 8.2 An Economy with Nonclearing Labor Market . . . . 8.3 Estimation and Calibration for U. S. Economy . . . . 8.4 Estimation and Calibration for the German Economy 8.5 Diﬀerences in Labor Market Institutions . . . . . . . 8.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . 8.7 Appendix I: Wage Setting . . . . . . . . . . . . . . .

131 . 131 . 135 . 142 . 151 . 159 . 163 . 164

9 Monopolistic Competition, Nonclearing Markets and Technology Shocks 171 9.1 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 9.2 Estimation and Calibration for U.S. Economy . . . . . . . . . 175 9.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 9.4 Appendix: Proof of the Proposition . . . . . . . . . . . . . . 184 10 Conclusions 186

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .δ Surface of the Objective Function for ML Estimation The θ − α Surface of the Objective Function for ML Estimation Simulated and Observed Series (non detrended) . . . . . . . . . . . . . . . . . . . .5 2. . . . . . The Stochastic Solution to the Benchmark RBC Model for the Standard Parameters . . . . . . .3 4.4 2. The Solow Residual: standard (solid curve) and corrected (dashed curve) . . . . . . . . . . . . . . . . . . . . . .3 5.2 5. . . . . . . . . . . . . . . . . .3 2. . . . . . . . Simulated and Observed Series (non detrended) . . . . . . . . . Path of Control . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 6. .2 4. . . . . Value Function . . . . . . . . . . . . . . Value function for the general model . . . . . . . . .1 4. . . . . . 75 .1 5. . . . 114 The Adjustment Cost Function . . . . . 79 . . . . . . . . 79 85 85 91 92 97 99 The β . . . Paths of the Choice Variables C and N (depending on K) . . . Value Function obtained from the Linear-quadratic Solution .5 5. . . . .1 6. Predicted and Actual Series: all variables HP detrended (except for excess equity return) . . . . . . . . . . The Linear-quadratic Solution in Comparison to the Exact Solution . 40 42 44 45 46 47 51 . . . The Deterministic Solution to the Benchmark RBC Model for the Standard Parameters . . . . Sample and Predicted Moments with Innovation Given by Corrected Solow Residual .4 5. . . .1 2. .7 4. . . . . . 113 The Second Moment Comparison:all variables detrended (except excess equity return) . . . . . .2 2. . . . . . . . . . . . . . . . . . . . . . . .2 7. 75 . . . . .1 The Fair-Taylor Solution in Comparison to the Exact Solution The Log-linear Solution in Comparison to the Exact Solution .List of Figures 2.6 2. . . . . . Approximated value function and ﬁnal adaptive grid for our Example . . . . . . . . . . 122 iv . . . . . . .4 5.

.S. versus Germany (data series are detrended by the HP-ﬁlter) . 150 Comparison of Macroeconomic Variables U. . 158 A Static Version of the Working of the Labor Market . . . . . . 124 The Welfare Performance of three Linear Decision Rules . . . Case . 122 Multiplicity of Equilibria: f(i) function . .7 9. . . . . . . .4 8. . .6 8. . . .1 8. . . S.2 8.S. 169 Simulated Economy versus Sample Economy: U. .1 v The Derivatives of the Adjustment Cost . . . 165 Welfare Comparison of Model II and III . . . . . . . . .5 8. . 181 . . 126 Simulated Economy versus Sample Economy: U. . .3 7. . . . 153 Simulated Economy versus Sample Economy: German Case . . .LIST OF FIGURES 7. . S. . . 157 Comparison of demand and supply in the labor market . Case . versus Germany 152 Comparison of Macroeconomic Variables: U. . . .2 7.4 8.3 8. . . . . .

. . . . . . . . . . . . . .4 8. . . . . . . .6 5. . . . . . . . . . . . 84 88 88 89 90 95 98 106 110 110 111 111 The Parameters in the Logistic Function . .5 5. Summary of Estimation Results . . 124 The Multiple Steady States . Estimation with the NIPA Data Set . .7 6. . . . .3 5. . . . . . F −Statistics for Testing Exogeneity of Solow Residual The Cross-Correlation of Technology . . . . . . . . . Asset Pricing Implications . . . Parameters used for Calibration (German Economy) . . . . . . . . . . . .1 9.5 7. . . .2 4.1 6. . Economy . . . . . . . . . . .1 8. . . . . . . . . . . . . . . . . . . . 78 GMM and ML Estimation Using Simulated Data . . . . . . . . . . . . . .182 vi . . . . . . .4 6.1 5. .3 8. . . . . . . . . . . The Standard Deviations (U. .1 2. .2 6. . . . . 179 The Correlation Coeﬃcients of Temporary Shock in Technology. . . . 121 The Standard Parameters of RBC Model . . . . Calibration of Real Business Cycle Model . . . Parameterizing the Standard RBC Model . . . . . . . . .List of Tables 2. . . . . . . . . . . . . . . . . . .1 4. . . . . . . . .2 5. . . . . . . Matching the Sharpe-Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 8. . . . . . . . . . . 145 147 154 155 156 Calibration of the Model Variants . 74 Parameterizing the General Model . . . . . .S. . . . . . . . . . .2 5. Calibration of the Model Variants: German Economy . 39 Number of nodes and errors for our Example . . . . . . . . . . . . . . Calibration of the Model Variants: U. .S. . . . . . . . .4 5. . . . . .3 6.5 9. .1 7. . . . . . . . . 51 Parameterizing the Standard RBC Model . . . . . . . . . . . . . . . . 125 Parameters Used for Calibration . . . Asset Market Facts and Real Variables Summary of Models . . . Estimation with Christiano’s Data Set .3 8.2 7. . . . . . . .2 Parameterizing the Prototype Model . versus Germany) . . .

We thank Uwe K¨ller for research assistance and Gaby Windo horst for editing and typing the manuscript. Some chapters of the book have also been presented at the annual conference of the American Economic Association. New School University. u James Ramsey. Colombia University. As other recent approaches to dynamic macroeconomics. Buz Brock. Chinese University of HongKong. Stefan Mittnik. Foscari University. Peter Flaschel. Bejing University. Vienna. New York. University of Aix-en-Provence. New York. Michael Woodford and colleagues of our universities. 1 . in particular if intertemporal behavior of economic agents is involved. stochastic dynamic macromodels are diﬃcult to solve and to estimate. Venice. Bejing. we also build on intertemporal economic behavior of economic agents but stress Keynesian features more than other recent literature in this area. Thus. Science and Technology is gratefully acknowledged. Malte Sieveking. University of Technology. Lars Gr¨ne. The material of this book has been presented by the authors at several universities. Tsinghua University. beside addressing important macroeconomic issues in a dynamic framework another major focus of this book is to discuss and apply solution and estimation methods to models with intertemporal behavior of economic agents. Financial support from the Ministry of Education. In general.Preface This book intends to contribute to the study of alternative paradigms in macroeconomics. Jean-Paul Benassy. We are grateful for comments by the participants of those conferences. City University of HongKong and European Central Bank. and Society of Nonlinear Dynamics and Econometrics. Chapters of the book have been presented as lectures at Bielefeld University. We are also grateful for discussions with Toichiro Asada. Ray Fair. Richard Day. Society of Computational Economics.

but introducing monopolistic competition and sticky prices and wages into the model. In our book. has become a major paradigm in macroeconomics. by stressing Keynesian features in a model with production and capital accumulation. we do not presume clearing of all markets in all periods. It is well known that the standard DGE model fails to replicate essential product. In contrast to the traditional Keynesian macromodels such variants also presume dynamically optimizing agents and market clearing1 . We will discuss this issue in chapter 8. It has been applied in numerous ﬁelds of economics. 1 It should be noted that the concept of market clearing in recent New Keynesian literature is not unambiguous. diﬀerent from the DGE model. but sluggish wage and price adjustments. such as technology shocks. the Real Business Cycle Model. its competitive or monopolistic variants. Its essential features are the assumptions of intertemporal optimizing behavior of economic agents. Yet. which is now commonly called New Keynesian macroeconomics. in numerous papers and in a recent book Woodford (2003) has worked out this new paradigm in macroeconomics. competitive markets and price-mediated market clearing through ﬂexible wages and prices. we demonstrate that even with dynamically optimizing agents not all markets may be cleared. monetary and government spending shocks variation in tax rates or shifts in preferences generate macro ﬂuctuations. As in the monopolistic competition variant of the DGE model we permit nominal rigidities. In particular.Introduction and Overview The dynamic general equilibrium (DGE) model. labor market and asset market characteristics. In this type of stochastic dynamic macromodeling only real shocks. Recently Keynesian features have been built into the dynamic general equilibrium (DGE) model by preserving its characteristics such as intertemporally optimizing agents and market clearing. 2 . in particular its more popular version.

3 Solution and Estimation Methods Whereas models with Keynesian features are worked out and stressed in the chapters of part III of the book. Among the well-known methods are the perturbation and projection methods (Judd (1998)). the parameterized expectations approach (den Haan and Marcet (1990)) and the dynamic programming approach (Santos and Vigo Aguiar (1998) and Gr¨ne and Semmler (2004a)). Solution methods are presented in chapters 1-2 whereas estimation methods along with calibration. After a discussion on the variety of approximation methods. The method. Often the methods use a smooth approximation of ﬁrst order conditions. We will also compare those methods with the dynamic programming approach. which has been written into a GAUSS procedure. Part I of the book can be regarded as the technical preparation for our theoretical arguments developed in this volume. part I and II provide the ground work for those later chapters. Therefore one has to rely on an approximate solution. When an exact u and analytical solution to a dynamic optimization problem is not attainable and one has to use numerical methods. Usually. In part I and II of the book we build extensively on the basics of stochastic dynamic macroeconomics. In this book. the log-linear approximation method and the linearquadratic approximation method. These methods are subsequently applied in the remaining chapters of the book. Recently. . which may also have to be computed by numerical methods. which is a prerequesit for a proper empirical assessment of the models treated in our book. three types of approximation methods can be found in the literature: the Fair-Taylor method. A solution method with higher accuracy often requires more complicated procedures and extensive computation time. there have been developed numerous methods to solve stochastic dynamic decision problems. we introduce a method. has the advantage of short computation time and easy implementation without sacriﬁcing too much accuracy. Solving stochastic dynamic optimization models has been an important research topic in the last decade and many diﬀerent methods have been proposed. Given these two types of ﬁrst-order conditions. which will be repeatedly used in the subsequent chapters. are introduced in chapter 3. an exact and analytical solution of a dynamic decision problem is not attainable. in order to allow for an empirical assessment of stochastic dynamic models we focus on approximate solutions that are computed from two types of ﬁrst-order conditions: the Euler equation and the equation derived from the Lagrangian. Here we provide a variety of technical tools to solve and estimate stochastic dynamic optimization models. the current methods of empirical assessment.

4 such as the Euler equation. Sometimes, as, for example, in the model of chapter 7 smooth approximations are not useful if the value function is not diﬀerentiable and thus is non-smooth. A method such as employed by Gr¨ne u and Semmler (2004a) can then be used. There has been less progress made regarding the empirical assessment and estimation of stochastic dynamic models. Given the wide application of stochastic dynamic models expected in the future, we believe that the estimation of such type of models will become an important research topic. The discussion in chapters 3-6 can be regarded as an important step toward that purpose. As we will ﬁnd, our proposed estimation strategy requires to solve the stochastic dynamic optimization model repeatedly, at various possible structural parameters searched by a numerical algorithm within the parameter space. This requires that the solution methods adopted in the estimation strategy should be as little time consuming as possible while not losing too much accuracy. After comparing diﬀerent approximation methods, we ﬁnd that the proposed methods of solving stochastic dynamic optimization models, such as used in chapters 3 - 6 most useful. We also will explore the impact of the use of diﬀerent data sets on the calibration and estimation results.

**RBC Model as a Benchmark
**

In the next part of the book, in part II, we set up a benchmark model, the RBC model, for comparison, in terms of either theory or empirics. The standard RBC model is a representative agent model, but it is constructed on the basis of neoclassical general equilibrium theory. It therefore assumes that all markets (including product, capital and labor models) are cleared in all periods regardless of whether the model refers to the short- or the long-run. The imposition of market clearing requires that prices are set at an equilibrium level. At the pure theoretical level, the existence of such general equilibrium prices can be proved under certain assumption. Little, however, has been told how the general equilibrium can be achieved. In an economy in which both ﬁrms and households are price-takers, implicitly an auctioneer is presumed to exist who adjusts the price towards some equilibrium. Thus, the way of how an equilibrium is brought about is essentially a Walrasian tˆtonnement process. a Working with such a framework of competitive general equilibrium is elegant and perhaps a convenient starting point for economic analysis. It nevertheless neglects many restrictions on the behavior of agents, the trading process and the market clearing process, the implementation of technology and the market structure, among many others. In part II of this volume,

5 we provide a thorough review of the standard RBC model, the representative stochastic dynamic model of competitive general equilibrium type. The review starts with laying out microfoundation, and continues to discuss a variety of empirical issues, such as the estimation of structural parameters, the data construction, the matching with the empirical data, its asset market implications and so on. The issues explored in this part of the book provide the incentives to introduce Keynesian features into a stochastic dynamic model as developed in Part III. Meanwhile, it also provides a reasonable ground to judge new model variants by considering whether they can resolve some puzzles as explored in part II of the book.

**Open Ended Dynamics
**

One of the restrictions in the standard RBC model is that the ﬁrm does not face any additional cost (a cost beyond the usual activities at the current market prices) when it makes an adjustment on either price or quantity. For example, changing the price may require the ﬁrm to pay a menu cost and also, more importantly, a reputation cost. It is the cost, arising from price and wage adjustments that has become an important focus of New Keynesian research over the last decades. 2 However, adjustment cost may also come from a change in quantity. In a production economy increasing output requires the ﬁrm to hire new workers and add new capacity. In a given period of time, a ﬁrm may ﬁnd more and more diﬃculties to create new additional capacity. This indicates that there will be an adjustment cost in creating capacity (or capital stock via investment), and further such adjustment cost may also be an increasing function of the size of investment. In chapter 7, we will introduce adjustment costs into the benchmark RBC model. This may bring about multiple equilibria toward which the economy may move. The dynamics are open ended in the sense that it can move to low level, or high level of economic activity.3 Such an open ended dynamics is certainly one of the important feature of Keynesian economics. In recent times such open ended dynamics have been found in a large number of dynamic models with intertemporal optimization. Those models have been called indeterminacy and multiple equilibria models. Theoretical models of this type are studied in Benhabib and Farmer (1999) and Farmer (2001), and an empirical assessment is given in Schmidt-Grohe (2001). Some of the models

Important papers in this reserach line are, for example, Calvo (1983) and Rotemberg (1982). For a recent review, see Taylor (1999) and Woodford (2003, ch. 3). 3 Keynes (1936) discusses the possibility of such an open ended dynamics in chapter 5 of his book.

2

6 are real models, RBC models, with increasing returns to scale and/or more general preferences than power utility that generate indeterminacy. Local indeterminacy and globally multiplicity of equilibria can arise here. Others are monetary macro models, where consumers’ welfare is aﬀected positively by consumption and cash balances and negatively by the labor eﬀort and an inﬂation gap from some target rates. For certain substitution properties between consumption and cash holdings those models admit unstable as well as stable high level and low level steady states. There also can be indeterminacy in the sense that any initial condition in the neighborhood of one of the steady-states is associated with a path toward, or away from, that steady state, see Benhabib et al. (2001). Overall, the indeterminacy and multiple equilibria models predict an open ended dynamics, arising from sunspots, where the sunspot dynamics are frequently modeled by versions with multiple steady state equilibria, where there are also pure attractors (repellors), permitting any path in the vicinity of the steady state equilibria to move back to (away from) the steady state equilibrium. Although these are important variants of macrodynamic models with optimizing behavior, as, however, recently has been shown4 indeterminacy is likely to occur only within a small set of initial conditions. Yet, despite such unsolved problems the literature on open ended dynamics has greatly enriched macrodynamic modeling. Pursuing this line of research we introduce a simple model where one does not need to refer to model variants with externalities and (increasing returns to scale) and/or to more elaborate preferences to obtain such results. We show that due to the adjustment cost of capital we may obtain non-uniqueness of steady state equilibria in an otherwise standard dynamic optimization version. Multiple steady state equilibria, in turn, lead to thresholds separating diﬀerent domains of attraction of capital stock, consumption, employment and welfare level. As our solution shows thresholds are important as separation points below or above which it is advantages to move to lower or higher levels of capital stock, consumption, employment and welfare. Our model version thus can explain of how the economy becomes history dependent and moves, after a shock or policy inﬂuences, to a low or high level equilibria in employment and output.

Nonclearing Markets

A second important feature of Keynesian macroeconomics concerns the modeling of the labor market. An important characteristic of the DGE model

4

See Beyn, Pampel and Semmler (2001) and Gr¨ne and Semmler (2004a). u

7 is that it is a market clearing model. For the labor market the DGE model predicts an excessive smoothness of labor eﬀort in contrast to empirical data. The low variation in the employment series is a well-known puzzle in the RBC literature.5 It is related to the speciﬁcation of the labor market as a cleared market. Though in its structural setting, see, for instance, Stockey et al. (1989), the DGE model speciﬁes both sides of a market, demand and supply, the moments of the macro variables of the economy are, however, generated by a one-sided force due to its assumption on wage and price ﬂexibility and thus equilibrium in all markets, including output, labor and capital markets. The labor eﬀort results only from the decision rule of the representative agent to supply labor. In our view there should be no restriction for the other side of the market, the demand, to have eﬀects on the variation of labor eﬀort. Attempts have been made to introduce imperfect competition features into the DGE model.6 In those types of models, producers set the price optimally according to their expected market demand curve. If one follows a Calvo price setting scheme, there will be a gap between the optimal price and the existing price. However, it is presumed that the market is still cleared since the producer is assumed to supply the output according to what the market demands for the existing price. This consideration also holds for the labor market. Here the wage rate is set optimally by the household according to the expected market demand curve for labor. Once the wage has been set, it is assumed to be rigid (or adjusted slowly). Thus, if the expectation is not fulﬁlled, there will be a gap again between the optimal wage and existing wage. Yet in the New Keynesian models the market is still assumed to be cleared since the household is assumed to supply labor whatever demand is at the given wage rate.7 In order to better ﬁt the RBC model’s predictions with the labor market data, search and matching theory has been employed8 to model the labor market in the context of an RBC model. Informational or institutional search frictions may then explain equilibrium unemployment rates and its rise. Yet, those models still have a hard time to explain the shift of unemployment rates such as, for example, experienced in Europe since the 1980s, as equilibrium unemployment rate. 9

5

A recent evaluation of this failure of the RBC model is given in Schmidt-Grohe (2001). Rotemberg and Woodford (1995, 1999), King and Wollman (1999), Gali (2001) and Woodford (2003) present a variety of models of monopolistic competition with price and wage stickiness. 7 Yet, as we have mentioned above, this deﬁnition of market clearing is not unambiguous. 8 For further details, see ch. 8. 9 For an evaluation of the search and matching theory as well as the role of shocks to explain the evolution of unemployment in Europe, see Ljungqvist and Sargent (2003)and

6

it has been shown that the Solow residual can be expressed by some exogenuous variables. the standard Solow residual can be contaminated if the cyclical variation in factor utilization are signiﬁcant. On the other hand. Technology and Demand Shocks A further Keynesian feature of macromodels concerns the role of shocks. Second.8 As concerns the labor market along Keynesian lines we pursue an approach that allows for a nonclearing labor market. for example demand shocks arising from military spending (Hall 1988) and changed monetary aggregates (Evan 1992). When the price has been set. There is no reason why the ﬁrm cannot choose the optimal quantity rather than what the market demands. the price is then given to the supplier when deciding on the quantities. capital and employment. have to reoptimize when facing constraints in supplying labor in the market. As we will show in chapters 8 and 9 such a multiple stage optimization model will allow for larger volatility of the employment rates as compared to the standard RBC model. which are unlikely to be related to factor productivity. Mankiw (1989) and Summers (1986) have argued that such a measure often leads to excessive volatility in productivity and even the possibility of technological regress. especially when the optimum quantity is less than the quantity demanded by the market. also a framework to study the secular rise or fall of unemployment. and provides. ﬁrms may have constraints on the product markets. There are several reasons to distrust the standard Solow residual as a measure of technology shock. we move beyond this type of literature. both subject to optimal behavior. 1995) and Uhlig and Xu (1996). Danthine and Donaldson (1990. both of which seem to be empirically implausible. Since the Solow residual is computed on the basis of observed output. 10 . In the standard DGE model technology shocks are the driving force of the business cycles which is assumed to be measured by the Solow-residual. In our view the decisions with regard to price and quantities can be made separately. Benassy (1995. and is sticky for a certain period. Although our approach owes a substantial debt to disequilibrium models. This consideration will allow for nonclearing markets. Third. First. it is presumed that all factors are fully utilized.10 Our proposed new model helps to study labor market problems by being based on adaptive optimization where households. Considering that the Solow-residual cannot be trusted as a measure of Blanchard (2003) There is indeed a long tradition of macroeconomic modeling with speciﬁcation of the nonclearing labor markets. 2002). Malinvaud (1994). for instance. after a ﬁrst round of optimization. see.

the technology shock is negatively correlated with employment and therefore the RBC model loses its major driving force. A third strategy uses a appropriate restriction in a VAR estimate to identify a technology shock. Another strategy is to construct an economic model so that one could compute the factor utilization from the observed variables (see Basu and Kimball 1997 and Basu. see Gali (1999) and Francis and Ramey (2001. . The model also implies an excessively high correlation between consumption and employment while empirical data only indicates a week correlation. Chapter 6 studies in details the asset price implication of the RBC model. as above mentioned. chapters 8-9 of part III of the book are mainly concerned with the latter two puzzles. researchers have now developed diﬀerent methods to measures technology shocks correctly. Eichenbaum and Rebelo 1996). A positive technology shock will increase output. Puzzles to be Resolved In order to sum up. All these methods are focused on the computation of factor utilization. A typical example is to employ electricity use as a proxy for capacity utilization (see Burnside. The ﬁrst type of puzzle is related to the asset market and is often discussed under the heading of the equity premium puzzle. we may say that the standard RBC model has left us with major puzzles. 2003).11 Third. at least at business cycle frequency. As Gali (1999) and Francis and Ramey (2001. There are basically three strategies. It is well known that one of the major celebrated arguments of real business cycles theory is that technology shocks are pro-cyclical. see chapters 5 and 9. Extensive research has attempted to improve on this problem by elaborating on more general preferences and technology shocks. The RBC model generally predicts an excessive smoothness of labor eﬀort in contrast to empirical data. The second puzzle is. Yet this result is obtained from the empirical evidence. a negative or almost zero correlation. The ﬁrst strategy is to use an observed indicator to proxy for unobserved utilization. 11 This problem of excessive correlation has. 2003) we also ﬁnd that if one uses the corrected Solow-residual. not suﬃciently been studied in the literature. It will be explored in Chapter 5 of this volume. consumption and employment.9 technology shock. related to the labor market. Fernald and Kimball 1998). One might name it the technology puzzle. in which the technology shock is measured by the standard Solow-residual. Whereas the ﬁrst puzzle is studied in chapter 6 of the book. to our knowledge. the RBC model predicts a signiﬁcantly high positive correlation between technology and employment whereas empirical research demonstrates.

allows us to make some progress to match better the model with time series data. reoptimize once they have perceived and learned about market constraints. where agents. We want to argue that the two traditions can indeed be complementary rather than exclusive. adaptive optimization. The main new method we are using here to reconcile the two traditions is a multiple stage optimization behavior. We will ﬁnd that one can improve on the labor market and technology puzzles once we combine these two approaches. we want to note that the research along the line of Keynesian micro-founded macroeconomics has been historically developed by two approaches: one is the tradition of non-clearing market (or disequilibrium analysis).10 Finally. These two approaches will be contrasted in the last two chapters. and the other is the New Keynesian analysis of monopolistic competition and sticky (or sluggish) prices. Thus. and therefore they can somewhat be consolidated into a more complete system of price and quantity determination within the Keynesian tradition. we hope. Chapters 8 and 9. adaptive optimization permits us to properly treat the market adjustment for nonclearing markets which. .

Part I Solution and Estimation of Stochastic Dynamic Models 11 .

and therefore a numerical algorithm is called 12 . the parameterized expectations approach (den Haan and Marcet (1990)) and the dynamic programming approach (Santos and Vigo Aguiar (1998) and Gr¨ne and Semmler (2004a)).Chapter 1 Solution Methods of Stochastic Dynamic Models 1. Given these two types of ﬁrst-order conditions. In most cases. In u this book. Among the well-known methods are the perturbation and projection methods (Judd (1996)). the log-linear approximation method and the linear-quadratic approximation method. To understand the structure of this decision problem we describe it in terms of a recursive decision problem of a dynamic programming approach. there have been developed numerous methods to solve stochastic dynamic decision problems. in order to allow for an empirical assessment of stochastic dynamic models we focus on approximate solutions that are computed from two types of ﬁrst-order conditions: the Euler equation and the equation derived from the Lagrangian. an exact and analytical solution of the dynamic programming decision is not attainable.1 Introduction The dynamic decision problem of an economic agent whose objective is to maximize his or her utility over an inﬁnite time horizon is often studied in the context of a stochastic dynamic optimization model. an approximate solution cannot be derived analytically. in most of the cases. three types of approximation methods can be found in the literature: the Fair-Taylor method. Therefore one has to rely on an approximate solution. which may also have to be computed by numerical methods. Still. we discuss some solution methods frequently employed to solve the dynamic decision problem. Recently. Thereafter.

Section 4 brieﬂy reviews the diﬀerent approximation methods in the existing literature. zt ).2) Above.3) .13 for to facilitate the computation of the solution. which uses the value function for iteration. Let us ﬁrst make several remarks regarding the formulation of the above problem. One popular assumption regarding the dynamics of zt is to assume that zt follow an AR(1) process: zt+1 = P zt + p + ǫt+1 (1. (1. Section 3 establishes the two ﬁrst-order conditions to which diﬀerent approximation methods can be applied. Section 5 presents our new algorithm for dynamic optimization. First. xt is a vector of m state variables at period t. we discuss those various approximation methods and then propose another numerical algorithm that can help us to compute approximate solutions.1) subject to xt+1 = F (xt . The algorithm is used to compute the solution path obtained from the method of linear-quadratic approximation with the ﬁrstorder condition derived from the Lagrangian. zt is a vector of s exogenuous variables whose dynamics does not depend on xt and ut . We will show that the standard recursive method may encounter diﬃculties when being applied to compute a dynamic model. ut ) (1. 1) denotes the discount factor. While the algorithm takes full advantage of an existing one (Chow 1993). Et is the mathematical expectation conditional on the information available at time t and β ∈ (0. We start in Section 2 with the standard recursive method. this formulation assumes that the uncertainty of the model only comes from the exogenuous zt . ut . a GAUSS procedure that implements our suggested algorithm is presented in Appendix II. it overcomes the limitations given by the Chow (1993) method. The remainder of this chapter is organized as follows. ut is a vector of n control variables. Finally. In this chapter. 1.2 The Standard Recursive Method We consider a representative agent whose objective is to ﬁnd a control (or decision) sequence {ut }∞ such that t=0 ∞ {u}t=0 max E0 ∞ t=0 β t U (xt . Appendix I provides the proof of the propositions in the text.

Third.5) as V (x0 . ). With such a policy function (or control equation). given the initial condition (x0 . z0 ) = = {ut }∞ t=0 max U (x0 . z0 ) ≡ max E0 ∞ {ut }t=0 t=0 β t U (xt . ut ) t=1 ∞ . Since the problem repeats itself . this formulation is not restrictive to those structure models with more lags or leads. we ﬁrst rewrite (1.14 where ǫt is independently and identically distributed (i. z0 ) and the exogenuous sequence {zt }∞ generated by (1. the initial condition (x0 .5) Expression (1.d.7) represents a dynamic programming problem which highlights the recursive structure of the decision problem.5) with the initial condition (x1 . z) into the control u. (1. z1 ). ut ) (1.2). u0 ) + βE0 β t U (xt . we could rewrite (1. z0 ) = max {U (x0 .i. For this purpose. the sequences of state {xt }∞ and control {ut }∞ can be generated by iterating t=1 t=0 the control equation ut = G(xt .4) as well as the state equation (1. In every period t. The problem to solve a dynamic decision problem is to seek a timeinvariant policy function G mapping from the state and exogenuous (x.6) can be expressed as being β times the value V as deﬁned in (1.6) {ut }∞ t=0 max β t U (xt+1 . we ﬁrst deﬁne a value function V : ∞ V (x0 . z1 )]} ∞ {ut }t=0 (1. z0 ) in this formulation is assumed to be given. It is well known that a model with ﬁnite lags or leads can be transformed through the use of auxiliary variables into an equivalent model with one lag or lead. Therefore.3). Second. ut+1 ) t=0 It is easy to ﬁnd that the second term in (1. u0 ) + βE0 [V (x1 .7) The formulation of equ. t=1 To ﬁnd the policy function G by the recursive method. the planner faces the same decision problem: choosing the control variable ut that maximizes the current return plus the discounted value of the optimum plan from period t+1 onwards. (1. zt ) (1.5) as follows: ∞ V (x0 . u0 ) + E0 U (x0 .5) could be transformed to unveil its recursive structure.

z) = max {U (x. u) + βE [Vj (˜. Otherwise. update Vj and go to step 1.3). z) = max {U (x.9) In terms of an algorithm. Judd (1999). see Taylor and Uhlig (1990). z )]} x ˜ u (1. Vj . Obviously.9).7) as V (x. 1989). Equation (1.15 every period the time subscripts become irrelevant. We thus can write (1. we need to ﬁnd the optimum u that maximize the right side of equ. For the later development. Recent methods of numerically solving the above discrete time Bellman equation (1.9).9) can be found in Santos and Vigo-Aguiar (1998) and Gr¨ne and Semmler (2004a). z )]} x ˜ (1. If Vj+1 = Vj . u . For a review up to 1990. Stockey et al. which in reality we do not know in advance.8) u where the tilde (∼) over x and z denotes the corresponding next period values.1 As above stated in 1 Kendrick (1981) can be regarded as a seminal work in this ﬁeld. Under some regularity conditions regarding the function U and F . u) + βE [V (˜. This task makes it diﬃcult to write a closed form algorithm for iterating the Bellman equation. (1. However. they are subject to (1. the convergence of this algorithm is warranted by the contraction mapping theorem (Sargent 1987. Unfortunately. If we know the function V. 1. the method can be described as follows: • Step 1. named after Richard Bellman (1957). stop.2) and (1. Guess a diﬀerentiable and concave candidate value function. see Chow (1997). • Step 2.8) is said to be the Bellman equation. Ljungqvist and Sargent (2000) and Marimon and Scott (1999). all these considerations are based on the assumption that we know the function V. Use the Bellman equation to ﬁnd the optimum u and then compute Vj+1 according (1.3 The First-Order Conditions The last two decades have observed various methods of numerical approximation to solve the problem of dynamic optimization. Researchers are therefore forced to seek diﬀerent numerical approximation methods. the diﬃculty of this algorithm is that in each Step 2. • Step 3. we then can solve u via the Bellman equation. The typical method in this case is to construct a sequence of value functions by iterating the following equation: Vj+1 (x.

The above formula becomes ∂V (x.16 this book we want to focus on the ﬁrst-order conditions. The ﬁrst-order condition for maximizing the right side of the equation takes the form: ∂U (x.11) This equation is often called the Benveniste-Scheinkman formula. there are still models in which such transformation is not feasible. u) ∂V ∂F + βE (x. z ) x ˜ ∂x ∂x ∂x ∂F (1. z) ∂U (x. Assume V is diﬀerentiable and thus from (1. 1. One can ﬁnd two approaches: one is to use the Euler equation and the other the equation derived from the Lagrangian.3. u) + βE (x.2 Assume ∂F/∂x = 0. z ) = 0 x ˜ ∂u ∂u ∂x ˜ (1. z)) ∂V ∂F = + βE (x. G(x. after some transformation. z). u.12) where the tilde (∼) over u again denotes the next period value with respect to u.10) gives rise to the Euler equation: ∂F ∂U ∂U (x. one must require ∂F/∂x = 0.13) (1. z) ∂U (x. z)) = ∂x ∂x Substituting this formula into (1. in which x does not appear in the transition law so that ∂F/∂x = 0 is satisﬁed. Note that to use the above Euler equation as the ﬁrst-order condition for deriving the decision sequence. z) (˜. We will show this technique in the next chapter using a prototype model as a practical example. u. that are used to derive the decision sequence. G(x.10) The objective here is to ﬁnd ∂V /∂x.8). However. z) (˜. G(x.8) it satisﬁes ∂V (x. In economic analysis. u) = 0 x ˜ ∂u ∂u ∂x ˜ (1. . 2 named after Benveniste and Scheinkman (1979). z) (˜. one often encounters models.1 The Euler Equation We start from the Bellman equation (1.

14) .3.14) and (1. using (1. ∂ut ∂ut (1. Yet. zt )] where λt . ut .14) (1. we ﬁnd that there is an unobservable variable λt appearing in the system. one does not have to transform the model into the setting that ∂F/∂x = 0. xt and ut will yield equation (1. ut and zt . zt ) ∂U (xt . Also as we will see in the next chapter.4 1. called 3 See also Chow (1997).3) and the ﬁrst-order condition derived either as Euler equation (1. This is an important advantage over the Euler equation. ut ) − β τ +1 λ′t+1 [xt+1 − F (xt . we can deﬁne the Lagrangian L: ∞ L = E0 t=0 β τ U (xt . the Lagrangian multiplier. ∂xt ∂xt+1 ∂F (xt .15).13) or from the Lagrangian (1. Yet. . ut ) + Et λt+1 = 0. zt+1 ) ∂U (xt . is a m × 1 vector. ut . ut+1 . these two types of ﬁrst-order conditions are equivalent when we appropriately deﬁne λt in terms of xt .2). the exogenuous (1. t=0 t=0 ∞ {ut }∞ and {λt }t=0 are implied given the initial condition (x0 .15) In comparison with the Euler equation. t=0 mostly such a system is highly nonlinear.4.17 1.1) and (1. One popular approach as suggested by Fair and Taylor (1983) is to use a numerical algorithm.2). z0 ) .2 Deriving the First-Order Condition from the Lagrangian Suppose for the dynamic optimization problem as represented by (1. {zt+1 }∞ .15) form a dynamic system from which the transition sequences {xt+1 }∞ .1 Approximation and Solution Algorithms The Gauss-Seidel Procedure and the Fair-Taylor Method The state (1. ut ) + βEt λt+1 = λt . Setting the partial derivatives of L to zero with respect to λt . 1. and therefore the solution paths usually are impossible to be computed directly.2) as well as ∂F (xt+1 .3 This further implies that they can produce the same steady states when being evaluated at their certainty equivalence forms.(1.

4 (0) (1) (0) If we use (1. ψ) . including both state xt and control ut .16) (1.t+1 = gm+n (yt .18) as follows: y1.19) (1.19) .16) . zt .18) Here yt is the vector of endogenuous variables with m + n dimensions. zt .(1. yt+1 . zt . Also note that in this formulation we left aside the expectation operator E. ψ) (1. λt and ut . ψ) y2.14) and (1.21) for the given yt+1 along with yt . to their expectation values (usually zero). . yi. go to Step 3. Call this guess yt+1 . the algorithm starts by setting t = 0 and proceeds as follows: • Step 1.t+1 = g1 (yt . and the sequence of exogenuous variable {zt }T . i = 1.21) where. can be solved numerically by an iterative technique. to which we shall now turn. Suppose the system can be be written as the following m + n equations: f1 (yt .4 ψ is the vector of structural parameter. ψ) = 0 f2 (yt . called Gauss Seidel procedure. m + n. This procedure will be repeated until the tolerance level is satisﬁed. yt+1 . ψ) = 0 . if there is any.t+1 = g2 (yt . fm+n (yt .(1. yt+1 . then there will be 2m+n equations and yt should include xt .t+1 is the ith element in the vector yt+1 . For the convenience of presentation... Set an initial guess on yt+1 . Otherwise compute yt+1 for the given yt+1 . the following discussion assumes only the Euler equation to be used. t=0 with T to be the prescribed time horizon of our problem. as suggested. yt+1 . Denote (1) this new computed value yt+1 . zt .. .5 The system. Compute yt+1 (0) according to (1. . If the distance between yt+1 and yt+1 is less than a prescribed (2) (1) tolerance level. zt . It is always possible to tranform the system (1. See also a similar formulation in Juillard (1996) with one lag and one lead.17) (1. • Step 2. Therefore the system is essentially not diﬀerent from the dynamic rational expectation model as considered by Fair and Taylor (1983).20) (1. ym+n. ψ) = 0 (1.15) as the ﬁrst-order condition.18 Gauss-Seidel procedure. yt+1 . 5 Our suggested model here is a more simpliﬁed version since we only take one lead. yt+1 . . . Given ∗ the initial condition y0 = y0 . zt . This can be done by setting the corresponding disturbance term. . 2.

Then. The second disadvantage of this method is the cost of computation. Update t by setting t = t + 1 and go to Step 1. If this is the case. Yet the initial condition for the dynamic decision problem is usually provided only by x0 (see equation our discussion in the last section). In particular. this approximation method can be applied to the ﬁrst-order condition either in terms of the Euler equation ¯ or derived from the Langrangean. one then uses the method of undetermined coeﬃcients to solve for the decision rule.1). we will investigate these issues more thoroughly. The algorithm will continue until t reaches T. The general idea of this method is to replace all the necessary equations in the model by approximations. Formally. t=0 There is no guarantee that convergence can always be achieved for the iteration in each period. This cost of computation makes it a diﬃcult candidate solving the dynamic optimization problem. let Xt be the variables. 2. and therefore the solution sequences {ut }T and {xt }T depend t=1 t=1 virtually on the initial condition. The third and the most important problem regards its accuracy of solution.2 The Log-linear Approximation Method Solving nonlinear dynamic optimization model with log-linear approximation has been widely used and well documented. t = 1. which include not only the initial state x0 but also the initial decisions u0 . This will produce a sequence T of endogenuous variable {yt }T . Note that the procedure starts with the given initial condition y0 . 100xt is the per¯ centage of Xt that it deviates from X.. T. chapter 7). In the next chapter when we turn to a practical problem.19 • Step 3. . which include both decision {ut }t=0 and t=0 state {xt }T . a damping technique can usually be employed to force convergence (see Fair 1984.4. which is also in the form of log-linear deviations. In principle. This has been proposed in particular by King et al. there might be a problem in accuracy. Considering that the weights of u0 could be important in the value of the objective function (1. . 3. One possible way to deal with this problem is to start with diﬀerent initial u0 . The procedure requires the iteration and convergence for each period t. 1. X the corresponding steady state. (1988) and Campbell (1994) in the context of Real Business Cycle models.. which are linear in the log-deviation form. Given the approximate log-linear system. ¯ xt ≡ ln Xt − ln X is regarded to be the log-deviation of Xt ..

the solution path does not require the initial condition of u0 and therefore it should be more accurate in comparison to the Fair-Taylor method.3 Linear-quadratic Approximation with Chow’s Algorithm Another important approximation method is the linear-quadratic approximation. Again in principle this method can be applied to the ﬁrst-order con- .20 Uhlig (1999) provides a toolkit of such method for solving a general dynamic optimization model. ¯ Xt ≈ Xext ext +ayt ≈ 1 + xt + ayt xt yt ≈ 0 (1. Log-linearize the necessary equations characterizing the equilibrium law of motion of the system. In Chapter 3. In some cases. Solving a nonlinear dynamic optimization model with log-linear approximation usually does not require a heavy computation in contrast to the Fair-Taylor method. the decision rule could be derived even analytically. Derive the steady state of the model.2).23) (1. Find the necessary equations characterizing the equilibrium law of motion of the system. Solve the log-linearized system for the decision rule (which is also in log-linear form) with the method of undetermined coeﬃcients. the process of log-linearization and solving for the undetermined coeﬃcients are not easy. It is certainly desirable to have a numerical algorithm available that can take over at least part of the analytical derivation process. the exogenuous equation (1. • Step 3. • Step 2. Uhlig (1999) suggests the following building block for such log-inearization.22) (1. On the other hand. 1.14) and (1. This requires ﬁrst parameterizing the model and then evaluating the model at its certainty equivalence form. These necessary equations should include the state equation (1.24) • Step 4.4.15). However.3) and the ﬁrstorder condition derived either as Euler equation (1.13) or from the Lagrangian (1. by assuming a log-linearized decision rule as expressed in (1. The general procedure involves the following steps: • Step 1.4). we will provide a concrete example to apply the above procedure and to solve a practical problem of dynamic optimization. and usually have to be accomplished by hand.

31) (1.25) We shall remark that the state equation here is slightly diﬀerent from the one as expressed by (1.1). ∂xt ∂xt ∂U (xt . ut ) + ǫt+1 (1. In other words. the Lagrangian L should be deﬁned as ∞ L = E0 t=0 ′ β t U (xt .21 dition either in terms of the Euler equation or derived from the Lagrangian. ut ) +β Et λt+1 = λt . Consequently.26) (1. it is only a special case of (1. xt and ut will yield equation (1.25) as well as ∂F (xt . ∂ut ∂ut (1. but subject to xt+1 = F (xt . Apparently.27) can be rewritten as K1 xt + K12 ut + k1 + βA Et λt+1 = λt ′ K2 ut + K21 xt + k2 + βC Et λt+1 = 0 ′ (1. ∂U (xt . The numerical properties of this approximation method have further been studied in Reiter (1997) and Kwan and Chow (1997).28) (1.29) (1.26) and (1.27) The linear-quadratic approximation assumes the state equations to be linear and the objective function to be quadratic.2).32) . Since the models in discrete time are more convenient for empirical and econometric studies. ut ) − β t+1 λt+1 [xt+1 − F (xt . Chow (1993) was among the ﬁrst who solves a dynamic optimization model with linear-quadratic approximation applied to the ﬁrst-order condition derived from the Lagrangian. ut ) ∂U (xt . ut ) = Axt + Cut + b + ǫt+1 (1.2) in Section 2. ut ) − ǫt+1 ] Setting the partial derivatives of L to zero with respect to λt . we here only consider the discrete time version. ut ) = K1 xt + K12 ut + k1 ∂xt ∂U (xt . he proposed a numerical algorithm to facilitate the computation of the solution. Suppose the objective of a representative agent can again be written as (1. equation (1. ut ) = K2 ut + K21 xt + k2 ∂ut F (xt . At the same time. ut ) +β Et λt+1 = 0.30) Given this linear-quadratic assumption. Chow’s method can be presented in both continuous and discrete time form. ut ) ∂F (xt .

Using these new H and h.38). The process will continue until convergence is achieved. the only derivation is to obtain the partial derivatives of U and F . Given the steady state. u∗ ). followed by iterating (1. the new H and h can be calculated by (1.35) .33). However. In a response to Reiter (1997). our above presentation follows the spirit of Kwan and Chow (1997) if we assume the linearization takes place around the steady states. the resulting decision rule is indeed nonlinear in xt . It should be noted that the algorithm suggested by Chow (1993) is much more complicated. Therefore. one calculates the new G and g by (1.38). Yet even this can be computed with a written procedure in a major software package. where u∗ is the initial guess on ut . Kwan and Chow (1997) propose a one-time linearization around the steady states.(1. Therefore. set the initial H and h. The procedure will continue until convergence of ut . 7 For instance.35) .22 Assume the transition laws of ut and λt take the linear form: ut = Gxt + g λt+1 = Hxt+1 + h (1. which must be accomplished by hand. For any given xt .(1. is diﬀerent from u∗ . as the result of iterating (1.37) (1. Given G and g as well as the initial H and h. In this sense.7 Despite this signiﬁcant advantage. in GAUSS. Thus. the analytical solution to G.36). 6 . one could use the procedure GRADP for deriving the partial derivatives. g and h is impossible.30) .38) Generally it is impossible to ﬁnd. First. H.6 In comparison to the log-linear approximation. the approximation will take place again. Chow’s method requires less derivations. the method.(1. Since the above algorithm is designed for any given xt . ut ). the approximation as represented by (1.35) and (1. Chow’s method can be a good approximation method only when the state equation is linear or can be transformed into a linear one.37) and (1.32) ﬁrst takes place around (xt . an iterative procedure can designed as follows. this time it will be around (xt . First.36) again.38).35) (1.34) Chow (1993) proves that the coeﬃcient matrices G and H and the vectors g and h satisfy G = −(K2 + βC HC)−1 (K21 + βC HA) g = −(K2 + βC HC)−1 (k2 + βC Hb + βC h) H = K1 + K12 G + βA H(A + CG) h = (K12 + βA HC)g + k1 + βA (Hb + h) ′ ′ ′ ′ ′ ′ ′ ′ (1. Otherwise. Chow’s method has at least three weeknesses. is less convenient comparing to other approximation method. G and g can then be calculated by (1. if ut calculated via the decision rule (1.35) and (1. as pointed by Reiter (1997).33) (1.36) (1.

2). Evaluating the ﬁrst8 Note that the good approximation to ∂F (xt .ut ) there is a similar problem. since A′ and C ′ are not good approximations to ∂F (xtt. needs to be accomplished by hand.¯) .15). Gong (1997) points out the same problem. the assumed state equation (1. one would expect that the number of solutions will increase with the increase of the dimensions of state space.3).26) and (1. if there are any.25). This indicates that the method does not require the assumption ∂F/∂x = 0.35) into (1.8 Reiter (1996) ∂x ∂u has used a concrete example to show this point. ¯ x ¯ ∂u The same problem of multiple solution should also exist for the log-linear approximation method.2) rather than (1. Second. as a part of state variables.14) and (1.37) gives a quadratic matrix equation in H.(1. the only derivation in applying our algorithm is to derive the steady state. . our suggested algorithm takes full advantage. and therefore the method does not require log-linear-approximation.32) will not be a good approximation to the non-approximated (1. Indeed.ut ) ∂xt should be 2 F xu ∂ 2 F (¯.38) may exhibit multiple solutions since inserting (1. Indeed.ut ) and ∂F (xtt. if we use an existing software procedure to compute the partial derivatives with respect to U and F . 1. the iteration with (1. Yet this will increase the dimension of state space and hence intensify the problem of multiple solutions.31) and (1. The approximation method we used here is the linear-quadratic approximation. in many cases.23 the linearized ﬁrst-order condition as expressed by (1. yet overcomes all the limitations occurring in Chow’s method. For ∂F (xtt.5 An Algorithm for the Linear-Quadratic Approximation In this section.25) is only a special case of (1.¯) xu (xt − x) + ∂ ∂ x(¯. which.1) . the ﬁrst-order condition is established by (1. Therefore.(1. The ﬁrst-order condition used for this algorithm is derived from the Lagrangian. One possible way to circumvent this problem is to regard the exogenuous variables. The established Proposition 1 below allows us to save further derivations when applying the method of undetermined coeﬃcients.9 Third.35) . This will create some diﬃculty when being applied to the model with exogenuous variables. we shall present an algorithm for solving a dynamic optimization model with the general formulation as expressed in (1.27). Since our state equation takes the form of (1.¯) (ut − ¯ ∂x2 ¯∂u 9 (¯ u u) + ∂F∂x.ut ) . This actually has been done by Chow and Kwan (1998) when the method has been applied to a practical problem.

14).43) The following is the proposition regarding the solution for (1.2) and (1. we are able to derive the steady states for xt . λt and zt .15) and (1. F21 xt + F22 ut + F23 Et λt+1 + F24 zt + f2 = 0.42) (1. Taking the ﬁrst-order Taylor approxi¯ ¯ ¯ ¯ mation around the steady state for (1.47) −1 −1 F13 (I − F12 N ) − HCN Q − QP = H(W + CR) + F13 (F14 + F12 R) (1.24 order condition along with (1.43) respectively.2). λ and z . Fux and Fuz . Fxx .3) at their certainty equivalence form. Q and h satisfy G = M + NH (1.45) g = Nh + m (1.49) . ut . (1.48) −1 h = F13 (I − F12 N ) − HCN − Im −1 −1 H(Cm + b) + Qp + F13 (f1 + F12 m) (1. u. H. Uxu . xt+1 = Axt + Cut + W zt + b. D.39) (1. A = Fx b = F (¯. z ) − Fx x − Fz z − Fu u x ¯ ¯ ¯ ¯ ¯ ¯ xx F11 = Uxx + β λF F13 = βA′ ¯ ¯ ¯ ¯ ¯ f1 = Ux + βA′ λ − F11 x − F12 u − F13 λ − F14 z ¯ F22 = Uuu + β λFuu ¯ ¯ f2 = Uu + βC ′ λ − F21 x − F22 u − F23 λ − F24 z ¯ ¯ ¯ C = Fu W = Fz ¯ F12 = Uxu + β λFxu ¯ xz F14 = β λF ¯ F21 = Uux + β λFux ′ F23 = βC ¯ F24 = β λFuz (1. Uux . u.40) (1. which we shall denote respectively as x. Then the solution of G. Fuu .41) Note that here we deﬁne Ux as ∂U/∂x and Uxx as ∂ 2 U/∂x∂x all to be evaluated at the steady state. where in particular.43): Proposition 1 Assume ut and λt+1 follow (1. Fxz . we obtain F11 xt + F12 ut + F13 Et λt+1 + F14 zt + f1 = λt . g.42) and (1. The similarity is applied to Uu .42) and (1.44) D = R + NQ (1.46) −1 −1 HCN H + H(A + CM ) + F13 (F12 N − Im )H + F13 (F11 + F12 M ) = 0 (1. The objective is to ﬁnd the linear decision rule and the Lagrangian function: ut = Gxt + Dzt + g λt+1 = Hxt+1 + Qzt+1 + h (1. Uuu .

in most cases. which is mostly the case in recent economic literature. For example.47).46). (1.47) can be written as a1 H 2 + a2 H + a 3 = 0 with the two solutions given by H1. Obviously. one can also easily identify the proper solution by relying on the economic meaning of λt .53) − F22 − F22 (F21 − (F24 − −1 F23 F13 F11 ). we shall rewrite (1.54) iterating (1. (1.6 A Dynamic Programming Algorithm In this section we describe a dynamic programming algorithm which enables us to compute optimal value functions as well as optimal trajectories 10 11 though multiple solutions may exist. Q and h directly according (1. (1.10 However. which is nonlinear (quadratic) in H. The solution to H is implied by (1.11 H becomes a scalar. the proposition allows us to solve G. −1 F23 F13 F12 − F22 −1 (f2 − F23 F13 f1 ). the solutions can be computed without iteration. 1.44).48) and (1.54) until convergence will give us a solution to H. λt is the shadow price of the capital which should be inversely related to the quantity of capital. the state equation is the capital stock. and therefore (1. Further. Therefore. In other words.25 where Im is the m × m identity matrix and N = M = R = m = −1 F23 F13 F12 − F22 −1 F23 F13 F12 −1 F23 F13 F12 −1 −1 −1 −1 −1 F23 F13 . Given H.47) as H = F (H). This indicates that only the negative solution is a proper solution. we cannot solve for H analytically.2 = 1 −a2 ± a2 − 4a1 a3 2 2a1 1 2 . . if the model has more than one state variables. −1 F23 F13 F14 ). In this case.51) (1. Then D and g can be computed from (1.52) (1. when one encounters the model with one state variable.45) and (1. in all the models that we will present in this book.50) (1.49).

u(t)). u(t)) (1. i.26 of discounted optimal control problems of the type above. (1) Compute the solution VhΓi on Γi (2) Evaluate the error estimates ηl . u For the estimation of the gridding error we estimate the residual of the operator Th with respect to VhΓ . Denoting the nodes of the grid Γ by xi . x(0) = x0 ∈ Rn For the discretization in space we consider a grid Γ covering the computational domain of interest. . we are now looking for an approximation VhΓ satisfying VhΓ (xi ) = Th (VhΓ )(xi ) (1. for each cell Cl of the grid Γ we compute ηl := max Th (VhΓ )(x) − VhΓ (x) x∈Cl Using these estimates we can iteratively construct adaptive grids as follows: (0) Pick an initial grid Γ0 and set i = 0.56) we refer to Gr¨ne and Semmler (2004a). . . We consider discounted optimal u control problems in discrete time t ∈ N0 given by ∞ V (x) = max u∈U t=0 β t g(x(t). . Thus. An extension to a stochastic decision problem is brieﬂy summarized in appendix III. P . Fix a reﬁnement parameter θ ∈ (0.56) for all nodes xi of the grid. . The basic discretization procedure goes back to Capuzzo Dolcetta (1983) and Falcone (1987) and is applied with adaptive gridding strategy by Gr¨ne u (1997) and Gr¨ne and Semmler (2004a). If ηl < tol for all l then stop (3) Reﬁne all cells Cj with ηj ≥ θ maxl ηl . For a description of several iterative methods for the solution of (1. i = 1. the diﬀerence between VhΓ (x) and Th (VhΓ )(x) for points x which are not nodes of the grid.55) where xt+1 = f (x(t). where the value of VhΓ for points x which are not grid points (these are needed for the evaluation of Th ) is determined by linear interpolation. set i = i + 1 and go to (1).e.. 1) and a tolerance tol > 0.

27 For more information about this adaptive gridding procedure and a comparison with other adaptive dynamic programming approaches we refer to Gr¨ne and Semmler (2004a) and Gr¨ne (1997). we still cannot exclude the possibility that sometime the restriction cannot be satisﬁed. the log-linear approximation method and the linear-quadratic approximation method. We ﬁnd that the Euler equation needs a restriction that the state variable cannot appear as a determinant in the state equation. On the other hand. where x∗ (1) = x + hf (x. For all of these diﬀerent methods. The approximation methods discussed here use two types of ﬁrst-order conditions: the Euler equation and the equations derived from the Lagrangian. we compare their advantages and disadvantages. We ﬁnd that the Fair-Taylor method may encounter an accuracy problem due to its additional requirement of the initial condition for the control variable. which in our discretization can be obtained in feedback form u∗ (x) for the discrete time approximation using the following procedure: For each x in the gridding domain we choose u∗ (x) such that the equality ∗ max {hg(x. the methods of log-linear approximation may need an algorithm that can take over some heavy derivation process that otherwise must be analytically accomplished. u) + βVh (xh (1))} = {hg(x. We also have elaborated on dynamic programming as a recently developed method to solve the involved Bellman equation. u u In order to determine equilibria and approximately optimal trajectories we need an approximately optimal policy. Given these two types of ﬁrst-order conditions. we therefore propose an algorithm that could overcome the limitation of existing methods (such as Chow’s method). In the next chapter we will turn to a practical problem and apply the diverse methods. Although many economic models can satisfy this restriction after some transformation. u∗ (x)). we consider the solutions computed by the Fair-Taylor method. u∗ (x)) + βVh (xh (1))} u∈U holds. 1.7 Conclusion This chapter reviews some typical approximation methods to solve a stochastic dynamic optimization model. Then the resulting sequence u∗ = i h u∗ (xh (i)) is an approximately optimal policy and the related piecewise constant control function is approximately optimal. For the linear-quadratic approximation. .

69) (1.58) into (1.58) where N. (1.55) respectively.40) and then solving for ut .59) (1. where Sx Sλ Sz s Lx Lλ Lz l = = = = = = = = A + CM CN W + CR Cm + b −1 −F13 (F11 + F12 M ) −1 F13 (I − F12 N ) −1 −F13 (F14 + F12 R) −1 −F13 (f1 + F12 m) (1.67) (1.70) Next.71) respectively.68) (1.41) while Et zt+1 in terms of (1. taking expectation for the both sides of (1.39).60) Now express λt in terms of (1.43) and then plug it into (1.63) (1. (1. we get ut = N λt + M xt + Rzt + m. Substituting (1.59) and (1.61) (1.28 1. we obtain −1 Et λt+1 = F13 (λt − F11 xt − F12 ut − F14 zt − f1 ) (1.60): xt+1 = (Sx + Sλ H)xt + (Sz + Sλ Q)zt + s + Sλ h.8 Appendix I: Proof of Proposition 1 From (1. we get .65) (1.71) Next.42) for equations (1.62) (1. M. expressing ut in terms of (1.57) Substituting the above equation into (1.41) and (1. R and m are all deﬁned in the proposition.43) and expressing xt+1 in terms of (1.64) (1. Et λt+1 = (Lx + Lλ H)xt + (Lz + Lλ Q)zt + l + Lλ h.41) and (1. (1. we obtain xt+1 = Sx xt + Sλ λt + Sz zt + s Et λt+1 = Lx xt + Lλ λt + Lz zt + l.66) (1.3) we obtain Et λt+1 = HAxt + HCut + (HW + QP )zt + Hb + Qp + h.

75) (1.77) when W + CD is replaced by (1. Sλ .78). (1. (1.65) and (1. (1.44).66).70) with (1. which allows us to obtain (1. In particular. h is resolved from (1. (1. Q. u and z with respect F and U . Given H. (1. g and h.47) as in Proposition 1 with Sx .66). Lx and Lλ to be expressed by (1. We call this procedure DYNPR. Lz and Lλ to be expressed by (1. u. l + Lλ h = H(Cg + b) + Qp + h. This gives rise to (1.and the second-order partial derivatives of x. λ.66). Sλ .66). lbar. Sz + Sλ Q = W + CD. all evaluated at the steady states. • the ﬁrst.75) when A + CG is replaced by (1.63). (1.78) (1.73) Comparing (1. (1.74).72) and (1. H is resolved from (1. This gives rise to (1.45).78). Finally. which allows us to obtain (1.49) with Sλ . G is then resolved from (1.73). D. 1. Then D is resolved from (1.5 is written as a GAUSS procedure and available from the authors upon request.74).64) and (1.79) These 6 equations determine 6 unknown coeﬃcient matrices and vectors G. zbar and F bar respectively.46).79) when Cg + b is replaced by (1. This gives rise to (1.9 Appendix II: An Algorithm for the LQApproximation The algorithm that we suggest to solve the LQ approximation of chapter 1. we obtain Sx + Sλ H = A + CG.61).74) (1. Lz + Lλ Q = H(W + CD) + QP.69) and (1.76).62). s and l to be expressed by (1. Lλ . They are denoted .48) with Sz . which allows us to obtain (1. Finally g is resolved from (1.76). Next Q is resolved from (1. H. Lx + Lλ H = H(A + CG). (1.29 Et λt+1 xt+1 = (A + CG)xt + (W + CD)zt + Cg + b (1.77) (1. s + Sλ h = Cg + b.72) = H(A + CG)xt + [H(W + CD) + QP ] zt + H(Cg + b) + Qp + h.76) (1.62). The input of this procedure includes • the steady states of x. ubar.67) and (1.62). z and F denoted as xbar.

f 1 = U x + beta ∗ lbar ∗ F x − F 11 ∗ xbar− F 12 ∗ ubar − F 13 ∗ lbar − F 14 ∗ zbar. F 24 = beta ∗ lbar ∗ F uz. U uu. F zz. BLz. BH. F ux. U xx. BG. F 14 = beta ∗ lbar ∗ F xz. F u. sb. beta. F 23. F 13 = beta ∗ F x. Sx = A + C ∗ BM . sh. denoted respectively as BG. F x and F xx for the ﬁrst. BLx. C. F 11. F uu. sa1. BD. F xx. f 2. F 11 = U xx + beta ∗ lbar ∗ F xx. f 1. • the discount factor β (denoted as beta) and the parameters P and p (denoted as BP and sp respectively) appearing in the AR(1) process of z.30 as. U x. sl. D and g. lbar. BM = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ (F 21 − F 23∗INV(F 13) ∗ F 11). BN. BR. F 14. F 23 = beta ∗ (F u’). W = F z. U xu. F uz. f 2 = U u + beta ∗ (F u′) ∗ lbar − F 21 ∗ xbar− F 22 ∗ ubar − F 23 ∗ lbar − F 24 ∗ zbar. BR = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ (F 24 − F 23∗INV(F 13) ∗ F 14). sa3. Sz. sg. Sx. F 13. F 22 = U uu + beta ∗ lbar ∗ F uu. zbar. . Slamda. F 24. BLlamda. ss. W. BQ. F z. sa2. F zx. sb = F bar − A ∗ xbar − C ∗ ubar − W ∗ zbar. ubar. for instance. F 12 = U xu + beta ∗ lbar ∗ F xu. C = F u. LOCAL A = F x. BP. U ux. BD and sg.and the second-order partial derivatives of x with respect to F respectively. F xu. F 22. BN = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ F 23∗INV(F 13). F bar. sm = INV(F 23∗INV(F 13) ∗ F 12 − F 22)∗ (f 2 − F 23∗INV(F 13) ∗ f 1). F xz. F 21 = U ux + beta ∗ lbar ∗ F ux. The output of this procedure is the decision parameters G. xbar. PROC(3) = DYNPR (F x. sp). sm. F 21. U u. A. F 12. BM.

d. Sz = W + C ∗ BR. u(t). sh = INV(BLlamda − BH ∗ Slamda − 1)∗ (BH ∗ ss + BQ ∗ sp − sl). u. u) + βVh (ϕ(x. BQ = INV(BLlamda − BH ∗ Slamda − BP )∗ (BH ∗ Sz − BLz). sa3 = −BLx. BLlamda = INV(F 13) ∗ (1 − F 12 ∗ BN ). BD = BR + BN ∗ BQ. zt ). x(0) = x0 ∈ Rn (1.9. 12 The corresponding dynamic programming operator becomes Th (Vh )(x) = max E {hg(x. sa1 = Slamda. sg). BD.6. sa2 = Sx − BLlamda. sl = −INV(F 13) ∗ (f 1 + F 12 ∗ sm). sg = BN ∗ sh + sm. u∈U 12 (1. o .5).82) For a discretization of a continuous time stochastic optimal control problem with dynamics governed by an Itˆ stochastic diﬀerential equation.80) x(t + 1) = f (x(t). BH = (1/(2 ∗ sa1)) ∗ (−sa2 − (sa2ˆ2 − 4 ∗ sa1 ∗ sa3)ˆ0. see Camilli and Falcone (1995). z))} . BG = BM + BN ∗ BH. ss = C ∗ sm + sb. This problem can immediately be applied in discrete time with time step h = 1. ENDP. u(t)) (1. RETP(BG. BLz = −INV(F 13) ∗ (F 14 + F 12 ∗ BR).i.1 Appendix III: The Stochastic Dynamic Programming Algorithm Our adaptive approach of chapter 1. random variables. is easily extended to stochastic discrete time problems of the type ∞ V (x) = E where uin U max t=0 β t g(x(t).31 Slamda = C ∗ BN . BLx = −INV(F 13) ∗ (F 11 + F 12 ∗ BM ). 1.81) and the zt are i.

where z is a truncated Gaussian random variable and the numerical integration is done via the trapezoidal rule.6 are by far more eﬃcient than non–adaptive methods if the same discretization technique is used for both approaches. z))p(z)dz. It should also be noted that in the smooth case one can obtain estimates for the error in the approximation of the gradient of Vh from our error estimates. It should be noted that despite the formal similarity. Gr¨ne and Semmler (2004a) u show the application of such a method to such a problem.32 where ϕ(x. stochastic optimal control problems have several features diﬀerent from deterministic ones.13 Furthermore. . Nevertheless. whose structure gives rise to diﬀerent approximation techniques. if z is a continuous random variable then we can compute E via a numerical quadrature formula for the approximation of the integral (hg(x. in particular allowing to avoid the discretization of the state space. u. is less likely because the inﬂuence of the stochastic term tends to “smear out” the dynamics in such a way that these phenomena disappear. u. If the random variable z is discrete then the evaluation of the expectation E is a simple summation. periodic attractors etc. z where p(z) is the probability density of z. in stochastic problems the optimal value function typically has more regularity which allows the use of high order approximation techniques. and it has to compete with other eﬃcient techniques. the above dynamic programming technique. u 13 A remark to this extent on an earlier version of our work has been made by Buz Brock and Michael Woodford. stochastic problems can often be formulated in terms of Markov decision problems with continuous transition probability (see Rust (1996) for a details). In these situations. z) is now a random variable. First. the examples in Gr¨ne and Semmler u (2004a). Finally. u) + βVh (ϕ(x. for details we refer to Gr¨ne (2003). developed by Gr¨ne (1997) and applied in Gr¨ne and Semmler (2004a) may not be u u the most eﬃcient approach to these problems. complicated dynamical behavior like multiple stable steady state equilibria. shows that adaptive grids as discussed in chapter 1.

which is now often used as a prototype model of dynamic optimization.2. Yt = At Ktα . Blanchard and Fisher (1989. added to the capital stock. Yt = Ct + Kt+1 − Kt . In particular. Yt output and Kt the capital stock. The model we choose is a Ramsey model (Ramsey 1928) to which the exact solution is computable with the standard recursive method. 2.1 Introduction This chapter turns to a practical problem of dynamic optimization. Let Ct denote consumption. chapter 2) and Ljungqvist and Sargent (2000. (1989.2) See.1 The Ramsey Problem The Model Ramsey (1928) posed a problem of optimal resource allocation. chapter 2). that is.Chapter 2 Solving a Prototype Stochastic Dynamic Model 2. 33 . for instance.1 The model presented in this section is essentially that of Ramsey (1928) yet it is augmented by uncertainty.2 2. This will allow us to test the accuracy of approximations by comparing the diﬀerent approximate solutions to the exact solution.1) (2. we shall solve a prototype model by employing diﬀerent approximation methods as we have discussed in the last chapter. Assume that the output is produced by capital stock and it is either consumed or invested. Formally. chapter 2). 1 (2. Stockey et al.

2 The Exact Solution and the Steady States It is well known that the exact solution for this model – which could be derived from the standard recursive method can be written as Kt+1 = αβAt Ktα . we take logarithm for both sides of (2.6) Given the solution paths for Ct and Kt+1 . This is a simpliﬁed assumption by which the exact solution is computable. ¯ ¯ Given K. we obtain ¯ log K = log(αβA) .4) Note that we have assumed here that the depreciation rate of capital stock is equal to 1.1) and (2.5) given the initial condition (K0 . To obtain a more meaningful interior steady state. (2.8) for logK. Kt+1 = Kt = K. (2. we are then able to derive the steady state.d. (2. (2. .34 where α ∈ (0.7) (2. 2.9) (2.4): ¯ ¯ ¯ C = AK α − K 2 (2. Therefore. 1) and At is the technology which may follow an AR(1) process: At+1 = a0 + a1 At + ǫt+1 . see Long and Plosser (1983). 1−α ¯ K = (αβA)1/(1−α) . Solving (2. that is K = 0 and C = 0. This further implies from (2.3) Here we shall assume ǫt to be i.i. A0 ). It is not diﬃcult to ﬁnd that one steady state is on the bound¯ ¯ ary.4) that Ct = (1 − αβ)At Ktα .8) ¯ ¯ At the steady state. C is resolved from (2.2 The representative agent is assumed to ﬁnd the control sequence {Ct }∞ t=0 such that ∞ max E0 t=0 β t ln Ct (2. Equation (2.6) and evaluate the equation at its certainty equivalence form: log Kt+1 = log(αβA) + α log Kt .2) indicates that we could write the transition law of capital stock as Kt+1 = At Ktα − Ct .10) For another similar model where the depreciation rate is also set to 1 and hence the exact solution is computable.2.

3.11) subject to Kt+1 = Zt Note that here we have used (2. At ) = max {ln(At Ktα − Kt+1 ) + βE [V (Kt+1 . Also note that in this formulation the state variable in period t is still Kt .35 2. At+1 ) = 0. At ) = . To achieve a notational consistency in the time subscript. As we have mentioned in the last chapter.12) The necessary condition for maximizing the right hand side of the Bellman equation (2. Therefore ∂F/∂x = 0 and ∂F/∂u = 1.1 The Euler Equation Our ﬁrst task is to transform the model into a setting so that the state variable Kt does not appear in F (·) as we have discussed in the last chapter. ∂Kt At Ktα − Kt+1 Substituting (2.13) allows us to obtain the Euler equation: α−1 αAt+1 Kt+1 −1 + βE = 0. 2. (2. Therefore the model can be rewritten as ∞ max E0 t=0 β t ln(At Ktα − Kt+1 ). − Kt+1 ∂Kt+1 ∂V αAt Ktα−1 (Kt .3 The First-Order Conditions and Approximate Solutions To solve the model with diﬀerent approximation methods.14) into (2. The Bellman equation in this case can be written as V (Kt . there are two types of ﬁrst-order conditions. we shall ﬁrst establish the ﬁrst-order condition derived from the Ramsey problem. This can be done by assuming Kt+1 (instead of Ct ) as model’s decision variable. Let us ﬁrst consider the Euler equation. (2. At+1 )]} . Kt+1 (2.14) .4) to express Ct in the utility function. α At Ktα − Kt+1 At+1 Kt+1 − Kt+2 (2.13) Meanwhile applying the Benveniste-Scheinkman formula. the Euler equation and the equation derived from the Lagrangian.12) is given by At Ktα ∂V −1 + βE (Kt+1 . we may denote the decision variable as Zt .

2 The First-Order Condition Derived from the Lagrangian Next.18) back into (2.17) These are the ﬁrst-order conditions derived from the Lagrangian.4) as well as 1/Ct − βEt λt+1 = 0. as a basic discrete time growth model . we obtain the Euler equation (2.3) determine the transi∞ tion sequences of {Kt+1 }t=1 .16). Using (2.36 which can further be written as − α−1 αAt+1 Kt+1 1 + βE = 0.16) to express βEt λt+1 in terms 1/Ct and then plug it into (2.3.16) (2. Setting to zero the derivatives of L with respect to λt . we obtain λt = αAt Ktα−1 /Ct . {At+1 }∞ and {Ct }∞ given the initial condition t=1 t=0 K0 and A0 . 2.18) Substitute (2.15) along with (2.1 can be written. Writing either (2. Ct and Kt .9). (2.17).4) and (2.3 The Dynamic Programming Formulation The dynamic programming problem for the Ramsey growth model of chapter 2. βEt λt+1 αAt Ktα−1 = λt . This further indicates that α−1 Et λt+1 = Et αAt+1 Kt+1 /Ct+1 (2. Ct Ct+1 (2. This can be done as follows.15) or (2. 2. we turn to derive the ﬁrst-order condition from the Lagrangian.15).17) in terms of their certainty equivalence form while evaluating them at the steady state.3. Next we try to demonstrate that the two ﬁrst-order conditions are virtually equivalent. we indeed obtain (2.15) This Euler equations (2. Deﬁne the Lagrangian: ∞ ∞ L= t=0 β ln Ct − t=0 t Et β t+1 λt+1 (Kt+1 − At Ktα + Ct ) .2. one obtains (2.10)). in its deterministic version.9) and (2. It is also not diﬃcult to ﬁnd that the two ﬁrst-order conditions imply the same steady state as the one derived from the exact solution (see equation (2.

i. U ′ [f (K) − K ′ ] = βU ′ [f (K ′ ) − K ′′ ]f ′ (K ′ ) (2. Let us restate the problem above with K the state variable and K ′ the control variable. (2. U ′′ (C) < 0. Substitute C into the above intertemporal utility function by deﬁning C = f (K) − K ′ (2.21) We then can express the discrete time Bellman-equation. ch.20) with an one period utility function U ′ (C) > 0.22) By applying the Benveniste-Scheinkman condition3 gives V ′ (K) = U ′ (f (K) − K ′ )f ′ (K) (2.25) 3 The Benveniste-Scheinkman condition implies that the state variable does not appear in the transition equation. Notice that from the discrete time form of the envelope condition one again obtains the ﬁrst order condition of equ. whereby K’ denotes the one period and K ′′ the two period ahead value of K.e. f ′′ (K) < 0. for V ′ (K ′ ). (2.23) one step forward.24) Note that hereby we obtain as a solution a second order diﬀerence equation in K.37 ∞ V = max C t=0 β t U (Ct ) (2.3 of this book and Ljungquist and Sargent (2000. f ′ (K) > 0.24) can be written as 1=β U ′ (Ct+1 ) ′ f (Kt+1 ) U ′ (Ct ) (2. Ct + Kt+1 = f (Kt ) (2. Yet equ.22) as U ′ [f (K) − K ′ ] + βV ′ (K ′ ) = 0 which gives by using (2. 2).19) s. .23) Note that K is the state variable and that in equ.t. representing a dynamic programming formulation as V (K) = max{U [f (K) − K ′ ] + βV (K ′ )} ′ K (2.22) we have V (K ′ ). where K ′ denotes the next period’s value of K. (2. see chapter 1.

2. 7. chs. to study asset pricing. Kt+1 = AKtα − Ct The analytical solution for the value function is ˜ ˜ V (K) = B + C ln (K) and for the sequence of capital one obtains Kt+1 with ˜ C = ˜ B = α and 1 − αβ ˜ ln (C(1 − αβ)A) + 1−β βCAKtα = 1 + βC (2.26) s. 10.32) 4 The above Euler-equation is also essential not only in stochastic growth but also in ﬁnance.1 the discrete time decision problem is directly analytically solvable.2. see Ljungqvist and Sargent (2000.28) (2.27) βα ln 1−βα (αβA) For the optimal consumption holds Ct = Kt+1 − AKtα and for the steady state equilibrium K one obtains 1 α−1 or = αAK β K = βαAK α (2. If we allow for log-utility as in chapter 2. 17) .30) (2.38 which represent the Euler-equation that has extensively been used in economic theory4 .31) (2.29) (2. in ﬁscal policy to evaluate treasury bonds and testing for sustainability of ﬁscal policy.t. We take the following form of a utility function ∞ V = max Ct t=0 β t ln Ct (2.

6) and (2. Here we shall use the Euler equation. they form a recursive dynamic system from which the transition paths of Ct .21): α−1 Ct+1 = αβCt At+1 Kt+1 .0 Given the parameters. β.19) .1 speciﬁes these parameters and the corresponding interior steady state values: Table 2.39 2.1 Solving the Ramsey Problem with Diﬀerent Approximations The Fair-Taylor Solution It should be noted that one can apply the Fair-Taylor method either to the Euler equation or to the ﬁrst-order condition derived from the Lagrangian. and the others to deviate ∗ 1% from C0 .00 a1 0. a1 and σǫ . Since we know the exact solution. Note that from (2. There are all together 5 structural parameters: α.000 K 23593 C 51640 A 3000.8000 σǫ 60. as reported in Table 2. Before we compute the solution path.4 2.33) Together with (2.7): The three solution paths are diﬀerent due to their initial condition with regard to C0 .4.1 three solution paths computed by the Fair-Taylor method. we thus can choose C0 close to ∗ ∗ α the exact solution denoted as C0 .1. a0 .9800 a0 600.(1. Kt and At can be directly computed.4) and (2. we provide in Figure 2. These solution paths are compared to the exact solution as expressed in (2. we shall ﬁrst parameterize the model.3200 β 0.15) in the form as expressed by (1.1: Parameterizing the Prototype Model α 0. . Since the model is simple in its structure. there is no necessity to employ the Gauss-Seidel procedure as suggested in the last chapter.7) C0 = (1 − αβ)A0 K0 . ∗ In particular. Let us ﬁrst write equation (2. (2. we allow one C0 to be equal to C0 . Table 2.3).

∗ • When we set C0 to C0 . dashed and dotted curves the Fair-Taylor solution The following is a summary of what we have found in this experiment. This is shown by the dotted curve in the ﬁgure.4). This restriction makes Kt never reach zero so that the simulation can be continued. What can we learn from this experiment? The exact solution to this problem seems to be the saddle path for the system composed of (2. Yet when t goes beyond a certain point. In particular.40 Figure 2. (2. we restrict Ct ≤ 0. the deviation becomes signiﬁcant.3) . the path of Ct quickly reaches its lower bound 0. ∗ • When we choose C0 above C0 (by 1%) the path of Kt quickly reaches to zero and therefore the simulations have to be subject to the constraint Ct < At Ktα .99At Ktα . ∗ • When we set C0 below C0 (again by 1%). The solution path is shown by one of the dashed curves in the ﬁgure.1: The Fair-Taylor Solution in Comparison to the Exact Solution: solid curve the exact solution. the paths of Kt and Ct (shown by the another dashed curve) are close to the exact solution for small t′ s.

which we shall conjecture as ct = ηca at + ηca kt (2. Our ﬁrst task is therefore to log-linearize the state.38) The proposition below regards the determination of the two undetermined coeﬃcients ηc and ηck . (2. Then equation (2. ¯ ¯ α−1 ϕca = αβ AK a1 .37) Next we try to ﬁnd a solution path for ct which we shall conjecture as ct = ηca at + ηca kt (2. On the other hand. where ¯¯ ϕka = AK α−1 .4. ¯ ¯ α−1 ϕcc = αβ AK . E [ct+1 ] = ϕcc ct + ϕca at + ϕck kt+1 .15) and (2. (2. ct and at denote the log deviations of Kt . Ct and At .3).41 ∗ and (2.15) and (2. Here we shall again use the Euler equation. .4). ¯ ¯ α−1 ϕck = αβ AK (α − 1).4).35) (2. (2.2 The Log-linear Solution As the Fair-Taylor method.36) (2.3) can be log-linearized as kt+1 = ϕka at + ϕkk kt + ϕkc ct . Euler and the exogenuous equations as expressed in (2. we have veriﬁed our previous concern that the initial condition for the control variable is extremely important for obtaining an appropriate solution path when we employ the Fair-Taylor method. ¯¯ ϕkk = AK α−1 α. The following is the proposition regarding this log-linearization (the proof is provided in the appendix): Next we try to ﬁnd a solution path for ct . Proposition 2 Let kt . the log-linear approximation method can be applied to the ﬁrst-order condition either from Euler equation or from the Lagrangian. E [at+1 ] = a1 at .33).34) The proposition below regards the determination of the two undetermined coeﬃcients ηca and ηck . ¯ ¯ ϕkc = −(C/K). 2. The eventual deviation of the solution starting with C0 from the exact solution is likely to be due to the computational errors resulting from our numerical simulation.

1.35) with at to be given by a1 at−1 + ǫt /A.39) (2. to compare the log-linear solution to ¯ the exact solution.40) where Q2 = ϕkc . Therefore. we show in Figure 2.34) ¯ and (2. dashed curves the log-linear solution . we shall perform the transformation via Xt = (1 + xt )X for a variable xt in log deviation form.34).2 the log-linear solution in comparison to the exact solution.2: The Log-linear Solution in Comparison to the Exact Solution: solid curve the exact solution. Using the same parameters as reported in Table 2. Q1 = (ϕkk − ϕcc − ϕkc ϕck ) and Q0 = ϕkk . Figure 2. The solution paths of the model can now be computed by relying on (2.42 Proposition 3 Assume ct follow (2. All the solution paths are expressed as log deviations. Then ηck and ηca are determined from the following equation ηck = ηca 1 −Q1 − Q2 − 4Q0 Q2 1 2Q2 (ηck − ϕck )ϕka − ϕca = ϕcc − a1 − ϕkc (ηck − ϕck ) (2.

4. even if we start from many diﬀerent initial conditions.41) Due to the insuﬃciency in the speciﬁcation of the model with regard to the possible exogenuous variable.32). This can be done by choosing investment It ≡ At Ktα − Ct as the control variable while leaving the capital stock and the technology as the two state variables.43 In contrast to the Fair-Taylor solution. Therefore we can deﬁne F = AK α − C and U = ln C.(1. Suppose the linear decision rule can be written as It = G11 Kt + G12 At + g1 (2. the model becomes ∞ max subject to (2. 2.42) E0 t=0 β t ln(At Ktα − It ) The coeﬃcients G11 . This will allow us to obtain those Kij and kj (i. j = 1.4 The Linear-Quadratic Solution Using the Suggested Algorithm When we employ our new algorithm.and the second-order partial derivatives of the utility function around the steady state. G12 and g1 can be computed in principle by iterating (1.5 Therefore. the technology At also as a state variable.35) . Next we shall derive the ﬁrst. both are now in linear form.38) as discussed in the last chapter. one ﬁnds that the log-linear solution is quite close to the exact solution except for some initial paths. this is not attainable for our particular application. Unfortunately. Again our ﬁrst step is 5 Reiter (1997) has experienced the same problem. 2) coeﬃcient matrices and vectors as expressed in Chow’s ﬁrst-order conditions (1.3). we shall ﬁrst transform the model in which the state equation appears to be linear. 2. our attempt fails to compute the solution path with Chow’s algorithm. This indicates that there are two state equations (2. . we have to treat.4.41) and (2.31) and (1. Yet this requires the iteration to be convergent. When considering this. there is no need to transform the model.3) as well as Kt+1 = It (2. as suggested in the last chapter.3 The Linear-Quadratic Solution with Chow’s Algorithm To apply Chow’s method.

43) along with (2.3 for illustration).3: The Linear-quadratic Solution in Comparison to the Exact Solution: solid curve for exact solution. All these partial derivatives along with the steady states can be used as input in the GAUSS procedure provided in Appendix II of the last chapters.3) form the dynamic system from which the transition path of Ct .44 to compute the ﬁrst. Figure 2.4) and (2. Kt and At are computed (see Figure 2. dashed curves for linear-quadratic solution . Executing this procedure will allow us to compute the undetermined coeﬃcients in the following decision rule for Ct : Ct = G21 Kt + G22 At + g2 (2.43) Equation (2.and second-order partial derivatives with respect to F and U.

we will compare the analytical solution of chapter 2.3. For the growth model of chapter 2. As the ﬁgure shows the value function is clearly concave in the capital stock.4: Value Function obtained from the Linear-quadratic Solution In addition.6 u 6 Moreover. For the parameters chosen we obtain a steady state of the capital stock K = 2. 10] and the control variable.6.1. 5]. the value function obtained from the linear quadratic solution is shown.95 we can solve all the above expressions numerically for a grid of the capital stock in the interval [0. see Gr¨ne and Semmler (2004a).5 The Dynamic Programming Solution Next.45 Figure 2.4. 2.3. as concerns asset pricing log-utility preferences provide us with a very simple .3 with the dynamic programming solution obtained from the dynamic programming algorithm of chapter 1. Subsequently we report only results from a deterministic version. using as example.34 A=5 β = 0. in the ﬁgure 2. c.4.3 we employ the following parameters α = 0.07 For more details of the solution. Results from a stochastic version are discussed in appendix II. K.1. in the interval [0.

30.46 The solution of the growth model with the above parameters. u . using the dynamic programming algorithm of chapter 1.5 0 1 2 3 4 5 6 7 8 9 10 Figure 2. Ct+j 1−β = Et For further details.5 29 28. For U (C) = ln(C) the asset price is ∞ Pt = Et t=1 ∞ βj βj t=1 U ′ (Ct+j ) U ′ (Ct ) βCt Ct · Ct+j = . 9.5 28 27. ch.6.1) and Gr¨ne and Semmler (2004 b). see Cochrane (2001.5 and 2.5: Value Function stochastic discount factor and an analytical expression for the asset price.6 with grid reﬁnement is shown in ﬁgures 2.5 30 29.

6 the optimal consumption is shown to depend on the state variable K for a grid of K.07. see Gr¨ne and Semmler (2004a).5 and 2.2 · 10−2 and with 2000 nodes the error shrinks to 6. C. K in ﬁgure 2.6 consumption is low when capital stock is low (capital stock can grow) and consumption is high when capital stock is high (capital stock will decrease)where low and high is meant in reference u to the optimal steady state capital stock K = 2. which for this model can be derived analytically by the standard recursive method. Our purpose here is to compare the different approximate solutions to the exact solution.6 show the value function and the control.3 · 10−4 . are concave in the capital stock. Yet 7 With 100 nodes in capital stock interval the error is 3. Moreover. as observable from ﬁgure 2. u .6: Path of Control 10 As ﬁgures 2. As reported in Gr¨ne and Semmler (2004a) the dynamic programming algorithm with adaptive gridding strategy as introduced in chapter 1. As we have found.6 solves the value function with high accuracy.47 4 3 2 1 0 0 5 Figure 2.7 2. there have been some diﬃculties when we apply the Fair-Taylor method and the method of linear-quadratic approximation using Chow’s algorithm. 0 ≤ K ≤ 10.5 Conclusion This chapter employs the diﬀerent approximation methods to solve a prototype dynamic optimization model.

48 when we apply the methods of log-linear approximation and linear-quadratic approximation with our suggested algorithm, we ﬁnd that the approximate solutions are close to the exact solution. At the same time, we also ﬁnd that the method of log-linear approximation may need an algorithm that can take over some heavy derivations that otherwise must be analytically accomplished. Therefore, our experiment in this chapter veriﬁes our previous concerns (in Chapter 2) with regard to the accuracy and the capability of diﬀerent approximation methods, including the Fair-Taylor method, the log-linear approximation method and the linear-quadratic approximation method. Although the dynamic programming approach solves the value function with higher accuracy, in the subsequent chapters, when we come to the calibration of the intertemporal decision models, we will work with the linear quadratic approximation of the Chow method since it is better applicable to empirical assessment of the models.

2.6

2.6.1

**Appendix I: The Proof of Proposition 2 and 3
**

The Proof of Proposition 2

Kt+1 − At Ktα − Ct = 0 E Ct+1 − =0 E [At+1 − a0 − a1 At ] = 0

α−1 αβCt At+1 Kt+1

For convenience, we shall write (2.4), (2.15) and (2.3) as (2.44) (2.45) (2.46)

Applying (1.22) to the above equations, we obtain ¯ ¯ ¯¯ Kekt+1 − AK α eat +αkt + Cect = 0 ¯ ¯¯¯ E Cect+1 − αβ C AK α−1 eat+1 +ct +(α−1)kt+1 = 0 ¯ ¯ E Aeat+1 − a0 − a1 Aeat = 0 Applying (1.23), we further obtain from the above: ¯ ¯¯ ¯ K(1 + kt+1 ) − AK α (1 + at + αkt ) + C(1 + ct ) = 0 ¯ ¯ ¯ ¯ α−1 E C(1 + ct+1 ) − αβ C AK (1 + ct + at+1 + (α − 1)kt+1 ) = 0 ¯ ¯ E A(1 + at+1 ) − a0 − a1 A(1 + at ) = 0 (2.47) (2.48) (2.49)

49 which can be further written as to be ¯ ¯¯ ¯¯ ¯ Kkt+1 − AK α at − AK α αkt + Cct = 0 ¯ ¯ ¯ ¯ α−1 E Cct+1 − αβ C AK (ct + at+1 + (α − 1)kt+1 ) = 0 E [at+1 − a1 at ] = 0 (2.50) (2.51) (2.52)

Equation (2.52) indicates (2.37). Substituting it into (2.51) to express E [at+1 ] and re-arranging (2.50) and (2.51), we obtain (2.35) and (2.36) as indicated in the proposition.

2.6.2

Proof of Proposition 3

Given the conjectured solution (2.34), the transition path of kt+1 can be derived from (2.35), which can be written as kt+1 = ηka at + ηkk kt where ηka = ϕka + ϕkc ηca ηkk = ϕkk + ϕkc ηck (2.54) (2.55) (2.53)

Expressing ct+1 and ct in terms of (2.34) while recognizing that E [at+1 ] = a1 at , we obtain from (2.36): ηca a1 at + ηck kt+1 = ϕcc (ηca at + ηck kt ) + ϕca at + ϕck kt+1 which can further be written as ϕcc ηck ϕcc ηca + ϕca − ηca a1 at + kt kt+1 = ηck − ϕck ηck − ϕck

(2.56)

Comparing (2.56) to (2.53) with ηka and ηkk to be given by (2.54) and (2.55), we thus obtain ϕcc ηca + ϕca − ηca a1 = ϕka + ϕkc ηca (2.57) ηck − ϕck ϕcc ηck = ϕkk + ϕkc ηck (2.58) (ηck − ϕck ) Equation (2.58) gives rise to the following quadratic function in ηck :

2 Q2 ηck + Q1 ηck + Q0 = 0

(2.59)

with Q2 , Q1 and Q0 to be given in the proposition. Solving (2.59) for ηck , we obtain (2.39). Given ηck , ηca is resolved from (2.57), which gives rise to (2.40).

50

2.7

Appendix II: Dynamic Programming for the Stochastic Version

We here present a stochastic version of a growth model which is based on the Ramsey model of chapter 2.1 but extended to the stochastic case. A model of type goes back to Brock and Mirman (1972). Here the Ramsey 1d model ?? is extended using a second variable modelling a stochastic shock. The model is given by the discrete time equations ˜ K(t + 1) = A(t)AK(t)α − C(t) A(t + 1) = exp(ρ ln A(t) + zt ) where α and ρ are real constants and the z(t) are i.i.d. random variables with zero mean. The return function is again U (C) = ln C. In our numerical computations which follows Gr¨ne and Semmler (2004) u ˜ = 5, α = 0.34, ρ = 0.9 and β = 0.95. As we used the parameter values A the case of the Ramsey model, the exact solution is known and given by ˜ V (K, A) = B + C ln K + DA, where B= ˜ ln((1 − βα)A) +

βα 1−βα

˜ ln(βαA)

1−β

˜ , C=

α 1 , D= 1 − αβ (1 − αβ)(1 − ρβ)

We have computed the solution to this problem on the domain Ω = [0.1, 10] × [−0.32, 0.32]. The integral over the Gaussian variable z was approximated by a trapezoidal rule with 11 discrete values equidistributed in the interval [−0.032, 0.032] which ensures ϕ(x, u, z) ∈ Ω for x ∈ Ω and suitable u ∈ U = [0.5, 10.5]. For evaluating the maximum in T the set U was discretized with 161 points. Table 2.2 shows the results of the resulting adaptive gridding scheme applied with reﬁnement threshold θ = 0.1 and coarsening tolerance ctol = 0.001. Figure 2.7 shows the resulting optimal value function and adapted grid.

51 # nodes 49 56 65 109 154 327 889 2977 Error 1.4 · 10 0 0.5 · 10−1 2.9 · 10−1 1.3 · 10−1 5.5 · 10−2 2.2 · 10−2 9.6 · 10−3 4.3 · 10−3 estimated Error 1.6 · 10 1 6.9 · 10 0 3.4 · 10 0 1.6 · 10 0 6.8 · 10−1 2.4 · 10−1 7.3 · 10−2 3.2 · 10−2

Table 2.2: Number of nodes and errors for our Example

34 33 32 31 30 29 28 27 26 25 24

0.3

0.2

0.1

0.4 0.3 0.2 0.1 0 -0.1 -0.2 -0.3 -0.4

0

-0.1

-0.2

0

1

2

3

4

5

6

7

8

9

10

-0.3 1 2 3 4 5 6 7 8 9 10

Figure 2.7: Approximated value function and ﬁnal adaptive grid for our Example In Santos and Vigo–Aguiar (1995) on equidistant grids with 143 × 9 = 1287 and 500 × 33 = 16500 nodes, errors of 2.1 · 10−1 and 1.48 · 10−2 , respectively, were reported. In our adaptive iteration these accuracies could be obtained with 109 and 889 nodes, respectively; thus we obtain a reduction in the number of nodes of more than 90% in the ﬁrst and almost 95% in the second case, even though the anisotropy of the value function was already taken into account in these equidistant grids. Here again in our stochastic version of the growth model, a steep value function can best be approximated with grid reﬁnement.

King et. e.1 Introduction Solving a stochastic dynamic optimization model with aproximation methods or dynamic programming is only a ﬁrst step towards the empirical assessment of such a model. 1988). This approach has been criticized because the structural parameters are assumed to be given rather 1 See.. other statistics (in addition to the ﬁrst and second moments) proposed by early business cycle literature. is employed. 1988b).Chapter 3 The Estimation and Evaluation of the Stochastic Dynamic Model 3. To undertake this step certain approximation methods are more useful than others.al (1988a.g. Hansen (1985. Given the estimation.1 Typically the parameters employed for the model’s simulation are selected from independent sources.. e. often referred to as calibration. The calibration approach compares the moment statistics (usually the second moments) of major macroeconomic time series to those obtained from simulating the model. Recently.g. such as diﬀerent microeconomic studies. Burns and Mitchell (1946) and Adelman and Adelman (1959). Another necessary step is to estimate the model with some econometric techniques. are also employed for this comparison. Plosser (1989) among many others. See King and Plosser (1994) and Simkins (1994) among others. The task of estimation has often been ignored in the current empirical studies of a stochastic dynamic optimization model when a technique. Kydland and Prescott (1982). 52 . one then can evaluate the model to see how the model’s prediction can match the empirical data. Long and Plosser (1983). Prescott (1986).

Generally. (1993). we do not ﬁnd many econometric studies on how to estimate a stochastic dynamic optimization model except for a few attempts that have been undertaken for some simple cases. we shall discuss two estimation methods: the Generalized Method of Moment (GMM) estimation. 3.g. Christiano and Eichenbaum (1992). 2 . Both estimation methods deﬁne an objective function to be optimized. subsequently in Section 5. a sketch of the computer program for our estimation strategy will be described in the appendix of this chapter. and the Maximum Likelihood (ML) estimation. We thus also need to develope an estimation strategy that can be used to recursively search the parameter space in order to obtain the optimum. which has been used in the current empirical study of stochastic dynamic models. e. such as the RBC. Finally. solved through some approximation method. 3 In this chapter. This approach uses the Monte Carlo method to generate the distribution of some moment statistics implied by the model. therefore. We then in Section 3 consider two possible estimation methods. we compare these moment statistics to the sample moments computed from the data. Burnside et al.2 Calibration The current empirical studies of a stochastic dynamic model often rely on calibration. Unfortunately. the GMM and the ML estimations. We. This estimation requires a global optimization algorithm. As one can ﬁnd there. introduce a global optimization algorithm.. the calibration may include the following steps: For an early critique of the parameter selection employed in calibration technique. called simulated annealing. a proper application of calibration requires us to deﬁne the model’s structural parameters correctly. we believe the problem remains for more elabrated models in the future development of macroeconomic research. In Section 4. see Singleton (1988) and Eichenbaum (1991). we propose a strategy to implement these two estimations for estimating a dynamic optimization model.2 Although this may not create severe diﬃculties for some currently used stochastic dynamic optimization models. Chow and Kwan (1998). For an empirical assessment of the model. 3 See.53 than estimated. Chow (1993). Section 2 will ﬁrst introduce the calibration technique. which is used for executing the suggested strategy of estimation. Due to the complexity of stochastic dynamic models. it is often unclear how the parameters to be estimated are related to the model’s restrictions and hence to the objective function in the estimation.

4) and the exogenuous equation (1. This can be regarded as a one time simulation. Often the HP-ﬁlter (see Hodrick and Prescott 1980) is used for this detrending. • Step 3: Select the initial condition (x0 . Then after these N times repeated runs. Here N should be suﬃciently large. • Step 4: If necessary. the above comparison of moment statistics of the model and sample should give us a basis to test whether the model can explain the actual business cycles represented by the data. if necessary. • Step 2: Select the number of times for which the iteration is conducted in the simulation of the model. reﬂected by some second moment statistics. These moment statistics are mainly those of the second moments such as variances and covariances. if we specify the model correctly with the structural parameters. Due to the stochastic innovation of ǫt . detrend the simulated series generated in Step 3 to remove its time trend. which should also be deﬁned properly. the standard deviation of ǫt as in equation (1. • Step 5: Compute the moment statistics of interest using. . • Step 6: Repeat Step 3 to 5 N times. should depend on the model speciﬁcation and the structural parameters including σǫ .3) to compute the solution of the model iteratively for T times. Therefore. a detrended series generated in Step 4. • Step 7: Compute the same moment statistics from the data sample and check whether it falls within the proper range of the distribution for the moment statistics generated from the Monte Carlo simulation of the model. These parameters may include preference and technology parameters and those that describe the distribution of the random variables in the model.3). compute the distributions of these moment statistics. the control equation (1. z0 ) and use the state equation (1.54 • Step 1: Select the model’s structural parameters. These distributions are mainly represented by their means and their standard deviations. We denote this number by T . This number might be the same as the number of observations in the sample.2). such as σǫ . the simulated series should also be cyclically and stochastically ﬂuctuating. including σǫ . The extent and the way of ﬂuctuations. the standard deviation of ǫt .

We consider two possible estimation methods: the GMM estimation and the ML estimation. symmetric. 3. yT ) = T T h(yt . denoted ψ. The idea behind the GMM estimation is to choose an estimator of ψ. one needs to deﬁne a distance function by which that closeness can be judged. ψ) t=1 (3. positive deﬁnite and depends only on the sample observation yT .55 3. representing the population moments established by the theoretical model: E [h(yt . a consistent estimator of the variance-covariance matrix of ψ is given by . The choice of this weighting matrix deﬁnes a metric that makes the distance function a scalar.1). ψ)] = 0 (3. such that the sample moments gT (ψ.3) where WT . is m × m. Let yT contain all the observations of the k variables in the sample with size T . Hansen (1982) proves that under certain assumption such a GMM estimator is consistent and asymptotically normal.3 The Estimation Methods The application of the calibration requires techniques to select the structural parameters accurately.3).1) where yt is a k-dimensional vector of observed random variables at date t. The sample average of h(·) can then be written as 1 gT (ψ. ψ is a l-dimensional vector of unknown parameters that needs to be estimated. yT ) WT gT (ψ. Hansen (1982) suggested the following distance function: J(ψ.2) Notice that gT (·) is also a vector-valued function with m-dimensions. called the weighting matrix.1 The Generalized Method of Moments (GMM) Estimation The GMM estimation starts with a set of orthogonal conditions. yT ) = gT (ψ.3. yT ) are as close as possible to the population moments reﬂected by (3. This indicates that we need to estimate the model before the calibration. h(·) is a vector-valued function mapping from Rk ×Rl into Rm . Also from the results established in Hansen (1982). yT ) ′ (3. To achieve this. The GMM estimator of ψ is the value of ψ that minimizes (3.

6) can be re-written as BY ′ + ΓX ′ = E ′ (3.1). the estimation with the GMM method usually requires two steps as suggested by Hansen and Singleton (1982). w(j. Then the above (3.6). Therefore.7) where Y is T × m.8) log L(ψ) = const. + n log |B| − log |Σ| 2 . we will adopt the method by Newey and West (1987). the model to be estimated here is the same as in the GMM estimation represented by (3. T V ar(ψ) = ′ (3. ′ (3. one chooses a sub-optimal weighting matrix to minimize (3. d)(Ωj + Ωj ). p. ψ ) and d to be a suitable function of T. Here ψ ∗ is required to be a consistent estimator of ψ. Non-linearity may pose a problem to derive the log-likelihood function for (3. d) ≡ 1 − j/(1 + d). the concentrated log-likelihood function can be derived (see Chow. Γ is a m × k matrix. starts with an econometric model such as follows: Byt + Γxt = ǫt (3. There is a great ﬂexibility in the choice of WT for constructing a consistent and asymptotically normal GMM estimator.3) is re-minimized.6) where B is a m × m matrix.2 The Maximum Likelihood (ML) Estimation The ML estimation. Assuming normal and serially uncorrelated ǫt with the covariance matrix Σ. In this book. First. ψ )g(yt−j . Suppose there are T observations. where it is suggested that d −1 WT = Ω0 + j=1 ∗ ∗ with w(j.4) where DT = ∂gT (ψ)/∂ψ .170-171) as n (3. X is T × k and E is T × m. Note that if we take the expectations on both sides. Second. one then uses the consistent estimator obtained in the ﬁrst step to calculate the optimum WT through which ( 3.3) and hence obtains a consistent estimator ψ ∗ .56 1 ′ (DT )−1 WT (DT )−1 .5) 3. proposed by Chow (1993) for estimating a dynamic optimization model. except here the functions are linear. Ωj ≡ (1/T ) T t=j+1 g(yt . yt in this case is a m ×1 vector of dependent variables. xt is a k ×1 vector of explanatory variables and ǫt is a m ×1 vector of disturbance terms.3. 1983.

This seems to suggest that the restrictions for a stochastic dynamic optimization model should typically be represented by the state equation and the control equation derived from the dynamic optimization problem. (3. which is quite possible in general. Therefore one is usually incapable of deriving analytically the ﬁrst-order conditions of minimizing (3.3) and (3. The derivation of the control equations in approximate form from a dynamic optimization problem is a complicated process. For an approximation method. The ﬁrst problem that we need to discuss are the restrictions on the estimation. using ﬁrstorder conditions to minimize (3. The asymptotic standard deviation of estimated parameters can be inferred from the following variance-covariance matrix of ψ (see Hamilton 1994 p. .8) with respect to the parameters.3) and maximize (3.4 The Estimation Strategy In practice.10) 3. 4 The parameters in the state equation could be estimated independently. which are not observable.8). Our search process includes the following recursive steps: • Step 1.57 with the ML estimator of Σ given by Σ = n−1 (BY ′ + ΓX ′ )(Y B ′ + XΓ′ ) (3. possibly through an iterative procedure. make it often unclear how the parameters to be estimated are related to the model’s restrictions and hence to the objective function in estimation. Furthermore. The linearization and the derivation of the control equation. Consequently.4 Yet. Start with an initial guess on ψ and use an appropriate method of dynamic optimization to derive the decision rules. the linearization of the system at its steady states is needed. such as the Lagrangian multiplier. since the system to be estimated is often nonlinear in parameters.8). searching a parameter space becomes the only possible way to ﬁnd the optimum.9) The ML estimator of ψ is the one that maximizes logL(ψ) in (3. using GMM or ML method to estimate a stochastic dynamic model is rather complicated. One proper restriction is the state equation and a ﬁrst-order condition derived either as Euler equation or from the Lagrangian. such as (3. As discussed in the previous chapters often a numerical procedure is required.143): ∂ 2 L(ψ) E(ψ − ψ)(ψ − ψ) ∼ − = ∂ψ∂ψ ′ ′ −1 . most ﬁrstorder conditions are extremely complicated and may include some auxiliary variables.8) may only lead to a local optimum.3) or maximizing (3.

p. (1985. One possible candidates is the simulated annealing. (1986) and Corana et al. The algorithm starts with an initial parameter vector x0 . The step size is narrowed so that the random search is conﬁned to an ever smaller region when the global optimum is approached. one needs to employ an optimization algorithm to search the parameter space recursively. be a function that is to be maximized and x ∈ S. may not serve our purpose well due to the possible existence of multiple local optima.5 A Global Optimization Algorithm: The Simulated Annealing The idea of simulated annealing has been initially proposed by Metropolis et al. Its value f 0 = f (x0 ) 5 For conventional optimization algorithm. Let f (x). (1992).5 such as Newton-Raphson or related methods. It moves uphill and downhill with a varying step size to escape local optima. see appendix B of Judge et al (1985) and Hamilton (1994. The space S should be deﬁned from the economic viewpoint and by computational convenience. which shall be discussed in the next section. The simulated annealing algorithm has been tested by Goﬀe et al. 5). they ﬁnd that out of 100 times conventional algorithms are successful 52-60 times to reach the global optimum while simulated annealing is 100 percent eﬃcient. (1992) compute a test function with two optima provided by Judge et al. Goﬀe et al. ch. where S is the parameter space with the dimensions equal to the number of structural parameters that need to be estimated. Apply some optimization algorithm to change the initial guess on ψ and start again with step one. The algorithm operates through an iterative random search for the optimal variables of an objective function within an appropriate space. Use the state equation and the derived control equation to calculate the value of the objective function. Bohachevsky et al. 3. By comparing it with conventional algorithms. for example. Using this strategy to estimate a stochastic dynamic model. We thus believe that the algorithm may serve our purpose well. (1987).956-7). The conventional optimization algorithms. .58 • Step 2. We thus need to employ a global optimization algorithm to execute the estimation process as described above. (1953) and later developed by Vanderbilt and Louie (1984). • Step 3. For this test.

14) with 0 < RT < 1. x′ is accepted.12) This p is compared to p′ . If it is larger than fopt . where p = e(f −f )/T ′ 0 (3. vi 1 + 0. al.6NS where ci is suggested to be 2 as by Corona et al.59 is calculated and recorded.11) and hence starts a new round of iteration. The new variable. With the new selected step-length vector. al. If not. is chosen by varying the ith element of x0 such that 0 x′i = x0 + r · vi i (3. the initial variable x0 is replaced by the updated 6 7 motivated by thermodynamics.13) vi = v 1 + 0.4NS ≤ ni ≤ 0.11)) should be undertaken and repeated NS times7 for each i. we should go back again to (3. If p is greater than p′ .6 denoted as p. (1987) 8 NT is suggested to be 100 by Corona et.4 ci (ni /NS − 0. the step-length is adjusted. a uniformly distributed random number from [0. both xopt and fopt are replaced by x′ and f ′ . the Metropolis criteria. x′ .9 With this new temperature T ′ . (1987). If x′ is not in S.4NS . NS is suggested to be 20 as by Corana et al. (1987). 1]. we set the optimum x and f (x) – denoted by xopt and fopt respectively – to x0 and f (x0 ). is used to decide on acceptance. 1]. Besides. The new temperature (denoted as T ′ ) will be T ′ = RT T 0 (3.6NS .6) −1 ′ 1 0 (3.85 by Corana et. The above steps (starting with (3. i 0 vi if 0. Other initial conditions include the initial step-length (a vector with the same dimension as x) denoted by v 0 and an initial temperature (a scalar) denoted by T 0 . Again after another NS times of such repetitions. the step-length will be re-adjusted.4 − ni /NS ) if ni < 0. one goes back to (3. .4 ci (0.11) until x′ is in S. The ith ′ element of the new step-length vector (denoted as vi ) depends on its number of acceptances (denoted as ni ) in its last NS times of the above repetition and is given by 0 1 if ni > 0. x′ is accepted. (1987) for all i. The new function value f ′ = f (x′ ) is then computed.11) where r is a uniformly distributed random number in [−1.8 We then come to adjust the temperature. But this time.11). f ′ should also be compared to the updated fopt . 9 RT is suggested to be 0. Subsequently. Subsequently. If f ′ is larger than f 0 . repeat (3. These adjustments as to each vi should be performed NT times.

3. based on some approximation methods. The ﬁrst part regards some necessary steps in the . 956-7). For this test. We then. 3. Thus a convergence will ultimately be achieved with the continuous reduction of the temperature. to estimate stochastic dynamic models employing time series data. the step-length in (3. for example the simulated annealing. (1992). Of course. By comparing it with conventional algorithms. they ﬁnd that out of 100 times conventional algorithms are successful 52-60 times to reach the global optimum while simulated annealing is 100 percent eﬃcient. have presented an estimation strategy.7 Appendix: A Sketch of the Computer Program for Estimation The algorithm we describe here is written in GAUSS. often a global optimization algorithm. For convergence. The entire program consists of three parts.60 xopt . We thus believe that the algorithm may serve our purpose well. In (3. the temperature will be reduced further after one additional NT times of adjusting the step-length of each i. The number of acceptance ni is not only determined by whether the new selected xi increases the value of objective function.13).6 Conclusions In this chapter. we shall demonstrate the effectiveness of this estimation strategy by estimating a benchmark RBC with simulated data. Goﬀe et al. We have introduced both the General Method of Moments (GMM) as well as the Maximum Likelihood (ML) estimations as strategies to match the dynamic decision model with time series data. but also by the Metropolis criteria which itself depends on the temperature.11) is required to be very small. which has often been employed in the assessment of a stochastic dynamic model. The algorithm will end by comparing the value of fopt for the last Nǫ times (suggested to be 4) when the temperature is attempted to be re-adjusted. (1992) compute a test function with two optima provided by Judge et al. The simulated annealing algorithm described above has been tested by Goﬀe et al. we have ﬁrst introduced the calibration method. In the next chapter. p. (1985. Although both strategies permit to estimate the parameters involved. is needed to be employed to detect the correct parameters. whether the new selected step-length is enlarged or not depends on the corresponding number of acceptances.

is the simulated annealing. ELSE. CONTINUE. DO UNTIL i = the dimensions of ϕ. ENDIF. IF ϕ′ is not in S. ELSE. ϕopt = ϕ′ . /*p is the metropolis criteria*/ ′ ′ IF f > f or p > p ϕ = ϕ′ . {Set initial conditions for simulated annealing} DO UNTIL convergence. i = i + 1. fopt = f ′ . f = f ′. CONTINUE. The third part. which is also the main part of this program. IF f ′ > fopt . ϕ′ = {as if current ϕ except the ith element to be ϕ′i }. /*set the vector for recording No. ENDIF. The input of this procedure are the structural parameters. HERE: ϕi = ϕi + rvi . . /*f is the value of the objective function to be minimized*/ p = exp[(f ′ − f )/T ].61 data processing after loading the original data. ELSE. ENDIF. f =OBJF(ϕ′ ). we shall only describe the simulated annealing. Of these three parts. i = 0. of acceptances*/ DO Ns times. t = t + 1. ni = ni + 1. CONTINUE. n = 0. while the activation of this procedure generates the value of objective function. GOTO HERE. We denote this procedure as OBJF(ϕ). The second part is the procedure that calculates the value of objective function for the estimation. DO NT times.

ENDIF. ENDO. v ′ . BREAK ELSE T = RT T . according to ni } v = v′.62 ENDO. IF change of fopt < ε in last Nε times. i = 0. {deﬁne the new step-size. ENDO. CONTINUE. REPORT ϕopt and fopt . . ENDO.

Part II The Standard Stochastic Dynamic Optimization Model 63 .

The 64 . The criticism of the performance of macroeconometric models of Keynesian type in the 1970s and the associated rational expectation revolution pioneered by Lucas (1976) initiated this development. (1989). We then present the standard RBC model as formulated in King et al. as mentioned above. The Real Business Cycle analysis now occupies a major position in the curriculum of many graduate programs.1 Introduction The Real Business Cycle model as a prototype of a stochastic dynamic macromodel has inﬂuenced quantitative macromodeling enormously in the last two decades. the Real Business Cycle approach has become a new orthodoxy of macroeconomics. the RBC analysis can also be regarded as a general equilibrium approach to macrodynamics. The central argument by Real Business Cycel theorists is that economic ﬂuctuations are caused primarily by real factors.Chapter 4 Real Business Cycles: Theory and the Solutions 4. To some extent. (1988). Its concepts and methods have diﬀused into mainstream macroeconomics. This chapter introduces the RBC model by ﬁrst describing its microeconomic foundation as set out by Stockey et al. Stockey. rational expectation and no monetary factors. A model of this kind will repeatedly be used in the subsequent chapters in various ways. Kydland and Prescott (1982) and Long and Plosser (1983) ﬁrst strikingly illustrate this idea in a simple representative agent optimization model with market clearing. Therefore. Lucas and Prescott (1989) further illustrate that such type of model could be viewed as an Arrow-Debreu economy so that the model can be established on a solid micro-foundation with many (identical) agents.

all producing a common output with the same constant returns to scale technology. all with the same preference. After this market has closed. the households in the economy are identical.65 model will then be solved after being parameterized by those standard values of the model’s structural parameters.23). sells the output and transfers any proﬁt back to the household. as in Arrow-Debreu economy. in periods t = 0. With this identical assumption.1 several assumptions should be made for an hypothetical economy. agents simply deliver the quantities of factors and goods they have contracted to sell and receive those they have contracted to buy. . 1 See Arrow and Debreu (1954) and Debreu (1959). 22) To establish the connection to the competitive equilibrium of the ArrowDebreu economy. Therefore. All trading takes place at that time. Finally. T . p. The revenue from selling factors can only be used to buy the goods produced by the ﬁrm either for consuming or accumulating as capital. be interpreted as predictions about the behavior of market economies. the resource allocation problem can be viewed as an optimization problem of a representative agent. so all prices and quantities are determined simultaneously. . in each period the household sells factor services to the ﬁrm. under appropriate conditions. The representative ﬁrm owns nothing. The following citation is again from Stokey et al. Second. 1989.. No further trades are negotiated later. First.. p. 4.”(Stokey et al. It is argued that “the solutions to planning problems of this type can.2 The Microfoundation The standard Real Business Cycle model assumes a representative agent who solves a resource allocation problem over an inﬁnite time horizon via dynamic optimization. assume that all transactions take place in a single once-and-for-all market that meets in period 0. In each period it simply hires capital and labor on a rental basis to produce output. (1989. The third assumption regards the ownership.. the trading process is assumed to be “once-and-for-all”. 1. It is assumed that the household owns all factors of production and all shares of the ﬁrm. and ﬁrms are also identical.

6).5) πt = pt f (kt .1) and (4.6) Note that (4. labor and capital expected by the household at given price sequence {pt . rt }t=0 .2) can be regarded as a budget constraint. wt . we obtain t s s s ˆ kt+1 = (1 − δ)kt + f (kt . s Given the initial capital stock k0 . the household is ∞ given the price sequence {pt .3) to eliminate id . f (·) is the production function and At is the expected technology shock. rt }∞ .8) (4. At ) t s s ˆ id = f (kt . At ) − cd t t (4.1 The Decision of the Household At the beginning of the period 0 when the market is open.5) and then substituting from (4. The equality holds due to the assumption Uc > 0. ns . id . cd and id are the demands for consumption and investment. where kt is implied by (4. β is the discounted factor.2. ns . At ) t s ˆ ns = Gn (kt . (4. Explaining πt in (4. Thus assuming that t=0 the household knows the production function while expecting that the mar∞ ket will be cleared at the given price sequence {pt . nt and kt are the realized output. Next. It is reasonable to assume that πt = pt (yt − wt nt − rt kt ) (4.2) (4. and t t t s ˆ cd = Gc (kt . ns . id . ns .2) in terms of (4.3) Above δ is the depreciation rate. the solution of this model is the sequence ∞ s s of plans cd . kt+1 t=0 . wt .4) can be rewritten as s s ˆ (4. kt t=0 that maximizes t t t the discounted utility: ∞ max E0 t=0 d β t U (ct . and t t s s nt and kt are the supplies of labor and capital stock.6) represent the standard RBC model. πt is the expected dividend.9) . we shall consider how the representative household calculates πt . wt . nt .7) (4. although it only speciﬁes one side of the markets: output demand and input supply. rt }t=0 at which he (or she) will choose the ses ∞ quence of output demand and input supply cd .1) subject to s pt (cd + id ) = pt (rt kt + wt ns ) + πt t t t s s d kt+1 = (1 − δ)kt + it (4. Note that (4.4) where yt . At ) − cd t t (4.66 4. ns ) t (4. At ) − wt ns − rt kt t t ˆ Above.

6) . At ) t (4.13) (4. nd . .12). wt . wt .2 The Decision of the Firm Given the same price sequence {pt . since the ﬁrm simply rents capital and hires labor on a period-by-period basis.2. ∞. wt . wt . A) t (4. 1.13)-(4. and also the sequence of ext=0 ˆ t=0 pected technology shocks {At }∞ . nt . rt }∞ .. its optimization problem is equivalent to a series of one-period maximization (Stokey et al.. . .. rt }∞ t t=0 that satisﬁes the equilibrium condition (4. A) ˆ nd = n(rt . t = 0.2. 2. At ) This ﬁrst-order condition allow us to derive the following equations of input d demands kt and nd : t d ˆ kt = k(rt . The economy is at the competitive equilibrium if ∗ ∗ {pt . i. d s kt = kt (4. rt }∞ = {p∗ . kt t=0 .. 1.3 The Competitive Equilibrium and the Walrasian Auctioneer ∗ ∗ A competitive equilibrium can be described as a sequence of prices {p∗ . The solution to this optimization problem satisﬁes: d ˆ rt = fk (kt . nd .11) (4.(4. wt .15) for all t’s. Howt ever. p25): s d max pt (yt − rt kt − wt nd ) t subject to d s ˆ yt = f (kt . nd .e.. At ) t d ˆ d wt = fn (kt .15). one can easily prove the existence of {p∗ .10) where t = 0. the problem faced by the representative s d ∞ ﬁrm is to choose input demands and output supplies yt . 2..67 4..12) 4. 1989. rt }∞ t t=0 t=0 ∗ ∗ Using equation (4. wt . ∞. wt . rt }∞ t t=0 at which the two market forces (demand and supply) are equalized in all these three markets.14) s yt nd = ns t t cd t + id t = (4.

. The Real Business Cycle theorists circumvent this problem skillfully and ingeniously. .often named as tˆtonnement process as in Walrasian economics . As written by Stokey et al. This adjsutment process . Speciﬁcally.68 The real business cycles literature usually does not explain how the equilibrium is achieved. Their approach is to use the so-called “contingency plan”.5 The Dynamics Assmue that the decisions are all contingent on the future shock {At }∞ . who adjust the price towards the equilibrium.. 2... and end-of-period capital kt+1 in each period t = 1. the dynamics of our hypothetic economy can be fully described by the following equations regarding the .2. t=0 and the prices are all at their equilibrium. the planner chooses among sequence of functions..4 The Contingency Plan ∗ ∗ It is not diﬃcult to ﬁnd that the sequence of equilibrium prices {p∗ .is a a commen solution to the adjsutment problem within the neoclassical general equilibrium framework. This indeed creates a problem how to express the equilibrium prices and the equilibrium demand and supply which are supposed to be made at the beginning of period 0 when the technology shock from period 1 onward are all unobserved. this is not a sequence of numbers but a sequence of contingency plans.2.(1989. z2 . consumption ct . p17): In the stochastic case. one for each period... Thus the sequence of equilibrium prices and the sequence of equilibrium demand and supply are all contingent on the realization of the shock regardless that the corresponding decisions are all made at the beginning of period 0. then. it is assumed that there exists an auctioneer in the market. Technically. however. . 4. zt . This sequence of realization is information that is available when the decision is being carried out but is unknown in period 0 when the decision is being made. Implicitly.. 4. wt . are contingent on the realization of the shocks z1 . rt }∞ t t=0 ˆ t=0 depends on the expected technology shock {At }∞ .

69 realized consumption. nt .3.3 4. (4. At is the temporary shock in technology and Xt the permanent . Yt for aggregate output and Ct for aggregate consumption. This empirically oriented formulation will be repeatedly used in the subequent chapters of this volumn. we employ here the speciﬁcations of a model as formulated by King. output. the output demand and input supply. demand and supply. The decision of the ﬁrm does not have any impact! This is certainly due to the equilibrium feature of the model speciﬁcation. Let Kt denote for the aggregate capital stock. At ) f (kt .18) (4.21) where δ is the depreciation rate.1 The Standard RBC Model The Model Structure The hypothetical economy we have presented in the last section is only for explaining the theory (from microeconomic point of view) behind the standard RBC economy. Although the model speciﬁes the decision behaviors for both household and the ﬁrm and therefore the two market forces.(4. For an empirically t=0 testable standard RBC model. At ) yt − c t (1 − δ)kt + f (kt .20) given the initial condition k0 and the sequence of technology shock{At }∞ .17) (4. At ) − ct (4.16) (4. nt . At ) Gn (kt . 4. but also we do not introduce the growth factor. α is the share of labor in the production function.19) (4. The model speciﬁed in (4. (1988).20) is not testable with empirical data. for all major three markets: output. the dynamics of the economy is reﬂected by only the household behavior. Assume that the aggregate production function take the form: Yt = At Kt1−α (Nt Xt )α (4. which concerns only one side of the market forces.16) . capital and labor markets. et al. The capital stock in the economy follow the transition law: Kt+1 = (1 − δ)Kt + Yt − Ct . t=0 This indeed provides another important property of the RBC economy. not only because we do not specify the stochastic process of {At }∞ . employment.22) where Nt is per capita working hours. investment and capital stock: ct nt yt it kt+1 = = = = = Gc (kt .

we shall ﬁrst derive the ﬁrst-order conditions. The sample mean of nt is equal to 30 %.70 shock that follows a growth rate γ. the Euler equations and the equations from the Lagrangian. This can be done by assuming kt+1 (instead of ct ) along . We nevertheless still present it here as an exercise and demonstrate that the two ﬁrst-order conditions are virtually equivalent. 4.23) where by deﬁnition. we divide both sides of equation (4.d.2 The First-Order Conditions As we have discussed in the previous chapters. We therefore have to rely on an approximate solution. but also the growth in productivity. The Euler equation is not used in our suggested solution method.21) by Xt (when Yt is expressed by (4. innovation.24) subject to the state equation (4. To transform the model into a stationary formulation.22)): kt+1 = 1 1−α (1 − δ)kt + At kt (nt N /0. The Euler equation To derive the Euler equation.3. 1+γ (4.3Nt /N with N to be the sample mean of Nt . is the average percentage of hours attributed to work. which. the model is nonstationary due to Xt . The representative agent in the economy is assumed to make the decision sequence {ct }∞ and {nt }∞ so as to t=0 t=0 ∞ max E0 t=0 β t [log ct + θ log(1 − nt )] . (4. Note that here Xt includes not only the growth in labor force. Note that nt is often regarded to be the normalized hours. as pointed out by Hansen (1985). For this. ct ≡ Ct /Xt and nt ≡ 0. our ﬁrst task is to transform the model into a setting that the state variable kt does not appear in F (·) as we have discussed in Chapter 1 and 2.i. Apparently. Note that there is no possibility to derive the exact solution with the standard recursive method. kt ≡ Kt /Xt . which may follow an AR(1) process: At+1 = a0 + a1 At + ǫt+1 . (4.3)α − ct .23). there are two types of ﬁrstorder conditions. The exogenuous variable in this model is the temporary shock At .25) with ǫt to be an i.

the objective function takes the form: ∞ max E0 t=0 β t U (kt+1 . (4. ∂nt Meanwhile the application of Benveniste-Scheinkman formula gives ∂V ∂U (kt . At+1 ) ∂kt+1 (4. kt .29).26) Note that here we have used (4. where U (kt+1 .71 with nt as model’s decision variables. At ) = max U (kt+1 . we obtain (4.nt (4.28) is given by ∂U ∂V (kt+1 . kt . kt . At ) = log [(1 − δ)kt + yt − (1 + γ)kt+1 ] +θ log(1 − nt ). ∂kt+1 kt+1 ct+1 ∂U αyt θ (kt+1 . At ) ∂kt ∂kt Using (4.23) and (4. ∂kt+1 ∂kt+1 From equation (4. yt is the stationary output via the following equation: 1−α yt = At kt (nt N /0. (4. kt . ∂kt+1 ct (1 − δ)kt+1 + (1 − α)yt+1 ∂U (kt+2 .26). kt+1 . kt . nt . kt .29) (4. At ) + βE [V (kt+1 . At ) + βE (kt+2 . nt . At ) = − . ∂U 1+γ (kt+1 . kt . At+1 ) = 0. nt . At+1 ) = . At ). At ) = − . nt . At ) + βE (kt+1 . At+1 ) = 0. At ) = 0. nt . The Bellman equation in this case can be written as V (kt .27) Given such an objective function.30) (4.3)α .31) in (4. nt . At+1 )] .28) The necessary condition for maximizing the right side of Bellman equation (4. In this case. nt . nt+1 . the state equation (4.32) ∂U ∂U (kt+1 . ∂nt nt c t 1 − n t .23) to express ct in the utility function. ∂kt+1 ∂kt+1 ∂U (kt+1 . kt . kt . At ) = (kt+1 . nt . nt+1 . kt+1 . kt+1 .23) can simply be ignored in deriving the ﬁrst-order condition. nt .31) to express ∂V (kt+1 .

ct 1 + γ αβyt −θ + Et λt+1 = 0.36) (4.(4.33) (4.39) (1 − δ)kt + (1 − α)yt kt ct .30). Next we try to demonstrate that the two ﬁrst-order conditions: (4. This can be done as follows.38) are virtually equivalent. 1+γ kt 1 [(1 − δ)kt + yt − ct ] .35) (4. we establish the following Euler equations: − (1 − δ)kt+1 + (1 − α)yt+1 1+γ + βE = 0. one obtains the following ﬁrst-order condition: β 1 − Et λt+1 = 0.35)).34) The First-Order Condition Derived from the Lagrangian Next.72 Substituting the above expressions into (4. Deﬁne the Lagrangian: ∞ L = t=0 ∞ β t [log(ct ) + θ log(1 − nt )] − Et β t+1 λt+1 kt+1 − t=0 1 1−α (1 − δ)kt − At kt (nt N /0.37) (4. kt and λt . ct kt+1 ct+1 αyt θ − = 0.35) . we turn to derive the ﬁrst-order condition from the Lagrangian. First. expressing [β/(1 + γ)]Et λt+1 in terms of 1/ct (which is implied by (4.38) with yt again to be given by (4.32) and (4. nt c t 1 − n t (4. we obtain from (4.33) .37) λt = This further indicates that Et λt+1 = (1 − δ)kt+1 + (1 − α)yt+1 kt+1 ct+1 (4.3)α + ct 1+γ Setting zero the derivatives of L with respect to ct .34) and (4.(4. nt . kt+1 = 1+γ (4. 1 − nt (1 + γ)nt β (1 − α)yt Et λt+1 (1 − δ) + = λt .27).

40) Note that we have used the ﬁrst-order condition from the Lagrangian to derive the above two steady states.25).38) along with (4. (4.73 Substituting (4. Second. denoted as (¯b . denoted as (¯i .3 ci = (φ − δ − γ)k i .34).35). and the other is interior. when evaluated in terms of their certainty equivalence forms. we then verify the second Euler equation (4. and substituting it into (4.3 . 4.36).27). nb . The steady state At is simply determined from (4. we obtain the ﬁrst Euler equation (4. λi = (1 + γ)/β¯i c y i = φk i . = 1. expressing [β/(1 + γ)]Et λt+1 again in terms of 1/ct .3. 1/α ¯ N /0. kb .35) . Equation (4. where φ= (1 + γ) − β(1 − δ) β(1 − α) = 0. ki . The other steady states are given by the following proposition: Proposition 4 Assume At has a steady state A. = A/(δ + γ) ¯ = (δ + γ)kb .(4.33). Since the two ﬁrst-order conditions are virtually equivalent. In c ¯ ¯ ¯ ¯ c ¯ ¯ ¯ ¯ particular. determines at least two steady states: one is on boundary. λb ). yb . ni . yi .2 2 We however leave this exercise to the readers. = ∞.39) into (4. we expect that the same steady states can also be derived from the Euler equations. . 1/α ¯ k i = A φ−1/α ni N /0. λi ).3 The Steady States Next we try to derive the corresponding steady states. cb ¯ nb ¯ λb ¯ kb yb ¯ and ni = αφ/ [(α + θ)φ − (δ + γ)θ] .

which can often be found in the RBC literature. nt and At .1.9884 δ 0.3 More detailed discussion regarding the parameter selection and estimation will be provided in the next chapter.1 and Figure 4. j = 1. n) = log(c) + θ log(1 − n) All these partial derivatives along with the steady states can be used as the inputs in the GAUSS procedure provided in Appendix II of Chapter 1.0333 a1 0. Executing this procedure will allow us to compute the undetermined coeﬃcients Gij and gi (i.42) Our ﬁrst step is to compute the ﬁrst.41) (4. We shall remark that these parameters are close to the standard parameters.2 . A) = 1 (1 − δ)k + Ak 1−α (nN /0. Assume that decision rule take the form ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 (4.4 Solving Standard Model with Standard Parameters To obtain the solution path of standard RBC model.and the second-order partial derivatives with respect to F and U where F (k.41) and (4. In Figure 4. which is related to the stochastic equation (4. we illustrate the solution paths. 2) in the decision rule as expressed in (4.58 γ 0. ct . (1988) except the last three parameters.0189 The solution method that we shall employ is the method of linear-quadratic approximation with our suggested algorithm as discussed in Chapter 1.74 4. we shall ﬁrst specify the values of the structural parameters deﬁned in the model.1: Parameterizing the Standard RBC Model α 0.9811 σǫ 0. .0045 β 0.42). 3 Indeed. c.25).3)α − c 1+γ U (c. they are essentially the same as the parameters choosed by King et al. one for the deterministic and the other for the stochastic case. Table 4. for the variables kt .025 θ 2 ¯ N 480 a0 0. These are reported in Table 4. n.

2: The Stochastic Solution to the Benchmark RBC Model for the Standard Parameters Elsewhere (see Gong and Semmler 2001). we have compared these solu- .75 Figure 4.1: The Deterministic Solution to the Benchmark RBC Model for the Standard Parameters Figure 4.

The utility function (4. See Benhabib and Nishimura (1998) and Harrison(2001).3. Since those generalized versions can easily give rise to multiple equilibria and history dependence. Moreover. as arguments is non-separable in consumption and leisure. We can obtain from (4. U (C. We ﬁnd that the two solutions are surprisingly close to the extent that one can hardly observe the diﬀerences.76 tion paths to those computed by Campbell (1994)’s log-linear approximate solution. (4. Although we will not attempt to estimate those more generalized versions it is worth presenting the main structure of the generalized models and to demonstrate how they can be solved by using dynamic programming as introduced in chapter 1. With respect to preferences utility functions such as4 Cexp U (C.44) which is additively separable in consumption and leisure. C and labor eﬀort.5 The Generalized RBC Model In recent work stochastic dynamic optimization models of general equilibrium type have been presented in the literature that go beyond the standard model as discussed in chapter 4. 4. N ) = logC − N 1+χ 1+χ (4.5.1 The Model Structure The generalization of the standard RBC model is usually undertaken either with respect to preferences or with respect to the technology. N ) = −N 1+χ 1+χ 1−σ −1 .43) 1−σ are used. N ) = − 1−σ 1+χ (4.43) with consumption. 4. The recent models are more demanding in terms of solution and estimation methods. in chapter 7 we will return to these types of models. N .6.45) 4 5 See Bennet and Farmer (2000) and Kim (2004). .43) a separable utility function such as5 C 1−σ N 1+χ U (C. by setting σ = 1 we obtain simpliﬁed preferences in log utility and leisure.

The increasing returns to scale technology represented by equ.48) (4. The above describes a type of a generalized model that we want to solve.4) and Benhabib and Farmer (1994). ch.77 As concerning production technology and markets usually the following generalizations are introduced. . the aggregate stock of capital and labor hours respectively. 7. K. ϕ′ (δ) = 1.46) whereby α = (1 + ξ)a.8 We may write ˙ I K =ϕ K K with the assumption of ϕ(δ) = 0. Christiano and Fisher (2001). β > 0. A functional form that satisﬁes the three conditions of equ. See Farmer (1999. Although the individual-level private technology generates constant returns to scale with Yi = AKia Lb .6 First we can allow for increasing returns to scale.47) is I 1−ϕ − 1 /(1 − ϕ) (4. N represent total output. with Y = K α N β α > 0. (4.46) can also be interpreted as a monopolistic competition economy where there are rents arising from inverse demand curves for monopolistic ﬁrms.49) δK For ϕ = 0.7 Another generalization as concerning production technology can be undertaken by introducing adjustment cost of investment. β = (1 + ξ)b and Y. 6 7 See Kim (2003a). one has the standard model without adjustment cost. namely δ ˙ K = I − δK. (4. α + β ≥ 1 (4. ξ ≥ 0 may allow for an aggregate production function with increasing returns to scale. ϕ′′ (δ) ≤ 0 (4.2. 8 For the following. a + b = 1 i externalities of the form A = (K a N b )ξ .47) Hereby δ is the depreciation rate of the capital stock. see Lucas and Prescott (1971) and Kim (2003a) and Boldrin.

. Its discretization for the use of dynamic programming is undertaken through the Euler procedure.6 and a grid for the capital stock.7 0. 10].1 0 0. (4. we obtain the following value function. ˙ K It =ϕ K Kt (4. The latter are used in equ. Nt ) are chosen such as represented by equ.49).1. representing the total utility along the optimal paths of C and N . Preferences and technology (by using adjustment cost of capital) take on.2 Solving the Generalized RBC Model ∞ We write the model in continuous time and in its deterministic form Ct . Nt )dt. Table 4.05 Note that in order to stay as close as possible to the standard RBC model we avoid externalities and therefore presume ξ = 0. (4.51) where preferences U (Ct .3 0.2: Parameterizing the General Model a b χ ρ δ ξ ϕ 0. Using then the deterministic variant of our dynamic programming algorithm of ch.51) are written in continuous time.78 4. a more general form.50) s. in the interval [0. (4. K. Nt = max t=0 e−ρt U (Ct .3 0. 1.47)-(4.45) and the technology such as speciﬁed in eqs.05 0. (4.50)-(4.46) and (4. however.5.t.51). Note that the dynamic decision problem (4. For solving the model with the dynamic programming algorithm as presented in chapter 1.6 we use the following parameters.

K. The dynamics generated by the response of consumption and labor eﬀort to the state variable capital stock. when capital stock is low (so that capital stock can be built up) and consumption is high and labor eﬀort low (so that capital stock will shrink).6 0.8 1.3: Value function for the general model The out-of-steady state of the two choice variables namely consumption and labor eﬀort. consumption is low and labor eﬀort high. Moreover. see Figure 4.4: Paths of the Choice Variables C and N (depending on K) As can be clearly observed the value function is concave.2 0 0 2 4 6 8 10 Figure 4. 1. see Figure 4.(depending in feedback form on the state variable capital stock.4. . K) are shown in Figure 4.3.4 1.8 0.79 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 0 1 2 3 4 5 6 7 8 9 10 Figure 4.4. consumption and labor eﬀort.6 1. will thus lead to a convergence toward an interior steady state of the capital stock.2 1 0.4 0.

n and λ ¯ with cb . We then introduce the empirically oriented standard RBC model and.56) given cb . We also have presented a generalized RBC model and solved it by using dynamic programming. based on our previous chapters.56) ¯ The derivation of the boundary steady state is trival.54) (4.(4. 4. we obtain β 1 − λ=0 c 1+γ αy −θ + βλ =0 1−n (1 + γ)n (1 − α)y β λ (1 − δ) + =λ 1+γ k 1 (1 − δ)k + y − c k= 1+γ where from (4.(4. kb ¯ and yb can be derived from (4. nb and λb . and assuming all the variables to be at their steady states. Our attempt was to reveal the basis of some intertemporal decision problems behind the RBC model.7 Appendix: The Proof of Proposition 4 Evaluating (4.52) .54). and therefore all the steady state values y are understood as the interior steady states. Further.55) and (4.6 Conclusions In this chapter we ﬁrst have introduced the intertemporal general equilibrium model on which the standard RBC model is constructed.55) nN 0. present the solution to the model. the task that will be addressed in the next chapter. Replace c.38) along with (4. For notational convenience.3 α (4.27) in their certainty equivalence form.53) (4. Let k = φ. ¯ ¯ Next we try to derive the interior steady state. We ﬁnd equation (4.52) (4. This will be important for the subsequent part of the book.54) are satisﬁed. nb and λb .80 4.27) y is given by y = Ak 1−α (4. This provides us the groundwork for an empirical assessment of the RBC model. By (4. we ignore the subscript i.35) . we obtain .

λ can be derived from (4.52) and (4.58). Finally.58) Expressing y in terms of φk and then expressing k in terms of (4.60). y from (4.3 (4.81 y = φk where φ is deﬁned by (4.57) and c either from (4.56).60). k y /¯ ¯ n = n y /k ¯ ¯ 1 N 0.52).40) in the proposition.3 (4.3 (4.3 φ α−1 α n−1 N 0. y = A n = A 1 y φ y n 1−α nN 0. we thus obtain from (4. According to (4.57) = A α φ− α 1 1 N 0.3 α 1−α = A α φ1− α Therefore.55) c = (φ − δ − γ)k = (φ − δ − γ)A α φ− α Meanwhile.60) Equating (4. (4.3 n (4.59) N 0. Once n is solved. k can be obtained from (4.58).59) and (4. we thus solve the steady state of the labor eﬀort n.59) or from (4.53) imply that c = = α y (1 − n) θ n 1 1 α (1 − n)A α φ1− α θ 1 1 N 0. .

. which is generated from a stochastic simulation of our standard model for the given 82 . no market failures of any kind. no adjustment cost could replicate actual experience this well is very surprising. rational expectations. As Plosser (1989) pointed out. despite its rather simple structure.) However. these early assessments have also become the subject to various criticisms. Moreover. In this chapter.. We shall ﬁrst estimate the standard RBC model and then evaluate the calibration results that have been stated by early real business cycle theorists. Our previous discussion. 5.” (Plosser 1989:. especially in the ﬁrst three chapters.2 in the last chapter. has provided a technical preparation for this assessment.1 Introduction Many real business cycle theorists believe that the RBC model is empirically powerful in explaining the stylized facts of business cycles. we shall ﬁrst demonstrate the eﬃciency of our estimation strategy as discussed in Chapter 3. some theorists suggest that even a simple RBC model. we shall provide a comprehensive empirical assessment of the standard RBC model.Chapter 5 The Empirics of the Standard Real Business Cycle Model 5. like the standard model we presented in the last chapter. “the whole idea that such a simple model with no government. can generate the time series to match the macroeconomic moments from empirically observed time series data. Yet before we commence with our formal study.2 Estimation with Simulated Data In this section. The simulated data are shown in Figure 4. we shall ﬁrst apply our estimation strategy using simulated data.

the restriction Bzt + Γxt = εt (see equation (3. This gives us y y y yt = At + (1 − α) kt + α nt − y n A k We thus obtain for the ML estimation: 1+γ kt ct −G12 zt = B = −G nt .3)α = 0.1) (5. Yet for the ML estimation. 22 −(1 − α) y yt kt−1 ct−1 xt = yt−1 At 1 1 k 0 1 0 0 0 0 1 y −α n 0 0 0 1 .83 standard parameters reported in Table 4. we expect that the estimated parameters will be close to the standard parameters that we know in advance. the model implies the following moment restrictions:1 E [(1 + γ)kt+1 − (1 − δ)kt − yt + ct ] = 0. we shall ﬁrst linearize (4. For the GMM estimation. . 1−α E yt − At kt (nt N /0.3) (5. (5.27) using Taylor approximation. E [ct − G11 kt − G12 At − g13 ] = 0.4). The purpose of this estimation is to test whether our suggested estimation strategy works well.6))2 must be linear. E [nt − G21 kt − G22 At − g23 ] = 0.2) (5.2. Therefore.1 The Estimation Restriction The standard model implies certain restrictions on the estimation. Therefore. 2 Note that here we use zt rather than yt in order to distinguish it from the output yt in the model.4) Note that the moment restrictions could be nonlinear as in (5. −(1 − δ) 0 Γ= 0 0 1 −1 0 0 0 0 −G11 −g13 0 0 −G21 −g23 y 0 0 −y A Note that the parameters in the exogenuous equation could be independently estimated since they have no feed back into the other equations.1. 5. there is no necessity to include the exogenuous equation into the restriction. If the strategy works well.

However.2919 2.1 and 5. we should also remark that the diﬀerence is minor whereas the time required by the ML estimation is much shorter.4373E−006) (6.2 Estimation with Simulated Data Although the model involve many parameters.9884 0. we also illustrate the surface of the objective function for our ML estimation.9884 0.1 reports our estimation with the standard deviation included in parenthesis.0253 θ 2 2. It shows not only the existence of multiple optima.5779E−005) (2.6369E−006) 0.006181723) (4. The parameters with regard to the AR(1) process of At have no feedback eﬀect on our estimation restrictions. Table 5.1826 (2.1: GMM and ML Estimation Using Simulated Data True ML Estimation 1st Step GMM 2nd Step GMM α 0. β. These are the parameters that are empirically unknown and thus need to be estimated when we later turn to the estimation using empirical data. This veriﬁes the necessity of using simulated annealing in our estimation strategy.000 0. 3 N can be regarded as the mean of per capital hour.02505 One ﬁnds that the estimations from both methods are quite satisfying. but also that the objective function is not smooth.3 Table 5. δ and θ.5781 β 0.84 5. we shall only estimate the parameters α.7217E−007) (3.025 0. This demonstrates the eﬃciency of our estimation strategy.9821 0. All estimated parameters are close to their true parameters.2.4956E−007) (9.02500 2. Yet. γ does not appear in the model.9290E−006) (0.9958E−008) (5.4412E−006) (3.9946 δ 0. This is probably because the GMM estimation does not need to linearize (5.7174E−006) (0.00112798) (3.6093E−006) (9.5800 0. In Figure 5.5796 0.58 0.2. but create for transforming the model into a stationary version. The latter holds not only because the GMM needs an additional step. .4). the GMM estimation after the second step is more accurate than the ML estimation. the parameters that we have used to generate the data. but also each single step of the GMM estimation takes much more time for the algorithm to converge. Approximately 8 hours on a Pentium III computer is required for each step of the GMM estimation whereas approximately only 4 hours is needed for the ML estimation.

δ Surface of the Objective Function for ML Estimation Figure 5.85 Figure 5.2: The θ − α Surface of the Objective Function for ML Estimation .1: The β .

5.86 Next. This will allow us to explore a puzzle. and the parameter α and γ as well as the initial condition X0 are given. Kt . often called the technology puzzle in the RBC literature. Ct consumption and Yt output. we shall ﬁrst discuss the data that shall be used in our estimation. the existing macroeconomic data (such as those from Citibase) also need to be adjusted to be accommodated to the deﬁnition of the variables as deﬁned in the model. since this is a common practice we shall here also follow this procedure.6) where Xt follows a constant growth rate: Xt = (1 + γ)t X0 (5. time series data. It should be noted to derive the temporary shock as above deserves some criticism. deviate from this practice and construct the Solow residual in a diﬀerent way. Kt the capital stock. a key assumption in the model.5) (5. the Solow residual At can be derived if the time series Yt . Later in this chapter we will. Yet. Indeed. The time series employed for our estimation should include At . however.7) Thus. the Solow residual At is computed as follows: At = = Yt 1−α Kt (Nt Xt )α yt 1−α kt Nt α (5. . S. such as the business cycle model of Keynesian type. Nt . A common practice is to use the so-called Solow residual for the temporary shock. S. Here all the data can assume to be obtained from statistical sources except the temporary shock At . which makes it diﬀerent from other types of model. time series. In addition to the construction of the temporary shock. this approach uses macroeconomic data that posits a fullemployment assumption. the temporary shock in technology. Assuming that the production function takes the form Yt = At Kt1−α (Nt Xt )α . The empirical studies of RBC models often require a considerable re-construction of existing macroeconomic data. Nt the labor input (per capita hours). we turn to estimating the standard RBC model with the data of U.3.3 5.1 Estimation with Actual Data The Data Construction Before estimating the benchmark model with empirical U.

5 Meanwhile we shall consider two α∗ : one is the standard value 0. In this estimation.5 )). Consequently. Data Set I. (1988). ct ≡ Xt and yt ≡ Xtt ). Data Set II. which has been used in many empirical studies of RBC model such as Christiano (1988) and Christiano and Eichenbaum (1992).1 to 1984. see Cooley and Prescott (1995). Xt . the shock sequence At is computed from the time series of Yt . Table 5. Here the time series Xt is computed according to equation (5. one has to compute them based on some assumptions. Since such data are not readily available.0045 for γ to make kt . is constructed by Christiano (1987). The sample period for this data set is from 1952. Further.4. one should also include government consumption in Ct . γ is the standard parameter as choosen by King X t et al. 4 5 For a discussion on data deﬁnitions in RBC models. ct and yt to be stationary (note that we have deﬁned C Y t kt ≡ Kt . .2 reports the estimations after the second step of GMM estimation. Nt and Kt given the pre-determined α. Therefore.0045.58 and the other is 0. is obtained mainly from Citibase except the capital stock which is taken from the Current Survey of Business. to make the model’s national income account consistent with the actual data.4. We shall remark that 0. All the data are quarterly. which we denote as α∗ (see equation (5. Two Diﬀerent Data Sets To explore how this treatment of the data construction could aﬀect the empirical assessment of dynamic optimization model. it is suggested that not only private investment but also government investment and durable consumption goods (as well as inventory and the value of land) should be included in the capital stock.66 is the estimated α in Christiano and Eichenbaum (1992).1 to 1988.2 Estimation with the Christiano Data Set As we have mentioned before.7) for the given parameter γ and the initial condition X0 . The ﬁrst data set.87 4 The national income as deﬁned in the model is simply the sum of consumption Ct and investment. the latter increases the capital stock.66. 5. This data set is taken without any modiﬁcation. we set X0 to 1 and γ to 0. Also. we shall employ two diﬀerent data sets. the service generated from durable consumption goods and government capital stock should also appear in the deﬁnition of Yt . The sample period of this data set is from 1955. The second data set.3. We choose 0.

4656 (54457. the estimated parameters are very close to those in Christiano and Eichenbaum (1992).0002) (0.5800 β δ θ 0.0045.2377E−008) (9. Furthermore. we should also remark that the standard errors are unusually small.0716 (454278. we ﬁnd that the estimations here are much less satisfying.0078) 0. ∗ 5.1111 (0.3 Estimation with the NIPA Data Set As in the case of the estimation with Christiano’s data set.66 ∗ (71431.0088) As one can observe.0209 2. For the two pre-determined α.66.66 ∗ (2.272) β 0. all the estimations seem to be quite reasonable. one is forced to think about the data issue involved in the current empirical studies of RBC model. Given such a sharp contrast for the results of the two diﬀerent data sets.0714 1.204) (45174.9892 0.409) (35789.9286 0.58. They deviate signiﬁcantly from the standard parameters. though somewhat deviating from the standard parameters used in King et al. are not within the economically feasible range. For α = 0. This is not surprising due to the way of computing the temporary shocks from the Solow residual.0002) (0.3: Estimation with the NIPA Data Set α = 0. this issue is mostly suppressed in the current debate. the estimated parameters. the estimated α is almost the same as the pre-determined α.2393E−006) α 0.3.58 α∗ = 0. the estimates are all statistically insigniﬁcant due to the huge standard errors.88 Table 5. especially β.we report the estimation results in Table 5. we again set X0 to 1 and γ to 0.6600 0.0002) (0.907) (39958.2963 0. .56) θ 1. Even the parameter β estimated here is very close to the β chosen (rather than estimated) by them.9552 (0.6663 0. Some of the parameters.06) (283689. In both cases.58 α∗ = 0.2: Estimation with Christiano’s Data Set α = 0.8553 (89684. Indeed. For α∗ = 0.828) δ 0.9935 0.0209 1.0002) (0.023) α 0. Finally. (1988). Table 5.8610 In contrast to the estimation using the Chrisitano data set. are all within the economically feasible range.3.

0045 β 0.0189 6 Note that these are the same as in Table 4. β. a1 and σǫ in the stochastic equation (5. The parameters a0 . which has already been introduced in Chapter 3.9) (5. TimeSeries Data Given the structural parameters.1.5800 γ 0. S.9892 δ 0. .0209 θ 1. we employ those in Table 5. First. The parameter γ is set 0.9552 N 299. we report all these parameters in Table 5.9811 σǫ 0.10) (5.58 for the parameters α. For convenience. one can then assess the model to see how closely it matches with the empirical data. 2) are all the complicated functions of the structural parameters and can be computed from our GAUSS procedure of solving dynamic optimization problem as presented in Appendix II of Chapter 1. and Gij and gi (i. The current method for assessing a stochastic dynamic optimization model of RBC type is the calibration technique.6 The parameter N is simply the sample mean of per capita hours Nt .3)α At+1 = a0 + a1 At + ǫt+1 1 [(1 − δ)kt + yt − ct ] kt+1 = 1+γ (5.03 a0 0.0333 a1 0. The structural parameters used for this stochastic simulation are deﬁned as follows.4: Parameterizing the Standard RBC Model α 0.11) (5.0045 as usual.89 5. Table 5. δ and θ. The data generation process for this stochastic simulation is given by the following equations: ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 1−α yt = At kt (nt N /0. j = 1.8) (5.11) are estimated by the OLS method given the time series computed from Solow residue.12) 2 where ǫt+1 ∼ N (0.4 Calibration and Matching to U. The basic idea of calibration is to compare the time series moments generated from the model’s stochastic simulation to those from a sample economy.4. σǫ ).2 at α∗ = 0.

0037 (0. we can further obtain the distribution of these moment statistics. the distributions are derived from our 5000 thousand stochastic simulations.0000 (0.0156 0.1089) 0.9796 (0.0000 (0.0083) 1.0035 0.0000) 0. Here. Of course.0090 (0.4.0050 (0.1 The Labor Market Puzzle By observing Table 5. capital stock and output could be regarded as being .5: Calibration of Real Business Cycle Model (numbers in parentheses are the corresponding standard deviations) Consumption Standard Deviations Sample Economy Model Economy Correlation Coeﬃcients Sample Economy Consumption Capital Stock Employment Output Model Economy Consumption Capital Stock Employment Output 0.0000 1.4604 0.0006) Output 0.0159 (0.9432 (0. For the model economy.0954 1.0906) 0.0012) Capital 0.0000) 0. All time series data are detrended by the HP-ﬁlter.5.2861 0.0000) 5.12). which can be reﬂected by their corresponding standard deviations (those in paratheses).7550 1.0000 (0. Table 5.1741 0.90 Table 5.0021) 1.(5. In particular.0007) Employment 0. the moment statistics of the sample economy is computed from Christiano’s data set.0575 (0.0000 0. while those for model economy are generated from our stochastic simulation using the data generation process (5.0000) −0.9381 (0.2013 (0.0210) 0.0081 0.1431 (0.0165 0.8) .0000 0.7263 1.0000 0.5 reports our calibration from 5000 stochastic simulations.1032) 1. the moment statistics include the standard deviations of some major macroeconomic variables and also their correlation coeﬃcients.0031) 1. we ﬁnd that among the four key variables the volatilities of consumption.0000 (0.

where we compare the observed series from the sample economy to the simulated series with innovation given by the observed Solow residual. Figure 5.3: Simulated and Observed Series (non detrended): solid line observed and dashed line simulated .4. We shall remark that the excessive smoothness of employment is a typical problem of the standard model that has been addressed many times in literature.3 and Figure 5. Indeed.91 somewhat matched. the employment in the model economy is excessively smooth. This is indeed one of the major early results of real business cycle theorists. However. These results are further demonstrated by Figure 5. the matching does not hold for employment.

One is between consumption and output.92 Figure 5.4: Simulated and Observed Series (detrended by HP ﬁlter): solid line observed and dashed line simulated Now let us look at the correlations. However. should be somewhat correlated. The discussions have often been focused on the correlation with output. . However. therefore. and the other is between employment and output. In the sample economy. Both of these two correlations have also been found in our model economy. The excessive smoothness of labor eﬀort and the excessive correlation between labor and consumption will be taken up in Chapter 8. this excessive correlation should not be surprising given that in the RBC model the movements of employment and consumption reﬂect the movements of the same state variables: capital stock and temporary shock. there are basically two signiﬁcant correlations. We remark that such an excessive correlation has. They. to our knowledge. not yet been discussed in the literature. in addition to these two correlations. consumption and employment in the model economy are also signiﬁcantly correlated.

these results rely on the hypothesis that the driving force of the business cycles are technology shocks. One is that the parameters a0 . i. is the technology-driven hypthoses. Third. the technology is procyclical with output.e.4. in which the technology is measured by the standard Solow residual.5 The Issue of the Solow Residual So far we may argue that one of the major achievements of the standard RBC model is that the model could explain the volatility of some key macroeconomic variables such as output. Eichenbaum and Rebelo 1996). A typical example is to employ electricity use as a proxy for capacity utilization (see Burnside. but to be shown below. . consumption and capital stock. the standard Solow residual is not a reliable measure of technology schocks if the cyclical variation in factor utilization are signiﬁcant.93 5. a1 and σǫ in the stochastic equation (5. Second. The second is that the Solow residual also serves as the sequence of observed innovations that generate the graphs in Figure 5. These parameters will directly aﬀect the results from our stochastic simulation. which are unlikely to be related to factor productivity. which are assumed to be measured by the Solow residual. The ﬁrst strategy is to use an observed indicator to proxy for unobserved utilization. consumption and employment. There are basically three strategies. this celebrated result is obtained also from the empirical evidence. Those innovations are often used in the RBC literature as an additional indicator to support the model and its matching of the empirical data. The measurement of technology can impact this result in two ways. It has been shown that the Solow residual can be expressed by some exogenuous variables.3 and Figure 5. Mankiw (1989) and Summers (1986) have argued that such a measure often leads to excessive volatility in productivity and even the possibility of technological regress. for example demand shocks arising from military spending (Hall 1988) and changed monetary aggregates (Evan 1992). Another major presumption of the RBC literature. see Gali (1999) and Francis and Ramey (2001. not yet shown in Table 5. First. Meanwhile. There are several reasons to distrust the standard Solow residual as a measure of technology shock. Fernald and Kimball 1998).11) are estimated from the time series computed from Solow residual. A third strategy identiﬁes the technology shock through an VAR estimate. Of course. Considering that the Solow residual cannot be trusted as a measure of the technology shock. Another strategy is to construct an economic model so that one could compute the factor utilization from the observed variables (see Basu and Kimball 1997 and Basu. researchers have now developed diﬀerent methods to measure technology correctly. 2003). All these methods are focused on the computation of factor utilization.5.. both of which seem to be empirically implausible.

as driven by technology shocks. over Xt . if the corrected Solow residual is signiﬁcantly diﬀerent from the standard Solow residual. one may ﬁnd that the standard RBC model. One possible way to test the exogeneity is to employ the Granger causality test.1 Testing the Exogeneity of the Solow Residual Apparently. testing the exogeneity of the Solow residual becomes our ﬁrst investigation to explore whether the Solow residual is a correct measure of the technology shock. Gali (1999) and Francis and Ramey (2001) have found that if one uses the corrected Solow residual – if one identiﬁes the technology shock correctly – the technology shock is negatively correlated with employment and therefore the celebrated discovery of the RBC literature must be rejected. if they are conﬁrmed. can match well the variations in output. we then explore whether the RBC model is still able to explain the business cycles. consumption and capital stock not because the model has been constructed correctly. IPXMCAQ. may not be a realistic paradigm for macroeconomic analysis any more. Unlike other current research. we will refer to all of this recent research employing our available data set.94 Recently. We will construct a measurement of the technology shock that represents a corrected Solow residual. Also. In this section. This construction needs data on factor utilization. For this purpose. 5. we shall investigate the following speciﬁcation: At = c + α1 At−1 + · · · + αp At−p + β1 gt−1 + · · · + βp gt−p + εt (5.5. output and capital stock. in particular the variation in consumption. the distribution of At cannot be altered by the change in other exogenuous variables such as the variables of monetary and ﬁscal policy. the real business cycles model. If the Solow residual is exogenuous. We shall also look at whether the technology still moves procyclically with output. we simply use government spending. a critical assumption of the Solow residual to be a correct measurement of the technology shock is that At should be purely exogenuous. Indeed. using the Solow residual. obtained from Citibase. We will ﬁrst follow Hall (1988) and Evan (1992) to test the exogeneity of Solow residual. All these are important problematic issues that are related to the Solow residual. as an aggregate demand variable. Therefore. Given our new measurement. gt should not have any . Yet in this test. This is also the approach taken by Evan (1992). we use empirically observed data series.13) where gt in this test is government spending. consumption and employment. In other words. which is available in our Christiano’s data set. the capacity utilization of manufacturing. but because it uses a problematic measure of technology.

Table 5. Next.6. there is no variation in population growth. First. The test therefore will be conducted for diﬀerent lag lengths p’s. In other words. Next we shall consider the derivation of the corrected Solow residual by relaxing those strong assumptions.6: F −Statistics for Testing Exogeneity of Solow Residual p=1 p=2 p=3 p=4 F − statistics 9. we present a simple way of how to extract a technology shock from macroeconomic data. Second. . On the other hand. one ﬁnds that at 5% signiﬁcance level we can reject the null hypothesis for all the lag lengths p′ s. 82) From Table 5. 86) (6.5.g. 7 Although we are not able to obtain the same at 1% signiﬁcance level.2775435 2. This ﬁnding is consistent with the results in Hall (1988) and Evan (1992).5). e.6 provides the corresponding F -statistics computed for the different p’s. which certainly represents a demand shock.5769969 4. such as government spending.14) The rejection of the null hypothesis is suﬃcient for us to refute the assumption that At is strictly exogenuous.. If we look at the computation of the Solow residual.95 explanatory power for At . 92) (2. it is further assumed that the population follows a constant growth rate which is a part of γ. those policy variables. equation (5. we ﬁnd two strong assumptions inherent in the formulation of the Solow residual. 90) (4.7 5.3825632 degrees of freedom (1. Therefore we may have suﬃcient reason to distrust the Solow residual to be a good measure of the technology shock. Therefore our null hypothesis is H0 : β1 = · · · = βp = 0 (5. It is well known that the result of any empirical test for Granger causality can be surprisingly sensitive to the choice of lag length p. Table 5. may have explanatory power for the variation of the Solow residual.3041035 3. it is assumed that the capital stock is fully utilized.2 Corrected Technology Shocks The analysis in our previous section indicates that the hypothesis can rejected that the standard Solow residual be strictly exogenuous.

8 Note that this is diﬀerent from our notation Nt before. Figure 5.96 Let ut denote the utilization of capital stock. Ht denotes the hours per employed worker.5). which can be measured by IPXMCAQ from Citibase. Given equation (5. which is the hours per capita. the corrected Solow residual At can Lt be computed as yt ˜ (5. Let Lt denote the permanent shock to population so that Xt = Zt Lt while t Lt denotes the observed population so that ELHt = Nt .16) ˜ Above lt ≡ Lt . The observed output is thus produced by the utilized capital and labor service (expressed in terms of total observed working hours) via the production function: ˜ Yt = At (ut Kt )1−α (Zt Et Ht )α (5. Et is the number of workers employed.15) ˜ Above. Note that in this formulation.15) by Xt .8 and Zt is the permanent shock in technology. one ﬁnds that our corrected Solow resid˜ ual At will match the standard Solow residual At if and only if both ut and lt equal 1.17) At = 1−α (ut kt ) (lt Nt )α Comparing this with equation (5.5 compares these two time series: one for non-detrended and the for detrended series. At is the corrected Solow residual (which is our new measure of temporary shock in technology). we interpret the utilization of labor service only in terms of their working hours and therefore ignore their actual eﬀort. .16). which is more diﬃcult to be observed. we then obtain ˜ yt = At (ut kt )1−α (lt Nt )α (5. Dividing both sides t of (5.

7. .97 Figure 5.9 However. In Table 5. we report the cross-correlations of the technology shock to our four key economic variables: output. they rather move in diﬀerent directions if we compare the detrended series. The data series are again detrended by the HP-ﬁlter. employment and capital stock.3 Business Cycles with Corrected Solow Residual Next we shall use the corrected Solow residual to test the technology-driven hypothesis. the Sample Economy I (in which the technology shock is represented by the standard Solow residual) and the Sample Economy II (to be represented by the corrected Solow residual). 5.5: The Solow Residual: standard (solid curve) and corrected (dashed curve) As one can observe in the ﬁgure 5. the two series follow basically the same trend while their volatilities are almost the same.5. consumption. (1996).5. in the short run. These correlations are compared for three economies: the RBC economy (whose statistics is computed from 5000 simulations). 9 A similar volatility is also found in Burnside et al.

This result is exactly predicted by the RBC Economy and represents what has been called the technology-driven hypothesis.98 Table 5. where the standard Solow residual is employed.7008 0.2142 -0. et al. as in Sample Economy II. we ﬁnd that the results are in sharp contrast to the prediction as referred in the standard RBC model.4. we ﬁnd that the technology shock is procyclical to output.0013) (0. we provide in Figure 5.10 Comparing Figure 5.1077) 0. Gali (1999) and Francis and Ramey (2001. if we use the corrected Solow residual.4.0084) (0.7844 0. 10 Here the structural parameters are still the standard ones as given in Table 5.6 a one time simulation with the observed innovation given by the corrected Solow residual. We.0255 (0. . can conﬁrm the ﬁndings of the recent research by Basu. consumption and employment.1108 -0.6 to Figure 5.1736 -0.5854 0.9903 0. However. 2003).9966 -0. To test whether the model can still match the observed business cycles. especially for employment. therefore. (1998).0031) (0. we ﬁnd a somewhat opposite result.0762 RBC Economy Sample Economy I Sample Economy II If we look at the Sample Economy I.7: The Cross-Correlation of Technology output consumption employment capital stock 0.9722 0.3422 -0.

6: Sample and Predicted Moments with Innovation Given by Corrected Solow Residual 5. such data reconstruction seems to force the ﬁrst moments of certain macroeconomic variables of the U.S. but far from perfect”. In the ﬁrst place. macroeconomic time series data despite its rather simple structure. macroeconomic data. Yet this early assessment should be subject to certain qualiﬁcation.6 Conclusions The standard RBC model has been regarded as a model that replicates the basic moment properties of U.S. Indeed.S. many have felt that the RBC research has at least passed the ﬁrst test.99 Figure 5. economy to be matched by the model’s steady state at the given economically feasible standard pa- . Prescott (1986) summarizes the moment implications as indicating “the match between theory and observation is excellent. Through its necessity to accommodate the data to the model’s implication. this early assessment builds on the reconstruction of U.

One possible approach for such improvement is to allow for wage stickyness and nonclearing of the labor market. Second. The unusual small standard errors of the estimates seem to conﬁrm this suspicion. al (1999) pointed out. may not be a very plausible hypothesis. although one may celebrate the ﬁt of the variation of consumption. output and capital stock when the reconstructed data series are employed. “it is the ﬁnal criticism that the Solow residual is a problematic measure of technology shock that has remained the Achilles heel of the RBC literature. Both of these two problems are related to the labor market speciﬁcation of the RBC model. we still cannot ignore the problems of excessive smoothness of labor eﬀort and excessive correlation between labor and consumption. the celebrated ﬁt of the variation in consumption. Third. . As we have shown in Figure 5.” In Chapter 9. This incorrect measure of technology takes us to the technology puzzle: the procyclical technology.100 rameters. the match does not exist any more when we use the corrected Solow residual as the observed innovations. output and capital stock may rely on the incorrected measure of technology. For the model to be able to replicate employment variation. it seems necessary to make improvement upon the labor market speciﬁcation. a task that we will turn to in Chapter 8. we shall address the technology puzzle again by introducing monopolistic competition into an stochastic dynamic macro model. As King et. driving the business cycle.6.

In economies with production where asset returns and consumption are endogenous consumers can save and hence transfer consumption between periods. 101 . there is a more realistic modelling of equity dividends is possible. 1982). this is not a very sensible modelling choice. equity premium and Sharpe-ratio. First. Asset prices contain valuable information about intertemporal decision making and dynamic models explaining asset pricing are of great importance in current research.Chapter 6 Asset Market Implications of Real Business Cycles 6. We here want to study a production economy with asset market and spell out its implications for asset prices and returns. Empirically. in economies with an exogenous dividend stream the aggregate consumption is usually used as a proxy for equity dividends. Production economies oﬀer a much richer. in economies with an exogenous dividend stream and no savings consumers are forced to consume their endowment. Most of the asset pricing literature has followed Lucas (1978) and Mehra and Prescott (1985) in computing asset prices from the consumption based asset pricing models with an exogenous dividend streams. we shall study asset price implications of the standard RBC model. In particular we will explore to what extend it can replicate the empirically found risk-free interest rate. Second.1 Introduction In this chapter. Since there is a capital stock in production economies. and realistic environment. The idea of employing a basic stochastic growth model to study asset prices goes back to Brock and Mirman (1972) and Brock (1978. Modelling asset price and risk premia in models with production is much more challenging than in exchange economies.

This variable determines how much expected return agents require per unit of ﬁnancial risk. Those equations can be used as additional moment restrictions in the estimation process. we estimate the model using only the restrictions of real variables as in Chapter 5. the simulated annealing. the riskfree interest rate. All the estimations are again conducted through the numerical algorithm. In particular. Introducing the Sharpe-ratio as moment restriction in the estimation procedure requires an iterative procedure to estimate the risk aversion parameter. Ohanian and Berkowitz (1995) to test whether the moments predicted by the model.2 We then add our ﬁrst asset pricing restriction. First. long-term real bonds. for example Jerman (1998). Lettau and Uhlig (1999) and Lettau. we use the variancecovariance matrix of the estimated parameters to infer the intervals of the See. For each estimation. Gong and Semmler (2001) where the closed-form solutions for risk premia of equity. Christiano and Fisher (2001) and Gr¨ne and u Semmler (2004b). In addition. or the price of risk. The theoretical framework in this chapter is taken from Lettau (1999). Of course. 4 See also Sharpe (1964) 1 . we implicitly assume that the standard model can. we compute the implied premia of equity and long-term real bond. can match the moments of the actual macroeconomic time series. replicate the moments of the real variables.3 The second asset pricing restriction concerns the risk-return trade-oﬀ as measured by the Sharperatio. We ﬁnd that the Sharpe-ratio restriction aﬀects the estimation of the model drastically. We use the observed 30-day T-bill rate to match the oneperiod risk-free interest rate implied by the model. 2 Using Christiano’s data set. Hansen and Jagannathan (1991) and Lettau and Uhlig (1999) show how important the Sharpe-ratio4 is in evaluating asset prices generated by diﬀerent models. We introduce the asset pricing restrictions step-by-step to clearly demonstrate the eﬀect of each new restriction. The data employed for this estimation are taken again from Christiano (1987). Those values are then compared to the stylized facts of asset markets. The estimation technique in this chapter follows the Maximum Likelihood (ML) method as discussed in Chapter 4. as the previous chapter has shown the standard model fails also along some real dimensions. we introduce a diagnostic procedure developed by Watson (1993) and Diebold. to some extent. Boldrin. 3 Using 30-day rate allows us to keep inﬂation uncertainty at a minimum. the Sharpe-ratio and the risk-free interest rates are presented in a log-linearized RBC model as developed by Campbell (1994).102 Although recently further extension of the baseline stochastic growth model of RBC type were developed to match better actual asset market characteristics 1 we will in the current paper by and large restrict ourselves to the baseline model. for the estimated parameters.

In Section 4. Section 3 presents the estimation for the model speciﬁed by diﬀerent moments restrictions. . Christiano and Fisher (2001) and Cochrane (2001. The rest of the chapter is organized as follows. we apply here the power utility as describing the preferences of the representative household. Boldri. At for technology.2) 1 Aα =α t θ(1 − Nt ) Ct 5 Kt Nt (1−α) Note that. Section 6 concludes. The ﬁrst order condition is given by the following Euler equation: −γ Ct−γ = βEt Ct+1 Rt+1 (6.1 The Standard Model and Its Asset Pricing Implications The Standard Model We follow Campbell (1994) and use the notation Yt for output. we use the standard RBC model and log-linearization as proposed by Campbell (1994) and derive the closed-form solutions for the ﬁnancial variables. 6. Section 5 compares the second moments of the time series generated from the model to the moments of actual time series data. ch.103 moment statistics and to study whether the actual moments derived from the sample data fall within this interval. as in our previous modelling.2 6. 21) and Gr¨ne and Semmler u (2004b). Kt for capital stock. The maximization problem of a representative agent is assumed to take the form5 ∞ M ax Et i=0 β i 1−γ Ct+i + θ log(1 − Nt+i ) 1−γ subject to Kt+1 = (1 − δ)Kt + Yt − Ct with Yt given by (At Nt )α Kt1−α . we interpret our results and contrast the asset market implications of our estimates to the stylized facts of the asset market. Nt for normalized labor input and Ct for consumption. In section 2. For the model of asset market implications other preferences. see Jerman (1998).1) (6.2. habit formation are often employed. for example.

In the case of incomplete capital depreciation δ < 1. Campbell (1994) shows that the solution. the technology. consumption.1) becomes G = βR where R is the steady state of Rt+1 . Assume that the technology shock follows an AR(1) process: at = φat−1 + εt (6.2.e. innovation: εt ∼ N (0.5) (6. the implied value for the discount factor β can then be deduced from (6.104 where Rt+1 is the gross rate of return on investment in capital. which is equal to the marginal product of capital in production plus undepreciated capital: Rt+1 ≡ (1 − α) At+1 Nt+1 Kt+1 α + 1 − δ.d. β and γ. output and capital stock all grow at a common rate G = At+1 /At . real allocations will not be aﬀected by this choice. the exact analytical solution to the model is not feasible.3). We denote the leverage factor (the ratio of bonds outstanding and total ﬁrm value) as ζ. labor nt and capital stock kt .4) 2 with εt to be the i. Since markets are competitive. σε ).i.2 The Log-linear Approximate Solution Outside the steady state. (6. r. Taking log for both sides.6) . the ModiglianiMiller theorem is presumed to hold. we use g. At the steady state. This deﬁnes the relation among g.3) where g ≡ log G and r ≡ log R. In the rest of the chapter. Hence. Note that here we use the lower case letter as the corresponding log variables of the capital letter. can be written as ct = ηck kt + ηca at nt = ηnk kt + ηna at (6. using the log-linear approximation method. consumption ct . We therefore seek instead an approximate analytical solution. We allow ﬁrms to issue bonds as well as equity. we can further write the above equation as γg = log(β) + r. 6. the model characterizes a system of nonlinear equations in the logs of technology at . r. (6. and γ as parameters to be determined. i.

3 The Asset Price Implications The standard RBC model as presented above has strong implications for asset pricing. 6 (6. The second asset market restriction will be the Sharpe-ratio which summarizes the risk-return trade-oﬀ: SRt = max f Et Rt+1 − Rt+1 all assets σt [Rt+1 ] . γ. see Cochrane (2001. 6. We also want to note that in RBC models the risk-free rate is generally too high.105 and the law of motion of capital is kt = ηkk kt−1 + ηka at−1 (6. (2001) for the details. and ηka are all the complicated functions of the parameters α. ηca .10) Since the model is log-linear and has normal shocks. capital stock and technology as expressed in (6. ηna .2 and 2. 2 (6.7) where ηck .8) Using the process of consumption. 1. Writing the equation in the log form. g. (6. φ and N (the steady state value of Nt ). the Euler equation (6.1) implies the following expression f regarding the risk-free rate Rt :6 f Rt = βEt (Ct+1 /Ct )−γ −1 .8) (see Lettau et al.): f rt = γ ηck ηka εt−1 . ηkk . we obtain the risk-free rate in logs as7 1 f rt = γEt ∆ct+1 − γ 2 V ar∆ct+1 − log β. . 1 − ηkk L (6. ηnk .11) For further details. 2 7 Note that here we use the formula Eex = eEx+σx /2 . r. the Sharpe-ratio can be computed in closed form as:8 SR = γηca σε .9) where L is the lag operator. δ. chs. and its standard deviation is much too low compared to the data. see Hornstein and Uhlig (2001).7) and (6.1). First. we derive from (6.4) while ignoring the constant term involving the discount factor and the variance of consumption growth. 8 See Lettau and Uhlig (1999) for the details.2.5). (6. Matching this process implied by the model to the data will give us the ﬁrst asset market restriction.

economy at quarterly frequency. Asset market data are from Lettau (1999).42 0.80 0.5)(6.2.1: Asset Market Facts and Real Variables GDP Consumption Investment Labor Input T-Bill SP 500 Equity Premium Long Bond Premium Sharpe Ratio Standard Deviation 1.59 0.7) as: ηck ηka 2 2 η σ (6. 6. The table shows that the equity premium is roughly 2% per quarter.53 7.24 1. (6.21 Mean 0.17 1.27 Note: Standard Deviations for the real variables are taken from Cooley and Prescott (1995). These can be computed on the basis of the log-linear solutions (6. The series are H-P ﬁltered.27 in .27 8.13) 1 − βηkk 1 − βηkk Again we refer to Lettau (1999) and Lettau.99 4. All data are from the U.19 2. equals 0. which measures the risk-return trade-oﬀ.86 7. Table 6.S.72 1. The Sharpe-ratio.1 summarizes some key facts on asset markets and real economic activity for the US economy. we will consider the performance of the model concerning the following facts of asset markets.4 Some Stylized Facts Table 6.12) LT BP = −γ 2 β 1 − βηkk ca ε ηdk ηnk − ηda ηkk ηck ηkk 2 2 EP = − γβ γηca σε . we consider the risk premia of equity (EP) and long-term real bonds (LTBP). In addition to the well-known stylized facts on macroeconomic variables. The Sharpe-ratio is the mean of equity premium divided by its standard deviation. Units are per cent per quarters. A successful model should be consistent with these basic moments of real and ﬁnancial variables. Gong and Semmler (2001) for details of those computations.106 Lastly.

roughly six times higher than consumption. The parameter φ is estimated independently from (6. Of course. 6.e. For the time series of the risk-free interest rate.4) by OLS regression. For a data observation Xt . as we will see shortly. However.. we would like to estimate as many parameters as possible.4). and N .3. The computation of technology shocks requires the values for α and g.S..107 post-war data of the U. X t = (1 + g)t−1 X 1 . This leaves the risk aversion parameter γ. the computation of xt depends on X1 .3. δ. i. In this paper we use the standard values of α = 0. which could be calculated from the sample.3.1-1983. The estimation strategy is similar to Christiano and Eichenbaum (1992). for the given g. To make the data suitable for estimation.3 6. The standard deviation of the real variables reveal the usual hierarchy in volatility with investment being most volatile and consumption the smoothest variable. φ. However. N is speciﬁed as 0. The parameter θ is simply dropped due to our log-linear approximation. the estimation of these parameters is central to our strategy. the detrended value xt is assumed to take the form log( Xt /Xt ). α.1 The Estimation The Structural Parameters to be Estimated The RBC model presented in section 2 contains seven parameters. As we have demonstrated in the last chapter. the average interest rate r and the depreciation rate δ to be estimated. r. We compute this initial condition based on . we use the 30-day T-bill rate to minimize unmodeled inﬂation risk. where Xt is the value of Xt on its steady state path. r and γ. we use the data set as constructed by Christiano (1987). γ. the initial X t . the Christiano data set can match the real side of the economy better than the commonly used NIPA data set. Among the ﬁnancial variables the equity price and equity premium exhibit the highest volatility. they ﬁx the discount factor and the risk aversion parameter without estimating them. In contrast. Recall that the discount factor is determined in (6.667 and g = 0.2 The Data For the real variables of the economy. The data set covers the period from the third quarter of 1955 through the fourth quarter of 1983 (1955. g. we are required to detrend the data into their log-deviation form.005. some of the parameters have to be pre-speciﬁed. 6.3) for given values of g. Therefore.

B = −ηck 1 0 .7) so we can compare our results to those in Christiano and Eichenbaum (1992). The remaining parameters thus to be estimated are δ and r. The matrices for the ML estimation are given by −ηkk −ηka 0 1 0 0 0 −ηca . First. 6.3 The Moment Restrictions of Estimation For the estimation in this chapter. we introduce the restrictions step-by-step.3. We start by including the following moment restriction of the risk-free interest rate in estimation while still keeping risk aversion ﬁxed at unity: f E b t − rt = 0 .5) . We call this Model 1 (M1). xt = at−1 .(6. (6. we constrain the risk aversion parameter r to unity and use only moment restrictions of the real variables. i. we add restrictions from asset markets one by one.e. 1 T T i=1 1 log(Xt /X t ) = T 1 = T =0 T i=1 T 1 log(Xt ) − T 1 log(Xt ) − T T log(X t ) i=1 T i=1 i=1 1 log(X 1 ) − T T log (1 + g)t−1 i=1 Solving the above equation for X 1 .108 the consideration that the mean of xt is equal to 0. In other words. at nt After considering the estimation with the moment restrictions only for real variables. In order to analyze the role of each restriction. we obtain 1 X 1 = exp T T T log(Xt ) − i=1 i=1 log (1 + g)t−1 . we use the maximum likelihood (ML) method as discussed in Chapter 3. Γ = 0 0 0 −ηna −ηck 0 1 kt−1 kt yt = ct .

11) that the Sharpe-ratio is a function of risk aversion. as a shortcut. given the other parameters δ and r. In this case the matrices B and Γ and the vectors xt and yt can be written as 1 −ηnk B= −ηnk 0 0 1 0 0 0 0 1 0 0 −ηkk −ηka 0 0 0 0 0 −ηca 0 .109 f where bt denotes the return on the 30-day T-bill and the risk-free rate rt is computed as in (6. We summarize the diﬀerent cases in Table 6. where we start by using only restrictions on real variables and ﬁx risk aversion to unity (M1). We add the risk-free rate restriction keeping risk aversion at one (M2). is calculated from (6. Given this value. ηca (γ)σε (6. We refer to this version as Model 2 (M2). We take this restriction into account in two diﬀerent ways. Model 5 (M5).27/[ηca (γ0 )σε ].14). This will be called Model 4 (M4). For each given δ and r. ηca is itself a complicated function of γ. the standard deviation of the technology shock and the elasticity of consumption with respect to the shock ηca . denoted by γ0 . f rt . therefore. we ﬁx the risk aversion at 50. we estimate the remaining parameter δ and r. Hence. This procedure is continued until convergence. First. Of course. a value suggested in Lettau and Uhlig (1999) for generating a Sharperatio of 0. then estimate . In the next version.27 using actual consumption data. we. nt bt This equation provides the solution of γ. Model 3 (M3) uses the same moment restrictions as Model 2 but leaves the risk aversion parameter r to be estimated rather than ﬁxed to unity.14) kt c yt = t . denoted by γ1 . we are simultaneously estimating γ while imposing a Sharpe-ratio restriction of 0.27 as measured in the data (see Table 1).2. have to use an iterative procedure to obtain the solution. the Sharpe-ratio restriction becomes γ= 0. Γ= 0 0 0 −ηna 0 −1 1 0 0 0 kt−1 at−1 xt = at . we impose that the dynamic model should generate a Sharpe-ratio of 0.27 .9). searched by the simulated annealing. Recall from (6.27. Then the new γ. we ﬁrst set an initial γ. which is equal to 0. Since it is nonlinear in γ. Finally.

0185) γ preﬁxed to 1 preﬁxed to 1 2.77% per quarter or 3.4719) Consider ﬁrst Model 1. Table 6. For each model we also compute the implied values of the long-term bond and equity premium using (6. The discount factor is slightly higher while the average risk-free rate decreases.0344 (0.2: Summary of Models Models Estimated Parameters Fixed Parameters Asset Restrictions M1 r.12) and (6.0160) 0.08% on an annual basis. Adding the risk-free rate restriction in Model 2 does not signiﬁcantly change the estimates. Table 6.0041 (0. The average interest rate is 0.0077 (0.0088 (0.0220 (0. δ γ=1 none M2 r.13). γ risk-free rate. The implied discount factor computed from (6. These results conﬁrm the estimates in Christiano and Eichenbaum (1992).0144) 0. which only uses restrictions on real variables. However the implied discount factor now exceeds unity.3: Summary of Estimation Results9 Models M1 M2 M3 δ 0. Sharpe-ratio 6.0189 (0. δ. Finally we add the Sharpe-ratio restriction.0633 (0.9972. Standard errors are in parentheses.4 The Estimation Results Table 6.3 summarizes the estimations for the ﬁrst three models. Entries without standard errors are preset and hence are not estimated. . ﬁxing risk aversion at 50 (M4) and estimate it using an iterative procedure (M5). Christiano and Eichenbaum (1992) 9 The standard errors are in parenthesis.110 it (M3).0132) 0.0156) r 0.3) is 0. Sharpe-ratio M5 r. γ risk-free rate M4 r. (1988). δ γ=1 risk-free rate M3 r. The depreciation rate is estimated to be just below 2% which close to Christiano and Eichenbaum’s (1992) results.0144) 0. δ. δ γ = 50 risk-free rate. a problem also encountered in Eichenbaum et al.

Next. as in M3. The estimation is reported in Table 6.0065 0. The ML procedure estimates the risk aversion parameter to be roughly 2 and signiﬁcantly diﬀerent from 1.0180 LT BPrem 0. Adding the risk-free rate restriction increases the estimates of δ and r somewhat.4 computes the Sharperatio as well as risk premia for equity and long term real bond using (6.082% -0. the model is able to produce sensible parameter estimates when the moment restriction for the risk-free rate is introduced. .5: Matching the Sharpe-Ratio Models M4 M5 δ 1 1 r 0 1 γ preﬁxed to 50 60 10 This value is advocated in Benninga and Protopapadakis (1990). adding the risk-free rate). Introducing the riskfree rate restriction improves the performance only a little bit. the value implied from logutility function.0065 0.5. While the implications of the dynamic optimization model concerning the real macroeconomic variables could be considered as fairly successful. we will try to estimate the model by adding the Sharpe-ratio moment restrictions.000% -0. The leverage factor ζ is set to 2/3 for the computation of the equity premium. Overall.042% -0. Table 6.13).11) (6. the implications for asset prices are dismal.4: Asset Pricing Implications Models M1 M2 M3 SR 0. Model 3 is more general since the risk aversion parameter is estimated instead of ﬁxed at unity.091% Table 6.10 Table 6.053% EqPrem -0.111 avoid this by ﬁxing the discount factor below unity rather than estimating it. Note that these variables are not used in the estimation of the model parameters.4 shows that the RBC model is not able to produce sensible asset market prices when the model parameters are estimated from the restrictions derived only from the real side of the model (or. Table 6.085% -0. even negative for certain cases. The Sharpe-ratio is too small by a factor of 50 and both risk premia are too small as well.

as shown in the last row in Table 7. Trying to estimate risk aversion while matching the Sharpe-ratio gives similar results. The point estimate of risk aversion parameter is high (60).6) with kt and at to be their actual observations. The tension between the Sharpe-ratio restriction and the real side of the model causes the estimation to fail. It is not possible to estimate the RBC model with simultaneously satisfying the moment restrictions from both the real side and the ﬁnancial side of the model.5 The Evaluation of Predicted and Sample Moments Next we provide a diagnostic procedure to compare the second moments predicted by the model with the moments implied by the sample data. The depreciation rate converges again to unity as does the steady-state interest rate r. at and the estimated parameters of our loglinear model. As explained in Lettau and Uhlig (1999). al (1995). The moments are revealed by the spectra at various frequencies. The estimates for the depreciation factors and the steady-state interest rate converge to the pre-speciﬁed constraints. 11 or the estimation does not settle down to an interior optimum. High risk aversion implies a low elasticity of intertemporal substitution so that agents are very reluctant to change their consumption over time. Again the parameter estimates do converge to pre-speciﬁed constraints. The reason is of course that a high Sharpe-ratio requires high risk aversion. the predicted ct and nt can be constructed from the right hand side of (6. This implies that the real side of the model does not yield reasonable results when risk aversion is 50. It demonstrates again that the asset pricing characteristics that one ﬁnd in the data are fundamentally incompatible with the standard RBC model. Given the observations on kt .5) . The question now is how the moment restrictions of the real variables are aﬀected by such a high level of risk aversion.(6. such a high level of risk aversion has the potential to generate reasonable Sharpe-ratios in consumption CAPM models. The ﬁrst row of Table 6. Our objective here is to ask whether our RBC model can predict the actual moments of the time series for both the real and asset market. 6.112 Model 4 ﬁxes the risk aversion at 50. . We remark that a similar diagnostic procedure can be found in Watson (1993) and Diebold et. We now 11 We constraint the estimates to lie between 0 and 1.5 shows that the resulting estimates are not sensible.5.

B) labor.113 consider the possible deviations of our predicted series from the sample series.1: Predicted and Actual Series: solid lines (predicted series). We can use the variance-covariance matrix of our estimated parameters to infer the intervals of our forecasted series hence also the intervals of the moment statistics that we are interested in. We hereby employ our most reasonable estimated Model 3. dotted lines (actual series) for A) consumption. C) risk-free interest rate and D) long term equity excess return. Figure 6. all variables HP detrended (except for excess equity return) .

114 Figure 6. A good match of the actual and predicted second moments of the time series would be represented by the fact that the solid line falls within the interval of the dashed and dotted lines. all variables detrended (except excess equity return) Figure 6. risk-free rate and equity return. C) risk-free interest rate and D) long-term equity excess return. at 5% signiﬁcance level. the consumption series can somewhat be matched whereas the volatility in the labor eﬀort as well as in the risk-free rate and equity excess return cannot be matched. As shown in Chapter 5. labor eﬀort. In particular the time series for . by the models. B) labor. The insuﬃcient match of the latter three series are further conﬁrmed by Figure 6. dashed and dotted lines (the intervals of predicted moments) for A) consumption.2: The Second Moment Comparison: solid line (actual moments).1 presents the Hodrick-Prescott (HP) ﬁltered actual and predicted time series data on consumption.2 where we compare the spectra calculated from the data samples to the intervals of the spectra predicted.

Moreover. The computed Sharpe-ratio is too low while both risk premia are small and even negative. 2001). Other researchers have looked at some extensions of the standard model such as technology shocks with a greater variance. Semmler and Lettau (2001) where time varying characteristics of o asset prices are explored.6 Conclusions Asset prices contain valuable information about intertemporal decision making of economic agents. the computed Sharpe-ratio and the risk premia of longterm real bonds and equity are in general counterfactual. yet these extensions frequently use extreme parameter values to be able to match the asset price characteristics of the model with the data. risk-free interest rate and long-term equity return predicted by the model do not match well the corresponding moments of the sample economy. 6. We conclude that the standard RBC model cannot match the asset market restrictions. for example.115 labor eﬀort. This chapter has estimated the parameters of a standard RBC model taking the asset pricing implications into account. We use the risk-free interest rate and the Sharpe-ratio in matching actual and predicted asset market moments and compute the implicit risk premia for long real bonds and equity. constant relative risk aversion (CRRA) utility function and no adjustment costs. 13 See Gr¨ne and Semmler (2004b). Moreover. u 12 . utility functions with habit formation. Finally. more successful in replicating stylized asset market characteristics. the approximation methods for solving the models might not be very reliable since accuracy tests for the used approximation methods are still missing. given the sensible parameter estimates.13 see also W¨hrmann. We ﬁnd that though the inclusion of the risk-free interest rate as a moment restriction can produce sensible estimates. the second moments of labor eﬀort.12 Those extensions of the standard model are. to least to a certain extent. We introduce model restrictions based on asset pricing implications in addition to the standard restrictions of the real variables and estimate the model by using ML method. other utility functions. risk-free interest rate and equity return fail to do so. the attempt to match the Sharpe-ratio in the estimation process can hardly generate sensible estimates. at least with the standard technology shock. and adjustment costs of investment. The latter line of research has been pursued by Jerman (1998) and Boldrin. Christiano and Fisher (1996.

Part III Beyond the Standard Model — Model Variants with Keynesian Features 116 .

For certain substitution properties between consumption and cash holdings those models admit unstable as well as stable high level and low level steady 1 In Keynes (1936) such an open ended dynamic is described in Chapter 5 of his book.2 Multiplicity of equilibria can also arise here as a consequence of increasing returns to scale and/or more general preferences. Some of the models are real models.5.1 Introduction One of the important features of Keynesian economics is that there is no unique equilibrium toward which the economy moves. for example Kim (2004).Chapter 7 Multiple Equilibria and History Dependence 7. that can exhibit locally stable steady state equilibria giving rise to sun spot phenomena. RBC models. as introduced in chapter 4. with increasind returns to scale and or more generate preferences. 2 See. where consumers’ welfare is aﬀected positively by consumption and cash balances and negatively by the labor eﬀort and an inﬂation gap from some target rates. Theoretical models of this type are reviewed in Benhabib and Farmer (1999) and Farmer (12001) and an empirical assessment is given in Schmidt-Grohe (2002). 117 . The dynamics are open ended in the sense that it can move to low level. Those models have been called indeterminacy models. Keynes describes here how higher or lower ”long term positions” associated with higher or lower output and employment might be generated by expectational forces.1 In recent times such type of dynamics have been found in a large number of dynamic models with intertemporal optimization. Others are monetary models. or high level of economic activity and expectations and policy may become important to tild the dynamics to one or the other outcomes.

Recently. likely to generate multiplicity of steady state equilibria. In this chapter we want to show that adjustment cost in a standard RBC model can give rise to multiple steady state equilibria. as. Lucas (1967) and Hayashi (1982). recently has been shown. consumption. employment and welfare. (2000) it is shown that relative adjustment cost. in turn. employment and welfare level. after a shock or policy inﬂuences. Christiano and Fisher (2001) and adjustment cost associated with the rate of change of investment can be found in Christiano.4 In stochastic growth models adjustment cost has been used in Boldrin. The existence of multiple steady state equilibria entails thresholds that separate diﬀerent domains of attraction for welfare and employment and allow Although these are important variants of macrodynamic models with optimizing behavior. 4 In Feichtinger et al. As our solution shows thresholds are important as separation points below or above which it is advantages to move to lower or higher levels of capital stock. Our model version thus can explain of how the economy becomes history dependent and moves. numerous stochastic growth model have employed adjustment cost of capital. see Beyn. indeterminacy is likely to occur solely at a point in these models. however. Multiple steady state equilibria. Pursuing this line of research we show that one does not need to refer to increasing returns to scale or speciﬁc preferences to obtain such results. where a middle one is an attractor (repellor). to a low or high level equilibria in employment and output. 3 . Despite some unresolved issues in the literature on multiple equilibria and indeterminacy3 it has greatly enriched macrodynamic modelling. Pampel and Semmler (2001) and Gr¨ne u and Semmler (2004a). Authors in this tradition have also distinguished the absolute adjustment cost depending on the level of investment from the adjustment cost depending on investment relative to capital stock (Uzawa 1968. Eichenbaum and Evans (2001). When indeterminacy models exhibit multiple steady state equilibria. than this permits any path in the vicinity of the steady state equilibria to move back to (away from) the steady state equilibrium.118 states. see Benhabib et al. Here can be indeterminacy in the sense that any initial condition in the neighborhood of one of the steady-states is associated with a path forward or away from that steady state. In non-stochastic dynamic models adjustment cost has already been used in Eisner and Stroz (1963). Asada and Semmler 1995). and not within a set as the indeterminacy literature often claims. (2001). may lead to thresholds separating diﬀerent domains of attraction of capital stock. where investment as well as cpital stock enters the adjustment costs. We show that due to the adjustment cost of capital we may obtain nonuniqueness of steady state equilibria in an otherwise standard dynamic optimization version. consumption. at a threshold.

it . Kt . ct ≡ Xt . augmented by adjustment cost.2 The Model The model we present here is the standard stochastic growth model of RBC type. Ct and Qt are the level of capital stock.4) (7. 1978). 7. Ct . Q C I Y t In particular. yt ≡ Xtt and qt ≡ Xt . investment.3)α (7. yt and qt are the detrended variables for Kt . For this purpose. Section 3 studies the adjustment cost function which gives rise to multiple equilibria and section 4 demonstrates the existence of a threshold5 that separates diﬀerent domains of attraction. Section 5 concludes the chapter. Section 2 presents the model. and Xt is the permanent (including both population and productivity growth) shock that follows a growth rate γ. X t t . consumption and adjustment cost. The state equation for the capital stock takes the form: Kt+1 = (1 − δ)Kt + It − Qt where It = Yt − Ct and Yt = At Kt1−α (Nt Xt )α (7. Yt . It .1) (7. It . Yt and Qt . To transform the model into a stationary version we need to detrend the variables. we divide both sides of equation (7. The model is non-stationary due to Xt .5) Above. Nt is per capita working hours. as in King et al. Note N that here nt is often regarded to be the normalized hours with its sample 5 6 In the literature those thresholds have been called Skiba-points (see Skiba.3) by Xt : kt+1 = 1 [(1 − δ)kt + it − qt ] 1+γ it = yt − c t 1−α yt = At kt (nt N /0. kt . ct .6 . The remainder of this chapter is organized as follows. it ≡ Xtt .3Nt with N denoting the sample mean of Nt .1) . (1988).3) Above.(7.119 for an open ended dynamics depending on the initial conditions and policy inﬂuences impacting the initial conditions.2) (7. At is the temporary shock in technology. all in real terms. kt ≡ Kt . The proof of the propositions in the text is provided in the appendix. nt is deﬁned to be 0. output.

10) (7. determine the following steady states: [bφ(i) − 1][i − q(i)] − aφ(i)1− α − q(i) = 0 k= 1 [i − q(i)] γ+δ 1 (7. we ﬁrst form the Lagrangian: ∞ L = t=0 ∞ β t [log(ct ) + θ log(1 − nt )] − Et β t+1 λt+1 kt+1 − t=0 1 [(1 − δ)kt + it − q(it )] 1+γ Setting to zero the derivatives of L with respect to ct . Equation (7.6) (7.4) and (7.8) (7.(7.4) .5) respectively.9). kt and λt . when evaluated at their certainty equivalence form.11) .9) with it and yt to be given by (7.7) (7. The following proposition concerns the steady states Proposition 5 Assume At has a steady state A. we obtain the following ﬁrst-order conditions: 1 β − Et λt+1 [1 − q ′ (it )] = 0 ct 1 + γ −θ β αyt + [1 − q ′ (it )] = 0 Et λt+1 1 − nt 1 + γ nt (1 − α)yt β Et λt+1 (1 − δ) + [1 − q ′ (it )] = λt 1+γ kt kt+1 = 1 [(1 − δ)kt + it − q(it )] 1+γ (7. nt .120 mean equal to 30 %. We shall assume that the detrended adjustment cost qt depends on detrended investment it : qt = q(it ) The objective function takes the form ∞ max E0 t=0 β t [log ct + θ log(1 − nt )] To solve the model.

1) shows a typical shape of q(i) while Figure (7. equ.0034 500 . no multiple steady state equilibria will occur. and then from (7.10) i is uniquely determined.15) y = φ(i)k c=y−i λ= where (1 + γ) βc 1 − q ′ (i) a= 1 α α A (N /0.19) Note that equation (7. depending on the assumption that all other steady states can be uniquely solved via (7. Here we shall only consider that q(i) takes the logistic form: q(i) = q0 q0 exp(q1 i) − exp(q1 i) + q2 1 + q2 (7.1: Table 7. 7.1: The Parameters in the Logistic Function q0 q1 q2 2500 0. Also if q(i) is linear.11) (7.14) (7.12) (7.16) (7.3 The Existence of Multiple Steady States Many non-linear forms of q(i) may lead to a multiplicity of equilibria.2) shows the corresponding derivative q ′ (i) with the parameters given in Table 7.20) Figure (7.13) (7.3 N (7. β(1 − α) (7.10) determines the solution of i.17) (7.18) indicates that φ(·) is constant.3) θ (1 + α ) θ b= γ+δ m φ(i) = 1 − q ′ (i) (7. (7. if q(i) is linear.121 φ(i) n= A 1 α k 0. Therefore.18) and m= (1 + γ) − (1 − δ)β .15).

20) we posit a restriction such that q(0) = 0.122 Figure 7.2: The Derivatives of the Adjustment Cost Note that in equation (7.1: The Adjustment Cost Function Figure 7.1). Another restriction. is that . which is reﬂected in Figure (7.

18) indicates that φ(i) is negative since from (7. Therefore. These are the two points at which q ′ (i) = 1. if there exist some i′ s in (i2 . Meanwhile between i1 and i2 . The two critical points. A formal mathematical proof of the existence of this condition is intractable. In particular.21) indicating that the adjustment cost should never be larger than the investment itself. A negative φ(i) will lead to a complex 1 φ(i)1− α in (7.2) need to be discussed. three equilibria will occur. indicating three steady states of i. i1 ) such that f (i) = 0. The following proposition concerns the existence of multiplicity of equilibria.10). When q ′ (i) > 1. this proposition indicates a condition under which multiple steady states will occur.1 and other standard parameters as given in Table 7. +∞). 1 .2.3. +∞). denoted i1 in the range (0. +∞) at which f (i) < 0. we show the curve of f (·) given the empirically plausible parameters as reported in Table 7.20) subject to (7. if there are some i′ s at which f (i) < 0. equation (7. denoted as i2 and i3 such that f (i) = 0.1) and (7. in Figure (7.123 q(i) < i (7.10). where q(i) takes the form as in (7. i1 and i2 . then there must exist two i’s. Proposition 6 Let f (i) ≡ [bφ(i) − 1][i − q(i)] − aφ(i)1− α − q(i). We therefore obtain two feasible ranges for the existence of steady states of i : one is (0.21). We shall ﬁrst remark that the assumption bm − 1 > 0 is plausible given the standard parameters for b and m also f (i) = 0 is indeed the equation (2.19) m > 0. The curves cut the zero line three times. Assume bm − 1 > 0. i1 ) and the other is (i2 . Both restrictions seem reasonable. • In the range (i2 . q ′ (i) > 1. • There exists one and only one i. In Figure 7.

7619 We use a numerical method to compute the three steady states of i : i1 . A is derived from At = a0 + a1 At−1 + εt .9930 δ 0.15).3: Multiplicity of Equilibria: f(i) function Table 7.9811. with estimated a0 and a1 are given respectively by 0.5800 γ 0. Table 7.0045 β 0.2: The Standard Parameters of RBC Model7 α 0. The result of the computations of the three steady states are: 7 The N below is calculated on the assumption of 12 weeks per quarter and 40 working hours per week.0189 N 480. i2 and i3 .124 Figure 7. Given these steady states.11) .00 A 1.(7.2080 θ 2. .3 uses essentially the same parameters as reported in Table 5.1. the other steady states are computed by (7.0333 and 0.

4). Set 2 and Set 3 corresponding to i1 .3: The Multiple Steady States i k n c y λ V Corresponding to i1 564. therefore. yet its corresponding steady state in labor eﬀort and thus employment is larger for i2 . we shall ﬁrst linearize the ﬁrst-order conditions around the three sets of steady states as reported in Table 7.4307 3575. Assume that At stays at its steady state A so that we only consider the deterministic case.23) where i = 1. 7. 3.9621 0.00019568017 1101. On the other hand i1 and i3 exhibit also diﬀerences in welfare and employment.30904713 2111.11 0.7118 Corresponding to i2 1175.53140 18667.22) c i nt = Gi kt + gn n (7. Then by applying an approximation method as discussed in chapter 2. For notational convenience.25436594 3011.4 The Solution An analytical solution to the dynamics of the model with adjustment cost is not feasable.9). The steady state corresponding to i2 deserves some discussion. and consumption are all greater than those corresponding to i2 . We therefore can simulate the solution paths by using the above two equations together with (7.5634 9180. its corresponding steady states in capital.0412 0. The question then . (7.642 0.3. at least compared to i1 .00083463435 1058. we shall denote them as decision rule Set 1.33781724 5169. V is the value of the objective function at the corresponding steady states. We. we obtain three sets of linear decision rules for ct and nt corresponding to our three sets of steady states.347 0.7553 11672. Therefore it reﬂects one corresponding welfare level.5) and (7. have to rely on an approximate solution.07481 Corresponding to i3 4010. The ith set of decision rule can then be written as i ct = Gi kt + gc (7.9826 0.125 Table 7. The welfare of i1 is larger than i2 . 2. output.6369 Note that above. i2 and i3 .0017500565 986. This already indicates that i2 may be inferior in terms of the welfare. For this.2273 3286.4778 119070.

Figure 7. This consideration further indicates that there must exist some thresholds for k0 at which intervals are divided regarding which set of decision rule should be applied. we choose the range [8000. The likely conjecture is that this will depend on the initial condition k0 .3. should be used.22) and (7. if k0 is close to the k 1 . Speciﬁcally. To detect such thresholds. the steady state of k corresponding to i1 . For example. as expressed by (7.23).4: The Welfare Performance of three Linear Decision Rules (solid.4 compares the welfare performance of our three sets of linear decision rules. Set 2 and Set 3 respectively) . 138000] for k0 . In this exercise. we compute V .126 arises as to which set of decision rule. Figure 7. where ∞ V ≡ t=0 β t [log(ct ) + θ log(1 − nt )] We should choose the range of k0 ’s that covers the three steady states of k’s as reported in Table 7. we would expect that the decision rule 1 is appropriate. we shall compute the value of the objective functions starting at diﬀerent k0 for our three decision rules. dotted and dashed lines for decision rule Set 1.

the household may choose decision rule Set 3. as studied in chapter 4. there is a weak externality. ξ > 0. employment and welfare. a speciﬁc form of adjustment cost of capital was introduced. if k0 > 36900. dynamic models giving rise to indeterminacy usually have to presume some weak externalities and increasing returns and/or more general preferences. lead to thresholds separating diﬀerent domains of attraction of capital stock. Overall. Multiple steady state equilibria. then the model generates local indeterminacy. Kim (2004) discusses to what extent weak externalities in combination with more complex preferences will produce indeterminacy. for illustrative purpose. there is an intersection of the two welfare curves corresponding to the decision rules Set 1 and Set 3. in turn.4. 7. He in fact shows that if for the generalized RBC model. . as shown in many recent contributions. If k0 < 36900. the household should choose decision rule Set 1 since it will allow the household to obtain a higher welfare. However. As our simulation shows thresholds are important as separation points below or above which it is optimal to move to lower or higher levels of capital stock. u The above model stays as close as possible to the standard RBC model except. the issue of multiple equilibria and indeterminacy is an important macroeconomic issue and should be pursued in further research in the future. since this leads to a higher welfare. employment and welfare level. after a shock. A variety of further economic models giving rise to multiple equilibria and thresholds are presented in Gr¨ne and Semmler (2004a). we ﬁrst realize that the value of the objective function is always lower for decision rule Set 2. On the other hand. to a low or high level equilibrium in employment and output. occuring around k0 = 36900.127 From Figure 7. consumption.5 Conclusion This chapter shows that the introduction of adjustment cost of capital may lead to non-uniqueness of steady state equilibria in an otherwise standard RBC model. This is likely to be caused by its inferior welfare performance at the steady states for which we compute the decision rule.5. This intersection. Our model thus can easily explain of how an economy become history dependent and moves. consumption. can be regarded as a threshold.. On the other hand.

31) which is equation (7.(7.30) (nN /0.3)α 1 i − q(i) γ+δ which is equation (7.11).6 7.3 N (7.32) k y Using (7.28) to express i while using φ(i)k for y.28) (7.24) (7. We derive from (7.33) which is (7. using (7. (7.29) (7.128 7. Further from (7.31) to express k .18) and (7.27) (7. From (7.26).9) takes the form (7.3)α (7.26) (7.30) (γ + δ)k = φ(i)k − c − q(i). which is equivalent to c = (φ(i) − γ − δ)k − q(i) .13) with φ(i) given by (refequ7.6.29). y −α = Ak (nN /0.4) .32) φ(i) n= A 1 α k 0.27). Next.12).1 Appendix: The Proof of Propositions 5 and 6 The Proof of Proposition 5 1 β λ 1 − q ′ (i) = 0 − c 1+γ αy β −θ λ + 1 − q ′ (i) = 0 1−n 1+γ n β (1 − α)y (1 − δ) + 1 − q ′ (i) 1+γ k 1 k= (1 − δ)k + i − q(i) 1+γ i=y−c y = Ak 1−α The certainty equivalence of equation (7. k= (1 + γ) − (1 − δ)β y = β(1 − α) 1 − q ′ (i) k = φ(i) From (7. we derive from (7.25) =1 (7.19).

36) into (7.33): c = = = = = 1 1 (1 − n)α α A φ(i)1− α (N /0. . k) c2 (φ(i).35) and then express n in terms of (7.38) Equation (7. Let c1 (·) = c2 (·).3)α 1−α y k α−1 (N /0. y = A n = A 1 y n (7.3)α (7. k. we obtain 1 α φ(i)k θ (1 + (1 + 1 1 α )φ(i) − γ − δ (i − q(i)) = aφ(i)1− α + q(i) θ γ+δ (7.30) for k.25): β 1 θ λ 1 − q ′ (i) = = 1+γ c (1 − n)α (1 − n)α θ 1−α (7.17). We thus obtain (φ(i) − γ − δ)k − q(i) = aφ(i)1− α − which is equivalent to 1 α )φ(i) − γ − δ k = aφ(i)1− α + q(i) θ Using (7.3)φ(i)1− α − A α (N /0.3) θ 1 1 1 α α α 1 φ(i) A (N /0.129 = c1 (φ(i).36) = A α φ(i)1− α (N /0. q(i)) Meanwhile from (7.37) is equivalent to (7.3) 1 Substitute (7. k) 1 α k 0.37) with a given by (7.34) y n −1 The ﬁrst equation is equivalent to (7.29).3 N (7.10) with b given by (7.3)φ(i)1− α θ θ A 1 α aφ(i)1− α − φ(i)k θ c2 (φ(i).16).35) k n y n (N /0.15) while the second equation indicates c= where from (7.24) and (7.

We thus have proved the second part of the proposition.39) mq ′′ (i) [1 − q ′ (i)]2 (7. i1 ) and (i2 .40) We shall ﬁrst realized that a. Next we turn to the range (i2 . since q ′′ (i) → 0 and hence φ′ (i) → 0. This indicates that [i − q(i)] → +∞. i1 ). φ(i) → +∞.130 7. In this case. q ′′ (i) > 0 and hence f ′ (i) > 0. Consider now i → +∞. +∞).2 The Proof of Proposition 6 Note that within our two ranges (0. Therefore f (i) → +∞. Meanwhile. Since f ′ (i) > 0. . Therefore f ′ (i) → (bm − 1). and therefore the term b[i − q(i)] + a( α − 1)φ(i)− α is positive. which is positive. in the range (i2 . 1 1 − 1)φ(i)− α α f ′ (i) = φ′ (i) b[i − q(i)] + a( where φ′ (i) = + (bm − 1) (7. Meanwhile in the range (0. q ′ (i) → 1. and φ(i)1− α → 0. +∞). q(i) → 0 1 and therefore f (i) → −aφ(0)1− α < 0. In particular. Since 1 − α is negative. +∞) φ(i) is positive and hence f (i) is continuous and diﬀerentiable. Again in 1 this case. q ′′ (i) < 0 and hence f ′ (i) can either be positive or negative due to the sign of (bm − 1). φ′ (i) → −∞ (since q ′′ (i) < 0) and therefore f ′ (i) < 0. Consider ﬁrst i → i2 . i1 ). there exists one and only one i such that f (i) = 0.39). Assume i → 0. Meanwhile from (7. In this case.19). 1 q ′ (i) → 1 and therefore φ(i) → +∞. Therefore f (i) → +∞. In this case. To verify the second part of the proposition. we only need to prove that f (i) → +∞ and f (i) < 0 when i → i2 and f (i) → +∞ and f (i) > 0 when i → +∞. Let us ﬁrst consider the range (0.17) and (7. However. Therefore f (i) → +∞. this further 1 indicates φ(i)1− α → 0. q ′ (i) → 0 and q(i) → q m where q m is the upper limit of q(i). We thus have proved the ﬁrst part of the proposition.16) 1 1 and (7. by the intermediate value theorem. Next assume i → i1 .6. b and m are all positive as indicated by (7.

the RBC model predicts a signiﬁcantly high positive correlation between technology and employment whereas empirical research demonstrates. is that the model implies a excessively high correlation between consumption and employment while empirical data only indicates a week correlation. A recent evaluation of this failure of the RBC model is given in Schmidt-Grohe (2001). not suﬃciently been studied in the literature. can explain the volatilities of some macroeconomic variables such as output.Chapter 8 Business Cycles with Nonclearing Labor Market 8. Another problem in RBC literature related to this. Lastly. the standard real business cycle (RBC) model. Whereas in RBC models the standard deviation of the labor eﬀort is too low. despite its rather simple structure. the lack of variation in the employment and the high correlation between consumption and employment in the standard RBC model. especially in Chapter 5. at least at business cycle frequency. However. We want to note that the labor market problems. consumption and capital stock. to explain the actual variation in employment the model generally predicts an excessive smoothness of labor eﬀort in contrast to empirical data. may be related to the speciﬁcation of the labor 131 . This problem of excessive smoothness in labor eﬀort is well-known in the RBC literature. There the RBC model is compared to indeterminacy models. This problem of excessive correlation has. a negative or almost zero correlation.1 Introduction As discussed in the previous chapters. These are the major issues that we shall take up from now on. to our knowledge. in indeterminacy models it turns out to be excessively high. It has preliminarily been explored in Chapter 5 of this volume. as developed by Benhabib and his co-authors.

2 The labor supply in the these models is implicitly assumed to be given exogenously. and therefore we could name it the labor market puzzle. Benassy (1995) and Uhlig and Xu (1996) among others. and normalized to 1. capital and technology. an explicit labor demand function is introduced from the decision problem of the ﬁrm side. the real business cycle model speciﬁes both sides. It is therefore not surprising why employment is highly correlated with consumption and why the variation of consumption is a smooth as labor eﬀort. the moments of the economy are however reﬂected by the variation on one side of markets due to its general equilibrium nature for all markets (including output. However. The research along the line of micro-founded Keynesian economics has been historically developed by the two approaches: one is the disequilibrium analysis. that is. there are models of eﬃciency wages where nonclearing labor market could occur. the excessively high correlation between technology and employment. This further suggests that to resolve the labor market puzzle in a real business cycle model. In this chapter we are mainly concerned with this puzzle. Although in the speciﬁcation of its model structure (see Chapter 4). Consequently. 1999). the moments of labor eﬀort result from the decision rule of the representative household to supply labor. one has to make improvement upon labor market speciﬁcations. King and Wollman (1999). the decision rule with regard to labor supply in these models is often dropped because the labor supply no longer appears in the utility function of the household. 1995). 1 . Hence nonclearing of the labor market occurs if the demand is not equal to 1. For the labor market. The variations in labor and consumption both reﬂect the moments of the two state variables. which had been popular before 1980’s and the other is the New Keynesian analysis based on monopolistic competition. Rotemberg and Woodford (1995. preliminarily discussed in Chapter 5. The technology puzzle.2 In this chapter. One possible approach for such improvement is to introduce the Keynesian feature into the model and to allow for wage stickiness and a nonclearing labor market. the moments of labor eﬀort become purely demand-determined. the demand and supply. of a market. Attempts have now been made recently that introduce the Keynesian features into a dynamic optimization model. labor and capital markets). we will present a stochastic dynamic optimization model of RBC type but argumented by Keynesian features along the line of above See Danthine and Donaldson (1990. On the other hand. Gali (1999) and Woodford (2003) present a variety of models with monopolistic competition and sticky price.1 We shall remark that in those studies with nonclearing labor market. will be taken up in Chapter 9.132 market.

Yet. we shall allow for wage stickiness3 and nonclearing labor market. 2003). 4 0ne could perceive a change in secular forces concerning labor supply from the side of households. this will be diﬀerent from the unemployment that we will discuss in this chapter. See Malinvaud (1994) for a more extensive list of those factors . In particular. Gali (1999). and German macroeconomic time series data. Recently. unlike other recent models that drop the decision rule of labor supply.S. New Keynesian literature presents models with imperfect competition and sluggish price and wage adjustments where labor eﬀort is endogenized. Phelps and Zoega 1998). generous unemployment compensation and related welfare state beneﬁts have been added to the list of factors aﬀecting the supply of labor. 2003).133 consideration. However. union bargaining. for example. see also Ljungqvist and Sargent (1998. changes in preferences. for example. Henderson and Levin (2000) and Woodford (2003).4 With the determination of labor demand. King and Wollman (1999). Important work of this type can be found in Rotemberg and Woodford (1995. unemployment resulting from search and matching problems can rather be viewed as frictional unemployment (see Malinvaud (1994) for his classiﬁcation of unemployment). As will become clear. 3 . demographic changes. 5 On the demand side one could add beside the pure technology shocks and the real wage.6 We will assess this model by employing U. we view the decision rule of the labor eﬀort as being derived from a dynamic optimization problem as a quite natural way to determine desired labor supply. Yet before we formally present the model and its calibration we want to note that there is a similarity of our approach chosen here and the New Keynesian analysis. in Europe.5 the two basic forces in the labor market can be formalized. high interest rates (Phelps 1997. For an extensive reference to those factors. is that a variety of employment rules could be adopted to specify the realization of actual employment when a nonclearing market emerges. as will become clear. taxes and subsides which all aﬀect labor supply. 1999). 6 Another line of recent research on modeling unemployment in a dynamic optimization framework can be found in the work by Merz (1999) who employs search and matching theory to model the labor market. hiring and ﬁring cost. However. concerning Europe. intensity of job search and unemployment. the role of aggregate demand. Here the wage rate is set optimally by a representative of the household according to Already Keynes (1936) had not only observed a wide-spread phenomenon of downward rigidity of wages but has also attributed strong stabilizing properties of wage stickiness. evolution of wealth. Some of those secular forces are often mentioned in the work by Phelps. One of the advantages of this formulation. Erceg. the market in those models are still assumed to be cleared since the producer supplies the output according to what the market demands at the existing price. capital shortages and slow down of growth. see Phelps (1997) and Phelps and Zoega (1998). derived from the marginal product of labor and other factors. productivity and real wage. A similar consideration is also assumed to hold for the labor market. see Blanchard and Wolfers (2000) and Ljungqvist and Sargent (1998.

the decision with regard to quantities seems to be unresolved. The supplier may no longer behave optimally concerning their supply decision. Section 3 estimates and calibrates our diﬀerent model variants for the U. For further details of this consolidation. 8 For models with multiple steps of optimization in the context of learning models. Woodford (2003.7 In this chapter. In those models there will be a gap again between the optimal wage and existing wage. We will derive the nonclearing of the labor market from optimizing behavior of economic agents but it will be a multiple stage decision process that will generate the nonclearing of the labor market. see Benassy (1984) among others. This has now been resolved by the modern literature of monopolistic competition as can be found in Woodford (2003). see Chapter 9. the well-known problem of these earlier disequilibrium models was that they disregard intertemporal optimizing behavior and never specify who sets the price. while resolving the price setting problem. and therefore they can somewhat be consolidated as a more complete system for price and quantity determination within the Keynesian tradition. Yet. Appendices I and II in this chapter contain some technical derivation of the adaptive optimization procedure whereas Appendix III undertakes a welfare comparison of the diﬀerent model variants. See.S. economy. we shall present a dynamic model that allows for a noncleared labor market. Section 5 concludes. 7 . Section 4 undertakes the same exercise for the German economy.8 The remainder of this chapter is organized as follows. ch. we wish to argue that the New Keynesian and approach are complementary rather than exclusive. yet the labor market is still cleared since the household is assumed to supply labor whatever the market demand is at the given wage rate. but simply supplies whatever the quantity the market demands for at the current price. for example. The objective to construct a model such as ours is to approach the two aforementioned labor market problems coherently within a single model of dynamic optimization.134 the expected market a demand curve for labor. Calvo (1983) or other theories of sluggish wage adjustment. Section 2 presents the model structure. There are also traditional Keynesian models that allow for disequilibria. However. Sargent (1998) and Zhang and Semmler (2003). 3). Once the wage has been set. see Dawid and Day (2003). Yet. it is assumed to be sticky for some time period and only a fraction of wages are set optimally in each period. which could be seen to be caused by staggered wage as described by Taylor (1980). In the current chapter we are only concerned with a nonclearing of the labor market as brought into the academic discussion by the disequilibrium school.

we. the wage rate wt and the rental rate of the capital stock rt . Meanwhile.135 8.1 The Wage Determination As usual we presume that both the household and the ﬁrm express their desired demand and supply on the basis of given prices. This leaves us to focus the discussion only on the wage setting. The representative ﬁrm owns nothing. sells the output and transfers the proﬁt back to the household. Therefore. We can then ignore its setting. a focus in the recent New Keyensian literature. in this model shall assume that the market to be reopened at the beginning of each period t. Indeed. in which one could assume an once-for-all market. it will allow us to save some eﬀort to explain the nominal price determination. It simply hires capital and labor to produce output. Note that there are three commodities in our model. Yet.9 As to the rental rate of capital rt . which we assume to be the output. labor and capital. as will become clear. 8. . let us ﬁrst describe how prices and wages are set. There are three markets in which the agents exchange their products. it is assumed to be adjustable so as to clear the capital market. Let us ﬁrst discuss how the wage rate 9 For our simple representative agent model without money. One of them should serve as a numeraire. including the output price pt . we shall ﬁrst discuss how the period t prices are determined at the beginning of period t. however. The revenue from selling factor services can only be used to buy the goods produced by the ﬁrm either for consuming or for accumulating capital. this simpliﬁcation does not eﬀect our major result derived from our model.2 An Economy with Nonclearing Labor Market We shall still follow the usual assumptions of identical households and identical ﬁrms. the output price pt always equals 1.2. one can imagine any initial value of the rental rate of capital when the ﬁrm and the household make the quantity decisions and express their desired demand and supply. Unlike the typical RBC model. Therefore we are considering an economy that has two representative agents: the representative household and the representative ﬁrm. The household owns all the factors of production and therefore sells factor services to the ﬁrm. This indicates that the wage wt and the rental rate of capital stock rt are all measured in terms of the physical units of output. This is necessary for a model with nonclearing markets in which adjustments should take place which leads us to a multiple stage adaptive optimization behavior.

There is the so-called menu cost for changing prices (though this seems more appropriate for the output price). ch. We want to note. diﬀerentiated types of labor and refer only to aggregate wages. Since workers. follow the recent approach. in discussing wage setting.3). computation and communication.13 All these eﬀorts cause costs which may be summarized as adjustment costs of changing the price or wage. as Taylor (1999) has pointed out. where wages are set optimally. We may assume that the wage rate is set by a representative of the household which acts as a monopolistic agent for the supply of labor eﬀort as Woodford (2003. needs information. but a fraction of wages may be sticky.12 In addition. we. On the other hand. The adjustment cost for changing the wage may provide some reason for the representative of the household to stick to the wage rate even if it is known that current wage may not be optimal. 3) has suggested. Erceg et al (2000) and Christiano et al. the household or its representative. a wage contract may also be understood from an asset price perspective. enter usually into long term employment contracts involving labor supply for several periods with a variety of job security arrangements and termination options. Christiano. Despite this variety of wage setting models. p. In appendix I. or wage. We neglect. Dutta and Bergen (2000). Most recent literature. Woodford (2003. for instance. there are also models that discuss how ﬁrms set the wage rate. Levy. that wage setting is an interacting process between ﬁrms and households. 12 This is emphasized by Rotemberg (1982) 13 See the discussion in Christiano. in close relationship to Woodford (2003. Eichenbaum and Evans (2001) and Woodford (2003) among others. namely as derivative security based on a fundamental underlying asset such as the asset price of the ﬁrm. .136 might be set. which may be costly.221) introduces diﬀerent wage setting agents and monopolistic competition since he assumes heterogenous households as diﬀerent suppliers of diﬀerentiated types of labor. One may also derive this stickiness of wages from wage contracts as in Taylor (1980) with the contract period to be longer than one period. Ritson. ch. Henderson and Levin (2000). In principle a wage contract could be treated as a debt contract with 10 See. Eichenbaum and Evans (2001) and Zbaracki. or their respective representative. There is also a reputation cost for changing prices and wages. however.10 assumes that it is the supplier of labor. however. that recently many theories have been developed to explain wage and price stickiness. however. 11 These are basically the eﬃciency wage models that are mentioned in the introduction.11 In actual bargaining it is likely. Erceg. changing the price. that sets the wage rate whereas the ﬁrm is simply a wage taker. (2001) we present a wage setting model.

(2000). for example.16 Through such a pattern of wage dynamics. particularly with diﬀerentiated types of labor. Explicit formulation of wage dynamics of a Calvo type of updating scheme. the household is then going to express its desire of demand for goods and supply of factors. including the wage. We can express the household’s desired demand and supply as ∞ s a sequence of output demand and factor supply cd . Yet. For further details of the pricing of such liabilities. u 15 These are basically those prices that have not been adjusted for some periods and there the adjustment costs (such as the reputation cost) may not be high. 4) and Erceg et al. to be reviewed in each time period and therefore new wage contracts will be signed in each t. In Calvo’s model. kt+i+1 i=0 . 16 This type of wage setting is used in Woodford (2003. 8. wages are only partially adjusted. All we need to presume is that. ch.2 The Household’s Desired Transactions The next step in our multiple stage decision process is to model the quantity decisions of the households. wt . When the price. for example.137 similar long term commitment as exists for other liabilities of the ﬁrm. in general it can be assumed to be arranged for several periods. as will become clear in section 3. to be completely ﬁxed in contracts and never responds to the disequilibrium in the labor market. where j can be regarded as the contract period.15 This can be expressed in our model as the expiration of some wage contracts. see Gr¨ne and Semmler (2004c). giving rise to a sticky aggregate wage. Indeed. Christiano et al. A more explicit treatment is not needed here. the value of the derivative security. there is always a fraction of individual prices to be adjusted in each period t. id . 14 . the empirical study of our model does not rely on how we formulate the wage dynamics. is studied in Erceg et al (2000). wage contracts are only partially adjusted. As noted above we do not have to posit that the wage rate. has been set. One may imagine that the dynamics of the wage rate. (2001) and Woodford (2003. would depend on some speciﬁcations in contractual agreements. ns . The new signed wage contracts should respond to the expected market conditions not only in period t but also through t to t + j.14 As in the case of the pricing of corporate liabilities the wage contract. ch. as underlying our model. follows the updating scheme as suggested in Calvo’s staggered price model (1983) or in Taylor’s wage contract model (1980). We deﬁne the household’s desired demand and supply as those that can allow the household to obtain the maximum utility on the condition that these demand and supply can be realized at the given set of prices.2. for an aggregate wage in appendix I of this chapter. t+i t+i t+i For such a treatment of the wages as derivative security. see Uhlig (2003). 3) and brieﬂy sketched.

ns i=0 only (ct . ns ) along with (id .2) (8. Assuming that the household knows the production function f (·) while it expects that all its optimal plans can be fulﬁlled at the given price sequence {pt+i . At+i ) s ns t+i = Gn (kt+i .3 The Firm’s Desired Transactions As in the case of the household. Next.4) Explaining πt+i in (8. 8.5) form i=0 a standard intertemporal decision problem. The .1) subject to s s c d + id t+i t+i = rt+i kt+i + wt+i nt+i + πt+i s s kt+i+1 = (1 − δ)kt+i + id t+i (8. we thus obtain i=0 s s πt+i = f (kt+i . where id = f (kt . ns ) t+i t+i (8. ns . we obtain t s s s kt+i+1 = (1 − δ)kt+i + f (kt+i . equations (8.nt+i }∞ i=0 max Et i=0 β i U (cd .7) We shall remark that although the solution appears to be a sequence ∞ d s s cd . the ﬁrm’s desired demand for factors and supply of goods are those that maximize the ﬁrm’s proﬁt under the condition that all its intentions can be carried out at the given set of prices.2) can be regarded as a budget constraint. wt+i . The decision problem for the household to derive its demand and supply can be formulated as ∞ {ct+i . The equality holds due to the assumption Uc > 0. ns . At+i ) − cd t+i t+i (8. Note that here we have used the superscripts d and s to refer to the agent’s desired demand and supply.6) (8.1) and (8.4) and then substituting from (8. At+i ) (8. ns . Note that (8. At+i ) − wt+i ns − rt+i kt+i t+i t+i (8. rt+i }∞ . kt ).3) Above πt+i is the expected dividend. are actually carried into the market by the household for t exchange due to our assumption of re-opening of the market.138 where it+i is referred to investment.2) in terms of (8.2. At ) − t+i t+i t t t t s cd and kt = kt .3) to eliminate id . The solution to this problem can be written as: s cd t+i = Gc (kt+i .5) For the given technology sequence {At+i }∞ . we shall consider how the representative household calculates πt+i .

Such concept has somehow disappeared in the new Keynesian literature in which the household supplies the labor eﬀort according to the market demand and therefore it does not seem to face excess demand or supply. the disequilibrium in the labor market may still exist.7) given the way the wage determination is explained in section 8. even in this case. nd . kt . Let us ﬁrst consider the two factor markets. the household’s willingness to supply labor eﬀort is not necessarily equal to its actual supply or the market demand. This indicates that s d kt = kt = kt As concerning the labor market. . the so-called labor market clearing should be deﬁned as the condition that the ﬁrm’s willingness to demand factors is equal to the household’s willingness to supply factors. we cannot regard the labor market to be cleared.8) For regular conditions on the production function f (·).1. 17 Strictly speaking. Next we shall consider the transactions in our three markets. 8. the solution to the above optimization problem should satisfy d rt = fk (kt .10) should be equal to the willingness of the household to supply labor as determined in (8. Yet. At ) t (8.10) where fk (·) and fn (·) are respectively the marginal products of capital and labor. we shall have to specify what rule should apply regarding the realization of actual employment. though in a simpler version. Therefore.17 Given a nonclearing labor market.2. In Appendix I these points are illustrated in a static version of the working of the labor market. kt . At ) t d wt = fn (kt . as implicitly expressed in (8.2. there is no reason to believe that ﬁrm’s demand for labor. nd . yt ) that maximizes the current t proﬁt: s d max yt − rt kt − wt nd t subject to s d yt = f (At .9) (8. An illustration of this statement. At some point the marginal disutility of work may be higher than the pre-set wage. is given in Appendix I. This indicates that even if there are no adjustment costs so that the household can adjust the wage rate at every time period t.139 optimization problem for the ﬁrm can thus be expressed as being to choose d s the input demands and output supply (nd .4 Transaction in the Factor Market and Actual Employment We have assumed the rental rate of capital rt to be adjustable in each period and thus the capital market is cleared. nd ) t (8.

for a more formal treatment of this point. 20 Note that if ﬁrms are oﬀ their supply schedule and workers oﬀ their demand schedule. a proper study would have to compute the ﬁrms’ cost increase and proﬁt loss and the workers’ welfare loss. workers will have to oﬀer more eﬀort than they wish to oﬀer. see Meyers (1968) and also Solow (1979). for instance. see Blanchard and Fischer.11) (8.140 Disequilibrium Rule: When disequilibrium occurs in the labor market either of the following two rules will be applied: nt = min(nd . for example. ﬁrms will employ more labor than what they wish to employ. the marginal cost for ﬁrms is rather ﬂat (as empirical literature has argued. 1984. this compromising rule might be considered a reasonable approximation. among others). It has been widely used in the literature on disequilibrium analysis (see.18 On the other hand.19 Such mutual compromises may be due to institutional structures and moral standards of the society. Note that in this case ﬁrms may be oﬀ their marginal product curve and thus this might require wage subsidies for ﬁrms as has been suggested by Phelps (1997). 1). (1993). 20 Given the rather corporate relationship of labor and ﬁrms in Germany. Above. If. This case corresponds to what is discussed in the literature as labor hoarding where ﬁrms hesitate to ﬁre workers during a recession because it may be hard to ﬁnd new workers in the next upswing. the ﬁrst is the famous short-side rule when nonclearing of the market occurs. The departure of the value function – as measuring the welfare of the representative household from the standard case – is studied in Gong and Semmler (2001). The second might be called the compromising rule. In our representative agent model. 1989) and the marginal disutility is also rather ﬂat the overall loss may not be so high. Results of this study are reported in Appendix III of this chapter. however. when there is excess demand. Benassy 1975. Such a rule that seems to hold for many other countries was already discussed early in the economic literature. see Burnside et al. 18 (8.12) ωnd t + (1 − ω)ns t . If there is excess supply. the unemployment is mainly due to adaptive optimization of the household given the institutional This could also be realized by ﬁrms by demanding the same (or less) hours per worker but employing more workers than being optimal. see Burnside et al. We want to note that the unemployment we discuss here is certainly different from the frictional unemployment as often discussed in search and matching models. (1993). This rule indicates that when nonclearing of the labor market occurs both ﬁrms and workers have to compromise. ns ) t t nt = where ω ∈ (0. 19 This could be achieved by employing the same number of workers but each worker supplying more hours (varying shift length and overtime work).

5. instead of (8. (8.2. 21 . is now given by s yt = f (kt . 22 Already Hicks (1963) has called this frictional unemployment.. nt . which. nt ) t + Et i=1 β i U (cd . nt+i .2). the ﬁrm will engage in its production activity.13) s Then the transaction needs to be carried out with respect to yt . which should be derived from the following optimization program: ∞ max (cd ) t U (cd .5).22 8. At+i ) − cd t+i i = 1.). Therefore.14) subject to s kt+1 = (1 − δ)kt + f (kt . 2003). Rubart and Semmler (2003).5 Actual Employment and Transaction in the Product Market After the transactions in these two factor markets have been carried out. For comments on this view. Recently.15) (8. At ) − cd t s s s s kt+i+1 = (1 − δ)kt+i + f (kt+i . see Ljungqvist and Sargent (1998. 2.141 arrangements of the wage setting (see Chapter 8. . one important form of a mismatch in the labor market seems to be the mismatch of skills. the previous consumption plan as expressed by (8. ns ) t+i t+i (8. It is important to note that when the labor market is not cleared. The cause for frictional unemployment can arise from informational and institutional search and matching frictions where welfare state and labor market institutions may play a role. see Greiner.16) Note that in this optimization program the only decision variable is about cd and the data includes not only At and kt but also nt .8). (8. The result is the output supply. At ). see Blanchard (2003).. which further bring the improper transition law of capital (8. nt . which is given by t For a recent position representing this view. the household will be required to construct a new consumption plan. see also Walsh (2002) who employs search and matching theory to derive the persistence of real eﬀects resulting from monetary policy shocks. for deriving the plan.21 Yet the frictions in the institutions of the matching process are likely to explain only a certain fraction of observed unemployment.6) becomes invalid due to the improper budget constraint (8.

At .18) where δ is the depreciation rate. utility function and the stochastic process of At . nt .17) Given this adjusted consumption plan. the product market should be cleared if the household demand f (kt .3 Estimation and Calibration for U. At is the temporary shock in technology and Xt the permanent shock that follows a growth rate γ.18) by Xt : kt+1 = 1 1−α (1 − δ)kt + At kt (nt N /0. Therefore.1 The Empirically Testable Model Let Kt denote for capital stock. for the U. α is the share of labor in the production function F (·) = At Kt1−α (Nt Xt )α . ct ≡ Ct /Xt and nt ≡ 0. we here still employ the model as formulated by King. The 23 Note that Xt includes both population and productivity growth. 8. S.3. 1+γ (8. economy. Assume that the capital stock in the economy follow the transition law: Kt+1 = (1 − δ)Kt + At Kt1−α (Nt Xt )α − Ct . For an empirically testable model.17) should also be the realized consumption. Nt for per capita working hours. Yt for output and Ct for consumption. but also we do not introduce the growth factor into the model. At ) − cd for investment. S.142 either (8. Plosser and Rebelo (1988).3)α − ct . cd in t t (8. Note that nt is often regarded to be the normalized hours.12).3Nt /N with N to be the sample mean of Nt . of our model as presented in the last section. we divide both sides of equation (8. (8. However. Economy This section provides an empirical study. We can write the solution in terms of the following equation (see Appendix II of this chapter for the detail): cd = Gc2 (kt . To transform the model into a stationary setting. 8. It is not the model that can be tested with empirical data.19) where kt ≡ Kt /Xt . the model in the last section is only for illustrative purpose. .23 The model is nonstationary due to Xt . not only because we do not specify the forms of production function. nt ) t (8.11) or (8.

we shall call the standard model the Model I. β.) innovation: 2 ǫt ∼ N (0. is the average percentage of hours attributed to work. Speciﬁcally. and the two labor market disequilibrium models with the disequilibrium rules as expressed in (8. σǫ ).22) (8.20) f (·) = At kt (nt N /0. we can simulate the model to generate stochastically simulated data. 2) are the complicated functions of the model’s structural parameters. and the disequilibrium model with the compromising rule (8. including σε . as pointed out by Hansen (1985).24) Note that here (8.6) and (8. They are computed as in Chapter 5 by the numerical algorithm using the linear-quadratic approximation method presented in Chapter 1 and 2. among others.3. the data generating process include (8.22) as well as ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 (8. (8.11) and (8.22).21) where ǫt is an independently and identically distributed (i. α.19).2 The Data Generating Process For our empirical test. which. as a benchmark for comparison.24) are the linear approximations to (8. With regard to the household preference. The coeﬃcients Gij and gi (i = 1.i. Given these coeﬃcients and the parameters in equation (8.23) (8. 2 and j = 1. we consider three model variants: the standard RBC model. the disequilibrium model with short side rule (8.3)α while yt ≡ Yt /Xt with Yt to be the empirical output.d. (8.12) respectively. Note that the above formulation also indicates that the form of f (·) in the previous section may follow 1−α (8. These data can then be compared to the sample moments of the observed economy. we shall assume that the utility function takes the form U (ct .143 sample mean of nt is equal to 30 %.23) and (8.7) when we ignore the superscripts s and d. For the standard RBC model. 8.12) the Model III. .11) the Model II. nt ) = log ct + θ log(1 − nt ) The temporary shock At may follow an AR(1) process: At+1 = a0 + a1 At + ǫt+1 .

t (8. j = 1. It generates the demand for labor as nd = (αAt Zt /wt )1/(1−α) kt (0. Let Xt = Zt Lt .26) In the appendix.17) should be equal to ct . which generally appears to be smooth. The moments of the labor eﬀort are solely reﬂected by the decision rule (8. the volatility of the labor eﬀort can not be much diﬀerent from the volatility of consumption. 3. the standard model does not allow for nonclearing of the labor market.e. The production function can be written as Yt = At Ztα Kt1−α Htα .24) as ns = G21 At + G22 kt + g2 t (8.. the equilibrium in the product market indicates that cd in (8. we thus obtain 1−α ¯ wt = αAt Zt kt (nd N /0.24) which is quite similar in its structure to the other decision rule given by (8. they are both determined by kt and At . To deﬁne the data generating process for our disequilibrium models.S.10). Therefore. which can be regarded as total labor hours. • Second. We shall assume that Lt has a constant growth rate µ and hence Zt follows the growth rate (γ − µ). 2. i. This structural similarity are expected to produce two labor market puzzles as aforementioned: • First. where Ht equals Nt Lt . we shall ﬁrst modify (8. the moments of labor eﬀort and consumption are likely to be strongly correlated.25) On the other hand. with Zt to be the permanent shock resulting purely from productivity growth.3/N ). Next we consider the labor demand derived from the production function F (·) = At Kt1−α (Nt Xt )α .23). . Taking the partial derivative with respect to Ht and recognizing that the marginal product of labor is equal to the real wage. and g3 .3)α−1 t This equation is equivalent to (8. we provide the details how to compute the coeﬃcients G3j . this equation can also be t approximated as ct = G31 At + G32 kt + G33 nt + g3 (8. and Lt from population growth.144 Obviously. This seems to be roughly consistent with the U. experience that we shall now calibrate.27) Note that the per capita hours demanded nd should be stationary if the real t wage wt and productivity Zt grow at the same rate.

0189 0. we estimate the parameters a0 .22).25). a1 and σε . σε . 25 Note that this re-scaling is necessary because we do not exactly know the initial condition of Zt . (8. and therefore we shall employ them here.19).25). This allows us to compute the data series of the temporary shock At . we specify µ at 0.1203.0333 σε 0. The estimation is conducted by a global optimization algorithm called simulated annealing. we use (8. 8.0010 β 0. the parameter ω in Model III is set to 0. δ. and ω.0045. α. for the nonclearing market model with short side rule.1 illustrates these parameters: Table 8. β. We re-scaled the wage series in such a way that the ﬁrst observation of employment is equal to the demand for labor as speciﬁed by equation (8. The next three parameters β.001. These parameters have already been estimated in Chapter 5. θ. The estimation is executed by a conventional algorithm. (8. which is close to the average growth rate of the labor force in U. We ﬁrst specify α and γ respectively at 0.0045 α 0.0185 µ 0.145 Thus. the data generating process includes (8.9930 θ 0.11).2080 ω 2. It is re-scaled to match the model’s implication. For Model III.3. the grid search. It is estimated by minimizing the residual sum of square between actual employment and the model generated employment. (8. (8. γ. δ and θ are estimated with the GMM method by matching the moments of the model generated by (8.1203 The data set used in this section is taken from Christiano (1987).23) and (8.5800 δ 0. 24 . which are standard. (8. Table 8.24 For our purpose it suﬃces to take the empirically observed series of wages.58 and 0.25 One however might apply here the eﬃciency wage theory or other theories such as the staggered contract theory that justify the wage stickiness. With this data series.1: Parameters Used for Calibration a0 a1 0. which we set equal to 1.3 The Data and the Parameters Before we calibrate the models we shall ﬁrst specify the parameters. The wage series are obtained from Citibase.27) with wt given by the observed wage rate.11)..27). We thereby do not attempt to give the actually observed sequence of wages a further theoretical foundation.9811 γ 0. There are altogether 10 parameters in our three variants: a0 .S.19). a1 . Model II. µ. For the new parameters.26) and (8.12) instead of (8.

where a one time simulation with the observed innovation At are presented.2 reports our calibration from 5000 stochastic simulations.4 Calibration Table 8. 26 Due to the discussion on Solow residual in Chapter 5.1.146 8. we shall now understand that At computed as the Solow residual may reﬂect also the demand shock in addition to the technology shock.3.26 All time series are detrended by the HP-ﬁlter. . The results in this table are conﬁrmed by Figure 8.

0165 0.0000 (0.0051 (0.2861 0.9392 (0.0203) 0.0000) 1.9288 (0.0020) Output 0.0336 (0.6807 (0.0115) 0.4874 (0.9866 (0.0000 (0.4604 0.0566 (0.0393 (0.0000 (0.0091 (0.1741 0.0971) 1.9754 (0.0000 (0.0954 1.1044) 1.0000) -0.0000 (0.0576 (0.0000) 0.1190) 0.0577 (0.0010) Employment 0.0000 (0.0026) 1.0825) 0.0012) 0.0197 (0.0135 (0.0000 1.0007) 0.0000 0.0036 (0.0158 (0.0906) 0.1069) 1.0407) 1.4525 (0.0000) 1.0156 0.0081 0.0000 (0.1662) 0.0327) 1.0031) 0.147 Table 8.9056 (0.0035 0.0098) 0.0000 0.0000) -0.0000) −0. Economy (numbers in parentheses are the corresponding standard errors) Consumption Standard Deviations Sample Economy Model I Economy Model II Economy Model III Economy Correlation Coeﬃcients Sample Economy Consumption Capital Stock Employment Output Model I Economy Consumption Capital Stock Employment Output Model II Economy Consumption Capital Stock Employment Output Model III Economy Consumption Capital Stock Employment Output 0.0137 (0.0000) .1362) 0.0076) 1.0000 (0.0717) 1.0021) 0.0000) 0.0824) 0.8924 (0.00332) 1.7550 1.0066 (0.0000) 0.0198) 0.0000) 0.0000) 0.0268) 1.0000 (0.2043 (0.1175) 0.0863 (0.7263 1.0545 (0.S.0006) 0.0000 0.0000 (0.1045) 0.0052 (0.0000 (0.2: Calibration of the Model Variants: U.0000 (0.0000) 0.0010) Capital 0.4944 (0.6869 (0.1593 (0.0095 (0.

The problem is. best the actual data. along most dimensions. the three columns representing the ﬁgures for Model I. the volatility in the Model III Economy is close to the one in the Sample Economy.4. We want to note that the failure of the standard model to match the volatility of employment of the data is also described in the recent paper by Schmidt-Grohe (2001). in the actual economy will not hold). employment and output. however.95.32 in the Model I Economy as the ratio of the standard deviation of labor eﬀort to the standard deviation of output. actual (solid line) and simulated data (dotted line) for consumption. capital stock.3 . As can be seen from the separate ﬁgures. Model II and Model III Economies.45. The result.148 First we want to remark that the structural parameters that we used here for calibration are estimated by matching the Model I Economy to the Sample Economy.1997. there are two signiﬁcant correlations we can observe: . As noted above.1 to 1983. close to our Sample Economy. which seems too high. We therefore may conclude that Model III is the best in matching the labor market volatility. For her employed time series data 1948. the volatility of employment has been greatly increased for both Model II and Model III. she ﬁnds the ratio to be 1.4. Schmidt-Grohe (2001) ﬁnds that the ratio of the standard deviation of employment to the standard deviation of output is roughly 0. the ratio is found to be 0. a similarly high ratio of standard deviations can also be observed in our Model II Economy where the short side rule leads to excessive ﬂuctuations of the labor eﬀort.69 for the Model II and Model III Economies respectively. This ratio is roughly 1 in the Sample Economy.2. In the Sample Economy. Further evidence on the better ﬁt of the nonclearing labor market models – as concerns the volatility of the macroeconomic variables – is also demonstrated in the Figure 8. is therefore somewhat biased in favor of the Model I Economy. 1955. For the indeterminacy model. from top to bottom. Next.38 and 0. let us look at the cross-correlations of the macroeconomic variables. of course. For our time period. It is not surprising that for most variables the moments generated from the Model I Economy are closer to the moments of the Sample Economy. There the ratio is 1. in particular the Model III Economy ﬁts. resolved in our Model II and Model III Economies representing sticky wages and labor market nonclearing. Yet for the standard RBC model. As observable.1 where the horizontal ﬁgures show. reﬂected in Table 8. although too high a volatility is observable in the Model II Economy which may reﬂect our assumption that there are no search and matching frictions (which. Yet even in this case.49. there is an excessive smoothness of the labor eﬀort and the employment series of the data cannot be matched. originating in the work by Benhabib and co-authors. In particular. we ﬁnd 0. which is too low compared to the empirical data.

this correlation is weak. about 0. consumption and employment are. and between employment and output. roughly 0. However. empirically.149 the correlation between consumption and output.93. about 0. not explicitly been discussed in the RBC literature. . Yet. including the recent study by Schmidt-Grohe (2001).75. They. also strongly correlated. with 0. The latter result of the standard model is not surprising given that movements of employment as well as consumption reﬂect the movements in the state variables capital stock and the temporary shock.72. to our knowledge. Discussions have often focused on the correlation with output.46. in our Model I Economy and this only holds for the Model I Economy (the standard RBC model) in addition to these two correlations. These two strong correlations can also be found in all of our simulated economies. We remark here that such an excessive correlation has. therefore. should be somewhat correlated.

This is because we have made a distinction between the demand and supply of labor. .S.150 Figure 8. dotted line for simulated economy) A success of our nonclearing labor market models. whereas only the latter. Case (solid line for sample economy. labor supply.1: Simulated Economy versus Sample Economy: U. the correlation with consumption is therefore weakened. reﬂects the moments of capital and technology as consumption does. is that employment is no longer signiﬁcantly correlated with consumption. see the Model II and III Economies. Since the realized employment is not necessarily the same as the labor supply.

4 Estimation and Calibration for the German Economy Above we have employed a model with nonclearing labor market for the U. the data on total labor force is also from the OECD (1998b).4. Next. The time series on the hourly real wage index is from OECD (1998a). The standard deviations of the detrended series are summarized in Table 8. see OECD (1998a).3 are detrended by the HPﬁlter. investment and capital stock are OECD data. we want to compare some stylized facts. S.2 The Stylized Facts Next. The time series data on total working hours is taken from Statistisches Bundesamt (1998). we pursue a similar study of German economy. We use again quarterly data. economy.1.4. For this purpose we shall ﬁrst summarize some stylized facts on the German economy compared to the U.1 to 1992. We thus have included a short period after the uniﬁcation of Germany (1990 .3 compare 6 key variables relevant for the models for both the German and U.1 The Data Our subsequent study of the German economy employs the time series data from 1960. 8. Figures 8.S.3. economies. The time series data on GDP.S. consumption. the data in Figure 8.1991). In particular. We have seen that one of the major reasons that the standard model can not appropriately replicate the variation in employment is its lack of introducing the demand for labor. . 8.151 8.2 and 8. economy.

S.152 Figure 8. versus Germany .2: Comparison of Macroeconomic Variables U.

153 Figure 8.3: Comparison of Macroeconomic Variables: U. S. versus Germany (data series are detrended by the HP-ﬁlter) .

S. In contrast. economy.0166 0.4.4.0036 0. These results might be due to our ﬁrst remark regarding the diﬀerence in employment volatility. Second. S. Instead. Third. while in the U. All of these parameters are estimated in the same way as those for the U. economy. If employment is smooth. However. we shall use an AR(2) process: At+1 = a0 + a1 At + a2 At−1 + εt+1 The parameters used for calibration are given in Table 8.3 The Parameters For the German economy.0100 0.0258 0.S. the employment (measured in terms of per capita hours) is declining over time in Germany (see Figure 8.0129 0. in the U. employment and the eﬃciency wage are among the variables with the highest volatility in the U.154 Table 8.3: The Standard Deviations (U. our investigation showed that an AR(1) process does not match well the observed process of At . 8. consumption capital stock employment output temporary shock eﬃciency wage 0. the other two factors have to be volatile.0146 0. they are both more volatile in Germany. The volatility of output must be absorbed by some factors in the production function. First.2 for the non-detrended series). Should we expect that such diﬀerences will lead to diﬀerent calibration of our model variants? This will be explored next. . S. in the German economy they are the smoothest variables.S. versus Germany) Germany (detrended) (detrended) U.0273 Several remarks are at place here. economy. the series is approximately stationary.0164 0.0203 0. economy.0230 0. the capital stock and temporary shock to technology are both relatively smooth.0084 0.0115 0.S.

the Model III Economy is equivalent to the Model I Economy.4. will be the best in matching German labor market.0044 1.1507 0 It is important to note that the estimated ω in this case is on the boundary 0.5 for the German economy the calibration result from 5000 time stochastic simulations.4: Parameters used for Calibration (German Economy) a0 a1 a2 σε 0.4 Calibration As for the U. the standard model. 27 27 Note that we do not include the Model III Economy for calibration.8880 -0.S. 8. Again all time series here are detrended by the HP-ﬁlter. Due to the zero value of the weighting parameter ω.0019 0.0538 2.8920 0. economy we provide in Table 8.12). .6600 0.0083 0.4 we again compare the one-time simulation with the observed At for our model variants.155 Table 8. the Model III Economy is almost identical to the Model I Economy. This seems to provide us with the conjecture that the Model I Economy.9876 δ θ ω 0. In other words.0071 γ µ α β 0. In Figure 8. indicating the weight of the demand is zero in the compromising rule (8.

0000 (0.0100 0.0000 0.7208 (0.0000 (0.6907 (0.8935 (0.0920) 0.4648 (0.2319) 0.1028) 1.9473 (0.1842 (0.0000) −0.1047) 0.0107 (0.9002) 1.0000 1.1309) 0.0023) 0.0000 (0.3486 (0.0258 0.1461) 0.0000 (0.0000) 0.0000 (0.0241 (0.0000) 0.3002 0.9692 1.7496 (0.1519) Output 0.0202 1.0238) Employment 0.1640) 0.0146 0.0000) 1.0000 0.0106) 0.0000) 0.0425 (0.0000 (0.0039 0.0200) 1.0000) 0.0066) 0.0397 (0.7147 (0.5: Calibration of the Model Variants: German Economy (number in parentheses are the corresponding standard errors) Consumption Standard Deviations Sample Economy Model I Economy Model II Economy Correlation Coeﬃcients Sample Economy Consumption Capital Stock Employment Output Model I Economy Consumption Capital Stock Employment Output Model II Economy Consumption Capital Stock Employment Output 1.2362) 1.1312) 1.5420 (0.5138 (0.1276 (0.0112) 0.00000) .5423 1.0000 -0.0292 (0.1099) 1.9130 (0.1533) Capital 0.0000) 0.0000 (0.4561) 0.0865 (0.4360 0.156 Table 8.0203 0.0000 (0.4855 (0.

We shall also note that the simulated labor supply in Germany is smoother than in the U.4: Simulated Economy versus Sample Economy: German Case (solid line for sample economy. there is a diﬀerence concerning the variation of employment. First. Nickell (1997) and Nickell (2003). labor market. . S. Those factors 28 See.28 Such inﬂuences and legal restriction will give rise to the smoother employment series in contrast to the U. dotted line for simulated economy) In contrast to U.3 and Figure 8. S. there are stronger inﬂuences of labor unions and various legal restrictions on ﬁrms’ hiring and ﬁring decisions.157 Figure 8. may also be viewed as a readiness to compromise as our Model III suggests. In particular.S. and see already Meyers (1964). S.5). In most labor market studies the German labor market is often considered less ﬂexible than the U. This is likely to be due to the fact that employment itself is smooth in the German economy (see Table 8. Such inﬂuences and legal restriction. The standard problem of excessive smoothness with respect to employment in the benchmark model no longer holds for the German economy. or what Solow (1979) has termed the moral factor in the labor market.3). economy we ﬁnd some major diﬀerences. for example. (see Figure 8.

158 will indeed give rise to a smooth employment series. Further, if we look at the labor demand and supply in Figure 8.5, the supply of labor is mostly the short side in the Germany economy whereas in U.S. economy demand is dominating in most periods. Note that here we must distinguish the supply that is actually provided in the labor market and the “supply” that is speciﬁed by the decision rule in the standard model. It might reasonably be argued that due to the intertemporal optimization subject to the budget constraints the supply speciﬁed by the decision rule may only approximate the decisions from those households for which unemployment is not expected to pose a problem on their budgets. Such households are more likely to be currently employed and protected by labor unions and legal restrictions. In other words, currently employed labor decides, through the optimal decision rule, about labor supply and not those who are currently unemployed. Such a shortcoming of single representative intertemporal decision model could presumably be overcome by a intertemporal model with heterogenous households.29

Figure 8.5: Comparison of demand and supply in the labor market (solid line for actual, dashed line for demand and dotted line for supply)

29

See, for example, Uhlig and Xu (1996).

159 The second diﬀerence concerns the trend in employment growth and unemployment of the U.S. and Germany. So far we only have shown that our model of nonclearing labor market seems to match better than the standard RBC model the variation in employment. This in particular seems to be true for the U.S. economy. We did not attempt to explain the trend of the unemployment rate neither for the U.S. nor for Germany. We want to note that the time series data (U. S. 1955.1 - 1983.1, Germany 1960.1 - 1992.1) are from a period where the U.S. had higher – but falling – unemployment rates, whereas Germany had still lower but rising unemployment rates. Yet, since the end of the 1980s the level of the unemployment rate in Germany has considerably moved up, partly due to the uniﬁcation of Germany after 1989.

8.5

Diﬀerences in Labor Market Institutions

In Chapter 8.2 we have introduced rules that might be thought to be operative when there is a nonclearing labor market. In this respect, as our calibration in section 3 has shown, the most promising route to model, and to match, stylized facts of the labor market, through a microbased labor market behavior, is the compromising model. One hereby may pay attention to some institutional characteristics of the labor market presumed in our model. The ﬁrst is the way how the agency representing the household sets the wage rate. If the household sets the wage rate, as if it were monopolistic competitor, then at this wage rate the household’s willingness to supply labor is likely to be less than the market demand for labor unless the household sufﬁciently under-estimates the market demand when it conducts its optimization for wage setting. Such a way of wage setting may imply unemployment and it is likely to be the institutional structure that gives the representative household (or the representative of the household, such as unions), the power to bargain with the ﬁrm in wage setting.30 Yet, there could be, of course, other reasons why wages do not move to a labor market clearing level – such as eﬃciency wage, insider – outsider relationship, or wages determined by standards of fairness as Solow (1979) has noted and so on. On the other hand, there can be labor market institutions, for example corporatist structures, also measured by our ω, which aﬀect the actual employment. Our ω expresses how much weight is given to the desired labor supply or desired labor demand. A small ω means that the agency, repre30

This is similar to Woodford’s (2003, ch. 3) idea of a deviation of the eﬃcient and natural level of output where the eﬃcient level is achieved only in a competitive economy with no frictions.

160 senting the household, has a high weight in determining the outcome of the employment compromise. A high ω means that the ﬁrm’s side is stronger in employment negotiations. As our empirical estimations in Gong, Ernst and Semmler (2004) have shown the former case, a low ω, is very characteristic of Germany, France and Italy whereas a larger ω is found for U.S. and the U.K.31 Given the rather corporatist relationship of labor and the ﬁrm in some European countries, with some considerable labor market regulations through legislature and union bargaining (rules of employment protection, hiring and ﬁring restrictions, extension of employment even if there is a shortfall of sales etc.)32 , our ω may thus measure diﬀerences concerning labor market institutions between the U.S. and European countries. This has already been stated in the 1960s by Meyers. He states: ”One of the diﬀerences between the United States and Europe lies in our attitude toward layoﬀs... When business falls oﬀ, he [the typical American employer] soon begins to think of reduction in work force... In many other industrial countries, speciﬁc laws, collective agreements, or vigorous public opinion protect the workers against layoﬀs except under the most critical circumstances. Despite falling demand, the employer counts on retraining his permanent employees. He is obliged to ﬁnd work for them to do... These arrangements are certainly eﬀective in holding down unemployment”. (Meyers, 1964:) Thus, we wish to argue that the major international diﬀerence causing employment variation does arise less from real wage stickiness (due to the presence of unions and the extend and duration of contractual agreements between labor and the ﬁrm)33 but rather it seems to be the degree to which compromising rules exist and which side dominates the compromising rule. A lower ω, deﬁning, for example, the compromising rule in Euro-area countries, can show up as diﬀerence in the variation of macroeconomic variables. This is demonstrated in Chapter 8.4 for the German economy. We there could observe that ﬁrst, employment and the eﬃciency wage (deﬁned as real wage divided productions) are among the variables with the

In the paper by Gong, Ernst and Semmler (2004) it is also shown that the ω is strongly negatively correlated with labor market institutions. 32 This could also be realized by ﬁrms by demanding the same (or less) hours per worker but employing more workers than being optimal. The case would then correspond to what is discussed in the literature as labor hoarding where ﬁrms hesitate to ﬁre workers during a recession because it may be hard to ﬁnd new workers in the next upswing, see Burnside et al. (1993). Note that in this case ﬁrms may be oﬀ their marginal product curve and thus this might require wage subsidies for ﬁrms as has been suggested by Phelps (1997). 33 In fact real wage rigidities in the U.S. are almost the same as in European countries, see Flaschel, Gong and Semmler (2001).

31

161 highest volatility in the U. S. economy. However, in the German economy they are the smoothest variables. Second, in the U. S. economy, the capital stock and temporary shock to technology are both relatively smooth. In contrast, they are both more volatile in Germany. These results are likely to be due to our ﬁrst remark regarding the diﬀerence in employment volatility. The volatility of output must be absorbed by some factors in the production function. If employment is smooth, the other two factors have to be volatile. Indeed, recent Phillips curve studies do not seem to reveal much diﬀerence in real wage stickiness between Germany and the U.S., although the German labor market is often considered less ﬂexible.34 Yet, there are diﬀerences in another sense. In Germany, there are stronger inﬂuences of labor unions and various legal restrictions on ﬁrms’ hiring and ﬁring decisions shorter work week even for the same pay etc. 35 Such inﬂuences and legal restriction will give rise to the smoother employment series in contrast to the U.S.. Such inﬂuences and legal restriction, or what Solow (1979) has termed the moral factor in the labor market, may also be viewed as a readiness to compromise as our Model III suggests. Those factors will indeed give rise to a lower ω and a smoother employment series.36 So far we only have shown that our model of nonclearing labor market seems to match better the variation in employment than the standard RBC model. Yet, we did not attempt to explain the secular trend of the unemployment rate neither for the U.S. nor for Germany. We want to express a conjecture of how our model can be used to study the trend shift in employment. We want to note that the time series data for the table 8.3 (U.S. 1955.1-1983.1, Germany 1960.1-1992.1) are from a period where the U.S. had higher – but falling – unemployment rates, whereas Germany had still lower but rising unemployment rates. Yet, since the end of the 1980s the level of the unemployment rate in Germany has considerably moved up, partly, of course due to the uniﬁcation of Germany after 1989. One recent attempt to better ﬁt the RBC model’s predictions with labor

See Flaschel, Gong and Semmler (2001). See,for example, Nickell (1997) and Nickell et al. (2003), and see already Meyers (1964). 36 It might reasonably be argued that, due to intertemporal optimization subject to the budget constraints, the supply speciﬁed by the decision rule may only approximate the decisions of those households for which unemployment is not expected to pose a problem on their budgets. Such households are more likely to be currently employed represented by labor unions and covered by legal restrictions. In other words, currently employed labor decides, through the optimal decision rule, about labor supply and not those who are currently unemployed. Such a feature could presumably be better studied by an intertemporal model with heterogenous households, see, for example, Uhlig and Xu (1996).

35 34

41 Phelps and his co-authors have pointed out that an important change in the households’ preferences in Europe is that households now rely more an assets instead of labor income. given by equation (8.40 Yet. the Solow residual. 2003) have argued that other shocks. Thus exogenous technology shocks constitute only a small fraction of the Solow residual. 2003). are important as well. In the context of our model this would have the eﬀect that labor demand. 39 See Campbell (1994) for a modelling of a trend in technology shocks. and that the model itself fails to explain such a shift. 40 Of course.27) starts tending to grow at a lower rate which many researchers recently have maintained to have happened in Germany.27) may fall short of labor supply given by equation (8. Empirical evidence on the role of lagging implementation and diﬀusion of new technology for low employment growth in Germany can be found in Heckman (2003) and Greiner. see Phelps (1997) and Phelps and Zoega (1998). the work by Phelps. as it used in RBC models as the technology shock. Yet. greatly depends on endogenous variables (such as capacity utilization). This is likely to occur in the long-run if the productivity Zt in equation (8. since the 1980s.38 In contrast to the literature on institutional frictions in the search and matching process we think that the essential impact on the trend in the rate of unemployment seems to stem from both changes of preferences of households as well as a changing trend in the technology shock.24). those models usually observe that there has been a shift in matching functions due to evolution of unemployment rates such as. there have also been secular changes on the supply side of labor due to changes in preferences of households.41 Some of those factors aﬀecting the households’ supply of labor have been discussed See Merz (1999) and Ljungqvist and Sargent (1998. 38 37 . experienced in Europe since the 1980s.162 market data has employed search and matching theory. see Flaschel. as recent research has stressed. For an account of the technology trend. the trend in the wage rate is also important in the equation for labor demand (in equation 25). for example.37 Informational or institutional search frictions may then explain the equilibrium unemployment rate and its rise. Semmler and Gong (2004). see Blanchard and Wolfers (2000) and Blanchard (2003). the change in the trend of the unemployment rate is likely to be related to the long-run trend in the true technology shock. in the long run. for example demand shocks. see Heckman (2003). for example. Gali (1999) and Francis and Ramey (2001. and other European countries. Gong and Semmler (2001). and for an additional account of the wage rate. We thus might conclude that cyclical ﬂuctuations in output and employment are not likely to suﬃciently be explained by productivity shocks alone. as shown in Chapters 5 and 9. For an evaluation of the search and matching theory as well as the role of shocks to explain the evolution of unemployment in Europe.39 Concerning the latter. Yet.

Greiner. 8.S. Our study has provided a framework that allows to also follow up such issues. that the welfare losses are very small. In particular. There we ﬁnd that similarly to Sargent and Ljungqvist (1998). on the demand side for labor.6 Conclusions Market clearing is a prominent feature in the standard RBC model which commonly presumes wage and price ﬂexibility.. given wage stickiness. for example.163 above. our model suggests that one has to study more carefully the secular forces aﬀecting the supply and the demand of labor as modeled in our multiple stage decision process of section 2. Nonclearing labor market is then a result of diﬀerent employment rules derived on the basis of a multiple stage decision process. with respect to the trend of lower employment growth in some European countries as compared to the U. 43 See Blanchard and Wolfers (2000). Semmler and Gong (2004) and Heckman (2003) 44 For further discussion.43 On the other hand there has also been changes in the preferences of households. see also Chapter 9.42 As concerning international aspects of our study we presume that diﬀerent labor market institutions result in diﬀerent weights deﬁning the compromising rule. the slow down of technology seems to have been a major factor for the low employment growth in Germany and other countries in Europe. Calibrations have shown that such model variants will produce a higher volatility in employment. we have introduced an adaptive optimization behavior and a multiple stage decision process that. 42 . for Germany in contrast to the U. results in a nonclearing labor market in an otherwise standard stochastic dynamic model. In this chapter.44 Appendix III computes the welfare loss of our diﬀerent model variants of nonclearing labor market.S. Finally. are consistent with what has been found in many other empirical studies with regard to the institutions of the labor market. The results for Euro-area economies. since the 1980s. and thus ﬁt the data signiﬁcantly better than the standard model.

which is derived from the condition of marginal product equal to the wage rate: ∗ wt = fn (At+i . If the household knows the production f (At . there is only a certain probability (due to the adjustment cost in changing the wage) that the household will set a new wage rate in period t.28) subject to ∗ kt+i+1 = (1 − δ)kt+i + f (At+i . Note that here n(wt . kt ). n(wt . the decision problem of the household with regard to wage setting may be expressed as follows: ∞ ∗ wt . kt . Obviously. kt+i . n(wt . and the sequence of expectations on {At+i }i=1 where At and kt are referred to as the technology and capital stock respectively. kt+i . At+i )) (8. Of course there is no guarantee that the household will actually implement this sequence {ct+i }∞ .7 Appendix I: Wage Setting Suppose now that at the beginning of t the household (of course with certain ∗ probability denoted as 1 − ξ) decides to set up a new wage rate w1 given ∞ the data (At . as argued by recent New i=0 Keynesian literature. where nt the labor eﬀort so that it may also know the ﬁrm’s demand for labor. the observed wage dynamics wt may follow Calvo’s updating scheme: ∗ wt = (1 − ξ)wt + ξwt−1 ∗ Such a wage indicates that there exists a gap between optimum wage wt and the observed wage wt . However. this probability will be reduced when i become larger. kt+i . nt . At+i ). At+i )) − ct+i (8. Therefore. It should be noted that in recent New Keynesian literature where the wage is set in a similar way as we have discussed here. nt+i ) We shall remark that although the decision is mainly about the choice ∞ ∗ of wt . kt+i . which depends on consump∗ ∗ tion ct+i and the labor eﬀort n(wt . kt+i .{ct+i }∞ i=0 max Et i=0 ∗ (ξβ)i U (ct+i . the sequence of {ct+i }i=0 should also be considered for the dynamic optimization.164 8. kt+i . the household . U (·) is the household’s utility function. At+i ) is the function of ﬁrm’s demand for labor. the concept of nonclearing labor market somehow disappeared.29) ∗ Above ξ i is the probability that the new wage rate wt will still be eﬀective in period t + i. In this literature.

This produces the gaps between optimum price w∗ and actual price w0 and between optimum supply n∗ and actual supply n′ . whose existence is caused by the adjustment cost in changing prices.6: A Static Version of the Working of the Labor Market Some clariﬁcations may be obtained by referring to a static version of our view on the working of the labor market. However. what New Keynesian economists are concerned with is the gap between the optimum price and actual price. In correspondence to the gap between optimum and actual price. the supplier (or the household. at the beginning of period 0) sets its price optimally according to the expected demand curve D0 .165 is assumed to supply the labor eﬀort according to the market demand at the existing wage rate and therefore does not seem to face the problem of excess demand or supply. the supplier may stick to w0 . Yet. Instead. In ﬁgure 8. there also exists a gap between optimum output and actual output. due to the adjustment cost in changing prices. Consider now the situation that the supplier’s expectation on demand is not fulﬁlled. w w* MC w0 MR MR’ D0 D0 D’ n0 n* n’ ns n Figure 8. Let us denote this price as w0 . the household may reasonably believe that the demand curve should be D′ and therefore the optimum price should be w∗ while the optimum supply should be n∗ .6. In this case. in the labor market case) ﬁrst (say. the market demand at w0 is n′ . the existence of price and output gaps does not exclude the . Instead of n0 .

New Keynesian literature presumes that at the existing wage rate. and a nonclearing labor market can be very general phenomena. Note that in ﬁgure 8. At ) − cd λt kt+1 − t 1+γ ∞ + Et i=1 β i log(cd ) + θ log(1 − ns ) + t+i t+i 1 s s (1 − δ)kt+i + f (kt+i . kt+1 and λt . the household supplies labor eﬀort whatever the market demand for labor is. MC. the disequilibrium in the labor market may still exist. to the marginal disutility of work.(8.16). we deﬁne the Lagrangian: L = Et log cd + θ log(1 − nt ) + t 1 s s s (1 − δ)kt + f (kt . nt . This indicates that even if there are no adjustment costs so that the household can adjust the wage rate in every t (so that there is no price and quantity gaps as we have mentioned earlier). can be interpreted as marginal disutility of labor which has also an upward slope since we use the standard log utility function as in the RBC literature. at the given wage rate there is a ﬁrm’s willingness to demand labor and the household’s willingness to supply labor. ns . or equal. This then means that the household’s supply of labor will be restricted by a wage rate below. that is. If we deﬁne the labor market demand and supply in a standard way. we thus take the partial derivatives of t s L with respect to cd .14) .166 existence of a disequilibrium or nonclearing market. At+i ) − cd t+i t+i 1+γ s β i λt+i kt+1+i − Since the decision is only about cd . This gives us the following ﬁrst-order t . In this context the marginal cost curve. Appendix II: Adaptive Optimization and Consumption Decision For the problem (8.6 the household’s willingness to supply labor is ns .

Using (8.3 t+1 α α = λt . t+1 (8.42) (8. s Et ns = G2 kt+1 + D2 (a0 + a1 At ) + g2 . we obtain from (8.24) we have postulated s λt+1 = Hkt+1 + QAt+1 + h.44) . s Fk1 Et λt+1 + Fk2 Et At+1 + Fk3 kt+1 + Fk4 Et ns + fk = λt .38) (8.41) (8.40) where.37) (8. h.34) where H. Suppose they can be written as Fc1 ct + Fc2 λt + fc = 0. Fc2 Fc2 (8. κ1 = Fk1 H + Fk3 + Fk4 G21 .37) to express λt in (8.36) and a0 +a1 At t+1 respectively. κ0 = Fk1 (Qa0 + h) + Fk2 a0 + Fk4 (G22 a0 + g2 ) + fk . G21 .38) s κ1 kt+1 + κ2 At + κ0 = λt . t (8.32) Recall that in deriving the decision rules as expressed in (8. (8.35) (8.23) and (8. Et ns and Et At+1 in terms of (8.167 condition: 1 λt = 0. s ns = G21 kt+1 + G22 At+1 + g2.40).36) Our next step is to linearize (8.(8. We therefore obtain from (8. t (8.32) around the steady states.39) Expressing Et λt+1 . Q.31) 1 s s ¯ (1 − δ)kt + At (kt )1−α nt N /0. (8. κ2 = Fk1 Qa1 + Fk2 a1 + Fk4 G22 a1 .35). in particular.43) Fc1 d fc ct − .30) . G22 and g2 have all been resolved previously in the household optimization program. (8. t+1 s kt+1 = Akt + W At + C1 cd + C2 nt + b.33) (8.34) s Et λt+1 = Hkt+1 + Q(a0 + a1 At ) + h. we further obtain s κ1 kt+1 + κ2 At + κ0 = − (8.33) and (8.30) ¯ ns N /0.3 1+γ − cd . t+1 (8. − d 1+γ ct β s Et λt+1 (1 − δ) + (1 − α)At+1 kt+1 1+γ s kt+1 = −α (8.

which will aﬀect the labor employed (via demand for labor) and hence eventually the welfare performance. and III. II. in addition to At . However. Also. and the two models with nonclearing labor market. consumption and employment.are realized in all periods. We here restrict our welfare analysis to the U. is somewhat diﬀerent from the the benchmark model due to the distinction between expected and actual moments with respect to our state variable. Appendix III: Welfare Comparison of the Model Variants In this appendix we want to undertake a welfare comparison of our diﬀerent model variants. the capital stock. In the models of nonclearing market the representative agent may not rationally expect those moments of the capital stock.which are optimal for the representative agent . They are given by Simulated Economy I. given the sequence of our two decision variables.39) and (8. The welfare result due to these changes in the speciﬁcation may therefore deviate from what one would expect. model variants. Our exercise here is to compute the values of the objective function for all our three models. Note that for our models variants with nonclearing . κ1 Fc2 κ1 κ1 Fc2 κ1 (8. entering into the models. we believe that this may not generically be the case.5). whereas they concentrate on the steady state. The expected moments are represented by equation (8. A likely conjecture is that the benchmark model should always be superior to the other two variants because the decisions on labor supply . we compute the welfare also outside the steady state.168 which is equivalent to s kt+1 = − Fc1 d κ0 fc κ2 At − ct − − . We follow here Ljungqvist and Sargent (1998) and compute the welfare implication of the diﬀerent model variants.45) will allow us to solve cd as t Fc1 + C1 Fc2 κ1 −1 cd = − t Akt + κ2 +W κ1 At + C2 nt + b + κ0 fc + κ1 Fc2 κ1 .5) while the actual moments are expressed by equation (8.45) Comparing the right side of (8. Yet. They are not necessary equal unless the labor eﬀorts of those two equations are equal. S. there is another external variable wt . It is suﬃcient to consider only the equilibrium (benchmark) model. The point here is that the model speciﬁcation in variants II and III.

169 labor market. where ∞ V ≡ t=0 β t U (ct . nt ) is given by log(ct ) + θ log(1 − nt ). Dashed Line for Model III) Figure 8. We choose the diﬀerent k0 based on the grid search around the steady state of kt . we calculate V . (a) Welfare Comparison with External Variable set at their Steady State (Solid Line for Model II. One is to set both external variables at their steady state levels for all t. Figure 8. Dashed Line for Model III) (b) Welfare Comparison with External Variable set at their Observed Series (Solid Line for Model II. nt ) where U (ct . we use realized employment.7 provides the welfare comparison of the two versions. We consider two diﬀerent ways to treat these external variables. the value of V for any given k0 will also depend on the external variable At and wt (though in the benchmark model. rather than the decisions on labor supply. More speciﬁcally. This exercise here is conducted for diﬀerent initial conditions of kt denoted by k0 . to compute the utility functional.7: Welfare Comparison of Model II and III . only At appears). The other is to employ their observed series entering into the computation. Obviously.

Meanwhile. it is important to note that the deviations from the benchmark model are very small. the percentage deviations of V from the corresponding values of benchmark model is plotted for both Model II and Model III given for various k0 around the steady states. Furthermore.7(a). However. they. see lower part of the ﬁgure. not always is the benchmark model the best one.170 In Figure 8. the deviations become 0 for the Model II.7(b). the Model III will be superior in its welfare performance when k0 is larger than its steady state. When k0 is suﬃciently high. however. 8. close to or higher than the steady state of kt . The various k0 ’s are expressed in terms of the percentages deviation from the steady state of kt . since most of the values are negative. It is not surprising to ﬁnd that in most cases the benchmark model is the best in its welfare performance. . Similar results have been obtained by Ljungqvist and Sargent (1998). in the case of using observed external variables. compare only the steady states.

This model then naturally generates higher volatility of employment and a low correlation between employment and consumption.Chapter 9 Monopolistic Competition. Recent literature uses monopolistic competition theory to give a foundation to nominal stickiness. This is necessary for a model with nonclearing markets where adjustments should take place in real time. 9. Next we relate our approach of nonclearing labor market to the theory of monopolistic competition in the product market as developed in New Keynesian economics. In many respects.1 The Model As mentioned in chapter 8. price and wage stickiness is an important feature in New Keynesian literature. Nonclearing Markets and Technology Shocks In the last chapter we have found that if we introduce some non-Walrasian features into an intertemporal decision model with the household’s wage setting. Since both household and ﬁrm make their quantity decisions on the basis 171 . As concerning wage stickiness Keynes (1936) has attributed strong stabilizing eﬀects to wage stickiness. The assumption of re-opening of the market shall also be adopted here. the speciﬁcations in this chapter are the same as for the model of the last chapter. adaptive optimization and nonclearing labor markets. sluggish wage and price adjustments and adaptive optimization the labor market may not be cleared. We shall still follow the assumptions with respect to ownership.

which we assume to be the output. we shall follow all the speciﬁcations on price and wage setting as in presented the chapter 8. At+i ) 1 (9.1. The decision problem for the household to derive its desired demand and supply is very similar as in the last chapter and can be formulated as ∞ {cd . 9. At+i ) s ns t+i = Gn (kt+i .1) s s s kt+i+1 = (1 − δ)kt+i + f (kt+i . We deﬁne the household’s willingness as those demand and supply that can allow the household to obtain the maximum utility on the condition that these demand and supply can be realized at the given set of prices. this simpliﬁcation does not eﬀect our major result derived from our model. ns . ns ) t+i t+i (9. we shall ﬁrst discuss how in our model the period t prices are determined at the beginning of period t. .1. Therefore. We can express this as a sequence of ∞ s output demand and factor supply cd . At+i ) − cd t+i t+i (9. Meanwhile.1 As to the rental rate of capital rt . the wage rate wt and the rental rate of capital stock rt . a main focus in the recent new Keyensian literature. as in the model of the last chapter there are three commodities.4) For our simple representative agent model without money. Here again. For the given technology sequence {At+i }∞ . We can then ignore its setting. One of them should serve as a numeraire. the solution of optimization problem can be i=0 written as: s cd t+i = Gc (kt+i . including the output price pt .ns } t+i t+i subject to max ∞ i=0 Et i=0 β i U (cd .3) (9. have been set the household is going to express its desired demand and supply. the output price pt always equals 1. This indicates that the wage wt and the rental rate of capital stock rt are all measured in terms of the physical units of output. Note that here we have used the superscripts d and s to refer to the agent’s desired demand and supply. id .1 The Household’s Desired Transactions When the prices.2) All the notations have been deﬁned in the last chapter. Here. where it+i is t+i t+i t+i referred to investment. kt+i+1 i=0 . it is assumed to be adjustable and to clear the capital market. it will allow us to save the eﬀort to work on the nominal price determination. ns . including wages.172 of the given prices.2.

Instead. which shall always be 1 (since it serves as a numeraire). ∞ i=0 9. given the prices of output. We shall denote this perceived demand as yt .5) t subject to ∗ min(yt . ns t+i t+i d s d s d s s d s only (ct .1. yt ) nd = fn (rt . the solutions should satisfy d kt = fk (rt . for our representative ﬁrm. where it = f (kt . wt . the optimization problem can be expressed as ∗ d max min(yt . 2 The detail will be provided in the appendix of this chapter. This desired supply is the amount that allows the ﬁrm to obtain a maximum proﬁt on the assumption that all its output can be sold. yt ) = f (At . rt ). see the discussion above. At . ∗ Otherwise. we no longer assume that the product market is in perfect competition. kt ). if the expected ∗ demand yt is less than the ﬁrm’s desired supply yt . kt . At ) − ct and kt = kt . the ﬁrm will choose yt . Thus given the output price. the ﬁrm should also have its own desired supply yt .7) (9. and therefore it should face a perceived demand curve for its product. are actually carried by the household into the market for exchange due to our assumption of re-opening market. wt . yt ) to maximizes the current proﬁt. nt . On the other hand.8) where rt and wt are respectively the prices (in real term) of capital and labor.6) For the regular condition on the production function. we shall assume that our representative ﬁrm behaves as a monopolistic competitor. kt . Obviously. the ﬁrm has a perceived constraint on the market demand for its product. Let us ﬁrst consider the two factor markets. labor and capital stock ∗ (1. At . Thus.2 We are now considering the transactions in our three markets. wt . . t However in this chapter. yt ) t (9.173 We shall remark that although the solution appears to be a sequence cd . nt ) along with (it . it will simply follow the short side rule to choose yt as in the general New Keynesian model. nt ) (9.2 The Quantity Decisions of the Firm The problem of our representative ﬁrm in period t is to choose the current d s input demand and output supply (nd . yt ) − rt kt − wt nd (9.

if the produced output is still constrained by (9. 1).7) and (9. nt . At ) One remark should be added here.8)). capital and labor (see equation (9.4 The Transaction in the Product Market After the transactions in those two factor markets have been carried out. that is. The latter rule means that when disequilibrium occurs in the labor market both ﬁrms and workers have to compromise.1. we again formulate this rule as nt = ωnd + (1 − ω)ns t t (9. Therefore. in this chapter we shall only consider the compromising rule.11) indicates that the ﬁrm’s actual produced output is not necessarily constrained by equation (9.1. However. see the discussion in the last chapter. the most frequent rule that has been used is the short side rule.10) where ω ∈ (0. 9. As we have discussed in the last chapter. which is now given by s (9. ns ) t t Thus. Therefore. the output is constrained by demand.6). The result is the output supply. the Keynesian way of output determination is still reﬂected in the ﬁrm’s demand for inputs. there is no reason to believe that the labor market will be cleared. that is. only the short side of demand and supply will be realized. Our study in the last chapter indicates that the short side rule seems to be empirically less satisfying than the compromising rule.174 9.3 Transaction in the Factor Market Since the rental rate of capital stock rt is adjusted to clear the capital market when the market is re-opened in period t. On the other hand. the ﬁrm will engage in its production activity. nt = min(nd . we have s d kt = kt = kt (9. Another important rule that we have discussed in the last chapter is the compromising rule. Equation (9. when a disequilibrium occurs.9) Due to the monopolistic wage setting and the sluggish wage adjustment.6). and therefore one may argue that the output determination does follow eventually the Keynesian way. In particular. we shall again deﬁne a realization rule with regard to actual employment. one may .11) yt = f (kt .

3 Given that the output is determined by (9. and therefore resources are somewhat wasted.t. On the other hand. ns ) t+i (9. has to include economic growth.2. t+i t+i 1+γ i = 1.10) with ns and t nd are implied by (9. the transaction then needs s to be carried out with respect to yt .11) is less ∗ s ∗ than min(yt . s Above.14) 1 s (1 − δ)kt + f (kt . kt+1 = s kt+i+1 = d β i U (ct+i .11). t cd in (9. It is important to note here that when disequilibrium occurs in the labor market the previous consumption plan as expressed by (9.4) and (9. t 9. At+i ) − cd . yt ). yt ) there will be no suﬃcient inputs to produce ∗ s ∗ min(yt .15) should also be the realized consumption. the product market should be cleared if the household demands the amount f (kt . At .9) and nt is given by (9. nt ) t (9. the solution to this further step in the optimization problem can be written in terms of the following equation: cd = Gc2 (kt . nt .2) for deriving the plan. in order to make it empirically more realistic.S.12) (9.1 Estimation and Calibration for U...3) becomes invalid due to the improper rule of capital accumulation (9. . not all inputs will be used in production. yt ).8) respectively. yt ) or in terms of ineﬃciency when yt is larger than min(yt . Therefore. Therefore.15) Given this consumption plan. Economy The Empirically Testable Model This section provides an empirical study of our theoretical model above presented which again.2 9. ns . kt equals kt as expressed by (9. 3 s ∗ Note that here when yt < min(yt . .175 s encouter the diﬃculty either in terms of feasibility when yt in (9. the household will construct a new plan as expressed below: ∞ max Et (cd ) t i=0 s s. 2. nt . At ) − cd for investment.13) (9. yt ). when yt > min(yt . As we have demonstrated in t the last chapter.. At ) − cd t 1+γ 1 s s (1 − δ)kt+i + f (kt+i .

One possibility is to assume the expectation to be rational so that it is equal to the steady state of yt .3)α (9. Yt for output and Ct for consumption.4 4 Of course. one can also consider other forms of expectation. we shall assume that the utility function takes the form U (ct . 1+γ (9. we also have done the same empirical study.20) (9. α is the share of labor in the production function F (·) = At Kt1−α (Nt Xt )α .d. (9.21) where yt = Yt /Xt . Dividing both sides of equation (9. Note that the above formulation also indicates that the form of f (·) in the last section may take the form 1−α f (·) = At kt (nt N /0. we shall assume that the output expectation yt be simply equal to yt−1 . σǫ ). Assume the capital stock in the economy follows the transition law: Kt+1 = (1 − δ)Kt + At Kt1−α (Nt Xt )α − Ct . At is the temporary shock in technology and Xt the permanent shock that follows a growth rate γ. that is.17) where kt ≡ Kt /Xt . ct ≡ Ct /Xt and nt ≡ 0.) innovation: 2 ǫt ∼ N (0. so that the expectation is fully adaptive to the actual output in the last period. Finally.16) by Xt . Nt for per capita working hours.3)α − ct . we obtain kt+1 = 1 1−α (1 − δ)kt + At kt (nt N /0.3Nt /N with N to be the sample mean of Nt .i. (9. yt = yt−1 (9. . yet the result is less satisfying.19) where ǫt is an independently and identically distributed (i.16) where δ is the depreciation rate. Indeed. nt ) log ct + θ log(1 − nt ) The temporary shock At may follow an AR(1) process: At+1 = a0 + a1 At + ǫt .176 Let Kt denote for capital stock.18) With regard to the household preference.

(9.24) t On the other hand. t Proposition: When the capital market is cleared. δ.25) The computation of coeﬃcients g3 and G3j .177 9. 5 .3 αAt Zt 1−α ∗ ¯ kt if yt ≥ yt ˆ wt N The algorithm used here is again from Chapter 1 of this volume.3) and (9. and our model with monopolistic competition and nonclearing labor market.2. the Model IV. and therefore this equation can also be approximated by ct = G31 At + G32 kt + G33 nt + g3 (9. the data generating process include (9. 2. which shall now be augmented by the growth factor for our empirical test.23) as ns = G21 At + G22 kt + g2 (9. They are computed by the numerical algorithm using the linear-quadratic approximation method.22) and (9.23) are the linear approximations to (9. are the same as in Chapter 8.15) should be equal to ct .20) as well as ct = G11 At + G12 kt + g1 nt = G21 At + G22 kt + g2 (9. we shall call the benchmark model the Model I and the model with monopolistic competition the Model IV (in distinction from the Model II and Model III in Chapter 8). the equilibrium in product market indicates that cd in t (9. For the benchmark dynamic optimization model. as a benchmark for comparison. we shall ﬁrst modify (9. j = 1.5) .26) nt = 1 0. The coeﬃcients Gij and gi (i = 1.4).3 yt ˆ 1 ∗ ¯ if yt < yt ˆ At kt N d (9.17). the ﬁrm’s demand for labor can be expressed as 1/α (1−α)/α 0. Speciﬁcally. we consider two model variants: the standard model.among others. 3.5 To deﬁne the data generating process for our model with monopolistic competition and nonclearing labor market. (9. α.23) Note that here (9. β. the Model I.2 The Data Generating Process For our empirical assessment.8). The following proposition concerns the derivation of nd . Next we consider the demand for labor nd derived from the ﬁrm’s optit mization problem (9. 2 and j = 1.22) (9. 2) are the complicated functions of the model’s structural parameters.

4 Calibration Table 9.25). but also the demand shock among others.178 where ∗ yt = (αAt Zt /wt )α/(1−α) kt At (9.1) except for the ω. and the second to the condition otherwise.26) and (9.5203.27) Note that the ﬁrst nd in the above equation responds to the condition that t the expected demand is less than the ﬁrm’s desired supply.S. α. the grid search. We choose ω to be 0. .21) with wt given by the observed wage rate. economy.1. 7 Of course. We have followed the same rescaling procedure as we did in Chapter 8.24). where a one time simulation with the observed innovation At are presented. γ. For our purpose it suﬃces to take the empirically observed series of wages.17).10).2. a1 . All these parameters are essentially the same as we have employed in Chapter 8 (see Table 8.20). (9. β. 6 Note that there is a need of rescaling the wage series in the estimation of ω. θ and ω. (9. There are altogether 10 structural parameters in Model IV: a0 . (9. to include not only the technology shock. for this exercise one should still consider At .1 provides the result of our calibrations from 5000 stochastic simulations. The estimation is again executed by a conventional algorithm. 6 9. Note that here again we need a rescaling of the wage series in the estimation of ω.2. This result is further conﬁrmed by Figure 9. (9. The proof of this proposition is provided in the appendix to this chapter. which we set equal to 1.the observed Solow residual. the data generating process includes (9. σε . Here again we do not need to attempt to give the actually observed sequence of wages a further theoretical foundation. for Model IV. This rescaling is necessary because we do not exactly know the initial condition of Zt . we shall ﬁrst specify the structural parameters. δ. This is estimated according to our new model by minimizing the residual sum of square between actual employment and the model generated employment. (9. Thus. To calibrate the models. µ.7 All time series are detrended by the HP-ﬁlter. 9.3 The Data and the Parameters We here only employ time series data of the U.

179 Table 9.1: of the Model Variants (numbers in parentheses are the corresponding standard deviations) Consumption Standard Deviations Sample Economy Model I Economy Model IV Economy Correlation Coeﬃcients Sample Economy Consumption Capital Stock Employment Output Model I Economy Consumption Capital Stock Employment Output Model IV Economy Consumption Capital Stock Employment Output 0.0081 0.0091 (0.0012) 0.0071 (0.0015) Capital 0.0035 0.0036 (0.0007) 0.0058 (0.0018) Employment 0.0165 0.0051 (0.0006) 0.0237 (0.0084) Output 0.0156 0.0158 (0.0021) 0.0230 (0.0060)

1.0000 0.1741 0.4604 0.7550 1.0000 (0.0000) 0.2043 (0.1190) 0.9288 (0.0203) 0.9866 (0.0033) 1.0000 (0.0000) 0.3878 (0.1515) 0.4659 (0.1424) 0.8374 (0.0591)

1.0000 0.2861 0.0954

1.0000 0.7263

1.0000

1.0000 (0.0000) -0.1593 (0.0906) 0.0566 (0.1044)

1.0000 (0.0000) 0.9754 (0.0076)

1.0000 (0.0000)

1.0000 (0.0000) 0.0278 (0.1332) 0.0369 (0.0888)

1.0000 (0.0000) 0.8164 (0.1230)

1.0000 (0.0000)

9.2.5

The Labor Market Puzzle

Despite the bias towards Model I Economy, due to the selection of the structural parameters, we ﬁnd that the labor eﬀort is much more volatile than

180 in the Model I Economy the benchmark model. Indeed, comparing to the benchmark model, the Model I economy, the volatility of labor eﬀort in our Model IV economy has much been increased if anything the volatility of the labor eﬀort is too high. This result is, however, not surprising since the agents face two constraints – one in the labor market and one in the product market. Also the excessive correlation between labor and consumption has been weakened. Further evidence on the better ﬁt of our Model IV economy — as concerns the volatility of the macroeconomic variables — is also demonstrated in the Figure 9.1 where the horizontal ﬁgures show, from top to bottom, actual (solid line) and simulated data (dotted line) for consumption, capital stock, employment and output. The two columns of ﬁgures, from the left to the right, represent the ﬁgures for Model I and Model IV economies respectively. As observable, the employment series in Model IV economy can ﬁt the data better than the Model I economy. This resolution to the labor market puzzle should not be surprising because we specify the structure of labor market essentially the same way as in the last chapter. However, in addition to the labor market disequilibrium as speciﬁed in the last chapter, we also allow in this chapter for monopolistic competition in the product market. In addition to impacting the volatility of labor eﬀort, this may provide the possibility to resolve the another puzzle, that is, the technology puzzle also arising in the market clearing RBC model.

181

Figure 9.1: Simulated Economy versus Sample Economy: U.S. Case (solid line for sample economy, dotted line for simulated economy)

9.2.6

The Technology Puzzle

In economic literature, one often discusses the technology in terms of its persistent and temporary eﬀects on the economy. One possibility to investigate the persistent eﬀect in our models here is to look at the steady states. Given that at the steady state all the markets will be cleared, our Model IV economy should have the same steady state as in the benchmark model. For the convenience of our discussion, we rewrite these steady states in the following

**182 equations (see the proof of Proposition 4 in Chapter 4): n = αφ/ [(α + θ)φ − (δ + γ)θ] k = A
**

1/α −1/α

φ

n N /0.3

c = (φ − δ − γ)k y = φk where φ = [(1 + γ) − β(1 − δ)] /β(1 − α) From the above equation, one ﬁnds that technology has the positive persistent eﬀect on output, consumption and capital stock,8 yet zero eﬀect on employment. Next, we shall look at the temporary eﬀect of the technology shock. Table 9.2 records the cross correlation of the temporary shock At from our 5000 thousand stochastic simulation. As one can ﬁnd there, the two models predicts rather diﬀerent correlations. In the Model I (RBC) economy, technology At has a temporary eﬀect not only on consumption and output, but also on employment, which are all strongly positive. Yet in our Model IV Economy with monopolistic competition and nonclearing labor market, we ﬁnd that the correlation is much weaker with respect to employment. This is consistent with the widely discussed recent ﬁnding that technology has near-zero (and even negative) eﬀect on employment. Table 9.2: The Correlation Coeﬃcients of Temporary Shock in Technology. output consumption employment capital stock 0.9903 0.9722 0.9966 -0.0255 (0.0031) (0.0084) (0.0013) (0.1077) 0.8397 0.8510 0.4137 -0.1264 (0.0512) (0.0507) (0.1862) (0.1390)

Model I Economy Model IV Economy

At the given expected market demand, an improvement in technology (reﬂected as an increase in labor productivity) will reduce the demand for labor, if the ﬁrm follows the Keynesian way of output determination, that is, the output is determined by demand. In this case, less labor is required to produce the given amount of output. Technical progress, therefore, may

8

This long run eﬀect of technology is also revealed by recent time series studies in the context of a variety of endogenous growth models, see Greiner, Semmler and Gong (2004).

which should be increased with the improvement in technology. Yet we have also introduced a nonclearing labor market. resulting in an updating scheme of prices and wages where only a fraction of prices and wages are optimally set each time period. where technology shocks and employment are predicted to be positively correlated. only a weak eﬀect on employment in the short run . as in New Keynesian economics. . The proposition in this chapter which shows the ﬁrms’ constraint in the product market. This stylist fact cannot be explained in the RBC framework since at the given wage rate.S. resulting from a multiple stage decision problem. Then noncleared labor market was derived from a multiple stage decision process of households where we have neglected that ﬁrms may also be demand constrained on the product market. This result was obtained in an economy with monopolistic competition. 9. the demand for labor is simply determined by the marginal product. explains this additional complication that can arise due to the interaction of the labor market and the product market constraints. We could show that such a model matches better time series data of the U. we have shown how households may be constrained in the product market in buying consumption goods by the ﬁrms actual demand for labor.3 Conclusions In the last chapter.183 have an adverse eﬀect on employment at least in the short run. the technology puzzle explored in standard market clearing models would disappear. economy.a phenomenon inconsistent with equilibrium business cycle models. This chapter thus demonstrates that if we follow the Keynesian way of quantity determination in a monopolistic competition model. where prices and wages are set by a monopolistic supplier and are sticky. where then the households’ constraint on the labor market spills over to the product market and the ﬁrms constraint on the product market generates employment constraints. namely that positive technology shocks may have. We have then shown in this chapter how the ﬁrms’ constraints on the product market may explain the technology puzzle.

Given ∗ this nd .30) −α 1−α Htd α α Htd = rt = wt (9.31) . In this case. the ﬁrm’s desire to supply yt can be expressed as t ∗ 1−α yt = At kt (nd N /0. We shall assume that Lt has a constant growth rate µ and hence Zt follows the growth rate (γ − µ). The production function can be written as Yt = At Ztα Kt1−α Htα .184 9.3)α t α 1−α = At kt αAt Zt wt (9. and then reorganizing. under the condition that the rental rate of capital rt clears the capital market while the wage rate wt is given. which is indeed the ﬁrst equation in (9.4 Appendix: Proof of the Proposition Let Xt = Zt Lt . we obtain 1 0. Yt∗ = Xt yt .26).29): Htd = αAt wt 1 1−α (Zt ) 1−α Kt α Dividing both sides of the above equation by Xt . where Ht equals Nt Lt and can be regarded as total labor hours. and therefore the demand for labor can be derived from (9.28) (9.29) 1−α Htd α−1 Since the rental rate of capital rt is assumed to clear the capital market. the ﬁrm’s optimization problem can be expressed as max Yt∗ − rt Ktd − wt Htd subject to Yt∗ = At (Zt )α Ktd The ﬁrst order condition tell us that (1 − α)At (Zt )α Ktd αAt (Zt )α Ktd from which we can further obtain rt = wt 1−α α Htd Ktd (9. we can thus replace Ktd in the above equations by Kt . ∗ Let us ﬁrst consider the ﬁrm’s willingness to supply Yt∗ .3 αAt Zt 1−α d nt = ¯ kt wt N We shall regard this labor demand as the demand when the ﬁrm desired activities are carried out. Since wt is given. and Lt from population growth. with Zt to be the permanent shock resulting purely from productivity growth.

yt < yt where yt is given ˆ by (9. Using equation (9.31).32) The ﬁrst-order condition will still allows us to obtain (9. we consider the case that the ﬁrm’s supply is constrained by the ∗ ∗ ˆ ˆ expanded demand Yt . the ﬁrm’s proﬁt maximization problem is equivalent to the following minimization problem: min rt Ktd + wt Htd subject to ˆ Yt = At (Zt )α Ktd 1−α Htd α (9.34) for explaining rt .26).Substituting it into (9. Yt = Xt yt . we can replace d kt in (9.32) and (9.34) Since the real rental of capital rt will clear the capital market.185 This is the equation (9. we obtain the demand for capital Ktd and labor Htd as ˆ Yt At Ztα ˆ Yt At Ztα wt rt wt rt 1−α α α 1−α α Ktd = Htd = 1−α Dividing both sides of the above two equations by Xt . In this case.3yt At N wt r t Zt r t Zt wt 1−α α α 1−α α d kt = (9.33) by kt .30). we obtain yt At 0.30). Next.33) 1−α nd = t (9. we obtain nd t = 0.3 N yt At 1/α 1 kt (1−α)/α This is the second equation in (9.27) as expressed in the proposition. . In other words.

only to be a simple and starting point for macrodynamic analysis. in RBC economy. some Keynesian features should be introduced. the labor market puzzle and the technology puzzle. we consider the current standard model of model. For the model to explain the real world more eﬀectively. we try to contribute to the current research in stochastic dynamic macroeconomics. the real business cycle model.Chapter 10 Conclusions In this book. We have shown that with such an introduction the model can be enriched while it becomes possible to resolve the most important puzzles. 186 . We recognize that the stochastic dynamic optimization model is important in macroeconomics.

and R. R.-P. [8] Benhabib. and A. J.” Econometrica 22. 35: 303-315. (2002): ”The Macroeconomics of Imperfect Competition and Nonclearing Markets”.1: 167-186. K. [6] Benassy. no. Farmer (1999): ”Indeterminacy and Sunspots in Macroeconomics”.” NBER Working Paper Series 5915. 596-625 [2] Arrow. S.”Leverage. S.. Econometrica. S. and F. (1954): ”Existence of an Equilibrium for a Competitive Economy. American Economic Review. (1957): Dynamic Programming. [5] Benassy. and Debreu. J. New York. 27. and M. [4] Bellman. [10] Benninga. Cambridge: MIT-Press. J. NJ: Princeton University Press. vol. 187 . G.-P.” Journal of Monetary Economics 25. (1995): ”Money and Wage Contract in an Optimizing Model of the Business Cycle”.Bibliography [1] Adelman. Cambridge. Schmidt-Grohe and M. eds. Protopapadakis (1990). Time Preference and the Equity Premium Puzzle. Farmer (1994): ”Indeterminacy and Increasing Returns”. vol. [7] Benhabib. MA. Journal of Economic Theory 63: 19-41. L. North-Holland. Adelman (1959): ”The Dynamic Properties of the Klein-Goldberger Model”. Handbook for Macroeconomics. 265-290. Taylor and M. 49-58. 1A: 387-448 [9] Benhabib. vol. [3] Basu. Uribe (2001): ”Monetary Policy and Multiple Equilibria”. Kimball (1997): ”Cyclical Productivity with Unobserved Input Variation. Woodford. S. J. J. J. J. vol. and R. Princeton. I. 91. Journal of Monetary Economics.

and R. M. (1995) ”The Sustainability of Budget Deﬁcits in a stochastic Economy”. Fisher (1996). 209-217. L. Credit and Banking. and J. Woodford. [19] Boldrin. Princeton Unviersity Press. J. O. no. [17] Bohachevsky. Economic Journal 110:C1-C33. L. vol. edited by P. issues 5-6: 251-280. Optimal Control Applications and Methods. R. (1979) ”An integration of stochastic growth theory and theory of ﬁnance. [18] Bohn. Econometrica.BIBLIOGRAPHY 188 [11] Bennett. [20] Boldrin. Johnson and M. J. MIT-Press [15] Blanchard. Green and J. vol. [16] Blanchard. American Economic Review. E. 47(3):727732. T. Journal of Economic Theory 4: 479-513.).R. Journal of Economic Theory 93: 118-143. ” Macroeconomic Lessons for Asset Pricing”. 1:149-166. L. Stiglitz and M. I.. part I: the growth model”. O. 1:257271. L. Christiano. Information. H. [13] Beyn. ”Generalized Simulated Annealing for Function Optimization. Semmler (2001): ”Dynamic Optimization and Skiba Sets in Economic Examples”. W. Journal of Money. O. [22] Brock. 5262.A. M. 22. Vol. . and J. 27. Fischer (1989): ”Lectures on Macroeconomics”. NBER working paper no. and S.” Technometrics. [12] Benveniste and Scheinkman (1979): ”On the Diﬀerentiability of the Value Function in Dynamic Economics. Princeton: 351-356. (2003): ”Comments on Jjungqvist and Sargent” in: Knowledge. and Mirman (1972)”Optimal Economic Growth and Uncertainty: The Discounted Case”. Academic Press: 165-190. M. Farmer (2000): ”Indeterminacy with Nonseparable Utility”.. Pampel. Wolfers (2000): ”The Role of Shocks and Institutions in the Rise of Unemployment: The Aggregate Evidence”. ”Habit Persistence. O. 91.. Schenkman (eds. in: J. Christiano. Fisher (2001). Aghion. Stein (1986). and J. and W. Asset Returns and the Business Cycle”. [21] Brock. vol. New York. vol. W. And Expectations in Modern Macroceconomics”. Cambridge. 28..E. [14] Blanchard. Frydman. W.

1996.BIBLIOGRAPHY 189 [23] Brock (1982) ”Asset Pricing in a Production Economy”. (1983)”Staggered Contracts in a Utility Maximization Framework”. New York: Oxford University Press. C. G. The Economies of Information and Uncertainty. Mordukhovic. [28] Campbell. [32] Chow. (1997): Dynamic Economics: Optimization by the Lagrange Method. C. (1983): ”On a Discrete Approximation of the Hamilton-Jacobi-Bellman Equation of Dynamic Programming”. New York: MacGraw-Hill. . Journal of Economic Dynamics and Control 17.”Econometrics”.A.. Vol. F. McCall. Chicago. Eichenbaum and S. Camilli and M. Journal of Political Economy. Optim. (1994). [25] Burnside. M.J. Mitchell (1946): Measuring Business Cycles. [34] Chow.. Journal of Monetary Economics.. . [30] Capuzzo-Dolcetta. T. A. (1993): ”Statistical Estimation and Testing of a Real Business Cycle Model. 463-506. Research Memorandum. ”Approximation of Ooptimal Control Problems with State Constraints: Estimates and Applications”. G. [29] F. S. Sussman eds. Falcone (1995). European Economic Review. C. [33] Chow. G. M. ”Nonsmooth analysis and geometric methods in deterministic optimal control”. IMA Volumes in Applied Mathematics 78. 365. C. T. Springer Verlag. Math. ”Inspecting the Mechanism: An Analytical Approach to the Stochastic Growth Model”. (1983). J. New York: NBER. vol. 40: 861-869. C. Rebelo (1993): ”Labor Hoarding and the Business Cycle”. C. no. [27] Calvo.101:245-273. H. 23-57. Princeton: Princeton University. [26] Burnside. and W. Journal of Monetary Economics 33.S. 621-630. by J. [31] Chow. [24] Burns. G. Inc. Appl. (1993): Optimum Control without Solving the Bellman Equation. I. G. ed. Rebelo (1996): ”Sectoral Solow Residual”.” Econometric Research Program. in B. Eichenbaum and S. C. 10: 367-377. University of Chicago Press: 165-192. vol 12: 383-398.A.J.

(1998): How the Basic RBC Model Fails to Explain U. L. Princeton: Princeton University Press [43] Corana. 380. [39] Christiano. J. Y. [44] Danthine. Evans (2001). . Cooly (ed). ”Adaptive Economizing and Sustainable Living: Optimally. and E. Time Series. and J. June. [45] Danthine. (1987): Technical Appendix to “Why Does Inventory Investment Fluctuate So Much?” Research Department Working Paper No. J. [41] Cochrane (2001): ”Asset Pricing”. J.P. [40] Christiano. L. vol. Jounal of Monetary Economics 41. Donaldson (1990): ”Eﬃciency Wages and the Business Cycle Puzzle”. K. M. (1987): Why Does Inventory Fluctuate So Much? Journal of Monetary Economics. T.P. J. T. University of Bielefeld. Suboptimally and Pessimality in the One Sector Growth Model”. J. vol. “Nominal Rigidities and the Dynamic Eﬀects of a Stock to Monetary Policy.” in T. vol. mimeo. Princeton:Princeton University Press. [42] Cooley. Prescott (1995): ”Economic Growth and Business Cycles”. Eichenbaum (1992): ”Current Real Business Cycle Theories and Aggregate Labor Market Fluctuation.S. Frontiers in Business Cycle Research. European Economic Review 34: 1275-1301. [38] Christiano. G. 262-80. ”Minimizing Multimodal Functions of Continuous Variables with the Simulating Annealing Algorithm. 13. and M. Martini. and R. and S. [37] Christiano. A.BIBLIOGRAPHY 190 [35] Chow.” American Economic Review. H. and Kwan...F. Ridella (1987). 431-472. Frontiers of Business Cycle Research. Princeton: Princeton University Press. Federal Reserve Bank of Minneapolis. in Cooley. [46] Dawid. and J. B. (1988): ”Why Does Inventory Fluctuate So Much?”.J.. C. Day (2003). L.” ACM Transactions on Mathematical Software. C. L. [36] Christiano.B. Journal of Monetary Economics. 21: 247-80. 21: 247-80. ed. Donaldson (1995): ”Non-Walrian Economies. 308-318. M. J. L. Eichenbaum and C.

(1991): ”Real Business Cycle Theory: Wisdom or Whimsy?” Journal of Economic Dynamics and Control.. (1992): ”Productivity Shock and Real Business Cycles”. 8: 31-34. Stroz (1963). Impacts on Monetary Policy. 29. (1984): Speciﬁcation. Math. 174. Hansen and K. New York: Wiley. Taylor (1983): Solution and Maximum Likelihood Estimation of Dynamic Nonlinear Rational Expectation Models. Appl. Frankfurt. “Determinants of Business Investment. ”Dynamic Equilibrium Economies: A Framework for Comparing Model and Data”.. Technical Working Paper No. 46: 281 . Journal of Business and Economic Statistics. (1987) ”A Numerical Approach to the Inﬁnite Horizon Problem of Determinstic Control Theory”. ”Quantifying the Impact Structural Reforms”. Estimation. Levin (2000). C. C. Ohanian and J. National Bureau of Economic Research. [53] Erceg. and R.” Quarterly Journal of Economics. Singleton (1988): ”A Time Series Analysis of Representative Agent Models of Consumption and Leisure Under Uncertainty. M. . [55] Evans. Cambridge. [56] Fair. [57] Fair. W. W. M. G. 1169-1185. L. Vol. [48] Debreu. 15: 1-13. Henderson and A. R. and Analysis of Macroeconometric Models. J. p191-208. Country Decision (2004). and J. D. Vol. Berkowitz (1995). [50] Eichenbaum. C. R. [51] Eichenbaum. 607626. European Central Bank. [54] European Central Bank Report. M. T. 15. MA: Harvard University Press. R. ”Optimal Monetary Policy with Staggered Wage and Price Contracts”. Prentice Hall.L.BIBLIOGRAPHY 191 [47] den Haan. 51-78 [52] Eisner. Optim.X.E. (1959): Theory of Value. C. [49] Diebold F. vol. [58] Falcone. Journal of Monetary Economics. Journal of Monetary Economics. B. Marcet (1990): ”Solving the Stochastic Growth Model by Parameterizing Expectations”.313. Econometrica. 21(4). and A.

). Bielefeld University. (1999): Technology. Pau eds. book manuscript. and V. [66] Gong.BIBLIOGRAPHY 192 [59] Farmer (1999) ”Macroeconomics with Self-Fulﬁlling Expectations”.. F.” in H. Rubart (2001): ”Economic Growth in the U. D. Leske und Budrich. Greiner. [64] Goﬀe. Vol. Semmler and J. Bielefeld University. Kort and F. Real Business Cycles with disequilibirum in the Labor Market: A Comparison of the US and German Economies. Employment. ”Global Optimization of Statistical Function. and W. mimeo. University of Technology. [63] Gali. and W. Cambridge. p249-271. and the Business Cycle: Do Technology Shocks Explain Aggregate Fluctuation? American Economic Review.. University of California. Dordrecht: Kluwer. L. M. 89. [62] Francis. “The Dynamics of a Simple Relative Adjustment-Cost Framework. G. MIT Press. A. G. N. Rogers (1992). Gabriel and M. [60] Feichtinger. in: Okonomie als Grundlage politischer Entscheidungen.H. Hartl. P. [65] Gong. Semmler: ”Stochastic Dynamic Macroeconomics: Theory.. Ferrier and J. Numerics and Empirical Evidence”. Bielefeld University. Center for Empirical Macroeconomics. Working Paper. Wirl (2000). and Inventions”. [67] Gong. G. Semmler (2001). Vienna. N. and W. [61] Francis. Neugart (eds. Center for Empirical Macroeconomics. Opladen. Amman. Ramey (2001): ”Is the Technology-Driven Real Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations Revisited”. F. G. vol. G. W. A. Computational Economics and Econometrics.S. University of California. W.A. San Diego. Center for Empirical Macreconomics. J. Human Capi¨ tal. [68] Gong. Semmler (2001): Dynamic Programming with Lagrangian Multiplier: an Improvement over Chow’s Approximation Method. G. working paper. Ramey (2003): ”The Source of Historical Economic Fluctuations: An Analysis using Long-Run Restrictions”. and Europe: the Role of Knowledge. 1. Belsley and L. .A. and V. San Diego.

Semmler (2003): ”Economic Growth. Asset Pricing u and Debt Control”.921-947. [79] Hall. J. Submitted. forthcoming Journal of Economic Dynamics and Control. L. Gong (2003): ”The Forces of Economic Growth: A Time Series Perspective”. Math. Economic Theory. Gong (2004)”Forces of Economic Growth . and Europe”. Semmler (2004c) ”Default Risk. Semmler and M. [77] Gr¨ne. and W. A. Princeton University Press. L. (1997) ”An Adaptive Grid Scheme for the Discrete Hamiltonu Jacobi-Bellman Equation”. no. ”Creditworthiness u and Threshold in a Credit Market Model with Multiple Equilibria”. Princeton: Princeton University Press. Numer. [73] L. Semmler (2004b) ”Solving Asset Pricing Models with u Stochastic Dynamic Programming”.uni-bayreuth. 25.. Vol. p. Gr¨ne (2003).BIBLIOGRAPHY 193 [69] Greiner. L.. and W. 2: 287-315. Semmler (2004a): ”Using Dynamic Programming u for Solving Dynamic Models in Economics”.de/departments/math/∼lgruene/papers/. Semmler and G. (1988): ”The Relation between Price and Marginal Cost in U. forthcoming. [75] Gr¨ne. A Model and Estimations for the U.S. http://www. Errorr estimation and adaptive discretizau tion for the discrete stochastic Hamilton–Jacobi–Bellman equation. Princeton: Princeton University Press. CEM Bielefeld.. ”Asset Pricing . Semmler and G. [70] Greiner. working paper. [80] Hamilton. Journal of Political Economy. [78] Gr¨ne. [76] Gr¨ne. Semmler (2004d). University of Bayreuth. 28: 2427-2456. 2004/05. L. forthcoming Journal of Financial Econometrics. E. working paper. Rubart and W. . A. [72] Gr¨ne. W. L. vol.Constrained by u Past Consumption Decisions”. 75: 1288-1314. W.. W. 96. (1994).A Time Series Perspective”. Preprint.. Industry”. Skill-biased Technical Change and Wage Inequality. J. A. ”Time Series Analysis”. [71] Greiner. and W. working paper. forthcoming: Princeton. R. D. Sieveking (2004). and W. CEM Bielefeld. L. [74] Gr¨ne. forthcoming Journal of Macroeconomics.S. CEM Bielefeld.

(1988): ”Technical Progress and Aggregate Fluctuations”. Singleton (1982): ”Generalized Instrument Variables Estimation of Nonlinear Rational Expectations Models. in: Knowledge. Vol. H. 5. Journal of Monetary Economies 41: 257-275. (1996) ”Approximation.” Journal of Monetary Economics. Carnegie-Mellon University. E.G. (1982): ”Large Sample Properties of Generalized Methods of Moments Estimators. Uhlig (2001).. G. [87] Hayashi. [82] Hansen. edited by P. Elsevier: 511-585. 50. Pittsburgh. [83] Hansen. L. no.S. and K. (2001) ”Indeterminacy with Sector-speciﬁc Externalities”.A. [84] Hansen. vol. London. (1998). (2003): ”Flexibility and Creation: Job Lessons from the German Experience”.Pertubation. R. G. Kendrick and J. [92] Jerman. working paper. Rust. Handbook of Computational Economics. Los Angeles. University of California. PA. J.16. P. McMillan... J. Aghion. ”Asset Pricing in Pproduction Economies”. Econometrica 50: 213-224. 309-327. and E. Stiglitz and M. Princeton Unviersity Press. 1268-1286. [85] Hansen. S. no. [89] Hicks. “Tobin’s Marginal q and Average q: A Neoclassical Interpretation. S. J. and Projection Methods in Economic Analysis”. L. L. p. Princeton: 357393. [86] Harrison. [91] Hornstein. and H. H. .R. Frydman. And Expectations in Modern Macroceconomics”. H.M. J. 1029-1054.921-947. Business Cycle: an Empirical Investigation. vol. Prescott (1980): Post-war U.. J.BIBLIOGRAPHY 194 [81] Hall. 50. R. Journal of Political Economy. R. eds. K.” Econometrica. U.J. Working Paper. P. (1985): ”Indivisible Labor and Business Cycles. A. Chapter 12 in: Amman. C. Industry”. D. ”What is the Real Story for Interest Rate Volatility?” German Economic Review 1(1): 43-67. (1982). Information. [93] Judd. 96. (1963): ”The Theory of Wages”. Woodford. Journal of Economic Dynamics and Control. [88] Heckman. (1988): ”The Relation between Price and Marginal Cost in U. vol. F. [90] Hodrick. 25: 747-76. 4.” Econometrica.

L. J.” Journal of Monetary Economics. 21. Wolman (1999): ” What should the Monetary Authority do when Prices are sticky?”. [95] Judge. I. Rebelo (1999): ”Resusciting Real Business Cycles. (1996): DYNARE: A Program for the Resolution and Simulation of Dynamic Models with Forward Variables through the Use of a Relaxation Algorithm. and S. (1998): Numerical Methods in Economics. Plosser. Plosser (1994): ”Real Business Cycles and the Test of the Adelmans”. edited by J. 309-341. and S. R. E. T.. [103] King. [97] Kendrick. [98] Keynes. K. T. 33. (1936) ”The General Theory of Employment. G. Paris. [100] Kim. Rebelo (1988a): ”Production. New York. B. in: J. G. Griﬃths.G. Hill and T. Journal of Monetary Economics. G. (2003) ”Indeterminacy and Investment and Adjustment Costs: An Analytical Result”. Cambridge.. Lee (1985).” in Handbook of Macroeconomics. ”Production. (2004) ”Does Utility Curvature Matter for Indetermincy?”. Interest and Money”. [102] King. J. J. Macroeconomic Dynamics 7: 394-406. Taylor (ed.. [99] Kim. D. Journal of Economic Behavior and Organization. Plosser. vol. Growth and Business Cycles II: New Directions. [105] King. C. L. and S. Taylor and M.M. 405438. Chicago: The University of Chicago Press. MA: MIT Press. [101] King. 195-232. and A. 2nd edition. G. Elsevier Science. W. Woodford. 21. Growth and Business Cycles I: the Basic Neo-classical Model. C. vol. France. NY: McGraw-Hill Book Company. . I.” Journal of Monetary Economics. I. T. Volume I. ”The Theory and Practice of Econometrics”. R. Rebelo (1988b).” CEPREMAP Working Paper. 9602. G. No. R. [96] Juillard M. forthcoming. [104] King. R. R. New York: Wiley. C. C.BIBLIOGRAPHY 195 [94] Judd. R.) Monetary Policy Rules. MacMillan. London. G. and C. (1981): Stochastic Control for Economic Models.

M. Econometrica 46: 1429-1446. T. Princeton Unviersity Press. Sargent (1998): ”The European Unemployment Dilemma”. Cambridge. 739-752. L. And Expectations in Modern Macroceconomics”. J. R. [107] Kydland. MA: The MIT Press. I.. [111] Ljungqvist. 1678 [110] Lettau. [108] Lettau. M. Carnegie-Rochester Conference Series on Public Policy. M. 50. Journal of Political Economy. in: Knowledge. E. Uhlig (1999): ”Volatility Bounds and Preferences: An Analytical Approach.” revised from CEPR Discussion Paper No. R. J. F. [117] Lucas. 1345-1370. Gong and W. (1999): ”Inspecting the Mechanism: The Determination of Asset Prices in the Real Business Cycle Model. K. Prescott (1982). Frydman. . and T. 39-69. and E. [116] Lucas. 19-46. Information. E. (1976): Econometric Policy Evaluation: A Critique. [112] Ljungqvist. G. L. J. L. Semmler (2001): Statistical Estimation and Moment Evaluation of a Stochastic Growth Model with Asset Market Restriction. B. and C. edited by P.3: 514-550. and T. Journal of Economic Dynamics and Control 21. and Sargent. Journal of Political Economy. (2000): Recursive Macroeconomics. Y. C. 1. [114] Long. [115] Lucas. no. R. vol. Chow (1997): Chow’s Method of Optimum Control: A Numerical Solution. vol. 91. Journal of Political Economiy 75: 321-334. 85-103.” CEPR working paper No. vol. 44. and H. Journal of Economics Behavior and Organization.BIBLIOGRAPHY 196 [106] Kwan. Aghion. Woodford. (1978) ”Asset Prices in an Exchange Economy”. 1834 [109] Lettau. (1967): “Adjustment Costs and the Theory of Supply. Sargent (2003): ”European Unemployment: From a Worker’s Perspective”. Stiglitz and M. R. Princeton: 326-350. Econometrica. 106. vol. ”Time to Build and Aggregate Fluctuation”. Plosser (1983): Real Business Cycles. [113] Ljungqvist. and G. F.

G.. S.M. 1645-1660. Stiglitz and M. N. G. Inc. Journal of Economic Perspectives. New York: John Wiley & Sons.J. R. R. 79-90. (1989): ”Real Business Cycles: A New Keynesian Perspective”. in: Knowledge. M. (1953). M. E. L. [128] Nickell. Information. New York. [123] Mehra and Prescott (1985). Rosenbluth.W. A. (1997): ”Unemployment and Labor Market Rigidities – Europe versus North Maerica”. Cambridge.. And Expectations in Modern Macroceconomics”. by A. (1994): ”Diagnosing Unemployment”. Prescott (1971):”Investment under Uncertainty”. S. Journal of Monetary Economics 15: 145-161. N. [129] Nickell. [125] Metropolis. (1990): A Quick Refresher Course in Macroeconomics. ”The Equity Premium Puzzle”. (1968) ”What Can We Learn from European Experience. M. ed. in Unemployment and the American Economy?”. [121] Mankiw. R. 43: 91-124. 55-74. Ross. Rosenbluth. 39 (5):659ﬀ. . Vol. Journal of Economic Literature. Nunziata. R. Unemployment. Teller. Inc. and Wages in the OECD from the 1960s to the 1990s”. New York: John Wiley & Sons. by A.J. [127] Meyers. and Scott. E. W. [126] Meyers. 21. 27. and E.BIBLIOGRAPHY 197 [118] Lucas. (1964): ”What can we learn from European Experience. N. Frydman. Princeton. (1999): Heterogenous Job-Matches and the Cyclical Behavior of Labor Turnover”. Woodford.N. [124] Merz. ”Equation of State Calculation by Fast Computing Machines. Aghion. edited by P. NY: Oxford University Press. R.M. J..” The Journal of Chemical Physics. Journal of Monetary Economics. Cambridge University Press. OECD Statistical Compendium. ed. vol.C. Quintini (2003): ”The Beveridge Curve. Vol. 1087-1092. (1999): Computational Methods for the Study of Dynamic Economies. A. [120] Mankiw. Ross. no. Princeton Unviersity Press. and Teller. [130] OECD (1998a): ”Business Sector Data Base”. [119] Malinvaud. 3. [122] Marimon. Ochel and G. 6. A. vol.. Econometrica. in Unemployment and the American Economy?”. Journal of Economic Perspectives 3.

M. and M.P. (1996). vol. C. Taylor (ed. and M. Cooley (ed). [137] Reiter. E. Economic Journal 28. Elsevier. Journal of Political Economy. 723-737. 3. H. 9-22. and J. [143] Santos. Handbook of Computational Economics. (1982) ”Sticky Prices in the United States”. in: Amman.A. J. [135] Prescott. 4. Cambridge: MIT-Press. I.S. Vigo-Aguiar (1998): Analysis of a Numerical Dynamic Programming Algorithm Applied to Economic Models. Chicago: the University of Chicago Press. 51-77.M.BIBLIOGRAPHY 198 [131] OECD (1998b): ”General Economic Problems. [134] Plosser. and G.” in T. [133] Phelps. 66(2): 409-426 . eds. 3. C. in: J. 620–729.) Monetary Policy Rules. Kendrick and J. 543-559.. Econometrica. 10. [140] Rotemberg. E. Zoega (1998): ”Natural Rate Theory and OECD Unemployment”. [141] Rotemberg. (1997): Chow’s Method of Optimum Control. no. Woodford (1999): ”Interest Rate Rules in an Estimated Sticky Price Model”. J. Federal Reserve Bank of Minneapolis. Economic Journal. Vigo-Aguiar (1995): [144] Santos. 90: 1187-1211. M.F. (1928): ”A Mathematical Theory of Saving”. ”Numerical Dynamic Programming in Economics”. (1986): ”Theory ahead of Business Cycle Measurement. J.” OECD Economic Outlook. (1996) [138] Reiter. (1989): ”Understanding Real Business Cycles. Contry Speciﬁc Series [132] Phelps. Rust. Princeton:Princeton University Press. M.” Quarterly Review. E.S. Frontiers of Business Cycle Research. [142] Rust. 108 (May): 782-801. no. [139] Rotemberg.. vol. pp. M. vol. Journal of Economic Dynamics and Control 21.” Journal of Economic Perspectives. D. J. (1997): Rewarding Work. [136] Ramsey. F. and J. Woodford (1995): ”Dynamic General Equilibrium Models with Imperfectly Competitive Product Markets.

B. [157] Uhlig. [150] Solow. . E. [153] Summers. Cambridge: Harvard University Press. Journal of Political Economy. T. (2001): Endogenous Business Cycles and the Dynamics of Output. Federal Reserve Bank of Minneapolis Quarterly Review. in R.” Journal of Monetary Economics. edited by J. American Economic Review. (1990): Solving Nonlinear Stochastic Growth Models: A Comparison of Alternative Solution Methods.24.. vol. Fachserie 18. (1986): ”Some Skeptical Observations on Real Business Cycles Theory”. (1999): ”Staggered Price and Wage Setting in Macroeconomics. Lucas and E. Econometrica 46 (May): 527-539. H. P. J. [155] Taylor. Princeton. no. 10. “Optimal Growth with a Convex-Concave Production Function. Marimon and A. A. S. B. Volume I.23-27. C. 33. J. R. S. N. vol.: Computational Methods for the Study of Dynamic Economies. 381-404. Vol. (1979): ”Another Possible Source of Wage Stickiness”. Vol. 361-386. Hours and Consumption. 8. Elsevier Science. vol. R. L. Princeton University Press. 21. H. (1999): Contested Inﬂation.BIBLIOGRAPHY 199 [145] Sargent. [146] Schmidt-Grohe. 1: 79-82 [151] Statistisches Bundesamt (1998). (1999): A Toolkit for Analysing Nonlinear Dynamic Stochastic Models Easily. New York: Oxford University Press. Tayor and M. [148] Singleton. H. (1978). J. [154] Taylor. Journal of Business and Economic Statistics. (1988): ”Econometric Issues in the Analysis of Equilibrium Business Cycle Model. Woodford. vol 90.” in Handbook of Macroeconomics. K. K. Statistisches Bundesamt Wiesbaden. [152] Stockey. (1994): ”Do Real Business Cycle Models Really Exhibit Business Cycle Behavior?” Journal of Monetary Economics. L. Journal of Macroeconomics. [149] Skiba. [156] Taylor. 5. (1980): Aggregate Dynamics and Staggered Contracts. 88: 1 . 1136-1159. [147] Simkins. B. and Uhlig. 1-17. B. Scott ed. Prescott (1989): ”Recursive Methods in Economics”. p.

M. Bergen (2000): The Managerial and Customer Costs of Price Adjustment: Direct Evidence from Industrial Markets. (2003): ”Interest and Prices”. ”A Monte Carlo Simulated Annealing Approach to Optimization over Continuous Variables. (1993). [164] Woodford. [160] Vanderbilt. CEM. [162] Watson. and Y.BIBLIOGRAPHY 200 [158] Uhlig. H. 101. 56. working paper. H. Levy. University of California. Macroeconomic Dynamics 2004/05 . L. Princeton. Semmler and M. M.E.W. ”Nonparameto ric Estimation of Time-Varying Characteristics of Intertemporal Asset Pricing Models”. forthcoming. W. [165] Zbaracki. 6.” Journal of Computational Physics. J. Manuscript. University of Pennsylvania [166] 2003): ”Monetary Policy Rules under Uncertainty: Adaptive Learning and Robust Control”. Economic Studies Quarterly XIX: 1-14. Ritson. (2002): ”Labor Market Search and Monetary Shocks”.”. Dutta and M. working paper. ”Measures of Fit for Calibration Models”. S. Tilburg Unversity.. P. 1011-1041. and S. mimeo. Bielefeld University. D. M. Louie (1984). Xu (1996): ”Eﬀort and the Cycle: Cyclical Implications of Eﬃciency Wages. 259-271. G. [163] W¨hrmann. Wharton School. [161] Walsh. Princeton University Press. D. Santa Cruz. Lettau (2001). M. vol. [159] Uzawa. vol. Journal of Political Economy. (1968): “The Penrose Eﬀect and Optimum Growth.. no.