
A common set of mathematical tools underlies dynamic optimization, dynamic estimation, and filtering. In *Recursive Models of Dynamic Linear Economies*, Lars Peter Hansen and Thomas Sargent use these tools to create a class of econometrically tractable models of prices and quantities. They present examples from microeconomics, macroeconomics, and asset pricing. The models are cast in terms of a representative consumer. While Hansen and Sargent demonstrate the analytical benefits acquired when an analysis with a representative consumer is possible, they also characterize the restrictiveness of assumptions under which a representative household justifies a purely aggregative analysis.

Hansen and Sargent unite economic theory with a workable econometrics while going beyond and beneath demand and supply curves for dynamic economies. They construct and apply competitive equilibria for a class of linear-quadratic-Gaussian dynamic economies with complete markets. Their book, based on the 2012 Gorman lectures, stresses heterogeneity, aggregation, and how a common structure unites what superficially appear to be diverse applications. An appendix describes MATLAB programs that apply to the book's calculations.

Publisher: Princeton University Press · Released: Dec 29, 2013 · ISBN: 9781400848188 · Format: book

*Part I *

*Overview *

**Chapter 1 **

**Theory and Econometrics **

*Complete market economies are all alike …. *

*— Robert E. Lucas, Jr. (1989) *

Economic theory identifies patterns that unite apparently diverse subjects. Consider the following models:

1. Ryoo and Rosen’s (2004) partial equilibrium model of the market for engineers;

2. Rosen, Murphy, and Scheinkman’s (1994) model of cattle cycles;

3. Lucas’s (1978) model of asset prices;

4. Brock and Mirman’s (1972) and Hall’s (1978) model of the permanent income theory of consumption;

5. Time-to-build models of business cycles;

6. Siow’s (1984) model of occupational choice;

7. Topel and Rosen’s (1988) model of the dynamics of house prices and quantities;

8. Theories of dynamic demand curves;

9. Theories of dynamic supply curves;

10. Lucas and Prescott’s (1971) model of investment under uncertainty.

These models and many more have identical structures because all describe competitive equilibria with complete markets. This is the meaning of the words of Robert E. Lucas, Jr., with which we chose to begin this chapter. Lucas refers to the fact that complete markets models are cast in terms of a common set of objects and a common set of assumptions about how those objects fit together, namely:¹

1. Descriptions of flows of information over time, of endowments of resources, and of commodities that can be traded

2. A technology for transforming endowments into commodities and an associated set of feasible allocations

3. A list of people and their preferences over feasible allocations

4. An assignment of endowments to people, a price system, and a single budget constraint for each person**² **

5. An equilibrium concept that uses prices to reconcile decisions of diverse price-taking agents

This book is about constructing and applying competitive equilibria for a class of linear-quadratic-Gaussian dynamic economies with complete markets. For us, an economy will consist of a list of *matrices *that describe people’s household technologies, their preferences over consumption services, their production technologies, and their information sets. Competitive equilibrium allocations and prices satisfy some equations that are easy to write down and solve. These competitive equilibrium outcomes have representations that are convenient to compute and to estimate econometrically.

Practical and analytical advantages flow from identifying an underlying structure that unites a class of economies. Practical advantages come from recognizing that apparently different applications can be formulated and estimated using the same tools simply by replacing one list of matrices with another. Analytical advantages and deeper understandings come from appreciating the roles played by key assumptions such as completeness of markets and structures of heterogeneity.

We constructed our class of economies by using (1) a theory of recursive dynamic competitive economies,**³ (2) linear optimal control theory,⁴ (3) methods for estimating and interpreting vector autoregressions,⁵ and (4) a computer language for rapidly manipulating linear systems.⁶ Our economies have competitive equilibria with representations in terms of vector autoregressions that can be swiftly computed, simulated, and estimated econometrically. The models thus merge economic theory with dynamic econometrics. The computer language MATLAB implements the computations. It has a structure and vocabulary that economize time and effort. Better yet, dynare has immensely improved, accelerated, and eased practical applications. **

We formulated this class of models because practical difficulties of computing and estimating more general recursive competitive equilibrium models continue to limit their use as tools for thinking about applied problems. Recursive competitive equilibria were developed as useful special cases of the Arrow-Debreu competitive equilibrium model. Relative to the more general Arrow-Debreu setting, the great advantage of recursive competitive equilibria is that they can be computed by solving discounted dynamic programming problems. Furthermore, under some additional conditions, a competitive equilibrium can be represented as a Markov process. When that Markov process has a unique invariant distribution, there exists a vector autoregressive representation. Thus, the theory of recursive competitive equilibria holds out the promise of making easier contact with econometric theory than did previous formulations of equilibrium theory.

Two computational difficulties continue to leave some of this promise unrealized. The first is a curse of dimensionality that makes dynamic programming a costly procedure even with small numbers of state variables. The second is that after a dynamic program has been solved and an equilibrium Markov process computed, an implied vector autoregression has to be computed by applying least-squares projection formulas involving a large number of moments from the model’s invariant probability distribution. Typically, each of these computational steps can be solved only approximately. Good research along several lines has been directed at improving these approximations.⁷

The need to approximate originates in the fact that for general functional forms for objective functions and constraints, even one iteration on the key functional equation of dynamic programming (named the Bellman equation after Richard Bellman) cannot be performed analytically. It so happens that the functional forms economists would most like to use are ones for which the Bellman equation cannot be iterated on analytically.

Linear control theory studies the most important special class of problems for which iterations on the Bellman equation *can *be performed analytically, namely, problems having a quadratic objective function and a linear transition function. Application of dynamic programming leads to a system of well-understood and rapidly solvable equations known as the matrix Riccati difference equation.
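The matrix Riccati difference equation is easiest to see in the scalar case. The following pure-Python sketch (the parameter values a, b, q, r, and the discount factor beta are illustrative, not taken from the book) iterates p ← q + βa²p − (βabp)²/(r + βb²p) to a fixed point and recovers the optimal feedback rule u = −f·x:

```python
# Minimal sketch: iterating the scalar Riccati difference equation for
# min E sum_t beta^t (q*x_t^2 + r*u_t^2),  x_{t+1} = a*x_t + b*u_t + shock.
# Parameter values are illustrative, not from the book.

def riccati_step(p, a, b, q, r, beta):
    """One iteration of the scalar Riccati difference equation."""
    return q + beta * a * a * p - (beta * a * b * p) ** 2 / (r + beta * b * b * p)

def solve_riccati(a, b, q, r, beta, tol=1e-12, max_iter=10_000):
    """Iterate to (approximate) convergence; return the fixed point p and
    the feedback gain f in the optimal rule u = -f * x."""
    p = q
    for _ in range(max_iter):
        p_next = riccati_step(p, a, b, q, r, beta)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    f = beta * a * b * p / (r + beta * b * b * p)
    return p, f

p, f = solve_riccati(a=1.0, b=1.0, q=1.0, r=1.0, beta=0.95)
print(p, f)
```

The certainty-equivalence property of linear-quadratic problems is visible here: the shock variance never enters the recursion, so the same p and f apply with or without randomness.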

The philosophy of this book is to swallow hard and to accept up front primitive descriptions of tastes, technology, and information that satisfy the assumptions of linear optimal control theory. This approach facilitates computing competitive equilibria that automatically take the form of a vector autoregression, albeit often cast in terms of some states unobserved to the econometrician. A cost of the approach is that it does not accommodate specifications that we sometimes prefer.

A purpose of this book is to display the versatility and tractability of our class of models. Versions of a wide range of models from modern capital theory and asset pricing theory can be represented within our framework. Competitive equilibria can be computed so easily that we hope that the reader will soon be thinking of new models. We provide formulas and software for the reader to experiment; and for many of our calculations, dynare offers even better software.

In writing this book, we put ourselves under a restriction that we should supply the reader with a computer program that implements every equilibrium concept and mathematical representation. The programs are written in MATLAB, and are described throughout the book.**⁸ When a MATLAB program is referred to in the text, we place it in typewriter font. Similarly, all computer codes appear in typewriter font.⁹ You will get much more out of this book if you use and modify our programs as you read.¹⁰ **

This book is organized as follows. **Chapter 2 describes the first-order linear vector stochastic difference equation and shows how special cases of it can represent a variety of models of time series processes popular with economists. We use this difference equation to represent the information flowing to economic agents and also to represent competitive equilibria. **

**Chapter 3 is a catalogue of useful computational tricks that can be skipped on first reading. It describes fast ways to compute equilibria via doubling algorithms that accelerate computation of expectations of geometric sums of quadratic forms and solve dynamic programming problems. On a first reading, it is enough for the reader to know that these fast methods are available and that they are implemented both in our programs and in dynare. **

**Chapter 4 defines an economic environment in terms of a household technology for producing consumption services, preferences of a representative agent, a technology for producing consumption and investment goods, stochastic processes of shocks to preferences and technologies, and an information structure. The stochastic processes fit into the model introduced in chapter 2, while the preferences, technology, and information structure are specified with an eye toward making competitive equilibria computable with linear control theory. **

**Chapter 5 describes a planning problem that generates competitive equilibrium allocations. We formulate the planning problem in two ways, first as a variational problem using stochastic Lagrange multipliers, then as a dynamic programming problem. We describe how to solve the dynamic programming problem with formulas from linear control theory. The solution of the planning problem is a first-order vector stochastic difference equation of the form studied in chapter 2. We also show how to use the value function for the planning problem to compute Lagrange multipliers associated with constraints on the planning problem. **

**Chapter 6 describes a commodity space and a price system that support a competitive equilibrium. We use a formulation that lets the values that appear in agents’ budget constraints and objective functions be represented as conditional expectations of geometric sums of streams of future prices times quantities. Chapter 6 relates these prices to Arrow-Debreu state-contingent prices. **

**Chapter 7 describes a decentralized economy and its competitive equilibrium. Competitive equilibrium quantities solve the chapter 5 planning problem. The price system can be deduced from the stochastic Lagrange multipliers associated with the chapter 5 planning problem. **

**Chapter 8 describes links between competitive equilibria and autoregressive representations. We show how to obtain an autoregressive representation for observable variables that are error-ridden linear functions of state variables. In describing how to deduce an autoregressive representation from a competitive equilibrium and parameters of measurement error processes, we complete a key step that facilitates econometric estimation of free parameters. An autoregressive representation is naturally affiliated with a recursive representation of a likelihood function for the observable variables. More precisely, a vector autoregressive representation implements a convenient factorization of the joint density of a complete history of observables (i.e., the likelihood function) into a product of densities of time t observables conditioned on histories of those observables up to time t − 1. Chapter 8 also treats two other topics intimately related to econometric implementation: aggregation over time and the theory of approximation of one model by another. **

**Chapter 9 studies household technologies that describe the same preferences and dynamic demand functions. It characterizes a special subset of them as canonical. Canonical household technologies are useful for describing economies with heterogeneity among households’ preferences because of how they align linear spaces consisting of histories of consumption services, on the one hand, and histories of consumption rates, on the other. **

**Chapter 10 describes some applications in the form of versions of several dynamic models that fit easily within our class of models. These include models of markets for housing, cattle, and occupational choice. **

**Chapter 11 uses our model of preferences to represent multiple goods versions of permanent income models. We retain Robert Hall’s (1978) specification of a storage technology for accumulating physical capital and also a restriction on the discount factor, depreciation rate, and gross return on capital that in Hall’s simple setting made the marginal utility of consumption a martingale. In more general settings, adopting Hall’s specification of the storage technology imparts a martingale to outcomes, but it is concealed in an index whose increments drive demands for multiple consumption goods that themselves are not martingales. This permanent income model forms a convenient laboratory for thinking about sources in economic theory of unit roots and co-integrating vectors. **

**Chapter 12 describes a type of heterogeneity among households that allows us to aggregate preferences in a sense introduced by W. M. Gorman. Linear Engel curves of common slopes across agents give rise to a representative consumer. This representative consumer is easy to find, and, from the point of view of computing equilibrium prices and aggregate quantities, adequately stands in for the representative household of chapters 4–7. Finding competitive equilibrium allocations to individual consumers requires additional computations that this chapter also describes. **

**Chapter 13 outlines a setting with heterogeneity among households’ preferences of a kind that violates the conditions for Gorman aggregation. Households’ Engel curves are still affine, but dispersion of their slopes prevents Gorman aggregation. However, there is another sense in which there is a representative household whose preferences are a peculiar kind of average over the preferences of different types of households. We show how to compute and interpret this preference ordering over economy-wide aggregate consumption. This complete markets aggregate preference ordering cannot be computed until one knows the distribution of wealth evaluated at equilibrium prices, so it is less useful than the one produced by Gorman aggregation. **

**Chapter 14 adapts our setups to include features of periodic models of seasonality studied by Osborn (1988, 1991a, 1991b) and Todd (1983, 1990). **

**Appendix A is a manual of the MATLAB programs that we have prepared to implement the calculations described in this book. **

Duality between control problems and filtering problems underlies the finding that recursive filtering problems have the same mathematical structure as recursive formulations of linear optimal control problems. Both problems ultimately lead to matrix Riccati equations.**¹¹ We use the duality of recursive linear optimal control and linear filtering repeatedly both in chapter 8 (for representing equilibria econometrically) and in chapters 9, 12, and 13 (for representing and aggregating preferences). **

In **chapter 8, we state a spectral factorization identity that characterizes the link between the state-space representation for a competitive equilibrium and the vector autoregression for observables. This is by way of obtaining the innovations representation that achieves a recursive representation of a Gaussian likelihood function or quasi-likelihood function. In another guise, the same factorization identity is also a key tool in constructing what we call a canonical representation of a household technology in chapter 9. **

In more detail:

1. We use a linear state-space system to represent information flows that drive shocks to preferences and technologies (**chapter 2). **

2. We use a linear state-space system to represent observable quantities and scaled Arrow-Debreu prices associated with competitive equilibria (**chapters 5 and 7). **

3. We coax scaled Arrow-Debreu prices from Lagrange multipliers associated with a planning problem (**chapters 5 and 7). **

4. We derive formulas for scaled Arrow-Debreu prices from gradients of the value function for a planning problem (**chapters 5 and 7). **

5. We use another linear state-space system called an innovations representation to deduce a recursive representation of a Gaussian likelihood function or quasi-likelihood function associated with competitive equilibrium quantities and scaled Arrow-Debreu prices (**chapter 8). **

a. We use a Kalman filter to deduce an *innovations representation *associated with competitive equilibrium quantities and scaled Arrow-Debreu prices. In particular, we use the Kalman filter to construct a sequence of densities of time *t *observables conditional on a history of the observables up to time *t *− 1. This sequence of conditional densities is an essential ingredient of a recursive representation of the likelihood function (also known as the joint density of the observables over a history of length *T*).

b. The innovations in the innovations representation are square-summable linear functions of the history of the observables. Thus, the innovations representation is said to be invertible, while the original state-space representation is in general not invertible.

c. The limiting time-invariant innovations representation associated with a fixed point of the Kalman filtering equations implements a spectral factorization identity.

6. Intimate technical relationships prevail between the innovations representation of **chapter 8 and what in chapter 9 we call a canonical representation of preferences. **

a. An innovations representation is invertible in the sense that it expresses the innovations in observables at time *t *as square-summable linear combinations of the history up to time *t*.

b. A canonical representation of a household technology is invertible in the sense that it can be used to express a flow of consumption as a square-summable linear combination of the history of consumption services.

c. A canonical representation of a household technology allows us to express dynamic demand curves for consumption flows.

d. A canonical representation of a household technology can be constructed using a version of the same spectral factorization identity encountered in **chapter 8. **

7. We describe two sets of conditions that allow us to aggregate heterogeneous consumers into a representative consumer.

a. **Chapter 12 describes a dynamic version of Gorman’s (1953) conditions for aggregation, namely, that Engel curves be linear with common slopes across consumers. These conditions allow us to incorporate settings with heterogeneity in preference shocks and endowment processes, but they require that households share a common household technology for converting flows of purchases of consumption goods into consumption services. **

b. When the **chapter 12 conditions for Gorman aggregation hold, it is possible to compute competitive equilibrium prices and aggregate quantities without simultaneously computing individual consumption allocations. Without knowing allocations across heterogeneous agents, knowing prices and aggregate quantities is enough for many macroeconomic applications.¹² **

c. **Chapter 13 describes a weaker complete markets sense in which there exists a representative consumer. Here, consumers have diverse household technologies for converting flows of consumption goods into the consumption services that enter their utility functions. The household technology that converts aggregate consumption flows into the service flows valued by a representative consumer is a weighted average of the household technologies of individual consumers, an average best expressed in the frequency domain. To construct a canonical representation of the household technology of the representative consumer requires using the spectral factorization identity. **

d. To construct a complete markets representative agent requires knowing the vector of Pareto weights associated with a competitive equilibrium allocation. That vector has to be constructed simultaneously with, and not before, the competitive equilibrium aggregate allocation. Therefore, complete markets aggregation is less useful than Gorman aggregation for practical computations of competitive equilibrium prices and aggregate quantities.

8. The spectral factorization identity makes yet another appearance in **chapter 14, where we study models with hidden periodicity. A population vector autoregression, not conditioned on the period, can be constructed by an appropriate application of the factorization identity to an appropriate average of as many conditional spectral densities as there are seasons. **

9. Our reasoning and mathematics easily extend to risk-sensitive and robust economies that allow households to express their distrust of an approximating statistical model. Hansen and Sargent (2008) describe some of these extensions.**¹³ **

**¹ Unity goes only so far. The words composing the ellipsis in Lucas’s sentence are “but each incomplete market economy is incomplete in its own individual way.” **

**² A single budget constraint for each person is a telltale sign marking a complete markets model. **

**³ This work is summarized by Harris (1987) and Stokey and Lucas with Prescott (1989). **

**⁴ For example, see Kwakernaak and Sivan (1972) and Anderson and Moore (1979). **

**⁵ See Sims (1980) and Hansen and Sargent (1980a, 1981a, 1991a, 1991b). **

**⁶ See the MATLAB manual. **

**⁷ See Marcet (1988), Marcet and Marshall (1994), Judd (1996, 1998), Coleman (1990), Miranda and Fackler (2004), and Tauchen (1986). **

**⁸ These programs are referred to in a special index at the end of the book. They can be downloaded from https://files.nyu.edu/ts43/public/books.html. **

**⁹ To run our programs, you will need MATLAB’s Control Toolkit in addition to the basic MATLAB software. **

**¹⁰ The dynare suite of MATLAB programs is also very useful for analyzing and estimating our models. **

**¹¹ We expand on this theme in Hansen and Sargent (2008, ch. 4). **

**¹² James Tobin once defined macroeconomics as a discipline that neglects distribution effects. **

**¹³ See especially chapters 12 and 13 of Hansen and Sargent (2008). **

*Part II *

*Tools *

**Chapter 2 **

**Linear Stochastic Difference Equations **

This chapter introduces the vector first-order linear stochastic difference equation.**¹ We use it first to represent information flowing to economic agents, then again to represent competitive equilibria. The vector first-order linear stochastic difference equation is associated with a tidy theory of prediction and a host of procedures for econometric application. Ease of analysis has prompted us to adopt economic specifications that cause competitive equilibria to have representations as vector first-order linear stochastic difference equations. **

Because it expresses next period’s vector of state variables as a linear function of this period’s state vector and a vector of random disturbances, a first-order vector linear stochastic difference equation is *recursive*. Disturbances that form a martingale difference sequence are basic building blocks used to construct time series. Martingale difference sequences are easy to forecast, a fact that delivers convenient recursive formulas for optimal predictions of time series.

Let {*xt *: *t *= 1, 2, …} be a sequence of *n*-dimensional random vectors, that is, an *n*-dimensional stochastic process. Let {*wt *: *t *= 1, 2, …} be a sequence of *N*-dimensional random vectors. We shall express *xt *as the sum of two terms. The first is a moving average of past *wt*’s. The second describes the effects of an initial condition. The {*wt*} process generates a sequence of information sets {*Jt *: *t *= 0, 1, …}. Let *J*0 be generated by *x*0 and *Jt *be generated by *x*0, *w*1, …, *wt*, which means that *Jt *consists of the set of all measurable functions of {*x*0, *w*1, …, *wt*}. The process {*wt *: *t *= 1, 2, …} is assumed to be a martingale difference sequence adapted to this sequence of information sets.

DEFINITION 1: The sequence {*wt *: *t *= 1, 2, …} is said to be a *martingale difference sequence *adapted to {*Jt *: *t *= 0, 1, …} if *E*(*wt*+1|*Jt*) = 0 for *t *= 0, 1, ….

In addition, we assume that the {*wt *: *t *= 1, 2, …} process is *conditionally homoscedastic*, a phrase whose meaning is conveyed by

DEFINITION 2: The sequence {*wt *: *t *= 1, 2, …} is said to be *conditionally homoscedastic *if E(w_{t+1} w_{t+1}′ | J_t) = I for *t *= 0, 1, ….

It is an implication of the law of iterated expectations that {*wt *: *t *= 1, 2, …} is a sequence of (unconditional) mean zero, serially uncorrelated random vectors.**³ In addition, the entries of wt are assumed to be mutually uncorrelated. **

The process {*xt *: *t *= 1, 2, …} is constructed recursively using an initial random vector *x*0 and a time-invariant law of motion:

x_{t+1} = A x_t + C w_{t+1},  t = 0, 1, …,   (2.2.1)

where *A *is an *n *by *n *matrix and *C *is an *n *by *N *matrix.
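As a concrete illustration, the law of motion (2.2.1) can be simulated in a few lines of pure Python. The matrices A and C and the initial condition below are hypothetical, chosen only for this sketch:

```python
# Simulate x_{t+1} = A x_t + C w_{t+1} for small matrices, using plain
# nested lists. The A, C, and x0 below are illustrative, not from the book.
import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def vecadd(u, v):
    return [a + b for a, b in zip(u, v)]

def simulate(A, C, x0, T, rng=None, shocks=None):
    """Return [x_0, x_1, ..., x_T]. If shocks is None, draw i.i.d. N(0,1)
    martingale-difference disturbances w_{t+1}."""
    rng = rng or random.Random(0)
    N = len(C[0])
    x, path = x0, [x0]
    for t in range(T):
        w = shocks[t] if shocks is not None else [rng.gauss(0.0, 1.0) for _ in range(N)]
        x = vecadd(matvec(A, x), matvec(C, w))
        path.append(x)
    return path

A = [[0.9, 0.0], [0.0, 0.5]]   # illustrative transition matrix
C = [[1.0], [0.3]]             # illustrative shock loading (N = 1)
path = simulate(A, C, x0=[1.0, 0.0], T=5)
```

Setting the shocks to zero recovers the deterministic part x_t = A^t x_0, which is a handy sanity check on any implementation.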

Representation **(2.2.1) will be a workhorse in this book. First, we will use it to model the information upon which economic agents base their decisions. Information will consist of variables that drive shocks to preferences and to technologies. Second, we shall specify the economic problems faced by the agents in our models and the economic arrangement through which agents’ decisions are reconciled (competitive equilibrium) so that the state of the economy has a representation of the form (2.2.1). **

A tractable theory of prediction is associated with (2.2.1). We use this theory extensively both in computing a competitive equilibrium and in representing that equilibrium in the form of (2.2.1). The optimal forecast of *xt*+1 given current information is

E(x_{t+1} | J_t) = A x_t,   (2.3.1)

and the one-step-ahead forecast error is

x_{t+1} − E(x_{t+1} | J_t) = C w_{t+1}.   (2.3.2)

The covariance matrix of *xt*+1 conditioned on *Jt *is

E[(x_{t+1} − A x_t)(x_{t+1} − A x_t)′ | J_t] = CC′.   (2.3.3)

A nonrecursive expression for *xt *as a function of *x*0, *w*1, *w*2, …, *wt *can be found by using (2.2.1) repeatedly to obtain

x_t = C w_t + AC w_{t−1} + … + A^{t−1} C w_1 + A^t x_0 = Σ_{τ=0}^{t−1} A^τ C w_{t−τ} + A^t x_0.   (2.3.4)

Representation (2.3.4) is one type of moving-average representation. It expresses {xt : t = 1, 2, …} as a linear function of current and past values of the process {wt : t = 1, 2, …} and an initial condition x0.⁴ The list of moving average coefficients {A^τ C : τ = 0, 1, …} in representation (2.3.4) is often called an impulse response function. An impulse response function depicts the response of current and future values of {xt} to a random shock wt. In representation (2.3.4), the impulse response function is given by entries of the matrix sequence {A^τ C : τ = 0, 1, …}.⁵
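The impulse response function {A^τ C : τ = 0, 1, …} can be computed by repeated premultiplication by A. A minimal pure-Python sketch, with hypothetical A and C:

```python
# Impulse response function of x_{t+1} = A x_t + C w_{t+1}:
# the matrix sequence {A^tau C : tau = 0, 1, ...}.
# The A and C below are illustrative, not from the book.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def impulse_response(A, C, horizon):
    """Return [C, AC, A^2 C, ..., A^horizon C]."""
    coeffs = [C]
    for _ in range(horizon):
        coeffs.append(matmul(A, coeffs[-1]))
    return coeffs

A = [[0.9, 0.0], [0.0, 0.5]]   # illustrative
C = [[1.0], [0.3]]
irf = impulse_response(A, C, horizon=3)
```

Each entry irf[τ] records how the state vector responds τ periods after a unit shock; for a stable A the entries decay geometrically.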

Shift (2.3.4) forward in time:

x_{t+j} = Σ_{τ=0}^{j−1} A^τ C w_{t+j−τ} + A^j x_t.   (2.3.5)

Projecting both sides of (2.3.5) on the information set {x0, wt, wt−1, …, w1} gives⁶

E_t x_{t+j} = A^j x_t,   (2.3.6)

where *Et*(·) ≡ *E*[(·) | *x*0, *wt*, *wt*−1, …, *w*1] = *E*[(·) | *Jt*], and *xt *is in *Jt*. Equation (2.3.6) gives the optimal j-step-ahead prediction.

It is useful to obtain the covariance matrix of the *j*-step-ahead prediction error

x_{t+j} − E_t x_{t+j} = Σ_{s=0}^{j−1} A^s C w_{t+j−s}.   (2.3.7)

Evidently,

v_j ≡ E (x_{t+j} − E_t x_{t+j})(x_{t+j} − E_t x_{t+j})′ = Σ_{s=0}^{j−1} A^s CC′ A′^s.   (2.3.8a)

Note that *vj *defined in (2.3.8a) can be calculated recursively via

v_1 = CC′,  v_j = CC′ + A v_{j−1} A′,  j ≥ 2.   (2.3.8b)

For *j *≥ 1, *vj *is the conditional covariance matrix of the errors in forecasting *xt*+*j *on the basis of time *t *information *xt*. To decompose these covariances into parts attributable to the individual components of *wt*, we let *iτ *be an *N*-dimensional column vector of zeroes except in position *τ*, where there is a one. Define a matrix *υj,τ *by

υ_{j,τ} = Σ_{s=0}^{j−1} A^s C i_τ i_τ′ C′ A′^s,   (2.3.8c)

so that from (2.3.8a) and (2.3.8c) we have

v_j = Σ_{τ=1}^{N} υ_{j,τ}.   (2.3.8d)

Evidently, the matrices {*υj,τ*, *τ *= 1, …, *N*} give an orthogonal decomposition of the covariance matrix of *j*-step-ahead prediction errors into the parts attributable to each of the components *τ *= 1, …, *N*.**⁷ **
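The recursion for v_j and its orthogonal decomposition across shock components can be verified numerically. A pure-Python sketch with hypothetical A and C (not from the book); it checks that the υ_{j,τ} sum to v_j:

```python
# j-step-ahead forecast-error covariances v_j = sum_{s<j} A^s C C' A^s'
# computed recursively, plus their decomposition across components of w_t.
# The A and C below are illustrative, not from the book.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matadd(M, N):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(M, N)]

def forecast_cov(A, C, j):
    """v_j via the recursion v_1 = CC', v_k = CC' + A v_{k-1} A'."""
    CCt = matmul(C, transpose(C))
    v = CCt
    for _ in range(j - 1):
        v = matadd(CCt, matmul(matmul(A, v), transpose(A)))
    return v

def forecast_cov_by_shock(A, C, j, tau):
    """v_{j,tau}: the part of v_j attributable to shock component tau,
    obtained by zeroing every column of C except column tau (C i_tau)."""
    N = len(C[0])
    Ctau = [[row[k] if k == tau else 0.0 for k in range(N)] for row in C]
    return forecast_cov(A, Ctau, j)

A = [[0.9, 0.1], [0.0, 0.5]]           # illustrative
C = [[1.0, 0.2], [0.0, 0.7]]
v3 = forecast_cov(A, C, 3)
parts = [forecast_cov_by_shock(A, C, 3, tau) for tau in range(2)]
```

Because Σ_τ i_τ i_τ′ is the identity, the per-shock pieces must add up to v_j exactly; that additivity is what makes Sims-style innovation accounting an orthogonal decomposition.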

The innovation accounting methods of Sims (1980) are based on (2.3.8). Sims recommends computing the matrices *vj,τ *in (2.3.8) for a sequence *j *= 0, 1, 2, …. This sequence represents the effects of components of the shock process *wt *on the covariance of *j*-step-ahead prediction errors for each series in *xt*.

It is sometimes useful to uncouple the dynamics of *xt *by using the distinct eigenvalues of the matrix *A*. The Jordan decomposition of the matrix *A *is

A = T D T^{−1},   (2.4.1)

where *T *is a nonsingular matrix and *D *is another matrix to be constructed. The eigenvalues of *A *are the zeroes of the polynomial det (*ζI − A*). This polynomial has *n *zeroes because *A *is *n *by *n*. Not all of these zeroes are necessarily distinct.⁸ Suppose that there are m ≤ n distinct zeroes of this polynomial, denoted δ1, δ2, …, δm. For each δj, we construct a matrix Dj whose dimension equals the number of zeroes of det (ζI − A) that equal δj. The diagonal entries of Dj are δj, and the entries on the diagonal just above the main diagonal are all either zero or one. The remaining entries of Dj are zero. The matrix D is then block diagonal with Dj as its jth diagonal block.
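When the eigenvalues of A are distinct, D is diagonal and the columns of T are eigenvectors. A pure-Python sketch for a hypothetical 2 by 2 matrix with distinct real eigenvalues, verifying A = T D T^{−1}:

```python
# Jordan decomposition A = T D T^{-1} for a 2x2 matrix with distinct
# real eigenvalues, so that D is diagonal. Illustrative A; not from the book.
import math

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def eig2(A):
    """Eigenvalues of a 2x2 matrix: zeroes of det(zI - A) = z^2 - tr(A) z + det(A)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)   # assumes real, distinct roots
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def eigvec2(A, lam):
    """An eigenvector for eigenvalue lam, read off a nonzero row of A - lam*I."""
    a, b = A[0][0] - lam, A[0][1]
    if abs(b) > 1e-12:
        return [1.0, -a / b]       # row 1: a*v1 + b*v2 = 0
    c, d = A[1][0], A[1][1] - lam
    if abs(c) > 1e-12:
        return [-d / c, 1.0]       # row 2: c*v1 + d*v2 = 0
    return [1.0, 0.0]              # degenerate rows: e_1 works

A = [[0.9, 0.3], [0.0, 0.5]]       # illustrative; eigenvalues 0.9 and 0.5
d1, d2 = eig2(A)
v1, v2 = eigvec2(A, d1), eigvec2(A, d2)
T = [[v1[0], v2[0]], [v1[1], v2[1]]]   # eigenvectors as columns of T
D = [[d1, 0.0], [0.0, d2]]
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]
Tinv = [[ T[1][1] / detT, -T[0][1] / detT],
        [-T[1][0] / detT,  T[0][0] / detT]]
```

In the transformed coordinates the state components evolve independently, each at the geometric rate given by its eigenvalue, which is what "uncoupling the dynamics" buys.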

Transform the state vector *xt *
