- MTAP Reviewer G-9
- eureka math grade 6 module 4 parent tip sheet
- Types of Differential Equations
- Mathematical Physics
- Lecture 4.pdf
- Introduction to Calculus Set
- Discovering the Laplace Transform
- eigen.pdf
- math 8 unit 1 3 unit study
- Linier Fungtion
- Differential Equation
- Solving Equations elegantly
- Non Linear Root Finding
- mixtures problem.docx
- Aa Succeed
- IJETR021630
- Ref Card
- V01I031112
- 1-s2.0-S1877705811008757-main
- Alternative Quadratic Formula
- Debasis Kundu, Swagata Nandi Statistical Signal Processing Frequency Estimation 2012
- Schur Decomposition
- Operators
- Numerical Instructions in Matlab and Octave
- Eigen Vector & Eigen Values
- Chapter 1 Algebra.pdf
- Ta 1
- Equations Week3
- [P] [Ko] Method for 3D measuring CL.pdf
- cha6.pdf
- Yes Please
- The Unwinding: An Inner History of the New America
- Sapiens: A Brief History of Humankind
- The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution
- Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future
- Dispatches from Pluto: Lost and Found in the Mississippi Delta
- Devil in the Grove: Thurgood Marshall, the Groveland Boys, and the Dawn of a New America
- John Adams
- The Prize: The Epic Quest for Oil, Money & Power
- The Emperor of All Maladies: A Biography of Cancer
- This Changes Everything: Capitalism vs. The Climate
- Grand Pursuit: The Story of Economic Genius
- A Heartbreaking Work Of Staggering Genius: A Memoir Based on a True Story
- The New Confessions of an Economic Hit Man
- Team of Rivals: The Political Genius of Abraham Lincoln
- The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers
- Smart People Should Build Things: How to Restore Our Culture of Achievement, Build a Path for Entrepreneurs, and Create New Jobs in America
- Rise of ISIS: A Threat We Can't Ignore
- The World Is Flat 3.0: A Brief History of the Twenty-first Century
- Bad Feminist: Essays
- Steve Jobs
- Angela's Ashes: A Memoir
- How To Win Friends and Influence People
- The Sympathizer: A Novel (Pulitzer Prize for Fiction)
- Extremely Loud and Incredibly Close: A Novel
- Leaving Berlin: A Novel
- The Silver Linings Playbook: A Novel
- The Light Between Oceans: A Novel
- The Incarnations: A Novel
- You Too Can Have a Body Like Mine: A Novel
- The Love Affairs of Nathaniel P.: A Novel
- Life of Pi
- The Flamethrowers: A Novel
- Brooklyn: A Novel
- The First Bad Man: A Novel
- We Are Not Ourselves: A Novel
- The Blazing World: A Novel
- The Rosie Project: A Novel
- The Master
- Bel Canto
- A Man Called Ove: A Novel
- The Kitchen House: A Novel
- Beautiful Ruins: A Novel
- Interpreter of Maladies
- The Wallcreeper
- Wolf Hall: A Novel
- The Art of Racing in the Rain: A Novel
- The Cider House Rules
- A Prayer for Owen Meany: A Novel
- Lovers at the Chameleon Club, Paris 1932: A Novel
- The Bonfire of the Vanities: A Novel
- The Perks of Being a Wallflower
- The Constant Gardener: A Novel

**Divisions of Research & Statistics and Monetary Affairs
**

Federal Reserve Board, Washington, D.C.

Solving Linear Rational Expectations Models:

A Horse Race

Gary S. Anderson

2006-26

NOTE: Staff working papers in the Finance and Economics Discussion Series (FEDS)

are preliminary materials circulated to stimulate discussion and critical comment. The

analysis and conclusions set forth are those of the authors and do not indicate

concurrence by other members of the research staff or the Board of Governors.

References in publications to the Finance and Economics Discussion Series (other than

acknowledgement) should be cleared with the author(s) to protect the tentative character

of these papers.

Solving Linear Rational

Expectations Models:

A Horse Race

Gary S. Anderson

∗

Board of Governors of the Federal Reserve System

May 24, 2006

Abstract

This paper compares the functionality, accuracy, computational eﬃciency, and practicalities of al-

ternative approaches to solving linear rational expectations models, including the procedures of (Sims,

1996), (Anderson and Moore, 1983), (Binder and Pesaran, 1994), (King and Watson, 1998), (Klein,

1999), and (Uhlig, 1999). While all six procedures yield similar results for models with a unique sta-

tionary solution, the AIM algorithm of (Anderson and Moore, 1983) provides the highest accuracy;

furthermore, this procedure exhibits signiﬁcant gains in computational eﬃciency for larger-scale models.

∗

I would like to thank Robert Tetlow, Andrew Levin and Brian Madigan for useful discussions and suggestions. I would like

to thank Ed Yao for valuable help in obtaining and installing the MATLAB code. The views expressed in this document are

my own and do not necessarily reﬂect the position of the Federal Reserve Board or the Federal Reserve System.

1

May 24, 2006 2

1 Introduction and Summary

Since (Blanchard and Kahn, 1980) a number of alternative approaches for solving linear rational expectations

models have emerged. This paper describes, compares and contrasts the techniques of (Anderson, 1997;

Anderson and Moore, 1983, 1985), (Binder and Pesaran, 1994), (King and Watson, 1998), (Klein, 1999),

(Sims, 1996), and (Uhlig, 1999). All these authors provide MATLAB code implementing their algorithm.

1

The paper compares the computational eﬃciency, functionality and accuracy of these MATLAB implemen-

tations. The paper uses numerical examples to characterize practical diﬀerences in employing the alternative

procedures.

Economists use the output of these procedures for simulating models, estimating models, computing impulse

response functions, calculating asymptotic covariance, solving inﬁnite horizon linear quadratic control prob-

lems and constructing terminal constraints for nonlinear models. These applications beneﬁt from the use of

reliable, eﬃcient and easy to use code.

A comparison of the algorithms reveals that:

• For models satisfying the Blanchard-Kahn conditions, the algorithms provide equivalent solutions.

2

• The Anderson-Moore algorithm requires fewer ﬂoating point operations to achieve the same result.

This computational advantage increases with the size of the model.

• While the Anderson-Moore, Sims and Binder-Pesaran approaches provide matrix output for accom-

modating arbitrary exogenous processes, the King-Watson and Uhlig implementations only provide

solutions for VAR exogenous process.

3

Fortunately, there are straightforward formulae for augmenting

the King-Watson Uhlig and Klein approaches with the matrices characterizing the impact of arbitrary

shocks.

• The Anderson-Moore suite of programs provides a simple modeling language for developing models. In

addition, the Anderson-Moore implementation requires no special treatment for models with multiple

lags and leads. To use each of the other algorithms, one must cast the model in a form with at most

one lead or lag. This can be a tedious and error prone task for models with more than a couple of

equations.

• Using the Anderson-Moore algorithm to solve the quadratic matrix polynomial equation improves the

performance of both Binder-Pesaran’s and Uhlig’s algorithms.

Section 2 states the problem and introduces notation. This paper divides the algorithms into three categories:

eigensystem, QZ, and matrix polynomial methods. Section 3 describes the eigensystem methods. Section 4

describes applications of the QZ algorithm. Section 5 describes applications of the matrix polynomial ap-

proach. Section 6 compares the computational eﬃciency, functionality and accuracy of the algorithms.

Section 7 concludes the paper. The appendices provide usage notes for each of the algorithms as well as

information about how to compare inputs and outputs from each of the algorithms.

1

Although (Broze, Gouri´eroux, and Szafarz, 1995) and (Zadrozny, 1998) describe algorithms, I was unable to locate code

implementing the algorithms.

2

(Blanchard and Kahn, 1980) developed conditions for existence and uniqueness of linear rational expectations models. In

their setup, the solution of the rational expectations model is unique if the number of unstable eigenvectors of the system is

exactly equal to the number of forward-looking (control) variables.

3

I modiﬁed Klein’s MATLAB version to include this functionality by translating the approach he used in his Gauss version.

May 24, 2006 3

2 Problem Statement and Notation

These algorithms compute solutions for models of the form

θ

¸

i=−τ

H

i

x

t+i

= Ψz

t

, t = 0, . . . , ∞ (1)

with initial conditions, if any, given by constraints of the form

x

i

= x

data

i

, i = −τ, . . . , −1 (2)

where both τ and θ are non-negative, and x

t

is an L dimensional vector of endogenous variables with

lim

t→∞

x

t

< ∞ (3)

and z

t

is a k dimensional vector of exogenous variables.

(4)

Solutions can be cast in the form

(x

t

−x

∗

) =

−1

¸

i=−τ

B

i

(x

t+i

−x

∗

)

Given any algorithm that computes the B

i

, one can easily compute other quantities useful for characterizing

the impact of exogenous variables. For models with τ = θ = 1 the formulae are especially simple.

Let

Φ = (H

0

+H

1

B)

−1

F = −ΦH

1

B

We can write

(x

t

−x

∗

) = B(x

t−1

−x

∗

) +

∞

¸

s=0

F

s

φψz

t+s

and when

z

t+1

= Υz

t

vec(ϑ) = (I −Υ

T

⊗F)

−1

vec(ΦΨ)

(x

t

−x

∗

) = B(x

t−1

−x

∗

) +ϑz

t

Consult (Anderson, 1997) for other useful formulae concerning rational expectations model solutions.

I downloaded the MATLAB code for each implementation in July, 1999. See the bibliography for the relevant

URL’s.

May 24, 2006 4

3 Eigensystem Methods

3.1 The Anderson-Moore Algorithm

(Anderson, 1997; Anderson and Moore, 1985) developed their algorithm in the mid 80’s for solving rational

expectations models that arise in large scale macro models. Appendix B.1 on page 13 provides a synopsis of

the model concepts and algorithm inputs and outputs. Appendix A presents pseudocode for the algorithm.

The algorithm determines whether equation 1 has a unique solution, an inﬁnity of solutions or no solutions at

all. The algorithm produces a matrix Q codifying the linear constraints guaranteeing asymptotic convergence.

The matrix Q provides a strategic point of departure for making many rational expectations computations.

The uniqueness of solutions to system 1 requires that the transition matrix characterizing the linear system

have appropriate numbers of explosive and stable eigenvalues (Blanchard and Kahn, 1980), and that the

asymptotic linear constraints are linearly independent of explicit and implicit initial conditions (Anderson

and Moore, 1985).

The solution methodology entails

1. Manipulating equation 1 to compute a state space transition matrix.

2. Computing the eigenvalues and the invariant space associated with explosive eigenvalues

3. Combining the constraints provided by:

(a) the initial conditions,

(b) auxiliary initial conditions identiﬁed in the computation of the transition matrix and

(c) the invariant space vectors

The ﬁrst phase of the algorithm computes a transition matrix, A, and auxiliary initial conditions, Z. The

second phase combines left invariant space vectors associated with large eigenvalues of A with the auxiliary

initial conditions to produce the matrix Q characterizing the saddle point solution. Provided the right hand

half of Q is invertible, the algorithm computes the matrix B, an autoregressive representation of the unique

saddle point solution.

The Anderson-Moore methodology does not explicitly distinguish between predetermined and non-predetermined

variables. The algorithm assumes that history ﬁxes the values of all variables dated prior to time t and that

these initial conditions, the saddle point property terminal conditions, and the model equations determine

all subsequent variable values.

3.2 King & Watson’s Canonical Variables/System Reduction Method

(King and Watson, 1998) describe another method for solving rational expectations models. Appendix B.2

provides a synopsis of the model concepts and algorithm inputs and outputs. The algorithm consists of two

parts: system reduction for eﬃciency and canonical variables solution for solving the saddle point problem.

Although their paper describes how to accommodate arbitrary exogenous shocks, the MATLAB function

does not return the relevant matrices.

King-Watson provide a MATLAB function, resolkw, that computes solutions. The MATLAB function trans-

forms the original system to facilitate the canonical variables calculations. The mdrkw program computes

the solution assuming the exogenous variables follow a vector autoregressive process.

Given:

AE(y

t+1

) = By

t

+Cx

t

May 24, 2006 5

system reduction produces an equivalent model of the form

0 = f

t

+Kd

t

+ Ψ

f

(F)E(x

t

)

E(d

t+1

) = Wd

t

+ Ψ

d

(F)E(x

t

)

Where d

t

are the “dynamic” variables and f

t

are the “ﬂow” variables in the y

t

vector.

The mdrkw program takes the reduced system produced by redkw and the decomposition of its dynamic

subsystem computed by dynkw and computes the rational expectations solution. The computation can use

either eigenvalue-eigenvector decomposition or Schur decomposition.

Appendix B.2.1 shows one way to compute the King-Watson solution using the Anderson-Moore algorithm.

Appendix B.2.2 shows one way to compute the Anderson-Moore solution using the King-Watson algorithm.

4 Applications of the QZ Algorithm

Several authors exploit the properties of the Generalized Schur Form (Golub and van Loan, 1989).

Theorem 1 The Complex Generalized Schur Form – If A and B are in C

n×n

, then there exist unitary Q

and Z such that Q

H

AZ = T and Q

H

BZ = S are upper triangular. If for some k, t

kk

and s

kk

are both zero,

then λ(A, B) = C. Otherwise, λ(A, B) = {

tii

sii

: s

ii

= 0}

The algorithm uses the QZ decomposition to recast equation 5 in a canonical form that makes it possible

to solve the transformed system “forward” for endogenous variables consistent with arbitrary values of the

future exogenous variables.

4.1 Sims’ QZ Method

(Sims, 1996) describes the QZ Method. His algorithm solves a linear rational expectations model of the

form:

Γ

0

y

t

= Γ

1

y

t−1

+C + Ψz

t

+ Πη

t

(5)

where t = 1, 2, 3, · · · , ∞ and C is a vector of constants, z

t

is an exogenously evolving, possibly serially

correlated, random disturbance, and η

t

is an expectational error, satisfying E

t

η

t+1

= 0.

Here, as with all the algorithms except the Anderson-Moore algorithm, one must cast the model in a form

with one lag and no leads. This can be problematic for models with more than a couple of equations.

Appendix B.3 summarizes the Sims’ QZ method model concepts and algorithm inputs and outputs.

The Π designation of expectational errors identiﬁes the “predetermined” variables. The Anderson-Moore

technique does not explicitly require the identiﬁcation of expectational errors. In applying the Anderson-

Moore technique, one chooses the time subscript of the variables that appear in the equations. All predeter-

mined variables have historical values available through time t − 1. The evolution of the solution path can

have no eﬀect on any variables dated (t −1) or earlier. Future model values may inﬂuence time t values of

any variable.

Appendix B.3.1 shows one way to transform the problem from Sims’ form to Anderson-Moore form and how

to reconcile the solutions. For the sake of comparison, the Anderson-Moore transformation adds Lθ variables

and the same number of equations, setting future expectation errors to zero.

Appendix B.3.2 shows one way to transform the problem from Anderson-Moore form to Sims form.

May 24, 2006 6

4.2 Klein’s Approach

(Klein, 1999) describes another method. Appendix B.4 summarizes the model concepts and algorithm

inputs and outputs.

The algorithm uses the Generalized Schur Decomposition to decouple backward and forward variables of the

transformed system.

Although the MATLAB version does not provide solutions for autoregressive exogenous variables, one can

solve the autoregressive exogenous variables problem by augmenting the system. The MATLAB program

does not return matrices for computing the impact of arbitrary exogenous factors.

Appendix B.4.1 describes one way to recast a model from a form suitable for Klein into a form for the

Anderson-Moore algorithm. Appendix B.4.2 describes one way to recast a model from a form suitable for

the Anderson-Moore methodology into a form for the Klein Algorithm.

5 Applications of the Matrix Polynomial Approach

Several algorithms rely on determining a matrix C satisfying

H

1

C

2

+H

0

C +H

−1

= 0. (6)

They employ linear algebraic techniques to solve this quadratic equation. Generally there are many solutions.

When the homogeneous linear system has a unique saddle-path solution, the Anderson-Moore algorithm

constructs the unique matrix C

AM

= B that satisﬁes the quadratic matrix equation and has all roots inside

the unit circle.

H

1

C

2

AM

+H

0

C

AM

+H

−1

= 0

5.1 Binder & Pesaran’s Method

(Binder and Pesaran, 1994) describe another method.

According to Binder & Pesaran(1994), under certain conditions, the unique stable solution, if it exists, is

given by:

x

t

= Cx

t−1

+

∞

¸

i=0

F

i

E(w

t+1

)

where

F = (I −BC)

−1

B

and C satisﬁes a quadratic equation like equation 6.

Their algorithm consists of a “recursive” application of the linear equations deﬁning the relationships between

C, H and F.

Appendix B.5.1 describes one way to recast a model from a form suitable for Binder-Pesaran into a form for

the Anderson-Moore algorithm. Appendix B.5.2 describes one way to recast a model from a form suitable

for the Anderson-Moore methodology into a form for the Binder-Pesaran Algorithm.

5.2 Uhlig’s Technique

(Uhlig, 1999) describes another method. The algorithm uses generalized eigenvalue calculations to obtain a

solution for the matrix polynomial equation.

May 24, 2006 7

One can view the Uhlig technique as preprocessing of the input matrices to reduce the dimension of the

quadratic matrix polynomial. It turns out that once the simpliﬁcation has been done, the Anderson-Moore

algorithm computes the solution to the matrix polynomial more eﬃciently than the approach adopted in

Uhlig’s algorithm.

Uhlig’s algorithm operates on matrices of the form:

¸

B 0 A C 0 0

H 0 G K F J

Uhlig in eﬀect pre-multiplies the equations by the matrix

C

0

0

C

+

−KC

+

I

¸

¸

where

C

0

C = 0, C

+

= (C

T

C)

−1

C

T

to get

C

0

B 0 C

0

A 0 0 0

C

+

B 0 C

+

A I 0 0

H −KC

+

B 0 G−KC

+

A 0 F J

¸

¸

one can imagine leading the second block of equations by one period and using them to annihilate J to get

0 0 C

0

B 0 C

0

A 0

H −KC

+

B 0 G−KC

+

A−JC

+

B 0 F −JC

+

A 0

0 0 C

+

B 0 C

+

A I

¸

¸

This step in eﬀect decouples the second set of equations making it possible to investigate the asymptotic

properties by focusing on a smaller system.

¸

0 C

0

B C

0

A

H −KC

+

B G−KC

+

A−JC

+

B F −JC

+

A

**Uhlig’s algorithm undertakes the solution of a quadratic equation like equation 6 with
**

H

1

=

¸

C

0

A

F −JC

+

A

, H

0

=

¸

C

0

B

G−KC

+

A−JC

+

B

, H

−1

=

¸

0

H −KC

+

B

Appendix B.6.2 describes one way to recast a model from a form suitable for Uhlig into a form for the

Anderson-Moore algorithm. Appendix B.6.1 describes one way to recast a model from a form suitable for

the Anderson-Moore methodology into a form for the Uhlig Algorithm.

6 Comparisons

Section 2 identiﬁed B, ϑ, φ and F as potential outputs of a linear rational expectation algorithm. Most of the

implementations do not compute each of the potential outputs. Only Anderson-Moore and Binder-Pesaran

provide all four outputs (See Table 6).

Generally, the implementations make restrictions on the form of the input. Most require the user to specify

models with at most one lag or one lead. Only Anderson-Moore explicitly allows multiple lags and leads.

Each of the authors provides small illustrative models along with their MATLAB code. The next two sections

present results from applying all the algorithms to each of the example models.

May 24, 2006 8

Table 1: Modeling Features

Technique B ϑ φ, F Usage Notes

Anderson-Moore Allows multiple lags and leads.

Has modeling language.

King & Watson one lead, no lags

Sims one lag, no leads

Klein one lead, no lags

Binder-Peseran one lag, one lead;

ˆ

C must be non-

singular

Uhlig one lag, one lead; constraint in-

volving choice of “jump” vari-

ables and rank condition on C.

Note:

• The Klein and Uhlig procedures compute ϑ by augmenting linear sys-

tem

• For the Uhlig procedure one must choose “jump” variables to guaran-

tee that the C matrix has full rank.

6.1 Computational Eﬃciency

Nearly all the algorithms successfully computed solutions for all the examples. Each of the algorithms, except

Binder-Pesaran’s, successfully computed solutions for all of Uhlig’s examples. Uhlig’s algorithm failed to

provide a solution for the given parametrization of one of King’s examples. However, Binder-Pesaran’s and

Uhlig’s routines would likely solve alternative parametrization of the models that had convergence problems.

Tables 2-4 present the MATLAB-reported ﬂoating point operations (ﬂops) counts for each of the algorithms

applied to the example models.

The ﬁrst column of each table identiﬁes the example model. The second column provides the ﬂops required

by the Anderson-Moore algorithm to compute B followed by the ﬂops required to compute B, ϑ, φ, and F.

Columns three through seven report the ﬂops required by each algorithm divided by the ﬂops required by

the Anderson-Moore algorithm for a given example model.

Note that the Anderson-Moore algorithm typically required a fraction of the number of ﬂops required by the

other algorithms. For example, King-Watson’s algorithm required more than three times the ﬂops required

by the Anderson-Moore algorithm for the ﬁrst Uhlig example. In the ﬁrst row, one observes that Uhlig’s

algorithm required only 92% of the number of ﬂops required by the Anderson-Moore algorithm, but this is

the only instance where an alternative to the Anderson-Moore algorithm required fewer ﬂops.

In general, Anderson-Moore provides solutions with the least computational eﬀort. There were only a few

cases where some alternative had approximately the same number of ﬂoating point operations. The eﬃciency

advantage was especially pronounced for larger models. King-Watson generally used twice to three times

the number of ﬂoating point operations. Sims generally used thirty times the number of ﬂoating point

operations – never fewer than Anderson-Moore, King-Watson or Uhlig. It had about the same performance

as Klein. Klein generally used thirty times the number of ﬂoating point operations. It never used fewer

than Anderson-Moore, King-Watson or Uhlig. Binder-Pesaran was consistently the most computationally

expensive algorithm. It generally used hundreds of times more ﬂoating point operations. In one case, it took

as many as 100,000 times the number of ﬂoating point operations. Uhlig generally used about twice the ﬂops

of Anderson-Moore even for small models and many more ﬂops for larger models.

Table 5 presents a comparison of the original Uhlig algorithm to a version using Anderson-Moore to solve

the quadratic polynomial equation. Employing the Anderson-Moore algorithm speeds the computation. The

diﬀerence was most dramatic for larger models.

May 24, 2006 9

6.2 Numerical Accuracy

Tables 6-11 presents the MATLAB relative errors. I have employed a symbolic algebra version of the

Anderson-Moore algorithm to compute solutions to high precision

4

. Although ϑ and F are relatively simple

linear transformations of B, each of the authors uses radically diﬀerent methods to compute these quantities.

I then compare the matrices computed in MATLAB by each algorithm to the high precision solution.

Anderson-Moore always computed the correct solution and in almost every case produced the most accurate

solution. Relative errors were on the order of 10

−16

. King-Watson always computed the correct solution, but

produced solutions with relative errors generally 3 times the size of the Anderson-Moore algorithm. Sims

always computed correct solutions but produced solutions with relative errors generally 5 times the size of

the Anderson-Moore algorithm.

5

Sim’s F calculation produced errors that were 20 times the size of the

Anderson-Moore relative errors. Klein always computed the correct solution but produced solutions with

relative errors generally 5 times the size of the Anderson-Moore algorithm.

Uhlig provides accurate solutions with relative errors about twice the size of the Anderson-Moore algorithm

for each case for which it converges. It cannot provide a solution for King’s example 3 for the particular

parametrization I employed. I did not explore alternative parametrizations. For the ϑ computation, the

results were similar. The algorithm was unable to compute ϑ for King example 3. Errors were generally 10

times the size of Anderson-Moore relative errors.

Binder-Pesaran converges to an incorrect value for three of the Uhlig examples: example 3, 6 and example 7.

In each case, the resulting matrix solves the quadratic matrix polynomial, but the particular solution has an

eigenvalue greater than one in magnitude even though an alternative matrix solution exists with eigenvalues

less than unity. For Uhlig’s example 3, the algorithm diverges and produces a matrix with NaN’s. Even

when the algorithm converges to approximate the correct solution, the errors are much larger than the

other algorithms. One could tighten the convergence criterion at the expense of increasing computational

time, but the algorithm is already the slowest of the algorithms evaluated. Binder-Pesaran’s algorithm does

not converge for either of Sims’ examples. The algorithm provides accurate answers for King & Watson’s

examples. Although the convergence depends on the particular parametrization, I did not explore alternative

parametrization when the algorithm’s did not converge. The ϑ and F results were similar to the B results.

The algorithm was unable to compute H for Uhlig 3 in addition to Uhlig 7. It computed the wrong value

for Uhlig 6. It was unable to compute values for either of Sims’s examples.

7 Conclusions

A comparison of the algorithms reveals that:

• For models satisfying the Blanchard-Kahn conditions, the algorithms provide equivalent solutions.

• The Anderson-Moore algorithm proved to be the most accurate.

• Using the Anderson-Moore algorithm to solve the quadratic matrix polynomial equation improves the

performance of both Binder-Pesaran’s and Uhlig’s algorithms.

• While the Anderson-Moore, Sims and Binder-Pesaran approaches provide matrix output for accom-

modating arbitrary exogenous processes, the King-Watson and Uhlig implementations only provide

solutions for VAR exogenous process.

6

Fortunately, there are straightforward formulae for augmenting

4

I computed exact solutions when this took less than 5 minutes and solutions correct to 30 decimal places in all other cases.

5

To compare F for Sims note that

Ψ = I

L

F = (Θy.Θ

f

)Φ

−1

exact

6

I modiﬁed Klein’s MATLAB version to include this functionality by translating the approach he used in his Gauss version.

May 24, 2006 10

the King-Watson, Uhlig and Klein approaches with the matrices characterizing the impact of arbitrary

shocks.

• The Anderson-Moore algorithm requires fewer ﬂoating point operations to achieve the same result.

This computational advantage increases with the size of the model.

• The Anderson-Moore suite of programs provides a simple modeling language for developing models. In

addition, the Anderson-Moore algorithm requires no special treatment for models with multiple lags

and leads. To use each of the other algorithms, one must cast the model in a form with at most one

lead or lag. This can be tedious and error prone task for models with more than a couple of equations.

May 24, 2006 11

8 Bibliography

Gary Anderson. A reliable and computationally eﬃcient algorithm for imposing the sad-

dle point property in dynamic models. Unpublished Manuscript, Board of Governors

of the Federal Reserve System. Downloadable copies of this and other related papers at

http://www.federalreserve.gov/pubs/oss/oss4/aimindex.html, 1997.

Gary Anderson and George Moore. An eﬃcient procedure for solving linear perfect foresight models. 1983.

Gary Anderson and George Moore. A linear algebraic procedure for solving linear perfect foresight models.

Economics Letters, (3), 1985. URL http://www.federalreserve.gov/pubs/oss/oss4/aimindex.html.

Michael Binder and M. Hashem Pesaran. Multivariate rational expectations models and macroeconometric

modelling: A review and some new results. URL http://www.inform.umd.edu/EdRes/Colleges/BSOS/

Depts/Economics/mbinder/research/matlabresparse.html. Seminar Paper, May 1994.

Olivier Jean Blanchard and C. Kahn. The solution of linear diﬀerence models under rational expectations.

Econometrica, 48, 1980.

Laurence Broze, Christian Gouri´eroux, and Ariane Szafarz. Solutions of multivariate rational expectations

models. Econometric Theory, 11:229–257, 1995.

Gene H. Golub and Charles F. van Loan. Matrix Computations. Johns Hopkins, 1989.

Robert G. King and Mark W. Watson. The solution of singular linear diﬀerence systems under ra-

tional expectations. International Economic Review, 39(4):1015–1026, November 1998. URL http:

//www.people.virginia.edu/~rgk4m/kwre/kwre.html.

Paul Klein. Using the generalized schur form to solve a multivariate linear rational expectations model.

Journal of Economic Dynamics and Control, 1999. URL http://www.iies.su.se/data/home/kleinp/

homepage.htm.

Christopher A. Sims. Solving linear rational expectations models. URL http://www.econ.yale.edu/~sims/

#gensys. Seminar paper, 1996.

Harald Uhlig. A toolkit for analyzing nonlinear dynamic stochastic models easily. URL http://cwis.kub.

nl/~few5/center/STAFF/uhlig/toolkit.dir/toolkit.htm. User’s Guide, 1999.

Peter A. Zadrozny. An eigenvalue method of undetermined coeﬃcients for solving linear rational expectations

models. Journal of Economic Dynamics and Control, 22:1353–1373, 1998.

May 24, 2006 12

Appendix A The Anderson-Moore Algorithm(s)

Algorithm 1

1 Given H, compute the unconstrained autoregression.

2 funct F

1

(H) ≡

3 k := 0

4 Z

0

:= ∅

5 H

0

:= H

6 Γ := ∅

7 while H

k

θ

is singular ∩ rows(Z

k

) < L(τ +θ)

8 do

9 U

k

=

¸

U

k

Z

U

k

N

:= rowAnnihilator(H

k

θ

)

10 H

k+1

:=

¸

0 U

k

Z

H

k

τ

. . . U

k

Z

H

k

θ−1

U

k

N

H

k

τ

. . . U

k

N

H

k

θ

11 Z

k+1

:=

¸

Q

k

U

k

Z

H

k

τ

. . . U

k

Z

H

k

θ−1

12 k := k + 1

13 od

14 Γ = −H

−1

θ

H

−τ

. . . H

θ−1

15 A =

¸

0 I

Γ

16 return{

H

k

−

τ . . . H

k

θ

, A, Z

k

}

17 .

Algorithm 2

1 Given V, Z

,∗

,

2 funct F

2

(A, Z

,∗

)

3 Compute V , the vectors spanning the left

4 invariant space associated with eigenvalues

5 greater than one in magnitude

6 Q :=

¸

Z

,∗

V

7 return{Q}

8 .

Algorithm 3

1 Given Q,

2 funct F

3

(Q)

3 cnt = noRows(Q)

4 return

{Q, ∞} cnt < Lθ

{Q, 0} cnt > Lθ

{Q, ∞} (Q

R

singular)

{B = −Q

−1

R

Q

L

, 1} otherwise

5 .

May 24, 2006 13

Appendix B Model Concepts

The following sections present the inputs and outputs for each of the algorithms for the following simple

example:

V

t+1

= (1 +R)V

t

−D

t+1

D = (1 −δ)D

t−1

Appendix B.1 Anderson-Moore

Inputs

θ

¸

i=−τ

H

i

x

t+i

= Ψz

t

Model Variable Description Dimensions

x

t

State Variables L(τ +θ) ×1

z

t

Exogenous Variables M ×1

θ Longest Lead 1 ×1

τ Longest Lag 1 ×1

H

i

Structural Coeﬃcients Matrix (L ×L)(τ +θ + 1)

Ψ Exogenous Shock Coeﬃcients Matrix L ×M

Υ Optional Exogenous VAR Coeﬃcients

Matrix(z

t+1

= Υz

t

)

M ×M

Outputs

x

t

= B

x

t−τ

.

.

.

x

t−1

¸

¸

¸ +

0 . . . 0 I

∞

¸

s=0

(F

s

¸

0

ΦΨz

t+s

)

Model Variable Description Dimensions

B reduced form coeﬃcients matrix L ×L(τ +θ)

Φ exogenous shock scaling matrix L ×L

F exogenous shock transfer matrix Lθ ×Lθ

ϑ autoregressive shock transfer matrix when

z

t+1

= Υz

t

the inﬁnite sum simpliﬁes to give

x

t

= B

x

t−τ

.

.

.

x

t−1

¸

¸

¸ +ϑz

t

L ×M

May 24, 2006 14

Anderson-Moore input:

AIM Modeling Language Input

1 MODEL> FIRMVALUE

2 ENDOG>

3 V

4 DIV

5 EQUATION> VALUE

6 EQ> LEAD(V,1) = (1+R)*V - LEAD(DIV,1)

7 EQUATION> DIVIDEND

8 EQ> DIV = (1-DELTA)*LAG(DIV,1)

9 END

Parameter File Input

1 DELTA=0.3;

2 R=0.1;

3 psi=[4. 1.;3. -2.];

4 upsilon=[0.9 0.1;0.05 0.2];

H =

¸

0 0 −1.1 0 1 1.

0 −0.7 0 1 0 0

, Ψ =

¸

4. 1.

3. −2.

, Υ

¸

0.9 0.1

0.05 0.2

produces output:

B =

¸

0. 1.225

0. 0.7

F =

¸

0.909091 0.909091

0. 0.

Φ =

¸

−0.909091 1.75

0. 1.

ΦΨ =

¸

1.61364 −4.40909

3. −2.

ϑ =

¸

21.0857 −3.15714

3. −2.

**Usage Notes for Anderson-Moore Algorithm
**

1. “Align” model variables so that the data history (without applying model equations), completely

determines all of x

t−1

, but none of x

t

.

2. Develop a “model ﬁle” containing the model equations written in the “AIM modeling language”

3. Apply the model pre-processor to create MATLAB programs for initializing the algorithm’s input

matrix,(H). Create Ψ and, optionally, Υ matrices.

4. Execute the MATLAB programs to generate B, φ, F and optionally ϑ

Users can obtain code for the algorithm and the preprocessor from the author

7

7

http://www.bog.frb.fed.us/pubs/oss/oss4/aimindex.html July, 1999.

May 24, 2006 15

Appendix B.2 King-Watson

Inputs

AEtyt+1 = Byt +

θ

X

i=0

CiEt(xt+i)

xt = Qδt

δt = ρδt−1 +Gt

yt =

»

Λt

kt

–

Model Variable Description Dimensions

θ longest lead 1 ×1

yt Endogenous Variables m×1

Λt Non-predetermined Endogenous Variables (m−p) ×1

kt Predetermined Endogenous Variables p ×1

xt Exogenous Variables nx ×1

δt Exogenous Variables n

δ

×1

A Structural Coeﬃcients Matrix associated with lead

endogenous variables, yt+1

m×m

B Structural Coeﬃcients Matrix associated with con-

temporaneous endogenous variables, yt

m×m

Ci Structural Coeﬃcients Matrix associated with con-

temporaneous and lead exogenous variables, xt

m×n

Q Structural Coeﬃcients Matrix associated with con-

temporaneous exogenous variables, xt

nx ×n

δ

ρ Vector Autoregression matrix for exogenous variables n

δ

×n

δ

G Matrix multiplying Exogenous Shock n

δ

×1

Outputs

yt = Πst

st = Mst−1 +

¯

Gt

st =

»

kt

δt

–

Model Variable Description Dimensions

st Exogenous Variables and predetermined variables (nx +p) ×1

Π Matrix relating endogenous variables to exogenous

and predetermined variables

m×(p +nx)

M (p +nx) ×(p +nx)

¯

G Matrix multiplying Exogenous Shock (nx +p) ×l

May 24, 2006 16

King-Watson input:

A =

1 0 0 0

0 1 0 0

0 0 −1 −1.

0 0 0 0

¸

¸

¸

¸

, B =

0 0 1 0

0 0 0 1

0 0 −1.1 0

0 −0.7 0 1

¸

¸

¸

¸

, C =

0 0

0 0

4. 1.

3. −2.

¸

¸

¸

¸

,

Q =

¸

1 0

0 1

, ρ =

¸

0.9 0.1

0.05 0.2

, G =

¸

0

0

produces output:

Π =

1. 0. 0. 0.

0. 1. 0. 0.

0. 1.225 −21.0857 3.15714

0. 0.7 −3. 2.

¸

¸

¸

¸

, M =

**0. 1.225 −21.0857 3.15714
**

0. 0.7 −3. 2.

0. 0. 0.9 0.1

0. 0. 0.05 0.2

¸

¸

¸

¸

Usage Notes for King-Watson Algorithm

1. Identify predetermined variables

2. Cast the model in King-Watson form: endogenous variable must have at most one lead and no lag.

3. Create matlab “system” and “driver” programs generating the input matrices. “system” generates

(A, B, C

i

, nx = n

x

, ny = m), and a matlab vector containing indices corresponding to predetermined

variables. “driver” generates (Q, ρ, G).

4. Call resolkw with sytem and driver ﬁlenames as inputs to generate Π, M,

¯

G

Appendix B.2.1 King-Watson to Anderson-Moore

Obtaining Anderson-Moore inputs from King-Watson inputs

θ := θ, xt :=

2

6

6

4

Λt

kt

xt

δt

3

7

7

5

, zt :=

2

4

0

0

t

3

5

,

Ψ :=

ˆ

0 0 G

˜

Υ :=

ˆ

0 0 ρ

˜

ˆ

B1 B2

˜

:= B,

ˆ

A1 A2

˜

:= A

H :=

2

4

0 −B2 0 0 −B1 A2 −C0 0

0 0 0 0 0 0 −I Q . . .

0 0 0 0 −ρ 0 0 I

A1 0 −C1 0 0 0 −C

θ

0

0 0 0 0 . . . 0 0 0 0

−G 0 0 0 0 0 0 0

3

5

May 24, 2006 17

Obtaining King-Watson outputs from Anderson-Moore outputs

2

6

6

4

0

ˆ

Π

Λk

0

ˆ

Π

Λδ

0 M

kk

0 M

kδ

0

ˆ

Π

xk

0

ˆ

Π

xδ

0 0 0 ρ

3

7

7

5

:= B

M :=

»

M

kk

M

kδ

0 ρ

–

Π :=

ˆ

ˆ

Π

Λk

ˆ

Π

Λδ

˜

M

−1

¯

G :=

ˆ

0 0 0 I

˜

ΦΨ

Appendix B.2.2 Anderson-Moore to King-Watson

Obtaining King-Watson inputs from Anderson-Moore inputs

yt :=

2

6

4

xt−τ

.

.

.

x

t+θ−1

3

7

5, Λt :=

2

6

4

xt

.

.

.

x

t+θ−1

3

7

5kt :=

2

6

4

xt−τ

.

.

.

xt−1

3

7

5, xt := zt

A :=

2

6

6

6

4

I 0

.

.

.

.

.

.

I 0

0 . . . 0 −H

θ

3

7

7

7

5

B :=

2

6

6

6

4

0 I

.

.

.

.

.

.

0 I

H−τ H−τ+1 . . . H

θ−1

3

7

7

7

5

C :=

»

0

Ψ

–

, Q := I, ρ := 0, G := 0, p := Lτ, θ := 1, m := L(τ +θ)

Obtaining Anderson-Moore outputs from King-Watson outputs

»

B ϑ

0 ρ

–

:= M,

2

6

6

6

4

I 0

B1 ϑ

.

.

.

.

.

.

B

θ

B

(θ−1)R

ϑ

3

7

7

7

5

:= Π

May 24, 2006 18

Appendix B.3 Sims

Inputs

Γ

0

y

t

= Γ

1

y

t−1

+C + Ψz

t

+ Πη

t

Model Variable Description Dimensions

y

t

State Variables L ×1

z

t

Exogenous Variables M

1

×1

η

t

Expectational Error M

2

×1

Γ

0

Structural Coeﬃcients Matrix L ×L

Γ

1

Structural Coeﬃcients Matrix L ×L

C Constants L ×1

Ψ Structural Exogenous Variables Coeﬃcients Ma-

trix

L ×M

1

Π Structural Expectational Errors Coeﬃcients Ma-

trix

L ×M

2

Outputs

y

t

= Θ

1

y

t−1

+ Θ

c

+ Θ

0

z

t

+ Θ

y

∞

¸

s=1

Θ

s−1

f

Θ

z

E

t

z

t+s

Model Variable Description Dimensions

Θ

1

L ×L

Θ

c

L ×1

Θ

0

L ×M

1

Θ

y

L ×M

2

Θ

f

M

2

×M

2

Θ

z

M

2

×M

1

May 24, 2006 19

Sims input:

Γ

0

=

1 0 0 0

0 1 0 0

−1.1 0 1 1.

0 1 0 0

¸

¸

¸

¸

, Γ

1

=

0 0 1 0

0 0 0 1

0 0 0 0

0 0.7 0 0

¸

¸

¸

¸

, C =

0

0

0

0

¸

¸

¸

¸

,

Ψ =

0 0

0 0

4. 1.

3. −2.

¸

¸

¸

¸

, Π =

1 0

0 1

0 0

0 0

¸

¸

¸

¸

produces output:

Θ

c

0.

0.

0.

0.

¸

¸

¸

¸

, Θ

0

=

1.61364 −4.40909

3. −2.

3.675 −2.45

2.1 −1.4

¸

¸

¸

¸

, Θ

f

=

¸

0.909091 1.23405

0. 0.

,

Θ

y

=

1.28794 1.74832

−2.2204510

−16

−1.1102210

−16

1.41673 0.702491

−1.1102210

−16

1.22066

¸

¸

¸

¸

, Θ

z

=

¸

−0.0796719 −2.29972

2.4577 −1.63846

Θ

y

Θ

z

=

4.19421 −5.82645

−2.5516810

−16

6.9254610

−16

1.61364 −4.40909

3. −2.

¸

¸

¸

¸

,

Usage Notes for Sims Algorithm

1. Identify predetermined variables

2. Cast the model in Sims form: at most 1 lag and 0 leads of endogenous variables.

3. Create the input matrices Γ

0

, Γ

1

, C, Ψ, and Π

4. Call gensys to generate Θ

1

, Θ

c

, Θ

0

, Θ

y

, Θ

f

, Θ

z

Appendix B.3.1 Sims to Anderson-Moore

Obtaining Anderson-Moore inputs from Sims inputs

xt+s :=

»

yt+s

ηt+s

–

−x

∗

, zt :=

»

zt

1

–

, Ψ :=

»

Ψ C

0 0

–

H :=

»

−Γ1 0 Γ0 −Π 0 0

0 0 0 0 0 I

–

May 24, 2006 20

Obtaining Sims outputs from Anderson-Moore outputs

Θ1 :=

2

6

6

6

4

0

L(τ−1)×L

I

L(τ−1)

B

.

.

.

B

θ

0

L(τ+θ)×Lθ

3

7

7

7

5

ˆ

Θ0 Θc

˜

:=

2

6

6

6

6

6

4

0

L(τ−1)×L

I

BR

.

.

.

B

θR

3

7

7

7

7

7

5

ΦΨ

∃S Θ

f

= SFS

−1

, ΘyΘz =

2

6

6

6

4

0

L(τ−1)×L

ˆ xt

.

.

.

ˆ x

t+θ+1

3

7

7

7

5

where ˆ xt’s come from iterating equation Appendix B.1 forward θ + 1 time periods with zt+s = 0, s = 1

and zt+1 = I

Appendix B.3.2 Anderson-Moore to Sims

Obtaining Sims inputs from Anderson-Moore inputs

yt :=

2

6

4

xt−τ

.

.

.

x

t+θ−1

3

7

5, zt := zt

Γ0 :=

2

6

6

6

4

I 0

.

.

.

.

.

.

I 0

0 H0 . . . −H

θ

3

7

7

7

5

Γ1 :=

2

6

6

6

4

0 I

.

.

.

.

.

.

0 I

H−τ . . . H−1 0 0

3

7

7

7

5

Π :=

2

4

0

L(τ−1)×Lθ

I

Lθ

0

L×Lθ

3

5

Ψ :=

»

0

Lτ×Lθ

Ψ

–

Obtaining Anderson-Moore outputs from Sims outputs

ˆ

B

˜

:= Θ1,

ˆ

ΦΨ

˜

:= ΘyΘ

f

May 24, 2006 21

Appendix B.4 Klein

Inputs

AEx

t+1

= Bx

t

+Cz

t+1

+Dz

t

z

t+1

= φz

t

+

t

x

t

=

¸

k

t

u

t

**Model Variable Description Dimensions
**

x

t

Endogenous Variables L ×1

z

t

Exogenous Variables M ×1

n

k

The number of state Variables M ×1

k

t

State Variables n

k

×1

u

t

The Non-state Variables (L −n

k

) ×1

A Structural Coeﬃcients Matrix for Future Vari-

ables

L ×L

B Structural Coeﬃcient Matrix for contemporane-

ous Variables

L ×L

Outputs

u

t

= Fk

t

k

t

= Pk

t−1

Model Variable Description Dimensions

F Decision Rule (L −n

k

) ×n

k

P Law of Motion n

k

×n

k

Klein input:

A =

1 0 0 0

0 1 0 0

0 0 −1 −1.

0 0 0 0

¸

¸

¸

¸

, B =

0 0 1 0

0 0 0 1

0 0 −1.1 0

0 −0.7 0 1

¸

¸

¸

¸

, C =

0 0

0 0

4. 1.

3. −2.

¸

¸

¸

¸

,

produces output:

P =

¸

0.9 0.1

0.05 0.2

, F =

¸

−21.0857 3.15714

−3. 2.

,

Usage Notes for Klein Algorithm

1. Identify predetermined variables. Order the system so that predetermined variables come last.

2. Create the input matrices A, B.

a

3. Call solab with the input matrices and the number of predetermined variables to generate F and P

a

The Gauss version allows additional functionality.

May 24, 2006 22

Appendix B.4.1 Klein to Anderson-Moore

Obtaining Anderson-Moore inputs from Klein inputs

θ := θ, xt :=

2

4

kt

ut

zt

3

5

, zt :=

»

zt

zt+1

–

Ψ :=

ˆ

D C

˜

, Υ :=

»

φ

φ

–

ˆ

B1 B2

˜

:= B,

ˆ

A1 A2

˜

:= A

H :=

2

4

0 −B2 0 0 −B1 A2 −C0 0

0 0 0 0 0 0 −I Q . . .

0 0 0 0 −ρ 0 0 I

A1 0 −C1 0 0 0 −C

θ

0

0 0 0 0 . . . 0 0 0 0

−G 0 0 0 0 0 0 0

3

5

Obtaining Klein outputs from Anderson-Moore outputs

2

6

6

4

0

ˆ

F

Λk

0

ˆ

F

Λδ

0 P

kk

0 P

kδ

0

ˆ

F

xk

0

ˆ

F

xδ

0 0 0 ρ

3

7

7

5

:= B

P :=

»

P

kk

P

kδ

0 ρ

–

F :=

ˆ

ˆ

F

Λk

ˆ

F

Λδ

˜

P

−1

¯

G :=

ˆ

0 0 0 I

˜

ΦΨ

Appendix B.4.2 Anderson-Moore to Klein

Obtaining Klein inputs from Anderson-Moore inputs

yt :=

2

6

4

xt−τ

.

.

.

x

t+θ−1

3

7

5, Λt :=

2

6

4

xt

.

.

.

x

t+θ−1

3

7

5kt :=

2

6

4

xt−τ

.

.

.

xt−1

3

7

5

A :=

2

6

6

6

4

I 0

.

.

.

.

.

.

I 0

0 . . . 0 −H

θ

3

7

7

7

5

B :=

2

6

6

6

4

0 I

.

.

.

.

.

.

0 I

H−τ H−τ+1 . . . H

θ−1

3

7

7

7

5

C :=

»

0

Ψ

–

, Q := I, ρ := 0, G := 0

May 24, 2006 23

Obtaining Anderson-Moore outputs from Klein outputs

2

4

0 I 0

B ΦΨ

0 0

3

5

:= P,

2

6

6

6

4

I 0

B1L B1R ΦΨ

.

.

.

.

.

.

.

.

.

B

θL

B

θR

B

(θ−1)R

ΦΨ

3

7

7

7

5

:= F

May 24, 2006 24

Appendix B.5 Binder-Pesaran

Inputs

ˆ

Cx

t

=

ˆ

Ax

t−1

+

ˆ

BEx

t+1

+D

1

w

t

+D

2

w

t+1

w

t

= Γw

t−1

Model Variable Description Dimensions

x

t

State Variables L ×1

w

t

Exogenous Variables M ×1

ˆ

A Structural Coeﬃcients Matrix L ×L

ˆ

B Exogenous Shock Coeﬃcients Matrix L ×L

ˆ

C Exogenous Shock Coeﬃcients Matrix L ×M

D

1

Exogenous Shock Coeﬃcients Matrix L ×M

D

2

Exogenous Shock Coeﬃcients Matrix L ×M

Γ Exogenous Shock Coeﬃcients Matrix L ×M

Outputs

x

t

= Cx

t−1

+Hw

t

x

t

= Cx

t−1

+

∞

¸

i=0

F

i

E(w

t+1

)

Model Variable Description Dimensions

C reduced form codifying convergence constraints L ×L(τ +θ)

H exogenous shock scaling matrix L ×M

Binder-Peseran input:

A =

¸

0 0

0 0.7

B =

¸

−1 −1.

0 0

C =

¸

−1.1 0

0 1

D

1

=

¸

4. 1.

3. −2.

D

2

=

¸

0 0

0 0

Γ =

¸

0.9 0.1

0.05 0.2

produces output:

C =

¸

0. 1.225

0. 0.7

H =

¸

21.0857 −3.15714

3. −2.

**Usage Notes for Binder-Pesaran Algorithm
**

1. Cast model in Binder-Pesaran form: at most one lead and one lag of endogenous variables. The matrix

ˆ

C must be nonsingular.

2. Modify existing Matlab script implementing Binder-Pesaran’s Recursive Method to create the input

matrices

ˆ

A,

ˆ

B,

ˆ

C, D

1

, D

2

Γ and to update the appropriate matrices in the “while loop.”

8

3. Call the modiﬁed script to generate C and H and F.

May 24, 2006 25

Appendix B.5.1 Binder-Pesaran to Anderson-Moore

Obtaining Anderson-Moore inputs from Binder-Pesaran inputs

xt := xt, zt :=

»

wt

wt+1

–

H :=

ˆ

−

ˆ

A

ˆ

C

ˆ

B

˜

Υ :=

»

Γ

Γ

–

, Ψ :=

ˆ

D1 D2

˜

Obtaining Binder-Pesaran outputs from Anderson-Moore outputs

C := B, H := ΦΨ

Appendix B.5.2 Anderson-Moore to Binder-Pesaran

Obtaining Binder-Pesaran inputs from Anderson-Moore inputs

A :=

2

4

−I

L(τ−1)

0

L(τ−1)×L(θ−1)

0

L(θ−1)

0

L(θ−1)×L(θ−1)

H−τ 0

L×L(τ+θ−2)

3

5

B :=

2

4

0

L(τ−1)

0

L(τ−1)×L(θ−1)

I

L(θ−1)×L

0

L(θ−1)

0

L×L(θ+τ−2)

H

θ

3

5

C :=

2

4

I

L(τ−1)

0

L(τ−1)×L(θ−1)

0

L(τ−1)×L

0

L(θ−1)×L

0

L(θ−1)

I

L(θ−1)×L(θ−1)

H−τ+1 . . . H

θ−1

3

5

D1 :=

»

0

Ψ

–

, D2 = 0, Γ = 0

Obtaining Anderson-Moore outputs from Binder-Pesaran outputs

2

6

6

6

6

6

4

0

ΦΨ

BRΦΨ

.

.

.

B

(θ−1)R

ΦΨ

3

7

7

7

7

7

5

:= H

2

6

6

6

4

0 I 0

B 0

.

.

. 0

B

θ

0

3

7

7

7

5

:= C

May 24, 2006 26

Appendix B.6 Uhlig

Inputs

Ax

t

+Bx

t−1

+Cy

t

+Dz

t

= 0

E(Fx

t+1

+Gx

t

+Hx

t−1

+Jy

t+1

+Ky

t

+Lz

t+1

+Mz

t

)

z

t+1

= Nz

t

+

t+1

E

t+1

= 0

Model Variable Description Dimensions

x

t

State Variables m×1

y

t

Endogenous “jump” Variables n ×1

z

t

Exogenous Variables k ×1

A, B Structural Coeﬃcients Matrix l ×m

C Structural Coeﬃcients Matrix l ×n

D Structural Coeﬃcients Matrix l ×k

F, G, H Structural Coeﬃcients Matrix (m+n −l) ×m

J, K Structural Coeﬃcients Matrix (m+n −l) ×n

L, M Structural Coeﬃcients Matrix m+n −l ×k

N Structural Coeﬃcients Matrix k ×k

Outputs

x

t

= Px

t−1

+Qz

t

y

t

= Rx

t−1

+Sz

t

Model Variable Description Dimensions

P m×m

Q m×k

R n ×m

S n ×k

For Uhlig cannot ﬁnd C with the appropriate rank condition without augmenting the system with a “dummy

variable” and equation like W

t

= D

t

+V

t

.

Uhlig input:

A =

¸

1 0

1 −1

B =

¸

−0.7 0

0 0

C =

¸

0

1

D =

¸

−3. 2.

0 0

F =

1. 0

G =

0 0

H =

0 0

J =

1

K =

−1.1

L =

0 0

M =

−4. −1.

N =

¸

0.9 0.1

0.05 0.2

produces output:

P =

¸

0.7 0.

1.925 6.1629810

−33

Q =

¸

3. −2.

24.0857 −5.15714

R =

1.225 6.1629810

−33

S =

21.0857 −3.15714

May 24, 2006 27

Usage Notes for Uhlig Algorithm

1. Cast model in Uhlig Form. One must be careful to choose endogenous “jump” and and “non jump” variables to

guarantee that the C matrix is appropriate rank. The rank null space must be (l −n) where l is the number of

equations with no lead variables,(the row dimension of C) and n is the total number of “jump” variables(thee

column dimension of C).

2. Create matrices A, B, C, D, F, G, H, J, K, L, M, N

3. Call the function do it to generate P, Q, R, S, W

Appendix B.6.1 Uhlig to Anderson-Moore

Obtaining Anderson-Moore inputs from Uhlig inputs

xt :=

»

xt

yt

–

H :=

»

B 0 A C D 0 0

H 0 G K F J 0

–

Υ := N, Ψ := L,

Obtaining Uhlig outputs from Anderson-Moore outputs

»

R

P

–

:= B,

»

Q

S

–

:= ΦΨ

Appendix B.6.2 Anderson-Moore to Uhlig

Obtaining Uhlig inputs from Anderson-Moore inputs

A :=

2

4

−I

L(τ−1)

0

L(τ−1)×L(θ−1)

0

L(θ−1)

0

L(θ−1)×L(θ−1)

H−τ 0

L×L(τ+θ−2)

3

5

B :=

2

4

0

L(τ−1)

0

L(τ−1)×L(θ−1)

I

L(θ−1)×L

0

L(θ−1)

0

L×L(θ+τ−2)

H

θ

3

5

C :=

2

4

I

L(τ−1)

0

L(τ−1)×L(θ−1)

0

L(τ−1)×L

0

L(θ−1)×L

0

L(θ−1)

I

L(θ−1)×L(θ−1)

H−τ+1 . . . H

θ−1

3

5

ﬁnd permutation matrices L, R

»

B 0 A C 0 0

H 0 G K F J

–

= LH(I3 ⊗R)

and such that with l row dimension of C and n the column dimension of C

rank(nullSpace(C

T

)) = l −n

ˆ

DM

˜

=

»

0

LΨ

–

May 24, 2006 28

Obtaining Anderson-Moore outputs from Uhlig outputs

B :=

»

P

R

–

ΦΨ :=

»

Q

S

–

May 24, 2006 29

Appendix C Computational Eﬃciency

Table 2: Matlab Flop Counts: Uhlig Examples

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

uhlig 0 (2514, 4098) 3.38385 23.9387 28.8007 88.7832 0.922832

(4,1,1) (8,4) (8,4) (8,3) 4 (3,1)

uhlig 1 (6878, 9817) 3.81346 46.9301 29.1669 15347.3 1.88979

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 2 (6798, 9773) 3.292 52.2499 35.7318 6468.78 1.80494

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 3 (112555, 276912) 3.62782 40.5874 22.6611 205753. 16.7633

(14,1,1) (28,14) (28,14) (28,13) 14 (10,4)

uhlig 4 (6850, 9789) 3.58292 45.675 29.5441 16250.5 1.89752

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 5 (6850, 9789) 3.58292 45.675 29.5441 16250.5 1.89752

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 6 (7809, 26235) 2.53182 47.5947 46.3196 190.905 3.42169

(6,1,1) (12,6) (12,6) (12,4) 6 (4,2)

uhlig 7 (82320, 108000) 2.8447 48.1792 28.9946 131.785 2.60423

(13,1,1) (26,13) (26,13) (26,11) 13 (11,2)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column normalized

by dividing by comparable AIM value.

May 24, 2006 30

Table 3: Matlab Flop Counts: King & Watson, Klein, Binder & Peseran and Sims Examples

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

king 2 (2288, 2566) 2.36713 9.64904 20.7072 165.623 1.76311

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 3 (1028, 2306) 2.81809 21.1284 77.1858 368.545 NA

(3,1,1) (6,3) (6,3) (6,-1) 3 (2,1)

king 4 (40967, 146710) 5.2211 39.8819 30.493 185.299 14.2569

(9,1,1) (18,9) (18,9) (18,5) 9 (3,6)

sims 0 (3501, 5576) 4.70923 59.9526 59.2871 NA 5.72694

(5,1,1) (10,5) (10,5) (10,3) 5 (4,1)

sims 1 (16662, 39249) 4.20292 56.8375 48.4135 NA 9.2849

(8,1,1) (16,8) (16,8) (16,5) 8 (6,2)

klein 1 (3610, 15147) 2.13213 10.2756 16.7828 35555.2 2.30526

(3,1,1) (6,3) (6,3) (6,2) 3 (1,2)

bp 1 (533, 1586) 3.1257 13.8987 59.7749 613.814 2.88743

(2,1,1) (4,2) (4,2) (4,0) 2 (1,1)

bp 2 (3510, 5659) 3.5265 77.2456 67.7846 800.98 21.4259

(5,1,1) (10,5) (10,5) (10,1) 5 (4,1)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column normalized

by dividing by comparable AIM value.

May 24, 2006 31

Table 4: Matlab Flop Counts: General Examples

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

ﬁrmValue 1 (535, 1586) 2.90467 11.5252 92.9738 534.686 NA

(2,1,1) (4,2) (4,2) (4,0) 2 (1,1)

athan 1 (18475, 70162) 3.75578 135.329 113.463 1385.73 196.251

(9,1,1) (18,9) (18,9) (18,1) 9 (8,1)

fuhrer 1 (352283, 1885325) NA 125.141 148.459 NA NA

(12,1,4) (60,12) (60,48) (60,1) 48 (9,39)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column normalized

by dividing by comparable AIM value.

May 24, 2006 32

Table 5: Using Anderson-Moore to Improve Matrix Polynomial Methods

Model(m,n) Uhlig AM/Uhlig

uhlig 0(3,1) 2320 2053

uhlig 1(5,1) 12998 12731

uhlig 2(5,1) 12270 12003

uhlig 3(10,4) 1886798 304534

uhlig 4(5,1) 12998 12731

uhlig 5(5,1) 12998 12731

uhlig 6(4,2) 26720 24233

uhlig 7(11,2) 214380 213450

Note:

m, n See Appendix B.6

May 24, 2006 33

Appendix D Accuracy

Table 6: Matlab Relative Errors in B: Uhlig Examples

B

i

−B

exact

B

exact

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

uhlig 0 1 := 1.69446 10

−16

8. 40. 11. 134848. 49.

(4,1,1) (8,4) (8,4) (8,3) 4 (3,1)

uhlig 1 1 := 3.82375 10

−15

0.518098 2.63371 0.617504 10217.1 3.32793

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 2 1 := 4.88785 10

−15

1. 2.00589 6.43964 9520.85 2.82951

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 3 1 := 1.15015 10

−15

2.65517 1.78046 1.45676 8.57994 10

14

20.9358

(14,1,1) (28,14) (28,14) (28,13) 14 (10,4)

uhlig 4 1 := 1.18357 10

−15

2.2561 1.00348 14.7909 10673.4 34.6115

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 5 1 := 1.18357 10

−15

2.2561 1.00348 14.7909 10673.4 34.6115

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 6 1 := 1.60458 10

−15

7.32281 38.3459 13.4887 7.63524 10

13

203.291

(6,1,1) (12,6) (12,6) (12,4) 6 (4,2)

uhlig 7 1 := 2.33332 10

−14

7.59657 4.15248 0.335751 NA 2.63499

(13,1,1) (26,13) (26,13) (26,11) 13 (11,2)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column normalized

by dividing by comparable AIM value.

May 24, 2006 34

Table 7: Matlab Relative Errors in B: King & Watson and Sims Examples

B

i

−B

exact

B

exact

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M2) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

king 2 1 := 0. 3.65169 10

−16

1.61642 10

−16

0. 0. 0

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 3 1 := 0. 0. 2.22045 10

−16

2.49183 10

−15

0. NA

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 4 1 := 2.25 10

−15

3.99199 10

−15

1.48492 10

−15

2.0552 10

−15

6.38338 10

−16

2.63969 10

−9

(9,1,1) (18,9) (18,9) (18,5) 9 (3,6)

sims 0 1 := 0. 8.84516 10

−17

3.57788 10

−16

2.34864 10

−16

NA 5.55112 10

−17

(5,1,1) (10,5) (10,5) (10,3) 5 (4,1)

sims 1 1 := 1.18603 10

−15

7.58081 10

−16

9.08685 10

−16

1.28298 10

−15

NA 8.11729 10

−16

(8,1,1) (16,8) (16,8) (16,6) 8 (6,2)

klein 1 1 := 1.29398 10

−14

3.2259 10

−14

8.72119 10

−14

1.76957 10

−13

4.54742 10

−10

4.61522 10

−13

(3,1,1) (6,3) (6,3) (6,1) 3 (1,2)

bp 1 1 := 1.54607 10

−15

2.82325 10

−15

5.10874 10

−15

1.23685 10

−14

5.95443 10

−10

1.06208 10

−14

(2,1,1) (4,2) (4,2) (4,0) 2 (1,1)

bp 2 1 := 5.00765 10

−16

5.15101 10

−15

5.52053 10

−15

5.21764 10

−15

7.14801 10

−16

3.98148 10

−14

(5,1,1) (10,5) (10,5) (10,1) 5 (4,1)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column not nor-

malized by dividing by comparable AIM value.

May 24, 2006 35

Table 8: Matlab Relative Errors in ϑ: Uhlig Examples

ϑ

i

−ϑ

exact

ϑ

exact

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

uhlig 0 1 := 9.66331 10

−16

0.532995 NA 1.34518 159728. 9.54822

(4,1,1) (8,4) (8,4) (8,3) 4 (3,1)

uhlig 1 1 := 2.25938 10

−15

3.78248 NA 2.32241 5.30459 10

6

2.33909

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 2 1 := 5.05614 10

−15

0.43572 NA 2.62987 1.51637 10

6

1.

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 3 1 := 8.93 10

−16

2.08127 NA 1.39137 NA 14.3934

(14,1,1) (28,14) (28,14) (28,13) 14 (10,4)

uhlig 4 1 := 0.0114423 1. NA 1. 0.999999 1.

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 5 1 := 1.37388 10

−15

3.79327 NA 9.97923 8.03937 10

6

12.2245

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 6 1 := 6.97247 10

−14

1.00929 NA 1.83538 1.12899 10

13

27.5959

(6,1,1) (12,6) (12,6) (12,4) 6 (4,2)

uhlig 7 1 := 7.47708 10

−14

1.67717 NA 0.131889 NA 0.665332

(13,1,1) (26,13) (26,13) (26,11) 13 (11,2)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column normalized

by dividing by comparable AIM value.

May 24, 2006 36

Table 9: Matlab Relative Errors in ϑ: King & Watson and Sims Examples

ϑ

i

−ϑ

exact

ϑ

exact

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M2) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

king 2 1 := 4.996 10

−16

4.996 10

−16

NA 4.996 10

−16

4.996 10

−16

4.996 10

−16

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 3 1 := 2.58297 10

−15

5.02999 10

−15

NA 6.42343 10

−15

2.58297 10

−15

NA

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 4 1 := 3.45688 10

−16

1.12445 10

−15

NA 8.40112 10

−16

2.65811 10

−16

2.47355 10

−9

(9,1,1) (18,9) (18,9) (18,5) 9 (3,6)

sims 0 1 := 7.66638 10

−16

7.85337 10

−16

NA 9.53623 10

−16

NA 7.66638 10

−16

(5,1,1) (10,5) (10,5) (10,3) 5 (4,1)

sims 1 1 := 9.73294 10

−16

1.57671 10

−15

NA 4.23589 10

−15

NA 9.73294 10

−16

(8,1,1) (16,8) (16,8) (16,6) 8 (6,2)

klein 1 1 := 2.33779 10

−15

1.9107 10

−14

NA 1.07269 10

−13

2.3967 10

−9

2.66823 10

−13

(3,1,1) (6,3) (6,3) (6,1) 3 (1,2)

bp 1 1 := 1.4802 10

−15

8.91039 10

−15

NA 1.20169 10

−14

5.55737 10

−9

1.16858 10

−14

(2,1,1) (4,2) (4,2) (4,0) 2 (1,1)

bp 2 1 := 6.1809 10

−16

1.25347 10

−15

NA 1.3268 10

−15

6.1809 10

−16

8.9026 10

−15

(5,1,1) (10,5) (10,5) (10,1) 5 (4,1)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column not nor-

malized by dividing by comparable AIM value.

May 24, 2006 37

Table 10: Matlab Relative Errors in F: Uhlig Examples

F

i

−F

exact

F

exact

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

uhlig 0 1 := 4.60411 10

−16

NA 38.383 NA 6280.03 NA

(4,1,1) (8,4) (8,4) (8,3) 4 (3,1)

uhlig 1 1 := 6.12622 10

−16

NA 4.25457 NA 3126.17 NA

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 2 1 := 6.13246 10

−16

NA 3.58093 NA 3918.23 NA

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 3 1 := 7.28843 10

−16

NA 12.9392 NA 1.40986 10

15

NA

(14,1,1) (28,14) (28,14) (28,13) 14 (10,4)

uhlig 4 1 := 6.03573 10

−16

NA 2.73637 NA 1028.17 NA

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 5 1 := 6.03573 10

−16

NA 2.73637 NA 1028.17 NA

(6,1,1) (12,6) (12,6) (12,5) 6 (5,1)

uhlig 6 1 := 6.0499 10

−16

NA 308.153 NA 1.65292 10

13

NA

(6,1,1) (12,6) (12,6) (12,4) 6 (4,2)

uhlig 7 1 := 7.52423 10

−14

NA 1.72491 NA NA NA

(13,1,1) (26,13) (26,13) (26,11) 13 (11,2)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column normalized

by dividing by comparable AIM value.

May 24, 2006 38

Table 11: Matlab Relative Errors in F: King & Watson and Sims Examples

F

i

−F

exact

F

exact

Model AIM KW Sims Klein BP Uhlig

(L, τ, θ) (m, p) (L, M

2

) (L, n

k

) L (m, n)

Computes (B, ϑ, φ, F) (B, ϑ) (B, φ, F) (B) (B, ϑ, φ, F) (B, ϑ)

king 2 1 := 7.49401 10

−16

NA 1.36349 10

−15

NA 7.49401 10

−16

NA

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 3 1 := 3.9968 10

−16

NA 6.53472 10

−15

NA 3.9968 10

−16

NA

(3,1,1) (6,3) (6,3) (6,1) 3 (2,1)

king 4 1 := 2.13883 10

−15

NA 2.58686 10

−15

NA 7.29456 10

−16

NA

(9,1,1) (18,9) (18,9) (18,5) 9 (3,6)

sims 0 1 := 7.13715 10

−16

NA 1.1917 10

−15

NA NA NA

(5,1,1) (10,5) (10,5) (10,3) 5 (4,1)

sims 1 1 := 2.41651 10

−15

NA 2.92152 10

−15

NA NA NA

(8,1,1) (16,8) (16,8) (16,6) 8 (6,2)

klein 1 1 := 1.24387 10

−15

NA 1.66911 10

−13

NA 3.70873 10

−10

NA

(3,1,1) (6,3) (6,3) (6,1) 3 (1,2)

bp 1 1 := 4.43937 10

−16

NA 7.51277 10

−15

NA 5.03025 10

−11

NA

(2,1,1) (4,2) (4,2) (4,0) 2 (1,1)

bp 2 1 := 5.82645 10

−16

NA 1.71285 10

−15

NA 5.82645 10

−16

NA

(5,1,1) (10,5) (10,5) (10,1) 5 (4,1)

Note:

L, τ, θ, B, ϑ, φ, F See Appendix B.1

m, p See Appendix B.2

L, M2 See Appendix B.3

L, n

k

See Appendix B.4

L See Appendix B.5

m, n See Appendix B.6

KW, Sims, Klein, BP, Uhlig column not nor-

malized by dividing by comparable AIM value.

**Solving Linear Rational Expectations Models: A Horse Race
**

Gary S. Anderson∗ Board of Governors of the Federal Reserve System May 24, 2006

Abstract This paper compares the functionality, accuracy, computational eﬃciency, and practicalities of alternative approaches to solving linear rational expectations models, including the procedures of (Sims, 1996), (Anderson and Moore, 1983), (Binder and Pesaran, 1994), (King and Watson, 1998), (Klein, 1999), and (Uhlig, 1999). While all six procedures yield similar results for models with a unique stationary solution, the AIM algorithm of (Anderson and Moore, 1983) provides the highest accuracy; furthermore, this procedure exhibits signiﬁcant gains in computational eﬃciency for larger-scale models.

∗ I would like to thank Robert Tetlow, Andrew Levin and Brian Madigan for useful discussions and suggestions. I would like to thank Ed Yao for valuable help in obtaining and installing the MATLAB code. The views expressed in this document are my own and do not necessarily reﬂect the position of the Federal Reserve Board or the Federal Reserve System.

1

May 24, 2006

2

1

Introduction and Summary

Since (Blanchard and Kahn, 1980) a number of alternative approaches for solving linear rational expectations models have emerged. This paper describes, compares and contrasts the techniques of (Anderson, 1997; Anderson and Moore, 1983, 1985), (Binder and Pesaran, 1994), (King and Watson, 1998), (Klein, 1999), (Sims, 1996), and (Uhlig, 1999). All these authors provide MATLAB code implementing their algorithm.1 The paper compares the computational eﬃciency, functionality and accuracy of these MATLAB implementations. The paper uses numerical examples to characterize practical diﬀerences in employing the alternative procedures. Economists use the output of these procedures for simulating models, estimating models, computing impulse response functions, calculating asymptotic covariance, solving inﬁnite horizon linear quadratic control problems and constructing terminal constraints for nonlinear models. These applications beneﬁt from the use of reliable, eﬃcient and easy to use code. A comparison of the algorithms reveals that: • For models satisfying the Blanchard-Kahn conditions, the algorithms provide equivalent solutions.2 • The Anderson-Moore algorithm requires fewer ﬂoating point operations to achieve the same result. This computational advantage increases with the size of the model. • While the Anderson-Moore, Sims and Binder-Pesaran approaches provide matrix output for accommodating arbitrary exogenous processes, the King-Watson and Uhlig implementations only provide solutions for VAR exogenous process.3 Fortunately, there are straightforward formulae for augmenting the King-Watson Uhlig and Klein approaches with the matrices characterizing the impact of arbitrary shocks. • The Anderson-Moore suite of programs provides a simple modeling language for developing models. In addition, the Anderson-Moore implementation requires no special treatment for models with multiple lags and leads. To use each of the other algorithms, one must cast the model in a form with at most one lead or lag. This can be a tedious and error prone task for models with more than a couple of equations. • Using the Anderson-Moore algorithm to solve the quadratic matrix polynomial equation improves the performance of both Binder-Pesaran’s and Uhlig’s algorithms. Section 2 states the problem and introduces notation. This paper divides the algorithms into three categories: eigensystem, QZ, and matrix polynomial methods. Section 3 describes the eigensystem methods. Section 4 describes applications of the QZ algorithm. Section 5 describes applications of the matrix polynomial approach. Section 6 compares the computational eﬃciency, functionality and accuracy of the algorithms. Section 7 concludes the paper. The appendices provide usage notes for each of the algorithms as well as information about how to compare inputs and outputs from each of the algorithms.

1 Although (Broze, Gouri´roux, and Szafarz, 1995) and (Zadrozny, 1998) describe algorithms, I was unable to locate code e implementing the algorithms. 2 (Blanchard and Kahn, 1980) developed conditions for existence and uniqueness of linear rational expectations models. In their setup, the solution of the rational expectations model is unique if the number of unstable eigenvectors of the system is exactly equal to the number of forward-looking (control) variables. 3 I modiﬁed Klein’s MATLAB version to include this functionality by translating the approach he used in his Gauss version.

one can easily compute other quantities useful for characterizing the impact of exogenous variables. if any. 1999. and xt is an L dimensional vector of endogenous variables with t→∞ (2) lim xt < ∞ (3) and zt is a k dimensional vector of exogenous variables. 1997) for other useful formulae concerning rational expectations model solutions. . t = 0. . i = −τ. (4) Solutions can be cast in the form −1 (xt − x∗ ) = i=−τ Bi (xt+i − x∗ ) Given any algorithm that computes the Bi . . . See the bibliography for the relevant URL’s. . . For models with τ = θ = 1 the formulae are especially simple. . . ∞ i=−τ (1) with initial conditions. 2006 3 2 Problem Statement and Notation These algorithms compute solutions for models of the form θ Hi xt+i = Ψzt .May 24. Let Φ = (H0 + H1 B)−1 F = −ΦH1 B We can write ∞ (xt − x∗ ) = B(xt−1 − x∗ ) + s=0 F s φψzt+s and when zt+1 = Υzt vec(ϑ) = (I − ΥT ⊗ F )−1 vec(ΦΨ) (xt − x∗ ) = B(xt−1 − x∗ ) + ϑzt Consult (Anderson. given by constraints of the form xi = xdata . . I downloaded the MATLAB code for each implementation in July. −1 i where both τ and θ are non-negative.

Provided the right hand half of Q is invertible. 2. 1997. Appendix B. King-Watson provide a MATLAB function. Anderson and Moore. Given: AE(yt+1 ) = Byt + Cxt . and auxiliary initial conditions. The matrix Q provides a strategic point of departure for making many rational expectations computations. The uniqueness of solutions to system 1 requires that the transition matrix characterizing the linear system have appropriate numbers of explosive and stable eigenvalues (Blanchard and Kahn. The algorithm assumes that history ﬁxes the values of all variables dated prior to time t and that these initial conditions. the algorithm computes the matrix B. The algorithm determines whether equation 1 has a unique solution. The algorithm produces a matrix Q codifying the linear constraints guaranteeing asymptotic convergence. Combining the constraints provided by: (a) the initial conditions. 1985). resolkw. A.2 provides a synopsis of the model concepts and algorithm inputs and outputs. Manipulating equation 1 to compute a state space transition matrix. Although their paper describes how to accommodate arbitrary exogenous shocks. Z. Appendix B. The Anderson-Moore methodology does not explicitly distinguish between predetermined and non-predetermined variables. Appendix A presents pseudocode for the algorithm. 2006 4 3 3. an inﬁnity of solutions or no solutions at all. The mdrkw program computes the solution assuming the exogenous variables follow a vector autoregressive process. The second phase combines left invariant space vectors associated with large eigenvalues of A with the auxiliary initial conditions to produce the matrix Q characterizing the saddle point solution. and the model equations determine all subsequent variable values. The MATLAB function transforms the original system to facilitate the canonical variables calculations. and that the asymptotic linear constraints are linearly independent of explicit and implicit initial conditions (Anderson and Moore. an autoregressive representation of the unique saddle point solution. the saddle point property terminal conditions. 1998) describe another method for solving rational expectations models.1 on page 13 provides a synopsis of the model concepts and algorithm inputs and outputs. the MATLAB function does not return the relevant matrices. Computing the eigenvalues and the invariant space associated with explosive eigenvalues 3. that computes solutions. 3.2 King & Watson’s Canonical Variables/System Reduction Method (King and Watson. (b) auxiliary initial conditions identiﬁed in the computation of the transition matrix and (c) the invariant space vectors The ﬁrst phase of the algorithm computes a transition matrix. The solution methodology entails 1.May 24. 1980). 1985) developed their algorithm in the mid 80’s for solving rational expectations models that arise in large scale macro models. The algorithm consists of two parts: system reduction for eﬃciency and canonical variables solution for solving the saddle point problem.1 Eigensystem Methods The Anderson-Moore Algorithm (Anderson.

and ηt is an expectational error.1 Sims’ QZ Method (Sims. zt is an exogenously evolving. tkk and skk are both zero. satisfying Et ηt+1 = 0. This can be problematic for models with more than a couple of equations. then there exist unitary Q and Z such that QH AZ = T and QH BZ = S are upper triangular. All predetermined variables have historical values available through time t − 1.1 shows one way to compute the King-Watson solution using the Anderson-Moore algorithm. random disturbance.2. B) = C. Here. Future model values may inﬂuence time t values of any variable. 1996) describes the QZ Method. the Anderson-Moore transformation adds Lθ variables and the same number of equations. Appendix B. · · · . 2006 5 system reduction produces an equivalent model of the form 0 = ft + Kdt + Ψf (F )E(xt ) E(dt+1 ) = W dt + Ψd (F )E(xt ) Where dt are the “dynamic” variables and ft are the “ﬂow” variables in the yt vector. Appendix B. The Π designation of expectational errors identiﬁes the “predetermined” variables. Otherwise. For the sake of comparison. t then λ(A. ∞ and C is a vector of constants. Theorem 1 The Complex Generalized Schur Form – If A and B are in C n×n .2.May 24. Appendix B. Appendix B. .3.1 shows one way to transform the problem from Sims’ form to Anderson-Moore form and how to reconcile the solutions. possibly serially correlated. as with all the algorithms except the Anderson-Moore algorithm. setting future expectation errors to zero. one must cast the model in a form with one lag and no leads. λ(A. The evolution of the solution path can have no eﬀect on any variables dated (t − 1) or earlier.3. 3. 2. If for some k. 4 Applications of the QZ Algorithm Several authors exploit the properties of the Generalized Schur Form (Golub and van Loan. 1989).3 summarizes the Sims’ QZ method model concepts and algorithm inputs and outputs. 4. Appendix B. The Anderson-Moore technique does not explicitly require the identiﬁcation of expectational errors. His algorithm solves a linear rational expectations model of the form: Γ0 yt = Γ1 yt−1 + C + Ψzt + Πηt (5) where t = 1. The mdrkw program takes the reduced system produced by redkw and the decomposition of its dynamic subsystem computed by dynkw and computes the rational expectations solution. one chooses the time subscript of the variables that appear in the equations.2 shows one way to transform the problem from Anderson-Moore form to Sims form.2 shows one way to compute the Anderson-Moore solution using the King-Watson algorithm. The computation can use either eigenvalue-eigenvector decomposition or Schur decomposition. B) = { sii : sii = 0} ii The algorithm uses the QZ decomposition to recast equation 5 in a canonical form that makes it possible to solve the transformed system “forward” for endogenous variables consistent with arbitrary values of the future exogenous variables. In applying the AndersonMoore technique.

Although the MATLAB version does not provide solutions for autoregressive exogenous variables.1 Binder & Pesaran’s Method (Binder and Pesaran. the unique stable solution. Generally there are many solutions. Appendix B. When the homogeneous linear system has a unique saddle-path solution.2 describes one way to recast a model from a form suitable for the Anderson-Moore methodology into a form for the Klein Algorithm. H and F. Appendix B. . the Anderson-Moore algorithm constructs the unique matrix CAM = B that satisﬁes the quadratic matrix equation and has all roots inside the unit circle. if it exists.1 describes one way to recast a model from a form suitable for Klein into a form for the Anderson-Moore algorithm.2 Uhlig’s Technique (Uhlig. The algorithm uses generalized eigenvalue calculations to obtain a solution for the matrix polynomial equation.4. 5. 1994) describe another method. According to Binder & Pesaran(1994). 1999) describes another method.5.2 describes one way to recast a model from a form suitable for the Anderson-Moore methodology into a form for the Binder-Pesaran Algorithm. 1999) describes another method. one can solve the autoregressive exogenous variables problem by augmenting the system.5.1 describes one way to recast a model from a form suitable for Binder-Pesaran into a form for the Anderson-Moore algorithm.2 Klein’s Approach (Klein. The MATLAB program does not return matrices for computing the impact of arbitrary exogenous factors. 2 H1 CAM + H0 CAM + H−1 = 0 5.4 summarizes the model concepts and algorithm inputs and outputs.May 24. Appendix B. 5 Applications of the Matrix Polynomial Approach H1 C 2 + H0 C + H−1 = 0. 2006 6 4. under certain conditions. Several algorithms rely on determining a matrix C satisfying (6) They employ linear algebraic techniques to solve this quadratic equation. Appendix B. Their algorithm consists of a “recursive” application of the linear equations deﬁning the relationships between C.4. The algorithm uses the Generalized Schur Decomposition to decouple backward and forward variables of the transformed system. is given by: ∞ xt = Cxt−1 + i=0 F i E(wt+1 ) where F = (I − BC)−1 B and C satisﬁes a quadratic equation like equation 6. Appendix B.

H0 = . Only Anderson-Moore and Binder-Pesaran provide all four outputs (See Table 6).2 describes one way to recast a model from a form suitable for Uhlig into a form for the Anderson-Moore algorithm. 0 H − KC + B C 0B G − KC + A − JC + B C 0A F − JC + A Uhlig’s algorithm undertakes the solution of a quadratic equation like equation 6 with H1 = C 0B 0 C 0A .6.1 describes one way to recast a model from a form suitable for the Anderson-Moore methodology into a form for the Uhlig Algorithm. Uhlig’s algorithm operates on matrices of the form: B H 0 0 A G C K 0 F 0 J Uhlig in eﬀect pre-multiplies the equations by the matrix C0 0 C+ −KC + I where C 0 C = 0.6. Each of the authors provides small illustrative models along with their MATLAB code. φ and F as potential outputs of a linear rational expectation algorithm. 2006 7 One can view the Uhlig technique as preprocessing of the input matrices to reduce the dimension of the quadratic matrix polynomial. H−1 = + G − KC + A − JC + B H − KC + B F − JC A Appendix B. the Anderson-Moore algorithm computes the solution to the matrix polynomial more eﬃciently than the approach adopted in Uhlig’s algorithm. Generally. The next two sections present results from applying all the algorithms to each of the example models. Most require the user to specify models with at most one lag or one lead. Most of the implementations do not compute each of the potential outputs. It turns out that once the simpliﬁcation has been done. C + = (C T C)−1 C T to get C 0B C +B H − KC + B 0 0 0 C 0A C +A G − KC + A 0 I 0 0 0 F 0 0 J one can imagine leading the second block of equations by one period and using them to annihilate J to get 0 0 C 0B 0 C 0A 0 H − KC + B 0 G − KC + A − JC + B 0 F − JC + A 0 0 0 C +B 0 C +A I This step in eﬀect decouples the second set of equations making it possible to investigate the asymptotic properties by focusing on a smaller system. . 6 Comparisons Section 2 identiﬁed B. the implementations make restrictions on the form of the input. Appendix B. ϑ. Only Anderson-Moore explicitly allows multiple lags and leads.May 24.

It had about the same performance as Klein. no lags ˆ one lag. Each of the algorithms.1 Computational Eﬃciency Nearly all the algorithms successfully computed solutions for all the examples. 2006 8 Technique Anderson-Moore King & Watson Sims Klein Binder-Peseran Uhlig B Table 1: Modeling Features ϑ φ. The ﬁrst column of each table identiﬁes the example model. In general. The eﬃciency advantage was especially pronounced for larger models. but this is the only instance where an alternative to the Anderson-Moore algorithm required fewer ﬂops. Anderson-Moore provides solutions with the least computational eﬀort. φ. . and F . The second column provides the ﬂops required by the Anderson-Moore algorithm to compute B followed by the ﬂops required to compute B. Columns three through seven report the ﬂops required by each algorithm divided by the ﬂops required by the Anderson-Moore algorithm for a given example model. Binder-Pesaran’s and Uhlig’s routines would likely solve alternative parametrization of the models that had convergence problems. King-Watson or Uhlig. Has modeling language. one lead. Binder-Pesaran was consistently the most computationally expensive algorithm. King-Watson generally used twice to three times the number of ﬂoating point operations. Employing the Anderson-Moore algorithm speeds the computation. except Binder-Pesaran’s. Table 5 presents a comparison of the original Uhlig algorithm to a version using Anderson-Moore to solve the quadratic polynomial equation. one lead. There were only a few cases where some alternative had approximately the same number of ﬂoating point operations. successfully computed solutions for all of Uhlig’s examples. King-Watson or Uhlig.May 24. no leads one lead. Uhlig generally used about twice the ﬂops of Anderson-Moore even for small models and many more ﬂops for larger models. ϑ. Sims generally used thirty times the number of ﬂoating point operations – never fewer than Anderson-Moore. King-Watson’s algorithm required more than three times the ﬂops required by the Anderson-Moore algorithm for the ﬁrst Uhlig example. Klein generally used thirty times the number of ﬂoating point operations. For example. F Usage Notes Allows multiple lags and leads. it took as many as 100. one observes that Uhlig’s algorithm required only 92% of the number of ﬂops required by the Anderson-Moore algorithm. Uhlig’s algorithm failed to provide a solution for the given parametrization of one of King’s examples. The diﬀerence was most dramatic for larger models. 6. It generally used hundreds of times more ﬂoating point operations. Note: • The Klein and Uhlig procedures compute ϑ by augmenting linear system • For the Uhlig procedure one must choose “jump” variables to guarantee that the C matrix has full rank. C must be nonsingular one lag. In one case. no lags one lag. one lead. However. constraint involving choice of “jump” variables and rank condition on C.000 times the number of ﬂoating point operations. Note that the Anderson-Moore algorithm typically required a fraction of the number of ﬂops required by the other algorithms. It never used fewer than Anderson-Moore. Tables 2-4 present the MATLAB-reported ﬂoating point operations (ﬂops) counts for each of the algorithms applied to the example models. In the ﬁrst row.

Relative errors were on the order of 10−16 . Sims always computed correct solutions but produced solutions with relative errors generally 5 times the size of the Anderson-Moore algorithm. but produced solutions with relative errors generally 3 times the size of the Anderson-Moore algorithm. Sims and Binder-Pesaran approaches provide matrix output for accommodating arbitrary exogenous processes.Θf )Φ−1 exact 6I modiﬁed Klein’s MATLAB version to include this functionality by translating the approach he used in his Gauss version. • While the Anderson-Moore. the King-Watson and Uhlig implementations only provide solutions for VAR exogenous process.6 Fortunately. One could tighten the convergence criterion at the expense of increasing computational time. Although ϑ and F are relatively simple linear transformations of B. It cannot provide a solution for King’s example 3 for the particular parametrization I employed. the algorithm diverges and produces a matrix with NaN’s. the results were similar. I have employed a symbolic algebra version of the Anderson-Moore algorithm to compute solutions to high precision4 . Anderson-Moore always computed the correct solution and in almost every case produced the most accurate solution. The algorithm was unable to compute H for Uhlig 3 in addition to Uhlig 7. I did not explore alternative parametrizations. but the algorithm is already the slowest of the algorithms evaluated. compare F for Sims note that Ψ = IL F = (Θy . I did not explore alternative parametrization when the algorithm’s did not converge. Klein always computed the correct solution but produced solutions with relative errors generally 5 times the size of the Anderson-Moore algorithm. 7 Conclusions A comparison of the algorithms reveals that: • For models satisfying the Blanchard-Kahn conditions. The algorithm provides accurate answers for King & Watson’s examples. 2006 9 6. Although the convergence depends on the particular parametrization. but the particular solution has an eigenvalue greater than one in magnitude even though an alternative matrix solution exists with eigenvalues less than unity. • Using the Anderson-Moore algorithm to solve the quadratic matrix polynomial equation improves the performance of both Binder-Pesaran’s and Uhlig’s algorithms. In each case.2 Numerical Accuracy Tables 6-11 presents the MATLAB relative errors. . there are straightforward formulae for augmenting 4I 5 To computed exact solutions when this took less than 5 minutes and solutions correct to 30 decimal places in all other cases. each of the authors uses radically diﬀerent methods to compute these quantities. Binder-Pesaran converges to an incorrect value for three of the Uhlig examples: example 3. The ϑ and F results were similar to the B results.May 24. Binder-Pesaran’s algorithm does not converge for either of Sims’ examples. Even when the algorithm converges to approximate the correct solution. It was unable to compute values for either of Sims’s examples. Uhlig provides accurate solutions with relative errors about twice the size of the Anderson-Moore algorithm for each case for which it converges. the resulting matrix solves the quadratic matrix polynomial. I then compare the matrices computed in MATLAB by each algorithm to the high precision solution. • The Anderson-Moore algorithm proved to be the most accurate. 6 and example 7. It computed the wrong value for Uhlig 6. The algorithm was unable to compute ϑ for King example 3. Errors were generally 10 times the size of Anderson-Moore relative errors. 5 Sim’s F calculation produced errors that were 20 times the size of the Anderson-Moore relative errors. the algorithms provide equivalent solutions. For the ϑ computation. the errors are much larger than the other algorithms. For Uhlig’s example 3. King-Watson always computed the correct solution.

one must cast the model in a form with at most one lead or lag. • The Anderson-Moore algorithm requires fewer ﬂoating point operations to achieve the same result. In addition. the Anderson-Moore algorithm requires no special treatment for models with multiple lags and leads. Uhlig and Klein approaches with the matrices characterizing the impact of arbitrary shocks. To use each of the other algorithms. 2006 10 the King-Watson.May 24. This computational advantage increases with the size of the model. • The Anderson-Moore suite of programs provides a simple modeling language for developing models. This can be tedious and error prone task for models with more than a couple of equations. .

URL http: //www.inform. Hashem Pesaran. Johns Hopkins. 1996.edu/~sims/ #gensys. King and Mark W. Laurence Broze. Sims. Gene H.se/data/home/kleinp/ homepage. November 1998.umd. Econometrica.yale. Golub and Charles F.html. Gary Anderson and George Moore. 1999.dir/toolkit. Economics Letters. International Economic Review.html. Christopher A. 1997. Zadrozny.htm. 1983. 1989. 48. Seminar Paper. An eigenvalue method of undetermined coeﬃcients for solving linear rational expectations models. van Loan. Multivariate rational expectations models and macroeconometric modelling: A review and some new results. 1998. The solution of linear diﬀerence models under rational expectations.iies.htm. A linear algebraic procedure for solving linear perfect foresight models. Michael Binder and M. 1995. A reliable and computationally eﬃcient algorithm for imposing the saddle point property in dynamic models.federalreserve.people. URL http://cwis. URL http://www. 1985. 1999. Solving linear rational expectations models. Christian Gouri´roux. Seminar paper. Journal of Economic Dynamics and Control. An eﬃcient procedure for solving linear perfect foresight models.kub. May 1994.gov/pubs/oss/oss4/aimindex. Matrix Computations.html. 11:229–257.gov/pubs/oss/oss4/aimindex. nl/~few5/center/STAFF/uhlig/toolkit. Econometric Theory.econ. Robert G. A toolkit for analyzing nonlinear dynamic stochastic models easily.edu/EdRes/Colleges/BSOS/ Depts/Economics/mbinder/research/matlabresparse. 39(4):1015–1026. Board of Governors of the Federal Reserve System. URL http://www.edu/~rgk4m/kwre/kwre.May 24. Harald Uhlig. 22:1353–1373. Gary Anderson and George Moore. Journal of Economic Dynamics and Control.federalreserve. Unpublished Manuscript. User’s Guide. Watson. The solution of singular linear diﬀerence systems under rational expectations.virginia. 1980.html. Paul Klein. URL http://www. Using the generalized schur form to solve a multivariate linear rational expectations model. Downloadable copies of this and other related papers at http://www. Kahn.su. and Ariane Szafarz. . Peter A. (3). URL http://www. 2006 11 8 Bibliography Gary Anderson. Solutions of multivariate rational expectations e models. Olivier Jean Blanchard and C.

A. 2006 12 Appendix A Algorithm 1 1 2 3 4 5 6 7 8 9 The Anderson-Moore Algorithm(s) 10 11 12 13 14 15 16 17 Given H. cnt < Lθ cnt > Lθ (QR singular) otherwise 5 . UZ Hθ−1 k := k + 1 od −1 .. Z .∗ Q := V return{Q} . UN Hθ k Q Z k+1 := k k k k UZ Hτ . ∞} {Q. Z . Algorithm 2 1 2 3 4 5 6 7 8 Given V.. .∗ ) Compute V . UZ Hθ−1 Hk+1 := k k k k UN Hτ . . the vectors spanning the left invariant space associated with eigenvalues greater than one in magnitude Z . 1} R . . Hθ−1 Γ = −Hθ H−τ 0 I A= Γ k k return{ H− τ . funct F2 (A. . 0} return {Q. Hθ . compute the unconstrained autoregression. ∞} {B = −Q−1 QL . .. Z k } . Algorithm 3 1 2 3 4 Given Q. funct F3 (Q) cnt = noRows(Q) {Q.May 24. .∗ .. funct F1 (H) ≡ k := 0 Z 0 := ∅ H0 := H Γ := ∅ k while Hθ is singular ∩ rows(Z k ) < L(τ + θ) do k UZ k Uk = k := rowAnnihilator(Hθ ) UN k k k k 0 UZ Hτ .

xt−1 Dimensions L × L(τ + θ) L×L Lθ × Lθ L×M . xt = B .. xt−1 ∞ Dimensions L(τ + θ) × 1 M ×1 1×1 1×1 (L × L)(τ + θ + 1) L×M M ×M .May 24. + ϑzt .. xt = B .0 I s=0 (F s 0 ) ΦΨzt+s Model Variable B Φ F ϑ Description reduced form coeﬃcients matrix exogenous shock scaling matrix exogenous shock transfer matrix autoregressive shock transfer matrix when zt+1 = Υzt the inﬁnite sum simpliﬁes to give xt−τ .1 Anderson-Moore Inputs θ Hi xt+i = Ψzt i=−τ Model Variable xt zt θ τ Hi Ψ Υ Description State Variables Exogenous Variables Longest Lead Longest Lag Structural Coeﬃcients Matrix Exogenous Shock Coeﬃcients Matrix Optional Exogenous VAR Coeﬃcients Matrix(zt+1 = Υzt ) Outputs xt−τ . 2006 13 Appendix B Model Concepts The following sections present the inputs and outputs for each of the algorithms for the following simple example: Vt+1 = (1 + R)Vt − Dt+1 D = (1 − δ)Dt−1 Appendix B. + 0 .

2006 14 Anderson-Moore input: AIM Modeling Language Input 1 2 3 4 5 6 7 8 9 MODEL> FIRMVALUE ENDOG> V DIV EQUATION> VALUE EQ> LEAD(V.7 0. Usage Notes for Anderson-Moore Algorithm 1. 4. −3.15714 −2. Create Ψ and.61364 3. 21.05 1 0 0 0. 1. completely determines all of xt−1 . . Υ matrices. −2.909091 F = 0.05 0. upsilon=[0.909091 Φ= 0.2]. R=0. 0.9 0.bog. 0.Υ 3.LEAD(DIV. optionally. −2.(H). but none of xt .225 0. 1..html July. 0. F and optionally ϑ Users can obtain code for the algorithm and the preprocessor from the author7 7 http://www.1) EQUATION> DIVIDEND EQ> DIV = (1-DELTA)*LAG(DIV.3. ΦΨ = 1.Ψ = . 1.2 0.1. 0. -2.].1 0 4.0857 −4.9 0 1 1. H= produces output: B= 0 0 0 −0. Develop a “model ﬁle” containing the model equations written in the “AIM modeling language” 3. 2.909091 −0.1 0.May 24. 1999. φ.75 1.1.fed.3. . 0.0.1) = (1+R)*V . Execute the MATLAB programs to generate B.frb. 1. “Align” model variables so that the data history (without applying model equations).1) END Parameter File Input 1 2 3 4 DELTA=0.7 −1. psi=[4.us/pubs/oss/oss4/aimindex. Apply the model pre-processor to create MATLAB programs for initializing the algorithm’s input matrix.40909 ϑ= 3.

May 24. xt Structural Coeﬃcients Matrix associated with contemporaneous exogenous variables. yt+1 Structural Coeﬃcients Matrix associated with contemporaneous endogenous variables.2 King-Watson Inputs AEt yt+1 = Byt + θ X i=0 Ci Et (xt+i ) xt = Qδt δt = ρδt−1 + G » – Λt yt = kt Model Variable θ yt Λt kt xt δt A B Ci Q ρ G t Description longest lead Endogenous Variables Non-predetermined Endogenous Variables Predetermined Endogenous Variables Exogenous Variables Exogenous Variables Structural Coeﬃcients Matrix associated with lead endogenous variables. xt Vector Autoregression matrix for exogenous variables Matrix multiplying Exogenous Shock Outputs yt = Πst ¯ st = M st−1 + G » – kt st = δt t Dimensions 1×1 m×1 (m − p) × 1 p×1 nx × 1 nδ × 1 m×m m×m m×n nx × nδ nδ × nδ nδ × 1 Model Variable st Π M ¯ G Description Exogenous Variables and predetermined variables Matrix relating endogenous variables to exogenous and predetermined variables Matrix multiplying Exogenous Shock Dimensions (nx + p) × 1 m × (p + nx ) (p + nx ) × (p + nx ) (nx + p) × l . 2006 15 Appendix B. yt Structural Coeﬃcients Matrix associated with contemporaneous and lead exogenous variables.

3. I 3 0 05 0 2 −C1 0 0 . 0.7 0. nx = nx . 1. 4 xt 5 t δt ˆ ˜ ˆ ˜ Ψ := 0 0 G Υ := 0 0 ρ ˆ ˜ ˆ ˜ B1 B2 := B. “driver” generates (Q.9 0. 2006 16 King-Watson input: 1 0 0 1 A= 0 0 0 0 0 0 −1 0 0 0 0 0 .05 0. −1.1 0 0 1 3. 1.1 King-Watson to Anderson-Moore Obtaining Anderson-Moore inputs from King-Watson inputs 3 2 3 Λt 0 6 kt 7 θ := θ. 0. G Appendix B. −21. 0. 1. ρ.1 0 .0857 −3.7 1 0 0 0 0 1 0 .15714 2.C = 0 4.2 Usage Notes for King-Watson Algorithm 1.M = 0.05 3. ny = m).2 0 Q= produces output: 1.7 0 0. “system” generates (A.. −21.1 0. 0. 0. Ci .2. 1. Identify predetermined variables 2..9 .0857 −3. 0. Create matlab “system” and “driver” programs generating the input matrices. G).225 0.. 0 0 1 0 0 0 0 −0. 0. Π= 0. . 0. zt := 4 0 5 . 0. ¯ 4. Call resolkw with sytem and driver ﬁlenames as inputs to generate Π.G = 0. . 0 0 0 A2 0 0 0 0 0 −C0 −I 0 −Cθ 0 0 0 Q .ρ = 1 0. xt := 6 7 . −2.225 0.May 24. 0.. B. M.15714 0. Cast the model in King-Watson form: endogenous variable must have at most one lead and no lag.B = 0 −1. A1 A2 := A 0 H := 40 0 A1 0 −G 2 −B2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 −B1 0 −ρ . 2. 0. 3. 0. and a matlab vector containing indices corresponding to predetermined variables.

2 Anderson-Moore to King-Watson Obtaining King-Watson inputs from Anderson-Moore inputs 3 2 3 2 3 xt−τ xt xt−τ 6 7 6 7 6 7 . B := 6 . 7 . 2006 17 Obtaining King-Watson outputs from Anderson-Moore outputs 2 0 60 6 40 0 3 ˆ ΠΛδ Mkδ 7 7 := B ˆ Πxδ 5 ρ – » Mkk Mkδ M := 0 ρ ˜ ˆ ˆ ˆ Λk ΠΛδ M −1 Π := Π ˆ ˜ ¯ G := 0 0 0 I ΦΨ 0 0 0 0 ˆ ΠΛk Mkk ˆ Πxk 0 Appendix B. . G := 0. p := Lτ. 6 .. 7 . . yt := 4 . xt := zt . . 0 −Hθ 2 3 0 I 6 . . m := L(τ + θ) Ψ 2 Obtaining Anderson-Moore outputs from King-Watson outputs I 6B1 ϑ 6 := M.. 7 4 0 I 5 H−τ H−τ +1 . B(θ−1)R ϑ 3 7 7 7 := Π 5 » B 0 . 5 . xt+θ−1 xt+θ−1 xt−1 2 3 I 0 6 . 7 A := 6 4 I 0 5 0 . Bθ – 2 0 ϑ . . θ := 1. ρ := 0. . . 5 . Q := I. 5 kt := 4 . ρ 4 . . Hθ−1 » – 0 C := . .May 24. 7 6 .2. . 7 . . Λt := 4 . . 6 . .

3 Sims Inputs Γ0 yt = Γ1 yt−1 + C + Ψzt + Πηt Model Variable yt zt ηt Γ0 Γ1 C Ψ Π Description State Variables Exogenous Variables Expectational Error Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Constants Structural Exogenous Variables Coeﬃcients Matrix Structural Expectational Errors Coeﬃcients Matrix Outputs ∞ Dimensions L×1 M1 × 1 M2 × 1 L×L L×L L×1 L × M1 L × M2 yt = Θ1 yt−1 + Θc + Θ0 zt + Θy s=1 Θs−1 Θz Et zt+s f Model Variable Θ1 Θc Θ0 Θy Θf Θz Description Dimensions L×L L×1 L × M1 L × M2 M2 × M2 M2 × M1 . 2006 18 Appendix B.May 24.

Cast the model in Sims form: at most 1 lag and 0 leads of endogenous variables.1 0 0 1 0 1 0 0 0 0 0 0 0 0 .Γ = 1 1. Θz Appendix B. 3. Θf . .675 −2.22066 4.909091 1.41673 0. Π = 0 3. C.45 f 0. 2. Ψ. 2006 19 Sims input: 1 0 Γ0 = −1. 0.28794 1.0796719 −2.3.C = . zt := . 3. −2. Call gensys to generate Θ1 .1 −1.7 0 0 1 0 0 0 Ψ= 4. Ψ := ηt+s 1 0 » – −Γ1 0 Γ0 −Π 0 0 H := 0 0 0 0 0 I » C 0 – xt+s := .40909 3.1102210−16 1.61364 −4.May 24. Θ0 . 0 0 0 0 0 0 0 1 0 0 produces output: 0.4577 −1.2204510−16 −1. Θ = 0. Θc . Θc . 1 0 0 0 0 0 0.1 Sims to Anderson-Moore Obtaining Anderson-Moore inputs from Sims inputs – » – » yt+s zt Ψ − x∗ .5516810−16 6. and Π 4. Identify predetermined variables 2.23405 .74832 −2. 0 1 0 0 0 0 1 .40909 0. Create the input matrices Γ0 . −2.1102210−16 . Θ0 = 0. . 3. Θy Θz = 1.19421 −5.82645 −2.29972 Θy = 1. Γ1 .4 1.702491 z 2. 1. 1. Θ = −0. −2. Usage Notes for Sims Algorithm 1.61364 −4.9254610−16 . 0.63846 −1. Θy .

. −Hθ H−τ . 5 t t . s = 1 ˆ and zt+1 = I 2 Appendix B. . . 7 . Θy Θz = 6 7 . BθR 2 3 0L(τ −1)×L 7 6 xt ˆ 7 6 ∃S Θf = SF S −1 . 7 . . xt+θ−1 3 2 0 0 I 6 . H−1 2 3 – » 0L(τ −1)×Lθ 4 5 Ψ := 0Lτ ×Lθ ILθ Π := Ψ 0L×Lθ 6 yt := 4 2 2 6 6 Γ0 := 6 4 I 3 7 7 7 5 0 0 I 0 Obtaining Anderson-Moore outputs from Sims outputs ˆ B ˜ ˆ ˜ := Θ1 .May 24. 7 .1 forward θ + 1 time periods with zt+s = 0. 7 Γ1 := 6 . . . 6 . .2 Anderson-Moore to Sims Obtaining Sims inputs from Anderson-Moore inputs 3 xt−τ . .3. . ΦΨ := Θy Θf .. 4 0 I 0 5 H0 . . xt+θ+1 ˆ where xt ’s come from iterating equation Appendix B. 4 5 . 2006 20 Obtaining Sims outputs from Anderson-Moore outputs 3 0L(τ −1)×L IL(τ −1) 6 7 B 6 7 Θ1 := 6 0L(τ +θ)×Lθ 7 . . z := z . Bθ 2 3 0L(τ −1)×L 6 7 I 6 7 ˆ ˜ 6 7 BR Θ0 Θc := 6 7 ΦΨ 6 7 . 4 5 . 4 5 . ..

2 −3. Identify predetermined variables. 2006 21 Appendix B. −1. Call solab with the input matrices and the number of predetermined variables to generate F and P a The Gauss version allows additional functionality. 0 0 0 0 0 −0. Usage Notes for Klein Algorithm 1.F = 0. . 2. −2.15714 . B.a 3.1 −21. 3.4 Klein Inputs AExt+1 = Bxt + Czt+1 + Dzt zt+1 = φzt + t xt = kt ut Dimensions L×1 M ×1 M ×1 nk × 1 (L − nk ) × 1 L×L L×L Model Variable xt zt nk kt ut A B Description Endogenous Variables Exogenous Variables The number of state Variables State Variables The Non-state Variables Structural Coeﬃcients Matrix for Future Variables Structural Coeﬃcient Matrix for contemporaneous Variables Outputs ut = F kt kt = P kt−1 Model Variable F P Klein input: Description Decision Rule Law of Motion Dimensions (L − nk ) × nk nk × nk 1 0 0 1 A= 0 0 0 0 produces output: 0 0 −1 0 0 0 0 0 .0857 .1 0 0 1 3.05 0. P = 0.C = 0 4. Order the system so that predetermined variables come last. 1.9 0. 2.May 24. . Create the input matrices A.7 1 0 0 0 0 1 0 .B = 0 −1.

1 Klein to Anderson-Moore Obtaining Anderson-Moore inputs from Klein inputs 2 3 » – » kt ˆ ˜ zt φ Ψ := D C . 0 0 0 05 −G 0 0 0 0 0 0 0 – φ Obtaining Klein outputs from Anderson-Moore outputs 2 0 60 6 40 0 3 ˆ FΛδ Pkδ 7 7 := B ˆ Fxδ 5 ρ – » Pkk Pkδ P := 0 ρ ˆ ˜ ˆΛk FΛδ P −1 ˆ F := F ˆ ˜ ¯ G := 0 0 0 I ΦΨ 0 0 0 0 ˆ FΛk Pkk ˆ Fxk 0 Appendix B.4. Υ := θ := θ.. 7 6 . . 7 A := 6 4 I 0 5 0 ..2 Anderson-Moore to Klein Obtaining Klein inputs from Anderson-Moore inputs 3 2 3 2 3 xt−τ xt xt−τ 6 7 6 7 6 7 . 7 . . B := 6 . . 5 kt := 4 . 5 .May 24. . . Λt := 4 . 2006 22 Appendix B. 0 −Hθ 2 3 0 I 6 . . H := 40 0 0 0 0 −ρ 0 0 I 3 A1 0 −C1 0 0 0 −Cθ 0 0 0 0 0 .. ρ := 0. . A1 A2 := A 2 0 −B2 0 0 −B1 A2 −C0 0 0 0 0 0 0 −I Q .4. 5 . Q := I. 7 4 0 I 5 H−τ H−τ +1 . .. yt := 4 . zt := zt+1 zt ˆ ˜ ˆ ˜ B1 B2 := B. Hθ−1 » – 0 C := . G := 0 Ψ 2 . 7 .. . xt+θ−1 xt+θ−1 xt−1 2 3 I 0 6 . xt := 4ut 5 . . 7 .. 6 .

. . 0 BθL 3 I B1R . . . 2006 23 Obtaining Anderson-Moore outputs from Klein outputs 2 2 0 I 4 B 0 0 6B1L 6 ΦΨ5 := P.May 24. . 6 . 4 . BθR 0 ΦΨ . B(θ−1)R ΦΨ 3 7 7 7 := F 5 .

225 21. −3. Usage Notes for Binder-Pesaran Algorithm 1. 0. Call the modiﬁed script to generate C and H and F . D2 Γ and to update the appropriate matrices in the “while loop. 0.May 24.2 produces output: C= 0.1 C= 0 0 0. B. .7 D2 = 0 0 −1.”8 3. Modify existing Matlab script implementing Binder-Pesaran’s Recursive Method to create the input ˆ ˆ ˆ matrices A. The matrix ˆ C must be nonsingular. 1. Cast model in Binder-Pesaran form: at most one lead and one lag of endogenous variables. C. −2.1 0.05 0 0 4.5 Binder-Pesaran Inputs ˆ ˆ ˆ Cxt = Axt−1 + BExt+1 + D1 wt + D2 wt+1 wt = Γwt−1 Model Variable xt wt ˆ A ˆ B ˆ C D1 D2 Γ Description State Variables Exogenous Variables Structural Coeﬃcients Matrix Exogenous Shock Coeﬃcients Matrix Exogenous Shock Coeﬃcients Matrix Exogenous Shock Coeﬃcients Matrix Exogenous Shock Coeﬃcients Matrix Exogenous Shock Coeﬃcients Matrix Outputs xt = Cxt−1 + Hwt ∞ Dimensions L×1 M ×1 L×L L×L L×M L×M L×M L×M xt = Cxt−1 + i=0 F i E(wt+1 ) Dimensions L × L(τ + θ) L×M Model Variable C H Binder-Peseran input: A= Description reduced form codifying convergence constraints exogenous shock scaling matrix 0 0 −1 0 B= 0 0.7 3. D1 = 1 3. 2.9 0 Γ= 0. 1.15714 −2.0857 H= 0. −1. D1 . 2006 24 Appendix B.

..1 Binder-Pesaran to Anderson-Moore Obtaining Anderson-Moore inputs from Binder-Pesaran inputs – wt xt := xt . 4 5 . D2 = 0. Ψ := D1 D2 Υ := Γ » Obtaining Binder-Pesaran outputs from Anderson-Moore outputs C := B. zt := wt+1 ˆ ˜ ˆ C B ˆ ˆ H := −A – » ˆ ˜ Γ . Γ = 0 Ψ 2 Obtaining Anderson-Moore outputs from Binder-Pesaran outputs 2 3 0 6 7 ΦΨ 6 7 6 BR ΦΨ 7 6 7 := H 6 7 .May 24. B(θ−1)R ΦΨ 3 2 0 I 0 6 B 07 6 7 6 .5. 05 Bθ 0 .2 Anderson-Moore to Binder-Pesaran Obtaining Binder-Pesaran inputs from Anderson-Moore inputs 3 −IL(τ −1) 0L(τ −1)×L(θ−1) 0L(θ−1)×L(θ−1) 5 A := 4 0L(θ−1) H−τ 0L×L(τ +θ−2) 3 2 0L(τ −1) 0L(τ −1)×L(θ−1) 5 0L(θ−1) B := 4 IL(θ−1)×L 0L×L(θ+τ −2) Hθ 3 2 IL(τ −1) 0L(τ −1)×L(θ−1) 0L(τ −1)×L 0L(θ−1) IL(θ−1)×L(θ−1) 5 C := 40L(θ−1)×L H−τ +1 . Hθ−1 » – 0 D1 := . 7 := C 4 . . 2006 25 Appendix B. H := ΦΨ Appendix B..5.

−5. N = 0. 2006 26 Appendix B.925 0 H= 0 0 M = −4.9 0. K L.6 Uhlig Inputs Axt + Bxt−1 + Cyt + Dzt = 0 E(F xt+1 + Gxt + Hxt−1 + Jyt+1 + Kyt + Lzt+1 + M zt ) zt+1 = N zt + E t+1 t+1 =0 Dimensions m×1 n×1 k×1 l×m l×n l×k (m + n − l) × m (m + n − l) × n m+n−l×k k×k Model Variable xt yt zt A. Q= 24. −1.7 B= 1 −1 0 0 0 −3. G.05 0. 2. C= D= 0 1 0 0 0 J = 1 K = −1.225 .1 0. M N Description State Variables Endogenous “jump” Variables Exogenous Variables Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Structural Coeﬃcients Matrix Outputs xt = P xt−1 + Qzt yt = Rxt−1 + Szt Model Variable P Q R S Description Dimensions m×m m×k n×m n×k For Uhlig cannot ﬁnd C with the appropriate rank condition without augmenting the system with a “dummy variable” and equation like Wt = Dt + Vt .7 1.0857 −2.May 24. 3. Uhlig input: A= 1 0 −0.0857 6.15714 R = 1. H J.1 0. 0 G = 0 L= 0 produces output: P = 0.1629810−33 S = 21.1629810−33 6.2 F = 1. B C D F.15714 −3.

Create matrices A. R » B H 0 0 A G C K 0 F – 0 = J L H(I3 ⊗ R) and such that with l row dimension of C and n the column dimension of C rank(nullSpace(C T )) = l − n » – ˆ ˜ 0 DM = LΨ .(the row dimension of C) and n is the total number of “jump” variables(thee column dimension of C).6. 2006 27 Usage Notes for Uhlig Algorithm 1. D. := ΦΨ P S Appendix B.1 Uhlig to Anderson-Moore Obtaining Anderson-Moore inputs from Uhlig inputs » – » xt B H := yt H 0 0 A G C K D F 0 J – 0 0 xt := Υ := N. Call the function do it to generate P. J. C. S. F. K. W Appendix B. 2. G. R. One must be careful to choose endogenous “jump” and and “non jump” variables to guarantee that the C matrix is appropriate rank. H.. M. The rank null space must be (l − n) where l is the number of equations with no lead variables. Q.May 24. N 3. L.2 Anderson-Moore to Uhlig Obtaining Uhlig inputs from Anderson-Moore inputs 3 −IL(τ −1) 0L(τ −1)×L(θ−1) 0L(θ−1)×L(θ−1) 5 A := 4 0L(θ−1) H−τ 0L×L(τ +θ−2) 3 2 0L(τ −1) 0L(τ −1)×L(θ−1) 5 0L(θ−1) B := 4 IL(θ−1)×L 0L×L(θ+τ −2) Hθ 3 2 IL(τ −1) 0L(τ −1)×L(θ−1) 0L(τ −1)×L 0L(θ−1) IL(θ−1)×L(θ−1) 5 C := 40L(θ−1)×L H−τ +1 . Hθ−1 2 ﬁnd permutation matrices L. B. Obtaining Uhlig outputs from Anderson-Moore outputs » – » – R Q := B.. Ψ := L.6. Cast model in Uhlig Form.

May 24. 2006 28 Obtaining Anderson-Moore outputs from Uhlig outputs » – » – P Q ΦΨ := R S B := .

φ.1) (12.58292 45. ϑ.2) Note: L. F ) (B. F ) (B) (B.6) (12. B. 108000) 2.13) (26.6) (12. BP.1) (12.5 m.1 m.1.4) 6 (82320. θ) (m. 26235) 2.5947 46.58292 45. F ) (2514.1. 276912) 3.1) (26.5441 16250.89752 (5.6) (12. nk See Appendix B.1) 1. F See Appendix B.1) (12.5874 22.1) 3.6611 205753. M2 ) (L.38385 23.60423 (11.2499 35. Sims.4 L See Appendix B. p See Appendix B.7318 6468.292 52.6) (12.42169 (4.80494 (5.785 (13.905 (6.1669 15347. (14.9301 29.6) (12. φ.5) 6 (6850. Klein.5) 6 (6798.2) 2.7633 (10. nk ) L (B. ϑ.7832 (4.4) 1.1) 1. M2 See Appendix B. φ.4) (8. 9773) 3. 4098) 3.6) (12.9946 131.5) 6 (7809.1) (28. Uhlig column normalized by dividing by comparable AIM value.5 (6.1. 2006 29 Appendix C Computational Eﬃciency Table 2: Matlab Flop Counts: Uhlig Examples AIM KW Sims Klein BP (L.4) (8.13) 14 (6850.81346 46.3) 4 (6878.3 L.88979 (5.1.1) (12.14) (28.5441 16250.1) (12.11) 13 Model Computes uhlig 0 uhlig 1 uhlig 2 uhlig 3 uhlig 4 uhlig 5 uhlig 6 uhlig 7 Uhlig (m.6) (12. 9789) 3.62782 40.53182 47.13) (26.78 (6.May 24. τ.1) (8.1.675 29.89752 (5. θ.1.922832 (3.8447 48. 9817) 3.6) (12.5 (6. n) (B. ϑ.3 (6.1.2 L.1792 28.6) (12.9387 28.1) 16.675 29.3196 190.6) (12.6 KW. ϑ) 0.1) 1. τ.1.8007 88.14) (28. ϑ) (B. 9789) 3. . n See Appendix B.5) 6 (112555. p) (L. φ.

φ.1) sims 1 (16662.1 m.5 m.1. 2306) 2.64904 20.1) (6.1) bp 2 (3510. 1586) 3.2849 (8.623 1. . n See Appendix B. Binder & Peseran and Sims Examples Model AIM KW Sims Klein BP Uhlig (L. BP. ϑ) king 2 (2288. 5659) 3.1257 13.5) (10. θ.9) (18.3) (6.5) (10.7828 35555. Klein.1.3) (6. τ.1) 5 (4. 146710) 5.7749 613.5) 8 (6.1) (6.4135 NA 9.1) (10.2 2.493 185.5265 77.70923 59. ϑ. ϑ.5) (10. 2006 30 Table 3: Matlab Flop Counts: King & Watson.3) (6.0) 2 (1.1) Note: L.7072 165.1.8375 48.2 L.1) (4.2) (4.2) bp 1 (533. 15147) 2.4259 (5. Sims.36713 9. F ) (B) (B.3 L.76311 (3.30526 (3. p See Appendix B.1) (16. M2 ) (L.98 21.1) (10. n) Computes (B.72694 (5.1.299 14.2756 16.81809 21.2211 39. φ.2) (4.1.2456 67. B. 5576) 4. φ.5) (10. M2 See Appendix B.814 2.88743 (2. F ) (B. φ. 39249) 4. nk ) L (m.3) (6.1858 368. 2566) 2.5) 9 (3.3) (6.8) (16.1) (6.1.6) sims 0 (3501.6 KW.3) 5 (4.2569 (9. θ) (m.20292 56.3) (6.8) (16.8987 59.9) (18.8819 30.13213 10.7846 800.4 L See Appendix B. F See Appendix B. ϑ) (B.1284 77.-1) 3 (2.1) king 3 (1028. Klein.1.1) 3 (2. Uhlig column normalized by dividing by comparable AIM value. F ) (B.1. nk See Appendix B.May 24.2) klein 1 (3610.545 NA (3.1) (18.1) king 4 (40967.9526 59. ϑ. p) (L.2871 NA 5.2) 3 (1. τ.

ϑ) (B.6 KW.90467 11. F ) (535. ϑ. Klein. M2 ) (L. nk See Appendix B. p) (L. F ) (B) (B. 1586) 2. Uhlig column normalized by dividing by comparable AIM value.1 m.141 148. 70162) 3.9738 534. .2) (4. n) (B.329 113.75578 135.May 24.5252 92.251 (8.1) (4.1) 9 (352283. F ) (B. F See Appendix B.463 1385.1) (18.1) 196. nk ) L (B. B. φ. τ. θ) (m.1. 2006 31 Model Computes ﬁrmValue 1 athan 1 fuhrer 1 Table 4: Matlab Flop Counts: General Examples AIM KW Sims Klein BP (L.4 L See Appendix B.48) (60. Sims.5 m.39) Note: L. φ.1) 48 Uhlig (m. 1885325) NA 125.9) (18.4) (60. M2 See Appendix B. θ.2 L. τ. BP.1) NA (9.9) (18. ϑ) NA (1.459 NA (12.2) (4.1. φ.1.3 L.0) 2 (18475.73 (9. φ. n See Appendix B. ϑ.686 (2. ϑ. p See Appendix B.12) (60.

1) 12998 12731 uhlig 2(5.1) 12998 12731 uhlig 6(4. 2006 32 Table 5: Using Anderson-Moore to Improve Matrix Polynomial Methods Model(m.6 .1) 12270 12003 uhlig 3(10.May 24.1) 2320 2053 uhlig 1(5.4) 1886798 304534 uhlig 4(5.2) 214380 213450 Note: m.1) 12998 12731 uhlig 5(5. n See Appendix B.2) 26720 24233 uhlig 7(11.n) Uhlig AM/Uhlig uhlig 0(3.

14) 1.4) 0.7909 (12.11) BP L (B.1 6 9520.5) 6.4 6 7.60458 10−15 (6.1) 1 := 1. .63371 (12. B.6) 2.6) 1.6115 (5.6) 7. F ) 134848.00348 (12.18357 10−15 (6. Klein.5) 14.15248 (26.1) 1 := 1.6) 2.4887 (12. nk ) (B) 11.32793 (5.6) 4.1) 1 := 3. n) (B.1.6 KW. φ.518098 (12.291 (4.6) 38. (8.63524 1013 6 NA 13 Uhlig (m.5 m.65517 (28.2) 2.6115 (5.32281 (12. φ. ϑ.13) 14.1) 34. nk See Appendix B.9358 (10. F See Appendix B. τ. (8.00348 (12.6) 1.13) Sims (L.335751 (26. θ.57994 1014 14 10673. ϑ) 49.1.1) 1 := 1.6) 7. ϑ. 4 10217.63499 (11.4) 34.59657 (26.45676 (28. F ) 1 := 1.617504 (12.00589 (12.1) 3.85 6 8.14) 2.88785 10−15 (6. n See Appendix B.82375 10−15 (6.1) 1 := 4.6) 2. θ) (B.4 6 10673.1. p See Appendix B.1) 1 := 1. Sims.3) 0.1. BP.4) 2.6) 1.1.2561 (12. ϑ.1) 203.1) 20. 2006 33 Appendix D Accuracy Table 6: Matlab Relative Errors in B: Uhlig Examples Bi − Bexact Bexact Model Computes uhlig 0 uhlig 1 uhlig 2 uhlig 3 uhlig 4 uhlig 5 uhlig 6 uhlig 7 AIM (L.15015 10−15 (14.1) KW (m. M2 ) (B.4) 0.1.1) 2.82951 (5.13) Klein (L.2) Note: L. τ.1 m.May 24.78046 (28. (8. Uhlig column normalized by dividing by comparable AIM value. (12. ϑ) 8.5) 1.1.43964 (12.2 L.33332 10−14 (13. p) (B. (3.7909 (12.1. φ.2561 (12.18357 10−15 (6. F ) 40.1) 1 := 2. φ.69446 10−16 (4.3459 (12. M2 See Appendix B.3 L.5) 13.4 L See Appendix B.

5) 2. ϑ. ϑ.08685 10−16 (16. (3.84516 10−17 (10.1. 3 6.3) 3. nk ) (B) 0. φ.2) 1.49183 10−15 (6. n See Appendix B.00765 10−16 (5. n) (B.11729 10−16 (6.5) Sims (L.2) 4.1) 1 := 0.54742 10−10 3 5.3) 0.9) 8.8) 3. τ.5) 9.1) 2. F ) 1 := 0.3) 1.1) KW (m.1) 1 := 1.1) 1 := 2. ϑ) 3.76957 10−13 (6. 3 0.48492 10−15 (18.4 L See Appendix B.1.98148 10−14 (4.34864 10−16 (10.1.9) 3.1) 1 := 5.06208 10−14 (1.28298 10−15 (16.5 m.1) 1 := 1.61642 10−16 (6. F ) 0. Sims. 2006 34 Table 7: Matlab Relative Errors in B: King & Watson and Sims Examples Bi − Bexact Bexact Model Computes king 2 king 3 king 4 sims 0 sims 1 klein 1 bp 1 bp 2 AIM (L.3) 2.25 10−15 (9.May 24. θ) (B.22045 10−16 (6.63969 10−9 (3.72119 10−14 (6. Uhlig column not normalized by dividing by comparable AIM value. F See Appendix B.54607 10−15 (2.1) Note: L.65169 10−16 (6.2) 5.1.6) 5. B.1.1) NA (2. τ.99199 10−15 (18.1) 2.1) BP L (B. p See Appendix B. φ.0552 10−15 (18.1) 1 := 1. (6.1) 3.21764 10−15 (10. BP. Klein.1. M2 See Appendix B. φ.1) 1. ϑ) 0 (2.2) 5.10874 10−15 (4.57788 10−16 (10.58081 10−16 (16.23685 10−14 (4.1) 8.6) 1.2 L.3) 5.1) 1 := 0. φ. θ.1.38338 10−16 9 NA 5 NA 8 4.3) 1.6 KW.18603 10−15 (8. F ) 1.5) 7. nk See Appendix B.2259 10−14 (6.1) 2. . (6.29398 10−14 (3.55112 10−17 (4.3 L.61522 10−13 (1.0) 5. ϑ. (3.1 m.15101 10−15 (10. M2 ) (B.8) 8. p) (B.95443 10−10 2 7.52053 10−15 (10. (5.14801 10−16 5 Uhlig (m.82325 10−15 (4.5) Klein (L.1.3) 2.

τ.4) 0.6) NA (12. τ.93 10−16 (14.1) 1 := 6.6) 1.05614 10−15 (6. ϑ) 0.83538 (12.13) Sims (L.5) 2.4) NA (12.6) NA (12.1) 1 := 2.97247 10−14 (6.54822 (3.39137 (28.34518 (8. F See Appendix B.2) 0.1) KW (m.4 L See Appendix B. p) (B. p See Appendix B.5 m.1) 1 := 0. θ) (B. ϑ.67717 (26.1) 1 := 7. nk See Appendix B.5) 1. ϑ.14) 1.1) 1 := 1.6) 1.6) 0.1.97923 (12.47708 10−14 (13.5) 1. ϑ) 9.00929 (12. n See Appendix B.3) 2. M2 ) (B. φ.25938 10−15 (6. (12. M2 See Appendix B.665332 (11.08127 (28.3 L.4) 3.51637 106 6 NA 14 0.0114423 (6. θ. Uhlig column normalized by dividing by comparable AIM value. .1) 1 := 5.6) 3.1) 2.43572 (12. F ) NA (8.1. n) (B.66331 10−16 (4. Sims.37388 10−15 (6.78248 (12.3934 (10. φ. Klein. (12.1. 2006 35 Table 8: Matlab Relative Errors in ϑ: Uhlig Examples ϑi − ϑexact ϑexact Model Computes uhlig 0 uhlig 1 uhlig 2 uhlig 3 uhlig 4 uhlig 5 uhlig 6 uhlig 7 AIM (L.13) Klein (L.1 m. ϑ. BP. (5.1.62987 (12.1) 14. φ.1.30459 106 6 1.11) BP L (B.6) NA (12. φ. (5.6) NA (28.1) 1.1.1.6 KW. nk ) (B) 1. F ) 1 := 9.12899 1013 6 NA 13 Uhlig (m.131889 (26.33909 (5.32241 (12.1) 27.1) 12.2) Note: L.6) 2. F ) 159728.14) NA (12.2245 (5.May 24.79327 (12. B.1.6) NA (26.2 L. 4 5.999999 6 8.1) 1 := 8.03937 106 6 1.13) 1.532995 (8.5959 (4.5) 9.4) 1.

May 24.1. F ) 1 := 4.5) 1.3) 1.3) 8.40112 10−16 (18. F ) NA (6.91039 10−15 (4.1) 8.65811 10−16 9 NA 5 NA 8 2. p See Appendix B.20169 10−14 (4. Uhlig column not normalized by dividing by comparable AIM value.12445 10−15 (18.53623 10−16 (10.1) 1 := 9. F See Appendix B.23589 10−15 (16.1.66638 10−16 (5.1) 6.57671 10−15 (16.9) NA (10.73294 10−16 (6.1) 8.3) 5.4802 10−15 (2.1.1) 1 := 1.5) Klein (L.73294 10−16 (8.1) 1 := 6.6) 7.1) BP L (B.1. ϑ.47355 10−9 (3.1.1) 1 := 2. M2 ) (B.6) 1.1) 2.996 10−16 (6.2) 1. θ) (B. F ) 4.996 10−16 (3.1.8) 1.996 10−16 (6. φ.3268 10−15 (10.9107 10−14 (6.58297 10−15 (3.1) 1 := 2.1) 1.45688 10−16 (9.85337 10−16 (10.2) NA (10.5) NA (16. n) (B.8) NA (6.16858 10−14 (1.3) NA (4.1) 1 := 3.1) 9. p) (B. ϑ.66638 10−16 (4. ϑ.0) 1.07269 10−13 (6. B.4 L See Appendix B.33779 10−15 (3.3967 10−9 3 5.9026 10−15 (4.1809 10−16 (5.1809 10−16 5 Uhlig (m. nk ) (B) 4. τ. θ. BP. 2006 36 Table 9: Matlab Relative Errors in ϑ: King & Watson and Sims Examples ϑi − ϑexact ϑexact Model Computes king 2 king 3 king 4 sims 0 sims 1 klein 1 bp 1 bp 2 AIM (L.55737 10−9 2 6.996 10−16 (2. n See Appendix B.3 L. φ. τ.5 m.3) NA (18.1) NA (2.5) Sims (L. Klein.1) Note: L.2) 2.02999 10−15 (6.42343 10−15 (6.5) 9.1 m.6 KW. ϑ) 4.3) NA (6. ϑ) 4.1. M2 See Appendix B. φ.2 L.2) 1.1) 1 := 7.66823 10−13 (1.9) 7. . φ.996 10−16 3 2.3) 4. nk See Appendix B.58297 10−15 3 2.1.25347 10−15 (10.1) KW (m. Sims.

6) NA (12. p See Appendix B.73637 (12.6) 2.1. M2 ) (B. n) (B.1.72491 (26.6) 1. nk See Appendix B. φ.6) NA (26.1) NA (4. ϑ) NA (3.6) 308. φ.40986 1015 14 1028.13) NA (12. ϑ.1.1) 1 := 6.1.1) KW (m.2) Note: L. Sims.4) NA (26.12622 10−16 (6.60411 10−16 (4. Uhlig column normalized by dividing by comparable AIM value.13) Sims (L.52423 10−14 (13.1. ϑ.1.3 L. ϑ.5) NA (12.6) NA (28. F See Appendix B.03573 10−16 (6.1) 1 := 7.1) 1 := 7. θ) (B. .1) 1 := 6.65292 1013 6 NA 13 Uhlig (m.28843 10−16 (14.1.6 KW.5) NA (28. θ. φ.17 6 1028.5) NA (12.4) NA (12.2 L.1) 1 := 6. φ. nk ) (B) NA (8.1) 1 := 6. F ) 38.1) 1 := 6. τ.1.6) 3. 2006 37 Table 10: Matlab Relative Errors in F : Uhlig Examples Fi − Fexact Fexact Model Computes uhlig 0 uhlig 1 uhlig 2 uhlig 3 uhlig 4 uhlig 5 uhlig 6 uhlig 7 AIM (L. p) (B.73637 (12.4) 4.14) 2.3) NA (12.17 6 1.1) NA (5.153 (12.13) Klein (L.14) NA (12.6) NA (12.5 m. B.1) NA (5.58093 (12.5) NA (12.25457 (12.2) NA (11.4) NA (5.03573 10−16 (6.9392 (28. F ) 1 := 4.03 4 3126. BP.4 L See Appendix B. M2 See Appendix B.13246 10−16 (6.1 m.11) BP L (B. n See Appendix B. Klein. F ) 6280.0499 10−16 (6.May 24.23 6 1.6) NA (12. τ.17 6 3918.1) NA (5.1) NA (10.383 (8. ϑ) NA (8.6) 12.

2 L.2) NA (1.1) BP L (B.9968 10 3 7. F ) 1. .5) 2. n See Appendix B. p See Appendix B.70873 10−10 3 5.3) 2.1.1) 1 := 7.1) NA (18. BP.9) NA (10. τ. M2 See Appendix B. ϑ.5 m. φ.5) NA (10. φ.6) NA (6.1) 1 := 3.82645 10−16 (5.82645 10−16 5 Uhlig (m.2) NA (10.03025 10−11 2 5. θ) (B. nk ) (B) NA (6.1) NA (4.1.6) NA (4.1) NA (6.49401 10−16 3 −16 3.29456 10−16 9 NA 5 NA 8 3.1.1917 10−15 (10.36349 10−15 (6. 2006 38 Table 11: Matlab Relative Errors in F : King & Watson and Sims Examples Fi − Fexact Fexact Model Computes king 2 king 3 king 4 sims 0 sims 1 klein 1 bp 1 bp 2 AIM (L.0) NA (10. M2 ) (B.24387 10−15 (3.58686 10−15 (18.2) 1.1.5) Sims (L. B. ϑ.3) NA (18.49401 10−16 (3. ϑ) NA (2.51277 10−15 (4. φ.1) 1 := 2.May 24. n) (B.1) 1 := 5.1) NA (6. ϑ.1.6 KW.1) NA (2.3) 6.8) NA (6.1) NA (3.1.71285 10−15 (10.3 L.13883 10−15 (9.1) 1 := 1.5) Klein (L. F See Appendix B.4 L See Appendix B.9968 10−16 (3. p) (B.66911 10−13 (6.1) NA (4.5) NA (16. Uhlig column not normalized by dividing by comparable AIM value. F ) 7.41651 10−15 (8.53472 10−15 (6.1.2) NA (1. Sims.3) 7.1) 1 := 4. nk See Appendix B. F ) 1 := 7.13715 10−16 (5.1.1 m.9) 1. Klein.1) Note: L.3) NA (6. ϑ) NA (6.43937 10−16 (2.8) 1.3) NA (4.3) NA (16. τ.92152 10−15 (16.1) 1 := 2. θ. φ.1) KW (m.

- MTAP Reviewer G-9Uploaded byAye-Yan Velasco
- eureka math grade 6 module 4 parent tip sheetUploaded byapi-324380772
- Types of Differential EquationsUploaded byxentrixx
- Mathematical PhysicsUploaded byVikalp Pratap Singh
- Lecture 4.pdfUploaded byKeith Tanaka Magaka
- Introduction to Calculus SetUploaded byparvezalamkhan
- Discovering the Laplace TransformUploaded byAmir Iqbal
- eigen.pdfUploaded byAbhijit Kirpan
- math 8 unit 1 3 unit studyUploaded byapi-270891801
- Linier FungtionUploaded byFebri Eldi
- Differential EquationUploaded byNANCY TOLENTINO
- Solving Equations elegantlyUploaded byHector Burgueño
- Non Linear Root FindingUploaded byMichael Ayobami Adeleke
- mixtures problem.docxUploaded byAnonymous 2gRlZO9
- Aa SucceedUploaded byHector R.
- IJETR021630Uploaded byerpublication
- Ref CardUploaded by007phantom
- V01I031112Uploaded byIJARTET
- 1-s2.0-S1877705811008757-mainUploaded byMuhammad Shaifullah Sasmono
- Alternative Quadratic FormulaUploaded bydjalilkadi
- Debasis Kundu, Swagata Nandi Statistical Signal Processing Frequency Estimation 2012Uploaded byjuliosantana
- Schur DecompositionUploaded byrohitvan
- OperatorsUploaded byValentin Manzur
- Numerical Instructions in Matlab and OctaveUploaded byjulian1606
- Eigen Vector & Eigen ValuesUploaded bymacynthia26
- Chapter 1 Algebra.pdfUploaded bymohsen123
- Ta 1Uploaded byYong Fei
- Equations Week3Uploaded byVikashKumar
- [P] [Ko] Method for 3D measuring CL.pdfUploaded byDũng Lê
- cha6.pdfUploaded byDavid U. Anyegwu