
Vladimir Gligorov

On The Strange Idea of the Real World

Reality, Games, Boats, and Laws

“To be is, purely and simply, to be the value of a variable.”


W. V. O. Quine, On What There Is

The title of von Neumann’s 1938 paper in the 1945 English translation is: ‘A Model of General Economic Equilibrium’. The original German title is: ‘Über ein Ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes’. The Review of Economic Studies, which published the English translation, by Oskar Morgenstern, felt they needed to accompany it with an explanatory piece by D. G. Champernowne, ‘A Note on J. v. Neumann's Article on "A Model of Economic Equilibrium”’. He acknowledges the support of Kaldor and Sraffa and I think their influence is visible in most of his economic interpretations of von Neumann’s theoretical model.
Already in the introduction, Champernowne suggests that there is a problem with the realism of von Neumann’s theoretical model.
“By adopting extremely artificial assumptions, the author rendered his problem soluble
and concentrated attention on some very interesting properties of the actual economic system.
But at the same time this process of abstraction inevitably made many of his conclusions
inapplicable to the real world: others could be applied only after considerable modification. It is
interesting to enquire how far the properties in his simplified model do correspond to similar
phenomena in the real world.”
As the German title says, there is a mathematical problem, and there are economic
interpretations.
So someone, Walras say, observes that the economy is social, which is to say interdependent. Can
it be stable? How does one answer such a question? One way is to introduce a stabiliser, the
state, or some other causal power. Another is to discern a law of some kind which equilibrates
specifically economic interdependences. Perhaps by writing down a set of equations that

1
captures the interdependence of individual actions in exchanges, in the market. Or in the process
of production. Or both. The system of equations can be solved if it satisfies the mathematical
requirements.
Of the economic system of equations von Neumann remarks: “Since it is possible that m
> n [there are more processes, m, than goods, n] it cannot be solved through the usual counting of
equations.”
How general are the mathematical requirements that allow for the problem to be solved?
General in economic terms (or structural in another terminology), in terms of the economic interpretations that they support. That depends on whether the assumptions allow for different
interpretations.
One is the stability of market exchanges.
The other is the process of production.
The third is growth or expansion of production and exchange.
Now, von Neumann addresses the last problem, the problem of stable or equilibrium
growth. So, market exchange and other economic relations are in the background.
Champernowne calls von Neumann’s equilibrium a quasi-stationary state. He questions the realism, or rather
the generality of the quasi-stationary state because it does not deal with the case with one or
more resources that are in limited supply, e.g., land, and thus the expansion of production has to
come to an end, which is to say must reach a stationary state. But the equilibrium growth rate
may very well be zero, so von Neumann’s economy may be the same, in that respect, as the one
Sraffa studied; the one with “no changes in output” time after time.
Though the quasi-stationary state is dynamic, as it grows at a certain rate from one period
to another, it does not deal with the non-stationary cases, which one may claim to be more
realistic, i.e., descriptive of observable economic history. The model is free of “monetary
complications” (though not of monetary variables: prices and the interest rate), which perhaps
suggests that von Neumann accepted the monetary explanation of business cycles. But this is
only to say that the problem is not dynamic, at least in a descriptive sense.
The question, Champernowne’s question, then is whether Walras, or von Neumann, or Arrow and Debreu, or Sraffa for that matter, capture the economic interdependences by a specific set of equations and inequalities, so that the problem has a mathematical solution to which they then give an economic interpretation, or whether, the other way around, the mathematical model is an interpretation of the things economic.
In a sense, it is obvious that von Neumann is introducing assumptions about the economic
system which allow for a mathematical solution. Thus, when he writes “there are n goods
G1, . . ., Gn which can be produced by m processes P1, . . ., Pm” any one Gi and any one Pi is just
the value of a variable. So, he is setting up a mathematical problem and giving economic names
to the variables he is going to use to provide the economic interpretation to the mathematics.
Clearly, Debreu in his dissertation Theory of Value: An Axiomatic Analysis of Economic
Equilibrium sets up a mathematical problem with the economic interpretation of the solution. He
writes:
“Allegiance to rigor dictates the axiomatic form of the analysis where the theory, in the
strict sense, is logically entirely disconnected from its interpretations.”
So, one way to approach the economic equilibrium allocation and growth is to think of
the conditions which allow for a mathematical solution, for the application of the fixed-point
theorem in this case. Do these conditions have an economic interpretation? If they do, then we
can present the mathematical problem as the problem of economics and the solution as the
discovery of an economic law. Then economists may find that this law illuminates, explains in
fact, one or the other problem that they deal with when they try to understand economic data.
But a lot of reality, of economic reality, stays out; that is the complaint voiced by Champernowne in an otherwise sympathetic comment. One could say that the model of
equilibrium growth is not descriptive of the economic reality and especially of economic history,
of the data that we have or can possibly have. So, it is not realistic, it is not an account of
economic realities. However, von Neumann writes:
“The subject of this paper is the solution of a typical economic equation system. The system has the following properties:
(1) Goods are produced not only from "natural factors of production," but in the first place from each other. These processes of production may be circular, i.e. good G1 is produced with the aid of good G2, and G2 with the aid of G1.
(2) There may be more technically possible processes of production than goods and for this reason "counting of equations" is of no avail. The problem is rather to establish which processes will actually be used and which not (being "unprofitable").
In order to be able to discuss (1), (2) quite freely we shall idealise other elements of the situation (…). Most of these idealisations are irrelevant, but this question will not be discussed here.”
The last sentence says that the problem as stated with the idealised assumptions is in fact
quite general, so these “idealisations are irrelevant”. Which is to say that the results of these
idealisations can be used to interpret various economic descriptions. The problem as set up and
solved is not a special, but the general case. Realistic economic descriptions, sets of data, all
imply the regularities that are discerned in the idealised model set up and mathematically solved
by von Neumann. That is the misunderstanding between von Neumann and Champernowne:
simplifying assumptions are there to purchase generality, not to construct a special, though
insightful case.
So, that is the issue one faces when considering the generality of e.g., the Arrow-Debreu
model which is mathematically the same as von Neumann’s, though the economic interpretation
is quite different. Or, alternatively, when one thinks about the realism of Sraffa’s Ricardian
model which is also quite similar to von Neumann’s at least when it comes to the assumption of
the economy being the “production of commodities by means of commodities”, thus in the
economic interpretation, though not mathematically.
Arrow and Hahn in General Competitive Analysis deal with the problem of analysis, I
think I can say, in the following way:
“As much as possible, we shall use the economics of our problem to construct a
procedure that satisfies the rules we have given [which are mathematical]. We can use C
[continuity] to establish that the rules give a continuous mapping. Then we will appeal to a
mathematical theorem [fixed-point theorem] that assures us that there will be at least one point in
Sn [n-dimensional simplex] that the mapping returns to itself. We then will appeal again to our
economics to show that this point is the equilibrium we seek.”
This does not deal with the issue of generality of the analysis. In von Neumann’s case,
generality is clearly the aim. This appears to have been missed by Champernowne. He argues
that the theory and the solutions are specific because they do not cover a host of issues
economists and other social scientists are interested in. E.g., inequality. This is where the
problem of interpretation needs to be addressed. Champernowne reads von Neumann’s assumption that “consumption of goods takes place only through the processes of production which include necessities of life consumed by workers and employees” as treating the workers as slaves, or farm animals even,[1] who consume the bare minimum of what they need to carry on. But while that is one possible interpretation, and perhaps is the proper interpretation of Sraffa’s model, it is not the only one. Indeed, the legal status of the employees, the level and the composition of the necessities, and the distribution between consumption and investment are not assumed and are not determined in the theoretical model. Champernowne reads into the set-up of the economic problem to be solved a level of realism which is not there.
[1] See my essay Ricardo on Horses, Soldiers, and the Rise of the Machines on that.
So, that is one way to see the difference between realism and interpretation. While the
model covers a slave economy, it also covers the contractual labour market economy, and the
communist one for all we can tell. It also does not determine the level of inequality and could
apply to an egalitarian society as well as to the one with great inequalities. Similar observations
apply to comments about workers not saving and capitalists investing all their profits as the
model can be interpreted in the Arrow-Debreu manner by distributing profits to households or in
Sraffa’s manner as partly being consumed rather than invested (those would be non-basic goods).
The key assumption is that profits, as distinguished from the interest rate, are zero in equilibrium
for it to exist and persist, to be stable.
Debreu appears to take the same approach to economic analysis, while Arrow and Hahn suggest that some of the rules of the game are to be designed with some dose of economic realism. Now, the rules of the game are important because it appears that von Neumann came to have an interest in the economic problem after looking for the solution of games. In his 1928 paper on parlour games (Zur Theorie der Gesellschaftsspiele in German, translated into English as On the Theory of Games of Strategy) he uses the mathematics he would later use in his paper on economic growth, which is an indication that there is a game-theoretic interpretation of the economic problem, as indeed there is. Of course, if one sees the economic problem as a game problem, the issue of realism becomes a bit less pressing. While a self-contained, circular economy, ever growing at an equilibrium rate, may indeed look rather fanciful, there is clearly nothing unrealistic about games, even if they are invented, as they are.
There is an aspect of games which carries over to economic relations, which is that of
interdependence. Thus, von Neumann:
“Consider the following problem: there are n goods G1, . . ., Gn which can be produced
by m processes P1, . . ., Pm. Which processes will be used (as "profitable") and what prices of the
goods will obtain? The problem is evidently non-trivial since either of its parts can be answered
only after the other one has been answered, i.e. its solution is implicit.”
The interdependence leads to the choice of the mathematical solution. For one, profit maximisation leads to the choice of processes which are consistent with zero profits in equilibrium. Otherwise, the structure and the scale of these processes would have to change. For another, relative prices of goods will be consistent with these processes, with their use in production, which includes consumption, in equilibrium. And growth will equal the interest rate in equilibrium, because the processes do not change while additional labour is always available, which is a consequence of the assumption of constant returns to scale.
The question is whether there are prices and an interest rate which keep production at the equilibrium rate of growth. The proof will turn out to be rather simple, once one finds it.
If there is interdependence, there is the sense in which, as Otto Neurath put it: “We are like sailors who must rebuild their ship on the open sea, never able to dismantle it in a dock and to reconstruct it there from the best components.” The boat on the sea needs to be repaired piece by piece, not all anew as at a dock. Except that games in normal form can be played, as von Neumann showed, not move by move, but with strategies, all at once from the beginning to the end. This is how the problem is treated in the Arrow-Debreu model of general equilibrium too.
Thus, interdependence allows us to look at the economic problem as if it is completely internal, endogenous, implicit as von Neumann says. There is, to continue with Neurath’s depiction, the sea out there, so the problem is clearly probabilistic. But it does not causally depend on the environment, as resources like land and labour have been internalised into the process of production. With probabilistic independence and self-containment, this is indeed the production of the boat by means of the boat, which is to say, in mathematics, that production is closed, convex, and bounded (the production is, not the boat). If one assumes that the mapping of the economy to itself is also a continuous function, this is a mathematical problem which is solvable by the application of Brouwer’s fixed-point theorem.
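For reference, the theorem appealed to here can be stated as follows, in a standard textbook formulation rather than in the essay’s own wording:

\[
S \subset \mathbb{R}^{n} \ \text{nonempty, compact, convex}, \quad f \colon S \to S \ \text{continuous} \;\;\Longrightarrow\;\; \text{there is an } x^{*} \in S \ \text{with } f(x^{*}) = x^{*}.
\]

Closedness, convexity, and boundedness of production, together with continuity of the mapping, are exactly what brings the economic problem within the theorem’s reach.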
There is still the duality of real and monetary variables, so to speak. Just maximising production is not the answer to the economic problem because there is no answer to what processes are the best to use. Maximising profits is also not the complete answer because those need to equal zero for the production to have stable growth. Also, just minimising costs, i.e., the prices, is again not sufficient to determine the equilibrium growth of production. The economic problem, thus, appears to be the one of maximising production while at the same time minimising the costs, or prices. So, one needs to find the saddle-point of real and monetary variables, at which maximum growth equals minimal prices; and that will turn out to be the case if the rate of growth is equal to the rate of reinvestment, or to the interest rate. Once expressed mathematically, this is quite obvious.
The relevant equation states that if the ratio of real outputs to real inputs is valued at the relative prices, and the ratio of output prices to input prices is weighted by the relative quantities of goods, the two expressions coincide: the monetary expression equals the real expression of the growing economy when the rate of growth of the real economy equals the rate at which its value grows at those prices. That is the solution to, perhaps one can say, the production function of (relative) quantities and (relative) prices.
So, simply: If production is the function of quantities and prices and if relative quantities
do not change and thus relative prices do not change, then as the economy as a whole grows in
real terms it also grows at the same rate in monetary terms, in terms of relative prices, where the
former growth is at relative maximum, and the latter at relative minimum.
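For readers who want the skeleton of the result, the conditions can be put in standard textbook notation (a compressed restatement, not von Neumann’s own): with an input matrix $A$ and an output matrix $B$ (process $i$ uses $a_{ij}$ and produces $b_{ij}$ of good $j$), intensities $x \ge 0$, prices $y \ge 0$, expansion factor $\alpha$ and interest factor $\beta$, equilibrium requires

\[
x^{\top} B \;\ge\; \alpha\, x^{\top} A, \qquad B\, y \;\le\; \beta\, A\, y, \qquad x^{\top} B\, y \;>\; 0,
\]

with $y_j = 0$ for any good produced in excess of its expanded input requirement and $x_i = 0$ for any process that would earn less than the interest on its costs. At the saddle-point the two factors coincide,

\[
\alpha \;=\; \beta \;=\; \frac{x^{\top} B\, y}{x^{\top} A\, y},
\]

which is the equality of the growth rate and the interest rate referred to in the text.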
All that is left, then, is to determine that this saddle-point exists. One is looking for the intersection of the real and monetary variables, the existence of which von Neumann proves with a generalisation of Brouwer’s fixed-point theorem. As the two sets of values are interdependent and geometrically simple (they map continuously into an n-dimensional simplex, into itself), their intersection must consist of at least one point, one fixed point: the point where the growth rate equals the interest rate.
The proof was generalised to correspondences by Kakutani, with von Neumann’s studies
of games and economies in mind, and then applied to n-person non-cooperative games by Nash.
The proof by Kakutani is used by Arrow-Debreu general equilibrium theory, though the
economic interpretation is different from von Neumann’s. Sraffa’s economics is similar to von
Neumann’s, but he is interested in proving not only that distribution between wages and profits is
independent from the problem of allocation (which is arguably what von Neumann and Arrow-
Debreu models are solving), but that it depends on one of the variables of distribution, either the wages or the profits, being determined outside of the system of production. Thus, Sraffa’s model
is indeed specific in the sense that it needs to be generalised to account for the distribution
between wages and profits (or the interest rate). And indeed, Sraffa’s criticism of economics
could be understood to be that models like that of von Neumann are not general enough because
they are about the allocation of goods and processes and not about the distribution of proceeds so
to speak to wages and profits (and rents). In that sense, perhaps, Champernowne’s objection to
realism of von Neumann’s model might be understood.
This claim of Sraffa’s invites reliance on causes outside of the economic problem as
defined by von Neumann and Arrow and Debreu. There is a conceptual point that is not
uninteresting here. Fixed-point theorems are often heuristically presented by stirring the cup of
coffee or tea and claiming that however much one stirs, there is at least one drop of coffee or tea that
ends up after all that commotion at the same place where it was at the start. It is fixed at its own
point so to speak. As the boat sailing steadily through the stormy sea.
This, I think, is misleading. The exogenous intervention of shaking the cup is probably
enough to make the theorem inapplicable. Manipulating a function and shaking a full cup of
coffee or tea is not the same thing. It would have to be established that the stirring is probabilistically independent of the outcome, as for instance spinning the roulette wheel is assumed to be; otherwise there is no fixed point, though the outcome may become predictable. Hahn in his
paper On the Notion of Equilibrium in Economics attempts to find a causal path to equilibrium.
But that is clearly already implied in the idea of the equilibrium in the same way that strategies
are implied in the solution to a game and the way to rebuild a boat in the idea of the boat, to
sound somewhat Hegelian here.
Champernowne lists the issues that von Neumann’s theoretical model sets aside. But
those have the same role that stirring of the cup full of liquid or the rebuilding of the boat have.
These are specific instances, the theory is general. Hahn points out that one assumption is not
realistic and indeed is specific, not general, thus limiting the generality of the theory. That is the
assumption of constant returns to scale. He says that it is hard to understand the existence of
firms without some increasing returns being present. Arrow has done more than anybody else I
think in showing that a lot of what describes any economy has no specified role in the general
equilibrium theory. E.g., entrepreneurship, asymmetric information, moral hazard and adverse
selection, irrationalities, or limited rationality, and more or less all that is descriptive of economic realities. And then there is history, economic and political conflicts, and interests, and
class struggles, and practically everything.
So, how does the theory relate to data? The model of general equilibrium should help
explain the data in the way that we refer to law-like regularities in science in general. There is
any number of reasons why any historical or empirical description will not resemble the
theoretical model. There we have two ways to proceed.
One is to look for, most often, causal regularities in the data, to induce these regularities
from the data.
The other is to explain the dynamic descriptions by reference to laws or lawlike
regularities.
These regularities are not necessarily static and may be called equilibria because they are
in some sense enduring, though they do not really have the time dimension. They are sets of
variables that are consistent in some sense appropriate for the subject at hand, e.g., economics,
while all the data, the facts are the values of these variables in all the possible combinations.
In that sense, regularities, like the equality of the interest rate and the growth rate, explain
the mess that is the reality, in this case the economic reality.
How realistic are these regularities or lawlike interdependences? That is implicitly the
question on what is real, on what is out there. Quine’s suggestion of the answer in On What
There Is, might be called pragmatic:
“Our acceptance of an ontology is, I think, similar in principle to our acceptance of a
scientific theory, say a system of physics; we adopt, at least insofar as we are reasonable, the
simplest conceptual scheme into which the disordered fragments of raw experience can be fitted
and arranged. Our ontology is determined once we have fixed upon the over-all conceptual
scheme which is to accommodate science in the broadest sense; and the considerations which
determine a reasonable construction of any part of that conceptual scheme, e.g. the biological or
the physical part, are not different in kind from the considerations which determine a reasonable
construction of the whole. To whatever extent the adoption of any system of scientific theory
may be said to be a matter of language, the same - but no more - may be said of the adoption of
an ontology.”

Presuppose Nothing

“(W)hen our knowledge of the instances is slight, we may have to depend upon pure induction a good deal. In an advanced science it is a last resort, — the least satisfactory of the methods. But sometimes it must be our first resort, the method upon which we must depend in the dawn of knowledge and in fundamental inquiries where we must presuppose nothing.”
Keynes, A Treatise on Probability

Leonard Savage addresses the issue of universals in his 1967 paper Implications of
Personal Probability for Induction. He states the problem in the following way:
“Since I see no objective grounds for any specific belief beyond immediate experience, I
see none for believing a universal other than one that is tautological, given what has been
observed, as it is when it is a purely mathematical conclusion or when every possible instance
has been observed.”
But then what about induction? “The riddle of induction can be put thus: What rational
basis is there for any of our beliefs about the unobserved?”
He concludes with a puzzle more than an answer:
“We can attempt more cautious particular propositions, such as "I see white in the upper-
left quadrant", hoping thus to avoid being deceived by appearances in as much as we report only
appearances. But universals lurk even in such reports of sense data. The notions of "I", "upper",
"right", and "white" all seem to take their meanings from orderly experience. Indeed, I cannot
imagine communication in the absence of expectation of continued order in domains as yet
unperceived. To be sure, each universal implicit in an ostensible particular can itself be subjected
to reductionist analysis like other universals, but the ideal of eliminating universals altogether
seems impossible to me. We have come once more, but along a different path, to the place where
personalists disagree with necessarians in expecting no solution to the problem of the tabula
rasa.”[2]
[2] Compare Ramsey, Universals in Foundations. Also, on induction in the final part of his Truth and Probability in the same collection.
On ‘necessarians’, Keynes among them, Savage says in Foundations of Statistics:
“Necessary views hold that probability measures the extent to which one set of propositions, out
of logical necessity and apart from human opinion, confirms the truth of another. They are
generally regarded by their holders as extensions of logic, which tell when one set of
propositions necessitates the truth of another.”
And again: “Holders of necessary views say that, just as there is no room for dispute as to
whether one proposition is logically implied by others, there can be no dispute as to the extent to
which one proposition is partially implied by others that are thought of as evidence bearing on it,
for the exponents of necessary views regard probability as a generalization of implication.”
The immediate question to Savage’s claim that “the ideal of eliminating universals
altogether seems impossible” is where do these universals come from?
If they are reducible to particulars but cannot be eliminated, that has to mean, as I also understand was de Finetti’s view,[3] that the unconditional or prior probability, perhaps the product of Keynes’ pure induction (induction by enumeration), is just the unconditional empirical generalisation before it is conditioned on additional or different (analogous) data and turned into the posterior probability calculated by Bayes’ Rule.
In somewhat of a digression, it is important, especially when looking for potential dependencies between events or propositions, to emphasise the centrality of the product rule for Bayes’ Rule.
Product Rule: the joint, ∩, probability, P, of two events or propositions, X and Y, P(X∩Y), is the product of the probability of X conditional, │, on Y times the probability of Y. Symmetrically, the joint probability also equals the probability of Y conditional on X times the probability of X. Put generally, joint probability equals conditional probability times marginal probability.
So, from:
P(X∩Y) = P(X│Y)P(Y) = P(Y│X)P(X) (1)
it follows, for instance for X:
P(X│Y) = P(Y│X)P(X)/P(Y) (2)
which is Bayes’ Rule.[4] (More on that in my Causes and Counterfactuals: Simple Ideas.)
[3] See Probability, Induction and Statistics: The Art of Guessing.
[4] See R. F. Engle, D. F. Hendry, J.-F. Richard, Exogeneity, Econometrica 51: 277-304, 1983. They work out the conditions for, I think one might say, inductive dependences, including those that are probabilistically causal and potentially relevant to policies.
Bayes’ Rule refines prior or unconditional empirical generalisations, P(X), Keynes’ pure induction, into posterior or conditional empirical generalisations, Keynes’ induction proper; or, which is the same thing, it induces posterior probabilities from prior probabilities. No tabula rasa, as Savage says. This is against the necessarians, Keynes included, who start with pure induction, which “presupposes nothing”, that is, with a clean slate.
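A minimal numerical illustration of the updating, in Python, with invented probabilities of my own (nothing in the essay fixes these numbers):

# Updating a prior "empirical generalisation" P(X) into a posterior P(X|Y)
# by Bayes' Rule, equation (2) above. The numbers are purely illustrative.
prior_x = 0.3          # P(X): prior probability of the generalisation X
p_y_given_x = 0.8      # P(Y|X): probability of the observed data Y if X holds
p_y_given_not_x = 0.4  # P(Y|not-X): probability of Y if X does not hold

# Marginal probability of the data, P(Y), by the law of total probability.
p_y = p_y_given_x * prior_x + p_y_given_not_x * (1 - prior_x)

# Bayes' Rule, equation (2): P(X|Y) = P(Y|X) P(X) / P(Y).
posterior_x = p_y_given_x * prior_x / p_y

print(f"P(Y)   = {p_y:.3f}")          # 0.520
print(f"P(X|Y) = {posterior_x:.3f}")  # 0.462: the prior 0.3 is revised upward

The posterior then serves as the prior for the next piece of evidence, which is the sense in which the slate is never clean.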
Bayes’ Rule ensures the consistency of induction. It is the application of the logic of consistency in the sense of Ramsey. That consistency is not the one which Keynes required, which is that conclusions follow from the premises: there the end result is not the truth but partial belief, a belief to a certain probability, yet the logical process is the same as in the logic of implication.
To see the difference: Keynes’ consistency would make inductive generalisations truth functional, though the propositions, the premisses and the implications for example, would be true or false to a certain probability. Ramsey’s consistency is pragmatic, in the sense of acting on partial beliefs consistently with the view to expected outcomes. For that, some type of utility function, not a truth function, is needed.
So, there are two types of consistencies: Ramsey’s consistency of partial beliefs and that of the necessarians, which is, as the name says, that of implication. Ramsey, de Finetti, and Savage in the above quotes and elsewhere objected to the logical consistency of partial beliefs, of induction. The inductive consistency they proposed is based on the requirement that beliefs do not assign probabilities to propositions which, if acted upon, would fail the Dutch Book test,[5] that is, would imply a sure loss if, for instance, bet on. The Dutch Book test of rational partial belief was introduced in this context by Ramsey in his paper Truth and Probability.
One way to see the distinction is to contrast decision with contemplation. Consistent
assignment of probabilities with the view to deciding and acting on them is different from the
consistent assignment of probabilities to propositions with the view to understanding or
explanation. In the first case, guarding against sure loss, avoiding the Dutch book being made
against the decisions arrived at and the actions taken, is the test of consistency. In the second
case, induction is seen as the way to the truth even though only the truth to some probability is
attainable.

[5] I wrote on that in A Sure Loss: Dutch Books, Money Pumps, Logrolling, and Vote-Trading, Journal of Public Finance and Public Choice, 1994.
What do empirical generalisations do? Assuming consistency, they support induction
from data, which means that they provide the basis for predictions and expectations. Those may
be said to be rational in the sense of being consistent, but they may very well prove to be false.
Indeed, as Keynes argued, even if it turns out that the beliefs held were false, for instance the
predictions wrong, that is not to be taken as a criticism of the rationality and consistency of the
beliefs that were held. It is conceivable that the whole web of beliefs held by everybody is rational while all of them are false. This will prove to be an obstacle to verification or falsification of empirical generalisations.
Hume, as I have argued elsewhere,[6] certainly held this sceptical view that everything we believe consistently is indeed just a set of empirical generalisations which are baseless in and by themselves. Which is why he developed theoretical models when he wanted to explain how things worked. I am going to argue that Keynes in his approach to economics used theory and induction in the same way as Hume. What that means can be seen in the example of the current debates about the state of macroeconomics and the role of theory and empirical generalisations.
[6] In Billiard Balls: Symmetries are puzzling, asymmetries are hard to come by.
The convenient place to start is Blanchard’s note On the Future of Macroeconomic
Models (Oxford Review of Economic Policy, 2018). Mostly for his classification of the types of
macroeconomic models. And for his endorsement of eclecticism out of disappointment in the
predictive powers of the models in use by the economic and econometric profession and the
policy advisers.
Blanchard distinguishes five types of macroeconomic models: theoretical, econometric
(DSGE), toy models (like ISLM), partial, and forecasting. The central one, in all its variations,
being the DSGE (Dynamic Stochastic General Equilibrium) model and the main problem with it
being that it is a disappointment. It is not much of a guide to policy for the most part because it
does not predict and forecast accurately.
What should one expect from models, e.g., macroeconomic models? The list of desirables
includes:
explanation, prediction, forecast, and advice.
For instance, an ISLM (IS stands for investment and saving, LM for liquidity and money) model was put together by Hicks to explain the temporary equilibrium of output and the interest rate with involuntary unemployment. It did not predict, forecast, or advise, though it did not
stand in the way of any of these three goals and to the extent that those were provided, it could
explain their success or failure. One could add international balances and the exchange rate and
get the Mundell-Fleming model. Or, more ambitiously, one could build on the Arrow-Debreu
model of general equilibrium to motivate the economic agents including the government. If the
model is presented as a game, it might provide predictions and advice, although altogether
hypothetically, but would not be useful as a forecasting tool.
To predict with empirical data in macroeconomics, to make predictions for the economy
as a whole, a model like DSGE would be needed. In a nutshell, it is a model of aggregate output,
e.g., GDP, with supply and demand functions and the equilibrating, stabilising, policy function.
These three functions can be disaggregated as much as it is believed to be useful.[7] The model is dynamic in the sense of connecting current output with past and future output, so it is
predictive. It is stochastic because these are random functions. The equilibrium of the model is
general because all interdependences are accounted for, and so no additional adjustment is
needed in either prices or quantities, i.e., they are at their equilibrium values.
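One common toy version of such a three-function model, a textbook New Keynesian sketch rather than anything Blanchard himself specifies, is:

\[
\begin{aligned}
y_t &= \mathbb{E}_t\, y_{t+1} - \sigma \left( i_t - \mathbb{E}_t\, \pi_{t+1} \right) + \eta^{d}_{t} && \text{(demand)} \\
\pi_t &= \delta\, \mathbb{E}_t\, \pi_{t+1} + \kappa\, y_t + \eta^{s}_{t} && \text{(supply)} \\
i_t &= \phi_{\pi}\, \pi_t + \phi_{y}\, y_t + \eta^{p}_{t} && \text{(policy)}
\end{aligned}
\]

where $y_t$ is the output gap, $\pi_t$ inflation, $i_t$ the policy interest rate, $\delta$ the discount factor, and the $\eta$'s the random shocks. Expectations of future values make it dynamic, the shocks make it stochastic, and solving the three equations jointly, so that no further adjustment of prices or quantities is called for, is the general equilibrium part.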
The model needs to be constructed from empirical generalisations. This is what the dynamic and stochastic parts of the model mean. The general equilibrium part takes into account the interdependent nature of economic relations, perhaps easiest to see in a game-theoretic setup.
Putting aside the model’s consistency, and assuming that it consists of a set of stochastic
equations which allows of a solution, how are empirical generalisations, which are those
equations when parametrised, arrived at? This was one of the issues discussed by Keynes and
Tinbergen in the late 1930s. It makes sense to consider Keynes’ objections to Tinbergen’s
econometric studies because of the importance of Keynes himself for the development of
macroeconomics and in view of his early work on induction and probability. But also because of
the disappointment, which comes through clearly in Blanchard’s article, with the DSGE model
which if it had been successful would have put Keynes’ objections to rest.
Keynes voiced two main objections to Tinbergen’s model of empirical generalisations
with the aim of prediction and possibly policy advice. One is that the needed stability is absent in economic data.

[7] Putting aside all the problems with aggregation or disaggregation, which are well-known and, while probably not solvable, also ineliminable, as Savage says.
“(T)he most important condition is that the environment in all relevant respects, other
than the fluctuations in those factors of which we take particular account, should be uniform and
homogeneous over a period of time. We cannot be sure that such conditions will persist in the
future, even if we find them in the past. But if we find them in the past, we have at any rate some
basis for an inductive argument. The first step, therefore, is to break up the period under
examination into a series of sub-periods, with a view to discovering whether the results of
applying our method to the various sub-periods taken separately are reasonably uniform. If they
are, then we have some ground for projecting our results into the future.”
If instance after instance the regularity is reconfirmed, we can induce that this will be the
case in the future. Then we can rely on pure induction (by enumeration) to get to an empirical
generalisation on which to base our predictions. The problem is that there are not all that many
instances for pure induction and thus it is rather more probable that the generalisations would
prove to lead to false predictions.
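Keynes’ first step can be sketched mechanically. The following Python fragment, with made-up data and a made-up coefficient (nothing here comes from Tinbergen’s or anyone’s actual series), splits a sample into sub-periods and asks whether the estimated relation is reasonably uniform across them:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=120)
y = 2.0 * x + rng.normal(size=120)   # pretend these are 120 quarters of data

# Keynes' suggestion: break the period into sub-periods and re-estimate in each.
for i, (xs, ys) in enumerate(zip(np.array_split(x, 4), np.array_split(y, 4)), start=1):
    slope = np.polyfit(xs, ys, 1)[0]          # OLS slope within the sub-period
    print(f"sub-period {i}: slope = {slope:.2f}")

# If the sub-period slopes roughly agree, there is "some ground for projecting our
# results into the future"; if they do not, the environment was not homogeneous and
# pure induction has little to stand on.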
The other objection is the relation of econometrics to theory. “Am I right in thinking that
the method of multiple correlation analysis essentially depends on the economist having
furnished, not merely a list of the significant causes, which is correct so far as it goes, but a
complete list? For example, suppose three factors are taken into account, it is not enough that
these should be in fact verae causae; there must be no other significant factor. If there is a further
factor, not taken account of, then the method is not able to discover the relative quantitative
importance of the first three. If so, this means that the method is only applicable where the
economist is able to provide beforehand a correct and indubitably complete analysis of the
significant factors. The method is one neither of discovery nor of criticism. It is a means of
giving quantitative precision to what, in qualitative terms, we know already as the result of a
complete theoretical analysis…”
The objection goes to the issue of the relation between theory and the empirical
generalisations. It does not question the need for quantitative precision as has sometimes been
suggested. In a letter to Harrod, Keynes puts it this way:
“In chemistry and physics and other natural sciences the object of experiment is to fill in
the actual values of the various quantities and factors appearing in an equation or a formula; and
the work when done is once and for all. In economics that is not the case, and to convert a model
into a quantitative formula is to destroy its usefulness as an instrument of thought. Tinbergen endeavours to work out the variable quantities in a particular case, or perhaps in the average of
several particular cases, and he then suggests that the quantitative formula so obtained has
general validity. Yet in fact, by filling in figures, which one can be quite sure will not apply next
time, so far from increasing the value of his instrument, he has destroyed it [italics added]. All
the statisticians tend that way. Colin [Clark], for example, has recently persuaded himself that
the propensity to consume in terms of money is constant at all phases of the credit cycle. He
works out a figure for it and proposes to predict by using the result, regardless of the fact that his
own investigations clearly show that it is not constant, in addition to the strong a priori reasons
for regarding it as most unlikely that it can be so.”
So, one should not expect the DSGE models, being empirical generalisations, to be very
successful at prediction. They do make room for policies, which mainly aim at stabilising the
economic activity and prices. In that, it is assumed that policies can affect the dynamics of the
output, which is to say that macroeconomic models are causal at least when it comes to policies.
They can then provide advice to policy makers on what to do. Whether the advice has a chance of being persuasive is an additional concern.
In the same letter to Harrod Keynes writes: “I also want to emphasise strongly the point
about economics being a moral science. I mentioned before that it deals with introspection and
with values. I might have added that it deals with motives, expectations, psychological
uncertainties. One has to be constantly on guard against treating the material as constant and
homogeneous. It is as though the fall of the apple to the ground depended on the apple's motives,
on whether it is worth while falling to the ground, and whether the ground wanted the apple to
fall, and on mistaken calculations on the part of the apple as to how far it was from the centre of
the earth.”
So, the theoretical model needs to be micro-founded, to use the contemporary jargon. It is not straightforward to come up with causal links or powers. Keynes’ views on causality can be found in his A Treatise on Probability. My guess is that his views in this respect, as in the whole theory of probability he advanced, were influenced by Russell’s emphasis on logic, hence his necessarian view of probability. Perhaps this is best expressed in this quote from the note on causality in A Treatise on Probability:
“Two events are causally independent if no part of either is, relative to our nomologic
data, a possible cause of any part of the other under the conditions of our existential knowledge. The greater the scope of our existential knowledge, the greater is the likelihood of our being able
to pronounce events causally dependent or independent.”
Assuming that we have a theoretical model, which captures the laws of nature, our
empirical generalisations being based on growing information may increase the probability that
there are causal dependencies or independencies. Keynes cautions that this is probabilistic causation, which is not what one tends to associate with causality. Also, and most importantly,
causal dependencies can be attributed to events under the law as it were. There is no inductive
way to causal dependencies. Which was Hume’s view too.
Keynes’ argument is that there is no inductive path to theory or to the laws of nature. Perhaps his views on theory and empirical research could be represented in a game-theoretic way. The rules of the game are the laws of nature, the payoffs are the outcomes, e.g., in terms of utilities, strategies are empirical generalisations with probability measures over outcomes, while policies are choices and actions to get to the outcomes rationally, which is to say by multiplying utilities with probabilities. There are obvious paradoxes connected with the maximisation of expected utility, as an enormous utility with a small probability is supposed to be equal to a small utility with a high probability, but that can perhaps be corrected for.
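In symbols, and only as a gloss on the preceding paragraph rather than anything Keynes wrote, ‘multiplying utilities with probabilities’ means choosing the policy $a$ that maximises expected utility,

\[
\max_{a} \sum_{s} p(s)\, u(a, s),
\]

where $p$ is the probability measure supplied by the empirical generalisations over the outcomes $s$ and $u$ the utility of acting on $a$ when $s$ obtains; the paradoxes mentioned arise because an enormous $u$ weighted by a tiny $p$ can dominate the sum.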
Looking at policies in a model like the DSGE, the game-theoretic representation is helpful for thinking about policies. The policymaker chooses the outcome which maximises their expected utility or welfare and acts, implements policies, on that choice. No causal dependence between the policies and the outcome needs to exist. So, the model may prove useful for policy advice. However, usually the policy advice is premised on the understanding that the choice of the policy will bring about the desirable or targeted outcome. And for that, causes need to be external to the outcomes or to be probabilistically independent from the effects and from whatever else is going on.
In a DSGE model, the dynamic part is implicitly causal, where policies with causal
impacts are probabilistically independent from the rest of the model, which is the stochastic
setup. General equilibrium is the requirement that all the effects are to be accounted for. Again,
in the game theoretic setting, each move is consequential for all the moves and players. The
extensive form of the game is the historical dynamics of the moves and interdependencies, while
in the normal form strategies lead to payoffs with all the interdependencies accounted for.

In the general equilibrium setting, however, the policy maker chooses the strategy with
the view to the desired and expected outcome, but the payoff depends on the strategies of all the
others. So, assuming that there is a set of dynamic and stochastic equations with general
equilibrium outcomes, the policy maker needs to know the expected outcomes of the
interdependent economic activities in order to intervene by setting the value of some parameter
or variable in one or more equations in order to get the desired result. A DSGE model should
allow for the expected outcomes of policy interventions to be calculated so that the policy
makers could be advised as to what policy to choose and enact. As the model is the set of
empirical generalisations, it is certainly possible to rationalise the past realisations of the
policies, but it would have to be the case that the future will look very much the same as the past
for estimated equations to predict accurately the outcomes of policy interventions and thus
advise the best course of action.
That leaves policy makers with partial models, the fourth type on Blanchard’s list, which
predict the outcomes of specific policy interventions without the general equilibrium effects.
This is much more in the spirit of what is meant by causal dependence and is easier to see as
being accessible to policy interventions with reasonable chances of ensuring the realisation of the
expected outcome. Of course, a set of policies will have to trigger general equilibrium effects,
which is what is really meant by the macroeconomic policy problem. One would have to assume
that those would be small.
The way the problem of policy evaluation was seen by Hume and Keynes was to build up
a theoretical model in which the effects of policies can be predicted and controlled by the author
of the model. For example, Keynes’ nomological set up can be captured by a set of equations
with assumed causal structure. So, like Hume, one might assume that the amount of money is
doubled and work out the expected change in the price level. Or in the ISLM model one could
introduce increased government spending and calculate the increase in the output with changing
interest rate.
The stochastic version of the theoretical model would be what necessarians would be
happy with. The expected effect of the causal impact of the policy intervention would follow
from the premises that are the relations or functions within the model. Working from empirical
generalisations to policy interventions in a macroeconomic model, which is to say in an interdependent set of relations, is bound to lead to disappointment.

The problem of policy advice is that it has to account for a relation in which there is
asymmetry between the cause and the effect in the model of general interdependence where there
are no endogenous asymmetries. The aim is, as Koopmans said, not to find the explanation
outside of the province of economics, but within it. So, how are we to understand the
endogenous causality in the sense that the causal relation is between two economic variables one
of which is external to the other?
In the Bayesian context, which is the one Savage relies on, that means that the joint
probability of the cause and the effect is expressed as the probability of the effect given the cause
times the probability of the cause, i.e., by the product rule. The issue then is what the conditions are for the cause to be independent of, or exogenous to, the effect. Once it is posed
that way, the conditions for exogeneity are rather clear. The product rule is written as a random
equation with the independent variable not influencing the parameters while being
probabilistically independent of everything else which is captured by the random term. This
account can be refined in many ways as in Pearl’s book on Causality.
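A rough sketch of that condition, in the spirit of the Engle, Hendry, and Richard paper cited in note [4] rather than a full statement of it: the product rule factorises the joint density of the effect $y_t$ and the cause $x_t$ as

\[
f(y_t, x_t \mid \theta) \;=\; f(y_t \mid x_t, \lambda_1)\, f(x_t \mid \lambda_2),
\]

and $x_t$ can be treated as exogenous for the parameters of interest if those parameters are functions of $\lambda_1$ alone and $\lambda_1$ and $\lambda_2$ are variation free, so that nothing learned about the process generating $x_t$ would revise the conditional model of $y_t$ given $x_t$.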
Blanchard in the mentioned article opts for eclecticism. After going through the five
types of models (theoretical, DSGE, toy models like ISLM, partial models, and forecasting
models), he recommends using the one that best fits the problem one sets out to solve. He
does not see much use for theoretical models, except for their toy versions like the ISLM model.
Partial equilibrium models are more promising than the DSGE models, which he would like to see developed further, but he does not seem to believe that they will ever deliver reliable predictions. As for forecasts, good models are those which work, one assumes, anywhere from sifting through leaves to calculating everything that is available and manageable.
The problem with this is another comment that Keynes made on Tinbergen’s
econometric efforts. Assume one starts with a theory, with a model that one wants to use to
explain the data and hopefully predict the effects of policy interventions. Tinbergen says, and
Keynes agrees, the theory cannot be proved. If the theory is consistent, it cannot be proved in the
data because it may very well turn out to be false in the end given that at each moment in time
until the very end its predictions are only probable. So, one can do everything right from the
logical point of view, and still be wrong. Tinbergen believes, however, that theories can be
falsified.

That leads to an interesting issue discussed under the topic of the Duhem-Quine Theorem
in the philosophy of science. The claim, in its stronger version, is best stated by Quine. He first
denies that individual statements can be falsified:
“(I)t is misleading to speak of the empirical content of an individual statement -
especially if it is a statement at all remote from the experiential periphery of the field.
Furthermore it becomes folly to seek a boundary between synthetic statements, which hold
contingently on experience, and analytic statements, which hold come what may. Any statement
can be held true come what may, if we make drastic enough adjustments elsewhere in the
system. Even a statement very close to the periphery can be held true in the face of recalcitrant
experience by pleading hallucination or by amending certain statements of the kind called logical
laws. Conversely, by the same token, no statement is immune to revision.”
Consequently, he proposes:
“The dogma of reductionism survives in the supposition that each statement, taken in
isolation from its fellows, can admit of confirmation or infirmation at all. My counter suggestion
(…) is that our statements about the external world face the tribunal of sense experience not
individually but only as a corporate body.”
One way to see the issue, at least in the current context, is to compare theoretical
advances with the development of empirical approaches. Theories are competing with each
other, while empirical research more often than not tends to adjust empirical generalisations to
new data or to new ways to look at the data. In the most general sense, the difference is between
theoretical explanations and historical descriptions.
The point that Savage is making can perhaps be related to Quine’s claims. Indeed,
Savage’s formulation above is almost identical to Quine’s in his Two Dogmas of Empiricism.
Quine’s aim was to show that the distinction between the analytic and synthetic propositions
does not make sense. In a way, both universals and particulars are at stake all the time in our
accounts of the sense data. Savage’s point is that universals are always implicated in induction,
in empirical generalisations. So, updating probabilities is also recommitting ourselves to
empirical generalisations, to the universals we started with.
With that in mind, one problem in macroeconomics is the reliance on models or theories
to rationalise the observations and the policy advice. Keynes thought that it is necessary to learn
to think in models in order to explain the changing reality. Induction is not of much help in that, as empirical generalisations can be revised endlessly without ever being falsified, which is, in a way, the meaning of Savage’s claim that universals cannot be fully eliminated.
However, thinking with theories faces the risk of the versatility of explanations. When
presented with evidence, a Keynesian, or whoever, may very well say that it all makes sense from the theoretical perspective, in the same way that an empirical generalisation can be adjusted to
account for whatever evidence emerges. Quine might have had in mind a natural experiment in
which the way things work might be observed. So, theories can be falsified. Inductions can
neither be verified nor falsified.

Gaming Policies

“(I)t appears that policy makers, if they wish to forecast the response of citizens, must take the latter into their confidence. This conclusion, if ill-suited to current econometric practice, seems to accord well with a preference for democratic decision making.”
R. E. Lucas, Jr., Econometric Policy Evaluation: A Critique

Christopher Sims commented on Lucas’ criticism of policy evaluation quite a few times. Perhaps most extensively in his A Rational Expectations Framework for Short Run Policy
Analysis. There he argues that the difference between the reduced form equation policy
evaluation and rational expectations econometrics is like the difference between the short run
predictions and comparative statics. Or one might interpret it, I think, as the difference between
the political game in the extensive form and in the strategic, normal form.
First one needs to clarify Lucas’ criticism of the econometric policy evaluation. That also
helps to understand the theoretical differences. The setup is this: policy makers aim at a certain value of y with a change in the policy x, while the econometricians predict the outcomes of alternative policy interventions, and thus of changing values of x, with the view to providing policy advice on the best choice of x to achieve the desired outcome y.[8] Thus, with F and G known, the value of
future y = F (current y, policy x, parameters β, shocks η), (3)
when the policy followed is
x = G (current y, parameters γ, shocks ϵ), (4)
so the predicted value is
future y = F (current y, policy x, parameters β(γ), shocks η). (5)
Policy makers control x, which delivers the current and expected future y, and so can aim at a different value of y in the future by changing the policy x, while, the assumption is, the parameters β do not change and the random shocks η make the outcome, and thus the prediction, probable, expected, rather than necessary.
[8] I wrote once on the distinction between the Weberian approach to political advice and the Keynesian. The former is the advice of the expert who takes the end and examines the best means to achieve it. The latter seeks to influence the choice of the end by persuading the policy makers about the best course of action that expert knowledge can advise. The issue came up in the Keynes-Tinbergen debate referred to above. It also comes up in the policy evaluation debate commented on below.
Lucas’ claim is that the parameters γ of the policy function (4) determine the parameters β in (3), so the parameters γ change when the policy changes, leading to changes in the parameters β. The policy changes are thus not just changes in the value of x, but rather changes in the values of γ. So, the policy changes are interventions in the parameters γ and thus in the parameters β, and the effects of the policies, and thus the predictions based on the parameters β in (3), will be wrong.
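A toy simulation can make the point concrete. The little model below is my own illustration, not Lucas’ or Sims’ specification: only the unanticipated part of policy moves y, so the reduced-form coefficients estimated under one policy rule, one γ, are wrong once the rule changes:

import numpy as np

rng = np.random.default_rng(0)
theta = 1.0   # "structural" response of y to policy surprises (held fixed throughout)

def simulate(gamma, periods=5000):
    """Generate data under the policy rule x_t = gamma * y_{t-1} + noise,
    where only the surprise part of x moves y (an invented behavioural equation)."""
    y = np.zeros(periods)
    x = np.zeros(periods)
    for t in range(1, periods):
        x[t] = gamma * y[t - 1] + rng.normal(scale=0.5)   # the policy rule, as in (4)
        surprise = x[t] - gamma * y[t - 1]                # the unanticipated part of policy
        y[t] = theta * surprise + rng.normal(scale=0.5)   # behaviour: only surprises matter
    return y, x

def reduced_form(y, x):
    """OLS of y_t on x_t and y_{t-1}: the 'empirical generalisation' beta."""
    regressors = np.column_stack([x[1:], y[:-1]])
    return np.linalg.lstsq(regressors, y[1:], rcond=None)[0]

beta_old = reduced_form(*simulate(gamma=0.8))
beta_new = reduced_form(*simulate(gamma=0.0))
print("beta estimated under the old rule:", beta_old.round(2))  # roughly [ 1.0, -0.8]
print("beta estimated under the new rule:", beta_new.round(2))  # roughly [ 1.0,  0.0]
# The coefficient on lagged y is really -theta * gamma: it shifts with the policy
# rule, so predictions that hold beta fixed across a change in gamma will be wrong.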
This is basically the key problem of induction: generating a description of the empirical functional relation which can be implemented analytically. So, if (3) describes the way the economy works, i.e., is as close to the reality as the data allow, and does not contain theoretical restrictions, which Sims found not to be credible because not realistic, that is, not induced from the data, then predictions of the change in y when x is changed should be accurate even though these predictions follow analytically, i.e., simply by solving (3) for different values of x.
Sims’ argument in Macroeconomics and Reality and in the paper quoted below is that (5) is not functionally different from (3) unless it is given a behavioural or structural interpretation by reference to a theory. Sims has argued that theory-based structures are not realistic and are not believable. So (the equation numbers within the quotation are Lucas’ and Sims’ own):

“When Lucas writes down (1) in section 6 of his paper, he introduces it as a “structure”
of the form (1). By this he probably means to imply that F is not some general statistical model,
but the kind of entity he described in more detail when using (1) earlier in the paper. There he
requires that “the function F and the parameter vector β are derived from decision rules (demand
and supply functions) of agents in the economy, and these decisions are, theoretically, optimal
given the situation in which each agent is placed.” Furthermore, in the examples Lucas considers
there is in every case one or more functions contained in the model which represent or are
directly affected by agents’ expectation-formation rules.
The lesson of rational expectations is that when we use a model in whose functional form
is embedded agents’ expectational rules, we are likely to make errors even in forecasting if we
insist that those expectational rules are fixed through time, and that we will make even more
serious errors in policy evaluation if we pretend that those rules will remain fixed despite
changes in policy which make them clearly suboptimal. The difference between (1) and the
system (2)-(3) as frameworks for policy analysis is not the superficial one that in the latter we
think of ourselves as choosing a “fixed parameter” γ while in the former we are choosing
“arbitrary” values of policy “variables” x. The difference is that in (1) the parameters β in fact
depend on the hidden policy variable γ. If we try to use (1) to guess the effects of various x paths
which are in fact accompanied by changes in β, we will make errors. The advantage of (2)-(3)
[…] is that [it] takes proper account of the effect of γ on β.”
This is unnecessarily complicated. The point of Lucas’ criticism is that the effects of the policies, current or changed, are conditional on the behaviour of the agents in the economy. So, when policies change, γ changes; behaviour changes with it, and thus β changes too. Predicting on an unchanged β would then be wrong.
That is more or less all there is to the micro-foundations. Sims says that “we will make
even more serious errors in policy evaluation if we pretend that those rules will remain fixed
despite changes in policy which make them clearly suboptimal”. He quotes Lucas pointing out
that “the function F and the parameter vector β are derived from decision rules (demand and
supply functions) of agents in the economy, and these decisions are, theoretically, optimal given
the situation in which each agent is placed.”
However, if suboptimal policy outcomes are infeasible, because of the optimising behaviour of the agents in the economy, the policy choices are between different optimal outcomes or, which is the same thing, between outcomes which cannot be ranked as optimal or suboptimal (there are no grades of optimality or suboptimality). If, for example, the policy makers want to maximise some welfare function, they will choose the policies which raise aggregate welfare, but the new welfare level will not be comparable in terms of optimality, because the old and the new welfare levels are both optimal.
So the future y will, for example, be greater than the current y in terms of welfare, but the current y will not be suboptimal compared to the future one, because economic agents will have adjusted their behaviour optimally in both states of affairs. Rational expectations follow from this assumption of optimising behaviour rather than the behaviour being guided by rational expectations. This is what Muth meant when he disagreed with Simon’s criticism of the assumption that economic agents are perfectly rational:
“It is sometimes argued that the assumption of rationality in economics leads to theories
inconsistent with, or inadequate to explain, observed phenomena, especially changes over time
[…]. Our hypothesis is based on exactly the opposite point of view: that dynamic economic
models do not assume enough rationality.”
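A toy example, not from the text, may make the comparison of regime-bound optima concrete: a household chooses its labour supply optimally under each of two hypothetical tax regimes (the quasi-linear utility, the wage and the tax rates are all assumptions of the sketch). Welfare can be ranked across the regimes, but within each regime behaviour is the optimum given that regime, so neither outcome is suboptimal in the relevant sense.

```python
w = 1.0  # hypothetical wage

def household_labour(tau):
    """The household maximises c - n**2 / 2 with c = (1 - tau) * w * n + g,
    taking the lump-sum rebate g as given; the first-order condition gives n = (1 - tau) * w."""
    return (1.0 - tau) * w

def equilibrium_welfare(tau):
    """Rebate the tax revenue (g = tau * w * n) and evaluate realised utility."""
    n = household_labour(tau)
    c = (1.0 - tau) * w * n + tau * w * n   # equals w * n once the rebate is returned
    return c - n ** 2 / 2.0

for tau in (0.0, 0.3):
    print(f"tau = {tau:.1f}: optimal labour = {household_labour(tau):.2f}, "
          f"welfare = {equilibrium_welfare(tau):.3f}")

# Welfare differs across the two regimes, but in each the household's choice is
# the optimum given that regime: the lower-welfare outcome does not involve any
# suboptimal behaviour by the agent.
```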
Both Sims and Lucas agree that policy changes which have been tried in the past may
provide evidence for accurate predictions of the effects of their introduction or reintroduction.
A policy change may be reversed and then reinstated. In every case of the intended
policy change one could certainly predict the effects of the new policy if it has been tried before
and if one assumes that everybody optimises all the time. So, history should be informative.
Indeed, as long as policies to be introduced have been tried before, the economic agents can
predict the outcomes as well as the policy makers. The consequence of the assumption of
predictability of the policy outcomes by both the policy makers and the policy takers will be that
these policy changes will not be structural and will come with high probability of being reversed
in the future. One consequence of that, not important here but still worth mentioning, might be that economic development and business cycles could be argued to be driven, to quite a large extent, by real, i.e., nonpolicy, exogenous causes, as in the theory of real business cycles.
Sims dismisses the suggestion by Sargent in his Autoregressions, Expectations, and
Advice article that policies should aim at regime change. Even though optimality is always satisfied, some aggregate measure of welfare or well-being may still be subject to policy
improvements. Sims says:
“It is also true that Lucas wants us to take γ as “fixed”. Of course, if we contemplate
changing γ it cannot really be fixed, but the spirit of the argument is that we ought to consider
once-and-for-all changes in γ, i.e. “paths” of γt which are constant up to some date T and
constant thereafter, with a discontinuity at T. Further according to this interpretation, we ought to
concentrate on predicting the long run effects of the change, after γ has been at its new level long
enough for behavior to have completely adjusted. […] {T}his is the suggestion that we should
limit ourselves to comparative statics based on the long run properties of the model. But this
recommendation […] is not at all revolutionary. And it is a dubious recommendation – in
practice econometric models are probably less reliable in their long run properties than in their
short run properties.”
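Read this way, the exercise reduces to comparing steady states under two once-and-for-all values of γ, after behaviour has completely adjusted, while ignoring the transition between them. A minimal sketch of such a long-run comparison, again with hypothetical linear forms and a hypothetical dependence of the behavioural coefficient on the rule:

```python
def b(gamma):
    # Hypothetical behavioural coefficient once behaviour has fully adjusted to the rule.
    return 1.0 / (1.0 + gamma)

def long_run_mean_y(gamma, a=1.0, c=0.5):
    """Steady state of the toy system
       y = a + b(gamma) * x + eps,   x = c + gamma * y(-1) + eta,
    i.e. the comparative-statics object described in the quotation above."""
    return (a + b(gamma) * c) / (1.0 - b(gamma) * gamma)

for gamma in (0.2, 0.8):
    print(f"gamma = {gamma}: long-run mean of y = {long_run_mean_y(gamma):.3f}")
```

The comparison says nothing about the path between the two steady states, and that is part of Sims’ complaint: econometric models are, in his view, probably less reliable in their long-run properties than in their short-run ones.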
Sargent’s preference, however, is:
The rational expectations “response is based partly on the opinion that existing patterns
of decentralization and policy rules can be improved upon. This opinion comes from the
perception that unfinished and imperfect theories have been believed and acted upon in
designing and operating institutions. Dynamic rational expectations models provide the main
tools that we have for studying the operating characteristics of alternative schemes for
decentralizing decisions across institutions and across time.”
Sims summarises his criticism of rational expectations policy evaluation thus:
“There may be some policy issues where the simple rational expectations policy analysis
paradigm – treating policy as given by a rule with deterministic parameters, which are to be
changed once and for all, with no one knowing beforehand that the change may occur and no one
doubting afterward that the change is permanent – is a useful approximate simplifying
assumption. To the extent that the rational expectations literature has led us to suppose that all
“real” policy change must fit into this internally inconsistent mold, it has led us onto sterile
ground.”
Lucas summarises the choice for the policy makers and for econometric evaluation of
these policies (the letters for the parameters differ from those in (1)-(3) above, which follow Sims’
article):

“(O)ne cannot meaningfully discuss optimal decisions of agents under arbitrary
sequences {xt} of future shocks. As an alternative characterization, then, let policies and other
disturbances be viewed as stochastically disturbed functions of the state of the system, or
(parametrically)
(16) xt = G(yt, λ, ηt)
where G is known, λ is a fixed parameter vector, and ηt a vector of disturbances. Then the remainder of the economy follows
(17) yt+1 = F(yt, xt, θ(λ), εt)
where, as indicated, the behavioral parameters θ vary systematically with the parameters λ
governing policy and other "shocks". The econometric problem in this context is that of
estimating the function θ(λ).
In a model of this sort, a policy is viewed as a change in the parameters λ, or in the
function generating the values of policy variables at particular times. A change in policy (in λ)
affects the behavior of the system in two ways: first by altering the time series behavior of xt;
second by leading to modification of the behavioral parameters θ(λ) governing the rest of the
system. Evidently, the way this latter modification can be expected to occur depends crucially on
the way the policy change is carried out. If the policy change occurs by a sequence of decisions
following no discussed or pre-announced pattern, it will become known to agents only gradually,
and then perhaps largely as higher variance of "noise". In this case, the movement to a new θ(λ),
if it occurs in a stable way at all, will be unsystematic, and econometrically unpredictable. If, on
the other hand, policy changes occur as widely discussed and understood changes in rules, there
is some hope that the resulting structural changes can be forecast on the basis of estimation from
past data of θ(λ).
It is perhaps necessary to emphasize that this point of view towards conditional
forecasting, due originally to Knight and, in modern form, to Muth, does not attribute to agents unnatural powers of instantly divining the true structure of policies affecting them. More
modestly, it asserts that agents' responses become predictable to outside observers only when
there can be some confidence that agents and observers share a common view of the nature of
the shocks which must be forecast by both.
The preference for "rules versus authority" in economic policy making suggested by this
point of view, is not, as I hope is clear, based on any demonstrable optimality properties of rules-in-general (whatever that might mean). There seems to be no theoretical argument ruling out the
possibility that (for example) delegating economic decision-making authority to some individual
or group might not lead to superior (by some criterion) economic performance than is attainable
under some, or all, hypothetical rules in the sense of (16). The point is rather that this possibility
cannot in principle be substantiated empirically. The only scientific quantitative policy
evaluations available to us are comparisons of the consequences of alternative policy rules.”
This is all terribly and unnecessarily complicated. The difference is between playing the
policy game in the extensive form, which is to say discretionarily, or in the normal, strategic form, which is what reliance on rules amounts to. Sims argues that politics is more like playing
move by move and thus predicting the next move of the policy makers and the policy takers.
Lucas suggests that the players implement strategies, which is why it looks as if the game has
one move as in comparative statics.
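A standard textbook illustration of this difference, not from the text but consistent with its point, is the Kydland-Prescott and Barro-Gordon inflation game: committing to a strategy in advance (normal-form, rule-based play) and best-responding after expectations have been formed (extensive-form, discretionary play) end in different equilibria even though everyone optimises in both. A minimal sketch with hypothetical parameter values:

```python
# Barro-Gordon style policy game (hypothetical parameters, standard textbook example).
# Unemployment:  u = u_n - a * (pi - pi_e)
# Policy loss:   L = pi**2 + b * (u - u_target)**2, with an ambitious target u_target < u_n.

a, b = 1.0, 1.0
u_n, u_target = 5.0, 4.0

def loss(pi, pi_e):
    u = u_n - a * (pi - pi_e)
    return pi ** 2 + b * (u - u_target) ** 2

def best_response(pi_e):
    # The discretionary policy maker minimises the loss taking expectations as given
    # (first-order condition of the quadratic loss).
    return (a * b * (u_n - u_target) + a ** 2 * b * pi_e) / (1.0 + a ** 2 * b)

# Extensive form / discretion: play move by move until expectations are confirmed.
pi = 0.0
for _ in range(100):
    pi = best_response(pi)
pi_discretion = pi

# Normal form / rule: the strategy pi = 0 is announced and believed (pi_e = pi = 0).
pi_rule = 0.0

print(f"discretion: pi = {pi_discretion:.2f}, loss = {loss(pi_discretion, pi_discretion):.2f}")
print(f"rule:       pi = {pi_rule:.2f}, loss = {loss(pi_rule, pi_rule):.2f}")
```

Discretion ends with positive inflation and the same unemployment as the rule, hence a higher loss; the difference comes entirely from the form in which the game is played, not from anyone behaving suboptimally within it.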
One reason Lucas suggests the game in strategic or normal form is that he assumes that
the economy is summarised by the Arrow-Debreu general competitive model which is indeed a
game in normal form. The dynamics of the transition from one equilibrium to another, along the optimal path, is descriptive and may end up at an outcome that is inferior by some evaluation, depending for example on the political motivation of the players. This may be avoided if the game is played in strategic form, which also allows comparisons of alternative strategies.
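In econometric terms, comparing alternative rules means estimating how the behavioural parameters move with the rule, the θ(λ) of Lucas’ (16) and (17). A minimal sketch in the same spirit as the earlier toy economy, with equally hypothetical forms: several announced regimes have been observed, the behavioural parameter is estimated within each, and the fitted relation is used to forecast behaviour under a rule not yet tried.

```python
import numpy as np

rng = np.random.default_rng(1)

def theta(lam):
    # Hypothetical true dependence of the behavioural parameter on the rule.
    return 2.0 - lam

def simulate_regime(lam, periods=2000):
    """Data from one announced rule: x_t = lam * y_{t-1} + eta_t,  y_t = theta(lam) * x_t + eps_t."""
    y_prev, xs, ys = 0.0, [], []
    for _ in range(periods):
        x = lam * y_prev + rng.normal(scale=1.0)
        y = theta(lam) * x + rng.normal(scale=0.5)
        xs.append(x)
        ys.append(y)
        y_prev = y
    return np.array(xs), np.array(ys)

# Estimate theta within each observed regime, then fit theta as a function of lambda.
lambdas = np.array([0.1, 0.2, 0.3, 0.4])
theta_hats = []
for lam in lambdas:
    x, y = simulate_regime(lam)
    theta_hats.append(np.polyfit(x, y, 1)[0])

slope, intercept = np.polyfit(lambdas, theta_hats, 1)

# Forecast the behavioural parameter under a rule that has not been tried yet.
lam_new = 0.6
print(f"forecast theta({lam_new}) = {intercept + slope * lam_new:.2f}, "
      f"true theta({lam_new}) = {theta(lam_new):.2f}")
```

The forecast is only as good as the assumption that the past regime changes were, in Lucas’ words, widely discussed and understood changes in rules, so that the within-regime estimates really were regime-specific behavioural parameters.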
Another way to see what Lucas has in mind is to think about transitions or development
or any kind of systemic or structural change or reform. The problem then is the one discussed by
Krugman in his work on development, e.g., in History versus Expectations. There the fixed costs
and costs of transition distinguish between policies that are determined by history and those that
are guided by expectations. The switch from the former to the latter is somewhat revolutionary as
it is unprecedented for each particular history and hopefully irreversible.
