Professional Documents
Culture Documents
August 1999
ISSN 1325 0663
ISBN 1 87666 05 0
CENTRE PROFILE
The Centre for Health Program Evaluation (CHPE) is a research and teaching organisation
established in 1990 to:
• undertake academic and applied research into health programs, health systems and
current policy issues;
• develop appropriate evaluation methodologies; and
• promote the teaching of health economics and health program evaluation, in order to
increase the supply of trained specialists and to improve the level of understanding in the
health community.
The Centre comprises two independent research units, the Health Economics Unit (HEU) which
is part of the Faculty of Business and Economics at Monash University, and the Program
Evaluation Unit (PEU) which is part of the Department of Public Health and Community Medicine
at The University of Melbourne. The two units undertake their own individual work programs as
well as collaborative research and teaching activities.
PUBLICATIONS
The views expressed in Centre publications are those of the author(s) and do not necessarily
reflect the views of the Centre or its sponsors. Readers of publications are encouraged to contact
the author(s) with comments, criticisms and suggestions.
A list of the Centre's papers is provided inside the back cover. Further information and copies of
the papers may be obtained by contacting:
The Co-ordinator
Centre for Health Program Evaluation
PO Box 477
West Heidelberg Vic 3081, Australia
Telephone + 61 3 9496 4433/4434 Facsimile + 61 3 9496 4424
E-mail CHPE@BusEco.monash.edu.au
The Health Economics Unit of the CHPE receives core funding from the National Health and
Medical Research Council and Monash University.
The Program Evaluation Unit of the CHPE is supported by The University of Melbourne.
Both units obtain supplementary funding through national competitive grants and contract
research.
The research described in this paper is made possible through the support of these bodies.
AUTHOR(S) ACKNOWLEDGMENTS
The author would like to thank and acknowledge the comments of Bob Evans, Tom Rice, Victor
Fuchs, Tony Culyer, Paul Dolan, Paul Menzel, Erik Nord, and, in particular, John McKie. Errors in
the paper are, of course, original. I would also like to thank David Bull whose interest in the
history of ideas led to the commencement of this paper and Jan Abel Olsen who linked its
completion to my next invitation to Tromso.
Abstract
1 Introduction
Classical Rationalism
Subsequent Rationalism
Empiricism (Naturalism)
Essentialism
4 Rationalism in Economics
6 Discussion
7 Conclusions
References
List of Tables:
List of Figures:
The theme of this paper is that there is a malaise in a significant part of theoretical economics
which has adversely affected its character and growth and which has spilled over into applied
economics in a particular way; viz by reducing the scope of hypotheses that have been the
subject of empirical enquiry and by promoting policies on the basis of their conformity with an
established orthodoxy, in preference to policies supported by evidence. The approach to this
topic is both historical and epistemological. It is argued that the history of science has been
characterised by a struggle between the conflicting paradigms of Rationalism and Empiricism
with intellectual progress being broadly determined by the extent to which the latter and not the
former has been ascendant. It is argued that the reason for this arises from the epistemological
structure of the competing paradigms. While Empiricism leads to a method which encourages
the growth of knowledge, Rationalism encourages an ultimately sterile focus upon analytical
techniques per se.
It is suggested that economic orthodoxy and, more specifically, health economic theory has
adopted the form and, increasingly, the substance of the Rationalist paradigm and that the inhibiting
influence of this can explain the neglect of a series of issues which arise in Cost Utility Analysis
and, more broadly, in economic evaluation; issues which, for a non-economist, would have prima
facie candidacy for investigation and for possible inclusion in economic theory. Ten examples are
given. It is concluded that the opportunity cost of our adoption of methodological Rationalism in
terms of intellectual progress elsewhere and policy prescriptions may have been very high.
1 Introduction
The theme of this paper is that there is a malaise in the methodology of a significant part of
theoretical economics and that this has increasingly inhibited the exploration of divergent
hypotheses – the ‘bold conjectures’ – that fuel the growth of knowledge in the Popperian version
of modern empiricism – and promoted policies on the basis of conformity with orthodoxy. An
alternative statement of this theme is that there is a methodological dualism composed of
‘pragmatic’ theory and theorists – those who are relatively uninhibited in their search for the
solution to particular problems, and orthodox theory and theorists who constrain their efforts and
policy prescriptions to analyses and solutions that conform with ‘economic theory’ as defined by
the acceptance of conventional axioms.
There is nothing new in this thesis. Numerous economists have lamented the ‘crisis of modern
economics’ (Blaug 1992), the excessive emphasis on analytical, and particularly mathematical
techniques, and the crowding out effect this has had upon enquiry into observed behaviour
(Leontief 1971; Phelps-Brown 1972; Ward 1972; Worland 1972). Some have targeted general
equilibrium theory – the intellectual pinnacle of post-war economics – as exemplifying this
malaise (Blaug 1992; McCloskey 1991). In health economics, the usefulness of orthodox theory
has attracted particular criticism (Evans 1997, 1998; Rice 1998). By their acceptance of the
theory of supplier induced demand, the majority of health economists now accept some element
of non-orthodoxy (Feldman & Morrisey 1990; Fuchs 1996). There is, however, a quantifiable
trend towards the publication of evidence free ‘pure’ theory (Bridges 1999). This has been
accompanied by an increasing tendency for health economists to appeal to the authority of
‘economic theory’ in order to support particular methods and conclusions; and there is a
corresponding danger that this will lead to the same sterility of health economics theory, as
judged by novel insights and empirically relevant predictions, as has occurred elsewhere when a
Rationalist methodology has been adopted.
In the second part of the paper it is argued that the inhibiting influence of a Rationalist
methodology upon theoretical orthodoxy explains the neglect of a number of the particular issues
which arise in cost utility analysis, and, more broadly, in economic evaluation; issues which, for a
non-economist, would have prima facie candidacy for investigation by, and for possible inclusion in,
economic theory.
The symptoms of the postulated malaise can be found throughout economics. In the area of
macro theory, belief in rational expectations and the inferred non-existence of involuntary, non-
frictional unemployment was retained for over a decade despite the observation of what, to most
observers, appeared to be large-scale involuntary unemployment in virtually all western countries.
In consumer theory, the compelling evidence for Duesenberry’s relative income hypothesis (in the
psychological literature) was dismissed and the theory replaced by the theoretically more
appealing life cycle hypothesis1. As noted by Blaug (1992) there has been no diminution in the
interest or teaching of general equilibrium theory despite the fact that ‘some of the best minds of
modern economics devoted a quarter of a century and … enormous intellectual resources … to
endless refinements, none of which has even provided a fruitful starting point from which to
approach a substantive explanation of the working of the economic system’ (p168, 169). In
health economics, it took over two decades to convince the majority of health economists that
supplier induced demand could be a contributory factor to the rising use of services. Despite the
apparent impossibility of explaining service use and expenditures through time and cross-
sectionally by any other theory, orthodox economists are still sceptical of a theory which appears
to violate the assumptions of fixed preferences and well informed and self interested consumers.
The protracted debate over SID was characterised not only by a disregard or disbelief in the
significance of the evidence but by the ready acceptance of flimsy and sometimes invalid
criticisms (Richardson 1999).
1
By contrast, the displacement of Keynes’ absolute income theory of consumption by the permanent income hypothesis
occurred as a result of Milton Friedman’s documentation of the empirical superiority of the latter theory.
The second debate, over the meaning of equity4, was less analytical and more taxonomic. As with
the first debate, however, it was (almost) free of empirical content. Prima facie, the notion of
equity that is relevant for economists should be related to the notion which is valued and desired
by the society and its members. But empirical evidence with respect to social values was neither
presented nor suggested. The debate was, in effect, a striking example of ‘essentialism’ as
described below and the tangled intellectual web that is created by the a priori search for the
essence of the concept which is presumed to be locked within a word.
A further symptom of the malaise is the now well documented fact that economists have
persisted with, and drawn policy conclusions from, assumptions that are not simply unrealistic
because of their abstraction from reality, but assumptions which are known to be wrong in
particular contexts (see, in particular, Rice 1998). First, and possibly the most well documented
example is the persistence with the Von Neumann and Morgenstern axioms and, particularly the
assumption of the invariance of preference ordering of risky prospects when these are subject to
linear transformation. As summarised by Schoemaker (1982) there is a large body of evidence
demonstrating that individuals routinely and systematically violate these axioms. (It is also known
that Von Neumann and Morgenstern did not believe their axioms represented a general
description of risk related behaviour [Richardson 1994]). Perversely, while recognising the
empirical deficiencies in the theory, economists have continued to use it as the basis for their
analysis of risk, either ignoring the evidence or justifying the analysis with the exceedingly casual
argument that it is interesting and possibly helpful to pursue the implications of conventional
assumptions. Alternatively, the purpose of the axioms is sometimes redefined to give to the
theorem a normative significance which was not originally intended and which requires the
irrational disregard of risk related emotions that occur before the outcome is known (Richardson
1994).
A second example is the conventional assumption of fixed preferences which is one of the
cornerstones of welfare theory and the normative conclusions drawn from it. The assumption,
2
For a recent contribution to this literature see Reid (1998).
3
The author’s own contribution to this debate was to point out that if ‘utility’ is empirically different from ‘value’ – which is a
central tenet in the advocacy of the HYE – and if the stage 1 use of the Standard Gamble correctly incorporates this difference, then
the stage 2 use of the Standard Gamble will necessarily invalidate the measure as it would, in effect, incorporate risk adjustment a
second time. The argument, which is based upon the simple logic of measurement, has been ignored in favour of the analysis of the
theoretical consistency of the error free QALY and error free HYE. In this context ‘consistency’ means correspondence with the
axioms of choice that have been found repeatedly and consistently to be empirically invalid (Schoemaker 1982).
4
See Culyer and Wagstaff (1993) and Mooney, Hall et al (1992).
As a third example, orthodox economists define the objective function in terms of utility as
revealed by actual behaviour. There is a separate puzzle associated with the practice of defining
rather than investigating social objectives. However, if objectives are to be defined, then prima
facie, the criterion of revealed preferences appears reasonable if individuals are well informed
and what they reveal corresponds with what they like and with what produces satisfaction.
However, it is now believed that ‘wanting’ and ‘liking’ arise from two different neural systems
(Berridge 1996). ‘Wanting’ may not, therefore, be a reflection of activities which produce
subjective wellbeing – which, it is arguable, should be a part of the objective function.
These three examples suggest that 50 years of psychological research has been largely, if not
entirely, ignored by mainstream economics which persists with the impoverished concepts of
human behaviour and motivation postulated in an earlier era.
The present paper suggests that these diverse ‘puzzles’ are not simply a set of coincidences.
Rather, they are a result of the primacy of orthodox over pragmatic – unconstrained – theory and
observation; of the drive by economists for theoretical elegance and analytical sophistication at
the expense of empirical relevance. In this respect, orthodox economists are perpetuating a long
tradition that once characterised the physical sciences but has now been replaced by a more
fruitful approach to empirical problem solving.
The major events of Western intellectual history and the individuals associated with them are
summarised in Figure 1 (see footnote 5).
Classical Rationalism
Thales (Circa 600 BC) The birth of Western science and philosophy is usually attributed to the
school of thought which evolved in Miletus (modern day Turkey) early in the 6th Century BC and is
associated with the name, ‘Thales’. Before this time it was believed that nature was subject to the
random dictates of magical or divine forces. In Miletus, and for the first time in known Western
history, a hypothesis was put forward which would be pursued, in different forms, until the
present. This was that, behind the apparently chaotic behaviour of the observed world, there are
a series of contextless ‘laws’ or regularities. It was postulated that these laws could be accessed
by a combination of observation and reason, processes which have subsequently been described
by the terms ‘Empiricism’ and ‘Rationalism’. The subsequent history of thought has been, inter
alia, the history of an intellectual pendulum which has swung between these two elements as a
belief in their relative importance has fluctuated.
Parmenides (470 BC) The first movement of the pendulum was towards Rationalism, with
Parmenides of Elea (a Greek colony in southern Italy) outlining the argument that would be adopted, in
one form or another, by all subsequent Rationalists. This was that empirical observations are
intrinsically unreliable – subject to error – and that, consequently, they cannot be the basis for
‘truth’. By contrast, the conclusions drawn from correct reasoning must necessarily be true. But
as deductive reasoning requires premises or initial axioms, ultimate ‘truth’ depends upon the truth
of these axioms. According to Parmenides the veracity of axioms is ensured if they are derived
5
This summary draws heavily upon Russell (1965); Flew (1979); Magee (1973); Barnes (1996); Schilpp (ed 1974); Popper
(1974, 1983); Blaug (1992); and particularly Tarnas (1991).
Democritus (c460-370 BC) The religious element of the ‘sceptical Rationalism’ of the 5th
Century BC was finally eliminated and replaced with a ‘naturalistic Rationalism’; that is, a
deductive, rationalist methodology which did not base its authority upon divine revelation but upon
clearly perceived truths which could be supported, in part, by their capacity to explain the natural
world. Interestingly, Democritus used this methodology to derive the theory of ‘atomism’ – the
theory that the world is composed exclusively of uncaused and immutable material ‘atoms’.
These, it was argued, are invisible, minute and indivisible particles in perpetual motion.
Random collisions result in varying combinations which produce the visible world of experience.
The epistemological interest in these conclusions is that the use of axioms and a methodology
which would be widely or universally rejected today, resulted in a description of the physical
universe which corresponded remarkably closely with the modern atomic theory. Naive or
unacceptable assumptions and methods are not inconsistent with truth or useful conclusions.
Plato (427-347 BC) With Plato, classical Rationalism reached its zenith and was to have a
dominating influence on thought for the next 1,500 years. The first plank in the edifice was the
hypothetico-deductive method of reasoning. Rather than dealing with facts, the method involved
statements about facts; that is, the statement of hypotheses in a form which permitted the
application of deductive logic. This could result in conclusions capable of testing by further logical
argument. While attributed to Socrates, the second plank was the search for absolute definitions;
definitions of the concepts embodied in particular words which could then be used as the basis
for deduction.
The third and distinctively Platonic plank was the statement of a fundamental axiom; viz, that each
property and its corresponding concept – ‘shape’, ‘colour’, ‘beauty’, ‘justice’, etc – derives its
characteristic by sharing or partaking in the ‘essence’ or ‘Form’ of that property. ‘Truth’ could only
be accessed by the use of the intellect to understand the nature of these ‘transcendent’ entities –
the Platonic ‘Forms’ or ‘Ideas’. As with Democritus, the epistemological anchor of this system –
the guarantee of truth and certainty – was the argument that the fundamental axiom was a self
evident revelation or truth for the person (trained philosopher) who sought truth with intellectual
integrity.
A further important and defining characteristic of this schema was the belief that all human
knowledge might be set out as a single axiomatised science. That is, from a small set of primary
truths every other truth might be deduced. Fundamental truths did not depend upon context nor
ever vary in kind. Empirical observation was to be distrusted as it was merely the shadow of
reality. Conflicting empirical evidence was a reflection of the unreliability of the observation and
not the unreliability of the general truth.
Aristotle (384-322 BC) While the basic Rationalist structure was maintained by Plato’s pupil,
Aristotle, his work represented an important swing of the intellectual pendulum towards
Empiricism. Along with his voluminous codification and description of an astonishing variety of
plants and animals and his formalisation of the rules of logic, Aristotle postulated a Rationalist
ontology – schema of things – which was diametrically opposed to the Platonic system in one
important respect. Aristotle agreed that observation might, in particular cases, be unreliable and
that truth should be obtained through the application of logic. However, the basic axioms in the
Aristotelian ontology were obtained by the repeated observation of nature and by the careful
recognition of common elements. The essence of some general truth could then be inferred and
the role of definition was to explain the essence of the concept which corresponded with a
common word used to describe disparate observations. Logic was then a device for describing
the interrelationships between concepts. Importantly, and in apparent contradiction with Kenneth
Arrow (see below), logic was not a device for increasing the ‘truth content’ or confidence in a
theory. This was derived entirely from the initial axioms. Ultimately, however, knowledge was to
be obtained from perception. In Barnes’ words, ‘Plato, having given abstract Forms the leading
role in his ontology, was led to regard the intellect rather than perception as the searchlight
which illuminated reality. Aristotle, placing sensible particulars at the centre of his stage, took
sense perception as his torch.’ (Barnes 1996, p58).
By the end of the classical Greek era each of the elements of Rationalism and its variants had
been explored. The defining and central tenet was that knowledge can be reliably obtained only
by the application of analytical techniques to a set of axioms. These, in turn, could be derived by
divine revelation or by pure intellectual insight. In the somewhat less pure Rationalism of
Aristotle, the intellect was used to abstract common elements from nature and to incorporate the
essence of things in a definition. The major task of the scholar/philosopher was to show why
certain conclusions must be true and this was to be achieved by personal insight and analytical
skill. All variants of Rationalism were characterised by a belief in the unity of the truths underlying
nature; that is, the context free nature of the truths. Truths were united according to the laws of
logic and mathematics and for many subsequent Rationalists the intellectual edifice created by
Euclid represented the quintessence of good science. For most Rationalists, mathematics had
Subsequent Rationalism
Few original contributions to philosophy were made during the period of the Roman Empire.
With the Fall of the Western Empire many of Aristotle’s writings were lost and various forms of
Neo-Platonism dominated Western thought until the re-emergence of the works of Aristotle in
about 1150. This commenced a process of change which was accelerated by the influx of Greek
scholars into Europe after the fall of Constantinople in 1453. This was shortly followed by the
emergence of the scientific revolution which was one of the outcomes of the Renaissance. Apart
from a brief flourishing of continental Rationalism, this gave way to the British Empiricism of the
17th Century and the period of the ‘Enlightenment’. While the contest between Rationalism and
Empiricism continued, comparatively little was added to the underlying epistemology or
characteristics of the two paradigms and the major figures, of importance for other reasons in the
history of philosophy, are discussed only briefly below and only in relation to their position with
respect to the epistemological contest between the two paradigms. The evolution of empiricism
is better known than the convolutions of Rationalism and receives fairly brief treatment below.
Augustine (354-430 AD) For almost 800 years Western thought was moulded by (Saint)
Augustine’s synthesis of Christianity and Platonism. Augustine equated the Platonic Forms with
the mind of God, and argued that this basis for reality lay beyond the world of sense perceptions
and could only, therefore, be accessed by introspection of the soul. As with Plato, ‘truth’ would be
guaranteed if such introspection was conducted by the true philosopher who was, Augustine
argued, a lover of God. This view was to permeate virtually all of medieval Europe (Tarnas
1991).
Aquinas (1225-1274) The rediscovery of Aristotle’s works coincided with the influx of Arabic
scholarship and the rediscovery of later Greek works including those of Ptolemy. These ideas
had an enormous impact as they were consistent with a rising tide of naturalism in Europe.
Aquinas’ contribution was to create a new synthesis (scholasticism) which was to remain
influential until the Renaissance 300 years later. For Aquinas, naturalism – the realm of
philosophy – and revelation – the role of theology – were complementary in the same way as
Empiricism and Platonism were complementary for Aristotle. Because of the uncertainties of
observation, truth could, once again, only be ensured through reason. Axioms were truths
relating to the ‘eternal Forms’, as described by Augustine and Plato. Knowledge of them was
derived from a combination of observation and revelation; the former initiating or suggesting the
axioms and careful contemplation in combination with revelation guaranteeing their veracity.
From this axiomatic basis, both religious truths and the truths of nature could be deduced. As
with Augustine and earlier Rationalists, both the natural world and the world of Platonic Forms
represented a unified and coherent structure: truths were universal and did not vary with
context or time.
Empiricism (Naturalism)7
Galileo (1564-1642) Whilst scholasticism with its basically Aristotelian epistemology remained
the dominant intellectual force until the 17th Century, the seeds of the Renaissance were sown
shortly after the death of Aquinas and during the 14th Century. The assumed unity of the world of
Platonic ideas and the empirical world of sense perception began to break down as did the 2,000
year old belief that basic truths could only be accessed by introspection and by intellectually
grasping the essence of Forms or Ideas. Thus, for example, William of Ockham (1285-1349)
challenged the long held view that the role of definition was to encapsulate essences. He
promoted the concept of ‘nominalism’; that is, the view that universals were only names or mental
concepts and not real entities. Increasingly, it was accepted that useful concepts could be
derived by observation and not by intellectual introspection, a view which led to the secularisation
of epistemology. By the time of the Renaissance this view (modernism) had become the
predominant belief.
The basic principles of the ‘scientific revolution’ which accompanied the Renaissance were most
clearly articulated by Galileo whose role model was Archimedes, the Greek mathematical
physicist whose writings had been recently rediscovered. The Aristotelian method of passive
observation, distillation of observations into essences, the capture of essences in definition and
the application of verbal logic was totally replaced. The role of the scientist, Galileo argued, was
6
Leibniz is now credited with the discovery of calculus independently of Newton. As a consequence he
was the subject of a series of vituperative attacks by the Royal Society, written by its President, Isaac Newton!
7
The terms empiricism and naturalism are sometimes used interchangeably. However the term naturalism is also used to
denote empiricist theories of ethics viz, the view that normative statements of the form ‘X is good’ may be translated into positive
statements, such as, ‘X will maximise utility’.
Kant (1724-1804) Long before the birth of logical positivism Immanuel Kant had established the
basis for a partial reconciliation of Empiricism and Rationalism and, simultaneously, postulated a
fundamental defect with British Empiricism. Kant commenced with the observation that the
history of metaphysics was the history of contradictory theses, each plausible within its own terms
of reference; that is, reasoning is capable of rationalising incompatible conclusions. In his
‘Critique of Pure Reason’ he postulated that the human mind does not passively observe nature.
Rather, it is an active agent which imposes patterns on the physical world; observations are
accepted or rejected in conformity with pre-conceived hypotheses. As Tarnas notes ‘in the act of
human cognition, the mind does not conform to things; rather, things conform to the mind’,
(Tarnas 1991 p343). As described by Popper (1975) the mind does not act as a ‘bucket’ which
collects facts at random; rather, it acts as a ‘searchlight’ that looks for observations that are
consistent with its expectations.
Popper (1902-1995) Kant’s hypothesis was to become one of the cornerstones of Karl Popper’s
epistemology, described here as ‘Modern-Empiricism’ to distinguish it from earlier British
Empiricism. Based upon Kant’s hypothesis, Popper argued that ‘truth’ may never be achieved.
Because falsification, but not verification, is in principle possible, the primary methodological
exhortation by Popper is to expose theories to falsification8. Our confidence in propositions, he
argues, arises from the failure to falsify. The higher the prior probability of falsification, the
greater our confidence in the proposition when falsification is attempted unsuccessfully. The
extent of our knowledge – the truth content (‘verisimilitude’) of theories – depends upon the
number of states of the world that are ruled out by the proposition; that is, by the number of
observations that may have, potentially, falsified the proposition but have not done so.9 The
growth of knowledge is achieved as a result of two pivotal elements of the epistemology; first,
error learning arising from refutation and, secondly, divergent thinking – described by Popper as
‘bold conjecture’ – in the formulation of hypotheses. In the starkest contrast to Rationalism,
Popper insists that ‘anything goes’ in formulating hypotheses to explain observed puzzles –
anomalies arising from our current (or orthodox) hypotheses. Adherence to an orthodoxy –
constraint by conventional axioms – is the antithesis of good science.
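Popper’s verbal definition of verisimilitude (footnote 9) can be rendered as a simple formal sketch; the notation Vs, Ct_T and Ct_F follows common expositions of Popper and is used here only for illustration:

```latex
% Verisimilitude (Vs) of a theory a: Popper's tentative quantitative
% definition, the truth content of the theory less its falsity content.
\[
  \mathrm{Vs}(a) \;=\; \mathrm{Ct}_{T}(a) \;-\; \mathrm{Ct}_{F}(a)
\]
```

As footnote 9 notes, Popper himself later conceded that this attempt at a precise definition failed, although the informal idea, that good theories forbid more and survive harder tests, does not depend on the formula.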
8
A common but mistaken belief is that Popper’s falsification principle is flawed because in practice falsification is no more
possible than ‘verification’. Four paragraphs after his initial statement of the falsification principle, however, he notes that ‘it is always
possible to find some way of evading falsification …’ (Popper 1972, p42). The issue is then clarified. As noted in the text the
desirability of seeking and not evading falsification is a methodological exhortation designed to promote error detection and the growth
of knowledge. The misunderstanding of this distinction was so widespread that, in his ‘Postscript to the Logic of Scientific
Discovery’, – published in two volumes (Popper 1983) – Popper commences by commenting on this error. ‘It is of great importance
to current discussion to notice that falsifiability in the sense of my demarcation criterion is a purely logical affair. It has to do only with
the logical structure of statements and of classes of statements, and it has nothing to do with the question whether or not certain
possible experimental results would be accepted as falsifications (p xx) … we now come to a second sense of ‘falsifiable’ and
‘falsifiability’ which has to be distinguished very clearly from my purely logical criterion of demarcation in order to avoid gross
confusion (p xxi) … it is never possible to prove conclusively that an empirical scientific theory is false …. Hence, to repeat, we must
distinguish two meanings … (1) ‘Falsifiable’ as a logical – technical term … (2) ‘Falsifiable’ in the sense that the theory in question
can definitely or conclusively or demonstrably be falsified. I have always stressed that even a theory which is obviously falsifiable in
the first sense is never falsifiable in the second sense … an entire literature rests on the failure to observe this distinction’ (p xxii).
9
More precisely, Popper defines ‘verisimilitude’ as the difference between the truth-content and the falsity-content of a theory
(Popper 1994). Popper (1983 p xxxv) subsequently agreed that this attempt at a precise definition of verisimilitude was a failure and
that such precision of definition was unnecessary for his theory.
10
See the statement of the principle in the previous footnote.
11
Lakatos simply relabelled the Popperian description while implying that it was an evolution of Popperian methods beyond
‘naïve’ falsification. In his intellectual autobiography Popper reports that ‘when Professor Lakatos was writing the paper (outlining the
‘new’ concept of a scientific research program) … he asked me that I should ‘never’ read this paper, and I have complied with his
wish; (until his death) … Now that I am called on to reply to his views, I am disturbed to find that the argument which appears to be
crucial for his criticism of my views … must, in my opinion, be totally rejected as unsound … (it is) an interpretation of my theory of
falsifiability that makes nonsense of my views’ (Popper 1976, pp 999-1000). In confirmation, W W Bartley III, a long-time
acquaintance of Lakatos, subsequently wrote as follows: ‘There was a time … when I regarded Lakatos as the most immoral man I
have ever met. I later came to think this judgement naïve. Lakatos merely talked openly and appreciatively … of the sort of behaviour
which is widespread and almost universally covert … I often saw Lakatos lie when it suited his purposes … He explained to me in
1961 … that Popper was quite wrong to say that words do not matter. Quite to the contrary, Lakatos insisted, ideas are of secondary
importance compared to the names one gives to them; if you give your ideas good names, they will be accepted – and you will be
named the Father … It is then appropriate that Lakatos should have acquired his chief fame … for his ‘scientific research program’.
This was an idea that he took over completely developed from the accounts by Popper, Agassi & Watkins of ‘metaphysical research
programs’. Lakatos had the good sense to see that the word ‘metaphysical’ presented an insuperable public relations obstacle to the
professional philosophers of scientific bent who lacked his own sense of humour. So he calmly changed the word ‘metaphysical’ to
the word ‘scientific’ and won the acclaim that he had intended for the notion. What Popper meant, of course, is that words shouldn’t
matter. But Lakatos knew better than to moralise’ (Bartley 1976, pp 37-38).
Essentialism
As the defining characteristic of Rationalism is the deduction of knowledge from initial axioms, it
is necessary that some or all of the axioms contain different parts of the knowledge that is to be
extracted from them. Taken together, the set of the axioms must contain all of the information to
be extracted. Deductive reasoning re-configures rather than creates knowledge. As previously
noted, Plato explicitly sought to convert the truths concerning the eternal Forms into linguistic
statements which might become the subject of deductive logic. But it was Aristotle who
formalised the ‘essentialist method of definition’. The purpose of definition was to capture the
‘essence’ of a general truth. While Aristotle differed from Socrates and Plato in the source of that
truth (observation for Aristotle; introspection for Socrates and Plato), in each case a definition was
the answer to a question of the form ‘what is meant by X’, where X represents a general word
such as ‘justice’, ‘rationality’, ‘colour’ etc. The truth of deduced propositions therefore depends
upon the truth of the axioms and this implies the need to pay the greatest attention to the ‘truth’
and accuracy of definitions. This, in turn, leads to the intense analysis of the meaning of words to
ensure that the definition embodies the true essence of the word/concept. As stated by Blaug
(1992), ‘Adherents of essentialism are inclined to settle substantive questions by reaching for a
dictionary of their own making’ (p109). This process accounts for a very significant part of the
intellectual effort of Western philosophy.
This practice and the role of definition are in the starkest contrast with the methodological
nominalism first advocated by William of Occam and adopted as one of the cornerstones of
Popperian epistemology (Popper 1974b). In this view, words are labels: questions of the form ‘what
is X?’ are likely to be misleading. The meaning of X may not be precise (and, indeed, its
description cannot be fully precise without encountering circularity or an infinite regress: X must
be defined in terms of words which must themselves be defined in terms of words, and so on). Rather, the definition
should be made clear by the theory context in which it is used and, conversely, clearly presented
theory should leave no ambiguity about how particular words are to be used. The choice of
words, per se, however, should not have any particular significance. Hence Popper’s lack of
concern over his label ‘Metaphysical Research Program’; with nominalism the program might equally well have been given some other name.
The significance of essentialism in the history of western thought should not be underestimated.
In the second last paragraph of his monumental ‘History of Western Philosophy’ Russell (1965)
concludes:
‘What is number? What are space and time? What is mind, and what is matter? I
do not say that we can here and now give a definitive answer to all these ancient
questions, but I do say that a method has been discovered (analytical philosophy)
by which, as in science, we can make successive approximations to the truth’
(Russell 1965, p789; bracketed words inserted).
As clearly noted by Allais (1984) the question, ‘What is meant by X?’, may rapidly degenerate into
a confused but apparently profound analysis. As the meaning of words mutates from context to
context and as words are used metaphorically or even to describe conflicting ideas, the attempt to
abstract the essence of meaning from each of these contexts is a recipe for analytical entropy.
12
Putnam (1975) and others have argued that the essence of a thing may be equated with its basic physical structure. Thus,
the essence of water is being composed of H2O molecules; the essence of a tiger will have something to do with its DNA. The role of
science, then, is to discover these essences, and commencing with the correct view that there are such essences will motivate the
scientific enquiry into the substance. This argument is only superficially attractive. At best it applies to the physical world. Concepts
such as ‘beauty’, ‘truth’, ‘number’, ‘justice’, ‘probability’, ‘rationality’, ‘utility’ do not necessarily have real world correspondence as they
are created; and it is possible to create an unlimited number of concepts unrelated to the physical world (fairies, ghosts, and unicorns
are all understood, more or less, by children). In effect, the Putnam et al approach simply equates ‘the essence of a thing’ with the
physical world and supports this with the obvious argument that there is benefit in investigating the physical world to discover its
structure (‘essence’). However the epistemological role of such ‘essences’ is quite different from the role of the Aristotelian ‘essence’.
For Putnam, discovery of the essence is the endpoint of scientific enquiry (which implies that an independently determined
methodology must drive the enquiry). The essence of essentialism in the Aristotelian schema is that the true essence is not the
endpoint of enquiry but much closer to the commencement. For many Rationalists the essence is seen through introspection or
divine revelation. Subsequent knowledge is then deduced. For Aristotle and Francis Bacon the essence is abstracted from
observation but once again becomes the premise from which further knowledge is deduced. It is not the endpoint of enquiry.
It is, indeed, also questionable whether Putnam can ever, in principle, discover an ‘essence’. The tentative view that H2O is
the essence of water must be modified as H2O is also the essence of ice and steam. Consequently, a lower level essence must be
determined, with the possibility that this also must give way to a more fundamental essence, and so on ad infinitum. The paradoxical
conclusion must therefore follow that the essence of a concept underlying a particular word will almost certainly be incomprehensible
to those using the word. But the essence is intended to be the essence of the concept used in communication and, clearly, the
Putnam essence cannot be this.
13
Wittgenstein argued that the things denoted by a term may not share one common property but may be connected by a
network of common features. Thus, in one of his examples, the question is asked: ‘what is a game?’. Is it something involving two or
more people competing? But this happens in war and business. Is it something involving two or more people competing and having
fun? But solitaire is a game and gambling addicts may not have fun, and so on. Eventually there may be no common property to all
examples of a game, but sufficient overlap between different examples that the concept is widely understood. (I am grateful to John
MacKay for pointing out the significance of this and the material in the previous footnote).
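Wittgenstein’s point has a simple set-theoretic structure, which can be sketched as follows. This is a purely illustrative example, not drawn from the paper; the ‘games’ and their feature sets are hypothetical. What matters is only that every pair of examples shares some feature while no single feature is common to all of them.

```python
# Illustrative sketch of 'family resemblance': the feature sets are
# hypothetical, chosen only so that every pair of games overlaps while
# the intersection over all of them is empty.
games = {
    "chess":    {"rules", "winners_and_losers"},
    "patience": {"rules", "amusement"},
    "catch":    {"amusement", "winners_and_losers"},
}

features = list(games.values())

# Every pair of examples shares at least one feature ...
pairwise_overlap = all(
    a & b
    for i, a in enumerate(features)
    for b in features[i + 1:]
)

# ... yet no feature is common to all examples.
common_to_all = set.intersection(*features)

print(pairwise_overlap)  # True
print(common_to_all)     # set()
```

The concept ‘game’ is thus held together by overlapping resemblances rather than by a shared essence that a definition could capture.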
The structures of the Rationalist and Popperian epistemological systems are represented in
Figures 2 and 3. It is not the intention here to present the latter as the methodological gold
standard. Rather, it is presented primarily to highlight the contrast with Rationalism. Both
systems employ the hypothetico-deductive system for translating assumptions into conclusions
and the analytical heart of both systems is the same – deductive logic or, in its more sophisticated
manifestations, mathematical analysis. As further discussed below, it is arguable that the
Rationalism of economic orthodoxy has been an attempt to mimic the form of this analytical core
in the physical sciences (Ormerod, 1994). Beyond this analytical core, however, the two systems
are radically dissimilar. The true engine of Popper’s Empiricism is twofold: first, the
methodological exhortation to expose assumptions (tentative axioms) and conclusions to
criticism and empirical falsification and, secondly, the exhortation to explore ‘bold conjectures’.
The mind-matter dichotomy is, at least in part, brought together by the unconstrained exposure of
the patterns – hypotheses – which the mind creates, to the possibility of empirical refutation. The
more this process is repeated, the greater the possibility that the mental patterns suggested will
correspond with the external patterns first postulated by Thales and retained as the central tenet
of the Popperian schema14. It is these dynamic aspects of Modern Empiricism – and not its
‘static’ logical structure – that are the basis for an ‘evolutionary approach to the increase in
objective knowledge’ and the ‘growth of scientific knowledge’: the concepts highlighted in the
titles of two of Popper’s key books (Popper 1974b, c, d; 1975).
By contrast, Rationalism has at least two fundamental defects in its epistemological structure; the
first to do with the narrowly defined process of deduction and the second concerning the
dynamics of the growth of knowledge. The first of these is illustrated in Figure 4 which is an
enlargement of Figure 2 but with the middle step – the deductions – represented only by lines
connecting axioms and conclusions. Perhaps the most secure tentative hypothesis which may be
supported by the history of science and philosophy is that we have no examples of certain truth;
that, as Popper insists, general propositions may only be granted the status of ‘tentative
hypotheses’. Acceptance of this view in the physical sciences became universal after the leading
candidates for unquestionable truths, as demonstrated by their extraordinary predictive power –
14
In his article ‘The Science of Galileo and its New Betrayal’ Popper (1963a) defends methodological realism - the view that
science deals with a physical reality corresponding with its descriptions - against the encroachment of instrumentalism - the view that
theories are simply instruments for useful prediction.
Once it is granted that general propositions – axioms – are not known to be universally true then it
must follow as a matter of logic that some of the conclusions drawn from these axioms may be
false. For example, Figure 4 represents a simplified version of the Orthodox theory of consumer
demand. In this, three initial and general propositions/axioms represent the theoretical core; viz,
(i) the information about the utility-relevant characteristics of goods and services is sufficient for
decision making; (ii) consumers are rational, by which it is meant that they will maximise utility;
and (iii) consumers have the power to act, which subsumes several sub-hypotheses; viz, that they
are ‘empowered’ – that they have the will power, ‘know how’ and motivation to act; that there is
some form of market which permits them to express their preferences and that there is not an
excessive concentration of power on the supply side of this market. When this theoretical core is
linked to subsidiary propositions concerning the market for groceries then it may be correctly
deduced that such a market will be ‘efficient’ as economists understand this term. By contrast, a
similar conclusion correctly deduced about the market for health services may be invalid. While
the core assumptions are true in the first market they may not be true in this second market. As
observation suggests that, in the real world, these particular assumptions are very often correct,
they have been adopted as general axioms and have become part of a widely accepted
orthodoxy; that is, the assumptions are so commonly assumed that they are taken for granted.
However, the general ‘economic solution’ based upon this orthodoxy may lead to orthodox
conclusions in the health sector that are misleading. This is depicted in Figure 4. In this, each
assumption/axiom may be true in a particular domain of behaviour – context. In other domains it
may be false. From this it follows, as a logical necessity, that some (or all) predictions may be
false. Conclusions cannot, therefore, be assumed to be generally true on the basis of the truth
content of the initial axioms.
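The logical point can be sketched in a few lines. This is a purely illustrative toy, not part of the original paper; the axiom names and the truth-values assigned to each context are hypothetical simplifications of the three core axioms above.

```python
# Illustrative sketch: axioms that hold in one context but not another carry
# that context-dependence into any conclusion deduced from them. The axiom
# names and context truth-values below are hypothetical simplifications.

AXIOMS = ("sufficient_information", "rational_maximisation", "power_to_act")

CONTEXTS = {
    "groceries": {"sufficient_information": True,
                  "rational_maximisation": True,
                  "power_to_act": True},
    "health":    {"sufficient_information": False,  # eg asymmetric information
                  "rational_maximisation": True,
                  "power_to_act": False},           # eg agency, thin markets
}

def efficiency_conclusion_is_sound(context):
    """A deduction is guaranteed only where every one of its premises holds."""
    return all(context[axiom] for axiom in AXIOMS)

for name, context in CONTEXTS.items():
    print(name, efficiency_conclusion_is_sound(context))
# groceries True
# health False
```

The same valid deduction thus yields a sound conclusion in one domain and an unwarranted one in the other: exactly the asymmetry between the grocery and health markets depicted in Figure 4.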
The second fundamental defect also arises from the fallibility of assumptions. Under either
Rationalism or Empiricism there is no necessity for assumptions/axioms to be testable. If they are
indeed false, then the Empirical paradigm, at least in the Popperian manifestation illustrated in
Figure 3, has at least one unavoidable error detection mechanism; viz, whether or not subsequent
predictions explain the initial empirical puzzle or defect with the previous theory which was the
starting point of the enquiry. If the Popperian methodological exhortation is followed then
additional conclusions will be exposed to possible refutation, thereby strengthening the likelihood
of error detection. No such mechanism for the growth of knowledge exists within the Rationalist
paradigm.
15
Other candidates, such as the rationality of individuals and the universality of utility maximisation, briefly discussed later,
depend for their plausibility upon interpretations of the key terms which are so close to being analytic (tautological) truths that they
produce no predictions capable of potential falsification. In the Popperian epistemological schema they have no informative truth
content.
[Figure 2: Rationalism – axioms (abstract general truths) → deductions → a coherent/unified system of truths]
It is clear from this brief overview that there is something of a continuum between pure
Rationalism and Popper’s Post Empiricism. To the extent that theory and conclusions are
exposed to empirical refutation and to the extent that there is a willingness to contrast new with
old theory, the methodology approaches the ideal of Modern Empiricism exemplified by the
Popperian theory. Conversely, to the extent that orthodox axioms are preserved at all cost, new
and conflicting theories and novel predictions eschewed by design or by default and emphasis
placed upon the analytical core, the system approaches the ideal of Platonic or Cartesian
Rationalism.
The structure of the respective epistemologies is not, however, the only relevant characteristic of
Rationalism and Modern Empiricism. In Kuhnian terms, each of these epistemologies represents
a different ‘paradigm’ – not simply a different set of core hypotheses but a set of shared beliefs,
concepts, perceptions, language, rewards and punishments for the reinforcement or erosion of
the paradigm and rules for the conduct of ‘normal science’. The contention of this paper is that
orthodox economics is increasingly adopting the characteristics of the Rationalist paradigm in this
Kuhnian sense.
[Figure 3: Modern Empiricism – problems → new tentative conjecture → theory (definitions, logical relations) → testing of truth content → rejection, or acceptance with increased confidence through exposure to possible falsification]
[Figure 4: context-dependent deduction – the same orthodox axioms yield valid conclusions in Context 1 (groceries) but invalid conclusions in Context 2 (health), so that orthodoxy is misleading]
Economists have, of course, been aware of the methodological characteristics of their profession;
but it is not obvious that the full implications of the various methodological debates have been
widely appreciated. Nineteenth Century economists were well aware of the dependence of
economic theory upon abstract assumptions. As noted by JS Mill ‘political economy is essentially
an abstract science that employs the “method a priori” … (by which we mean) … reasoning from
an assumed hypothesis.’ (Quoted from Blaug 1992, p57). John Neville Keynes summarised the
Nineteenth Century view when he argued that ‘the right procedure is the a priori method of
starting from “a few indisputable facts of human nature” … (and because of the inevitable
abstraction from context this implied that) … political economy is a science of tendencies only, not
of matters of fact’. (Blaug p73). An important corollary of this view, however, is that in particular
contexts the generalisations embodied in the axioms might fail; there might be what Mill called
‘disturbing cases’. It was for this reason that Mill warned that ‘the mere political economist, he
who has studied no science but political economy, if he attempts to apply his science to practice
will fail’. (Blaug p58). This was a view echoed by John Maynard Keynes when he argued that
‘good economists are scarce because the gift for using ‘vigilant observation’ to choose good
models … appears to be a very rare one’. (Blaug p79).
The Rationalist view of axioms as context free truths that are independent of experience was to
be most clearly articulated in the 1930s and 1940s by Robbins and Von Mises with the latter
arguing that ‘the fundamental propositions of economic science are ‘true’ or ‘hold’ independently
… of experience … no meaningful action can be performed without them and therefore no action
would contradict them.’ (Latsis 1976, p5). The fundamental axiom of rationality is, therefore, ‘a
synthetic, a priori principle … this principle says that human actions are adequate or appropriate
in the situations in which they occur.’ (Latsis 1976, p4).
In the subsequent debate over the status of assumptions, a less extreme position was adopted.
In his influential essay on the methodology of economics, Friedman (1953) argued for the
instrumentalist view that assumptions need not be true or an accurate reflection of reality.
Rather, it was sufficient that they should serve as useful instruments (a view initially repudiated
but subsequently and reluctantly accepted by Galileo in response to the persuasive arguments of the Church16).
In his discussion of the conceptual framework of modern economics, Fusfeld (1980) notes that in
the 1950s and 1960s many economists believed that there had been a satisfactory resolution of
the methodological debate and that the methodology of ‘logical empiricism’ was the appropriate
basis for economic analyses. Fusfeld (1980) describes the partial consensus as follows: ‘The
method of logical empiricism embodies three steps … the first step is construction of a theoretic
model; the second derives hypotheses about reality from the theory; the third tests the hypotheses
against empirical data (p3) … and rejects hypotheses falsified by empirical tests (p5) … The
operating rule of logical empiricism is that logical propositions, in addition to possessing logical
validity (ie being correctly derived from the assumptions of the theory) must also generate
testable hypotheses, or be capable of generating potentially testable hypotheses’ (p4 bracketed
words added). With the exception of two important elements this consensus view generally
corresponds with the Popperian description of what science ought to be17.
This consensus was short-lived. Fusfeld suggests that its demise was at least in part attributable
to Thomas Kuhn’s influential book ‘The Structure of Scientific Revolutions’ (Kuhn 1962). In this,
Kuhn argues that normal paradigms – core theories – are not, in fact, rejected when hypotheses
derived from them are falsified and that ‘normal scientists’ – those working within the paradigm –
seek to defend and not falsify their paradigm. Fusfeld argues that the attempt by Lakatos to
replace simple falsification with a more descriptively accurate process involving the rise and fall
of ‘scientific research program’ similarly failed because of Kuhn’s demonstration that the very
facts needed for the falsification of hypotheses are themselves unprovable. They are generated
in the biased atmosphere of a particular paradigm and are eventually accepted or rejected as a
result of the agreement or disagreement of the scientific community. That is, ‘facts’ themselves
are inherently biased and subjective.
16
See Popper (1974d),‘The Science of Galileo and its New Betrayal’. Popper (1974 p98) notes that ‘there was no objection
to Galileo’s teaching … so long as he made it clear that its value was instrumental only; that it was nothing but a ‘supposition’ as
Cardinal Bellarmino put it; or a mathematical hypothesis – the kind of mathematical trick ‘invented and assumed in order to abbreviate
and ease the calculations’. Galileo’s crime was to insist that his hypothesis described the real world.
17
The two (related) missing elements are, first, that scientists should seek to find fault in their theories (ie generate
hypotheses which they suspect may be falsified), because confidence in a theory (verisimilitude) rises not with the verification of
self-evidently true hypotheses but with the failure to falsify the theory with hypotheses which initially appear improbable. Second,
theories with the greatest verisimilitude – truth content – are likely to arise from ‘bold conjecture’ (lateral or divergent thinking) as this
process is most likely to produce hypotheses contradicting established wrong beliefs. Note that these quintessential features of the
Popperian method represent methodological exhortations rather than logical requirements.
The defining characteristic of Rationalism is the assumption of axioms which are regarded as
embodying the essence of truths that are assumed to be self evident for one of several reasons
and the use of these axioms to derive conclusions whose truth is guaranteed by the truth of the
axioms. The other characteristics associated with Rationalism are: the distrust of observation
because of its unreliability; the analysis of concepts/words in order to distil their essence which is
then to be encapsulated in definitions. The emphasis upon showing why conclusions must be
true by demonstrating their correspondence with axioms, the emphasis upon analytical
techniques – and the percentage of the intellectual effort devoted to this – in order to extract the
greatest volume of truth from the axioms and, as a consequence of this, the special status of
mathematics; and, finally, the emphasis upon intellectual coherence and belief in the universality
– context independence – of the final conclusions.
It is clear that this archetypal Rationalism describes different branches of economics to a greater
or lesser extent. Viewed in its totality, post war macro economics represents robust empiricism,
with different theories being accepted and rejected (eventually) by their capacity to explain the
macro economy. Similarly, many of the components of macro theory could be cited as
conforming closely with the Popperian version of Empiricism. Consumer, investment and
18
The belief that Popper advocated the rejection of whole theories because individual hypotheses had been falsified is
historically incorrect. Popper explicitly rejected this view and explicitly argued that ‘it always remains possible … that we shall make
mistakes in our relative appraisal of two theories … this point can hardly be over-emphasised … our preferences between theories
need not change … if we eventually refute the better of the two theories. Newton’s dynamics, for example, even though we may
regard it as refuted, has of course, maintained its superiority over Kepler’s and Galileo’s theories’ (Popper 1974c, pp 235-236). More
generally, Popper discussed the conditions under which a paradigm – or what he called a ‘metaphysical research program’ – could be
considered to be expanding or contracting, and there was an explicit acceptance – implicit in the quotation above – that one theory –
paradigm – would only be fully displaced when a superior theory – paradigm – was proposed. As noted in footnote 11, Lakatos
purloined this element of Popperianism. Similarly, the view that Popper’s methodology is flawed because individual facts may never
be proven but are themselves selectively sought and subjectively accepted is, once again, a misunderstanding of Popperianism. As
noted in footnote 7, Popper acknowledged in his seminal work that it is always possible to evade falsification. Similarly, and extending
the view of Immanuel Kant that our mind imposes patterns on the world rather than vice versa, Popper insisted that all observation is
‘value laden’. Immediately following his first statement of the falsification principle Popper argues that while the evasion of
falsification is possible for the reasons discussed above, ‘I am going to propose that the empirical method shall be characterised as a
method that excludes precisely those ways of evading falsification which, as my imaginary critic rightly insists, are logically possible
… its aim is not to save the lives of untenable systems but, on the contrary, to select the one which is by comparison the fittest by
exposing them all to the fiercest struggle for survival’ (Popper 1972, p42). The common error of interpretation arises from the failure to
distinguish between the methodological exhortation described here and the logical analysis which justifies the exhortation. It is when
we have genuinely tried but failed to falsify a theory that our confidence in it is increased.
Other areas of economic theory have been less fortunate as the subject matter does not force a
confrontation between prediction and events that are uncertain at the time of the prediction.
Numerous economic theories – human capital, household production, the theory of the firm –
commence with a set of conventional axioms and produce predictions that are so general that
falsification is impossible. These theories, however, do not truly exemplify Rationalism as they at
least commence with well defined empirical problems and end with predictions which, in
principle, may be the subject of empirical investigation. Over time most positive theories of
economics have evolved to a greater or lesser extent but often because of the emergence of an
alternative theory that is analytically more elegant and only indirectly in recognition of its empirical
shortcomings. At worst, these theories may be criticised as representing ‘weak empiricism’ or
even Rationalist in spirit, as the greatest intellectual effort is devoted to the axioms and the
analytical structure.
But the conclusion above – that the analysis is ‘Rationalist in spirit’ – is precisely the symptom of
the malaise that motivated the present paper. It is not simply the case that economists find great
difficulty with the production of empirically novel conclusions which lend themselves to strong
empirical testing. Rather, the thesis here is that economists in their pursuit of their ‘normal
science’ have increasingly abandoned this gold standard and rationalised the adoption of
methods and conventions which have driven the subject matter in the direction of the Rationalist
paradigm. The asphyxiating effect of this upon the scope of orthodox enquiry is the theme of the
next section.
At the other end of the methodological spectrum from Macro Economics is General Equilibrium
theory which has held the leaders of the economics profession in thrall for more than a quarter of
a century. As documented by Blaug (1992) the theory is yet to be formulated in a way which
connects it with any form of empirical comparison. It is, arguably, closer to the spirit and
epistemological status of the Platonic Forms than any surviving theory which purports to connect
with the physical world. Truths are analytic and no clear criteria for judging the empirical purpose
and success of the theory have been offered. As reported by Blaug (1992), Arrow suggests that
when you find a new concept or theory, the question is, ‘Does it illuminate your perception? Do
you feel you understand what is going on in every day life? Of course, whether it fits empirical
and other tests is also important’ (p169). As there is no empirical component in GE theory, the
remaining and self-evidently unsatisfactory basis for believing it has merit is that it may ‘illuminate
perception’ or make people feel they understand more: criteria which would justify mysticism and
Examples of essentialism in economics are not hard to find. Blaug (1992) devotes a chapter to a
discussion of the different answers that have been given to the question, ‘What is rationality?’.
With respect to the question, ‘What is utility?’, Allais (1984) notes that ‘some, including myself,
even believe that it (cardinal utility) can be defined … by reference to the intensity of preferences
… others deny it any existence. Still others hesitate to state a categorical judgement; this is
apparently the case of Arrow who answered, “I am not sure – maybe it exists”, to one of my
questions’ (Allais 1984, p6, emphasis added). In the same article, and commenting on the 1983
Oslo Conference on Utility and Risk Theory, Allais notes that ‘Most of the conflicts seem to
have arisen from the use of the same word to designate entirely different concepts – words such
as “probability”, “random variables”, “chance”, “utility”, “rationality”’ (Allais 1984, p28).
As noted above, Nineteenth Century economists commonly regarded economics as dealing with
‘tendency laws’, that is, with the statement of likely outcomes or outcomes towards which the
economy might gravitate if certain conditions are fulfilled. More succinctly, students are
commonly taught that economics deals with questions of the form ‘If … then’. If this correctly
and fully represents the epistemological status of economics then the subject has degenerated
into the statement of analytical truths. To the extent that empirical relevance is claimed as a
result of axioms, it has degenerated into Rationalism. To the extent that it wishes to retain the
status as an empirical science, it must go beyond the justification of conclusions in terms of
tendencies or possibilities.
The problematical status of economics was vividly illustrated by an interchange between Kenneth
Arrow and a group of physicists at the Santa Fe Institute. This is an organisation, founded in the
mid 1980s, which has drawn together eminent scholars from a number of disciplines in an
attempt to ‘forge the first rigorous alternative to the kind of linear, reductionist thinking that has
dominated science since the time of Newton’. (Waldrop 1994, p13). As part of the process of
cross-disciplinary education, representatives of the different subjects outlined recent
developments in their field. The response to the economist’s contribution is described as follows:
‘As the axioms and theorems and proofs marched across the overhead projection screen
the physicists could only be awe struck at their counterparts’ mathematical prowess – awe
struck and appalled … the physicists had no objections to the mathematics itself, of
course. Physics is far and away the most mathematised science in existence. But what
most of the economists did not know – and were startled to find out – was that the
physicists are comparatively casual about their maths. They use a little rigorous thinking,
a little intuition, a little back of the envelope calculation – so their style is really quite
different,’ says Arrow, who remembers being pretty surprised himself. And the reason is
that physical scientists are obsessive about founding their assumptions and their theories
on empirical fact … the general tendency is that you make a calculation and then find
some experimental data to test it. So the lack of rigour is not so serious. The errors will be
detected anyway … (Arrow noted that) ‘we don’t have data of that quality in economics’.
It is here that Arrow’s view appears to be in stark contrast with Aristotle who recognised that
analytical techniques cannot correct incorrect axioms or increase the truth content of theories. As
a minimum, Arrow appears to be making an exaggerated claim for the role of mathematics that
contrasts with the warning given by Paul Samuelson19 about this tendency. The response of the
physicists to Arrow’s defence and the apparent neglect of empiricism in economics is, however,
even more revealing.
Prima facie, it might be expected that the difficulty encountered with empirical testing would result
in an increased, and not a decreased, emphasis upon empiricism as economists attempted to
overcome this deficiency. By contrast, as reported in Bridges and Haywood (1999), a survey of the most
influential journals indicated that almost twice as many articles in the economics literature are
purely theoretical as in the journals of physics. The significance of this and the reaction by
economists noted at the Santa Fe Institute is that it supports the contention that economic theory
is not simply struggling with the difficulties of empiricism but is increasingly retreating into a
different epistemological paradigm in which the attitudes, goals, interests and rewards
systematically differ from those of Modern Empiricism.
19 In 1972 Samuelson noted that ‘In connection with slavery, Thomas Jefferson has said that, when he considered that there is a just God in Heaven, he trembled for his country. Well, in connection with the exaggerated claims that used to be made in economics for the power of deduction and a priori reasoning … – I tremble for the reputation of my subject. Fortunately, we have left that behind us.’ The content of most graduate textbooks of economic theory would not support this final conclusion (cited by Blaug 1992, pp 81-82).
Perhaps the most overt statement of Rationalist methodology is the argument by Gafni and Birch
(1995) that, in their analysis of the current approaches to measuring outcome in the economic
literature, they will ‘consider the extent to which these approaches are consistent with the
principles underlying the economics discipline and hence are suitable as measurement methods
for use in economic evaluation’ (p768). Along with others, the authors redefine the original
purpose of the Expected Utility theory so that the axioms may be preserved despite empirical
contradiction (see Richardson 1994). In the authors’ words, ‘we might find that individual
behaviour violates all or most of the underlying axioms of a theory and yet we would like to use
these theories based on the criterion of normative appeal. The important point is that when a
theory is chosen based on the normative criterion alone, one does not have to establish validity
using classical psychometric methods’ (p769 italics added).
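The kind of empirical violation at issue can be illustrated with Allais’ well-known lottery pairs (the figures below are the standard textbook example, not taken from this paper). A minimal sketch shows that no assignment of utilities to the outcomes can rationalise the commonly observed pattern of choices:

```python
import random

# Allais' classic lottery pairs (outcomes: $0, $1m, $5m, coded 0, 1, 2).
# Pair 1: A = $1m for certain;        B = (0.10, $5m; 0.89, $1m; 0.01, $0)
# Pair 2: C = (0.11, $1m; 0.89, $0);  D = (0.10, $5m; 0.90, $0)
# Most respondents choose A over B but D over C.

def eu(lottery, u):
    """Expected utility of a lottery given utilities u[0] <= u[1] <= u[2]."""
    return sum(p * u[outcome] for p, outcome in lottery)

A = [(1.00, 1)]
B = [(0.10, 2), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 2), (0.90, 0)]

# For ANY utility assignment, EU(A) - EU(B) equals EU(C) - EU(D); both
# differences reduce to 0.11*u[1] - 0.10*u[2] - 0.01*u[0].  Hence no
# expected-utility maximiser can strictly prefer both A and D.
for _ in range(1000):
    u = sorted(random.random() for _ in range(3))  # any increasing utilities
    assert abs((eu(A, u) - eu(B, u)) - (eu(C, u) - eu(D, u))) < 1e-12
```

The point of the sketch is that the observed choice pattern is not a borderline failure of one parameterisation but a violation of the independence axiom itself, which is precisely the sort of finding the normative defence quoted above sets aside.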
The Rationalist legacy in cost utility analysis is further explored below (albeit rather briefly in this
working paper). It is argued that in each of ten examples there is significant prima facie empirical
evidence that issues and objectives exist which are almost totally ignored in the literature. It is
suggested that the reason for this is that in each case, the hypothesis conflicts with one or more
of the conventional axioms of orthodox economics; that is, it is suggested that the presupposition
of the orthodox assumptions has inhibited the exploration of these issues.
In the context of utility theory and economic evaluation there are a small number of fundamental
axioms which define orthodoxy and a larger number of conventional assumptions/axioms which
have been adopted either as a conventional interpretation of the more fundamental axioms or as
a means for expanding the predictive power of the model. First, and fundamentally, it is assumed
that individuals are ‘rational’. Narrowly interpreted, this implies consistency in preference
ordering which, narrowly defined, is equivalent to utility maximisation. More broadly, ‘rationality’
has been taken to imply the adequacy of individuals’ computational skills, the availability of
sufficient information for decision making and the irrelevance of context. It is assumed that the
individual’s objective is the maximisation of utility, defined by the strength of preference. More
narrowly, it is the utility of consequences that has been assumed and the utility of processes
disregarded.
20 There is, in fact, a large class of QALY-like measures, each a respectable candidate for use and consideration as a ‘gold standard’.
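The narrow definition of rationality noted above, consistency in preference ordering, can be sketched as a simple acyclicity test: a set of stated strict preferences is representable by a utility function only if it contains no cycles. A minimal illustration, using hypothetical choice data:

```python
# Consistency check: can a set of stated pairwise preferences be represented
# by a utility function?  Representability requires, at a minimum, that the
# strict preference relation contain no cycles (transitivity).

def is_consistent(preferences):
    """preferences: list of (a, b) pairs meaning 'a is strictly preferred to b'.
    Returns True iff the relation is acyclic, i.e. admits a utility ordering."""
    better_than = {}
    for a, b in preferences:
        better_than.setdefault(a, set()).add(b)

    def reaches(start, target, seen):
        # Depth-first search for a preference chain from start back to target.
        for nxt in better_than.get(start, ()):
            if nxt == target or (nxt not in seen and reaches(nxt, target, seen | {nxt})):
                return True
        return False

    return not any(reaches(a, a, {a}) for a in better_than)

# x > y > z is consistent: a utility function exists (e.g. u(x)=3, u(y)=2, u(z)=1).
assert is_consistent([("x", "y"), ("y", "z"), ("x", "z")])
# Cyclic choices x > y, y > z, z > x cannot be represented by any utility function.
assert not is_consistent([("x", "y"), ("y", "z"), ("z", "x")])
```

Note that acyclicity is only a necessary condition; the broader interpretations of rationality listed above (computational adequacy, full information, context independence) go well beyond anything such a check can capture.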
The ten issues discussed below are loosely and imperfectly grouped under four headings, viz: (i)
the conflict between private and social preferences, that is, the preferences of the selfish and
civic minded individual; (ii) the willingness of individuals to interfere with, and override, the
preferences of others, (ie paternalism); (iii) the effect of context upon decision making; and (iv)
the importance of non-utility objectives.
1 Severity: In conventional CUA the initial health state is only of importance because health
improvement equals the quality of life of the final health state minus that of the initial state.
When directly questioned, and informed that the individual patients concerned find two
health improvements to be of identical benefit, individuals express a strong preference for
allocating resources to the illness where the initial health state is most severe. This result
has been independently derived in Norway, Spain, the USA and Australia (Nord Ubel &
Scanlon 1996; Nord 1993; Nord et al 1999; Menzel 1999). As the initial health state is not
a consequence, the result conflicts with consequentialism. As individual preferences are
not sovereign, it conflicts with individualism; that is, the priority accorded to patients with a
severe condition does not arise from the magnitude of the utility gain as perceived by the
individuals affected, but from a social judgement about the distribution of such benefits.
B: Overriding Preferences
2 Direct Cost: Individuals surveyed in Australia decisively rejected the proposition that
services should be provided on the basis of least cost, as this implies ‘discrimination
against those with high cost diseases’. Respondents persisted with this view when
interviewers were instructed to argue with them and when they were asked to allocate
their own budget between two groups of ill patients. As discussed by Nord and
Richardson (1995a), this result is not necessarily as perverse as it first appears, as it is
based upon the stated view that it is unfair to discriminate against patients who are
unfortunate enough to have high cost illnesses. Rather, it is felt that some compensation
should be given to offset this disadvantage. This finding strikes at the heart of the
economic paradigm (Nord & Richardson 1995a). In particular, it is likely to conflict with
the assumption of potential compensation. A likely explanation of the first result is that
respondents implicitly recognise that compensating individuals for illness is difficult and
3 Indirect Benefits: One of the benefits of a health program arises from the early return to
work of a patient or their retention in the workforce when they may otherwise have died. It
has generally been accepted that these benefits should be included in the comparison of
overall costs and benefits of the program. However, when directly surveyed, individuals
reject the inclusion of such indirect benefits when these necessarily favour the wealthy.
Olsen and Richardson (1999) have suggested that such production gains should be
included in a new category of ‘socially irrelevant benefits’; a class that would also include
the net benefits of crime and drug taking. These conclusions conflict with the assumptions
of consequentialism and potential compensation for the reasons given above.
C: Context
5 Double jeopardy: Context free rationality implies that if a cure returns a person to a pre-
existing chronic health state such as quadriplegia, then saving such a person’s life should
have lower priority than saving the life of a person who will return to normal health (McKie
et al 1998; Nord 1993). However this conclusion is almost universally rejected as such
discrimination against particular individuals is believed to violate principles of social justice
(Olsen, Richardson & Mortimer 1998). In effect, it is believed that individual health gains
should be weighted to overcome any such context specific discrimination. It is of interest
to note that failure to acknowledge this in the theory of utility measurement was a major
contributory factor to the rejection, in court, of the methodology of the Oregon Experiment
for prioritising services in the Medicaid program for the poor and medically indigent. It
was argued that the QALY procedure discriminated against the disabled.
6 Rule of Rescue: In the context of the Oregon experiment Hadorn (1991) argued that
rationing which may be acceptable in the abstract, or in at least some contexts, would not be
acceptable in the context of an emergency. The view appears to receive general support
although it again conflicts with the axiom of context free rationality (Pinto & Lopez 1998;
Ubel et al 1996).
9 Richardson & Nord (1997) found that there was a significant difference in people’s
preferences when they viewed benefits from their personal perspective and from the
perspective of a citizen judging social benefits. If a social perspective is deemed
appropriate in social decision making then the outcome would conflict with the prescriptions
of libertarian welfarism.
10 Explicit Paternalism: When asked whether community decision makers should allocate
resources in accordance with the preferences of well informed citizens or in a way which
maximised lives saved, an overwhelming majority of respondents nominated the second
option (Olsen & Richardson 1999). The result is an explicit repudiation of welfarism.
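The conventional arithmetic described in issue 1, and the kind of severity adjustment respondents appear to favour, can be sketched as follows. The weighting function is purely hypothetical, for illustration only; the empirical studies cited above elicit such weights rather than assume them:

```python
# Conventional CUA values a treatment by its QALY gain alone:
#   gain = quality_of_life(final state) - quality_of_life(initial state)
# The severity findings suggest weighting gains by how bad the initial state is.

def qaly_gain(q_initial, q_final):
    """Conventional (unweighted) quality-of-life gain."""
    return q_final - q_initial

def severity_weight(q_initial):
    # Hypothetical weighting: the worse the starting state (0 = death,
    # 1 = full health), the larger the weight on a unit of health gain.
    return 1.0 + (1.0 - q_initial)

# Two programs with identical conventional gains (0.25 QALYs each)...
mild = qaly_gain(0.75, 1.0)     # moderately ill patients restored to full health
severe = qaly_gain(0.25, 0.5)   # severely ill patients, same-sized improvement
assert mild == severe           # conventional CUA ranks the programs equally

# ...but severity-weighted gains favour the worse-off group, matching the
# stated preferences reported in the surveys.
assert severity_weight(0.25) * severe > severity_weight(0.75) * mild
```

The sketch makes the structural point of issues 1 and 5 concrete: once any weight depends on the initial state, the evaluation is no longer a function of consequences alone.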
Two possible reactions to these issues and empirical findings are, first, that people’s stated
preferences are sensitive to framing effects and when respondents to a survey have insufficient
time to deliberate on unusual issues, these framing effects may dominate the results; and,
secondly, that respondents may simply be wrong; that they have supported options and outcomes
that would reduce their wellbeing.
These criticisms may have some validity. However, the point of the examples above is to show
that there are a series of hypotheses about socially desired outcomes that have prima facie
legitimacy – they are consistent with accepted principles of social justice – but which have not
even been discussed in the literature which has been overwhelmingly concerned with a narrowly
defined range of ‘orthodox’ issues. It is true that stated preferences are often elicited without the
possibility of lengthy deliberations; but this is true for almost all stated preference studies and not
simply those concerned with unorthodox issues and outcomes. It is also true that individuals may
be ‘wrong’ in their choice of options in the sense that they will not lead to optimal outcomes. This
is, of course, definitely true if ‘optimality’ is defined conventionally. It may also be true if optimality
is defined in some other way. But a prerequisite to this conclusion is the determination of what
this other way entails – which other unorthodox social values define ‘optimality’ – and it is the
failure to adequately and creatively explore this question that is highlighted by the studies
summarised above.
These issues by no means exhaust those which need further exploration. There is a large class
of distributional issues still requiring quantification (Nord, Richardson et al 1995b; Williams 1997).
The only one of these which has been researched to any extent is the importance of age weights
(Tsuchiya 1999).
21 Any view expressed by the public could be inserted into a utility function in one form or other and, in this way, empirical observations of any sort could be rationalised ex post in terms of orthodox welfarism. As there is no possible state of the world for which this could not be done, in Popperian terms such an analytical approach would have no information value; it would simply create a set of tautologies. More importantly, it would devalue the capacity of language to discriminate between useful concepts. Deontological theories of ethical behaviour – which, in normal language, are distinguished from consequentialist theories – would collapse into a sub-set of welfarism. There would now simply be a preference for deontological rules. For example, the injunction that we should obey the laws of God would be translated into a preference for the laws of God. The motivation of Adolf Hitler and Mother Teresa would be indistinguishable as both simply maximised utility. In sum, the 2,500 year study of social justice and ethics would be defined as a fraud. There could be no motivation apart from utility maximisation. To avoid this conclusion it would be necessary to examine the reasons why particular preferences ‘maximised utility’. Some reasons could be deemed praiseworthy and others not. But this would simply reinstate the present theories of social justice, recast in the language of welfarism. The increased verbiage would contribute nothing but confusion.
A possibly irrelevant – though interesting – question is why this situation has arisen – why
economists have persisted with what they know to be flawed axioms and inadequate empirical
methods. At least four answers might be given. The first hypothesis, suggested by Evans (1998)
is ‘that skilful manipulation of an intellectual framework shared by peer reviewers and editors
leads to prestigious publications, reputation, research grants, academic promotions, etc’ (p492).
The theory is at least consistent with the neo classical axiom of rational self interest! Secondly,
Alford (1993) argues that characteristics of the paradigm may reflect the personal tastes of those
who have dominated it in the past. More specifically, Alford argues that males generally have a
preference for abstract manipulative activities whereas females prefer contextualisation and
knowledge which is embedded in a broader societal context. This, Alford argues, explains the
disproportionately small number of women economists. As a testable hypothesis, the theory
would suggest that the relatively more contextualised sub-discipline of health economics should
attract a comparatively high proportion of women economists.
More specifically, it has been argued that, rather than produce new and novel insights, adherence
to orthodoxy has placed intellectual blinkers upon economics and, as illustrated in the context of
health economics and cost utility analysis, this has reduced the scope of the hypotheses which
have been investigated. In stark contrast to the empowering Popperian exhortation to approach
problem solving and empirical anomalies with bold conjecture, economic theory has typically
practised timid incrementalism. While Popperianism may or may not be viewed as the current
gold standard for empiricism, the contrast between it and the practice of economic orthodoxy
clearly demonstrates that there is, indeed, an orthodoxy with idiosyncratic characteristics which
are seldom questioned or even recognised. The fact that it is a highly contestable orthodoxy is
not generally suggested to students and this, of course, helps perpetuate the orthodoxy.
Most fundamentally, it has been argued that the exercise of this orthodoxy in certain branches of
economics has resulted in the de facto embrace of epistemological Rationalism and it has been
suggested that there are both historical and methodological reasons for believing that this will
have a seriously deleterious effect upon the development of these fields. It has also been
suggested that the endorsement of the desirability of interconnecting all theory through the
adoption of common axioms – irrespective of their context specific truth – represents an intrusion
of Rationalism into otherwise empirically robust sub-disciplines of economics. In addition to the
damage to prediction arising from the use of defective and impoverished axioms, the opportunity
cost of the intellectual effort devoted to the analytical task by theoretical economists and by
students labouring to comprehend the theory is immense. A full reckoning of the opportunity cost
would have to take account of the effect of policy prescriptions based upon theory which has
been systematically moulded to conform with orthodoxy. While it is clearly impossible to estimate
the full effect of this, the orthodox assumptions summarised in Section 5 are clearly linked to
policy prescriptions which promote or impose market competition and the growth of GDP
irrespective of market characteristics and the effect of GDP upon subjective wellbeing22. They
favour outcome measures in the health sector that are based upon the maximisation of choice for
the individual rather than health per se, severity, or systemic objectives as discussed by Mooney
22 It appears likely that the historical, epistemological Rationalism discussed here is the intellectual progenitor of the uncritical faith in market forces coincidentally termed ‘economic rationalism’.
In the famous concluding paragraph of his General Theory, Keynes argued that: ‘The ideas of
economists and political philosophers, both when they are right and when they are wrong, are
more powerful than is commonly understood … I am sure that the power of vested interests is
exaggerated compared with the gradual encroachment of ideas …(for) … soon or late it is ideas,
not vested interests, which are dangerous for good or evil.’ (Keynes 1936, pp 383-384). To date,
the ideas of Rationalism and its associated orthodoxy in economics have remained in something
like an equilibrium with the ideas and methods of pragmatic and robust Empiricism. There are
clear indications, however, that the ideas and beliefs associated with the former paradigm have
been gaining ground – especially amongst more recently trained graduates – and that more
economists are accepting the Rationalist methodology as the backbone of economic theory. This
suggests that policies may increasingly be defended because they are ‘implied’ or ‘consistent’
with ‘economic theory’ and economics itself may become defined, not by the solving of a
particular class of problems, but by the acceptance of the orthodoxy. If this trend continues then
Popper’s concern and warning may be relevant when he argues that ‘we may soon move into a
period where Kuhn’s criterion of science – the community of workers held together by a routine –
becomes accepted in practice. If so, this will be the end of science as I see it’ (Popper 1976,
pp 1146-47). The conclusion of this paper is that economics may face a similar threat.
Alford K 1995, ‘What’s a nice girl like you doing in a place like this: Gender and
economics’, Paper for Department of Economic History, Faculty of Economic
and Commerce, The University of Melbourne.
Allais M 1984, ‘The foundations of the theory of utility and risk: Some central points
of the discussion at the Oslo Conference’, in Progress in Utility and Risk
Theory, (eds) O Hagen & F Wenstop, D Reidel, Dordrecht.
Barnes J 1996, Aristotle, Oxford University Press.
Bartley WW 1976, ‘On Imre Lakatos’, in Essays in Memory of Imre Lakatos,
(ed) RS Cohen, pp 37-38.
Berridge KC 1996, ‘Food reward: Brain substrates of wanting and liking’,
Neuroscience and Biobehavioral Reviews, vol 20, no 1, pp 1-25.
Blaug M 1992, The Methodology of Economics: Or How Economists Explain, Second
Edition, University of Cambridge Press Syndicate, New York.
Bridges J & Haywood P 1999, ‘The evolution of methodology in health economics’,
paper presented at the 21st Australian Conference of Health Economists,
Canberra July 1999.
Brickman P & Coates D 1978, ‘Lottery winners and accident victims: Is happiness
relative?’, Journal of Personality and Social Psychology, vol 36, no 8,
pp 917-927.
Culyer AJ & Wagstaff A 1993, ‘Equity and equality in health and health care’, Journal
of Health Economics, vol 12, pp 431-457.
Diener E & Suh E 1999, ‘Measuring quality of life: economic, social and subjective
indicators’, Social Indicators Research, vol 40, pp 189-216.
Diener E, Suh M, Lucas R & Smith H 1999, ‘Subjective wellbeing: Three decades of
progress’, Psychological Bulletin, (in press).
Evans RG 1998, ‘Towards a healthier economics: Reflections on Ken Bassett’s
problems’, in Health Care and Health Economics: Perspectives on
Distribution, (eds) M Barer, T Getzen & G Stoddart, John Wiley.